Merge remote-tracking branch 'upstream/develop' into refactor_integrator

# Conflicts:
#	openpype/plugins/publish/integrate_new.py
Roy Nieterau 2022-04-14 12:33:07 +02:00
commit 971dfffa7b
101 changed files with 3135 additions and 1554 deletions


@ -1,8 +1,27 @@
# Changelog
## [3.9.3-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.9.4-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.2...HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.3...HEAD)
### 📖 Documentation
- Documentation: Python requirements to 3.7.9 [\#3035](https://github.com/pypeclub/OpenPype/pull/3035)
- Website Docs: Remove unused pages [\#2974](https://github.com/pypeclub/OpenPype/pull/2974)
**🚀 Enhancements**
- Resolve environment variable in google drive credential path [\#3008](https://github.com/pypeclub/OpenPype/pull/3008)
**🐛 Bug fixes**
- Ftrack: Integrate ftrack api fix [\#3044](https://github.com/pypeclub/OpenPype/pull/3044)
- Webpublisher - removed wrong hardcoded family [\#3043](https://github.com/pypeclub/OpenPype/pull/3043)
- Unreal: Creator import fixes [\#3040](https://github.com/pypeclub/OpenPype/pull/3040)
## [3.9.3](https://github.com/pypeclub/OpenPype/tree/3.9.3) (2022-04-07)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.9.3-nightly.2...3.9.3)
### 📖 Documentation
@ -10,23 +29,38 @@
**🆕 New features**
- Ftrack: Add description integrator [\#3027](https://github.com/pypeclub/OpenPype/pull/3027)
- Publishing textures for Unreal [\#2988](https://github.com/pypeclub/OpenPype/pull/2988)
- Maya to Unreal \> Static and Skeletal Meshes [\#2978](https://github.com/pypeclub/OpenPype/pull/2978)
- Maya to Unreal: Static and Skeletal Meshes [\#2978](https://github.com/pypeclub/OpenPype/pull/2978)
**🚀 Enhancements**
- Ftrack: Add more options for note text of integrate ftrack note [\#3025](https://github.com/pypeclub/OpenPype/pull/3025)
- Console Interpreter: Changed how console splitter size are reused on show [\#3016](https://github.com/pypeclub/OpenPype/pull/3016)
- Deadline: Use more suitable name for sequence review logic [\#3015](https://github.com/pypeclub/OpenPype/pull/3015)
- Nuke: add concurrency attr to deadline job [\#3005](https://github.com/pypeclub/OpenPype/pull/3005)
- Deadline: priority configurable in Maya jobs [\#2995](https://github.com/pypeclub/OpenPype/pull/2995)
- Workfiles tool: Save as published workfiles [\#2937](https://github.com/pypeclub/OpenPype/pull/2937)
**🐛 Bug fixes**
- Deadline: Fixed default value of use sequence for review [\#3033](https://github.com/pypeclub/OpenPype/pull/3033)
- Settings UI: Version column can be extended so version are visible [\#3032](https://github.com/pypeclub/OpenPype/pull/3032)
- General: Fix import after movements [\#3028](https://github.com/pypeclub/OpenPype/pull/3028)
- Harmony: Added creating subset name for workfile from template [\#3024](https://github.com/pypeclub/OpenPype/pull/3024)
- AfterEffects: Added creating subset name for workfile from template [\#3023](https://github.com/pypeclub/OpenPype/pull/3023)
- General: Add example addons to ignored [\#3022](https://github.com/pypeclub/OpenPype/pull/3022)
- SiteSync: fix transitive alternate sites, fix dropdown in Local Settings [\#3018](https://github.com/pypeclub/OpenPype/pull/3018)
- Maya: Remove missing import [\#3017](https://github.com/pypeclub/OpenPype/pull/3017)
- Ftrack: multiple reviewable componets [\#3012](https://github.com/pypeclub/OpenPype/pull/3012)
- Tray publisher: Fixes after code movement [\#3010](https://github.com/pypeclub/OpenPype/pull/3010)
- Nuke: fixing unicode type detection in effect loaders [\#3002](https://github.com/pypeclub/OpenPype/pull/3002)
- Fix - remove doubled dot in workfile created from template [\#2998](https://github.com/pypeclub/OpenPype/pull/2998)
- Nuke: removing redundant Ftrack asset when farm publishing [\#2996](https://github.com/pypeclub/OpenPype/pull/2996)
**Merged pull requests:**
- Maya: Allow to select invalid camera contents if no cameras found [\#3030](https://github.com/pypeclub/OpenPype/pull/3030)
- General: adding limitations for pyright [\#2994](https://github.com/pypeclub/OpenPype/pull/2994)
## [3.9.2](https://github.com/pypeclub/OpenPype/tree/3.9.2) (2022-04-04)
@ -37,7 +71,6 @@
- Documentation: Added mention of adding My Drive as a root [\#2999](https://github.com/pypeclub/OpenPype/pull/2999)
- Docs: Added MongoDB requirements [\#2951](https://github.com/pypeclub/OpenPype/pull/2951)
- Documentation: New publisher develop docs [\#2896](https://github.com/pypeclub/OpenPype/pull/2896)
**🆕 New features**
@ -57,13 +90,11 @@
- TVPaint: Extractor to convert PNG into EXR [\#2942](https://github.com/pypeclub/OpenPype/pull/2942)
- Workfiles: Open published workfiles [\#2925](https://github.com/pypeclub/OpenPype/pull/2925)
- General: Default modules loaded dynamically [\#2923](https://github.com/pypeclub/OpenPype/pull/2923)
- Nuke: Add no-audio Tag [\#2911](https://github.com/pypeclub/OpenPype/pull/2911)
- Flame: support for comment with xml attribute overrides [\#2892](https://github.com/pypeclub/OpenPype/pull/2892)
- Nuke: improving readability [\#2903](https://github.com/pypeclub/OpenPype/pull/2903)
**🐛 Bug fixes**
- Hosts: Remove path existence checks in 'add\_implementation\_envs' [\#3004](https://github.com/pypeclub/OpenPype/pull/3004)
- Fix - remove doubled dot in workfile created from template [\#2998](https://github.com/pypeclub/OpenPype/pull/2998)
- PS: fix renaming subset incorrectly in PS [\#2991](https://github.com/pypeclub/OpenPype/pull/2991)
- Fix: Disable setuptools auto discovery [\#2990](https://github.com/pypeclub/OpenPype/pull/2990)
- AEL: fix opening existing workfile if no scene opened [\#2989](https://github.com/pypeclub/OpenPype/pull/2989)
@ -92,7 +123,6 @@
- General: Move Attribute Definitions from pipeline [\#2931](https://github.com/pypeclub/OpenPype/pull/2931)
- General: Removed silo references and terminal splash [\#2927](https://github.com/pypeclub/OpenPype/pull/2927)
- General: Move pipeline constants to OpenPype [\#2918](https://github.com/pypeclub/OpenPype/pull/2918)
- General: Move formatting and workfile functions [\#2914](https://github.com/pypeclub/OpenPype/pull/2914)
- General: Move remaining plugins from avalon [\#2912](https://github.com/pypeclub/OpenPype/pull/2912)
**Merged pull requests:**
@ -108,52 +138,18 @@
**🚀 Enhancements**
- Nuke: Add no-audio Tag [\#2911](https://github.com/pypeclub/OpenPype/pull/2911)
- General: Change how OPENPYPE\_DEBUG value is handled [\#2907](https://github.com/pypeclub/OpenPype/pull/2907)
- Nuke: improving readability [\#2903](https://github.com/pypeclub/OpenPype/pull/2903)
- nuke: imageio adding ocio config version 1.2 [\#2897](https://github.com/pypeclub/OpenPype/pull/2897)
- Nuke: ExtractReviewSlate can handle more codes and profiles [\#2879](https://github.com/pypeclub/OpenPype/pull/2879)
- Flame: sequence used for reference video [\#2869](https://github.com/pypeclub/OpenPype/pull/2869)
**🐛 Bug fixes**
- General: Fix use of Anatomy roots [\#2904](https://github.com/pypeclub/OpenPype/pull/2904)
- Fixing gap detection in extract review [\#2902](https://github.com/pypeclub/OpenPype/pull/2902)
- Pyblish Pype - ensure current state is correct when entering new group order [\#2899](https://github.com/pypeclub/OpenPype/pull/2899)
- SceneInventory: Fix import of load function [\#2894](https://github.com/pypeclub/OpenPype/pull/2894)
- Harmony - fixed creator issue [\#2891](https://github.com/pypeclub/OpenPype/pull/2891)
- General: Remove forgotten use of avalon Creator [\#2885](https://github.com/pypeclub/OpenPype/pull/2885)
- General: Avoid circular import [\#2884](https://github.com/pypeclub/OpenPype/pull/2884)
- Fixes for attaching loaded containers \(\#2837\) [\#2874](https://github.com/pypeclub/OpenPype/pull/2874)
**🔀 Refactored code**
- General: Reduce style usage to OpenPype repository [\#2889](https://github.com/pypeclub/OpenPype/pull/2889)
- General: Move loader logic from avalon to openpype [\#2886](https://github.com/pypeclub/OpenPype/pull/2886)
## [3.9.0](https://github.com/pypeclub/OpenPype/tree/3.9.0) (2022-03-14)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.9.0-nightly.9...3.9.0)
### 📖 Documentation
- Documentation: Change Photoshop & AfterEffects plugin path [\#2878](https://github.com/pypeclub/OpenPype/pull/2878)
**🚀 Enhancements**
- General: Subset name filtering in ExtractReview outpus [\#2872](https://github.com/pypeclub/OpenPype/pull/2872)
- NewPublisher: Descriptions and Icons in creator dialog [\#2867](https://github.com/pypeclub/OpenPype/pull/2867)
**🐛 Bug fixes**
- General: Missing time function [\#2877](https://github.com/pypeclub/OpenPype/pull/2877)
- Deadline: Fix plugin name for tile assemble [\#2868](https://github.com/pypeclub/OpenPype/pull/2868)
- Nuke: gizmo precollect fix [\#2866](https://github.com/pypeclub/OpenPype/pull/2866)
- General: Fix hardlink for windows [\#2864](https://github.com/pypeclub/OpenPype/pull/2864)
**🔀 Refactored code**
- Refactor: move webserver tool to openpype [\#2876](https://github.com/pypeclub/OpenPype/pull/2876)
## [3.8.2](https://github.com/pypeclub/OpenPype/tree/3.8.2) (2022-02-07)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.8.2-nightly.3...3.8.2)


@ -25,7 +25,7 @@ class AERenderInstance(RenderInstance):
class CollectAERender(abstract_collect_render.AbstractCollectRender):
order = pyblish.api.CollectorOrder + 0.498
order = pyblish.api.CollectorOrder + 0.400
label = "Collect After Effects Render Layers"
hosts = ["aftereffects"]


@ -11,10 +11,8 @@ from .constants import (
from .lib import (
CTX,
FlameAppFramework,
get_project_manager,
get_current_project,
get_current_sequence,
create_bin,
create_segment_data_marker,
get_segment_data_marker,
set_segment_data_marker,
@ -29,7 +27,10 @@ from .lib import (
get_frame_from_filename,
get_padding_from_filename,
maintained_object_duplication,
get_clip_segment
maintained_temp_file_path,
get_clip_segment,
get_batch_group_from_desktop,
MediaInfoFile
)
from .utils import (
setup,
@ -56,7 +57,6 @@ from .plugin import (
PublishableClip,
ClipLoader,
OpenClipSolver
)
from .workio import (
open_file,
@ -71,6 +71,10 @@ from .render_utils import (
get_preset_path_by_xml_name,
modify_preset_file
)
from .batch_utils import (
create_batch_group,
create_batch_group_conent
)
__all__ = [
# constants
@ -83,10 +87,8 @@ __all__ = [
# lib
"CTX",
"FlameAppFramework",
"get_project_manager",
"get_current_project",
"get_current_sequence",
"create_bin",
"create_segment_data_marker",
"get_segment_data_marker",
"set_segment_data_marker",
@ -101,7 +103,10 @@ __all__ = [
"get_frame_from_filename",
"get_padding_from_filename",
"maintained_object_duplication",
"maintained_temp_file_path",
"get_clip_segment",
"get_batch_group_from_desktop",
"MediaInfoFile",
# pipeline
"install",
@ -142,5 +147,9 @@ __all__ = [
# render utils
"export_clip",
"get_preset_path_by_xml_name",
"modify_preset_file"
"modify_preset_file",
# batch utils
"create_batch_group",
"create_batch_group_conent"
]


@ -0,0 +1,151 @@
import flame
def create_batch_group(
name,
frame_start,
frame_duration,
update_batch_group=None,
**kwargs
):
"""Create Batch Group in active project's Desktop
Args:
name (str): name of batch group to be created
frame_start (int): start frame of batch
frame_duration (int): duration of batch in frames
update_batch_group (PyBatch, optional): batch group to update
Return:
PyBatch: active flame batch group
"""
# make sure some batch obj is present
batch_group = update_batch_group or flame.batch
schematic_reels = kwargs.get("shematic_reels") or ['LoadedReel1']
shelf_reels = kwargs.get("shelf_reels") or ['ShelfReel1']
handle_start = kwargs.get("handleStart") or 0
handle_end = kwargs.get("handleEnd") or 0
frame_start -= handle_start
frame_duration += handle_start + handle_end
if not update_batch_group:
# Create batch group with name, start_frame value, duration value,
# set of schematic reel names, set of shelf reel names
batch_group = batch_group.create_batch_group(
name,
start_frame=frame_start,
duration=frame_duration,
reels=schematic_reels,
shelf_reels=shelf_reels
)
else:
batch_group.name = name
batch_group.start_frame = frame_start
batch_group.duration = frame_duration
# add reels to batch group
_add_reels_to_batch_group(
batch_group, schematic_reels, shelf_reels)
# TODO: also update write node if there is any
# TODO: also update loaders to start from correct frameStart
if kwargs.get("switch_batch_tab"):
# use this command to switch to the batch tab
batch_group.go_to()
return batch_group
def _add_reels_to_batch_group(batch_group, reels, shelf_reels):
# update or create defined reels
# helper variables
reel_names = [
r.name.get_value()
for r in batch_group.reels
]
shelf_reel_names = [
r.name.get_value()
for r in batch_group.shelf_reels
]
# add schematic reels
for _r in reels:
if _r in reel_names:
continue
batch_group.create_reel(_r)
# add shelf reels
for _sr in shelf_reels:
if _sr in shelf_reel_names:
continue
batch_group.create_shelf_reel(_sr)
def create_batch_group_conent(batch_nodes, batch_links, batch_group=None):
"""Creating batch group with links
Args:
batch_nodes (list of dict): each dict is node definition
batch_links (list of dict): each dict is link definition
batch_group (PyBatch, optional): batch group. Defaults to None.
Return:
dict: all batch nodes {name or id: PyNode}
"""
# make sure some batch obj is present
batch_group = batch_group or flame.batch
all_batch_nodes = {
b.name.get_value(): b
for b in batch_group.nodes
}
for node in batch_nodes:
# NOTE: node_props should ideally be an OrderedDict
node_id, node_type, node_props = (
node["id"], node["type"], node["properties"])
# get node name for checking if exists
node_name = node_props.pop("name", None) or node_id
if all_batch_nodes.get(node_name):
# update existing batch node
batch_node = all_batch_nodes[node_name]
else:
# create new batch node
batch_node = batch_group.create_node(node_type)
# set name
batch_node.name.set_value(node_name)
# set attributes found in node props
for key, value in node_props.items():
if not hasattr(batch_node, key):
continue
setattr(batch_node, key, value)
# add created node for possible linking
all_batch_nodes[node_id] = batch_node
# link nodes to each other
for link in batch_links:
_from_n, _to_n = link["from_node"], link["to_node"]
# check if all linking nodes are available
if not all([
all_batch_nodes.get(_from_n["id"]),
all_batch_nodes.get(_to_n["id"])
]):
continue
# link nodes in defined link
batch_group.connect_nodes(
all_batch_nodes[_from_n["id"]], _from_n["connector"],
all_batch_nodes[_to_n["id"]], _to_n["connector"]
)
# sort batch nodes
batch_group.organize()
return all_batch_nodes
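
A minimal usage sketch of the two helpers above, assuming a running Flame session where `openpype.hosts.flame.api` re-exports them; the shot name, frame values and node ids are hypothetical:

import openpype.hosts.flame.api as opfapi

# create a batch group spanning the shot range (handles come from kwargs)
bgroup = opfapi.create_batch_group(
    "sh010_compositing",  # hypothetical batch group name
    1001,                 # frame_start
    48,                   # frame_duration
    handleStart=10,
    handleEnd=10,
)

# hypothetical node and link definitions: a comp node feeding a Write File node
batch_nodes = [
    {"id": "comp_node01", "type": "comp", "properties": {}},
    {"id": "write_node01", "type": "Write File",
     "properties": {"name": "sh010_write"}},
]
batch_links = [
    {
        "from_node": {"id": "comp_node01", "connector": "Result"},
        "to_node": {"id": "write_node01", "connector": "Front"},
    },
]
opfapi.create_batch_group_conent(batch_nodes, batch_links, bgroup)
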


@ -3,7 +3,12 @@ import os
import re
import json
import pickle
import tempfile
import itertools
import contextlib
import xml.etree.cElementTree as cET
from copy import deepcopy
from xml.etree import ElementTree as ET
from pprint import pformat
from .constants import (
MARKER_COLOR,
@ -12,9 +17,10 @@ from .constants import (
COLOR_MAP,
MARKER_PUBLISH_DEFAULT
)
from openpype.api import Logger
log = Logger.get_logger(__name__)
import openpype.api as openpype
log = openpype.Logger.get_logger(__name__)
FRAME_PATTERN = re.compile(r"[\._](\d+)[\.]")
@ -227,16 +233,6 @@ class FlameAppFramework(object):
return True
def get_project_manager():
# TODO: get_project_manager
return
def get_media_storage():
# TODO: get_media_storage
return
def get_current_project():
import flame
return flame.project.current_project
@ -266,11 +262,6 @@ def get_current_sequence(selection):
return process_timeline
def create_bin(name, root=None):
# TODO: create_bin
return
def rescan_hooks():
import flame
try:
@ -280,6 +271,7 @@ def rescan_hooks():
def get_metadata(project_name, _log=None):
# TODO: can be replaced by MediaInfoFile class method
from adsk.libwiretapPythonClientAPI import (
WireTapClient,
WireTapServerHandle,
@ -704,6 +696,25 @@ def maintained_object_duplication(item):
flame.delete(duplicate)
@contextlib.contextmanager
def maintained_temp_file_path(suffix=None):
_suffix = suffix or ""
try:
# build a temporary file path
temporary_file = tempfile.mktemp(
suffix=_suffix, prefix="flame_maintained_")
yield temporary_file.replace("\\", "/")
except IOError as _error:
raise IOError(
"Not able to create temp file: {}".format(_error))
finally:
# remove the temporary file
os.remove(temporary_file)
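
A short usage sketch of the context manager above; the consumer is expected to create a file at the yielded path, as `MediaInfoFile` below does (the suffix and the consumer function are hypothetical):

with maintained_temp_file_path(".clip") as tmp_path:
    # tmp_path uses forward slashes and is removed when the block exits
    generate_clip_file(tmp_path)  # hypothetical consumer writing the file
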
def get_clip_segment(flame_clip):
name = flame_clip.name.get_value()
version = flame_clip.versions[0]
@ -717,3 +728,213 @@ def get_clip_segment(flame_clip):
raise ValueError("Clip `{}` has too many segments!".format(name))
return segments[0]
def get_batch_group_from_desktop(name):
project = get_current_project()
project_desktop = project.current_workspace.desktop
for bgroup in project_desktop.batch_groups:
if bgroup.name.get_value() in name:
return bgroup
class MediaInfoFile(object):
"""Class to get media info file clip data
Raises:
IOError: MEDIA_SCRIPT_PATH path doesn't exists
TypeError: Not able to generate clip xml data file
ET.ParseError: Missing clip in xml clip data
IOError: Not able to save xml clip data to file
Attributes:
str: `MEDIA_SCRIPT_PATH` path to flame binary
logging.Logger: `log` logger
TODO: add method for getting metadata to dict
"""
MEDIA_SCRIPT_PATH = "/opt/Autodesk/mio/current/dl_get_media_info"
log = log
_clip_data = None
_start_frame = None
_fps = None
_drop_mode = None
def __init__(self, path, **kwargs):
# replace log if any
if kwargs.get("logger"):
self.log = kwargs["logger"]
# test if `dl_get_media_info` path exists
self._validate_media_script_path()
# derive other feed variables
self.feed_basename = os.path.basename(path)
self.feed_dir = os.path.dirname(path)
self.feed_ext = os.path.splitext(self.feed_basename)[1][1:].lower()
with maintained_temp_file_path(".clip") as tmp_path:
self.log.info("Temp File: {}".format(tmp_path))
self._generate_media_info_file(tmp_path)
# get clip data and reduce it to a single
# clip if there are multiple
xml_data = self._make_single_clip_media_info(tmp_path)
self.log.debug("xml_data: {}".format(xml_data))
self.log.debug("type: {}".format(type(xml_data)))
# get all time related data and assign them
self._get_time_info_from_origin(xml_data)
self.log.debug("start_frame: {}".format(self.start_frame))
self.log.debug("fps: {}".format(self.fps))
self.log.debug("drop frame: {}".format(self.drop_mode))
self.clip_data = xml_data
@property
def clip_data(self):
"""Clip's xml clip data
Returns:
xml.etree.ElementTree: xml data
"""
return self._clip_data
@clip_data.setter
def clip_data(self, data):
self._clip_data = data
@property
def start_frame(self):
""" Clip's starting frame found in timecode
Returns:
int: number of frames
"""
return self._start_frame
@start_frame.setter
def start_frame(self, number):
self._start_frame = int(number)
@property
def fps(self):
""" Clip's frame rate
Returns:
float: frame rate
"""
return self._fps
@fps.setter
def fps(self, fl_number):
self._fps = float(fl_number)
@property
def drop_mode(self):
""" Clip's drop frame mode
Returns:
str: drop frame flag
"""
return self._drop_mode
@drop_mode.setter
def drop_mode(self, text):
self._drop_mode = str(text)
def _validate_media_script_path(self):
if not os.path.isfile(self.MEDIA_SCRIPT_PATH):
raise IOError("Media Scirpt does not exist: `{}`".format(
self.MEDIA_SCRIPT_PATH))
def _generate_media_info_file(self, fpath):
# Create cmd arguments for getting the xml media info file
cmd_args = [
self.MEDIA_SCRIPT_PATH,
"-e", self.feed_ext,
"-o", fpath,
self.feed_dir
]
try:
# execute creation of clip xml template data
openpype.run_subprocess(cmd_args)
except TypeError as error:
raise TypeError(
"Error creating `{}` due: {}".format(fpath, error))
def _make_single_clip_media_info(self, fpath):
with open(fpath) as f:
lines = f.readlines()
_added_root = itertools.chain(
"<root>", deepcopy(lines)[1:], "</root>")
new_root = ET.fromstringlist(_added_root)
# find the clip matching the input name
xml_clips = new_root.findall("clip")
matching_clip = None
for xml_clip in xml_clips:
if xml_clip.find("name").text in self.feed_basename:
matching_clip = xml_clip
if matching_clip is None:
# raise error listing available clips when no match is found
raise ET.ParseError(
"Missing clip in `{}`. Available clips {}".format(
self.feed_basename, [
xml_clip.find("name").text
for xml_clip in xml_clips
]
))
return matching_clip
def _get_time_info_from_origin(self, xml_data):
try:
for out_track in xml_data.iter('track'):
for out_feed in out_track.iter('feed'):
# start frame
out_feed_nb_ticks_obj = out_feed.find(
'startTimecode/nbTicks')
self.start_frame = out_feed_nb_ticks_obj.text
# fps
out_feed_fps_obj = out_feed.find(
'startTimecode/rate')
self.fps = out_feed_fps_obj.text
# drop frame mode
out_feed_drop_mode_obj = out_feed.find(
'startTimecode/dropMode')
self.drop_mode = out_feed_drop_mode_obj.text
break
else:
continue
except Exception as msg:
self.log.warning(msg)
@staticmethod
def write_clip_data_to_file(fpath, xml_element_data):
""" Write xml element of clip data to file
Args:
fpath (string): file path
xml_element_data (xml.etree.ElementTree.Element): xml data
Raises:
IOError: If data could not be written to file
"""
try:
# save it as new file
tree = cET.ElementTree(xml_element_data)
tree.write(
fpath, xml_declaration=True,
method='xml', encoding='UTF-8'
)
except IOError as error:
raise IOError(
"Not able to write data to file: {}".format(error))


@ -1,24 +1,19 @@
import os
import re
import shutil
import sys
from xml.etree import ElementTree as ET
import six
import qargparse
from Qt import QtWidgets, QtCore
import openpype.api as openpype
from openpype.pipeline import (
LegacyCreator,
LoaderPlugin,
)
from openpype import style
from . import (
lib as flib,
pipeline as fpipeline,
constants
)
from copy import deepcopy
from xml.etree import ElementTree as ET
from Qt import QtCore, QtWidgets
import openpype.api as openpype
import qargparse
from openpype import style
from openpype.pipeline import LegacyCreator, LoaderPlugin
from . import constants
from . import lib as flib
from . import pipeline as fpipeline
log = openpype.Logger.get_logger(__name__)
@ -660,8 +655,8 @@ class PublishableClip:
# Publishing plugin functions
# Loader plugin functions
# Loader plugin functions
class ClipLoader(LoaderPlugin):
"""A basic clip loader for Flame
@ -681,50 +676,52 @@ class ClipLoader(LoaderPlugin):
]
class OpenClipSolver:
media_script_path = "/opt/Autodesk/mio/current/dl_get_media_info"
tmp_name = "_tmp.clip"
tmp_file = None
class OpenClipSolver(flib.MediaInfoFile):
create_new_clip = False
out_feed_nb_ticks = None
out_feed_fps = None
out_feed_drop_mode = None
log = log
def __init__(self, openclip_file_path, feed_data):
# test if media script paht exists
self._validate_media_script_path()
self.out_file = openclip_file_path
# new feed variables:
feed_path = feed_data["path"]
feed_path = feed_data.pop("path")
# initialize parent class
super(OpenClipSolver, self).__init__(
feed_path,
**feed_data
)
# get other metadata
self.feed_version_name = feed_data["version"]
self.feed_colorspace = feed_data.get("colorspace")
if feed_data.get("logger"):
self.log = feed_data["logger"]
self.log.debug("feed_version_name: {}".format(self.feed_version_name))
# derivate other feed variables
self.feed_basename = os.path.basename(feed_path)
self.feed_dir = os.path.dirname(feed_path)
self.feed_ext = os.path.splitext(self.feed_basename)[1][1:].lower()
if not os.path.isfile(openclip_file_path):
# openclip does not exist yet and will be created
self.tmp_file = self.out_file = openclip_file_path
self.log.debug("feed_ext: {}".format(self.feed_ext))
self.log.debug("out_file: {}".format(self.out_file))
if not self._is_valid_tmp_file(self.out_file):
self.create_new_clip = True
else:
# output a temp file
self.out_file = openclip_file_path
self.tmp_file = os.path.join(self.feed_dir, self.tmp_name)
self._clear_tmp_file()
def _is_valid_tmp_file(self, file):
# check if file exists
if os.path.isfile(file):
# test also if file is not empty
with open(file) as f:
lines = f.readlines()
self.log.info("Temp File: {}".format(self.tmp_file))
if len(lines) > 2:
return True
# file is probably corrupted
os.remove(file)
return False
def make(self):
self._generate_media_info_file()
if self.create_new_clip:
# New openClip
@ -732,42 +729,17 @@ class OpenClipSolver:
else:
self._update_open_clip()
def _validate_media_script_path(self):
if not os.path.isfile(self.media_script_path):
raise IOError("Media Scirpt does not exist: `{}`".format(
self.media_script_path))
def _generate_media_info_file(self):
# Create cmd arguments for gettig xml file info file
cmd_args = [
self.media_script_path,
"-e", self.feed_ext,
"-o", self.tmp_file,
self.feed_dir
]
# execute creation of clip xml template data
try:
openpype.run_subprocess(cmd_args)
except TypeError:
self.log.error("Error creating self.tmp_file")
six.reraise(*sys.exc_info())
def _clear_tmp_file(self):
if os.path.isfile(self.tmp_file):
os.remove(self.tmp_file)
def _clear_handler(self, xml_object):
for handler in xml_object.findall("./handler"):
self.log.debug("Handler found")
self.log.info("Handler found")
xml_object.remove(handler)
def _create_new_open_clip(self):
self.log.info("Building new openClip")
self.log.debug(">> self.clip_data: {}".format(self.clip_data))
tmp_xml = ET.parse(self.tmp_file)
tmp_xml_feeds = tmp_xml.find('tracks/track/feeds')
# clip data coming from MediaInfoFile
tmp_xml_feeds = self.clip_data.find('tracks/track/feeds')
tmp_xml_feeds.set('currentVersion', self.feed_version_name)
for tmp_feed in tmp_xml_feeds:
tmp_feed.set('vuid', self.feed_version_name)
@ -778,46 +750,48 @@ class OpenClipSolver:
self._clear_handler(tmp_feed)
tmp_xml_versions_obj = tmp_xml.find('versions')
tmp_xml_versions_obj = self.clip_data.find('versions')
tmp_xml_versions_obj.set('currentVersion', self.feed_version_name)
for xml_new_version in tmp_xml_versions_obj:
xml_new_version.set('uid', self.feed_version_name)
xml_new_version.set('type', 'version')
xml_data = self._fix_xml_data(tmp_xml)
self._clear_handler(self.clip_data)
self.log.info("Adding feed version: {}".format(self.feed_basename))
self._write_result_xml_to_file(xml_data)
self.log.info("openClip Updated: {}".format(self.tmp_file))
self.write_clip_data_to_file(self.out_file, self.clip_data)
def _update_open_clip(self):
self.log.info("Updating openClip ..")
out_xml = ET.parse(self.out_file)
tmp_xml = ET.parse(self.tmp_file)
out_xml = out_xml.getroot()
self.log.debug(">> out_xml: {}".format(out_xml))
self.log.debug(">> tmp_xml: {}".format(tmp_xml))
self.log.debug(">> self.clip_data: {}".format(self.clip_data))
# Get new feed from tmp file
tmp_xml_feed = tmp_xml.find('tracks/track/feeds/feed')
tmp_xml_feed = self.clip_data.find('tracks/track/feeds/feed')
self._clear_handler(tmp_xml_feed)
self._get_time_info_from_origin(out_xml)
if self.out_feed_fps:
# update fps from MediaInfoFile class
if self.fps:
tmp_feed_fps_obj = tmp_xml_feed.find(
"startTimecode/rate")
tmp_feed_fps_obj.text = self.out_feed_fps
if self.out_feed_nb_ticks:
tmp_feed_fps_obj.text = str(self.fps)
# update start_frame from MediaInfoFile class
if self.start_frame:
tmp_feed_nb_ticks_obj = tmp_xml_feed.find(
"startTimecode/nbTicks")
tmp_feed_nb_ticks_obj.text = self.out_feed_nb_ticks
if self.out_feed_drop_mode:
tmp_feed_nb_ticks_obj.text = str(self.start_frame)
# update drop_mode from MediaInfoFile class
if self.drop_mode:
tmp_feed_drop_mode_obj = tmp_xml_feed.find(
"startTimecode/dropMode")
tmp_feed_drop_mode_obj.text = self.out_feed_drop_mode
tmp_feed_drop_mode_obj.text = str(self.drop_mode)
new_path_obj = tmp_xml_feed.find(
"spans/span/path")
@ -850,7 +824,7 @@ class OpenClipSolver:
"version", {"type": "version", "uid": self.feed_version_name})
out_xml_versions_obj.insert(0, new_version_obj)
xml_data = self._fix_xml_data(out_xml)
self._clear_handler(out_xml)
# fist create backup
self._create_openclip_backup_file(self.out_file)
@ -858,30 +832,9 @@ class OpenClipSolver:
self.log.info("Adding feed version: {}".format(
self.feed_version_name))
self._write_result_xml_to_file(xml_data)
self.write_clip_data_to_file(self.out_file, out_xml)
self.log.info("openClip Updated: {}".format(self.out_file))
self._clear_tmp_file()
def _get_time_info_from_origin(self, xml_data):
try:
for out_track in xml_data.iter('track'):
for out_feed in out_track.iter('feed'):
out_feed_nb_ticks_obj = out_feed.find(
'startTimecode/nbTicks')
self.out_feed_nb_ticks = out_feed_nb_ticks_obj.text
out_feed_fps_obj = out_feed.find(
'startTimecode/rate')
self.out_feed_fps = out_feed_fps_obj.text
out_feed_drop_mode_obj = out_feed.find(
'startTimecode/dropMode')
self.out_feed_drop_mode = out_feed_drop_mode_obj.text
break
else:
continue
except Exception as msg:
self.log.warning(msg)
self.log.debug("OpenClip Updated: {}".format(self.out_file))
def _feed_exists(self, xml_data, path):
# loop all available feed paths and check if
@ -892,15 +845,6 @@ class OpenClipSolver:
"Not appending file as it already is in .clip file")
return True
def _fix_xml_data(self, xml_data):
xml_root = xml_data.getroot()
self._clear_handler(xml_root)
return ET.tostring(xml_root).decode('utf-8')
def _write_result_xml_to_file(self, xml_data):
with open(self.out_file, "w") as f:
f.write(xml_data)
def _create_openclip_backup_file(self, file):
bck_file = "{}.bak".format(file)
# if backup does not exist


@ -185,7 +185,9 @@ class WireTapCom(object):
exit_code = subprocess.call(
project_create_cmd,
cwd=os.path.expanduser('~'))
cwd=os.path.expanduser('~'),
preexec_fn=_subprocess_preexec_fn
)
if exit_code != 0:
RuntimeError("Cannot create project in flame db")
@ -254,7 +256,7 @@ class WireTapCom(object):
filtered_users = [user for user in used_names if user_name in user]
if filtered_users:
# todo: need to find lastly created following regex pattern for
# TODO: need to find lastly created following regex pattern for
# date used in name
return filtered_users.pop()
@ -448,7 +450,9 @@ class WireTapCom(object):
exit_code = subprocess.call(
project_colorspace_cmd,
cwd=os.path.expanduser('~'))
cwd=os.path.expanduser('~'),
preexec_fn=_subprocess_preexec_fn
)
if exit_code != 0:
RuntimeError("Cannot set colorspace {} on project {}".format(
@ -456,6 +460,15 @@ class WireTapCom(object):
))
def _subprocess_preexec_fn():
""" Helper function
Setting permission mask to 0777
"""
os.setpgrp()
os.umask(0o000)
if __name__ == "__main__":
# get json exchange data
json_path = sys.argv[-1]
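
The `preexec_fn` hook added above runs in the child process between fork and exec (POSIX only); a hedged sketch of the pattern with a placeholder command:

import os
import subprocess

def _subprocess_preexec_fn():
    os.setpgrp()     # detach the child into its own process group
    os.umask(0o000)  # clear the umask so created files get full permissions

exit_code = subprocess.call(
    ["echo", "hello"],  # hypothetical command
    cwd=os.path.expanduser("~"),
    preexec_fn=_subprocess_preexec_fn,
)
if exit_code != 0:
    raise RuntimeError("Command failed")
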


@ -11,8 +11,6 @@ from . import utils
import flame
from pprint import pformat
reload(utils) # noqa
log = logging.getLogger(__name__)
@ -260,24 +258,15 @@ def create_otio_markers(otio_item, item):
otio_item.markers.append(otio_marker)
def create_otio_reference(clip_data):
def create_otio_reference(clip_data, fps=None):
metadata = _get_metadata(clip_data)
# get file info for path and start frame
frame_start = 0
fps = CTX.get_fps()
fps = fps or CTX.get_fps()
path = clip_data["fpath"]
reel_clip = None
match_reel_clip = [
clip for clip in CTX.clips
if clip["fpath"] == path
]
if match_reel_clip:
reel_clip = match_reel_clip.pop()
fps = reel_clip["fps"]
file_name = os.path.basename(path)
file_head, extension = os.path.splitext(file_name)
@ -339,13 +328,22 @@ def create_otio_reference(clip_data):
def create_otio_clip(clip_data):
from openpype.hosts.flame.api import MediaInfoFile
segment = clip_data["PySegment"]
# create media reference
media_reference = create_otio_reference(clip_data)
# calculate source in
first_frame = utils.get_frame_from_filename(clip_data["fpath"]) or 0
media_info = MediaInfoFile(clip_data["fpath"])
media_timecode_start = media_info.start_frame
media_fps = media_info.fps
# create media reference
media_reference = create_otio_reference(clip_data, media_fps)
# define first frame
first_frame = media_timecode_start or utils.get_frame_from_filename(
clip_data["fpath"]) or 0
source_in = int(clip_data["source_in"]) - int(first_frame)
# create source range
@ -378,38 +376,6 @@ def create_otio_gap(gap_start, clip_start, tl_start_frame, fps):
)
def get_clips_in_reels(project):
output_clips = []
project_desktop = project.current_workspace.desktop
for reel_group in project_desktop.reel_groups:
for reel in reel_group.reels:
for clip in reel.clips:
clip_data = {
"PyClip": clip,
"fps": float(str(clip.frame_rate)[:-4])
}
attrs = [
"name", "width", "height",
"ratio", "sample_rate", "bit_depth"
]
for attr in attrs:
val = getattr(clip, attr)
clip_data[attr] = val
version = clip.versions[-1]
track = version.tracks[-1]
for segment in track.segments:
segment_data = _get_segment_attributes(segment)
clip_data.update(segment_data)
output_clips.append(clip_data)
return output_clips
def _get_colourspace_policy():
output = {}
@ -493,9 +459,6 @@ def _get_shot_tokens_values(clip, tokens):
old_value = None
output = {}
if not clip.shot_name:
return output
old_value = clip.shot_name.get_value()
for token in tokens:
@ -513,15 +476,21 @@ def _get_shot_tokens_values(clip, tokens):
def _get_segment_attributes(segment):
# log.debug(dir(segment))
if str(segment.name)[1:-1] == "":
log.debug("Segment name|hidden: {}|{}".format(
segment.name.get_value(), segment.hidden
))
if (
segment.name.get_value() == ""
or segment.hidden.get_value()
):
return None
# Add timeline segment to tree
clip_data = {
"segment_name": segment.name.get_value(),
"segment_comment": segment.comment.get_value(),
"shot_name": segment.shot_name.get_value(),
"tape_name": segment.tape_name,
"source_name": segment.source_name,
"fpath": segment.file_path,
@ -529,9 +498,10 @@ def _get_segment_attributes(segment):
}
# add all available shot tokens
shot_tokens = _get_shot_tokens_values(segment, [
"<colour space>", "<width>", "<height>", "<depth>",
])
shot_tokens = _get_shot_tokens_values(
segment,
["<colour space>", "<width>", "<height>", "<depth>"]
)
clip_data.update(shot_tokens)
# populate shot source metadata
@ -561,11 +531,6 @@ def create_otio_timeline(sequence):
log.info(sequence.attributes)
CTX.project = get_current_flame_project()
CTX.clips = get_clips_in_reels(CTX.project)
log.debug(pformat(
CTX.clips
))
# get current timeline
CTX.set_fps(
@ -583,8 +548,13 @@ def create_otio_timeline(sequence):
# create otio tracks and clips
for ver in sequence.versions:
for track in ver.tracks:
if len(track.segments) == 0 and track.hidden:
return None
# skip empty or hidden tracks
if (
len(track.segments) == 0
or track.hidden.get_value()
):
continue
# convert track to otio
otio_track = create_otio_track(
@ -597,11 +567,7 @@ def create_otio_timeline(sequence):
continue
all_segments.append(clip_data)
segments_ordered = {
itemindex: clip_data
for itemindex, clip_data in enumerate(
all_segments)
}
segments_ordered = dict(enumerate(all_segments))
log.debug("_ segments_ordered: {}".format(
pformat(segments_ordered)
))
@ -612,15 +578,11 @@ def create_otio_timeline(sequence):
log.debug("_ itemindex: {}".format(itemindex))
# Add Gap if needed
if itemindex == 0:
# if it is first track item at track then add
# it to previous item
prev_item = segment_data
else:
# get previous item
prev_item = segments_ordered[itemindex - 1]
prev_item = (
segment_data
if itemindex == 0
else segments_ordered[itemindex - 1]
)
log.debug("_ segment_data: {}".format(segment_data))
# calculate clip frame range difference from each other
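
For context, a minimal sketch of the OTIO objects the exporter above assembles, using timecode data as resolved through `MediaInfoFile` (all values and paths are hypothetical):

import opentimelineio as otio

fps = 24.0
first_frame = 1001  # e.g. MediaInfoFile.start_frame
source_in, source_duration = 1010, 48

media_reference = otio.schema.ExternalReference(
    target_url="/mnt/projects/sh010/plate.0001.exr",  # hypothetical path
    available_range=otio.opentime.TimeRange(
        start_time=otio.opentime.RationalTime(first_frame, fps),
        duration=otio.opentime.RationalTime(96, fps),
    ),
)
clip = otio.schema.Clip(
    name="sh010",
    media_reference=media_reference,
    source_range=otio.opentime.TimeRange(
        start_time=otio.opentime.RationalTime(source_in, fps),
        duration=otio.opentime.RationalTime(source_duration, fps),
    ),
)
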


@ -22,7 +22,7 @@ class LoadClip(opfapi.ClipLoader):
# settings
reel_group_name = "OpenPype_Reels"
reel_name = "Loaded"
clip_name_template = "{asset}_{subset}_{representation}"
clip_name_template = "{asset}_{subset}_{output}"
def load(self, context, name, namespace, options):
@ -39,7 +39,7 @@ class LoadClip(opfapi.ClipLoader):
clip_name = self.clip_name_template.format(
**context["representation"]["context"])
# todo: settings in imageio
# TODO: settings in imageio
# convert colorspace with ocio to flame mapping
# in imageio flame section
colorspace = colorspace
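
As a quick illustration of the template change above, the clip name is formatted from the representation's context (values hypothetical):

template = "{asset}_{subset}_{output}"
repre_context = {"asset": "sh010", "subset": "plateMain", "output": "exr"}
print(template.format(**repre_context))  # sh010_plateMain_exr
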


@ -0,0 +1,139 @@
import os
import flame
from pprint import pformat
import openpype.hosts.flame.api as opfapi
class LoadClipBatch(opfapi.ClipLoader):
"""Load a subset to timeline as clip
Places the clip in the timeline at its asset origin timings
collected during conforming of the project
"""
families = ["render2d", "source", "plate", "render", "review"]
representations = ["exr", "dpx", "jpg", "jpeg", "png", "h264"]
label = "Load as clip to current batch"
order = -10
icon = "code-fork"
color = "orange"
# settings
reel_name = "OP_LoadedReel"
clip_name_template = "{asset}_{subset}_{output}"
def load(self, context, name, namespace, options):
# get flame objects
self.batch = options.get("batch") or flame.batch
# load clip to timeline and get main variables
namespace = namespace
version = context['version']
version_data = version.get("data", {})
version_name = version.get("name", None)
colorspace = version_data.get("colorspace", None)
# in case output is not in context replace key to representation
if not context["representation"]["context"].get("output"):
self.clip_name_template = self.clip_name_template.replace(
"output", "representation")
clip_name = self.clip_name_template.format(
**context["representation"]["context"])
# TODO: settings in imageio
# convert colorspace with ocio to flame mapping
# in imageio flame section
colorspace = colorspace
# create workfile path
workfile_dir = options.get("workdir") or os.environ["AVALON_WORKDIR"]
openclip_dir = os.path.join(
workfile_dir, clip_name
)
openclip_path = os.path.join(
openclip_dir, clip_name + ".clip"
)
if not os.path.exists(openclip_dir):
os.makedirs(openclip_dir)
# prepare clip data from context and send it to openClipLoader
loading_context = {
"path": self.fname.replace("\\", "/"),
"colorspace": colorspace,
"version": "v{:0>3}".format(version_name),
"logger": self.log
}
self.log.debug(pformat(
loading_context
))
self.log.debug(openclip_path)
# make openpype clip file
opfapi.OpenClipSolver(openclip_path, loading_context).make()
# get existing clip from the loading reel or import the openclip
opc = self._get_clip(
clip_name,
openclip_path
)
# add additional metadata from the version to imprint Avalon knob
add_keys = [
"frameStart", "frameEnd", "source", "author",
"fps", "handleStart", "handleEnd"
]
# move all version data keys to tag data
data_imprint = {
key: version_data.get(key, str(None))
for key in add_keys
}
# add variables related to version context
data_imprint.update({
"version": version_name,
"colorspace": colorspace,
"objectName": clip_name
})
# TODO: finish the containerisation
# opc_segment = opfapi.get_clip_segment(opc)
# return opfapi.containerise(
# opc_segment,
# name, namespace, context,
# self.__class__.__name__,
# data_imprint)
return opc
def _get_clip(self, name, clip_path):
reel = self._get_reel()
# with maintained openclip as opc
matching_clip = None
for cl in reel.clips:
if cl.name.get_value() != name:
continue
matching_clip = cl
if not matching_clip:
created_clips = flame.import_clips(str(clip_path), reel)
return created_clips.pop()
return matching_clip
def _get_reel(self):
matching_reel = [
rg for rg in self.batch.reels
if rg.name.get_value() == self.reel_name
]
return (
matching_reel.pop()
if matching_reel
else self.batch.create_reel(str(self.reel_name))
)


@ -21,19 +21,12 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
audio_track_items = []
# TODO: add to settings
# settings
xml_preset_attrs_from_comments = {
"width": "number",
"height": "number",
"pixelRatio": "float",
"resizeType": "string",
"resizeFilter": "string"
}
xml_preset_attrs_from_comments = []
add_tasks = []
def process(self, context):
project = context.data["flameProject"]
sequence = context.data["flameSequence"]
selected_segments = context.data["flameSelectedSegments"]
self.log.debug("__ selected_segments: {}".format(selected_segments))
@ -79,9 +72,9 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
# solve handles length
marker_data["handleStart"] = min(
marker_data["handleStart"], head)
marker_data["handleStart"], abs(head))
marker_data["handleEnd"] = min(
marker_data["handleEnd"], tail)
marker_data["handleEnd"], abs(tail))
with_audio = bool(marker_data.pop("audio"))
@ -112,7 +105,11 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
"fps": self.fps,
"flameSourceClip": source_clip,
"sourceFirstFrame": int(first_frame),
"path": file_path
"path": file_path,
"flameAddTasks": self.add_tasks,
"tasks": {
task["name"]: {"type": task["type"]}
for task in self.add_tasks}
})
# get otio clip data
@ -187,7 +184,10 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
# split to key and value
key, value = split.split(":")
for a_name, a_type in self.xml_preset_attrs_from_comments.items():
for attr_data in self.xml_preset_attrs_from_comments:
a_name = attr_data["name"]
a_type = attr_data["type"]
# exclude all not related attributes
if a_name.lower() not in key.lower():
continue
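
Given the loop above, the `xml_preset_attrs_from_comments` setting is now expected as a list of name/type entries rather than the former mapping; a hypothetical settings value:

xml_preset_attrs_from_comments = [
    {"name": "width", "type": "number"},
    {"name": "height", "type": "number"},
    {"name": "pixelRatio", "type": "float"},
]
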
@ -247,6 +247,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
head = clip_data.get("segment_head")
tail = clip_data.get("segment_tail")
# HACK: fallback for versions below 2021.1
if not head:
head = int(clip_data["source_in"]) - int(first_frame)
if not tail:


@ -61,9 +61,13 @@ class ExtractSubsetResources(openpype.api.Extractor):
# flame objects
segment = instance.data["item"]
segment_name = segment.name.get_value()
sequence_clip = instance.context.data["flameSequence"]
clip_data = instance.data["flameSourceClip"]
clip = clip_data["PyClip"]
reel_clip = None
if clip_data:
reel_clip = clip_data["PyClip"]
# segment's parent track name
s_track_name = segment.parent.name.get_value()
@ -108,6 +112,16 @@ class ExtractSubsetResources(openpype.api.Extractor):
ignore_comment_attrs = preset_config["ignore_comment_attrs"]
color_out = preset_config["colorspace_out"]
# get attributes related to loading in integrate_batch_group
load_to_batch_group = preset_config.get(
"load_to_batch_group")
batch_group_loader_name = preset_config.get(
"batch_group_loader_name")
# convert to None if empty string
if batch_group_loader_name == "":
batch_group_loader_name = None
# get frame range with handles for representation range
frame_start_handle = frame_start - handle_start
source_duration_handles = (
@ -117,8 +131,20 @@ class ExtractSubsetResources(openpype.api.Extractor):
in_mark = (source_start_handles - source_first_frame) + 1
out_mark = in_mark + source_duration_handles
# check the preset type against the available reel clip
if (
not reel_clip
and export_type != "Sequence Publish"
):
self.log.warning((
"Skipping preset {}. Not available "
"reel clip for {}").format(
preset_file, segment_name
))
continue
# by default export source clips
exporting_clip = clip
exporting_clip = reel_clip
if export_type == "Sequence Publish":
# change export clip to sequence
@ -150,7 +176,7 @@ class ExtractSubsetResources(openpype.api.Extractor):
if export_type == "Sequence Publish":
# only keep visible layer where instance segment is child
self.hide_other_tracks(duplclip, s_track_name)
self.hide_others(duplclip, segment_name, s_track_name)
# validate xml preset file is filled
if preset_file == "":
@ -211,7 +237,9 @@ class ExtractSubsetResources(openpype.api.Extractor):
"tags": repre_tags,
"data": {
"colorspace": color_out
}
},
"load_to_batch_group": load_to_batch_group,
"batch_group_loader_name": batch_group_loader_name
}
# collect all available content of export dir
@ -322,18 +350,26 @@ class ExtractSubsetResources(openpype.api.Extractor):
return new_stage_dir, new_files_list
def hide_other_tracks(self, sequence_clip, track_name):
def hide_others(self, sequence_clip, segment_name, track_name):
"""Helper method used only if sequence clip is used
Args:
sequence_clip (flame.Clip): sequence clip
segment_name (str): segment name
track_name (str): track name
"""
# create otio tracks and clips
for ver in sequence_clip.versions:
for track in ver.tracks:
if len(track.segments) == 0 and track.hidden:
if len(track.segments) == 0 and track.hidden.get_value():
continue
# hide tracks which are not parent track
if track.name.get_value() != track_name:
track.hidden = True
continue
# hide all other segments
for segment in track.segments:
if segment.name.get_value() != segment_name:
segment.hidden = True


@ -0,0 +1,328 @@
import os
import copy
from collections import OrderedDict
from pprint import pformat
import pyblish
from openpype.lib import get_workdir
import openpype.hosts.flame.api as opfapi
import openpype.pipeline as op_pipeline
class IntegrateBatchGroup(pyblish.api.InstancePlugin):
"""Integrate published shot to batch group"""
order = pyblish.api.IntegratorOrder + 0.45
label = "Integrate Batch Groups"
hosts = ["flame"]
families = ["clip"]
# settings
default_loader = "LoadClip"
def process(self, instance):
add_tasks = instance.data["flameAddTasks"]
# iterate all tasks from settings
for task_data in add_tasks:
# exclude batch group
if not task_data["create_batch_group"]:
continue
# create or get already created batch group
bgroup = self._get_batch_group(instance, task_data)
# add batch group content
all_batch_nodes = self._add_nodes_to_batch_with_links(
instance, task_data, bgroup)
for name, node in all_batch_nodes.items():
self.log.debug("name: {}, dir: {}".format(
name, dir(node)
))
self.log.debug("__ node.attributes: {}".format(
node.attributes
))
# load plate to batch group
self.log.info("Loading subset `{}` into batch `{}`".format(
instance.data["subset"], bgroup.name.get_value()
))
self._load_clip_to_context(instance, bgroup)
def _add_nodes_to_batch_with_links(self, instance, task_data, batch_group):
# get Write File node properties > OrderedDict because order does matter
write_pref_data = self._get_write_prefs(instance, task_data)
batch_nodes = [
{
"type": "comp",
"properties": {},
"id": "comp_node01"
},
{
"type": "Write File",
"properties": write_pref_data,
"id": "write_file_node01"
}
]
batch_links = [
{
"from_node": {
"id": "comp_node01",
"connector": "Result"
},
"to_node": {
"id": "write_file_node01",
"connector": "Front"
}
}
]
# add nodes into batch group
return opfapi.create_batch_group_conent(
batch_nodes, batch_links, batch_group)
def _load_clip_to_context(self, instance, bgroup):
# get all loaders for host
loaders_by_name = {
loader.__name__: loader
for loader in op_pipeline.discover_loader_plugins()
}
# get all published representations
published_representations = instance.data["published_representations"]
repres_db_id_by_name = {
repre_info["representation"]["name"]: repre_id
for repre_id, repre_info in published_representations.items()
}
# get all loadable representations
repres_by_name = {
repre["name"]: repre for repre in instance.data["representations"]
}
# get repre_id for the loadable representations
loader_name_by_repre_id = {
repres_db_id_by_name[repr_name]: {
"loader": repr_data["batch_group_loader_name"],
# add repre data for exception logging
"_repre_data": repr_data
}
for repr_name, repr_data in repres_by_name.items()
if repr_data.get("load_to_batch_group")
}
self.log.debug("__ loader_name_by_repre_id: {}".format(pformat(
loader_name_by_repre_id)))
# get representation context from the repre_id
repre_contexts = op_pipeline.load.get_repres_contexts(
loader_name_by_repre_id.keys())
self.log.debug("__ repre_contexts: {}".format(pformat(
repre_contexts)))
# loop all returned repres from repre_context dict
for repre_id, repre_context in repre_contexts.items():
self.log.debug("__ repre_id: {}".format(repre_id))
# get loader name by representation id
loader_name = (
loader_name_by_repre_id[repre_id]["loader"]
# if nothing was added to settings fallback to default
or self.default_loader
)
# get loader plugin
loader_plugin = loaders_by_name.get(loader_name)
if loader_plugin:
# load to flame by representation context
try:
op_pipeline.load.load_with_repre_context(
loader_plugin, repre_context, **{
"data": {
"workdir": self.task_workdir,
"batch": bgroup
}
})
except op_pipeline.load.IncompatibleLoaderError as msg:
self.log.error(
"Check allowed representations for Loader `{}` "
"in settings > error: {}".format(
loader_plugin.__name__, msg))
self.log.error(
"Representaton context >>{}<< is not compatible "
"with loader `{}`".format(
pformat(repre_context), loader_plugin.__name__
)
)
else:
self.log.warning(
"Something got wrong and there is not Loader found for "
"following data: {}".format(
pformat(loader_name_by_repre_id))
)
def _get_batch_group(self, instance, task_data):
frame_start = instance.data["frameStart"]
frame_end = instance.data["frameEnd"]
handle_start = instance.data["handleStart"]
handle_end = instance.data["handleEnd"]
frame_duration = (frame_end - frame_start) + 1
asset_name = instance.data["asset"]
task_name = task_data["name"]
batchgroup_name = "{}_{}".format(asset_name, task_name)
batch_data = {
"shematic_reels": [
"OP_LoadedReel"
],
"handleStart": handle_start,
"handleEnd": handle_end
}
self.log.debug(
"__ batch_data: {}".format(pformat(batch_data)))
# check if the batch group already exists
bgroup = opfapi.get_batch_group_from_desktop(batchgroup_name)
if not bgroup:
self.log.info(
"Creating new batch group: {}".format(batchgroup_name))
# create batch with utils
bgroup = opfapi.create_batch_group(
batchgroup_name,
frame_start,
frame_duration,
**batch_data
)
else:
self.log.info(
"Updating batch group: {}".format(batchgroup_name))
# update already created batch group
bgroup = opfapi.create_batch_group(
batchgroup_name,
frame_start,
frame_duration,
update_batch_group=bgroup,
**batch_data
)
return bgroup
def _get_anamoty_data_with_current_task(self, instance, task_data):
anatomy_data = copy.deepcopy(instance.data["anatomyData"])
task_name = task_data["name"]
task_type = task_data["type"]
anatomy_obj = instance.context.data["anatomy"]
# update task data in anatomy data
project_task_types = anatomy_obj["tasks"]
task_code = project_task_types.get(task_type, {}).get("short_name")
anatomy_data.update({
"task": {
"name": task_name,
"type": task_type,
"short": task_code
}
})
return anatomy_data
def _get_write_prefs(self, instance, task_data):
# update task in anatomy data
anatomy_data = self._get_anamoty_data_with_current_task(
instance, task_data)
self.task_workdir = self._get_shot_task_dir_path(
instance, task_data)
self.log.debug("__ task_workdir: {}".format(
self.task_workdir))
# TODO: this might be done with template in settings
render_dir_path = os.path.join(
self.task_workdir, "render", "flame")
if not os.path.exists(render_dir_path):
os.makedirs(render_dir_path, mode=0o777)
# TODO: add most of these to `imageio/flame/batch/write_node`
name = "{project[code]}_{asset}_{task[name]}".format(
**anatomy_data
)
# The path attribute where the rendered clip is exported
# /path/to/file.[0001-0010].exr
media_path = render_dir_path
# name of file represented by tokens
media_path_pattern = (
"<name>_v<iteration###>/<name>_v<iteration###>.<frame><ext>")
# The Create Open Clip attribute of the Write File node. \
# Determines if an Open Clip is created by the Write File node.
create_clip = True
# The Include Setup attribute of the Write File node.
# Determines if a Batch Setup file is created by the Write File node.
include_setup = True
# The path attribute where the Open Clip file is exported by
# the Write File node.
create_clip_path = "<name>"
# The path attribute where the Batch setup file
# is exported by the Write File node.
include_setup_path = "./<name>_v<iteration###>"
# The file type for the files written by the Write File node.
# Setting this attribute also overwrites format_extension,
# bit_depth and compress_mode to match the defaults for
# this file type.
file_type = "OpenEXR"
# The file extension for the files written by the Write File node.
# This attribute resets to match file_type whenever file_type
# is set. If you require a specific extension, you must
# set format_extension after setting file_type.
format_extension = "exr"
# The bit depth for the files written by the Write File node.
# This attribute resets to match file_type whenever file_type is set.
bit_depth = "16"
# The compressing attribute for the files exported by the Write
# File node. Only relevant when file_type in 'OpenEXR', 'Sgi', 'Tiff'
compress = True
# The compression format attribute for the specific File Types
# export by the Write File node. You must set compress_mode
# after setting file_type.
compress_mode = "DWAB"
# The frame index mode attribute of the Write File node.
# Value range: `Use Timecode` or `Use Start Frame`
frame_index_mode = "Use Start Frame"
frame_padding = 6
# The versioning mode of the Open Clip exported by the Write File node.
# Only available if create_clip = True.
version_mode = "Follow Iteration"
version_name = "v<version>"
version_padding = 3
# need to make sure the order of keys is correct
return OrderedDict((
("name", name),
("media_path", media_path),
("media_path_pattern", media_path_pattern),
("create_clip", create_clip),
("include_setup", include_setup),
("create_clip_path", create_clip_path),
("include_setup_path", include_setup_path),
("file_type", file_type),
("format_extension", format_extension),
("bit_depth", bit_depth),
("compress", compress),
("compress_mode", compress_mode),
("frame_index_mode", frame_index_mode),
("frame_padding", frame_padding),
("version_mode", version_mode),
("version_name", version_name),
("version_padding", version_padding)
))
def _get_shot_task_dir_path(self, instance, task_data):
project_doc = instance.data["projectEntity"]
asset_entity = instance.data["assetEntity"]
return get_workdir(
project_doc, asset_entity, task_data["name"], "flame")


@ -9,6 +9,8 @@ class ValidateSourceClip(pyblish.api.InstancePlugin):
label = "Validate Source Clip"
hosts = ["flame"]
families = ["clip"]
optional = True
active = False
def process(self, instance):
flame_source_clip = instance.data["flameSourceClip"]


@ -4,7 +4,6 @@ import logging
import contextlib
import hou
import hdefereval
import pyblish.api
import avalon.api
@ -305,7 +304,13 @@ def on_new():
start = hou.playbar.playbackRange()[0]
hou.setFrame(start)
hdefereval.executeDeferred(_enforce_start_frame)
if hou.isUIAvailable():
import hdefereval
hdefereval.executeDeferred(_enforce_start_frame)
else:
# Run without execute deferred when no UI is available because
# without UI `hdefereval` is not available to import
_enforce_start_frame()
def _set_context_settings():


@ -4,8 +4,6 @@ import os
import json
import appdirs
import requests
import six
import sys
from maya import cmds
import maya.app.renderSetup.model.renderSetup as renderSetup
@ -14,6 +12,7 @@ from openpype.hosts.maya.api import (
lib,
plugin
)
from openpype.lib import requests_get
from openpype.api import (
get_system_settings,
get_project_settings,
@ -117,6 +116,8 @@ class CreateRender(plugin.Creator):
except KeyError:
self.aov_separator = "_"
manager = ModulesManager()
self.deadline_module = manager.modules_by_name["deadline"]
try:
default_servers = deadline_settings["deadline_urls"]
project_servers = (
@ -133,10 +134,8 @@ class CreateRender(plugin.Creator):
except AttributeError:
# Handle situation were we had only one url for deadline.
manager = ModulesManager()
deadline_module = manager.modules_by_name["deadline"]
# get default deadline webservice url from deadline module
self.deadline_servers = deadline_module.deadline_urls
self.deadline_servers = self.deadline_module.deadline_urls
def process(self):
"""Entry point."""
@ -205,48 +204,31 @@ class CreateRender(plugin.Creator):
def _deadline_webservice_changed(self):
"""Refresh Deadline server dependent options."""
# get selected server
from maya import cmds
webservice = self.deadline_servers[
self.server_aliases[
cmds.getAttr("{}.deadlineServers".format(self.instance))
]
]
pools = self._get_deadline_pools(webservice)
pools = self.deadline_module.get_deadline_pools(webservice, self.log)
cmds.deleteAttr("{}.primaryPool".format(self.instance))
cmds.deleteAttr("{}.secondaryPool".format(self.instance))
pool_setting = (self._project_settings["deadline"]
["publish"]
["CollectDeadlinePools"])
primary_pool = pool_setting["primary_pool"]
sorted_pools = self._set_default_pool(list(pools), primary_pool)
cmds.addAttr(self.instance, longName="primaryPool",
attributeType="enum",
enumName=":".join(pools))
cmds.addAttr(self.instance, longName="secondaryPool",
enumName=":".join(sorted_pools))
pools = ["-"] + pools
secondary_pool = pool_setting["secondary_pool"]
sorted_pools = self._set_default_pool(list(pools), secondary_pool)
cmds.addAttr("{}.secondaryPool".format(self.instance),
attributeType="enum",
enumName=":".join(["-"] + pools))
def _get_deadline_pools(self, webservice):
# type: (str) -> list
"""Get pools from Deadline.
Args:
webservice (str): Server url.
Returns:
list: Pools.
Throws:
RuntimeError: If deadline webservice is unreachable.
"""
argument = "{}/api/pools?NamesOnly=true".format(webservice)
try:
response = self._requests_get(argument)
except requests.exceptions.ConnectionError as exc:
msg = 'Cannot connect to deadline web service'
self.log.error(msg)
six.reraise(
RuntimeError,
RuntimeError('{} - {}'.format(msg, exc)),
sys.exc_info()[2])
if not response.ok:
self.log.warning("No pools retrieved")
return []
return response.json()
enumName=":".join(sorted_pools))
def _create_render_settings(self):
"""Create instance settings."""
@ -295,7 +277,8 @@ class CreateRender(plugin.Creator):
# use first one for initial list of pools.
deadline_url = next(iter(self.deadline_servers.values()))
pool_names = self._get_deadline_pools(deadline_url)
pool_names = self.deadline_module.get_deadline_pools(deadline_url,
self.log)
maya_submit_dl = self._project_settings.get(
"deadline", {}).get(
"publish", {}).get(
@ -326,12 +309,27 @@ class CreateRender(plugin.Creator):
self.log.info(" - pool: {}".format(pool["name"]))
pool_names.append(pool["name"])
self.data["primaryPool"] = pool_names
pool_setting = (self._project_settings["deadline"]
["publish"]
["CollectDeadlinePools"])
primary_pool = pool_setting["primary_pool"]
self.data["primaryPool"] = self._set_default_pool(pool_names,
primary_pool)
# We add a string "-" to allow the user to not
# set any secondary pools
self.data["secondaryPool"] = ["-"] + pool_names
pool_names = ["-"] + pool_names
secondary_pool = pool_setting["secondary_pool"]
self.data["secondaryPool"] = self._set_default_pool(pool_names,
secondary_pool)
self.options = {"useSelection": False} # Force no content
def _set_default_pool(self, pool_names, pool_value):
"""Reorder pool names, default should come first"""
if pool_value and pool_value in pool_names:
pool_names.remove(pool_value)
pool_names = [pool_value] + pool_names
return pool_names
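A minimal, self-contained sketch of the reordering `_set_default_pool` performs (pool names below are hypothetical):

def set_default_pool(pool_names, pool_value):
    # Same reordering as _set_default_pool above: move the configured
    # default (if present) to the front of the enum list.
    if pool_value and pool_value in pool_names:
        pool_names.remove(pool_value)
        pool_names = [pool_value] + pool_names
    return pool_names

print(set_default_pool(["cpu", "gpu", "local"], "gpu"))
# -> ["gpu", "cpu", "local"], so "gpu" shows up preselected in the attribute.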
def _load_credentials(self):
"""Load Muster credentials.
@ -366,7 +364,7 @@ class CreateRender(plugin.Creator):
"""
params = {"authToken": self._token}
api_entry = "/api/pools/list"
response = self._requests_get(self.MUSTER_REST_URL + api_entry,
response = requests_get(self.MUSTER_REST_URL + api_entry,
params=params)
if response.status_code != 200:
if response.status_code == 401:
@ -392,45 +390,11 @@ class CreateRender(plugin.Creator):
api_url = "{}/muster/show_login".format(
os.environ["OPENPYPE_WEBSERVER_URL"])
self.log.debug(api_url)
login_response = self._requests_get(api_url, timeout=1)
login_response = requests_get(api_url, timeout=1)
if login_response.status_code != 200:
self.log.error("Cannot show login form to Muster")
raise Exception("Cannot show login form to Muster")
def _requests_post(self, *args, **kwargs):
"""Wrap request post method.
Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
variable is found. This is useful when Deadline or Muster server are
running with self-signed certificates and their certificate is not
added to trusted certificates on client machines.
Warning:
Disabling SSL certificate validation is defeating one line
of defense SSL is providing and it is not recommended.
"""
if "verify" not in kwargs:
kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
return requests.post(*args, **kwargs)
def _requests_get(self, *args, **kwargs):
"""Wrap request get method.
Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
variable is found. This is useful when Deadline or Muster server are
running with self-signed certificates and their certificate is not
added to trusted certificates on client machines.
Warning:
Disabling SSL certificate validation is defeating one line
of defense SSL is providing and it is not recommended.
"""
if "verify" not in kwargs:
kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
return requests.get(*args, **kwargs)
def _set_default_renderer_settings(self, renderer):
"""Set basic settings based on renderer.

View file

@ -4,8 +4,6 @@ import os
import json
import appdirs
import requests
import six
import sys
from maya import cmds
import maya.app.renderSetup.model.renderSetup as renderSetup
@ -19,6 +17,7 @@ from openpype.api import (
get_project_settings
)
from openpype.lib import requests_get
from openpype.pipeline import CreatorError
from openpype.modules import ModulesManager
@ -40,6 +39,10 @@ class CreateVRayScene(plugin.Creator):
self._rs = renderSetup.instance()
self.data["exportOnFarm"] = False
deadline_settings = get_system_settings()["modules"]["deadline"]
manager = ModulesManager()
self.deadline_module = manager.modules_by_name["deadline"]
if not deadline_settings["enabled"]:
self.deadline_servers = {}
return
@ -62,10 +65,8 @@ class CreateVRayScene(plugin.Creator):
except AttributeError:
# Handle situation where we had only one URL for Deadline.
manager = ModulesManager()
deadline_module = manager.modules_by_name["deadline"]
# get default deadline webservice url from deadline module
self.deadline_servers = deadline_module.deadline_urls
self.deadline_servers = self.deadline_module.deadline_urls
def process(self):
"""Entry point."""
@ -128,7 +129,7 @@ class CreateVRayScene(plugin.Creator):
cmds.getAttr("{}.deadlineServers".format(self.instance))
]
]
pools = self._get_deadline_pools(webservice)
pools = self.deadline_module.get_deadline_pools(webservice)
cmds.deleteAttr("{}.primaryPool".format(self.instance))
cmds.deleteAttr("{}.secondaryPool".format(self.instance))
cmds.addAttr(self.instance, longName="primaryPool",
@ -138,33 +139,6 @@ class CreateVRayScene(plugin.Creator):
attributeType="enum",
enumName=":".join(["-"] + pools))
def _get_deadline_pools(self, webservice):
# type: (str) -> list
"""Get pools from Deadline.
Args:
webservice (str): Server url.
Returns:
list: Pools.
Throws:
RuntimeError: If deadline webservice is unreachable.
"""
argument = "{}/api/pools?NamesOnly=true".format(webservice)
try:
response = self._requests_get(argument)
except requests.exceptions.ConnectionError as exc:
msg = 'Cannot connect to deadline web service'
self.log.error(msg)
six.reraise(
CreatorError,
CreatorError('{} - {}'.format(msg, exc)),
sys.exc_info()[2])
if not response.ok:
self.log.warning("No pools retrieved")
return []
return response.json()
def _create_vray_instance_settings(self):
# get pools
pools = []
@ -195,7 +169,7 @@ class CreateVRayScene(plugin.Creator):
for k in self.deadline_servers.keys()
][0]
pool_names = self._get_deadline_pools(deadline_url)
pool_names = self.deadline_module.get_deadline_pools(deadline_url)
if muster_enabled:
self.log.info(">>> Loading Muster credentials ...")
@ -259,8 +233,8 @@ class CreateVRayScene(plugin.Creator):
"""
params = {"authToken": self._token}
api_entry = "/api/pools/list"
response = self._requests_get(self.MUSTER_REST_URL + api_entry,
params=params)
response = requests_get(self.MUSTER_REST_URL + api_entry,
params=params)
if response.status_code != 200:
if response.status_code == 401:
self.log.warning("Authentication token expired.")
@ -285,45 +259,7 @@ class CreateVRayScene(plugin.Creator):
api_url = "{}/muster/show_login".format(
os.environ["OPENPYPE_WEBSERVER_URL"])
self.log.debug(api_url)
login_response = self._requests_get(api_url, timeout=1)
login_response = requests_get(api_url, timeout=1)
if login_response.status_code != 200:
self.log.error("Cannot show login form to Muster")
raise CreatorError("Cannot show login form to Muster")
def _requests_post(self, *args, **kwargs):
"""Wrap request post method.
Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
variable is found. This is useful when Deadline or Muster server are
running with self-signed certificates and their certificate is not
added to trusted certificates on client machines.
Warning:
Disabling SSL certificate validation is defeating one line
of defense SSL is providing and it is not recommended.
"""
if "verify" not in kwargs:
kwargs["verify"] = (
False if os.getenv("OPENPYPE_DONT_VERIFY_SSL", True) else True
) # noqa
return requests.post(*args, **kwargs)
def _requests_get(self, *args, **kwargs):
"""Wrap request get method.
Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
variable is found. This is useful when Deadline or Muster server are
running with self-signed certificates and their certificate is not
added to trusted certificates on client machines.
Warning:
Disabling SSL certificate validation is defeating one line
of defense SSL is providing and it is not recommended.
"""
if "verify" not in kwargs:
kwargs["verify"] = (
False if os.getenv("OPENPYPE_DONT_VERIFY_SSL", True) else True
) # noqa
return requests.get(*args, **kwargs)

View file

@ -4,7 +4,6 @@ from bson.objectid import ObjectId
from openpype.pipeline import (
InventoryAction,
get_representation_context,
get_representation_path_from_context,
)
from openpype.hosts.maya.api.lib import (
maintained_selection,
@ -80,10 +79,10 @@ class ImportModelRender(InventoryAction):
})
context = get_representation_context(look_repr["_id"])
maya_file = get_representation_path_from_context(context)
maya_file = self.filepath_from_context(context)
context = get_representation_context(json_repr["_id"])
json_file = get_representation_path_from_context(context)
json_file = self.filepath_from_context(context)
# Import the look file
with maintained_selection():

View file

@ -386,6 +386,12 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
overrides = self.parse_options(str(render_globals))
data.update(**overrides)
# get string values for pools
primary_pool = overrides["renderGlobals"]["Pool"]
secondary_pool = overrides["renderGlobals"].get("SecondaryPool")
data["primaryPool"] = primary_pool
data["secondaryPool"] = secondary_pool
# Define nice label
label = "{0} ({1})".format(expected_layer_name, data["asset"])
label += " [{0}-{1}]".format(

View file

@ -4,13 +4,13 @@ import getpass
import platform
import appdirs
import requests
from maya import cmds
from avalon import api
import pyblish.api
from openpype.lib import requests_post
from openpype.hosts.maya.api import lib
from openpype.api import get_system_settings
@ -184,7 +184,7 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
"select": "name"
}
api_entry = '/api/templates/list'
response = self._requests_post(
response = requests_post(
self.MUSTER_REST_URL + api_entry, params=params)
if response.status_code != 200:
self.log.error(
@ -235,7 +235,7 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
"name": "submit"
}
api_entry = '/api/queue/actions'
response = self._requests_post(
response = requests_post(
self.MUSTER_REST_URL + api_entry, params=params, json=payload)
if response.status_code != 200:
@ -549,16 +549,3 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
% (value, int(value))
)
def _requests_post(self, *args, **kwargs):
""" Wrapper for requests, disabling SSL certificate validation if
DONT_VERIFY_SSL environment variable is found. This is useful when
Deadline or Muster server are running with self-signed certificates
and their certificate is not added to trusted certificates on
client machines.
WARNING: disabling SSL certificate validation is defeating one line
of defense SSL is providing and it is not recommended.
"""
if 'verify' not in kwargs:
kwargs['verify'] = False if os.getenv("OPENPYPE_DONT_VERIFY_SSL", True) else True # noqa
return requests.post(*args, **kwargs)

View file

@ -40,7 +40,14 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
# list when there are no actual cameras results in
# still an empty 'invalid' list
if len(cameras) < 1:
raise RuntimeError("No cameras in instance.")
if members:
# If there are members in the instance, return all of
# them as 'invalid' so the user can still select the offending nodes
cls.log.error("No cameras found in instance "
"members: {}".format(members))
return members
raise RuntimeError("No cameras found in empty instance.")
# non-camera shapes
valid_shapes = cmds.ls(shapes, type=('camera', 'locator'), long=True)

View file

@ -2,9 +2,9 @@ import os
import json
import appdirs
import requests
import pyblish.api
from openpype.lib import requests_get
from openpype.plugin import contextplugin_should_run
import openpype.hosts.maya.api.action
@ -51,7 +51,7 @@ class ValidateMusterConnection(pyblish.api.ContextPlugin):
'authToken': self._token
}
api_entry = '/api/pools/list'
response = self._requests_get(
response = requests_get(
MUSTER_REST_URL + api_entry, params=params)
assert response.status_code == 200, "invalid response from server"
assert response.json()['ResponseData'], "invalid data in response"
@ -88,35 +88,7 @@ class ValidateMusterConnection(pyblish.api.ContextPlugin):
api_url = "{}/muster/show_login".format(
os.environ["OPENPYPE_WEBSERVER_URL"])
cls.log.debug(api_url)
response = cls._requests_get(api_url, timeout=1)
response = requests_get(api_url, timeout=1)
if response.status_code != 200:
cls.log.error('Cannot show login form to Muster')
raise Exception('Cannot show login form to Muster')
def _requests_post(self, *args, **kwargs):
""" Wrapper for requests, disabling SSL certificate validation if
DONT_VERIFY_SSL environment variable is found. This is useful when
Deadline or Muster server are running with self-signed certificates
and their certificate is not added to trusted certificates on
client machines.
WARNING: disabling SSL certificate validation is defeating one line
of defense SSL is providing and it is not recommended.
"""
if 'verify' not in kwargs:
kwargs['verify'] = False if os.getenv("OPENPYPE_DONT_VERIFY_SSL", True) else True # noqa
return requests.post(*args, **kwargs)
def _requests_get(self, *args, **kwargs):
""" Wrapper for requests, disabling SSL certificate validation if
DONT_VERIFY_SSL environment variable is found. This is useful when
Deadline or Muster server are running with self-signed certificates
and their certificate is not added to trusted certificates on
client machines.
WARNING: disabling SSL certificate validation is defeating one line
of defense SSL is providing and it is not recommended.
"""
if 'verify' not in kwargs:
kwargs['verify'] = False if os.getenv("OPENPYPE_DONT_VERIFY_SSL", True) else True # noqa
return requests.get(*args, **kwargs)

View file

@ -1,5 +1,5 @@
from openpype.pipeline import InventoryAction
from openpype.hosts.nuke.api.commands import viewer_update_and_undo_stop
from openpype.hosts.nuke.api.command import viewer_update_and_undo_stop
class SelectContainers(InventoryAction):

View file

@ -14,7 +14,7 @@ from openpype.hosts.nuke.api.lib import (
get_avalon_knob_data,
set_avalon_knob_data
)
from openpype.hosts.nuke.api.commands import viewer_update_and_undo_stop
from openpype.hosts.nuke.api.command import viewer_update_and_undo_stop
from openpype.hosts.nuke.api import containerise, update_container

View file

@ -1,6 +1,9 @@
import os
import nuke
import copy
import pyblish.api
import openpype
from openpype.hosts.nuke.api.lib import maintained_selection
@ -18,6 +21,13 @@ class ExtractSlateFrame(openpype.api.Extractor):
families = ["slate"]
hosts = ["nuke"]
# Settings values
# - can be extended by other attributes from node in the future
key_value_mapping = {
"f_submission_note": [True, "{comment}"],
"f_submitting_for": [True, "{intent[value]}"],
"f_vfx_scope_of_work": [False, ""]
}
def process(self, instance):
if hasattr(self, "viewer_lut_raw"):
@ -129,9 +139,7 @@ class ExtractSlateFrame(openpype.api.Extractor):
for node in temporary_nodes:
nuke.delete(node)
def get_view_process_node(self):
# Select only the target node
if nuke.selectedNodes():
[n.setSelected(False) for n in nuke.selectedNodes()]
@ -162,13 +170,56 @@ class ExtractSlateFrame(openpype.api.Extractor):
return
comment = instance.context.data.get("comment")
intent_value = instance.context.data.get("intent")
if intent_value and isinstance(intent_value, dict):
intent_value = intent_value.get("value")
intent = instance.context.data.get("intent")
if not isinstance(intent, dict):
intent = {
"label": intent,
"value": intent
}
try:
node["f_submission_note"].setValue(comment)
node["f_submitting_for"].setValue(intent_value or "")
except NameError:
return
instance.data.pop("slateNode")
fill_data = copy.deepcopy(instance.data["anatomyData"])
fill_data.update({
"custom": copy.deepcopy(
instance.data.get("customData") or {}
),
"comment": comment,
"intent": intent
})
for key, value in self.key_value_mapping.items():
enabled, template = value
if not enabled:
self.log.debug("Key \"{}\" is disabled".format(key))
continue
try:
value = template.format(**fill_data)
except ValueError:
self.log.warning(
"Couldn't fill template \"{}\" with data: {}".format(
template, fill_data
),
exc_info=True
)
continue
except KeyError:
self.log.warning(
(
"Template contains unknown key."
" Template \"{}\" Data: {}"
).format(template, fill_data),
exc_info=True
)
continue
try:
node[key].setValue(value)
self.log.info("Change key \"{}\" to value \"{}\"".format(
key, value
))
except NameError:
self.log.warning((
"Failed to set value \"{}\" on node attribute \"{}\""
).format(value, key))
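A small illustration of how the templates in `key_value_mapping` are expected to resolve against `fill_data` (values below are hypothetical):

fill_data = {
    "comment": "Fixed flicker on the fg element",
    "intent": {"label": "WIP", "value": "wip"},
    "custom": {},
}
print("{comment}".format(**fill_data))        # -> Fixed flicker on the fg element
print("{intent[value]}".format(**fill_data))  # -> wip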

View file

@ -2,7 +2,6 @@ import os
import qargparse
from openpype.pipeline import get_representation_path_from_context
from openpype.hosts.photoshop import api as photoshop
from openpype.hosts.photoshop.api import get_unique_layer_name
@ -63,7 +62,7 @@ class ImageFromSequenceLoader(photoshop.PhotoshopLoader):
"""
files = []
for context in repre_contexts:
fname = get_representation_path_from_context(context)
fname = cls.filepath_from_context(context)
_, file_extension = os.path.splitext(fname)
for file_name in os.listdir(os.path.dirname(fname)):

View file

@ -0,0 +1,13 @@
import pyblish.api
class CollectSAAppName(pyblish.api.ContextPlugin):
"""Collect app name and label."""
label = "Collect App Name/Label"
order = pyblish.api.CollectorOrder - 0.5
hosts = ["standalonepublisher"]
def process(self, context):
context.data["appName"] = "standalone publisher"
context.data["appLabel"] = "Standalone publisher"

View file

@ -0,0 +1,13 @@
import pyblish.api
class CollectTrayPublisherAppName(pyblish.api.ContextPlugin):
"""Collect app name and label."""
label = "Collect App Name/Label"
order = pyblish.api.CollectorOrder - 0.5
hosts = ["traypublisher"]
def process(self, context):
context.data["appName"] = "tray publisher"
context.data["appLabel"] = "Tray publisher"

View file

@ -1,6 +1,6 @@
import collections
import qargparse
from avalon.pipeline import get_representation_context
from openpype.pipeline import get_representation_context
from openpype.hosts.tvpaint.api import lib, pipeline, plugin

Binary file not shown.

View file

@ -1,5 +1,8 @@
import os
import signal
import time
import tempfile
import shutil
import asyncio
from openpype.hosts.tvpaint.api.communication_server import (
@ -36,8 +39,28 @@ class TVPaintWorkerCommunicator(BaseCommunicator):
super()._start_webserver()
def _open_init_file(self):
"""Open init TVPaint file.
Opening the file triggers a dialog about a missing audio file path,
which must be closed once and is ignored for the rest of the
running process.
"""
current_dir = os.path.dirname(os.path.abspath(__file__))
init_filepath = os.path.join(current_dir, "init_file.tvpp")
with tempfile.NamedTemporaryFile(
mode="w", prefix="a_tvp_", suffix=".tvpp"
) as tmp_file:
tmp_filepath = tmp_file.name.replace("\\", "/")
shutil.copy(init_filepath, tmp_filepath)
george_script = "tv_LoadProject '\"'\"{}\"'\"'".format(tmp_filepath)
self.execute_george_through_file(george_script)
self.execute_george("tv_projectclose")
os.remove(tmp_filepath)
def _on_client_connect(self, *args, **kwargs):
super()._on_client_connect(*args, **kwargs)
self._open_init_file()
# Register as "ready to work" worker
self._worker_connection.register_as_worker()

View file

@ -4,7 +4,6 @@ import logging
from typing import List
import pyblish.api
from avalon import api
from openpype.pipeline import (
register_loader_plugin_path,
@ -76,30 +75,6 @@ def _register_events():
pass
class Creator(LegacyCreator):
hosts = ["unreal"]
asset_types = []
def process(self):
nodes = list()
with unreal.ScopedEditorTransaction("OpenPype Creating Instance"):
if (self.options or {}).get("useSelection"):
self.log.info("setting ...")
print("settings ...")
nodes = unreal.EditorUtilityLibrary.get_selected_assets()
asset_paths = [a.get_path_name() for a in nodes]
self.name = move_assets_to_path(
"/Game", self.name, asset_paths
)
instance = create_publish_instance("/Game", self.name)
imprint(instance, self.data)
return instance
def ls():
"""List all containers.

View file

@ -2,13 +2,11 @@ import unreal
from unreal import EditorAssetLibrary as eal
from unreal import EditorLevelLibrary as ell
from openpype.hosts.unreal.api.plugin import Creator
from avalon.unreal import (
instantiate,
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api.pipeline import instantiate
class CreateCamera(Creator):
class CreateCamera(plugin.Creator):
"""Layout output for character rigs"""
name = "layoutMain"

View file

@ -1,12 +1,10 @@
# -*- coding: utf-8 -*-
from unreal import EditorLevelLibrary as ell
from openpype.hosts.unreal.api.plugin import Creator
from avalon.unreal import (
instantiate,
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api.pipeline import instantiate
class CreateLayout(Creator):
class CreateLayout(plugin.Creator):
"""Layout output for character rigs."""
name = "layoutMain"

View file

@ -1,11 +1,10 @@
# -*- coding: utf-8 -*-
"""Create look in Unreal."""
import unreal # noqa
from openpype.hosts.unreal.api.plugin import Creator
from openpype.hosts.unreal.api import pipeline
from openpype.hosts.unreal.api import pipeline, plugin
class CreateLook(Creator):
class CreateLook(plugin.Creator):
"""Shader connections defining shape look."""
name = "unrealLook"

View file

@ -1,13 +1,13 @@
# -*- coding: utf-8 -*-
"""Create Static Meshes as FBX geometry."""
import unreal # noqa
from openpype.hosts.unreal.api.plugin import Creator
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api.pipeline import (
instantiate,
)
class CreateStaticMeshFBX(Creator):
class CreateStaticMeshFBX(plugin.Creator):
"""Static FBX geometry."""
name = "unrealStaticMeshMain"

View file

@ -108,15 +108,18 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
instance.data["representations"] = self._get_single_repre(
task_dir, task_data["files"], tags
)
file_url = os.path.join(task_dir, task_data["files"][0])
no_of_frames = self._get_number_of_frames(file_url)
if no_of_frames:
if family != 'workfile':
file_url = os.path.join(task_dir, task_data["files"][0])
try:
frame_end = int(frame_start) + math.ceil(no_of_frames)
instance.data["frameEnd"] = math.ceil(frame_end) - 1
self.log.debug("frameEnd:: {}".format(
instance.data["frameEnd"]))
except ValueError:
no_of_frames = self._get_number_of_frames(file_url)
if no_of_frames:
frame_end = int(frame_start) + \
math.ceil(no_of_frames)
frame_end = math.ceil(frame_end) - 1
instance.data["frameEnd"] = frame_end
self.log.debug("frameEnd:: {}".format(
instance.data["frameEnd"]))
except Exception:
self.log.warning("Unable to count frames "
"duration {}".format(no_of_frames))
@ -209,7 +212,6 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
msg = "No family found for combination of " +\
"task_type: {}, is_sequence:{}, extension: {}".format(
task_type, is_sequence, extension)
found_family = "render"
assert found_family, msg
return (found_family,

View file

@ -221,6 +221,12 @@ from .openpype_version import (
is_current_version_higher_than_expected
)
from .connections import (
requests_get,
requests_post
)
terminal = Terminal
__all__ = [
@ -390,4 +396,7 @@ __all__ = [
"is_running_from_build",
"is_running_staging",
"is_current_version_studio_latest",
"requests_get",
"requests_post"
]

View file

@ -13,7 +13,8 @@ import six
from openpype.settings import (
get_system_settings,
get_project_settings
get_project_settings,
get_local_settings
)
from openpype.settings.constants import (
METADATA_KEYS,
@ -1272,6 +1273,9 @@ class EnvironmentPrepData(dict):
if data.get("env") is None:
data["env"] = os.environ.copy()
if "system_settings" not in data:
data["system_settings"] = get_system_settings()
super(EnvironmentPrepData, self).__init__(data)
@ -1395,8 +1399,27 @@ def prepare_app_environments(data, env_group=None, implementation_envs=True):
app = data["app"]
log = data["log"]
source_env = data["env"].copy()
_add_python_version_paths(app, data["env"], log)
_add_python_version_paths(app, source_env, log)
# Use environments from local settings
filtered_local_envs = {}
system_settings = data["system_settings"]
whitelist_envs = system_settings["general"].get("local_env_white_list")
if whitelist_envs:
local_settings = get_local_settings()
local_envs = local_settings.get("environments") or {}
filtered_local_envs = {
key: value
for key, value in local_envs.items()
if key in whitelist_envs
}
# Apply local environment variables for already existing values
for key, value in filtered_local_envs.items():
if key in source_env:
source_env[key] = value
# `added_env_keys` is used for debugging purposes
added_env_keys = {app.group.name, app.name}
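A minimal sketch of the whitelist filtering above, assuming hypothetical settings values:

whitelist_envs = {"OCIO", "MY_STUDIO_ROOT"}
local_envs = {"OCIO": "/configs/studio.ocio", "PATH": "/tmp/bin"}
filtered_local_envs = {
    key: value
    for key, value in local_envs.items()
    if key in whitelist_envs
}
print(filtered_local_envs)
# -> {"OCIO": "/configs/studio.ocio"}; "PATH" is dropped because it is
# not whitelisted in system settings.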
@ -1441,10 +1464,19 @@ def prepare_app_environments(data, env_group=None, implementation_envs=True):
# Choose right platform
tool_env = parse_environments(_env_values, env_group)
# Apply local environment variables
# - must happen for every tool env before merging because the values
# may be used during the merge
for key, value in filtered_local_envs.items():
if key in tool_env:
tool_env[key] = value
# Merge dictionaries
env_values = _merge_env(tool_env, env_values)
merged_env = _merge_env(env_values, data["env"])
merged_env = _merge_env(env_values, source_env)
loaded_env = acre.compute(merged_env, cleanup=False)
final_env = None
@ -1464,7 +1496,7 @@ def prepare_app_environments(data, env_group=None, implementation_envs=True):
if final_env is None:
final_env = loaded_env
keys_to_remove = set(data["env"].keys()) - set(final_env.keys())
keys_to_remove = set(source_env.keys()) - set(final_env.keys())
# Update env
data["env"].update(final_env)
@ -1611,7 +1643,6 @@ def _prepare_last_workfile(data, workdir):
result will be stored.
workdir (str): Path to folder where workfiles should be stored.
"""
import avalon.api
from openpype.pipeline import HOST_WORKFILE_EXTENSIONS
log = data["log"]

View file

@ -0,0 +1,38 @@
import requests
import os
def requests_post(*args, **kwargs):
"""Wrap request post method.
Disabling SSL certificate validation if ``OPENPYPE_DONT_VERIFY_SSL`` environment
variable is found. This is useful when Deadline or Muster server are
running with self-signed certificates and their certificate is not
added to trusted certificates on client machines.
Warning:
Disabling SSL certificate validation is defeating one line
of defense SSL is providing and it is not recommended.
"""
if "verify" not in kwargs:
kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
return requests.post(*args, **kwargs)
def requests_get(*args, **kwargs):
"""Wrap request get method.
Disabling SSL certificate validation if ``OPENPYPE_DONT_VERIFY_SSL`` environment
variable is found. This is useful when Deadline or Muster server are
running with self-signed certificates and their certificate is not
added to trusted certificates on client machines.
Warning:
Disabling SSL certificate validation is defeating one line
of defense SSL is providing and it is not recommended.
"""
if "verify" not in kwargs:
kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
return requests.get(*args, **kwargs)
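A hedged usage sketch of the new central wrappers; the URL is hypothetical, the import matches the exports added to `openpype.lib` above:

import os

from openpype.lib import requests_get

# With OPENPYPE_DONT_VERIFY_SSL unset, certificates are verified as usual;
# any non-empty value disables verification (see the warning above).
os.environ["OPENPYPE_DONT_VERIFY_SSL"] = "1"
response = requests_get(
    "https://deadline.example.com:8082/api/pools?NamesOnly=true"
)
print(response.ok)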

View file

@ -17,6 +17,9 @@ from .vendor_bin_utils import (
# Max length of string that is supported by ffmpeg
MAX_FFMPEG_STRING_LEN = 8196
# Not allowed symbols in attributes for ffmpeg
NOT_ALLOWED_FFMPEG_CHARS = ("\"", )
# OIIO known xml tags
STRING_TAGS = {
"format"
@ -367,11 +370,15 @@ def should_convert_for_ffmpeg(src_filepath):
return None
for attr_value in input_info["attribs"].values():
if (
isinstance(attr_value, str)
and len(attr_value) > MAX_FFMPEG_STRING_LEN
):
if not isinstance(attr_value, str):
continue
if len(attr_value) > MAX_FFMPEG_STRING_LEN:
return True
for char in NOT_ALLOWED_FFMPEG_CHARS:
if char in attr_value:
return True
return False
@ -422,7 +429,12 @@ def convert_for_ffmpeg(
compression = "none"
# Prepare subprocess arguments
oiio_cmd = [get_oiio_tools_path()]
oiio_cmd = [
get_oiio_tools_path(),
# Don't add any additional attributes
"--nosoftwareattrib",
]
# Add input compression if available
if compression:
oiio_cmd.extend(["--compression", compression])
@ -458,23 +470,33 @@ def convert_for_ffmpeg(
"--frames", "{}-{}".format(input_frame_start, input_frame_end)
])
ignore_attr_changes_added = False
for attr_name, attr_value in input_info["attribs"].items():
if not isinstance(attr_value, str):
continue
# Remove attributes that have string value longer than allowed length
# for ffmpeg
# for ffmpeg or when they contain disallowed symbols
erase_reason = "Missing reason"
erase_attribute = False
if len(attr_value) > MAX_FFMPEG_STRING_LEN:
if not ignore_attr_changes_added:
# Attribute changes won't be added to the attributes themselves
ignore_attr_changes_added = True
oiio_cmd.append("--sansattrib")
erase_reason = "has too long value ({} chars).".format(
len(attr_value)
)
if erase_attribute:
for char in NOT_ALLOWED_FFMPEG_CHARS:
if char in attr_value:
erase_attribute = True
erase_reason = (
"contains unsupported character \"{}\"."
).format(char)
break
if erase_attribute:
# Set attribute to empty string
logger.info((
"Removed attribute \"{}\" from metadata"
" because has too long value ({} chars)."
).format(attr_name, len(attr_value)))
"Removed attribute \"{}\" from metadata because {}."
).format(attr_name, erase_reason))
oiio_cmd.extend(["--eraseattrib", attr_name])
# Add last argument - path to output

View file

@ -1,8 +1,19 @@
import os
import requests
import six
import sys
from openpype.lib import requests_get, PypeLogger
from openpype.modules import OpenPypeModule
from openpype_interfaces import IPluginPaths
class DeadlineWebserviceError(Exception):
"""
Exception to throw when connection to Deadline server fails.
"""
class DeadlineModule(OpenPypeModule, IPluginPaths):
name = "deadline"
@ -32,3 +43,35 @@ class DeadlineModule(OpenPypeModule, IPluginPaths):
return {
"publish": [os.path.join(current_dir, "plugins", "publish")]
}
@staticmethod
def get_deadline_pools(webservice, log=None):
# type: (str, Logger) -> list
"""Get pools from Deadline.
Args:
webservice (str): Server url.
log (Logger): Logger to use; created on demand when not passed.
Returns:
list: Pools.
Raises:
DeadlineWebserviceError: If Deadline webservice is unreachable.
"""
if not log:
log = PypeLogger.get_logger(__name__)
argument = "{}/api/pools?NamesOnly=true".format(webservice)
try:
response = requests_get(argument)
except requests.exceptions.ConnectionError as exc:
msg = 'Cannot connect to Deadline web service {}'.format(webservice)
log.error(msg)
six.reraise(
DeadlineWebserviceError,
DeadlineWebserviceError('{} - {}'.format(msg, exc)),
sys.exc_info()[2])
if not response.ok:
log.warning("No pools retrieved")
return []
return response.json()
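Assuming a reachable webservice, usage of the new static helper might look like this (URL and pool names are hypothetical; the import path matches the one used by the validator below):

from openpype.modules.deadline.deadline_module import (
    DeadlineModule,
    DeadlineWebserviceError,
)

try:
    pools = DeadlineModule.get_deadline_pools("http://localhost:8082")
except DeadlineWebserviceError:
    pools = []
print(pools)  # e.g. ["none", "cpu", "gpu"]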

View file

@ -11,7 +11,7 @@ import pyblish.api
class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin):
"""Collect Deadline Webservice URL from instance."""
order = pyblish.api.CollectorOrder + 0.02
order = pyblish.api.CollectorOrder + 0.415
label = "Deadline Webservice from the Instance"
families = ["rendering"]

View file

@ -6,7 +6,7 @@ import pyblish.api
class CollectDefaultDeadlineServer(pyblish.api.ContextPlugin):
"""Collect default Deadline Webservice URL."""
order = pyblish.api.CollectorOrder + 0.01
order = pyblish.api.CollectorOrder + 0.410
label = "Default Deadline Webservice"
pass_mongo_url = False

View file

@ -0,0 +1,23 @@
# -*- coding: utf-8 -*-
"""Collect Deadline pools. Choose default one from Settings
"""
import pyblish.api
class CollectDeadlinePools(pyblish.api.InstancePlugin):
"""Collect pools from instance if present, from Setting otherwise."""
order = pyblish.api.CollectorOrder + 0.420
label = "Collect Deadline Pools"
families = ["rendering", "render.farm", "renderFarm", "renderlayer"]
primary_pool = None
secondary_pool = None
def process(self, instance):
if not instance.data.get("primaryPool"):
instance.data["primaryPool"] = self.primary_pool or "none"
if not instance.data.get("secondaryPool"):
instance.data["secondaryPool"] = self.secondary_pool or "none"

View file

@ -0,0 +1,31 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
<error id="main">
<title>Scene setting</title>
<description>
## Invalid Deadline pools found
Configured pools don't match what is set in Deadline.
{invalid_value_str}
### How to repair?
If your instance had Deadline pools set on creation, remove or
change them.
Otherwise, ask an admin to change them in Settings.
Available Deadline pools: {pools_str}.
</description>
<detail>
### __Detailed Info__
This error is shown when a configured pool no longer exists on Deadline.
It can happen when republishing an old workfile that was created with
previous Deadline pools,
or when someone changed the pools on the Deadline side but didn't update
the OpenPype Settings.
</detail>
</error>
</root>

View file

@ -37,8 +37,6 @@ class AfterEffectsSubmitDeadline(
priority = 50
chunk_size = 1000000
primary_pool = None
secondary_pool = None
group = None
department = None
multiprocess = True
@ -62,8 +60,8 @@ class AfterEffectsSubmitDeadline(
dln_job_info.Frames = frame_range
dln_job_info.Priority = self.priority
dln_job_info.Pool = self.primary_pool
dln_job_info.SecondaryPool = self.secondary_pool
dln_job_info.Pool = self._instance.data.get("primaryPool")
dln_job_info.SecondaryPool = self._instance.data.get("secondaryPool")
dln_job_info.Group = self.group
dln_job_info.Department = self.department
dln_job_info.ChunkSize = self.chunk_size

View file

@ -241,8 +241,6 @@ class HarmonySubmitDeadline(
optional = True
use_published = False
primary_pool = ""
secondary_pool = ""
priority = 50
chunk_size = 1000000
group = "none"
@ -259,8 +257,8 @@ class HarmonySubmitDeadline(
# for now, get those from presets. Later on it should be
# configurable in Harmony UI directly.
job_info.Priority = self.priority
job_info.Pool = self.primary_pool
job_info.SecondaryPool = self.secondary_pool
job_info.Pool = self._instance.data.get("primaryPool")
job_info.SecondaryPool = self._instance.data.get("secondaryPool")
job_info.ChunkSize = self.chunk_size
job_info.BatchName = os.path.basename(self._instance.data["source"])
job_info.Department = self.department

View file

@ -7,7 +7,7 @@ from avalon import api
import pyblish.api
import hou
# import hou ???
class HoudiniSubmitRenderDeadline(pyblish.api.InstancePlugin):
@ -71,7 +71,8 @@ class HoudiniSubmitRenderDeadline(pyblish.api.InstancePlugin):
"UserName": deadline_user,
"Plugin": "Houdini",
"Pool": "houdini_redshift", # todo: remove hardcoded pool
"Pool": instance.data.get("primaryPool"),
"secondaryPool": instance.data.get("secondaryPool"),
"Frames": frames,
"ChunkSize": instance.data.get("chunkSize", 10),

View file

@ -35,6 +35,7 @@ from maya import cmds
from avalon import api
import pyblish.api
from openpype.lib import requests_post
from openpype.hosts.maya.api import lib
# Documentation for keys available at:
@ -700,7 +701,7 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
tiles_count = instance.data.get("tilesX") * instance.data.get("tilesY") # noqa: E501
for tile_job in frame_payloads:
response = self._requests_post(url, json=tile_job)
response = requests_post(url, json=tile_job)
if not response.ok:
raise Exception(response.text)
@ -763,7 +764,7 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
job_idx, len(assembly_payloads)
))
self.log.debug(json.dumps(ass_job, indent=4, sort_keys=True))
response = self._requests_post(url, json=ass_job)
response = requests_post(url, json=ass_job)
if not response.ok:
raise Exception(response.text)
@ -781,7 +782,7 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
# E.g. http://192.168.0.1:8082/api/jobs
url = "{}/api/jobs".format(self.deadline_url)
response = self._requests_post(url, json=payload)
response = requests_post(url, json=payload)
if not response.ok:
raise Exception(response.text)
instance.data["deadlineSubmissionJob"] = response.json()
@ -989,7 +990,7 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
self.log.info("Submitting ass export job.")
url = "{}/api/jobs".format(self.deadline_url)
response = self._requests_post(url, json=payload)
response = requests_post(url, json=payload)
if not response.ok:
self.log.error("Submition failed!")
self.log.error(response.status_code)
@ -1013,44 +1014,6 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
% (value, int(value))
)
def _requests_post(self, *args, **kwargs):
"""Wrap request post method.
Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
variable is found. This is useful when Deadline or Muster server are
running with self-signed certificates and their certificate is not
added to trusted certificates on client machines.
Warning:
Disabling SSL certificate validation is defeating one line
of defense SSL is providing and it is not recommended.
"""
if 'verify' not in kwargs:
kwargs['verify'] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
# add 10sec timeout before bailing out
kwargs['timeout'] = 10
return requests.post(*args, **kwargs)
def _requests_get(self, *args, **kwargs):
"""Wrap request get method.
Disabling SSL certificate validation if ``DONT_VERIFY_SSL`` environment
variable is found. This is useful when Deadline or Muster server are
running with self-signed certificates and their certificate is not
added to trusted certificates on client machines.
Warning:
Disabling SSL certificate validation is defeating one line
of defense SSL is providing and it is not recommended.
"""
if 'verify' not in kwargs:
kwargs['verify'] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
# add 10sec timeout before bailing out
kwargs['timeout'] = 10
return requests.get(*args, **kwargs)
def format_vray_output_filename(self, filename, template, dir=False):
"""Format the expected output file of the Export job.

View file

@ -28,8 +28,6 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
priority = 50
chunk_size = 1
concurrent_tasks = 1
primary_pool = ""
secondary_pool = ""
group = ""
department = ""
limit_groups = {}
@ -187,8 +185,8 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
"Department": self.department,
"Pool": self.primary_pool,
"SecondaryPool": self.secondary_pool,
"Pool": instance.data.get("primaryPool"),
"SecondaryPool": instance.data.get("secondaryPool"),
"Group": self.group,
"Plugin": "Nuke",

View file

@ -8,6 +8,7 @@ from copy import copy, deepcopy
import requests
import clique
import openpype.api
from openpype.pipeline.farm.patterning import match_aov_pattern
from avalon import api, io
@ -107,7 +108,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
families = ["render.farm", "prerender.farm",
"renderlayer", "imagesequence", "vrayscene"]
aov_filter = {"maya": [r".*(?:[\._-])*([Bb]eauty)(?:[\.|_])*.*"],
aov_filter = {"maya": [r".*([Bb]eauty).*"],
"aftereffects": [r".*"], # for everything from AE
"harmony": [r".*"], # for everything from AE
"celaction": [r".*"]}
@ -129,7 +130,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
"OPENPYPE_PUBLISH_JOB"
]
# custom deadline atributes
# custom deadline attributes
deadline_department = ""
deadline_pool = ""
deadline_pool_secondary = ""
@ -259,8 +260,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
"Priority": priority,
"Group": self.deadline_group,
"Pool": self.deadline_pool,
"SecondaryPool": self.deadline_pool_secondary,
"Pool": instance.data.get("primaryPool"),
"SecondaryPool": instance.data.get("secondaryPool"),
"OutputDirectory0": output_dir
},
@ -449,12 +450,15 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
app = os.environ.get("AVALON_APP", "")
preview = False
if app in self.aov_filter.keys():
for aov_pattern in self.aov_filter[app]:
if re.match(aov_pattern, aov):
preview = True
break
if isinstance(col, list):
render_file_name = os.path.basename(col[0])
else:
render_file_name = os.path.basename(col)
aov_patterns = self.aov_filter
preview = match_aov_pattern(app, aov_patterns, render_file_name)
# toggle preview on if multipart is on
if instance_data.get("multipartExr"):
preview = True
@ -531,26 +535,18 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
# expected files state more explicitly what the review should be
# made from.
# - "review" tag is never added when it is set to 'False'
use_sequence_for_review = instance.get(
"useSequenceForReview", True
)
if use_sequence_for_review:
# if filtered aov name is found in filename, toggle it for
# preview video rendering
for app in self.aov_filter.keys():
if os.environ.get("AVALON_APP", "") == app:
# iterate all aov filters
for aov in self.aov_filter[app]:
if re.match(
aov,
list(collection)[0]
):
preview = True
break
if instance["useSequenceForReview"]:
# toggle preview on if multipart is on
if instance.get("multipartExr", False):
preview = True
else:
render_file_name = list(collection)[0]
host_name = os.environ.get("AVALON_APP", "")
# if filtered aov name is found in filename, toggle it for
# preview video rendering
preview = match_aov_pattern(
host_name, self.aov_filter, render_file_name
)
staging = os.path.dirname(list(collection)[0])
success, rootless_staging_dir = (
@ -737,7 +733,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
"resolutionHeight": data.get("resolutionHeight", 1080),
"multipartExr": data.get("multipartExr", False),
"jobBatchName": data.get("jobBatchName", ""),
"useSequenceForReview": data.get("useSequenceForReview")
"useSequenceForReview": data.get("useSequenceForReview", True)
}
if "prerender" in instance.data["families"]:

View file

@ -0,0 +1,48 @@
import pyblish.api
from openpype.pipeline import (
PublishXmlValidationError,
OptionalPyblishPluginMixin
)
from openpype.modules.deadline.deadline_module import DeadlineModule
class ValidateDeadlinePools(OptionalPyblishPluginMixin,
pyblish.api.InstancePlugin):
"""Validate primaryPool and secondaryPool on instance.
Values are set on the instance at creation time, or come from
Settings via CollectDeadlinePools.
"""
label = "Validate Deadline Pools"
order = pyblish.api.ValidatorOrder
families = ["rendering", "render.farm", "renderFarm", "renderlayer"]
optional = True
def process(self, instance):
# get default deadline webservice url from deadline module
deadline_url = instance.context.data["defaultDeadline"]
self.log.info("deadline_url::{}".format(deadline_url))
pools = DeadlineModule.get_deadline_pools(deadline_url, log=self.log)
self.log.info("pools::{}".format(pools))
formatting_data = {
"pools_str": ",".join(pools)
}
primary_pool = instance.data.get("primaryPool")
if primary_pool and primary_pool not in pools:
msg = "Configured primary '{}' not present on Deadline".format(
instance.data["primaryPool"])
formatting_data["invalid_value_str"] = msg
raise PublishXmlValidationError(self, msg,
formatting_data=formatting_data)
secondary_pool = instance.data.get("secondaryPool")
if secondary_pool and secondary_pool not in pools:
msg = "Configured secondary '{}' not present on Deadline".format(
instance.data["secondaryPool"])
formatting_data["invalid_value_str"] = msg
raise PublishXmlValidationError(self, msg,
formatting_data=formatting_data)
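A hypothetical failure, for illustration of how the pieces above fit together:

# With "cpu_old" configured while Deadline only knows ["cpu", "gpu"], this
# raises PublishXmlValidationError with
#   invalid_value_str = "Configured primary pool 'cpu_old' not present on Deadline"
#   pools_str = "cpu,gpu"
# which presumably fill the {invalid_value_str} and {pools_str} placeholders
# in the XML help file shown earlier.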

View file

@ -135,7 +135,7 @@ def query_custom_attributes(
output.extend(
session.query(
(
"select value, entity_id from {}"
"select value, entity_id, configuration_id from {}"
" where entity_id in ({}) and configuration_id in ({})"
).format(
table_name,

View file

@ -0,0 +1,148 @@
"""
Requires:
context > ftrackSession
context > ftrackEntity
instance > ftrackEntity
Provides:
instance > customData > ftrack
"""
import copy
import pyblish.api
class CollectFtrackCustomAttributeData(pyblish.api.ContextPlugin):
"""Collect custom attribute values and store them to customData.
Data are stored into each instance in context under
instance.data["customData"]["ftrack"].
Hierarchical attributes are not looked up properly; to support them,
the custom attribute values lookup must be extended.
"""
order = pyblish.api.CollectorOrder + 0.4992
label = "Collect Ftrack Custom Attribute Data"
# Names of custom attributes to look for
custom_attribute_keys = []
def process(self, context):
if not self.custom_attribute_keys:
self.log.info("Custom attribute keys are not set. Skipping")
return
ftrack_entities_by_id = {}
default_entity_id = None
context_entity = context.data.get("ftrackEntity")
if context_entity:
entity_id = context_entity["id"]
default_entity_id = entity_id
ftrack_entities_by_id[entity_id] = context_entity
instances_by_entity_id = {
default_entity_id: []
}
for instance in context:
entity = instance.data.get("ftrackEntity")
if not entity:
instances_by_entity_id[default_entity_id].append(instance)
continue
entity_id = entity["id"]
ftrack_entities_by_id[entity_id] = entity
if entity_id not in instances_by_entity_id:
instances_by_entity_id[entity_id] = []
instances_by_entity_id[entity_id].append(instance)
if not ftrack_entities_by_id:
self.log.info("Ftrack entities are not set. Skipping")
return
session = context.data["ftrackSession"]
custom_attr_key_by_id = self.query_attr_confs(session)
if not custom_attr_key_by_id:
self.log.info((
"Didn't find any of defined custom attributes {}"
).format(", ".join(self.custom_attribute_keys)))
return
entity_ids = list(instances_by_entity_id.keys())
values_by_entity_id = self.query_attr_values(
session, entity_ids, custom_attr_key_by_id
)
for entity_id, instances in instances_by_entity_id.items():
if entity_id not in values_by_entity_id:
# Use default empty values
entity_id = None
for instance in instances:
value = copy.deepcopy(values_by_entity_id[entity_id])
if "customData" not in instance.data:
instance.data["customData"] = {}
instance.data["customData"]["ftrack"] = value
instance_label = (
instance.data.get("label") or instance.data["name"]
)
self.log.debug((
"Added ftrack custom data to instance \"{}\": {}"
).format(instance_label, value))
def query_attr_values(self, session, entity_ids, custom_attr_key_by_id):
# Prepare values for query
entity_ids_joined = ",".join([
'"{}"'.format(entity_id)
for entity_id in entity_ids
])
conf_ids_joined = ",".join([
'"{}"'.format(conf_id)
for conf_id in custom_attr_key_by_id.keys()
])
# Query custom attribute values
value_items = session.query(
(
"select value, entity_id, configuration_id"
" from CustomAttributeValue"
" where entity_id in ({}) and configuration_id in ({})"
).format(
entity_ids_joined,
conf_ids_joined
)
).all()
# Prepare default value output per entity id
values_by_key = {
key: None for key in self.custom_attribute_keys
}
# Prepare all entity ids that were queried
values_by_entity_id = {
entity_id: copy.deepcopy(values_by_key)
for entity_id in entity_ids
}
# Add none entity id which is used as default value
values_by_entity_id[None] = copy.deepcopy(values_by_key)
# Go through queried data and store them
for item in value_items:
conf_id = item["configuration_id"]
conf_key = custom_attr_key_by_id[conf_id]
entity_id = item["entity_id"]
values_by_entity_id[entity_id][conf_key] = item["value"]
return values_by_entity_id
def query_attr_confs(self, session):
custom_attributes = set(self.custom_attribute_keys)
cust_attrs_query = (
"select id, key from CustomAttributeConfiguration"
" where key in ({})"
).format(", ".join(
["\"{}\"".format(attr_name) for attr_name in custom_attributes]
))
custom_attr_confs = session.query(cust_attrs_query).all()
return {
conf["id"]: conf["key"]
for conf in custom_attr_confs
}
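A sketch of the data shape this collector leaves on each instance, assuming hypothetical attribute names and values:

# Hypothetical result for custom_attribute_keys = ["intent", "notes"]:
ftrack_custom_data = {
    "intent": "wip",  # value queried for the instance's ftrack entity
    "notes": None,    # attribute configured but no value stored in ftrack
}
# The collector stores this under instance.data["customData"]["ftrack"].
print(ftrack_custom_data)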

View file

@ -6,7 +6,7 @@ import avalon.api
class CollectFtrackApi(pyblish.api.ContextPlugin):
""" Collects an ftrack session and the current task id. """
order = pyblish.api.CollectorOrder + 0.4999
order = pyblish.api.CollectorOrder + 0.4991
label = "Collect Ftrack Api"
def process(self, context):

View file

@ -25,7 +25,7 @@ class CollectFtrackFamily(pyblish.api.InstancePlugin):
based on 'families' (editorial drives it by presence of 'review')
"""
label = "Collect Ftrack Family"
order = pyblish.api.CollectorOrder + 0.4998
order = pyblish.api.CollectorOrder + 0.4990
profiles = None

View file

@ -1,3 +1,15 @@
"""Integrate components into ftrack
Requires:
context -> ftrackSession - connected ftrack.Session
instance -> ftrackComponentsList - list of components to integrate
Provides:
instance -> ftrackIntegratedAssetVersionsData
# legacy
instance -> ftrackIntegratedAssetVersions
"""
import os
import sys
import six
@ -54,6 +66,114 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
self.log.debug(query)
return query
def process(self, instance):
session = instance.context.data["ftrackSession"]
context = instance.context
component_list = instance.data.get("ftrackComponentsList")
if not component_list:
self.log.info(
"Instance don't have components to integrate to Ftrack."
" Skipping."
)
return
session = instance.context.data["ftrackSession"]
context = instance.context
parent_entity = None
default_asset_name = None
# If instance has set "ftrackEntity" or "ftrackTask" then use them from
# instance, even if they are set to None. If they are set to None,
# there is a reason for it (e.g. the instance has a different context).
if "ftrackEntity" in instance.data or "ftrackTask" in instance.data:
task_entity = instance.data.get("ftrackTask")
parent_entity = instance.data.get("ftrackEntity")
elif "ftrackEntity" in context.data or "ftrackTask" in context.data:
task_entity = context.data.get("ftrackTask")
parent_entity = context.data.get("ftrackEntity")
if task_entity:
default_asset_name = task_entity["name"]
parent_entity = task_entity["parent"]
if parent_entity is None:
self.log.info((
"Skipping ftrack integration. Instance \"{}\" does not"
" have specified ftrack entities."
).format(str(instance)))
return
if not default_asset_name:
default_asset_name = parent_entity["name"]
# Change status on task
self._set_task_status(instance, task_entity, session)
# Prepare AssetTypes
asset_types_by_short = self._ensure_asset_types_exists(
session, component_list
)
asset_versions_data_by_id = {}
used_asset_versions = []
# Iterate over components and publish
for data in component_list:
self.log.debug("data: {}".format(data))
# AssetType
asset_type_short = data["assettype_data"]["short"]
asset_type_entity = asset_types_by_short[asset_type_short]
# Asset
asset_data = data.get("asset_data") or {}
if "name" not in asset_data:
asset_data["name"] = default_asset_name
asset_entity = self._ensure_asset_exists(
session,
asset_data,
asset_type_entity["id"],
parent_entity["id"]
)
# Asset Version
asset_version_data = data.get("assetversion_data") or {}
asset_version_entity = self._ensure_asset_version_exists(
session, asset_version_data, asset_entity["id"], task_entity
)
# Component
self.create_component(session, asset_version_entity, data)
# Store asset version and component items that were integrated
version_id = asset_version_entity["id"]
if version_id not in asset_versions_data_by_id:
asset_versions_data_by_id[version_id] = {
"asset_version": asset_version_entity,
"component_items": []
}
asset_versions_data_by_id[version_id]["component_items"].append(
data
)
# Backwards compatibility
if asset_version_entity not in used_asset_versions:
used_asset_versions.append(asset_version_entity)
instance.data["ftrackIntegratedAssetVersionsData"] = (
asset_versions_data_by_id
)
# Backwards compatibility
asset_versions_key = "ftrackIntegratedAssetVersions"
if asset_versions_key not in instance.data:
instance.data[asset_versions_key] = []
for asset_version in used_asset_versions:
if asset_version not in instance.data[asset_versions_key]:
instance.data[asset_versions_key].append(asset_version)
def _set_task_status(self, instance, task_entity, session):
project_entity = instance.context.data.get("ftrackProject")
if not project_entity:
@ -100,190 +220,224 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
session._configure_locations()
six.reraise(tp, value, tb)
def process(self, instance):
session = instance.context.data["ftrackSession"]
context = instance.context
def _ensure_asset_types_exists(self, session, component_list):
"""Make sure that all AssetType entities exists for integration.
name = None
# If instance has set "ftrackEntity" or "ftrackTask" then use them from
# instance. Even if they are set to None. If they are set to None it
# has a reason. (like has different context)
if "ftrackEntity" in instance.data or "ftrackTask" in instance.data:
task = instance.data.get("ftrackTask")
parent = instance.data.get("ftrackEntity")
Returns:
dict: All asset types by short name.
"""
# Query existing asset types
asset_types = session.query("select id, short from AssetType").all()
# Store all existing short names
asset_type_shorts = {asset_type["short"] for asset_type in asset_types}
# Check which asset types are missing and store them
asset_type_names_by_missing_shorts = {}
default_short_name = "upload"
for data in component_list:
asset_type_data = data.get("assettype_data") or {}
asset_type_short = asset_type_data.get("short")
if not asset_type_short:
# Use default asset type name if not set and change the
# input data
asset_type_short = default_short_name
asset_type_data["short"] = asset_type_short
data["assettype_data"] = asset_type_data
elif "ftrackEntity" in context.data or "ftrackTask" in context.data:
task = context.data.get("ftrackTask")
parent = context.data.get("ftrackEntity")
if (
# Skip if short name exists
asset_type_short in asset_type_shorts
# Skip if short name was already added to missing types
# and asset type name is filled
# - if asset type name is missing then try to use the name from
# other data
or asset_type_names_by_missing_shorts.get(asset_type_short)
):
continue
if task:
parent = task["parent"]
name = task
elif parent:
name = parent["name"]
asset_type_names_by_missing_shorts[asset_type_short] = (
asset_type_data.get("name")
)
if not name:
self.log.info((
"Skipping ftrack integration. Instance \"{}\" does not"
" have specified ftrack entities."
).format(str(instance)))
return
# Create missing asset types if there are any
if asset_type_names_by_missing_shorts:
self.log.info("Creating asset types with short names: {}".format(
", ".join(asset_type_names_by_missing_shorts.keys())
))
for missing_short, type_name in (
asset_type_names_by_missing_shorts.items()
):
# Use short for name if name is not defined
if not type_name:
type_name = missing_short
# Use short name also for name
# - there is no other source for 'name'
session.create(
"AssetType",
{
"short": missing_short,
"name": type_name
}
)
info_msg = (
"Created new {entity_type} with data: {data}"
", metadata: {metadata}."
# Commit creation
session.commit()
# Requery asset types
asset_types = session.query(
"select id, short from AssetType"
).all()
return {asset_type["short"]: asset_type for asset_type in asset_types}
def _ensure_asset_exists(
self, session, asset_data, asset_type_id, parent_id
):
asset_name = asset_data["name"]
asset_entity = self._query_asset(
session, asset_name, asset_type_id, parent_id
)
if asset_entity is not None:
return asset_entity
asset_data = {
"name": asset_name,
"type_id": asset_type_id,
"context_id": parent_id
}
self.log.info("Created new Asset with data: {}.".format(asset_data))
session.create("Asset", asset_data)
session.commit()
return self._query_asset(session, asset_name, asset_type_id, parent_id)
def _query_asset(self, session, asset_name, asset_type_id, parent_id):
return session.query(
(
"select id from Asset"
" where name is \"{}\""
" and type_id is \"{}\""
" and context_id is \"{}\""
).format(asset_name, asset_type_id, parent_id)
).first()
def _ensure_asset_version_exists(
self, session, asset_version_data, asset_id, task_entity
):
task_id = None
if task_entity:
task_id = task_entity["id"]
# Try query asset version by criteria (asset id and version)
version = asset_version_data.get("version") or 0
asset_version_entity = self._query_asset_version(
session, version, asset_id
)
used_asset_versions = []
# Prepare comment value
comment = asset_version_data.get("comment") or ""
if asset_version_entity is not None:
changed = False
if comment != asset_version_entity["comment"]:
asset_version_entity["comment"] = comment
changed = True
self._set_task_status(instance, task, session)
if task_id != asset_version_entity["task_id"]:
asset_version_entity["task_id"] = task_id
changed = True
# Iterate over components and publish
for data in instance.data.get("ftrackComponentsList", []):
# AssetType
# Get existing entity.
assettype_data = {"short": "upload"}
assettype_data.update(data.get("assettype_data", {}))
self.log.debug("data: {}".format(data))
if changed:
session.commit()
assettype_entity = session.query(
self.query("AssetType", assettype_data)
).first()
# Create a new entity if none exits.
if not assettype_entity:
assettype_entity = session.create("AssetType", assettype_data)
self.log.debug("Created new AssetType with data: {}".format(
assettype_data
))
# Asset
# Get existing entity.
asset_data = {
"name": name,
"type": assettype_entity,
"parent": parent,
else:
new_asset_version_data = {
"version": version,
"asset_id": asset_id
}
asset_data.update(data.get("asset_data", {}))
if task_id:
new_asset_version_data["task_id"] = task_id
asset_entity = session.query(
self.query("Asset", asset_data)
).first()
if comment:
new_asset_version_data["comment"] = comment
self.log.info("asset entity: {}".format(asset_entity))
# Extracting metadata, and adding after entity creation. This is
# due to a ftrack_api bug where you can't add metadata on creation.
asset_metadata = asset_data.pop("metadata", {})
# Create a new entity if none exists.
if not asset_entity:
asset_entity = session.create("Asset", asset_data)
self.log.debug(
info_msg.format(
entity_type="Asset",
data=asset_data,
metadata=asset_metadata
)
)
try:
session.commit()
except Exception:
tp, value, tb = sys.exc_info()
session.rollback()
session._configure_locations()
six.reraise(tp, value, tb)
# Adding metadata
existing_asset_metadata = asset_entity["metadata"]
existing_asset_metadata.update(asset_metadata)
asset_entity["metadata"] = existing_asset_metadata
# AssetVersion
# Get existing entity.
assetversion_data = {
"version": 0,
"asset": asset_entity,
}
_assetversion_data = data.get("assetversion_data", {})
assetversion_cust_attrs = _assetversion_data.pop(
"custom_attributes", {}
self.log.info("Created new AssetVersion with data {}".format(
new_asset_version_data
))
session.create("AssetVersion", new_asset_version_data)
session.commit()
asset_version_entity = self._query_asset_version(
session, version, asset_id
)
asset_version_comment = _assetversion_data.pop(
"comment", None
)
assetversion_data.update(_assetversion_data)
assetversion_entity = session.query(
self.query("AssetVersion", assetversion_data)
).first()
# Extracting metadata, and adding after entity creation. This is
# due to a ftrack_api bug where you can't add metadata on creation.
assetversion_metadata = assetversion_data.pop("metadata", {})
if task:
assetversion_data['task'] = task
# Create a new entity if none exists.
if not assetversion_entity:
assetversion_entity = session.create(
"AssetVersion", assetversion_data
)
self.log.debug(
info_msg.format(
entity_type="AssetVersion",
data=assetversion_data,
metadata=assetversion_metadata
# Set custom attributes if there were any set
custom_attrs = asset_version_data.get("custom_attributes") or {}
for attr_key, attr_value in custom_attrs.items():
if attr_key in asset_version_entity["custom_attributes"]:
try:
asset_version_entity["custom_attributes"][attr_key] = (
attr_value
)
session.commit()
continue
except Exception:
session.rollback()
session._configure_locations()
self.log.warning(
(
"Custom Attrubute \"{0}\" is not available for"
" AssetVersion <{1}>. Can't set it's value to: \"{2}\""
).format(
attr_key, asset_version_entity["id"], str(attr_value)
)
try:
session.commit()
except Exception:
tp, value, tb = sys.exc_info()
session.rollback()
session._configure_locations()
six.reraise(tp, value, tb)
)
# Adding metadata
existing_assetversion_metadata = assetversion_entity["metadata"]
existing_assetversion_metadata.update(assetversion_metadata)
assetversion_entity["metadata"] = existing_assetversion_metadata
return asset_version_entity
# Add comment
if asset_version_comment:
assetversion_entity["comment"] = asset_version_comment
try:
session.commit()
except Exception:
session.rollback()
session._configure_locations()
self.log.warning((
"Comment was not possible to set for AssetVersion"
"\"{0}\". Can't set it's value to: \"{1}\""
).format(
assetversion_entity["id"], str(asset_version_comment)
))
def _query_asset_version(self, session, version, asset_id):
return session.query(
(
"select id, task_id, comment from AssetVersion"
" where version is \"{}\" and asset_id is \"{}\""
).format(version, asset_id)
).first()
# Adding Custom Attributes
for attr, val in assetversion_cust_attrs.items():
if attr in assetversion_entity["custom_attributes"]:
try:
assetversion_entity["custom_attributes"][attr] = val
session.commit()
continue
except Exception:
session.rollback()
session._configure_locations()
def create_component(self, session, asset_version_entity, data):
component_data = data.get("component_data") or {}
self.log.warning((
"Custom Attrubute \"{0}\""
" is not available for AssetVersion <{1}>."
" Can't set it's value to: \"{2}\""
).format(attr, assetversion_entity["id"], str(val)))
if not component_data.get("name"):
component_data["name"] = "main"
version_id = asset_version_entity["id"]
component_data["version_id"] = version_id
component_entity = session.query(
(
"select id, name from Component where name is \"{}\""
" and version_id is \"{}\""
).format(component_data["name"], version_id)
).first()
component_overwrite = data.get("component_overwrite", False)
location = data.get("component_location", session.pick_location())
# Overwrite existing component data if requested.
if component_entity and component_overwrite:
origin_location = session.query(
"Location where name is \"ftrack.origin\""
).one()
# Removing existing members from location
components = list(component_entity.get("members", []))
components += [component_entity]
for component in components:
for loc in component["component_locations"]:
if location["id"] == loc["location_id"]:
location.remove_component(
component, recursive=False
)
# Deleting existing members on component entity
for member in component_entity.get("members", []):
session.delete(member)
del member
# Have to commit the version and asset, because location can't
# determine the final location without.
try:
session.commit()
except Exception:
@ -292,175 +446,124 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
session._configure_locations()
six.reraise(tp, value, tb)
# Component
# Get existing entity.
component_data = {
"name": "main",
"version": assetversion_entity
}
component_data.update(data.get("component_data", {}))
# Reset members in memory
if "members" in component_entity.keys():
component_entity["members"] = []
component_entity = session.query(
self.query("Component", component_data)
).first()
# Add components to origin location
try:
collection = clique.parse(data["component_path"])
except ValueError:
# Assume it's a single file
# Changing file type
name, ext = os.path.splitext(data["component_path"])
component_entity["file_type"] = ext
component_overwrite = data.get("component_overwrite", False)
location = data.get("component_location", session.pick_location())
# Overwrite existing component data if requested.
if component_entity and component_overwrite:
origin_location = session.query(
"Location where name is \"ftrack.origin\""
).one()
# Removing existing members from location
components = list(component_entity.get("members", []))
components += [component_entity]
for component in components:
for loc in component["component_locations"]:
if location["id"] == loc["location_id"]:
location.remove_component(
component, recursive=False
)
# Deleting existing members on component entity
for member in component_entity.get("members", []):
session.delete(member)
del member
try:
session.commit()
except Exception:
tp, value, tb = sys.exc_info()
session.rollback()
session._configure_locations()
six.reraise(tp, value, tb)
# Reset members in memory
if "members" in component_entity.keys():
component_entity["members"] = []
# Add components to origin location
try:
collection = clique.parse(data["component_path"])
except ValueError:
# Assume it's a single file
# Changing file type
name, ext = os.path.splitext(data["component_path"])
component_entity["file_type"] = ext
origin_location.add_component(
component_entity, data["component_path"]
)
else:
# Changing file type
component_entity["file_type"] = collection.format("{tail}")
# Create member components for sequence.
for member_path in collection:
size = 0
try:
size = os.path.getsize(member_path)
except OSError:
pass
name = collection.match(member_path).group("index")
member_data = {
"name": name,
"container": component_entity,
"size": size,
"file_type": os.path.splitext(member_path)[-1]
}
component = session.create(
"FileComponent", member_data
)
origin_location.add_component(
component, member_path, recursive=False
)
component_entity["members"].append(component)
# Add components to location.
location.add_component(
component_entity, origin_location, recursive=True
)
data["component"] = component_entity
msg = "Overwriting Component with path: {0}, data: {1}, "
msg += "location: {2}"
self.log.info(
msg.format(
data["component_path"],
component_data,
location
)
)
# Extracting metadata, and adding after entity creation. This is
# due to a ftrack_api bug where you can't add metadata on creation.
component_metadata = component_data.pop("metadata", {})
# Create new component if none exists.
new_component = False
if not component_entity:
component_entity = assetversion_entity.create_component(
data["component_path"],
data=component_data,
location=location
)
data["component"] = component_entity
msg = "Created new Component with path: {0}, data: {1}"
msg += ", metadata: {2}, location: {3}"
self.log.info(
msg.format(
data["component_path"],
component_data,
component_metadata,
location
)
)
new_component = True
# Adding metadata
existing_component_metadata = component_entity["metadata"]
existing_component_metadata.update(component_metadata)
component_entity["metadata"] = existing_component_metadata
# if component_data['name'] == 'ftrackreview-mp4-mp4':
# assetversion_entity["thumbnail_id"]
# Setting assetversion thumbnail
if data.get("thumbnail", False):
assetversion_entity["thumbnail_id"] = component_entity["id"]
# Inform user about no changes to the database.
if (component_entity and not component_overwrite and
not new_component):
data["component"] = component_entity
self.log.info(
"Found existing component, and no request to overwrite. "
"Nothing has been changed."
origin_location.add_component(
component_entity, data["component_path"]
)
else:
# Commit changes.
try:
session.commit()
except Exception:
tp, value, tb = sys.exc_info()
session.rollback()
session._configure_locations()
six.reraise(tp, value, tb)
# Changing file type
component_entity["file_type"] = collection.format("{tail}")
if assetversion_entity not in used_asset_versions:
used_asset_versions.append(assetversion_entity)
# Create member components for sequence.
for member_path in collection:
asset_versions_key = "ftrackIntegratedAssetVersions"
if asset_versions_key not in instance.data:
instance.data[asset_versions_key] = []
size = 0
try:
size = os.path.getsize(member_path)
except OSError:
pass
for asset_version in used_asset_versions:
if asset_version not in instance.data[asset_versions_key]:
instance.data[asset_versions_key].append(asset_version)
name = collection.match(member_path).group("index")
member_data = {
"name": name,
"container": component_entity,
"size": size,
"file_type": os.path.splitext(member_path)[-1]
}
component = session.create(
"FileComponent", member_data
)
origin_location.add_component(
component, member_path, recursive=False
)
component_entity["members"].append(component)
# Add components to location.
location.add_component(
component_entity, origin_location, recursive=True
)
data["component"] = component_entity
self.log.info(
(
"Overwriting Component with path: {0}, data: {1},"
" location: {2}"
).format(
data["component_path"],
component_data,
location
)
)
# Extracting metadata, and adding after entity creation. This is
# due to a ftrack_api bug where you can't add metadata on creation.
component_metadata = component_data.pop("metadata", {})
# Create new component if none exists.
new_component = False
if not component_entity:
component_entity = asset_version_entity.create_component(
data["component_path"],
data=component_data,
location=location
)
data["component"] = component_entity
self.log.info(
(
"Created new Component with path: {0}, data: {1},"
" metadata: {2}, location: {3}"
).format(
data["component_path"],
component_data,
component_metadata,
location
)
)
new_component = True
# Adding metadata
existing_component_metadata = component_entity["metadata"]
existing_component_metadata.update(component_metadata)
component_entity["metadata"] = existing_component_metadata
# if component_data['name'] == 'ftrackreview-mp4-mp4':
# assetversion_entity["thumbnail_id"]
# Setting assetversion thumbnail
if data.get("thumbnail"):
asset_version_entity["thumbnail_id"] = component_entity["id"]
# Inform user about no changes to the database.
if (
component_entity
and not component_overwrite
and not new_component
):
data["component"] = component_entity
self.log.info(
"Found existing component, and no request to overwrite. "
"Nothing has been changed."
)
else:
# Commit changes.
try:
session.commit()
except Exception:
tp, value, tb = sys.exc_info()
session.rollback()
session._configure_locations()
six.reraise(tp, value, tb)


@ -0,0 +1,84 @@
"""
Requires:
context > comment
context > ftrackSession
instance > ftrackIntegratedAssetVersionsData
"""
import sys
import six
import pyblish.api
class IntegrateFtrackDescription(pyblish.api.InstancePlugin):
"""Add description to AssetVersions in Ftrack."""
# Must be after integrate asset new
order = pyblish.api.IntegratorOrder + 0.4999
label = "Integrate Ftrack description"
families = ["ftrack"]
optional = True
# Can be set in settings:
# - Allows `intent` and `comment` keys
description_template = "{comment}"
def process(self, instance):
# Check if there are any integrated AssetVersion entities
asset_versions_key = "ftrackIntegratedAssetVersionsData"
asset_versions_data_by_id = instance.data.get(asset_versions_key)
if not asset_versions_data_by_id:
self.log.info("There are any integrated AssetVersions")
return
comment = (instance.context.data.get("comment") or "").strip()
if not comment:
self.log.info("Comment is not set.")
else:
self.log.debug("Comment is set to `{}`".format(comment))
session = instance.context.data["ftrackSession"]
intent = instance.context.data.get("intent")
intent_label = None
if intent and isinstance(intent, dict):
intent_val = intent.get("value")
intent_label = intent.get("label")
else:
intent_val = intent
if not intent_label:
intent_label = intent_val or ""
# if intent label is set then format comment
# - it is possible that intent_label is equal to "" (empty string)
if intent_label:
self.log.debug(
"Intent label is set to `{}`.".format(intent_label)
)
else:
self.log.debug("Intent is not set.")
for asset_version_data in asset_versions_data_by_id.values():
asset_version = asset_version_data["asset_version"]
# Format the description template for this AssetVersion
# - use a separate variable so the source comment is not
# overwritten between iterations
description = self.description_template.format(**{
"intent": intent_label,
"comment": comment
})
asset_version["comment"] = description
try:
session.commit()
self.log.debug("Comment added to AssetVersion \"{}\"".format(
str(asset_version)
))
except Exception:
tp, value, tb = sys.exc_info()
session.rollback()
session._configure_locations()
six.reraise(tp, value, tb)
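For illustration, a sketch of how `description_template` behaves with the `intent` and `comment` keys (all values below are made up):

```
# Hypothetical values; the real ones come from publish context data
description_template = "{intent}: {comment}"
intent_label = "WIP"
comment = "Fixed flickering on the left arm"

# -> "WIP: Fixed flickering on the left arm"
description = description_template.format(intent=intent_label, comment=comment)
```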


@ -40,6 +40,13 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
def process(self, instance):
self.log.debug("instance {}".format(instance))
instance_repres = instance.data.get("representations")
if not instance_repres:
self.log.info((
"Skipping instance. Does not have any representations {}"
).format(str(instance)))
return
instance_version = instance.data.get("version")
if instance_version is None:
raise ValueError("Instance version not set")
@ -53,8 +60,12 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
if not asset_type and family_low in self.family_mapping:
asset_type = self.family_mapping[family_low]
self.log.debug(self.family_mapping)
self.log.debug(family_low)
if not asset_type:
asset_type = "upload"
self.log.debug(
"Family: {}\nMapping: {}".format(family_low, self.family_mapping)
)
# Ignore this instance if neither "ftrackFamily" or a family mapping is
# found.
@ -64,13 +75,6 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
).format(family))
return
instance_repres = instance.data.get("representations")
if not instance_repres:
self.log.info((
"Skipping instance. Does not have any representations {}"
).format(str(instance)))
return
# Prepare FPS
instance_fps = instance.data.get("fps")
if instance_fps is None:


@ -1,7 +1,17 @@
"""
Requires:
context > hostName
context > appName
context > appLabel
context > comment
context > ftrackSession
instance > ftrackIntegratedAssetVersionsData
"""
import sys
import json
import pyblish.api
import six
import pyblish.api
class IntegrateFtrackNote(pyblish.api.InstancePlugin):
@ -15,100 +25,52 @@ class IntegrateFtrackNote(pyblish.api.InstancePlugin):
# Can be set in presets:
# - Allows only `intent` and `comment` keys
note_template = None
# Backwards compatibility
note_with_intent_template = "{intent}: {comment}"
# - note label must exist in Ftrack
note_labels = []
def get_intent_label(self, session, intent_value):
if not intent_value:
return
intent_configurations = session.query(
"CustomAttributeConfiguration where key is intent"
).all()
if not intent_configurations:
return
intent_configuration = intent_configurations[0]
if len(intent_configurations) > 1:
self.log.warning((
"Found more than one `intent` custom attribute."
" Using first found."
))
config = intent_configuration.get("config")
if not config:
return
configuration = json.loads(config)
items = configuration.get("data")
if not items:
return
if sys.version_info[0] < 3:
string_type = basestring
else:
string_type = str
if isinstance(items, string_type):
items = json.loads(items)
intent_label = None
for item in items:
if item["value"] == intent_value:
intent_label = item["menu"]
break
return intent_label
def process(self, instance):
comment = (instance.context.data.get("comment") or "").strip()
# Check if there are any integrated AssetVersion entities
asset_versions_key = "ftrackIntegratedAssetVersionsData"
asset_versions_data_by_id = instance.data.get(asset_versions_key)
if not asset_versions_data_by_id:
self.log.info("There are any integrated AssetVersions")
return
context = instance.context
host_name = context.data["hostName"]
app_name = context.data["appName"]
app_label = context.data["appLabel"]
comment = (context.data.get("comment") or "").strip()
if not comment:
self.log.info("Comment is not set.")
return
else:
self.log.debug("Comment is set to `{}`".format(comment))
self.log.debug("Comment is set to `{}`".format(comment))
session = instance.context.data["ftrackSession"]
session = context.data["ftrackSession"]
intent = instance.context.data.get("intent")
intent_label = None
if intent and isinstance(intent, dict):
intent_val = intent.get("value")
intent_label = intent.get("label")
else:
intent_val = intent_label = intent
intent_val = intent
final_label = None
if intent_val:
final_label = self.get_intent_label(session, intent_val)
if final_label is None:
final_label = intent_label
if not intent_label:
intent_label = intent_val or ""
# if intent label is set then format comment
# - it is possible that intent_label is equal to "" (empty string)
if final_label:
msg = "Intent label is set to `{}`.".format(final_label)
comment = self.note_with_intent_template.format(**{
"intent": final_label,
"comment": comment
})
elif intent_val:
msg = (
"Intent is set to `{}` and was not added"
" to comment because label is set to `{}`."
).format(intent_val, final_label)
if intent_label:
self.log.debug(
"Intent label is set to `{}`.".format(intent_label)
)
else:
msg = "Intent is not set."
self.log.debug(msg)
asset_versions_key = "ftrackIntegratedAssetVersions"
asset_versions = instance.data.get(asset_versions_key)
if not asset_versions:
self.log.info("There are any integrated AssetVersions")
return
self.log.debug("Intent is not set.")
user = session.query(
"User where username is \"{}\"".format(session.api_user)
@ -122,7 +84,7 @@ class IntegrateFtrackNote(pyblish.api.InstancePlugin):
labels = []
if self.note_labels:
all_labels = session.query("NoteLabel").all()
all_labels = session.query("select id, name from NoteLabel").all()
labels_by_low_name = {lab["name"].lower(): lab for lab in all_labels}
for _label in self.note_labels:
label = labels_by_low_name.get(_label.lower())
@ -134,7 +96,34 @@ class IntegrateFtrackNote(pyblish.api.InstancePlugin):
labels.append(label)
for asset_version in asset_versions:
for asset_version_data in asset_versions_data_by_id.values():
asset_version = asset_version_data["asset_version"]
component_items = asset_version_data["component_items"]
published_paths = set()
for component_item in component_items:
published_paths.add(component_item["component_path"])
# Backwards compatibility for older settings using
# attribute 'note_with_intent_template'
template = self.note_template
if template is None:
template = self.note_with_intent_template
format_data = {
"intent": intent_label,
"comment": comment,
"host_name": host_name,
"app_name": app_name,
"app_label": app_label,
"published_paths": "<br/>".join(sorted(published_paths)),
}
# Use a separate variable so the base comment is not
# overwritten between iterations
note_text = template.format(**format_data)
if not note_text:
self.log.info((
"Note for AssetVersion {} would be empty. Skipping."
"\nTemplate: {}\nData: {}"
).format(asset_version["id"], template, format_data))
continue
asset_version.create_note(note_text, author=user, labels=labels)
try:
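A sketch of how the note template formatting above plays out; the values are hypothetical, while in the plugin they come from context data and the integrated component items:

```
note_template = "{intent}: {comment} ({app_label})<br/>{published_paths}"
format_data = {
    "intent": "Final",
    "comment": "Approved grade",
    "host_name": "nuke",
    "app_name": "nuke/13-2",          # hypothetical application name
    "app_label": "Nuke 13.2",
    "published_paths": "<br/>".join(sorted({
        "/projects/demo/sh010/publish/render/v003/sh010_render_v003.mov",
    })),
}
note_text = note_template.format(**format_data)
# An empty result would be skipped, as in the loop above
```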


@ -17,6 +17,7 @@ class DropboxHandler(AbstractProvider):
self.active = False
self.site_name = site_name
self.presets = presets
self.dbx = None
if not self.presets:
log.info(
@ -24,6 +25,11 @@ class DropboxHandler(AbstractProvider):
)
return
if not self.presets["enabled"]:
log.debug("Sync Server: Site {} not enabled for {}.".
format(site_name, project_name))
return
token = self.presets.get("token", "")
if not token:
msg = "Sync Server: No access token for dropbox provider"
@ -44,16 +50,13 @@ class DropboxHandler(AbstractProvider):
log.info(msg)
return
self.dbx = None
if self.presets["enabled"]:
try:
self.dbx = self._get_service(
token, acting_as_member, team_folder_name
)
except Exception as e:
log.info("Could not establish dropbox object: {}".format(e))
return
try:
self.dbx = self._get_service(
token, acting_as_member, team_folder_name
)
except Exception as e:
log.info("Could not establish dropbox object: {}".format(e))
return
super(AbstractProvider, self).__init__()


@ -73,8 +73,28 @@ class GDriveHandler(AbstractProvider):
format(site_name))
return
cred_path = self.presets.get("credentials_url", {}).\
get(platform.system().lower()) or ''
if not self.presets["enabled"]:
log.debug("Sync Server: Site {} not enabled for {}.".
format(site_name, project_name))
return
current_platform = platform.system().lower()
cred_path = self.presets.get("credentials_url", {}). \
get(current_platform) or ''
if not cred_path:
msg = "Sync Server: Please, fill the credentials for gdrive "\
"provider for platform '{}' !".format(current_platform)
log.info(msg)
return
try:
cred_path = cred_path.format(**os.environ)
except KeyError as e:
log.info("Sync Server: The key(s) {} does not exist in the "
"environment variables".format(" ".join(e.args)))
return
if not os.path.exists(cred_path):
msg = "Sync Server: No credentials for gdrive provider " + \
"for '{}' on path '{}'!".format(site_name, cred_path)
@ -82,11 +102,10 @@ class GDriveHandler(AbstractProvider):
return
self.service = None
if self.presets["enabled"]:
self.service = self._get_gd_service(cred_path)
self.service = self._get_gd_service(cred_path)
self._tree = tree
self.active = True
self._tree = tree
self.active = True
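The credential path resolution added above relies on `str.format(**os.environ)`: a placeholder such as `{OPENPYPE_SECRETS}` in the configured path is replaced from the environment, and a missing key raises `KeyError`, which the handler logs and treats as a failed setup. A minimal sketch with a hypothetical variable name:

```
import os

# Hypothetical configured path with an environment placeholder
configured_path = "{OPENPYPE_SECRETS}/gdrive/credentials.json"

os.environ.setdefault("OPENPYPE_SECRETS", "/opt/secrets")
try:
    cred_path = configured_path.format(**os.environ)
except KeyError as exc:
    # Same failure mode the handler logs: key missing from environment
    cred_path = None
    print("Missing environment variable(s): {}".format(" ".join(exc.args)))
```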
def is_active(self):
"""


@ -848,6 +848,11 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
if self.enabled and sync_settings.get('enabled'):
sites.append(self.LOCAL_SITE)
active_site = sync_settings["config"]["active_site"]
# for Tray running background process
if active_site not in sites and active_site == get_local_site_id():
sites.append(active_site)
return sites
def tray_init(self):


@ -41,6 +41,7 @@ from .load import (
loaders_from_representation,
get_representation_path,
get_representation_context,
get_repres_contexts,
)
@ -113,6 +114,7 @@ __all__ = (
"loaders_from_representation",
"get_representation_path",
"get_representation_context",
"get_repres_contexts",
# --- Publish ---


@ -0,0 +1,24 @@
# -*- coding: utf-8 -*-
import re
def match_aov_pattern(host_name, aov_patterns, render_file_name):
"""Matching against a `AOV` pattern in the render files.
In order to match the AOV name we must compare
against the render filename string that we are
grabbing the render filename string from the collection
that we have grabbed from `exp_files`.
Args:
host_name (str): Host name.
aov_patterns (dict): AOV regex patterns from AOV filters, by host.
render_file_name (str): Incoming file name to match against.
Returns:
bool: Review state for rendered file (render_file_name).
"""
aov_pattern = aov_patterns.get(host_name, [])
if not aov_pattern:
return False
return any(re.match(p, render_file_name) for p in aov_pattern)
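A usage sketch with made-up patterns and file names:

```
# Illustrative AOV patterns per host
aov_patterns = {"maya": [r".*([Bb]eauty).*"]}

match_aov_pattern("maya", aov_patterns, "sh010_beauty.1001.exr")      # True
match_aov_pattern("maya", aov_patterns, "sh010_specular.1001.exr")    # False
match_aov_pattern("houdini", aov_patterns, "sh010_beauty.1001.exr")   # False
```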


@ -41,7 +41,8 @@ class LoaderPlugin(list):
def get_representations(cls):
return cls.representations
def filepath_from_context(self, context):
@classmethod
def filepath_from_context(cls, context):
return get_representation_path_from_context(context)
def load(self, context, name=None, namespace=None, options=None):


@ -18,20 +18,30 @@ class CollectHostName(pyblish.api.ContextPlugin):
def process(self, context):
host_name = context.data.get("hostName")
app_name = context.data.get("appName")
app_label = context.data.get("appLabel")
# Don't override value if is already set
if host_name:
if host_name and app_name and app_label:
return
# Use AVALON_APP first if available; it is the same as host name
# - only if it is not defined use AVALON_APP_NAME (e.g. on farm) and
# set it back to AVALON_APP env variable
host_name = os.environ.get("AVALON_APP")
# Use AVALON_APP to get host name if available
if not host_name:
host_name = os.environ.get("AVALON_APP")
# Use AVALON_APP_NAME to get full app name
if not app_name:
app_name = os.environ.get("AVALON_APP_NAME")
if app_name:
app_manager = ApplicationManager()
app = app_manager.applications.get(app_name)
if app:
# Fill missing values based on app full name
if (not host_name or not app_label) and app_name:
app_manager = ApplicationManager()
app = app_manager.applications.get(app_name)
if app:
if not host_name:
host_name = app.host_name
if not app_label:
app_label = app.full_label
context.data["hostName"] = host_name
context.data["appName"] = app_name
context.data["appLabel"] = app_label


@ -221,11 +221,17 @@ class ExtractBurnin(openpype.api.Extractor):
filled_anatomy = anatomy.format_all(burnin_data)
burnin_data["anatomy"] = filled_anatomy.get_solved()
# Add context data burnin_data.
burnin_data["custom"] = (
custom_data = copy.deepcopy(
instance.data.get("customData") or {}
)
# Backwards compatibility (since 2022/04/07)
custom_data.update(
instance.data.get("custom_burnin_data") or {}
)
# Add context data burnin_data.
burnin_data["custom"] = custom_data
# Add source camera name to burnin data
camera_name = repre.get("camera_name")
if camera_name:


@ -188,8 +188,7 @@ class ExtractReview(pyblish.api.InstancePlugin):
outputs_per_repres = self._get_outputs_per_representations(
instance, profile_outputs
)
fill_data = copy.deepcopy(instance.data["anatomyData"])
for repre, outputs in outputs_per_repres:
for repre, output_defs in outputs_per_repres:
# Check if input should be preconverted before processing
# Store original staging dir (it's value may change)
src_repre_staging_dir = repre["stagingDir"]
@ -241,126 +240,143 @@ class ExtractReview(pyblish.api.InstancePlugin):
self.log
)
for _output_def in outputs:
output_def = copy.deepcopy(_output_def)
# Make sure output definition has "tags" key
if "tags" not in output_def:
output_def["tags"] = []
if "burnins" not in output_def:
output_def["burnins"] = []
# Create copy of representation
new_repre = copy.deepcopy(repre)
# Make sure new representation has origin staging dir
# - this is because source representation may change
# its staging dir because of ffmpeg conversion
new_repre["stagingDir"] = src_repre_staging_dir
# Remove "delete" tag from new repre if there is
if "delete" in new_repre["tags"]:
new_repre["tags"].remove("delete")
# Add additional tags from output definition to representation
for tag in output_def["tags"]:
if tag not in new_repre["tags"]:
new_repre["tags"].append(tag)
# Add burnin link from output definition to representation
for burnin in output_def["burnins"]:
if burnin not in new_repre.get("burnins", []):
if not new_repre.get("burnins"):
new_repre["burnins"] = []
new_repre["burnins"].append(str(burnin))
self.log.debug(
"Linked burnins: `{}`".format(new_repre.get("burnins"))
try:
self._render_output_definitions(
instance, repre, src_repre_staging_dir, output_defs
)
self.log.debug(
"New representation tags: `{}`".format(
new_repre.get("tags"))
finally:
# Make sure temporary staging is cleaned up and representation
# has its original stagingDir set
if do_convert:
# Set staging dir of source representation back to previous
# value
repre["stagingDir"] = src_repre_staging_dir
if os.path.exists(new_staging_dir):
shutil.rmtree(new_staging_dir)
def _render_output_definitions(
self, instance, repre, src_repre_staging_dir, output_defs
):
fill_data = copy.deepcopy(instance.data["anatomyData"])
for _output_def in output_defs:
output_def = copy.deepcopy(_output_def)
# Make sure output definition has "tags" key
if "tags" not in output_def:
output_def["tags"] = []
if "burnins" not in output_def:
output_def["burnins"] = []
# Create copy of representation
new_repre = copy.deepcopy(repre)
# Make sure new representation has origin staging dir
# - this is because source representation may change
# its staging dir because of ffmpeg conversion
new_repre["stagingDir"] = src_repre_staging_dir
# Remove "delete" tag from new repre if there is
if "delete" in new_repre["tags"]:
new_repre["tags"].remove("delete")
# Add additional tags from output definition to representation
for tag in output_def["tags"]:
if tag not in new_repre["tags"]:
new_repre["tags"].append(tag)
# Add burnin link from output definition to representation
for burnin in output_def["burnins"]:
if burnin not in new_repre.get("burnins", []):
if not new_repre.get("burnins"):
new_repre["burnins"] = []
new_repre["burnins"].append(str(burnin))
self.log.debug(
"Linked burnins: `{}`".format(new_repre.get("burnins"))
)
self.log.debug(
"New representation tags: `{}`".format(
new_repre.get("tags"))
)
temp_data = self.prepare_temp_data(instance, repre, output_def)
files_to_clean = []
if temp_data["input_is_sequence"]:
self.log.info("Filling gaps in sequence.")
files_to_clean = self.fill_sequence_gaps(
temp_data["origin_repre"]["files"],
new_repre["stagingDir"],
temp_data["frame_start"],
temp_data["frame_end"])
# create or update outputName
output_name = new_repre.get("outputName", "")
output_ext = new_repre["ext"]
if output_name:
output_name += "_"
output_name += output_def["filename_suffix"]
if temp_data["without_handles"]:
output_name += "_noHandles"
# add outputName to anatomy format fill_data
fill_data.update({
"output": output_name,
"ext": output_ext
})
try: # temporary until oiiotool is supported cross platform
ffmpeg_args = self._ffmpeg_arguments(
output_def, instance, new_repre, temp_data, fill_data
)
temp_data = self.prepare_temp_data(
instance, repre, output_def)
files_to_clean = []
if temp_data["input_is_sequence"]:
self.log.info("Filling gaps in sequence.")
files_to_clean = self.fill_sequence_gaps(
temp_data["origin_repre"]["files"],
new_repre["stagingDir"],
temp_data["frame_start"],
temp_data["frame_end"])
# create or update outputName
output_name = new_repre.get("outputName", "")
output_ext = new_repre["ext"]
if output_name:
output_name += "_"
output_name += output_def["filename_suffix"]
if temp_data["without_handles"]:
output_name += "_noHandles"
# add outputName to anatomy format fill_data
fill_data.update({
"output": output_name,
"ext": output_ext
})
try: # temporary until oiiotool is supported cross platform
ffmpeg_args = self._ffmpeg_arguments(
output_def, instance, new_repre, temp_data, fill_data
)
except ZeroDivisionError:
# TODO recalculate width and height using OIIO before
# conversion
if 'exr' in temp_data["origin_repre"]["ext"]:
self.log.warning(
(
"Unsupported compression on input files."
" Skipping!!!"
),
exc_info=True
)
except ZeroDivisionError:
if 'exr' in temp_data["origin_repre"]["ext"]:
self.log.debug("Unsupported compression on input " +
"files. Skipping!!!")
return
raise NotImplementedError
return
raise NotImplementedError
subprcs_cmd = " ".join(ffmpeg_args)
subprcs_cmd = " ".join(ffmpeg_args)
# run subprocess
self.log.debug("Executing: {}".format(subprcs_cmd))
# run subprocess
self.log.debug("Executing: {}".format(subprcs_cmd))
openpype.api.run_subprocess(
subprcs_cmd, shell=True, logger=self.log
)
openpype.api.run_subprocess(
subprcs_cmd, shell=True, logger=self.log
)
# delete files added to fill gaps
if files_to_clean:
for f in files_to_clean:
os.unlink(f)
# delete files added to fill gaps
if files_to_clean:
for f in files_to_clean:
os.unlink(f)
new_repre.update({
"name": "{}_{}".format(output_name, output_ext),
"outputName": output_name,
"outputDef": output_def,
"frameStartFtrack": temp_data["output_frame_start"],
"frameEndFtrack": temp_data["output_frame_end"],
"ffmpeg_cmd": subprcs_cmd
})
new_repre.update({
"name": "{}_{}".format(output_name, output_ext),
"outputName": output_name,
"outputDef": output_def,
"frameStartFtrack": temp_data["output_frame_start"],
"frameEndFtrack": temp_data["output_frame_end"],
"ffmpeg_cmd": subprcs_cmd
})
# Force to pop these key if are in new repre
new_repre.pop("preview", None)
new_repre.pop("thumbnail", None)
if "clean_name" in new_repre.get("tags", []):
new_repre.pop("outputName")
# Force to pop these key if are in new repre
new_repre.pop("preview", None)
new_repre.pop("thumbnail", None)
if "clean_name" in new_repre.get("tags", []):
new_repre.pop("outputName")
# adding representation
self.log.debug(
"Adding new representation: {}".format(new_repre)
)
instance.data["representations"].append(new_repre)
# Cleanup temp staging dir after processing of output definitions
if do_convert:
temp_dir = repre["stagingDir"]
shutil.rmtree(temp_dir)
# Set staging dir of source representation back to previous
# value
repre["stagingDir"] = src_repre_staging_dir
# adding representation
self.log.debug(
"Adding new representation: {}".format(new_repre)
)
instance.data["representations"].append(new_repre)
def input_is_sequence(self, repre):
"""Deduce from representation data if input is sequence."""


@ -158,13 +158,15 @@ class ExtractReviewSlate(openpype.api.Extractor):
])
if use_legacy_code:
format_args = []
codec_args = repre["_profile"].get('codec', [])
output_args.extend(codec_args)
# preset's output data
output_args.extend(repre["_profile"].get('output', []))
else:
# Codecs are copied from source for whole input
codec_args = self._get_codec_args(repre)
format_args, codec_args = self._get_format_codec_args(repre)
output_args.extend(format_args)
output_args.extend(codec_args)
# make sure colors are correct
@ -266,8 +268,14 @@ class ExtractReviewSlate(openpype.api.Extractor):
"-safe", "0",
"-i", conc_text_path,
"-c", "copy",
output_path
]
# NOTE: Added because of OP Atom demuxers
# Add format arguments if there are any
# - keep format of output
if format_args:
concat_args.extend(format_args)
# Add final output path
concat_args.append(output_path)
# ffmpeg concat subprocess
self.log.debug(
@ -338,7 +346,7 @@ class ExtractReviewSlate(openpype.api.Extractor):
return vf_back
def _get_codec_args(self, repre):
def _get_format_codec_args(self, repre):
"""Detect possible codec arguments from representation."""
codec_args = []
@ -361,13 +369,9 @@ class ExtractReviewSlate(openpype.api.Extractor):
return codec_args
source_ffmpeg_cmd = repre.get("ffmpeg_cmd")
codec_args.extend(
get_ffmpeg_format_args(ffprobe_data, source_ffmpeg_cmd)
)
codec_args.extend(
get_ffmpeg_codec_args(
ffprobe_data, source_ffmpeg_cmd, logger=self.log
)
format_args = get_ffmpeg_format_args(ffprobe_data, source_ffmpeg_cmd)
codec_args = get_ffmpeg_codec_args(
ffprobe_data, source_ffmpeg_cmd, logger=self.log
)
return codec_args
return format_args, codec_args
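With the split above, callers receive format arguments and codec arguments separately, so the concat pass can reuse just the format arguments while copying streams. An illustrative sketch with hypothetical values:

```
# Hypothetical values standing in for probed format/codec arguments
format_args = ["-f", "mov"]
codec_args = ["-codec:v", "prores_ks", "-profile:v", "3"]

# First pass: render output with both format and codec arguments
output_args = list(format_args) + list(codec_args)

# Concat pass: copy streams, but keep the container format of the output
concat_args = [
    "-f", "concat", "-safe", "0",
    "-i", "/tmp/slate_concat.txt",  # hypothetical concat list path
    "-c", "copy",
]
concat_args.extend(format_args)
concat_args.append("/tmp/output_with_slate.mov")
```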


@ -2,8 +2,8 @@ import pyblish.api
from openpype.pipeline import PublishValidationError
class ValidateContainers(pyblish.api.InstancePlugin):
"""Validate existence of asset asset documents on instances.
class ValidateAssetDocs(pyblish.api.InstancePlugin):
"""Validate existence of asset documents on instances.
Without asset document it is not possible to publish the instance.
@ -22,10 +22,10 @@ class ValidateContainers(pyblish.api.InstancePlugin):
return
if instance.data.get("assetEntity"):
self.log.info("Instance have set asset document in it's data.")
self.log.info("Instance has set asset document in its data.")
else:
raise PublishValidationError((
"Instance \"{}\" don't have set asset"
" document which is needed for publishing."
"Instance \"{}\" doesn't have asset document "
"set which is needed for publishing."
).format(instance.data["name"]))


@ -4,6 +4,10 @@
"CollectDefaultDeadlineServer": {
"pass_mongo_url": false
},
"CollectDeadlinePools": {
"primary_pool": "",
"secondary_pool": ""
},
"ValidateExpectedFiles": {
"enabled": true,
"active": true,
@ -38,8 +42,6 @@
"priority": 50,
"chunk_size": 10,
"concurrent_tasks": 1,
"primary_pool": "",
"secondary_pool": "",
"group": "",
"department": "",
"use_gpu": true,
@ -54,8 +56,6 @@
"use_published": true,
"priority": 50,
"chunk_size": 10000,
"primary_pool": "",
"secondary_pool": "",
"group": "",
"department": ""
},
@ -66,8 +66,6 @@
"use_published": true,
"priority": 50,
"chunk_size": 10000,
"primary_pool": "",
"secondary_pool": "",
"group": "",
"department": "",
"multiprocess": true
@ -83,7 +81,7 @@
"skip_integration_repre_list": [],
"aov_filter": {
"maya": [
".+(?:\\.|_)([Bb]eauty)(?:\\.|_).*"
".*([Bb]eauty).*"
],
"nuke": [
".*"
@ -100,4 +98,4 @@
}
}
}
}
}
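The `aov_filter` change above loosens the Maya pattern: the old regex required "beauty" to be delimited by `.` or `_` on both sides, while the new one matches the substring anywhere in the name. With an illustrative file name:

```
import re

old_pattern = r".+(?:\.|_)([Bb]eauty)(?:\.|_).*"
new_pattern = r".*([Bb]eauty).*"

name = "sh010_beautyAux.1001.exr"
bool(re.match(old_pattern, name))  # False: "beauty" is not delimited
bool(re.match(new_pattern, name))  # True: substring match is enough
```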


@ -20,6 +20,37 @@
}
},
"publish": {
"CollectTimelineInstances": {
"xml_preset_attrs_from_comments": [
{
"name": "width",
"type": "number"
},
{
"name": "height",
"type": "number"
},
{
"name": "pixelRatio",
"type": "float"
},
{
"name": "resizeType",
"type": "string"
},
{
"name": "resizeFilter",
"type": "string"
}
],
"add_tasks": [
{
"name": "compositing",
"type": "Compositing",
"create_batch_group": true
}
]
},
"ExtractSubsetResources": {
"keep_original_representation": false,
"export_presets_mapping": {
@ -31,7 +62,9 @@
"ignore_comment_attrs": false,
"colorspace_out": "ACES - ACEScg",
"representation_add_range": true,
"representation_tags": []
"representation_tags": [],
"load_to_batch_group": true,
"batch_group_loader_name": "LoadClip"
}
}
}
@ -58,7 +91,29 @@
],
"reel_group_name": "OpenPype_Reels",
"reel_name": "Loaded",
"clip_name_template": "{asset}_{subset}_{representation}"
"clip_name_template": "{asset}_{subset}_{output}"
},
"LoadClipBatch": {
"enabled": true,
"families": [
"render2d",
"source",
"plate",
"render",
"review"
],
"representations": [
"exr",
"dpx",
"jpg",
"jpeg",
"png",
"h264",
"mov",
"mp4"
],
"reel_name": "OP_LoadedReel",
"clip_name_template": "{asset}_{subset}_{output}"
}
}
}


@ -352,11 +352,21 @@
}
]
},
"CollectFtrackCustomAttributeData": {
"enabled": false,
"custom_attribute_keys": []
},
"IntegrateFtrackNote": {
"enabled": true,
"note_with_intent_template": "{intent}: {comment}",
"note_template": "{intent}: {comment}",
"note_labels": []
},
"IntegrateFtrackDescription": {
"enabled": false,
"optional": true,
"active": true,
"description_template": "{comment}"
},
"ValidateFtrackAttributes": {
"enabled": false,
"ftrack_custom_attributes": {}


@ -201,7 +201,7 @@
"tasks": [],
"template_name": "simpleUnrealTexture"
},
{
{
"families": [
"staticMesh",
"skeletalMesh"


@ -160,7 +160,21 @@
}
},
"ExtractSlateFrame": {
"viewer_lut_raw": false
"viewer_lut_raw": false,
"key_value_mapping": {
"f_submission_note": [
true,
"{comment}"
],
"f_submitting_for": [
true,
"{intent[value]}"
],
"f_vfx_scope_of_work": [
false,
""
]
}
},
"IncrementScriptVersion": {
"enabled": true,


@ -12,6 +12,7 @@
"linux": [],
"darwin": []
},
"local_env_white_list": [],
"openpype_path": {
"windows": [],
"darwin": [],


@ -745,6 +745,7 @@ How output of the schema could look like on save:
### label
- add label with note or explanations
- it is possible to use html tags inside the label
- set `word_wrap` to `true`/`false` if you want to enable word wrapping in UI (default: `false`)
```
{


@ -30,6 +30,24 @@
}
]
},
{
"type": "dict",
"collapsible": true,
"key": "CollectDeadlinePools",
"label": "Default Deadline Pools",
"children": [
{
"type": "text",
"key": "primary_pool",
"label": "Primary Pool"
},
{
"type": "text",
"key": "secondary_pool",
"label": "Secondary Pool"
}
]
},
{
"type": "dict",
"collapsible": true,
@ -223,16 +241,6 @@
{
"type": "splitter"
},
{
"type": "text",
"key": "primary_pool",
"label": "Primary Pool"
},
{
"type": "text",
"key": "secondary_pool",
"label": "Secondary Pool"
},
{
"type": "text",
"key": "group",
@ -313,16 +321,6 @@
"key": "chunk_size",
"label": "Chunk Size"
},
{
"type": "text",
"key": "primary_pool",
"label": "Primary Pool"
},
{
"type": "text",
"key": "secondary_pool",
"label": "Secondary Pool"
},
{
"type": "text",
"key": "group",
@ -372,16 +370,6 @@
"key": "chunk_size",
"label": "Chunk Size"
},
{
"type": "text",
"key": "primary_pool",
"label": "Primary Pool"
},
{
"type": "text",
"key": "secondary_pool",
"label": "Secondary Pool"
},
{
"type": "text",
"key": "group",


@ -136,6 +136,87 @@
"key": "publish",
"label": "Publish plugins",
"children": [
{
"type": "dict",
"collapsible": true,
"key": "CollectTimelineInstances",
"label": "Collect Timeline Instances",
"is_group": true,
"children": [
{
"type": "collapsible-wrap",
"label": "XML presets attributes parsable from segment comments",
"collapsible": true,
"collapsed": true,
"children": [
{
"type": "list",
"key": "xml_preset_attrs_from_comments",
"object_type": {
"type": "dict",
"children": [
{
"type": "text",
"key": "name",
"label": "Attribute name"
},
{
"key": "type",
"label": "Attribute type",
"type": "enum",
"default": "number",
"enum_items": [
{
"number": "number"
},
{
"float": "float"
},
{
"string": "string"
}
]
}
]
}
}
]
},
{
"type": "collapsible-wrap",
"label": "Add tasks",
"collapsible": true,
"collapsed": true,
"children": [
{
"type": "list",
"key": "add_tasks",
"object_type": {
"type": "dict",
"children": [
{
"type": "text",
"key": "name",
"label": "Task name"
},
{
"key": "type",
"label": "Task type",
"multiselection": false,
"type": "task-types-enum"
},
{
"type": "boolean",
"key": "create_batch_group",
"label": "Create batch group"
}
]
}
}
]
}
]
},
{
"type": "dict",
"collapsible": true,
@ -221,6 +302,20 @@
"type": "text",
"multiline": false
}
},
{
"type": "separator"
},
{
"type": "boolean",
"key": "load_to_batch_group",
"label": "Load to batch group reel",
"default": false
},
{
"type": "text",
"key": "batch_group_loader_name",
"label": "Use loader name"
}
]
}
@ -281,6 +376,48 @@
"label": "Clip name template"
}
]
},
{
"type": "dict",
"collapsible": true,
"key": "LoadClipBatch",
"label": "Load as clip to current batch",
"checkbox_key": "enabled",
"children": [
{
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "list",
"key": "families",
"label": "Families",
"object_type": "text"
},
{
"type": "list",
"key": "representations",
"label": "Representations",
"object_type": "text"
},
{
"type": "separator"
},
{
"type": "text",
"key": "reel_name",
"label": "Reel name"
},
{
"type": "separator"
},
{
"type": "text",
"key": "clip_name_template",
"label": "Clip name template"
}
]
}
]
}


@ -725,6 +725,31 @@
}
]
},
{
"type": "dict",
"collapsible": true,
"checkbox_key": "enabled",
"key": "CollectFtrackCustomAttributeData",
"label": "Collect Custom Attribute Data",
"is_group": true,
"children": [
{
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "label",
"label": "Collect custom attributes from ftrack for ftrack entities that can be used in some templates during publishing."
},
{
"type": "list",
"key": "custom_attribute_keys",
"label": "Custom attribute keys",
"object_type": "text"
}
]
},
{
"type": "dict",
"collapsible": true,
@ -738,10 +763,15 @@
"key": "enabled",
"label": "Enabled"
},
{
"type": "label",
"label": "Template may contain formatting keys <b>intent</b>, <b>comment</b>, <b>host_name</b>, <b>app_name</b>, <b>app_label</b> and <b>published_paths</b>."
},
{
"type": "text",
"key": "note_with_intent_template",
"label": "Note with intent template"
"key": "note_template",
"label": "Note template",
"multiline": true
},
{
"type": "list",
@ -751,6 +781,44 @@
}
]
},
{
"type": "dict",
"collapsible": true,
"checkbox_key": "enabled",
"key": "IntegrateFtrackDescription",
"label": "Integrate Ftrack Description",
"is_group": true,
"children": [
{
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "label",
"label": "Add description to integrated AssetVersion."
},
{
"type": "boolean",
"key": "optional",
"label": "Optional"
},
{
"type": "boolean",
"key": "active",
"label": "Active"
},
{
"type": "label",
"label": "Template may contain formatting keys <b>intent</b> and <b>comment</b>."
},
{
"type": "text",
"key": "description_template",
"label": "Description template"
}
]
},
{
"type": "dict",
"collapsible": true,


@ -389,6 +389,59 @@
"type": "boolean",
"key": "viewer_lut_raw",
"label": "Viewer LUT raw"
},
{
"type": "separator"
},
{
"type": "label",
"label": "Fill specific slate node values with templates. Uncheck the checkbox to not change the value.",
"word_wrap": true
},
{
"type": "dict",
"key": "key_value_mapping",
"children": [
{
"type": "list-strict",
"key": "f_submission_note",
"label": "Submission Note:",
"object_types": [
{
"type": "boolean"
},
{
"type": "text"
}
]
},
{
"type": "list-strict",
"key": "f_submitting_for",
"label": "Submission For:",
"object_types": [
{
"type": "boolean"
},
{
"type": "text"
}
]
},
{
"type": "list-strict",
"key": "f_vfx_scope_of_work",
"label": "VFX Scope Of Work:",
"object_types": [
{
"type": "boolean"
},
{
"type": "text"
}
]
}
]
}
]
},


@ -110,6 +110,17 @@
{
"type": "splitter"
},
{
"type": "list",
"key": "local_env_white_list",
"label": "Local overrides of environment variable keys",
"tooltip": "Environment variable keys that can be changed per machine using Local settings UI.\nKey changes are applied only on applications and tools environments.",
"use_label_wrap": true,
"object_type": "text"
},
{
"type": "splitter"
},
{
"type": "collapsible-wrap",
"label": "OpenPype deployment control",


@ -1113,6 +1113,14 @@ def get_general_environments():
clear_metadata_from_settings(environments)
whitelist_envs = result["general"].get("local_env_white_list")
if whitelist_envs:
local_settings = get_local_settings()
local_envs = local_settings.get("environments") or {}
for key, value in local_envs.items():
if key in whitelist_envs and key in environments:
environments[key] = value
return environments
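A condensed, standalone sketch of the override logic above, with hypothetical settings and local values:

```
# Hypothetical data mirroring the merge above
environments = {"OCIO": "/studio/ocio/config.ocio", "FOO": "bar"}
whitelist_envs = ["OCIO"]
local_envs = {"OCIO": "/home/artist/custom.ocio", "FOO": "local"}

for key, value in local_envs.items():
    # Only whitelisted keys that already exist in the studio
    # environments may be overridden locally
    if key in whitelist_envs and key in environments:
        environments[key] = value

# environments["OCIO"] is now the local value; "FOO" keeps "bar"
```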


@ -9,6 +9,7 @@ LABEL_DISCARD_CHANGES = "Discard changes"
# TODO move to settings constants
LOCAL_GENERAL_KEY = "general"
LOCAL_PROJECTS_KEY = "projects"
LOCAL_ENV_KEY = "environments"
LOCAL_APPS_KEY = "applications"
# Roots key constant


@ -0,0 +1,93 @@
from Qt import QtWidgets
from openpype.tools.utils import PlaceholderLineEdit
class LocalEnvironmentsWidgets(QtWidgets.QWidget):
def __init__(self, system_settings_entity, parent):
super(LocalEnvironmentsWidgets, self).__init__(parent)
self._widgets_by_env_key = {}
self.system_settings_entity = system_settings_entity
content_widget = QtWidgets.QWidget(self)
content_layout = QtWidgets.QGridLayout(content_widget)
content_layout.setContentsMargins(0, 0, 0, 0)
layout = QtWidgets.QVBoxLayout(self)
layout.setContentsMargins(0, 0, 0, 0)
self._layout = layout
self._content_layout = content_layout
self._content_widget = content_widget
def _clear_layout(self, layout):
while layout.count() > 0:
item = layout.itemAt(0)
widget = item.widget()
layout.removeItem(item)
if widget is not None:
widget.setVisible(False)
widget.deleteLater()
def _reset_env_widgets(self):
self._clear_layout(self._content_layout)
self._clear_layout(self._layout)
content_widget = QtWidgets.QWidget(self)
content_layout = QtWidgets.QGridLayout(content_widget)
content_layout.setContentsMargins(0, 0, 0, 0)
white_list_entity = (
self.system_settings_entity["general"]["local_env_white_list"]
)
row = -1
for row, item in enumerate(white_list_entity):
key = item.value
label_widget = QtWidgets.QLabel(key, self)
input_widget = PlaceholderLineEdit(self)
input_widget.setPlaceholderText("< Keep studio value >")
content_layout.addWidget(label_widget, row, 0)
content_layout.addWidget(input_widget, row, 1)
self._widgets_by_env_key[key] = input_widget
if row < 0:
label_widget = QtWidgets.QLabel(
(
"Your studio does not allow to change"
" Environment variables locally."
),
self
)
content_layout.addWidget(label_widget, 0, 0)
content_layout.setColumnStretch(0, 1)
else:
content_layout.setColumnStretch(0, 0)
content_layout.setColumnStretch(1, 1)
self._layout.addWidget(content_widget, 1)
self._content_layout = content_layout
self._content_widget = content_widget
def update_local_settings(self, value):
if not value:
value = {}
self._reset_env_widgets()
for env_key, widget in self._widgets_by_env_key.items():
env_value = value.get(env_key) or ""
widget.setText(env_value)
def settings_value(self):
output = {}
for env_key, widget in self._widgets_by_env_key.items():
value = widget.text()
if value:
output[env_key] = value
if not output:
return None
return output
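Stripped of the Qt plumbing, `settings_value` reduces to "keep only non-empty overrides, return `None` when there are none". A plain-Python sketch of that aggregation:

```
# Hypothetical line-edit texts keyed by environment variable name
widget_texts_by_env_key = {"OCIO": "/home/artist/custom.ocio", "EXTRA": ""}

output = {
    key: value
    for key, value in widget_texts_by_env_key.items()
    if value  # empty text means "keep studio value"
}
settings_value = output or None
```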


@ -25,11 +25,13 @@ from .experimental_widget import (
LOCAL_EXPERIMENTAL_KEY
)
from .apps_widget import LocalApplicationsWidgets
from .environments_widget import LocalEnvironmentsWidgets
from .projects_widget import ProjectSettingsWidget
from .constants import (
LOCAL_GENERAL_KEY,
LOCAL_PROJECTS_KEY,
LOCAL_ENV_KEY,
LOCAL_APPS_KEY
)
@ -49,18 +51,20 @@ class LocalSettingsWidget(QtWidgets.QWidget):
self.pype_mongo_widget = None
self.general_widget = None
self.experimental_widget = None
self.envs_widget = None
self.apps_widget = None
self.projects_widget = None
self._create_pype_mongo_ui()
self._create_mongo_url_ui()
self._create_general_ui()
self._create_experimental_ui()
self._create_environments_ui()
self._create_app_ui()
self._create_project_ui()
self.main_layout.addStretch(1)
def _create_pype_mongo_ui(self):
def _create_mongo_url_ui(self):
pype_mongo_expand_widget = ExpandingWidget("OpenPype Mongo URL", self)
pype_mongo_content = QtWidgets.QWidget(self)
pype_mongo_layout = QtWidgets.QVBoxLayout(pype_mongo_content)
@ -110,6 +114,22 @@ class LocalSettingsWidget(QtWidgets.QWidget):
self.experimental_widget = experimental_widget
def _create_environments_ui(self):
envs_expand_widget = ExpandingWidget("Environments", self)
envs_content = QtWidgets.QWidget(self)
envs_layout = QtWidgets.QVBoxLayout(envs_content)
envs_layout.setContentsMargins(CHILD_OFFSET, 5, 0, 0)
envs_expand_widget.set_content_widget(envs_content)
envs_widget = LocalEnvironmentsWidgets(
self.system_settings, envs_content
)
envs_layout.addWidget(envs_widget)
self.main_layout.addWidget(envs_expand_widget)
self.envs_widget = envs_widget
def _create_app_ui(self):
# Applications
app_expand_widget = ExpandingWidget("Applications", self)
@ -154,6 +174,9 @@ class LocalSettingsWidget(QtWidgets.QWidget):
self.general_widget.update_local_settings(
value.get(LOCAL_GENERAL_KEY)
)
self.envs_widget.update_local_settings(
value.get(LOCAL_ENV_KEY)
)
self.app_widget.update_local_settings(
value.get(LOCAL_APPS_KEY)
)
@ -170,6 +193,10 @@ class LocalSettingsWidget(QtWidgets.QWidget):
if general_value:
output[LOCAL_GENERAL_KEY] = general_value
envs_value = self.envs_widget.settings_value()
if envs_value:
output[LOCAL_ENV_KEY] = envs_value
app_value = self.app_widget.settings_value()
if app_value:
output[LOCAL_APPS_KEY] = app_value


@ -567,7 +567,9 @@ class GUIWidget(BaseWidget):
def _create_label_ui(self):
label = self.entity["label"]
word_wrap = self.entity.schema_data.get("word_wrap", False)
label_widget = QtWidgets.QLabel(label, self)
label_widget.setWordWrap(word_wrap)
label_widget.setTextInteractionFlags(QtCore.Qt.TextBrowserInteraction)
label_widget.setObjectName("SettingsLabel")
label_widget.linkActivated.connect(self._on_link_activate)


@ -216,7 +216,7 @@ class SettingsCategoryWidget(QtWidgets.QWidget):
def create_ui(self):
self.modify_defaults_checkbox = None
conf_wrapper_widget = QtWidgets.QWidget(self)
conf_wrapper_widget = QtWidgets.QSplitter(self)
configurations_widget = QtWidgets.QWidget(conf_wrapper_widget)
# Breadcrumbs/Path widget
@ -294,10 +294,7 @@ class SettingsCategoryWidget(QtWidgets.QWidget):
configurations_layout.addWidget(scroll_widget, 1)
conf_wrapper_layout = QtWidgets.QHBoxLayout(conf_wrapper_widget)
conf_wrapper_layout.setContentsMargins(0, 0, 0, 0)
conf_wrapper_layout.setSpacing(0)
conf_wrapper_layout.addWidget(configurations_widget, 1)
conf_wrapper_widget.addWidget(configurations_widget)
main_layout = QtWidgets.QVBoxLayout(self)
main_layout.setContentsMargins(0, 0, 0, 0)
@ -327,7 +324,7 @@ class SettingsCategoryWidget(QtWidgets.QWidget):
self.breadcrumbs_model = None
self.refresh_btn = refresh_btn
self.conf_wrapper_layout = conf_wrapper_layout
self.conf_wrapper_widget = conf_wrapper_widget
self.main_layout = main_layout
self.ui_tweaks()
@ -818,7 +815,9 @@ class ProjectWidget(SettingsCategoryWidget):
project_list_widget = ProjectListWidget(self)
self.conf_wrapper_layout.insertWidget(0, project_list_widget, 0)
self.conf_wrapper_widget.insertWidget(0, project_list_widget)
self.conf_wrapper_widget.setStretchFactor(0, 0)
self.conf_wrapper_widget.setStretchFactor(1, 1)
project_list_widget.project_changed.connect(self._on_project_change)
project_list_widget.version_change_requested.connect(


@ -409,6 +409,7 @@ class FamilyConfigCache:
project_name = os.environ.get("AVALON_PROJECT")
asset_name = os.environ.get("AVALON_ASSET")
task_name = os.environ.get("AVALON_TASK")
host_name = os.environ.get("AVALON_APP")
if not all((project_name, asset_name, task_name)):
return
@ -422,15 +423,21 @@ class FamilyConfigCache:
["family_filter_profiles"]
)
if profiles:
asset_doc = self.dbcon.find_one(
# Make sure connection is installed
# - we access an attribute below that does not auto-install
self.dbcon.install()
database = getattr(self.dbcon, "database", None)
if database is None:
database = self.dbcon._database
asset_doc = database[project_name].find_one(
{"type": "asset", "name": asset_name},
{"data.tasks": True}
)
) or {}
tasks_info = asset_doc.get("data", {}).get("tasks") or {}
task_type = tasks_info.get(task_name, {}).get("type")
profiles_filter = {
"task_types": task_type,
"hosts": os.environ["AVALON_APP"]
"hosts": host_name
}
matching_item = filter_profiles(profiles, profiles_filter)
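`filter_profiles` picks the profile whose key filters match the given context values. A naive standalone stand-in (the real helper in `openpype.lib` also handles scoring, wildcards and logging; empty filter lists are assumed here to act as match-all):

```
def pick_profile(profiles, key_values):
    """Naive stand-in for openpype.lib.filter_profiles (illustrative)."""
    for profile in profiles:
        if all(
            not profile.get(key) or value in profile[key]
            for key, value in key_values.items()
        ):
            return profile
    return None


profiles = [
    {"task_types": ["Compositing"], "hosts": ["nuke"]},
    {"task_types": [], "hosts": []},  # empty lists act as wildcards
]
pick_profile(profiles, {"task_types": "Compositing", "hosts": "nuke"})
```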


@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
__version__ = "3.9.3-nightly.1"
__version__ = "3.9.4-nightly.1"


@ -1,6 +1,6 @@
[tool.poetry]
name = "OpenPype"
version = "3.9.3-nightly.1" # OpenPype
version = "3.9.4-nightly.1" # OpenPype
description = "Open VFX and Animation pipeline with support."
authors = ["OpenPype Team <info@openpype.io>"]
license = "MIT License"


@ -1,7 +0,0 @@
---
id: api
title: Pype API
sidebar_label: API
---
Work in progress


@ -1,17 +0,0 @@
---
id: artist_hosts
title: Hosts
sidebar_label: Hosts
---
## Maya
## Houdini
## Nuke
## Fusion
## Unreal
## System


@ -1,145 +0,0 @@
---
id: artist_hosts_nuke
title: Nuke
sidebar_label: Nuke
---
:::important
After Nuke starts it will automatically **Apply All Settings** for you. If you are sure the settings are wrong, contact your supervisor, who will set them correctly for you in the project database.
:::
:::note
The workflows are identical for both. We are supporting versions **`11.0`** and above.
:::
## OpenPype global tools
- [Set Context](artist_tools.md#set-context)
- [Work Files](artist_tools.md#workfiles)
- [Create](artist_tools.md#creator)
- [Load](artist_tools.md#loader)
- [Manage (Inventory)](artist_tools.md#inventory)
- [Publish](artist_tools.md#publisher)
- [Library Loader](artist_tools.md#library-loader)
## Nuke specific tools
<div class="row markdown">
<div class="col col--6 markdown">
### Set Frame Ranges
Use this feature in case you are not sure the frame range is correct.
##### Result
- setting Frame Range in script settings
- setting Frame Range in viewers (timeline)
</div>
<div class="col col--6 markdown">
![Set Frame Ranges](assets/nuke_setFrameRanges.png) <!-- picture needs to be changed -->
</div>
</div>
<figure>
![Set Frame Ranges Timeline](assets/nuke_setFrameRanges_timeline.png)
<figcaption>
1. limiting to Frame Range without handles
2. **Input** handle on start
3. **Output** handle on end
</figcaption>
</figure>
### Set Resolution
<div class="row markdown">
<div class="col col--6 markdown">
This menu item will set the correct resolution format for you, as defined by your production.
##### Result
- creates new item in formats with project name
- sets the new format as used
</div>
<div class="col col--6 markdown">
![Set Resolution](assets/nuke_setResolution.png) <!-- picture needs to be changed -->
</div>
</div>
### Set Colorspace
<div class="row markdown">
<div class="col col--6 markdown">
This menu item will set correct Colorspace definitions for you. All has to be configured by your production (Project coordinator).
##### Result
- set Colorspace in your script settings
- set preview LUT to your viewers
- set correct colorspace to all discovered Read nodes (following expression set in settings)
</div>
<div class="col col--6 markdown">
![Set Colorspace](assets/nuke_setColorspace.png) <!-- picture needs to be changed -->
</div>
</div>
### Apply All Settings
<div class="row markdown">
<div class="col col--6 markdown">
It is usually enough to use this option once in a while, just to make sure the workfile has the correct properties set.
##### Result
- set Frame Ranges
- set Colorspace
- set Resolution
</div>
<div class="col col--6 markdown">
![Apply All Settings](assets/nuke_applyAllSettings.png) <!-- picture needs to be changed -->
</div>
</div>
### Build Workfile
<div class="row markdown">
<div class="col col--6 markdown">
This tool will append all available subsets into the actual node graph. It will look into the database and get the last [versions](artist_concepts.md#version) of all available [subsets](artist_concepts.md#subset).
##### Result
- adds all last versions of subsets (rendered image sequences) as read nodes
- adds publishable write node as `renderMain` subset
</div>
<div class="col col--6 markdown">
![Build First Work File](assets/nuke_buildFirstWorkfile.png)
</div>
</div>


@ -161,7 +161,7 @@ Nuke OpenPype menu shows the current context
Launching Nuke with context stops your timer, and starts the clock on the shot and task you picked.
Openpype makes initial setup for your Nuke script. It is the same as running [Apply All Settings](artist_hosts_nuke.md#apply-all-settings) from the OpenPype menu.
Openpype makes initial setup for your Nuke script. It is the same as running [Apply All Settings](artist_hosts_nuke_tut.md#apply-all-settings) from the OpenPype menu.
- Reads frame range and resolution from Avalon database, sets it in Nuke Project Settings,
Creates Viewer node, sets its range and indicates handles by In and Out points.


@ -14,7 +14,7 @@ The main things you will need to run and build pype are:
- **Terminal** in your OS
- PowerShell 5.0+ (Windows)
- Bash (Linux)
- [**Python 3.7.8**](#python) or higher
- [**Python 3.7.9**](#python) or higher
- [**MongoDB**](#database)


@ -1,33 +0,0 @@
### Tools
Creator
Publisher
Loader
Scene Inventory
Look assigner
Workfiles
### Plugins
Deadline
Muster
Yeti
Arnold
Vray
Redshift
### Families
Model
Look
Rig
Animation
Cache
Camera
Assembly
MayaAscii (generic scene)
Setdress
RenderSetup
Review
arnoldStandin
vrayProxy
vrayScene
yetiCache
yetiRig
