diff --git a/CHANGELOG.md b/CHANGELOG.md
index f1e7d5d9e0..c216dd0595 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,8 +1,27 @@
# Changelog
+## [3.9.4-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)
+
+[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.3...HEAD)
+
+### 📖 Documentation
+
+- Documentation: Python requirements to 3.7.9 [\#3035](https://github.com/pypeclub/OpenPype/pull/3035)
+- Website Docs: Remove unused pages [\#2974](https://github.com/pypeclub/OpenPype/pull/2974)
+
+**🚀 Enhancements**
+
+- Resolve environment variable in google drive credential path [\#3008](https://github.com/pypeclub/OpenPype/pull/3008)
+
+**🐛 Bug fixes**
+
+- Ftrack: Integrate ftrack api fix [\#3044](https://github.com/pypeclub/OpenPype/pull/3044)
+- Webpublisher - removed wrong hardcoded family [\#3043](https://github.com/pypeclub/OpenPype/pull/3043)
+- Unreal: Creator import fixes [\#3040](https://github.com/pypeclub/OpenPype/pull/3040)
+
## [3.9.3](https://github.com/pypeclub/OpenPype/tree/3.9.3) (2022-04-07)
-[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.2...3.9.3)
+[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.9.3-nightly.2...3.9.3)
### 📖 Documentation
@@ -20,7 +39,6 @@
- Console Interpreter: Changed how console splitter size are reused on show [\#3016](https://github.com/pypeclub/OpenPype/pull/3016)
- Deadline: Use more suitable name for sequence review logic [\#3015](https://github.com/pypeclub/OpenPype/pull/3015)
- Nuke: add concurrency attr to deadline job [\#3005](https://github.com/pypeclub/OpenPype/pull/3005)
-- Photoshop: create image without instance [\#3001](https://github.com/pypeclub/OpenPype/pull/3001)
- Deadline: priority configurable in Maya jobs [\#2995](https://github.com/pypeclub/OpenPype/pull/2995)
- Workfiles tool: Save as published workfiles [\#2937](https://github.com/pypeclub/OpenPype/pull/2937)
@@ -32,10 +50,12 @@
- Harmony: Added creating subset name for workfile from template [\#3024](https://github.com/pypeclub/OpenPype/pull/3024)
- AfterEffects: Added creating subset name for workfile from template [\#3023](https://github.com/pypeclub/OpenPype/pull/3023)
- General: Add example addons to ignored [\#3022](https://github.com/pypeclub/OpenPype/pull/3022)
+- SiteSync: fix transitive alternate sites, fix dropdown in Local Settings [\#3018](https://github.com/pypeclub/OpenPype/pull/3018)
- Maya: Remove missing import [\#3017](https://github.com/pypeclub/OpenPype/pull/3017)
- Ftrack: multiple reviewable components [\#3012](https://github.com/pypeclub/OpenPype/pull/3012)
- Tray publisher: Fixes after code movement [\#3010](https://github.com/pypeclub/OpenPype/pull/3010)
- Nuke: fixing unicode type detection in effect loaders [\#3002](https://github.com/pypeclub/OpenPype/pull/3002)
+- Fix - remove doubled dot in workfile created from template [\#2998](https://github.com/pypeclub/OpenPype/pull/2998)
- Nuke: removing redundant Ftrack asset when farm publishing [\#2996](https://github.com/pypeclub/OpenPype/pull/2996)
**Merged pull requests:**
@@ -51,7 +71,6 @@
- Documentation: Added mention of adding My Drive as a root [\#2999](https://github.com/pypeclub/OpenPype/pull/2999)
- Docs: Added MongoDB requirements [\#2951](https://github.com/pypeclub/OpenPype/pull/2951)
-- Documentation: New publisher develop docs [\#2896](https://github.com/pypeclub/OpenPype/pull/2896)
**🆕 New features**
@@ -60,6 +79,7 @@
**🚀 Enhancements**
+- Photoshop: create image without instance [\#3001](https://github.com/pypeclub/OpenPype/pull/3001)
- TVPaint: Render scene family [\#3000](https://github.com/pypeclub/OpenPype/pull/3000)
- Nuke: ReviewDataMov Read RAW attribute [\#2985](https://github.com/pypeclub/OpenPype/pull/2985)
- General: `METADATA\_KEYS` constant as `frozenset` for optimal immutable lookup [\#2980](https://github.com/pypeclub/OpenPype/pull/2980)
@@ -70,13 +90,11 @@
- TVPaint: Extractor to convert PNG into EXR [\#2942](https://github.com/pypeclub/OpenPype/pull/2942)
- Workfiles: Open published workfiles [\#2925](https://github.com/pypeclub/OpenPype/pull/2925)
- General: Default modules loaded dynamically [\#2923](https://github.com/pypeclub/OpenPype/pull/2923)
-- Nuke: Add no-audio Tag [\#2911](https://github.com/pypeclub/OpenPype/pull/2911)
- Nuke: improving readability [\#2903](https://github.com/pypeclub/OpenPype/pull/2903)
**🐛 Bug fixes**
- Hosts: Remove path existence checks in 'add\_implementation\_envs' [\#3004](https://github.com/pypeclub/OpenPype/pull/3004)
-- Fix - remove doubled dot in workfile created from template [\#2998](https://github.com/pypeclub/OpenPype/pull/2998)
- PS: fix renaming subset incorrectly in PS [\#2991](https://github.com/pypeclub/OpenPype/pull/2991)
- Fix: Disable setuptools auto discovery [\#2990](https://github.com/pypeclub/OpenPype/pull/2990)
- AEL: fix opening existing workfile if no scene opened [\#2989](https://github.com/pypeclub/OpenPype/pull/2989)
@@ -120,21 +138,13 @@
**🚀 Enhancements**
+- Nuke: Add no-audio Tag [\#2911](https://github.com/pypeclub/OpenPype/pull/2911)
- General: Change how OPENPYPE\_DEBUG value is handled [\#2907](https://github.com/pypeclub/OpenPype/pull/2907)
-- nuke: imageio adding ocio config version 1.2 [\#2897](https://github.com/pypeclub/OpenPype/pull/2897)
-- Flame: support for comment with xml attribute overrides [\#2892](https://github.com/pypeclub/OpenPype/pull/2892)
**🐛 Bug fixes**
- General: Fix use of Anatomy roots [\#2904](https://github.com/pypeclub/OpenPype/pull/2904)
- Fixing gap detection in extract review [\#2902](https://github.com/pypeclub/OpenPype/pull/2902)
-- Pyblish Pype - ensure current state is correct when entering new group order [\#2899](https://github.com/pypeclub/OpenPype/pull/2899)
-- SceneInventory: Fix import of load function [\#2894](https://github.com/pypeclub/OpenPype/pull/2894)
-- Harmony - fixed creator issue [\#2891](https://github.com/pypeclub/OpenPype/pull/2891)
-
-**🔀 Refactored code**
-
-- General: Reduce style usage to OpenPype repository [\#2889](https://github.com/pypeclub/OpenPype/pull/2889)
## [3.9.0](https://github.com/pypeclub/OpenPype/tree/3.9.0) (2022-03-14)
diff --git a/openpype/hosts/flame/api/__init__.py b/openpype/hosts/flame/api/__init__.py
index f210c27f87..2c461e5f16 100644
--- a/openpype/hosts/flame/api/__init__.py
+++ b/openpype/hosts/flame/api/__init__.py
@@ -11,10 +11,8 @@ from .constants import (
from .lib import (
CTX,
FlameAppFramework,
- get_project_manager,
get_current_project,
get_current_sequence,
- create_bin,
create_segment_data_marker,
get_segment_data_marker,
set_segment_data_marker,
@@ -29,7 +27,10 @@ from .lib import (
get_frame_from_filename,
get_padding_from_filename,
maintained_object_duplication,
- get_clip_segment
+ maintained_temp_file_path,
+ get_clip_segment,
+ get_batch_group_from_desktop,
+ MediaInfoFile
)
from .utils import (
setup,
@@ -56,7 +57,6 @@ from .plugin import (
PublishableClip,
ClipLoader,
OpenClipSolver
-
)
from .workio import (
open_file,
@@ -71,6 +71,10 @@ from .render_utils import (
get_preset_path_by_xml_name,
modify_preset_file
)
+from .batch_utils import (
+ create_batch_group,
+ create_batch_group_conent
+)
__all__ = [
# constants
@@ -83,10 +87,8 @@ __all__ = [
# lib
"CTX",
"FlameAppFramework",
- "get_project_manager",
"get_current_project",
"get_current_sequence",
- "create_bin",
"create_segment_data_marker",
"get_segment_data_marker",
"set_segment_data_marker",
@@ -101,7 +103,10 @@ __all__ = [
"get_frame_from_filename",
"get_padding_from_filename",
"maintained_object_duplication",
+ "maintained_temp_file_path",
"get_clip_segment",
+ "get_batch_group_from_desktop",
+ "MediaInfoFile",
# pipeline
"install",
@@ -142,5 +147,9 @@ __all__ = [
# render utils
"export_clip",
"get_preset_path_by_xml_name",
- "modify_preset_file"
+ "modify_preset_file",
+
+ # batch utils
+ "create_batch_group",
+ "create_batch_group_conent"
]
diff --git a/openpype/hosts/flame/api/batch_utils.py b/openpype/hosts/flame/api/batch_utils.py
new file mode 100644
index 0000000000..9d419a4a90
--- /dev/null
+++ b/openpype/hosts/flame/api/batch_utils.py
@@ -0,0 +1,151 @@
+import flame
+
+
+def create_batch_group(
+ name,
+ frame_start,
+ frame_duration,
+ update_batch_group=None,
+ **kwargs
+):
+ """Create Batch Group in active project's Desktop
+
+ Args:
+ name (str): name of batch group to be created
+ frame_start (int): start frame of batch
+ frame_duration (int): duration of batch in frames
+ update_batch_group (PyBatch, optional): batch group to update
+
+ Return:
+ PyBatch: active flame batch group
+ """
+ # make sure some batch obj is present
+ batch_group = update_batch_group or flame.batch
+
+ schematic_reels = kwargs.get("shematic_reels") or ['LoadedReel1']
+ shelf_reels = kwargs.get("shelf_reels") or ['ShelfReel1']
+
+ handle_start = kwargs.get("handleStart") or 0
+ handle_end = kwargs.get("handleEnd") or 0
+
+ frame_start -= handle_start
+ frame_duration += handle_start + handle_end
+
+ if not update_batch_group:
+ # Create batch group with name, start_frame value, duration value,
+ # set of schematic reel names, set of shelf reel names
+ batch_group = batch_group.create_batch_group(
+ name,
+ start_frame=frame_start,
+ duration=frame_duration,
+ reels=schematic_reels,
+ shelf_reels=shelf_reels
+ )
+ else:
+ batch_group.name = name
+ batch_group.start_frame = frame_start
+ batch_group.duration = frame_duration
+
+ # add reels to batch group
+ _add_reels_to_batch_group(
+ batch_group, schematic_reels, shelf_reels)
+
+ # TODO: also update write node if there is any
+ # TODO: also update loaders to start from correct frameStart
+
+ if kwargs.get("switch_batch_tab"):
+ # use this command to switch to the batch tab
+ batch_group.go_to()
+
+ return batch_group
+
+
+def _add_reels_to_batch_group(batch_group, reels, shelf_reels):
+ # update or create defined reels
+ # helper variables
+ reel_names = [
+ r.name.get_value()
+ for r in batch_group.reels
+ ]
+ shelf_reel_names = [
+ r.name.get_value()
+ for r in batch_group.shelf_reels
+ ]
+ # add schematic reels
+ for _r in reels:
+ if _r in reel_names:
+ continue
+ batch_group.create_reel(_r)
+
+ # add shelf reels
+ for _sr in shelf_reels:
+ if _sr in shelf_reel_names:
+ continue
+ batch_group.create_shelf_reel(_sr)
+
+
+def create_batch_group_conent(batch_nodes, batch_links, batch_group=None):
+ """Creating batch group with links
+
+ Args:
+ batch_nodes (list of dict): each dict is node definition
+ batch_links (list of dict): each dict is link definition
+ batch_group (PyBatch, optional): batch group. Defaults to None.
+
+ Return:
+ dict: all batch nodes {name or id: PyNode}
+ """
+ # make sure some batch obj is present
+ batch_group = batch_group or flame.batch
+ all_batch_nodes = {
+ b.name.get_value(): b
+ for b in batch_group.nodes
+ }
+ for node in batch_nodes:
+ # NOTE: node_props should ideally be an OrderedDict
+ node_id, node_type, node_props = (
+ node["id"], node["type"], node["properties"])
+
+ # get node name for checking if exists
+ node_name = node_props.pop("name", None) or node_id
+
+ if all_batch_nodes.get(node_name):
+ # update existing batch node
+ batch_node = all_batch_nodes[node_name]
+ else:
+ # create new batch node
+ batch_node = batch_group.create_node(node_type)
+
+ # set name
+ batch_node.name.set_value(node_name)
+
+ # set attributes found in node props
+ for key, value in node_props.items():
+ if not hasattr(batch_node, key):
+ continue
+ setattr(batch_node, key, value)
+
+ # add created node for possible linking
+ all_batch_nodes[node_id] = batch_node
+
+ # link nodes to each other
+ for link in batch_links:
+ _from_n, _to_n = link["from_node"], link["to_node"]
+
+ # check if all linking nodes are available
+ if not all([
+ all_batch_nodes.get(_from_n["id"]),
+ all_batch_nodes.get(_to_n["id"])
+ ]):
+ continue
+
+ # link nodes in defined link
+ batch_group.connect_nodes(
+ all_batch_nodes[_from_n["id"]], _from_n["connector"],
+ all_batch_nodes[_to_n["id"]], _to_n["connector"]
+ )
+
+ # sort batch nodes
+ batch_group.organize()
+
+ return all_batch_nodes
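The new `create_batch_group_conent` helper consumes plain dictionaries describing nodes (`id`, `type`, `properties`) and links (`from_node`/`to_node`, each with an `id` and a `connector`), and silently skips links whose endpoints are unknown. A Flame-free sketch of those data shapes and of the same both-endpoints-exist guard (node types and connector names here are hypothetical):

```python
# Hypothetical node and link definitions in the shape expected by
# create_batch_group_conent(); no Flame API is used here.
batch_nodes = [
    {"id": "comp_node", "type": "Comp", "properties": {"name": "comp1"}},
    {"id": "write_node", "type": "Write File", "properties": {}},
]
batch_links = [
    {
        "from_node": {"id": "comp_node", "connector": "Result"},
        "to_node": {"id": "write_node", "connector": "Front"},
    },
    {   # dangling link: one endpoint is unknown, so it gets skipped
        "from_node": {"id": "missing", "connector": "Result"},
        "to_node": {"id": "write_node", "connector": "Front"},
    },
]


def resolve_links(nodes, links):
    """Keep only links whose both endpoints exist, mirroring the
    guard in create_batch_group_conent()."""
    known = {n["id"] for n in nodes}
    return [
        link for link in links
        if link["from_node"]["id"] in known
        and link["to_node"]["id"] in known
    ]


resolved = resolve_links(batch_nodes, batch_links)
```

In the real helper the surviving links are then passed to `batch_group.connect_nodes()` and the layout is tidied with `batch_group.organize()`.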
diff --git a/openpype/hosts/flame/api/lib.py b/openpype/hosts/flame/api/lib.py
index aa2cfcb96d..c7c444c1fb 100644
--- a/openpype/hosts/flame/api/lib.py
+++ b/openpype/hosts/flame/api/lib.py
@@ -3,7 +3,12 @@ import os
import re
import json
import pickle
+import tempfile
+import itertools
import contextlib
+import xml.etree.cElementTree as cET
+from copy import deepcopy
+from xml.etree import ElementTree as ET
from pprint import pformat
from .constants import (
MARKER_COLOR,
@@ -12,9 +17,10 @@ from .constants import (
COLOR_MAP,
MARKER_PUBLISH_DEFAULT
)
-from openpype.api import Logger
-log = Logger.get_logger(__name__)
+import openpype.api as openpype
+
+log = openpype.Logger.get_logger(__name__)
FRAME_PATTERN = re.compile(r"[\._](\d+)[\.]")
@@ -227,16 +233,6 @@ class FlameAppFramework(object):
return True
-def get_project_manager():
- # TODO: get_project_manager
- return
-
-
-def get_media_storage():
- # TODO: get_media_storage
- return
-
-
def get_current_project():
import flame
return flame.project.current_project
@@ -266,11 +262,6 @@ def get_current_sequence(selection):
return process_timeline
-def create_bin(name, root=None):
- # TODO: create_bin
- return
-
-
def rescan_hooks():
import flame
try:
@@ -280,6 +271,7 @@ def rescan_hooks():
def get_metadata(project_name, _log=None):
+ # TODO: can be replaced by MediaInfoFile class method
from adsk.libwiretapPythonClientAPI import (
WireTapClient,
WireTapServerHandle,
@@ -704,6 +696,25 @@ def maintained_object_duplication(item):
flame.delete(duplicate)
+@contextlib.contextmanager
+def maintained_temp_file_path(suffix=None):
+ _suffix = suffix or ""
+
+ try:
+ # yield a unique temporary file path; the file itself
+ # is created by the consumer of this context manager
+ temporary_file = tempfile.mktemp(
+ suffix=_suffix, prefix="flame_maintained_")
+ yield temporary_file.replace("\\", "/")
+
+ except IOError as _error:
+ raise IOError(
+ "Not able to create temp file: {}".format(_error))
+
+ finally:
+ # remove the temporary file if the consumer created it
+ if os.path.isfile(temporary_file):
+ os.remove(temporary_file)
+
+
def get_clip_segment(flame_clip):
name = flame_clip.name.get_value()
version = flame_clip.versions[0]
@@ -717,3 +728,213 @@ def get_clip_segment(flame_clip):
raise ValueError("Clip `{}` has too many segments!".format(name))
return segments[0]
+
+
+def get_batch_group_from_desktop(name):
+ project = get_current_project()
+ project_desktop = project.current_workspace.desktop
+
+ for bgroup in project_desktop.batch_groups:
+ if bgroup.name.get_value() in name:
+ return bgroup
+
+
+class MediaInfoFile(object):
+ """Class to get media info file clip data
+
+ Raises:
+ IOError: MEDIA_SCRIPT_PATH path doesn't exist
+ TypeError: Not able to generate clip xml data file
+ ET.ParseError: Missing clip in xml clip data
+ IOError: Not able to save xml clip data to file
+
+ Attributes:
+ MEDIA_SCRIPT_PATH (str): path to the `dl_get_media_info` binary
+ log (logging.Logger): logger
+
+ TODO: add method for getting metadata to dict
+ """
+ MEDIA_SCRIPT_PATH = "/opt/Autodesk/mio/current/dl_get_media_info"
+
+ log = log
+
+ _clip_data = None
+ _start_frame = None
+ _fps = None
+ _drop_mode = None
+
+ def __init__(self, path, **kwargs):
+
+ # replace log if any
+ if kwargs.get("logger"):
+ self.log = kwargs["logger"]
+
+ # test if `dl_get_media_info` path exists
+ self._validate_media_script_path()
+
+ # derive other feed variables
+ self.feed_basename = os.path.basename(path)
+ self.feed_dir = os.path.dirname(path)
+ self.feed_ext = os.path.splitext(self.feed_basename)[1][1:].lower()
+
+ with maintained_temp_file_path(".clip") as tmp_path:
+ self.log.info("Temp File: {}".format(tmp_path))
+ self._generate_media_info_file(tmp_path)
+
+ # get clip data; if multiple clips are present,
+ # reduce them to the single matching one
+ xml_data = self._make_single_clip_media_info(tmp_path)
+ self.log.debug("xml_data: {}".format(xml_data))
+ self.log.debug("type: {}".format(type(xml_data)))
+
+ # get all time related data and assign them
+ self._get_time_info_from_origin(xml_data)
+ self.log.debug("start_frame: {}".format(self.start_frame))
+ self.log.debug("fps: {}".format(self.fps))
+ self.log.debug("drop frame: {}".format(self.drop_mode))
+ self.clip_data = xml_data
+
+ @property
+ def clip_data(self):
+ """Clip's xml clip data
+
+ Returns:
+ xml.etree.ElementTree: xml data
+ """
+ return self._clip_data
+
+ @clip_data.setter
+ def clip_data(self, data):
+ self._clip_data = data
+
+ @property
+ def start_frame(self):
+ """ Clip's starting frame found in timecode
+
+ Returns:
+ int: number of frames
+ """
+ return self._start_frame
+
+ @start_frame.setter
+ def start_frame(self, number):
+ self._start_frame = int(number)
+
+ @property
+ def fps(self):
+ """ Clip's frame rate
+
+ Returns:
+ float: frame rate
+ """
+ return self._fps
+
+ @fps.setter
+ def fps(self, fl_number):
+ self._fps = float(fl_number)
+
+ @property
+ def drop_mode(self):
+ """ Clip's drop frame mode
+
+ Returns:
+ str: drop frame flag
+ """
+ return self._drop_mode
+
+ @drop_mode.setter
+ def drop_mode(self, text):
+ self._drop_mode = str(text)
+
+ def _validate_media_script_path(self):
+ if not os.path.isfile(self.MEDIA_SCRIPT_PATH):
+ raise IOError("Media script does not exist: `{}`".format(
+ self.MEDIA_SCRIPT_PATH))
+
+ def _generate_media_info_file(self, fpath):
+ # create cmd arguments for getting the xml media info file
+ cmd_args = [
+ self.MEDIA_SCRIPT_PATH,
+ "-e", self.feed_ext,
+ "-o", fpath,
+ self.feed_dir
+ ]
+
+ try:
+ # execute creation of clip xml template data
+ openpype.run_subprocess(cmd_args)
+ except TypeError as error:
+ raise TypeError(
+ "Error creating `{}` due: {}".format(fpath, error))
+
+ def _make_single_clip_media_info(self, fpath):
+ with open(fpath) as f:
+ lines = f.readlines()
+ _added_root = itertools.chain(
+ "<root>", deepcopy(lines)[1:], "</root>")
+ new_root = ET.fromstringlist(_added_root)
+
+ # find the clip matching the input feed name
+ xml_clips = new_root.findall("clip")
+ matching_clip = None
+ for xml_clip in xml_clips:
+ if xml_clip.find("name").text in self.feed_basename:
+ matching_clip = xml_clip
+
+ if matching_clip is None:
+ # raise if the matching clip is missing
+ raise ET.ParseError(
+ "Missing clip in `{}`. Available clips {}".format(
+ self.feed_basename, [
+ xml_clip.find("name").text
+ for xml_clip in xml_clips
+ ]
+ ))
+
+ return matching_clip
+
+ def _get_time_info_from_origin(self, xml_data):
+ try:
+ for out_track in xml_data.iter('track'):
+ for out_feed in out_track.iter('feed'):
+ # start frame
+ out_feed_nb_ticks_obj = out_feed.find(
+ 'startTimecode/nbTicks')
+ self.start_frame = out_feed_nb_ticks_obj.text
+
+ # fps
+ out_feed_fps_obj = out_feed.find(
+ 'startTimecode/rate')
+ self.fps = out_feed_fps_obj.text
+
+ # drop frame mode
+ out_feed_drop_mode_obj = out_feed.find(
+ 'startTimecode/dropMode')
+ self.drop_mode = out_feed_drop_mode_obj.text
+ break
+ else:
+ continue
+ except Exception as msg:
+ self.log.warning(msg)
+
+ @staticmethod
+ def write_clip_data_to_file(fpath, xml_element_data):
+ """ Write xml element of clip data to file
+
+ Args:
+ fpath (string): file path
+ xml_element_data (xml.etree.ElementTree.Element): xml data
+
+ Raises:
+ IOError: If data could not be written to file
+ """
+ try:
+ # save it as new file
+ tree = cET.ElementTree(xml_element_data)
+ tree.write(
+ fpath, xml_declaration=True,
+ method='xml', encoding='UTF-8'
+ )
+ except IOError as error:
+ raise IOError(
+ "Not able to write data to file: {}".format(error))
diff --git a/openpype/hosts/flame/api/plugin.py b/openpype/hosts/flame/api/plugin.py
index 4c9d3c5383..c87445fdd3 100644
--- a/openpype/hosts/flame/api/plugin.py
+++ b/openpype/hosts/flame/api/plugin.py
@@ -1,24 +1,19 @@
import os
import re
import shutil
-import sys
-from xml.etree import ElementTree as ET
-import six
-import qargparse
-from Qt import QtWidgets, QtCore
-import openpype.api as openpype
-from openpype.pipeline import (
- LegacyCreator,
- LoaderPlugin,
-)
-from openpype import style
-from . import (
- lib as flib,
- pipeline as fpipeline,
- constants
-)
-
from copy import deepcopy
+from xml.etree import ElementTree as ET
+
+from Qt import QtCore, QtWidgets
+
+import openpype.api as openpype
+import qargparse
+from openpype import style
+from openpype.pipeline import LegacyCreator, LoaderPlugin
+
+from . import constants
+from . import lib as flib
+from . import pipeline as fpipeline
log = openpype.Logger.get_logger(__name__)
@@ -660,8 +655,8 @@ class PublishableClip:
# Publishing plugin functions
-# Loader plugin functions
+# Loader plugin functions
class ClipLoader(LoaderPlugin):
"""A basic clip loader for Flame
@@ -681,50 +676,52 @@ class ClipLoader(LoaderPlugin):
]
-class OpenClipSolver:
- media_script_path = "/opt/Autodesk/mio/current/dl_get_media_info"
- tmp_name = "_tmp.clip"
- tmp_file = None
+class OpenClipSolver(flib.MediaInfoFile):
create_new_clip = False
- out_feed_nb_ticks = None
- out_feed_fps = None
- out_feed_drop_mode = None
-
log = log
def __init__(self, openclip_file_path, feed_data):
- # test if media script paht exists
- self._validate_media_script_path()
+ self.out_file = openclip_file_path
# new feed variables:
- feed_path = feed_data["path"]
+ feed_path = feed_data.pop("path")
+
+ # initialize parent class
+ super(OpenClipSolver, self).__init__(
+ feed_path,
+ **feed_data
+ )
+
+ # get other metadata
self.feed_version_name = feed_data["version"]
self.feed_colorspace = feed_data.get("colorspace")
-
- if feed_data.get("logger"):
- self.log = feed_data["logger"]
+ self.log.debug("feed_version_name: {}".format(self.feed_version_name))
# derivate other feed variables
self.feed_basename = os.path.basename(feed_path)
self.feed_dir = os.path.dirname(feed_path)
self.feed_ext = os.path.splitext(self.feed_basename)[1][1:].lower()
-
- if not os.path.isfile(openclip_file_path):
- # openclip does not exist yet and will be created
- self.tmp_file = self.out_file = openclip_file_path
+ self.log.debug("feed_ext: {}".format(self.feed_ext))
+ self.log.debug("out_file: {}".format(self.out_file))
+ if not self._is_valid_tmp_file(self.out_file):
self.create_new_clip = True
- else:
- # output a temp file
- self.out_file = openclip_file_path
- self.tmp_file = os.path.join(self.feed_dir, self.tmp_name)
- self._clear_tmp_file()
+ def _is_valid_tmp_file(self, file):
+ # check if file exists
+ if os.path.isfile(file):
+ # test also if file is not empty
+ with open(file) as f:
+ lines = f.readlines()
- self.log.info("Temp File: {}".format(self.tmp_file))
+ if len(lines) > 2:
+ return True
+
+ # file is probably corrupted
+ os.remove(file)
+ return False
def make(self):
- self._generate_media_info_file()
if self.create_new_clip:
# New openClip
@@ -732,42 +729,17 @@ class OpenClipSolver:
else:
self._update_open_clip()
- def _validate_media_script_path(self):
- if not os.path.isfile(self.media_script_path):
- raise IOError("Media Scirpt does not exist: `{}`".format(
- self.media_script_path))
-
- def _generate_media_info_file(self):
- # Create cmd arguments for gettig xml file info file
- cmd_args = [
- self.media_script_path,
- "-e", self.feed_ext,
- "-o", self.tmp_file,
- self.feed_dir
- ]
-
- # execute creation of clip xml template data
- try:
- openpype.run_subprocess(cmd_args)
- except TypeError:
- self.log.error("Error creating self.tmp_file")
- six.reraise(*sys.exc_info())
-
- def _clear_tmp_file(self):
- if os.path.isfile(self.tmp_file):
- os.remove(self.tmp_file)
-
def _clear_handler(self, xml_object):
for handler in xml_object.findall("./handler"):
- self.log.debug("Handler found")
+ self.log.info("Handler found")
xml_object.remove(handler)
def _create_new_open_clip(self):
self.log.info("Building new openClip")
+ self.log.debug(">> self.clip_data: {}".format(self.clip_data))
- tmp_xml = ET.parse(self.tmp_file)
-
- tmp_xml_feeds = tmp_xml.find('tracks/track/feeds')
+ # clip data coming from MediaInfoFile
+ tmp_xml_feeds = self.clip_data.find('tracks/track/feeds')
tmp_xml_feeds.set('currentVersion', self.feed_version_name)
for tmp_feed in tmp_xml_feeds:
tmp_feed.set('vuid', self.feed_version_name)
@@ -778,46 +750,48 @@ class OpenClipSolver:
self._clear_handler(tmp_feed)
- tmp_xml_versions_obj = tmp_xml.find('versions')
+ tmp_xml_versions_obj = self.clip_data.find('versions')
tmp_xml_versions_obj.set('currentVersion', self.feed_version_name)
for xml_new_version in tmp_xml_versions_obj:
xml_new_version.set('uid', self.feed_version_name)
xml_new_version.set('type', 'version')
- xml_data = self._fix_xml_data(tmp_xml)
+ self._clear_handler(self.clip_data)
self.log.info("Adding feed version: {}".format(self.feed_basename))
- self._write_result_xml_to_file(xml_data)
-
- self.log.info("openClip Updated: {}".format(self.tmp_file))
+ self.write_clip_data_to_file(self.out_file, self.clip_data)
def _update_open_clip(self):
self.log.info("Updating openClip ..")
out_xml = ET.parse(self.out_file)
- tmp_xml = ET.parse(self.tmp_file)
+ out_xml = out_xml.getroot()
self.log.debug(">> out_xml: {}".format(out_xml))
- self.log.debug(">> tmp_xml: {}".format(tmp_xml))
+ self.log.debug(">> self.clip_data: {}".format(self.clip_data))
# Get new feed from tmp file
- tmp_xml_feed = tmp_xml.find('tracks/track/feeds/feed')
+ tmp_xml_feed = self.clip_data.find('tracks/track/feeds/feed')
self._clear_handler(tmp_xml_feed)
- self._get_time_info_from_origin(out_xml)
- if self.out_feed_fps:
+ # update fps from MediaInfoFile class
+ if self.fps:
tmp_feed_fps_obj = tmp_xml_feed.find(
"startTimecode/rate")
- tmp_feed_fps_obj.text = self.out_feed_fps
- if self.out_feed_nb_ticks:
+ tmp_feed_fps_obj.text = str(self.fps)
+
+ # update start_frame from MediaInfoFile class
+ if self.start_frame:
tmp_feed_nb_ticks_obj = tmp_xml_feed.find(
"startTimecode/nbTicks")
- tmp_feed_nb_ticks_obj.text = self.out_feed_nb_ticks
- if self.out_feed_drop_mode:
+ tmp_feed_nb_ticks_obj.text = str(self.start_frame)
+
+ # update drop_mode from MediaInfoFile class
+ if self.drop_mode:
tmp_feed_drop_mode_obj = tmp_xml_feed.find(
"startTimecode/dropMode")
- tmp_feed_drop_mode_obj.text = self.out_feed_drop_mode
+ tmp_feed_drop_mode_obj.text = str(self.drop_mode)
new_path_obj = tmp_xml_feed.find(
"spans/span/path")
@@ -850,7 +824,7 @@ class OpenClipSolver:
"version", {"type": "version", "uid": self.feed_version_name})
out_xml_versions_obj.insert(0, new_version_obj)
- xml_data = self._fix_xml_data(out_xml)
+ self._clear_handler(out_xml)
# fist create backup
self._create_openclip_backup_file(self.out_file)
@@ -858,30 +832,9 @@ class OpenClipSolver:
self.log.info("Adding feed version: {}".format(
self.feed_version_name))
- self._write_result_xml_to_file(xml_data)
+ self.write_clip_data_to_file(self.out_file, out_xml)
- self.log.info("openClip Updated: {}".format(self.out_file))
-
- self._clear_tmp_file()
-
- def _get_time_info_from_origin(self, xml_data):
- try:
- for out_track in xml_data.iter('track'):
- for out_feed in out_track.iter('feed'):
- out_feed_nb_ticks_obj = out_feed.find(
- 'startTimecode/nbTicks')
- self.out_feed_nb_ticks = out_feed_nb_ticks_obj.text
- out_feed_fps_obj = out_feed.find(
- 'startTimecode/rate')
- self.out_feed_fps = out_feed_fps_obj.text
- out_feed_drop_mode_obj = out_feed.find(
- 'startTimecode/dropMode')
- self.out_feed_drop_mode = out_feed_drop_mode_obj.text
- break
- else:
- continue
- except Exception as msg:
- self.log.warning(msg)
+ self.log.debug("OpenClip Updated: {}".format(self.out_file))
def _feed_exists(self, xml_data, path):
# loop all available feed paths and check if
@@ -892,15 +845,6 @@ class OpenClipSolver:
"Not appending file as it already is in .clip file")
return True
- def _fix_xml_data(self, xml_data):
- xml_root = xml_data.getroot()
- self._clear_handler(xml_root)
- return ET.tostring(xml_root).decode('utf-8')
-
- def _write_result_xml_to_file(self, xml_data):
- with open(self.out_file, "w") as f:
- f.write(xml_data)
-
def _create_openclip_backup_file(self, file):
bck_file = "{}.bak".format(file)
# if backup does not exist
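The new `_is_valid_tmp_file` treats an openclip file as usable only when it exists and holds more than two lines; anything shorter is assumed corrupted and deleted so a fresh clip gets created. A standalone sketch of that check (the function name here is local to this example):

```python
import os
import tempfile


def is_valid_clip_file(path):
    """Mirror of OpenClipSolver._is_valid_tmp_file: usable only if
    the file exists and has more than two lines; shorter files are
    treated as corrupted and removed."""
    if os.path.isfile(path):
        with open(path) as f:
            lines = f.readlines()
        if len(lines) > 2:
            return True
        # file is probably corrupted
        os.remove(path)
    return False


# A too-short stub is rejected and cleaned up.
fd, stub = tempfile.mkstemp(suffix=".clip")
with os.fdopen(fd, "w") as f:
    f.write("<clip/>\n")
valid = is_valid_clip_file(stub)
```

When the check fails, `OpenClipSolver` flips `create_new_clip` on and builds the openclip from scratch instead of updating it.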
diff --git a/openpype/hosts/flame/api/scripts/wiretap_com.py b/openpype/hosts/flame/api/scripts/wiretap_com.py
index 54993d34eb..4825ff4386 100644
--- a/openpype/hosts/flame/api/scripts/wiretap_com.py
+++ b/openpype/hosts/flame/api/scripts/wiretap_com.py
@@ -185,7 +185,9 @@ class WireTapCom(object):
exit_code = subprocess.call(
project_create_cmd,
- cwd=os.path.expanduser('~'))
+ cwd=os.path.expanduser('~'),
+ preexec_fn=_subprocess_preexec_fn
+ )
if exit_code != 0:
RuntimeError("Cannot create project in flame db")
@@ -254,7 +256,7 @@ class WireTapCom(object):
filtered_users = [user for user in used_names if user_name in user]
if filtered_users:
- # todo: need to find lastly created following regex pattern for
+ # TODO: need to find lastly created following regex pattern for
# date used in name
return filtered_users.pop()
@@ -448,7 +450,9 @@ class WireTapCom(object):
exit_code = subprocess.call(
project_colorspace_cmd,
- cwd=os.path.expanduser('~'))
+ cwd=os.path.expanduser('~'),
+ preexec_fn=_subprocess_preexec_fn
+ )
if exit_code != 0:
RuntimeError("Cannot set colorspace {} on project {}".format(
@@ -456,6 +460,15 @@ class WireTapCom(object):
))
+def _subprocess_preexec_fn():
+ """Runs in the child before exec: detach into a new
+ process group and clear the umask so created files
+ keep their full requested permission bits.
+ """
+ os.setpgrp()
+ os.umask(0o000)
+
+
if __name__ == "__main__":
# get json exchange data
json_path = sys.argv[-1]
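The new `_subprocess_preexec_fn` is handed to `subprocess.call` so the wiretap child runs in its own process group with a cleared umask. A minimal POSIX-only sketch of the same pattern; the child command is a hypothetical stand-in for the real wiretap tool:

```python
import os
import subprocess
import sys
import tempfile


def _subprocess_preexec_fn():
    # Runs in the child between fork and exec: detach into a new
    # process group and clear the umask so created files keep
    # their full requested permission bits.
    os.setpgrp()
    os.umask(0o000)


# Stand-in child process that just creates a file. With the umask
# cleared, open()'s default 0o666 mode is not masked down.
target = os.path.join(tempfile.mkdtemp(), "probe.txt")
exit_code = subprocess.call(
    [sys.executable, "-c",
     "import sys; open(sys.argv[1], 'w').close()",
     target],
    preexec_fn=_subprocess_preexec_fn,
    cwd=os.path.expanduser("~"),
)
```

Note that `preexec_fn` is unavailable on Windows and documented as unsafe in multi-threaded parents, which is acceptable here since the wiretap script runs as a short-lived standalone process.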
diff --git a/openpype/hosts/flame/otio/flame_export.py b/openpype/hosts/flame/otio/flame_export.py
index 8c240fc9d5..4fe05ec1d8 100644
--- a/openpype/hosts/flame/otio/flame_export.py
+++ b/openpype/hosts/flame/otio/flame_export.py
@@ -11,8 +11,6 @@ from . import utils
import flame
from pprint import pformat
-reload(utils) # noqa
-
log = logging.getLogger(__name__)
@@ -260,24 +258,15 @@ def create_otio_markers(otio_item, item):
otio_item.markers.append(otio_marker)
-def create_otio_reference(clip_data):
+def create_otio_reference(clip_data, fps=None):
metadata = _get_metadata(clip_data)
# get file info for path and start frame
frame_start = 0
- fps = CTX.get_fps()
+ fps = fps or CTX.get_fps()
path = clip_data["fpath"]
- reel_clip = None
- match_reel_clip = [
- clip for clip in CTX.clips
- if clip["fpath"] == path
- ]
- if match_reel_clip:
- reel_clip = match_reel_clip.pop()
- fps = reel_clip["fps"]
-
file_name = os.path.basename(path)
file_head, extension = os.path.splitext(file_name)
@@ -339,13 +328,22 @@ def create_otio_reference(clip_data):
def create_otio_clip(clip_data):
+ from openpype.hosts.flame.api import MediaInfoFile
+
segment = clip_data["PySegment"]
- # create media reference
- media_reference = create_otio_reference(clip_data)
-
# calculate source in
- first_frame = utils.get_frame_from_filename(clip_data["fpath"]) or 0
+ media_info = MediaInfoFile(clip_data["fpath"])
+ media_timecode_start = media_info.start_frame
+ media_fps = media_info.fps
+
+ # create media reference
+ media_reference = create_otio_reference(clip_data, media_fps)
+
+ # define first frame
+ first_frame = media_timecode_start or utils.get_frame_from_filename(
+ clip_data["fpath"]) or 0
+
source_in = int(clip_data["source_in"]) - int(first_frame)
# creatae source range
@@ -378,38 +376,6 @@ def create_otio_gap(gap_start, clip_start, tl_start_frame, fps):
)
-def get_clips_in_reels(project):
- output_clips = []
- project_desktop = project.current_workspace.desktop
-
- for reel_group in project_desktop.reel_groups:
- for reel in reel_group.reels:
- for clip in reel.clips:
- clip_data = {
- "PyClip": clip,
- "fps": float(str(clip.frame_rate)[:-4])
- }
-
- attrs = [
- "name", "width", "height",
- "ratio", "sample_rate", "bit_depth"
- ]
-
- for attr in attrs:
- val = getattr(clip, attr)
- clip_data[attr] = val
-
- version = clip.versions[-1]
- track = version.tracks[-1]
- for segment in track.segments:
- segment_data = _get_segment_attributes(segment)
- clip_data.update(segment_data)
-
- output_clips.append(clip_data)
-
- return output_clips
-
-
def _get_colourspace_policy():
output = {}
@@ -493,9 +459,6 @@ def _get_shot_tokens_values(clip, tokens):
old_value = None
output = {}
- if not clip.shot_name:
- return output
-
old_value = clip.shot_name.get_value()
for token in tokens:
@@ -513,15 +476,21 @@ def _get_shot_tokens_values(clip, tokens):
def _get_segment_attributes(segment):
- # log.debug(dir(segment))
- if str(segment.name)[1:-1] == "":
+ log.debug("Segment name|hidden: {}|{}".format(
+ segment.name.get_value(), segment.hidden
+ ))
+ if (
+ segment.name.get_value() == ""
+ or segment.hidden.get_value()
+ ):
return None
# Add timeline segment to tree
clip_data = {
"segment_name": segment.name.get_value(),
"segment_comment": segment.comment.get_value(),
+ "shot_name": segment.shot_name.get_value(),
"tape_name": segment.tape_name,
"source_name": segment.source_name,
"fpath": segment.file_path,
@@ -529,9 +498,10 @@ def _get_segment_attributes(segment):
}
# add all available shot tokens
- shot_tokens = _get_shot_tokens_values(segment, [
- "<colour space>", "<width>", "<height>", "<depth>",
- ])
+ shot_tokens = _get_shot_tokens_values(
+ segment,
+ ["<colour space>", "<width>", "<height>", "<depth>"]
+ )
clip_data.update(shot_tokens)
# populate shot source metadata
@@ -561,11 +531,6 @@ def create_otio_timeline(sequence):
log.info(sequence.attributes)
CTX.project = get_current_flame_project()
- CTX.clips = get_clips_in_reels(CTX.project)
-
- log.debug(pformat(
- CTX.clips
- ))
# get current timeline
CTX.set_fps(
@@ -583,8 +548,13 @@ def create_otio_timeline(sequence):
# create otio tracks and clips
for ver in sequence.versions:
for track in ver.tracks:
- if len(track.segments) == 0 and track.hidden:
- return None
+ # avoid all empty tracks
+ # or hidden tracks
+ if (
+ len(track.segments) == 0
+ or track.hidden.get_value()
+ ):
+ continue
# convert track to otio
otio_track = create_otio_track(
@@ -597,11 +567,7 @@ def create_otio_timeline(sequence):
continue
all_segments.append(clip_data)
- segments_ordered = {
- itemindex: clip_data
- for itemindex, clip_data in enumerate(
- all_segments)
- }
+ segments_ordered = dict(enumerate(all_segments))
log.debug("_ segments_ordered: {}".format(
pformat(segments_ordered)
))
@@ -612,15 +578,11 @@ def create_otio_timeline(sequence):
log.debug("_ itemindex: {}".format(itemindex))
# Add Gap if needed
- if itemindex == 0:
- # if it is first track item at track then add
- # it to previous item
- prev_item = segment_data
-
- else:
- # get previous item
- prev_item = segments_ordered[itemindex - 1]
-
+ prev_item = (
+ segment_data
+ if itemindex == 0
+ else segments_ordered[itemindex - 1]
+ )
log.debug("_ segment_data: {}".format(segment_data))
# calculate clip frame range difference from each other
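The segment ordering and previous-item lookup simplified above can be sketched in isolation (segment values here are plain strings standing in for the real segment-data dicts):

```python
# Segments are indexed by timeline position via dict(enumerate(...)); the
# predecessor of the first item is the item itself, which serves as the
# baseline for the gap calculation that follows.
all_segments = ["seg_a", "seg_b", "seg_c"]
segments_ordered = dict(enumerate(all_segments))
prev_items = [
    segment_data if itemindex == 0 else segments_ordered[itemindex - 1]
    for itemindex, segment_data in segments_ordered.items()
]
```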
diff --git a/openpype/hosts/flame/plugins/load/load_clip.py b/openpype/hosts/flame/plugins/load/load_clip.py
index 8980f72cb8..e0a7297381 100644
--- a/openpype/hosts/flame/plugins/load/load_clip.py
+++ b/openpype/hosts/flame/plugins/load/load_clip.py
@@ -22,7 +22,7 @@ class LoadClip(opfapi.ClipLoader):
# settings
reel_group_name = "OpenPype_Reels"
reel_name = "Loaded"
- clip_name_template = "{asset}_{subset}_{representation}"
+ clip_name_template = "{asset}_{subset}_{output}"
def load(self, context, name, namespace, options):
@@ -39,7 +39,7 @@ class LoadClip(opfapi.ClipLoader):
clip_name = self.clip_name_template.format(
**context["representation"]["context"])
- # todo: settings in imageio
+ # TODO: settings in imageio
# convert colorspace with ocio to flame mapping
# in imageio flame section
colorspace = colorspace
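The clip naming both loaders share can be sketched as below; the helper name and sample context values are hypothetical, not part of the OpenPype API:

```python
# Build a clip name from a template and a representation context, falling
# back to the "representation" key when "output" is absent - mirroring the
# loaders' template swap.
def build_clip_name(template, repre_context):
    if not repre_context.get("output"):
        template = template.replace("output", "representation")
    return template.format(**repre_context)

name_a = build_clip_name(
    "{asset}_{subset}_{output}",
    {"asset": "sh010", "subset": "plateMain", "output": "exr"},
)
```

Note that `str.replace` returns a new string, so the result must be assigned back rather than discarded.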
diff --git a/openpype/hosts/flame/plugins/load/load_clip_batch.py b/openpype/hosts/flame/plugins/load/load_clip_batch.py
new file mode 100644
index 0000000000..5de3226035
--- /dev/null
+++ b/openpype/hosts/flame/plugins/load/load_clip_batch.py
@@ -0,0 +1,139 @@
+import os
+import flame
+from pprint import pformat
+import openpype.hosts.flame.api as opfapi
+
+
+class LoadClipBatch(opfapi.ClipLoader):
+ """Load a subset into the current batch as a clip
+
+ Place the clip at its asset's original timings, collected
+ during conforming to the project.
+ """
+
+ families = ["render2d", "source", "plate", "render", "review"]
+ representations = ["exr", "dpx", "jpg", "jpeg", "png", "h264"]
+
+ label = "Load as clip to current batch"
+ order = -10
+ icon = "code-fork"
+ color = "orange"
+
+ # settings
+ reel_name = "OP_LoadedReel"
+ clip_name_template = "{asset}_{subset}_{output}"
+
+ def load(self, context, name, namespace, options):
+
+ # get flame objects
+ self.batch = options.get("batch") or flame.batch
+
+ # load clip to timeline and get main variables
+ namespace = namespace
+ version = context['version']
+ version_data = version.get("data", {})
+ version_name = version.get("name", None)
+ colorspace = version_data.get("colorspace", None)
+
+ # if "output" is missing from the context, fall back to "representation"
+ if not context["representation"]["context"].get("output"):
+ self.clip_name_template = self.clip_name_template.replace(
+ "output", "representation")
+
+ clip_name = self.clip_name_template.format(
+ **context["representation"]["context"])
+
+ # TODO: settings in imageio
+ # convert colorspace with ocio to flame mapping
+ # in imageio flame section
+ colorspace = colorspace
+
+ # create workfile path
+ workfile_dir = options.get("workdir") or os.environ["AVALON_WORKDIR"]
+ openclip_dir = os.path.join(
+ workfile_dir, clip_name
+ )
+ openclip_path = os.path.join(
+ openclip_dir, clip_name + ".clip"
+ )
+ if not os.path.exists(openclip_dir):
+ os.makedirs(openclip_dir)
+
+ # prepare clip data from context and send it to openClipLoader
+ loading_context = {
+ "path": self.fname.replace("\\", "/"),
+ "colorspace": colorspace,
+ "version": "v{:0>3}".format(version_name),
+ "logger": self.log
+
+ }
+ self.log.debug(pformat(
+ loading_context
+ ))
+ self.log.debug(openclip_path)
+
+ # make openpype clip file
+ opfapi.OpenClipSolver(openclip_path, loading_context).make()
+
+ # prepare Reel group in actual desktop
+ opc = self._get_clip(
+ clip_name,
+ openclip_path
+ )
+
+ # add additional metadata from the version to imprint Avalon knob
+ add_keys = [
+ "frameStart", "frameEnd", "source", "author",
+ "fps", "handleStart", "handleEnd"
+ ]
+
+ # move all version data keys to tag data
+ data_imprint = {
+ key: version_data.get(key, str(None))
+ for key in add_keys
+ }
+ # add variables related to version context
+ data_imprint.update({
+ "version": version_name,
+ "colorspace": colorspace,
+ "objectName": clip_name
+ })
+
+ # TODO: finish the containerisation
+ # opc_segment = opfapi.get_clip_segment(opc)
+
+ # return opfapi.containerise(
+ # opc_segment,
+ # name, namespace, context,
+ # self.__class__.__name__,
+ # data_imprint)
+
+ return opc
+
+ def _get_clip(self, name, clip_path):
+ reel = self._get_reel()
+
+ # with maintained openclip as opc
+ matching_clip = None
+ for cl in reel.clips:
+ if cl.name.get_value() != name:
+ continue
+ matching_clip = cl
+
+ if not matching_clip:
+ created_clips = flame.import_clips(str(clip_path), reel)
+ return created_clips.pop()
+
+ return matching_clip
+
+ def _get_reel(self):
+
+ matching_reel = [
+ rg for rg in self.batch.reels
+ if rg.name.get_value() == self.reel_name
+ ]
+
+ return (
+ matching_reel.pop()
+ if matching_reel
+ else self.batch.create_reel(str(self.reel_name))
+ )
diff --git a/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py b/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py
index 2482abd9c7..95c2002bd9 100644
--- a/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py
+++ b/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py
@@ -21,19 +21,12 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
audio_track_items = []
- # TODO: add to settings
# settings
- xml_preset_attrs_from_comments = {
- "width": "number",
- "height": "number",
- "pixelRatio": "float",
- "resizeType": "string",
- "resizeFilter": "string"
- }
+ xml_preset_attrs_from_comments = []
+ add_tasks = []
def process(self, context):
project = context.data["flameProject"]
- sequence = context.data["flameSequence"]
selected_segments = context.data["flameSelectedSegments"]
self.log.debug("__ selected_segments: {}".format(selected_segments))
@@ -79,9 +72,9 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
# solve handles length
marker_data["handleStart"] = min(
- marker_data["handleStart"], head)
+ marker_data["handleStart"], abs(head))
marker_data["handleEnd"] = min(
- marker_data["handleEnd"], tail)
+ marker_data["handleEnd"], abs(tail))
with_audio = bool(marker_data.pop("audio"))
@@ -112,7 +105,11 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
"fps": self.fps,
"flameSourceClip": source_clip,
"sourceFirstFrame": int(first_frame),
- "path": file_path
+ "path": file_path,
+ "flameAddTasks": self.add_tasks,
+ "tasks": {
+ task["name"]: {"type": task["type"]}
+ for task in self.add_tasks}
})
# get otio clip data
@@ -187,7 +184,10 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
# split to key and value
key, value = split.split(":")
- for a_name, a_type in self.xml_preset_attrs_from_comments.items():
+ for attr_data in self.xml_preset_attrs_from_comments:
+ a_name = attr_data["name"]
+ a_type = attr_data["type"]
+
# exclude all not related attributes
if a_name.lower() not in key.lower():
continue
@@ -247,6 +247,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
head = clip_data.get("segment_head")
tail = clip_data.get("segment_tail")
+ # HACK: kept to support Flame versions below 2021.1
if not head:
head = int(clip_data["source_in"]) - int(first_frame)
if not tail:
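The handle clamping changed in this collector can be sketched on its own; the `abs()` calls are the new guard against negative head/tail values reported for some segments:

```python
# Clamp the requested handles to the head/tail room that actually exists
# on the segment.
def clamp_handles(handle_start, handle_end, head, tail):
    return min(handle_start, abs(head)), min(handle_end, abs(tail))
```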
diff --git a/openpype/hosts/flame/plugins/publish/extract_subset_resources.py b/openpype/hosts/flame/plugins/publish/extract_subset_resources.py
index 32f6b9508f..a780f8c9e5 100644
--- a/openpype/hosts/flame/plugins/publish/extract_subset_resources.py
+++ b/openpype/hosts/flame/plugins/publish/extract_subset_resources.py
@@ -61,9 +61,13 @@ class ExtractSubsetResources(openpype.api.Extractor):
# flame objects
segment = instance.data["item"]
+ segment_name = segment.name.get_value()
sequence_clip = instance.context.data["flameSequence"]
clip_data = instance.data["flameSourceClip"]
- clip = clip_data["PyClip"]
+
+ reel_clip = None
+ if clip_data:
+ reel_clip = clip_data["PyClip"]
# segment's parent track name
s_track_name = segment.parent.name.get_value()
@@ -108,6 +112,16 @@ class ExtractSubsetResources(openpype.api.Extractor):
ignore_comment_attrs = preset_config["ignore_comment_attrs"]
color_out = preset_config["colorspace_out"]
+ # get attributes related to loading in integrate_batch_group
+ load_to_batch_group = preset_config.get(
+ "load_to_batch_group")
+ batch_group_loader_name = preset_config.get(
+ "batch_group_loader_name")
+
+ # convert to None if empty string
+ if batch_group_loader_name == "":
+ batch_group_loader_name = None
+
# get frame range with handles for representation range
frame_start_handle = frame_start - handle_start
source_duration_handles = (
@@ -117,8 +131,20 @@ class ExtractSubsetResources(openpype.api.Extractor):
in_mark = (source_start_handles - source_first_frame) + 1
out_mark = in_mark + source_duration_handles
+ # check the preset type and reel clip availability
+ if (
+ not reel_clip
+ and export_type != "Sequence Publish"
+ ):
+ self.log.warning((
+ "Skipping preset {}. No reel clip "
+ "available for {}").format(
+ preset_file, segment_name
+ ))
+ continue
+
# by default export source clips
- exporting_clip = clip
+ exporting_clip = reel_clip
if export_type == "Sequence Publish":
# change export clip to sequence
@@ -150,7 +176,7 @@ class ExtractSubsetResources(openpype.api.Extractor):
if export_type == "Sequence Publish":
# only keep visible layer where instance segment is child
- self.hide_other_tracks(duplclip, s_track_name)
+ self.hide_others(duplclip, segment_name, s_track_name)
# validate xml preset file is filled
if preset_file == "":
@@ -211,7 +237,9 @@ class ExtractSubsetResources(openpype.api.Extractor):
"tags": repre_tags,
"data": {
"colorspace": color_out
- }
+ },
+ "load_to_batch_group": load_to_batch_group,
+ "batch_group_loader_name": batch_group_loader_name
}
# collect all available content of export dir
@@ -322,18 +350,26 @@ class ExtractSubsetResources(openpype.api.Extractor):
return new_stage_dir, new_files_list
- def hide_other_tracks(self, sequence_clip, track_name):
+ def hide_others(self, sequence_clip, segment_name, track_name):
"""Helper method used only if sequence clip is used
Args:
sequence_clip (flame.Clip): sequence clip
+ segment_name (str): segment name
track_name (str): track name
"""
# create otio tracks and clips
for ver in sequence_clip.versions:
for track in ver.tracks:
- if len(track.segments) == 0 and track.hidden:
+ if len(track.segments) == 0 and track.hidden.get_value():
continue
+ # hide tracks that are not the parent track
if track.name.get_value() != track_name:
track.hidden = True
+ continue
+
+ # hide all other segments
+ for segment in track.segments:
+ if segment.name.get_value() != segment_name:
+ segment.hidden = True
diff --git a/openpype/hosts/flame/plugins/publish/integrate_batch_group.py b/openpype/hosts/flame/plugins/publish/integrate_batch_group.py
new file mode 100644
index 0000000000..da9553cc2a
--- /dev/null
+++ b/openpype/hosts/flame/plugins/publish/integrate_batch_group.py
@@ -0,0 +1,328 @@
+import os
+import copy
+from collections import OrderedDict
+from pprint import pformat
+import pyblish
+from openpype.lib import get_workdir
+import openpype.hosts.flame.api as opfapi
+import openpype.pipeline as op_pipeline
+
+
+class IntegrateBatchGroup(pyblish.api.InstancePlugin):
+ """Integrate published shot to batch group"""
+
+ order = pyblish.api.IntegratorOrder + 0.45
+ label = "Integrate Batch Groups"
+ hosts = ["flame"]
+ families = ["clip"]
+
+ # settings
+ default_loader = "LoadClip"
+
+ def process(self, instance):
+ add_tasks = instance.data["flameAddTasks"]
+
+ # iterate all tasks from settings
+ for task_data in add_tasks:
+ # exclude batch group
+ if not task_data["create_batch_group"]:
+ continue
+
+ # create or get already created batch group
+ bgroup = self._get_batch_group(instance, task_data)
+
+ # add batch group content
+ all_batch_nodes = self._add_nodes_to_batch_with_links(
+ instance, task_data, bgroup)
+
+ for name, node in all_batch_nodes.items():
+ self.log.debug("name: {}, dir: {}".format(
+ name, dir(node)
+ ))
+ self.log.debug("__ node.attributes: {}".format(
+ node.attributes
+ ))
+
+ # load plate to batch group
+ self.log.info("Loading subset `{}` into batch `{}`".format(
+ instance.data["subset"], bgroup.name.get_value()
+ ))
+ self._load_clip_to_context(instance, bgroup)
+
+ def _add_nodes_to_batch_with_links(self, instance, task_data, batch_group):
+ # get Write File node properties as OrderedDict because order matters
+ write_pref_data = self._get_write_prefs(instance, task_data)
+
+ batch_nodes = [
+ {
+ "type": "comp",
+ "properties": {},
+ "id": "comp_node01"
+ },
+ {
+ "type": "Write File",
+ "properties": write_pref_data,
+ "id": "write_file_node01"
+ }
+ ]
+ batch_links = [
+ {
+ "from_node": {
+ "id": "comp_node01",
+ "connector": "Result"
+ },
+ "to_node": {
+ "id": "write_file_node01",
+ "connector": "Front"
+ }
+ }
+ ]
+
+ # add nodes into batch group
+ return opfapi.create_batch_group_conent(
+ batch_nodes, batch_links, batch_group)
+
+ def _load_clip_to_context(self, instance, bgroup):
+ # get all loaders for host
+ loaders_by_name = {
+ loader.__name__: loader
+ for loader in op_pipeline.discover_loader_plugins()
+ }
+
+ # get all published representations
+ published_representations = instance.data["published_representations"]
+ repres_db_id_by_name = {
+ repre_info["representation"]["name"]: repre_id
+ for repre_id, repre_info in published_representations.items()
+ }
+
+ # get all loadable representations
+ repres_by_name = {
+ repre["name"]: repre for repre in instance.data["representations"]
+ }
+
+ # get repre_id for the loadable representations
+ loader_name_by_repre_id = {
+ repres_db_id_by_name[repr_name]: {
+ "loader": repr_data["batch_group_loader_name"],
+ # add repre data for exception logging
+ "_repre_data": repr_data
+ }
+ for repr_name, repr_data in repres_by_name.items()
+ if repr_data.get("load_to_batch_group")
+ }
+
+ self.log.debug("__ loader_name_by_repre_id: {}".format(pformat(
+ loader_name_by_repre_id)))
+
+ # get representation context from the repre_id
+ repre_contexts = op_pipeline.load.get_repres_contexts(
+ loader_name_by_repre_id.keys())
+
+ self.log.debug("__ repre_contexts: {}".format(pformat(
+ repre_contexts)))
+
+ # loop all returned repres from repre_context dict
+ for repre_id, repre_context in repre_contexts.items():
+ self.log.debug("__ repre_id: {}".format(repre_id))
+ # get loader name by representation id
+ loader_name = (
+ loader_name_by_repre_id[repre_id]["loader"]
+ # if nothing was added to settings fallback to default
+ or self.default_loader
+ )
+
+ # get loader plugin
+ loader_plugin = loaders_by_name.get(loader_name)
+ if loader_plugin:
+ # load to flame by representation context
+ try:
+ op_pipeline.load.load_with_repre_context(
+ loader_plugin, repre_context, **{
+ "data": {
+ "workdir": self.task_workdir,
+ "batch": bgroup
+ }
+ })
+ except op_pipeline.load.IncompatibleLoaderError as msg:
+ self.log.error(
+ "Check allowed representations for Loader `{}` "
+ "in settings > error: {}".format(
+ loader_plugin.__name__, msg))
+ self.log.error(
+ "Representation context >>{}<< is not compatible "
+ "with loader `{}`".format(
+ pformat(repre_context), loader_plugin.__name__
+ )
+ )
+ else:
+ self.log.warning(
+ "Something went wrong and no Loader was found for "
+ "the following data: {}".format(
+ pformat(loader_name_by_repre_id))
+ )
+
+ def _get_batch_group(self, instance, task_data):
+ frame_start = instance.data["frameStart"]
+ frame_end = instance.data["frameEnd"]
+ handle_start = instance.data["handleStart"]
+ handle_end = instance.data["handleEnd"]
+ frame_duration = (frame_end - frame_start) + 1
+ asset_name = instance.data["asset"]
+
+ task_name = task_data["name"]
+ batchgroup_name = "{}_{}".format(asset_name, task_name)
+
+ batch_data = {
+ "shematic_reels": [
+ "OP_LoadedReel"
+ ],
+ "handleStart": handle_start,
+ "handleEnd": handle_end
+ }
+ self.log.debug(
+ "__ batch_data: {}".format(pformat(batch_data)))
+
+ # check if the batch group already exists
+ bgroup = opfapi.get_batch_group_from_desktop(batchgroup_name)
+
+ if not bgroup:
+ self.log.info(
+ "Creating new batch group: {}".format(batchgroup_name))
+ # create batch with utils
+ bgroup = opfapi.create_batch_group(
+ batchgroup_name,
+ frame_start,
+ frame_duration,
+ **batch_data
+ )
+
+ else:
+ self.log.info(
+ "Updating batch group: {}".format(batchgroup_name))
+ # update already created batch group
+ bgroup = opfapi.create_batch_group(
+ batchgroup_name,
+ frame_start,
+ frame_duration,
+ update_batch_group=bgroup,
+ **batch_data
+ )
+
+ return bgroup
+
+ def _get_anamoty_data_with_current_task(self, instance, task_data):
+ anatomy_data = copy.deepcopy(instance.data["anatomyData"])
+ task_name = task_data["name"]
+ task_type = task_data["type"]
+ anatomy_obj = instance.context.data["anatomy"]
+
+ # update task data in anatomy data
+ project_task_types = anatomy_obj["tasks"]
+ task_code = project_task_types.get(task_type, {}).get("short_name")
+ anatomy_data.update({
+ "task": {
+ "name": task_name,
+ "type": task_type,
+ "short": task_code
+ }
+ })
+ return anatomy_data
+
+ def _get_write_prefs(self, instance, task_data):
+ # update task in anatomy data
+ anatomy_data = self._get_anamoty_data_with_current_task(
+ instance, task_data)
+
+ self.task_workdir = self._get_shot_task_dir_path(
+ instance, task_data)
+ self.log.debug("__ task_workdir: {}".format(
+ self.task_workdir))
+
+ # TODO: this might be done with template in settings
+ render_dir_path = os.path.join(
+ self.task_workdir, "render", "flame")
+
+ if not os.path.exists(render_dir_path):
+ os.makedirs(render_dir_path, mode=0o777)
+
+ # TODO: add most of these to `imageio/flame/batch/write_node`
+ name = "{project[code]}_{asset}_{task[name]}".format(
+ **anatomy_data
+ )
+
+ # The path attribute where the rendered clip is exported
+ # /path/to/file.[0001-0010].exr
+ media_path = render_dir_path
+ # name of file represented by tokens
+ media_path_pattern = (
+ "<name>_v<version>/<name>_v<version>.<frame><extension>")
+ # The Create Open Clip attribute of the Write File node.
+ # Determines if an Open Clip is created by the Write File node.
+ create_clip = True
+ # The Include Setup attribute of the Write File node.
+ # Determines if a Batch Setup file is created by the Write File node.
+ include_setup = True
+ # The path attribute where the Open Clip file is exported by
+ # the Write File node.
+ create_clip_path = ""
+ # The path attribute where the Batch setup file
+ # is exported by the Write File node.
+ include_setup_path = "./<name>_v<version>"
+ # The file type for the files written by the Write File node.
+ # Setting this attribute also overwrites format_extension,
+ # bit_depth and compress_mode to match the defaults for
+ # this file type.
+ file_type = "OpenEXR"
+ # The file extension for the files written by the Write File node.
+ # This attribute resets to match file_type whenever file_type
+ # is set. If you require a specific extension, you must
+ # set format_extension after setting file_type.
+ format_extension = "exr"
+ # The bit depth for the files written by the Write File node.
+ # This attribute resets to match file_type whenever file_type is set.
+ bit_depth = "16"
+ # The compressing attribute for the files exported by the Write
+ # File node. Only relevant when file_type in 'OpenEXR', 'Sgi', 'Tiff'
+ compress = True
+ # The compression format attribute for the specific File Types
+ # export by the Write File node. You must set compress_mode
+ # after setting file_type.
+ compress_mode = "DWAB"
+ # The frame index mode attribute of the Write File node.
+ # Value range: `Use Timecode` or `Use Start Frame`
+ frame_index_mode = "Use Start Frame"
+ frame_padding = 6
+ # The versioning mode of the Open Clip exported by the Write File node.
+ # Only available if create_clip = True.
+ version_mode = "Follow Iteration"
+ version_name = "v<version>"
+ version_padding = 3
+
+ # need to make sure the order of keys is correct
+ return OrderedDict((
+ ("name", name),
+ ("media_path", media_path),
+ ("media_path_pattern", media_path_pattern),
+ ("create_clip", create_clip),
+ ("include_setup", include_setup),
+ ("create_clip_path", create_clip_path),
+ ("include_setup_path", include_setup_path),
+ ("file_type", file_type),
+ ("format_extension", format_extension),
+ ("bit_depth", bit_depth),
+ ("compress", compress),
+ ("compress_mode", compress_mode),
+ ("frame_index_mode", frame_index_mode),
+ ("frame_padding", frame_padding),
+ ("version_mode", version_mode),
+ ("version_name", version_name),
+ ("version_padding", version_padding)
+ ))
+
+ def _get_shot_task_dir_path(self, instance, task_data):
+ project_doc = instance.data["projectEntity"]
+ asset_entity = instance.data["assetEntity"]
+
+ return get_workdir(
+ project_doc, asset_entity, task_data["name"], "flame")
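The ordering constraint that `_get_write_prefs` documents (setting `file_type` resets `format_extension`, `bit_depth` and `compress_mode`, so those keys must be applied after it) is why it returns an `OrderedDict`. A minimal illustration with sample values from the code above:

```python
from collections import OrderedDict

# Keys must be applied to the Write File node in this order; a plain dict
# in old Python versions would not guarantee it.
prefs = OrderedDict((
    ("file_type", "OpenEXR"),
    ("format_extension", "exr"),
    ("bit_depth", "16"),
    ("compress_mode", "DWAB"),
))
keys = list(prefs)
```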
diff --git a/openpype/hosts/flame/plugins/publish/validate_source_clip.py b/openpype/hosts/flame/plugins/publish/validate_source_clip.py
index 9ff015f628..345c00e05a 100644
--- a/openpype/hosts/flame/plugins/publish/validate_source_clip.py
+++ b/openpype/hosts/flame/plugins/publish/validate_source_clip.py
@@ -9,6 +9,8 @@ class ValidateSourceClip(pyblish.api.InstancePlugin):
label = "Validate Source Clip"
hosts = ["flame"]
families = ["clip"]
+ optional = True
+ active = False
def process(self, instance):
flame_source_clip = instance.data["flameSourceClip"]
diff --git a/openpype/hosts/houdini/api/pipeline.py b/openpype/hosts/houdini/api/pipeline.py
index 6a69814e2e..7048accceb 100644
--- a/openpype/hosts/houdini/api/pipeline.py
+++ b/openpype/hosts/houdini/api/pipeline.py
@@ -4,7 +4,6 @@ import logging
import contextlib
import hou
-import hdefereval
import pyblish.api
@@ -291,7 +290,13 @@ def on_new():
start = hou.playbar.playbackRange()[0]
hou.setFrame(start)
- hdefereval.executeDeferred(_enforce_start_frame)
+ if hou.isUIAvailable():
+ import hdefereval
+ hdefereval.executeDeferred(_enforce_start_frame)
+ else:
+ # Run without execute deferred when no UI is available because
+ # without UI `hdefereval` is not available to import
+ _enforce_start_frame()
def _set_context_settings():
diff --git a/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py b/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py
index e917a28046..fb52fc18b4 100644
--- a/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py
+++ b/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py
@@ -1,6 +1,9 @@
import os
import nuke
+import copy
+
import pyblish.api
+
import openpype
from openpype.hosts.nuke.api.lib import maintained_selection
@@ -18,6 +21,13 @@ class ExtractSlateFrame(openpype.api.Extractor):
families = ["slate"]
hosts = ["nuke"]
+ # Settings values
+ # - can be extended by other attributes from node in the future
+ key_value_mapping = {
+ "f_submission_note": [True, "{comment}"],
+ "f_submitting_for": [True, "{intent[value]}"],
+ "f_vfx_scope_of_work": [False, ""]
+ }
def process(self, instance):
if hasattr(self, "viewer_lut_raw"):
@@ -129,9 +139,7 @@ class ExtractSlateFrame(openpype.api.Extractor):
for node in temporary_nodes:
nuke.delete(node)
-
def get_view_process_node(self):
-
# Select only the target node
if nuke.selectedNodes():
[n.setSelected(False) for n in nuke.selectedNodes()]
@@ -162,13 +170,56 @@ class ExtractSlateFrame(openpype.api.Extractor):
return
comment = instance.context.data.get("comment")
- intent_value = instance.context.data.get("intent")
- if intent_value and isinstance(intent_value, dict):
- intent_value = intent_value.get("value")
+ intent = instance.context.data.get("intent")
+ if not isinstance(intent, dict):
+ intent = {
+ "label": intent,
+ "value": intent
+ }
- try:
- node["f_submission_note"].setValue(comment)
- node["f_submitting_for"].setValue(intent_value or "")
- except NameError:
- return
- instance.data.pop("slateNode")
+ fill_data = copy.deepcopy(instance.data["anatomyData"])
+ fill_data.update({
+ "custom": copy.deepcopy(
+ instance.data.get("customData") or {}
+ ),
+ "comment": comment,
+ "intent": intent
+ })
+
+ for key, value in self.key_value_mapping.items():
+ enabled, template = value
+ if not enabled:
+ self.log.debug("Key \"{}\" is disabled".format(key))
+ continue
+
+ try:
+ value = template.format(**fill_data)
+
+ except ValueError:
+ self.log.warning(
+ "Couldn't fill template \"{}\" with data: {}".format(
+ template, fill_data
+ ),
+ exc_info=True
+ )
+ continue
+
+ except KeyError:
+ self.log.warning(
+ (
+ "Template contains unknown key."
+ " Template \"{}\" Data: {}"
+ ).format(template, fill_data),
+ exc_info=True
+ )
+ continue
+
+ try:
+ node[key].setValue(value)
+ self.log.info("Change key \"{}\" to value \"{}\"".format(
+ key, value
+ ))
+ except NameError:
+ self.log.warning((
+ "Failed to set value \"{}\" on node attribute \"{}\""
+ ).format(value, key))
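The key/value mapping loop added to the slate extractor can be condensed as follows; the function name is hypothetical, and the mapping values are the defaults from the plugin settings above:

```python
# Each mapping entry is (enabled, template); templates are filled from
# anatomy/context data, and templates referencing unknown keys are skipped
# rather than failing the extraction.
def fill_slate_values(mapping, fill_data):
    filled = {}
    for key, (enabled, template) in mapping.items():
        if not enabled:
            continue
        try:
            filled[key] = template.format(**fill_data)
        except (KeyError, ValueError, IndexError):
            continue
    return filled

mapping = {
    "f_submission_note": [True, "{comment}"],
    "f_submitting_for": [True, "{intent[value]}"],
    "f_vfx_scope_of_work": [False, ""],
}
result = fill_slate_values(
    mapping, {"comment": "fixed flicker", "intent": {"value": "WIP"}})
```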
diff --git a/openpype/hosts/tvpaint/worker/init_file.tvpp b/openpype/hosts/tvpaint/worker/init_file.tvpp
new file mode 100644
index 0000000000..572d278fdb
Binary files /dev/null and b/openpype/hosts/tvpaint/worker/init_file.tvpp differ
diff --git a/openpype/hosts/tvpaint/worker/worker.py b/openpype/hosts/tvpaint/worker/worker.py
index cfd40bc7ba..9295c8afb4 100644
--- a/openpype/hosts/tvpaint/worker/worker.py
+++ b/openpype/hosts/tvpaint/worker/worker.py
@@ -1,5 +1,8 @@
+import os
import signal
import time
+import tempfile
+import shutil
import asyncio
from openpype.hosts.tvpaint.api.communication_server import (
@@ -36,8 +39,28 @@ class TVPaintWorkerCommunicator(BaseCommunicator):
super()._start_webserver()
+ def _open_init_file(self):
+ """Open init TVPaint file.
+
+ Opening the file triggers a dialog about a missing audio file path,
+ which must be closed once and is then ignored for the rest of the
+ running process.
+ """
+ current_dir = os.path.dirname(os.path.abspath(__file__))
+ init_filepath = os.path.join(current_dir, "init_file.tvpp")
+ with tempfile.NamedTemporaryFile(
+ mode="w", prefix="a_tvp_", suffix=".tvpp"
+ ) as tmp_file:
+ tmp_filepath = tmp_file.name.replace("\\", "/")
+
+ shutil.copy(init_filepath, tmp_filepath)
+ george_script = "tv_LoadProject '\"'\"{}\"'\"'".format(tmp_filepath)
+ self.execute_george_through_file(george_script)
+ self.execute_george("tv_projectclose")
+ os.remove(tmp_filepath)
+
def _on_client_connect(self, *args, **kwargs):
super()._on_client_connect(*args, **kwargs)
+ self._open_init_file()
# Register as "ready to work" worker
self._worker_connection.register_as_worker()
diff --git a/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py b/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py
index 65cef14703..8edaf4f67b 100644
--- a/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py
+++ b/openpype/hosts/webpublisher/plugins/publish/collect_published_files.py
@@ -108,15 +108,18 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
instance.data["representations"] = self._get_single_repre(
task_dir, task_data["files"], tags
)
- file_url = os.path.join(task_dir, task_data["files"][0])
- no_of_frames = self._get_number_of_frames(file_url)
- if no_of_frames:
+ if family != 'workfile':
+ file_url = os.path.join(task_dir, task_data["files"][0])
try:
- frame_end = int(frame_start) + math.ceil(no_of_frames)
- instance.data["frameEnd"] = math.ceil(frame_end) - 1
- self.log.debug("frameEnd:: {}".format(
- instance.data["frameEnd"]))
- except ValueError:
+ no_of_frames = self._get_number_of_frames(file_url)
+ if no_of_frames:
+ frame_end = int(frame_start) + \
+ math.ceil(no_of_frames)
+ frame_end = math.ceil(frame_end) - 1
+ instance.data["frameEnd"] = frame_end
+ self.log.debug("frameEnd:: {}".format(
+ instance.data["frameEnd"]))
+ except Exception:
self.log.warning("Unable to count frames "
"duration for {}".format(file_url))
@@ -209,7 +212,6 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
msg = "No family found for combination of " +\
"task_type: {}, is_sequence:{}, extension: {}".format(
task_type, is_sequence, extension)
- found_family = "render"
assert found_family, msg
return (found_family,
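The frame-range arithmetic the collector performs can be isolated as below (the helper name is hypothetical); the end frame is inclusive, so one frame is subtracted after adding the rounded-up frame count:

```python
import math

# frame_start may arrive as a string from task data; the count may be a
# float from ffprobe, hence the ceil.
def compute_frame_end(frame_start, no_of_frames):
    return int(frame_start) + math.ceil(no_of_frames) - 1
```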
diff --git a/openpype/lib/applications.py b/openpype/lib/applications.py
index 5821c863d7..07b91dda03 100644
--- a/openpype/lib/applications.py
+++ b/openpype/lib/applications.py
@@ -13,7 +13,8 @@ import six
from openpype.settings import (
get_system_settings,
- get_project_settings
+ get_project_settings,
+ get_local_settings
)
from openpype.settings.constants import (
METADATA_KEYS,
@@ -1272,6 +1273,9 @@ class EnvironmentPrepData(dict):
if data.get("env") is None:
data["env"] = os.environ.copy()
+ if "system_settings" not in data:
+ data["system_settings"] = get_system_settings()
+
super(EnvironmentPrepData, self).__init__(data)
@@ -1395,8 +1399,27 @@ def prepare_app_environments(data, env_group=None, implementation_envs=True):
app = data["app"]
log = data["log"]
+ source_env = data["env"].copy()
- _add_python_version_paths(app, data["env"], log)
+ _add_python_version_paths(app, source_env, log)
+
+ # Use environments from local settings
+ filtered_local_envs = {}
+ system_settings = data["system_settings"]
+ whitelist_envs = system_settings["general"].get("local_env_white_list")
+ if whitelist_envs:
+ local_settings = get_local_settings()
+ local_envs = local_settings.get("environments") or {}
+ filtered_local_envs = {
+ key: value
+ for key, value in local_envs.items()
+ if key in whitelist_envs
+ }
+
+ # Apply local environment variables for already existing values
+ for key, value in filtered_local_envs.items():
+ if key in source_env:
+ source_env[key] = value
# `added_env_keys` has debug purpose
added_env_keys = {app.group.name, app.name}
@@ -1441,10 +1464,19 @@ def prepare_app_environments(data, env_group=None, implementation_envs=True):
# Choose right platform
tool_env = parse_environments(_env_values, env_group)
+
+ # Apply local environment variables
+ # - must happen between all values because they may be used during
+ # merge
+ for key, value in filtered_local_envs.items():
+ if key in tool_env:
+ tool_env[key] = value
+
# Merge dictionaries
env_values = _merge_env(tool_env, env_values)
- merged_env = _merge_env(env_values, data["env"])
+ merged_env = _merge_env(env_values, source_env)
+
loaded_env = acre.compute(merged_env, cleanup=False)
final_env = None
@@ -1464,7 +1496,7 @@ def prepare_app_environments(data, env_group=None, implementation_envs=True):
if final_env is None:
final_env = loaded_env
- keys_to_remove = set(data["env"].keys()) - set(final_env.keys())
+ keys_to_remove = set(source_env.keys()) - set(final_env.keys())
# Update env
data["env"].update(final_env)
@@ -1611,7 +1643,6 @@ def _prepare_last_workfile(data, workdir):
result will be stored.
workdir (str): Path to folder where workfiles should be stored.
"""
- import avalon.api
from openpype.pipeline import HOST_WORKFILE_EXTENSIONS
log = data["log"]
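The hunk above filters local-settings environment variables through a system-settings whitelist and applies them only over keys that already exist in the prepared environment. A minimal sketch of that filtering (the helper name `apply_local_env_overrides` is illustrative, not part of the patch):

```python
def apply_local_env_overrides(source_env, local_envs, whitelist_envs):
    """Apply whitelisted local-settings variables over existing values.

    A local value is used only when its key is whitelisted in system
    settings and is already present in the environment being prepared,
    mirroring the behavior added to prepare_app_environments.
    """
    filtered_local_envs = {
        key: value
        for key, value in local_envs.items()
        if key in whitelist_envs
    }
    for key, value in filtered_local_envs.items():
        if key in source_env:
            source_env[key] = value
    return source_env
```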
diff --git a/openpype/lib/transcoding.py b/openpype/lib/transcoding.py
index 8e79aba0ae..c2fecf6628 100644
--- a/openpype/lib/transcoding.py
+++ b/openpype/lib/transcoding.py
@@ -17,6 +17,9 @@ from .vendor_bin_utils import (
# Max length of string that is supported by ffmpeg
MAX_FFMPEG_STRING_LEN = 8196
+# Symbols that are not allowed in attribute values passed to ffmpeg
+NOT_ALLOWED_FFMPEG_CHARS = ("\"", )
+
# OIIO known xml tags
STRING_TAGS = {
"format"
@@ -367,11 +370,15 @@ def should_convert_for_ffmpeg(src_filepath):
return None
for attr_value in input_info["attribs"].values():
- if (
- isinstance(attr_value, str)
- and len(attr_value) > MAX_FFMPEG_STRING_LEN
- ):
+ if not isinstance(attr_value, str):
+ continue
+
+ if len(attr_value) > MAX_FFMPEG_STRING_LEN:
return True
+
+ for char in NOT_ALLOWED_FFMPEG_CHARS:
+ if char in attr_value:
+ return True
return False
@@ -422,7 +429,12 @@ def convert_for_ffmpeg(
compression = "none"
# Prepare subprocess arguments
- oiio_cmd = [get_oiio_tools_path()]
+ oiio_cmd = [
+ get_oiio_tools_path(),
+
+ # Don't add any additional attributes
+ "--nosoftwareattrib",
+ ]
# Add input compression if available
if compression:
oiio_cmd.extend(["--compression", compression])
@@ -458,23 +470,33 @@ def convert_for_ffmpeg(
"--frames", "{}-{}".format(input_frame_start, input_frame_end)
])
- ignore_attr_changes_added = False
for attr_name, attr_value in input_info["attribs"].items():
if not isinstance(attr_value, str):
continue
# Remove attributes that have string value longer than allowed length
- # for ffmpeg
+ # for ffmpeg or when they contain disallowed symbols
+ erase_reason = "Missing reason"
+ erase_attribute = False
if len(attr_value) > MAX_FFMPEG_STRING_LEN:
- if not ignore_attr_changes_added:
- # Attrite changes won't be added to attributes itself
- ignore_attr_changes_added = True
- oiio_cmd.append("--sansattrib")
+ erase_reason = "has too long value ({} chars)".format(
+ len(attr_value)
+ )
+ erase_attribute = True
+
+ if not erase_attribute:
+ for char in NOT_ALLOWED_FFMPEG_CHARS:
+ if char in attr_value:
+ erase_attribute = True
+ erase_reason = (
+ "contains unsupported character \"{}\""
+ ).format(char)
+ break
+
+ if erase_attribute:
# Set attribute to empty string
logger.info((
- "Removed attribute \"{}\" from metadata"
- " because has too long value ({} chars)."
- ).format(attr_name, len(attr_value)))
+ "Removed attribute \"{}\" from metadata because {}."
+ ).format(attr_name, erase_reason))
oiio_cmd.extend(["--eraseattrib", attr_name])
# Add last argument - path to output
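The attribute checks in this hunk boil down to one decision per string attribute; a standalone sketch under the same constants (the function name `attr_erase_reason` is illustrative, not from the patch):

```python
MAX_FFMPEG_STRING_LEN = 8196
NOT_ALLOWED_FFMPEG_CHARS = ("\"",)


def attr_erase_reason(attr_value):
    """Return a reason string when an attribute must be erased, else None.

    Mirrors the checks from convert_for_ffmpeg: overly long string values
    and values containing characters ffmpeg cannot handle.
    """
    if not isinstance(attr_value, str):
        return None
    if len(attr_value) > MAX_FFMPEG_STRING_LEN:
        return "has too long value ({} chars)".format(len(attr_value))
    for char in NOT_ALLOWED_FFMPEG_CHARS:
        if char in attr_value:
            return "contains unsupported character \"{}\"".format(char)
    return None
```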
diff --git a/openpype/modules/ftrack/lib/custom_attributes.py b/openpype/modules/ftrack/lib/custom_attributes.py
index 29c6b5e7f8..2f53815368 100644
--- a/openpype/modules/ftrack/lib/custom_attributes.py
+++ b/openpype/modules/ftrack/lib/custom_attributes.py
@@ -135,7 +135,7 @@ def query_custom_attributes(
output.extend(
session.query(
(
- "select value, entity_id from {}"
+ "select value, entity_id, configuration_id from {}"
" where entity_id in ({}) and configuration_id in ({})"
).format(
table_name,
diff --git a/openpype/modules/ftrack/plugins/publish/collect_custom_attributes_data.py b/openpype/modules/ftrack/plugins/publish/collect_custom_attributes_data.py
new file mode 100644
index 0000000000..43fa3bc3f8
--- /dev/null
+++ b/openpype/modules/ftrack/plugins/publish/collect_custom_attributes_data.py
@@ -0,0 +1,148 @@
+"""
+Requires:
+ context > ftrackSession
+ context > ftrackEntity
+ instance > ftrackEntity
+
+Provides:
+ instance > customData > ftrack
+"""
+import copy
+
+import pyblish.api
+
+
+class CollectFtrackCustomAttributeData(pyblish.api.ContextPlugin):
+ """Collect custom attribute values and store them to customData.
+
+ Data are stored into each instance in context under
+ instance.data["customData"]["ftrack"].
+
+ Hierarchical attributes are not looked up properly; to support them, the
+ custom attribute values lookup must be extended.
+ """
+
+ order = pyblish.api.CollectorOrder + 0.4992
+ label = "Collect Ftrack Custom Attribute Data"
+
+ # Names of custom attributes to look for
+ custom_attribute_keys = []
+
+ def process(self, context):
+ if not self.custom_attribute_keys:
+ self.log.info("Custom attribute keys are not set. Skipping")
+ return
+
+ ftrack_entities_by_id = {}
+ default_entity_id = None
+
+ context_entity = context.data.get("ftrackEntity")
+ if context_entity:
+ entity_id = context_entity["id"]
+ default_entity_id = entity_id
+ ftrack_entities_by_id[entity_id] = context_entity
+
+ instances_by_entity_id = {
+ default_entity_id: []
+ }
+ for instance in context:
+ entity = instance.data.get("ftrackEntity")
+ if not entity:
+ instances_by_entity_id[default_entity_id].append(instance)
+ continue
+
+ entity_id = entity["id"]
+ ftrack_entities_by_id[entity_id] = entity
+ if entity_id not in instances_by_entity_id:
+ instances_by_entity_id[entity_id] = []
+ instances_by_entity_id[entity_id].append(instance)
+
+ if not ftrack_entities_by_id:
+ self.log.info("Ftrack entities are not set. Skipping")
+ return
+
+ session = context.data["ftrackSession"]
+ custom_attr_key_by_id = self.query_attr_confs(session)
+ if not custom_attr_key_by_id:
+ self.log.info((
+ "Didn't find any of the defined custom attributes: {}"
+ ).format(", ".join(self.custom_attribute_keys)))
+ return
+
+ entity_ids = list(instances_by_entity_id.keys())
+ values_by_entity_id = self.query_attr_values(
+ session, entity_ids, custom_attr_key_by_id
+ )
+
+ for entity_id, instances in instances_by_entity_id.items():
+ if entity_id not in values_by_entity_id:
+ # Use default empty values
+ entity_id = None
+
+ for instance in instances:
+ value = copy.deepcopy(values_by_entity_id[entity_id])
+ if "customData" not in instance.data:
+ instance.data["customData"] = {}
+ instance.data["customData"]["ftrack"] = value
+ instance_label = (
+ instance.data.get("label") or instance.data["name"]
+ )
+ self.log.debug((
+ "Added ftrack custom data to instance \"{}\": {}"
+ ).format(instance_label, value))
+
+ def query_attr_values(self, session, entity_ids, custom_attr_key_by_id):
+ # Prepare values for query
+ entity_ids_joined = ",".join([
+ '"{}"'.format(entity_id)
+ for entity_id in entity_ids
+ ])
+ conf_ids_joined = ",".join([
+ '"{}"'.format(conf_id)
+ for conf_id in custom_attr_key_by_id.keys()
+ ])
+ # Query custom attribute values
+ value_items = session.query(
+ (
+ "select value, entity_id, configuration_id"
+ " from CustomAttributeValue"
+ " where entity_id in ({}) and configuration_id in ({})"
+ ).format(
+ entity_ids_joined,
+ conf_ids_joined
+ )
+ ).all()
+
+ # Prepare default value output per entity id
+ values_by_key = {
+ key: None for key in self.custom_attribute_keys
+ }
+ # Prepare all entity ids that were queried
+ values_by_entity_id = {
+ entity_id: copy.deepcopy(values_by_key)
+ for entity_id in entity_ids
+ }
+ # Add none entity id which is used as default value
+ values_by_entity_id[None] = copy.deepcopy(values_by_key)
+ # Go through queried data and store them
+ for item in value_items:
+ conf_id = item["configuration_id"]
+ conf_key = custom_attr_key_by_id[conf_id]
+ entity_id = item["entity_id"]
+ values_by_entity_id[entity_id][conf_key] = item["value"]
+ return values_by_entity_id
+
+ def query_attr_confs(self, session):
+ custom_attributes = set(self.custom_attribute_keys)
+ cust_attrs_query = (
+ "select id, key from CustomAttributeConfiguration"
+ " where key in ({})"
+ ).format(", ".join(
+ ["\"{}\"".format(attr_name) for attr_name in custom_attributes]
+ ))
+
+ custom_attr_confs = session.query(cust_attrs_query).all()
+ return {
+ conf["id"]: conf["key"]
+ for conf in custom_attr_confs
+ }
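The collector's per-entity default fallback (every queried entity plus a `None` entry starts with empty values, and queried values are filled in on top) can be sketched as follows; helper and parameter names are illustrative:

```python
import copy


def build_values_by_entity_id(entity_ids, attr_keys, value_items, key_by_conf_id):
    """Map each entity id (plus a None default) to per-attribute values.

    Entities that had no value queried keep None for every key; unknown
    entity ids can then fall back to the None entry, as the plugin does.
    """
    defaults = {key: None for key in attr_keys}
    values_by_entity_id = {
        entity_id: copy.deepcopy(defaults)
        for entity_id in entity_ids
    }
    # None is used as the fallback key for entities without queried values
    values_by_entity_id[None] = copy.deepcopy(defaults)
    for item in value_items:
        conf_key = key_by_conf_id[item["configuration_id"]]
        values_by_entity_id[item["entity_id"]][conf_key] = item["value"]
    return values_by_entity_id
```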
diff --git a/openpype/modules/ftrack/plugins/publish/collect_ftrack_api.py b/openpype/modules/ftrack/plugins/publish/collect_ftrack_api.py
index 07af217fb6..436a61cc18 100644
--- a/openpype/modules/ftrack/plugins/publish/collect_ftrack_api.py
+++ b/openpype/modules/ftrack/plugins/publish/collect_ftrack_api.py
@@ -6,7 +6,7 @@ import avalon.api
class CollectFtrackApi(pyblish.api.ContextPlugin):
""" Collects an ftrack session and the current task id. """
- order = pyblish.api.CollectorOrder + 0.4999
+ order = pyblish.api.CollectorOrder + 0.4991
label = "Collect Ftrack Api"
def process(self, context):
diff --git a/openpype/modules/ftrack/plugins/publish/collect_ftrack_family.py b/openpype/modules/ftrack/plugins/publish/collect_ftrack_family.py
index 70030acad9..95987fe42e 100644
--- a/openpype/modules/ftrack/plugins/publish/collect_ftrack_family.py
+++ b/openpype/modules/ftrack/plugins/publish/collect_ftrack_family.py
@@ -25,7 +25,7 @@ class CollectFtrackFamily(pyblish.api.InstancePlugin):
based on 'families' (editorial drives it by presence of 'review')
"""
label = "Collect Ftrack Family"
- order = pyblish.api.CollectorOrder + 0.4998
+ order = pyblish.api.CollectorOrder + 0.4990
profiles = None
diff --git a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py
index 7ebf807f55..650c59fae8 100644
--- a/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py
+++ b/openpype/modules/ftrack/plugins/publish/integrate_ftrack_api.py
@@ -263,7 +263,9 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
self.log.info("Creating asset types with short names: {}".format(
", ".join(asset_type_names_by_missing_shorts.keys())
))
- for missing_short, type_name in asset_type_names_by_missing_shorts:
+ for missing_short, type_name in (
+ asset_type_names_by_missing_shorts.items()
+ ):
# Use short for name if name is not defined
if not type_name:
type_name = missing_short
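The fix above adds the missing `.items()` call: iterating a dict directly yields only its keys, so two-variable unpacking either raises or silently unpacks the characters of a two-character key. A minimal illustration with hypothetical data:

```python
asset_type_names_by_missing_shorts = {"geo": "Geometry", "rig": None}

# Without .items() the loop iterates keys only; unpacking the
# three-character key "geo" into two variables raises ValueError.
try:
    for missing_short, type_name in asset_type_names_by_missing_shorts:
        pass
except ValueError as exc:
    unpack_error = exc

# The fixed loop unpacks key/value pairs and falls back to the short
# code when no type name is defined, as the hunk does.
pairs = []
for missing_short, type_name in asset_type_names_by_missing_shorts.items():
    if not type_name:
        type_name = missing_short
    pairs.append((missing_short, type_name))
```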
diff --git a/openpype/modules/sync_server/providers/dropbox.py b/openpype/modules/sync_server/providers/dropbox.py
index f5910299e5..dfc42fed75 100644
--- a/openpype/modules/sync_server/providers/dropbox.py
+++ b/openpype/modules/sync_server/providers/dropbox.py
@@ -17,6 +17,7 @@ class DropboxHandler(AbstractProvider):
self.active = False
self.site_name = site_name
self.presets = presets
+ self.dbx = None
if not self.presets:
log.info(
@@ -24,6 +25,11 @@ class DropboxHandler(AbstractProvider):
)
return
+ if not self.presets["enabled"]:
+ log.debug("Sync Server: Site {} not enabled for {}.".
+ format(site_name, project_name))
+ return
+
token = self.presets.get("token", "")
if not token:
msg = "Sync Server: No access token for dropbox provider"
@@ -44,16 +50,13 @@ class DropboxHandler(AbstractProvider):
log.info(msg)
return
- self.dbx = None
-
- if self.presets["enabled"]:
- try:
- self.dbx = self._get_service(
- token, acting_as_member, team_folder_name
- )
- except Exception as e:
- log.info("Could not establish dropbox object: {}".format(e))
- return
+ try:
+ self.dbx = self._get_service(
+ token, acting_as_member, team_folder_name
+ )
+ except Exception as e:
+ log.info("Could not establish dropbox object: {}".format(e))
+ return
super(AbstractProvider, self).__init__()
diff --git a/openpype/modules/sync_server/providers/gdrive.py b/openpype/modules/sync_server/providers/gdrive.py
index b783f7958b..aa7329b104 100644
--- a/openpype/modules/sync_server/providers/gdrive.py
+++ b/openpype/modules/sync_server/providers/gdrive.py
@@ -73,6 +73,11 @@ class GDriveHandler(AbstractProvider):
format(site_name))
return
+ if not self.presets["enabled"]:
+ log.debug("Sync Server: Site {} not enabled for {}.".
+ format(site_name, project_name))
+ return
+
current_platform = platform.system().lower()
cred_path = self.presets.get("credentials_url", {}). \
get(current_platform) or ''
@@ -97,11 +102,10 @@ class GDriveHandler(AbstractProvider):
return
self.service = None
- if self.presets["enabled"]:
- self.service = self._get_gd_service(cred_path)
+ self.service = self._get_gd_service(cred_path)
- self._tree = tree
- self.active = True
+ self._tree = tree
+ self.active = True
def is_active(self):
"""
diff --git a/openpype/modules/sync_server/sync_server_module.py b/openpype/modules/sync_server/sync_server_module.py
index caf58503f1..2c27571f9f 100644
--- a/openpype/modules/sync_server/sync_server_module.py
+++ b/openpype/modules/sync_server/sync_server_module.py
@@ -848,6 +848,11 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
if self.enabled and sync_settings.get('enabled'):
sites.append(self.LOCAL_SITE)
+ active_site = sync_settings["config"]["active_site"]
+ # for the background process running from the Tray
+ if active_site not in sites and active_site == get_local_site_id():
+ sites.append(active_site)
+
return sites
def tray_init(self):
diff --git a/openpype/plugins/publish/extract_burnin.py b/openpype/plugins/publish/extract_burnin.py
index b2ca8850b6..41c84103a6 100644
--- a/openpype/plugins/publish/extract_burnin.py
+++ b/openpype/plugins/publish/extract_burnin.py
@@ -221,11 +221,17 @@ class ExtractBurnin(openpype.api.Extractor):
filled_anatomy = anatomy.format_all(burnin_data)
burnin_data["anatomy"] = filled_anatomy.get_solved()
- # Add context data burnin_data.
- burnin_data["custom"] = (
+ custom_data = copy.deepcopy(
+ instance.data.get("customData") or {}
+ )
+ # Backwards compatibility (since 2022/04/07)
+ custom_data.update(
instance.data.get("custom_burnin_data") or {}
)
+ # Add context data burnin_data.
+ burnin_data["custom"] = custom_data
+
# Add source camera name to burnin data
camera_name = repre.get("camera_name")
if camera_name:
diff --git a/openpype/plugins/publish/extract_review.py b/openpype/plugins/publish/extract_review.py
index 3ecea1f8bd..d569d82762 100644
--- a/openpype/plugins/publish/extract_review.py
+++ b/openpype/plugins/publish/extract_review.py
@@ -188,8 +188,7 @@ class ExtractReview(pyblish.api.InstancePlugin):
outputs_per_repres = self._get_outputs_per_representations(
instance, profile_outputs
)
- fill_data = copy.deepcopy(instance.data["anatomyData"])
- for repre, outputs in outputs_per_repres:
+ for repre, outpu_defs in outputs_per_repres:
# Check if input should be preconverted before processing
# Store original staging dir (it's value may change)
src_repre_staging_dir = repre["stagingDir"]
@@ -241,126 +240,143 @@ class ExtractReview(pyblish.api.InstancePlugin):
self.log
)
- for _output_def in outputs:
- output_def = copy.deepcopy(_output_def)
- # Make sure output definition has "tags" key
- if "tags" not in output_def:
- output_def["tags"] = []
-
- if "burnins" not in output_def:
- output_def["burnins"] = []
-
- # Create copy of representation
- new_repre = copy.deepcopy(repre)
- # Make sure new representation has origin staging dir
- # - this is because source representation may change
- # it's staging dir because of ffmpeg conversion
- new_repre["stagingDir"] = src_repre_staging_dir
-
- # Remove "delete" tag from new repre if there is
- if "delete" in new_repre["tags"]:
- new_repre["tags"].remove("delete")
-
- # Add additional tags from output definition to representation
- for tag in output_def["tags"]:
- if tag not in new_repre["tags"]:
- new_repre["tags"].append(tag)
-
- # Add burnin link from output definition to representation
- for burnin in output_def["burnins"]:
- if burnin not in new_repre.get("burnins", []):
- if not new_repre.get("burnins"):
- new_repre["burnins"] = []
- new_repre["burnins"].append(str(burnin))
-
- self.log.debug(
- "Linked burnins: `{}`".format(new_repre.get("burnins"))
+ try:
+ self._render_output_definitions(
+ instance, repre, src_repre_staging_dir, outpu_defs
)
- self.log.debug(
- "New representation tags: `{}`".format(
- new_repre.get("tags"))
+ finally:
+ # Make sure the temporary staging dir is cleaned up and the
+ # representation has its original stagingDir restored
+ if do_convert:
+ # Set staging dir of source representation back to previous
+ # value
+ repre["stagingDir"] = src_repre_staging_dir
+ if os.path.exists(new_staging_dir):
+ shutil.rmtree(new_staging_dir)
+
+ def _render_output_definitions(
+ self, instance, repre, src_repre_staging_dir, outpu_defs
+ ):
+ fill_data = copy.deepcopy(instance.data["anatomyData"])
+ for _output_def in outpu_defs:
+ output_def = copy.deepcopy(_output_def)
+ # Make sure output definition has "tags" key
+ if "tags" not in output_def:
+ output_def["tags"] = []
+
+ if "burnins" not in output_def:
+ output_def["burnins"] = []
+
+ # Create copy of representation
+ new_repre = copy.deepcopy(repre)
+ # Make sure new representation has origin staging dir
+ # - this is because source representation may change
+ # its staging dir because of ffmpeg conversion
+ new_repre["stagingDir"] = src_repre_staging_dir
+
+ # Remove "delete" tag from new repre if there is
+ if "delete" in new_repre["tags"]:
+ new_repre["tags"].remove("delete")
+
+ # Add additional tags from output definition to representation
+ for tag in output_def["tags"]:
+ if tag not in new_repre["tags"]:
+ new_repre["tags"].append(tag)
+
+ # Add burnin link from output definition to representation
+ for burnin in output_def["burnins"]:
+ if burnin not in new_repre.get("burnins", []):
+ if not new_repre.get("burnins"):
+ new_repre["burnins"] = []
+ new_repre["burnins"].append(str(burnin))
+
+ self.log.debug(
+ "Linked burnins: `{}`".format(new_repre.get("burnins"))
+ )
+
+ self.log.debug(
+ "New representation tags: `{}`".format(
+ new_repre.get("tags"))
+ )
+
+ temp_data = self.prepare_temp_data(instance, repre, output_def)
+ files_to_clean = []
+ if temp_data["input_is_sequence"]:
+ self.log.info("Filling gaps in sequence.")
+ files_to_clean = self.fill_sequence_gaps(
+ temp_data["origin_repre"]["files"],
+ new_repre["stagingDir"],
+ temp_data["frame_start"],
+ temp_data["frame_end"])
+
+ # create or update outputName
+ output_name = new_repre.get("outputName", "")
+ output_ext = new_repre["ext"]
+ if output_name:
+ output_name += "_"
+ output_name += output_def["filename_suffix"]
+ if temp_data["without_handles"]:
+ output_name += "_noHandles"
+
+ # add outputName to anatomy format fill_data
+ fill_data.update({
+ "output": output_name,
+ "ext": output_ext
+ })
+
+ try: # temporary until oiiotool is supported cross platform
+ ffmpeg_args = self._ffmpeg_arguments(
+ output_def, instance, new_repre, temp_data, fill_data
)
-
- temp_data = self.prepare_temp_data(
- instance, repre, output_def)
- files_to_clean = []
- if temp_data["input_is_sequence"]:
- self.log.info("Filling gaps in sequence.")
- files_to_clean = self.fill_sequence_gaps(
- temp_data["origin_repre"]["files"],
- new_repre["stagingDir"],
- temp_data["frame_start"],
- temp_data["frame_end"])
-
- # create or update outputName
- output_name = new_repre.get("outputName", "")
- output_ext = new_repre["ext"]
- if output_name:
- output_name += "_"
- output_name += output_def["filename_suffix"]
- if temp_data["without_handles"]:
- output_name += "_noHandles"
-
- # add outputName to anatomy format fill_data
- fill_data.update({
- "output": output_name,
- "ext": output_ext
- })
-
- try: # temporary until oiiotool is supported cross platform
- ffmpeg_args = self._ffmpeg_arguments(
- output_def, instance, new_repre, temp_data, fill_data
+ except ZeroDivisionError:
+ # TODO recalculate width and height using OIIO before
+ # conversion
+ if 'exr' in temp_data["origin_repre"]["ext"]:
+ self.log.warning(
+ (
+ "Unsupported compression on input files."
+ " Skipping!!!"
+ ),
+ exc_info=True
)
- except ZeroDivisionError:
- if 'exr' in temp_data["origin_repre"]["ext"]:
- self.log.debug("Unsupported compression on input " +
- "files. Skipping!!!")
- return
- raise NotImplementedError
+ return
+ raise NotImplementedError
- subprcs_cmd = " ".join(ffmpeg_args)
+ subprcs_cmd = " ".join(ffmpeg_args)
- # run subprocess
- self.log.debug("Executing: {}".format(subprcs_cmd))
+ # run subprocess
+ self.log.debug("Executing: {}".format(subprcs_cmd))
- openpype.api.run_subprocess(
- subprcs_cmd, shell=True, logger=self.log
- )
+ openpype.api.run_subprocess(
+ subprcs_cmd, shell=True, logger=self.log
+ )
- # delete files added to fill gaps
- if files_to_clean:
- for f in files_to_clean:
- os.unlink(f)
+ # delete files added to fill gaps
+ if files_to_clean:
+ for f in files_to_clean:
+ os.unlink(f)
- new_repre.update({
- "name": "{}_{}".format(output_name, output_ext),
- "outputName": output_name,
- "outputDef": output_def,
- "frameStartFtrack": temp_data["output_frame_start"],
- "frameEndFtrack": temp_data["output_frame_end"],
- "ffmpeg_cmd": subprcs_cmd
- })
+ new_repre.update({
+ "name": "{}_{}".format(output_name, output_ext),
+ "outputName": output_name,
+ "outputDef": output_def,
+ "frameStartFtrack": temp_data["output_frame_start"],
+ "frameEndFtrack": temp_data["output_frame_end"],
+ "ffmpeg_cmd": subprcs_cmd
+ })
- # Force to pop these key if are in new repre
- new_repre.pop("preview", None)
- new_repre.pop("thumbnail", None)
- if "clean_name" in new_repre.get("tags", []):
- new_repre.pop("outputName")
+ # Force to pop these key if are in new repre
+ new_repre.pop("preview", None)
+ new_repre.pop("thumbnail", None)
+ if "clean_name" in new_repre.get("tags", []):
+ new_repre.pop("outputName")
- # adding representation
- self.log.debug(
- "Adding new representation: {}".format(new_repre)
- )
- instance.data["representations"].append(new_repre)
-
- # Cleanup temp staging dir after procesisng of output definitions
- if do_convert:
- temp_dir = repre["stagingDir"]
- shutil.rmtree(temp_dir)
- # Set staging dir of source representation back to previous
- # value
- repre["stagingDir"] = src_repre_staging_dir
+ # adding representation
+ self.log.debug(
+ "Adding new representation: {}".format(new_repre)
+ )
+ instance.data["representations"].append(new_repre)
def input_is_sequence(self, repre):
"""Deduce from representation data if input is sequence."""
diff --git a/openpype/plugins/publish/extract_review_slate.py b/openpype/plugins/publish/extract_review_slate.py
index 505ae75169..49f0eac41d 100644
--- a/openpype/plugins/publish/extract_review_slate.py
+++ b/openpype/plugins/publish/extract_review_slate.py
@@ -158,13 +158,15 @@ class ExtractReviewSlate(openpype.api.Extractor):
])
if use_legacy_code:
+ format_args = []
codec_args = repre["_profile"].get('codec', [])
output_args.extend(codec_args)
# preset's output data
output_args.extend(repre["_profile"].get('output', []))
else:
# Codecs are copied from source for whole input
- codec_args = self._get_codec_args(repre)
+ format_args, codec_args = self._get_format_codec_args(repre)
+ output_args.extend(format_args)
output_args.extend(codec_args)
# make sure colors are correct
@@ -266,8 +268,14 @@ class ExtractReviewSlate(openpype.api.Extractor):
"-safe", "0",
"-i", conc_text_path,
"-c", "copy",
- output_path
]
+ # NOTE: Added because of OP Atom demuxers
+ # Add format arguments if there are any
+ # - keep format of output
+ if format_args:
+ concat_args.extend(format_args)
+ # Add final output path
+ concat_args.append(output_path)
# ffmpeg concat subprocess
self.log.debug(
@@ -338,7 +346,7 @@ class ExtractReviewSlate(openpype.api.Extractor):
return vf_back
- def _get_codec_args(self, repre):
+ def _get_format_codec_args(self, repre):
"""Detect possible codec arguments from representation."""
codec_args = []
@@ -361,13 +369,9 @@ class ExtractReviewSlate(openpype.api.Extractor):
return codec_args
source_ffmpeg_cmd = repre.get("ffmpeg_cmd")
- codec_args.extend(
- get_ffmpeg_format_args(ffprobe_data, source_ffmpeg_cmd)
- )
- codec_args.extend(
- get_ffmpeg_codec_args(
- ffprobe_data, source_ffmpeg_cmd, logger=self.log
- )
+ format_args = get_ffmpeg_format_args(ffprobe_data, source_ffmpeg_cmd)
+ codec_args = get_ffmpeg_codec_args(
+ ffprobe_data, source_ffmpeg_cmd, logger=self.log
)
- return codec_args
+ return format_args, codec_args
diff --git a/openpype/plugins/publish/integrate_new.py b/openpype/plugins/publish/integrate_new.py
index 959fd3bbee..5dcbb8fabd 100644
--- a/openpype/plugins/publish/integrate_new.py
+++ b/openpype/plugins/publish/integrate_new.py
@@ -8,6 +8,7 @@ import errno
import six
import re
import shutil
+from collections import deque, defaultdict
from bson.objectid import ObjectId
from pymongo import DeleteOne, InsertOne
@@ -1116,18 +1117,17 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
rec["sites"].append(meta)
already_attached_sites[meta["name"]] = None
+ # add alternative sites
+ rec, already_attached_sites = self._add_alternative_sites(
+ system_sync_server_presets, already_attached_sites, rec)
+
# add skeleton for site where it should be always synced to
- for always_on_site in always_accesible:
+ for always_on_site in set(always_accesible):
if always_on_site not in already_attached_sites.keys():
meta = {"name": always_on_site.strip()}
rec["sites"].append(meta)
already_attached_sites[meta["name"]] = None
- # add alternative sites
- rec = self._add_alternative_sites(system_sync_server_presets,
- already_attached_sites,
- rec)
-
log.debug("final sites:: {}".format(rec["sites"]))
return rec
@@ -1158,22 +1158,60 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
"""
conf_sites = system_sync_server_presets.get("sites", {})
+ alt_site_pairs = self._get_alt_site_pairs(conf_sites)
+
+ already_attached_keys = list(already_attached_sites.keys())
+ for added_site in already_attached_keys:
+ real_created = already_attached_sites[added_site]
+ for alt_site in alt_site_pairs.get(added_site, []):
+ if alt_site in already_attached_sites.keys():
+ continue
+ meta = {"name": alt_site}
+ # alt site inherits state of 'created_dt'
+ if real_created:
+ meta["created_dt"] = real_created
+ rec["sites"].append(meta)
+ already_attached_sites[meta["name"]] = real_created
+
+ return rec, already_attached_sites
+
+ def _get_alt_site_pairs(self, conf_sites):
+ """Returns dict of site and its alternative sites.
+
+ If 'site' has an alternative site, the alternative site also gets
+ 'site' as its alternative site.
+ Args:
+ conf_sites (dict)
+ Returns:
+ (dict): {'site': [alternative sites]...}
+ """
+ alt_site_pairs = defaultdict(list)
for site_name, site_info in conf_sites.items():
alt_sites = set(site_info.get("alternative_sites", []))
- already_attached_keys = list(already_attached_sites.keys())
- for added_site in already_attached_keys:
- if added_site in alt_sites:
- if site_name in already_attached_keys:
- continue
- meta = {"name": site_name}
- real_created = already_attached_sites[added_site]
- # alt site inherits state of 'created_dt'
- if real_created:
- meta["created_dt"] = real_created
- rec["sites"].append(meta)
- already_attached_sites[meta["name"]] = real_created
+ alt_site_pairs[site_name].extend(alt_sites)
- return rec
+ for alt_site in alt_sites:
+ alt_site_pairs[alt_site].append(site_name)
+
+ for site_name, alt_sites in alt_site_pairs.items():
+ sites_queue = deque(alt_sites)
+ while sites_queue:
+ alt_site = sites_queue.popleft()
+
+ # safety against wrong config
+ # {"SFTP": {"alternative_sites": ["SFTP"]}}
+ if alt_site == site_name or alt_site not in alt_site_pairs:
+ continue
+
+ for alt_alt_site in alt_site_pairs[alt_site]:
+ if (
+ alt_alt_site != site_name
+ and alt_alt_site not in alt_sites
+ ):
+ alt_sites.append(alt_alt_site)
+ sites_queue.append(alt_alt_site)
+
+ return alt_site_pairs
def handle_destination_files(self, integrated_file_sizes, mode):
""" Clean destination files
diff --git a/openpype/settings/defaults/project_settings/flame.json b/openpype/settings/defaults/project_settings/flame.json
index c7188b10b5..ef7a2a4467 100644
--- a/openpype/settings/defaults/project_settings/flame.json
+++ b/openpype/settings/defaults/project_settings/flame.json
@@ -20,6 +20,37 @@
}
},
"publish": {
+ "CollectTimelineInstances": {
+ "xml_preset_attrs_from_comments": [
+ {
+ "name": "width",
+ "type": "number"
+ },
+ {
+ "name": "height",
+ "type": "number"
+ },
+ {
+ "name": "pixelRatio",
+ "type": "float"
+ },
+ {
+ "name": "resizeType",
+ "type": "string"
+ },
+ {
+ "name": "resizeFilter",
+ "type": "string"
+ }
+ ],
+ "add_tasks": [
+ {
+ "name": "compositing",
+ "type": "Compositing",
+ "create_batch_group": true
+ }
+ ]
+ },
"ExtractSubsetResources": {
"keep_original_representation": false,
"export_presets_mapping": {
@@ -31,7 +62,9 @@
"ignore_comment_attrs": false,
"colorspace_out": "ACES - ACEScg",
"representation_add_range": true,
- "representation_tags": []
+ "representation_tags": [],
+ "load_to_batch_group": true,
+ "batch_group_loader_name": "LoadClip"
}
}
}
@@ -58,7 +91,29 @@
],
"reel_group_name": "OpenPype_Reels",
"reel_name": "Loaded",
- "clip_name_template": "{asset}_{subset}_{representation}"
+ "clip_name_template": "{asset}_{subset}_{output}"
+ },
+ "LoadClipBatch": {
+ "enabled": true,
+ "families": [
+ "render2d",
+ "source",
+ "plate",
+ "render",
+ "review"
+ ],
+ "representations": [
+ "exr",
+ "dpx",
+ "jpg",
+ "jpeg",
+ "png",
+ "h264",
+ "mov",
+ "mp4"
+ ],
+ "reel_name": "OP_LoadedReel",
+ "clip_name_template": "{asset}_{subset}_{output}"
}
}
}
\ No newline at end of file
diff --git a/openpype/settings/defaults/project_settings/ftrack.json b/openpype/settings/defaults/project_settings/ftrack.json
index 31d6a70ac7..deade08c0b 100644
--- a/openpype/settings/defaults/project_settings/ftrack.json
+++ b/openpype/settings/defaults/project_settings/ftrack.json
@@ -352,6 +352,10 @@
}
]
},
+ "CollectFtrackCustomAttributeData": {
+ "enabled": false,
+ "custom_attribute_keys": []
+ },
"IntegrateFtrackNote": {
"enabled": true,
"note_template": "{intent}: {comment}",
diff --git a/openpype/settings/defaults/project_settings/nuke.json b/openpype/settings/defaults/project_settings/nuke.json
index 44d7f2d9d0..ab015271ff 100644
--- a/openpype/settings/defaults/project_settings/nuke.json
+++ b/openpype/settings/defaults/project_settings/nuke.json
@@ -160,7 +160,21 @@
}
},
"ExtractSlateFrame": {
- "viewer_lut_raw": false
+ "viewer_lut_raw": false,
+ "key_value_mapping": {
+ "f_submission_note": [
+ true,
+ "{comment}"
+ ],
+ "f_submitting_for": [
+ true,
+ "{intent[value]}"
+ ],
+ "f_vfx_scope_of_work": [
+ false,
+ ""
+ ]
+ }
},
"IncrementScriptVersion": {
"enabled": true,
diff --git a/openpype/settings/defaults/system_settings/general.json b/openpype/settings/defaults/system_settings/general.json
index 5a3e39e5b6..e1785f8709 100644
--- a/openpype/settings/defaults/system_settings/general.json
+++ b/openpype/settings/defaults/system_settings/general.json
@@ -12,6 +12,7 @@
"linux": [],
"darwin": []
},
+ "local_env_white_list": [],
"openpype_path": {
"windows": [],
"darwin": [],
diff --git a/openpype/settings/entities/schemas/README.md b/openpype/settings/entities/schemas/README.md
index fbfd699937..b4bfef2972 100644
--- a/openpype/settings/entities/schemas/README.md
+++ b/openpype/settings/entities/schemas/README.md
@@ -745,6 +745,7 @@ How output of the schema could look like on save:
### label
- add label with note or explanations
- it is possible to use html tags inside the label
+- set `word_wrap` to `true`/`false` if you want to enable word wrapping in the UI (default: `false`)
```
{
diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_flame.json b/openpype/settings/entities/schemas/projects_schema/schema_project_flame.json
index e352f8b132..fe11d63ac2 100644
--- a/openpype/settings/entities/schemas/projects_schema/schema_project_flame.json
+++ b/openpype/settings/entities/schemas/projects_schema/schema_project_flame.json
@@ -136,6 +136,87 @@
"key": "publish",
"label": "Publish plugins",
"children": [
+ {
+ "type": "dict",
+ "collapsible": true,
+ "key": "CollectTimelineInstances",
+ "label": "Collect Timeline Instances",
+ "is_group": true,
+ "children": [
+ {
+ "type": "collapsible-wrap",
+ "label": "XML presets attributes parsable from segment comments",
+ "collapsible": true,
+ "collapsed": true,
+ "children": [
+ {
+ "type": "list",
+ "key": "xml_preset_attrs_from_comments",
+ "object_type": {
+ "type": "dict",
+ "children": [
+ {
+ "type": "text",
+ "key": "name",
+ "label": "Attribute name"
+ },
+ {
+ "key": "type",
+ "label": "Attribute type",
+ "type": "enum",
+ "default": "number",
+ "enum_items": [
+ {
+ "number": "number"
+ },
+ {
+ "float": "float"
+ },
+ {
+ "string": "string"
+ }
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ },
+ {
+ "type": "collapsible-wrap",
+ "label": "Add tasks",
+ "collapsible": true,
+ "collapsed": true,
+ "children": [
+ {
+ "type": "list",
+ "key": "add_tasks",
+ "object_type": {
+ "type": "dict",
+ "children": [
+ {
+ "type": "text",
+ "key": "name",
+ "label": "Task name"
+ },
+ {
+ "key": "type",
+ "label": "Task type",
+ "multiselection": false,
+ "type": "task-types-enum"
+ },
+ {
+ "type": "boolean",
+ "key": "create_batch_group",
+ "label": "Create batch group"
+ }
+ ]
+ }
+ }
+ ]
+ }
+ ]
+ },
{
"type": "dict",
"collapsible": true,
@@ -221,6 +302,20 @@
"type": "text",
"multiline": false
}
+ },
+ {
+ "type": "separator"
+ },
+ {
+ "type": "boolean",
+ "key": "load_to_batch_group",
+ "label": "Load to batch group reel",
+ "default": false
+ },
+ {
+ "type": "text",
+ "key": "batch_group_loader_name",
+ "label": "Use loader name"
}
]
}
@@ -281,6 +376,48 @@
"label": "Clip name template"
}
]
+ },
+ {
+ "type": "dict",
+ "collapsible": true,
+ "key": "LoadClipBatch",
+ "label": "Load as clip to current batch",
+ "checkbox_key": "enabled",
+ "children": [
+ {
+ "type": "boolean",
+ "key": "enabled",
+ "label": "Enabled"
+ },
+ {
+ "type": "list",
+ "key": "families",
+ "label": "Families",
+ "object_type": "text"
+ },
+ {
+ "type": "list",
+ "key": "representations",
+ "label": "Representations",
+ "object_type": "text"
+ },
+ {
+ "type": "separator"
+ },
+ {
+ "type": "text",
+ "key": "reel_name",
+ "label": "Reel name"
+ },
+ {
+ "type": "separator"
+ },
+ {
+ "type": "text",
+ "key": "clip_name_template",
+ "label": "Clip name template"
+ }
+ ]
}
]
}
diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json b/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json
index 5ce9b24b4b..47effb3dbd 100644
--- a/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json
+++ b/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json
@@ -725,6 +725,31 @@
}
]
},
+ {
+ "type": "dict",
+ "collapsible": true,
+ "checkbox_key": "enabled",
+ "key": "CollectFtrackCustomAttributeData",
+ "label": "Collect Custom Attribute Data",
+ "is_group": true,
+ "children": [
+ {
+ "type": "boolean",
+ "key": "enabled",
+ "label": "Enabled"
+ },
+ {
+ "type": "label",
+ "label": "Collect custom attributes of ftrack entities that can be used in some templates during publishing."
+ },
+ {
+ "type": "list",
+ "key": "custom_attribute_keys",
+ "label": "Custom attribute keys",
+ "object_type": "text"
+ }
+ ]
+ },
{
"type": "dict",
"collapsible": true,
diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json
index 27e8957786..4a796f1933 100644
--- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json
+++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json
@@ -389,6 +389,59 @@
"type": "boolean",
"key": "viewer_lut_raw",
"label": "Viewer LUT raw"
+ },
+ {
+ "type": "separator"
+ },
+ {
+ "type": "label",
+ "label": "Fill specific slate node values with templates. Uncheck the checkbox to not change the value.",
+ "word_wrap": true
+ },
+ {
+ "type": "dict",
+ "key": "key_value_mapping",
+ "children": [
+ {
+ "type": "list-strict",
+ "key": "f_submission_note",
+ "label": "Submission Note:",
+ "object_types": [
+ {
+ "type": "boolean"
+ },
+ {
+ "type": "text"
+ }
+ ]
+ },
+ {
+ "type": "list-strict",
+ "key": "f_submitting_for",
+ "label": "Submission For:",
+ "object_types": [
+ {
+ "type": "boolean"
+ },
+ {
+ "type": "text"
+ }
+ ]
+ },
+ {
+ "type": "list-strict",
+ "key": "f_vfx_scope_of_work",
+ "label": "VFX Scope Of Work:",
+ "object_types": [
+ {
+ "type": "boolean"
+ },
+ {
+ "type": "text"
+ }
+ ]
+ }
+ ]
}
]
},
diff --git a/openpype/settings/entities/schemas/system_schema/schema_general.json b/openpype/settings/entities/schemas/system_schema/schema_general.json
index 6306317df8..fcab4cd5d8 100644
--- a/openpype/settings/entities/schemas/system_schema/schema_general.json
+++ b/openpype/settings/entities/schemas/system_schema/schema_general.json
@@ -110,6 +110,17 @@
{
"type": "splitter"
},
+ {
+ "type": "list",
+ "key": "local_env_white_list",
+ "label": "Local overrides of environment variable keys",
+ "tooltip": "Environment variable keys that can be changed per machine using Local settings UI.\nKey changes are applied only on applications and tools environments.",
+ "use_label_wrap": true,
+ "object_type": "text"
+ },
+ {
+ "type": "splitter"
+ },
{
"type": "collapsible-wrap",
"label": "OpenPype deployment control",
diff --git a/openpype/settings/lib.py b/openpype/settings/lib.py
index 54502292dc..937329b417 100644
--- a/openpype/settings/lib.py
+++ b/openpype/settings/lib.py
@@ -1113,6 +1113,14 @@ def get_general_environments():
clear_metadata_from_settings(environments)
+ whitelist_envs = result["general"].get("local_env_white_list")
+ if whitelist_envs:
+ local_settings = get_local_settings()
+ local_envs = local_settings.get("environments") or {}
+ for key, value in local_envs.items():
+ if key in whitelist_envs and key in environments:
+ environments[key] = value
+
return environments
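The hunk above only applies a local override when the key is both whitelisted in system settings and already present in the studio environments. A minimal standalone sketch of that merge rule (hypothetical data and function name; not the OpenPype API):

```python
def apply_local_env_overrides(environments, whitelist_envs, local_envs):
    """Apply local overrides for whitelisted, already-defined keys only."""
    for key, value in local_envs.items():
        # Keys that are not whitelisted, or not defined by the studio,
        # are silently ignored - mirroring the hunk above.
        if key in whitelist_envs and key in environments:
            environments[key] = value
    return environments


studio = {"OCIO": "/studio/config.ocio", "TOOLS": "/studio/tools"}
local = {"OCIO": "/home/me/config.ocio", "TOOLS": "/tmp", "EXTRA": "x"}

# Only "OCIO" is whitelisted: "TOOLS" keeps the studio value,
# "EXTRA" is dropped because the studio never defined it.
result = apply_local_env_overrides(studio, ["OCIO"], local)
```

Note that the merge mutates and returns the same dict, so keys can only be replaced, never added, by local settings.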
diff --git a/openpype/tools/settings/local_settings/constants.py b/openpype/tools/settings/local_settings/constants.py
index 1836c579af..16f87b6f05 100644
--- a/openpype/tools/settings/local_settings/constants.py
+++ b/openpype/tools/settings/local_settings/constants.py
@@ -9,6 +9,7 @@ LABEL_DISCARD_CHANGES = "Discard changes"
# TODO move to settings constants
LOCAL_GENERAL_KEY = "general"
LOCAL_PROJECTS_KEY = "projects"
+LOCAL_ENV_KEY = "environments"
LOCAL_APPS_KEY = "applications"
# Roots key constant
diff --git a/openpype/tools/settings/local_settings/environments_widget.py b/openpype/tools/settings/local_settings/environments_widget.py
new file mode 100644
index 0000000000..14ca517851
--- /dev/null
+++ b/openpype/tools/settings/local_settings/environments_widget.py
@@ -0,0 +1,93 @@
+from Qt import QtWidgets
+
+from openpype.tools.utils import PlaceholderLineEdit
+
+
+class LocalEnvironmentsWidgets(QtWidgets.QWidget):
+ def __init__(self, system_settings_entity, parent):
+ super(LocalEnvironmentsWidgets, self).__init__(parent)
+
+ self._widgets_by_env_key = {}
+ self.system_settings_entity = system_settings_entity
+
+ content_widget = QtWidgets.QWidget(self)
+ content_layout = QtWidgets.QGridLayout(content_widget)
+ content_layout.setContentsMargins(0, 0, 0, 0)
+
+ layout = QtWidgets.QVBoxLayout(self)
+ layout.setContentsMargins(0, 0, 0, 0)
+
+ self._layout = layout
+ self._content_layout = content_layout
+ self._content_widget = content_widget
+
+ def _clear_layout(self, layout):
+ while layout.count() > 0:
+ item = layout.itemAt(0)
+ widget = item.widget()
+ layout.removeItem(item)
+ if widget is not None:
+ widget.setVisible(False)
+ widget.deleteLater()
+
+ def _reset_env_widgets(self):
+ self._clear_layout(self._content_layout)
+ self._clear_layout(self._layout)
+
+ content_widget = QtWidgets.QWidget(self)
+ content_layout = QtWidgets.QGridLayout(content_widget)
+ content_layout.setContentsMargins(0, 0, 0, 0)
+ white_list_entity = (
+ self.system_settings_entity["general"]["local_env_white_list"]
+ )
+ row = -1
+ for row, item in enumerate(white_list_entity):
+ key = item.value
+ label_widget = QtWidgets.QLabel(key, self)
+ input_widget = PlaceholderLineEdit(self)
+ input_widget.setPlaceholderText("< Keep studio value >")
+
+ content_layout.addWidget(label_widget, row, 0)
+ content_layout.addWidget(input_widget, row, 1)
+
+ self._widgets_by_env_key[key] = input_widget
+
+ if row < 0:
+ label_widget = QtWidgets.QLabel(
+ (
+ "Your studio does not allow changing"
+ " environment variables locally."
+ ),
+ self
+ )
+ content_layout.addWidget(label_widget, 0, 0)
+ content_layout.setColumnStretch(0, 1)
+
+ else:
+ content_layout.setColumnStretch(0, 0)
+ content_layout.setColumnStretch(1, 1)
+
+ self._layout.addWidget(content_widget, 1)
+
+ self._content_layout = content_layout
+ self._content_widget = content_widget
+
+ def update_local_settings(self, value):
+ if not value:
+ value = {}
+
+ self._reset_env_widgets()
+
+ for env_key, widget in self._widgets_by_env_key.items():
+ env_value = value.get(env_key) or ""
+ widget.setText(env_value)
+
+ def settings_value(self):
+ output = {}
+ for env_key, widget in self._widgets_by_env_key.items():
+ value = widget.text()
+ if value:
+ output[env_key] = value
+ if not output:
+ return None
+ return output
diff --git a/openpype/tools/settings/local_settings/window.py b/openpype/tools/settings/local_settings/window.py
index fb47e69a17..4db0e01476 100644
--- a/openpype/tools/settings/local_settings/window.py
+++ b/openpype/tools/settings/local_settings/window.py
@@ -25,11 +25,13 @@ from .experimental_widget import (
LOCAL_EXPERIMENTAL_KEY
)
from .apps_widget import LocalApplicationsWidgets
+from .environments_widget import LocalEnvironmentsWidgets
from .projects_widget import ProjectSettingsWidget
from .constants import (
LOCAL_GENERAL_KEY,
LOCAL_PROJECTS_KEY,
+ LOCAL_ENV_KEY,
LOCAL_APPS_KEY
)
@@ -49,18 +51,20 @@ class LocalSettingsWidget(QtWidgets.QWidget):
self.pype_mongo_widget = None
self.general_widget = None
self.experimental_widget = None
+ self.envs_widget = None
self.apps_widget = None
self.projects_widget = None
- self._create_pype_mongo_ui()
+ self._create_mongo_url_ui()
self._create_general_ui()
self._create_experimental_ui()
+ self._create_environments_ui()
self._create_app_ui()
self._create_project_ui()
self.main_layout.addStretch(1)
- def _create_pype_mongo_ui(self):
+ def _create_mongo_url_ui(self):
pype_mongo_expand_widget = ExpandingWidget("OpenPype Mongo URL", self)
pype_mongo_content = QtWidgets.QWidget(self)
pype_mongo_layout = QtWidgets.QVBoxLayout(pype_mongo_content)
@@ -110,6 +114,22 @@ class LocalSettingsWidget(QtWidgets.QWidget):
self.experimental_widget = experimental_widget
+ def _create_environments_ui(self):
+ envs_expand_widget = ExpandingWidget("Environments", self)
+ envs_content = QtWidgets.QWidget(self)
+ envs_layout = QtWidgets.QVBoxLayout(envs_content)
+ envs_layout.setContentsMargins(CHILD_OFFSET, 5, 0, 0)
+ envs_expand_widget.set_content_widget(envs_content)
+
+ envs_widget = LocalEnvironmentsWidgets(
+ self.system_settings, envs_content
+ )
+ envs_layout.addWidget(envs_widget)
+
+ self.main_layout.addWidget(envs_expand_widget)
+
+ self.envs_widget = envs_widget
+
def _create_app_ui(self):
# Applications
app_expand_widget = ExpandingWidget("Applications", self)
@@ -154,6 +174,9 @@ class LocalSettingsWidget(QtWidgets.QWidget):
self.general_widget.update_local_settings(
value.get(LOCAL_GENERAL_KEY)
)
+ self.envs_widget.update_local_settings(
+ value.get(LOCAL_ENV_KEY)
+ )
self.app_widget.update_local_settings(
value.get(LOCAL_APPS_KEY)
)
@@ -170,6 +193,10 @@ class LocalSettingsWidget(QtWidgets.QWidget):
if general_value:
output[LOCAL_GENERAL_KEY] = general_value
+ envs_value = self.envs_widget.settings_value()
+ if envs_value:
+ output[LOCAL_ENV_KEY] = envs_value
+
app_value = self.app_widget.settings_value()
if app_value:
output[LOCAL_APPS_KEY] = app_value
diff --git a/openpype/tools/settings/settings/base.py b/openpype/tools/settings/settings/base.py
index bd48b3a966..44ec09b2ca 100644
--- a/openpype/tools/settings/settings/base.py
+++ b/openpype/tools/settings/settings/base.py
@@ -567,7 +567,9 @@ class GUIWidget(BaseWidget):
def _create_label_ui(self):
label = self.entity["label"]
+ word_wrap = self.entity.schema_data.get("word_wrap", False)
label_widget = QtWidgets.QLabel(label, self)
+ label_widget.setWordWrap(word_wrap)
label_widget.setTextInteractionFlags(QtCore.Qt.TextBrowserInteraction)
label_widget.setObjectName("SettingsLabel")
label_widget.linkActivated.connect(self._on_link_activate)
diff --git a/openpype/tools/utils/lib.py b/openpype/tools/utils/lib.py
index 12dd637e6a..8e2044482a 100644
--- a/openpype/tools/utils/lib.py
+++ b/openpype/tools/utils/lib.py
@@ -410,6 +410,7 @@ class FamilyConfigCache:
project_name = os.environ.get("AVALON_PROJECT")
asset_name = os.environ.get("AVALON_ASSET")
task_name = os.environ.get("AVALON_TASK")
+ host_name = os.environ.get("AVALON_APP")
if not all((project_name, asset_name, task_name)):
return
@@ -423,15 +424,21 @@ class FamilyConfigCache:
["family_filter_profiles"]
)
if profiles:
- asset_doc = self.dbcon.find_one(
+ # Make sure connection is installed
+ # - accessing attribute which does not have auto-install
+ self.dbcon.install()
+ database = getattr(self.dbcon, "database", None)
+ if database is None:
+ database = self.dbcon._database
+ asset_doc = database[project_name].find_one(
{"type": "asset", "name": asset_name},
{"data.tasks": True}
- )
+ ) or {}
tasks_info = asset_doc.get("data", {}).get("tasks") or {}
task_type = tasks_info.get(task_name, {}).get("type")
profiles_filter = {
"task_types": task_type,
- "hosts": os.environ["AVALON_APP"]
+ "hosts": host_name
}
matching_item = filter_profiles(profiles, profiles_filter)
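The database lookup in the hunk above prefers a public `database` attribute and falls back to the private `_database` one for connection objects that do not expose it. The same pattern in isolation (toy classes; names assumed for illustration):

```python
def get_database(dbcon):
    # Prefer the public ``database`` attribute; fall back to the
    # private one for older connection implementations.
    database = getattr(dbcon, "database", None)
    if database is None:
        database = dbcon._database
    return database


class NewConn:
    # Connection type exposing the public attribute.
    database = {"demo": "collection"}


class OldConn:
    # Connection type exposing only the private attribute.
    _database = {"demo": "collection"}
```

Either connection flavor resolves to the same database object, which is what lets the plugin code stay agnostic of the connection implementation.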
diff --git a/openpype/version.py b/openpype/version.py
index 97aa585ca7..08dcbb5aed 100644
--- a/openpype/version.py
+++ b/openpype/version.py
@@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
-__version__ = "3.9.3"
+__version__ = "3.9.4-nightly.1"
diff --git a/pyproject.toml b/pyproject.toml
index 006f6eb4e5..adec7ab158 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "OpenPype"
-version = "3.9.3" # OpenPype
+version = "3.9.4-nightly.1" # OpenPype
description = "Open VFX and Animation pipeline with support."
authors = ["OpenPype Team "]
license = "MIT License"
diff --git a/website/docs/api.md b/website/docs/api.md
deleted file mode 100644
index 7cad92d603..0000000000
--- a/website/docs/api.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-id: api
-title: Pype API
-sidebar_label: API
----
-
-Work in progress
diff --git a/website/docs/artist_hosts.md b/website/docs/artist_hosts.md
deleted file mode 100644
index 609f6d97c8..0000000000
--- a/website/docs/artist_hosts.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-id: artist_hosts
-title: Hosts
-sidebar_label: Hosts
----
-
-## Maya
-
-## Houdini
-
-## Nuke
-
-## Fusion
-
-## Unreal
-
-## System
diff --git a/website/docs/artist_hosts_nuke.md b/website/docs/artist_hosts_nuke.md
deleted file mode 100644
index 1e02599570..0000000000
--- a/website/docs/artist_hosts_nuke.md
+++ /dev/null
@@ -1,145 +0,0 @@
----
-id: artist_hosts_nuke
-title: Nuke
-sidebar_label: Nuke
----
-
-:::important
-After Nuke starts it will automatically **Apply All Settings** for you. If you are sure the settings are wrong, contact your supervisor, who will set them correctly in the project database.
-:::
-
-:::note
-The workflows are identical for both. We support versions **`11.0`** and above.
-:::
-
-## OpenPype global tools
-
-- [Set Context](artist_tools.md#set-context)
-- [Work Files](artist_tools.md#workfiles)
-- [Create](artist_tools.md#creator)
-- [Load](artist_tools.md#loader)
-- [Manage (Inventory)](artist_tools.md#inventory)
-- [Publish](artist_tools.md#publisher)
-- [Library Loader](artist_tools.md#library-loader)
-
-## Nuke specific tools
-
-
-
-
-### Set Frame Ranges
-
-Use this feature in case you are not sure the frame range is correct.
-
-##### Result
-
-- setting Frame Range in script settings
-- setting Frame Range in viewers (timeline)
-
-
-
-
-
-
-
-
-
-
-1. limiting to Frame Range without handles
-2. **Input** handle on start
-3. **Output** handle on end
-
-
-
-
-### Set Resolution
-
-
-
-
-
-This menu item will set the correct resolution format for you, as defined by your production.
-
-##### Result
-
-- creates new item in formats with project name
-- sets the new format as used
-
-
-
-This menu item will set correct Colorspace definitions for you. All has to be configured by your production (Project coordinator).
-
-##### Result
-
-- set Colorspace in your script settings
-- set preview LUT to your viewers
-- set correct colorspace to all discovered Read nodes (following expression set in settings)
-
-
-
-It is usually enough to use this option once in a while, just to make sure the workfile has the correct properties set.
-
-##### Result
-
-- set Frame Ranges
-- set Colorspace
-- set Resolution
-
-
-
-
-
-
-
-
-
-### Build Workfile
-
-
-
-
-This tool will append all available subsets into the actual node graph. It will look into the database and get the last [versions](artist_concepts.md#version) of all available [subsets](artist_concepts.md#subset).
-
-
-##### Result
-
-- adds all last versions of subsets (rendered image sequences) as read nodes
-- adds publishable write node as `renderMain` subset
-
-
-
-
-
-
-
-
\ No newline at end of file
diff --git a/website/docs/artist_hosts_nuke_tut.md b/website/docs/artist_hosts_nuke_tut.md
index 4b0ef7a78a..eefb213dd2 100644
--- a/website/docs/artist_hosts_nuke_tut.md
+++ b/website/docs/artist_hosts_nuke_tut.md
@@ -161,7 +161,7 @@ Nuke OpenPype menu shows the current context
Launching Nuke with context stops your timer, and starts the clock on the shot and task you picked.
-Openpype makes initial setup for your Nuke script. It is the same as running [Apply All Settings](artist_hosts_nuke.md#apply-all-settings) from the OpenPype menu.
+Openpype makes initial setup for your Nuke script. It is the same as running [Apply All Settings](artist_hosts_nuke_tut.md#apply-all-settings) from the OpenPype menu.
- Reads frame range and resolution from Avalon database, sets it in Nuke Project Settings,
Creates Viewer node, sets its range and indicates handles by In and Out points.
diff --git a/website/docs/hosts-maya.md b/website/docs/hosts-maya.md
deleted file mode 100644
index 0ee0c2d86b..0000000000
--- a/website/docs/hosts-maya.md
+++ /dev/null
@@ -1,33 +0,0 @@
-### Tools
-Creator
-Publisher
-Loader
-Scene Inventory
-Look assigner
-Workfiles
-
-### Plugins
-Deadline
-Muster
-Yeti
-Arnold
-Vray
-Redshift
-
-### Families
-Model
-Look
-Rig
-Animation
-Cache
-Camera
-Assembly
-MayaAscii (generic scene)
-Setdress
-RenderSetup
-Review
-arnoldStandin
-vrayProxy
-vrayScene
-yetiCache
-yetiRig
diff --git a/website/docs/manager_naming.md b/website/docs/manager_naming.md
deleted file mode 100644
index bf822fbeb4..0000000000
--- a/website/docs/manager_naming.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-id: manager_naming
-title: Naming Conventions
-sidebar_label: Naming Conventions
----
-
-:::note
-This naming convention holds true for most of our pipeline. Please match it as closely as possible, even for projects and files that might be outside of pipeline scope at this point. Small errors count! The reason for the given formatting is to allow people to understand the file at a glance, and so that a script or a program can easily get meaningful information from your files without errors.
-:::
-
-## General rules
-
-For more detailed rules and different file types, have a look at naming conventions for scenes and assets
-
- Every file starts with file code based on a project it belongs to e.g. "tst_", "drm_"
-- Optional subversion and comment always comes after the major version. v##.subversion_comment.
- File names can only be composed of letters, numbers, underscores `_` and dots "."
- You can use snakeCase or CamelCase if you need more words in a section. thisIsLongerSentenceInComment
-- No spaces in filenames. Ever!
- Frame numbers are always separated by a period "."
-- If you're not sure use this template:
-
-## Work files
-
-**`{code}_{shot}_{task}_v001.ext`**
-
-**`{code}_{asset}_{task}_v001.ext`**
-
-**Examples:**
-
- prj_sh010_enviro_v001.ma
- prj_sh010_animation_v001.ma
- prj_sh010_comp_v001.nk
-
- prj_bob_modelling_v001.ma
- prj_bob_rigging_v001.ma
- prj_bob_lookdev_v001.ma
-
-:::info
-In all of the examples anything enclosed in curly brackets { } is compulsory in the name.
-Anything in square brackets [ ] is optional.
-:::
-
-## Published Assets
-
-**`{code}_{asset}_{family}_{subset}_{version}_[comment].ext`**
-
-**Examples:**
-
- prj_bob_model_main_v01.ma
- prj_bob_model_hires_v01.ma
- prj_bob_model_main_v01_clothes.ma
- prj_bob_model_main_v01_body.ma
- prj_bob_rig_main_v01.ma
- Prj_bob_look_main_v01.ma
- Prj_bob_look_wet_v01.ma