[Automated] Merged develop into main

This commit is contained in:
pypebot 2022-08-24 06:05:33 +02:00 committed by GitHub
commit 28ccf56684
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
55 changed files with 2738 additions and 1771 deletions


@ -40,18 +40,6 @@ def settings(dev):
PypeCommands().launch_settings_gui(dev)
@main.command()
def standalonepublisher():
"""Show Pype Standalone publisher UI."""
PypeCommands().launch_standalone_publisher()
@main.command()
def traypublisher():
"""Show new OpenPype Standalone publisher UI."""
PypeCommands().launch_traypublisher()
@main.command()
def tray():
"""Launch pype tray.

openpype/client/notes.md Normal file

@ -0,0 +1,39 @@
# Client functionality
## Reason
Preparation for the OpenPype v4 server. The goal is to remove direct mongo calls from the code and prepare it for a different source of data, and to start thinking about database calls less as mongo calls and more universally. To do so, a simple wrapper around database calls was implemented so that pymongo-specific code is not used directly.
The current goal is not to create a universal database model that could easily be replaced with any different source of data, but to get as close to that as possible. The current implementation of OpenPype is too tightly coupled to pymongo and its abilities, so we're trying to get closer with long-term changes that can be used even in the current state.
## Queries
Query functions don't use the full potential of mongo queries, such as very specific queries based on subdictionaries or unknown structures. We try to avoid such calls as much as possible because they probably won't be available in the future. If one is really necessary, a new function can be added, but only if it's reasonable for the overall logic. All query functions were moved to `~/client/entities.py`. Each function has arguments for the available filters and can reduce the returned keys for each entity.
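As a minimal sketch of this wrapper pattern (hypothetical names and an in-memory stand-in instead of a mongo collection; the real functions live in `~/client/entities.py`), a query function takes filter arguments and an optional `fields` reduction:

```python
# Stand-in "database" instead of a mongo collection (illustration only).
DATABASE = {
    "demo_project": [
        {"_id": "a1", "type": "asset", "name": "sh010", "data": {"fps": 25}},
        {"_id": "a2", "type": "asset", "name": "sh020", "data": {"fps": 24}},
    ]
}


def get_assets(project_name, asset_ids=None, asset_names=None, fields=None):
    """Yield asset documents matching the filters, optionally reduced
    to the requested keys (mirroring mongo's projection)."""
    for doc in DATABASE.get(project_name, []):
        if asset_ids is not None and doc["_id"] not in asset_ids:
            continue
        if asset_names is not None and doc["name"] not in asset_names:
            continue
        if fields is None:
            yield doc
        else:
            yield {key: doc[key] for key in fields if key in doc}
```

Calling code then stays the same whether the data comes from mongo or, later, from the v4 server.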
## Changes
Changes are a little more complicated. Mongo has many options for how an update can happen, which had to be reduced; it would also be complicated at this stage to validate the values that are created or updated, so there is almost no automation. Changes can be made using the operations available in `~/client/operations.py`. Each operation requires a project name and entity type, and may require operation-specific data.
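The update operation's split between set and unset payloads can be sketched like this (the `REMOVED_VALUE` sentinel marks keys to unset, matching the `UpdateOperation` logic in `~/client/operations.py`):

```python
REMOVED_VALUE = object()  # sentinel: this key should be unset, not set


def split_update_data(update_data):
    """Split an update dictionary into mongo-style $set / $unset parts."""
    set_data = {}
    unset_data = {}
    for key, value in update_data.items():
        if value is REMOVED_VALUE:
            # mongo ignores the value of $unset entries, so None is enough
            unset_data[key] = None
        else:
            set_data[key] = value
    return set_data, unset_data
```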
### Create
Create operations expect already prepared document data. For that, helper functions creating skeletal document structures are available (they do not fill all required data); except for `_id`, all data should be correct. Existence of the entity is not validated, so if the same creation operation is sent n times it will create the entity n times, which can cause issues.
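A hypothetical skeleton factory might look like the following; the name, keys, and id format are illustrative only, not the actual helpers:

```python
import uuid


def new_asset_document(name, parent_id=None, data=None):
    """Return a skeletal asset document; everything except `_id` is
    expected to be finished by the caller before the create operation."""
    return {
        "_id": uuid.uuid4().hex,  # real documents would use a mongo ObjectId
        "type": "asset",
        "name": name,
        "parent": parent_id,
        "data": data or {},
    }
```

Because existence is not validated, the caller must make sure the same skeleton is submitted only once.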
### Update
An update operation requires an entity id and the keys that should be changed; the update dictionary must have the form `{"key": value}`. If a value should be set in a nested dictionary, the key must contain all subkeys joined with a dot `.` (e.g. `{"data": {"fps": 25}}` -> `{"data.fps": 25}`). To simplify building update dictionaries, helper functions were prepared that do this for you. Their names follow the template `prepare_<entity type>_update_data` and they work by comparing the previous document with the new document. If a function for a requested entity type is missing, it's because we didn't need it yet and it requires implementation.
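The dot-joining described above is a plain dictionary flattening; a minimal stand-alone version (not the actual `prepare_<entity type>_update_data` helpers, which additionally compare two documents) could look like:

```python
def to_dot_keys(update_data, _prefix=""):
    """Flatten nested dictionaries into dot-joined keys,
    e.g. {"data": {"fps": 25}} -> {"data.fps": 25}."""
    output = {}
    for key, value in update_data.items():
        full_key = _prefix + key
        if isinstance(value, dict) and value:
            output.update(to_dot_keys(value, full_key + "."))
        else:
            output[full_key] = value
    return output
```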
### Delete
A delete operation needs an entity id. The entity will be deleted from mongo.
## What (probably) won't be replaced
Some parts of the code still use direct mongo calls. In most cases these are very specific calls that are module-specific, or their usage will change completely in the future.
- Mongo calls that are not project specific (outside the `avalon` collection) will be removed or will have to use a different mechanism for storing the data. At this moment this concerns OpenPype settings and logs, ftrack server events, and some other data.
- Sync server queries. They're complex and very specific to the sync server module. Their replacement will require specific calls to the OpenPype server in v4, so abstracting them with the wrapper is irrelevant and would complicate production in v3.
- Project managers (ftrack, kitsu, shotgrid, the embedded Project Manager, etc.). Project managers create, update or remove assets in v3, but in v4 they will create folders with a different structure. Wrapping the creation of assets would not help prepare for v4 because of the new data structures. The same can be said about the editorial Extract Hierarchy Avalon plugin, which creates the project structure.
- Code parts that are marked as deprecated in v3 or will be deprecated in v4.
  - integrate asset legacy publish plugin - already legacy, kept for safety
  - integrate thumbnail - thumbnails will be stored in a different way in v4
  - input links - links will be stored in a different way and will have a different linking mechanism. In v3, links are limited to the same entity type: "asset <-> asset" or "representation <-> representation".
## Known missing replacements
- change subset group in loader tool
- integrate subset group
- query input links in openpype lib
- create project in openpype lib
- save/create workfile doc in openpype lib
- integrate hero version


@ -444,7 +444,7 @@ class UpdateOperation(AbstractOperation):
set_data = {}
for key, value in self._update_data.items():
if value is REMOVED_VALUE:
unset_data[key] = value
unset_data[key] = None
else:
set_data[key] = value


@ -11,6 +11,8 @@ class AEWorkfileCreator(AutoCreator):
identifier = "workfile"
family = "workfile"
default_variant = "Main"
def get_instance_attr_defs(self):
return []
@ -35,7 +37,6 @@ class AEWorkfileCreator(AutoCreator):
existing_instance = instance
break
variant = ''
project_name = legacy_io.Session["AVALON_PROJECT"]
asset_name = legacy_io.Session["AVALON_ASSET"]
task_name = legacy_io.Session["AVALON_TASK"]
@ -44,15 +45,17 @@ class AEWorkfileCreator(AutoCreator):
if existing_instance is None:
asset_doc = get_asset_by_name(project_name, asset_name)
subset_name = self.get_subset_name(
variant, task_name, asset_doc, project_name, host_name
self.default_variant, task_name, asset_doc,
project_name, host_name
)
data = {
"asset": asset_name,
"task": task_name,
"variant": variant
"variant": self.default_variant
}
data.update(self.get_dynamic_data(
variant, task_name, asset_doc, project_name, host_name
self.default_variant, task_name, asset_doc,
project_name, host_name
))
new_instance = CreatedInstance(
@ -69,7 +72,9 @@ class AEWorkfileCreator(AutoCreator):
):
asset_doc = get_asset_by_name(project_name, asset_name)
subset_name = self.get_subset_name(
variant, task_name, asset_doc, project_name, host_name
self.default_variant, task_name, asset_doc,
project_name, host_name
)
existing_instance["asset"] = asset_name
existing_instance["task"] = task_name
existing_instance["subset"] = subset_name


@ -11,6 +11,8 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
label = "Collect After Effects Workfile Instance"
order = pyblish.api.CollectorOrder + 0.1
default_variant = "Main"
def process(self, context):
existing_instance = None
for instance in context:
@ -71,7 +73,7 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
family = "workfile"
subset = get_subset_name_with_asset_doc(
family,
"",
self.default_variant,
context.data["anatomyData"]["task"]["name"],
context.data["assetEntity"],
context.data["anatomyData"]["project"]["name"],


@ -30,7 +30,8 @@ from .lib import (
maintained_temp_file_path,
get_clip_segment,
get_batch_group_from_desktop,
MediaInfoFile
MediaInfoFile,
TimeEffectMetadata
)
from .utils import (
setup,
@ -107,6 +108,7 @@ __all__ = [
"get_clip_segment",
"get_batch_group_from_desktop",
"MediaInfoFile",
"TimeEffectMetadata",
# pipeline
"install",


@ -5,10 +5,11 @@ import json
import pickle
import clique
import tempfile
import traceback
import itertools
import contextlib
import xml.etree.cElementTree as cET
from copy import deepcopy
from copy import deepcopy, copy
from xml.etree import ElementTree as ET
from pprint import pformat
from .constants import (
@ -266,7 +267,7 @@ def get_current_sequence(selection):
def rescan_hooks():
import flame
try:
flame.execute_shortcut('Rescan Python Hooks')
flame.execute_shortcut("Rescan Python Hooks")
except Exception:
pass
@ -1082,21 +1083,21 @@ class MediaInfoFile(object):
xml_data (ET.Element): clip data
"""
try:
for out_track in xml_data.iter('track'):
for out_feed in out_track.iter('feed'):
for out_track in xml_data.iter("track"):
for out_feed in out_track.iter("feed"):
# start frame
out_feed_nb_ticks_obj = out_feed.find(
'startTimecode/nbTicks')
"startTimecode/nbTicks")
self.start_frame = out_feed_nb_ticks_obj.text
# fps
out_feed_fps_obj = out_feed.find(
'startTimecode/rate')
"startTimecode/rate")
self.fps = out_feed_fps_obj.text
# drop frame mode
out_feed_drop_mode_obj = out_feed.find(
'startTimecode/dropMode')
"startTimecode/dropMode")
self.drop_mode = out_feed_drop_mode_obj.text
break
except Exception as msg:
@ -1118,8 +1119,153 @@ class MediaInfoFile(object):
tree = cET.ElementTree(xml_element_data)
tree.write(
fpath, xml_declaration=True,
method='xml', encoding='UTF-8'
method="xml", encoding="UTF-8"
)
except IOError as error:
raise IOError(
"Not able to write data to file: {}".format(error))
class TimeEffectMetadata(object):
log = log
_data = {}
_retime_modes = {
0: "speed",
1: "timewarp",
2: "duration"
}
def __init__(self, segment, logger=None):
if logger:
self.log = logger
self._data = self._get_metadata(segment)
@property
def data(self):
""" Returns timewarp effect data
Returns:
dict: retime data
"""
return self._data
def _get_metadata(self, segment):
effects = segment.effects or []
for effect in effects:
if effect.type == "Timewarp":
with maintained_temp_file_path(".timewarp_node") as tmp_path:
self.log.info("Temp File: {}".format(tmp_path))
effect.save_setup(tmp_path)
return self._get_attributes_from_xml(tmp_path)
return {}
def _get_attributes_from_xml(self, tmp_path):
with open(tmp_path, "r") as tw_setup_file:
tw_setup_string = tw_setup_file.read()
tw_setup_file.close()
tw_setup_xml = ET.fromstring(tw_setup_string)
tw_setup = self._dictify(tw_setup_xml)
# pprint(tw_setup)
try:
tw_setup_state = tw_setup["Setup"]["State"][0]
mode = int(
tw_setup_state["TW_RetimerMode"][0]["_text"]
)
r_data = {
"type": self._retime_modes[mode],
"effectStart": int(
tw_setup["Setup"]["Base"][0]["Range"][0]["Start"]),
"effectEnd": int(
tw_setup["Setup"]["Base"][0]["Range"][0]["End"])
}
if mode == 0: # speed
r_data[self._retime_modes[mode]] = float(
tw_setup_state["TW_Speed"]
[0]["Channel"][0]["Value"][0]["_text"]
) / 100
elif mode == 1: # timewarp
print("timing")
r_data[self._retime_modes[mode]] = self._get_anim_keys(
tw_setup_state["TW_Timing"]
)
elif mode == 2: # duration
r_data[self._retime_modes[mode]] = {
"start": {
"source": int(
tw_setup_state["TW_DurationTiming"][0]["Channel"]
[0]["KFrames"][0]["Key"][0]["Value"][0]["_text"]
),
"timeline": int(
tw_setup_state["TW_DurationTiming"][0]["Channel"]
[0]["KFrames"][0]["Key"][0]["Frame"][0]["_text"]
)
},
"end": {
"source": int(
tw_setup_state["TW_DurationTiming"][0]["Channel"]
[0]["KFrames"][0]["Key"][1]["Value"][0]["_text"]
),
"timeline": int(
tw_setup_state["TW_DurationTiming"][0]["Channel"]
[0]["KFrames"][0]["Key"][1]["Frame"][0]["_text"]
)
}
}
except Exception:
lines = traceback.format_exception(*sys.exc_info())
self.log.error("\n".join(lines))
return
return r_data
def _get_anim_keys(self, setup_cat, index=None):
return_data = {
"extrapolation": (
setup_cat[0]["Channel"][0]["Extrap"][0]["_text"]
),
"animKeys": []
}
for key in setup_cat[0]["Channel"][0]["KFrames"][0]["Key"]:
if index and int(key["Index"]) != index:
continue
key_data = {
"source": float(key["Value"][0]["_text"]),
"timeline": float(key["Frame"][0]["_text"]),
"index": int(key["Index"]),
"curveMode": key["CurveMode"][0]["_text"],
"curveOrder": key["CurveOrder"][0]["_text"]
}
if key.get("TangentMode"):
key_data["tangentMode"] = key["TangentMode"][0]["_text"]
return_data["animKeys"].append(key_data)
return return_data
def _dictify(self, xml_, root=True):
""" Convert xml object to dictionary
Args:
xml_ (xml.etree.ElementTree.Element): xml data
root (bool, optional): is root available. Defaults to True.
Returns:
dict: dictionarized xml
"""
if root:
return {xml_.tag: self._dictify(xml_, False)}
d = copy(xml_.attrib)
if xml_.text:
d["_text"] = xml_.text
for x in xml_.findall("./*"):
if x.tag not in d:
d[x.tag] = []
d[x.tag].append(self._dictify(x, False))
return d


@ -275,7 +275,7 @@ def create_otio_reference(clip_data, fps=None):
def create_otio_clip(clip_data):
from openpype.hosts.flame.api import MediaInfoFile
from openpype.hosts.flame.api import MediaInfoFile, TimeEffectMetadata
segment = clip_data["PySegment"]
@ -284,14 +284,31 @@ def create_otio_clip(clip_data):
media_timecode_start = media_info.start_frame
media_fps = media_info.fps
# Timewarp metadata
tw_data = TimeEffectMetadata(segment, logger=log).data
log.debug("__ tw_data: {}".format(tw_data))
# define first frame
first_frame = media_timecode_start or utils.get_frame_from_filename(
clip_data["fpath"]) or 0
file_first_frame = utils.get_frame_from_filename(
clip_data["fpath"])
if file_first_frame:
file_first_frame = int(file_first_frame)
first_frame = media_timecode_start or file_first_frame or 0
_clip_source_in = int(clip_data["source_in"])
_clip_source_out = int(clip_data["source_out"])
_clip_record_in = clip_data["record_in"]
_clip_record_out = clip_data["record_out"]
_clip_record_duration = int(clip_data["record_duration"])
log.debug("_ file_first_frame: {}".format(file_first_frame))
log.debug("_ first_frame: {}".format(first_frame))
log.debug("_ _clip_source_in: {}".format(_clip_source_in))
log.debug("_ _clip_source_out: {}".format(_clip_source_out))
log.debug("_ _clip_record_in: {}".format(_clip_record_in))
log.debug("_ _clip_record_out: {}".format(_clip_record_out))
# first solve if the reverse timing
speed = 1
if clip_data["source_in"] > clip_data["source_out"]:
@ -302,16 +319,28 @@ def create_otio_clip(clip_data):
source_in = _clip_source_in - int(first_frame)
source_out = _clip_source_out - int(first_frame)
log.debug("_ source_in: {}".format(source_in))
log.debug("_ source_out: {}".format(source_out))
if file_first_frame:
log.debug("_ file_source_in: {}".format(
file_first_frame + source_in))
log.debug("_ file_source_out: {}".format(
file_first_frame + source_out))
source_duration = (source_out - source_in + 1)
# secondly check if any change of speed
if source_duration != _clip_record_duration:
retime_speed = float(source_duration) / float(_clip_record_duration)
log.debug("_ retime_speed: {}".format(retime_speed))
log.debug("_ calculated speed: {}".format(retime_speed))
speed *= retime_speed
log.debug("_ source_in: {}".format(source_in))
log.debug("_ source_out: {}".format(source_out))
# get speed from metadata if available
if tw_data.get("speed"):
speed = tw_data["speed"]
log.debug("_ metadata speed: {}".format(speed))
log.debug("_ speed: {}".format(speed))
log.debug("_ source_duration: {}".format(source_duration))
log.debug("_ _clip_record_duration: {}".format(_clip_record_duration))


@ -8,6 +8,9 @@ import pyblish.api
import openpype.api
from openpype.hosts.flame import api as opfapi
from openpype.hosts.flame.api import MediaInfoFile
from openpype.pipeline.editorial import (
get_media_range_with_retimes
)
import flame
@ -47,7 +50,6 @@ class ExtractSubsetResources(openpype.api.Extractor):
export_presets_mapping = {}
def process(self, instance):
if not self.keep_original_representation:
# remove previous representation if not needed
instance.data["representations"] = []
@ -67,18 +69,60 @@ class ExtractSubsetResources(openpype.api.Extractor):
# get media source first frame
source_first_frame = instance.data["sourceFirstFrame"]
self.log.debug("_ frame_start: {}".format(frame_start))
self.log.debug("_ source_first_frame: {}".format(source_first_frame))
# get timeline in/out of segment
clip_in = instance.data["clipIn"]
clip_out = instance.data["clipOut"]
# get retimed attributes
retimed_data = self._get_retimed_attributes(instance)
# get individual keys
r_handle_start = retimed_data["handle_start"]
r_handle_end = retimed_data["handle_end"]
r_source_dur = retimed_data["source_duration"]
r_speed = retimed_data["speed"]
# get handles value - take only the max from both
handle_start = instance.data["handleStart"]
handle_end = instance.data["handleStart"]
handle_end = instance.data["handleEnd"]
handles = max(handle_start, handle_end)
include_handles = instance.data.get("includeHandles")
# get media source range with handles
source_start_handles = instance.data["sourceStartH"]
source_end_handles = instance.data["sourceEndH"]
# retime if needed
if r_speed != 1.0:
source_start_handles = (
instance.data["sourceStart"] - r_handle_start)
source_end_handles = (
source_start_handles
+ (r_source_dur - 1)
+ r_handle_start
+ r_handle_end
)
# get frame range with handles for representation range
frame_start_handle = frame_start - handle_start
repre_frame_start = frame_start_handle
if include_handles:
if r_speed == 1.0:
frame_start_handle = frame_start
else:
frame_start_handle = (
frame_start - handle_start) + r_handle_start
self.log.debug("_ frame_start_handle: {}".format(
frame_start_handle))
self.log.debug("_ repre_frame_start: {}".format(
repre_frame_start))
# calculate duration with handles
source_duration_handles = (
source_end_handles - source_start_handles) + 1
# create staging dir path
staging_dir = self.staging_dir(instance)
@ -93,6 +137,28 @@ class ExtractSubsetResources(openpype.api.Extractor):
}
export_presets.update(self.export_presets_mapping)
if not instance.data.get("versionData"):
instance.data["versionData"] = {}
# set versiondata if any retime
version_data = retimed_data.get("version_data")
self.log.debug("_ version_data: {}".format(version_data))
if version_data:
instance.data["versionData"].update(version_data)
if r_speed != 1.0:
instance.data["versionData"].update({
"frameStart": frame_start_handle,
"frameEnd": (
(frame_start_handle + source_duration_handles - 1)
- (r_handle_start + r_handle_end)
)
})
self.log.debug("_ i_version_data: {}".format(
instance.data["versionData"]
))
# loop all preset names and
for unique_name, preset_config in export_presets.items():
modify_xml_data = {}
@ -115,20 +181,10 @@ class ExtractSubsetResources(openpype.api.Extractor):
)
)
# get frame range with handles for representation range
frame_start_handle = frame_start - handle_start
# calculate duration with handles
source_duration_handles = (
source_end_handles - source_start_handles)
# define in/out marks
in_mark = (source_start_handles - source_first_frame) + 1
out_mark = in_mark + source_duration_handles
exporting_clip = None
name_patern_xml = "<name>_{}.".format(
unique_name)
if export_type == "Sequence Publish":
# change export clip to sequence
exporting_clip = flame.duplicate(sequence_clip)
@ -142,19 +198,25 @@ class ExtractSubsetResources(openpype.api.Extractor):
"<segment name>_<shot name>_{}.").format(
unique_name)
# change in/out marks to timeline in/out
# only for h264 with baked retime
in_mark = clip_in
out_mark = clip_out
out_mark = clip_out + 1
modify_xml_data.update({
"exportHandles": True,
"nbHandles": handles
})
else:
in_mark = (source_start_handles - source_first_frame) + 1
out_mark = in_mark + source_duration_handles
exporting_clip = self.import_clip(clip_path)
exporting_clip.name.set_value("{}_{}".format(
asset_name, segment_name))
# add xml tags modifications
modify_xml_data.update({
"exportHandles": True,
"nbHandles": handles,
"startFrame": frame_start,
# enum position, lowest starts from 0
"frameIndex": 0,
"startFrame": repre_frame_start,
"namePattern": name_patern_xml
})
@ -162,6 +224,9 @@ class ExtractSubsetResources(openpype.api.Extractor):
# add any xml overrides collected form segment.comment
modify_xml_data.update(instance.data["xml_overrides"])
self.log.debug("_ in_mark: {}".format(in_mark))
self.log.debug("_ out_mark: {}".format(out_mark))
export_kwargs = {}
# validate xml preset file is filled
if preset_file == "":
@ -196,9 +261,8 @@ class ExtractSubsetResources(openpype.api.Extractor):
"namePattern": "__thumbnail"
})
thumb_frame_number = int(in_mark + (
source_duration_handles / 2))
(out_mark - in_mark + 1) / 2))
self.log.debug("__ in_mark: {}".format(in_mark))
self.log.debug("__ thumb_frame_number: {}".format(
thumb_frame_number
))
@ -210,9 +274,6 @@ class ExtractSubsetResources(openpype.api.Extractor):
"out_mark": out_mark
})
self.log.debug("__ modify_xml_data: {}".format(
pformat(modify_xml_data)
))
preset_path = opfapi.modify_preset_file(
preset_orig_xml_path, staging_dir, modify_xml_data)
@ -281,9 +342,9 @@ class ExtractSubsetResources(openpype.api.Extractor):
# add frame range
if preset_config["representation_add_range"]:
representation_data.update({
"frameStart": frame_start_handle,
"frameStart": repre_frame_start,
"frameEnd": (
frame_start_handle + source_duration_handles),
repre_frame_start + source_duration_handles) - 1,
"fps": instance.data["fps"]
})
@ -300,8 +361,32 @@ class ExtractSubsetResources(openpype.api.Extractor):
# at the end remove the duplicated clip
flame.delete(exporting_clip)
self.log.debug("All representations: {}".format(
pformat(instance.data["representations"])))
def _get_retimed_attributes(self, instance):
handle_start = instance.data["handleStart"]
handle_end = instance.data["handleEnd"]
# get basic variables
otio_clip = instance.data["otioClip"]
# get available range trimmed with processed retimes
retimed_attributes = get_media_range_with_retimes(
otio_clip, handle_start, handle_end)
self.log.debug(
">> retimed_attributes: {}".format(retimed_attributes))
r_media_in = int(retimed_attributes["mediaIn"])
r_media_out = int(retimed_attributes["mediaOut"])
version_data = retimed_attributes.get("versionData")
return {
"version_data": version_data,
"handle_start": int(retimed_attributes["handleStart"]),
"handle_end": int(retimed_attributes["handleEnd"]),
"source_duration": (
(r_media_out - r_media_in) + 1
),
"speed": float(retimed_attributes["speed"])
}
def _should_skip(self, preset_config, clip_path, unique_name):
# get activating attributes
@ -313,8 +398,6 @@ class ExtractSubsetResources(openpype.api.Extractor):
unique_name, activated_preset, filter_path_regex
)
)
self.log.debug(
"__ clip_path: `{}`".format(clip_path))
# skip if preset is not activated
if not activated_preset:


@ -11,6 +11,8 @@ class PSWorkfileCreator(AutoCreator):
identifier = "workfile"
family = "workfile"
default_variant = "Main"
def get_instance_attr_defs(self):
return []
@ -35,7 +37,6 @@ class PSWorkfileCreator(AutoCreator):
existing_instance = instance
break
variant = ''
project_name = legacy_io.Session["AVALON_PROJECT"]
asset_name = legacy_io.Session["AVALON_ASSET"]
task_name = legacy_io.Session["AVALON_TASK"]
@ -43,15 +44,17 @@ class PSWorkfileCreator(AutoCreator):
if existing_instance is None:
asset_doc = get_asset_by_name(project_name, asset_name)
subset_name = self.get_subset_name(
variant, task_name, asset_doc, project_name, host_name
self.default_variant, task_name, asset_doc,
project_name, host_name
)
data = {
"asset": asset_name,
"task": task_name,
"variant": variant
"variant": self.default_variant
}
data.update(self.get_dynamic_data(
variant, task_name, asset_doc, project_name, host_name
self.default_variant, task_name, asset_doc,
project_name, host_name
))
new_instance = CreatedInstance(
@ -67,7 +70,9 @@ class PSWorkfileCreator(AutoCreator):
):
asset_doc = get_asset_by_name(project_name, asset_name)
subset_name = self.get_subset_name(
variant, task_name, asset_doc, project_name, host_name
self.default_variant, task_name, asset_doc,
project_name, host_name
)
existing_instance["asset"] = asset_name
existing_instance["task"] = task_name
existing_instance["subset"] = subset_name


@ -11,6 +11,8 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
label = "Collect Workfile"
hosts = ["photoshop"]
default_variant = "Main"
def process(self, context):
existing_instance = None
for instance in context:
@ -20,9 +22,11 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
break
family = "workfile"
# context.data["variant"] might come only from collect_batch_data
variant = context.data.get("variant") or self.default_variant
subset = get_subset_name_with_asset_doc(
family,
"",
variant,
context.data["anatomyData"]["task"]["name"],
context.data["assetEntity"],
context.data["anatomyData"]["project"]["name"],


@ -1,5 +1,6 @@
import os
import shutil
from PIL import Image
import openpype.api
import openpype.lib
@ -8,10 +9,17 @@ from openpype.hosts.photoshop import api as photoshop
class ExtractReview(openpype.api.Extractor):
"""
Produce a flattened or sequence image file from all 'image' instances.
Produce flattened or sequence image files from all 'image' instances.
If no 'image' instance is created, it produces flattened image from
all visible layers.
It creates review, thumbnail and mov representations.
The 'review' family could be used in other steps as a reference, as it
contains a flattened image by default. (E.g. an artist could load this
review as a single item and see the full image. In most cases the 'image'
family is separated by layers for better usage in animation or comp.)
"""
label = "Extract Review"
@ -22,6 +30,7 @@ class ExtractReview(openpype.api.Extractor):
jpg_options = None
mov_options = None
make_image_sequence = None
max_downscale_size = 8192
def process(self, instance):
staging_dir = self.staging_dir(instance)
@ -49,7 +58,7 @@ class ExtractReview(openpype.api.Extractor):
"stagingDir": staging_dir,
"tags": self.jpg_options['tags'],
})
processed_img_names = img_list
else:
self.log.info("Extract layers to flatten image.")
img_list = self._saves_flattened_layers(staging_dir, layers)
@ -57,26 +66,33 @@ class ExtractReview(openpype.api.Extractor):
instance.data["representations"].append({
"name": "jpg",
"ext": "jpg",
"files": img_list,
"files": img_list, # cannot be [] for single frame
"stagingDir": staging_dir,
"tags": self.jpg_options['tags']
})
processed_img_names = [img_list]
ffmpeg_path = openpype.lib.get_ffmpeg_tool_path("ffmpeg")
instance.data["stagingDir"] = staging_dir
# Generate thumbnail.
source_files_pattern = os.path.join(staging_dir,
self.output_seq_filename)
source_files_pattern = self._check_and_resize(processed_img_names,
source_files_pattern,
staging_dir)
# Generate thumbnail
thumbnail_path = os.path.join(staging_dir, "thumbnail.jpg")
self.log.info(f"Generate thumbnail {thumbnail_path}")
args = [
ffmpeg_path,
"-y",
"-i", os.path.join(staging_dir, self.output_seq_filename),
"-i", source_files_pattern,
"-vf", "scale=300:-1",
"-vframes", "1",
thumbnail_path
]
self.log.debug("thumbnail args:: {}".format(args))
output = openpype.lib.run_subprocess(args)
instance.data["representations"].append({
@ -94,11 +110,12 @@ class ExtractReview(openpype.api.Extractor):
args = [
ffmpeg_path,
"-y",
"-i", os.path.join(staging_dir, self.output_seq_filename),
"-i", source_files_pattern,
"-vf", "pad=ceil(iw/2)*2:ceil(ih/2)*2",
"-vframes", str(img_number),
mov_path
]
self.log.debug("mov args:: {}".format(args))
output = openpype.lib.run_subprocess(args)
self.log.debug(output)
instance.data["representations"].append({
@ -120,6 +137,34 @@ class ExtractReview(openpype.api.Extractor):
self.log.info(f"Extracted {instance} to {staging_dir}")
def _check_and_resize(self, processed_img_names, source_files_pattern,
staging_dir):
"""Check if the saved image can be used in ffmpeg.
Ffmpeg has a max size of 16384x16384. Larger saved image(s) must be
resized to be used as a source for the thumbnail or review mov.
"""
Image.MAX_IMAGE_PIXELS = None
first_url = os.path.join(staging_dir, processed_img_names[0])
with Image.open(first_url) as im:
width, height = im.size
if width > self.max_downscale_size or height > self.max_downscale_size:
resized_dir = os.path.join(staging_dir, "resized")
os.mkdir(resized_dir)
source_files_pattern = os.path.join(resized_dir,
self.output_seq_filename)
for file_name in processed_img_names:
source_url = os.path.join(staging_dir, file_name)
with Image.open(source_url) as res_img:
# 'thumbnail' automatically keeps aspect ratio
res_img.thumbnail((self.max_downscale_size,
self.max_downscale_size),
Image.ANTIALIAS)
res_img.save(os.path.join(resized_dir, file_name))
return source_files_pattern
def _get_image_path_from_instances(self, instance):
img_list = []


@ -0,0 +1,6 @@
from .standalonepublish_module import StandAlonePublishModule
__all__ = (
"StandAlonePublishModule",
)


@ -0,0 +1,57 @@
import os
import click
from openpype.lib import get_openpype_execute_args
from openpype.lib.execute import run_detached_process
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import ITrayAction, IHostModule
STANDALONEPUBLISH_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
class StandAlonePublishModule(OpenPypeModule, ITrayAction, IHostModule):
label = "Publish"
name = "standalonepublish_tool"
host_name = "standalonepublisher"
def initialize(self, modules_settings):
self.enabled = modules_settings[self.name]["enabled"]
self.publish_paths = [
os.path.join(STANDALONEPUBLISH_ROOT_DIR, "plugins", "publish")
]
def tray_init(self):
return
def on_action_trigger(self):
self.run_standalone_publisher()
def connect_with_modules(self, enabled_modules):
"""Collect publish paths from other modules."""
publish_paths = self.manager.collect_plugin_paths()["publish"]
self.publish_paths.extend(publish_paths)
def run_standalone_publisher(self):
args = get_openpype_execute_args("module", self.name, "launch")
run_detached_process(args)
def cli(self, click_group):
click_group.add_command(cli_main)
@click.group(
StandAlonePublishModule.name,
help="StandalonePublisher related commands.")
def cli_main():
pass
@cli_main.command()
def launch():
"""Launch StandalonePublisher tool UI."""
from openpype.tools import standalonepublish
standalonepublish.main()


@ -0,0 +1,6 @@
from .module import TrayPublishModule
__all__ = (
"TrayPublishModule",
)


@ -1,25 +1,24 @@
import os
import click
from openpype.lib import get_openpype_execute_args
from openpype.lib.execute import run_detached_process
from openpype.modules import OpenPypeModule
from openpype_interfaces import ITrayAction
from openpype.modules.interfaces import ITrayAction, IHostModule
TRAYPUBLISH_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
class TrayPublishAction(OpenPypeModule, ITrayAction):
class TrayPublishModule(OpenPypeModule, IHostModule, ITrayAction):
label = "New Publish (beta)"
name = "traypublish_tool"
host_name = "traypublish"
def initialize(self, modules_settings):
import openpype
self.enabled = True
self.publish_paths = [
os.path.join(
openpype.PACKAGE_DIR,
"hosts",
"traypublisher",
"plugins",
"publish"
)
os.path.join(TRAYPUBLISH_ROOT_DIR, "plugins", "publish")
]
self._experimental_tools = None
@ -29,7 +28,7 @@ class TrayPublishAction(OpenPypeModule, ITrayAction):
self._experimental_tools = ExperimentalTools()
def tray_menu(self, *args, **kwargs):
super(TrayPublishAction, self).tray_menu(*args, **kwargs)
super(TrayPublishModule, self).tray_menu(*args, **kwargs)
traypublisher = self._experimental_tools.get("traypublisher")
visible = False
if traypublisher and traypublisher.enabled:
@ -45,5 +44,24 @@ class TrayPublishAction(OpenPypeModule, ITrayAction):
self.publish_paths.extend(publish_paths)
def run_traypublisher(self):
args = get_openpype_execute_args("traypublisher")
args = get_openpype_execute_args(
"module", self.name, "launch"
)
run_detached_process(args)
def cli(self, click_group):
click_group.add_command(cli_main)
@click.group(TrayPublishModule.name, help="TrayPublisher related commands.")
def cli_main():
pass
@cli_main.command()
def launch():
"""Launch TrayPublish tool UI."""
from openpype.tools import traypublisher
traypublisher.main()


@ -1,20 +1,12 @@
import os
from .tvpaint_module import (
get_launch_script_path,
TVPaintModule,
TVPAINT_ROOT_DIR,
)
def add_implementation_envs(env, _app):
"""Modify environments to contain all required for implementation."""
defaults = {
"OPENPYPE_LOG_NO_COLORS": "True"
}
for key, value in defaults.items():
if not env.get(key):
env[key] = value
def get_launch_script_path():
current_dir = os.path.dirname(os.path.abspath(__file__))
return os.path.join(
current_dir,
"api",
"launch_script.py"
)
__all__ = (
"get_launch_script_path",
"TVPaintModule",
"TVPAINT_ROOT_DIR",
)

View file

@ -6,7 +6,6 @@ from . import pipeline
from . import plugin
from .pipeline import (
install,
uninstall,
maintained_selection,
remove_instance,
list_instances,
@ -33,7 +32,6 @@ __all__ = (
"plugin",
"install",
"uninstall",
"maintained_selection",
"remove_instance",
"list_instances",

View file

@ -16,8 +16,6 @@ from openpype.pipeline import (
legacy_io,
register_loader_plugin_path,
register_creator_plugin_path,
deregister_loader_plugin_path,
deregister_creator_plugin_path,
AVALON_CONTAINER_ID,
)
@ -91,19 +89,6 @@ def install():
register_event_callback("application.exit", application_exit)
def uninstall():
"""Uninstall TVPaint-specific functionality.
This function is called automatically on calling `uninstall_host()`.
"""
log.info("OpenPype - Uninstalling TVPaint integration")
pyblish.api.deregister_host("tvpaint")
pyblish.api.deregister_plugin_path(PUBLISH_PATH)
deregister_loader_plugin_path(LOAD_PATH)
deregister_creator_plugin_path(CREATE_PATH)
def containerise(
name, namespace, members, context, loader, current_containers=None
):

View file

@ -0,0 +1,41 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostModule
TVPAINT_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
def get_launch_script_path():
return os.path.join(
TVPAINT_ROOT_DIR,
"api",
"launch_script.py"
)
class TVPaintModule(OpenPypeModule, IHostModule):
name = "tvpaint"
host_name = "tvpaint"
def initialize(self, module_settings):
self.enabled = True
def add_implementation_envs(self, env, _app):
"""Modify environments to contain all required for implementation."""
defaults = {
"OPENPYPE_LOG_NO_COLORS": "True"
}
for key, value in defaults.items():
if not env.get(key):
env[key] = value
def get_launch_hook_paths(self, app):
if app.host_name != self.host_name:
return []
return [
os.path.join(TVPAINT_ROOT_DIR, "hooks")
]
def get_workfile_extensions(self):
return [".tvpp"]
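The defaults loop in `add_implementation_envs` above is a small but deliberate pattern: unlike `dict.setdefault`, it also replaces keys that exist with an empty value. A minimal sketch (the `apply_defaults` name is hypothetical, not part of the module):

```python
def apply_defaults(env, defaults):
    # Mirrors add_implementation_envs: fill a key only when it is missing
    # or falsy (plain setdefault would keep an existing empty string).
    for key, value in defaults.items():
        if not env.get(key):
            env[key] = value
    return env


env = {"OPENPYPE_LOG_NO_COLORS": ""}
apply_defaults(env, {"OPENPYPE_LOG_NO_COLORS": "True"})
```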

View file

@ -1,24 +1,6 @@
import os
import openpype.hosts
from openpype.lib.applications import Application
from .module import UnrealModule
def add_implementation_envs(env: dict, _app: Application) -> None:
"""Modify environments to contain all required for implementation."""
# Set OPENPYPE_UNREAL_PLUGIN required for Unreal implementation
ue_plugin = "UE_5.0" if _app.name[:1] == "5" else "UE_4.7"
unreal_plugin_path = os.path.join(
os.path.dirname(os.path.abspath(openpype.hosts.__file__)),
"unreal", "integration", ue_plugin
)
if not env.get("OPENPYPE_UNREAL_PLUGIN"):
env["OPENPYPE_UNREAL_PLUGIN"] = unreal_plugin_path
# Set default environments if are not set via settings
defaults = {
"OPENPYPE_LOG_NO_COLORS": "True"
}
for key, value in defaults.items():
if not env.get(key):
env[key] = value
__all__ = (
"UnrealModule",
)

View file

@ -19,6 +19,7 @@ from .pipeline import (
show_tools_dialog,
show_tools_popup,
instantiate,
UnrealHost,
)
__all__ = [
@ -36,5 +37,6 @@ __all__ = [
"show_experimental_tools",
"show_tools_dialog",
"show_tools_popup",
"instantiate"
"instantiate",
"UnrealHost",
]

View file

@ -14,6 +14,7 @@ from openpype.pipeline import (
)
from openpype.tools.utils import host_tools
import openpype.hosts.unreal
from openpype.host import HostBase, ILoadHost
import unreal # noqa
@ -29,6 +30,32 @@ CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
class UnrealHost(HostBase, ILoadHost):
"""Unreal host implementation.
For some time this class will re-use functions from the module-based
implementation for backwards compatibility with older Unreal projects.
"""
name = "unreal"
def install(self):
install()
def get_containers(self):
return ls()
def show_tools_popup(self):
"""Show tools popup with actions leading to show other tools."""
show_tools_popup()
def show_tools_dialog(self):
"""Show tools dialog with actions leading to show other tools."""
show_tools_dialog()
def install():
"""Install Unreal configuration for OpenPype."""
print("-=" * 40)

View file

@ -3,7 +3,9 @@ import unreal
openpype_detected = True
try:
from openpype.pipeline import install_host
from openpype.hosts.unreal import api as openpype_host
from openpype.hosts.unreal.api import UnrealHost
openpype_host = UnrealHost()
except ImportError as exc:
openpype_host = None
openpype_detected = False

View file

@ -3,7 +3,9 @@ import unreal
openpype_detected = True
try:
from openpype.pipeline import install_host
from openpype.hosts.unreal import api as openpype_host
from openpype.hosts.unreal.api import UnrealHost
openpype_host = UnrealHost()
except ImportError as exc:
openpype_host = None
openpype_detected = False

View file

@ -0,0 +1,42 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostModule
UNREAL_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
class UnrealModule(OpenPypeModule, IHostModule):
name = "unreal"
host_name = "unreal"
def initialize(self, module_settings):
self.enabled = True
def add_implementation_envs(self, env, app) -> None:
"""Modify environments to contain all required for implementation."""
# Set OPENPYPE_UNREAL_PLUGIN required for Unreal implementation
ue_plugin = "UE_5.0" if app.name[:1] == "5" else "UE_4.7"
unreal_plugin_path = os.path.join(
UNREAL_ROOT_DIR, "integration", ue_plugin
)
if not env.get("OPENPYPE_UNREAL_PLUGIN"):
env["OPENPYPE_UNREAL_PLUGIN"] = unreal_plugin_path
# Set default environments if are not set via settings
defaults = {
"OPENPYPE_LOG_NO_COLORS": "True"
}
for key, value in defaults.items():
if not env.get(key):
env[key] = value
def get_launch_hook_paths(self, app):
if app.host_name != self.host_name:
return []
return [
os.path.join(UNREAL_ROOT_DIR, "hooks")
]
def get_workfile_extensions(self):
return [".uproject"]

View file

@ -7,6 +7,8 @@ import logging
import functools
import warnings
import six
from openpype.client import (
get_project,
get_assets,
@ -15,7 +17,6 @@ from openpype.client import (
get_workfile_info,
)
from .profiles_filtering import filter_profiles
from .events import emit_event
from .path_templates import StringTemplate
legacy_io = None
@ -178,7 +179,7 @@ def is_latest(representation):
bool: Whether the representation is of latest version.
Deprecated:
Function will be removed after release version 3.14.*
Function will be removed after release version 3.15.*
"""
from openpype.pipeline.context_tools import is_representation_from_latest
@ -191,7 +192,7 @@ def any_outdated():
"""Return whether the current scene has any outdated content.
Deprecated:
Function will be removed after release version 3.14.*
Function will be removed after release version 3.15.*
"""
from openpype.pipeline.load import any_outdated_containers
@ -212,7 +213,7 @@ def get_asset(asset_name=None):
(MongoDB document)
Deprecated:
Function will be removed after release version 3.14.*
Function will be removed after release version 3.15.*
"""
from openpype.pipeline.context_tools import get_current_project_asset
@ -224,7 +225,7 @@ def get_asset(asset_name=None):
def get_system_general_anatomy_data(system_settings=None):
"""
Deprecated:
Function will be removed after release version 3.14.*
Function will be removed after release version 3.15.*
"""
from openpype.pipeline.template_data import get_general_template_data
@ -296,7 +297,7 @@ def get_latest_version(asset_name, subset_name, dbcon=None, project_name=None):
dict: Last version document for entered.
Deprecated:
Function will be removed after release version 3.14.*
Function will be removed after release version 3.15.*
"""
if not project_name:
@ -344,6 +345,9 @@ def get_workfile_template_key_from_context(
Raises:
ValueError: When both 'dbcon' and 'project_name' were not
passed.
Deprecated:
Function will be removed after release version 3.16.*
"""
from openpype.pipeline.workfile import (
@ -387,6 +391,9 @@ def get_workfile_template_key(
Raises:
ValueError: When both 'project_name' and 'project_settings' were not
passed.
Deprecated:
Function will be removed after release version 3.16.*
"""
from openpype.pipeline.workfile import get_workfile_template_key
@ -411,7 +418,7 @@ def get_workdir_data(project_doc, asset_doc, task_name, host_name):
dict: Data prepared for filling workdir template.
Deprecated:
Function will be removed after release version 3.14.*
Function will be removed after release version 3.15.*
"""
from openpype.pipeline.template_data import get_template_data
@ -447,6 +454,9 @@ def get_workdir_with_workdir_data(
Raises:
ValueError: When both `anatomy` and `project_name` are set to None.
Deprecated:
Function will be removed after release version 3.15.*
"""
if not anatomy and not project_name:
@ -492,6 +502,9 @@ def get_workdir(
Returns:
TemplateResult: Workdir path.
Deprecated:
Function will be removed after release version 3.15.*
"""
from openpype.pipeline.workfile import get_workdir
@ -518,7 +531,7 @@ def template_data_from_session(session=None):
dict: All available data from session.
Deprecated:
Function will be removed after release version 3.14.*
Function will be removed after release version 3.15.*
"""
from openpype.pipeline.context_tools import get_template_data_from_session
@ -526,7 +539,7 @@ def template_data_from_session(session=None):
return get_template_data_from_session(session)
@with_pipeline_io
@deprecated("openpype.pipeline.context_tools.compute_session_changes")
def compute_session_changes(
session, task=None, asset=None, app=None, template_key=None
):
@ -547,64 +560,49 @@ def compute_session_changes(
Returns:
dict: The required changes in the Session dictionary.
Deprecated:
Function will be removed after release version 3.16.*
"""
from openpype.pipeline.context_tools import get_workdir_from_session
from openpype.pipeline import legacy_io
from openpype.pipeline.context_tools import compute_session_changes
changes = dict()
if isinstance(asset, six.string_types):
project_name = legacy_io.active_project()
asset = get_asset_by_name(project_name, asset)
# If no changes, return directly
if not any([task, asset, app]):
return changes
# Get asset document and asset
asset_document = None
asset_tasks = None
if isinstance(asset, dict):
# Assume asset database document
asset_document = asset
asset_tasks = asset_document.get("data", {}).get("tasks")
asset = asset["name"]
if not asset_document or not asset_tasks:
# Assume asset name
project_name = session["AVALON_PROJECT"]
asset_document = get_asset_by_name(
project_name, asset, fields=["data.tasks"]
)
assert asset_document, "Asset must exist"
# Detect any changes compared session
mapping = {
"AVALON_ASSET": asset,
"AVALON_TASK": task,
"AVALON_APP": app,
}
changes = {
key: value
for key, value in mapping.items()
if value and value != session.get(key)
}
if not changes:
return changes
# Compute work directory (with the temporary changed session so far)
_session = session.copy()
_session.update(changes)
changes["AVALON_WORKDIR"] = get_workdir_from_session(_session)
return changes
return compute_session_changes(
session,
asset,
task,
template_key
)
@deprecated("openpype.pipeline.context_tools.get_workdir_from_session")
def get_workdir_from_session(session=None, template_key=None):
"""Calculate workdir path based on session data.
Args:
session (Union[None, Dict[str, str]]): Session to use. If not passed
current context session is used (from legacy_io).
template_key (Union[str, None]): Precalculate template key to define
workfile template name in Anatomy.
Returns:
str: Workdir path.
Deprecated:
Function will be removed after release version 3.16.*
"""
from openpype.pipeline.context_tools import get_workdir_from_session
return get_workdir_from_session(session, template_key)
@with_pipeline_io
@deprecated("openpype.pipeline.context_tools.change_current_context")
def update_current_task(task=None, asset=None, app=None, template_key=None):
"""Update active Session to a new task work area.
@ -617,35 +615,19 @@ def update_current_task(task=None, asset=None, app=None, template_key=None):
Returns:
dict: The changed key, values in the current Session.
Deprecated:
Function will be removed after release version 3.16.*
"""
changes = compute_session_changes(
legacy_io.Session,
task=task,
asset=asset,
app=app,
template_key=template_key
)
from openpype.pipeline import legacy_io
from openpype.pipeline.context_tools import change_current_context
# Update the Session and environments. Pop from environments all keys with
# value set to None.
for key, value in changes.items():
legacy_io.Session[key] = value
if value is None:
os.environ.pop(key, None)
else:
os.environ[key] = value
project_name = legacy_io.active_project()
if isinstance(asset, six.string_types):
asset = get_asset_by_name(project_name, asset)
data = changes.copy()
# Convert env keys to human readable keys
data["project_name"] = legacy_io.Session["AVALON_PROJECT"]
data["asset_name"] = legacy_io.Session["AVALON_ASSET"]
data["task_name"] = legacy_io.Session["AVALON_TASK"]
# Emit session change
emit_event("taskChanged", data)
return changes
return change_current_context(asset, task, template_key)
@deprecated("openpype.client.get_workfile_info")
@ -664,6 +646,9 @@ def get_workfile_doc(asset_id, task_name, filename, dbcon=None):
Returns:
dict: Workfile document or None.
Deprecated:
Function will be removed after release version 3.15.*
"""
# Use legacy_io if dbcon is not entered
@ -774,6 +759,11 @@ def save_workfile_data_to_doc(workfile_doc, data, dbcon=None):
@deprecated("openpype.pipeline.workfile.BuildWorkfile")
def BuildWorkfile():
"""Build workfile class was moved to workfile pipeline.
Deprecated:
Function will be removed after release version 3.16.*
"""
from openpype.pipeline.workfile import BuildWorkfile
return BuildWorkfile()
@ -816,10 +806,7 @@ def change_timer_to_current_context():
Deprecated:
This method is specific for TimersManager module so please use the
functionality from there. Function will be removed after release
version 3.14.*
TODO:
- use TimersManager's static method instead of reimplementing it here
version 3.15.*
"""
from openpype.pipeline import legacy_io
@ -934,6 +921,9 @@ def get_custom_workfile_template_by_context(
Returns:
str: Path to template or None if none of profiles match current
context. (Existence of formatted path is not validated.)
Deprecated:
Function will be removed after release version 3.16.*
"""
if anatomy is None:
@ -992,6 +982,9 @@ def get_custom_workfile_template_by_string_context(
Returns:
str: Path to template or None if none of profiles match current
context. (Existence of formatted path is not validated.)
Deprecated:
Function will be removed after release version 3.16.*
"""
project_name = None
@ -1026,6 +1019,9 @@ def get_custom_workfile_template(template_profiles):
Returns:
str: Path to template or None if none of profiles match current
context. (Existence of formatted path is not validated.)
Deprecated:
Function will be removed after release version 3.16.*
"""
from openpype.pipeline import legacy_io
@ -1054,6 +1050,9 @@ def get_last_workfile_with_version(
Returns:
tuple: Last workfile<str> with version<int> if there is any otherwise
returns (None, None).
Deprecated:
Function will be removed after release version 3.16.*
"""
from openpype.pipeline.workfile import get_last_workfile_with_version
@ -1080,6 +1079,9 @@ def get_last_workfile(
Returns:
str: Last or first workfile as filename of full path to filename.
Deprecated:
Function will be removed after release version 3.16.*
"""
from openpype.pipeline.workfile import get_last_workfile

View file

@ -4,6 +4,7 @@
It provides Deadline JobInfo data class.
"""
import json.decoder
import os
from abc import abstractmethod
import platform
@ -15,7 +16,12 @@ import attr
import requests
import pyblish.api
from openpype.pipeline.publish import AbstractMetaInstancePlugin
from openpype.pipeline.publish import (
AbstractMetaInstancePlugin,
KnownPublishError
)
JSONDecodeError = getattr(json.decoder, "JSONDecodeError", ValueError)
def requests_post(*args, **kwargs):
@ -615,7 +621,7 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
str: resulting Deadline job id.
Throws:
RuntimeError: if submission fails.
KnownPublishError: if submission fails.
"""
url = "{}/api/jobs".format(self._deadline_url)
@ -625,9 +631,16 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
self.log.error(response.status_code)
self.log.error(response.content)
self.log.debug(payload)
raise RuntimeError(response.text)
raise KnownPublishError(response.text)
try:
result = response.json()
except JSONDecodeError:
msg = "Broken response {}. ".format(response)
msg += "Try restarting the Deadline Webservice."
self.log.warning(msg, exc_info=True)
raise KnownPublishError("Broken response from Deadline")
result = response.json()
# for submit publish job
self._instance.data["deadlineSubmissionJob"] = result

View file

@ -1,49 +0,0 @@
import os
import platform
import subprocess
from openpype.lib import get_openpype_execute_args
from openpype.modules import OpenPypeModule
from openpype_interfaces import ITrayAction
class StandAlonePublishAction(OpenPypeModule, ITrayAction):
label = "Publish"
name = "standalonepublish_tool"
def initialize(self, modules_settings):
import openpype
self.enabled = modules_settings[self.name]["enabled"]
self.publish_paths = [
os.path.join(
openpype.PACKAGE_DIR,
"hosts",
"standalonepublisher",
"plugins",
"publish"
)
]
def tray_init(self):
return
def on_action_trigger(self):
self.run_standalone_publisher()
def connect_with_modules(self, enabled_modules):
"""Collect publish paths from other modules."""
publish_paths = self.manager.collect_plugin_paths()["publish"]
self.publish_paths.extend(publish_paths)
def run_standalone_publisher(self):
args = get_openpype_execute_args("standalonepublisher")
kwargs = {}
if platform.system().lower() == "darwin":
new_args = ["open", "-na", args.pop(0), "--args"]
new_args.extend(args)
args = new_args
detached_process = getattr(subprocess, "DETACHED_PROCESS", None)
if detached_process is not None:
kwargs["creationflags"] = detached_process
subprocess.Popen(args, **kwargs)

View file

@ -16,6 +16,7 @@ from openpype.client import (
get_asset_by_name,
version_is_latest,
)
from openpype.lib.events import emit_event
from openpype.modules import load_modules, ModulesManager
from openpype.settings import get_project_settings
@ -445,3 +446,103 @@ def get_custom_workfile_template_from_session(
session["AVALON_APP"],
project_settings=project_settings
)
def compute_session_changes(
session, asset_doc, task_name, template_key=None
):
"""Compute session changes when switching to a task under an asset.
The function does not change the session object; it only returns the changes.
Args:
session (Dict[str, str]): The initial session to compute changes to.
This is required for computing the full Work Directory, as that
also depends on the values that haven't changed.
asset_doc (Dict[str, Any]): Asset document to switch to.
task_name (str): Name of task to switch to.
template_key (Union[str, None]): Prepare workfile template key in
anatomy templates.
Returns:
Dict[str, str]: Changes in the Session dictionary.
"""
changes = {}
# Get asset document and asset
if not asset_doc:
task_name = None
asset_name = None
else:
asset_name = asset_doc["name"]
# Detect any changes compared to the current session
mapping = {
"AVALON_ASSET": asset_name,
"AVALON_TASK": task_name,
}
changes = {
key: value
for key, value in mapping.items()
if value != session.get(key)
}
if not changes:
return changes
# Compute work directory (with the temporarily changed session so far)
changed_session = session.copy()
changed_session.update(changes)
workdir = None
if asset_doc:
workdir = get_workdir_from_session(
changed_session, template_key
)
changes["AVALON_WORKDIR"] = workdir
return changes
def change_current_context(asset_doc, task_name, template_key=None):
"""Update active Session to a new task work area.
This updates the live Session to a different task under asset.
Args:
asset_doc (Dict[str, Any]): The asset document to set.
task_name (str): The task to set under asset.
template_key (Union[str, None]): Prepared template key to be used for
workfile template in Anatomy.
Returns:
Dict[str, str]: The changed key, values in the current Session.
"""
changes = compute_session_changes(
legacy_io.Session,
asset_doc,
task_name,
template_key=template_key
)
# Update the Session and environments. Pop from environments all keys with
# value set to None.
for key, value in changes.items():
legacy_io.Session[key] = value
if value is None:
os.environ.pop(key, None)
else:
os.environ[key] = value
data = changes.copy()
# Convert env keys to human readable keys
data["project_name"] = legacy_io.Session["AVALON_PROJECT"]
data["asset_name"] = legacy_io.Session["AVALON_ASSET"]
data["task_name"] = legacy_io.Session["AVALON_TASK"]
# Emit session change
emit_event("taskChanged", data)
return changes
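Stripped of the workdir lookup, the new `compute_session_changes` boils down to a dictionary diff between the requested context and the current session. A self-contained sketch (`diff_session` is a hypothetical name for illustration):

```python
def diff_session(session, asset_name, task_name):
    """Return only the session keys that would change, mirroring the
    dict-comprehension in compute_session_changes (workdir left out)."""
    mapping = {
        "AVALON_ASSET": asset_name,
        "AVALON_TASK": task_name,
    }
    return {
        key: value
        for key, value in mapping.items()
        if value != session.get(key)
    }


session = {"AVALON_ASSET": "sh010", "AVALON_TASK": "compositing"}
# Only the asset differs, so only AVALON_ASSET appears in the changes
changes = diff_session(session, "sh020", "compositing")
```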

View file

@ -263,16 +263,17 @@ def get_media_range_with_retimes(otio_clip, handle_start, handle_end):
"retime": True,
"speed": time_scalar,
"timewarps": time_warp_nodes,
"handleStart": round(handle_start),
"handleEnd": round(handle_end)
"handleStart": int(round(handle_start)),
"handleEnd": int(round(handle_end))
}
}
returning_dict = {
"mediaIn": media_in_trimmed,
"mediaOut": media_out_trimmed,
"handleStart": round(handle_start),
"handleEnd": round(handle_end)
"handleStart": int(round(handle_start)),
"handleEnd": int(round(handle_end)),
"speed": time_scalar
}
# add version data only if retime
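The switch from `round(...)` to `int(round(...))` is worth noting: on Python 2, `round()` returns a float, so the extra `int()` guarantees integer handle values on both interpreter lines (a plausible motivation; the PR itself does not state it):

```python
handle_start = 7.2
handle_end = 2.0

# int(round(...)) is a no-op on Python 3 but converts 7.0 -> 7 on Python 2,
# keeping handle values integers for downstream frame math.
handles = {
    "handleStart": int(round(handle_start)),
    "handleEnd": int(round(handle_end)),
}
```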

View file

@ -121,10 +121,8 @@ class CollectOtioSubsetResources(pyblish.api.InstancePlugin):
otio.schema.ImageSequenceReference
):
is_sequence = True
else:
# for OpenTimelineIO 0.12 and older
if metadata.get("padding"):
is_sequence = True
elif metadata.get("padding"):
is_sequence = True
self.log.info(
"frame_start-frame_end: {}-{}".format(frame_start, frame_end))

View file

@ -76,11 +76,6 @@ class PypeCommands:
import (run_webserver)
return run_webserver(*args, **kwargs)
@staticmethod
def launch_standalone_publisher():
from openpype.tools import standalonepublish
standalonepublish.main()
@staticmethod
def launch_traypublisher():
from openpype.tools import traypublisher

View file

@ -32,6 +32,7 @@
},
"ExtractReview": {
"make_image_sequence": false,
"max_downscale_size": 8192,
"jpg_options": {
"tags": []
},

View file

@ -186,6 +186,15 @@
"key": "make_image_sequence",
"label": "Makes an image sequence instead of a flattened image"
},
{
"type": "number",
"key": "max_downscale_size",
"label": "Maximum size of sources for review",
"tooltip": "FFmpeg can only handle limited resolutions when creating reviews and thumbnails",
"minimum": 300,
"maximum": 16384,
"decimal": 0
},
{
"type": "dict",
"collapsible": false,

View file

@ -22,6 +22,164 @@ from .constants import (
)
class SettingsStateInfo:
"""Helper information about a settings state.
Used to hold information about the last save and the last opened UI. Keeps
track of when that happened, on which machine, under which user and with
which OpenPype version.
To create information for the current machine and time use the 'create_new'
method.
"""
timestamp_format = "%Y-%m-%d %H:%M:%S.%f"
def __init__(
self,
openpype_version,
settings_type,
project_name,
timestamp,
hostname,
hostip,
username,
system_name,
local_id
):
self.openpype_version = openpype_version
self.settings_type = settings_type
self.project_name = project_name
timestamp_obj = None
if timestamp:
timestamp_obj = datetime.datetime.strptime(
timestamp, self.timestamp_format
)
self.timestamp = timestamp
self.timestamp_obj = timestamp_obj
self.hostname = hostname
self.hostip = hostip
self.username = username
self.system_name = system_name
self.local_id = local_id
def copy(self):
return self.from_data(self.to_data())
@classmethod
def create_new(
cls, openpype_version, settings_type=None, project_name=None
):
"""Create information about this machine for current time."""
from openpype.lib.pype_info import get_workstation_info
now = datetime.datetime.now()
workstation_info = get_workstation_info()
return cls(
openpype_version,
settings_type,
project_name,
now.strftime(cls.timestamp_format),
workstation_info["hostname"],
workstation_info["hostip"],
workstation_info["username"],
workstation_info["system_name"],
workstation_info["local_id"]
)
@classmethod
def from_data(cls, data):
"""Create object from data."""
return cls(
data["openpype_version"],
data["settings_type"],
data["project_name"],
data["timestamp"],
data["hostname"],
data["hostip"],
data["username"],
data["system_name"],
data["local_id"]
)
def to_data(self):
data = self.to_document_data()
data.update({
"openpype_version": self.openpype_version,
"settings_type": self.settings_type,
"project_name": self.project_name
})
return data
@classmethod
def create_new_empty(cls, openpype_version, settings_type=None):
return cls(
openpype_version,
settings_type,
None,
None,
None,
None,
None,
None,
None
)
@classmethod
def from_document(cls, openpype_version, settings_type, document):
document = document or {}
project_name = document.get("project_name")
last_saved_info = document.get("last_saved_info")
if last_saved_info:
copy_last_saved_info = copy.deepcopy(last_saved_info)
copy_last_saved_info.update({
"openpype_version": openpype_version,
"settings_type": settings_type,
"project_name": project_name,
})
return cls.from_data(copy_last_saved_info)
return cls(
openpype_version,
settings_type,
project_name,
None,
None,
None,
None,
None,
None
)
def to_document_data(self):
return {
"timestamp": self.timestamp,
"hostname": self.hostname,
"hostip": self.hostip,
"username": self.username,
"system_name": self.system_name,
"local_id": self.local_id,
}
def __eq__(self, other):
if not isinstance(other, SettingsStateInfo):
return False
if other.timestamp_obj != self.timestamp_obj:
return False
return (
self.openpype_version == other.openpype_version
and self.hostname == other.hostname
and self.hostip == other.hostip
and self.username == other.username
and self.system_name == other.system_name
and self.local_id == other.local_id
)
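`SettingsStateInfo.__eq__` compares the parsed `timestamp_obj` values, which relies on `timestamp_format` round-tripping losslessly through `strftime`/`strptime` (the `%f` microseconds field makes this exact). A minimal sketch of that round trip:

```python
import datetime

TIMESTAMP_FORMAT = "%Y-%m-%d %H:%M:%S.%f"  # same format as SettingsStateInfo

now = datetime.datetime.now()
# 'create_new' stores the formatted string...
stamp = now.strftime(TIMESTAMP_FORMAT)
# ...and '__init__' parses it back into 'timestamp_obj' for '__eq__'
parsed = datetime.datetime.strptime(stamp, TIMESTAMP_FORMAT)
```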
@six.add_metaclass(ABCMeta)
class SettingsHandler:
@abstractmethod
@ -226,7 +384,7 @@ class SettingsHandler:
"""OpenPype versions that have any studio project anatomy overrides.
Returns:
list<str>: OpenPype versions strings.
List[str]: OpenPype versions strings.
"""
pass
@ -237,7 +395,7 @@ class SettingsHandler:
"""OpenPype versions that have any studio project settings overrides.
Returns:
list<str>: OpenPype versions strings.
List[str]: OpenPype versions strings.
"""
pass
@ -251,8 +409,87 @@ class SettingsHandler:
project_name(str): Name of project.
Returns:
list<str>: OpenPype versions strings.
List[str]: OpenPype versions strings.
"""
pass
@abstractmethod
def get_system_last_saved_info(self):
"""State of last system settings overrides at the moment when called.
This method must provide the most recent data, so cached data must not
be used.
Returns:
SettingsStateInfo: Information about system settings overrides.
"""
pass
@abstractmethod
def get_project_last_saved_info(self, project_name):
"""State of last project settings overrides at the moment when called.
This method must provide the most recent data, so cached data must not
be used.
Args:
project_name (Union[None, str]): Project name for which state
should be returned.
Returns:
SettingsStateInfo: Information about project settings overrides.
"""
pass
# UI related calls
@abstractmethod
def get_last_opened_info(self):
"""Get information about last opened UI.
Last opened UI is empty if no one has the UI opened at the moment when
called.
Returns:
Union[None, SettingsStateInfo]: Information about machine who had
opened Settings UI.
"""
pass
@abstractmethod
def opened_settings_ui(self):
"""Callback called when settings UI is opened.
Information about this machine must be available when
'get_last_opened_info' is called from anywhere until
'closed_settings_ui' is called.
Returns:
SettingsStateInfo: Object representing information about this
machine. Must be passed to 'closed_settings_ui' when finished.
"""
pass
@abstractmethod
def closed_settings_ui(self, info_obj):
"""Callback called when settings UI is closed.
From the moment this method is called the information about this
machine is removed and is no longer available when
'get_last_opened_info' is called.
Callback should validate that this machine is still stored as the opened
UI before changing any value.
Args:
info_obj (SettingsStateInfo): Object created when
'opened_settings_ui' was called.
"""
pass
@ -285,19 +522,22 @@ class CacheValues:
self.data = None
self.creation_time = None
self.version = None
self.last_saved_info = None
def data_copy(self):
if not self.data:
return {}
return copy.deepcopy(self.data)
def update_data(self, data, version=None):
def update_data(self, data, version):
self.data = data
self.creation_time = datetime.datetime.now()
if version is not None:
self.version = version
self.version = version
def update_from_document(self, document, version=None):
def update_last_saved_info(self, last_saved_info):
self.last_saved_info = last_saved_info
def update_from_document(self, document, version):
data = {}
if document:
if "data" in document:
@ -306,9 +546,9 @@ class CacheValues:
value = document["value"]
if value:
data = json.loads(value)
self.data = data
if version is not None:
self.version = version
self.version = version
def to_json_string(self):
return json.dumps(self.data or {})
@ -320,6 +560,9 @@ class CacheValues:
delta = (datetime.datetime.now() - self.creation_time).seconds
return delta > self.cache_lifetime
def set_outdated(self):
self.creation_time = None
class MongoSettingsHandler(SettingsHandler):
"""Settings handler that use mongo for storing and loading of settings."""
@ -509,6 +752,14 @@ class MongoSettingsHandler(SettingsHandler):
# Update cache
self.system_settings_cache.update_data(data, self._current_version)
last_saved_info = SettingsStateInfo.create_new(
self._current_version,
SYSTEM_SETTINGS_KEY
)
self.system_settings_cache.update_last_saved_info(
last_saved_info
)
# Get copy of just updated cache
system_settings_data = self.system_settings_cache.data_copy()
@ -517,20 +768,29 @@ class MongoSettingsHandler(SettingsHandler):
system_settings_data
)
# Store system settings
self.collection.replace_one(
system_settings_doc = self.collection.find_one(
{
"type": self._system_settings_key,
"version": self._current_version
},
{
"type": self._system_settings_key,
"data": system_settings_data,
"version": self._current_version
},
upsert=True
{"_id": True}
)
# Store system settings
new_system_settings_doc = {
"type": self._system_settings_key,
"version": self._current_version,
"data": system_settings_data,
"last_saved_info": last_saved_info.to_document_data()
}
if not system_settings_doc:
self.collection.insert_one(new_system_settings_doc)
else:
self.collection.update_one(
{"_id": system_settings_doc["_id"]},
{"$set": new_system_settings_doc}
)
# Store global settings
self.collection.replace_one(
{
@ -562,6 +822,14 @@ class MongoSettingsHandler(SettingsHandler):
data_cache = self.project_settings_cache[project_name]
data_cache.update_data(overrides, self._current_version)
last_saved_info = SettingsStateInfo.create_new(
self._current_version,
PROJECT_SETTINGS_KEY,
project_name
)
data_cache.update_last_saved_info(last_saved_info)
self._save_project_data(
project_name, self._project_settings_key, data_cache
)
@ -665,26 +933,34 @@ class MongoSettingsHandler(SettingsHandler):
def _save_project_data(self, project_name, doc_type, data_cache):
is_default = bool(project_name is None)
replace_filter = {
query_filter = {
"type": doc_type,
"is_default": is_default,
"version": self._current_version
}
replace_data = {
last_saved_info = data_cache.last_saved_info
new_project_settings_doc = {
"type": doc_type,
"data": data_cache.data,
"is_default": is_default,
"version": self._current_version
"version": self._current_version,
"last_saved_info": last_saved_info.to_data()
}
if not is_default:
replace_filter["project_name"] = project_name
replace_data["project_name"] = project_name
query_filter["project_name"] = project_name
new_project_settings_doc["project_name"] = project_name
self.collection.replace_one(
replace_filter,
replace_data,
upsert=True
project_settings_doc = self.collection.find_one(
query_filter,
{"_id": True}
)
if project_settings_doc:
self.collection.update_one(
{"_id": project_settings_doc["_id"]},
{"$set": new_project_settings_doc}
)
else:
self.collection.insert_one(new_project_settings_doc)
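The hunk above replaces a single `replace_one(..., upsert=True)` call with an explicit find-then-update-or-insert sequence, so an existing document only gets the listed keys `$set` on it instead of being swapped out wholesale. A standalone sketch of that pattern, using a tiny in-memory stand-in for a pymongo collection (the `FakeCollection` class and `save_doc` helper are illustrative, not OpenPype code):

```python
class FakeCollection:
    """Tiny in-memory stand-in for a pymongo collection (illustrative only)."""
    def __init__(self):
        self._docs = {}
        self._next_id = 1

    def find_one(self, query, projection=None):
        # Projection is ignored in this sketch; the full document is returned.
        for doc in self._docs.values():
            if all(doc.get(key) == value for key, value in query.items()):
                return doc
        return None

    def insert_one(self, doc):
        doc["_id"] = self._next_id
        self._docs[self._next_id] = doc
        self._next_id += 1

    def update_one(self, query, update):
        doc = self.find_one(query)
        if doc is not None:
            doc.update(update["$set"])


def save_doc(collection, query_filter, new_doc):
    """Update the matching document if it exists, otherwise insert it."""
    existing = collection.find_one(query_filter, {"_id": True})
    if existing is None:
        collection.insert_one(new_doc)
    else:
        collection.update_one({"_id": existing["_id"]}, {"$set": new_doc})


collection = FakeCollection()
query = {"type": "project_settings", "is_default": True}
save_doc(collection, query, dict(query, data={"a": 1}))
save_doc(collection, query, dict(query, data={"a": 2}))  # updates in place
```

Because the second save goes through `update_one`, the stored `_id` (and any keys not present in the new document) survive the save.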
def _get_versions_order_doc(self, projection=None):
# TODO cache
@ -1011,19 +1287,11 @@ class MongoSettingsHandler(SettingsHandler):
globals_document = self.collection.find_one({
"type": GLOBAL_SETTINGS_KEY
})
document = (
self._get_studio_system_settings_overrides_for_version()
document, version = self._get_system_settings_overrides_doc()
last_saved_info = SettingsStateInfo.from_document(
version, SYSTEM_SETTINGS_KEY, document
)
if document is None:
document = self._find_closest_system_settings()
version = None
if document:
if document["type"] == self._system_settings_key:
version = document["version"]
else:
version = LEGACY_SETTINGS_VERSION
merged_document = self._apply_global_settings(
document, globals_document
)
@ -1031,6 +1299,9 @@ class MongoSettingsHandler(SettingsHandler):
self.system_settings_cache.update_from_document(
merged_document, version
)
self.system_settings_cache.update_last_saved_info(
last_saved_info
)
cache = self.system_settings_cache
data = cache.data_copy()
@ -1038,24 +1309,43 @@ class MongoSettingsHandler(SettingsHandler):
return data, cache.version
return data
def _get_system_settings_overrides_doc(self):
document = (
self._get_studio_system_settings_overrides_for_version()
)
if document is None:
document = self._find_closest_system_settings()
version = None
if document:
if document["type"] == self._system_settings_key:
version = document["version"]
else:
version = LEGACY_SETTINGS_VERSION
return document, version
def get_system_last_saved_info(self):
# Make sure settings are recached
self.system_settings_cache.set_outdated()
self.get_studio_system_settings_overrides(False)
return self.system_settings_cache.last_saved_info.copy()
def _get_project_settings_overrides(self, project_name, return_version):
if self.project_settings_cache[project_name].is_outdated:
document = self._get_project_settings_overrides_for_version(
document, version = self._get_project_settings_overrides_doc(
project_name
)
if document is None:
document = self._find_closest_project_settings(project_name)
version = None
if document:
if document["type"] == self._project_settings_key:
version = document["version"]
else:
version = LEGACY_SETTINGS_VERSION
self.project_settings_cache[project_name].update_from_document(
document, version
)
last_saved_info = SettingsStateInfo.from_document(
version, PROJECT_SETTINGS_KEY, document
)
self.project_settings_cache[project_name].update_last_saved_info(
last_saved_info
)
cache = self.project_settings_cache[project_name]
data = cache.data_copy()
@ -1063,6 +1353,29 @@ class MongoSettingsHandler(SettingsHandler):
return data, cache.version
return data
def _get_project_settings_overrides_doc(self, project_name):
document = self._get_project_settings_overrides_for_version(
project_name
)
if document is None:
document = self._find_closest_project_settings(project_name)
version = None
if document:
if document["type"] == self._project_settings_key:
version = document["version"]
else:
version = LEGACY_SETTINGS_VERSION
return document, version
def get_project_last_saved_info(self, project_name):
# Make sure settings are recached
self.project_settings_cache[project_name].set_outdated()
self._get_project_settings_overrides(project_name, False)
return self.project_settings_cache[project_name].last_saved_info.copy()
def get_studio_project_settings_overrides(self, return_version):
"""Studio overrides of default project settings."""
return self._get_project_settings_overrides(None, return_version)
@ -1140,6 +1453,7 @@ class MongoSettingsHandler(SettingsHandler):
self.project_anatomy_cache[project_name].update_from_document(
document, version
)
else:
project_doc = get_project(project_name)
self.project_anatomy_cache[project_name].update_data(
@ -1359,6 +1673,64 @@ class MongoSettingsHandler(SettingsHandler):
return output
return self._sort_versions(output)
def get_last_opened_info(self):
doc = self.collection.find_one({
"type": "last_opened_settings_ui",
"version": self._current_version
}) or {}
info_data = doc.get("info")
if not info_data:
return None
# Fill in information that is not stored in the document
info_data["openpype_version"] = self._current_version
info_data["settings_type"] = None
info_data["project_name"] = None
return SettingsStateInfo.from_data(info_data)
def opened_settings_ui(self):
doc_filter = {
"type": "last_opened_settings_ui",
"version": self._current_version
}
opened_info = SettingsStateInfo.create_new(self._current_version)
new_doc_data = copy.deepcopy(doc_filter)
new_doc_data["info"] = opened_info.to_document_data()
doc = self.collection.find_one(
doc_filter,
{"_id": True}
)
if doc:
self.collection.update_one(
{"_id": doc["_id"]},
{"$set": new_doc_data}
)
else:
self.collection.insert_one(new_doc_data)
return opened_info
def closed_settings_ui(self, info_obj):
doc_filter = {
"type": "last_opened_settings_ui",
"version": self._current_version
}
doc = self.collection.find_one(doc_filter) or {}
info_data = doc.get("info")
if not info_data:
return
info_data["openpype_version"] = self._current_version
info_data["settings_type"] = None
info_data["project_name"] = None
current_info = SettingsStateInfo.from_data(info_data)
if current_info == info_obj:
self.collection.update_one(
{"_id": doc["_id"]},
{"$set": {"info": None}}
)
class MongoLocalSettingsHandler(LocalSettingsHandler):
"""Settings handler that use mongo for store and load local settings.
@ -1405,7 +1777,7 @@ class MongoLocalSettingsHandler(LocalSettingsHandler):
"""
data = data or {}
self.local_settings_cache.update_data(data)
self.local_settings_cache.update_data(data, None)
self.collection.replace_one(
{
@ -1428,6 +1800,6 @@ class MongoLocalSettingsHandler(LocalSettingsHandler):
"site_id": self.local_site_id
})
self.local_settings_cache.update_from_document(document)
self.local_settings_cache.update_from_document(document, None)
return self.local_settings_cache.data_copy()


@ -91,6 +91,31 @@ def calculate_changes(old_value, new_value):
return changes
@require_handler
def get_system_last_saved_info():
return _SETTINGS_HANDLER.get_system_last_saved_info()
@require_handler
def get_project_last_saved_info(project_name):
return _SETTINGS_HANDLER.get_project_last_saved_info(project_name)
@require_handler
def get_last_opened_info():
return _SETTINGS_HANDLER.get_last_opened_info()
@require_handler
def opened_settings_ui():
return _SETTINGS_HANDLER.opened_settings_ui()
@require_handler
def closed_settings_ui(info_obj):
return _SETTINGS_HANDLER.closed_settings_ui(info_obj)
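Each of these module-level functions delegates to a shared `_SETTINGS_HANDLER` through the `@require_handler` decorator. The decorator itself is outside this diff; a plausible sketch of its shape (a hypothetical reconstruction, not the actual OpenPype implementation):

```python
import functools

_SETTINGS_HANDLER = None  # created lazily elsewhere in the module


def require_handler(func):
    """Fail early with a clear error when the handler was not created yet."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if _SETTINGS_HANDLER is None:
            raise ValueError(
                "Settings handler is not available: {}".format(func.__name__)
            )
        return func(*args, **kwargs)
    return wrapper


@require_handler
def get_last_opened_info():
    return _SETTINGS_HANDLER.get_last_opened_info()
```

Calling any decorated function before the handler exists raises a descriptive error instead of an `AttributeError` on `None`.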
@require_handler
def save_studio_settings(data):
"""Save studio overrides of system settings.


@ -272,15 +272,17 @@ class SubsetsModel(TreeModel, BaseRepresentationModel):
# update availability on active site when version changes
if self.sync_server.enabled and version_doc:
repre_info = self.sync_server.get_repre_info_for_versions(
project_name,
[version_doc["_id"]],
self.active_site,
self.remote_site
repres_info = list(
self.sync_server.get_repre_info_for_versions(
project_name,
[version_doc["_id"]],
self.active_site,
self.remote_site
)
)
if repre_info:
if repres_info:
version_doc["data"].update(
self._get_repre_dict(repre_info[0]))
self._get_repre_dict(repres_info[0]))
self.set_version(index, version_doc)
@ -472,29 +474,34 @@ class SubsetsModel(TreeModel, BaseRepresentationModel):
last_versions_by_subset_id[subset_id] = hero_version
repre_info = {}
repre_info_by_version_id = {}
if self.sync_server.enabled:
version_ids = set()
versions_by_id = {}
for _subset_id, doc in last_versions_by_subset_id.items():
version_ids.add(doc["_id"])
versions_by_id[doc["_id"]] = doc
repres = self.sync_server.get_repre_info_for_versions(
repres_info = self.sync_server.get_repre_info_for_versions(
project_name,
list(version_ids), self.active_site, self.remote_site
list(versions_by_id.keys()),
self.active_site,
self.remote_site
)
for repre in repres:
for repre_info in repres_info:
if self._doc_fetching_stop:
return
version_id = repre_info["_id"]
doc = versions_by_id[version_id]
doc["active_provider"] = self.active_provider
doc["remote_provider"] = self.remote_provider
repre_info[repre["_id"]] = repre
repre_info_by_version_id[version_id] = repre_info
self._doc_payload = {
"asset_docs_by_id": asset_docs_by_id,
"subset_docs_by_id": subset_docs_by_id,
"subset_families": subset_families,
"last_versions_by_subset_id": last_versions_by_subset_id,
"repre_info_by_version_id": repre_info
"repre_info_by_version_id": repre_info_by_version_id
}
self.doc_fetched.emit()


@ -17,6 +17,7 @@ from openpype.client import (
get_thumbnail_id_from_source,
get_thumbnail,
)
from openpype.client.operations import OperationsSession, REMOVED_VALUE
from openpype.pipeline import HeroVersionType, Anatomy
from openpype.pipeline.thumbnail import get_thumbnail_binary
from openpype.pipeline.load import (
@ -614,26 +615,30 @@ class SubsetWidget(QtWidgets.QWidget):
box.show()
def group_subsets(self, name, asset_ids, items):
field = "data.subsetGroup"
subset_ids = {
item["_id"]
for item in items
if item.get("_id")
}
if not subset_ids:
return
if name:
update = {"$set": {field: name}}
self.echo("Group subsets to '%s'.." % name)
else:
update = {"$unset": {field: ""}}
self.echo("Ungroup subsets..")
subsets = list()
for item in items:
subsets.append(item["subset"])
project_name = self.dbcon.active_project()
op_session = OperationsSession()
for subset_id in subset_ids:
op_session.update_entity(
project_name,
"subset",
subset_id,
{"data.subsetGroup": name or REMOVED_VALUE}
)
for asset_id in asset_ids:
filtr = {
"type": "subset",
"parent": asset_id,
"name": {"$in": subsets},
}
self.dbcon.update_many(filtr, update)
op_session.commit()
def echo(self, message):
print(message)


@ -36,6 +36,11 @@ from openpype.settings.entities.op_version_entity import (
)
from openpype.settings import SaveWarningExc
from openpype.settings.lib import (
get_system_last_saved_info,
get_project_last_saved_info,
)
from .dialogs import SettingsLastSavedChanged, SettingsControlTaken
from .widgets import (
ProjectListWidget,
VersionAction
@ -115,12 +120,19 @@ class SettingsCategoryWidget(QtWidgets.QWidget):
"settings to update them to your currently running OpenPype version."
)
def __init__(self, user_role, parent=None):
def __init__(self, controller, parent=None):
super(SettingsCategoryWidget, self).__init__(parent)
self.user_role = user_role
self._controller = controller
controller.event_system.add_callback(
"edit.mode.changed",
self._edit_mode_changed
)
self.entity = None
self._edit_mode = None
self._last_saved_info = None
self._reset_crashed = False
self._state = CategoryState.Idle
@ -191,6 +203,31 @@ class SettingsCategoryWidget(QtWidgets.QWidget):
)
raise TypeError("Unknown type: {}".format(label))
def _edit_mode_changed(self, event):
self.set_edit_mode(event["edit_mode"])
def set_edit_mode(self, enabled):
if enabled is self._edit_mode:
return
was_false = self._edit_mode is False
self._edit_mode = enabled
self.save_btn.setEnabled(enabled and not self._reset_crashed)
if enabled:
tooltip = "Save settings"
else:
tooltip = (
"Someone else has opened settings UI."
"\nTry hitting refresh to check if settings are already available."
)
self.save_btn.setToolTip(tooltip)
# Reset when last saved information has changed
if was_false and not self._check_last_saved_info():
self.reset()
@property
def state(self):
return self._state
@ -286,7 +323,7 @@ class SettingsCategoryWidget(QtWidgets.QWidget):
footer_layout = QtWidgets.QHBoxLayout(footer_widget)
footer_layout.setContentsMargins(5, 5, 5, 5)
if self.user_role == "developer":
if self._controller.user_role == "developer":
self._add_developer_ui(footer_layout, footer_widget)
footer_layout.addWidget(empty_label, 1)
@ -434,6 +471,9 @@ class SettingsCategoryWidget(QtWidgets.QWidget):
self.set_state(CategoryState.Idle)
def save(self):
if not self._edit_mode:
return
if not self.items_are_valid():
return
@ -664,14 +704,16 @@ class SettingsCategoryWidget(QtWidgets.QWidget):
)
def _on_reset_crash(self):
self._reset_crashed = True
self.save_btn.setEnabled(False)
if self.breadcrumbs_model is not None:
self.breadcrumbs_model.set_entity(None)
def _on_reset_success(self):
self._reset_crashed = False
if not self.save_btn.isEnabled():
self.save_btn.setEnabled(True)
self.save_btn.setEnabled(self._edit_mode)
if self.breadcrumbs_model is not None:
path = self.breadcrumbs_bar.path()
@ -716,7 +758,24 @@ class SettingsCategoryWidget(QtWidgets.QWidget):
"""Callback on any tab widget save."""
return
def _check_last_saved_info(self):
raise NotImplementedError((
"{} has not implemented '_check_last_saved_info'"
).format(self.__class__.__name__))
def _save(self):
self._controller.update_last_opened_info()
if not self._controller.opened_info:
dialog = SettingsControlTaken(self._last_saved_info, self)
dialog.exec_()
return
if not self._check_last_saved_info():
dialog = SettingsLastSavedChanged(self._last_saved_info, self)
dialog.exec_()
if dialog.result() == 0:
return
# Don't trigger restart if defaults are modified
if self.is_modifying_defaults:
require_restart = False
@ -775,6 +834,13 @@ class SystemWidget(SettingsCategoryWidget):
self._actions = []
super(SystemWidget, self).__init__(*args, **kwargs)
def _check_last_saved_info(self):
if self.is_modifying_defaults:
return True
last_saved_info = get_system_last_saved_info()
return self._last_saved_info == last_saved_info
def contain_category_key(self, category):
if category == "system_settings":
return True
@ -789,6 +855,10 @@ class SystemWidget(SettingsCategoryWidget):
)
entity.on_change_callbacks.append(self._on_entity_change)
self.entity = entity
last_saved_info = None
if not self.is_modifying_defaults:
last_saved_info = get_system_last_saved_info()
self._last_saved_info = last_saved_info
try:
if self.is_modifying_defaults:
entity.set_defaults_state()
@ -822,6 +892,13 @@ class ProjectWidget(SettingsCategoryWidget):
def __init__(self, *args, **kwargs):
super(ProjectWidget, self).__init__(*args, **kwargs)
def _check_last_saved_info(self):
if self.is_modifying_defaults:
return True
last_saved_info = get_project_last_saved_info(self.project_name)
return self._last_saved_info == last_saved_info
def contain_category_key(self, category):
if category in ("project_settings", "project_anatomy"):
return True
@ -901,6 +978,11 @@ class ProjectWidget(SettingsCategoryWidget):
entity.on_change_callbacks.append(self._on_entity_change)
self.project_list_widget.set_entity(entity)
self.entity = entity
last_saved_info = None
if not self.is_modifying_defaults:
last_saved_info = get_project_last_saved_info(self.project_name)
self._last_saved_info = last_saved_info
try:
if self.is_modifying_defaults:
self.entity.set_defaults_state()


@ -0,0 +1,202 @@
from Qt import QtWidgets, QtCore
from openpype.tools.utils.delegates import pretty_date
class BaseInfoDialog(QtWidgets.QDialog):
width = 600
height = 400
def __init__(self, message, title, info_obj, parent=None):
super(BaseInfoDialog, self).__init__(parent)
self._result = 0
self._info_obj = info_obj
self.setWindowTitle(title)
message_label = QtWidgets.QLabel(message, self)
message_label.setWordWrap(True)
separator_widget_1 = QtWidgets.QFrame(self)
separator_widget_2 = QtWidgets.QFrame(self)
for separator_widget in (
separator_widget_1,
separator_widget_2
):
separator_widget.setObjectName("Separator")
separator_widget.setMinimumHeight(1)
separator_widget.setMaximumHeight(1)
other_information = QtWidgets.QWidget(self)
other_information_layout = QtWidgets.QFormLayout(other_information)
other_information_layout.setContentsMargins(0, 0, 0, 0)
for label, value in (
("Username", info_obj.username),
("Host name", info_obj.hostname),
("Host IP", info_obj.hostip),
("System name", info_obj.system_name),
("Local ID", info_obj.local_id),
):
other_information_layout.addRow(
label,
QtWidgets.QLabel(value, other_information)
)
timestamp_label = QtWidgets.QLabel(
pretty_date(info_obj.timestamp_obj), other_information
)
other_information_layout.addRow("Time", timestamp_label)
footer_widget = QtWidgets.QWidget(self)
buttons_widget = QtWidgets.QWidget(footer_widget)
buttons_layout = QtWidgets.QHBoxLayout(buttons_widget)
buttons_layout.setContentsMargins(0, 0, 0, 0)
buttons = self.get_buttons(buttons_widget)
for button in buttons:
buttons_layout.addWidget(button, 1)
footer_layout = QtWidgets.QHBoxLayout(footer_widget)
footer_layout.setContentsMargins(0, 0, 0, 0)
footer_layout.addStretch(1)
footer_layout.addWidget(buttons_widget, 0)
layout = QtWidgets.QVBoxLayout(self)
layout.addWidget(message_label, 0)
layout.addWidget(separator_widget_1, 0)
layout.addStretch(1)
layout.addWidget(other_information, 0, QtCore.Qt.AlignHCenter)
layout.addStretch(1)
layout.addWidget(separator_widget_2, 0)
layout.addWidget(footer_widget, 0)
timestamp_timer = QtCore.QTimer()
timestamp_timer.setInterval(1000)
timestamp_timer.timeout.connect(self._on_timestamp_timer)
self._timestamp_label = timestamp_label
self._timestamp_timer = timestamp_timer
def showEvent(self, event):
super(BaseInfoDialog, self).showEvent(event)
self._timestamp_timer.start()
self.resize(self.width, self.height)
def closeEvent(self, event):
self._timestamp_timer.stop()
super(BaseInfoDialog, self).closeEvent(event)
def _on_timestamp_timer(self):
self._timestamp_label.setText(
pretty_date(self._info_obj.timestamp_obj)
)
def result(self):
return self._result
def get_buttons(self, parent):
return []
class SettingsUIOpenedElsewhere(BaseInfoDialog):
def __init__(self, info_obj, parent=None):
title = "Someone else has opened Settings UI"
message = (
"Someone else has opened Settings UI which could cause data loss."
" Please contact the person on the other side."
"<br/><br/>You can continue in <b>view-only mode</b>."
" All changes in view mode will be lost."
"<br/><br/>You can <b>take control</b>, which will cause all"
" changes to settings on the other side to be lost.<br/>"
)
super(SettingsUIOpenedElsewhere, self).__init__(
message, title, info_obj, parent
)
def _on_take_control(self):
self._result = 1
self.close()
def _on_view_mode(self):
self._result = 0
self.close()
def get_buttons(self, parent):
take_control_btn = QtWidgets.QPushButton(
"Take control", parent
)
view_mode_btn = QtWidgets.QPushButton(
"View only", parent
)
take_control_btn.clicked.connect(self._on_take_control)
view_mode_btn.clicked.connect(self._on_view_mode)
return [
take_control_btn,
view_mode_btn
]
class SettingsLastSavedChanged(BaseInfoDialog):
width = 500
height = 300
def __init__(self, info_obj, parent=None):
title = "Settings have changed"
message = (
"Settings have changed while this settings session was open."
"<br/><br/>It is <b>recommended to refresh settings</b>"
" and re-apply changes in the new session."
)
super(SettingsLastSavedChanged, self).__init__(
message, title, info_obj, parent
)
def _on_save(self):
self._result = 1
self.close()
def _on_close(self):
self._result = 0
self.close()
def get_buttons(self, parent):
close_btn = QtWidgets.QPushButton(
"Close", parent
)
save_btn = QtWidgets.QPushButton(
"Save anyway", parent
)
close_btn.clicked.connect(self._on_close)
save_btn.clicked.connect(self._on_save)
return [
close_btn,
save_btn
]
class SettingsControlTaken(BaseInfoDialog):
width = 500
height = 300
def __init__(self, info_obj, parent=None):
title = "Settings control taken"
message = (
"Someone took control over your settings."
"<br/><br/>It is not possible to save changes of the currently"
" opened session. Copy any changes you want to keep and hit refresh."
)
super(SettingsControlTaken, self).__init__(
message, title, info_obj, parent
)
def _on_confirm(self):
self.close()
def get_buttons(self, parent):
confirm_btn = QtWidgets.QPushButton("Understand", parent)
confirm_btn.clicked.connect(self._on_confirm)
return [confirm_btn]


@ -1,4 +1,18 @@
from Qt import QtWidgets, QtGui, QtCore
from openpype import style
from openpype.lib import is_admin_password_required
from openpype.lib.events import EventSystem
from openpype.widgets import PasswordDialog
from openpype.settings.lib import (
get_last_opened_info,
opened_settings_ui,
closed_settings_ui,
)
from .dialogs import SettingsUIOpenedElsewhere
from .categories import (
CategoryState,
SystemWidget,
@ -10,10 +24,80 @@ from .widgets import (
SettingsTabWidget
)
from .search_dialog import SearchEntitiesDialog
from openpype import style
from openpype.lib import is_admin_password_required
from openpype.widgets import PasswordDialog
class SettingsController:
"""Controller for settings tools.
Added after the tool was finished to check who last opened the settings
categories and to communicate that state with the main widget logic.
"""
def __init__(self, user_role):
self._user_role = user_role
self._event_system = EventSystem()
self._opened_info = None
self._last_opened_info = None
self._edit_mode = None
@property
def user_role(self):
return self._user_role
@property
def event_system(self):
return self._event_system
@property
def opened_info(self):
return self._opened_info
@property
def last_opened_info(self):
return self._last_opened_info
@property
def edit_mode(self):
return self._edit_mode
def ui_closed(self):
if self._opened_info is not None:
closed_settings_ui(self._opened_info)
self._opened_info = None
self._edit_mode = None
def set_edit_mode(self, enabled):
if self._edit_mode is enabled:
return
opened_info = None
if enabled:
opened_info = opened_settings_ui()
self._last_opened_info = opened_info
self._opened_info = opened_info
self._edit_mode = enabled
self.event_system.emit(
"edit.mode.changed",
{"edit_mode": enabled},
"controller"
)
def update_last_opened_info(self):
last_opened_info = get_last_opened_info()
enabled = False
if (
last_opened_info is None
or self._opened_info == last_opened_info
):
enabled = True
self._last_opened_info = last_opened_info
self.set_edit_mode(enabled)
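The controller and widgets communicate through `EventSystem` with `add_callback` and `emit`, as with the `"edit.mode.changed"` topic above. A minimal topic-based event system in the same spirit (an illustrative stand-in, not OpenPype's `openpype.lib.events` implementation):

```python
class MiniEventSystem:
    """Topic-based callbacks: a simplified stand-in for EventSystem."""
    def __init__(self):
        self._callbacks = {}

    def add_callback(self, topic, callback):
        # Register a callback for one topic; multiple callbacks are allowed.
        self._callbacks.setdefault(topic, []).append(callback)

    def emit(self, topic, data, source=None):
        # Deliver the event payload to every callback registered for the topic.
        event = dict(data, topic=topic, source=source)
        for callback in self._callbacks.get(topic, []):
            callback(event)


received = []
system = MiniEventSystem()
system.add_callback("edit.mode.changed", lambda event: received.append(event))
system.emit("edit.mode.changed", {"edit_mode": False}, "controller")
```

The emitter does not need to know which widgets listen, which is why the controller can toggle edit mode for every settings category at once.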
class MainWidget(QtWidgets.QWidget):
@ -21,17 +105,25 @@ class MainWidget(QtWidgets.QWidget):
widget_width = 1000
widget_height = 600
window_title = "OpenPype Settings"
def __init__(self, user_role, parent=None, reset_on_show=True):
super(MainWidget, self).__init__(parent)
controller = SettingsController(user_role)
# Object referencing this machine and the time when the UI was opened
# - is used on close event
self._main_reset = False
self._controller = controller
self._user_passed = False
self._reset_on_show = reset_on_show
self._password_dialog = None
self.setObjectName("SettingsMainWidget")
self.setWindowTitle("OpenPype Settings")
self.setWindowTitle(self.window_title)
self.resize(self.widget_width, self.widget_height)
@ -41,8 +133,8 @@ class MainWidget(QtWidgets.QWidget):
header_tab_widget = SettingsTabWidget(parent=self)
studio_widget = SystemWidget(user_role, header_tab_widget)
project_widget = ProjectWidget(user_role, header_tab_widget)
studio_widget = SystemWidget(controller, header_tab_widget)
project_widget = ProjectWidget(controller, header_tab_widget)
tab_widgets = [
studio_widget,
@ -64,6 +156,11 @@ class MainWidget(QtWidgets.QWidget):
self._shadow_widget = ShadowWidget("Working...", self)
self._shadow_widget.setVisible(False)
controller.event_system.add_callback(
"edit.mode.changed",
self._edit_mode_changed
)
header_tab_widget.currentChanged.connect(self._on_tab_changed)
search_dialog.path_clicked.connect(self._on_search_path_clicked)
@ -74,7 +171,7 @@ class MainWidget(QtWidgets.QWidget):
self._on_restart_required
)
tab_widget.reset_started.connect(self._on_reset_started)
tab_widget.reset_started.connect(self._on_reset_finished)
tab_widget.reset_finished.connect(self._on_reset_finished)
tab_widget.full_path_requested.connect(self._on_full_path_request)
header_tab_widget.context_menu_requested.connect(
@ -131,11 +228,31 @@ class MainWidget(QtWidgets.QWidget):
def showEvent(self, event):
super(MainWidget, self).showEvent(event)
if self._reset_on_show:
self._reset_on_show = False
# Trigger reset with 100ms delay
QtCore.QTimer.singleShot(100, self.reset)
def closeEvent(self, event):
self._controller.ui_closed()
super(MainWidget, self).closeEvent(event)
def _check_on_reset(self):
self._controller.update_last_opened_info()
if self._controller.edit_mode:
return
# if self._edit_mode is False:
# return
dialog = SettingsUIOpenedElsewhere(
self._controller.last_opened_info, self
)
dialog.exec_()
self._controller.set_edit_mode(dialog.result() == 1)
def _show_password_dialog(self):
if self._password_dialog:
self._password_dialog.open()
@ -176,8 +293,11 @@ class MainWidget(QtWidgets.QWidget):
if self._reset_on_show:
self._reset_on_show = False
self._main_reset = True
for tab_widget in self.tab_widgets:
tab_widget.reset()
self._main_reset = False
self._check_on_reset()
def _update_search_dialog(self, clear=False):
if self._search_dialog.isVisible():
@ -187,6 +307,12 @@ class MainWidget(QtWidgets.QWidget):
entity = widget.entity
self._search_dialog.set_root_entity(entity)
def _edit_mode_changed(self, event):
title = self.window_title
if not event["edit_mode"]:
title += " [View only]"
self.setWindowTitle(title)
def _on_tab_changed(self):
self._update_search_dialog()
@ -221,6 +347,9 @@ class MainWidget(QtWidgets.QWidget):
if current_widget is widget:
self._update_search_dialog()
if not self._main_reset:
self._check_on_reset()
def keyPressEvent(self, event):
if event.matches(QtGui.QKeySequence.Find):
# todo: search in all widgets (or in active)?


@ -2,6 +2,7 @@ import os
import sys
import contextlib
import collections
import traceback
from Qt import QtWidgets, QtCore, QtGui
import qtawesome
@ -643,7 +644,11 @@ class DynamicQThread(QtCore.QThread):
def create_qthread(func, *args, **kwargs):
class Thread(QtCore.QThread):
def run(self):
func(*args, **kwargs)
try:
func(*args, **kwargs)
except BaseException:
traceback.print_exception(*sys.exc_info())
raise
return Thread()
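The change above wraps the thread body so that an exception is printed with a full traceback before being re-raised, since exceptions escaping `QThread.run` are otherwise easy to miss. The same pattern stripped of Qt (a plain-Python sketch):

```python
import sys
import traceback


def make_safe_runner(func, *args, **kwargs):
    """Return a callable that prints a traceback before re-raising."""
    def run():
        try:
            func(*args, **kwargs)
        except BaseException:
            # Print the traceback explicitly, then let the exception propagate.
            traceback.print_exception(*sys.exc_info())
            raise
    return run


def broken():
    raise RuntimeError("boom")


runner = make_safe_runner(broken)
```

Re-raising keeps the original behavior for any caller that handles the exception; the explicit print only makes the failure visible where thread exceptions would otherwise be swallowed.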


@ -14,15 +14,15 @@ from openpype.lib import (
emit_event,
create_workdir_extra_folders,
)
from openpype.lib.avalon_context import (
update_current_task,
compute_session_changes
)
from openpype.pipeline import (
registered_host,
legacy_io,
Anatomy,
)
from openpype.pipeline.context_tools import (
compute_session_changes,
change_current_context
)
from openpype.pipeline.workfile import get_workfile_template_key
from .model import (
@ -408,8 +408,8 @@ class FilesWidget(QtWidgets.QWidget):
)
changes = compute_session_changes(
session,
asset=self._get_asset_doc(),
task=self._task_name,
self._get_asset_doc(),
self._task_name,
template_key=self.template_key
)
session.update(changes)
@ -422,8 +422,8 @@ class FilesWidget(QtWidgets.QWidget):
session = legacy_io.Session.copy()
changes = compute_session_changes(
session,
asset=self._get_asset_doc(),
task=self._task_name,
self._get_asset_doc(),
self._task_name,
template_key=self.template_key
)
if not changes:
@ -431,9 +431,9 @@ class FilesWidget(QtWidgets.QWidget):
# to avoid any unwanted Task Changed callbacks to be triggered.
return
update_current_task(
asset=self._get_asset_doc(),
task=self._task_name,
change_current_context(
self._get_asset_doc(),
self._task_name,
template_key=self.template_key
)


@ -299,7 +299,6 @@ class PublishFilesModel(QtGui.QStandardItemModel):
self.project_name,
asset_ids=[self._asset_id],
fields=["_id", "name"]
)
subset_ids = [subset_doc["_id"] for subset_doc in subset_docs]
@ -329,7 +328,9 @@ class PublishFilesModel(QtGui.QStandardItemModel):
# extension
extensions = [ext.replace(".", "") for ext in self._file_extensions]
repre_docs = get_representations(
self.project_name, version_ids, extensions
self.project_name,
version_ids=version_ids,
context_filters={"ext": extensions}
)
# Filter queried representations by task name if task is set


@ -40,7 +40,6 @@ For more information [see here](admin_use.md#run-openpype).
| module | Run command line arguments for modules. | |
| repack-version | Tool to re-create version zip. | [📑](#repack-version-arguments) |
| tray | Launch OpenPype Tray. | [📑](#tray-arguments) |
| eventserver | This should ideally be used by a system service (such as systemd or upstart on Linux, or a Windows service). | [📑](#eventserver-arguments) |
| launch | Launch application in Pype environment. | [📑](#launch-arguments) |
| publish | Pype takes a JSON file from the provided path and uses it to publish the data in it. | [📑](#publish-arguments) |
| extractenvironments | Extract environment variables for entered context to a json file. | [📑](#extractenvironments-arguments) |
@ -48,7 +47,6 @@ For more information [see here](admin_use.md#run-openpype).
| interactive | Start python like interactive console session. | |
| projectmanager | Launch Project Manager UI | [📑](#projectmanager-arguments) |
| settings | Open Settings UI | [📑](#settings-arguments) |
| standalonepublisher | Open Standalone Publisher UI | [📑](#standalonepublisher-arguments) |
---
### `tray` arguments {#tray-arguments}
@ -57,25 +55,7 @@ For more information [see here](admin_use.md#run-openpype).
openpype_console tray
```
---
### `eventserver` arguments {#eventserver-arguments}
You have to either set the proper environment variables to provide the URL and credentials, or use the
options below to specify them.
| Argument | Description |
| --- | --- |
| `--ftrack-url` | URL to ftrack server (can be set with `FTRACK_SERVER`) |
| `--ftrack-user` |user name to log in to ftrack (can be set with `FTRACK_API_USER`) |
| `--ftrack-api-key` | ftrack api key (can be set with `FTRACK_API_KEY`) |
| `--legacy` | run event server without mongo storing |
| `--clockify-api-key` | Clockify API key (can be set with `CLOCKIFY_API_KEY`) |
| `--clockify-workspace` | Clockify workspace (can be set with `CLOCKIFY_WORKSPACE`) |
To run ftrack event server:
```shell
openpype_console eventserver --ftrack-url=<url> --ftrack-user=<user> --ftrack-api-key=<key>
```
---
### `launch` arguments {#launch-arguments}
| Argument | Description |
openpype_console settings
```
---
### `standalonepublisher` arguments {#standalonepublisher-arguments}
`standalonepublisher` has no command-line arguments.
```shell
openpype_console standalonepublisher
```
### `repack-version` arguments {#repack-version-arguments}
Takes a path to an unzipped and possibly modified OpenPype version. Files will be
zipped, checksums recalculated, and the version will be determined by the folder name.


@ -0,0 +1,9 @@
---
id: admin_releases
title: Releases
sidebar_label: Releases
---
Information about releases can be found on GitHub [Releases page](https://github.com/pypeclub/OpenPype/releases).
You can find features and bugfixes in the codebase or full changelog for advanced users.


@ -10,6 +10,8 @@ sidebar_label: Key Concepts
In our pipeline all the main entities the project is made from are internally considered *'Assets'*. Episode, sequence, shot, character, prop, etc. All of these behave identically in the pipeline. Asset names need to be absolutely unique within the project because they are their key identifier.
OpenPype has a limitation regarding duplicated names: names of assets must be unique across the whole project.
### Subset
Usually, an asset needs to be created in multiple *'flavours'*. A character might have multiple different looks, model needs to be published in different resolutions, a standard animation rig might not be usable in a crowd system and so on. 'Subsets' are here to accommodate all this variety that might be needed within a single asset. A model might have subset: *'main'*, *'proxy'*, *'sculpt'*, while data of *'look'* family could have subsets *'main'*, *'dirty'*, *'damaged'*. Subsets have some recommendations for their names, but ultimately it's up to the artist to use them for separation of publishes when needed.
### Version
A numbered iteration of a given subset. Each version contains at least one representation.
### Representation
Each published variant can come out of the software in multiple representations. All of them hold exactly the same data, but in different formats. A model, for example, might be saved as `.OBJ`, Alembic, Maya geometry or as all of them, to be ready for pickup in any other applications supporting these formats.
#### Naming convention
At this moment names of assets, tasks, subsets or representations can contain only letters, numbers and underscore.
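A hypothetical validation helper for this convention could look like the following sketch (the regex and function name are illustrative, not OpenPype's actual API):

```python
import re

# Illustrative only: OpenPype's real validation may differ in detail.
# The convention above allows letters, numbers and underscore.
NAME_PATTERN = re.compile(r"^[a-zA-Z0-9_]+$")


def is_valid_name(name):
    """Return True if the name follows the naming convention."""
    return bool(NAME_PATTERN.match(name))
```

For example, `is_valid_name("sh010_main")` passes while names containing spaces or dashes do not.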
### Family
Each published [subset][3b89d8e0] can have exactly one family assigned to it. Family determines the type of data that the subset holds. Family doesn't dictate the file type, but can enforce certain technical specifications. For example OpenPype default configuration expects `model` family to only contain geometry without any shaders or joints when it is published.

---
id: dev_settings
title: Settings
sidebar_label: Settings
---
Settings give the ability to change how OpenPype behaves in certain situations. Settings are split into 3 categories: **system settings**, **project anatomy** and **project settings**. Project anatomy and project settings are grouped into a single category, but there is a technical difference (explained later). The only difference between system and project settings is that system settings can't be handled on a project level: their values must be available no matter which project the values are received for. Settings can be accessed through headless entities or through the settings UI.
There is one more category, **local settings**, but they can't be changed or defined easily. Local settings can change how settings work per machine and can affect both system and project settings, but at this moment they are hardcoded for predefined values.
## Settings schemas
System and project settings are defined by settings schemas. A schema defines the structure of the output value, which value types the output will contain, how the settings are stored and how the UI input will look.
## Settings values
The output of settings is a json serializable value. There are 3 possible types of values: **default values**, **studio overrides** and **project overrides**. Default values must always be available for all settings schemas; they are stored in the code repository and are what everyone who just installed OpenPype will use. It is good practice to set example values, but they should be actually relevant.
Settings overrides are what make settings a powerful tool. Overrides contain only a part of the settings plus additional metadata that describes which parts of the settings values should be replaced by the override values. Using overrides gives the ability to save only specific values and use default values for the rest. This is especially useful in project settings, which have up to 2 levels of overrides: **default values** are used as a base, on which **studio overrides** are applied, and then **project overrides**. In practice it is possible to save only studio overrides, which affect all projects; changes in studio overrides are then propagated to all projects without project overrides. Values can also be locked on project level so studio overrides are not used.
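As a rough illustration, this layering can be thought of as a recursive merge of dictionaries. The sketch below is simplified; the real implementation also tracks override metadata and "groups" that are replaced as a whole:

```python
def apply_overrides(defaults, *layers):
    """Apply override layers (studio, then project) on top of defaults.

    Simplified sketch: real overrides also carry metadata describing
    which subtrees are overridden as a whole.
    """
    def merge(base, override):
        # Copy the base so inputs are never mutated
        output = dict(base)
        for key, value in override.items():
            if isinstance(value, dict) and isinstance(base.get(key), dict):
                output[key] = merge(base[key], value)
            else:
                output[key] = value
        return output

    result = defaults
    for layer in layers:
        result = merge(result, layer)
    return result


defaults = {"review": {"fps": 25, "codec": "h264"}}
studio_overrides = {"review": {"fps": 24}}
project_overrides = {"review": {"codec": "prores"}}
resolved = apply_overrides(defaults, studio_overrides, project_overrides)
```

Here the project override wins for `codec`, the studio override wins for `fps`, and any key neither layer touches keeps its default value.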
## Settings storage
As was mentioned, default values are stored in repository files. Overrides are stored in the Mongo database. The values in mongo contain only overrides with metadata, so their content on its own is useless and must be combined with default values. System settings and project settings are stored in a special collection; a single document represents one set of overrides together with the OpenPype version for which it was stored. Settings are versioned and are loaded in a specific order: overrides for the current OpenPype version are used, or the first lower version available. If there are no overrides with the same or a lower version, the first higher version is used. If there are no overrides at all, no overrides are applied.
Project anatomy is stored in the project document, thus it is not versioned and its values are always fully overridden. Any changes in the anatomy schema may have a drastic effect on production and OpenPype updates.
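The version resolution order described above can be sketched like this (versions shown as comparable tuples; the function name is illustrative, not OpenPype's actual API):

```python
def pick_overrides_version(current, available):
    """Pick which stored overrides version should be used.

    Resolution order: current version, otherwise the closest lower one,
    otherwise the first higher one; None when no overrides exist at all.
    """
    if not available:
        return None  # no overrides -> nothing is applied
    lower_or_equal = [version for version in available if version <= current]
    if lower_or_equal:
        # Current version overrides, or the first lower available
        return max(lower_or_equal)
    # No same or lower version stored -> the first higher version is used
    return min(available)
```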
## Settings schema items
As was mentioned, schema items define the output type of values, how they are stored and how they look in the UI.
- schemas are (by default) defined by json files
- OpenPype core system settings schemas are stored in `~/openpype/settings/entities/schemas/system_schema/` and project settings in `~/openpype/settings/entities/schemas/projects_schema/`
- both contain `schema_main.json` which are entry points
- OpenPype modules/addons can define their settings schemas using `BaseModuleSettingsDef` in that case some functionality may be slightly modified
- a single schema item is represented by a dictionary (object) in json which has a `"type"` key
- **type** is the only common key which is required for all schema items
- each item may have "input modifiers" (other keys in the dictionary) which may be required or optional based on the type
- there are special keys across all items
- `"is_file"` - this key is used when default values are stored in a file; its value matches the filename where the values are stored
- the key is validated and must be unique in the hierarchy, otherwise it won't be possible to store default values
- it makes sense to fill it only if its value is `true`
- `"is_group"` - defines that all values under the key in the settings hierarchy will be overridden if any value is modified
- this key is not allowed for all inputs as they may not have the technical ability to handle it
- the key is validated, must be unique in the hierarchy, and is automatically filled on the last possible item if it is not defined in the schemas
- it makes sense to fill it only if its value is `true`
- all entities can have a `"tooltip"` key set with a description which will be shown in the UI on hover
### Inner schema
Settings schemas are big json files which would become unmanageable if they were kept in a single file. To be able to split them into multiple files and help organize them, the special types `schema` and `template` were added. Both reference a different file by filename. If the referenced json file contains a dictionary it is considered a `schema`; if it contains a list it is considered a `template`.
#### schema
The schema item is replaced by the content of the schema with the entered name. It is recommended that a schema file is used only once in the settings hierarchy; templates are meant for reuse.
- schema must have a `"name"` key which is the name of the schema that should be used
```javascript
{
"type": "schema",
"name": "my_schema_name"
}
```
#### template
Templates are almost the same as schema items but can contain one or more items which can be formatted with additional data, and some keys can be skipped if needed. Templates are meant for reusing the same schemas with the ability to modify their content.
- legacy name is `schema_template` (still usable)
- template must have `"name"` key which is name of template file that should be used
- to fill formatting keys use `"template_data"`
- all items in template, except `__default_values__`, will replace `template` item in original schema
- template may contain other templates
```javascript
// Example template json file content
[
{
// Define default values for formatting values
// - gives ability to set the value but have default value
"__default_values__": {
"multipath_executables": true
}
}, {
"type": "raw-json",
"label": "{host_label} Environments",
"key": "{host_name}_environments"
}, {
"type": "path",
"key": "{host_name}_executables",
"label": "{host_label} - Full paths to executables",
"multiplatform": "{multipath_executables}",
"multipath": true
}
]
```
```javascript
// Example usage of the template in schema
{
"type": "dict",
"key": "template_examples",
"label": "Schema template examples",
"children": [
{
"type": "template",
"name": "example_template",
"template_data": [
{
"host_label": "Maya 2019",
"host_name": "maya_2019",
"multipath_executables": false
},
{
"host_label": "Maya 2020",
"host_name": "maya_2020"
},
{
"host_label": "Maya 2021",
"host_name": "maya_2021"
}
]
}
]
}
```
```javascript
// The same schema defined without templates
{
"type": "dict",
"key": "template_examples",
"label": "Schema template examples",
"children": [
{
"type": "raw-json",
"label": "Maya 2019 Environments",
"key": "maya_2019_environments"
}, {
"type": "path",
"key": "maya_2019_executables",
"label": "Maya 2019 - Full paths to executables",
"multiplatform": false,
"multipath": true
}, {
"type": "raw-json",
"label": "Maya 2020 Environments",
"key": "maya_2020_environments"
}, {
"type": "path",
"key": "maya_2020_executables",
"label": "Maya 2020 - Full paths to executables",
"multiplatform": true,
"multipath": true
}, {
"type": "raw-json",
"label": "Maya 2021 Environments",
"key": "maya_2021_environments"
}, {
"type": "path",
"key": "maya_2021_executables",
"label": "Maya 2021 - Full paths to executables",
"multiplatform": true,
"multipath": true
}
]
}
```
Template data can be used only to fill values, not keys. It is also possible to define default values for unfilled fields; to do so, one of the items in the list must be a dictionary with the key `"__default_values__"` whose value is a dictionary of default key: value pairs (as in the example above).
```javascript
{
...
// Allowed
"key": "{to_fill}"
...
// Not allowed
"{to_fill}": "value"
...
}
```
Because formatting values can only be strings, it is possible to use formatting keys whose whole value is replaced with a different type.
```javascript
// Template data
{
"template_data": {
"executable_multiplatform": {
"type": "schema",
"name": "my_multiplatform_schema"
}
}
}
// Template content
{
...
// Allowed - value is replaced with dictionary
"multiplatform": "{executable_multiplatform}"
...
// Not allowed - there is no way how it could be replaced
"multiplatform": "{executable_multiplatform}_enhanced_string"
...
}
```
#### dynamic_schema
Dynamic schema item marks a place in settings schema where schemas defined by `BaseModuleSettingsDef` can be placed.
- example:
```javascript
{
"type": "dynamic_schema",
"name": "project_settings/global"
}
```
- a `BaseModuleSettingsDef` with `get_settings_schemas` implemented can return a dictionary where keys define dynamic schema names and values are the schemas that will be put there
- dynamic schemas work almost the same way as templates
- one item can be replaced by multiple items (or by 0 items)
- the goal is to dynamically load settings of OpenPype modules without having their schemas or default values in the core repository
- values of these schemas are saved using the `BaseModuleSettingsDef` methods
- we recommend using `JsonFilesSettingsDef` which has a full implementation of storing default values to json files
- it only requires implementing the method `get_settings_root_path` which should return the path to the root directory where settings schemas can be found and default values will be saved
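A minimal sketch of such a module settings definition might look like this. The base class below is a stand-in stub for illustration; the real `JsonFilesSettingsDef` lives in OpenPype and does the actual loading of schemas and saving of default values:

```python
import os


class JsonFilesSettingsDef:
    """Stand-in stub for the real OpenPype base class, which loads
    schemas and stores default values under the returned root path."""

    def get_settings_root_path(self):
        raise NotImplementedError


class MyAddonSettingsDef(JsonFilesSettingsDef):
    """Hypothetical addon settings definition."""

    def get_settings_root_path(self):
        # A real addon would return a directory inside its own package
        return os.path.join("my_addon", "settings")
```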
### Basic Dictionary inputs
These inputs wrap other inputs into a `{key: value}` relation.
#### dict
- this is dictionary type wrapping more inputs with keys defined in schema
- may be used as dynamic children (e.g. in [list](#list) or [dict-modifiable](#dict-modifiable))
- in that case the only key modifier is `children` which is a list of its keys
- USAGE: e.g. List of dictionaries where each dictionary has the same structure.
- if it is not used as dynamic children then it must have `"key"` defined, under which its values are stored
- may be with or without `"label"` (only for GUI)
- `"label"` must be set to be able to mark the item as a group with the `"is_group"` key set to `True`
- item with label can visually wrap its children
- this option is enabled by default; to turn it off set `"use_label_wrap"` to `False`
- label wrap is collapsible by default
- that can be changed with the key `"collapsible"` set to `True`/`False`
- the key `"collapsed"` set to `True`/`False` defines whether it is collapsed when the GUI is opened (Default: `False`)
- it is possible to add a lighter background with `"highlight_content"` (Default: `False`)
- the lighter background has its limits: after 3-4 nested highlighted items there is not much difference in the color
- output is a dictionary `{<key>: <children values>}`
```javascript
// Example
{
"key": "applications",
"type": "dict",
"label": "Applications",
"collapsible": true,
"highlight_content": true,
"is_group": true,
"is_file": true,
"children": [
...ITEMS...
]
}
// Without label
{
"type": "dict",
"key": "global",
"children": [
...ITEMS...
]
}
// When used as widget
{
"type": "list",
"key": "profiles",
"label": "Profiles",
"object_type": {
"type": "dict",
"children": [
{
"key": "families",
"label": "Families",
"type": "list",
"object_type": "text"
}, {
"key": "hosts",
"label": "Hosts",
"type": "list",
"object_type": "text"
}
...
]
}
}
```
#### dict-roots
- entity can be used only in Project settings
- keys of dictionary are based on current project roots
- they are not updated "live"; it is required to save root changes and then modify values on this entity
- TODO: do live updates
```javascript
{
"type": "dict-roots",
"key": "roots",
"label": "Roots",
"object_type": {
"type": "path",
"multiplatform": true,
"multipath": false
}
}
```
#### dict-conditional
- is similar to `dict` but always has one enum entity available
- the enum entity has single selection and its value defines the other children entities
- each value of the enumerator has defined children that will be used
- there is no way to have shared entities across multiple enum items
- the value from the enumerator is also stored next to the other values
- to define the key under which the enum value will be stored use `enum_key`
- `enum_key` must match the key regex and no enum item can have children with the same key
- `enum_label` is the label of the entity for UI purposes
- enum items are defined with `enum_children`
- it's a list where each item represents a single item for the enum
- all items in `enum_children` must have at least a `key` key which represents the value stored under `enum_key`
- enum items can define `label` for UI purposes
- the most important part is that an item can define a `children` key with definitions of its children (the `children` value works the same way as in `dict`)
- to set a default value for `enum_key` use `enum_default`
- the entity must have `"label"` defined if it is not used as a widget
- it is set as a group if no parent is a group (children can't be groups)
- may be with or without `"label"` (only for GUI)
- `"label"` must be set to be able to mark item as group with `"is_group"` key set to True
- item with label can visually wrap its children
- this option is enabled by default; to turn it off set `"use_label_wrap"` to `False`
- label wrap is collapsible by default
- that can be changed with the key `"collapsible"` set to `True`/`False`
- the key `"collapsed"` set to `True`/`False` defines whether it is collapsed when the GUI is opened (Default: `False`)
- it is possible to add a lighter background with `"highlight_content"` (Default: `False`)
- the lighter background has its limits: after 3-4 nested highlighted items there is not much difference in the color
- for UI purposes was added `enum_is_horizontal` which will make combobox appear next to children inputs instead of on top of them (Default: `False`)
- this has extended ability of `enum_on_right` which will move combobox to right side next to children widgets (Default: `False`)
- output is a dictionary `{<key>: <children values>}`
- using this type as a template item for the list type makes it possible to create infinite hierarchies
```javascript
// Example
{
"type": "dict-conditional",
"key": "my_key",
"label": "My Key",
"enum_key": "type",
"enum_label": "label",
"enum_children": [
// Each item must be a dictionary with 'key'
{
"key": "action",
"label": "Action",
"children": [
{
"type": "text",
"key": "key",
"label": "Key"
},
{
"type": "text",
"key": "label",
"label": "Label"
},
{
"type": "text",
"key": "command",
"label": "Command"
}
]
},
{
"key": "menu",
"label": "Menu",
"children": [
{
"key": "children",
"label": "Children",
"type": "list",
"object_type": "text"
}
]
},
{
// Separator does not have children as "separator" value is enough
"key": "separator",
"label": "Separator"
}
]
}
```
How the output of the schema could look on save:
```javascript
{
"type": "separator"
}
{
"type": "action",
"key": "action_1",
"label": "Action 1",
"command": "run command -arg"
}
{
"type": "menu",
"children": [
"child_1",
"child_2"
]
}
```
### Inputs for setting any kind of value (`Pure` inputs)
- all inputs must have `"key"` defined if they are not used as a dynamic item
- they can also have `"label"` defined
#### boolean
- simple checkbox, nothing more to set
```javascript
{
"type": "boolean",
"key": "my_boolean_key",
"label": "Do you want to use Pype?"
}
```
#### number
- number input, can be used for both integer and float
- key `"decimal"` defines how many decimal places will be used, 0 is for integer input (Default: `0`)
- key `"minimum"` as minimum allowed number to enter (Default: `-99999`)
- key `"maximum"` as maximum allowed number to enter (Default: `99999`)
- key `"steps"` will change single step value of UI inputs (using arrows and wheel scroll)
- for UI it is possible to show slider to enable this option set `show_slider` to `true`
```javascript
{
"type": "number",
"key": "fps",
"label": "Frame rate (FPS)",
"decimal": 2,
"minimum": 1,
"maximum": 300000
}
```
```javascript
{
"type": "number",
"key": "ratio",
"label": "Ratio",
"decimal": 3,
"minimum": 0,
"maximum": 1,
"show_slider": true
}
```
#### text
- simple text input
- key `"multiline"` allows to enter multiple lines of text (Default: `False`)
- key `"placeholder"` allows to show text inside input when is empty (Default: `None`)
```javascript
{
"type": "text",
"key": "deadline_pool",
"label": "Deadline pool"
}
```
#### path-input
- Do not use this input in schema please (use `path` instead)
- this input is implemented to add additional features to text input
- this is meant to be used in proxy input `path`
#### raw-json
- a little bit enhanced text input for raw json
- can store dictionary (`{}`) or list (`[]`) but not both
- by default it stores a dictionary; to change it to a list set `is_list` to `True`
- has validations of json format
- output can be stored as string
- this is to allow any keys in dictionary
- set key `store_as_string` to `true`
- code using that setting must expect that the value is a string and use the `json` module to convert it to python types
```javascript
{
"type": "raw-json",
"key": "profiles",
"label": "Extract Review profiles",
"is_list": true
}
```
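When `store_as_string` is enabled, the code reading the setting receives a plain string and has to deserialize it itself; a minimal sketch (the stored value below is an illustrative example):

```python
import json

# Value as it could arrive from settings saved with "store_as_string": true
stored_value = '{"any key/with symbols!": 1, "another.key": [1, 2, 3]}'

# Convert the stored string back to python types before using it
profiles = json.loads(stored_value)
```

This is exactly why the option exists: keys that would not be valid settings keys can still live inside the serialized string.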
#### enum
- enumeration of values that are predefined in schema
- multiselection can be allowed with setting key `"multiselection"` to `True` (Default: `False`)
- values are defined under value of key `"enum_items"` as list
- each item in list is simple dictionary where value is label and key is value which will be stored
- a single dictionary can be entered instead if the order of items doesn't matter
- it is possible to set default selected value/s with `default` attribute
- it is recommended to use this option only in single selection mode
- in the end this option is used only when defining default settings values or in dynamic items
```javascript
{
"key": "tags",
"label": "Tags",
"type": "enum",
"multiselection": true,
"enum_items": [
{"burnin": "Add burnins"},
{"ftrackreview": "Add to Ftrack"},
{"delete": "Delete output"},
{"slate-frame": "Add slate frame"},
{"no-handles": "Skip handle frames"}
]
}
```
#### anatomy-templates-enum
- enumeration of all available anatomy template keys
- has only single selection mode
- it is possible to define default value `default`
- `"work"` is used if default value is not specified
- enum values are not updated on the fly; it is required to save the templates and reset settings to recache the values
```javascript
{
"key": "host",
"label": "Host name",
"type": "anatomy-templates-enum",
"default": "publish"
}
```
#### hosts-enum
- enumeration of available hosts
- multiselection can be allowed with setting key `"multiselection"` to `True` (Default: `False`)
- it is possible to add empty value (represented with empty string) with setting `"use_empty_value"` to `True` (Default: `False`)
- it is possible to set `"custom_labels"` for host names where key `""` is empty value (Default: `{}`)
- to filter host names it is required to define `"hosts_filter"` which is list of host names that will be available
- do not pass empty string if `use_empty_value` is enabled
- ignoring host names would be more dangerous in some cases
```javascript
{
"key": "host",
"label": "Host name",
"type": "hosts-enum",
"multiselection": false,
"use_empty_value": true,
"custom_labels": {
"": "N/A",
"nuke": "Nuke"
},
"hosts_filter": [
"nuke"
]
}
```
#### apps-enum
- enumeration of available application and their variants from system settings
- applications without host name are excluded
- can be used only in project settings
- has only `multiselection`
- used only in project anatomy
```javascript
{
"type": "apps-enum",
"key": "applications",
"label": "Applications"
}
```
#### tools-enum
- enumeration of available tools and their variants from system settings
- can be used only in project settings
- has only `multiselection`
- used only in project anatomy
```javascript
{
"type": "tools-enum",
"key": "tools_env",
"label": "Tools"
}
```
#### task-types-enum
- enumeration of task types from current project
- enum values are not updated on the fly; modifications of task types on a project require a save and reset to be propagated to this enum
- has `multiselection` set to `True` by default, but it can be changed to `False` in the schema
#### deadline_url-enum
- deadline module specific enumerator using deadline system settings to fill its values
- TODO: move this type to deadline module
### Inputs for setting value using Pure inputs
- these inputs also have required `"key"`
- the attribute `"label"` is required in a few conditions
- when the item is marked as a group or when `use_label_wrap` is used
- they use Pure inputs "as widgets"
#### list
- output is list
- items can be added and removed
- items in list must be the same type
- to wrap item in collapsible widget with label on top set `use_label_wrap` to `True`
- when this is used `collapsible` and `collapsed` can be set (same as `dict` item does)
- type of items is defined with key `"object_type"`
- there are 3 possible ways how to set the type:
1.) a dictionary with item modifiers (the `number` input has `minimum`, `maximum` and `decimals`); in that case the item type must be set as the value of `"type"` (example below)
2.) the item type name as a string without modifiers (e.g. [text](#text))
3.) an enhancement of 1.): there is also support for the `template` type, but be careful about endless loops of templates
- the goal of using `template` is to easily change the same item definitions in multiple lists
1.) with item modifiers
```javascript
{
"type": "list",
"key": "exclude_ports",
"label": "Exclude ports",
"object_type": {
"type": "number", // number item type
"minimum": 1, // minimum modifier
"maximum": 65535 // maximum modifier
}
}
```
2.) without modifiers
```javascript
{
"type": "list",
"key": "exclude_ports",
"label": "Exclude ports",
"object_type": "text"
}
```
3.) with template definition
```javascript
// Schema of list item where template is used
{
"type": "list",
"key": "menu_items",
"label": "Menu Items",
"object_type": {
"type": "template",
"name": "template_object_example"
}
}
// WARNING:
// In this example the template use itself inside which will work in `list`
// but may cause an issue in other entity types (e.g. `dict`).
// Content of 'template_object_example.json':
[
{
"type": "dict-conditional",
"use_label_wrap": true,
"collapsible": true,
"key": "menu_items",
"label": "Menu items",
"enum_key": "type",
"enum_label": "Type",
"enum_children": [
{
"key": "action",
"label": "Action",
"children": [
{
"type": "text",
"key": "key",
"label": "Key"
}
]
}, {
"key": "menu",
"label": "Menu",
"children": [
{
"key": "children",
"label": "Children",
"type": "list",
"object_type": {
"type": "template",
"name": "template_object_example"
}
}
]
}
]
}
]
```
#### dict-modifiable
- one of dictionary inputs, this is only used as value input
- items in this input can be removed and added same way as in `list` input
- value items in dictionary must be the same type
- required keys may be defined under `"required_keys"`
- required keys must be defined as a list (e.g. `["key_1"]`) and are moved to the top
- these keys can't be removed or edited (it is possible to edit label if item is collapsible)
- type of items is defined with key `"object_type"`
- there are 2 possible ways how to set the object type (Examples below):
1. just a type name as string without modifiers (e.g. `"text"`)
2. full types with modifiers as dictionary(`number` input has `minimum`, `maximum` and `decimals`) in that case item type must be set as value of `"type"`
- this input can be collapsible
- `"use_label_wrap"` must be set to `True` (Default behavior)
- that can be set with key `"collapsible"` as `True`/`False` (Default: `True`)
- with key `"collapsed"` as `True`/`False` can be set that is collapsed when GUI is opened (Default: `False`)
1. **Object type** without modifiers
```javascript
{
"type": "dict-modifiable",
"object_type": "text",
"is_group": true,
"key": "templates_mapping",
"label": "Muster - Templates mapping",
"is_file": true
}
```
2. **Object type** with item modifiers
```javascript
{
"type": "dict-modifiable",
"object_type": {
"type": "number",
"minimum": 0,
"maximum": 300
},
"is_group": true,
"key": "templates_mapping",
"label": "Muster - Templates mapping",
"is_file": true
}
```
#### path
- input for paths, use `path-input` internally
- has 2 input modifiers `"multiplatform"` and `"multipath"`
- `"multiplatform"` - adds `"windows"`, `"linux"` and `"darwin"` path inputs (result is dictionary)
- `"multipath"` - it is possible to enter multiple paths
- if both are enabled result is dictionary with lists
```javascript
{
"type": "path",
"key": "ffmpeg_path",
"label": "FFmpeg path",
"multiplatform": true,
"multipath": true
}
```
#### list-strict
- input for strict number of items in list
- each child item can be different type with different possible modifiers
- it is possible to display them in horizontal or vertical layout
- key `"horizontal"` as `True`/`False` (Default: `True`)
- each child may have defined `"label"` which is shown next to input
- label does not reflect modifications or overrides (TODO)
- children items are defined under the key `"object_types"` which is a list of dictionaries
- the key `"children"` is not used because it is reserved for hierarchy validations in the schema
- USAGE: for colors, transformations, etc. Custom numbers and different modifiers give the ability to define if a color is HUE or RGB, 0-255, 0-1, 0-100 etc.
```javascript
{
"type": "list-strict",
"key": "color",
"label": "Color",
"object_types": [
{
"label": "Red",
"type": "number",
"minimum": 0,
"maximum": 255,
"decimal": 0
}, {
"label": "Green",
"type": "number",
"minimum": 0,
"maximum": 255,
"decimal": 0
}, {
"label": "Blue",
"type": "number",
"minimum": 0,
"maximum": 255,
"decimal": 0
}, {
"label": "Alpha",
"type": "number",
"minimum": 0,
"maximum": 1,
"decimal": 6
}
]
}
```
#### color
- pre-implemented entity to store and load color values
- the entity stores and expects a list of 4 integers in range 0-255
- the integers represent RGBA [Red, Green, Blue, Alpha]
- has the modifier `"use_alpha"` which can be `True`/`False`
- when set to `False`, alpha is always `255` and the alpha slider is not visible in the UI
```javascript
{
"type": "color",
"key": "bg_color",
"label": "Background Color"
}
```
### Anatomy
Anatomy represents data stored on the project document. These items take care of the **Project Anatomy**.
#### anatomy
- entity is just enhanced [dict](#dict) item
- anatomy always has all keys overridden when overrides are applied
### Noninteractive items
Items used only for UI purposes.
#### label
- adds a label with a note or explanation
- it is possible to use html tags inside the label
- set `work_wrap` to `true`/`false` to enable word wrapping in the UI (default: `false`)
```javascript
{
"type": "label",
"label": "<span style=\"color:#FF0000\";>RED LABEL:</span> Normal label"
}
```
#### separator
- legacy name is `splitter` (still usable)
- visual separator of items (more divider than separator)
```javascript
{
"type": "separator"
}
```
### Proxy wrappers
- should wrap multiple inputs only visually
- these do not have `"key"` key and do not allow to have `"is_file"` or `"is_group"` modifiers enabled
- can't be used as a widget (first item in e.g. `list`, `dict-modifiable`, etc.)
#### form
- wraps inputs into form look layout
- should be used only for Pure inputs
```javascript
{
"type": "dict-form",
"children": [
{
"type": "text",
"key": "deadline_department",
"label": "Deadline department"
}, {
"type": "number",
"key": "deadline_priority",
"label": "Deadline priority"
}, {
...
}
]
}
```
#### collapsible-wrap
- wraps inputs into collapsible widget
- looks like `dict` but does not hold `"key"`
- should be used only for Pure inputs
```javascript
{
"type": "collapsible-wrap",
"label": "Collapsible example",
"children": [
{
"type": "text",
"key": "_example_input_collapsible",
"label": "Example input in collapsible wrapper"
}, {
...
}
]
}
```
## How to add new settings
Always start with modifying or adding a new schema and don't worry about values. When you think the schema is ready to use, launch OpenPype settings in development mode using `poetry run python ./start.py settings --dev` or the prepared script in `~/openpype/tools/run_settings(.sh|.ps1)`. Settings opened in development mode have the checkbox `Modify defaults` available in the bottom left corner. When checked, default values are modified and saved on `Save`. This is the recommended approach for creating default settings instead of direct modification of files.
![Modify default settings](assets/settings_dev.png)

Ftrack is currently the main project management option for OpenPype.
## Prepare Ftrack for OpenPype
### Server URL
If you want to connect Ftrack to OpenPype you might need to make a few changes in Ftrack settings. These changes would take a long time to do manually, so we prepared a few Ftrack actions to help you out. First, you'll need to launch OpenPype settings, enable [Ftrack module](admin_settings_system.md#Ftrack), and enter the address to your Ftrack server.
### Login
Once your server is configured, restart OpenPype and you should be prompted to enter your [Ftrack credentials](artist_ftrack.md#How-to-use-Ftrack-in-OpenPype) to be able to run our Ftrack actions. If you are already logged in to Ftrack in your browser, it is enough to press `Ftrack login` and it will connect automatically.
You can only use our Ftrack Actions and publish to Ftrack if each artist is logged in.
### Custom Attributes
After successfully connecting OpenPype with your Ftrack, you can right click on any project in Ftrack and you should see a bunch of actions available. The most important one is called `OpenPype Admin` and contains multiple options inside.
To prepare Ftrack for working with OpenPype you'll need to run [OpenPype Admin - Create/Update Custom Attributes](manager_ftrack_actions.md#create-update-avalon-attributes), which creates and sets the Custom Attributes necessary for OpenPype to function.
@ -34,7 +34,7 @@ To prepare Ftrack for working with OpenPype you'll need to run [OpenPype Admin -
Ftrack Event Server is the key to the automation of many tasks like _status change_, _thumbnail update_, _automatic synchronization to the Avalon database_ and many more. The event server should run at all times to perform the required processing, as it is not possible to catch some of these events retrospectively with enough certainty.
### Running event server
There are specific launch arguments for event server. With `openpype_console eventserver` you can launch event server but without prior preparation it will terminate immediately. The reason is that event server requires 3 pieces of information: _Ftrack server url_, _paths to events_ and _credentials (Username and API key)_. Ftrack server URL and Event path are set from OpenPype's environments by default, but the credentials must be done separatelly for security reasons.
There are specific launch arguments for the event server. With `openpype_console module ftrack eventserver` you can launch the event server, but without prior preparation it will terminate immediately. The reason is that the event server requires 3 pieces of information: _Ftrack server url_, _paths to events_ and _credentials (Username and API key)_. The Ftrack server URL and event paths are set from OpenPype's environments by default, but the credentials must be set separately for security reasons.
@ -53,7 +53,7 @@ There are specific launch arguments for event server. With `openpype_console eve
- **`--ftrack-api-key "00000aaa-11bb-22cc-33dd-444444eeeee"`** : User's API key
- `--ftrack-url "https://yourdomain.ftrackapp.com/"` : Ftrack server URL _(not needed if you have set `FTRACK_SERVER` in OpenPype's environments)_
So if you want to use OpenPype's environments then you can launch event server for first time with these arguments `openpype_console.exe eventserver --ftrack-user "my.username" --ftrack-api-key "00000aaa-11bb-22cc-33dd-444444eeeee" --store-credentials`. Since that time, if everything was entered correctly, you can launch event server with `openpype_console.exe eventserver`.
So if you want to use OpenPype's environments, you can launch the event server for the first time with these arguments: `openpype_console.exe module ftrack eventserver --ftrack-user "my.username" --ftrack-api-key "00000aaa-11bb-22cc-33dd-444444eeeee" --store-credentials`. From then on, if everything was entered correctly, you can launch the event server with just `openpype_console.exe module ftrack eventserver`.
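Putting the two invocations from above together as a quick sketch (the username and API key are the placeholder values from the examples above):

```sh
# First run: pass credentials explicitly and store them for later use.
openpype_console module ftrack eventserver --ftrack-user "my.username" --ftrack-api-key "00000aaa-11bb-22cc-33dd-444444eeeee" --store-credentials

# Subsequent runs: the stored credentials plus OpenPype's environments
# (e.g. FTRACK_SERVER) are enough.
openpype_console module ftrack eventserver
```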
</TabItem>
<TabItem value="env">
@ -72,7 +72,7 @@ We do not recommend setting your Ftrack user and api key environments in a persi
### Where to run event server
We recommend you to run event server on stable server machine with ability to connect to Avalon database and Ftrack web server. Best practice we recommend is to run event server as service. It can be Windows or Linux.
We recommend running the event server on a stable server machine with the ability to connect to the OpenPype database and the Ftrack web server. The best practice we recommend is to run the event server as a service, on either Windows or Linux.
:::important
Event server should **not** run more than once! It may cause major issues.
@ -99,11 +99,10 @@ Event server should **not** run more than once! It may cause major issues.
- add content to the file:
```sh
#!/usr/bin/env bash
export OPENPYPE_DEBUG=1
export OPENPYPE_MONGO=<openpype-mongo-url>
pushd /mnt/path/to/openpype
./openpype_console eventserver --ftrack-user <openpype-admin-user> --ftrack-api-key <api-key>
./openpype_console module ftrack eventserver --ftrack-user <openpype-admin-user> --ftrack-api-key <api-key> --debug
```
- change file permission:
`sudo chmod 0755 /opt/openpype/run_event_server.sh`
@ -140,14 +139,13 @@ WantedBy=multi-user.target
<TabItem value="win">
- create service file: `openpype-ftrack-eventserver.bat`
- add content to the service file:
- add content to the service file:
```sh
@echo off
set OPENPYPE_DEBUG=1
set OPENPYPE_MONGO=<openpype-mongo-url>
pushd \\path\to\openpype
openpype_console.exe eventserver --ftrack-user <openpype-admin-user> --ftrack-api-key <api-key>
openpype_console.exe module ftrack eventserver --ftrack-user <openpype-admin-user> --ftrack-api-key <api-key> --debug
```
- download and install `nssm.cc`
- create Windows service according to nssm.cc manual
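As an illustrative sketch of the nssm step (the service name and paths are assumptions; nssm's own manual remains authoritative), the service could be created from an elevated command prompt along these lines:

```sh
REM Wrap the .bat file created above in a Windows service. nssm runs
REM programs, so the batch file is launched through cmd.exe.
nssm install OpenPypeFtrackEventServer "C:\Windows\System32\cmd.exe" "/c C:\path\to\openpype-ftrack-eventserver.bat"
nssm start OpenPypeFtrackEventServer
```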
@ -174,7 +172,7 @@ This event updates entities on their changes Ftrack. When new entity is created
Deleting an entity in Ftrack is, by default, not processed for security reasons _(to delete an entity, use the [Delete Asset/Subset action](manager_ftrack_actions.md#delete-asset-subset))_.
:::
### Synchronize Hierarchical and Entity Attributes
### Synchronize Hierarchical and Entity Attributes
Auto-synchronization of hierarchical attributes from Ftrack entities.
@ -190,7 +188,7 @@ Change status of next task from `Not started` to `Ready` when previous task is a
Multiple detailed rules for next task update can be configured in the settings.
### Delete Avalon ID from new entity
### Delete Avalon ID from new entity
Used to remove the value from the `Avalon/Mongo Id` Custom Attribute when an entity is created.
@ -215,7 +213,7 @@ This event handler allows setting of different status to a first created Asset V
This is useful, for example, if the first version publish doesn't contain any actual reviewable work but is only used for a roundtrip conform check, in which case this version could receive the status `pending conform` instead of the standard `pending review`.
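The behavior can be sketched roughly as follows (a hypothetical illustration only; the status names come from the example above, and this is not OpenPype's actual handler):

```python
# Hypothetical sketch of the "first version status" handler described
# above -- NOT OpenPype's actual implementation. Status names are the
# example values from the text.

FIRST_VERSION_STATUS = "pending conform"
DEFAULT_STATUS = "pending review"


def status_for_new_asset_version(existing_version_count):
    """The first created AssetVersion can receive a different status."""
    if existing_version_count == 0:
        return FIRST_VERSION_STATUS
    return DEFAULT_STATUS


print(status_for_new_asset_version(0))  # pending conform
print(status_for_new_asset_version(3))  # pending review
```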
### Update status on next task
Change status on next task by task types order when task status state changed to "Done". All tasks with the same Task mapping of next task status changes From → To. Some status can be ignored.
Change the status of the next task, in task-type order, when a task's status changes to a state of "Done". The From → To mapping of next-task status changes can be configured, and some statuses can be ignored.
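The rule above can be sketched as follows (a hypothetical illustration, NOT OpenPype's actual implementation; the task names, statuses and the From → To mapping are illustrative assumptions):

```python
# Hypothetical sketch of the "update status on next task" rule -- the
# mapping and ignored statuses below are illustrative assumptions.

STATUS_MAPPING = {"Not Ready": "Ready"}  # From -> To
IGNORED_STATUSES = {"Omitted", "On hold"}


def next_task_status(done_task, tasks):
    """Given the task that changed to a "Done" state, return the
    (task_name, new_status) change for the following task, or None."""
    names = [name for name, _status in tasks]
    index = names.index(done_task)
    if index + 1 >= len(tasks):
        return None  # there is no next task
    next_name, current = tasks[index + 1]
    if current in IGNORED_STATUSES:
        return None  # some statuses are never touched
    new_status = STATUS_MAPPING.get(current)
    return (next_name, new_status) if new_status else None


tasks = [("modeling", "Done"), ("texturing", "Not Ready"), ("rigging", "On hold")]
print(next_task_status("modeling", tasks))  # ('texturing', 'Ready')
```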
## Publish plugins
@ -238,7 +236,7 @@ Add Ftrack Family: enabled
#### Advanced: adding only when additional families are present
In special cases adding 'ftrack' based on main family ('Families' set higher) is not enough.
In special cases adding 'ftrack' based on main family ('Families' set higher) is not enough.
(For example, uploading to Ftrack for the 'plate' main family should only happen if 'review' is contained in the instance's 'families'; 'ftrack' is not added in other cases.)
![Collect Ftrack Family](assets/ftrack/ftrack-collect-advanced.png)
![Collect Ftrack Family](assets/ftrack/ftrack-collect-advanced.png)
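The advanced rule described above can be sketched like this (a hypothetical illustration, NOT the plugin's actual code; the profile structure is an illustrative assumption):

```python
# Hypothetical sketch: for the 'plate' main family, 'ftrack' is only
# added when 'review' is already among the instance's families.
ADVANCED_FILTERS = {
    "plate": {"review"},
}


def should_add_ftrack_family(main_family, families):
    required = ADVANCED_FILTERS.get(main_family)
    if required is None:
        return True  # no advanced rule -> keep the basic behavior
    return required.issubset(families)


print(should_add_ftrack_family("plate", {"review", "clip"}))  # True
print(should_add_ftrack_family("plate", {"clip"}))            # False
```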
@ -17,7 +17,7 @@ various usage scenarios.
You can find a detailed breakdown of technical requirements [here](dev_requirements), but in general OpenPype should be able
to operate in most studios fairly quickly. The main obstacles are usually related to workflows and habits that
might now be fully compatible with what OpenPype is expecting or enforcing.
might not be fully compatible with what OpenPype is expecting or enforcing. It is recommended to go through the artist [key concepts](artist_concepts) to get an idea about the basics.
Keep in mind that if you run into any workflows that are not supported, it's usually just because we haven't hit
that particular case and it can most likely be added upon request.
@ -48,24 +48,3 @@ to the table
- Some DCCs do not support using Environment variables in file paths. This will make it very hard to maintain full multiplatform
compatibility, as well as variable storage roots.
- Relying on a VPN connection and using it to work directly off network storage will be painfully slow.
## Repositories
### [OpenPype](https://github.com/pypeclub/pype)
This is where vast majority of the code that works with your data lives. It acts
as Avalon-Config, if we're speaking in avalon terms.
Avalon gives us the ability to work with a certain host, say Maya, in a standardized manner, but OpenPype defines **how** we work with all the data, allows most of the behavior to be configured on a very granular level and provides a comprehensive build and installation tools for it.
Thanks to that, we are able to maintain one codebase for vast majority of the features across all our clients deployments while keeping the option to tailor the pipeline to each individual studio.
### [Avalon-core](https://github.com/pypeclub/avalon-core)
Avalon-core is the heart of OpenPype. It provides the base functionality including key GUIs (albeit expanded and modified by us), database connection, standards for data structures, working with entities and some universal tools.
Avalon is being actively developed and maintained by a community of studios and TDs from around the world, with Pype Club team being an active contributor as well.
Due to the extensive work we've done on OpenPype and the need to react quickly to production needs, we
maintain our own fork of avalon-core, which is kept up to date with upstream changes as much as possible.
@ -1,165 +0,0 @@
---
id: update_notes
title: Update Notes
sidebar_label: Update Notes
---
<a name="update_to_2.13.0"></a>
## **Updating to 2.13.0** ##
### MongoDB
**Must**
Due to changes in how tasks are stored in the database (we added task types and possibility of more arbitrary data.), we must take a few precautions when updating.
1. Make sure that ftrack event server with sync to avalon is NOT running during the update.
2. Any project that is to be worked on with 2.13 must be synced from ftrack to avalon with the updated sync to avalon action, or using an updated event server sync to avalon event.
If 2.12 event servers runs when trying to update the project sync with 2.13, it will override any changes.
### Nuke Studio / hiero
Make sure to re-generate pype tags and replace any `task` tags on your shots with the new ones. This will allow you to make multiple tasks of the same type, but with different task name at the same time.
### Nuke
Due to a minor update to nuke write node, artists will be prompted to update their write nodes before being able to publish any old shots. There is a "repair" action for this in the publisher, so it doesn't have to be done manually.
<a name="update_to_2.12.0"></a>
## **Updating to 2.12.0** ##
### Apps and tools
**Must**
run Create/Update Custom attributes action (to update custom attributes group)
check if studio has set custom intent values and move values to ~/config/presets/global/intent.json
**Optional**
Set true/false on applications and tools by studio usage (eliminates the app list in Ftrack and the time spent registering Ftrack actions)
<a name="update_to_2.11.0"></a>
## **Updating to 2.11.0** ##
### Maya in deadline
We added our own Maya Deadline plugin to make render management easier. It operates the same as the standard MayaBatch plugin in Deadline, but allows us to separate Pype-submitted jobs from the standard submitter. You'll need to follow this guide to install it: [install pype deadline](https://pype.club/docs/admin_hosts#pype-dealine-supplement-code)
<a name="update_to_2.9.0"></a>
## **Updating to 2.9.0** ##
### Review and Burnin PRESETS
This release introduces a major update to working with review and burnin presets. They can now be much more granular and can target extremely specific usecases. The change is backwards compatible with previous format of review and burnin presets, however we highly recommend updating all the presets to the new format. Documentation on what this looks like can be found on pype main [documentation page](https://pype.club/docs/admin_presets_plugins#publishjson).
### Multiroot and storages
With the support of multiroot projects, we removed the old `storage.json` from configuration and replaced it with simpler `config/anatomy/roots.json`. This is a required change, but only needs to be done once per studio during the update to 2.9.0. [Read More](https://pype.club/docs/next/admin_config#roots)
<a name="update_to_2.7.0"></a>
## **Updating to 2.7.0** ##
### Master Versions
To activate `master` version workflow you need to activate `integrateMasterVersion` plugin in the `config/presets/plugins/global/publish.json`
```
"IntegrateMasterVersion": {"enabled": true},
```
### Ftrack
Make sure that `intent` attributes in ftrack is set correctly. It should follow this setup unless you have your custom values
```
{
"label": "Intent",
"key": "intent",
"type": "enumerator",
"entity_type": "assetversion",
"group": "avalon",
"config": {
"multiselect": false,
"data": [
{"test": "Test"},
{"wip": "WIP"},
{"final": "Final"}
]
}
```
<a name="update_to_2.6.0"></a>
## **Updating to 2.6.0** ##
### Dev vs Prod
If you want to differentiate between dev and prod deployments of pype, you need to add `config.ini` file to `pype-setup/pypeapp` folder with content.
```
[Default]
dev=true
```
### Ftrack
You will have to log in to ftrack in pype after the update. You should be automatically prompted with the ftrack login window when you launch 2.6 release for the first time.
Event server has to be restarted after the update to enable the ability to control it via action.
### Presets
There is a major change in the way how burnin presets are being stored. We simplified the preset format, however that means the currently running production configs need to be tweaked to match the new format.
:::note Example of converting burnin preset from 2.5 to 2.6
2.5 burnin preset
```
"burnins":{
"TOP_LEFT": {
"function": "text",
"text": "{dd}/{mm}/{yyyy}"
},
"TOP_CENTERED": {
"function": "text",
"text": ""
},
"TOP_RIGHT": {
"function": "text",
"text": "v{version:0>3}"
},
"BOTTOM_LEFT": {
"function": "text",
"text": "{frame_start}-{current_frame}-{frame_end}"
},
"BOTTOM_CENTERED": {
"function": "text",
"text": "{asset}"
},
"BOTTOM_RIGHT": {
"function": "frame_numbers",
"text": "{username}"
}
```
2.6 burnin preset
```
"burnins":{
"TOP_LEFT": "{dd}/{mm}/{yyyy}",
"TOP_CENTER": "",
"TOP_RIGHT": "v{version:0>3}",
"BOTTOM_LEFT": "{frame_start}-{current_frame}-{frame_end}",
"BOTTOM_CENTERED": "{asset}",
"BOTTOM_RIGHT": "{username}"
}
```
@ -109,11 +109,7 @@ module.exports = {
"admin_hosts_tvpaint"
],
},
{
type: "category",
label: "Releases",
items: ["changelog", "update_notes"],
},
"admin_releases",
{
type: "category",
collapsed: false,
@ -152,6 +148,7 @@ module.exports = {
"dev_build",
"dev_testing",
"dev_contribute",
"dev_settings",
{
type: "category",
label: "Hosts integrations",