Merge branch 'develop' into enhancement/OP-2855_move-plugins-register-and-discover

Jakub Trllo 2022-03-30 16:06:20 +02:00
commit 4b6ccc9ee3
69 changed files with 3629 additions and 1780 deletions


@@ -1,24 +1,59 @@
# Changelog
## [3.9.2-nightly.3](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.1...HEAD)
### 📖 Documentation
- Docs: Added MongoDB requirements [\#2951](https://github.com/pypeclub/OpenPype/pull/2951)
**🆕 New features**
- Multiverse: First PR [\#2908](https://github.com/pypeclub/OpenPype/pull/2908)
**🚀 Enhancements**
- Slack: Added configurable maximum file size of review upload to Slack [\#2945](https://github.com/pypeclub/OpenPype/pull/2945)
- NewPublisher: Prepared implementation of optional pyblish plugin [\#2943](https://github.com/pypeclub/OpenPype/pull/2943)
- Workfiles: Open published workfiles [\#2925](https://github.com/pypeclub/OpenPype/pull/2925)
- General: Default modules loaded dynamically [\#2923](https://github.com/pypeclub/OpenPype/pull/2923)
- CI: change the version bump logic [\#2919](https://github.com/pypeclub/OpenPype/pull/2919)
- Deadline: Add headless argument [\#2916](https://github.com/pypeclub/OpenPype/pull/2916)
- Nuke: Add no-audio Tag [\#2911](https://github.com/pypeclub/OpenPype/pull/2911)
- Ftrack: Fill workfile in custom attribute [\#2906](https://github.com/pypeclub/OpenPype/pull/2906)
- Nuke: improving readability [\#2903](https://github.com/pypeclub/OpenPype/pull/2903)
- Settings UI: Add simple tooltips for settings entities [\#2901](https://github.com/pypeclub/OpenPype/pull/2901)
**🐛 Bug fixes**
- Slack: Added default for review\_upload\_limit for Slack [\#2965](https://github.com/pypeclub/OpenPype/pull/2965)
- Settings: Conditional dictionary avoid invalid logs [\#2956](https://github.com/pypeclub/OpenPype/pull/2956)
- LogViewer: Don't refresh on initialization [\#2949](https://github.com/pypeclub/OpenPype/pull/2949)
- nuke: python3 compatibility issue with `iteritems` [\#2948](https://github.com/pypeclub/OpenPype/pull/2948)
- General: anatomy data with correct task short key [\#2947](https://github.com/pypeclub/OpenPype/pull/2947)
- SceneInventory: Fix imports in UI [\#2944](https://github.com/pypeclub/OpenPype/pull/2944)
- Slack: add generic exception [\#2941](https://github.com/pypeclub/OpenPype/pull/2941)
- General: Python specific vendor paths on env injection [\#2939](https://github.com/pypeclub/OpenPype/pull/2939)
- General: More fail safe delete old versions [\#2936](https://github.com/pypeclub/OpenPype/pull/2936)
- Settings UI: Collapsed of collapsible wrapper works as expected [\#2934](https://github.com/pypeclub/OpenPype/pull/2934)
- Maya: Do not pass `set` to maya commands \(fixes support for older maya versions\) [\#2932](https://github.com/pypeclub/OpenPype/pull/2932)
- General: Don't print log record on OSError [\#2926](https://github.com/pypeclub/OpenPype/pull/2926)
- Hiero: Fix import of 'register\_event\_callback' [\#2924](https://github.com/pypeclub/OpenPype/pull/2924)
- Ftrack: Missing Ftrack id after editorial publish [\#2905](https://github.com/pypeclub/OpenPype/pull/2905)
- AfterEffects: Fix rendering for single frame in DL [\#2875](https://github.com/pypeclub/OpenPype/pull/2875)
**🔀 Refactored code**
- General: Move Attribute Definitions from pipeline [\#2931](https://github.com/pypeclub/OpenPype/pull/2931)
- General: Removed silo references and terminal splash [\#2927](https://github.com/pypeclub/OpenPype/pull/2927)
- General: Move pipeline constants to OpenPype [\#2918](https://github.com/pypeclub/OpenPype/pull/2918)
- General: Move formatting and workfile functions [\#2914](https://github.com/pypeclub/OpenPype/pull/2914)
- General: Move remaining plugins from avalon [\#2912](https://github.com/pypeclub/OpenPype/pull/2912)
**Merged pull requests:**
- Maya - added transparency into review creator [\#2952](https://github.com/pypeclub/OpenPype/pull/2952)
## [3.9.1](https://github.com/pypeclub/OpenPype/tree/3.9.1) (2022-03-18)
@@ -42,6 +77,7 @@
- General: Remove forgotten use of avalon Creator [\#2885](https://github.com/pypeclub/OpenPype/pull/2885)
- General: Avoid circular import [\#2884](https://github.com/pypeclub/OpenPype/pull/2884)
- Fixes for attaching loaded containers \(\#2837\) [\#2874](https://github.com/pypeclub/OpenPype/pull/2874)
- Maya: Deformer node ids validation plugin [\#2826](https://github.com/pypeclub/OpenPype/pull/2826)
**🔀 Refactored code**
@@ -70,10 +106,6 @@
- Maya: add loaded containers to published instance [\#2837](https://github.com/pypeclub/OpenPype/pull/2837)
- Ftrack: Can sync fps as string [\#2836](https://github.com/pypeclub/OpenPype/pull/2836)
- General: Custom function for find executable [\#2822](https://github.com/pypeclub/OpenPype/pull/2822)
- General: Color dialog UI fixes [\#2817](https://github.com/pypeclub/OpenPype/pull/2817)
- global: letter box calculated on output as last process [\#2812](https://github.com/pypeclub/OpenPype/pull/2812)
- Nuke: adding Reformat to baking mov plugin [\#2811](https://github.com/pypeclub/OpenPype/pull/2811)
- Manager: Update all to latest button [\#2805](https://github.com/pypeclub/OpenPype/pull/2805)
**🐛 Bug fixes**
@@ -94,14 +126,12 @@
- Maya: Stop creation of reviews for Cryptomattes [\#2832](https://github.com/pypeclub/OpenPype/pull/2832)
- Deadline: Remove recreated event [\#2828](https://github.com/pypeclub/OpenPype/pull/2828)
- Deadline: Added missing events folder [\#2827](https://github.com/pypeclub/OpenPype/pull/2827)
- Maya: Deformer node ids validation plugin [\#2826](https://github.com/pypeclub/OpenPype/pull/2826)
- Settings: Missing document with OP versions may break start of OpenPype [\#2825](https://github.com/pypeclub/OpenPype/pull/2825)
- Deadline: more detailed temp file name for environment json [\#2824](https://github.com/pypeclub/OpenPype/pull/2824)
- General: Host name was formed from obsolete code [\#2821](https://github.com/pypeclub/OpenPype/pull/2821)
- Settings UI: Fix "Apply from" action [\#2820](https://github.com/pypeclub/OpenPype/pull/2820)
- Ftrack: Job killer with missing user [\#2819](https://github.com/pypeclub/OpenPype/pull/2819)
- Nuke: Use AVALON\_APP to get value for "app" key [\#2818](https://github.com/pypeclub/OpenPype/pull/2818)
- StandalonePublisher: use dynamic groups in subset names [\#2816](https://github.com/pypeclub/OpenPype/pull/2816)
**🔀 Refactored code**


@@ -1,35 +0,0 @@
import os
from openpype.lib import PreLaunchHook
class PrePython2Vendor(PreLaunchHook):
"""Prepend python 2 dependencies for py2 hosts."""
order = 10
def execute(self):
if not self.application.use_python_2:
return
# Prepare vendor dir path
self.log.info("adding global python 2 vendor")
pype_root = os.getenv("OPENPYPE_REPOS_ROOT")
python_2_vendor = os.path.join(
pype_root,
"openpype",
"vendor",
"python",
"python_2"
)
# Add Python 2 modules
python_paths = [
python_2_vendor
]
# Load PYTHONPATH from current launch context
python_path = self.launch_context.env.get("PYTHONPATH")
if python_path:
python_paths.append(python_path)
# Set new PYTHONPATH to launch context environments
self.launch_context.env["PYTHONPATH"] = os.pathsep.join(python_paths)
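The deleted hook above prepends the Python 2 vendor directory to `PYTHONPATH` before a Python 2 host launches. The core mechanic can be sketched independently of OpenPype; the helper name and paths below are illustrative, not part of the OpenPype API:

```python
import os


def prepend_paths(env, key, new_paths):
    """Prepend new_paths to an os.pathsep-separated env variable.

    Any existing value is preserved after the new entries, mirroring
    how the hook builds PYTHONPATH for Python 2 hosts.
    """
    paths = list(new_paths)
    current = env.get(key)
    if current:
        paths.append(current)
    env[key] = os.pathsep.join(paths)
    return env[key]
```

For example, `prepend_paths({"PYTHONPATH": "/site"}, "PYTHONPATH", ["/vendor/python_2"])` yields the vendor path first, then the original value.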


@@ -18,6 +18,7 @@ log = Logger.get_logger(__name__)
FRAME_PATTERN = re.compile(r"[\._](\d+)[\.]")
class CTX:
# singleton used for passing data between api modules
app_framework = None
@@ -538,9 +539,17 @@ def get_segment_attributes(segment):
# head and tail with forward compatibility
if segment.head:
    # `infinite` can be also returned
    if isinstance(segment.head, str):
        clip_data["segment_head"] = 0
    else:
        clip_data["segment_head"] = int(segment.head)
if segment.tail:
    # `infinite` can be also returned
    if isinstance(segment.tail, str):
        clip_data["segment_tail"] = 0
    else:
        clip_data["segment_tail"] = int(segment.tail)
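Flame can return the string `"infinite"` instead of a number for a segment's head or tail, which is why the guard above falls back to 0 for string values. The same logic as a minimal standalone function (the function name is illustrative):

```python
def sanitize_handle(value):
    """Return an int frame count, treating string values
    (e.g. Flame's "infinite") as 0."""
    if isinstance(value, str):
        return 0
    return int(value)
```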
# add all available shot tokens
shot_tokens = _get_shot_tokens_values(segment, [


@@ -422,7 +422,13 @@ class WireTapCom(object):
color_policy = color_policy or "Legacy"
# check if the colour policy in custom dir
if not os.path.exists(color_policy):
if "/" in color_policy:
# in the unlikely case a full path was used, strip the prefix first
color_policy = color_policy.replace("/syncolor/policies/", "")
# expecting input is `Shared/NameOfPolicy`
color_policy = "/syncolor/policies/{}".format(
color_policy)
else:
color_policy = "/syncolor/policies/Autodesk/{}".format(
color_policy)
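The branch above normalizes the colour-policy setting so that `Shared/NameOfPolicy`, a full `/syncolor/policies/...` path, and a bare policy name all resolve to a consistent Wiretap node path. A sketch of that normalization in isolation (helper name is illustrative; the filesystem check for a custom policy directory is omitted here):

```python
def normalize_color_policy(color_policy):
    """Map a colour-policy setting to a /syncolor/policies/... node path.

    - "Shared/MyPolicy"                    -> "/syncolor/policies/Shared/MyPolicy"
    - "/syncolor/policies/Shared/MyPolicy" -> unchanged (prefix stripped, then re-added)
    - "MyPolicy"                           -> "/syncolor/policies/Autodesk/MyPolicy"
    """
    if "/" in color_policy:
        # strip a redundant full-path prefix, expecting `Shared/NameOfPolicy`
        color_policy = color_policy.replace("/syncolor/policies/", "")
        return "/syncolor/policies/{}".format(color_policy)
    return "/syncolor/policies/Autodesk/{}".format(color_policy)
```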


@@ -34,119 +34,125 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
def process(self, context):
project = context.data["flameProject"]
sequence = context.data["flameSequence"]
selected_segments = context.data["flameSelectedSegments"]
self.log.debug("__ selected_segments: {}".format(selected_segments))
self.otio_timeline = context.data["otioTimeline"]
self.clips_in_reels = opfapi.get_clips_in_reels(project)
self.fps = context.data["fps"]
# process all selected segments
for segment in selected_segments:
    # get openpype tag data
    marker_data = opfapi.get_segment_data_marker(segment)
    self.log.debug("__ marker_data: {}".format(
        pformat(marker_data)))

    if not marker_data:
        continue

    if marker_data.get("id") != "pyblish.avalon.instance":
        continue

    comment_attributes = self._get_comment_attributes(segment)
    self.log.debug("_ comment_attributes: {}".format(
        pformat(comment_attributes)))

    clip_data = opfapi.get_segment_attributes(segment)
    clip_name = clip_data["segment_name"]
    self.log.debug("clip_name: {}".format(clip_name))

    # get file path
    file_path = clip_data["fpath"]

    # get source clip
    source_clip = self._get_reel_clip(file_path)

    first_frame = opfapi.get_frame_from_filename(file_path) or 0

    head, tail = self._get_head_tail(clip_data, first_frame)

    # solve handles length
    marker_data["handleStart"] = min(
        marker_data["handleStart"], head)
    marker_data["handleEnd"] = min(
        marker_data["handleEnd"], tail)

    with_audio = bool(marker_data.pop("audio"))

    # add marker data to instance data
    inst_data = dict(marker_data.items())

    asset = marker_data["asset"]
    subset = marker_data["subset"]

    # insert family into families
    family = marker_data["family"]
    families = [str(f) for f in marker_data["families"]]
    families.insert(0, str(family))

    # form label
    label = asset
    if asset != clip_name:
        label += " ({})".format(clip_name)
    label += " {} [{}]".format(subset, ", ".join(families))

    inst_data.update({
        "name": "{}_{}".format(asset, subset),
        "label": label,
        "asset": asset,
        "item": segment,
        "families": families,
        "publish": marker_data["publish"],
        "fps": self.fps,
        "flameSourceClip": source_clip,
        "sourceFirstFrame": int(first_frame),
        "path": file_path
    })

    # get otio clip data
    otio_data = self._get_otio_clip_instance_data(clip_data) or {}
    self.log.debug("__ otio_data: {}".format(pformat(otio_data)))

    # add to instance data
    inst_data.update(otio_data)
    self.log.debug("__ inst_data: {}".format(pformat(inst_data)))

    # add resolution
    self._get_resolution_to_data(inst_data, context)

    # add comment attributes if any
    inst_data.update(comment_attributes)

    # create instance
    instance = context.create_instance(**inst_data)

    # add colorspace data
    instance.data.update({
        "versionData": {
            "colorspace": clip_data["colour_space"],
        }
    })

    # create shot instance for shot attributes create/update
    self._create_shot_instance(context, clip_name, **inst_data)

    self.log.info("Creating instance: {}".format(instance))
    self.log.info(
        "_ instance.data: {}".format(pformat(instance.data)))

    if not with_audio:
        continue

    # add audioReview attribute to plate instance data
    # if reviewTrack is on
    if marker_data.get("reviewTrack") is not None:
        instance.data["reviewAudio"] = True
def _get_comment_attributes(self, segment):
comment = segment.comment.get_value()
@@ -188,7 +194,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
# get pattern defined by type
pattern = TXT_PATERN
if a_type in ("number", "float"):
pattern = NUM_PATERN
res_goup = pattern.findall(value)
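The collector picks a regex by declared attribute type before parsing the comment value: numeric types get a number pattern, everything else a text pattern. A sketch of that selection with illustrative stand-in patterns (the real `TXT_PATERN`/`NUM_PATERN` live in the plugin module and may differ):

```python
import re

# illustrative stand-ins for the plugin's TXT_PATERN / NUM_PATERN
TXT_PATTERN = re.compile(r"([a-zA-Z]+)")
NUM_PATTERN = re.compile(r"([0-9]+)")


def find_typed_values(a_type, value):
    """Select a pattern by attribute type, then extract all matches."""
    pattern = TXT_PATTERN
    if a_type in ("number", "float"):
        pattern = NUM_PATTERN
    return pattern.findall(value)
```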


@@ -31,27 +31,28 @@ class CollecTimelineOTIO(pyblish.api.ContextPlugin):
)
# adding otio timeline to context
with opfapi.maintained_segment_selection(sequence) as selected_seg:
    otio_timeline = flame_export.create_otio_timeline(sequence)

    instance_data = {
        "name": subset_name,
        "asset": asset_doc["name"],
        "subset": subset_name,
        "family": "workfile"
    }

    # create instance with workfile
    instance = context.create_instance(**instance_data)
    self.log.info("Creating instance: {}".format(instance))

    # update context with main project attributes
    context.data.update({
        "flameProject": project,
        "flameSequence": sequence,
        "otioTimeline": otio_timeline,
        "currentFile": "Flame/{}/{}".format(
            project.name, sequence.name
        ),
        "flameSelectedSegments": selected_seg,
        "fps": float(str(sequence.frame_rate)[:-4])
    })


@@ -1511,7 +1511,7 @@ def get_container_members(container):
members = cmds.sets(container, query=True) or []
members = cmds.ls(members, long=True, objectsOnly=True) or []
all_members = set(members)
# Include any referenced nodes from any reference in the container
# This is required since we've removed adding ALL nodes of a reference
@@ -1530,9 +1530,9 @@ def get_container_members(container):
reference_members = cmds.ls(reference_members,
long=True,
objectsOnly=True)
all_members.update(reference_members)
return list(all_members)
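The change above aggregates direct and referenced members into a separate `all_members` set and returns a list, instead of mutating and returning the original set. The pattern in isolation, without Maya (function and argument names are illustrative):

```python
def collect_members(direct_members, referenced_members):
    """Aggregate direct and referenced node names into one list,
    deduplicated and without mutating the inputs."""
    all_members = set(direct_members)
    all_members.update(referenced_members)
    return list(all_members)
```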
# region LOOKDEV


@@ -0,0 +1,51 @@
from openpype.hosts.maya.api import plugin, lib
class CreateMultiverseUsd(plugin.Creator):
"""Multiverse USD data"""
name = "usdMain"
label = "Multiverse USD"
family = "usd"
icon = "cubes"
def __init__(self, *args, **kwargs):
super(CreateMultiverseUsd, self).__init__(*args, **kwargs)
# Add animation data first, since it maintains order.
self.data.update(lib.collect_animation_data(True))
self.data["stripNamespaces"] = False
self.data["mergeTransformAndShape"] = False
self.data["writeAncestors"] = True
self.data["flattenParentXforms"] = False
self.data["writeSparseOverrides"] = False
self.data["useMetaPrimPath"] = False
self.data["customRootPath"] = ''
self.data["customAttributes"] = ''
self.data["nodeTypesToIgnore"] = ''
self.data["writeMeshes"] = True
self.data["writeCurves"] = True
self.data["writeParticles"] = True
self.data["writeCameras"] = False
self.data["writeLights"] = False
self.data["writeJoints"] = False
self.data["writeCollections"] = False
self.data["writePositions"] = True
self.data["writeNormals"] = True
self.data["writeUVs"] = True
self.data["writeColorSets"] = False
self.data["writeTangents"] = False
self.data["writeRefPositions"] = False
self.data["writeBlendShapes"] = False
self.data["writeDisplayColor"] = False
self.data["writeSkinWeights"] = False
self.data["writeMaterialAssignment"] = False
self.data["writeHardwareShader"] = False
self.data["writeShadingNetworks"] = False
self.data["writeTransformMatrix"] = True
self.data["writeUsdAttributes"] = False
self.data["timeVaryingTopology"] = False
self.data["customMaterialNamespace"] = ''
self.data["numTimeSamples"] = 1
self.data["timeSamplesSpan"] = 0.0


@@ -0,0 +1,23 @@
from openpype.hosts.maya.api import plugin, lib
class CreateMultiverseUsdComp(plugin.Creator):
"""Create Multiverse USD Composition"""
name = "usdCompositionMain"
label = "Multiverse USD Composition"
family = "usdComposition"
icon = "cubes"
def __init__(self, *args, **kwargs):
super(CreateMultiverseUsdComp, self).__init__(*args, **kwargs)
# Add animation data first, since it maintains order.
self.data.update(lib.collect_animation_data(True))
self.data["stripNamespaces"] = False
self.data["mergeTransformAndShape"] = False
self.data["flattenContent"] = False
self.data["writePendingOverrides"] = False
self.data["numTimeSamples"] = 1
self.data["timeSamplesSpan"] = 0.0


@@ -0,0 +1,28 @@
from openpype.hosts.maya.api import plugin, lib
class CreateMultiverseUsdOver(plugin.Creator):
"""Multiverse USD data"""
name = "usdOverrideMain"
label = "Multiverse USD Override"
family = "usdOverride"
icon = "cubes"
def __init__(self, *args, **kwargs):
super(CreateMultiverseUsdOver, self).__init__(*args, **kwargs)
# Add animation data first, since it maintains order.
self.data.update(lib.collect_animation_data(True))
self.data["writeAll"] = False
self.data["writeTransforms"] = True
self.data["writeVisibility"] = True
self.data["writeAttributes"] = True
self.data["writeMaterials"] = True
self.data["writeVariants"] = True
self.data["writeVariantsDefinition"] = True
self.data["writeActiveState"] = True
self.data["writeNamespaces"] = False
self.data["numTimeSamples"] = 1
self.data["timeSamplesSpan"] = 0.0


@@ -15,6 +15,14 @@ class CreateReview(plugin.Creator):
keepImages = False
isolate = False
imagePlane = True
transparency = [
"preset",
"simple",
"object sorting",
"weighted average",
"depth peeling",
"alpha cut"
]
def __init__(self, *args, **kwargs):
super(CreateReview, self).__init__(*args, **kwargs)
@@ -28,5 +36,6 @@ class CreateReview(plugin.Creator):
data["isolate"] = self.isolate
data["keepImages"] = self.keepImages
data["imagePlane"] = self.imagePlane
data["transparency"] = self.transparency
self.data = data


@@ -0,0 +1,102 @@
# -*- coding: utf-8 -*-
import maya.cmds as cmds
from openpype.pipeline import (
load,
get_representation_path
)
from openpype.hosts.maya.api.lib import (
maintained_selection,
namespaced,
unique_namespace
)
from openpype.hosts.maya.api.pipeline import containerise
class MultiverseUsdLoader(load.LoaderPlugin):
"""Load the USD by Multiverse"""
families = ["model", "usd", "usdComposition", "usdOverride",
"pointcache", "animation"]
representations = ["usd", "usda", "usdc", "usdz", "abc"]
label = "Read USD by Multiverse"
order = -10
icon = "code-fork"
color = "orange"
def load(self, context, name=None, namespace=None, options=None):
asset = context['asset']['name']
namespace = namespace or unique_namespace(
asset + "_",
prefix="_" if asset[0].isdigit() else "",
suffix="_",
)
# Create the shape
cmds.loadPlugin("MultiverseForMaya", quiet=True)
shape = None
transform = None
with maintained_selection():
cmds.namespace(addNamespace=namespace)
with namespaced(namespace, new=False):
import multiverse
shape = multiverse.CreateUsdCompound(self.fname)
transform = cmds.listRelatives(
shape, parent=True, fullPath=True)[0]
# Lock the shape node so the user cannot delete it.
cmds.lockNode(shape, lock=True)
nodes = [transform, shape]
self[:] = nodes
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, representation):
# type: (dict, dict) -> None
"""Update container with specified representation."""
node = container['objectName']
assert cmds.objExists(node), "Missing container"
members = cmds.sets(node, query=True) or []
shapes = cmds.ls(members, type="mvUsdCompoundShape")
assert shapes, "Cannot find mvUsdCompoundShape in container"
path = get_representation_path(representation)
import multiverse
for shape in shapes:
multiverse.SetUsdCompoundAssetPaths(shape, [path])
cmds.setAttr("{}.representation".format(node),
str(representation["_id"]),
type="string")
def switch(self, container, representation):
self.update(container, representation)
def remove(self, container):
# type: (dict) -> None
"""Remove loaded container."""
# Delete container and its contents
if cmds.objExists(container['objectName']):
members = cmds.sets(container['objectName'], query=True) or []
cmds.delete([container['objectName']] + members)
# Remove the namespace, if empty
namespace = container['namespace']
if cmds.namespace(exists=namespace):
members = cmds.namespaceInfo(namespace, listNamespace=True)
if not members:
cmds.namespace(removeNamespace=namespace)
else:
self.log.warning("Namespace not deleted because it "
"still has members: %s", namespace)


@@ -0,0 +1,210 @@
import os
import six
from maya import cmds
import openpype.api
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractMultiverseUsd(openpype.api.Extractor):
"""Extractor for USD by Multiverse."""
label = "Extract Multiverse USD"
hosts = ["maya"]
families = ["usd"]
@property
def options(self):
"""Overridable options for Multiverse USD Export
Given in the following format
- {NAME: EXPECTED TYPE}
If the overridden option's type does not match,
the option is not included and a warning is logged.
"""
return {
"stripNamespaces": bool,
"mergeTransformAndShape": bool,
"writeAncestors": bool,
"flattenParentXforms": bool,
"writeSparseOverrides": bool,
"useMetaPrimPath": bool,
"customRootPath": str,
"customAttributes": str,
"nodeTypesToIgnore": str,
"writeMeshes": bool,
"writeCurves": bool,
"writeParticles": bool,
"writeCameras": bool,
"writeLights": bool,
"writeJoints": bool,
"writeCollections": bool,
"writePositions": bool,
"writeNormals": bool,
"writeUVs": bool,
"writeColorSets": bool,
"writeTangents": bool,
"writeRefPositions": bool,
"writeBlendShapes": bool,
"writeDisplayColor": bool,
"writeSkinWeights": bool,
"writeMaterialAssignment": bool,
"writeHardwareShader": bool,
"writeShadingNetworks": bool,
"writeTransformMatrix": bool,
"writeUsdAttributes": bool,
"timeVaryingTopology": bool,
"customMaterialNamespace": str,
"numTimeSamples": int,
"timeSamplesSpan": float
}
@property
def default_options(self):
"""The default options for Multiverse USD extraction."""
return {
"stripNamespaces": False,
"mergeTransformAndShape": False,
"writeAncestors": True,
"flattenParentXforms": False,
"writeSparseOverrides": False,
"useMetaPrimPath": False,
"customRootPath": str(),
"customAttributes": str(),
"nodeTypesToIgnore": str(),
"writeMeshes": True,
"writeCurves": True,
"writeParticles": True,
"writeCameras": False,
"writeLights": False,
"writeJoints": False,
"writeCollections": False,
"writePositions": True,
"writeNormals": True,
"writeUVs": True,
"writeColorSets": False,
"writeTangents": False,
"writeRefPositions": False,
"writeBlendShapes": False,
"writeDisplayColor": False,
"writeSkinWeights": False,
"writeMaterialAssignment": False,
"writeHardwareShader": False,
"writeShadingNetworks": False,
"writeTransformMatrix": True,
"writeUsdAttributes": False,
"timeVaryingTopology": False,
"customMaterialNamespace": str(),
"numTimeSamples": 1,
"timeSamplesSpan": 0.0
}
def parse_overrides(self, instance, options):
"""Inspect data of instance to determine overridden options"""
for key in instance.data:
if key not in self.options:
continue
# Ensure the data is of correct type
value = instance.data[key]
if isinstance(value, six.text_type):
value = str(value)
if not isinstance(value, self.options[key]):
self.log.warning(
"Overridden attribute {key} was of "
"the wrong type: {invalid_type} "
"- should have been {valid_type}".format(
key=key,
invalid_type=type(value).__name__,
valid_type=self.options[key].__name__))
continue
options[key] = value
return options
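`parse_overrides` accepts an instance-data value only when its type matches the type declared in `options`; mismatches are skipped with a warning. The validation pattern in isolation (a sketch with illustrative names, not the OpenPype API; the `six.text_type` coercion and logging are omitted):

```python
def apply_typed_overrides(data, option_types, options):
    """Copy values from `data` into `options`, keeping only keys whose
    value matches the expected type declared in `option_types`."""
    for key, value in data.items():
        expected = option_types.get(key)
        if expected is None:
            continue
        if not isinstance(value, expected):
            # mismatched types are skipped (the extractor logs a warning)
            continue
        options[key] = value
    return options
```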
def process(self, instance):
# Load the plugin first
cmds.loadPlugin("MultiverseForMaya", quiet=True)
# Define output file path
staging_dir = self.staging_dir(instance)
file_name = "{}.usd".format(instance.name)
file_path = os.path.join(staging_dir, file_name)
file_path = file_path.replace('\\', '/')
# Parse export options
options = self.default_options
options = self.parse_overrides(instance, options)
self.log.info("Export options: {0}".format(options))
# Perform extraction
self.log.info("Performing extraction ...")
with maintained_selection():
members = instance.data("setMembers")
members = cmds.ls(members,
dag=True,
shapes=True,
type=("mesh"),
noIntermediate=True,
long=True)
self.log.info('Collected object {}'.format(members))
import multiverse
time_opts = None
frame_start = instance.data['frameStart']
frame_end = instance.data['frameEnd']
handle_start = instance.data['handleStart']
handle_end = instance.data['handleEnd']
step = instance.data['step']
fps = instance.data['fps']
if frame_end != frame_start:
time_opts = multiverse.TimeOptions()
time_opts.writeTimeRange = True
time_opts.frameRange = (
frame_start - handle_start, frame_end + handle_end)
time_opts.frameIncrement = step
time_opts.numTimeSamples = instance.data["numTimeSamples"]
time_opts.timeSamplesSpan = instance.data["timeSamplesSpan"]
time_opts.framePerSecond = fps
asset_write_opts = multiverse.AssetWriteOptions(time_opts)
options_discard_keys = {
'numTimeSamples',
'timeSamplesSpan',
'frameStart',
'frameEnd',
'handleStart',
'handleEnd',
'step',
'fps'
}
for key, value in options.items():
if key in options_discard_keys:
continue
setattr(asset_write_opts, key, value)
multiverse.WriteAsset(file_path, members, asset_write_opts)
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
'name': 'usd',
'ext': 'usd',
'files': file_name,
"stagingDir": staging_dir
}
instance.data["representations"].append(representation)
self.log.info("Extracted instance {} to {}".format(
instance.name, file_path))


@@ -0,0 +1,151 @@
import os
from maya import cmds
import openpype.api
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractMultiverseUsdComposition(openpype.api.Extractor):
"""Extractor of Multiverse USD Composition."""
label = "Extract Multiverse USD Composition"
hosts = ["maya"]
families = ["usdComposition"]
@property
def options(self):
"""Overridable options for Multiverse USD Export
Given in the following format
- {NAME: EXPECTED TYPE}
If the overridden option's type does not match,
the option is not included and a warning is logged.
"""
return {
"stripNamespaces": bool,
"mergeTransformAndShape": bool,
"flattenContent": bool,
"writePendingOverrides": bool,
"numTimeSamples": int,
"timeSamplesSpan": float
}
@property
def default_options(self):
"""The default options for Multiverse USD extraction."""
return {
"stripNamespaces": True,
"mergeTransformAndShape": False,
"flattenContent": False,
"writePendingOverrides": False,
"numTimeSamples": 1,
"timeSamplesSpan": 0.0
}
def parse_overrides(self, instance, options):
"""Inspect data of instance to determine overridden options"""
for key in instance.data:
if key not in self.options:
continue
# Ensure the data is of correct type
value = instance.data[key]
if not isinstance(value, self.options[key]):
self.log.warning(
"Overridden attribute {key} was of "
"the wrong type: {invalid_type} "
"- should have been {valid_type}".format(
key=key,
invalid_type=type(value).__name__,
valid_type=self.options[key].__name__))
continue
options[key] = value
return options
def process(self, instance):
# Load the plugin first
cmds.loadPlugin("MultiverseForMaya", quiet=True)
# Define output file path
staging_dir = self.staging_dir(instance)
file_name = "{}.usd".format(instance.name)
file_path = os.path.join(staging_dir, file_name)
file_path = file_path.replace('\\', '/')
# Parse export options
options = self.default_options
options = self.parse_overrides(instance, options)
self.log.info("Export options: {0}".format(options))
# Perform extraction
self.log.info("Performing extraction ...")
with maintained_selection():
members = instance.data("setMembers")
members = cmds.ls(members,
dag=True,
shapes=True,
type="mvUsdCompoundShape",
noIntermediate=True,
long=True)
self.log.info('Collected object {}'.format(members))
import multiverse
time_opts = None
frame_start = instance.data['frameStart']
frame_end = instance.data['frameEnd']
handle_start = instance.data['handleStart']
handle_end = instance.data['handleEnd']
step = instance.data['step']
fps = instance.data['fps']
if frame_end != frame_start:
time_opts = multiverse.TimeOptions()
time_opts.writeTimeRange = True
time_opts.frameRange = (
frame_start - handle_start, frame_end + handle_end)
time_opts.frameIncrement = step
time_opts.numTimeSamples = instance.data["numTimeSamples"]
time_opts.timeSamplesSpan = instance.data["timeSamplesSpan"]
time_opts.framePerSecond = fps
comp_write_opts = multiverse.CompositionWriteOptions()
options_discard_keys = {
'numTimeSamples',
'timeSamplesSpan',
'frameStart',
'frameEnd',
'handleStart',
'handleEnd',
'step',
'fps'
}
for key, value in options.items():
if key in options_discard_keys:
continue
setattr(comp_write_opts, key, value)
multiverse.WriteComposition(file_path, members, comp_write_opts)
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
'name': 'usd',
'ext': 'usd',
'files': file_name,
"stagingDir": staging_dir
}
instance.data["representations"].append(representation)
self.log.info("Extracted instance {} to {}".format(
instance.name, file_path))


@ -0,0 +1,139 @@
import os
import openpype.api
from openpype.hosts.maya.api.lib import maintained_selection
from maya import cmds
class ExtractMultiverseUsdOverride(openpype.api.Extractor):
"""Extractor for USD Override by Multiverse."""
label = "Extract Multiverse USD Override"
hosts = ["maya"]
families = ["usdOverride"]
@property
def options(self):
"""Overridable options for Multiverse USD Export.
Given in the following format:
- {NAME: EXPECTED TYPE}
If the overridden option's type does not match,
the option is not included and a warning is logged.
"""
return {
"writeAll": bool,
"writeTransforms": bool,
"writeVisibility": bool,
"writeAttributes": bool,
"writeMaterials": bool,
"writeVariants": bool,
"writeVariantsDefinition": bool,
"writeActiveState": bool,
"writeNamespaces": bool,
"numTimeSamples": int,
"timeSamplesSpan": float
}
@property
def default_options(self):
"""The default options for Multiverse USD extraction."""
return {
"writeAll": False,
"writeTransforms": True,
"writeVisibility": True,
"writeAttributes": True,
"writeMaterials": True,
"writeVariants": True,
"writeVariantsDefinition": True,
"writeActiveState": True,
"writeNamespaces": False,
"numTimeSamples": 1,
"timeSamplesSpan": 0.0
}
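The type-filtered override merge that the `options` docstring describes (performed by `parse_overrides` in the sibling extractor, which is not shown in this hunk) can be sketched roughly as below; `merge_overrides` and its signature are illustrative assumptions, not code from the diff:

```python
def merge_overrides(defaults, overrides, expected_types):
    """Return defaults updated with overrides whose types match.

    Hypothetical sketch: overrides with an unknown key or a value of the
    wrong type are skipped (the real code also logs a warning).
    """
    merged = dict(defaults)
    for key, value in overrides.items():
        expected = expected_types.get(key)
        if expected is None:
            continue  # unknown option, ignore
        if not isinstance(value, expected):
            continue  # type mismatch, keep the default
        merged[key] = value
    return merged
```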
def process(self, instance):
# Load plugin first
cmds.loadPlugin("MultiverseForMaya", quiet=True)
# Define output file path
staging_dir = self.staging_dir(instance)
file_name = "{}.usda".format(instance.name)
file_path = os.path.join(staging_dir, file_name)
file_path = file_path.replace("\\", "/")
# Parse export options
options = self.default_options
self.log.info("Export options: {0}".format(options))
# Perform extraction
self.log.info("Performing extraction ...")
with maintained_selection():
members = instance.data("setMembers")
members = cmds.ls(members,
dag=True,
shapes=True,
type="mvUsdCompoundShape",
noIntermediate=True,
long=True)
self.log.info("Collected objects {}".format(members))
# TODO: Deal with asset, composition, override with options.
import multiverse
time_opts = None
frame_start = instance.data["frameStart"]
frame_end = instance.data["frameEnd"]
handle_start = instance.data["handleStart"]
handle_end = instance.data["handleEnd"]
step = instance.data["step"]
fps = instance.data["fps"]
if frame_end != frame_start:
time_opts = multiverse.TimeOptions()
time_opts.writeTimeRange = True
time_opts.frameRange = (
frame_start - handle_start, frame_end + handle_end)
time_opts.frameIncrement = step
time_opts.numTimeSamples = instance.data["numTimeSamples"]
time_opts.timeSamplesSpan = instance.data["timeSamplesSpan"]
time_opts.framePerSecond = fps
over_write_opts = multiverse.OverridesWriteOptions(time_opts)
options_discard_keys = {
"numTimeSamples",
"timeSamplesSpan",
"frameStart",
"frameEnd",
"handleStart",
"handleEnd",
"step",
"fps"
}
for key, value in options.items():
if key in options_discard_keys:
continue
setattr(over_write_opts, key, value)
for member in members:
multiverse.WriteOverrides(file_path, member, over_write_opts)
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
"name": "usd",
"ext": "usd",
"files": file_name,
"stagingDir": staging_dir
}
instance.data["representations"].append(representation)
self.log.info("Extracted instance {} to {}".format(
instance.name, file_path))

View file

@ -73,6 +73,11 @@ class ExtractPlayblast(openpype.api.Extractor):
pm.currentTime(refreshFrameInt - 1, edit=True)
pm.currentTime(refreshFrameInt, edit=True)
# Override transparency if requested.
transparency = instance.data.get("transparency", 0)
if transparency != 0:
preset["viewport2_options"]["transparencyAlgorithm"] = transparency
# Isolate view is requested by having objects in the set besides a
# camera.
if preset.pop("isolate_view", False) and instance.data.get("isolate"):

View file

@ -72,7 +72,7 @@ class LoadEffects(load.LoaderPlugin):
# getting data from json file with unicode conversion
with open(file, "r") as f:
json_f = {self.byteify(key): self.byteify(value)
for key, value in json.load(f).iteritems()}
for key, value in json.load(f).items()}
# get correct order of nodes by positions on track and subtrack
nodes_order = self.reorder_nodes(json_f)
@ -188,7 +188,7 @@ class LoadEffects(load.LoaderPlugin):
# getting data from json file with unicode conversion
with open(file, "r") as f:
json_f = {self.byteify(key): self.byteify(value)
for key, value in json.load(f).iteritems()}
for key, value in json.load(f).items()}
# get correct order of nodes by positions on track and subtrack
nodes_order = self.reorder_nodes(json_f)
@ -330,11 +330,11 @@ class LoadEffects(load.LoaderPlugin):
if isinstance(input, dict):
return {self.byteify(key): self.byteify(value)
for key, value in input.iteritems()}
for key, value in input.items()}
elif isinstance(input, list):
return [self.byteify(element) for element in input]
elif isinstance(input, unicode):
return input.encode('utf-8')
elif isinstance(input, str):
return str(input)
else:
return input

View file

@ -74,7 +74,7 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
# getting data from json file with unicode conversion
with open(file, "r") as f:
json_f = {self.byteify(key): self.byteify(value)
for key, value in json.load(f).iteritems()}
for key, value in json.load(f).items()}
# get correct order of nodes by positions on track and subtrack
nodes_order = self.reorder_nodes(json_f)
@ -194,7 +194,7 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
# getting data from json file with unicode conversion
with open(file, "r") as f:
json_f = {self.byteify(key): self.byteify(value)
for key, value in json.load(f).iteritems()}
for key, value in json.load(f).items()}
# get correct order of nodes by positions on track and subtrack
nodes_order = self.reorder_nodes(json_f)
@ -350,11 +350,11 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
if isinstance(input, dict):
return {self.byteify(key): self.byteify(value)
for key, value in input.iteritems()}
for key, value in input.items()}
elif isinstance(input, list):
return [self.byteify(element) for element in input]
elif isinstance(input, unicode):
return input.encode('utf-8')
elif isinstance(input, str):
return str(input)
else:
return input

View file

@ -240,7 +240,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
if isinstance(input, dict):
return {self.byteify(key): self.byteify(value)
for key, value in input.iteritems()}
for key, value in input.items()}
elif isinstance(input, list):
return [self.byteify(element) for element in input]
elif isinstance(input, unicode):

View file

@ -24,7 +24,11 @@ class ExtractReviewDataMov(openpype.api.Extractor):
outputs = {}
def process(self, instance):
families = instance.data["families"]
families = set(instance.data["families"])
# add main family to make sure all families are compared
families.add(instance.data["family"])
task_type = instance.context.data["taskType"]
subset = instance.data["subset"]
self.log.info("Creating staging dir...")
@ -50,51 +54,31 @@ class ExtractReviewDataMov(openpype.api.Extractor):
f_task_types = o_data["filter"]["task_types"]
f_subsets = o_data["filter"]["sebsets"]
self.log.debug(
"f_families `{}` > families: {}".format(
f_families, families))
self.log.debug(
"f_task_types `{}` > task_type: {}".format(
f_task_types, task_type))
self.log.debug(
"f_subsets `{}` > subset: {}".format(
f_subsets, subset))
# test if family found in context
test_families = any([
# first if exact family set is matching
# make sure only interesetion of list is correct
bool(set(families).intersection(f_families)),
# and if famiies are set at all
# if not then return True because we want this preset
# to be active if nothig is set
bool(not f_families)
])
# using intersection to make sure all defined
# families are present in combination
if f_families and not families.intersection(f_families):
continue
# test task types from filter
test_task_types = any([
# check if actual task type is defined in task types
# set in preset's filter
bool(task_type in f_task_types),
# and if taskTypes are defined in preset filter
# if not then return True, because we want this filter
# to be active if no taskType is set
bool(not f_task_types)
])
if f_task_types and task_type not in f_task_types:
continue
# test subsets from filter
test_subsets = any([
# check if any of subset filter inputs
# converted to regex patern is not found in subset
# we keep strict case sensitivity
bool(next((
s for s in f_subsets
if re.search(re.compile(s), subset)
), None)),
# but if no subsets were set then make this acuntable too
bool(not f_subsets)
])
# we need all filters to be positive for this
# preset to be activated
test_all = all([
test_families,
test_task_types,
test_subsets
])
# if it is not positive then skip this preset
if not test_all:
if f_subsets and not any(
re.search(s, subset) for s in f_subsets):
continue
self.log.info(

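The rewritten preset filtering above (families intersection, task-type membership, regex subset search) reduces to three early-exit checks. This standalone sketch uses a hypothetical `profile_matches` helper and spells the last key as "subsets" for readability (the diff itself reads it as "sebsets"); empty filter lists mean "match everything":

```python
import re


def profile_matches(profile_filter, families, task_type, subset):
    """Early-exit matching of a publish profile filter (sketch)."""
    f_families = profile_filter.get("families") or []
    f_task_types = profile_filter.get("task_types") or []
    f_subsets = profile_filter.get("subsets") or []
    # All defined families must intersect the instance families
    if f_families and not set(families).intersection(f_families):
        return False
    # Task type must be listed when the filter defines task types
    if f_task_types and task_type not in f_task_types:
        return False
    # Any subset pattern matching the subset name is enough
    if f_subsets and not any(re.search(s, subset) for s in f_subsets):
        return False
    return True
```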
View file

@ -25,7 +25,7 @@ class RepairNukeWriteDeadlineTab(pyblish.api.Action):
# Remove existing knobs.
knob_names = openpype.hosts.nuke.lib.get_deadline_knob_names()
for name, knob in group_node.knobs().iteritems():
for name, knob in group_node.knobs().items():
if name in knob_names:
group_node.removeKnob(knob)

View file

@ -1,10 +1,10 @@
from avalon import io
from openpype.lib import NumberDef
from openpype.hosts.testhost.api import pipeline
from openpype.pipeline import (
AutoCreator,
CreatedInstance,
lib
)
from avalon import io
class MyAutoCreator(AutoCreator):
@ -13,7 +13,7 @@ class MyAutoCreator(AutoCreator):
def get_instance_attr_defs(self):
output = [
lib.NumberDef("number_key", label="Number")
NumberDef("number_key", label="Number")
]
return output

View file

@ -1,10 +1,16 @@
import json
from openpype import resources
from openpype.hosts.testhost.api import pipeline
from openpype.lib import (
UISeparatorDef,
UILabelDef,
BoolDef,
NumberDef,
FileDef,
)
from openpype.pipeline import (
Creator,
CreatedInstance,
lib
)
@ -54,17 +60,17 @@ class TestCreatorOne(Creator):
def get_instance_attr_defs(self):
output = [
lib.NumberDef("number_key", label="Number"),
NumberDef("number_key", label="Number"),
]
return output
def get_pre_create_attr_defs(self):
output = [
lib.BoolDef("use_selection", label="Use selection"),
lib.UISeparatorDef(),
lib.UILabelDef("Testing label"),
lib.FileDef("filepath", folders=True, label="Filepath"),
lib.FileDef(
BoolDef("use_selection", label="Use selection"),
UISeparatorDef(),
UILabelDef("Testing label"),
FileDef("filepath", folders=True, label="Filepath"),
FileDef(
"filepath_2", multipath=True, folders=True, label="Filepath 2"
)
]

View file

@ -1,8 +1,8 @@
from openpype.lib import NumberDef, TextDef
from openpype.hosts.testhost.api import pipeline
from openpype.pipeline import (
Creator,
CreatedInstance,
lib
)
@ -40,8 +40,8 @@ class TestCreatorTwo(Creator):
def get_instance_attr_defs(self):
output = [
lib.NumberDef("number_key"),
lib.TextDef("text_key")
NumberDef("number_key"),
TextDef("text_key")
]
return output

View file

@ -1,10 +1,8 @@
import json
import pyblish.api
from openpype.pipeline import (
OpenPypePyblishPluginMixin,
attribute_definitions
)
from openpype.lib import attribute_definitions
from openpype.pipeline import OpenPypePyblishPluginMixin
class CollectInstanceOneTestHost(

View file

@ -29,6 +29,21 @@ from .vendor_bin_utils import (
is_oiio_supported
)
from .attribute_definitions import (
AbtractAttrDef,
UIDef,
UISeparatorDef,
UILabelDef,
UnknownDef,
NumberDef,
TextDef,
EnumDef,
BoolDef,
FileDef,
)
from .env_tools import (
env_value_to_bool,
get_paths_from_environ,
@ -233,6 +248,19 @@ __all__ = [
"get_ffmpeg_tool_path",
"is_oiio_supported",
"AbtractAttrDef",
"UIDef",
"UISeparatorDef",
"UILabelDef",
"UnknownDef",
"NumberDef",
"TextDef",
"EnumDef",
"BoolDef",
"FileDef",
"import_filepath",
"modules_from_path",
"recursive_bases_from_class",

View file

@ -1319,6 +1319,41 @@ def _merge_env(env, current_env):
return result
def _add_python_version_paths(app, env, logger):
"""Add vendor packages specific for a Python version."""
# Skip adding if host name is not set
if not app.host_name:
return
# Add Python 2/3 modules
openpype_root = os.getenv("OPENPYPE_REPOS_ROOT")
python_vendor_dir = os.path.join(
openpype_root,
"openpype",
"vendor",
"python"
)
if app.use_python_2:
pythonpath = os.path.join(python_vendor_dir, "python_2")
else:
pythonpath = os.path.join(python_vendor_dir, "python_3")
if not os.path.exists(pythonpath):
return
logger.debug("Adding Python version specific paths to PYTHONPATH")
python_paths = [pythonpath]
# Load PYTHONPATH from current launch context
python_path = env.get("PYTHONPATH")
if python_path:
python_paths.append(python_path)
# Set new PYTHONPATH to launch context environments
env["PYTHONPATH"] = os.pathsep.join(python_paths)
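The PYTHONPATH handling in `_add_python_version_paths` above, reduced to its essence: prepend the vendor path while preserving whatever the launch context already carries. A minimal sketch (the helper name is an assumption):

```python
import os


def prepend_pythonpath(env, path):
    """Prepend ``path`` to env["PYTHONPATH"], keeping existing entries."""
    paths = [path]
    existing = env.get("PYTHONPATH")
    if existing:
        paths.append(existing)
    env["PYTHONPATH"] = os.pathsep.join(paths)
    return env
```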
def prepare_app_environments(data, env_group=None, implementation_envs=True):
"""Modify launch environments based on launched app and context.
@ -1331,6 +1366,8 @@ def prepare_app_environments(data, env_group=None, implementation_envs=True):
app = data["app"]
log = data["log"]
_add_python_version_paths(app, data["env"], log)
# `added_env_keys` has debug purpose
added_env_keys = {app.group.name, app.name}
# Environments for application

View file

@ -1705,7 +1705,7 @@ def _get_task_context_data_for_anatomy(
"task": {
"name": task_name,
"type": task_type,
"short_name": project_task_type_data["short_name"]
"short": project_task_type_data["short_name"]
}
}

View file

@ -28,26 +28,15 @@ from openpype.settings.lib import (
)
from openpype.lib import PypeLogger
DEFAULT_OPENPYPE_MODULES = (
"avalon_apps",
"clockify",
"log_viewer",
"deadline",
"muster",
"royalrender",
"python_console_interpreter",
"ftrack",
"slack",
"webserver",
"launcher_action",
"project_manager_action",
"settings_action",
"standalonepublish_action",
"traypublish_action",
"job_queue",
"timers_manager",
"sync_server",
# Files that will be always ignored on modules import
IGNORED_FILENAMES = (
"__pycache__",
)
# Files ignored on modules import from "./openpype/modules"
IGNORED_DEFAULT_FILENAMES = (
"__init__.py",
"base.py",
"interfaces.py",
)
@ -146,9 +135,16 @@ class _LoadCache:
def get_default_modules_dir():
"""Path to default OpenPype modules."""
current_dir = os.path.abspath(os.path.dirname(__file__))
return os.path.join(current_dir, "default_modules")
output = []
for folder_name in ("default_modules", ):
path = os.path.join(current_dir, folder_name)
if os.path.exists(path) and os.path.isdir(path):
output.append(path)
return output
def get_dynamic_modules_dirs():
@ -186,7 +182,7 @@ def get_dynamic_modules_dirs():
def get_module_dirs():
"""List of paths where OpenPype modules can be found."""
_dirpaths = []
_dirpaths.append(get_default_modules_dir())
_dirpaths.extend(get_default_modules_dir())
_dirpaths.extend(get_dynamic_modules_dirs())
dirpaths = []
@ -292,25 +288,45 @@ def _load_modules():
log = PypeLogger.get_logger("ModulesLoader")
current_dir = os.path.abspath(os.path.dirname(__file__))
processed_paths = set()
processed_paths.add(current_dir)
# Import default modules imported from 'openpype.modules'
for default_module_name in DEFAULT_OPENPYPE_MODULES:
for filename in os.listdir(current_dir):
# Ignore filenames
if (
filename in IGNORED_FILENAMES
or filename in IGNORED_DEFAULT_FILENAMES
):
continue
fullpath = os.path.join(current_dir, filename)
basename, ext = os.path.splitext(filename)
if not os.path.isdir(fullpath) and ext not in (".py", ):
continue
try:
import_str = "openpype.modules.{}".format(default_module_name)
new_import_str = "{}.{}".format(modules_key, default_module_name)
import_str = "openpype.modules.{}".format(basename)
new_import_str = "{}.{}".format(modules_key, basename)
default_module = __import__(import_str, fromlist=("", ))
sys.modules[new_import_str] = default_module
setattr(openpype_modules, default_module_name, default_module)
setattr(openpype_modules, basename, default_module)
except Exception:
msg = (
"Failed to import default module '{}'."
).format(default_module_name)
).format(basename)
log.error(msg, exc_info=True)
# Look for OpenPype modules in paths defined with `get_module_dirs`
# - dynamically imported OpenPype modules and addons
dirpaths = get_module_dirs()
for dirpath in dirpaths:
for dirpath in get_module_dirs():
# Skip already processed paths
if dirpath in processed_paths:
continue
processed_paths.add(dirpath)
if not os.path.exists(dirpath):
log.warning((
"Could not find path when loading OpenPype modules \"{}\""
@ -319,12 +335,15 @@ def _load_modules():
for filename in os.listdir(dirpath):
# Ignore filenames
if filename in ("__pycache__", ):
if filename in IGNORED_FILENAMES:
continue
fullpath = os.path.join(dirpath, filename)
basename, ext = os.path.splitext(filename)
if not os.path.isdir(fullpath) and ext not in (".py", ):
continue
# TODO add more logic to determine whether a folder is a module
# - check manifest and content of manifest
try:

View file

@ -492,7 +492,8 @@ class DeleteOldVersions(BaseAction):
os.remove(file_path)
self.log.debug("Removed file: {}".format(file_path))
remainders.remove(file_path_base)
if file_path_base in remainders:
remainders.remove(file_path_base)
continue
seq_path_base = os.path.split(seq_path)[1]

View file

@ -26,3 +26,12 @@ class LogsWindow(QtWidgets.QWidget):
self.log_detail = log_detail
self.setStyleSheet(style.load_stylesheet())
self._first_show = True
def showEvent(self, event):
super(LogsWindow, self).showEvent(event)
if self._first_show:
self._first_show = False
self.logs_widget.refresh()

View file

@ -155,6 +155,11 @@ class LogsWidget(QtWidgets.QWidget):
QtCore.Qt.DescendingOrder
)
refresh_triggered_timer = QtCore.QTimer()
refresh_triggered_timer.setSingleShot(True)
refresh_triggered_timer.setInterval(200)
refresh_triggered_timer.timeout.connect(self._on_refresh_timeout)
view.selectionModel().selectionChanged.connect(self._on_index_change)
refresh_btn.clicked.connect(self._on_refresh_clicked)
@ -169,10 +174,12 @@ class LogsWidget(QtWidgets.QWidget):
self.detail_widget = detail_widget
self.refresh_btn = refresh_btn
# prepare
self.refresh()
self._refresh_triggered_timer = refresh_triggered_timer
def refresh(self):
self._refresh_triggered_timer.start()
def _on_refresh_timeout(self):
self.model.refresh()
self.detail_widget.refresh()

View file

@ -35,20 +35,25 @@ class CollectSlackFamilies(pyblish.api.InstancePlugin):
return
# make slack publishable
if profile:
self.log.info("Found profile: {}".format(profile))
if instance.data.get('families'):
instance.data['families'].append('slack')
else:
instance.data['families'] = ['slack']
if not profile:
return
instance.data["slack_channel_message_profiles"] = \
profile["channel_messages"]
self.log.info("Found profile: {}".format(profile))
if instance.data.get('families'):
instance.data['families'].append('slack')
else:
instance.data['families'] = ['slack']
slack_token = (instance.context.data["project_settings"]
["slack"]
["token"])
instance.data["slack_token"] = slack_token
selected_profiles = profile["channel_messages"]
for prof in selected_profiles:
prof["review_upload_limit"] = profile.get("review_upload_limit",
50)
instance.data["slack_channel_message_profiles"] = selected_profiles
slack_token = (instance.context.data["project_settings"]
["slack"]
["token"])
instance.data["slack_token"] = slack_token
def main_family_from_instance(self, instance): # TODO yank from integrate
"""Returns main family of entered instance."""

View file

@ -35,7 +35,7 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
message = self._get_filled_message(message_profile["message"],
instance,
review_path)
self.log.info("message:: {}".format(message))
self.log.debug("message:: {}".format(message))
if not message:
return
@ -43,7 +43,8 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
publish_files.add(thumbnail_path)
if message_profile["upload_review"] and review_path:
publish_files.add(review_path)
message, publish_files = self._handle_review_upload(
message, message_profile, publish_files, review_path)
project = instance.context.data["anatomyData"]["project"]["code"]
for channel in message_profile["channels"]:
@ -75,6 +76,19 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
dbcon = mongo_client[database_name]["notification_messages"]
dbcon.insert_one(msg)
def _handle_review_upload(self, message, message_profile, publish_files,
review_path):
"""Check if uploaded file is not too large"""
review_file_size_MB = os.path.getsize(review_path) / 1024 / 1024
file_limit = message_profile.get("review_upload_limit", 50)
if review_file_size_MB > file_limit:
message += "\nReview upload omitted because of file size."
if review_path not in message:
message += "\nFile located at: {}".format(review_path)
else:
publish_files.add(review_path)
return message, publish_files
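The size gate in `_handle_review_upload` converts bytes to megabytes and compares against the configurable limit (default 50 MB). A minimal sketch of that check, with a hypothetical helper name:

```python
def within_upload_limit(file_size_bytes, limit_mb=50):
    """Return True when a file fits the Slack review upload limit (sketch)."""
    return (file_size_bytes / 1024 / 1024) <= limit_mb
```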
def _get_filled_message(self, message_templ, instance, review_path=None):
"""Use message_templ and data from instance to get message content.
@ -210,6 +224,9 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
# You will get a SlackApiError if "ok" is False
error_str = self._enrich_error(str(e.response["error"]), channel)
self.log.warning("Error happened {}".format(error_str))
except Exception as e:
error_str = self._enrich_error(str(e), channel)
self.log.warning("Not SlackAPI error", exc_info=True)
return None, []

View file

@ -3,8 +3,6 @@ from .constants import (
HOST_WORKFILE_EXTENSIONS,
)
from .lib import attribute_definitions
from .create import (
BaseCreator,
Creator,
@ -50,7 +48,8 @@ from .publish import (
PublishValidationError,
PublishXmlValidationError,
KnownPublishError,
OpenPypePyblishPluginMixin
OpenPypePyblishPluginMixin,
OptionalPyblishPluginMixin,
)
from .actions import (
@ -121,6 +120,7 @@ __all__ = (
"PublishXmlValidationError",
"KnownPublishError",
"OpenPypePyblishPluginMixin",
"OptionalPyblishPluginMixin",
# --- Actions ---
"LauncherAction",

View file

@ -6,7 +6,6 @@ import inspect
from uuid import uuid4
from contextlib import contextmanager
from ..lib import UnknownDef
from .creator_plugins import (
BaseCreator,
Creator,
@ -88,6 +87,8 @@ class AttributeValues:
origin_data(dict): Values loaded from host before conversion.
"""
def __init__(self, attr_defs, values, origin_data=None):
from openpype.lib.attribute_definitions import UnknownDef
if origin_data is None:
origin_data = copy.deepcopy(values)
self._origin_data = origin_data

View file

@ -1,30 +0,0 @@
from .attribute_definitions import (
AbtractAttrDef,
UIDef,
UISeparatorDef,
UILabelDef,
UnknownDef,
NumberDef,
TextDef,
EnumDef,
BoolDef,
FileDef,
)
__all__ = (
"AbtractAttrDef",
"UIDef",
"UISeparatorDef",
"UILabelDef",
"UnknownDef",
"NumberDef",
"TextDef",
"EnumDef",
"BoolDef",
"FileDef",
)

View file

@ -3,6 +3,7 @@ from .publish_plugins import (
PublishXmlValidationError,
KnownPublishError,
OpenPypePyblishPluginMixin,
OptionalPyblishPluginMixin,
)
from .lib import (
@ -18,6 +19,7 @@ __all__ = (
"PublishXmlValidationError",
"KnownPublishError",
"OpenPypePyblishPluginMixin",
"OptionalPyblishPluginMixin",
"DiscoverResult",
"publish_plugins_discover",

View file

@ -1,3 +1,4 @@
from openpype.lib import BoolDef
from .lib import load_help_content_from_plugin
@ -108,3 +109,64 @@ class OpenPypePyblishPluginMixin:
plugin_values[key]
)
return attribute_values
def get_attr_values_from_data(self, data):
"""Get attribute values for attribute definitions from data.
Args:
data(dict): Data from instance or context.
"""
return (
data
.get("publish_attributes", {})
.get(self.__class__.__name__, {})
)
class OptionalPyblishPluginMixin(OpenPypePyblishPluginMixin):
"""Prepare mixin for optional plugins.
Defines an "active" attribute definition for publishing and provides
a method to check whether the plugin is active.
```
class ValidateScene(
pyblish.api.InstancePlugin, OptionalPyblishPluginMixin
):
def process(self, instance):
# Skip the instance if it is not active based on instance data
if not self.is_active(instance.data):
return
```
"""
@classmethod
def get_attribute_defs(cls):
"""Attribute definitions based on plugin's optional attribute."""
# Empty list if plugin is not optional
if not getattr(cls, "optional", None):
return []
# Get active value from class as default value
active = getattr(cls, "active", True)
# Return boolean stored under 'active' key with label of the class name
label = cls.label or cls.__name__
return [
BoolDef("active", default=active, label=label)
]
def is_active(self, data):
"""Check if plugin is active for instance/context based on its data.
Args:
data(dict): Data from instance or context.
"""
# Return True if the plugin is not optional
if not getattr(self, "optional", None):
return True
attr_values = self.get_attr_values_from_data(data)
active = attr_values.get("active")
if active is None:
active = getattr(self, "active", True)
return active
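Outside of pyblish, the `is_active` lookup above reduces to reading `publish_attributes[<plugin name>]["active"]` with a fallback to the class default. A self-contained sketch (function and parameter names are assumptions for illustration):

```python
def is_plugin_active(plugin_name, data, optional=True, default_active=True):
    """Mirror the OptionalPyblishPluginMixin active check (sketch)."""
    # Non-optional plugins always run
    if not optional:
        return True
    # Read the value stored by the publisher UI, fall back to the default
    attr_values = data.get("publish_attributes", {}).get(plugin_name, {})
    active = attr_values.get("active")
    if active is None:
        active = default_active
    return active
```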

View file

@ -126,7 +126,8 @@ class DeleteOldVersions(load.SubsetLoaderPlugin):
os.remove(file_path)
self.log.debug("Removed file: {}".format(file_path))
remainders.remove(file_path_base)
if file_path_base in remainders:
remainders.remove(file_path_base)
continue
seq_path_base = os.path.split(seq_path)[1]
@ -333,6 +334,8 @@ class DeleteOldVersions(load.SubsetLoaderPlugin):
def main(self, data, remove_publish_folder):
# Size of files.
size = 0
if not data:
return size
if remove_publish_folder:
size = self.delete_whole_dir_paths(data["dir_paths"].values())
@ -418,6 +421,8 @@ class DeleteOldVersions(load.SubsetLoaderPlugin):
)
data = self.get_data(context, versions_to_keep)
if not data:
continue
size += self.main(data, remove_publish_folder)
print("Progressing {}/{}".format(count + 1, len(contexts)))

View file

@ -105,7 +105,9 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
"effect",
"xgen",
"hda",
"usd"
"usd",
"usdComposition",
"usdOverride"
]
exclude_families = ["clip"]
db_representation_context_keys = [

View file

@ -8,11 +8,11 @@ M_ENVIRONMENT_KEY = "__environment_keys__"
# Metadata key for storing dynamic created labels
M_DYNAMIC_KEY_LABEL = "__dynamic_keys_labels__"
METADATA_KEYS = (
METADATA_KEYS = frozenset([
M_OVERRIDDEN_KEY,
M_ENVIRONMENT_KEY,
M_DYNAMIC_KEY_LABEL
)
])
# Keys where studio's system overrides are stored
GLOBAL_SETTINGS_KEY = "global_settings"

View file

@ -11,6 +11,7 @@
"task_types": [],
"tasks": [],
"subsets": [],
"review_upload_limit": 50.0,
"channel_messages": []
}
]

View file

@ -173,6 +173,10 @@ class BaseItemEntity(BaseEntity):
# Entity has set `_project_override_value` (is not NOT_SET)
self.had_project_override = False
self._default_log_invalid_types = True
self._studio_log_invalid_types = True
self._project_log_invalid_types = True
# Callbacks that are called on change.
# - main current purpose is to register GUI callbacks
self.on_change_callbacks = []
@ -419,7 +423,7 @@ class BaseItemEntity(BaseEntity):
raise InvalidValueType(self.valid_value_types, type(value), self.path)
# TODO convert to private method
def _check_update_value(self, value, value_source):
def _check_update_value(self, value, value_source, log_invalid_types=True):
"""Validation of value on update methods.
Update methods update data from currently saved settings so it is
@ -447,16 +451,17 @@ class BaseItemEntity(BaseEntity):
if new_value is not NOT_SET:
return new_value
# Warning log about invalid value type.
self.log.warning(
(
"{} Got invalid value type for {} values."
" Expected types: {} | Got Type: {} | Value: \"{}\""
).format(
self.path, value_source,
self.valid_value_types, type(value), str(value)
if log_invalid_types:
# Warning log about invalid value type.
self.log.warning(
(
"{} Got invalid value type for {} values."
" Expected types: {} | Got Type: {} | Value: \"{}\""
).format(
self.path, value_source,
self.valid_value_types, type(value), str(value)
)
)
)
return NOT_SET
def available_for_role(self, role_name=None):
@ -985,7 +990,7 @@ class ItemEntity(BaseItemEntity):
return self.root_item.get_entity_from_path(path)
@abstractmethod
def update_default_value(self, parent_values):
def update_default_value(self, parent_values, log_invalid_types=True):
"""Fill default values on startup or on refresh.
Default values stored in `openpype` repository should update all items
@ -995,11 +1000,13 @@ class ItemEntity(BaseItemEntity):
Args:
parent_values (dict): Values of parent's item. But in case item is
used as widget, `parent_values` contain value for item.
log_invalid_types (bool): Log invalid type of value. Used when
entity can have children with same keys and different types.
"""
pass
@abstractmethod
def update_studio_value(self, parent_values):
def update_studio_value(self, parent_values, log_invalid_types=True):
"""Fill studio override values on startup or on refresh.
Set studio value if is not set to NOT_SET, in that case studio
@ -1008,11 +1015,13 @@ class ItemEntity(BaseItemEntity):
Args:
parent_values (dict): Values of parent's item. But in case item is
used as widget, `parent_values` contain value for item.
log_invalid_types (bool): Log invalid type of value. Used when
entity can have children with same keys and different types.
"""
pass
@abstractmethod
def update_project_value(self, parent_values):
def update_project_value(self, parent_values, log_invalid_types=True):
"""Fill project override values on startup, refresh or project change.
Set project value if is not set to NOT_SET, in that case project
@ -1021,5 +1030,7 @@ class ItemEntity(BaseItemEntity):
Args:
parent_values (dict): Values of parent's item. But in case item is
used as widget, `parent_values` contain value for item.
log_invalid_types (bool): Log invalid type of value. Used when
entity can have children with same keys and different types.
"""
pass

View file

@ -518,12 +518,18 @@ class DictConditionalEntity(ItemEntity):
output.update(self._current_metadata)
return output
def _prepare_value(self, value):
def _prepare_value(self, value, log_invalid_types):
if value is NOT_SET or self.enum_key not in value:
return NOT_SET, NOT_SET
enum_value = value.get(self.enum_key)
if enum_value not in self.non_gui_children:
if log_invalid_types:
self.log.warning(
"{} Unknown enum key in default values: {}".format(
self.path, enum_value
)
)
return NOT_SET, NOT_SET
# Create copy of value before poping values
@ -551,22 +557,25 @@ class DictConditionalEntity(ItemEntity):
return value, metadata
def update_default_value(self, value):
def update_default_value(self, value, log_invalid_types=True):
"""Update default values.
Not an api method, should be called by parent.
"""
value = self._check_update_value(value, "default")
self._default_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "default", log_invalid_types
)
self.has_default_value = value is not NOT_SET
# TODO add value validation
value, metadata = self._prepare_value(value)
value, metadata = self._prepare_value(value, log_invalid_types)
self._default_metadata = metadata
if value is NOT_SET:
self.enum_entity.update_default_value(value)
self.enum_entity.update_default_value(value, log_invalid_types)
for children_by_key in self.non_gui_children.values():
for child_obj in children_by_key.values():
child_obj.update_default_value(value)
child_obj.update_default_value(value, log_invalid_types)
return
value_keys = set(value.keys())
@ -574,7 +583,7 @@ class DictConditionalEntity(ItemEntity):
expected_keys = set(self.non_gui_children[enum_value].keys())
expected_keys.add(self.enum_key)
unknown_keys = value_keys - expected_keys
if unknown_keys:
if unknown_keys and log_invalid_types:
self.log.warning(
"{} Unknown keys in default values: {}".format(
self.path,
@ -582,28 +591,37 @@ class DictConditionalEntity(ItemEntity):
)
)
self.enum_entity.update_default_value(enum_value)
for children_by_key in self.non_gui_children.values():
self.enum_entity.update_default_value(enum_value, log_invalid_types)
for enum_key, children_by_key in self.non_gui_children.items():
_log_invalid_types = log_invalid_types
if _log_invalid_types:
_log_invalid_types = enum_key == enum_value
value_copy = copy.deepcopy(value)
for key, child_obj in children_by_key.items():
child_value = value_copy.get(key, NOT_SET)
child_obj.update_default_value(child_value)
child_obj.update_default_value(child_value, _log_invalid_types)
def update_studio_value(self, value):
def update_studio_value(self, value, log_invalid_types=True):
"""Update studio override values.
Not an api method, should be called by parent.
"""
value = self._check_update_value(value, "studio override")
value, metadata = self._prepare_value(value)
self._studio_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "studio override", log_invalid_types
)
value, metadata = self._prepare_value(value, log_invalid_types)
self._studio_override_metadata = metadata
self.had_studio_override = metadata is not NOT_SET
if value is NOT_SET:
self.enum_entity.update_studio_value(value)
self.enum_entity.update_studio_value(value, log_invalid_types)
for children_by_key in self.non_gui_children.values():
for child_obj in children_by_key.values():
child_obj.update_studio_value(value)
child_obj.update_studio_value(value, log_invalid_types)
return
value_keys = set(value.keys())
@@ -611,7 +629,7 @@ class DictConditionalEntity(ItemEntity):
expected_keys = set(self.non_gui_children[enum_value])
expected_keys.add(self.enum_key)
unknown_keys = value_keys - expected_keys
if unknown_keys:
if unknown_keys and log_invalid_types:
self.log.warning(
"{} Unknown keys in studio overrides: {}".format(
self.path,
@@ -619,28 +637,36 @@ class DictConditionalEntity(ItemEntity):
)
)
self.enum_entity.update_studio_value(enum_value)
for children_by_key in self.non_gui_children.values():
self.enum_entity.update_studio_value(enum_value, log_invalid_types)
for enum_key, children_by_key in self.non_gui_children.items():
_log_invalid_types = log_invalid_types
if _log_invalid_types:
_log_invalid_types = enum_key == enum_value
value_copy = copy.deepcopy(value)
for key, child_obj in children_by_key.items():
child_value = value_copy.get(key, NOT_SET)
child_obj.update_studio_value(child_value)
child_obj.update_studio_value(child_value, _log_invalid_types)
def update_project_value(self, value):
def update_project_value(self, value, log_invalid_types=True):
"""Update project override values.
Not an api method, should be called by parent.
"""
value = self._check_update_value(value, "project override")
value, metadata = self._prepare_value(value)
self._project_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "project override", log_invalid_types
)
value, metadata = self._prepare_value(value, log_invalid_types)
self._project_override_metadata = metadata
self.had_project_override = metadata is not NOT_SET
if value is NOT_SET:
self.enum_entity.update_project_value(value)
self.enum_entity.update_project_value(value, log_invalid_types)
for children_by_key in self.non_gui_children.values():
for child_obj in children_by_key.values():
child_obj.update_project_value(value)
child_obj.update_project_value(value, log_invalid_types)
return
value_keys = set(value.keys())
@@ -648,7 +674,7 @@ class DictConditionalEntity(ItemEntity):
expected_keys = set(self.non_gui_children[enum_value])
expected_keys.add(self.enum_key)
unknown_keys = value_keys - expected_keys
if unknown_keys:
if unknown_keys and log_invalid_types:
self.log.warning(
"{} Unknown keys in project overrides: {}".format(
self.path,
@@ -656,12 +682,16 @@ class DictConditionalEntity(ItemEntity):
)
)
self.enum_entity.update_project_value(enum_value)
for children_by_key in self.non_gui_children.values():
self.enum_entity.update_project_value(enum_value, log_invalid_types)
for enum_key, children_by_key in self.non_gui_children.items():
_log_invalid_types = log_invalid_types
if _log_invalid_types:
_log_invalid_types = enum_key == enum_value
value_copy = copy.deepcopy(value)
for key, child_obj in children_by_key.items():
child_value = value_copy.get(key, NOT_SET)
child_obj.update_project_value(child_value)
child_obj.update_project_value(child_value, _log_invalid_types)
def _discard_changes(self, on_change_trigger):
self._ignore_child_changes = True
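The recurring change above, threading an optional `log_invalid_types` flag from a conditional parent entity down to its children while silencing warnings for non-active enum branches, can be sketched in isolation. The class names `Leaf` and `ConditionalParent` below are illustrative stand-ins, not OpenPype API:

```python
NOT_SET = object()  # local stand-in for OpenPype's NOT_SET sentinel


class Leaf:
    """Minimal child entity: records a warning only when asked to."""

    def __init__(self):
        self.value = NOT_SET
        self.warnings = []

    def update_default_value(self, value, log_invalid_types=True):
        # Warn about an unexpected type only when the flag allows it
        if value is not NOT_SET and not isinstance(value, int):
            if log_invalid_types:
                self.warnings.append(
                    "invalid type: {}".format(type(value).__name__)
                )
        self.value = value


class ConditionalParent:
    """Forwards the flag, but only the active enum branch may log."""

    def __init__(self, children_by_branch):
        self.children_by_branch = children_by_branch

    def update_default_value(self, value, log_invalid_types=True):
        enum_value = value.get("type")
        for branch, children in self.children_by_branch.items():
            # Mirrors `_log_invalid_types = enum_key == enum_value`
            _log = log_invalid_types and branch == enum_value
            for key, child in children.items():
                child.update_default_value(value.get(key, NOT_SET), _log)
```

Only children under the branch matching the stored enum value emit warnings; every other branch receives `log_invalid_types=False`, which is exactly what the `_log_invalid_types` lines in the diff achieve.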


@@ -414,12 +414,16 @@ class DictImmutableKeysEntity(ItemEntity):
return value, metadata
def update_default_value(self, value):
def update_default_value(self, value, log_invalid_types=True):
"""Update default values.
Not an api method, should be called by parent.
"""
value = self._check_update_value(value, "default")
self._default_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "default", log_invalid_types
)
self.has_default_value = value is not NOT_SET
# TODO add value validation
value, metadata = self._prepare_value(value)
@@ -427,13 +431,13 @@ class DictImmutableKeysEntity(ItemEntity):
if value is NOT_SET:
for child_obj in self.non_gui_children.values():
child_obj.update_default_value(value)
child_obj.update_default_value(value, log_invalid_types)
return
value_keys = set(value.keys())
expected_keys = set(self.non_gui_children)
unknown_keys = value_keys - expected_keys
if unknown_keys:
if unknown_keys and log_invalid_types:
self.log.warning(
"{} Unknown keys in default values: {}".format(
self.path,
@@ -443,27 +447,31 @@ class DictImmutableKeysEntity(ItemEntity):
for key, child_obj in self.non_gui_children.items():
child_value = value.get(key, NOT_SET)
child_obj.update_default_value(child_value)
child_obj.update_default_value(child_value, log_invalid_types)
def update_studio_value(self, value):
def update_studio_value(self, value, log_invalid_types=True):
"""Update studio override values.
Not an api method, should be called by parent.
"""
value = self._check_update_value(value, "studio override")
self._studio_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "studio override", log_invalid_types
)
value, metadata = self._prepare_value(value)
self._studio_override_metadata = metadata
self.had_studio_override = metadata is not NOT_SET
if value is NOT_SET:
for child_obj in self.non_gui_children.values():
child_obj.update_studio_value(value)
child_obj.update_studio_value(value, log_invalid_types)
return
value_keys = set(value.keys())
expected_keys = set(self.non_gui_children)
unknown_keys = value_keys - expected_keys
if unknown_keys:
if unknown_keys and log_invalid_types:
self.log.warning(
"{} Unknown keys in studio overrides: {}".format(
self.path,
@@ -472,27 +480,31 @@ class DictImmutableKeysEntity(ItemEntity):
)
for key, child_obj in self.non_gui_children.items():
child_value = value.get(key, NOT_SET)
child_obj.update_studio_value(child_value)
child_obj.update_studio_value(child_value, log_invalid_types)
def update_project_value(self, value):
def update_project_value(self, value, log_invalid_types=True):
"""Update project override values.
Not an api method, should be called by parent.
"""
value = self._check_update_value(value, "project override")
self._project_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "project override", log_invalid_types
)
value, metadata = self._prepare_value(value)
self._project_override_metadata = metadata
self.had_project_override = metadata is not NOT_SET
if value is NOT_SET:
for child_obj in self.non_gui_children.values():
child_obj.update_project_value(value)
child_obj.update_project_value(value, log_invalid_types)
return
value_keys = set(value.keys())
expected_keys = set(self.non_gui_children)
unknown_keys = value_keys - expected_keys
if unknown_keys:
if unknown_keys and log_invalid_types:
self.log.warning(
"{} Unknown keys in project overrides: {}".format(
self.path,
@@ -502,7 +514,7 @@ class DictImmutableKeysEntity(ItemEntity):
for key, child_obj in self.non_gui_children.items():
child_value = value.get(key, NOT_SET)
child_obj.update_project_value(child_value)
child_obj.update_project_value(child_value, log_invalid_types)
def _discard_changes(self, on_change_trigger):
self._ignore_child_changes = True
@@ -694,37 +706,48 @@ class RootsDictEntity(DictImmutableKeysEntity):
self._metadata_are_modified = False
self._current_metadata = {}
def update_default_value(self, value):
def update_default_value(self, value, log_invalid_types=True):
"""Update default values.
Not an api method, should be called by parent.
"""
value = self._check_update_value(value, "default")
self._default_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "default", log_invalid_types
)
value, _ = self._prepare_value(value)
self._default_value = value
self._default_metadata = {}
self.has_default_value = value is not NOT_SET
def update_studio_value(self, value):
def update_studio_value(self, value, log_invalid_types=True):
"""Update studio override values.
Not an api method, should be called by parent.
"""
value = self._check_update_value(value, "studio override")
self._studio_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "studio override", log_invalid_types
)
value, _ = self._prepare_value(value)
self._studio_value = value
self._studio_override_metadata = {}
self.had_studio_override = value is not NOT_SET
def update_project_value(self, value):
def update_project_value(self, value, log_invalid_types=True):
"""Update project override values.
Not an api method, should be called by parent.
"""
value = self._check_update_value(value, "project override")
self._project_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "project override", log_invalid_types
)
value, _metadata = self._prepare_value(value)
self._project_value = value
@@ -886,37 +909,48 @@ class SyncServerSites(DictImmutableKeysEntity):
self._metadata_are_modified = False
self._current_metadata = {}
def update_default_value(self, value):
def update_default_value(self, value, log_invalid_types=True):
"""Update default values.
Not an api method, should be called by parent.
"""
value = self._check_update_value(value, "default")
self._default_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "default", log_invalid_types
)
value, _ = self._prepare_value(value)
self._default_value = value
self._default_metadata = {}
self.has_default_value = value is not NOT_SET
def update_studio_value(self, value):
def update_studio_value(self, value, log_invalid_types=True):
"""Update studio override values.
Not an api method, should be called by parent.
"""
value = self._check_update_value(value, "studio override")
self._studio_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "studio override", log_invalid_types
)
value, _ = self._prepare_value(value)
self._studio_value = value
self._studio_override_metadata = {}
self.had_studio_override = value is not NOT_SET
def update_project_value(self, value):
def update_project_value(self, value, log_invalid_types=True):
"""Update project override values.
Not an api method, should be called by parent.
"""
value = self._check_update_value(value, "project override")
self._project_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "project override", log_invalid_types
)
value, _metadata = self._prepare_value(value)
self._project_value = value


@@ -393,11 +393,15 @@ class DictMutableKeysEntity(EndpointEntity):
value = self.value_on_not_set
using_values_from_state = False
log_invalid_types = True
if state is OverrideState.PROJECT:
log_invalid_types = self._project_log_invalid_types
using_values_from_state = using_project_overrides
elif state is OverrideState.STUDIO:
log_invalid_types = self._studio_log_invalid_types
using_values_from_state = using_studio_overrides
elif state is OverrideState.DEFAULTS:
log_invalid_types = self._default_log_invalid_types
using_values_from_state = using_default_values
new_value = copy.deepcopy(value)
@@ -437,11 +441,11 @@ class DictMutableKeysEntity(EndpointEntity):
if not label:
label = metadata_labels.get(new_key)
child_entity.update_default_value(_value)
child_entity.update_default_value(_value, log_invalid_types)
if using_project_overrides:
child_entity.update_project_value(_value)
child_entity.update_project_value(_value, log_invalid_types)
elif using_studio_overrides:
child_entity.update_studio_value(_value)
child_entity.update_studio_value(_value, log_invalid_types)
if label:
children_label_by_id[child_entity.id] = label
@@ -598,8 +602,11 @@ class DictMutableKeysEntity(EndpointEntity):
metadata[key] = value.pop(key)
return value, metadata
def update_default_value(self, value):
value = self._check_update_value(value, "default")
def update_default_value(self, value, log_invalid_types=True):
self._default_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "default", log_invalid_types
)
has_default_value = value is not NOT_SET
if has_default_value:
for required_key in self.required_keys:
@@ -611,15 +618,21 @@ class DictMutableKeysEntity(EndpointEntity):
self._default_value = value
self._default_metadata = metadata
def update_studio_value(self, value):
value = self._check_update_value(value, "studio override")
def update_studio_value(self, value, log_invalid_types=True):
self._studio_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "studio override", log_invalid_types
)
value, metadata = self._prepare_value(value)
self._studio_override_value = value
self._studio_override_metadata = metadata
self.had_studio_override = value is not NOT_SET
def update_project_value(self, value):
value = self._check_update_value(value, "project override")
def update_project_value(self, value, log_invalid_types=True):
self._project_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "project override", log_invalid_types
)
value, metadata = self._prepare_value(value)
self._project_override_value = value
self._project_override_metadata = metadata
@@ -686,9 +699,12 @@ class DictMutableKeysEntity(EndpointEntity):
if not self._can_remove_from_project_override:
return
log_invalid_types = True
if self._has_studio_override:
log_invalid_types = self._studio_log_invalid_types
value = self._studio_override_value
elif self.has_default_value:
log_invalid_types = self._default_log_invalid_types
value = self._default_value
else:
value = self.value_on_not_set
@@ -709,9 +725,9 @@ class DictMutableKeysEntity(EndpointEntity):
for _key, _value in new_value.items():
new_key = self._convert_to_regex_valid_key(_key)
child_entity = self._add_key(new_key)
child_entity.update_default_value(_value)
child_entity.update_default_value(_value, log_invalid_types)
if self._has_studio_override:
child_entity.update_studio_value(_value)
child_entity.update_studio_value(_value, log_invalid_types)
label = metadata_labels.get(_key)
if label:


@@ -90,18 +90,27 @@ class EndpointEntity(ItemEntity):
def require_restart(self):
return self.has_unsaved_changes
def update_default_value(self, value):
value = self._check_update_value(value, "default")
def update_default_value(self, value, log_invalid_types=True):
self._default_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "default", log_invalid_types
)
self._default_value = value
self.has_default_value = value is not NOT_SET
def update_studio_value(self, value):
value = self._check_update_value(value, "studio override")
def update_studio_value(self, value, log_invalid_types=True):
self._studio_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "studio override", log_invalid_types
)
self._studio_override_value = value
self.had_studio_override = bool(value is not NOT_SET)
def update_project_value(self, value):
value = self._check_update_value(value, "project override")
def update_project_value(self, value, log_invalid_types=True):
self._project_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "project override", log_invalid_types
)
self._project_override_value = value
self.had_project_override = bool(value is not NOT_SET)
@@ -590,22 +599,26 @@ class RawJsonEntity(InputEntity):
metadata[key] = value.pop(key)
return value, metadata
def update_default_value(self, value):
value = self._check_update_value(value, "default")
def update_default_value(self, value, log_invalid_types=True):
value = self._check_update_value(value, "default", log_invalid_types)
self.has_default_value = value is not NOT_SET
value, metadata = self._prepare_value(value)
self._default_value = value
self.default_metadata = metadata
def update_studio_value(self, value):
value = self._check_update_value(value, "studio override")
def update_studio_value(self, value, log_invalid_types=True):
value = self._check_update_value(
value, "studio override", log_invalid_types
)
self.had_studio_override = value is not NOT_SET
value, metadata = self._prepare_value(value)
self._studio_override_value = value
self.studio_override_metadata = metadata
def update_project_value(self, value):
value = self._check_update_value(value, "project override")
def update_project_value(self, value, log_invalid_types=True):
value = self._check_update_value(
value, "project override", log_invalid_types
)
self.had_project_override = value is not NOT_SET
value, metadata = self._prepare_value(value)
self._project_override_value = value


@@ -173,14 +173,17 @@ class PathEntity(ItemEntity):
self._ignore_missing_defaults = ignore_missing_defaults
self.child_obj.set_override_state(state, ignore_missing_defaults)
def update_default_value(self, value):
self.child_obj.update_default_value(value)
def update_default_value(self, value, log_invalid_types=True):
self._default_log_invalid_types = log_invalid_types
self.child_obj.update_default_value(value, log_invalid_types)
def update_project_value(self, value):
self.child_obj.update_project_value(value)
def update_project_value(self, value, log_invalid_types=True):
self._project_log_invalid_types = log_invalid_types
self.child_obj.update_project_value(value, log_invalid_types)
def update_studio_value(self, value):
self.child_obj.update_studio_value(value)
def update_studio_value(self, value, log_invalid_types=True):
self._studio_log_invalid_types = log_invalid_types
self.child_obj.update_studio_value(value, log_invalid_types)
def _discard_changes(self, *args, **kwargs):
self.child_obj.discard_changes(*args, **kwargs)
@@ -472,9 +475,9 @@ class ListStrictEntity(ItemEntity):
self._has_project_override = False
def _check_update_value(self, value, value_type):
def _check_update_value(self, value, value_type, log_invalid_types=True):
value = super(ListStrictEntity, self)._check_update_value(
value, value_type
value, value_type, log_invalid_types
)
if value is NOT_SET:
return value
@@ -484,15 +487,16 @@ class ListStrictEntity(ItemEntity):
if value_len == child_len:
return value
self.log.warning(
(
"{} Amount of strict list items in {} values is"
" not same as expected. Expected {} items. Got {} items. {}"
).format(
self.path, value_type,
child_len, value_len, str(value)
if log_invalid_types:
self.log.warning(
(
"{} Amount of strict list items in {} values is not the same"
" as expected. Expected {} items. Got {} items. {}"
).format(
self.path, value_type,
child_len, value_len, str(value)
)
)
)
if value_len < child_len:
# Fill missing values with NOT_SET
@@ -504,36 +508,51 @@ class ListStrictEntity(ItemEntity):
value.pop(child_len)
return value
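The pad/trim behavior of `ListStrictEntity._check_update_value` above (fill missing slots with the sentinel, drop extras from the end, warn only when allowed) can be illustrated standalone; `NOT_SET` here is a local stand-in for the settings sentinel and the function name is illustrative:

```python
NOT_SET = object()  # local stand-in for the settings NOT_SET sentinel


def normalize_strict_list(value, expected_len, log_invalid_types=True):
    """Force 'value' to exactly 'expected_len' items.

    Short lists are padded with NOT_SET, long lists are trimmed from
    the end; the mismatch warning is emitted only when requested.
    """
    value = list(value)
    if len(value) != expected_len and log_invalid_types:
        print("Expected {} items, got {}".format(expected_len, len(value)))
    while len(value) < expected_len:
        # Fill missing values with NOT_SET
        value.append(NOT_SET)
    while len(value) > expected_len:
        # Pop any extra items past the expected length
        value.pop(expected_len)
    return value
```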
def update_default_value(self, value):
value = self._check_update_value(value, "default")
def update_default_value(self, value, log_invalid_types=True):
self._default_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "default", log_invalid_types
)
self.has_default_value = value is not NOT_SET
if value is NOT_SET:
for child_obj in self.children:
child_obj.update_default_value(value)
child_obj.update_default_value(value, log_invalid_types)
else:
for idx, item_value in enumerate(value):
self.children[idx].update_default_value(item_value)
self.children[idx].update_default_value(
item_value, log_invalid_types
)
def update_studio_value(self, value):
value = self._check_update_value(value, "studio override")
def update_studio_value(self, value, log_invalid_types=True):
self._studio_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "studio override", log_invalid_types
)
if value is NOT_SET:
for child_obj in self.children:
child_obj.update_studio_value(value)
child_obj.update_studio_value(value, log_invalid_types)
else:
for idx, item_value in enumerate(value):
self.children[idx].update_studio_value(item_value)
self.children[idx].update_studio_value(
item_value, log_invalid_types
)
def update_project_value(self, value):
value = self._check_update_value(value, "project override")
def update_project_value(self, value, log_invalid_types=True):
self._project_log_invalid_types = log_invalid_types
value = self._check_update_value(
value, "project override", log_invalid_types
)
if value is NOT_SET:
for child_obj in self.children:
child_obj.update_project_value(value)
child_obj.update_project_value(value, log_invalid_types)
else:
for idx, item_value in enumerate(value):
self.children[idx].update_project_value(item_value)
self.children[idx].update_project_value(
item_value, log_invalid_types
)
def reset_callbacks(self):
super(ListStrictEntity, self).reset_callbacks()


@@ -325,16 +325,24 @@ class ListEntity(EndpointEntity):
for item in value:
child_obj = self._add_new_item()
child_obj.update_default_value(item)
child_obj.update_default_value(
item, self._default_log_invalid_types
)
if self._override_state is OverrideState.PROJECT:
if self.had_project_override:
child_obj.update_project_value(item)
child_obj.update_project_value(
item, self._project_log_invalid_types
)
elif self.had_studio_override:
child_obj.update_studio_value(item)
child_obj.update_studio_value(
item, self._studio_log_invalid_types
)
elif self._override_state is OverrideState.STUDIO:
if self.had_studio_override:
child_obj.update_studio_value(item)
child_obj.update_studio_value(
item, self._studio_log_invalid_types
)
for child_obj in self.children:
child_obj.set_override_state(
@@ -466,16 +474,24 @@ class ListEntity(EndpointEntity):
for item in value:
child_obj = self._add_new_item()
child_obj.update_default_value(item)
child_obj.update_default_value(
item, self._default_log_invalid_types
)
if self._override_state is OverrideState.PROJECT:
if self.had_project_override:
child_obj.update_project_value(item)
child_obj.update_project_value(
item, self._project_log_invalid_types
)
elif self.had_studio_override:
child_obj.update_studio_value(item)
child_obj.update_studio_value(
item, self._studio_log_invalid_types
)
elif self._override_state is OverrideState.STUDIO:
if self.had_studio_override:
child_obj.update_studio_value(item)
child_obj.update_studio_value(
item, self._studio_log_invalid_types
)
child_obj.set_override_state(
self._override_state, self._ignore_missing_defaults


@@ -75,6 +75,15 @@
"type": "list",
"object_type": "text"
},
{
"type": "number",
"key": "review_upload_limit",
"label": "Upload review maximum file size (MB)",
"decimal": 2,
"default": 50,
"minimum": 0,
"maximum": 1000000
},
{
"type": "separator"
},
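A number setting like `review_upload_limit` is typically consumed by comparing a file's size against the configured megabyte cap before uploading. A minimal sketch follows; the function name and the MB-to-bytes convention are assumptions for illustration, not the Slack module's actual code:

```python
import os


def file_within_limit(path, limit_mb):
    """Return True if the file at 'path' fits under 'limit_mb' megabytes.

    A falsy limit (0 or None) is treated here as "no limit"; that is an
    assumption, not necessarily how the Slack module interprets it.
    """
    if not limit_mb:
        return True
    return os.path.getsize(path) <= limit_mb * 1024 * 1024
```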


@@ -24,6 +24,9 @@
},
{
"sequence": "Output as image sequence"
},
{
"no-audio": "Do not add audio"
}
]
}


@@ -4,11 +4,12 @@ from Qt import QtWidgets, QtCore
import qtawesome
from bson.objectid import ObjectId
from avalon import io, pipeline
from openpype.pipeline import (
from avalon import io
from openpype.pipeline.load import (
discover_loader_plugins,
switch_container,
get_repres_contexts,
loaders_from_repre_context,
)
from .widgets import (
@@ -370,7 +371,7 @@ class SwitchAssetDialog(QtWidgets.QDialog):
loaders = None
for repre_context in repre_contexts.values():
_loaders = set(pipeline.loaders_from_repre_context(
_loaders = set(loaders_from_repre_context(
available_loaders, repre_context
))
if loaders is None:


@@ -92,7 +92,8 @@ class CollapsibleWrapper(WrapperWidget):
self.content_layout = content_layout
if self.collapsible:
body_widget.toggle_content(self.collapsed)
if not self.collapsed:
body_widget.toggle_content()
else:
body_widget.hide_toolbox(hide_content=False)


@@ -287,9 +287,5 @@ class PrettyTimeDelegate(QtWidgets.QStyledItemDelegate):
"""
def displayText(self, value, locale):
if value is None:
# Ignore None value
return
return pretty_timestamp(value)
if value is not None:
return pretty_timestamp(value)
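The `displayText` refactor above is behavior-preserving: both shapes return None for a missing value and the formatted string otherwise. A minimal Qt-free sketch with a stand-in formatter:

```python
def pretty(value):
    # stand-in for pretty_timestamp()
    return "ts:{}".format(value)


def display_old(value):
    # original shape: early return on None
    if value is None:
        # Ignore None value
        return
    return pretty(value)


def display_new(value):
    # refactored shape: format only when a value exists,
    # fall through to an implicit None otherwise
    if value is not None:
        return pretty(value)
```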


@@ -1,9 +1,12 @@
from .window import Window
from .app import (
show,
Window
validate_host_requirements,
)
__all__ = [
"Window",
"show",
"Window"
"validate_host_requirements",
]
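One caution when reshuffling `__all__` lists like the one above (not a bug in this diff, just a pitfall the pattern invites): Python concatenates adjacent string literals, so a dropped comma silently merges two exported names into one bogus entry instead of raising an error:

```python
# Intended: three names. The missing comma after "Window" merges it
# with the next literal into a single bogus entry.
broken = [
    "show",
    "Window"
    "validate_host_requirements",
]

# With the comma in place, all three names survive.
correct = [
    "show",
    "Window",
    "validate_host_requirements",
]
```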

File diff suppressed because it is too large


@@ -0,0 +1,583 @@
import os
import logging
import shutil
import Qt
from Qt import QtWidgets, QtCore
from avalon import io, api
from openpype.tools.utils import PlaceholderLineEdit
from openpype.tools.utils.delegates import PrettyTimeDelegate
from openpype.lib import (
emit_event,
Anatomy,
get_workfile_template_key,
create_workdir_extra_folders,
)
from openpype.lib.avalon_context import (
update_current_task,
compute_session_changes
)
from .model import (
WorkAreaFilesModel,
PublishFilesModel,
FILEPATH_ROLE,
DATE_MODIFIED_ROLE,
)
from .save_as_dialog import SaveAsDialog
from .lib import TempPublishFiles
log = logging.getLogger(__name__)
class FilesView(QtWidgets.QTreeView):
doubleClickedLeft = QtCore.Signal()
doubleClickedRight = QtCore.Signal()
def mouseDoubleClickEvent(self, event):
if event.button() == QtCore.Qt.LeftButton:
self.doubleClickedLeft.emit()
elif event.button() == QtCore.Qt.RightButton:
self.doubleClickedRight.emit()
return super(FilesView, self).mouseDoubleClickEvent(event)
class FilesWidget(QtWidgets.QWidget):
"""A widget displaying files that allows saving and opening files."""
file_selected = QtCore.Signal(str)
file_opened = QtCore.Signal()
publish_file_viewed = QtCore.Signal()
workfile_created = QtCore.Signal(str)
published_visible_changed = QtCore.Signal(bool)
def __init__(self, parent):
super(FilesWidget, self).__init__(parent)
# Setup
self._asset_id = None
self._asset_doc = None
self._task_name = None
self._task_type = None
# Pype's anatomy object for current project
self.anatomy = Anatomy(io.Session["AVALON_PROJECT"])
# Template key used to get work template from anatomy templates
self.template_key = "work"
# This is not root but workfile directory
self._workfiles_root = None
self._workdir_path = None
self.host = api.registered_host()
temp_publish_files = TempPublishFiles()
temp_publish_files.cleanup()
self._temp_publish_files = temp_publish_files
# Whether to automatically select the latest modified
# file on a refresh of the files model.
self.auto_select_latest_modified = True
# Avoid crash in Blender and store the message box
# (setting parent doesn't work as it hides the message box)
self._messagebox = None
# Filtering input
filter_widget = QtWidgets.QWidget(self)
published_checkbox = QtWidgets.QCheckBox("Published", filter_widget)
filter_input = PlaceholderLineEdit(filter_widget)
filter_input.setPlaceholderText("Filter files...")
filter_layout = QtWidgets.QHBoxLayout(filter_widget)
filter_layout.setContentsMargins(0, 0, 0, 0)
filter_layout.addWidget(published_checkbox, 0)
filter_layout.addWidget(filter_input, 1)
# Create the Files models
extensions = set(self.host.file_extensions())
views_widget = QtWidgets.QWidget(self)
# Workarea view
workarea_files_model = WorkAreaFilesModel(extensions)
# Create proxy model for files to be able to sort and filter
workarea_proxy_model = QtCore.QSortFilterProxyModel()
workarea_proxy_model.setSourceModel(workarea_files_model)
workarea_proxy_model.setDynamicSortFilter(True)
workarea_proxy_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive)
# Set up the file list tree view
workarea_files_view = FilesView(views_widget)
workarea_files_view.setModel(workarea_proxy_model)
workarea_files_view.setSortingEnabled(True)
workarea_files_view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
# Date modified delegate
workarea_time_delegate = PrettyTimeDelegate()
workarea_files_view.setItemDelegateForColumn(1, workarea_time_delegate)
workarea_files_view.setIndentation(3) # smaller indentation
# Default to a wider first filename column; it is what we mostly
# care about and the date modified is relatively small anyway.
workarea_files_view.setColumnWidth(0, 330)
# Publish files view
publish_files_model = PublishFilesModel(extensions, io, self.anatomy)
publish_proxy_model = QtCore.QSortFilterProxyModel()
publish_proxy_model.setSourceModel(publish_files_model)
publish_proxy_model.setDynamicSortFilter(True)
publish_proxy_model.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive)
publish_files_view = FilesView(views_widget)
publish_files_view.setModel(publish_proxy_model)
publish_files_view.setSortingEnabled(True)
publish_files_view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
# Date modified delegate
publish_time_delegate = PrettyTimeDelegate()
publish_files_view.setItemDelegateForColumn(1, publish_time_delegate)
publish_files_view.setIndentation(3) # smaller indentation
# Default to a wider first filename column; it is what we mostly
# care about and the date modified is relatively small anyway.
publish_files_view.setColumnWidth(0, 330)
views_layout = QtWidgets.QHBoxLayout(views_widget)
views_layout.setContentsMargins(0, 0, 0, 0)
views_layout.addWidget(workarea_files_view, 1)
views_layout.addWidget(publish_files_view, 1)
# Home Page
# Build buttons widget for files widget
btns_widget = QtWidgets.QWidget(self)
btn_save = QtWidgets.QPushButton("Save As", btns_widget)
btn_browse = QtWidgets.QPushButton("Browse", btns_widget)
btn_open = QtWidgets.QPushButton("Open", btns_widget)
btn_view_published = QtWidgets.QPushButton("View", btns_widget)
btns_layout = QtWidgets.QHBoxLayout(btns_widget)
btns_layout.setContentsMargins(0, 0, 0, 0)
btns_layout.addWidget(btn_open, 1)
btns_layout.addWidget(btn_browse, 1)
btns_layout.addWidget(btn_save, 1)
btns_layout.addWidget(btn_view_published, 1)
# Build files widgets for home page
main_layout = QtWidgets.QVBoxLayout(self)
main_layout.setContentsMargins(0, 0, 0, 0)
main_layout.addWidget(filter_widget, 0)
main_layout.addWidget(views_widget, 1)
main_layout.addWidget(btns_widget, 0)
# Register signal callbacks
published_checkbox.stateChanged.connect(self._on_published_change)
filter_input.textChanged.connect(self._on_filter_text_change)
workarea_files_view.doubleClickedLeft.connect(
self._on_workarea_open_pressed
)
workarea_files_view.customContextMenuRequested.connect(
self._on_workarea_context_menu
)
workarea_files_view.selectionModel().selectionChanged.connect(
self.on_file_select
)
publish_files_view.doubleClickedLeft.connect(
self._on_view_published_pressed
)
btn_open.pressed.connect(self._on_workarea_open_pressed)
btn_browse.pressed.connect(self.on_browse_pressed)
btn_save.pressed.connect(self.on_save_as_pressed)
btn_view_published.pressed.connect(self._on_view_published_pressed)
# Store attributes
self._published_checkbox = published_checkbox
self._filter_input = filter_input
self._workarea_time_delegate = workarea_time_delegate
self._workarea_files_view = workarea_files_view
self._workarea_files_model = workarea_files_model
self._workarea_proxy_model = workarea_proxy_model
self._publish_time_delegate = publish_time_delegate
self._publish_files_view = publish_files_view
self._publish_files_model = publish_files_model
self._publish_proxy_model = publish_proxy_model
self._btns_widget = btns_widget
self._btn_open = btn_open
self._btn_browse = btn_browse
self._btn_save = btn_save
self._btn_view_published = btn_view_published
# Create a proxy widget for files widget
self.setFocusProxy(btn_open)
# Hide publish files widgets
publish_files_view.setVisible(False)
btn_view_published.setVisible(False)
@property
def published_enabled(self):
return self._published_checkbox.isChecked()
def _on_published_change(self):
published_enabled = self.published_enabled
self._workarea_files_view.setVisible(not published_enabled)
self._btn_open.setVisible(not published_enabled)
self._btn_browse.setVisible(not published_enabled)
self._btn_save.setVisible(not published_enabled)
self._publish_files_view.setVisible(published_enabled)
self._btn_view_published.setVisible(published_enabled)
self._update_filtering()
self._update_asset_task()
self.published_visible_changed.emit(published_enabled)
self._select_last_modified_file()
def _on_filter_text_change(self):
self._update_filtering()
def _update_filtering(self):
text = self._filter_input.text()
if self.published_enabled:
self._publish_proxy_model.setFilterFixedString(text)
else:
self._workarea_proxy_model.setFilterFixedString(text)
def set_save_enabled(self, enabled):
self._btn_save.setEnabled(enabled)
def set_asset_task(self, asset_id, task_name, task_type):
if asset_id != self._asset_id:
self._asset_doc = None
self._asset_id = asset_id
self._task_name = task_name
self._task_type = task_type
self._update_asset_task()
def _update_asset_task(self):
if self.published_enabled:
self._publish_files_model.set_context(
self._asset_id, self._task_name
)
has_valid_items = self._publish_files_model.has_valid_items()
self._btn_view_published.setEnabled(has_valid_items)
else:
# Define a custom session so we can query the work root
# for a "Work area" that is not our current Session.
# This way we can browse it even before we enter it.
if self._asset_id and self._task_name and self._task_type:
session = self._get_session()
self._workdir_path = session["AVALON_WORKDIR"]
self._workfiles_root = self.host.work_root(session)
self._workarea_files_model.set_root(self._workfiles_root)
else:
self._workarea_files_model.set_root(None)
# Disable/Enable buttons based on available files in model
has_valid_items = self._workarea_files_model.has_valid_items()
self._btn_browse.setEnabled(has_valid_items)
self._btn_open.setEnabled(has_valid_items)
# Manually trigger file selection
if not has_valid_items:
self.on_file_select()
def _get_asset_doc(self):
if self._asset_id is None:
return None
if self._asset_doc is None:
self._asset_doc = io.find_one({"_id": self._asset_id})
return self._asset_doc
def _get_session(self):
"""Return a modified session for the current asset and task"""
session = api.Session.copy()
self.template_key = get_workfile_template_key(
self._task_type,
session["AVALON_APP"],
project_name=session["AVALON_PROJECT"]
)
changes = compute_session_changes(
session,
asset=self._get_asset_doc(),
task=self._task_name,
template_key=self.template_key
)
session.update(changes)
return session
def _enter_session(self):
"""Enter the asset and task session currently selected"""
session = api.Session.copy()
changes = compute_session_changes(
session,
asset=self._get_asset_doc(),
task=self._task_name,
template_key=self.template_key
)
if not changes:
# Return early if we're already in the right Session context
# to avoid any unwanted Task Changed callbacks to be triggered.
return
update_current_task(
asset=self._get_asset_doc(),
task=self._task_name,
template_key=self.template_key
)
def open_file(self, filepath):
host = self.host
if host.has_unsaved_changes():
result = self.save_changes_prompt()
if result is None:
# Cancel operation
return False
# Save first if has changes
if result:
current_file = host.current_file()
if not current_file:
# If the user requested to save the current scene
# we can't actually automatically do so if the current
# file has not been saved with a name yet. So we'll have
# to opt out.
log.error("Can't save scene with no filename. Please "
"first save your work file using 'Save As'.")
return
# Save current scene, continue to open file
host.save_file(current_file)
self._enter_session()
host.open_file(filepath)
self.file_opened.emit()
def save_changes_prompt(self):
self._messagebox = messagebox = QtWidgets.QMessageBox(parent=self)
messagebox.setWindowFlags(messagebox.windowFlags() |
QtCore.Qt.FramelessWindowHint)
messagebox.setIcon(messagebox.Warning)
messagebox.setWindowTitle("Unsaved Changes!")
messagebox.setText(
"There are unsaved changes to the current file."
"\nDo you want to save the changes?"
)
messagebox.setStandardButtons(
messagebox.Yes | messagebox.No | messagebox.Cancel
)
result = messagebox.exec_()
if result == messagebox.Yes:
return True
if result == messagebox.No:
return False
return None
def get_filename(self):
"""Show save dialog to define filename for save or duplicate
Returns:
str: The filename to create.
"""
session = self._get_session()
window = SaveAsDialog(
parent=self,
root=self._workfiles_root,
anatomy=self.anatomy,
template_key=self.template_key,
session=session
)
window.exec_()
return window.get_result()
def on_duplicate_pressed(self):
work_file = self.get_filename()
if not work_file:
return
src = self._get_selected_filepath()
dst = os.path.join(self._workfiles_root, work_file)
shutil.copy(src, dst)
self.workfile_created.emit(dst)
self.refresh()
def _get_selected_filepath(self):
"""Return current filepath selected in view"""
if self.published_enabled:
source_view = self._publish_files_view
else:
source_view = self._workarea_files_view
selection = source_view.selectionModel()
index = selection.currentIndex()
if not index.isValid():
return
return index.data(FILEPATH_ROLE)
def _on_workarea_open_pressed(self):
path = self._get_selected_filepath()
if not path:
log.warning("No file selected to open.")
return
self.open_file(path)
def on_browse_pressed(self):
ext_filter = "Work File (*{0})".format(
" *".join(self.host.file_extensions())
)
kwargs = {
"caption": "Work Files",
"filter": ext_filter
}
if Qt.__binding__ in ("PySide", "PySide2"):
kwargs["dir"] = self._workfiles_root
else:
kwargs["directory"] = self._workfiles_root
work_file = QtWidgets.QFileDialog.getOpenFileName(**kwargs)[0]
if work_file:
self.open_file(work_file)
def on_save_as_pressed(self):
work_filename = self.get_filename()
if not work_filename:
return
# Trigger before save event
emit_event(
"workfile.save.before",
{"filename": work_filename, "workdir_path": self._workdir_path},
source="workfiles.tool"
)
# Make sure workfiles root is updated
# - this triggers 'workio.work_root(...)' which may change value of
# '_workfiles_root'
self.set_asset_task(
self._asset_id, self._task_name, self._task_type
)
# Create workfiles root folder
if not os.path.exists(self._workfiles_root):
log.debug("Initializing Work Directory: %s", self._workfiles_root)
os.makedirs(self._workfiles_root)
# Update session if context has changed
self._enter_session()
# Prepare full path to workfile and save it
filepath = os.path.join(
os.path.normpath(self._workfiles_root), work_filename
)
self.host.save_file(filepath)
# Create extra folders
create_workdir_extra_folders(
self._workdir_path,
api.Session["AVALON_APP"],
self._task_type,
self._task_name,
api.Session["AVALON_PROJECT"]
)
# Trigger after save events
emit_event(
"workfile.save.after",
{"filename": work_filename, "workdir_path": self._workdir_path},
source="workfiles.tool"
)
self.workfile_created.emit(filepath)
# Refresh files model
self.refresh()
def _on_view_published_pressed(self):
filepath = self._get_selected_filepath()
if not filepath or not os.path.exists(filepath):
return
item = self._temp_publish_files.add_file(filepath)
self.host.open_file(item.filepath)
self.publish_file_viewed.emit()
# Change state back to workarea
self._published_checkbox.setChecked(False)
def on_file_select(self):
self.file_selected.emit(self._get_selected_filepath())
def refresh(self):
"""Refresh listed files for current selection in the interface"""
if self.published_enabled:
self._publish_files_model.refresh()
else:
self._workarea_files_model.refresh()
if self.auto_select_latest_modified:
self._select_last_modified_file()
def _on_workarea_context_menu(self, point):
index = self._workarea_files_view.indexAt(point)
if not index.isValid():
return
if not index.flags() & QtCore.Qt.ItemIsEnabled:
return
menu = QtWidgets.QMenu(self)
# Duplicate
action = QtWidgets.QAction("Duplicate", menu)
tip = "Duplicate selected file."
action.setToolTip(tip)
action.setStatusTip(tip)
action.triggered.connect(self.on_duplicate_pressed)
menu.addAction(action)
# Show the context action menu
global_point = self._workarea_files_view.mapToGlobal(point)
action = menu.exec_(global_point)
if not action:
return
def _select_last_modified_file(self):
"""Utility function to select the file with latest date modified"""
if self.published_enabled:
source_view = self._publish_files_view
else:
source_view = self._workarea_files_view
model = source_view.model()
highest_index = None
highest = 0
for row in range(model.rowCount()):
index = model.index(row, 0, parent=QtCore.QModelIndex())
if not index.isValid():
continue
modified = index.data(DATE_MODIFIED_ROLE)
if modified is not None and modified > highest:
highest_index = index
highest = modified
if highest_index:
source_view.setCurrentIndex(highest_index)
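The latest-modified selection above can be sketched in isolation; `index_of_latest` is an illustrative helper, not part of the tool:

```python
def index_of_latest(modified_times):
    # Return the index of the highest modification time, mirroring the
    # loop in _select_last_modified_file; None entries mean "date unknown".
    best_index = None
    best_value = 0
    for index, modified in enumerate(modified_times):
        if modified is not None and modified > best_value:
            best_index = index
            best_value = modified
    return best_index
```

With timestamps `[None, 10.0, 5.0]` this returns index `1`; with no valid dates it returns `None`, which matches the widget leaving the selection unchanged.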


@ -0,0 +1,272 @@
import os
import shutil
import uuid
import time
import json
import logging
import contextlib
import appdirs
class TempPublishFilesItem(object):
"""Object representing copied workfile in app temp folder.
Args:
item_id (str): Id of item used as subfolder.
data (dict): Metadata about temp files.
directory (str): Path to directory where files are copied to.
"""
def __init__(self, item_id, data, directory):
self._id = item_id
self._directory = directory
self._filepath = os.path.join(directory, data["filename"])
@property
def directory(self):
return self._directory
@property
def filepath(self):
return self._filepath
@property
def id(self):
return self._id
@property
def size(self):
if os.path.exists(self.filepath):
s = os.stat(self.filepath)
return s.st_size
return 0
class TempPublishFiles(object):
"""Directory where published workfiles are copied when opened.
Directory is located in appdirs on the machine. The folder contains a file
with metadata about stored files. Each item in metadata has an id, filename
and expiration time. Once the expiration time is lower than the current time
the item is removed from metadata and its files are deleted. Files of items
are stored in a subfolder named by the item's id.
The metadata file can in theory be opened and modified by multiple processes
or threads at the same time. For those cases a simple lock file is created
before a modification begins and removed when the modification ends.
Existence of the lock file means the metadata should not be modified by
any other process at the same time.
Metadata example:
```
{
"96050b4a-8974-4fca-8179-7c446c478d54": {
"created": 1647880725.555,
"expiration": 1647884325.555,
"filename": "cg_pigeon_workfileModeling_v025.ma"
},
...
}
```
## Why is this needed
A combination of issues. Temp files are not automatically removed by the
OS on Windows, so using tempfiles in TEMP would eventually exhaust the
machine's disk space. There are also cases when someone opens multiple files
in a short period of time and wants to remove those files manually, so
keeping track of temporarily copied files in a pre-defined structure is needed.
"""
minute_in_seconds = 60
hour_in_seconds = 60 * minute_in_seconds
day_in_seconds = 24 * hour_in_seconds
def __init__(self):
root_dir = appdirs.user_data_dir(
"published_workfiles_temp", "openpype"
)
if not os.path.exists(root_dir):
os.makedirs(root_dir)
metadata_path = os.path.join(root_dir, "metadata.json")
lock_path = os.path.join(root_dir, "lock.json")
self._root_dir = root_dir
self._metadata_path = metadata_path
self._lock_path = lock_path
self._log = None
@property
def log(self):
if self._log is None:
self._log = logging.getLogger(self.__class__.__name__)
return self._log
@property
def life_time(self):
"""How long a new item is kept in temp, in seconds.
Returns:
int: Lifetime of temp item.
"""
return int(self.hour_in_seconds)
@property
def size(self):
"""File size of existing items."""
size = 0
for item in self.get_items():
size += item.size
return size
def add_file(self, src_path):
"""Add workfile to temp directory.
Creates a new item and copies the source path into its directory.
"""
filename = os.path.basename(src_path)
item_id = str(uuid.uuid4())
dst_dirpath = os.path.join(self._root_dir, item_id)
if not os.path.exists(dst_dirpath):
os.makedirs(dst_dirpath)
dst_path = os.path.join(dst_dirpath, filename)
shutil.copy(src_path, dst_path)
now = time.time()
item_data = {
"filename": filename,
"expiration": now + self.life_time,
"created": now
}
with self._modify_data() as data:
data[item_id] = item_data
return TempPublishFilesItem(item_id, item_data, dst_dirpath)
@contextlib.contextmanager
def _modify_data(self):
"""Create a lock file while the metadata file is being modified."""
start_time = time.time()
timeout = 3
while os.path.exists(self._lock_path):
time.sleep(0.01)
if time.time() - start_time > timeout:
self.log.warning((
"Waited for {} seconds to free lock file. Overriding lock."
).format(timeout))
break
with open(self._lock_path, "w") as stream:
json.dump({"pid": os.getpid()}, stream)
try:
data = self._get_data()
yield data
with open(self._metadata_path, "w") as stream:
json.dump(data, stream)
finally:
os.remove(self._lock_path)
def _get_data(self):
output = {}
if not os.path.exists(self._metadata_path):
return output
try:
with open(self._metadata_path, "r") as stream:
output = json.load(stream)
except Exception:
self.log.warning("Failed to read metadata file.", exc_info=True)
return output
def cleanup(self, check_expiration=True):
"""Cleanup files based on metadata.
Items past their expiration are removed when this is called, or all
files are removed when `check_expiration` is set to False.
Args:
check_expiration (bool): When True only expired items are removed;
when False all items and files are removed.
data = self._get_data()
now = time.time()
remove_ids = set()
all_ids = set()
for item_id, item_data in data.items():
all_ids.add(item_id)
if check_expiration and now < item_data["expiration"]:
continue
remove_ids.add(item_id)
for item_id in remove_ids:
try:
self.remove_id(item_id)
except Exception:
self.log.warning(
"Failed to remove temp publish item \"{}\"".format(
item_id
),
exc_info=True
)
# Remove unknown folders/files
for filename in os.listdir(self._root_dir):
if filename in all_ids:
continue
full_path = os.path.join(self._root_dir, filename)
if full_path in (self._metadata_path, self._lock_path):
continue
try:
shutil.rmtree(full_path)
except Exception:
self.log.warning(
"Couldn't remove arbitrary path \"{}\"".format(full_path),
exc_info=True
)
def clear(self):
self.cleanup(False)
def get_items(self):
"""Return all items from the metadata file.
Returns:
list[TempPublishFilesItem]: Info about each item in metadata.
"""
output = []
data = self._get_data()
for item_id, item_data in data.items():
item_path = os.path.join(self._root_dir, item_id)
output.append(TempPublishFilesItem(item_id, item_data, item_path))
return output
def remove_id(self, item_id):
"""Remove files of item and then remove the item from metadata."""
filepath = os.path.join(self._root_dir, item_id)
if os.path.exists(filepath):
shutil.rmtree(filepath)
with self._modify_data() as data:
data.pop(item_id, None)
def file_size_to_string(file_size):
size = float(file_size)
size_ending_mapping = {
"KB": 1024 ** 1,
"MB": 1024 ** 2,
"GB": 1024 ** 3
}
ending = "B"
for _ending, _size in size_ending_mapping.items():
if file_size < _size:
break
size = file_size / _size
ending = _ending
return "{:.2f} {}".format(size, ending)
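The expiration bookkeeping used by `TempPublishFiles.cleanup` can be sketched standalone; `expired_item_ids` is an illustrative name, not part of the module:

```python
import time


def expired_item_ids(metadata, now=None):
    # Collect ids whose expiration time has passed, mirroring the
    # "now >= expiration" check done during cleanup of temp items.
    if now is None:
        now = time.time()
    return {
        item_id
        for item_id, item_data in metadata.items()
        if now >= item_data["expiration"]
    }
```

For metadata `{"a": {"expiration": 10}, "b": {"expiration": 100}}` at `now=50`, only `"a"` is reported as expired.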


@ -1,153 +1,179 @@
import os
import logging
from Qt import QtCore
from Qt import QtCore, QtGui
import qtawesome
from openpype.style import (
get_default_entity_icon_color,
get_disabled_entity_icon_color,
)
from openpype.tools.utils.models import TreeModel, Item
from openpype.pipeline import get_representation_path
log = logging.getLogger(__name__)
class FilesModel(TreeModel):
"""Model listing files with specified extensions in a root folder"""
Columns = ["filename", "date"]
FILEPATH_ROLE = QtCore.Qt.UserRole + 2
DATE_MODIFIED_ROLE = QtCore.Qt.UserRole + 3
ITEM_ID_ROLE = QtCore.Qt.UserRole + 4
FileNameRole = QtCore.Qt.UserRole + 2
DateModifiedRole = QtCore.Qt.UserRole + 3
FilePathRole = QtCore.Qt.UserRole + 4
IsEnabled = QtCore.Qt.UserRole + 5
def __init__(self, file_extensions, parent=None):
super(FilesModel, self).__init__(parent=parent)
class WorkAreaFilesModel(QtGui.QStandardItemModel):
"""Model looking into one folder for files with given extensions."""
def __init__(self, extensions, *args, **kwargs):
super(WorkAreaFilesModel, self).__init__(*args, **kwargs)
self.setColumnCount(2)
self._root = None
self._file_extensions = file_extensions
self._icons = {
"file": qtawesome.icon(
"fa.file-o",
color=get_default_entity_icon_color()
self._file_extensions = extensions
self._invalid_path_item = None
self._empty_root_item = None
self._file_icon = qtawesome.icon(
"fa.file-o",
color=get_default_entity_icon_color()
)
self._invalid_item_visible = False
self._items_by_filename = {}
def _get_invalid_path_item(self):
if self._invalid_path_item is None:
message = "Work Area does not exist. Use Save As to create it."
item = QtGui.QStandardItem(message)
icon = qtawesome.icon(
"fa.times",
color=get_disabled_entity_icon_color()
)
}
item.setData(icon, QtCore.Qt.DecorationRole)
item.setFlags(QtCore.Qt.NoItemFlags)
item.setColumnCount(self.columnCount())
self._invalid_path_item = item
return self._invalid_path_item
def _get_empty_root_item(self):
if self._empty_root_item is None:
message = "Work Area is empty."
item = QtGui.QStandardItem(message)
icon = qtawesome.icon(
"fa.times",
color=get_disabled_entity_icon_color()
)
item.setData(icon, QtCore.Qt.DecorationRole)
item.setFlags(QtCore.Qt.NoItemFlags)
item.setColumnCount(self.columnCount())
self._empty_root_item = item
return self._empty_root_item
def set_root(self, root):
"""Change directory where to look for file."""
self._root = root
if root and not os.path.exists(root):
log.debug("Work Area does not exist: {}".format(root))
self.refresh()
def _add_empty(self):
item = Item()
item.update({
# Put a display message in 'filename'
"filename": "No files found.",
# Not-selectable
"enabled": False,
"date": None,
"filepath": None
})
self.add_child(item)
def _clear(self):
root_item = self.invisibleRootItem()
rows = root_item.rowCount()
if rows > 0:
if self._invalid_item_visible:
for row in range(rows):
root_item.takeRow(row)
else:
root_item.removeRows(0, rows)
self._items_by_filename = {}
def refresh(self):
self.clear()
self.beginResetModel()
root = self._root
if not root:
self.endResetModel()
return
if not os.path.exists(root):
"""Refresh and update model items."""
root_item = self.invisibleRootItem()
# If path is not set or does not exist then add invalid path item
if not self._root or not os.path.exists(self._root):
self._clear()
# Add Work Area does not exist placeholder
log.debug("Work Area does not exist: %s", root)
message = "Work Area does not exist. Use Save As to create it."
item = Item({
"filename": message,
"date": None,
"filepath": None,
"enabled": False,
"icon": qtawesome.icon(
"fa.times",
color=get_disabled_entity_icon_color()
)
})
self.add_child(item)
self.endResetModel()
item = self._get_invalid_path_item()
root_item.appendRow(item)
self._invalid_item_visible = True
return
extensions = self._file_extensions
# Clear items if previous refresh set '_invalid_item_visible' to True
# - Invalid items are not stored to '_items_by_filename' so they would
# not be removed
if self._invalid_item_visible:
self._clear()
for filename in os.listdir(root):
path = os.path.join(root, filename)
if os.path.isdir(path):
# Check for new items that should be added and items that should be
# removed
new_items = []
items_to_remove = set(self._items_by_filename.keys())
for filename in os.listdir(self._root):
filepath = os.path.join(self._root, filename)
if os.path.isdir(filepath):
continue
ext = os.path.splitext(filename)[1]
if extensions and ext not in extensions:
if ext not in self._file_extensions:
continue
modified = os.path.getmtime(path)
modified = os.path.getmtime(filepath)
item = Item({
"filename": filename,
"date": modified,
"filepath": path
})
# Use existing item or create new one
if filename in items_to_remove:
items_to_remove.remove(filename)
item = self._items_by_filename[filename]
else:
item = QtGui.QStandardItem(filename)
item.setColumnCount(self.columnCount())
item.setFlags(
QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable
)
item.setData(self._file_icon, QtCore.Qt.DecorationRole)
new_items.append(item)
self._items_by_filename[filename] = item
# Update data that may be different
item.setData(filepath, FILEPATH_ROLE)
item.setData(modified, DATE_MODIFIED_ROLE)
self.add_child(item)
# Add new items if there are any
if new_items:
root_item.appendRows(new_items)
if self.rowCount() == 0:
self._add_empty()
# Remove items that are no longer available
for filename in items_to_remove:
item = self._items_by_filename.pop(filename)
root_item.removeRow(item.row())
self.endResetModel()
def has_filenames(self):
for item in self._root_item.children():
if item.get("enabled", True):
return True
return False
def rowCount(self, parent=None):
if parent is None or not parent.isValid():
parent_item = self._root_item
# Add empty root item if there are not filenames that could be shown
if root_item.rowCount() > 0:
self._invalid_item_visible = False
else:
parent_item = parent.internalPointer()
return parent_item.childCount()
self._invalid_item_visible = True
item = self._get_empty_root_item()
root_item.appendRow(item)
def data(self, index, role):
if not index.isValid():
return
def has_valid_items(self):
"""Return True when the directory contains files listed as items."""
return not self._invalid_item_visible
if role == QtCore.Qt.DecorationRole:
# Add icon to filename column
item = index.internalPointer()
if index.column() == 0:
if item["filepath"]:
return self._icons["file"]
return item.get("icon", None)
def flags(self, index):
# Use flags of first column for all columns
if index.column() != 0:
index = self.index(index.row(), 0, index.parent())
return super(WorkAreaFilesModel, self).flags(index)
if role == self.FileNameRole:
item = index.internalPointer()
return item["filename"]
def data(self, index, role=None):
if role is None:
role = QtCore.Qt.DisplayRole
if role == self.DateModifiedRole:
item = index.internalPointer()
return item["date"]
# Handle roles for first column
if index.column() == 1:
if role == QtCore.Qt.DecorationRole:
return None
if role == self.FilePathRole:
item = index.internalPointer()
return item["filepath"]
if role in (QtCore.Qt.DisplayRole, QtCore.Qt.EditRole):
role = DATE_MODIFIED_ROLE
index = self.index(index.row(), 0, index.parent())
if role == self.IsEnabled:
item = index.internalPointer()
return item.get("enabled", True)
return super(FilesModel, self).data(index, role)
return super(WorkAreaFilesModel, self).data(index, role)
def headerData(self, section, orientation, role):
# Show nice labels in the header
@ -160,4 +186,274 @@ class FilesModel(TreeModel):
elif section == 1:
return "Date modified"
return super(FilesModel, self).headerData(section, orientation, role)
return super(WorkAreaFilesModel, self).headerData(
section, orientation, role
)
class PublishFilesModel(QtGui.QStandardItemModel):
"""Model listing published workfiles calculated from representations.
This model looks for workfile family representations based on the selected
asset and task.
The asset must be set to be able to look for usable representations.
The task is used to filter representations by task name.
The model applies a few filter criteria:
- the version document must have "workfile" in "data.families",
- the representation must have one of the defined extensions,
- if a task is set, the representation's 'task["name"]' must match it.
"""
def __init__(self, extensions, dbcon, anatomy, *args, **kwargs):
super(PublishFilesModel, self).__init__(*args, **kwargs)
self.setColumnCount(2)
self._dbcon = dbcon
self._anatomy = anatomy
self._file_extensions = extensions
self._invalid_context_item = None
self._empty_root_item = None
self._file_icon = qtawesome.icon(
"fa.file-o",
color=get_default_entity_icon_color()
)
self._invalid_icon = qtawesome.icon(
"fa.times",
color=get_disabled_entity_icon_color()
)
self._invalid_item_visible = False
self._items_by_id = {}
self._asset_id = None
self._task_name = None
def _set_item_invalid(self, item):
item.setFlags(QtCore.Qt.NoItemFlags)
item.setData(self._invalid_icon, QtCore.Qt.DecorationRole)
def _set_item_valid(self, item):
item.setFlags(
QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable
)
item.setData(self._file_icon, QtCore.Qt.DecorationRole)
def _get_invalid_context_item(self):
if self._invalid_context_item is None:
item = QtGui.QStandardItem("Selected context is not valid.")
item.setColumnCount(self.columnCount())
self._set_item_invalid(item)
self._invalid_context_item = item
return self._invalid_context_item
def _get_empty_root_item(self):
if self._empty_root_item is None:
item = QtGui.QStandardItem("Didn't find any published workfiles.")
item.setColumnCount(self.columnCount())
self._set_item_invalid(item)
self._empty_root_item = item
return self._empty_root_item
def set_context(self, asset_id, task_name):
"""Change context to asset and task.
Args:
asset_id (ObjectId): Id of selected asset.
task_name (str): Name of selected task.
"""
self._asset_id = asset_id
self._task_name = task_name
self.refresh()
def _clear(self):
root_item = self.invisibleRootItem()
rows = root_item.rowCount()
if rows > 0:
if self._invalid_item_visible:
for row in range(rows):
root_item.takeRow(row)
else:
root_item.removeRows(0, rows)
self._items_by_id = {}
def _get_workfie_representations(self):
output = []
# Get subset docs of asset
subset_docs = self._dbcon.find(
{
"type": "subset",
"parent": self._asset_id
},
{
"_id": True,
"name": True
}
)
subset_ids = [subset_doc["_id"] for subset_doc in subset_docs]
if not subset_ids:
return output
# Get version docs of subsets with their families
version_docs = self._dbcon.find(
{
"type": "version",
"parent": {"$in": subset_ids}
},
{
"_id": True,
"data.families": True,
"parent": True
}
)
# Filter versions if they contain 'workfile' family
filtered_versions = []
for version_doc in version_docs:
data = version_doc.get("data") or {}
families = data.get("families") or []
if "workfile" in families:
filtered_versions.append(version_doc)
version_ids = [version_doc["_id"] for version_doc in filtered_versions]
if not version_ids:
return output
# Query representations of filtered versions and add filter for
# extension
extensions = [ext.replace(".", "") for ext in self._file_extensions]
repre_docs = self._dbcon.find(
{
"type": "representation",
"parent": {"$in": version_ids},
"context.ext": {"$in": extensions}
}
)
# Filter queried representations by task name if task is set
filtered_repre_docs = []
for repre_doc in repre_docs:
if self._task_name is None:
filtered_repre_docs.append(repre_doc)
continue
task_info = repre_doc["context"].get("task")
if not task_info:
# Skip representations without task information
continue
if isinstance(task_info, dict):
task_name = task_info.get("name")
else:
task_name = task_info
if task_name == self._task_name:
filtered_repre_docs.append(repre_doc)
# Collect paths of representations
for repre_doc in filtered_repre_docs:
path = get_representation_path(
repre_doc, root=self._anatomy.roots
)
output.append((path, repre_doc["_id"]))
return output
def refresh(self):
root_item = self.invisibleRootItem()
if not self._asset_id:
self._clear()
# Add Work Area does not exist placeholder
item = self._get_invalid_context_item()
root_item.appendRow(item)
self._invalid_item_visible = True
return
if self._invalid_item_visible:
self._clear()
new_items = []
items_to_remove = set(self._items_by_id.keys())
for item in self._get_workfie_representations():
filepath, repre_id = item
# TODO handle empty filepaths
if not filepath:
continue
filename = os.path.basename(filepath)
if repre_id in items_to_remove:
items_to_remove.remove(repre_id)
item = self._items_by_id[repre_id]
else:
item = QtGui.QStandardItem(filename)
item.setColumnCount(self.columnCount())
new_items.append(item)
self._items_by_id[repre_id] = item
if os.path.exists(filepath):
modified = os.path.getmtime(filepath)
tooltip = None
self._set_item_valid(item)
else:
modified = None
tooltip = "File is not available from this machine"
self._set_item_invalid(item)
item.setData(tooltip, QtCore.Qt.ToolTipRole)
item.setData(filepath, FILEPATH_ROLE)
item.setData(modified, DATE_MODIFIED_ROLE)
item.setData(repre_id, ITEM_ID_ROLE)
if new_items:
root_item.appendRows(new_items)
for repre_id in items_to_remove:
item = self._items_by_id.pop(repre_id)
root_item.removeRow(item.row())
if root_item.rowCount() > 0:
self._invalid_item_visible = False
else:
self._invalid_item_visible = True
item = self._get_empty_root_item()
root_item.appendRow(item)
def has_valid_items(self):
return not self._invalid_item_visible
def flags(self, index):
if index.column() != 0:
index = self.index(index.row(), 0, index.parent())
return super(PublishFilesModel, self).flags(index)
def data(self, index, role=None):
if role is None:
role = QtCore.Qt.DisplayRole
if index.column() == 1:
if role == QtCore.Qt.DecorationRole:
return None
if role in (QtCore.Qt.DisplayRole, QtCore.Qt.EditRole):
role = DATE_MODIFIED_ROLE
index = self.index(index.row(), 0, index.parent())
return super(PublishFilesModel, self).data(index, role)
def headerData(self, section, orientation, role):
# Show nice labels in the header
if (
role == QtCore.Qt.DisplayRole
and orientation == QtCore.Qt.Horizontal
):
if section == 0:
return "Name"
elif section == 1:
return "Date modified"
return super(PublishFilesModel, self).headerData(
section, orientation, role
)


@ -0,0 +1,482 @@
import os
import re
import copy
import logging
from Qt import QtWidgets, QtCore
from avalon import api, io
from openpype.lib import (
get_last_workfile_with_version,
get_workdir_data,
)
from openpype.tools.utils import PlaceholderLineEdit
log = logging.getLogger(__name__)
def build_workfile_data(session):
"""Get the data required for workfile formatting from avalon `session`"""
# Set work file data for template formatting
asset_name = session["AVALON_ASSET"]
task_name = session["AVALON_TASK"]
host_name = session["AVALON_APP"]
project_doc = io.find_one(
{"type": "project"},
{
"name": True,
"data.code": True,
"config.tasks": True,
}
)
asset_doc = io.find_one(
{
"type": "asset",
"name": asset_name
},
{
"name": True,
"data.tasks": True,
"data.parents": True
}
)
data = get_workdir_data(project_doc, asset_doc, task_name, host_name)
data.update({
"version": 1,
"comment": "",
"ext": None
})
return data
class CommentMatcher(object):
"""Use anatomy and work file data to parse comments from filenames"""
def __init__(self, anatomy, template_key, data):
self.fname_regex = None
template = anatomy.templates[template_key]["file"]
if "{comment}" not in template:
# Don't look for comment if template doesn't allow it
return
# Create a regex group for extensions
extensions = api.registered_host().file_extensions()
any_extension = "(?:{})".format(
"|".join(re.escape(ext[1:]) for ext in extensions)
)
# Use placeholders that will never be in the filename
temp_data = copy.deepcopy(data)
temp_data["comment"] = "<<comment>>"
temp_data["version"] = "<<version>>"
temp_data["ext"] = "<<ext>>"
formatted = anatomy.format(temp_data)
fname_pattern = formatted[template_key]["file"]
fname_pattern = re.escape(fname_pattern)
# Replace comment and version with something we can match with regex
replacements = {
"<<comment>>": "(.+)",
"<<version>>": "[0-9]+",
"<<ext>>": any_extension,
}
for src, dest in replacements.items():
fname_pattern = fname_pattern.replace(re.escape(src), dest)
# Match from beginning to end of string to be safe
fname_pattern = "^{}$".format(fname_pattern)
self.fname_regex = re.compile(fname_pattern)
def parse_comment(self, filepath):
"""Parse the {comment} part from a filename"""
if not self.fname_regex:
return
fname = os.path.basename(filepath)
match = self.fname_regex.match(fname)
if match:
return match.group(1)
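The placeholder-to-regex substitution used by `CommentMatcher` can be sketched standalone; the template string and extension list below are illustrative values, not taken from any project anatomy:

```python
import re


def build_comment_regex(formatted_name, extensions):
    # formatted_name is a filename rendered from the template with the
    # <<comment>>, <<version>> and <<ext>> placeholders still in place.
    any_extension = "(?:{})".format(
        "|".join(re.escape(ext.lstrip(".")) for ext in extensions)
    )
    pattern = re.escape(formatted_name)
    # Replace the escaped placeholders with matchable regex groups
    replacements = {
        "<<comment>>": "(.+)",
        "<<version>>": "[0-9]+",
        "<<ext>>": any_extension,
    }
    for src, dest in replacements.items():
        pattern = pattern.replace(re.escape(src), dest)
    # Anchor from beginning to end of the string to be safe
    return re.compile("^{}$".format(pattern))
```

For a template rendered as `"shot_v<<version>>_<<comment>>.<<ext>>"` with extensions `[".ma", ".mb"]`, matching `"shot_v025_retime.ma"` yields the comment `"retime"`. Escaping the placeholders with `re.escape` before replacement keeps the substitution consistent across Python versions where `re.escape` handles punctuation differently.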
class SubversionLineEdit(QtWidgets.QWidget):
"""QLineEdit with a QPushButton for drop down selection from a list of strings."""
text_changed = QtCore.Signal(str)
def __init__(self, *args, **kwargs):
super(SubversionLineEdit, self).__init__(*args, **kwargs)
input_field = PlaceholderLineEdit(self)
menu_btn = QtWidgets.QPushButton(self)
menu_btn.setFixedWidth(18)
menu = QtWidgets.QMenu(self)
menu_btn.setMenu(menu)
layout = QtWidgets.QHBoxLayout(self)
layout.setContentsMargins(0, 0, 0, 0)
layout.setSpacing(3)
layout.addWidget(input_field, 1)
layout.addWidget(menu_btn, 0)
input_field.textChanged.connect(self.text_changed)
self.setFocusProxy(input_field)
self._input_field = input_field
self._menu_btn = menu_btn
self._menu = menu
def set_placeholder(self, placeholder):
self._input_field.setPlaceholderText(placeholder)
def set_text(self, text):
self._input_field.setText(text)
def set_values(self, values):
self._update(values)
def _on_button_clicked(self):
self._menu.exec_()
def _on_action_clicked(self, action):
self._input_field.setText(action.text())
def _update(self, values):
"""Create optional predefined subset names
Args:
default_names(list): all predefined names
Returns:
None
"""
menu = self._menu
button = self._menu_btn
state = any(values)
button.setEnabled(state)
if state is False:
return
# Include an empty string
values = [""] + sorted(values)
# Get and destroy the action group
group = button.findChild(QtWidgets.QActionGroup)
if group:
group.deleteLater()
# Build new action group
group = QtWidgets.QActionGroup(button)
for name in values:
action = group.addAction(name)
menu.addAction(action)
group.triggered.connect(self._on_action_clicked)
class SaveAsDialog(QtWidgets.QDialog):
"""Name Window to define a unique filename inside a root folder
The filename will be based on the "workfile" template defined in the
project["config"]["template"].
"""
def __init__(self, parent, root, anatomy, template_key, session=None):
super(SaveAsDialog, self).__init__(parent=parent)
self.setWindowFlags(self.windowFlags() | QtCore.Qt.FramelessWindowHint)
self.result = None
self.host = api.registered_host()
self.root = root
self.work_file = None
if not session:
# Fallback to active session
session = api.Session
self.data = build_workfile_data(session)
# Store project anatomy
self.anatomy = anatomy
self.template = anatomy.templates[template_key]["file"]
self.template_key = template_key
# Btns widget
btns_widget = QtWidgets.QWidget(self)
btn_ok = QtWidgets.QPushButton("Ok", btns_widget)
btn_cancel = QtWidgets.QPushButton("Cancel", btns_widget)
btns_layout = QtWidgets.QHBoxLayout(btns_widget)
btns_layout.addWidget(btn_ok)
btns_layout.addWidget(btn_cancel)
# Inputs widget
inputs_widget = QtWidgets.QWidget(self)
# Version widget
version_widget = QtWidgets.QWidget(inputs_widget)
# Version number input
version_input = QtWidgets.QSpinBox(version_widget)
version_input.setMinimum(1)
version_input.setMaximum(9999)
# Last version checkbox
last_version_check = QtWidgets.QCheckBox(
"Next Available Version", version_widget
)
last_version_check.setChecked(True)
version_layout = QtWidgets.QHBoxLayout(version_widget)
version_layout.setContentsMargins(0, 0, 0, 0)
version_layout.addWidget(version_input)
version_layout.addWidget(last_version_check)
# Preview widget
preview_label = QtWidgets.QLabel("Preview filename", inputs_widget)
# Subversion input
subversion = SubversionLineEdit(inputs_widget)
subversion.set_placeholder("Will be part of filename.")
# Extensions combobox
ext_combo = QtWidgets.QComboBox(inputs_widget)
# Add styled delegate to use stylesheets
ext_delegate = QtWidgets.QStyledItemDelegate()
ext_combo.setItemDelegate(ext_delegate)
ext_combo.addItems(self.host.file_extensions())
# Build inputs
inputs_layout = QtWidgets.QFormLayout(inputs_widget)
# Add version only if template contains version key
# - since the version can be padded with "{version:0>4}" we only search
# for "{version".
if "{version" in self.template:
inputs_layout.addRow("Version:", version_widget)
else:
version_widget.setVisible(False)
# Add subversion only if template contains `{comment}`
if "{comment}" in self.template:
inputs_layout.addRow("Subversion:", subversion)
# Detect whether a {comment} is in the current filename - if so,
# preserve it by default and set it in the comment/subversion field
current_filepath = self.host.current_file()
if current_filepath:
# We match the current filename against the current session
# instead of the session where the user is saving to.
current_data = build_workfile_data(api.Session)
matcher = CommentMatcher(anatomy, template_key, current_data)
comment = matcher.parse_comment(current_filepath)
if comment:
log.info("Detected subversion comment: {}".format(comment))
self.data["comment"] = comment
subversion.set_text(comment)
existing_comments = self.get_existing_comments()
subversion.set_values(existing_comments)
else:
subversion.setVisible(False)
inputs_layout.addRow("Extension:", ext_combo)
inputs_layout.addRow("Preview:", preview_label)
# Build layout
main_layout = QtWidgets.QVBoxLayout(self)
main_layout.addWidget(inputs_widget)
main_layout.addWidget(btns_widget)
# Signal callback registration
version_input.valueChanged.connect(self.on_version_spinbox_changed)
last_version_check.stateChanged.connect(
self.on_version_checkbox_changed
)
subversion.text_changed.connect(self.on_comment_changed)
ext_combo.currentIndexChanged.connect(self.on_extension_changed)
btn_ok.pressed.connect(self.on_ok_pressed)
btn_cancel.pressed.connect(self.on_cancel_pressed)
# Allow "Enter" key to accept the save.
btn_ok.setDefault(True)
# Force default focus to comment, some hosts didn't automatically
# apply focus to this line edit (e.g. Houdini)
subversion.setFocus()
# Store widgets
self.btn_ok = btn_ok
self.version_widget = version_widget
self.version_input = version_input
self.last_version_check = last_version_check
self.preview_label = preview_label
self.subversion = subversion
self.ext_combo = ext_combo
self._ext_delegate = ext_delegate
self.refresh()
def get_existing_comments(self):
matcher = CommentMatcher(self.anatomy, self.template_key, self.data)
host_extensions = set(self.host.file_extensions())
comments = set()
if os.path.isdir(self.root):
for fname in os.listdir(self.root):
if not os.path.isfile(os.path.join(self.root, fname)):
continue
ext = os.path.splitext(fname)[-1]
if ext not in host_extensions:
continue
comment = matcher.parse_comment(fname)
if comment:
comments.add(comment)
return list(comments)
def on_version_spinbox_changed(self, value):
self.data["version"] = value
self.refresh()
def on_version_checkbox_changed(self, _value):
self.refresh()
def on_comment_changed(self, text):
self.data["comment"] = text
self.refresh()
def on_extension_changed(self):
ext = self.ext_combo.currentText()
if ext == self.data["ext"]:
return
self.data["ext"] = ext
self.refresh()
def on_ok_pressed(self):
self.result = self.work_file
self.close()
def on_cancel_pressed(self):
self.close()
def get_result(self):
return self.result
def get_work_file(self):
data = copy.deepcopy(self.data)
if not data["comment"]:
data.pop("comment", None)
data["ext"] = data["ext"][1:]
anatomy_filled = self.anatomy.format(data)
return anatomy_filled[self.template_key]["file"]
def refresh(self):
extensions = self.host.file_extensions()
extension = self.data["ext"]
if extension is None:
# Define saving file extension
current_file = self.host.current_file()
if current_file:
# Match the extension of current file
_, extension = os.path.splitext(current_file)
else:
extension = extensions[0]
if extension != self.data["ext"]:
self.data["ext"] = extension
index = self.ext_combo.findText(
extension, QtCore.Qt.MatchFixedString
)
if index >= 0:
self.ext_combo.setCurrentIndex(index)
if not self.last_version_check.isChecked():
self.version_input.setEnabled(True)
self.data["version"] = self.version_input.value()
work_file = self.get_work_file()
else:
self.version_input.setEnabled(False)
data = copy.deepcopy(self.data)
template = str(self.template)
if not data["comment"]:
data.pop("comment", None)
data["ext"] = data["ext"][1:]
version = get_last_workfile_with_version(
self.root, template, data, extensions
)[1]
if version is None:
version = 1
else:
version += 1
found_valid_version = False
# Check whether the next version is actually free and, if not,
# try up to 100 following versions
for idx in range(100):
# Store version to data
self.data["version"] = version
work_file = self.get_work_file()
# Safety check
path = os.path.join(self.root, work_file)
if not os.path.exists(path):
found_valid_version = True
break
# Try next version
version += 1
# Log warning
if idx == 0:
log.warning((
"BUG: Function `get_last_workfile_with_version` "
"didn't return last version."
))
# Raise exception if even the 100-version fallback didn't help
if not found_valid_version:
raise AssertionError(
"This is a bug. Couldn't find valid version!"
)
self.work_file = work_file
path_exists = os.path.exists(os.path.join(self.root, work_file))
self.btn_ok.setEnabled(not path_exists)
if path_exists:
self.preview_label.setText(
"<font color='red'>Cannot create \"{0}\" because file exists!"
"</font>".format(work_file)
)
else:
self.preview_label.setText(
"<font color='green'>{0}</font>".format(work_file)
)


@ -1,15 +0,0 @@
from Qt import QtWidgets, QtCore
class FilesView(QtWidgets.QTreeView):
doubleClickedLeft = QtCore.Signal()
doubleClickedRight = QtCore.Signal()
def mouseDoubleClickEvent(self, event):
if event.button() == QtCore.Qt.LeftButton:
self.doubleClickedLeft.emit()
elif event.button() == QtCore.Qt.RightButton:
self.doubleClickedRight.emit()
return super(FilesView, self).mouseDoubleClickEvent(event)


@ -0,0 +1,393 @@
import os
import datetime
from Qt import QtCore, QtWidgets
from avalon import io
from openpype import style
from openpype.lib import (
get_workfile_doc,
create_workfile_doc,
save_workfile_data_to_doc,
)
from openpype.tools.utils.assets_widget import SingleSelectAssetsWidget
from openpype.tools.utils.tasks_widget import TasksWidget
from .files_widget import FilesWidget
from .lib import TempPublishFiles, file_size_to_string
class SidePanelWidget(QtWidgets.QWidget):
save_clicked = QtCore.Signal()
published_workfile_message = (
"<b>INFO</b>: Opened published workfiles will be stored in"
" temp directory on your machine. Current temp size: <b>{}</b>."
)
def __init__(self, parent=None):
super(SidePanelWidget, self).__init__(parent)
details_label = QtWidgets.QLabel("Details", self)
details_input = QtWidgets.QPlainTextEdit(self)
details_input.setReadOnly(True)
artist_note_widget = QtWidgets.QWidget(self)
note_label = QtWidgets.QLabel("Artist note", artist_note_widget)
note_input = QtWidgets.QPlainTextEdit(artist_note_widget)
btn_note_save = QtWidgets.QPushButton("Save note", artist_note_widget)
artist_note_layout = QtWidgets.QVBoxLayout(artist_note_widget)
artist_note_layout.setContentsMargins(0, 0, 0, 0)
artist_note_layout.addWidget(note_label, 0)
artist_note_layout.addWidget(note_input, 1)
artist_note_layout.addWidget(
btn_note_save, 0, alignment=QtCore.Qt.AlignRight
)
publish_temp_widget = QtWidgets.QWidget(self)
publish_temp_info_label = QtWidgets.QLabel(
self.published_workfile_message.format(
file_size_to_string(0)
),
publish_temp_widget
)
publish_temp_info_label.setWordWrap(True)
btn_clear_temp = QtWidgets.QPushButton(
"Clear temp", publish_temp_widget
)
publish_temp_layout = QtWidgets.QVBoxLayout(publish_temp_widget)
publish_temp_layout.setContentsMargins(0, 0, 0, 0)
publish_temp_layout.addWidget(publish_temp_info_label, 0)
publish_temp_layout.addWidget(
btn_clear_temp, 0, alignment=QtCore.Qt.AlignRight
)
main_layout = QtWidgets.QVBoxLayout(self)
main_layout.setContentsMargins(0, 0, 0, 0)
main_layout.addWidget(details_label, 0)
main_layout.addWidget(details_input, 1)
main_layout.addWidget(artist_note_widget, 1)
main_layout.addWidget(publish_temp_widget, 0)
note_input.textChanged.connect(self._on_note_change)
btn_note_save.clicked.connect(self._on_save_click)
btn_clear_temp.clicked.connect(self._on_clear_temp_click)
self._details_input = details_input
self._artist_note_widget = artist_note_widget
self._note_input = note_input
self._btn_note_save = btn_note_save
self._publish_temp_info_label = publish_temp_info_label
self._publish_temp_widget = publish_temp_widget
self._orig_note = ""
self._workfile_doc = None
publish_temp_widget.setVisible(False)
def set_published_visible(self, published_visible):
self._artist_note_widget.setVisible(not published_visible)
self._publish_temp_widget.setVisible(published_visible)
if published_visible:
self.refresh_publish_temp_sizes()
def refresh_publish_temp_sizes(self):
temp_publish_files = TempPublishFiles()
text = self.published_workfile_message.format(
file_size_to_string(temp_publish_files.size)
)
self._publish_temp_info_label.setText(text)
def _on_clear_temp_click(self):
temp_publish_files = TempPublishFiles()
temp_publish_files.clear()
self.refresh_publish_temp_sizes()
def _on_note_change(self):
text = self._note_input.toPlainText()
self._btn_note_save.setEnabled(self._orig_note != text)
def _on_save_click(self):
self._orig_note = self._note_input.toPlainText()
self._on_note_change()
self.save_clicked.emit()
def set_context(self, asset_id, task_name, filepath, workfile_doc):
# Check if asset, task and file are selected
# NOTE: the workfile document is not required
enabled = bool(asset_id) and bool(task_name) and bool(filepath)
self._details_input.setEnabled(enabled)
self._note_input.setEnabled(enabled)
self._btn_note_save.setEnabled(enabled)
# Make sure workfile doc is overridden
self._workfile_doc = workfile_doc
# Disable inputs and remove texts if any required arguments are missing
if not enabled:
self._orig_note = ""
self._details_input.setPlainText("")
self._note_input.setPlainText("")
return
orig_note = ""
if workfile_doc:
orig_note = workfile_doc["data"].get("note") or orig_note
self._orig_note = orig_note
self._note_input.setPlainText(orig_note)
# Set as empty string
self._details_input.setPlainText("")
filestat = os.stat(filepath)
size_value = file_size_to_string(filestat.st_size)
# Append html string
datetime_format = "%b %d %Y %H:%M:%S"
creation_time = datetime.datetime.fromtimestamp(filestat.st_ctime)
modification_time = datetime.datetime.fromtimestamp(filestat.st_mtime)
lines = (
"<b>Size:</b>",
size_value,
"<b>Created:</b>",
creation_time.strftime(datetime_format),
"<b>Modified:</b>",
modification_time.strftime(datetime_format)
)
self._details_input.appendHtml("<br>".join(lines))
def get_workfile_data(self):
data = {
"note": self._note_input.toPlainText()
}
return self._workfile_doc, data
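# --- Sketch of the details formatting done in set_context() above: read
# os.stat() of the selected workfile and render size/created/modified lines.
# file_size_to_string is an OpenPype helper imported from .lib; the simplified
# stand-in below only illustrates the idea and is not the real implementation.

```python
import datetime
import os
import tempfile

def file_size_to_string(size):
    """Simplified stand-in for the OpenPype helper of the same name."""
    for unit in ("B", "KB", "MB", "GB"):
        if size < 1024:
            return "{:.1f} {}".format(size, unit)
        size /= 1024.0
    return "{:.1f} TB".format(size)

# Create a throwaway 2 KB file to stat
path = os.path.join(tempfile.mkdtemp(), "scene.ma")
with open(path, "wb") as stream:
    stream.write(b"x" * 2048)

filestat = os.stat(path)
datetime_format = "%b %d %Y %H:%M:%S"
print(file_size_to_string(filestat.st_size))  # 2.0 KB
print(datetime.datetime.fromtimestamp(filestat.st_mtime).strftime(datetime_format))
```

The widget joins the same pieces with `<b>` labels and `<br>` separators before handing them to `QPlainTextEdit.appendHtml`.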
class Window(QtWidgets.QMainWindow):
"""Work Files Window"""
title = "Work Files"
def __init__(self, parent=None):
super(Window, self).__init__(parent=parent)
self.setWindowTitle(self.title)
window_flags = QtCore.Qt.Window | QtCore.Qt.WindowCloseButtonHint
if not parent:
window_flags |= QtCore.Qt.WindowStaysOnTopHint
self.setWindowFlags(window_flags)
# Create pages widget and set it as central widget
pages_widget = QtWidgets.QStackedWidget(self)
self.setCentralWidget(pages_widget)
home_page_widget = QtWidgets.QWidget(pages_widget)
home_body_widget = QtWidgets.QWidget(home_page_widget)
assets_widget = SingleSelectAssetsWidget(io, parent=home_body_widget)
assets_widget.set_current_asset_btn_visibility(True)
tasks_widget = TasksWidget(io, home_body_widget)
files_widget = FilesWidget(home_body_widget)
side_panel = SidePanelWidget(home_body_widget)
pages_widget.addWidget(home_page_widget)
# Build home
home_page_layout = QtWidgets.QVBoxLayout(home_page_widget)
home_page_layout.addWidget(home_body_widget)
# Build home - body
body_layout = QtWidgets.QVBoxLayout(home_body_widget)
split_widget = QtWidgets.QSplitter(home_body_widget)
split_widget.addWidget(assets_widget)
split_widget.addWidget(tasks_widget)
split_widget.addWidget(files_widget)
split_widget.addWidget(side_panel)
split_widget.setSizes([255, 160, 455, 175])
body_layout.addWidget(split_widget)
# Add top margin for tasks to align it visually with files, as
# the files widget has a filter field which the tasks widget lacks.
tasks_widget.setContentsMargins(0, 32, 0, 0)
# Set context after asset widget is refreshed
# - to do so it is necessary to wait until refresh is done
set_context_timer = QtCore.QTimer()
set_context_timer.setInterval(100)
# Connect signals
set_context_timer.timeout.connect(self._on_context_set_timeout)
assets_widget.selection_changed.connect(self._on_asset_changed)
tasks_widget.task_changed.connect(self._on_task_changed)
files_widget.file_selected.connect(self.on_file_select)
files_widget.workfile_created.connect(self.on_workfile_create)
files_widget.file_opened.connect(self._on_file_opened)
files_widget.publish_file_viewed.connect(
self._on_publish_file_viewed
)
files_widget.published_visible_changed.connect(
self._on_published_change
)
side_panel.save_clicked.connect(self.on_side_panel_save)
self._set_context_timer = set_context_timer
self.home_page_widget = home_page_widget
self.pages_widget = pages_widget
self.home_body_widget = home_body_widget
self.split_widget = split_widget
self.assets_widget = assets_widget
self.tasks_widget = tasks_widget
self.files_widget = files_widget
self.side_panel = side_panel
# Force focus on the open button by default, required for Houdini.
files_widget.setFocus()
self.resize(1200, 600)
self._first_show = True
self._context_to_set = None
def showEvent(self, event):
super(Window, self).showEvent(event)
if self._first_show:
self._first_show = False
self.refresh()
self.setStyleSheet(style.load_stylesheet())
def keyPressEvent(self, event):
"""Custom keyPressEvent.
Override keyPressEvent to do nothing so that Maya's panels won't
take focus when pressing "SHIFT" whilst mouse is over viewport or
outliner. This way users don't accidentally perform Maya commands
whilst trying to name an instance.
"""
def set_save_enabled(self, enabled):
self.files_widget.set_save_enabled(enabled)
def on_file_select(self, filepath):
asset_id = self.assets_widget.get_selected_asset_id()
task_name = self.tasks_widget.get_selected_task_name()
workfile_doc = None
if asset_id and task_name and filepath:
filename = os.path.split(filepath)[1]
workfile_doc = get_workfile_doc(
asset_id, task_name, filename, io
)
self.side_panel.set_context(
asset_id, task_name, filepath, workfile_doc
)
def on_workfile_create(self, filepath):
self._create_workfile_doc(filepath)
def _on_file_opened(self):
self.close()
def _on_publish_file_viewed(self):
self.side_panel.refresh_publish_temp_sizes()
def _on_published_change(self, visible):
self.side_panel.set_published_visible(visible)
def on_side_panel_save(self):
workfile_doc, data = self.side_panel.get_workfile_data()
if not workfile_doc:
filepath = self.files_widget._get_selected_filepath()
self._create_workfile_doc(filepath, force=True)
workfile_doc = self._get_current_workfile_doc()
save_workfile_data_to_doc(workfile_doc, data, io)
def _get_current_workfile_doc(self, filepath=None):
if filepath is None:
filepath = self.files_widget._get_selected_filepath()
task_name = self.tasks_widget.get_selected_task_name()
asset_id = self.assets_widget.get_selected_asset_id()
if not task_name or not asset_id or not filepath:
return
filename = os.path.split(filepath)[1]
return get_workfile_doc(
asset_id, task_name, filename, io
)
def _create_workfile_doc(self, filepath, force=False):
workfile_doc = None
if not force:
workfile_doc = self._get_current_workfile_doc(filepath)
if not workfile_doc:
workdir, filename = os.path.split(filepath)
asset_id = self.assets_widget.get_selected_asset_id()
asset_doc = io.find_one({"_id": asset_id})
task_name = self.tasks_widget.get_selected_task_name()
create_workfile_doc(asset_doc, task_name, filename, workdir, io)
def refresh(self):
# Refresh asset widget
self.assets_widget.refresh()
self._on_task_changed()
def set_context(self, context):
self._context_to_set = context
self._set_context_timer.start()
def _on_context_set_timeout(self):
if self._context_to_set is None:
self._set_context_timer.stop()
return
if self.assets_widget.refreshing:
return
self._context_to_set, context = None, self._context_to_set
if "asset" in context:
asset_doc = io.find_one(
{
"name": context["asset"],
"type": "asset"
},
{"_id": 1}
) or {}
asset_id = asset_doc.get("_id")
# Select the asset
self.assets_widget.select_asset(asset_id)
self.tasks_widget.set_asset_id(asset_id)
if "task" in context:
self.tasks_widget.select_task_name(context["task"])
self._on_task_changed()
def _on_asset_changed(self):
asset_id = self.assets_widget.get_selected_asset_id()
if asset_id:
self.tasks_widget.setEnabled(True)
else:
# Force disable the other widgets if no
# active selection
self.tasks_widget.setEnabled(False)
self.files_widget.setEnabled(False)
self.tasks_widget.set_asset_id(asset_id)
def _on_task_changed(self):
asset_id = self.assets_widget.get_selected_asset_id()
task_name = self.tasks_widget.get_selected_task_name()
task_type = self.tasks_widget.get_selected_task_type()
asset_is_valid = asset_id is not None
self.tasks_widget.setEnabled(asset_is_valid)
self.files_widget.setEnabled(bool(task_name) and asset_is_valid)
self.files_widget.set_asset_task(asset_id, task_name, task_type)
self.files_widget.refresh()


@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
__version__ = "3.9.2-nightly.1"
__version__ = "3.9.2-nightly.3"


@ -2,7 +2,7 @@ import uuid
from Qt import QtWidgets, QtCore
from openpype.pipeline.lib import (
from openpype.lib.attribute_definitions import (
AbtractAttrDef,
UnknownDef,
NumberDef,

poetry.lock

@ -11,7 +11,7 @@ develop = false
type = "git"
url = "https://github.com/pypeclub/acre.git"
reference = "master"
resolved_reference = "55a7c331e6dc5f81639af50ca4a8cc9d73e9273d"
resolved_reference = "126f7a188cfe36718f707f42ebbc597e86aa86c3"
[[package]]
name = "aiohttp"


@ -1,6 +1,6 @@
[tool.poetry]
name = "OpenPype"
version = "3.9.2-nightly.1" # OpenPype
version = "3.9.2-nightly.3" # OpenPype
description = "Open VFX and Animation pipeline with support."
authors = ["OpenPype Team <info@openpype.io>"]
license = "MIT License"


@ -33,6 +33,8 @@ It can be built and ran on all common platforms. We develop and test on the foll
## Database
Database version should be at least **MongoDB 4.4**.
Pype needs a site-wide installation of **MongoDB**. It should be installed on a
reliable server that all workstations (and possibly render nodes) can connect to. This
server holds the **Avalon** database that is at the core of everything