diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml
index 2849a4951a..3c126048da 100644
--- a/.github/ISSUE_TEMPLATE/bug_report.yml
+++ b/.github/ISSUE_TEMPLATE/bug_report.yml
@@ -35,6 +35,8 @@ body:
label: Version
description: What version are you running? Look to OpenPype Tray
options:
+ - 3.17.4-nightly.1
+ - 3.17.3
- 3.17.3-nightly.2
- 3.17.3-nightly.1
- 3.17.2
@@ -133,8 +135,6 @@ body:
- 3.15.0
- 3.15.0-nightly.1
- 3.14.11-nightly.4
- - 3.14.11-nightly.3
- - 3.14.11-nightly.2
validations:
required: true
- type: dropdown
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 7d5cf2c4d2..58428ab4d3 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,6 +1,379 @@
# Changelog
+## [3.17.3](https://github.com/ynput/OpenPype/tree/3.17.3)
+
+
+[Full Changelog](https://github.com/ynput/OpenPype/compare/3.17.2...3.17.3)
+
+### **🆕 New features**
+
+
+
+Maya: Multi-shot Layout Creator #5710
+
+The new Multi-shot Layout creator automates creation of new Layout instances in Maya, associating them with the correct shots, frame ranges and the Camera Sequencer.
+
+
+___
+
+
+
+
+
+Colorspace: ociolook file product type workflow #5541
+
+Traypublisher now supports publishing of colorspace look files (ociolook), which are JSON files referencing LUT files. At the moment this new product type can only be loaded in the Nuke host. A colorspace selector with better labeling was added to the publisher attributes; Roles and Aliases are also supported (v2 configs only).
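+
+For illustration, a trimmed sketch of the JSON payload the Traypublisher extractor writes and the Nuke loader reads (field names are taken from the plugins in this PR; the concrete values below are made up):
+
+```python
+# Sketch of the ociolook JSON content written by ExtractColorspaceLook.
+# Keys follow the extractor added in this PR; values are illustrative only.
+import json
+
+ociolook_file_content = {
+    "version": 1,  # schema version checked by the Nuke loader
+    "data": {
+        "ocioLookWorkingSpace": {"name": "ACEScg", "type": "colorspaces"},
+        "ocioLookItems": [
+            {
+                "name": "LUTfile",          # representation name
+                "ext": "cube",              # hypothetical LUT extension
+                "input_colorspace": {"name": "ACEScct", "type": "colorspaces"},
+                "output_colorspace": {"name": "ACEScg", "type": "colorspaces"},
+                "direction": "forward",
+                "interpolation": "linear",
+                "config_data": {"path": "/path/to/config.ocio"},
+            }
+        ],
+    },
+}
+
+print(json.dumps(ociolook_file_content, indent=4))
+```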
+
+
+___
+
+
+
+
+
+Scene Inventory tool: Refactor Scene Inventory tool (for AYON) #5758
+
+Modified scene inventory tool for AYON. The main differences are in how the project name is defined and in the replacement of the assets combobox with a folders dialog.
+
+
+___
+
+
+
+
+
+AYON: Support dev bundles #5783
+
+Modules can be loaded from a different location when AYON runs in dev mode.
+
+
+___
+
+
+
+### **🚀 Enhancements**
+
+
+
+Testing: Ingest Maya userSetup #5734
+
+Ingests the `userSetup.py` startup script for easier collaboration and more transparent testing.
+
+
+___
+
+
+
+
+
+Fusion: Work with pathmaps #5329
+
+Path maps are a big part of our Fusion workflow. We map the project folder to a path map within Fusion so all loaders and savers point to the path map variable. This way any computer on any OS can open any comp no matter where the project folder is located.
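+
+As a host-agnostic illustration of the idea (this is not Fusion's actual API, just a sketch with hypothetical names), loaders and savers store paths through a path-map variable instead of an absolute project root:
+
+```python
+# Minimal sketch of path mapping: swap the absolute project root for a
+# path-map variable so a comp opens on any machine/OS. Names are hypothetical.
+PROJECT_ROOT = "/mnt/projects/my_project"   # hypothetical mount point
+PATH_MAP_VARIABLE = "Project:"              # hypothetical path map name
+
+
+def to_path_map(path):
+    """Return the path expressed through the path-map variable."""
+    normalized = path.replace("\\", "/")
+    if normalized.startswith(PROJECT_ROOT):
+        return PATH_MAP_VARIABLE + normalized[len(PROJECT_ROOT):]
+    return normalized
+
+
+print(to_path_map("/mnt/projects/my_project/shots/sh010/plate.exr"))
+# -> "Project:/shots/sh010/plate.exr"
+```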
+
+
+___
+
+
+
+
+
+Maya: Add Maya 2024 and remove pre 2022. #5674
+
+Adds Maya 2024 as a default application variant and removes Maya 2020 and older, as these are no longer supported.
+
+
+___
+
+
+
+
+
+Enhancement: Houdini: Allow using template keys in Houdini shelves manager #5727
+
+Allow using Template keys in Houdini shelves manager.
+
+
+___
+
+
+
+
+
+Houdini: Fix Show in usdview loader action #5737
+
+Fix the "Show in USD View" loader to show up in Houdini
+
+
+___
+
+
+
+
+
+Nuke: validator of asset context with repair actions #5749
+
+Instance nodes whose asset and task context differs from the current one can now be validated and repaired via a repair action.
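+
+A rough sketch of how such a validator is typically wired to a repair action in OpenPype (simplified; the actual plugin in this PR may differ in names and repair logic):
+
+```python
+# Simplified sketch, not the exact plugin from this PR: the repair action
+# re-imprints the current asset/task context onto the failed instance.
+import pyblish.api
+from openpype.pipeline import PublishValidationError
+from openpype.pipeline.publish import RepairAction
+
+
+class ValidateAssetContext(pyblish.api.InstancePlugin):
+    label = "Validate Asset Context"
+    order = pyblish.api.ValidatorOrder
+    hosts = ["nuke"]
+    actions = [RepairAction]
+
+    def process(self, instance):
+        context = instance.context
+        if instance.data.get("asset") != context.data.get("asset"):
+            raise PublishValidationError(
+                "Instance asset does not match the current context asset")
+
+    @classmethod
+    def repair(cls, instance):
+        # Overwrite instance context with the currently opened context.
+        instance.data["asset"] = instance.context.data["asset"]
+        instance.data["task"] = instance.context.data["task"]
+```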
+
+
+___
+
+
+
+
+
+AYON: Tools enhancements #5753
+
+A few enhancements and tweaks of AYON-related tools.
+
+
+___
+
+
+
+
+
+Max: Tweaks on ValidateMaxContents #5759
+
+This PR provides the following enhancements to ValidateMaxContents:
+- Rename `ValidateMaxContents` to `ValidateContainers`
+- Add the related families which are required to pass the validation (all families except `Render`, as the render instance is the only one that allows an empty container)
+
+
+___
+
+
+
+
+
+Enhancement: Nuke refactor `SelectInvalidAction` #5762
+
+Refactor `SelectInvalidAction` to behave like the equivalent actions in other hosts, and create `SelectInstanceNodeAction` as a dedicated action to select the instance node of a failed plugin.
+- Note: Select Instance Node will still select the instance node even if the user has already 'fixed' the problem.
+
+
+___
+
+
+
+
+
+Enhancement: Tweak logging for Nuke for artist facing reports #5763
+
+Lower logs that are not artist-facing to debug level and, in some cases, clarify what the logged value is.
+
+
+___
+
+
+
+
+
+AYON Settings: Disk mapping #5786
+
+Added disk mapping settings to core addon settings.
+
+
+___
+
+
+
+### **🐛 Bug fixes**
+
+
+
+Maya: add colorspace argument to redshiftTextureProcessor #5645
+
+In color-managed Maya, texture processing during look extraction wasn't passing the colorspaces set on textures to the `redshiftTextureProcessor` tool. This effectively caused the tool to return a non-zero exit code (even though the texture was converted, albeit into the wrong colorspace) and therefore crashed the extractor. This PR passes the colorspace to the tool when color management is enabled.
+
+
+___
+
+
+
+
+
+Maya: don't call `cmds.ogs()` in headless mode #5769
+
+`cmds.ogs()` is a call that crashes if Maya is running in headless mode (mayabatch, mayapy). This PR handles that case.
+
+
+___
+
+
+
+
+
+Resolve: inventory management fix #5673
+
+Loaded timeline item containers now update correctly and version management works as it is supposed to.
+- [x] Updating loaded timeline items
+- [x] Removing loaded timeline items
+
+
+___
+
+
+
+
+
+Blender: Remove 'update_hierarchy' #5756
+
+Remove the `update_hierarchy` function, which was causing crashes in the scene inventory tool.
+
+
+___
+
+
+
+
+
+Max: bug fix on the settings in pointcloud family #5768
+
+Fixes the settings erroring out in the point cloud validator (see https://github.com/ynput/OpenPype/pull/5759#pullrequestreview-1676681705) and possibly in the point cloud extractor.
+
+
+___
+
+
+
+
+
+AYON settings: Fix default factory of tools #5773
+
+Fix default factory of application tools.
+
+
+___
+
+
+
+
+
+Fusion: added missing OPENPYPE_VERSION #5776
+
+Fusion submission to Deadline was missing the OPENPYPE_VERSION env var when submitting from a build (not directly from source code). The missing env var might break rendering on Deadline if the path to the OpenPype executable (openpype_console.exe) is not set explicitly, and might cause issues when different OpenPype versions are deployed. This PR adds the environment variable.
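+
+For context, Deadline job info files pass environment variables through numbered `EnvironmentKeyValue` entries; a minimal sketch of forwarding the variable (illustrative, not the exact submitter code from this PR):
+
+```python
+# Sketch of forwarding OPENPYPE_VERSION via Deadline JobInfo environment
+# entries; the job name and the rest of the payload are hypothetical.
+import os
+
+job_info = {
+    "Plugin": "Fusion",
+    "Name": "my_comp_render",
+}
+
+environment = {
+    "OPENPYPE_VERSION": os.environ.get("OPENPYPE_VERSION", ""),
+}
+for index, (key, value) in enumerate(environment.items()):
+    job_info["EnvironmentKeyValue%d" % index] = "{}={}".format(key, value)
+```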
+
+
+___
+
+
+
+
+
+Ftrack: Skip tasks when looking for asset equivalent entity #5777
+
+Skip tasks when looking for asset equivalent entity.
+
+
+___
+
+
+
+
+
+Nuke: loading gizmos fixes #5779
+
+The Gizmo product is now offered in the Loader as a plugin and updates as expected.
+
+
+___
+
+
+
+
+
+General: thumbnail extractor as last extractor #5780
+
+Fixes an issue with the order of the `ExtractOIIOTranscode` and `ExtractThumbnail` plugins. The problem was that `ExtractThumbnail` was processed before `ExtractOIIOTranscode`, so it did not inherit the `review` tag into the representation data and consequently failed to process and create thumbnails.
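+
+Conceptually, pyblish runs extractors by their numeric `order`, so the fix boils down to giving the thumbnail extractor a larger offset than the transcode extractor (a sketch with hypothetical offsets, not the actual plugin values):
+
+```python
+# Illustrative only: plugins with a lower `order` run first, so the
+# thumbnail extractor must be offset above the OIIO transcode extractor.
+import pyblish.api
+
+
+class ExtractOIIOTranscodeSketch(pyblish.api.InstancePlugin):
+    order = pyblish.api.ExtractorOrder + 0.01   # hypothetical offset
+
+    def process(self, instance):
+        pass  # transcode representations, add the "review" tag
+
+
+class ExtractThumbnailSketch(pyblish.api.InstancePlugin):
+    order = pyblish.api.ExtractorOrder + 0.49   # runs after the transcode
+
+    def process(self, instance):
+        pass  # build the thumbnail from the transcoded representation
+```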
+
+
+___
+
+
+
+
+
+Bug: fix key in application json #5787
+
+In PR #5705 `maya` was wrongly used instead of `mayapy`, breaking AYON defaults in AYON Application Addon.
+
+
+___
+
+
+
+
+
+'NumberAttrWidget' shows 'Multiselection' label on multiselection #5792
+
+Attribute definition widget 'NumberAttrWidget' shows `< Multiselection >` label on multiselection.
+
+
+___
+
+
+
+
+
+Publisher: Selection change by enabled checkbox on instance update attributes #5793
+
+Changing the selected instance by clicking its enabled checkbox now updates the attributes on the right side to match the selection.
+
+
+___
+
+
+
+
+
+Houdini: Remove `setParms` call since it's responsibility of `self.imprint` to set the values #5796
+
+Reverts a recent change made in #5621. The change turned out to be faulty, as noted in the referenced comments.
+
+
+___
+
+
+
+
+
+AYON loader: Fix SubsetLoader functionality #5799
+
+Fix SubsetLoader plugin processing in AYON loader tool.
+
+
+___
+
+
+
+### **Merged pull requests**
+
+
+
+Houdini: Add self publish button #5621
+
+This PR allows publishing a single instance by adding a publish button to created ROP nodes in Houdini. Admins are welcome to enable it from the Houdini general settings. The publish button also includes all input publish instances; in the original PR screenshot, the alembic instance is ignored because its switch is turned off.
+
+
+___
+
+
+
+
+
+Nuke: fixing UNC support for OCIO path #5771
+
+UNC paths for a custom OCIO path were broken on Windows; this solves the issue of the double slash being removed from the start of the path.
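+
+A minimal sketch of the failure mode (generic Python, not the actual Nuke code): naive slash clean-up collapses the leading `\\` of a UNC path, so the prefix has to be preserved explicitly:
+
+```python
+# Hypothetical example of why UNC paths break when double slashes are removed.
+def normalize_naive(path):
+    return path.replace("\\\\", "\\")          # destroys the UNC prefix
+
+
+def normalize_unc_safe(path):
+    prefix = "\\\\" if path.startswith("\\\\") else ""
+    return prefix + normalize_naive(path.lstrip("\\"))
+
+
+unc = r"\\server\share\ocio\config.ocio"       # hypothetical UNC path
+print(normalize_naive(unc))     # \server\share\ocio\config.ocio   (broken)
+print(normalize_unc_safe(unc))  # \\server\share\ocio\config.ocio  (preserved)
+```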
+
+
+___
+
+
+
+
+
+
## [3.17.2](https://github.com/ynput/OpenPype/tree/3.17.2)
diff --git a/README.md b/README.md
index ce98f845e6..ed3e058002 100644
--- a/README.md
+++ b/README.md
@@ -279,7 +279,7 @@ arguments and it will create zip file that OpenPype can use.
Building documentation
----------------------
-Top build API documentation, run `.\tools\make_docs(.ps1|.sh)`. It will create html documentation
+To build API documentation, run `.\tools\make_docs(.ps1|.sh)`. It will create html documentation
from current sources in `.\docs\build`.
**Note that it needs existing virtual environment.**
diff --git a/openpype/hosts/blender/plugins/create/create_pointcache.py b/openpype/hosts/blender/plugins/create/create_pointcache.py
index 6220f68dc5..65cf18472d 100644
--- a/openpype/hosts/blender/plugins/create/create_pointcache.py
+++ b/openpype/hosts/blender/plugins/create/create_pointcache.py
@@ -3,11 +3,11 @@
import bpy
from openpype.pipeline import get_current_task_name
-import openpype.hosts.blender.api.plugin
-from openpype.hosts.blender.api import lib
+from openpype.hosts.blender.api import plugin, lib, ops
+from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES
-class CreatePointcache(openpype.hosts.blender.api.plugin.Creator):
+class CreatePointcache(plugin.Creator):
"""Polygonal static geometry"""
name = "pointcacheMain"
@@ -16,20 +16,36 @@ class CreatePointcache(openpype.hosts.blender.api.plugin.Creator):
icon = "gears"
def process(self):
+ """ Run the creator on Blender main thread"""
+ mti = ops.MainThreadItem(self._process)
+ ops.execute_in_main_thread(mti)
+ def _process(self):
+ # Get Instance Container or create it if it does not exist
+ instances = bpy.data.collections.get(AVALON_INSTANCES)
+ if not instances:
+ instances = bpy.data.collections.new(name=AVALON_INSTANCES)
+ bpy.context.scene.collection.children.link(instances)
+
+ # Create instance object
asset = self.data["asset"]
subset = self.data["subset"]
- name = openpype.hosts.blender.api.plugin.asset_name(asset, subset)
- collection = bpy.data.collections.new(name=name)
- bpy.context.scene.collection.children.link(collection)
+ name = plugin.asset_name(asset, subset)
+ asset_group = bpy.data.objects.new(name=name, object_data=None)
+ asset_group.empty_display_type = 'SINGLE_ARROW'
+ instances.objects.link(asset_group)
self.data['task'] = get_current_task_name()
- lib.imprint(collection, self.data)
+ lib.imprint(asset_group, self.data)
+ # Add selected objects to instance
if (self.options or {}).get("useSelection"):
- objects = lib.get_selection()
- for obj in objects:
- collection.objects.link(obj)
- if obj.type == 'EMPTY':
- objects.extend(obj.children)
+ bpy.context.view_layer.objects.active = asset_group
+ selected = lib.get_selection()
+ for obj in selected:
+ if obj.parent in selected:
+ obj.select_set(False)
+ continue
+ selected.append(asset_group)
+ bpy.ops.object.parent_set(keep_transform=True)
- return collection
+ return asset_group
diff --git a/openpype/hosts/blender/plugins/load/load_abc.py b/openpype/hosts/blender/plugins/load/load_abc.py
index 9b3d940536..8d1863d4d5 100644
--- a/openpype/hosts/blender/plugins/load/load_abc.py
+++ b/openpype/hosts/blender/plugins/load/load_abc.py
@@ -60,18 +60,29 @@ class CacheModelLoader(plugin.AssetLoader):
imported = lib.get_selection()
- # Children must be linked before parents,
- # otherwise the hierarchy will break
+ # Use first EMPTY without parent as container
+ container = next(
+ (obj for obj in imported
+ if obj.type == "EMPTY" and not obj.parent),
+ None
+ )
+
objects = []
+ if container:
+ nodes = list(container.children)
- for obj in imported:
- obj.parent = asset_group
+ for obj in nodes:
+ obj.parent = asset_group
- for obj in imported:
- objects.append(obj)
- imported.extend(list(obj.children))
+ bpy.data.objects.remove(container)
- objects.reverse()
+ objects.extend(nodes)
+ for obj in nodes:
+ objects.extend(obj.children_recursive)
+ else:
+ for obj in imported:
+ obj.parent = asset_group
+ objects = imported
for obj in objects:
# Unlink the object from all collections
@@ -137,6 +148,7 @@ class CacheModelLoader(plugin.AssetLoader):
bpy.context.scene.collection.children.link(containers)
asset_group = bpy.data.objects.new(group_name, object_data=None)
+ asset_group.empty_display_type = 'SINGLE_ARROW'
containers.objects.link(asset_group)
objects = self._process(libpath, asset_group, group_name)
diff --git a/openpype/hosts/blender/plugins/publish/collect_instances.py b/openpype/hosts/blender/plugins/publish/collect_instances.py
index bc4b5ab092..ad2ce54147 100644
--- a/openpype/hosts/blender/plugins/publish/collect_instances.py
+++ b/openpype/hosts/blender/plugins/publish/collect_instances.py
@@ -19,85 +19,51 @@ class CollectInstances(pyblish.api.ContextPlugin):
@staticmethod
def get_asset_groups() -> Generator:
- """Return all 'model' collections.
-
- Check if the family is 'model' and if it doesn't have the
- representation set. If the representation is set, it is a loaded model
- and we don't want to publish it.
+ """Return all instances that are empty objects asset groups.
"""
instances = bpy.data.collections.get(AVALON_INSTANCES)
- for obj in instances.objects:
- avalon_prop = obj.get(AVALON_PROPERTY) or dict()
+ for obj in list(instances.objects) + list(instances.children):
+ avalon_prop = obj.get(AVALON_PROPERTY) or {}
if avalon_prop.get('id') == 'pyblish.avalon.instance':
yield obj
@staticmethod
- def get_collections() -> Generator:
- """Return all 'model' collections.
-
- Check if the family is 'model' and if it doesn't have the
- representation set. If the representation is set, it is a loaded model
- and we don't want to publish it.
- """
- for collection in bpy.data.collections:
- avalon_prop = collection.get(AVALON_PROPERTY) or dict()
- if avalon_prop.get('id') == 'pyblish.avalon.instance':
- yield collection
+ def create_instance(context, group):
+ avalon_prop = group[AVALON_PROPERTY]
+ asset = avalon_prop['asset']
+ family = avalon_prop['family']
+ subset = avalon_prop['subset']
+ task = avalon_prop['task']
+ name = f"{asset}_{subset}"
+ return context.create_instance(
+ name=name,
+ family=family,
+ families=[family],
+ subset=subset,
+ asset=asset,
+ task=task,
+ )
def process(self, context):
"""Collect the models from the current Blender scene."""
asset_groups = self.get_asset_groups()
- collections = self.get_collections()
for group in asset_groups:
- avalon_prop = group[AVALON_PROPERTY]
- asset = avalon_prop['asset']
- family = avalon_prop['family']
- subset = avalon_prop['subset']
- task = avalon_prop['task']
- name = f"{asset}_{subset}"
- instance = context.create_instance(
- name=name,
- family=family,
- families=[family],
- subset=subset,
- asset=asset,
- task=task,
- )
- objects = list(group.children)
- members = set()
- for obj in objects:
- objects.extend(list(obj.children))
- members.add(obj)
- members.add(group)
- instance[:] = list(members)
- self.log.debug(json.dumps(instance.data, indent=4))
- for obj in instance:
- self.log.debug(obj)
+ instance = self.create_instance(context, group)
+ members = []
+ if isinstance(group, bpy.types.Collection):
+ members = list(group.objects)
+ family = instance.data["family"]
+ if family == "animation":
+ for obj in group.objects:
+ if obj.type == 'EMPTY' and obj.get(AVALON_PROPERTY):
+ members.extend(
+ child for child in obj.children
+ if child.type == 'ARMATURE')
+ else:
+ members = group.children_recursive
- for collection in collections:
- avalon_prop = collection[AVALON_PROPERTY]
- asset = avalon_prop['asset']
- family = avalon_prop['family']
- subset = avalon_prop['subset']
- task = avalon_prop['task']
- name = f"{asset}_{subset}"
- instance = context.create_instance(
- name=name,
- family=family,
- families=[family],
- subset=subset,
- asset=asset,
- task=task,
- )
- members = list(collection.objects)
- if family == "animation":
- for obj in collection.objects:
- if obj.type == 'EMPTY' and obj.get(AVALON_PROPERTY):
- for child in obj.children:
- if child.type == 'ARMATURE':
- members.append(child)
- members.append(collection)
+ members.append(group)
instance[:] = members
self.log.debug(json.dumps(instance.data, indent=4))
for obj in instance:
diff --git a/openpype/hosts/blender/plugins/publish/extract_abc.py b/openpype/hosts/blender/plugins/publish/extract_abc.py
index 87159e53f0..7b6c4d7ae7 100644
--- a/openpype/hosts/blender/plugins/publish/extract_abc.py
+++ b/openpype/hosts/blender/plugins/publish/extract_abc.py
@@ -12,8 +12,7 @@ class ExtractABC(publish.Extractor):
label = "Extract ABC"
hosts = ["blender"]
- families = ["model", "pointcache"]
- optional = True
+ families = ["pointcache"]
def process(self, instance):
# Define extract output file path
@@ -62,3 +61,12 @@ class ExtractABC(publish.Extractor):
self.log.info("Extracted instance '%s' to: %s",
instance.name, representation)
+
+
+class ExtractModelABC(ExtractABC):
+ """Extract model as ABC."""
+
+ label = "Extract Model ABC"
+ hosts = ["blender"]
+ families = ["model"]
+ optional = True
diff --git a/openpype/hosts/blender/plugins/publish/increment_workfile_version.py b/openpype/hosts/blender/plugins/publish/increment_workfile_version.py
index 3d176f9c30..6ace14d77c 100644
--- a/openpype/hosts/blender/plugins/publish/increment_workfile_version.py
+++ b/openpype/hosts/blender/plugins/publish/increment_workfile_version.py
@@ -10,7 +10,7 @@ class IncrementWorkfileVersion(pyblish.api.ContextPlugin):
optional = True
hosts = ["blender"]
families = ["animation", "model", "rig", "action", "layout", "blendScene",
- "render"]
+ "pointcache", "render"]
def process(self, context):
diff --git a/openpype/hosts/nuke/plugins/load/load_ociolook.py b/openpype/hosts/nuke/plugins/load/load_ociolook.py
new file mode 100644
index 0000000000..18c8cdba35
--- /dev/null
+++ b/openpype/hosts/nuke/plugins/load/load_ociolook.py
@@ -0,0 +1,350 @@
+import os
+import json
+import secrets
+import nuke
+import six
+
+from openpype.client import (
+ get_version_by_id,
+ get_last_version_by_subset_id
+)
+from openpype.pipeline import (
+ load,
+ get_current_project_name,
+ get_representation_path,
+)
+from openpype.hosts.nuke.api import (
+ containerise,
+ viewer_update_and_undo_stop,
+ update_container,
+)
+
+
+class LoadOcioLookNodes(load.LoaderPlugin):
+ """Loading Ocio look to the nuke.Node graph"""
+
+ families = ["ociolook"]
+ representations = ["*"]
+ extensions = {"json"}
+
+ label = "Load OcioLook [nodes]"
+ order = 0
+ icon = "cc"
+ color = "white"
+ ignore_attr = ["useLifetime"]
+
+ # plugin attributes
+ current_node_color = "0x4ecd91ff"
+ old_node_color = "0xd88467ff"
+
+ # json file variables
+ schema_version = 1
+
+ def load(self, context, name, namespace, data):
+ """
+ Loading function to get the soft effects to particular read node
+
+ Arguments:
+ context (dict): context of version
+ name (str): name of the version
+ namespace (str): asset name
+ data (dict): compulsory attribute > not used
+
+ Returns:
+ nuke.Node: containerized nuke.Node object
+ """
+ namespace = namespace or context['asset']['name']
+ suffix = secrets.token_hex(nbytes=4)
+ object_name = "{}_{}_{}".format(
+ name, namespace, suffix)
+
+ # getting file path
+ filepath = self.filepath_from_context(context)
+
+ json_f = self._load_json_data(filepath)
+
+ group_node = self._create_group_node(
+ object_name, filepath, json_f["data"])
+
+ self._node_version_color(context["version"], group_node)
+
+ self.log.info(
+ "Loaded lut setup: `{}`".format(group_node["name"].value()))
+
+ return containerise(
+ node=group_node,
+ name=name,
+ namespace=namespace,
+ context=context,
+ loader=self.__class__.__name__,
+ data={
+ "objectName": object_name,
+ }
+ )
+
+ def _create_group_node(
+ self,
+ object_name,
+ filepath,
+ data
+ ):
+ """Creates group node with all the nodes inside.
+
+ Creating mainly `OCIOFileTransform` nodes with `OCIOColorSpace` nodes
+ in between - in case those are needed.
+
+ Arguments:
+ object_name (str): name of the group node
+ filepath (str): path to json file
+ data (dict): data from json file
+
+ Returns:
+ nuke.Node: group node with all the nodes inside
+ """
+ # get corresponding node
+
+ root_working_colorspace = nuke.root()["workingSpaceLUT"].value()
+
+ dir_path = os.path.dirname(filepath)
+ all_files = os.listdir(dir_path)
+
+ ocio_working_colorspace = _colorspace_name_by_type(
+ data["ocioLookWorkingSpace"])
+
+ # adding nodes to node graph
+ # just in case we are in group lets jump out of it
+ nuke.endGroup()
+
+ input_node = None
+ output_node = None
+ group_node = nuke.toNode(object_name)
+ if group_node:
+ # remove all nodes between Input and Output nodes
+ for node in group_node.nodes():
+ if node.Class() not in ["Input", "Output"]:
+ nuke.delete(node)
+ elif node.Class() == "Input":
+ input_node = node
+ elif node.Class() == "Output":
+ output_node = node
+ else:
+ group_node = nuke.createNode(
+ "Group",
+ "name {}_1".format(object_name),
+ inpanel=False
+ )
+
+ # adding content to the group node
+ with group_node:
+ pre_colorspace = root_working_colorspace
+
+ # reusing input node if it exists during update
+ if input_node:
+ pre_node = input_node
+ else:
+ pre_node = nuke.createNode("Input")
+ pre_node["name"].setValue("rgb")
+
+ # Compare script working colorspace with ocio working colorspace
+ # found in json file and convert to json's if needed
+ if pre_colorspace != ocio_working_colorspace:
+ pre_node = _add_ocio_colorspace_node(
+ pre_node,
+ pre_colorspace,
+ ocio_working_colorspace
+ )
+ pre_colorspace = ocio_working_colorspace
+
+ for ocio_item in data["ocioLookItems"]:
+ input_space = _colorspace_name_by_type(
+ ocio_item["input_colorspace"])
+ output_space = _colorspace_name_by_type(
+ ocio_item["output_colorspace"])
+
+ # making sure we are set to correct colorspace for otio item
+ if pre_colorspace != input_space:
+ pre_node = _add_ocio_colorspace_node(
+ pre_node,
+ pre_colorspace,
+ input_space
+ )
+
+ node = nuke.createNode("OCIOFileTransform")
+
+ # file path from lut representation
+ extension = ocio_item["ext"]
+ item_name = ocio_item["name"]
+
+ item_lut_file = next(
+ (
+ file for file in all_files
+ if file.endswith(extension)
+ ),
+ None
+ )
+ if not item_lut_file:
+ raise ValueError(
+ "File with extension '{}' not "
+ "found in directory".format(extension)
+ )
+
+ item_lut_path = os.path.join(
+ dir_path, item_lut_file).replace("\\", "/")
+ node["file"].setValue(item_lut_path)
+ node["name"].setValue(item_name)
+ node["direction"].setValue(ocio_item["direction"])
+ node["interpolation"].setValue(ocio_item["interpolation"])
+ node["working_space"].setValue(input_space)
+
+ pre_node.autoplace()
+ node.setInput(0, pre_node)
+ node.autoplace()
+ # pass output space into pre_colorspace for next iteration
+ # or for output node comparison
+ pre_colorspace = output_space
+ pre_node = node
+
+ # making sure we are back in script working colorspace
+ if pre_colorspace != root_working_colorspace:
+ pre_node = _add_ocio_colorspace_node(
+ pre_node,
+ pre_colorspace,
+ root_working_colorspace
+ )
+
+ # reusing output node if it exists during update
+ if not output_node:
+ output = nuke.createNode("Output")
+ else:
+ output = output_node
+
+ output.setInput(0, pre_node)
+
+ return group_node
+
+ def update(self, container, representation):
+
+ project_name = get_current_project_name()
+ version_doc = get_version_by_id(project_name, representation["parent"])
+
+ object_name = container['objectName']
+
+ filepath = get_representation_path(representation)
+
+ json_f = self._load_json_data(filepath)
+
+ group_node = self._create_group_node(
+ object_name,
+ filepath,
+ json_f["data"]
+ )
+
+ self._node_version_color(version_doc, group_node)
+
+ self.log.info("Updated lut setup: `{}`".format(
+ group_node["name"].value()))
+
+ return update_container(
+ group_node, {"representation": str(representation["_id"])})
+
+ def _load_json_data(self, filepath):
+ # getting data from json file with unicode conversion
+ with open(filepath, "r") as _file:
+ json_f = {self._bytify(key): self._bytify(value)
+ for key, value in json.load(_file).items()}
+
+ # check if the version in json_f is the same as plugin version
+ if json_f["version"] != self.schema_version:
+ raise KeyError(
+ "Version of json file is not the same as plugin version")
+
+ return json_f
+
+ def _bytify(self, input):
+ """
+ Converts unicode strings to strings
+ It goes through all dictionary
+
+ Arguments:
+ input (dict/str): input
+
+ Returns:
+ dict: with fixed values and keys
+
+ """
+
+ if isinstance(input, dict):
+ return {self._bytify(key): self._bytify(value)
+ for key, value in input.items()}
+ elif isinstance(input, list):
+ return [self._bytify(element) for element in input]
+ elif isinstance(input, six.text_type):
+ return str(input)
+ else:
+ return input
+
+ def switch(self, container, representation):
+ self.update(container, representation)
+
+ def remove(self, container):
+ node = nuke.toNode(container['objectName'])
+ with viewer_update_and_undo_stop():
+ nuke.delete(node)
+
+ def _node_version_color(self, version, node):
+ """ Coloring a node by correct color by actual version"""
+
+ project_name = get_current_project_name()
+ last_version_doc = get_last_version_by_subset_id(
+ project_name, version["parent"], fields=["_id"]
+ )
+
+ # change color of node
+ if version["_id"] == last_version_doc["_id"]:
+ color_value = self.current_node_color
+ else:
+ color_value = self.old_node_color
+ node["tile_color"].setValue(int(color_value, 16))
+
+
+def _colorspace_name_by_type(colorspace_data):
+ """
+ Returns colorspace name by type
+
+ Arguments:
+ colorspace_data (dict): colorspace data
+
+ Returns:
+ str: colorspace name
+ """
+ if colorspace_data["type"] == "colorspaces":
+ return colorspace_data["name"]
+ elif colorspace_data["type"] == "roles":
+ return colorspace_data["colorspace"]
+ else:
+ raise KeyError("Unknown colorspace type: {}".format(
+ colorspace_data["type"]))
+
+
+def _add_ocio_colorspace_node(pre_node, input_space, output_space):
+ """
+ Adds OCIOColorSpace node to the node graph
+
+ Arguments:
+ pre_node (nuke.Node): node to connect to
+ input_space (str): input colorspace
+ output_space (str): output colorspace
+
+ Returns:
+ nuke.Node: node with OCIOColorSpace node
+ """
+ node = nuke.createNode("OCIOColorSpace")
+ node.setInput(0, pre_node)
+ node["in_colorspace"].setValue(input_space)
+ node["out_colorspace"].setValue(output_space)
+
+ pre_node.autoplace()
+ node.setInput(0, pre_node)
+ node.autoplace()
+
+ return node
diff --git a/openpype/hosts/photoshop/api/extension/extension.zxp b/openpype/hosts/photoshop/api/extension/extension.zxp
deleted file mode 100644
index 39b766cd0d..0000000000
Binary files a/openpype/hosts/photoshop/api/extension/extension.zxp and /dev/null differ
diff --git a/openpype/hosts/traypublisher/plugins/create/create_colorspace_look.py b/openpype/hosts/traypublisher/plugins/create/create_colorspace_look.py
new file mode 100644
index 0000000000..5628d0973f
--- /dev/null
+++ b/openpype/hosts/traypublisher/plugins/create/create_colorspace_look.py
@@ -0,0 +1,173 @@
+# -*- coding: utf-8 -*-
+"""Creator of colorspace look files.
+
+This creator is used to publish colorspace look files thanks to
+production type `ociolook`. All files are published as representation.
+"""
+from pathlib import Path
+
+from openpype.client import get_asset_by_name
+from openpype.lib.attribute_definitions import (
+ FileDef, EnumDef, TextDef, UISeparatorDef
+)
+from openpype.pipeline import (
+ CreatedInstance,
+ CreatorError
+)
+from openpype.pipeline import colorspace
+from openpype.hosts.traypublisher.api.plugin import TrayPublishCreator
+
+
+class CreateColorspaceLook(TrayPublishCreator):
+ """Creates colorspace look files."""
+
+ identifier = "io.openpype.creators.traypublisher.colorspace_look"
+ label = "Colorspace Look"
+ family = "ociolook"
+ description = "Publishes color space look file."
+ extensions = [".cc", ".cube", ".3dl", ".spi1d", ".spi3d", ".csp", ".lut"]
+ enabled = False
+
+ colorspace_items = [
+ (None, "Not set")
+ ]
+ colorspace_attr_show = False
+ config_items = None
+ config_data = None
+
+ def get_detail_description(self):
+ return """# Colorspace Look
+
+This creator publishes color space look file (LUT).
+ """
+
+ def get_icon(self):
+ return "mdi.format-color-fill"
+
+ def create(self, subset_name, instance_data, pre_create_data):
+ repr_file = pre_create_data.get("luts_file")
+ if not repr_file:
+ raise CreatorError("No files specified")
+
+ files = repr_file.get("filenames")
+ if not files:
+ # this should never happen
+ raise CreatorError("Missing files from representation")
+
+ asset_doc = get_asset_by_name(
+ self.project_name, instance_data["asset"])
+
+ subset_name = self.get_subset_name(
+ variant=instance_data["variant"],
+ task_name=instance_data["task"] or "Not set",
+ project_name=self.project_name,
+ asset_doc=asset_doc,
+ )
+
+ instance_data["creator_attributes"] = {
+ "abs_lut_path": (
+ Path(repr_file["directory"]) / files[0]).as_posix()
+ }
+
+ # Create new instance
+ new_instance = CreatedInstance(self.family, subset_name,
+ instance_data, self)
+ new_instance.transient_data["config_items"] = self.config_items
+ new_instance.transient_data["config_data"] = self.config_data
+
+ self._store_new_instance(new_instance)
+
+ def collect_instances(self):
+ super().collect_instances()
+ for instance in self.create_context.instances:
+ if instance.creator_identifier == self.identifier:
+ instance.transient_data["config_items"] = self.config_items
+ instance.transient_data["config_data"] = self.config_data
+
+ def get_instance_attr_defs(self):
+ return [
+ EnumDef(
+ "working_colorspace",
+ self.colorspace_items,
+ default="Not set",
+ label="Working Colorspace",
+ ),
+ UISeparatorDef(
+ label="Advanced1"
+ ),
+ TextDef(
+ "abs_lut_path",
+ label="LUT Path",
+ ),
+ EnumDef(
+ "input_colorspace",
+ self.colorspace_items,
+ default="Not set",
+ label="Input Colorspace",
+ ),
+ EnumDef(
+ "direction",
+ [
+ (None, "Not set"),
+ ("forward", "Forward"),
+ ("inverse", "Inverse")
+ ],
+ default="Not set",
+ label="Direction"
+ ),
+ EnumDef(
+ "interpolation",
+ [
+ (None, "Not set"),
+ ("linear", "Linear"),
+ ("tetrahedral", "Tetrahedral"),
+ ("best", "Best"),
+ ("nearest", "Nearest")
+ ],
+ default="Not set",
+ label="Interpolation"
+ ),
+ EnumDef(
+ "output_colorspace",
+ self.colorspace_items,
+ default="Not set",
+ label="Output Colorspace",
+ ),
+ ]
+
+ def get_pre_create_attr_defs(self):
+ return [
+ FileDef(
+ "luts_file",
+ folders=False,
+ extensions=self.extensions,
+ allow_sequences=False,
+ single_item=True,
+ label="Look Files",
+ )
+ ]
+
+ def apply_settings(self, project_settings, system_settings):
+ host = self.create_context.host
+ host_name = host.name
+ project_name = host.get_current_project_name()
+ config_data = colorspace.get_imageio_config(
+ project_name, host_name,
+ project_settings=project_settings
+ )
+
+ if not config_data:
+ self.enabled = False
+ return
+
+ filepath = config_data["path"]
+ config_items = colorspace.get_ocio_config_colorspaces(filepath)
+ labeled_colorspaces = colorspace.get_colorspaces_enumerator_items(
+ config_items,
+ include_aliases=True,
+ include_roles=True
+ )
+ self.config_items = config_items
+ self.config_data = config_data
+ self.colorspace_items.extend(labeled_colorspaces)
+ self.enabled = True
diff --git a/openpype/hosts/traypublisher/plugins/publish/collect_colorspace_look.py b/openpype/hosts/traypublisher/plugins/publish/collect_colorspace_look.py
new file mode 100644
index 0000000000..6aede099bf
--- /dev/null
+++ b/openpype/hosts/traypublisher/plugins/publish/collect_colorspace_look.py
@@ -0,0 +1,86 @@
+import os
+from pprint import pformat
+import pyblish.api
+from openpype.pipeline import publish
+from openpype.pipeline import colorspace
+
+
+class CollectColorspaceLook(pyblish.api.InstancePlugin,
+ publish.OpenPypePyblishPluginMixin):
+ """Collect OCIO colorspace look from LUT file
+ """
+
+ label = "Collect Colorspace Look"
+ order = pyblish.api.CollectorOrder
+ hosts = ["traypublisher"]
+ families = ["ociolook"]
+
+ def process(self, instance):
+ creator_attrs = instance.data["creator_attributes"]
+
+ lut_repre_name = "LUTfile"
+ file_url = creator_attrs["abs_lut_path"]
+ file_name = os.path.basename(file_url)
+ base_name, ext = os.path.splitext(file_name)
+
+ # set output name with base_name which was cleared
+ # of all symbols and all parts were capitalized
+ output_name = (base_name.replace("_", " ")
+ .replace(".", " ")
+ .replace("-", " ")
+ .title()
+ .replace(" ", ""))
+
+ # get config items
+ config_items = instance.data["transientData"]["config_items"]
+ config_data = instance.data["transientData"]["config_data"]
+
+ # get colorspace items
+ converted_color_data = {}
+ for colorspace_key in [
+ "working_colorspace",
+ "input_colorspace",
+ "output_colorspace"
+ ]:
+ if creator_attrs[colorspace_key]:
+ color_data = colorspace.convert_colorspace_enumerator_item(
+ creator_attrs[colorspace_key], config_items)
+ converted_color_data[colorspace_key] = color_data
+ else:
+ converted_color_data[colorspace_key] = None
+
+ # add colorspace to config data
+ if converted_color_data["working_colorspace"]:
+ config_data["colorspace"] = (
+ converted_color_data["working_colorspace"]["name"]
+ )
+
+ # create lut representation data
+ lut_repre = {
+ "name": lut_repre_name,
+ "output": output_name,
+ "ext": ext.lstrip("."),
+ "files": file_name,
+ "stagingDir": os.path.dirname(file_url),
+ "tags": []
+ }
+ instance.data.update({
+ "representations": [lut_repre],
+ "source": file_url,
+ "ocioLookWorkingSpace": converted_color_data["working_colorspace"],
+ "ocioLookItems": [
+ {
+ "name": lut_repre_name,
+ "ext": ext.lstrip("."),
+ "input_colorspace": converted_color_data[
+ "input_colorspace"],
+ "output_colorspace": converted_color_data[
+ "output_colorspace"],
+ "direction": creator_attrs["direction"],
+ "interpolation": creator_attrs["interpolation"],
+ "config_data": config_data
+ }
+ ],
+ })
+
+ self.log.debug(pformat(instance.data))
diff --git a/openpype/hosts/traypublisher/plugins/publish/collect_explicit_colorspace.py b/openpype/hosts/traypublisher/plugins/publish/collect_explicit_colorspace.py
index eb7fbd87a0..5db2b0cbad 100644
--- a/openpype/hosts/traypublisher/plugins/publish/collect_explicit_colorspace.py
+++ b/openpype/hosts/traypublisher/plugins/publish/collect_explicit_colorspace.py
@@ -1,6 +1,8 @@
import pyblish.api
-from openpype.pipeline import registered_host
-from openpype.pipeline import publish
+from openpype.pipeline import (
+ publish,
+ registered_host
+)
from openpype.lib import EnumDef
from openpype.pipeline import colorspace
@@ -13,11 +15,14 @@ class CollectColorspace(pyblish.api.InstancePlugin,
label = "Choose representation colorspace"
order = pyblish.api.CollectorOrder + 0.49
hosts = ["traypublisher"]
+ families = ["render", "plate", "reference", "image", "online"]
+ enabled = False
colorspace_items = [
(None, "Don't override")
]
colorspace_attr_show = False
+ config_items = None
def process(self, instance):
values = self.get_attr_values_from_data(instance.data)
@@ -48,10 +53,14 @@ class CollectColorspace(pyblish.api.InstancePlugin,
if config_data:
filepath = config_data["path"]
config_items = colorspace.get_ocio_config_colorspaces(filepath)
- cls.colorspace_items.extend((
- (name, name) for name in config_items.keys()
- ))
- cls.colorspace_attr_show = True
+ labeled_colorspaces = colorspace.get_colorspaces_enumerator_items(
+ config_items,
+ include_aliases=True,
+ include_roles=True
+ )
+ cls.config_items = config_items
+ cls.colorspace_items.extend(labeled_colorspaces)
+ cls.enabled = True
@classmethod
def get_attribute_defs(cls):
@@ -60,7 +69,6 @@ class CollectColorspace(pyblish.api.InstancePlugin,
"colorspace",
cls.colorspace_items,
default="Don't override",
- label="Override Colorspace",
- hidden=not cls.colorspace_attr_show
+ label="Override Colorspace"
)
]
diff --git a/openpype/hosts/traypublisher/plugins/publish/extract_colorspace_look.py b/openpype/hosts/traypublisher/plugins/publish/extract_colorspace_look.py
new file mode 100644
index 0000000000..f94bbc7a49
--- /dev/null
+++ b/openpype/hosts/traypublisher/plugins/publish/extract_colorspace_look.py
@@ -0,0 +1,45 @@
+import os
+import json
+import pyblish.api
+from openpype.pipeline import publish
+
+
+class ExtractColorspaceLook(publish.Extractor,
+ publish.OpenPypePyblishPluginMixin):
+ """Extract OCIO colorspace look from LUT file
+ """
+
+ label = "Extract Colorspace Look"
+ order = pyblish.api.ExtractorOrder
+ hosts = ["traypublisher"]
+ families = ["ociolook"]
+
+ def process(self, instance):
+ ociolook_items = instance.data["ocioLookItems"]
+ ociolook_working_color = instance.data["ocioLookWorkingSpace"]
+ staging_dir = self.staging_dir(instance)
+
+ # create ociolook file attributes
+ ociolook_file_name = "ocioLookFile.json"
+ ociolook_file_content = {
+ "version": 1,
+ "data": {
+ "ocioLookItems": ociolook_items,
+ "ocioLookWorkingSpace": ociolook_working_color
+ }
+ }
+
+ # write ociolook content into json file saved in staging dir
+ file_url = os.path.join(staging_dir, ociolook_file_name)
+ with open(file_url, "w") as f_:
+ json.dump(ociolook_file_content, f_, indent=4)
+
+ # create lut representation data
+ ociolook_repre = {
+ "name": "ocioLookFile",
+ "ext": "json",
+ "files": ociolook_file_name,
+ "stagingDir": staging_dir,
+ "tags": []
+ }
+ instance.data["representations"].append(ociolook_repre)
diff --git a/openpype/hosts/traypublisher/plugins/publish/validate_colorspace.py b/openpype/hosts/traypublisher/plugins/publish/validate_colorspace.py
index 75b41cf606..03f9f299b2 100644
--- a/openpype/hosts/traypublisher/plugins/publish/validate_colorspace.py
+++ b/openpype/hosts/traypublisher/plugins/publish/validate_colorspace.py
@@ -18,6 +18,7 @@ class ValidateColorspace(pyblish.api.InstancePlugin,
label = "Validate representation colorspace"
order = pyblish.api.ValidatorOrder
hosts = ["traypublisher"]
+ families = ["render", "plate", "reference", "image", "online"]
def process(self, instance):
diff --git a/openpype/hosts/traypublisher/plugins/publish/validate_colorspace_look.py b/openpype/hosts/traypublisher/plugins/publish/validate_colorspace_look.py
new file mode 100644
index 0000000000..548ce9d15a
--- /dev/null
+++ b/openpype/hosts/traypublisher/plugins/publish/validate_colorspace_look.py
@@ -0,0 +1,89 @@
+import pyblish.api
+
+from openpype.pipeline import (
+ publish,
+ PublishValidationError
+)
+
+
+class ValidateColorspaceLook(pyblish.api.InstancePlugin,
+ publish.OpenPypePyblishPluginMixin):
+ """Validate colorspace look attributes"""
+
+ label = "Validate colorspace look attributes"
+ order = pyblish.api.ValidatorOrder
+ hosts = ["traypublisher"]
+ families = ["ociolook"]
+
+ def process(self, instance):
+ create_context = instance.context.data["create_context"]
+ created_instance = create_context.get_instance_by_id(
+ instance.data["instance_id"])
+ creator_defs = created_instance.creator_attribute_defs
+
+ ociolook_working_color = instance.data.get("ocioLookWorkingSpace")
+ ociolook_items = instance.data.get("ocioLookItems", [])
+
+ creator_defs_by_key = {_def.key: _def.label for _def in creator_defs}
+
+ not_set_keys = {}
+ if not ociolook_working_color:
+ not_set_keys["working_colorspace"] = creator_defs_by_key[
+ "working_colorspace"]
+
+ for ociolook_item in ociolook_items:
+ item_not_set_keys = self.validate_colorspace_set_attrs(
+ ociolook_item, creator_defs_by_key)
+ if item_not_set_keys:
+ not_set_keys[ociolook_item["name"]] = item_not_set_keys
+
+ if not_set_keys:
+ message = (
+ "Colorspace look attributes are not set: \n"
+ )
+ for key, value in not_set_keys.items():
+ if isinstance(value, list):
+ values_string = "\n\t- ".join(value)
+ message += f"\n\t{key}:\n\t- {values_string}"
+ else:
+ message += f"\n\t{value}"
+
+ raise PublishValidationError(
+ title="Colorspace Look attributes",
+ message=message,
+ description=message
+ )
+
+ def validate_colorspace_set_attrs(
+ self,
+ ociolook_item,
+ creator_defs_by_key
+ ):
+ """Validate colorspace look attributes"""
+
+ self.log.debug(f"Validate colorspace look attributes: {ociolook_item}")
+
+ check_keys = [
+ "input_colorspace",
+ "output_colorspace",
+ "direction",
+ "interpolation"
+ ]
+
+ not_set_keys = []
+ for key in check_keys:
+ if ociolook_item[key]:
+ # key is set and it is correct
+ continue
+
+ def_label = creator_defs_by_key.get(key)
+
+ if not def_label:
+ # raise since key is not recognized by creator defs
+ raise KeyError(
+ f"Colorspace look attribute '{key}' is not "
+ f"recognized by creator attributes: {creator_defs_by_key}"
+ )
+ not_set_keys.append(def_label)
+
+ return not_set_keys
diff --git a/openpype/modules/timers_manager/plugins/publish/start_timer.py b/openpype/modules/timers_manager/plugins/publish/start_timer.py
index 6408327ca1..19a67292f5 100644
--- a/openpype/modules/timers_manager/plugins/publish/start_timer.py
+++ b/openpype/modules/timers_manager/plugins/publish/start_timer.py
@@ -6,8 +6,6 @@ Requires:
import pyblish.api
-from openpype.pipeline import legacy_io
-
class StartTimer(pyblish.api.ContextPlugin):
label = "Start Timer"
@@ -25,9 +23,9 @@ class StartTimer(pyblish.api.ContextPlugin):
self.log.debug("Publish is not affecting running timers.")
return
- project_name = legacy_io.active_project()
- asset_name = legacy_io.Session.get("AVALON_ASSET")
- task_name = legacy_io.Session.get("AVALON_TASK")
+ project_name = context.data["projectName"]
+ asset_name = context.data.get("asset")
+ task_name = context.data.get("task")
if not project_name or not asset_name or not task_name:
self.log.info((
"Current context does not contain all"
diff --git a/openpype/pipeline/colorspace.py b/openpype/pipeline/colorspace.py
index 2800050496..9f720f6ae9 100644
--- a/openpype/pipeline/colorspace.py
+++ b/openpype/pipeline/colorspace.py
@@ -1,4 +1,3 @@
-from copy import deepcopy
import re
import os
import json
@@ -7,6 +6,7 @@ import functools
import platform
import tempfile
import warnings
+from copy import deepcopy
from openpype import PACKAGE_DIR
from openpype.settings import get_project_settings
@@ -356,7 +356,10 @@ def parse_colorspace_from_filepath(
"Must provide `config_path` if `colorspaces` is not provided."
)
- colorspaces = colorspaces or get_ocio_config_colorspaces(config_path)
+ colorspaces = (
+ colorspaces
+ or get_ocio_config_colorspaces(config_path)["colorspaces"]
+ )
underscored_colorspaces = {
key.replace(" ", "_"): key for key in colorspaces
if " " in key
@@ -393,7 +396,7 @@ def validate_imageio_colorspace_in_config(config_path, colorspace_name):
Returns:
bool: True if exists
"""
- colorspaces = get_ocio_config_colorspaces(config_path)
+ colorspaces = get_ocio_config_colorspaces(config_path)["colorspaces"]
if colorspace_name not in colorspaces:
raise KeyError(
"Missing colorspace '{}' in config file '{}'".format(
@@ -530,6 +533,157 @@ def get_ocio_config_colorspaces(config_path):
return CachedData.ocio_config_colorspaces[config_path]
+def convert_colorspace_enumerator_item(
+ colorspace_enum_item,
+ config_items
+):
+ """Convert colorspace enumerator item to dictionary
+
+ Args:
+ colorspace_item (str): colorspace and family in couple
+ config_items (dict[str,dict]): colorspace data
+
+ Returns:
+ dict: colorspace data
+ """
+ if "::" not in colorspace_enum_item:
+ return None
+
+ # split string with `::` separator and set first as key and second as value
+ item_type, item_name = colorspace_enum_item.split("::")
+
+ item_data = None
+ if item_type == "aliases":
+ # loop through all colorspaces and find matching alias
+ for name, _data in config_items.get("colorspaces", {}).items():
+ if item_name in _data.get("aliases", []):
+ item_data = deepcopy(_data)
+ item_data.update({
+ "name": name,
+ "type": "colorspace"
+ })
+ break
+ else:
+ # find matching colorspace item found in labeled_colorspaces
+ item_data = config_items.get(item_type, {}).get(item_name)
+ if item_data:
+ item_data = deepcopy(item_data)
+ item_data.update({
+ "name": item_name,
+ "type": item_type
+ })
+
+ # raise exception if item is not found
+ if not item_data:
+ message_config_keys = ", ".join(
+ "'{}':{}".format(
+ key,
+ set(config_items.get(key, {}).keys())
+ ) for key in config_items.keys()
+ )
+ raise KeyError(
+ "Missing colorspace item '{}' in config data: [{}]".format(
+ colorspace_enum_item, message_config_keys
+ )
+ )
+
+ return item_data
+
+
+def get_colorspaces_enumerator_items(
+ config_items,
+ include_aliases=False,
+ include_looks=False,
+ include_roles=False,
+ include_display_views=False
+):
+ """Get all colorspace data with labels
+
+ Wrapper function for aggregating all names and its families.
+ Families can be used for building menu and submenus in gui.
+
+ Args:
+ config_items (dict[str,dict]): colorspace data coming from
+ `get_ocio_config_colorspaces` function
+ include_aliases (bool): include aliases in result
+ include_looks (bool): include looks in result
+ include_roles (bool): include roles in result
+
+ Returns:
+ list[tuple[str,str]]: colorspace and family in couple
+ """
+ labeled_colorspaces = []
+ aliases = set()
+ colorspaces = set()
+ looks = set()
+ roles = set()
+ display_views = set()
+ for items_type, colorspace_items in config_items.items():
+ if items_type == "colorspaces":
+ for color_name, color_data in colorspace_items.items():
+ if color_data.get("aliases"):
+ aliases.update([
+ (
+ "aliases::{}".format(alias_name),
+ "[alias] {} ({})".format(alias_name, color_name)
+ )
+ for alias_name in color_data["aliases"]
+ ])
+ colorspaces.add((
+ "{}::{}".format(items_type, color_name),
+ "[colorspace] {}".format(color_name)
+ ))
+
+ elif items_type == "looks":
+ looks.update([
+ (
+ "{}::{}".format(items_type, name),
+ "[look] {} ({})".format(name, role_data["process_space"])
+ )
+ for name, role_data in colorspace_items.items()
+ ])
+
+ elif items_type == "displays_views":
+ display_views.update([
+ (
+ "{}::{}".format(items_type, name),
+ "[view (display)] {}".format(name)
+ )
+ for name, _ in colorspace_items.items()
+ ])
+
+ elif items_type == "roles":
+ roles.update([
+ (
+ "{}::{}".format(items_type, name),
+ "[role] {} ({})".format(name, role_data["colorspace"])
+ )
+ for name, role_data in colorspace_items.items()
+ ])
+
+ if roles and include_roles:
+ roles = sorted(roles, key=lambda x: x[0])
+ labeled_colorspaces.extend(roles)
+
+ # add colorspaces as second so it is not first in menu
+ colorspaces = sorted(colorspaces, key=lambda x: x[0])
+ labeled_colorspaces.extend(colorspaces)
+
+ if aliases and include_aliases:
+ aliases = sorted(aliases, key=lambda x: x[0])
+ labeled_colorspaces.extend(aliases)
+
+ if looks and include_looks:
+ looks = sorted(looks, key=lambda x: x[0])
+ labeled_colorspaces.extend(looks)
+
+ if display_views and include_display_views:
+ display_views = sorted(display_views, key=lambda x: x[0])
+ labeled_colorspaces.extend(display_views)
+
+ return labeled_colorspaces
+
+
# TODO: remove this in future - backward compatibility
@deprecated("_get_wrapped_with_subprocess")
def get_colorspace_data_subprocess(config_path):
diff --git a/openpype/plugins/load/push_to_library.py b/openpype/plugins/load/push_to_library.py
index dd7291e686..5befc5eb9d 100644
--- a/openpype/plugins/load/push_to_library.py
+++ b/openpype/plugins/load/push_to_library.py
@@ -1,6 +1,6 @@
import os
-from openpype import PACKAGE_DIR
+from openpype import PACKAGE_DIR, AYON_SERVER_ENABLED
from openpype.lib import get_openpype_execute_args, run_detached_process
from openpype.pipeline import load
from openpype.pipeline.load import LoadError
@@ -32,12 +32,22 @@ class PushToLibraryProject(load.SubsetLoaderPlugin):
raise LoadError("Please select only one item")
context = tuple(filtered_contexts)[0]
- push_tool_script_path = os.path.join(
- PACKAGE_DIR,
- "tools",
- "push_to_project",
- "app.py"
- )
+
+ if AYON_SERVER_ENABLED:
+ push_tool_script_path = os.path.join(
+ PACKAGE_DIR,
+ "tools",
+ "ayon_push_to_project",
+ "main.py"
+ )
+ else:
+ push_tool_script_path = os.path.join(
+ PACKAGE_DIR,
+ "tools",
+ "push_to_project",
+ "app.py"
+ )
+
project_doc = context["project"]
version_doc = context["version"]
project_name = project_doc["name"]
diff --git a/openpype/plugins/publish/integrate.py b/openpype/plugins/publish/integrate.py
index ce24dad1b5..f2ae470d40 100644
--- a/openpype/plugins/publish/integrate.py
+++ b/openpype/plugins/publish/integrate.py
@@ -107,6 +107,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
"rig",
"plate",
"look",
+ "ociolook",
"audio",
"yetiRig",
"yeticache",
diff --git a/openpype/scripts/ocio_wrapper.py b/openpype/scripts/ocio_wrapper.py
index c362670126..fa231cd047 100644
--- a/openpype/scripts/ocio_wrapper.py
+++ b/openpype/scripts/ocio_wrapper.py
@@ -106,11 +106,47 @@ def _get_colorspace_data(config_path):
config = ocio.Config().CreateFromFile(str(config_path))
- return {
- c_.getName(): c_.getFamily()
- for c_ in config.getColorSpaces()
+ colorspace_data = {
+ "roles": {},
+ "colorspaces": {
+ color.getName(): {
+ "family": color.getFamily(),
+ "categories": list(color.getCategories()),
+ "aliases": list(color.getAliases()),
+ "equalitygroup": color.getEqualityGroup(),
+ }
+ for color in config.getColorSpaces()
+ },
+ "displays_views": {
+ f"{view} ({display})": {
+ "display": display,
+ "view": view
+
+ }
+ for display in config.getDisplays()
+ for view in config.getViews(display)
+ },
+ "looks": {}
}
+ # add looks
+ looks = config.getLooks()
+ if looks:
+ colorspace_data["looks"] = {
+ look.getName(): {"process_space": look.getProcessSpace()}
+ for look in looks
+ }
+
+ # add roles
+ roles = config.getRoles()
+ if roles:
+ colorspace_data["roles"] = {
+ role: {"colorspace": colorspace}
+ for (role, colorspace) in roles
+ }
+
+ return colorspace_data
+
@config.command(
name="get_views",
diff --git a/openpype/settings/defaults/project_settings/blender.json b/openpype/settings/defaults/project_settings/blender.json
index f3eb31174f..7fb8c333a6 100644
--- a/openpype/settings/defaults/project_settings/blender.json
+++ b/openpype/settings/defaults/project_settings/blender.json
@@ -89,10 +89,10 @@
"optional": true,
"active": false
},
- "ExtractABC": {
+ "ExtractModelABC": {
"enabled": true,
"optional": true,
- "active": false
+ "active": true
},
"ExtractBlendAnimation": {
"enabled": true,
diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json
index 7f1a8a915b..b84c663e6c 100644
--- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json
+++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json
@@ -181,12 +181,12 @@
"name": "template_publish_plugin",
"template_data": [
{
- "key": "ExtractFBX",
- "label": "Extract FBX (model and rig)"
+ "key": "ExtractModelABC",
+ "label": "Extract ABC (model)"
},
{
- "key": "ExtractABC",
- "label": "Extract ABC (model and pointcache)"
+ "key": "ExtractFBX",
+ "label": "Extract FBX (model and rig)"
},
{
"key": "ExtractBlendAnimation",
diff --git a/openpype/tools/attribute_defs/widgets.py b/openpype/tools/attribute_defs/widgets.py
index d9c55f4a64..91b5b229de 100644
--- a/openpype/tools/attribute_defs/widgets.py
+++ b/openpype/tools/attribute_defs/widgets.py
@@ -251,6 +251,30 @@ class LabelAttrWidget(_BaseAttrDefWidget):
self.main_layout.addWidget(input_widget, 0)
+class ClickableLineEdit(QtWidgets.QLineEdit):
+ clicked = QtCore.Signal()
+
+ def __init__(self, text, parent):
+ super(ClickableLineEdit, self).__init__(parent)
+ self.setText(text)
+ self.setReadOnly(True)
+
+ self._mouse_pressed = False
+
+ def mousePressEvent(self, event):
+ if event.button() == QtCore.Qt.LeftButton:
+ self._mouse_pressed = True
+ super(ClickableLineEdit, self).mousePressEvent(event)
+
+ def mouseReleaseEvent(self, event):
+ if self._mouse_pressed:
+ self._mouse_pressed = False
+ if self.rect().contains(event.pos()):
+ self.clicked.emit()
+
+ super(ClickableLineEdit, self).mouseReleaseEvent(event)
+
+
class NumberAttrWidget(_BaseAttrDefWidget):
def _ui_init(self):
decimals = self.attr_def.decimals
@@ -270,20 +294,37 @@ class NumberAttrWidget(_BaseAttrDefWidget):
input_widget.setButtonSymbols(
QtWidgets.QAbstractSpinBox.ButtonSymbols.NoButtons
)
+ input_line_edit = input_widget.lineEdit()
+ input_widget.installEventFilter(self)
+
+ multisel_widget = ClickableLineEdit("< Multiselection >", self)
input_widget.valueChanged.connect(self._on_value_change)
+ multisel_widget.clicked.connect(self._on_multi_click)
self._input_widget = input_widget
+ self._input_line_edit = input_line_edit
+ self._multisel_widget = multisel_widget
+ self._last_multivalue = None
+ self._multivalue = False
self.main_layout.addWidget(input_widget, 0)
+ self.main_layout.addWidget(multisel_widget, 0)
- def _on_value_change(self, new_value):
- self.value_changed.emit(new_value, self.attr_def.id)
+ def eventFilter(self, obj, event):
+ if (
+ self._multivalue
+ and obj is self._input_widget
+ and event.type() == QtCore.QEvent.FocusOut
+ ):
+ self._set_multiselection_visible(True)
+ return False
def current_value(self):
return self._input_widget.value()
def set_value(self, value, multivalue=False):
+ self._last_multivalue = None
if multivalue:
set_value = set(value)
if None in set_value:
@@ -291,13 +332,47 @@ class NumberAttrWidget(_BaseAttrDefWidget):
set_value.add(self.attr_def.default)
if len(set_value) > 1:
- self._input_widget.setSpecialValueText("Multiselection")
+ self._last_multivalue = next(iter(set_value), None)
+ self._set_multiselection_visible(True)
+ self._multivalue = True
return
value = tuple(set_value)[0]
+ self._multivalue = False
+ self._set_multiselection_visible(False)
+
if self.current_value != value:
self._input_widget.setValue(value)
+ def _on_value_change(self, new_value):
+ self._multivalue = False
+ self.value_changed.emit(new_value, self.attr_def.id)
+
+ def _on_multi_click(self):
+ self._set_multiselection_visible(False, True)
+
+ def _set_multiselection_visible(self, visible, change_focus=False):
+ self._input_widget.setVisible(not visible)
+ self._multisel_widget.setVisible(visible)
+ if visible:
+ return
+
+ # Change value once user clicked on the input field
+ if self._last_multivalue is None:
+ value = self.attr_def.default
+ else:
+ value = self._last_multivalue
+ self._input_widget.blockSignals(True)
+ self._input_widget.setValue(value)
+ self._input_widget.blockSignals(False)
+ if not change_focus:
+ return
+ # Change focus to input field and move cursor to the end
+ self._input_widget.setFocus(QtCore.Qt.MouseFocusReason)
+ self._input_line_edit.setCursorPosition(
+ len(self._input_line_edit.text())
+ )
+
class TextAttrWidget(_BaseAttrDefWidget):
def _ui_init(self):
diff --git a/openpype/tools/ayon_loader/models/actions.py b/openpype/tools/ayon_loader/models/actions.py
index 3edb04e9eb..177335a933 100644
--- a/openpype/tools/ayon_loader/models/actions.py
+++ b/openpype/tools/ayon_loader/models/actions.py
@@ -447,11 +447,12 @@ class LoaderActionsModel:
project_doc["code"] = project_doc["data"]["code"]
for version_doc in version_docs:
+ version_id = version_doc["_id"]
product_id = version_doc["parent"]
product_doc = product_docs_by_id[product_id]
folder_id = product_doc["parent"]
folder_doc = folder_docs_by_id[folder_id]
- version_context_by_id[product_id] = {
+ version_context_by_id[version_id] = {
"project": project_doc,
"asset": folder_doc,
"subset": product_doc,
diff --git a/openpype/tools/ayon_push_to_project/__init__.py b/openpype/tools/ayon_push_to_project/__init__.py
new file mode 100644
index 0000000000..83df110c96
--- /dev/null
+++ b/openpype/tools/ayon_push_to_project/__init__.py
@@ -0,0 +1,6 @@
+from .control import PushToContextController
+
+
+__all__ = (
+ "PushToContextController",
+)
diff --git a/openpype/tools/ayon_push_to_project/control.py b/openpype/tools/ayon_push_to_project/control.py
new file mode 100644
index 0000000000..0a19136701
--- /dev/null
+++ b/openpype/tools/ayon_push_to_project/control.py
@@ -0,0 +1,344 @@
+import threading
+
+from openpype.client import (
+ get_asset_by_id,
+ get_subset_by_id,
+ get_version_by_id,
+ get_representations,
+)
+from openpype.settings import get_project_settings
+from openpype.lib import prepare_template_data
+from openpype.lib.events import QueuedEventSystem
+from openpype.pipeline.create import get_subset_name_template
+from openpype.tools.ayon_utils.models import ProjectsModel, HierarchyModel
+
+from .models import (
+ PushToProjectSelectionModel,
+ UserPublishValuesModel,
+ IntegrateModel,
+)
+
+
+class PushToContextController:
+ def __init__(self, project_name=None, version_id=None):
+ self._event_system = self._create_event_system()
+
+ self._projects_model = ProjectsModel(self)
+ self._hierarchy_model = HierarchyModel(self)
+ self._integrate_model = IntegrateModel(self)
+
+ self._selection_model = PushToProjectSelectionModel(self)
+ self._user_values = UserPublishValuesModel(self)
+
+ self._src_project_name = None
+ self._src_version_id = None
+ self._src_asset_doc = None
+ self._src_subset_doc = None
+ self._src_version_doc = None
+ self._src_label = None
+
+ self._submission_enabled = False
+ self._process_thread = None
+ self._process_item_id = None
+
+ self.set_source(project_name, version_id)
+
+ # Events system
+ def emit_event(self, topic, data=None, source=None):
+ """Use implemented event system to trigger event."""
+
+ if data is None:
+ data = {}
+ self._event_system.emit(topic, data, source)
+
+ def register_event_callback(self, topic, callback):
+ self._event_system.add_callback(topic, callback)
+
+ def set_source(self, project_name, version_id):
+ """Set source project and version.
+
+ Args:
+ project_name (Union[str, None]): Source project name.
+ version_id (Union[str, None]): Source version id.
+ """
+
+ if (
+ project_name == self._src_project_name
+ and version_id == self._src_version_id
+ ):
+ return
+
+ self._src_project_name = project_name
+ self._src_version_id = version_id
+ self._src_label = None
+ asset_doc = None
+ subset_doc = None
+ version_doc = None
+ if project_name and version_id:
+ version_doc = get_version_by_id(project_name, version_id)
+
+ if version_doc:
+ subset_doc = get_subset_by_id(project_name, version_doc["parent"])
+
+ if subset_doc:
+ asset_doc = get_asset_by_id(project_name, subset_doc["parent"])
+
+ self._src_asset_doc = asset_doc
+ self._src_subset_doc = subset_doc
+ self._src_version_doc = version_doc
+ if asset_doc:
+ self._user_values.set_new_folder_name(asset_doc["name"])
+ variant = self._get_src_variant()
+ if variant:
+ self._user_values.set_variant(variant)
+
+ comment = version_doc["data"].get("comment")
+ if comment:
+ self._user_values.set_comment(comment)
+
+ self._emit_event(
+ "source.changed",
+ {
+ "project_name": project_name,
+ "version_id": version_id
+ }
+ )
+
+ def get_source_label(self):
+ """Get source label.
+
+ Returns:
+            str: Label describing source project and version as a path.
+ """
+
+ if self._src_label is None:
+ self._src_label = self._prepare_source_label()
+ return self._src_label
+
+ def get_project_items(self, sender=None):
+ return self._projects_model.get_project_items(sender)
+
+ def get_folder_items(self, project_name, sender=None):
+ return self._hierarchy_model.get_folder_items(project_name, sender)
+
+ def get_task_items(self, project_name, folder_id, sender=None):
+ return self._hierarchy_model.get_task_items(
+ project_name, folder_id, sender
+ )
+
+ def get_user_values(self):
+ return self._user_values.get_data()
+
+ def set_user_value_folder_name(self, folder_name):
+ self._user_values.set_new_folder_name(folder_name)
+ self._invalidate()
+
+ def set_user_value_variant(self, variant):
+ self._user_values.set_variant(variant)
+ self._invalidate()
+
+ def set_user_value_comment(self, comment):
+ self._user_values.set_comment(comment)
+ self._invalidate()
+
+ def set_selected_project(self, project_name):
+ self._selection_model.set_selected_project(project_name)
+ self._invalidate()
+
+ def set_selected_folder(self, folder_id):
+ self._selection_model.set_selected_folder(folder_id)
+ self._invalidate()
+
+ def set_selected_task(self, task_id, task_name):
+ self._selection_model.set_selected_task(task_id, task_name)
+
+ def get_process_item_status(self, item_id):
+ return self._integrate_model.get_item_status(item_id)
+
+ # Processing methods
+ def submit(self, wait=True):
+ if not self._submission_enabled:
+ return
+
+ if self._process_thread is not None:
+ return
+
+ item_id = self._integrate_model.create_process_item(
+ self._src_project_name,
+ self._src_version_id,
+ self._selection_model.get_selected_project_name(),
+ self._selection_model.get_selected_folder_id(),
+ self._selection_model.get_selected_task_name(),
+ self._user_values.variant,
+ comment=self._user_values.comment,
+ new_folder_name=self._user_values.new_folder_name,
+ dst_version=1
+ )
+
+ self._process_item_id = item_id
+ self._emit_event("submit.started")
+ if wait:
+ self._submit_callback()
+ self._process_item_id = None
+ return item_id
+
+ thread = threading.Thread(target=self._submit_callback)
+ self._process_thread = thread
+ thread.start()
+ return item_id
+
+ def wait_for_process_thread(self):
+ if self._process_thread is None:
+ return
+ self._process_thread.join()
+ self._process_thread = None
+
+ def _prepare_source_label(self):
+ if not self._src_project_name or not self._src_version_id:
+ return "Source is not defined"
+
+ asset_doc = self._src_asset_doc
+ if not asset_doc:
+ return "Source is invalid"
+
+ folder_path_parts = list(asset_doc["data"]["parents"])
+ folder_path_parts.append(asset_doc["name"])
+ folder_path = "/".join(folder_path_parts)
+ subset_doc = self._src_subset_doc
+ version_doc = self._src_version_doc
+ return "Source: {}/{}/{}/v{:0>3}".format(
+ self._src_project_name,
+ folder_path,
+ subset_doc["name"],
+ version_doc["name"]
+ )
+
+ def _get_task_info_from_repre_docs(self, asset_doc, repre_docs):
+ asset_tasks = asset_doc["data"].get("tasks") or {}
+ found_comb = []
+ for repre_doc in repre_docs:
+ context = repre_doc["context"]
+ task_info = context.get("task")
+ if task_info is None:
+ continue
+
+ task_name = None
+ task_type = None
+ if isinstance(task_info, str):
+ task_name = task_info
+ asset_task_info = asset_tasks.get(task_info) or {}
+ task_type = asset_task_info.get("type")
+
+ elif isinstance(task_info, dict):
+ task_name = task_info.get("name")
+ task_type = task_info.get("type")
+
+ if task_name and task_type:
+ return task_name, task_type
+
+ if task_name:
+ found_comb.append((task_name, task_type))
+
+ for task_name, task_type in found_comb:
+ return task_name, task_type
+ return None, None
+
+ def _get_src_variant(self):
+ project_name = self._src_project_name
+ version_doc = self._src_version_doc
+ asset_doc = self._src_asset_doc
+ repre_docs = get_representations(
+ project_name, version_ids=[version_doc["_id"]]
+ )
+ task_name, task_type = self._get_task_info_from_repre_docs(
+ asset_doc, repre_docs
+ )
+
+ project_settings = get_project_settings(project_name)
+ subset_doc = self._src_subset_doc
+ family = subset_doc["data"].get("family")
+ if not family:
+ family = subset_doc["data"]["families"][0]
+ template = get_subset_name_template(
+ self._src_project_name,
+ family,
+ task_name,
+ task_type,
+ None,
+ project_settings=project_settings
+ )
+ template_low = template.lower()
+ variant_placeholder = "{variant}"
+ if (
+ variant_placeholder not in template_low
+ or (not task_name and "{task" in template_low)
+ ):
+ return ""
+
+ idx = template_low.index(variant_placeholder)
+ template_s = template[:idx]
+ template_e = template[idx + len(variant_placeholder):]
+ fill_data = prepare_template_data({
+ "family": family,
+ "task": task_name
+ })
+ try:
+ subset_s = template_s.format(**fill_data)
+ subset_e = template_e.format(**fill_data)
+ except Exception as exc:
+ print("Failed format", exc)
+ return ""
+
+ subset_name = self._src_subset_doc["name"]
+ if (
+ (subset_s and not subset_name.startswith(subset_s))
+ or (subset_e and not subset_name.endswith(subset_e))
+ ):
+ return ""
+
+ if subset_s:
+ subset_name = subset_name[len(subset_s):]
+ if subset_e:
+            subset_name = subset_name[:-len(subset_e)]
+ return subset_name
+
+ def _check_submit_validations(self):
+ if not self._user_values.is_valid:
+ return False
+
+ if not self._selection_model.get_selected_project_name():
+ return False
+
+ if (
+ not self._user_values.new_folder_name
+ and not self._selection_model.get_selected_folder_id()
+ ):
+ return False
+ return True
+
+ def _invalidate(self):
+ submission_enabled = self._check_submit_validations()
+ if submission_enabled == self._submission_enabled:
+ return
+ self._submission_enabled = submission_enabled
+ self._emit_event(
+ "submission.enabled.changed",
+ {"enabled": submission_enabled}
+ )
+
+ def _submit_callback(self):
+ process_item_id = self._process_item_id
+ if process_item_id is None:
+ return
+ self._integrate_model.integrate_item(process_item_id)
+ self._emit_event("submit.finished", {})
+ if process_item_id == self._process_item_id:
+ self._process_item_id = None
+
+ def _emit_event(self, topic, data=None):
+ if data is None:
+ data = {}
+ self.emit_event(topic, data, "controller")
+
+ def _create_event_system(self):
+ return QueuedEventSystem()
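+
+
+# A minimal non-UI usage sketch; the project names, version id and folder id
+# below are hypothetical placeholders and the destination context is assumed
+# to exist:
+#   controller = PushToContextController("src_project", "<version id>")
+#   controller.set_selected_project("dst_project")
+#   controller.set_selected_folder("<folder id>")
+#   controller.set_user_value_variant("Main")
+#   item_id = controller.submit(wait=True)
+#   controller.get_process_item_status(item_id)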
diff --git a/openpype/tools/ayon_push_to_project/main.py b/openpype/tools/ayon_push_to_project/main.py
new file mode 100644
index 0000000000..e36940e488
--- /dev/null
+++ b/openpype/tools/ayon_push_to_project/main.py
@@ -0,0 +1,32 @@
+import click
+
+from openpype.tools.utils import get_openpype_qt_app
+from openpype.tools.ayon_push_to_project.ui import PushToContextSelectWindow
+
+
+def main_show(project_name, version_id):
+ app = get_openpype_qt_app()
+
+ window = PushToContextSelectWindow()
+ window.show()
+ window.set_source(project_name, version_id)
+
+ app.exec_()
+
+
+@click.command()
+@click.option("--project", help="Source project name")
+@click.option("--version", help="Source version id")
+def main(project, version):
+ """Run PushToProject tool to integrate version in different project.
+
+ Args:
+ project (str): Source project name.
+ version (str): Version id.
+ """
+
+ main_show(project, version)
+
+
+if __name__ == "__main__":
+ main()
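+
+
+# Example invocation (values are hypothetical placeholders), assuming the
+# module is run from an environment where OpenPype and Qt are importable:
+#   python -m openpype.tools.ayon_push_to_project.main \
+#       --project src_project --version 0123456789abcdef01234567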
diff --git a/openpype/tools/ayon_push_to_project/models/__init__.py b/openpype/tools/ayon_push_to_project/models/__init__.py
new file mode 100644
index 0000000000..99355b4296
--- /dev/null
+++ b/openpype/tools/ayon_push_to_project/models/__init__.py
@@ -0,0 +1,10 @@
+from .selection import PushToProjectSelectionModel
+from .user_values import UserPublishValuesModel
+from .integrate import IntegrateModel
+
+
+__all__ = (
+ "PushToProjectSelectionModel",
+ "UserPublishValuesModel",
+ "IntegrateModel",
+)
diff --git a/openpype/tools/ayon_push_to_project/models/integrate.py b/openpype/tools/ayon_push_to_project/models/integrate.py
new file mode 100644
index 0000000000..976d8cb4f0
--- /dev/null
+++ b/openpype/tools/ayon_push_to_project/models/integrate.py
@@ -0,0 +1,1214 @@
+import os
+import re
+import copy
+import socket
+import itertools
+import datetime
+import sys
+import traceback
+import uuid
+
+from bson.objectid import ObjectId
+
+from openpype.client import (
+ get_project,
+ get_assets,
+ get_asset_by_id,
+ get_subset_by_id,
+ get_subset_by_name,
+ get_version_by_id,
+ get_last_version_by_subset_id,
+ get_version_by_name,
+ get_representations,
+)
+from openpype.client.operations import (
+ OperationsSession,
+ new_asset_document,
+ new_subset_document,
+ new_version_doc,
+ new_representation_doc,
+ prepare_version_update_data,
+ prepare_representation_update_data,
+)
+from openpype.modules import ModulesManager
+from openpype.lib import (
+ StringTemplate,
+ get_openpype_username,
+ get_formatted_current_time,
+ source_hash,
+)
+
+from openpype.lib.file_transaction import FileTransaction
+from openpype.settings import get_project_settings
+from openpype.pipeline import Anatomy
+from openpype.pipeline.version_start import get_versioning_start
+from openpype.pipeline.template_data import get_template_data
+from openpype.pipeline.publish import get_publish_template_name
+from openpype.pipeline.create import get_subset_name
+
+UNKNOWN = object()
+
+
+class PushToProjectError(Exception):
+ pass
+
+
+class FileItem(object):
+ def __init__(self, path):
+ self.path = path
+
+ @property
+ def is_valid_file(self):
+ return os.path.exists(self.path) and os.path.isfile(self.path)
+
+
+class SourceFile(FileItem):
+ def __init__(self, path, frame=None, udim=None):
+ super(SourceFile, self).__init__(path)
+ self.frame = frame
+ self.udim = udim
+
+ def __repr__(self):
+ subparts = [self.__class__.__name__]
+ if self.frame is not None:
+ subparts.append("frame: {}".format(self.frame))
+ if self.udim is not None:
+ subparts.append("UDIM: {}".format(self.udim))
+
+ return "<{}> '{}'".format(" - ".join(subparts), self.path)
+
+
+class ResourceFile(FileItem):
+ def __init__(self, path, relative_path):
+ super(ResourceFile, self).__init__(path)
+ self.relative_path = relative_path
+
+ def __repr__(self):
+ return "<{}> '{}'".format(self.__class__.__name__, self.relative_path)
+
+ @property
+ def is_valid_file(self):
+ if not self.relative_path:
+ return False
+ return super(ResourceFile, self).is_valid_file
+
+
+class ProjectPushItem:
+ def __init__(
+ self,
+ src_project_name,
+ src_version_id,
+ dst_project_name,
+ dst_folder_id,
+ dst_task_name,
+ variant,
+ comment,
+ new_folder_name,
+ dst_version,
+ item_id=None,
+ ):
+ if not item_id:
+ item_id = uuid.uuid4().hex
+ self.src_project_name = src_project_name
+ self.src_version_id = src_version_id
+ self.dst_project_name = dst_project_name
+ self.dst_folder_id = dst_folder_id
+ self.dst_task_name = dst_task_name
+ self.dst_version = dst_version
+ self.variant = variant
+ self.new_folder_name = new_folder_name
+ self.comment = comment or ""
+ self.item_id = item_id
+ self._repr_value = None
+
+ @property
+ def _repr(self):
+ if not self._repr_value:
+ self._repr_value = "|".join([
+ self.src_project_name,
+ self.src_version_id,
+ self.dst_project_name,
+ str(self.dst_folder_id),
+ str(self.new_folder_name),
+ str(self.dst_task_name),
+ str(self.dst_version)
+ ])
+ return self._repr_value
+
+ def __repr__(self):
+ return "<{} - {}>".format(self.__class__.__name__, self._repr)
+
+ def to_data(self):
+ return {
+ "src_project_name": self.src_project_name,
+ "src_version_id": self.src_version_id,
+ "dst_project_name": self.dst_project_name,
+ "dst_folder_id": self.dst_folder_id,
+ "dst_task_name": self.dst_task_name,
+ "dst_version": self.dst_version,
+ "variant": self.variant,
+ "comment": self.comment,
+ "new_folder_name": self.new_folder_name,
+ "item_id": self.item_id,
+ }
+
+ @classmethod
+ def from_data(cls, data):
+ return cls(**data)
+
+
+class StatusMessage:
+ def __init__(self, message, level):
+ self.message = message
+ self.level = level
+
+ def __str__(self):
+ return "{}: {}".format(self.level.upper(), self.message)
+
+ def __repr__(self):
+ return "<{} - {}> {}".format(
+            self.__class__.__name__, self.level.upper(), self.message
+ )
+
+
+class ProjectPushItemStatus:
+ def __init__(
+ self,
+ started=False,
+ failed=False,
+ finished=False,
+ fail_reason=None,
+ full_traceback=None
+ ):
+ self.started = started
+ self.failed = failed
+ self.finished = finished
+ self.fail_reason = fail_reason
+ self.full_traceback = full_traceback
+
+ def set_failed(self, fail_reason, exc_info=None):
+ """Set status as failed.
+
+        Attribute 'fail_reason' can change automatically based on the passed
+        values. The status is not marked as failed when neither a reason nor
+        exception info is passed, and a default reason is used when only
+        exception info is passed.
+
+ Args:
+ fail_reason (str): Reason why failed.
+ exc_info(tuple): Exception info.
+ """
+
+ failed = True
+ if not fail_reason and not exc_info:
+ failed = False
+
+ full_traceback = None
+ if exc_info is not None:
+ full_traceback = "".join(traceback.format_exception(*exc_info))
+ if not fail_reason:
+ fail_reason = "Failed without specified reason"
+
+ self.failed = failed
+ self.fail_reason = fail_reason or None
+ self.full_traceback = full_traceback
+
+ def to_data(self):
+ return {
+ "started": self.started,
+ "failed": self.failed,
+ "finished": self.finished,
+ "fail_reason": self.fail_reason,
+ "full_traceback": self.full_traceback,
+ }
+
+ @classmethod
+ def from_data(cls, data):
+ return cls(**data)
+
+
+class ProjectPushRepreItem:
+ """Representation item.
+
+ Representation item based on representation document and project roots.
+
+    Representation document may have references to:
+    - source files: Files defined by the publish template
+    - resource files: Files that should be in the publish directory
+        but whose filenames are not template based.
+
+ Args:
+        repre_doc (Dict[str, Any]): Representation document.
+ roots (Dict[str, str]): Project roots (based on project anatomy).
+ """
+
+ def __init__(self, repre_doc, roots):
+ self._repre_doc = repre_doc
+ self._roots = roots
+ self._src_files = None
+ self._resource_files = None
+ self._frame = UNKNOWN
+
+ @property
+ def repre_doc(self):
+ return self._repre_doc
+
+ @property
+ def src_files(self):
+ if self._src_files is None:
+ self.get_source_files()
+ return self._src_files
+
+ @property
+ def resource_files(self):
+ if self._resource_files is None:
+ self.get_source_files()
+ return self._resource_files
+
+ @staticmethod
+ def _clean_path(path):
+ new_value = path.replace("\\", "/")
+ while "//" in new_value:
+ new_value = new_value.replace("//", "/")
+ return new_value
+
+ @staticmethod
+ def _get_relative_path(path, src_dirpath):
+ dirpath, basename = os.path.split(path)
+ if not dirpath.lower().startswith(src_dirpath.lower()):
+ return None
+
+ relative_dir = dirpath[len(src_dirpath):].lstrip("/")
+ if relative_dir:
+ relative_path = "/".join([relative_dir, basename])
+ else:
+ relative_path = basename
+ return relative_path
+
+ @property
+ def frame(self):
+ """First frame of representation files.
+
+        This value will be in the representation document context if the
+        representation is a sequence.
+
+ Returns:
+ Union[int, None]: First frame in representation files based on
+ source files or None if frame is not part of filename.
+ """
+
+ if self._frame is UNKNOWN:
+ frame = None
+ for src_file in self.src_files:
+ src_frame = src_file.frame
+ if (
+ src_frame is not None
+ and (frame is None or src_frame < frame)
+ ):
+ frame = src_frame
+ self._frame = frame
+ return self._frame
+
+ @staticmethod
+ def validate_source_files(src_files, resource_files):
+ if not src_files:
+ raise AssertionError((
+ "Couldn't figure out source files from representation."
+ " Found resource files {}"
+ ).format(", ".join(str(i) for i in resource_files)))
+
+ invalid_items = [
+ item
+ for item in itertools.chain(src_files, resource_files)
+ if not item.is_valid_file
+ ]
+ if invalid_items:
+ raise AssertionError((
+ "Source files that were not found on disk: {}"
+ ).format(", ".join(str(i) for i in invalid_items)))
+
+ def get_source_files(self):
+ if self._src_files is not None:
+ return self._src_files, self._resource_files
+
+ repre_context = self._repre_doc["context"]
+ if "frame" in repre_context or "udim" in repre_context:
+ src_files, resource_files = self._get_source_files_with_frames()
+ else:
+ src_files, resource_files = self._get_source_files()
+
+ self.validate_source_files(src_files, resource_files)
+
+ self._src_files = src_files
+ self._resource_files = resource_files
+ return self._src_files, self._resource_files
+
+ def _get_source_files_with_frames(self):
+ frame_placeholder = "__frame__"
+ udim_placeholder = "__udim__"
+ src_files = []
+ resource_files = []
+ template = self._repre_doc["data"]["template"]
+ # Remove padding from 'udim' and 'frame' formatting keys
+ # - "{frame:0>4}" -> "{frame}"
+ for key in ("udim", "frame"):
+ sub_part = "{" + key + "[^}]*}"
+ replacement = "{{{}}}".format(key)
+ template = re.sub(sub_part, replacement, template)
+
+ repre_context = self._repre_doc["context"]
+ fill_repre_context = copy.deepcopy(repre_context)
+ if "frame" in fill_repre_context:
+ fill_repre_context["frame"] = frame_placeholder
+
+ if "udim" in fill_repre_context:
+ fill_repre_context["udim"] = udim_placeholder
+
+ fill_roots = fill_repre_context["root"]
+ for root_name in tuple(fill_roots.keys()):
+ fill_roots[root_name] = "{{root[{}]}}".format(root_name)
+ repre_path = StringTemplate.format_template(
+ template, fill_repre_context)
+ repre_path = self._clean_path(repre_path)
+ src_dirpath, src_basename = os.path.split(repre_path)
+ src_basename = (
+ re.escape(src_basename)
+            .replace(frame_placeholder, "(?P<frame>[0-9]+)")
+            .replace(udim_placeholder, "(?P<udim>[0-9]+)")
+ )
+ src_basename_regex = re.compile("^{}$".format(src_basename))
+ for file_info in self._repre_doc["files"]:
+ filepath_template = self._clean_path(file_info["path"])
+ filepath = self._clean_path(
+ filepath_template.format(root=self._roots)
+ )
+ dirpath, basename = os.path.split(filepath_template)
+ if (
+ dirpath.lower() != src_dirpath.lower()
+ or not src_basename_regex.match(basename)
+ ):
+ relative_path = self._get_relative_path(filepath, src_dirpath)
+ resource_files.append(ResourceFile(filepath, relative_path))
+ continue
+
+ filepath = os.path.join(src_dirpath, basename)
+ frame = None
+ udim = None
+ for item in src_basename_regex.finditer(basename):
+ group_name = item.lastgroup
+ value = item.group(group_name)
+ if group_name == "frame":
+ frame = int(value)
+ elif group_name == "udim":
+ udim = value
+
+ src_files.append(SourceFile(filepath, frame, udim))
+
+ return src_files, resource_files
+
+ def _get_source_files(self):
+ src_files = []
+ resource_files = []
+ template = self._repre_doc["data"]["template"]
+ repre_context = self._repre_doc["context"]
+ fill_repre_context = copy.deepcopy(repre_context)
+ fill_roots = fill_repre_context["root"]
+ for root_name in tuple(fill_roots.keys()):
+ fill_roots[root_name] = "{{root[{}]}}".format(root_name)
+ repre_path = StringTemplate.format_template(template,
+ fill_repre_context)
+ repre_path = self._clean_path(repre_path)
+ src_dirpath = os.path.dirname(repre_path)
+ for file_info in self._repre_doc["files"]:
+ filepath_template = self._clean_path(file_info["path"])
+ filepath = self._clean_path(
+ filepath_template.format(root=self._roots))
+
+ if filepath_template.lower() == repre_path.lower():
+ src_files.append(
+ SourceFile(repre_path.format(root=self._roots))
+ )
+ else:
+ relative_path = self._get_relative_path(
+ filepath_template, src_dirpath
+ )
+ resource_files.append(
+ ResourceFile(filepath, relative_path)
+ )
+ return src_files, resource_files
+
+
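+# A minimal usage sketch ('repre_doc' and 'anatomy' are hypothetical
+# variables), assuming the representation document belongs to a published
+# version and 'anatomy' is the Anatomy of its project:
+#   repre_item = ProjectPushRepreItem(repre_doc, anatomy.roots)
+#   repre_item.src_files       # template based SourceFile items
+#   repre_item.resource_files  # remaining ResourceFile items
+#   repre_item.frame           # first frame when files are a sequence
+
+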
+class ProjectPushItemProcess:
+ """
+ Args:
+ model (IntegrateModel): Model which is processing item.
+ item (ProjectPushItem): Item which is being processed.
+ """
+
+ # TODO where to get host?!!!
+ host_name = "republisher"
+
+ def __init__(self, model, item):
+ self._model = model
+ self._item = item
+
+ self._src_asset_doc = None
+ self._src_subset_doc = None
+ self._src_version_doc = None
+ self._src_repre_items = None
+
+ self._project_doc = None
+ self._anatomy = None
+ self._asset_doc = None
+ self._created_asset_doc = None
+ self._task_info = None
+ self._subset_doc = None
+ self._version_doc = None
+
+ self._family = None
+ self._subset_name = None
+
+ self._project_settings = None
+ self._template_name = None
+
+ self._status = ProjectPushItemStatus()
+ self._operations = OperationsSession()
+ self._file_transaction = FileTransaction()
+
+ self._messages = []
+
+ @property
+ def item_id(self):
+ return self._item.item_id
+
+ @property
+ def started(self):
+ return self._status.started
+
+ def get_status_data(self):
+ return self._status.to_data()
+
+ def integrate(self):
+ self._status.started = True
+ try:
+ self._log_info("Process started")
+ self._fill_source_variables()
+ self._log_info("Source entities were found")
+ self._fill_destination_project()
+ self._log_info("Destination project was found")
+ self._fill_or_create_destination_asset()
+ self._log_info("Destination asset was determined")
+ self._determine_family()
+ self._determine_publish_template_name()
+ self._determine_subset_name()
+ self._make_sure_subset_exists()
+ self._make_sure_version_exists()
+ self._log_info("Prerequirements were prepared")
+ self._integrate_representations()
+ self._log_info("Integration finished")
+
+ except PushToProjectError as exc:
+ if not self._status.failed:
+ self._status.set_failed(str(exc))
+
+ except Exception as exc:
+ _exc, _value, _tb = sys.exc_info()
+ self._status.set_failed(
+ "Unhandled error happened: {}".format(str(exc)),
+ (_exc, _value, _tb)
+ )
+
+ finally:
+ self._status.finished = True
+ self._emit_event(
+ "push.finished.changed",
+ {
+ "finished": True,
+ "item_id": self.item_id,
+ }
+ )
+
+ def _emit_event(self, topic, data):
+ self._model.emit_event(topic, data)
+
+    # Logging helpers
+ # TODO better logging
+ def _add_message(self, message, level):
+ message_obj = StatusMessage(message, level)
+ self._messages.append(message_obj)
+ self._emit_event(
+ "push.message.added",
+ {
+ "message": message,
+ "level": level,
+ "item_id": self.item_id,
+ }
+ )
+ print(message_obj)
+ return message_obj
+
+ def _log_debug(self, message):
+ return self._add_message(message, "debug")
+
+ def _log_info(self, message):
+ return self._add_message(message, "info")
+
+ def _log_warning(self, message):
+ return self._add_message(message, "warning")
+
+ def _log_error(self, message):
+ return self._add_message(message, "error")
+
+ def _log_critical(self, message):
+ return self._add_message(message, "critical")
+
+ def _fill_source_variables(self):
+ src_project_name = self._item.src_project_name
+ src_version_id = self._item.src_version_id
+
+ project_doc = get_project(src_project_name)
+ if not project_doc:
+ self._status.set_failed(
+ f"Source project \"{src_project_name}\" was not found"
+ )
+
+ self._emit_event(
+ "push.failed.changed",
+ {"item_id": self.item_id}
+ )
+ raise PushToProjectError(self._status.fail_reason)
+
+ self._log_debug(f"Project '{src_project_name}' found")
+
+ version_doc = get_version_by_id(src_project_name, src_version_id)
+ if not version_doc:
+ self._status.set_failed((
+ f"Source version with id \"{src_version_id}\""
+ f" was not found in project \"{src_project_name}\""
+ ))
+ raise PushToProjectError(self._status.fail_reason)
+
+ subset_id = version_doc["parent"]
+ subset_doc = get_subset_by_id(src_project_name, subset_id)
+ if not subset_doc:
+ self._status.set_failed((
+ f"Could find subset with id \"{subset_id}\""
+ f" in project \"{src_project_name}\""
+ ))
+ raise PushToProjectError(self._status.fail_reason)
+
+ asset_id = subset_doc["parent"]
+ asset_doc = get_asset_by_id(src_project_name, asset_id)
+ if not asset_doc:
+ self._status.set_failed((
+ f"Could find asset with id \"{asset_id}\""
+ f" in project \"{src_project_name}\""
+ ))
+ raise PushToProjectError(self._status.fail_reason)
+
+ anatomy = Anatomy(src_project_name)
+
+ repre_docs = get_representations(
+ src_project_name,
+ version_ids=[src_version_id]
+ )
+ repre_items = [
+ ProjectPushRepreItem(repre_doc, anatomy.roots)
+ for repre_doc in repre_docs
+ ]
+ self._log_debug((
+ f"Found {len(repre_items)} representations on"
+ f" version {src_version_id} in project '{src_project_name}'"
+ ))
+ if not repre_items:
+ self._status.set_failed(
+ "Source version does not have representations"
+ f" (Version id: {src_version_id})"
+ )
+ raise PushToProjectError(self._status.fail_reason)
+
+ self._src_asset_doc = asset_doc
+ self._src_subset_doc = subset_doc
+ self._src_version_doc = version_doc
+ self._src_repre_items = repre_items
+
+ def _fill_destination_project(self):
+ # --- Destination entities ---
+ dst_project_name = self._item.dst_project_name
+ # Validate project existence
+ dst_project_doc = get_project(dst_project_name)
+ if not dst_project_doc:
+ self._status.set_failed(
+ f"Destination project '{dst_project_name}' was not found"
+ )
+ raise PushToProjectError(self._status.fail_reason)
+
+ self._log_debug(
+ f"Destination project '{dst_project_name}' found"
+ )
+ self._project_doc = dst_project_doc
+ self._anatomy = Anatomy(dst_project_name)
+ self._project_settings = get_project_settings(
+ self._item.dst_project_name
+ )
+
+ def _create_asset(
+ self,
+ src_asset_doc,
+ project_doc,
+ parent_asset_doc,
+ asset_name
+ ):
+ parent_id = None
+ parents = []
+ tools = []
+ if parent_asset_doc:
+ parent_id = parent_asset_doc["_id"]
+ parents = list(parent_asset_doc["data"]["parents"])
+ parents.append(parent_asset_doc["name"])
+ _tools = parent_asset_doc["data"].get("tools_env")
+ if _tools:
+ tools = list(_tools)
+
+ asset_name_low = asset_name.lower()
+ other_asset_docs = get_assets(
+ project_doc["name"], fields=["_id", "name", "data.visualParent"]
+ )
+ for other_asset_doc in other_asset_docs:
+ other_name = other_asset_doc["name"]
+ other_parent_id = other_asset_doc["data"].get("visualParent")
+ if other_name.lower() != asset_name_low:
+ continue
+
+ if other_parent_id != parent_id:
+ self._status.set_failed((
+ f"Asset with name \"{other_name}\" already"
+ " exists in different hierarchy."
+ ))
+ raise PushToProjectError(self._status.fail_reason)
+
+ self._log_debug((
+ f"Found already existing asset with name \"{other_name}\""
+ f" which match requested name \"{asset_name}\""
+ ))
+ return get_asset_by_id(project_doc["name"], other_asset_doc["_id"])
+
+ data_keys = (
+ "clipIn",
+ "clipOut",
+ "frameStart",
+ "frameEnd",
+ "handleStart",
+ "handleEnd",
+ "resolutionWidth",
+ "resolutionHeight",
+ "fps",
+ "pixelAspect",
+ )
+ asset_data = {
+ "visualParent": parent_id,
+ "parents": parents,
+ "tasks": {},
+ "tools_env": tools
+ }
+ src_asset_data = src_asset_doc["data"]
+ for key in data_keys:
+ if key in src_asset_data:
+ asset_data[key] = src_asset_data[key]
+
+ asset_doc = new_asset_document(
+ asset_name,
+ project_doc["_id"],
+ parent_id,
+ parents,
+ data=asset_data
+ )
+ self._operations.create_entity(
+ project_doc["name"],
+ asset_doc["type"],
+ asset_doc
+ )
+ self._log_info(
+ f"Creating new asset with name \"{asset_name}\""
+ )
+ self._created_asset_doc = asset_doc
+ return asset_doc
+
+ def _fill_or_create_destination_asset(self):
+ dst_project_name = self._item.dst_project_name
+ dst_folder_id = self._item.dst_folder_id
+ dst_task_name = self._item.dst_task_name
+ new_folder_name = self._item.new_folder_name
+ if not dst_folder_id and not new_folder_name:
+ self._status.set_failed(
+ "Push item does not have defined destination asset"
+ )
+ raise PushToProjectError(self._status.fail_reason)
+
+ # Get asset document
+ parent_asset_doc = None
+ if dst_folder_id:
+ parent_asset_doc = get_asset_by_id(
+ self._item.dst_project_name, self._item.dst_folder_id
+ )
+ if not parent_asset_doc:
+ self._status.set_failed(
+ f"Could find asset with id \"{dst_folder_id}\""
+ f" in project \"{dst_project_name}\""
+ )
+ raise PushToProjectError(self._status.fail_reason)
+
+ if not new_folder_name:
+ asset_doc = parent_asset_doc
+ else:
+ asset_doc = self._create_asset(
+ self._src_asset_doc,
+ self._project_doc,
+ parent_asset_doc,
+ new_folder_name
+ )
+ self._asset_doc = asset_doc
+ if not dst_task_name:
+ self._task_info = {}
+ return
+
+ asset_path_parts = list(asset_doc["data"]["parents"])
+ asset_path_parts.append(asset_doc["name"])
+ asset_path = "/".join(asset_path_parts)
+ asset_tasks = asset_doc.get("data", {}).get("tasks") or {}
+ task_info = asset_tasks.get(dst_task_name)
+ if not task_info:
+ self._status.set_failed(
+ f"Could find task with name \"{dst_task_name}\""
+ f" on asset \"{asset_path}\""
+ f" in project \"{dst_project_name}\""
+ )
+ raise PushToProjectError(self._status.fail_reason)
+
+ # Create copy of task info to avoid changing data in asset document
+ task_info = copy.deepcopy(task_info)
+ task_info["name"] = dst_task_name
+ # Fill rest of task information based on task type
+ task_type = task_info["type"]
+ task_type_info = self._project_doc["config"]["tasks"].get(
+ task_type, {})
+ task_info.update(task_type_info)
+ self._task_info = task_info
+
+ def _determine_family(self):
+ subset_doc = self._src_subset_doc
+ family = subset_doc["data"].get("family")
+ families = subset_doc["data"].get("families")
+ if not family and families:
+ family = families[0]
+
+ if not family:
+ self._status.set_failed(
+ "Couldn't figure out family from source subset"
+ )
+ raise PushToProjectError(self._status.fail_reason)
+
+ self._log_debug(
+ f"Publishing family is '{family}' (Based on source subset)"
+ )
+ self._family = family
+
+ def _determine_publish_template_name(self):
+ template_name = get_publish_template_name(
+ self._item.dst_project_name,
+ self.host_name,
+ self._family,
+ self._task_info.get("name"),
+ self._task_info.get("type"),
+ project_settings=self._project_settings
+ )
+ self._log_debug(
+ f"Using template '{template_name}' for integration"
+ )
+ self._template_name = template_name
+
+ def _determine_subset_name(self):
+ family = self._family
+ asset_doc = self._asset_doc
+ task_info = self._task_info
+ subset_name = get_subset_name(
+ family,
+ self._item.variant,
+ task_info.get("name"),
+ asset_doc,
+ project_name=self._item.dst_project_name,
+ host_name=self.host_name,
+ project_settings=self._project_settings
+ )
+ self._log_info(
+ f"Push will be integrating to subset with name '{subset_name}'"
+ )
+ self._subset_name = subset_name
+
+ def _make_sure_subset_exists(self):
+ project_name = self._item.dst_project_name
+ asset_id = self._asset_doc["_id"]
+ subset_name = self._subset_name
+ family = self._family
+ subset_doc = get_subset_by_name(project_name, subset_name, asset_id)
+ if subset_doc:
+ self._subset_doc = subset_doc
+ return subset_doc
+
+ data = {
+ "families": [family]
+ }
+ subset_doc = new_subset_document(
+ subset_name, family, asset_id, data
+ )
+ self._operations.create_entity(project_name, "subset", subset_doc)
+ self._subset_doc = subset_doc
+
+ def _make_sure_version_exists(self):
+ """Make sure version document exits in database."""
+
+ project_name = self._item.dst_project_name
+ version = self._item.dst_version
+ src_version_doc = self._src_version_doc
+ subset_doc = self._subset_doc
+ subset_id = subset_doc["_id"]
+ src_data = src_version_doc["data"]
+ families = subset_doc["data"].get("families")
+ if not families:
+ families = [subset_doc["data"]["family"]]
+
+ version_data = {
+ "families": list(families),
+ "fps": src_data.get("fps"),
+ "source": src_data.get("source"),
+ "machine": socket.gethostname(),
+ "comment": self._item.comment or "",
+ "author": get_openpype_username(),
+ "time": get_formatted_current_time(),
+ }
+ if version is None:
+ last_version_doc = get_last_version_by_subset_id(
+ project_name, subset_id
+ )
+ if last_version_doc:
+ version = int(last_version_doc["name"]) + 1
+ else:
+ version = get_versioning_start(
+ project_name,
+ self.host_name,
+ task_name=self._task_info["name"],
+ task_type=self._task_info["type"],
+ family=families[0],
+ subset=subset_doc["name"]
+ )
+
+ existing_version_doc = get_version_by_name(
+ project_name, version, subset_id
+ )
+ # Update existing version
+ if existing_version_doc:
+ version_doc = new_version_doc(
+ version, subset_id, version_data, existing_version_doc["_id"]
+ )
+ update_data = prepare_version_update_data(
+ existing_version_doc, version_doc
+ )
+ if update_data:
+ self._operations.update_entity(
+ project_name,
+ "version",
+ existing_version_doc["_id"],
+ update_data
+ )
+ self._version_doc = version_doc
+
+ return
+
+ version_doc = new_version_doc(
+ version, subset_id, version_data
+ )
+ self._operations.create_entity(project_name, "version", version_doc)
+
+ self._version_doc = version_doc
+
+ def _integrate_representations(self):
+ try:
+ self._real_integrate_representations()
+ except Exception:
+ self._operations.clear()
+ self._file_transaction.rollback()
+ raise
+
+ def _real_integrate_representations(self):
+ version_doc = self._version_doc
+ version_id = version_doc["_id"]
+ existing_repres = get_representations(
+ self._item.dst_project_name,
+ version_ids=[version_id]
+ )
+ existing_repres_by_low_name = {
+ repre_doc["name"].lower(): repre_doc
+ for repre_doc in existing_repres
+ }
+ template_name = self._template_name
+ anatomy = self._anatomy
+ formatting_data = get_template_data(
+ self._project_doc,
+ self._asset_doc,
+ self._task_info.get("name"),
+ self.host_name
+ )
+ formatting_data.update({
+ "subset": self._subset_name,
+ "family": self._family,
+ "version": version_doc["name"]
+ })
+
+ path_template = anatomy.templates[template_name]["path"].replace(
+ "\\", "/"
+ )
+ file_template = StringTemplate(
+ anatomy.templates[template_name]["file"]
+ )
+ self._log_info("Preparing files to transfer")
+ processed_repre_items = self._prepare_file_transactions(
+ anatomy, template_name, formatting_data, file_template
+ )
+ self._file_transaction.process()
+ self._log_info("Preparing database changes")
+ self._prepare_database_operations(
+ version_id,
+ processed_repre_items,
+ path_template,
+ existing_repres_by_low_name
+ )
+ self._log_info("Finalization")
+ self._operations.commit()
+ self._file_transaction.finalize()
+
+ def _prepare_file_transactions(
+ self, anatomy, template_name, formatting_data, file_template
+ ):
+ processed_repre_items = []
+ for repre_item in self._src_repre_items:
+ repre_doc = repre_item.repre_doc
+ repre_name = repre_doc["name"]
+ repre_format_data = copy.deepcopy(formatting_data)
+ repre_format_data["representation"] = repre_name
+ for src_file in repre_item.src_files:
+ ext = os.path.splitext(src_file.path)[-1]
+ repre_format_data["ext"] = ext[1:]
+ break
+
+ # Re-use 'output' from source representation
+ repre_output_name = repre_doc["context"].get("output")
+ if repre_output_name is not None:
+ repre_format_data["output"] = repre_output_name
+
+ template_obj = anatomy.templates_obj[template_name]["folder"]
+ folder_path = template_obj.format_strict(formatting_data)
+ repre_context = folder_path.used_values
+ folder_path_rootless = folder_path.rootless
+ repre_filepaths = []
+ published_path = None
+ for src_file in repre_item.src_files:
+ file_data = copy.deepcopy(repre_format_data)
+ frame = src_file.frame
+ if frame is not None:
+ file_data["frame"] = frame
+
+ udim = src_file.udim
+ if udim is not None:
+ file_data["udim"] = udim
+
+ filename = file_template.format_strict(file_data)
+ dst_filepath = os.path.normpath(
+ os.path.join(folder_path, filename)
+ )
+ dst_rootless_path = os.path.normpath(
+ os.path.join(folder_path_rootless, filename)
+ )
+ if published_path is None or frame == repre_item.frame:
+ published_path = dst_filepath
+ repre_context.update(filename.used_values)
+
+ repre_filepaths.append((dst_filepath, dst_rootless_path))
+ self._file_transaction.add(src_file.path, dst_filepath)
+
+ for resource_file in repre_item.resource_files:
+ dst_filepath = os.path.normpath(
+ os.path.join(folder_path, resource_file.relative_path)
+ )
+ dst_rootless_path = os.path.normpath(
+ os.path.join(
+ folder_path_rootless, resource_file.relative_path
+ )
+ )
+ repre_filepaths.append((dst_filepath, dst_rootless_path))
+ self._file_transaction.add(resource_file.path, dst_filepath)
+ processed_repre_items.append(
+ (repre_item, repre_filepaths, repre_context, published_path)
+ )
+ return processed_repre_items
+
+ def _prepare_database_operations(
+ self,
+ version_id,
+ processed_repre_items,
+ path_template,
+ existing_repres_by_low_name
+ ):
+ modules_manager = ModulesManager()
+ sync_server_module = modules_manager.get("sync_server")
+ if sync_server_module is None or not sync_server_module.enabled:
+ sites = [{
+ "name": "studio",
+ "created_dt": datetime.datetime.now()
+ }]
+ else:
+ sites = sync_server_module.compute_resource_sync_sites(
+ project_name=self._item.dst_project_name
+ )
+
+ added_repre_names = set()
+ for item in processed_repre_items:
+ (repre_item, repre_filepaths, repre_context, published_path) = item
+ repre_name = repre_item.repre_doc["name"]
+ added_repre_names.add(repre_name.lower())
+ new_repre_data = {
+ "path": published_path,
+ "template": path_template
+ }
+ new_repre_files = []
+ for (path, rootless_path) in repre_filepaths:
+ new_repre_files.append({
+ "_id": ObjectId(),
+ "path": rootless_path,
+ "size": os.path.getsize(path),
+ "hash": source_hash(path),
+ "sites": sites
+ })
+
+ existing_repre = existing_repres_by_low_name.get(
+ repre_name.lower()
+ )
+ entity_id = None
+ if existing_repre:
+ entity_id = existing_repre["_id"]
+ new_repre_doc = new_representation_doc(
+ repre_name,
+ version_id,
+ repre_context,
+ data=new_repre_data,
+ entity_id=entity_id
+ )
+ new_repre_doc["files"] = new_repre_files
+ if not existing_repre:
+ self._operations.create_entity(
+ self._item.dst_project_name,
+ new_repre_doc["type"],
+ new_repre_doc
+ )
+ else:
+ update_data = prepare_representation_update_data(
+ existing_repre, new_repre_doc
+ )
+ if update_data:
+ self._operations.update_entity(
+ self._item.dst_project_name,
+ new_repre_doc["type"],
+ new_repre_doc["_id"],
+ update_data
+ )
+
+ existing_repre_names = set(existing_repres_by_low_name.keys())
+ for repre_name in (existing_repre_names - added_repre_names):
+ repre_doc = existing_repres_by_low_name[repre_name]
+ self._operations.update_entity(
+ self._item.dst_project_name,
+ repre_doc["type"],
+ repre_doc["_id"],
+ {"type": "archived_representation"}
+ )
+
+
+class IntegrateModel:
+ def __init__(self, controller):
+ self._controller = controller
+ self._process_items = {}
+
+ def reset(self):
+ self._process_items = {}
+
+ def emit_event(self, topic, data=None, source=None):
+ self._controller.emit_event(topic, data, source)
+
+ def create_process_item(
+ self,
+ src_project_name,
+ src_version_id,
+ dst_project_name,
+ dst_folder_id,
+ dst_task_name,
+ variant,
+ comment,
+ new_folder_name,
+ dst_version,
+ ):
+ """Create new item for integration.
+
+ Args:
+ src_project_name (str): Source project name.
+ src_version_id (str): Source version id.
+ dst_project_name (str): Destination project name.
+ dst_folder_id (str): Destination folder id.
+ dst_task_name (str): Destination task name.
+ variant (str): Variant name.
+ comment (Union[str, None]): Comment.
+ new_folder_name (Union[str, None]): New folder name.
+ dst_version (int): Destination version number.
+
+ Returns:
+ str: Item id. The id can be used to trigger integration or get
+ status information.
+ """
+
+ item = ProjectPushItem(
+ src_project_name,
+ src_version_id,
+ dst_project_name,
+ dst_folder_id,
+ dst_task_name,
+ variant,
+ comment=comment,
+ new_folder_name=new_folder_name,
+ dst_version=dst_version
+ )
+ process_item = ProjectPushItemProcess(self, item)
+ self._process_items[item.item_id] = process_item
+ return item.item_id
+
+ def integrate_item(self, item_id):
+ """Start integration of item.
+
+ Args:
+ item_id (str): Item id which should be integrated.
+ """
+
+ item = self._process_items.get(item_id)
+ if item is None or item.started:
+ return
+ item.integrate()
+
+ def get_item_status(self, item_id):
+ """Status of an item.
+
+ Args:
+ item_id (str): Item id for which status should be returned.
+
+ Returns:
+ dict[str, Any]: Status data.
+ """
+
+ item = self._process_items.get(item_id)
+ if item is not None:
+ return item.get_status_data()
+ return None
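+
+
+# A minimal usage sketch (ids are hypothetical placeholders), assuming
+# 'controller' implements 'emit_event(topic, data, source)' the same way
+# PushToContextController does:
+#   model = IntegrateModel(controller)
+#   item_id = model.create_process_item(
+#       "src_project", "<src version id>",
+#       "dst_project", "<dst folder id>", "modeling", "Main",
+#       comment="", new_folder_name=None, dst_version=None
+#   )
+#   model.integrate_item(item_id)
+#   model.get_item_status(item_id)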
diff --git a/openpype/tools/ayon_push_to_project/models/selection.py b/openpype/tools/ayon_push_to_project/models/selection.py
new file mode 100644
index 0000000000..19f1c6d37d
--- /dev/null
+++ b/openpype/tools/ayon_push_to_project/models/selection.py
@@ -0,0 +1,72 @@
+class PushToProjectSelectionModel(object):
+ """Model handling selection changes.
+
+ Triggering events:
+ - "selection.project.changed"
+ - "selection.folder.changed"
+ - "selection.task.changed"
+ """
+
+ event_source = "push-to-project.selection.model"
+
+ def __init__(self, controller):
+ self._controller = controller
+
+ self._project_name = None
+ self._folder_id = None
+ self._task_name = None
+ self._task_id = None
+
+ def get_selected_project_name(self):
+ return self._project_name
+
+ def set_selected_project(self, project_name):
+ if project_name == self._project_name:
+ return
+
+ self._project_name = project_name
+ self._controller.emit_event(
+ "selection.project.changed",
+ {"project_name": project_name},
+ self.event_source
+ )
+
+ def get_selected_folder_id(self):
+ return self._folder_id
+
+ def set_selected_folder(self, folder_id):
+ if folder_id == self._folder_id:
+ return
+
+ self._folder_id = folder_id
+ self._controller.emit_event(
+ "selection.folder.changed",
+ {
+ "project_name": self._project_name,
+ "folder_id": folder_id,
+ },
+ self.event_source
+ )
+
+ def get_selected_task_name(self):
+ return self._task_name
+
+ def get_selected_task_id(self):
+ return self._task_id
+
+ def set_selected_task(self, task_id, task_name):
+ if task_id == self._task_id:
+ return
+
+ self._task_name = task_name
+ self._task_id = task_id
+ self._controller.emit_event(
+ "selection.task.changed",
+ {
+ "project_name": self._project_name,
+ "folder_id": self._folder_id,
+ "task_name": task_name,
+ "task_id": task_id,
+ },
+ self.event_source
+ )
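+
+
+# A minimal sketch of reacting to the triggered events, assuming 'controller'
+# is the PushToContextController owning this model:
+#   def _on_folder_change(event):
+#       print(event["project_name"], event["folder_id"])
+#
+#   controller.register_event_callback(
+#       "selection.folder.changed", _on_folder_change
+#   )
+#   controller.set_selected_folder("<folder id>")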
diff --git a/openpype/tools/ayon_push_to_project/models/user_values.py b/openpype/tools/ayon_push_to_project/models/user_values.py
new file mode 100644
index 0000000000..2a4faeb136
--- /dev/null
+++ b/openpype/tools/ayon_push_to_project/models/user_values.py
@@ -0,0 +1,110 @@
+import re
+
+from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS
+
+
+class UserPublishValuesModel:
+ """Helper object to validate values required for push to different project.
+
+ Args:
+ controller (PushToContextController): Event system to catch
+ and emit events.
+ """
+
+ folder_name_regex = re.compile("^[a-zA-Z0-9_.]+$")
+ variant_regex = re.compile("^[{}]+$".format(SUBSET_NAME_ALLOWED_SYMBOLS))
+
+ def __init__(self, controller):
+ self._controller = controller
+ self._new_folder_name = None
+ self._variant = None
+ self._comment = None
+ self._is_variant_valid = False
+ self._is_new_folder_name_valid = False
+
+ self.set_new_folder_name("")
+ self.set_variant("")
+ self.set_comment("")
+
+ @property
+ def new_folder_name(self):
+ return self._new_folder_name
+
+ @property
+ def variant(self):
+ return self._variant
+
+ @property
+ def comment(self):
+ return self._comment
+
+ @property
+ def is_variant_valid(self):
+ return self._is_variant_valid
+
+ @property
+ def is_new_folder_name_valid(self):
+ return self._is_new_folder_name_valid
+
+ @property
+ def is_valid(self):
+ return self.is_variant_valid and self.is_new_folder_name_valid
+
+ def get_data(self):
+ return {
+ "new_folder_name": self._new_folder_name,
+ "variant": self._variant,
+ "comment": self._comment,
+ "is_variant_valid": self._is_variant_valid,
+ "is_new_folder_name_valid": self._is_new_folder_name_valid,
+ "is_valid": self.is_valid
+ }
+
+ def set_variant(self, variant):
+ if variant == self._variant:
+ return
+
+ self._variant = variant
+ is_valid = False
+ if variant:
+ is_valid = self.variant_regex.match(variant) is not None
+ self._is_variant_valid = is_valid
+
+ self._controller.emit_event(
+ "variant.changed",
+ {
+ "variant": variant,
+ "is_valid": self._is_variant_valid,
+ },
+ "user_values"
+ )
+
+ def set_new_folder_name(self, folder_name):
+ if self._new_folder_name == folder_name:
+ return
+
+ self._new_folder_name = folder_name
+ is_valid = True
+ if folder_name:
+ is_valid = (
+ self.folder_name_regex.match(folder_name) is not None
+ )
+ self._is_new_folder_name_valid = is_valid
+ self._controller.emit_event(
+ "new_folder_name.changed",
+ {
+ "new_folder_name": self._new_folder_name,
+ "is_valid": self._is_new_folder_name_valid,
+ },
+ "user_values"
+ )
+
+ def set_comment(self, comment):
+ if comment == self._comment:
+ return
+ self._comment = comment
+ self._controller.emit_event(
+ "comment.changed",
+ {"comment": comment},
+ "user_values"
+ )
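+
+
+# A minimal sketch of the expected validation behaviour, assuming 'controller'
+# implements 'emit_event' the same way PushToContextController does:
+#   values = UserPublishValuesModel(controller)
+#   values.set_variant("Main")           # matches 'variant_regex'
+#   values.set_new_folder_name("sh010")  # matches 'folder_name_regex'
+#   values.is_valid                      # True once both values are valid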
diff --git a/openpype/tools/ayon_push_to_project/ui/__init__.py b/openpype/tools/ayon_push_to_project/ui/__init__.py
new file mode 100644
index 0000000000..1e86475530
--- /dev/null
+++ b/openpype/tools/ayon_push_to_project/ui/__init__.py
@@ -0,0 +1,6 @@
+from .window import PushToContextSelectWindow
+
+
+__all__ = (
+ "PushToContextSelectWindow",
+)
diff --git a/openpype/tools/ayon_push_to_project/ui/window.py b/openpype/tools/ayon_push_to_project/ui/window.py
new file mode 100644
index 0000000000..535c01c643
--- /dev/null
+++ b/openpype/tools/ayon_push_to_project/ui/window.py
@@ -0,0 +1,432 @@
+from qtpy import QtWidgets, QtGui, QtCore
+
+from openpype.style import load_stylesheet, get_app_icon_path
+from openpype.tools.utils import (
+ PlaceholderLineEdit,
+ SeparatorWidget,
+ set_style_property,
+)
+from openpype.tools.ayon_utils.widgets import (
+ ProjectsCombobox,
+ FoldersWidget,
+ TasksWidget,
+)
+from openpype.tools.ayon_push_to_project.control import (
+ PushToContextController,
+)
+
+
+class PushToContextSelectWindow(QtWidgets.QWidget):
+ def __init__(self, controller=None):
+ super(PushToContextSelectWindow, self).__init__()
+ if controller is None:
+ controller = PushToContextController()
+ self._controller = controller
+
+ self.setWindowTitle("Push to project (select context)")
+ self.setWindowIcon(QtGui.QIcon(get_app_icon_path()))
+
+ main_context_widget = QtWidgets.QWidget(self)
+
+ header_widget = QtWidgets.QWidget(main_context_widget)
+
+ header_label = QtWidgets.QLabel(
+ controller.get_source_label(),
+ header_widget
+ )
+
+ header_layout = QtWidgets.QHBoxLayout(header_widget)
+ header_layout.setContentsMargins(0, 0, 0, 0)
+ header_layout.addWidget(header_label)
+
+ main_splitter = QtWidgets.QSplitter(
+ QtCore.Qt.Horizontal, main_context_widget
+ )
+
+ context_widget = QtWidgets.QWidget(main_splitter)
+
+ projects_combobox = ProjectsCombobox(controller, context_widget)
+ projects_combobox.set_select_item_visible(True)
+ projects_combobox.set_standard_filter_enabled(True)
+
+ context_splitter = QtWidgets.QSplitter(
+ QtCore.Qt.Vertical, context_widget
+ )
+
+ folders_widget = FoldersWidget(controller, context_splitter)
+ folders_widget.set_deselectable(True)
+ tasks_widget = TasksWidget(controller, context_splitter)
+
+ context_splitter.addWidget(folders_widget)
+ context_splitter.addWidget(tasks_widget)
+
+ context_layout = QtWidgets.QVBoxLayout(context_widget)
+ context_layout.setContentsMargins(0, 0, 0, 0)
+ context_layout.addWidget(projects_combobox, 0)
+ context_layout.addWidget(context_splitter, 1)
+
+ # --- Inputs widget ---
+ inputs_widget = QtWidgets.QWidget(main_splitter)
+
+ folder_name_input = PlaceholderLineEdit(inputs_widget)
+ folder_name_input.setPlaceholderText("< Name of new folder >")
+ folder_name_input.setObjectName("ValidatedLineEdit")
+
+ variant_input = PlaceholderLineEdit(inputs_widget)
+ variant_input.setPlaceholderText("< Variant >")
+ variant_input.setObjectName("ValidatedLineEdit")
+
+ comment_input = PlaceholderLineEdit(inputs_widget)
+ comment_input.setPlaceholderText("< Publish comment >")
+
+ inputs_layout = QtWidgets.QFormLayout(inputs_widget)
+ inputs_layout.setContentsMargins(0, 0, 0, 0)
+ inputs_layout.addRow("New folder name", folder_name_input)
+ inputs_layout.addRow("Variant", variant_input)
+ inputs_layout.addRow("Comment", comment_input)
+
+ main_splitter.addWidget(context_widget)
+ main_splitter.addWidget(inputs_widget)
+
+ # --- Buttons widget ---
+ btns_widget = QtWidgets.QWidget(self)
+ cancel_btn = QtWidgets.QPushButton("Cancel", btns_widget)
+ publish_btn = QtWidgets.QPushButton("Publish", btns_widget)
+
+ btns_layout = QtWidgets.QHBoxLayout(btns_widget)
+ btns_layout.setContentsMargins(0, 0, 0, 0)
+ btns_layout.addStretch(1)
+ btns_layout.addWidget(cancel_btn, 0)
+ btns_layout.addWidget(publish_btn, 0)
+
+ sep_1 = SeparatorWidget(parent=main_context_widget)
+ sep_2 = SeparatorWidget(parent=main_context_widget)
+ main_context_layout = QtWidgets.QVBoxLayout(main_context_widget)
+ main_context_layout.addWidget(header_widget, 0)
+ main_context_layout.addWidget(sep_1, 0)
+ main_context_layout.addWidget(main_splitter, 1)
+ main_context_layout.addWidget(sep_2, 0)
+ main_context_layout.addWidget(btns_widget, 0)
+
+        # NOTE This was added in a hurry
+        # - should be reorganized and the styles changed
+ overlay_widget = QtWidgets.QFrame(self)
+ overlay_widget.setObjectName("OverlayFrame")
+
+ overlay_label = QtWidgets.QLabel(overlay_widget)
+ overlay_label.setAlignment(QtCore.Qt.AlignCenter)
+
+ overlay_btns_widget = QtWidgets.QWidget(overlay_widget)
+ overlay_btns_widget.setAttribute(QtCore.Qt.WA_TranslucentBackground)
+
+ # Add try again button (requires changes in controller)
+ overlay_try_btn = QtWidgets.QPushButton(
+ "Try again", overlay_btns_widget
+ )
+ overlay_close_btn = QtWidgets.QPushButton(
+ "Close", overlay_btns_widget
+ )
+
+ overlay_btns_layout = QtWidgets.QHBoxLayout(overlay_btns_widget)
+ overlay_btns_layout.addStretch(1)
+ overlay_btns_layout.addWidget(overlay_try_btn, 0)
+ overlay_btns_layout.addWidget(overlay_close_btn, 0)
+ overlay_btns_layout.addStretch(1)
+
+ overlay_layout = QtWidgets.QVBoxLayout(overlay_widget)
+ overlay_layout.addWidget(overlay_label, 0)
+ overlay_layout.addWidget(overlay_btns_widget, 0)
+ overlay_layout.setAlignment(QtCore.Qt.AlignCenter)
+
+ main_layout = QtWidgets.QStackedLayout(self)
+ main_layout.setContentsMargins(0, 0, 0, 0)
+ main_layout.addWidget(main_context_widget)
+ main_layout.addWidget(overlay_widget)
+ main_layout.setStackingMode(QtWidgets.QStackedLayout.StackAll)
+ main_layout.setCurrentWidget(main_context_widget)
+
+ show_timer = QtCore.QTimer()
+ show_timer.setInterval(0)
+
+ main_thread_timer = QtCore.QTimer()
+ main_thread_timer.setInterval(10)
+
+ user_input_changed_timer = QtCore.QTimer()
+ user_input_changed_timer.setInterval(200)
+ user_input_changed_timer.setSingleShot(True)
+
+ main_thread_timer.timeout.connect(self._on_main_thread_timer)
+ show_timer.timeout.connect(self._on_show_timer)
+ user_input_changed_timer.timeout.connect(self._on_user_input_timer)
+ folder_name_input.textChanged.connect(self._on_new_asset_change)
+ variant_input.textChanged.connect(self._on_variant_change)
+ comment_input.textChanged.connect(self._on_comment_change)
+
+ publish_btn.clicked.connect(self._on_select_click)
+ cancel_btn.clicked.connect(self._on_close_click)
+ overlay_close_btn.clicked.connect(self._on_close_click)
+ overlay_try_btn.clicked.connect(self._on_try_again_click)
+
+ controller.register_event_callback(
+ "new_folder_name.changed",
+ self._on_controller_new_asset_change
+ )
+ controller.register_event_callback(
+ "variant.changed", self._on_controller_variant_change
+ )
+ controller.register_event_callback(
+ "comment.changed", self._on_controller_comment_change
+ )
+ controller.register_event_callback(
+ "submission.enabled.changed", self._on_submission_change
+ )
+ controller.register_event_callback(
+ "source.changed", self._on_controller_source_change
+ )
+ controller.register_event_callback(
+ "submit.started", self._on_controller_submit_start
+ )
+ controller.register_event_callback(
+ "submit.finished", self._on_controller_submit_end
+ )
+ controller.register_event_callback(
+ "push.message.added", self._on_push_message
+ )
+
+ self._main_layout = main_layout
+
+ self._main_context_widget = main_context_widget
+
+ self._header_label = header_label
+ self._main_splitter = main_splitter
+
+ self._projects_combobox = projects_combobox
+ self._folders_widget = folders_widget
+ self._tasks_widget = tasks_widget
+
+ self._variant_input = variant_input
+ self._folder_name_input = folder_name_input
+ self._comment_input = comment_input
+
+ self._publish_btn = publish_btn
+
+ self._overlay_widget = overlay_widget
+ self._overlay_close_btn = overlay_close_btn
+ self._overlay_try_btn = overlay_try_btn
+ self._overlay_label = overlay_label
+
+ self._user_input_changed_timer = user_input_changed_timer
+        # Store the current value on input text change
+        # The value is unset once it is passed to the controller
+        # The goal is to keep control over changes that happen during user
+        # edits in the UI and controller auto-changes
+ self._variant_input_text = None
+ self._new_folder_name_input_text = None
+ self._comment_input_text = None
+
+ self._first_show = True
+ self._show_timer = show_timer
+ self._show_counter = 0
+
+ self._main_thread_timer = main_thread_timer
+ self._main_thread_timer_can_stop = True
+ self._last_submit_message = None
+ self._process_item_id = None
+
+ self._variant_is_valid = None
+ self._folder_is_valid = None
+
+ publish_btn.setEnabled(False)
+ overlay_close_btn.setVisible(False)
+ overlay_try_btn.setVisible(False)
+
+ # Support of public api function of controller
+ def set_source(self, project_name, version_id):
+ """Set source project and version.
+
+ Call the method on controller.
+
+ Args:
+ project_name (Union[str, None]): Name of project.
+ version_id (Union[str, None]): Version id.
+ """
+
+ self._controller.set_source(project_name, version_id)
+
+ def showEvent(self, event):
+ super(PushToContextSelectWindow, self).showEvent(event)
+ if self._first_show:
+ self._first_show = False
+ self._on_first_show()
+
+ def refresh(self):
+ user_values = self._controller.get_user_values()
+ new_folder_name = user_values["new_folder_name"]
+ variant = user_values["variant"]
+ self._folder_name_input.setText(new_folder_name or "")
+ self._variant_input.setText(variant or "")
+ self._invalidate_variant(user_values["is_variant_valid"])
+ self._invalidate_new_folder_name(
+ new_folder_name, user_values["is_new_folder_name_valid"]
+ )
+
+ self._projects_combobox.refresh()
+
+ def _on_first_show(self):
+ width = 740
+ height = 640
+ inputs_width = 360
+ self.setStyleSheet(load_stylesheet())
+ self.resize(width, height)
+ self._main_splitter.setSizes([width - inputs_width, inputs_width])
+ self._show_timer.start()
+
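+ # The show timer lets a few event-loop ticks pass after the window is
+ # shown before the first refresh is triggered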
+ def _on_show_timer(self):
+ if self._show_counter < 3:
+ self._show_counter += 1
+ return
+ self._show_timer.stop()
+
+ self._show_counter = 0
+
+ self.refresh()
+
+ def _on_new_asset_change(self, text):
+ self._new_folder_name_input_text = text
+ self._user_input_changed_timer.start()
+
+ def _on_variant_change(self, text):
+ self._variant_input_text = text
+ self._user_input_changed_timer.start()
+
+ def _on_comment_change(self, text):
+ self._comment_input_text = text
+ self._user_input_changed_timer.start()
+
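+ # Debounced handler of the single-shot 'user_input_changed_timer':
+ # only values changed since the last run are passed to the controller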
+ def _on_user_input_timer(self):
+ folder_name = self._new_folder_name_input_text
+ if folder_name is not None:
+ self._new_folder_name_input_text = None
+ self._controller.set_user_value_folder_name(folder_name)
+
+ variant = self._variant_input_text
+ if variant is not None:
+ self._variant_input_text = None
+ self._controller.set_user_value_variant(variant)
+
+ comment = self._comment_input_text
+ if comment is not None:
+ self._comment_input_text = None
+ self._controller.set_user_value_comment(comment)
+
+ def _on_controller_new_asset_change(self, event):
+ folder_name = event["new_folder_name"]
+ if (
+ self._new_folder_name_input_text is None
+ and folder_name != self._folder_name_input.text()
+ ):
+ self._folder_name_input.setText(folder_name)
+
+ self._invalidate_new_folder_name(folder_name, event["is_valid"])
+
+ def _on_controller_variant_change(self, event):
+ is_valid = event["is_valid"]
+ variant = event["variant"]
+ if (
+ self._variant_input_text is None
+ and variant != self._variant_input.text()
+ ):
+ self._variant_input.setText(variant)
+
+ self._invalidate_variant(is_valid)
+
+ def _on_controller_comment_change(self, event):
+ comment = event["comment"]
+ if (
+ self._comment_input_text is None
+ and comment != self._comment_input.text()
+ ):
+ self._comment_input.setText(comment)
+
+ def _on_controller_source_change(self):
+ self._header_label.setText(self._controller.get_source_label())
+
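+ # Task selection is hidden while a new folder name is filled in and the
+ # 'state' style property is used by the stylesheet to color the input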
+ def _invalidate_new_folder_name(self, folder_name, is_valid):
+ self._tasks_widget.setVisible(not folder_name)
+ if self._folder_is_valid is is_valid:
+ return
+ self._folder_is_valid = is_valid
+ state = ""
+ if folder_name:
+ if is_valid is True:
+ state = "valid"
+ elif is_valid is False:
+ state = "invalid"
+ set_style_property(
+ self._folder_name_input, "state", state
+ )
+
+ def _invalidate_variant(self, is_valid):
+ if self._variant_is_valid is is_valid:
+ return
+ self._variant_is_valid = is_valid
+ state = "valid" if is_valid else "invalid"
+ set_style_property(self._variant_input, "state", state)
+
+ def _on_submission_change(self, event):
+ self._publish_btn.setEnabled(event["enabled"])
+
+ def _on_close_click(self):
+ self.close()
+
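+ # Submit without waiting; the returned process item id is used to poll
+ # the submission status from the main thread timer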
+ def _on_select_click(self):
+ self._process_item_id = self._controller.submit(wait=False)
+
+ def _on_try_again_click(self):
+ self._process_item_id = None
+ self._last_submit_message = None
+
+ self._overlay_close_btn.setVisible(False)
+ self._overlay_try_btn.setVisible(False)
+ self._main_layout.setCurrentWidget(self._main_context_widget)
+
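+ # Periodically called in the main (UI) thread while submission runs in
+ # the controller's background thread; shows queued messages and, once
+ # the submission has finished, stops itself and displays the result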
+ def _on_main_thread_timer(self):
+ if self._last_submit_message:
+ self._overlay_label.setText(self._last_submit_message)
+ self._last_submit_message = None
+
+ process_status = self._controller.get_process_item_status(
+ self._process_item_id
+ )
+ push_failed = process_status["failed"]
+ fail_traceback = process_status["full_traceback"]
+ if self._main_thread_timer_can_stop:
+ self._main_thread_timer.stop()
+ self._overlay_close_btn.setVisible(True)
+ if push_failed and not fail_traceback:
+ self._overlay_try_btn.setVisible(True)
+
+ if push_failed:
+ message = "Push Failed:\n{}".format(process_status["fail_reason"])
+ if fail_traceback:
+ message += "\n{}".format(fail_traceback)
+ self._overlay_label.setText(message)
+ set_style_property(self._overlay_close_btn, "state", "error")
+
+ if self._main_thread_timer_can_stop:
+ # Join thread in controller
+ self._controller.wait_for_process_thread()
+ # Reset process item to None
+ self._process_item_id = None
+
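+ # Submission is processed outside of the UI thread, so the overlay page
+ # is shown and the main thread timer starts polling for its status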
+ def _on_controller_submit_start(self):
+ self._main_thread_timer_can_stop = False
+ self._main_thread_timer.start()
+ self._main_layout.setCurrentWidget(self._overlay_widget)
+ self._overlay_label.setText("Submittion started")
+
+ def _on_controller_submit_end(self):
+ self._main_thread_timer_can_stop = True
+
+ def _on_push_message(self, event):
+ self._last_submit_message = event["message"]
diff --git a/openpype/tools/push_to_project/control_integrate.py b/openpype/tools/push_to_project/control_integrate.py
index a822339ccf..9f083d8eb7 100644
--- a/openpype/tools/push_to_project/control_integrate.py
+++ b/openpype/tools/push_to_project/control_integrate.py
@@ -1051,6 +1051,11 @@ class ProjectPushItemProcess:
repre_format_data["ext"] = ext[1:]
break
+ # Re-use 'output' from source representation
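+ # so templates containing an {output} key keep the original output name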
+ repre_output_name = repre_doc["context"].get("output")
+ if repre_output_name is not None:
+ repre_format_data["output"] = repre_output_name
+
template_obj = anatomy.templates_obj[template_name]["folder"]
folder_path = template_obj.format_strict(formatting_data)
repre_context = folder_path.used_values
diff --git a/openpype/version.py b/openpype/version.py
index 6f740d0c78..e2e3c663af 100644
--- a/openpype/version.py
+++ b/openpype/version.py
@@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
-__version__ = "3.17.3-nightly.2"
+__version__ = "3.17.4-nightly.1"
diff --git a/pyproject.toml b/pyproject.toml
index ad93b70c0f..3803e4714e 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,6 +1,6 @@
[tool.poetry]
name = "OpenPype"
-version = "3.17.2" # OpenPype
+version = "3.17.3" # OpenPype
description = "Open VFX and Animation pipeline with support."
authors = ["OpenPype Team "]
license = "MIT License"
diff --git a/server_addon/blender/server/settings/publish_plugins.py b/server_addon/blender/server/settings/publish_plugins.py
index 5e047b7013..27dc0b232f 100644
--- a/server_addon/blender/server/settings/publish_plugins.py
+++ b/server_addon/blender/server/settings/publish_plugins.py
@@ -103,7 +103,7 @@ class PublishPuginsModel(BaseSettingsModel):
default_factory=ValidatePluginModel,
title="Extract FBX"
)
- ExtractABC: ValidatePluginModel = Field(
+ ExtractModelABC: ValidatePluginModel = Field(
default_factory=ValidatePluginModel,
title="Extract ABC"
)
@@ -197,10 +197,10 @@ DEFAULT_BLENDER_PUBLISH_SETTINGS = {
"optional": True,
"active": False
},
- "ExtractABC": {
+ "ExtractModelABC": {
"enabled": True,
"optional": True,
- "active": False
+ "active": True
},
"ExtractBlendAnimation": {
"enabled": True,
diff --git a/server_addon/openpype/client/pyproject.toml b/server_addon/openpype/client/pyproject.toml
index 6d5ac92ca7..40da8f6716 100644
--- a/server_addon/openpype/client/pyproject.toml
+++ b/server_addon/openpype/client/pyproject.toml
@@ -8,7 +8,6 @@ aiohttp_json_rpc = "*" # TVPaint server
aiohttp-middlewares = "^2.0.0"
wsrpc_aiohttp = "^3.1.1" # websocket server
clique = "1.6.*"
-shotgun_api3 = {git = "https://github.com/shotgunsoftware/python-api.git", rev = "v3.3.3"}
gazu = "^0.9.3"
google-api-python-client = "^1.12.8" # sync server google support (should be separate?)
jsonschema = "^2.6.0"
diff --git a/tests/unit/openpype/pipeline/publish/test_publish_plugins.py b/tests/unit/openpype/pipeline/publish/test_publish_plugins.py
index aace8cf7e3..1f7f551237 100644
--- a/tests/unit/openpype/pipeline/publish/test_publish_plugins.py
+++ b/tests/unit/openpype/pipeline/publish/test_publish_plugins.py
@@ -37,7 +37,7 @@ class TestPipelinePublishPlugins(TestPipeline):
# files are the same as those used in `test_pipeline_colorspace`
TEST_FILES = [
(
- "1Lf-mFxev7xiwZCWfImlRcw7Fj8XgNQMh",
+ "1csqimz8bbNcNgxtEXklLz6GRv91D3KgA",
"test_pipeline_colorspace.zip",
""
)
@@ -123,8 +123,7 @@ class TestPipelinePublishPlugins(TestPipeline):
def test_get_colorspace_settings(self, context, config_path_asset):
expected_config_template = (
- "{root[work]}/{project[name]}"
- "/{hierarchy}/{asset}/config/aces.ocio"
+ "{root[work]}/{project[name]}/config/aces.ocio"
)
expected_file_rules = {
"comp_review": {
@@ -177,16 +176,16 @@ class TestPipelinePublishPlugins(TestPipeline):
# load plugin function for testing
plugin = publish_plugins.ColormanagedPyblishPluginMixin()
plugin.log = log
+ context.data["imageioSettings"] = (config_data_nuke, file_rules_nuke)
plugin.set_representation_colorspace(
- representation_nuke, context,
- colorspace_settings=(config_data_nuke, file_rules_nuke)
+ representation_nuke, context
)
# load plugin function for testing
plugin = publish_plugins.ColormanagedPyblishPluginMixin()
plugin.log = log
+ context.data["imageioSettings"] = (config_data_hiero, file_rules_hiero)
plugin.set_representation_colorspace(
- representation_hiero, context,
- colorspace_settings=(config_data_hiero, file_rules_hiero)
+ representation_hiero, context
)
colorspace_data_nuke = representation_nuke.get("colorspaceData")
diff --git a/tests/unit/openpype/pipeline/test_colorspace_convert_colorspace_enumerator_item.py b/tests/unit/openpype/pipeline/test_colorspace_convert_colorspace_enumerator_item.py
new file mode 100644
index 0000000000..56ac2a5d28
--- /dev/null
+++ b/tests/unit/openpype/pipeline/test_colorspace_convert_colorspace_enumerator_item.py
@@ -0,0 +1,118 @@
+import unittest
+from openpype.pipeline.colorspace import convert_colorspace_enumerator_item
+
+
+class TestConvertColorspaceEnumeratorItem(unittest.TestCase):
+ def setUp(self):
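+ # Minimal config data mimicking parsed OCIO config items:
+ # colorspaces with aliases, looks, display/view pairs and roles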
+ self.config_items = {
+ "colorspaces": {
+ "sRGB": {
+ "aliases": ["sRGB_1"],
+ "family": "colorspace",
+ "categories": ["colors"],
+ "equalitygroup": "equalitygroup",
+ },
+ "Rec.709": {
+ "aliases": ["rec709_1", "rec709_2"],
+ },
+ },
+ "looks": {
+ "sRGB_to_Rec.709": {
+ "process_space": "sRGB",
+ },
+ },
+ "displays_views": {
+ "sRGB (ACES)": {
+ "view": "sRGB",
+ "display": "ACES",
+ },
+ "Rec.709 (ACES)": {
+ "view": "Rec.709",
+ "display": "ACES",
+ },
+ },
+ "roles": {
+ "compositing_linear": {
+ "colorspace": "linear",
+ },
+ },
+ }
+
+ def test_valid_item(self):
+ colorspace_item_data = convert_colorspace_enumerator_item(
+ "colorspaces::sRGB", self.config_items)
+ self.assertEqual(
+ colorspace_item_data,
+ {
+ "name": "sRGB",
+ "type": "colorspaces",
+ "aliases": ["sRGB_1"],
+ "family": "colorspace",
+ "categories": ["colors"],
+ "equalitygroup": "equalitygroup"
+ }
+ )
+
+ alias_item_data = convert_colorspace_enumerator_item(
+ "aliases::rec709_1", self.config_items)
+ self.assertEqual(
+ alias_item_data,
+ {
+ "aliases": ["rec709_1", "rec709_2"],
+ "name": "Rec.709",
+ "type": "colorspace"
+ }
+ )
+
+ display_view_item_data = convert_colorspace_enumerator_item(
+ "displays_views::sRGB (ACES)", self.config_items)
+ self.assertEqual(
+ display_view_item_data,
+ {
+ "type": "displays_views",
+ "name": "sRGB (ACES)",
+ "view": "sRGB",
+ "display": "ACES"
+ }
+ )
+
+ role_item_data = convert_colorspace_enumerator_item(
+ "roles::compositing_linear", self.config_items)
+ self.assertEqual(
+ role_item_data,
+ {
+ "name": "compositing_linear",
+ "type": "roles",
+ "colorspace": "linear"
+ }
+ )
+
+ look_item_data = convert_colorspace_enumerator_item(
+ "looks::sRGB_to_Rec.709", self.config_items)
+ self.assertEqual(
+ look_item_data,
+ {
+ "type": "looks",
+ "name": "sRGB_to_Rec.709",
+ "process_space": "sRGB"
+ }
+ )
+
+ def test_invalid_item(self):
+ config_items = {
+ "RGB": {
+ "sRGB": {"red": 255, "green": 255, "blue": 255},
+ "AdobeRGB": {"red": 255, "green": 255, "blue": 255},
+ }
+ }
+ with self.assertRaises(KeyError):
+ convert_colorspace_enumerator_item("RGB::invalid", config_items)
+
+ def test_missing_config_data(self):
+ config_items = {}
+ with self.assertRaises(KeyError):
+ convert_colorspace_enumerator_item("RGB::sRGB", config_items)
+
+
+if __name__ == '__main__':
+ unittest.main()
diff --git a/tests/unit/openpype/pipeline/test_colorspace_get_colorspaces_enumerator_items.py b/tests/unit/openpype/pipeline/test_colorspace_get_colorspaces_enumerator_items.py
new file mode 100644
index 0000000000..c221712d70
--- /dev/null
+++ b/tests/unit/openpype/pipeline/test_colorspace_get_colorspaces_enumerator_items.py
@@ -0,0 +1,121 @@
+import unittest
+
+from openpype.pipeline.colorspace import get_colorspaces_enumerator_items
+
+
+class TestGetColorspacesEnumeratorItems(unittest.TestCase):
+ def setUp(self):
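+ # Config data covering all supported item types: colorspaces (with
+ # aliases), looks, display/view pairs and roles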
+ self.config_items = {
+ "colorspaces": {
+ "sRGB": {
+ "aliases": ["sRGB_1"],
+ },
+ "Rec.709": {
+ "aliases": ["rec709_1", "rec709_2"],
+ },
+ },
+ "looks": {
+ "sRGB_to_Rec.709": {
+ "process_space": "sRGB",
+ },
+ },
+ "displays_views": {
+ "sRGB (ACES)": {
+ "view": "sRGB",
+ "display": "ACES",
+ },
+ "Rec.709 (ACES)": {
+ "view": "Rec.709",
+ "display": "ACES",
+ },
+ },
+ "roles": {
+ "compositing_linear": {
+ "colorspace": "linear",
+ },
+ },
+ }
+
+ def test_colorspaces(self):
+ result = get_colorspaces_enumerator_items(self.config_items)
+ expected = [
+ ("colorspaces::Rec.709", "[colorspace] Rec.709"),
+ ("colorspaces::sRGB", "[colorspace] sRGB"),
+ ]
+ self.assertEqual(result, expected)
+
+ def test_aliases(self):
+ result = get_colorspaces_enumerator_items(
+ self.config_items, include_aliases=True)
+ expected = [
+ ("colorspaces::Rec.709", "[colorspace] Rec.709"),
+ ("colorspaces::sRGB", "[colorspace] sRGB"),
+ ("aliases::rec709_1", "[alias] rec709_1 (Rec.709)"),
+ ("aliases::rec709_2", "[alias] rec709_2 (Rec.709)"),
+ ("aliases::sRGB_1", "[alias] sRGB_1 (sRGB)"),
+ ]
+ self.assertEqual(result, expected)
+
+ def test_looks(self):
+ result = get_colorspaces_enumerator_items(
+ self.config_items, include_looks=True)
+ expected = [
+ ("colorspaces::Rec.709", "[colorspace] Rec.709"),
+ ("colorspaces::sRGB", "[colorspace] sRGB"),
+ ("looks::sRGB_to_Rec.709", "[look] sRGB_to_Rec.709 (sRGB)"),
+ ]
+ self.assertEqual(result, expected)
+
+ def test_display_views(self):
+ result = get_colorspaces_enumerator_items(
+ self.config_items, include_display_views=True)
+ expected = [
+ ("colorspaces::Rec.709", "[colorspace] Rec.709"),
+ ("colorspaces::sRGB", "[colorspace] sRGB"),
+ ("displays_views::Rec.709 (ACES)", "[view (display)] Rec.709 (ACES)"), # noqa: E501
+ ("displays_views::sRGB (ACES)", "[view (display)] sRGB (ACES)"),
+ ]
+ self.assertEqual(result, expected)
+
+ def test_roles(self):
+ result = get_colorspaces_enumerator_items(
+ self.config_items, include_roles=True)
+ expected = [
+ ("roles::compositing_linear", "[role] compositing_linear (linear)"), # noqa: E501
+ ("colorspaces::Rec.709", "[colorspace] Rec.709"),
+ ("colorspaces::sRGB", "[colorspace] sRGB"),
+ ]
+ self.assertEqual(result, expected)
+
+ def test_all(self):
+ message_config_keys = ", ".join(
+ "'{}':{}".format(
+ key,
+ set(self.config_items.get(key, {}).keys())
+ ) for key in self.config_items.keys()
+ )
+ print("Testing with config: [{}]".format(message_config_keys))
+ result = get_colorspaces_enumerator_items(
+ self.config_items,
+ include_aliases=True,
+ include_looks=True,
+ include_roles=True,
+ include_display_views=True,
+ )
+ expected = [
+ ("roles::compositing_linear", "[role] compositing_linear (linear)"), # noqa: E501
+ ("colorspaces::Rec.709", "[colorspace] Rec.709"),
+ ("colorspaces::sRGB", "[colorspace] sRGB"),
+ ("aliases::rec709_1", "[alias] rec709_1 (Rec.709)"),
+ ("aliases::rec709_2", "[alias] rec709_2 (Rec.709)"),
+ ("aliases::sRGB_1", "[alias] sRGB_1 (sRGB)"),
+ ("looks::sRGB_to_Rec.709", "[look] sRGB_to_Rec.709 (sRGB)"),
+ ("displays_views::Rec.709 (ACES)", "[view (display)] Rec.709 (ACES)"), # noqa: E501
+ ("displays_views::sRGB (ACES)", "[view (display)] sRGB (ACES)"),
+ ]
+ self.assertEqual(result, expected)
+
+
+if __name__ == "__main__":
+ unittest.main()