diff --git a/CHANGELOG.md b/CHANGELOG.md index 48d0d8181e..f0a9a9651d 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,8 +1,42 @@ # Changelog +## [3.11.2-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD) + +[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.11.1...HEAD) + +**📖 Documentation** + +- General: Add ability to change user value for templates [\#3366](https://github.com/pypeclub/OpenPype/pull/3366) +- Feature/multiverse [\#3350](https://github.com/pypeclub/OpenPype/pull/3350) + +**🚀 Enhancements** + +- Hosts: More options for in-host callbacks [\#3357](https://github.com/pypeclub/OpenPype/pull/3357) +- TVPaint: Extractor use mark in/out range to render [\#3308](https://github.com/pypeclub/OpenPype/pull/3308) +- Maya: Allow more data to be published along camera 🎥 [\#3304](https://github.com/pypeclub/OpenPype/pull/3304) + +**🐛 Bug fixes** + +- TVPaint: Make sure exit code is set to not None [\#3382](https://github.com/pypeclub/OpenPype/pull/3382) +- Maya: vray device aspect ratio fix [\#3381](https://github.com/pypeclub/OpenPype/pull/3381) +- Harmony: added unc path to zipfile command in Harmony [\#3372](https://github.com/pypeclub/OpenPype/pull/3372) + +**🔀 Refactored code** + +- Harmony: Use client query functions [\#3378](https://github.com/pypeclub/OpenPype/pull/3378) +- Photoshop: Use client query functions [\#3375](https://github.com/pypeclub/OpenPype/pull/3375) +- AfterEffects: Use client query functions [\#3374](https://github.com/pypeclub/OpenPype/pull/3374) +- TVPaint: Use client query functions [\#3340](https://github.com/pypeclub/OpenPype/pull/3340) +- Ftrack: Use client query functions [\#3339](https://github.com/pypeclub/OpenPype/pull/3339) +- Standalone Publisher: Use client query functions [\#3330](https://github.com/pypeclub/OpenPype/pull/3330) + +**Merged pull requests:** + +- Maya - added support for single frame playblast review [\#3369](https://github.com/pypeclub/OpenPype/pull/3369) + ## [3.11.1](https://github.com/pypeclub/OpenPype/tree/3.11.1) (2022-06-20) -[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.11.0...3.11.1) +[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.11.1-nightly.1...3.11.1) **🆕 New features** @@ -26,9 +60,9 @@ - AE- fix validate\_scene\_settings and renderLocal [\#3358](https://github.com/pypeclub/OpenPype/pull/3358) - deadline: fixing misidentification of reviewables [\#3356](https://github.com/pypeclub/OpenPype/pull/3356) - General: Create only one thumbnail per instance [\#3351](https://github.com/pypeclub/OpenPype/pull/3351) +- nuke: adding extract thumbnail settings 3.10 [\#3347](https://github.com/pypeclub/OpenPype/pull/3347) - General: Fix last version function [\#3345](https://github.com/pypeclub/OpenPype/pull/3345) - Deadline: added OPENPYPE\_MONGO to filter [\#3336](https://github.com/pypeclub/OpenPype/pull/3336) -- Nuke: fixing farm publishing if review is 
disabled [\#3306](https://github.com/pypeclub/OpenPype/pull/3306) - General: Vendorized modules for Python 2 and update poetry lock [\#3305](https://github.com/pypeclub/OpenPype/pull/3305) - Fix - added local targets to install host [\#3303](https://github.com/pypeclub/OpenPype/pull/3303) - Settings: Add missing default settings for nuke gizmo [\#3301](https://github.com/pypeclub/OpenPype/pull/3301) @@ -78,8 +112,6 @@ - Nuke: bake reformat was failing on string type [\#3261](https://github.com/pypeclub/OpenPype/pull/3261) - Maya: hotfix Pxr multitexture in looks [\#3260](https://github.com/pypeclub/OpenPype/pull/3260) - Unreal: Fix Camera Loading if Layout is missing [\#3255](https://github.com/pypeclub/OpenPype/pull/3255) -- Unreal: Fixed Animation loading in UE5 [\#3240](https://github.com/pypeclub/OpenPype/pull/3240) -- Unreal: Fixed Render creation in UE5 [\#3239](https://github.com/pypeclub/OpenPype/pull/3239) **🔀 Refactored code** @@ -90,7 +122,6 @@ - Maya: add pointcache family to gpu cache loader [\#3318](https://github.com/pypeclub/OpenPype/pull/3318) - Maya look: skip empty file attributes [\#3274](https://github.com/pypeclub/OpenPype/pull/3274) -- Harmony: 21.1 fix [\#3248](https://github.com/pypeclub/OpenPype/pull/3248) ## [3.10.0](https://github.com/pypeclub/OpenPype/tree/3.10.0) (2022-05-26) @@ -105,9 +136,6 @@ - nuke: use framerange issue [\#3254](https://github.com/pypeclub/OpenPype/pull/3254) - Ftrack: Chunk sizes for queries has minimal condition [\#3244](https://github.com/pypeclub/OpenPype/pull/3244) -- Maya: renderman displays needs to be filtered [\#3242](https://github.com/pypeclub/OpenPype/pull/3242) -- Ftrack: Validate that the user exists on ftrack [\#3237](https://github.com/pypeclub/OpenPype/pull/3237) -- Maya: Fix support for multiple resolutions [\#3236](https://github.com/pypeclub/OpenPype/pull/3236) **Merged pull requests:** diff --git a/openpype/client/__init__.py b/openpype/client/__init__.py index 16b1dcf321..e3b4ef5132 100644 --- a/openpype/client/__init__.py +++ b/openpype/client/__init__.py @@ -5,6 +5,7 @@ from .entities import ( get_asset_by_id, get_asset_by_name, get_assets, + get_archived_assets, get_asset_ids_with_subsets, get_subset_by_id, @@ -41,6 +42,7 @@ __all__ = ( "get_asset_by_id", "get_asset_by_name", "get_assets", + "get_archived_assets", "get_asset_ids_with_subsets", "get_subset_by_id", diff --git a/openpype/client/entities.py b/openpype/client/entities.py index cc4032712c..4b4a3729fe 100644 --- a/openpype/client/entities.py +++ b/openpype/client/entities.py @@ -139,8 +139,16 @@ def get_asset_by_name(project_name, asset_name, fields=None): return conn.find_one(query_filter, _prepare_fields(fields)) -def get_assets( - project_name, asset_ids=None, asset_names=None, archived=False, fields=None +# NOTE this could be just a public function? +# - any better variable name instead of 'standard'? +# - the same approach can be used for the rest of the types +def _get_assets( + project_name, + asset_ids=None, + asset_names=None, + standard=True, + archived=False, + fields=None ): """Assets for specified project by passed filters. Passed filters (ids and names) are always combined so all conditions must match. Args: project_name (str): Name of project where to look for queried entities. asset_ids (list[str|ObjectId]): Asset ids that should be found. asset_names (list[str]): Name assets that should be found. + standard (bool): Query standard assets (type 'asset'). + archived (bool): Query archived assets (type 'archived_asset'). fields (list[str]): Fields that should be returned. 
All fields are returned if 'None' is passed. @@ -161,10 +171,15 @@ def get_assets( passed filters. """ - asset_types = ["asset"] + asset_types = [] + if standard: + asset_types.append("asset") if archived: asset_types.append("archived_asset") + if not asset_types: + return [] + if len(asset_types) == 1: query_filter = {"type": asset_types[0]} else: @@ -186,6 +201,68 @@ def get_assets( return conn.find(query_filter, _prepare_fields(fields)) +def get_assets( + project_name, + asset_ids=None, + asset_names=None, + archived=False, + fields=None +): + """Assets for specified project by passed filters. + + Passed filters (ids and names) are always combined so all conditions must + match. + + To receive all assets from project just keep filters empty. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_ids (list[str|ObjectId]): Asset ids that should be found. + asset_names (list[str]): Name assets that should be found. + archived (bool): Add also archived assets. + fields (list[str]): Fields that should be returned. All fields are + returned if 'None' is passed. + + Returns: + Cursor: Query cursor as iterable which returns asset documents matching + passed filters. + """ + + return _get_assets( + project_name, asset_ids, asset_names, True, archived, fields + ) + + +def get_archived_assets( + project_name, + asset_ids=None, + asset_names=None, + fields=None +): + """Archived assets for specified project by passed filters. + + Passed filters (ids and names) are always combined so all conditions must + match. + + To receive all archived assets from project just keep filters empty. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_ids (list[str|ObjectId]): Asset ids that should be found. + asset_names (list[str]): Name assets that should be found. + fields (list[str]): Fields that should be returned. All fields are + returned if 'None' is passed. + + Returns: + Cursor: Query cursor as iterable which returns asset documents matching + passed filters. + """ + + return _get_assets( + project_name, asset_ids, asset_names, False, True, fields + ) + + def get_asset_ids_with_subsets(project_name, asset_ids=None): """Find out which assets have existing subsets. @@ -432,6 +509,7 @@ def _get_versions( project_name, subset_ids=None, version_ids=None, + versions=None, standard=True, hero=False, fields=None @@ -462,6 +540,16 @@ def _get_versions( return [] query_filter["_id"] = {"$in": version_ids} + if versions is not None: + versions = list(versions) + if not versions: + return [] + + if len(versions) == 1: + query_filter["name"] = versions[0] + else: + query_filter["name"] = {"$in": versions} + conn = _get_project_connection(project_name) return conn.find(query_filter, _prepare_fields(fields)) @@ -471,6 +559,7 @@ def get_versions( project_name, version_ids=None, subset_ids=None, + versions=None, hero=False, fields=None ): @@ -484,6 +573,8 @@ def get_versions( Filter ignored if 'None' is passed. subset_ids (list[str]): Subset ids that will be queried. Filter ignored if 'None' is passed. + versions (list[int]): Version names (as integers). + Filter ignored if 'None' is passed. hero (bool): Look also for hero versions. fields (list[str]): Fields that should be returned. All fields are returned if 'None' is passed. 
@@ -496,6 +587,7 @@ def get_versions( project_name, subset_ids, version_ids, + versions, standard=True, hero=hero, fields=fields diff --git a/openpype/hosts/aftereffects/api/pipeline.py b/openpype/hosts/aftereffects/api/pipeline.py index a428a1470d..0bc47665b0 100644 --- a/openpype/hosts/aftereffects/api/pipeline.py +++ b/openpype/hosts/aftereffects/api/pipeline.py @@ -65,14 +65,14 @@ def on_pyblish_instance_toggled(instance, old_value, new_value): instance[0].Visible = new_value -def get_asset_settings(): +def get_asset_settings(asset_doc): """Get settings on current asset from database. Returns: dict: Scene data. """ - asset_data = lib.get_asset()["data"] + asset_data = asset_doc["data"] fps = asset_data.get("fps") frame_start = asset_data.get("frameStart") frame_end = asset_data.get("frameEnd") diff --git a/openpype/hosts/aftereffects/plugins/create/workfile_creator.py b/openpype/hosts/aftereffects/plugins/create/workfile_creator.py index 88e55e21b5..badb3675fd 100644 --- a/openpype/hosts/aftereffects/plugins/create/workfile_creator.py +++ b/openpype/hosts/aftereffects/plugins/create/workfile_creator.py @@ -1,4 +1,5 @@ import openpype.hosts.aftereffects.api as api +from openpype.client import get_asset_by_name from openpype.pipeline import ( AutoCreator, CreatedInstance, @@ -41,10 +42,7 @@ class AEWorkfileCreator(AutoCreator): host_name = legacy_io.Session["AVALON_APP"] if existing_instance is None: - asset_doc = legacy_io.find_one({ - "type": "asset", - "name": asset_name - }) + asset_doc = get_asset_by_name(project_name, asset_name) subset_name = self.get_subset_name( variant, task_name, asset_doc, project_name, host_name ) @@ -69,10 +67,7 @@ class AEWorkfileCreator(AutoCreator): existing_instance["asset"] != asset_name or existing_instance["task"] != task_name ): - asset_doc = legacy_io.find_one({ - "type": "asset", - "name": asset_name - }) + asset_doc = get_asset_by_name(project_name, asset_name) subset_name = self.get_subset_name( variant, task_name, asset_doc, project_name, host_name ) diff --git a/openpype/hosts/aftereffects/plugins/publish/validate_scene_settings.py b/openpype/hosts/aftereffects/plugins/publish/validate_scene_settings.py index 6fe63fc41e..78f98d7445 100644 --- a/openpype/hosts/aftereffects/plugins/publish/validate_scene_settings.py +++ b/openpype/hosts/aftereffects/plugins/publish/validate_scene_settings.py @@ -1,5 +1,9 @@ # -*- coding: utf-8 -*- -"""Validate scene settings.""" +"""Validate scene settings. 
+Requires: + instance -> assetEntity + instance -> anatomyData +""" import os import re @@ -67,7 +71,8 @@ class ValidateSceneSettings(OptionalPyblishPluginMixin, if not self.is_active(instance.data): return - expected_settings = get_asset_settings() + asset_doc = instance.data["assetEntity"] + expected_settings = get_asset_settings(asset_doc) self.log.info("config from DB::{}".format(expected_settings)) task_name = instance.data["anatomyData"]["task"]["name"] diff --git a/openpype/hosts/harmony/api/README.md b/openpype/hosts/harmony/api/README.md index dd45eb14dd..b39f900886 100644 --- a/openpype/hosts/harmony/api/README.md +++ b/openpype/hosts/harmony/api/README.md @@ -610,7 +610,8 @@ class ImageSequenceLoader(load.LoaderPlugin): def update(self, container, representation): node = container.pop("node") - version = legacy_io.find_one({"_id": representation["parent"]}) + project_name = legacy_io.active_project() + version = get_version_by_id(project_name, representation["parent"]) files = [] for f in version["data"]["files"]: files.append( diff --git a/openpype/hosts/harmony/api/pipeline.py b/openpype/hosts/harmony/api/pipeline.py index b953d0e984..86b5753f7e 100644 --- a/openpype/hosts/harmony/api/pipeline.py +++ b/openpype/hosts/harmony/api/pipeline.py @@ -2,10 +2,10 @@ import os from pathlib import Path import logging -from bson.objectid import ObjectId import pyblish.api from openpype import lib +from openpype.client import get_representation_by_id from openpype.lib import register_event_callback from openpype.pipeline import ( legacy_io, @@ -104,22 +104,20 @@ def check_inventory(): If it does it will colorize outdated nodes and display warning message in Harmony. """ - if not lib.any_outdated(): - return + project_name = legacy_io.active_project() outdated_containers = [] for container in ls(): - representation = container['representation'] - representation_doc = legacy_io.find_one( - { - "_id": ObjectId(representation), - "type": "representation" - }, - projection={"parent": True} + representation_id = container['representation'] + representation_doc = get_representation_by_id( + project_name, representation_id, fields=["parent"] ) if representation_doc and not lib.is_latest(representation_doc): outdated_containers.append(container) + if not outdated_containers: + return + # Colour nodes. outdated_nodes = [] for container in outdated_containers: diff --git a/openpype/hosts/hiero/api/lib.py b/openpype/hosts/hiero/api/lib.py index 06dfd2f2ee..8c8c31bc4c 100644 --- a/openpype/hosts/hiero/api/lib.py +++ b/openpype/hosts/hiero/api/lib.py @@ -12,8 +12,13 @@ import shutil import hiero from Qt import QtWidgets -from bson.objectid import ObjectId +from openpype.client import ( + get_project, + get_versions, + get_last_versions, + get_representations, +) from openpype.pipeline import legacy_io from openpype.api import (Logger, Anatomy, get_anatomy_settings) from . 
import tags @@ -477,7 +482,7 @@ def sync_avalon_data_to_workfile(): project.setProjectRoot(active_project_root) # get project data from avalon db - project_doc = legacy_io.find_one({"type": "project"}) + project_doc = get_project(project_name) project_data = project_doc["data"] log.debug("project_data: {}".format(project_data)) @@ -1065,35 +1070,63 @@ def check_inventory_versions(track_items=None): clip_color_last = "green" clip_color = "red" - # get all track items from current timeline + item_with_repre_id = [] + repre_ids = set() + # Find all containers and collect their track items and representation ids for track_item in track_item: container = parse_container(track_item) if container: - # get representation from io - representation = legacy_io.find_one({ - "type": "representation", - "_id": ObjectId(container["representation"]) - }) + repre_id = container["representation"] + repre_ids.add(repre_id) + item_with_repre_id.append((track_item, repre_id)) - # Get start frame from version data - version = legacy_io.find_one({ - "type": "version", - "_id": representation["parent"] - }) + # Skip if nothing was found + if not repre_ids: + return - # get all versions in list - versions = legacy_io.find({ - "type": "version", - "parent": version["parent"] - }).distinct('name') + project_name = legacy_io.active_project() + # Find representations based on found containers + repre_docs = get_representations( + project_name, + repre_ids=repre_ids, + fields=["_id", "parent"] ) + # Store representations by id and collect version ids + repre_docs_by_id = {} + version_ids = set() + for repre_doc in repre_docs: + # Use stringed representation id to match value in containers + repre_id = str(repre_doc["_id"]) + repre_docs_by_id[repre_id] = repre_doc + version_ids.add(repre_doc["parent"]) - max_version = max(versions) + version_docs = get_versions( + project_name, version_ids, fields=["_id", "name", "parent"] ) + # Store versions by id and collect subset ids + version_docs_by_id = {} + subset_ids = set() + for version_doc in version_docs: + version_docs_by_id[version_doc["_id"]] = version_doc + subset_ids.add(version_doc["parent"]) - # set clip colour - if version.get("name") == max_version: - track_item.source().binItem().setColor(clip_color_last) - else: - track_item.source().binItem().setColor(clip_color) + # Query last versions based on subset ids + last_versions_by_subset_id = get_last_versions( + project_name, subset_ids=subset_ids, fields=["_id", "parent"] + ) + + for item in item_with_repre_id: + # Some Python versions shipped with Nuke can't unpack the tuple directly in the for loop + track_item, repre_id = item + + repre_doc = repre_docs_by_id[repre_id] + version_doc = version_docs_by_id[repre_doc["parent"]] + last_version_doc = last_versions_by_subset_id[version_doc["parent"]] + # Check if last version is same as current version + if version_doc["_id"] == last_version_doc["_id"]: + track_item.source().binItem().setColor(clip_color_last) + else: + track_item.source().binItem().setColor(clip_color) def selection_changed_timeline(event): diff --git a/openpype/hosts/hiero/api/tags.py b/openpype/hosts/hiero/api/tags.py index 8c6ff2a77b..10df96fa53 100644 --- a/openpype/hosts/hiero/api/tags.py +++ b/openpype/hosts/hiero/api/tags.py @@ -2,6 +2,7 @@ import re import os import hiero +from openpype.client import get_project, get_assets from openpype.api import Logger from openpype.pipeline import legacy_io @@ -141,7 +142,9 @@ def add_tags_to_workfile(): nks_pres_tags = tag_data() # Get project task types. 
- tasks = legacy_io.find_one({"type": "project"})["config"]["tasks"] + project_name = legacy_io.active_project() + project_doc = get_project(project_name) + tasks = project_doc["config"]["tasks"] nks_pres_tags["[Tasks]"] = {} log.debug("__ tasks: {}".format(tasks)) for task_type in tasks.keys(): @@ -159,7 +162,9 @@ def add_tags_to_workfile(): # asset builds and shots. if int(os.getenv("TAG_ASSETBUILD_STARTUP", 0)) == 1: nks_pres_tags["[AssetBuilds]"] = {} - for asset in legacy_io.find({"type": "asset"}): + for asset in get_assets( + project_name, fields=["name", "data.entityType"] + ): if asset["data"]["entityType"] == "AssetBuild": nks_pres_tags["[AssetBuilds]"][asset["name"]] = { "editable": "1", diff --git a/openpype/hosts/hiero/plugins/load/load_clip.py b/openpype/hosts/hiero/plugins/load/load_clip.py index a3365253b3..2a7d1af41e 100644 --- a/openpype/hosts/hiero/plugins/load/load_clip.py +++ b/openpype/hosts/hiero/plugins/load/load_clip.py @@ -1,3 +1,7 @@ +from openpype.client import ( + get_version_by_id, + get_last_version_by_subset_id +) from openpype.pipeline import ( legacy_io, get_representation_path, @@ -103,12 +107,12 @@ class LoadClip(phiero.SequenceLoader): namespace = container['namespace'] track_item = phiero.get_track_items( track_item_name=namespace).pop() - version = legacy_io.find_one({ - "type": "version", - "_id": representation["parent"] - }) - version_data = version.get("data", {}) - version_name = version.get("name", None) + + project_name = legacy_io.active_project() + version_doc = get_version_by_id(project_name, representation["parent"]) + + version_data = version_doc.get("data", {}) + version_name = version_doc.get("name", None) colorspace = version_data.get("colorspace", None) object_name = "{}_{}".format(name, namespace) file = get_representation_path(representation).replace("\\", "/") @@ -143,7 +147,7 @@ class LoadClip(phiero.SequenceLoader): }) # update color of clip regarding the version order - self.set_item_color(track_item, version) + self.set_item_color(track_item, version_doc) return phiero.update_container(track_item, data_imprint) @@ -166,21 +170,14 @@ class LoadClip(phiero.SequenceLoader): cls.sequence = cls.track.parent() @classmethod - def set_item_color(cls, track_item, version): - + def set_item_color(cls, track_item, version_doc): + project_name = legacy_io.active_project() + last_version_doc = get_last_version_by_subset_id( + project_name, version_doc["parent"], fields=["_id"] + ) clip = track_item.source() - # define version name - version_name = version.get("name", None) - # get all versions in list - versions = legacy_io.find({ - "type": "version", - "parent": version["parent"] - }).distinct('name') - - max_version = max(versions) - # set clip colour - if version_name == max_version: + if version_doc["_id"] == last_version_doc["_id"]: clip.binItem().setColor(cls.clip_color_last) else: clip.binItem().setColor(cls.clip_color) diff --git a/openpype/hosts/hiero/plugins/publish_old_workflow/collect_assetbuilds.py b/openpype/hosts/hiero/plugins/publish_old_workflow/collect_assetbuilds.py index 10baf25803..5f96533052 100644 --- a/openpype/hosts/hiero/plugins/publish_old_workflow/collect_assetbuilds.py +++ b/openpype/hosts/hiero/plugins/publish_old_workflow/collect_assetbuilds.py @@ -1,4 +1,5 @@ from pyblish import api +from openpype.client import get_assets from openpype.pipeline import legacy_io @@ -17,8 +18,9 @@ class CollectAssetBuilds(api.ContextPlugin): hosts = ["hiero"] def process(self, context): + project_name = 
legacy_io.active_project() asset_builds = {} - for asset in legacy_io.find({"type": "asset"}): + for asset in get_assets(project_name): if asset["data"]["entityType"] == "AssetBuild": self.log.debug("Found \"{}\" in database.".format(asset)) asset_builds[asset["name"]] = asset diff --git a/openpype/hosts/maya/api/action.py b/openpype/hosts/maya/api/action.py index ca1006b6aa..90605734e7 100644 --- a/openpype/hosts/maya/api/action.py +++ b/openpype/hosts/maya/api/action.py @@ -3,6 +3,7 @@ from __future__ import absolute_import import pyblish.api +from openpype.client import get_asset_by_name from openpype.pipeline import legacy_io from openpype.api import get_errored_instances_from_context @@ -74,12 +75,21 @@ class GenerateUUIDsOnInvalidAction(pyblish.api.Action): from . import lib - asset = instance.data['asset'] - asset_id = legacy_io.find_one( - {"name": asset, "type": "asset"}, - projection={"_id": True} - )['_id'] - for node, _id in lib.generate_ids(nodes, asset_id=asset_id): + # Expected to be called from validators, in which case 'assetEntity' + # should always be available, but a fallback to query by name is kept. + asset_doc = instance.data.get("assetEntity") + if not asset_doc: + asset_name = instance.data["asset"] + project_name = legacy_io.active_project() + self.log.info(( + "Asset is not stored on instance." + " Querying by name \"{}\" from project \"{}\"" + ).format(asset_name, project_name)) + asset_doc = get_asset_by_name( + project_name, asset_name, fields=["_id"] + ) + + for node, _id in lib.generate_ids(nodes, asset_id=asset_doc["_id"]): lib.set_id(node, _id, overwrite=True) diff --git a/openpype/hosts/maya/api/commands.py b/openpype/hosts/maya/api/commands.py index dd616b6dd6..355edf3ae4 100644 --- a/openpype/hosts/maya/api/commands.py +++ b/openpype/hosts/maya/api/commands.py @@ -2,6 +2,7 @@ """OpenPype script commands to be used directly in Maya.""" from maya import cmds +from openpype.client import get_asset_by_name, get_project from openpype.pipeline import legacy_io @@ -79,8 +80,9 @@ def reset_frame_range(): cmds.currentUnit(time=fps) # Set frame start/end + project_name = legacy_io.active_project() asset_name = legacy_io.Session["AVALON_ASSET"] - asset = legacy_io.find_one({"name": asset_name, "type": "asset"}) + asset = get_asset_by_name(project_name, asset_name) frame_start = asset["data"].get("frameStart") frame_end = asset["data"].get("frameEnd") @@ -145,8 +147,9 @@ def reset_resolution(): resolution_height = 1080 # Get resolution from asset + project_name = legacy_io.active_project() asset_name = legacy_io.Session["AVALON_ASSET"] - asset_doc = legacy_io.find_one({"name": asset_name, "type": "asset"}) + asset_doc = get_asset_by_name(project_name, asset_name) resolution = _resolution_from_document(asset_doc) # Try get resolution from project if resolution is None: @@ -155,7 +158,7 @@ def reset_resolution(): "Asset \"{}\" does not have set resolution."
" Trying to get resolution from project" ).format(asset_name)) - project_doc = legacy_io.find_one({"type": "project"}) + project_doc = get_project(project_name) resolution = _resolution_from_document(project_doc) if resolution is None: diff --git a/openpype/hosts/maya/api/lib.py b/openpype/hosts/maya/api/lib.py index bce03a648b..de9a9da911 100644 --- a/openpype/hosts/maya/api/lib.py +++ b/openpype/hosts/maya/api/lib.py @@ -12,11 +12,17 @@ import contextlib from collections import OrderedDict, defaultdict from math import ceil from six import string_types -import bson from maya import cmds, mel import maya.api.OpenMaya as om +from openpype.client import ( + get_project, + get_asset_by_name, + get_subsets, + get_last_versions, + get_representation_by_name +) from openpype import lib from openpype.api import get_anatomy_settings from openpype.pipeline import ( @@ -1387,15 +1393,11 @@ def generate_ids(nodes, asset_id=None): if asset_id is None: # Get the asset ID from the database for the asset of current context - asset_data = legacy_io.find_one( - { - "type": "asset", - "name": legacy_io.Session["AVALON_ASSET"] - }, - projection={"_id": True} - ) - assert asset_data, "No current asset found in Session" - asset_id = asset_data['_id'] + project_name = legacy_io.active_project() + asset_name = legacy_io.Session["AVALON_ASSET"] + asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"]) + assert asset_doc, "No current asset found in Session" + asset_id = asset_doc['_id'] node_ids = [] for node in nodes: @@ -1548,13 +1550,15 @@ def list_looks(asset_id): # # get all subsets with look leading in # the name associated with the asset - subset = legacy_io.find({ - "parent": bson.ObjectId(asset_id), - "type": "subset", - "name": {"$regex": "look*"} - }) - - return list(subset) + # TODO this should probably look for family 'look' instead of checking + # subset name that can not start with family + project_name = legacy_io.active_project() + subset_docs = get_subsets(project_name, asset_ids=[asset_id]) + return [ + subset_doc + for subset_doc in subset_docs + if subset_doc["name"].startswith("look") + ] def assign_look_by_version(nodes, version_id): @@ -1570,18 +1574,15 @@ def assign_look_by_version(nodes, version_id): None """ - # Get representations of shader file and relationships - look_representation = legacy_io.find_one({ - "type": "representation", - "parent": version_id, - "name": "ma" - }) + project_name = legacy_io.active_project() - json_representation = legacy_io.find_one({ - "type": "representation", - "parent": version_id, - "name": "json" - }) + # Get representations of shader file and relationships + look_representation = get_representation_by_name( + project_name, "ma", version_id + ) + json_representation = get_representation_by_name( + project_name, "json", version_id + ) # See if representation is already loaded, if so reuse it. 
host = registered_host() @@ -1639,42 +1640,54 @@ def assign_look(nodes, subset="lookDefault"): parts = pype_id.split(":", 1) grouped[parts[0]].append(node) + project_name = legacy_io.active_project() + subset_docs = get_subsets( + project_name, subset_names=[subset], asset_ids=grouped.keys() + ) + subset_docs_by_asset_id = { + str(subset_doc["parent"]): subset_doc + for subset_doc in subset_docs + } + subset_ids = { + subset_doc["_id"] + for subset_doc in subset_docs_by_asset_id.values() + } + last_version_docs = get_last_versions( + project_name, + subset_ids=subset_ids, + fields=["_id", "name", "data.families"] + ) + last_version_docs_by_subset_id = { + last_version_doc["parent"]: last_version_doc + for last_version_doc in last_version_docs + } + for asset_id, asset_nodes in grouped.items(): # create objectId for database - try: - asset_id = bson.ObjectId(asset_id) - except bson.errors.InvalidId: - log.warning("Asset ID is not compatible with bson") - continue - subset_data = legacy_io.find_one({ - "type": "subset", - "name": subset, - "parent": asset_id }) - - if not subset_data: + subset_doc = subset_docs_by_asset_id.get(asset_id) + if not subset_doc: log.warning("No subset '{}' found for {}".format(subset, asset_id)) continue - # get last version - # with backwards compatibility - version = legacy_io.find_one( - { - "parent": subset_data['_id'], - "type": "version", - "data.families": {"$in": ["look"]} - }, - sort=[("name", -1)], - projection={ - "_id": True, - "name": True - } - ) + last_version = last_version_docs_by_subset_id.get(subset_doc["_id"]) + if not last_version: + log.warning(( + "Last version not found for subset '{}' on asset with id {}" + ).format(subset, asset_id)) + continue - log.debug("Assigning look '{}' ".format(subset, version["name"])) + families = last_version.get("data", {}).get("families") or [] + if "look" not in families: + log.warning(( + "Last version for subset '{}' on asset with id {}" + " does not have look family" + ).format(subset, asset_id)) + continue - assign_look_by_version(asset_nodes, version['_id']) + log.debug("Assigning look '{}' version '{}'".format( + subset, last_version["name"])) + + assign_look_by_version(asset_nodes, last_version["_id"]) def apply_shaders(relationships, shadernodes, nodes): @@ -2126,9 +2139,11 @@ def set_scene_resolution(width, height, pixelAspect): control_node = "defaultResolution" current_renderer = cmds.getAttr("defaultRenderGlobals.currentRenderer") + aspect_ratio_attr = "deviceAspectRatio" # Give VRay a helping hand as it is slightly different from the rest if current_renderer == "vray": + aspect_ratio_attr = "aspectRatio" vray_node = "vraySettings" if cmds.objExists(vray_node): control_node = vray_node @@ -2141,7 +2156,8 @@ def set_scene_resolution(width, height, pixelAspect): cmds.setAttr("%s.width" % control_node, width) cmds.setAttr("%s.height" % control_node, height) deviceAspectRatio = ((float(width) / float(height)) * float(pixelAspect)) - cmds.setAttr("%s.deviceAspectRatio" % control_node, deviceAspectRatio) + cmds.setAttr( + "{}.{}".format(control_node, aspect_ratio_attr), deviceAspectRatio) cmds.setAttr("%s.pixelAspect" % control_node, pixelAspect) @@ -2155,7 +2171,8 @@ def reset_scene_resolution(): None """ - project_doc = legacy_io.find_one({"type": "project"}) + project_name = legacy_io.active_project() + project_doc = get_project(project_name) project_data = project_doc["data"] asset_data = lib.get_asset()["data"] @@ -2188,7 +2205,8 @@ def set_context_settings(): """ # Todo (Wijnand): apply renderer and resolution of project - project_doc = 
legacy_io.find_one({"type": "project"}) + project_name = legacy_io.active_project() + project_doc = get_project(project_name) project_data = project_doc["data"] asset_data = lib.get_asset()["data"] diff --git a/openpype/hosts/maya/api/setdress.py b/openpype/hosts/maya/api/setdress.py index f8d3ed79b8..bea8f154b1 100644 --- a/openpype/hosts/maya/api/setdress.py +++ b/openpype/hosts/maya/api/setdress.py @@ -6,10 +6,16 @@ import contextlib import copy import six -from bson.objectid import ObjectId from maya import cmds +from openpype.client import ( + get_version_by_name, + get_last_version_by_subset_id, + get_representation_by_id, + get_representation_by_name, + get_representation_parents, +) from openpype.pipeline import ( schema, legacy_io, @@ -283,36 +289,35 @@ def update_package_version(container, version): """ # Versioning (from `core.maya.pipeline`) - current_representation = legacy_io.find_one({ - "_id": ObjectId(container["representation"]) - }) + project_name = legacy_io.active_project() + current_representation = get_representation_by_id( + project_name, container["representation"] + ) assert current_representation is not None, "This is a bug" - version_, subset, asset, project = legacy_io.parenthood( - current_representation + repre_parents = get_representation_parents( + project_name, current_representation ) + version_doc = subset_doc = asset_doc = project_doc = None + if repre_parents: + version_doc, subset_doc, asset_doc, project_doc = repre_parents if version == -1: - new_version = legacy_io.find_one({ - "type": "version", - "parent": subset["_id"] - }, sort=[("name", -1)]) + new_version = get_last_version_by_subset_id( + project_name, subset_doc["_id"] + ) else: - new_version = legacy_io.find_one({ - "type": "version", - "parent": subset["_id"], - "name": version, - }) + new_version = get_version_by_name( + project_name, version, subset_doc["_id"] + ) assert new_version is not None, "This is a bug" # Get the new representation (new file) - new_representation = legacy_io.find_one({ - "type": "representation", - "parent": new_version["_id"], - "name": current_representation["name"] - }) + new_representation = get_representation_by_name( + project_name, current_representation["name"], new_version["_id"] + ) update_package(container, new_representation) @@ -330,10 +335,10 @@ def update_package(set_container, representation): """ # Load the original package data - current_representation = legacy_io.find_one({ - "_id": ObjectId(set_container['representation']), - "type": "representation" - }) + project_name = legacy_io.active_project() + current_representation = get_representation_by_id( + project_name, set_container["representation"] + ) current_file = get_representation_path(current_representation) assert current_file.endswith(".json") @@ -380,6 +385,7 @@ def update_scene(set_container, containers, current_data, new_data, new_file): from openpype.hosts.maya.lib import DEFAULT_MATRIX, get_container_transforms set_namespace = set_container['namespace'] + project_name = legacy_io.active_project() # Update the setdress hierarchy alembic set_root = get_container_transforms(set_container, root=True) @@ -481,12 +487,12 @@ def update_scene(set_container, containers, current_data, new_data, new_file): # Check whether the conversion can be done by the Loader. # They *must* use the same asset, subset and Loader for # `update_container` to make sense. 
- old = legacy_io.find_one({ - "_id": ObjectId(representation_current) - }) - new = legacy_io.find_one({ - "_id": ObjectId(representation_new) - }) + old = get_representation_by_id( + project_name, representation_current + ) + new = get_representation_by_id( + project_name, representation_new + ) is_valid = compare_representations(old=old, new=new) if not is_valid: log.error("Skipping: %s. See log for details.", diff --git a/openpype/hosts/maya/plugins/create/create_multiverse_usd.py b/openpype/hosts/maya/plugins/create/create_multiverse_usd.py index 034714d51b..5290d5143f 100644 --- a/openpype/hosts/maya/plugins/create/create_multiverse_usd.py +++ b/openpype/hosts/maya/plugins/create/create_multiverse_usd.py @@ -16,7 +16,7 @@ class CreateMultiverseUsd(plugin.Creator): self.data.update(lib.collect_animation_data(True)) self.data["fileFormat"] = ["usd", "usda", "usdz"] - self.data["stripNamespaces"] = False + self.data["stripNamespaces"] = True self.data["mergeTransformAndShape"] = False self.data["writeAncestors"] = True self.data["flattenParentXforms"] = False @@ -37,15 +37,15 @@ class CreateMultiverseUsd(plugin.Creator): self.data["writeUVs"] = True self.data["writeColorSets"] = False self.data["writeTangents"] = False - self.data["writeRefPositions"] = False + self.data["writeRefPositions"] = True self.data["writeBlendShapes"] = False - self.data["writeDisplayColor"] = False + self.data["writeDisplayColor"] = True self.data["writeSkinWeights"] = False self.data["writeMaterialAssignment"] = False self.data["writeHardwareShader"] = False self.data["writeShadingNetworks"] = False self.data["writeTransformMatrix"] = True - self.data["writeUsdAttributes"] = False + self.data["writeUsdAttributes"] = True self.data["writeInstancesAsReferences"] = False self.data["timeVaryingTopology"] = False self.data["customMaterialNamespace"] = '' diff --git a/openpype/hosts/maya/plugins/inventory/import_modelrender.py b/openpype/hosts/maya/plugins/inventory/import_modelrender.py index a5367f16e5..8a7390bc8d 100644 --- a/openpype/hosts/maya/plugins/inventory/import_modelrender.py +++ b/openpype/hosts/maya/plugins/inventory/import_modelrender.py @@ -1,6 +1,10 @@ +import re import json -from bson.objectid import ObjectId +from openpype.client import ( + get_representation_by_id, + get_representations +) from openpype.pipeline import ( InventoryAction, get_representation_context, @@ -31,6 +35,7 @@ class ImportModelRender(InventoryAction): def process(self, containers): from maya import cmds + project_name = legacy_io.active_project() for container in containers: con_name = container["objectName"] nodes = [] @@ -40,9 +45,9 @@ class ImportModelRender(InventoryAction): else: nodes.append(n) - repr_doc = legacy_io.find_one({ - "_id": ObjectId(container["representation"]), - }) + repr_doc = get_representation_by_id( + project_name, container["representation"], fields=["parent"] + ) version_id = repr_doc["parent"] print("Importing render sets for model %r" % con_name) @@ -63,26 +68,38 @@ class ImportModelRender(InventoryAction): from maya import cmds + project_name = legacy_io.active_project() + repre_docs = get_representations( + project_name, version_ids=[version_id], fields=["_id", "name"] + ) # Get representations of shader file and relationships - look_repr = legacy_io.find_one({ - "type": "representation", - "parent": version_id, - "name": {"$regex": self.scene_type_regex}, - }) - if not look_repr: + json_repre = None + look_repres = [] + scene_type_regex = re.compile(self.scene_type_regex) + for repre_doc in 
repre_docs: + repre_name = repre_doc["name"] + if repre_name == self.look_data_type: + json_repre = repre_doc + continue + + if scene_type_regex.fullmatch(repre_name): + look_repres.append(repre_doc) + + # QUESTION should we care if there is more than one look + # representation? (since it's based on regex match) + look_repre = None + if look_repres: + look_repre = look_repres[0] + + # QUESTION shouldn't the json representation be validated too? + if not look_repre: print("No model render sets for this model version..") return - json_repr = legacy_io.find_one({ - "type": "representation", - "parent": version_id, - "name": self.look_data_type, - }) - - context = get_representation_context(look_repr["_id"]) + context = get_representation_context(look_repre["_id"]) maya_file = self.filepath_from_context(context) - context = get_representation_context(json_repr["_id"]) + context = get_representation_context(json_repre["_id"]) json_file = self.filepath_from_context(context) # Import the look file diff --git a/openpype/hosts/maya/plugins/load/load_audio.py b/openpype/hosts/maya/plugins/load/load_audio.py index ce814e1299..6f60cb5726 100644 --- a/openpype/hosts/maya/plugins/load/load_audio.py +++ b/openpype/hosts/maya/plugins/load/load_audio.py @@ -1,5 +1,10 @@ from maya import cmds, mel +from openpype.client import ( + get_asset_by_id, + get_subset_by_id, + get_version_by_id, +) from openpype.pipeline import ( legacy_io, load, @@ -65,9 +70,16 @@ class AudioLoader(load.LoaderPlugin): ) # Set frame range. - version = legacy_io.find_one({"_id": representation["parent"]}) - subset = legacy_io.find_one({"_id": version["parent"]}) - asset = legacy_io.find_one({"_id": subset["parent"]}) + project_name = legacy_io.active_project() + version = get_version_by_id( + project_name, representation["parent"], fields=["parent"] + ) + subset = get_subset_by_id( + project_name, version["parent"], fields=["parent"] + ) + asset = get_asset_by_id( + project_name, subset["parent"], fields=["data.frameStart", "data.frameEnd"] + ) audio_node.sourceStart.set(1 - asset["data"]["frameStart"]) audio_node.sourceEnd.set(asset["data"]["frameEnd"]) diff --git a/openpype/hosts/maya/plugins/load/load_image_plane.py b/openpype/hosts/maya/plugins/load/load_image_plane.py index 5e44917f28..b267921bdc 100644 --- a/openpype/hosts/maya/plugins/load/load_image_plane.py +++ b/openpype/hosts/maya/plugins/load/load_image_plane.py @@ -1,5 +1,10 @@ from Qt import QtWidgets, QtCore +from openpype.client import ( + get_asset_by_id, + get_subset_by_id, + get_version_by_id, +) from openpype.pipeline import ( legacy_io, load, @@ -216,9 +221,16 @@ class ImagePlaneLoader(load.LoaderPlugin): ) # Set frame range. 
- version = legacy_io.find_one({"_id": representation["parent"]}) - subset = legacy_io.find_one({"_id": version["parent"]}) - asset = legacy_io.find_one({"_id": subset["parent"]}) + project_name = legacy_io.active_project() + version = get_version_by_id( + project_name, representation["parent"], fields=["parent"] + ) + subset = get_subset_by_id( + project_name, version["parent"], fields=["parent"] + ) + asset = get_asset_by_id( + project_name, subset["parent"], fields=["data.frameStart", "data.frameEnd"] + ) start_frame = asset["data"]["frameStart"] end_frame = asset["data"]["frameEnd"] image_plane_shape.frameOffset.set(1 - start_frame) diff --git a/openpype/hosts/maya/plugins/load/load_look.py b/openpype/hosts/maya/plugins/load/load_look.py index ae3a683241..7392adc4dd 100644 --- a/openpype/hosts/maya/plugins/load/load_look.py +++ b/openpype/hosts/maya/plugins/load/load_look.py @@ -5,6 +5,7 @@ from collections import defaultdict from Qt import QtWidgets +from openpype.client import get_representation_by_name from openpype.pipeline import ( legacy_io, get_representation_path, @@ -75,11 +76,10 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): shader_nodes = cmds.ls(members, type='shadingEngine') nodes = set(self._get_nodes_with_shader(shader_nodes)) - json_representation = legacy_io.find_one({ - "type": "representation", - "parent": representation['parent'], - "name": "json" - }) + project_name = legacy_io.active_project() + json_representation = get_representation_by_name( + project_name, "json", representation["parent"] + ) # Load relationships shader_relation = get_representation_path(json_representation) diff --git a/openpype/hosts/maya/plugins/load/load_vrayproxy.py b/openpype/hosts/maya/plugins/load/load_vrayproxy.py index 22d56139f6..e3d6166d3a 100644 --- a/openpype/hosts/maya/plugins/load/load_vrayproxy.py +++ b/openpype/hosts/maya/plugins/load/load_vrayproxy.py @@ -7,10 +7,9 @@ loader will use them instead of native vray vrmesh format. 
""" import os -from bson.objectid import ObjectId - import maya.cmds as cmds +from openpype.client import get_representation_by_name from openpype.api import get_project_settings from openpype.pipeline import ( legacy_io, @@ -185,12 +184,8 @@ class VRayProxyLoader(load.LoaderPlugin): """ self.log.debug( "Looking for abc in published representations of this version.") - abc_rep = legacy_io.find_one({ - "type": "representation", - "parent": ObjectId(version_id), - "name": "abc" - }) - + project_name = legacy_io.active_project() + abc_rep = get_representation_by_name(project_name, "abc", version_id) if abc_rep: self.log.debug("Found, we'll link alembic to vray proxy.") file_name = get_representation_path(abc_rep) diff --git a/openpype/hosts/maya/plugins/publish/collect_look.py b/openpype/hosts/maya/plugins/publish/collect_look.py index e8ada57f8f..ec583bcce7 100644 --- a/openpype/hosts/maya/plugins/publish/collect_look.py +++ b/openpype/hosts/maya/plugins/publish/collect_look.py @@ -40,7 +40,7 @@ FILE_NODES = { "aiImage": "filename", - "RedshiftNormalMap": "text0", + "RedshiftNormalMap": "tex0", "PxrBump": "filename", "PxrNormalMap": "filename", diff --git a/openpype/hosts/maya/plugins/publish/collect_review.py b/openpype/hosts/maya/plugins/publish/collect_review.py index e9e0d74c03..15b89ad53c 100644 --- a/openpype/hosts/maya/plugins/publish/collect_review.py +++ b/openpype/hosts/maya/plugins/publish/collect_review.py @@ -3,6 +3,7 @@ import pymel.core as pm import pyblish.api +from openpype.client import get_subset_by_name from openpype.pipeline import legacy_io @@ -78,11 +79,15 @@ class CollectReview(pyblish.api.InstancePlugin): self.log.debug('isntance data {}'.format(instance.data)) else: legacy_subset_name = task + 'Review' - asset_doc_id = instance.context.data['assetEntity']["_id"] - subsets = legacy_io.find({"type": "subset", - "name": legacy_subset_name, - "parent": asset_doc_id}).distinct("_id") - if len(list(subsets)) > 0: + asset_doc = instance.context.data['assetEntity'] + project_name = legacy_io.active_project() + subset_doc = get_subset_by_name( + project_name, + legacy_subset_name, + asset_doc["_id"], + fields=["_id"] + ) + if subset_doc: self.log.debug("Existing subsets found, keep legacy name.") instance.data['subset'] = legacy_subset_name diff --git a/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py b/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py index 5ad6b79d5c..4110ad474d 100644 --- a/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py +++ b/openpype/hosts/maya/plugins/publish/extract_camera_alembic.py @@ -34,7 +34,6 @@ class ExtractCameraAlembic(openpype.api.Extractor): dag=True, type="camera") # validate required settings - assert len(cameras) == 1, "Not a single camera found in extraction" assert isinstance(step, float), "Step must be a float value" camera = cameras[0] @@ -44,8 +43,12 @@ class ExtractCameraAlembic(openpype.api.Extractor): path = os.path.join(dir_path, filename) # Perform alembic extraction + member_shapes = cmds.ls( + members, leaf=True, shapes=True, long=True, dag=True) with lib.maintained_selection(): - cmds.select(camera, replace=True, noExpand=True) + cmds.select( + member_shapes, + replace=True, noExpand=True) # Enforce forward slashes for AbcExport because we're # embedding it into a job string @@ -57,10 +60,12 @@ class ExtractCameraAlembic(openpype.api.Extractor): job_str += ' -step {0} '.format(step) if bake_to_worldspace: - transform = cmds.listRelatives(camera, - parent=True, - fullPath=True)[0] - 
job_str += ' -worldSpace -root {0}'.format(transform) + job_str += ' -worldSpace' + for member in member_shapes: + self.log.info(f"processing {member}") + transform = cmds.listRelatives( + member, parent=True, fullPath=True)[0] + job_str += ' -root {0}'.format(transform) job_str += ' -file "{0}"'.format(path) diff --git a/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py b/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py index 49c156f9cd..1cb30e65ea 100644 --- a/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py +++ b/openpype/hosts/maya/plugins/publish/extract_camera_mayaScene.py @@ -131,12 +131,12 @@ class ExtractCameraMayaScene(openpype.api.Extractor): "bake to world space is ignored...") # get cameras - members = instance.data['setMembers'] + members = cmds.ls(instance.data['setMembers'], leaf=True, shapes=True, + long=True, dag=True) cameras = cmds.ls(members, leaf=True, shapes=True, long=True, dag=True, type="camera") # validate required settings - assert len(cameras) == 1, "Single camera must be found in extraction" assert isinstance(step, float), "Step must be a float value" camera = cameras[0] transform = cmds.listRelatives(camera, parent=True, fullPath=True) @@ -158,15 +158,24 @@ class ExtractCameraMayaScene(openpype.api.Extractor): frame_range=[start, end], step=step ) - baked_shapes = cmds.ls(baked, + baked_camera_shapes = cmds.ls(baked, type="camera", dag=True, shapes=True, long=True) + + members = members + baked_camera_shapes + members.remove(camera) else: - baked_shapes = cameras + baked_camera_shapes = cmds.ls(cameras, + type="camera", + dag=True, + shapes=True, + long=True) # Fix PLN-178: Don't allow background color to be non-black - for cam in baked_shapes: + for cam in cmds.ls( + baked_camera_shapes, type="camera", dag=True, + shapes=True, long=True): attrs = {"backgroundColorR": 0.0, "backgroundColorG": 0.0, "backgroundColorB": 0.0, @@ -177,7 +186,8 @@ class ExtractCameraMayaScene(openpype.api.Extractor): cmds.setAttr(plug, value) self.log.info("Performing extraction..") - cmds.select(baked_shapes, noExpand=True) + cmds.select(cmds.ls(members, dag=True, + shapes=True, long=True), noExpand=True) cmds.file(path, force=True, typ="mayaAscii" if self.scene_type == "ma" else "mayaBinary", # noqa: E501 diff --git a/openpype/hosts/maya/plugins/publish/extract_playblast.py b/openpype/hosts/maya/plugins/publish/extract_playblast.py index bb1ecf279d..ba939d5428 100644 --- a/openpype/hosts/maya/plugins/publish/extract_playblast.py +++ b/openpype/hosts/maya/plugins/publish/extract_playblast.py @@ -111,7 +111,8 @@ class ExtractPlayblast(openpype.api.Extractor): self.log.debug("playblast path {}".format(path)) collected_files = os.listdir(stagingdir) - collections, remainder = clique.assemble(collected_files) + collections, remainder = clique.assemble(collected_files, + minimum_items=1) self.log.debug("filename {}".format(filename)) frame_collection = None @@ -134,10 +135,15 @@ class ExtractPlayblast(openpype.api.Extractor): # Add camera node name to representation data camera_node_name = pm.ls(camera)[0].getTransform().name() + collected_files = list(frame_collection) + # single frame file shouldn't be in list, only as a string + if len(collected_files) == 1: + collected_files = collected_files[0] + representation = { 'name': 'png', 'ext': 'png', - 'files': list(frame_collection), + 'files': collected_files, "stagingDir": stagingdir, "frameStart": start, "frameEnd": end, diff --git 
a/openpype/hosts/maya/plugins/publish/validate_camera_contents.py b/openpype/hosts/maya/plugins/publish/validate_camera_contents.py index 20af8d2315..87712a4cea 100644 --- a/openpype/hosts/maya/plugins/publish/validate_camera_contents.py +++ b/openpype/hosts/maya/plugins/publish/validate_camera_contents.py @@ -20,6 +20,7 @@ class ValidateCameraContents(pyblish.api.InstancePlugin): hosts = ['maya'] label = 'Camera Contents' actions = [openpype.hosts.maya.api.action.SelectInvalidAction] + validate_shapes = True @classmethod def get_invalid(cls, instance): @@ -32,7 +33,7 @@ class ValidateCameraContents(pyblish.api.InstancePlugin): invalid = [] cameras = cmds.ls(shapes, type='camera', long=True) if len(cameras) != 1: - cls.log.warning("Camera instance must have a single camera. " + cls.log.error("Camera instance must have a single camera. " "Found {0}: {1}".format(len(cameras), cameras)) invalid.extend(cameras) @@ -49,15 +50,32 @@ class ValidateCameraContents(pyblish.api.InstancePlugin): raise RuntimeError("No cameras found in empty instance.") + if not cls.validate_shapes: + cls.log.info("Not validating shapes in the content.") + + for member in members: + parents = cmds.ls(member, long=True)[0].split("|")[1:-1] + parents_long_named = [ + "|".join(parents[:i]) for i in range(1, 1 + len(parents)) + ] + if cameras[0] in parents_long_named: + cls.log.error( + "{} is parented under camera {}".format( member, cameras[0])) + invalid.append(member) + return invalid + # non-camera shapes valid_shapes = cmds.ls(shapes, type=('camera', 'locator'), long=True) shapes = set(shapes) - set(valid_shapes) if shapes: shapes = list(shapes) - cls.log.warning("Camera instance should only contain camera " + cls.log.error("Camera instance should only contain camera " "shapes. Found: {0}".format(shapes)) invalid.extend(shapes) + + invalid = list(set(invalid)) return invalid diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py b/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py index 0969573a90..e0835000f0 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py @@ -12,28 +12,41 @@ def pairs(iterable): yield i, y -def get_invalid_sets(shape): - """Get sets that are considered related but do not contain the shape. +def get_invalid_sets(shapes): + """Return invalid sets for the given shapes. - In some scenarios Maya keeps connections to multiple shaders - even if just a single one is assigned on the full object. + This takes a list of shape nodes to cache the set members for overlapping + sets in the queries. This avoids many Maya set member queries. - These are related sets returned by `maya.cmds.listSets` that don't - actually have the shape as member. + Returns: + dict: Dictionary of shapes and their invalid sets, e.g. 
+ {"pCubeShape": ["set1", "set2"]} """ - invalid = [] - sets = cmds.listSets(object=shape, t=1, extendToShape=False) or [] - for s in sets: - members = cmds.sets(s, query=True, nodesOnly=True) - if not members: - invalid.append(s) - continue + cache = dict() + invalid = dict() - members = set(cmds.ls(members, long=True)) - if shape not in members: - invalid.append(s) + # Collect the sets from the shape + for shape in shapes: + invalid_sets = [] + sets = cmds.listSets(object=shape, t=1, extendToShape=False) or [] + for set_ in sets: + + members = cache.get(set_, None) + if members is None: + members = set(cmds.ls(cmds.sets(set_, + query=True, + nodesOnly=True), long=True)) + cache[set_] = members + + # If the shape is not actually present as a member of the set + # consider it invalid + if shape not in members: + invalid_sets.append(set_) + + if invalid_sets: + invalid[shape] = invalid_sets return invalid @@ -92,15 +105,9 @@ class ValidateMeshShaderConnections(pyblish.api.InstancePlugin): @staticmethod def get_invalid(instance): - shapes = cmds.ls(instance[:], dag=1, leaf=1, shapes=1, long=True) - - # todo: allow to check anything that can have a shader - shapes = cmds.ls(shapes, noIntermediate=True, long=True, type="mesh") - - invalid = [] - for shape in shapes: - if get_invalid_sets(shape): - invalid.append(shape) + nodes = instance[:] + shapes = cmds.ls(nodes, noIntermediate=True, long=True, type="mesh") + invalid = get_invalid_sets(shapes).keys() return invalid @@ -108,7 +115,7 @@ class ValidateMeshShaderConnections(pyblish.api.InstancePlugin): def repair(cls, instance): shapes = cls.get_invalid(instance) - for shape in shapes: - invalid_sets = get_invalid_sets(shape) + invalid = get_invalid_sets(shapes) + for shape, invalid_sets in invalid.items(): for set_node in invalid_sets: disconnect(shape, set_node) diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py index 068d6b38a1..632b531668 100644 --- a/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py +++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py @@ -1,6 +1,7 @@ import pyblish.api import openpype.api +from openpype.client import get_assets from openpype.pipeline import legacy_io import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib @@ -42,8 +43,12 @@ class ValidateNodeIdsInDatabase(pyblish.api.InstancePlugin): nodes=instance[:]) # check ids against database ids - db_asset_ids = legacy_io.find({"type": "asset"}).distinct("_id") - db_asset_ids = set(str(i) for i in db_asset_ids) + project_name = legacy_io.active_project() + asset_docs = get_assets(project_name, fields=["_id"]) + db_asset_ids = { + str(asset_doc["_id"]) + for asset_doc in asset_docs + } # Get all asset IDs for node in id_required_nodes: diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py index 38407e4176..c8bac6e569 100644 --- a/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py +++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py @@ -1,7 +1,6 @@ import pyblish.api import openpype.api -from openpype.pipeline import legacy_io import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib @@ -36,15 +35,7 @@ class ValidateNodeIDsRelated(pyblish.api.InstancePlugin): """Return the member nodes that are invalid""" invalid = list() - asset = instance.data['asset'] - asset_data 
= legacy_io.find_one(
-            {
-                "name": asset,
-                "type": "asset"
-            },
-            projection={"_id": True}
-        )
-        asset_id = str(asset_data['_id'])
+        asset_id = str(instance.data['assetEntity']["_id"])

         # We do want to check the referenced nodes as we it might be
         # part of the end product
diff --git a/openpype/hosts/maya/plugins/publish/validate_renderlayer_aovs.py b/openpype/hosts/maya/plugins/publish/validate_renderlayer_aovs.py
index e65150eb0f..6b6fb03eec 100644
--- a/openpype/hosts/maya/plugins/publish/validate_renderlayer_aovs.py
+++ b/openpype/hosts/maya/plugins/publish/validate_renderlayer_aovs.py
@@ -1,8 +1,8 @@
 import pyblish.api

+from openpype.client import get_subset_by_name
 import openpype.hosts.maya.api.action
 from openpype.pipeline import legacy_io
-import openpype.api


 class ValidateRenderLayerAOVs(pyblish.api.InstancePlugin):
@@ -33,26 +33,23 @@ class ValidateRenderLayerAOVs(pyblish.api.InstancePlugin):
             raise RuntimeError("Found unregistered subsets: {}".format(invalid))

     def get_invalid(self, instance):
-        invalid = []
-        asset_name = instance.data["asset"]
+        invalid = []
+        project_name = legacy_io.active_project()
+        asset_doc = instance.data["assetEntity"]
         render_passses = instance.data.get("renderPasses", [])
         for render_pass in render_passses:
-            is_valid = self.validate_subset_registered(asset_name, render_pass)
+            is_valid = self.validate_subset_registered(
+                project_name, asset_doc, render_pass
+            )
             if not is_valid:
                 invalid.append(render_pass)

         return invalid

-    def validate_subset_registered(self, asset_name, subset_name):
+    def validate_subset_registered(self, project_name, asset_doc, subset_name):
         """Check if subset is registered in the database under the asset"""
-        asset = legacy_io.find_one({"type": "asset", "name": asset_name})
-        is_valid = legacy_io.find_one({
-            "type": "subset",
-            "name": subset_name,
-            "parent": asset["_id"]
-        })
-
-        return is_valid
+        return get_subset_by_name(
+            project_name, subset_name, asset_doc["_id"], fields=["_id"]
+        )
diff --git a/openpype/hosts/maya/plugins/publish/validate_rendersettings.py b/openpype/hosts/maya/plugins/publish/validate_rendersettings.py
index ba6c1397ab..1dab3274a0 100644
--- a/openpype/hosts/maya/plugins/publish/validate_rendersettings.py
+++ b/openpype/hosts/maya/plugins/publish/validate_rendersettings.py
@@ -94,6 +94,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
     def get_invalid(cls, instance):

         invalid = False
+        multipart = False

         renderer = instance.data['renderer']
         layer = instance.data['setMembers']
@@ -113,6 +114,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
             "{aov_separator}", instance.data.get("aovSeparator", "_"))

         required_prefix = "maya/"
+        default_prefix = cls.ImagePrefixTokens[renderer]

         if not anim_override:
             invalid = True
@@ -213,14 +215,16 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
                 cls.log.error("Wrong image prefix [ {} ] - "
                               "You can't use '' token "
                               "with merge AOVs turned on".format(prefix))
+                default_prefix = re.sub(
+                    cls.R_AOV_TOKEN, "", default_prefix)
+                # remove aov token from prefix to pass validation
+                default_prefix = default_prefix.split("{aov_separator}")[0]
             elif not re.search(cls.R_AOV_TOKEN, prefix):
                 invalid = True
                 cls.log.error("Wrong image prefix [ {} ] - "
                               "doesn't have: '' or "
                               "token".format(prefix))

-        # prefix check
-        default_prefix = cls.ImagePrefixTokens[renderer]
         default_prefix = default_prefix.replace(
             "{aov_separator}", instance.data.get("aovSeparator", "_"))
         if prefix.lower() != default_prefix.lower():
diff --git a/openpype/hosts/nuke/api/__init__.py b/openpype/hosts/nuke/api/__init__.py
index b571c4098c..77fe4503d3 100644
--- a/openpype/hosts/nuke/api/__init__.py
+++ b/openpype/hosts/nuke/api/__init__.py
@@ -26,7 +26,11 @@ from .pipeline import (
     update_container,
 )
 from .lib import (
-    maintained_selection
+    maintained_selection,
+    reset_selection,
+    get_view_process_node,
+    duplicate_node
+
 )
 from .utils import (
@@ -58,6 +62,9 @@ __all__ = (
     "update_container",

     "maintained_selection",
+    "reset_selection",
+    "get_view_process_node",
+    "duplicate_node",

     "colorspace_exists_on_node",
     "get_colorspace_list"
diff --git a/openpype/hosts/nuke/api/lib.py b/openpype/hosts/nuke/api/lib.py
index 505eb19419..a8e01d0a36 100644
--- a/openpype/hosts/nuke/api/lib.py
+++ b/openpype/hosts/nuke/api/lib.py
@@ -3,6 +3,7 @@ from pprint import pformat
 import re
 import six
 import platform
+import tempfile
 import contextlib
 from collections import OrderedDict
@@ -711,6 +712,20 @@ def get_imageio_input_colorspace(filename):
     return preset_clrsp


+def get_view_process_node():
+    reset_selection()
+
+    ipn_orig = None
+    for v in nuke.allNodes(filter="Viewer"):
+        ipn = v['input_process_node'].getValue()
+        if "VIEWER_INPUT" not in ipn:
+            ipn_orig = nuke.toNode(ipn)
+            ipn_orig.setSelected(True)
+
+    if ipn_orig:
+        return duplicate_node(ipn_orig)
+
+
 def on_script_load():
     ''' Callback for ffmpeg support
     '''
@@ -2374,6 +2389,8 @@ def process_workfile_builder():
         env_value_to_bool,
         get_custom_workfile_template
     )
+    # to avoid looping of the callback, remove it!
+    nuke.removeOnCreate(process_workfile_builder, nodeClass="Root")

     # get state from settings
     workfile_builder = get_current_project_settings()["nuke"].get(
@@ -2429,9 +2446,6 @@ def process_workfile_builder():
         if not openlv_on or not os.path.exists(last_workfile_path):
             return

-        # to avoid looping of the callback, remove it!
-        nuke.removeOnCreate(process_workfile_builder, nodeClass="Root")
-
         log.info("Opening last workfile...")
         # open workfile
         open_file(last_workfile_path)
@@ -2617,6 +2631,57 @@ class DirmapCache:
         return cls._sync_module


+@contextlib.contextmanager
+def _duplicate_node_temp():
+    """Create a temp file where node is pasted during duplication.
+
+    This is to avoid using clipboard for node duplication.
+    """
+
+    duplicate_node_temp_path = os.path.join(
+        tempfile.gettempdir(),
+        "openpype_nuke_duplicate_temp_{}".format(os.getpid())
+    )
+
+    # This can happen only if a previous 'duplicate_node' call was
+    # interrupted before its cleanup could run
+    if os.path.exists(duplicate_node_temp_path):
+        log.warning((
+            "Temp file for node duplication already exists."
+            " Trying to remove {}"
+        ).format(duplicate_node_temp_path))
+        os.remove(duplicate_node_temp_path)
+
+    try:
+        # Yield the path where node can be copied
+        yield duplicate_node_temp_path
+
+    finally:
+        # Remove the file at the end
+        os.remove(duplicate_node_temp_path)
+
+
+def duplicate_node(node):
+    reset_selection()
+
+    # select required node for duplication
+    node.setSelected(True)
+
+    with _duplicate_node_temp() as filepath:
+        # copy selected to temp filepath
+        nuke.nodeCopy(filepath)
+
+        # reset selection
+        reset_selection()
+
+        # paste node and selection is on it only
+        dupli_node = nuke.nodePaste(filepath)
+
+        # reset selection
+        reset_selection()
+
+    return dupli_node
+
+
 def dirmap_file_name_filter(file_name):
     """Nuke callback function with single full path argument.
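The clipboard-free duplication added to lib.py above is easiest to see in isolation. Below is a minimal, Nuke-free sketch of the same per-process temp-file pattern; the helper name is illustrative, and plain file writes stand in for `nuke.nodeCopy`/`nuke.nodePaste`:

```python
import os
import tempfile
import contextlib


@contextlib.contextmanager
def per_process_temp_path(prefix):
    """Yield a per-process temp file path and remove the file afterwards.

    Mirrors '_duplicate_node_temp' above: the PID suffix keeps concurrent
    Nuke sessions from colliding, and a stale file left behind by an
    interrupted run is removed before the path is reused.
    """
    path = os.path.join(
        tempfile.gettempdir(), "{}_{}".format(prefix, os.getpid())
    )
    if os.path.exists(path):
        os.remove(path)  # stale leftover from an interrupted duplication
    try:
        yield path
    finally:
        if os.path.exists(path):
            os.remove(path)


# Stand-ins for nuke.nodeCopy(filepath) / nuke.nodePaste(filepath)
with per_process_temp_path("openpype_nuke_duplicate_temp") as filepath:
    with open(filepath, "w") as f:
        f.write("serialized node")  # nodeCopy would write here
    with open(filepath) as f:
        duplicate = f.read()        # nodePaste would read here
```

Routing the copy through a private file instead of `%clipboard%` means a user's system clipboard is never clobbered mid-publish, and parallel sessions cannot paste each other's nodes.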
diff --git a/openpype/hosts/nuke/api/plugin.py b/openpype/hosts/nuke/api/plugin.py index b8b56ef2b8..925cab0bef 100644 --- a/openpype/hosts/nuke/api/plugin.py +++ b/openpype/hosts/nuke/api/plugin.py @@ -14,12 +14,12 @@ from openpype.pipeline import ( from .lib import ( Knobby, check_subsetname_exists, - reset_selection, maintained_selection, set_avalon_knob_data, add_publish_knob, get_nuke_imageio_settings, - set_node_knobs_from_settings + set_node_knobs_from_settings, + get_view_process_node ) @@ -216,37 +216,6 @@ class ExporterReview(object): self.data["representations"].append(repre) - def get_view_input_process_node(self): - """ - Will get any active view process. - - Arguments: - self (class): in object definition - - Returns: - nuke.Node: copy node of Input Process node - """ - reset_selection() - ipn_orig = None - for v in nuke.allNodes(filter="Viewer"): - ip = v["input_process"].getValue() - ipn = v["input_process_node"].getValue() - if "VIEWER_INPUT" not in ipn and ip: - ipn_orig = nuke.toNode(ipn) - ipn_orig.setSelected(True) - - if ipn_orig: - # copy selected to clipboard - nuke.nodeCopy("%clipboard%") - # reset selection - reset_selection() - # paste node and selection is on it only - nuke.nodePaste("%clipboard%") - # assign to variable - ipn = nuke.selectedNode() - - return ipn - def get_imageio_baking_profile(self): from . import lib as opnlib nuke_imageio = opnlib.get_nuke_imageio_settings() @@ -311,7 +280,7 @@ class ExporterReviewLut(ExporterReview): self._temp_nodes = [] self.log.info("Deleted nodes...") - def generate_lut(self): + def generate_lut(self, **kwargs): bake_viewer_process = kwargs["bake_viewer_process"] bake_viewer_input_process_node = kwargs[ "bake_viewer_input_process"] @@ -329,7 +298,7 @@ class ExporterReviewLut(ExporterReview): if bake_viewer_process: # Node View Process if bake_viewer_input_process_node: - ipn = self.get_view_input_process_node() + ipn = get_view_process_node() if ipn is not None: # connect ipn.setInput(0, self.previous_node) @@ -511,7 +480,7 @@ class ExporterReviewMov(ExporterReview): if bake_viewer_process: if bake_viewer_input_process_node: # View Process node - ipn = self.get_view_input_process_node() + ipn = get_view_process_node() if ipn is not None: # connect ipn.setInput(0, self.previous_node) diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py b/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py index 2a79d600ba..5ea7c352b9 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py +++ b/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py @@ -1,4 +1,5 @@ import os +from pprint import pformat import re import pyblish.api import openpype @@ -50,6 +51,8 @@ class ExtractReviewDataMov(openpype.api.Extractor): with maintained_selection(): generated_repres = [] for o_name, o_data in self.outputs.items(): + self.log.debug( + "o_name: {}, o_data: {}".format(o_name, pformat(o_data))) f_families = o_data["filter"]["families"] f_task_types = o_data["filter"]["task_types"] f_subsets = o_data["filter"]["subsets"] @@ -88,7 +91,13 @@ class ExtractReviewDataMov(openpype.api.Extractor): # check if settings have more then one preset # so we dont need to add outputName to representation # in case there is only one preset - multiple_presets = bool(len(self.outputs.keys()) > 1) + multiple_presets = len(self.outputs.keys()) > 1 + + # adding bake presets to instance data for other plugins + if not instance.data.get("bakePresets"): + instance.data["bakePresets"] = {} + # add preset to 
bakePresets + instance.data["bakePresets"][o_name] = o_data # create exporter instance exporter = plugin.ExporterReviewMov( diff --git a/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py b/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py index fb52fc18b4..e0c4bdb953 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py +++ b/openpype/hosts/nuke/plugins/publish/extract_slate_frame.py @@ -1,11 +1,16 @@ import os +from pprint import pformat import nuke import copy import pyblish.api import openpype -from openpype.hosts.nuke.api.lib import maintained_selection +from openpype.hosts.nuke.api import ( + maintained_selection, + duplicate_node, + get_view_process_node +) class ExtractSlateFrame(openpype.api.Extractor): @@ -15,14 +20,13 @@ class ExtractSlateFrame(openpype.api.Extractor): """ - order = pyblish.api.ExtractorOrder - 0.001 + order = pyblish.api.ExtractorOrder + 0.011 label = "Extract Slate Frame" families = ["slate"] hosts = ["nuke"] # Settings values - # - can be extended by other attributes from node in the future key_value_mapping = { "f_submission_note": [True, "{comment}"], "f_submitting_for": [True, "{intent[value]}"], @@ -30,44 +34,107 @@ class ExtractSlateFrame(openpype.api.Extractor): } def process(self, instance): - if hasattr(self, "viewer_lut_raw"): - self.viewer_lut_raw = self.viewer_lut_raw - else: - self.viewer_lut_raw = False + + if "representations" not in instance.data: + instance.data["representations"] = [] + + self._create_staging_dir(instance) with maintained_selection(): self.log.debug("instance: {}".format(instance)) self.log.debug("instance.data[families]: {}".format( instance.data["families"])) - self.render_slate(instance) + if instance.data.get("bakePresets"): + for o_name, o_data in instance.data["bakePresets"].items(): + self.log.info("_ o_name: {}, o_data: {}".format( + o_name, pformat(o_data))) + self.render_slate( + instance, + o_name, + o_data["bake_viewer_process"], + o_data["bake_viewer_input_process"] + ) + else: + # backward compatibility + self.render_slate(instance) + + # also render image to sequence + self._render_slate_to_sequence(instance) + + def _create_staging_dir(self, instance): - def render_slate(self, instance): - node_subset_name = instance.data.get("name", None) - node = instance[0] # group node self.log.info("Creating staging dir...") - if "representations" not in instance.data: - instance.data["representations"] = list() - staging_dir = os.path.normpath( - os.path.dirname(instance.data['path'])) + os.path.dirname(instance.data["path"])) instance.data["stagingDir"] = staging_dir self.log.info( "StagingDir `{0}`...".format(instance.data["stagingDir"])) - frame_start = instance.data["frameStart"] - frame_end = instance.data["frameEnd"] - handle_start = instance.data["handleStart"] - handle_end = instance.data["handleEnd"] + def _check_frames_exists(self, instance): + # rendering path from group write node + fpath = instance.data["path"] - frame_length = int( - (frame_end - frame_start + 1) + (handle_start + handle_end) - ) + # instance frame range with handles + first = instance.data["frameStartHandle"] + last = instance.data["frameEndHandle"] + + padding = fpath.count('#') + + test_path_template = fpath + if padding: + repl_string = "#" * padding + test_path_template = fpath.replace( + repl_string, "%0{}d".format(padding)) + + for frame in range(first, last + 1): + test_file = test_path_template % frame + if not os.path.exists(test_file): + self.log.debug("__ test_file: `{}`".format(test_file)) + 
return None + + return True + + def render_slate( + self, + instance, + output_name=None, + bake_viewer_process=True, + bake_viewer_input_process=True + ): + """Slate frame renderer + + Args: + instance (PyblishInstance): Pyblish instance with subset data + output_name (str, optional): + Slate variation name. Defaults to None. + bake_viewer_process (bool, optional): + Switch for viewer profile baking. Defaults to True. + bake_viewer_input_process (bool, optional): + Switch for input process node baking. Defaults to True. + """ + slate_node = instance.data["slateNode"] + + # rendering path from group write node + fpath = instance.data["path"] + + # instance frame range with handles + first_frame = instance.data["frameStartHandle"] + last_frame = instance.data["frameEndHandle"] + + # fill slate node with comments + self.add_comment_slate_node(instance, slate_node) + + # solve output name if any is set + _output_name = output_name or "" + if _output_name: + _output_name = "_" + _output_name + + slate_first_frame = first_frame - 1 - temporary_nodes = [] collection = instance.data.get("collection", None) if collection: @@ -75,99 +142,101 @@ class ExtractSlateFrame(openpype.api.Extractor): fname = os.path.basename(collection.format( "{head}{padding}{tail}")) fhead = collection.format("{head}") - - collected_frames_len = int(len(collection.indexes)) - - # get first and last frame - first_frame = min(collection.indexes) - 1 - self.log.info('frame_length: {}'.format(frame_length)) - self.log.info( - 'len(collection.indexes): {}'.format(collected_frames_len) - ) - if ("slate" in instance.data["families"]) \ - and (frame_length != collected_frames_len): - first_frame += 1 - - last_frame = first_frame else: - fname = os.path.basename(instance.data.get("path", None)) + fname = os.path.basename(fpath) fhead = os.path.splitext(fname)[0] + "." 
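The sequence probing done by `_check_frames_exists` above boils down to converting `#` padding into a printf-style template and testing each frame on disk. A standalone sketch of that logic, under the same assumption as the original that any hashes form one contiguous run (function names here are illustrative, not from the codebase):

```python
import os


def hash_padding_to_template(fpath):
    """Convert a '####' padded path into a printf-style template.

    Same conversion as '_check_frames_exists': count the hashes and
    replace the run with '%0<count>d',
    e.g. "render.####.exr" -> "render.%04d.exr".
    """
    padding = fpath.count("#")
    if not padding:
        return fpath
    return fpath.replace("#" * padding, "%0{}d".format(padding))


def frames_exist(fpath, first, last):
    """Return True only if every frame of the inclusive range is on disk."""
    template = hash_padding_to_template(fpath)
    if "%" not in template:
        # No padding at all: a single file rather than a sequence.
        return os.path.exists(template)
    return all(
        os.path.exists(template % frame)
        for frame in range(first, last + 1)
    )
```

When any frame is missing, `render_slate` skips the Read node and instead wires the baking branch to the slate node's existing input, so the slate can still be produced without the rendered sequence.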
- first_frame = instance.data.get("frameStartHandle", None) - 1 - last_frame = first_frame if "#" in fhead: fhead = fhead.replace("#", "")[:-1] - previous_node = node + self.log.debug("__ first_frame: {}".format(first_frame)) + self.log.debug("__ slate_first_frame: {}".format(slate_first_frame)) - # get input process and connect it to baking - ipn = self.get_view_process_node() - if ipn is not None: - ipn.setInput(0, previous_node) - previous_node = ipn - temporary_nodes.append(ipn) + # fallback if files does not exists + if self._check_frames_exists(instance): + # Read node + r_node = nuke.createNode("Read") + r_node["file"].setValue(fpath) + r_node["first"].setValue(first_frame) + r_node["origfirst"].setValue(first_frame) + r_node["last"].setValue(last_frame) + r_node["origlast"].setValue(last_frame) + r_node["colorspace"].setValue(instance.data["colorspace"]) + previous_node = r_node + temporary_nodes = [previous_node] + else: + previous_node = slate_node.dependencies().pop() + temporary_nodes = [] - if not self.viewer_lut_raw: + # only create colorspace baking if toggled on + if bake_viewer_process: + if bake_viewer_input_process: + # get input process and connect it to baking + ipn = get_view_process_node() + if ipn is not None: + ipn.setInput(0, previous_node) + previous_node = ipn + temporary_nodes.append(ipn) + + # add duplicate slate node and connect to previous + duply_slate_node = duplicate_node(slate_node) + duply_slate_node.setInput(0, previous_node) + previous_node = duply_slate_node + temporary_nodes.append(duply_slate_node) + + # add viewer display transformation node dag_node = nuke.createNode("OCIODisplay") dag_node.setInput(0, previous_node) previous_node = dag_node temporary_nodes.append(dag_node) + else: + # add duplicate slate node and connect to previous + duply_slate_node = duplicate_node(slate_node) + duply_slate_node.setInput(0, previous_node) + previous_node = duply_slate_node + temporary_nodes.append(duply_slate_node) + # create write node write_node = nuke.createNode("Write") - file = fhead + "slate.png" - path = os.path.join(staging_dir, file).replace("\\", "/") - instance.data["slateFrame"] = path + file = fhead[:-1] + _output_name + "_slate.png" + path = os.path.join( + instance.data["stagingDir"], file).replace("\\", "/") + + # add slate path to `slateFrames` instance data attr + if not instance.data.get("slateFrames"): + instance.data["slateFrames"] = {} + + instance.data["slateFrames"][output_name or "*"] = path + + # create write node write_node["file"].setValue(path) write_node["file_type"].setValue("png") write_node["raw"].setValue(1) write_node.setInput(0, previous_node) temporary_nodes.append(write_node) - # fill slate node with comments - self.add_comment_slate_node(instance) - # Render frames - nuke.execute(write_node.name(), int(first_frame), int(last_frame)) - # also render slate as sequence frame - nuke.execute(node_subset_name, int(first_frame), int(last_frame)) - - self.log.debug( - "slate frame path: {}".format(instance.data["slateFrame"])) + nuke.execute( + write_node.name(), int(slate_first_frame), int(slate_first_frame)) # Clean up for node in temporary_nodes: nuke.delete(node) - def get_view_process_node(self): - # Select only the target node - if nuke.selectedNodes(): - [n.setSelected(False) for n in nuke.selectedNodes()] + def _render_slate_to_sequence(self, instance): + # set slate frame + first_frame = instance.data["frameStartHandle"] + slate_first_frame = first_frame - 1 - ipn_orig = None - for v in [n for n in nuke.allNodes() - if 
"Viewer" in n.Class()]: - ip = v['input_process'].getValue() - ipn = v['input_process_node'].getValue() - if "VIEWER_INPUT" not in ipn and ip: - ipn_orig = nuke.toNode(ipn) - ipn_orig.setSelected(True) + # render slate as sequence frame + nuke.execute( + instance.data["name"], + int(slate_first_frame), + int(slate_first_frame) + ) - if ipn_orig: - nuke.nodeCopy('%clipboard%') - - [n.setSelected(False) for n in nuke.selectedNodes()] # Deselect all - - nuke.nodePaste('%clipboard%') - - ipn = nuke.selectedNode() - - return ipn - - def add_comment_slate_node(self, instance): - node = instance.data.get("slateNode") - if not node: - return + def add_comment_slate_node(self, instance, node): comment = instance.context.data.get("comment") intent = instance.context.data.get("intent") @@ -186,8 +255,8 @@ class ExtractSlateFrame(openpype.api.Extractor): "intent": intent }) - for key, value in self.key_value_mapping.items(): - enabled, template = value + for key, _values in self.key_value_mapping.items(): + enabled, template = _values if not enabled: self.log.debug("Key \"{}\" is disabled".format(key)) continue @@ -221,5 +290,5 @@ class ExtractSlateFrame(openpype.api.Extractor): )) except NameError: self.log.warning(( - "Failed to set value \"{}\" on node attribute \"{}\"" + "Failed to set value \"{0}\" on node attribute \"{0}\"" ).format(value)) diff --git a/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py b/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py index a622271855..2a919051d2 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py @@ -3,7 +3,10 @@ import os import nuke import pyblish.api import openpype -from openpype.hosts.nuke.api.lib import maintained_selection +from openpype.hosts.nuke.api import ( + maintained_selection, + get_view_process_node +) if sys.version_info[0] >= 3: @@ -17,7 +20,7 @@ class ExtractThumbnail(openpype.api.Extractor): """ - order = pyblish.api.ExtractorOrder + 0.01 + order = pyblish.api.ExtractorOrder + 0.011 label = "Extract Thumbnail" families = ["review"] @@ -39,15 +42,32 @@ class ExtractThumbnail(openpype.api.Extractor): self.log.debug("instance.data[families]: {}".format( instance.data["families"])) - self.render_thumbnail(instance) + if instance.data.get("bakePresets"): + for o_name, o_data in instance.data["bakePresets"].items(): + self.render_thumbnail(instance, o_name, **o_data) + else: + viewer_process_swithes = { + "bake_viewer_process": True, + "bake_viewer_input_process": True + } + self.render_thumbnail(instance, None, **viewer_process_swithes) - def render_thumbnail(self, instance): + def render_thumbnail(self, instance, output_name=None, **kwargs): first_frame = instance.data["frameStartHandle"] last_frame = instance.data["frameEndHandle"] # find frame range and define middle thumb frame mid_frame = int((last_frame - first_frame) / 2) + # solve output name if any is set + output_name = output_name or "" + if output_name: + output_name = "_" + output_name + + bake_viewer_process = kwargs["bake_viewer_process"] + bake_viewer_input_process_node = kwargs[ + "bake_viewer_input_process"] + node = instance[0] # group node self.log.info("Creating staging dir...") @@ -106,17 +126,7 @@ class ExtractThumbnail(openpype.api.Extractor): temporary_nodes.append(rnode) previous_node = rnode - # bake viewer input look node into thumbnail image - if self.bake_viewer_input_process: - # get input process and connect it to baking - ipn = self.get_view_process_node() - if ipn is 
not None: - ipn.setInput(0, previous_node) - previous_node = ipn - temporary_nodes.append(ipn) - reformat_node = nuke.createNode("Reformat") - ref_node = self.nodes.get("Reformat", None) if ref_node: for k, v in ref_node: @@ -129,8 +139,16 @@ class ExtractThumbnail(openpype.api.Extractor): previous_node = reformat_node temporary_nodes.append(reformat_node) - # bake viewer colorspace into thumbnail image - if self.bake_viewer_process: + # only create colorspace baking if toggled on + if bake_viewer_process: + if bake_viewer_input_process_node: + # get input process and connect it to baking + ipn = get_view_process_node() + if ipn is not None: + ipn.setInput(0, previous_node) + previous_node = ipn + temporary_nodes.append(ipn) + dag_node = nuke.createNode("OCIODisplay") dag_node.setInput(0, previous_node) previous_node = dag_node @@ -138,7 +156,7 @@ class ExtractThumbnail(openpype.api.Extractor): # create write node write_node = nuke.createNode("Write") - file = fhead + "jpg" + file = fhead[:-1] + output_name + ".jpg" name = "thumbnail" path = os.path.join(staging_dir, file).replace("\\", "/") instance.data["thumbnail"] = path @@ -168,30 +186,3 @@ class ExtractThumbnail(openpype.api.Extractor): # Clean up for node in temporary_nodes: nuke.delete(node) - - def get_view_process_node(self): - - # Select only the target node - if nuke.selectedNodes(): - [n.setSelected(False) for n in nuke.selectedNodes()] - - ipn_orig = None - for v in [n for n in nuke.allNodes() - if "Viewer" == n.Class()]: - ip = v['input_process'].getValue() - ipn = v['input_process_node'].getValue() - if "VIEWER_INPUT" not in ipn and ip: - ipn_orig = nuke.toNode(ipn) - ipn_orig.setSelected(True) - - if ipn_orig: - nuke.nodeCopy('%clipboard%') - - # Deselect all - [n.setSelected(False) for n in nuke.selectedNodes()] - - nuke.nodePaste('%clipboard%') - - ipn = nuke.selectedNode() - - return ipn diff --git a/openpype/hosts/photoshop/plugins/create/workfile_creator.py b/openpype/hosts/photoshop/plugins/create/workfile_creator.py index 875a9b8a94..43302329f1 100644 --- a/openpype/hosts/photoshop/plugins/create/workfile_creator.py +++ b/openpype/hosts/photoshop/plugins/create/workfile_creator.py @@ -1,4 +1,5 @@ import openpype.hosts.photoshop.api as api +from openpype.client import get_asset_by_name from openpype.pipeline import ( AutoCreator, CreatedInstance, @@ -40,10 +41,7 @@ class PSWorkfileCreator(AutoCreator): task_name = legacy_io.Session["AVALON_TASK"] host_name = legacy_io.Session["AVALON_APP"] if existing_instance is None: - asset_doc = legacy_io.find_one({ - "type": "asset", - "name": asset_name - }) + asset_doc = get_asset_by_name(project_name, asset_name) subset_name = self.get_subset_name( variant, task_name, asset_doc, project_name, host_name ) @@ -67,10 +65,7 @@ class PSWorkfileCreator(AutoCreator): existing_instance["asset"] != asset_name or existing_instance["task"] != task_name ): - asset_doc = legacy_io.find_one({ - "type": "asset", - "name": asset_name - }) + asset_doc = get_asset_by_name(project_name, asset_name) subset_name = self.get_subset_name( variant, task_name, asset_doc, project_name, host_name ) diff --git a/openpype/hosts/standalonepublisher/plugins/publish/collect_bulk_mov_instances.py b/openpype/hosts/standalonepublisher/plugins/publish/collect_bulk_mov_instances.py index 3e7fb19c00..052a97af7d 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/collect_bulk_mov_instances.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/collect_bulk_mov_instances.py @@ -3,7 +3,7 @@ import 
json import pyblish.api from openpype.lib import get_subset_name_with_asset_doc -from openpype.pipeline import legacy_io +from openpype.client import get_asset_by_name class CollectBulkMovInstances(pyblish.api.InstancePlugin): @@ -24,12 +24,9 @@ class CollectBulkMovInstances(pyblish.api.InstancePlugin): def process(self, instance): context = instance.context + project_name = context.data["projectEntity"]["name"] asset_name = instance.data["asset"] - - asset_doc = legacy_io.find_one({ - "type": "asset", - "name": asset_name - }) + asset_doc = get_asset_by_name(project_name, asset_name) if not asset_doc: raise AssertionError(( "Couldn't find Asset document with name \"{}\"" @@ -52,7 +49,7 @@ class CollectBulkMovInstances(pyblish.api.InstancePlugin): self.subset_name_variant, task_name, asset_doc, - legacy_io.Session["AVALON_PROJECT"] + project_name ) instance_name = f"{asset_name}_{subset_name}" diff --git a/openpype/hosts/standalonepublisher/plugins/publish/collect_hierarchy.py b/openpype/hosts/standalonepublisher/plugins/publish/collect_hierarchy.py index 2452f77e56..9109bf6726 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/collect_hierarchy.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/collect_hierarchy.py @@ -4,7 +4,7 @@ import re from copy import deepcopy import pyblish.api -from openpype.pipeline import legacy_io +from openpype.client import get_asset_by_id class CollectHierarchyInstance(pyblish.api.ContextPlugin): @@ -63,27 +63,32 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin): **instance.data["anatomyData"]) def create_hierarchy(self, instance): - parents = list() - hierarchy = list() - visual_hierarchy = [instance.context.data["assetEntity"]] + asset_doc = instance.context.data["assetEntity"] + project_doc = instance.context.data["projectEntity"] + project_name = project_doc["name"] + visual_hierarchy = [asset_doc] + current_doc = asset_doc while True: - visual_parent = legacy_io.find_one( - {"_id": visual_hierarchy[-1]["data"]["visualParent"]} - ) - if visual_parent: - visual_hierarchy.append(visual_parent) - else: - visual_hierarchy.append( - instance.context.data["projectEntity"]) + visual_parent_id = current_doc["data"]["visualParent"] + visual_parent = None + if visual_parent_id: + visual_parent = get_asset_by_id(project_name, visual_parent_id) + + if not visual_parent: + visual_hierarchy.append(project_doc) break + visual_hierarchy.append(visual_parent) + current_doc = visual_parent # add current selection context hierarchy from standalonepublisher + parents = list() for entity in reversed(visual_hierarchy): parents.append({ "entity_type": entity["data"]["entityType"], "entity_name": entity["name"] }) + hierarchy = list() if self.shot_add_hierarchy.get("enabled"): parent_template_patern = re.compile(r"\{([a-z]*?)\}") # fill the parents parts from presets @@ -131,9 +136,8 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin): self.log.warning(f"Hierarchy: {hierarchy}") self.log.info(f"parents: {parents}") + tasks_to_add = dict() if self.shot_add_tasks: - tasks_to_add = dict() - project_doc = legacy_io.find_one({"type": "project"}) project_tasks = project_doc["config"]["tasks"] for task_name, task_data in self.shot_add_tasks.items(): _task_data = deepcopy(task_data) @@ -152,9 +156,7 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin): task_name, list(project_tasks.keys()))) - instance.data["tasks"] = tasks_to_add - else: - instance.data["tasks"] = dict() + instance.data["tasks"] = tasks_to_add # updating hierarchy data 
instance.data["anatomyData"].update({ diff --git a/openpype/hosts/standalonepublisher/plugins/publish/collect_matching_asset.py b/openpype/hosts/standalonepublisher/plugins/publish/collect_matching_asset.py index 9d94bfdc91..82d7247b2b 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/collect_matching_asset.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/collect_matching_asset.py @@ -4,7 +4,7 @@ import collections import pyblish.api from pprint import pformat -from openpype.pipeline import legacy_io +from openpype.client import get_assets class CollectMatchingAssetToInstance(pyblish.api.InstancePlugin): @@ -119,8 +119,9 @@ class CollectMatchingAssetToInstance(pyblish.api.InstancePlugin): def _asset_docs_by_parent_id(self, instance): # Query all assets for project and store them by parent's id to list + project_name = instance.context.data["projectEntity"]["name"] asset_docs_by_parent_id = collections.defaultdict(list) - for asset_doc in legacy_io.find({"type": "asset"}): + for asset_doc in get_assets(project_name): parent_id = asset_doc["data"]["visualParent"] asset_docs_by_parent_id[parent_id].append(asset_doc) return asset_docs_by_parent_id diff --git a/openpype/hosts/standalonepublisher/plugins/publish/validate_task_existence.py b/openpype/hosts/standalonepublisher/plugins/publish/validate_task_existence.py index 4c761c7a4c..19ea1a4778 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/validate_task_existence.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/validate_task_existence.py @@ -1,9 +1,7 @@ import pyblish.api -from openpype.pipeline import ( - PublishXmlValidationError, - legacy_io, -) +from openpype.client import get_assets +from openpype.pipeline import PublishXmlValidationError class ValidateTaskExistence(pyblish.api.ContextPlugin): @@ -20,15 +18,11 @@ class ValidateTaskExistence(pyblish.api.ContextPlugin): for instance in context: asset_names.add(instance.data["asset"]) - asset_docs = legacy_io.find( - { - "type": "asset", - "name": {"$in": list(asset_names)} - }, - { - "name": 1, - "data.tasks": 1 - } + project_name = context.data["projectEntity"]["name"] + asset_docs = get_assets( + project_name, + asset_names=asset_names, + fields=["name", "data.tasks"] ) tasks_by_asset_names = {} for asset_doc in asset_docs: diff --git a/openpype/hosts/tvpaint/api/communication_server.py b/openpype/hosts/tvpaint/api/communication_server.py index 65cb9aa2f3..6ac3e6324c 100644 --- a/openpype/hosts/tvpaint/api/communication_server.py +++ b/openpype/hosts/tvpaint/api/communication_server.py @@ -707,6 +707,9 @@ class BaseCommunicator: if exit_code is not None: self.exit_code = exit_code + if self.exit_code is None: + self.exit_code = 0 + def stop(self): """Stop communication and currently running python process.""" log.info("Stopping communication") diff --git a/openpype/hosts/tvpaint/api/pipeline.py b/openpype/hosts/tvpaint/api/pipeline.py index 60c61a8cbf..0118c0104b 100644 --- a/openpype/hosts/tvpaint/api/pipeline.py +++ b/openpype/hosts/tvpaint/api/pipeline.py @@ -8,6 +8,7 @@ import requests import pyblish.api +from openpype.client import get_project, get_asset_by_name from openpype.hosts import tvpaint from openpype.api import get_current_project_settings from openpype.lib import register_event_callback @@ -442,14 +443,14 @@ def set_context_settings(asset_doc=None): Change fps, resolution and frame start/end. 
""" - if asset_doc is None: - # Use current session asset if not passed - asset_doc = legacy_io.find_one({ - "type": "asset", - "name": legacy_io.Session["AVALON_ASSET"] - }) - project_doc = legacy_io.find_one({"type": "project"}) + project_name = legacy_io.active_project() + if asset_doc is None: + asset_name = legacy_io.Session["AVALON_ASSET"] + # Use current session asset if not passed + asset_doc = get_asset_by_name(project_name, asset_name) + + project_doc = get_project(project_name) framerate = asset_doc["data"].get("fps") if framerate is None: diff --git a/openpype/hosts/tvpaint/plugins/load/load_workfile.py b/openpype/hosts/tvpaint/plugins/load/load_workfile.py index 0eab083c22..462f12abf0 100644 --- a/openpype/hosts/tvpaint/plugins/load/load_workfile.py +++ b/openpype/hosts/tvpaint/plugins/load/load_workfile.py @@ -1,5 +1,6 @@ import os +from openpype.client import get_project, get_asset_by_name from openpype.lib import ( StringTemplate, get_workfile_template_key_from_context, @@ -44,21 +45,17 @@ class LoadWorkfile(plugin.Loader): # Save workfile. host_name = "tvpaint" + project_name = context.get("project") asset_name = context.get("asset") task_name = context.get("task") # Far cases when there is workfile without context if not asset_name: + project_name = legacy_io.active_project() asset_name = legacy_io.Session["AVALON_ASSET"] task_name = legacy_io.Session["AVALON_TASK"] - project_doc = legacy_io.find_one({ - "type": "project" - }) - asset_doc = legacy_io.find_one({ - "type": "asset", - "name": asset_name - }) - project_name = project_doc["name"] + project_doc = get_project(project_name) + asset_doc = get_asset_by_name(project_name, asset_name) template_key = get_workfile_template_key_from_context( asset_name, diff --git a/openpype/hosts/tvpaint/plugins/publish/collect_instances.py b/openpype/hosts/tvpaint/plugins/publish/collect_instances.py index 782907b65d..9b6d5c4879 100644 --- a/openpype/hosts/tvpaint/plugins/publish/collect_instances.py +++ b/openpype/hosts/tvpaint/plugins/publish/collect_instances.py @@ -2,6 +2,7 @@ import json import copy import pyblish.api +from openpype.client import get_asset_by_name from openpype.lib import get_subset_name_with_asset_doc from openpype.pipeline import legacy_io @@ -92,17 +93,15 @@ class CollectInstances(pyblish.api.ContextPlugin): if family == "review": # Change subset name of review instance + # Project name from workfile context + project_name = context.data["workfile_context"]["project"] + # Collect asset doc to get asset id # - not sure if it's good idea to require asset id in # get_subset_name? 
asset_name = context.data["workfile_context"]["asset"] - asset_doc = legacy_io.find_one({ - "type": "asset", - "name": asset_name - }) + asset_doc = get_asset_by_name(project_name, asset_name) - # Project name from workfile context - project_name = context.data["workfile_context"]["project"] # Host name from environment variable host_name = context.data["hostName"] # Use empty variant value diff --git a/openpype/hosts/tvpaint/plugins/publish/collect_scene_render.py b/openpype/hosts/tvpaint/plugins/publish/collect_scene_render.py index 2b8dbdc5b4..20c5bb586a 100644 --- a/openpype/hosts/tvpaint/plugins/publish/collect_scene_render.py +++ b/openpype/hosts/tvpaint/plugins/publish/collect_scene_render.py @@ -2,8 +2,8 @@ import json import copy import pyblish.api +from openpype.client import get_asset_by_name from openpype.lib import get_subset_name_with_asset_doc -from openpype.pipeline import legacy_io class CollectRenderScene(pyblish.api.ContextPlugin): @@ -56,14 +56,11 @@ class CollectRenderScene(pyblish.api.ContextPlugin): # - not sure if it's good idea to require asset id in # get_subset_name? workfile_context = context.data["workfile_context"] - asset_name = workfile_context["asset"] - asset_doc = legacy_io.find_one({ - "type": "asset", - "name": asset_name - }) - # Project name from workfile context project_name = context.data["workfile_context"]["project"] + asset_name = workfile_context["asset"] + asset_doc = get_asset_by_name(project_name, asset_name) + # Host name from environment variable host_name = context.data["hostName"] # Variant is using render pass name diff --git a/openpype/hosts/tvpaint/plugins/publish/collect_workfile.py b/openpype/hosts/tvpaint/plugins/publish/collect_workfile.py index 70d92f82e9..88c5f4dbc7 100644 --- a/openpype/hosts/tvpaint/plugins/publish/collect_workfile.py +++ b/openpype/hosts/tvpaint/plugins/publish/collect_workfile.py @@ -2,6 +2,7 @@ import os import json import pyblish.api +from openpype.client import get_asset_by_name from openpype.lib import get_subset_name_with_asset_doc from openpype.pipeline import legacy_io @@ -22,19 +23,17 @@ class CollectWorkfile(pyblish.api.ContextPlugin): basename, ext = os.path.splitext(filename) instance = context.create_instance(name=basename) + # Project name from workfile context + project_name = context.data["workfile_context"]["project"] + # Get subset name of workfile instance # Collect asset doc to get asset id # - not sure if it's good idea to require asset id in # get_subset_name? 
family = "workfile" asset_name = context.data["workfile_context"]["asset"] - asset_doc = legacy_io.find_one({ - "type": "asset", - "name": asset_name - }) + asset_doc = get_asset_by_name(project_name, asset_name) - # Project name from workfile context - project_name = context.data["workfile_context"]["project"] # Host name from environment variable host_name = os.environ["AVALON_APP"] # Use empty variant value diff --git a/openpype/lib/avalon_context.py b/openpype/lib/avalon_context.py index 9d8a92cfe9..a03f066300 100644 --- a/openpype/lib/avalon_context.py +++ b/openpype/lib/avalon_context.py @@ -7,7 +7,6 @@ import platform import logging import collections import functools -import getpass from bson.objectid import ObjectId @@ -19,6 +18,7 @@ from .anatomy import Anatomy from .profiles_filtering import filter_profiles from .events import emit_event from .path_templates import StringTemplate +from .local_settings import get_openpype_username legacy_io = None @@ -550,7 +550,7 @@ def get_workdir_data(project_doc, asset_doc, task_name, host_name): "asset": asset_doc["name"], "parent": parent_name, "app": host_name, - "user": getpass.getuser(), + "user": get_openpype_username(), "hierarchy": hierarchy, } @@ -797,8 +797,14 @@ def update_current_task(task=None, asset=None, app=None, template_key=None): else: os.environ[key] = value + data = changes.copy() + # Convert env keys to human readable keys + data["project_name"] = legacy_io.Session["AVALON_PROJECT"] + data["asset_name"] = legacy_io.Session["AVALON_ASSET"] + data["task_name"] = legacy_io.Session["AVALON_TASK"] + # Emit session change - emit_event("taskChanged", changes.copy()) + emit_event("taskChanged", data) return changes diff --git a/openpype/modules/base.py b/openpype/modules/base.py index bca64b19f8..b9ccec13cc 100644 --- a/openpype/modules/base.py +++ b/openpype/modules/base.py @@ -463,6 +463,25 @@ class OpenPypeModule: pass + def on_host_install(self, host, host_name, project_name): + """Host was installed which gives option to handle in-host logic. + + It is a good option to register in-host event callbacks which are + specific for the module. The module is kept in memory for rest of + the process. + + Arguments may change in future. E.g. 'host_name' should be possible + to receive from 'host' object. + + Args: + host (ModuleType): Access to installed/registered host object. + host_name (str): Name of host. + project_name (str): Project name which is main part of host + context. + """ + + pass + def cli(self, module_click_group): """Add commands to click group. 
diff --git a/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py b/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py
index 2cf502224f..a1ee5e0957 100644
--- a/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py
@@ -322,7 +322,9 @@ class HarmonySubmitDeadline(
         )
         unzip_dir = (published_scene.parent / published_scene.stem)
         with _ZipFile(published_scene, "r") as zip_ref:
-            zip_ref.extractall(unzip_dir.as_posix())
+            # UNC prefix (//?/) added to minimize the risk of hitting
+            # path length limits when extracting to long file paths
+            zip_ref.extractall("//?/" + str(unzip_dir.as_posix()))

 # find any xstage files in directory, prefer the one with the same name
 # as directory (plus extension)
diff --git a/openpype/modules/deadline/plugins/publish/submit_publish_job.py b/openpype/modules/deadline/plugins/publish/submit_publish_job.py
index 6d08e72839..b54b00d099 100644
--- a/openpype/modules/deadline/plugins/publish/submit_publish_job.py
+++ b/openpype/modules/deadline/plugins/publish/submit_publish_job.py
@@ -147,7 +147,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
     # mapping of instance properties to be transfered to new instance for every
     # specified family
     instance_transfer = {
-        "slate": ["slateFrame"],
+        "slate": ["slateFrames"],
         "review": ["lutPath"],
         "render2d": ["bakingNukeScripts", "version"],
         "renderlayer": ["convertToScanline"]
diff --git a/openpype/modules/ftrack/event_handlers_server/action_prepare_project.py b/openpype/modules/ftrack/event_handlers_server/action_prepare_project.py
index 975e49cb28..361aa98d16 100644
--- a/openpype/modules/ftrack/event_handlers_server/action_prepare_project.py
+++ b/openpype/modules/ftrack/event_handlers_server/action_prepare_project.py
@@ -1,8 +1,8 @@
 import json

+from openpype.client import get_project
 from openpype.api import ProjectSettings
 from openpype.lib import create_project
-from openpype.pipeline import AvalonMongoDB
 from openpype.settings import SaveWarningExc

 from openpype_modules.ftrack.lib import (
@@ -363,12 +363,8 @@ class PrepareProjectServer(ServerAction):
         project_name = project_entity["full_name"]

         # Try to find project document
-        dbcon = AvalonMongoDB()
-        dbcon.install()
-        dbcon.Session["AVALON_PROJECT"] = project_name
-        project_doc = dbcon.find_one({
-            "type": "project"
-        })
+        project_doc = get_project(project_name)
+
         # Create project if is not available
         # - creation is required to be able set project anatomy and attributes
         if not project_doc:
@@ -376,9 +372,7 @@ class PrepareProjectServer(ServerAction):
             self.log.info("Creating project \"{} [{}]\"".format(
                 project_name, project_code
             ))
-            create_project(project_name, project_code, dbcon=dbcon)
-
-            dbcon.uninstall()
+            create_project(project_name, project_code)

         project_settings = ProjectSettings(project_name)
         project_anatomy_settings = project_settings["project_anatomy"]
diff --git a/openpype/modules/ftrack/event_handlers_server/event_sync_to_avalon.py b/openpype/modules/ftrack/event_handlers_server/event_sync_to_avalon.py
index b5f199b3e4..a4e791aaf0 100644
--- a/openpype/modules/ftrack/event_handlers_server/event_sync_to_avalon.py
+++ b/openpype/modules/ftrack/event_handlers_server/event_sync_to_avalon.py
@@ -12,6 +12,12 @@ from pymongo import UpdateOne
 import arrow
 import ftrack_api

+from openpype.client import (
+    get_project,
+    get_assets,
+    get_archived_assets,
+    get_asset_ids_with_subsets
+)
 from openpype.pipeline import AvalonMongoDB, schema

 from
openpype_modules.ftrack.lib import ( @@ -149,12 +155,11 @@ class SyncToAvalonEvent(BaseEvent): @property def avalon_entities(self): if self._avalon_ents is None: + project_name = self.cur_project["full_name"] self.dbcon.install() - self.dbcon.Session["AVALON_PROJECT"] = ( - self.cur_project["full_name"] - ) - avalon_project = self.dbcon.find_one({"type": "project"}) - avalon_entities = list(self.dbcon.find({"type": "asset"})) + self.dbcon.Session["AVALON_PROJECT"] = project_name + avalon_project = get_project(project_name) + avalon_entities = list(get_assets(project_name)) self._avalon_ents = (avalon_project, avalon_entities) return self._avalon_ents @@ -284,28 +289,21 @@ class SyncToAvalonEvent(BaseEvent): self._avalon_ents_by_ftrack_id[ftrack_id] = doc @property - def avalon_subsets_by_parents(self): - if self._avalon_subsets_by_parents is None: - self._avalon_subsets_by_parents = collections.defaultdict(list) - self.dbcon.install() - self.dbcon.Session["AVALON_PROJECT"] = ( - self.cur_project["full_name"] + def avalon_asset_ids_with_subsets(self): + if self._avalon_asset_ids_with_subsets is None: + project_name = self.cur_project["full_name"] + self._avalon_asset_ids_with_subsets = get_asset_ids_with_subsets( + project_name ) - for subset in self.dbcon.find({"type": "subset"}): - self._avalon_subsets_by_parents[subset["parent"]].append( - subset - ) - return self._avalon_subsets_by_parents + + return self._avalon_asset_ids_with_subsets @property def avalon_archived_by_id(self): if self._avalon_archived_by_id is None: self._avalon_archived_by_id = {} - self.dbcon.install() - self.dbcon.Session["AVALON_PROJECT"] = ( - self.cur_project["full_name"] - ) - for asset in self.dbcon.find({"type": "archived_asset"}): + project_name = self.cur_project["full_name"] + for asset in get_archived_assets(project_name): self._avalon_archived_by_id[asset["_id"]] = asset return self._avalon_archived_by_id @@ -327,7 +325,7 @@ class SyncToAvalonEvent(BaseEvent): avalon_project, avalon_entities = self.avalon_entities self._changeability_by_mongo_id[avalon_project["_id"]] = False self._bubble_changeability( - list(self.avalon_subsets_by_parents.keys()) + list(self.avalon_asset_ids_with_subsets) ) return self._changeability_by_mongo_id @@ -449,14 +447,9 @@ class SyncToAvalonEvent(BaseEvent): if not entity: # if entity is not found then it is subset without parent if entity_id in unchangeable_ids: - _subset_ids = [ - str(sub["_id"]) for sub in - self.avalon_subsets_by_parents[entity_id] - ] - joined_subset_ids = "| ".join(_subset_ids) self.log.warning(( - "Parent <{}> for subsets <{}> does not exist" - ).format(str(entity_id), joined_subset_ids)) + "Parent <{}> with subsets does not exist" + ).format(str(entity_id))) else: self.log.warning(( "In avalon are entities without valid parents that" @@ -483,7 +476,7 @@ class SyncToAvalonEvent(BaseEvent): self._avalon_ents_by_parent_id = None self._avalon_ents_by_ftrack_id = None self._avalon_ents_by_name = None - self._avalon_subsets_by_parents = None + self._avalon_asset_ids_with_subsets = None self._changeability_by_mongo_id = None self._avalon_archived_by_id = None self._avalon_archived_by_name = None diff --git a/openpype/modules/ftrack/event_handlers_server/event_user_assigment.py b/openpype/modules/ftrack/event_handlers_server/event_user_assigment.py index 593fc5e596..82b79e986b 100644 --- a/openpype/modules/ftrack/event_handlers_server/event_user_assigment.py +++ b/openpype/modules/ftrack/event_handlers_server/event_user_assigment.py @@ -1,11 +1,9 @@ import re 
import subprocess +from openpype.client import get_asset_by_id, get_asset_by_name from openpype_modules.ftrack.lib import BaseEvent from openpype_modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY -from openpype.pipeline import AvalonMongoDB - -from bson.objectid import ObjectId from openpype.api import Anatomy, get_project_settings @@ -36,8 +34,6 @@ class UserAssigmentEvent(BaseEvent): 3) path to publish files of task user was (de)assigned to """ - db_con = AvalonMongoDB() - def error(self, *err): for e in err: self.log.error(e) @@ -101,26 +97,16 @@ class UserAssigmentEvent(BaseEvent): :rtype: dict """ parent = task['parent'] - self.db_con.install() - self.db_con.Session['AVALON_PROJECT'] = task['project']['full_name'] - + project_name = task["project"]["full_name"] avalon_entity = None parent_id = parent['custom_attributes'].get(CUST_ATTR_ID_KEY) if parent_id: - parent_id = ObjectId(parent_id) - avalon_entity = self.db_con.find_one({ - '_id': parent_id, - 'type': 'asset' - }) + avalon_entity = get_asset_by_id(project_name, parent_id) if not avalon_entity: - avalon_entity = self.db_con.find_one({ - 'type': 'asset', - 'name': parent['name'] - }) + avalon_entity = get_asset_by_name(project_name, parent["name"]) if not avalon_entity: - self.db_con.uninstall() msg = 'Entity "{}" not found in avalon database'.format( parent['name'] ) @@ -129,7 +115,6 @@ class UserAssigmentEvent(BaseEvent): 'success': False, 'message': msg } - self.db_con.uninstall() return avalon_entity def _get_hierarchy(self, asset): diff --git a/openpype/modules/ftrack/event_handlers_user/action_applications.py b/openpype/modules/ftrack/event_handlers_user/action_applications.py index b25bc1b5cb..102f04c956 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_applications.py +++ b/openpype/modules/ftrack/event_handlers_user/action_applications.py @@ -1,5 +1,6 @@ import os +from openpype.client import get_project from openpype_modules.ftrack.lib import BaseAction from openpype.lib.applications import ( ApplicationManager, @@ -7,7 +8,6 @@ from openpype.lib.applications import ( ApplictionExecutableNotFound, CUSTOM_LAUNCH_APP_GROUPS ) -from openpype.pipeline import AvalonMongoDB class AppplicationsAction(BaseAction): @@ -25,7 +25,6 @@ class AppplicationsAction(BaseAction): super(AppplicationsAction, self).__init__(*args, **kwargs) self.application_manager = ApplicationManager() - self.dbcon = AvalonMongoDB() @property def discover_identifier(self): @@ -110,12 +109,7 @@ class AppplicationsAction(BaseAction): if avalon_project_doc is None: ft_project = self.get_project_from_entity(entity) project_name = ft_project["full_name"] - if not self.dbcon.is_installed(): - self.dbcon.install() - self.dbcon.Session["AVALON_PROJECT"] = project_name - avalon_project_doc = self.dbcon.find_one({ - "type": "project" - }) or False + avalon_project_doc = get_project(project_name) or False event["data"]["avalon_project_doc"] = avalon_project_doc if not avalon_project_doc: diff --git a/openpype/modules/ftrack/event_handlers_user/action_delete_asset.py b/openpype/modules/ftrack/event_handlers_user/action_delete_asset.py index ee5c3d0d97..03d029b0c1 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_delete_asset.py +++ b/openpype/modules/ftrack/event_handlers_user/action_delete_asset.py @@ -4,6 +4,7 @@ from datetime import datetime from bson.objectid import ObjectId +from openpype.client import get_assets, get_subsets from openpype.pipeline import AvalonMongoDB from openpype_modules.ftrack.lib import BaseAction, 
statics_icon from openpype_modules.ftrack.lib.avalon_sync import create_chunks @@ -91,10 +92,8 @@ class DeleteAssetSubset(BaseAction): continue ftrack_id = entity.get("entityId") - if not ftrack_id: - continue - - ftrack_ids.append(ftrack_id) + if ftrack_id: + ftrack_ids.append(ftrack_id) if project_in_selection: msg = "It is not possible to use this action on project entity." @@ -120,48 +119,51 @@ class DeleteAssetSubset(BaseAction): "message": "Invalid selection for this action (Bug)" } - if entities[0].entity_type.lower() == "project": - project = entities[0] - else: - project = entities[0]["project"] - + project = self.get_project_from_entity(entities[0], session) project_name = project["full_name"] self.dbcon.Session["AVALON_PROJECT"] = project_name - selected_av_entities = list(self.dbcon.find({ - "type": "asset", - "data.ftrackId": {"$in": ftrack_ids} - })) + asset_docs = list(get_assets( + project_name, + fields=["_id", "name", "data.ftrackId", "data.parents"] + )) + selected_av_entities = [] + found_ftrack_ids = set() + asset_docs_by_name = collections.defaultdict(list) + for asset_doc in asset_docs: + ftrack_id = asset_doc["data"].get("ftrackId") + if ftrack_id: + found_ftrack_ids.add(ftrack_id) + if ftrack_id in entity_mapping: + selected_av_entities.append(asset_doc) + + asset_name = asset_doc["name"] + asset_docs_by_name[asset_name].append(asset_doc) + found_without_ftrack_id = {} - if len(selected_av_entities) != len(ftrack_ids): - found_ftrack_ids = [ - ent["data"]["ftrackId"] for ent in selected_av_entities - ] - for ftrack_id, entity in entity_mapping.items(): - if ftrack_id in found_ftrack_ids: + for ftrack_id, entity in entity_mapping.items(): + if ftrack_id in found_ftrack_ids: + continue + + av_ents_by_name = asset_docs_by_name[entity["name"]] + if not av_ents_by_name: + continue + + ent_path_items = [ent["name"] for ent in entity["link"]] + end_index = len(ent_path_items) - 1 + parents = ent_path_items[1:end_index:] + # TODO we should say to user that + # few of them are missing in avalon + for av_ent in av_ents_by_name: + if av_ent["data"]["parents"] != parents: continue - av_ents_by_name = list(self.dbcon.find({ - "type": "asset", - "name": entity["name"] - })) - if not av_ents_by_name: - continue - - ent_path_items = [ent["name"] for ent in entity["link"]] - parents = ent_path_items[1:len(ent_path_items)-1:] - # TODO we should say to user that - # few of them are missing in avalon - for av_ent in av_ents_by_name: - if av_ent["data"]["parents"] != parents: - continue - - # TODO we should say to user that found entity - # with same name does not match same ftrack id? - if "ftrackId" not in av_ent["data"]: - selected_av_entities.append(av_ent) - found_without_ftrack_id[str(av_ent["_id"])] = ftrack_id - break + # TODO we should say to user that found entity + # with same name does not match same ftrack id? 
+ if "ftrackId" not in av_ent["data"]: + selected_av_entities.append(av_ent) + found_without_ftrack_id[str(av_ent["_id"])] = ftrack_id + break if not selected_av_entities: return { @@ -206,10 +208,7 @@ class DeleteAssetSubset(BaseAction): items.append(id_item) asset_ids = [ent["_id"] for ent in selected_av_entities] - subsets_for_selection = self.dbcon.find({ - "type": "subset", - "parent": {"$in": asset_ids} - }) + subsets_for_selection = get_subsets(project_name, asset_ids=asset_ids) asset_ending = "" if len(selected_av_entities) > 1: @@ -459,13 +458,9 @@ class DeleteAssetSubset(BaseAction): if len(assets_to_delete) > 0: map_av_ftrack_id = spec_data["without_ftrack_id"] # Prepare data when deleting whole avalon asset - avalon_assets = self.dbcon.find( - {"type": "asset"}, - { - "_id": 1, - "data.visualParent": 1, - "data.ftrackId": 1 - } + avalon_assets = get_assets( + project_name, + fields=["_id", "data.visualParent", "data.ftrackId"] ) avalon_assets_by_parent = collections.defaultdict(list) for asset in avalon_assets: diff --git a/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py b/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py index a0bf6622e9..3400c509ab 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py +++ b/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py @@ -5,7 +5,12 @@ import uuid import clique from pymongo import UpdateOne - +from openpype.client import ( + get_assets, + get_subsets, + get_versions, + get_representations +) from openpype.api import Anatomy from openpype.lib import StringTemplate, TemplateUnsolved from openpype.pipeline import AvalonMongoDB @@ -198,10 +203,9 @@ class DeleteOldVersions(BaseAction): self.log.debug("Project is set to {}".format(project_name)) # Get Assets from avalon database - assets = list(self.dbcon.find({ - "type": "asset", - "name": {"$in": avalon_asset_names} - })) + assets = list( + get_assets(project_name, asset_names=avalon_asset_names) + ) asset_id_to_name_map = { asset["_id"]: asset["name"] for asset in assets } @@ -210,10 +214,9 @@ class DeleteOldVersions(BaseAction): self.log.debug("Collected assets ({})".format(len(asset_ids))) # Get Subsets - subsets = list(self.dbcon.find({ - "type": "subset", - "parent": {"$in": asset_ids} - })) + subsets = list( + get_subsets(project_name, asset_ids=asset_ids) + ) subsets_by_id = {} subset_ids = [] for subset in subsets: @@ -230,10 +233,9 @@ class DeleteOldVersions(BaseAction): self.log.debug("Collected subsets ({})".format(len(subset_ids))) # Get Versions - versions = list(self.dbcon.find({ - "type": "version", - "parent": {"$in": subset_ids} - })) + versions = list( + get_versions(project_name, subset_ids=subset_ids) + ) versions_by_parent = collections.defaultdict(list) for ent in versions: @@ -295,10 +297,9 @@ class DeleteOldVersions(BaseAction): "message": msg } - repres = list(self.dbcon.find({ - "type": "representation", - "parent": {"$in": version_ids} - })) + repres = list( + get_representations(project_name, version_ids=version_ids) + ) self.log.debug( "Collected representations to remove ({})".format(len(repres)) diff --git a/openpype/modules/ftrack/event_handlers_user/action_delivery.py b/openpype/modules/ftrack/event_handlers_user/action_delivery.py index 86d88ef7cc..4b799b092b 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_delivery.py +++ b/openpype/modules/ftrack/event_handlers_user/action_delivery.py @@ -3,8 +3,13 @@ import copy import json import 
collections -from bson.objectid import ObjectId - +from openpype.client import ( + get_project, + get_assets, + get_subsets, + get_versions, + get_representations +) from openpype.api import Anatomy, config from openpype_modules.ftrack.lib import BaseAction, statics_icon from openpype_modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY @@ -18,11 +23,9 @@ from openpype.lib.delivery import ( process_single_file, process_sequence ) -from openpype.pipeline import AvalonMongoDB class Delivery(BaseAction): - identifier = "delivery.action" label = "Delivery" description = "Deliver data to client" @@ -30,11 +33,6 @@ class Delivery(BaseAction): icon = statics_icon("ftrack", "action_icons", "Delivery.svg") settings_key = "delivery_action" - def __init__(self, *args, **kwargs): - self.dbcon = AvalonMongoDB() - - super(Delivery, self).__init__(*args, **kwargs) - def discover(self, session, entities, event): is_valid = False for entity in entities: @@ -57,9 +55,7 @@ class Delivery(BaseAction): project_entity = self.get_project_from_entity(entities[0]) project_name = project_entity["full_name"] - self.dbcon.install() - self.dbcon.Session["AVALON_PROJECT"] = project_name - project_doc = self.dbcon.find_one({"type": "project"}, {"name": True}) + project_doc = get_project(project_name, fields=["name"]) if not project_doc: return { "success": False, @@ -68,8 +64,7 @@ class Delivery(BaseAction): ).format(project_name) } - repre_names = self._get_repre_names(session, entities) - self.dbcon.uninstall() + repre_names = self._get_repre_names(project_name, session, entities) items.append({ "type": "hidden", @@ -198,17 +193,21 @@ class Delivery(BaseAction): "title": title } - def _get_repre_names(self, session, entities): - version_ids = self._get_interest_version_ids(session, entities) + def _get_repre_names(self, project_name, session, entities): + version_ids = self._get_interest_version_ids( + project_name, session, entities + ) if not version_ids: return [] - repre_docs = self.dbcon.find({ - "type": "representation", - "parent": {"$in": version_ids} - }) - return list(sorted(repre_docs.distinct("name"))) + repre_docs = get_representations( + project_name, + version_ids=version_ids, + fields=["name"] + ) + repre_names = {repre_doc["name"] for repre_doc in repre_docs} + return list(sorted(repre_names)) - def _get_interest_version_ids(self, session, entities): + def _get_interest_version_ids(self, project_name, session, entities): # Extract AssetVersion entities asset_versions = self._extract_asset_versions(session, entities) # Prepare Asset ids @@ -235,14 +234,18 @@ class Delivery(BaseAction): subset_names.add(asset["name"]) version_nums.add(asset_version["version"]) - asset_docs_by_ftrack_id = self._get_asset_docs(session, parent_ids) + asset_docs_by_ftrack_id = self._get_asset_docs( + project_name, session, parent_ids + ) subset_docs = self._get_subset_docs( + project_name, asset_docs_by_ftrack_id, subset_names, asset_versions, assets_by_id ) version_docs = self._get_version_docs( + project_name, asset_docs_by_ftrack_id, subset_docs, version_nums, @@ -290,6 +293,7 @@ class Delivery(BaseAction): def _get_version_docs( self, + project_name, asset_docs_by_ftrack_id, subset_docs, version_nums, @@ -300,11 +304,11 @@ class Delivery(BaseAction): subset_doc["_id"]: subset_doc for subset_doc in subset_docs } - version_docs = list(self.dbcon.find({ - "type": "version", - "parent": {"$in": list(subset_docs_by_id.keys())}, - "name": {"$in": list(version_nums)} - })) + version_docs = list(get_versions( + project_name, 
+ subset_ids=subset_docs_by_id.keys(), + versions=version_nums + )) version_docs_by_parent_id = collections.defaultdict(dict) for version_doc in version_docs: subset_doc = subset_docs_by_id[version_doc["parent"]] @@ -345,6 +349,7 @@ class Delivery(BaseAction): def _get_subset_docs( self, + project_name, asset_docs_by_ftrack_id, subset_names, asset_versions, @@ -354,11 +359,11 @@ class Delivery(BaseAction): asset_doc["_id"] for asset_doc in asset_docs_by_ftrack_id.values() ] - subset_docs = list(self.dbcon.find({ - "type": "subset", - "parent": {"$in": asset_doc_ids}, - "name": {"$in": list(subset_names)} - })) + subset_docs = list(get_subsets( + project_name, + asset_ids=asset_doc_ids, + subset_names=subset_names + )) subset_docs_by_parent_id = collections.defaultdict(dict) for subset_doc in subset_docs: asset_id = subset_doc["parent"] @@ -385,15 +390,21 @@ class Delivery(BaseAction): filtered_subsets.append(subset_doc) return filtered_subsets - def _get_asset_docs(self, session, parent_ids): - asset_docs = list(self.dbcon.find({ - "type": "asset", - "data.ftrackId": {"$in": list(parent_ids)} - })) + def _get_asset_docs(self, project_name, session, parent_ids): + asset_docs = list(get_assets( + project_name, fields=["_id", "name", "data.ftrackId"] + )) + asset_docs_by_id = {} + asset_docs_by_name = {} asset_docs_by_ftrack_id = {} for asset_doc in asset_docs: + asset_id = str(asset_doc["_id"]) + asset_name = asset_doc["name"] ftrack_id = asset_doc["data"].get("ftrackId") + + asset_docs_by_id[asset_id] = asset_doc + asset_docs_by_name[asset_name] = asset_doc if ftrack_id: asset_docs_by_ftrack_id[ftrack_id] = asset_doc @@ -406,15 +417,15 @@ class Delivery(BaseAction): avalon_mongo_id_values = query_custom_attributes( session, [attr_def["id"]], parent_ids, True ) - entity_ids_by_mongo_id = { - ObjectId(item["value"]): item["entity_id"] - for item in avalon_mongo_id_values - if item["value"] - } - missing_ids = set(parent_ids) - for entity_id in set(entity_ids_by_mongo_id.values()): - if entity_id in missing_ids: + for item in avalon_mongo_id_values: + if not item["value"]: + continue + asset_id = item["value"] + entity_id = item["entity_id"] + asset_doc = asset_docs_by_id.get(asset_id) + if asset_doc: + asset_docs_by_ftrack_id[entity_id] = asset_doc missing_ids.remove(entity_id) entity_ids_by_name = {} @@ -427,36 +438,10 @@ class Delivery(BaseAction): for entity in not_found_entities } - expressions = [] - if entity_ids_by_mongo_id: - expression = { - "type": "asset", - "_id": {"$in": list(entity_ids_by_mongo_id.keys())} - } - expressions.append(expression) - - if entity_ids_by_name: - expression = { - "type": "asset", - "name": {"$in": list(entity_ids_by_name.keys())} - } - expressions.append(expression) - - if expressions: - if len(expressions) == 1: - filter = expressions[0] - else: - filter = {"$or": expressions} - - asset_docs = self.dbcon.find(filter) - for asset_doc in asset_docs: - if asset_doc["_id"] in entity_ids_by_mongo_id: - entity_id = entity_ids_by_mongo_id[asset_doc["_id"]] - asset_docs_by_ftrack_id[entity_id] = asset_doc - - elif asset_doc["name"] in entity_ids_by_name: - entity_id = entity_ids_by_name[asset_doc["name"]] - asset_docs_by_ftrack_id[entity_id] = asset_doc + for asset_name, entity_id in entity_ids_by_name.items(): + asset_doc = asset_docs_by_name.get(asset_name) + if asset_doc: + asset_docs_by_ftrack_id[entity_id] = asset_doc return asset_docs_by_ftrack_id @@ -490,7 +475,6 @@ class Delivery(BaseAction): session.commit() try: - self.dbcon.install() report = 
self.real_launch(session, entities, event) except Exception as exc: @@ -516,7 +500,6 @@ else: job["status"] = "failed" session.commit() - self.dbcon.uninstall() if not report["success"]: self.show_interface( @@ -558,16 +541,15 @@ if not os.path.exists(location_path): os.makedirs(location_path) - self.dbcon.Session["AVALON_PROJECT"] = project_name - self.log.debug("Collecting representations to process.") - version_ids = self._get_interest_version_ids(session, entities) - repres_to_deliver = list(self.dbcon.find({ - "type": "representation", - "parent": {"$in": version_ids}, - "name": {"$in": repre_names} - })) - + version_ids = self._get_interest_version_ids( + project_name, session, entities + ) + repres_to_deliver = list(get_representations( + project_name, + representation_names=repre_names, + version_ids=version_ids + )) anatomy = Anatomy(project_name) format_dict = get_format_dict(anatomy, location_path)
diff --git a/openpype/modules/ftrack/event_handlers_user/action_fill_workfile_attr.py b/openpype/modules/ftrack/event_handlers_user/action_fill_workfile_attr.py index c7237a1150..d30c41a749 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_fill_workfile_attr.py +++ b/openpype/modules/ftrack/event_handlers_user/action_fill_workfile_attr.py @@ -7,6 +7,10 @@ import datetime import ftrack_api +from openpype.client import ( + get_project, + get_assets, +) from openpype.api import get_project_settings from openpype.lib import ( get_workfile_template_key, @@ -14,7 +18,6 @@ from openpype.lib import ( Anatomy, StringTemplate, ) -from openpype.pipeline import AvalonMongoDB from openpype_modules.ftrack.lib import BaseAction, statics_icon from openpype_modules.ftrack.lib.avalon_sync import create_chunks @@ -248,10 +251,8 @@ class FillWorkfileAttributeAction(BaseAction): # Find matching asset documents and map them by ftrack task entities # - result stored to 'asset_docs_with_task_entities' is list with # tuple `(asset document, [task entities, ...])` - dbcon = AvalonMongoDB() - dbcon.Session["AVALON_PROJECT"] = project_name # Query all asset documents - asset_docs = list(dbcon.find({"type": "asset"})) + asset_docs = list(get_assets(project_name)) job_entity["data"] = json.dumps({ "description": "(1/3) Asset documents queried." }) @@ -276,7 +277,7 @@ # Keep placeholders in the template unfilled host_name = "{app}" extension = "{ext}" - project_doc = dbcon.find_one({"type": "project"}) + project_doc = get_project(project_name) project_settings = get_project_settings(project_name) anatomy = Anatomy(project_name) templates_by_key = {}
diff --git a/openpype/modules/ftrack/event_handlers_user/action_prepare_project.py b/openpype/modules/ftrack/event_handlers_user/action_prepare_project.py index 0b14e7aa2b..e9dc11de9f 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_prepare_project.py +++ b/openpype/modules/ftrack/event_handlers_user/action_prepare_project.py @@ -1,8 +1,8 @@ import json +from openpype.client import get_project from openpype.api import ProjectSettings from openpype.lib import create_project -from openpype.pipeline import AvalonMongoDB from openpype.settings import SaveWarningExc from openpype_modules.ftrack.lib import ( @@ -389,12 +389,8 @@ class PrepareProjectLocal(BaseAction): project_name = project_entity["full_name"] # Try to find project document - dbcon = AvalonMongoDB() - dbcon.install() - dbcon.Session["AVALON_PROJECT"] = project_name - project_doc = dbcon.find_one({ - "type": "project" - }) + project_doc = get_project(project_name) + # Create project if it is not available # - creation is required to be able to set project anatomy and attributes if not project_doc: @@ -402,9 +398,7 @@ class PrepareProjectLocal(BaseAction): self.log.info("Creating project \"{} [{}]\"".format( project_name, project_code )) - create_project(project_name, project_code, dbcon=dbcon) - - dbcon.uninstall() + create_project(project_name, project_code) project_settings = ProjectSettings(project_name) project_anatomy_settings = project_settings["project_anatomy"]
diff --git a/openpype/modules/ftrack/event_handlers_user/action_rv.py b/openpype/modules/ftrack/event_handlers_user/action_rv.py index 040ca75582..2480ea7f95 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_rv.py +++ b/openpype/modules/ftrack/event_handlers_user/action_rv.py @@ -5,9 +5,16 @@ import json import ftrack_api +from openpype.client import ( + get_asset_by_name, + get_subset_by_name, + get_version_by_name, + get_representation_by_name +) +from openpype.api import Anatomy from openpype.pipeline import ( get_representation_path, - legacy_io, + AvalonMongoDB, ) from openpype_modules.ftrack.lib import BaseAction, statics_icon @@ -255,9 +262,10 @@ class RVAction(BaseAction): "Component", list(event["data"]["values"].values())[0] )["version"]["asset"]["parent"]["link"][0] project = session.get(link["type"], link["id"]) - os.environ["AVALON_PROJECT"] = project["name"] - legacy_io.Session["AVALON_PROJECT"] = project["name"] - legacy_io.install() + project_name = project["full_name"] + dbcon = AvalonMongoDB() + dbcon.Session["AVALON_PROJECT"] = project_name + anatomy = Anatomy(project_name) location = ftrack_api.Session().pick_location() @@ -281,37 +289,38 @@ class RVAction(BaseAction): if online_source: continue - asset = legacy_io.find_one({"type": "asset", "name": parent_name}) - subset = legacy_io.find_one( - { - "type": "subset", - "name": component["version"]["asset"]["name"], - "parent": asset["_id"] - } + subset_name = component["version"]["asset"]["name"] + version_name = component["version"]["version"] + representation_name = component["file_type"][1:] + + asset_doc = get_asset_by_name( + project_name, parent_name, fields=["_id"] ) - version = legacy_io.find_one( - { - "type": 
"version", - "name": component["version"]["version"], - "parent": subset["_id"] - } + subset_doc = get_subset_by_name( + project_name, + subset_name=subset_name, + asset_id=asset_doc["_id"] ) - representation = legacy_io.find_one( - { - "type": "representation", - "parent": version["_id"], - "name": component["file_type"][1:] - } + version_doc = get_version_by_name( + project_name, + version=version_name, + subset_id=subset_doc["_id"] ) - if representation is None: - representation = legacy_io.find_one( - { - "type": "representation", - "parent": version["_id"], - "name": "preview" - } + repre_doc = get_representation_by_name( + project_name, + version_id=version_doc["_id"], + representation_name=representation_name + ) + if not repre_doc: + repre_doc = get_representation_by_name( + project_name, + version_id=version_doc["_id"], + representation_name="preview" ) - paths.append(get_representation_path(representation)) + + paths.append(get_representation_path( + repre_doc, root=anatomy.roots, dbcon=dbcon + )) return paths diff --git a/openpype/modules/ftrack/event_handlers_user/action_store_thumbnails_to_avalon.py b/openpype/modules/ftrack/event_handlers_user/action_store_thumbnails_to_avalon.py index 62fdfa2bdd..d655dddcaf 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_store_thumbnails_to_avalon.py +++ b/openpype/modules/ftrack/event_handlers_user/action_store_thumbnails_to_avalon.py @@ -5,6 +5,14 @@ import requests from bson.objectid import ObjectId +from openpype.client import ( + get_project, + get_asset_by_id, + get_assets, + get_subset_by_name, + get_version_by_name, + get_representations +) from openpype_modules.ftrack.lib import BaseAction, statics_icon from openpype.api import Anatomy from openpype.pipeline import AvalonMongoDB @@ -385,7 +393,7 @@ class StoreThumbnailsToAvalon(BaseAction): db_con.Session["AVALON_PROJECT"] = project_name - avalon_project = db_con.find_one({"type": "project"}) + avalon_project = get_project(project_name) output["project"] = avalon_project if not avalon_project: @@ -399,19 +407,17 @@ class StoreThumbnailsToAvalon(BaseAction): asset_mongo_id = parent["custom_attributes"].get(CUST_ATTR_ID_KEY) if asset_mongo_id: try: - asset_mongo_id = ObjectId(asset_mongo_id) - asset_ent = db_con.find_one({ - "type": "asset", - "_id": asset_mongo_id - }) + asset_ent = get_asset_by_id(project_name, asset_mongo_id) except Exception: pass if not asset_ent: - asset_ent = db_con.find_one({ - "type": "asset", - "data.ftrackId": parent["id"] - }) + asset_docs = get_assets(project_name, asset_names=[parent["name"]]) + for asset_doc in asset_docs: + ftrack_id = asset_doc.get("data", {}).get("ftrackId") + if ftrack_id == parent["id"]: + asset_ent = asset_doc + break output["asset"] = asset_ent @@ -422,13 +428,11 @@ class StoreThumbnailsToAvalon(BaseAction): ) return output - asset_mongo_id = asset_ent["_id"] - - subset_ent = db_con.find_one({ - "type": "subset", - "parent": asset_mongo_id, - "name": subset_name - }) + subset_ent = get_subset_by_name( + project_name, + subset_name=subset_name, + asset_id=asset_ent["_id"] + ) output["subset"] = subset_ent @@ -439,11 +443,11 @@ class StoreThumbnailsToAvalon(BaseAction): ).format(subset_name, ent_path) return output - version_ent = db_con.find_one({ - "type": "version", - "name": version, - "parent": subset_ent["_id"] - }) + version_ent = get_version_by_name( + project_name, + version, + subset_ent["_id"] + ) output["version"] = version_ent @@ -454,10 +458,10 @@ class StoreThumbnailsToAvalon(BaseAction): 
).format(version, subset_name, ent_path) return output - repre_ents = list(db_con.find({ - "type": "representation", - "parent": version_ent["_id"] - })) + repre_ents = list(get_representations( + project_name, + version_ids=[version_ent["_id"]] + )) output["representations"] = repre_ents return output diff --git a/openpype/modules/ftrack/lib/avalon_sync.py b/openpype/modules/ftrack/lib/avalon_sync.py index e4ba651bfd..68b5c62c53 100644 --- a/openpype/modules/ftrack/lib/avalon_sync.py +++ b/openpype/modules/ftrack/lib/avalon_sync.py @@ -6,6 +6,14 @@ import numbers import six +from openpype.client import ( + get_project, + get_assets, + get_archived_assets, + get_subsets, + get_versions, + get_representations +) from openpype.api import ( Logger, get_anatomy_settings @@ -576,6 +584,10 @@ class SyncEntitiesFactory: self.ft_project_id = ft_project_id self.entities_dict = entities_dict + @property + def project_name(self): + return self.entities_dict[self.ft_project_id]["name"] + @property def avalon_ents_by_id(self): """ @@ -660,9 +672,9 @@ class SyncEntitiesFactory: (list) of assets """ if self._avalon_archived_ents is None: - self._avalon_archived_ents = [ - ent for ent in self.dbcon.find({"type": "archived_asset"}) - ] + self._avalon_archived_ents = list( + get_archived_assets(self.project_name) + ) return self._avalon_archived_ents @property @@ -730,7 +742,7 @@ class SyncEntitiesFactory: """ if self._subsets_by_parent_id is None: self._subsets_by_parent_id = collections.defaultdict(list) - for subset in self.dbcon.find({"type": "subset"}): + for subset in get_subsets(self.project_name): self._subsets_by_parent_id[str(subset["parent"])].append( subset ) @@ -1421,8 +1433,8 @@ class SyncEntitiesFactory: # Avalon entities self.dbcon.install() self.dbcon.Session["AVALON_PROJECT"] = ft_project_name - avalon_project = self.dbcon.find_one({"type": "project"}) - avalon_entities = self.dbcon.find({"type": "asset"}) + avalon_project = get_project(ft_project_name) + avalon_entities = get_assets(ft_project_name) self.avalon_project = avalon_project self.avalon_entities = avalon_entities @@ -2258,46 +2270,37 @@ class SyncEntitiesFactory: self._delete_subsets_without_asset(subsets_to_remove) def _delete_subsets_without_asset(self, not_existing_parents): - subset_ids = [] - version_ids = [] repre_ids = [] to_delete = [] + subset_ids = [] for parent_id in not_existing_parents: subsets = self.subsets_by_parent_id.get(parent_id) if not subsets: continue for subset in subsets: - if subset.get("type") != "subset": - continue - subset_ids.append(subset["_id"]) + if subset.get("type") == "subset": + subset_ids.append(subset["_id"]) - db_subsets = self.dbcon.find({ - "_id": {"$in": subset_ids}, - "type": "subset" - }) - if not db_subsets: - return - - db_versions = self.dbcon.find({ - "parent": {"$in": subset_ids}, - "type": "version" - }) - if db_versions: - version_ids = [ver["_id"] for ver in db_versions] - - db_repres = self.dbcon.find({ - "parent": {"$in": version_ids}, - "type": "representation" - }) - if db_repres: - repre_ids = [repre["_id"] for repre in db_repres] + db_versions = get_versions( + self.project_name, + subset_ids=subset_ids, + fields=["_id"] + ) + version_ids = [ver["_id"] for ver in db_versions] + db_repres = get_representations( + self.project_name, + version_ids=version_ids, + fields=["_id"] + ) + repre_ids = [repre["_id"] for repre in db_repres] to_delete.extend(subset_ids) to_delete.extend(version_ids) to_delete.extend(repre_ids) - self.dbcon.delete_many({"_id": {"$in": to_delete}}) + 
if to_delete: + self.dbcon.delete_many({"_id": {"$in": to_delete}}) # Probably deprecated def _check_changeability(self, parent_id=None): @@ -2779,8 +2782,7 @@ class SyncEntitiesFactory: def report(self): items = [] - project_name = self.entities_dict[self.ft_project_id]["name"] - title = "Synchronization report ({}):".format(project_name) + title = "Synchronization report ({}):".format(self.project_name) keys = ["error", "warning", "info"] for key in keys:
diff --git a/openpype/modules/ftrack/plugins/publish/integrate_hierarchy_ftrack.py b/openpype/modules/ftrack/plugins/publish/integrate_hierarchy_ftrack.py index 73398941eb..1a5d74bf26 100644 --- a/openpype/modules/ftrack/plugins/publish/integrate_hierarchy_ftrack.py +++ b/openpype/modules/ftrack/plugins/publish/integrate_hierarchy_ftrack.py @@ -3,7 +3,8 @@ import collections import six import pyblish.api from copy import deepcopy -from openpype.pipeline import legacy_io +from openpype.client import get_asset_by_id + # Copy of constant `openpype_modules.ftrack.lib.avalon_sync.CUST_ATTR_AUTO_SYNC` CUST_ATTR_AUTO_SYNC = "avalon_auto_sync" @@ -82,9 +83,6 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin): auto_sync_state = project[ "custom_attributes"][CUST_ATTR_AUTO_SYNC] - if not legacy_io.Session: - legacy_io.install() - self.ft_project = None # temporarily disable ftrack project's autosyncing @@ -93,14 +91,14 @@ try: # import ftrack hierarchy - self.import_to_ftrack(hierarchy_context) + self.import_to_ftrack(project_name, hierarchy_context) except Exception: raise finally: if auto_sync_state: self.auto_sync_on(project) - def import_to_ftrack(self, input_data, parent=None): + def import_to_ftrack(self, project_name, input_data, parent=None): # Prequery hierarchical custom attributes hier_custom_attributes = get_pype_attr(self.session)[1] hier_attr_by_key = { @@ -222,7 +220,7 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin): six.reraise(tp, value, tb) # Incoming links. - self.create_links(entity_data, entity) + self.create_links(project_name, entity_data, entity) try: self.session.commit() except Exception: @@ -255,9 +253,9 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin): # Import children. if 'childs' in entity_data: self.import_to_ftrack( - entity_data['childs'], entity) + project_name, entity_data['childs'], entity) - def create_links(self, entity_data, entity): + def create_links(self, project_name, entity_data, entity): # Clear existing links. for link in entity.get("incoming_links", []): self.session.delete(link) @@ -270,9 +268,15 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin): six.reraise(tp, value, tb) # Create new links. - for input in entity_data.get("inputs", []): - input_id = legacy_io.find_one({"_id": input})["data"]["ftrackId"] - assetbuild = self.session.get("AssetBuild", input_id) + for asset_id in entity_data.get("inputs", []): + asset_doc = get_asset_by_id(project_name, asset_id) + ftrack_id = None + if asset_doc: + ftrack_id = asset_doc["data"].get("ftrackId") + if not ftrack_id: + continue + + assetbuild = self.session.get("AssetBuild", ftrack_id) self.log.debug( "Creating link from {0} to {1}".format( assetbuild["name"], entity["name"]
diff --git a/openpype/modules/timers_manager/timers_manager.py b/openpype/modules/timers_manager/timers_manager.py index 3f77a2b7dc..3cf1614316 100644 --- a/openpype/modules/timers_manager/timers_manager.py +++ b/openpype/modules/timers_manager/timers_manager.py @@ -7,6 +7,7 @@ from openpype_interfaces import ( ITrayService, ILaunchHookPaths ) +from openpype.lib.events import register_event_callback from openpype.pipeline import AvalonMongoDB from .exceptions import InvalidContextError @@ -422,3 +423,20 @@ class TimersManager(OpenPypeModule, ITrayService, ILaunchHookPaths): } return requests.post(rest_api_url, json=data) + + def on_host_install(self, host, host_name, project_name): + self.log.debug("Installing task changed callback") + register_event_callback("taskChanged", self._on_host_task_change) + + def _on_host_task_change(self, event): + project_name = event["project_name"] + asset_name = event["asset_name"] + task_name = event["task_name"] + self.log.debug(( + "Sending message that timer should change to" + " Project: {} Asset: {} Task: {}" + ).format(project_name, asset_name, task_name)) + + self.start_timer_with_webserver( + project_name, asset_name, task_name, self.log + )
diff --git a/openpype/pipeline/context_tools.py b/openpype/pipeline/context_tools.py index c6e09cfba1..4a147c230b 100644 --- a/openpype/pipeline/context_tools.py +++ b/openpype/pipeline/context_tools.py @@ -16,9 +16,7 @@ from openpype.modules import load_modules, ModulesManager from openpype.settings import get_project_settings from openpype.lib import ( Anatomy, - register_event_callback, filter_pyblish_plugins, - change_timer_to_current_context, ) from . import ( @@ -33,6 +31,9 @@ from . import ( _is_installed = False _registered_root = {"_": ""} _registered_host = {"_": None} +# Keep modules manager (and its modules) in memory +# - that gives the option to register modules' callbacks +_modules_manager = None log = logging.getLogger(__name__) @@ -44,6 +45,23 @@ PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish") LOAD_PATH = os.path.join(PLUGINS_DIR, "load") +def _get_modules_manager(): + """Get or create modules manager for host installation. + + This is not meant for public usage. Reason is to keep modules + in memory of the process to be able to trigger their event callbacks if + they need any. + + Returns: + ModulesManager: Manager wrapping discovered modules. 
+ """ + + global _modules_manager + if _modules_manager is None: + _modules_manager = ModulesManager() + return _modules_manager + + def register_root(path): """Register currently active root""" log.info("Registering root: %s" % path) @@ -74,6 +92,7 @@ def install_host(host): _is_installed = True legacy_io.install() + modules_manager = _get_modules_manager() missing = list() for key in ("AVALON_PROJECT", "AVALON_ASSET"): @@ -95,8 +114,6 @@ def install_host(host): register_host(host) - register_event_callback("taskChanged", _on_task_change) - def modified_emit(obj, record): """Method replacing `emit` in Pyblish's MessageHandler.""" record.msg = record.getMessage() @@ -112,7 +129,14 @@ def install_host(host): else: pyblish.api.register_target("local") - install_openpype_plugins() + project_name = os.environ.get("AVALON_PROJECT") + host_name = os.environ.get("AVALON_APP") + + # Give option to handle host installation + for module in modules_manager.get_enabled_modules(): + module.on_host_install(host, host_name, project_name) + + install_openpype_plugins(project_name, host_name) def install_openpype_plugins(project_name=None, host_name=None): @@ -124,7 +148,7 @@ def install_openpype_plugins(project_name=None, host_name=None): pyblish.api.register_discovery_filter(filter_pyblish_plugins) register_loader_plugin_path(LOAD_PATH) - modules_manager = ModulesManager() + modules_manager = _get_modules_manager() publish_plugin_dirs = modules_manager.collect_plugin_paths()["publish"] for path in publish_plugin_dirs: pyblish.api.register_plugin_path(path) @@ -168,10 +192,6 @@ def install_openpype_plugins(project_name=None, host_name=None): register_inventory_action(path) -def _on_task_change(): - change_timer_to_current_context() - - def uninstall_host(): """Undo all of what `install()` did""" host = registered_host() diff --git a/openpype/plugins/publish/collect_current_pype_user.py b/openpype/plugins/publish/collect_current_pype_user.py index 1a52a59012..2d507ba292 100644 --- a/openpype/plugins/publish/collect_current_pype_user.py +++ b/openpype/plugins/publish/collect_current_pype_user.py @@ -1,5 +1,3 @@ -import os -import getpass import pyblish.api from openpype.lib import get_openpype_username diff --git a/openpype/plugins/publish/extract_review.py b/openpype/plugins/publish/extract_review.py index 879125dac3..b6e5fee1fe 100644 --- a/openpype/plugins/publish/extract_review.py +++ b/openpype/plugins/publish/extract_review.py @@ -763,7 +763,8 @@ class ExtractReview(pyblish.api.InstancePlugin): start_frame = int(start_frame) end_frame = int(end_frame) collections = clique.assemble(files)[0] - assert len(collections) == 1, "Multiple collections found." 
+ msg = "Multiple collections {} found.".format(collections) + assert len(collections) == 1, msg col = collections[0] # do nothing if no gap is found in input range diff --git a/openpype/plugins/publish/extract_review_slate.py b/openpype/plugins/publish/extract_review_slate.py index cff71f67ac..28685c2e90 100644 --- a/openpype/plugins/publish/extract_review_slate.py +++ b/openpype/plugins/publish/extract_review_slate.py @@ -1,4 +1,6 @@ import os +from pprint import pformat +import re import openpype.api import pyblish from openpype.lib import ( @@ -21,6 +23,8 @@ class ExtractReviewSlate(openpype.api.Extractor): families = ["slate", "review"] match = pyblish.api.Subset + SUFFIX = "_slate" + hosts = ["nuke", "shell"] optional = True @@ -29,28 +33,19 @@ class ExtractReviewSlate(openpype.api.Extractor): if "representations" not in inst_data: raise RuntimeError("Burnin needs already created mov to work on.") - suffix = "_slate" - slate_path = inst_data.get("slateFrame") + # get slates frame from upstream + slates_data = inst_data.get("slateFrames") + if not slates_data: + # make it backward compatible and open for slates generator + # premium plugin + slates_data = { + "*": inst_data["slateFrame"] + } + + self.log.info("_ slates_data: {}".format(pformat(slates_data))) + ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") - slate_streams = get_ffprobe_streams(slate_path, self.log) - # Try to find first stream with defined 'width' and 'height' - # - this is to avoid order of streams where audio can be as first - # - there may be a better way (checking `codec_type`?)+ - slate_width = None - slate_height = None - for slate_stream in slate_streams: - if "width" in slate_stream and "height" in slate_stream: - slate_width = int(slate_stream["width"]) - slate_height = int(slate_stream["height"]) - break - - # Raise exception of any stream didn't define input resolution - if slate_width is None: - raise AssertionError(( - "FFprobe couldn't read resolution from input file: \"{}\"" - ).format(slate_path)) - if "reviewToWidth" in inst_data: use_legacy_code = True else: @@ -77,6 +72,12 @@ class ExtractReviewSlate(openpype.api.Extractor): streams = get_ffprobe_streams( input_path, self.log ) + # get slate data + slate_path = self._get_slate_path(input_file, slates_data) + self.log.info("_ slate_path: {}".format(slate_path)) + + slate_width, slate_height = self._get_slates_resolution(slate_path) + # Get video metadata ( input_width, @@ -138,7 +139,7 @@ class ExtractReviewSlate(openpype.api.Extractor): _remove_at_end = [] ext = os.path.splitext(input_file)[1] - output_file = input_file.replace(ext, "") + suffix + ext + output_file = input_file.replace(ext, "") + self.SUFFIX + ext _remove_at_end.append(input_path) @@ -369,6 +370,43 @@ class ExtractReviewSlate(openpype.api.Extractor): self.log.debug(inst_data["representations"]) + def _get_slate_path(self, input_file, slates_data): + slate_path = None + for sl_n, _slate_path in slates_data.items(): + if "*" in sl_n: + slate_path = _slate_path + break + elif re.search(sl_n, input_file): + slate_path = _slate_path + break + + if not slate_path: + raise AttributeError( + "Missing slates paths: {}".format(slates_data)) + + return slate_path + + def _get_slates_resolution(self, slate_path): + slate_streams = get_ffprobe_streams(slate_path, self.log) + # Try to find first stream with defined 'width' and 'height' + # - this is to avoid order of streams where audio can be as first + # - there may be a better way (checking `codec_type`?)+ + slate_width = None + slate_height = 
None + for slate_stream in slate_streams: + if "width" in slate_stream and "height" in slate_stream: + slate_width = int(slate_stream["width"]) + slate_height = int(slate_stream["height"]) + break + + # Raise exception of any stream didn't define input resolution + if slate_width is None: + raise AssertionError(( + "FFprobe couldn't read resolution from input file: \"{}\"" + ).format(slate_path)) + + return (slate_width, slate_height) + def _get_video_metadata(self, streams): input_timecode = "" input_width = None diff --git a/openpype/settings/defaults/project_settings/deadline.json b/openpype/settings/defaults/project_settings/deadline.json index 6b0d7586c4..a6e7b4a94a 100644 --- a/openpype/settings/defaults/project_settings/deadline.json +++ b/openpype/settings/defaults/project_settings/deadline.json @@ -95,4 +95,4 @@ } } } -} +} \ No newline at end of file diff --git a/openpype/settings/defaults/project_settings/maya.json b/openpype/settings/defaults/project_settings/maya.json index efd22e13c8..cdd3a62d00 100644 --- a/openpype/settings/defaults/project_settings/maya.json +++ b/openpype/settings/defaults/project_settings/maya.json @@ -66,6 +66,28 @@ "defaults": [], "joint_hints": "jnt_org" }, + "CreateMultiverseLook": { + "enabled": true, + "publish_mip_map": true + }, + "CreateMultiverseUsd": { + "enabled": true, + "defaults": [ + "Main" + ] + }, + "CreateMultiverseUsdComp": { + "enabled": true, + "defaults": [ + "Main" + ] + }, + "CreateMultiverseUsdOver": { + "enabled": true, + "defaults": [ + "Main" + ] + }, "CreateAnimation": { "enabled": true, "defaults": [ @@ -379,6 +401,14 @@ "optional": true, "active": true }, + "ExtractAlembic": { + "enabled": true, + "families": [ + "pointcache", + "model", + "vrayproxy" + ] + }, "ValidateRigContents": { "enabled": false, "optional": true, @@ -413,6 +443,11 @@ "optional": true, "active": true }, + "ValidateCameraContents": { + "enabled": true, + "optional": true, + "validate_shapes": true + }, "ExtractPlayblast": { "capture_preset": { "Codec": { diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json index 6dc10ed2a5..09287a8b50 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json @@ -124,10 +124,41 @@ ] }, + { + "type": "dict", + "collapsible": true, + "key": "CreateMultiverseLook", + "label": "Create Multiverse Look", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "publish_mip_map", + "label": "Publish Mip Maps" + } + ] + }, { "type": "schema_template", "name": "template_create_plugin", "template_data": [ + { + "key": "CreateMultiverseUsd", + "label": "Create Multiverse USD" + }, + { + "key": "CreateMultiverseUsdComp", + "label": "Create Multiverse USD Composition" + }, + { + "key": "CreateMultiverseUsdOver", + "label": "Create Multiverse USD Override" + }, { "key": "CreateAnimation", "label": "Create Animation" diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json index 9877b5ff0d..41b681d893 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json +++ 
b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json @@ -504,6 +504,30 @@ "label": "ValidateUniqueNames" } ] + }, + { + "type": "label", + "label": "Extractors" + }, + { + "type": "dict", + "collapsible": true, + "key": "ExtractAlembic", + "label": "Extract Alembic", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "key": "families", + "label": "Families", + "type": "list", + "object_type": "text" + } + ] } ] }, @@ -570,6 +594,30 @@ } ] }, + { + "type": "dict", + "collapsible": true, + "key": "ValidateCameraContents", + "label": "Validate Camera Content", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "validate_shapes", + "label": "Validate presence of shapes" + } + ] + }, { "type": "splitter" }, diff --git a/openpype/tools/workfiles/files_widget.py b/openpype/tools/workfiles/files_widget.py index 68fe8301c9..a7e54471dc 100644 --- a/openpype/tools/workfiles/files_widget.py +++ b/openpype/tools/workfiles/files_widget.py @@ -1,6 +1,7 @@ import os import logging import shutil +import copy import Qt from Qt import QtWidgets, QtCore @@ -90,7 +91,9 @@ class FilesWidget(QtWidgets.QWidget): self._task_type = None # Pype's anatomy object for current project - self.anatomy = Anatomy(legacy_io.Session["AVALON_PROJECT"]) + project_name = legacy_io.Session["AVALON_PROJECT"] + self.anatomy = Anatomy(project_name) + self.project_name = project_name # Template key used to get work template from anatomy templates self.template_key = "work" @@ -98,6 +101,7 @@ class FilesWidget(QtWidgets.QWidget): self._workfiles_root = None self._workdir_path = None self.host = registered_host() + self.host_name = os.environ["AVALON_APP"] # Whether to automatically select the latest modified # file on a refresh of the files model. 
@@ -385,8 +389,9 @@ class FilesWidget(QtWidgets.QWidget): return None if self._asset_doc is None: - project_name = legacy_io.active_project() - self._asset_doc = get_asset_by_id(project_name, self._asset_id) + self._asset_doc = get_asset_by_id( + self.project_name, self._asset_id + ) return self._asset_doc @@ -396,8 +401,8 @@ class FilesWidget(QtWidgets.QWidget): session = legacy_io.Session.copy() self.template_key = get_workfile_template_key( self._task_type, - session["AVALON_APP"], - project_name=session["AVALON_PROJECT"] + self.host_name, + project_name=self.project_name ) changes = compute_session_changes( session, @@ -430,6 +435,21 @@ class FilesWidget(QtWidgets.QWidget): template_key=self.template_key ) + def _get_event_context_data(self): + asset_id = None + asset_name = None + asset_doc = self._get_asset_doc() + if asset_doc: + asset_id = asset_doc["_id"] + asset_name = asset_doc["name"] + return { + "project_name": self.project_name, + "asset_id": asset_id, + "asset_name": asset_name, + "task_name": self._task_name, + "host_name": self.host_name + } + def open_file(self, filepath): host = self.host if host.has_unsaved_changes(): @@ -453,8 +473,21 @@ class FilesWidget(QtWidgets.QWidget): # Save current scene, continue to open file host.save_file(current_file) + event_data_before = self._get_event_context_data() + event_data_before["filepath"] = filepath + event_data_after = copy.deepcopy(event_data_before) + emit_event( + "workfile.open.before", + event_data_before, + source="workfiles.tool" + ) self._enter_session() host.open_file(filepath) + emit_event( + "workfile.open.after", + event_data_after, + source="workfiles.tool" + ) self.file_opened.emit() def save_changes_prompt(self): @@ -567,9 +600,14 @@ class FilesWidget(QtWidgets.QWidget): src_path = self._get_selected_filepath() # Trigger before save event + event_data_before = self._get_event_context_data() + event_data_before.update({ + "filename": work_filename, + "workdir_path": self._workdir_path + }) emit_event( "workfile.save.before", - {"filename": work_filename, "workdir_path": self._workdir_path}, + event_data_before, source="workfiles.tool" ) @@ -602,15 +640,20 @@ class FilesWidget(QtWidgets.QWidget): # Create extra folders create_workdir_extra_folders( self._workdir_path, - legacy_io.Session["AVALON_APP"], + self.host_name, self._task_type, self._task_name, - legacy_io.Session["AVALON_PROJECT"] + self.project_name ) + event_data_after = self._get_event_context_data() + event_data_after.update({ + "filename": work_filename, + "workdir_path": self._workdir_path + }) # Trigger after save events emit_event( "workfile.save.after", - {"filename": work_filename, "workdir_path": self._workdir_path}, + event_data_after, source="workfiles.tool" ) diff --git a/openpype/version.py b/openpype/version.py index 7bf368108a..79e3b445f9 100644 --- a/openpype/version.py +++ b/openpype/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring Pype version.""" -__version__ = "3.11.1" +__version__ = "3.11.2-nightly.1" diff --git a/pyproject.toml b/pyproject.toml index ae89e7d9d8..4b297fe042 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "OpenPype" -version = "3.11.1" # OpenPype +version = "3.11.2-nightly.1" # OpenPype description = "Open VFX and Animation pipeline with support." 
authors = ["OpenPype Team "] license = "MIT License" diff --git a/website/docs/admin_settings_project_anatomy.md b/website/docs/admin_settings_project_anatomy.md index 6e0b49f152..106faeb806 100644 --- a/website/docs/admin_settings_project_anatomy.md +++ b/website/docs/admin_settings_project_anatomy.md @@ -68,6 +68,7 @@ We have a few required anatomy templates for OpenPype to work properly, however | `representation` | Representation name | | `frame` | Frame number for sequence files. | | `app` | Application Name | +| `user` | User's login name (can be overridden in local settings) | | `output` | | | `comment` | | diff --git a/website/docs/artist_hosts_maya_multiverse.md b/website/docs/artist_hosts_maya_multiverse.md index e6520bafa0..a173e79125 100644 --- a/website/docs/artist_hosts_maya_multiverse.md +++ b/website/docs/artist_hosts_maya_multiverse.md @@ -65,6 +65,25 @@ the one depicted here: ![Maya - Multiverse Setup](assets/maya-multiverse_setup.png) + +``` +{ + "MULTIVERSE_PATH": "/Path/to/Multiverse-{MULTIVERSE_VERSION}", + "MAYA_MODULE_PATH": "{MULTIVERSE}/Maya;{MAYA_MODULE_PATH}" +} + +{ + "MULTIVERSE_VERSION": "7.1.0-py27" +} + +``` + +The Multiverse Maya module file (.mod) pointed above contains all the necessary +environment variables to run Multiverse. + +The OpenPype settings will contain blocks to enable/disable the Multiverse +Creators and Loader, along with sensible studio setting. + For more information about setup of Multiverse please refer to the relative page on the [Multiverse official documentation](https://multi-verse.io/docs). @@ -94,7 +113,7 @@ You can choose the USD file format in the Creators' set nodes: - Assets: `.usd` (default) or `.usda` or `.usdz` - Compositions: `.usda` (default) or `.usd` - Overrides: `.usda` (default) or `.usd` -- Looks: `.ma` +- Looks: `.ma` (default) ![Maya - Multiverse Asset Creator](assets/maya-multiverse_openpype_asset_creator.png) diff --git a/website/docs/assets/maya-multiverse_setup.png b/website/docs/assets/maya-multiverse_setup.png index 8aa89ef7e5..72bdb0d379 100644 Binary files a/website/docs/assets/maya-multiverse_setup.png and b/website/docs/assets/maya-multiverse_setup.png differ diff --git a/website/src/pages/index.js b/website/src/pages/index.js index 115102ed04..0886706015 100644 --- a/website/src/pages/index.js +++ b/website/src/pages/index.js @@ -361,7 +361,7 @@ function Home() { - DaVinci Resolve (Beta) + Resolve (Beta) @@ -374,6 +374,16 @@ function Home() { Ftrack + + + Shotgrid (Beta) + + + + + Kitsu (Beta) + + Clockify @@ -384,12 +394,7 @@ function Home() { Deadline - - - Muster - - - + Royal Render @@ -399,30 +404,30 @@ function Home() { Slack - - -

In development by us or OpenPype community.

- - +

Planned or in development by us and OpenPype community.

+ + diff --git a/website/static/img/app_hibob.png b/website/static/img/app_hibob.png new file mode 100644 index 0000000000..91dd8d3f6b Binary files /dev/null and b/website/static/img/app_hibob.png differ
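
The ftrack handler refactors above all repeat one pattern: direct `dbcon.find`/`find_one` Mongo queries are replaced with the project-scoped query helpers from `openpype.client`. Below is a minimal sketch of that pattern, reusing only function names and keyword arguments that appear in the hunks above; the project name and field list are illustrative placeholders, not values from this changeset.

```python
from openpype.client import (
    get_assets,
    get_subsets,
    get_versions,
    get_representations,
)

# Hypothetical project name, for illustration only.
project_name = "demo_project"

# Walk asset -> subset -> version -> representation by passing the
# project name explicitly, instead of relying on an installed
# AvalonMongoDB session as the removed code did.
asset_docs = list(get_assets(project_name, fields=["_id", "name"]))
asset_ids = [asset_doc["_id"] for asset_doc in asset_docs]

subset_docs = list(get_subsets(project_name, asset_ids=asset_ids))
subset_ids = [subset_doc["_id"] for subset_doc in subset_docs]

version_docs = list(get_versions(project_name, subset_ids=subset_ids))
version_ids = [version_doc["_id"] for version_doc in version_docs]

repre_docs = list(get_representations(project_name, version_ids=version_ids))
```

Similarly, the `timers_manager` and `context_tools` hunks move the `taskChanged` hook from a hard-wired callback in `install_host` to callbacks registered by modules during host installation. A sketch of that registration, assuming only the `register_event_callback` import and the event payload keys shown above; the handler body itself is illustrative:

```python
from openpype.lib.events import register_event_callback


def _on_task_change(event):
    # Payload keys match the ones TimersManager._on_host_task_change
    # reads in the hunk above.
    print(event["project_name"], event["asset_name"], event["task_name"])


register_event_callback("taskChanged", _on_task_change)
```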