Merge remote-tracking branch 'origin/develop' into OP-2449/Maya-Validate-Frame-Range

Commit 4b4057966c
134 changed files with 2563 additions and 1734 deletions
2  .gitignore (vendored)

@@ -102,3 +102,5 @@ website/.docusaurus
.poetry/
.python-version
tools/run_eventserver.*
76  CHANGELOG.md
@@ -1,8 +1,53 @@
 # Changelog

-## [3.11.1-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)
+## [3.12.0-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)

-[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.11.0...HEAD)
+[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.11.1...HEAD)

+### 📖 Documentation
+
+- Linux: update OIIO package [\#3401](https://github.com/pypeclub/OpenPype/pull/3401)
+- General: Add ability to change user value for templates [\#3366](https://github.com/pypeclub/OpenPype/pull/3366)
+- Multiverse: expose some settings to GUI [\#3350](https://github.com/pypeclub/OpenPype/pull/3350)
+
+**🚀 Enhancements**
+
+- Hosts: More options for in-host callbacks [\#3357](https://github.com/pypeclub/OpenPype/pull/3357)
+- Maya: Allow more data to be published along camera 🎥 [\#3304](https://github.com/pypeclub/OpenPype/pull/3304)
+
+**🐛 Bug fixes**
+
+- Nuke: Fix keyword argument in query function [\#3414](https://github.com/pypeclub/OpenPype/pull/3414)
+- Nuke: Collect representation files based on Write [\#3407](https://github.com/pypeclub/OpenPype/pull/3407)
+- General: Filter representations before integration start [\#3398](https://github.com/pypeclub/OpenPype/pull/3398)
+- Maya: look collector typo [\#3392](https://github.com/pypeclub/OpenPype/pull/3392)
+- TVPaint: Make sure exit code is set to not None [\#3382](https://github.com/pypeclub/OpenPype/pull/3382)
+- Maya: vray device aspect ratio fix [\#3381](https://github.com/pypeclub/OpenPype/pull/3381)
+- Harmony: added unc path to zifile command in Harmony [\#3372](https://github.com/pypeclub/OpenPype/pull/3372)
+- Standalone: settings improvements [\#3355](https://github.com/pypeclub/OpenPype/pull/3355)
+- Nuke: Load full model hierarchy by default [\#3328](https://github.com/pypeclub/OpenPype/pull/3328)
+
+**🔀 Refactored code**
+
+- Kitsu: renaming to plural func sync\_all\_projects [\#3397](https://github.com/pypeclub/OpenPype/pull/3397)
+- Hiero: Use client query functions [\#3393](https://github.com/pypeclub/OpenPype/pull/3393)
+- Nuke: Use client query functions [\#3391](https://github.com/pypeclub/OpenPype/pull/3391)
+- Maya: Use client query functions [\#3385](https://github.com/pypeclub/OpenPype/pull/3385)
+- Harmony: Use client query functions [\#3378](https://github.com/pypeclub/OpenPype/pull/3378)
+- Celaction: Use client query functions [\#3376](https://github.com/pypeclub/OpenPype/pull/3376)
+- Photoshop: Use client query functions [\#3375](https://github.com/pypeclub/OpenPype/pull/3375)
+- AfterEffects: Use client query functions [\#3374](https://github.com/pypeclub/OpenPype/pull/3374)
+- TVPaint: Use client query functions [\#3340](https://github.com/pypeclub/OpenPype/pull/3340)
+- Ftrack: Use client query functions [\#3339](https://github.com/pypeclub/OpenPype/pull/3339)
+- Standalone Publisher: Use client query functions [\#3330](https://github.com/pypeclub/OpenPype/pull/3330)
+
+**Merged pull requests:**
+
+- Maya - added support for single frame playblast review [\#3369](https://github.com/pypeclub/OpenPype/pull/3369)
+
+## [3.11.1](https://github.com/pypeclub/OpenPype/tree/3.11.1) (2022-06-20)
+
+[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.11.1-nightly.1...3.11.1)
+
 **🆕 New features**
@@ -15,7 +60,6 @@
- Ftrack: Removed requirement of pypeclub role from default settings [\#3354](https://github.com/pypeclub/OpenPype/pull/3354)
- Kitsu: Prevent crash on missing frames information [\#3352](https://github.com/pypeclub/OpenPype/pull/3352)
- Ftrack: Open browser from tray [\#3320](https://github.com/pypeclub/OpenPype/pull/3320)
- Enhancement: More control over thumbnail processing. [\#3259](https://github.com/pypeclub/OpenPype/pull/3259)

**🐛 Bug fixes**

@@ -57,8 +101,6 @@
- General: Updated windows oiio tool [\#3268](https://github.com/pypeclub/OpenPype/pull/3268)
- Unreal: add support for skeletalMesh and staticMesh to loaders [\#3267](https://github.com/pypeclub/OpenPype/pull/3267)
- Maya: reference loaders could store placeholder in referenced url [\#3264](https://github.com/pypeclub/OpenPype/pull/3264)
- TVPaint: Init file for TVPaint worker also handle guideline images [\#3250](https://github.com/pypeclub/OpenPype/pull/3250)
- Nuke: Change default icon path in settings [\#3247](https://github.com/pypeclub/OpenPype/pull/3247)

**🐛 Bug fixes**

@@ -76,11 +118,6 @@
- Hiero: add support for task tags 3.10.x [\#3279](https://github.com/pypeclub/OpenPype/pull/3279)
- General: Fix Oiio tool path resolving [\#3278](https://github.com/pypeclub/OpenPype/pull/3278)
- Maya: Fix udim support for e.g. uppercase \<UDIM\> tag [\#3266](https://github.com/pypeclub/OpenPype/pull/3266)
- Nuke: bake reformat was failing on string type [\#3261](https://github.com/pypeclub/OpenPype/pull/3261)
- Maya: hotfix Pxr multitexture in looks [\#3260](https://github.com/pypeclub/OpenPype/pull/3260)
- Unreal: Fix Camera Loading if Layout is missing [\#3255](https://github.com/pypeclub/OpenPype/pull/3255)
- Unreal: Fixed Animation loading in UE5 [\#3240](https://github.com/pypeclub/OpenPype/pull/3240)
- Unreal: Fixed Render creation in UE5 [\#3239](https://github.com/pypeclub/OpenPype/pull/3239)

**🔀 Refactored code**

@@ -96,25 +133,6 @@
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.10.0-nightly.6...3.10.0)

**🚀 Enhancements**

- Maya: FBX camera export [\#3253](https://github.com/pypeclub/OpenPype/pull/3253)
- General: updating common vendor `scriptmenu` to 1.5.2 [\#3246](https://github.com/pypeclub/OpenPype/pull/3246)

**🐛 Bug fixes**

- nuke: use framerange issue [\#3254](https://github.com/pypeclub/OpenPype/pull/3254)
- Ftrack: Chunk sizes for queries has minimal condition [\#3244](https://github.com/pypeclub/OpenPype/pull/3244)
- Maya: renderman displays needs to be filtered [\#3242](https://github.com/pypeclub/OpenPype/pull/3242)
- Ftrack: Validate that the user exists on ftrack [\#3237](https://github.com/pypeclub/OpenPype/pull/3237)
- Maya: Fix support for multiple resolutions [\#3236](https://github.com/pypeclub/OpenPype/pull/3236)
- TVPaint: Look for more groups than 12 [\#3228](https://github.com/pypeclub/OpenPype/pull/3228)

**Merged pull requests:**

- Harmony: message length in 21.1 [\#3257](https://github.com/pypeclub/OpenPype/pull/3257)
- Harmony: 21.1 fix [\#3249](https://github.com/pypeclub/OpenPype/pull/3249)

## [3.9.8](https://github.com/pypeclub/OpenPype/tree/3.9.8) (2022-05-19)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.7...3.9.8)
@@ -5,6 +5,7 @@ from .entities import (
     get_asset_by_id,
     get_asset_by_name,
     get_assets,
+    get_archived_assets,
     get_asset_ids_with_subsets,

     get_subset_by_id,

@@ -41,6 +42,7 @@ __all__ = (
     "get_asset_by_id",
     "get_asset_by_name",
     "get_assets",
+    "get_archived_assets",
     "get_asset_ids_with_subsets",

     "get_subset_by_id",
@@ -139,8 +139,16 @@ def get_asset_by_name(project_name, asset_name, fields=None):
     return conn.find_one(query_filter, _prepare_fields(fields))


-def get_assets(
-    project_name, asset_ids=None, asset_names=None, archived=False, fields=None
+# NOTE this could be just public function?
+# - any better variable name instead of 'standard'?
+# - same approach can be used for rest of types
+def _get_assets(
+    project_name,
+    asset_ids=None,
+    asset_names=None,
+    standard=True,
+    archived=False,
+    fields=None
 ):
     """Assets for specified project by passed filters.

@@ -153,6 +161,8 @@ def _get_assets(
         project_name (str): Name of project where to look for queried entities.
         asset_ids (list[str|ObjectId]): Asset ids that should be found.
         asset_names (list[str]): Name assets that should be found.
+        standard (bool): Query standard assets (type 'asset').
+        archived (bool): Query archived assets (type 'archived_asset').
         fields (list[str]): Fields that should be returned. All fields are
             returned if 'None' is passed.
@@ -161,10 +171,15 @@ def _get_assets(
         passed filters.
     """

-    asset_types = ["asset"]
+    asset_types = []
+    if standard:
+        asset_types.append("asset")
     if archived:
         asset_types.append("archived_asset")

+    if not asset_types:
+        return []
+
     if len(asset_types) == 1:
         query_filter = {"type": asset_types[0]}
     else:
@@ -186,6 +201,68 @@ def _get_assets(
     return conn.find(query_filter, _prepare_fields(fields))


+def get_assets(
+    project_name,
+    asset_ids=None,
+    asset_names=None,
+    archived=False,
+    fields=None
+):
+    """Assets for specified project by passed filters.
+
+    Passed filters (ids and names) are always combined so all conditions must
+    match.
+
+    To receive all assets from project just keep filters empty.
+
+    Args:
+        project_name (str): Name of project where to look for queried entities.
+        asset_ids (list[str|ObjectId]): Asset ids that should be found.
+        asset_names (list[str]): Name assets that should be found.
+        archived (bool): Add also archived assets.
+        fields (list[str]): Fields that should be returned. All fields are
+            returned if 'None' is passed.
+
+    Returns:
+        Cursor: Query cursor as iterable which returns asset documents matching
+            passed filters.
+    """
+
+    return _get_assets(
+        project_name, asset_ids, asset_names, True, archived, fields
+    )
+
+
+def get_archived_assets(
+    project_name,
+    asset_ids=None,
+    asset_names=None,
+    fields=None
+):
+    """Archived assets for specified project by passed filters.
+
+    Passed filters (ids and names) are always combined so all conditions must
+    match.
+
+    To receive all archived assets from project just keep filters empty.
+
+    Args:
+        project_name (str): Name of project where to look for queried entities.
+        asset_ids (list[str|ObjectId]): Asset ids that should be found.
+        asset_names (list[str]): Name assets that should be found.
+        fields (list[str]): Fields that should be returned. All fields are
+            returned if 'None' is passed.
+
+    Returns:
+        Cursor: Query cursor as iterable which returns asset documents matching
+            passed filters.
+    """
+
+    return _get_assets(
+        project_name, asset_ids, asset_names, False, True, fields
+    )
+
+
 def get_asset_ids_with_subsets(project_name, asset_ids=None):
     """Find out which assets have existing subsets.
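For orientation, a minimal usage sketch of the two public wrappers introduced above; the project name is hypothetical, in pipeline code it typically comes from `legacy_io.active_project()`:

```python
from openpype.client import get_assets, get_archived_assets

project_name = "demo"  # hypothetical project name

# Active assets only, reduced to the fields we need
for asset_doc in get_assets(project_name, fields=["name"]):
    print(asset_doc["name"])

# Active assets plus archived ones ('archived_asset' documents)
all_assets = list(get_assets(project_name, archived=True))

# Archived assets only
archived_assets = list(get_archived_assets(project_name))
```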
@@ -432,6 +509,7 @@ def _get_versions(
     project_name,
     subset_ids=None,
     version_ids=None,
+    versions=None,
     standard=True,
     hero=False,
     fields=None

@@ -462,6 +540,16 @@ def _get_versions(
             return []
         query_filter["_id"] = {"$in": version_ids}

+    if versions is not None:
+        versions = list(versions)
+        if not versions:
+            return []
+
+        if len(versions) == 1:
+            query_filter["name"] = versions[0]
+        else:
+            query_filter["name"] = {"$in": versions}
+
     conn = _get_project_connection(project_name)

     return conn.find(query_filter, _prepare_fields(fields))

@@ -471,6 +559,7 @@ def get_versions(
     project_name,
     version_ids=None,
     subset_ids=None,
+    versions=None,
     hero=False,
     fields=None
 ):

@@ -484,6 +573,8 @@ def get_versions(
             Filter ignored if 'None' is passed.
         subset_ids (list[str]): Subset ids that will be queried.
             Filter ignored if 'None' is passed.
+        versions (list[int]): Version names (as integers).
+            Filter ignored if 'None' is passed.
         hero (bool): Look also for hero versions.
         fields (list[str]): Fields that should be returned. All fields are
             returned if 'None' is passed.

@@ -496,6 +587,7 @@ def get_versions(
     return _get_versions(
         project_name,
         subset_ids,
         version_ids,
+        versions,
         standard=True,
         hero=hero,
         fields=fields
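A short sketch of the new `versions` filter on `get_versions` (project name and id value are hypothetical); as the hunk above shows, passing `None` skips the filter entirely while an empty list short-circuits to no results:

```python
from openpype.client import get_versions

project_name = "demo"         # hypothetical
subset_id = "6somesubsetid"   # hypothetical subset id value

# Query only version names 3 and 5 of one subset
version_docs = get_versions(
    project_name,
    subset_ids=[subset_id],
    versions=[3, 5],
    fields=["_id", "name"]
)
```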
@@ -697,14 +789,19 @@ def get_last_version_by_subset_id(project_name, subset_id, fields=None):


 def get_last_version_by_subset_name(
-    project_name, subset_name, asset_id, fields=None
+    project_name, subset_name, asset_id=None, asset_name=None, fields=None
 ):
-    """Last version for passed subset name under asset id.
+    """Last version for passed subset name under asset id/name.
+
+    It is required to pass 'asset_id' or 'asset_name'. Asset id is recommended
+    if available.

     Args:
         project_name (str): Name of project where to look for queried entities.
         subset_name (str): Name of subset.
-        asset_id (str|ObjectId): Asset id which is parnt of passed subset name.
+        asset_id (str|ObjectId): Asset id which is parent of passed
+            subset name.
+        asset_name (str): Asset name which is parent of passed subset name.
         fields (list[str]): Fields that should be returned. All fields are
             returned if 'None' is passed.

@@ -713,6 +810,14 @@ def get_last_version_by_subset_name(
         Dict: Version document which can be reduced to specified 'fields'.
     """

+    if not asset_id and not asset_name:
+        return None
+
+    if not asset_id:
+        asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"])
+        if not asset_doc:
+            return None
+        asset_id = asset_doc["_id"]
     subset_doc = get_subset_by_name(
         project_name, subset_name, asset_id, fields=["_id"]
     )
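A usage sketch of the extended signature (names are hypothetical): one of `asset_id` or `asset_name` must now be passed, otherwise the function returns `None`:

```python
from openpype.client import get_last_version_by_subset_name

project_name = "demo"        # hypothetical
asset_id = "6someassetid"    # hypothetical id value

# Preferred: by asset id (avoids an extra query)
last_version = get_last_version_by_subset_name(
    project_name, "modelMain", asset_id=asset_id
)

# Fallback: by asset name, which triggers an extra asset lookup
last_version = get_last_version_by_subset_name(
    project_name, "modelMain", asset_name="characterA"
)
```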
@@ -65,14 +65,14 @@ def on_pyblish_instance_toggled(instance, old_value, new_value):
     instance[0].Visible = new_value


-def get_asset_settings():
+def get_asset_settings(asset_doc):
     """Get settings on current asset from database.

     Returns:
         dict: Scene data.

     """
-    asset_data = lib.get_asset()["data"]
+    asset_data = asset_doc["data"]
     fps = asset_data.get("fps")
     frame_start = asset_data.get("frameStart")
     frame_end = asset_data.get("frameEnd")
@@ -1,4 +1,5 @@
 import openpype.hosts.aftereffects.api as api
+from openpype.client import get_asset_by_name
 from openpype.pipeline import (
     AutoCreator,
     CreatedInstance,

@@ -41,10 +42,7 @@ class AEWorkfileCreator(AutoCreator):
         host_name = legacy_io.Session["AVALON_APP"]

         if existing_instance is None:
-            asset_doc = legacy_io.find_one({
-                "type": "asset",
-                "name": asset_name
-            })
+            asset_doc = get_asset_by_name(project_name, asset_name)
             subset_name = self.get_subset_name(
                 variant, task_name, asset_doc, project_name, host_name
             )

@@ -69,10 +67,7 @@ class AEWorkfileCreator(AutoCreator):
             existing_instance["asset"] != asset_name
             or existing_instance["task"] != task_name
         ):
-            asset_doc = legacy_io.find_one({
-                "type": "asset",
-                "name": asset_name
-            })
+            asset_doc = get_asset_by_name(project_name, asset_name)
             subset_name = self.get_subset_name(
                 variant, task_name, asset_doc, project_name, host_name
             )
@@ -1,5 +1,9 @@
 # -*- coding: utf-8 -*-
-"""Validate scene settings."""
+"""Validate scene settings.
+
+Requires:
+    instance -> assetEntity
+    instance -> anatomyData
+"""
 import os
 import re

@@ -67,7 +71,8 @@ class ValidateSceneSettings(OptionalPyblishPluginMixin,
         if not self.is_active(instance.data):
             return

-        expected_settings = get_asset_settings()
+        asset_doc = instance.data["assetEntity"]
+        expected_settings = get_asset_settings(asset_doc)
         self.log.info("config from DB::{}".format(expected_settings))

         task_name = instance.data["anatomyData"]["task"]["name"]
@@ -4,6 +4,11 @@ from pprint import pformat

 import pyblish.api

+from openpype.client import (
+    get_subsets,
+    get_last_versions,
+    get_representations
+)
 from openpype.pipeline import legacy_io

@@ -60,10 +65,10 @@ class AppendCelactionAudio(pyblish.api.ContextPlugin):
         """

         # Query all subsets for asset
-        subset_docs = legacy_io.find({
-            "type": "subset",
-            "parent": asset_doc["_id"]
-        })
+        project_name = legacy_io.active_project()
+        subset_docs = get_subsets(
+            project_name, asset_ids=[asset_doc["_id"]], fields=["_id"]
+        )
         # Collect all subset ids
         subset_ids = [
             subset_doc["_id"]

@@ -76,37 +81,19 @@ class AppendCelactionAudio(pyblish.api.ContextPlugin):
             "Try this for start `r'.*'`: asset: `{}`"
         ).format(asset_doc["name"])

-        # Last version aggregation
-        pipeline = [
-            # Find all versions of those subsets
-            {"$match": {
-                "type": "version",
-                "parent": {"$in": subset_ids}
-            }},
-            # Sorting versions all together
-            {"$sort": {"name": 1}},
-            # Group them by "parent", but only take the last
-            {"$group": {
-                "_id": "$parent",
-                "_version_id": {"$last": "$_id"},
-                "name": {"$last": "$name"}
-            }}
-        ]
-        last_versions_by_subset_id = dict()
-        for doc in legacy_io.aggregate(pipeline):
-            doc["parent"] = doc["_id"]
-            doc["_id"] = doc.pop("_version_id")
-            last_versions_by_subset_id[doc["parent"]] = doc
+        last_versions_by_subset_id = get_last_versions(
+            project_name, subset_ids, fields=["_id", "parent"]
+        )

         version_docs_by_id = {}
         for version_doc in last_versions_by_subset_id.values():
             version_docs_by_id[version_doc["_id"]] = version_doc

-        repre_docs = legacy_io.find({
-            "type": "representation",
-            "parent": {"$in": list(version_docs_by_id.keys())},
-            "name": {"$in": representations}
-        })
+        repre_docs = get_representations(
+            project_name,
+            version_ids=version_docs_by_id.keys(),
+            representation_names=representations
+        )
         repre_docs_by_version_id = collections.defaultdict(list)
         for repre_doc in repre_docs:
             version_id = repre_doc["parent"]
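Judging by its use above, `get_last_versions` replaces the whole Mongo aggregation with a single call returning a mapping of subset id to last version document; a consumption sketch with hypothetical inputs:

```python
from openpype.client import get_last_versions

project_name = "demo"  # hypothetical
subset_ids = []        # fill with ids collected from queried subset documents

last_versions_by_subset_id = get_last_versions(
    project_name, subset_ids, fields=["_id", "parent"]
)
# Each value is a version document; 'parent' points back to its subset
for subset_id, version_doc in last_versions_by_subset_id.items():
    print(subset_id, version_doc["_id"])
```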
@@ -1,4 +1,5 @@
 import re
+from types import NoneType
 import pyblish
 import openpype.hosts.flame.api as opfapi
 from openpype.hosts.flame.otio import flame_export

@@ -75,6 +76,12 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
                 marker_data["handleEnd"]
             )

+            # make sure there is not NoneType rather 0
+            if isinstance(head, NoneType):
+                head = 0
+            if isinstance(tail, NoneType):
+                tail = 0
+
             # make sure value is absolute
             if head != 0:
                 head = abs(head)
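One caveat worth noting: `types.NoneType` exists on Python 2 and was reintroduced only in Python 3.10, so this import fails on the Python 3.7 to 3.9 interpreters many DCCs ship. A version-independent sketch of the same guard is a plain `is None` check:

```python
# Equivalent guard without importing NoneType
if head is None:
    head = 0
if tail is None:
    tail = 0
```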
@@ -125,7 +132,8 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
                 "flameAddTasks": self.add_tasks,
                 "tasks": {
                     task["name"]: {"type": task["type"]}
-                    for task in self.add_tasks}
+                    for task in self.add_tasks},
+                "representations": []
             })
             self.log.debug("__ inst_data: {}".format(pformat(inst_data)))
@@ -23,6 +23,8 @@ class ExtractSubsetResources(openpype.api.Extractor):
     hosts = ["flame"]

+    # plugin defaults
+    keep_original_representation = False

     default_presets = {
         "thumbnail": {
             "active": True,

@@ -45,7 +47,9 @@ class ExtractSubsetResources(openpype.api.Extractor):
     export_presets_mapping = {}

     def process(self, instance):
-        if "representations" not in instance.data:
+
+        if not self.keep_original_representation:
+            # remove previous representation if not needed
             instance.data["representations"] = []

         # flame objects

@@ -82,7 +86,11 @@ class ExtractSubsetResources(openpype.api.Extractor):
         # add default preset type for thumbnail and reviewable video
         # update them with settings and override in case the same
         # are found in there
-        export_presets = deepcopy(self.default_presets)
+        _preset_keys = [k.split('_')[0] for k in self.export_presets_mapping]
+        export_presets = {
+            k: v for k, v in deepcopy(self.default_presets).items()
+            if k not in _preset_keys
+        }
         export_presets.update(self.export_presets_mapping)

         # loop all preset names and

@@ -218,9 +226,14 @@ class ExtractSubsetResources(openpype.api.Extractor):
             opfapi.export_clip(
                 export_dir_path, exporting_clip, preset_path, **export_kwargs)

-            # make sure only first segment is used if underscore in name
-            # HACK: `ftrackreview_withLUT` will result only in `ftrackreview`
-            repr_name = unique_name.split("_")[0]
+            repr_name = unique_name
+            # make sure only first segment is used if underscore in name
+            # HACK: `ftrackreview_withLUT` will result only in `ftrackreview`
+            if (
+                "thumbnail" in unique_name
+                or "ftrackreview" in unique_name
+            ):
+                repr_name = unique_name.split("_")[0]

             # create representation data
             representation_data = {

@@ -259,7 +272,7 @@ class ExtractSubsetResources(openpype.api.Extractor):
                 if os.path.splitext(f)[-1] == ".mov"
             ]
             # then try if thumbnail is not in unique name
-            or unique_name == "thumbnail"
+            or repr_name == "thumbnail"
         ):
             representation_data["files"] = files.pop()
         else:
@@ -610,7 +610,8 @@ class ImageSequenceLoader(load.LoaderPlugin):
     def update(self, container, representation):
         node = container.pop("node")

-        version = legacy_io.find_one({"_id": representation["parent"]})
+        project_name = legacy_io.active_project()
+        version = get_version_by_id(project_name, representation["parent"])
         files = []
         for f in version["data"]["files"]:
             files.append(
@@ -2,10 +2,10 @@ import os
 from pathlib import Path
 import logging

-from bson.objectid import ObjectId
 import pyblish.api

 from openpype import lib
+from openpype.client import get_representation_by_id
 from openpype.lib import register_event_callback
 from openpype.pipeline import (
     legacy_io,

@@ -104,22 +104,20 @@ def check_inventory():
     If it does it will colorize outdated nodes and display warning message
     in Harmony.
     """
     if not lib.any_outdated():
         return

+    project_name = legacy_io.active_project()
     outdated_containers = []
     for container in ls():
-        representation = container['representation']
-        representation_doc = legacy_io.find_one(
-            {
-                "_id": ObjectId(representation),
-                "type": "representation"
-            },
-            projection={"parent": True}
+        representation_id = container['representation']
+        representation_doc = get_representation_by_id(
+            project_name, representation_id, fields=["parent"]
         )
         if representation_doc and not lib.is_latest(representation_doc):
             outdated_containers.append(container)

     if not outdated_containers:
         return

     # Colour nodes.
     outdated_nodes = []
     for container in outdated_containers:
@@ -12,8 +12,13 @@ import shutil
 import hiero

 from Qt import QtWidgets
-from bson.objectid import ObjectId

+from openpype.client import (
+    get_project,
+    get_versions,
+    get_last_versions,
+    get_representations,
+)
 from openpype.pipeline import legacy_io
 from openpype.api import (Logger, Anatomy, get_anatomy_settings)
 from . import tags

@@ -477,7 +482,7 @@ def sync_avalon_data_to_workfile():
     project.setProjectRoot(active_project_root)

     # get project data from avalon db
-    project_doc = legacy_io.find_one({"type": "project"})
+    project_doc = get_project(project_name)
     project_data = project_doc["data"]

     log.debug("project_data: {}".format(project_data))
@@ -1065,35 +1070,63 @@ def check_inventory_versions(track_items=None):
     clip_color_last = "green"
     clip_color = "red"

-    # get all track items from current timeline
+    item_with_repre_id = []
+    repre_ids = set()
+    # Find all containers and collect their node and representation ids
     for track_item in track_items:
         container = parse_container(track_item)
         if container:
-            # get representation from io
-            representation = legacy_io.find_one({
-                "type": "representation",
-                "_id": ObjectId(container["representation"])
-            })
-
-            # Get start frame from version data
-            version = legacy_io.find_one({
-                "type": "version",
-                "_id": representation["parent"]
-            })
-
-            # get all versions in list
-            versions = legacy_io.find({
-                "type": "version",
-                "parent": version["parent"]
-            }).distinct('name')
-
-            max_version = max(versions)
-
-            # set clip colour
-            if version.get("name") == max_version:
-                track_item.source().binItem().setColor(clip_color_last)
-            else:
-                track_item.source().binItem().setColor(clip_color)
+            repre_id = container["representation"]
+            repre_ids.add(repre_id)
+            item_with_repre_id.append((track_item, repre_id))
+
+    # Skip if nothing was found
+    if not repre_ids:
+        return
+
+    project_name = legacy_io.active_project()
+    # Find representations based on found containers
+    repre_docs = get_representations(
+        project_name,
+        repre_ids=repre_ids,
+        fields=["_id", "parent"]
+    )
+    # Store representations by id and collect version ids
+    repre_docs_by_id = {}
+    version_ids = set()
+    for repre_doc in repre_docs:
+        # Use stringed representation id to match value in containers
+        repre_id = str(repre_doc["_id"])
+        repre_docs_by_id[repre_id] = repre_doc
+        version_ids.add(repre_doc["parent"])
+
+    version_docs = get_versions(
+        project_name, version_ids, fields=["_id", "name", "parent"]
+    )
+    # Store versions by id and collect subset ids
+    version_docs_by_id = {}
+    subset_ids = set()
+    for version_doc in version_docs:
+        version_docs_by_id[version_doc["_id"]] = version_doc
+        subset_ids.add(version_doc["parent"])
+
+    # Query last versions based on subset ids
+    last_versions_by_subset_id = get_last_versions(
+        project_name, subset_ids=subset_ids, fields=["_id", "parent"]
+    )
+
+    for item in item_with_repre_id:
+        # Some python versions of nuke can't unfold tuple in for loop
+        track_item, repre_id = item
+
+        repre_doc = repre_docs_by_id[repre_id]
+        version_doc = version_docs_by_id[repre_doc["parent"]]
+        last_version_doc = last_versions_by_subset_id[version_doc["parent"]]
+        # Check if last version is same as current version
+        if version_doc["_id"] == last_version_doc["_id"]:
+            track_item.source().binItem().setColor(clip_color_last)
+        else:
+            track_item.source().binItem().setColor(clip_color)


 def selection_changed_timeline(event):
@@ -2,6 +2,7 @@ import re
 import os
 import hiero

+from openpype.client import get_project, get_assets
 from openpype.api import Logger
 from openpype.pipeline import legacy_io

@@ -141,7 +142,9 @@ def add_tags_to_workfile():
     nks_pres_tags = tag_data()

     # Get project task types.
-    tasks = legacy_io.find_one({"type": "project"})["config"]["tasks"]
+    project_name = legacy_io.active_project()
+    project_doc = get_project(project_name)
+    tasks = project_doc["config"]["tasks"]
     nks_pres_tags["[Tasks]"] = {}
     log.debug("__ tasks: {}".format(tasks))
     for task_type in tasks.keys():

@@ -159,7 +162,9 @@ def add_tags_to_workfile():
     # asset builds and shots.
     if int(os.getenv("TAG_ASSETBUILD_STARTUP", 0)) == 1:
         nks_pres_tags["[AssetBuilds]"] = {}
-        for asset in legacy_io.find({"type": "asset"}):
+        for asset in get_assets(
+            project_name, fields=["name", "data.entityType"]
+        ):
             if asset["data"]["entityType"] == "AssetBuild":
                 nks_pres_tags["[AssetBuilds]"][asset["name"]] = {
                     "editable": "1",
@@ -1,3 +1,7 @@
+from openpype.client import (
+    get_version_by_id,
+    get_last_version_by_subset_id
+)
 from openpype.pipeline import (
     legacy_io,
     get_representation_path,

@@ -103,12 +107,12 @@ class LoadClip(phiero.SequenceLoader):
         namespace = container['namespace']
         track_item = phiero.get_track_items(
             track_item_name=namespace).pop()
-        version = legacy_io.find_one({
-            "type": "version",
-            "_id": representation["parent"]
-        })
-        version_data = version.get("data", {})
-        version_name = version.get("name", None)
+
+        project_name = legacy_io.active_project()
+        version_doc = get_version_by_id(project_name, representation["parent"])
+
+        version_data = version_doc.get("data", {})
+        version_name = version_doc.get("name", None)
         colorspace = version_data.get("colorspace", None)
         object_name = "{}_{}".format(name, namespace)
         file = get_representation_path(representation).replace("\\", "/")

@@ -143,7 +147,7 @@ class LoadClip(phiero.SequenceLoader):
         })

         # update color of clip regarding the version order
-        self.set_item_color(track_item, version)
+        self.set_item_color(track_item, version_doc)

         return phiero.update_container(track_item, data_imprint)

@@ -166,21 +170,14 @@ class LoadClip(phiero.SequenceLoader):
         cls.sequence = cls.track.parent()

     @classmethod
-    def set_item_color(cls, track_item, version):
-
+    def set_item_color(cls, track_item, version_doc):
+        project_name = legacy_io.active_project()
+        last_version_doc = get_last_version_by_subset_id(
+            project_name, version_doc["parent"], fields=["_id"]
+        )
         clip = track_item.source()
-        # define version name
-        version_name = version.get("name", None)
-        # get all versions in list
-        versions = legacy_io.find({
-            "type": "version",
-            "parent": version["parent"]
-        }).distinct('name')
-
-        max_version = max(versions)
-
         # set clip colour
-        if version_name == max_version:
+        if version_doc["_id"] == last_version_doc["_id"]:
             clip.binItem().setColor(cls.clip_color_last)
         else:
             clip.binItem().setColor(cls.clip_color)
@@ -1,4 +1,5 @@
 from pyblish import api
+from openpype.client import get_assets
 from openpype.pipeline import legacy_io

@@ -17,8 +18,9 @@ class CollectAssetBuilds(api.ContextPlugin):
     hosts = ["hiero"]

     def process(self, context):
+        project_name = legacy_io.active_project()
         asset_builds = {}
-        for asset in legacy_io.find({"type": "asset"}):
+        for asset in get_assets(project_name):
             if asset["data"]["entityType"] == "AssetBuild":
                 self.log.debug("Found \"{}\" in database.".format(asset))
                 asset_builds[asset["name"]] = asset
@@ -4,6 +4,7 @@ from contextlib import contextmanager

 import six

+from openpype.client import get_asset_by_name
 from openpype.api import get_asset
 from openpype.pipeline import legacy_io

@@ -74,16 +75,13 @@ def generate_ids(nodes, asset_id=None):
     """

     if asset_id is None:
+        project_name = legacy_io.active_project()
+        asset_name = legacy_io.Session["AVALON_ASSET"]
         # Get the asset ID from the database for the asset of current context
-        asset_data = legacy_io.find_one(
-            {
-                "type": "asset",
-                "name": legacy_io.Session["AVALON_ASSET"]
-            },
-            projection={"_id": True}
-        )
-        assert asset_data, "No current asset found in Session"
-        asset_id = asset_data['_id']
+        asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"])
+
+        assert asset_doc, "No current asset found in Session"
+        asset_id = asset_doc['_id']

     node_ids = []
     for node in nodes:

@@ -130,6 +128,8 @@ def get_output_parameter(node):
     elif node_type == "arnold":
         if node.evalParm("ar_ass_export_enable"):
             return node.parm("ar_ass_file")
+    elif node_type == "Redshift_Proxy_Output":
+        return node.parm("RS_archive_file")

     raise TypeError("Node type '%s' not supported" % node_type)

@@ -428,26 +428,29 @@ def maintained_selection():
 def reset_framerange():
     """Set frame range to current asset"""

+    project_name = legacy_io.active_project()
     asset_name = legacy_io.Session["AVALON_ASSET"]
-    asset = legacy_io.find_one({"name": asset_name, "type": "asset"})
+    # Get the asset ID from the database for the asset of current context
+    asset_doc = get_asset_by_name(project_name, asset_name)
+    asset_data = asset_doc["data"]

-    frame_start = asset["data"].get("frameStart")
-    frame_end = asset["data"].get("frameEnd")
+    frame_start = asset_data.get("frameStart")
+    frame_end = asset_data.get("frameEnd")
     # Backwards compatibility
     if frame_start is None or frame_end is None:
-        frame_start = asset["data"].get("edit_in")
-        frame_end = asset["data"].get("edit_out")
+        frame_start = asset_data.get("edit_in")
+        frame_end = asset_data.get("edit_out")

     if frame_start is None or frame_end is None:
         log.warning("No edit information found for %s" % asset_name)
         return

-    handles = asset["data"].get("handles") or 0
-    handle_start = asset["data"].get("handleStart")
+    handles = asset_data.get("handles") or 0
+    handle_start = asset_data.get("handleStart")
     if handle_start is None:
         handle_start = handles

-    handle_end = asset["data"].get("handleEnd")
+    handle_end = asset_data.get("handleEnd")
     if handle_end is None:
         handle_end = handles
@@ -6,6 +6,7 @@ import logging
 from Qt import QtWidgets, QtCore, QtGui

 from openpype import style
+from openpype.client import get_asset_by_name
 from openpype.pipeline import legacy_io
 from openpype.tools.utils.assets_widget import SingleSelectAssetsWidget

@@ -46,10 +47,8 @@ class SelectAssetDialog(QtWidgets.QWidget):
         select_id = None
         name = self._parm.eval()
         if name:
-            db_asset = legacy_io.find_one(
-                {"name": name, "type": "asset"},
-                {"_id": True}
-            )
+            project_name = legacy_io.active_project()
+            db_asset = get_asset_by_name(project_name, name, fields=["_id"])
             if db_asset:
                 select_id = db_asset["_id"]
@@ -1,6 +1,10 @@
 # -*- coding: utf-8 -*-
 import hou

+from openpype.client import (
+    get_asset_by_name,
+    get_subsets,
+)
 from openpype.pipeline import legacy_io
 from openpype.hosts.houdini.api import lib
 from openpype.hosts.houdini.api import plugin

@@ -23,20 +27,16 @@ class CreateHDA(plugin.Creator):
         # type: (str) -> bool
         """Check if existing subset name versions already exists."""
         # Get all subsets of the current asset
-        asset_id = legacy_io.find_one(
-            {"name": self.data["asset"], "type": "asset"},
-            projection={"_id": True}
-        )['_id']
-        subset_docs = legacy_io.find(
-            {
-                "type": "subset",
-                "parent": asset_id
-            },
-            {"name": 1}
+        project_name = legacy_io.active_project()
+        asset_doc = get_asset_by_name(
+            project_name, self.data["asset"], fields=["_id"]
+        )
+        subset_docs = get_subsets(
+            project_name, asset_ids=[asset_doc["_id"]], fields=["name"]
         )
-        existing_subset_names = set(subset_docs.distinct("name"))
         existing_subset_names_low = {
-            _name.lower() for _name in existing_subset_names
+            subset_doc["name"].lower()
+            for subset_doc in subset_docs
         }
         return subset_name.lower() in existing_subset_names_low
@@ -0,0 +1,48 @@
+from openpype.hosts.houdini.api import plugin
+
+
+class CreateRedshiftProxy(plugin.Creator):
+    """Redshift Proxy"""
+
+    label = "Redshift Proxy"
+    family = "redshiftproxy"
+    icon = "magic"
+
+    def __init__(self, *args, **kwargs):
+        super(CreateRedshiftProxy, self).__init__(*args, **kwargs)
+
+        # Remove the active, we are checking the bypass flag of the nodes
+        self.data.pop("active", None)
+
+        # Redshift provides a `Redshift_Proxy_Output` node type which shows
+        # a limited set of parameters by default and is set to extract a
+        # Redshift Proxy. However when "imprinting" extra parameters needed
+        # for OpenPype it starts showing all its parameters again. It's unclear
+        # why this happens.
+        # TODO: Somehow enforce so that it only shows the original limited
+        # attributes of the Redshift_Proxy_Output node type
+        self.data.update({"node_type": "Redshift_Proxy_Output"})
+
+    def _process(self, instance):
+        """Creator main entry point.
+
+        Args:
+            instance (hou.Node): Created Houdini instance.
+
+        """
+        parms = {
+            "RS_archive_file": '$HIP/pyblish/`chs("subset")`.$F4.rs',
+        }
+
+        if self.nodes:
+            node = self.nodes[0]
+            path = node.path()
+            parms["RS_archive_sopPath"] = path
+
+        instance.setParms(parms)
+
+        # Lock some Avalon attributes
+        to_lock = ["family", "id"]
+        for name in to_lock:
+            parm = instance.parm(name)
+            parm.lock(True)
@@ -44,7 +44,8 @@ class BgeoLoader(load.LoaderPlugin):

         # Explicitly create a file node
         file_node = container.createNode("file", node_name=node_name)
-        file_node.setParms({"file": self.format_path(self.fname, is_sequence)})
+        file_node.setParms(
+            {"file": self.format_path(self.fname, context["representation"])})

         # Set display on last node
         file_node.setDisplayFlag(True)

@@ -62,15 +63,15 @@ class BgeoLoader(load.LoaderPlugin):
         )

     @staticmethod
-    def format_path(path, is_sequence):
+    def format_path(path, representation):
         """Format file path correctly for single bgeo or bgeo sequence."""
         if not os.path.exists(path):
             raise RuntimeError("Path does not exist: %s" % path)

+        is_sequence = bool(representation["context"].get("frame"))
         # The path is either a single file or sequence in a folder.
         if not is_sequence:
             filename = path
             print("single")
         else:
             filename = re.sub(r"(.*)\.(\d+)\.(bgeo.*)", "\\1.$F4.\\3", path)

@@ -94,9 +95,9 @@ class BgeoLoader(load.LoaderPlugin):

         # Update the file path
         file_path = get_representation_path(representation)
-        file_path = self.format_path(file_path)
+        file_path = self.format_path(file_path, representation)

-        file_node.setParms({"fileName": file_path})
+        file_node.setParms({"file": file_path})

         # Update attribute
         node.setParms({"representation": str(representation["_id"])})
@@ -40,7 +40,8 @@ class VdbLoader(load.LoaderPlugin):

         # Explicitly create a file node
         file_node = container.createNode("file", node_name=node_name)
-        file_node.setParms({"file": self.format_path(self.fname)})
+        file_node.setParms(
+            {"file": self.format_path(self.fname, context["representation"])})

         # Set display on last node
         file_node.setDisplayFlag(True)

@@ -57,30 +58,20 @@ class VdbLoader(load.LoaderPlugin):
             suffix="",
         )

-    def format_path(self, path):
+    @staticmethod
+    def format_path(path, representation):
         """Format file path correctly for single vdb or vdb sequence."""
         if not os.path.exists(path):
             raise RuntimeError("Path does not exist: %s" % path)

+        is_sequence = bool(representation["context"].get("frame"))
         # The path is either a single file or sequence in a folder.
-        is_single_file = os.path.isfile(path)
-        if is_single_file:
+        if not is_sequence:
             filename = path
         else:
-            # The path points to the publish .vdb sequence folder so we
-            # find the first file in there that ends with .vdb
-            files = sorted(os.listdir(path))
-            first = next((x for x in files if x.endswith(".vdb")), None)
-            if first is None:
-                raise RuntimeError(
-                    "Couldn't find first .vdb file of "
-                    "sequence in: %s" % path
-                )
-
-            # Set <frame>.vdb to $F.vdb
-            first = re.sub(r"\.(\d+)\.vdb$", ".$F.vdb", first)
-
-            filename = os.path.join(path, first)
+            filename = re.sub(r"(.*)\.(\d+)\.vdb$", "\\1.$F4.vdb", path)
+            filename = os.path.join(path, filename)

         filename = os.path.normpath(filename)
         filename = filename.replace("\\", "/")

@@ -100,7 +91,7 @@ class VdbLoader(load.LoaderPlugin):

         # Update the file path
         file_path = get_representation_path(representation)
-        file_path = self.format_path(file_path)
+        file_path = self.format_path(file_path, representation)

         file_node.setParms({"file": file_path})
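Both loaders above rewrite a frame-numbered publish path into Houdini's `$F4` frame token. A minimal demonstration of that substitution (the path is hypothetical):

```python
import re

path = "/publish/cache/pointsMain.0042.bgeo.sc"  # hypothetical publish path
# Swap the zero-padded frame number for Houdini's $F4 token
sequence_path = re.sub(r"(.*)\.(\d+)\.(bgeo.*)", "\\1.$F4.\\3", path)
print(sequence_path)  # /publish/cache/pointsMain.$F4.bgeo.sc
```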
@@ -20,7 +20,7 @@ class CollectFrames(pyblish.api.InstancePlugin):

     order = pyblish.api.CollectorOrder
     label = "Collect Frames"
-    families = ["vdbcache", "imagesequence", "ass"]
+    families = ["vdbcache", "imagesequence", "ass", "redshiftproxy"]

     def process(self, instance):
@@ -12,6 +12,7 @@ class CollectOutputSOPPath(pyblish.api.InstancePlugin):
         "imagesequence",
         "usd",
         "usdrender",
+        "redshiftproxy"
     ]

     hosts = ["houdini"]

@@ -54,6 +55,8 @@ class CollectOutputSOPPath(pyblish.api.InstancePlugin):
             else:
                 out_node = node.parm("loppath").evalAsNode()

+        elif node_type == "Redshift_Proxy_Output":
+            out_node = node.parm("RS_archive_sopPath").evalAsNode()
         else:
             raise ValueError(
                 "ROP node type '%s' is" " not supported." % node_type
@@ -1,5 +1,6 @@
 import pyblish.api

+from openpype.client import get_subset_by_name, get_asset_by_name
 from openpype.pipeline import legacy_io
 import openpype.lib.usdlib as usdlib

@@ -50,10 +51,8 @@ class CollectUsdBootstrap(pyblish.api.InstancePlugin):

         self.log.debug("Add bootstrap for: %s" % bootstrap)

-        asset = legacy_io.find_one({
-            "name": instance.data["asset"],
-            "type": "asset"
-        })
+        project_name = legacy_io.active_project()
+        asset = get_asset_by_name(project_name, instance.data["asset"])
         assert asset, "Asset must exist: %s" % asset

         # Check which are not about to be created and don't exist yet

@@ -70,7 +69,7 @@ class CollectUsdBootstrap(pyblish.api.InstancePlugin):

         self.log.debug("Checking required bootstrap: %s" % required)
         for subset in required:
-            if self._subset_exists(instance, subset, asset):
+            if self._subset_exists(project_name, instance, subset, asset):
                 continue

             self.log.debug(

@@ -93,7 +92,7 @@ class CollectUsdBootstrap(pyblish.api.InstancePlugin):
         for key in ["asset"]:
             new.data[key] = instance.data[key]

-    def _subset_exists(self, instance, subset, asset):
+    def _subset_exists(self, project_name, instance, subset, asset):
         """Return whether subset exists in current context or in database."""
         # Allow it to be created during this publish session
         context = instance.context

@@ -106,9 +105,8 @@ class CollectUsdBootstrap(pyblish.api.InstancePlugin):

         # Or, if they already exist in the database we can
         # skip them too.
-        return bool(
-            legacy_io.find_one(
-                {"name": subset, "type": "subset", "parent": asset["_id"]},
-                {"_id": True}
-            )
-        )
+        if get_subset_by_name(
+            project_name, subset, asset["_id"], fields=["_id"]
+        ):
+            return True
+        return False
@@ -0,0 +1,48 @@
+import os
+
+import pyblish.api
+import openpype.api
+from openpype.hosts.houdini.api.lib import render_rop
+
+
+class ExtractRedshiftProxy(openpype.api.Extractor):
+
+    order = pyblish.api.ExtractorOrder + 0.1
+    label = "Extract Redshift Proxy"
+    families = ["redshiftproxy"]
+    hosts = ["houdini"]
+
+    def process(self, instance):
+
+        ropnode = instance[0]
+
+        # Get the filename from the filename parameter
+        # `.evalParm(parameter)` will make sure all tokens are resolved
+        output = ropnode.evalParm("RS_archive_file")
+        staging_dir = os.path.normpath(os.path.dirname(output))
+        instance.data["stagingDir"] = staging_dir
+        file_name = os.path.basename(output)
+
+        self.log.info("Writing Redshift Proxy '%s' to '%s'" % (file_name,
+                                                               staging_dir))
+
+        render_rop(ropnode)
+
+        output = instance.data["frames"]
+
+        if "representations" not in instance.data:
+            instance.data["representations"] = []
+
+        representation = {
+            "name": "rs",
+            "ext": "rs",
+            "files": output,
+            "stagingDir": staging_dir,
+        }
+
+        # A single frame may also be rendered without start/end frame.
+        if "frameStart" in instance.data and "frameEnd" in instance.data:
+            representation["frameStart"] = instance.data["frameStart"]
+            representation["frameEnd"] = instance.data["frameEnd"]
+
+        instance.data["representations"].append(representation)
@@ -7,6 +7,12 @@ from collections import deque
 import pyblish.api
 import openpype.api

+from openpype.client import (
+    get_asset_by_name,
+    get_subset_by_name,
+    get_last_version_by_subset_id,
+    get_representation_by_name,
+)
 from openpype.pipeline import (
     get_representation_path,
     legacy_io,

@@ -244,11 +250,14 @@ class ExtractUSDLayered(openpype.api.Extractor):

         # Set up the dependency for publish if they have new content
         # compared to previous publishes
+        project_name = legacy_io.active_project()
         for dependency in active_dependencies:
             dependency_fname = dependency.data["usdFilename"]

             filepath = os.path.join(staging_dir, dependency_fname)
-            similar = self._compare_with_latest_publish(dependency, filepath)
+            similar = self._compare_with_latest_publish(
+                project_name, dependency, filepath
+            )
             if similar:
                 # Deactivate this dependency
                 self.log.debug(

@@ -268,7 +277,7 @@ class ExtractUSDLayered(openpype.api.Extractor):
         instance.data["files"] = []
         instance.data["files"].append(fname)

-    def _compare_with_latest_publish(self, dependency, new_file):
+    def _compare_with_latest_publish(self, project_name, dependency, new_file):
         import filecmp

         _, ext = os.path.splitext(new_file)

@@ -276,35 +285,29 @@ class ExtractUSDLayered(openpype.api.Extractor):
         # Compare this dependency with the latest published version
         # to detect whether we should make this into a new publish
         # version. If not, skip it.
-        asset = legacy_io.find_one(
-            {"name": dependency.data["asset"], "type": "asset"}
+        asset = get_asset_by_name(
+            project_name, dependency.data["asset"], fields=["_id"]
         )
-        subset = legacy_io.find_one(
-            {
-                "name": dependency.data["subset"],
-                "type": "subset",
-                "parent": asset["_id"],
-            }
+        subset = get_subset_by_name(
+            project_name,
+            dependency.data["subset"],
+            asset["_id"],
+            fields=["_id"]
         )
         if not subset:
             # Subset doesn't exist yet. Definitely new file
             self.log.debug("No existing subset..")
             return False

-        version = legacy_io.find_one(
-            {"type": "version", "parent": subset["_id"], },
-            sort=[("name", -1)]
+        version = get_last_version_by_subset_id(
+            project_name, subset["_id"], fields=["_id"]
         )
         if not version:
             self.log.debug("No existing version..")
             return False

-        representation = legacy_io.find_one(
-            {
-                "name": ext.lstrip("."),
-                "type": "representation",
-                "parent": version["_id"],
-            }
+        representation = get_representation_by_name(
+            project_name, ext.lstrip("."), version["_id"]
         )
         if not representation:
             self.log.debug("No existing representation..")
@@ -2,6 +2,7 @@ import re

 import pyblish.api

+from openpype.client import get_subset_by_name
 import openpype.api
 from openpype.pipeline import legacy_io

@@ -15,31 +16,23 @@ class ValidateUSDShadeModelExists(pyblish.api.InstancePlugin):
     label = "USD Shade model exists"

     def process(self, instance):
-        asset = instance.data["asset"]
+        project_name = legacy_io.active_project()
+        asset_name = instance.data["asset"]
         subset = instance.data["subset"]

         # Assume shading variation starts after a dot separator
         shade_subset = subset.split(".", 1)[0]
         model_subset = re.sub("^usdShade", "usdModel", shade_subset)

-        asset_doc = legacy_io.find_one(
-            {"name": asset, "type": "asset"},
-            {"_id": True}
-        )
+        asset_doc = instance.data.get("assetEntity")
         if not asset_doc:
-            raise RuntimeError("Asset does not exist: %s" % asset)
+            raise RuntimeError("Asset document is not filled on instance.")

-        subset_doc = legacy_io.find_one(
-            {
-                "name": model_subset,
-                "type": "subset",
-                "parent": asset_doc["_id"],
-            },
-            {"_id": True}
+        subset_doc = get_subset_by_name(
+            project_name, model_subset, asset_doc["_id"], fields=["_id"]
         )
         if not subset_doc:
             raise RuntimeError(
                 "USD Model subset not found: "
-                "%s (%s)" % (model_subset, asset)
+                "%s (%s)" % (model_subset, asset_name)
             )
@@ -4,19 +4,9 @@ import husdoutputprocessors.base as base

 import colorbleed.usdlib as usdlib

-from openpype.pipeline import (
-    legacy_io,
-    registered_root,
-)
-
-
-def _get_project_publish_template():
-    """Return publish template from database for current project"""
-    project = legacy_io.find_one(
-        {"type": "project"},
-        projection={"config.template.publish": True}
-    )
-    return project["config"]["template"]["publish"]
+from openpype.client import get_asset_by_name
+from openpype.api import Anatomy
+from openpype.pipeline import legacy_io


 class AvalonURIOutputProcessor(base.OutputProcessorBase):

@@ -35,7 +25,6 @@ class AvalonURIOutputProcessor(base.OutputProcessorBase):
         ever created in a Houdini session. Therefore be very careful
         about what data gets put in this object.
         """
-        self._template = None
         self._use_publish_paths = False
         self._cache = dict()

@@ -60,14 +49,11 @@ class AvalonURIOutputProcessor(base.OutputProcessorBase):
         return self._parameters

     def beginSave(self, config_node, t):
-        self._template = _get_project_publish_template()
-
         parm = self._parms["use_publish_paths"]
         self._use_publish_paths = config_node.parm(parm).evalAtTime(t)
         self._cache.clear()

     def endSave(self):
-        self._template = None
         self._use_publish_paths = None
         self._cache.clear()

@@ -138,22 +124,19 @@ class AvalonURIOutputProcessor(base.OutputProcessorBase):
         """

         PROJECT = legacy_io.Session["AVALON_PROJECT"]
-        asset_doc = legacy_io.find_one({
-            "name": asset,
-            "type": "asset"
-        })
+        anatomy = Anatomy(PROJECT)
+        asset_doc = get_asset_by_name(PROJECT, asset)
         if not asset_doc:
             raise RuntimeError("Invalid asset name: '%s'" % asset)

-        root = registered_root()
-        path = self._template.format(**{
-            "root": root,
+        formatted_anatomy = anatomy.format({
             "project": PROJECT,
             "asset": asset_doc["name"],
             "subset": subset,
             "representation": ext,
             "version": 0  # stub version zero
         })
+        path = formatted_anatomy["publish"]["path"]

         # Remove the version folder
         subset_folder = os.path.dirname(os.path.dirname(path))
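Mirroring the call pattern in the hunk above (this is an assumption drawn from that usage rather than a documented Anatomy contract), resolving a publish path now looks roughly like:

```python
from openpype.api import Anatomy

project_name = "demo"  # hypothetical
anatomy = Anatomy(project_name)
formatted_anatomy = anatomy.format({
    "project": project_name,
    "asset": "characterA",     # hypothetical asset name
    "subset": "usdShadeMain",  # hypothetical subset name
    "representation": "usd",
    "version": 0               # stub version zero, as in the processor
})
path = formatted_anatomy["publish"]["path"]
```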
@@ -3,6 +3,7 @@ from __future__ import absolute_import

 import pyblish.api

+from openpype.client import get_asset_by_name
 from openpype.pipeline import legacy_io
 from openpype.api import get_errored_instances_from_context

@@ -74,12 +75,21 @@ class GenerateUUIDsOnInvalidAction(pyblish.api.Action):

         from . import lib

-        asset = instance.data['asset']
-        asset_id = legacy_io.find_one(
-            {"name": asset, "type": "asset"},
-            projection={"_id": True}
-        )['_id']
-        for node, _id in lib.generate_ids(nodes, asset_id=asset_id):
+        # Expecting this is called on validators in which case 'assetEntity'
+        # should be always available, but kept a way to query it by name.
+        asset_doc = instance.data.get("assetEntity")
+        if not asset_doc:
+            asset_name = instance.data["asset"]
+            project_name = legacy_io.active_project()
+            self.log.info((
+                "Asset is not stored on instance."
+                " Querying by name \"{}\" from project \"{}\""
+            ).format(asset_name, project_name))
+            asset_doc = get_asset_by_name(
+                project_name, asset_name, fields=["_id"]
+            )
+
+        for node, _id in lib.generate_ids(nodes, asset_id=asset_doc["_id"]):
             lib.set_id(node, _id, overwrite=True)
@@ -2,6 +2,7 @@
 """OpenPype script commands to be used directly in Maya."""
 from maya import cmds

+from openpype.client import get_asset_by_name, get_project
 from openpype.pipeline import legacy_io

@@ -79,8 +80,9 @@ def reset_frame_range():
     cmds.currentUnit(time=fps)

     # Set frame start/end
+    project_name = legacy_io.active_project()
     asset_name = legacy_io.Session["AVALON_ASSET"]
-    asset = legacy_io.find_one({"name": asset_name, "type": "asset"})
+    asset = get_asset_by_name(project_name, asset_name)

     frame_start = asset["data"].get("frameStart")
     frame_end = asset["data"].get("frameEnd")

@@ -145,8 +147,9 @@ def reset_resolution():
     resolution_height = 1080

     # Get resolution from asset
+    project_name = legacy_io.active_project()
     asset_name = legacy_io.Session["AVALON_ASSET"]
-    asset_doc = legacy_io.find_one({"name": asset_name, "type": "asset"})
+    asset_doc = get_asset_by_name(project_name, asset_name)
     resolution = _resolution_from_document(asset_doc)
     # Try get resolution from project
     if resolution is None:

@@ -155,7 +158,7 @@ def reset_resolution():
             "Asset \"{}\" does not have set resolution."
             " Trying to get resolution from project"
         ).format(asset_name))
-        project_doc = legacy_io.find_one({"type": "project"})
+        project_doc = get_project(project_name)
         resolution = _resolution_from_document(project_doc)

     if resolution is None:
@@ -12,11 +12,17 @@ import contextlib
from collections import OrderedDict, defaultdict
from math import ceil
from six import string_types
import bson

from maya import cmds, mel
import maya.api.OpenMaya as om

from openpype.client import (
    get_project,
    get_asset_by_name,
    get_subsets,
    get_last_versions,
    get_representation_by_name
)
from openpype import lib
from openpype.api import get_anatomy_settings
from openpype.pipeline import (

@@ -1387,15 +1393,11 @@ def generate_ids(nodes, asset_id=None):

    if asset_id is None:
        # Get the asset ID from the database for the asset of current context
        asset_data = legacy_io.find_one(
            {
                "type": "asset",
                "name": legacy_io.Session["AVALON_ASSET"]
            },
            projection={"_id": True}
        )
        assert asset_data, "No current asset found in Session"
        asset_id = asset_data['_id']
        project_name = legacy_io.active_project()
        asset_name = legacy_io.Session["AVALON_ASSET"]
        asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"])
        assert asset_doc, "No current asset found in Session"
        asset_id = asset_doc['_id']

    node_ids = []
    for node in nodes:

@@ -1548,13 +1550,15 @@ def list_looks(asset_id):

    # get all subsets with 'look' leading in the name
    #   associated with the asset
    subset = legacy_io.find({
        "parent": bson.ObjectId(asset_id),
        "type": "subset",
        "name": {"$regex": "look*"}
    })

    return list(subset)
    # TODO this should probably look for family 'look' instead of checking
    #   subset name that cannot start with family
    project_name = legacy_io.active_project()
    subset_docs = get_subsets(project_name, asset_ids=[asset_id])
    return [
        subset_doc
        for subset_doc in subset_docs
        if subset_doc["name"].startswith("look")
    ]


def assign_look_by_version(nodes, version_id):

@@ -1570,18 +1574,15 @@ def assign_look_by_version(nodes, version_id):
        None
    """

    # Get representations of shader file and relationships
    look_representation = legacy_io.find_one({
        "type": "representation",
        "parent": version_id,
        "name": "ma"
    })
    project_name = legacy_io.active_project()

    json_representation = legacy_io.find_one({
        "type": "representation",
        "parent": version_id,
        "name": "json"
    })
    # Get representations of shader file and relationships
    look_representation = get_representation_by_name(
        project_name, "ma", version_id
    )
    json_representation = get_representation_by_name(
        project_name, "json", version_id
    )

    # See if representation is already loaded, if so reuse it.
    host = registered_host()

@@ -1639,42 +1640,54 @@ def assign_look(nodes, subset="lookDefault"):
        parts = pype_id.split(":", 1)
        grouped[parts[0]].append(node)

    project_name = legacy_io.active_project()
    subset_docs = get_subsets(
        project_name, subset_names=[subset], asset_ids=grouped.keys()
    )
    subset_docs_by_asset_id = {
        str(subset_doc["parent"]): subset_doc
        for subset_doc in subset_docs
    }
    subset_ids = {
        subset_doc["_id"]
        for subset_doc in subset_docs_by_asset_id.values()
    }
    last_version_docs = get_last_versions(
        project_name,
        subset_ids=subset_ids,
        fields=["_id", "name", "data.families"]
    )
    last_version_docs_by_subset_id = {
        last_version_doc["parent"]: last_version_doc
        for last_version_doc in last_version_docs
    }

    for asset_id, asset_nodes in grouped.items():
        # create objectId for database
        try:
            asset_id = bson.ObjectId(asset_id)
        except bson.errors.InvalidId:
            log.warning("Asset ID is not compatible with bson")
            continue
        subset_data = legacy_io.find_one({
            "type": "subset",
            "name": subset,
            "parent": asset_id
        })

        if not subset_data:
        subset_doc = subset_docs_by_asset_id.get(asset_id)
        if not subset_doc:
            log.warning("No subset '{}' found for {}".format(subset, asset_id))
            continue

        # get last version
        # with backwards compatibility
        version = legacy_io.find_one(
            {
                "parent": subset_data['_id'],
                "type": "version",
                "data.families": {"$in": ["look"]}
            },
            sort=[("name", -1)],
            projection={
                "_id": True,
                "name": True
            }
        )
        last_version = last_version_docs_by_subset_id.get(subset_doc["_id"])
        if not last_version:
            log.warning((
                "No last version found for subset '{}' on asset with id {}"
            ).format(subset, asset_id))
            continue

        log.debug("Assigning look '{}' <v{:03d}>".format(subset,
                                                         version["name"]))
        families = last_version.get("data", {}).get("families") or []
        if "look" not in families:
            log.warning((
                "Last version for subset '{}' on asset with id {}"
                " does not have look family"
            ).format(subset, asset_id))
            continue

        assign_look_by_version(asset_nodes, version['_id'])
        log.debug("Assigning look '{}' <v{:03d}>".format(
            subset, last_version["name"]))

        assign_look_by_version(asset_nodes, last_version["_id"])


def apply_shaders(relationships, shadernodes, nodes):

@@ -2126,9 +2139,11 @@ def set_scene_resolution(width, height, pixelAspect):

    control_node = "defaultResolution"
    current_renderer = cmds.getAttr("defaultRenderGlobals.currentRenderer")
    aspect_ratio_attr = "deviceAspectRatio"

    # Give VRay a helping hand as it is slightly different from the rest
    if current_renderer == "vray":
        aspect_ratio_attr = "aspectRatio"
        vray_node = "vraySettings"
        if cmds.objExists(vray_node):
            control_node = vray_node

@@ -2141,7 +2156,8 @@ def set_scene_resolution(width, height, pixelAspect):
    cmds.setAttr("%s.height" % control_node, height)

    deviceAspectRatio = ((float(width) / float(height)) * float(pixelAspect))
    cmds.setAttr("%s.deviceAspectRatio" % control_node, deviceAspectRatio)
    cmds.setAttr(
        "{}.{}".format(control_node, aspect_ratio_attr), deviceAspectRatio)
    cmds.setAttr("%s.pixelAspect" % control_node, pixelAspect)


@@ -2155,7 +2171,8 @@ def reset_scene_resolution():
        None
    """

    project_doc = legacy_io.find_one({"type": "project"})
    project_name = legacy_io.active_project()
    project_doc = get_project(project_name)
    project_data = project_doc["data"]
    asset_data = lib.get_asset()["data"]


@@ -2188,7 +2205,8 @@ def set_context_settings():
    """

    # Todo (Wijnand): apply renderer and resolution of project
    project_doc = legacy_io.find_one({"type": "project"})
    project_name = legacy_io.active_project()
    project_doc = get_project(project_name)
    project_data = project_doc["data"]
    asset_data = lib.get_asset()["data"]
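The `assign_look` hunk above replaces one database query per asset with two batched queries up front. A condensed sketch of that pattern, mirroring the calls in the diff; it assumes `get_last_versions` yields version documents whose `parent` is the subset id, and the `asset_ids` input is a placeholder:

```python
from openpype.client import get_subsets, get_last_versions
from openpype.pipeline import legacy_io

project_name = legacy_io.active_project()
asset_ids = []  # ObjectIds gathered from node cbId attributes (placeholder)

# One query for all matching subsets, one for all their last versions
subset_docs = list(get_subsets(
    project_name, subset_names=["lookDefault"], asset_ids=asset_ids
))
last_version_docs = get_last_versions(
    project_name,
    subset_ids={doc["_id"] for doc in subset_docs},
    fields=["_id", "name", "data.families"]
)
version_by_subset_id = {
    doc["parent"]: doc for doc in last_version_docs
}
```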
@@ -6,10 +6,16 @@ import contextlib
import copy

import six
from bson.objectid import ObjectId

from maya import cmds

from openpype.client import (
    get_version_by_name,
    get_last_version_by_subset_id,
    get_representation_by_id,
    get_representation_by_name,
    get_representation_parents,
)
from openpype.pipeline import (
    schema,
    legacy_io,

@@ -283,36 +289,35 @@ def update_package_version(container, version):
    """

    # Versioning (from `core.maya.pipeline`)
    current_representation = legacy_io.find_one({
        "_id": ObjectId(container["representation"])
    })
    project_name = legacy_io.active_project()
    current_representation = get_representation_by_id(
        project_name, container["representation"]
    )

    assert current_representation is not None, "This is a bug"

    version_, subset, asset, project = legacy_io.parenthood(
        current_representation
    repre_parents = get_representation_parents(
        project_name, current_representation
    )
    version_doc = subset_doc = asset_doc = project_doc = None
    if repre_parents:
        version_doc, subset_doc, asset_doc, project_doc = repre_parents

    if version == -1:
        new_version = legacy_io.find_one({
            "type": "version",
            "parent": subset["_id"]
        }, sort=[("name", -1)])
        new_version = get_last_version_by_subset_id(
            project_name, subset_doc["_id"]
        )
    else:
        new_version = legacy_io.find_one({
            "type": "version",
            "parent": subset["_id"],
            "name": version,
        })
        new_version = get_version_by_name(
            project_name, version, subset_doc["_id"]
        )

    assert new_version is not None, "This is a bug"

    # Get the new representation (new file)
    new_representation = legacy_io.find_one({
        "type": "representation",
        "parent": new_version["_id"],
        "name": current_representation["name"]
    })
    new_representation = get_representation_by_name(
        project_name, current_representation["name"], new_version["_id"]
    )

    update_package(container, new_representation)


@@ -330,10 +335,10 @@ def update_package(set_container, representation):
    """

    # Load the original package data
    current_representation = legacy_io.find_one({
        "_id": ObjectId(set_container['representation']),
        "type": "representation"
    })
    project_name = legacy_io.active_project()
    current_representation = get_representation_by_id(
        project_name, set_container["representation"]
    )

    current_file = get_representation_path(current_representation)
    assert current_file.endswith(".json")

@@ -380,6 +385,7 @@ def update_scene(set_container, containers, current_data, new_data, new_file):
    from openpype.hosts.maya.lib import DEFAULT_MATRIX, get_container_transforms

    set_namespace = set_container['namespace']
    project_name = legacy_io.active_project()

    # Update the setdress hierarchy alembic
    set_root = get_container_transforms(set_container, root=True)

@@ -481,12 +487,12 @@ def update_scene(set_container, containers, current_data, new_data, new_file):
            # Check whether the conversion can be done by the Loader.
            # They *must* use the same asset, subset and Loader for
            # `update_container` to make sense.
            old = legacy_io.find_one({
                "_id": ObjectId(representation_current)
            })
            new = legacy_io.find_one({
                "_id": ObjectId(representation_new)
            })
            old = get_representation_by_id(
                project_name, representation_current
            )
            new = get_representation_by_id(
                project_name, representation_new
            )
            is_valid = compare_representations(old=old, new=new)
            if not is_valid:
                log.error("Skipping: %s. See log for details.",
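`get_representation_parents` collapses the old `legacy_io.parenthood` walk into one call. A hedged sketch of unpacking it, using only the signature shown in the hunk above (the representation id is a placeholder):

```python
from openpype.client import (
    get_representation_by_id,
    get_representation_parents,
)
from openpype.pipeline import legacy_io

project_name = legacy_io.active_project()
repre_doc = get_representation_by_id(project_name, "<representation id>")

# Returns (version, subset, asset, project) documents, or nothing on failure
parents = get_representation_parents(project_name, repre_doc)
if parents:
    version_doc, subset_doc, asset_doc, project_doc = parents
```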
@@ -16,7 +16,7 @@ class CreateMultiverseUsd(plugin.Creator):
        self.data.update(lib.collect_animation_data(True))

        self.data["fileFormat"] = ["usd", "usda", "usdz"]
        self.data["stripNamespaces"] = False
        self.data["stripNamespaces"] = True
        self.data["mergeTransformAndShape"] = False
        self.data["writeAncestors"] = True
        self.data["flattenParentXforms"] = False

@@ -37,15 +37,15 @@ class CreateMultiverseUsd(plugin.Creator):
        self.data["writeUVs"] = True
        self.data["writeColorSets"] = False
        self.data["writeTangents"] = False
        self.data["writeRefPositions"] = False
        self.data["writeRefPositions"] = True
        self.data["writeBlendShapes"] = False
        self.data["writeDisplayColor"] = False
        self.data["writeDisplayColor"] = True
        self.data["writeSkinWeights"] = False
        self.data["writeMaterialAssignment"] = False
        self.data["writeHardwareShader"] = False
        self.data["writeShadingNetworks"] = False
        self.data["writeTransformMatrix"] = True
        self.data["writeUsdAttributes"] = False
        self.data["writeUsdAttributes"] = True
        self.data["writeInstancesAsReferences"] = False
        self.data["timeVaryingTopology"] = False
        self.data["customMaterialNamespace"] = ''
@@ -1,6 +1,10 @@
import re
import json
from bson.objectid import ObjectId

from openpype.client import (
    get_representation_by_id,
    get_representations
)
from openpype.pipeline import (
    InventoryAction,
    get_representation_context,

@@ -31,6 +35,7 @@ class ImportModelRender(InventoryAction):
    def process(self, containers):
        from maya import cmds

        project_name = legacy_io.active_project()
        for container in containers:
            con_name = container["objectName"]
            nodes = []

@@ -40,9 +45,9 @@ class ImportModelRender(InventoryAction):
                else:
                    nodes.append(n)

            repr_doc = legacy_io.find_one({
                "_id": ObjectId(container["representation"]),
            })
            repr_doc = get_representation_by_id(
                project_name, container["representation"], fields=["parent"]
            )
            version_id = repr_doc["parent"]

            print("Importing render sets for model %r" % con_name)

@@ -63,26 +68,38 @@ class ImportModelRender(InventoryAction):

        from maya import cmds

        project_name = legacy_io.active_project()
        repre_docs = get_representations(
            project_name, version_ids=[version_id], fields=["_id", "name"]
        )
        # Get representations of shader file and relationships
        look_repr = legacy_io.find_one({
            "type": "representation",
            "parent": version_id,
            "name": {"$regex": self.scene_type_regex},
        })
        if not look_repr:
        json_repre = None
        look_repres = []
        scene_type_regex = re.compile(self.scene_type_regex)
        for repre_doc in repre_docs:
            repre_name = repre_doc["name"]
            if repre_name == self.look_data_type:
                json_repre = repre_doc
                continue

            if scene_type_regex.fullmatch(repre_name):
                look_repres.append(repre_doc)

        # QUESTION should we care if there is more than one look
        #   representation? (since it's based on regex match)
        look_repre = None
        if look_repres:
            look_repre = look_repres[0]

        # QUESTION shouldn't the json representation be validated too?
        if not look_repre:
            print("No model render sets for this model version.")
            return

        json_repr = legacy_io.find_one({
            "type": "representation",
            "parent": version_id,
            "name": self.look_data_type,
        })

        context = get_representation_context(look_repr["_id"])
        context = get_representation_context(look_repre["_id"])
        maya_file = self.filepath_from_context(context)

        context = get_representation_context(json_repr["_id"])
        context = get_representation_context(json_repre["_id"])
        json_file = self.filepath_from_context(context)

        # Import the look file
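Filtering one `get_representations` result in Python replaces the Mongo `$regex` query. Note the semantic shift: `re.fullmatch` requires the whole representation name to match, whereas Mongo's `$regex` matches substrings. A small sketch with made-up regex and data-type values:

```python
import re

scene_type_regex = re.compile(r"ma|mb")  # hypothetical scene_type_regex
look_data_type = "json"                  # hypothetical look_data_type

json_repre = None
look_repres = []
for repre_doc in repre_docs:  # result of get_representations(...) above
    if repre_doc["name"] == look_data_type:
        json_repre = repre_doc
    elif scene_type_regex.fullmatch(repre_doc["name"]):
        # "ma" matches; "karma" would not, unlike a bare Mongo $regex
        look_repres.append(repre_doc)
```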
@@ -1,5 +1,10 @@
from maya import cmds, mel

from openpype.client import (
    get_asset_by_id,
    get_subset_by_id,
    get_version_by_id,
)
from openpype.pipeline import (
    legacy_io,
    load,

@@ -65,9 +70,16 @@ class AudioLoader(load.LoaderPlugin):
        )

        # Set frame range.
        version = legacy_io.find_one({"_id": representation["parent"]})
        subset = legacy_io.find_one({"_id": version["parent"]})
        asset = legacy_io.find_one({"_id": subset["parent"]})
        project_name = legacy_io.active_project()
        version = get_version_by_id(
            project_name, representation["parent"], fields=["parent"]
        )
        subset = get_subset_by_id(
            project_name, version["parent"], fields=["parent"]
        )
        asset = get_asset_by_id(
            project_name, subset["parent"],
            fields=["data.frameStart", "data.frameEnd"]
        )
        audio_node.sourceStart.set(1 - asset["data"]["frameStart"])
        audio_node.sourceEnd.set(asset["data"]["frameEnd"])
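The representation → version → subset → asset walk with narrow `fields` keeps each query small; the asset query must include the `data` keys that are read afterwards (the original hunk asked only for `parent` there, which would drop them — corrected above as an assumed fix). A compact sketch with a placeholder id:

```python
from openpype.client import get_version_by_id, get_subset_by_id, get_asset_by_id
from openpype.pipeline import legacy_io

project_name = legacy_io.active_project()
# Each hop only fetches the "parent" pointer it needs for the next query
version = get_version_by_id(project_name, "<parent id>", fields=["parent"])
subset = get_subset_by_id(project_name, version["parent"], fields=["parent"])
asset = get_asset_by_id(
    project_name, subset["parent"],
    fields=["data.frameStart", "data.frameEnd"],
)
frame_start = asset["data"]["frameStart"]
```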
@@ -1,5 +1,10 @@
from Qt import QtWidgets, QtCore

from openpype.client import (
    get_asset_by_id,
    get_subset_by_id,
    get_version_by_id,
)
from openpype.pipeline import (
    legacy_io,
    load,

@@ -216,9 +221,16 @@ class ImagePlaneLoader(load.LoaderPlugin):
        )

        # Set frame range.
        version = legacy_io.find_one({"_id": representation["parent"]})
        subset = legacy_io.find_one({"_id": version["parent"]})
        asset = legacy_io.find_one({"_id": subset["parent"]})
        project_name = legacy_io.active_project()
        version = get_version_by_id(
            project_name, representation["parent"], fields=["parent"]
        )
        subset = get_subset_by_id(
            project_name, version["parent"], fields=["parent"]
        )
        asset = get_asset_by_id(
            project_name, subset["parent"],
            fields=["data.frameStart", "data.frameEnd"]
        )
        start_frame = asset["data"]["frameStart"]
        end_frame = asset["data"]["frameEnd"]
        image_plane_shape.frameOffset.set(1 - start_frame)


@@ -5,6 +5,7 @@ from collections import defaultdict

from Qt import QtWidgets

from openpype.client import get_representation_by_name
from openpype.pipeline import (
    legacy_io,
    get_representation_path,

@@ -75,11 +76,10 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
        shader_nodes = cmds.ls(members, type='shadingEngine')
        nodes = set(self._get_nodes_with_shader(shader_nodes))

        json_representation = legacy_io.find_one({
            "type": "representation",
            "parent": representation['parent'],
            "name": "json"
        })
        project_name = legacy_io.active_project()
        json_representation = get_representation_by_name(
            project_name, "json", representation["parent"]
        )

        # Load relationships
        shader_relation = get_representation_path(json_representation)


@@ -7,10 +7,9 @@ loader will use them instead of native vray vrmesh format.
"""
import os

from bson.objectid import ObjectId

import maya.cmds as cmds

from openpype.client import get_representation_by_name
from openpype.api import get_project_settings
from openpype.pipeline import (
    legacy_io,

@@ -185,12 +184,8 @@ class VRayProxyLoader(load.LoaderPlugin):
        """
        self.log.debug(
            "Looking for abc in published representations of this version.")
        abc_rep = legacy_io.find_one({
            "type": "representation",
            "parent": ObjectId(version_id),
            "name": "abc"
        })

        project_name = legacy_io.active_project()
        abc_rep = get_representation_by_name(project_name, "abc", version_id)
        if abc_rep:
            self.log.debug("Found, we'll link alembic to vray proxy.")
            file_name = get_representation_path(abc_rep)


@@ -40,7 +40,7 @@ FILE_NODES = {

    "aiImage": "filename",

    "RedshiftNormalMap": "text0",
    "RedshiftNormalMap": "tex0",

    "PxrBump": "filename",
    "PxrNormalMap": "filename",


@@ -3,6 +3,7 @@ import pymel.core as pm

import pyblish.api

from openpype.client import get_subset_by_name
from openpype.pipeline import legacy_io


@@ -78,11 +79,15 @@ class CollectReview(pyblish.api.InstancePlugin):
            self.log.debug('instance data {}'.format(instance.data))
        else:
            legacy_subset_name = task + 'Review'
            asset_doc_id = instance.context.data['assetEntity']["_id"]
            subsets = legacy_io.find({"type": "subset",
                                      "name": legacy_subset_name,
                                      "parent": asset_doc_id}).distinct("_id")
            if len(list(subsets)) > 0:
            asset_doc = instance.context.data['assetEntity']
            project_name = legacy_io.active_project()
            subset_doc = get_subset_by_name(
                project_name,
                legacy_subset_name,
                asset_doc["_id"],
                fields=["_id"]
            )
            if subset_doc:
                self.log.debug("Existing subsets found, keep legacy name.")
                instance.data['subset'] = legacy_subset_name
@@ -34,7 +34,6 @@ class ExtractCameraAlembic(openpype.api.Extractor):
                          dag=True, type="camera")

        # validate required settings
        assert len(cameras) == 1, "Not a single camera found in extraction"
        assert isinstance(step, float), "Step must be a float value"
        camera = cameras[0]

@@ -44,8 +43,12 @@ class ExtractCameraAlembic(openpype.api.Extractor):
        path = os.path.join(dir_path, filename)

        # Perform alembic extraction
        member_shapes = cmds.ls(
            members, leaf=True, shapes=True, long=True, dag=True)
        with lib.maintained_selection():
            cmds.select(camera, replace=True, noExpand=True)
            cmds.select(
                member_shapes,
                replace=True, noExpand=True)

            # Enforce forward slashes for AbcExport because we're
            # embedding it into a job string

@@ -57,10 +60,12 @@ class ExtractCameraAlembic(openpype.api.Extractor):
            job_str += ' -step {0} '.format(step)

            if bake_to_worldspace:
                transform = cmds.listRelatives(camera,
                                               parent=True,
                                               fullPath=True)[0]
                job_str += ' -worldSpace -root {0}'.format(transform)
                job_str += ' -worldSpace'
                for member in member_shapes:
                    self.log.info(f"processing {member}")
                    transform = cmds.listRelatives(
                        member, parent=True, fullPath=True)[0]
                    job_str += ' -root {0}'.format(transform)

            job_str += ' -file "{0}"'.format(path)
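The rewritten hunk emits one `-root` flag per member shape instead of a single camera root, so extra set members travel with the camera alembic. A hedged sketch of the resulting AbcExport job string; frame range, step, and node paths are made up:

```python
# Hypothetical values; AbcExport reads everything from one job string
start, end, step = 1001, 1050, 1.0
path = "publish/cameraMain.abc"  # forward slashes, as the hunk enforces
roots = ["|shot|camera1", "|shot|extraGeo"]  # transforms of member shapes

job_str = '-frameRange {0} {1}'.format(int(start), int(end))
job_str += ' -step {0} '.format(step)
job_str += ' -worldSpace'
for transform in roots:
    job_str += ' -root {0}'.format(transform)
job_str += ' -file "{0}"'.format(path)

# Inside Maya, cmds.AbcExport(j=job_str) would run the export
```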
@@ -131,12 +131,12 @@ class ExtractCameraMayaScene(openpype.api.Extractor):
                          "bake to world space is ignored...")

        # get cameras
        members = instance.data['setMembers']
        members = cmds.ls(instance.data['setMembers'], leaf=True, shapes=True,
                          long=True, dag=True)
        cameras = cmds.ls(members, leaf=True, shapes=True, long=True,
                          dag=True, type="camera")

        # validate required settings
        assert len(cameras) == 1, "Single camera must be found in extraction"
        assert isinstance(step, float), "Step must be a float value"
        camera = cameras[0]
        transform = cmds.listRelatives(camera, parent=True, fullPath=True)

@@ -158,15 +158,24 @@ class ExtractCameraMayaScene(openpype.api.Extractor):
                    frame_range=[start, end],
                    step=step
                )
                baked_shapes = cmds.ls(baked,
                baked_camera_shapes = cmds.ls(baked,
                                              type="camera",
                                              dag=True,
                                              shapes=True,
                                              long=True)

                members = members + baked_camera_shapes
                members.remove(camera)
            else:
                baked_shapes = cameras
                baked_camera_shapes = cmds.ls(cameras,
                                              type="camera",
                                              dag=True,
                                              shapes=True,
                                              long=True)
            # Fix PLN-178: Don't allow background color to be non-black
            for cam in baked_shapes:
            for cam in cmds.ls(
                    baked_camera_shapes, type="camera", dag=True,
                    shapes=True, long=True):
                attrs = {"backgroundColorR": 0.0,
                         "backgroundColorG": 0.0,
                         "backgroundColorB": 0.0,

@@ -177,7 +186,8 @@ class ExtractCameraMayaScene(openpype.api.Extractor):
                    cmds.setAttr(plug, value)

            self.log.info("Performing extraction..")
            cmds.select(baked_shapes, noExpand=True)
            cmds.select(cmds.ls(members, dag=True,
                                shapes=True, long=True), noExpand=True)
            cmds.file(path,
                      force=True,
                      typ="mayaAscii" if self.scene_type == "ma" else "mayaBinary",  # noqa: E501


@@ -111,7 +111,8 @@ class ExtractPlayblast(openpype.api.Extractor):
        self.log.debug("playblast path {}".format(path))

        collected_files = os.listdir(stagingdir)
        collections, remainder = clique.assemble(collected_files)
        collections, remainder = clique.assemble(collected_files,
                                                 minimum_items=1)

        self.log.debug("filename {}".format(filename))
        frame_collection = None

@@ -134,10 +135,15 @@ class ExtractPlayblast(openpype.api.Extractor):
        # Add camera node name to representation data
        camera_node_name = pm.ls(camera)[0].getTransform().name()

        collected_files = list(frame_collection)
        # a single frame file shouldn't be passed as a list, only as a string
        if len(collected_files) == 1:
            collected_files = collected_files[0]

        representation = {
            'name': 'png',
            'ext': 'png',
            'files': list(frame_collection),
            'files': collected_files,
            "stagingDir": stagingdir,
            "frameStart": start,
            "frameEnd": end,
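`minimum_items=1` makes clique build a collection even for a single rendered frame, and the follow-up hunk then flattens a one-item collection back to a plain filename for the integrator. A small standalone sketch:

```python
import clique

# With the default minimum_items (2) a lone frame would land in `remainder`;
# with minimum_items=1 it still forms a collection
collections, remainder = clique.assemble(
    ["playblast.1001.png"], minimum_items=1
)
frame_collection = collections[0]

collected_files = list(frame_collection)
if len(collected_files) == 1:
    # single frame: pass a plain string instead of a one-item list
    collected_files = collected_files[0]
```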
@@ -20,6 +20,7 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
    hosts = ['maya']
    label = 'Camera Contents'
    actions = [openpype.hosts.maya.api.action.SelectInvalidAction]
    validate_shapes = True

    @classmethod
    def get_invalid(cls, instance):

@@ -32,7 +33,7 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
        invalid = []
        cameras = cmds.ls(shapes, type='camera', long=True)
        if len(cameras) != 1:
            cls.log.warning("Camera instance must have a single camera. "
            cls.log.error("Camera instance must have a single camera. "
                          "Found {0}: {1}".format(len(cameras), cameras))
            invalid.extend(cameras)

@@ -49,15 +50,32 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):

            raise RuntimeError("No cameras found in empty instance.")

        if not cls.validate_shapes:
            cls.log.info("Not validating shapes in the content.")

            for member in members:
                parents = cmds.ls(member, long=True)[0].split("|")[1:-1]
                parents_long_named = [
                    "|".join(parents[:i]) for i in range(1, 1 + len(parents))
                ]
                if cameras[0] in parents_long_named:
                    cls.log.error(
                        "{} is parented under camera {}".format(
                            member, cameras[0]))
                    invalid.append(member)
            return invalid

        # non-camera shapes
        valid_shapes = cmds.ls(shapes, type=('camera', 'locator'), long=True)
        shapes = set(shapes) - set(valid_shapes)
        if shapes:
            shapes = list(shapes)
            cls.log.warning("Camera instance should only contain camera "
            cls.log.error("Camera instance should only contain camera "
                          "shapes. Found: {0}".format(shapes))
            invalid.extend(shapes)

        invalid = list(set(invalid))

        return invalid


@@ -12,28 +12,41 @@ def pairs(iterable):
        yield i, y


def get_invalid_sets(shape):
    """Get sets that are considered related but do not contain the shape.
def get_invalid_sets(shapes):
    """Return invalid sets for the given shapes.

    In some scenarios Maya keeps connections to multiple shaders
    even if just a single one is assigned on the full object.
    This takes a list of shape nodes to cache the set members for overlapping
    sets in the queries. This avoids many Maya set member queries.

    These are related sets returned by `maya.cmds.listSets` that don't
    actually have the shape as member.
    Returns:
        dict: Dictionary of shapes and their invalid sets, e.g.
            {"pCubeShape": ["set1", "set2"]}

    """

    invalid = []
    sets = cmds.listSets(object=shape, t=1, extendToShape=False) or []
    for s in sets:
        members = cmds.sets(s, query=True, nodesOnly=True)
        if not members:
            invalid.append(s)
            continue
    cache = dict()
    invalid = dict()

        members = set(cmds.ls(members, long=True))
        if shape not in members:
            invalid.append(s)
    # Collect the sets from the shapes
    for shape in shapes:
        invalid_sets = []
        sets = cmds.listSets(object=shape, t=1, extendToShape=False) or []
        for set_ in sets:

            members = cache.get(set_, None)
            if members is None:
                members = set(cmds.ls(cmds.sets(set_,
                                                query=True,
                                                nodesOnly=True), long=True))
                cache[set_] = members

            # If the shape is not actually present as a member of the set
            #   consider it invalid
            if shape not in members:
                invalid_sets.append(set_)

        if invalid_sets:
            invalid[shape] = invalid_sets

    return invalid


@@ -92,15 +105,9 @@ class ValidateMeshShaderConnections(pyblish.api.InstancePlugin):
    @staticmethod
    def get_invalid(instance):

        shapes = cmds.ls(instance[:], dag=1, leaf=1, shapes=1, long=True)

        # todo: allow to check anything that can have a shader
        shapes = cmds.ls(shapes, noIntermediate=True, long=True, type="mesh")

        invalid = []
        for shape in shapes:
            if get_invalid_sets(shape):
                invalid.append(shape)
        nodes = instance[:]
        shapes = cmds.ls(nodes, noIntermediate=True, long=True, type="mesh")
        invalid = get_invalid_sets(shapes).keys()

        return invalid

@@ -108,7 +115,7 @@ class ValidateMeshShaderConnections(pyblish.api.InstancePlugin):
    def repair(cls, instance):

        shapes = cls.get_invalid(instance)
        for shape in shapes:
            invalid_sets = get_invalid_sets(shape)
        invalid = get_invalid_sets(shapes)
        for shape, invalid_sets in invalid.items():
            for set_node in invalid_sets:
                disconnect(shape, set_node)
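The rewrite turns a per-shape helper into one batch call with a shared member cache: two shapes assigned to the same shadingEngine now trigger a single `cmds.sets` member query instead of two. A hedged usage sketch inside Maya (`disconnect` is the module's own repair helper; shape selection is illustrative):

```python
from maya import cmds

shapes = cmds.ls(noIntermediate=True, long=True, type="mesh")
# One call for all shapes; returns {shape: [sets missing that shape]}
invalid = get_invalid_sets(shapes)

for shape, invalid_sets in invalid.items():
    for set_node in invalid_sets:
        disconnect(shape, set_node)
```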
@@ -1,6 +1,7 @@
import pyblish.api

import openpype.api
from openpype.client import get_assets
from openpype.pipeline import legacy_io
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib

@@ -42,8 +43,12 @@ class ValidateNodeIdsInDatabase(pyblish.api.InstancePlugin):
            nodes=instance[:])

        # check ids against database ids
        db_asset_ids = legacy_io.find({"type": "asset"}).distinct("_id")
        db_asset_ids = set(str(i) for i in db_asset_ids)
        project_name = legacy_io.active_project()
        asset_docs = get_assets(project_name, fields=["_id"])
        db_asset_ids = {
            str(asset_doc["_id"])
            for asset_doc in asset_docs
        }

        # Get all asset IDs
        for node in id_required_nodes:


@@ -1,7 +1,6 @@
import pyblish.api
import openpype.api

from openpype.pipeline import legacy_io
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib

@@ -36,15 +35,7 @@ class ValidateNodeIDsRelated(pyblish.api.InstancePlugin):
        """Return the member nodes that are invalid"""
        invalid = list()

        asset = instance.data['asset']
        asset_data = legacy_io.find_one(
            {
                "name": asset,
                "type": "asset"
            },
            projection={"_id": True}
        )
        asset_id = str(asset_data['_id'])
        asset_id = str(instance.data['assetEntity']["_id"])

        # We do want to check the referenced nodes as they might be
        # part of the end product


@@ -1,8 +1,8 @@
import pyblish.api

from openpype.client import get_subset_by_name
import openpype.hosts.maya.api.action
from openpype.pipeline import legacy_io
import openpype.api


class ValidateRenderLayerAOVs(pyblish.api.InstancePlugin):

@@ -33,26 +33,23 @@ class ValidateRenderLayerAOVs(pyblish.api.InstancePlugin):
        raise RuntimeError("Found unregistered subsets: {}".format(invalid))

    def get_invalid(self, instance):
        invalid = []

        asset_name = instance.data["asset"]
        project_name = legacy_io.active_project()
        asset_doc = instance.data["assetEntity"]
        render_passes = instance.data.get("renderPasses", [])
        for render_pass in render_passes:
            is_valid = self.validate_subset_registered(asset_name, render_pass)
            is_valid = self.validate_subset_registered(
                project_name, asset_doc, render_pass
            )
            if not is_valid:
                invalid.append(render_pass)

        return invalid

    def validate_subset_registered(self, asset_name, subset_name):
    def validate_subset_registered(self, project_name, asset_doc, subset_name):
        """Check if subset is registered in the database under the asset"""

        asset = legacy_io.find_one({"type": "asset", "name": asset_name})
        is_valid = legacy_io.find_one({
            "type": "subset",
            "name": subset_name,
            "parent": asset["_id"]
        })

        return is_valid
        return get_subset_by_name(
            project_name, subset_name, asset_doc["_id"], fields=["_id"]
        )


@@ -94,6 +94,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
    def get_invalid(cls, instance):

        invalid = False
        multipart = False

        renderer = instance.data['renderer']
        layer = instance.data['setMembers']

@@ -113,6 +114,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
            "{aov_separator}", instance.data.get("aovSeparator", "_"))

        required_prefix = "maya/<scene>"
        default_prefix = cls.ImagePrefixTokens[renderer]

        if not anim_override:
            invalid = True

@@ -213,14 +215,16 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
                cls.log.error("Wrong image prefix [ {} ] - "
                              "You can't use '<renderpass>' token "
                              "with merge AOVs turned on".format(prefix))
                default_prefix = re.sub(
                    cls.R_AOV_TOKEN, "", default_prefix)
                # remove aov token from prefix to pass validation
                default_prefix = default_prefix.split("{aov_separator}")[0]
            elif not re.search(cls.R_AOV_TOKEN, prefix):
                invalid = True
                cls.log.error("Wrong image prefix [ {} ] - "
                              "doesn't have: '<renderpass>' or "
                              "token".format(prefix))

        # prefix check
        default_prefix = cls.ImagePrefixTokens[renderer]
        default_prefix = default_prefix.replace(
            "{aov_separator}", instance.data.get("aovSeparator", "_"))
        if prefix.lower() != default_prefix.lower():


@@ -8,9 +8,6 @@ from .workio import (
)

from .command import (
    reset_frame_range,
    get_handles,
    reset_resolution,
    viewer_update_and_undo_stop
)

@@ -26,7 +23,11 @@ from .pipeline import (
    update_container,
)
from .lib import (
    maintained_selection
    maintained_selection,
    reset_selection,
    get_view_process_node,
    duplicate_node
)

from .utils import (

@@ -42,9 +43,6 @@ __all__ = (
    "current_file",
    "work_root",

    "reset_frame_range",
    "get_handles",
    "reset_resolution",
    "viewer_update_and_undo_stop",

    "OpenPypeCreator",

@@ -58,6 +56,9 @@ __all__ = (
    "update_container",

    "maintained_selection",
    "reset_selection",
    "get_view_process_node",
    "duplicate_node",

    "colorspace_exists_on_node",
    "get_colorspace_list"


@@ -1,124 +1,10 @@
import logging
import contextlib
import nuke
from bson.objectid import ObjectId

from openpype.pipeline import legacy_io

log = logging.getLogger(__name__)


def reset_frame_range():
    """Set frame range to current asset.

    Also sets the Viewer range with displayed handles.
    """

    fps = float(legacy_io.Session.get("AVALON_FPS", 25))

    nuke.root()["fps"].setValue(fps)
    name = legacy_io.Session["AVALON_ASSET"]
    asset = legacy_io.find_one({"name": name, "type": "asset"})
    asset_data = asset["data"]

    handles = get_handles(asset)

    frame_start = int(asset_data.get(
        "frameStart",
        asset_data.get("edit_in")))

    frame_end = int(asset_data.get(
        "frameEnd",
        asset_data.get("edit_out")))

    if not all([frame_start, frame_end]):
        missing = ", ".join(["frame_start", "frame_end"])
        msg = "'{}' are not set for asset '{}'!".format(missing, name)
        log.warning(msg)
        nuke.message(msg)
        return

    frame_start -= handles
    frame_end += handles

    nuke.root()["first_frame"].setValue(frame_start)
    nuke.root()["last_frame"].setValue(frame_end)

    # setting active viewers
    vv = nuke.activeViewer().node()
    vv["frame_range_lock"].setValue(True)
    vv["frame_range"].setValue("{0}-{1}".format(
        int(asset_data["frameStart"]),
        int(asset_data["frameEnd"]))
    )


def get_handles(asset):
    """Get handles data.

    Arguments:
        asset (dict): avalon asset entity

    Returns:
        handles (int)
    """
    data = asset["data"]
    if "handles" in data and data["handles"] is not None:
        return int(data["handles"])

    parent_asset = None
    if "visualParent" in data:
        vp = data["visualParent"]
        if vp is not None:
            parent_asset = legacy_io.find_one({"_id": ObjectId(vp)})

    if parent_asset is None:
        parent_asset = legacy_io.find_one({"_id": ObjectId(asset["parent"])})

    if parent_asset is not None:
        return get_handles(parent_asset)
    else:
        return 0


def reset_resolution():
    """Set resolution to project resolution."""
    project = legacy_io.find_one({"type": "project"})
    p_data = project["data"]

    width = p_data.get("resolution_width",
                       p_data.get("resolutionWidth"))
    height = p_data.get("resolution_height",
                        p_data.get("resolutionHeight"))

    if not all([width, height]):
        missing = ", ".join(["width", "height"])
        msg = "No resolution information `{0}` found for '{1}'.".format(
            missing,
            project["name"])
        log.warning(msg)
        nuke.message(msg)
        return

    current_width = nuke.root()["format"].value().width()
    current_height = nuke.root()["format"].value().height()

    if width != current_width or height != current_height:

        fmt = None
        for f in nuke.formats():
            if f.width() == width and f.height() == height:
                fmt = f.name()

        if not fmt:
            nuke.addFormat(
                "{0} {1} {2}".format(int(width), int(height), project["name"])
            )
            fmt = project["name"]

        nuke.root()["format"].setValue(fmt)


@contextlib.contextmanager
def viewer_update_and_undo_stop():
    """Lock viewer from updating and stop recording undo steps"""
@@ -3,14 +3,21 @@ from pprint import pformat
import re
import six
import platform
import tempfile
import contextlib
from collections import OrderedDict

import clique
from bson.objectid import ObjectId

import nuke

from openpype.client import (
    get_project,
    get_asset_by_name,
    get_versions,
    get_last_versions,
    get_representations,
)
from openpype.api import (
    Logger,
    Anatomy,

@@ -711,6 +718,20 @@ def get_imageio_input_colorspace(filename):
    return preset_clrsp


def get_view_process_node():
    reset_selection()

    ipn_orig = None
    for v in nuke.allNodes(filter="Viewer"):
        ipn = v['input_process_node'].getValue()
        if "VIEWER_INPUT" not in ipn:
            ipn_orig = nuke.toNode(ipn)
            ipn_orig.setSelected(True)

    if ipn_orig:
        return duplicate_node(ipn_orig)


def on_script_load():
    ''' Callback for ffmpeg support
    '''

@@ -734,47 +755,84 @@ def check_inventory_versions():
    from .pipeline import parse_container

    # get all Loader nodes by avalon attribute metadata
    for each in nuke.allNodes():
        container = parse_container(each)
    node_with_repre_id = []
    repre_ids = set()
    # Find all containers and collect their nodes and representation ids
    for node in nuke.allNodes():
        container = parse_container(node)

        if container:
            node = nuke.toNode(container["objectName"])
            avalon_knob_data = read_avalon_data(node)
            repre_id = avalon_knob_data["representation"]

            # get representation from io
            representation = legacy_io.find_one({
                "type": "representation",
                "_id": ObjectId(avalon_knob_data["representation"])
            })
            repre_ids.add(repre_id)
            node_with_repre_id.append((node, repre_id))

            # Failsafe for not finding the representation.
            if not representation:
                log.warning(
                    "Could not find the representation on "
                    "node \"{}\"".format(node.name())
                )
                continue
    # Skip if nothing was found
    if not repre_ids:
        return

            # Get start frame from version data
            version = legacy_io.find_one({
                "type": "version",
                "_id": representation["parent"]
            })
    project_name = legacy_io.active_project()
    # Find representations based on found containers
    repre_docs = get_representations(
        project_name,
        representation_ids=repre_ids,
        fields=["_id", "parent"]
    )
    # Store representations by id and collect version ids
    repre_docs_by_id = {}
    version_ids = set()
    for repre_doc in repre_docs:
        # Use stringified representation id to match value in containers
        repre_id = str(repre_doc["_id"])
        repre_docs_by_id[repre_id] = repre_doc
        version_ids.add(repre_doc["parent"])

            # get all versions in list
            versions = legacy_io.find({
                "type": "version",
                "parent": version["parent"]
            }).distinct("name")
    version_docs = get_versions(
        project_name, version_ids, fields=["_id", "name", "parent"]
    )
    # Store versions by id and collect subset ids
    version_docs_by_id = {}
    subset_ids = set()
    for version_doc in version_docs:
        version_docs_by_id[version_doc["_id"]] = version_doc
        subset_ids.add(version_doc["parent"])

            max_version = max(versions)
    # Query last versions based on subset ids
    last_versions_by_subset_id = get_last_versions(
        project_name, subset_ids=subset_ids, fields=["_id", "parent"]
    )

            # check the available version and do match
            # change color of node if not max version
            if version.get("name") not in [max_version]:
                node["tile_color"].setValue(int("0xd84f20ff", 16))
            else:
                node["tile_color"].setValue(int("0x4ecd25ff", 16))
    # Loop through collected container nodes and their representation ids
    for item in node_with_repre_id:
        # Some Python versions bundled with Nuke can't unpack the tuple
        #   directly in the for statement
        node, repre_id = item
        repre_doc = repre_docs_by_id.get(repre_id)
        # Failsafe for not finding the representation.
        if not repre_doc:
            log.warning((
                "Could not find the representation on node \"{}\""
            ).format(node.name()))
            continue

        version_id = repre_doc["parent"]
        version_doc = version_docs_by_id.get(version_id)
        if not version_doc:
            log.warning((
                "Could not find the version on node \"{}\""
            ).format(node.name()))
            continue

        # Get last version based on subset id
        subset_id = version_doc["parent"]
        last_version = last_versions_by_subset_id[subset_id]
        # Check if last version is same as current version
        if last_version["_id"] == version_doc["_id"]:
            color_value = "0x4ecd25ff"
        else:
            color_value = "0xd84f20ff"
        node["tile_color"].setValue(int(color_value, 16))


def writes_version_sync():

@@ -899,11 +957,9 @@ def format_anatomy(data):
    file = script_name()
    data["version"] = get_version_from_path(file)

    project_doc = legacy_io.find_one({"type": "project"})
    asset_doc = legacy_io.find_one({
        "type": "asset",
        "name": data["avalon"]["asset"]
    })
    project_name = anatomy.project_name
    project_doc = get_project(project_name)
    asset_doc = get_asset_by_name(project_name, data["avalon"]["asset"])
    task_name = os.environ["AVALON_TASK"]
    host_name = os.environ["AVALON_APP"]
    context_data = get_workdir_data(

@@ -1692,12 +1748,13 @@ class WorkfileSettings(object):

    """

    def __init__(self,
                 root_node=None,
                 nodes=None,
                 **kwargs):
        Context._project_doc = kwargs.get(
            "project") or legacy_io.find_one({"type": "project"})
    def __init__(self, root_node=None, nodes=None, **kwargs):
        project_doc = kwargs.get("project")
        if project_doc is None:
            project_name = legacy_io.active_project()
            project_doc = get_project(project_name)

        Context._project_doc = project_doc
        self._asset = (
            kwargs.get("asset_name")
            or legacy_io.Session["AVALON_ASSET"]

@@ -2047,9 +2104,10 @@ class WorkfileSettings(object):
    def reset_resolution(self):
        """Set resolution to project resolution."""
        log.info("Resetting resolution")
        project = legacy_io.find_one({"type": "project"})
        asset = legacy_io.Session["AVALON_ASSET"]
        asset = legacy_io.find_one({"name": asset, "type": "asset"})
        project_name = legacy_io.active_project()
        project = get_project(project_name)
        asset_name = legacy_io.Session["AVALON_ASSET"]
        asset = get_asset_by_name(project_name, asset_name)
        asset_data = asset.get('data', {})

        data = {

@@ -2151,29 +2209,6 @@ class WorkfileSettings(object):
        set_context_favorites(favorite_items)


def get_hierarchical_attr(entity, attr, default=None):
    attr_parts = attr.split('.')
    value = entity
    for part in attr_parts:
        value = value.get(part)
        if not value:
            break

    if value or entity["type"].lower() == "project":
        return value

    parent_id = entity["parent"]
    if (
        entity["type"].lower() == "asset"
        and entity.get("data", {}).get("visualParent")
    ):
        parent_id = entity["data"]["visualParent"]

    parent = legacy_io.find_one({"_id": parent_id})

    return get_hierarchical_attr(parent, attr)


def get_write_node_template_attr(node):
    ''' Gets all defined data from presets
    '''

@@ -2374,6 +2409,8 @@ def process_workfile_builder():
        env_value_to_bool,
        get_custom_workfile_template
    )
    # to avoid looping of the callback, remove it!
    nuke.removeOnCreate(process_workfile_builder, nodeClass="Root")

    # get state from settings
    workfile_builder = get_current_project_settings()["nuke"].get(

@@ -2429,9 +2466,6 @@ def process_workfile_builder():
    if not openlv_on or not os.path.exists(last_workfile_path):
        return

    # to avoid looping of the callback, remove it!
    nuke.removeOnCreate(process_workfile_builder, nodeClass="Root")

    log.info("Opening last workfile...")
    # open workfile
    open_file(last_workfile_path)

@@ -2617,6 +2651,57 @@ class DirmapCache:
        return cls._sync_module


@contextlib.contextmanager
def _duplicate_node_temp():
    """Create a temp file where a node is pasted during duplication.

    This is to avoid using the clipboard for node duplication.
    """

    duplicate_node_temp_path = os.path.join(
        tempfile.gettempdir(),
        "openpype_nuke_duplicate_temp_{}".format(os.getpid())
    )

    # This can happen only if 'duplicate_node' would be
    if os.path.exists(duplicate_node_temp_path):
        log.warning((
            "Temp file for node duplication already exists."
            " Trying to remove {}"
        ).format(duplicate_node_temp_path))
        os.remove(duplicate_node_temp_path)

    try:
        # Yield the path where the node can be copied
        yield duplicate_node_temp_path

    finally:
        # Remove the file at the end
        os.remove(duplicate_node_temp_path)


def duplicate_node(node):
    reset_selection()

    # select required node for duplication
    node.setSelected(True)

    with _duplicate_node_temp() as filepath:
        # copy selected to temp filepath
        nuke.nodeCopy(filepath)

        # reset selection
        reset_selection()

        # paste node and selection is on it only
        dupli_node = nuke.nodePaste(filepath)

        # reset selection
        reset_selection()

    return dupli_node


def dirmap_file_name_filter(file_name):
    """Nuke callback function with single full path argument.
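`duplicate_node` now round-trips through a per-process temp file instead of the clipboard, so duplicating a node no longer clobbers whatever the artist had copied. A hedged usage sketch inside a Nuke session; the node name is made up:

```python
import nuke

# Duplicate an existing node without touching the user's clipboard
src = nuke.toNode("Blur1")  # hypothetical node name
copy = duplicate_node(src)

# Selection is reset afterwards; the returned node is the fresh copy
copy["name"].setValue("Blur1_copy")
```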
@@ -14,12 +14,12 @@ from openpype.pipeline import (
from .lib import (
    Knobby,
    check_subsetname_exists,
    reset_selection,
    maintained_selection,
    set_avalon_knob_data,
    add_publish_knob,
    get_nuke_imageio_settings,
    set_node_knobs_from_settings
    set_node_knobs_from_settings,
    get_view_process_node
)


@@ -216,37 +216,6 @@ class ExporterReview(object):

        self.data["representations"].append(repre)

    def get_view_input_process_node(self):
        """
        Will get any active view process.

        Arguments:
            self (class): in object definition

        Returns:
            nuke.Node: copy node of Input Process node
        """
        reset_selection()
        ipn_orig = None
        for v in nuke.allNodes(filter="Viewer"):
            ip = v["input_process"].getValue()
            ipn = v["input_process_node"].getValue()
            if "VIEWER_INPUT" not in ipn and ip:
                ipn_orig = nuke.toNode(ipn)
                ipn_orig.setSelected(True)

        if ipn_orig:
            # copy selected to clipboard
            nuke.nodeCopy("%clipboard%")
            # reset selection
            reset_selection()
            # paste node and selection is on it only
            nuke.nodePaste("%clipboard%")
            # assign to variable
            ipn = nuke.selectedNode()

            return ipn

    def get_imageio_baking_profile(self):
        from . import lib as opnlib
        nuke_imageio = opnlib.get_nuke_imageio_settings()

@@ -311,7 +280,7 @@ class ExporterReviewLut(ExporterReview):
        self._temp_nodes = []
        self.log.info("Deleted nodes...")

    def generate_lut(self):
    def generate_lut(self, **kwargs):
        bake_viewer_process = kwargs["bake_viewer_process"]
        bake_viewer_input_process_node = kwargs[
            "bake_viewer_input_process"]

@@ -329,7 +298,7 @@ class ExporterReviewLut(ExporterReview):
        if bake_viewer_process:
            # Node View Process
            if bake_viewer_input_process_node:
                ipn = self.get_view_input_process_node()
                ipn = get_view_process_node()
                if ipn is not None:
                    # connect
                    ipn.setInput(0, self.previous_node)

@@ -511,7 +480,7 @@ class ExporterReviewMov(ExporterReview):
        if bake_viewer_process:
            if bake_viewer_input_process_node:
                # View Process node
                ipn = self.get_view_input_process_node()
                ipn = get_view_process_node()
                if ipn is not None:
                    # connect
                    ipn.setInput(0, self.previous_node)
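`generate_lut` now takes its bake options as keyword arguments and delegates view-process lookup to the shared `get_view_process_node` helper. A hedged call sketch; the exporter instance and flag values are illustrative:

```python
# exporter is an ExporterReviewLut instance prepared elsewhere
exporter.generate_lut(
    bake_viewer_process=True,
    # when True, get_view_process_node() duplicates the active viewer's
    # input process node so it can be baked into the LUT chain
    bake_viewer_input_process=True,
)
```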
@ -1,6 +1,10 @@
|
|||
import nuke
|
||||
import nukescripts
|
||||
|
||||
from openpype.client import (
|
||||
get_version_by_id,
|
||||
get_last_version_by_subset_id,
|
||||
)
|
||||
from openpype.pipeline import (
|
||||
legacy_io,
|
||||
load,
|
||||
|
|
@ -188,18 +192,17 @@ class LoadBackdropNodes(load.LoaderPlugin):
|
|||
|
||||
# get main variables
|
||||
# Get version from io
|
||||
version = legacy_io.find_one({
|
||||
"type": "version",
|
||||
"_id": representation["parent"]
|
||||
})
|
||||
project_name = legacy_io.active_project()
|
||||
version_doc = get_version_by_id(project_name, representation["parent"])
|
||||
|
||||
# get corresponding node
|
||||
GN = nuke.toNode(container['objectName'])
|
||||
|
||||
file = get_representation_path(representation).replace("\\", "/")
|
||||
context = representation["context"]
|
||||
|
||||
name = container['name']
|
||||
version_data = version.get("data", {})
|
||||
vname = version.get("name", None)
|
||||
version_data = version_doc.get("data", {})
|
||||
vname = version_doc.get("name", None)
|
||||
first = version_data.get("frameStart", None)
|
||||
last = version_data.get("frameEnd", None)
|
||||
namespace = container['namespace']
|
||||
|
|
@@ -237,20 +240,18 @@ class LoadBackdropNodes(load.LoaderPlugin):
         GN["name"].setValue(object_name)

-        # get all versions in list
-        versions = legacy_io.find({
-            "type": "version",
-            "parent": version["parent"]
-        }).distinct('name')
-
-        max_version = max(versions)
+        last_version_doc = get_last_version_by_subset_id(
+            project_name, version_doc["parent"], fields=["_id"]
+        )

         # change color of node
-        if version.get("name") not in [max_version]:
-            GN["tile_color"].setValue(int("0xd88467ff", 16))
+        if version_doc["_id"] == last_version_doc["_id"]:
+            color_value = self.node_color
         else:
-            GN["tile_color"].setValue(int(self.node_color, 16))
+            color_value = "0xd88467ff"
+        GN["tile_color"].setValue(int(color_value, 16))

-        self.log.info("updated to version: {}".format(version.get("name")))
+        self.log.info("updated to version: {}".format(version_doc.get("name")))

         return update_container(GN, data_imprint)

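Every loader touched by this merge repeats the same replacement: instead of pulling every version name through `legacy_io` and comparing against `max(versions)`, it fetches only the last version's `_id` and compares documents. A condensed sketch of that shared pattern, assuming the same `openpype.client` helpers (the function and color argument are illustrative):

    from openpype.client import get_version_by_id, get_last_version_by_subset_id
    from openpype.pipeline import legacy_io

    def color_node_by_version(node, representation, up_to_date_color):
        project_name = legacy_io.active_project()
        version_doc = get_version_by_id(project_name, representation["parent"])
        # Only the _id is needed for the comparison, so the query is trimmed.
        last_version_doc = get_last_version_by_subset_id(
            project_name, version_doc["parent"], fields=["_id"]
        )
        if version_doc["_id"] == last_version_doc["_id"]:
            color_value = up_to_date_color
        else:
            color_value = "0xd88467ff"  # "outdated" tint used by the loaders
        node["tile_color"].setValue(int(color_value, 16))
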
@@ -1,5 +1,9 @@
 import nuke

+from openpype.client import (
+    get_version_by_id,
+    get_last_version_by_subset_id
+)
 from openpype.pipeline import (
     legacy_io,
     load,
@@ -102,17 +106,16 @@ class AlembicCameraLoader(load.LoaderPlugin):
             None
         """
-        # Get version from io
-        version = legacy_io.find_one({
-            "type": "version",
-            "_id": representation["parent"]
-        })
+        project_name = legacy_io.active_project()
+        version_doc = get_version_by_id(project_name, representation["parent"])
+
         object_name = container['objectName']
         # get corresponding node
         camera_node = nuke.toNode(object_name)

         # get main variables
-        version_data = version.get("data", {})
-        vname = version.get("name", None)
+        version_data = version_doc.get("data", {})
+        vname = version_doc.get("name", None)
         first = version_data.get("frameStart", None)
         last = version_data.get("frameEnd", None)
         fps = version_data.get("fps") or nuke.root()["fps"].getValue()
@@ -165,28 +168,27 @@ class AlembicCameraLoader(load.LoaderPlugin):
             d.setInput(index, camera_node)

         # color node by correct color by actual version
-        self.node_version_color(version, camera_node)
+        self.node_version_color(version_doc, camera_node)

-        self.log.info("updated to version: {}".format(version.get("name")))
+        self.log.info("updated to version: {}".format(version_doc.get("name")))

         return update_container(camera_node, data_imprint)

-    def node_version_color(self, version, node):
+    def node_version_color(self, version_doc, node):
         """ Coloring a node by correct color by actual version
         """
-        # get all versions in list
-        versions = legacy_io.find({
-            "type": "version",
-            "parent": version["parent"]
-        }).distinct('name')
-
-        max_version = max(versions)
+        project_name = legacy_io.active_project()
+        last_version_doc = get_last_version_by_subset_id(
+            project_name, version_doc["parent"], fields=["_id"]
+        )

         # change color of node
-        if version.get("name") not in [max_version]:
-            node["tile_color"].setValue(int("0xd88467ff", 16))
+        if version_doc["_id"] == last_version_doc["_id"]:
+            color_value = self.node_color
         else:
-            node["tile_color"].setValue(int(self.node_color, 16))
+            color_value = "0xd88467ff"
+        node["tile_color"].setValue(int(color_value, 16))

     def switch(self, container, representation):
         self.update(container, representation)

@@ -1,6 +1,10 @@
 import nuke
 import qargparse

+from openpype.client import (
+    get_version_by_id,
+    get_last_version_by_subset_id,
+)
 from openpype.pipeline import (
     legacy_io,
     get_representation_path,
@@ -196,11 +200,10 @@ class LoadClip(plugin.NukeLoader):

         start_at_workfile = bool("start at" in read_node['frame_mode'].value())

-        version = legacy_io.find_one({
-            "type": "version",
-            "_id": representation["parent"]
-        })
-        version_data = version.get("data", {})
+        project_name = legacy_io.active_project()
+        version_doc = get_version_by_id(project_name, representation["parent"])
+
+        version_data = version_doc.get("data", {})
         repre_id = representation["_id"]

         repre_cont = representation["context"]
@@ -251,7 +254,7 @@ class LoadClip(plugin.NukeLoader):
             "representation": str(representation["_id"]),
             "frameStart": str(first),
             "frameEnd": str(last),
-            "version": str(version.get("name")),
+            "version": str(version_doc.get("name")),
             "db_colorspace": colorspace,
             "source": version_data.get("source"),
             "handleStart": str(self.handle_start),
@@ -264,26 +267,24 @@ class LoadClip(plugin.NukeLoader):
         if used_colorspace:
             updated_dict["used_colorspace"] = used_colorspace

+        last_version_doc = get_last_version_by_subset_id(
+            project_name, version_doc["parent"], fields=["_id"]
+        )
         # change color of read_node
-        # get all versions in list
-        versions = legacy_io.find({
-            "type": "version",
-            "parent": version["parent"]
-        }).distinct('name')
-
-        max_version = max(versions)
-
-        if version.get("name") not in [max_version]:
-            read_node["tile_color"].setValue(int("0xd84f20ff", 16))
+        if version_doc["_id"] == last_version_doc["_id"]:
+            color_value = "0x4ecd25ff"
         else:
-            read_node["tile_color"].setValue(int("0x4ecd25ff", 16))
+            color_value = "0xd84f20ff"
+        read_node["tile_color"].setValue(int(color_value, 16))

         # Update the imprinted representation
         update_container(
             read_node,
             updated_dict
         )
-        self.log.info("updated to version: {}".format(version.get("name")))
+        self.log.info(
+            "updated to version: {}".format(version_doc.get("name"))
+        )

         if version_data.get("retime", None):
             self._make_retimes(read_node, version_data)

@@ -3,6 +3,10 @@ from collections import OrderedDict
 import nuke
 import six

+from openpype.client import (
+    get_version_by_id,
+    get_last_version_by_subset_id,
+)
 from openpype.pipeline import (
     legacy_io,
     load,
@@ -148,17 +152,16 @@ class LoadEffects(load.LoaderPlugin):
         """
         # get main variables
-        # Get version from io
-        version = legacy_io.find_one({
-            "type": "version",
-            "_id": representation["parent"]
-        })
+        project_name = legacy_io.active_project()
+        version_doc = get_version_by_id(project_name, representation["parent"])

         # get corresponding node
         GN = nuke.toNode(container['objectName'])

         file = get_representation_path(representation).replace("\\", "/")
         name = container['name']
-        version_data = version.get("data", {})
-        vname = version.get("name", None)
+        version_data = version_doc.get("data", {})
+        vname = version_doc.get("name", None)
         first = version_data.get("frameStart", None)
         last = version_data.get("frameEnd", None)
         workfile_first_frame = int(nuke.root()["first_frame"].getValue())
@@ -243,21 +246,19 @@ class LoadEffects(load.LoaderPlugin):
         # try to find parent read node
         self.connect_read_node(GN, namespace, json_f["assignTo"])

-        # get all versions in list
-        versions = legacy_io.find({
-            "type": "version",
-            "parent": version["parent"]
-        }).distinct('name')
-
-        max_version = max(versions)
+        last_version_doc = get_last_version_by_subset_id(
+            project_name, version_doc["parent"], fields=["_id"]
+        )

         # change color of node
-        if version.get("name") not in [max_version]:
-            GN["tile_color"].setValue(int("0xd84f20ff", 16))
+        if version_doc["_id"] == last_version_doc["_id"]:
+            color_value = "0x3469ffff"
         else:
-            GN["tile_color"].setValue(int("0x3469ffff", 16))
+            color_value = "0xd84f20ff"
+
+        GN["tile_color"].setValue(int(color_value, 16))

-        self.log.info("updated to version: {}".format(version.get("name")))
+        self.log.info("updated to version: {}".format(version_doc.get("name")))

     def connect_read_node(self, group_node, asset, subset):
         """

@@ -3,6 +3,10 @@ from collections import OrderedDict
 import six
 import nuke

+from openpype.client import (
+    get_version_by_id,
+    get_last_version_by_subset_id,
+)
 from openpype.pipeline import (
     legacy_io,
     load,
@@ -153,17 +157,16 @@ class LoadEffectsInputProcess(load.LoaderPlugin):

         # get main variables
-        # Get version from io
-        version = legacy_io.find_one({
-            "type": "version",
-            "_id": representation["parent"]
-        })
+        project_name = legacy_io.active_project()
+        version_doc = get_version_by_id(project_name, representation["parent"])

         # get corresponding node
         GN = nuke.toNode(container['objectName'])

         file = get_representation_path(representation).replace("\\", "/")
         name = container['name']
-        version_data = version.get("data", {})
-        vname = version.get("name", None)
+        version_data = version_doc.get("data", {})
+        vname = version_doc.get("name", None)
         first = version_data.get("frameStart", None)
         last = version_data.get("frameEnd", None)
         workfile_first_frame = int(nuke.root()["first_frame"].getValue())
@@ -251,20 +254,18 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
         # return

-        # get all versions in list
-        versions = legacy_io.find({
-            "type": "version",
-            "parent": version["parent"]
-        }).distinct('name')
-
-        max_version = max(versions)
+        last_version_doc = get_last_version_by_subset_id(
+            project_name, version_doc["parent"], fields=["_id"]
+        )

         # change color of node
-        if version.get("name") not in [max_version]:
-            GN["tile_color"].setValue(int("0xd84f20ff", 16))
+        if version_doc["_id"] == last_version_doc["_id"]:
+            color_value = "0x3469ffff"
         else:
-            GN["tile_color"].setValue(int("0x3469ffff", 16))
+            color_value = "0xd84f20ff"
+        GN["tile_color"].setValue(int(color_value, 16))

-        self.log.info("updated to version: {}".format(version.get("name")))
+        self.log.info("updated to version: {}".format(version_doc.get("name")))

     def connect_active_viewer(self, group_node):
         """

@@ -1,5 +1,9 @@
 import nuke

+from openpype.client import (
+    get_version_by_id,
+    get_last_version_by_subset_id,
+)
 from openpype.pipeline import (
     legacy_io,
     load,
@@ -101,17 +105,16 @@ class LoadGizmo(load.LoaderPlugin):

         # get main variables
-        # Get version from io
-        version = legacy_io.find_one({
-            "type": "version",
-            "_id": representation["parent"]
-        })
+        project_name = legacy_io.active_project()
+        version_doc = get_version_by_id(project_name, representation["parent"])

         # get corresponding node
         GN = nuke.toNode(container['objectName'])

         file = get_representation_path(representation).replace("\\", "/")
         name = container['name']
-        version_data = version.get("data", {})
-        vname = version.get("name", None)
+        version_data = version_doc.get("data", {})
+        vname = version_doc.get("name", None)
         first = version_data.get("frameStart", None)
         last = version_data.get("frameEnd", None)
         namespace = container['namespace']
@@ -148,21 +151,18 @@ class LoadGizmo(load.LoaderPlugin):
         GN.setXYpos(xpos, ypos)
         GN["name"].setValue(object_name)

-        # get all versions in list
-        versions = legacy_io.find({
-            "type": "version",
-            "parent": version["parent"]
-        }).distinct('name')
-
-        max_version = max(versions)
+        last_version_doc = get_last_version_by_subset_id(
+            project_name, version_doc["parent"], fields=["_id"]
+        )

         # change color of node
-        if version.get("name") not in [max_version]:
-            GN["tile_color"].setValue(int("0xd88467ff", 16))
+        if version_doc["_id"] == last_version_doc["_id"]:
+            color_value = self.node_color
         else:
-            GN["tile_color"].setValue(int(self.node_color, 16))
+            color_value = "0xd88467ff"
+        GN["tile_color"].setValue(int(color_value, 16))

-        self.log.info("updated to version: {}".format(version.get("name")))
+        self.log.info("updated to version: {}".format(version_doc.get("name")))

         return update_container(GN, data_imprint)

@@ -1,6 +1,10 @@
 import nuke
 import six

+from openpype.client import (
+    get_version_by_id,
+    get_last_version_by_subset_id,
+)
 from openpype.pipeline import (
     legacy_io,
     load,
@@ -108,17 +112,16 @@ class LoadGizmoInputProcess(load.LoaderPlugin):

         # get main variables
-        # Get version from io
-        version = legacy_io.find_one({
-            "type": "version",
-            "_id": representation["parent"]
-        })
+        project_name = legacy_io.active_project()
+        version_doc = get_version_by_id(project_name, representation["parent"])

         # get corresponding node
         GN = nuke.toNode(container['objectName'])

         file = get_representation_path(representation).replace("\\", "/")
         name = container['name']
-        version_data = version.get("data", {})
-        vname = version.get("name", None)
+        version_data = version_doc.get("data", {})
+        vname = version_doc.get("name", None)
         first = version_data.get("frameStart", None)
         last = version_data.get("frameEnd", None)
         namespace = container['namespace']
@@ -155,21 +158,18 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
         GN.setXYpos(xpos, ypos)
         GN["name"].setValue(object_name)

-        # get all versions in list
-        versions = legacy_io.find({
-            "type": "version",
-            "parent": version["parent"]
-        }).distinct('name')
-
-        max_version = max(versions)
+        last_version_doc = get_last_version_by_subset_id(
+            project_name, version_doc["parent"], fields=["_id"]
+        )

         # change color of node
-        if version.get("name") not in [max_version]:
-            GN["tile_color"].setValue(int("0xd88467ff", 16))
+        if version_doc["_id"] == last_version_doc["_id"]:
+            color_value = self.node_color
         else:
-            GN["tile_color"].setValue(int(self.node_color, 16))
+            color_value = "0xd88467ff"
+        GN["tile_color"].setValue(int(color_value, 16))

-        self.log.info("updated to version: {}".format(version.get("name")))
+        self.log.info("updated to version: {}".format(version_doc.get("name")))

         return update_container(GN, data_imprint)

@@ -2,6 +2,10 @@ import nuke

 import qargparse

+from openpype.client import (
+    get_version_by_id,
+    get_last_version_by_subset_id,
+)
 from openpype.pipeline import (
     legacy_io,
     load,
@@ -186,20 +190,13 @@ class LoadImage(load.LoaderPlugin):
                 format(frame_number, "0{}".format(padding)))

         # Get start frame from version data
-        version = legacy_io.find_one({
-            "type": "version",
-            "_id": representation["parent"]
-        })
-
-        # get all versions in list
-        versions = legacy_io.find({
-            "type": "version",
-            "parent": version["parent"]
-        }).distinct('name')
-
-        max_version = max(versions)
+        project_name = legacy_io.active_project()
+        version_doc = get_version_by_id(project_name, representation["parent"])
+        last_version_doc = get_last_version_by_subset_id(
+            project_name, version_doc["parent"], fields=["_id"]
+        )

-        version_data = version.get("data", {})
+        version_data = version_doc.get("data", {})

         last = first = int(frame_number)

@@ -215,7 +212,7 @@ class LoadImage(load.LoaderPlugin):
             "representation": str(representation["_id"]),
             "frameStart": str(first),
             "frameEnd": str(last),
-            "version": str(version.get("name")),
+            "version": str(version_doc.get("name")),
             "colorspace": version_data.get("colorspace"),
             "source": version_data.get("source"),
             "fps": str(version_data.get("fps")),
@@ -223,17 +220,18 @@ class LoadImage(load.LoaderPlugin):
         })

         # change color of node
-        if version.get("name") not in [max_version]:
-            node["tile_color"].setValue(int("0xd84f20ff", 16))
+        if version_doc["_id"] == last_version_doc["_id"]:
+            color_value = "0x4ecd25ff"
         else:
-            node["tile_color"].setValue(int("0x4ecd25ff", 16))
+            color_value = "0xd84f20ff"
+        node["tile_color"].setValue(int(color_value, 16))

         # Update the imprinted representation
         update_container(
             node,
             updated_dict
         )
-        self.log.info("updated to version: {}".format(version.get("name")))
+        self.log.info("updated to version: {}".format(version_doc.get("name")))

     def remove(self, container):
         node = nuke.toNode(container['objectName'])

@@ -1,5 +1,9 @@
 import nuke

+from openpype.client import (
+    get_version_by_id,
+    get_last_version_by_subset_id,
+)
 from openpype.pipeline import (
     legacy_io,
     load,
@@ -60,6 +64,12 @@ class AlembicModelLoader(load.LoaderPlugin):
                 inpanel=False
             )
+        model_node.forceValidate()
+
+        # Ensure all items are imported and selected.
+        scene_view = model_node.knob('scene_view')
+        scene_view.setImportedItems(scene_view.getAllItems())
+        scene_view.setSelectedItems(scene_view.getAllItems())

         model_node["frame_rate"].setValue(float(fps))

         # workaround because nuke's bug is not adding
@@ -100,17 +110,15 @@ class AlembicModelLoader(load.LoaderPlugin):
             None
         """
-        # Get version from io
-        version = legacy_io.find_one({
-            "type": "version",
-            "_id": representation["parent"]
-        })
+        project_name = legacy_io.active_project()
+        version_doc = get_version_by_id(project_name, representation["parent"])
         object_name = container['objectName']
         # get corresponding node
         model_node = nuke.toNode(object_name)

         # get main variables
-        version_data = version.get("data", {})
-        vname = version.get("name", None)
+        version_data = version_doc.get("data", {})
+        vname = version_doc.get("name", None)
         first = version_data.get("frameStart", None)
         last = version_data.get("frameEnd", None)
         fps = version_data.get("fps") or nuke.root()["fps"].getValue()
@@ -142,6 +150,11 @@ class AlembicModelLoader(load.LoaderPlugin):
         model_node["frame_rate"].setValue(float(fps))
         model_node["file"].setValue(file)

+        # Ensure all items are imported and selected.
+        scene_view = model_node.knob('scene_view')
+        scene_view.setImportedItems(scene_view.getAllItems())
+        scene_view.setSelectedItems(scene_view.getAllItems())
+
         # workaround because nuke's bug is
         # not adding animation keys properly
         xpos = model_node.xpos()
@@ -163,28 +176,26 @@ class AlembicModelLoader(load.LoaderPlugin):
             d.setInput(index, model_node)

         # color node by correct color by actual version
-        self.node_version_color(version, model_node)
+        self.node_version_color(version_doc, model_node)

-        self.log.info("updated to version: {}".format(version.get("name")))
+        self.log.info("updated to version: {}".format(version_doc.get("name")))

         return update_container(model_node, data_imprint)

     def node_version_color(self, version, node):
-        """ Coloring a node by correct color by actual version
-        """
-        # get all versions in list
-        versions = legacy_io.find({
-            "type": "version",
-            "parent": version["parent"]
-        }).distinct('name')
-
-        max_version = max(versions)
+        """ Coloring a node by correct color by actual version"""
+        project_name = legacy_io.active_project()
+        last_version_doc = get_last_version_by_subset_id(
+            project_name, version["parent"], fields=["_id"]
+        )

         # change color of node
-        if version.get("name") not in [max_version]:
-            node["tile_color"].setValue(int("0xd88467ff", 16))
+        if version["_id"] == last_version_doc["_id"]:
+            color_value = self.node_color
         else:
-            node["tile_color"].setValue(int(self.node_color, 16))
+            color_value = "0xd88467ff"
+        node["tile_color"].setValue(int(color_value, 16))

     def switch(self, container, representation):
         self.update(container, representation)

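The two blocks added to the model loader above drive the ReadGeo scene view: without forcing validation and importing all items, a freshly created or updated ReadGeo can come up with nothing loaded. A hedged sketch of the same calls in isolation — the knob names are taken from the diff, while the node creation line and file path are illustrative:

    import nuke

    # Hypothetical ReadGeo pointed at an Alembic file.
    model_node = nuke.createNode(
        "ReadGeo2", "file {/path/to/model.abc}", inpanel=False)
    model_node.forceValidate()

    # Import and select every item in the Alembic scene hierarchy so the
    # whole model is loaded, mirroring the loader's create/update steps.
    scene_view = model_node.knob("scene_view")
    scene_view.setImportedItems(scene_view.getAllItems())
    scene_view.setSelectedItems(scene_view.getAllItems())
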
@@ -1,5 +1,9 @@
 import nuke

+from openpype.client import (
+    get_version_by_id,
+    get_last_version_by_subset_id,
+)
 from openpype.pipeline import (
     legacy_io,
     load,
@@ -116,29 +120,23 @@ class LinkAsGroup(load.LoaderPlugin):
         root = get_representation_path(representation).replace("\\", "/")

         # Get start frame from version data
-        version = legacy_io.find_one({
-            "type": "version",
-            "_id": representation["parent"]
-        })
-
-        # get all versions in list
-        versions = legacy_io.find({
-            "type": "version",
-            "parent": version["parent"]
-        }).distinct('name')
-
-        max_version = max(versions)
+        project_name = legacy_io.active_project()
+        version_doc = get_version_by_id(project_name, representation["parent"])
+        last_version_doc = get_last_version_by_subset_id(
+            project_name, version_doc["parent"], fields=["_id"]
+        )

         updated_dict = {}
+        version_data = version_doc["data"]
         updated_dict.update({
             "representation": str(representation["_id"]),
-            "frameEnd": version["data"].get("frameEnd"),
-            "version": version.get("name"),
-            "colorspace": version["data"].get("colorspace"),
-            "source": version["data"].get("source"),
-            "handles": version["data"].get("handles"),
-            "fps": version["data"].get("fps"),
-            "author": version["data"].get("author")
+            "frameEnd": version_data.get("frameEnd"),
+            "version": version_doc.get("name"),
+            "colorspace": version_data.get("colorspace"),
+            "source": version_data.get("source"),
+            "handles": version_data.get("handles"),
+            "fps": version_data.get("fps"),
+            "author": version_data.get("author")
         })

         # Update the imprinted representation
@@ -150,12 +148,13 @@ class LinkAsGroup(load.LoaderPlugin):
         node["file"].setValue(root)

         # change color of node
-        if version.get("name") not in [max_version]:
-            node["tile_color"].setValue(int("0xd84f20ff", 16))
+        if version_doc["_id"] == last_version_doc["_id"]:
+            color_value = "0xff0ff0ff"
         else:
-            node["tile_color"].setValue(int("0xff0ff0ff", 16))
+            color_value = "0xd84f20ff"
+        node["tile_color"].setValue(int(color_value, 16))

-        self.log.info("updated to version: {}".format(version.get("name")))
+        self.log.info("updated to version: {}".format(version_doc.get("name")))

     def remove(self, container):
         node = nuke.toNode(container['objectName'])

@@ -3,7 +3,10 @@ import re
 import nuke
 import pyblish.api

+from openpype.client import get_asset_by_name
 from openpype.pipeline import legacy_io


@@ -16,12 +17,11 @@ class CollectNukeReads(pyblish.api.InstancePlugin):
     families = ["source"]

     def process(self, instance):
-        asset_data = legacy_io.find_one({
-            "type": "asset",
-            "name": legacy_io.Session["AVALON_ASSET"]
-        })
+        project_name = legacy_io.active_project()
+        asset_name = legacy_io.Session["AVALON_ASSET"]
+        asset_doc = get_asset_by_name(project_name, asset_name)

-        self.log.debug("asset_data: {}".format(asset_data["data"]))
+        self.log.debug("asset_doc: {}".format(asset_doc["data"]))

         self.log.debug("checking instance: {}".format(instance))

@@ -127,7 +127,7 @@ class CollectNukeReads(pyblish.api.InstancePlugin):
             "frameStart": first_frame,
             "frameEnd": last_frame,
             "colorspace": colorspace,
-            "handles": int(asset_data["data"].get("handles", 0)),
+            "handles": int(asset_doc["data"].get("handles", 0)),
             "step": 1,
             "fps": int(nuke.root()['fps'].value())
         })

@@ -42,12 +42,22 @@ class NukeRenderLocal(openpype.api.Extractor):
         self.log.info("Start frame: {}".format(first_frame))
         self.log.info("End frame: {}".format(last_frame))

-        # write node url might contain nuke's ctl expressin
-        # as [python ...]/path...
-        path = node["file"].evaluate()
+        node_file = node["file"]
+        # Collect expected filepaths for each frame
+        # - paths are first collected into a set so a still-image output
+        #   (same path for every frame) is not duplicated, then sorted
+        #   and converted to a list
+        expected_paths = list(sorted({
+            node_file.evaluate(frame)
+            for frame in range(first_frame, last_frame + 1)
+        }))
+        # Extract only filenames for representation
+        filenames = [
+            os.path.basename(filepath)
+            for filepath in expected_paths
+        ]

         # Ensure output directory exists.
-        out_dir = os.path.dirname(path)
+        out_dir = os.path.dirname(expected_paths[0])
         if not os.path.exists(out_dir):
             os.makedirs(out_dir)

@@ -67,12 +77,11 @@ class NukeRenderLocal(openpype.api.Extractor):
         if "representations" not in instance.data:
             instance.data["representations"] = []

-        collected_frames = os.listdir(out_dir)
-        if len(collected_frames) == 1:
+        if len(filenames) == 1:
             repre = {
                 'name': ext,
                 'ext': ext,
-                'files': collected_frames.pop(),
+                'files': filenames[0],
                 "stagingDir": out_dir
             }
         else:
@@ -81,7 +90,7 @@ class NukeRenderLocal(openpype.api.Extractor):
                 'ext': ext,
                 'frameStart': "%0{}d".format(
                     len(str(last_frame))) % first_frame,
-                'files': collected_frames,
+                'files': filenames,
                 "stagingDir": out_dir
             }
         instance.data["representations"].append(repre)
@@ -105,7 +114,7 @@ class NukeRenderLocal(openpype.api.Extractor):
             families.remove('still.local')
             instance.data["families"] = families

-        collections, remainder = clique.assemble(collected_frames)
+        collections, remainder = clique.assemble(filenames)
         self.log.info('collections: {}'.format(str(collections)))

         if collections:

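The extractor now derives the frame list from the write node itself instead of listing the staging directory, which keeps stray files out of the representation. A standalone sketch of the same evaluation, assuming a Nuke session where the write node name and frame range are placeholders:

    import os
    import nuke

    # Hypothetical write node; its "file" knob may hold an expression
    # or a padded path such as render.%04d.exr.
    node = nuke.toNode("Write1")
    first_frame, last_frame = 1001, 1005

    node_file = node["file"]
    # evaluate() resolves the knob per frame; the set removes duplicates
    # when a still image yields the same path for every frame.
    expected_paths = list(sorted({
        node_file.evaluate(frame)
        for frame in range(first_frame, last_frame + 1)
    }))
    filenames = [os.path.basename(path) for path in expected_paths]
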
@@ -1,4 +1,5 @@
 import os
+from pprint import pformat
 import re
 import pyblish.api
 import openpype
@@ -50,6 +51,8 @@ class ExtractReviewDataMov(openpype.api.Extractor):
         with maintained_selection():
             generated_repres = []
             for o_name, o_data in self.outputs.items():
+                self.log.debug(
+                    "o_name: {}, o_data: {}".format(o_name, pformat(o_data)))
                 f_families = o_data["filter"]["families"]
                 f_task_types = o_data["filter"]["task_types"]
                 f_subsets = o_data["filter"]["subsets"]
@@ -88,7 +91,13 @@ class ExtractReviewDataMov(openpype.api.Extractor):
                 # check if settings have more than one preset
                 # so we don't need to add outputName to representation
                 # in case there is only one preset
-                multiple_presets = bool(len(self.outputs.keys()) > 1)
+                multiple_presets = len(self.outputs.keys()) > 1
+
+                # adding bake presets to instance data for other plugins
+                if not instance.data.get("bakePresets"):
+                    instance.data["bakePresets"] = {}
+                # add preset to bakePresets
+                instance.data["bakePresets"][o_name] = o_data

                 # create exporter instance
                 exporter = plugin.ExporterReviewMov(

@@ -1,11 +1,16 @@
 import os
+from pprint import pformat
 import nuke
 import copy

 import pyblish.api

 import openpype
-from openpype.hosts.nuke.api.lib import maintained_selection
+from openpype.hosts.nuke.api import (
+    maintained_selection,
+    duplicate_node,
+    get_view_process_node
+)


 class ExtractSlateFrame(openpype.api.Extractor):
@@ -15,14 +20,13 @@ class ExtractSlateFrame(openpype.api.Extractor):

     """

-    order = pyblish.api.ExtractorOrder - 0.001
+    order = pyblish.api.ExtractorOrder + 0.011
     label = "Extract Slate Frame"

     families = ["slate"]
     hosts = ["nuke"]

     # Settings values
+    # - can be extended by other attributes from node in the future
     key_value_mapping = {
         "f_submission_note": [True, "{comment}"],
         "f_submitting_for": [True, "{intent[value]}"],
@@ -30,44 +34,107 @@ class ExtractSlateFrame(openpype.api.Extractor):
     }

     def process(self, instance):
-        if hasattr(self, "viewer_lut_raw"):
-            self.viewer_lut_raw = self.viewer_lut_raw
-        else:
-            self.viewer_lut_raw = False
-
-        if "representations" not in instance.data:
-            instance.data["representations"] = []
+        self._create_staging_dir(instance)

         with maintained_selection():
             self.log.debug("instance: {}".format(instance))
             self.log.debug("instance.data[families]: {}".format(
                 instance.data["families"]))

-            self.render_slate(instance)
+            if instance.data.get("bakePresets"):
+                for o_name, o_data in instance.data["bakePresets"].items():
+                    self.log.info("_ o_name: {}, o_data: {}".format(
+                        o_name, pformat(o_data)))
+                    self.render_slate(
+                        instance,
+                        o_name,
+                        o_data["bake_viewer_process"],
+                        o_data["bake_viewer_input_process"]
+                    )
+            else:
+                # backward compatibility
+                self.render_slate(instance)
+
+            # also render image to sequence
+            self._render_slate_to_sequence(instance)

-    def render_slate(self, instance):
-        node_subset_name = instance.data.get("name", None)
-        node = instance[0]  # group node
+    def _create_staging_dir(self, instance):
         self.log.info("Creating staging dir...")

+        if "representations" not in instance.data:
+            instance.data["representations"] = list()
+
         staging_dir = os.path.normpath(
-            os.path.dirname(instance.data['path']))
+            os.path.dirname(instance.data["path"]))

         instance.data["stagingDir"] = staging_dir

         self.log.info(
             "StagingDir `{0}`...".format(instance.data["stagingDir"]))

-        frame_start = instance.data["frameStart"]
-        frame_end = instance.data["frameEnd"]
-        handle_start = instance.data["handleStart"]
-        handle_end = instance.data["handleEnd"]
+    def _check_frames_exists(self, instance):
+        # rendering path from group write node
+        fpath = instance.data["path"]

-        frame_length = int(
-            (frame_end - frame_start + 1) + (handle_start + handle_end)
-        )
+        # instance frame range with handles
+        first = instance.data["frameStartHandle"]
+        last = instance.data["frameEndHandle"]
+
+        padding = fpath.count('#')
+
+        test_path_template = fpath
+        if padding:
+            repl_string = "#" * padding
+            test_path_template = fpath.replace(
+                repl_string, "%0{}d".format(padding))
+
+        for frame in range(first, last + 1):
+            test_file = test_path_template % frame
+            if not os.path.exists(test_file):
+                self.log.debug("__ test_file: `{}`".format(test_file))
+                return None
+
+        return True
+
+    def render_slate(
+        self,
+        instance,
+        output_name=None,
+        bake_viewer_process=True,
+        bake_viewer_input_process=True
+    ):
+        """Slate frame renderer
+
+        Args:
+            instance (PyblishInstance): Pyblish instance with subset data
+            output_name (str, optional):
+                Slate variation name. Defaults to None.
+            bake_viewer_process (bool, optional):
+                Switch for viewer profile baking. Defaults to True.
+            bake_viewer_input_process (bool, optional):
+                Switch for input process node baking. Defaults to True.
+        """
+        slate_node = instance.data["slateNode"]
+
+        # rendering path from group write node
+        fpath = instance.data["path"]
+
+        # instance frame range with handles
+        first_frame = instance.data["frameStartHandle"]
+        last_frame = instance.data["frameEndHandle"]
+
+        # fill slate node with comments
+        self.add_comment_slate_node(instance, slate_node)
+
+        # solve output name if any is set
+        _output_name = output_name or ""
+        if _output_name:
+            _output_name = "_" + _output_name
+
+        slate_first_frame = first_frame - 1

         temporary_nodes = []
         collection = instance.data.get("collection", None)

         if collection:

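The new `_check_frames_exists` helper turns a `#`-padded render path into a printf-style template and probes every frame on disk before deciding whether the slate can be baked from the rendered sequence. A self-contained sketch of that check; the path and frame range are illustrative:

    import os

    def frames_exist(fpath, first, last):
        # "/renders/shot_v001.####.exr" -> "/renders/shot_v001.%04d.exr"
        padding = fpath.count("#")
        test_path_template = fpath
        if padding:
            test_path_template = fpath.replace(
                "#" * padding, "%0{}d".format(padding))

        for frame in range(first, last + 1):
            if not os.path.exists(test_path_template % frame):
                return False
        return True
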
@@ -75,99 +142,101 @@ class ExtractSlateFrame(openpype.api.Extractor):
             fname = os.path.basename(collection.format(
                 "{head}{padding}{tail}"))
             fhead = collection.format("{head}")

-            collected_frames_len = int(len(collection.indexes))
-
             # get first and last frame
             first_frame = min(collection.indexes) - 1
-            self.log.info('frame_length: {}'.format(frame_length))
-            self.log.info(
-                'len(collection.indexes): {}'.format(collected_frames_len)
-            )
-            if ("slate" in instance.data["families"]) \
-                    and (frame_length != collected_frames_len):
-                first_frame += 1
-
             last_frame = first_frame
         else:
-            fname = os.path.basename(instance.data.get("path", None))
+            fname = os.path.basename(fpath)
             fhead = os.path.splitext(fname)[0] + "."
             first_frame = instance.data.get("frameStartHandle", None) - 1
             last_frame = first_frame

         if "#" in fhead:
             fhead = fhead.replace("#", "")[:-1]

-        previous_node = node
+        self.log.debug("__ first_frame: {}".format(first_frame))
+        self.log.debug("__ slate_first_frame: {}".format(slate_first_frame))

-        # get input process and connect it to baking
-        ipn = self.get_view_process_node()
-        if ipn is not None:
-            ipn.setInput(0, previous_node)
-            previous_node = ipn
-            temporary_nodes.append(ipn)
+        # fallback if files do not exist
+        if self._check_frames_exists(instance):
+            # Read node
+            r_node = nuke.createNode("Read")
+            r_node["file"].setValue(fpath)
+            r_node["first"].setValue(first_frame)
+            r_node["origfirst"].setValue(first_frame)
+            r_node["last"].setValue(last_frame)
+            r_node["origlast"].setValue(last_frame)
+            r_node["colorspace"].setValue(instance.data["colorspace"])
+            previous_node = r_node
+            temporary_nodes = [previous_node]
+        else:
+            previous_node = slate_node.dependencies().pop()
+            temporary_nodes = []

-        if not self.viewer_lut_raw:
+        # only create colorspace baking if toggled on
+        if bake_viewer_process:
+            if bake_viewer_input_process:
+                # get input process and connect it to baking
+                ipn = get_view_process_node()
+                if ipn is not None:
+                    ipn.setInput(0, previous_node)
+                    previous_node = ipn
+                    temporary_nodes.append(ipn)
+
+            # add duplicate slate node and connect to previous
+            duply_slate_node = duplicate_node(slate_node)
+            duply_slate_node.setInput(0, previous_node)
+            previous_node = duply_slate_node
+            temporary_nodes.append(duply_slate_node)
+
+            # add viewer display transformation node
             dag_node = nuke.createNode("OCIODisplay")
             dag_node.setInput(0, previous_node)
             previous_node = dag_node
             temporary_nodes.append(dag_node)
+        else:
+            # add duplicate slate node and connect to previous
+            duply_slate_node = duplicate_node(slate_node)
+            duply_slate_node.setInput(0, previous_node)
+            previous_node = duply_slate_node
+            temporary_nodes.append(duply_slate_node)

         # create write node
         write_node = nuke.createNode("Write")
-        file = fhead + "slate.png"
-        path = os.path.join(staging_dir, file).replace("\\", "/")
-        instance.data["slateFrame"] = path
+        file = fhead[:-1] + _output_name + "_slate.png"
+        path = os.path.join(
+            instance.data["stagingDir"], file).replace("\\", "/")
+
+        # add slate path to `slateFrames` instance data attr
+        if not instance.data.get("slateFrames"):
+            instance.data["slateFrames"] = {}
+
+        instance.data["slateFrames"][output_name or "*"] = path
+
+        # create write node
         write_node["file"].setValue(path)
         write_node["file_type"].setValue("png")
         write_node["raw"].setValue(1)
         write_node.setInput(0, previous_node)
         temporary_nodes.append(write_node)

-        # fill slate node with comments
-        self.add_comment_slate_node(instance)
-
         # Render frames
-        nuke.execute(write_node.name(), int(first_frame), int(last_frame))
-        # also render slate as sequence frame
-        nuke.execute(node_subset_name, int(first_frame), int(last_frame))
-
-        self.log.debug(
-            "slate frame path: {}".format(instance.data["slateFrame"]))
+        nuke.execute(
+            write_node.name(), int(slate_first_frame), int(slate_first_frame))

         # Clean up
         for node in temporary_nodes:
             nuke.delete(node)

-    def get_view_process_node(self):
-        # Select only the target node
-        if nuke.selectedNodes():
-            [n.setSelected(False) for n in nuke.selectedNodes()]
+    def _render_slate_to_sequence(self, instance):
+        # set slate frame
+        first_frame = instance.data["frameStartHandle"]
+        slate_first_frame = first_frame - 1

-        ipn_orig = None
-        for v in [n for n in nuke.allNodes()
-                  if "Viewer" in n.Class()]:
-            ip = v['input_process'].getValue()
-            ipn = v['input_process_node'].getValue()
-            if "VIEWER_INPUT" not in ipn and ip:
-                ipn_orig = nuke.toNode(ipn)
-                ipn_orig.setSelected(True)
+        # render slate as sequence frame
+        nuke.execute(
+            instance.data["name"],
+            int(slate_first_frame),
+            int(slate_first_frame)
+        )

-        if ipn_orig:
-            nuke.nodeCopy('%clipboard%')
-
-            [n.setSelected(False) for n in nuke.selectedNodes()]  # Deselect all
-
-            nuke.nodePaste('%clipboard%')
-
-            ipn = nuke.selectedNode()
-
-            return ipn
-
-    def add_comment_slate_node(self, instance):
-        node = instance.data.get("slateNode")
-        if not node:
-            return
+    def add_comment_slate_node(self, instance, node):

         comment = instance.context.data.get("comment")
         intent = instance.context.data.get("intent")

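Both extractors previously carried a private copy of the viewer input-process lookup deleted above; the commit centralizes it as `get_view_process_node` in the host API. The trick it relies on is Nuke's clipboard round-trip, roughly as in this condensed sketch (not the library function itself):

    import nuke

    def copy_of_view_process_node():
        # Deselect everything so only the input-process node gets copied.
        for n in nuke.selectedNodes():
            n.setSelected(False)

        ipn_orig = None
        for viewer in nuke.allNodes("Viewer"):
            if viewer["input_process"].getValue():
                node_name = viewer["input_process_node"].getValue()
                if "VIEWER_INPUT" not in node_name:
                    ipn_orig = nuke.toNode(node_name)
                    ipn_orig.setSelected(True)

        if ipn_orig is None:
            return None

        # Copy/paste through the clipboard yields a detached duplicate.
        nuke.nodeCopy("%clipboard%")
        for n in nuke.selectedNodes():
            n.setSelected(False)
        nuke.nodePaste("%clipboard%")
        return nuke.selectedNode()
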
@@ -186,8 +255,8 @@ class ExtractSlateFrame(openpype.api.Extractor):
             "intent": intent
         })

-        for key, value in self.key_value_mapping.items():
-            enabled, template = value
+        for key, _values in self.key_value_mapping.items():
+            enabled, template = _values
             if not enabled:
                 self.log.debug("Key \"{}\" is disabled".format(key))
                 continue
@@ -221,5 +290,5 @@ class ExtractSlateFrame(openpype.api.Extractor):
                 ))
             except NameError:
                 self.log.warning((
-                    "Failed to set value \"{}\" on node attribute \"{}\""
+                    "Failed to set value \"{0}\" on node attribute \"{0}\""
                 ).format(value))

@@ -3,7 +3,10 @@ import os
 import nuke
 import pyblish.api
 import openpype
-from openpype.hosts.nuke.api.lib import maintained_selection
+from openpype.hosts.nuke.api import (
+    maintained_selection,
+    get_view_process_node
+)


 if sys.version_info[0] >= 3:
@@ -17,7 +20,7 @@ class ExtractThumbnail(openpype.api.Extractor):

     """

-    order = pyblish.api.ExtractorOrder + 0.01
+    order = pyblish.api.ExtractorOrder + 0.011
     label = "Extract Thumbnail"

     families = ["review"]
@@ -39,15 +42,32 @@ class ExtractThumbnail(openpype.api.Extractor):
             self.log.debug("instance.data[families]: {}".format(
                 instance.data["families"]))

-            self.render_thumbnail(instance)
+            if instance.data.get("bakePresets"):
+                for o_name, o_data in instance.data["bakePresets"].items():
+                    self.render_thumbnail(instance, o_name, **o_data)
+            else:
+                viewer_process_switches = {
+                    "bake_viewer_process": True,
+                    "bake_viewer_input_process": True
+                }
+                self.render_thumbnail(instance, None, **viewer_process_switches)

-    def render_thumbnail(self, instance):
+    def render_thumbnail(self, instance, output_name=None, **kwargs):
         first_frame = instance.data["frameStartHandle"]
         last_frame = instance.data["frameEndHandle"]

         # find frame range and define middle thumb frame
         mid_frame = int((last_frame - first_frame) / 2)

+        # solve output name if any is set
+        output_name = output_name or ""
+        if output_name:
+            output_name = "_" + output_name
+
+        bake_viewer_process = kwargs["bake_viewer_process"]
+        bake_viewer_input_process_node = kwargs[
+            "bake_viewer_input_process"]
+
         node = instance[0]  # group node
         self.log.info("Creating staging dir...")

@@ -106,17 +126,7 @@ class ExtractThumbnail(openpype.api.Extractor):
             temporary_nodes.append(rnode)
             previous_node = rnode

-        # bake viewer input look node into thumbnail image
-        if self.bake_viewer_input_process:
-            # get input process and connect it to baking
-            ipn = self.get_view_process_node()
-            if ipn is not None:
-                ipn.setInput(0, previous_node)
-                previous_node = ipn
-                temporary_nodes.append(ipn)
-
         reformat_node = nuke.createNode("Reformat")

         ref_node = self.nodes.get("Reformat", None)
         if ref_node:
             for k, v in ref_node:
@@ -129,8 +139,16 @@ class ExtractThumbnail(openpype.api.Extractor):
             previous_node = reformat_node
             temporary_nodes.append(reformat_node)

-        # bake viewer colorspace into thumbnail image
-        if self.bake_viewer_process:
+        # only create colorspace baking if toggled on
+        if bake_viewer_process:
+            if bake_viewer_input_process_node:
+                # get input process and connect it to baking
+                ipn = get_view_process_node()
+                if ipn is not None:
+                    ipn.setInput(0, previous_node)
+                    previous_node = ipn
+                    temporary_nodes.append(ipn)
+
             dag_node = nuke.createNode("OCIODisplay")
             dag_node.setInput(0, previous_node)
             previous_node = dag_node
@@ -138,7 +156,7 @@ class ExtractThumbnail(openpype.api.Extractor):

         # create write node
         write_node = nuke.createNode("Write")
-        file = fhead + "jpg"
+        file = fhead[:-1] + output_name + ".jpg"
         name = "thumbnail"
         path = os.path.join(staging_dir, file).replace("\\", "/")
         instance.data["thumbnail"] = path
@@ -168,30 +186,3 @@ class ExtractThumbnail(openpype.api.Extractor):
         # Clean up
         for node in temporary_nodes:
             nuke.delete(node)
-
-    def get_view_process_node(self):
-
-        # Select only the target node
-        if nuke.selectedNodes():
-            [n.setSelected(False) for n in nuke.selectedNodes()]
-
-        ipn_orig = None
-        for v in [n for n in nuke.allNodes()
-                  if "Viewer" == n.Class()]:
-            ip = v['input_process'].getValue()
-            ipn = v['input_process_node'].getValue()
-            if "VIEWER_INPUT" not in ipn and ip:
-                ipn_orig = nuke.toNode(ipn)
-                ipn_orig.setSelected(True)
-
-        if ipn_orig:
-            nuke.nodeCopy('%clipboard%')
-
-            # Deselect all
-            [n.setSelected(False) for n in nuke.selectedNodes()]
-
-            nuke.nodePaste('%clipboard%')
-
-            ipn = nuke.selectedNode()
-
-            return ipn

@@ -1,7 +1,6 @@
 import nuke
 import pyblish.api

-from openpype.pipeline import legacy_io
 from openpype.hosts.nuke.api.lib import (
     add_publish_knob,
     get_avalon_knob_data
@@ -20,12 +19,6 @@ class PreCollectNukeInstances(pyblish.api.ContextPlugin):
     sync_workfile_version_on_families = []

     def process(self, context):
-        asset_data = legacy_io.find_one({
-            "type": "asset",
-            "name": legacy_io.Session["AVALON_ASSET"]
-        })
-
-        self.log.debug("asset_data: {}".format(asset_data["data"]))
         instances = []

         root = nuke.root()

@@ -4,7 +4,10 @@ from pprint import pformat
 import nuke
 import pyblish.api

-import openpype.api as pype
+from openpype.client import (
+    get_last_version_by_subset_name,
+    get_representations,
+)
 from openpype.pipeline import (
     legacy_io,
     get_representation_path,
@@ -53,9 +56,21 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
         first_frame = int(node["first"].getValue())
         last_frame = int(node["last"].getValue())

-        # get path
+        # Prepare expected output paths by evaluating each frame of write node
+        # - paths are first collected into a set to avoid duplicated paths,
+        #   then sorted and converted to a list
+        node_file = node["file"]
+        expected_paths = list(sorted({
+            node_file.evaluate(frame)
+            for frame in range(first_frame, last_frame + 1)
+        }))
+        expected_filenames = [
+            os.path.basename(filepath)
+            for filepath in expected_paths
+        ]
         path = nuke.filename(node)
         output_dir = os.path.dirname(path)

         self.log.debug('output dir: {}'.format(output_dir))

         # create label
@@ -80,8 +95,11 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
         }

         try:
-            collected_frames = [f for f in os.listdir(output_dir)
-                                if ext in f]
+            collected_frames = [
+                filename
+                for filename in os.listdir(output_dir)
+                if filename in expected_filenames
+            ]
             if collected_frames:
                 collected_frames_len = len(collected_frames)
                 frame_start_str = "%0{}d".format(
@@ -180,17 +198,26 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
         if not instance.data["review"]:
             instance.data["useSequenceForReview"] = False

+        project_name = legacy_io.active_project()
+        asset_name = instance.data["asset"]
         # * Add audio to instance if exists.
         # Find latest version document
-        version_doc = pype.get_latest_version(
-            instance.data["asset"], "audioMain"
+        last_version_doc = get_last_version_by_subset_name(
+            project_name, "audioMain", asset_name=asset_name, fields=["_id"]
         )

         repre_doc = None
-        if version_doc:
+        if last_version_doc:
             # Try to find its representation (Expected there is only one)
-            repre_doc = legacy_io.find_one(
-                {"type": "representation", "parent": version_doc["_id"]}
-            )
+            repre_docs = list(get_representations(
+                project_name, version_ids=[last_version_doc["_id"]]
+            ))
+            if not repre_docs:
+                self.log.warning(
+                    "Version document does not contain any representations"
+                )
+            else:
+                repre_doc = repre_docs[0]

         # Add audio to instance if representation was found
         if repre_doc:

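The audio hunk above swaps a raw Mongo `find_one` for the client helpers; the flow is: resolve the last `audioMain` version for the asset, then take its first representation. A condensed sketch of that lookup, assuming the same `openpype.client` helpers shown in the diff (the wrapper function is illustrative):

    from openpype.client import (
        get_last_version_by_subset_name,
        get_representations,
    )

    def find_audio_representation(project_name, asset_name):
        last_version_doc = get_last_version_by_subset_name(
            project_name, "audioMain", asset_name=asset_name, fields=["_id"]
        )
        if not last_version_doc:
            return None
        # Expected to hold a single representation; take the first if any.
        repre_docs = list(get_representations(
            project_name, version_ids=[last_version_doc["_id"]]
        ))
        return repre_docs[0] if repre_docs else None
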
@@ -1,5 +1,6 @@
 import pyblish.api

+from openpype.client import get_project, get_asset_by_id
 from openpype import lib
 from openpype.pipeline import legacy_io

@@ -19,6 +20,7 @@ class ValidateScript(pyblish.api.InstancePlugin):
         asset_name = ctx_data["asset"]
         asset = lib.get_asset(asset_name)
         asset_data = asset["data"]
+        project_name = legacy_io.active_project()

         # These attributes will be checked
         attributes = [
@@ -48,12 +50,19 @@ class ValidateScript(pyblish.api.InstancePlugin):
                 asset_attributes[attr] = asset_data[attr]

             elif attr in hierarchical_attributes:
-                # Try to find fps on parent
-                parent = asset['parent']
-                if asset_data['visualParent'] is not None:
-                    parent = asset_data['visualParent']
+                # TODO: this should probably be removed
+                # Hierarchical attributes have not been a thing since Pype 2?

-                value = self.check_parent_hierarchical(parent, attr)
+                # Try to find attribute on parent
+                parent_id = asset['parent']
+                parent_type = "project"
+                if asset_data['visualParent'] is not None:
+                    parent_type = "asset"
+                    parent_id = asset_data['visualParent']
+
+                value = self.check_parent_hierarchical(
+                    project_name, parent_type, parent_id, attr
+                )
                 if value is None:
                     missing_attributes.append(attr)
                 else:
@@ -113,12 +122,35 @@ class ValidateScript(pyblish.api.InstancePlugin):
             message = msg.format(", ".join(not_matching))
             raise ValueError(message)

-    def check_parent_hierarchical(self, entityId, attr):
-        if entityId is None:
+    def check_parent_hierarchical(
+        self, project_name, parent_type, parent_id, attr
+    ):
+        if parent_id is None:
             return None
-        entity = legacy_io.find_one({"_id": entityId})
-        if attr in entity['data']:
+
+        doc = None
+        if parent_type == "project":
+            doc = get_project(project_name)
+        elif parent_type == "asset":
+            doc = get_asset_by_id(project_name, parent_id)
+
+        if not doc:
+            return None
+
+        doc_data = doc["data"]
+        if attr in doc_data:
             self.log.info(attr)
-            return entity['data'][attr]
-        else:
-            return self.check_parent_hierarchical(entity['parent'], attr)
+            return doc_data[attr]
+
+        if parent_type == "project":
+            return None
+
+        parent_id = doc_data.get("visualParent")
+        new_parent_type = "asset"
+        if parent_id is None:
+            parent_id = doc["parent"]
+            new_parent_type = "project"
+
+        return self.check_parent_hierarchical(
+            project_name, new_parent_type, parent_id, attr
+        )

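The rewritten validator walks up the asset hierarchy one document at a time — asset, then visual parent, and finally the project — stopping at the first document whose `data` carries the attribute. A compact sketch of the recursion using the same query helpers (the free-standing function name is illustrative):

    from openpype.client import get_project, get_asset_by_id

    def find_hierarchical_attr(project_name, parent_type, parent_id, attr):
        if parent_id is None:
            return None

        if parent_type == "project":
            doc = get_project(project_name)
        else:
            doc = get_asset_by_id(project_name, parent_id)
        if not doc:
            return None

        doc_data = doc["data"]
        if attr in doc_data:
            return doc_data[attr]
        if parent_type == "project":
            # Reached the top of the hierarchy without finding the attribute.
            return None

        # Climb to the visual parent, or to the project when there is none.
        parent_id = doc_data.get("visualParent")
        parent_type = "asset"
        if parent_id is None:
            parent_id = doc["parent"]
            parent_type = "project"
        return find_hierarchical_attr(project_name, parent_type, parent_id, attr)
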
@@ -1,4 +1,5 @@
 import openpype.hosts.photoshop.api as api
+from openpype.client import get_asset_by_name
 from openpype.pipeline import (
     AutoCreator,
     CreatedInstance,
@@ -40,10 +41,7 @@ class PSWorkfileCreator(AutoCreator):
         task_name = legacy_io.Session["AVALON_TASK"]
         host_name = legacy_io.Session["AVALON_APP"]
         if existing_instance is None:
-            asset_doc = legacy_io.find_one({
-                "type": "asset",
-                "name": asset_name
-            })
+            asset_doc = get_asset_by_name(project_name, asset_name)
             subset_name = self.get_subset_name(
                 variant, task_name, asset_doc, project_name, host_name
             )
@@ -67,10 +65,7 @@ class PSWorkfileCreator(AutoCreator):
             existing_instance["asset"] != asset_name
             or existing_instance["task"] != task_name
         ):
-            asset_doc = legacy_io.find_one({
-                "type": "asset",
-                "name": asset_name
-            })
+            asset_doc = get_asset_by_name(project_name, asset_name)
             subset_name = self.get_subset_name(
                 variant, task_name, asset_doc, project_name, host_name
             )

@@ -3,7 +3,7 @@ import json
 import pyblish.api

 from openpype.lib import get_subset_name_with_asset_doc
-from openpype.pipeline import legacy_io
+from openpype.client import get_asset_by_name


 class CollectBulkMovInstances(pyblish.api.InstancePlugin):
@@ -24,12 +24,9 @@ class CollectBulkMovInstances(pyblish.api.InstancePlugin):

     def process(self, instance):
         context = instance.context
+        project_name = context.data["projectEntity"]["name"]
         asset_name = instance.data["asset"]
-
-        asset_doc = legacy_io.find_one({
-            "type": "asset",
-            "name": asset_name
-        })
+        asset_doc = get_asset_by_name(project_name, asset_name)
         if not asset_doc:
             raise AssertionError((
                 "Couldn't find Asset document with name \"{}\""
@@ -52,7 +49,7 @@ class CollectBulkMovInstances(pyblish.api.InstancePlugin):
             self.subset_name_variant,
             task_name,
             asset_doc,
-            legacy_io.Session["AVALON_PROJECT"]
+            project_name
         )
         instance_name = f"{asset_name}_{subset_name}"

@@ -1,9 +1,10 @@
 import os
+from pprint import pformat
 import re
 from copy import deepcopy
 import pyblish.api

-from openpype.pipeline import legacy_io
+from openpype.client import get_asset_by_id


 class CollectHierarchyInstance(pyblish.api.ContextPlugin):
@@ -21,6 +22,7 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin):
     families = ["shot"]

     # presets
+    shot_rename = True
     shot_rename_template = None
     shot_rename_search_patterns = None
     shot_add_hierarchy = None
@@ -46,7 +48,7 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin):
         parent_name = instance.context.data["assetEntity"]["name"]
         clip = instance.data["item"]
         clip_name = os.path.splitext(clip.name)[0].lower()
-        if self.shot_rename_search_patterns:
+        if self.shot_rename_search_patterns and self.shot_rename:
             search_text += parent_name + clip_name
             instance.data["anatomyData"].update({"clip_name": clip_name})
             for type, pattern in self.shot_rename_search_patterns.items():
@@ -56,33 +58,38 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin):
                     continue
                 instance.data["anatomyData"][type] = match[-1]

-        # format to new shot name
-        instance.data["asset"] = self.shot_rename_template.format(
-            **instance.data["anatomyData"])
+            # format to new shot name
+            instance.data["asset"] = self.shot_rename_template.format(
+                **instance.data["anatomyData"])

     def create_hierarchy(self, instance):
-        parents = list()
-        hierarchy = list()
-        visual_hierarchy = [instance.context.data["assetEntity"]]
+        asset_doc = instance.context.data["assetEntity"]
+        project_doc = instance.context.data["projectEntity"]
+        project_name = project_doc["name"]
+        visual_hierarchy = [asset_doc]
+        current_doc = asset_doc
         while True:
-            visual_parent = legacy_io.find_one(
-                {"_id": visual_hierarchy[-1]["data"]["visualParent"]}
-            )
-            if visual_parent:
-                visual_hierarchy.append(visual_parent)
-            else:
-                visual_hierarchy.append(
-                    instance.context.data["projectEntity"])
+            visual_parent_id = current_doc["data"]["visualParent"]
+            visual_parent = None
+            if visual_parent_id:
+                visual_parent = get_asset_by_id(project_name, visual_parent_id)
+
+            if not visual_parent:
+                visual_hierarchy.append(project_doc)
                 break
+            visual_hierarchy.append(visual_parent)
+            current_doc = visual_parent

         # add current selection context hierarchy from standalonepublisher
+        parents = list()
         for entity in reversed(visual_hierarchy):
             parents.append({
                 "entity_type": entity["data"]["entityType"],
                 "entity_name": entity["name"]
             })

-        if self.shot_add_hierarchy:
+        hierarchy = list()
+        if self.shot_add_hierarchy.get("enabled"):
             parent_template_patern = re.compile(r"\{([a-z]*?)\}")
             # fill the parents parts from presets
             shot_add_hierarchy = self.shot_add_hierarchy.copy()
@@ -126,12 +133,11 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin):
         instance.data["parents"] = parents

-        # print
-        self.log.debug(f"Hierarchy: {hierarchy}")
-        self.log.debug(f"parents: {parents}")
+        self.log.warning(f"Hierarchy: {hierarchy}")
+        self.log.info(f"parents: {parents}")

+        tasks_to_add = dict()
         if self.shot_add_tasks:
-            tasks_to_add = dict()
-            project_doc = legacy_io.find_one({"type": "project"})
             project_tasks = project_doc["config"]["tasks"]
             for task_name, task_data in self.shot_add_tasks.items():
                 _task_data = deepcopy(task_data)

@@ -150,9 +156,7 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin):
                         task_name,
                         list(project_tasks.keys())))

-            instance.data["tasks"] = tasks_to_add
-        else:
-            instance.data["tasks"] = dict()
+        instance.data["tasks"] = tasks_to_add

         # updating hierarchy data
         instance.data["anatomyData"].update({
@ -161,6 +165,9 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin):
|
|||
})
|
||||
|
||||
def process(self, context):
|
||||
self.log.info("self.shot_add_hierarchy: {}".format(
|
||||
pformat(self.shot_add_hierarchy)
|
||||
))
|
||||
for instance in context:
|
||||
if instance.data["family"] in self.families:
|
||||
self.processing_instance(instance)
|
||||
|
|
|
|||
|
|
@@ -4,7 +4,7 @@ import collections
 import pyblish.api
 from pprint import pformat

-from openpype.pipeline import legacy_io
+from openpype.client import get_assets


 class CollectMatchingAssetToInstance(pyblish.api.InstancePlugin):

@@ -119,8 +119,9 @@ class CollectMatchingAssetToInstance(pyblish.api.InstancePlugin):

     def _asset_docs_by_parent_id(self, instance):
         # Query all assets for project and store them by parent's id to list
+        project_name = instance.context.data["projectEntity"]["name"]
         asset_docs_by_parent_id = collections.defaultdict(list)
-        for asset_doc in legacy_io.find({"type": "asset"}):
+        for asset_doc in get_assets(project_name):
             parent_id = asset_doc["data"]["visualParent"]
             asset_docs_by_parent_id[parent_id].append(asset_doc)
         return asset_docs_by_parent_id
@@ -39,11 +39,14 @@ class ExtractTrimVideoAudio(openpype.api.Extractor):
         # Generate mov file.
         fps = instance.data["fps"]
         video_file_path = instance.data["editorialSourcePath"]
-        extensions = instance.data.get("extensions", [".mov"])
+        extensions = instance.data.get("extensions", ["mov"])

         for ext in extensions:
             self.log.info("Processing ext: `{}`".format(ext))

+            if not ext.startswith("."):
+                ext = "." + ext
+
             clip_trimed_path = os.path.join(
                 staging_dir, instance.data["name"] + ext)
             # # check video file metadata
@@ -1,9 +1,7 @@
 import pyblish.api

-from openpype.pipeline import (
-    PublishXmlValidationError,
-    legacy_io,
-)
+from openpype.client import get_assets
+from openpype.pipeline import PublishXmlValidationError


 class ValidateTaskExistence(pyblish.api.ContextPlugin):

@@ -20,15 +18,11 @@ class ValidateTaskExistence(pyblish.api.ContextPlugin):
         for instance in context:
             asset_names.add(instance.data["asset"])

-        asset_docs = legacy_io.find(
-            {
-                "type": "asset",
-                "name": {"$in": list(asset_names)}
-            },
-            {
-                "name": 1,
-                "data.tasks": 1
-            }
+        project_name = context.data["projectEntity"]["name"]
+        asset_docs = get_assets(
+            project_name,
+            asset_names=asset_names,
+            fields=["name", "data.tasks"]
         )
         tasks_by_asset_names = {}
         for asset_doc in asset_docs:
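For illustration, a minimal sketch of the new query pattern driving this validation; the get_assets() keyword arguments mirror the hunk above, while the helper function itself is hypothetical:

    from openpype.client import get_assets

    def assets_missing_task(project_name, asset_names, task_name):
        # Query only the fields the validation needs.
        asset_docs = get_assets(
            project_name,
            asset_names=asset_names,
            fields=["name", "data.tasks"]
        )
        tasks_by_asset_name = {
            asset_doc["name"]: asset_doc["data"].get("tasks") or {}
            for asset_doc in asset_docs
        }
        # Names of assets whose document lacks the required task.
        return [
            name for name, tasks in tasks_by_asset_name.items()
            if task_name not in tasks
        ]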
@@ -707,6 +707,9 @@ class BaseCommunicator:
         if exit_code is not None:
             self.exit_code = exit_code

+        if self.exit_code is None:
+            self.exit_code = 0
+
     def stop(self):
         """Stop communication and currently running python process."""
         log.info("Stopping communication")
@@ -8,6 +8,7 @@ import requests

 import pyblish.api

+from openpype.client import get_project, get_asset_by_name
 from openpype.hosts import tvpaint
 from openpype.api import get_current_project_settings
 from openpype.lib import register_event_callback

@@ -442,14 +443,14 @@ def set_context_settings(asset_doc=None):

     Change fps, resolution and frame start/end.
     """
-    if asset_doc is None:
-        # Use current session asset if not passed
-        asset_doc = legacy_io.find_one({
-            "type": "asset",
-            "name": legacy_io.Session["AVALON_ASSET"]
-        })
-
-    project_doc = legacy_io.find_one({"type": "project"})
+    project_name = legacy_io.active_project()
+    if asset_doc is None:
+        asset_name = legacy_io.Session["AVALON_ASSET"]
+        # Use current session asset if not passed
+        asset_doc = get_asset_by_name(project_name, asset_name)
+
+    project_doc = get_project(project_name)

     framerate = asset_doc["data"].get("fps")
     if framerate is None:
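The same fallback pattern, condensed into a sketch: resolve the active project from legacy_io, then fetch documents through the client helpers. The wrapper function is hypothetical; the calls mirror the hunk above:

    from openpype.client import get_project, get_asset_by_name
    from openpype.pipeline import legacy_io

    def current_context_docs(asset_doc=None):
        project_name = legacy_io.active_project()
        if asset_doc is None:
            # Use current session asset if not passed.
            asset_name = legacy_io.Session["AVALON_ASSET"]
            asset_doc = get_asset_by_name(project_name, asset_name)
        return get_project(project_name), asset_doc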
@@ -1,5 +1,6 @@
 import os

+from openpype.client import get_project, get_asset_by_name
 from openpype.lib import (
     StringTemplate,
     get_workfile_template_key_from_context,

@@ -44,21 +45,17 @@ class LoadWorkfile(plugin.Loader):

         # Save workfile.
         host_name = "tvpaint"
+        project_name = context.get("project")
         asset_name = context.get("asset")
         task_name = context.get("task")
         # Far cases when there is workfile without context
         if not asset_name:
+            project_name = legacy_io.active_project()
             asset_name = legacy_io.Session["AVALON_ASSET"]
             task_name = legacy_io.Session["AVALON_TASK"]

-        project_doc = legacy_io.find_one({
-            "type": "project"
-        })
-        asset_doc = legacy_io.find_one({
-            "type": "asset",
-            "name": asset_name
-        })
-        project_name = project_doc["name"]
+        project_doc = get_project(project_name)
+        asset_doc = get_asset_by_name(project_name, asset_name)

         template_key = get_workfile_template_key_from_context(
             asset_name,
@@ -2,6 +2,7 @@ import json
 import copy
 import pyblish.api

+from openpype.client import get_asset_by_name
 from openpype.lib import get_subset_name_with_asset_doc
 from openpype.pipeline import legacy_io

@@ -92,17 +93,15 @@ class CollectInstances(pyblish.api.ContextPlugin):
             if family == "review":
                 # Change subset name of review instance

+                # Project name from workfile context
+                project_name = context.data["workfile_context"]["project"]
+
                 # Collect asset doc to get asset id
                 # - not sure if it's good idea to require asset id in
                 #   get_subset_name?
                 asset_name = context.data["workfile_context"]["asset"]
-                asset_doc = legacy_io.find_one({
-                    "type": "asset",
-                    "name": asset_name
-                })
+                asset_doc = get_asset_by_name(project_name, asset_name)

-                # Project name from workfile context
-                project_name = context.data["workfile_context"]["project"]
                 # Host name from environment variable
                 host_name = context.data["hostName"]
                 # Use empty variant value
@@ -2,8 +2,8 @@ import json
 import copy
 import pyblish.api

+from openpype.client import get_asset_by_name
 from openpype.lib import get_subset_name_with_asset_doc
-from openpype.pipeline import legacy_io


 class CollectRenderScene(pyblish.api.ContextPlugin):

@@ -56,14 +56,11 @@ class CollectRenderScene(pyblish.api.ContextPlugin):
         # - not sure if it's good idea to require asset id in
         #   get_subset_name?
         workfile_context = context.data["workfile_context"]
-        asset_name = workfile_context["asset"]
-        asset_doc = legacy_io.find_one({
-            "type": "asset",
-            "name": asset_name
-        })

         # Project name from workfile context
         project_name = context.data["workfile_context"]["project"]
+        asset_name = workfile_context["asset"]
+        asset_doc = get_asset_by_name(project_name, asset_name)

         # Host name from environment variable
         host_name = context.data["hostName"]
         # Variant is using render pass name
@@ -2,6 +2,7 @@ import os
 import json
 import pyblish.api

+from openpype.client import get_asset_by_name
 from openpype.lib import get_subset_name_with_asset_doc
 from openpype.pipeline import legacy_io

@@ -22,19 +23,17 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
         basename, ext = os.path.splitext(filename)
         instance = context.create_instance(name=basename)

+        # Project name from workfile context
+        project_name = context.data["workfile_context"]["project"]
+
         # Get subset name of workfile instance
         # Collect asset doc to get asset id
         # - not sure if it's good idea to require asset id in
         #   get_subset_name?
         family = "workfile"
         asset_name = context.data["workfile_context"]["asset"]
-        asset_doc = legacy_io.find_one({
-            "type": "asset",
-            "name": asset_name
-        })
+        asset_doc = get_asset_by_name(project_name, asset_name)

-        # Project name from workfile context
-        project_name = context.data["workfile_context"]["project"]
         # Host name from environment variable
         host_name = os.environ["AVALON_APP"]
         # Use empty variant value
@@ -7,7 +7,6 @@ import platform
 import logging
 import collections
 import functools
-import getpass

 from bson.objectid import ObjectId

@@ -19,6 +18,7 @@ from .anatomy import Anatomy
 from .profiles_filtering import filter_profiles
 from .events import emit_event
 from .path_templates import StringTemplate
+from .local_settings import get_openpype_username

 legacy_io = None

@@ -550,7 +550,7 @@ def get_workdir_data(project_doc, asset_doc, task_name, host_name):
         "asset": asset_doc["name"],
         "parent": parent_name,
         "app": host_name,
-        "user": getpass.getuser(),
+        "user": get_openpype_username(),
         "hierarchy": hierarchy,
     }

@@ -797,8 +797,14 @@ def update_current_task(task=None, asset=None, app=None, template_key=None):
         else:
             os.environ[key] = value

+    data = changes.copy()
+    # Convert env keys to human readable keys
+    data["project_name"] = legacy_io.Session["AVALON_PROJECT"]
+    data["asset_name"] = legacy_io.Session["AVALON_ASSET"]
+    data["task_name"] = legacy_io.Session["AVALON_TASK"]
+
     # Emit session change
-    emit_event("taskChanged", changes.copy())
+    emit_event("taskChanged", data)

    return changes
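Since the emitted payload now carries human readable keys next to the raw environment changes, a listener can read them directly. A rough sketch, assuming the event object exposes the emitted dictionary as event.data (register_event_callback comes from openpype.lib, as imported elsewhere in this diff; the handler body is illustrative):

    from openpype.lib import register_event_callback

    def _on_task_changed(event):
        # Keys added by update_current_task() above.
        data = event.data
        print("Switched context to {} > {} > {}".format(
            data.get("project_name"),
            data.get("asset_name"),
            data.get("task_name")
        ))

    register_event_callback("taskChanged", _on_task_changed)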
@@ -6,6 +6,7 @@ import logging
 import six
 import platform

+from openpype.client import get_project
 from openpype.settings import get_project_settings

 from .anatomy import Anatomy

@@ -171,45 +172,73 @@ def get_last_version_from_path(path_dir, filter):
     return None


-def compute_paths(basic_paths_items, project_root):
+def concatenate_splitted_paths(split_paths, anatomy):
     pattern_array = re.compile(r"\[.*\]")
-    project_root_key = "__project_root__"
     output = []
-    for path_items in basic_paths_items:
+    for path_items in split_paths:
         clean_items = []
         if isinstance(path_items, str):
             path_items = [path_items]
+
         for path_item in path_items:
-            matches = re.findall(pattern_array, path_item)
-            if len(matches) > 0:
-                path_item = path_item.replace(matches[0], "")
-            if path_item == project_root_key:
-                path_item = project_root
+            if not re.match(r"{.+}", path_item):
+                path_item = re.sub(pattern_array, "", path_item)
             clean_items.append(path_item)
+
+        # backward compatibility
+        if "__project_root__" in path_items:
+            for root, root_path in anatomy.roots.items():
+                if not os.path.exists(str(root_path)):
+                    log.debug("Root {} path path {} not exist on \
+                        computer!".format(root, root_path))
+                    continue
+                clean_items = ["{{root[{}]}}".format(root),
+                               r"{project[name]}"] + clean_items[1:]
+                output.append(os.path.normpath(os.path.sep.join(clean_items)))
+            continue
+
         output.append(os.path.normpath(os.path.sep.join(clean_items)))
+
     return output


+def get_format_data(anatomy):
+    project_doc = get_project(anatomy.project_name, fields=["data.code"])
+    project_code = project_doc["data"]["code"]
+
+    return {
+        "root": anatomy.roots,
+        "project": {
+            "name": anatomy.project_name,
+            "code": project_code
+        },
+    }
+
+
+def fill_paths(path_list, anatomy):
+    format_data = get_format_data(anatomy)
+    filled_paths = []
+
+    for path in path_list:
+        new_path = path.format(**format_data)
+        filled_paths.append(new_path)
+
+    return filled_paths
+
+
 def create_project_folders(basic_paths, project_name):
     anatomy = Anatomy(project_name)
-    roots_paths = []
-    if isinstance(anatomy.roots, dict):
-        for root in anatomy.roots.values():
-            roots_paths.append(root.value)
-    else:
-        roots_paths.append(anatomy.roots.value)

-    for root_path in roots_paths:
-        project_root = os.path.join(root_path, project_name)
-        full_paths = compute_paths(basic_paths, project_root)
-        # Create folders
-        for path in full_paths:
-            full_path = path.format(project_root=project_root)
-            if os.path.exists(full_path):
-                log.debug(
-                    "Folder already exists: {}".format(full_path)
-                )
-            else:
-                log.debug("Creating folder: {}".format(full_path))
-                os.makedirs(full_path)
+    concat_paths = concatenate_splitted_paths(basic_paths, anatomy)
+    filled_paths = fill_paths(concat_paths, anatomy)
+
+    # Create folders
+    for path in filled_paths:
+        if os.path.exists(path):
+            log.debug("Folder already exists: {}".format(path))
+        else:
+            log.debug("Creating folder: {}".format(path))
+            os.makedirs(path)


 def _list_path_items(folder_structure):

@@ -308,6 +337,7 @@ class HostDirmap:
         on_dirmap_enabled: run host code for enabling dirmap
         do_dirmap: run host code to do actual remapping
     """
+
     def __init__(self, host_name, project_settings, sync_module=None):
         self.host_name = host_name
         self.project_settings = project_settings
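A worked example of the new two step flow, under assumed inputs (a single root named "work" mounted at /mnt/projects, project "demo"); the values are illustrative, the call chain follows the functions above:

    # Input as it may come from project settings:
    basic_paths = [
        ["__project_root__", "assets", "characters"],
        ["__project_root__", "editorial"],
    ]
    # concatenate_splitted_paths() resolves the backward compatible
    # "__project_root__" marker into anatomy templates:
    #   {root[work]}/{project[name]}/assets/characters
    #   {root[work]}/{project[name]}/editorial
    # fill_paths() then formats them with get_format_data(anatomy):
    #   /mnt/projects/demo/assets/characters
    #   /mnt/projects/demo/editorial
    # create_project_folders() finally makedirs() each missing path.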
@@ -463,6 +463,25 @@ class OpenPypeModule:

         pass

+    def on_host_install(self, host, host_name, project_name):
+        """Host was installed which gives option to handle in-host logic.
+
+        It is a good option to register in-host event callbacks which are
+        specific for the module. The module is kept in memory for rest of
+        the process.
+
+        Arguments may change in future. E.g. 'host_name' should be possible
+        to receive from 'host' object.
+
+        Args:
+            host (ModuleType): Access to installed/registered host object.
+            host_name (str): Name of host.
+            project_name (str): Project name which is main part of host
+                context.
+        """
+
+        pass
+
     def cli(self, module_click_group):
         """Add commands to click group.

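A minimal sketch of a module picking up the new hook; OpenPypeModule and register_event_callback are real, while the module itself, its name and the exact import path are assumed for illustration:

    from openpype.lib import register_event_callback
    from openpype.modules import OpenPypeModule


    class ExampleListenerModule(OpenPypeModule):
        # Hypothetical module used only to demonstrate the hook.
        name = "example_listener"

        def initialize(self, modules_settings):
            self.enabled = True

        def on_host_install(self, host, host_name, project_name):
            # Register an in-host callback; the module is kept in memory
            # for the rest of the process, so the handler stays alive.
            self._host_name = host_name
            register_event_callback("taskChanged", self._on_task_changed)

        def _on_task_changed(self, event):
            self.log.info(
                "Task changed in {}: {}".format(self._host_name, event.data)
            )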
@@ -322,7 +322,9 @@ class HarmonySubmitDeadline(
         )
         unzip_dir = (published_scene.parent / published_scene.stem)
         with _ZipFile(published_scene, "r") as zip_ref:
-            zip_ref.extractall(unzip_dir.as_posix())
+            # UNC path (//?/) added to minimalize risk with extracting
+            # to large file paths
+            zip_ref.extractall("//?/" + str(unzip_dir.as_posix()))

         # find any xstage files in directory, prefer the one with the same name
         # as directory (plus extension)
@@ -147,7 +147,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
     # mapping of instance properties to be transfered to new instance for every
     # specified family
     instance_transfer = {
-        "slate": ["slateFrame"],
+        "slate": ["slateFrames"],
         "review": ["lutPath"],
         "render2d": ["bakingNukeScripts", "version"],
         "renderlayer": ["convertToScanline"]
@@ -1,8 +1,8 @@
 import json

+from openpype.client import get_project
 from openpype.api import ProjectSettings
 from openpype.lib import create_project
-from openpype.pipeline import AvalonMongoDB
 from openpype.settings import SaveWarningExc

 from openpype_modules.ftrack.lib import (

@@ -363,12 +363,8 @@ class PrepareProjectServer(ServerAction):
         project_name = project_entity["full_name"]

         # Try to find project document
-        dbcon = AvalonMongoDB()
-        dbcon.install()
-        dbcon.Session["AVALON_PROJECT"] = project_name
-        project_doc = dbcon.find_one({
-            "type": "project"
-        })
+        project_doc = get_project(project_name)

         # Create project if is not available
         # - creation is required to be able set project anatomy and attributes
         if not project_doc:

@@ -376,9 +372,7 @@ class PrepareProjectServer(ServerAction):
             self.log.info("Creating project \"{} [{}]\"".format(
                 project_name, project_code
             ))
-            create_project(project_name, project_code, dbcon=dbcon)
-
-        dbcon.uninstall()
+            create_project(project_name, project_code)

         project_settings = ProjectSettings(project_name)
         project_anatomy_settings = project_settings["project_anatomy"]
@@ -12,6 +12,12 @@ from pymongo import UpdateOne
 import arrow
 import ftrack_api

+from openpype.client import (
+    get_project,
+    get_assets,
+    get_archived_assets,
+    get_asset_ids_with_subsets
+)
 from openpype.pipeline import AvalonMongoDB, schema

 from openpype_modules.ftrack.lib import (

@@ -149,12 +155,11 @@ class SyncToAvalonEvent(BaseEvent):
     @property
     def avalon_entities(self):
         if self._avalon_ents is None:
+            project_name = self.cur_project["full_name"]
             self.dbcon.install()
-            self.dbcon.Session["AVALON_PROJECT"] = (
-                self.cur_project["full_name"]
-            )
-            avalon_project = self.dbcon.find_one({"type": "project"})
-            avalon_entities = list(self.dbcon.find({"type": "asset"}))
+            self.dbcon.Session["AVALON_PROJECT"] = project_name
+            avalon_project = get_project(project_name)
+            avalon_entities = list(get_assets(project_name))
             self._avalon_ents = (avalon_project, avalon_entities)
         return self._avalon_ents

@@ -284,28 +289,21 @@ class SyncToAvalonEvent(BaseEvent):
             self._avalon_ents_by_ftrack_id[ftrack_id] = doc

     @property
-    def avalon_subsets_by_parents(self):
-        if self._avalon_subsets_by_parents is None:
-            self._avalon_subsets_by_parents = collections.defaultdict(list)
-            self.dbcon.install()
-            self.dbcon.Session["AVALON_PROJECT"] = (
-                self.cur_project["full_name"]
+    def avalon_asset_ids_with_subsets(self):
+        if self._avalon_asset_ids_with_subsets is None:
+            project_name = self.cur_project["full_name"]
+            self._avalon_asset_ids_with_subsets = get_asset_ids_with_subsets(
+                project_name
             )
-            for subset in self.dbcon.find({"type": "subset"}):
-                self._avalon_subsets_by_parents[subset["parent"]].append(
-                    subset
-                )
-        return self._avalon_subsets_by_parents
+
+        return self._avalon_asset_ids_with_subsets

     @property
     def avalon_archived_by_id(self):
         if self._avalon_archived_by_id is None:
             self._avalon_archived_by_id = {}
-            self.dbcon.install()
-            self.dbcon.Session["AVALON_PROJECT"] = (
-                self.cur_project["full_name"]
-            )
-            for asset in self.dbcon.find({"type": "archived_asset"}):
+            project_name = self.cur_project["full_name"]
+            for asset in get_archived_assets(project_name):
                 self._avalon_archived_by_id[asset["_id"]] = asset
         return self._avalon_archived_by_id

@@ -327,7 +325,7 @@ class SyncToAvalonEvent(BaseEvent):
         avalon_project, avalon_entities = self.avalon_entities
         self._changeability_by_mongo_id[avalon_project["_id"]] = False
         self._bubble_changeability(
-            list(self.avalon_subsets_by_parents.keys())
+            list(self.avalon_asset_ids_with_subsets)
         )

         return self._changeability_by_mongo_id

@@ -449,14 +447,9 @@ class SyncToAvalonEvent(BaseEvent):
         if not entity:
             # if entity is not found then it is subset without parent
             if entity_id in unchangeable_ids:
-                _subset_ids = [
-                    str(sub["_id"]) for sub in
-                    self.avalon_subsets_by_parents[entity_id]
-                ]
-                joined_subset_ids = "| ".join(_subset_ids)
                 self.log.warning((
-                    "Parent <{}> for subsets <{}> does not exist"
-                ).format(str(entity_id), joined_subset_ids))
+                    "Parent <{}> with subsets does not exist"
+                ).format(str(entity_id)))
             else:
                 self.log.warning((
                     "In avalon are entities without valid parents that"

@@ -483,7 +476,7 @@ class SyncToAvalonEvent(BaseEvent):
         self._avalon_ents_by_parent_id = None
         self._avalon_ents_by_ftrack_id = None
         self._avalon_ents_by_name = None
-        self._avalon_subsets_by_parents = None
+        self._avalon_asset_ids_with_subsets = None
         self._changeability_by_mongo_id = None
         self._avalon_archived_by_id = None
         self._avalon_archived_by_name = None
@@ -1,11 +1,9 @@
 import re
 import subprocess

+from openpype.client import get_asset_by_id, get_asset_by_name
 from openpype_modules.ftrack.lib import BaseEvent
 from openpype_modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY
-from openpype.pipeline import AvalonMongoDB
-
-from bson.objectid import ObjectId

 from openpype.api import Anatomy, get_project_settings

@@ -36,8 +34,6 @@ class UserAssigmentEvent(BaseEvent):
     3) path to publish files of task user was (de)assigned to
     """

-    db_con = AvalonMongoDB()
-
     def error(self, *err):
         for e in err:
             self.log.error(e)

@@ -101,26 +97,16 @@ class UserAssigmentEvent(BaseEvent):
         :rtype: dict
         """
         parent = task['parent']
-        self.db_con.install()
-        self.db_con.Session['AVALON_PROJECT'] = task['project']['full_name']
-
+        project_name = task["project"]["full_name"]
         avalon_entity = None
         parent_id = parent['custom_attributes'].get(CUST_ATTR_ID_KEY)
         if parent_id:
-            parent_id = ObjectId(parent_id)
-            avalon_entity = self.db_con.find_one({
-                '_id': parent_id,
-                'type': 'asset'
-            })
+            avalon_entity = get_asset_by_id(project_name, parent_id)

         if not avalon_entity:
-            avalon_entity = self.db_con.find_one({
-                'type': 'asset',
-                'name': parent['name']
-            })
+            avalon_entity = get_asset_by_name(project_name, parent["name"])

         if not avalon_entity:
-            self.db_con.uninstall()
             msg = 'Entity "{}" not found in avalon database'.format(
                 parent['name']
             )

@@ -129,7 +115,6 @@ class UserAssigmentEvent(BaseEvent):
                 'success': False,
                 'message': msg
             }
-        self.db_con.uninstall()
         return avalon_entity

     def _get_hierarchy(self, asset):
@@ -1,5 +1,6 @@
 import os

+from openpype.client import get_project
 from openpype_modules.ftrack.lib import BaseAction
 from openpype.lib.applications import (
     ApplicationManager,

@@ -7,7 +8,6 @@ from openpype.lib.applications import (
     ApplictionExecutableNotFound,
     CUSTOM_LAUNCH_APP_GROUPS
 )
-from openpype.pipeline import AvalonMongoDB


 class AppplicationsAction(BaseAction):

@@ -25,7 +25,6 @@ class AppplicationsAction(BaseAction):
         super(AppplicationsAction, self).__init__(*args, **kwargs)

         self.application_manager = ApplicationManager()
-        self.dbcon = AvalonMongoDB()

     @property
     def discover_identifier(self):

@@ -110,12 +109,7 @@ class AppplicationsAction(BaseAction):
         if avalon_project_doc is None:
             ft_project = self.get_project_from_entity(entity)
             project_name = ft_project["full_name"]
-            if not self.dbcon.is_installed():
-                self.dbcon.install()
-            self.dbcon.Session["AVALON_PROJECT"] = project_name
-            avalon_project_doc = self.dbcon.find_one({
-                "type": "project"
-            }) or False
+            avalon_project_doc = get_project(project_name) or False
             event["data"]["avalon_project_doc"] = avalon_project_doc

         if not avalon_project_doc:
@@ -4,6 +4,7 @@ from datetime import datetime

 from bson.objectid import ObjectId

+from openpype.client import get_assets, get_subsets
 from openpype.pipeline import AvalonMongoDB
 from openpype_modules.ftrack.lib import BaseAction, statics_icon
 from openpype_modules.ftrack.lib.avalon_sync import create_chunks

@@ -91,10 +92,8 @@ class DeleteAssetSubset(BaseAction):
                 continue

             ftrack_id = entity.get("entityId")
-            if not ftrack_id:
-                continue
-
-            ftrack_ids.append(ftrack_id)
+            if ftrack_id:
+                ftrack_ids.append(ftrack_id)

         if project_in_selection:
             msg = "It is not possible to use this action on project entity."

@@ -120,48 +119,51 @@ class DeleteAssetSubset(BaseAction):
                 "message": "Invalid selection for this action (Bug)"
             }

-        if entities[0].entity_type.lower() == "project":
-            project = entities[0]
-        else:
-            project = entities[0]["project"]
-
+        project = self.get_project_from_entity(entities[0], session)
         project_name = project["full_name"]
-        self.dbcon.Session["AVALON_PROJECT"] = project_name

-        selected_av_entities = list(self.dbcon.find({
-            "type": "asset",
-            "data.ftrackId": {"$in": ftrack_ids}
-        }))
+        asset_docs = list(get_assets(
+            project_name,
+            fields=["_id", "name", "data.ftrackId", "data.parents"]
+        ))
+        selected_av_entities = []
+        found_ftrack_ids = set()
+        asset_docs_by_name = collections.defaultdict(list)
+        for asset_doc in asset_docs:
+            ftrack_id = asset_doc["data"].get("ftrackId")
+            if ftrack_id:
+                found_ftrack_ids.add(ftrack_id)
+                if ftrack_id in entity_mapping:
+                    selected_av_entities.append(asset_doc)
+
+            asset_name = asset_doc["name"]
+            asset_docs_by_name[asset_name].append(asset_doc)

         found_without_ftrack_id = {}
-        if len(selected_av_entities) != len(ftrack_ids):
-            found_ftrack_ids = [
-                ent["data"]["ftrackId"] for ent in selected_av_entities
-            ]
-            for ftrack_id, entity in entity_mapping.items():
-                if ftrack_id in found_ftrack_ids:
-                    continue
+        for ftrack_id, entity in entity_mapping.items():
+            if ftrack_id in found_ftrack_ids:
+                continue

-                av_ents_by_name = list(self.dbcon.find({
-                    "type": "asset",
-                    "name": entity["name"]
-                }))
-                if not av_ents_by_name:
-                    continue
-
-                ent_path_items = [ent["name"] for ent in entity["link"]]
-                parents = ent_path_items[1:len(ent_path_items)-1:]
-                # TODO we should say to user that
-                # few of them are missing in avalon
-                for av_ent in av_ents_by_name:
-                    if av_ent["data"]["parents"] != parents:
-                        continue
+            av_ents_by_name = asset_docs_by_name[entity["name"]]
+            if not av_ents_by_name:
+                continue

-                    # TODO we should say to user that found entity
-                    # with same name does not match same ftrack id?
-                    if "ftrackId" not in av_ent["data"]:
-                        selected_av_entities.append(av_ent)
-                        found_without_ftrack_id[str(av_ent["_id"])] = ftrack_id
-                        break
+            ent_path_items = [ent["name"] for ent in entity["link"]]
+            end_index = len(ent_path_items) - 1
+            parents = ent_path_items[1:end_index:]
+            # TODO we should say to user that
+            # few of them are missing in avalon
+            for av_ent in av_ents_by_name:
+                if av_ent["data"]["parents"] != parents:
+                    continue
+
+                # TODO we should say to user that found entity
+                # with same name does not match same ftrack id?
+                if "ftrackId" not in av_ent["data"]:
+                    selected_av_entities.append(av_ent)
+                    found_without_ftrack_id[str(av_ent["_id"])] = ftrack_id
+                    break

         if not selected_av_entities:
             return {

@@ -206,10 +208,7 @@ class DeleteAssetSubset(BaseAction):

             items.append(id_item)
         asset_ids = [ent["_id"] for ent in selected_av_entities]
-        subsets_for_selection = self.dbcon.find({
-            "type": "subset",
-            "parent": {"$in": asset_ids}
-        })
+        subsets_for_selection = get_subsets(project_name, asset_ids=asset_ids)

         asset_ending = ""
         if len(selected_av_entities) > 1:

@@ -459,13 +458,9 @@ class DeleteAssetSubset(BaseAction):
         if len(assets_to_delete) > 0:
             map_av_ftrack_id = spec_data["without_ftrack_id"]
             # Prepare data when deleting whole avalon asset
-            avalon_assets = self.dbcon.find(
-                {"type": "asset"},
-                {
-                    "_id": 1,
-                    "data.visualParent": 1,
-                    "data.ftrackId": 1
-                }
+            avalon_assets = get_assets(
+                project_name,
+                fields=["_id", "data.visualParent", "data.ftrackId"]
             )
             avalon_assets_by_parent = collections.defaultdict(list)
             for asset in avalon_assets:
@@ -5,7 +5,12 @@ import uuid
 import clique
 from pymongo import UpdateOne

-
+from openpype.client import (
+    get_assets,
+    get_subsets,
+    get_versions,
+    get_representations
+)
 from openpype.api import Anatomy
 from openpype.lib import StringTemplate, TemplateUnsolved
 from openpype.pipeline import AvalonMongoDB

@@ -198,10 +203,9 @@ class DeleteOldVersions(BaseAction):
         self.log.debug("Project is set to {}".format(project_name))

         # Get Assets from avalon database
-        assets = list(self.dbcon.find({
-            "type": "asset",
-            "name": {"$in": avalon_asset_names}
-        }))
+        assets = list(
+            get_assets(project_name, asset_names=avalon_asset_names)
+        )
         asset_id_to_name_map = {
             asset["_id"]: asset["name"] for asset in assets
         }

@@ -210,10 +214,9 @@ class DeleteOldVersions(BaseAction):
         self.log.debug("Collected assets ({})".format(len(asset_ids)))

         # Get Subsets
-        subsets = list(self.dbcon.find({
-            "type": "subset",
-            "parent": {"$in": asset_ids}
-        }))
+        subsets = list(
+            get_subsets(project_name, asset_ids=asset_ids)
+        )
         subsets_by_id = {}
         subset_ids = []
         for subset in subsets:

@@ -230,10 +233,9 @@ class DeleteOldVersions(BaseAction):
         self.log.debug("Collected subsets ({})".format(len(subset_ids)))

         # Get Versions
-        versions = list(self.dbcon.find({
-            "type": "version",
-            "parent": {"$in": subset_ids}
-        }))
+        versions = list(
+            get_versions(project_name, subset_ids=subset_ids)
+        )

         versions_by_parent = collections.defaultdict(list)
         for ent in versions:

@@ -295,10 +297,9 @@ class DeleteOldVersions(BaseAction):
                 "message": msg
             }

-        repres = list(self.dbcon.find({
-            "type": "representation",
-            "parent": {"$in": version_ids}
-        }))
+        repres = list(
+            get_representations(project_name, version_ids=version_ids)
+        )

         self.log.debug(
             "Collected representations to remove ({})".format(len(repres))
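The asset, subset, version, representation walk above condensed into one sketch; the keyword arguments match the hunks, the wrapper function is hypothetical:

    from openpype.client import (
        get_assets,
        get_subsets,
        get_versions,
        get_representations
    )

    def representations_for_assets(project_name, asset_names):
        # Each level is filtered by the ids collected on the previous one.
        asset_ids = [
            doc["_id"]
            for doc in get_assets(project_name, asset_names=asset_names)
        ]
        subset_ids = [
            doc["_id"]
            for doc in get_subsets(project_name, asset_ids=asset_ids)
        ]
        version_ids = [
            doc["_id"]
            for doc in get_versions(project_name, subset_ids=subset_ids)
        ]
        return list(get_representations(
            project_name, version_ids=version_ids
        ))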
@@ -3,8 +3,13 @@ import copy
 import json
 import collections

-from bson.objectid import ObjectId
-
+from openpype.client import (
+    get_project,
+    get_assets,
+    get_subsets,
+    get_versions,
+    get_representations
+)
 from openpype.api import Anatomy, config
 from openpype_modules.ftrack.lib import BaseAction, statics_icon
 from openpype_modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY

@@ -18,11 +23,9 @@ from openpype.lib.delivery import (
     process_single_file,
     process_sequence
 )
-from openpype.pipeline import AvalonMongoDB


 class Delivery(BaseAction):
-
     identifier = "delivery.action"
     label = "Delivery"
     description = "Deliver data to client"

@@ -30,11 +33,6 @@ class Delivery(BaseAction):
     icon = statics_icon("ftrack", "action_icons", "Delivery.svg")
     settings_key = "delivery_action"

-    def __init__(self, *args, **kwargs):
-        self.dbcon = AvalonMongoDB()
-
-        super(Delivery, self).__init__(*args, **kwargs)
-
     def discover(self, session, entities, event):
         is_valid = False
         for entity in entities:

@@ -57,9 +55,7 @@ class Delivery(BaseAction):

         project_entity = self.get_project_from_entity(entities[0])
         project_name = project_entity["full_name"]
-        self.dbcon.install()
-        self.dbcon.Session["AVALON_PROJECT"] = project_name
-        project_doc = self.dbcon.find_one({"type": "project"}, {"name": True})
+        project_doc = get_project(project_name, fields=["name"])
         if not project_doc:
             return {
                 "success": False,

@@ -68,8 +64,7 @@ class Delivery(BaseAction):
             ).format(project_name)
         }

-        repre_names = self._get_repre_names(session, entities)
-        self.dbcon.uninstall()
+        repre_names = self._get_repre_names(project_name, session, entities)

         items.append({
             "type": "hidden",

@@ -198,17 +193,21 @@ class Delivery(BaseAction):
             "title": title
         }

-    def _get_repre_names(self, session, entities):
-        version_ids = self._get_interest_version_ids(session, entities)
+    def _get_repre_names(self, project_name, session, entities):
+        version_ids = self._get_interest_version_ids(
+            project_name, session, entities
+        )
         if not version_ids:
             return []
-        repre_docs = self.dbcon.find({
-            "type": "representation",
-            "parent": {"$in": version_ids}
-        })
-        return list(sorted(repre_docs.distinct("name")))
+        repre_docs = get_representations(
+            project_name,
+            version_ids=version_ids,
+            fields=["name"]
+        )
+        repre_names = {repre_doc["name"] for repre_doc in repre_docs}
+        return list(sorted(repre_names))

-    def _get_interest_version_ids(self, session, entities):
+    def _get_interest_version_ids(self, project_name, session, entities):
         # Extract AssetVersion entities
         asset_versions = self._extract_asset_versions(session, entities)
         # Prepare Asset ids

@@ -235,14 +234,18 @@ class Delivery(BaseAction):
             subset_names.add(asset["name"])
             version_nums.add(asset_version["version"])

-        asset_docs_by_ftrack_id = self._get_asset_docs(session, parent_ids)
+        asset_docs_by_ftrack_id = self._get_asset_docs(
+            project_name, session, parent_ids
+        )
         subset_docs = self._get_subset_docs(
+            project_name,
             asset_docs_by_ftrack_id,
             subset_names,
             asset_versions,
             assets_by_id
         )
         version_docs = self._get_version_docs(
+            project_name,
             asset_docs_by_ftrack_id,
             subset_docs,
             version_nums,

@@ -290,6 +293,7 @@ class Delivery(BaseAction):

     def _get_version_docs(
         self,
+        project_name,
         asset_docs_by_ftrack_id,
         subset_docs,
         version_nums,

@@ -300,11 +304,11 @@ class Delivery(BaseAction):
             subset_doc["_id"]: subset_doc
             for subset_doc in subset_docs
         }
-        version_docs = list(self.dbcon.find({
-            "type": "version",
-            "parent": {"$in": list(subset_docs_by_id.keys())},
-            "name": {"$in": list(version_nums)}
-        }))
+        version_docs = list(get_versions(
+            project_name,
+            subset_ids=subset_docs_by_id.keys(),
+            versions=version_nums
+        ))
         version_docs_by_parent_id = collections.defaultdict(dict)
         for version_doc in version_docs:
             subset_doc = subset_docs_by_id[version_doc["parent"]]

@@ -345,6 +349,7 @@ class Delivery(BaseAction):

     def _get_subset_docs(
         self,
+        project_name,
         asset_docs_by_ftrack_id,
         subset_names,
         asset_versions,

@@ -354,11 +359,11 @@ class Delivery(BaseAction):
             asset_doc["_id"]
             for asset_doc in asset_docs_by_ftrack_id.values()
         ]
-        subset_docs = list(self.dbcon.find({
-            "type": "subset",
-            "parent": {"$in": asset_doc_ids},
-            "name": {"$in": list(subset_names)}
-        }))
+        subset_docs = list(get_subsets(
+            project_name,
+            asset_ids=asset_doc_ids,
+            subset_names=subset_names
+        ))
         subset_docs_by_parent_id = collections.defaultdict(dict)
         for subset_doc in subset_docs:
             asset_id = subset_doc["parent"]

@@ -385,15 +390,21 @@ class Delivery(BaseAction):
             filtered_subsets.append(subset_doc)
         return filtered_subsets

-    def _get_asset_docs(self, session, parent_ids):
-        asset_docs = list(self.dbcon.find({
-            "type": "asset",
-            "data.ftrackId": {"$in": list(parent_ids)}
-        }))
+    def _get_asset_docs(self, project_name, session, parent_ids):
+        asset_docs = list(get_assets(
+            project_name, fields=["_id", "name", "data.ftrackId"]
+        ))

+        asset_docs_by_id = {}
+        asset_docs_by_name = {}
         asset_docs_by_ftrack_id = {}
+        for asset_doc in asset_docs:
+            asset_id = str(asset_doc["_id"])
+            asset_name = asset_doc["name"]
+            ftrack_id = asset_doc["data"].get("ftrackId")

+            asset_docs_by_id[asset_id] = asset_doc
+            asset_docs_by_name[asset_name] = asset_doc
+            if ftrack_id:
+                asset_docs_by_ftrack_id[ftrack_id] = asset_doc

@@ -406,15 +417,15 @@ class Delivery(BaseAction):
         avalon_mongo_id_values = query_custom_attributes(
             session, [attr_def["id"]], parent_ids, True
         )
-        entity_ids_by_mongo_id = {
-            ObjectId(item["value"]): item["entity_id"]
-            for item in avalon_mongo_id_values
-            if item["value"]
-        }
-
         missing_ids = set(parent_ids)
-        for entity_id in set(entity_ids_by_mongo_id.values()):
-            if entity_id in missing_ids:
+        for item in avalon_mongo_id_values:
+            if not item["value"]:
+                continue
+            asset_id = item["value"]
+            entity_id = item["entity_id"]
+            asset_doc = asset_docs_by_id.get(asset_id)
+            if asset_doc:
+                asset_docs_by_ftrack_id[entity_id] = asset_doc
                 missing_ids.remove(entity_id)

         entity_ids_by_name = {}

@@ -427,36 +438,10 @@ class Delivery(BaseAction):
             for entity in not_found_entities
         }

-        expressions = []
-        if entity_ids_by_mongo_id:
-            expression = {
-                "type": "asset",
-                "_id": {"$in": list(entity_ids_by_mongo_id.keys())}
-            }
-            expressions.append(expression)
-
-        if entity_ids_by_name:
-            expression = {
-                "type": "asset",
-                "name": {"$in": list(entity_ids_by_name.keys())}
-            }
-            expressions.append(expression)
-
-        if expressions:
-            if len(expressions) == 1:
-                filter = expressions[0]
-            else:
-                filter = {"$or": expressions}
-
-            asset_docs = self.dbcon.find(filter)
-            for asset_doc in asset_docs:
-                if asset_doc["_id"] in entity_ids_by_mongo_id:
-                    entity_id = entity_ids_by_mongo_id[asset_doc["_id"]]
-                    asset_docs_by_ftrack_id[entity_id] = asset_doc
-
-                elif asset_doc["name"] in entity_ids_by_name:
-                    entity_id = entity_ids_by_name[asset_doc["name"]]
-                    asset_docs_by_ftrack_id[entity_id] = asset_doc
+        for asset_name, entity_id in entity_ids_by_name.items():
+            asset_doc = asset_docs_by_name.get(asset_name)
+            if asset_doc:
+                asset_docs_by_ftrack_id[entity_id] = asset_doc

         return asset_docs_by_ftrack_id

@@ -490,7 +475,6 @@ class Delivery(BaseAction):
         session.commit()

         try:
-            self.dbcon.install()
             report = self.real_launch(session, entities, event)

         except Exception as exc:

@@ -516,7 +500,6 @@ class Delivery(BaseAction):
         else:
             job["status"] = "failed"
         session.commit()
-        self.dbcon.uninstall()

         if not report["success"]:
             self.show_interface(

@@ -558,16 +541,15 @@ class Delivery(BaseAction):
         if not os.path.exists(location_path):
             os.makedirs(location_path)

-        self.dbcon.Session["AVALON_PROJECT"] = project_name
-
         self.log.debug("Collecting representations to process.")
-        version_ids = self._get_interest_version_ids(session, entities)
-        repres_to_deliver = list(self.dbcon.find({
-            "type": "representation",
-            "parent": {"$in": version_ids},
-            "name": {"$in": repre_names}
-        }))
+
+        version_ids = self._get_interest_version_ids(
+            project_name, session, entities
+        )
+        repres_to_deliver = list(get_representations(
+            project_name,
+            representation_names=repre_names,
+            version_ids=version_ids
+        ))
         anatomy = Anatomy(project_name)

         format_dict = get_format_dict(anatomy, location_path)
@@ -7,6 +7,10 @@ import datetime

 import ftrack_api

+from openpype.client import (
+    get_project,
+    get_assets,
+)
 from openpype.api import get_project_settings
 from openpype.lib import (
     get_workfile_template_key,

@@ -14,7 +18,6 @@ from openpype.lib import (
     Anatomy,
     StringTemplate,
 )
-from openpype.pipeline import AvalonMongoDB
 from openpype_modules.ftrack.lib import BaseAction, statics_icon
 from openpype_modules.ftrack.lib.avalon_sync import create_chunks

@@ -248,10 +251,8 @@ class FillWorkfileAttributeAction(BaseAction):
         # Find matchin asset documents and map them by ftrack task entities
         # - result stored to 'asset_docs_with_task_entities' is list with
         #   tuple `(asset document, [task entitis, ...])`
-        dbcon = AvalonMongoDB()
-        dbcon.Session["AVALON_PROJECT"] = project_name
         # Quety all asset documents
-        asset_docs = list(dbcon.find({"type": "asset"}))
+        asset_docs = list(get_assets(project_name))
         job_entity["data"] = json.dumps({
             "description": "(1/3) Asset documents queried."
         })

@@ -276,7 +277,7 @@ class FillWorkfileAttributeAction(BaseAction):
             # Keep placeholders in the template unfilled
             host_name = "{app}"
             extension = "{ext}"
-            project_doc = dbcon.find_one({"type": "project"})
+            project_doc = get_project(project_name)
             project_settings = get_project_settings(project_name)
             anatomy = Anatomy(project_name)
             templates_by_key = {}
@@ -1,8 +1,8 @@
 import json

+from openpype.client import get_project
 from openpype.api import ProjectSettings
 from openpype.lib import create_project
-from openpype.pipeline import AvalonMongoDB
 from openpype.settings import SaveWarningExc

 from openpype_modules.ftrack.lib import (

@@ -389,12 +389,8 @@ class PrepareProjectLocal(BaseAction):
         project_name = project_entity["full_name"]

         # Try to find project document
-        dbcon = AvalonMongoDB()
-        dbcon.install()
-        dbcon.Session["AVALON_PROJECT"] = project_name
-        project_doc = dbcon.find_one({
-            "type": "project"
-        })
+        project_doc = get_project(project_name)

         # Create project if is not available
         # - creation is required to be able set project anatomy and attributes
         if not project_doc:

@@ -402,9 +398,7 @@ class PrepareProjectLocal(BaseAction):
             self.log.info("Creating project \"{} [{}]\"".format(
                 project_name, project_code
             ))
-            create_project(project_name, project_code, dbcon=dbcon)
-
-        dbcon.uninstall()
+            create_project(project_name, project_code)

         project_settings = ProjectSettings(project_name)
         project_anatomy_settings = project_settings["project_anatomy"]
@@ -5,9 +5,16 @@ import json

 import ftrack_api

+from openpype.client import (
+    get_asset_by_name,
+    get_subset_by_name,
+    get_version_by_name,
+    get_representation_by_name
+)
+from openpype.api import Anatomy
 from openpype.pipeline import (
     get_representation_path,
-    legacy_io,
+    AvalonMongoDB,
 )
 from openpype_modules.ftrack.lib import BaseAction, statics_icon

@@ -255,9 +262,10 @@ class RVAction(BaseAction):
             "Component", list(event["data"]["values"].values())[0]
         )["version"]["asset"]["parent"]["link"][0]
         project = session.get(link["type"], link["id"])
-        os.environ["AVALON_PROJECT"] = project["name"]
-        legacy_io.Session["AVALON_PROJECT"] = project["name"]
-        legacy_io.install()
+        project_name = project["full_name"]
+        dbcon = AvalonMongoDB()
+        dbcon.Session["AVALON_PROJECT"] = project_name
+        anatomy = Anatomy(project_name)

         location = ftrack_api.Session().pick_location()

@@ -281,37 +289,38 @@ class RVAction(BaseAction):
             if online_source:
                 continue

-            asset = legacy_io.find_one({"type": "asset", "name": parent_name})
-            subset = legacy_io.find_one(
-                {
-                    "type": "subset",
-                    "name": component["version"]["asset"]["name"],
-                    "parent": asset["_id"]
-                }
+            subset_name = component["version"]["asset"]["name"]
+            version_name = component["version"]["version"]
+            representation_name = component["file_type"][1:]
+
+            asset_doc = get_asset_by_name(
+                project_name, parent_name, fields=["_id"]
             )
-            version = legacy_io.find_one(
-                {
-                    "type": "version",
-                    "name": component["version"]["version"],
-                    "parent": subset["_id"]
-                }
+            subset_doc = get_subset_by_name(
+                project_name,
+                subset_name=subset_name,
+                asset_id=asset_doc["_id"]
             )
-            representation = legacy_io.find_one(
-                {
-                    "type": "representation",
-                    "parent": version["_id"],
-                    "name": component["file_type"][1:]
-                }
+            version_doc = get_version_by_name(
+                project_name,
+                version=version_name,
+                subset_id=subset_doc["_id"]
             )
-            if representation is None:
-                representation = legacy_io.find_one(
-                    {
-                        "type": "representation",
-                        "parent": version["_id"],
-                        "name": "preview"
-                    }
+            repre_doc = get_representation_by_name(
+                project_name,
+                version_id=version_doc["_id"],
+                representation_name=representation_name
+            )
+            if not repre_doc:
+                repre_doc = get_representation_by_name(
+                    project_name,
+                    version_id=version_doc["_id"],
+                    representation_name="preview"
                 )
-            paths.append(get_representation_path(representation))
+
+            paths.append(get_representation_path(
+                repre_doc, root=anatomy.roots, dbcon=dbcon
+            ))

         return paths

@@ -5,6 +5,14 @@ import requests

 from bson.objectid import ObjectId

+from openpype.client import (
+    get_project,
+    get_asset_by_id,
+    get_assets,
+    get_subset_by_name,
+    get_version_by_name,
+    get_representations
+)
 from openpype_modules.ftrack.lib import BaseAction, statics_icon
 from openpype.api import Anatomy
 from openpype.pipeline import AvalonMongoDB

@@ -385,7 +393,7 @@ class StoreThumbnailsToAvalon(BaseAction):

         db_con.Session["AVALON_PROJECT"] = project_name

-        avalon_project = db_con.find_one({"type": "project"})
+        avalon_project = get_project(project_name)
         output["project"] = avalon_project

         if not avalon_project:

@@ -399,19 +407,17 @@ class StoreThumbnailsToAvalon(BaseAction):
         asset_mongo_id = parent["custom_attributes"].get(CUST_ATTR_ID_KEY)
         if asset_mongo_id:
             try:
                 asset_mongo_id = ObjectId(asset_mongo_id)
-                asset_ent = db_con.find_one({
-                    "type": "asset",
-                    "_id": asset_mongo_id
-                })
+                asset_ent = get_asset_by_id(project_name, asset_mongo_id)
             except Exception:
                 pass

         if not asset_ent:
-            asset_ent = db_con.find_one({
-                "type": "asset",
-                "data.ftrackId": parent["id"]
-            })
+            asset_docs = get_assets(project_name, asset_names=[parent["name"]])
+            for asset_doc in asset_docs:
+                ftrack_id = asset_doc.get("data", {}).get("ftrackId")
+                if ftrack_id == parent["id"]:
+                    asset_ent = asset_doc
+                    break

         output["asset"] = asset_ent

@@ -422,13 +428,11 @@ class StoreThumbnailsToAvalon(BaseAction):
             )
             return output

-        asset_mongo_id = asset_ent["_id"]
-
-        subset_ent = db_con.find_one({
-            "type": "subset",
-            "parent": asset_mongo_id,
-            "name": subset_name
-        })
+        subset_ent = get_subset_by_name(
+            project_name,
+            subset_name=subset_name,
+            asset_id=asset_ent["_id"]
+        )

         output["subset"] = subset_ent

@@ -439,11 +443,11 @@ class StoreThumbnailsToAvalon(BaseAction):
             ).format(subset_name, ent_path)
             return output

-        version_ent = db_con.find_one({
-            "type": "version",
-            "name": version,
-            "parent": subset_ent["_id"]
-        })
+        version_ent = get_version_by_name(
+            project_name,
+            version,
+            subset_ent["_id"]
+        )

         output["version"] = version_ent

@@ -454,10 +458,10 @@ class StoreThumbnailsToAvalon(BaseAction):
             ).format(version, subset_name, ent_path)
             return output

-        repre_ents = list(db_con.find({
-            "type": "representation",
-            "parent": version_ent["_id"]
-        }))
+        repre_ents = list(get_representations(
+            project_name,
+            version_ids=[version_ent["_id"]]
+        ))

         output["representations"] = repre_ents
         return output
Some files were not shown because too many files have changed in this diff