Mirror of https://github.com/ynput/ayon-core.git

Merge branch 'develop' into shotgrid_module

Commit 325827361a: 417 changed files with 90396 additions and 1540 deletions
@@ -309,7 +309,18 @@
      "contributions": [
        "code"
      ]
    },
    {
      "login": "Tilix4",
      "name": "Félix David",
      "avatar_url": "https://avatars.githubusercontent.com/u/22875539?v=4",
      "profile": "http://felixdavid.com/",
      "contributions": [
        "code",
        "doc"
      ]
    }
  ],
  "contributorsPerLine": 7
}
  "contributorsPerLine": 7,
  "skipCi": true
}
CHANGELOG.md (131 changed lines)
@@ -1,38 +1,75 @@
# Changelog

## [3.10.0-nightly.4](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.11.0-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.8...HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.10.0...HEAD)

### 📖 Documentation

- doc: adding royal render and multiverse to the web site [\#3285](https://github.com/pypeclub/OpenPype/pull/3285)

**🚀 Enhancements**

- General: Updated windows oiio tool [\#3268](https://github.com/pypeclub/OpenPype/pull/3268)
- Unreal: add support for skeletalMesh and staticMesh to loaders [\#3267](https://github.com/pypeclub/OpenPype/pull/3267)
- Maya: reference loaders could store placeholder in referenced url [\#3264](https://github.com/pypeclub/OpenPype/pull/3264)
- TVPaint: Init file for TVPaint worker also handle guideline images [\#3250](https://github.com/pypeclub/OpenPype/pull/3250)
- Nuke: Change default icon path in settings [\#3247](https://github.com/pypeclub/OpenPype/pull/3247)

**🐛 Bug fixes**

- Global: extract review slate issues [\#3286](https://github.com/pypeclub/OpenPype/pull/3286)
- Webpublisher: return only active projects in ProjectsEndpoint [\#3281](https://github.com/pypeclub/OpenPype/pull/3281)
- Hiero: add support for task tags 3.10.x [\#3279](https://github.com/pypeclub/OpenPype/pull/3279)
- General: Fix Oiio tool path resolving [\#3278](https://github.com/pypeclub/OpenPype/pull/3278)
- Maya: Fix udim support for e.g. uppercase \<UDIM\> tag [\#3266](https://github.com/pypeclub/OpenPype/pull/3266)
- Nuke: bake reformat was failing on string type [\#3261](https://github.com/pypeclub/OpenPype/pull/3261)
- Maya: hotfix Pxr multitexture in looks [\#3260](https://github.com/pypeclub/OpenPype/pull/3260)
- Unreal: Fix Camera Loading if Layout is missing [\#3255](https://github.com/pypeclub/OpenPype/pull/3255)
- Unreal: Fixed Animation loading in UE5 [\#3240](https://github.com/pypeclub/OpenPype/pull/3240)
- Unreal: Fixed Render creation in UE5 [\#3239](https://github.com/pypeclub/OpenPype/pull/3239)
- Unreal: Fixed Camera loading in UE5 [\#3238](https://github.com/pypeclub/OpenPype/pull/3238)
- Flame: debugging [\#3224](https://github.com/pypeclub/OpenPype/pull/3224)
- add silent audio to slate [\#3162](https://github.com/pypeclub/OpenPype/pull/3162)

**Merged pull requests:**

- Maya: better handling of legacy review subsets names [\#3269](https://github.com/pypeclub/OpenPype/pull/3269)
- Deadline: publishing of animation and pointcache on a farm [\#3225](https://github.com/pypeclub/OpenPype/pull/3225)
- Nuke: add pointcache and animation to loader [\#3186](https://github.com/pypeclub/OpenPype/pull/3186)
- Add a gizmo menu to nuke [\#3172](https://github.com/pypeclub/OpenPype/pull/3172)

## [3.10.0](https://github.com/pypeclub/OpenPype/tree/3.10.0) (2022-05-26)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.10.0-nightly.6...3.10.0)

**🆕 New features**

- General: OpenPype modules publish plugins are registered in host [\#3180](https://github.com/pypeclub/OpenPype/pull/3180)
- General: Creator plugins from addons can be registered [\#3179](https://github.com/pypeclub/OpenPype/pull/3179)
- Ftrack: Single image reviewable [\#3157](https://github.com/pypeclub/OpenPype/pull/3157)
- Nuke: Expose write attributes to settings [\#3123](https://github.com/pypeclub/OpenPype/pull/3123)
- Hiero: Initial frame publish support [\#3106](https://github.com/pypeclub/OpenPype/pull/3106)

**🚀 Enhancements**

- Maya: FBX camera export [\#3253](https://github.com/pypeclub/OpenPype/pull/3253)
- General: updating common vendor `scriptmenu` to 1.5.2 [\#3246](https://github.com/pypeclub/OpenPype/pull/3246)
- Project Manager: Allow to paste Tasks into multiple assets at the same time [\#3226](https://github.com/pypeclub/OpenPype/pull/3226)
- Project manager: Sped up project load [\#3216](https://github.com/pypeclub/OpenPype/pull/3216)
- Loader UI: Speed issues of loader with sync server [\#3199](https://github.com/pypeclub/OpenPype/pull/3199)
- Looks: add basic support for Renderman [\#3190](https://github.com/pypeclub/OpenPype/pull/3190)
- Maya: added clean\_import option to Import loader [\#3181](https://github.com/pypeclub/OpenPype/pull/3181)
- Add the scripts menu definition to nuke [\#3168](https://github.com/pypeclub/OpenPype/pull/3168)
- Maya: add maya 2023 to default applications [\#3167](https://github.com/pypeclub/OpenPype/pull/3167)
- General: Add 'dataclasses' to required python modules [\#3149](https://github.com/pypeclub/OpenPype/pull/3149)
- Hooks: Tweak logging grammar [\#3147](https://github.com/pypeclub/OpenPype/pull/3147)
- Nuke: settings for reformat node in CreateWriteRender node [\#3143](https://github.com/pypeclub/OpenPype/pull/3143)
- Publisher: UI Modifications and fixes [\#3139](https://github.com/pypeclub/OpenPype/pull/3139)
- General: Simplified OP modules/addons import [\#3137](https://github.com/pypeclub/OpenPype/pull/3137)
- Terminal: Tweak coloring of TrayModuleManager logging enabled states [\#3133](https://github.com/pypeclub/OpenPype/pull/3133)
- General: Cleanup some Loader docstrings [\#3131](https://github.com/pypeclub/OpenPype/pull/3131)
- Nuke: render instance with subset name filtered overrides [\#3117](https://github.com/pypeclub/OpenPype/pull/3117)
- Unreal: Layout and Camera update and remove functions reimplemented and improvements [\#3116](https://github.com/pypeclub/OpenPype/pull/3116)
- Settings: Remove environment groups from settings [\#3115](https://github.com/pypeclub/OpenPype/pull/3115)
- TVPaint: Match renderlayer key with other hosts [\#3110](https://github.com/pypeclub/OpenPype/pull/3110)
- Ftrack: AssetVersion status on publish [\#3108](https://github.com/pypeclub/OpenPype/pull/3108)
- Tray publisher: Simple families from settings [\#3105](https://github.com/pypeclub/OpenPype/pull/3105)

**🐛 Bug fixes**

- nuke: use framerange issue [\#3254](https://github.com/pypeclub/OpenPype/pull/3254)
- Ftrack: Chunk sizes for queries has minimal condition [\#3244](https://github.com/pypeclub/OpenPype/pull/3244)
- Maya: renderman displays needs to be filtered [\#3242](https://github.com/pypeclub/OpenPype/pull/3242)
- Ftrack: Validate that the user exists on ftrack [\#3237](https://github.com/pypeclub/OpenPype/pull/3237)
- Maya: Fix support for multiple resolutions [\#3236](https://github.com/pypeclub/OpenPype/pull/3236)
- TVPaint: Look for more groups than 12 [\#3228](https://github.com/pypeclub/OpenPype/pull/3228)
- Hiero: debugging frame range and other 3.10 [\#3222](https://github.com/pypeclub/OpenPype/pull/3222)
- Project Manager: Fix persistent editors on project change [\#3218](https://github.com/pypeclub/OpenPype/pull/3218)
- Deadline: instance data overwrite fix [\#3214](https://github.com/pypeclub/OpenPype/pull/3214)
- Ftrack: Push hierarchical attributes action works [\#3210](https://github.com/pypeclub/OpenPype/pull/3210)
- Standalone Publisher: Always create new representation for thumbnail [\#3203](https://github.com/pypeclub/OpenPype/pull/3203)

@@ -45,25 +82,17 @@
- General: Oiio conversion for ffmpeg checks for invalid characters [\#3166](https://github.com/pypeclub/OpenPype/pull/3166)
- Fix for attaching render to subset [\#3164](https://github.com/pypeclub/OpenPype/pull/3164)
- Harmony: fixed missing task name in render instance [\#3163](https://github.com/pypeclub/OpenPype/pull/3163)
- Ftrack: Action delete old versions formatting works [\#3152](https://github.com/pypeclub/OpenPype/pull/3152)
- Deadline: fix the output directory [\#3144](https://github.com/pypeclub/OpenPype/pull/3144)
- General: New Session schema [\#3141](https://github.com/pypeclub/OpenPype/pull/3141)
- General: Missing version on headless mode crash properly [\#3136](https://github.com/pypeclub/OpenPype/pull/3136)
- TVPaint: Composite layers in reversed order [\#3135](https://github.com/pypeclub/OpenPype/pull/3135)
- Nuke: fixing default settings for workfile builder loaders [\#3120](https://github.com/pypeclub/OpenPype/pull/3120)
- Nuke: fix anatomy imageio regex default [\#3119](https://github.com/pypeclub/OpenPype/pull/3119)
- General: Python 3 compatibility in queries [\#3112](https://github.com/pypeclub/OpenPype/pull/3112)
- General: Collect loaded versions skips not existing representations [\#3095](https://github.com/pypeclub/OpenPype/pull/3095)

**🔀 Refactored code**

- General: Remove remaining imports from avalon [\#3130](https://github.com/pypeclub/OpenPype/pull/3130)
- Avalon repo removed from Jobs workflow [\#3193](https://github.com/pypeclub/OpenPype/pull/3193)

**Merged pull requests:**

- Harmony: message length in 21.1 [\#3257](https://github.com/pypeclub/OpenPype/pull/3257)
- Harmony: 21.1 fix [\#3249](https://github.com/pypeclub/OpenPype/pull/3249)
- Maya: added jpg to filter for Image Plane Loader [\#3223](https://github.com/pypeclub/OpenPype/pull/3223)
- Webpublisher: replace space by underscore in subset names [\#3160](https://github.com/pypeclub/OpenPype/pull/3160)
- StandalonePublisher: removed Extract Background plugins [\#3093](https://github.com/pypeclub/OpenPype/pull/3093)

## [3.9.8](https://github.com/pypeclub/OpenPype/tree/3.9.8) (2022-05-19)

@@ -72,6 +101,7 @@
**🚀 Enhancements**

- nuke: generate publishing nodes inside render group node [\#3206](https://github.com/pypeclub/OpenPype/pull/3206)
- Loader UI: Speed issues of loader with sync server [\#3200](https://github.com/pypeclub/OpenPype/pull/3200)
- Backport of fix for attaching renders to subsets [\#3195](https://github.com/pypeclub/OpenPype/pull/3195)

**🐛 Bug fixes**

@@ -91,57 +121,14 @@

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.6...3.9.7)

**🆕 New features**

- Ftrack: Single image reviewable [\#3158](https://github.com/pypeclub/OpenPype/pull/3158)

**🚀 Enhancements**

- Deadline output dir issue to 3.9x [\#3155](https://github.com/pypeclub/OpenPype/pull/3155)
- Compressed bgeo publishing in SAP and Houdini loader [\#3153](https://github.com/pypeclub/OpenPype/pull/3153)
- nuke: removing redundant code from startup [\#3142](https://github.com/pypeclub/OpenPype/pull/3142)
- Houdini: Add loader for alembic through Alembic Archive node [\#3140](https://github.com/pypeclub/OpenPype/pull/3140)

**🐛 Bug fixes**

- Ftrack: Action delete old versions formatting works [\#3154](https://github.com/pypeclub/OpenPype/pull/3154)
- nuke: adding extract thumbnail settings [\#3148](https://github.com/pypeclub/OpenPype/pull/3148)

**Merged pull requests:**

- Webpublisher: replace space by underscore in subset names [\#3159](https://github.com/pypeclub/OpenPype/pull/3159)

## [3.9.6](https://github.com/pypeclub/OpenPype/tree/3.9.6) (2022-05-03)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.5...3.9.6)

**🆕 New features**

- Nuke: render instance with subset name filtered overrides \(3.9.x\) [\#3125](https://github.com/pypeclub/OpenPype/pull/3125)

**🚀 Enhancements**

- TVPaint: Match renderlayer key with other hosts [\#3109](https://github.com/pypeclub/OpenPype/pull/3109)

**🐛 Bug fixes**

- TVPaint: Composite layers in reversed order [\#3134](https://github.com/pypeclub/OpenPype/pull/3134)
- General: Python 3 compatibility in queries [\#3111](https://github.com/pypeclub/OpenPype/pull/3111)

**Merged pull requests:**

- Ftrack: AssetVersion status on publish [\#3114](https://github.com/pypeclub/OpenPype/pull/3114)
- renderman support for 3.9.x [\#3107](https://github.com/pypeclub/OpenPype/pull/3107)

## [3.9.5](https://github.com/pypeclub/OpenPype/tree/3.9.5) (2022-04-25)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.10.0-nightly.2...3.9.5)

**🐛 Bug fixes**

- Ftrack: Update Create Folders action [\#3092](https://github.com/pypeclub/OpenPype/pull/3092)
- General: Extract review sequence is not converted with same names [\#3075](https://github.com/pypeclub/OpenPype/pull/3075)

## [3.9.4](https://github.com/pypeclub/OpenPype/tree/3.9.4) (2022-04-15)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.9.4-nightly.2...3.9.4)
@@ -1,6 +1,6 @@
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
[](#contributors-)
[](#contributors-)
<!-- ALL-CONTRIBUTORS-BADGE:END -->
OpenPype
====
@@ -328,6 +328,7 @@ Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/d
<td align="center"><a href="https://github.com/Malthaldar"><img src="https://avatars.githubusercontent.com/u/33671694?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Malthaldar</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=Malthaldar" title="Code">💻</a></td>
<td align="center"><a href="http://www.svenneve.com/"><img src="https://avatars.githubusercontent.com/u/2472863?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Sven Neve</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=svenneve" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/zafrs"><img src="https://avatars.githubusercontent.com/u/26890002?v=4?s=100" width="100px;" alt=""/><br /><sub><b>zafrs</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=zafrs" title="Code">💻</a></td>
<td align="center"><a href="http://felixdavid.com/"><img src="https://avatars.githubusercontent.com/u/22875539?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Félix David</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=Tilix4" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/commits?author=Tilix4" title="Documentation">📖</a></td>
</tr>
</table>
@@ -44,6 +44,7 @@ from . import resources

from .plugin import (
    Extractor,
    Integrator,

    ValidatePipelineOrder,
    ValidateContentsOrder,

@@ -86,6 +87,7 @@ __all__ = [

    # plugin classes
    "Extractor",
    "Integrator",
    # ordering
    "ValidatePipelineOrder",
    "ValidateContentsOrder",
@@ -3,6 +3,7 @@ import os
import re
import json
import pickle
import clique
import tempfile
import itertools
import contextlib
@@ -560,7 +561,7 @@ def get_segment_attributes(segment):
        if not hasattr(segment, attr_name):
            continue
        attr = getattr(segment, attr_name)
        segment_attrs_data[attr] = str(attr).replace("+", ":")
        segment_attrs_data[attr_name] = str(attr).replace("+", ":")

        if attr_name in ["record_in", "record_out"]:
            clip_data[attr_name] = attr.relative_frame
@@ -762,6 +763,7 @@ class MediaInfoFile(object):
    _start_frame = None
    _fps = None
    _drop_mode = None
    _file_pattern = None

    def __init__(self, path, **kwargs):
@@ -773,17 +775,28 @@ class MediaInfoFile(object):
        self._validate_media_script_path()

        # derive other feed variables
        self.feed_basename = os.path.basename(path)
        self.feed_dir = os.path.dirname(path)
        self.feed_ext = os.path.splitext(self.feed_basename)[1][1:].lower()
        feed_basename = os.path.basename(path)
        feed_dir = os.path.dirname(path)
        feed_ext = os.path.splitext(feed_basename)[1][1:].lower()

        with maintained_temp_file_path(".clip") as tmp_path:
            self.log.info("Temp File: {}".format(tmp_path))
            self._generate_media_info_file(tmp_path)
            self._generate_media_info_file(tmp_path, feed_ext, feed_dir)

            # get collection containing feed_basename from path
            self.file_pattern = self._get_collection(
                feed_basename, feed_dir, feed_ext)

            if (
                not self.file_pattern
                and os.path.exists(os.path.join(feed_dir, feed_basename))
            ):
                self.file_pattern = feed_basename

            # get clip data and reduce it to a single clip
            # if there are multiple clips
            xml_data = self._make_single_clip_media_info(tmp_path)
            xml_data = self._make_single_clip_media_info(
                tmp_path, feed_basename, self.file_pattern)
            self.log.debug("xml_data: {}".format(xml_data))
            self.log.debug("type: {}".format(type(xml_data)))
@@ -794,6 +807,123 @@ class MediaInfoFile(object):
        self.log.debug("drop frame: {}".format(self.drop_mode))
        self.clip_data = xml_data

    def _get_collection(self, feed_basename, feed_dir, feed_ext):
        """Get collection string

        Args:
            feed_basename (str): file base name
            feed_dir (str): file's directory
            feed_ext (str): file extension

        Raises:
            AttributeError: feed_ext is not matching feed_basename

        Returns:
            str: collection basename with range of sequence
        """
        partialname = self._separate_file_head(feed_basename, feed_ext)
        self.log.debug("__ partialname: {}".format(partialname))

        # make sure the partial input basename has the correct extension
        if not partialname:
            raise AttributeError(
                "Wrong input attributes. Basename - {}, Ext - {}".format(
                    feed_basename, feed_ext
                )
            )

        # get all related files
        files = [
            f for f in os.listdir(feed_dir)
            if partialname == self._separate_file_head(f, feed_ext)
        ]

        # ignore remainders as we don't need them
        collections = clique.assemble(files)[0]

        # in case no collection is found return None;
        # it is probably just a single file
        if not collections:
            return

        # we expect only one collection
        collection = collections[0]

        self.log.debug("__ collection: {}".format(collection))

        if collection.is_contiguous():
            return self._format_collection(collection)

        # add `[` in front to make sure it won't capture
        # a shot name with the same number
        number_from_path = self._separate_number(feed_basename, feed_ext)
        search_number_pattern = "[" + number_from_path
        # convert to multiple collections
        _continues_colls = collection.separate()
        for _coll in _continues_colls:
            coll_to_text = self._format_collection(
                _coll, len(number_from_path))
            self.log.debug("__ coll_to_text: {}".format(coll_to_text))
            if search_number_pattern in coll_to_text:
                return coll_to_text

    @staticmethod
    def _format_collection(collection, padding=None):
        padding = padding or collection.padding
        head = collection.format("{head}")
        tail = collection.format("{tail}")
        range_template = "[{{:0{0}d}}-{{:0{0}d}}]".format(
            padding)
        ranges = range_template.format(
            min(collection.indexes),
            max(collection.indexes)
        )
        # if no holes then return collection
        return "{}{}{}".format(head, ranges, tail)

    def _separate_file_head(self, basename, extension):
        """Get only the head, without sequence number and extension

        Args:
            basename (str): file base name
            extension (str): file extension

        Returns:
            str: file head
        """
        # in case of a sequence file
        found = re.findall(
            r"(.*)[._][\d]*(?=.{})".format(extension),
            basename,
        )
        if found:
            return found.pop()

        # in case of a single file
        name, ext = os.path.splitext(basename)

        if extension == ext[1:]:
            return name

    def _separate_number(self, basename, extension):
        """Get only the sequence number as a string

        Args:
            basename (str): file base name
            extension (str): file extension

        Returns:
            str: number with padding
        """
        # in case of a sequence file
        found = re.findall(
            r"[._]([\d]*)(?=.{})".format(extension),
            basename,
        )
        if found:
            return found.pop()

    @property
    def clip_data(self):
        """Clip's xml clip data
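For orientation, here is a minimal standalone sketch (with made-up file names) of how `clique.assemble` collapses a frame sequence into the `head[start-end]tail` pattern that `_get_collection` and `_format_collection` build above:

```python
# Minimal sketch: collapse a frame sequence into "head[start-end]tail".
# File names are invented; clique.assemble returns (collections, remainders).
import clique

files = ["shot.0001.exr", "shot.0002.exr", "shot.0003.exr"]
collections, _remainders = clique.assemble(files)
collection = collections[0]

padding = collection.padding  # 4 for "0001"
range_template = "[{{:0{0}d}}-{{:0{0}d}}]".format(padding)
pattern = "{}{}{}".format(
    collection.format("{head}"),  # "shot."
    range_template.format(
        min(collection.indexes), max(collection.indexes)),
    collection.format("{tail}"),  # ".exr"
)
print(pattern)  # shot.[0001-0003].exr
```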
@@ -846,18 +976,41 @@ class MediaInfoFile(object):
    def drop_mode(self, text):
        self._drop_mode = str(text)

    @property
    def file_pattern(self):
        """Clip's file pattern

        Returns:
            str: file pattern, e.g. file.[1-2].exr
        """
        return self._file_pattern

    @file_pattern.setter
    def file_pattern(self, fpattern):
        self._file_pattern = fpattern

    def _validate_media_script_path(self):
        if not os.path.isfile(self.MEDIA_SCRIPT_PATH):
            raise IOError("Media Script does not exist: `{}`".format(
                self.MEDIA_SCRIPT_PATH))

    def _generate_media_info_file(self, fpath):
    def _generate_media_info_file(self, fpath, feed_ext, feed_dir):
        """Generate media info xml .clip file

        Args:
            fpath (str): .clip file path
            feed_ext (str): file extension to be filtered
            feed_dir (str): look-up directory

        Raises:
            TypeError: Type error if it fails
        """
        # Create cmd arguments for getting the xml info file
        cmd_args = [
            self.MEDIA_SCRIPT_PATH,
            "-e", self.feed_ext,
            "-e", feed_ext,
            "-o", fpath,
            self.feed_dir
            feed_dir
        ]

        try:
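The method above hands off to an external media script. As a rough, hypothetical illustration (the script path and directories below are placeholders, and the real invocation sits outside this excerpt), the call amounts to something like:

```python
# Hypothetical sketch of invoking the media-info script; paths are
# placeholders and only the argument shape mirrors the diff above.
import subprocess

cmd_args = [
    "/opt/Autodesk/mio/current/dl_get_media_info",  # assumed script path
    "-e", "exr",              # feed_ext: only this extension is scanned
    "-o", "/tmp/media.clip",  # fpath: output .clip xml file
    "/mnt/footage/shots",     # feed_dir: look-up directory
]
result = subprocess.run(cmd_args, capture_output=True, text=True)
if result.returncode != 0:
    raise TypeError("Error creating `{}` due: {}".format(
        "/tmp/media.clip", result.stderr))
```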
@@ -867,7 +1020,20 @@ class MediaInfoFile(object):
            raise TypeError(
                "Error creating `{}` due: {}".format(fpath, error))

    def _make_single_clip_media_info(self, fpath):
    def _make_single_clip_media_info(self, fpath, feed_basename, path_pattern):
        """Separate out only the relevant clip object from the .clip file

        Args:
            fpath (str): clip file path
            feed_basename (str): search basename
            path_pattern (str): search file pattern (file.[1-2].exr)

        Raises:
            ET.ParseError: if nothing found

        Returns:
            ET.Element: xml element data of matching clip
        """
        with open(fpath) as f:
            lines = f.readlines()
            _added_root = itertools.chain(
@@ -878,14 +1044,30 @@ class MediaInfoFile(object):
        xml_clips = new_root.findall("clip")
        matching_clip = None
        for xml_clip in xml_clips:
            if xml_clip.find("name").text in self.feed_basename:
                matching_clip = xml_clip
            clip_name = xml_clip.find("name").text
            self.log.debug("__ clip_name: `{}`".format(clip_name))
            if clip_name not in feed_basename:
                continue

            # test path pattern
            for out_track in xml_clip.iter("track"):
                for out_feed in out_track.iter("feed"):
                    for span in out_feed.iter("span"):
                        # start frame
                        span_path = span.find("path")
                        self.log.debug(
                            "__ span_path.text: {}, path_pattern: {}".format(
                                span_path.text, path_pattern
                            )
                        )
                        if path_pattern in span_path.text:
                            matching_clip = xml_clip

        if matching_clip is None:
            # raise when the clip is missing
            raise ET.ParseError(
                "Missing clip in `{}`. Available clips {}".format(
                    self.feed_basename, [
                    feed_basename, [
                        xml_clip.find("name").text
                        for xml_clip in xml_clips
                    ]
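A small self-contained sketch (with an invented .clip fragment) of the matching rule above: a clip qualifies only when its `<name>` occurs in the searched basename and the file pattern occurs in one of its `<span><path>` entries:

```python
# Sketch of the clip-matching rule using an invented .clip fragment.
import xml.etree.ElementTree as ET

XML = """<root>
  <clip><name>shotA</name>
    <track><feed><span>
      <path>/mnt/footage/shotA.[0001-0010].exr</path>
    </span></feed></track>
  </clip>
</root>"""

feed_basename = "shotA.0001.exr"
path_pattern = "shotA.[0001-0010].exr"

matching_clip = None
for xml_clip in ET.fromstring(XML).findall("clip"):
    if xml_clip.find("name").text not in feed_basename:
        continue
    for span in xml_clip.iter("span"):
        if path_pattern in span.find("path").text:
            matching_clip = xml_clip

print(matching_clip.find("name").text)  # shotA
```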
@@ -894,6 +1076,11 @@ class MediaInfoFile(object):
        return matching_clip

    def _get_time_info_from_origin(self, xml_data):
        """Set time info to class attributes

        Args:
            xml_data (ET.Element): clip data
        """
        try:
            for out_track in xml_data.iter('track'):
                for out_feed in out_track.iter('feed'):

@@ -912,8 +1099,6 @@ class MediaInfoFile(object):
                        'startTimecode/dropMode')
                    self.drop_mode = out_feed_drop_mode_obj.text
                    break
                else:
                    continue
        except Exception as msg:
            self.log.warning(msg)
@@ -360,6 +360,7 @@ class PublishableClip:
    driving_layer_default = ""
    index_from_segment_default = False
    use_shot_name_default = False
    include_handles_default = False

    def __init__(self, segment, **kwargs):
        self.rename_index = kwargs["rename_index"]

@@ -493,6 +494,8 @@ class PublishableClip:
            "reviewTrack", {}).get("value") or self.review_track_default
        self.audio = self.ui_inputs.get(
            "audio", {}).get("value") or False
        self.include_handles = self.ui_inputs.get(
            "includeHandles", {}).get("value") or self.include_handles_default

        # build subset name from layer name
        if self.subset_name == "[ track name ]":
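The `ui_inputs` lookups above all follow one pattern: each UI field arrives as a `{"value": ...}` mapping and falls back to a class-level default. A tiny sketch with invented data:

```python
# Sketch of the ui_inputs resolution pattern used by PublishableClip.
include_handles_default = False
ui_inputs = {"includeHandles": {"value": True}}  # example UI payload

include_handles = ui_inputs.get(
    "includeHandles", {}).get("value") or include_handles_default
print(include_handles)  # True; a missing or falsy value yields the default
```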
@@ -1,5 +1,8 @@
import os
from xml.etree import ElementTree as ET
from openpype.api import Logger

log = Logger.get_logger(__name__)


def export_clip(export_path, clip, preset_path, **kwargs):
@@ -143,10 +146,40 @@ def modify_preset_file(xml_path, staging_dir, data):

    # change xml according to data keys
    with open(xml_path, "r") as datafile:
        tree = ET.parse(datafile)
        _root = ET.parse(datafile)

        for key, value in data.items():
            for element in tree.findall(".//{}".format(key)):
                element.text = str(value)
        tree.write(temp_path)
            try:
                if "/" in key:
                    if not key.startswith("./"):
                        key = ".//" + key

                    split_key_path = key.split("/")
                    element_key = split_key_path[-1]
                    parent_obj_path = "/".join(split_key_path[:-1])

                    parent_obj = _root.find(parent_obj_path)
                    element_obj = parent_obj.find(element_key)
                    if not element_obj:
                        append_element(parent_obj, element_key, value)
                else:
                    finds = _root.findall(".//{}".format(key))
                    if not finds:
                        raise AttributeError
                    for element in finds:
                        element.text = str(value)
            except AttributeError:
                log.warning(
                    "Cannot create attribute: {}: {}. Skipping".format(
                        key, value
                    ))
        _root.write(temp_path)

    return temp_path


def append_element(root_element_obj, key, value):
    new_element_obj = ET.Element(key)
    log.debug("__ new_element_obj: {}".format(new_element_obj))
    new_element_obj.text = str(value)
    root_element_obj.insert(0, new_element_obj)
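A standalone sketch (with made-up tag names) of the key handling introduced above: keys containing "/" are treated as an element path, the leaf element is created under its parent when missing, and plain keys update the text of every matching element:

```python
# Sketch of modify_preset_file's key handling with invented tags.
import xml.etree.ElementTree as ET

tree = ET.ElementTree(ET.fromstring("<preset><video/></preset>"))
data = {"video/posterFrame": True}

for key, value in data.items():
    if "/" in key:
        if not key.startswith("./"):
            key = ".//" + key
        parent_path, element_key = key.rsplit("/", 1)
        parent = tree.find(parent_path)
        element = parent.find(element_key)
        if element is None:  # create the leaf when it does not exist
            new_element = ET.Element(element_key)
            new_element.text = str(value)
            parent.insert(0, new_element)
    else:
        for element in tree.findall(".//{}".format(key)):
            element.text = str(value)

print(ET.tostring(tree.getroot()).decode())
# <preset><video><posterFrame>True</posterFrame></video></preset>
```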
@@ -94,83 +94,30 @@ def create_otio_time_range(start_frame, frame_duration, fps):

def _get_metadata(item):
    if hasattr(item, 'metadata'):
        if not item.metadata:
            return {}
        return {key: value for key, value in dict(item.metadata)}
        return dict(item.metadata) if item.metadata else {}
    return {}


def create_time_effects(otio_clip, item):
    # todo #2426: add retiming effects to export
    # get all subtrack items
    # subTrackItems = flatten(track_item.parent().subTrackItems())
    # speed = track_item.playbackSpeed()
def create_time_effects(otio_clip, speed):
    otio_effect = None

    # otio_effect = None
    # # retime on track item
    # if speed != 1.:
    #     # make effect
    #     otio_effect = otio.schema.LinearTimeWarp()
    #     otio_effect.name = "Speed"
    #     otio_effect.time_scalar = speed
    #     otio_effect.metadata = {}
    # retime on track item
    if speed != 1.:
        # make effect
        otio_effect = otio.schema.LinearTimeWarp()
        otio_effect.name = "Speed"
        otio_effect.time_scalar = speed
        otio_effect.metadata = {}

    # # freeze frame effect
    # if speed == 0.:
    #     otio_effect = otio.schema.FreezeFrame()
    #     otio_effect.name = "FreezeFrame"
    #     otio_effect.metadata = {}
    # freeze frame effect
    if speed == 0.:
        otio_effect = otio.schema.FreezeFrame()
        otio_effect.name = "FreezeFrame"
        otio_effect.metadata = {}

    # if otio_effect:
    #     # add otio effect to clip effects
    #     otio_clip.effects.append(otio_effect)

    # # loop through and get all Timewarps
    # for effect in subTrackItems:
    #     if ((track_item not in effect.linkedItems())
    #             and (len(effect.linkedItems()) > 0)):
    #         continue
    #     # avoid all effects which are not TimeWarp and disabled
    #     if "TimeWarp" not in effect.name():
    #         continue

    #     if not effect.isEnabled():
    #         continue

    #     node = effect.node()
    #     name = node["name"].value()

    #     # solve effect class as effect name
    #     _name = effect.name()
    #     if "_" in _name:
    #         effect_name = re.sub(r"(?:_)[_0-9]+", "", _name)  # more numbers
    #     else:
    #         effect_name = re.sub(r"\d+", "", _name)  # one number

    #     metadata = {}
    #     # add knob to metadata
    #     for knob in ["lookup", "length"]:
    #         value = node[knob].value()
    #         animated = node[knob].isAnimated()
    #         if animated:
    #             value = [
    #                 ((node[knob].getValueAt(i)) - i)
    #                 for i in range(
    #                     track_item.timelineIn(),
    #                     track_item.timelineOut() + 1)
    #             ]

    #         metadata[knob] = value

    #     # make effect
    #     otio_effect = otio.schema.TimeEffect()
    #     otio_effect.name = name
    #     otio_effect.effect_name = effect_name
    #     otio_effect.metadata = metadata

    #     # add otio effect to clip effects
    #     otio_clip.effects.append(otio_effect)
    pass
    if otio_effect:
        # add otio effect to clip effects
        otio_clip.effects.append(otio_effect)


def _get_marker_color(flame_colour):
@@ -260,6 +207,7 @@ def create_otio_markers(otio_item, item):

def create_otio_reference(clip_data, fps=None):
    metadata = _get_metadata(clip_data)
    duration = int(clip_data["source_duration"])

    # get file info for path and start frame
    frame_start = 0

@@ -273,7 +221,6 @@ def create_otio_reference(clip_data, fps=None):
    # get padding and other file infos
    log.debug("_ path: {}".format(path))

    frame_duration = clip_data["source_duration"]
    otio_ex_ref_item = None

    is_sequence = frame_number = utils.get_frame_from_filename(file_name)

@@ -300,7 +247,7 @@ def create_otio_reference(clip_data, fps=None):
        rate=fps,
        available_range=create_otio_time_range(
            frame_start,
            frame_duration,
            duration,
            fps
        )
    )

@@ -316,7 +263,7 @@ def create_otio_reference(clip_data, fps=None):
        target_url=reformated_path,
        available_range=create_otio_time_range(
            frame_start,
            frame_duration,
            duration,
            fps
        )
    )
@@ -333,23 +280,50 @@ def create_otio_clip(clip_data):
    segment = clip_data["PySegment"]

    # calculate source in
    media_info = MediaInfoFile(clip_data["fpath"])
    media_info = MediaInfoFile(clip_data["fpath"], logger=log)
    media_timecode_start = media_info.start_frame
    media_fps = media_info.fps

    # create media reference
    media_reference = create_otio_reference(clip_data, media_fps)

    # define first frame
    first_frame = media_timecode_start or utils.get_frame_from_filename(
        clip_data["fpath"]) or 0

    source_in = int(clip_data["source_in"]) - int(first_frame)
    _clip_source_in = int(clip_data["source_in"])
    _clip_source_out = int(clip_data["source_out"])
    _clip_record_duration = int(clip_data["record_duration"])

    # first solve the reverse timing
    speed = 1
    if clip_data["source_in"] > clip_data["source_out"]:
        source_in = _clip_source_out - int(first_frame)
        source_out = _clip_source_in - int(first_frame)
        speed = -1
    else:
        source_in = _clip_source_in - int(first_frame)
        source_out = _clip_source_out - int(first_frame)

    source_duration = (source_out - source_in + 1)

    # secondly check for any change of speed
    if source_duration != _clip_record_duration:
        retime_speed = float(source_duration) / float(_clip_record_duration)
        log.debug("_ retime_speed: {}".format(retime_speed))
        speed *= retime_speed

    log.debug("_ source_in: {}".format(source_in))
    log.debug("_ source_out: {}".format(source_out))
    log.debug("_ speed: {}".format(speed))
    log.debug("_ source_duration: {}".format(source_duration))
    log.debug("_ _clip_record_duration: {}".format(_clip_record_duration))

    # create media reference
    media_reference = create_otio_reference(
        clip_data, media_fps)

    # create source range
    source_range = create_otio_time_range(
        source_in,
        clip_data["record_duration"],
        _clip_record_duration,
        CTX.get_fps()
    )
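The retime branch above reduces to one rule: direction comes from comparing source in/out, magnitude from the ratio of source duration to record duration. A minimal sketch, simplified to leave out the first-frame offset:

```python
# Sketch of the speed resolution; first-frame offsets are left out.
def solve_speed(source_in, source_out, record_duration):
    speed = 1.0
    if source_in > source_out:  # reversed timing
        source_in, source_out = source_out, source_in
        speed = -1.0
    source_duration = source_out - source_in + 1
    if source_duration != record_duration:  # retimed clip
        speed *= float(source_duration) / float(record_duration)
    return speed

assert solve_speed(10, 109, 100) == 1.0    # plain cut
assert solve_speed(109, 10, 100) == -1.0   # reversed
assert solve_speed(10, 209, 100) == 2.0    # 2x fast-forward
```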
@@ -363,6 +337,9 @@ def create_otio_clip(clip_data):
    if MARKERS_INCLUDE:
        create_otio_markers(otio_clip, segment)

    if speed != 1:
        create_time_effects(otio_clip, speed)

    return otio_clip
@@ -268,6 +268,14 @@ class CreateShotClip(opfapi.Creator):
                "target": "tag",
                "toolTip": "Handle at end of clip",  # noqa
                "order": 2
            },
            "includeHandles": {
                "value": False,
                "type": "QCheckBox",
                "label": "Include handles",
                "target": "tag",
                "toolTip": "By default handles are excluded",  # noqa
                "order": 3
            }
        }
    }
@@ -1,8 +1,8 @@
import re
import pyblish
import openpype
import openpype.hosts.flame.api as opfapi
from openpype.hosts.flame.otio import flame_export
import openpype.lib as oplib

# # developer reload modules
from pprint import pformat
@@ -36,6 +36,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
        for segment in selected_segments:
            # get openpype tag data
            marker_data = opfapi.get_segment_data_marker(segment)

            self.log.debug("__ marker_data: {}".format(
                pformat(marker_data)))
@@ -58,24 +59,44 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
            clip_name = clip_data["segment_name"]
            self.log.debug("clip_name: {}".format(clip_name))

            # get otio clip data
            otio_data = self._get_otio_clip_instance_data(clip_data) or {}
            self.log.debug("__ otio_data: {}".format(pformat(otio_data)))

            # get file path
            file_path = clip_data["fpath"]

            first_frame = opfapi.get_frame_from_filename(file_path) or 0

            head, tail = self._get_head_tail(clip_data, first_frame)
            head, tail = self._get_head_tail(
                clip_data,
                otio_data["otioClip"],
                marker_data["handleStart"],
                marker_data["handleEnd"]
            )

            # make sure the values are absolute
            if head != 0:
                head = abs(head)
            if tail != 0:
                tail = abs(tail)

            # solve handles length
            marker_data["handleStart"] = min(
                marker_data["handleStart"], abs(head))
                marker_data["handleStart"], head)
            marker_data["handleEnd"] = min(
                marker_data["handleEnd"], abs(tail))
                marker_data["handleEnd"], tail)

            workfile_start = self._set_workfile_start(marker_data)

            with_audio = bool(marker_data.pop("audio"))

            # add marker data to instance data
            inst_data = dict(marker_data.items())

            # add otio_data to instance data
            inst_data.update(otio_data)

            asset = marker_data["asset"]
            subset = marker_data["subset"]
@@ -98,6 +119,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
                "families": families,
                "publish": marker_data["publish"],
                "fps": self.fps,
                "workfileFrameStart": workfile_start,
                "sourceFirstFrame": int(first_frame),
                "path": file_path,
                "flameAddTasks": self.add_tasks,

@@ -105,13 +127,6 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
                    task["name"]: {"type": task["type"]}
                    for task in self.add_tasks}
            })

            # get otio clip data
            otio_data = self._get_otio_clip_instance_data(clip_data) or {}
            self.log.debug("__ otio_data: {}".format(pformat(otio_data)))

            # add to instance data
            inst_data.update(otio_data)
            self.log.debug("__ inst_data: {}".format(pformat(inst_data)))

            # add resolution
@@ -145,6 +160,17 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
        if marker_data.get("reviewTrack") is not None:
            instance.data["reviewAudio"] = True

    @staticmethod
    def _set_workfile_start(data):
        include_handles = data.get("includeHandles")
        workfile_start = data["workfileFrameStart"]
        handle_start = data["handleStart"]

        if include_handles:
            workfile_start += handle_start

        return workfile_start

    def _get_comment_attributes(self, segment):
        comment = segment.comment.get_value()
@@ -236,20 +262,24 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):

        return split_comments

    def _get_head_tail(self, clip_data, first_frame):
    def _get_head_tail(self, clip_data, otio_clip, handle_start, handle_end):
        # calculate head and tail with forward compatibility
        head = clip_data.get("segment_head")
        tail = clip_data.get("segment_tail")
        self.log.debug("__ head: `{}`".format(head))
        self.log.debug("__ tail: `{}`".format(tail))

        # HACK: it is here to serve versions below 2021.1
        if not head:
            head = int(clip_data["source_in"]) - int(first_frame)
        if not tail:
            tail = int(
                clip_data["source_duration"] - (
                    head + clip_data["record_duration"]
                )
            )
        if not any([head, tail]):
            retimed_attributes = oplib.get_media_range_with_retimes(
                otio_clip, handle_start, handle_end)
            self.log.debug(
                ">> retimed_attributes: {}".format(retimed_attributes))

            # retimed head and tail
            head = int(retimed_attributes["handleStart"])
            tail = int(retimed_attributes["handleEnd"])

        return head, tail

    def _get_resolution_to_data(self, data, context):
@@ -340,7 +370,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
                continue
            if otio_clip.name not in segment.name.get_value():
                continue
            if openpype.lib.is_overlapping_otio_ranges(
            if oplib.is_overlapping_otio_ranges(
                    parent_range, timeline_range, strict=True):

                # add pypedata marker to otio_clip metadata
@@ -39,7 +39,8 @@ class CollecTimelineOTIO(pyblish.api.ContextPlugin):
            "name": subset_name,
            "asset": asset_doc["name"],
            "subset": subset_name,
            "family": "workfile"
            "family": "workfile",
            "families": []
        }

        # create instance with workfile
@@ -6,6 +6,7 @@ from copy import deepcopy
import pyblish.api
import openpype.api
from openpype.hosts.flame import api as opfapi
from openpype.hosts.flame.api import MediaInfoFile

import flame
@@ -33,24 +34,8 @@ class ExtractSubsetResources(openpype.api.Extractor):
            "representation_add_range": False,
            "representation_tags": ["thumbnail"],
            "path_regex": ".*"
        },
        "ftrackpreview": {
            "active": True,
            "ext": "mov",
            "xml_preset_file": "Apple iPad (1920x1080).xml",
            "xml_preset_dir": "",
            "export_type": "Movie",
            "parsed_comment_attrs": False,
            "colorspace_out": "Output - Rec.709",
            "representation_add_range": True,
            "representation_tags": [
                "review",
                "delete"
            ],
            "path_regex": ".*"
        }
    }
    keep_original_representation = False

    # hide publisher during exporting
    hide_ui_on_process = True
@@ -59,11 +44,7 @@ class ExtractSubsetResources(openpype.api.Extractor):
    export_presets_mapping = {}

    def process(self, instance):
        if (
            self.keep_original_representation
            and "representations" not in instance.data
            or not self.keep_original_representation
        ):
        if "representations" not in instance.data:
            instance.data["representations"] = []

        # flame objects
@@ -91,7 +72,6 @@ class ExtractSubsetResources(openpype.api.Extractor):
        handles = max(handle_start, handle_end)

        # get media source range with handles
        source_end_handles = instance.data["sourceEndH"]
        source_start_handles = instance.data["sourceStartH"]
        source_end_handles = instance.data["sourceEndH"]
@@ -108,27 +88,7 @@ class ExtractSubsetResources(openpype.api.Extractor):
        for unique_name, preset_config in export_presets.items():
            modify_xml_data = {}

            # get activating attributes
            activated_preset = preset_config["active"]
            filter_path_regex = preset_config.get("filter_path_regex")

            self.log.info(
                "Preset `{}` is active `{}` with filter `{}`".format(
                    unique_name, activated_preset, filter_path_regex
                )
            )
            self.log.debug(
                "__ clip_path: `{}`".format(clip_path))

            # skip if the preset is not activated
            if not activated_preset:
                continue

            # exclude by regex filter if any
            if (
                filter_path_regex
                and not re.search(filter_path_regex, clip_path)
            ):
            if self._should_skip(preset_config, clip_path, unique_name):
                continue

            # get all presets attributes
@@ -146,20 +106,12 @@ class ExtractSubsetResources(openpype.api.Extractor):
                )
            )

            # get attributes related to loading in integrate_batch_group
            load_to_batch_group = preset_config.get(
                "load_to_batch_group")
            batch_group_loader_name = preset_config.get(
                "batch_group_loader_name")

            # convert to None if empty string
            if batch_group_loader_name == "":
                batch_group_loader_name = None

            # get frame range with handles for representation range
            frame_start_handle = frame_start - handle_start

            # calculate duration with handles
            source_duration_handles = (
                source_end_handles - source_start_handles) + 1
                source_end_handles - source_start_handles)

            # define in/out marks
            in_mark = (source_start_handles - source_first_frame) + 1
@@ -180,15 +132,15 @@ class ExtractSubsetResources(openpype.api.Extractor):
                name_patern_xml = (
                    "<segment name>_<shot name>_{}.").format(
                        unique_name)

                # change in/out marks to timeline in/out
                in_mark = clip_in
                out_mark = clip_out
            else:
                exporting_clip = self.import_clip(clip_path)
                exporting_clip.name.set_value("{}_{}".format(
                    asset_name, segment_name))

                # change in/out marks to timeline in/out
                in_mark = clip_in
                out_mark = clip_out

            # add xml tags modifications
            modify_xml_data.update({
                "exportHandles": True,
@@ -201,10 +153,6 @@ class ExtractSubsetResources(openpype.api.Extractor):
            # add any xml overrides collected from segment.comment
            modify_xml_data.update(instance.data["xml_overrides"])

            self.log.debug("__ modify_xml_data: {}".format(pformat(
                modify_xml_data
            )))

            export_kwargs = {}
            # validate xml preset file is filled
            if preset_file == "":
@@ -231,19 +179,34 @@ class ExtractSubsetResources(openpype.api.Extractor):
                preset_dir, preset_file
            ))

            preset_path = opfapi.modify_preset_file(
                preset_orig_xml_path, staging_dir, modify_xml_data)

            # define kwargs based on preset type
            if "thumbnail" in unique_name:
                export_kwargs["thumb_frame_number"] = int(in_mark + (
                modify_xml_data.update({
                    "video/posterFrame": True,
                    "video/useFrameAsPoster": 1,
                    "namePattern": "__thumbnail"
                })
                thumb_frame_number = int(in_mark + (
                    source_duration_handles / 2))

                self.log.debug("__ in_mark: {}".format(in_mark))
                self.log.debug("__ thumb_frame_number: {}".format(
                    thumb_frame_number
                ))

                export_kwargs["thumb_frame_number"] = thumb_frame_number
            else:
                export_kwargs.update({
                    "in_mark": in_mark,
                    "out_mark": out_mark
                })

            self.log.debug("__ modify_xml_data: {}".format(
                pformat(modify_xml_data)
            ))
            preset_path = opfapi.modify_preset_file(
                preset_orig_xml_path, staging_dir, modify_xml_data)

            # get and make export dir paths
            export_dir_path = str(os.path.join(
                staging_dir, unique_name
@@ -254,18 +217,24 @@ class ExtractSubsetResources(openpype.api.Extractor):
            opfapi.export_clip(
                export_dir_path, exporting_clip, preset_path, **export_kwargs)

            # make sure only the first segment is used if there is an
            # underscore in the name
            # HACK: `ftrackreview_withLUT` will result only in `ftrackreview`
            repr_name = unique_name.split("_")[0]

            # create representation data
            representation_data = {
                "name": unique_name,
                "outputName": unique_name,
                "name": repr_name,
                "outputName": repr_name,
                "ext": extension,
                "stagingDir": export_dir_path,
                "tags": repre_tags,
                "data": {
                    "colorspace": color_out
                },
                "load_to_batch_group": load_to_batch_group,
                "batch_group_loader_name": batch_group_loader_name
                "load_to_batch_group": preset_config.get(
                    "load_to_batch_group"),
                "batch_group_loader_name": preset_config.get(
                    "batch_group_loader_name") or None
            }

            # collect all available content of export dir
@@ -320,6 +289,30 @@ class ExtractSubsetResources(openpype.api.Extractor):
        self.log.debug("All representations: {}".format(
            pformat(instance.data["representations"])))

    def _should_skip(self, preset_config, clip_path, unique_name):
        # get activating attributes
        activated_preset = preset_config["active"]
        filter_path_regex = preset_config.get("filter_path_regex")

        self.log.info(
            "Preset `{}` is active `{}` with filter `{}`".format(
                unique_name, activated_preset, filter_path_regex
            )
        )
        self.log.debug(
            "__ clip_path: `{}`".format(clip_path))

        # skip if the preset is not activated
        if not activated_preset:
            return True

        # exclude by regex filter if any
        if (
            filter_path_regex
            and not re.search(filter_path_regex, clip_path)
        ):
            return True

    def _unfolds_nested_folders(self, stage_dir, files_list, ext):
        """Unfolds nested folders
@@ -408,8 +401,17 @@ class ExtractSubsetResources(openpype.api.Extractor):
        """
        Import clip from path
        """
        clips = flame.import_clips(path)
        dir_path = os.path.dirname(path)
        media_info = MediaInfoFile(path, logger=self.log)
        file_pattern = media_info.file_pattern
        self.log.debug("__ file_pattern: {}".format(file_pattern))

        # rejoin the pattern to dir path
        new_path = os.path.join(dir_path, file_pattern)

        clips = flame.import_clips(new_path)
        self.log.info("Clips [{}] imported from `{}`".format(clips, path))

        if not clips:
            self.log.warning("Path `{}` is not having any clips".format(path))
            return None
@@ -35,7 +35,11 @@ function Client() {
    self.pack = function(num) {
        var ascii = '';
        for (var i = 3; i >= 0; i--) {
            ascii += String.fromCharCode((num >> (8 * i)) & 255);
            var hex = ((num >> (8 * i)) & 255).toString(16);
            if (hex.length < 2) {
                ascii += "0";
            }
            ascii += hex;
        }
        return ascii;
    };
@@ -279,19 +283,22 @@ function Client() {
    };

    self._send = function(message) {
        var data = new QByteArray();
        var outstr = new QDataStream(data, QIODevice.WriteOnly);
        outstr.writeInt(0);
        data.append('UTF-8');
        outstr.device().seek(0);
        outstr.writeInt(data.size() - 4);
        var codec = QTextCodec.codecForUtfText(data);
        var msg = codec.fromUnicode(message);
        var l = msg.size();
        var coded = new QByteArray('AH').append(self.pack(l));
        coded = coded.append(msg);
        self.socket.write(new QByteArray(coded));
        self.logDebug('Sent.');
        /** Harmony 21.1 doesn't have QDataStream anymore.

        This means we aren't able to write bytes into a QByteArray, so we
        had to modify how the content length is sent to the server.
        The content length is sent as a string of 8 characters convertible
        into an integer
        (instead of 0x00000001 [4 bytes] -> "00000001" [8 bytes]) */
        var codec_name = new QByteArray().append("UTF-8");

        var codec = QTextCodec.codecForName(codec_name);
        var msg = codec.fromUnicode(message);
        var l = msg.size();
        var header = new QByteArray().append('AH').append(self.pack(l));
        var coded = msg.prepend(header);

        self.socket.write(coded);
        self.logDebug('Sent.');
    };

    self.waitForLock = function() {
@@ -351,7 +358,14 @@ function start() {
        app.avalonClient = new Client();
        app.avalonClient.socket.connectToHost(host, port);
    }
    var menuBar = QApplication.activeWindow().menuBar();
    var mainWindow = null;
    var widgets = QApplication.topLevelWidgets();
    for (var i = 0; i < widgets.length; i++) {
        if (widgets[i] instanceof QMainWindow) {
            mainWindow = widgets[i];
        }
    }
    var menuBar = mainWindow.menuBar();
    var actions = menuBar.actions();
    app.avalonMenu = null;
@@ -88,21 +88,25 @@ class Server(threading.Thread):
        """
        current_time = time.time()
        while True:

            self.log.info("wait ttt")
            # Receive the data in small chunks and retransmit it
            request = None
            header = self.connection.recv(6)
            header = self.connection.recv(10)
            if len(header) == 0:
                # null data received, socket is closing.
                self.log.info(f"[{self.timestamp()}] Connection closing.")
                break

            if header[0:2] != b"AH":
                self.log.error("INVALID HEADER")
            length = struct.unpack(">I", header[2:])[0]
            content_length_str = header[2:].decode()

            length = int(content_length_str, 16)
            data = self.connection.recv(length)
            while (len(data) < length):
                # we didn't receive everything on the first try,
                # let's wait for all the data.
                self.log.info("loop")
                time.sleep(0.1)
                if self.connection is None:
                    self.log.error(f"[{self.timestamp()}] "
@@ -113,7 +117,7 @@ class Server(threading.Thread):
                    break

                data += self.connection.recv(length - len(data))

            self.log.debug("data:: {} {}".format(data, type(data)))
            self.received += data.decode("utf-8")
            pretty = self._pretty(self.received)
            self.log.debug(
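Both sides of the Harmony change agree on one wire format: a 2-byte "AH" magic followed by the payload length as 8 hexadecimal ASCII characters, then the UTF-8 payload. A hedged Python sketch of that framing (the helper names are invented, not part of the PR):

```python
# Sketch of the "AH" + 8-hex-chars framing; helper names are invented.
def encode_frame(message: str) -> bytes:
    payload = message.encode("utf-8")
    return b"AH" + "{:08x}".format(len(payload)).encode("ascii") + payload

def decode_length(header: bytes) -> int:
    # header is the first 10 bytes: b"AH" + 8 hex digits
    if header[0:2] != b"AH":
        raise ValueError("INVALID HEADER")
    return int(header[2:10].decode("ascii"), 16)

frame = encode_frame('{"function": "ping"}')
assert decode_length(frame[:10]) == len(frame) - 10
```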
@@ -27,7 +27,9 @@ from .lib import (
    get_track_items,
    get_current_project,
    get_current_sequence,
    get_timeline_selection,
    get_current_track,
    get_track_item_tags,
    get_track_item_pype_tag,
    set_track_item_pype_tag,
    get_track_item_pype_data,

@@ -80,7 +82,9 @@ __all__ = [
    "get_track_items",
    "get_current_project",
    "get_current_sequence",
    "get_timeline_selection",
    "get_current_track",
    "get_track_item_tags",
    "get_track_item_pype_tag",
    "set_track_item_pype_tag",
    "get_track_item_pype_data",
@@ -109,8 +109,9 @@ def register_hiero_events():
    # hiero.core.events.registerInterest("kShutdown", shutDown)
    # hiero.core.events.registerInterest("kStartup", startupCompleted)

    hiero.core.events.registerInterest(
        ("kSelectionChanged", "kTimeline"), selection_changed_timeline)
    # INFO: was disabled because it was slowing down timeline operations
    # hiero.core.events.registerInterest(
    #     ("kSelectionChanged", "kTimeline"), selection_changed_timeline)

    # workfiles
    try:
@@ -1,6 +1,8 @@
"""
Host specific functions where host api is connected
"""

from copy import deepcopy
import os
import re
import sys
@@ -89,13 +91,19 @@ def get_current_sequence(name=None, new=False):
    if not sequence:
        # if nothing is found, create new with input name
        sequence = get_current_sequence(name, True)
    elif not name and not new:
    else:
        # if name is None and new is False, return the current open sequence
        sequence = hiero.ui.activeSequence()

    return sequence


def get_timeline_selection():
    active_sequence = hiero.ui.activeSequence()
    timeline_editor = hiero.ui.getTimelineEditor(active_sequence)
    return list(timeline_editor.selection())


def get_current_track(sequence, name, audio=False):
    """
    Get current track in context of active project.
@@ -118,7 +126,7 @@ def get_current_track(sequence, name, audio=False):
    # get track by name
    track = None
    for _track in tracks:
        if _track.name() in name:
        if _track.name() == name:
            track = _track

    if not track:
@@ -126,13 +134,14 @@ def get_current_track(sequence, name, audio=False):
        track = hiero.core.VideoTrack(name)
    else:
        track = hiero.core.AudioTrack(name)

    sequence.addTrack(track)

    return track


def get_track_items(
        selected=False,
        selection=False,
        sequence_name=None,
        track_item_name=None,
        track_name=None,
@ -143,7 +152,7 @@ def get_track_items(
|
|||
"""Get all available current timeline track items.
|
||||
|
||||
Attribute:
|
||||
selected (bool)[optional]: return only selected items on timeline
|
||||
selection (list)[optional]: list of selected track items
|
||||
sequence_name (str)[optional]: return only clips from input sequence
|
||||
track_item_name (str)[optional]: return only item with input name
|
||||
track_name (str)[optional]: return only items from track name
|
||||
|
|
@ -155,32 +164,34 @@ def get_track_items(
|
|||
Return:
|
||||
list or hiero.core.TrackItem: list of track items or single track item
|
||||
"""
|
||||
return_list = list()
|
||||
track_items = list()
|
||||
track_type = track_type or "video"
|
||||
selection = selection or []
|
||||
return_list = []
|
||||
|
||||
# get selected track items or all in active sequence
|
||||
if selected:
|
||||
if selection:
|
||||
try:
|
||||
selected_items = list(hiero.selection)
|
||||
for item in selected_items:
|
||||
if track_name and track_name in item.parent().name():
|
||||
# filter only items fitting input track name
|
||||
track_items.append(item)
|
||||
elif not track_name:
|
||||
# or add all if no track_name was defined
|
||||
track_items.append(item)
|
||||
for track_item in selection:
|
||||
log.info("___ track_item: {}".format(track_item))
|
||||
# make sure only trackitems are selected
|
||||
if not isinstance(track_item, hiero.core.TrackItem):
|
||||
continue
|
||||
|
||||
if _validate_all_atrributes(
|
||||
track_item,
|
||||
track_item_name,
|
||||
track_name,
|
||||
track_type,
|
||||
check_enabled,
|
||||
check_tagged
|
||||
):
|
||||
log.info("___ valid trackitem: {}".format(track_item))
|
||||
return_list.append(track_item)
|
||||
except AttributeError:
|
||||
pass
|
||||
|
||||
# check if any collected track items are
|
||||
# `core.Hiero.Python.TrackItem` instance
|
||||
if track_items:
|
||||
any_track_item = track_items[0]
|
||||
if not isinstance(any_track_item, hiero.core.TrackItem):
|
||||
selected_items = []
|
||||
|
||||
# collect all available active sequence track items
|
||||
if not track_items:
|
||||
if not return_list:
|
||||
sequence = get_current_sequence(name=sequence_name)
|
||||
# get all available tracks from sequence
|
||||
tracks = list(sequence.audioTracks()) + list(sequence.videoTracks())
|
||||
|
|
@@ -191,42 +202,101 @@
         if check_enabled and not track.isEnabled():
             continue
         # and all items in track
-        for item in track.items():
-            if check_tagged and not item.tags():
+        for track_item in track.items():
+            # make sure no subtrackitem is also track items
+            if not isinstance(track_item, hiero.core.TrackItem):
                 continue

-            # check if track item is enabled
-            if check_enabled:
-                if not item.isEnabled():
-                    continue
-            if track_item_name:
-                if track_item_name in item.name():
-                    return item
-            # make sure only track items with correct track names are added
-            if track_name and track_name in track.name():
-                # filter out only defined track_name items
-                track_items.append(item)
-            elif not track_name:
-                # or add all if no track_name is defined
-                track_items.append(item)
+            if _validate_all_atrributes(
+                track_item,
+                track_item_name,
+                track_name,
+                track_type,
+                check_enabled,
+                check_tagged
+            ):
+                return_list.append(track_item)

-    # filter out only track items with defined track_type
-    for track_item in track_items:
-        if track_type and track_type == "video" and isinstance(
-                track_item.parent(), hiero.core.VideoTrack):
-            # only video track items are allowed
-            return_list.append(track_item)
-        elif track_type and track_type == "audio" and isinstance(
-                track_item.parent(), hiero.core.AudioTrack):
-            # only audio track items are allowed
-            return_list.append(track_item)
-        elif not track_type:
-            # add all if no track_type is defined
-            return_list.append(track_item)
+    return return_list

-    # return output list but make sure all items are TrackItems
-    return [_i for _i in return_list
-            if type(_i) == hiero.core.TrackItem]

+def _validate_all_atrributes(
+    track_item,
+    track_item_name,
+    track_name,
+    track_type,
+    check_enabled,
+    check_tagged
+):
+    def _validate_correct_name_track_item():
+        if track_item_name and track_item_name in track_item.name():
+            return True
+        elif not track_item_name:
+            return True
+
+    def _validate_tagged_track_item():
+        if check_tagged and track_item.tags():
+            return True
+        elif not check_tagged:
+            return True
+
+    def _validate_enabled_track_item():
+        if check_enabled and track_item.isEnabled():
+            return True
+        elif not check_enabled:
+            return True
+
+    def _validate_parent_track_item():
+        if track_name and track_name in track_item.parent().name():
+            # filter only items fitting input track name
+            return True
+        elif not track_name:
+            # or add all if no track_name was defined
+            return True
+
+    def _validate_type_track_item():
+        if track_type == "video" and isinstance(
+                track_item.parent(), hiero.core.VideoTrack):
+            # only video track items are allowed
+            return True
+        elif track_type == "audio" and isinstance(
+                track_item.parent(), hiero.core.AudioTrack):
+            # only audio track items are allowed
+            return True
+
+    # check if track item is enabled
+    return all([
+        _validate_enabled_track_item(),
+        _validate_type_track_item(),
+        _validate_tagged_track_item(),
+        _validate_parent_track_item(),
+        _validate_correct_name_track_item()
+    ])
+
+
+def get_track_item_tags(track_item):
+    """
+    Get track item tags excluded openpype tag
+
+    Attributes:
+        trackItem (hiero.core.TrackItem): hiero object
+
+    Returns:
+        hiero.core.Tag: hierarchy, orig clip attributes
+    """
+    returning_tag_data = []
+    # get all tags from track item
+    _tags = track_item.tags()
+    if not _tags:
+        return []
+
+    # collect all tags which are not openpype tag
+    returning_tag_data.extend(
+        tag for tag in _tags
+        if tag.name() != self.pype_tag_name
+    )
+
+    return returning_tag_data


 def get_track_item_pype_tag(track_item):
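For orientation, the refactor above replaces inline filtering with small predicate closures combined via all(). A minimal standalone sketch of that pattern, using hypothetical item data rather than Hiero objects:

    # Each closure passes when its filter is unset or satisfied.
    def is_valid(item, name_filter=None, check_enabled=False):
        def _valid_name():
            return not name_filter or name_filter in item["name"]

        def _valid_enabled():
            return not check_enabled or item["enabled"]

        return all([_valid_name(), _valid_enabled()])

    item = {"name": "shot010_plate", "enabled": True}
    assert is_valid(item, name_filter="shot010", check_enabled=True)
    assert not is_valid(item, name_filter="shot999")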
@@ -245,7 +315,7 @@ def get_track_item_pype_tag(track_item):
         return None
     for tag in _tags:
         # return only correct tag defined by global name
-        if tag.name() in self.pype_tag_name:
+        if tag.name() == self.pype_tag_name:
             return tag

@@ -266,7 +336,7 @@ def set_track_item_pype_tag(track_item, data=None):
         "editable": "0",
         "note": "OpenPype data container",
         "icon": "openpype_icon.png",
-        "metadata": {k: v for k, v in data.items()}
+        "metadata": dict(data.items())
     }
     # get available pype tag if any
     _tag = get_track_item_pype_tag(track_item)

@@ -301,9 +371,9 @@ def get_track_item_pype_data(track_item):
         return None

     # get tag metadata attribute
-    tag_data = tag.metadata()
+    tag_data = deepcopy(dict(tag.metadata()))
     # convert tag metadata to normal keys names and values to correct types
-    for k, v in dict(tag_data).items():
+    for k, v in tag_data.items():
         key = k.replace("tag.", "")

         try:

@@ -324,7 +394,7 @@ def get_track_item_pype_data(track_item):
             log.warning(msg)
             value = v

-        data.update({key: value})
+        data[key] = value

     return data

@@ -497,7 +567,7 @@ class PyblishSubmission(hiero.exporters.FnSubmission.Submission):
         from . import publish
         # Add submission to Hiero module for retrieval in plugins.
         hiero.submission = self
-        publish()
+        publish(hiero.ui.mainWindow())


 def add_submission():

@@ -527,7 +597,7 @@ class PublishAction(QtWidgets.QAction):
         # from getting picked up when not using the "Export" dialog.
         if hasattr(hiero, "submission"):
             del hiero.submission
-        publish()
+        publish(hiero.ui.mainWindow())

     def eventHandler(self, event):
         # Add the Menu to the right-click menu

@@ -893,32 +963,33 @@ def apply_colorspace_clips():


 def is_overlapping(ti_test, ti_original, strict=False):
-    covering_exp = bool(
+    covering_exp = (
         (ti_test.timelineIn() <= ti_original.timelineIn())
         and (ti_test.timelineOut() >= ti_original.timelineOut())
     )
-    inside_exp = bool(
+
+    if strict:
+        return covering_exp
+
+    inside_exp = (
         (ti_test.timelineIn() >= ti_original.timelineIn())
         and (ti_test.timelineOut() <= ti_original.timelineOut())
     )
-    overlaying_right_exp = bool(
+    overlaying_right_exp = (
         (ti_test.timelineIn() < ti_original.timelineOut())
         and (ti_test.timelineOut() >= ti_original.timelineOut())
     )
-    overlaying_left_exp = bool(
+    overlaying_left_exp = (
         (ti_test.timelineOut() > ti_original.timelineIn())
         and (ti_test.timelineIn() <= ti_original.timelineIn())
     )

-    if not strict:
-        return any((
-            covering_exp,
-            inside_exp,
-            overlaying_right_exp,
-            overlaying_left_exp
-        ))
-    else:
-        return covering_exp
+    return any((
+        covering_exp,
+        inside_exp,
+        overlaying_right_exp,
+        overlaying_left_exp
+    ))


 def get_sequence_pattern_and_padding(file):
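To illustrate the strict early return introduced in is_overlapping above, a self-contained sketch of the same decision logic on plain (timeline_in, timeline_out) tuples with hypothetical values:

    def overlaps(test, orig, strict=False):
        # same decision logic as the refactored function above
        covering = test[0] <= orig[0] and test[1] >= orig[1]
        if strict:
            return covering
        inside = test[0] >= orig[0] and test[1] <= orig[1]
        right = test[0] < orig[1] and test[1] >= orig[1]
        left = test[1] > orig[0] and test[0] <= orig[0]
        return any((covering, inside, right, left))

    assert overlaps((0, 100), (10, 90), strict=True)      # full coverage
    assert not overlaps((50, 150), (10, 90), strict=True)  # partial only
    assert overlaps((50, 150), (10, 90))                   # right-side overlap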
@@ -936,17 +1007,13 @@ def get_sequence_pattern_and_padding(file):
     """
     foundall = re.findall(
         r"(#+)|(%\d+d)|(?<=[^a-zA-Z0-9])(\d+)(?=\.\w+$)", file)
-    if foundall:
-        found = sorted(list(set(foundall[0])))[-1]
-
-        if "%" in found:
-            padding = int(re.findall(r"\d+", found)[-1])
-        else:
-            padding = len(found)
-
-        return found, padding
-    else:
+    if not foundall:
         return None, None
+    found = sorted(list(set(foundall[0])))[-1]
+
+    padding = int(
+        re.findall(r"\d+", found)[-1]) if "%" in found else len(found)
+    return found, padding


 def sync_clip_name_to_data_asset(track_items_list):
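A few concrete cases for the flattened padding logic above. The function body is copied from the hunk so the snippet runs standalone; the file names are hypothetical:

    import re

    def get_sequence_pattern_and_padding(file):
        foundall = re.findall(
            r"(#+)|(%\d+d)|(?<=[^a-zA-Z0-9])(\d+)(?=\.\w+$)", file)
        if not foundall:
            return None, None
        found = sorted(list(set(foundall[0])))[-1]

        padding = int(
            re.findall(r"\d+", found)[-1]) if "%" in found else len(found)
        return found, padding

    # '%04d' style: padding comes from the digits inside the token
    assert get_sequence_pattern_and_padding("render.%04d.exr") == ("%04d", 4)
    # '####' style: padding is the token length
    assert get_sequence_pattern_and_padding("render.####.exr") == ("####", 4)
    # plain frame number right before the extension
    assert get_sequence_pattern_and_padding("render.1001.exr") == ("1001", 4)
    # no sequence token at all
    assert get_sequence_pattern_and_padding("render.exr") == (None, None)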
@@ -982,7 +1049,7 @@ def sync_clip_name_to_data_asset(track_items_list):
             print("asset was changed in clip: {}".format(ti_name))


-def check_inventory_versions():
+def check_inventory_versions(track_items=None):
     """
     Actual version color idetifier of Loaded containers

@@ -993,14 +1060,14 @@ def check_inventory_versions(track_items=None):
     """
     from . import parse_container

+    track_item = track_items or get_track_items()
     # presets
     clip_color_last = "green"
     clip_color = "red"

-    # get all track items from current timeline
-    for track_item in get_track_items():
+    for track_item in track_item:
         container = parse_container(track_item)

         if container:
             # get representation from io
             representation = legacy_io.find_one({

@@ -1038,29 +1105,31 @@ def selection_changed_timeline(event):
     timeline_editor = event.sender
     selection = timeline_editor.selection()

-    selection = [ti for ti in selection
-                 if isinstance(ti, hiero.core.TrackItem)]
+    track_items = get_track_items(
+        selection=selection,
+        track_type="video",
+        check_enabled=True,
+        check_locked=True,
+        check_tagged=True
+    )

     # run checking function
-    sync_clip_name_to_data_asset(selection)
-
-    # also mark old versions of loaded containers
-    check_inventory_versions()
+    sync_clip_name_to_data_asset(track_items)


 def before_project_save(event):
     track_items = get_track_items(
         selected=False,
         track_type="video",
         check_enabled=True,
         check_locked=True,
-        check_tagged=True)
+        check_tagged=True
+    )

     # run checking function
     sync_clip_name_to_data_asset(track_items)

     # also mark old versions of loaded containers
-    check_inventory_versions()
+    check_inventory_versions(track_items)


 def get_main_window():
@@ -151,7 +151,7 @@ def create_otio_reference(clip):
     padding = media_source.filenamePadding()
     file_head = media_source.filenameHead()
     is_sequence = not media_source.singleFile()
-    frame_duration = media_source.duration() - 1
+    frame_duration = media_source.duration()
     fps = utils.get_rate(clip) or self.project_fps
     extension = os.path.splitext(path)[-1]

@@ -143,6 +143,11 @@ def parse_container(track_item, validate=True):
     """
     # convert tag metadata to normal keys names
     data = lib.get_track_item_pype_data(track_item)
+    if (
+        not data
+        or data.get("id") != "pyblish.avalon.container"
+    ):
+        return

     if validate and data and data.get("schema"):
         schema.validate(data)

@@ -1,4 +1,5 @@
 import os
+from pprint import pformat
 import re
 from copy import deepcopy

@@ -400,7 +401,8 @@ class ClipLoader:

         # inject asset data to representation dict
         self._get_asset_data()
-        log.debug("__init__ self.data: `{}`".format(self.data))
+        log.info("__init__ self.data: `{}`".format(pformat(self.data)))
+        log.info("__init__ options: `{}`".format(pformat(options)))

         # add active components to class
         if self.new_sequence:

@@ -482,7 +484,9 @@ class ClipLoader:

         """
         asset_name = self.context["representation"]["context"]["asset"]
-        self.data["assetData"] = openpype.get_asset(asset_name)["data"]
+        asset_doc = openpype.get_asset(asset_name)
+        log.debug("__ asset_doc: {}".format(pformat(asset_doc)))
+        self.data["assetData"] = asset_doc["data"]

     def _make_track_item(self, source_bin_item, audio=False):
         """ Create track item with """

@@ -500,7 +504,7 @@ class ClipLoader:
         track_item.setSource(clip)
         track_item.setSourceIn(self.handle_start)
         track_item.setTimelineIn(self.timeline_in)
-        track_item.setSourceOut(self.media_duration - self.handle_end)
+        track_item.setSourceOut((self.media_duration) - self.handle_end)
         track_item.setTimelineOut(self.timeline_out)
         track_item.setPlaybackSpeed(1)
         self.active_track.addTrackItem(track_item)

@@ -520,14 +524,18 @@ class ClipLoader:
         self.handle_start = self.data["versionData"].get("handleStart")
         self.handle_end = self.data["versionData"].get("handleEnd")
         if self.handle_start is None:
-            self.handle_start = int(self.data["assetData"]["handleStart"])
+            self.handle_start = self.data["assetData"]["handleStart"]
         if self.handle_end is None:
-            self.handle_end = int(self.data["assetData"]["handleEnd"])
+            self.handle_end = self.data["assetData"]["handleEnd"]
+
+        self.handle_start = int(self.handle_start)
+        self.handle_end = int(self.handle_end)

         if self.sequencial_load:
             last_track_item = lib.get_track_items(
                 sequence_name=self.active_sequence.name(),
-                track_name=self.active_track.name())
+                track_name=self.active_track.name()
+            )
             if len(last_track_item) == 0:
                 last_timeline_out = 0
             else:

@@ -541,17 +549,12 @@ class ClipLoader:
         self.timeline_in = int(self.data["assetData"]["clipIn"])
         self.timeline_out = int(self.data["assetData"]["clipOut"])

+        log.debug("__ self.timeline_in: {}".format(self.timeline_in))
+        log.debug("__ self.timeline_out: {}".format(self.timeline_out))
+
         # check if slate is included
-        # either in version data families or by calculating frame diff
-        slate_on = next(
-            # check iterate if slate is in families
-            (f for f in self.context["version"]["data"]["families"]
-             if "slate" in f),
-            # if nothing was found then use default None
-            # so other bool could be used
-            None) or bool(int(
-                (self.timeline_out - self.timeline_in + 1)
-                + self.handle_start + self.handle_end) < self.media_duration)
+        slate_on = "slate" in self.context["version"]["data"]["families"]
+        log.debug("__ slate_on: {}".format(slate_on))

         # if slate is on then remove the slate frame from beginning
         if slate_on:

@@ -572,7 +575,7 @@ class ClipLoader:
         # there were some cases were hiero was not creating it
         source_bin_item = None
         for item in self.active_bin.items():
-            if self.data["clip_name"] in item.name():
+            if self.data["clip_name"] == item.name():
                 source_bin_item = item
         if not source_bin_item:
             log.warning("Problem with created Source clip: `{}`".format(

@@ -599,8 +602,8 @@ class Creator(LegacyCreator):
     rename_index = None

     def __init__(self, *args, **kwargs):
-        import openpype.hosts.hiero.api as phiero
         super(Creator, self).__init__(*args, **kwargs)
+        import openpype.hosts.hiero.api as phiero
         self.presets = openpype.get_current_project_settings()[
             "hiero"]["create"].get(self.__class__.__name__, {})

@@ -609,7 +612,10 @@ class Creator(LegacyCreator):
         self.sequence = phiero.get_current_sequence()

         if (self.options or {}).get("useSelection"):
-            self.selected = phiero.get_track_items(selected=True)
+            timeline_selection = phiero.get_timeline_selection()
+            self.selected = phiero.get_track_items(
+                selection=timeline_selection
+            )
         else:
             self.selected = phiero.get_track_items()

@@ -716,6 +722,10 @@ class PublishClip:
         else:
             self.tag_data.update({"reviewTrack": None})

+        log.debug("___ self.tag_data: {}".format(
+            pformat(self.tag_data)
+        ))
+
         # create pype tag on track_item and add data
         lib.imprint(self.track_item, self.tag_data)

@@ -86,7 +86,7 @@ def update_tag(tag, data):

     # due to hiero bug we have to make sure keys which are not existent in
     # data are cleared of value by `None`
-    for _mk in mtd.keys():
+    for _mk in mtd.dict().keys():
         if _mk.replace("tag.", "") not in data_mtd.keys():
             mtd.setValue(_mk, str(None))

@@ -3,10 +3,6 @@ from openpype.pipeline import (
     get_representation_path,
 )
 import openpype.hosts.hiero.api as phiero
-# from openpype.hosts.hiero.api import plugin, lib
-# reload(lib)
-# reload(plugin)
-# reload(phiero)


 class LoadClip(phiero.SequenceLoader):

@@ -106,7 +102,7 @@ class LoadClip(phiero.SequenceLoader):
         name = container['name']
         namespace = container['namespace']
         track_item = phiero.get_track_items(
-            track_item_name=namespace)
+            track_item_name=namespace).pop()
         version = legacy_io.find_one({
             "type": "version",
             "_id": representation["parent"]

@@ -157,7 +153,7 @@ class LoadClip(phiero.SequenceLoader):
         # load clip to timeline and get main variables
         namespace = container['namespace']
         track_item = phiero.get_track_items(
-            track_item_name=namespace)
+            track_item_name=namespace).pop()
         track = track_item.parent()

         # remove track item from track
@@ -4,16 +4,16 @@ from pyblish import api
 class CollectClipTagTasks(api.InstancePlugin):
     """Collect Tags from selected track items."""

-    order = api.CollectorOrder
+    order = api.CollectorOrder - 0.077
     label = "Collect Tag Tasks"
     hosts = ["hiero"]
-    families = ['clip']
+    families = ["shot"]

     def process(self, instance):
         # gets tags
         tags = instance.data["tags"]

-        tasks = dict()
+        tasks = {}
         for tag in tags:
             t_metadata = dict(tag.metadata())
             t_family = t_metadata.get("tag.family", "")

@@ -19,9 +19,12 @@ class PrecollectInstances(pyblish.api.ContextPlugin):

     def process(self, context):
         self.otio_timeline = context.data["otioTimeline"]

+        timeline_selection = phiero.get_timeline_selection()
         selected_timeline_items = phiero.get_track_items(
-            selected=True, check_tagged=True, check_enabled=True)
+            selection=timeline_selection,
+            check_tagged=True,
+            check_enabled=True
+        )

         # only return enabled track items
         if not selected_timeline_items:

@@ -103,7 +106,10 @@ class PrecollectInstances(pyblish.api.ContextPlugin):

             # clip's effect
             "clipEffectItems": subtracks,
-            "clipAnnotations": annotations
+            "clipAnnotations": annotations,
+
+            # add all additional tags
+            "tags": phiero.get_track_item_tags(track_item)
         })

         # otio clip data

@@ -292,9 +298,9 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
         for otio_clip in self.otio_timeline.each_clip():
             track_name = otio_clip.parent().name
             parent_range = otio_clip.range_in_parent()
-            if ti_track_name not in track_name:
+            if ti_track_name != track_name:
                 continue
-            if otio_clip.name not in track_item.name():
+            if otio_clip.name != track_item.name():
                 continue
             self.log.debug("__ parent_range: {}".format(parent_range))
             self.log.debug("__ timeline_range: {}".format(timeline_range))

@@ -314,7 +320,7 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
         speed = track_item.playbackSpeed()
         timeline = phiero.get_current_sequence()
         frame_start = int(track_item.timelineIn())
-        frame_duration = int(track_item.sourceDuration() / speed)
+        frame_duration = int((track_item.duration() - 1) / speed)
         fps = timeline.framerate().toFloat()

         return hiero_export.create_otio_time_range(

@@ -16,7 +16,7 @@ class PrecollectWorkfile(pyblish.api.ContextPlugin):
     """Inject the current working file into context"""

     label = "Precollect Workfile"
-    order = pyblish.api.CollectorOrder - 0.5
+    order = pyblish.api.CollectorOrder - 0.491

     def process(self, context):

@@ -84,6 +84,7 @@ class PrecollectWorkfile(pyblish.api.ContextPlugin):
             "colorspace": self.get_colorspace(project),
             "fps": fps
         }
+        self.log.debug("__ context_data: {}".format(pformat(context_data)))
         context.data.update(context_data)

         self.log.info("Creating instance: {}".format(instance))
@@ -1737,8 +1737,11 @@ def apply_shaders(relationships, shadernodes, nodes):
             log.warning("No nodes found for shading engine "
                         "'{0}'".format(id_shading_engines[0]))
             continue
+        try:
+            cmds.sets(filtered_nodes, forceElement=id_shading_engines[0])
+        except RuntimeError as rte:
+            log.error("Error during shader assignment: {}".format(rte))

-        cmds.sets(filtered_nodes, forceElement=id_shading_engines[0])
     # endregion

     apply_attributes(attributes, nodes_by_id)

@@ -1093,6 +1093,11 @@ class RenderProductsRenderman(ARenderProducts):
             if not enabled:
                 continue

+            # Skip display types not producing any file output.
+            # Is there a better way to do it?
+            if not display_types.get(display["driverNode"]["type"]):
+                continue
+
             aov_name = name
             if aov_name == "rmanDefaultDisplay":
                 aov_name = "beauty"

@@ -66,11 +66,10 @@ def install():
     log.info("Installing callbacks ... ")
     register_event_callback("init", on_init)

+    # Callbacks below are not required for headless mode, the `init` however
+    # is important to load referenced Alembics correctly at rendertime.
     if lib.IS_HEADLESS:
         log.info(("Running in headless mode, skipping Maya "
                   "save/open/new callback installation.."))
-
         return

     _set_project()

@@ -10,7 +10,8 @@ from openpype.pipeline import (
     get_representation_path,
     AVALON_CONTAINER_ID,
 )
-
+from openpype.api import Anatomy
+from openpype.settings import get_project_settings
 from .pipeline import containerise
 from . import lib

@@ -230,6 +231,10 @@ class ReferenceLoader(Loader):
         self.log.debug("No alembic nodes found in {}".format(members))

         try:
+            path = self.prepare_root_value(path,
+                                           representation["context"]
+                                           ["project"]
+                                           ["code"])
             content = cmds.file(path,
                                 loadReference=reference_node,
                                 type=file_type,

@@ -319,6 +324,29 @@ class ReferenceLoader(Loader):
         except RuntimeError:
             pass

+    def prepare_root_value(self, file_url, project_name):
+        """Replace root value with env var placeholder.
+
+        Use ${OPENPYPE_ROOT_WORK} (or any other root) instead of proper root
+        value when storing referenced url into a workfile.
+        Useful for remote workflows with SiteSync.
+
+        Args:
+            file_url (str)
+            project_name (dict)
+        Returns:
+            (str)
+        """
+        settings = get_project_settings(project_name)
+        use_env_var_as_root = (settings["maya"]
+                               ["maya-dirmap"]
+                               ["use_env_var_as_root"])
+        if use_env_var_as_root:
+            anatomy = Anatomy(project_name)
+            file_url = anatomy.replace_root_with_env_key(file_url, '${{{}}}')
+
+        return file_url
+
     @staticmethod
     def _organize_containers(nodes, container):
         # type: (list, str) -> None
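A rough sketch of the intent behind prepare_root_value above: swap a resolved root path for an environment-variable placeholder before the reference URL is written into the workfile. The substitution below is a hypothetical stand-in for what Anatomy.replace_root_with_env_key does; the paths and root name are made up:

    def replace_root_with_env_key(file_url, root_name, root_path):
        # 'P:/projects/ep101/work/scene.ma'
        #   -> '${OPENPYPE_ROOT_WORK}/ep101/work/scene.ma'
        if file_url.startswith(root_path):
            return "${%s}%s" % (root_name, file_url[len(root_path):])
        return file_url

    url = replace_root_with_env_key(
        "P:/projects/ep101/work/scene.ma",
        "OPENPYPE_ROOT_WORK",
        "P:/projects",
    )
    assert url == "${OPENPYPE_ROOT_WORK}/ep101/work/scene.ma"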
@@ -38,3 +38,7 @@ class CreateAnimation(plugin.Creator):

         # Default to exporting world-space
         self.data["worldSpace"] = True
+
+        # Default to not send to farm.
+        self.data["farm"] = False
+        self.data["priority"] = 50

 15  openpype/hosts/maya/plugins/create/create_multiverse_look.py  Normal file
@@ -0,0 +1,15 @@
+from openpype.hosts.maya.api import plugin
+
+
+class CreateMultiverseLook(plugin.Creator):
+    """Create Multiverse Look"""
+
+    name = "mvLook"
+    label = "Multiverse Look"
+    family = "mvLook"
+    icon = "cubes"
+
+    def __init__(self, *args, **kwargs):
+        super(CreateMultiverseLook, self).__init__(*args, **kwargs)
+        self.data["fileFormat"] = ["usda", "usd"]
+        self.data["publishMipMap"] = True

@@ -2,11 +2,11 @@ from openpype.hosts.maya.api import plugin, lib


 class CreateMultiverseUsd(plugin.Creator):
-    """Multiverse USD data"""
+    """Create Multiverse USD Asset"""

-    name = "usdMain"
-    label = "Multiverse USD"
-    family = "usd"
+    name = "mvUsdMain"
+    label = "Multiverse USD Asset"
+    family = "mvUsd"
     icon = "cubes"

     def __init__(self, *args, **kwargs):

@@ -15,6 +15,7 @@ class CreateMultiverseUsd(plugin.Creator):
         # Add animation data first, since it maintains order.
         self.data.update(lib.collect_animation_data(True))

+        self.data["fileFormat"] = ["usd", "usda", "usdz"]
         self.data["stripNamespaces"] = False
         self.data["mergeTransformAndShape"] = False
         self.data["writeAncestors"] = True

@@ -45,6 +46,7 @@ class CreateMultiverseUsd(plugin.Creator):
         self.data["writeShadingNetworks"] = False
         self.data["writeTransformMatrix"] = True
         self.data["writeUsdAttributes"] = False
+        self.data["writeInstancesAsReferences"] = False
         self.data["timeVaryingTopology"] = False
         self.data["customMaterialNamespace"] = ''
         self.data["numTimeSamples"] = 1

@@ -4,9 +4,9 @@ from openpype.hosts.maya.api import plugin, lib
 class CreateMultiverseUsdComp(plugin.Creator):
     """Create Multiverse USD Composition"""

-    name = "usdCompositionMain"
+    name = "mvUsdCompositionMain"
     label = "Multiverse USD Composition"
-    family = "usdComposition"
+    family = "mvUsdComposition"
     icon = "cubes"

     def __init__(self, *args, **kwargs):

@@ -15,9 +15,12 @@ class CreateMultiverseUsdComp(plugin.Creator):
         # Add animation data first, since it maintains order.
         self.data.update(lib.collect_animation_data(True))

+        # Order of `fileFormat` must match extract_multiverse_usd_comp.py
+        self.data["fileFormat"] = ["usda", "usd"]
         self.data["stripNamespaces"] = False
         self.data["mergeTransformAndShape"] = False
         self.data["flattenContent"] = False
         self.data["writeAsCompoundLayers"] = False
         self.data["writePendingOverrides"] = False
         self.data["numTimeSamples"] = 1
         self.data["timeSamplesSpan"] = 0.0

@@ -2,11 +2,11 @@ from openpype.hosts.maya.api import plugin, lib


 class CreateMultiverseUsdOver(plugin.Creator):
-    """Multiverse USD data"""
+    """Create Multiverse USD Override"""

-    name = "usdOverrideMain"
+    name = "mvUsdOverrideMain"
     label = "Multiverse USD Override"
-    family = "usdOverride"
+    family = "mvUsdOverride"
     icon = "cubes"

     def __init__(self, *args, **kwargs):

@@ -15,6 +15,8 @@ class CreateMultiverseUsdOver(plugin.Creator):
         # Add animation data first, since it maintains order.
         self.data.update(lib.collect_animation_data(True))

+        # Order of `fileFormat` must match extract_multiverse_usd_over.py
+        self.data["fileFormat"] = ["usda", "usd"]
         self.data["writeAll"] = False
         self.data["writeTransforms"] = True
         self.data["writeVisibility"] = True

@@ -28,3 +28,7 @@ class CreatePointCache(plugin.Creator):
         # Add options for custom attributes
         self.data["attr"] = ""
         self.data["attrPrefix"] = ""
+
+        # Default to not send to farm.
+        self.data["farm"] = False
+        self.data["priority"] = 50
@@ -35,8 +35,9 @@ class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):

         # hero_001 (abc)
         # asset_counter{optional}

-        nodes = cmds.file(self.fname,
+        file_url = self.prepare_root_value(self.fname,
+                                           context["project"]["code"])
+        nodes = cmds.file(file_url,
                           namespace=namespace,
                           sharedReferenceFile=False,
                           groupReference=True,

@@ -64,9 +64,11 @@ class AssProxyLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
         path = os.path.join(publish_folder, filename)

         proxyPath = proxyPath_base + ".ma"
         self.log.info

-        nodes = cmds.file(proxyPath,
+        file_url = self.prepare_root_value(proxyPath,
+                                           context["project"]["code"])
+
+        nodes = cmds.file(file_url,
                           namespace=namespace,
                           reference=True,
                           returnNewNodes=True,

@@ -123,7 +125,11 @@ class AssProxyLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
         assert os.path.exists(proxyPath), "%s does not exist." % proxyPath

         try:
-            content = cmds.file(proxyPath,
+            file_url = self.prepare_root_value(proxyPath,
+                                               representation["context"]
+                                               ["project"]
+                                               ["code"])
+            content = cmds.file(file_url,
                                 loadReference=reference_node,
                                 type="mayaAscii",
                                 returnNewNodes=True)

@@ -31,7 +31,9 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
         import maya.cmds as cmds

         with lib.maintained_selection():
-            nodes = cmds.file(self.fname,
+            file_url = self.prepare_root_value(self.fname,
+                                               context["project"]["code"])
+            nodes = cmds.file(file_url,
                               namespace=namespace,
                               reference=True,
                               returnNewNodes=True)

@@ -14,13 +14,13 @@ from openpype.hosts.maya.api.pipeline import containerise


 class MultiverseUsdLoader(load.LoaderPlugin):
-    """Load the USD by Multiverse"""
+    """Read USD data in a Multiverse Compound"""

-    families = ["model", "usd", "usdComposition", "usdOverride",
+    families = ["model", "mvUsd", "mvUsdComposition", "mvUsdOverride",
                 "pointcache", "animation"]
     representations = ["usd", "usda", "usdc", "usdz", "abc"]

-    label = "Read USD by Multiverse"
+    label = "Load USD to Multiverse"
     order = -10
     icon = "code-fork"
     color = "orange"

@@ -51,7 +51,9 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):

         with maintained_selection():
             cmds.loadPlugin("AbcImport.mll", quiet=True)
-            nodes = cmds.file(self.fname,
+            file_url = self.prepare_root_value(self.fname,
+                                               context["project"]["code"])
+            nodes = cmds.file(file_url,
                               namespace=namespace,
                               sharedReferenceFile=False,
                               reference=True,

@@ -53,7 +53,9 @@ class YetiRigLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):

         # load rig
         with lib.maintained_selection():
-            nodes = cmds.file(self.fname,
+            file_url = self.prepare_root_value(self.fname,
+                                               context["project"]["code"])
+            nodes = cmds.file(file_url,
                               namespace=namespace,
                               reference=True,
                               returnNewNodes=True,

@@ -55,3 +55,6 @@ class CollectAnimationOutputGeometry(pyblish.api.InstancePlugin):

         # Store data in the instance for the validator
         instance.data["out_hierarchy"] = hierarchy
+
+        if instance.data.get("farm"):
+            instance.data["families"].append("publish.farm")
 20  openpype/hosts/maya/plugins/publish/collect_fbx_camera.py  Normal file
@@ -0,0 +1,20 @@
+# -*- coding: utf-8 -*-
+from maya import cmds  # noqa
+import pyblish.api
+
+
+class CollectFbxCamera(pyblish.api.InstancePlugin):
+    """Collect Camera for FBX export."""
+
+    order = pyblish.api.CollectorOrder + 0.2
+    label = "Collect Camera for FBX export"
+    families = ["camera"]
+
+    def process(self, instance):
+        if not instance.data.get("families"):
+            instance.data["families"] = []
+
+        if "fbx" not in instance.data["families"]:
+            instance.data["families"].append("fbx")
+
+        instance.data["cameras"] = True
@@ -22,10 +22,46 @@ RENDERER_NODE_TYPES = [
     # redshift
     "RedshiftMeshParameters"
 ]

 SHAPE_ATTRS = set(SHAPE_ATTRS)


+def get_pxr_multitexture_file_attrs(node):
+    attrs = []
+    for i in range(9):
+        if cmds.attributeQuery("filename{}".format(i), node=node, ex=True):
+            file = cmds.getAttr("{}.filename{}".format(node, i))
+            if file:
+                attrs.append("filename{}".format(i))
+    return attrs
+
+
+FILE_NODES = {
+    "file": "fileTextureName",
+
+    "aiImage": "filename",
+
+    "RedshiftNormalMap": "text0",
+
+    "PxrBump": "filename",
+    "PxrNormalMap": "filename",
+    "PxrMultiTexture": get_pxr_multitexture_file_attrs,
+    "PxrPtexture": "filename",
+    "PxrTexture": "filename"
+}
+
+
+def get_attributes(dictionary, attr, node=None):
+    # type: (dict, str, str) -> list
+    if callable(dictionary[attr]):
+        val = dictionary[attr](node)
+    else:
+        val = dictionary.get(attr, [])
+
+    if not isinstance(val, list):
+        return [val]
+    return val


 def get_look_attrs(node):
     """Returns attributes of a node that are important for the look.
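The FILE_NODES table above mixes plain attribute names with a callable (PxrMultiTexture), and get_attributes normalizes both into a list. A tiny self-contained example of the same dispatch pattern, with hypothetical entries:

    lookup = {
        "file": "fileTextureName",
        "PxrMultiTexture": lambda node: ["filename0", "filename1"],
    }

    def get_attrs(lookup, node_type, node=None):
        value = lookup[node_type]
        if callable(value):
            # callables compute the attribute list per node
            return value(node)
        return [value]

    assert get_attrs(lookup, "file") == ["fileTextureName"]
    assert get_attrs(lookup, "PxrMultiTexture") == ["filename0", "filename1"]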
@@ -51,15 +87,14 @@ def get_look_attrs(node):
     if cmds.objectType(node, isAType="shape"):
         attrs = cmds.listAttr(node, changedSinceFileOpen=True) or []
         for attr in attrs:
-            if attr in SHAPE_ATTRS:
+            if attr in SHAPE_ATTRS or \
+                    attr not in SHAPE_ATTRS and attr.startswith('ai'):
                 result.append(attr)
-            elif attr.startswith('ai'):
-                result.append(attr)

     return result


-def node_uses_image_sequence(node):
+def node_uses_image_sequence(node, node_path):
+    # type: (str) -> bool
     """Return whether file node uses an image sequence or single image.

     Determine if a node uses an image sequence or just a single image,

@@ -74,13 +109,18 @@ def node_uses_image_sequence(node, node_path):
     """

     # useFrameExtension indicates an explicit image sequence
-    node_path = get_file_node_path(node).lower()
+    try:
+        use_frame_extension = cmds.getAttr('%s.useFrameExtension' % node)
+    except ValueError:
+        use_frame_extension = False
+    if use_frame_extension:
+        return True

     # The following tokens imply a sequence
-    patterns = ["<udim>", "<tile>", "<uvtile>", "u<u>_v<v>", "<frame0"]
-
-    return (cmds.getAttr('%s.useFrameExtension' % node) or
-            any(pattern in node_path for pattern in patterns))
+    patterns = ["<udim>", "<tile>", "<uvtile>",
+                "u<u>_v<v>", "<frame0", "<f4>"]
+    node_path_lowered = node_path.lower()
+    return any(pattern in node_path_lowered for pattern in patterns)

@@ -137,14 +177,15 @@ def seq_to_glob(path):
     return path


-def get_file_node_path(node):
+def get_file_node_paths(node):
+    # type: (str) -> list
     """Get the file path used by a Maya file node.

     Args:
         node (str): Name of the Maya file node

     Returns:
-        str: the file path in use
+        list: the file paths in use

     """
     # if the path appears to be sequence, use computedFileTextureNamePattern,

@@ -163,15 +204,20 @@ def get_file_node_paths(node):
                 "<uvtile>"]
     lower = texture_pattern.lower()
     if any(pattern in lower for pattern in patterns):
-        return texture_pattern
+        return [texture_pattern]

-    if cmds.nodeType(node) == 'aiImage':
-        return cmds.getAttr('{0}.filename'.format(node))
-    if cmds.nodeType(node) == 'RedshiftNormalMap':
-        return cmds.getAttr('{}.tex0'.format(node))
+    try:
+        file_attributes = get_attributes(
+            FILE_NODES, cmds.nodeType(node), node)
+    except AttributeError:
+        file_attributes = "fileTextureName"

-    # otherwise use fileTextureName
-    return cmds.getAttr('{0}.fileTextureName'.format(node))
+    files = []
+    for file_attr in file_attributes:
+        if cmds.attributeQuery(file_attr, node=node, exists=True):
+            files.append(cmds.getAttr("{}.{}".format(node, file_attr)))
+
+    return files


 def get_file_node_files(node):

@@ -185,16 +231,21 @@ def get_file_node_files(node):
         list: List of full file paths.

     """
+    paths = get_file_node_paths(node)
+    sequences = []
+    replaces = []
+    for index, path in enumerate(paths):
+        if node_uses_image_sequence(node, path):
+            glob_pattern = seq_to_glob(path)
+            sequences.extend(glob.glob(glob_pattern))
+            replaces.append(index)

-    path = get_file_node_path(node)
-    path = cmds.workspace(expandName=path)
-    if node_uses_image_sequence(node):
-        glob_pattern = seq_to_glob(path)
-        return glob.glob(glob_pattern)
-    elif os.path.exists(path):
-        return [path]
-    else:
-        return []
+    for index in replaces:
+        paths.pop(index)
+
+    paths.extend(sequences)
+
+    return [p for p in paths if os.path.exists(p)]


 class CollectLook(pyblish.api.InstancePlugin):
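For reference, what the sequence branch above feeds into glob: seq_to_glob turns token or frame-numbered paths into wildcard patterns. A condensed, self-contained approximation (the real helper also handles the u<u>_v<v> and <frame0...> tokens):

    import re

    def seq_to_glob(path):
        # token-style sequences first, matched case-insensitively
        for token in ("<udim>", "<tile>", "<uvtile>", "<f>", "#"):
            if token in path.lower():
                return re.sub(token, "*", path, flags=re.IGNORECASE)
        # otherwise replace the last digit run with a wildcard
        matches = list(re.finditer(r"\d+", path))
        if not matches:
            return path
        m = matches[-1]
        return path[:m.start()] + "*" + path[m.end():]

    assert seq_to_glob("/tex/diffuse.<UDIM>.tif") == "/tex/diffuse.*.tif"
    assert seq_to_glob("/renders/beauty.1001.exr") == "/renders/beauty.*.exr"
    assert seq_to_glob("/tex/diffuse.tif") == "/tex/diffuse.tif"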
@@ -238,13 +289,13 @@ class CollectLook(pyblish.api.InstancePlugin):
                           "for %s" % instance.data['name'])

         # Discover related object sets
-        self.log.info("Gathering sets..")
+        self.log.info("Gathering sets ...")
         sets = self.collect_sets(instance)

         # Lookup set (optimization)
         instance_lookup = set(cmds.ls(instance, long=True))

-        self.log.info("Gathering set relations..")
+        self.log.info("Gathering set relations ...")
         # Ensure iteration happen in a list so we can remove keys from the
         # dict within the loop

@@ -326,7 +377,10 @@ class CollectLook(pyblish.api.InstancePlugin):
                      "volumeShader",
                      "displacementShader",
                      "aiSurfaceShader",
-                     "aiVolumeShader"]
+                     "aiVolumeShader",
+                     "rman__surface",
+                     "rman__displacement"
+                     ]
         if look_sets:
             materials = []

@@ -374,15 +428,17 @@ class CollectLook(pyblish.api.InstancePlugin):
             or []
         )

-        files = cmds.ls(history, type="file", long=True)
-        files.extend(cmds.ls(history, type="aiImage", long=True))
-        files.extend(cmds.ls(history, type="RedshiftNormalMap", long=True))
+        all_supported_nodes = FILE_NODES.keys()
+        files = []
+        for node_type in all_supported_nodes:
+            files.extend(cmds.ls(history, type=node_type, long=True))

+        self.log.info("Collected file nodes:\n{}".format(files))
         # Collect textures if any file nodes are found
         instance.data["resources"] = []
         for n in files:
-            instance.data["resources"].append(self.collect_resource(n))
+            for res in self.collect_resources(n):
+                instance.data["resources"].append(res)

         self.log.info("Collected resources: {}".format(instance.data["resources"]))

@@ -502,7 +558,7 @@ class CollectLook(pyblish.api.InstancePlugin):

         return attributes

-    def collect_resource(self, node):
+    def collect_resources(self, node):
         """Collect the link to the file(s) used (resource)
         Args:
             node (str): name of the node

@@ -510,68 +566,69 @@ class CollectLook(pyblish.api.InstancePlugin):
         Returns:
             dict
         """

         self.log.debug("processing: {}".format(node))
-        if cmds.nodeType(node) not in ["file", "aiImage", "RedshiftNormalMap"]:
+        all_supported_nodes = FILE_NODES.keys()
+        if cmds.nodeType(node) not in all_supported_nodes:
             self.log.error(
                 "Unsupported file node: {}".format(cmds.nodeType(node)))
             raise AssertionError("Unsupported file node")

-        if cmds.nodeType(node) == 'file':
-            self.log.debug(" - file node")
-            attribute = "{}.fileTextureName".format(node)
-            computed_attribute = "{}.computedFileTextureNamePattern".format(node)
-        elif cmds.nodeType(node) == 'aiImage':
-            self.log.debug("aiImage node")
-            attribute = "{}.filename".format(node)
-            computed_attribute = attribute
-        elif cmds.nodeType(node) == 'RedshiftNormalMap':
-            self.log.debug("RedshiftNormalMap node")
-            attribute = "{}.tex0".format(node)
-            computed_attribute = attribute
+        self.log.debug(" - got {}".format(cmds.nodeType(node)))

-        source = cmds.getAttr(attribute)
-        self.log.info(" - file source: {}".format(source))
-        color_space_attr = "{}.colorSpace".format(node)
-        try:
-            color_space = cmds.getAttr(color_space_attr)
-        except ValueError:
-            # node doesn't have colorspace attribute
-            color_space = "Raw"
-        # Compare with the computed file path, e.g. the one with the <UDIM>
-        # pattern in it, to generate some logging information about this
-        # difference
-        # computed_attribute = "{}.computedFileTextureNamePattern".format(node)
-        computed_source = cmds.getAttr(computed_attribute)
-        if source != computed_source:
-            self.log.debug("Detected computed file pattern difference "
-                           "from original pattern: {0} "
-                           "({1} -> {2})".format(node,
-                                                 source,
-                                                 computed_source))
+        attributes = get_attributes(FILE_NODES, cmds.nodeType(node), node)
+        for attribute in attributes:
+            source = cmds.getAttr("{}.{}".format(
+                node,
+                attribute
+            ))
+            computed_attribute = "{}.{}".format(node, attribute)
+            if attribute == "fileTextureName":
+                computed_attribute = node + ".computedFileTextureNamePattern"
+
+            # We replace backslashes with forward slashes because V-Ray
+            # can't handle the UDIM files with the backslashes in the
+            # paths as the computed patterns
+            source = source.replace("\\", "/")
+            self.log.info(" - file source: {}".format(source))
+            color_space_attr = "{}.colorSpace".format(node)
+            try:
+                color_space = cmds.getAttr(color_space_attr)
+            except ValueError:
+                # node doesn't have colorspace attribute
+                color_space = "Raw"
+            # Compare with the computed file path, e.g. the one with
+            # the <UDIM> pattern in it, to generate some logging information
+            # about this difference
+            computed_source = cmds.getAttr(computed_attribute)
+            if source != computed_source:
+                self.log.debug("Detected computed file pattern difference "
+                               "from original pattern: {0} "
+                               "({1} -> {2})".format(node,
                                                      source,
                                                      computed_source))

-        files = get_file_node_files(node)
-        if len(files) == 0:
-            self.log.error("No valid files found from node `%s`" % node)
-
-        # We replace backslashes with forward slashes because V-Ray
-        # can't handle the UDIM files with the backslashes in the
-        # paths as the computed patterns
-        source = source.replace("\\", "/")
-
-        self.log.info("collection of resource done:")
-        self.log.info(" - node: {}".format(node))
-        self.log.info(" - attribute: {}".format(attribute))
-        self.log.info(" - source: {}".format(source))
-        self.log.info(" - file: {}".format(files))
-        self.log.info(" - color space: {}".format(color_space))
+            files = get_file_node_files(node)
+            if len(files) == 0:
+                self.log.error("No valid files found from node `%s`" % node)

-        # Define the resource
-        return {"node": node,
-                "attribute": attribute,
+            self.log.info("collection of resource done:")
+            self.log.info(" - node: {}".format(node))
+            self.log.info(" - attribute: {}".format(attribute))
+            self.log.info(" - source: {}".format(source))
+            self.log.info(" - file: {}".format(files))
+            self.log.info(" - color space: {}".format(color_space))
+
+            # Define the resource
+            yield {
+                "node": node,
+                # here we are passing not only attribute, but with node again
+                # this should be simplified and changed extractor.
+                "attribute": "{}.{}".format(node, attribute),
                 "source": source,  # required for resources
                 "files": files,
-                "color_space": color_space}  # required for resources
+                "color_space": color_space
+            }  # required for resources


 class CollectModelRenderSets(CollectLook):
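Note that collect_resources above is now a generator: it yields one resource dict per discovered file attribute, which is why the caller loops over it instead of appending a single return value. A minimal hypothetical illustration of that consumption pattern:

    def collect_resources(node):
        # hypothetical: one resource per file attribute on the node
        for attribute in ("filename0", "filename1"):
            yield {"node": node, "attribute": "{}.{}".format(node, attribute)}

    resources = []
    for res in collect_resources("myPxrMultiTexture1"):
        resources.append(res)
    assert len(resources) == 2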
 372  openpype/hosts/maya/plugins/publish/collect_multiverse_look.py  Normal file
|
|
@ -0,0 +1,372 @@
|
|||
import glob
|
||||
import os
|
||||
import re
|
||||
|
||||
from maya import cmds
|
||||
import pyblish.api
|
||||
from openpype.hosts.maya.api import lib
|
||||
|
||||
SHAPE_ATTRS = ["castsShadows",
|
||||
"receiveShadows",
|
||||
"motionBlur",
|
||||
"primaryVisibility",
|
||||
"smoothShading",
|
||||
"visibleInReflections",
|
||||
"visibleInRefractions",
|
||||
"doubleSided",
|
||||
"opposite"]
|
||||
|
||||
SHAPE_ATTRS = set(SHAPE_ATTRS)
|
||||
COLOUR_SPACES = ['sRGB', 'linear', 'auto']
|
||||
MIPMAP_EXTENSIONS = ['tdl']
|
||||
|
||||
|
||||
def get_look_attrs(node):
|
||||
"""Returns attributes of a node that are important for the look.
|
||||
|
||||
These are the "changed" attributes (those that have edits applied
|
||||
in the current scene).
|
||||
|
||||
Returns:
|
||||
list: Attribute names to extract
|
||||
|
||||
"""
|
||||
# When referenced get only attributes that are "changed since file open"
|
||||
# which includes any reference edits, otherwise take *all* user defined
|
||||
# attributes
|
||||
is_referenced = cmds.referenceQuery(node, isNodeReferenced=True)
|
||||
result = cmds.listAttr(node, userDefined=True,
|
||||
changedSinceFileOpen=is_referenced) or []
|
||||
|
||||
# `cbId` is added when a scene is saved, ignore by default
|
||||
if "cbId" in result:
|
||||
result.remove("cbId")
|
||||
|
||||
# For shapes allow render stat changes
|
||||
if cmds.objectType(node, isAType="shape"):
|
||||
attrs = cmds.listAttr(node, changedSinceFileOpen=True) or []
|
||||
for attr in attrs:
|
||||
if attr in SHAPE_ATTRS:
|
||||
result.append(attr)
|
||||
elif attr.startswith('ai'):
|
||||
result.append(attr)
|
||||
|
||||
return result
|
||||
|
||||
|
||||
def node_uses_image_sequence(node):
|
||||
"""Return whether file node uses an image sequence or single image.
|
||||
|
||||
Determine if a node uses an image sequence or just a single image,
|
||||
not always obvious from its file path alone.
|
||||
|
||||
Args:
|
||||
node (str): Name of the Maya node
|
||||
|
||||
Returns:
|
||||
bool: True if node uses an image sequence
|
||||
|
||||
"""
|
||||
|
||||
# useFrameExtension indicates an explicit image sequence
|
||||
node_path = get_file_node_path(node).lower()
|
||||
|
||||
# The following tokens imply a sequence
|
||||
patterns = ["<udim>", "<tile>", "<uvtile>", "u<u>_v<v>", "<frame0"]
|
||||
|
||||
return (cmds.getAttr('%s.useFrameExtension' % node) or
|
||||
any(pattern in node_path for pattern in patterns))
|
||||
|
||||
|
||||
def seq_to_glob(path):
|
||||
"""Takes an image sequence path and returns it in glob format,
|
||||
with the frame number replaced by a '*'.
|
||||
|
||||
Image sequences may be numerical sequences, e.g. /path/to/file.1001.exr
|
||||
will return as /path/to/file.*.exr.
|
||||
|
||||
Image sequences may also use tokens to denote sequences, e.g.
|
||||
/path/to/texture.<UDIM>.tif will return as /path/to/texture.*.tif.
|
||||
|
||||
Args:
|
||||
path (str): the image sequence path
|
||||
|
||||
Returns:
|
||||
str: Return glob string that matches the filename pattern.
|
||||
|
||||
"""
|
||||
|
||||
if path is None:
|
||||
return path
|
||||
|
||||
# If any of the patterns, convert the pattern
|
||||
patterns = {
|
||||
"<udim>": "<udim>",
|
||||
"<tile>": "<tile>",
|
||||
"<uvtile>": "<uvtile>",
|
||||
"#": "#",
|
||||
"u<u>_v<v>": "<u>|<v>",
|
||||
"<frame0": "<frame0\d+>", # noqa - copied from collect_look.py
|
||||
"<f>": "<f>"
|
||||
}
|
||||
|
||||
lower = path.lower()
|
||||
has_pattern = False
|
||||
for pattern, regex_pattern in patterns.items():
|
||||
if pattern in lower:
|
||||
path = re.sub(regex_pattern, "*", path, flags=re.IGNORECASE)
|
||||
has_pattern = True
|
||||
|
||||
if has_pattern:
|
||||
return path
|
||||
|
||||
base = os.path.basename(path)
|
||||
matches = list(re.finditer(r'\d+', base))
|
||||
if matches:
|
||||
match = matches[-1]
|
||||
new_base = '{0}*{1}'.format(base[:match.start()],
|
||||
base[match.end():])
|
||||
head = os.path.dirname(path)
|
||||
return os.path.join(head, new_base)
|
||||
else:
|
||||
return path
|
||||
|
||||
|
||||
def get_file_node_path(node):
|
||||
"""Get the file path used by a Maya file node.
|
||||
|
||||
Args:
|
||||
node (str): Name of the Maya file node
|
||||
|
||||
Returns:
|
||||
str: the file path in use
|
||||
|
||||
"""
|
||||
# if the path appears to be sequence, use computedFileTextureNamePattern,
|
||||
# this preserves the <> tag
|
||||
if cmds.attributeQuery('computedFileTextureNamePattern',
|
||||
node=node,
|
||||
exists=True):
|
||||
plug = '{0}.computedFileTextureNamePattern'.format(node)
|
||||
texture_pattern = cmds.getAttr(plug)
|
||||
|
||||
patterns = ["<udim>",
|
||||
"<tile>",
|
||||
"u<u>_v<v>",
|
||||
"<f>",
|
||||
"<frame0",
|
||||
"<uvtile>"]
|
||||
lower = texture_pattern.lower()
|
||||
if any(pattern in lower for pattern in patterns):
|
||||
return texture_pattern
|
||||
|
||||
if cmds.nodeType(node) == 'aiImage':
|
||||
return cmds.getAttr('{0}.filename'.format(node))
|
||||
if cmds.nodeType(node) == 'RedshiftNormalMap':
|
||||
return cmds.getAttr('{}.tex0'.format(node))
|
||||
|
||||
# otherwise use fileTextureName
|
||||
return cmds.getAttr('{0}.fileTextureName'.format(node))
|
||||
|
||||
|
||||
def get_file_node_files(node):
|
||||
"""Return the file paths related to the file node
|
||||
|
||||
Note:
|
||||
Will only return existing files. Returns an empty list
|
||||
if not valid existing files are linked.
|
||||
|
||||
Returns:
|
||||
list: List of full file paths.
|
||||
|
||||
"""
|
||||
|
||||
path = get_file_node_path(node)
|
||||
path = cmds.workspace(expandName=path)
|
||||
if node_uses_image_sequence(node):
|
||||
glob_pattern = seq_to_glob(path)
|
||||
return glob.glob(glob_pattern)
|
||||
elif os.path.exists(path):
|
||||
return [path]
|
||||
else:
|
||||
return []
|
||||
|
||||
|
||||
def get_mipmap(fname):
|
||||
for colour_space in COLOUR_SPACES:
|
||||
for mipmap_ext in MIPMAP_EXTENSIONS:
|
||||
mipmap_fname = '.'.join([fname, colour_space, mipmap_ext])
|
||||
if os.path.exists(mipmap_fname):
|
||||
return mipmap_fname
|
||||
return None
|
||||
|
||||
|
||||
def is_mipmap(fname):
|
||||
ext = os.path.splitext(fname)[1][1:]
|
||||
if ext in MIPMAP_EXTENSIONS:
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
class CollectMultiverseLookData(pyblish.api.InstancePlugin):
|
||||
"""Collect Multiverse Look
|
||||
|
||||
"""
|
||||
|
||||
order = pyblish.api.CollectorOrder + 0.2
|
||||
label = 'Collect Multiverse Look'
|
||||
families = ["mvLook"]
|
||||
|
||||
def process(self, instance):
|
||||
# Load plugin first
|
||||
cmds.loadPlugin("MultiverseForMaya", quiet=True)
|
||||
import multiverse
|
||||
|
||||
self.log.info("Processing mvLook for '{}'".format(instance))
|
||||
|
||||
nodes = set()
|
||||
for node in instance:
|
||||
# We want only mvUsdCompoundShape nodes.
|
||||
nodes_of_interest = cmds.ls(node,
|
||||
dag=True,
|
||||
shapes=False,
|
||||
type="mvUsdCompoundShape",
|
||||
noIntermediate=True,
|
||||
long=True)
|
||||
nodes.update(nodes_of_interest)
|
||||
|
||||
files = []
|
||||
sets = {}
|
||||
instance.data["resources"] = []
|
||||
publishMipMap = instance.data["publishMipMap"]
|
||||
|
||||
for node in nodes:
|
||||
self.log.info("Getting resources for '{}'".format(node))
|
||||
|
||||
# We know what nodes need to be collected, now we need to
|
||||
# extract the materials overrides.
|
||||
overrides = multiverse.ListMaterialOverridePrims(node)
|
||||
for override in overrides:
|
||||
matOver = multiverse.GetMaterialOverride(node, override)
|
||||
|
||||
if isinstance(matOver, multiverse.MaterialSourceShadingGroup):
|
||||
# We now need to grab the shadingGroup so add it to the
|
||||
# sets we pass down the pipe.
|
||||
shadingGroup = matOver.shadingGroupName
|
||||
self.log.debug("ShadingGroup = '{}'".format(shadingGroup))
|
||||
sets[shadingGroup] = {"uuid": lib.get_id(
|
||||
shadingGroup), "members": list()}
|
||||
|
||||
# The SG may reference files, add those too!
|
||||
history = cmds.listHistory(shadingGroup)
|
||||
files = cmds.ls(history, type="file", long=True)
|
||||
|
||||
for f in files:
|
||||
resources = self.collect_resource(f, publishMipMap)
|
||||
instance.data["resources"].append(resources)
|
||||
|
||||
elif isinstance(matOver, multiverse.MaterialSourceUsdPath):
|
||||
# TODO: Handle this later.
|
||||
pass
|
||||
|
||||
# Store data on the instance for validators, extractos, etc.
|
||||
instance.data["lookData"] = {
|
||||
"attributes": [],
|
||||
"relationships": sets
|
||||
}
|
||||
|
||||
    def collect_resource(self, node, publishMipMap):
        """Collect the link to the file(s) used (resource).

        Args:
            node (str): name of the node

        Returns:
            dict
        """
        self.log.debug("processing: {}".format(node))
        if cmds.nodeType(node) not in ["file", "aiImage", "RedshiftNormalMap"]:
            self.log.error(
                "Unsupported file node: {}".format(cmds.nodeType(node)))
            raise AssertionError("Unsupported file node")

        if cmds.nodeType(node) == 'file':
            self.log.debug("  - file node")
            attribute = "{}.fileTextureName".format(node)
            computed_attribute = "{}.computedFileTextureNamePattern".format(
                node)
        elif cmds.nodeType(node) == 'aiImage':
            self.log.debug("aiImage node")
            attribute = "{}.filename".format(node)
            computed_attribute = attribute
        elif cmds.nodeType(node) == 'RedshiftNormalMap':
            self.log.debug("RedshiftNormalMap node")
            attribute = "{}.tex0".format(node)
            computed_attribute = attribute

        source = cmds.getAttr(attribute)
        self.log.info("  - file source: {}".format(source))
        color_space_attr = "{}.colorSpace".format(node)
        try:
            color_space = cmds.getAttr(color_space_attr)
        except ValueError:
            # node doesn't have colorspace attribute
            color_space = "Raw"
        # Compare with the computed file path, e.g. the one with the <UDIM>
        # pattern in it, to generate some logging information about this
        # difference
        # computed_attribute = "{}.computedFileTextureNamePattern".format(node)
        computed_source = cmds.getAttr(computed_attribute)
        if source != computed_source:
            self.log.debug("Detected computed file pattern difference "
                           "from original pattern: {0} "
                           "({1} -> {2})".format(node,
                                                 source,
                                                 computed_source))

        # We replace backslashes with forward slashes because V-Ray
        # can't handle the UDIM files with the backslashes in the
        # paths as the computed patterns
        source = source.replace("\\", "/")

        files = get_file_node_files(node)
        files = self.handle_files(files, publishMipMap)
        if len(files) == 0:
            self.log.error("No valid files found from node `%s`" % node)

        self.log.info("collection of resource done:")
        self.log.info("  - node: {}".format(node))
        self.log.info("  - attribute: {}".format(attribute))
        self.log.info("  - source: {}".format(source))
        self.log.info("  - file: {}".format(files))
        self.log.info("  - color space: {}".format(color_space))

        # Define the resource
        return {"node": node,
                "attribute": attribute,
                "source": source,  # required for resources
                "files": files,
                "color_space": color_space}  # required for resources

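For reference, a resource entry returned by collect_resource() has this shape (all values below are illustrative, not taken from a real scene):

resource = {
    "node": "wood_file1",
    "attribute": "wood_file1.fileTextureName",
    "source": "sourceimages/wood.<UDIM>.exr",
    "files": ["/proj/sourceimages/wood.1001.exr",
              "/proj/sourceimages/wood.1001.exr.sRGB.tdl"],
    "color_space": "sRGB",
}
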
    def handle_files(self, files, publishMipMap):
        """This will go through all the files and make sure that they are
        either already mipmapped or have a corresponding mipmap sidecar and
        add that to the list."""
        if not publishMipMap:
            return files

        extra_files = []
        self.log.debug("Expecting MipMaps, going to look for them.")
        for fname in files:
            self.log.info("Checking '{}' for mipmaps".format(fname))
            if is_mipmap(fname):
                self.log.debug(" - file is already MipMap, skipping.")
                continue

            mipmap = get_mipmap(fname)
            if mipmap:
                self.log.info(" mipmap found for '{}'".format(fname))
                extra_files.append(mipmap)
            else:
                self.log.warning(" no mipmap found for '{}'".format(fname))
        return files + extra_files

openpype/hosts/maya/plugins/publish/collect_pointcache.py (new file)
@@ -0,0 +1,14 @@
import pyblish.api


class CollectPointcache(pyblish.api.InstancePlugin):
    """Collect pointcache data for instance."""

    order = pyblish.api.CollectorOrder + 0.4
    families = ["pointcache"]
    label = "Collect Pointcache"
    hosts = ["maya"]

    def process(self, instance):
        if instance.data.get("farm"):
            instance.data["families"].append("publish.farm")

@@ -339,9 +339,15 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
             "source": filepath,
             "expectedFiles": full_exp_files,
             "publishRenderMetadataFolder": common_publish_meta_path,
-            "resolutionWidth": cmds.getAttr("defaultResolution.width"),
-            "resolutionHeight": cmds.getAttr("defaultResolution.height"),
-            "pixelAspect": cmds.getAttr("defaultResolution.pixelAspect"),
+            "resolutionWidth": lib.get_attr_in_layer(
+                "defaultResolution.width", layer=layer_name
+            ),
+            "resolutionHeight": lib.get_attr_in_layer(
+                "defaultResolution.height", layer=layer_name
+            ),
+            "pixelAspect": lib.get_attr_in_layer(
+                "defaultResolution.pixelAspect", layer=layer_name
+            ),
             "tileRendering": render_instance.data.get("tileRendering") or False,  # noqa: E501
             "tilesX": render_instance.data.get("tilesX") or 2,
             "tilesY": render_instance.data.get("tilesY") or 2,

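Why this matters: resolution attributes can carry per-render-layer override values, so reading them with plain cmds.getAttr returns whatever the currently active layer says, not the layer being collected. A hedged illustration of the failure mode (all values below are made up; this is not OpenPype's implementation of get_attr_in_layer):

# Hypothetical scene: the "beauty" layer overrides resolution to 4K while
# the master layer is 2K. Collecting with the old code while the master
# layer is active would record 2048 for "beauty"; get_attr_in_layer()
# evaluates the attribute as seen from the queried layer instead.
active_layer_width = 2048   # plain cmds.getAttr(...) result
beauty_layer_width = 4096   # lib.get_attr_in_layer(..., layer="beauty") result
assert active_layer_width != beauty_layer_width
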
@@ -77,15 +77,14 @@ class CollectReview(pyblish.api.InstancePlugin):
             instance.data['remove'] = True
             self.log.debug('instance data {}'.format(instance.data))
         else:
-            if self.legacy:
-                instance.data['subset'] = task + 'Review'
-            else:
-                subset = "{}{}{}".format(
-                    task,
-                    instance.data["subset"][0].upper(),
-                    instance.data["subset"][1:]
-                )
-                instance.data['subset'] = subset
+            legacy_subset_name = task + 'Review'
+            asset_doc_id = instance.context.data['assetEntity']["_id"]
+            subsets = legacy_io.find({"type": "subset",
+                                      "name": legacy_subset_name,
+                                      "parent": asset_doc_id}).distinct("_id")
+            if len(list(subsets)) > 0:
+                self.log.debug("Existing subsets found, keep legacy name.")
+                instance.data['subset'] = legacy_subset_name

             instance.data['review_camera'] = camera
             instance.data['frameStartFtrack'] = \

@@ -124,9 +124,15 @@ class CollectVrayScene(pyblish.api.InstancePlugin):
             # Add source to allow tracing back to the scene from
             # which was submitted originally
             "source": context.data["currentFile"].replace("\\", "/"),
-            "resolutionWidth": cmds.getAttr("defaultResolution.width"),
-            "resolutionHeight": cmds.getAttr("defaultResolution.height"),
-            "pixelAspect": cmds.getAttr("defaultResolution.pixelAspect"),
+            "resolutionWidth": lib.get_attr_in_layer(
+                "defaultResolution.width", layer=layer_name
+            ),
+            "resolutionHeight": lib.get_attr_in_layer(
+                "defaultResolution.height", layer=layer_name
+            ),
+            "pixelAspect": lib.get_attr_in_layer(
+                "defaultResolution.pixelAspect", layer=layer_name
+            ),
             "priority": instance.data.get("priority"),
             "useMultipleSceneFiles": instance.data.get(
                 "vraySceneMultipleFiles")

@@ -16,13 +16,19 @@ class ExtractAnimation(openpype.api.Extractor):
    Positions and normals, uvs, creases are preserved, but nothing more,
    for plain and predictable point caches.

    Plugin can run locally or remotely (on a farm - if instance is marked with
    "farm" it will be skipped in local processing, but processed on farm)
    """

    label = "Extract Animation"
    hosts = ["maya"]
    families = ["animation"]
    targets = ["local", "remote"]

    def process(self, instance):
        if instance.data.get("farm"):
            self.log.debug("Should be processed on farm, skipping.")
            return

        # Collect the out set nodes
        out_sets = [node for node in instance if node.endswith("out_SET")]

@@ -89,4 +95,6 @@ class ExtractAnimation(openpype.api.Extractor):
        }
        instance.data["representations"].append(representation)

        instance.context.data["cleanupFullPaths"].append(path)

        self.log.info("Extracted {} to {}".format(instance, dirname))

@@ -146,7 +146,7 @@ class ExtractLook(openpype.api.Extractor):

     label = "Extract Look (Maya Scene + JSON)"
     hosts = ["maya"]
-    families = ["look"]
+    families = ["look", "mvLook"]
     order = pyblish.api.ExtractorOrder + 0.2
     scene_type = "ma"
     look_data_type = "json"

@@ -372,10 +372,12 @@ class ExtractLook(openpype.api.Extractor):

         if mode == COPY:
             transfers.append((source, destination))
-            self.log.info('copying')
+            self.log.info('file will be copied {} -> {}'.format(
+                source, destination))
         elif mode == HARDLINK:
             hardlinks.append((source, destination))
-            self.log.info('hardlinking')
+            self.log.info('file will be hardlinked {} -> {}'.format(
+                source, destination))

         # Store the hashes from hash to destination to include in the
         # database

openpype/hosts/maya/plugins/publish/extract_multiverse_look.py (new file)
@@ -0,0 +1,157 @@
import os

from maya import cmds

import openpype.api
from openpype.hosts.maya.api.lib import maintained_selection


class ExtractMultiverseLook(openpype.api.Extractor):
    """Extractor for Multiverse USD look data.

    This will extract:

    - the shading networks that are assigned in MEOW as Maya material
      overrides to a Multiverse Compound
    - settings for a Multiverse Write Override operation.

    Relevant settings are visible in the Maya set node created by a Multiverse
    USD Look instance creator.

    The input data contained in the set is:

    - a single Multiverse Compound node with any number of Maya material
      overrides (typically set in MEOW)

    Upon publish two files will be written:

    - a .usda override file containing material assignment information
    - a .ma file containing shading networks

    Note: when layering the material assignment override on a loaded Compound,
    remember to set a matching attribute override with the namespace of
    the loaded compound in order for the material assignment to resolve.
    """

    label = "Extract Multiverse USD Look"
    hosts = ["maya"]
    families = ["mvLook"]
    scene_type = "usda"
    file_formats = ["usda", "usd"]

    @property
    def options(self):
        """Overridable options for Multiverse USD Export.

        Given in the following format:
            - {NAME: EXPECTED TYPE}

        If the overridden option's type does not match,
        the option is not included and a warning is logged.
        """

        return {
            "writeAll": bool,
            "writeTransforms": bool,
            "writeVisibility": bool,
            "writeAttributes": bool,
            "writeMaterials": bool,
            "writeVariants": bool,
            "writeVariantsDefinition": bool,
            "writeActiveState": bool,
            "writeNamespaces": bool,
            "numTimeSamples": int,
            "timeSamplesSpan": float
        }

    @property
    def default_options(self):
        """The default options for Multiverse USD extraction."""

        return {
            "writeAll": False,
            "writeTransforms": False,
            "writeVisibility": False,
            "writeAttributes": False,
            "writeMaterials": True,
            "writeVariants": False,
            "writeVariantsDefinition": False,
            "writeActiveState": False,
            "writeNamespaces": False,
            "numTimeSamples": 1,
            "timeSamplesSpan": 0.0
        }

    def get_file_format(self, instance):
        fileFormat = instance.data["fileFormat"]
        if fileFormat in range(len(self.file_formats)):
            self.scene_type = self.file_formats[fileFormat]

    def process(self, instance):
        # Load plugin first
        cmds.loadPlugin("MultiverseForMaya", quiet=True)

        # Define output file path
        staging_dir = self.staging_dir(instance)
        self.get_file_format(instance)
        file_name = "{0}.{1}".format(instance.name, self.scene_type)
        file_path = os.path.join(staging_dir, file_name)
        file_path = file_path.replace('\\', '/')

        # Parse export options
        options = self.default_options
        self.log.info("Export options: {0}".format(options))

        # Perform extraction
        self.log.info("Performing extraction ...")

        with maintained_selection():
            members = instance.data("setMembers")
            members = cmds.ls(members,
                              dag=True,
                              shapes=False,
                              type="mvUsdCompoundShape",
                              noIntermediate=True,
                              long=True)
            self.log.info('Collected object {}'.format(members))
            if len(members) > 1:
                self.log.error('More than one member: {}'.format(members))

            import multiverse

            over_write_opts = multiverse.OverridesWriteOptions()
            options_discard_keys = {
                "numTimeSamples",
                "timeSamplesSpan",
                "frameStart",
                "frameEnd",
                "handleStart",
                "handleEnd",
                "step",
                "fps"
            }
            for key, value in options.items():
                if key in options_discard_keys:
                    continue
                setattr(over_write_opts, key, value)

            for member in members:
                # @TODO: Make sure there is only one here.

                self.log.debug("Writing Override for '{}'".format(member))
                multiverse.WriteOverrides(file_path, member, over_write_opts)

        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            'name': self.scene_type,
            'ext': self.scene_type,
            'files': file_name,
            'stagingDir': staging_dir
        }
        instance.data["representations"].append(representation)

        self.log.info("Extracted instance {} to {}".format(
            instance.name, file_path))

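The registered representation then carries the chosen format through the publish pipeline; filled in with illustrative values (names and paths are made up) it looks like:

representation = {
    "name": "usda",
    "ext": "usda",
    "files": "mvLookMain.usda",
    "stagingDir": "/tmp/pyblish_tmp_staging",
}
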
@@ -8,11 +8,27 @@ from openpype.hosts.maya.api.lib import maintained_selection


 class ExtractMultiverseUsd(openpype.api.Extractor):
-    """Extractor for USD by Multiverse."""
+    """Extractor for Multiverse USD Asset data.
+
+    This will extract settings for a Multiverse Write Asset operation:
+    they are visible in the Maya set node created by a Multiverse USD
+    Asset instance creator.
+
+    The input data contained in the set is:
+
+    - a single hierarchy of Maya nodes. Multiverse supports a variety of Maya
+      nodes such as transforms, mesh, curves, particles, instances, particle
+      instancers, pfx, MASH, lights, cameras, joints, connected materials,
+      shading networks etc. including many of their attributes.
+
+    Upon publish a .usd (or .usdz) asset file will typically be written.
+    """

-    label = "Extract Multiverse USD"
+    label = "Extract Multiverse USD Asset"
     hosts = ["maya"]
-    families = ["usd"]
+    families = ["mvUsd"]
     scene_type = "usd"
+    file_formats = ["usd", "usda", "usdz"]

     @property
     def options(self):

@@ -57,6 +73,7 @@ class ExtractMultiverseUsd(openpype.api.Extractor):
             "writeShadingNetworks": bool,
             "writeTransformMatrix": bool,
             "writeUsdAttributes": bool,
+            "writeInstancesAsReferences": bool,
             "timeVaryingTopology": bool,
             "customMaterialNamespace": str,
             "numTimeSamples": int,

@@ -98,6 +115,7 @@ class ExtractMultiverseUsd(openpype.api.Extractor):
             "writeShadingNetworks": False,
             "writeTransformMatrix": True,
             "writeUsdAttributes": False,
+            "writeInstancesAsReferences": False,
             "timeVaryingTopology": False,
             "customMaterialNamespace": str(),
             "numTimeSamples": 1,

@@ -130,12 +148,15 @@ class ExtractMultiverseUsd(openpype.api.Extractor):
         return options

     def process(self, instance):
-        # Load plugin firstly
+        # Load plugin first
         cmds.loadPlugin("MultiverseForMaya", quiet=True)

         # Define output file path
         staging_dir = self.staging_dir(instance)
-        file_name = "{}.usd".format(instance.name)
+        file_format = instance.data.get("fileFormat", 0)
+        if file_format in range(len(self.file_formats)):
+            self.scene_type = self.file_formats[file_format]
+        file_name = "{0}.{1}".format(instance.name, self.scene_type)
         file_path = os.path.join(staging_dir, file_name)
         file_path = file_path.replace('\\', '/')

@@ -149,12 +170,6 @@ class ExtractMultiverseUsd(openpype.api.Extractor):

         with maintained_selection():
             members = instance.data("setMembers")
-            members = cmds.ls(members,
-                              dag=True,
-                              shapes=True,
-                              type=("mesh"),
-                              noIntermediate=True,
-                              long=True)
             self.log.info('Collected object {}'.format(members))

             import multiverse

@@ -199,10 +214,10 @@ class ExtractMultiverseUsd(openpype.api.Extractor):
             instance.data["representations"] = []

         representation = {
-            'name': 'usd',
-            'ext': 'usd',
+            'name': self.scene_type,
+            'ext': self.scene_type,
             'files': file_name,
-            "stagingDir": staging_dir
+            'stagingDir': staging_dir
         }
         instance.data["representations"].append(representation)

@@ -7,11 +7,28 @@ from openpype.hosts.maya.api.lib import maintained_selection


 class ExtractMultiverseUsdComposition(openpype.api.Extractor):
-    """Extractor of Multiverse USD Composition."""
+    """Extractor of Multiverse USD Composition data.
+
+    This will extract settings for a Multiverse Write Composition operation:
+    they are visible in the Maya set node created by a Multiverse USD
+    Composition instance creator.
+
+    The input data contained in the set is either:
+
+    - a single hierarchy consisting of several Multiverse Compound nodes, with
+      any number of layers, and Maya transform nodes
+    - a single Compound node with more than one layer (in this case the "Write
+      as Compound Layers" option should be set).
+
+    Upon publish a .usda composition file will be written.
+    """

     label = "Extract Multiverse USD Composition"
     hosts = ["maya"]
-    families = ["usdComposition"]
+    families = ["mvUsdComposition"]
     scene_type = "usd"
+    # Order of `fileFormat` must match create_multiverse_usd_comp.py
+    file_formats = ["usda", "usd"]

     @property
     def options(self):

@@ -29,6 +46,7 @@ class ExtractMultiverseUsdComposition(openpype.api.Extractor):
             "stripNamespaces": bool,
             "mergeTransformAndShape": bool,
             "flattenContent": bool,
+            "writeAsCompoundLayers": bool,
             "writePendingOverrides": bool,
             "numTimeSamples": int,
             "timeSamplesSpan": float

@@ -42,6 +60,7 @@ class ExtractMultiverseUsdComposition(openpype.api.Extractor):
             "stripNamespaces": True,
             "mergeTransformAndShape": False,
             "flattenContent": False,
+            "writeAsCompoundLayers": False,
             "writePendingOverrides": False,
             "numTimeSamples": 1,
             "timeSamplesSpan": 0.0

@@ -71,12 +90,15 @@ class ExtractMultiverseUsdComposition(openpype.api.Extractor):
         return options

     def process(self, instance):
-        # Load plugin firstly
+        # Load plugin first
         cmds.loadPlugin("MultiverseForMaya", quiet=True)

         # Define output file path
         staging_dir = self.staging_dir(instance)
-        file_name = "{}.usd".format(instance.name)
+        file_format = instance.data.get("fileFormat", 0)
+        if file_format in range(len(self.file_formats)):
+            self.scene_type = self.file_formats[file_format]
+        file_name = "{0}.{1}".format(instance.name, self.scene_type)
         file_path = os.path.join(staging_dir, file_name)
         file_path = file_path.replace('\\', '/')

@@ -90,12 +112,6 @@ class ExtractMultiverseUsdComposition(openpype.api.Extractor):

         with maintained_selection():
             members = instance.data("setMembers")
-            members = cmds.ls(members,
-                              dag=True,
-                              shapes=True,
-                              type="mvUsdCompoundShape",
-                              noIntermediate=True,
-                              long=True)
             self.log.info('Collected object {}'.format(members))

             import multiverse

@@ -119,6 +135,18 @@ class ExtractMultiverseUsdComposition(openpype.api.Extractor):
             time_opts.framePerSecond = fps

             comp_write_opts = multiverse.CompositionWriteOptions()

+            """
+            OP tells MV to write to a staging directory, and then moves the
+            file to its final publish directory. By default, MV writes
+            relative paths, but these paths will break when the referencing
+            file moves. This option forces writes to absolute paths, which is
+            ok within OP because all published assets have static paths, and
+            MV can only reference published assets. When a proper
+            UsdAssetResolver is used, this won't be needed.
+            """
+            comp_write_opts.forceAbsolutePaths = True

             options_discard_keys = {
                 'numTimeSamples',
                 'timeSamplesSpan',

@@ -140,10 +168,10 @@ class ExtractMultiverseUsdComposition(openpype.api.Extractor):
             instance.data["representations"] = []

         representation = {
-            'name': 'usd',
-            'ext': 'usd',
+            'name': self.scene_type,
+            'ext': self.scene_type,
             'files': file_name,
-            "stagingDir": staging_dir
+            'stagingDir': staging_dir
         }
         instance.data["representations"].append(representation)

@@ -7,11 +7,26 @@ from maya import cmds


 class ExtractMultiverseUsdOverride(openpype.api.Extractor):
-    """Extractor for USD Override by Multiverse."""
+    """Extractor for Multiverse USD Override data.
+
+    This will extract settings for a Multiverse Write Override operation:
+    they are visible in the Maya set node created by a Multiverse USD
+    Override instance creator.
+
+    The input data contained in the set is:
+
+    - a single Multiverse Compound node with any number of overrides
+      (typically set in MEOW)
+
+    Upon publish a .usda override file will be written.
+    """

     label = "Extract Multiverse USD Override"
     hosts = ["maya"]
-    families = ["usdOverride"]
+    families = ["mvUsdOverride"]
     scene_type = "usd"
+    # Order of `fileFormat` must match create_multiverse_usd_over.py
+    file_formats = ["usda", "usd"]

     @property
     def options(self):

@@ -58,12 +73,15 @@ class ExtractMultiverseUsdOverride(openpype.api.Extractor):
         }

     def process(self, instance):
-        # Load plugin firstly
+        # Load plugin first
         cmds.loadPlugin("MultiverseForMaya", quiet=True)

         # Define output file path
         staging_dir = self.staging_dir(instance)
-        file_name = "{}.usda".format(instance.name)
+        file_format = instance.data.get("fileFormat", 0)
+        if file_format in range(len(self.file_formats)):
+            self.scene_type = self.file_formats[file_format]
+        file_name = "{0}.{1}".format(instance.name, self.scene_type)
         file_path = os.path.join(staging_dir, file_name)
         file_path = file_path.replace("\\", "/")

@@ -78,7 +96,7 @@ class ExtractMultiverseUsdOverride(openpype.api.Extractor):
             members = instance.data("setMembers")
             members = cmds.ls(members,
                               dag=True,
-                              shapes=True,
+                              shapes=False,
                               type="mvUsdCompoundShape",
                               noIntermediate=True,
                               long=True)

@@ -128,10 +146,10 @@ class ExtractMultiverseUsdOverride(openpype.api.Extractor):
             instance.data["representations"] = []

         representation = {
-            "name": "usd",
-            "ext": "usd",
-            "files": file_name,
-            "stagingDir": staging_dir
+            'name': self.scene_type,
+            'ext': self.scene_type,
+            'files': file_name,
+            'stagingDir': staging_dir
         }
         instance.data["representations"].append(representation)

@@ -16,6 +16,8 @@ class ExtractAlembic(openpype.api.Extractor):
    Positions and normals, uvs, creases are preserved, but nothing more,
    for plain and predictable point caches.

    Plugin can run locally or remotely (on a farm - if instance is marked with
    "farm" it will be skipped in local processing, but processed on farm)
    """

    label = "Extract Pointcache (Alembic)"

@@ -23,8 +25,12 @@ class ExtractAlembic(openpype.api.Extractor):
    families = ["pointcache",
                "model",
                "vrayproxy"]
    targets = ["local", "remote"]

    def process(self, instance):
        if instance.data.get("farm"):
            self.log.debug("Should be processed on farm, skipping.")
            return

        nodes = instance[:]

@@ -92,4 +98,6 @@ class ExtractAlembic(openpype.api.Extractor):
        }
        instance.data["representations"].append(representation)

        instance.context.data["cleanupFullPaths"].append(path)

        self.log.info("Extracted {} to {}".format(instance, dirname))

@@ -0,0 +1,16 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
    <error id="main">
        <title>Errors found</title>
        <description>
## Publish process has errors

At least one plugin failed before this plugin; the job won't be sent to Deadline for processing before all issues are fixed.

### How to repair?

Check all failing plugins (they should be highlighted in red) and fix the issues if possible.
        </description>
    </error>
</root>

@@ -0,0 +1,28 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
    <error id="main">
        <title>Review subsets not unique</title>
        <description>
## Non unique subset name found

Non unique subset names: '{non_unique}'
<detail>
### __Detailed Info__ (optional)

This might happen if you have already published a review subset for this
asset under the legacy name {task}Review. The legacy name limits you to
publishing a single review per workfile. A proper review subset name should
now also contain the variant (such as 'Main' or 'Default'). That would
result in a completely new subset though, so this situation must be handled
manually.
</detail>
### How to repair?

Legacy subsets must be removed from the OpenPype DB; please ask an admin
to do that and provide them the asset and subset names.

        </description>
    </error>
</root>

@@ -30,6 +30,10 @@ class ValidateAnimationContent(pyblish.api.InstancePlugin):

        assert 'out_hierarchy' in instance.data, "Missing `out_hierarchy` data"

        out_sets = [node for node in instance if node.endswith("out_SET")]
        msg = "Couldn't find exactly one out_SET: {0}".format(out_sets)
        assert len(out_sets) == 1, msg

        # All nodes in the `out_hierarchy` must be among the nodes that are
        # in the instance. The nodes in the instance are found from the top
        # group, as such this tests whether all nodes are under that top group.

@@ -0,0 +1,92 @@
import pyblish.api
import openpype.api
import openpype.hosts.maya.api.action

import os

COLOUR_SPACES = ['sRGB', 'linear', 'auto']
MIPMAP_EXTENSIONS = ['tdl']


class ValidateMvLookContents(pyblish.api.InstancePlugin):
    order = openpype.api.ValidateContentsOrder
    families = ['mvLook']
    hosts = ['maya']
    label = 'Validate mvLook Data'
    actions = [openpype.hosts.maya.api.action.SelectInvalidAction]

    # Allow this validation step to be skipped when you just need to
    # get things pushed through.
    optional = True

    # These intents get enforced checks, other ones get warnings.
    enforced_intents = ['-', 'Final']

    def process(self, instance):
        intent = instance.context.data['intent']['value']
        publishMipMap = instance.data["publishMipMap"]
        enforced = True
        if intent in self.enforced_intents:
            self.log.info("This validation will be enforced: '{}'"
                          .format(intent))
        else:
            enforced = False
            self.log.info("This validation will NOT be enforced: '{}'"
                          .format(intent))

        if not instance[:]:
            raise RuntimeError("Instance is empty")

        invalid = set()

        resources = instance.data.get("resources", [])
        for resource in resources:
            files = resource["files"]
            self.log.debug("Resource '{}', files: [{}]".format(resource, files))
            node = resource["node"]
            if len(files) == 0:
                self.log.error("File node '{}' uses no or non-existing "
                               "files".format(node))
                invalid.add(node)
                continue
            for fname in files:
                if not self.valid_file(fname):
                    self.log.error("File node '{}'/'{}' is not valid"
                                   .format(node, fname))
                    invalid.add(node)

                if publishMipMap and not self.is_or_has_mipmap(fname, files):
                    msg = "File node '{}'/'{}' does not have a mipmap".format(
                        node, fname)
                    if enforced:
                        invalid.add(node)
                        self.log.error(msg)
                        raise RuntimeError(msg)
                    else:
                        self.log.warning(msg)

        if invalid:
            raise RuntimeError("'{}' has invalid look "
                               "content".format(instance.name))

    def valid_file(self, fname):
        self.log.debug("Checking validity of '{}'".format(fname))
        if not os.path.exists(fname):
            return False
        if os.path.getsize(fname) == 0:
            return False
        return True

    def is_or_has_mipmap(self, fname, files):
        ext = os.path.splitext(fname)[1][1:]
        if ext in MIPMAP_EXTENSIONS:
            self.log.debug("Is a mipmap '{}'".format(fname))
            return True

        for colour_space in COLOUR_SPACES:
            for mipmap_ext in MIPMAP_EXTENSIONS:
                mipmap_fname = '.'.join([fname, colour_space, mipmap_ext])
                if mipmap_fname in files:
                    self.log.debug("Has a mipmap '{}'".format(fname))
                    return True
        return False

@@ -0,0 +1,36 @@
# -*- coding: utf-8 -*-
import collections
import pyblish.api
import openpype.api
from openpype.pipeline import PublishXmlValidationError


class ValidateReviewSubsetUniqueness(pyblish.api.ContextPlugin):
    """Validates that review subset names are unique."""

    order = openpype.api.ValidateContentsOrder
    hosts = ["maya"]
    families = ["review"]
    label = "Validate Review Subset Unique"

    def process(self, context):
        subset_names = []

        for instance in context:
            self.log.info("instance:: {}".format(instance.data))
            if instance.data.get('publish'):
                subset_names.append(instance.data.get('subset'))

        non_unique = \
            [item
             for item, count in collections.Counter(subset_names).items()
             if count > 1]
        msg = ("Instance subset names {} are not unique. ".format(non_unique) +
               "Ask admin to remove subset from DB for multiple reviews.")
        formatting_data = {
            "non_unique": ",".join(non_unique)
        }

        if non_unique:
            raise PublishXmlValidationError(self, msg,
                                            formatting_data=formatting_data)

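The duplicate detection itself is plain stdlib; a minimal standalone illustration (the subset names are made up):

import collections

subset_names = ["reviewMain", "reviewMain", "reviewWide"]
non_unique = [item for item, count
              in collections.Counter(subset_names).items()
              if count > 1]
print(non_unique)  # ['reviewMain']
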
openpype/hosts/nuke/api/gizmo_menu.py (new file)
@@ -0,0 +1,86 @@
import os
import re
import nuke

from openpype.api import Logger

log = Logger.get_logger(__name__)


class GizmoMenu():
    def __init__(self, title, icon=None):

        self.toolbar = self._create_toolbar_menu(
            title,
            icon=icon
        )

        self._script_actions = []

    def _create_toolbar_menu(self, name, icon=None):
        nuke_node_menu = nuke.menu("Nodes")
        return nuke_node_menu.addMenu(
            name,
            icon=icon
        )

    def _make_menu_path(self, path, icon=None):
        parent = self.toolbar
        for folder in re.split(r"/|\\", path):
            if not folder:
                continue
            existing_menu = parent.findItem(folder)
            if existing_menu:
                parent = existing_menu
            else:
                parent = parent.addMenu(folder, icon=icon)

        return parent

    def build_from_configuration(self, configuration):
        for menu in configuration:
            # Construct parent path else parent is toolbar
            parent = self.toolbar
            gizmo_toolbar_path = menu.get("gizmo_toolbar_path")
            if gizmo_toolbar_path:
                parent = self._make_menu_path(gizmo_toolbar_path)

            for item in menu["sub_gizmo_list"]:
                assert isinstance(item, dict), "Configuration is wrong!"

                if not item.get("title"):
                    continue

                item_type = item.get("sourcetype")

                # add a command item
                if item_type in ("python", "file"):
                    parent.addCommand(
                        item["title"],
                        command=str(item["command"]),
                        icon=item.get("icon"),
                        shortcut=item.get("hotkey")
                    )

                # Special behavior for separators
                elif item_type == "separator":
                    parent.addSeparator()

                # add submenu
                # items should hold a collection of submenu items (dict)
                elif item_type == "menu":
                    # assert "items" in item, "Menu is missing 'items' key"
                    parent.addMenu(
                        item['title'],
                        icon=item.get('icon')
                    )

    def add_gizmo_path(self, gizmo_paths):
        for gizmo_path in gizmo_paths:
            if os.path.isdir(gizmo_path):
                for folder in os.listdir(gizmo_path):
                    if os.path.isdir(os.path.join(gizmo_path, folder)):
                        nuke.pluginAddPath(os.path.join(gizmo_path, folder))
                nuke.pluginAddPath(gizmo_path)
            else:
                log.warning("This path doesn't exist: {}".format(gizmo_path))

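The configuration build_from_configuration() expects can be read off the code above; a hedged sample (titles, commands and paths below are made up):

# Illustrative only - key names come from the code above.
configuration = [
    {
        "gizmo_toolbar_path": "OpenPype/Gizmos",
        "sub_gizmo_list": [
            {"sourcetype": "python",
             "title": "Create Glow",
             "command": "nuke.createNode('Glow')",
             "icon": "",
             "hotkey": ""},
            {"sourcetype": "separator", "title": "sep"},
            {"sourcetype": "menu", "title": "Utilities", "icon": ""},
        ],
    }
]

# toolbar = GizmoMenu(title="Studio Gizmos")
# toolbar.build_from_configuration(configuration)
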
@@ -30,6 +30,8 @@ from openpype.pipeline import (
    legacy_io,
)

from . import gizmo_menu

from .workio import (
    save_file,
    open_file

@@ -373,7 +375,7 @@ def add_write_node_legacy(name, **kwarg):
     Returns:
         node (obj): nuke write node
     """
-    frame_range = kwarg.get("use_range_limit", None)
+    use_range_limit = kwarg.get("use_range_limit", None)

     w = nuke.createNode(
         "Write",

@@ -391,10 +393,10 @@ def add_write_node_legacy(name, **kwarg):
             log.debug(e)
             continue

-    if frame_range:
+    if use_range_limit:
         w["use_limit"].setValue(True)
-        w["first"].setValue(frame_range[0])
-        w["last"].setValue(frame_range[1])
+        w["first"].setValue(kwarg["frame_range"][0])
+        w["last"].setValue(kwarg["frame_range"][1])

     return w

@@ -409,7 +411,7 @@ def add_write_node(name, file_path, knobs, **kwarg):
     Returns:
         node (obj): nuke write node
     """
-    frame_range = kwarg.get("use_range_limit", None)
+    use_range_limit = kwarg.get("use_range_limit", None)

     w = nuke.createNode(
         "Write",

@@ -420,10 +422,10 @@ def add_write_node(name, file_path, knobs, **kwarg):
    # finally add knob overrides
    set_node_knobs_from_settings(w, knobs, **kwarg)

-    if frame_range:
+    if use_range_limit:
         w["use_limit"].setValue(True)
-        w["first"].setValue(frame_range[0])
-        w["last"].setValue(frame_range[1])
+        w["first"].setValue(kwarg["frame_range"][0])
+        w["last"].setValue(kwarg["frame_range"][1])

    return w

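With the fix, callers that enable the range limit are expected to pass an explicit frame_range as well; a hedged usage sketch (node name, path and knob values are made up, and the knob list format follows set_node_knobs_from_settings() as used elsewhere in this PR):

# Illustrative call only.
write_node = add_write_node(
    "WriteRender",
    "renders/shot010.####.exr",
    knobs=[{"type": "text", "name": "channels", "value": "rgb"}],
    use_range_limit=True,
    frame_range=(1001, 1050),
)
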
@@ -2498,6 +2500,70 @@ def recreate_instance(origin_node, avalon_data=None):
    return new_node


def add_scripts_gizmo():

    # load configuration of custom menu
    project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
    platform_name = platform.system().lower()

    for gizmo_settings in project_settings["nuke"]["gizmo"]:
        gizmo_list_definition = gizmo_settings["gizmo_definition"]
        toolbar_name = gizmo_settings["toolbar_menu_name"]
        # gizmo_toolbar_path = gizmo_settings["gizmo_toolbar_path"]
        gizmo_source_dir = gizmo_settings.get(
            "gizmo_source_dir", {}).get(platform_name)
        toolbar_icon_path = gizmo_settings.get(
            "toolbar_icon_path", {}).get(platform_name)

        if not gizmo_source_dir:
            log.debug("Skipping studio gizmo `{}`, "
                      "no gizmo path found.".format(toolbar_name)
                      )
            return

        if not gizmo_list_definition:
            log.debug("Skipping studio gizmo `{}`, "
                      "no definition found.".format(toolbar_name)
                      )
            return

        if toolbar_icon_path:
            try:
                toolbar_icon_path = toolbar_icon_path.format(**os.environ)
            except KeyError as e:
                log.error(
                    "This environment variable doesn't exist: {}".format(e)
                )

        existing_gizmo_path = []
        for source_dir in gizmo_source_dir:
            try:
                resolve_source_dir = source_dir.format(**os.environ)
            except KeyError as e:
                log.error(
                    "This environment variable doesn't exist: {}".format(e)
                )
                continue
            if not os.path.exists(resolve_source_dir):
                log.warning(
                    "The source of gizmo `{}` does not exist".format(
                        resolve_source_dir
                    )
                )
                continue
            existing_gizmo_path.append(resolve_source_dir)

        # run the launcher for Nuke toolbar
        toolbar_menu = gizmo_menu.GizmoMenu(
            title=toolbar_name,
            icon=toolbar_icon_path
        )

        # apply configuration
        toolbar_menu.add_gizmo_path(existing_gizmo_path)
        toolbar_menu.build_from_configuration(gizmo_list_definition)


class NukeDirmap(HostDirmap):
    def __init__(self, host_name, project_settings, sync_module, file_name):
        """

|
|||
|
|
@ -18,7 +18,8 @@ from .lib import (
|
|||
maintained_selection,
|
||||
set_avalon_knob_data,
|
||||
add_publish_knob,
|
||||
get_nuke_imageio_settings
|
||||
get_nuke_imageio_settings,
|
||||
set_node_knobs_from_settings
|
||||
)
|
||||
|
||||
|
||||
|
|
@@ -497,16 +498,7 @@ class ExporterReviewMov(ExporterReview):
             add_tags.append("reformated")

             rf_node = nuke.createNode("Reformat")
-            for kn_conf in reformat_node_config:
-                _type = kn_conf["type"]
-                k_name = str(kn_conf["name"])
-                k_value = kn_conf["value"]
-
-                # to remove unicode as nuke doesn't like it
-                if _type == "string":
-                    k_value = str(kn_conf["value"])
-
-                rf_node[k_name].setValue(k_value)
+            set_node_knobs_from_settings(rf_node, reformat_node_config)

             # connect
             rf_node.setInput(0, self.previous_node)

@@ -27,6 +27,10 @@ class CreateWritePrerender(plugin.AbstractWriteRender):
        # add fpath_template
        write_data["fpath_template"] = self.fpath_template
        write_data["use_range_limit"] = self.use_range_limit
        write_data["frame_range"] = (
            nuke.root()["first_frame"].value(),
            nuke.root()["last_frame"].value()
        )

        if not self.is_legacy():
            return create_write_node(

@@ -15,13 +15,13 @@ from openpype.hosts.nuke.api import (

 class AlembicModelLoader(load.LoaderPlugin):
     """
-    This will load alembic model into script.
+    This will load alembic model or anim into script.
     """

-    families = ["model"]
+    families = ["model", "pointcache", "animation"]
     representations = ["abc"]

-    label = "Load Alembic Model"
+    label = "Load Alembic"
     icon = "cube"
     color = "orange"
     node_color = "0x4ecd91ff"

@@ -1,4 +1,5 @@
 import nuke
+import os

 from openpype.api import Logger
 from openpype.pipeline import install_host

@@ -7,8 +8,10 @@ from openpype.hosts.nuke.api.lib import (
     on_script_load,
     check_inventory_versions,
     WorkfileSettings,
-    dirmap_file_name_filter
+    dirmap_file_name_filter,
+    add_scripts_gizmo
 )
+from openpype.settings import get_project_settings

 log = Logger.get_logger(__name__)

@@ -28,3 +31,34 @@
nuke.addFilenameFilter(dirmap_file_name_filter)

log.info('Automatic syncing of write file knob to script version')


def add_scripts_menu():
    try:
        from scriptsmenu import launchfornuke
    except ImportError:
        log.warning(
            "Skipping studio.menu install, because "
            "'scriptsmenu' module seems unavailable."
        )
        return

    # load configuration of custom menu
    project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
    config = project_settings["nuke"]["scriptsmenu"]["definition"]
    _menu = project_settings["nuke"]["scriptsmenu"]["name"]

    if not config:
        log.warning("Skipping studio menu, no definition found.")
        return

    # run the launcher for the Nuke menu
    studio_menu = launchfornuke.main(title=_menu.title())

    # apply configuration
    studio_menu.build_from_configuration(studio_menu, config)


add_scripts_menu()

add_scripts_gizmo()

@@ -385,7 +385,7 @@ def ls():
        if "objectName" not in item and "members" in item:
            members = item["members"]
            if isinstance(members, list):
-                members = "|".join(members)
+                members = "|".join([str(member) for member in members])
            item["objectName"] = members
    return output

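The str() cast matters because containers may store non-string member ids; a one-line illustration (values made up):

members = ["read1", 42]  # hypothetical mixed member list
print("|".join([str(member) for member in members]))  # "read1|42"
# "|".join(members) would raise TypeError on the int.
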
@@ -73,14 +73,8 @@ class ExtractSequence(pyblish.api.Extractor):

         scene_bg_color = instance.context.data["sceneBgColor"]

-        # --- Fallbacks ----------------------------------------------------
-        # This is required if validations of ranges are ignored.
-        # - all of this code won't change processing if range to render
-        #   match to range of expected output
-
         # Prepare output frames
         output_frame_start = frame_start - handle_start
-        output_frame_end = frame_end + handle_end

         # Change output frame start to 0 if handles cause it's negative number
         if output_frame_start < 0:

@@ -90,32 +84,8 @@ class ExtractSequence(pyblish.api.Extractor):
             ).format(frame_start, handle_start))
             output_frame_start = 0

-        # Check Marks range and output range
-        output_range = output_frame_end - output_frame_start
-        marks_range = mark_out - mark_in
-
-        # Lower Mark Out if mark range is bigger than output
-        # - do not rendered not used frames
-        if output_range < marks_range:
-            new_mark_out = mark_out - (marks_range - output_range)
-            self.log.warning((
-                "Lowering render range to {} frames. Changed Mark Out {} -> {}"
-            ).format(marks_range + 1, mark_out, new_mark_out))
-            # Assign new mark out to variable
-            mark_out = new_mark_out
-
-        # Lower output frame end so representation has right `frameEnd` value
-        elif output_range > marks_range:
-            new_output_frame_end = (
-                output_frame_end - (output_range - marks_range)
-            )
-            self.log.warning((
-                "Lowering representation range to {} frames."
-                " Changed frame end {} -> {}"
-            ).format(output_range + 1, mark_out, new_output_frame_end))
-            output_frame_end = new_output_frame_end
-
-        # -------------------------------------------------------------------
+        # Calculate frame end
+        output_frame_end = output_frame_start + (mark_out - mark_in)

         # Save to staging dir
         output_dir = instance.data.get("stagingDir")

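A quick worked example of the simplified frame math (all numbers made up): with frame_start=1001, handle_start=5, mark_in=0 and mark_out=49, the output range becomes 996-1045, i.e. exactly mark_out - mark_in + 1 frames.

frame_start, handle_start = 1001, 5   # hypothetical values
mark_in, mark_out = 0, 49

output_frame_start = frame_start - handle_start               # 996
output_frame_end = output_frame_start + (mark_out - mark_in)  # 1045
print(output_frame_start, output_frame_end)  # 996 1045
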
@@ -1,15 +1,19 @@
 import os
 import openpype.hosts
+from openpype.lib.applications import Application


-def add_implementation_envs(env, _app):
+def add_implementation_envs(env: dict, _app: Application) -> None:
     """Modify environments to contain all required for implementation."""
     # Set OPENPYPE_UNREAL_PLUGIN required for Unreal implementation
+    ue_plugin = "UE_5.0" if _app.name[:1] == "5" else "UE_4.7"
     unreal_plugin_path = os.path.join(
         os.path.dirname(os.path.abspath(openpype.hosts.__file__)),
-        "unreal", "integration"
+        "unreal", "integration", ue_plugin
     )
-    env["OPENPYPE_UNREAL_PLUGIN"] = unreal_plugin_path
+    if not env.get("OPENPYPE_UNREAL_PLUGIN"):
+        env["OPENPYPE_UNREAL_PLUGIN"] = unreal_plugin_path

     # Set default environments if they are not set via settings
     defaults = {

@@ -25,7 +25,7 @@ class UnrealPrelaunchHook(PreLaunchHook):
     def __init__(self, *args, **kwargs):
         super().__init__(*args, **kwargs)

-        self.signature = "( {} )".format(self.__class__.__name__)
+        self.signature = f"( {self.__class__.__name__} )"

     def _get_work_filename(self):
         # Use last workfile if was found

@@ -71,7 +71,7 @@ class UnrealPrelaunchHook(PreLaunchHook):
             if int(engine_version.split(".")[0]) < 4 and \
                     int(engine_version.split(".")[1]) < 26:
                 raise ApplicationLaunchFailed((
-                    f"{self.signature} Old unsupported version of UE4 "
+                    f"{self.signature} Old unsupported version of UE "
                     f"detected - {engine_version}"))
         except ValueError:
             # there can be string in minor version and in that case

@@ -99,18 +99,19 @@ class UnrealPrelaunchHook(PreLaunchHook):
                 f"character ({unreal_project_name}). Appending 'P'"
             ))
             unreal_project_name = f"P{unreal_project_name}"
+            unreal_project_filename = f'{unreal_project_name}.uproject'

         project_path = Path(os.path.join(workdir, unreal_project_name))

         self.log.info((
-            f"{self.signature} requested UE4 version: "
+            f"{self.signature} requested UE version: "
             f"[ {engine_version} ]"
         ))

         detected = unreal_lib.get_engine_versions(self.launch_context.env)
         detected_str = ', '.join(detected.keys()) or 'none'
         self.log.info((
-            f"{self.signature} detected UE4 versions: "
+            f"{self.signature} detected UE versions: "
             f"[ {detected_str} ]"
         ))
         if not detected:

@@ -123,10 +124,10 @@ class UnrealPrelaunchHook(PreLaunchHook):
             f"detected [ {engine_version} ]"
         ))

-        ue4_path = unreal_lib.get_editor_executable_path(
-            Path(detected[engine_version]))
+        ue_path = unreal_lib.get_editor_executable_path(
+            Path(detected[engine_version]), engine_version)

-        self.launch_context.launch_args = [ue4_path.as_posix()]
+        self.launch_context.launch_args = [ue_path.as_posix()]
         project_path.mkdir(parents=True, exist_ok=True)

         project_file = project_path / unreal_project_filename

@@ -138,6 +139,11 @@ class UnrealPrelaunchHook(PreLaunchHook):
        ))
        # Set "OPENPYPE_UNREAL_PLUGIN" to current process environment for
        # execution of `create_unreal_project`
        if self.launch_context.env.get("OPENPYPE_UNREAL_PLUGIN"):
            self.log.info((
                f"{self.signature} using OpenPype plugin from "
                f"{self.launch_context.env.get('OPENPYPE_UNREAL_PLUGIN')}"
            ))
        env_key = "OPENPYPE_UNREAL_PLUGIN"
        if self.launch_context.env.get(env_key):
            os.environ[env_key] = self.launch_context.env[env_key]

@@ -1,4 +1,4 @@
-# OpenPype Unreal Integration plugin
+# OpenPype Unreal Integration plugin - UE 4.x

 This is a plugin for Unreal Editor that creates a menu for [OpenPype](https://github.com/getavalon) tools to run.

openpype/hosts/unreal/integration/UE_5.0/.gitignore (new file, vendored)
@@ -0,0 +1,35 @@
# Prerequisites
*.d

# Compiled Object files
*.slo
*.lo
*.o
*.obj

# Precompiled Headers
*.gch
*.pch

# Compiled Dynamic libraries
*.so
*.dylib
*.dll

# Fortran module files
*.mod
*.smod

# Compiled Static libraries
*.lai
*.la
*.a
*.lib

# Executables
*.exe
*.out
*.app

/Binaries
/Intermediate

@@ -0,0 +1,28 @@
import unreal

openpype_detected = True
try:
    from openpype.pipeline import install_host
    from openpype.hosts.unreal import api as openpype_host
except ImportError as exc:
    openpype_host = None
    openpype_detected = False
    unreal.log_error("OpenPype: cannot load OpenPype [ {} ]".format(exc))

if openpype_detected:
    install_host(openpype_host)


@unreal.uclass()
class OpenPypeIntegration(unreal.OpenPypePythonBridge):
    @unreal.ufunction(override=True)
    def RunInPython_Popup(self):
        unreal.log_warning("OpenPype: showing tools popup")
        if openpype_detected:
            openpype_host.show_tools_popup()

    @unreal.ufunction(override=True)
    def RunInPython_Dialog(self):
        unreal.log_warning("OpenPype: showing tools dialog")
        if openpype_detected:
            openpype_host.show_tools_dialog()

Some files were not shown because too many files have changed in this diff.