Merge remote-tracking branch 'origin/develop' into feature/maya-unreal-layout

Ondřej Samohel 2022-08-15 18:25:30 +02:00
commit 7aa5ff7068
No known key found for this signature in database
GPG key ID: 02376E18990A97C6
319 changed files with 14799 additions and 4780 deletions

.gitignore (vendored): 3 changes

@ -102,5 +102,8 @@ website/.docusaurus
.poetry/
.python-version
.editorconfig
.pre-commit-config.yaml
mypy.ini
tools/run_eventserver.*

.gitmodules (vendored, new file): 10 changes

@ -0,0 +1,10 @@
[submodule "tools/modules/powershell/BurntToast"]
path = tools/modules/powershell/BurntToast
url = https://github.com/Windos/BurntToast.git
[submodule "tools/modules/powershell/PSWriteColor"]
path = tools/modules/powershell/PSWriteColor
url = https://github.com/EvotecIT/PSWriteColor.git
[submodule "vendor/configs/OpenColorIO-Configs"]
path = vendor/configs/OpenColorIO-Configs
url = https://github.com/imageworks/OpenColorIO-Configs
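Note: after cloning, these submodules have to be initialized before the referenced tools are available, e.g. with `git submodule update --init --recursive`.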

CHANGELOG.md

@ -1,143 +1,139 @@
# Changelog
## [3.12.1-nightly.4](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.13.1-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.12.0...HEAD)
### 📖 Documentation
- Docs: Added minimal permissions for MongoDB [\#3441](https://github.com/pypeclub/OpenPype/pull/3441)
**🆕 New features**
- Maya: Add VDB to Arnold loader [\#3433](https://github.com/pypeclub/OpenPype/pull/3433)
**🚀 Enhancements**
- General: Creator Plugins have access to project [\#3476](https://github.com/pypeclub/OpenPype/pull/3476)
- General: Better arguments order in creator init [\#3475](https://github.com/pypeclub/OpenPype/pull/3475)
- Ftrack: Trigger custom ftrack events on project creation and preparation [\#3465](https://github.com/pypeclub/OpenPype/pull/3465)
- Windows installer: Clean old files and add version subfolder [\#3445](https://github.com/pypeclub/OpenPype/pull/3445)
- Blender: Bugfix - Set fps properly on open [\#3426](https://github.com/pypeclub/OpenPype/pull/3426)
- Hiero: Add custom scripts menu [\#3425](https://github.com/pypeclub/OpenPype/pull/3425)
- Blender: pre pyside install for all platforms [\#3400](https://github.com/pypeclub/OpenPype/pull/3400)
- Maya: Ability to set resolution for playblasts from asset, and override through review instance. [\#3360](https://github.com/pypeclub/OpenPype/pull/3360)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.13.0...HEAD)
**🐛 Bug fixes**
- General: thumbnail extractor fix [\#3474](https://github.com/pypeclub/OpenPype/pull/3474)
- Kitsu: bugfix with sync-service and publish plugins [\#3473](https://github.com/pypeclub/OpenPype/pull/3473)
- Flame: solved problem with multi-selected loading [\#3470](https://github.com/pypeclub/OpenPype/pull/3470)
- General: Fix query function in update logic [\#3468](https://github.com/pypeclub/OpenPype/pull/3468)
- Resolve: removed few bugs [\#3464](https://github.com/pypeclub/OpenPype/pull/3464)
- General: Delete old versions is safer when ftrack is disabled [\#3462](https://github.com/pypeclub/OpenPype/pull/3462)
- Nuke: fixing metadata slate TC difference [\#3455](https://github.com/pypeclub/OpenPype/pull/3455)
- Nuke: prerender reviewable fails [\#3450](https://github.com/pypeclub/OpenPype/pull/3450)
- Maya: fix hashing in Python 3 for tile rendering [\#3447](https://github.com/pypeclub/OpenPype/pull/3447)
- LogViewer: Escape html characters in log message [\#3443](https://github.com/pypeclub/OpenPype/pull/3443)
- Nuke: Slate frame is integrated [\#3427](https://github.com/pypeclub/OpenPype/pull/3427)
- Maya: Camera extra data - additional fix for \#3304 [\#3386](https://github.com/pypeclub/OpenPype/pull/3386)
- Maya: Handle excluding `model` family from frame range validator. [\#3370](https://github.com/pypeclub/OpenPype/pull/3370)
- General: Hero version representations have full context [\#3638](https://github.com/pypeclub/OpenPype/pull/3638)
- Maya: FBX support for update in reference loader [\#3631](https://github.com/pypeclub/OpenPype/pull/3631)
**🔀 Refactored code**
- Maya: Merge animation + pointcache extractor logic [\#3461](https://github.com/pypeclub/OpenPype/pull/3461)
- Maya: Re-use `maintained\_time` from lib [\#3460](https://github.com/pypeclub/OpenPype/pull/3460)
- General: Use query functions in global plugins [\#3459](https://github.com/pypeclub/OpenPype/pull/3459)
- Clockify: Use query functions in clockify actions [\#3458](https://github.com/pypeclub/OpenPype/pull/3458)
- General: Use query functions in rest api calls [\#3457](https://github.com/pypeclub/OpenPype/pull/3457)
- General: Use query functions in load utils [\#3446](https://github.com/pypeclub/OpenPype/pull/3446)
- General: Use Anatomy after move to pipeline [\#3436](https://github.com/pypeclub/OpenPype/pull/3436)
- General: Anatomy moved to pipeline [\#3435](https://github.com/pypeclub/OpenPype/pull/3435)
- Fusion: Use client query functions [\#3380](https://github.com/pypeclub/OpenPype/pull/3380)
- Resolve: Use client query functions [\#3379](https://github.com/pypeclub/OpenPype/pull/3379)
- TimersManager: Plugins are in timers manager module [\#3639](https://github.com/pypeclub/OpenPype/pull/3639)
- General: Move workfiles functions into pipeline [\#3637](https://github.com/pypeclub/OpenPype/pull/3637)
**Merged pull requests:**
- Kitsu|Fix: Movie project type fails & first loop children names [\#3636](https://github.com/pypeclub/OpenPype/pull/3636)
## [3.13.0](https://github.com/pypeclub/OpenPype/tree/3.13.0) (2022-08-09)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.13.0-nightly.1...3.13.0)
**🆕 New features**
- Support for multiple installed versions - 3.13 [\#3605](https://github.com/pypeclub/OpenPype/pull/3605)
**🚀 Enhancements**
- Editorial: Mix audio use side file for ffmpeg filters [\#3630](https://github.com/pypeclub/OpenPype/pull/3630)
- Ftrack: Comment template can contain optional keys [\#3615](https://github.com/pypeclub/OpenPype/pull/3615)
- Ftrack: Add more metadata to ftrack components [\#3612](https://github.com/pypeclub/OpenPype/pull/3612)
- General: Add context to pyblish context [\#3594](https://github.com/pypeclub/OpenPype/pull/3594)
- Kitsu: Shot&Sequence name with prefix over appends [\#3593](https://github.com/pypeclub/OpenPype/pull/3593)
- Photoshop: implemented {layer} placeholder in subset template [\#3591](https://github.com/pypeclub/OpenPype/pull/3591)
- General: Python module appdirs from git [\#3589](https://github.com/pypeclub/OpenPype/pull/3589)
- Ftrack: Update ftrack api to 2.3.3 [\#3588](https://github.com/pypeclub/OpenPype/pull/3588)
- General: New Integrator small fixes [\#3583](https://github.com/pypeclub/OpenPype/pull/3583)
**🐛 Bug fixes**
- Maya: fix aov separator in Redshift [\#3625](https://github.com/pypeclub/OpenPype/pull/3625)
- Fix for multi-version build on Mac [\#3622](https://github.com/pypeclub/OpenPype/pull/3622)
- Ftrack: Sync hierarchical attributes can handle new created entities [\#3621](https://github.com/pypeclub/OpenPype/pull/3621)
- General: Extract review aspect ratio scale is calculated by ffmpeg [\#3620](https://github.com/pypeclub/OpenPype/pull/3620)
- Maya: Fix types of default settings [\#3617](https://github.com/pypeclub/OpenPype/pull/3617)
- Integrator: Don't force to have dot before frame [\#3611](https://github.com/pypeclub/OpenPype/pull/3611)
- AfterEffects: refactored integrate doesn't work for multi frame publishes [\#3610](https://github.com/pypeclub/OpenPype/pull/3610)
- Maya look data contents fails with custom attribute on group [\#3607](https://github.com/pypeclub/OpenPype/pull/3607)
- TrayPublisher: Fix wrong conflict merge [\#3600](https://github.com/pypeclub/OpenPype/pull/3600)
- Bugfix: Add OCIO as submodule to prepare for handling `maketx` color space conversion. [\#3590](https://github.com/pypeclub/OpenPype/pull/3590)
- Fix general settings environment variables resolution [\#3587](https://github.com/pypeclub/OpenPype/pull/3587)
- Editorial publishing workflow improvements [\#3580](https://github.com/pypeclub/OpenPype/pull/3580)
- General: Update imports in start script [\#3579](https://github.com/pypeclub/OpenPype/pull/3579)
- Nuke: render family integration consistency [\#3576](https://github.com/pypeclub/OpenPype/pull/3576)
- Ftrack: Handle missing published path in integrator [\#3570](https://github.com/pypeclub/OpenPype/pull/3570)
- Maya: fix Review image plane attribute [\#3569](https://github.com/pypeclub/OpenPype/pull/3569)
- Nuke: publish existing frames with slate with correct range [\#3555](https://github.com/pypeclub/OpenPype/pull/3555)
**🔀 Refactored code**
- General: Plugin settings handled by plugins [\#3623](https://github.com/pypeclub/OpenPype/pull/3623)
- General: Naive implementation of document create, update, delete [\#3601](https://github.com/pypeclub/OpenPype/pull/3601)
- General: Use query functions in general code [\#3596](https://github.com/pypeclub/OpenPype/pull/3596)
- General: Separate extraction of template data into more functions [\#3574](https://github.com/pypeclub/OpenPype/pull/3574)
- General: Lib cleanup [\#3571](https://github.com/pypeclub/OpenPype/pull/3571)
**Merged pull requests:**
- Webpublisher: timeout for PS studio processing [\#3619](https://github.com/pypeclub/OpenPype/pull/3619)
- Core: translated validate\_containers.py into New publisher style [\#3614](https://github.com/pypeclub/OpenPype/pull/3614)
- Enable write color sets on animation publish automatically [\#3582](https://github.com/pypeclub/OpenPype/pull/3582)
## [3.12.2](https://github.com/pypeclub/OpenPype/tree/3.12.2) (2022-07-27)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.2-nightly.4...3.12.2)
### 📖 Documentation
- Update website with more studios [\#3554](https://github.com/pypeclub/OpenPype/pull/3554)
- Documentation: Update publishing dev docs [\#3549](https://github.com/pypeclub/OpenPype/pull/3549)
**🚀 Enhancements**
- General: Global thumbnail extractor is ready for more cases [\#3561](https://github.com/pypeclub/OpenPype/pull/3561)
- Maya: add additional validators to Settings [\#3540](https://github.com/pypeclub/OpenPype/pull/3540)
- General: Interactive console in cli [\#3526](https://github.com/pypeclub/OpenPype/pull/3526)
- Ftrack: Automatic daily review session creation can define trigger hour [\#3516](https://github.com/pypeclub/OpenPype/pull/3516)
**🐛 Bug fixes**
- Maya: Fix animated attributes \(ie. overscan\) on loaded cameras breaking review publishing. [\#3562](https://github.com/pypeclub/OpenPype/pull/3562)
- NewPublisher: Python 2 compatible html escape [\#3559](https://github.com/pypeclub/OpenPype/pull/3559)
- Remove invalid submodules from `/vendor` [\#3557](https://github.com/pypeclub/OpenPype/pull/3557)
- General: Remove hosts filter on integrator plugins [\#3556](https://github.com/pypeclub/OpenPype/pull/3556)
- Settings: Clean default values of environments [\#3550](https://github.com/pypeclub/OpenPype/pull/3550)
- Module interfaces: Fix import error [\#3547](https://github.com/pypeclub/OpenPype/pull/3547)
- Workfiles tool: Show of tool and its flags [\#3539](https://github.com/pypeclub/OpenPype/pull/3539)
- General: Create workfile documents works again [\#3538](https://github.com/pypeclub/OpenPype/pull/3538)
- Additional fixes for powershell scripts [\#3525](https://github.com/pypeclub/OpenPype/pull/3525)
- Maya: Added wrapper around cmds.setAttr [\#3523](https://github.com/pypeclub/OpenPype/pull/3523)
- Nuke: double slate [\#3521](https://github.com/pypeclub/OpenPype/pull/3521)
- General: Fix hash of centos oiio archive [\#3519](https://github.com/pypeclub/OpenPype/pull/3519)
- Maya: Renderman display output fix [\#3514](https://github.com/pypeclub/OpenPype/pull/3514)
- TrayPublisher: Simple creation enhancements and fixes [\#3513](https://github.com/pypeclub/OpenPype/pull/3513)
**🔀 Refactored code**
- General: Use query functions in integrator [\#3563](https://github.com/pypeclub/OpenPype/pull/3563)
- General: Mongo core connection moved to client [\#3531](https://github.com/pypeclub/OpenPype/pull/3531)
- Refactor Integrate Asset [\#3530](https://github.com/pypeclub/OpenPype/pull/3530)
- General: Client docstrings cleanup [\#3529](https://github.com/pypeclub/OpenPype/pull/3529)
- General: Move load related functions into pipeline [\#3527](https://github.com/pypeclub/OpenPype/pull/3527)
- General: Get current context document functions [\#3522](https://github.com/pypeclub/OpenPype/pull/3522)
**Merged pull requests:**
- Maya: fix active pane loss [\#3566](https://github.com/pypeclub/OpenPype/pull/3566)
## [3.12.1](https://github.com/pypeclub/OpenPype/tree/3.12.1) (2022-07-13)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.1-nightly.6...3.12.1)
## [3.12.0](https://github.com/pypeclub/OpenPype/tree/3.12.0) (2022-06-28)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.0-nightly.3...3.12.0)
### 📖 Documentation
- Fix typo in documentation: pyenv on mac [\#3417](https://github.com/pypeclub/OpenPype/pull/3417)
- Linux: update OIIO package [\#3401](https://github.com/pypeclub/OpenPype/pull/3401)
**🚀 Enhancements**
- Webserver: Added CORS middleware [\#3422](https://github.com/pypeclub/OpenPype/pull/3422)
- Attribute Defs UI: Files widget show what is allowed to drop in [\#3411](https://github.com/pypeclub/OpenPype/pull/3411)
- General: Add ability to change user value for templates [\#3366](https://github.com/pypeclub/OpenPype/pull/3366)
- Hosts: More options for in-host callbacks [\#3357](https://github.com/pypeclub/OpenPype/pull/3357)
- Multiverse: expose some settings to GUI [\#3350](https://github.com/pypeclub/OpenPype/pull/3350)
**🐛 Bug fixes**
- NewPublisher: Fix subset name change on change of creator plugin [\#3420](https://github.com/pypeclub/OpenPype/pull/3420)
- Bug: fix invalid avalon import [\#3418](https://github.com/pypeclub/OpenPype/pull/3418)
- Nuke: Fix keyword argument in query function [\#3414](https://github.com/pypeclub/OpenPype/pull/3414)
- Houdini: fix loading and updating vbd/bgeo sequences [\#3408](https://github.com/pypeclub/OpenPype/pull/3408)
- Nuke: Collect representation files based on Write [\#3407](https://github.com/pypeclub/OpenPype/pull/3407)
- General: Filter representations before integration start [\#3398](https://github.com/pypeclub/OpenPype/pull/3398)
- Maya: look collector typo [\#3392](https://github.com/pypeclub/OpenPype/pull/3392)
- TVPaint: Make sure exit code is set to not None [\#3382](https://github.com/pypeclub/OpenPype/pull/3382)
- Maya: vray device aspect ratio fix [\#3381](https://github.com/pypeclub/OpenPype/pull/3381)
- Flame: bunch of publishing issues [\#3377](https://github.com/pypeclub/OpenPype/pull/3377)
- Harmony: added unc path to zifile command in Harmony [\#3372](https://github.com/pypeclub/OpenPype/pull/3372)
- Standalone: settings improvements [\#3355](https://github.com/pypeclub/OpenPype/pull/3355)
**🔀 Refactored code**
- Unreal: Use client query functions [\#3421](https://github.com/pypeclub/OpenPype/pull/3421)
- General: Move editorial lib to pipeline [\#3419](https://github.com/pypeclub/OpenPype/pull/3419)
- Kitsu: renaming to plural func sync\_all\_projects [\#3397](https://github.com/pypeclub/OpenPype/pull/3397)
- Houdini: Use client query functions [\#3395](https://github.com/pypeclub/OpenPype/pull/3395)
- Hiero: Use client query functions [\#3393](https://github.com/pypeclub/OpenPype/pull/3393)
- Nuke: Use client query functions [\#3391](https://github.com/pypeclub/OpenPype/pull/3391)
- Maya: Use client query functions [\#3385](https://github.com/pypeclub/OpenPype/pull/3385)
- Harmony: Use client query functions [\#3378](https://github.com/pypeclub/OpenPype/pull/3378)
- Celaction: Use client query functions [\#3376](https://github.com/pypeclub/OpenPype/pull/3376)
- Photoshop: Use client query functions [\#3375](https://github.com/pypeclub/OpenPype/pull/3375)
- AfterEffects: Use client query functions [\#3374](https://github.com/pypeclub/OpenPype/pull/3374)
**Merged pull requests:**
- Sync Queue: Added far future value for null values for dates [\#3371](https://github.com/pypeclub/OpenPype/pull/3371)
- Maya - added support for single frame playblast review [\#3369](https://github.com/pypeclub/OpenPype/pull/3369)
## [3.11.1](https://github.com/pypeclub/OpenPype/tree/3.11.1) (2022-06-20)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.11.1-nightly.1...3.11.1)
**🆕 New features**
- Flame: custom export temp folder [\#3346](https://github.com/pypeclub/OpenPype/pull/3346)
- Nuke: removing third-party plugins [\#3344](https://github.com/pypeclub/OpenPype/pull/3344)
**🚀 Enhancements**
- Pyblish Pype: Hiding/Close issues [\#3367](https://github.com/pypeclub/OpenPype/pull/3367)
- Ftrack: Removed requirement of pypeclub role from default settings [\#3354](https://github.com/pypeclub/OpenPype/pull/3354)
- Kitsu: Prevent crash on missing frames information [\#3352](https://github.com/pypeclub/OpenPype/pull/3352)
**🐛 Bug fixes**
- Nuke: bake streams with slate on farm [\#3368](https://github.com/pypeclub/OpenPype/pull/3368)
- Harmony: audio validator has wrong logic [\#3364](https://github.com/pypeclub/OpenPype/pull/3364)
- Nuke: Fix missing variable in extract thumbnail [\#3363](https://github.com/pypeclub/OpenPype/pull/3363)
- Nuke: Fix precollect writes [\#3361](https://github.com/pypeclub/OpenPype/pull/3361)
- AE- fix validate\_scene\_settings and renderLocal [\#3358](https://github.com/pypeclub/OpenPype/pull/3358)
- deadline: fixing misidentification of reviewables [\#3356](https://github.com/pypeclub/OpenPype/pull/3356)
- General: Create only one thumbnail per instance [\#3351](https://github.com/pypeclub/OpenPype/pull/3351)
- nuke: adding extract thumbnail settings 3.10 [\#3347](https://github.com/pypeclub/OpenPype/pull/3347)
- General: Fix last version function [\#3345](https://github.com/pypeclub/OpenPype/pull/3345)
## [3.11.0](https://github.com/pypeclub/OpenPype/tree/3.11.0) (2022-06-17)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.11.0-nightly.4...3.11.0)
**🐛 Bug fixes**
- General: Handle empty source key on instance [\#3342](https://github.com/pypeclub/OpenPype/pull/3342)
## [3.10.0](https://github.com/pypeclub/OpenPype/tree/3.10.0) (2022-05-26)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.10.0-nightly.6...3.10.0)

igniter/bootstrap_repos.py

@ -122,7 +122,7 @@ class OpenPypeVersion(semver.VersionInfo):
if self.staging:
if kwargs.get("build"):
if "staging" not in kwargs.get("build"):
kwargs["build"] = "{}-staging".format(kwargs.get("build"))
kwargs["build"] = f"{kwargs.get('build')}-staging"
else:
kwargs["build"] = "staging"
@ -136,8 +136,7 @@ class OpenPypeVersion(semver.VersionInfo):
return bool(result and self.staging == other.staging)
def __repr__(self):
return "<{}: {} - path={}>".format(
self.__class__.__name__, str(self), self.path)
return f"<{self.__class__.__name__}: {str(self)} - path={self.path}>"
def __lt__(self, other: OpenPypeVersion):
result = super().__lt__(other)
@ -232,10 +231,7 @@ class OpenPypeVersion(semver.VersionInfo):
return openpype_version
def __hash__(self):
if self.path:
return hash(self.path)
else:
return hash(str(self))
return hash(self.path) if self.path else hash(str(self))
@staticmethod
def is_version_in_dir(
@ -384,7 +380,8 @@ class OpenPypeVersion(semver.VersionInfo):
@classmethod
def get_local_versions(
cls, production: bool = None, staging: bool = None
cls, production: bool = None,
staging: bool = None, compatible_with: OpenPypeVersion = None
) -> List:
"""Get all versions available on this machine.
@ -394,6 +391,8 @@ class OpenPypeVersion(semver.VersionInfo):
Args:
production (bool): Return production versions.
staging (bool): Return staging versions.
compatible_with (OpenPypeVersion): Return only those compatible
with specified version.
"""
# Return all local versions if arguments are set to None
if production is None and staging is None:
@ -410,10 +409,19 @@ class OpenPypeVersion(semver.VersionInfo):
if not production and not staging:
return []
# DEPRECATED: backwards compatible way to look for versions in root
dir_to_search = Path(user_data_dir("openpype", "pypeclub"))
versions = OpenPypeVersion.get_versions_from_directory(
dir_to_search
dir_to_search, compatible_with=compatible_with
)
if compatible_with:
dir_to_search = Path(
user_data_dir("openpype", "pypeclub")) / f"{compatible_with.major}.{compatible_with.minor}" # noqa
versions += OpenPypeVersion.get_versions_from_directory(
dir_to_search, compatible_with=compatible_with
)
filtered_versions = []
for version in versions:
if version.is_staging():
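The block above makes `get_local_versions` scan two places: the deprecated data-dir root and, when `compatible_with` is given, a new `major.minor` subfolder. A minimal sketch of that resolution order (helper name hypothetical):

```python
from pathlib import Path

def candidate_version_dirs(data_dir: Path, compatible_with=None):
    # DEPRECATED: the root of the data dir is still scanned for versions
    dirs = [data_dir]
    # new layout: versions live in a "<major>.<minor>" subfolder
    if compatible_with is not None:
        dirs.append(data_dir / f"{compatible_with.major}.{compatible_with.minor}")
    return dirs
```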
@ -425,7 +433,8 @@ class OpenPypeVersion(semver.VersionInfo):
@classmethod
def get_remote_versions(
cls, production: bool = None, staging: bool = None
cls, production: bool = None,
staging: bool = None, compatible_with: OpenPypeVersion = None
) -> List:
"""Get all versions available in OpenPype Path.
@ -435,6 +444,8 @@ class OpenPypeVersion(semver.VersionInfo):
Args:
production (bool): Return production versions.
staging (bool): Return staging versions.
compatible_with (OpenPypeVersion): Return only those compatible
with specified version.
"""
# Return all local versions if arguments are set to None
if production is None and staging is None:
@ -468,7 +479,14 @@ class OpenPypeVersion(semver.VersionInfo):
if not dir_to_search:
return []
versions = cls.get_versions_from_directory(dir_to_search)
# DEPRECATED: look for version in root directory
versions = cls.get_versions_from_directory(
dir_to_search, compatible_with=compatible_with)
if compatible_with:
dir_to_search = dir_to_search / f"{compatible_with.major}.{compatible_with.minor}" # noqa
versions += cls.get_versions_from_directory(
dir_to_search, compatible_with=compatible_with)
filtered_versions = []
for version in versions:
if version.is_staging():
@ -479,11 +497,15 @@ class OpenPypeVersion(semver.VersionInfo):
return list(sorted(set(filtered_versions)))
@staticmethod
def get_versions_from_directory(openpype_dir: Path) -> List:
def get_versions_from_directory(
openpype_dir: Path,
compatible_with: OpenPypeVersion = None) -> List:
"""Get all detected OpenPype versions in directory.
Args:
openpype_dir (Path): Directory to scan.
compatible_with (OpenPypeVersion): Return only versions compatible
with build version specified as OpenPypeVersion.
Returns:
list of OpenPypeVersion
@ -492,10 +514,10 @@ class OpenPypeVersion(semver.VersionInfo):
ValueError: if invalid path is specified.
"""
if not openpype_dir.exists() and not openpype_dir.is_dir():
raise ValueError("specified directory is invalid")
_openpype_versions = []
if not openpype_dir.exists() and not openpype_dir.is_dir():
return _openpype_versions
# iterate over directory in first level and find all that might
# contain OpenPype.
for item in openpype_dir.iterdir():
@ -518,6 +540,10 @@ class OpenPypeVersion(semver.VersionInfo):
)[0]:
continue
if compatible_with and not detected_version.is_compatible(
compatible_with):
continue
detected_version.path = item
_openpype_versions.append(detected_version)
@ -549,8 +575,9 @@ class OpenPypeVersion(semver.VersionInfo):
def get_latest_version(
staging: bool = False,
local: bool = None,
remote: bool = None
) -> OpenPypeVersion:
remote: bool = None,
compatible_with: OpenPypeVersion = None
) -> Union[OpenPypeVersion, None]:
"""Get latest available version.
The version does not contain information about path and source.
@ -568,6 +595,9 @@ class OpenPypeVersion(semver.VersionInfo):
staging (bool, optional): List staging versions if True.
local (bool, optional): List local versions if True.
remote (bool, optional): List remote versions if True.
compatible_with (OpenPypeVersion, optional): Return only versions
compatible with the specified version.
"""
if local is None and remote is None:
local = True
@ -598,7 +628,12 @@ class OpenPypeVersion(semver.VersionInfo):
return None
all_versions.sort()
return all_versions[-1]
latest_version: OpenPypeVersion
latest_version = all_versions[-1]
if compatible_with and not latest_version.is_compatible(
compatible_with):
return None
return latest_version
@classmethod
def get_expected_studio_version(cls, staging=False, global_settings=None):
@ -621,6 +656,21 @@ class OpenPypeVersion(semver.VersionInfo):
return None
return OpenPypeVersion(version=result)
def is_compatible(self, version: OpenPypeVersion):
"""Test build compatibility.
This will simply compare major and minor versions (ignoring patch
and the rest).
Args:
version (OpenPypeVersion): Version to check compatibility with.
Returns:
bool: True if the version is compatible.
"""
return self.major == version.major and self.minor == version.minor
class BootstrapRepos:
"""Class for bootstrapping local OpenPype installation.
@ -741,8 +791,9 @@ class BootstrapRepos:
return
# create destination directory
if not self.data_dir.exists():
self.data_dir.mkdir(parents=True)
destination = self.data_dir / f"{installed_version.major}.{installed_version.minor}" # noqa
if not destination.exists():
destination.mkdir(parents=True)
# create zip inside temporary directory.
with tempfile.TemporaryDirectory() as temp_dir:
@ -770,7 +821,9 @@ class BootstrapRepos:
Path to moved zip on success.
"""
destination = self.data_dir / zip_file.name
version = OpenPypeVersion.version_in_str(zip_file.name)
destination_dir = self.data_dir / f"{version.major}.{version.minor}"
destination = destination_dir / zip_file.name
if destination.exists():
self._print(
@ -782,7 +835,7 @@ class BootstrapRepos:
self._print(str(e), LOG_ERROR, exc_info=True)
return None
try:
shutil.move(zip_file.as_posix(), self.data_dir.as_posix())
shutil.move(zip_file.as_posix(), destination_dir.as_posix())
except shutil.Error as e:
self._print(str(e), LOG_ERROR, exc_info=True)
return None
@ -995,6 +1048,16 @@ class BootstrapRepos:
@staticmethod
def _validate_dir(path: Path) -> tuple:
"""Validate checksums in a given path.
Args:
path (Path): path to folder to validate.
Returns:
tuple(bool, str): returns status and reason as a bool
and str in a tuple.
"""
checksums_file = Path(path / "checksums")
if not checksums_file.exists():
# FIXME: This should be set to False sometimes in the future
@ -1076,7 +1139,20 @@ class BootstrapRepos:
sys.path.insert(0, directory.as_posix())
@staticmethod
def find_openpype_version(version, staging):
def find_openpype_version(
version: Union[str, OpenPypeVersion],
staging: bool,
compatible_with: OpenPypeVersion = None
) -> Union[OpenPypeVersion, None]:
"""Find location of specified OpenPype version.
Args:
version (Union[str, OpenPypeVersion]): Version to find.
staging (bool): Filter staging versions.
compatible_with (OpenPypeVersion, optional): Find only
versions compatible with specified one.
"""
if isinstance(version, str):
version = OpenPypeVersion(version=version)
@ -1085,7 +1161,8 @@ class BootstrapRepos:
return installed_version
local_versions = OpenPypeVersion.get_local_versions(
staging=staging, production=not staging
staging=staging, production=not staging,
compatible_with=compatible_with
)
zip_version = None
for local_version in local_versions:
@ -1099,7 +1176,8 @@ class BootstrapRepos:
return zip_version
remote_versions = OpenPypeVersion.get_remote_versions(
staging=staging, production=not staging
staging=staging, production=not staging,
compatible_with=compatible_with
)
for remote_version in remote_versions:
if remote_version == version:
@ -1107,13 +1185,14 @@ class BootstrapRepos:
return None
@staticmethod
def find_latest_openpype_version(staging):
def find_latest_openpype_version(
staging, compatible_with: OpenPypeVersion = None):
installed_version = OpenPypeVersion.get_installed_version()
local_versions = OpenPypeVersion.get_local_versions(
staging=staging
staging=staging, compatible_with=compatible_with
)
remote_versions = OpenPypeVersion.get_remote_versions(
staging=staging
staging=staging, compatible_with=compatible_with
)
all_versions = local_versions + remote_versions
if not staging:
@ -1138,7 +1217,9 @@ class BootstrapRepos:
self,
openpype_path: Union[Path, str] = None,
staging: bool = False,
include_zips: bool = False) -> Union[List[OpenPypeVersion], None]:
include_zips: bool = False,
compatible_with: OpenPypeVersion = None
) -> Union[List[OpenPypeVersion], None]:
"""Get ordered dict of detected OpenPype version.
Resolution order for OpenPype is following:
@ -1154,6 +1235,8 @@ class BootstrapRepos:
otherwise.
include_zips (bool, optional): If set True it will try to find
OpenPype in zip files in given directory.
compatible_with (OpenPypeVersion, optional): Find only those
versions compatible with the one specified.
Returns:
Union[List[OpenPypeVersion], None]: List of detected OpenPype versions.
@ -1172,30 +1255,56 @@ class BootstrapRepos:
("Finding OpenPype in non-filesystem locations is"
" not implemented yet."))
dir_to_search = self.data_dir
user_versions = self.get_openpype_versions(self.data_dir, staging)
# if we have openpype_path specified, search only there.
version_dir = ""
if compatible_with:
version_dir = f"{compatible_with.major}.{compatible_with.minor}"
# if checks below for OPENPYPE_PATH and registry fail, use data_dir
# DEPRECATED: lookup in root of this folder is deprecated in favour
# of major.minor sub-folders.
dirs_to_search = [
self.data_dir
]
if compatible_with:
dirs_to_search.append(self.data_dir / version_dir)
if openpype_path:
dir_to_search = openpype_path
dirs_to_search = [openpype_path]
if compatible_with:
dirs_to_search.append(openpype_path / version_dir)
else:
if os.getenv("OPENPYPE_PATH"):
if Path(os.getenv("OPENPYPE_PATH")).exists():
dir_to_search = Path(os.getenv("OPENPYPE_PATH"))
# first try OPENPYPE_PATH and if that is not available,
# try registry.
if os.getenv("OPENPYPE_PATH") \
and Path(os.getenv("OPENPYPE_PATH")).exists():
dirs_to_search = [Path(os.getenv("OPENPYPE_PATH"))]
if compatible_with:
dirs_to_search.append(
Path(os.getenv("OPENPYPE_PATH")) / version_dir)
else:
try:
registry_dir = Path(
str(self.registry.get_item("openPypePath")))
if registry_dir.exists():
dir_to_search = registry_dir
dirs_to_search = [registry_dir]
if compatible_with:
dirs_to_search.append(registry_dir / version_dir)
except ValueError:
# nothing found in registry, we'll use data dir
pass
openpype_versions = self.get_openpype_versions(dir_to_search, staging)
openpype_versions += user_versions
openpype_versions = []
for dir_to_search in dirs_to_search:
try:
openpype_versions += self.get_openpype_versions(
dir_to_search, staging, compatible_with=compatible_with)
except ValueError:
# location is invalid, skip it
pass
# remove zip file version if needed.
if not include_zips:
openpype_versions = [
v for v in openpype_versions if v.path.suffix != ".zip"
@ -1308,9 +1417,8 @@ class BootstrapRepos:
raise ValueError(
f"version {version} is not associated with any file")
destination = self.data_dir / version.path.stem
if destination.exists():
assert destination.is_dir()
destination = self.data_dir / f"{version.major}.{version.minor}" / version.path.stem # noqa
if destination.exists() and destination.is_dir():
try:
shutil.rmtree(destination)
except OSError as e:
@ -1379,7 +1487,7 @@ class BootstrapRepos:
else:
dir_name = openpype_version.path.stem
destination = self.data_dir / dir_name
destination = self.data_dir / f"{openpype_version.major}.{openpype_version.minor}" / dir_name # noqa
# test if destination directory already exist, if so lets delete it.
if destination.exists() and force:
@ -1557,14 +1665,18 @@ class BootstrapRepos:
return False
return True
def get_openpype_versions(self,
openpype_dir: Path,
staging: bool = False) -> list:
def get_openpype_versions(
self,
openpype_dir: Path,
staging: bool = False,
compatible_with: OpenPypeVersion = None) -> list:
"""Get all detected OpenPype versions in directory.
Args:
openpype_dir (Path): Directory to scan.
staging (bool, optional): Find staging versions if True.
compatible_with (OpenPypeVersion, optional): Get only versions
compatible with the one specified.
Returns:
list of OpenPypeVersion
@ -1574,7 +1686,7 @@ class BootstrapRepos:
"""
if not openpype_dir.exists() and not openpype_dir.is_dir():
raise ValueError("specified directory is invalid")
raise ValueError(f"specified directory {openpype_dir} is invalid")
_openpype_versions = []
# iterate over directory in first level and find all that might
@ -1599,6 +1711,10 @@ class BootstrapRepos:
):
continue
if compatible_with and \
not detected_version.is_compatible(compatible_with):
continue
detected_version.path = item
if staging and detected_version.is_staging():
_openpype_versions.append(detected_version)

igniter/tools.py

@ -21,6 +21,11 @@ class OpenPypeVersionNotFound(Exception):
pass
class OpenPypeVersionIncompatible(Exception):
"""OpenPype version is not compatible with the installed one (build)."""
pass
def should_add_certificate_path_to_mongo_url(mongo_url):
"""Check if should add ca certificate to mongo url.

openpype/api.py

@ -9,6 +9,7 @@ from .settings import (
)
from .lib import (
PypeLogger,
Logger,
Anatomy,
config,
execute,
@ -58,8 +59,6 @@ from .action import (
RepairContextAction
)
# for backward compatibility with Pype 2
Logger = PypeLogger
__all__ = [
"get_system_settings",

openpype/cli.py

@ -2,7 +2,7 @@
"""Package for handling pype command line arguments."""
import os
import sys
import code
import click
# import sys
@ -424,3 +424,45 @@ def pack_project(project, dirpath):
def unpack_project(zipfile, root):
"""Create a package of project with all files and database dump."""
PypeCommands().unpack_project(zipfile, root)
@main.command()
def interactive():
"""Interative (Python like) console.
Helpfull command not only for development to directly work with python
interpreter.
Warning:
Executable 'openpype_gui' on windows won't work.
"""
from openpype.version import __version__
banner = "OpenPype {}\nPython {} on {}".format(
__version__, sys.version, sys.platform
)
code.interact(banner)
@main.command()
@click.option("--build", help="Print only build version",
is_flag=True, default=False)
def version(build):
"""Print OpenPype version."""
from openpype.version import __version__
from igniter.bootstrap_repos import BootstrapRepos, OpenPypeVersion
from pathlib import Path
import os
if getattr(sys, 'frozen', False):
local_version = BootstrapRepos.get_version(
Path(os.getenv("OPENPYPE_ROOT")))
else:
local_version = OpenPypeVersion.get_installed_version_str()
if build:
print(local_version)
return
print(f"{__version__} (booted: {local_version})")

openpype/client/__init__.py

@ -1,3 +1,7 @@
from .mongo import (
OpenPypeMongoConnection,
)
from .entities import (
get_projects,
get_project,
@ -25,6 +29,8 @@ from .entities import (
get_last_version_by_subset_name,
get_output_link_versions,
version_is_latest,
get_representation_by_id,
get_representation_by_name,
get_representations,
@ -40,6 +46,8 @@ from .entities import (
)
__all__ = (
"OpenPypeMongoConnection",
"get_projects",
"get_project",
"get_whole_project",
@ -66,6 +74,8 @@ __all__ = (
"get_last_version_by_subset_name",
"get_output_link_versions",
"version_is_latest",
"get_representation_by_id",
"get_representation_by_name",
"get_representations",

openpype/client/entities.py: file diff suppressed because it is too large

openpype/client/mongo.py (new file): 235 additions

@ -0,0 +1,235 @@
import os
import sys
import time
import logging
import pymongo
import certifi
if sys.version_info[0] == 2:
from urlparse import urlparse, parse_qs
else:
from urllib.parse import urlparse, parse_qs
class MongoEnvNotSet(Exception):
pass
def _decompose_url(url):
"""Decompose mongo url to basic components.
Used for creation of MongoHandler, which expects mongo url components as
separate kwargs. The components are ultimately not used, since we set the
connection directly; they exist only so that MongoHandler validation passes.
"""
# Use first url from passed url
# - this is because it is possible to pass multiple urls for multiple
# replica sets which would crash on urlparse otherwise
# - please don't use a comma in username or password
url = url.split(",")[0]
components = {
"scheme": None,
"host": None,
"port": None,
"username": None,
"password": None,
"auth_db": None
}
result = urlparse(url)
if result.scheme is None:
_url = "mongodb://{}".format(url)
result = urlparse(_url)
components["scheme"] = result.scheme
components["host"] = result.hostname
try:
components["port"] = result.port
except ValueError:
raise RuntimeError("invalid port specified")
components["username"] = result.username
components["password"] = result.password
try:
components["auth_db"] = parse_qs(result.query)['authSource'][0]
except KeyError:
# no auth db provided, mongo will use the one we are connecting to
pass
return components
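For illustration, decomposing a typical connection string (URL is hypothetical) yields:

```python
components = _decompose_url(
    "mongodb://user:secret@mongo.example.com:27017/?authSource=admin"
)
# {"scheme": "mongodb", "host": "mongo.example.com", "port": 27017,
#  "username": "user", "password": "secret", "auth_db": "admin"}
```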
def get_default_components():
mongo_url = os.environ.get("OPENPYPE_MONGO")
if mongo_url is None:
raise MongoEnvNotSet(
"URL for Mongo logging connection is not set."
)
return _decompose_url(mongo_url)
def should_add_certificate_path_to_mongo_url(mongo_url):
"""Check if should add ca certificate to mongo url.
Since 30.9.2021 cloud mongo requires newer certificates that are not
available on most workstations. This adds the path to a certifi certificate
which is valid for it. To add the certificate path, the url must have the
scheme 'mongodb+srv' or have 'ssl=true' or 'tls=true' in its query.
"""
parsed = urlparse(mongo_url)
query = parse_qs(parsed.query)
lowered_query_keys = set(key.lower() for key in query.keys())
add_certificate = False
# Check if url 'ssl' or 'tls' are set to 'true'
for key in ("ssl", "tls"):
if key in query and "true" in query[key]:
add_certificate = True
break
# Check if url contains 'mongodb+srv'
if not add_certificate and parsed.scheme == "mongodb+srv":
add_certificate = True
# Check if url does already contain certificate path
if add_certificate and "tlscafile" in lowered_query_keys:
add_certificate = False
return add_certificate
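Given those rules (and with the `query[key]` fix above), the function behaves roughly like this (all URLs hypothetical):

```python
assert should_add_certificate_path_to_mongo_url("mongodb+srv://cluster.example.net")
assert should_add_certificate_path_to_mongo_url("mongodb://host:27017/?tls=true")
# already carries a certificate path, nothing to add:
assert not should_add_certificate_path_to_mongo_url(
    "mongodb://host:27017/?tls=true&tlsCAFile=/certs/ca.pem"
)
assert not should_add_certificate_path_to_mongo_url("mongodb://localhost:27017")
```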
def validate_mongo_connection(mongo_uri):
"""Check if provided mongodb URL is valid.
Args:
mongo_uri (str): URL to validate.
Raises:
ValueError: When port in mongo uri is not valid.
pymongo.errors.InvalidURI: If passed mongo is invalid.
pymongo.errors.ServerSelectionTimeoutError: If the connection timed
out, so the mongo server probably couldn't be reached.
"""
client = OpenPypeMongoConnection.create_connection(
mongo_uri, retry_attempts=1
)
client.close()
class OpenPypeMongoConnection:
"""Singleton MongoDB connection.
Keeps MongoDB connections by url.
"""
mongo_clients = {}
log = logging.getLogger("OpenPypeMongoConnection")
@staticmethod
def get_default_mongo_url():
return os.environ["OPENPYPE_MONGO"]
@classmethod
def get_mongo_client(cls, mongo_url=None):
if mongo_url is None:
mongo_url = cls.get_default_mongo_url()
connection = cls.mongo_clients.get(mongo_url)
if connection:
# Naive validation of existing connection
try:
connection.server_info()
with connection.start_session():
pass
except Exception:
connection = None
if not connection:
cls.log.debug("Creating mongo connection to {}".format(mongo_url))
connection = cls.create_connection(mongo_url)
cls.mongo_clients[mongo_url] = connection
return connection
@classmethod
def create_connection(cls, mongo_url, timeout=None, retry_attempts=None):
parsed = urlparse(mongo_url)
# Force validation of scheme
if parsed.scheme not in ["mongodb", "mongodb+srv"]:
raise pymongo.errors.InvalidURI((
"Invalid URI scheme:"
" URI must begin with 'mongodb://' or 'mongodb+srv://'"
))
if timeout is None:
timeout = int(os.environ.get("AVALON_TIMEOUT") or 1000)
kwargs = {
"serverSelectionTimeoutMS": timeout
}
if should_add_certificate_path_to_mongo_url(mongo_url):
kwargs["ssl_ca_certs"] = certifi.where()
mongo_client = pymongo.MongoClient(mongo_url, **kwargs)
if retry_attempts is None:
retry_attempts = 3
elif not retry_attempts:
retry_attempts = 1
last_exc = None
valid = False
t1 = time.time()
for attempt in range(1, retry_attempts + 1):
try:
mongo_client.server_info()
with mongo_client.start_session():
pass
valid = True
break
except Exception as exc:
last_exc = exc
if attempt < retry_attempts:
cls.log.warning(
"Attempt {} failed. Retrying... ".format(attempt)
)
time.sleep(1)
if not valid:
raise last_exc
cls.log.info("Connected to {}, delay {:.3f}s".format(
mongo_url, time.time() - t1
))
return mongo_client
def get_project_database():
db_name = os.environ.get("AVALON_DB") or "avalon"
return OpenPypeMongoConnection.get_mongo_client()[db_name]
def get_project_connection(project_name):
"""Direct access to mongo collection.
We're trying to avoid using direct access to mongo. This should be used
only for Create, Update and Remove operations until there are implemented
api calls for that.
Args:
project_name(str): Project name for which collection should be
returned.
Returns:
pymongo.Collection: Collection related to passed project.
"""
if not project_name:
raise ValueError("Invalid project name {}".format(str(project_name)))
return get_project_database()[project_name]
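A short usage sketch of the helpers above (project name hypothetical; requires the `OPENPYPE_MONGO` environment variable to be set):

```python
client = OpenPypeMongoConnection.get_mongo_client()  # cached per mongo url
database = get_project_database()                    # $AVALON_DB or "avalon"
collection = get_project_connection("MyProject")     # one collection per project
```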

openpype/client/operations.py (new file): 634 additions

@ -0,0 +1,634 @@
import uuid
import copy
import collections
from abc import ABCMeta, abstractmethod, abstractproperty
import six
from bson.objectid import ObjectId
from pymongo import DeleteOne, InsertOne, UpdateOne
from .mongo import get_project_connection
REMOVED_VALUE = object()
CURRENT_PROJECT_SCHEMA = "openpype:project-3.0"
CURRENT_PROJECT_CONFIG_SCHEMA = "openpype:config-2.0"
CURRENT_ASSET_DOC_SCHEMA = "openpype:asset-3.0"
CURRENT_SUBSET_SCHEMA = "openpype:subset-3.0"
CURRENT_VERSION_SCHEMA = "openpype:version-3.0"
CURRENT_REPRESENTATION_SCHEMA = "openpype:representation-2.0"
CURRENT_WORKFILE_INFO_SCHEMA = "openpype:workfile-1.0"
def _create_or_convert_to_mongo_id(mongo_id):
if mongo_id is None:
return ObjectId()
return ObjectId(mongo_id)
def new_project_document(
project_name, project_code, config, data=None, entity_id=None
):
"""Create skeleton data of project document.
Args:
project_name (str): Name of project. Used as identifier of a project.
project_code (str): Shorter version of project name, without spaces and
special characters (in most cases). Should also be considered a
unique name across projects.
config (Dict[str, Any]): Project config consisting of roots, templates,
applications and other project Anatomy related data.
data (Dict[str, Any]): Project data with information about its
attributes (e.g. 'fps' etc.) or integration specific keys.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of project document.
"""
if data is None:
data = {}
data["code"] = project_code
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"name": project_name,
"type": CURRENT_PROJECT_SCHEMA,
"entity_data": data,
"config": config
}
def new_asset_document(
name, project_id, parent_id, parents, data=None, entity_id=None
):
"""Create skeleton data of asset document.
Args:
name (str): Is considered as unique identifier of asset in project.
project_id (Union[str, ObjectId]): Id of project document.
parent_id (Union[str, ObjectId]): Id of parent asset.
parents (List[str]): List of parent assets names.
data (Dict[str, Any]): Asset document data. Empty dictionary is used
if not passed. Value of 'parent_id' is used to fill 'visualParent'.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of asset document.
"""
if data is None:
data = {}
if parent_id is not None:
parent_id = ObjectId(parent_id)
data["visualParent"] = parent_id
data["parents"] = parents
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"type": "asset",
"name": name,
"parent": ObjectId(project_id),
"data": data,
"schema": CURRENT_ASSET_DOC_SCHEMA
}
def new_subset_document(name, family, asset_id, data=None, entity_id=None):
"""Create skeleton data of subset document.
Args:
name (str): Is considered as unique identifier of subset under asset.
family (str): Subset's family.
asset_id (Union[str, ObjectId]): Id of parent asset.
data (Dict[str, Any]): Subset document data. Empty dictionary is used
if not passed. Value of 'family' is used to fill 'family'.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of subset document.
"""
if data is None:
data = {}
data["family"] = family
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"schema": CURRENT_SUBSET_SCHEMA,
"type": "subset",
"name": name,
"data": data,
"parent": asset_id
}
def new_version_doc(version, subset_id, data=None, entity_id=None):
"""Create skeleton data of version document.
Args:
version (int): Is considered as unique identifier of version
under subset.
subset_id (Union[str, ObjectId]): Id of parent subset.
data (Dict[str, Any]): Version document data.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of version document.
"""
if data is None:
data = {}
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"schema": CURRENT_VERSION_SCHEMA,
"type": "version",
"name": int(version),
"parent": subset_id,
"data": data
}
def new_representation_doc(
name, version_id, context, data=None, entity_id=None
):
"""Create skeleton data of asset document.
Args:
version (int): Is considered as unique identifier of version
under subset.
version_id (Union[str, ObjectId]): Id of parent version.
context (Dict[str, Any]): Representation context used for fill template
of to query.
data (Dict[str, Any]): Representation document data.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of representation document.
"""
if data is None:
data = {}
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"schema": CURRENT_REPRESENTATION_SCHEMA,
"type": "representation",
"parent": version_id,
"name": name,
"data": data,
# Imprint shortcut to context for performance reasons.
"context": context
}
def new_workfile_info_doc(
filename, asset_id, task_name, files, data=None, entity_id=None
):
"""Create skeleton data of workfile info document.
Workfile document is at this moment used primarily for artist notes.
Args:
filename (str): Filename of workfile.
asset_id (Union[str, ObjectId]): Id of asset under which workfile live.
task_name (str): Task under which was workfile created.
files (List[str]): List of rootless filepaths related to workfile.
data (Dict[str, Any]): Additional metadata.
Returns:
Dict[str, Any]: Skeleton of workfile info document.
"""
if not data:
data = {}
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"type": "workfile",
"parent": ObjectId(asset_id),
"task_name": task_name,
"filename": filename,
"data": data,
"files": files
}
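As a sketch, the skeleton helpers above can be chained into a linked document set (ids and names are invented; `ObjectId` is imported at the top of this module):

```python
asset = new_asset_document("hero", project_id=ObjectId(), parent_id=None, parents=[])
subset = new_subset_document("modelMain", "model", asset["_id"])
version = new_version_doc(1, subset["_id"])
representation = new_representation_doc(
    "abc", version["_id"], context={"asset": "hero", "subset": "modelMain"}
)
```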
def _prepare_update_data(old_doc, new_doc, replace):
changes = {}
for key, value in new_doc.items():
if key not in old_doc or value != old_doc[key]:
changes[key] = value
if replace:
for key in old_doc.keys():
if key not in new_doc:
changes[key] = REMOVED_VALUE
return changes
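`_prepare_update_data` keeps keys whose values changed; with `replace=True` it also marks keys missing from the new document with `REMOVED_VALUE`. For example:

```python
old = {"name": "hero", "data": {"fps": 24}, "obsolete": True}
new = {"name": "hero", "data": {"fps": 25}}
changes = _prepare_update_data(old, new, replace=True)
# {"data": {"fps": 25}, "obsolete": REMOVED_VALUE}
```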
def prepare_subset_update_data(old_doc, new_doc, replace=True):
"""Compare two subset documents and prepare update data.
Based on compared values will create update data for 'UpdateOperation'.
Empty output means that documents are identical.
Returns:
Dict[str, Any]: Changes between old and new document.
"""
return _prepare_update_data(old_doc, new_doc, replace)
def prepare_version_update_data(old_doc, new_doc, replace=True):
"""Compare two version documents and prepare update data.
Based on compared values will create update data for 'UpdateOperation'.
Empty output means that documents are identical.
Returns:
Dict[str, Any]: Changes between old and new document.
"""
return _prepare_update_data(old_doc, new_doc, replace)
def prepare_representation_update_data(old_doc, new_doc, replace=True):
"""Compare two representation documents and prepare update data.
Based on compared values will create update data for 'UpdateOperation'.
Empty output means that documents are identical.
Returns:
Dict[str, Any]: Changes between old and new document.
"""
return _prepare_update_data(old_doc, new_doc, replace)
def prepare_workfile_info_update_data(old_doc, new_doc, replace=True):
"""Compare two workfile info documents and prepare update data.
Based on compared values will create update data for 'UpdateOperation'.
Empty output means that documents are identical.
Returns:
Dict[str, Any]: Changes between old and new document.
"""
return _prepare_update_data(old_doc, new_doc, replace)
@six.add_metaclass(ABCMeta)
class AbstractOperation(object):
"""Base operation class.
Operation represents a call into the database. The call can create, change or
remove data.
Args:
project_name (str): On which project operation will happen.
entity_type (str): Type of entity on which change happens.
e.g. 'asset', 'representation' etc.
"""
def __init__(self, project_name, entity_type):
self._project_name = project_name
self._entity_type = entity_type
self._id = str(uuid.uuid4())
@property
def project_name(self):
return self._project_name
@property
def id(self):
"""Identifier of operation."""
return self._id
@property
def entity_type(self):
return self._entity_type
@abstractproperty
def operation_name(self):
"""Stringified type of operation."""
pass
@abstractmethod
def to_mongo_operation(self):
"""Convert operation to Mongo batch operation."""
pass
def to_data(self):
"""Convert opration to data that can be converted to json or others.
Warning:
Current state returns ObjectId objects which cannot be parsed by
json.
Returns:
Dict[str, Any]: Description of operation.
"""
return {
"id": self._id,
"entity_type": self.entity_type,
"project_name": self.project_name,
"operation": self.operation_name
}
class CreateOperation(AbstractOperation):
"""Opeartion to create an entity.
Args:
project_name (str): On which project operation will happen.
entity_type (str): Type of entity on which change happens.
e.g. 'asset', 'representation' etc.
data (Dict[str, Any]): Data of entity that will be created.
"""
operation_name = "create"
def __init__(self, project_name, entity_type, data):
super(CreateOperation, self).__init__(project_name, entity_type)
if not data:
data = {}
else:
data = copy.deepcopy(dict(data))
if "_id" not in data:
data["_id"] = ObjectId()
else:
data["_id"] = ObjectId(data["_id"])
self._entity_id = data["_id"]
self._data = data
def __setitem__(self, key, value):
self.set_value(key, value)
def __getitem__(self, key):
return self.data[key]
def set_value(self, key, value):
self.data[key] = value
def get(self, key, *args, **kwargs):
return self.data.get(key, *args, **kwargs)
@property
def entity_id(self):
return self._entity_id
@property
def data(self):
return self._data
def to_mongo_operation(self):
return InsertOne(copy.deepcopy(self._data))
def to_data(self):
output = super(CreateOperation, self).to_data()
output["data"] = copy.deepcopy(self.data)
return output
class UpdateOperation(AbstractOperation):
"""Opeartion to update an entity.
Args:
project_name (str): On which project operation will happen.
entity_type (str): Type of entity on which change happens.
e.g. 'asset', 'representation' etc.
entity_id (Union[str, ObjectId]): Identifier of an entity.
update_data (Dict[str, Any]): Key -> value changes that will be set in
database. If value is set to 'REMOVED_VALUE' the key will be
removed. Only first level of dictionary is checked (on purpose).
"""
operation_name = "update"
def __init__(self, project_name, entity_type, entity_id, update_data):
super(UpdateOperation, self).__init__(project_name, entity_type)
self._entity_id = ObjectId(entity_id)
self._update_data = update_data
@property
def entity_id(self):
return self._entity_id
@property
def update_data(self):
return self._update_data
def to_mongo_operation(self):
unset_data = {}
set_data = {}
for key, value in self._update_data.items():
if value is REMOVED_VALUE:
unset_data[key] = value
else:
set_data[key] = value
op_data = {}
if unset_data:
op_data["$unset"] = unset_data
if set_data:
op_data["$set"] = set_data
if not op_data:
return None
return UpdateOne(
{"_id": self.entity_id},
op_data
)
def to_data(self):
changes = {}
for key, value in self._update_data.items():
if value is REMOVED_VALUE:
value = None
changes[key] = value
output = super(UpdateOperation, self).to_data()
output.update({
"entity_id": self.entity_id,
"changes": changes
})
return output
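The `REMOVED_VALUE` sentinel is what routes a key into `$unset` rather than `$set`. A sketch of the mapping (entity id hypothetical):

```python
op = UpdateOperation(
    "MyProject", "asset", ObjectId(),
    {"data.fps": 24, "data.obsolete": REMOVED_VALUE}
)
op.to_mongo_operation()
# UpdateOne({"_id": <id>}, {"$set": {"data.fps": 24},
#                           "$unset": {"data.obsolete": REMOVED_VALUE}})
```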
class DeleteOperation(AbstractOperation):
"""Opeartion to delete an entity.
Args:
project_name (str): On which project operation will happen.
entity_type (str): Type of entity on which change happens.
e.g. 'asset', 'representation' etc.
entity_id (Union[str, ObjectId]): Entity id that will be removed.
"""
operation_name = "delete"
def __init__(self, project_name, entity_type, entity_id):
super(DeleteOperation, self).__init__(project_name, entity_type)
self._entity_id = ObjectId(entity_id)
@property
def entity_id(self):
return self._entity_id
def to_mongo_operation(self):
return DeleteOne({"_id": self.entity_id})
def to_data(self):
output = super(DeleteOperation, self).to_data()
output["entity_id"] = self.entity_id
return output
class OperationsSession(object):
"""Session storing operations that should happen in an order.
At this moment it does not handle anything special and can be considered a
plain list of operations that will happen one after another. If the same
entity is created multiple times, that is not handled in any way, and
document values are not validated.
Each operation is related to a single project.
"""
def __init__(self):
self._operations = []
def add(self, operation):
"""Add operation to be processed.
Args:
operation (AbstractOperation): Operation that should be processed.
"""
if not isinstance(
operation,
(CreateOperation, UpdateOperation, DeleteOperation)
):
raise TypeError("Expected Operation object got {}".format(
str(type(operation))
))
self._operations.append(operation)
def append(self, operation):
"""Add operation to be processed.
Args:
operation (AbstractOperation): Operation that should be processed.
"""
self.add(operation)
def extend(self, operations):
"""Add operations to be processed.
Args:
operations (List[AbstractOperation]): Operations that should be
processed.
"""
for operation in operations:
self.add(operation)
def remove(self, operation):
"""Remove operation."""
self._operations.remove(operation)
def clear(self):
"""Clear all registered operations."""
self._operations = []
def to_data(self):
return [
operation.to_data()
for operation in self._operations
]
def commit(self):
"""Commit session operations."""
operations, self._operations = self._operations, []
if not operations:
return
operations_by_project = collections.defaultdict(list)
for operation in operations:
operations_by_project[operation.project_name].append(operation)
for project_name, operations in operations_by_project.items():
bulk_writes = []
for operation in operations:
mongo_op = operation.to_mongo_operation()
if mongo_op is not None:
bulk_writes.append(mongo_op)
if bulk_writes:
collection = get_project_connection(project_name)
collection.bulk_write(bulk_writes)
def create_entity(self, project_name, entity_type, data):
"""Fast access to 'CreateOperation'.
Returns:
CreateOperation: Object of update operation.
"""
operation = CreateOperation(project_name, entity_type, data)
self.add(operation)
return operation
def update_entity(self, project_name, entity_type, entity_id, update_data):
"""Fast access to 'UpdateOperation'.
Returns:
UpdateOperation: Object of update operation.
"""
operation = UpdateOperation(
project_name, entity_type, entity_id, update_data
)
self.add(operation)
return operation
def delete_entity(self, project_name, entity_type, entity_id):
"""Fast access to 'DeleteOperation'.
Returns:
DeleteOperation: Object of delete operation.
"""
operation = DeleteOperation(project_name, entity_type, entity_id)
self.add(operation)
return operation
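A hypothetical end-to-end use of the session (project name and values invented; `commit()` requires a live mongo connection):

```python
session = OperationsSession()
create_op = session.create_entity(
    "MyProject", "asset",
    new_asset_document("hero", project_id=ObjectId(), parent_id=None, parents=[])
)
session.update_entity("MyProject", "asset", create_op.entity_id, {"data.fps": 25})
session.commit()  # operations grouped per project and sent as one bulk_write
```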

openpype/hooks/pre_copy_template_workfile.py

@ -1,11 +1,11 @@
import os
import shutil
from openpype.lib import (
PreLaunchHook,
get_custom_workfile_template_by_context,
from openpype.lib import PreLaunchHook
from openpype.settings import get_project_settings
from openpype.pipeline.workfile import (
get_custom_workfile_template,
get_custom_workfile_template_by_string_context
)
from openpype.settings import get_project_settings
class CopyTemplateWorkfile(PreLaunchHook):
@ -54,41 +54,22 @@ class CopyTemplateWorkfile(PreLaunchHook):
project_name = self.data["project_name"]
asset_name = self.data["asset_name"]
task_name = self.data["task_name"]
host_name = self.application.host_name
project_settings = get_project_settings(project_name)
host_settings = project_settings[self.application.host_name]
workfile_builder_settings = host_settings.get("workfile_builder")
if not workfile_builder_settings:
# TODO remove warning when deprecated
self.log.warning((
"Seems like old version of settings is used."
" Can't access custom templates in host \"{}\"."
).format(self.application.full_label))
return
if not workfile_builder_settings["create_first_version"]:
self.log.info((
"Project \"{}\" has turned off to create first workfile for"
" application \"{}\""
).format(project_name, self.application.full_label))
return
# Backwards compatibility
template_profiles = workfile_builder_settings.get("custom_templates")
if not template_profiles:
self.log.info(
"Custom templates are not filled. Skipping template copy."
)
return
project_doc = self.data.get("project_doc")
asset_doc = self.data.get("asset_doc")
anatomy = self.data.get("anatomy")
if project_doc and asset_doc:
self.log.debug("Started filtering of custom template paths.")
template_path = get_custom_workfile_template_by_context(
template_profiles, project_doc, asset_doc, task_name, anatomy
template_path = get_custom_workfile_template(
project_doc,
asset_doc,
task_name,
host_name,
anatomy,
project_settings
)
else:
@@ -96,10 +77,13 @@ class CopyTemplateWorkfile(PreLaunchHook):
"Global data collection probably did not execute."
" Using backup solution."
))
dbcon = self.data.get("dbcon")
template_path = get_custom_workfile_template_by_string_context(
template_profiles, project_name, asset_name, task_name,
dbcon, anatomy
project_name,
asset_name,
task_name,
host_name,
anatomy,
project_settings
)
if not template_path:

View file

@@ -1,3 +1,4 @@
from openpype.client import get_project, get_asset_by_name
from openpype.lib import (
PreLaunchHook,
EnvironmentPrepData,
@@ -69,7 +70,7 @@ class GlobalHostDataHook(PreLaunchHook):
self.data["dbcon"] = dbcon
# Project document
project_doc = dbcon.find_one({"type": "project"})
project_doc = get_project(project_name)
self.data["project_doc"] = project_doc
asset_name = self.data.get("asset_name")
@@ -79,8 +80,5 @@
)
return
asset_doc = dbcon.find_one({
"type": "asset",
"name": asset_name
})
asset_doc = get_asset_by_name(project_name, asset_name)
self.data["asset_doc"] = asset_doc

View file

@@ -1,5 +1,4 @@
import os
import sys
from Qt import QtWidgets
@@ -15,6 +14,7 @@ from openpype.pipeline import (
AVALON_CONTAINER_ID,
legacy_io,
)
from openpype.pipeline.load import any_outdated_containers
import openpype.hosts.aftereffects
from openpype.lib import register_event_callback
@@ -136,7 +136,7 @@ def ls():
def check_inventory():
"""Checks loaded containers if they are of highest version"""
if not lib.any_outdated():
if not any_outdated_containers():
return
# Warn about outdated containers.

View file

@@ -102,7 +102,6 @@ class CollectAERender(publish.AbstractCollectRender):
attachTo=False,
setMembers='',
publish=True,
renderer='aerender',
name=subset_name,
resolutionWidth=render_q.width,
resolutionHeight=render_q.height,
@@ -113,7 +112,6 @@ class CollectAERender(publish.AbstractCollectRender):
frameStart=frame_start,
frameEnd=frame_end,
frameStep=1,
toBeRenderedOn='deadline',
fps=fps,
app_version=app_version,
publish_attributes=inst.data.get("publish_attributes", {}),
@@ -138,6 +136,9 @@ class CollectAERender(publish.AbstractCollectRender):
fam = "render.farm"
if fam not in instance.families:
instance.families.append(fam)
instance.toBeRenderedOn = "deadline"
instance.renderer = "aerender"
instance.farm = True # to skip integrate
instances.append(instance)
instances_to_remove.append(inst)

View file

@@ -220,12 +220,9 @@ class LaunchQtApp(bpy.types.Operator):
self._app.store_window(self.bl_idname, window)
self._window = window
if not isinstance(
self._window,
(QtWidgets.QMainWindow, QtWidgets.QDialog, ModuleType)
):
if not isinstance(self._window, (QtWidgets.QWidget, ModuleType)):
raise AttributeError(
"`window` should be a `QDialog or module`. Got: {}".format(
"`window` should be a `QWidget or module`. Got: {}".format(
str(type(window))
)
)
@@ -249,9 +246,9 @@ class LaunchQtApp(bpy.types.Operator):
self._window.setWindowFlags(on_top_flags)
self._window.show()
if on_top_flags != origin_flags:
self._window.setWindowFlags(origin_flags)
self._window.show()
# if on_top_flags != origin_flags:
# self._window.setWindowFlags(origin_flags)
# self._window.show()
return {'FINISHED'}

View file

@@ -136,7 +136,8 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
"tasks": {
task["name"]: {"type": task["type"]}
for task in self.add_tasks},
"representations": []
"representations": [],
"newAssetPublishing": True
})
self.log.debug("__ inst_data: {}".format(pformat(inst_data)))

View file

@@ -3,9 +3,9 @@ import copy
from collections import OrderedDict
from pprint import pformat
import pyblish
from openpype.lib import get_workdir
import openpype.hosts.flame.api as opfapi
import openpype.pipeline as op_pipeline
from openpype.pipeline.workfile import get_workdir
class IntegrateBatchGroup(pyblish.api.InstancePlugin):
@@ -323,6 +323,14 @@ class IntegrateBatchGroup(pyblish.api.InstancePlugin):
def _get_shot_task_dir_path(self, instance, task_data):
project_doc = instance.data["projectEntity"]
asset_entity = instance.data["assetEntity"]
anatomy = instance.context.data["anatomy"]
project_settings = instance.context.data["project_settings"]
return get_workdir(
project_doc, asset_entity, task_data["name"], "flame")
project_doc,
asset_entity,
task_data["name"],
"flame",
anatomy,
project_settings=project_settings
)

View file

@@ -3,9 +3,7 @@ import re
import sys
import logging
# Pipeline imports
from openpype.client import (
get_project,
get_asset_by_name,
get_versions,
)
@@ -17,13 +15,10 @@ from openpype.pipeline import (
from openpype.lib import version_up
from openpype.hosts.fusion import api
from openpype.hosts.fusion.api import lib
from openpype.lib.avalon_context import get_workdir_from_session
from openpype.pipeline.context_tools import get_workdir_from_session
log = logging.getLogger("Update Slap Comp")
self = sys.modules[__name__]
self._project = None
def _format_version_folder(folder):
"""Format a version folder based on the filepath
@@ -212,9 +207,6 @@ def switch(asset_name, filepath=None, new=True):
asset = get_asset_by_name(project_name, asset_name)
assert asset, "Could not find '%s' in the database" % asset_name
# Get current project
self._project = get_project(project_name)
# Go to comp
if not filepath:
current_comp = api.get_current_comp()

View file

@@ -14,7 +14,7 @@ from openpype.pipeline import (
legacy_io,
)
from openpype.hosts.fusion import api
from openpype.lib.avalon_context import get_workdir_from_session
from openpype.pipeline.context_tools import get_workdir_from_session
log = logging.getLogger("Fusion Switch Shot")

View file

@@ -4,17 +4,16 @@ import logging
import pyblish.api
from openpype import lib
from openpype.client import get_representation_by_id
from openpype.lib import register_event_callback
from openpype.pipeline import (
legacy_io,
register_loader_plugin_path,
register_creator_plugin_path,
deregister_loader_plugin_path,
deregister_creator_plugin_path,
AVALON_CONTAINER_ID,
)
from openpype.pipeline.load import get_outdated_containers
from openpype.pipeline.context_tools import get_current_project_asset
import openpype.hosts.harmony
import openpype.hosts.harmony.api as harmony
@@ -50,7 +49,9 @@ def get_asset_settings():
dict: Scene data.
"""
asset_data = lib.get_asset()["data"]
asset_doc = get_current_project_asset()
asset_data = asset_doc["data"]
fps = asset_data.get("fps")
frame_start = asset_data.get("frameStart")
frame_end = asset_data.get("frameEnd")
@@ -105,16 +106,7 @@ def check_inventory():
in Harmony.
"""
project_name = legacy_io.active_project()
outdated_containers = []
for container in ls():
representation_id = container['representation']
representation_doc = get_representation_by_id(
project_name, representation_id, fields=["parent"]
)
if representation_doc and not lib.is_latest(representation_doc):
outdated_containers.append(container)
outdated_containers = get_outdated_containers()
if not outdated_containers:
return

View file

@@ -5,8 +5,8 @@ from openpype.pipeline import (
load,
get_representation_path,
)
from openpype.pipeline.context_tools import is_representation_from_latest
import openpype.hosts.harmony.api as harmony
import openpype.lib
copy_files = """function copyFile(srcFilename, dstFilename)
@@ -280,9 +280,7 @@ class BackgroundLoader(load.LoaderPlugin):
)
def update(self, container, representation):
path = get_representation_path(representation)
with open(path) as json_file:
data = json.load(json_file)
@@ -300,10 +298,9 @@
bg_folder = os.path.dirname(path)
path = get_representation_path(representation)
print(container)
is_latest = is_representation_from_latest(representation)
for layer in sorted(layers):
file_to_import = [
os.path.join(bg_folder, layer).replace("\\", "/")
@@ -347,7 +344,7 @@ class BackgroundLoader(load.LoaderPlugin):
}
%s
""" % (sig, sig)
if openpype.lib.is_latest(representation):
if is_latest:
harmony.send({"function": func, "args": [node, "green"]})
else:
harmony.send({"function": func, "args": [node, "red"]})

View file

@@ -10,8 +10,8 @@ from openpype.pipeline import (
load,
get_representation_path,
)
from openpype.pipeline.context_tools import is_representation_from_latest
import openpype.hosts.harmony.api as harmony
import openpype.lib
class ImageSequenceLoader(load.LoaderPlugin):
@@ -109,7 +109,7 @@ class ImageSequenceLoader(load.LoaderPlugin):
)
# Colour node.
if openpype.lib.is_latest(representation):
if is_representation_from_latest(representation):
harmony.send(
{
"function": "PypeHarmony.setColor",

View file

@@ -10,8 +10,8 @@ from openpype.pipeline import (
load,
get_representation_path,
)
from openpype.pipeline.context_tools import is_representation_from_latest
import openpype.hosts.harmony.api as harmony
import openpype.lib
class TemplateLoader(load.LoaderPlugin):
@@ -83,7 +83,7 @@ class TemplateLoader(load.LoaderPlugin):
self_name = self.__class__.__name__
update_and_replace = False
if openpype.lib.is_latest(representation):
if is_representation_from_latest(representation):
self._set_green(node)
else:
self._set_red(node)

View file

@@ -55,6 +55,10 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
def process(self, instance):
"""Plugin entry point."""
# TODO 'get_asset_settings' could expect asset document as argument
# which is available on 'context.data["assetEntity"]'
# - the same approach can be used in 'ValidateSceneSettingsRepair'
expected_settings = harmony.get_asset_settings()
self.log.info("scene settings from DB:".format(expected_settings))

View file

@@ -10,6 +10,7 @@ import qargparse
import openpype.api as openpype
from openpype.pipeline import LoaderPlugin, LegacyCreator
from openpype.pipeline.context_tools import get_current_project_asset
from . import lib
log = openpype.Logger().get_logger(__name__)
@@ -484,7 +485,7 @@ class ClipLoader:
"""
asset_name = self.context["representation"]["context"]["asset"]
asset_doc = openpype.get_asset(asset_name)
asset_doc = get_current_project_asset(asset_name)
log.debug("__ asset_doc: {}".format(pformat(asset_doc)))
self.data["assetData"] = asset_doc["data"]

View file

@@ -109,7 +109,8 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
"clipAnnotations": annotations,
# add all additional tags
"tags": phiero.get_track_item_tags(track_item)
"tags": phiero.get_track_item_tags(track_item),
"newAssetPublishing": True
})
# otio clip data

View file

@@ -5,8 +5,8 @@ from contextlib import contextmanager
import six
from openpype.client import get_asset_by_name
from openpype.api import get_asset
from openpype.pipeline import legacy_io
from openpype.pipeline.context_tools import get_current_project_asset
import hou
@@ -16,7 +16,7 @@ log = logging.getLogger(__name__)
def get_asset_fps():
"""Return current asset fps."""
return get_asset()["data"].get("fps")
return get_current_project_asset()["data"].get("fps")
def set_id(node, unique_id, overwrite=False):

View file

@@ -12,13 +12,13 @@ from openpype.pipeline import (
register_loader_plugin_path,
AVALON_CONTAINER_ID,
)
from openpype.pipeline.load import any_outdated_containers
import openpype.hosts.houdini
from openpype.hosts.houdini.api import lib
from openpype.lib import (
register_event_callback,
emit_event,
any_outdated,
)
from .lib import get_asset_fps
@@ -245,7 +245,7 @@ def on_open():
# ensure it is using correct FPS for the asset
lib.validate_fps()
if any_outdated():
if any_outdated_containers():
from openpype.widgets import popup
log.warning("Scene has outdated content.")

View file

@@ -23,7 +23,6 @@ from openpype.client import (
get_last_versions,
get_representation_by_name
)
from openpype import lib
from openpype.api import get_anatomy_settings
from openpype.pipeline import (
legacy_io,
@@ -33,6 +32,7 @@ from openpype.pipeline import (
load_container,
registered_host,
)
from openpype.pipeline.context_tools import get_current_project_asset
from .commands import reset_frame_range
@@ -2174,7 +2174,7 @@ def reset_scene_resolution():
project_name = legacy_io.active_project()
project_doc = get_project(project_name)
project_data = project_doc["data"]
asset_data = lib.get_asset()["data"]
asset_data = get_current_project_asset()["data"]
# Set project resolution
width_key = "resolutionWidth"
@@ -2208,7 +2208,8 @@ def set_context_settings():
project_name = legacy_io.active_project()
project_doc = get_project(project_name)
project_data = project_doc["data"]
asset_data = lib.get_asset()["data"]
asset_doc = get_current_project_asset(fields=["data.fps"])
asset_data = asset_doc.get("data", {})
# Set project fps
fps = asset_data.get("fps", project_data.get("fps", 25))
@@ -2233,7 +2234,7 @@ def validate_fps():
"""
fps = lib.get_asset()["data"]["fps"]
fps = get_current_project_asset(fields=["data.fps"])["data"]["fps"]
# TODO(antirotor): This is hack as for framerates having multiple
# decimal places. FTrack is ceiling decimal values on
# fps to two decimal places but Maya 2019+ is reporting those fps
@@ -2522,12 +2523,30 @@ def load_capture_preset(data=None):
temp_options2['multiSampleEnable'] = False
temp_options2['multiSampleCount'] = preset[id][key]
if key == 'renderDepthOfField':
temp_options2['renderDepthOfField'] = preset[id][key]
if key == 'ssaoEnable':
temp_options2['ssaoEnable'] = preset[id][key] is True
if key == 'ssaoSamples':
temp_options2['ssaoSamples'] = preset[id][key]
if key == 'ssaoAmount':
temp_options2['ssaoAmount'] = preset[id][key]
if key == 'ssaoRadius':
temp_options2['ssaoRadius'] = preset[id][key]
if key == 'hwFogDensity':
temp_options2['hwFogDensity'] = preset[id][key]
if key == 'ssaoFilterRadius':
temp_options2['ssaoFilterRadius'] = preset[id][key]
if key == 'alphaCut':
temp_options2['transparencyAlgorithm'] = 5
temp_options2['transparencyQuality'] = 1
@@ -2535,6 +2554,48 @@
if key == 'headsUpDisplay':
temp_options['headsUpDisplay'] = True
if key == 'fogging':
temp_options['fogging'] = preset[id][key] or False
if key == 'hwFogStart':
temp_options2['hwFogStart'] = preset[id][key]
if key == 'hwFogEnd':
temp_options2['hwFogEnd'] = preset[id][key]
if key == 'hwFogAlpha':
temp_options2['hwFogAlpha'] = preset[id][key]
if key == 'hwFogFalloff':
temp_options2['hwFogFalloff'] = int(preset[id][key])
if key == 'hwFogColorR':
temp_options2['hwFogColorR'] = preset[id][key]
if key == 'hwFogColorG':
temp_options2['hwFogColorG'] = preset[id][key]
if key == 'hwFogColorB':
temp_options2['hwFogColorB'] = preset[id][key]
if key == 'motionBlurEnable':
temp_options2['motionBlurEnable'] = preset[id][key] is True
if key == 'motionBlurSampleCount':
temp_options2['motionBlurSampleCount'] = preset[id][key]
if key == 'motionBlurShutterOpenFraction':
temp_options2['motionBlurShutterOpenFraction'] = preset[id][key]
if key == 'lineAAEnable':
temp_options2['lineAAEnable'] = preset[id][key] is True
else:
temp_options[str(key)] = preset[id][key]
@@ -2544,7 +2605,24 @@
'gpuCacheDisplayFilter',
'multiSample',
'ssaoEnable',
'textureMaxResolution'
'ssaoSamples',
'ssaoAmount',
'ssaoFilterRadius',
'ssaoRadius',
'hwFogStart',
'hwFogEnd',
'hwFogAlpha',
'hwFogFalloff',
'hwFogColorR',
'hwFogColorG',
'hwFogColorB',
'hwFogDensity',
'textureMaxResolution',
'motionBlurEnable',
'motionBlurSampleCount',
'motionBlurShutterOpenFraction',
'lineAAEnable',
'renderDepthOfField'
]:
temp_options.pop(key, None)
@@ -2974,8 +3052,9 @@ def update_content_on_context_change():
This will update scene content to match new asset on context change
"""
scene_sets = cmds.listSets(allSets=True)
new_asset = legacy_io.Session["AVALON_ASSET"]
new_data = lib.get_asset()["data"]
asset_doc = get_current_project_asset()
new_asset = asset_doc["name"]
new_data = asset_doc["data"]
for s in scene_sets:
try:
if cmds.getAttr("{}.id".format(s)) == "pyblish.avalon.instance":

View file

@@ -82,6 +82,14 @@ IMAGE_PREFIXES = {
RENDERMAN_IMAGE_DIR = "maya/<scene>/<layer>"
def has_tokens(string, tokens):
"""Return whether any of tokens is in input string (case-insensitive)"""
pattern = "({})".format("|".join(re.escape(token) for token in tokens))
match = re.search(pattern, string, re.IGNORECASE)
return bool(match)
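# Illustrative checks of has_tokens (values are hypothetical):
# has_tokens("maya/<Scene>/<RenderLayer>", ["<renderlayer>", "<layer>"])
# returns True (case-insensitive match); has_tokens("maya/<Scene>", ["<aov>"])
# returns False.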
@attr.s
class LayerMetadata(object):
"""Data class for Render Layer metadata."""
@@ -99,6 +107,12 @@ class LayerMetadata(object):
# Render Products
products = attr.ib(init=False, default=attr.Factory(list))
# The AOV separator token. Note that not all renderers define an explicit
# render separator but allow putting the AOV/RenderPass token anywhere in
# the file path prefix. For those renderers we'll fall back to whatever
# is between the last occurrences of <RenderLayer> and <RenderPass> tokens.
aov_separator = attr.ib(default="_")
@attr.s
class RenderProduct(object):
@@ -183,7 +197,6 @@ class ARenderProducts:
self.layer = layer
self.render_instance = render_instance
self.multipart = False
self.aov_separator = render_instance.data.get("aovSeparator", "_")
# Initialize
self.layer_data = self._get_layer_data()
@@ -296,6 +309,42 @@ class ARenderProducts:
return lib.get_attr_in_layer(plug, layer=self.layer)
@staticmethod
def extract_separator(file_prefix):
"""Extract AOV separator character from the prefix.
Default behavior extracts the part between
last occurrences of <RenderLayer> and <RenderPass>
Todo:
This code also runs for V-Ray, which overrides the separator
explicitly, so the debug log claiming the AOV separator could not be
extracted is misleading even though RenderProductsVray does set it.
Args:
file_prefix (str): File prefix with tokens.
Returns:
str or None: The separator string, if it can be extracted.
"""
layer_tokens = ["<renderlayer>", "<layer>"]
aov_tokens = ["<aov>", "<renderpass>"]
def match_last(tokens, text):
"""regex match the last occurence from a list of tokens"""
pattern = "(?:.*)({})".format("|".join(tokens))
return re.search(pattern, text, re.IGNORECASE)
layer_match = match_last(layer_tokens, file_prefix)
aov_match = match_last(aov_tokens, file_prefix)
separator = None
if layer_match and aov_match:
matches = sorted((layer_match, aov_match),
key=lambda match: match.end(1))
separator = file_prefix[matches[0].end(1):matches[1].start(1)]
return separator
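# Worked example (prefix value is illustrative): for the prefix
# "maya/<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>" the last
# <RenderLayer> match ends right before "_" and the <RenderPass> match
# starts right after it, so extract_separator() returns "_".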
def _get_layer_data(self):
# type: () -> LayerMetadata
# ______________________________________________
@@ -304,7 +353,7 @@
# ____________________/
_, scene_basename = os.path.split(cmds.file(q=True, loc=True))
scene_name, _ = os.path.splitext(scene_basename)
kwargs = {}
file_prefix = self.get_renderer_prefix()
# If the Render Layer belongs to a Render Setup layer then the
@@ -319,6 +368,13 @@
# defaultRenderLayer renders as masterLayer
layer_name = "masterLayer"
separator = self.extract_separator(file_prefix)
if separator:
kwargs["aov_separator"] = separator
else:
log.debug("Couldn't extract aov separator from "
"file prefix: {}".format(file_prefix))
# todo: Support Custom Frames sequences 0,5-10,100-120
# Deadline allows submitting renders with a custom frame list
# to support those cases we might want to allow 'custom frames'
@@ -335,7 +391,8 @@
layerName=layer_name,
renderer=self.renderer,
defaultExt=self._get_attr("defaultRenderGlobals.imfPluginKey"),
filePrefix=file_prefix
filePrefix=file_prefix,
**kwargs
)
def _generate_file_sequence(
@@ -680,9 +737,17 @@ class RenderProductsVray(ARenderProducts):
"""
prefix = super(RenderProductsVray, self).get_renderer_prefix()
prefix = "{}{}<aov>".format(prefix, self.aov_separator)
aov_separator = self._get_aov_separator()
prefix = "{}{}<aov>".format(prefix, aov_separator)
return prefix
def _get_aov_separator(self):
# type: () -> str
"""Return the V-Ray AOV/Render Elements separator"""
return self._get_attr(
"vraySettings.fileNameRenderElementSeparator"
)
def _get_layer_data(self):
# type: () -> LayerMetadata
"""Override to get vray specific extension."""
@@ -694,6 +759,8 @@
layer_data.defaultExt = default_ext
layer_data.padding = self._get_attr("vraySettings.fileNamePadding")
layer_data.aov_separator = self._get_aov_separator()
return layer_data
def get_render_products(self):
@@ -913,8 +980,9 @@ class RenderProductsRedshift(ARenderProducts):
:func:`ARenderProducts.get_renderer_prefix()`
"""
prefix = super(RenderProductsRedshift, self).get_renderer_prefix()
prefix = "{}{}<aov>".format(prefix, self.aov_separator)
file_prefix = super(RenderProductsRedshift, self).get_renderer_prefix()
separator = self.extract_separator(file_prefix)
prefix = "{}{}<aov>".format(file_prefix, separator or "_")
return prefix
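# Illustrative result: with the default Redshift prefix
# "maya/<Scene>/<RenderLayer>/<RenderLayer>" there is no AOV token, so
# extract_separator() returns None and the "_" fallback yields
# "maya/<Scene>/<RenderLayer>/<RenderLayer>_<aov>".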
def get_render_products(self):
@@ -1087,7 +1155,7 @@ class RenderProductsRenderman(ARenderProducts):
"d_tiff": "tif"
}
displays = get_displays()["displays"]
displays = get_displays(override_dst="render")["displays"]
for name, display in displays.items():
enabled = display["params"]["enable"]["value"]
if not enabled:
@@ -1106,9 +1174,33 @@
display["driverNode"]["type"], "exr")
for camera in cameras:
product = RenderProduct(productName=aov_name,
ext=extensions,
camera=camera)
# Create render product and set it as multipart only on
# display types supporting it. In all other cases, Renderman
# will create separate output per channel.
if display["driverNode"]["type"] in ["d_openexr", "d_deepexr", "d_tiff"]: # noqa
product = RenderProduct(
productName=aov_name,
ext=extensions,
camera=camera,
multipart=True
)
else:
# This code should handle the case where no multipart-capable
# format is selected. But since it involves shady logic to
# determine which channel becomes what, let's not do that,
# as all productions will use exr anyway.
"""
for channel in display['params']['displayChannels']['value']: # noqa
product = RenderProduct(
productName="{}_{}".format(aov_name, channel),
ext=extensions,
camera=camera,
multipart=False
)
"""
raise UnsupportedImageFormatException(
"Only exr, deep exr and tiff formats are supported.")
products.append(product)
return products
@@ -1201,3 +1293,7 @@ class UnsupportedRendererException(Exception):
Raised when requesting data from unsupported renderer.
"""
class UnsupportedImageFormatException(Exception):
"""Custom exception to report unsupported output image format."""

View file

@@ -0,0 +1,241 @@
# -*- coding: utf-8 -*-
"""Class for handling Render Settings."""
from maya import cmds # noqa
import maya.mel as mel
import six
import sys
from openpype.api import (
get_project_settings,
get_current_project_settings
)
from openpype.pipeline import legacy_io
from openpype.pipeline import CreatorError
from openpype.pipeline.context_tools import get_current_project_asset
from openpype.hosts.maya.api.commands import reset_frame_range
class RenderSettings(object):
_image_prefix_nodes = {
'vray': 'vraySettings.fileNamePrefix',
'arnold': 'defaultRenderGlobals.imageFilePrefix',
'renderman': 'defaultRenderGlobals.imageFilePrefix',
'redshift': 'defaultRenderGlobals.imageFilePrefix'
}
_image_prefixes = {
'vray': get_current_project_settings()["maya"]["RenderSettings"]["vray_renderer"]["image_prefix"], # noqa
'arnold': get_current_project_settings()["maya"]["RenderSettings"]["arnold_renderer"]["image_prefix"], # noqa
'renderman': 'maya/<Scene>/<layer>/<layer>{aov_separator}<aov>',
'redshift': get_current_project_settings()["maya"]["RenderSettings"]["redshift_renderer"]["image_prefix"] # noqa
}
_aov_chars = {
"dot": ".",
"dash": "-",
"underscore": "_"
}
@classmethod
def get_image_prefix_attr(cls, renderer):
return cls._image_prefix_nodes[renderer]
def __init__(self, project_settings=None):
self._project_settings = project_settings
if not self._project_settings:
self._project_settings = get_project_settings(
legacy_io.Session["AVALON_PROJECT"]
)
def set_default_renderer_settings(self, renderer=None):
"""Set basic settings based on renderer."""
if not renderer:
renderer = cmds.getAttr(
'defaultRenderGlobals.currentRenderer').lower()
asset_doc = get_current_project_asset()
# project_settings/maya/create/CreateRender/aov_separator
try:
aov_separator = self._aov_chars[(
self._project_settings["maya"]
["RenderSettings"]
["aov_separator"]
)]
except KeyError:
aov_separator = "_"
reset_frame = self._project_settings["maya"]["RenderSettings"]["reset_current_frame"] # noqa
if reset_frame:
start_frame = cmds.getAttr("defaultRenderGlobals.startFrame")
cmds.currentTime(start_frame, edit=True)
if renderer in self._image_prefix_nodes:
prefix = self._image_prefixes[renderer]
prefix = prefix.replace("{aov_separator}", aov_separator)
cmds.setAttr(self._image_prefix_nodes[renderer],
prefix, type="string") # noqa
else:
print("{0} isn't a supported renderer to autoset settings.".format(renderer)) # noqa
# TODO: handle not having res values in the doc
width = asset_doc["data"].get("resolutionWidth")
height = asset_doc["data"].get("resolutionHeight")
if renderer == "arnold":
# set renderer settings for Arnold from project settings
self._set_arnold_settings(width, height)
if renderer == "vray":
self._set_vray_settings(aov_separator, width, height)
if renderer == "redshift":
self._set_redshift_settings(width, height)
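# Worked example of the prefix substitution above (values illustrative):
# with aov_separator == "." the Renderman template
# 'maya/<Scene>/<layer>/<layer>{aov_separator}<aov>' becomes
# 'maya/<Scene>/<layer>/<layer>.<aov>'.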
def _set_arnold_settings(self, width, height):
"""Sets settings for Arnold."""
from mtoa.core import createOptions # noqa
from mtoa.aovs import AOVInterface # noqa
createOptions()
arnold_render_presets = self._project_settings["maya"]["RenderSettings"]["arnold_renderer"] # noqa
# Force resetting settings and AOV list to avoid having to deal with
# AOV checking logic, for now.
# This is a work around because the standard
# function to revert render settings does not reset AOVs list in MtoA
# Fetch current aovs in case there's any.
current_aovs = AOVInterface().getAOVs()
# Remove fetched AOVs
AOVInterface().removeAOVs(current_aovs)
mel.eval("unifiedRenderGlobalsRevertToDefault")
img_ext = arnold_render_presets["image_format"]
img_prefix = arnold_render_presets["image_prefix"]
aovs = arnold_render_presets["aov_list"]
img_tiled = arnold_render_presets["tiled"]
multi_exr = arnold_render_presets["multilayer_exr"]
additional_options = arnold_render_presets["additional_options"]
for aov in aovs:
AOVInterface('defaultArnoldRenderOptions').addAOV(aov)
cmds.setAttr("defaultResolution.width", width)
cmds.setAttr("defaultResolution.height", height)
self._set_global_output_settings()
cmds.setAttr(
"defaultRenderGlobals.imageFilePrefix", img_prefix, type="string")
cmds.setAttr(
"defaultArnoldDriver.ai_translator", img_ext, type="string")
cmds.setAttr(
"defaultArnoldDriver.exrTiled", img_tiled)
cmds.setAttr(
"defaultArnoldDriver.mergeAOVs", multi_exr)
# Passes additional options in from the schema as a list
# but converts it to a dictionary because ftrack doesn't
# allow fullstops in custom attributes. Then checks for
# type of MtoA attribute passed to adjust the `setAttr`
# command accordingly.
self._additional_attribs_setter(additional_options)
reset_frame_range()
def _set_redshift_settings(self, width, height):
"""Sets settings for Redshift."""
redshift_render_presets = (
self._project_settings
["maya"]
["RenderSettings"]
["redshift_renderer"]
)
additional_options = redshift_render_presets["additional_options"]
ext = redshift_render_presets["image_format"]
img_exts = ["iff", "exr", "tif", "png", "tga", "jpg"]
img_ext = img_exts.index(ext)
self._set_global_output_settings()
cmds.setAttr("redshiftOptions.imageFormat", img_ext)
cmds.setAttr("defaultResolution.width", width)
cmds.setAttr("defaultResolution.height", height)
self._additional_attribs_setter(additional_options)
def _set_vray_settings(self, aov_separator, width, height):
# type: (str, int, int) -> None
"""Sets important settings for Vray."""
settings = cmds.ls(type="VRaySettingsNode")
node = settings[0] if settings else cmds.createNode("VRaySettingsNode")
vray_render_presets = (
self._project_settings
["maya"]
["RenderSettings"]
["vray_renderer"]
)
# Set aov separator
# First we need to explicitly set the UI items in Render Settings
# because that is also what V-Ray updates to when that Render Settings
# UI did initialize before and refreshes again.
MENU = "vrayRenderElementSeparator"
if cmds.optionMenuGrp(MENU, query=True, exists=True):
items = cmds.optionMenuGrp(MENU, query=True, ill=True)
separators = [cmds.menuItem(i, query=True, label=True) for i in items] # noqa: E501
try:
sep_idx = separators.index(aov_separator)
except ValueError as e:
six.reraise(
CreatorError,
CreatorError(
"AOV character {} not in {}".format(
aov_separator, separators)),
sys.exc_info()[2])
cmds.optionMenuGrp(MENU, edit=True, select=sep_idx + 1)
# Set the render element attribute as string. This is also what V-Ray
# sets whenever the `vrayRenderElementSeparator` menu items switch
cmds.setAttr(
"{}.fileNameRenderElementSeparator".format(node),
aov_separator,
type="string"
)
# Set render file format to exr
cmds.setAttr("{}.imageFormatStr".format(node), "exr", type="string")
# animType
cmds.setAttr("{}.animType".format(node), 1)
# resolution
cmds.setAttr("{}.width".format(node), width)
cmds.setAttr("{}.height".format(node), height)
additional_options = vray_render_presets["additional_options"]
self._additional_attribs_setter(additional_options)
@staticmethod
def _set_global_output_settings():
# enable animation
cmds.setAttr("defaultRenderGlobals.outFormatControl", 0)
cmds.setAttr("defaultRenderGlobals.animation", 1)
cmds.setAttr("defaultRenderGlobals.putFrameBeforeExt", 1)
cmds.setAttr("defaultRenderGlobals.extensionPadding", 4)
def _additional_attribs_setter(self, additional_attribs):
print(additional_attribs)
for item in additional_attribs:
attribute, value = item
if (cmds.getAttr(str(attribute), type=True)) == "long":
cmds.setAttr(str(attribute), int(value))
elif (cmds.getAttr(str(attribute), type=True)) == "bool":
cmds.setAttr(str(attribute), int(value)) # noqa
elif (cmds.getAttr(str(attribute), type=True)) == "string":
cmds.setAttr(str(attribute), str(value), type = "string") # noqa

View file

@@ -6,11 +6,11 @@ from Qt import QtWidgets, QtGui
import maya.utils
import maya.cmds as cmds
from openpype.api import BuildWorkfile
from openpype.settings import get_project_settings
from openpype.pipeline import legacy_io
from openpype.pipeline.workfile import BuildWorkfile
from openpype.tools.utils import host_tools
from openpype.hosts.maya.api import lib
from openpype.hosts.maya.api import lib, lib_rendersettings
from .lib import get_main_window, IS_HEADLESS
from .commands import reset_frame_range
@@ -44,6 +44,7 @@ def install():
parent="MayaWindow"
)
renderer = cmds.getAttr('defaultRenderGlobals.currentRenderer').lower()
# Create context menu
context_label = "{}, {}".format(
legacy_io.Session["AVALON_ASSET"],
@@ -98,6 +99,13 @@ def install():
cmds.menuItem(divider=True)
cmds.menuItem(
"Set Render Settings",
command=lambda *args: lib_rendersettings.RenderSettings().set_default_renderer_settings() # noqa
)
cmds.menuItem(divider=True)
cmds.menuItem(
"Work Files...",
command=lambda *args: host_tools.show_workfiles(

View file

@@ -13,7 +13,6 @@ from openpype.host import HostBase, IWorkfileHost, ILoadHost
import openpype.hosts.maya
from openpype.tools.utils import host_tools
from openpype.lib import (
any_outdated,
register_event_callback,
emit_event
)
@@ -28,6 +27,7 @@ from openpype.pipeline import (
deregister_creator_plugin_path,
AVALON_CONTAINER_ID,
)
from openpype.pipeline.load import any_outdated_containers
from openpype.hosts.maya.lib import copy_workspace_mel
from . import menu, lib
from .workio import (
@@ -470,7 +470,7 @@ def on_open():
lib.validate_fps()
lib.fix_incompatible_containers()
if any_outdated():
if any_outdated_containers():
log.warning("Scene has outdated content.")
# Find maya main window

View file

@@ -208,7 +208,8 @@ class ReferenceLoader(Loader):
file_type = {
"ma": "mayaAscii",
"mb": "mayaBinary",
"abc": "Alembic"
"abc": "Alembic",
"fbx": "FBX"
}.get(representation["name"])
assert file_type, "Unsupported representation: %s" % representation
@@ -234,7 +235,7 @@
path = self.prepare_root_value(path,
representation["context"]
["project"]
["code"])
["name"])
content = cmds.file(path,
loadReference=reference_node,
type=file_type,

View file

@@ -6,7 +6,7 @@ Shader names are stored as simple text file over GridFS in mongodb.
"""
import os
from Qt import QtWidgets, QtCore, QtGui
from openpype.lib.mongo import OpenPypeMongoConnection
from openpype.client.mongo import OpenPypeMongoConnection
from openpype import resources
import gridfs

View file

@@ -11,6 +11,7 @@ class CreateAnimation(plugin.Creator):
label = "Animation"
family = "animation"
icon = "male"
write_color_sets = False
def __init__(self, *args, **kwargs):
super(CreateAnimation, self).__init__(*args, **kwargs)
@@ -22,7 +23,7 @@ class CreateAnimation(plugin.Creator):
self.data[key] = value
# Write vertex colors with the geometry.
self.data["writeColorSets"] = False
self.data["writeColorSets"] = self.write_color_sets
self.data["writeFaceSets"] = False
# Include only renderable visible shapes.

View file

@@ -11,6 +11,7 @@ class CreatePointCache(plugin.Creator):
label = "Point Cache"
family = "pointcache"
icon = "gears"
write_color_sets = False
def __init__(self, *args, **kwargs):
super(CreatePointCache, self).__init__(*args, **kwargs)
@@ -18,7 +19,8 @@ class CreatePointCache(plugin.Creator):
# Add animation data
self.data.update(lib.collect_animation_data())
self.data["writeColorSets"] = False # Vertex colors with the geometry.
# Vertex colors with the geometry.
self.data["writeColorSets"] = self.write_color_sets
self.data["writeFaceSets"] = False # Vertex colors with the geometry.
self.data["renderableOnly"] = False # Only renderable visible shapes
self.data["visibleOnly"] = False # only nodes that are visible

View file

@@ -1,27 +1,34 @@
# -*- coding: utf-8 -*-
"""Create ``Render`` instance in Maya."""
import os
import json
import os
import appdirs
import requests
from maya import cmds
import maya.app.renderSetup.model.renderSetup as renderSetup
from maya.app.renderSetup.model import renderSetup
from openpype.api import (
get_system_settings,
get_project_settings,
)
from openpype.hosts.maya.api import (
lib,
lib_rendersettings,
plugin
)
from openpype.lib import requests_get
from openpype.api import (
get_system_settings,
get_project_settings,
get_asset)
get_project_settings)
from openpype.modules import ModulesManager
from openpype.pipeline import legacy_io
from openpype.pipeline import (
CreatorError,
legacy_io,
)
from openpype.pipeline.context_tools import get_current_project_asset
class CreateRender(plugin.Creator):
@@ -69,35 +76,6 @@ class CreateRender(plugin.Creator):
_user = None
_password = None
# renderSetup instance
_rs = None
_image_prefix_nodes = {
'mentalray': 'defaultRenderGlobals.imageFilePrefix',
'vray': 'vraySettings.fileNamePrefix',
'arnold': 'defaultRenderGlobals.imageFilePrefix',
'renderman': 'rmanGlobals.imageFileFormat',
'redshift': 'defaultRenderGlobals.imageFilePrefix',
'mayahardware2': 'defaultRenderGlobals.imageFilePrefix',
}
_image_prefixes = {
'mentalray': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>', # noqa
'vray': 'maya/<scene>/<Layer>/<Layer>',
'arnold': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>', # noqa
# this needs `imageOutputDir`
# (<ws>/renders/maya/<scene>) set separately
'renderman': '<layer>_<aov>.<f4>.<ext>',
'redshift': 'maya/<Scene>/<RenderLayer>/<RenderLayer>', # noqa
'mayahardware2': 'maya/<Scene>/<RenderLayer>/<RenderLayer>', # noqa
}
_aov_chars = {
"dot": ".",
"dash": "-",
"underscore": "_"
}
_project_settings = None
def __init__(self, *args, **kwargs):
@@ -109,18 +87,8 @@
return
self._project_settings = get_project_settings(
legacy_io.Session["AVALON_PROJECT"])
# project_settings/maya/create/CreateRender/aov_separator
try:
self.aov_separator = self._aov_chars[(
self._project_settings["maya"]
["create"]
["CreateRender"]
["aov_separator"]
)]
except KeyError:
self.aov_separator = "_"
if self._project_settings["maya"]["RenderSettings"]["apply_render_settings"]: # noqa
lib_rendersettings.RenderSettings().set_default_renderer_settings()
manager = ModulesManager()
self.deadline_module = manager.modules_by_name["deadline"]
try:
@@ -177,13 +145,13 @@
])
cmds.setAttr("{}.machineList".format(self.instance), lock=True)
self._rs = renderSetup.instance()
layers = self._rs.getRenderLayers()
rs = renderSetup.instance()
layers = rs.getRenderLayers()
if use_selection:
print(">>> processing existing layers")
self.log.info("Processing existing layers")
sets = []
for layer in layers:
print(" - creating set for {}:{}".format(
self.log.info(" - creating set for {}:{}".format(
namespace, layer.name()))
render_set = cmds.sets(
n="{}:{}".format(namespace, layer.name()))
@@ -193,17 +161,10 @@
# if no render layers are present, create default one with
# asterisk selector
if not layers:
render_layer = self._rs.createRenderLayer('Main')
render_layer = rs.createRenderLayer('Main')
collection = render_layer.createCollection("defaultCollection")
collection.getSelector().setPattern('*')
renderer = cmds.getAttr(
'defaultRenderGlobals.currentRenderer').lower()
# handle various renderman names
if renderer.startswith('renderman'):
renderer = 'renderman'
self._set_default_renderer_settings(renderer)
return self.instance
def _deadline_webservice_changed(self):
@@ -237,7 +198,7 @@
def _create_render_settings(self):
"""Create instance settings."""
# get pools
# get pools (slave machines of the render farm)
pool_names = []
default_priority = 50
@@ -281,7 +242,8 @@
# if the 'default' server is not among the selected ones,
# use the first one for the initial list of pools.
deadline_url = next(iter(self.deadline_servers.values()))
# Query pool names from the Deadline URL assigned in settings
# using the Deadline module.
pool_names = self.deadline_module.get_deadline_pools(deadline_url,
self.log)
maya_submit_dl = self._project_settings.get(
@@ -400,102 +362,36 @@
self.log.error("Cannot show login form to Muster")
raise Exception("Cannot show login form to Muster")
def _set_default_renderer_settings(self, renderer):
"""Set basic settings based on renderer.
def _requests_post(self, *args, **kwargs):
"""Wrap request post method.
Args:
renderer (str): Renderer name.
SSL certificate validation is disabled if the ``OPENPYPE_DONT_VERIFY_SSL``
environment variable is set. This is useful when the Deadline or Muster
server runs with a self-signed certificate that is not added to the
trusted certificates on client machines.
Warning:
Disabling SSL certificate validation is defeating one line
of defense SSL is providing and it is not recommended.
"""
prefix = self._image_prefixes[renderer]
prefix = prefix.replace("{aov_separator}", self.aov_separator)
cmds.setAttr(self._image_prefix_nodes[renderer],
prefix,
type="string")
if "verify" not in kwargs:
kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
return requests.post(*args, **kwargs)
asset = get_asset()
def _requests_get(self, *args, **kwargs):
"""Wrap request get method.
if renderer == "arnold":
# set format to exr
SSL certificate validation is disabled if the ``OPENPYPE_DONT_VERIFY_SSL``
environment variable is set. This is useful when the Deadline or Muster
server runs with a self-signed certificate that is not added to the
trusted certificates on client machines.
cmds.setAttr(
"defaultArnoldDriver.ai_translator", "exr", type="string")
self._set_global_output_settings()
# resolution
cmds.setAttr(
"defaultResolution.width",
asset["data"].get("resolutionWidth"))
cmds.setAttr(
"defaultResolution.height",
asset["data"].get("resolutionHeight"))
Warning:
Disabling SSL certificate validation is defeating one line
of defense SSL is providing and it is not recommended.
if renderer == "vray":
self._set_vray_settings(asset)
if renderer == "redshift":
cmds.setAttr("redshiftOptions.imageFormat", 1)
# resolution
cmds.setAttr(
"defaultResolution.width",
asset["data"].get("resolutionWidth"))
cmds.setAttr(
"defaultResolution.height",
asset["data"].get("resolutionHeight"))
self._set_global_output_settings()
if renderer == "renderman":
cmds.setAttr("rmanGlobals.imageOutputDir",
"maya/<scene>/<layer>", type="string")
def _set_vray_settings(self, asset):
# type: (dict) -> None
"""Sets important settings for Vray."""
settings = cmds.ls(type="VRaySettingsNode")
node = settings[0] if settings else cmds.createNode("VRaySettingsNode")
# set separator
# set it in vray menu
if cmds.optionMenuGrp("vrayRenderElementSeparator", exists=True,
q=True):
items = cmds.optionMenuGrp(
"vrayRenderElementSeparator", ill=True, query=True)
separators = [cmds.menuItem(i, label=True, query=True) for i in items] # noqa: E501
try:
sep_idx = separators.index(self.aov_separator)
except ValueError:
raise CreatorError(
"AOV character {} not in {}".format(
self.aov_separator, separators))
cmds.optionMenuGrp(
"vrayRenderElementSeparator", sl=sep_idx + 1, edit=True)
cmds.setAttr(
"{}.fileNameRenderElementSeparator".format(node),
self.aov_separator,
type="string"
)
# set format to exr
cmds.setAttr(
"{}.imageFormatStr".format(node), "exr", type="string")
# animType
cmds.setAttr(
"{}.animType".format(node), 1)
# resolution
cmds.setAttr(
"{}.width".format(node),
asset["data"].get("resolutionWidth"))
cmds.setAttr(
"{}.height".format(node),
asset["data"].get("resolutionHeight"))
@staticmethod
def _set_global_output_settings():
# enable animation
cmds.setAttr("defaultRenderGlobals.outFormatControl", 0)
cmds.setAttr("defaultRenderGlobals.animation", 1)
cmds.setAttr("defaultRenderGlobals.putFrameBeforeExt", 1)
cmds.setAttr("defaultRenderGlobals.extensionPadding", 4)
"""
if "verify" not in kwargs:
kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
return requests.get(*args, **kwargs)
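# Hedged usage sketch (the endpoint and URL are illustrative assumptions):
# response = self._requests_get("{}/api/pools".format(deadline_url))
# response.raise_for_status()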

View file

@@ -551,7 +551,9 @@ class CollectLook(pyblish.api.InstancePlugin):
if cmds.getAttr(attribute, type=True) == "message":
continue
node_attributes[attr] = cmds.getAttr(attribute)
# Only include if there are any properties we care about
if not node_attributes:
continue
attributes.append({"name": node,
"uuid": lib.get_id(node),
"attributes": node_attributes})

View file

@@ -72,7 +72,6 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
def process(self, context):
"""Entry point to collector."""
render_instance = None
deadline_url = None
for instance in context:
if "rendering" in instance.data["families"]:
@@ -96,23 +95,12 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
asset = legacy_io.Session["AVALON_ASSET"]
workspace = context.data["workspaceDir"]
deadline_settings = (
context.data
["system_settings"]
["modules"]
["deadline"]
)
if deadline_settings["enabled"]:
deadline_url = render_instance.data.get("deadlineUrl")
self._rs = renderSetup.instance()
current_layer = self._rs.getVisibleRenderLayer()
# Retrieve render setup layers
rs = renderSetup.instance()
maya_render_layers = {
layer.name(): layer for layer in self._rs.getRenderLayers()
layer.name(): layer for layer in rs.getRenderLayers()
}
self.maya_layers = maya_render_layers
for layer in collected_render_layers:
try:
if layer.startswith("LAYER_"):
@@ -147,49 +135,34 @@
self.log.warning(msg)
continue
# test if there are sets (subsets) to attach render to
# detect if there are sets (subsets) to attach render to
sets = cmds.sets(layer, query=True) or []
attach_to = []
if sets:
for s in sets:
if "family" not in cmds.listAttr(s):
continue
for s in sets:
if not cmds.attributeQuery("family", node=s, exists=True):
continue
attach_to.append(
{
"version": None, # we need integrator for that
"subset": s,
"family": cmds.getAttr("{}.family".format(s)),
}
)
self.log.info(" -> attach render to: {}".format(s))
attach_to.append(
{
"version": None, # we need integrator for that
"subset": s,
"family": cmds.getAttr("{}.family".format(s)),
}
)
self.log.info(" -> attach render to: {}".format(s))
layer_name = "rs_{}".format(expected_layer_name)
# collect all frames we are expecting to be rendered
renderer = cmds.getAttr(
"defaultRenderGlobals.currentRenderer"
).lower()
renderer = self.get_render_attribute("currentRenderer",
layer=layer_name)
# handle various renderman names
if renderer.startswith("renderman"):
renderer = "renderman"
try:
aov_separator = self._aov_chars[(
context.data["project_settings"]
["create"]
["CreateRender"]
["aov_separator"]
)]
except KeyError:
aov_separator = "_"
render_instance.data["aovSeparator"] = aov_separator
# return all expected files for all cameras and aovs in given
# frame range
layer_render_products = get_layer_render_products(
layer_name, render_instance)
layer_render_products = get_layer_render_products(layer_name)
render_products = layer_render_products.layer_data.products
assert render_products, "no render products generated"
exp_files = []
@@ -226,13 +199,11 @@
)
# append full path
full_exp_files = []
aov_dict = {}
default_render_file = context.data.get('project_settings')\
.get('maya')\
.get('create')\
.get('CreateRender')\
.get('default_render_image_folder')
.get('RenderSettings')\
.get('default_render_image_folder') or ""
# replace relative paths with absolute. Render products are
# returned as list of dictionaries.
publish_meta_path = None
@@ -246,6 +217,7 @@
full_paths.append(full_path)
publish_meta_path = os.path.dirname(full_path)
aov_dict[aov_first_key] = full_paths
full_exp_files = [aov_dict]
frame_start_render = int(self.get_render_attribute(
"startFrame", layer=layer_name))
@@ -269,8 +241,6 @@
frame_start_handle = frame_start_render
frame_end_handle = frame_end_render
full_exp_files.append(aov_dict)
# find common path to store metadata
# so if image prefix is branching to many directories
# metadata file will be located in top-most common
@@ -299,16 +269,6 @@
self.log.info("collecting layer: {}".format(layer_name))
# Get layer specific settings, might be overrides
try:
aov_separator = self._aov_chars[(
context.data["project_settings"]
["create"]
["CreateRender"]
["aov_separator"]
)]
except KeyError:
aov_separator = "_"
data = {
"subset": expected_layer_name,
"attachTo": attach_to,
@@ -357,11 +317,15 @@
"useReferencedAovs": render_instance.data.get(
"useReferencedAovs") or render_instance.data.get(
"vrayUseReferencedAovs") or False,
"aovSeparator": aov_separator
"aovSeparator": layer_render_products.layer_data.aov_separator # noqa: E501
}
if deadline_url:
data["deadlineUrl"] = deadline_url
# Collect Deadline url if Deadline module is enabled
deadline_settings = (
context.data["system_settings"]["modules"]["deadline"]
)
if deadline_settings["enabled"]:
data["deadlineUrl"] = render_instance.data.get("deadlineUrl")
if self.sync_workfile_version:
data["version"] = context.data["version"]
@@ -370,19 +334,6 @@
if instance.data['family'] == "workfile":
instance.data["version"] = context.data["version"]
# Apply each user defined attribute as data
for attr in cmds.listAttr(layer, userDefined=True) or list():
try:
value = cmds.getAttr("{}.{}".format(layer, attr))
except Exception:
# Some attributes cannot be read directly,
# such as mesh and color attributes. These
# are considered non-essential to this
# particular publishing pipeline.
value = None
data[attr] = value
# handle standalone renderers
if render_instance.data.get("vrayScene") is True:
data["families"].append("vrayscene_render")
@@ -490,10 +441,6 @@
return pool_a, pool_b
def _get_overrides(self, layer):
rset = self.maya_layers[layer].renderSettingsCollectionInstance()
return rset.getOverrides()
@staticmethod
def get_render_attribute(attr, layer):
"""Get attribute from render options.

View file

@@ -27,6 +27,29 @@ def escape_space(path):
return '"{}"'.format(path) if " " in path else path
def get_ocio_config_path(profile_folder):
"""Path to OpenPype vendorized OCIO.
Vendorized OCIO config file path is grabbed from the specific path
hierarchy specified below.
"{OPENPYPE_ROOT}/vendor/OpenColorIO-Configs/{profile_folder}/config.ocio"
Args:
profile_folder (str): Name of folder to grab config file from.
Returns:
str: Path to vendorized config file.
"""
return os.path.join(
os.environ["OPENPYPE_ROOT"],
"vendor",
"configs",
"OpenColorIO-Configs",
profile_folder,
"config.ocio"
)
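# Illustrative call: get_ocio_config_path("nuke-default") resolves to
# "<OPENPYPE_ROOT>/vendor/configs/OpenColorIO-Configs/nuke-default/config.ocio"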
def find_paths_by_hash(texture_hash):
"""Find the texture hash key in the dictionary.
@@ -79,10 +102,11 @@ def maketx(source, destination, *args):
# use oiio-optimized settings for tile-size, planarconfig, metadata
"--oiio",
"--filter lanczos3",
escape_space(source)
]
cmd.extend(args)
cmd.extend(["-o", escape_space(destination), escape_space(source)])
cmd.extend(["-o", escape_space(destination)])
cmd = " ".join(cmd)
@@ -405,7 +429,19 @@ class ExtractLook(openpype.api.Extractor):
# node doesn't have color space attribute
color_space = "Raw"
else:
if files_metadata[source]["color_space"] == "Raw":
# get the resolved files
metadata = files_metadata.get(source)
# if the files are unresolved from `source`
# assume color space from the first file of
# the resource
if not metadata:
first_file = next(iter(resource.get(
"files", [])), None)
if not first_file:
continue
first_filepath = os.path.normpath(first_file)
metadata = files_metadata[first_filepath]
if metadata["color_space"] == "Raw":
# set color space to raw if we linearized it
color_space = "Raw"
# Remap file node filename to destination
@@ -493,6 +529,8 @@
else:
colorconvert = ""
config_path = get_ocio_config_path("nuke-default")
color_config = "--colorconfig {0}".format(config_path)
# Ensure folder exists
if not os.path.exists(os.path.dirname(converted)):
os.makedirs(os.path.dirname(converted))
@@ -502,10 +540,11 @@
filepath,
converted,
# Include `source-hash` as string metadata
"-sattrib",
"--sattrib",
"sourceHash",
escape_space(texture_hash),
colorconvert,
color_config
)
return converted, COPY, texture_hash

View file

@@ -128,8 +128,10 @@ class ExtractPlayblast(openpype.api.Extractor):
# Update preset with current panel setting
# if override_viewport_options is turned off
if not override_viewport_options:
panel = cmds.getPanel(with_focus=True)
panel_preset = capture.parse_active_view()
preset.update(panel_preset)
cmds.setFocus(panel)
path = capture.capture(**preset)

View file

@@ -100,6 +100,13 @@ class ExtractThumbnail(openpype.api.Extractor):
# camera.
if preset.pop("isolate_view", False) and instance.data.get("isolate"):
preset["isolate"] = instance.data["setMembers"]
# Show or Hide Image Plane
image_plane = instance.data.get("imagePlane", True)
if "viewport_options" in preset:
preset["viewport_options"]["imagePlane"] = image_plane
else:
preset["viewport_options"] = {"imagePlane": image_plane}
with lib.maintained_time():
# Force viewer to False in call to capture because we have our own
@@ -110,14 +117,17 @@
# Update preset with current panel setting
# if override_viewport_options is turned off
if not override_viewport_options:
panel = cmds.getPanel(with_focus=True)
panel_preset = capture.parse_active_view()
preset.update(panel_preset)
cmds.setFocus(panel)
path = capture.capture(**preset)
playblast = self._fix_playblast_output_path(path)
_, thumbnail = os.path.split(playblast)
self.log.info("file list {}".format(thumbnail))
if "representations" not in instance.data:

View file

@@ -78,14 +78,13 @@ class ValidateLookContents(pyblish.api.InstancePlugin):
# Check if attributes are on a node with an ID, crucial for rebuild!
for attr_changes in lookdata["attributes"]:
if not attr_changes["uuid"]:
if not attr_changes["uuid"] and not attr_changes["attributes"]:
cls.log.error("Node '%s' has no cbId, please set the "
"attributes to its children if it has any"
% attr_changes["name"])
invalid.add(instance.name)
return list(invalid)
@classmethod
def validate_looks(cls, instance):

View file

@@ -2,8 +2,8 @@ import maya.cmds as cmds
import pyblish.api
import openpype.api
from openpype import lib
import openpype.hosts.maya.api.lib as mayalib
from openpype.pipeline.context_tools import get_current_project_asset
from math import ceil
@@ -41,7 +41,9 @@ class ValidateMayaUnits(pyblish.api.ContextPlugin):
# now flooring the value?
fps = float_round(context.data.get('fps'), 2, ceil)
asset_fps = lib.get_asset()["data"]["fps"]
# TODO: replace the query with 'context.data["assetEntity"]'
asset_doc = get_current_project_asset()
asset_fps = asset_doc["data"]["fps"]
self.log.info('Units (linear): {0}'.format(linearunits))
self.log.info('Units (angular): {0}'.format(angularunits))
@@ -91,5 +93,7 @@
cls.log.debug(current_linear)
cls.log.info("Setting time unit to match project")
asset_fps = lib.get_asset()["data"]["fps"]
# TODO: replace the query with 'context.data["assetEntity"]'
asset_doc = get_current_project_asset()
asset_fps = asset_doc["data"]["fps"]
mayalib.set_scene_fps(asset_fps)

View file

@@ -10,7 +10,7 @@ from openpype.pipeline import legacy_io
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api.shader_definition_editor import (
DEFINITION_FILENAME)
from openpype.lib.mongo import OpenPypeMongoConnection
from openpype.client.mongo import OpenPypeMongoConnection
import gridfs

View file

@@ -1,17 +1,15 @@
import maya.mel as mel
import pymel.core as pm
from maya import cmds
import pyblish.api
import openpype.api
def get_file_rule(rule):
"""Workaround for a bug in python with cmds.workspace"""
return mel.eval('workspace -query -fileRuleEntry "{}"'.format(rule))
class ValidateRenderImageRule(pyblish.api.InstancePlugin):
"""Validates "images" file rule is set to "renders/"
"""Validates Maya Workpace "images" file rule matches project settings.
This validates against the configured default render image folder:
Studio Settings > Project > Maya >
Render Settings > Default render image folder.
"""
@@ -23,24 +21,29 @@ class ValidateRenderImageRule(pyblish.api.InstancePlugin):
def process(self, instance):
default_render_file = self.get_default_render_image_folder(instance)
required_images_rule = self.get_default_render_image_folder(instance)
current_images_rule = cmds.workspace(fileRuleEntry="images")
assert get_file_rule("images") == default_render_file, (
"Workspace's `images` file rule must be set to: {}".format(
default_render_file
assert current_images_rule == required_images_rule, (
"Invalid workspace `images` file rule value: '{}'. "
"Must be set to: '{}'".format(
current_images_rule, required_images_rule
)
)
@classmethod
def repair(cls, instance):
default = cls.get_default_render_image_folder(instance)
pm.workspace.fileRules["images"] = default
pm.system.Workspace.save()
required_images_rule = cls.get_default_render_image_folder(instance)
current_images_rule = cmds.workspace(fileRuleEntry="images")
if current_images_rule != required_images_rule:
cmds.workspace(fileRule=("images", required_images_rule))
cmds.workspace(saveWorkspace=True)
@staticmethod
def get_default_render_image_folder(instance):
return instance.context.data.get('project_settings')\
.get('maya') \
.get('create') \
.get('CreateRender') \
.get('RenderSettings') \
.get('default_render_image_folder')

View file

@@ -1,20 +1,11 @@
import re
import pyblish.api
import openpype.api
import openpype.hosts.maya.api.action
from maya import cmds
ImagePrefixes = {
'mentalray': 'defaultRenderGlobals.imageFilePrefix',
'vray': 'vraySettings.fileNamePrefix',
'arnold': 'defaultRenderGlobals.imageFilePrefix',
'renderman': 'defaultRenderGlobals.imageFilePrefix',
'redshift': 'defaultRenderGlobals.imageFilePrefix',
'mayahardware2': 'defaultRenderGlobals.imageFilePrefix',
}
import openpype.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api.render_settings import RenderSettings
class ValidateRenderSingleCamera(pyblish.api.InstancePlugin):
@ -47,7 +38,11 @@ class ValidateRenderSingleCamera(pyblish.api.InstancePlugin):
# handle various renderman names
if renderer.startswith('renderman'):
renderer = 'renderman'
file_prefix = cmds.getAttr(ImagePrefixes[renderer])
file_prefix = cmds.getAttr(
RenderSettings.get_image_prefix_attr(renderer)
)
if len(cameras) > 1:
if re.search(cls.R_CAMERA_TOKEN, file_prefix):
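
The hard-coded `ImagePrefixes` map removed above now lives behind `RenderSettings.get_image_prefix_attr`. A sketch of the equivalent lookup, assuming the method keeps the same mapping as the removed dict (`vray` to `vraySettings.fileNamePrefix`, everything else to `defaultRenderGlobals.imageFilePrefix`):

from maya import cmds
from openpype.hosts.maya.api.render_settings import RenderSettings

renderer = cmds.getAttr("defaultRenderGlobals.currentRenderer").lower()
if renderer.startswith("renderman"):
    renderer = "renderman"  # normalize the various renderman names

# resolve the renderer-specific image prefix attribute and read it
file_prefix = cmds.getAttr(RenderSettings.get_image_prefix_attr(renderer))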

View file

@ -6,7 +6,7 @@ from openpype.pipeline import PublishXmlValidationError
class ValidateReviewSubsetUniqueness(pyblish.api.ContextPlugin):
"""Validates that nodes has common root."""
"""Validates that review subset has unique name."""
order = openpype.api.ValidateContentsOrder
hosts = ["maya"]
@ -17,7 +17,7 @@ class ValidateReviewSubsetUniqueness(pyblish.api.ContextPlugin):
subset_names = []
for instance in context:
self.log.info("instance:: {}".format(instance.data))
self.log.debug("Instance: {}".format(instance.data))
if instance.data.get('publish'):
subset_names.append(instance.data.get('subset'))

View file

@ -4,8 +4,7 @@ import openpype.api
class ValidateSetdressRoot(pyblish.api.InstancePlugin):
"""
"""
"""Validate if set dress top root node is published."""
order = openpype.api.ValidateContentsOrder
label = "SetDress Root"

View file

@ -21,10 +21,7 @@ from openpype.client import (
)
from openpype.api import (
Logger,
BuildWorkfile,
get_version_from_path,
get_workdir_data,
get_asset,
get_current_project_settings,
)
from openpype.tools.utils import host_tools
@ -35,11 +32,17 @@ from openpype.settings import (
get_anatomy_settings,
)
from openpype.modules import ModulesManager
from openpype.pipeline.template_data import get_template_data_with_names
from openpype.pipeline import (
discover_legacy_creator_plugins,
legacy_io,
Anatomy,
)
from openpype.pipeline.context_tools import (
get_current_project_asset,
get_custom_workfile_template_from_session
)
from openpype.pipeline.workfile import BuildWorkfile
from . import gizmo_menu
@ -910,19 +913,17 @@ def get_render_path(node):
''' Generate Render path from presets regarding avalon knob data
'''
avalon_knob_data = read_avalon_data(node)
data = {'avalon': avalon_knob_data}
nuke_imageio_writes = get_imageio_node_setting(
node_class=avalon_knob_data["family"],
node_class=avalon_knob_data["families"],
plugin_name=avalon_knob_data["creator"],
subset=avalon_knob_data["subset"]
)
host_name = os.environ.get("AVALON_APP")
data.update({
"app": host_name,
data = {
"avalon": avalon_knob_data,
"nuke_imageio_writes": nuke_imageio_writes
})
}
anatomy_filled = format_anatomy(data)
return anatomy_filled["render"]["path"].replace("\\", "/")
@ -965,12 +966,11 @@ def format_anatomy(data):
data["version"] = get_version_from_path(file)
project_name = anatomy.project_name
project_doc = get_project(project_name)
asset_doc = get_asset_by_name(project_name, data["avalon"]["asset"])
asset_name = data["avalon"]["asset"]
task_name = os.environ["AVALON_TASK"]
host_name = os.environ["AVALON_APP"]
context_data = get_workdir_data(
project_doc, asset_doc, task_name, host_name
context_data = get_template_data_with_names(
project_name, asset_name, task_name, host_name
)
data.update(context_data)
data.update({
@ -1128,10 +1128,8 @@ def create_write_node(
if knob["name"] == "file_type":
representation = knob["value"]
host_name = os.environ.get("AVALON_APP")
try:
data.update({
"app": host_name,
"imageio_writes": imageio_writes,
"representation": representation,
})
@ -1766,7 +1764,7 @@ class WorkfileSettings(object):
kwargs.get("asset_name")
or legacy_io.Session["AVALON_ASSET"]
)
self._asset_entity = get_asset(self._asset)
self._asset_entity = get_current_project_asset(self._asset)
self._root_node = root_node or nuke.root()
self._nodes = self.get_nodes(nodes=nodes)
@ -1925,7 +1923,7 @@ class WorkfileSettings(object):
families.append(avalon_knob_data.get("families"))
nuke_imageio_writes = get_imageio_node_setting(
node_class=avalon_knob_data["family"],
node_class=avalon_knob_data["families"],
plugin_name=avalon_knob_data["creator"],
subset=avalon_knob_data["subset"]
)
@ -2224,7 +2222,7 @@ def get_write_node_template_attr(node):
avalon_knob_data = read_avalon_data(node)
# get template data
nuke_imageio_writes = get_imageio_node_setting(
node_class=avalon_knob_data["family"],
node_class=avalon_knob_data["families"],
plugin_name=avalon_knob_data["creator"],
subset=avalon_knob_data["subset"]
)
@ -2440,22 +2438,21 @@ def _launch_workfile_app():
if starting_up or closing_down:
return
from .pipeline import get_main_window
main_window = get_main_window()
host_tools.show_workfiles(parent=main_window)
# Make sure on top is enabled on first show so the window is not hidden
# under the main Nuke window
# - this happened on CentOS 7 because the focus of Nuke changes to the
# main window after showing (due to initialization), which moved the
# workfiles tool under it
host_tools.show_workfiles(parent=None, on_top=True)
def process_workfile_builder():
from openpype.lib import (
env_value_to_bool,
get_custom_workfile_template
)
# to avoid looping of the callback, remove it!
nuke.removeOnCreate(process_workfile_builder, nodeClass="Root")
# get state from settings
workfile_builder = get_current_project_settings()["nuke"].get(
project_settings = get_current_project_settings()
workfile_builder = project_settings["nuke"].get(
"workfile_builder", {})
# get all important settings
@ -2465,7 +2462,6 @@ def process_workfile_builder():
# get settings
createfv_on = workfile_builder.get("create_first_version") or None
custom_templates = workfile_builder.get("custom_templates") or None
builder_on = workfile_builder.get("builder_on_start") or None
last_workfile_path = os.environ.get("AVALON_LAST_WORKFILE")
@ -2473,8 +2469,8 @@ def process_workfile_builder():
# generate first version in file not existing and feature is enabled
if createfv_on and not os.path.exists(last_workfile_path):
# get custom template path if any
custom_template_path = get_custom_workfile_template(
custom_templates
custom_template_path = get_custom_workfile_template_from_session(
project_settings=project_settings
)
# if custom template is defined

View file

@ -9,7 +9,6 @@ import pyblish.api
import openpype
from openpype.api import (
Logger,
BuildWorkfile,
get_current_project_settings
)
from openpype.lib import register_event_callback
@ -22,6 +21,7 @@ from openpype.pipeline import (
deregister_inventory_action_path,
AVALON_CONTAINER_ID,
)
from openpype.pipeline.workfile import BuildWorkfile
from openpype.tools.utils import host_tools
from .command import viewer_update_and_undo_stop
@ -142,6 +142,14 @@ def uninstall():
_uninstall_menu()
def _show_workfiles():
# Make sure parent is not set
# - this makes the Workfiles tool a separate window, which
# avoids issues with reopening
# - it is possible to explicitly change the on top flag of the tool
host_tools.show_workfiles(parent=None, on_top=False)
def _install_menu():
# uninstall original avalon menu
main_window = get_main_window()
@ -158,7 +166,7 @@ def _install_menu():
menu.addSeparator()
menu.addCommand(
"Work Files...",
lambda: host_tools.show_workfiles(parent=main_window)
_show_workfiles
)
menu.addSeparator()

View file

@ -181,8 +181,6 @@ class ExporterReview(object):
# get first and last frame
self.first_frame = min(self.collection.indexes)
self.last_frame = max(self.collection.indexes)
if "slate" in self.instance.data["families"]:
self.first_frame += 1
else:
self.fname = os.path.basename(self.path_in)
self.fhead = os.path.splitext(self.fname)[0] + "."

View file

@ -54,20 +54,28 @@ class LoadClip(plugin.NukeLoader):
script_start = int(nuke.root()["first_frame"].value())
# option gui
defaults = {
"start_at_workfile": True
options_defaults = {
"start_at_workfile": True,
"add_retime": True
}
options = [
qargparse.Boolean(
"start_at_workfile",
help="Load at workfile start frame",
default=True
)
]
node_name_template = "{class_name}_{ext}"
@classmethod
def get_options(cls, *args):
return [
qargparse.Boolean(
"start_at_workfile",
help="Load at workfile start frame",
default=cls.options_defaults["start_at_workfile"]
),
qargparse.Boolean(
"add_retime",
help="Load with retime",
default=cls.options_defaults["add_retime"]
)
]
@classmethod
def get_representations(cls):
return (
@ -86,7 +94,10 @@ class LoadClip(plugin.NukeLoader):
file = self.fname.replace("\\", "/")
start_at_workfile = options.get(
"start_at_workfile", self.defaults["start_at_workfile"])
"start_at_workfile", self.options_defaults["start_at_workfile"])
add_retime = options.get(
"add_retime", self.options_defaults["add_retime"])
version = context['version']
version_data = version.get("data", {})
@ -151,7 +162,7 @@ class LoadClip(plugin.NukeLoader):
data_imprint = {}
for k in add_keys:
if k == 'version':
data_imprint.update({k: context["version"]['name']})
data_imprint[k] = context["version"]['name']
elif k == 'colorspace':
colorspace = repre["data"].get(k)
colorspace = colorspace or version_data.get(k)
@ -159,10 +170,13 @@ class LoadClip(plugin.NukeLoader):
if used_colorspace:
data_imprint["used_colorspace"] = used_colorspace
else:
data_imprint.update(
{k: context["version"]['data'].get(k, str(None))})
data_imprint[k] = context["version"]['data'].get(
k, str(None))
data_imprint.update({"objectName": read_name})
data_imprint["objectName"] = read_name
if add_retime and version_data.get("retime", None):
data_imprint["addRetime"] = True
read_node["tile_color"].setValue(int("0x4ecd25ff", 16))
@ -174,7 +188,7 @@ class LoadClip(plugin.NukeLoader):
loader=self.__class__.__name__,
data=data_imprint)
if version_data.get("retime", None):
if add_retime and version_data.get("retime", None):
self._make_retimes(read_node, version_data)
self.set_as_member(read_node)
@ -198,7 +212,12 @@ class LoadClip(plugin.NukeLoader):
read_node = nuke.toNode(container['objectName'])
file = get_representation_path(representation).replace("\\", "/")
start_at_workfile = bool("start at" in read_node['frame_mode'].value())
start_at_workfile = "start at" in read_node['frame_mode'].value()
add_retime = [
key for key in read_node.knobs().keys()
if "addRetime" in key
]
project_name = legacy_io.active_project()
version_doc = get_version_by_id(project_name, representation["parent"])
@ -286,7 +305,7 @@ class LoadClip(plugin.NukeLoader):
"updated to version: {}".format(version_doc.get("name"))
)
if version_data.get("retime", None):
if add_retime and version_data.get("retime", None):
self._make_retimes(read_node, version_data)
else:
self.clear_members(read_node)

View file

@ -33,6 +33,7 @@ class CollectSlate(pyblish.api.InstancePlugin):
if slate_node:
instance.data["slateNode"] = slate_node
instance.data["slate"] = True
instance.data["families"].append("slate")
instance.data["versionData"]["families"].append("slate")
self.log.info(

View file

@ -31,10 +31,6 @@ class NukeRenderLocal(openpype.api.Extractor):
first_frame = instance.data.get("frameStartHandle", None)
# exception for slate workflow
if "slate" in families:
first_frame -= 1
last_frame = instance.data.get("frameEndHandle", None)
node_subset_name = instance.data.get("name", None)
@ -68,10 +64,6 @@ class NukeRenderLocal(openpype.api.Extractor):
int(last_frame)
)
# exception for slate workflow
if "slate" in families:
first_frame += 1
ext = node["file_type"].value()
if "representations" not in instance.data:
@ -88,8 +80,11 @@ class NukeRenderLocal(openpype.api.Extractor):
repre = {
'name': ext,
'ext': ext,
'frameStart': "%0{}d".format(
len(str(last_frame))) % first_frame,
'frameStart': (
"{{:0>{}}}"
.format(len(str(last_frame)))
.format(first_frame)
),
'files': filenames,
"stagingDir": out_dir
}
@ -105,13 +100,16 @@ class NukeRenderLocal(openpype.api.Extractor):
instance.data['family'] = 'render'
families.remove('render.local')
families.insert(0, "render2d")
instance.data["anatomyData"]["family"] = "render"
elif "prerender.local" in families:
instance.data['family'] = 'prerender'
families.remove('prerender.local')
families.insert(0, "prerender")
instance.data["anatomyData"]["family"] = "prerender"
elif "still.local" in families:
instance.data['family'] = 'image'
families.remove('still.local')
instance.data["anatomyData"]["family"] = "image"
instance.data["families"] = families
collections, remainder = clique.assemble(filenames)
@ -123,4 +121,4 @@ class NukeRenderLocal(openpype.api.Extractor):
self.log.info('Finished render')
self.log.debug("instance extracted: {}".format(instance.data))
self.log.debug("_ instance.data: {}".format(instance.data))

View file

@ -13,6 +13,7 @@ from openpype.hosts.nuke.api import (
get_view_process_node
)
class ExtractSlateFrame(openpype.api.Extractor):
"""Extracts movie and thumbnail with baked in luts
@ -236,6 +237,7 @@ class ExtractSlateFrame(openpype.api.Extractor):
def _render_slate_to_sequence(self, instance):
# set slate frame
first_frame = instance.data["frameStartHandle"]
last_frame = instance.data["frameEndHandle"]
slate_first_frame = first_frame - 1
# render slate as sequence frame
@ -284,6 +286,13 @@ class ExtractSlateFrame(openpype.api.Extractor):
matching_repre["files"] = [first_filename, slate_filename]
elif slate_filename not in matching_repre["files"]:
matching_repre["files"].insert(0, slate_filename)
matching_repre["frameStart"] = (
"{{:0>{}}}"
.format(len(str(last_frame)))
.format(slate_first_frame)
)
self.log.debug(
"__ matching_repre: {}".format(pformat(matching_repre)))
self.log.warning("Added slate frame to representation files")

View file

@ -50,7 +50,7 @@ class PreCollectNukeInstances(pyblish.api.ContextPlugin):
# establish families
family = avalon_knob_data["family"]
families_ak = avalon_knob_data.get("families", [])
families = list()
families = []
# except disabled nodes but exclude backdrops in test
if ("nukenodes" not in family) and (node["disable"].value()):
@ -94,6 +94,7 @@ class PreCollectNukeInstances(pyblish.api.ContextPlugin):
# Farm rendering
self.log.info("flagged for farm render")
instance.data["transfer"] = False
instance.data["farm"] = True
families.append("{}.farm".format(family))
family = families_ak.lower()
@ -110,10 +111,10 @@ class PreCollectNukeInstances(pyblish.api.ContextPlugin):
self.log.debug("__ families: `{}`".format(families))
# Get format
format = root['format'].value()
resolution_width = format.width()
resolution_height = format.height()
pixel_aspect = format.pixelAspect()
format_ = root['format'].value()
resolution_width = format_.width()
resolution_height = format_.height()
pixel_aspect = format_.pixelAspect()
# get publish knob value
if "publish" not in node.knobs():
@ -124,8 +125,11 @@ class PreCollectNukeInstances(pyblish.api.ContextPlugin):
self.log.debug("__ _families_test: `{}`".format(_families_test))
for family_test in _families_test:
if family_test in self.sync_workfile_version_on_families:
self.log.debug("Syncing version with workfile for '{}'"
.format(family_test))
self.log.debug(
"Syncing version with workfile for '{}'".format(
family_test
)
)
# get version to instance for integration
instance.data['version'] = instance.context.data['version']

View file

@ -144,8 +144,10 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
self.log.debug("colorspace: `{}`".format(colorspace))
version_data = {
"families": [f.replace(".local", "").replace(".farm", "")
for f in _families_test if "write" not in f],
"families": [
_f.replace(".local", "").replace(".farm", "")
for _f in _families_test if "write" != _f
],
"colorspace": colorspace
}

View file

@ -98,7 +98,7 @@ class ValidateRenderedFrames(pyblish.api.InstancePlugin):
self.log.error(msg)
raise ValidationException(msg)
collected_frames_len = int(len(collection.indexes))
collected_frames_len = len(collection.indexes)
coll_start = min(collection.indexes)
coll_end = max(collection.indexes)

View file

@ -1,7 +1,6 @@
import pyblish.api
from openpype.client import get_project, get_asset_by_id
from openpype import lib
from openpype.client import get_project, get_asset_by_id, get_asset_by_name
from openpype.pipeline import legacy_io
@ -17,10 +16,11 @@ class ValidateScript(pyblish.api.InstancePlugin):
def process(self, instance):
ctx_data = instance.context.data
asset_name = ctx_data["asset"]
asset = lib.get_asset(asset_name)
asset_data = asset["data"]
project_name = legacy_io.active_project()
asset_name = ctx_data["asset"]
# TODO replace query with 'instance.data["assetEntity"]'
asset = get_asset_by_name(project_name, asset_name)
asset_data = asset["data"]
# These attributes will be checked
attributes = [

View file

@ -1,6 +1,5 @@
import os
from Qt import QtWidgets
from bson.objectid import ObjectId
import pyblish.api
@ -13,8 +12,8 @@ from openpype.pipeline import (
deregister_loader_plugin_path,
deregister_creator_plugin_path,
AVALON_CONTAINER_ID,
registered_host,
)
from openpype.pipeline.load import any_outdated_containers
import openpype.hosts.photoshop
from . import lib
@ -30,7 +29,7 @@ INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
def check_inventory():
if not lib.any_outdated():
if not any_outdated_containers():
return
# Warn about outdated containers.

View file

@ -1,3 +1,5 @@
import re
from openpype.hosts.photoshop import api
from openpype.lib import BoolDef
from openpype.pipeline import (
@ -5,6 +7,8 @@ from openpype.pipeline import (
CreatedInstance,
legacy_io
)
from openpype.lib import prepare_template_data
from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS
class ImageCreator(Creator):
@ -38,17 +42,24 @@ class ImageCreator(Creator):
top_level_selected_items = stub.get_selected_layers()
if pre_create_data.get("use_selection"):
only_single_item_selected = len(top_level_selected_items) == 1
for selected_item in top_level_selected_items:
if (
only_single_item_selected or
pre_create_data.get("create_multiple")):
if (
only_single_item_selected or
pre_create_data.get("create_multiple")):
for selected_item in top_level_selected_items:
if selected_item.group:
groups_to_create.append(selected_item)
else:
top_layers_to_wrap.append(selected_item)
else:
group = stub.group_selected_layers(subset_name_from_ui)
groups_to_create.append(group)
else:
group = stub.group_selected_layers(subset_name_from_ui)
groups_to_create.append(group)
else:
stub.select_layers(stub.get_layers())
try:
group = stub.group_selected_layers(subset_name_from_ui)
except Exception:
raise ValueError("Cannot group locked Background layer!")
groups_to_create.append(group)
if not groups_to_create and not top_layers_to_wrap:
group = stub.create_group(subset_name_from_ui)
@ -60,6 +71,7 @@ class ImageCreator(Creator):
group = stub.group_selected_layers(layer.name)
groups_to_create.append(group)
layer_name = ''
creating_multiple_groups = len(groups_to_create) > 1
for group in groups_to_create:
subset_name = subset_name_from_ui # reset to name from creator UI
@ -67,8 +79,16 @@ class ImageCreator(Creator):
created_group_name = self._clean_highlights(stub, group.name)
if creating_multiple_groups:
# concatenate with layer name to differentiate subsets
subset_name += group.name.title().replace(" ", "")
layer_name = re.sub(
"[^{}]+".format(SUBSET_NAME_ALLOWED_SYMBOLS),
"",
group.name
)
if "{layer}" not in subset_name.lower():
subset_name += "{Layer}"
layer_fill = prepare_template_data({"layer": layer_name})
subset_name = subset_name.format(**layer_fill)
if group.long_name:
for directory in group.long_name[::-1]:
@ -143,3 +163,6 @@ class ImageCreator(Creator):
def _clean_highlights(self, stub, item):
return item.replace(stub.PUBLISH_ICON, '').replace(stub.LOADED_ICON,
'')
@classmethod
def get_dynamic_data(cls, *args, **kwargs):
return {"layer": "{layer}"}

View file

@ -1,7 +1,12 @@
import re
from Qt import QtWidgets
from openpype.pipeline import create
from openpype.hosts.photoshop import api as photoshop
from openpype.lib import prepare_template_data
from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS
class CreateImage(create.LegacyCreator):
"""Image folder for publish."""
@ -75,6 +80,7 @@ class CreateImage(create.LegacyCreator):
groups.append(group)
creator_subset_name = self.data["subset"]
layer_name = ''
for group in groups:
long_names = []
group.name = group.name.replace(stub.PUBLISH_ICON, ''). \
@ -82,7 +88,16 @@ class CreateImage(create.LegacyCreator):
subset_name = creator_subset_name
if len(groups) > 1:
subset_name += group.name.title().replace(" ", "")
layer_name = re.sub(
"[^{}]+".format(SUBSET_NAME_ALLOWED_SYMBOLS),
"",
group.name
)
if "{layer}" not in subset_name.lower():
subset_name += "{Layer}"
layer_fill = prepare_template_data({"layer": layer_name})
subset_name = subset_name.format(**layer_fill)
if group.long_name:
for directory in group.long_name[::-1]:
@ -98,3 +113,7 @@ class CreateImage(create.LegacyCreator):
# reusing existing group, need to rename afterwards
if not create_group:
stub.rename_layer(group.id, stub.PUBLISH_ICON + group.name)
@classmethod
def get_dynamic_data(cls, *args, **kwargs):
return {"layer": "{layer}"}

View file

@ -4,6 +4,7 @@ import pyblish.api
import openpype.api
from openpype.pipeline import PublishXmlValidationError
from openpype.hosts.photoshop import api as photoshop
from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS
class ValidateNamingRepair(pyblish.api.Action):
@ -50,6 +51,13 @@ class ValidateNamingRepair(pyblish.api.Action):
subset_name = re.sub(invalid_chars, replace_char,
instance.data["subset"])
# format from Tool Creator
subset_name = re.sub(
"[^{}]+".format(SUBSET_NAME_ALLOWED_SYMBOLS),
"",
subset_name
)
layer_meta["subset"] = subset_name
stub.imprint(instance_id, layer_meta)

View file

@ -4,11 +4,11 @@ import uuid
import qargparse
from Qt import QtWidgets, QtCore
import openpype.api as pype
from openpype.pipeline import (
LegacyCreator,
LoaderPlugin,
)
from openpype.pipeline.context_tools import get_current_project_asset
from openpype.hosts import resolve
from . import lib
@ -375,7 +375,7 @@ class ClipLoader:
"""
asset_name = self.context["representation"]["context"]["asset"]
self.data["assetData"] = pype.get_asset(asset_name)["data"]
self.data["assetData"] = get_current_project_asset(asset_name)["data"]
def load(self):
# create project bin for the media to be imported into

View file

@ -70,7 +70,8 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
"publish": resolve.get_publish_attribute(timeline_item),
"fps": context.data["fps"],
"handleStart": handle_start,
"handleEnd": handle_end
"handleEnd": handle_end,
"newAssetPublishing": True
})
# otio clip data

View file

@ -19,6 +19,7 @@ import os
import opentimelineio as otio
import pyblish.api
from openpype import lib as plib
from openpype.pipeline.context_tools import get_current_project_asset
class OTIO_View(pyblish.api.Action):
@ -116,7 +117,7 @@ class CollectEditorial(pyblish.api.InstancePlugin):
if extension == ".edl":
# EDL has no frame rate embedded so needs explicit
# frame rate, else 24 is assumed.
kwargs["rate"] = plib.get_asset()["data"]["fps"]
kwargs["rate"] = get_current_project_asset()["data"]["fps"]
instance.data["otio_timeline"] = otio.adapters.read_from_file(
file_path, **kwargs)

View file

@ -1,8 +1,12 @@
import os
from copy import deepcopy
import opentimelineio as otio
import pyblish.api
from openpype import lib as plib
from copy import deepcopy
from openpype.pipeline.context_tools import get_current_project_asset
class CollectInstances(pyblish.api.InstancePlugin):
"""Collect instances from editorial's OTIO sequence"""
@ -48,7 +52,7 @@ class CollectInstances(pyblish.api.InstancePlugin):
# get timeline otio data
timeline = instance.data["otio_timeline"]
fps = plib.get_asset()["data"]["fps"]
fps = get_current_project_asset()["data"]["fps"]
tracks = timeline.each_child(
descended_from_type=otio.schema.Track
@ -166,7 +170,8 @@ class CollectInstances(pyblish.api.InstancePlugin):
"frameStart": frame_start,
"frameEnd": frame_end,
"frameStartH": frame_start - handle_start,
"frameEndH": frame_end + handle_end
"frameEndH": frame_end + handle_end,
"newAssetPublishing": True
}
for data_key in instance_data_filter:

View file

@ -3,8 +3,8 @@ import re
import pyblish.api
import openpype.api
from openpype import lib
from openpype.pipeline import PublishXmlValidationError
from openpype.pipeline.context_tools import get_current_project_asset
class ValidateFrameRange(pyblish.api.InstancePlugin):
@ -27,7 +27,8 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
for pattern in self.skip_timelines_check):
self.log.info("Skipping for {} task".format(instance.data["task"]))
asset_data = lib.get_asset(instance.data["asset"])["data"]
# TODO replace query with 'instance.data["assetEntity"]'
asset_data = get_current_project_asset(instance.data["asset"])["data"]
frame_start = asset_data["frameStart"]
frame_end = asset_data["frameEnd"]
handle_start = asset_data["handleStart"]

View file

@ -1,6 +1,6 @@
import os
import json
from openpype.pipeline import legacy_io
from openpype.client import get_asset_by_name
class HostContext:
@ -17,10 +17,10 @@ class HostContext:
if not asset_name:
return project_name
asset_doc = legacy_io.find_one(
{"type": "asset", "name": asset_name},
{"data.parents": 1}
asset_doc = get_asset_by_name(
project_name, asset_name, fields=["data.parents"]
)
parents = asset_doc.get("data", {}).get("parents") or []
hierarchy = [project_name]
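
The lookup above replaces a raw `legacy_io.find_one` with the `openpype.client` helper; the `fields` argument keeps the old mongo projection. A sketch with hypothetical project and asset names:

from openpype.client import get_asset_by_name

project_name = "demo_project"  # hypothetical
asset_doc = get_asset_by_name(
    project_name, "sh010", fields=["data.parents"])

# same hierarchy assembly as above
parents = asset_doc.get("data", {}).get("parents") or []
hierarchy = [project_name] + parents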

View file

@ -1,10 +1,11 @@
from openpype.lib import NumberDef
from openpype.hosts.testhost.api import pipeline
from openpype.client import get_asset_by_name
from openpype.pipeline import (
legacy_io,
AutoCreator,
CreatedInstance,
)
from openpype.hosts.testhost.api import pipeline
class MyAutoCreator(AutoCreator):
@ -44,10 +45,7 @@ class MyAutoCreator(AutoCreator):
host_name = legacy_io.Session["AVALON_APP"]
if existing_instance is None:
asset_doc = legacy_io.find_one({
"type": "asset",
"name": asset_name
})
asset_doc = get_asset_by_name(project_name, asset_name)
subset_name = self.get_subset_name(
variant, task_name, asset_doc, project_name, host_name
)
@ -69,10 +67,7 @@ class MyAutoCreator(AutoCreator):
existing_instance["asset"] != asset_name
or existing_instance["task"] != task_name
):
asset_doc = legacy_io.find_one({
"type": "asset",
"name": asset_name
})
asset_doc = get_asset_by_name(project_name, asset_name)
subset_name = self.get_subset_name(
variant, task_name, asset_doc, project_name, host_name
)

View file

@ -1,20 +1,8 @@
from .pipeline import (
install,
ls,
set_project_name,
get_context_title,
get_context_data,
update_context_data,
TrayPublisherHost,
)
__all__ = (
"install",
"ls",
"set_project_name",
"get_context_title",
"get_context_data",
"update_context_data",
"TrayPublisherHost",
)

View file

@ -0,0 +1,331 @@
import re
from copy import deepcopy
from openpype.client import get_asset_by_id
from openpype.pipeline.create import CreatorError
class ShotMetadataSolver:
""" Solving hierarchical metadata
Used during editorial publishing. Works with input
clip name and settings defining a python formattable
template. Settings also define search patterns
and their token keys used for formatting the templates.
"""
NO_DECOR_PATERN = re.compile(r"\{([a-z]*?)\}")
# presets
clip_name_tokenizer = None
shot_rename = True
shot_hierarchy = None
shot_add_tasks = None
def __init__(
self,
clip_name_tokenizer,
shot_rename,
shot_hierarchy,
shot_add_tasks,
logger
):
self.clip_name_tokenizer = clip_name_tokenizer
self.shot_rename = shot_rename
self.shot_hierarchy = shot_hierarchy
self.shot_add_tasks = shot_add_tasks
self.log = logger
def _rename_template(self, data):
"""Shot renaming function
Args:
data (dict): formatting data
Raises:
CreatorError: If missing keys
Returns:
str: formatted new name
"""
shot_rename_template = self.shot_rename[
"shot_rename_template"]
try:
# format to new shot name
return shot_rename_template.format(**data)
except KeyError as _E:
raise CreatorError((
"Make sure all keys in settings are correct:: \n\n"
f"From template string {shot_rename_template} > "
f"`{_E}` has no equivalent in \n"
f"{list(data.keys())} input formating keys!"
))
def _generate_tokens(self, clip_name, source_data):
"""Token generator
Settings defines token pairs key and regex expression.
Args:
clip_name (str): name of clip in editorial
source_data (dict): data for formatting
Raises:
CreatorError: if missing key
Returns:
dict: updated source_data
"""
output_data = deepcopy(source_data["anatomy_data"])
output_data["clip_name"] = clip_name
if not self.clip_name_tokenizer:
return output_data
parent_name = source_data["selected_asset_doc"]["name"]
search_text = parent_name + clip_name
for token_key, pattern in self.clip_name_tokenizer.items():
p = re.compile(pattern)
match = p.findall(search_text)
if not match:
raise CreatorError((
"Make sure regex expression works with your data: \n\n"
f"'{token_key}' with regex '{pattern}' in your settings\n"
"can't find any match in your clip name "
f"'{search_text}'!\n\nLook to: "
"'project_settings/traypublisher/editorial_creators"
"/editorial_simple/clip_name_tokenizer'\n"
"at your project settings..."
))
# QUESTION: how to refactor `match[-1]` in a better way?
output_data[token_key] = match[-1]
return output_data
def _create_parents_from_settings(self, parents, data):
"""Formating parent components.
Args:
parents (list): list of dict parent components
data (dict): formatting data
Raises:
CreatorError: missing formatting key
CreatorError: missing token key
KeyError: missing parent token
Returns:
list: list of dict of parent components
"""
# fill the parent parts from presets
shot_hierarchy = deepcopy(self.shot_hierarchy)
hierarchy_parents = shot_hierarchy["parents"]
# fill parent keys data template from anatomy data
try:
_parent_tokens_formating_data = {
parent_token["name"]: parent_token["value"].format(**data)
for parent_token in hierarchy_parents
}
except KeyError as _E:
raise CreatorError((
"Make sure all keys in settings are correct : \n"
f"`{_E}` has no equivalent in \n{list(data.keys())}"
))
_parent_tokens_type = {
parent_token["name"]: parent_token["type"]
for parent_token in hierarchy_parents
}
for _index, _parent in enumerate(
shot_hierarchy["parents_path"].split("/")
):
# format parent token with an already formatted value
try:
parent_name = _parent.format(
**_parent_tokens_formating_data)
except KeyError as _E:
raise CreatorError((
"Make sure all keys in settings are correct : \n\n"
f"`{_E}` from template string "
f"{shot_hierarchy['parents_path']}, "
f" has no equivalent in \n"
f"{list(_parent_tokens_formating_data.keys())} parents"
))
parent_token_name = (
self.NO_DECOR_PATERN.findall(_parent).pop())
if not parent_token_name:
raise KeyError(
f"Parent token is not found in: `{_parent}`")
# find parent type
parent_token_type = _parent_tokens_type[parent_token_name]
# in case selected context is set to the same asset
if (
_index == 0
and parents[-1]["entity_name"] == parent_name
):
self.log.debug(f" skipping : {parent_name}")
continue
# in case first parent is project then start parents from start
if (
_index == 0
and parent_token_type == "Project"
):
self.log.debug("rebuilding parents from scratch")
project_parent = parents[0]
parents = [project_parent]
continue
parents.append({
"entity_type": parent_token_type,
"entity_name": parent_name
})
self.log.debug(f"__ parents: {parents}")
return parents
def _create_hierarchy_path(self, parents):
"""Converting hierarchy path from parents
Args:
parents (list): list of dict parent components
Returns:
str: hierarchy path
"""
return "/".join(
[
p["entity_name"] for p in parents
if p["entity_type"] != "Project"
]
) if parents else ""
def _get_parents_from_selected_asset(
self,
asset_doc,
project_doc
):
"""Returning parents from context on selected asset.
Context defined in Traypublisher project tree.
Args:
asset_doc (db obj): selected asset doc
project_doc (db obj): actual project doc
Returns:
list: list of dict parent components
"""
project_name = project_doc["name"]
visual_hierarchy = [asset_doc]
current_doc = asset_doc
# loop through all available visual parents
# and break once none are available anymore
while True:
visual_parent_id = current_doc["data"]["visualParent"]
visual_parent = None
if visual_parent_id:
visual_parent = get_asset_by_id(project_name, visual_parent_id)
if not visual_parent:
visual_hierarchy.append(project_doc)
break
visual_hierarchy.append(visual_parent)
current_doc = visual_parent
# add current selection context hierarchy
return [
{
"entity_type": entity["data"]["entityType"],
"entity_name": entity["name"]
}
for entity in reversed(visual_hierarchy)
]
def _generate_tasks_from_settings(self, project_doc):
"""Convert settings inputs to task data.
Args:
project_doc (db obj): actual project doc
Raises:
KeyError: Missing task type in project doc
Returns:
dict: tasks data
"""
tasks_to_add = {}
project_tasks = project_doc["config"]["tasks"]
for task_name, task_data in self.shot_add_tasks.items():
_task_data = deepcopy(task_data)
# check if task type in project task types
if _task_data["type"] in project_tasks.keys():
tasks_to_add[task_name] = _task_data
else:
raise KeyError(
"Missing task type `{}` for `{}` is not"
" existing in `{}``".format(
_task_data["type"],
task_name,
list(project_tasks.keys())
)
)
return tasks_to_add
def generate_data(self, clip_name, source_data):
"""Metadata generator.
Converts input data to hierarchy metadata.
Args:
clip_name (str): clip name
source_data (dict): formating data
Returns:
(str, dict): shot name and hierarchy data
"""
self.log.info(f"_ source_data: {source_data}")
tasks = {}
asset_doc = source_data["selected_asset_doc"]
project_doc = source_data["project_doc"]
# match clip to shot name at start
shot_name = clip_name
# parse all tokens and generate formatting data
formating_data = self._generate_tokens(shot_name, source_data)
# generate parents from selected asset
parents = self._get_parents_from_selected_asset(asset_doc, project_doc)
if self.shot_rename["enabled"]:
shot_name = self._rename_template(formating_data)
self.log.info(f"Renamed shot name: {shot_name}")
if self.shot_hierarchy["enabled"]:
parents = self._create_parents_from_settings(
parents, formating_data)
if self.shot_add_tasks:
tasks = self._generate_tasks_from_settings(
project_doc)
return shot_name, {
"hierarchy": self._create_hierarchy_path(parents),
"parents": parents,
"tasks": tasks
}
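
A hedged end-to-end sketch of driving `ShotMetadataSolver` directly; all settings values and documents below are hypothetical stand-ins for what `EditorialSimpleCreator` reads from `project_settings/traypublisher/editorial_creators/editorial_simple` and the database:

import logging
from openpype.hosts.traypublisher.api.editorial import ShotMetadataSolver

solver = ShotMetadataSolver(
    clip_name_tokenizer={"_sequence": r"sc\d{3}", "_shot": r"sh\d{3}"},
    shot_rename={
        "enabled": True,
        "shot_rename_template": "{project[code]}_{_sequence}_{_shot}"
    },
    shot_hierarchy={"enabled": False},
    shot_add_tasks={},
    logger=logging.getLogger("editorial")
)

# minimal fake documents standing in for real mongo docs
project_doc = {
    "name": "demo", "data": {"code": "dm", "entityType": "project"}
}
asset_doc = {
    "name": "editorial",
    "data": {"visualParent": None, "entityType": "folder"}
}

shot_name, hierarchy_data = solver.generate_data(
    "sc010sh020_v001",
    {
        "anatomy_data": {"project": {"name": "demo", "code": "dm"}},
        "selected_asset_doc": asset_doc,
        "project_doc": project_doc
    }
)
# shot_name -> "dm_sc010_sh020"; hierarchy_data holds hierarchy/parents/tasks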

View file

@ -9,6 +9,8 @@ from openpype.pipeline import (
register_creator_plugin_path,
legacy_io,
)
from openpype.host import HostBase, INewPublisher
ROOT_DIR = os.path.dirname(os.path.dirname(
os.path.abspath(__file__)
@ -17,6 +19,35 @@ PUBLISH_PATH = os.path.join(ROOT_DIR, "plugins", "publish")
CREATE_PATH = os.path.join(ROOT_DIR, "plugins", "create")
class TrayPublisherHost(HostBase, INewPublisher):
name = "traypublisher"
def install(self):
os.environ["AVALON_APP"] = self.name
legacy_io.Session["AVALON_APP"] = self.name
pyblish.api.register_host("traypublisher")
pyblish.api.register_plugin_path(PUBLISH_PATH)
register_creator_plugin_path(CREATE_PATH)
def get_context_title(self):
return HostContext.get_project_name()
def get_context_data(self):
return HostContext.get_context_data()
def update_context_data(self, data, changes):
HostContext.save_context_data(data, changes)
def set_project_name(self, project_name):
# TODO Deregister project specific plugins and register new project
# plugins
os.environ["AVALON_PROJECT"] = project_name
legacy_io.Session["AVALON_PROJECT"] = project_name
legacy_io.install()
HostContext.set_project_name(project_name)
class HostContext:
_context_json_path = None
@ -150,32 +181,3 @@ def get_context_data():
def update_context_data(data, changes):
HostContext.save_context_data(data)
def get_context_title():
return HostContext.get_project_name()
def ls():
"""Probably will never return loaded containers."""
return []
def install():
"""This is called before a project is known.
Project is defined with 'set_project_name'.
"""
os.environ["AVALON_APP"] = "traypublisher"
pyblish.api.register_host("traypublisher")
pyblish.api.register_plugin_path(PUBLISH_PATH)
register_creator_plugin_path(CREATE_PATH)
def set_project_name(project_name):
# TODO Deregister project specific plugins and register new project plugins
os.environ["AVALON_PROJECT"] = project_name
legacy_io.Session["AVALON_PROJECT"] = project_name
legacy_io.install()
HostContext.set_project_name(project_name)
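
The module-level helpers above were folded into the `TrayPublisherHost` class from the earlier hunk. A minimal sketch of the new entry point, assuming the host needs no constructor arguments and using a hypothetical project name:

from openpype.hosts.traypublisher.api import TrayPublisherHost

host = TrayPublisherHost()
host.install()                         # registers host, plugin paths, AVALON_APP
host.set_project_name("demo_project")  # hypothetical project name
print(host.get_context_title())        # -> active project name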

View file

@ -1,8 +1,9 @@
from openpype.pipeline import (
from openpype.lib.attribute_definitions import FileDef
from openpype.pipeline.create import (
Creator,
HiddenCreator,
CreatedInstance
)
from openpype.lib import FileDef
from .pipeline import (
list_instances,
@ -11,6 +12,64 @@ from .pipeline import (
HostContext,
)
IMAGE_EXTENSIONS = [
".ani", ".anim", ".apng", ".art", ".bmp", ".bpg", ".bsave", ".cal",
".cin", ".cpc", ".cpt", ".dds", ".dpx", ".ecw", ".exr", ".fits",
".flic", ".flif", ".fpx", ".gif", ".hdri", ".hevc", ".icer",
".icns", ".ico", ".cur", ".ics", ".ilbm", ".jbig", ".jbig2",
".jng", ".jpeg", ".jpeg-ls", ".jpeg", ".2000", ".jpg", ".xr",
".jpeg", ".xt", ".jpeg-hdr", ".kra", ".mng", ".miff", ".nrrd",
".ora", ".pam", ".pbm", ".pgm", ".ppm", ".pnm", ".pcx", ".pgf",
".pictor", ".png", ".psb", ".psp", ".qtvr", ".ras",
".rgbe", ".logluv", ".tiff", ".sgi", ".tga", ".tiff", ".tiff/ep",
".tiff/it", ".ufo", ".ufp", ".wbmp", ".webp", ".xbm", ".xcf",
".xpm", ".xwd"
]
VIDEO_EXTENSIONS = [
".3g2", ".3gp", ".amv", ".asf", ".avi", ".drc", ".f4a", ".f4b",
".f4p", ".f4v", ".flv", ".gif", ".gifv", ".m2v", ".m4p", ".m4v",
".mkv", ".mng", ".mov", ".mp2", ".mp4", ".mpe", ".mpeg", ".mpg",
".mpv", ".mxf", ".nsv", ".ogg", ".ogv", ".qt", ".rm", ".rmvb",
".roq", ".svi", ".vob", ".webm", ".wmv", ".yuv"
]
REVIEW_EXTENSIONS = IMAGE_EXTENSIONS + VIDEO_EXTENSIONS
class HiddenTrayPublishCreator(HiddenCreator):
host_name = "traypublisher"
def collect_instances(self):
for instance_data in list_instances():
creator_id = instance_data.get("creator_identifier")
if creator_id == self.identifier:
instance = CreatedInstance.from_existing(
instance_data, self
)
self._add_instance_to_context(instance)
def update_instances(self, update_list):
update_instances(update_list)
def remove_instances(self, instances):
remove_instances(instances)
for instance in instances:
self._remove_instance_from_context(instance)
def _store_new_instance(self, new_instance):
"""Tray publisher specific method to store instance.
Instance is stored into "workfile" of traypublisher and also add it
to CreateContext.
Args:
new_instance (CreatedInstance): Instance that should be stored.
"""
# Host implementation of storing metadata about instance
HostContext.add_instance(new_instance.data_to_store())
# Add instance to current context
self._add_instance_to_context(new_instance)
class TrayPublishCreator(Creator):
create_allow_context_change = True
@ -33,9 +92,20 @@ class TrayPublishCreator(Creator):
for instance in instances:
self._remove_instance_from_context(instance)
def get_pre_create_attr_defs(self):
# Use same attributes as for instance attributes
return self.get_instance_attr_defs()
def _store_new_instance(self, new_instance):
"""Tray publisher specific method to store instance.
Instance is stored into "workfile" of traypublisher and also add it
to CreateContext.
Args:
new_instance (CreatedInstance): Instance that should be stored.
"""
# Host implementation of storing metadata about instance
HostContext.add_instance(new_instance.data_to_store())
# Add instance to current context
self._add_instance_to_context(new_instance)
class SettingsCreator(TrayPublishCreator):
@ -43,37 +113,40 @@ class SettingsCreator(TrayPublishCreator):
extensions = []
def collect_instances(self):
for instance_data in list_instances():
creator_id = instance_data.get("creator_identifier")
if creator_id == self.identifier:
instance = CreatedInstance.from_existing(
instance_data, self
)
self._add_instance_to_context(instance)
def create(self, subset_name, data, pre_create_data):
# Pass precreate data to creator attributes
data["creator_attributes"] = pre_create_data
data["settings_creator"] = True
# Create new instance
new_instance = CreatedInstance(self.family, subset_name, data, self)
# Host implementation of storing metadata about instance
HostContext.add_instance(new_instance.data_to_store())
# Add instance to current context
self._add_instance_to_context(new_instance)
self._store_new_instance(new_instance)
def get_instance_attr_defs(self):
return [
FileDef(
"filepath",
"representation_files",
folders=False,
extensions=self.extensions,
allow_sequences=self.allow_sequences,
label="Filepath",
single_item=not self.allow_multiple_items,
label="Representations",
),
FileDef(
"reviewable",
folders=False,
extensions=REVIEW_EXTENSIONS,
allow_sequences=True,
single_item=True,
label="Reviewable representations",
extensions_label="Single reviewable item"
)
]
def get_pre_create_attr_defs(self):
# Use same attributes as for instance attributes
return self.get_instance_attr_defs()
@classmethod
def from_settings(cls, item_data):
identifier = item_data["identifier"]
@ -92,6 +165,7 @@ class SettingsCreator(TrayPublishCreator):
"detailed_description": item_data["detailed_description"],
"extensions": item_data["extensions"],
"allow_sequences": item_data["allow_sequences"],
"allow_multiple_items": item_data["allow_multiple_items"],
"default_variants": item_data["default_variants"]
}
)

View file

@ -0,0 +1,869 @@
import os
from copy import deepcopy
from pprint import pformat
import opentimelineio as otio
from openpype.client import (
get_asset_by_name,
get_project
)
from openpype.hosts.traypublisher.api.plugin import (
TrayPublishCreator,
HiddenTrayPublishCreator
)
from openpype.hosts.traypublisher.api.editorial import (
ShotMetadataSolver
)
from openpype.pipeline import CreatedInstance
from openpype.lib import (
get_ffprobe_data,
convert_ffprobe_fps_value,
FileDef,
TextDef,
NumberDef,
EnumDef,
BoolDef,
UISeparatorDef,
UILabelDef
)
CLIP_ATTR_DEFS = [
EnumDef(
"fps",
items={
"from_selection": "From selection",
23.997: "23.976",
24: "24",
25: "25",
29.97: "29.97",
30: "30"
},
label="FPS"
),
NumberDef(
"workfile_start_frame",
default=1001,
label="Workfile start frame"
),
NumberDef(
"handle_start",
default=0,
label="Handle start"
),
NumberDef(
"handle_end",
default=0,
label="Handle end"
)
]
class EditorialClipInstanceCreatorBase(HiddenTrayPublishCreator):
""" Wrapper class for clip family creators
Args:
HiddenTrayPublishCreator (BaseCreator): hidden supporting class
"""
host_name = "traypublisher"
def create(self, instance_data, source_data=None):
self.log.info(f"instance_data: {instance_data}")
subset_name = instance_data["subset"]
# Create new instance
new_instance = CreatedInstance(
self.family, subset_name, instance_data, self
)
self.log.info(f"instance_data: {pformat(new_instance.data)}")
self._store_new_instance(new_instance)
return new_instance
def get_instance_attr_defs(self):
return [
BoolDef(
"add_review_family",
default=True,
label="Review"
)
]
class EditorialShotInstanceCreator(EditorialClipInstanceCreatorBase):
""" Shot family class
The shot metadata instance carrier.
Args:
EditorialClipInstanceCreatorBase (BaseCreator): hidden supporting class
"""
identifier = "editorial_shot"
family = "shot"
label = "Editorial Shot"
def get_instance_attr_defs(self):
attr_defs = [
TextDef(
"asset_name",
label="Asset name",
)
]
attr_defs.extend(CLIP_ATTR_DEFS)
return attr_defs
class EditorialPlateInstanceCreator(EditorialClipInstanceCreatorBase):
""" Plate family class
Plate representation instance.
Args:
EditorialClipInstanceCreatorBase (BaseCreator): hidden supporting class
"""
identifier = "editorial_plate"
family = "plate"
label = "Editorial Plate"
class EditorialAudioInstanceCreator(EditorialClipInstanceCreatorBase):
""" Audio family class
Audio representation instance.
Args:
EditorialClipInstanceCreatorBase (BaseCreator): hidden supporting class
"""
identifier = "editorial_audio"
family = "audio"
label = "Editorial Audio"
class EditorialReviewInstanceCreator(EditorialClipInstanceCreatorBase):
""" Review family class
Review representation instance.
Args:
EditorialClipInstanceCreatorBase (BaseCreator): hidden supporting class
"""
identifier = "editorial_review"
family = "review"
label = "Editorial Review"
class EditorialSimpleCreator(TrayPublishCreator):
""" Editorial creator class
Simple workflow creator. This creator dissects the input
video file into clip chunks and then converts each to
the format defined in Settings for each subset preset.
Args:
TrayPublishCreator (Creator): Tray publisher plugin class
"""
label = "Editorial Simple"
family = "editorial"
identifier = "editorial_simple"
default_variants = [
"main"
]
description = "Editorial files to generate shots."
detailed_description = """
Supports publishing new shots to a project
or updating already created ones. Publishing will create an OTIO file.
"""
icon = "fa.file"
def __init__(
self, project_settings, *args, **kwargs
):
super(EditorialSimpleCreator, self).__init__(
project_settings, *args, **kwargs
)
editorial_creators = deepcopy(
project_settings["traypublisher"]["editorial_creators"]
)
# get this creator settings by identifier
self._creator_settings = editorial_creators.get(self.identifier)
clip_name_tokenizer = self._creator_settings["clip_name_tokenizer"]
shot_rename = self._creator_settings["shot_rename"]
shot_hierarchy = self._creator_settings["shot_hierarchy"]
shot_add_tasks = self._creator_settings["shot_add_tasks"]
self._shot_metadata_solver = ShotMetadataSolver(
clip_name_tokenizer,
shot_rename,
shot_hierarchy,
shot_add_tasks,
self.log
)
# try to set main attributes from settings
if self._creator_settings.get("default_variants"):
self.default_variants = self._creator_settings["default_variants"]
def create(self, subset_name, instance_data, pre_create_data):
allowed_family_presets = self._get_allowed_family_presets(
pre_create_data)
clip_instance_properties = {
k: v for k, v in pre_create_data.items()
if k != "sequence_filepath_data"
if k not in [
i["family"] for i in self._creator_settings["family_presets"]
]
}
# Create otio editorial instance
asset_name = instance_data["asset"]
asset_doc = get_asset_by_name(self.project_name, asset_name)
self.log.info(pre_create_data["fps"])
if pre_create_data["fps"] == "from_selection":
# get asset doc data attributes
fps = asset_doc["data"]["fps"]
else:
fps = float(pre_create_data["fps"])
instance_data.update({
"fps": fps
})
# get path of sequence
sequence_path_data = pre_create_data["sequence_filepath_data"]
media_path_data = pre_create_data["media_filepaths_data"]
sequence_path = self._get_path_from_file_data(sequence_path_data)
media_path = self._get_path_from_file_data(media_path_data)
# get otio timeline
otio_timeline = self._create_otio_timeline(
sequence_path, fps)
# Create all clip instances
clip_instance_properties.update({
"fps": fps,
"parent_asset_name": asset_name,
"variant": instance_data["variant"]
})
# create clip instances
self._get_clip_instances(
otio_timeline,
media_path,
clip_instance_properties,
family_presets=allowed_family_presets
)
# create otio editorial instance
self._create_otio_instance(
subset_name, instance_data,
sequence_path, media_path,
otio_timeline
)
def _create_otio_instance(
self,
subset_name,
data,
sequence_path,
media_path,
otio_timeline
):
"""Otio instance creating function
Args:
subset_name (str): name of subset
data (dict): instance data
sequence_path (str): path to sequence file
media_path (str): path to media file
otio_timeline (otio.Timeline): otio timeline object
"""
# Pass precreate data to creator attributes
data.update({
"sequenceFilePath": sequence_path,
"editorialSourcePath": media_path,
"otioTimeline": otio.adapters.write_to_string(otio_timeline)
})
new_instance = CreatedInstance(
self.family, subset_name, data, self
)
self._store_new_instance(new_instance)
def _create_otio_timeline(self, sequence_path, fps):
"""Creating otio timeline from sequence path
Args:
sequence_path (str): path to sequence file
fps (float): frame per second
Returns:
otio.Timeline: otio timeline object
"""
# get editorial sequence file into otio timeline object
extension = os.path.splitext(sequence_path)[1]
kwargs = {}
if extension == ".edl":
# EDL has no frame rate embedded so needs explicit
# frame rate, else 24 is assumed.
kwargs["rate"] = fps
kwargs["ignore_timecode_mismatch"] = True
self.log.info(f"kwargs: {kwargs}")
return otio.adapters.read_from_file(sequence_path, **kwargs)
def _get_path_from_file_data(self, file_path_data):
"""Converting creator path data to single path string
Args:
file_path_data (FileDefItem): creator path data inputs
Raises:
FileExistsError: in case nothing had been set
Returns:
str: path string
"""
# TODO: just temporarily solving only one media file
if isinstance(file_path_data, list):
file_path_data = file_path_data.pop()
if len(file_path_data["filenames"]) == 0:
raise FileExistsError(
f"File path was not added: {file_path_data}")
return os.path.join(
file_path_data["directory"], file_path_data["filenames"][0])
def _get_clip_instances(
self,
otio_timeline,
media_path,
instance_data,
family_presets
):
"""Helping function fro creating clip instance
Args:
otio_timeline (otio.Timeline): otio timeline object
media_path (str): media file path string
instance_data (dict): clip instance data
family_presets (list): list of dict settings subset presets
"""
self.asset_name_check = []
tracks = otio_timeline.each_child(
descended_from_type=otio.schema.Track
)
# media data for audio stream and reference solving
media_data = self._get_media_source_metadata(media_path)
for track in tracks:
self.log.debug(f"track.name: {track.name}")
try:
track_start_frame = (
abs(track.source_range.start_time.value)
)
self.log.debug(f"track_start_frame: {track_start_frame}")
track_start_frame -= self.timeline_frame_start
except AttributeError:
track_start_frame = 0
self.log.debug(f"track_start_frame: {track_start_frame}")
for clip in track.each_child():
if not self._validate_clip_for_processing(clip):
continue
# get available frames info to clip data
self._create_otio_reference(clip, media_path, media_data)
# convert timeline range to source range
self._restore_otio_source_range(clip)
base_instance_data = self._get_base_instance_data(
clip,
instance_data,
track_start_frame
)
parenting_data = {
"instance_label": None,
"instance_id": None
}
self.log.info((
"Creating subsets from presets: \n"
f"{pformat(family_presets)}"
))
for _fpreset in family_presets:
# exclude audio family if no audio stream
if (
_fpreset["family"] == "audio"
and not media_data.get("audio")
):
continue
instance = self._make_subset_instance(
clip,
_fpreset,
deepcopy(base_instance_data),
parenting_data
)
self.log.debug(f"{pformat(dict(instance.data))}")
def _restore_otio_source_range(self, otio_clip):
"""Infusing source range.
Otio clip is missing a proper source clip range, so
here we add it from the parent timeline frame range.
Args:
otio_clip (otio.Clip): otio clip object
"""
otio_clip.source_range = otio_clip.range_in_parent()
def _create_otio_reference(
self,
otio_clip,
media_path,
media_data
):
"""Creating otio reference at otio clip.
Args:
otio_clip (otio.Clip): otio clip object
media_path (str): media file path string
media_data (dict): media metadata
"""
start_frame = media_data["start_frame"]
frame_duration = media_data["duration"]
fps = media_data["fps"]
available_range = otio.opentime.TimeRange(
start_time=otio.opentime.RationalTime(
start_frame, fps),
duration=otio.opentime.RationalTime(
frame_duration, fps)
)
# in case old OTIO or video file create `ExternalReference`
media_reference = otio.schema.ExternalReference(
target_url=media_path,
available_range=available_range
)
otio_clip.media_reference = media_reference
def _get_media_source_metadata(self, path):
"""Get all available metadata from file
Args:
path (str): media file path string
Raises:
AssertionError: ffprobe couldn't read metadata
Returns:
dict: media file metadata
"""
return_data = {}
try:
media_data = get_ffprobe_data(
path, self.log
)
self.log.debug(f"__ media_data: {pformat(media_data)}")
# get video stream data
video_stream = media_data["streams"][0]
return_data = {
"video": True,
"start_frame": 0,
"duration": int(video_stream["nb_frames"]),
"fps": float(
convert_ffprobe_fps_value(
video_stream["r_frame_rate"]
)
)
}
# get audio streams data
audio_stream = [
stream for stream in media_data["streams"]
if stream["codec_type"] == "audio"
]
if audio_stream:
return_data["audio"] = True
except Exception as exc:
raise AssertionError((
"FFprobe couldn't read information about input file: "
f"\"{path}\". Error message: {exc}"
))
return return_data
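
`get_ffprobe_data` returns ffprobe's parsed JSON, so the keys read above (`nb_frames`, `r_frame_rate`, `codec_type`) are plain ffprobe stream fields. A trimmed sketch with a hypothetical media path:

import logging
from openpype.lib import get_ffprobe_data, convert_ffprobe_fps_value

media_data = get_ffprobe_data(
    "/path/to/media.mov", logging.getLogger(__name__))  # hypothetical path

video_stream = media_data["streams"][0]
duration = int(video_stream["nb_frames"])
fps = float(convert_ffprobe_fps_value(
    video_stream["r_frame_rate"]))  # "24/1" -> 24.0
has_audio = any(
    stream["codec_type"] == "audio" for stream in media_data["streams"])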
def _make_subset_instance(
self,
otio_clip,
preset,
instance_data,
parenting_data
):
"""Making subset instance from input preset
Args:
otio_clip (otio.Clip): otio clip object
preset (dict): single family preset
instance_data (dict): instance data
parenting_data (dict): shot instance parent data
Returns:
CreatedInstance: creator instance object
"""
family = preset["family"]
label = self._make_subset_naming(
preset,
instance_data
)
instance_data["label"] = label
# shot family: store the otio clip and use the shot instance as parent
if family == "shot":
instance_data["otioClip"] = (
otio.adapters.write_to_string(otio_clip))
c_instance = self.create_context.creators[
"editorial_shot"].create(
instance_data)
parenting_data.update({
"instance_label": label,
"instance_id": c_instance.data["instance_id"]
})
else:
# add review family if defined
instance_data.update({
"outputFileType": preset["output_file_type"],
"parent_instance_id": parenting_data["instance_id"],
"creator_attributes": {
"parent_instance": parenting_data["instance_label"],
"add_review_family": preset.get("review")
}
})
creator_identifier = f"editorial_{family}"
editorial_clip_creator = self.create_context.creators[
creator_identifier]
c_instance = editorial_clip_creator.create(
instance_data)
return c_instance
def _make_subset_naming(
self,
preset,
instance_data
):
""" Subset name maker
Args:
preset (dict): single preset item
instance_data (dict): instance data
Returns:
str: label string
"""
shot_name = instance_data["shotName"]
variant_name = instance_data["variant"]
family = preset["family"]
# get variant name from preset or from inheritance
_variant_name = preset.get("variant") or variant_name
self.log.debug(f"__ family: {family}")
self.log.debug(f"__ preset: {preset}")
# subset name
subset_name = "{}{}".format(
family, _variant_name.capitalize()
)
label = "{}_{}".format(
shot_name,
subset_name
)
instance_data.update({
"family": family,
"label": label,
"variant": _variant_name,
"subset": subset_name,
})
return label
def _get_base_instance_data(
self,
otio_clip,
instance_data,
track_start_frame,
):
""" Factoring basic set of instance data.
Args:
otio_clip (otio.Clip): otio clip object
instance_data (dict): precreate instance data
track_start_frame (int): track start frame
Returns:
dict: instance data
"""
# get clip instance properties
parent_asset_name = instance_data["parent_asset_name"]
handle_start = instance_data["handle_start"]
handle_end = instance_data["handle_end"]
timeline_offset = instance_data["timeline_offset"]
workfile_start_frame = instance_data["workfile_start_frame"]
fps = instance_data["fps"]
variant_name = instance_data["variant"]
# basic unique asset name
clip_name = os.path.splitext(otio_clip.name)[0].lower()
project_doc = get_project(self.project_name)
shot_name, shot_metadata = self._shot_metadata_solver.generate_data(
clip_name,
{
"anatomy_data": {
"project": {
"name": self.project_name,
"code": project_doc["data"]["code"]
},
"parent": parent_asset_name,
"app": self.host_name
},
"selected_asset_doc": get_asset_by_name(
self.project_name, parent_asset_name),
"project_doc": project_doc
}
)
self._validate_name_uniqueness(shot_name)
timing_data = self._get_timing_data(
otio_clip,
timeline_offset,
track_start_frame,
workfile_start_frame
)
# create creator attributes
creator_attributes = {
"asset_name": shot_name,
"Parent hierarchy path": shot_metadata["hierarchy"],
"workfile_start_frame": workfile_start_frame,
"fps": fps,
"handle_start": int(handle_start),
"handle_end": int(handle_end)
}
creator_attributes.update(timing_data)
# create shared new instance data
base_instance_data = {
"shotName": shot_name,
"variant": variant_name,
# HACK: just a temporary bug workaround
# TODO: should look up shot name for update
"asset": parent_asset_name,
"task": "",
"newAssetPublishing": True,
# parent time properties
"trackStartFrame": track_start_frame,
"timelineOffset": timeline_offset,
# creator_attributes
"creator_attributes": creator_attributes
}
# add hierarchy shot metadata
base_instance_data.update(shot_metadata)
return base_instance_data
def _get_timing_data(
self,
otio_clip,
timeline_offset,
track_start_frame,
workfile_start_frame
):
"""Returning available timing data
Args:
otio_clip (otio.Clip): otio clip object
timeline_offset (int): offset value
track_start_frame (int): starting frame input
workfile_start_frame (int): start frame for shot's workfiles
Returns:
dict: timing metadata
"""
# frame ranges data
clip_in = otio_clip.range_in_parent().start_time.value
clip_in += track_start_frame
clip_out = otio_clip.range_in_parent().end_time_inclusive().value
clip_out += track_start_frame
self.log.info(f"clip_in: {clip_in} | clip_out: {clip_out}")
# add offset in case there is any
self.log.debug(f"__ timeline_offset: {timeline_offset}")
if timeline_offset:
clip_in += timeline_offset
clip_out += timeline_offset
clip_duration = otio_clip.duration().value
self.log.info(f"clip duration: {clip_duration}")
source_in = otio_clip.trimmed_range().start_time.value
source_out = source_in + clip_duration
# define starting frame for future shot
frame_start = (
clip_in if workfile_start_frame is None
else workfile_start_frame
)
frame_end = frame_start + (clip_duration - 1)
return {
"frameStart": int(frame_start),
"frameEnd": int(frame_end),
"clipIn": int(clip_in),
"clipOut": int(clip_out),
"clipDuration": int(otio_clip.duration().value),
"sourceIn": int(source_in),
"sourceOut": int(source_out)
}
def _get_allowed_family_presets(self, pre_create_data):
""" Filter out allowed family presets.
Args:
pre_create_data (dict): precreate attributes inputs
Returns:
list: list of dicts with preset items
"""
self.log.debug(f"__ pre_create_data: {pre_create_data}")
return [
{"family": "shot"},
*[
preset for preset in self._creator_settings["family_presets"]
if pre_create_data[preset["family"]]
]
]
def _validate_clip_for_processing(self, otio_clip):
"""Validate otio clip attribues
Args:
otio_clip (otio.Clip): otio clip object
Returns:
bool: True if all passing conditions
"""
if otio_clip.name is None:
return False
if isinstance(otio_clip, otio.schema.Gap):
return False
# skip all generators like black empty
if isinstance(
otio_clip.media_reference,
otio.schema.GeneratorReference):
return False
# Transitions are ignored, because Clips have the full frame
# range.
if isinstance(otio_clip, otio.schema.Transition):
return False
return True
def _validate_name_uniqueness(self, name):
""" Validating name uniqueness.
In the context of other clip names in the sequence file.
Args:
name (str): shot name string
"""
if name not in self.asset_name_check:
self.asset_name_check.append(name)
else:
self.log.warning(
f"Duplicate shot name: {name}! "
"Please check names in the input sequence files."
)
def get_pre_create_attr_defs(self):
""" Creating pre-create attributes at creator plugin.
Returns:
list: list of attribute object instances
"""
# Use same attributes as for instance attributes
attr_defs = [
FileDef(
"sequence_filepath_data",
folders=False,
extensions=[
".edl",
".xml",
".aaf",
".fcpxml"
],
allow_sequences=False,
single_item=True,
label="Sequence file",
),
FileDef(
"media_filepaths_data",
folders=False,
extensions=[
".mov",
".mp4",
".wav"
],
allow_sequences=False,
single_item=False,
label="Media files",
),
# TODO: perhaps timecode and fps inputs would be better
NumberDef(
"timeline_offset",
default=0,
label="Timeline offset"
),
UISeparatorDef(),
UILabelDef("Clip instance attributes"),
UISeparatorDef()
]
# add family preset switches
attr_defs.extend(
BoolDef(_var["family"], label=_var["family"])
for _var in self._creator_settings["family_presets"]
)
attr_defs.append(UISeparatorDef())
attr_defs.extend(CLIP_ATTR_DEFS)
return attr_defs
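
A minimal standalone sketch of the timing arithmetic in _get_timing_data above, using hypothetical frame values instead of a real otio clip:

track_start_frame = 0
timeline_offset = 0
workfile_start_frame = 1001

# range_in_parent() start and inclusive end, shifted by the track start
clip_in = 86400 + track_start_frame + timeline_offset
clip_out = 86449 + track_start_frame + timeline_offset
clip_duration = clip_out - clip_in + 1      # 50 frames
source_in = 12                              # trimmed_range() start
source_out = source_in + clip_duration      # 62

# the shot starts at the workfile start frame when one is set
frame_start = clip_in if workfile_start_frame is None else workfile_start_frame
frame_end = frame_start + (clip_duration - 1)   # 1001..1050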

View file

@ -1,6 +1,7 @@
import os
from openpype.api import get_project_settings, Logger
from openpype.api import get_project_settings
log = Logger.get_logger(__name__)
def initialize():
@ -13,6 +14,7 @@ def initialize():
global_variables = globals()
for item in simple_creators:
dynamic_plugin = SettingsCreator.from_settings(item)
global_variables[dynamic_plugin.__name__] = dynamic_plugin

View file

@ -0,0 +1,216 @@
import copy
import os
import re
from openpype.client import get_assets, get_asset_by_name
from openpype.lib import (
FileDef,
BoolDef,
get_subset_name_with_asset_doc,
TaskNotSetError,
)
from openpype.pipeline import (
CreatedInstance,
CreatorError
)
from openpype.hosts.traypublisher.api.plugin import TrayPublishCreator
class BatchMovieCreator(TrayPublishCreator):
"""Creates instances from movie file(s).
Intended for .mov files, but should work for any video file.
Doesn't handle image sequences though.
"""
identifier = "render_movie_batch"
label = "Batch Movies"
family = "render"
description = "Publish batch of video files"
create_allow_context_change = False
version_regex = re.compile(r"^(.+)_v([0-9]+)$")
def __init__(self, project_settings, *args, **kwargs):
super(BatchMovieCreator, self).__init__(project_settings,
*args, **kwargs)
creator_settings = (
project_settings["traypublisher"]["BatchMovieCreator"]
)
self.default_variants = creator_settings["default_variants"]
self.default_tasks = creator_settings["default_tasks"]
self.extensions = creator_settings["extensions"]
def get_icon(self):
return "fa.file"
def create(self, subset_name, data, pre_create_data):
file_paths = pre_create_data.get("filepath")
if not file_paths:
return
for file_info in file_paths:
instance_data = copy.deepcopy(data)
file_name = file_info["filenames"][0]
filepath = os.path.join(file_info["directory"], file_name)
instance_data["creator_attributes"] = {"filepath": filepath}
asset_doc, version = self.get_asset_doc_from_file_name(
file_name, self.project_name)
subset_name, task_name = self._get_subset_and_task(
asset_doc, data["variant"], self.project_name)
instance_data["task"] = task_name
instance_data["asset"] = asset_doc["name"]
# Create new instance
new_instance = CreatedInstance(self.family, subset_name,
instance_data, self)
self._store_new_instance(new_instance)
def get_asset_doc_from_file_name(self, source_filename, project_name):
"""Try to parse out asset name from file name provided.
Artists might provide various file name formats.
Currently handled:
- chair.mov
- chair_v001.mov
- my_chair_to_upload.mov
"""
version = None
asset_name = os.path.splitext(source_filename)[0]
# Always first check if source filename is in assets
matching_asset_doc = self._get_asset_by_name_case_not_sensitive(
project_name, asset_name)
if matching_asset_doc is None:
matching_asset_doc, version = (
self._parse_with_version(project_name, asset_name))
if matching_asset_doc is None:
matching_asset_doc = self._parse_containing(project_name,
asset_name)
if matching_asset_doc is None:
raise CreatorError(
"Cannot guess asset name from {}".format(source_filename))
return matching_asset_doc, version
def _parse_with_version(self, project_name, asset_name):
"""Try to parse asset name from a file name containing version too
Eg. 'chair_v001.mov' >> 'chair', 1
"""
self.log.debug((
"Asset doc by \"{}\" was not found, trying version regex."
).format(asset_name))
matching_asset_doc = version_number = None
regex_result = self.version_regex.findall(asset_name)
if regex_result:
_asset_name, _version_number = regex_result[0]
matching_asset_doc = self._get_asset_by_name_case_not_sensitive(
project_name, _asset_name)
if matching_asset_doc:
version_number = int(_version_number)
return matching_asset_doc, version_number
def _parse_containing(self, project_name, asset_name):
"""Look if file name contains any existing asset name"""
for asset_doc in get_assets(project_name, fields=["name"]):
if asset_doc["name"].lower() in asset_name.lower():
return get_asset_by_name(project_name, asset_doc["name"])
def _get_subset_and_task(self, asset_doc, variant, project_name):
"""Create subset name according to standard template process"""
task_name = self._get_task_name(asset_doc)
try:
subset_name = get_subset_name_with_asset_doc(
self.family,
variant,
task_name,
asset_doc,
project_name
)
except TaskNotSetError:
# Create instance with fake task
# - instance will be marked as invalid so it can't be published
# but user has the ability to change it
# NOTE: This expects that there is no task 'Undefined' on the asset
task_name = "Undefined"
subset_name = get_subset_name_with_asset_doc(
self.family,
variant,
task_name,
asset_doc,
project_name
)
return subset_name, task_name
def _get_task_name(self, asset_doc):
"""Get applicable task from 'asset_doc' """
available_task_names = {}
asset_tasks = asset_doc.get("data", {}).get("tasks") or {}
for task_name in asset_tasks.keys():
available_task_names[task_name.lower()] = task_name
task_name = None
for _task_name in self.default_tasks:
_task_name_low = _task_name.lower()
if _task_name_low in available_task_names:
task_name = available_task_names[_task_name_low]
break
return task_name
def get_instance_attr_defs(self):
return [
BoolDef(
"add_review_family",
default=True,
label="Review"
)
]
def get_pre_create_attr_defs(self):
# Use same attributes as for instance attributes
return [
FileDef(
"filepath",
folders=False,
single_item=False,
extensions=self.extensions,
label="Filepath"
),
BoolDef(
"add_review_family",
default=True,
label="Review"
)
]
def get_detail_description(self):
return """# Publish batch of .mov to multiple assets.
File names must contain only the asset name, or the asset name + version.
(e.g. 'chair.mov', 'chair_v001.mov'; `my_chair_v001.mov` is not really safe)
"""
def _get_asset_by_name_case_not_sensitive(self, project_name, asset_name):
"""Handle more cases in file names"""
asset_name = re.compile(asset_name, re.IGNORECASE)
assets = list(get_assets(project_name, asset_names=[asset_name]))
if assets:
if len(assets) > 1:
self.log.warning("Too many records found for {}".format(
asset_name))
return
return assets.pop()
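
As an illustration of the parsing order described above, a small standalone sketch of how the class-level version_regex splits a base file name into asset name and version (the file names are hypothetical):

import re

version_regex = re.compile(r"^(.+)_v([0-9]+)$")

for filename in ("chair.mov", "chair_v001.mov"):
    base = filename.rsplit(".", 1)[0]
    result = version_regex.findall(base)
    if result:
        _asset_name, _version = result[0]
        print(_asset_name, int(_version))  # -> chair 1
    else:
        print(base, None)  # -> chair None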

View file

@ -0,0 +1,36 @@
from pprint import pformat
import pyblish.api
class CollectClipInstance(pyblish.api.InstancePlugin):
"""Collect clip instances and resolve its parent"""
label = "Collect Clip Instances"
order = pyblish.api.CollectorOrder - 0.081
hosts = ["traypublisher"]
families = ["plate", "review", "audio"]
def process(self, instance):
creator_identifier = instance.data["creator_identifier"]
if creator_identifier not in [
"editorial_plate",
"editorial_audio",
"editorial_review"
]:
return
instance.data["families"].append("clip")
parent_instance_id = instance.data["parent_instance_id"]
edit_shared_data = instance.context.data["editorialSharedData"]
instance.data.update(
edit_shared_data[parent_instance_id]
)
if "editorialSourcePath" in instance.context.data.keys():
instance.data["editorialSourcePath"] = (
instance.context.data["editorialSourcePath"])
instance.data["families"].append("trimming")
self.log.debug(pformat(instance.data))

View file

@ -0,0 +1,48 @@
import os
from pprint import pformat
import pyblish.api
import opentimelineio as otio
class CollectEditorialInstance(pyblish.api.InstancePlugin):
"""Collect data for instances created by settings creators."""
label = "Collect Editorial Instances"
order = pyblish.api.CollectorOrder - 0.1
hosts = ["traypublisher"]
families = ["editorial"]
def process(self, instance):
if "families" not in instance.data:
instance.data["families"] = []
if "representations" not in instance.data:
instance.data["representations"] = []
fpath = instance.data["sequenceFilePath"]
otio_timeline_string = instance.data.pop("otioTimeline")
otio_timeline = otio.adapters.read_from_string(
otio_timeline_string)
instance.context.data["otioTimeline"] = otio_timeline
instance.context.data["editorialSourcePath"] = (
instance.data["editorialSourcePath"])
self.log.info(fpath)
instance.data["stagingDir"] = os.path.dirname(fpath)
_, ext = os.path.splitext(fpath)
instance.data["representations"].append({
"ext": ext[1:],
"name": ext[1:],
"stagingDir": instance.data["stagingDir"],
"files": os.path.basename(fpath)
})
self.log.debug("Created Editorial Instance {}".format(
pformat(instance.data)
))

View file

@ -0,0 +1,30 @@
import pyblish.api
class CollectEditorialReviewable(pyblish.api.InstancePlugin):
""" Collect review input from user.
Adds the input to instance data.
"""
label = "Collect Editorial Reviewable"
order = pyblish.api.CollectorOrder
families = ["plate", "review", "audio"]
hosts = ["traypublisher"]
def process(self, instance):
creator_identifier = instance.data["creator_identifier"]
if creator_identifier not in [
"editorial_plate",
"editorial_audio",
"editorial_review"
]:
return
creator_attributes = instance.data["creator_attributes"]
if creator_attributes["add_review_family"]:
instance.data["families"].append("review")
self.log.debug("instance.data {}".format(instance.data))

View file

@ -0,0 +1,47 @@
import os
import pyblish.api
from openpype.pipeline import OpenPypePyblishPluginMixin
class CollectMovieBatch(
pyblish.api.InstancePlugin, OpenPypePyblishPluginMixin
):
"""Collect file url for batch movies and create representation.
Adds the 'review' family to the instance and a 'review' tag to the
representation based on the value of the toggle button on the creator.
"""
label = "Collect Movie Batch Files"
order = pyblish.api.CollectorOrder
hosts = ["traypublisher"]
def process(self, instance):
if instance.data.get("creator_identifier") != "render_movie_batch":
return
creator_attributes = instance.data["creator_attributes"]
file_url = creator_attributes["filepath"]
file_name = os.path.basename(file_url)
_, ext = os.path.splitext(file_name)
repre = {
"name": ext[1:],
"ext": ext[1:],
"files": file_name,
"stagingDir": os.path.dirname(file_url),
"tags": []
}
if creator_attributes["add_review_family"]:
repre["tags"].append("review")
instance.data["families"].append("review")
instance.data["representations"].append(repre)
instance.data["source"] = file_url
self.log.debug("instance.data {}".format(instance.data))

View file

@ -1,31 +0,0 @@
import pyblish.api
from openpype.lib import BoolDef
from openpype.pipeline import OpenPypePyblishPluginMixin
class CollectReviewFamily(
pyblish.api.InstancePlugin, OpenPypePyblishPluginMixin
):
"""Add review family."""
label = "Collect Review Family"
order = pyblish.api.CollectorOrder - 0.49
hosts = ["traypublisher"]
families = [
"image",
"render",
"plate",
"review"
]
def process(self, instance):
values = self.get_attr_values_from_data(instance.data)
if values.get("add_review_family"):
instance.data["families"].append("review")
@classmethod
def get_attribute_defs(cls):
return [
BoolDef("add_review_family", label="Review", default=True)
]

View file

@ -0,0 +1,213 @@
from pprint import pformat
import pyblish.api
import opentimelineio as otio
class CollectShotInstance(pyblish.api.InstancePlugin):
""" Collect shot instances
Resolving user inputs from creator attributes
into instance data.
"""
label = "Collect Shot Instances"
order = pyblish.api.CollectorOrder - 0.09
hosts = ["traypublisher"]
families = ["shot"]
SHARED_KEYS = [
"asset",
"fps",
"handleStart",
"handleEnd",
"frameStart",
"frameEnd",
"clipIn",
"clipOut",
"clipDuration",
"sourceIn",
"sourceOut",
"otioClip",
"workfileFrameStart"
]
def process(self, instance):
self.log.debug(pformat(instance.data))
creator_identifier = instance.data["creator_identifier"]
if "editorial" not in creator_identifier:
return
# get otio clip object
otio_clip = self._get_otio_clip(instance)
instance.data["otioClip"] = otio_clip
# first solve the inputs from creator attr
data = self._solve_inputs_to_data(instance)
instance.data.update(data)
# distribute all shared keys to clips instances
self._distribute_shared_data(instance)
self._solve_hierarchy_context(instance)
self.log.debug(pformat(instance.data))
def _get_otio_clip(self, instance):
""" Converts otio string data.
Convert them to proper otio object
and finds its equivalent at otio timeline.
This process is a hack to support also
resolving parent range.
Args:
instance (obj): publishing instance
Returns:
otio.Clip: otio clip object
"""
context = instance.context
# convert otio clip from string to object
otio_clip_string = instance.data.pop("otioClip")
otio_clip = otio.adapters.read_from_string(
otio_clip_string)
otio_timeline = context.data["otioTimeline"]
clips = [
clip for clip in otio_timeline.each_child(
descended_from_type=otio.schema.Clip)
if clip.name == otio_clip.name
]
otio_clip = clips.pop()
self.log.debug(f"__ otioclip.parent: {otio_clip.parent}")
return otio_clip
def _distribute_shared_data(self, instance):
""" Distribute all defined keys.
All data are shared between all related
instances in context.
Args:
instance (obj): publishing instance
"""
context = instance.context
instance_id = instance.data["instance_id"]
if not context.data.get("editorialSharedData"):
context.data["editorialSharedData"] = {}
context.data["editorialSharedData"][instance_id] = {
_k: _v for _k, _v in instance.data.items()
if _k in self.SHARED_KEYS
}
def _solve_inputs_to_data(self, instance):
""" Resolve all user inputs into instance data.
Args:
instance (obj): publishing instance
Returns:
dict: instance data updating data
"""
_cr_attrs = instance.data["creator_attributes"]
workfile_start_frame = _cr_attrs["workfile_start_frame"]
frame_start = _cr_attrs["frameStart"]
frame_end = _cr_attrs["frameEnd"]
frame_dur = frame_end - frame_start
return {
"asset": _cr_attrs["asset_name"],
"fps": float(_cr_attrs["fps"]),
"handleStart": _cr_attrs["handle_start"],
"handleEnd": _cr_attrs["handle_end"],
"frameStart": workfile_start_frame,
"frameEnd": workfile_start_frame + frame_dur,
"clipIn": _cr_attrs["clipIn"],
"clipOut": _cr_attrs["clipOut"],
"clipDuration": _cr_attrs["clipDuration"],
"sourceIn": _cr_attrs["sourceIn"],
"sourceOut": _cr_attrs["sourceOut"],
"workfileFrameStart": workfile_start_frame
}
def _solve_hierarchy_context(self, instance):
""" Adding hierarchy data to context shared data.
Args:
instance (obj): publishing instance
"""
context = instance.context
final_context = (
context.data["hierarchyContext"]
if context.data.get("hierarchyContext")
else {}
)
name = instance.data["asset"]
# get handles
handle_start = int(instance.data["handleStart"])
handle_end = int(instance.data["handleEnd"])
in_info = {
"entity_type": "Shot",
"custom_attributes": {
"handleStart": handle_start,
"handleEnd": handle_end,
"frameStart": instance.data["frameStart"],
"frameEnd": instance.data["frameEnd"],
"clipIn": instance.data["clipIn"],
"clipOut": instance.data["clipOut"],
"fps": instance.data["fps"]
},
"tasks": instance.data["tasks"]
}
parents = instance.data.get('parents', [])
self.log.debug(f"parents: {pformat(parents)}")
actual = {name: in_info}
for parent in reversed(parents):
parent_name = parent["entity_name"]
next_dict = {
parent_name: {
"entity_type": parent["entity_type"],
"childs": actual
}
}
actual = next_dict
final_context = self._update_dict(final_context, actual)
# adding hierarchy context to instance
context.data["hierarchyContext"] = final_context
self.log.debug(pformat(final_context))
def _update_dict(self, ex_dict, new_dict):
""" Recursion function
Updating nested data with another nested data.
Args:
ex_dict (dict): nested data
new_dict (dict): nested data
Returns:
dict: updated nested data
"""
for key in ex_dict:
if key in new_dict and isinstance(ex_dict[key], dict):
new_dict[key] = self._update_dict(ex_dict[key], new_dict[key])
elif not ex_dict.get(key) or not new_dict.get(key):
new_dict[key] = ex_dict[key]
return new_dict
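
A standalone sketch of the nested merge _update_dict performs, with hypothetical hierarchy data, showing how two shots end up under the same sequence:

def update_dict(ex_dict, new_dict):
    # mirror of _update_dict above: recurse into dicts present on both
    # sides, otherwise keep the existing value when either side is empty
    for key in ex_dict:
        if key in new_dict and isinstance(ex_dict[key], dict):
            new_dict[key] = update_dict(ex_dict[key], new_dict[key])
        elif not ex_dict.get(key) or not new_dict.get(key):
            new_dict[key] = ex_dict[key]
    return new_dict

existing = {"sq01": {"childs": {"sh010": {"entity_type": "Shot"}}}}
incoming = {"sq01": {"childs": {"sh020": {"entity_type": "Shot"}}}}
merged = update_dict(existing, incoming)
# merged["sq01"]["childs"] now contains both sh010 and sh020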

View file

@ -1,9 +1,31 @@
import os
import tempfile
import clique
import pyblish.api
class CollectSettingsSimpleInstances(pyblish.api.InstancePlugin):
"""Collect data for instances created by settings creators."""
"""Collect data for instances created by settings creators.
Plugin creates representations for simple instances based
on the 'representation_files' attribute stored on instance data.
There is also the possibility of a reviewable representation, which can be
stored under the 'reviewable' attribute on instance data. If a representation
was already created with the same files as 'reviewable' contains, that
representation is reused for review.
Representations can be marked for review, in which case the 'review' family
is also added to instance families. Only one representation can be marked
for review, so the **first** representation that has an extension available
in '_review_extensions' is used for review.
The instance 'source' is the path from the last representation created
from 'representation_files'.
Sets the staging directory on the instance. That is probably never used
because each created representation has its own staging dir.
"""
label = "Collect Settings Simple Instances"
order = pyblish.api.CollectorOrder - 0.49
@ -14,37 +36,193 @@ class CollectSettingsSimpleInstances(pyblish.api.InstancePlugin):
if not instance.data.get("settings_creator"):
return
if "families" not in instance.data:
instance.data["families"] = []
instance_label = instance.data["name"]
# Create instance's staging dir in temp
tmp_folder = tempfile.mkdtemp(prefix="traypublisher_")
instance.data["stagingDir"] = tmp_folder
instance.context.data["cleanupFullPaths"].append(tmp_folder)
if "representations" not in instance.data:
instance.data["representations"] = []
repres = instance.data["representations"]
self.log.debug((
"Created temp staging directory for instance {}. {}"
).format(instance_label, tmp_folder))
# Store filepaths for validation of their existence
source_filepaths = []
# Make sure there are no representations with same name
repre_names_counter = {}
# Store created names for logging
repre_names = []
# Store set of filepaths per each representation
representation_files_mapping = []
source = self._create_main_representations(
instance,
source_filepaths,
repre_names_counter,
repre_names,
representation_files_mapping
)
self._create_review_representation(
instance,
source_filepaths,
repre_names_counter,
repre_names,
representation_files_mapping
)
instance.data["source"] = source
instance.data["sourceFilepaths"] = list(set(source_filepaths))
self.log.debug(
(
"Created Simple Settings instance \"{}\""
" with {} representations: {}"
).format(
instance_label,
len(instance.data["representations"]),
", ".join(repre_names)
)
)
def _create_main_representations(
self,
instance,
source_filepaths,
repre_names_counter,
repre_names,
representation_files_mapping
):
creator_attributes = instance.data["creator_attributes"]
filepath_items = creator_attributes["representation_files"]
if not isinstance(filepath_items, list):
filepath_items = [filepath_items]
source = None
for filepath_item in filepath_items:
# Skip if filepath item does not have filenames
if not filepath_item["filenames"]:
continue
filepaths = {
os.path.join(filepath_item["directory"], filename)
for filename in filepath_item["filenames"]
}
source_filepaths.extend(filepaths)
source = self._calculate_source(filepaths)
representation = self._create_representation_data(
filepath_item, repre_names_counter, repre_names
)
instance.data["representations"].append(representation)
representation_files_mapping.append(
(filepaths, representation, source)
)
return source
def _create_review_representation(
self,
instance,
source_filepaths,
repre_names_counter,
repre_names,
representation_files_mapping
):
# Skip review representation creation if there are no representations
# created for "main" part
# - review representation must not be created in that case so that
# validation can take care of it
if not representation_files_mapping:
self.log.warning((
"There are missing source representations."
" Creation of review representation was skipped."
))
return
creator_attributes = instance.data["creator_attributes"]
filepath_item = creator_attributes["filepath"]
self.log.info(filepath_item)
filepaths = [
os.path.join(filepath_item["directory"], filename)
for filename in filepath_item["filenames"]
]
review_file_item = creator_attributes["reviewable"]
filenames = review_file_item.get("filenames")
if not filenames:
self.log.debug((
"Filepath for review is not defined."
" Skipping review representation creation."
))
return
instance.data["sourceFilepaths"] = filepaths
instance.data["stagingDir"] = filepath_item["directory"]
filepaths = {
os.path.join(review_file_item["directory"], filename)
for filename in filenames
}
source_filepaths.extend(filepaths)
# First try to find a representation with the same filepaths
# so a new representation is not needed just for review
review_representation = None
# Review path (only for logging)
review_path = None
for item in representation_files_mapping:
_filepaths, representation, repre_path = item
if _filepaths == filepaths:
review_representation = representation
review_path = repre_path
break
if review_representation is None:
self.log.debug("Creating new review representation")
review_path = self._calculate_source(filepaths)
review_representation = self._create_representation_data(
review_file_item, repre_names_counter, repre_names
)
instance.data["representations"].append(review_representation)
if "review" not in instance.data["families"]:
instance.data["families"].append("review")
review_representation["tags"].append("review")
self.log.debug("Representation {} was marked for review. {}".format(
review_representation["name"], review_path
))
def _create_representation_data(
self, filepath_item, repre_names_counter, repre_names
):
"""Create new representation data based on file item.
Args:
filepath_item (Dict[str, Any]): Item with information about
representation paths.
repre_names_counter (Dict[str, int]): Store count of representation
names.
repre_names (List[str]): All used representation names. For
logging purposes.
Returns:
Dict: Prepared base representation data.
"""
filenames = filepath_item["filenames"]
_, ext = os.path.splitext(filenames[0])
ext = ext[1:]
if len(filenames) == 1:
filenames = filenames[0]
repres.append({
"ext": ext,
"name": ext,
repre_name = repre_ext = ext[1:]
if repre_name not in repre_names_counter:
repre_names_counter[repre_name] = 2
else:
counter = repre_names_counter[repre_name]
repre_names_counter[repre_name] += 1
repre_name = "{}_{}".format(repre_name, counter)
repre_names.append(repre_name)
return {
"ext": repre_ext,
"name": repre_name,
"stagingDir": filepath_item["directory"],
"files": filenames
})
"files": filenames,
"tags": []
}
self.log.debug("Created Simple Settings instance {}".format(
instance.data
))
def _calculate_source(self, filepaths):
cols, rems = clique.assemble(filepaths)
if cols:
source = cols[0].format("{head}{padding}{tail}")
elif rems:
source = rems[0]
return source
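
The source path calculation above relies on the clique library; a minimal sketch with hypothetical file paths (the exact pattern string depends on clique's padding detection):

import clique

filepaths = {
    "/tmp/plates/sh010.1001.exr",
    "/tmp/plates/sh010.1002.exr",
}
collections, remainders = clique.assemble(filepaths)
if collections:
    # a single pattern string, e.g. "/tmp/plates/sh010.%d.exr"
    source = collections[0].format("{head}{padding}{tail}")
elif remainders:
    source = remainders[0]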

View file

@ -0,0 +1,15 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
<error id="main">
<title>Invalid frame range</title>
<description>
## Invalid frame range
Expected duration of '{duration}' frames is set in the database, but the workfile contains only '{found}' frames.
### How to repair?
Modify configuration in the database or tweak frame range in the workfile.
</description>
</error>
</root>

View file

@ -3,8 +3,17 @@ import pyblish.api
from openpype.pipeline import PublishValidationError
class ValidateWorkfilePath(pyblish.api.InstancePlugin):
"""Validate existence of workfile instance existence."""
class ValidateFilePath(pyblish.api.InstancePlugin):
"""Validate existence of source filepaths on instance.
Plugin looks into the key 'sourceFilepaths' and validates that the paths
there actually exist on disk.
Also validates when the key is filled but empty. In that case it also
raises, so do not fill the key if an empty value should not cause an error.
This is primarily created for Simple Creator instances.
"""
label = "Validate Workfile"
order = pyblish.api.ValidatorOrder - 0.49
@ -14,12 +23,28 @@ class ValidateWorkfilePath(pyblish.api.InstancePlugin):
def process(self, instance):
if "sourceFilepaths" not in instance.data:
self.log.info((
"Can't validate source filepaths existence."
"Skipped validation of source filepaths existence."
" Instance does not have collected 'sourceFilepaths'"
))
return
filepaths = instance.data.get("sourceFilepaths")
family = instance.data["family"]
label = instance.data["name"]
filepaths = instance.data["sourceFilepaths"]
if not filepaths:
raise PublishValidationError(
(
"Source filepaths of '{}' instance \"{}\" are not filled"
).format(family, label),
"File not filled",
(
"## Files were not filled"
"\nThis mean that you didn't enter any files into required"
" file input."
"\n- Please refresh publishing and check instance"
" <b>{}</b>"
).format(label)
)
not_found_files = [
filepath
@ -34,11 +59,7 @@ class ValidateWorkfilePath(pyblish.api.InstancePlugin):
raise PublishValidationError(
(
"Filepath of '{}' instance \"{}\" does not exist:\n{}"
).format(
instance.data["family"],
instance.data["name"],
joined_paths
),
).format(family, label, joined_paths),
"File not found",
(
"## Files were not found\nFiles\n{}"

View file

@ -0,0 +1,75 @@
import re
import pyblish.api
import openpype.api
from openpype.pipeline import (
PublishXmlValidationError,
OptionalPyblishPluginMixin
)
class ValidateFrameRange(OptionalPyblishPluginMixin,
pyblish.api.InstancePlugin):
"""Validating frame range of rendered files against state in DB."""
label = "Validate Frame Range"
hosts = ["traypublisher"]
families = ["render"]
order = openpype.api.ValidateContentsOrder
optional = True
# published data might be a sequence (.mov, .mp4); in that case counting
# files doesn't make sense
check_extensions = ["exr", "dpx", "jpg", "jpeg", "png", "tiff", "tga",
"gif", "svg"]
skip_timelines_check = [] # skip for specific task names (regex)
def process(self, instance):
# Skip the instance if is not active by data on the instance
if not self.is_active(instance.data):
return
if (self.skip_timelines_check and
any(re.search(pattern, instance.data["task"])
for pattern in self.skip_timelines_check)):
self.log.info("Skipping for {} task".format(instance.data["task"]))
asset_doc = instance.data["assetEntity"]
asset_data = asset_doc["data"]
frame_start = asset_data["frameStart"]
frame_end = asset_data["frameEnd"]
handle_start = asset_data["handleStart"]
handle_end = asset_data["handleEnd"]
duration = (frame_end - frame_start + 1) + handle_start + handle_end
repres = instance.data.get("representations")
if not repres:
self.log.info("No representations, skipping.")
return
first_repre = repres[0]
ext = first_repre['ext'].replace(".", '')
if not ext or ext.lower() not in self.check_extensions:
self.log.warning("Cannot check for extension {}".format(ext))
return
files = first_repre["files"]
if isinstance(files, str):
files = [files]
frames = len(files)
msg = (
"Frame duration from DB:'{}' doesn't match number of files:'{}'"
" Please change frame range for Asset or limit no. of files"
).format(int(duration), frames)
formatting_data = {"duration": duration,
"found": frames}
if frames != duration:
raise PublishXmlValidationError(self, msg,
formatting_data=formatting_data)
self.log.debug("Valid ranges expected '{}' - found '{}'".
format(int(duration), frames))
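
The expected duration check boils down to simple arithmetic; a worked example with hypothetical asset values:

frame_start, frame_end = 1001, 1100
handle_start, handle_end = 10, 10

duration = (frame_end - frame_start + 1) + handle_start + handle_end
assert duration == 120  # a 100 frame shot plus 10 frame handles on each side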

View file

@ -1,17 +1,16 @@
import os
from openpype.client import get_project, get_asset_by_name
from openpype.lib import (
StringTemplate,
get_workfile_template_key_from_context,
get_workdir_data,
get_last_workfile_with_version,
)
from openpype.lib import StringTemplate
from openpype.pipeline import (
registered_host,
legacy_io,
Anatomy,
)
from openpype.pipeline.workfile import (
get_workfile_template_key_from_context,
get_last_workfile_with_version,
)
from openpype.pipeline.template_data import get_template_data_with_names
from openpype.hosts.tvpaint.api import lib, pipeline, plugin
@ -54,19 +53,17 @@ class LoadWorkfile(plugin.Loader):
asset_name = legacy_io.Session["AVALON_ASSET"]
task_name = legacy_io.Session["AVALON_TASK"]
project_doc = get_project(project_name)
asset_doc = get_asset_by_name(project_name, asset_name)
template_key = get_workfile_template_key_from_context(
asset_name,
task_name,
host_name,
project_name=project_name,
dbcon=legacy_io
project_name=project_name
)
anatomy = Anatomy(project_name)
data = get_workdir_data(project_doc, asset_doc, task_name, host_name)
data = get_template_data_with_names(
project_name, asset_name, task_name, host_name
)
data["root"] = anatomy.roots
file_template = anatomy.templates[template_key]["file"]

View file

@ -1,13 +1,13 @@
# -*- coding: utf-8 -*-
"""Hook to launch Unreal and prepare projects."""
import os
import copy
from pathlib import Path
from openpype.lib import (
PreLaunchHook,
ApplicationLaunchFailed,
ApplicationNotFound,
get_workdir_data,
get_workfile_template_key
)
import openpype.hosts.unreal.lib as unreal_lib
@ -35,18 +35,13 @@ class UnrealPrelaunchHook(PreLaunchHook):
return last_workfile.name
# Prepare data for fill data and for getting workfile template key
task_name = self.data["task_name"]
anatomy = self.data["anatomy"]
asset_doc = self.data["asset_doc"]
project_doc = self.data["project_doc"]
asset_tasks = asset_doc.get("data", {}).get("tasks") or {}
task_info = asset_tasks.get(task_name) or {}
task_type = task_info.get("type")
# Use already prepared workdir data
workdir_data = copy.deepcopy(self.data["workdir_data"])
task_type = workdir_data.get("task", {}).get("type")
workdir_data = get_workdir_data(
project_doc, asset_doc, task_name, self.host_name
)
# QUESTION raise exception if version is part of filename template?
workdir_data["version"] = 1
workdir_data["ext"] = "uproject"

View file

@ -8,13 +8,13 @@ from unreal import EditorAssetLibrary
from unreal import MovieSceneSkeletalAnimationTrack
from unreal import MovieSceneSkeletalAnimationSection
from openpype.pipeline.context_tools import get_current_project_asset
from openpype.pipeline import (
get_representation_path,
AVALON_CONTAINER_ID
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
from openpype.api import get_asset
class AnimationFBXLoader(plugin.Loader):
@ -53,6 +53,8 @@ class AnimationFBXLoader(plugin.Loader):
if not actor:
return None
asset_doc = get_current_project_asset(fields=["data.fps"])
task.set_editor_property('filename', self.fname)
task.set_editor_property('destination_path', asset_dir)
task.set_editor_property('destination_name', asset_name)
@ -80,7 +82,7 @@ class AnimationFBXLoader(plugin.Loader):
task.options.anim_sequence_import_data.set_editor_property(
'use_default_sample_rate', False)
task.options.anim_sequence_import_data.set_editor_property(
'custom_sample_rate', get_asset()["data"].get("fps"))
'custom_sample_rate', asset_doc.get("data", {}).get("fps"))
task.options.anim_sequence_import_data.set_editor_property(
'import_custom_attribute', True)
task.options.anim_sequence_import_data.set_editor_property(
@ -246,6 +248,7 @@ class AnimationFBXLoader(plugin.Loader):
def update(self, container, representation):
name = container["asset_name"]
source_path = get_representation_path(representation)
asset_doc = get_current_project_asset(fields=["data.fps"])
destination_path = container["namespace"]
task = unreal.AssetImportTask()
@ -279,7 +282,7 @@ class AnimationFBXLoader(plugin.Loader):
task.options.anim_sequence_import_data.set_editor_property(
'use_default_sample_rate', False)
task.options.anim_sequence_import_data.set_editor_property(
'custom_sample_rate', get_asset()["data"].get("fps"))
'custom_sample_rate', asset_doc.get("data", {}).get("fps"))
task.options.anim_sequence_import_data.set_editor_property(
'import_custom_attribute', True)
task.options.anim_sequence_import_data.set_editor_property(

View file

@ -23,7 +23,7 @@ from openpype.pipeline import (
AVALON_CONTAINER_ID,
legacy_io,
)
from openpype.api import get_asset
from openpype.pipeline.context_tools import get_current_project_asset
from openpype.api import get_current_project_settings
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
@ -232,6 +232,7 @@ class LayoutLoader(plugin.Loader):
anim_path = f"{asset_dir}/animations/{anim_file_name}"
asset_doc = get_current_project_asset()
# Import animation
task = unreal.AssetImportTask()
task.options = unreal.FbxImportUI()
@ -266,7 +267,7 @@ class LayoutLoader(plugin.Loader):
task.options.anim_sequence_import_data.set_editor_property(
'use_default_sample_rate', False)
task.options.anim_sequence_import_data.set_editor_property(
'custom_sample_rate', get_asset()["data"].get("fps"))
'custom_sample_rate', asset_doc.get("data", {}).get("fps"))
task.options.anim_sequence_import_data.set_editor_property(
'import_custom_attribute', True)
task.options.anim_sequence_import_data.set_editor_property(

View file

@ -10,9 +10,9 @@ from aiohttp.web_response import Response
from openpype.client import (
get_projects,
get_assets,
OpenPypeMongoConnection,
)
from openpype.lib import (
OpenPypeMongoConnection,
PypeLogger,
)
from openpype.lib.remote_publish import (

View file

@ -6,6 +6,7 @@ import requests
import json
import subprocess
from openpype.client import OpenPypeMongoConnection
from openpype.lib import PypeLogger
from .webpublish_routes import (
@ -121,8 +122,6 @@ def run_webserver(*args, **kwargs):
def reprocess_failed(upload_dir, webserver_url):
# log.info("check_reprocesable_records")
from openpype.lib import OpenPypeMongoConnection
mongo_client = OpenPypeMongoConnection.get_mongo_client()
database_name = os.environ["OPENPYPE_DATABASE_NAME"]
dbcon = mongo_client[database_name]["webpublishes"]

View file

@ -63,7 +63,10 @@ from .execute import (
path_to_subprocess_arg,
CREATE_NO_WINDOW
)
from .log import PypeLogger, timeit
from .log import (
Logger,
PypeLogger,
)
from .path_templates import (
merge_dict,
@ -83,8 +86,9 @@ from .anatomy import (
Anatomy
)
from .config import (
from .dateutils import (
get_datetime_data,
get_timestamp,
get_formatted_current_time
)
@ -111,6 +115,7 @@ from .transcoding import (
get_ffmpeg_codec_args,
get_ffmpeg_format_args,
convert_ffprobe_fps_value,
convert_ffprobe_fps_to_float,
)
from .avalon_context import (
CURRENT_DOC_SCHEMAS,
@ -283,6 +288,7 @@ __all__ = [
"get_ffmpeg_codec_args",
"get_ffmpeg_format_args",
"convert_ffprobe_fps_value",
"convert_ffprobe_fps_to_float",
"CURRENT_DOC_SCHEMAS",
"PROJECT_NAME_ALLOWED_SYMBOLS",
@ -370,13 +376,13 @@ __all__ = [
"get_datetime_data",
"get_formatted_current_time",
"Logger",
"PypeLogger",
"get_default_components",
"validate_mongo_connection",
"OpenPypeMongoConnection",
"timeit",
"is_overlapping_otio_ranges",
"otio_range_with_handles",
"convert_to_padded_path",

View file

@ -27,12 +27,6 @@ from openpype.settings.constants import (
from . import PypeLogger
from .profiles_filtering import filter_profiles
from .local_settings import get_openpype_username
from .avalon_context import (
get_workdir_data,
get_workdir_with_workdir_data,
get_workfile_template_key,
get_last_workfile
)
from .python_module_tools import (
modules_from_path,
@ -1576,6 +1570,9 @@ def prepare_context_environments(data, env_group=None):
data (EnvironmentPrepData): Dictionary where result and intermediate
result will be stored.
"""
from openpype.pipeline.template_data import get_template_data
# Context environments
log = data["log"]
@ -1596,7 +1593,9 @@ def prepare_context_environments(data, env_group=None):
# Load project specific environments
project_name = project_doc["name"]
project_settings = get_project_settings(project_name)
system_settings = get_system_settings()
data["project_settings"] = project_settings
data["system_settings"] = system_settings
# Apply project specific environments on current env value
apply_project_environments_value(
project_name, data["env"], project_settings, env_group
@ -1619,8 +1618,8 @@ def prepare_context_environments(data, env_group=None):
if not app.is_host:
return
workdir_data = get_workdir_data(
project_doc, asset_doc, task_name, app.host_name
workdir_data = get_template_data(
project_doc, asset_doc, task_name, app.host_name, system_settings
)
data["workdir_data"] = workdir_data
@ -1631,7 +1630,14 @@ def prepare_context_environments(data, env_group=None):
data["task_type"] = task_type
try:
workdir = get_workdir_with_workdir_data(workdir_data, anatomy)
from openpype.pipeline.workfile import get_workdir_with_workdir_data
workdir = get_workdir_with_workdir_data(
workdir_data,
anatomy.project_name,
anatomy,
project_settings=project_settings
)
except Exception as exc:
raise ApplicationLaunchFailed(
@ -1721,11 +1727,19 @@ def _prepare_last_workfile(data, workdir):
if not last_workfile_path:
extensions = HOST_WORKFILE_EXTENSIONS.get(app.host_name)
if extensions:
from openpype.pipeline.workfile import (
get_workfile_template_key,
get_last_workfile
)
anatomy = data["anatomy"]
project_settings = data["project_settings"]
task_type = workdir_data["task"]["type"]
template_key = get_workfile_template_key(
task_type, app.host_name, project_settings=project_settings
task_type,
app.host_name,
project_name,
project_settings=project_settings
)
# Find last workfile
file_template = str(anatomy.templates[template_key]["file"])

View file

@ -14,6 +14,7 @@ class AbstractAttrDefMeta(ABCMeta):
Each object of `AbtractAttrDef` must have a defined 'key' attribute.
"""
def __call__(self, *args, **kwargs):
obj = super(AbstractAttrDefMeta, self).__call__(*args, **kwargs)
init_class = getattr(obj, "__init__class__", None)
@ -45,6 +46,7 @@ class AbtractAttrDef:
is_label_horizontal(bool): UI specific argument. Specify if label is
next to value input or ahead.
"""
is_value_def = True
def __init__(
@ -77,6 +79,7 @@ class AbtractAttrDef:
Convert passed value to a valid type. Use default if value can't be
converted.
"""
pass
@ -113,6 +116,7 @@ class UnknownDef(AbtractAttrDef):
This attribute can be used to keep existing data unchanged but does not
have known definition of type.
"""
def __init__(self, key, default=None, **kwargs):
kwargs["default"] = default
super(UnknownDef, self).__init__(key, **kwargs)
@ -204,6 +208,7 @@ class TextDef(AbtractAttrDef):
placeholder(str): UI placeholder for attribute.
default(str, None): Default value. Empty string used when not defined.
"""
def __init__(
self, key, multiline=None, regex=None, placeholder=None, default=None,
**kwargs
@ -531,14 +536,15 @@ class FileDef(AbtractAttrDef):
Args:
single_item(bool): Allow only single path item.
folders(bool): Allow folder paths.
extensions(list<str>): Allow files with extensions. Empty list will
extensions(List[str]): Allow files with extensions. Empty list will
allow all extensions and None will disable files completely.
default(str, list<str>): Defautl value.
extensions_label(str): Custom label shown instead of extensions in UI.
default(str, List[str]): Default value.
"""
def __init__(
self, key, single_item=True, folders=None, extensions=None,
allow_sequences=True, default=None, **kwargs
allow_sequences=True, extensions_label=None, default=None, **kwargs
):
if folders is None and extensions is None:
folders = True
@ -578,6 +584,7 @@ class FileDef(AbtractAttrDef):
self.folders = folders
self.extensions = set(extensions)
self.allow_sequences = allow_sequences
self.extensions_label = extensions_label
super(FileDef, self).__init__(key, default=default, **kwargs)
def __eq__(self, other):
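
A usage sketch of the extended FileDef signature, showing where the new extensions_label argument slots in (the values are illustrative, not from this changeset):

from openpype.lib import FileDef

media_def = FileDef(
    "media_filepaths_data",
    folders=False,
    single_item=False,
    extensions=[".mov", ".mp4", ".wav"],
    allow_sequences=False,
    extensions_label="Video or audio files",  # shown instead of the extension list
    label="Media files",
)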

Some files were not shown because too many files have changed in this diff.