Merge branch 'develop' into update_user_studios

Milan Kolar 2022-07-22 09:20:31 +02:00
commit 0a2006cf7f
272 changed files with 9721 additions and 5538 deletions

.gitignore vendored (2 changes)

@@ -102,3 +102,5 @@ website/.docusaurus
.poetry/
.python-version
tools/run_eventserver.*

.gitmodules vendored, new file (7 changes)

@@ -0,0 +1,7 @@
[submodule "tools/modules/powershell/BurntToast"]
path = tools/modules/powershell/BurntToast
url = https://github.com/Windos/BurntToast.git
[submodule "tools/modules/powershell/PSWriteColor"]
path = tools/modules/powershell/PSWriteColor
url = https://github.com/EvotecIT/PSWriteColor.git


@@ -1,147 +1,141 @@
# Changelog
## [3.11.2-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.12.2-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.11.1...HEAD)
### 📖 Documentation
- General: Add ability to change user value for templates [\#3366](https://github.com/pypeclub/OpenPype/pull/3366)
- Feature/multiverse [\#3350](https://github.com/pypeclub/OpenPype/pull/3350)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.12.1...HEAD)
**🚀 Enhancements**
- Hosts: More options for in-host callbacks [\#3357](https://github.com/pypeclub/OpenPype/pull/3357)
- TVPaint: Extractor use mark in/out range to render [\#3308](https://github.com/pypeclub/OpenPype/pull/3308)
- Maya: Allow more data to be published along camera 🎥 [\#3304](https://github.com/pypeclub/OpenPype/pull/3304)
- General: Interactive console in cli [\#3526](https://github.com/pypeclub/OpenPype/pull/3526)
- Ftrack: Automatic daily review session creation can define trigger hour [\#3516](https://github.com/pypeclub/OpenPype/pull/3516)
- Ftrack: add source into Note [\#3509](https://github.com/pypeclub/OpenPype/pull/3509)
- Ftrack: Trigger custom ftrack topic of project structure creation [\#3506](https://github.com/pypeclub/OpenPype/pull/3506)
- Settings UI: Add extract to file action on project view [\#3505](https://github.com/pypeclub/OpenPype/pull/3505)
- Add pack and unpack convenience scripts [\#3502](https://github.com/pypeclub/OpenPype/pull/3502)
- General: Event system [\#3499](https://github.com/pypeclub/OpenPype/pull/3499)
- NewPublisher: Keep plugins with mismatch target in report [\#3498](https://github.com/pypeclub/OpenPype/pull/3498)
- Nuke: load clip with options from settings [\#3497](https://github.com/pypeclub/OpenPype/pull/3497)
- TrayPublisher: implemented render\_mov\_batch [\#3486](https://github.com/pypeclub/OpenPype/pull/3486)
- Migrate basic families to the new Tray Publisher [\#3469](https://github.com/pypeclub/OpenPype/pull/3469)
**🐛 Bug fixes**
- TVPaint: Make sure exit code is set to not None [\#3382](https://github.com/pypeclub/OpenPype/pull/3382)
- Maya: vray device aspect ratio fix [\#3381](https://github.com/pypeclub/OpenPype/pull/3381)
- Harmony: added unc path to zifile command in Harmony [\#3372](https://github.com/pypeclub/OpenPype/pull/3372)
- Additional fixes for powershell scripts [\#3525](https://github.com/pypeclub/OpenPype/pull/3525)
- Maya: Added wrapper around cmds.setAttr [\#3523](https://github.com/pypeclub/OpenPype/pull/3523)
- General: Fix hash of centos oiio archive [\#3519](https://github.com/pypeclub/OpenPype/pull/3519)
- Maya: Renderman display output fix [\#3514](https://github.com/pypeclub/OpenPype/pull/3514)
- TrayPublisher: Simple creation enhancements and fixes [\#3513](https://github.com/pypeclub/OpenPype/pull/3513)
- NewPublisher: Publish attributes are properly collected [\#3510](https://github.com/pypeclub/OpenPype/pull/3510)
- TrayPublisher: Make sure host name is filled [\#3504](https://github.com/pypeclub/OpenPype/pull/3504)
- NewPublisher: Groups work and enum multivalue [\#3501](https://github.com/pypeclub/OpenPype/pull/3501)
**🔀 Refactored code**
- Harmony: Use client query functions [\#3378](https://github.com/pypeclub/OpenPype/pull/3378)
- Photoshop: Use client query functions [\#3375](https://github.com/pypeclub/OpenPype/pull/3375)
- AfterEffects: Use client query functions [\#3374](https://github.com/pypeclub/OpenPype/pull/3374)
- TVPaint: Use client query functions [\#3340](https://github.com/pypeclub/OpenPype/pull/3340)
- Ftrack: Use client query functions [\#3339](https://github.com/pypeclub/OpenPype/pull/3339)
- Standalone Publisher: Use client query functions [\#3330](https://github.com/pypeclub/OpenPype/pull/3330)
- General: Client docstrings cleanup [\#3529](https://github.com/pypeclub/OpenPype/pull/3529)
- TimersManager: Use query functions [\#3495](https://github.com/pypeclub/OpenPype/pull/3495)
**Merged pull requests:**
## [3.12.1](https://github.com/pypeclub/OpenPype/tree/3.12.1) (2022-07-13)
- Maya - added support for single frame playblast review [\#3369](https://github.com/pypeclub/OpenPype/pull/3369)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.1-nightly.6...3.12.1)
### 📖 Documentation
- Docs: Added minimal permissions for MongoDB [\#3441](https://github.com/pypeclub/OpenPype/pull/3441)
**🆕 New features**
- Maya: Add VDB to Arnold loader [\#3433](https://github.com/pypeclub/OpenPype/pull/3433)
**🚀 Enhancements**
- TrayPublisher: Added more options for grouping of instances [\#3494](https://github.com/pypeclub/OpenPype/pull/3494)
- NewPublisher: Align creator attributes from top to bottom [\#3487](https://github.com/pypeclub/OpenPype/pull/3487)
- NewPublisher: Added ability to use label of instance [\#3484](https://github.com/pypeclub/OpenPype/pull/3484)
- General: Creator Plugins have access to project [\#3476](https://github.com/pypeclub/OpenPype/pull/3476)
- General: Better arguments order in creator init [\#3475](https://github.com/pypeclub/OpenPype/pull/3475)
- Ftrack: Trigger custom ftrack events on project creation and preparation [\#3465](https://github.com/pypeclub/OpenPype/pull/3465)
- Windows installer: Clean old files and add version subfolder [\#3445](https://github.com/pypeclub/OpenPype/pull/3445)
- Blender: Bugfix - Set fps properly on open [\#3426](https://github.com/pypeclub/OpenPype/pull/3426)
- Hiero: Add custom scripts menu [\#3425](https://github.com/pypeclub/OpenPype/pull/3425)
- Blender: pre pyside install for all platforms [\#3400](https://github.com/pypeclub/OpenPype/pull/3400)
**🐛 Bug fixes**
- TrayPublisher: Keep use instance label in list view [\#3493](https://github.com/pypeclub/OpenPype/pull/3493)
- General: Extract review use first frame of input sequence [\#3491](https://github.com/pypeclub/OpenPype/pull/3491)
- General: Fix Plist loading for application launch [\#3485](https://github.com/pypeclub/OpenPype/pull/3485)
- Nuke: Workfile tools open on start [\#3479](https://github.com/pypeclub/OpenPype/pull/3479)
- New Publisher: Disabled context change allows creation [\#3478](https://github.com/pypeclub/OpenPype/pull/3478)
- General: thumbnail extractor fix [\#3474](https://github.com/pypeclub/OpenPype/pull/3474)
- Kitsu: bugfix with sync-service ans publish plugins [\#3473](https://github.com/pypeclub/OpenPype/pull/3473)
- Flame: solved problem with multi-selected loading [\#3470](https://github.com/pypeclub/OpenPype/pull/3470)
- General: Fix query function in update logic [\#3468](https://github.com/pypeclub/OpenPype/pull/3468)
- Resolve: removed few bugs [\#3464](https://github.com/pypeclub/OpenPype/pull/3464)
- General: Delete old versions is safer when ftrack is disabled [\#3462](https://github.com/pypeclub/OpenPype/pull/3462)
- Nuke: fixing metadata slate TC difference [\#3455](https://github.com/pypeclub/OpenPype/pull/3455)
- Nuke: prerender reviewable fails [\#3450](https://github.com/pypeclub/OpenPype/pull/3450)
- Maya: fix hashing in Python 3 for tile rendering [\#3447](https://github.com/pypeclub/OpenPype/pull/3447)
- LogViewer: Escape html characters in log message [\#3443](https://github.com/pypeclub/OpenPype/pull/3443)
- Nuke: Slate frame is integrated [\#3427](https://github.com/pypeclub/OpenPype/pull/3427)
**🔀 Refactored code**
- Maya: Merge animation + pointcache extractor logic [\#3461](https://github.com/pypeclub/OpenPype/pull/3461)
- Maya: Re-use `maintained\_time` from lib [\#3460](https://github.com/pypeclub/OpenPype/pull/3460)
- General: Use query functions in global plugins [\#3459](https://github.com/pypeclub/OpenPype/pull/3459)
- Clockify: Use query functions in clockify actions [\#3458](https://github.com/pypeclub/OpenPype/pull/3458)
- General: Use query functions in rest api calls [\#3457](https://github.com/pypeclub/OpenPype/pull/3457)
- General: Use query functions in openpype lib functions [\#3454](https://github.com/pypeclub/OpenPype/pull/3454)
- General: Use query functions in load utils [\#3446](https://github.com/pypeclub/OpenPype/pull/3446)
- General: Move publish plugin and publish render abstractions [\#3442](https://github.com/pypeclub/OpenPype/pull/3442)
- General: Use Anatomy after move to pipeline [\#3436](https://github.com/pypeclub/OpenPype/pull/3436)
- General: Anatomy moved to pipeline [\#3435](https://github.com/pypeclub/OpenPype/pull/3435)
## [3.12.0](https://github.com/pypeclub/OpenPype/tree/3.12.0) (2022-06-28)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.0-nightly.3...3.12.0)
### 📖 Documentation
- Fix typo in documentation: pyenv on mac [\#3417](https://github.com/pypeclub/OpenPype/pull/3417)
- Linux: update OIIO package [\#3401](https://github.com/pypeclub/OpenPype/pull/3401)
**🚀 Enhancements**
- Webserver: Added CORS middleware [\#3422](https://github.com/pypeclub/OpenPype/pull/3422)
- Attribute Defs UI: Files widget show what is allowed to drop in [\#3411](https://github.com/pypeclub/OpenPype/pull/3411)
**🐛 Bug fixes**
- NewPublisher: Fix subset name change on change of creator plugin [\#3420](https://github.com/pypeclub/OpenPype/pull/3420)
- Bug: fix invalid avalon import [\#3418](https://github.com/pypeclub/OpenPype/pull/3418)
- Nuke: Fix keyword argument in query function [\#3414](https://github.com/pypeclub/OpenPype/pull/3414)
- Houdini: fix loading and updating vbd/bgeo sequences [\#3408](https://github.com/pypeclub/OpenPype/pull/3408)
- Nuke: Collect representation files based on Write [\#3407](https://github.com/pypeclub/OpenPype/pull/3407)
- General: Filter representations before integration start [\#3398](https://github.com/pypeclub/OpenPype/pull/3398)
- Maya: look collector typo [\#3392](https://github.com/pypeclub/OpenPype/pull/3392)
**🔀 Refactored code**
- Unreal: Use client query functions [\#3421](https://github.com/pypeclub/OpenPype/pull/3421)
- General: Move editorial lib to pipeline [\#3419](https://github.com/pypeclub/OpenPype/pull/3419)
- Kitsu: renaming to plural func sync\_all\_projects [\#3397](https://github.com/pypeclub/OpenPype/pull/3397)
- Houdini: Use client query functions [\#3395](https://github.com/pypeclub/OpenPype/pull/3395)
- Hiero: Use client query functions [\#3393](https://github.com/pypeclub/OpenPype/pull/3393)
- Nuke: Use client query functions [\#3391](https://github.com/pypeclub/OpenPype/pull/3391)
## [3.11.1](https://github.com/pypeclub/OpenPype/tree/3.11.1) (2022-06-20)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.11.1-nightly.1...3.11.1)
**🆕 New features**
- Flame: custom export temp folder [\#3346](https://github.com/pypeclub/OpenPype/pull/3346)
- Nuke: removing third-party plugins [\#3344](https://github.com/pypeclub/OpenPype/pull/3344)
**🚀 Enhancements**
- Pyblish Pype: Hiding/Close issues [\#3367](https://github.com/pypeclub/OpenPype/pull/3367)
- Ftrack: Removed requirement of pypeclub role from default settings [\#3354](https://github.com/pypeclub/OpenPype/pull/3354)
- Kitsu: Prevent crash on missing frames information [\#3352](https://github.com/pypeclub/OpenPype/pull/3352)
- Ftrack: Open browser from tray [\#3320](https://github.com/pypeclub/OpenPype/pull/3320)
- Enhancement: More control over thumbnail processing. [\#3259](https://github.com/pypeclub/OpenPype/pull/3259)
**🐛 Bug fixes**
- Nuke: bake streams with slate on farm [\#3368](https://github.com/pypeclub/OpenPype/pull/3368)
- Harmony: audio validator has wrong logic [\#3364](https://github.com/pypeclub/OpenPype/pull/3364)
- Nuke: Fix missing variable in extract thumbnail [\#3363](https://github.com/pypeclub/OpenPype/pull/3363)
- Nuke: Fix precollect writes [\#3361](https://github.com/pypeclub/OpenPype/pull/3361)
- AE- fix validate\_scene\_settings and renderLocal [\#3358](https://github.com/pypeclub/OpenPype/pull/3358)
- deadline: fixing misidentification of revieables [\#3356](https://github.com/pypeclub/OpenPype/pull/3356)
- General: Create only one thumbnail per instance [\#3351](https://github.com/pypeclub/OpenPype/pull/3351)
- nuke: adding extract thumbnail settings 3.10 [\#3347](https://github.com/pypeclub/OpenPype/pull/3347)
- General: Fix last version function [\#3345](https://github.com/pypeclub/OpenPype/pull/3345)
- Deadline: added OPENPYPE\_MONGO to filter [\#3336](https://github.com/pypeclub/OpenPype/pull/3336)
**🔀 Refactored code**
- Webpublisher: Use client query functions [\#3333](https://github.com/pypeclub/OpenPype/pull/3333)
## [3.11.0](https://github.com/pypeclub/OpenPype/tree/3.11.0) (2022-06-17)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.11.0-nightly.4...3.11.0)
### 📖 Documentation
- Documentation: Add app key to template documentation [\#3299](https://github.com/pypeclub/OpenPype/pull/3299)
- doc: adding royal render and multiverse to the web site [\#3285](https://github.com/pypeclub/OpenPype/pull/3285)
**🚀 Enhancements**
- Settings: Settings can be extracted from UI [\#3323](https://github.com/pypeclub/OpenPype/pull/3323)
- updated poetry installation source [\#3316](https://github.com/pypeclub/OpenPype/pull/3316)
- Ftrack: Action to easily create daily review session [\#3310](https://github.com/pypeclub/OpenPype/pull/3310)
- TVPaint: Extractor use mark in/out range to render [\#3309](https://github.com/pypeclub/OpenPype/pull/3309)
- Ftrack: Delivery action can work on ReviewSessions [\#3307](https://github.com/pypeclub/OpenPype/pull/3307)
- Maya: Look assigner UI improvements [\#3298](https://github.com/pypeclub/OpenPype/pull/3298)
- Ftrack: Action to transfer values of hierarchical attributes [\#3284](https://github.com/pypeclub/OpenPype/pull/3284)
- Maya: better handling of legacy review subsets names [\#3269](https://github.com/pypeclub/OpenPype/pull/3269)
- General: Updated windows oiio tool [\#3268](https://github.com/pypeclub/OpenPype/pull/3268)
- Unreal: add support for skeletalMesh and staticMesh to loaders [\#3267](https://github.com/pypeclub/OpenPype/pull/3267)
- Maya: reference loaders could store placeholder in referenced url [\#3264](https://github.com/pypeclub/OpenPype/pull/3264)
- TVPaint: Init file for TVPaint worker also handle guideline images [\#3250](https://github.com/pypeclub/OpenPype/pull/3250)
- Nuke: Change default icon path in settings [\#3247](https://github.com/pypeclub/OpenPype/pull/3247)
**🐛 Bug fixes**
- Houdini: Fix Houdini VDB manage update wrong file attribute name [\#3322](https://github.com/pypeclub/OpenPype/pull/3322)
- Nuke: anatomy compatibility issue hacks [\#3321](https://github.com/pypeclub/OpenPype/pull/3321)
- hiero: otio p3 compatibility issue - metadata on effect use update 3.11 [\#3314](https://github.com/pypeclub/OpenPype/pull/3314)
- Nuke: fixing farm publishing if review is disabled [\#3306](https://github.com/pypeclub/OpenPype/pull/3306)
- General: Vendorized modules for Python 2 and update poetry lock [\#3305](https://github.com/pypeclub/OpenPype/pull/3305)
- Fix - added local targets to install host [\#3303](https://github.com/pypeclub/OpenPype/pull/3303)
- Settings: Add missing default settings for nuke gizmo [\#3301](https://github.com/pypeclub/OpenPype/pull/3301)
- Maya: Fix swaped width and height in reviews [\#3300](https://github.com/pypeclub/OpenPype/pull/3300)
- Maya: point cache publish handles Maya instances [\#3297](https://github.com/pypeclub/OpenPype/pull/3297)
- Global: extract review slate issues [\#3286](https://github.com/pypeclub/OpenPype/pull/3286)
- Webpublisher: return only active projects in ProjectsEndpoint [\#3281](https://github.com/pypeclub/OpenPype/pull/3281)
- Hiero: add support for task tags 3.10.x [\#3279](https://github.com/pypeclub/OpenPype/pull/3279)
- General: Fix Oiio tool path resolving [\#3278](https://github.com/pypeclub/OpenPype/pull/3278)
- Maya: Fix udim support for e.g. uppercase \<UDIM\> tag [\#3266](https://github.com/pypeclub/OpenPype/pull/3266)
- Nuke: bake reformat was failing on string type [\#3261](https://github.com/pypeclub/OpenPype/pull/3261)
- Maya: hotfix Pxr multitexture in looks [\#3260](https://github.com/pypeclub/OpenPype/pull/3260)
- Unreal: Fix Camera Loading if Layout is missing [\#3255](https://github.com/pypeclub/OpenPype/pull/3255)
**🔀 Refactored code**
- Blender: Use client query functions [\#3331](https://github.com/pypeclub/OpenPype/pull/3331)
- General: Define query functions [\#3288](https://github.com/pypeclub/OpenPype/pull/3288)
**Merged pull requests:**
- Maya: add pointcache family to gpu cache loader [\#3318](https://github.com/pypeclub/OpenPype/pull/3318)
- Maya look: skip empty file attributes [\#3274](https://github.com/pypeclub/OpenPype/pull/3274)
## [3.10.0](https://github.com/pypeclub/OpenPype/tree/3.10.0) (2022-05-26)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.10.0-nightly.6...3.10.0)
**🚀 Enhancements**
- Maya: FBX camera export [\#3253](https://github.com/pypeclub/OpenPype/pull/3253)
- General: updating common vendor `scriptmenu` to 1.5.2 [\#3246](https://github.com/pypeclub/OpenPype/pull/3246)
**🐛 Bug fixes**
- nuke: use framerange issue [\#3254](https://github.com/pypeclub/OpenPype/pull/3254)
- Ftrack: Chunk sizes for queries has minimal condition [\#3244](https://github.com/pypeclub/OpenPype/pull/3244)
**Merged pull requests:**
- Harmony: message length in 21.1 [\#3257](https://github.com/pypeclub/OpenPype/pull/3257)
- Harmony: 21.1 fix [\#3249](https://github.com/pypeclub/OpenPype/pull/3249)
## [3.9.8](https://github.com/pypeclub/OpenPype/tree/3.9.8) (2022-05-19)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.9.7...3.9.8)


@@ -18,7 +18,8 @@ AppPublisher=Orbi Tools s.r.o
AppPublisherURL=http://pype.club
AppSupportURL=http://pype.club
AppUpdatesURL=http://pype.club
DefaultDirName={autopf}\{#MyAppName}
DefaultDirName={autopf}\{#MyAppName}\{#AppVer}
UsePreviousAppDir=no
DisableProgramGroupPage=yes
OutputBaseFilename={#MyAppName}-{#AppVer}-install
AllowCancelDuringInstall=yes
@@ -27,7 +28,7 @@ AllowCancelDuringInstall=yes
PrivilegesRequiredOverridesAllowed=dialog
SetupIconFile=igniter\openpype.ico
OutputDir=build\
Compression=lzma
Compression=lzma2
SolidCompression=yes
WizardStyle=modern
@@ -37,6 +38,11 @@ Name: "english"; MessagesFile: "compiler:Default.isl"
[Tasks]
Name: "desktopicon"; Description: "{cm:CreateDesktopIcon}"; GroupDescription: "{cm:AdditionalIcons}"; Flags: unchecked
[InstallDelete]
; clean everything in previous installation folder
Type: filesandordirs; Name: "{app}\*"
[Files]
Source: "build\{#build}\*"; DestDir: "{app}"; Flags: ignoreversion recursesubdirs createallsubdirs
; NOTE: Don't use "Flags: ignoreversion" on any shared system files


@@ -15,7 +15,6 @@ from .lib import (
run_subprocess,
version_up,
get_asset,
get_hierarchy,
get_workdir_data,
get_version_from_path,
get_last_version_from_path,
@@ -101,7 +100,6 @@ __all__ = [
# get contextual data
"version_up",
"get_asset",
"get_hierarchy",
"get_workdir_data",
"get_version_from_path",
"get_last_version_from_path",


@@ -2,7 +2,7 @@
"""Package for handling pype command line arguments."""
import os
import sys
import code
import click
# import sys
@@ -424,3 +424,22 @@ def pack_project(project, dirpath):
def unpack_project(zipfile, root):
"""Unpack a project package with all files and database dump."""
PypeCommands().unpack_project(zipfile, root)
@main.command()
def interactive():
"""Interactive (Python-like) console.
Helpful command, not only for development, to work directly with the
Python interpreter.
Warning:
Executable 'openpype_gui' won't work on Windows.
"""
from openpype.version import __version__
banner = "OpenPype {}\nPython {} on {}".format(
__version__, sys.version, sys.platform
)
code.interact(banner)
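For reference, a minimal sketch of what `code.interact` does with a banner like the one built above (the `local` mapping below is a hypothetical addition for illustration; the command itself only passes a banner):

```python
import code
import sys

banner = "OpenPype {}\nPython {} on {}".format("3.12.x", sys.version, sys.platform)
# Opens a blocking REPL in the current process; 'local' (hypothetical
# here) would pre-populate the session's namespace.
code.interact(banner, local={"greeting": "hello"})
```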


@@ -1,6 +1,7 @@
from .entities import (
get_projects,
get_project,
get_whole_project,
get_asset_by_id,
get_asset_by_name,
@@ -29,15 +30,19 @@ from .entities import (
get_representations,
get_representation_parents,
get_representations_parents,
get_archived_representations,
get_thumbnail,
get_thumbnails,
get_thumbnail_id_from_source,
get_workfile_info,
)
__all__ = (
"get_projects",
"get_project",
"get_whole_project",
"get_asset_by_id",
"get_asset_by_name",
@@ -66,8 +71,11 @@ __all__ = (
"get_representations",
"get_representation_parents",
"get_representations_parents",
"get_archived_representations",
"get_thumbnail",
"get_thumbnails",
"get_thumbnail_id_from_source",
"get_workfile_info",
)

File diff suppressed because it is too large.


@@ -1,11 +1,10 @@
from openpype.api import Anatomy
from openpype.lib import (
PreLaunchHook,
EnvironmentPrepData,
prepare_app_environments,
prepare_context_environments
)
from openpype.pipeline import AvalonMongoDB
from openpype.pipeline import AvalonMongoDB, Anatomy
class GlobalHostDataHook(PreLaunchHook):

openpype/host/__init__.py, new file (13 changes)

@@ -0,0 +1,13 @@
from .host import (
HostBase,
IWorkfileHost,
ILoadHost,
INewPublisher,
)
__all__ = (
"HostBase",
"IWorkfileHost",
"ILoadHost",
"INewPublisher",
)

openpype/host/host.py, new file (524 changes)

@@ -0,0 +1,524 @@
import logging
import contextlib
from abc import ABCMeta, abstractproperty, abstractmethod
import six
# NOTE can't import 'typing' because of issues in Maya 2020
# - shiboken crashes on 'typing' module import
class MissingMethodsError(ValueError):
"""Exception raised when a host is missing methods required for a workflow.
Args:
host (HostBase): Host implementation on which methods are missing.
missing_methods (list[str]): List of missing methods.
"""
def __init__(self, host, missing_methods):
joined_missing = ", ".join(
['"{}"'.format(item) for item in missing_methods]
)
message = (
"Host \"{}\" is missing methods {}".format(host.name, joined_missing)
)
super(MissingMethodsError, self).__init__(message)
@six.add_metaclass(ABCMeta)
class HostBase(object):
"""Base class of a host implementation.
A host is the pipeline implementation of a DCC application. This class
should help to identify what must/should/can be implemented for specific
functionality.
Compared to the 'avalon' concept:
What was previously a set of functions in a host implementation folder.
The host implementation should primarily care about adding the ability of
creation (marking subsets to be published) and optionally about
referencing published representations as containers.
A host may need to extend some functionality, like working with workfiles
or loading. Not all host implementations may allow that; for those
purposes the logic can be extended by implementing functions for that
purpose. There are prepared interfaces to identify what must be
implemented to use that functionality.
- the current statement is that it is not required to inherit from the
interfaces, but all of the methods are validated (only their existence!)
# Installation of host before (avalon concept):
```python
from openpype.pipeline import install_host
import openpype.hosts.maya.api as host
install_host(host)
```
# Installation of host now:
```python
from openpype.pipeline import install_host
from openpype.hosts.maya.api import MayaHost
host = MayaHost()
install_host(host)
```
Todo:
- move content of 'install_host' as method of this class
- register host object
- install legacy_io
- install global plugin paths
- store registered plugin paths to this object
- handle current context (project, asset, task)
- this must be done in many separated steps
- have its own object of host tools instead of using globals
This implementation will probably change over time as more functionality
and responsibility is added.
"""
_log = None
def __init__(self):
"""Initialization of host.
Register DCC callbacks, host specific plugin paths, targets etc.
(Part of what 'install' did in 'avalon' concept.)
Note:
At this moment the global "installation" must happen before the host
installation. Because of this limitation it is recommended to
implement an 'install' method which is triggered after the global
'install'.
"""
pass
@property
def log(self):
if self._log is None:
self._log = logging.getLogger(self.__class__.__name__)
return self._log
@abstractproperty
def name(self):
"""Host name."""
pass
def get_current_context(self):
"""Get current context information.
This method should be used to get the current context of the host.
It can be crucial for host implementations in DCCs where multiple
workfiles can be opened at once and a change of context can't be
caught properly.
Default implementation returns values from 'legacy_io.Session'.
Returns:
dict: Context with 3 keys 'project_name', 'asset_name' and
'task_name'. All of them can be 'None'.
"""
from openpype.pipeline import legacy_io
if not legacy_io.is_installed():
legacy_io.install()
return {
"project_name": legacy_io.Session["AVALON_PROJECT"],
"asset_name": legacy_io.Session["AVALON_ASSET"],
"task_name": legacy_io.Session["AVALON_TASK"]
}
def get_context_title(self):
"""Context title shown for UI purposes.
Should return current context title if possible.
Note:
This method is used only for UI purposes, so it is possible to
return some logical title for contextless cases.
It is not meant for the "Context menu" label.
Returns:
str: Context title.
None: Default title is used based on UI implementation.
"""
# Use current context to fill the context title
current_context = self.get_current_context()
project_name = current_context["project_name"]
asset_name = current_context["asset_name"]
task_name = current_context["task_name"]
items = []
if project_name:
items.append(project_name)
if asset_name:
items.append(asset_name)
if task_name:
items.append(task_name)
if items:
return "/".join(items)
return None
@contextlib.contextmanager
def maintained_selection(self):
"""Run some functionality while keeping the selection unchanged.
This is DCC specific. Some DCCs may not allow implementing this
ability, which is why the default implementation is an empty context
manager.
Yields:
None: Yields when ready; the selection is restored at the end.
"""
try:
yield
finally:
pass
class ILoadHost:
"""Implementation requirements to be able to use referencing of representations.
The load plugins can do referencing even without implementing the methods
here, but switching and removal of containers would not be possible.
Questions:
- Is listing containers a dependency of the host or of load plugins?
- Should this be directly in HostBase?
- how to find out if referencing is available?
- do we need to know that?
"""
@staticmethod
def get_missing_load_methods(host):
"""Look for missing methods on "old type" host implementation.
Method is used for validation of implemented functions related to
loading. Checks only existence of methods.
Args:
host (Union[ModuleType, HostBase]): Host object on which to look
for the required methods.
Returns:
list[str]: Missing method implementations for loading workflow.
"""
if isinstance(host, ILoadHost):
return []
required = ["ls"]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
return missing
@staticmethod
def validate_load_methods(host):
"""Validate implemented methods of "old type" host for load workflow.
Args:
host (Union[ModuleType, HostBase]): Host object to validate.
Raises:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = ILoadHost.get_missing_load_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@abstractmethod
def get_containers(self):
"""Retrieve referenced containers from the scene.
This can be implemented in hosts where referencing can be used.
Todo:
Rename function to something more self explanatory.
Suggestion: 'get_containers'
Returns:
list[dict]: Information about loaded containers.
"""
pass
# --- Deprecated method names ---
def ls(self):
"""Deprecated variant of 'get_containers'.
Todo:
Remove when all usages are replaced.
"""
return self.get_containers()
@six.add_metaclass(ABCMeta)
class IWorkfileHost:
"""Implementation requirements to be able to use workfile utils and tools."""
@staticmethod
def get_missing_workfile_methods(host):
"""Look for missing methods on "old type" host implementation.
Method is used for validation of implemented functions related to
workfiles. Checks only existence of methods.
Args:
host (Union[ModuleType, HostBase]): Host object on which to look
for the required methods.
Returns:
list[str]: Missing method implementations for workfiles workflow.
"""
if isinstance(host, IWorkfileHost):
return []
required = [
"open_file",
"save_file",
"current_file",
"has_unsaved_changes",
"file_extensions",
"work_root",
]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
return missing
@staticmethod
def validate_workfile_methods(host):
"""Validate methods of "old type" host for workfiles workflow.
Args:
host (Union[ModuleType, HostBase]): Host object to validate.
Raises:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = IWorkfileHost.get_missing_workfile_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@abstractmethod
def get_workfile_extensions(self):
"""Extensions that can be used for saving.
Questions:
This could potentially use 'HostDefinition'.
"""
return []
@abstractmethod
def save_workfile(self, dst_path=None):
"""Save currently opened scene.
Args:
dst_path (str): Where the current scene should be saved. Or use
current path if 'None' is passed.
"""
pass
@abstractmethod
def open_workfile(self, filepath):
"""Open passed filepath in the host.
Args:
filepath (str): Path to workfile.
"""
pass
@abstractmethod
def get_current_workfile(self):
"""Retrieve path to the currently opened file.
Returns:
str: Path to file which is currently opened.
None: If nothing is opened.
"""
return None
def workfile_has_unsaved_changes(self):
"""Check if the currently opened workfile has unsaved changes.
Not all hosts can know if the current scene is saved because the
DCC's API does not support it.
Returns:
bool: True if the scene has unsaved modifications, False if it
is saved.
None: Can't tell whether the workfile has modifications.
"""
return None
def work_root(self, session):
"""Modify workdir per host.
Default implementation keeps workdir untouched.
Warnings:
We must handle this modification in a more sophisticated way, because
this can't be called outside the DCC, so opening the last workfile
(calculated before the DCC is launched) is complicated. Also, breaking
the defined work template is not a good idea.
The only place where it's really used and makes sense is Maya. There,
workspace.mel can modify the subfolders where to look for Maya files.
Args:
session (dict): Session context data.
Returns:
str: Path to new workdir.
"""
return session["AVALON_WORKDIR"]
# --- Deprecated method names ---
def file_extensions(self):
"""Deprecated variant of 'get_workfile_extensions'.
Todo:
Remove when all usages are replaced.
"""
return self.get_workfile_extensions()
def save_file(self, dst_path=None):
"""Deprecated variant of 'save_workfile'.
Todo:
Remove when all usages are replaced.
"""
self.save_workfile()
def open_file(self, filepath):
"""Deprecated variant of 'open_workfile'.
Todo:
Remove when all usages are replaced.
"""
return self.open_workfile(filepath)
def current_file(self):
"""Deprecated variant of 'get_current_workfile'.
Todo:
Remove when all usages are replaced.
"""
return self.get_current_workfile()
def has_unsaved_changes(self):
"""Deprecated variant of 'workfile_has_unsaved_changes'.
Todo:
Remove when all usages are replaced.
"""
return self.workfile_has_unsaved_changes()
class INewPublisher:
"""Functions related to the new creation system in the new publisher.
The new publisher stores not only information about each created instance
but also some global data. At this moment the data relate only to context
publish plugins, but that can be extended in the future.
"""
@staticmethod
def get_missing_publish_methods(host):
"""Look for missing methods on "old type" host implementation.
Method is used for validation of implemented functions related to
new publish creation. Checks only existence of methods.
Args:
host (Union[ModuleType, HostBase]): Host module on which to look
for the required methods.
Returns:
list[str]: Missing method implementations for new publisher
workflow.
"""
if isinstance(host, INewPublisher):
return []
required = [
"get_context_data",
"update_context_data",
]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
return missing
@staticmethod
def validate_publish_methods(host):
"""Validate implemented methods of "old type" host.
Args:
host (Union[ModuleType, HostBase]): Host module to validate.
Raises:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = INewPublisher.get_missing_publish_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@abstractmethod
def get_context_data(self):
"""Get global data related to creation-publishing from the workfile.
These data are not related to any created instance but to the whole
publishing context. If they are not saved/returned, each reset of
publishing resets all values to the defaults.
Context data can contain information about enabled/disabled publish
plugins or other values that can be filled by the artist.
Returns:
dict: Context data stored using 'update_context_data'.
"""
pass
@abstractmethod
def update_context_data(self, data, changes):
"""Store global context data to the workfile.
Called when some values in the context data have changed.
Without storing the values in a way that 'get_context_data' can
return them, each reset of publishing causes the loss of values the
artist filled in. Best practice is to store the values in the
workfile, if possible.
Args:
data (dict): New data as a whole.
changes (dict): Only the data that have changed. Each value is a
tuple of '(<old>, <new>)'.
"""
pass
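To show how these interfaces are meant to be combined, here is a minimal hypothetical host sketch (class name and method bodies are assumptions for illustration, not part of this commit):

```python
from openpype.host import HostBase, ILoadHost, INewPublisher


class SketchHost(HostBase, ILoadHost, INewPublisher):
    """Hypothetical DCC integration using the new interfaces."""

    name = "sketchhost"  # satisfies the abstract 'name' property

    def get_containers(self):
        # Would query the DCC scene for loaded containers.
        return []

    def get_context_data(self):
        # Would read publish context data stored in the workfile.
        return {}

    def update_context_data(self, data, changes):
        # Would write publish context data back into the workfile.
        pass


host = SketchHost()
# Old-style module hosts can be checked the same way; an interface
# subclass passes validation trivially because of the isinstance check.
ILoadHost.validate_load_methods(host)
INewPublisher.validate_publish_methods(host)
```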


@@ -17,11 +17,8 @@ class RenderCreator(Creator):
create_allow_context_change = True
def __init__(
self, create_context, system_settings, project_settings, headless=False
):
super(RenderCreator, self).__init__(create_context, system_settings,
project_settings, headless)
def __init__(self, project_settings, *args, **kwargs):
super(RenderCreator, self).__init__(project_settings, *args, **kwargs)
self._default_variants = (project_settings["aftereffects"]
["create"]
["RenderCreator"]


@@ -6,8 +6,8 @@ import attr
import pyblish.api
from openpype.settings import get_project_settings
from openpype.lib import abstract_collect_render
from openpype.lib.abstract_collect_render import RenderInstance
from openpype.pipeline import publish
from openpype.pipeline.publish import RenderInstance
from openpype.hosts.aftereffects.api import get_stub
@@ -25,7 +25,7 @@ class AERenderInstance(RenderInstance):
file_name = attr.ib(default=None)
class CollectAERender(abstract_collect_render.AbstractCollectRender):
class CollectAERender(publish.AbstractCollectRender):
order = pyblish.api.CollectorOrder + 0.405
label = "Collect After Effects Render Layers"


@@ -93,7 +93,7 @@ def set_start_end_frames():
# Default scene settings
frameStart = scene.frame_start
frameEnd = scene.frame_end
fps = scene.render.fps
fps = scene.render.fps / scene.render.fps_base
resolution_x = scene.render.resolution_x
resolution_y = scene.render.resolution_y
@@ -116,7 +116,8 @@ def set_start_end_frames():
scene.frame_start = frameStart
scene.frame_end = frameEnd
scene.render.fps = fps
scene.render.fps = round(fps)
scene.render.fps_base = round(fps) / fps
scene.render.resolution_x = resolution_x
scene.render.resolution_y = resolution_y
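The fps change above handles fractional frame rates: Blender's effective rate is fps / fps_base, so a rate like 23.976 must be stored as an integer fps plus a matching base. A small illustrative round-trip (values are illustrative):

```python
fps = 23.976

# Store as integer numerator plus base, as the code above does.
render_fps = round(fps)              # 24
render_fps_base = round(fps) / fps   # ~1.001

# The effective rate survives the round-trip.
effective_fps = render_fps / render_fps_base
assert abs(effective_fps - fps) < 1e-9
```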


@@ -1,6 +1,7 @@
import os
import re
import subprocess
from platform import system
from openpype.lib import PreLaunchHook
@@ -13,12 +14,9 @@ class InstallPySideToBlender(PreLaunchHook):
For pipeline implementation is required to have Qt binding installed in
blender's python packages.
Prelaunch hook can work only on Windows right now.
"""
app_groups = ["blender"]
platforms = ["windows"]
def execute(self):
# Prelaunch hook is not crucial
@@ -34,25 +32,28 @@ class InstallPySideToBlender(PreLaunchHook):
# Get blender's python directory
version_regex = re.compile(r"^[2-3]\.[0-9]+$")
platform = system().lower()
executable = self.launch_context.executable.executable_path
if os.path.basename(executable).lower() != "blender.exe":
expected_executable = "blender"
if platform == "windows":
expected_executable += ".exe"
if os.path.basename(executable).lower() != expected_executable:
self.log.info((
"Executable does not lead to blender.exe file. Can't determine"
" blender's python to check/install PySide2."
f"Executable does not lead to {expected_executable} file. "
"Can't determine blender's python to check/install PySide2."
))
return
executable_dir = os.path.dirname(executable)
versions_dir = os.path.dirname(executable)
if platform == "darwin":
versions_dir = os.path.join(
os.path.dirname(versions_dir), "Resources"
)
version_subfolders = []
for name in os.listdir(executable_dir):
fullpath = os.path.join(name, executable_dir)
if not os.path.isdir(fullpath):
continue
if not version_regex.match(name):
continue
version_subfolders.append(name)
for dir_entry in os.scandir(versions_dir):
if dir_entry.is_dir() and version_regex.match(dir_entry.name):
version_subfolders.append(dir_entry.name)
if not version_subfolders:
self.log.info(
@@ -72,16 +73,21 @@ class InstallPySideToBlender(PreLaunchHook):
version_subfolder = version_subfolders[0]
pythond_dir = os.path.join(
os.path.dirname(executable),
version_subfolder,
"python"
)
python_dir = os.path.join(versions_dir, version_subfolder, "python")
python_lib = os.path.join(python_dir, "lib")
python_version = "python"
if platform != "windows":
for dir_entry in os.scandir(python_lib):
if dir_entry.is_dir() and dir_entry.name.startswith("python"):
python_lib = dir_entry.path
python_version = dir_entry.name
break
# Change PYTHONPATH to contain blender's packages as first
python_paths = [
os.path.join(pythond_dir, "lib"),
os.path.join(pythond_dir, "lib", "site-packages"),
python_lib,
os.path.join(python_lib, "site-packages"),
]
python_path = self.launch_context.env.get("PYTHONPATH") or ""
for path in python_path.split(os.pathsep):
@@ -91,7 +97,15 @@ class InstallPySideToBlender(PreLaunchHook):
self.launch_context.env["PYTHONPATH"] = os.pathsep.join(python_paths)
# Get blender's python executable
python_executable = os.path.join(pythond_dir, "bin", "python.exe")
python_bin = os.path.join(python_dir, "bin")
if platform == "windows":
python_executable = os.path.join(python_bin, "python.exe")
else:
python_executable = os.path.join(python_bin, python_version)
# Check for python with enabled 'pymalloc'
if not os.path.exists(python_executable):
python_executable += "m"
if not os.path.exists(python_executable):
self.log.warning(
"Couldn't find python executable for blender. {}".format(
@@ -106,7 +120,15 @@ class InstallPySideToBlender(PreLaunchHook):
return
# Install PySide2 in blender's python
self.install_pyside_windows(python_executable)
if platform == "windows":
result = self.install_pyside_windows(python_executable)
else:
result = self.install_pyside(python_executable)
if result:
self.log.info("Successfully installed PySide2 module to blender.")
else:
self.log.warning("Failed to install PySide2 module to blender.")
def install_pyside_windows(self, python_executable):
"""Install PySide2 python module to blender's python.
@@ -144,19 +166,41 @@ class InstallPySideToBlender(PreLaunchHook):
lpDirectory=os.path.dirname(python_executable)
)
process_handle = process_info["hProcess"]
obj = win32event.WaitForSingleObject(
process_handle, win32event.INFINITE
)
win32event.WaitForSingleObject(process_handle, win32event.INFINITE)
returncode = win32process.GetExitCodeProcess(process_handle)
if returncode == 0:
self.log.info(
"Successfully installed PySide2 module to blender."
)
return
return returncode == 0
except pywintypes.error:
pass
self.log.warning("Failed to install PySide2 module to blender.")
def install_pyside(self, python_executable):
"""Install PySide2 python module to blender's python."""
try:
# Parameters
# - use "-m pip" to run pip as a module and pass
# "--ignore-installed" to force installing the module into
# blender's site-packages and make sure it is binary compatible
args = [
python_executable,
"-m",
"pip",
"install",
"--ignore-installed",
"PySide2",
]
process = subprocess.Popen(
args, stdout=subprocess.PIPE, universal_newlines=True
)
process.communicate()
return process.returncode == 0
except PermissionError:
self.log.warning(
"Permission denied with command:"
"\"{}\".".format(" ".join(args))
)
except OSError as error:
self.log.warning(f"OS error has occurred: \"{error}\".")
except subprocess.SubprocessError:
pass
def is_pyside_installed(self, python_executable):
"""Check if PySide2 module is in blender's pip list.
@@ -169,7 +213,7 @@ class InstallPySideToBlender(PreLaunchHook):
args = [python_executable, "-m", "pip", "list"]
process = subprocess.Popen(args, stdout=subprocess.PIPE)
stdout, _ = process.communicate()
lines = stdout.decode().split("\r\n")
lines = stdout.decode().split(os.linesep)
# Second line contains dashes that define maximum length of module name.
# Second column of dashes defines maximum length of module version.
package_dashes, *_ = lines[1].split(" ")
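For context, the dash-counting above parses `pip list` output shaped like the sample below (sample text, not from the commit; the real code splits on `os.linesep`). The first run of dashes gives the width of the package-name column:

```python
sample = (
    "Package    Version\n"
    "---------- -------\n"
    "PySide2    5.15.2\n"
)
lines = sample.split("\n")
# Width of the name column is the length of the first dash run.
package_dashes, *_ = lines[1].split(" ")
width = len(package_dashes)
names = [line[:width].strip() for line in lines[2:] if line]
assert "PySide2" in names
```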


@@ -4,6 +4,11 @@ from pprint import pformat
import pyblish.api
from openpype.client import (
get_subsets,
get_last_versions,
get_representations
)
from openpype.pipeline import legacy_io
@@ -60,10 +65,10 @@ class AppendCelactionAudio(pyblish.api.ContextPlugin):
"""
# Query all subsets for asset
subset_docs = legacy_io.find({
"type": "subset",
"parent": asset_doc["_id"]
})
project_name = legacy_io.active_project()
subset_docs = get_subsets(
project_name, asset_ids=[asset_doc["_id"]], fields=["_id"]
)
# Collect all subset ids
subset_ids = [
subset_doc["_id"]
@@ -76,37 +81,19 @@ class AppendCelactionAudio(pyblish.api.ContextPlugin):
"Try this for start `r'.*'`: asset: `{}`"
).format(asset_doc["name"])
# Last version aggregation
pipeline = [
# Find all versions of those subsets
{"$match": {
"type": "version",
"parent": {"$in": subset_ids}
}},
# Sorting versions all together
{"$sort": {"name": 1}},
# Group them by "parent", but only take the last
{"$group": {
"_id": "$parent",
"_version_id": {"$last": "$_id"},
"name": {"$last": "$name"}
}}
]
last_versions_by_subset_id = dict()
for doc in legacy_io.aggregate(pipeline):
doc["parent"] = doc["_id"]
doc["_id"] = doc.pop("_version_id")
last_versions_by_subset_id[doc["parent"]] = doc
last_versions_by_subset_id = get_last_versions(
project_name, subset_ids, fields=["_id", "parent"]
)
version_docs_by_id = {}
for version_doc in last_versions_by_subset_id.values():
version_docs_by_id[version_doc["_id"]] = version_doc
repre_docs = legacy_io.find({
"type": "representation",
"parent": {"$in": list(version_docs_by_id.keys())},
"name": {"$in": representations}
})
repre_docs = get_representations(
project_name,
version_ids=version_docs_by_id.keys(),
representation_names=representations
)
repre_docs_by_version_id = collections.defaultdict(list)
for repre_doc in repre_docs:
version_id = repre_doc["parent"]
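The removed aggregation pipeline is what `get_last_versions` stands in for; a rough pure-Python equivalent of that grouping (a sketch, assuming version documents carry `name` and `parent` as in the pipeline above):

```python
from collections import defaultdict


def last_versions_by_parent(version_docs):
    """Keep only the highest-named version per subset ('parent')."""
    grouped = defaultdict(list)
    for doc in version_docs:
        grouped[doc["parent"]].append(doc)
    return {
        parent: max(docs, key=lambda doc: doc["name"])
        for parent, docs in grouped.items()
    }
```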


@@ -2,7 +2,7 @@ import os
import flame
from pprint import pformat
import openpype.hosts.flame.api as opfapi
from openpype.lib import StringTemplate
class LoadClip(opfapi.ClipLoader):
"""Load a subset to timeline as clip
@@ -22,7 +22,7 @@ class LoadClip(opfapi.ClipLoader):
# settings
reel_group_name = "OpenPype_Reels"
reel_name = "Loaded"
clip_name_template = "{asset}_{subset}_{output}"
clip_name_template = "{asset}_{subset}<_{output}>"
def load(self, context, name, namespace, options):
@@ -36,8 +36,8 @@ class LoadClip(opfapi.ClipLoader):
version_data = version.get("data", {})
version_name = version.get("name", None)
colorspace = version_data.get("colorspace", None)
clip_name = self.clip_name_template.format(
**context["representation"]["context"])
clip_name = StringTemplate(self.clip_name_template).format(
context["representation"]["context"])
# TODO: settings in imageio
# convert colorspace with ocio to flame mapping
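The angle brackets in the new template mark the token as optional: when `output` is missing from the formatting data, the whole `<_{output}>` chunk should be dropped instead of raising. A hedged sketch of the intended behavior (sample data; the exact return type may differ):

```python
from openpype.lib import StringTemplate

template = StringTemplate("{asset}_{subset}<_{output}>")

# With 'output' present the optional chunk is filled in ...
with_output = template.format(
    {"asset": "sh010", "subset": "plateMain", "output": "exr"}
)
# ... and without it the chunk should be omitted entirely.
without_output = template.format({"asset": "sh010", "subset": "plateMain"})

print(with_output)     # expected: sh010_plateMain_exr
print(without_output)  # expected: sh010_plateMain
```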


@@ -2,6 +2,7 @@ import os
import flame
from pprint import pformat
import openpype.hosts.flame.api as opfapi
from openpype.lib import StringTemplate
class LoadClipBatch(opfapi.ClipLoader):
@@ -21,7 +22,7 @@ class LoadClipBatch(opfapi.ClipLoader):
# settings
reel_name = "OP_LoadedReel"
clip_name_template = "{asset}_{subset}_{output}"
clip_name_template = "{asset}_{subset}<_{output}>"
def load(self, context, name, namespace, options):
@@ -39,8 +40,8 @@ class LoadClipBatch(opfapi.ClipLoader):
if not context["representation"]["context"].get("output"):
self.clip_name_template = self.clip_name_template.replace(
"output", "representation")
clip_name = self.clip_name_template.format(
**context["representation"]["context"])
clip_name = StringTemplate(self.clip_name_template).format(
context["representation"]["context"])
# TODO: settings in imageio
# convert colorspace with ocio to flame mapping


@@ -1,8 +1,12 @@
import re
import pyblish
import openpype.hosts.flame.api as opfapi
from openpype.hosts.flame.otio import flame_export
import openpype.lib as oplib
from openpype.pipeline.editorial import (
is_overlapping_otio_ranges,
get_media_range_with_retimes
)
# # developer reload modules
from pprint import pformat
@@ -75,6 +79,12 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
marker_data["handleEnd"]
)
# make sure values are 0 rather than None
if head is None:
head = 0
if tail is None:
tail = 0
# make sure value is absolute
if head != 0:
head = abs(head)
@@ -125,7 +135,8 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
"flameAddTasks": self.add_tasks,
"tasks": {
task["name"]: {"type": task["type"]}
for task in self.add_tasks}
for task in self.add_tasks},
"representations": []
})
self.log.debug("__ inst_data: {}".format(pformat(inst_data)))
@@ -271,7 +282,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
# HACK: it is here to serve for versions below 2021.1
if not any([head, tail]):
retimed_attributes = oplib.get_media_range_with_retimes(
retimed_attributes = get_media_range_with_retimes(
otio_clip, handle_start, handle_end)
self.log.debug(
">> retimed_attributes: {}".format(retimed_attributes))
@@ -370,7 +381,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
continue
if otio_clip.name not in segment.name.get_value():
continue
if oplib.is_overlapping_otio_ranges(
if is_overlapping_otio_ranges(
parent_range, timeline_range, strict=True):
# add pypedata marker to otio_clip metadata


@@ -23,6 +23,8 @@ class ExtractSubsetResources(openpype.api.Extractor):
hosts = ["flame"]
# plugin defaults
keep_original_representation = False
default_presets = {
"thumbnail": {
"active": True,
@@ -45,7 +47,9 @@ class ExtractSubsetResources(openpype.api.Extractor):
export_presets_mapping = {}
def process(self, instance):
if "representations" not in instance.data:
if not self.keep_original_representation:
# remove previous representations if not needed
instance.data["representations"] = []
# flame objects
@@ -82,7 +86,11 @@ class ExtractSubsetResources(openpype.api.Extractor):
# add default preset type for thumbnail and reviewable video
# update them with settings and override in case the same
# are found in there
export_presets = deepcopy(self.default_presets)
_preset_keys = [k.split('_')[0] for k in self.export_presets_mapping]
export_presets = {
k: v for k, v in deepcopy(self.default_presets).items()
if k not in _preset_keys
}
export_presets.update(self.export_presets_mapping)
# loop all preset names and
@@ -218,9 +226,14 @@ class ExtractSubsetResources(openpype.api.Extractor):
opfapi.export_clip(
export_dir_path, exporting_clip, preset_path, **export_kwargs)
repr_name = unique_name
# make sure only first segment is used if underscore in name
# HACK: `ftrackreview_withLUT` will result only in `ftrackreview`
repr_name = unique_name.split("_")[0]
if (
"thumbnail" in unique_name
or "ftrackreview" in unique_name
):
repr_name = unique_name.split("_")[0]
# create representation data
representation_data = {
@@ -259,7 +272,7 @@ class ExtractSubsetResources(openpype.api.Extractor):
if os.path.splitext(f)[-1] == ".mov"
]
# then try if thumbnail is not in unique name
or unique_name == "thumbnail"
or repr_name == "thumbnail"
):
representation_data["files"] = files.pop()
else:
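To make the representation-name hack above concrete, a small illustration with sample preset names (names are illustrative, not from the commit):

```python
def repre_name_from_preset(unique_name):
    # Only thumbnail/ftrackreview presets are collapsed to their first
    # underscore-separated segment; other preset names stay untouched.
    if "thumbnail" in unique_name or "ftrackreview" in unique_name:
        return unique_name.split("_")[0]
    return unique_name


assert repre_name_from_preset("ftrackreview_withLUT") == "ftrackreview"
assert repre_name_from_preset("exr16fp_main") == "exr16fp_main"
```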


@@ -323,6 +323,8 @@ class IntegrateBatchGroup(pyblish.api.InstancePlugin):
def _get_shot_task_dir_path(self, instance, task_data):
project_doc = instance.data["projectEntity"]
asset_entity = instance.data["assetEntity"]
anatomy = instance.context.data["anatomy"]
return get_workdir(
project_doc, asset_entity, task_data["name"], "flame")
project_doc, asset_entity, task_data["name"], "flame", anatomy
)


@@ -3,9 +3,16 @@ import sys
import re
import contextlib
from bson.objectid import ObjectId
from Qt import QtGui
from openpype.client import (
get_asset_by_name,
get_subset_by_name,
get_last_version_by_subset_id,
get_representation_by_id,
get_representation_by_name,
get_representation_parents,
)
from openpype.pipeline import (
switch_container,
legacy_io,
@@ -93,13 +100,16 @@ def switch_item(container,
raise ValueError("Must have at least one change provided to switch.")
# Collect any of current asset, subset and representation if not provided
# so we can use the original name from those.
project_name = legacy_io.active_project()
if any(not x for x in [asset_name, subset_name, representation_name]):
_id = ObjectId(container["representation"])
representation = legacy_io.find_one({
"type": "representation", "_id": _id
})
version, subset, asset, project = legacy_io.parenthood(representation)
repre_id = container["representation"]
representation = get_representation_by_id(project_name, repre_id)
repre_parent_docs = get_representation_parents(representation)
if repre_parent_docs:
version, subset, asset, _ = repre_parent_docs
else:
version = subset = asset = None
if asset_name is None:
asset_name = asset["name"]
@@ -111,39 +121,26 @@ def switch_item(container,
representation_name = representation["name"]
# Find the new one
asset = legacy_io.find_one({
"name": asset_name,
"type": "asset"
})
asset = get_asset_by_name(project_name, asset_name, fields=["_id"])
assert asset, ("Could not find asset in the database with the name "
"'%s'" % asset_name)
subset = legacy_io.find_one({
"name": subset_name,
"type": "subset",
"parent": asset["_id"]
})
subset = get_subset_by_name(
project_name, subset_name, asset["_id"], fields=["_id"]
)
assert subset, ("Could not find subset in the database with the name "
"'%s'" % subset_name)
version = legacy_io.find_one(
{
"type": "version",
"parent": subset["_id"]
},
sort=[('name', -1)]
version = get_last_version_by_subset_id(
project_name, subset["_id"], fields=["_id"]
)
assert version, "Could not find a version for {}.{}".format(
asset_name, subset_name
)
representation = legacy_io.find_one({
"name": representation_name,
"type": "representation",
"parent": version["_id"]}
representation = get_representation_by_name(
project_name, representation_name, version["_id"]
)
assert representation, ("Could not find representation in the database "
"with the name '%s'" % representation_name)


@@ -1,6 +1,7 @@
import os
import contextlib
from openpype.client import get_version_by_id
from openpype.pipeline import (
load,
legacy_io,
@@ -123,7 +124,7 @@ def loader_shift(loader, frame, relative=True):
class FusionLoadSequence(load.LoaderPlugin):
"""Load image sequence into Fusion"""
families = ["imagesequence", "review", "render"]
families = ["imagesequence", "review", "render", "plate"]
representations = ["*"]
label = "Load sequence"
@@ -211,10 +212,8 @@
path = self._get_first_image(root)
# Get start frame from version data
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
project_name = legacy_io.active_project()
version = get_version_by_id(project_name, representation["parent"])
start = version["data"].get("frameStart")
if start is None:
self.log.warning("Missing start frame for updated version"


@@ -4,6 +4,11 @@ import sys
import logging
# Pipeline imports
from openpype.client import (
get_project,
get_asset_by_name,
get_versions,
)
from openpype.pipeline import (
legacy_io,
install_host,
@@ -164,9 +169,9 @@ def update_frame_range(comp, representations):
"""
version_ids = [r["parent"] for r in representations]
versions = legacy_io.find({"type": "version", "_id": {"$in": version_ids}})
versions = list(versions)
project_name = legacy_io.active_project()
version_ids = {r["parent"] for r in representations}
versions = list(get_versions(project_name, version_ids))
versions = [v for v in versions
if v["data"].get("frameStart", None) is not None]
@@ -203,11 +208,12 @@ def switch(asset_name, filepath=None, new=True):
# Assert asset name exists
# It is better to do this here then to wait till switch_shot does it
asset = legacy_io.find_one({"type": "asset", "name": asset_name})
project_name = legacy_io.active_project()
asset = get_asset_by_name(project_name, asset_name)
assert asset, "Could not find '%s' in the database" % asset_name
# Get current project
self._project = legacy_io.find_one({"type": "project"})
self._project = get_project(project_name)
# Go to comp
if not filepath:


@@ -7,6 +7,7 @@ from Qt import QtWidgets, QtCore
import qtawesome as qta
from openpype.client import get_assets
from openpype import style
from openpype.pipeline import (
install_host,
@@ -142,7 +143,7 @@ class App(QtWidgets.QWidget):
# Clear any existing items
self._assets.clear()
asset_names = [a["name"] for a in self.collect_assets()]
asset_names = self.collect_asset_names()
completer = QtWidgets.QCompleter(asset_names)
self._assets.setCompleter(completer)
@@ -165,8 +166,14 @@ class App(QtWidgets.QWidget):
items = glob.glob("{}/*.comp".format(directory))
return items
def collect_assets(self):
return list(legacy_io.find({"type": "asset"}, {"name": True}))
def collect_asset_names(self):
project_name = legacy_io.active_project()
asset_docs = get_assets(project_name, fields=["name"])
asset_names = {
asset_doc["name"]
for asset_doc in asset_docs
}
return list(asset_names)
def populate_comp_box(self, files):
"""Ensure we display the filename only but the path is stored as well


@@ -4,11 +4,10 @@ from pathlib import Path
import attr
import openpype.lib
import openpype.lib.abstract_collect_render
from openpype.lib.abstract_collect_render import RenderInstance
from openpype.lib import get_formatted_current_time
from openpype.pipeline import legacy_io
from openpype.pipeline import publish
from openpype.pipeline.publish import RenderInstance
import openpype.hosts.harmony.api as harmony
@@ -20,8 +19,7 @@ class HarmonyRenderInstance(RenderInstance):
leadingZeros = attr.ib(default=3)
class CollectFarmRender(openpype.lib.abstract_collect_render.
AbstractCollectRender):
class CollectFarmRender(publish.AbstractCollectRender):
"""Gather all publishable renders."""
# https://docs.toonboom.com/help/harmony-17/premium/reference/node/output/write-node-image-formats.html


@@ -0,0 +1,85 @@
import logging
from scriptsmenu import scriptsmenu
from Qt import QtWidgets
log = logging.getLogger(__name__)
def _hiero_main_window():
"""Return Hiero's main window"""
for obj in QtWidgets.QApplication.topLevelWidgets():
if (obj.inherits('QMainWindow') and
obj.metaObject().className() == 'Foundry::UI::DockMainWindow'):
return obj
raise RuntimeError('Could not find HieroWindow instance')
def _hiero_main_menubar():
"""Retrieve the main menubar of the Hiero window"""
hiero_window = _hiero_main_window()
menubar = [i for i in hiero_window.children() if isinstance(
i,
QtWidgets.QMenuBar
)]
assert len(menubar) == 1, "Error, could not find menu bar!"
return menubar[0]
def find_scripts_menu(title, parent):
"""
Check if the menu exists with the given title in the parent
Args:
title (str): the title name of the scripts menu
parent (QtWidgets.QMenuBar): the menubar to check
Returns:
QtWidgets.QMenu or None
"""
menu = None
search = [i for i in parent.children() if
isinstance(i, scriptsmenu.ScriptsMenu)
and i.title() == title]
if search:
assert len(search) < 2, ("Multiple instances of menu '{}' "
"in menu bar".format(title))
menu = search[0]
return menu
def main(title="Scripts", parent=None, objectName=None):
"""Build the main scripts menu in Hiero
Args:
title (str): name of the menu in the application
parent (QtWidgets.QtMenuBar): the parent object for the menu
objectName (str): custom objectName for scripts menu
Returns:
scriptsmenu.ScriptsMenu instance
"""
hieromainbar = parent or _hiero_main_menubar()
try:
# check menu already exists
menu = find_scripts_menu(title, hieromainbar)
if not menu:
log.info("Attempting to build menu ...")
object_name = objectName or title.lower()
menu = scriptsmenu.ScriptsMenu(title=title,
parent=hieromainbar,
objectName=object_name)
except Exception as e:
log.error(e)
return
return menu
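A minimal usage sketch for this module (the `config` entries here are hypothetical; in practice the definition comes from project settings, as the menu changes further below show):
# Build (or fetch) the "Scripts" menu in Hiero's menu bar and fill it.
menu = main(title="Scripts")
config = [{
    "title": "My Tool",            # hypothetical example entry
    "command": "print('hello')",
    "sourcetype": "python",
}]
menu.build_from_configuration(menu, config)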


@@ -12,10 +12,16 @@ import shutil
import hiero
from Qt import QtWidgets
from bson.objectid import ObjectId
from openpype.pipeline import legacy_io
from openpype.api import (Logger, Anatomy, get_anatomy_settings)
from openpype.client import (
get_project,
get_versions,
get_last_versions,
get_representations,
)
from openpype.settings import get_anatomy_settings
from openpype.pipeline import legacy_io, Anatomy
from openpype.api import Logger
from . import tags
try:
@@ -477,7 +483,7 @@ def sync_avalon_data_to_workfile():
project.setProjectRoot(active_project_root)
# get project data from avalon db
project_doc = legacy_io.find_one({"type": "project"})
project_doc = get_project(project_name)
project_data = project_doc["data"]
log.debug("project_data: {}".format(project_data))
@@ -1065,35 +1071,63 @@ def check_inventory_versions(track_items=None):
clip_color_last = "green"
clip_color = "red"
# get all track items from current timeline
item_with_repre_id = []
repre_ids = set()
# Find all containers and collect their track items and representation ids
for track_item in track_items:
container = parse_container(track_item)
if container:
# get representation from io
representation = legacy_io.find_one({
"type": "representation",
"_id": ObjectId(container["representation"])
})
repre_id = container["representation"]
repre_ids.add(repre_id)
item_with_repre_id.append((track_item, repre_id))
# Get start frame from version data
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
# Skip if nothing was found
if not repre_ids:
return
# get all versions in list
versions = legacy_io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
project_name = legacy_io.active_project()
# Find representations based on found containers
repre_docs = get_representations(
project_name,
repre_ids=repre_ids,
fields=["_id", "parent"]
)
# Store representations by id and collect version ids
repre_docs_by_id = {}
version_ids = set()
for repre_doc in repre_docs:
# Use stringed representation id to match value in containers
repre_id = str(repre_doc["_id"])
repre_docs_by_id[repre_id] = repre_doc
version_ids.add(repre_doc["parent"])
max_version = max(versions)
version_docs = get_versions(
project_name, version_ids, fields=["_id", "name", "parent"]
)
# Store versions by id and collect subset ids
version_docs_by_id = {}
subset_ids = set()
for version_doc in version_docs:
version_docs_by_id[version_doc["_id"]] = version_doc
subset_ids.add(version_doc["parent"])
# set clip colour
if version.get("name") == max_version:
track_item.source().binItem().setColor(clip_color_last)
else:
track_item.source().binItem().setColor(clip_color)
# Query last versions based on subset ids
last_versions_by_subset_id = get_last_versions(
project_name, subset_ids=subset_ids, fields=["_id", "parent"]
)
for item in item_with_repre_id:
# Some python versions of nuke can't unfold tuple in for loop
track_item, repre_id = item
repre_doc = repre_docs_by_id[repre_id]
version_doc = version_docs_by_id[repre_doc["parent"]]
last_version_doc = last_versions_by_subset_id[version_doc["parent"]]
# Check if last version is same as current version
if version_doc["_id"] == last_version_doc["_id"]:
track_item.source().binItem().setColor(clip_color_last)
else:
track_item.source().binItem().setColor(clip_color)
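The rewrite above replaces the per-clip `find_one` round-trips with three bulk queries (representations, versions, last versions) resolved through in-memory mappings. A minimal sketch of the pattern, with `containers` standing in for the parsed track items:
# Collect ids first, run one bulk query, then resolve from memory.
repre_ids = {container["representation"] for container in containers}
repre_docs_by_id = {
    str(repre_doc["_id"]): repre_doc
    for repre_doc in get_representations(project_name, repre_ids=repre_ids)
}
for container in containers:
    repre_doc = repre_docs_by_id.get(container["representation"])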
def selection_changed_timeline(event):


@@ -9,6 +9,7 @@ from openpype.pipeline import legacy_io
from openpype.tools.utils import host_tools
from . import tags
from openpype.settings import get_project_settings
log = Logger.get_logger(__name__)
@@ -41,6 +42,7 @@ def menu_install():
Installing menu into Hiero
"""
from Qt import QtGui
from . import (
publish, launch_workfiles_app, reload_config,
@@ -138,3 +140,30 @@ def menu_install():
exeprimental_action.triggered.connect(
lambda: host_tools.show_experimental_tools_dialog(parent=main_window)
)
def add_scripts_menu():
try:
from . import launchforhiero
except ImportError:
log.warning(
"Skipping studio.menu install, because "
"'scriptsmenu' module seems unavailable."
)
return
# load configuration of custom menu
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
config = project_settings["hiero"]["scriptsmenu"]["definition"]
_menu = project_settings["hiero"]["scriptsmenu"]["name"]
if not config:
log.warning("Skipping studio menu, no definition found.")
return
# run the launcher for Hiero menu
studio_menu = launchforhiero.main(title=_menu.title())
# apply configuration
studio_menu.build_from_configuration(studio_menu, config)


@@ -48,6 +48,7 @@ def install():
# install menu
menu.menu_install()
menu.add_scripts_menu()
# register hiero events
events.register_hiero_events()


@@ -2,6 +2,7 @@ import re
import os
import hiero
from openpype.client import get_project, get_assets
from openpype.api import Logger
from openpype.pipeline import legacy_io
@@ -141,7 +142,9 @@ def add_tags_to_workfile():
nks_pres_tags = tag_data()
# Get project task types.
tasks = legacy_io.find_one({"type": "project"})["config"]["tasks"]
project_name = legacy_io.active_project()
project_doc = get_project(project_name)
tasks = project_doc["config"]["tasks"]
nks_pres_tags["[Tasks]"] = {}
log.debug("__ tasks: {}".format(tasks))
for task_type in tasks.keys():
@@ -159,7 +162,9 @@ def add_tags_to_workfile():
# asset builds and shots.
if int(os.getenv("TAG_ASSETBUILD_STARTUP", 0)) == 1:
nks_pres_tags["[AssetBuilds]"] = {}
for asset in legacy_io.find({"type": "asset"}):
for asset in get_assets(
project_name, fields=["name", "data.entityType"]
):
if asset["data"]["entityType"] == "AssetBuild":
nks_pres_tags["[AssetBuilds]"][asset["name"]] = {
"editable": "1",


@@ -1,3 +1,7 @@
from openpype.client import (
get_version_by_id,
get_last_version_by_subset_id
)
from openpype.pipeline import (
legacy_io,
get_representation_path,
@@ -103,12 +107,12 @@ class LoadClip(phiero.SequenceLoader):
namespace = container['namespace']
track_item = phiero.get_track_items(
track_item_name=namespace).pop()
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
version_data = version.get("data", {})
version_name = version.get("name", None)
project_name = legacy_io.active_project()
version_doc = get_version_by_id(project_name, representation["parent"])
version_data = version_doc.get("data", {})
version_name = version_doc.get("name", None)
colorspace = version_data.get("colorspace", None)
object_name = "{}_{}".format(name, namespace)
file = get_representation_path(representation).replace("\\", "/")
@@ -143,7 +147,7 @@ class LoadClip(phiero.SequenceLoader):
})
# update color of clip regarding the version order
self.set_item_color(track_item, version)
self.set_item_color(track_item, version_doc)
return phiero.update_container(track_item, data_imprint)
@@ -166,21 +170,14 @@ class LoadClip(phiero.SequenceLoader):
cls.sequence = cls.track.parent()
@classmethod
def set_item_color(cls, track_item, version):
def set_item_color(cls, track_item, version_doc):
project_name = legacy_io.active_project()
last_version_doc = get_last_version_by_subset_id(
project_name, version_doc["parent"], fields=["_id"]
)
clip = track_item.source()
# define version name
version_name = version.get("name", None)
# get all versions in list
versions = legacy_io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
max_version = max(versions)
# set clip colour
if version_name == max_version:
if version_doc["_id"] == last_version_doc["_id"]:
clip.binItem().setColor(cls.clip_color_last)
else:
clip.binItem().setColor(cls.clip_color)


@@ -1,5 +1,5 @@
import pyblish
import openpype
from openpype.pipeline.editorial import is_overlapping_otio_ranges
from openpype.hosts.hiero import api as phiero
from openpype.hosts.hiero.api.otio import hiero_export
import hiero
@@ -275,7 +275,7 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
parent_range = otio_audio.range_in_parent()
# if any overlapping clip is found then return True
if openpype.lib.is_overlapping_otio_ranges(
if is_overlapping_otio_ranges(
parent_range, timeline_range, strict=False):
return True
@@ -304,7 +304,7 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
continue
self.log.debug("__ parent_range: {}".format(parent_range))
self.log.debug("__ timeline_range: {}".format(timeline_range))
if openpype.lib.is_overlapping_otio_ranges(
if is_overlapping_otio_ranges(
parent_range, timeline_range, strict=True):
# add pypedata marker to otio_clip metadata


@@ -1,4 +1,5 @@
from pyblish import api
from openpype.client import get_assets
from openpype.pipeline import legacy_io
@@ -17,8 +18,9 @@ class CollectAssetBuilds(api.ContextPlugin):
hosts = ["hiero"]
def process(self, context):
project_name = legacy_io.active_project()
asset_builds = {}
for asset in legacy_io.find({"type": "asset"}):
for asset in get_assets(project_name):
if asset["data"]["entityType"] == "AssetBuild":
self.log.debug("Found \"{}\" in database.".format(asset))
asset_builds[asset["name"]] = asset


@@ -4,6 +4,7 @@ from contextlib import contextmanager
import six
from openpype.client import get_asset_by_name
from openpype.api import get_asset
from openpype.pipeline import legacy_io
@@ -74,16 +75,13 @@ def generate_ids(nodes, asset_id=None):
"""
if asset_id is None:
project_name = legacy_io.active_project()
asset_name = legacy_io.Session["AVALON_ASSET"]
# Get the asset ID from the database for the asset of current context
asset_data = legacy_io.find_one(
{
"type": "asset",
"name": legacy_io.Session["AVALON_ASSET"]
},
projection={"_id": True}
)
assert asset_data, "No current asset found in Session"
asset_id = asset_data['_id']
asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"])
assert asset_doc, "No current asset found in Session"
asset_id = asset_doc['_id']
node_ids = []
for node in nodes:
@@ -430,26 +428,29 @@ def maintained_selection():
def reset_framerange():
"""Set frame range to current asset"""
project_name = legacy_io.active_project()
asset_name = legacy_io.Session["AVALON_ASSET"]
asset = legacy_io.find_one({"name": asset_name, "type": "asset"})
# Get the asset ID from the database for the asset of current context
asset_doc = get_asset_by_name(project_name, asset_name)
asset_data = asset_doc["data"]
frame_start = asset["data"].get("frameStart")
frame_end = asset["data"].get("frameEnd")
frame_start = asset_data.get("frameStart")
frame_end = asset_data.get("frameEnd")
# Backwards compatibility
if frame_start is None or frame_end is None:
frame_start = asset["data"].get("edit_in")
frame_end = asset["data"].get("edit_out")
frame_start = asset_data.get("edit_in")
frame_end = asset_data.get("edit_out")
if frame_start is None or frame_end is None:
log.warning("No edit information found for %s" % asset_name)
return
handles = asset["data"].get("handles") or 0
handle_start = asset["data"].get("handleStart")
handles = asset_data.get("handles") or 0
handle_start = asset_data.get("handleStart")
if handle_start is None:
handle_start = handles
handle_end = asset["data"].get("handleEnd")
handle_end = asset_data.get("handleEnd")
if handle_end is None:
handle_end = handles


@@ -6,6 +6,7 @@ import logging
from Qt import QtWidgets, QtCore, QtGui
from openpype import style
from openpype.client import get_asset_by_name
from openpype.pipeline import legacy_io
from openpype.tools.utils.assets_widget import SingleSelectAssetsWidget
@@ -46,10 +47,8 @@ class SelectAssetDialog(QtWidgets.QWidget):
select_id = None
name = self._parm.eval()
if name:
db_asset = legacy_io.find_one(
{"name": name, "type": "asset"},
{"_id": True}
)
project_name = legacy_io.active_project()
db_asset = get_asset_by_name(project_name, name, fields=["_id"])
if db_asset:
select_id = db_asset["_id"]


@@ -1,6 +1,10 @@
# -*- coding: utf-8 -*-
import hou
from openpype.client import (
get_asset_by_name,
get_subsets,
)
from openpype.pipeline import legacy_io
from openpype.hosts.houdini.api import lib
from openpype.hosts.houdini.api import plugin
@@ -23,20 +27,16 @@ class CreateHDA(plugin.Creator):
# type: (str) -> bool
"""Check if existing subset name versions already exists."""
# Get all subsets of the current asset
asset_id = legacy_io.find_one(
{"name": self.data["asset"], "type": "asset"},
projection={"_id": True}
)['_id']
subset_docs = legacy_io.find(
{
"type": "subset",
"parent": asset_id
},
{"name": 1}
project_name = legacy_io.active_project()
asset_doc = get_asset_by_name(
project_name, self.data["asset"], fields=["_id"]
)
subset_docs = get_subsets(
project_name, asset_ids=[asset_doc["_id"]], fields=["name"]
)
existing_subset_names = set(subset_docs.distinct("name"))
existing_subset_names_low = {
_name.lower() for _name in existing_subset_names
subset_doc["name"].lower()
for subset_doc in subset_docs
}
return subset_name.lower() in existing_subset_names_low


@@ -44,7 +44,8 @@ class BgeoLoader(load.LoaderPlugin):
# Explicitly create a file node
file_node = container.createNode("file", node_name=node_name)
file_node.setParms({"file": self.format_path(self.fname, is_sequence)})
file_node.setParms(
{"file": self.format_path(self.fname, context["representation"])})
# Set display on last node
file_node.setDisplayFlag(True)
@@ -62,15 +63,15 @@
)
@staticmethod
def format_path(path, is_sequence):
def format_path(path, representation):
"""Format file path correctly for single bgeo or bgeo sequence."""
if not os.path.exists(path):
raise RuntimeError("Path does not exist: %s" % path)
is_sequence = bool(representation["context"].get("frame"))
# The path is either a single file or sequence in a folder.
if not is_sequence:
filename = path
print("single")
else:
filename = re.sub(r"(.*)\.(\d+)\.(bgeo.*)", "\\1.$F4.\\3", path)
@@ -94,9 +95,9 @@ class BgeoLoader(load.LoaderPlugin):
# Update the file path
file_path = get_representation_path(representation)
file_path = self.format_path(file_path)
file_path = self.format_path(file_path, representation)
file_node.setParms({"fileName": file_path})
file_node.setParms({"file": file_path})
# Update attribute
node.setParms({"representation": str(representation["_id"])})
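The sequence branch rewrites the published frame number into Houdini's `$F4` token. A worked example of the substitution (the file name is hypothetical):
import re
path = "cache.1001.bgeo.sc"
print(re.sub(r"(.*)\.(\d+)\.(bgeo.*)", "\\1.$F4.\\3", path))
# -> cache.$F4.bgeo.sc, which Houdini expands to the zero-padded frame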


@@ -40,7 +40,8 @@ class VdbLoader(load.LoaderPlugin):
# Explicitly create a file node
file_node = container.createNode("file", node_name=node_name)
file_node.setParms({"file": self.format_path(self.fname)})
file_node.setParms(
{"file": self.format_path(self.fname, context["representation"])})
# Set display on last node
file_node.setDisplayFlag(True)
@@ -57,30 +58,20 @@ class VdbLoader(load.LoaderPlugin):
suffix="",
)
def format_path(self, path):
@staticmethod
def format_path(path, representation):
"""Format file path correctly for single vdb or vdb sequence."""
if not os.path.exists(path):
raise RuntimeError("Path does not exist: %s" % path)
is_sequence = bool(representation["context"].get("frame"))
# The path is either a single file or sequence in a folder.
is_single_file = os.path.isfile(path)
if is_single_file:
if not is_sequence:
filename = path
else:
# The path points to the publish .vdb sequence folder so we
# find the first file in there that ends with .vdb
files = sorted(os.listdir(path))
first = next((x for x in files if x.endswith(".vdb")), None)
if first is None:
raise RuntimeError(
"Couldn't find first .vdb file of "
"sequence in: %s" % path
)
filename = re.sub(r"(.*)\.(\d+)\.vdb$", "\\1.$F4.vdb", path)
# Set <frame>.vdb to $F.vdb
first = re.sub(r"\.(\d+)\.vdb$", ".$F.vdb", first)
filename = os.path.join(path, first)
filename = os.path.join(path, filename)
filename = os.path.normpath(filename)
filename = filename.replace("\\", "/")
@@ -100,7 +91,7 @@ class VdbLoader(load.LoaderPlugin):
# Update the file path
file_path = get_representation_path(representation)
file_path = self.format_path(file_path)
file_path = self.format_path(file_path, representation)
file_node.setParms({"file": file_path})


@@ -1,5 +1,6 @@
import pyblish.api
from openpype.client import get_subset_by_name, get_asset_by_name
from openpype.pipeline import legacy_io
import openpype.lib.usdlib as usdlib
@@ -50,10 +51,8 @@ class CollectUsdBootstrap(pyblish.api.InstancePlugin):
self.log.debug("Add bootstrap for: %s" % bootstrap)
asset = legacy_io.find_one({
"name": instance.data["asset"],
"type": "asset"
})
project_name = legacy_io.active_project()
asset = get_asset_by_name(project_name, instance.data["asset"])
assert asset, "Asset must exist: %s" % asset
# Check which are not about to be created and don't exist yet
@@ -70,7 +69,7 @@ class CollectUsdBootstrap(pyblish.api.InstancePlugin):
self.log.debug("Checking required bootstrap: %s" % required)
for subset in required:
if self._subset_exists(instance, subset, asset):
if self._subset_exists(project_name, instance, subset, asset):
continue
self.log.debug(
@@ -93,7 +92,7 @@ class CollectUsdBootstrap(pyblish.api.InstancePlugin):
for key in ["asset"]:
new.data[key] = instance.data[key]
def _subset_exists(self, instance, subset, asset):
def _subset_exists(self, project_name, instance, subset, asset):
"""Return whether subset exists in current context or in database."""
# Allow it to be created during this publish session
context = instance.context
@@ -106,9 +105,8 @@ class CollectUsdBootstrap(pyblish.api.InstancePlugin):
# Or, if they already exist in the database we can
# skip them too.
return bool(
legacy_io.find_one(
{"name": subset, "type": "subset", "parent": asset["_id"]},
{"_id": True}
)
)
if get_subset_by_name(
project_name, subset, asset["_id"], fields=["_id"]
):
return True
return False


@@ -7,6 +7,12 @@ from collections import deque
import pyblish.api
import openpype.api
from openpype.client import (
get_asset_by_name,
get_subset_by_name,
get_last_version_by_subset_id,
get_representation_by_name,
)
from openpype.pipeline import (
get_representation_path,
legacy_io,
@@ -244,11 +250,14 @@ class ExtractUSDLayered(openpype.api.Extractor):
# Set up the dependency for publish if they have new content
# compared to previous publishes
project_name = legacy_io.active_project()
for dependency in active_dependencies:
dependency_fname = dependency.data["usdFilename"]
filepath = os.path.join(staging_dir, dependency_fname)
similar = self._compare_with_latest_publish(dependency, filepath)
similar = self._compare_with_latest_publish(
project_name, dependency, filepath
)
if similar:
# Deactivate this dependency
self.log.debug(
@@ -268,7 +277,7 @@ class ExtractUSDLayered(openpype.api.Extractor):
instance.data["files"] = []
instance.data["files"].append(fname)
def _compare_with_latest_publish(self, dependency, new_file):
def _compare_with_latest_publish(self, project_name, dependency, new_file):
import filecmp
_, ext = os.path.splitext(new_file)
@@ -276,35 +285,29 @@
# Compare this dependency with the latest published version
# to detect whether we should make this into a new publish
# version. If not, skip it.
asset = legacy_io.find_one(
{"name": dependency.data["asset"], "type": "asset"}
asset = get_asset_by_name(
project_name, dependency.data["asset"], fields=["_id"]
)
subset = legacy_io.find_one(
{
"name": dependency.data["subset"],
"type": "subset",
"parent": asset["_id"],
}
subset = get_subset_by_name(
project_name,
dependency.data["subset"],
asset["_id"],
fields=["_id"]
)
if not subset:
# Subset doesn't exist yet. Definitely new file
self.log.debug("No existing subset..")
return False
version = legacy_io.find_one(
{"type": "version", "parent": subset["_id"], },
sort=[("name", -1)]
version = get_last_version_by_subset_id(
project_name, subset["_id"], fields=["_id"]
)
if not version:
self.log.debug("No existing version..")
return False
representation = legacy_io.find_one(
{
"name": ext.lstrip("."),
"type": "representation",
"parent": version["_id"],
}
representation = get_representation_by_name(
project_name, ext.lstrip("."), version["_id"]
)
if not representation:
self.log.debug("No existing representation..")


@@ -2,6 +2,7 @@ import re
import pyblish.api
from openpype.client import get_subset_by_name
import openpype.api
from openpype.pipeline import legacy_io
@@ -15,31 +16,23 @@ class ValidateUSDShadeModelExists(pyblish.api.InstancePlugin):
label = "USD Shade model exists"
def process(self, instance):
asset = instance.data["asset"]
project_name = legacy_io.active_project()
asset_name = instance.data["asset"]
subset = instance.data["subset"]
# Assume shading variation starts after a dot separator
shade_subset = subset.split(".", 1)[0]
model_subset = re.sub("^usdShade", "usdModel", shade_subset)
asset_doc = legacy_io.find_one(
{"name": asset, "type": "asset"},
{"_id": True}
)
asset_doc = instance.data.get("assetEntity")
if not asset_doc:
raise RuntimeError("Asset does not exist: %s" % asset)
raise RuntimeError("Asset document is not filled on instance.")
subset_doc = legacy_io.find_one(
{
"name": model_subset,
"type": "subset",
"parent": asset_doc["_id"],
},
{"_id": True}
subset_doc = get_subset_by_name(
project_name, model_subset, asset_doc["_id"], fields=["_id"]
)
if not subset_doc:
raise RuntimeError(
"USD Model subset not found: "
"%s (%s)" % (model_subset, asset)
"%s (%s)" % (model_subset, asset_name)
)


@@ -4,19 +4,8 @@ import husdoutputprocessors.base as base
import colorbleed.usdlib as usdlib
from openpype.pipeline import (
legacy_io,
registered_root,
)
def _get_project_publish_template():
"""Return publish template from database for current project"""
project = legacy_io.find_one(
{"type": "project"},
projection={"config.template.publish": True}
)
return project["config"]["template"]["publish"]
from openpype.client import get_asset_by_name
from openpype.pipeline import legacy_io, Anatomy
class AvalonURIOutputProcessor(base.OutputProcessorBase):
@@ -35,7 +24,6 @@ class AvalonURIOutputProcessor(base.OutputProcessorBase):
ever created in a Houdini session. Therefore be very careful
about what data gets put in this object.
"""
self._template = None
self._use_publish_paths = False
self._cache = dict()
@@ -60,14 +48,11 @@ class AvalonURIOutputProcessor(base.OutputProcessorBase):
return self._parameters
def beginSave(self, config_node, t):
self._template = _get_project_publish_template()
parm = self._parms["use_publish_paths"]
self._use_publish_paths = config_node.parm(parm).evalAtTime(t)
self._cache.clear()
def endSave(self):
self._template = None
self._use_publish_paths = None
self._cache.clear()
@@ -138,22 +123,19 @@ class AvalonURIOutputProcessor(base.OutputProcessorBase):
"""
PROJECT = legacy_io.Session["AVALON_PROJECT"]
asset_doc = legacy_io.find_one({
"name": asset,
"type": "asset"
})
anatomy = Anatomy(PROJECT)
asset_doc = get_asset_by_name(PROJECT, asset)
if not asset_doc:
raise RuntimeError("Invalid asset name: '%s'" % asset)
root = registered_root()
path = self._template.format(**{
"root": root,
formatted_anatomy = anatomy.format({
"project": PROJECT,
"asset": asset_doc["name"],
"subset": subset,
"representation": ext,
"version": 0 # stub version zero
})
path = formatted_anatomy["publish"]["path"]
# Remove the version folder
subset_folder = os.path.dirname(os.path.dirname(path))
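The processor now resolves publish paths through `Anatomy` instead of reading a raw template from the project document. A minimal sketch mirroring the call above (project and context values are hypothetical; the exact data keys depend on the project's anatomy templates):
anatomy = Anatomy("my_project")
formatted_anatomy = anatomy.format({
    "project": "my_project",
    "asset": "hero",
    "subset": "usdShadeMain",
    "representation": "usd",
    "version": 0,  # stub version, as above
})
publish_path = formatted_anatomy["publish"]["path"]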


@@ -5,11 +5,11 @@ Anything that isn't defined here is INTERNAL and unreliable for external use.
"""
from .pipeline import (
install,
uninstall,
ls,
containerise,
MayaHost,
)
from .plugin import (
Creator,
@@ -40,11 +40,11 @@ from .lib import (
__all__ = [
"install",
"uninstall",
"ls",
"containerise",
"MayaHost",
"Creator",
"Loader",


@@ -1908,7 +1908,7 @@ def iter_parents(node):
"""
while True:
split = node.rsplit("|", 1)
if len(split) == 1:
if len(split) == 1 or not split[0]:
return
node = split[0]
@@ -2522,12 +2522,30 @@ def load_capture_preset(data=None):
temp_options2['multiSampleEnable'] = False
temp_options2['multiSampleCount'] = preset[id][key]
if key == 'renderDepthOfField':
temp_options2['renderDepthOfField'] = preset[id][key]
if key == 'ssaoEnable':
if preset[id][key] is True:
temp_options2['ssaoEnable'] = True
else:
temp_options2['ssaoEnable'] = False
if key == 'ssaoSamples':
temp_options2['ssaoSamples'] = preset[id][key]
if key == 'ssaoAmount':
temp_options2['ssaoAmount'] = preset[id][key]
if key == 'ssaoRadius':
temp_options2['ssaoRadius'] = preset[id][key]
if key == 'hwFogDensity':
temp_options2['hwFogDensity'] = preset[id][key]
if key == 'ssaoFilterRadius':
temp_options2['ssaoFilterRadius'] = preset[id][key]
if key == 'alphaCut':
temp_options2['transparencyAlgorithm'] = 5
temp_options2['transparencyQuality'] = 1
@@ -2535,6 +2553,48 @@ def load_capture_preset(data=None):
if key == 'headsUpDisplay':
temp_options['headsUpDisplay'] = True
if key == 'fogging':
temp_options['fogging'] = preset[id][key] or False
if key == 'hwFogStart':
temp_options2['hwFogStart'] = preset[id][key]
if key == 'hwFogEnd':
temp_options2['hwFogEnd'] = preset[id][key]
if key == 'hwFogAlpha':
temp_options2['hwFogAlpha'] = preset[id][key]
if key == 'hwFogFalloff':
temp_options2['hwFogFalloff'] = int(preset[id][key])
if key == 'hwFogColorR':
temp_options2['hwFogColorR'] = preset[id][key]
if key == 'hwFogColorG':
temp_options2['hwFogColorG'] = preset[id][key]
if key == 'hwFogColorB':
temp_options2['hwFogColorB'] = preset[id][key]
if key == 'motionBlurEnable':
if preset[id][key] is True:
temp_options2['motionBlurEnable'] = True
else:
temp_options2['motionBlurEnable'] = False
if key == 'motionBlurSampleCount':
temp_options2['motionBlurSampleCount'] = preset[id][key]
if key == 'motionBlurShutterOpenFraction':
temp_options2['motionBlurShutterOpenFraction'] = preset[id][key]
if key == 'lineAAEnable':
if preset[id][key] is True:
temp_options2['lineAAEnable'] = True
else:
temp_options2['lineAAEnable'] = False
else:
temp_options[str(key)] = preset[id][key]
@@ -2544,7 +2604,24 @@ def load_capture_preset(data=None):
'gpuCacheDisplayFilter',
'multiSample',
'ssaoEnable',
'textureMaxResolution'
'ssaoSamples',
'ssaoAmount',
'ssaoFilterRadius',
'ssaoRadius',
'hwFogStart',
'hwFogEnd',
'hwFogAlpha',
'hwFogFalloff',
'hwFogColorR',
'hwFogColorG',
'hwFogColorB',
'hwFogDensity',
'textureMaxResolution',
'motionBlurEnable',
'motionBlurSampleCount',
'motionBlurShutterOpenFraction',
'lineAAEnable',
'renderDepthOfField'
]:
temp_options.pop(key, None)
@@ -3213,3 +3290,209 @@ def parent_nodes(nodes, parent=None):
node[0].setParent(node[1])
if delete_parent:
pm.delete(parent_node)
@contextlib.contextmanager
def maintained_time():
ct = cmds.currentTime(query=True)
try:
yield
finally:
cmds.currentTime(ct, edit=True)
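# Usage sketch: code inside the block may scrub the timeline freely,
# e.g. cmds.currentTime(1001, edit=True); the original current frame
# is restored on exit, even if an exception was raised.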
def iter_visible_nodes_in_range(nodes, start, end):
"""Yield nodes that are visible in start-end frame range.
- Ignores intermediateObjects completely.
- Considers animated visibility attributes + upstream visibilities.
This is optimized for large scenes where some nodes in the parent
hierarchy might have some input connections to the visibilities,
e.g. key, driven keys, connections to other attributes, etc.
This only does a single time step to `start` if current frame is
not inside frame range since the assumption is made that changing
a frame isn't so slow that it beats querying all visibility
plugs through MDGContext on another frame.
Args:
nodes (list): List of node names to consider.
start (int, float): Start frame.
end (int, float): End frame.
Returns:
list: List of node names. These will be long full path names so
might have a longer name than the input nodes.
"""
# States we consider per node
VISIBLE = 1 # always visible
INVISIBLE = 0 # always invisible
ANIMATED = -1 # animated visibility
# Ensure integers
start = int(start)
end = int(end)
# Consider only non-intermediate dag nodes and use the "long" names.
nodes = cmds.ls(nodes, long=True, noIntermediate=True, type="dagNode")
if not nodes:
return
with maintained_time():
# Go to first frame of the range if the current time is outside
# the queried range so can directly query all visible nodes on
# that frame.
current_time = cmds.currentTime(query=True)
if not (start <= current_time <= end):
cmds.currentTime(start)
visible = cmds.ls(nodes, long=True, visible=True)
for node in visible:
yield node
if len(visible) == len(nodes) or start == end:
# All are visible on the start frame, so they are at least visible once
# inside the frame range.
return
# For the invisible ones check whether its visibility and/or
# any of its parents visibility attributes are animated. If so, it might
# get visible on other frames in the range.
def memodict(f):
"""Memoization decorator for a function taking a single argument.
See: http://code.activestate.com/recipes/
578231-probably-the-fastest-memoization-decorator-in-the-/
"""
class memodict(dict):
def __missing__(self, key):
ret = self[key] = f(key)
return ret
return memodict().__getitem__
@memodict
def get_state(node):
plug = node + ".visibility"
connections = cmds.listConnections(plug,
source=True,
destination=False)
if connections:
return ANIMATED
else:
return VISIBLE if cmds.getAttr(plug) else INVISIBLE
visible = set(visible)
invisible = [node for node in nodes if node not in visible]
always_invisible = set()
# Iterate the nodes from shortest to longest name so the highest nodes
# in the hierarchy are processed first. That way the collected data can
# be reused from the cache for parent queries in later iterations.
node_dependencies = dict()
for node in sorted(invisible, key=len):
state = get_state(node)
if state == INVISIBLE:
always_invisible.add(node)
continue
# If not always invisible by itself we should go through and check
# the parents to see if any of them are always invisible. For those
# that are "ANIMATED" we consider that this node is dependent on
# that attribute, we store them as dependency.
dependencies = set()
if state == ANIMATED:
dependencies.add(node)
traversed_parents = list()
for parent in iter_parents(node):
if parent in always_invisible or get_state(parent) == INVISIBLE:
# When parent is always invisible then consider this parent,
# this node we started from and any of the parents we
# have traversed in-between to be *always invisible*
always_invisible.add(parent)
always_invisible.add(node)
always_invisible.update(traversed_parents)
break
# If we have traversed the parent before and its visibility
# was dependent on animated visibilities then we can just extend
# this node's dependencies with the parent's and break further
# iteration upwards.
parent_dependencies = node_dependencies.get(parent, None)
if parent_dependencies is not None:
dependencies.update(parent_dependencies)
break
state = get_state(parent)
if state == ANIMATED:
dependencies.add(parent)
traversed_parents.append(parent)
if node not in always_invisible and dependencies:
node_dependencies[node] = dependencies
if not node_dependencies:
return
# Now we only have to check the visibilities for nodes that have animated
# visibility dependencies upstream. The fastest way to check these
# visibility attributes across different frames is with Python api 2.0
# so we do that.
@memodict
def get_visibility_mplug(node):
"""Return api 2.0 MPlug with cached memoize decorator"""
sel = om.MSelectionList()
sel.add(node)
dag = sel.getDagPath(0)
return om.MFnDagNode(dag).findPlug("visibility", True)
@contextlib.contextmanager
def dgcontext(mtime):
"""MDGContext context manager"""
context = om.MDGContext(mtime)
try:
previous = context.makeCurrent()
yield context
finally:
previous.makeCurrent()
# We skip the first frame as we already used that frame to check for
# overall visibilities. And end+1 to include the end frame.
scene_units = om.MTime.uiUnit()
for frame in range(start + 1, end + 1):
mtime = om.MTime(frame, unit=scene_units)
# Build little cache so we don't query the same MPlug's value
# again if it was checked on this frame and also is a dependency
# for another node
frame_visibilities = {}
with dgcontext(mtime) as context:
for node, dependencies in list(node_dependencies.items()):
for dependency in dependencies:
dependency_visible = frame_visibilities.get(dependency,
None)
if dependency_visible is None:
mplug = get_visibility_mplug(dependency)
dependency_visible = mplug.asBool(context)
frame_visibilities[dependency] = dependency_visible
if not dependency_visible:
# One dependency is not visible, thus the
# node is not visible.
break
else:
# All dependencies are visible.
yield node
# Remove node with dependencies for next frame iterations
# because it was visible at least once.
node_dependencies.pop(node)
# If no more nodes to process break the frame iterations..
if not node_dependencies:
break
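A minimal usage sketch for the generator above (node pattern and frame range are hypothetical):
from maya import cmds
nodes = cmds.ls("char_*", long=True)
visible_once = list(iter_visible_nodes_in_range(nodes, start=1001, end=1050))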


@@ -1087,7 +1087,7 @@ class RenderProductsRenderman(ARenderProducts):
"d_tiff": "tif"
}
displays = get_displays()["displays"]
displays = get_displays(override_dst="render")["displays"]
for name, display in displays.items():
enabled = display["params"]["enable"]["value"]
if not enabled:
@@ -1106,9 +1106,33 @@ class RenderProductsRenderman(ARenderProducts):
display["driverNode"]["type"], "exr")
for camera in cameras:
product = RenderProduct(productName=aov_name,
ext=extensions,
camera=camera)
# Create render product and set it as multipart only on
# display types supporting it. In all other cases, Renderman
# will create separate output per channel.
if display["driverNode"]["type"] in ["d_openexr", "d_deepexr", "d_tiff"]: # noqa
product = RenderProduct(
productName=aov_name,
ext=extensions,
camera=camera,
multipart=True
)
else:
# this code should handle the case where no multipart
# capable format is selected. But since it involves
# shady logic to determine which channel becomes what,
# let's not do that as all productions will use exr anyway.
"""
for channel in display['params']['displayChannels']['value']: # noqa
product = RenderProduct(
productName="{}_{}".format(aov_name, channel),
ext=extensions,
camera=camera,
multipart=False
)
"""
raise UnsupportedImageFormatException(
"Only exr, deep exr and tiff formats are supported.")
products.append(product)
return products
@@ -1201,3 +1225,7 @@ class UnsupportedRendererException(Exception):
Raised when requesting data from unsupported renderer.
"""
class UnsupportedImageFormatException(Exception):
"""Custom exception to report unsupported output image format."""


@@ -1,13 +1,15 @@
import os
import sys
import errno
import logging
import contextlib
from maya import utils, cmds, OpenMaya
import maya.api.OpenMaya as om
import pyblish.api
from openpype.settings import get_project_settings
from openpype.host import HostBase, IWorkfileHost, ILoadHost
import openpype.hosts.maya
from openpype.tools.utils import host_tools
from openpype.lib import (
@@ -28,6 +30,14 @@ from openpype.pipeline import (
)
from openpype.hosts.maya.lib import copy_workspace_mel
from . import menu, lib
from .workio import (
open_file,
save_file,
file_extensions,
has_unsaved_changes,
work_root,
current_file
)
log = logging.getLogger("openpype.hosts.maya")
@@ -40,49 +50,121 @@ INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
AVALON_CONTAINERS = ":AVALON_CONTAINERS"
self = sys.modules[__name__]
self._ignore_lock = False
self._events = {}
class MayaHost(HostBase, IWorkfileHost, ILoadHost):
name = "maya"
def install():
from openpype.settings import get_project_settings
def __init__(self):
super(MayaHost, self).__init__()
self._op_events = {}
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
# process path mapping
dirmap_processor = MayaDirmap("maya", project_settings)
dirmap_processor.process_dirmap()
def install(self):
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
# process path mapping
dirmap_processor = MayaDirmap("maya", project_settings)
dirmap_processor.process_dirmap()
pyblish.api.register_plugin_path(PUBLISH_PATH)
pyblish.api.register_host("mayabatch")
pyblish.api.register_host("mayapy")
pyblish.api.register_host("maya")
pyblish.api.register_plugin_path(PUBLISH_PATH)
pyblish.api.register_host("mayabatch")
pyblish.api.register_host("mayapy")
pyblish.api.register_host("maya")
register_loader_plugin_path(LOAD_PATH)
register_creator_plugin_path(CREATE_PATH)
register_inventory_action_path(INVENTORY_PATH)
log.info(PUBLISH_PATH)
register_loader_plugin_path(LOAD_PATH)
register_creator_plugin_path(CREATE_PATH)
register_inventory_action_path(INVENTORY_PATH)
self.log.info(PUBLISH_PATH)
log.info("Installing callbacks ... ")
register_event_callback("init", on_init)
self.log.info("Installing callbacks ... ")
register_event_callback("init", on_init)
if lib.IS_HEADLESS:
log.info(("Running in headless mode, skipping Maya "
"save/open/new callback installation.."))
if lib.IS_HEADLESS:
self.log.info((
"Running in headless mode, skipping Maya save/open/new"
" callback installation.."
))
return
return
_set_project()
_register_callbacks()
_set_project()
self._register_callbacks()
menu.install()
menu.install()
register_event_callback("save", on_save)
register_event_callback("open", on_open)
register_event_callback("new", on_new)
register_event_callback("before.save", on_before_save)
register_event_callback("taskChanged", on_task_changed)
register_event_callback("workfile.save.before", before_workfile_save)
register_event_callback("save", on_save)
register_event_callback("open", on_open)
register_event_callback("new", on_new)
register_event_callback("before.save", on_before_save)
register_event_callback("taskChanged", on_task_changed)
register_event_callback("workfile.save.before", before_workfile_save)
def open_workfile(self, filepath):
return open_file(filepath)
def save_workfile(self, filepath=None):
return save_file(filepath)
def work_root(self, session):
return work_root(session)
def get_current_workfile(self):
return current_file()
def workfile_has_unsaved_changes(self):
return has_unsaved_changes()
def get_workfile_extensions(self):
return file_extensions()
def get_containers(self):
return ls()
@contextlib.contextmanager
def maintained_selection(self):
with lib.maintained_selection():
yield
def _register_callbacks(self):
for handler, event in self._op_events.copy().items():
if event is None:
continue
try:
OpenMaya.MMessage.removeCallback(event)
self._op_events[handler] = None
except RuntimeError as exc:
self.log.info(exc)
self._op_events[_on_scene_save] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kBeforeSave, _on_scene_save
)
self._op_events[_before_scene_save] = (
OpenMaya.MSceneMessage.addCheckCallback(
OpenMaya.MSceneMessage.kBeforeSaveCheck,
_before_scene_save
)
)
self._op_events[_on_scene_new] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterNew, _on_scene_new
)
self._op_events[_on_maya_initialized] = (
OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kMayaInitialized,
_on_maya_initialized
)
)
self._op_events[_on_scene_open] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterOpen, _on_scene_open
)
self.log.info("Installed event handler _on_scene_save..")
self.log.info("Installed event handler _before_scene_save..")
self.log.info("Installed event handler _on_scene_new..")
self.log.info("Installed event handler _on_maya_initialized..")
self.log.info("Installed event handler _on_scene_open..")
def _set_project():
@@ -106,44 +188,6 @@ def _set_project():
cmds.workspace(workdir, openWorkspace=True)
def _register_callbacks():
for handler, event in self._events.copy().items():
if event is None:
continue
try:
OpenMaya.MMessage.removeCallback(event)
self._events[handler] = None
except RuntimeError as e:
log.info(e)
self._events[_on_scene_save] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kBeforeSave, _on_scene_save
)
self._events[_before_scene_save] = OpenMaya.MSceneMessage.addCheckCallback(
OpenMaya.MSceneMessage.kBeforeSaveCheck, _before_scene_save
)
self._events[_on_scene_new] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterNew, _on_scene_new
)
self._events[_on_maya_initialized] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kMayaInitialized, _on_maya_initialized
)
self._events[_on_scene_open] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterOpen, _on_scene_open
)
log.info("Installed event handler _on_scene_save..")
log.info("Installed event handler _before_scene_save..")
log.info("Installed event handler _on_scene_new..")
log.info("Installed event handler _on_maya_initialized..")
log.info("Installed event handler _on_scene_open..")
def _on_maya_initialized(*args):
emit_event("init")
@@ -475,7 +519,6 @@ def on_task_changed():
workdir = legacy_io.Session["AVALON_WORKDIR"]
if os.path.exists(workdir):
log.info("Updating Maya workspace for task change to %s", workdir)
_set_project()
# Set Maya fileDialog's start-dir to /scenes


@@ -9,8 +9,8 @@ from openpype.pipeline import (
LoaderPlugin,
get_representation_path,
AVALON_CONTAINER_ID,
Anatomy,
)
from openpype.api import Anatomy
from openpype.settings import get_project_settings
from .pipeline import containerise
from . import lib

View file

@@ -296,12 +296,9 @@ def update_package_version(container, version):
assert current_representation is not None, "This is a bug"
repre_parents = get_representation_parents(
project_name, current_representation
version_doc, subset_doc, asset_doc, project_doc = (
get_representation_parents(project_name, current_representation)
)
version_doc = subset_doc = asset_doc = project_doc = None
if repre_parents:
version_doc, subset_doc, asset_doc, project_doc = repre_parents
if version == -1:
new_version = get_last_version_by_subset_id(


@@ -15,6 +15,8 @@ class CreateReview(plugin.Creator):
keepImages = False
isolate = False
imagePlane = True
Width = 0
Height = 0
transparency = [
"preset",
"simple",
@@ -33,6 +35,8 @@ class CreateReview(plugin.Creator):
for key, value in animation_data.items():
data[key] = value
data["review_width"] = self.Width
data["review_height"] = self.Height
data["isolate"] = self.isolate
data["keepImages"] = self.keepImages
data["imagePlane"] = self.imagePlane


@@ -0,0 +1,139 @@
import os
from openpype.api import get_project_settings
from openpype.pipeline import (
load,
get_representation_path
)
# TODO aiVolume doesn't automatically set velocity fps correctly, set manual?
class LoadVDBtoArnold(load.LoaderPlugin):
"""Load OpenVDB for Arnold in aiVolume"""
families = ["vdbcache"]
representations = ["vdb"]
label = "Load VDB to Arnold"
icon = "cloud"
color = "orange"
def load(self, context, name, namespace, data):
from maya import cmds
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
try:
family = context["representation"]["context"]["family"]
except ValueError:
family = "vdbcache"
# Check if the plugin for arnold is available on the pc
try:
cmds.loadPlugin("mtoa", quiet=True)
except Exception as exc:
self.log.error("Encountered exception:\n%s" % exc)
return
asset = context['asset']
asset_name = asset["name"]
namespace = namespace or unique_namespace(
asset_name + "_",
prefix="_" if asset_name[0].isdigit() else "",
suffix="_",
)
# Root group
label = "{}:{}".format(namespace, name)
root = cmds.group(name=label, empty=True)
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
c = colors.get(family)
if c is not None:
cmds.setAttr(root + ".useOutlinerColor", 1)
cmds.setAttr(root + ".outlinerColor",
(float(c[0]) / 255),
(float(c[1]) / 255),
(float(c[2]) / 255)
)
# Create the aiVolume node
grid_node = cmds.createNode("aiVolume",
name="{}Shape".format(root),
parent=root)
self._set_path(grid_node,
path=self.fname,
representation=context["representation"])
# Lock the shape node so the user can't delete the transform/shape
# as if it was referenced
cmds.lockNode(grid_node, lock=True)
nodes = [root, grid_node]
self[:] = nodes
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, representation):
from maya import cmds
path = get_representation_path(representation)
# Find the aiVolume node
members = cmds.sets(container['objectName'], query=True)
grid_nodes = cmds.ls(members, type="aiVolume", long=True)
assert len(grid_nodes) == 1, "This is a bug"
# Update the aiVolume file path
self._set_path(grid_nodes[0], path=path, representation=representation)
# Update container representation
cmds.setAttr(container["objectName"] + ".representation",
str(representation["_id"]),
type="string")
def switch(self, container, representation):
self.update(container, representation)
def remove(self, container):
from maya import cmds
# Get all members of the avalon container, ensure they are unlocked
# and delete everything
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass
@staticmethod
def _set_path(grid_node,
path,
representation):
"""Apply the settings for the VDB path to the aiVolume node"""
from maya import cmds
if not os.path.exists(path):
raise RuntimeError("Path does not exist: %s" % path)
is_sequence = bool(representation["context"].get("frame"))
cmds.setAttr(grid_node + ".useFrameExtension", is_sequence)
# Set file path
cmds.setAttr(grid_node + ".filename", path, type="string")


@@ -1,11 +1,21 @@
import os
from openpype.api import get_project_settings
from openpype.pipeline import load
from openpype.pipeline import (
load,
get_representation_path
)
class LoadVDBtoRedShift(load.LoaderPlugin):
"""Load OpenVDB in a Redshift Volume Shape"""
"""Load OpenVDB in a Redshift Volume Shape
Note that the RedshiftVolumeShape is created without a RedshiftVolume
shader assigned. To get the Redshift volume to render correctly assign
a RedshiftVolume shader (in the Hypershade) and set the density, scatter
and emission channels to the channel names of the volumes in the VDB file.
"""
families = ["vdbcache"]
representations = ["vdb"]
@@ -55,7 +65,7 @@ class LoadVDBtoRedShift(load.LoaderPlugin):
# Root group
label = "{}:{}".format(namespace, name)
root = cmds.group(name=label, empty=True)
root = cmds.createNode("transform", name=label)
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
@@ -74,9 +84,9 @@ class LoadVDBtoRedShift(load.LoaderPlugin):
name="{}RVSShape".format(label),
parent=root)
cmds.setAttr("{}.fileName".format(volume_node),
self.fname,
type="string")
self._set_path(volume_node,
path=self.fname,
representation=context["representation"])
nodes = [root, volume_node]
self[:] = nodes
@@ -87,3 +97,56 @@ class LoadVDBtoRedShift(load.LoaderPlugin):
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, representation):
from maya import cmds
path = get_representation_path(representation)
# Find the RedshiftVolumeShape node
members = cmds.sets(container['objectName'], query=True)
grid_nodes = cmds.ls(members, type="RedshiftVolumeShape", long=True)
assert len(grid_nodes) == 1, "This is a bug"
# Update the RedshiftVolumeShape file path
self._set_path(grid_nodes[0], path=path, representation=representation)
# Update container representation
cmds.setAttr(container["objectName"] + ".representation",
str(representation["_id"]),
type="string")
def remove(self, container):
from maya import cmds
# Get all members of the avalon container, ensure they are unlocked
# and delete everything
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass
def switch(self, container, representation):
self.update(container, representation)
@staticmethod
def _set_path(grid_node,
path,
representation):
"""Apply the settings for the VDB path to the RedshiftVolumeShape"""
from maya import cmds
if not os.path.exists(path):
raise RuntimeError("Path does not exist: %s" % path)
is_sequence = bool(representation["context"].get("frame"))
cmds.setAttr(grid_node + ".useFrameExtension", is_sequence)
# Set file path
cmds.setAttr(grid_node + ".fileName", path, type="string")


@@ -40,7 +40,7 @@ FILE_NODES = {
"aiImage": "filename",
"RedshiftNormalMap": "text0",
"RedshiftNormalMap": "tex0",
"PxrBump": "filename",
"PxrNormalMap": "filename",


@@ -71,6 +71,8 @@ class CollectReview(pyblish.api.InstancePlugin):
data['handles'] = instance.data.get('handles', None)
data['step'] = instance.data['step']
data['fps'] = instance.data['fps']
data['review_width'] = instance.data['review_width']
data['review_height'] = instance.data['review_height']
data["isolate"] = instance.data["isolate"]
cmds.setAttr(str(instance) + '.active', 1)
self.log.debug('data {}'.format(instance.context[i].data))

View file

@@ -1,100 +0,0 @@
import os
from maya import cmds
import openpype.api
from openpype.hosts.maya.api.lib import (
extract_alembic,
suspended_refresh,
maintained_selection
)
class ExtractAnimation(openpype.api.Extractor):
"""Produce an alembic of just point positions and normals.
Positions and normals, uvs, creases are preserved, but nothing more,
for plain and predictable point caches.
Plugin can run locally or remotely (on a farm - if instance is marked with
"farm" it will be skipped in local processing, but processed on farm)
"""
label = "Extract Animation"
hosts = ["maya"]
families = ["animation"]
targets = ["local", "remote"]
def process(self, instance):
if instance.data.get("farm"):
self.log.debug("Should be processed on farm, skipping.")
return
# Collect the out set nodes
out_sets = [node for node in instance if node.endswith("out_SET")]
if len(out_sets) != 1:
raise RuntimeError("Couldn't find exactly one out_SET: "
"{0}".format(out_sets))
out_set = out_sets[0]
roots = cmds.sets(out_set, query=True)
# Include all descendants
nodes = roots + cmds.listRelatives(roots,
allDescendents=True,
fullPath=True) or []
# Collect the start and end including handles
start = instance.data["frameStartHandle"]
end = instance.data["frameEndHandle"]
self.log.info("Extracting animation..")
dirname = self.staging_dir(instance)
parent_dir = self.staging_dir(instance)
filename = "{name}.abc".format(**instance.data)
path = os.path.join(parent_dir, filename)
options = {
"step": instance.data.get("step", 1.0) or 1.0,
"attr": ["cbId"],
"writeVisibility": True,
"writeCreases": True,
"uvWrite": True,
"selection": True,
"worldSpace": instance.data.get("worldSpace", True),
"writeColorSets": instance.data.get("writeColorSets", False),
"writeFaceSets": instance.data.get("writeFaceSets", False)
}
if not instance.data.get("includeParentHierarchy", True):
# Set the root nodes if we don't want to include parents
# The roots are to be considered the ones that are the actual
# direct members of the set
options["root"] = roots
if int(cmds.about(version=True)) >= 2017:
# Since Maya 2017 alembic supports multiple uv sets - write them.
options["writeUVSets"] = True
with suspended_refresh():
with maintained_selection():
cmds.select(nodes, noExpand=True)
extract_alembic(file=path,
startFrame=float(start),
endFrame=float(end),
**options)
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
'name': 'abc',
'ext': 'abc',
'files': filename,
"stagingDir": dirname,
}
instance.data["representations"].append(representation)
instance.context.data["cleanupFullPaths"].append(path)
self.log.info("Extracted {} to {}".format(instance, dirname))


@@ -30,7 +30,7 @@ class ExtractCameraAlembic(openpype.api.Extractor):
# get cameras
members = instance.data['setMembers']
cameras = cmds.ls(members, leaf=True, shapes=True, long=True,
cameras = cmds.ls(members, leaf=True, long=True,
dag=True, type="camera")
# validate required settings
@@ -61,10 +61,30 @@ class ExtractCameraAlembic(openpype.api.Extractor):
if bake_to_worldspace:
job_str += ' -worldSpace'
for member in member_shapes:
self.log.info(f"processing {member}")
# if baked, drop the camera hierarchy to maintain
# clean output and backwards compatibility
camera_root = cmds.listRelatives(
camera, parent=True, fullPath=True)[0]
job_str += ' -root {0}'.format(camera_root)
for member in members:
descendants = cmds.listRelatives(member,
allDescendents=True,
fullPath=True) or []
shapes = cmds.ls(descendants, shapes=True,
noIntermediate=True, long=True)
cameras = cmds.ls(shapes, type="camera", long=True)
if cameras:
if not set(shapes) - set(cameras):
continue
self.log.warning((
"Camera hierarchy contains additional geometry. "
"Extraction will fail.")
)
transform = cmds.listRelatives(
member, parent=True, fullPath=True)[0]
member, parent=True, fullPath=True)
transform = transform[0] if transform else member
job_str += ' -root {0}'.format(transform)
job_str += ' -file "{0}"'.format(path)


@@ -172,18 +172,19 @@ class ExtractCameraMayaScene(openpype.api.Extractor):
dag=True,
shapes=True,
long=True)
attrs = {"backgroundColorR": 0.0,
"backgroundColorG": 0.0,
"backgroundColorB": 0.0,
"overscan": 1.0}
# Fix PLN-178: Don't allow background color to be non-black
for cam in cmds.ls(
for cam, (attr, value) in itertools.product(cmds.ls(
baked_camera_shapes, type="camera", dag=True,
shapes=True, long=True):
attrs = {"backgroundColorR": 0.0,
"backgroundColorG": 0.0,
"backgroundColorB": 0.0,
"overscan": 1.0}
for attr, value in attrs.items():
plug = "{0}.{1}".format(cam, attr)
unlock(plug)
cmds.setAttr(plug, value)
long=True), attrs.items()):
plug = "{0}.{1}".format(cam, attr)
unlock(plug)
cmds.setAttr(plug, value)
self.log.info("Performing extraction..")
cmds.select(cmds.ls(members, dag=True,


@@ -1,6 +1,7 @@ import os
import os
import glob
import contextlib
import clique
import capture
@@ -50,8 +51,32 @@ class ExtractPlayblast(openpype.api.Extractor):
['override_viewport_options']
)
preset = lib.load_capture_preset(data=self.capture_preset)
# Grab capture presets from the project settings
capture_presets = self.capture_preset
# Set resolution variables from capture presets
width_preset = capture_presets["Resolution"]["width"]
height_preset = capture_presets["Resolution"]["height"]
# Set resolution variables from asset values
asset_data = instance.data["assetEntity"]["data"]
asset_width = asset_data.get("resolutionWidth")
asset_height = asset_data.get("resolutionHeight")
review_instance_width = instance.data.get("review_width")
review_instance_height = instance.data.get("review_height")
preset['camera'] = camera
# Prefer the review resolution set on the instance; if that is
# not set (zero), fall back to the project preset resolution,
# and finally to the asset resolution.
if review_instance_width and review_instance_height:
preset['width'] = review_instance_width
preset['height'] = review_instance_height
elif width_preset and height_preset:
preset['width'] = width_preset
preset['height'] = height_preset
elif asset_width and asset_height:
preset['width'] = asset_width
preset['height'] = asset_height
preset['start_frame'] = start
preset['end_frame'] = end
camera_option = preset.get("camera_option", {})
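The branches above form a priority cascade: instance review size first, then the project preset, then the asset resolution. The same logic as a compact helper (hypothetical name, not part of this commit):
def pick_resolution(*candidates):
    for width, height in candidates:
        if width and height:
            return width, height
    return None  # leave capture to its own defaults
# pick_resolution((review_instance_width, review_instance_height),
#                 (width_preset, height_preset),
#                 (asset_width, asset_height))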
@@ -90,7 +115,7 @@ class ExtractPlayblast(openpype.api.Extractor):
else:
preset["viewport_options"] = {"imagePlane": image_plane}
with maintained_time():
with lib.maintained_time():
filename = preset.get("filename", "%TEMP%")
# Force viewer to False in call to capture because we have our own
@@ -153,12 +178,3 @@ class ExtractPlayblast(openpype.api.Extractor):
'camera_name': camera_node_name
}
instance.data["representations"].append(representation)
@contextlib.contextmanager
def maintained_time():
ct = cmds.currentTime(query=True)
try:
yield
finally:
cmds.currentTime(ct, edit=True)


@@ -6,7 +6,8 @@ import openpype.api
from openpype.hosts.maya.api.lib import (
extract_alembic,
suspended_refresh,
maintained_selection
maintained_selection,
iter_visible_nodes_in_range
)
@@ -32,7 +33,7 @@ class ExtractAlembic(openpype.api.Extractor):
self.log.debug("Should be processed on farm, skipping.")
return
nodes = instance[:]
nodes, roots = self.get_members_and_roots(instance)
# Collect the start and end including handles
start = float(instance.data.get("frameStartHandle", 1))
@@ -45,10 +46,6 @@ class ExtractAlembic(openpype.api.Extractor):
attr_prefixes = instance.data.get("attrPrefix", "").split(";")
attr_prefixes = [value for value in attr_prefixes if value.strip()]
# Get extra export arguments
writeColorSets = instance.data.get("writeColorSets", False)
writeFaceSets = instance.data.get("writeFaceSets", False)
self.log.info("Extracting pointcache..")
dirname = self.staging_dir(instance)
@ -62,8 +59,8 @@ class ExtractAlembic(openpype.api.Extractor):
"attrPrefix": attr_prefixes,
"writeVisibility": True,
"writeCreases": True,
"writeColorSets": writeColorSets,
"writeFaceSets": writeFaceSets,
"writeColorSets": instance.data.get("writeColorSets", False),
"writeFaceSets": instance.data.get("writeFaceSets", False),
"uvWrite": True,
"selection": True,
"worldSpace": instance.data.get("worldSpace", True)
@ -73,12 +70,22 @@ class ExtractAlembic(openpype.api.Extractor):
# Set the root nodes if we don't want to include parents
# The roots are to be considered the ones that are the actual
# direct members of the set
options["root"] = instance.data.get("setMembers")
options["root"] = roots
if int(cmds.about(version=True)) >= 2017:
# Since Maya 2017 alembic supports multiple uv sets - write them.
options["writeUVSets"] = True
if instance.data.get("visibleOnly", False):
# If we only want to include nodes that are visible in the frame
# range then we need to do our own check. Alembic's `visibleOnly`
# flag does not filter out those that are only hidden on some
# frames as it counts "animated" or "connected" visibilities as
# if they were always visible.
nodes = list(iter_visible_nodes_in_range(nodes,
start=start,
end=end))
with suspended_refresh():
with maintained_selection():
cmds.select(nodes, noExpand=True)
@ -101,3 +108,28 @@ class ExtractAlembic(openpype.api.Extractor):
instance.context.data["cleanupFullPaths"].append(path)
self.log.info("Extracted {} to {}".format(instance, dirname))
def get_members_and_roots(self, instance):
return instance[:], instance.data.get("setMembers")
class ExtractAnimation(ExtractAlembic):
label = "Extract Animation"
families = ["animation"]
def get_members_and_roots(self, instance):
# Collect the out set nodes
out_sets = [node for node in instance if node.endswith("out_SET")]
if len(out_sets) != 1:
raise RuntimeError("Couldn't find exactly one out_SET: "
"{0}".format(out_sets))
out_set = out_sets[0]
roots = cmds.sets(out_set, query=True)
# Include all descendants
nodes = roots + cmds.listRelatives(roots,
allDescendents=True,
fullPath=True) or []
return nodes, roots
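The get_members_and_roots split is a template-method refactor: ExtractAlembic keeps its original member and root collection, while ExtractAnimation overrides only that hook to source nodes from the out_SET. A minimal sketch of the shape of that refactor, with plain dicts standing in for the instance and the cmds queries:
class BaseExtractor(object):
    def process(self, instance):
        nodes, roots = self.get_members_and_roots(instance)
        print("exporting", nodes, "with roots", roots)

    def get_members_and_roots(self, instance):
        return instance["members"], instance["setMembers"]


class AnimationExtractor(BaseExtractor):
    def get_members_and_roots(self, instance):
        # would be cmds.sets(out_set, query=True) plus descendants in Maya
        roots = instance["outSetMembers"]
        return roots + instance["descendants"], roots


AnimationExtractor().process(
    {"outSetMembers": ["|char"], "descendants": ["|char|body"]})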

View file

@ -1,5 +1,4 @@
import os
import contextlib
import glob
import capture
@ -28,7 +27,6 @@ class ExtractThumbnail(openpype.api.Extractor):
camera = instance.data['review_camera']
capture_preset = ""
capture_preset = (
instance.context.data["project_settings"]['maya']['publish']['ExtractPlayblast']['capture_preset']
)
@ -60,7 +58,29 @@ class ExtractThumbnail(openpype.api.Extractor):
"overscan": 1.0,
"depthOfField": cmds.getAttr("{0}.depthOfField".format(camera)),
}
capture_presets = capture_preset
# Set resolution variables from capture presets
width_preset = capture_presets["Resolution"]["width"]
height_preset = capture_presets["Resolution"]["height"]
# Set resolution variables from asset values
asset_data = instance.data["assetEntity"]["data"]
asset_width = asset_data.get("resolutionWidth")
asset_height = asset_data.get("resolutionHeight")
review_instance_width = instance.data.get("review_width")
review_instance_height = instance.data.get("review_height")
# Prefer the resolution from the review instance when set, then the
# capture preset resolution, and finally fall back to the asset
# resolution
if review_instance_width and review_instance_height:
preset['width'] = review_instance_width
preset['height'] = review_instance_height
elif width_preset and height_preset:
preset['width'] = width_preset
preset['height'] = height_preset
elif asset_width and asset_height:
preset['width'] = asset_width
preset['height'] = asset_height
stagingDir = self.staging_dir(instance)
filename = "{0}".format(instance.name)
path = os.path.join(stagingDir, filename)
@ -81,9 +101,7 @@ class ExtractThumbnail(openpype.api.Extractor):
if preset.pop("isolate_view", False) and instance.data.get("isolate"):
preset["isolate"] = instance.data["setMembers"]
with maintained_time():
filename = preset.get("filename", "%TEMP%")
with lib.maintained_time():
# Force viewer to False in call to capture because we have our own
# viewer opening call to allow a signal to trigger between
# playblast and viewer
@ -152,12 +170,3 @@ class ExtractThumbnail(openpype.api.Extractor):
filepath = max(files, key=os.path.getmtime)
return filepath
@contextlib.contextmanager
def maintained_time():
ct = cmds.currentTime(query=True)
try:
yield
finally:
cmds.currentTime(ct, edit=True)

View file

@ -51,18 +51,7 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
raise RuntimeError("No cameras found in empty instance.")
if not cls.validate_shapes:
cls.log.info("Not validating shapes in the content.")
for member in members:
parents = cmds.ls(member, long=True)[0].split("|")[1:-1]
parents_long_named = [
"|".join(parents[:i]) for i in range(1, 1 + len(parents))
]
if cameras[0] in parents_long_named:
cls.log.error(
"{} is parented under camera {}".format(
member, cameras[0]))
invalid.extend(member)
cls.log.info("not validating shapes in the content")
return invalid
# non-camera shapes

View file

@ -27,6 +27,7 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
"yeticache"]
optional = True
actions = [openpype.api.RepairAction]
exclude_families = []
def process(self, instance):
context = instance.context
@ -56,7 +57,9 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
# compare with data on instance
errors = []
if [ef for ef in self.exclude_families
if instance.data["family"] in ef]:
return
if(inst_start != frame_start_handle):
errors.append("Instance start frame [ {} ] doesn't "
"match the one set on instance [ {} ]: "

View file

@ -6,7 +6,7 @@ from openpype.pipeline import PublishXmlValidationError
class ValidateReviewSubsetUniqueness(pyblish.api.ContextPlugin):
"""Validates that nodes has common root."""
"""Validates that review subset has unique name."""
order = openpype.api.ValidateContentsOrder
hosts = ["maya"]
@ -17,7 +17,7 @@ class ValidateReviewSubsetUniqueness(pyblish.api.ContextPlugin):
subset_names = []
for instance in context:
self.log.info("instance:: {}".format(instance.data))
self.log.debug("Instance: {}".format(instance.data))
if instance.data.get('publish'):
subset_names.append(instance.data.get('subset'))

View file

@ -4,8 +4,7 @@ import openpype.api
class ValidateSetdressRoot(pyblish.api.InstancePlugin):
"""
"""
"""Validate if set dress top root node is published."""
order = openpype.api.ValidateContentsOrder
label = "SetDress Root"

View file

@ -0,0 +1,51 @@
import pyblish.api
import openpype.api
from openpype.hosts.maya.api.lib import iter_visible_nodes_in_range
import openpype.hosts.maya.api.action
class ValidateAlembicVisibleOnly(pyblish.api.InstancePlugin):
"""Validates at least a single node is visible in frame range.
This validation only validates if the `visibleOnly` flag is enabled
on the instance - otherwise the validation is skipped.
"""
order = openpype.api.ValidateContentsOrder + 0.05
label = "Alembic Visible Only"
hosts = ["maya"]
families = ["pointcache", "animation"]
actions = [openpype.hosts.maya.api.action.SelectInvalidAction]
def process(self, instance):
if not instance.data.get("visibleOnly", False):
self.log.debug("Visible only is disabled. Validation skipped..")
return
invalid = self.get_invalid(instance)
if invalid:
start, end = self.get_frame_range(instance)
raise RuntimeError("No visible nodes found in "
"frame range {}-{}.".format(start, end))
@classmethod
def get_invalid(cls, instance):
if instance.data["family"] == "animation":
# Special behavior to use the nodes in out_SET
nodes = instance.data["out_hierarchy"]
else:
nodes = instance[:]
start, end = cls.get_frame_range(instance)
if not any(iter_visible_nodes_in_range(nodes, start, end)):
# Return the nodes we have considered so the user can identify
# them with the select invalid action
return nodes
@staticmethod
def get_frame_range(instance):
data = instance.data
return data["frameStartHandle"], data["frameEndHandle"]

View file

@ -1,10 +1,11 @@
import os
from openpype.api import get_project_settings
from openpype.pipeline import install_host
from openpype.hosts.maya import api
from openpype.hosts.maya.api import MayaHost
from maya import cmds
install_host(api)
host = MayaHost()
install_host(host)
print("starting OpenPype usersetup")

View file

@ -8,9 +8,6 @@ from .workio import (
)
from .command import (
reset_frame_range,
get_handles,
reset_resolution,
viewer_update_and_undo_stop
)
@ -46,9 +43,6 @@ __all__ = (
"current_file",
"work_root",
"reset_frame_range",
"get_handles",
"reset_resolution",
"viewer_update_and_undo_stop",
"OpenPypeCreator",

View file

@ -1,124 +1,10 @@
import logging
import contextlib
import nuke
from bson.objectid import ObjectId
from openpype.pipeline import legacy_io
log = logging.getLogger(__name__)
def reset_frame_range():
""" Set frame range to current asset
Also it will set a Viewer range with
displayed handles
"""
fps = float(legacy_io.Session.get("AVALON_FPS", 25))
nuke.root()["fps"].setValue(fps)
name = legacy_io.Session["AVALON_ASSET"]
asset = legacy_io.find_one({"name": name, "type": "asset"})
asset_data = asset["data"]
handles = get_handles(asset)
frame_start = int(asset_data.get(
"frameStart",
asset_data.get("edit_in")))
frame_end = int(asset_data.get(
"frameEnd",
asset_data.get("edit_out")))
if not all([frame_start, frame_end]):
missing = ", ".join(["frame_start", "frame_end"])
msg = "'{}' are not set for asset '{}'!".format(missing, name)
log.warning(msg)
nuke.message(msg)
return
frame_start -= handles
frame_end += handles
nuke.root()["first_frame"].setValue(frame_start)
nuke.root()["last_frame"].setValue(frame_end)
# setting active viewers
vv = nuke.activeViewer().node()
vv["frame_range_lock"].setValue(True)
vv["frame_range"].setValue("{0}-{1}".format(
int(asset_data["frameStart"]),
int(asset_data["frameEnd"]))
)
def get_handles(asset):
""" Gets handles data
Arguments:
asset (dict): avalon asset entity
Returns:
handles (int)
"""
data = asset["data"]
if "handles" in data and data["handles"] is not None:
return int(data["handles"])
parent_asset = None
if "visualParent" in data:
vp = data["visualParent"]
if vp is not None:
parent_asset = legacy_io.find_one({"_id": ObjectId(vp)})
if parent_asset is None:
parent_asset = legacy_io.find_one({"_id": ObjectId(asset["parent"])})
if parent_asset is not None:
return get_handles(parent_asset)
else:
return 0
def reset_resolution():
"""Set resolution to project resolution."""
project = legacy_io.find_one({"type": "project"})
p_data = project["data"]
width = p_data.get("resolution_width",
p_data.get("resolutionWidth"))
height = p_data.get("resolution_height",
p_data.get("resolutionHeight"))
if not all([width, height]):
missing = ", ".join(["width", "height"])
msg = "No resolution information `{0}` found for '{1}'.".format(
missing,
project["name"])
log.warning(msg)
nuke.message(msg)
return
current_width = nuke.root()["format"].value().width()
current_height = nuke.root()["format"].value().height()
if width != current_width or height != current_height:
fmt = None
for f in nuke.formats():
if f.width() == width and f.height() == height:
fmt = f.name()
if not fmt:
nuke.addFormat(
"{0} {1} {2}".format(int(width), int(height), project["name"])
)
fmt = project["name"]
nuke.root()["format"].setValue(fmt)
@contextlib.contextmanager
def viewer_update_and_undo_stop():
"""Lock viewer from updating and stop recording undo steps"""

View file

@ -8,27 +8,37 @@ import contextlib
from collections import OrderedDict
import clique
from bson.objectid import ObjectId
import nuke
from Qt import QtCore, QtWidgets
from openpype.client import (
get_project,
get_asset_by_name,
get_versions,
get_last_versions,
get_representations,
)
from openpype.api import (
Logger,
Anatomy,
BuildWorkfile,
get_version_from_path,
get_anatomy_settings,
get_workdir_data,
get_asset,
get_current_project_settings,
)
from openpype.tools.utils import host_tools
from openpype.lib import env_value_to_bool
from openpype.lib.path_tools import HostDirmap
from openpype.settings import get_project_settings
from openpype.settings import (
get_project_settings,
get_anatomy_settings,
)
from openpype.modules import ModulesManager
from openpype.pipeline import (
discover_legacy_creator_plugins,
legacy_io,
Anatomy,
)
from . import gizmo_menu
@ -55,7 +65,10 @@ class Context:
main_window = None
context_label = None
project_name = os.getenv("AVALON_PROJECT")
# Workfile related code
workfiles_launched = False
workfiles_tool_timer = None
# Seems unused
_project_doc = None
@ -749,47 +762,84 @@ def check_inventory_versions():
from .pipeline import parse_container
# get all Loader nodes by avalon attribute metadata
for each in nuke.allNodes():
container = parse_container(each)
node_with_repre_id = []
repre_ids = set()
# Find all containers and collect their node and representation ids
for node in nuke.allNodes():
container = parse_container(node)
if container:
node = nuke.toNode(container["objectName"])
avalon_knob_data = read_avalon_data(node)
repre_id = avalon_knob_data["representation"]
# get representation from io
representation = legacy_io.find_one({
"type": "representation",
"_id": ObjectId(avalon_knob_data["representation"])
})
repre_ids.add(repre_id)
node_with_repre_id.append((node, repre_id))
# Failsafe for not finding the representation.
if not representation:
log.warning(
"Could not find the representation on "
"node \"{}\"".format(node.name())
)
continue
# Skip if nothing was found
if not repre_ids:
return
# Get start frame from version data
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
project_name = legacy_io.active_project()
# Find representations based on found containers
repre_docs = get_representations(
project_name,
representation_ids=repre_ids,
fields=["_id", "parent"]
)
# Store representations by id and collect version ids
repre_docs_by_id = {}
version_ids = set()
for repre_doc in repre_docs:
# Use stringed representation id to match value in containers
repre_id = str(repre_doc["_id"])
repre_docs_by_id[repre_id] = repre_doc
version_ids.add(repre_doc["parent"])
# get all versions in list
versions = legacy_io.find({
"type": "version",
"parent": version["parent"]
}).distinct("name")
version_docs = get_versions(
project_name, version_ids, fields=["_id", "name", "parent"]
)
# Store versions by id and collect subset ids
version_docs_by_id = {}
subset_ids = set()
for version_doc in version_docs:
version_docs_by_id[version_doc["_id"]] = version_doc
subset_ids.add(version_doc["parent"])
max_version = max(versions)
# Query last versions based on subset ids
last_versions_by_subset_id = get_last_versions(
project_name, subset_ids=subset_ids, fields=["_id", "parent"]
)
# check the available version and do match
# change color of node if not max version
if version.get("name") not in [max_version]:
node["tile_color"].setValue(int("0xd84f20ff", 16))
else:
node["tile_color"].setValue(int("0x4ecd25ff", 16))
# Loop through collected container nodes and their representation ids
for item in node_with_repre_id:
# Python in some Nuke versions can't unpack the tuple directly in a for loop
node, repre_id = item
repre_doc = repre_docs_by_id.get(repre_id)
# Failsafe for not finding the representation.
if not repre_doc:
log.warning((
"Could not find the representation on node \"{}\""
).format(node.name()))
continue
version_id = repre_doc["parent"]
version_doc = version_docs_by_id.get(version_id)
if not version_doc:
log.warning((
"Could not find the version on node \"{}\""
).format(node.name()))
continue
# Get last version based on subset id
subset_id = version_doc["parent"]
last_version = last_versions_by_subset_id[subset_id]
# Check if last version is same as current version
if last_version["_id"] == version_doc["_id"]:
color_value = "0x4ecd25ff"
else:
color_value = "0xd84f20ff"
node["tile_color"].setValue(int(color_value, 16))
def writes_version_sync():
@ -914,11 +964,9 @@ def format_anatomy(data):
file = script_name()
data["version"] = get_version_from_path(file)
project_doc = legacy_io.find_one({"type": "project"})
asset_doc = legacy_io.find_one({
"type": "asset",
"name": data["avalon"]["asset"]
})
project_name = anatomy.project_name
project_doc = get_project(project_name)
asset_doc = get_asset_by_name(project_name, data["avalon"]["asset"])
task_name = os.environ["AVALON_TASK"]
host_name = os.environ["AVALON_APP"]
context_data = get_workdir_data(
@ -1707,12 +1755,13 @@ class WorkfileSettings(object):
"""
def __init__(self,
root_node=None,
nodes=None,
**kwargs):
Context._project_doc = kwargs.get(
"project") or legacy_io.find_one({"type": "project"})
def __init__(self, root_node=None, nodes=None, **kwargs):
project_doc = kwargs.get("project")
if project_doc is None:
project_name = legacy_io.active_project()
project_doc = get_project(project_name)
Context._project_doc = project_doc
self._asset = (
kwargs.get("asset_name")
or legacy_io.Session["AVALON_ASSET"]
@ -2062,9 +2111,10 @@ class WorkfileSettings(object):
def reset_resolution(self):
"""Set resolution to project resolution."""
log.info("Resetting resolution")
project = legacy_io.find_one({"type": "project"})
asset = legacy_io.Session["AVALON_ASSET"]
asset = legacy_io.find_one({"name": asset, "type": "asset"})
project_name = legacy_io.active_project()
project = get_project(project_name)
asset_name = legacy_io.Session["AVALON_ASSET"]
asset = get_asset_by_name(project_name, asset_name)
asset_data = asset.get('data', {})
data = {
@ -2166,29 +2216,6 @@ class WorkfileSettings(object):
set_context_favorites(favorite_items)
def get_hierarchical_attr(entity, attr, default=None):
attr_parts = attr.split('.')
value = entity
for part in attr_parts:
value = value.get(part)
if not value:
break
if value or entity["type"].lower() == "project":
return value
parent_id = entity["parent"]
if (
entity["type"].lower() == "asset"
and entity.get("data", {}).get("visualParent")
):
parent_id = entity["data"]["visualParent"]
parent = legacy_io.find_one({"_id": parent_id})
return get_hierarchical_attr(parent, attr)
def get_write_node_template_attr(node):
''' Gets all defined data from presets
@ -2362,12 +2389,19 @@ def select_nodes(nodes):
def launch_workfiles_app():
'''Function letting start workfiles after start of host
'''
from openpype.lib import (
env_value_to_bool
)
from .pipeline import get_main_window
"""Show workfiles tool on nuke launch.
Trigger to show workfiles tool on application launch. Can be executed only
once; all other calls are ignored.
Showing of the workfiles tool is deferred until after application
initialization using QTimer.
"""
if Context.workfiles_launched:
return
Context.workfiles_launched = True
# get all important settings
open_at_start = env_value_to_bool(
@ -2378,10 +2412,40 @@ def launch_workfiles_app():
if not open_at_start:
return
if not Context.workfiles_launched:
Context.workfiles_launched = True
main_window = get_main_window()
host_tools.show_workfiles(parent=main_window)
# Show workfiles tool using timer
# - this will probably be triggered during initialization, in which case
# the application is not able to show UIs, so the show must be
# deferred using a timer
# - the timer should be processed when initialization ends and the
# application starts to process events
timer = QtCore.QTimer()
timer.timeout.connect(_launch_workfile_app)
timer.setInterval(100)
Context.workfiles_tool_timer = timer
timer.start()
def _launch_workfile_app():
# Safeguard to not show window when application is still starting up
# or is already closing down.
closing_down = QtWidgets.QApplication.closingDown()
starting_up = QtWidgets.QApplication.startingUp()
# Stop the timer if the application finished starting up or is closing down
if closing_down or not starting_up:
Context.workfiles_tool_timer.stop()
Context.workfiles_tool_timer = None
# Skip if application is starting up or closing down
if starting_up or closing_down:
return
# Make sure on top is enabled on first show so the window is not hidden
# under main nuke window
# - this happened on CentOS 7 because the focus of Nuke changes to the
# main window after showing (due to initialization), which moves the
# workfiles tool under it
host_tools.show_workfiles(parent=None, on_top=True)
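The deferral trick above needs a running Qt event loop before any UI can show, so the timer polls until startup finishes. A stripped-down sketch of the same pattern (requires a Qt binding; the Qt wrapper import matches the one this module already uses):
from Qt import QtCore, QtWidgets


def _try_show():
    if QtWidgets.QApplication.startingUp():
        return  # still initializing, let the timer fire again
    timer.stop()
    print("event loop is processing events, safe to show the tool")


app = QtWidgets.QApplication.instance() or QtWidgets.QApplication([])
timer = QtCore.QTimer()
timer.setInterval(100)
timer.timeout.connect(_try_show)
timer.start()
# app.exec_() would have to run for the timeout to ever fire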
def process_workfile_builder():

View file

@ -120,8 +120,9 @@ def install():
nuke.addOnCreate(workfile_settings.set_context_settings, nodeClass="Root")
nuke.addOnCreate(workfile_settings.set_favorites, nodeClass="Root")
nuke.addOnCreate(process_workfile_builder, nodeClass="Root")
nuke.addOnCreate(launch_workfiles_app, nodeClass="Root")
_install_menu()
launch_workfiles_app()
def uninstall():
@ -141,6 +142,14 @@ def uninstall():
_uninstall_menu()
def _show_workfiles():
# Make sure parent is not set
# - this makes the Workfiles tool a separate window, which
# avoids issues with reopening
# - it is possible to explicitly change on top flag of the tool
host_tools.show_workfiles(parent=None, on_top=False)
def _install_menu():
# uninstall original avalon menu
main_window = get_main_window()
@ -157,7 +166,7 @@ def _install_menu():
menu.addSeparator()
menu.addCommand(
"Work Files...",
lambda: host_tools.show_workfiles(parent=main_window)
_show_workfiles
)
menu.addSeparator()

View file

@ -1,6 +1,10 @@
import nuke
import nukescripts
from openpype.client import (
get_version_by_id,
get_last_version_by_subset_id,
)
from openpype.pipeline import (
legacy_io,
load,
@ -188,18 +192,17 @@ class LoadBackdropNodes(load.LoaderPlugin):
# get main variables
# Get version from io
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
project_name = legacy_io.active_project()
version_doc = get_version_by_id(project_name, representation["parent"])
# get corresponding node
GN = nuke.toNode(container['objectName'])
file = get_representation_path(representation).replace("\\", "/")
context = representation["context"]
name = container['name']
version_data = version.get("data", {})
vname = version.get("name", None)
version_data = version_doc.get("data", {})
vname = version_doc.get("name", None)
first = version_data.get("frameStart", None)
last = version_data.get("frameEnd", None)
namespace = container['namespace']
@ -237,20 +240,18 @@ class LoadBackdropNodes(load.LoaderPlugin):
GN["name"].setValue(object_name)
# get all versions in list
versions = legacy_io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
max_version = max(versions)
last_version_doc = get_last_version_by_subset_id(
project_name, version_doc["parent"], fields=["_id"]
)
# change color of node
if version.get("name") not in [max_version]:
GN["tile_color"].setValue(int("0xd88467ff", 16))
if version_doc["_id"] == last_version_doc["_id"]:
color_value = self.node_color
else:
GN["tile_color"].setValue(int(self.node_color, 16))
color_value = "0xd88467ff"
GN["tile_color"].setValue(int(color_value, 16))
self.log.info("updated to version: {}".format(version.get("name")))
self.log.info("updated to version: {}".format(version_doc.get("name")))
return update_container(GN, data_imprint)
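This color-by-latest-version block recurs in nearly every loader below; the id comparison against get_last_version_by_subset_id replaces the old distinct-names-and-max query. A sketch of the shared pattern, with the hex values taken from the diff:
OUTDATED_COLOR = "0xd88467ff"

def tile_color(version_doc, last_version_doc, node_color="0x3469ffff"):
    """Return the int tile color: node_color when up to date,
    orange-red when an update is available."""
    if version_doc["_id"] == last_version_doc["_id"]:
        color_value = node_color
    else:
        color_value = OUTDATED_COLOR
    return int(color_value, 16)

print(hex(tile_color({"_id": "v1"}, {"_id": "v1"})))  # up to date
print(hex(tile_color({"_id": "v1"}, {"_id": "v2"})))  # outdated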

View file

@ -1,5 +1,9 @@
import nuke
from openpype.client import (
get_version_by_id,
get_last_version_by_subset_id
)
from openpype.pipeline import (
legacy_io,
load,
@ -102,17 +106,16 @@ class AlembicCameraLoader(load.LoaderPlugin):
None
"""
# Get version from io
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
project_name = legacy_io.active_project()
version_doc = get_version_by_id(project_name, representation["parent"])
object_name = container['objectName']
# get corresponding node
camera_node = nuke.toNode(object_name)
# get main variables
version_data = version.get("data", {})
vname = version.get("name", None)
version_data = version_doc.get("data", {})
vname = version_doc.get("name", None)
first = version_data.get("frameStart", None)
last = version_data.get("frameEnd", None)
fps = version_data.get("fps") or nuke.root()["fps"].getValue()
@ -165,28 +168,27 @@ class AlembicCameraLoader(load.LoaderPlugin):
d.setInput(index, camera_node)
# color node by correct color by actual version
self.node_version_color(version, camera_node)
self.node_version_color(version_doc, camera_node)
self.log.info("updated to version: {}".format(version.get("name")))
self.log.info("updated to version: {}".format(version_doc.get("name")))
return update_container(camera_node, data_imprint)
def node_version_color(self, version, node):
def node_version_color(self, version_doc, node):
""" Coloring a node by correct color by actual version
"""
# get all versions in list
versions = legacy_io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
max_version = max(versions)
project_name = legacy_io.active_project()
last_version_doc = get_last_version_by_subset_id(
project_name, version_doc["parent"], fields=["_id"]
)
# change color of node
if version.get("name") not in [max_version]:
node["tile_color"].setValue(int("0xd88467ff", 16))
if version_doc["_id"] == last_version_doc["_id"]:
color_value = self.node_color
else:
node["tile_color"].setValue(int(self.node_color, 16))
color_value = "0xd88467ff"
node["tile_color"].setValue(int(color_value, 16))
def switch(self, container, representation):
self.update(container, representation)

View file

@ -1,6 +1,10 @@
import nuke
import qargparse
from openpype.client import (
get_version_by_id,
get_last_version_by_subset_id,
)
from openpype.pipeline import (
legacy_io,
get_representation_path,
@ -50,20 +54,28 @@ class LoadClip(plugin.NukeLoader):
script_start = int(nuke.root()["first_frame"].value())
# option gui
defaults = {
"start_at_workfile": True
options_defaults = {
"start_at_workfile": True,
"add_retime": True
}
options = [
qargparse.Boolean(
"start_at_workfile",
help="Load at workfile start frame",
default=True
)
]
node_name_template = "{class_name}_{ext}"
@classmethod
def get_options(cls, *args):
return [
qargparse.Boolean(
"start_at_workfile",
help="Load at workfile start frame",
default=cls.options_defaults["start_at_workfile"]
),
qargparse.Boolean(
"add_retime",
help="Load with retime",
default=cls.options_defaults["add_retime"]
)
]
@classmethod
def get_representations(cls):
return (
@ -82,7 +94,10 @@ class LoadClip(plugin.NukeLoader):
file = self.fname.replace("\\", "/")
start_at_workfile = options.get(
"start_at_workfile", self.defaults["start_at_workfile"])
"start_at_workfile", self.options_defaults["start_at_workfile"])
add_retime = options.get(
"add_retime", self.options_defaults["add_retime"])
version = context['version']
version_data = version.get("data", {})
@ -147,7 +162,7 @@ class LoadClip(plugin.NukeLoader):
data_imprint = {}
for k in add_keys:
if k == 'version':
data_imprint.update({k: context["version"]['name']})
data_imprint[k] = context["version"]['name']
elif k == 'colorspace':
colorspace = repre["data"].get(k)
colorspace = colorspace or version_data.get(k)
@ -155,10 +170,13 @@ class LoadClip(plugin.NukeLoader):
if used_colorspace:
data_imprint["used_colorspace"] = used_colorspace
else:
data_imprint.update(
{k: context["version"]['data'].get(k, str(None))})
data_imprint[k] = context["version"]['data'].get(
k, str(None))
data_imprint.update({"objectName": read_name})
data_imprint["objectName"] = read_name
if add_retime and version_data.get("retime", None):
data_imprint["addRetime"] = True
read_node["tile_color"].setValue(int("0x4ecd25ff", 16))
@ -170,7 +188,7 @@ class LoadClip(plugin.NukeLoader):
loader=self.__class__.__name__,
data=data_imprint)
if version_data.get("retime", None):
if add_retime and version_data.get("retime", None):
self._make_retimes(read_node, version_data)
self.set_as_member(read_node)
@ -194,13 +212,17 @@ class LoadClip(plugin.NukeLoader):
read_node = nuke.toNode(container['objectName'])
file = get_representation_path(representation).replace("\\", "/")
start_at_workfile = bool("start at" in read_node['frame_mode'].value())
start_at_workfile = "start at" in read_node['frame_mode'].value()
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
version_data = version.get("data", {})
add_retime = [
key for key in read_node.knobs().keys()
if "addRetime" in key
]
project_name = legacy_io.active_project()
version_doc = get_version_by_id(project_name, representation["parent"])
version_data = version_doc.get("data", {})
repre_id = representation["_id"]
repre_cont = representation["context"]
@ -251,7 +273,7 @@ class LoadClip(plugin.NukeLoader):
"representation": str(representation["_id"]),
"frameStart": str(first),
"frameEnd": str(last),
"version": str(version.get("name")),
"version": str(version_doc.get("name")),
"db_colorspace": colorspace,
"source": version_data.get("source"),
"handleStart": str(self.handle_start),
@ -264,28 +286,26 @@ class LoadClip(plugin.NukeLoader):
if used_colorspace:
updated_dict["used_colorspace"] = used_colorspace
last_version_doc = get_last_version_by_subset_id(
project_name, version_doc["parent"], fields=["_id"]
)
# change color of read_node
# get all versions in list
versions = legacy_io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
max_version = max(versions)
if version.get("name") not in [max_version]:
read_node["tile_color"].setValue(int("0xd84f20ff", 16))
if version_doc["_id"] == last_version_doc["_id"]:
color_value = "0x4ecd25ff"
else:
read_node["tile_color"].setValue(int("0x4ecd25ff", 16))
color_value = "0xd84f20ff"
read_node["tile_color"].setValue(int(color_value, 16))
# Update the imprinted representation
update_container(
read_node,
updated_dict
)
self.log.info("updated to version: {}".format(version.get("name")))
self.log.info(
"updated to version: {}".format(version_doc.get("name"))
)
if version_data.get("retime", None):
if add_retime and version_data.get("retime", None):
self._make_retimes(read_node, version_data)
else:
self.clear_members(read_node)

View file

@ -3,6 +3,10 @@ from collections import OrderedDict
import nuke
import six
from openpype.client import (
get_version_by_id,
get_last_version_by_subset_id,
)
from openpype.pipeline import (
legacy_io,
load,
@ -148,17 +152,16 @@ class LoadEffects(load.LoaderPlugin):
"""
# get main variables
# Get version from io
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
project_name = legacy_io.active_project()
version_doc = get_version_by_id(project_name, representation["parent"])
# get corresponding node
GN = nuke.toNode(container['objectName'])
file = get_representation_path(representation).replace("\\", "/")
name = container['name']
version_data = version.get("data", {})
vname = version.get("name", None)
version_data = version_doc.get("data", {})
vname = version_doc.get("name", None)
first = version_data.get("frameStart", None)
last = version_data.get("frameEnd", None)
workfile_first_frame = int(nuke.root()["first_frame"].getValue())
@ -243,21 +246,19 @@ class LoadEffects(load.LoaderPlugin):
# try to find parent read node
self.connect_read_node(GN, namespace, json_f["assignTo"])
# get all versions in list
versions = legacy_io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
max_version = max(versions)
last_version_doc = get_last_version_by_subset_id(
project_name, version_doc["parent"], fields=["_id"]
)
# change color of node
if version.get("name") not in [max_version]:
GN["tile_color"].setValue(int("0xd84f20ff", 16))
if version_doc["_id"] == last_version_doc["_id"]:
color_value = "0x3469ffff"
else:
GN["tile_color"].setValue(int("0x3469ffff", 16))
color_value = "0xd84f20ff"
self.log.info("updated to version: {}".format(version.get("name")))
GN["tile_color"].setValue(int(color_value, 16))
self.log.info("updated to version: {}".format(version_doc.get("name")))
def connect_read_node(self, group_node, asset, subset):
"""

View file

@ -3,6 +3,10 @@ from collections import OrderedDict
import six
import nuke
from openpype.client import (
get_version_by_id,
get_last_version_by_subset_id,
)
from openpype.pipeline import (
legacy_io,
load,
@ -153,17 +157,16 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
# get main variables
# Get version from io
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
project_name = legacy_io.active_project()
version_doc = get_version_by_id(project_name, representation["parent"])
# get corresponding node
GN = nuke.toNode(container['objectName'])
file = get_representation_path(representation).replace("\\", "/")
name = container['name']
version_data = version.get("data", {})
vname = version.get("name", None)
version_data = version_doc.get("data", {})
vname = version_doc.get("name", None)
first = version_data.get("frameStart", None)
last = version_data.get("frameEnd", None)
workfile_first_frame = int(nuke.root()["first_frame"].getValue())
@ -251,20 +254,18 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
# return
# get all versions in list
versions = legacy_io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
max_version = max(versions)
last_version_doc = get_last_version_by_subset_id(
project_name, version_doc["parent"], fields=["_id"]
)
# change color of node
if version.get("name") not in [max_version]:
GN["tile_color"].setValue(int("0xd84f20ff", 16))
if version_doc["_id"] == last_version_doc["_id"]:
color_value = "0x3469ffff"
else:
GN["tile_color"].setValue(int("0x3469ffff", 16))
color_value = "0xd84f20ff"
GN["tile_color"].setValue(int(color_value, 16))
self.log.info("updated to version: {}".format(version.get("name")))
self.log.info("updated to version: {}".format(version_doc.get("name")))
def connect_active_viewer(self, group_node):
"""

View file

@ -1,5 +1,9 @@
import nuke
from openpype.client import (
get_version_by_id,
get_last_version_by_subset_id,
)
from openpype.pipeline import (
legacy_io,
load,
@ -101,17 +105,16 @@ class LoadGizmo(load.LoaderPlugin):
# get main variables
# Get version from io
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
project_name = legacy_io.active_project()
version_doc = get_version_by_id(project_name, representation["parent"])
# get corresponding node
GN = nuke.toNode(container['objectName'])
file = get_representation_path(representation).replace("\\", "/")
name = container['name']
version_data = version.get("data", {})
vname = version.get("name", None)
version_data = version_doc.get("data", {})
vname = version_doc.get("name", None)
first = version_data.get("frameStart", None)
last = version_data.get("frameEnd", None)
namespace = container['namespace']
@ -148,21 +151,18 @@ class LoadGizmo(load.LoaderPlugin):
GN.setXYpos(xpos, ypos)
GN["name"].setValue(object_name)
# get all versions in list
versions = legacy_io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
max_version = max(versions)
last_version_doc = get_last_version_by_subset_id(
project_name, version_doc["parent"], fields=["_id"]
)
# change color of node
if version.get("name") not in [max_version]:
GN["tile_color"].setValue(int("0xd88467ff", 16))
if version_doc["_id"] == last_version_doc["_id"]:
color_value = self.node_color
else:
GN["tile_color"].setValue(int(self.node_color, 16))
color_value = "0xd88467ff"
GN["tile_color"].setValue(int(color_value, 16))
self.log.info("updated to version: {}".format(version.get("name")))
self.log.info("updated to version: {}".format(version_doc.get("name")))
return update_container(GN, data_imprint)

View file

@ -1,6 +1,10 @@
import nuke
import six
from openpype.client import (
get_version_by_id,
get_last_version_by_subset_id,
)
from openpype.pipeline import (
legacy_io,
load,
@ -108,17 +112,16 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
# get main variables
# Get version from io
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
project_name = legacy_io.active_project()
version_doc = get_version_by_id(project_name, representation["parent"])
# get corresponding node
GN = nuke.toNode(container['objectName'])
file = get_representation_path(representation).replace("\\", "/")
name = container['name']
version_data = version.get("data", {})
vname = version.get("name", None)
version_data = version_doc.get("data", {})
vname = version_doc.get("name", None)
first = version_data.get("frameStart", None)
last = version_data.get("frameEnd", None)
namespace = container['namespace']
@ -155,21 +158,18 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
GN.setXYpos(xpos, ypos)
GN["name"].setValue(object_name)
# get all versions in list
versions = legacy_io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
max_version = max(versions)
last_version_doc = get_last_version_by_subset_id(
project_name, version_doc["parent"], fields=["_id"]
)
# change color of node
if version.get("name") not in [max_version]:
GN["tile_color"].setValue(int("0xd88467ff", 16))
if version_doc["_id"] == last_version_doc["_id"]:
color_value = self.node_color
else:
GN["tile_color"].setValue(int(self.node_color, 16))
color_value = "0xd88467ff"
GN["tile_color"].setValue(int(color_value, 16))
self.log.info("updated to version: {}".format(version.get("name")))
self.log.info("updated to version: {}".format(version_doc.get("name")))
return update_container(GN, data_imprint)

View file

@ -2,6 +2,10 @@ import nuke
import qargparse
from openpype.client import (
get_version_by_id,
get_last_version_by_subset_id,
)
from openpype.pipeline import (
legacy_io,
load,
@ -186,20 +190,13 @@ class LoadImage(load.LoaderPlugin):
format(frame_number, "0{}".format(padding)))
# Get start frame from version data
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
project_name = legacy_io.active_project()
version_doc = get_version_by_id(project_name, representation["parent"])
last_version_doc = get_last_version_by_subset_id(
project_name, version_doc["parent"], fields=["_id"]
)
# get all versions in list
versions = legacy_io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
max_version = max(versions)
version_data = version.get("data", {})
version_data = version_doc.get("data", {})
last = first = int(frame_number)
@ -215,7 +212,7 @@ class LoadImage(load.LoaderPlugin):
"representation": str(representation["_id"]),
"frameStart": str(first),
"frameEnd": str(last),
"version": str(version.get("name")),
"version": str(version_doc.get("name")),
"colorspace": version_data.get("colorspace"),
"source": version_data.get("source"),
"fps": str(version_data.get("fps")),
@ -223,17 +220,18 @@ class LoadImage(load.LoaderPlugin):
})
# change color of node
if version.get("name") not in [max_version]:
node["tile_color"].setValue(int("0xd84f20ff", 16))
if version_doc["_id"] == last_version_doc["_id"]:
color_value = "0x4ecd25ff"
else:
node["tile_color"].setValue(int("0x4ecd25ff", 16))
color_value = "0xd84f20ff"
node["tile_color"].setValue(int(color_value, 16))
# Update the imprinted representation
update_container(
node,
updated_dict
)
self.log.info("updated to version: {}".format(version.get("name")))
self.log.info("updated to version: {}".format(version_doc.get("name")))
def remove(self, container):
node = nuke.toNode(container['objectName'])

View file

@ -1,5 +1,9 @@
import nuke
from openpype.client import (
get_version_by_id,
get_last_version_by_subset_id,
)
from openpype.pipeline import (
legacy_io,
load,
@ -60,6 +64,12 @@ class AlembicModelLoader(load.LoaderPlugin):
inpanel=False
)
model_node.forceValidate()
# Ensure all items are imported and selected.
scene_view = model_node.knob('scene_view')
scene_view.setImportedItems(scene_view.getAllItems())
scene_view.setSelectedItems(scene_view.getAllItems())
model_node["frame_rate"].setValue(float(fps))
# workaround because nuke's bug is not adding
@ -100,17 +110,15 @@ class AlembicModelLoader(load.LoaderPlugin):
None
"""
# Get version from io
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
project_name = legacy_io.active_project()
version_doc = get_version_by_id(project_name, representation["parent"])
object_name = container['objectName']
# get corresponding node
model_node = nuke.toNode(object_name)
# get main variables
version_data = version.get("data", {})
vname = version.get("name", None)
version_data = version_doc.get("data", {})
vname = version_doc.get("name", None)
first = version_data.get("frameStart", None)
last = version_data.get("frameEnd", None)
fps = version_data.get("fps") or nuke.root()["fps"].getValue()
@ -142,6 +150,11 @@ class AlembicModelLoader(load.LoaderPlugin):
model_node["frame_rate"].setValue(float(fps))
model_node["file"].setValue(file)
# Ensure all items are imported and selected.
scene_view = model_node.knob('scene_view')
scene_view.setImportedItems(scene_view.getAllItems())
scene_view.setSelectedItems(scene_view.getAllItems())
# workaround because nuke's bug is
# not adding animation keys properly
xpos = model_node.xpos()
@ -163,28 +176,26 @@ class AlembicModelLoader(load.LoaderPlugin):
d.setInput(index, model_node)
# color node by correct color by actual version
self.node_version_color(version, model_node)
self.node_version_color(version_doc, model_node)
self.log.info("updated to version: {}".format(version.get("name")))
self.log.info("updated to version: {}".format(version_doc.get("name")))
return update_container(model_node, data_imprint)
def node_version_color(self, version, node):
""" Coloring a node by correct color by actual version
"""
# get all versions in list
versions = legacy_io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
""" Coloring a node by correct color by actual version"""
max_version = max(versions)
project_name = legacy_io.active_project()
last_version_doc = get_last_version_by_subset_id(
project_name, version["parent"], fields=["_id"]
)
# change color of node
if version.get("name") not in [max_version]:
node["tile_color"].setValue(int("0xd88467ff", 16))
if version["_id"] == last_version_doc["_id"]:
color_value = self.node_color
else:
node["tile_color"].setValue(int(self.node_color, 16))
color_value = "0xd88467ff"
node["tile_color"].setValue(int(color_value, 16))
def switch(self, container, representation):
self.update(container, representation)

View file

@ -1,5 +1,9 @@
import nuke
from openpype.client import (
get_version_by_id,
get_last_version_by_subset_id,
)
from openpype.pipeline import (
legacy_io,
load,
@ -116,29 +120,23 @@ class LinkAsGroup(load.LoaderPlugin):
root = get_representation_path(representation).replace("\\", "/")
# Get start frame from version data
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
# get all versions in list
versions = legacy_io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
max_version = max(versions)
project_name = legacy_io.active_project()
version_doc = get_version_by_id(project_name, representation["parent"])
last_version_doc = get_last_version_by_subset_id(
project_name, version_doc["parent"], fields=["_id"]
)
updated_dict = {}
version_data = version_doc["data"]
updated_dict.update({
"representation": str(representation["_id"]),
"frameEnd": version["data"].get("frameEnd"),
"version": version.get("name"),
"colorspace": version["data"].get("colorspace"),
"source": version["data"].get("source"),
"handles": version["data"].get("handles"),
"fps": version["data"].get("fps"),
"author": version["data"].get("author")
"frameEnd": version_data.get("frameEnd"),
"version": version_doc.get("name"),
"colorspace": version_data.get("colorspace"),
"source": version_data.get("source"),
"handles": version_data.get("handles"),
"fps": version_data.get("fps"),
"author": version_data.get("author")
})
# Update the imprinted representation
@ -150,12 +148,13 @@ class LinkAsGroup(load.LoaderPlugin):
node["file"].setValue(root)
# change color of node
if version.get("name") not in [max_version]:
node["tile_color"].setValue(int("0xd84f20ff", 16))
if version_doc["_id"] == last_version_doc["_id"]:
color_value = "0xff0ff0ff"
else:
node["tile_color"].setValue(int("0xff0ff0ff", 16))
color_value = "0xd84f20ff"
node["tile_color"].setValue(int(color_value, 16))
self.log.info("updated to version: {}".format(version.get("name")))
self.log.info("updated to version: {}".format(version_doc.get("name")))
def remove(self, container):
node = nuke.toNode(container['objectName'])

View file

@ -3,6 +3,7 @@ import re
import nuke
import pyblish.api
from openpype.client import get_asset_by_name
from openpype.pipeline import legacy_io
@ -16,12 +17,11 @@ class CollectNukeReads(pyblish.api.InstancePlugin):
families = ["source"]
def process(self, instance):
asset_data = legacy_io.find_one({
"type": "asset",
"name": legacy_io.Session["AVALON_ASSET"]
})
project_name = legacy_io.active_project()
asset_name = legacy_io.Session["AVALON_ASSET"]
asset_doc = get_asset_by_name(project_name, asset_name)
self.log.debug("asset_data: {}".format(asset_data["data"]))
self.log.debug("asset_doc: {}".format(asset_doc["data"]))
self.log.debug("checking instance: {}".format(instance))
@ -127,7 +127,7 @@ class CollectNukeReads(pyblish.api.InstancePlugin):
"frameStart": first_frame,
"frameEnd": last_frame,
"colorspace": colorspace,
"handles": int(asset_data["data"].get("handles", 0)),
"handles": int(asset_doc["data"].get("handles", 0)),
"step": 1,
"fps": int(nuke.root()['fps'].value())
})

View file

@ -42,12 +42,22 @@ class NukeRenderLocal(openpype.api.Extractor):
self.log.info("Start frame: {}".format(first_frame))
self.log.info("End frame: {}".format(last_frame))
# write node url might contain nuke's TCL expression
# as [python ...]/path...
path = node["file"].evaluate()
node_file = node["file"]
# Collect expected filepaths for each frame
# - a set is created first so still-image outputs (the same path for
# every frame) collapse to a single entry, then sorted into a list
expected_paths = list(sorted({
node_file.evaluate(frame)
for frame in range(first_frame, last_frame + 1)
}))
# Extract only filenames for representation
filenames = [
os.path.basename(filepath)
for filepath in expected_paths
]
# Ensure output directory exists.
out_dir = os.path.dirname(path)
out_dir = os.path.dirname(expected_paths[0])
if not os.path.exists(out_dir):
os.makedirs(out_dir)
@ -67,12 +77,11 @@ class NukeRenderLocal(openpype.api.Extractor):
if "representations" not in instance.data:
instance.data["representations"] = []
collected_frames = os.listdir(out_dir)
if len(collected_frames) == 1:
if len(filenames) == 1:
repre = {
'name': ext,
'ext': ext,
'files': collected_frames.pop(),
'files': filenames[0],
"stagingDir": out_dir
}
else:
@ -81,7 +90,7 @@ class NukeRenderLocal(openpype.api.Extractor):
'ext': ext,
'frameStart': "%0{}d".format(
len(str(last_frame))) % first_frame,
'files': collected_frames,
'files': filenames,
"stagingDir": out_dir
}
instance.data["representations"].append(repre)
@ -105,7 +114,7 @@ class NukeRenderLocal(openpype.api.Extractor):
families.remove('still.local')
instance.data["families"] = families
collections, remainder = clique.assemble(collected_frames)
collections, remainder = clique.assemble(filenames)
self.log.info('collections: {}'.format(str(collections)))
if collections:
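Evaluating the write node's file knob per frame and collecting into a set covers both sequences (one path per frame) and stills (every frame evaluates to the same path, leaving one entry), which is what drives the single-file versus sequence branch above. A sketch with a hypothetical evaluate stand-in:
import os

def evaluate(template, frame):
    # stand-in for node["file"].evaluate(frame)
    return template % frame if "%" in template else template

first_frame, last_frame = 1001, 1003
template = "/tmp/render/shot.%04d.exr"  # try "/tmp/render/shot.mov" too

expected_paths = list(sorted({
    evaluate(template, frame)
    for frame in range(first_frame, last_frame + 1)
}))
filenames = [os.path.basename(path) for path in expected_paths]
print(filenames)  # three names for the sequence, one for a still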

View file

@ -104,7 +104,10 @@ class ExtractReviewDataMov(openpype.api.Extractor):
self, instance, o_name, o_data["extension"],
multiple_presets)
if "render.farm" in families:
if (
"render.farm" in families or
"prerender.farm" in families
):
if "review" in instance.data["families"]:
instance.data["families"].remove("review")

View file

@ -4,6 +4,7 @@ import nuke
import copy
import pyblish.api
import six
import openpype
from openpype.hosts.nuke.api import (
@ -12,7 +13,6 @@ from openpype.hosts.nuke.api import (
get_view_process_node
)
class ExtractSlateFrame(openpype.api.Extractor):
"""Extracts movie and thumbnail with baked in luts
@ -152,6 +152,7 @@ class ExtractSlateFrame(openpype.api.Extractor):
self.log.debug("__ first_frame: {}".format(first_frame))
self.log.debug("__ slate_first_frame: {}".format(slate_first_frame))
above_slate_node = slate_node.dependencies().pop()
# fallback if the files do not exist
if self._check_frames_exists(instance):
# Read node
@ -164,8 +165,16 @@ class ExtractSlateFrame(openpype.api.Extractor):
r_node["colorspace"].setValue(instance.data["colorspace"])
previous_node = r_node
temporary_nodes = [previous_node]
# adding copy metadata node for correct frame metadata
cm_node = nuke.createNode("CopyMetaData")
cm_node.setInput(0, previous_node)
cm_node.setInput(1, above_slate_node)
previous_node = cm_node
temporary_nodes.append(cm_node)
else:
previous_node = slate_node.dependencies().pop()
previous_node = above_slate_node
temporary_nodes = []
# only create colorspace baking if toggled on
@ -236,6 +245,48 @@ class ExtractSlateFrame(openpype.api.Extractor):
int(slate_first_frame)
)
# Add file to representation files
# - get write node
write_node = instance.data["writeNode"]
# - evaluate filepaths for first frame and slate frame
first_filename = os.path.basename(
write_node["file"].evaluate(first_frame))
slate_filename = os.path.basename(
write_node["file"].evaluate(slate_first_frame))
# Find matching representation based on first filename
matching_repre = None
is_sequence = None
for repre in instance.data["representations"]:
files = repre["files"]
if (
not isinstance(files, six.string_types)
and first_filename in files
):
matching_repre = repre
is_sequence = True
break
elif files == first_filename:
matching_repre = repre
is_sequence = False
break
if not matching_repre:
self.log.info((
"Matching reresentaion was not found."
" Representation files were not filled with slate."
))
return
# Add frame to matching representation files
if not is_sequence:
matching_repre["files"] = [first_filename, slate_filename]
elif slate_filename not in matching_repre["files"]:
matching_repre["files"].insert(0, slate_filename)
self.log.warning("Added slate frame to representation files")
def add_comment_slate_node(self, instance, node):
comment = instance.context.data.get("comment")

View file

@ -1,7 +1,6 @@
import nuke
import pyblish.api
from openpype.pipeline import legacy_io
from openpype.hosts.nuke.api.lib import (
add_publish_knob,
get_avalon_knob_data
@ -20,12 +19,6 @@ class PreCollectNukeInstances(pyblish.api.ContextPlugin):
sync_workfile_version_on_families = []
def process(self, context):
asset_data = legacy_io.find_one({
"type": "asset",
"name": legacy_io.Session["AVALON_ASSET"]
})
self.log.debug("asset_data: {}".format(asset_data["data"]))
instances = []
root = nuke.root()

View file

@ -4,7 +4,10 @@ from pprint import pformat
import nuke
import pyblish.api
import openpype.api as pype
from openpype.client import (
get_last_version_by_subset_name,
get_representations,
)
from openpype.pipeline import (
legacy_io,
get_representation_path,
@ -32,6 +35,7 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
if node is None:
return
instance.data["writeNode"] = node
self.log.debug("checking instance: {}".format(instance))
# Determine defined file type
@ -53,9 +57,21 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
first_frame = int(node["first"].getValue())
last_frame = int(node["last"].getValue())
# get path
# Prepare expected output paths by evaluating each frame of write node
# - paths are first collected into a set to avoid duplicates, then
# sorted and converted to a list
node_file = node["file"]
expected_paths = list(sorted({
node_file.evaluate(frame)
for frame in range(first_frame, last_frame + 1)
}))
expected_filenames = [
os.path.basename(filepath)
for filepath in expected_paths
]
path = nuke.filename(node)
output_dir = os.path.dirname(path)
self.log.debug('output dir: {}'.format(output_dir))
# create label
@ -80,8 +96,11 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
}
try:
collected_frames = [f for f in os.listdir(output_dir)
if ext in f]
collected_frames = [
filename
for filename in os.listdir(output_dir)
if filename in expected_filenames
]
if collected_frames:
collected_frames_len = len(collected_frames)
frame_start_str = "%0{}d".format(
@ -180,17 +199,26 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
if not instance.data["review"]:
instance.data["useSequenceForReview"] = False
project_name = legacy_io.active_project()
asset_name = instance.data["asset"]
# * Add audio to instance if it exists.
# Find the latest version document
version_doc = pype.get_latest_version(
instance.data["asset"], "audioMain"
last_version_doc = get_last_version_by_subset_name(
project_name, "audioMain", asset_name=asset_name, fields=["_id"]
)
repre_doc = None
if version_doc:
if last_version_doc:
# Try to find its representation (expected there is only one)
repre_doc = legacy_io.find_one(
{"type": "representation", "parent": version_doc["_id"]}
)
repre_docs = list(get_representations(
project_name, version_ids=[last_version_doc["_id"]]
))
if not repre_docs:
self.log.warning(
"Version document does not contain any representations"
)
else:
repre_doc = repre_docs[0]
# Add audio to instance if representation was found
if repre_doc:

View file

@ -1,5 +1,6 @@
import pyblish.api
from openpype.client import get_project, get_asset_by_id
from openpype import lib
from openpype.pipeline import legacy_io
@ -19,6 +20,7 @@ class ValidateScript(pyblish.api.InstancePlugin):
asset_name = ctx_data["asset"]
asset = lib.get_asset(asset_name)
asset_data = asset["data"]
project_name = legacy_io.active_project()
# These attributes will be checked
attributes = [
@ -48,12 +50,19 @@ class ValidateScript(pyblish.api.InstancePlugin):
asset_attributes[attr] = asset_data[attr]
elif attr in hierarchical_attributes:
# Try to find fps on parent
parent = asset['parent']
if asset_data['visualParent'] is not None:
parent = asset_data['visualParent']
# TODO this should probably be removed
# Hierarchical attributes have not been a thing since Pype 2?
value = self.check_parent_hierarchical(parent, attr)
# Try to find attribute on parent
parent_id = asset['parent']
parent_type = "project"
if asset_data['visualParent'] is not None:
parent_type = "asset"
parent_id = asset_data['visualParent']
value = self.check_parent_hierarchical(
project_name, parent_type, parent_id, attr
)
if value is None:
missing_attributes.append(attr)
else:
@ -113,12 +122,35 @@ class ValidateScript(pyblish.api.InstancePlugin):
message = msg.format(", ".join(not_matching))
raise ValueError(message)
def check_parent_hierarchical(self, entityId, attr):
if entityId is None:
def check_parent_hierarchical(
self, project_name, parent_type, parent_id, attr
):
if parent_id is None:
return None
entity = legacy_io.find_one({"_id": entityId})
if attr in entity['data']:
doc = None
if parent_type == "project":
doc = get_project(project_name)
elif parent_type == "asset":
doc = get_asset_by_id(project_name, parent_id)
if not doc:
return None
doc_data = doc["data"]
if attr in doc_data:
self.log.info(attr)
return entity['data'][attr]
else:
return self.check_parent_hierarchical(entity['parent'], attr)
return doc_data[attr]
if parent_type == "project":
return None
parent_id = doc_data.get("visualParent")
new_parent_type = "asset"
if parent_id is None:
parent_id = doc["parent"]
new_parent_type = "project"
return self.check_parent_hierarchical(
project_name, new_parent_type, parent_id, attr
)
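The lookup now resolves the parent type explicitly and recurses up visualParent links, ending at the project document. A sketch over an in-memory hierarchy, with dict lookups replacing the two get_* client calls:
PROJECT = {"data": {"fps": 25}}
ASSETS = {
    "shot010": {"data": {"visualParent": "seq01"}, "parent": "project"},
    "seq01": {"data": {}, "parent": "project"},
}

def check_parent_hierarchical(parent_type, parent_id, attr):
    doc = PROJECT if parent_type == "project" else ASSETS.get(parent_id)
    if not doc:
        return None
    data = doc["data"]
    if attr in data:
        return data[attr]
    if parent_type == "project":
        return None  # top of the hierarchy, nothing found
    next_id = data.get("visualParent")
    next_type = "asset"
    if next_id is None:
        next_id, next_type = doc["parent"], "project"
    return check_parent_hierarchical(next_type, next_id, attr)

print(check_parent_hierarchical("asset", "shot010", "fps"))  # 25, from project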

View file

@ -4,7 +4,7 @@ import re
import os
import contextlib
from opentimelineio import opentime
import openpype
from openpype.pipeline.editorial import is_overlapping_otio_ranges
from ..otio import davinci_export as otio_export
@ -319,14 +319,13 @@ def get_current_timeline_items(
selected_track_count = timeline.GetTrackCount(track_type)
# loop all tracks and get items
_clips = dict()
_clips = {}
for track_index in range(1, (int(selected_track_count) + 1)):
_track_name = timeline.GetTrackName(track_type, track_index)
# filter out all unmatched track names
if track_name:
if _track_name not in track_name:
continue
if track_name and _track_name not in track_name:
continue
timeline_items = timeline.GetItemListInTrack(
track_type, track_index)
@ -348,12 +347,8 @@ def get_current_timeline_items(
"index": clip_index
}
ti_color = ti.GetClipColor()
if filter is True:
if selecting_color in ti_color:
selected_clips.append(data)
else:
if filter and selecting_color in ti_color or not filter:
selected_clips.append(data)
return selected_clips
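The collapsed condition relies on `and` binding tighter than `or`: a clip is kept when filtering is off, or when filtering is on and its color matches. A truth-table sketch:
def keep(filter_on, color_matches):
    # parenthesized form of: filter and color_matches or not filter
    return (filter_on and color_matches) or not filter_on

for filter_on in (True, False):
    for color_matches in (True, False):
        print(filter_on, color_matches, "->", keep(filter_on, color_matches))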
@ -824,7 +819,7 @@ def get_otio_clip_instance_data(otio_timeline, timeline_item_data):
continue
if otio_clip.name not in timeline_item.GetName():
continue
if openpype.lib.is_overlapping_otio_ranges(
if is_overlapping_otio_ranges(
parent_range, timeline_range, strict=True):
# add pypedata marker to otio_clip metadata

View file

@ -506,7 +506,7 @@ class Creator(LegacyCreator):
super(Creator, self).__init__(*args, **kwargs)
from openpype.api import get_current_project_settings
resolve_p_settings = get_current_project_settings().get("resolve")
self.presets = dict()
self.presets = {}
if resolve_p_settings:
self.presets = resolve_p_settings["create"].get(
self.__class__.__name__, {})

View file

@ -116,12 +116,13 @@ class CreateShotClip(resolve.Creator):
"order": 0},
"vSyncTrack": {
"value": gui_tracks, # noqa
"type": "QComboBox",
"label": "Hero track",
"target": "ui",
"toolTip": "Select driving track name which should be mastering all others", # noqa
"order": 1}
"type": "QComboBox",
"label": "Hero track",
"target": "ui",
"toolTip": "Select driving track name which should be mastering all others", # noqa
"order": 1
}
}
},
"publishSettings": {
"type": "section",
@ -172,28 +173,31 @@ class CreateShotClip(resolve.Creator):
"target": "ui",
"order": 4,
"value": {
"workfileFrameStart": {
"value": 1001,
"type": "QSpinBox",
"label": "Workfiles Start Frame",
"target": "tag",
"toolTip": "Set workfile starting frame number", # noqa
"order": 0},
"handleStart": {
"value": 0,
"type": "QSpinBox",
"label": "Handle start (head)",
"target": "tag",
"toolTip": "Handle at start of clip", # noqa
"order": 1},
"handleEnd": {
"value": 0,
"type": "QSpinBox",
"label": "Handle end (tail)",
"target": "tag",
"toolTip": "Handle at end of clip", # noqa
"order": 2},
}
"workfileFrameStart": {
"value": 1001,
"type": "QSpinBox",
"label": "Workfiles Start Frame",
"target": "tag",
"toolTip": "Set workfile starting frame number", # noqa
"order": 0
},
"handleStart": {
"value": 0,
"type": "QSpinBox",
"label": "Handle start (head)",
"target": "tag",
"toolTip": "Handle at start of clip", # noqa
"order": 1
},
"handleEnd": {
"value": 0,
"type": "QSpinBox",
"label": "Handle end (tail)",
"target": "tag",
"toolTip": "Handle at end of clip", # noqa
"order": 2
}
}
}
}
@ -229,8 +233,10 @@ class CreateShotClip(resolve.Creator):
v_sync_track = widget.result["vSyncTrack"]["value"]
# sort selected track items by hero (sync) track membership
sorted_selected_track_items = list()
unsorted_selected_track_items = list()
sorted_selected_track_items = []
unsorted_selected_track_items = []
print("_____ selected ______")
print(self.selected)
for track_item_data in self.selected:
if track_item_data["track"]["name"] in v_sync_track:
sorted_selected_track_items.append(track_item_data)
@ -253,10 +259,10 @@ class CreateShotClip(resolve.Creator):
"sq_frame_start": sq_frame_start,
"sq_markers": sq_markers
}
self.log.debug(kwargs)
for i, track_item_data in enumerate(sorted_selected_track_items):
self.rename_index = i
self.log.info(track_item_data)
# convert track item to timeline media pool item
track_item = resolve.PublishClip(
self, track_item_data, **kwargs).convert()

View file

@ -1,6 +1,10 @@
from copy import deepcopy
from importlib import reload
from openpype.client import (
get_version_by_id,
get_last_version_by_subset_id,
)
from openpype.hosts import resolve
from openpype.pipeline import (
get_representation_path,
@ -19,7 +23,7 @@ class LoadClip(resolve.TimelineItemLoader):
"""
families = ["render2d", "source", "plate", "render", "review"]
representations = ["exr", "dpx", "jpg", "jpeg", "png", "h264", ".mov"]
representations = ["exr", "dpx", "jpg", "jpeg", "png", "h264", "mov"]
label = "Load as clip"
order = -10
@ -96,10 +100,8 @@ class LoadClip(resolve.TimelineItemLoader):
namespace = container['namespace']
timeline_item_data = resolve.get_pype_timeline_item_by_name(namespace)
timeline_item = timeline_item_data["clip"]["item"]
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
project_name = legacy_io.active_project()
version = get_version_by_id(project_name, representation["parent"])
version_data = version.get("data", {})
version_name = version.get("name", None)
colorspace = version_data.get("colorspace", None)
@ -138,19 +140,22 @@ class LoadClip(resolve.TimelineItemLoader):
@classmethod
def set_item_color(cls, timeline_item, version):
# define version name
version_name = version.get("name", None)
# get all versions in list
versions = legacy_io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
max_version = max(versions)
project_name = legacy_io.active_project()
last_version_doc = get_last_version_by_subset_id(
project_name,
version["parent"],
fields=["name"]
)
if last_version_doc:
last_version = last_version_doc["name"]
else:
last_version = None
# set clip colour
if version_name == max_version:
if version_name == last_version:
timeline_item.SetClipColor(cls.clip_color_last)
else:
timeline_item.SetClipColor(cls.clip_color)
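Condensed, the colour decision above compares the loaded version's name with the newest version of the same subset; a throwaway sketch with stub documents in place of the database queries (all values below are illustrative):

# Stub documents standing in for the queried version records.
version = {"name": 3, "parent": "subset-id"}
last_version_doc = {"name": 4}  # what get_last_version_by_subset_id returns

last_version = last_version_doc["name"] if last_version_doc else None
if version.get("name") == last_version:
    clip_color = "green"   # cls.clip_color_last: clip is up to date
else:
    clip_color = "orange"  # cls.clip_color: a newer version exists
assert clip_color == "orange"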

View file

@ -30,7 +30,8 @@ class PrecollectWorkfile(pyblish.api.ContextPlugin):
"asset": asset,
"subset": "{}{}".format(asset, subset.capitalize()),
"item": project,
"family": "workfile"
"family": "workfile",
"families": []
}
# create instance with workfile

View file

@ -1,4 +1,5 @@
import os
from pprint import pformat
import re
from copy import deepcopy
import pyblish.api
@ -21,6 +22,7 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin):
families = ["shot"]
# presets
shot_rename = True
shot_rename_template = None
shot_rename_search_patterns = None
shot_add_hierarchy = None
@ -46,7 +48,7 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin):
parent_name = instance.context.data["assetEntity"]["name"]
clip = instance.data["item"]
clip_name = os.path.splitext(clip.name)[0].lower()
if self.shot_rename_search_patterns:
if self.shot_rename_search_patterns and self.shot_rename:
search_text += parent_name + clip_name
instance.data["anatomyData"].update({"clip_name": clip_name})
for type, pattern in self.shot_rename_search_patterns.items():
@ -56,9 +58,9 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin):
continue
instance.data["anatomyData"][type] = match[-1]
# format to new shot name
instance.data["asset"] = self.shot_rename_template.format(
**instance.data["anatomyData"])
# format to new shot name
instance.data["asset"] = self.shot_rename_template.format(
**instance.data["anatomyData"])
def create_hierarchy(self, instance):
asset_doc = instance.context.data["assetEntity"]
@ -87,7 +89,7 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin):
})
hierarchy = list()
if self.shot_add_hierarchy:
if self.shot_add_hierarchy.get("enabled"):
parent_template_patern = re.compile(r"\{([a-z]*?)\}")
# fill the parents parts from presets
shot_add_hierarchy = self.shot_add_hierarchy.copy()
@ -131,8 +133,8 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin):
instance.data["parents"] = parents
# print
self.log.debug(f"Hierarchy: {hierarchy}")
self.log.debug(f"parents: {parents}")
self.log.warning(f"Hierarchy: {hierarchy}")
self.log.info(f"parents: {parents}")
tasks_to_add = dict()
if self.shot_add_tasks:
@ -163,6 +165,9 @@ class CollectHierarchyInstance(pyblish.api.ContextPlugin):
})
def process(self, context):
self.log.info("self.shot_add_hierarchy: {}".format(
pformat(self.shot_add_hierarchy)
))
for instance in context:
if instance.data["family"] in self.families:
self.processing_instance(instance)

View file

@ -39,11 +39,14 @@ class ExtractTrimVideoAudio(openpype.api.Extractor):
# Generate mov file.
fps = instance.data["fps"]
video_file_path = instance.data["editorialSourcePath"]
extensions = instance.data.get("extensions", [".mov"])
extensions = instance.data.get("extensions", ["mov"])
for ext in extensions:
self.log.info("Processing ext: `{}`".format(ext))
if not ext.startswith("."):
ext = "." + ext
clip_trimed_path = os.path.join(
staging_dir, instance.data["name"] + ext)
# check video file metadata

View file

@ -1,20 +1,8 @@
from .pipeline import (
install,
ls,
set_project_name,
get_context_title,
get_context_data,
update_context_data,
TrayPublisherHost,
)
__all__ = (
"install",
"ls",
"set_project_name",
"get_context_title",
"get_context_data",
"update_context_data",
"TrayPublisherHost",
)

View file

@ -9,6 +9,8 @@ from openpype.pipeline import (
register_creator_plugin_path,
legacy_io,
)
from openpype.host import HostBase, INewPublisher
ROOT_DIR = os.path.dirname(os.path.dirname(
os.path.abspath(__file__)
@ -17,6 +19,35 @@ PUBLISH_PATH = os.path.join(ROOT_DIR, "plugins", "publish")
CREATE_PATH = os.path.join(ROOT_DIR, "plugins", "create")
class TrayPublisherHost(HostBase, INewPublisher):
name = "traypublisher"
def install(self):
os.environ["AVALON_APP"] = self.name
legacy_io.Session["AVALON_APP"] = self.name
pyblish.api.register_host("traypublisher")
pyblish.api.register_plugin_path(PUBLISH_PATH)
register_creator_plugin_path(CREATE_PATH)
def get_context_title(self):
return HostContext.get_project_name()
def get_context_data(self):
return HostContext.get_context_data()
def update_context_data(self, data, changes):
HostContext.save_context_data(data, changes)
def set_project_name(self, project_name):
# TODO Deregister project specific plugins and register new project
# plugins
os.environ["AVALON_PROJECT"] = project_name
legacy_io.Session["AVALON_PROJECT"] = project_name
legacy_io.install()
HostContext.set_project_name(project_name)
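A hedged sketch of how this host class is meant to be driven; the call order is inferred from the methods above and the project name is hypothetical (normally the Tray Publisher tool performs these steps itself):

host = TrayPublisherHost()
host.install()                       # registers pyblish host and plugin paths
host.set_project_name("my_project")  # hypothetical project name
print(host.get_context_title())      # expected: "my_project"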
class HostContext:
_context_json_path = None
@ -150,32 +181,3 @@ def get_context_data():
def update_context_data(data, changes):
HostContext.save_context_data(data)
def get_context_title():
return HostContext.get_project_name()
def ls():
"""Probably will never return loaded containers."""
return []
def install():
"""This is called before a project is known.
Project is defined with 'set_project_name'.
"""
os.environ["AVALON_APP"] = "traypublisher"
pyblish.api.register_host("traypublisher")
pyblish.api.register_plugin_path(PUBLISH_PATH)
register_creator_plugin_path(CREATE_PATH)
def set_project_name(project_name):
# TODO Deregister project specific plugins and register new project plugins
os.environ["AVALON_PROJECT"] = project_name
legacy_io.Session["AVALON_PROJECT"] = project_name
legacy_io.install()
HostContext.set_project_name(project_name)

View file

@ -1,8 +1,8 @@
from openpype.lib.attribute_definitions import FileDef
from openpype.pipeline import (
Creator,
CreatedInstance
)
from openpype.lib import FileDef
from .pipeline import (
list_instances,
@ -12,6 +12,29 @@ from .pipeline import (
)
IMAGE_EXTENSIONS = [
".ani", ".anim", ".apng", ".art", ".bmp", ".bpg", ".bsave", ".cal",
".cin", ".cpc", ".cpt", ".dds", ".dpx", ".ecw", ".exr", ".fits",
".flic", ".flif", ".fpx", ".gif", ".hdri", ".hevc", ".icer",
".icns", ".ico", ".cur", ".ics", ".ilbm", ".jbig", ".jbig2",
".jng", ".jpeg", ".jpeg-ls", ".jpeg", ".2000", ".jpg", ".xr",
".jpeg", ".xt", ".jpeg-hdr", ".kra", ".mng", ".miff", ".nrrd",
".ora", ".pam", ".pbm", ".pgm", ".ppm", ".pnm", ".pcx", ".pgf",
".pictor", ".png", ".psb", ".psp", ".qtvr", ".ras",
".rgbe", ".logluv", ".tiff", ".sgi", ".tga", ".tiff", ".tiff/ep",
".tiff/it", ".ufo", ".ufp", ".wbmp", ".webp", ".xbm", ".xcf",
".xpm", ".xwd"
]
VIDEO_EXTENSIONS = [
".3g2", ".3gp", ".amv", ".asf", ".avi", ".drc", ".f4a", ".f4b",
".f4p", ".f4v", ".flv", ".gif", ".gifv", ".m2v", ".m4p", ".m4v",
".mkv", ".mng", ".mov", ".mp2", ".mp4", ".mpe", ".mpeg", ".mpg",
".mpv", ".mxf", ".nsv", ".ogg", ".ogv", ".qt", ".rm", ".rmvb",
".roq", ".svi", ".vob", ".webm", ".wmv", ".yuv"
]
REVIEW_EXTENSIONS = IMAGE_EXTENSIONS + VIDEO_EXTENSIONS
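Because these lists are plain lowercase suffixes, a reviewability check reduces to a suffix lookup; a throwaway sketch reusing the `REVIEW_EXTENSIONS` constant defined above (the paths are hypothetical):

import os

def is_reviewable(path):
    """Check a file's suffix against the combined review extension list."""
    return os.path.splitext(path)[1].lower() in REVIEW_EXTENSIONS

assert is_reviewable("/tmp/shot010_v001.MOV")
assert not is_reviewable("/tmp/notes.txt")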
class TrayPublishCreator(Creator):
create_allow_context_change = True
host_name = "traypublisher"
@ -37,6 +60,21 @@ class TrayPublishCreator(Creator):
# Use same attributes as for instance attributes
return self.get_instance_attr_defs()
def _store_new_instance(self, new_instance):
"""Tray publisher specific method to store instance.
The instance is stored into the traypublisher "workfile" and also
added to the CreateContext.
Args:
new_instance (CreatedInstance): Instance that should be stored.
"""
# Host implementation of storing metadata about instance
HostContext.add_instance(new_instance.data_to_store())
# Add instance to current context
self._add_instance_to_context(new_instance)
class SettingsCreator(TrayPublishCreator):
create_allow_context_change = True
@ -58,19 +96,27 @@ class SettingsCreator(TrayPublishCreator):
data["settings_creator"] = True
# Create new instance
new_instance = CreatedInstance(self.family, subset_name, data, self)
# Host implementation of storing metadata about instance
HostContext.add_instance(new_instance.data_to_store())
# Add instance to current context
self._add_instance_to_context(new_instance)
self._store_new_instance(new_instance)
def get_instance_attr_defs(self):
return [
FileDef(
"filepath",
"representation_files",
folders=False,
extensions=self.extensions,
allow_sequences=self.allow_sequences,
label="Filepath",
single_item=not self.allow_multiple_items,
label="Representations",
),
FileDef(
"reviewable",
folders=False,
extensions=REVIEW_EXTENSIONS,
allow_sequences=True,
single_item=True,
label="Reviewable representations",
extensions_label="Single reviewable item"
)
]
@ -92,6 +138,7 @@ class SettingsCreator(TrayPublishCreator):
"detailed_description": item_data["detailed_description"],
"extensions": item_data["extensions"],
"allow_sequences": item_data["allow_sequences"],
"allow_multiple_items": item_data["allow_multiple_items"],
"default_variants": item_data["default_variants"]
}
)

View file

@ -0,0 +1,216 @@
import copy
import os
import re
from openpype.client import get_assets, get_asset_by_name
from openpype.lib import (
FileDef,
BoolDef,
get_subset_name_with_asset_doc,
TaskNotSetError,
)
from openpype.pipeline import (
CreatedInstance,
CreatorError
)
from openpype.hosts.traypublisher.api.plugin import TrayPublishCreator
class BatchMovieCreator(TrayPublishCreator):
"""Creates instances from movie file(s).
Intended for .mov files, but should work for any video file.
Doesn't handle image sequences though.
"""
identifier = "render_movie_batch"
label = "Batch Movies"
family = "render"
description = "Publish batch of video files"
create_allow_context_change = False
version_regex = re.compile(r"^(.+)_v([0-9]+)$")
def __init__(self, project_settings, *args, **kwargs):
super(BatchMovieCreator, self).__init__(project_settings,
*args, **kwargs)
creator_settings = (
project_settings["traypublisher"]["BatchMovieCreator"]
)
self.default_variants = creator_settings["default_variants"]
self.default_tasks = creator_settings["default_tasks"]
self.extensions = creator_settings["extensions"]
def get_icon(self):
return "fa.file"
def create(self, subset_name, data, pre_create_data):
file_paths = pre_create_data.get("filepath")
if not file_paths:
return
for file_info in file_paths:
instance_data = copy.deepcopy(data)
file_name = file_info["filenames"][0]
filepath = os.path.join(file_info["directory"], file_name)
instance_data["creator_attributes"] = {"filepath": filepath}
asset_doc, version = self.get_asset_doc_from_file_name(
file_name, self.project_name)
subset_name, task_name = self._get_subset_and_task(
asset_doc, data["variant"], self.project_name)
instance_data["task"] = task_name
instance_data["asset"] = asset_doc["name"]
# Create new instance
new_instance = CreatedInstance(self.family, subset_name,
instance_data, self)
self._store_new_instance(new_instance)
def get_asset_doc_from_file_name(self, source_filename, project_name):
"""Try to parse out asset name from file name provided.
Artists might provide various file name formats.
Currently handled:
- chair.mov
- chair_v001.mov
- my_chair_to_upload.mov
"""
version = None
asset_name = os.path.splitext(source_filename)[0]
# Always first check if source filename is in assets
matching_asset_doc = self._get_asset_by_name_case_not_sensitive(
project_name, asset_name)
if matching_asset_doc is None:
matching_asset_doc, version = (
self._parse_with_version(project_name, asset_name))
if matching_asset_doc is None:
matching_asset_doc = self._parse_containing(project_name,
asset_name)
if matching_asset_doc is None:
raise CreatorError(
"Cannot guess asset name from {}".format(source_filename))
return matching_asset_doc, version
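The resolution order above is: exact (case-insensitive) name, then `name_vNNN`, then substring containment. The version regex is the only pure-string step; a throwaway demonstration on the docstring's own examples:

import re

version_regex = re.compile(r"^(.+)_v([0-9]+)$")
for stem in ("chair", "chair_v001", "my_chair_to_upload"):
    result = version_regex.findall(stem)
    if result:
        name, version = result[0]
        print(stem, "->", name, int(version))  # chair_v001 -> chair 1
    else:
        print(stem, "-> no version suffix")    # falls through to other lookups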
def _parse_with_version(self, project_name, asset_name):
"""Try to parse asset name from a file name containing version too
Eg. 'chair_v001.mov' >> 'chair', 1
"""
self.log.debug((
"Asset doc by \"{}\" was not found, trying version regex."
).format(asset_name))
matching_asset_doc = version_number = None
regex_result = self.version_regex.findall(asset_name)
if regex_result:
_asset_name, _version_number = regex_result[0]
matching_asset_doc = self._get_asset_by_name_case_not_sensitive(
project_name, _asset_name)
if matching_asset_doc:
version_number = int(_version_number)
return matching_asset_doc, version_number
def _parse_containing(self, project_name, asset_name):
"""Look if file name contains any existing asset name"""
for asset_doc in get_assets(project_name, fields=["name"]):
if asset_doc["name"].lower() in asset_name.lower():
return get_asset_by_name(project_name, asset_doc["name"])
def _get_subset_and_task(self, asset_doc, variant, project_name):
"""Create subset name according to standard template process"""
task_name = self._get_task_name(asset_doc)
try:
subset_name = get_subset_name_with_asset_doc(
self.family,
variant,
task_name,
asset_doc,
project_name
)
except TaskNotSetError:
# Create instance with fake task
# - instance will be marked as invalid so it can't be published
# but the user has the ability to change it
# NOTE: This expects that there is no task 'Undefined' on the asset
task_name = "Undefined"
subset_name = get_subset_name_with_asset_doc(
self.family,
variant,
task_name,
asset_doc,
project_name
)
return subset_name, task_name
def _get_task_name(self, asset_doc):
"""Get applicable task from 'asset_doc' """
available_task_names = {}
asset_tasks = asset_doc.get("data", {}).get("tasks") or {}
for task_name in asset_tasks.keys():
available_task_names[task_name.lower()] = task_name
task_name = None
for _task_name in self.default_tasks:
_task_name_low = _task_name.lower()
if _task_name_low in available_task_names:
task_name = available_task_names[_task_name_low]
break
return task_name
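The task fallback above is a case-insensitive priority lookup over the asset's tasks; a compact standalone version with made-up fixtures (task names below are illustrative):

# Made-up fixtures: tasks on the asset vs. preferred defaults.
asset_tasks = {"Compositing": {}, "Edit": {}}
default_tasks = ["edit", "compositing"]

available = {name.lower(): name for name in asset_tasks}
task_name = next(
    (available[t.lower()] for t in default_tasks if t.lower() in available),
    None,
)
print(task_name)  # 'Edit' - first default that exists on the asset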
def get_instance_attr_defs(self):
return [
BoolDef(
"add_review_family",
default=True,
label="Review"
)
]
def get_pre_create_attr_defs(self):
# Use same attributes as for instance attributes
return [
FileDef(
"filepath",
folders=False,
single_item=False,
extensions=self.extensions,
label="Filepath"
),
BoolDef(
"add_review_family",
default=True,
label="Review"
)
]
def get_detail_description(self):
return """# Publish batch of .mov to multiple assets.
File names must then contain only asset name, or asset name + version.
(eg. 'chair.mov', 'chair_v001.mov', not really safe `my_chair_v001.mov`
"""
def _get_asset_by_name_case_not_sensitive(self, project_name, asset_name):
"""Handle more cases in file names"""
asset_name_regex = re.compile(re.escape(asset_name), re.IGNORECASE)
assets = list(get_assets(project_name, asset_names=[asset_name_regex]))
if assets:
if len(assets) > 1:
self.log.warning("Too many records found for {}".format(
asset_name))
return
return assets.pop()
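Matching here is a compiled case-insensitive regex rather than plain string equality, and MongoDB regex queries are unanchored, so substring hits count too; a small illustration with made-up names showing why the "too many records" branch exists:

import re

existing = ["Chair", "chair_old", "Table"]
pattern = re.compile(re.escape("CHAIR"), re.IGNORECASE)

# Unanchored, like a MongoDB regex query: substring matches count too,
# which is why the method warns when more than one record comes back.
matches = [name for name in existing if pattern.search(name)]
print(matches)  # ['Chair', 'chair_old']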

View file

@ -0,0 +1,47 @@
import os
import pyblish.api
from openpype.pipeline import OpenPypePyblishPluginMixin
class CollectMovieBatch(
pyblish.api.InstancePlugin, OpenPypePyblishPluginMixin
):
"""Collect file url for batch movies and create representation.
Adds review on instance and to repre.tags based on value of toggle button
on creator.
"""
label = "Collect Movie Batch Files"
order = pyblish.api.CollectorOrder
hosts = ["traypublisher"]
def process(self, instance):
if instance.data.get("creator_identifier") != "render_movie_batch":
return
creator_attributes = instance.data["creator_attributes"]
file_url = creator_attributes["filepath"]
file_name = os.path.basename(file_url)
_, ext = os.path.splitext(file_name)
repre = {
"name": ext[1:],
"ext": ext[1:],
"files": file_name,
"stagingDir": os.path.dirname(file_url),
"tags": []
}
if creator_attributes["add_review_family"]:
repre["tags"].append("review")
instance.data["families"].append("review")
instance.data["representations"].append(repre)
instance.data["source"] = file_url
self.log.debug("instance.data {}".format(instance.data))

View file

@ -1,31 +0,0 @@
import pyblish.api
from openpype.lib import BoolDef
from openpype.pipeline import OpenPypePyblishPluginMixin
class CollectReviewFamily(
pyblish.api.InstancePlugin, OpenPypePyblishPluginMixin
):
"""Add review family."""
label = "Collect Review Family"
order = pyblish.api.CollectorOrder - 0.49
hosts = ["traypublisher"]
families = [
"image",
"render",
"plate",
"review"
]
def process(self, instance):
values = self.get_attr_values_from_data(instance.data)
if values.get("add_review_family"):
instance.data["families"].append("review")
@classmethod
def get_attribute_defs(cls):
return [
BoolDef("add_review_family", label="Review", default=True)
]

Some files were not shown because too many files have changed in this diff Show more