Merge branch 'develop' into release/3.13.x

This commit is contained in:
Milan Kolar 2022-07-22 08:13:52 +02:00
commit 799f0c8c2f
213 changed files with 7979 additions and 4366 deletions

.gitmodules vendored Normal file

@ -0,0 +1,7 @@
[submodule "tools/modules/powershell/BurntToast"]
path = tools/modules/powershell/BurntToast
url = https://github.com/Windos/BurntToast.git
[submodule "tools/modules/powershell/PSWriteColor"]
path = tools/modules/powershell/PSWriteColor
url = https://github.com/EvotecIT/PSWriteColor.git


@ -1,8 +1,99 @@
# Changelog
## [3.12.2-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.12.1...HEAD)
**🚀 Enhancements**
- General: Interactive console in cli [\#3526](https://github.com/pypeclub/OpenPype/pull/3526)
- Ftrack: Automatic daily review session creation can define trigger hour [\#3516](https://github.com/pypeclub/OpenPype/pull/3516)
- Ftrack: add source into Note [\#3509](https://github.com/pypeclub/OpenPype/pull/3509)
- Ftrack: Trigger custom ftrack topic of project structure creation [\#3506](https://github.com/pypeclub/OpenPype/pull/3506)
- Settings UI: Add extract to file action on project view [\#3505](https://github.com/pypeclub/OpenPype/pull/3505)
- Add pack and unpack convenience scripts [\#3502](https://github.com/pypeclub/OpenPype/pull/3502)
- General: Event system [\#3499](https://github.com/pypeclub/OpenPype/pull/3499)
- NewPublisher: Keep plugins with mismatch target in report [\#3498](https://github.com/pypeclub/OpenPype/pull/3498)
- Nuke: load clip with options from settings [\#3497](https://github.com/pypeclub/OpenPype/pull/3497)
- TrayPublisher: implemented render\_mov\_batch [\#3486](https://github.com/pypeclub/OpenPype/pull/3486)
- Migrate basic families to the new Tray Publisher [\#3469](https://github.com/pypeclub/OpenPype/pull/3469)
**🐛 Bug fixes**
- Additional fixes for powershell scripts [\#3525](https://github.com/pypeclub/OpenPype/pull/3525)
- Maya: Added wrapper around cmds.setAttr [\#3523](https://github.com/pypeclub/OpenPype/pull/3523)
- General: Fix hash of centos oiio archive [\#3519](https://github.com/pypeclub/OpenPype/pull/3519)
- Maya: Renderman display output fix [\#3514](https://github.com/pypeclub/OpenPype/pull/3514)
- TrayPublisher: Simple creation enhancements and fixes [\#3513](https://github.com/pypeclub/OpenPype/pull/3513)
- NewPublisher: Publish attributes are properly collected [\#3510](https://github.com/pypeclub/OpenPype/pull/3510)
- TrayPublisher: Make sure host name is filled [\#3504](https://github.com/pypeclub/OpenPype/pull/3504)
- NewPublisher: Groups work and enum multivalue [\#3501](https://github.com/pypeclub/OpenPype/pull/3501)
**🔀 Refactored code**
- General: Client docstrings cleanup [\#3529](https://github.com/pypeclub/OpenPype/pull/3529)
- TimersManager: Use query functions [\#3495](https://github.com/pypeclub/OpenPype/pull/3495)
## [3.12.1](https://github.com/pypeclub/OpenPype/tree/3.12.1) (2022-07-13)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.1-nightly.6...3.12.1)
### 📖 Documentation
- Docs: Added minimal permissions for MongoDB [\#3441](https://github.com/pypeclub/OpenPype/pull/3441)
**🆕 New features**
- Maya: Add VDB to Arnold loader [\#3433](https://github.com/pypeclub/OpenPype/pull/3433)
**🚀 Enhancements**
- TrayPublisher: Added more options for grouping of instances [\#3494](https://github.com/pypeclub/OpenPype/pull/3494)
- NewPublisher: Align creator attributes from top to bottom [\#3487](https://github.com/pypeclub/OpenPype/pull/3487)
- NewPublisher: Added ability to use label of instance [\#3484](https://github.com/pypeclub/OpenPype/pull/3484)
- General: Creator Plugins have access to project [\#3476](https://github.com/pypeclub/OpenPype/pull/3476)
- General: Better arguments order in creator init [\#3475](https://github.com/pypeclub/OpenPype/pull/3475)
- Ftrack: Trigger custom ftrack events on project creation and preparation [\#3465](https://github.com/pypeclub/OpenPype/pull/3465)
- Windows installer: Clean old files and add version subfolder [\#3445](https://github.com/pypeclub/OpenPype/pull/3445)
- Blender: Bugfix - Set fps properly on open [\#3426](https://github.com/pypeclub/OpenPype/pull/3426)
- Hiero: Add custom scripts menu [\#3425](https://github.com/pypeclub/OpenPype/pull/3425)
- Blender: pre pyside install for all platforms [\#3400](https://github.com/pypeclub/OpenPype/pull/3400)
**🐛 Bug fixes**
- TrayPublisher: Keep use instance label in list view [\#3493](https://github.com/pypeclub/OpenPype/pull/3493)
- General: Extract review use first frame of input sequence [\#3491](https://github.com/pypeclub/OpenPype/pull/3491)
- General: Fix Plist loading for application launch [\#3485](https://github.com/pypeclub/OpenPype/pull/3485)
- Nuke: Workfile tools open on start [\#3479](https://github.com/pypeclub/OpenPype/pull/3479)
- New Publisher: Disabled context change allows creation [\#3478](https://github.com/pypeclub/OpenPype/pull/3478)
- General: thumbnail extractor fix [\#3474](https://github.com/pypeclub/OpenPype/pull/3474)
- Kitsu: bugfix with sync-service and publish plugins [\#3473](https://github.com/pypeclub/OpenPype/pull/3473)
- Flame: solved problem with multi-selected loading [\#3470](https://github.com/pypeclub/OpenPype/pull/3470)
- General: Fix query function in update logic [\#3468](https://github.com/pypeclub/OpenPype/pull/3468)
- Resolve: removed few bugs [\#3464](https://github.com/pypeclub/OpenPype/pull/3464)
- General: Delete old versions is safer when ftrack is disabled [\#3462](https://github.com/pypeclub/OpenPype/pull/3462)
- Nuke: fixing metadata slate TC difference [\#3455](https://github.com/pypeclub/OpenPype/pull/3455)
- Nuke: prerender reviewable fails [\#3450](https://github.com/pypeclub/OpenPype/pull/3450)
- Maya: fix hashing in Python 3 for tile rendering [\#3447](https://github.com/pypeclub/OpenPype/pull/3447)
- LogViewer: Escape html characters in log message [\#3443](https://github.com/pypeclub/OpenPype/pull/3443)
- Nuke: Slate frame is integrated [\#3427](https://github.com/pypeclub/OpenPype/pull/3427)
**🔀 Refactored code**
- Maya: Merge animation + pointcache extractor logic [\#3461](https://github.com/pypeclub/OpenPype/pull/3461)
- Maya: Re-use `maintained\_time` from lib [\#3460](https://github.com/pypeclub/OpenPype/pull/3460)
- General: Use query functions in global plugins [\#3459](https://github.com/pypeclub/OpenPype/pull/3459)
- Clockify: Use query functions in clockify actions [\#3458](https://github.com/pypeclub/OpenPype/pull/3458)
- General: Use query functions in rest api calls [\#3457](https://github.com/pypeclub/OpenPype/pull/3457)
- General: Use query functions in openpype lib functions [\#3454](https://github.com/pypeclub/OpenPype/pull/3454)
- General: Use query functions in load utils [\#3446](https://github.com/pypeclub/OpenPype/pull/3446)
- General: Move publish plugin and publish render abstractions [\#3442](https://github.com/pypeclub/OpenPype/pull/3442)
- General: Use Anatomy after move to pipeline [\#3436](https://github.com/pypeclub/OpenPype/pull/3436)
- General: Anatomy moved to pipeline [\#3435](https://github.com/pypeclub/OpenPype/pull/3435)
## [3.12.0](https://github.com/pypeclub/OpenPype/tree/3.12.0) (2022-06-28)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.11.1...3.12.0)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.0-nightly.3...3.12.0)
### 📖 Documentation
@ -13,10 +104,6 @@
- Webserver: Added CORS middleware [\#3422](https://github.com/pypeclub/OpenPype/pull/3422)
- Attribute Defs UI: Files widget show what is allowed to drop in [\#3411](https://github.com/pypeclub/OpenPype/pull/3411)
- General: Add ability to change user value for templates [\#3366](https://github.com/pypeclub/OpenPype/pull/3366)
- Hosts: More options for in-host callbacks [\#3357](https://github.com/pypeclub/OpenPype/pull/3357)
- Multiverse: expose some settings to GUI [\#3350](https://github.com/pypeclub/OpenPype/pull/3350)
- Maya: Allow more data to be published along camera 🎥 [\#3304](https://github.com/pypeclub/OpenPype/pull/3304)
**🐛 Bug fixes**
@ -27,11 +114,6 @@
- Nuke: Collect representation files based on Write [\#3407](https://github.com/pypeclub/OpenPype/pull/3407)
- General: Filter representations before integration start [\#3398](https://github.com/pypeclub/OpenPype/pull/3398)
- Maya: look collector typo [\#3392](https://github.com/pypeclub/OpenPype/pull/3392)
- TVPaint: Make sure exit code is set to not None [\#3382](https://github.com/pypeclub/OpenPype/pull/3382)
- Maya: vray device aspect ratio fix [\#3381](https://github.com/pypeclub/OpenPype/pull/3381)
- Harmony: added unc path to zipfile command in Harmony [\#3372](https://github.com/pypeclub/OpenPype/pull/3372)
- Standalone: settings improvements [\#3355](https://github.com/pypeclub/OpenPype/pull/3355)
- Nuke: Load full model hierarchy by default [\#3328](https://github.com/pypeclub/OpenPype/pull/3328)
**🔀 Refactored code**
@ -41,97 +123,15 @@
- Houdini: Use client query functions [\#3395](https://github.com/pypeclub/OpenPype/pull/3395)
- Hiero: Use client query functions [\#3393](https://github.com/pypeclub/OpenPype/pull/3393)
- Nuke: Use client query functions [\#3391](https://github.com/pypeclub/OpenPype/pull/3391)
- Maya: Use client query functions [\#3385](https://github.com/pypeclub/OpenPype/pull/3385)
- Harmony: Use client query functions [\#3378](https://github.com/pypeclub/OpenPype/pull/3378)
- Celaction: Use client query functions [\#3376](https://github.com/pypeclub/OpenPype/pull/3376)
- Photoshop: Use client query functions [\#3375](https://github.com/pypeclub/OpenPype/pull/3375)
- AfterEffects: Use client query functions [\#3374](https://github.com/pypeclub/OpenPype/pull/3374)
- TVPaint: Use client query functions [\#3340](https://github.com/pypeclub/OpenPype/pull/3340)
- Ftrack: Use client query functions [\#3339](https://github.com/pypeclub/OpenPype/pull/3339)
- Standalone Publisher: Use client query functions [\#3330](https://github.com/pypeclub/OpenPype/pull/3330)
**Merged pull requests:**
- Sync Queue: Added far future value for null values for dates [\#3371](https://github.com/pypeclub/OpenPype/pull/3371)
- Maya - added support for single frame playblast review [\#3369](https://github.com/pypeclub/OpenPype/pull/3369)
## [3.11.1](https://github.com/pypeclub/OpenPype/tree/3.11.1) (2022-06-20)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.11.1-nightly.1...3.11.1)
**🆕 New features**
- Flame: custom export temp folder [\#3346](https://github.com/pypeclub/OpenPype/pull/3346)
- Nuke: removing third-party plugins [\#3344](https://github.com/pypeclub/OpenPype/pull/3344)
**🚀 Enhancements**
- Pyblish Pype: Hiding/Close issues [\#3367](https://github.com/pypeclub/OpenPype/pull/3367)
- Ftrack: Removed requirement of pypeclub role from default settings [\#3354](https://github.com/pypeclub/OpenPype/pull/3354)
- Kitsu: Prevent crash on missing frames information [\#3352](https://github.com/pypeclub/OpenPype/pull/3352)
- Ftrack: Open browser from tray [\#3320](https://github.com/pypeclub/OpenPype/pull/3320)
**🐛 Bug fixes**
- Flame: bunch of publishing issues [\#3377](https://github.com/pypeclub/OpenPype/pull/3377)
- Nuke: bake streams with slate on farm [\#3368](https://github.com/pypeclub/OpenPype/pull/3368)
- Harmony: audio validator has wrong logic [\#3364](https://github.com/pypeclub/OpenPype/pull/3364)
- Nuke: Fix missing variable in extract thumbnail [\#3363](https://github.com/pypeclub/OpenPype/pull/3363)
- Nuke: Fix precollect writes [\#3361](https://github.com/pypeclub/OpenPype/pull/3361)
- AE- fix validate\_scene\_settings and renderLocal [\#3358](https://github.com/pypeclub/OpenPype/pull/3358)
- deadline: fixing misidentification of reviewables [\#3356](https://github.com/pypeclub/OpenPype/pull/3356)
- General: Create only one thumbnail per instance [\#3351](https://github.com/pypeclub/OpenPype/pull/3351)
- nuke: adding extract thumbnail settings 3.10 [\#3347](https://github.com/pypeclub/OpenPype/pull/3347)
- General: Fix last version function [\#3345](https://github.com/pypeclub/OpenPype/pull/3345)
- Deadline: added OPENPYPE\_MONGO to filter [\#3336](https://github.com/pypeclub/OpenPype/pull/3336)
- Nuke: fixing farm publishing if review is disabled [\#3306](https://github.com/pypeclub/OpenPype/pull/3306)
**🔀 Refactored code**
- Webpublisher: Use client query functions [\#3333](https://github.com/pypeclub/OpenPype/pull/3333)
## [3.11.0](https://github.com/pypeclub/OpenPype/tree/3.11.0) (2022-06-17)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.11.0-nightly.4...3.11.0)
### 📖 Documentation
- Documentation: Add app key to template documentation [\#3299](https://github.com/pypeclub/OpenPype/pull/3299)
- doc: adding royal render and multiverse to the web site [\#3285](https://github.com/pypeclub/OpenPype/pull/3285)
**🚀 Enhancements**
- Settings: Settings can be extracted from UI [\#3323](https://github.com/pypeclub/OpenPype/pull/3323)
- updated poetry installation source [\#3316](https://github.com/pypeclub/OpenPype/pull/3316)
- Ftrack: Action to easily create daily review session [\#3310](https://github.com/pypeclub/OpenPype/pull/3310)
- TVPaint: Extractor use mark in/out range to render [\#3309](https://github.com/pypeclub/OpenPype/pull/3309)
- Ftrack: Delivery action can work on ReviewSessions [\#3307](https://github.com/pypeclub/OpenPype/pull/3307)
- Maya: Look assigner UI improvements [\#3298](https://github.com/pypeclub/OpenPype/pull/3298)
- Ftrack: Action to transfer values of hierarchical attributes [\#3284](https://github.com/pypeclub/OpenPype/pull/3284)
**🐛 Bug fixes**
- General: Handle empty source key on instance [\#3342](https://github.com/pypeclub/OpenPype/pull/3342)
- Houdini: Fix Houdini VDB manage update wrong file attribute name [\#3322](https://github.com/pypeclub/OpenPype/pull/3322)
- Nuke: anatomy compatibility issue hacks [\#3321](https://github.com/pypeclub/OpenPype/pull/3321)
- hiero: otio p3 compatibility issue - metadata on effect use update 3.11 [\#3314](https://github.com/pypeclub/OpenPype/pull/3314)
- General: Vendorized modules for Python 2 and update poetry lock [\#3305](https://github.com/pypeclub/OpenPype/pull/3305)
- Fix - added local targets to install host [\#3303](https://github.com/pypeclub/OpenPype/pull/3303)
- Settings: Add missing default settings for nuke gizmo [\#3301](https://github.com/pypeclub/OpenPype/pull/3301)
- Maya: Fix swapped width and height in reviews [\#3300](https://github.com/pypeclub/OpenPype/pull/3300)
- Maya: point cache publish handles Maya instances [\#3297](https://github.com/pypeclub/OpenPype/pull/3297)
- Global: extract review slate issues [\#3286](https://github.com/pypeclub/OpenPype/pull/3286)
- Webpublisher: return only active projects in ProjectsEndpoint [\#3281](https://github.com/pypeclub/OpenPype/pull/3281)
**🔀 Refactored code**
- Blender: Use client query functions [\#3331](https://github.com/pypeclub/OpenPype/pull/3331)
- General: Define query functions [\#3288](https://github.com/pypeclub/OpenPype/pull/3288)
**Merged pull requests:**
- Maya: add pointcache family to gpu cache loader [\#3318](https://github.com/pypeclub/OpenPype/pull/3318)
## [3.10.0](https://github.com/pypeclub/OpenPype/tree/3.10.0) (2022-05-26)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.10.0-nightly.6...3.10.0)


@ -18,7 +18,8 @@ AppPublisher=Orbi Tools s.r.o
AppPublisherURL=http://pype.club
AppSupportURL=http://pype.club
AppUpdatesURL=http://pype.club
DefaultDirName={autopf}\{#MyAppName}
DefaultDirName={autopf}\{#MyAppName}\{#AppVer}
UsePreviousAppDir=no
DisableProgramGroupPage=yes
OutputBaseFilename={#MyAppName}-{#AppVer}-install
AllowCancelDuringInstall=yes
@ -27,7 +28,7 @@ AllowCancelDuringInstall=yes
PrivilegesRequiredOverridesAllowed=dialog
SetupIconFile=igniter\openpype.ico
OutputDir=build\
Compression=lzma
Compression=lzma2
SolidCompression=yes
WizardStyle=modern
@ -37,6 +38,11 @@ Name: "english"; MessagesFile: "compiler:Default.isl"
[Tasks]
Name: "desktopicon"; Description: "{cm:CreateDesktopIcon}"; GroupDescription: "{cm:AdditionalIcons}"; Flags: unchecked
[InstallDelete]
; clean everything in previous installation folder
Type: filesandordirs; Name: "{app}\*"
[Files]
Source: "build\{#build}\*"; DestDir: "{app}"; Flags: ignoreversion recursesubdirs createallsubdirs
; NOTE: Don't use "Flags: ignoreversion" on any shared system files


@ -15,7 +15,6 @@ from .lib import (
run_subprocess,
version_up,
get_asset,
get_hierarchy,
get_workdir_data,
get_version_from_path,
get_last_version_from_path,
@ -101,7 +100,6 @@ __all__ = [
# get contextual data
"version_up",
"get_asset",
"get_hierarchy",
"get_workdir_data",
"get_version_from_path",
"get_last_version_from_path",


@ -2,7 +2,7 @@
"""Package for handling pype command line arguments."""
import os
import sys
import code
import click
# import sys
@ -424,3 +424,22 @@ def pack_project(project, dirpath):
def unpack_project(zipfile, root):
"""Create a package of project with all files and database dump."""
PypeCommands().unpack_project(zipfile, root)
@main.command()
def interactive():
"""Interative (Python like) console.
Helpfull command not only for development to directly work with python
interpreter.
Warning:
Executable 'openpype_gui' on windows won't work.
"""
from openpype.version import __version__
banner = "OpenPype {}\nPython {} on {}".format(
__version__, sys.version, sys.platform
)
code.interact(banner)
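
For reference, here is a minimal standalone sketch of what the command wraps. The `local` argument is part of the standard-library `code.interact` signature and is shown only for illustration; the command above does not pass it:

```python
import code
import sys

banner = "OpenPype dev console\nPython {} on {}".format(sys.version, sys.platform)
# 'local' seeds the interactive namespace; omitting it (as the CLI command
# above does) starts the session with a default, empty namespace.
code.interact(banner=banner, local={"sys": sys})
```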


@ -1,6 +1,7 @@
from .entities import (
get_projects,
get_project,
get_whole_project,
get_asset_by_id,
get_asset_by_name,
@ -29,15 +30,19 @@ from .entities import (
get_representations,
get_representation_parents,
get_representations_parents,
get_archived_representations,
get_thumbnail,
get_thumbnails,
get_thumbnail_id_from_source,
get_workfile_info,
)
__all__ = (
"get_projects",
"get_project",
"get_whole_project",
"get_asset_by_id",
"get_asset_by_name",
@ -66,8 +71,11 @@ __all__ = (
"get_representations",
"get_representation_parents",
"get_representations_parents",
"get_archived_representations",
"get_thumbnail",
"get_thumbnails",
"get_thumbnail_id_from_source",
"get_workfile_info",
)

File diff suppressed because it is too large


@ -1,11 +1,10 @@
from openpype.api import Anatomy
from openpype.lib import (
PreLaunchHook,
EnvironmentPrepData,
prepare_app_environments,
prepare_context_environments
)
from openpype.pipeline import AvalonMongoDB
from openpype.pipeline import AvalonMongoDB, Anatomy
class GlobalHostDataHook(PreLaunchHook):

openpype/host/__init__.py Normal file

@ -0,0 +1,13 @@
from .host import (
HostBase,
IWorkfileHost,
ILoadHost,
INewPublisher,
)
__all__ = (
"HostBase",
"IWorkfileHost",
"ILoadHost",
"INewPublisher",
)

openpype/host/host.py Normal file

@ -0,0 +1,524 @@
import logging
import contextlib
from abc import ABCMeta, abstractproperty, abstractmethod
import six
# NOTE can't import 'typing' because of issues in Maya 2020
# - shiboken crashes on 'typing' module import
class MissingMethodsError(ValueError):
"""Exception when host miss some required methods for specific workflow.
Args:
host (HostBase): Host implementation where are missing methods.
missing_methods (list[str]): List of missing methods.
"""
def __init__(self, host, missing_methods):
joined_missing = ", ".join(
['"{}"'.format(item) for item in missing_methods]
)
message = (
"Host \"{}\" miss methods {}".format(host.name, joined_missing)
)
super(MissingMethodsError, self).__init__(message)
@six.add_metaclass(ABCMeta)
class HostBase(object):
"""Base of host implementation class.
Host is pipeline implementation of DCC application. This class should help
to identify what must/should/can be implemented for specific functionality.
Compared to 'avalon' concept:
What was before considered as functions in host implementation folder. The
host implementation should primarily care about adding ability of creation
(mark subsets to be published) and optionaly about referencing published
representations as containers.
Host may need extend some functionality like working with workfiles
or loading. Not all host implementations may allow that for those purposes
can be logic extended with implementing functions for the purpose. There
are prepared interfaces to be able identify what must be implemented to
be able use that functionality.
- current statement is that it is not required to inherit from interfaces
but all of the methods are validated (only their existence!)
# Installation of host before (avalon concept):
```python
from openpype.pipeline import install_host
import openpype.hosts.maya.api as host
install_host(host)
```
# Installation of host now:
```python
from openpype.pipeline import install_host
from openpype.hosts.maya.api import MayaHost
host = MayaHost()
install_host(host)
```
Todo:
- move content of 'install_host' as method of this class
- register host object
- install legacy_io
- install global plugin paths
- store registered plugin paths to this object
- handle current context (project, asset, task)
- this must be done in many separated steps
- have its own object of host tools instead of using globals
This implementation will probably change over time as more functionality
and responsibility is added.
"""
_log = None
def __init__(self):
"""Initialization of host.
Register DCC callbacks, host specific plugin paths, targets etc.
(Part of what 'install' did in 'avalon' concept.)
Note:
At this moment the global "installation" must happen before the host
installation. Because of this limitation it is recommended to implement
an 'install' method which is triggered after the global 'install'.
"""
pass
@property
def log(self):
if self._log is None:
self._log = logging.getLogger(self.__class__.__name__)
return self._log
@abstractproperty
def name(self):
"""Host name."""
pass
def get_current_context(self):
"""Get current context information.
This method should be used to get the current context of the host. Usage
of this method can be crucial for host implementations in DCCs where
multiple workfiles can be opened at the same time and a change of context
can't be caught properly.
Default implementation returns values from 'legacy_io.Session'.
Returns:
dict: Context with 3 keys 'project_name', 'asset_name' and
'task_name'. All of them can be 'None'.
"""
from openpype.pipeline import legacy_io
if not legacy_io.is_installed():
legacy_io.install()
return {
"project_name": legacy_io.Session["AVALON_PROJECT"],
"asset_name": legacy_io.Session["AVALON_ASSET"],
"task_name": legacy_io.Session["AVALON_TASK"]
}
def get_context_title(self):
"""Context title shown for UI purposes.
Should return current context title if possible.
Note:
This method is used only for UI purposes so it is possible to
return some logical title for contextless cases.
It is not meant for a "Context menu" label.
Returns:
str: Context title.
None: Default title is used based on UI implementation.
"""
# Use current context to fill the context title
current_context = self.get_current_context()
project_name = current_context["project_name"]
asset_name = current_context["asset_name"]
task_name = current_context["task_name"]
items = []
if project_name:
items.append(project_name)
if asset_name:
items.append(asset_name)
if task_name:
items.append(task_name)
if items:
return "/".join(items)
return None
@contextlib.contextmanager
def maintained_selection(self):
"""Some functionlity will happen but selection should stay same.
This is DCC specific. Some may not allow to implement this ability
that is reason why default implementation is empty context manager.
Yields:
None: Yield when is ready to restore selected at the end.
"""
try:
yield
finally:
pass
class ILoadHost:
"""Implementation requirements to be able use reference of representations.
The load plugins can do referencing even without implementation of methods
here, but switch and removement of containers would not be possible.
Questions:
- Is list container dependency of host or load plugins?
- Should this be directly in HostBase?
- how to find out if referencing is available?
- do we need to know that?
"""
@staticmethod
def get_missing_load_methods(host):
"""Look for missing methods on "old type" host implementation.
Method is used for validation of implemented functions related to
loading. Checks only existence of methods.
Args:
host (Union[ModuleType, HostBase]): Object of host where to look for
required methods.
Returns:
list[str]: Missing method implementations for loading workflow.
"""
if isinstance(host, ILoadHost):
return []
required = ["ls"]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
return missing
@staticmethod
def validate_load_methods(host):
"""Validate implemented methods of "old type" host for load workflow.
Args:
host (Union[ModuleType, HostBase]): Object of host to validate.
Raises:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = ILoadHost.get_missing_load_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@abstractmethod
def get_containers(self):
"""Retreive referenced containers from scene.
This can be implemented in hosts where referencing can be used.
Todo:
Rename function to something more self explanatory.
Suggestion: 'get_containers'
Returns:
list[dict]: Information about loaded containers.
"""
pass
# --- Deprecated method names ---
def ls(self):
"""Deprecated variant of 'get_containers'.
Todo:
Remove when all usages are replaced.
"""
return self.get_containers()
@six.add_metaclass(ABCMeta)
class IWorkfileHost:
"""Implementation requirements to be able use workfile utils and tool."""
@staticmethod
def get_missing_workfile_methods(host):
"""Look for missing methods on "old type" host implementation.
Method is used for validation of implemented functions related to
workfiles. Checks only existence of methods.
Args:
host (Union[ModuleType, HostBase]): Object of host where to look for
required methods.
Returns:
list[str]: Missing method implementations for workfiles workflow.
"""
if isinstance(host, IWorkfileHost):
return []
required = [
"open_file",
"save_file",
"current_file",
"has_unsaved_changes",
"file_extensions",
"work_root",
]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
return missing
@staticmethod
def validate_workfile_methods(host):
"""Validate methods of "old type" host for workfiles workflow.
Args:
host (Union[ModuleType, HostBase]): Object of host to validate.
Raises:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = IWorkfileHost.get_missing_workfile_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@abstractmethod
def get_workfile_extensions(self):
"""Extensions that can be used as save.
Questions:
This could potentially use 'HostDefinition'.
"""
return []
@abstractmethod
def save_workfile(self, dst_path=None):
"""Save currently opened scene.
Args:
dst_path (str): Where the current scene should be saved. Or use
current path if 'None' is passed.
"""
pass
@abstractmethod
def open_workfile(self, filepath):
"""Open passed filepath in the host.
Args:
filepath (str): Path to workfile.
"""
pass
@abstractmethod
def get_current_workfile(self):
"""Retreive path to current opened file.
Returns:
str: Path to file which is currently opened.
None: If nothing is opened.
"""
return None
def workfile_has_unsaved_changes(self):
"""Currently opened scene is saved.
Not all hosts can know if current scene is saved because the API of
DCC does not support it.
Returns:
bool: True if scene is saved and False if has unsaved
modifications.
None: Can't tell if workfiles has modifications.
"""
return None
def work_root(self, session):
"""Modify workdir per host.
Default implementation keeps workdir untouched.
Warnings:
We must handle this modification in a more sophisticated way because
it can't be called outside the DCC, so opening the last workfile
(calculated before the DCC is launched) is complicated. Also, breaking
the defined work template is not a good idea.
The only place where it's really used and makes sense is Maya, where
workspace.mel can modify the subfolders in which to look for maya files.
Args:
session (dict): Session context data.
Returns:
str: Path to new workdir.
"""
return session["AVALON_WORKDIR"]
# --- Deprecated method names ---
def file_extensions(self):
"""Deprecated variant of 'get_workfile_extensions'.
Todo:
Remove when all usages are replaced.
"""
return self.get_workfile_extensions()
def save_file(self, dst_path=None):
"""Deprecated variant of 'save_workfile'.
Todo:
Remove when all usages are replaced.
"""
self.save_workfile(dst_path)
def open_file(self, filepath):
"""Deprecated variant of 'open_workfile'.
Todo:
Remove when all usages are replaced.
"""
return self.open_workfile(filepath)
def current_file(self):
"""Deprecated variant of 'get_current_workfile'.
Todo:
Remove when all usages are replaced.
"""
return self.get_current_workfile()
def has_unsaved_changes(self):
"""Deprecated variant of 'workfile_has_unsaved_changes'.
Todo:
Remove when all usages are replaced.
"""
return self.workfile_has_unsaved_changes()
class INewPublisher:
"""Functions related to new creation system in new publisher.
New publisher is not storing information only about each created instance
but also some global data. At this moment are data related only to context
publish plugins but that can extend in future.
"""
@staticmethod
def get_missing_publish_methods(host):
"""Look for missing methods on "old type" host implementation.
Method is used for validation of implemented functions related to
new publish creation. Checks only existence of methods.
Args:
host (Union[ModuleType, HostBase]): Host module where to look for
required methods.
Returns:
list[str]: Missing method implementations for the new publisher
workflow.
"""
if isinstance(host, INewPublisher):
return []
required = [
"get_context_data",
"update_context_data",
]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
return missing
@staticmethod
def validate_publish_methods(host):
"""Validate implemented methods of "old type" host.
Args:
host (Union[ModuleType, HostBase]): Host module to validate.
Raises:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = INewPublisher.get_missing_publish_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@abstractmethod
def get_context_data(self):
"""Get global data related to creation-publishing from workfile.
These data are not related to any created instance but to the whole
publishing context. If they are not saved and returned, each reset of
publishing will reset all values to their defaults.
Context data can contain information about enabled/disabled publish
plugins or other values that can be filled by artist.
Returns:
dict: Context data stored using 'update_context_data'.
"""
pass
@abstractmethod
def update_context_data(self, data, changes):
"""Store global context data to workfile.
Called when some values in the context data have changed.
Without storing the values in a way that 'get_context_data' can return
them, each reset of publishing will lose the values the artist filled in.
Best practice is to store the values into the workfile, if possible.
Args:
data (dict): All new data, in full.
changes (dict): Only data that has been changed. Each value has
tuple with '(<old>, <new>)' value.
"""
pass
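
To make the contracts above concrete, here is a minimal sketch of a hypothetical host combining `HostBase` with `INewPublisher`. The in-memory storage of context data is purely illustrative; a real host should persist it into the workfile as the docstrings recommend:

```python
from openpype.host import HostBase, INewPublisher


class ExampleHost(HostBase, INewPublisher):
    """Hypothetical host used only to illustrate the interfaces."""

    name = "example"  # satisfies the abstract 'name' property

    def __init__(self):
        super(ExampleHost, self).__init__()
        self._context_data = {}

    def get_context_data(self):
        # A real host would read this back from the workfile.
        return dict(self._context_data)

    def update_context_data(self, data, changes):
        # A real host would write 'data' into the workfile here.
        self._context_data = dict(data)
```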


@ -17,11 +17,8 @@ class RenderCreator(Creator):
create_allow_context_change = True
def __init__(
self, create_context, system_settings, project_settings, headless=False
):
super(RenderCreator, self).__init__(create_context, system_settings,
project_settings, headless)
def __init__(self, project_settings, *args, **kwargs):
super(RenderCreator, self).__init__(project_settings, *args, **kwargs)
self._default_variants = (project_settings["aftereffects"]
["create"]
["RenderCreator"]


@ -6,8 +6,8 @@ import attr
import pyblish.api
from openpype.settings import get_project_settings
from openpype.lib import abstract_collect_render
from openpype.lib.abstract_collect_render import RenderInstance
from openpype.pipeline import publish
from openpype.pipeline.publish import RenderInstance
from openpype.hosts.aftereffects.api import get_stub
@ -25,7 +25,7 @@ class AERenderInstance(RenderInstance):
file_name = attr.ib(default=None)
class CollectAERender(abstract_collect_render.AbstractCollectRender):
class CollectAERender(publish.AbstractCollectRender):
order = pyblish.api.CollectorOrder + 0.405
label = "Collect After Effects Render Layers"


@ -93,7 +93,7 @@ def set_start_end_frames():
# Default scene settings
frameStart = scene.frame_start
frameEnd = scene.frame_end
fps = scene.render.fps
fps = scene.render.fps / scene.render.fps_base
resolution_x = scene.render.resolution_x
resolution_y = scene.render.resolution_y
@ -116,7 +116,8 @@ def set_start_end_frames():
scene.frame_start = frameStart
scene.frame_end = frameEnd
scene.render.fps = fps
scene.render.fps = round(fps)
scene.render.fps_base = round(fps) / fps
scene.render.resolution_x = resolution_x
scene.render.resolution_y = resolution_y
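
Blender encodes fractional frame rates as the pair `fps` and `fps_base`; the effective rate is their quotient `fps / fps_base`. A worked example of the round-trip used above, with 23.976 fps:

```python
fps = 23.976

# Store an integer fps plus a base that scales it back down, mirroring the
# assignments above.
fps_int = round(fps)         # 24
fps_base = round(fps) / fps  # ~1.001001

# The effective rate Blender reports is fps / fps_base.
assert abs(fps_int / fps_base - fps) < 1e-9
```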


@ -1,6 +1,7 @@
import os
import re
import subprocess
from platform import system
from openpype.lib import PreLaunchHook
@ -13,12 +14,9 @@ class InstallPySideToBlender(PreLaunchHook):
The pipeline implementation requires a Qt binding installed in
blender's python packages.
Prelaunch hook can work only on Windows right now.
"""
app_groups = ["blender"]
platforms = ["windows"]
def execute(self):
# Prelaunch hook is not crucial
@ -34,25 +32,28 @@ class InstallPySideToBlender(PreLaunchHook):
# Get blender's python directory
version_regex = re.compile(r"^[2-3]\.[0-9]+$")
platform = system().lower()
executable = self.launch_context.executable.executable_path
if os.path.basename(executable).lower() != "blender.exe":
expected_executable = "blender"
if platform == "windows":
expected_executable += ".exe"
if os.path.basename(executable).lower() != expected_executable:
self.log.info((
"Executable does not lead to blender.exe file. Can't determine"
" blender's python to check/install PySide2."
f"Executable does not lead to {expected_executable} file."
"Can't determine blender's python to check/install PySide2."
))
return
executable_dir = os.path.dirname(executable)
versions_dir = os.path.dirname(executable)
if platform == "darwin":
versions_dir = os.path.join(
os.path.dirname(versions_dir), "Resources"
)
version_subfolders = []
for name in os.listdir(executable_dir):
fullpath = os.path.join(name, executable_dir)
if not os.path.isdir(fullpath):
continue
if not version_regex.match(name):
continue
version_subfolders.append(name)
for dir_entry in os.scandir(versions_dir):
if dir_entry.is_dir() and version_regex.match(dir_entry.name):
version_subfolders.append(dir_entry.name)
if not version_subfolders:
self.log.info(
@ -72,16 +73,21 @@ class InstallPySideToBlender(PreLaunchHook):
version_subfolder = version_subfolders[0]
pythond_dir = os.path.join(
os.path.dirname(executable),
version_subfolder,
"python"
)
python_dir = os.path.join(versions_dir, version_subfolder, "python")
python_lib = os.path.join(python_dir, "lib")
python_version = "python"
if platform != "windows":
for dir_entry in os.scandir(python_lib):
if dir_entry.is_dir() and dir_entry.name.startswith("python"):
python_lib = dir_entry.path
python_version = dir_entry.name
break
# Change PYTHONPATH to contain blender's packages as first
python_paths = [
os.path.join(pythond_dir, "lib"),
os.path.join(pythond_dir, "lib", "site-packages"),
python_lib,
os.path.join(python_lib, "site-packages"),
]
python_path = self.launch_context.env.get("PYTHONPATH") or ""
for path in python_path.split(os.pathsep):
@ -91,7 +97,15 @@ class InstallPySideToBlender(PreLaunchHook):
self.launch_context.env["PYTHONPATH"] = os.pathsep.join(python_paths)
# Get blender's python executable
python_executable = os.path.join(pythond_dir, "bin", "python.exe")
python_bin = os.path.join(python_dir, "bin")
if platform == "windows":
python_executable = os.path.join(python_bin, "python.exe")
else:
python_executable = os.path.join(python_bin, python_version)
# Check for python with enabled 'pymalloc'
if not os.path.exists(python_executable):
python_executable += "m"
if not os.path.exists(python_executable):
self.log.warning(
"Couldn't find python executable for blender. {}".format(
@ -106,7 +120,15 @@ class InstallPySideToBlender(PreLaunchHook):
return
# Install PySide2 in blender's python
self.install_pyside_windows(python_executable)
if platform == "windows":
result = self.install_pyside_windows(python_executable)
else:
result = self.install_pyside(python_executable)
if result:
self.log.info("Successfully installed PySide2 module to blender.")
else:
self.log.warning("Failed to install PySide2 module to blender.")
def install_pyside_windows(self, python_executable):
"""Install PySide2 python module to blender's python.
@ -144,19 +166,41 @@ class InstallPySideToBlender(PreLaunchHook):
lpDirectory=os.path.dirname(python_executable)
)
process_handle = process_info["hProcess"]
obj = win32event.WaitForSingleObject(
process_handle, win32event.INFINITE
)
win32event.WaitForSingleObject(process_handle, win32event.INFINITE)
returncode = win32process.GetExitCodeProcess(process_handle)
if returncode == 0:
self.log.info(
"Successfully installed PySide2 module to blender."
)
return
return returncode == 0
except pywintypes.error:
pass
self.log.warning("Failed to install PySide2 module to blender.")
def install_pyside(self, python_executable):
"""Install PySide2 python module to blender's python."""
try:
# Parameters
# - use "-m pip" as module pip to install PySide2 and argument
# "--ignore-installed" is to force install module to blender's
# site-packages and make sure it is binary compatible
args = [
python_executable,
"-m",
"pip",
"install",
"--ignore-installed",
"PySide2",
]
process = subprocess.Popen(
args, stdout=subprocess.PIPE, universal_newlines=True
)
process.communicate()
return process.returncode == 0
except PermissionError:
self.log.warning(
"Permission denied with command:"
"\"{}\".".format(" ".join(args))
)
except OSError as error:
self.log.warning(f"OS error has occurred: \"{error}\".")
except subprocess.SubprocessError:
pass
def is_pyside_installed(self, python_executable):
"""Check if PySide2 module is in blender's pip list.
@ -169,7 +213,7 @@ class InstallPySideToBlender(PreLaunchHook):
args = [python_executable, "-m", "pip", "list"]
process = subprocess.Popen(args, stdout=subprocess.PIPE)
stdout, _ = process.communicate()
lines = stdout.decode().split("\r\n")
lines = stdout.decode().split(os.linesep)
# The second line contains dashes that define the maximum length of the
# module name. The second column of dashes defines the maximum length of
# the module version.
package_dashes, *_ = lines[1].split(" ")


@ -2,7 +2,7 @@ import os
import flame
from pprint import pformat
import openpype.hosts.flame.api as opfapi
from openpype.lib import StringTemplate
class LoadClip(opfapi.ClipLoader):
"""Load a subset to timeline as clip
@ -22,7 +22,7 @@ class LoadClip(opfapi.ClipLoader):
# settings
reel_group_name = "OpenPype_Reels"
reel_name = "Loaded"
clip_name_template = "{asset}_{subset}_{output}"
clip_name_template = "{asset}_{subset}<_{output}>"
def load(self, context, name, namespace, options):
@ -36,8 +36,8 @@ class LoadClip(opfapi.ClipLoader):
version_data = version.get("data", {})
version_name = version.get("name", None)
colorspace = version_data.get("colorspace", None)
clip_name = self.clip_name_template.format(
**context["representation"]["context"])
clip_name = StringTemplate(self.clip_name_template).format(
context["representation"]["context"])
# TODO: settings in imageio
# convert colorspace with ocio to flame mapping


@ -2,6 +2,7 @@ import os
import flame
from pprint import pformat
import openpype.hosts.flame.api as opfapi
from openpype.lib import StringTemplate
class LoadClipBatch(opfapi.ClipLoader):
@ -21,7 +22,7 @@ class LoadClipBatch(opfapi.ClipLoader):
# settings
reel_name = "OP_LoadedReel"
clip_name_template = "{asset}_{subset}_{output}"
clip_name_template = "{asset}_{subset}<_{output}>"
def load(self, context, name, namespace, options):
@ -39,8 +40,8 @@ class LoadClipBatch(opfapi.ClipLoader):
if not context["representation"]["context"].get("output"):
self.clip_name_template.replace("output", "representation")
clip_name = self.clip_name_template.format(
**context["representation"]["context"])
clip_name = StringTemplate(self.clip_name_template).format(
context["representation"]["context"])
# TODO: settings in imageio
# convert colorspace with ocio to flame mapping
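
In both loaders above, the angle-bracket group marks an optional part of the template: with OpenPype's `StringTemplate`, a group whose keys are missing from the data should be dropped rather than raising. A small sketch with illustrative data:

```python
from openpype.lib import StringTemplate

template = StringTemplate("{asset}_{subset}<_{output}>")

# All keys present: the optional group is filled in.
print(template.format({"asset": "sh010", "subset": "plateMain", "output": "exr"}))
# sh010_plateMain_exr

# "output" missing: the whole <_{output}> group is dropped.
print(template.format({"asset": "sh010", "subset": "plateMain"}))
# sh010_plateMain
```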


@ -323,6 +323,8 @@ class IntegrateBatchGroup(pyblish.api.InstancePlugin):
def _get_shot_task_dir_path(self, instance, task_data):
project_doc = instance.data["projectEntity"]
asset_entity = instance.data["assetEntity"]
anatomy = instance.context.data["anatomy"]
return get_workdir(
project_doc, asset_entity, task_data["name"], "flame")
project_doc, asset_entity, task_data["name"], "flame", anatomy
)


@ -3,9 +3,16 @@ import sys
import re
import contextlib
from bson.objectid import ObjectId
from Qt import QtGui
from openpype.client import (
get_asset_by_name,
get_subset_by_name,
get_last_version_by_subset_id,
get_representation_by_id,
get_representation_by_name,
get_representation_parents,
)
from openpype.pipeline import (
switch_container,
legacy_io,
@ -93,13 +100,16 @@ def switch_item(container,
raise ValueError("Must have at least one change provided to switch.")
# Collect any of current asset, subset and representation if not provided
# so we can use the original name from those.
project_name = legacy_io.active_project()
if any(not x for x in [asset_name, subset_name, representation_name]):
_id = ObjectId(container["representation"])
representation = legacy_io.find_one({
"type": "representation", "_id": _id
})
version, subset, asset, project = legacy_io.parenthood(representation)
repre_id = container["representation"]
representation = get_representation_by_id(project_name, repre_id)
repre_parent_docs = get_representation_parents(representation)
if repre_parent_docs:
version, subset, asset, _ = repre_parent_docs
else:
version = subset = asset = None
if asset_name is None:
asset_name = asset["name"]
@ -111,39 +121,26 @@ def switch_item(container,
representation_name = representation["name"]
# Find the new one
asset = legacy_io.find_one({
"name": asset_name,
"type": "asset"
})
asset = get_asset_by_name(project_name, asset_name, fields=["_id"])
assert asset, ("Could not find asset in the database with the name "
"'%s'" % asset_name)
subset = legacy_io.find_one({
"name": subset_name,
"type": "subset",
"parent": asset["_id"]
})
subset = get_subset_by_name(
project_name, subset_name, asset["_id"], fields=["_id"]
)
assert subset, ("Could not find subset in the database with the name "
"'%s'" % subset_name)
version = legacy_io.find_one(
{
"type": "version",
"parent": subset["_id"]
},
sort=[('name', -1)]
version = get_last_version_by_subset_id(
project_name, subset["_id"], fields=["_id"]
)
assert version, "Could not find a version for {}.{}".format(
asset_name, subset_name
)
representation = legacy_io.find_one({
"name": representation_name,
"type": "representation",
"parent": version["_id"]}
representation = get_representation_by_name(
project_name, representation_name, version["_id"]
)
assert representation, ("Could not find representation in the database "
"with the name '%s'" % representation_name)


@ -1,6 +1,7 @@
import os
import contextlib
from openpype.client import get_version_by_id
from openpype.pipeline import (
load,
legacy_io,
@ -123,7 +124,7 @@ def loader_shift(loader, frame, relative=True):
class FusionLoadSequence(load.LoaderPlugin):
"""Load image sequence into Fusion"""
families = ["imagesequence", "review", "render"]
families = ["imagesequence", "review", "render", "plate"]
representations = ["*"]
label = "Load sequence"
@ -211,10 +212,8 @@ class FusionLoadSequence(load.LoaderPlugin):
path = self._get_first_image(root)
# Get start frame from version data
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
project_name = legacy_io.active_project()
version = get_version_by_id(project_name, representation["parent"])
start = version["data"].get("frameStart")
if start is None:
self.log.warning("Missing start frame for updated version"


@ -4,6 +4,11 @@ import sys
import logging
# Pipeline imports
from openpype.client import (
get_project,
get_asset_by_name,
get_versions,
)
from openpype.pipeline import (
legacy_io,
install_host,
@ -164,9 +169,9 @@ def update_frame_range(comp, representations):
"""
version_ids = [r["parent"] for r in representations]
versions = legacy_io.find({"type": "version", "_id": {"$in": version_ids}})
versions = list(versions)
project_name = legacy_io.active_project()
version_ids = {r["parent"] for r in representations}
versions = list(get_versions(project_name, version_ids))
versions = [v for v in versions
if v["data"].get("frameStart", None) is not None]
@ -203,11 +208,12 @@ def switch(asset_name, filepath=None, new=True):
# Assert asset name exists
# It is better to do this here then to wait till switch_shot does it
asset = legacy_io.find_one({"type": "asset", "name": asset_name})
project_name = legacy_io.active_project()
asset = get_asset_by_name(project_name, asset_name)
assert asset, "Could not find '%s' in the database" % asset_name
# Get current project
self._project = legacy_io.find_one({"type": "project"})
self._project = get_project(project_name)
# Go to comp
if not filepath:


@ -7,6 +7,7 @@ from Qt import QtWidgets, QtCore
import qtawesome as qta
from openpype.client import get_assets
from openpype import style
from openpype.pipeline import (
install_host,
@ -142,7 +143,7 @@ class App(QtWidgets.QWidget):
# Clear any existing items
self._assets.clear()
asset_names = [a["name"] for a in self.collect_assets()]
asset_names = self.collect_asset_names()
completer = QtWidgets.QCompleter(asset_names)
self._assets.setCompleter(completer)
@ -165,8 +166,14 @@ class App(QtWidgets.QWidget):
items = glob.glob("{}/*.comp".format(directory))
return items
def collect_assets(self):
return list(legacy_io.find({"type": "asset"}, {"name": True}))
def collect_asset_names(self):
project_name = legacy_io.active_project()
asset_docs = get_assets(project_name, fields=["name"])
asset_names = {
asset_doc["name"]
for asset_doc in asset_docs
}
return list(asset_names)
def populate_comp_box(self, files):
"""Ensure we display the filename only but the path is stored as well


@ -4,11 +4,10 @@ from pathlib import Path
import attr
import openpype.lib
import openpype.lib.abstract_collect_render
from openpype.lib.abstract_collect_render import RenderInstance
from openpype.lib import get_formatted_current_time
from openpype.pipeline import legacy_io
from openpype.pipeline import publish
from openpype.pipeline.publish import RenderInstance
import openpype.hosts.harmony.api as harmony
@ -20,8 +19,7 @@ class HarmonyRenderInstance(RenderInstance):
leadingZeros = attr.ib(default=3)
class CollectFarmRender(openpype.lib.abstract_collect_render.
AbstractCollectRender):
class CollectFarmRender(publish.AbstractCollectRender):
"""Gather all publishable renders."""
# https://docs.toonboom.com/help/harmony-17/premium/reference/node/output/write-node-image-formats.html


@ -0,0 +1,85 @@
import logging
from scriptsmenu import scriptsmenu
from Qt import QtWidgets
log = logging.getLogger(__name__)
def _hiero_main_window():
"""Return Hiero's main window"""
for obj in QtWidgets.QApplication.topLevelWidgets():
if (obj.inherits('QMainWindow') and
obj.metaObject().className() == 'Foundry::UI::DockMainWindow'):
return obj
raise RuntimeError('Could not find HieroWindow instance')
def _hiero_main_menubar():
"""Retrieve the main menubar of the Hiero window"""
hiero_window = _hiero_main_window()
menubar = [i for i in hiero_window.children() if isinstance(
i,
QtWidgets.QMenuBar
)]
assert len(menubar) == 1, "Error, could not find menu bar!"
return menubar[0]
def find_scripts_menu(title, parent):
"""
Check if the menu exists with the given title in the parent
Args:
title (str): the title name of the scripts menu
parent (QtWidgets.QMenuBar): the menubar to check
Returns:
QtWidgets.QMenu or None
"""
menu = None
search = [i for i in parent.children() if
isinstance(i, scriptsmenu.ScriptsMenu)
and i.title() == title]
if search:
assert len(search) < 2, ("Multiple instances of menu '{}' "
"in menu bar".format(title))
menu = search[0]
return menu
def main(title="Scripts", parent=None, objectName=None):
"""Build the main scripts menu in Hiero
Args:
title (str): name of the menu in the application
parent (QtWidgets.QtMenuBar): the parent object for the menu
objectName (str): custom objectName for scripts menu
Returns:
scriptsmenu.ScriptsMenu instance
"""
hieromainbar = parent or _hiero_main_menubar()
try:
# check menu already exists
menu = find_scripts_menu(title, hieromainbar)
if not menu:
log.info("Attempting to build menu ...")
object_name = objectName or title.lower()
menu = scriptsmenu.ScriptsMenu(title=title,
parent=hieromainbar,
objectName=object_name)
except Exception as e:
log.error(e)
return
return menu


@ -19,8 +19,9 @@ from openpype.client import (
get_last_versions,
get_representations,
)
from openpype.pipeline import legacy_io
from openpype.api import (Logger, Anatomy, get_anatomy_settings)
from openpype.settings import get_anatomy_settings
from openpype.pipeline import legacy_io, Anatomy
from openpype.api import Logger
from . import tags
try:


@ -9,6 +9,7 @@ from openpype.pipeline import legacy_io
from openpype.tools.utils import host_tools
from . import tags
from openpype.settings import get_project_settings
log = Logger.get_logger(__name__)
@ -41,6 +42,7 @@ def menu_install():
Installing menu into Hiero
"""
from Qt import QtGui
from . import (
publish, launch_workfiles_app, reload_config,
@ -138,3 +140,30 @@ def menu_install():
exeprimental_action.triggered.connect(
lambda: host_tools.show_experimental_tools_dialog(parent=main_window)
)
def add_scripts_menu():
try:
from . import launchforhiero
except ImportError:
log.warning(
"Skipping studio.menu install, because "
"'scriptsmenu' module seems unavailable."
)
return
# load configuration of custom menu
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
config = project_settings["hiero"]["scriptsmenu"]["definition"]
_menu = project_settings["hiero"]["scriptsmenu"]["name"]
if not config:
log.warning("Skipping studio menu, no definition found.")
return
# run the launcher for Hiero menu
studio_menu = launchforhiero.main(title=_menu.title())
# apply configuration
studio_menu.build_from_configuration(studio_menu, config)
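
An illustrative shape for the `definition` setting consumed above; the exact schema lives in the project settings for `hiero/scriptsmenu`, and the module and keys below are assumptions for the example:

```python
config = [
    {
        "type": "action",
        "sourcetype": "python",
        "title": "Edit Shot Metadata",
        "command": "import edit_shot; edit_shot.run()",  # hypothetical module
        "tooltip": "Open the studio shot-metadata editor.",
    },
]
```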


@ -48,6 +48,7 @@ def install():
# install menu
menu.menu_install()
menu.add_scripts_menu()
# register hiero events
events.register_hiero_events()


@ -5,8 +5,7 @@ import husdoutputprocessors.base as base
import colorbleed.usdlib as usdlib
from openpype.client import get_asset_by_name
from openpype.api import Anatomy
from openpype.pipeline import legacy_io
from openpype.pipeline import legacy_io, Anatomy
class AvalonURIOutputProcessor(base.OutputProcessorBase):


@ -5,11 +5,11 @@ Anything that isn't defined here is INTERNAL and unreliable for external use.
"""
from .pipeline import (
install,
uninstall,
ls,
containerise,
MayaHost,
)
from .plugin import (
Creator,
@ -40,11 +40,11 @@ from .lib import (
__all__ = [
"install",
"uninstall",
"ls",
"containerise",
"MayaHost",
"Creator",
"Loader",


@ -1908,7 +1908,7 @@ def iter_parents(node):
"""
while True:
split = node.rsplit("|", 1)
if len(split) == 1:
if len(split) == 1 or not split[0]:
return
node = split[0]
@ -2522,12 +2522,30 @@ def load_capture_preset(data=None):
temp_options2['multiSampleEnable'] = False
temp_options2['multiSampleCount'] = preset[id][key]
if key == 'renderDepthOfField':
temp_options2['renderDepthOfField'] = preset[id][key]
if key == 'ssaoEnable':
if preset[id][key] is True:
temp_options2['ssaoEnable'] = True
else:
temp_options2['ssaoEnable'] = False
if key == 'ssaoSamples':
temp_options2['ssaoSamples'] = preset[id][key]
if key == 'ssaoAmount':
temp_options2['ssaoAmount'] = preset[id][key]
if key == 'ssaoRadius':
temp_options2['ssaoRadius'] = preset[id][key]
if key == 'hwFogDensity':
temp_options2['hwFogDensity'] = preset[id][key]
if key == 'ssaoFilterRadius':
temp_options2['ssaoFilterRadius'] = preset[id][key]
if key == 'alphaCut':
temp_options2['transparencyAlgorithm'] = 5
temp_options2['transparencyQuality'] = 1
@ -2535,6 +2553,48 @@ def load_capture_preset(data=None):
if key == 'headsUpDisplay':
temp_options['headsUpDisplay'] = True
if key == 'fogging':
temp_options['fogging'] = preset[id][key] or False
if key == 'hwFogStart':
temp_options2['hwFogStart'] = preset[id][key]
if key == 'hwFogEnd':
temp_options2['hwFogEnd'] = preset[id][key]
if key == 'hwFogAlpha':
temp_options2['hwFogAlpha'] = preset[id][key]
if key == 'hwFogFalloff':
temp_options2['hwFogFalloff'] = int(preset[id][key])
if key == 'hwFogColorR':
temp_options2['hwFogColorR'] = preset[id][key]
if key == 'hwFogColorG':
temp_options2['hwFogColorG'] = preset[id][key]
if key == 'hwFogColorB':
temp_options2['hwFogColorB'] = preset[id][key]
if key == 'motionBlurEnable':
if preset[id][key] is True:
temp_options2['motionBlurEnable'] = True
else:
temp_options2['motionBlurEnable'] = False
if key == 'motionBlurSampleCount':
temp_options2['motionBlurSampleCount'] = preset[id][key]
if key == 'motionBlurShutterOpenFraction':
temp_options2['motionBlurShutterOpenFraction'] = preset[id][key]
if key == 'lineAAEnable':
if preset[id][key] is True:
temp_options2['lineAAEnable'] = True
else:
temp_options2['lineAAEnable'] = False
else:
temp_options[str(key)] = preset[id][key]
@ -2544,7 +2604,24 @@ def load_capture_preset(data=None):
'gpuCacheDisplayFilter',
'multiSample',
'ssaoEnable',
'textureMaxResolution'
'ssaoSamples',
'ssaoAmount',
'ssaoFilterRadius',
'ssaoRadius',
'hwFogStart',
'hwFogEnd',
'hwFogAlpha',
'hwFogFalloff',
'hwFogColorR',
'hwFogColorG',
'hwFogColorB',
'hwFogDensity',
'textureMaxResolution',
'motionBlurEnable',
'motionBlurSampleCount',
'motionBlurShutterOpenFraction',
'lineAAEnable',
'renderDepthOfField'
]:
temp_options.pop(key, None)
@ -3213,3 +3290,209 @@ def parent_nodes(nodes, parent=None):
node[0].setParent(node[1])
if delete_parent:
pm.delete(parent_node)
@contextlib.contextmanager
def maintained_time():
ct = cmds.currentTime(query=True)
try:
yield
finally:
cmds.currentTime(ct, edit=True)
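# Usage sketch (illustrative, assumes 'cmds' as imported in this module):
# the context manager restores the current frame after frame-dependent work.
#
#     with maintained_time():
#         cmds.currentTime(1001, edit=True)
#         ...  # run queries or capture on that frame
#     # the timeline is back at the original frame here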
def iter_visible_nodes_in_range(nodes, start, end):
"""Yield nodes that are visible in start-end frame range.
- Ignores intermediateObjects completely.
- Considers animated visibility attributes + upstream visibilities.
This is optimized for large scenes where some nodes in the parent
hierarchy might have some input connections to the visibilities,
e.g. key, driven keys, connections to other attributes, etc.
This only does a single time step to `start` if current frame is
not inside frame range since the assumption is made that changing
a frame isn't so slow that it beats querying all visibility
plugs through MDGContext on another frame.
Args:
nodes (list): List of node names to consider.
start (int, float): Start frame.
end (int, float): End frame.
Yields:
str: Names of nodes visible at least once in the frame range. These
are long full-path names, so they may differ from the input names.
"""
# States we consider per node
VISIBLE = 1 # always visible
INVISIBLE = 0 # always invisible
ANIMATED = -1 # animated visibility
# Ensure integers
start = int(start)
end = int(end)
# Consider only non-intermediate dag nodes and use the "long" names.
nodes = cmds.ls(nodes, long=True, noIntermediate=True, type="dagNode")
if not nodes:
return
with maintained_time():
# Go to first frame of the range if the current time is outside
# the queried range so can directly query all visible nodes on
# that frame.
current_time = cmds.currentTime(query=True)
if not (start <= current_time <= end):
cmds.currentTime(start)
visible = cmds.ls(nodes, long=True, visible=True)
for node in visible:
yield node
if len(visible) == len(nodes) or start == end:
# All are visible on frame one, so they are at least visible once
# inside the frame range.
return
# For the invisible ones check whether its visibility and/or
# any of its parents visibility attributes are animated. If so, it might
# get visible on other frames in the range.
def memodict(f):
"""Memoization decorator for a function taking a single argument.
See: http://code.activestate.com/recipes/
578231-probably-the-fastest-memoization-decorator-in-the-/
"""
class memodict(dict):
def __missing__(self, key):
ret = self[key] = f(key)
return ret
return memodict().__getitem__
@memodict
def get_state(node):
plug = node + ".visibility"
connections = cmds.listConnections(plug,
source=True,
destination=False)
if connections:
return ANIMATED
else:
return VISIBLE if cmds.getAttr(plug) else INVISIBLE
visible = set(visible)
invisible = [node for node in nodes if node not in visible]
always_invisible = set()
# Iterate over the nodes by short to long names to iterate the highest
# in hierarchy nodes first. So the collected data can be used from the
# cache for parent queries in next iterations.
node_dependencies = dict()
for node in sorted(invisible, key=len):
state = get_state(node)
if state == INVISIBLE:
always_invisible.add(node)
continue
# If not always invisible by itself we should go through and check
# the parents to see if any of them are always invisible. For those
# that are "ANIMATED" we consider that this node is dependent on
# that attribute, we store them as dependency.
dependencies = set()
if state == ANIMATED:
dependencies.add(node)
traversed_parents = list()
for parent in iter_parents(node):
if parent in always_invisible or get_state(parent) == INVISIBLE:
# When parent is always invisible then consider this parent,
# this node we started from and any of the parents we
# have traversed in-between to be *always invisible*
always_invisible.add(parent)
always_invisible.add(node)
always_invisible.update(traversed_parents)
break
# If we have traversed the parent before and its visibility
# was dependent on animated visibilities then we can just extend
# this node's dependencies with the parent's and stop iterating
# further upwards.
parent_dependencies = node_dependencies.get(parent, None)
if parent_dependencies is not None:
dependencies.update(parent_dependencies)
break
state = get_state(parent)
if state == ANIMATED:
dependencies.add(parent)
traversed_parents.append(parent)
if node not in always_invisible and dependencies:
node_dependencies[node] = dependencies
if not node_dependencies:
return
# Now we only have to check the visibilities for nodes that have animated
# visibility dependencies upstream. The fastest way to check these
# visibility attributes across different frames is with Python api 2.0
# so we do that.
@memodict
def get_visibility_mplug(node):
"""Return api 2.0 MPlug with cached memoize decorator"""
sel = om.MSelectionList()
sel.add(node)
dag = sel.getDagPath(0)
return om.MFnDagNode(dag).findPlug("visibility", True)
@contextlib.contextmanager
def dgcontext(mtime):
"""MDGContext context manager"""
context = om.MDGContext(mtime)
try:
previous = context.makeCurrent()
yield context
finally:
previous.makeCurrent()
# We skip the first frame as we already used that frame to check for
# overall visibilities. And end+1 to include the end frame.
scene_units = om.MTime.uiUnit()
for frame in range(start + 1, end + 1):
mtime = om.MTime(frame, unit=scene_units)
# Build little cache so we don't query the same MPlug's value
# again if it was checked on this frame and also is a dependency
# for another node
frame_visibilities = {}
with dgcontext(mtime) as context:
for node, dependencies in list(node_dependencies.items()):
for dependency in dependencies:
dependency_visible = frame_visibilities.get(dependency,
None)
if dependency_visible is None:
mplug = get_visibility_mplug(dependency)
dependency_visible = mplug.asBool(context)
frame_visibilities[dependency] = dependency_visible
if not dependency_visible:
# One dependency is not visible, thus the
# node is not visible.
break
else:
# All dependencies are visible.
yield node
# Remove node with dependencies for next frame iterations
# because it was visible at least once.
node_dependencies.pop(node)
# If there are no more nodes to process, break out of the frame iteration.
if not node_dependencies:
break
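# A minimal sketch of how this generator is meant to be consumed
# (node list and frame range are hypothetical). Materialize it with
# list() when the full set is needed, or any() when a single visible
# node is enough - as the extractor and validator further below do:
#
#     nodes = cmds.ls(type="mesh", long=True)
#     visible = list(iter_visible_nodes_in_range(nodes, start=1001,
#                                                end=1100))
#     if not any(iter_visible_nodes_in_range(nodes, 1001, 1100)):
#         raise RuntimeError("Nothing is visible in the frame range")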

View file

@ -1087,7 +1087,7 @@ class RenderProductsRenderman(ARenderProducts):
"d_tiff": "tif"
}
displays = get_displays(override_dst="render")["displays"]
for name, display in displays.items():
enabled = display["params"]["enable"]["value"]
if not enabled:
@ -1106,9 +1106,33 @@ class RenderProductsRenderman(ARenderProducts):
display["driverNode"]["type"], "exr")
for camera in cameras:
# Create render product and set it as multipart only on
# display types supporting it. In all other cases, Renderman
# will create separate output per channel.
if display["driverNode"]["type"] in ["d_openexr", "d_deepexr", "d_tiff"]: # noqa
product = RenderProduct(
productName=aov_name,
ext=extensions,
camera=camera,
multipart=True
)
else:
# this code should handle the case where no multipart-capable
# format is selected. But since it involves shady logic to
# determine which channel becomes what, let's not do that, as all
# productions will use exr anyway.
"""
for channel in display['params']['displayChannels']['value']: # noqa
product = RenderProduct(
productName="{}_{}".format(aov_name, channel),
ext=extensions,
camera=camera,
multipart=False
)
"""
raise UnsupportedImageFormatException(
"Only exr, deep exr and tiff formats are supported.")
products.append(product)
return products
@ -1201,3 +1225,7 @@ class UnsupportedRendererException(Exception):
Raised when requesting data from unsupported renderer.
"""
class UnsupportedImageFormatException(Exception):
"""Custom exception to report unsupported output image format."""

View file

@ -1,13 +1,15 @@
import os
import sys
import errno
import logging
import contextlib
from maya import utils, cmds, OpenMaya
import maya.api.OpenMaya as om
import pyblish.api
from openpype.settings import get_project_settings
from openpype.host import HostBase, IWorkfileHost, ILoadHost
import openpype.hosts.maya
from openpype.tools.utils import host_tools
from openpype.lib import (
@ -28,6 +30,14 @@ from openpype.pipeline import (
)
from openpype.hosts.maya.lib import copy_workspace_mel
from . import menu, lib
from .workio import (
open_file,
save_file,
file_extensions,
has_unsaved_changes,
work_root,
current_file
)
log = logging.getLogger("openpype.hosts.maya")
@ -40,49 +50,121 @@ INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
AVALON_CONTAINERS = ":AVALON_CONTAINERS"
self = sys.modules[__name__]
self._ignore_lock = False
class MayaHost(HostBase, IWorkfileHost, ILoadHost):
name = "maya"
def __init__(self):
super(MayaHost, self).__init__()
self._op_events = {}
def install(self):
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
# process path mapping
dirmap_processor = MayaDirmap("maya", project_settings)
dirmap_processor.process_dirmap()
pyblish.api.register_plugin_path(PUBLISH_PATH)
pyblish.api.register_host("mayabatch")
pyblish.api.register_host("mayapy")
pyblish.api.register_host("maya")
register_loader_plugin_path(LOAD_PATH)
register_creator_plugin_path(CREATE_PATH)
register_inventory_action_path(INVENTORY_PATH)
self.log.info(PUBLISH_PATH)
log.info("Installing callbacks ... ")
register_event_callback("init", on_init)
self.log.info("Installing callbacks ... ")
register_event_callback("init", on_init)
if lib.IS_HEADLESS:
self.log.info((
"Running in headless mode, skipping Maya save/open/new"
" callback installation.."
))
return
_set_project()
self._register_callbacks()
menu.install()
register_event_callback("save", on_save)
register_event_callback("open", on_open)
register_event_callback("new", on_new)
register_event_callback("before.save", on_before_save)
register_event_callback("taskChanged", on_task_changed)
register_event_callback("workfile.save.before", before_workfile_save)
register_event_callback("save", on_save)
register_event_callback("open", on_open)
register_event_callback("new", on_new)
register_event_callback("before.save", on_before_save)
register_event_callback("taskChanged", on_task_changed)
register_event_callback("workfile.save.before", before_workfile_save)
def open_workfile(self, filepath):
return open_file(filepath)
def save_workfile(self, filepath=None):
return save_file(filepath)
def work_root(self, session):
return work_root(session)
def get_current_workfile(self):
return current_file()
def workfile_has_unsaved_changes(self):
return has_unsaved_changes()
def get_workfile_extensions(self):
return file_extensions()
def get_containers(self):
return ls()
@contextlib.contextmanager
def maintained_selection(self):
with lib.maintained_selection():
yield
def _register_callbacks(self):
for handler, event in self._op_events.copy().items():
if event is None:
continue
try:
OpenMaya.MMessage.removeCallback(event)
self._op_events[handler] = None
except RuntimeError as exc:
self.log.info(exc)
self._op_events[_on_scene_save] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kBeforeSave, _on_scene_save
)
self._op_events[_before_scene_save] = (
OpenMaya.MSceneMessage.addCheckCallback(
OpenMaya.MSceneMessage.kBeforeSaveCheck,
_before_scene_save
)
)
self._op_events[_on_scene_new] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterNew, _on_scene_new
)
self._op_events[_on_maya_initialized] = (
OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kMayaInitialized,
_on_maya_initialized
)
)
self._op_events[_on_scene_open] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterOpen, _on_scene_open
)
self.log.info("Installed event handler _on_scene_save..")
self.log.info("Installed event handler _before_scene_save..")
self.log.info("Installed event handler _on_scene_new..")
self.log.info("Installed event handler _on_maya_initialized..")
self.log.info("Installed event handler _on_scene_open..")
def _set_project():
@ -106,44 +188,6 @@ def _set_project():
cmds.workspace(workdir, openWorkspace=True)
def _on_maya_initialized(*args):
emit_event("init")
@ -475,7 +519,6 @@ def on_task_changed():
workdir = legacy_io.Session["AVALON_WORKDIR"]
if os.path.exists(workdir):
log.info("Updating Maya workspace for task change to %s", workdir)
# Set Maya fileDialog's start-dir to /scenes

View file

@ -9,8 +9,8 @@ from openpype.pipeline import (
LoaderPlugin,
get_representation_path,
AVALON_CONTAINER_ID,
Anatomy,
)
from openpype.settings import get_project_settings
from .pipeline import containerise
from . import lib

View file

@ -296,12 +296,9 @@ def update_package_version(container, version):
assert current_representation is not None, "This is a bug"
version_doc, subset_doc, asset_doc, project_doc = (
get_representation_parents(project_name, current_representation)
)
if version == -1:
new_version = get_last_version_by_subset_id(

View file

@ -15,6 +15,8 @@ class CreateReview(plugin.Creator):
keepImages = False
isolate = False
imagePlane = True
Width = 0
Height = 0
transparency = [
"preset",
"simple",
@ -33,6 +35,8 @@ class CreateReview(plugin.Creator):
for key, value in animation_data.items():
data[key] = value
data["review_width"] = self.Width
data["review_height"] = self.Height
data["isolate"] = self.isolate
data["keepImages"] = self.keepImages
data["imagePlane"] = self.imagePlane

View file

@ -0,0 +1,139 @@
import os
from openpype.api import get_project_settings
from openpype.pipeline import (
load,
get_representation_path
)
# TODO aiVolume doesn't automatically set velocity fps correctly, set manually?
class LoadVDBtoArnold(load.LoaderPlugin):
"""Load OpenVDB for Arnold in aiVolume"""
families = ["vdbcache"]
representations = ["vdb"]
label = "Load VDB to Arnold"
icon = "cloud"
color = "orange"
def load(self, context, name, namespace, data):
from maya import cmds
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
try:
family = context["representation"]["context"]["family"]
except KeyError:
family = "vdbcache"
# Check if the Arnold plugin (mtoa) is available
try:
cmds.loadPlugin("mtoa", quiet=True)
except Exception as exc:
self.log.error("Encountered exception:\n%s" % exc)
return
asset = context['asset']
asset_name = asset["name"]
namespace = namespace or unique_namespace(
asset_name + "_",
prefix="_" if asset_name[0].isdigit() else "",
suffix="_",
)
# Root group
label = "{}:{}".format(namespace, name)
root = cmds.group(name=label, empty=True)
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
c = colors.get(family)
if c is not None:
cmds.setAttr(root + ".useOutlinerColor", 1)
cmds.setAttr(root + ".outlinerColor",
(float(c[0]) / 255),
(float(c[1]) / 255),
(float(c[2]) / 255)
)
# Create aiVolume
grid_node = cmds.createNode("aiVolume",
name="{}Shape".format(root),
parent=root)
self._set_path(grid_node,
path=self.fname,
representation=context["representation"])
# Lock the shape node so the user can't delete the transform/shape
# as if it was referenced
cmds.lockNode(grid_node, lock=True)
nodes = [root, grid_node]
self[:] = nodes
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, representation):
from maya import cmds
path = get_representation_path(representation)
# Find the aiVolume node
members = cmds.sets(container['objectName'], query=True)
grid_nodes = cmds.ls(members, type="aiVolume", long=True)
assert len(grid_nodes) == 1, "This is a bug"
# Update the aiVolume path
self._set_path(grid_nodes[0], path=path, representation=representation)
# Update container representation
cmds.setAttr(container["objectName"] + ".representation",
str(representation["_id"]),
type="string")
def switch(self, container, representation):
self.update(container, representation)
def remove(self, container):
from maya import cmds
# Get all members of the avalon container, ensure they are unlocked
# and delete everything
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass
@staticmethod
def _set_path(grid_node,
path,
representation):
"""Apply the settings for the VDB path to the aiVolume node"""
from maya import cmds
if not os.path.exists(path):
raise RuntimeError("Path does not exist: %s" % path)
is_sequence = bool(representation["context"].get("frame"))
cmds.setAttr(grid_node + ".useFrameExtension", is_sequence)
# Set file path
cmds.setAttr(grid_node + ".filename", path, type="string")

View file

@ -1,11 +1,21 @@
import os
from openpype.api import get_project_settings
from openpype.pipeline import load
from openpype.pipeline import (
load,
get_representation_path
)
class LoadVDBtoRedShift(load.LoaderPlugin):
"""Load OpenVDB in a Redshift Volume Shape"""
"""Load OpenVDB in a Redshift Volume Shape
Note that the RedshiftVolumeShape is created without a RedshiftVolume
shader assigned. To get the Redshift volume to render correctly, assign
a RedshiftVolume shader (in the Hypershade) and set the density, scatter
and emission channels to the channel names of the volumes in the VDB file.
"""
families = ["vdbcache"]
representations = ["vdb"]
@ -55,7 +65,7 @@ class LoadVDBtoRedShift(load.LoaderPlugin):
# Root group
label = "{}:{}".format(namespace, name)
root = cmds.createNode("transform", name=label)
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
@ -74,9 +84,9 @@ class LoadVDBtoRedShift(load.LoaderPlugin):
name="{}RVSShape".format(label),
parent=root)
cmds.setAttr("{}.fileName".format(volume_node),
self.fname,
type="string")
self._set_path(volume_node,
path=self.fname,
representation=context["representation"])
nodes = [root, volume_node]
self[:] = nodes
@ -87,3 +97,56 @@ class LoadVDBtoRedShift(load.LoaderPlugin):
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, representation):
from maya import cmds
path = get_representation_path(representation)
# Find the RedshiftVolumeShape
members = cmds.sets(container['objectName'], query=True)
grid_nodes = cmds.ls(members, type="RedshiftVolumeShape", long=True)
assert len(grid_nodes) == 1, "This is a bug"
# Update the RedshiftVolumeShape path
self._set_path(grid_nodes[0], path=path, representation=representation)
# Update container representation
cmds.setAttr(container["objectName"] + ".representation",
str(representation["_id"]),
type="string")
def remove(self, container):
from maya import cmds
# Get all members of the avalon container, ensure they are unlocked
# and delete everything
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass
def switch(self, container, representation):
self.update(container, representation)
@staticmethod
def _set_path(grid_node,
path,
representation):
"""Apply the settings for the VDB path to the RedshiftVolumeShape"""
from maya import cmds
if not os.path.exists(path):
raise RuntimeError("Path does not exist: %s" % path)
is_sequence = bool(representation["context"].get("frame"))
cmds.setAttr(grid_node + ".useFrameExtension", is_sequence)
# Set file path
cmds.setAttr(grid_node + ".fileName", path, type="string")

View file

@ -71,6 +71,8 @@ class CollectReview(pyblish.api.InstancePlugin):
data['handles'] = instance.data.get('handles', None)
data['step'] = instance.data['step']
data['fps'] = instance.data['fps']
data['review_width'] = instance.data['review_width']
data['review_height'] = instance.data['review_height']
data["isolate"] = instance.data["isolate"]
cmds.setAttr(str(instance) + '.active', 1)
self.log.debug('data {}'.format(instance.context[i].data))

View file

@ -1,100 +0,0 @@
import os
from maya import cmds
import openpype.api
from openpype.hosts.maya.api.lib import (
extract_alembic,
suspended_refresh,
maintained_selection
)
class ExtractAnimation(openpype.api.Extractor):
"""Produce an alembic of just point positions and normals.
Positions and normals, uvs, creases are preserved, but nothing more,
for plain and predictable point caches.
Plugin can run locally or remotely (on a farm - if instance is marked with
"farm" it will be skipped in local processing, but processed on farm)
"""
label = "Extract Animation"
hosts = ["maya"]
families = ["animation"]
targets = ["local", "remote"]
def process(self, instance):
if instance.data.get("farm"):
self.log.debug("Should be processed on farm, skipping.")
return
# Collect the out set nodes
out_sets = [node for node in instance if node.endswith("out_SET")]
if len(out_sets) != 1:
raise RuntimeError("Couldn't find exactly one out_SET: "
"{0}".format(out_sets))
out_set = out_sets[0]
roots = cmds.sets(out_set, query=True)
# Include all descendants
nodes = roots + cmds.listRelatives(roots,
allDescendents=True,
fullPath=True) or []
# Collect the start and end including handles
start = instance.data["frameStartHandle"]
end = instance.data["frameEndHandle"]
self.log.info("Extracting animation..")
dirname = self.staging_dir(instance)
parent_dir = self.staging_dir(instance)
filename = "{name}.abc".format(**instance.data)
path = os.path.join(parent_dir, filename)
options = {
"step": instance.data.get("step", 1.0) or 1.0,
"attr": ["cbId"],
"writeVisibility": True,
"writeCreases": True,
"uvWrite": True,
"selection": True,
"worldSpace": instance.data.get("worldSpace", True),
"writeColorSets": instance.data.get("writeColorSets", False),
"writeFaceSets": instance.data.get("writeFaceSets", False)
}
if not instance.data.get("includeParentHierarchy", True):
# Set the root nodes if we don't want to include parents
# The roots are to be considered the ones that are the actual
# direct members of the set
options["root"] = roots
if int(cmds.about(version=True)) >= 2017:
# Since Maya 2017 alembic supports multiple uv sets - write them.
options["writeUVSets"] = True
with suspended_refresh():
with maintained_selection():
cmds.select(nodes, noExpand=True)
extract_alembic(file=path,
startFrame=float(start),
endFrame=float(end),
**options)
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
'name': 'abc',
'ext': 'abc',
'files': filename,
"stagingDir": dirname,
}
instance.data["representations"].append(representation)
instance.context.data["cleanupFullPaths"].append(path)
self.log.info("Extracted {} to {}".format(instance, dirname))

View file

@ -30,7 +30,7 @@ class ExtractCameraAlembic(openpype.api.Extractor):
# get cameras
members = instance.data['setMembers']
cameras = cmds.ls(members, leaf=True, long=True,
dag=True, type="camera")
# validate required settings
@ -61,10 +61,30 @@ class ExtractCameraAlembic(openpype.api.Extractor):
if bake_to_worldspace:
job_str += ' -worldSpace'
# if baked, drop the camera hierarchy to maintain
# clean output and backwards compatibility
camera_root = cmds.listRelatives(
camera, parent=True, fullPath=True)[0]
job_str += ' -root {0}'.format(camera_root)
for member in members:
descendants = cmds.listRelatives(member,
allDescendents=True,
fullPath=True) or []
shapes = cmds.ls(descendants, shapes=True,
noIntermediate=True, long=True)
cameras = cmds.ls(shapes, type="camera", long=True)
if cameras:
if not set(shapes) - set(cameras):
continue
self.log.warning((
"Camera hierarchy contains additional geometry. "
"Extraction will fail.")
)
transform = cmds.listRelatives(
member, parent=True, fullPath=True)
transform = transform[0] if transform else member
job_str += ' -root {0}'.format(transform)
job_str += ' -file "{0}"'.format(path)

View file

@ -172,18 +172,19 @@ class ExtractCameraMayaScene(openpype.api.Extractor):
dag=True,
shapes=True,
long=True)
attrs = {"backgroundColorR": 0.0,
"backgroundColorG": 0.0,
"backgroundColorB": 0.0,
"overscan": 1.0}
# Fix PLN-178: Don't allow background color to be non-black
for cam, (attr, value) in itertools.product(cmds.ls(
baked_camera_shapes, type="camera", dag=True,
long=True), attrs.items()):
plug = "{0}.{1}".format(cam, attr)
unlock(plug)
cmds.setAttr(plug, value)
self.log.info("Performing extraction..")
cmds.select(cmds.ls(members, dag=True,

View file

@ -1,6 +1,7 @@
import os
import glob
import contextlib
import clique
import capture
@ -50,8 +51,32 @@ class ExtractPlayblast(openpype.api.Extractor):
['override_viewport_options']
)
preset = lib.load_capture_preset(data=self.capture_preset)
# Grab capture presets from the project settings
capture_presets = self.capture_preset
# Set resolution variables from capture presets
width_preset = capture_presets["Resolution"]["width"]
height_preset = capture_presets["Resolution"]["height"]
# Set resolution variables from asset values
asset_data = instance.data["assetEntity"]["data"]
asset_width = asset_data.get("resolutionWidth")
asset_height = asset_data.get("resolutionHeight")
review_instance_width = instance.data.get("review_width")
review_instance_height = instance.data.get("review_height")
preset['camera'] = camera
# Use the review instance resolution if set; otherwise fall back to
# the capture preset resolution, and finally to the asset resolution.
if review_instance_width and review_instance_height:
preset['width'] = review_instance_width
preset['height'] = review_instance_height
elif width_preset and height_preset:
preset['width'] = width_preset
preset['height'] = height_preset
elif asset_width and asset_height:
preset['width'] = asset_width
preset['height'] = asset_height
preset['start_frame'] = start
preset['end_frame'] = end
camera_option = preset.get("camera_option", {})
@ -90,7 +115,7 @@ class ExtractPlayblast(openpype.api.Extractor):
else:
preset["viewport_options"] = {"imagePlane": image_plane}
with lib.maintained_time():
filename = preset.get("filename", "%TEMP%")
# Force viewer to False in call to capture because we have our own
@ -153,12 +178,3 @@ class ExtractPlayblast(openpype.api.Extractor):
'camera_name': camera_node_name
}
instance.data["representations"].append(representation)

View file

@ -6,7 +6,8 @@ import openpype.api
from openpype.hosts.maya.api.lib import (
extract_alembic,
suspended_refresh,
maintained_selection,
iter_visible_nodes_in_range
)
@ -32,7 +33,7 @@ class ExtractAlembic(openpype.api.Extractor):
self.log.debug("Should be processed on farm, skipping.")
return
nodes, roots = self.get_members_and_roots(instance)
# Collect the start and end including handles
start = float(instance.data.get("frameStartHandle", 1))
@ -45,10 +46,6 @@ class ExtractAlembic(openpype.api.Extractor):
attr_prefixes = instance.data.get("attrPrefix", "").split(";")
attr_prefixes = [value for value in attr_prefixes if value.strip()]
self.log.info("Extracting pointcache..")
dirname = self.staging_dir(instance)
@ -62,8 +59,8 @@ class ExtractAlembic(openpype.api.Extractor):
"attrPrefix": attr_prefixes,
"writeVisibility": True,
"writeCreases": True,
"writeColorSets": writeColorSets,
"writeFaceSets": writeFaceSets,
"writeColorSets": instance.data.get("writeColorSets", False),
"writeFaceSets": instance.data.get("writeFaceSets", False),
"uvWrite": True,
"selection": True,
"worldSpace": instance.data.get("worldSpace", True)
@ -73,12 +70,22 @@ class ExtractAlembic(openpype.api.Extractor):
# Set the root nodes if we don't want to include parents
# The roots are to be considered the ones that are the actual
# direct members of the set
options["root"] = instance.data.get("setMembers")
options["root"] = roots
if int(cmds.about(version=True)) >= 2017:
# Since Maya 2017 alembic supports multiple uv sets - write them.
options["writeUVSets"] = True
if instance.data.get("visibleOnly", False):
# If we only want to include nodes that are visible in the frame
# range then we need to do our own check. Alembic's `visibleOnly`
# flag does not filter out those that are only hidden on some
# frames as it counts "animated" or "connected" visibilities as
# if they were always visible.
nodes = list(iter_visible_nodes_in_range(nodes,
start=start,
end=end))
with suspended_refresh():
with maintained_selection():
cmds.select(nodes, noExpand=True)
@ -101,3 +108,28 @@ class ExtractAlembic(openpype.api.Extractor):
instance.context.data["cleanupFullPaths"].append(path)
self.log.info("Extracted {} to {}".format(instance, dirname))
def get_members_and_roots(self, instance):
return instance[:], instance.data.get("setMembers")
class ExtractAnimation(ExtractAlembic):
label = "Extract Animation"
families = ["animation"]
def get_members_and_roots(self, instance):
# Collect the out set nodes
out_sets = [node for node in instance if node.endswith("out_SET")]
if len(out_sets) != 1:
raise RuntimeError("Couldn't find exactly one out_SET: "
"{0}".format(out_sets))
out_set = out_sets[0]
roots = cmds.sets(out_set, query=True)
# Include all descendants
nodes = roots + cmds.listRelatives(roots,
allDescendents=True,
fullPath=True) or []
return nodes, roots
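# Sketch of the override pattern above: ExtractAnimation only swaps how
# nodes and roots are collected, everything else reuses
# ExtractAlembic.process(). For a hypothetical instance:
#
#     nodes, roots = ExtractAnimation().get_members_and_roots(instance)
#     # roots: the direct out_SET members, used for the alembic -root flags
#     # nodes: those members plus all of their descendants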

View file

@ -1,5 +1,4 @@
import os
import glob
import capture
@ -28,7 +27,6 @@ class ExtractThumbnail(openpype.api.Extractor):
camera = instance.data['review_camera']
capture_preset = ""
capture_preset = (
instance.context.data["project_settings"]['maya']['publish']['ExtractPlayblast']['capture_preset']
)
@ -60,7 +58,29 @@ class ExtractThumbnail(openpype.api.Extractor):
"overscan": 1.0,
"depthOfField": cmds.getAttr("{0}.depthOfField".format(camera)),
}
capture_presets = capture_preset
# Set resolution variables from capture presets
width_preset = capture_presets["Resolution"]["width"]
height_preset = capture_presets["Resolution"]["height"]
# Set resolution variables from asset values
asset_data = instance.data["assetEntity"]["data"]
asset_width = asset_data.get("resolutionWidth")
asset_height = asset_data.get("resolutionHeight")
review_instance_width = instance.data.get("review_width")
review_instance_height = instance.data.get("review_height")
# Use the review instance resolution if set; otherwise fall back to
# the capture preset resolution, and finally to the asset resolution.
if review_instance_width and review_instance_height:
preset['width'] = review_instance_width
preset['height'] = review_instance_height
elif width_preset and height_preset:
preset['width'] = width_preset
preset['height'] = height_preset
elif asset_width and asset_height:
preset['width'] = asset_width
preset['height'] = asset_height
stagingDir = self.staging_dir(instance)
filename = "{0}".format(instance.name)
path = os.path.join(stagingDir, filename)
@ -81,9 +101,7 @@ class ExtractThumbnail(openpype.api.Extractor):
if preset.pop("isolate_view", False) and instance.data.get("isolate"):
preset["isolate"] = instance.data["setMembers"]
with lib.maintained_time():
# Force viewer to False in call to capture because we have our own
# viewer opening call to allow a signal to trigger between
# playblast and viewer
@ -152,12 +170,3 @@ class ExtractThumbnail(openpype.api.Extractor):
filepath = max(files, key=os.path.getmtime)
return filepath

View file

@ -51,18 +51,7 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
raise RuntimeError("No cameras found in empty instance.")
if not cls.validate_shapes:
cls.log.info("Not validating shapes in the content.")
return invalid
# non-camera shapes

View file

@ -27,6 +27,7 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
"yeticache"]
optional = True
actions = [openpype.api.RepairAction]
exclude_families = []
def process(self, instance):
context = instance.context
@ -56,7 +57,9 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
# compare with data on instance
errors = []
if any(instance.data["family"] in ef
for ef in self.exclude_families):
return
if inst_start != frame_start_handle:
errors.append("Instance start frame [ {} ] doesn't "
"match the one set on instance [ {} ]: "

View file

@ -6,7 +6,7 @@ from openpype.pipeline import PublishXmlValidationError
class ValidateReviewSubsetUniqueness(pyblish.api.ContextPlugin):
"""Validates that nodes has common root."""
"""Validates that review subset has unique name."""
order = openpype.api.ValidateContentsOrder
hosts = ["maya"]
@ -17,7 +17,7 @@ class ValidateReviewSubsetUniqueness(pyblish.api.ContextPlugin):
subset_names = []
for instance in context:
self.log.info("instance:: {}".format(instance.data))
self.log.debug("Instance: {}".format(instance.data))
if instance.data.get('publish'):
subset_names.append(instance.data.get('subset'))

View file

@ -4,8 +4,7 @@ import openpype.api
class ValidateSetdressRoot(pyblish.api.InstancePlugin):
"""
"""
"""Validate if set dress top root node is published."""
order = openpype.api.ValidateContentsOrder
label = "SetDress Root"

View file

@ -0,0 +1,51 @@
import pyblish.api
import openpype.api
from openpype.hosts.maya.api.lib import iter_visible_nodes_in_range
import openpype.hosts.maya.api.action
class ValidateAlembicVisibleOnly(pyblish.api.InstancePlugin):
"""Validates at least a single node is visible in frame range.
This validation only validates if the `visibleOnly` flag is enabled
on the instance - otherwise the validation is skipped.
"""
order = openpype.api.ValidateContentsOrder + 0.05
label = "Alembic Visible Only"
hosts = ["maya"]
families = ["pointcache", "animation"]
actions = [openpype.hosts.maya.api.action.SelectInvalidAction]
def process(self, instance):
if not instance.data.get("visibleOnly", False):
self.log.debug("Visible only is disabled. Validation skipped..")
return
invalid = self.get_invalid(instance)
if invalid:
start, end = self.get_frame_range(instance)
raise RuntimeError("No visible nodes found in "
"frame range {}-{}.".format(start, end))
@classmethod
def get_invalid(cls, instance):
if instance.data["family"] == "animation":
# Special behavior to use the nodes in out_SET
nodes = instance.data["out_hierarchy"]
else:
nodes = instance[:]
start, end = cls.get_frame_range(instance)
if not any(iter_visible_nodes_in_range(nodes, start, end)):
# Return the nodes we have considered so the user can identify
# them with the select invalid action
return nodes
@staticmethod
def get_frame_range(instance):
data = instance.data
return data["frameStartHandle"], data["frameEndHandle"]

View file

@ -1,10 +1,11 @@
import os
from openpype.api import get_project_settings
from openpype.pipeline import install_host
from openpype.hosts.maya.api import MayaHost
from maya import cmds
host = MayaHost()
install_host(host)
print("starting OpenPype usersetup")

View file

@ -10,6 +10,7 @@ from collections import OrderedDict
import clique
import nuke
from Qt import QtCore, QtWidgets
from openpype.client import (
get_project,
@ -20,21 +21,24 @@ from openpype.client import (
)
from openpype.api import (
Logger,
BuildWorkfile,
get_version_from_path,
get_workdir_data,
get_asset,
get_current_project_settings,
)
from openpype.tools.utils import host_tools
from openpype.lib import env_value_to_bool
from openpype.lib.path_tools import HostDirmap
from openpype.settings import (
get_project_settings,
get_anatomy_settings,
)
from openpype.modules import ModulesManager
from openpype.pipeline import (
discover_legacy_creator_plugins,
legacy_io,
Anatomy,
)
from . import gizmo_menu
@ -61,7 +65,10 @@ class Context:
main_window = None
context_label = None
project_name = os.getenv("AVALON_PROJECT")
# Workfile related code
workfiles_launched = False
workfiles_tool_timer = None
# Seems unused
_project_doc = None
@ -2382,12 +2389,19 @@ def select_nodes(nodes):
def launch_workfiles_app():
"""Show workfiles tool on nuke launch.
Trigger to show workfiles tool on application launch. Can be executed only
once all other calls are ignored.
Workfiles tool show is deffered after application initialization using
QTimer.
"""
if Context.workfiles_launched:
return
Context.workfiles_launched = True
# get all important settings
open_at_start = env_value_to_bool(
@ -2398,10 +2412,40 @@ def launch_workfiles_app():
if not open_at_start:
return
# Show workfiles tool using a timer
# - this will probably be triggered during initialization, in which
# case the application is not yet able to show UIs, so it must be
# deferred using a timer
# - the timer should be processed once initialization ends and the
# application starts to process events
timer = QtCore.QTimer()
timer.timeout.connect(_launch_workfile_app)
timer.setInterval(100)
Context.workfiles_tool_timer = timer
timer.start()
def _launch_workfile_app():
# Safeguard to not show window when application is still starting up
# or is already closing down.
closing_down = QtWidgets.QApplication.closingDown()
starting_up = QtWidgets.QApplication.startingUp()
# Stop the timer if the application finished starting up or is closing down
if closing_down or not starting_up:
Context.workfiles_tool_timer.stop()
Context.workfiles_tool_timer = None
# Skip if application is starting up or closing down
if starting_up or closing_down:
return
# Make sure "on top" is enabled on first show so the window is not hidden
# under the main Nuke window
# - this happened on CentOS 7 because the focus of Nuke changes to the
# main window after showing (because of initialization), which moves
# the workfiles tool under it
host_tools.show_workfiles(parent=None, on_top=True)
def process_workfile_builder():

View file

@ -120,8 +120,9 @@ def install():
nuke.addOnCreate(workfile_settings.set_context_settings, nodeClass="Root")
nuke.addOnCreate(workfile_settings.set_favorites, nodeClass="Root")
nuke.addOnCreate(process_workfile_builder, nodeClass="Root")
nuke.addOnCreate(launch_workfiles_app, nodeClass="Root")
_install_menu()
def uninstall():
@ -141,6 +142,14 @@ def uninstall():
_uninstall_menu()
def _show_workfiles():
# Make sure parent is not set
# - this makes the Workfiles tool a separate window, which
# avoids issues with reopening
# - it is possible to explicitly change the "on top" flag of the tool
host_tools.show_workfiles(parent=None, on_top=False)
def _install_menu():
# uninstall original avalon menu
main_window = get_main_window()
@ -157,7 +166,7 @@ def _install_menu():
menu.addSeparator()
menu.addCommand(
"Work Files...",
_show_workfiles
)
menu.addSeparator()

View file

@ -54,20 +54,28 @@ class LoadClip(plugin.NukeLoader):
script_start = int(nuke.root()["first_frame"].value())
# option gui
options_defaults = {
"start_at_workfile": True,
"add_retime": True
}
node_name_template = "{class_name}_{ext}"
@classmethod
def get_options(cls, *args):
return [
qargparse.Boolean(
"start_at_workfile",
help="Load at workfile start frame",
default=cls.options_defaults["start_at_workfile"]
),
qargparse.Boolean(
"add_retime",
help="Load with retime",
default=cls.options_defaults["add_retime"]
)
]
@classmethod
def get_representations(cls):
return (
@ -86,7 +94,10 @@ class LoadClip(plugin.NukeLoader):
file = self.fname.replace("\\", "/")
start_at_workfile = options.get(
"start_at_workfile", self.defaults["start_at_workfile"])
"start_at_workfile", self.options_defaults["start_at_workfile"])
add_retime = options.get(
"add_retime", self.options_defaults["add_retime"])
version = context['version']
version_data = version.get("data", {})
@ -151,7 +162,7 @@ class LoadClip(plugin.NukeLoader):
data_imprint = {}
for k in add_keys:
if k == 'version':
data_imprint[k] = context["version"]['name']
elif k == 'colorspace':
colorspace = repre["data"].get(k)
colorspace = colorspace or version_data.get(k)
@ -159,10 +170,13 @@ class LoadClip(plugin.NukeLoader):
if used_colorspace:
data_imprint["used_colorspace"] = used_colorspace
else:
data_imprint[k] = context["version"]['data'].get(
k, str(None))
data_imprint.update({"objectName": read_name})
data_imprint["objectName"] = read_name
if add_retime and version_data.get("retime", None):
data_imprint["addRetime"] = True
read_node["tile_color"].setValue(int("0x4ecd25ff", 16))
@ -174,7 +188,7 @@ class LoadClip(plugin.NukeLoader):
loader=self.__class__.__name__,
data=data_imprint)
if version_data.get("retime", None):
if add_retime and version_data.get("retime", None):
self._make_retimes(read_node, version_data)
self.set_as_member(read_node)
@ -198,7 +212,12 @@ class LoadClip(plugin.NukeLoader):
read_node = nuke.toNode(container['objectName'])
file = get_representation_path(representation).replace("\\", "/")
start_at_workfile = "start at" in read_node['frame_mode'].value()
add_retime = [
key for key in read_node.knobs().keys()
if "addRetime" in key
]
project_name = legacy_io.active_project()
version_doc = get_version_by_id(project_name, representation["parent"])
@ -286,7 +305,7 @@ class LoadClip(plugin.NukeLoader):
"updated to version: {}".format(version_doc.get("name"))
)
if version_data.get("retime", None):
if add_retime and version_data.get("retime", None):
self._make_retimes(read_node, version_data)
else:
self.clear_members(read_node)

View file

@ -104,7 +104,10 @@ class ExtractReviewDataMov(openpype.api.Extractor):
self, instance, o_name, o_data["extension"],
multiple_presets)
if "render.farm" in families:
if (
"render.farm" in families or
"prerender.farm" in families
):
if "review" in instance.data["families"]:
instance.data["families"].remove("review")

View file

@ -4,6 +4,7 @@ import nuke
import copy
import pyblish.api
import six
import openpype
from openpype.hosts.nuke.api import (
@ -12,7 +13,6 @@ from openpype.hosts.nuke.api import (
get_view_process_node
)
class ExtractSlateFrame(openpype.api.Extractor):
"""Extracts movie and thumbnail with baked in luts
@ -152,6 +152,7 @@ class ExtractSlateFrame(openpype.api.Extractor):
self.log.debug("__ first_frame: {}".format(first_frame))
self.log.debug("__ slate_first_frame: {}".format(slate_first_frame))
above_slate_node = slate_node.dependencies().pop()
# fallback if files do not exist
if self._check_frames_exists(instance):
# Read node
@ -164,8 +165,16 @@ class ExtractSlateFrame(openpype.api.Extractor):
r_node["colorspace"].setValue(instance.data["colorspace"])
previous_node = r_node
temporary_nodes = [previous_node]
# adding copy metadata node for correct frame metadata
cm_node = nuke.createNode("CopyMetaData")
cm_node.setInput(0, previous_node)
cm_node.setInput(1, above_slate_node)
previous_node = cm_node
temporary_nodes.append(cm_node)
else:
previous_node = above_slate_node
temporary_nodes = []
# only create colorspace baking if toggled on
@ -236,6 +245,48 @@ class ExtractSlateFrame(openpype.api.Extractor):
int(slate_first_frame)
)
# Add file to representation files
# - get write node
write_node = instance.data["writeNode"]
# - evaluate filepaths for first frame and slate frame
first_filename = os.path.basename(
write_node["file"].evaluate(first_frame))
slate_filename = os.path.basename(
write_node["file"].evaluate(slate_first_frame))
# Find matching representation based on first filename
matching_repre = None
is_sequence = None
for repre in instance.data["representations"]:
files = repre["files"]
if (
not isinstance(files, six.string_types)
and first_filename in files
):
matching_repre = repre
is_sequence = True
break
elif files == first_filename:
matching_repre = repre
is_sequence = False
break
if not matching_repre:
self.log.info((
"Matching reresentaion was not found."
" Representation files were not filled with slate."
))
return
# Add frame to matching representation files
if not is_sequence:
matching_repre["files"] = [first_filename, slate_filename]
elif slate_filename not in matching_repre["files"]:
matching_repre["files"].insert(0, slate_filename)
self.log.warning("Added slate frame to representation files")
def add_comment_slate_node(self, instance, node):
comment = instance.context.data.get("comment")

View file

@ -35,6 +35,7 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
if node is None:
return
instance.data["writeNode"] = node
self.log.debug("checking instance: {}".format(instance))
# Determine defined file type

View file

@ -319,14 +319,13 @@ def get_current_timeline_items(
selected_track_count = timeline.GetTrackCount(track_type)
# loop all tracks and get items
_clips = {}
for track_index in range(1, (int(selected_track_count) + 1)):
_track_name = timeline.GetTrackName(track_type, track_index)
# filter out all unmatched track names
if track_name and _track_name not in track_name:
continue
timeline_items = timeline.GetItemListInTrack(
track_type, track_index)
@ -348,12 +347,8 @@ def get_current_timeline_items(
"index": clip_index
}
ti_color = ti.GetClipColor()
if filter and selecting_color in ti_color or not filter:
selected_clips.append(data)
return selected_clips

View file

@ -506,7 +506,7 @@ class Creator(LegacyCreator):
super(Creator, self).__init__(*args, **kwargs)
from openpype.api import get_current_project_settings
resolve_p_settings = get_current_project_settings().get("resolve")
self.presets = {}
if resolve_p_settings:
self.presets = resolve_p_settings["create"].get(
self.__class__.__name__, {})

View file

@ -116,12 +116,13 @@ class CreateShotClip(resolve.Creator):
"order": 0},
"vSyncTrack": {
"value": gui_tracks, # noqa
"type": "QComboBox",
"label": "Hero track",
"target": "ui",
"toolTip": "Select driving track name which should be mastering all others", # noqa
"order": 1}
"type": "QComboBox",
"label": "Hero track",
"target": "ui",
"toolTip": "Select driving track name which should be mastering all others", # noqa
"order": 1
}
}
},
"publishSettings": {
"type": "section",
@ -172,28 +173,31 @@ class CreateShotClip(resolve.Creator):
"target": "ui",
"order": 4,
"value": {
"workfileFrameStart": {
"value": 1001,
"type": "QSpinBox",
"label": "Workfiles Start Frame",
"target": "tag",
"toolTip": "Set workfile starting frame number", # noqa
"order": 0},
"handleStart": {
"value": 0,
"type": "QSpinBox",
"label": "Handle start (head)",
"target": "tag",
"toolTip": "Handle at start of clip", # noqa
"order": 1},
"handleEnd": {
"value": 0,
"type": "QSpinBox",
"label": "Handle end (tail)",
"target": "tag",
"toolTip": "Handle at end of clip", # noqa
"order": 2},
}
"workfileFrameStart": {
"value": 1001,
"type": "QSpinBox",
"label": "Workfiles Start Frame",
"target": "tag",
"toolTip": "Set workfile starting frame number", # noqa
"order": 0
},
"handleStart": {
"value": 0,
"type": "QSpinBox",
"label": "Handle start (head)",
"target": "tag",
"toolTip": "Handle at start of clip", # noqa
"order": 1
},
"handleEnd": {
"value": 0,
"type": "QSpinBox",
"label": "Handle end (tail)",
"target": "tag",
"toolTip": "Handle at end of clip", # noqa
"order": 2
}
}
}
}
@ -229,8 +233,10 @@ class CreateShotClip(resolve.Creator):
v_sync_track = widget.result["vSyncTrack"]["value"]
# sort selected trackItems by
sorted_selected_track_items = []
unsorted_selected_track_items = []
print("_____ selected ______")
print(self.selected)
for track_item_data in self.selected:
if track_item_data["track"]["name"] in v_sync_track:
sorted_selected_track_items.append(track_item_data)
@ -253,10 +259,10 @@ class CreateShotClip(resolve.Creator):
"sq_frame_start": sq_frame_start,
"sq_markers": sq_markers
}
print(kwargs)
for i, track_item_data in enumerate(sorted_selected_track_items):
self.rename_index = i
self.log.info(track_item_data)
# convert track item to timeline media pool item
track_item = resolve.PublishClip(
self, track_item_data, **kwargs).convert()

View file

@ -1,6 +1,10 @@
from copy import deepcopy
from importlib import reload
from openpype.client import (
get_version_by_id,
get_last_version_by_subset_id,
)
from openpype.hosts import resolve
from openpype.pipeline import (
get_representation_path,
@ -19,7 +23,7 @@ class LoadClip(resolve.TimelineItemLoader):
"""
families = ["render2d", "source", "plate", "render", "review"]
representations = ["exr", "dpx", "jpg", "jpeg", "png", "h264", ".mov"]
representations = ["exr", "dpx", "jpg", "jpeg", "png", "h264", "mov"]
label = "Load as clip"
order = -10
@ -96,10 +100,8 @@ class LoadClip(resolve.TimelineItemLoader):
namespace = container['namespace']
timeline_item_data = resolve.get_pype_timeline_item_by_name(namespace)
timeline_item = timeline_item_data["clip"]["item"]
project_name = legacy_io.active_project()
version = get_version_by_id(project_name, representation["parent"])
version_data = version.get("data", {})
version_name = version.get("name", None)
colorspace = version_data.get("colorspace", None)
@ -138,19 +140,22 @@ class LoadClip(resolve.TimelineItemLoader):
@classmethod
def set_item_color(cls, timeline_item, version):
# define version name
version_name = version.get("name", None)
project_name = legacy_io.active_project()
last_version_doc = get_last_version_by_subset_id(
project_name,
version["parent"],
fields=["name"]
)
if last_version_doc:
last_version = last_version_doc["name"]
else:
last_version = None
# set clip colour
if version_name == last_version:
timeline_item.SetClipColor(cls.clip_color_last)
else:
timeline_item.SetClipColor(cls.clip_color)

View file

@ -30,7 +30,8 @@ class PrecollectWorkfile(pyblish.api.ContextPlugin):
"asset": asset,
"subset": "{}{}".format(asset, subset.capitalize()),
"item": project,
"family": "workfile"
"family": "workfile",
"families": []
}
# create instance with workfile

View file

@ -1,20 +1,8 @@
from .pipeline import (
TrayPublisherHost,
)
__all__ = (
"install",
"ls",
"set_project_name",
"get_context_title",
"get_context_data",
"update_context_data",
"TrayPublisherHost",
)

View file

@ -9,6 +9,8 @@ from openpype.pipeline import (
register_creator_plugin_path,
legacy_io,
)
from openpype.host import HostBase, INewPublisher
ROOT_DIR = os.path.dirname(os.path.dirname(
os.path.abspath(__file__)
@ -17,6 +19,35 @@ PUBLISH_PATH = os.path.join(ROOT_DIR, "plugins", "publish")
CREATE_PATH = os.path.join(ROOT_DIR, "plugins", "create")
class TrayPublisherHost(HostBase, INewPublisher):
name = "traypublisher"
def install(self):
os.environ["AVALON_APP"] = self.name
legacy_io.Session["AVALON_APP"] = self.name
pyblish.api.register_host("traypublisher")
pyblish.api.register_plugin_path(PUBLISH_PATH)
register_creator_plugin_path(CREATE_PATH)
def get_context_title(self):
return HostContext.get_project_name()
def get_context_data(self):
return HostContext.get_context_data()
def update_context_data(self, data, changes):
HostContext.save_context_data(data, changes)
def set_project_name(self, project_name):
# TODO Deregister project specific plugins and register new project
# plugins
os.environ["AVALON_PROJECT"] = project_name
legacy_io.Session["AVALON_PROJECT"] = project_name
legacy_io.install()
HostContext.set_project_name(project_name)
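# A minimal sketch of wiring this host up, mirroring how MayaHost is
# installed in userSetup above (the project name is a placeholder):
#
#     from openpype.pipeline import install_host
#
#     host = TrayPublisherHost()
#     install_host(host)
#     host.set_project_name("my_project")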
class HostContext:
_context_json_path = None
@ -150,32 +181,3 @@ def get_context_data():
def update_context_data(data, changes):
HostContext.save_context_data(data)

View file

@ -1,8 +1,8 @@
from openpype.lib.attribute_definitions import FileDef
from openpype.pipeline import (
Creator,
CreatedInstance
)
from .pipeline import (
list_instances,
@ -12,6 +12,29 @@ from .pipeline import (
)
IMAGE_EXTENSIONS = [
".ani", ".anim", ".apng", ".art", ".bmp", ".bpg", ".bsave", ".cal",
".cin", ".cpc", ".cpt", ".dds", ".dpx", ".ecw", ".exr", ".fits",
".flic", ".flif", ".fpx", ".gif", ".hdri", ".hevc", ".icer",
".icns", ".ico", ".cur", ".ics", ".ilbm", ".jbig", ".jbig2",
".jng", ".jpeg", ".jpeg-ls", ".jpeg", ".2000", ".jpg", ".xr",
".jpeg", ".xt", ".jpeg-hdr", ".kra", ".mng", ".miff", ".nrrd",
".ora", ".pam", ".pbm", ".pgm", ".ppm", ".pnm", ".pcx", ".pgf",
".pictor", ".png", ".psb", ".psp", ".qtvr", ".ras",
".rgbe", ".logluv", ".tiff", ".sgi", ".tga", ".tiff", ".tiff/ep",
".tiff/it", ".ufo", ".ufp", ".wbmp", ".webp", ".xbm", ".xcf",
".xpm", ".xwd"
]
VIDEO_EXTENSIONS = [
".3g2", ".3gp", ".amv", ".asf", ".avi", ".drc", ".f4a", ".f4b",
".f4p", ".f4v", ".flv", ".gif", ".gifv", ".m2v", ".m4p", ".m4v",
".mkv", ".mng", ".mov", ".mp2", ".mp4", ".mpe", ".mpeg", ".mpg",
".mpv", ".mxf", ".nsv", ".ogg", ".ogv", ".qt", ".rm", ".rmvb",
".roq", ".svi", ".vob", ".webm", ".wmv", ".yuv"
]
REVIEW_EXTENSIONS = IMAGE_EXTENSIONS + VIDEO_EXTENSIONS
class TrayPublishCreator(Creator):
create_allow_context_change = True
host_name = "traypublisher"
@ -37,6 +60,21 @@ class TrayPublishCreator(Creator):
# Use the same attributes as for the instance attributes
return self.get_instance_attr_defs()
def _store_new_instance(self, new_instance):
"""Tray publisher specific method to store instance.
Instance is stored into "workfile" of traypublisher and also add it
to CreateContext.
Args:
new_instance (CreatedInstance): Instance that should be stored.
"""
# Host implementation of storing metadata about instance
HostContext.add_instance(new_instance.data_to_store())
# Add instance to current context
self._add_instance_to_context(new_instance)
class SettingsCreator(TrayPublishCreator):
create_allow_context_change = True
@ -58,19 +96,27 @@ class SettingsCreator(TrayPublishCreator):
data["settings_creator"] = True
# Create new instance
new_instance = CreatedInstance(self.family, subset_name, data, self)
self._store_new_instance(new_instance)
def get_instance_attr_defs(self):
return [
FileDef(
"filepath",
"representation_files",
folders=False,
extensions=self.extensions,
allow_sequences=self.allow_sequences,
label="Filepath",
single_item=not self.allow_multiple_items,
label="Representations",
),
FileDef(
"reviewable",
folders=False,
extensions=REVIEW_EXTENSIONS,
allow_sequences=True,
single_item=True,
label="Reviewable representations",
extensions_label="Single reviewable item"
)
]
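For orientation, a FileDef value arrives as an item (or list of items) holding a directory and file names, which is how the collectors later read it; a sketch with hypothetical paths:

import os

# Hypothetical FileDef value, as read by the collectors below.
file_item = {
    "directory": "/projects/demo/incoming",
    "filenames": ["chair_v001.mov"],
}
filepath = os.path.join(file_item["directory"], file_item["filenames"][0])
# -> "/projects/demo/incoming/chair_v001.mov"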
@ -92,6 +138,7 @@ class SettingsCreator(TrayPublishCreator):
"detailed_description": item_data["detailed_description"],
"extensions": item_data["extensions"],
"allow_sequences": item_data["allow_sequences"],
"allow_multiple_items": item_data["allow_multiple_items"],
"default_variants": item_data["default_variants"]
}
)


@ -0,0 +1,216 @@
import copy
import os
import re
from openpype.client import get_assets, get_asset_by_name
from openpype.lib import (
FileDef,
BoolDef,
get_subset_name_with_asset_doc,
TaskNotSetError,
)
from openpype.pipeline import (
CreatedInstance,
CreatorError
)
from openpype.hosts.traypublisher.api.plugin import TrayPublishCreator
class BatchMovieCreator(TrayPublishCreator):
"""Creates instances from movie file(s).
Intended for .mov files, but should work for any video file.
Doesn't handle image sequences though.
"""
identifier = "render_movie_batch"
label = "Batch Movies"
family = "render"
description = "Publish batch of video files"
create_allow_context_change = False
version_regex = re.compile(r"^(.+)_v([0-9]+)$")
def __init__(self, project_settings, *args, **kwargs):
super(BatchMovieCreator, self).__init__(project_settings,
*args, **kwargs)
creator_settings = (
project_settings["traypublisher"]["BatchMovieCreator"]
)
self.default_variants = creator_settings["default_variants"]
self.default_tasks = creator_settings["default_tasks"]
self.extensions = creator_settings["extensions"]
def get_icon(self):
return "fa.file"
def create(self, subset_name, data, pre_create_data):
file_paths = pre_create_data.get("filepath")
if not file_paths:
return
for file_info in file_paths:
instance_data = copy.deepcopy(data)
file_name = file_info["filenames"][0]
filepath = os.path.join(file_info["directory"], file_name)
instance_data["creator_attributes"] = {"filepath": filepath}
asset_doc, version = self.get_asset_doc_from_file_name(
file_name, self.project_name)
subset_name, task_name = self._get_subset_and_task(
asset_doc, data["variant"], self.project_name)
instance_data["task"] = task_name
instance_data["asset"] = asset_doc["name"]
# Create new instance
new_instance = CreatedInstance(self.family, subset_name,
instance_data, self)
self._store_new_instance(new_instance)
def get_asset_doc_from_file_name(self, source_filename, project_name):
"""Try to parse out asset name from file name provided.
Artists might provide various file name formats.
Currently handled:
- chair.mov
- chair_v001.mov
- my_chair_to_upload.mov
"""
version = None
asset_name = os.path.splitext(source_filename)[0]
# Always first check if source filename is in assets
matching_asset_doc = self._get_asset_by_name_case_not_sensitive(
project_name, asset_name)
if matching_asset_doc is None:
matching_asset_doc, version = (
self._parse_with_version(project_name, asset_name))
if matching_asset_doc is None:
matching_asset_doc = self._parse_containing(project_name,
asset_name)
if matching_asset_doc is None:
raise CreatorError(
"Cannot guess asset name from {}".format(source_filename))
return matching_asset_doc, version
def _parse_with_version(self, project_name, asset_name):
"""Try to parse asset name from a file name containing version too
Eg. 'chair_v001.mov' >> 'chair', 1
"""
self.log.debug((
"Asset doc by \"{}\" was not found, trying version regex."
).format(asset_name))
matching_asset_doc = version_number = None
regex_result = self.version_regex.findall(asset_name)
if regex_result:
_asset_name, _version_number = regex_result[0]
matching_asset_doc = self._get_asset_by_name_case_not_sensitive(
project_name, _asset_name)
if matching_asset_doc:
version_number = int(_version_number)
return matching_asset_doc, version_number
def _parse_containing(self, project_name, asset_name):
"""Look if file name contains any existing asset name"""
for asset_doc in get_assets(project_name, fields=["name"]):
if asset_doc["name"].lower() in asset_name.lower():
return get_asset_by_name(project_name, asset_doc["name"])
def _get_subset_and_task(self, asset_doc, variant, project_name):
"""Create subset name according to standard template process"""
task_name = self._get_task_name(asset_doc)
try:
subset_name = get_subset_name_with_asset_doc(
self.family,
variant,
task_name,
asset_doc,
project_name
)
except TaskNotSetError:
# Create instance with fake task
# - instance will be marked as invalid so it can't be published
# but the user has the ability to change it
# NOTE: This expects that there is no task named 'Undefined' on the asset
task_name = "Undefined"
subset_name = get_subset_name_with_asset_doc(
self.family,
variant,
task_name,
asset_doc,
project_name
)
return subset_name, task_name
def _get_task_name(self, asset_doc):
"""Get applicable task from 'asset_doc' """
available_task_names = {}
asset_tasks = asset_doc.get("data", {}).get("tasks") or {}
for task_name in asset_tasks.keys():
available_task_names[task_name.lower()] = task_name
task_name = None
for _task_name in self.default_tasks:
_task_name_low = _task_name.lower()
if _task_name_low in available_task_names:
task_name = available_task_names[_task_name_low]
break
return task_name
def get_instance_attr_defs(self):
return [
BoolDef(
"add_review_family",
default=True,
label="Review"
)
]
def get_pre_create_attr_defs(self):
# Use same attributes as for instance attributes
return [
FileDef(
"filepath",
folders=False,
single_item=False,
extensions=self.extensions,
label="Filepath"
),
BoolDef(
"add_review_family",
default=True,
label="Review"
)
]
def get_detail_description(self):
return """# Publish batch of .mov to multiple assets.
File names must contain only the asset name, or asset name + version
(eg. 'chair.mov' or 'chair_v001.mov'; names like `my_chair_v001.mov` are not reliably matched).
"""
def _get_asset_by_name_case_not_sensitive(self, project_name, asset_name):
"""Handle more cases in file names"""
asset_name = re.compile(asset_name, re.IGNORECASE)
assets = list(get_assets(project_name, asset_names=[asset_name]))
if assets:
if len(assets) > 1:
self.log.warning("Too many records found for {}".format(
asset_name))
return
return assets.pop()
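A quick illustrative check of the version_regex used above, with hypothetical base names (extension already stripped):

import re

version_regex = re.compile(r"^(.+)_v([0-9]+)$")
for name in ("chair", "chair_v001", "my_chair_to_upload"):
    print(name, version_regex.findall(name))
# chair []
# chair_v001 [('chair', '001')]   -> asset "chair", version 1
# my_chair_to_upload []           -> falls through to the "containing" lookup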


@ -0,0 +1,47 @@
import os
import pyblish.api
from openpype.pipeline import OpenPypePyblishPluginMixin
class CollectMovieBatch(
pyblish.api.InstancePlugin, OpenPypePyblishPluginMixin
):
"""Collect file url for batch movies and create representation.
Adds the 'review' family to the instance and the 'review' tag to the
representation, based on the value of the toggle button on the creator.
"""
label = "Collect Movie Batch Files"
order = pyblish.api.CollectorOrder
hosts = ["traypublisher"]
def process(self, instance):
if instance.data.get("creator_identifier") != "render_movie_batch":
return
creator_attributes = instance.data["creator_attributes"]
file_url = creator_attributes["filepath"]
file_name = os.path.basename(file_url)
_, ext = os.path.splitext(file_name)
repre = {
"name": ext[1:],
"ext": ext[1:],
"files": file_name,
"stagingDir": os.path.dirname(file_url),
"tags": []
}
if creator_attributes["add_review_family"]:
repre["tags"].append("review")
instance.data["families"].append("review")
instance.data["representations"].append(repre)
instance.data["source"] = file_url
self.log.debug("instance.data {}".format(instance.data))


@ -1,31 +0,0 @@
import pyblish.api
from openpype.lib import BoolDef
from openpype.pipeline import OpenPypePyblishPluginMixin
class CollectReviewFamily(
pyblish.api.InstancePlugin, OpenPypePyblishPluginMixin
):
"""Add review family."""
label = "Collect Review Family"
order = pyblish.api.CollectorOrder - 0.49
hosts = ["traypublisher"]
families = [
"image",
"render",
"plate",
"review"
]
def process(self, instance):
values = self.get_attr_values_from_data(instance.data)
if values.get("add_review_family"):
instance.data["families"].append("review")
@classmethod
def get_attribute_defs(cls):
return [
BoolDef("add_review_family", label="Review", default=True)
]


@ -1,9 +1,31 @@
import os
import tempfile
import clique
import pyblish.api
class CollectSettingsSimpleInstances(pyblish.api.InstancePlugin):
"""Collect data for instances created by settings creators."""
"""Collect data for instances created by settings creators.
The plugin creates representations for simple instances based on the
'representation_files' attribute stored on instance data.
There is also the possibility of a reviewable representation, stored under
the 'reviewable' attribute on instance data. If a representation with the
same files as 'reviewable' contains was already created, that
representation is reused for review.
Representations can be marked for review, in which case the 'review'
family is also added to the instance families. Only one representation can
be marked for review, so the **first** representation with an extension
available in '_review_extensions' is used.
The instance 'source' is the path of the last representation created from
'representation_files'.
A staging directory is set on the instance, though it is probably never
used because each created representation has its own staging dir.
"""
label = "Collect Settings Simple Instances"
order = pyblish.api.CollectorOrder - 0.49
@ -14,37 +36,193 @@ class CollectSettingsSimpleInstances(pyblish.api.InstancePlugin):
if not instance.data.get("settings_creator"):
return
if "families" not in instance.data:
instance.data["families"] = []
instance_label = instance.data["name"]
# Create instance's staging dir in temp
tmp_folder = tempfile.mkdtemp(prefix="traypublisher_")
instance.data["stagingDir"] = tmp_folder
instance.context.data["cleanupFullPaths"].append(tmp_folder)
if "representations" not in instance.data:
instance.data["representations"] = []
repres = instance.data["representations"]
self.log.debug((
"Created temp staging directory for instance {}. {}"
).format(instance_label, tmp_folder))
# Store filepaths for validation of their existence
source_filepaths = []
# Make sure there are no representations with same name
repre_names_counter = {}
# Store created names for logging
repre_names = []
# Store set of filepaths per each representation
representation_files_mapping = []
source = self._create_main_representations(
instance,
source_filepaths,
repre_names_counter,
repre_names,
representation_files_mapping
)
self._create_review_representation(
instance,
source_filepaths,
repre_names_counter,
repre_names,
representation_files_mapping
)
instance.data["source"] = source
instance.data["sourceFilepaths"] = list(set(source_filepaths))
self.log.debug(
(
"Created Simple Settings instance \"{}\""
" with {} representations: {}"
).format(
instance_label,
len(instance.data["representations"]),
", ".join(repre_names)
)
)
def _create_main_representations(
self,
instance,
source_filepaths,
repre_names_counter,
repre_names,
representation_files_mapping
):
creator_attributes = instance.data["creator_attributes"]
filepath_items = creator_attributes["representation_files"]
if not isinstance(filepath_items, list):
filepath_items = [filepath_items]
source = None
for filepath_item in filepath_items:
# Skip if filepath item does not have filenames
if not filepath_item["filenames"]:
continue
filepaths = {
os.path.join(filepath_item["directory"], filename)
for filename in filepath_item["filenames"]
}
source_filepaths.extend(filepaths)
source = self._calculate_source(filepaths)
representation = self._create_representation_data(
filepath_item, repre_names_counter, repre_names
)
instance.data["representations"].append(representation)
representation_files_mapping.append(
(filepaths, representation, source)
)
return source
def _create_review_representation(
self,
instance,
source_filepaths,
repre_names_counter,
repre_names,
representation_files_mapping
):
# Skip review representation creation if there are no representations
# created for "main" part
# - review representation must not be created in that case so
# validation can care about it
if not representation_files_mapping:
self.log.warning((
"There are missing source representations."
" Creation of review representation was skipped."
))
return
creator_attributes = instance.data["creator_attributes"]
filepath_item = creator_attributes["filepath"]
self.log.info(filepath_item)
filepaths = [
os.path.join(filepath_item["directory"], filename)
for filename in filepath_item["filenames"]
]
review_file_item = creator_attributes["reviewable"]
filenames = review_file_item.get("filenames")
if not filenames:
self.log.debug((
"Filepath for review is not defined."
" Skipping review representation creation."
))
return
instance.data["sourceFilepaths"] = filepaths
instance.data["stagingDir"] = filepath_item["directory"]
filepaths = {
os.path.join(review_file_item["directory"], filename)
for filename in filenames
}
source_filepaths.extend(filepaths)
# First try to find out representation with same filepaths
# so it's not needed to create new representation just for review
review_representation = None
# Review path (only for logging)
review_path = None
for item in representation_files_mapping:
_filepaths, representation, repre_path = item
if _filepaths == filepaths:
review_representation = representation
review_path = repre_path
break
if review_representation is None:
self.log.debug("Creating new review representation")
review_path = self._calculate_source(filepaths)
review_representation = self._create_representation_data(
review_file_item, repre_names_counter, repre_names
)
instance.data["representations"].append(review_representation)
if "review" not in instance.data["families"]:
instance.data["families"].append("review")
review_representation["tags"].append("review")
self.log.debug("Representation {} was marked for review. {}".format(
review_representation["name"], review_path
))
def _create_representation_data(
self, filepath_item, repre_names_counter, repre_names
):
"""Create new representation data based on file item.
Args:
filepath_item (Dict[str, Any]): Item with information about
representation paths.
repre_names_counter (Dict[str, int]): Store count of representation
names.
repre_names (List[str]): All used representation names. For
logging purposes.
Returns:
Dict: Prepared base representation data.
"""
filenames = filepath_item["filenames"]
_, ext = os.path.splitext(filenames[0])
ext = ext[1:]
if len(filenames) == 1:
filenames = filenames[0]
repres.append({
"ext": ext,
"name": ext,
repre_name = repre_ext = ext[1:]
if repre_name not in repre_names_counter:
repre_names_counter[repre_name] = 2
else:
counter = repre_names_counter[repre_name]
repre_names_counter[repre_name] += 1
repre_name = "{}_{}".format(repre_name, counter)
repre_names.append(repre_name)
return {
"ext": repre_ext,
"name": repre_name,
"stagingDir": filepath_item["directory"],
"files": filenames
})
"files": filenames,
"tags": []
}
self.log.debug("Created Simple Settings instance {}".format(
instance.data
))
def _calculate_source(self, filepaths):
cols, rems = clique.assemble(filepaths)
if cols:
source = cols[0].format("{head}{padding}{tail}")
elif rems:
source = rems[0]
return source
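For reference, clique folds a frame sequence into a single padded pattern, which is what _calculate_source returns for sequences (hypothetical paths):

import clique

filepaths = {
    "/stage/render.0001.exr",
    "/stage/render.0002.exr",
    "/stage/render.0003.exr",
}
collections, remainders = clique.assemble(filepaths)
print(collections[0].format("{head}{padding}{tail}"))
# -> /stage/render.%04d.exr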


@ -0,0 +1,15 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
<error id="main">
<title>Invalid frame range</title>
<description>
## Invalid frame range
Expected duration or '{duration}' frames set in database, workfile contains only '{found}' frames.
### How to repair?
Modify configuration in the database or tweak frame range in the workfile.
</description>
</error>
</root>


@ -3,8 +3,17 @@ import pyblish.api
from openpype.pipeline import PublishValidationError
class ValidateWorkfilePath(pyblish.api.InstancePlugin):
"""Validate existence of workfile instance existence."""
class ValidateFilePath(pyblish.api.InstancePlugin):
"""Validate existence of source filepaths on instance.
The plugin looks into the 'sourceFilepaths' key and validates that the
paths listed there actually exist on disk.
It also fails when the key is present but empty, so do not fill the key
if an empty value should not cause an error.
This is primarily created for Simple Creator instances.
"""
label = "Validate Workfile"
order = pyblish.api.ValidatorOrder - 0.49
@ -14,12 +23,28 @@ class ValidateWorkfilePath(pyblish.api.InstancePlugin):
def process(self, instance):
if "sourceFilepaths" not in instance.data:
self.log.info((
"Can't validate source filepaths existence."
"Skipped validation of source filepaths existence."
" Instance does not have collected 'sourceFilepaths'"
))
return
filepaths = instance.data.get("sourceFilepaths")
family = instance.data["family"]
label = instance.data["name"]
filepaths = instance.data["sourceFilepaths"]
if not filepaths:
raise PublishValidationError(
(
"Source filepaths of '{}' instance \"{}\" are not filled"
).format(family, label),
"File not filled",
(
"## Files were not filled"
"\nThis mean that you didn't enter any files into required"
" file input."
"\n- Please refresh publishing and check instance"
" <b>{}</b>"
).format(label)
)
not_found_files = [
filepath
@ -34,11 +59,7 @@ class ValidateWorkfilePath(pyblish.api.InstancePlugin):
raise PublishValidationError(
(
"Filepath of '{}' instance \"{}\" does not exist:\n{}"
).format(
instance.data["family"],
instance.data["name"],
joined_paths
),
).format(family, label, joined_paths),
"File not found",
(
"## Files were not found\nFiles\n{}"


@ -0,0 +1,75 @@
import re
import pyblish.api
import openpype.api
from openpype.pipeline import (
PublishXmlValidationError,
OptionalPyblishPluginMixin
)
class ValidateFrameRange(OptionalPyblishPluginMixin,
pyblish.api.InstancePlugin):
"""Validating frame range of rendered files against state in DB."""
label = "Validate Frame Range"
hosts = ["traypublisher"]
families = ["render"]
order = openpype.api.ValidateContentsOrder
optional = True
# published data might be a single video file (.mov, .mp4);
# counting files doesn't make sense in that case
check_extensions = ["exr", "dpx", "jpg", "jpeg", "png", "tiff", "tga",
"gif", "svg"]
skip_timelines_check = [] # skip for specific task names (regex)
def process(self, instance):
# Skip the instance if is not active by data on the instance
if not self.is_active(instance.data):
return
if (self.skip_timelines_check and
any(re.search(pattern, instance.data["task"])
for pattern in self.skip_timelines_check)):
self.log.info("Skipping for {} task".format(instance.data["task"]))
asset_doc = instance.data["assetEntity"]
asset_data = asset_doc["data"]
frame_start = asset_data["frameStart"]
frame_end = asset_data["frameEnd"]
handle_start = asset_data["handleStart"]
handle_end = asset_data["handleEnd"]
duration = (frame_end - frame_start + 1) + handle_start + handle_end
repres = instance.data.get("representations")
if not repres:
self.log.info("No representations, skipping.")
return
first_repre = repres[0]
ext = first_repre['ext'].replace(".", '')
if not ext or ext.lower() not in self.check_extensions:
self.log.warning("Cannot check for extension {}".format(ext))
return
files = first_repre["files"]
if isinstance(files, str):
files = [files]
frames = len(files)
msg = (
"Frame duration from DB:'{}' doesn't match number of files:'{}'"
" Please change frame range for Asset or limit no. of files"
). format(int(duration), frames)
formatting_data = {"duration": duration,
"found": frames}
if frames != duration:
raise PublishXmlValidationError(self, msg,
formatting_data=formatting_data)
self.log.debug("Valid ranges expected '{}' - found '{}'".
format(int(duration), frames))
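A worked example of the duration check above, with hypothetical asset data:

# Frames 1001-1100 with 10-frame handles on both sides.
frame_start, frame_end = 1001, 1100
handle_start, handle_end = 10, 10
duration = (frame_end - frame_start + 1) + handle_start + handle_end
# duration == 120, so a rendered image sequence must contain 120 files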


@ -10,8 +10,8 @@ from openpype.lib import (
from openpype.pipeline import (
registered_host,
legacy_io,
Anatomy,
)
from openpype.api import Anatomy
from openpype.hosts.tvpaint.api import lib, pipeline, plugin


@ -2,7 +2,7 @@ import os
import unreal
from openpype.api import Anatomy
from openpype.pipeline import Anatomy
from openpype.hosts.unreal.api import pipeline


@ -3,7 +3,7 @@ from pathlib import Path
import unreal
from openpype.api import Anatomy
from openpype.pipeline import Anatomy
from openpype.hosts.unreal.api import pipeline
import pyblish.api


@ -120,7 +120,6 @@ from .avalon_context import (
is_latest,
any_outdated,
get_asset,
get_hierarchy,
get_linked_assets,
get_latest_version,
get_system_general_anatomy_data,
@ -292,7 +291,6 @@ __all__ = [
"is_latest",
"any_outdated",
"get_asset",
"get_hierarchy",
"get_linked_assets",
"get_latest_version",
"get_system_general_anatomy_data",


@ -1,269 +1,33 @@
# -*- coding: utf-8 -*-
"""Collect render template.
"""Content was moved to 'openpype.pipeline.publish.abstract_collect_render'.
TODO: use @dataclass when times come.
Please change your imports as soon as possible.
File will probably be removed in OpenPype 3.14.*
"""
from abc import abstractmethod
import attr
import six
import pyblish.api
from openpype.pipeline import legacy_io
from .abstract_metaplugins import AbstractMetaContextPlugin
import warnings
from openpype.pipeline.publish import AbstractCollectRender, RenderInstance
@attr.s
class RenderInstance(object):
"""Data collected by collectors.
This data class later on passed to collected instances.
Those attributes are required later on.
"""
# metadata
version = attr.ib() # instance version
time = attr.ib() # time of instance creation (get_formatted_current_time)
source = attr.ib() # path to source scene file
label = attr.ib() # label to show in GUI
subset = attr.ib() # subset name
task = attr.ib() # task name
asset = attr.ib() # asset name (AVALON_ASSET)
attachTo = attr.ib() # subset name to attach render to
setMembers = attr.ib() # list of nodes/members producing render output
publish = attr.ib() # bool, True to publish instance
name = attr.ib() # instance name
# format settings
resolutionWidth = attr.ib() # resolution width (1920)
resolutionHeight = attr.ib() # resolution height (1080)
pixelAspect = attr.ib() # pixel aspect (1.0)
# time settings
frameStart = attr.ib() # start frame
frameEnd = attr.ib() # start end
frameStep = attr.ib() # frame step
handleStart = attr.ib(default=None) # start frame
handleEnd = attr.ib(default=None) # start frame
# for software (like Harmony) where frame range cannot be set by DB
# handles need to be propagated if exist
ignoreFrameHandleCheck = attr.ib(default=False)
# --------------------
# With default values
# metadata
renderer = attr.ib(default="") # renderer - can be used in Deadline
review = attr.ib(default=False) # generate review from instance (bool)
priority = attr.ib(default=50) # job priority on farm
family = attr.ib(default="renderlayer")
families = attr.ib(default=["renderlayer"]) # list of families
# format settings
multipartExr = attr.ib(default=False) # flag for multipart exrs
convertToScanline = attr.ib(default=False) # flag for exr conversion
tileRendering = attr.ib(default=False) # bool: treat render as tiles
tilesX = attr.ib(default=0) # number of tiles in X
tilesY = attr.ib(default=0) # number of tiles in Y
# submit_publish_job
toBeRenderedOn = attr.ib(default=None)
deadlineSubmissionJob = attr.ib(default=None)
anatomyData = attr.ib(default=None)
outputDir = attr.ib(default=None)
context = attr.ib(default=None)
@frameStart.validator
def check_frame_start(self, _, value):
"""Validate if frame start is not larger then end."""
if value > self.frameEnd:
raise ValueError("frameStart must be smaller "
"or equal then frameEnd")
@frameEnd.validator
def check_frame_end(self, _, value):
"""Validate if frame end is not less then start."""
if value < self.frameStart:
raise ValueError("frameEnd must be smaller "
"or equal then frameStart")
@tilesX.validator
def check_tiles_x(self, _, value):
"""Validate if tile x isn't less then 1."""
if not self.tileRendering:
return
if value < 1:
raise ValueError("tile X size cannot be less then 1")
if value == 1 and self.tilesY == 1:
raise ValueError("both tiles X a Y sizes are set to 1")
@tilesY.validator
def check_tiles_y(self, _, value):
"""Validate if tile y isn't less then 1."""
if not self.tileRendering:
return
if value < 1:
raise ValueError("tile Y size cannot be less then 1")
if value == 1 and self.tilesX == 1:
raise ValueError("both tiles X a Y sizes are set to 1")
class CollectRenderDeprecated(DeprecationWarning):
pass
@six.add_metaclass(AbstractMetaContextPlugin)
class AbstractCollectRender(pyblish.api.ContextPlugin):
"""Gather all publishable render layers from renderSetup."""
warnings.simplefilter("always", CollectRenderDeprecated)
warnings.warn(
(
"Content of 'abstract_collect_render' was moved."
"\nUsing deprecated source of 'abstract_collect_render'. Content was"
" move to 'openpype.pipeline.publish.abstract_collect_render'."
" Please change your imports as soon as possible."
),
category=CollectRenderDeprecated,
stacklevel=4
)
order = pyblish.api.CollectorOrder + 0.01
label = "Collect Render"
sync_workfile_version = False
def __init__(self, *args, **kwargs):
"""Constructor."""
super(AbstractCollectRender, self).__init__(*args, **kwargs)
self._file_path = None
self._asset = legacy_io.Session["AVALON_ASSET"]
self._context = None
def process(self, context):
"""Entry point to collector."""
self._context = context
for instance in context:
# make sure workfile instance publishing is enabled
try:
if "workfile" in instance.data["families"]:
instance.data["publish"] = True
# TODO merge renderFarm and render.farm
if ("renderFarm" in instance.data["families"] or
"render.farm" in instance.data["families"]):
instance.data["remove"] = True
except KeyError:
# be tolerant if 'families' is missing.
pass
self._file_path = context.data["currentFile"].replace("\\", "/")
render_instances = self.get_instances(context)
for render_instance in render_instances:
exp_files = self.get_expected_files(render_instance)
assert exp_files, "no file names were generated, this is bug"
# if we want to attach render to subset, check if we have AOV's
# in expectedFiles. If so, raise error as we cannot attach AOV
# (considered to be subset on its own) to another subset
if render_instance.attachTo:
assert isinstance(exp_files, list), (
"attaching multiple AOVs or renderable cameras to "
"subset is not supported"
)
frame_start_render = int(render_instance.frameStart)
frame_end_render = int(render_instance.frameEnd)
if (render_instance.ignoreFrameHandleCheck or
int(context.data['frameStartHandle']) == frame_start_render
and int(context.data['frameEndHandle']) == frame_end_render): # noqa: W503, E501
handle_start = context.data['handleStart']
handle_end = context.data['handleEnd']
frame_start = context.data['frameStart']
frame_end = context.data['frameEnd']
frame_start_handle = context.data['frameStartHandle']
frame_end_handle = context.data['frameEndHandle']
else:
handle_start = 0
handle_end = 0
frame_start = frame_start_render
frame_end = frame_end_render
frame_start_handle = frame_start_render
frame_end_handle = frame_end_render
data = {
"handleStart": handle_start,
"handleEnd": handle_end,
"frameStart": frame_start,
"frameEnd": frame_end,
"frameStartHandle": frame_start_handle,
"frameEndHandle": frame_end_handle,
"byFrameStep": int(render_instance.frameStep),
"author": context.data["user"],
# Add source to allow tracing back to the scene from
# which was submitted originally
"expectedFiles": exp_files,
}
if self.sync_workfile_version:
data["version"] = context.data["version"]
# add additional data
data = self.add_additional_data(data)
render_instance_dict = attr.asdict(render_instance)
instance = context.create_instance(render_instance.name)
instance.data["label"] = render_instance.label
instance.data.update(render_instance_dict)
instance.data.update(data)
self.post_collecting_action()
@abstractmethod
def get_instances(self, context):
"""Get all renderable instances and their data.
Args:
context (pyblish.api.Context): Context object.
Returns:
list of :class:`RenderInstance`: All collected renderable instances
(like render layers, write nodes, etc.)
"""
pass
@abstractmethod
def get_expected_files(self, render_instance):
"""Get list of expected files.
Returns:
list: expected files. This can be either simple list of files with
their paths, or list of dictionaries, where key is name of AOV
for example and value is list of files for that AOV.
Example::
['/path/to/file.001.exr', '/path/to/file.002.exr']
or as dictionary:
[
{
"beauty": ['/path/to/beauty.001.exr', ...],
"mask": ['/path/to/mask.001.exr']
}
]
"""
pass
def add_additional_data(self, data):
"""Add additional data to collected instance.
This can be overridden by host implementation to add custom
additional data.
"""
return data
def post_collecting_action(self):
"""Execute some code after collection is done.
This is useful for example for restoring current render layer.
"""
pass
__all__ = (
"AbstractCollectRender",
"RenderInstance"
)


@ -1,53 +1,32 @@
# -*- coding: utf-8 -*-
"""Abstract ExpectedFile class definition."""
from abc import ABCMeta, abstractmethod
import six
"""Content was moved to 'openpype.pipeline.publish.abstract_expected_files'.
Please change your imports as soon as possible.
File will probably be removed in OpenPype 3.14.*
"""
import warnings
from openpype.pipeline.publish import ExpectedFiles
@six.add_metaclass(ABCMeta)
class ExpectedFiles:
"""Class grouping functionality for all supported renderers.
Attributes:
multipart (bool): Flag if multipart exrs are used.
"""
multipart = False
@abstractmethod
def get(self, render_instance):
"""Get expected files for given renderer and render layer.
This method should return dictionary of all files we are expecting
to be rendered from the host. Usually `render_instance` corresponds
to *render layer*. Result can be either flat list with the file
paths or it can be list of dictionaries. Each key corresponds to
for example AOV name or channel, etc.
Example::
['/path/to/file.001.exr', '/path/to/file.002.exr']
or as dictionary:
[
{
"beauty": ['/path/to/beauty.001.exr', ...],
"mask": ['/path/to/mask.001.exr']
}
]
class ExpectedFilesDeprecated(DeprecationWarning):
pass
Args:
render_instance (:class:`RenderInstance`): Data passed from
collector to determine files. This should be instance of
:class:`abstract_collect_render.RenderInstance`
warnings.simplefilter("always", ExpectedFilesDeprecated)
warnings.warn(
(
"Content of 'abstract_expected_files' was moved."
"\nUsing deprecated source of 'abstract_expected_files'. Content was"
" move to 'openpype.pipeline.publish.abstract_expected_files'."
" Please change your imports as soon as possible."
),
category=ExpectedFilesDeprecated,
stacklevel=4
)
Returns:
list: Full paths to expected rendered files.
list of dict: Path to expected rendered files categorized by
AOVs, etc.
"""
raise NotImplementedError()
__all__ = (
"ExpectedFiles",
)


@ -1,10 +1,35 @@
from abc import ABCMeta
from pyblish.plugin import MetaPlugin, ExplicitMetaPlugin
"""Content was moved to 'openpype.pipeline.publish.publish_plugins'.
Please change your imports as soon as possible.
File will probably be removed in OpenPype 3.14.*
"""
import warnings
from openpype.pipeline.publish import (
AbstractMetaInstancePlugin,
AbstractMetaContextPlugin
)
class AbstractMetaInstancePlugin(ABCMeta, MetaPlugin):
class MetaPluginsDeprecated(DeprecationWarning):
pass
class AbstractMetaContextPlugin(ABCMeta, ExplicitMetaPlugin):
pass
warnings.simplefilter("always", MetaPluginsDeprecated)
warnings.warn(
(
"Content of 'abstract_metaplugins' was moved."
"\nUsing deprecated source of 'abstract_metaplugins'. Content was"
" moved to 'openpype.pipeline.publish.publish_plugins'."
" Please change your imports as soon as possible."
),
category=MetaPluginsDeprecated,
stacklevel=4
)
__all__ = (
"AbstractMetaInstancePlugin",
"AbstractMetaContextPlugin",
)
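All three shims follow the same pattern: re-export the moved names and warn at import time. A minimal reproduction of the pattern with illustrative module names (DeprecationWarning is hidden by default, which is why each shim opts its own subclass back in via simplefilter):

import warnings

class ShimDeprecated(DeprecationWarning):
    pass

warnings.simplefilter("always", ShimDeprecated)
warnings.warn(
    "Content of 'old_module' was moved to 'new_module'."
    " Please change your imports as soon as possible.",
    category=ShimDeprecated,
    stacklevel=2,
)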

File diff suppressed because it is too large


@ -11,6 +11,10 @@ from abc import ABCMeta, abstractmethod
import six
from openpype.client import (
get_project,
get_asset_by_name,
)
from openpype.settings import (
get_system_settings,
get_project_settings,
@ -20,10 +24,7 @@ from openpype.settings.constants import (
METADATA_KEYS,
M_DYNAMIC_KEY_LABEL
)
from . import (
PypeLogger,
Anatomy
)
from . import PypeLogger
from .profiles_filtering import filter_profiles
from .local_settings import get_openpype_username
from .avalon_context import (
@ -664,7 +665,11 @@ class ApplicationExecutable:
if os.path.exists(plist_filepath):
import plistlib
parsed_plist = plistlib.readPlist(plist_filepath)
if hasattr(plistlib, "load"):
with open(plist_filepath, "rb") as stream:
parsed_plist = plistlib.load(stream)
else:
parsed_plist = plistlib.readPlist(plist_filepath)
executable_filename = parsed_plist.get("CFBundleExecutable")
if executable_filename:
@ -1305,7 +1310,7 @@ def get_app_environments_for_context(
dict: Environments for passed context and application.
"""
from openpype.pipeline import AvalonMongoDB
from openpype.pipeline import AvalonMongoDB, Anatomy
# Avalon database connection
dbcon = AvalonMongoDB()
@ -1313,11 +1318,8 @@ def get_app_environments_for_context(
dbcon.install()
# Project document
project_doc = dbcon.find_one({"type": "project"})
asset_doc = dbcon.find_one({
"type": "asset",
"name": asset_name
})
project_doc = get_project(project_name)
asset_doc = get_asset_by_name(project_name, asset_name)
if modules_manager is None:
from openpype.modules import ModulesManager


@ -14,6 +14,7 @@ class AbstractAttrDefMeta(ABCMeta):
Each object of `AbtractAttrDef` must have a defined 'key' attribute.
"""
def __call__(self, *args, **kwargs):
obj = super(AbstractAttrDefMeta, self).__call__(*args, **kwargs)
init_class = getattr(obj, "__init__class__", None)
@ -45,6 +46,7 @@ class AbtractAttrDef:
is_label_horizontal(bool): UI specific argument. Specify if label is
next to value input or ahead.
"""
is_value_def = True
def __init__(
@ -77,6 +79,7 @@ class AbtractAttrDef:
Convert passed value to a valid type. Use default if value can't be
converted.
"""
pass
@ -113,6 +116,7 @@ class UnknownDef(AbtractAttrDef):
This attribute can be used to keep existing data unchanged but does not
have known definition of type.
"""
def __init__(self, key, default=None, **kwargs):
kwargs["default"] = default
super(UnknownDef, self).__init__(key, **kwargs)
@ -204,6 +208,7 @@ class TextDef(AbtractAttrDef):
placeholder(str): UI placeholder for attribute.
default(str, None): Default value. Empty string used when not defined.
"""
def __init__(
self, key, multiline=None, regex=None, placeholder=None, default=None,
**kwargs
@ -531,14 +536,15 @@ class FileDef(AbtractAttrDef):
Args:
single_item(bool): Allow only single path item.
folders(bool): Allow folder paths.
extensions(list<str>): Allow files with extensions. Empty list will
extensions(List[str]): Allow files with extensions. Empty list will
allow all extensions and None will disable files completely.
default(str, list<str>): Defautl value.
extensions_label(str): Custom label shown instead of extensions in UI.
default(str, List[str]): Default value.
"""
def __init__(
self, key, single_item=True, folders=None, extensions=None,
allow_sequences=True, default=None, **kwargs
allow_sequences=True, extensions_label=None, default=None, **kwargs
):
if folders is None and extensions is None:
folders = True
@ -578,6 +584,7 @@ class FileDef(AbtractAttrDef):
self.folders = folders
self.extensions = set(extensions)
self.allow_sequences = allow_sequences
self.extensions_label = extensions_label
super(FileDef, self).__init__(key, default=default, **kwargs)
def __eq__(self, other):


@ -7,14 +7,25 @@ import platform
import logging
import collections
import functools
import warnings
from bson.objectid import ObjectId
from openpype.client import (
get_project,
get_assets,
get_asset_by_name,
get_subset_by_name,
get_subsets,
get_version_by_id,
get_last_versions,
get_last_version_by_subset_id,
get_representations,
get_representation_by_id,
get_workfile_info,
)
from openpype.settings import (
get_project_settings,
get_system_settings
)
from .anatomy import Anatomy
from .profiles_filtering import filter_profiles
from .events import emit_event
from .path_templates import StringTemplate
@ -36,6 +47,51 @@ PROJECT_NAME_REGEX = re.compile(
)
class AvalonContextDeprecatedWarning(DeprecationWarning):
pass
def deprecated(new_destination):
"""Mark functions as deprecated.
It will result in a warning being emitted when the function is used.
"""
func = None
if callable(new_destination):
func = new_destination
new_destination = None
def _decorator(decorated_func):
if new_destination is None:
warning_message = (
" Please check content of deprecated function to figure out"
" possible replacement."
)
else:
warning_message = " Please replace your usage with '{}'.".format(
new_destination
)
@functools.wraps(decorated_func)
def wrapper(*args, **kwargs):
warnings.simplefilter("always", AvalonContextDeprecatedWarning)
warnings.warn(
(
"Call to deprecated function '{}'"
"\nFunction was moved or removed.{}"
).format(decorated_func.__name__, warning_message),
category=AvalonContextDeprecatedWarning,
stacklevel=4
)
return decorated_func(*args, **kwargs)
return wrapper
if func is None:
return _decorator
return _decorator(func)
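Usage of the decorator above, sketched with hypothetical functions; both the parameterized and the bare form are supported:

@deprecated("openpype.client.get_asset_by_name")
def get_asset_legacy(asset_name):
    ...

@deprecated  # bare form: warns without naming a replacement
def get_hierarchy_legacy(asset_name):
    ...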
def create_project(
project_name, project_code, library_project=False, dbcon=None
):
@ -65,6 +121,11 @@ def create_project(
from openpype.pipeline import AvalonMongoDB
from openpype.pipeline.schema import validate
if get_project(project_name, fields=["name"]):
raise ValueError("Project with name \"{}\" already exists".format(
project_name
))
if dbcon is None:
dbcon = AvalonMongoDB()
@ -74,15 +135,6 @@ def create_project(
).format(project_name))
database = dbcon.database
project_doc = database[project_name].find_one(
{"type": "project"},
{"name": 1}
)
if project_doc:
raise ValueError("Project with name \"{}\" already exists".format(
project_name
))
project_doc = {
"type": "project",
"name": project_name,
@ -105,7 +157,7 @@ def create_project(
database[project_name].delete_one({"type": "project"})
raise
project_doc = database[project_name].find_one({"type": "project"})
project_doc = get_project(project_name)
try:
# Validate created project document
@ -137,23 +189,23 @@ def is_latest(representation):
Returns:
bool: Whether the representation is of latest version.
"""
version = legacy_io.find_one({"_id": representation['parent']})
project_name = legacy_io.active_project()
version = get_version_by_id(
project_name,
representation["parent"],
fields=["_id", "type", "parent"]
)
if version["type"] == "hero_version":
return True
# Get highest version under the parent
highest_version = legacy_io.find_one({
"type": "version",
"parent": version["parent"]
}, sort=[("name", -1)], projection={"name": True})
last_version = get_last_version_by_subset_id(
project_name, version["parent"], fields=["_id"]
)
if version['name'] == highest_version['name']:
return True
else:
return False
return version["_id"] == last_version["_id"]
@with_pipeline_io
@ -161,6 +213,7 @@ def any_outdated():
"""Return whether the current scene has any outdated content"""
from openpype.pipeline import registered_host
project_name = legacy_io.active_project()
checked = set()
host = registered_host()
for container in host.ls():
@ -168,12 +221,8 @@ def any_outdated():
if representation in checked:
continue
representation_doc = legacy_io.find_one(
{
"_id": ObjectId(representation),
"type": "representation"
},
projection={"parent": True}
representation_doc = get_representation_by_id(
project_name, representation, fields=["parent"]
)
if representation_doc and not is_latest(representation_doc):
return True
@ -191,81 +240,29 @@ def any_outdated():
def get_asset(asset_name=None):
""" Returning asset document from database by its name.
Does not handle duplicate asset names!
Args:
asset_name (str)
Returns:
(MongoDB document)
"""
project_name = legacy_io.active_project()
if not asset_name:
asset_name = legacy_io.Session["AVALON_ASSET"]
asset_document = legacy_io.find_one({
"name": asset_name,
"type": "asset"
})
asset_document = get_asset_by_name(project_name, asset_name)
if not asset_document:
raise TypeError("Entity \"{}\" was not found in DB".format(asset_name))
return asset_document
@with_pipeline_io
def get_hierarchy(asset_name=None):
"""
Obtain asset hierarchy path string from mongo db
Args:
asset_name (str)
Returns:
(string): asset hierarchy path
"""
if not asset_name:
asset_name = legacy_io.Session.get(
"AVALON_ASSET",
os.environ["AVALON_ASSET"]
)
asset_entity = legacy_io.find_one({
"type": 'asset',
"name": asset_name
})
not_set = "PARENTS_NOT_SET"
entity_parents = asset_entity.get("data", {}).get("parents", not_set)
# If entity already have parents then just return joined
if entity_parents != not_set:
return "/".join(entity_parents)
# Else query parents through visualParents and store result to entity
hierarchy_items = []
entity = asset_entity
while True:
parent_id = entity.get("data", {}).get("visualParent")
if not parent_id:
break
entity = legacy_io.find_one({"_id": parent_id})
hierarchy_items.append(entity["name"])
# Add parents to entity data for next query
entity_data = asset_entity.get("data", {})
entity_data["parents"] = hierarchy_items
legacy_io.update_many(
{"_id": asset_entity["_id"]},
{"$set": {"data": entity_data}}
)
return "/".join(hierarchy_items)
def get_system_general_anatomy_data():
system_settings = get_system_settings()
def get_system_general_anatomy_data(system_settings=None):
if not system_settings:
system_settings = get_system_settings()
studio_name = system_settings["general"]["studio_name"]
studio_code = system_settings["general"]["studio_code"]
return {
@ -313,11 +310,13 @@ def get_linked_assets(asset_doc):
Returns:
(list) Asset documents of input links for passed asset doc.
"""
link_ids = get_linked_asset_ids(asset_doc)
if not link_ids:
return []
return list(legacy_io.find({"_id": {"$in": link_ids}}))
project_name = legacy_io.active_project()
return list(get_assets(project_name, link_ids))
@with_pipeline_io
@ -339,20 +338,14 @@ def get_latest_version(asset_name, subset_name, dbcon=None, project_name=None):
dict: Last version document for entered .
"""
if not dbcon:
log.debug("Using `legacy_io` for query.")
dbcon = legacy_io
# Make sure is installed
dbcon.install()
if not project_name:
if not dbcon:
log.debug("Using `legacy_io` for query.")
dbcon = legacy_io
# Make sure is installed
dbcon.install()
if project_name and project_name != dbcon.Session.get("AVALON_PROJECT"):
# `legacy_io` has only `_database` attribute
# but `AvalonMongoDB` has `database`
database = getattr(dbcon, "database", dbcon._database)
collection = database[project_name]
else:
project_name = dbcon.Session.get("AVALON_PROJECT")
collection = dbcon
project_name = dbcon.active_project()
log.debug((
"Getting latest version for Project: \"{}\" Asset: \"{}\""
@ -360,19 +353,15 @@ def get_latest_version(asset_name, subset_name, dbcon=None, project_name=None):
).format(project_name, asset_name, subset_name))
# Query asset document id by asset name
asset_doc = collection.find_one(
{"type": "asset", "name": asset_name},
{"_id": True}
)
asset_doc = get_asset_by_name(project_name, asset_name, fields=["_id"])
if not asset_doc:
log.info(
"Asset \"{}\" was not found in Database.".format(asset_name)
)
return None
subset_doc = collection.find_one(
{"type": "subset", "name": subset_name, "parent": asset_doc["_id"]},
{"_id": True}
subset_doc = get_subset_by_name(
project_name, subset_name, asset_doc["_id"]
)
if not subset_doc:
log.info(
@ -380,9 +369,8 @@ def get_latest_version(asset_name, subset_name, dbcon=None, project_name=None):
)
return None
version_doc = collection.find_one(
{"type": "version", "parent": subset_doc["_id"]},
sort=[("name", -1)],
version_doc = get_last_version_by_subset_id(
project_name, subset_doc["_id"]
)
if not version_doc:
log.info(
@ -420,28 +408,17 @@ def get_workfile_template_key_from_context(
ValueError: When both 'dbcon' and 'project_name' were not
passed.
"""
if not dbcon:
if not project_name:
if not project_name:
if not dbcon:
raise ValueError((
"`get_workfile_template_key_from_context` requires to pass"
" one of 'dbcon' or 'project_name' arguments."
))
from openpype.pipeline import AvalonMongoDB
dbcon = AvalonMongoDB()
dbcon.Session["AVALON_PROJECT"] = project_name
project_name = dbcon.active_project()
elif not project_name:
project_name = dbcon.Session["AVALON_PROJECT"]
asset_doc = dbcon.find_one(
{
"type": "asset",
"name": asset_name
},
{
"data.tasks": 1
}
asset_doc = get_asset_by_name(
project_name, asset_name, fields=["data.tasks"]
)
asset_tasks = asset_doc.get("data", {}).get("tasks") or {}
task_info = asset_tasks.get(task_name) or {}
@ -593,6 +570,7 @@ def get_workdir_with_workdir_data(
))
if not anatomy:
from openpype.pipeline import Anatomy
anatomy = Anatomy(project_name)
if not template_key:
@ -604,7 +582,10 @@ def get_workdir_with_workdir_data(
anatomy_filled = anatomy.format(workdir_data)
# Output is TemplateResult object which contain useful data
return anatomy_filled[template_key]["folder"]
output = anatomy_filled[template_key]["folder"]
if output:
return output.normalized()
return output
def get_workdir(
@ -634,7 +615,9 @@ def get_workdir(
Returns:
TemplateResult: Workdir path.
"""
if not anatomy:
from openpype.pipeline import Anatomy
anatomy = Anatomy(project_doc["name"])
workdir_data = get_workdir_data(
@ -661,15 +644,11 @@ def template_data_from_session(session=None):
session = legacy_io.Session
project_name = session["AVALON_PROJECT"]
project_doc = legacy_io.database[project_name].find_one({
"type": "project"
})
asset_doc = legacy_io.database[project_name].find_one({
"type": "asset",
"name": session["AVALON_ASSET"]
})
asset_name = session["AVALON_ASSET"]
task_name = session["AVALON_TASK"]
host_name = session["AVALON_APP"]
project_doc = get_project(project_name)
asset_doc = get_asset_by_name(project_name, asset_name)
return get_workdir_data(project_doc, asset_doc, task_name, host_name)
@ -694,8 +673,8 @@ def compute_session_changes(
Returns:
dict: The required changes in the Session dictionary.
"""
changes = dict()
# If no changes, return directly
@ -713,12 +692,9 @@ def compute_session_changes(
if not asset_document or not asset_tasks:
# Assume asset name
asset_document = legacy_io.find_one(
{
"name": asset,
"type": "asset"
},
{"data.tasks": True}
project_name = session["AVALON_PROJECT"]
asset_document = get_asset_by_name(
project_name, asset, fields=["data.tasks"]
)
assert asset_document, "Asset must exist"
@ -747,6 +723,8 @@ def compute_session_changes(
@with_pipeline_io
def get_workdir_from_session(session=None, template_key=None):
from openpype.pipeline import Anatomy
if session is None:
session = legacy_io.Session
project_name = session["AVALON_PROJECT"]
@ -762,7 +740,10 @@ def get_workdir_from_session(session=None, template_key=None):
host_name,
project_name=project_name
)
return anatomy_filled[template_key]["folder"]
path = anatomy_filled[template_key]["folder"]
if path:
path = os.path.normpath(path)
return path
@with_pipeline_io
@ -810,6 +791,7 @@ def update_current_task(task=None, asset=None, app=None, template_key=None):
@with_pipeline_io
@deprecated("openpype.client.get_workfile_info")
def get_workfile_doc(asset_id, task_name, filename, dbcon=None):
"""Return workfile document for entered context.
@ -826,16 +808,13 @@ def get_workfile_doc(asset_id, task_name, filename, dbcon=None):
Returns:
dict: Workfile document or None.
"""
# Use legacy_io if dbcon is not entered
if not dbcon:
dbcon = legacy_io
return dbcon.find_one({
"type": "workfile",
"parent": asset_id,
"task_name": task_name,
"filename": filename
})
project_name = dbcon.active_project()
return get_workfile_info(project_name, asset_id, task_name, filename)
@with_pipeline_io
@ -853,6 +832,8 @@ def create_workfile_doc(asset_doc, task_name, filename, workdir, dbcon=None):
dbcon (AvalonMongoDB): Optionally enter avalon AvalonMongoDB object and
`legacy_io` is used if not entered.
"""
from openpype.pipeline import Anatomy
# Use legacy_io if dbcon is not entered
if not dbcon:
dbcon = legacy_io
@ -868,12 +849,13 @@ def create_workfile_doc(asset_doc, task_name, filename, workdir, dbcon=None):
doc_data = copy.deepcopy(doc_filter)
# Prepare project for workdir data
project_doc = dbcon.find_one({"type": "project"})
project_name = dbcon.active_project()
project_doc = get_project(project_name)
workdir_data = get_workdir_data(
project_doc, asset_doc, task_name, dbcon.Session["AVALON_APP"]
)
# Prepare anatomy
anatomy = Anatomy(project_doc["name"])
anatomy = Anatomy(project_name)
# Get workdir path (result is anatomy.TemplateResult)
template_workdir = get_workdir_with_workdir_data(
workdir_data, anatomy
@ -988,12 +970,11 @@ class BuildWorkfile:
from openpype.pipeline import discover_loader_plugins
# Get current asset name and entity
project_name = legacy_io.active_project()
current_asset_name = legacy_io.Session["AVALON_ASSET"]
current_asset_entity = legacy_io.find_one({
"type": "asset",
"name": current_asset_name
})
current_asset_entity = get_asset_by_name(
project_name, current_asset_name
)
# Skip if asset was not found
if not current_asset_entity:
print("Asset entity with name `{}` was not found".format(
@ -1498,7 +1479,7 @@ class BuildWorkfile:
return loaded_containers
@with_pipeline_io
def _collect_last_version_repres(self, asset_entities):
def _collect_last_version_repres(self, asset_docs):
"""Collect subsets, versions and representations for asset_entities.
Args:
@ -1531,64 +1512,56 @@ class BuildWorkfile:
```
"""
if not asset_entities:
return {}
output = {}
if not asset_docs:
return output
asset_entity_by_ids = {asset["_id"]: asset for asset in asset_entities}
asset_docs_by_ids = {asset["_id"]: asset for asset in asset_docs}
subsets = list(legacy_io.find({
"type": "subset",
"parent": {"$in": list(asset_entity_by_ids.keys())}
}))
project_name = legacy_io.active_project()
subsets = list(get_subsets(
project_name, asset_ids=asset_docs_by_ids.keys()
))
subset_entity_by_ids = {subset["_id"]: subset for subset in subsets}
sorted_versions = list(legacy_io.find({
"type": "version",
"parent": {"$in": list(subset_entity_by_ids.keys())}
}).sort("name", -1))
last_version_by_subset_id = get_last_versions(
project_name, subset_entity_by_ids.keys()
)
last_version_docs_by_id = {
version["_id"]: version
for version in last_version_by_subset_id.values()
}
repre_docs = get_representations(
project_name, version_ids=last_version_docs_by_id.keys()
)
subset_id_with_latest_version = []
last_versions_by_id = {}
for version in sorted_versions:
subset_id = version["parent"]
if subset_id in subset_id_with_latest_version:
continue
subset_id_with_latest_version.append(subset_id)
last_versions_by_id[version["_id"]] = version
for repre_doc in repre_docs:
version_id = repre_doc["parent"]
version_doc = last_version_docs_by_id[version_id]
repres = legacy_io.find({
"type": "representation",
"parent": {"$in": list(last_versions_by_id.keys())}
})
subset_id = version_doc["parent"]
subset_doc = subset_entity_by_ids[subset_id]
output = {}
for repre in repres:
version_id = repre["parent"]
version = last_versions_by_id[version_id]
subset_id = version["parent"]
subset = subset_entity_by_ids[subset_id]
asset_id = subset["parent"]
asset = asset_entity_by_ids[asset_id]
asset_id = subset_doc["parent"]
asset_doc = asset_docs_by_ids[asset_id]
if asset_id not in output:
output[asset_id] = {
"asset_entity": asset,
"asset_entity": asset_doc,
"subsets": {}
}
if subset_id not in output[asset_id]["subsets"]:
output[asset_id]["subsets"][subset_id] = {
"subset_entity": subset,
"subset_entity": subset_doc,
"version": {
"version_entity": version,
"version_entity": version_doc,
"repres": []
}
}
output[asset_id]["subsets"][subset_id]["version"]["repres"].append(
repre
repre_doc
)
return output
@ -1673,6 +1646,7 @@ def _get_task_context_data_for_anatomy(
"""
if anatomy is None:
from openpype.pipeline import Anatomy
anatomy = Anatomy(project_doc["name"])
asset_name = asset_doc["name"]
@ -1741,6 +1715,7 @@ def get_custom_workfile_template_by_context(
"""
if anatomy is None:
from openpype.pipeline import Anatomy
anatomy = Anatomy(project_doc["name"])
# get project, asset, task anatomy context data
@ -1794,35 +1769,19 @@ def get_custom_workfile_template_by_string_context(
context. (Existence of formatted path is not validated.)
"""
if dbcon is None:
from openpype.pipeline import AvalonMongoDB
project_name = None
if anatomy is not None:
project_name = anatomy.project_name
dbcon = AvalonMongoDB()
if not project_name and dbcon is not None:
project_name = dbcon.active_project()
dbcon.install()
if not project_name:
raise ValueError("Can't determina project")
if dbcon.Session["AVALON_PROJECT"] != project_name:
dbcon.Session["AVALON_PROJECT"] = project_name
project_doc = dbcon.find_one(
{"type": "project"},
# All we need is "name" and "data.code" keys
{
"name": 1,
"data.code": 1
}
)
asset_doc = dbcon.find_one(
{
"type": "asset",
"name": asset_name
},
# All we need is "name" and "data.tasks" keys
{
"name": 1,
"data.tasks": 1
}
)
project_doc = get_project(project_name, fields=["name", "data.code"])
asset_doc = get_asset_by_name(
project_name, asset_name, fields=["name", "data.tasks"])
return get_custom_workfile_template_by_context(
template_profiles, project_doc, asset_doc, task_name, anatomy


@ -11,6 +11,10 @@ except Exception:
from openpype.lib.python_2_comp import WeakMethod
class MissingEventSystem(Exception):
pass
class EventCallback(object):
"""Callback registered to a topic.
@ -176,16 +180,20 @@ class Event(object):
topic (str): Identifier of event.
data (Any): Data specific for event. Dictionary is recommended.
source (str): Identifier of source.
event_system (EventSystem): Event system in which can be event
triggered.
"""
_data = {}
def __init__(self, topic, data=None, source=None):
def __init__(self, topic, data=None, source=None, event_system=None):
self._id = str(uuid4())
self._topic = topic
if data is None:
data = {}
self._data = data
self._source = source
self._event_system = event_system
def __getitem__(self, key):
return self._data[key]
@ -211,28 +219,118 @@ class Event(object):
def emit(self):
"""Emit event and trigger callbacks."""
StoredCallbacks.emit_event(self)
if self._event_system is None:
raise MissingEventSystem(
"Can't emit event {}. Does not have set event system.".format(
str(repr(self))
)
)
self._event_system.emit_event(self)
class StoredCallbacks:
_registered_callbacks = []
class EventSystem(object):
"""Encapsulate event handling into an object.
The system wraps registered callbacks and triggered events into a single
object, so it is possible to create multiple independent systems, each
with its own topics and callbacks.
"""
def __init__(self):
self._registered_callbacks = []
def add_callback(self, topic, callback):
"""Register callback in event system.
Args:
topic (str): Topic for EventCallback.
callback (Callable): Function or method that will be called
when topic is triggered.
Returns:
EventCallback: Created callback object which can be used to
stop listening.
"""
@classmethod
def add_callback(cls, topic, callback):
callback = EventCallback(topic, callback)
cls._registered_callbacks.append(callback)
self._registered_callbacks.append(callback)
return callback
@classmethod
def emit_event(cls, event):
def create_event(self, topic, data, source):
"""Create new event which is bound to event system.
Args:
topic (str): Event topic.
data (dict): Data related to event.
source (str): Source of event.
Returns:
Event: Object of event.
"""
return Event(topic, data, source, self)
def emit(self, topic, data, source):
"""Create event based on passed data and emit it.
This is the easiest way to trigger an event in an event system.
Args:
topic (str): Event topic.
data (dict): Data related to event.
source (str): Source of event.
Returns:
Event: Created and emitted event.
"""
event = self.create_event(topic, data, source)
event.emit()
return event
def emit_event(self, event):
"""Emit event object.
Args:
event (Event): Prepared event with topic and data.
"""
invalid_callbacks = []
for callback in cls._registered_callbacks:
for callback in self._registered_callbacks:
callback.process_event(event)
if not callback.is_ref_valid:
invalid_callbacks.append(callback)
for callback in invalid_callbacks:
cls._registered_callbacks.remove(callback)
self._registered_callbacks.remove(callback)
class GlobalEventSystem:
"""Event system living in global scope of process.
This is primarily used in host implementations to trigger events
related to DCC changes or context changes.
"""
_global_event_system = None
@classmethod
def get_global_event_system(cls):
if cls._global_event_system is None:
cls._global_event_system = EventSystem()
return cls._global_event_system
@classmethod
def add_callback(cls, topic, callback):
event_system = cls.get_global_event_system()
return event_system.add_callback(topic, callback)
@classmethod
def emit(cls, topic, data, source):
event_system = cls.get_global_event_system()
return event_system.emit(topic, data, source)
def register_event_callback(topic, callback):
@ -249,7 +347,8 @@ def register_event_callback(topic, callback):
enable/disable listening to a topic or remove the callback from
the topic completely.
"""
return StoredCallbacks.add_callback(topic, callback)
return GlobalEventSystem.add_callback(topic, callback)
def emit_event(topic, data=None, source=None):
@ -263,6 +362,5 @@ def emit_event(topic, data=None, source=None):
Returns:
Event: Object of event that was emitted.
"""
event = Event(topic, data, source)
event.emit()
return event
return GlobalEventSystem.emit(topic, data, source)
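The block above settles the public surface of the new event system: per-instance systems, a process-wide global one, and the two module-level helpers. A minimal usage sketch, assuming the `openpype.lib.events` import path and illustrative topic/data values (neither is confirmed by the diff):

    from openpype.lib.events import EventSystem  # import path assumed

    def on_saved(event):
        # Event data is exposed through item access (Event.__getitem__)
        print("saved:", event["filename"])

    # An independent system with its own callbacks and topics
    event_system = EventSystem()
    callback = event_system.add_callback("workfile.saved", on_saved)
    event_system.emit("workfile.saved", {"filename": "sh010.ma"}, "my_host")

    # The process-wide equivalent goes through the module helpers:
    #   register_event_callback("workfile.saved", on_saved)
    #   emit_event("workfile.saved", {"filename": "sh010.ma"}, "my_host")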

View file

@ -409,6 +409,19 @@ class TemplateResult(str):
self.invalid_types
)
def normalized(self):
"""Convert to normalized path."""
cls = self.__class__
return cls(
os.path.normpath(self),
self.template,
self.solved,
self.used_values,
self.missing_keys,
self.invalid_types
)
class TemplatesResultDict(dict):
"""Holds and wrap TemplateResults for easy bug report."""

View file

@ -9,7 +9,6 @@ import platform
from openpype.client import get_project
from openpype.settings import get_project_settings
from .anatomy import Anatomy
from .profiles_filtering import filter_profiles
log = logging.getLogger(__name__)
@ -227,6 +226,7 @@ def fill_paths(path_list, anatomy):
def create_project_folders(basic_paths, project_name):
from openpype.pipeline import Anatomy
anatomy = Anatomy(project_name)
concat_paths = concatenate_splitted_paths(basic_paths, anatomy)

View file

@ -6,10 +6,10 @@ import logging
import re
import json
from .profiles_filtering import filter_profiles
from openpype.client import get_asset_by_id
from openpype.settings import get_project_settings
from .profiles_filtering import filter_profiles
log = logging.getLogger(__name__)
@ -135,24 +135,17 @@ def get_subset_name(
This is a legacy function and should be replaced with
`get_subset_name_with_asset_doc`, which expects an asset document.
"""
if dbcon is None:
from openpype.pipeline import AvalonMongoDB
dbcon = AvalonMongoDB()
dbcon.Session["AVALON_PROJECT"] = project_name
if project_name is None:
project_name = dbcon.project_name
dbcon.install()
asset_doc = dbcon.find_one(
{"_id": asset_id},
{"data.tasks": True}
) or {}
asset_doc = get_asset_by_id(project_name, asset_id, fields=["data.tasks"])
return get_subset_name_with_asset_doc(
family,
variant,
task_name,
asset_doc,
asset_doc or {},
project_name,
host_name,
default_template,
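A hedged sketch of the preferred call path the docstring points to, combining the new client query with the non-legacy function (argument values are illustrative and the surrounding variables are assumed from the function above):

    from openpype.client import get_asset_by_id

    asset_doc = get_asset_by_id(project_name, asset_id, fields=["data.tasks"])
    subset_name = get_subset_name_with_asset_doc(
        "render",         # family
        "Main",           # variant
        "compositing",    # task_name
        asset_doc or {},
        project_name,
        host_name,        # e.g. "maya"
        default_template,
    )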

View file

@ -24,7 +24,10 @@ from bson.json_util import (
dumps,
CANONICAL_JSON_OPTIONS
)
from openpype.client import (
get_project,
get_whole_project,
)
from openpype.pipeline import AvalonMongoDB
DOCUMENTS_FILE_NAME = "database"
@ -50,14 +53,12 @@ def pack_project(project_name, destination_dir=None):
Args:
project_name(str): Project that should be packaged.
destination_dir(str): Optinal path where zip will be stored. Project's
destination_dir(str): Optional path where zip will be stored. Project's
root is used if not passed.
"""
print("Creating package of project \"{}\"".format(project_name))
# Validate existence of project
dbcon = AvalonMongoDB()
dbcon.Session["AVALON_PROJECT"] = project_name
project_doc = dbcon.find_one({"type": "project"})
project_doc = get_project(project_name)
if not project_doc:
raise ValueError("Project \"{}\" was not found in database".format(
project_name
@ -118,7 +119,7 @@ def pack_project(project_name, destination_dir=None):
temp_docs_json = s.name
# Query all project documents and store them to temp json
docs = list(dbcon.find({}))
docs = list(get_whole_project(project_name))
data = dumps(
docs, json_options=CANONICAL_JSON_OPTIONS
)
@ -147,7 +148,7 @@ def pack_project(project_name, destination_dir=None):
# Cleanup
os.remove(temp_docs_json)
os.remove(temp_metadata_json)
dbcon.uninstall()
print("*** Packing finished ***")
@ -207,7 +208,7 @@ def unpack_project(path_to_zip, new_root=None):
print("Using different root path {}".format(new_root))
root_path = new_root
project_doc = collection.find_one({"type": "project"})
project_doc = get_project(project_name)
roots = project_doc["config"]["roots"]
key = tuple(roots.keys())[0]
update_key = "config.roots.{}.{}".format(key, low_platform)

View file

@ -8,10 +8,8 @@ except ImportError:
# Allow to fall back on Multiverse 6.3.0+ pxr usd library
from mvpxr import Usd, UsdGeom, Sdf, Kind
from openpype.pipeline import (
registered_root,
legacy_io,
)
from openpype.client import get_project, get_asset_by_name
from openpype.pipeline import legacy_io, Anatomy
log = logging.getLogger(__name__)
@ -128,7 +126,8 @@ def create_model(filename, asset, variant_subsets):
"""
asset_doc = legacy_io.find_one({"name": asset, "type": "asset"})
project_name = legacy_io.active_project()
asset_doc = get_asset_by_name(project_name, asset)
assert asset_doc, "Asset not found: %s" % asset
variants = []
@ -178,7 +177,8 @@ def create_shade(filename, asset, variant_subsets):
"""
asset_doc = legacy_io.find_one({"name": asset, "type": "asset"})
project_name = legacy_io.active_project()
asset_doc = get_asset_by_name(project_name, asset)
assert asset_doc, "Asset not found: %s" % asset
variants = []
@ -213,7 +213,8 @@ def create_shade_variation(filename, asset, model_variant, shade_variants):
"""
asset_doc = legacy_io.find_one({"name": asset, "type": "asset"})
project_name = legacy_io.active_project()
asset_doc = get_asset_by_name(project_name, asset)
assert asset_doc, "Asset not found: %s" % asset
variants = []
@ -313,21 +314,25 @@ def get_usd_master_path(asset, subset, representation):
"""
project = legacy_io.find_one(
{"type": "project"}, projection={"config.template.publish": True}
project_name = legacy_io.active_project()
anatomy = Anatomy(project_name)
project_doc = get_project(
project_name,
fields=["name", "data.code"]
)
template = project["config"]["template"]["publish"]
if isinstance(asset, dict) and "name" in asset:
# Allow explicitly passing asset document
asset_doc = asset
else:
asset_doc = legacy_io.find_one({"name": asset, "type": "asset"})
asset_doc = get_asset_by_name(project_name, asset, fields=["name"])
path = template.format(
**{
"root": registered_root(),
"project": legacy_io.Session["AVALON_PROJECT"],
formatted_result = anatomy.format(
{
"project": {
"name": project_name,
"code": project_doc.get("data", {}).get("code")
},
"asset": asset_doc["name"],
"subset": subset,
"representation": representation,
@ -335,6 +340,7 @@ def get_usd_master_path(asset, subset, representation):
}
)
path = formatted_result["publish"]["path"]
# Remove the version folder
subset_folder = os.path.dirname(os.path.dirname(path))
master_folder = os.path.join(subset_folder, "master")
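In effect the function now resolves the publish template through Anatomy and then swaps the version folder for a "master" folder; a call sketch (asset, subset and representation names are illustrative):

    master_path = get_usd_master_path(
        asset="hero_character",     # or an asset document with a "name" key
        subset="usdModelMain",
        representation="usd",
    )
    # -> .../hero_character/usdModelMain/master/... instead of a version folder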

View file

@ -5,7 +5,12 @@ from bson.objectid import ObjectId
from aiohttp.web_response import Response
from openpype.pipeline import AvalonMongoDB
from openpype.client import (
get_projects,
get_project,
get_assets,
get_asset_by_name,
)
from openpype_modules.webserver.base_routes import RestApiEndpoint
@ -14,19 +19,13 @@ class _RestApiEndpoint(RestApiEndpoint):
self.resource = resource
super(_RestApiEndpoint, self).__init__()
@property
def dbcon(self):
return self.resource.dbcon
class AvalonProjectsEndpoint(_RestApiEndpoint):
async def get(self) -> Response:
output = []
for project_name in self.dbcon.database.collection_names():
project_doc = self.dbcon.database[project_name].find_one({
"type": "project"
})
output.append(project_doc)
output = [
project_doc
for project_doc in get_projects()
]
return Response(
status=200,
body=self.resource.encode(output),
@ -36,9 +35,7 @@ class AvalonProjectsEndpoint(_RestApiEndpoint):
class AvalonProjectEndpoint(_RestApiEndpoint):
async def get(self, project_name) -> Response:
project_doc = self.dbcon.database[project_name].find_one({
"type": "project"
})
project_doc = get_project(project_name)
if project_doc:
return Response(
status=200,
@ -53,9 +50,7 @@ class AvalonProjectEndpoint(_RestApiEndpoint):
class AvalonAssetsEndpoint(_RestApiEndpoint):
async def get(self, project_name) -> Response:
asset_docs = list(self.dbcon.database[project_name].find({
"type": "asset"
}))
asset_docs = list(get_assets(project_name))
return Response(
status=200,
body=self.resource.encode(asset_docs),
@ -65,10 +60,7 @@ class AvalonAssetsEndpoint(_RestApiEndpoint):
class AvalonAssetEndpoint(_RestApiEndpoint):
async def get(self, project_name, asset_name) -> Response:
asset_doc = self.dbcon.database[project_name].find_one({
"type": "asset",
"name": asset_name
})
asset_doc = get_asset_by_name(project_name, asset_name)
if asset_doc:
return Response(
status=200,
@ -88,9 +80,6 @@ class AvalonRestApiResource:
self.module = avalon_module
self.server_manager = server_manager
self.dbcon = AvalonMongoDB()
self.dbcon.install()
self.prefix = "/avalon"
self.endpoint_defs = (
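The endpoints keep serving routes under the "/avalon" prefix, only backed by openpype.client queries now. A client-side sketch; the host, port and exact route shapes are assumptions, since the endpoint definitions are truncated above:

    import requests

    base = "http://localhost:8079/avalon"  # webserver address assumed
    projects = requests.get(base + "/projects").json()
    asset = requests.get(base + "/projects/MyProject/assets/sh010").json()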

View file

@ -49,6 +49,7 @@ class _ModuleClass(object):
Object of this class can be stored to `sys.modules` and used for storing
dynamically imported modules.
"""
def __init__(self, name):
# Call setattr on super class
super(_ModuleClass, self).__setattr__("name", name)
@ -116,12 +117,13 @@ class _InterfacesClass(_ModuleClass):
- this is because interfaces must be available even if are missing
implementation
"""
def __getattr__(self, attr_name):
if attr_name not in self.__attributes__:
if attr_name in ("__path__", "__file__"):
return None
raise ImportError((
raise AttributeError((
"cannot import name '{}' from 'openpype_interfaces'"
).format(attr_name))
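The ImportError-to-AttributeError switch matters because Python's attribute protocol expects __getattr__ to raise AttributeError; anything else propagates out of hasattr/getattr. A minimal illustration with a hypothetical stand-in class:

    class _Interfaces:
        def __getattr__(self, attr_name):
            raise AttributeError(
                "cannot import name '{}' from 'openpype_interfaces'".format(
                    attr_name
                )
            )

    obj = _Interfaces()
    print(hasattr(obj, "missing"))         # False - AttributeError is swallowed
    print(getattr(obj, "missing", None))   # None  - the default is returned
    # Raising ImportError here would make both calls blow up instead.

Note that `from openpype_interfaces import X` still fails with an ImportError carrying the same message, because the from-import machinery converts the AttributeError itself.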

View file

@ -1,16 +1,9 @@
from openpype.api import Logger
from openpype.pipeline import (
legacy_io,
LauncherAction,
)
from openpype.client import get_asset_by_name
from openpype.pipeline import LauncherAction
from openpype_modules.clockify.clockify_api import ClockifyAPI
log = Logger.get_logger(__name__)
class ClockifyStart(LauncherAction):
name = "clockify_start_timer"
label = "Clockify - Start Timer"
icon = "clockify_icon"
@ -24,20 +17,19 @@ class ClockifyStart(LauncherAction):
return False
def process(self, session, **kwargs):
project_name = session['AVALON_PROJECT']
asset_name = session['AVALON_ASSET']
task_name = session['AVALON_TASK']
project_name = session["AVALON_PROJECT"]
asset_name = session["AVALON_ASSET"]
task_name = session["AVALON_TASK"]
description = asset_name
asset = legacy_io.find_one({
'type': 'asset',
'name': asset_name
})
if asset is not None:
desc_items = asset.get('data', {}).get('parents', [])
asset_doc = get_asset_by_name(
project_name, asset_name, fields=["data.parents"]
)
if asset_doc is not None:
desc_items = asset_doc.get("data", {}).get("parents", [])
desc_items.append(asset_name)
desc_items.append(task_name)
description = '/'.join(desc_items)
description = "/".join(desc_items)
project_id = self.clockapi.get_project_id(project_name)
tag_ids = []

View file

@ -1,11 +1,6 @@
from openpype.client import get_projects, get_project
from openpype_modules.clockify.clockify_api import ClockifyAPI
from openpype.api import Logger
from openpype.pipeline import (
legacy_io,
LauncherAction,
)
log = Logger.get_logger(__name__)
from openpype.pipeline import LauncherAction
class ClockifySync(LauncherAction):
@ -22,39 +17,36 @@ class ClockifySync(LauncherAction):
return self.have_permissions
def process(self, session, **kwargs):
project_name = session.get('AVALON_PROJECT', None)
project_name = session.get("AVALON_PROJECT") or ""
projects_to_sync = []
if project_name.strip() == '' or project_name is None:
for project in legacy_io.projects():
projects_to_sync.append(project)
if project_name.strip():
projects_to_sync = [get_project(project_name)]
else:
project = legacy_io.find_one({'type': 'project'})
projects_to_sync.append(project)
projects_to_sync = get_projects()
projects_info = {}
for project in projects_to_sync:
task_types = project['config']['tasks'].keys()
projects_info[project['name']] = task_types
task_types = project["config"]["tasks"].keys()
projects_info[project["name"]] = task_types
clockify_projects = self.clockapi.get_projects()
for project_name, task_types in projects_info.items():
if project_name not in clockify_projects:
response = self.clockapi.add_project(project_name)
if 'id' not in response:
self.log.error('Project {} can\'t be created'.format(
project_name
))
continue
project_id = response['id']
else:
project_id = clockify_projects[project_name]
if project_name in clockify_projects:
continue
response = self.clockapi.add_project(project_name)
if "id" not in response:
self.log.error("Project {} can't be created".format(
project_name
))
continue
clockify_workspace_tags = self.clockapi.get_tags()
for task_type in task_types:
if task_type not in clockify_workspace_tags:
response = self.clockapi.add_tag(task_type)
if 'id' not in response:
if "id" not in response:
self.log.error('Task {} can\'t be created'.format(
task_type
))

View file

@ -15,7 +15,7 @@ import attr
import requests
import pyblish.api
from openpype.lib.abstract_metaplugins import AbstractMetaInstancePlugin
from openpype.pipeline.publish import AbstractMetaInstancePlugin
def requests_post(*args, **kwargs):

View file

@ -55,7 +55,7 @@ class HoudiniSubmitPublishDeadline(pyblish.api.ContextPlugin):
scenename = os.path.basename(scene)
# Get project code
project = legacy_io.find_one({"type": "project"})
project = context.data["projectEntity"]
code = project["data"].get("code", project["name"])
job_name = "{scene} [PUBLISH]".format(scene=scenename)

View file

@ -711,7 +711,9 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
new_payload["JobInfo"].update(tiles_data["JobInfo"])
new_payload["PluginInfo"].update(tiles_data["PluginInfo"])
job_hash = hashlib.sha256("{}_{}".format(file_index, file))
self.log.info("hashing {} - {}".format(file_index, file))
job_hash = hashlib.sha256(
("{}_{}".format(file_index, file)).encode("utf-8"))
frame_jobs[frame] = job_hash.hexdigest()
new_payload["JobInfo"]["ExtraInfo0"] = job_hash.hexdigest()
new_payload["JobInfo"]["ExtraInfo1"] = file

View file

@ -11,6 +11,7 @@ import clique
import pyblish.api
import openpype.api
from openpype.client import get_representations
from openpype.pipeline import (
get_representation_path,
legacy_io,
@ -18,15 +19,23 @@ from openpype.pipeline import (
from openpype.pipeline.farm.patterning import match_aov_pattern
def get_resources(version, extension=None):
def get_resources(project_name, version, extension=None):
"""Get the files from the specific version."""
query = {"type": "representation", "parent": version["_id"]}
# TODO this function seems to be weird
# - it's looking for representation with one extension or first (any)
# representation from a version?
# - not sure how this should work, maybe it does for specific use cases
# but probably can't be used for all resources from 2D workflows
extensions = None
if extension:
query["name"] = extension
representation = legacy_io.find_one(query)
assert representation, "This is a bug"
extensions = [extension]
repre_docs = list(get_representations(
project_name, version_ids=[version["_id"]], extensions=extensions
))
assert repre_docs, "This is a bug"
representation = repre_docs[0]
directory = get_representation_path(representation)
print("Source: ", directory)
resources = sorted(
@ -330,13 +339,16 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
self.log.info("Preparing to copy ...")
start = instance.data.get("frameStart")
end = instance.data.get("frameEnd")
project_name = legacy_io.active_project()
# get latest version of subset
# this will stop if subset wasn't published yet
version = openpype.api.get_latest_version(instance.data.get("asset"),
instance.data.get("subset"))
# get its files based on extension
subset_resources = get_resources(version, representation.get("ext"))
subset_resources = get_resources(
project_name, version, representation.get("ext")
)
r_col, _ = clique.assemble(subset_resources)
# if override remove all frames we are expecting to be rendered
@ -1045,7 +1057,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
get publish_path
Args:
anatomy (pype.lib.anatomy.Anatomy):
anatomy (openpype.pipeline.anatomy.Anatomy):
template_data (dict): pre-calculated collected data for process
asset (string): asset name
subset (string): subset name (actually group name of subset)
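A call sketch for the reworked helper (asset and subset names are illustrative; `project_name` comes from the surrounding plugin code as shown above):

    import openpype.api

    version = openpype.api.get_latest_version("sh010", "renderMain")
    exr_files = get_resources(project_name, version, extension="exr")
    # Without 'extension' the first (any) representation of the version is used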

View file

@ -6,7 +6,10 @@ import collections
import ftrack_api
from openpype.lib import get_datetime_data
from openpype.api import get_project_settings
from openpype.settings.lib import (
get_project_settings,
get_default_project_settings
)
from openpype_modules.ftrack.lib import ServerAction
@ -79,6 +82,35 @@ class CreateDailyReviewSessionServerAction(ServerAction):
)
return True
def _calculate_next_cycle_delta(self):
studio_default_settings = get_default_project_settings()
action_settings = (
studio_default_settings
["ftrack"]
[self.settings_frack_subkey]
[self.settings_key]
)
cycle_hour_start = action_settings.get("cycle_hour_start")
if not cycle_hour_start:
h = m = s = 0
else:
h, m, s = cycle_hour_start
# Calculate the next trigger time from the configured cycle hour
# - the callback will re-arm another timer with a one day offset
now = datetime.datetime.now()
# Build today's expected trigger time
expected_next_trigger = datetime.datetime(
now.year, now.month, now.day, h, m, s
)
if expected_next_trigger > now:
seconds = (expected_next_trigger - now).total_seconds()
else:
expected_next_trigger += self._day_delta
seconds = (expected_next_trigger - now).total_seconds()
return seconds, expected_next_trigger
def register(self, *args, **kwargs):
"""Override register to be able trigger """
# Register server action as would be normally
@ -86,22 +118,14 @@ class CreateDailyReviewSessionServerAction(ServerAction):
*args, **kwargs
)
# Create threading timer which will trigger creation of report
# at the 00:00:01 of next day
# - callback will trigger another timer which will have 1 day offset
now = datetime.datetime.now()
# Create object of today morning
today_morning = datetime.datetime(
now.year, now.month, now.day, 0, 0, 1
)
# Add a day delta (to calculate next day date)
next_day_morning = today_morning + self._day_delta
# Calculate first delta in seconds for first threading timer
first_delta = (next_day_morning - now).total_seconds()
seconds_delta, cycle_time = self._calculate_next_cycle_delta()
# Store cycle time which will be used to create next timer
self._last_cyle_time = next_day_morning
self._last_cyle_time = cycle_time
# Create timer thread
self._cycle_timer = threading.Timer(first_delta, self._timer_callback)
self._cycle_timer = threading.Timer(
seconds_delta, self._timer_callback
)
self._cycle_timer.start()
self._check_review_session()
@ -111,13 +135,12 @@ class CreateDailyReviewSessionServerAction(ServerAction):
self._cycle_timer is not None
and self._last_cyle_time is not None
):
now = datetime.datetime.now()
while self._last_cyle_time < now:
self._last_cyle_time = self._last_cyle_time + self._day_delta
seconds_delta, cycle_time = self._calculate_next_cycle_delta()
self._last_cyle_time = cycle_time
delay = (self._last_cyle_time - now).total_seconds()
self._cycle_timer = threading.Timer(delay, self._timer_callback)
self._cycle_timer = threading.Timer(
seconds_delta, self._timer_callback
)
self._cycle_timer.start()
self._check_review_session()
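A worked example of the new delta calculation (times are illustrative): with cycle_hour_start = (18, 0, 0) and the current time at 14:30, the next trigger is today at 18:00:00 and the timer sleeps 12600 seconds; at 20:00 the trigger has already passed, so a day is added and the timer sleeps until tomorrow 18:00:00.

    import datetime

    now = datetime.datetime(2022, 7, 22, 14, 30)
    trigger = datetime.datetime(now.year, now.month, now.day, 18, 0, 0)
    if trigger <= now:
        trigger += datetime.timedelta(days=1)  # same time, next day
    print((trigger - now).total_seconds())     # 12600.0 -> 3.5 hours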

View file

@ -1,4 +1,5 @@
import json
import copy
from openpype.client import get_project
from openpype.api import ProjectSettings
@ -373,6 +374,10 @@ class PrepareProjectServer(ServerAction):
project_name, project_code
))
create_project(project_name, project_code)
self.trigger_event(
"openpype.project.created",
{"project_name": project_name}
)
project_settings = ProjectSettings(project_name)
project_anatomy_settings = project_settings["project_anatomy"]
@ -400,6 +405,10 @@ class PrepareProjectServer(ServerAction):
self.log.debug("- Key \"{}\" set to \"{}\"".format(key, value))
session.commit()
event_data = copy.deepcopy(in_data)
event_data["project_name"] = project_name
self.trigger_event("openpype.project.prepared", event_data)
return True

View file

@ -1,7 +1,8 @@
import time
import sys
import json
import traceback
import ftrack_api
from openpype_modules.ftrack.lib import ServerAction
from openpype_modules.ftrack.lib.avalon_sync import SyncEntitiesFactory
@ -180,6 +181,13 @@ class SyncToAvalonServer(ServerAction):
"* Total time: {}".format(time_7 - time_start)
)
if self.entities_factory.project_created:
event = ftrack_api.event.base.Event(
topic="openpype.project.created",
data={"project_name": project_name}
)
self.session.event_hub.publish(event)
report = self.entities_factory.report()
if report and report.get("items"):
default_title = "Synchronization report ({}):".format(

Some files were not shown because too many files have changed in this diff