Merge branch 'develop' into feature/OP-3499_Move-publish-render-abstractions

Jakub Trllo 2022-07-04 16:56:11 +02:00
commit 1395c743d6
81 changed files with 3079 additions and 1794 deletions


@@ -1,13 +1,36 @@
# Changelog
## [3.12.1-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.12.1-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.12.0...HEAD)
### 📖 Documentation
- Docs: Added minimal permissions for MongoDB [\#3441](https://github.com/pypeclub/OpenPype/pull/3441)
**🆕 New features**
- Maya: Add VDB to Arnold loader [\#3433](https://github.com/pypeclub/OpenPype/pull/3433)
**🚀 Enhancements**
- Blender: Bugfix - Set fps properly on open [\#3426](https://github.com/pypeclub/OpenPype/pull/3426)
- Blender: pre pyside install for all platforms [\#3400](https://github.com/pypeclub/OpenPype/pull/3400)
**🐛 Bug fixes**
- Nuke: prerender reviewable fails [\#3450](https://github.com/pypeclub/OpenPype/pull/3450)
- Maya: fix hashing in Python 3 for tile rendering [\#3447](https://github.com/pypeclub/OpenPype/pull/3447)
- LogViewer: Escape html characters in log message [\#3443](https://github.com/pypeclub/OpenPype/pull/3443)
- Nuke: Slate frame is integrated [\#3427](https://github.com/pypeclub/OpenPype/pull/3427)
**🔀 Refactored code**
- Clockify: Use query functions in clockify actions [\#3458](https://github.com/pypeclub/OpenPype/pull/3458)
- General: Use query functions in rest api calls [\#3457](https://github.com/pypeclub/OpenPype/pull/3457)
- General: Use Anatomy after move to pipeline [\#3436](https://github.com/pypeclub/OpenPype/pull/3436)
- General: Anatomy moved to pipeline [\#3435](https://github.com/pypeclub/OpenPype/pull/3435)
## [3.12.0](https://github.com/pypeclub/OpenPype/tree/3.12.0) (2022-06-28)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.0-nightly.3...3.12.0)
@@ -24,7 +47,6 @@
- General: Add ability to change user value for templates [\#3366](https://github.com/pypeclub/OpenPype/pull/3366)
- Hosts: More options for in-host callbacks [\#3357](https://github.com/pypeclub/OpenPype/pull/3357)
- Multiverse: expose some settings to GUI [\#3350](https://github.com/pypeclub/OpenPype/pull/3350)
- Maya: Allow more data to be published along camera 🎥 [\#3304](https://github.com/pypeclub/OpenPype/pull/3304)
**🐛 Bug fixes**
@@ -57,7 +79,6 @@
- AfterEffects: Use client query functions [\#3374](https://github.com/pypeclub/OpenPype/pull/3374)
- TVPaint: Use client query functions [\#3340](https://github.com/pypeclub/OpenPype/pull/3340)
- Ftrack: Use client query functions [\#3339](https://github.com/pypeclub/OpenPype/pull/3339)
- Webpublisher: Use client query functions [\#3333](https://github.com/pypeclub/OpenPype/pull/3333)
- Standalone Publisher: Use client query functions [\#3330](https://github.com/pypeclub/OpenPype/pull/3330)

**Merged pull requests:**
@@ -93,17 +114,15 @@
- nuke: adding extract thumbnail settings 3.10 [\#3347](https://github.com/pypeclub/OpenPype/pull/3347)
- General: Fix last version function [\#3345](https://github.com/pypeclub/OpenPype/pull/3345)
- Deadline: added OPENPYPE\_MONGO to filter [\#3336](https://github.com/pypeclub/OpenPype/pull/3336)
- Nuke: fixing farm publishing if review is disabled [\#3306](https://github.com/pypeclub/OpenPype/pull/3306)
**🔀 Refactored code**
- Webpublisher: Use client query functions [\#3333](https://github.com/pypeclub/OpenPype/pull/3333)
## [3.11.0](https://github.com/pypeclub/OpenPype/tree/3.11.0) (2022-06-17)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.11.0-nightly.4...3.11.0)
### 📖 Documentation
- Documentation: Add app key to template documentation [\#3299](https://github.com/pypeclub/OpenPype/pull/3299)
- doc: adding royal render and multiverse to the web site [\#3285](https://github.com/pypeclub/OpenPype/pull/3285)
**🚀 Enhancements**
- Settings: Settings can be extracted from UI [\#3323](https://github.com/pypeclub/OpenPype/pull/3323)
@@ -111,8 +130,6 @@
- Ftrack: Action to easily create daily review session [\#3310](https://github.com/pypeclub/OpenPype/pull/3310)
- TVPaint: Extractor use mark in/out range to render [\#3309](https://github.com/pypeclub/OpenPype/pull/3309)
- Ftrack: Delivery action can work on ReviewSessions [\#3307](https://github.com/pypeclub/OpenPype/pull/3307)
- Maya: Look assigner UI improvements [\#3298](https://github.com/pypeclub/OpenPype/pull/3298)
- Ftrack: Action to transfer values of hierarchical attributes [\#3284](https://github.com/pypeclub/OpenPype/pull/3284)
**🐛 Bug fixes**
@@ -120,17 +137,10 @@
- Houdini: Fix Houdini VDB manage update wrong file attribute name [\#3322](https://github.com/pypeclub/OpenPype/pull/3322)
- Nuke: anatomy compatibility issue hacks [\#3321](https://github.com/pypeclub/OpenPype/pull/3321)
- hiero: otio p3 compatibility issue - metadata on effect use update 3.11 [\#3314](https://github.com/pypeclub/OpenPype/pull/3314)
- General: Vendorized modules for Python 2 and update poetry lock [\#3305](https://github.com/pypeclub/OpenPype/pull/3305)
- Fix - added local targets to install host [\#3303](https://github.com/pypeclub/OpenPype/pull/3303)
- Settings: Add missing default settings for nuke gizmo [\#3301](https://github.com/pypeclub/OpenPype/pull/3301)
- Maya: Fix swaped width and height in reviews [\#3300](https://github.com/pypeclub/OpenPype/pull/3300)
- Maya: point cache publish handles Maya instances [\#3297](https://github.com/pypeclub/OpenPype/pull/3297)
- Global: extract review slate issues [\#3286](https://github.com/pypeclub/OpenPype/pull/3286)
**🔀 Refactored code**
- Blender: Use client query functions [\#3331](https://github.com/pypeclub/OpenPype/pull/3331)
- General: Define query functions [\#3288](https://github.com/pypeclub/OpenPype/pull/3288)
**Merged pull requests:**


@@ -18,7 +18,8 @@ AppPublisher=Orbi Tools s.r.o
AppPublisherURL=http://pype.club
AppSupportURL=http://pype.club
AppUpdatesURL=http://pype.club
DefaultDirName={autopf}\{#MyAppName}
DefaultDirName={autopf}\{#MyAppName}\{#AppVer}
UsePreviousAppDir=no
DisableProgramGroupPage=yes
OutputBaseFilename={#MyAppName}-{#AppVer}-install
AllowCancelDuringInstall=yes
@@ -27,7 +28,7 @@ AllowCancelDuringInstall=yes
PrivilegesRequiredOverridesAllowed=dialog
SetupIconFile=igniter\openpype.ico
OutputDir=build\
Compression=lzma
Compression=lzma2
SolidCompression=yes
WizardStyle=modern
@@ -37,6 +38,11 @@ Name: "english"; MessagesFile: "compiler:Default.isl"
[Tasks]
Name: "desktopicon"; Description: "{cm:CreateDesktopIcon}"; GroupDescription: "{cm:AdditionalIcons}"; Flags: unchecked
[InstallDelete]
; clean everything in previous installation folder
Type: filesandordirs; Name: "{app}\*"
[Files]
Source: "build\{#build}\*"; DestDir: "{app}"; Flags: ignoreversion recursesubdirs createallsubdirs
; NOTE: Don't use "Flags: ignoreversion" on any shared system files


@@ -15,12 +15,15 @@ from bson.objectid import ObjectId
from openpype.lib.mongo import OpenPypeMongoConnection

def _get_project_connection(project_name=None):
    db_name = os.environ.get("AVALON_DB") or "avalon"
    mongodb = OpenPypeMongoConnection.get_mongo_client()[db_name]
    if project_name:
        return mongodb[project_name]
    return mongodb

def _get_project_database():
    db_name = os.environ.get("AVALON_DB") or "avalon"
    return OpenPypeMongoConnection.get_mongo_client()[db_name]

def _get_project_connection(project_name):
    if not project_name:
        raise ValueError("Invalid project name {}".format(str(project_name)))
    return _get_project_database()[project_name]

def _prepare_fields(fields, required_fields=None):
@@ -55,7 +58,7 @@ def _convert_ids(in_ids):

def get_projects(active=True, inactive=False, fields=None):
    mongodb = _get_project_connection()
    mongodb = _get_project_database()
    for project_name in mongodb.collection_names():
        if project_name in ("system.indexes",):
            continue
@@ -1009,8 +1012,10 @@ def get_representations_parents(project_name, representations):
    versions_by_subset_id = collections.defaultdict(list)
    subsets_by_subset_id = {}
    subsets_by_asset_id = collections.defaultdict(list)
    output = {}

    for representation in representations:
        repre_id = representation["_id"]
        output[repre_id] = (None, None, None, None)
        version_id = representation["parent"]
        repres_by_version_id[version_id].append(representation)
@@ -1040,7 +1045,6 @@ def get_representations_parents(project_name, representations):
    project = get_project(project_name)

    output = {}
    for version_id, representations in repres_by_version_id.items():
        asset = None
        subset = None
@@ -1080,7 +1084,7 @@ def get_representation_parents(project_name, representation):
    parents_by_repre_id = get_representations_parents(
        project_name, [representation]
    )
    return parents_by_repre_id.get(repre_id)
    return parents_by_repre_id[repre_id]

def get_thumbnail_id_from_source(project_name, src_type, src_id):
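The reordering in this hunk (pre-filling `output` before the parent lookups run) is what makes the switch from `.get(repre_id)` to direct `[repre_id]` access safe. A minimal standalone sketch of the pattern, with the MongoDB lookups replaced by in-memory data (hypothetical documents, for illustration only):

```python
import collections


def get_representations_parents(representations):
    output = {}
    repres_by_version_id = collections.defaultdict(list)
    for representation in representations:
        repre_id = representation["_id"]
        # Pre-fill so every requested id has an entry, even when no
        # parent documents are resolved later.
        output[repre_id] = (None, None, None, None)
        repres_by_version_id[representation["parent"]].append(representation)
    # ... in the real function, version/subset/asset lookups would
    # overwrite the placeholder tuples here ...
    return output


parents = get_representations_parents([{"_id": "r1", "parent": "v1"}])
print(parents["r1"])  # (None, None, None, None)
```

With the pre-filled dict, a representation whose parents cannot be found yields a tuple of `None` values instead of raising `KeyError`.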


@@ -1,11 +1,10 @@
from openpype.api import Anatomy
from openpype.lib import (
    PreLaunchHook,
    EnvironmentPrepData,
    prepare_app_environments,
    prepare_context_environments
)
from openpype.pipeline import AvalonMongoDB
from openpype.pipeline import AvalonMongoDB, Anatomy


class GlobalHostDataHook(PreLaunchHook):

openpype/host/__init__.py Normal file

@@ -0,0 +1,13 @@
from .host import (
    HostBase,
    IWorkfileHost,
    ILoadHost,
    INewPublisher,
)


__all__ = (
    "HostBase",
    "IWorkfileHost",
    "ILoadHost",
    "INewPublisher",
)

openpype/host/host.py Normal file

@@ -0,0 +1,524 @@
import logging
import contextlib
from abc import ABCMeta, abstractproperty, abstractmethod

import six

# NOTE can't import 'typing' because of issues in Maya 2020
# - shiboken crashes on 'typing' module import


class MissingMethodsError(ValueError):
    """Exception raised when a host is missing methods required for a workflow.

    Args:
        host (HostBase): Host implementation where methods are missing.
        missing_methods (list[str]): List of missing methods.
    """

    def __init__(self, host, missing_methods):
        joined_missing = ", ".join(
            ['"{}"'.format(item) for item in missing_methods]
        )
        message = (
            "Host \"{}\" is missing methods {}".format(
                host.name, joined_missing
            )
        )
        super(MissingMethodsError, self).__init__(message)
@six.add_metaclass(ABCMeta)
class HostBase(object):
    """Base class for host implementations.

    A host is the pipeline implementation of a DCC application. This class
    should help to identify what must/should/can be implemented for specific
    functionality.

    Compared to the 'avalon' concept: what was previously a set of functions
    in a host implementation folder. The host implementation should primarily
    care about adding the ability of creation (marking subsets to be
    published) and optionally about referencing published representations as
    containers.

    A host may need to extend some functionality, like working with workfiles
    or loading. Not all host implementations may allow that; for those
    purposes the logic can be extended by implementing functions for that
    purpose. There are prepared interfaces to identify what must be
    implemented to be able to use that functionality.
    - the current statement is that it is not required to inherit from the
      interfaces, but all of the methods are validated (only their existence!)

    # Installation of host before (avalon concept):
    ```python
    from openpype.pipeline import install_host
    import openpype.hosts.maya.api as host

    install_host(host)
    ```

    # Installation of host now:
    ```python
    from openpype.pipeline import install_host
    from openpype.hosts.maya.api import MayaHost

    host = MayaHost()
    install_host(host)
    ```

    Todo:
        - move content of 'install_host' as a method of this class
            - register host object
            - install legacy_io
            - install global plugin paths
        - store registered plugin paths to this object
        - handle current context (project, asset, task)
            - this must be done in many separated steps
        - have its own object of host tools instead of using globals

    This implementation will probably change over time when more
    functionality and responsibility is added.
    """

    _log = None
    def __init__(self):
        """Initialization of host.

        Register DCC callbacks, host specific plugin paths, targets etc.
        (Part of what 'install' did in the 'avalon' concept.)

        Note:
            At this moment the global "installation" must happen before host
            installation. Because of this current limitation it is
            recommended to implement an 'install' method which is triggered
            after the global 'install'.
        """

        pass

    @property
    def log(self):
        if self._log is None:
            self._log = logging.getLogger(self.__class__.__name__)
        return self._log

    @abstractproperty
    def name(self):
        """Host name."""

        pass

    def get_current_context(self):
        """Get current context information.

        This method should be used to get the current context of the host.
        Usage of this method can be crucial for host implementations in DCCs
        where multiple workfiles can be opened at the same moment and a
        change of context can't be caught properly.

        Default implementation returns values from 'legacy_io.Session'.

        Returns:
            dict: Context with 3 keys 'project_name', 'asset_name' and
                'task_name'. All of them can be 'None'.
        """

        from openpype.pipeline import legacy_io

        if not legacy_io.is_installed():
            legacy_io.install()

        return {
            "project_name": legacy_io.Session["AVALON_PROJECT"],
            "asset_name": legacy_io.Session["AVALON_ASSET"],
            "task_name": legacy_io.Session["AVALON_TASK"]
        }
    def get_context_title(self):
        """Context title shown for UI purposes.

        Should return current context title if possible.

        Note:
            This method is used only for UI purposes so it is possible to
            return some logical title for contextless cases.
            Is not meant for "Context menu" label.

        Returns:
            str: Context title.
            None: Default title is used based on UI implementation.
        """

        # Use current context to fill the context title
        current_context = self.get_current_context()
        project_name = current_context["project_name"]
        asset_name = current_context["asset_name"]
        task_name = current_context["task_name"]
        items = []
        if project_name:
            items.append(project_name)
            if asset_name:
                items.append(asset_name)
                if task_name:
                    items.append(task_name)
        if items:
            return "/".join(items)
        return None

    @contextlib.contextmanager
    def maintained_selection(self):
        """Run some functionality while keeping the selection unchanged.

        This is DCC specific. Some DCCs may not allow implementing this
        ability; that is the reason why the default implementation is an
        empty context manager.

        Yields:
            None: Yield when ready to restore the selection at the end.
        """

        try:
            yield
        finally:
            pass
@six.add_metaclass(ABCMeta)
class ILoadHost:
    """Implementation requirements to be able to use referencing of representations.

    The load plugins can do referencing even without implementation of the
    methods here, but switching and removal of containers would not be
    possible.

    Questions:
        - Is the container list a dependency of the host or of load plugins?
        - Should this be directly in HostBase?
            - how to find out if referencing is available?
            - do we need to know that?
    """

    @staticmethod
    def get_missing_load_methods(host):
        """Look for missing methods on an "old type" host implementation.

        The method is used for validation of implemented functions related
        to loading. It checks only the existence of methods.

        Args:
            host (Union[ModuleType, HostBase]): Object of host where to look
                for required methods.

        Returns:
            list[str]: Missing method implementations for loading workflow.
        """

        if isinstance(host, ILoadHost):
            return []

        required = ["ls"]
        missing = []
        for name in required:
            if not hasattr(host, name):
                missing.append(name)
        return missing

    @staticmethod
    def validate_load_methods(host):
        """Validate implemented methods of an "old type" host for load workflow.

        Args:
            host (Union[ModuleType, HostBase]): Object of host to validate.

        Raises:
            MissingMethodsError: If there are missing methods on the host
                implementation.
        """

        missing = ILoadHost.get_missing_load_methods(host)
        if missing:
            raise MissingMethodsError(host, missing)

    @abstractmethod
    def get_containers(self):
        """Retrieve referenced containers from the scene.

        This can be implemented in hosts where referencing can be used.

        Returns:
            list[dict]: Information about loaded containers.
        """

        pass

    # --- Deprecated method names ---
    def ls(self):
        """Deprecated variant of 'get_containers'.

        Todo:
            Remove when all usages are replaced.
        """

        return self.get_containers()
@six.add_metaclass(ABCMeta)
class IWorkfileHost:
    """Implementation requirements to be able to use workfile utils and tool."""

    @staticmethod
    def get_missing_workfile_methods(host):
        """Look for missing methods on an "old type" host implementation.

        The method is used for validation of implemented functions related
        to workfiles. It checks only the existence of methods.

        Args:
            host (Union[ModuleType, HostBase]): Object of host where to look
                for required methods.

        Returns:
            list[str]: Missing method implementations for workfiles workflow.
        """

        if isinstance(host, IWorkfileHost):
            return []

        required = [
            "open_file",
            "save_file",
            "current_file",
            "has_unsaved_changes",
            "file_extensions",
            "work_root",
        ]
        missing = []
        for name in required:
            if not hasattr(host, name):
                missing.append(name)
        return missing

    @staticmethod
    def validate_workfile_methods(host):
        """Validate methods of an "old type" host for workfiles workflow.

        Args:
            host (Union[ModuleType, HostBase]): Object of host to validate.

        Raises:
            MissingMethodsError: If there are missing methods on the host
                implementation.
        """

        missing = IWorkfileHost.get_missing_workfile_methods(host)
        if missing:
            raise MissingMethodsError(host, missing)

    @abstractmethod
    def get_workfile_extensions(self):
        """Extensions that can be used to save a workfile.

        Questions:
            This could potentially use 'HostDefinition'.
        """

        return []

    @abstractmethod
    def save_workfile(self, dst_path=None):
        """Save the currently opened scene.

        Args:
            dst_path (str): Where the current scene should be saved. Or use
                the current path if 'None' is passed.
        """

        pass

    @abstractmethod
    def open_workfile(self, filepath):
        """Open passed filepath in the host.

        Args:
            filepath (str): Path to workfile.
        """

        pass
    @abstractmethod
    def get_current_workfile(self):
        """Retrieve path to the currently opened file.

        Returns:
            str: Path to file which is currently opened.
            None: If nothing is opened.
        """

        return None

    def workfile_has_unsaved_changes(self):
        """Check if the currently opened scene has unsaved modifications.

        Not all hosts can know if the current scene is saved because the API
        of the DCC does not support it.

        Returns:
            bool: True if the scene has unsaved modifications, False if it
                is saved.
            None: Can't tell if the workfile has modifications.
        """

        return None

    def work_root(self, session):
        """Modify workdir per host.

        The default implementation keeps the workdir untouched.

        Warnings:
            We must handle this modification in a more sophisticated way
            because this can't be called outside of the DCC, so opening of
            the last workfile (calculated before the DCC is launched) is
            complicated. Also breaking the defined work template is not a
            good idea.
            The only place where it's really used and can make sense is
            Maya. There 'workspace.mel' can modify subfolders where to look
            for maya files.

        Args:
            session (dict): Session context data.

        Returns:
            str: Path to new workdir.
        """

        return session["AVALON_WORKDIR"]
    # --- Deprecated method names ---
    def file_extensions(self):
        """Deprecated variant of 'get_workfile_extensions'.

        Todo:
            Remove when all usages are replaced.
        """

        return self.get_workfile_extensions()

    def save_file(self, dst_path=None):
        """Deprecated variant of 'save_workfile'.

        Todo:
            Remove when all usages are replaced.
        """

        return self.save_workfile(dst_path)

    def open_file(self, filepath):
        """Deprecated variant of 'open_workfile'.

        Todo:
            Remove when all usages are replaced.
        """

        return self.open_workfile(filepath)

    def current_file(self):
        """Deprecated variant of 'get_current_workfile'.

        Todo:
            Remove when all usages are replaced.
        """

        return self.get_current_workfile()

    def has_unsaved_changes(self):
        """Deprecated variant of 'workfile_has_unsaved_changes'.

        Todo:
            Remove when all usages are replaced.
        """

        return self.workfile_has_unsaved_changes()
@six.add_metaclass(ABCMeta)
class INewPublisher:
    """Functions related to the new creation system in the new publisher.

    The new publisher does not store information only about each created
    instance but also some global data. At this moment the data are related
    only to context publish plugins, but that can be extended in the future.
    """

    @staticmethod
    def get_missing_publish_methods(host):
        """Look for missing methods on an "old type" host implementation.

        The method is used for validation of implemented functions related
        to new publish creation. It checks only the existence of methods.

        Args:
            host (Union[ModuleType, HostBase]): Host module where to look
                for required methods.

        Returns:
            list[str]: Missing method implementations for the new publisher
                workflow.
        """

        if isinstance(host, INewPublisher):
            return []

        required = [
            "get_context_data",
            "update_context_data",
        ]
        missing = []
        for name in required:
            if not hasattr(host, name):
                missing.append(name)
        return missing

    @staticmethod
    def validate_publish_methods(host):
        """Validate implemented methods of an "old type" host.

        Args:
            host (Union[ModuleType, HostBase]): Host module to validate.

        Raises:
            MissingMethodsError: If there are missing methods on the host
                implementation.
        """

        missing = INewPublisher.get_missing_publish_methods(host)
        if missing:
            raise MissingMethodsError(host, missing)
    @abstractmethod
    def get_context_data(self):
        """Get global data related to creation-publishing from the workfile.

        These data are not related to any created instance but to the whole
        publishing context. Not saving/returning them will cause each reset
        of publishing to reset all values to the default ones.

        Context data can contain information about enabled/disabled publish
        plugins or other values that can be filled by the artist.

        Returns:
            dict: Context data stored using 'update_context_data'.
        """

        pass

    @abstractmethod
    def update_context_data(self, data, changes):
        """Store global context data to the workfile.

        Called when some values in the context data have changed.

        Without storing the values in a way that 'get_context_data' would
        return them, each reset of publishing would lose the values the
        artist filled in. Best practice is to store the values into the
        workfile, if possible.

        Args:
            data (dict): New data as a whole.
            changes (dict): Only data that has been changed. Each value is a
                tuple with '(<old>, <new>)' values.
        """

        pass
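The contract between the two abstract methods is simply round-tripping: whatever `update_context_data` stores, `get_context_data` must return after a publisher reset. A minimal in-memory sketch (a real host would persist the data into the workfile instead; `DummyPublishHost` and the `use_slate` key are hypothetical):

```python
class DummyPublishHost(object):
    def __init__(self):
        self._context_data = {}

    def get_context_data(self):
        # Return a copy so callers can't mutate the stored state.
        return dict(self._context_data)

    def update_context_data(self, data, changes):
        # 'changes' maps key -> (old, new); unused in this sketch.
        self._context_data = dict(data)


host = DummyPublishHost()
host.update_context_data({"use_slate": True}, {"use_slate": (None, True)})
print(host.get_context_data())  # {'use_slate': True}
```

An in-memory store like this loses the data when the DCC closes, which is exactly why the docstring recommends storing the values into the workfile.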


@@ -3,9 +3,16 @@ import sys
import re
import contextlib

from bson.objectid import ObjectId
from Qt import QtGui
from openpype.client import (
    get_asset_by_name,
    get_subset_by_name,
    get_last_version_by_subset_id,
    get_representation_by_id,
    get_representation_by_name,
    get_representation_parents,
)
from openpype.pipeline import (
    switch_container,
    legacy_io,
@@ -93,13 +100,16 @@ def switch_item(container,
        raise ValueError("Must have at least one change provided to switch.")

    # Collect any of current asset, subset and representation if not provided
    # so we can use the original name from those.
    project_name = legacy_io.active_project()
    if any(not x for x in [asset_name, subset_name, representation_name]):
        _id = ObjectId(container["representation"])
        representation = legacy_io.find_one({
            "type": "representation", "_id": _id
        })
        version, subset, asset, project = legacy_io.parenthood(representation)
        repre_id = container["representation"]
        representation = get_representation_by_id(project_name, repre_id)
        repre_parent_docs = get_representation_parents(representation)
        if repre_parent_docs:
            version, subset, asset, _ = repre_parent_docs
        else:
            version = subset = asset = None

    if asset_name is None:
        asset_name = asset["name"]
@@ -111,39 +121,26 @@ def switch_item(container,
        representation_name = representation["name"]

    # Find the new one
    asset = legacy_io.find_one({
        "name": asset_name,
        "type": "asset"
    })
    subset = legacy_io.find_one({
        "name": subset_name,
        "type": "subset",
        "parent": asset["_id"]
    })
    version = legacy_io.find_one(
        {
            "type": "version",
            "parent": subset["_id"]
        },
        sort=[('name', -1)]
    )
    representation = legacy_io.find_one({
        "name": representation_name,
        "type": "representation",
        "parent": version["_id"]}
    )

    asset = get_asset_by_name(project_name, asset_name, fields=["_id"])
    assert asset, ("Could not find asset in the database with the name "
                   "'%s'" % asset_name)

    subset = get_subset_by_name(
        project_name, subset_name, asset["_id"], fields=["_id"]
    )
    assert subset, ("Could not find subset in the database with the name "
                    "'%s'" % subset_name)

    version = get_last_version_by_subset_id(
        project_name, subset["_id"], fields=["_id"]
    )
    assert version, "Could not find a version for {}.{}".format(
        asset_name, subset_name
    )

    representation = get_representation_by_name(
        project_name, representation_name, version["_id"]
    )
    assert representation, ("Could not find representation in the database "
                            "with the name '%s'" % representation_name)


@@ -1,6 +1,7 @@
import os
import contextlib

from openpype.client import get_version_by_id
from openpype.pipeline import (
    load,
    legacy_io,
@@ -123,7 +124,7 @@ def loader_shift(loader, frame, relative=True):

class FusionLoadSequence(load.LoaderPlugin):
    """Load image sequence into Fusion"""

    families = ["imagesequence", "review", "render"]
    families = ["imagesequence", "review", "render", "plate"]
    representations = ["*"]

    label = "Load sequence"
@@ -211,10 +212,8 @@ class FusionLoadSequence(load.LoaderPlugin):
        path = self._get_first_image(root)

        # Get start frame from version data
        version = legacy_io.find_one({
            "type": "version",
            "_id": representation["parent"]
        })
        project_name = legacy_io.active_project()
        version = get_version_by_id(project_name, representation["parent"])
        start = version["data"].get("frameStart")
        if start is None:
            self.log.warning("Missing start frame for updated version"


@@ -4,6 +4,11 @@ import sys
import logging

# Pipeline imports
from openpype.client import (
    get_project,
    get_asset_by_name,
    get_versions,
)
from openpype.pipeline import (
    legacy_io,
    install_host,
@@ -164,9 +169,9 @@ def update_frame_range(comp, representations):
    """

    version_ids = [r["parent"] for r in representations]
    versions = legacy_io.find({"type": "version", "_id": {"$in": version_ids}})
    versions = list(versions)

    project_name = legacy_io.active_project()
    version_ids = {r["parent"] for r in representations}
    versions = list(get_versions(project_name, version_ids))

    versions = [v for v in versions
                if v["data"].get("frameStart", None) is not None]
@@ -203,11 +208,12 @@ def switch(asset_name, filepath=None, new=True):
    # Assert asset name exists
    # It is better to do this here than to wait till switch_shot does it
    asset = legacy_io.find_one({"type": "asset", "name": asset_name})
    project_name = legacy_io.active_project()
    asset = get_asset_by_name(project_name, asset_name)
    assert asset, "Could not find '%s' in the database" % asset_name

    # Get current project
    self._project = legacy_io.find_one({"type": "project"})
    self._project = get_project(project_name)

    # Go to comp
    if not filepath:


@ -7,6 +7,7 @@ from Qt import QtWidgets, QtCore
import qtawesome as qta import qtawesome as qta
from openpype.client import get_assets
from openpype import style from openpype import style
from openpype.pipeline import ( from openpype.pipeline import (
install_host, install_host,
@ -142,7 +143,7 @@ class App(QtWidgets.QWidget):
# Clear any existing items # Clear any existing items
self._assets.clear() self._assets.clear()
asset_names = [a["name"] for a in self.collect_assets()] asset_names = self.collect_asset_names()
completer = QtWidgets.QCompleter(asset_names) completer = QtWidgets.QCompleter(asset_names)
self._assets.setCompleter(completer) self._assets.setCompleter(completer)
@ -165,8 +166,14 @@ class App(QtWidgets.QWidget):
items = glob.glob("{}/*.comp".format(directory)) items = glob.glob("{}/*.comp".format(directory))
return items return items
def collect_assets(self): def collect_asset_names(self):
return list(legacy_io.find({"type": "asset"}, {"name": True})) project_name = legacy_io.active_project()
asset_docs = get_assets(project_name, fields=["name"])
asset_names = {
asset_doc["name"]
for asset_doc in asset_docs
}
return list(asset_names)
def populate_comp_box(self, files): def populate_comp_box(self, files):
"""Ensure we display the filename only but the path is stored as well """Ensure we display the filename only but the path is stored as well


@ -19,8 +19,9 @@ from openpype.client import (
get_last_versions, get_last_versions,
get_representations, get_representations,
) )
from openpype.pipeline import legacy_io from openpype.settings import get_anatomy_settings
from openpype.api import (Logger, Anatomy, get_anatomy_settings) from openpype.pipeline import legacy_io, Anatomy
from openpype.api import Logger
from . import tags from . import tags
try: try:


@ -5,8 +5,7 @@ import husdoutputprocessors.base as base
import colorbleed.usdlib as usdlib import colorbleed.usdlib as usdlib
from openpype.client import get_asset_by_name from openpype.client import get_asset_by_name
from openpype.api import Anatomy from openpype.pipeline import legacy_io, Anatomy
from openpype.pipeline import legacy_io
class AvalonURIOutputProcessor(base.OutputProcessorBase): class AvalonURIOutputProcessor(base.OutputProcessorBase):


@ -5,11 +5,11 @@ Anything that isn't defined here is INTERNAL and unreliable for external use.
""" """
from .pipeline import ( from .pipeline import (
install,
uninstall, uninstall,
ls, ls,
containerise, containerise,
MayaHost,
) )
from .plugin import ( from .plugin import (
Creator, Creator,
@ -40,11 +40,11 @@ from .lib import (
__all__ = [ __all__ = [
"install",
"uninstall", "uninstall",
"ls", "ls",
"containerise", "containerise",
"MayaHost",
"Creator", "Creator",
"Loader", "Loader",


@ -1908,7 +1908,7 @@ def iter_parents(node):
""" """
while True: while True:
split = node.rsplit("|", 1) split = node.rsplit("|", 1)
if len(split) == 1: if len(split) == 1 or not split[0]:
return return
node = split[0] node = split[0]
@ -3213,3 +3213,209 @@ def parent_nodes(nodes, parent=None):
node[0].setParent(node[1]) node[0].setParent(node[1])
if delete_parent: if delete_parent:
pm.delete(parent_node) pm.delete(parent_node)
@contextlib.contextmanager
def maintained_time():
ct = cmds.currentTime(query=True)
try:
yield
finally:
cmds.currentTime(ct, edit=True)
def iter_visible_nodes_in_range(nodes, start, end):
"""Yield nodes that are visible in start-end frame range.
- Ignores intermediateObjects completely.
- Considers animated visibility attributes + upstream visibilities.
This is optimized for large scenes where some nodes in the parent
hierarchy might have input connections to their visibilities,
e.g. keys, driven keys or connections to other attributes.
This only steps the time to `start` if the current frame is
not inside the frame range, on the assumption that changing
the frame is cheaper than querying all visibility plugs
through MDGContext on another frame.
Args:
nodes (list): List of node names to consider.
start (int, float): Start frame.
end (int, float): End frame.
Yields:
str: Node names visible at least once inside the frame range.
These are long (full path) names, so they may be longer
than the input node names.
"""
# States we consider per node
VISIBLE = 1 # always visible
INVISIBLE = 0 # always invisible
ANIMATED = -1 # animated visibility
# Ensure integers
start = int(start)
end = int(end)
# Consider only non-intermediate dag nodes and use the "long" names.
nodes = cmds.ls(nodes, long=True, noIntermediate=True, type="dagNode")
if not nodes:
return
with maintained_time():
# Go to first frame of the range if the current time is outside
# the queried range so can directly query all visible nodes on
# that frame.
current_time = cmds.currentTime(query=True)
if not (start <= current_time <= end):
cmds.currentTime(start)
visible = cmds.ls(nodes, long=True, visible=True)
for node in visible:
yield node
if len(visible) == len(nodes) or start == end:
# All nodes are visible on the start frame (or the range is a
# single frame), so each is visible at least once in the range.
return
# For the invisible ones, check whether their visibility and/or
# any of their parents' visibility attributes are animated. If so,
# they might become visible on other frames in the range.
def memodict(f):
"""Memoization decorator for a function taking a single argument.
See: http://code.activestate.com/recipes/
578231-probably-the-fastest-memoization-decorator-in-the-/
"""
class memodict(dict):
def __missing__(self, key):
ret = self[key] = f(key)
return ret
return memodict().__getitem__
@memodict
def get_state(node):
plug = node + ".visibility"
connections = cmds.listConnections(plug,
source=True,
destination=False)
if connections:
return ANIMATED
else:
return VISIBLE if cmds.getAttr(plug) else INVISIBLE
visible = set(visible)
invisible = [node for node in nodes if node not in visible]
always_invisible = set()
# Iterate the nodes from shortest to longest name so nodes highest
# in the hierarchy are processed first. Their cached states can then
# be reused for parent queries in later iterations.
node_dependencies = dict()
for node in sorted(invisible, key=len):
state = get_state(node)
if state == INVISIBLE:
always_invisible.add(node)
continue
# If the node is not always invisible by itself, check its parents
# to see whether any of them are always invisible. Parents that are
# ANIMATED are stored as dependencies of this node.
dependencies = set()
if state == ANIMATED:
dependencies.add(node)
traversed_parents = list()
for parent in iter_parents(node):
if parent in always_invisible or get_state(parent) == INVISIBLE:
# When parent is always invisible then consider this parent,
# this node we started from and any of the parents we
# have traversed in-between to be *always invisible*
always_invisible.add(parent)
always_invisible.add(node)
always_invisible.update(traversed_parents)
break
# If we have traversed the parent before and its visibility
# depended on animated visibilities, then we can just extend
# this node's dependencies with the parent's and stop
# iterating further upwards.
parent_dependencies = node_dependencies.get(parent, None)
if parent_dependencies is not None:
dependencies.update(parent_dependencies)
break
state = get_state(parent)
if state == ANIMATED:
dependencies.add(parent)
traversed_parents.append(parent)
if node not in always_invisible and dependencies:
node_dependencies[node] = dependencies
if not node_dependencies:
return
# Now we only have to check the visibilities for nodes that have animated
# visibility dependencies upstream. The fastest way to check these
# visibility attributes across different frames is with Python api 2.0
# so we do that.
@memodict
def get_visibility_mplug(node):
"""Return api 2.0 MPlug with cached memoize decorator"""
sel = om.MSelectionList()
sel.add(node)
dag = sel.getDagPath(0)
return om.MFnDagNode(dag).findPlug("visibility", True)
@contextlib.contextmanager
def dgcontext(mtime):
"""MDGContext context manager"""
context = om.MDGContext(mtime)
try:
previous = context.makeCurrent()
yield context
finally:
previous.makeCurrent()
# Skip the first frame, which was already used to check overall
# visibilities, and iterate to end + 1 to include the end frame.
scene_units = om.MTime.uiUnit()
for frame in range(start + 1, end + 1):
mtime = om.MTime(frame, unit=scene_units)
# Build little cache so we don't query the same MPlug's value
# again if it was checked on this frame and also is a dependency
# for another node
frame_visibilities = {}
with dgcontext(mtime) as context:
for node, dependencies in list(node_dependencies.items()):
for dependency in dependencies:
dependency_visible = frame_visibilities.get(dependency,
None)
if dependency_visible is None:
mplug = get_visibility_mplug(dependency)
dependency_visible = mplug.asBool(context)
frame_visibilities[dependency] = dependency_visible
if not dependency_visible:
# One dependency is not visible, thus the
# node is not visible.
break
else:
# All dependencies are visible.
yield node
# Remove node with dependencies for next frame iterations
# because it was visible at least once.
node_dependencies.pop(node)
# If there are no more nodes to process, stop iterating frames.
if not node_dependencies:
break


@ -1,13 +1,15 @@
import os import os
import sys
import errno import errno
import logging import logging
import contextlib
from maya import utils, cmds, OpenMaya from maya import utils, cmds, OpenMaya
import maya.api.OpenMaya as om import maya.api.OpenMaya as om
import pyblish.api import pyblish.api
from openpype.settings import get_project_settings
from openpype.host import HostBase, IWorkfileHost, ILoadHost
import openpype.hosts.maya import openpype.hosts.maya
from openpype.tools.utils import host_tools from openpype.tools.utils import host_tools
from openpype.lib import ( from openpype.lib import (
@ -28,6 +30,14 @@ from openpype.pipeline import (
) )
from openpype.hosts.maya.lib import copy_workspace_mel from openpype.hosts.maya.lib import copy_workspace_mel
from . import menu, lib from . import menu, lib
from .workio import (
open_file,
save_file,
file_extensions,
has_unsaved_changes,
work_root,
current_file
)
log = logging.getLogger("openpype.hosts.maya") log = logging.getLogger("openpype.hosts.maya")
@ -40,49 +50,121 @@ INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
AVALON_CONTAINERS = ":AVALON_CONTAINERS" AVALON_CONTAINERS = ":AVALON_CONTAINERS"
self = sys.modules[__name__]
self._ignore_lock = False
self._events = {}
class MayaHost(HostBase, IWorkfileHost, ILoadHost):
name = "maya"
def install(): def __init__(self):
from openpype.settings import get_project_settings super(MayaHost, self).__init__()
self._op_events = {}
project_settings = get_project_settings(os.getenv("AVALON_PROJECT")) def install(self):
# process path mapping project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
dirmap_processor = MayaDirmap("maya", project_settings) # process path mapping
dirmap_processor.process_dirmap() dirmap_processor = MayaDirmap("maya", project_settings)
dirmap_processor.process_dirmap()
pyblish.api.register_plugin_path(PUBLISH_PATH) pyblish.api.register_plugin_path(PUBLISH_PATH)
pyblish.api.register_host("mayabatch") pyblish.api.register_host("mayabatch")
pyblish.api.register_host("mayapy") pyblish.api.register_host("mayapy")
pyblish.api.register_host("maya") pyblish.api.register_host("maya")
register_loader_plugin_path(LOAD_PATH) register_loader_plugin_path(LOAD_PATH)
register_creator_plugin_path(CREATE_PATH) register_creator_plugin_path(CREATE_PATH)
register_inventory_action_path(INVENTORY_PATH) register_inventory_action_path(INVENTORY_PATH)
log.info(PUBLISH_PATH) self.log.info(PUBLISH_PATH)
log.info("Installing callbacks ... ") self.log.info("Installing callbacks ... ")
register_event_callback("init", on_init) register_event_callback("init", on_init)
if lib.IS_HEADLESS: if lib.IS_HEADLESS:
log.info(("Running in headless mode, skipping Maya " self.log.info((
"save/open/new callback installation..")) "Running in headless mode, skipping Maya save/open/new"
" callback installation.."
))
return return
_set_project() _set_project()
_register_callbacks() self._register_callbacks()
menu.install() menu.install()
register_event_callback("save", on_save) register_event_callback("save", on_save)
register_event_callback("open", on_open) register_event_callback("open", on_open)
register_event_callback("new", on_new) register_event_callback("new", on_new)
register_event_callback("before.save", on_before_save) register_event_callback("before.save", on_before_save)
register_event_callback("taskChanged", on_task_changed) register_event_callback("taskChanged", on_task_changed)
register_event_callback("workfile.save.before", before_workfile_save) register_event_callback("workfile.save.before", before_workfile_save)
def open_workfile(self, filepath):
return open_file(filepath)
def save_workfile(self, filepath=None):
return save_file(filepath)
def work_root(self, session):
return work_root(session)
def get_current_workfile(self):
return current_file()
def workfile_has_unsaved_changes(self):
return has_unsaved_changes()
def get_workfile_extensions(self):
return file_extensions()
def get_containers(self):
return ls()
@contextlib.contextmanager
def maintained_selection(self):
with lib.maintained_selection():
yield
def _register_callbacks(self):
for handler, event in self._op_events.copy().items():
if event is None:
continue
try:
OpenMaya.MMessage.removeCallback(event)
self._op_events[handler] = None
except RuntimeError as exc:
self.log.info(exc)
self._op_events[_on_scene_save] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kBeforeSave, _on_scene_save
)
self._op_events[_before_scene_save] = (
OpenMaya.MSceneMessage.addCheckCallback(
OpenMaya.MSceneMessage.kBeforeSaveCheck,
_before_scene_save
)
)
self._op_events[_on_scene_new] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterNew, _on_scene_new
)
self._op_events[_on_maya_initialized] = (
OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kMayaInitialized,
_on_maya_initialized
)
)
self._op_events[_on_scene_open] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterOpen, _on_scene_open
)
self.log.info("Installed event handler _on_scene_save..")
self.log.info("Installed event handler _before_scene_save..")
self.log.info("Installed event handler _on_scene_new..")
self.log.info("Installed event handler _on_maya_initialized..")
self.log.info("Installed event handler _on_scene_open..")
def _set_project(): def _set_project():
@ -106,44 +188,6 @@ def _set_project():
cmds.workspace(workdir, openWorkspace=True) cmds.workspace(workdir, openWorkspace=True)
def _register_callbacks():
for handler, event in self._events.copy().items():
if event is None:
continue
try:
OpenMaya.MMessage.removeCallback(event)
self._events[handler] = None
except RuntimeError as e:
log.info(e)
self._events[_on_scene_save] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kBeforeSave, _on_scene_save
)
self._events[_before_scene_save] = OpenMaya.MSceneMessage.addCheckCallback(
OpenMaya.MSceneMessage.kBeforeSaveCheck, _before_scene_save
)
self._events[_on_scene_new] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterNew, _on_scene_new
)
self._events[_on_maya_initialized] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kMayaInitialized, _on_maya_initialized
)
self._events[_on_scene_open] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterOpen, _on_scene_open
)
log.info("Installed event handler _on_scene_save..")
log.info("Installed event handler _before_scene_save..")
log.info("Installed event handler _on_scene_new..")
log.info("Installed event handler _on_maya_initialized..")
log.info("Installed event handler _on_scene_open..")
def _on_maya_initialized(*args): def _on_maya_initialized(*args):
emit_event("init") emit_event("init")
@ -475,7 +519,6 @@ def on_task_changed():
workdir = legacy_io.Session["AVALON_WORKDIR"] workdir = legacy_io.Session["AVALON_WORKDIR"]
if os.path.exists(workdir): if os.path.exists(workdir):
log.info("Updating Maya workspace for task change to %s", workdir) log.info("Updating Maya workspace for task change to %s", workdir)
_set_project() _set_project()
# Set Maya fileDialog's start-dir to /scenes # Set Maya fileDialog's start-dir to /scenes


@ -9,8 +9,8 @@ from openpype.pipeline import (
LoaderPlugin, LoaderPlugin,
get_representation_path, get_representation_path,
AVALON_CONTAINER_ID, AVALON_CONTAINER_ID,
Anatomy,
) )
from openpype.api import Anatomy
from openpype.settings import get_project_settings from openpype.settings import get_project_settings
from .pipeline import containerise from .pipeline import containerise
from . import lib from . import lib


@ -296,12 +296,9 @@ def update_package_version(container, version):
assert current_representation is not None, "This is a bug" assert current_representation is not None, "This is a bug"
repre_parents = get_representation_parents( version_doc, subset_doc, asset_doc, project_doc = (
project_name, current_representation get_representation_parents(project_name, current_representation)
) )
version_doc = subset_doc = asset_doc = project_doc = None
if repre_parents:
version_doc, subset_doc, asset_doc, project_doc = repre_parents
if version == -1: if version == -1:
new_version = get_last_version_by_subset_id( new_version = get_last_version_by_subset_id(


@ -15,6 +15,8 @@ class CreateReview(plugin.Creator):
keepImages = False keepImages = False
isolate = False isolate = False
imagePlane = True imagePlane = True
Width = 0
Height = 0
transparency = [ transparency = [
"preset", "preset",
"simple", "simple",
@ -33,6 +35,8 @@ class CreateReview(plugin.Creator):
for key, value in animation_data.items(): for key, value in animation_data.items():
data[key] = value data[key] = value
data["review_width"] = self.Width
data["review_height"] = self.Height
data["isolate"] = self.isolate data["isolate"] = self.isolate
data["keepImages"] = self.keepImages data["keepImages"] = self.keepImages
data["imagePlane"] = self.imagePlane data["imagePlane"] = self.imagePlane


@ -0,0 +1,139 @@
import os
from openpype.api import get_project_settings
from openpype.pipeline import (
load,
get_representation_path
)
# TODO aiVolume doesn't automatically set velocity fps correctly, set manually?
class LoadVDBtoArnold(load.LoaderPlugin):
"""Load OpenVDB for Arnold in aiVolume"""
families = ["vdbcache"]
representations = ["vdb"]
label = "Load VDB to Arnold"
icon = "cloud"
color = "orange"
def load(self, context, name, namespace, data):
from maya import cmds
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
try:
family = context["representation"]["context"]["family"]
except KeyError:
family = "vdbcache"
# Check if the Arnold plug-in (mtoa) is available
try:
cmds.loadPlugin("mtoa", quiet=True)
except Exception as exc:
self.log.error("Encountered exception:\n%s" % exc)
return
asset = context['asset']
asset_name = asset["name"]
namespace = namespace or unique_namespace(
asset_name + "_",
prefix="_" if asset_name[0].isdigit() else "",
suffix="_",
)
# Root group
label = "{}:{}".format(namespace, name)
root = cmds.group(name=label, empty=True)
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
c = colors.get(family)
if c is not None:
cmds.setAttr(root + ".useOutlinerColor", 1)
cmds.setAttr(root + ".outlinerColor",
(float(c[0]) / 255),
(float(c[1]) / 255),
(float(c[2]) / 255)
)
# Create aiVolume
grid_node = cmds.createNode("aiVolume",
name="{}Shape".format(root),
parent=root)
self._set_path(grid_node,
path=self.fname,
representation=context["representation"])
# Lock the shape node so the user can't delete the transform/shape
# as if it was referenced
cmds.lockNode(grid_node, lock=True)
nodes = [root, grid_node]
self[:] = nodes
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, representation):
from maya import cmds
path = get_representation_path(representation)
# Find the aiVolume node
members = cmds.sets(container['objectName'], query=True)
grid_nodes = cmds.ls(members, type="aiVolume", long=True)
assert len(grid_nodes) == 1, "This is a bug"
# Update the aiVolume path
self._set_path(grid_nodes[0], path=path, representation=representation)
# Update container representation
cmds.setAttr(container["objectName"] + ".representation",
str(representation["_id"]),
type="string")
def switch(self, container, representation):
self.update(container, representation)
def remove(self, container):
from maya import cmds
# Get all members of the avalon container, ensure they are unlocked
# and delete everything
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass
@staticmethod
def _set_path(grid_node,
path,
representation):
"""Apply the settings for the VDB path to the aiVolume node"""
from maya import cmds
if not os.path.exists(path):
raise RuntimeError("Path does not exist: %s" % path)
is_sequence = bool(representation["context"].get("frame"))
cmds.setAttr(grid_node + ".useFrameExtension", is_sequence)
# Set file path
cmds.setAttr(grid_node + ".filename", path, type="string")


@ -1,11 +1,21 @@
import os import os
from openpype.api import get_project_settings from openpype.api import get_project_settings
from openpype.pipeline import load from openpype.pipeline import (
load,
get_representation_path
)
class LoadVDBtoRedShift(load.LoaderPlugin): class LoadVDBtoRedShift(load.LoaderPlugin):
"""Load OpenVDB in a Redshift Volume Shape""" """Load OpenVDB in a Redshift Volume Shape
Note that the RedshiftVolumeShape is created without a RedshiftVolume
shader assigned. To get the Redshift volume to render correctly assign
a RedshiftVolume shader (in the Hypershade) and set the density, scatter
and emission channels to the channel names of the volumes in the VDB file.
"""
families = ["vdbcache"] families = ["vdbcache"]
representations = ["vdb"] representations = ["vdb"]
@ -55,7 +65,7 @@ class LoadVDBtoRedShift(load.LoaderPlugin):
# Root group # Root group
label = "{}:{}".format(namespace, name) label = "{}:{}".format(namespace, name)
root = cmds.group(name=label, empty=True) root = cmds.createNode("transform", name=label)
settings = get_project_settings(os.environ['AVALON_PROJECT']) settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors'] colors = settings['maya']['load']['colors']
@ -74,9 +84,9 @@ class LoadVDBtoRedShift(load.LoaderPlugin):
name="{}RVSShape".format(label), name="{}RVSShape".format(label),
parent=root) parent=root)
cmds.setAttr("{}.fileName".format(volume_node), self._set_path(volume_node,
self.fname, path=self.fname,
type="string") representation=context["representation"])
nodes = [root, volume_node] nodes = [root, volume_node]
self[:] = nodes self[:] = nodes
@ -87,3 +97,56 @@ class LoadVDBtoRedShift(load.LoaderPlugin):
nodes=nodes, nodes=nodes,
context=context, context=context,
loader=self.__class__.__name__) loader=self.__class__.__name__)
def update(self, container, representation):
from maya import cmds
path = get_representation_path(representation)
# Find the RedshiftVolumeShape node
members = cmds.sets(container['objectName'], query=True)
grid_nodes = cmds.ls(members, type="RedshiftVolumeShape", long=True)
assert len(grid_nodes) == 1, "This is a bug"
# Update the RedshiftVolumeShape path
self._set_path(grid_nodes[0], path=path, representation=representation)
# Update container representation
cmds.setAttr(container["objectName"] + ".representation",
str(representation["_id"]),
type="string")
def remove(self, container):
from maya import cmds
# Get all members of the avalon container, ensure they are unlocked
# and delete everything
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass
def switch(self, container, representation):
self.update(container, representation)
@staticmethod
def _set_path(grid_node,
path,
representation):
"""Apply the settings for the VDB path to the RedshiftVolumeShape"""
from maya import cmds
if not os.path.exists(path):
raise RuntimeError("Path does not exist: %s" % path)
is_sequence = bool(representation["context"].get("frame"))
cmds.setAttr(grid_node + ".useFrameExtension", is_sequence)
# Set file path
cmds.setAttr(grid_node + ".fileName", path, type="string")


@ -71,6 +71,8 @@ class CollectReview(pyblish.api.InstancePlugin):
data['handles'] = instance.data.get('handles', None) data['handles'] = instance.data.get('handles', None)
data['step'] = instance.data['step'] data['step'] = instance.data['step']
data['fps'] = instance.data['fps'] data['fps'] = instance.data['fps']
data['review_width'] = instance.data['review_width']
data['review_height'] = instance.data['review_height']
data["isolate"] = instance.data["isolate"] data["isolate"] = instance.data["isolate"]
cmds.setAttr(str(instance) + '.active', 1) cmds.setAttr(str(instance) + '.active', 1)
self.log.debug('data {}'.format(instance.context[i].data)) self.log.debug('data {}'.format(instance.context[i].data))


@ -6,7 +6,8 @@ import openpype.api
from openpype.hosts.maya.api.lib import ( from openpype.hosts.maya.api.lib import (
extract_alembic, extract_alembic,
suspended_refresh, suspended_refresh,
maintained_selection maintained_selection,
iter_visible_nodes_in_range
) )
@ -76,6 +77,16 @@ class ExtractAnimation(openpype.api.Extractor):
# Since Maya 2017 alembic supports multiple uv sets - write them. # Since Maya 2017 alembic supports multiple uv sets - write them.
options["writeUVSets"] = True options["writeUVSets"] = True
if instance.data.get("visibleOnly", False):
# If we only want to include nodes that are visible in the frame
# range then we need to do our own check. Alembic's `visibleOnly`
# flag does not filter out those that are only hidden on some
# frames as it counts "animated" or "connected" visibilities as
# if it's always visible.
nodes = list(iter_visible_nodes_in_range(nodes,
start=start,
end=end))
with suspended_refresh(): with suspended_refresh():
with maintained_selection(): with maintained_selection():
cmds.select(nodes, noExpand=True) cmds.select(nodes, noExpand=True)


@ -30,7 +30,7 @@ class ExtractCameraAlembic(openpype.api.Extractor):
# get cameras # get cameras
members = instance.data['setMembers'] members = instance.data['setMembers']
cameras = cmds.ls(members, leaf=True, shapes=True, long=True, cameras = cmds.ls(members, leaf=True, long=True,
dag=True, type="camera") dag=True, type="camera")
# validate required settings # validate required settings
@ -61,10 +61,30 @@ class ExtractCameraAlembic(openpype.api.Extractor):
if bake_to_worldspace: if bake_to_worldspace:
job_str += ' -worldSpace' job_str += ' -worldSpace'
for member in member_shapes:
self.log.info(f"processing {member}") # if baked, drop the camera hierarchy to maintain
# clean output and backwards compatibility
camera_root = cmds.listRelatives(
camera, parent=True, fullPath=True)[0]
job_str += ' -root {0}'.format(camera_root)
for member in members:
descendants = cmds.listRelatives(member,
allDescendents=True,
fullPath=True) or []
shapes = cmds.ls(descendants, shapes=True,
noIntermediate=True, long=True)
cameras = cmds.ls(shapes, type="camera", long=True)
if cameras:
if not set(shapes) - set(cameras):
continue
self.log.warning((
"Camera hierarchy contains additional geometry. "
"Extraction will fail.")
)
transform = cmds.listRelatives( transform = cmds.listRelatives(
member, parent=True, fullPath=True)[0] member, parent=True, fullPath=True)
transform = transform[0] if transform else member
job_str += ' -root {0}'.format(transform) job_str += ' -root {0}'.format(transform)
job_str += ' -file "{0}"'.format(path) job_str += ' -file "{0}"'.format(path)
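The clean-hierarchy test in the hunk above reduces to a set difference: if every shape under a member is a camera shape, the hierarchy is clean; any remaining shape is extra geometry. A minimal pure-Python sketch (the node names are hypothetical, and the helper is illustrative rather than part of the plugin):

```python
def has_extra_geometry(shapes, cameras):
    """True if the hierarchy contains shapes that are not camera shapes."""
    return bool(set(shapes) - set(cameras))

# A clean camera hierarchy: the only shape is the camera shape itself.
print(has_extra_geometry(["|cam|camShape"], ["|cam|camShape"]))  # False
# A dirty hierarchy: a mesh shape lives under the same root.
print(has_extra_geometry(["|cam|camShape", "|cam|geo|geoShape"],
                         ["|cam|camShape"]))  # True
```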


@ -172,18 +172,19 @@ class ExtractCameraMayaScene(openpype.api.Extractor):
dag=True, dag=True,
shapes=True, shapes=True,
long=True) long=True)
attrs = {"backgroundColorR": 0.0,
"backgroundColorG": 0.0,
"backgroundColorB": 0.0,
"overscan": 1.0}
# Fix PLN-178: Don't allow background color to be non-black # Fix PLN-178: Don't allow background color to be non-black
for cam in cmds.ls( for cam, (attr, value) in itertools.product(cmds.ls(
baked_camera_shapes, type="camera", dag=True, baked_camera_shapes, type="camera", dag=True,
shapes=True, long=True): long=True), attrs.items()):
attrs = {"backgroundColorR": 0.0, plug = "{0}.{1}".format(cam, attr)
"backgroundColorG": 0.0, unlock(plug)
"backgroundColorB": 0.0, cmds.setAttr(plug, value)
"overscan": 1.0}
for attr, value in attrs.items():
plug = "{0}.{1}".format(cam, attr)
unlock(plug)
cmds.setAttr(plug, value)
self.log.info("Performing extraction..") self.log.info("Performing extraction..")
cmds.select(cmds.ls(members, dag=True, cmds.select(cmds.ls(members, dag=True,


@ -1,6 +1,7 @@
import os import os
import glob import glob
import contextlib import contextlib
import clique import clique
import capture import capture
@ -50,8 +51,32 @@ class ExtractPlayblast(openpype.api.Extractor):
['override_viewport_options'] ['override_viewport_options']
) )
preset = lib.load_capture_preset(data=self.capture_preset) preset = lib.load_capture_preset(data=self.capture_preset)
# Grab capture presets from the project settings
capture_presets = self.capture_preset
# Set resolution variables from capture presets
width_preset = capture_presets["Resolution"]["width"]
height_preset = capture_presets["Resolution"]["height"]
# Set resolution variables from asset values
asset_data = instance.data["assetEntity"]["data"]
asset_width = asset_data.get("resolutionWidth")
asset_height = asset_data.get("resolutionHeight")
review_instance_width = instance.data.get("review_width")
review_instance_height = instance.data.get("review_height")
preset['camera'] = camera preset['camera'] = camera
# Use the review instance resolution if set; otherwise fall back
# to the capture preset resolution, and finally to the asset
# resolution.
if review_instance_width and review_instance_height:
preset['width'] = review_instance_width
preset['height'] = review_instance_height
elif width_preset and height_preset:
preset['width'] = width_preset
preset['height'] = height_preset
elif asset_width and asset_height:
preset['width'] = asset_width
preset['height'] = asset_height
preset['start_frame'] = start preset['start_frame'] = start
preset['end_frame'] = end preset['end_frame'] = end
camera_option = preset.get("camera_option", {}) camera_option = preset.get("camera_option", {})
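The resolution fallback added above is a three-step priority chain: review-instance values win, then the capture preset, then the asset entity, with zero treated as "not set". The same logic in isolation (the helper name and tuple signature are hypothetical, chosen only for this sketch):

```python
def resolve_resolution(review, preset, asset):
    """Pick (width, height) by priority; each argument is a (w, h) pair."""
    for width, height in (review, preset, asset):
        # Zero (or None) means "not set" and falls through to the next source.
        if width and height:
            return width, height
    return None

print(resolve_resolution((0, 0), (1920, 1080), (2048, 858)))      # preset wins
print(resolve_resolution((1280, 720), (1920, 1080), (2048, 858))) # review wins
```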


@ -6,7 +6,8 @@ import openpype.api
from openpype.hosts.maya.api.lib import ( from openpype.hosts.maya.api.lib import (
extract_alembic, extract_alembic,
suspended_refresh, suspended_refresh,
maintained_selection maintained_selection,
iter_visible_nodes_in_range
) )
@ -79,6 +80,16 @@ class ExtractAlembic(openpype.api.Extractor):
# Since Maya 2017 alembic supports multiple uv sets - write them. # Since Maya 2017 alembic supports multiple uv sets - write them.
options["writeUVSets"] = True options["writeUVSets"] = True
if instance.data.get("visibleOnly", False):
            # If we only want to include nodes that are visible in the
            # frame range then we need to do our own check. Alembic's
            # `visibleOnly` flag does not filter out nodes that are
            # hidden on only some frames, because it treats "animated"
            # or "connected" visibilities as always visible.
nodes = list(iter_visible_nodes_in_range(nodes,
start=start,
end=end))
with suspended_refresh(): with suspended_refresh():
with maintained_selection(): with maintained_selection():
cmds.select(nodes, noExpand=True) cmds.select(nodes, noExpand=True)

View file

@ -60,7 +60,29 @@ class ExtractThumbnail(openpype.api.Extractor):
"overscan": 1.0, "overscan": 1.0,
"depthOfField": cmds.getAttr("{0}.depthOfField".format(camera)), "depthOfField": cmds.getAttr("{0}.depthOfField".format(camera)),
} }
capture_presets = capture_preset
# Set resolution variables from capture presets
width_preset = capture_presets["Resolution"]["width"]
height_preset = capture_presets["Resolution"]["height"]
# Set resolution variables from asset values
asset_data = instance.data["assetEntity"]["data"]
asset_width = asset_data.get("resolutionWidth")
asset_height = asset_data.get("resolutionHeight")
review_instance_width = instance.data.get("review_width")
review_instance_height = instance.data.get("review_height")
        # Prefer the review instance resolution if set; otherwise fall
        # back to the project preset resolution and finally to the
        # asset resolution. Zero values count as unset.
if review_instance_width and review_instance_height:
preset['width'] = review_instance_width
preset['height'] = review_instance_height
elif width_preset and height_preset:
preset['width'] = width_preset
preset['height'] = height_preset
elif asset_width and asset_height:
preset['width'] = asset_width
preset['height'] = asset_height
stagingDir = self.staging_dir(instance) stagingDir = self.staging_dir(instance)
filename = "{0}".format(instance.name) filename = "{0}".format(instance.name)
path = os.path.join(stagingDir, filename) path = os.path.join(stagingDir, filename)

View file

@ -51,18 +51,7 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
raise RuntimeError("No cameras found in empty instance.") raise RuntimeError("No cameras found in empty instance.")
if not cls.validate_shapes: if not cls.validate_shapes:
cls.log.info("Not validating shapes in the content.") cls.log.info("not validating shapes in the content")
for member in members:
parents = cmds.ls(member, long=True)[0].split("|")[1:-1]
parents_long_named = [
"|".join(parents[:i]) for i in range(1, 1 + len(parents))
]
if cameras[0] in parents_long_named:
cls.log.error(
"{} is parented under camera {}".format(
member, cameras[0]))
invalid.extend(member)
return invalid return invalid
# non-camera shapes # non-camera shapes

View file

@ -0,0 +1,51 @@
import pyblish.api
import openpype.api
from openpype.hosts.maya.api.lib import iter_visible_nodes_in_range
import openpype.hosts.maya.api.action
class ValidateAlembicVisibleOnly(pyblish.api.InstancePlugin):
"""Validates at least a single node is visible in frame range.
This validation only validates if the `visibleOnly` flag is enabled
on the instance - otherwise the validation is skipped.
"""
order = openpype.api.ValidateContentsOrder + 0.05
label = "Alembic Visible Only"
hosts = ["maya"]
families = ["pointcache", "animation"]
actions = [openpype.hosts.maya.api.action.SelectInvalidAction]
def process(self, instance):
if not instance.data.get("visibleOnly", False):
self.log.debug("Visible only is disabled. Validation skipped..")
return
invalid = self.get_invalid(instance)
if invalid:
start, end = self.get_frame_range(instance)
raise RuntimeError("No visible nodes found in "
"frame range {}-{}.".format(start, end))
@classmethod
def get_invalid(cls, instance):
if instance.data["family"] == "animation":
# Special behavior to use the nodes in out_SET
nodes = instance.data["out_hierarchy"]
else:
nodes = instance[:]
start, end = cls.get_frame_range(instance)
if not any(iter_visible_nodes_in_range(nodes, start, end)):
# Return the nodes we have considered so the user can identify
# them with the select invalid action
return nodes
@staticmethod
def get_frame_range(instance):
data = instance.data
return data["frameStartHandle"], data["frameEndHandle"]

View file

@ -1,10 +1,11 @@
import os import os
from openpype.api import get_project_settings from openpype.api import get_project_settings
from openpype.pipeline import install_host from openpype.pipeline import install_host
from openpype.hosts.maya import api from openpype.hosts.maya.api import MayaHost
from maya import cmds from maya import cmds
install_host(api) host = MayaHost()
install_host(host)
print("starting OpenPype usersetup") print("starting OpenPype usersetup")

View file

@ -20,21 +20,23 @@ from openpype.client import (
) )
from openpype.api import ( from openpype.api import (
Logger, Logger,
Anatomy,
BuildWorkfile, BuildWorkfile,
get_version_from_path, get_version_from_path,
get_anatomy_settings,
get_workdir_data, get_workdir_data,
get_asset, get_asset,
get_current_project_settings, get_current_project_settings,
) )
from openpype.tools.utils import host_tools from openpype.tools.utils import host_tools
from openpype.lib.path_tools import HostDirmap from openpype.lib.path_tools import HostDirmap
from openpype.settings import get_project_settings from openpype.settings import (
get_project_settings,
get_anatomy_settings,
)
from openpype.modules import ModulesManager from openpype.modules import ModulesManager
from openpype.pipeline import ( from openpype.pipeline import (
discover_legacy_creator_plugins, discover_legacy_creator_plugins,
legacy_io, legacy_io,
Anatomy,
) )
from . import gizmo_menu from . import gizmo_menu

View file

@ -104,7 +104,10 @@ class ExtractReviewDataMov(openpype.api.Extractor):
self, instance, o_name, o_data["extension"], self, instance, o_name, o_data["extension"],
multiple_presets) multiple_presets)
if "render.farm" in families: if (
"render.farm" in families or
"prerender.farm" in families
):
if "review" in instance.data["families"]: if "review" in instance.data["families"]:
instance.data["families"].remove("review") instance.data["families"].remove("review")

View file

@ -4,6 +4,7 @@ import nuke
import copy import copy
import pyblish.api import pyblish.api
import six
import openpype import openpype
from openpype.hosts.nuke.api import ( from openpype.hosts.nuke.api import (
@ -12,7 +13,6 @@ from openpype.hosts.nuke.api import (
get_view_process_node get_view_process_node
) )
class ExtractSlateFrame(openpype.api.Extractor): class ExtractSlateFrame(openpype.api.Extractor):
"""Extracts movie and thumbnail with baked in luts """Extracts movie and thumbnail with baked in luts
@ -236,6 +236,48 @@ class ExtractSlateFrame(openpype.api.Extractor):
int(slate_first_frame) int(slate_first_frame)
) )
# Add file to representation files
# - get write node
write_node = instance.data["writeNode"]
# - evaluate filepaths for first frame and slate frame
first_filename = os.path.basename(
write_node["file"].evaluate(first_frame))
slate_filename = os.path.basename(
write_node["file"].evaluate(slate_first_frame))
# Find matching representation based on first filename
matching_repre = None
is_sequence = None
for repre in instance.data["representations"]:
files = repre["files"]
if (
not isinstance(files, six.string_types)
and first_filename in files
):
matching_repre = repre
is_sequence = True
break
elif files == first_filename:
matching_repre = repre
is_sequence = False
break
if not matching_repre:
            self.log.info((
                "Matching representation was not found."
                " Representation files were not filled with slate."
            ))
return
# Add frame to matching representation files
if not is_sequence:
matching_repre["files"] = [first_filename, slate_filename]
elif slate_filename not in matching_repre["files"]:
matching_repre["files"].insert(0, slate_filename)
self.log.warning("Added slate frame to representation files")
def add_comment_slate_node(self, instance, node): def add_comment_slate_node(self, instance, node):
comment = instance.context.data.get("comment") comment = instance.context.data.get("comment")
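The single-file versus sequence handling above can be mirrored in isolation (hypothetical helper, with plain `str` standing in for `six.string_types`):

```python
def files_with_slate(files, first_filename, slate_filename):
    """Return representation files with the slate frame included.

    A single-file representation becomes a two-item list; a frame
    sequence gets the slate frame prepended once.
    """
    if isinstance(files, str):
        if files == first_filename:
            return [first_filename, slate_filename]
        return files
    if slate_filename not in files:
        return [slate_filename] + list(files)
    return list(files)
```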

View file

@ -35,6 +35,7 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
if node is None: if node is None:
return return
instance.data["writeNode"] = node
self.log.debug("checking instance: {}".format(instance)) self.log.debug("checking instance: {}".format(instance))
# Determine defined file type # Determine defined file type

View file

@ -1,6 +1,10 @@
from copy import deepcopy from copy import deepcopy
from importlib import reload from importlib import reload
from openpype.client import (
get_version_by_id,
get_last_version_by_subset_id,
)
from openpype.hosts import resolve from openpype.hosts import resolve
from openpype.pipeline import ( from openpype.pipeline import (
get_representation_path, get_representation_path,
@ -96,10 +100,8 @@ class LoadClip(resolve.TimelineItemLoader):
namespace = container['namespace'] namespace = container['namespace']
timeline_item_data = resolve.get_pype_timeline_item_by_name(namespace) timeline_item_data = resolve.get_pype_timeline_item_by_name(namespace)
timeline_item = timeline_item_data["clip"]["item"] timeline_item = timeline_item_data["clip"]["item"]
version = legacy_io.find_one({ project_name = legacy_io.active_project()
"type": "version", version = get_version_by_id(project_name, representation["parent"])
"_id": representation["parent"]
})
version_data = version.get("data", {}) version_data = version.get("data", {})
version_name = version.get("name", None) version_name = version.get("name", None)
colorspace = version_data.get("colorspace", None) colorspace = version_data.get("colorspace", None)
@ -138,19 +140,22 @@ class LoadClip(resolve.TimelineItemLoader):
@classmethod @classmethod
def set_item_color(cls, timeline_item, version): def set_item_color(cls, timeline_item, version):
# define version name # define version name
version_name = version.get("name", None) version_name = version.get("name", None)
# get all versions in list # get all versions in list
versions = legacy_io.find({ project_name = legacy_io.active_project()
"type": "version", last_version_doc = get_last_version_by_subset_id(
"parent": version["parent"] project_name,
}).distinct('name') version["parent"],
fields=["name"]
max_version = max(versions) )
if last_version_doc:
last_version = last_version_doc["name"]
else:
last_version = None
# set clip colour # set clip colour
if version_name == max_version: if version_name == last_version:
timeline_item.SetClipColor(cls.clip_color_last) timeline_item.SetClipColor(cls.clip_color_last)
else: else:
timeline_item.SetClipColor(cls.clip_color) timeline_item.SetClipColor(cls.clip_color)

View file

@ -10,8 +10,8 @@ from openpype.lib import (
from openpype.pipeline import ( from openpype.pipeline import (
registered_host, registered_host,
legacy_io, legacy_io,
Anatomy,
) )
from openpype.api import Anatomy
from openpype.hosts.tvpaint.api import lib, pipeline, plugin from openpype.hosts.tvpaint.api import lib, pipeline, plugin

View file

@ -2,7 +2,7 @@ import os
import unreal import unreal
from openpype.api import Anatomy from openpype.pipeline import Anatomy
from openpype.hosts.unreal.api import pipeline from openpype.hosts.unreal.api import pipeline

View file

@ -3,7 +3,7 @@ from pathlib import Path
import unreal import unreal
from openpype.api import Anatomy from openpype.pipeline import Anatomy
from openpype.hosts.unreal.api import pipeline from openpype.hosts.unreal.api import pipeline
import pyblish.api import pyblish.api

File diff suppressed because it is too large Load diff

View file

@ -20,10 +20,7 @@ from openpype.settings.constants import (
METADATA_KEYS, METADATA_KEYS,
M_DYNAMIC_KEY_LABEL M_DYNAMIC_KEY_LABEL
) )
from . import ( from . import PypeLogger
PypeLogger,
Anatomy
)
from .profiles_filtering import filter_profiles from .profiles_filtering import filter_profiles
from .local_settings import get_openpype_username from .local_settings import get_openpype_username
from .avalon_context import ( from .avalon_context import (
@ -1305,7 +1302,7 @@ def get_app_environments_for_context(
dict: Environments for passed context and application. dict: Environments for passed context and application.
""" """
from openpype.pipeline import AvalonMongoDB from openpype.pipeline import AvalonMongoDB, Anatomy
# Avalon database connection # Avalon database connection
dbcon = AvalonMongoDB() dbcon = AvalonMongoDB()

View file

@ -14,7 +14,6 @@ from openpype.settings import (
get_project_settings, get_project_settings,
get_system_settings get_system_settings
) )
from .anatomy import Anatomy
from .profiles_filtering import filter_profiles from .profiles_filtering import filter_profiles
from .events import emit_event from .events import emit_event
from .path_templates import StringTemplate from .path_templates import StringTemplate
@ -593,6 +592,7 @@ def get_workdir_with_workdir_data(
)) ))
if not anatomy: if not anatomy:
from openpype.pipeline import Anatomy
anatomy = Anatomy(project_name) anatomy = Anatomy(project_name)
if not template_key: if not template_key:
@ -604,7 +604,10 @@ def get_workdir_with_workdir_data(
anatomy_filled = anatomy.format(workdir_data) anatomy_filled = anatomy.format(workdir_data)
# Output is TemplateResult object which contain useful data # Output is TemplateResult object which contain useful data
return anatomy_filled[template_key]["folder"] path = anatomy_filled[template_key]["folder"]
if path:
path = os.path.normpath(path)
return path
def get_workdir( def get_workdir(
@ -635,6 +638,7 @@ def get_workdir(
TemplateResult: Workdir path. TemplateResult: Workdir path.
""" """
if not anatomy: if not anatomy:
from openpype.pipeline import Anatomy
anatomy = Anatomy(project_doc["name"]) anatomy = Anatomy(project_doc["name"])
workdir_data = get_workdir_data( workdir_data = get_workdir_data(
@ -747,6 +751,8 @@ def compute_session_changes(
@with_pipeline_io @with_pipeline_io
def get_workdir_from_session(session=None, template_key=None): def get_workdir_from_session(session=None, template_key=None):
from openpype.pipeline import Anatomy
if session is None: if session is None:
session = legacy_io.Session session = legacy_io.Session
project_name = session["AVALON_PROJECT"] project_name = session["AVALON_PROJECT"]
@ -762,7 +768,10 @@ def get_workdir_from_session(session=None, template_key=None):
host_name, host_name,
project_name=project_name project_name=project_name
) )
return anatomy_filled[template_key]["folder"] path = anatomy_filled[template_key]["folder"]
if path:
path = os.path.normpath(path)
return path
@with_pipeline_io @with_pipeline_io
@ -853,6 +862,8 @@ def create_workfile_doc(asset_doc, task_name, filename, workdir, dbcon=None):
dbcon (AvalonMongoDB): Optionally enter avalon AvalonMongoDB object and dbcon (AvalonMongoDB): Optionally enter avalon AvalonMongoDB object and
`legacy_io` is used if not entered. `legacy_io` is used if not entered.
""" """
from openpype.pipeline import Anatomy
# Use legacy_io if dbcon is not entered # Use legacy_io if dbcon is not entered
if not dbcon: if not dbcon:
dbcon = legacy_io dbcon = legacy_io
@ -1673,6 +1684,7 @@ def _get_task_context_data_for_anatomy(
""" """
if anatomy is None: if anatomy is None:
from openpype.pipeline import Anatomy
anatomy = Anatomy(project_doc["name"]) anatomy = Anatomy(project_doc["name"])
asset_name = asset_doc["name"] asset_name = asset_doc["name"]
@ -1741,6 +1753,7 @@ def get_custom_workfile_template_by_context(
""" """
if anatomy is None: if anatomy is None:
from openpype.pipeline import Anatomy
anatomy = Anatomy(project_doc["name"]) anatomy = Anatomy(project_doc["name"])
# get project, asset, task anatomy context data # get project, asset, task anatomy context data

View file

@ -9,7 +9,6 @@ import platform
from openpype.client import get_project from openpype.client import get_project
from openpype.settings import get_project_settings from openpype.settings import get_project_settings
from .anatomy import Anatomy
from .profiles_filtering import filter_profiles from .profiles_filtering import filter_profiles
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
@ -227,6 +226,7 @@ def fill_paths(path_list, anatomy):
def create_project_folders(basic_paths, project_name): def create_project_folders(basic_paths, project_name):
from openpype.pipeline import Anatomy
anatomy = Anatomy(project_name) anatomy = Anatomy(project_name)
concat_paths = concatenate_splitted_paths(basic_paths, anatomy) concat_paths = concatenate_splitted_paths(basic_paths, anatomy)

View file

@ -5,7 +5,12 @@ from bson.objectid import ObjectId
from aiohttp.web_response import Response from aiohttp.web_response import Response
from openpype.pipeline import AvalonMongoDB from openpype.client import (
get_projects,
get_project,
get_assets,
get_asset_by_name,
)
from openpype_modules.webserver.base_routes import RestApiEndpoint from openpype_modules.webserver.base_routes import RestApiEndpoint
@ -14,19 +19,13 @@ class _RestApiEndpoint(RestApiEndpoint):
self.resource = resource self.resource = resource
super(_RestApiEndpoint, self).__init__() super(_RestApiEndpoint, self).__init__()
@property
def dbcon(self):
return self.resource.dbcon
class AvalonProjectsEndpoint(_RestApiEndpoint): class AvalonProjectsEndpoint(_RestApiEndpoint):
async def get(self) -> Response: async def get(self) -> Response:
output = [] output = [
for project_name in self.dbcon.database.collection_names(): project_doc
project_doc = self.dbcon.database[project_name].find_one({ for project_doc in get_projects()
"type": "project" ]
})
output.append(project_doc)
return Response( return Response(
status=200, status=200,
body=self.resource.encode(output), body=self.resource.encode(output),
@ -36,9 +35,7 @@ class AvalonProjectsEndpoint(_RestApiEndpoint):
class AvalonProjectEndpoint(_RestApiEndpoint): class AvalonProjectEndpoint(_RestApiEndpoint):
async def get(self, project_name) -> Response: async def get(self, project_name) -> Response:
project_doc = self.dbcon.database[project_name].find_one({ project_doc = get_project(project_name)
"type": "project"
})
if project_doc: if project_doc:
return Response( return Response(
status=200, status=200,
@ -53,9 +50,7 @@ class AvalonProjectEndpoint(_RestApiEndpoint):
class AvalonAssetsEndpoint(_RestApiEndpoint): class AvalonAssetsEndpoint(_RestApiEndpoint):
async def get(self, project_name) -> Response: async def get(self, project_name) -> Response:
asset_docs = list(self.dbcon.database[project_name].find({ asset_docs = list(get_assets(project_name))
"type": "asset"
}))
return Response( return Response(
status=200, status=200,
body=self.resource.encode(asset_docs), body=self.resource.encode(asset_docs),
@ -65,10 +60,7 @@ class AvalonAssetsEndpoint(_RestApiEndpoint):
class AvalonAssetEndpoint(_RestApiEndpoint): class AvalonAssetEndpoint(_RestApiEndpoint):
async def get(self, project_name, asset_name) -> Response: async def get(self, project_name, asset_name) -> Response:
asset_doc = self.dbcon.database[project_name].find_one({ asset_doc = get_asset_by_name(project_name, asset_name)
"type": "asset",
"name": asset_name
})
if asset_doc: if asset_doc:
return Response( return Response(
status=200, status=200,
@ -88,9 +80,6 @@ class AvalonRestApiResource:
self.module = avalon_module self.module = avalon_module
self.server_manager = server_manager self.server_manager = server_manager
self.dbcon = AvalonMongoDB()
self.dbcon.install()
self.prefix = "/avalon" self.prefix = "/avalon"
self.endpoint_defs = ( self.endpoint_defs = (

View file

@ -1,16 +1,9 @@
from openpype.api import Logger from openpype.client import get_asset_by_name
from openpype.pipeline import ( from openpype.pipeline import LauncherAction
legacy_io,
LauncherAction,
)
from openpype_modules.clockify.clockify_api import ClockifyAPI from openpype_modules.clockify.clockify_api import ClockifyAPI
log = Logger.get_logger(__name__)
class ClockifyStart(LauncherAction): class ClockifyStart(LauncherAction):
name = "clockify_start_timer" name = "clockify_start_timer"
label = "Clockify - Start Timer" label = "Clockify - Start Timer"
icon = "clockify_icon" icon = "clockify_icon"
@ -24,20 +17,19 @@ class ClockifyStart(LauncherAction):
return False return False
def process(self, session, **kwargs): def process(self, session, **kwargs):
project_name = session['AVALON_PROJECT'] project_name = session["AVALON_PROJECT"]
asset_name = session['AVALON_ASSET'] asset_name = session["AVALON_ASSET"]
task_name = session['AVALON_TASK'] task_name = session["AVALON_TASK"]
description = asset_name description = asset_name
asset = legacy_io.find_one({ asset_doc = get_asset_by_name(
'type': 'asset', project_name, asset_name, fields=["data.parents"]
'name': asset_name )
}) if asset_doc is not None:
if asset is not None: desc_items = asset_doc.get("data", {}).get("parents", [])
desc_items = asset.get('data', {}).get('parents', [])
desc_items.append(asset_name) desc_items.append(asset_name)
desc_items.append(task_name) desc_items.append(task_name)
description = '/'.join(desc_items) description = "/".join(desc_items)
project_id = self.clockapi.get_project_id(project_name) project_id = self.clockapi.get_project_id(project_name)
tag_ids = [] tag_ids = []

View file

@ -1,11 +1,6 @@
from openpype.client import get_projects, get_project
from openpype_modules.clockify.clockify_api import ClockifyAPI from openpype_modules.clockify.clockify_api import ClockifyAPI
from openpype.api import Logger from openpype.pipeline import LauncherAction
from openpype.pipeline import (
legacy_io,
LauncherAction,
)
log = Logger.get_logger(__name__)
class ClockifySync(LauncherAction): class ClockifySync(LauncherAction):
@ -22,39 +17,36 @@ class ClockifySync(LauncherAction):
return self.have_permissions return self.have_permissions
def process(self, session, **kwargs): def process(self, session, **kwargs):
project_name = session.get('AVALON_PROJECT', None) project_name = session.get("AVALON_PROJECT") or ""
projects_to_sync = [] projects_to_sync = []
if project_name.strip() == '' or project_name is None: if project_name.strip():
for project in legacy_io.projects(): projects_to_sync = [get_project(project_name)]
projects_to_sync.append(project)
else: else:
project = legacy_io.find_one({'type': 'project'}) projects_to_sync = get_projects()
projects_to_sync.append(project)
projects_info = {} projects_info = {}
for project in projects_to_sync: for project in projects_to_sync:
task_types = project['config']['tasks'].keys() task_types = project["config"]["tasks"].keys()
projects_info[project['name']] = task_types projects_info[project["name"]] = task_types
clockify_projects = self.clockapi.get_projects() clockify_projects = self.clockapi.get_projects()
for project_name, task_types in projects_info.items(): for project_name, task_types in projects_info.items():
if project_name not in clockify_projects: if project_name in clockify_projects:
response = self.clockapi.add_project(project_name) continue
if 'id' not in response:
self.log.error('Project {} can\'t be created'.format( response = self.clockapi.add_project(project_name)
project_name if "id" not in response:
)) self.log.error("Project {} can't be created".format(
continue project_name
project_id = response['id'] ))
else: continue
project_id = clockify_projects[project_name]
clockify_workspace_tags = self.clockapi.get_tags() clockify_workspace_tags = self.clockapi.get_tags()
for task_type in task_types: for task_type in task_types:
if task_type not in clockify_workspace_tags: if task_type not in clockify_workspace_tags:
response = self.clockapi.add_tag(task_type) response = self.clockapi.add_tag(task_type)
if 'id' not in response: if "id" not in response:
self.log.error('Task {} can\'t be created'.format( self.log.error('Task {} can\'t be created'.format(
task_type task_type
)) ))
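The rewritten project selection also guards against a missing session key; the `or ""` matters because calling `.strip()` on `None` would raise. A minimal sketch, with stand-ins for the `openpype.client` query functions:

```python
def projects_to_sync(project_name, get_project, get_projects):
    """A blank or missing project name means "sync all projects".

    `get_project` / `get_projects` stand in for the openpype.client
    query functions used by the action.
    """
    project_name = project_name or ""
    if project_name.strip():
        return [get_project(project_name)]
    return list(get_projects())
```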

View file

@ -710,7 +710,9 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
new_payload["JobInfo"].update(tiles_data["JobInfo"]) new_payload["JobInfo"].update(tiles_data["JobInfo"])
new_payload["PluginInfo"].update(tiles_data["PluginInfo"]) new_payload["PluginInfo"].update(tiles_data["PluginInfo"])
job_hash = hashlib.sha256("{}_{}".format(file_index, file)) self.log.info("hashing {} - {}".format(file_index, file))
job_hash = hashlib.sha256(
("{}_{}".format(file_index, file)).encode("utf-8"))
frame_jobs[frame] = job_hash.hexdigest() frame_jobs[frame] = job_hash.hexdigest()
new_payload["JobInfo"]["ExtraInfo0"] = job_hash.hexdigest() new_payload["JobInfo"]["ExtraInfo0"] = job_hash.hexdigest()
new_payload["JobInfo"]["ExtraInfo1"] = file new_payload["JobInfo"]["ExtraInfo1"] = file

View file

@ -1045,7 +1045,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
get publish_path get publish_path
Args: Args:
anatomy (pype.lib.anatomy.Anatomy): anatomy (openpype.pipeline.anatomy.Anatomy):
template_data (dict): pre-calculated collected data for process template_data (dict): pre-calculated collected data for process
asset (string): asset name asset (string): asset name
subset (string): subset name (actually group name of subset) subset (string): subset name (actually group name of subset)

View file

@ -2,11 +2,11 @@ import re
import subprocess import subprocess
from openpype.client import get_asset_by_id, get_asset_by_name from openpype.client import get_asset_by_id, get_asset_by_name
from openpype.settings import get_project_settings
from openpype.pipeline import Anatomy
from openpype_modules.ftrack.lib import BaseEvent from openpype_modules.ftrack.lib import BaseEvent
from openpype_modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY from openpype_modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY
from openpype.api import Anatomy, get_project_settings
class UserAssigmentEvent(BaseEvent): class UserAssigmentEvent(BaseEvent):
""" """

View file

@ -1,7 +1,7 @@
import os import os
import collections import collections
import copy import copy
from openpype.api import Anatomy from openpype.pipeline import Anatomy
from openpype_modules.ftrack.lib import BaseAction, statics_icon from openpype_modules.ftrack.lib import BaseAction, statics_icon

View file

@ -11,9 +11,8 @@ from openpype.client import (
get_versions, get_versions,
get_representations get_representations
) )
from openpype.api import Anatomy
from openpype.lib import StringTemplate, TemplateUnsolved from openpype.lib import StringTemplate, TemplateUnsolved
from openpype.pipeline import AvalonMongoDB from openpype.pipeline import AvalonMongoDB, Anatomy
from openpype_modules.ftrack.lib import BaseAction, statics_icon from openpype_modules.ftrack.lib import BaseAction, statics_icon

View file

@ -10,12 +10,13 @@ from openpype.client import (
get_versions, get_versions,
get_representations get_representations
) )
from openpype.api import Anatomy, config from openpype.pipeline import Anatomy
from openpype_modules.ftrack.lib import BaseAction, statics_icon from openpype_modules.ftrack.lib import BaseAction, statics_icon
from openpype_modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY from openpype_modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY
from openpype_modules.ftrack.lib.custom_attributes import ( from openpype_modules.ftrack.lib.custom_attributes import (
query_custom_attributes query_custom_attributes
) )
from openpype.lib import config
from openpype.lib.delivery import ( from openpype.lib.delivery import (
path_from_representation, path_from_representation,
get_format_dict, get_format_dict,

View file

@ -11,13 +11,13 @@ from openpype.client import (
get_project, get_project,
get_assets, get_assets,
) )
from openpype.api import get_project_settings from openpype.settings import get_project_settings
from openpype.lib import ( from openpype.lib import (
get_workfile_template_key, get_workfile_template_key,
get_workdir_data, get_workdir_data,
Anatomy,
StringTemplate, StringTemplate,
) )
from openpype.pipeline import Anatomy
from openpype_modules.ftrack.lib import BaseAction, statics_icon from openpype_modules.ftrack.lib import BaseAction, statics_icon
from openpype_modules.ftrack.lib.avalon_sync import create_chunks from openpype_modules.ftrack.lib.avalon_sync import create_chunks

View file

@ -11,10 +11,10 @@ from openpype.client import (
get_version_by_name, get_version_by_name,
get_representation_by_name get_representation_by_name
) )
from openpype.api import Anatomy
from openpype.pipeline import ( from openpype.pipeline import (
get_representation_path, get_representation_path,
AvalonMongoDB, AvalonMongoDB,
Anatomy,
) )
from openpype_modules.ftrack.lib import BaseAction, statics_icon from openpype_modules.ftrack.lib import BaseAction, statics_icon

View file

@ -14,8 +14,7 @@ from openpype.client import (
get_representations get_representations
) )
from openpype_modules.ftrack.lib import BaseAction, statics_icon from openpype_modules.ftrack.lib import BaseAction, statics_icon
from openpype.api import Anatomy from openpype.pipeline import AvalonMongoDB, Anatomy
from openpype.pipeline import AvalonMongoDB
from openpype_modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY from openpype_modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY

View file

@ -1,3 +1,4 @@
import html
from Qt import QtCore, QtWidgets from Qt import QtCore, QtWidgets
import qtawesome import qtawesome
from .models import LogModel, LogsFilterProxy from .models import LogModel, LogsFilterProxy
@ -286,7 +287,7 @@ class OutputWidget(QtWidgets.QWidget):
if level == "debug": if level == "debug":
line_f = ( line_f = (
"<font color=\"Yellow\"> -" "<font color=\"Yellow\"> -"
" <font color=\"Lime\">{{ {loggerName} }}: [" " <font color=\"Lime\">{{ {logger_name} }}: ["
" <font color=\"White\">{message}" " <font color=\"White\">{message}"
" <font color=\"Lime\">]" " <font color=\"Lime\">]"
) )
@ -299,7 +300,7 @@ class OutputWidget(QtWidgets.QWidget):
elif level == "warning": elif level == "warning":
line_f = ( line_f = (
"<font color=\"Yellow\">*** WRN:" "<font color=\"Yellow\">*** WRN:"
" <font color=\"Lime\"> >>> {{ {loggerName} }}: [" " <font color=\"Lime\"> >>> {{ {logger_name} }}: ["
" <font color=\"White\">{message}" " <font color=\"White\">{message}"
" <font color=\"Lime\">]" " <font color=\"Lime\">]"
) )
@ -307,16 +308,25 @@ class OutputWidget(QtWidgets.QWidget):
line_f = ( line_f = (
"<font color=\"Red\">!!! ERR:" "<font color=\"Red\">!!! ERR:"
" <font color=\"White\">{timestamp}" " <font color=\"White\">{timestamp}"
" <font color=\"Lime\">>>> {{ {loggerName} }}: [" " <font color=\"Lime\">>>> {{ {logger_name} }}: ["
" <font color=\"White\">{message}" " <font color=\"White\">{message}"
" <font color=\"Lime\">]" " <font color=\"Lime\">]"
) )
logger_name = log["loggerName"]
timestamp = ""
if not show_timecode:
timestamp = log["timestamp"]
message = log["message"]
exc = log.get("exception") exc = log.get("exception")
if exc: if exc:
log["message"] = exc["message"] message = exc["message"]
line = line_f.format(**log) line = line_f.format(
message=html.escape(message),
logger_name=logger_name,
timestamp=timestamp
)
if show_timecode: if show_timecode:
timestamp = log["timestamp"] timestamp = log["timestamp"]

View file

@ -4,10 +4,11 @@ import shutil
import threading import threading
import time import time
from openpype.api import Logger, Anatomy from openpype.api import Logger
from openpype.pipeline import Anatomy
from .abstract_provider import AbstractProvider from .abstract_provider import AbstractProvider
log = Logger().get_logger("SyncServer") log = Logger.get_logger("SyncServer")
class LocalDriveHandler(AbstractProvider): class LocalDriveHandler(AbstractProvider):

View file

@ -9,14 +9,12 @@ from collections import deque, defaultdict
from openpype.modules import OpenPypeModule from openpype.modules import OpenPypeModule
from openpype_interfaces import ITrayModule from openpype_interfaces import ITrayModule
from openpype.api import ( from openpype.settings import (
Anatomy,
get_project_settings, get_project_settings,
get_system_settings, get_system_settings,
get_local_site_id
) )
from openpype.lib import PypeLogger from openpype.lib import PypeLogger, get_local_site_id
from openpype.pipeline import AvalonMongoDB from openpype.pipeline import AvalonMongoDB, Anatomy
from openpype.settings.lib import ( from openpype.settings.lib import (
get_default_anatomy_settings, get_default_anatomy_settings,
get_anatomy_settings get_anatomy_settings
@ -28,7 +26,7 @@ from .providers import lib
from .utils import time_function, SyncStatus, SiteAlreadyPresentError from .utils import time_function, SyncStatus, SiteAlreadyPresentError
log = PypeLogger().get_logger("SyncServer") log = PypeLogger.get_logger("SyncServer")
class SyncServerModule(OpenPypeModule, ITrayModule): class SyncServerModule(OpenPypeModule, ITrayModule):


@ -6,6 +6,7 @@ from .constants import (
from .mongodb import ( from .mongodb import (
AvalonMongoDB, AvalonMongoDB,
) )
from .anatomy import Anatomy
from .create import ( from .create import (
BaseCreator, BaseCreator,
@ -96,6 +97,9 @@ __all__ = (
# --- MongoDB --- # --- MongoDB ---
"AvalonMongoDB", "AvalonMongoDB",
# --- Anatomy ---
"Anatomy",
# --- Create --- # --- Create ---
"BaseCreator", "BaseCreator",
"Creator", "Creator",

openpype/pipeline/anatomy.py — new file, 1257 lines

File diff suppressed because it is too large


@ -1,11 +1,9 @@
"""Core pipeline functionality""" """Core pipeline functionality"""
import os import os
import sys
import json import json
import types import types
import logging import logging
import inspect
import platform import platform
import pyblish.api import pyblish.api
@ -14,11 +12,8 @@ from pyblish.lib import MessageHandler
import openpype import openpype
from openpype.modules import load_modules, ModulesManager from openpype.modules import load_modules, ModulesManager
from openpype.settings import get_project_settings from openpype.settings import get_project_settings
from openpype.lib import ( from openpype.lib import filter_pyblish_plugins
Anatomy, from .anatomy import Anatomy
filter_pyblish_plugins,
)
from . import ( from . import (
legacy_io, legacy_io,
register_loader_plugin_path, register_loader_plugin_path,
@ -235,73 +230,10 @@ def register_host(host):
required, or browse the source code. required, or browse the source code.
""" """
signatures = {
"ls": []
}
_validate_signature(host, signatures)
_registered_host["_"] = host _registered_host["_"] = host
def _validate_signature(module, signatures):
# Required signatures for each member
missing = list()
invalid = list()
success = True
for member in signatures:
if not hasattr(module, member):
missing.append(member)
success = False
else:
attr = getattr(module, member)
if sys.version_info.major >= 3:
signature = inspect.getfullargspec(attr)[0]
else:
signature = inspect.getargspec(attr)[0]
required_signature = signatures[member]
assert isinstance(signature, list)
assert isinstance(required_signature, list)
if not all(member in signature
for member in required_signature):
invalid.append({
"member": member,
"signature": ", ".join(signature),
"required": ", ".join(required_signature)
})
success = False
if not success:
report = list()
if missing:
report.append(
"Incomplete interface for module: '%s'\n"
"Missing: %s" % (module, ", ".join(
"'%s'" % member for member in missing))
)
if invalid:
report.append(
"'%s': One or more members were found, but didn't "
"have the right argument signature." % module.__name__
)
for member in invalid:
report.append(
" Found: {member}({signature})".format(**member)
)
report.append(
" Expected: {member}({required})".format(**member)
)
raise ValueError("\n".join(report))
def registered_host(): def registered_host():
"""Return currently registered host""" """Return currently registered host"""
return _registered_host["_"] return _registered_host["_"]


@ -6,6 +6,7 @@ import inspect
from uuid import uuid4 from uuid import uuid4
from contextlib import contextmanager from contextlib import contextmanager
from openpype.host import INewPublisher
from openpype.pipeline import legacy_io from openpype.pipeline import legacy_io
from openpype.pipeline.mongodb import ( from openpype.pipeline.mongodb import (
AvalonMongoDB, AvalonMongoDB,
@ -651,12 +652,6 @@ class CreateContext:
discover_publish_plugins(bool): Discover publish plugins during reset discover_publish_plugins(bool): Discover publish plugins during reset
phase. phase.
""" """
# Methods required in host implementaion to be able create instances
# or change context data.
required_methods = (
"get_context_data",
"update_context_data"
)
def __init__( def __init__(
self, host, dbcon=None, headless=False, reset=True, self, host, dbcon=None, headless=False, reset=True,
@ -738,10 +733,10 @@ class CreateContext:
Args: Args:
host(ModuleType): Host implementaion. host(ModuleType): Host implementaion.
""" """
missing = set()
for attr_name in cls.required_methods: missing = set(
if not hasattr(host, attr_name): INewPublisher.get_missing_publish_methods(host)
missing.add(attr_name) )
return missing return missing
@property @property


@ -18,9 +18,13 @@ _database = database = None
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
def is_installed():
return module._is_installed
def install(): def install():
"""Establish a persistent connection to the database""" """Establish a persistent connection to the database"""
if module._is_installed: if is_installed():
return return
session = session_data_from_environment(context_keys=True) session = session_data_from_environment(context_keys=True)
@ -55,7 +59,7 @@ def uninstall():
def requires_install(func): def requires_install(func):
@functools.wraps(func) @functools.wraps(func)
def decorated(*args, **kwargs): def decorated(*args, **kwargs):
if not module._is_installed: if not is_installed():
install() install()
return func(*args, **kwargs) return func(*args, **kwargs)
return decorated return decorated


@ -6,13 +6,24 @@ import logging
import inspect import inspect
import numbers import numbers
import six from openpype.client import (
from bson.objectid import ObjectId get_project,
get_assets,
from openpype.lib import Anatomy get_subsets,
get_versions,
get_version_by_id,
get_last_version_by_subset_id,
get_hero_version_by_subset_id,
get_version_by_name,
get_representations,
get_representation_by_id,
get_representation_by_name,
get_representation_parents
)
from openpype.pipeline import ( from openpype.pipeline import (
schema, schema,
legacy_io, legacy_io,
Anatomy,
) )
log = logging.getLogger(__name__) log = logging.getLogger(__name__)
@ -52,13 +63,10 @@ def get_repres_contexts(representation_ids, dbcon=None):
Returns: Returns:
dict: The full representation context by representation id. dict: The full representation context by representation id.
keys are repre_id, value is dictionary with full: keys are repre_id, value is dictionary with full documents of
asset_doc asset, subset, version and representation.
version_doc
subset_doc
repre_doc
""" """
if not dbcon: if not dbcon:
dbcon = legacy_io dbcon = legacy_io
@ -66,26 +74,18 @@ def get_repres_contexts(representation_ids, dbcon=None):
if not representation_ids: if not representation_ids:
return contexts return contexts
_representation_ids = [] project_name = dbcon.active_project()
for repre_id in representation_ids: repre_docs = get_representations(project_name, representation_ids)
if isinstance(repre_id, six.string_types):
repre_id = ObjectId(repre_id)
_representation_ids.append(repre_id)
repre_docs = dbcon.find({
"type": "representation",
"_id": {"$in": _representation_ids}
})
repre_docs_by_id = {} repre_docs_by_id = {}
version_ids = set() version_ids = set()
for repre_doc in repre_docs: for repre_doc in repre_docs:
version_ids.add(repre_doc["parent"]) version_ids.add(repre_doc["parent"])
repre_docs_by_id[repre_doc["_id"]] = repre_doc repre_docs_by_id[repre_doc["_id"]] = repre_doc
version_docs = dbcon.find({ version_docs = get_versions(
"type": {"$in": ["version", "hero_version"]}, project_name, version_ids, hero=True
"_id": {"$in": list(version_ids)} )
})
version_docs_by_id = {} version_docs_by_id = {}
hero_version_docs = [] hero_version_docs = []
@ -99,10 +99,7 @@ def get_repres_contexts(representation_ids, dbcon=None):
subset_ids.add(version_doc["parent"]) subset_ids.add(version_doc["parent"])
if versions_for_hero: if versions_for_hero:
_version_docs = dbcon.find({ _version_docs = get_versions(project_name, versions_for_hero)
"type": "version",
"_id": {"$in": list(versions_for_hero)}
})
_version_data_by_id = { _version_data_by_id = {
version_doc["_id"]: version_doc["data"] version_doc["_id"]: version_doc["data"]
for version_doc in _version_docs for version_doc in _version_docs
@ -114,26 +111,20 @@ def get_repres_contexts(representation_ids, dbcon=None):
version_data = copy.deepcopy(_version_data_by_id[version_id]) version_data = copy.deepcopy(_version_data_by_id[version_id])
version_docs_by_id[hero_version_id]["data"] = version_data version_docs_by_id[hero_version_id]["data"] = version_data
subset_docs = dbcon.find({ subset_docs = get_subsets(project_name, subset_ids)
"type": "subset",
"_id": {"$in": list(subset_ids)}
})
subset_docs_by_id = {} subset_docs_by_id = {}
asset_ids = set() asset_ids = set()
for subset_doc in subset_docs: for subset_doc in subset_docs:
subset_docs_by_id[subset_doc["_id"]] = subset_doc subset_docs_by_id[subset_doc["_id"]] = subset_doc
asset_ids.add(subset_doc["parent"]) asset_ids.add(subset_doc["parent"])
asset_docs = dbcon.find({ asset_docs = get_assets(project_name, asset_ids)
"type": "asset",
"_id": {"$in": list(asset_ids)}
})
asset_docs_by_id = { asset_docs_by_id = {
asset_doc["_id"]: asset_doc asset_doc["_id"]: asset_doc
for asset_doc in asset_docs for asset_doc in asset_docs
} }
project_doc = dbcon.find_one({"type": "project"}) project_doc = get_project(project_name)
for repre_id, repre_doc in repre_docs_by_id.items(): for repre_id, repre_doc in repre_docs_by_id.items():
version_doc = version_docs_by_id[repre_doc["parent"]] version_doc = version_docs_by_id[repre_doc["parent"]]
@ -173,32 +164,21 @@ def get_subset_contexts(subset_ids, dbcon=None):
if not subset_ids: if not subset_ids:
return contexts return contexts
_subset_ids = set() project_name = dbcon.active_project()
for subset_id in subset_ids: subset_docs = get_subsets(project_name, subset_ids)
if isinstance(subset_id, six.string_types):
subset_id = ObjectId(subset_id)
_subset_ids.add(subset_id)
subset_docs = dbcon.find({
"type": "subset",
"_id": {"$in": list(_subset_ids)}
})
subset_docs_by_id = {} subset_docs_by_id = {}
asset_ids = set() asset_ids = set()
for subset_doc in subset_docs: for subset_doc in subset_docs:
subset_docs_by_id[subset_doc["_id"]] = subset_doc subset_docs_by_id[subset_doc["_id"]] = subset_doc
asset_ids.add(subset_doc["parent"]) asset_ids.add(subset_doc["parent"])
asset_docs = dbcon.find({ asset_docs = get_assets(project_name, asset_ids)
"type": "asset",
"_id": {"$in": list(asset_ids)}
})
asset_docs_by_id = { asset_docs_by_id = {
asset_doc["_id"]: asset_doc asset_doc["_id"]: asset_doc
for asset_doc in asset_docs for asset_doc in asset_docs
} }
project_doc = dbcon.find_one({"type": "project"}) project_doc = get_project(project_name)
for subset_id, subset_doc in subset_docs_by_id.items(): for subset_id, subset_doc in subset_docs_by_id.items():
asset_doc = asset_docs_by_id[subset_doc["parent"]] asset_doc = asset_docs_by_id[subset_doc["parent"]]
@ -224,16 +204,17 @@ def get_representation_context(representation):
Returns: Returns:
dict: The full representation context. dict: The full representation context.
""" """
assert representation is not None, "This is a bug" assert representation is not None, "This is a bug"
if isinstance(representation, (six.string_types, ObjectId)): if not isinstance(representation, dict):
representation = legacy_io.find_one( representation = get_representation_by_id(representation)
{"_id": ObjectId(str(representation))})
version, subset, asset, project = legacy_io.parenthood(representation) project_name = legacy_io.active_project()
version, subset, asset, project = get_representation_parents(
project_name, representation
)
assert all([representation, version, subset, asset, project]), ( assert all([representation, version, subset, asset, project]), (
"This is a bug" "This is a bug"
@ -405,42 +386,36 @@ def update_container(container, version=-1):
"""Update a container""" """Update a container"""
# Compute the different version from 'representation' # Compute the different version from 'representation'
current_representation = legacy_io.find_one({ project_name = legacy_io.active_project()
"_id": ObjectId(container["representation"]) current_representation = get_representation_by_id(
}) project_name, container["representation"]
)
assert current_representation is not None, "This is a bug" assert current_representation is not None, "This is a bug"
current_version, subset, asset, project = legacy_io.parenthood( current_version = get_version_by_id(
current_representation) project_name, current_representation["_id"], fields=["parent"]
)
if version == -1: if version == -1:
new_version = legacy_io.find_one({ new_version = get_last_version_by_subset_id(
"type": "version", project_name, current_version["parent"], fields=["_id"]
"parent": subset["_id"] )
}, sort=[("name", -1)])
elif isinstance(version, HeroVersionType):
new_version = get_hero_version_by_subset_id(
project_name, current_version["parent"], fields=["_id"]
)
else: else:
if isinstance(version, HeroVersionType): new_version = get_version_by_name(
version_query = { project_name, version, current_version["parent"], fields=["_id"]
"parent": subset["_id"], )
"type": "hero_version"
}
else:
version_query = {
"parent": subset["_id"],
"type": "version",
"name": version
}
new_version = legacy_io.find_one(version_query)
assert new_version is not None, "This is a bug" assert new_version is not None, "This is a bug"
new_representation = legacy_io.find_one({ new_representation = get_representation_by_name(
"type": "representation", project_name, current_representation["name"], new_version["_id"]
"parent": new_version["_id"], )
"name": current_representation["name"]
})
assert new_representation is not None, "Representation wasn't found" assert new_representation is not None, "Representation wasn't found"
path = get_representation_path(new_representation) path = get_representation_path(new_representation)
@ -482,10 +457,10 @@ def switch_container(container, representation, loader_plugin=None):
)) ))
# Get the new representation to switch to # Get the new representation to switch to
new_representation = legacy_io.find_one({ project_name = legacy_io.active_project()
"type": "representation", new_representation = get_representation_by_id(
"_id": representation["_id"], project_name, representation["_id"]
}) )
new_context = get_representation_context(new_representation) new_context = get_representation_context(new_representation)
if not is_compatible_loader(loader_plugin, new_context): if not is_compatible_loader(loader_plugin, new_context):


@ -9,9 +9,8 @@ import qargparse
from Qt import QtWidgets, QtCore from Qt import QtWidgets, QtCore
from openpype import style from openpype import style
from openpype.pipeline import load, AvalonMongoDB from openpype.pipeline import load, AvalonMongoDB, Anatomy
from openpype.lib import StringTemplate from openpype.lib import StringTemplate
from openpype.api import Anatomy
class DeleteOldVersions(load.SubsetLoaderPlugin): class DeleteOldVersions(load.SubsetLoaderPlugin):


@ -3,8 +3,8 @@ from collections import defaultdict
from Qt import QtWidgets, QtCore, QtGui from Qt import QtWidgets, QtCore, QtGui
from openpype.pipeline import load, AvalonMongoDB from openpype.lib import config
from openpype.api import Anatomy, config from openpype.pipeline import load, AvalonMongoDB, Anatomy
from openpype import resources, style from openpype import resources, style
from openpype.lib.delivery import ( from openpype.lib.delivery import (


@ -4,11 +4,11 @@ Requires:
os.environ -> AVALON_PROJECT os.environ -> AVALON_PROJECT
Provides: Provides:
context -> anatomy (pype.api.Anatomy) context -> anatomy (openpype.pipeline.anatomy.Anatomy)
""" """
import os import os
from openpype.api import Anatomy
import pyblish.api import pyblish.api
from openpype.pipeline import Anatomy
class CollectAnatomyObject(pyblish.api.ContextPlugin): class CollectAnatomyObject(pyblish.api.ContextPlugin):


@ -19,7 +19,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
label = "Extract Thumbnail" label = "Extract Thumbnail"
order = pyblish.api.ExtractorOrder order = pyblish.api.ExtractorOrder
families = [ families = [
"imagesequence", "render", "render2d", "imagesequence", "render", "render2d", "prerender",
"source", "plate", "take" "source", "plate", "take"
] ]
hosts = ["shell", "fusion", "resolve"] hosts = ["shell", "fusion", "resolve"]


@ -1,4 +1,20 @@
{ {
"load": {
"ImageSequenceLoader": {
"family": [
"shot",
"render",
"image",
"plate",
"reference"
],
"representations": [
"jpeg",
"png",
"jpg"
]
}
},
"publish": { "publish": {
"CollectPalettes": { "CollectPalettes": {
"allowed_tasks": [ "allowed_tasks": [


@ -5,6 +5,34 @@
"label": "Harmony", "label": "Harmony",
"is_file": true, "is_file": true,
"children": [ "children": [
{
"type": "dict",
"collapsible": true,
"key": "load",
"label": "Loader plugins",
"children": [
{
"type": "dict",
"collapsible": true,
"key": "ImageSequenceLoader",
"label": "Load Image Sequence",
"children": [
{
"type": "list",
"key": "family",
"label": "Families",
"object_type": "text"
},
{
"type": "list",
"key": "representations",
"label": "Representations",
"object_type": "text"
}
]
}
]
},
{ {
"type": "dict", "type": "dict",
"collapsible": true, "collapsible": true,


@ -17,8 +17,7 @@ from openpype.client import (
get_thumbnail_id_from_source, get_thumbnail_id_from_source,
get_thumbnail, get_thumbnail,
) )
from openpype.api import Anatomy from openpype.pipeline import HeroVersionType, Anatomy
from openpype.pipeline import HeroVersionType
from openpype.pipeline.thumbnail import get_thumbnail_binary from openpype.pipeline.thumbnail import get_thumbnail_binary
from openpype.pipeline.load import ( from openpype.pipeline.load import (
discover_loader_plugins, discover_loader_plugins,


@ -6,6 +6,7 @@ from collections import defaultdict
from Qt import QtCore, QtGui from Qt import QtCore, QtGui
import qtawesome import qtawesome
from openpype.host import ILoadHost
from openpype.client import ( from openpype.client import (
get_asset_by_id, get_asset_by_id,
get_subset_by_id, get_subset_by_id,
@ -193,7 +194,10 @@ class InventoryModel(TreeModel):
host = registered_host() host = registered_host()
if not items: # for debugging or testing, injecting items from outside if not items: # for debugging or testing, injecting items from outside
items = host.ls() if isinstance(host, ILoadHost):
items = host.get_containers()
else:
items = host.ls()
self.clear() self.clear()


@ -6,8 +6,7 @@ import speedcopy
from openpype.client import get_project, get_asset_by_name from openpype.client import get_project, get_asset_by_name
from openpype.lib import Terminal from openpype.lib import Terminal
from openpype.api import Anatomy from openpype.pipeline import legacy_io, Anatomy
from openpype.pipeline import legacy_io
t = Terminal() t = Terminal()


@ -6,7 +6,7 @@ use singleton approach with global functions (using helper anyway).
import os import os
import pyblish.api import pyblish.api
from openpype.host import IWorkfileHost, ILoadHost
from openpype.pipeline import ( from openpype.pipeline import (
registered_host, registered_host,
legacy_io, legacy_io,
@ -49,12 +49,11 @@ class HostToolsHelper:
def get_workfiles_tool(self, parent): def get_workfiles_tool(self, parent):
"""Create, cache and return workfiles tool window.""" """Create, cache and return workfiles tool window."""
if self._workfiles_tool is None: if self._workfiles_tool is None:
from openpype.tools.workfiles.app import ( from openpype.tools.workfiles.app import Window
Window, validate_host_requirements
)
# Host validation # Host validation
host = registered_host() host = registered_host()
validate_host_requirements(host) IWorkfileHost.validate_workfile_methods(host)
workfiles_window = Window(parent=parent) workfiles_window = Window(parent=parent)
self._workfiles_tool = workfiles_window self._workfiles_tool = workfiles_window
@ -92,6 +91,9 @@ class HostToolsHelper:
if self._loader_tool is None: if self._loader_tool is None:
from openpype.tools.loader import LoaderWindow from openpype.tools.loader import LoaderWindow
host = registered_host()
ILoadHost.validate_load_methods(host)
loader_window = LoaderWindow(parent=parent or self._parent) loader_window = LoaderWindow(parent=parent or self._parent)
self._loader_tool = loader_window self._loader_tool = loader_window
@ -164,6 +166,9 @@ class HostToolsHelper:
if self._scene_inventory_tool is None: if self._scene_inventory_tool is None:
from openpype.tools.sceneinventory import SceneInventoryWindow from openpype.tools.sceneinventory import SceneInventoryWindow
host = registered_host()
ILoadHost.validate_load_methods(host)
scene_inventory_window = SceneInventoryWindow( scene_inventory_window = SceneInventoryWindow(
parent=parent or self._parent parent=parent or self._parent
) )


@ -1,12 +1,10 @@
from .window import Window from .window import Window
from .app import ( from .app import (
show, show,
validate_host_requirements,
) )
__all__ = [ __all__ = [
"Window", "Window",
"show", "show",
"validate_host_requirements",
] ]


@ -1,6 +1,7 @@
import sys import sys
import logging import logging
from openpype.host import IWorkfileHost
from openpype.pipeline import ( from openpype.pipeline import (
registered_host, registered_host,
legacy_io, legacy_io,
@ -14,31 +15,6 @@ module = sys.modules[__name__]
module.window = None module.window = None
def validate_host_requirements(host):
if host is None:
raise RuntimeError("No registered host.")
# Verify the host has implemented the api for Work Files
required = [
"open_file",
"save_file",
"current_file",
"has_unsaved_changes",
"work_root",
"file_extensions",
]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
if missing:
raise RuntimeError(
"Host is missing required Work Files interfaces: "
"%s (host: %s)" % (", ".join(missing), host)
)
return True
def show(root=None, debug=False, parent=None, use_context=True, save=True): def show(root=None, debug=False, parent=None, use_context=True, save=True):
"""Show Work Files GUI""" """Show Work Files GUI"""
# todo: remove `root` argument to show() # todo: remove `root` argument to show()
@ -50,7 +26,7 @@ def show(root=None, debug=False, parent=None, use_context=True, save=True):
pass pass
host = registered_host() host = registered_host()
validate_host_requirements(host) IWorkfileHost.validate_workfile_methods(host)
if debug: if debug:
legacy_io.Session["AVALON_ASSET"] = "Mock" legacy_io.Session["AVALON_ASSET"] = "Mock"


@ -6,12 +6,12 @@ import copy
import Qt import Qt
from Qt import QtWidgets, QtCore from Qt import QtWidgets, QtCore
from openpype.host import IWorkfileHost
from openpype.client import get_asset_by_id from openpype.client import get_asset_by_id
from openpype.tools.utils import PlaceholderLineEdit from openpype.tools.utils import PlaceholderLineEdit
from openpype.tools.utils.delegates import PrettyTimeDelegate from openpype.tools.utils.delegates import PrettyTimeDelegate
from openpype.lib import ( from openpype.lib import (
emit_event, emit_event,
Anatomy,
get_workfile_template_key, get_workfile_template_key,
create_workdir_extra_folders, create_workdir_extra_folders,
) )
@ -22,6 +22,7 @@ from openpype.lib.avalon_context import (
from openpype.pipeline import ( from openpype.pipeline import (
registered_host, registered_host,
legacy_io, legacy_io,
Anatomy,
) )
from .model import ( from .model import (
WorkAreaFilesModel, WorkAreaFilesModel,
@ -125,7 +126,7 @@ class FilesWidget(QtWidgets.QWidget):
filter_layout.addWidget(published_checkbox, 0) filter_layout.addWidget(published_checkbox, 0)
# Create the Files models # Create the Files models
extensions = set(self.host.file_extensions()) extensions = set(self._get_host_extensions())
views_widget = QtWidgets.QWidget(self) views_widget = QtWidgets.QWidget(self)
# --- Workarea view --- # --- Workarea view ---
@ -452,7 +453,12 @@ class FilesWidget(QtWidgets.QWidget):
def open_file(self, filepath): def open_file(self, filepath):
host = self.host host = self.host
if host.has_unsaved_changes(): if isinstance(host, IWorkfileHost):
has_unsaved_changes = host.workfile_has_unsaved_changes()
else:
has_unsaved_changes = host.has_unsaved_changes()
if has_unsaved_changes:
result = self.save_changes_prompt() result = self.save_changes_prompt()
if result is None: if result is None:
# Cancel operation # Cancel operation
@ -460,7 +466,10 @@ class FilesWidget(QtWidgets.QWidget):
# Save first if has changes # Save first if has changes
if result: if result:
current_file = host.current_file() if isinstance(host, IWorkfileHost):
current_file = host.get_current_workfile()
else:
current_file = host.current_file()
if not current_file: if not current_file:
# If the user requested to save the current scene # If the user requested to save the current scene
# we can't actually automatically do so if the current # we can't actually automatically do so if the current
@ -471,7 +480,10 @@ class FilesWidget(QtWidgets.QWidget):
return return
# Save current scene, continue to open file # Save current scene, continue to open file
host.save_file(current_file) if isinstance(host, IWorkfileHost):
host.save_workfile(current_file)
else:
host.save_file(current_file)
event_data_before = self._get_event_context_data() event_data_before = self._get_event_context_data()
event_data_before["filepath"] = filepath event_data_before["filepath"] = filepath
@ -482,7 +494,10 @@ class FilesWidget(QtWidgets.QWidget):
source="workfiles.tool" source="workfiles.tool"
) )
self._enter_session() self._enter_session()
host.open_file(filepath) if isinstance(host, IWorkfileHost):
host.open_workfile(filepath)
else:
host.open_file(filepath)
emit_event( emit_event(
"workfile.open.after", "workfile.open.after",
event_data_after, event_data_after,
@ -524,7 +539,7 @@ class FilesWidget(QtWidgets.QWidget):
filepath = self._get_selected_filepath() filepath = self._get_selected_filepath()
extensions = [os.path.splitext(filepath)[1]] extensions = [os.path.splitext(filepath)[1]]
else: else:
extensions = self.host.file_extensions() extensions = self._get_host_extensions()
window = SaveAsDialog( window = SaveAsDialog(
parent=self, parent=self,
@ -572,9 +587,14 @@ class FilesWidget(QtWidgets.QWidget):
self.open_file(path) self.open_file(path)
def _get_host_extensions(self):
if isinstance(self.host, IWorkfileHost):
return self.host.get_workfile_extensions()
return self.host.file_extensions()
def on_browse_pressed(self): def on_browse_pressed(self):
ext_filter = "Work File (*{0})".format( ext_filter = "Work File (*{0})".format(
" *".join(self.host.file_extensions()) " *".join(self._get_host_extensions())
) )
kwargs = { kwargs = {
"caption": "Work Files", "caption": "Work Files",
@ -632,10 +652,16 @@ class FilesWidget(QtWidgets.QWidget):
self._enter_session() self._enter_session()
if not self.published_enabled: if not self.published_enabled:
self.host.save_file(filepath) if isinstance(self.host, IWorkfileHost):
self.host.save_workfile(filepath)
else:
self.host.save_file(filepath)
else: else:
shutil.copy(src_path, filepath) shutil.copy(src_path, filepath)
self.host.open_file(filepath) if isinstance(self.host, IWorkfileHost):
self.host.open_workfile(filepath)
else:
self.host.open_file(filepath)
# Create extra folders # Create extra folders
create_workdir_extra_folders( create_workdir_extra_folders(


@ -1,3 +1,3 @@
# -*- coding: utf-8 -*- # -*- coding: utf-8 -*-
"""Package declaring Pype version.""" """Package declaring Pype version."""
__version__ = "3.12.1-nightly.1" __version__ = "3.12.1-nightly.2"


@ -1,6 +1,6 @@
[tool.poetry] [tool.poetry]
name = "OpenPype" name = "OpenPype"
version = "3.12.1-nightly.1" # OpenPype version = "3.12.1-nightly.2" # OpenPype
description = "Open VFX and Animation pipeline with support." description = "Open VFX and Animation pipeline with support."
authors = ["OpenPype Team <info@openpype.io>"] authors = ["OpenPype Team <info@openpype.io>"]
license = "MIT License" license = "MIT License"


@ -105,6 +105,10 @@ save it in secure way to your systems keyring - on Windows it is **Credential Ma
This can be also set beforehand with environment variable `OPENPYPE_MONGO`. If set it takes precedence This can be also set beforehand with environment variable `OPENPYPE_MONGO`. If set it takes precedence
over the one set in keyring. over the one set in keyring.
:::tip Minimal permissions for DB user
- `readWrite` role on the `openpype` and `avalon` databases
- `find` permission on `openpype`, `avalon` and `local`
:::
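As a rough sketch of granting those roles programmatically (the user name, password and helper names here are illustrative, and a `pymongo` database handle with user-administration privileges is assumed), the role documents could look like:

```python
def minimal_openpype_roles():
    # Role documents matching the minimal permissions listed above:
    # readWrite on "openpype" and "avalon", read (which grants find) on "local".
    return [
        {"role": "readWrite", "db": "openpype"},
        {"role": "readWrite", "db": "avalon"},
        {"role": "read", "db": "local"},
    ]

def create_openpype_user(admin_db, username, password):
    # "admin_db" is expected to be a pymongo Database object with privileges
    # to create users; "createUser" is a standard MongoDB database command.
    return admin_db.command(
        "createUser", username, pwd=password, roles=minimal_openpype_roles()
    )
```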
#### Check for OpenPype version path #### Check for OpenPype version path
When connection to MongoDB is made, OpenPype will get various settings from there - one among them is When connection to MongoDB is made, OpenPype will get various settings from there - one among them is
directory location where OpenPype versions are stored. If this directory exists OpenPype tries to directory location where OpenPype versions are stored. If this directory exists OpenPype tries to


@ -0,0 +1,89 @@
---
id: dev_host_implementation
title: Host implementation
sidebar_label: Host implementation
toc_max_heading_level: 4
---
A host is an integration of a DCC, but in most cases it also has logic that needs to be handled before the DCC is launched. Based on the abilities (or purpose) of the DCC, the integration can then support different pipeline workflows.
## Pipeline workflows
The workflows available in OpenPype are Workfiles, Load and Create-Publish. Each of them may require some functionality from the integration (e.g. calling the host API to achieve certain behavior). We'll go through them later.
## How to implement and manage host
At this moment there is no fully unified way a host should be implemented, but we're working on it. A host should have "public face" code that can be used outside of the DCC, plus in-DCC integration code. The main reason for the split is that in-DCC code can depend on Python modules that are not available outside of its process. Hosts are located in the `openpype/hosts/{host name}` folder. The current code (in many places) expects that the host name has an equivalent folder there, so each subfolder should be named after the host it represents.
### Recommended folder structure
```python
openpype/hosts/{host name}
│ # Content of DCC integration - with in-DCC imports
├─ api
│ ├─ __init__.py
│ └─ [DCC integration files]
│ # Plugins related to host - dynamically imported (can contain in-DCC imports)
├─ plugins
│ ├─ create
│ │ └─ [create plugin files]
│ ├─ load
│ │ └─ [load plugin files]
│ └─ publish
│ └─ [publish plugin files]
│ # Launch hooks - used to modify how application is launched
├─ hooks
│ └─ [some pre/post launch hooks]
|
│ # Code initializing host integration in-DCC (DCC specific - example from Maya)
├─ startup
│ └─ userSetup.py
│ # Public interface
├─ __init__.py
└─ [other public code]
```
### Launch Hooks
Launch hooks are not directly part of the host implementation, but they can be used to modify the launch of the DCC process, which may be crucial for the integration. Launch hooks are plugins called when a DCC is launched, processed in sequence before and after launch. Prelaunch hooks can change how the DCC process is launched, e.g. change subprocess flags, modify environment variables or modify launch arguments. If a prelaunch hook crashes, the application is not launched at all. Postlaunch hooks are triggered after the subprocess is launched; they can be used to change statuses in your project tracker, start a timer, etc. A crashed postlaunch hook has no effect on the remaining postlaunch hooks or on the launched process. Hooks can be filtered by platform, host and application, and their order is defined by an integer value. Hooks inside a host are loaded automatically (one reason why the folder name should match the host name) or can be defined in modules. All hooks share the same launch context, where data used across multiple hooks can be stored (please be very specific with stored keys, e.g. 'project' vs. 'project_name'). For more detailed information look into `openpype/lib/applications.py`.
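To illustrate, a prelaunch hook could look like the sketch below. The `PreLaunchHook` base class and the `launch_context.env` attribute are assumptions based on `openpype/lib/applications.py`; the hook class and the injected path are made up:

```python
# Hedged sketch of a prelaunch hook; names are assumptions, not a
# definitive implementation.
try:
    from openpype.lib.applications import PreLaunchHook
except ImportError:
    # Minimal stand-in so the sketch can run outside an OpenPype install.
    class PreLaunchHook(object):
        def __init__(self, launch_context):
            self.launch_context = launch_context

class AddStudioPluginPath(PreLaunchHook):
    """Hypothetical hook injecting a studio plugin path before launch."""
    order = 10                               # integer order in the sequence
    platforms = {"windows", "linux", "darwin"}
    hosts = {"maya"}                         # only applied for this host

    def execute(self):
        # Modify the environment of the process about to be launched.
        env = self.launch_context.env
        env["MY_STUDIO_PLUGIN_PATH"] = "/studio/plugins"
```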
### Public interface
The public face is at this moment related to launching of the DCC. Currently the only option is to modify environment variables before launch by implementing the function `add_implementation_envs` (which must be available in `openpype/hosts/{host name}/__init__.py`). The function is called after pre-launch hooks, as the last step before the subprocess launch, so it can set environment variables crucial for a proper integration. It is also a good place for functions that are used both in pre-launch hooks and in the in-DCC integration. A future plan is to also get workfile extensions from here. Right now workfile extensions are hardcoded in `openpype/pipeline/constants.py` under `HOST_WORKFILE_EXTENSIONS`; we would like to handle hosts as addons, similar to OpenPype modules, along with more improvements to things that are now hardcoded.
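A sketch of what such a function might look like, based on the description above. The exact signature of the second argument and the paths used here are assumptions for illustration; check an existing host's `__init__.py` for the real pattern.

```python
import os


def add_implementation_envs(env, app):
    """Modify environment variables as the last step before launch.

    "env" is the environment dict for the subprocess; "app" is the
    launched application object (unused in this sketch).
    """
    # Hypothetical path to this host's startup scripts.
    startup_path = "/path/to/openpype/hosts/myhost/startup"

    # Prepend the startup folder to the DCC's script path variable
    # (the variable name depends on the DCC, e.g. PYTHONPATH for Maya).
    previous = env.get("PYTHONPATH") or ""
    parts = [startup_path] + [p for p in previous.split(os.pathsep) if p]
    env["PYTHONPATH"] = os.pathsep.join(parts)

    # Variables crucial for the integration can be set unconditionally.
    env["MYHOST_OPENPYPE_INTEGRATION"] = "1"
```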
### Integration
We've prepared the base class `HostBase` in `openpype/host/host.py` to define the minimum requirements and provide some default method implementations. The minimum requirement for a host is the `name` attribute; such a host would not be able to do much, but it is valid. To extend the functionality we've prepared interfaces that help to identify what a host is capable of and whether certain tools can be used with it. We defined an interface for each workflow. The `IWorkfileHost` interface adds the requirement to implement workfile related methods, which makes the host usable in combination with the Workfiles tool. The `ILoadHost` interface adds the requirements to be able to load, update, switch or remove referenced representations, which adds support for the Loader and Scene Inventory tools. The `INewPublisher` interface is required to be able to use a host with the new OpenPype publish workflow. This is what must or can be implemented to allow certain functionality. `HostBase` will take over more responsibility that currently lives in global variables. This won't happen at once, but gradually, to keep backwards compatibility for some time.
#### Example
```python
from openpype.host import HostBase, IWorkfileHost, ILoadHost


class MayaHost(HostBase, IWorkfileHost, ILoadHost):
    # "name" is the single required attribute of any host.
    name = "maya"

    def open_workfile(self, filepath):
        ...

    def save_current_workfile(self, filepath=None):
        ...

    def get_current_workfile(self):
        ...

    ...
```
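Tools can then check a host's capabilities by interface rather than by host name. A self-contained sketch of the idea, using stand-in classes (the real ones come from `openpype.host`):

```python
# Stand-ins for the real classes from openpype.host.
class HostBase:
    name = None


class IWorkfileHost:
    """Marker interface: host implements workfile methods."""


class ILoadHost:
    """Marker interface: host can load/update/remove representations."""


class MinimalHost(HostBase):
    # Valid host, but no tool support beyond the minimum.
    name = "minimal"


class WorkfileCapableHost(HostBase, IWorkfileHost):
    name = "maya"


def can_use_workfiles_tool(host):
    # Tools validate capability by interface, not by host name.
    return isinstance(host, IWorkfileHost)
```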
### Install integration
We have prepared a host class; now, where and how do we initialize its object? This part is DCC specific. In DCCs with embedded python and Qt, like Maya, we take advantage of being able to initialize the object of the class directly in the DCC process on start; the same happens in Nuke, Hiero and Houdini. For DCCs like Photoshop or Harmony, a separate OpenPype (python) process is launched next to them, which handles the host initialization and the communication with the DCC process (e.g. using sockets). The created host object must be installed and registered to the global scope of OpenPype, which means that at this moment one process can handle only one host at a time.
#### Install example (Maya startup file)
```python
from openpype.pipeline import install_host
from openpype.hosts.maya.api import MayaHost
host = MayaHost()
install_host(host)
```
The function `install_host` takes care of installing global plugins and callbacks and of registering the host. Host registration means that the object is kept in memory and is accessible using `get_registered_host()`.
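The registration mechanism can be thought of as a module-level singleton. A simplified sketch of the idea (the real implementation in `openpype.pipeline` also installs global plugin paths and callbacks, which is elided here):

```python
# Simplified sketch of host registration; one process holds at most
# one registered host at a time.
_registered_host = None


def register_host(host):
    global _registered_host
    _registered_host = host


def get_registered_host():
    return _registered_host


def install_host(host):
    # The real function installs global plugins and callbacks here,
    # then registers the host.
    register_host(host)
```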
### Using UI tools
Most functionality in DCCs is provided to artists through UI tools. We're trying to keep the UIs consistent, so we use the same set of tools in each host; all or most of them are Qt based. There is a `HostToolsHelper` in `openpype/tools/utils/host_tools.py` which unifies showing of the default tools; they can be shown at almost any point. Some of them validate whether the host is capable of using them (Workfiles, Loader and Scene Inventory), which is related to [pipeline workflows](#pipeline-workflows). `HostToolsHelper` provides an API to show the tools, but the host integration must give artists the ability to show them. Most DCCs have an extendable menu bar where it is possible to add custom actions, which is the preferred approach for exposing the tools.
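The core idea behind a helper like `HostToolsHelper` can be sketched as a lazy window cache: each tool window is created once on first request and only re-shown afterwards. This is a self-contained illustration of that pattern, not the real class; the actual implementation builds Qt dialogs instead of the dict stand-ins used here.

```python
# Sketch of the lazy tool-window cache idea: create each tool once,
# re-show the same instance on subsequent requests.
class HostToolsHelper:
    def __init__(self):
        self._tools = {}

    def _create_tool(self, name):
        # The real helper constructs a Qt dialog here; a dict stands
        # in for the window object in this sketch.
        return {"name": name, "visible": False}

    def show_tool(self, name):
        tool = self._tools.get(name)
        if tool is None:
            tool = self._create_tool(name)
            self._tools[name] = tool
        tool["visible"] = True
        return tool
```

A menu action in the DCC would then simply call `show_tool("workfiles")`, `show_tool("loader")`, etc.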
@@ -155,6 +155,7 @@ module.exports = {
    type: "category",
    label: "Hosts integrations",
    items: [
+     "dev_host_implementation",
      "dev_publishing"
    ]
  }