Merge branch 'develop' into cleanup_reuse_maintained_time

This commit is contained in:
Milan Kolar 2022-07-05 08:09:41 +02:00 committed by GitHub
commit c8fb351c3d
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
108 changed files with 3898 additions and 2260 deletions


@ -1,35 +1,75 @@
# Changelog
## [3.12.0-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.12.1-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.11.1...HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.12.0...HEAD)
### 📖 Documentation
- Linux: update OIIO package [\#3401](https://github.com/pypeclub/OpenPype/pull/3401)
- General: Add ability to change user value for templates [\#3366](https://github.com/pypeclub/OpenPype/pull/3366)
- Multiverse: expose some settings to GUI [\#3350](https://github.com/pypeclub/OpenPype/pull/3350)
- Docs: Added minimal permissions for MongoDB [\#3441](https://github.com/pypeclub/OpenPype/pull/3441)
**🆕 New features**
- Maya: Add VDB to Arnold loader [\#3433](https://github.com/pypeclub/OpenPype/pull/3433)
**🚀 Enhancements**
- Hosts: More options for in-host callbacks [\#3357](https://github.com/pypeclub/OpenPype/pull/3357)
- Maya: Allow more data to be published along camera 🎥 [\#3304](https://github.com/pypeclub/OpenPype/pull/3304)
- Blender: Bugfix - Set fps properly on open [\#3426](https://github.com/pypeclub/OpenPype/pull/3426)
- Blender: pre pyside install for all platforms [\#3400](https://github.com/pypeclub/OpenPype/pull/3400)
**🐛 Bug fixes**
- Nuke: prerender reviewable fails [\#3450](https://github.com/pypeclub/OpenPype/pull/3450)
- Maya: fix hashing in Python 3 for tile rendering [\#3447](https://github.com/pypeclub/OpenPype/pull/3447)
- LogViewer: Escape html characters in log message [\#3443](https://github.com/pypeclub/OpenPype/pull/3443)
- Nuke: Slate frame is integrated [\#3427](https://github.com/pypeclub/OpenPype/pull/3427)
**🔀 Refactored code**
- Clockify: Use query functions in clockify actions [\#3458](https://github.com/pypeclub/OpenPype/pull/3458)
- General: Use query functions in rest api calls [\#3457](https://github.com/pypeclub/OpenPype/pull/3457)
- General: Use Anatomy after move to pipeline [\#3436](https://github.com/pypeclub/OpenPype/pull/3436)
- General: Anatomy moved to pipeline [\#3435](https://github.com/pypeclub/OpenPype/pull/3435)
## [3.12.0](https://github.com/pypeclub/OpenPype/tree/3.12.0) (2022-06-28)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.0-nightly.3...3.12.0)
### 📖 Documentation
- Fix typo in documentation: pyenv on mac [\#3417](https://github.com/pypeclub/OpenPype/pull/3417)
- Linux: update OIIO package [\#3401](https://github.com/pypeclub/OpenPype/pull/3401)
**🚀 Enhancements**
- Webserver: Added CORS middleware [\#3422](https://github.com/pypeclub/OpenPype/pull/3422)
- Attribute Defs UI: Files widget show what is allowed to drop in [\#3411](https://github.com/pypeclub/OpenPype/pull/3411)
- General: Add ability to change user value for templates [\#3366](https://github.com/pypeclub/OpenPype/pull/3366)
- Hosts: More options for in-host callbacks [\#3357](https://github.com/pypeclub/OpenPype/pull/3357)
- Multiverse: expose some settings to GUI [\#3350](https://github.com/pypeclub/OpenPype/pull/3350)
**🐛 Bug fixes**
- NewPublisher: Fix subset name change on change of creator plugin [\#3420](https://github.com/pypeclub/OpenPype/pull/3420)
- Bug: fix invalid avalon import [\#3418](https://github.com/pypeclub/OpenPype/pull/3418)
- Nuke: Fix keyword argument in query function [\#3414](https://github.com/pypeclub/OpenPype/pull/3414)
- Houdini: fix loading and updating vbd/bgeo sequences [\#3408](https://github.com/pypeclub/OpenPype/pull/3408)
- Nuke: Collect representation files based on Write [\#3407](https://github.com/pypeclub/OpenPype/pull/3407)
- General: Filter representations before integration start [\#3398](https://github.com/pypeclub/OpenPype/pull/3398)
- Maya: look collector typo [\#3392](https://github.com/pypeclub/OpenPype/pull/3392)
- TVPaint: Make sure exit code is set to not None [\#3382](https://github.com/pypeclub/OpenPype/pull/3382)
- Maya: vray device aspect ratio fix [\#3381](https://github.com/pypeclub/OpenPype/pull/3381)
- Flame: bunch of publishing issues [\#3377](https://github.com/pypeclub/OpenPype/pull/3377)
- Harmony: added unc path to zifile command in Harmony [\#3372](https://github.com/pypeclub/OpenPype/pull/3372)
- Standalone: settings improvements [\#3355](https://github.com/pypeclub/OpenPype/pull/3355)
- Nuke: Load full model hierarchy by default [\#3328](https://github.com/pypeclub/OpenPype/pull/3328)
**🔀 Refactored code**
- Unreal: Use client query functions [\#3421](https://github.com/pypeclub/OpenPype/pull/3421)
- General: Move editorial lib to pipeline [\#3419](https://github.com/pypeclub/OpenPype/pull/3419)
- Kitsu: renaming to plural func sync\_all\_projects [\#3397](https://github.com/pypeclub/OpenPype/pull/3397)
- Houdini: Use client query functions [\#3395](https://github.com/pypeclub/OpenPype/pull/3395)
- Hiero: Use client query functions [\#3393](https://github.com/pypeclub/OpenPype/pull/3393)
- Nuke: Use client query functions [\#3391](https://github.com/pypeclub/OpenPype/pull/3391)
- Maya: Use client query functions [\#3385](https://github.com/pypeclub/OpenPype/pull/3385)
@ -43,6 +83,7 @@
**Merged pull requests:**
- Sync Queue: Added far future value for null values for dates [\#3371](https://github.com/pypeclub/OpenPype/pull/3371)
- Maya - added support for single frame playblast review [\#3369](https://github.com/pypeclub/OpenPype/pull/3369)
## [3.11.1](https://github.com/pypeclub/OpenPype/tree/3.11.1) (2022-06-20)
@ -73,7 +114,6 @@
- nuke: adding extract thumbnail settings 3.10 [\#3347](https://github.com/pypeclub/OpenPype/pull/3347)
- General: Fix last version function [\#3345](https://github.com/pypeclub/OpenPype/pull/3345)
- Deadline: added OPENPYPE\_MONGO to filter [\#3336](https://github.com/pypeclub/OpenPype/pull/3336)
- Nuke: fixing farm publishing if review is disabled [\#3306](https://github.com/pypeclub/OpenPype/pull/3306)
**🔀 Refactored code**
@ -83,11 +123,6 @@
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.11.0-nightly.4...3.11.0)
### 📖 Documentation
- Documentation: Add app key to template documentation [\#3299](https://github.com/pypeclub/OpenPype/pull/3299)
- doc: adding royal render and multiverse to the web site [\#3285](https://github.com/pypeclub/OpenPype/pull/3285)
**🚀 Enhancements**
- Settings: Settings can be extracted from UI [\#3323](https://github.com/pypeclub/OpenPype/pull/3323)
@ -95,12 +130,6 @@
- Ftrack: Action to easily create daily review session [\#3310](https://github.com/pypeclub/OpenPype/pull/3310)
- TVPaint: Extractor use mark in/out range to render [\#3309](https://github.com/pypeclub/OpenPype/pull/3309)
- Ftrack: Delivery action can work on ReviewSessions [\#3307](https://github.com/pypeclub/OpenPype/pull/3307)
- Maya: Look assigner UI improvements [\#3298](https://github.com/pypeclub/OpenPype/pull/3298)
- Ftrack: Action to transfer values of hierarchical attributes [\#3284](https://github.com/pypeclub/OpenPype/pull/3284)
- Maya: better handling of legacy review subsets names [\#3269](https://github.com/pypeclub/OpenPype/pull/3269)
- General: Updated windows oiio tool [\#3268](https://github.com/pypeclub/OpenPype/pull/3268)
- Unreal: add support for skeletalMesh and staticMesh to loaders [\#3267](https://github.com/pypeclub/OpenPype/pull/3267)
- Maya: reference loaders could store placeholder in referenced url [\#3264](https://github.com/pypeclub/OpenPype/pull/3264)
**🐛 Bug fixes**
@ -108,26 +137,14 @@
- Houdini: Fix Houdini VDB manage update wrong file attribute name [\#3322](https://github.com/pypeclub/OpenPype/pull/3322)
- Nuke: anatomy compatibility issue hacks [\#3321](https://github.com/pypeclub/OpenPype/pull/3321)
- hiero: otio p3 compatibility issue - metadata on effect use update 3.11 [\#3314](https://github.com/pypeclub/OpenPype/pull/3314)
- General: Vendorized modules for Python 2 and update poetry lock [\#3305](https://github.com/pypeclub/OpenPype/pull/3305)
- Fix - added local targets to install host [\#3303](https://github.com/pypeclub/OpenPype/pull/3303)
- Settings: Add missing default settings for nuke gizmo [\#3301](https://github.com/pypeclub/OpenPype/pull/3301)
- Maya: Fix swaped width and height in reviews [\#3300](https://github.com/pypeclub/OpenPype/pull/3300)
- Maya: point cache publish handles Maya instances [\#3297](https://github.com/pypeclub/OpenPype/pull/3297)
- Global: extract review slate issues [\#3286](https://github.com/pypeclub/OpenPype/pull/3286)
- Webpublisher: return only active projects in ProjectsEndpoint [\#3281](https://github.com/pypeclub/OpenPype/pull/3281)
- Hiero: add support for task tags 3.10.x [\#3279](https://github.com/pypeclub/OpenPype/pull/3279)
- General: Fix Oiio tool path resolving [\#3278](https://github.com/pypeclub/OpenPype/pull/3278)
- Maya: Fix udim support for e.g. uppercase \<UDIM\> tag [\#3266](https://github.com/pypeclub/OpenPype/pull/3266)
**🔀 Refactored code**
- Blender: Use client query functions [\#3331](https://github.com/pypeclub/OpenPype/pull/3331)
- General: Define query functions [\#3288](https://github.com/pypeclub/OpenPype/pull/3288)
**Merged pull requests:**
- Maya: add pointcache family to gpu cache loader [\#3318](https://github.com/pypeclub/OpenPype/pull/3318)
- Maya look: skip empty file attributes [\#3274](https://github.com/pypeclub/OpenPype/pull/3274)
## [3.10.0](https://github.com/pypeclub/OpenPype/tree/3.10.0) (2022-05-26)


@ -18,7 +18,8 @@ AppPublisher=Orbi Tools s.r.o
AppPublisherURL=http://pype.club
AppSupportURL=http://pype.club
AppUpdatesURL=http://pype.club
DefaultDirName={autopf}\{#MyAppName}
DefaultDirName={autopf}\{#MyAppName}\{#AppVer}
UsePreviousAppDir=no
DisableProgramGroupPage=yes
OutputBaseFilename={#MyAppName}-{#AppVer}-install
AllowCancelDuringInstall=yes
@ -27,7 +28,7 @@ AllowCancelDuringInstall=yes
PrivilegesRequiredOverridesAllowed=dialog
SetupIconFile=igniter\openpype.ico
OutputDir=build\
Compression=lzma
Compression=lzma2
SolidCompression=yes
WizardStyle=modern
@ -37,6 +38,11 @@ Name: "english"; MessagesFile: "compiler:Default.isl"
[Tasks]
Name: "desktopicon"; Description: "{cm:CreateDesktopIcon}"; GroupDescription: "{cm:AdditionalIcons}"; Flags: unchecked
[InstallDelete]
; clean everything in previous installation folder
Type: filesandordirs; Name: "{app}\*"
[Files]
Source: "build\{#build}\*"; DestDir: "{app}"; Flags: ignoreversion recursesubdirs createallsubdirs
; NOTE: Don't use "Flags: ignoreversion" on any shared system files


@ -15,12 +15,15 @@ from bson.objectid import ObjectId
from openpype.lib.mongo import OpenPypeMongoConnection
def _get_project_connection(project_name=None):
def _get_project_database():
db_name = os.environ.get("AVALON_DB") or "avalon"
mongodb = OpenPypeMongoConnection.get_mongo_client()[db_name]
if project_name:
return mongodb[project_name]
return mongodb
return OpenPypeMongoConnection.get_mongo_client()[db_name]
def _get_project_connection(project_name):
if not project_name:
raise ValueError("Invalid project name {}".format(str(project_name)))
return _get_project_database()[project_name]
def _prepare_fields(fields, required_fields=None):
@ -55,7 +58,7 @@ def _convert_ids(in_ids):
def get_projects(active=True, inactive=False, fields=None):
mongodb = _get_project_connection()
mongodb = _get_project_database()
for project_name in mongodb.collection_names():
if project_name in ("system.indexes",):
continue
@ -146,6 +149,7 @@ def _get_assets(
project_name,
asset_ids=None,
asset_names=None,
parent_ids=None,
standard=True,
archived=False,
fields=None
@ -161,6 +165,7 @@ def _get_assets(
project_name (str): Name of project where to look for queried entities.
asset_ids (list[str|ObjectId]): Asset ids that should be found.
asset_names (list[str]): Name assets that should be found.
parent_ids (list[str|ObjectId]): Parent asset ids.
standard (bool): Query standard assets (type 'asset').
archived (bool): Query archived assets (type 'archived_asset').
fields (list[str]): Fields that should be returned. All fields are
@ -196,6 +201,12 @@ def _get_assets(
return []
query_filter["name"] = {"$in": list(asset_names)}
if parent_ids is not None:
parent_ids = _convert_ids(parent_ids)
if not parent_ids:
return []
query_filter["data.visualParent"] = {"$in": parent_ids}
conn = _get_project_connection(project_name)
return conn.find(query_filter, _prepare_fields(fields))
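The new `parent_ids` branch composes with the other filters into a single Mongo query dict. A minimal sketch of that assembly (simplified: ids kept as plain strings instead of `ObjectId`, no database connection; this is not the actual OpenPype function):

```python
def build_asset_filter(asset_ids=None, asset_names=None, parent_ids=None,
                       standard=True, archived=False):
    # Mirror of the query assembly in '_get_assets' (simplified).
    types = []
    if standard:
        types.append("asset")
    if archived:
        types.append("archived_asset")
    query = {"type": {"$in": types}}
    if asset_ids is not None:
        query["_id"] = {"$in": list(asset_ids)}
    if asset_names is not None:
        query["name"] = {"$in": list(asset_names)}
    if parent_ids is not None:
        # New in this commit: filter assets by their visual parent.
        query["data.visualParent"] = {"$in": list(parent_ids)}
    return query
```

Passing `parent_ids` simply adds one more `$in` condition on `data.visualParent`, so it narrows any combination of the existing filters.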
@ -205,6 +216,7 @@ def get_assets(
project_name,
asset_ids=None,
asset_names=None,
parent_ids=None,
archived=False,
fields=None
):
@ -219,6 +231,7 @@ def get_assets(
project_name (str): Name of project where to look for queried entities.
asset_ids (list[str|ObjectId]): Asset ids that should be found.
asset_names (list[str]): Name assets that should be found.
parent_ids (list[str|ObjectId]): Parent asset ids.
archived (bool): Add also archived assets.
fields (list[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
@ -229,7 +242,13 @@ def get_assets(
"""
return _get_assets(
project_name, asset_ids, asset_names, True, archived, fields
project_name,
asset_ids,
asset_names,
parent_ids,
True,
archived,
fields
)
@ -237,6 +256,7 @@ def get_archived_assets(
project_name,
asset_ids=None,
asset_names=None,
parent_ids=None,
fields=None
):
"""Archived assets for specified project by passed filters.
@ -250,6 +270,7 @@ def get_archived_assets(
project_name (str): Name of project where to look for queried entities.
asset_ids (list[str|ObjectId]): Asset ids that should be found.
asset_names (list[str]): Name assets that should be found.
parent_ids (list[str|ObjectId]): Parent asset ids.
fields (list[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
@ -259,7 +280,7 @@ def get_archived_assets(
"""
return _get_assets(
project_name, asset_ids, asset_names, False, True, fields
project_name, asset_ids, asset_names, parent_ids, False, True, fields
)
@ -991,8 +1012,10 @@ def get_representations_parents(project_name, representations):
versions_by_subset_id = collections.defaultdict(list)
subsets_by_subset_id = {}
subsets_by_asset_id = collections.defaultdict(list)
output = {}
for representation in representations:
repre_id = representation["_id"]
output[repre_id] = (None, None, None, None)
version_id = representation["parent"]
repres_by_version_id[version_id].append(representation)
@ -1022,7 +1045,6 @@ def get_representations_parents(project_name, representations):
project = get_project(project_name)
output = {}
for version_id, representations in repres_by_version_id.items():
asset = None
subset = None
@ -1062,7 +1084,7 @@ def get_representation_parents(project_name, representation):
parents_by_repre_id = get_representations_parents(
project_name, [representation]
)
return parents_by_repre_id.get(repre_id)
return parents_by_repre_id[repre_id]
def get_thumbnail_id_from_source(project_name, src_type, src_id):
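The pre-filled `output` dict is what lets `get_representation_parents` switch from `.get()` to direct indexing: every representation id is guaranteed an entry up front. A toy sketch of that pattern (not the real function, which then resolves versions, subsets, assets and the project):

```python
def parents_lookup(representations):
    # Pre-fill the output so every representation id maps to a 4-tuple.
    # Entries the real code can resolve are overwritten later; unresolved
    # ones safely stay as (None, None, None, None), so direct indexing
    # never raises KeyError.
    output = {}
    for repre in representations:
        output[repre["_id"]] = (None, None, None, None)
    return output
```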


@ -1,11 +1,10 @@
from openpype.api import Anatomy
from openpype.lib import (
PreLaunchHook,
EnvironmentPrepData,
prepare_app_environments,
prepare_context_environments
)
from openpype.pipeline import AvalonMongoDB
from openpype.pipeline import AvalonMongoDB, Anatomy
class GlobalHostDataHook(PreLaunchHook):

openpype/host/__init__.py Normal file

@ -0,0 +1,13 @@
from .host import (
HostBase,
IWorkfileHost,
ILoadHost,
INewPublisher,
)
__all__ = (
"HostBase",
"IWorkfileHost",
"ILoadHost",
"INewPublisher",
)

openpype/host/host.py Normal file

@ -0,0 +1,524 @@
import logging
import contextlib
from abc import ABCMeta, abstractproperty, abstractmethod
import six
# NOTE can't import 'typing' because of issues in Maya 2020
# - shiboken crashes on 'typing' module import
class MissingMethodsError(ValueError):
"""Exception raised when a host is missing methods required for a specific workflow.
Args:
host (HostBase): Host implementation that is missing the methods.
missing_methods (list[str]): List of missing methods.
"""
def __init__(self, host, missing_methods):
joined_missing = ", ".join(
['"{}"'.format(item) for item in missing_methods]
)
message = (
"Host \"{}\" is missing methods {}".format(host.name, joined_missing)
)
super(MissingMethodsError, self).__init__(message)
@six.add_metaclass(ABCMeta)
class HostBase(object):
"""Base class for host implementations.
A host is the pipeline implementation of a DCC application. This class
should help to identify what must/should/can be implemented for specific
functionality.
Compared to the 'avalon' concept:
What was previously a set of functions in a host implementation folder.
The host implementation should primarily care about adding the ability of
creation (marking subsets to be published) and optionally about
referencing published representations as containers.
A host may need to extend some functionality, like working with workfiles
or loading. Not all host implementations can allow that; for those cases
the logic can be extended by implementing functions for the given purpose.
Prepared interfaces make it possible to identify what must be implemented
to be able to use that functionality.
- the current statement is that it is not required to inherit from the
interfaces, but all of the methods are validated (only their existence!)
# Installation of host before (avalon concept):
```python
from openpype.pipeline import install_host
import openpype.hosts.maya.api as host
install_host(host)
```
# Installation of host now:
```python
from openpype.pipeline import install_host
from openpype.hosts.maya.api import MayaHost
host = MayaHost()
install_host(host)
```
Todo:
- move content of 'install_host' as method of this class
- register host object
- install legacy_io
- install global plugin paths
- store registered plugin paths to this object
- handle current context (project, asset, task)
- this must be done in many separated steps
- have its own object of host tools instead of using globals
This implementation will probably change over time as more
functionality and responsibility is added.
"""
_log = None
def __init__(self):
"""Initialization of host.
Register DCC callbacks, host specific plugin paths, targets etc.
(Part of what 'install' did in 'avalon' concept.)
Note:
At this moment global "installation" must happen before host
installation. Because of this current limitation it is recommended
to implement 'install' method which is triggered after global
'install'.
"""
pass
@property
def log(self):
if self._log is None:
self._log = logging.getLogger(self.__class__.__name__)
return self._log
@abstractproperty
def name(self):
"""Host name."""
pass
def get_current_context(self):
"""Get current context information.
This method should be used to get the current context of the host. Usage
of this method can be crucial for host implementations in DCCs where
multiple workfiles can be opened at one moment and a change of context
can't be caught properly.
Default implementation returns values from 'legacy_io.Session'.
Returns:
dict: Context with 3 keys 'project_name', 'asset_name' and
'task_name'. All of them can be 'None'.
"""
from openpype.pipeline import legacy_io
if not legacy_io.is_installed():
legacy_io.install()
return {
"project_name": legacy_io.Session["AVALON_PROJECT"],
"asset_name": legacy_io.Session["AVALON_ASSET"],
"task_name": legacy_io.Session["AVALON_TASK"]
}
def get_context_title(self):
"""Context title shown for UI purposes.
Should return current context title if possible.
Note:
This method is used only for UI purposes so it is possible to
return some logical title for contextless cases.
It is not meant for the "Context menu" label.
Returns:
str: Context title.
None: Default title is used based on UI implementation.
"""
# Use current context to fill the context title
current_context = self.get_current_context()
project_name = current_context["project_name"]
asset_name = current_context["asset_name"]
task_name = current_context["task_name"]
items = []
if project_name:
items.append(project_name)
if asset_name:
items.append(asset_name)
if task_name:
items.append(task_name)
if items:
return "/".join(items)
return None
@contextlib.contextmanager
def maintained_selection(self):
"""Run some functionality while keeping the selection unchanged.
This is DCC specific. Some hosts may not allow implementing this
ability, which is why the default implementation is an empty context
manager.
Yields:
None: Yields when ready; selection is restored on exit.
"""
try:
yield
finally:
pass
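For DCCs that do expose a selection API, an override could capture and restore it around the yield. A toy, host-agnostic sketch (the list-based `selection` API is invented for illustration):

```python
import contextlib

class ToySelectionHost:
    """Hypothetical host with a list-based selection API, showing how a
    DCC-specific 'maintained_selection' override could look."""

    def __init__(self):
        self.selection = []

    @contextlib.contextmanager
    def maintained_selection(self):
        stored = list(self.selection)  # capture current selection
        try:
            yield
        finally:
            self.selection = stored  # restore even if the body raised
```

The `try`/`finally` mirrors the base implementation, so the selection is restored even when the wrapped code raises.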
class ILoadHost:
"""Implementation requirements to be able to use references of representations.
Load plugins can do referencing even without implementing the methods
here, but switching and removal of containers would not be possible.
Questions:
- Is list container dependency of host or load plugins?
- Should this be directly in HostBase?
- how to find out if referencing is available?
- do we need to know that?
"""
@staticmethod
def get_missing_load_methods(host):
"""Look for missing methods on "old type" host implementation.
Method is used for validation of implemented functions related to
loading. Checks only existence of methods.
Args:
host (Union[ModuleType, HostBase]): Object of host where to look
for required methods.
Returns:
list[str]: Missing method implementations for loading workflow.
"""
if isinstance(host, ILoadHost):
return []
required = ["ls"]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
return missing
@staticmethod
def validate_load_methods(host):
"""Validate implemented methods of "old type" host for load workflow.
Args:
host (Union[ModuleType, HostBase]): Object of host to validate.
Raises:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = ILoadHost.get_missing_load_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@abstractmethod
def get_containers(self):
"""Retrieve referenced containers from the scene.
This can be implemented in hosts where referencing can be used.
Returns:
list[dict]: Information about loaded containers.
"""
pass
# --- Deprecated method names ---
def ls(self):
"""Deprecated variant of 'get_containers'.
Todo:
Remove when all usages are replaced.
"""
return self.get_containers()
@six.add_metaclass(ABCMeta)
class IWorkfileHost:
"""Implementation requirements to be able to use workfile utilities and the Workfiles tool."""
@staticmethod
def get_missing_workfile_methods(host):
"""Look for missing methods on "old type" host implementation.
Method is used for validation of implemented functions related to
workfiles. Checks only existence of methods.
Args:
host (Union[ModuleType, HostBase]): Object of host where to look
for required methods.
Returns:
list[str]: Missing method implementations for workfiles workflow.
"""
if isinstance(host, IWorkfileHost):
return []
required = [
"open_file",
"save_file",
"current_file",
"has_unsaved_changes",
"file_extensions",
"work_root",
]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
return missing
@staticmethod
def validate_workfile_methods(host):
"""Validate methods of "old type" host for workfiles workflow.
Args:
host (Union[ModuleType, HostBase]): Object of host to validate.
Raises:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = IWorkfileHost.get_missing_workfile_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@abstractmethod
def get_workfile_extensions(self):
"""Extensions that can be used for saving workfiles.
Questions:
This could potentially use 'HostDefinition'.
"""
return []
@abstractmethod
def save_workfile(self, dst_path=None):
"""Save currently opened scene.
Args:
dst_path (str): Where the current scene should be saved. Or use
current path if 'None' is passed.
"""
pass
@abstractmethod
def open_workfile(self, filepath):
"""Open passed filepath in the host.
Args:
filepath (str): Path to workfile.
"""
pass
@abstractmethod
def get_current_workfile(self):
"""Retrieve the path to the currently opened file.
Returns:
str: Path to file which is currently opened.
None: If nothing is opened.
"""
return None
def workfile_has_unsaved_changes(self):
"""Check if the currently opened scene has unsaved changes.
Not all hosts can know if the current scene is saved because the API of
the DCC does not support it.
Returns:
bool: True if the scene has unsaved modifications, False if it is
saved.
None: Can't tell if the workfile has modifications.
"""
return None
def work_root(self, session):
"""Modify workdir per host.
Default implementation keeps workdir untouched.
Warnings:
We must handle this modification in a more sophisticated way because
this can't be called outside the DCC, so opening the last workfile
(calculated before the DCC is launched) is complicated. Also, breaking
the defined work template is not a good idea.
The only place where it's really used and makes sense is Maya. There,
workspace.mel can modify the subfolders where to look for maya files.
Args:
session (dict): Session context data.
Returns:
str: Path to new workdir.
"""
return session["AVALON_WORKDIR"]
# --- Deprecated method names ---
def file_extensions(self):
"""Deprecated variant of 'get_workfile_extensions'.
Todo:
Remove when all usages are replaced.
"""
return self.get_workfile_extensions()
def save_file(self, dst_path=None):
"""Deprecated variant of 'save_workfile'.
Todo:
Remove when all usages are replaced.
"""
self.save_workfile(dst_path)
def open_file(self, filepath):
"""Deprecated variant of 'open_workfile'.
Todo:
Remove when all usages are replaced.
"""
return self.open_workfile(filepath)
def current_file(self):
"""Deprecated variant of 'get_current_workfile'.
Todo:
Remove when all usages are replaced.
"""
return self.get_current_workfile()
def has_unsaved_changes(self):
"""Deprecated variant of 'workfile_has_unsaved_changes'.
Todo:
Remove when all usages are replaced.
"""
return self.workfile_has_unsaved_changes()
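The interface staticmethods above validate old-style hosts purely by method existence. A standalone sketch of that duck-typing check (the `OldStyleHost` object is illustrative; the name list mirrors the one in `get_missing_workfile_methods`):

```python
# Names the workfile interface validates on "old type" hosts.
REQUIRED = [
    "open_file",
    "save_file",
    "current_file",
    "has_unsaved_changes",
    "file_extensions",
    "work_root",
]

class OldStyleHost:
    # A module-like object implementing only part of the workfile API.
    def open_file(self, filepath):
        pass

    def save_file(self, dst_path=None):
        pass

def missing_workfile_methods(host):
    # Existence-only check, exactly like the interface staticmethods:
    # no signatures or behavior are inspected.
    return [name for name in REQUIRED if not hasattr(host, name)]
```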
class INewPublisher:
"""Functions related to new creation system in new publisher.
The new publisher stores not only information about each created instance
but also some global data. At this moment the data relates only to context
publish plugins, but that can be extended in the future.
"""
@staticmethod
def get_missing_publish_methods(host):
"""Look for missing methods on "old type" host implementation.
Method is used for validation of implemented functions related to
new publish creation. Checks only existence of methods.
Args:
host (Union[ModuleType, HostBase]): Host module where to look
for required methods.
Returns:
list[str]: Missing method implementations for the new publisher
workflow.
"""
if isinstance(host, INewPublisher):
return []
required = [
"get_context_data",
"update_context_data",
]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
return missing
@staticmethod
def validate_publish_methods(host):
"""Validate implemented methods of "old type" host.
Args:
host (Union[ModuleType, HostBase]): Host module to validate.
Raises:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = INewPublisher.get_missing_publish_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@abstractmethod
def get_context_data(self):
"""Get global data related to creation-publishing from workfile.
This data is not related to any created instance but to the whole
publishing context. Not saving/returning it causes each reset of
publishing to reset all values to the defaults.
Context data can contain information about enabled/disabled publish
plugins or other values that can be filled by artist.
Returns:
dict: Context data stored using 'update_context_data'.
"""
pass
@abstractmethod
def update_context_data(self, data, changes):
"""Store global context data to workfile.
Called when some values in the context data have changed.
Without storing the values in a way that 'get_context_data' can return
them, each reset of publishing loses the values filled in by the artist.
Best practice is to store the values into the workfile, if possible.
Args:
data (dict): New data as a whole.
changes (dict): Only the data that has changed. Each value is a
tuple of '(<old>, <new>)' values.
"""
pass
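A minimal in-memory sketch of the `get_context_data`/`update_context_data` pair (hypothetical class; a real host would persist the JSON into the workfile rather than a Python attribute):

```python
import json

class MemoryPublishHost:
    """Hypothetical INewPublisher-style host keeping context data as a
    JSON string in memory; a real host would write it into the workfile
    (e.g. a scene attribute) so it survives a publishing reset."""

    def __init__(self):
        self._stored = "{}"

    def get_context_data(self):
        # Return whatever 'update_context_data' persisted last.
        return json.loads(self._stored)

    def update_context_data(self, data, changes):
        # 'changes' maps key -> (old, new); only the full 'data' is stored.
        self._stored = json.dumps(data)
```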


@ -93,7 +93,7 @@ def set_start_end_frames():
# Default scene settings
frameStart = scene.frame_start
frameEnd = scene.frame_end
fps = scene.render.fps
fps = scene.render.fps / scene.render.fps_base
resolution_x = scene.render.resolution_x
resolution_y = scene.render.resolution_y
@ -116,7 +116,8 @@ def set_start_end_frames():
scene.frame_start = frameStart
scene.frame_end = frameEnd
scene.render.fps = fps
scene.render.fps = round(fps)
scene.render.fps_base = round(fps) / fps
scene.render.resolution_x = resolution_x
scene.render.resolution_y = resolution_y
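The replacement stores the fractional frame rate across `fps` and `fps_base`, since Blender's effective rate is `fps / fps_base`. A small sketch of that split (plain Python, no Blender required):

```python
def split_fps(fps):
    # Blender's effective frame rate is fps / fps_base, so a fractional
    # rate like 23.976 is stored as fps=24, fps_base=24/23.976 (~1.001).
    whole = round(fps)
    return whole, whole / fps
```

For integer rates `fps_base` stays 1.0, so the previous behavior is unchanged; only fractional rates gain the `fps_base` correction.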


@ -1,6 +1,7 @@
import os
import re
import subprocess
from platform import system
from openpype.lib import PreLaunchHook
@ -13,12 +14,9 @@ class InstallPySideToBlender(PreLaunchHook):
The pipeline implementation requires a Qt binding installed in
Blender's python packages.
Prelaunch hook can work only on Windows right now.
"""
app_groups = ["blender"]
platforms = ["windows"]
def execute(self):
# Prelaunch hook is not crucial
@ -34,25 +32,28 @@ class InstallPySideToBlender(PreLaunchHook):
# Get blender's python directory
version_regex = re.compile(r"^[2-3]\.[0-9]+$")
platform = system().lower()
executable = self.launch_context.executable.executable_path
if os.path.basename(executable).lower() != "blender.exe":
expected_executable = "blender"
if platform == "windows":
expected_executable += ".exe"
if os.path.basename(executable).lower() != expected_executable:
self.log.info((
"Executable does not lead to blender.exe file. Can't determine"
" blender's python to check/install PySide2."
f"Executable does not lead to {expected_executable} file."
"Can't determine blender's python to check/install PySide2."
))
return
executable_dir = os.path.dirname(executable)
versions_dir = os.path.dirname(executable)
if platform == "darwin":
versions_dir = os.path.join(
os.path.dirname(versions_dir), "Resources"
)
version_subfolders = []
for name in os.listdir(executable_dir):
fullpath = os.path.join(name, executable_dir)
if not os.path.isdir(fullpath):
continue
if not version_regex.match(name):
continue
version_subfolders.append(name)
for dir_entry in os.scandir(versions_dir):
if dir_entry.is_dir() and version_regex.match(dir_entry.name):
version_subfolders.append(dir_entry.name)
if not version_subfolders:
self.log.info(
@ -72,16 +73,21 @@ class InstallPySideToBlender(PreLaunchHook):
version_subfolder = version_subfolders[0]
pythond_dir = os.path.join(
os.path.dirname(executable),
version_subfolder,
"python"
)
python_dir = os.path.join(versions_dir, version_subfolder, "python")
python_lib = os.path.join(python_dir, "lib")
python_version = "python"
if platform != "windows":
for dir_entry in os.scandir(python_lib):
if dir_entry.is_dir() and dir_entry.name.startswith("python"):
python_lib = dir_entry.path
python_version = dir_entry.name
break
# Change PYTHONPATH to contain blender's packages as first
python_paths = [
os.path.join(pythond_dir, "lib"),
os.path.join(pythond_dir, "lib", "site-packages"),
python_lib,
os.path.join(python_lib, "site-packages"),
]
python_path = self.launch_context.env.get("PYTHONPATH") or ""
for path in python_path.split(os.pathsep):
@ -91,7 +97,15 @@ class InstallPySideToBlender(PreLaunchHook):
self.launch_context.env["PYTHONPATH"] = os.pathsep.join(python_paths)
# Get blender's python executable
python_executable = os.path.join(pythond_dir, "bin", "python.exe")
python_bin = os.path.join(python_dir, "bin")
if platform == "windows":
python_executable = os.path.join(python_bin, "python.exe")
else:
python_executable = os.path.join(python_bin, python_version)
# Check for python with enabled 'pymalloc'
if not os.path.exists(python_executable):
python_executable += "m"
if not os.path.exists(python_executable):
self.log.warning(
"Couldn't find python executable for blender. {}".format(
@ -106,7 +120,15 @@ class InstallPySideToBlender(PreLaunchHook):
return
# Install PySide2 in blender's python
self.install_pyside_windows(python_executable)
if platform == "windows":
result = self.install_pyside_windows(python_executable)
else:
result = self.install_pyside(python_executable)
if result:
self.log.info("Successfully installed PySide2 module to blender.")
else:
self.log.warning("Failed to install PySide2 module to blender.")
def install_pyside_windows(self, python_executable):
"""Install PySide2 python module to blender's python.
@ -144,19 +166,41 @@ class InstallPySideToBlender(PreLaunchHook):
lpDirectory=os.path.dirname(python_executable)
)
process_handle = process_info["hProcess"]
obj = win32event.WaitForSingleObject(
process_handle, win32event.INFINITE
)
win32event.WaitForSingleObject(process_handle, win32event.INFINITE)
returncode = win32process.GetExitCodeProcess(process_handle)
if returncode == 0:
self.log.info(
"Successfully installed PySide2 module to blender."
)
return
return returncode == 0
except pywintypes.error:
pass
self.log.warning("Failed to install PySide2 module to blender.")
def install_pyside(self, python_executable):
"""Install PySide2 python module to blender's python."""
try:
# Parameters
# - run pip as a module ("-m pip") to install PySide2; the
#   "--ignore-installed" argument forces the install into blender's
#   site-packages and makes sure it is binary compatible
args = [
python_executable,
"-m",
"pip",
"install",
"--ignore-installed",
"PySide2",
]
process = subprocess.Popen(
args, stdout=subprocess.PIPE, universal_newlines=True
)
process.communicate()
return process.returncode == 0
except PermissionError:
self.log.warning(
"Permission denied with command:"
"\"{}\".".format(" ".join(args))
)
except OSError as error:
self.log.warning(f"OS error has occurred: \"{error}\".")
except subprocess.SubprocessError:
pass
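The `install_pyside` flow above boils down to running pip as a module inside a specific interpreter and checking the exit code. A minimal sketch that queries pip's version instead of installing anything (`run_pip` is a hypothetical helper name, not part of the hook):

```python
import subprocess
import sys

def run_pip(python_executable, *pip_args):
    """Run pip inside the given interpreter; True on a zero exit code."""
    args = [python_executable, "-m", "pip"] + list(pip_args)
    process = subprocess.Popen(
        args, stdout=subprocess.PIPE, universal_newlines=True
    )
    process.communicate()
    return process.returncode == 0

# Query the running interpreter's pip rather than installing anything
print(run_pip(sys.executable, "--version"))
```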
def is_pyside_installed(self, python_executable):
"""Check if PySide2 module is in blender's pip list.
@ -169,7 +213,7 @@ class InstallPySideToBlender(PreLaunchHook):
args = [python_executable, "-m", "pip", "list"]
process = subprocess.Popen(args, stdout=subprocess.PIPE)
stdout, _ = process.communicate()
lines = stdout.decode().split("\r\n")
lines = stdout.decode().splitlines()
# The second line contains dashes that define the maximum module name length.
# The second column of dashes defines the maximum module version length.
package_dashes, *_ = lines[1].split(" ")
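The dashes trick used above for parsing `pip list` output can be shown end to end. This is a simplified sketch assuming the default columnar format; `parse_pip_list` and the sample text are illustrative only:

```python
def parse_pip_list(output):
    """Parse 'pip list' columnar output into {name: version}.

    The second line is made of dashes; the width of the first dash run
    is the width of the 'Package' column.
    """
    lines = output.splitlines()
    if len(lines) < 2:
        return {}
    package_dashes, *_ = lines[1].split(" ")
    name_width = len(package_dashes)
    packages = {}
    for line in lines[2:]:
        if not line.strip():
            continue
        name = line[:name_width].strip()
        version = line[name_width:].strip()
        packages[name] = version
    return packages

sample = (
    "Package    Version\n"
    "---------- -------\n"
    "PySide2    5.15.2\n"
    "pip        22.0.4\n"
)
print(parse_pip_list(sample))
```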

View file

@ -1,8 +1,12 @@
import re
NoneType = type(None)  # 'from types import NoneType' needs Python 3.10+
import pyblish
import openpype.hosts.flame.api as opfapi
from openpype.hosts.flame.otio import flame_export
import openpype.lib as oplib
from openpype.pipeline.editorial import (
is_overlapping_otio_ranges,
get_media_range_with_retimes
)
# # developer reload modules
from pprint import pformat
@ -75,6 +79,12 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
marker_data["handleEnd"]
)
# make sure the value is 0 rather than None
if isinstance(head, NoneType):
head = 0
if isinstance(tail, NoneType):
tail = 0
# make sure value is absolute
if head != 0:
head = abs(head)
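The head/tail handling above collapses to a small normalization rule (None becomes 0, negative values are flipped); a minimal sketch:

```python
def normalize_handle(value):
    """Coerce a handle value to a non-negative number.

    None becomes 0, and negative values are flipped with abs() so a
    mis-signed marker value cannot shrink the clip range.
    """
    if value is None:
        return 0
    return abs(value)

print(normalize_handle(None))  # 0
print(normalize_handle(-12))   # 12
print(normalize_handle(8))     # 8
```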
@ -125,7 +135,8 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
"flameAddTasks": self.add_tasks,
"tasks": {
task["name"]: {"type": task["type"]}
for task in self.add_tasks}
for task in self.add_tasks},
"representations": []
})
self.log.debug("__ inst_data: {}".format(pformat(inst_data)))
@ -271,7 +282,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
# HACK: it is here to serve versions below 2021.1
if not any([head, tail]):
retimed_attributes = oplib.get_media_range_with_retimes(
retimed_attributes = get_media_range_with_retimes(
otio_clip, handle_start, handle_end)
self.log.debug(
">> retimed_attributes: {}".format(retimed_attributes))
@ -370,7 +381,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
continue
if otio_clip.name not in segment.name.get_value():
continue
if oplib.is_overlapping_otio_ranges(
if is_overlapping_otio_ranges(
parent_range, timeline_range, strict=True):
# add pypedata marker to otio_clip metadata

View file

@ -23,6 +23,8 @@ class ExtractSubsetResources(openpype.api.Extractor):
hosts = ["flame"]
# plugin defaults
keep_original_representation = False
default_presets = {
"thumbnail": {
"active": True,
@ -45,7 +47,9 @@ class ExtractSubsetResources(openpype.api.Extractor):
export_presets_mapping = {}
def process(self, instance):
if "representations" not in instance.data:
if not self.keep_original_representation:
# remove previous representations if not needed
instance.data["representations"] = []
# flame objects
@ -82,7 +86,11 @@ class ExtractSubsetResources(openpype.api.Extractor):
# add default preset types for thumbnail and reviewable video,
# then update them with settings, overriding the defaults when
# the same preset names are found there
export_presets = deepcopy(self.default_presets)
_preset_keys = [k.split('_')[0] for k in self.export_presets_mapping]
export_presets = {
k: v for k, v in deepcopy(self.default_presets).items()
if k not in _preset_keys
}
export_presets.update(self.export_presets_mapping)
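The preset-merging step above can be isolated: defaults whose name matches the first underscore-separated segment of a settings key are dropped before the settings overlay is applied. A sketch with hypothetical preset names:

```python
from copy import deepcopy

def merge_presets(default_presets, settings_presets):
    """Overlay settings presets onto defaults.

    A settings key like 'thumbnail_withLUT' claims the 'thumbnail'
    default, so any default whose name matches the first underscore
    segment of a settings key is dropped before merging.
    """
    claimed = {key.split("_")[0] for key in settings_presets}
    merged = {
        name: deepcopy(preset)
        for name, preset in default_presets.items()
        if name not in claimed
    }
    merged.update(settings_presets)
    return merged

defaults = {"thumbnail": {"active": True}, "ftrackreview": {"active": True}}
overrides = {"thumbnail_withLUT": {"active": False}}
print(merge_presets(defaults, overrides))
```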
# loop all preset names and
@ -218,9 +226,14 @@ class ExtractSubsetResources(openpype.api.Extractor):
opfapi.export_clip(
export_dir_path, exporting_clip, preset_path, **export_kwargs)
repr_name = unique_name
# make sure only the first name segment is used if it has underscore
# HACK: `ftrackreview_withLUT` will result in `ftrackreview` only
repr_name = unique_name.split("_")[0]
if (
"thumbnail" in unique_name
or "ftrackreview" in unique_name
):
repr_name = unique_name.split("_")[0]
# create representation data
representation_data = {
@ -259,7 +272,7 @@ class ExtractSubsetResources(openpype.api.Extractor):
if os.path.splitext(f)[-1] == ".mov"
]
# then try if thumbnail is not in unique name
or unique_name == "thumbnail"
or repr_name == "thumbnail"
):
representation_data["files"] = files.pop()
else:

View file

@ -3,9 +3,16 @@ import sys
import re
import contextlib
from bson.objectid import ObjectId
from Qt import QtGui
from openpype.client import (
get_asset_by_name,
get_subset_by_name,
get_last_version_by_subset_id,
get_representation_by_id,
get_representation_by_name,
get_representation_parents,
)
from openpype.pipeline import (
switch_container,
legacy_io,
@ -93,13 +100,16 @@ def switch_item(container,
raise ValueError("Must have at least one change provided to switch.")
# Collect any of current asset, subset and representation if not provided
# so we can use the original name from those.
project_name = legacy_io.active_project()
if any(not x for x in [asset_name, subset_name, representation_name]):
_id = ObjectId(container["representation"])
representation = legacy_io.find_one({
"type": "representation", "_id": _id
})
version, subset, asset, project = legacy_io.parenthood(representation)
repre_id = container["representation"]
representation = get_representation_by_id(project_name, repre_id)
repre_parent_docs = get_representation_parents(representation)
if repre_parent_docs:
version, subset, asset, _ = repre_parent_docs
else:
version = subset = asset = None
if asset_name is None:
asset_name = asset["name"]
@ -111,39 +121,26 @@ def switch_item(container,
representation_name = representation["name"]
# Find the new one
asset = legacy_io.find_one({
"name": asset_name,
"type": "asset"
})
asset = get_asset_by_name(project_name, asset_name, fields=["_id"])
assert asset, ("Could not find asset in the database with the name "
"'%s'" % asset_name)
subset = legacy_io.find_one({
"name": subset_name,
"type": "subset",
"parent": asset["_id"]
})
subset = get_subset_by_name(
project_name, subset_name, asset["_id"], fields=["_id"]
)
assert subset, ("Could not find subset in the database with the name "
"'%s'" % subset_name)
version = legacy_io.find_one(
{
"type": "version",
"parent": subset["_id"]
},
sort=[('name', -1)]
version = get_last_version_by_subset_id(
project_name, subset["_id"], fields=["_id"]
)
assert version, "Could not find a version for {}.{}".format(
asset_name, subset_name
)
representation = legacy_io.find_one({
"name": representation_name,
"type": "representation",
"parent": version["_id"]}
representation = get_representation_by_name(
project_name, representation_name, version["_id"]
)
assert representation, ("Could not find representation in the database "
"with the name '%s'" % representation_name)

View file

@ -1,6 +1,7 @@
import os
import contextlib
from openpype.client import get_version_by_id
from openpype.pipeline import (
load,
legacy_io,
@ -123,7 +124,7 @@ def loader_shift(loader, frame, relative=True):
class FusionLoadSequence(load.LoaderPlugin):
"""Load image sequence into Fusion"""
families = ["imagesequence", "review", "render"]
families = ["imagesequence", "review", "render", "plate"]
representations = ["*"]
label = "Load sequence"
@ -211,10 +212,8 @@ class FusionLoadSequence(load.LoaderPlugin):
path = self._get_first_image(root)
# Get start frame from version data
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
project_name = legacy_io.active_project()
version = get_version_by_id(project_name, representation["parent"])
start = version["data"].get("frameStart")
if start is None:
self.log.warning("Missing start frame for updated version"

View file

@ -4,6 +4,11 @@ import sys
import logging
# Pipeline imports
from openpype.client import (
get_project,
get_asset_by_name,
get_versions,
)
from openpype.pipeline import (
legacy_io,
install_host,
@ -164,9 +169,9 @@ def update_frame_range(comp, representations):
"""
version_ids = [r["parent"] for r in representations]
versions = legacy_io.find({"type": "version", "_id": {"$in": version_ids}})
versions = list(versions)
project_name = legacy_io.active_project()
version_ids = {r["parent"] for r in representations}
versions = list(get_versions(project_name, version_ids))
versions = [v for v in versions
if v["data"].get("frameStart", None) is not None]
@ -203,11 +208,12 @@ def switch(asset_name, filepath=None, new=True):
# Assert asset name exists
# It is better to do this here then to wait till switch_shot does it
asset = legacy_io.find_one({"type": "asset", "name": asset_name})
project_name = legacy_io.active_project()
asset = get_asset_by_name(project_name, asset_name)
assert asset, "Could not find '%s' in the database" % asset_name
# Get current project
self._project = legacy_io.find_one({"type": "project"})
self._project = get_project(project_name)
# Go to comp
if not filepath:

View file

@ -7,6 +7,7 @@ from Qt import QtWidgets, QtCore
import qtawesome as qta
from openpype.client import get_assets
from openpype import style
from openpype.pipeline import (
install_host,
@ -142,7 +143,7 @@ class App(QtWidgets.QWidget):
# Clear any existing items
self._assets.clear()
asset_names = [a["name"] for a in self.collect_assets()]
asset_names = self.collect_asset_names()
completer = QtWidgets.QCompleter(asset_names)
self._assets.setCompleter(completer)
@ -165,8 +166,14 @@ class App(QtWidgets.QWidget):
items = glob.glob("{}/*.comp".format(directory))
return items
def collect_assets(self):
return list(legacy_io.find({"type": "asset"}, {"name": True}))
def collect_asset_names(self):
project_name = legacy_io.active_project()
asset_docs = get_assets(project_name, fields=["name"])
asset_names = {
asset_doc["name"]
for asset_doc in asset_docs
}
return list(asset_names)
def populate_comp_box(self, files):
"""Ensure we display the filename only but the path is stored as well

View file

@ -19,8 +19,9 @@ from openpype.client import (
get_last_versions,
get_representations,
)
from openpype.pipeline import legacy_io
from openpype.api import (Logger, Anatomy, get_anatomy_settings)
from openpype.settings import get_anatomy_settings
from openpype.pipeline import legacy_io, Anatomy
from openpype.api import Logger
from . import tags
try:

View file

@ -1,5 +1,5 @@
import pyblish
import openpype
from openpype.pipeline.editorial import is_overlapping_otio_ranges
from openpype.hosts.hiero import api as phiero
from openpype.hosts.hiero.api.otio import hiero_export
import hiero
@ -275,7 +275,7 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
parent_range = otio_audio.range_in_parent()
# if any overlapping clip is found then return True
if openpype.lib.is_overlapping_otio_ranges(
if is_overlapping_otio_ranges(
parent_range, timeline_range, strict=False):
return True
@ -304,7 +304,7 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
continue
self.log.debug("__ parent_range: {}".format(parent_range))
self.log.debug("__ timeline_range: {}".format(timeline_range))
if openpype.lib.is_overlapping_otio_ranges(
if is_overlapping_otio_ranges(
parent_range, timeline_range, strict=True):
# add pypedata marker to otio_clip metadata

View file

@ -5,8 +5,7 @@ import husdoutputprocessors.base as base
import colorbleed.usdlib as usdlib
from openpype.client import get_asset_by_name
from openpype.api import Anatomy
from openpype.pipeline import legacy_io
from openpype.pipeline import legacy_io, Anatomy
class AvalonURIOutputProcessor(base.OutputProcessorBase):

View file

@ -5,11 +5,11 @@ Anything that isn't defined here is INTERNAL and unreliable for external use.
"""
from .pipeline import (
install,
uninstall,
ls,
containerise,
MayaHost,
)
from .plugin import (
Creator,
@ -40,11 +40,11 @@ from .lib import (
__all__ = [
"install",
"uninstall",
"ls",
"containerise",
"MayaHost",
"Creator",
"Loader",

View file

@ -3224,7 +3224,7 @@ def maintained_time():
cmds.currentTime(ct, edit=True)
def iter_visible_in_frame_range(nodes, start, end):
def iter_visible_nodes_in_range(nodes, start, end):
"""Yield nodes that are visible in start-end frame range.
- Ignores intermediateObjects completely.

View file

@ -1,13 +1,15 @@
import os
import sys
import errno
import logging
import contextlib
from maya import utils, cmds, OpenMaya
import maya.api.OpenMaya as om
import pyblish.api
from openpype.settings import get_project_settings
from openpype.host import HostBase, IWorkfileHost, ILoadHost
import openpype.hosts.maya
from openpype.tools.utils import host_tools
from openpype.lib import (
@ -28,6 +30,14 @@ from openpype.pipeline import (
)
from openpype.hosts.maya.lib import copy_workspace_mel
from . import menu, lib
from .workio import (
open_file,
save_file,
file_extensions,
has_unsaved_changes,
work_root,
current_file
)
log = logging.getLogger("openpype.hosts.maya")
@ -40,49 +50,121 @@ INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
AVALON_CONTAINERS = ":AVALON_CONTAINERS"
self = sys.modules[__name__]
self._ignore_lock = False
self._events = {}
class MayaHost(HostBase, IWorkfileHost, ILoadHost):
name = "maya"
def install():
from openpype.settings import get_project_settings
def __init__(self):
super(MayaHost, self).__init__()
self._op_events = {}
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
# process path mapping
dirmap_processor = MayaDirmap("maya", project_settings)
dirmap_processor.process_dirmap()
def install(self):
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
# process path mapping
dirmap_processor = MayaDirmap("maya", project_settings)
dirmap_processor.process_dirmap()
pyblish.api.register_plugin_path(PUBLISH_PATH)
pyblish.api.register_host("mayabatch")
pyblish.api.register_host("mayapy")
pyblish.api.register_host("maya")
pyblish.api.register_plugin_path(PUBLISH_PATH)
pyblish.api.register_host("mayabatch")
pyblish.api.register_host("mayapy")
pyblish.api.register_host("maya")
register_loader_plugin_path(LOAD_PATH)
register_creator_plugin_path(CREATE_PATH)
register_inventory_action_path(INVENTORY_PATH)
log.info(PUBLISH_PATH)
register_loader_plugin_path(LOAD_PATH)
register_creator_plugin_path(CREATE_PATH)
register_inventory_action_path(INVENTORY_PATH)
self.log.info(PUBLISH_PATH)
log.info("Installing callbacks ... ")
register_event_callback("init", on_init)
self.log.info("Installing callbacks ... ")
register_event_callback("init", on_init)
if lib.IS_HEADLESS:
log.info(("Running in headless mode, skipping Maya "
"save/open/new callback installation.."))
if lib.IS_HEADLESS:
self.log.info((
"Running in headless mode, skipping Maya save/open/new"
" callback installation.."
))
return
return
_set_project()
_register_callbacks()
_set_project()
self._register_callbacks()
menu.install()
menu.install()
register_event_callback("save", on_save)
register_event_callback("open", on_open)
register_event_callback("new", on_new)
register_event_callback("before.save", on_before_save)
register_event_callback("taskChanged", on_task_changed)
register_event_callback("workfile.save.before", before_workfile_save)
register_event_callback("save", on_save)
register_event_callback("open", on_open)
register_event_callback("new", on_new)
register_event_callback("before.save", on_before_save)
register_event_callback("taskChanged", on_task_changed)
register_event_callback("workfile.save.before", before_workfile_save)
def open_workfile(self, filepath):
return open_file(filepath)
def save_workfile(self, filepath=None):
return save_file(filepath)
def work_root(self, session):
return work_root(session)
def get_current_workfile(self):
return current_file()
def workfile_has_unsaved_changes(self):
return has_unsaved_changes()
def get_workfile_extensions(self):
return file_extensions()
def get_containers(self):
return ls()
@contextlib.contextmanager
def maintained_selection(self):
with lib.maintained_selection():
yield
def _register_callbacks(self):
for handler, event in self._op_events.copy().items():
if event is None:
continue
try:
OpenMaya.MMessage.removeCallback(event)
self._op_events[handler] = None
except RuntimeError as exc:
self.log.info(exc)
self._op_events[_on_scene_save] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kBeforeSave, _on_scene_save
)
self._op_events[_before_scene_save] = (
OpenMaya.MSceneMessage.addCheckCallback(
OpenMaya.MSceneMessage.kBeforeSaveCheck,
_before_scene_save
)
)
self._op_events[_on_scene_new] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterNew, _on_scene_new
)
self._op_events[_on_maya_initialized] = (
OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kMayaInitialized,
_on_maya_initialized
)
)
self._op_events[_on_scene_open] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterOpen, _on_scene_open
)
self.log.info("Installed event handler _on_scene_save..")
self.log.info("Installed event handler _before_scene_save..")
self.log.info("Installed event handler _on_scene_new..")
self.log.info("Installed event handler _on_maya_initialized..")
self.log.info("Installed event handler _on_scene_open..")
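The `_op_events` bookkeeping above follows a remove-before-add pattern so callback registration is safe to run more than once. A Maya-independent sketch, with a hypothetical `Bus` class standing in for `OpenMaya.MSceneMessage`:

```python
class CallbackRegistry:
    """Track (handler -> token) pairs so callbacks can be re-registered.

    Before adding a callback, any previously stored token for the same
    handler is removed, which makes registration idempotent.
    """

    def __init__(self, remove_func):
        self._events = {}
        self._remove = remove_func

    def register(self, handler, add_func):
        # Drop a stale registration for this handler first
        token = self._events.get(handler)
        if token is not None:
            self._remove(token)
        self._events[handler] = add_func(handler)

# Hypothetical event bus standing in for OpenMaya.MSceneMessage
class Bus:
    def __init__(self):
        self.handlers = {}
        self._next = 0

    def add(self, handler):
        self._next += 1
        self.handlers[self._next] = handler
        return self._next

    def remove(self, token):
        del self.handlers[token]

bus = Bus()
registry = CallbackRegistry(bus.remove)
on_save = lambda: None
registry.register(on_save, bus.add)
registry.register(on_save, bus.add)  # re-register: old token is removed
print(len(bus.handlers))  # 1
```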
def _set_project():
@ -106,44 +188,6 @@ def _set_project():
cmds.workspace(workdir, openWorkspace=True)
def _register_callbacks():
for handler, event in self._events.copy().items():
if event is None:
continue
try:
OpenMaya.MMessage.removeCallback(event)
self._events[handler] = None
except RuntimeError as e:
log.info(e)
self._events[_on_scene_save] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kBeforeSave, _on_scene_save
)
self._events[_before_scene_save] = OpenMaya.MSceneMessage.addCheckCallback(
OpenMaya.MSceneMessage.kBeforeSaveCheck, _before_scene_save
)
self._events[_on_scene_new] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterNew, _on_scene_new
)
self._events[_on_maya_initialized] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kMayaInitialized, _on_maya_initialized
)
self._events[_on_scene_open] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterOpen, _on_scene_open
)
log.info("Installed event handler _on_scene_save..")
log.info("Installed event handler _before_scene_save..")
log.info("Installed event handler _on_scene_new..")
log.info("Installed event handler _on_maya_initialized..")
log.info("Installed event handler _on_scene_open..")
def _on_maya_initialized(*args):
emit_event("init")
@ -475,7 +519,6 @@ def on_task_changed():
workdir = legacy_io.Session["AVALON_WORKDIR"]
if os.path.exists(workdir):
log.info("Updating Maya workspace for task change to %s", workdir)
_set_project()
# Set Maya fileDialog's start-dir to /scenes

View file

@ -9,8 +9,8 @@ from openpype.pipeline import (
LoaderPlugin,
get_representation_path,
AVALON_CONTAINER_ID,
Anatomy,
)
from openpype.api import Anatomy
from openpype.settings import get_project_settings
from .pipeline import containerise
from . import lib

View file

@ -296,12 +296,9 @@ def update_package_version(container, version):
assert current_representation is not None, "This is a bug"
repre_parents = get_representation_parents(
project_name, current_representation
version_doc, subset_doc, asset_doc, project_doc = (
get_representation_parents(project_name, current_representation)
)
version_doc = subset_doc = asset_doc = project_doc = None
if repre_parents:
version_doc, subset_doc, asset_doc, project_doc = repre_parents
if version == -1:
new_version = get_last_version_by_subset_id(

View file

@ -15,6 +15,8 @@ class CreateReview(plugin.Creator):
keepImages = False
isolate = False
imagePlane = True
Width = 0
Height = 0
transparency = [
"preset",
"simple",
@ -33,6 +35,8 @@ class CreateReview(plugin.Creator):
for key, value in animation_data.items():
data[key] = value
data["review_width"] = self.Width
data["review_height"] = self.Height
data["isolate"] = self.isolate
data["keepImages"] = self.keepImages
data["imagePlane"] = self.imagePlane

View file

@ -0,0 +1,139 @@
import os
from openpype.api import get_project_settings
from openpype.pipeline import (
load,
get_representation_path
)
# TODO aiVolume doesn't automatically set velocity fps correctly, set manually?
class LoadVDBtoArnold(load.LoaderPlugin):
"""Load OpenVDB for Arnold in aiVolume"""
families = ["vdbcache"]
representations = ["vdb"]
label = "Load VDB to Arnold"
icon = "cloud"
color = "orange"
def load(self, context, name, namespace, data):
from maya import cmds
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
try:
family = context["representation"]["context"]["family"]
except ValueError:
family = "vdbcache"
# Check if the plugin for arnold is available on the pc
try:
cmds.loadPlugin("mtoa", quiet=True)
except Exception as exc:
self.log.error("Encountered exception:\n%s" % exc)
return
asset = context['asset']
asset_name = asset["name"]
namespace = namespace or unique_namespace(
asset_name + "_",
prefix="_" if asset_name[0].isdigit() else "",
suffix="_",
)
# Root group
label = "{}:{}".format(namespace, name)
root = cmds.group(name=label, empty=True)
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
c = colors.get(family)
if c is not None:
cmds.setAttr(root + ".useOutlinerColor", 1)
cmds.setAttr(root + ".outlinerColor",
(float(c[0]) / 255),
(float(c[1]) / 255),
(float(c[2]) / 255)
)
# Create aiVolume
grid_node = cmds.createNode("aiVolume",
name="{}Shape".format(root),
parent=root)
self._set_path(grid_node,
path=self.fname,
representation=context["representation"])
# Lock the shape node so the user can't delete the transform/shape
# as if it was referenced
cmds.lockNode(grid_node, lock=True)
nodes = [root, grid_node]
self[:] = nodes
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, representation):
from maya import cmds
path = get_representation_path(representation)
# Find the aiVolume node
members = cmds.sets(container['objectName'], query=True)
grid_nodes = cmds.ls(members, type="aiVolume", long=True)
assert len(grid_nodes) == 1, "This is a bug"
# Update the aiVolume path
self._set_path(grid_nodes[0], path=path, representation=representation)
# Update container representation
cmds.setAttr(container["objectName"] + ".representation",
str(representation["_id"]),
type="string")
def switch(self, container, representation):
self.update(container, representation)
def remove(self, container):
from maya import cmds
# Get all members of the avalon container, ensure they are unlocked
# and delete everything
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass
@staticmethod
def _set_path(grid_node,
path,
representation):
"""Apply the settings for the VDB path to the aiVolume node"""
from maya import cmds
if not os.path.exists(path):
raise RuntimeError("Path does not exist: %s" % path)
is_sequence = bool(representation["context"].get("frame"))
cmds.setAttr(grid_node + ".useFrameExtension", is_sequence)
# Set file path
cmds.setAttr(grid_node + ".filename", path, type="string")

View file

@ -1,11 +1,21 @@
import os
from openpype.api import get_project_settings
from openpype.pipeline import load
from openpype.pipeline import (
load,
get_representation_path
)
class LoadVDBtoRedShift(load.LoaderPlugin):
"""Load OpenVDB in a Redshift Volume Shape"""
"""Load OpenVDB in a Redshift Volume Shape
Note that the RedshiftVolumeShape is created without a RedshiftVolume
shader assigned. To get the Redshift volume to render correctly assign
a RedshiftVolume shader (in the Hypershade) and set the density, scatter
and emission channels to the channel names of the volumes in the VDB file.
"""
families = ["vdbcache"]
representations = ["vdb"]
@ -55,7 +65,7 @@ class LoadVDBtoRedShift(load.LoaderPlugin):
# Root group
label = "{}:{}".format(namespace, name)
root = cmds.group(name=label, empty=True)
root = cmds.createNode("transform", name=label)
settings = get_project_settings(os.environ['AVALON_PROJECT'])
colors = settings['maya']['load']['colors']
@ -74,9 +84,9 @@ class LoadVDBtoRedShift(load.LoaderPlugin):
name="{}RVSShape".format(label),
parent=root)
cmds.setAttr("{}.fileName".format(volume_node),
self.fname,
type="string")
self._set_path(volume_node,
path=self.fname,
representation=context["representation"])
nodes = [root, volume_node]
self[:] = nodes
@ -87,3 +97,56 @@ class LoadVDBtoRedShift(load.LoaderPlugin):
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, representation):
from maya import cmds
path = get_representation_path(representation)
# Find the RedshiftVolumeShape node
members = cmds.sets(container['objectName'], query=True)
grid_nodes = cmds.ls(members, type="RedshiftVolumeShape", long=True)
assert len(grid_nodes) == 1, "This is a bug"
# Update the RedshiftVolumeShape path
self._set_path(grid_nodes[0], path=path, representation=representation)
# Update container representation
cmds.setAttr(container["objectName"] + ".representation",
str(representation["_id"]),
type="string")
def remove(self, container):
from maya import cmds
# Get all members of the avalon container, ensure they are unlocked
# and delete everything
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass
def switch(self, container, representation):
self.update(container, representation)
@staticmethod
def _set_path(grid_node,
path,
representation):
"""Apply the settings for the VDB path to the RedshiftVolumeShape"""
from maya import cmds
if not os.path.exists(path):
raise RuntimeError("Path does not exist: %s" % path)
is_sequence = bool(representation["context"].get("frame"))
cmds.setAttr(grid_node + ".useFrameExtension", is_sequence)
# Set file path
cmds.setAttr(grid_node + ".fileName", path, type="string")

View file

@ -71,6 +71,8 @@ class CollectReview(pyblish.api.InstancePlugin):
data['handles'] = instance.data.get('handles', None)
data['step'] = instance.data['step']
data['fps'] = instance.data['fps']
data['review_width'] = instance.data['review_width']
data['review_height'] = instance.data['review_height']
data["isolate"] = instance.data["isolate"]
cmds.setAttr(str(instance) + '.active', 1)
self.log.debug('data {}'.format(instance.context[i].data))

View file

@ -7,7 +7,7 @@ from openpype.hosts.maya.api.lib import (
extract_alembic,
suspended_refresh,
maintained_selection,
iter_visible_in_frame_range
iter_visible_nodes_in_range
)
@ -83,7 +83,7 @@ class ExtractAnimation(openpype.api.Extractor):
# flag does not filter out those that are only hidden on some
# frames as it counts "animated" or "connected" visibilities as
# if it's always visible.
nodes = list(iter_visible_in_frame_range(nodes,
nodes = list(iter_visible_nodes_in_range(nodes,
start=start,
end=end))

View file

@ -30,7 +30,7 @@ class ExtractCameraAlembic(openpype.api.Extractor):
# get cameras
members = instance.data['setMembers']
cameras = cmds.ls(members, leaf=True, shapes=True, long=True,
cameras = cmds.ls(members, leaf=True, long=True,
dag=True, type="camera")
# validate required settings
@ -61,10 +61,30 @@ class ExtractCameraAlembic(openpype.api.Extractor):
if bake_to_worldspace:
job_str += ' -worldSpace'
for member in member_shapes:
self.log.info(f"processing {member}")
# if baked, drop the camera hierarchy to maintain
# clean output and backwards compatibility
camera_root = cmds.listRelatives(
camera, parent=True, fullPath=True)[0]
job_str += ' -root {0}'.format(camera_root)
for member in members:
descendants = cmds.listRelatives(member,
allDescendents=True,
fullPath=True) or []
shapes = cmds.ls(descendants, shapes=True,
noIntermediate=True, long=True)
cameras = cmds.ls(shapes, type="camera", long=True)
if cameras:
if not set(shapes) - set(cameras):
continue
self.log.warning((
"Camera hierarchy contains additional geometry. "
"Extraction will fail.")
)
transform = cmds.listRelatives(
member, parent=True, fullPath=True)[0]
member, parent=True, fullPath=True)
transform = transform[0] if transform else member
job_str += ' -root {0}'.format(transform)
job_str += ' -file "{0}"'.format(path)

View file

@ -172,18 +172,19 @@ class ExtractCameraMayaScene(openpype.api.Extractor):
dag=True,
shapes=True,
long=True)
attrs = {"backgroundColorR": 0.0,
"backgroundColorG": 0.0,
"backgroundColorB": 0.0,
"overscan": 1.0}
# Fix PLN-178: Don't allow background color to be non-black
for cam in cmds.ls(
for cam, (attr, value) in itertools.product(cmds.ls(
baked_camera_shapes, type="camera", dag=True,
shapes=True, long=True):
attrs = {"backgroundColorR": 0.0,
"backgroundColorG": 0.0,
"backgroundColorB": 0.0,
"overscan": 1.0}
for attr, value in attrs.items():
plug = "{0}.{1}".format(cam, attr)
unlock(plug)
cmds.setAttr(plug, value)
long=True), attrs.items()):
plug = "{0}.{1}".format(cam, attr)
unlock(plug)
cmds.setAttr(plug, value)
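The refactor above replaces two nested loops with a single `itertools.product` loop; the pairing it produces can be verified outside Maya (the camera paths below are made up):

```python
import itertools

cams = ["|camA|camAShape", "|camB|camBShape"]
attrs = {
    "backgroundColorR": 0.0,
    "backgroundColorG": 0.0,
    "backgroundColorB": 0.0,
    "overscan": 1.0,
}

# product() yields every (camera, (attr, value)) pair, flattening the
# former nested for-loops into a single loop
flat = [
    "{0}.{1}".format(cam, attr)
    for cam, (attr, value) in itertools.product(cams, attrs.items())
]
print(len(flat))  # 2 cameras * 4 attributes = 8 plugs
```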
self.log.info("Performing extraction..")
cmds.select(cmds.ls(members, dag=True,

View file

@ -1,4 +1,7 @@
import os
import glob
import contextlib
import clique
import capture
@ -48,8 +51,32 @@ class ExtractPlayblast(openpype.api.Extractor):
['override_viewport_options']
)
preset = lib.load_capture_preset(data=self.capture_preset)
# Grab capture presets from the project settings
capture_presets = self.capture_preset
# Set resolution variables from capture presets
width_preset = capture_presets["Resolution"]["width"]
height_preset = capture_presets["Resolution"]["height"]
# Set resolution variables from asset values
asset_data = instance.data["assetEntity"]["data"]
asset_width = asset_data.get("resolutionWidth")
asset_height = asset_data.get("resolutionHeight")
review_instance_width = instance.data.get("review_width")
review_instance_height = instance.data.get("review_height")
preset['camera'] = camera
# Resolution priority: use the instance review values
# if set, otherwise fall back to the capture preset
# from project settings, and finally to the asset
# resolution
if review_instance_width and review_instance_height:
preset['width'] = review_instance_width
preset['height'] = review_instance_height
elif width_preset and height_preset:
preset['width'] = width_preset
preset['height'] = height_preset
elif asset_width and asset_height:
preset['width'] = asset_width
preset['height'] = asset_height
preset['start_frame'] = start
preset['end_frame'] = end
camera_option = preset.get("camera_option", {})
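The resolution selection added above is a three-level fallback. A minimal sketch of that priority order, with a hypothetical helper name (`pick_resolution` is not OpenPype API):

```python
def pick_resolution(review, preset, asset):
    """Return (width, height) using the same priority as the hunk above:
    review instance values first, then the capture preset, then asset data.
    Zero or None dimensions are treated as "not set"."""
    for width, height in (review, preset, asset):
        if width and height:
            return width, height
    return None

# Preset wins when the review instance has no resolution set
assert pick_resolution((0, 0), (1920, 1080), (960, 540)) == (1920, 1080)
# Review instance values take precedence over everything else
assert pick_resolution((1280, 720), (1920, 1080), (960, 540)) == (1280, 720)
```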

View file

@ -7,7 +7,7 @@ from openpype.hosts.maya.api.lib import (
extract_alembic,
suspended_refresh,
maintained_selection,
iter_visible_in_frame_range
iter_visible_nodes_in_range
)
@ -86,7 +86,7 @@ class ExtractAlembic(openpype.api.Extractor):
# flag does not filter out those that are only hidden on some
# frames as it counts "animated" or "connected" visibilities as
# if it's always visible.
nodes = list(iter_visible_in_frame_range(nodes,
nodes = list(iter_visible_nodes_in_range(nodes,
start=start,
end=end))

View file

@ -58,7 +58,29 @@ class ExtractThumbnail(openpype.api.Extractor):
"overscan": 1.0,
"depthOfField": cmds.getAttr("{0}.depthOfField".format(camera)),
}
capture_presets = capture_preset
# Set resolution variables from capture presets
width_preset = capture_presets["Resolution"]["width"]
height_preset = capture_presets["Resolution"]["height"]
# Set resolution variables from asset values
asset_data = instance.data["assetEntity"]["data"]
asset_width = asset_data.get("resolutionWidth")
asset_height = asset_data.get("resolutionHeight")
review_instance_width = instance.data.get("review_width")
review_instance_height = instance.data.get("review_height")
# Prefer the review instance resolution when set; otherwise fall
# back to the capture preset resolution, then to the asset
# resolution.
if review_instance_width and review_instance_height:
preset['width'] = review_instance_width
preset['height'] = review_instance_height
elif width_preset and height_preset:
preset['width'] = width_preset
preset['height'] = height_preset
elif asset_width and asset_height:
preset['width'] = asset_width
preset['height'] = asset_height
stagingDir = self.staging_dir(instance)
filename = "{0}".format(instance.name)
path = os.path.join(stagingDir, filename)

View file

@ -51,18 +51,7 @@ class ValidateCameraContents(pyblish.api.InstancePlugin):
raise RuntimeError("No cameras found in empty instance.")
if not cls.validate_shapes:
cls.log.info("Not validating shapes in the content.")
for member in members:
parents = cmds.ls(member, long=True)[0].split("|")[1:-1]
parents_long_named = [
"|".join(parents[:i]) for i in range(1, 1 + len(parents))
]
if cameras[0] in parents_long_named:
cls.log.error(
"{} is parented under camera {}".format(
member, cameras[0]))
invalid.extend(member)
cls.log.info("not validating shapes in the content")
return invalid
# non-camera shapes
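The validation removed above built every ancestor path of a Maya-style long name to test whether a member is parented under the camera. A standalone sketch of that ancestor computation (mirroring the removed list comprehension, not OpenPype API):

```python
def ancestor_paths(long_name):
    """Build every ancestor path of a Maya-style long name,
    as the removed list comprehension above did."""
    parents = long_name.split("|")[1:-1]
    return ["|".join(parents[:i]) for i in range(1, 1 + len(parents))]

assert ancestor_paths("|grp|cam|camShape") == ["grp", "grp|cam"]
```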

View file

@ -27,6 +27,7 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
"yeticache"]
optional = True
actions = [openpype.api.RepairAction]
exclude_families = []
def process(self, instance):
context = instance.context
@ -56,7 +57,9 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
# compare with data on instance
errors = []
if [ef for ef in self.exclude_families
if instance.data["family"] in ef]:
return
if(inst_start != frame_start_handle):
errors.append("Instance start frame [ {} ] doesn't "
"match the one set on instance [ {} ]: "

View file

@ -1,7 +1,7 @@
import pyblish.api
import openpype.api
from openpype.hosts.maya.api.lib import iter_visible_in_frame_range
from openpype.hosts.maya.api.lib import iter_visible_nodes_in_range
import openpype.hosts.maya.api.action
@ -40,7 +40,7 @@ class ValidateAlembicVisibleOnly(pyblish.api.InstancePlugin):
nodes = instance[:]
start, end = cls.get_frame_range(instance)
if not any(iter_visible_in_frame_range(nodes, start, end)):
if not any(iter_visible_nodes_in_range(nodes, start, end)):
# Return the nodes we have considered so the user can identify
# them with the select invalid action
return nodes

View file

@ -1,10 +1,11 @@
import os
from openpype.api import get_project_settings
from openpype.pipeline import install_host
from openpype.hosts.maya import api
from openpype.hosts.maya.api import MayaHost
from maya import cmds
install_host(api)
host = MayaHost()
install_host(host)
print("starting OpenPype usersetup")

View file

@ -20,21 +20,23 @@ from openpype.client import (
)
from openpype.api import (
Logger,
Anatomy,
BuildWorkfile,
get_version_from_path,
get_anatomy_settings,
get_workdir_data,
get_asset,
get_current_project_settings,
)
from openpype.tools.utils import host_tools
from openpype.lib.path_tools import HostDirmap
from openpype.settings import get_project_settings
from openpype.settings import (
get_project_settings,
get_anatomy_settings,
)
from openpype.modules import ModulesManager
from openpype.pipeline import (
discover_legacy_creator_plugins,
legacy_io,
Anatomy,
)
from . import gizmo_menu

View file

@ -104,7 +104,10 @@ class ExtractReviewDataMov(openpype.api.Extractor):
self, instance, o_name, o_data["extension"],
multiple_presets)
if "render.farm" in families:
if (
"render.farm" in families or
"prerender.farm" in families
):
if "review" in instance.data["families"]:
instance.data["families"].remove("review")

View file

@ -4,6 +4,7 @@ import nuke
import copy
import pyblish.api
import six
import openpype
from openpype.hosts.nuke.api import (
@ -12,7 +13,6 @@ from openpype.hosts.nuke.api import (
get_view_process_node
)
class ExtractSlateFrame(openpype.api.Extractor):
"""Extracts movie and thumbnail with baked in luts
@ -236,6 +236,48 @@ class ExtractSlateFrame(openpype.api.Extractor):
int(slate_first_frame)
)
# Add file to representation files
# - get write node
write_node = instance.data["writeNode"]
# - evaluate filepaths for first frame and slate frame
first_filename = os.path.basename(
write_node["file"].evaluate(first_frame))
slate_filename = os.path.basename(
write_node["file"].evaluate(slate_first_frame))
# Find matching representation based on first filename
matching_repre = None
is_sequence = None
for repre in instance.data["representations"]:
files = repre["files"]
if (
not isinstance(files, six.string_types)
and first_filename in files
):
matching_repre = repre
is_sequence = True
break
elif files == first_filename:
matching_repre = repre
is_sequence = False
break
if not matching_repre:
self.log.info((
"Matching representation was not found."
" Representation files were not filled with slate."
))
return
# Add frame to matching representation files
if not is_sequence:
matching_repre["files"] = [first_filename, slate_filename]
elif slate_filename not in matching_repre["files"]:
matching_repre["files"].insert(0, slate_filename)
self.log.warning("Added slate frame to representation files")
def add_comment_slate_node(self, instance, node):
comment = instance.context.data.get("comment")

View file

@ -35,6 +35,7 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
if node is None:
return
instance.data["writeNode"] = node
self.log.debug("checking instance: {}".format(instance))
# Determine defined file type

View file

@ -4,7 +4,7 @@ import re
import os
import contextlib
from opentimelineio import opentime
import openpype
from openpype.pipeline.editorial import is_overlapping_otio_ranges
from ..otio import davinci_export as otio_export
@ -824,7 +824,7 @@ def get_otio_clip_instance_data(otio_timeline, timeline_item_data):
continue
if otio_clip.name not in timeline_item.GetName():
continue
if openpype.lib.is_overlapping_otio_ranges(
if is_overlapping_otio_ranges(
parent_range, timeline_range, strict=True):
# add pypedata marker to otio_clip metadata

View file

@ -1,6 +1,10 @@
from copy import deepcopy
from importlib import reload
from openpype.client import (
get_version_by_id,
get_last_version_by_subset_id,
)
from openpype.hosts import resolve
from openpype.pipeline import (
get_representation_path,
@ -96,10 +100,8 @@ class LoadClip(resolve.TimelineItemLoader):
namespace = container['namespace']
timeline_item_data = resolve.get_pype_timeline_item_by_name(namespace)
timeline_item = timeline_item_data["clip"]["item"]
version = legacy_io.find_one({
"type": "version",
"_id": representation["parent"]
})
project_name = legacy_io.active_project()
version = get_version_by_id(project_name, representation["parent"])
version_data = version.get("data", {})
version_name = version.get("name", None)
colorspace = version_data.get("colorspace", None)
@ -138,19 +140,22 @@ class LoadClip(resolve.TimelineItemLoader):
@classmethod
def set_item_color(cls, timeline_item, version):
# define version name
version_name = version.get("name", None)
# get all versions in list
versions = legacy_io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
max_version = max(versions)
project_name = legacy_io.active_project()
last_version_doc = get_last_version_by_subset_id(
project_name,
version["parent"],
fields=["name"]
)
if last_version_doc:
last_version = last_version_doc["name"]
else:
last_version = None
# set clip colour
if version_name == max_version:
if version_name == last_version:
timeline_item.SetClipColor(cls.clip_color_last)
else:
timeline_item.SetClipColor(cls.clip_color)

View file

@ -10,8 +10,8 @@ from openpype.lib import (
from openpype.pipeline import (
registered_host,
legacy_io,
Anatomy,
)
from openpype.api import Anatomy
from openpype.hosts.tvpaint.api import lib, pipeline, plugin

View file

@ -2,7 +2,7 @@ import os
import unreal
from openpype.api import Anatomy
from openpype.pipeline import Anatomy
from openpype.hosts.unreal.api import pipeline

View file

@ -1,6 +1,5 @@
import unreal
from openpype.pipeline import legacy_io
from openpype.hosts.unreal.api import pipeline
from openpype.hosts.unreal.api.plugin import Creator

View file

@ -6,7 +6,7 @@ import unreal
from unreal import EditorAssetLibrary
from unreal import EditorLevelLibrary
from unreal import EditorLevelUtils
from openpype.client import get_assets, get_asset_by_name
from openpype.pipeline import (
AVALON_CONTAINER_ID,
legacy_io,
@ -24,14 +24,6 @@ class CameraLoader(plugin.Loader):
icon = "cube"
color = "orange"
def _get_data(self, asset_name):
asset_doc = legacy_io.find_one({
"type": "asset",
"name": asset_name
})
return asset_doc.get("data")
def _set_sequence_hierarchy(
self, seq_i, seq_j, min_frame_j, max_frame_j
):
@ -177,6 +169,19 @@ class CameraLoader(plugin.Loader):
EditorLevelLibrary.save_all_dirty_levels()
EditorLevelLibrary.load_level(level)
project_name = legacy_io.active_project()
# TODO refactor
# - creation of the hierarchy should be a function in the unreal integration
# - it's used in multiple loaders but must not be loader's logic
# - hard to say what the purpose of the loop is
# - variable names do not match their meaning
# - why is the scene stored to sequences?
# - asset documents vs. elements
# - clean up variable names in the whole function
# - e.g. 'asset', 'asset_name', 'asset_data', 'asset_doc'
# - really inefficient queries of asset documents
# - an existing asset in the scene is considered as "with correct values"
# - variable 'elements' is modified during its loop
# Get all the sequences in the hierarchy. It will create them, if
# they don't exist.
sequences = []
@ -201,26 +206,30 @@ class CameraLoader(plugin.Loader):
factory=unreal.LevelSequenceFactoryNew()
)
asset_data = legacy_io.find_one({
"type": "asset",
"name": h.split('/')[-1]
})
id = asset_data.get('_id')
asset_data = get_asset_by_name(
project_name,
h.split('/')[-1],
fields=["_id", "data.fps"]
)
start_frames = []
end_frames = []
elements = list(
legacy_io.find({"type": "asset", "data.visualParent": id}))
elements = list(get_assets(
project_name,
parent_ids=[asset_data["_id"]],
fields=["_id", "data.clipIn", "data.clipOut"]
))
for e in elements:
start_frames.append(e.get('data').get('clipIn'))
end_frames.append(e.get('data').get('clipOut'))
elements.extend(legacy_io.find({
"type": "asset",
"data.visualParent": e.get('_id')
}))
elements.extend(get_assets(
project_name,
parent_ids=[e["_id"]],
fields=["_id", "data.clipIn", "data.clipOut"]
))
min_frame = min(start_frames)
max_frame = max(end_frames)
@ -256,7 +265,7 @@ class CameraLoader(plugin.Loader):
sequences[i], sequences[i + 1],
frame_ranges[i + 1][0], frame_ranges[i + 1][1])
data = self._get_data(asset)
data = get_asset_by_name(project_name, asset)["data"]
cam_seq.set_display_rate(
unreal.FrameRate(data.get("fps"), 1.0))
cam_seq.set_playback_start(0)

View file

@ -1,6 +1,5 @@
# -*- coding: utf-8 -*-
"""Loader for layouts."""
import os
import json
from pathlib import Path
@ -12,6 +11,7 @@ from unreal import AssetToolsHelpers
from unreal import FBXImportType
from unreal import MathLibrary as umath
from openpype.client import get_asset_by_name, get_assets
from openpype.pipeline import (
discover_loader_plugins,
loaders_from_representation,
@ -88,15 +88,6 @@ class LayoutLoader(plugin.Loader):
return None
@staticmethod
def _get_data(asset_name):
asset_doc = legacy_io.find_one({
"type": "asset",
"name": asset_name
})
return asset_doc.get("data")
@staticmethod
def _set_sequence_hierarchy(
seq_i, seq_j, max_frame_i, min_frame_j, max_frame_j, map_paths
@ -364,26 +355,30 @@ class LayoutLoader(plugin.Loader):
factory=unreal.LevelSequenceFactoryNew()
)
asset_data = legacy_io.find_one({
"type": "asset",
"name": h_dir.split('/')[-1]
})
id = asset_data.get('_id')
project_name = legacy_io.active_project()
asset_data = get_asset_by_name(
project_name,
h_dir.split('/')[-1],
fields=["_id", "data.fps"]
)
start_frames = []
end_frames = []
elements = list(
legacy_io.find({"type": "asset", "data.visualParent": id}))
elements = list(get_assets(
project_name,
parent_ids=[asset_data["_id"]],
fields=["_id", "data.clipIn", "data.clipOut"]
))
for e in elements:
start_frames.append(e.get('data').get('clipIn'))
end_frames.append(e.get('data').get('clipOut'))
elements.extend(legacy_io.find({
"type": "asset",
"data.visualParent": e.get('_id')
}))
elements.extend(get_assets(
project_name,
parent_ids=[e["_id"]],
fields=["_id", "data.clipIn", "data.clipOut"]
))
min_frame = min(start_frames)
max_frame = max(end_frames)
@ -659,7 +654,8 @@ class LayoutLoader(plugin.Loader):
frame_ranges[i + 1][0], frame_ranges[i + 1][1],
[level])
data = self._get_data(asset)
project_name = legacy_io.active_project()
data = get_asset_by_name(project_name, asset)["data"]
shot.set_display_rate(
unreal.FrameRate(data.get("fps"), 1.0))
shot.set_playback_start(0)

View file

@ -3,7 +3,7 @@ from pathlib import Path
import unreal
from openpype.api import Anatomy
from openpype.pipeline import Anatomy
from openpype.hosts.unreal.api import pipeline
import pyblish.api

View file

@ -9,6 +9,7 @@ import unreal
from unreal import EditorLevelLibrary as ell
from unreal import EditorAssetLibrary as eal
from openpype.client import get_representation_by_name
import openpype.api
from openpype.pipeline import legacy_io
@ -34,6 +35,7 @@ class ExtractLayout(openpype.api.Extractor):
"Wrong level loaded"
json_data = []
project_name = legacy_io.active_project()
for member in instance[:]:
actor = ell.get_actor_reference(member)
@ -57,17 +59,13 @@ class ExtractLayout(openpype.api.Extractor):
self.log.error("AssetContainer not found.")
return
parent = eal.get_metadata_tag(asset_container, "parent")
parent_id = eal.get_metadata_tag(asset_container, "parent")
family = eal.get_metadata_tag(asset_container, "family")
self.log.info("Parent: {}".format(parent))
blend = legacy_io.find_one(
{
"type": "representation",
"parent": ObjectId(parent),
"name": "blend"
},
projection={"_id": True})
self.log.info("Parent: {}".format(parent_id))
blend = get_representation_by_name(
project_name, "blend", parent_id, fields=["_id"]
)
blend_id = blend["_id"]
json_element = {}

File diff suppressed because it is too large

View file

@ -20,10 +20,7 @@ from openpype.settings.constants import (
METADATA_KEYS,
M_DYNAMIC_KEY_LABEL
)
from . import (
PypeLogger,
Anatomy
)
from . import PypeLogger
from .profiles_filtering import filter_profiles
from .local_settings import get_openpype_username
from .avalon_context import (
@ -1305,7 +1302,7 @@ def get_app_environments_for_context(
dict: Environments for passed context and application.
"""
from openpype.pipeline import AvalonMongoDB
from openpype.pipeline import AvalonMongoDB, Anatomy
# Avalon database connection
dbcon = AvalonMongoDB()

View file

@ -14,7 +14,6 @@ from openpype.settings import (
get_project_settings,
get_system_settings
)
from .anatomy import Anatomy
from .profiles_filtering import filter_profiles
from .events import emit_event
from .path_templates import StringTemplate
@ -593,6 +592,7 @@ def get_workdir_with_workdir_data(
))
if not anatomy:
from openpype.pipeline import Anatomy
anatomy = Anatomy(project_name)
if not template_key:
@ -604,7 +604,10 @@ def get_workdir_with_workdir_data(
anatomy_filled = anatomy.format(workdir_data)
# Output is TemplateResult object which contain useful data
return anatomy_filled[template_key]["folder"]
path = anatomy_filled[template_key]["folder"]
if path:
path = os.path.normpath(path)
return path
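The hunk above stops returning the raw `TemplateResult` and normalizes the formatted workdir path first, so mixed or doubled separators collapse. A minimal illustration with a made-up path:

```python
import os

# Normalizing a formatted template result, as the hunk above now does,
# collapses redundant separators and "." components
raw = "projects/show//seq/./shot/work"
normalized = os.path.normpath(raw)

assert normalized == os.path.join("projects", "show", "seq", "shot", "work")
```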
def get_workdir(
@ -635,6 +638,7 @@ def get_workdir(
TemplateResult: Workdir path.
"""
if not anatomy:
from openpype.pipeline import Anatomy
anatomy = Anatomy(project_doc["name"])
workdir_data = get_workdir_data(
@ -747,6 +751,8 @@ def compute_session_changes(
@with_pipeline_io
def get_workdir_from_session(session=None, template_key=None):
from openpype.pipeline import Anatomy
if session is None:
session = legacy_io.Session
project_name = session["AVALON_PROJECT"]
@ -762,7 +768,10 @@ def get_workdir_from_session(session=None, template_key=None):
host_name,
project_name=project_name
)
return anatomy_filled[template_key]["folder"]
path = anatomy_filled[template_key]["folder"]
if path:
path = os.path.normpath(path)
return path
@with_pipeline_io
@ -853,6 +862,8 @@ def create_workfile_doc(asset_doc, task_name, filename, workdir, dbcon=None):
dbcon (AvalonMongoDB): Optionally enter avalon AvalonMongoDB object and
`legacy_io` is used if not entered.
"""
from openpype.pipeline import Anatomy
# Use legacy_io if dbcon is not entered
if not dbcon:
dbcon = legacy_io
@ -1673,6 +1684,7 @@ def _get_task_context_data_for_anatomy(
"""
if anatomy is None:
from openpype.pipeline import Anatomy
anatomy = Anatomy(project_doc["name"])
asset_name = asset_doc["name"]
@ -1741,6 +1753,7 @@ def get_custom_workfile_template_by_context(
"""
if anatomy is None:
from openpype.pipeline import Anatomy
anatomy = Anatomy(project_doc["name"])
# get project, asset, task anatomy context data

View file

@ -1,289 +1,102 @@
import os
import re
import clique
from .import_utils import discover_host_vendor_module
"""Code related to editorial utility functions was moved
to 'openpype.pipeline.editorial'; please change your imports as soon as
possible. The file will probably be removed in OpenPype 3.14.*
"""
try:
import opentimelineio as otio
from opentimelineio import opentime as _ot
except ImportError:
if not os.environ.get("AVALON_APP"):
raise
otio = discover_host_vendor_module("opentimelineio")
_ot = discover_host_vendor_module("opentimelineio.opentime")
import warnings
import functools
def otio_range_to_frame_range(otio_range):
start = _ot.to_frames(
otio_range.start_time, otio_range.start_time.rate)
end = start + _ot.to_frames(
otio_range.duration, otio_range.duration.rate)
return start, end
class EditorialDeprecatedWarning(DeprecationWarning):
pass
def otio_range_with_handles(otio_range, instance):
handle_start = instance.data["handleStart"]
handle_end = instance.data["handleEnd"]
handles_duration = handle_start + handle_end
fps = float(otio_range.start_time.rate)
start = _ot.to_frames(otio_range.start_time, fps)
duration = _ot.to_frames(otio_range.duration, fps)
def editorial_deprecated(func):
"""Mark functions as deprecated.
return _ot.TimeRange(
start_time=_ot.RationalTime((start - handle_start), fps),
duration=_ot.RationalTime((duration + handles_duration), fps)
)
def is_overlapping_otio_ranges(test_otio_range, main_otio_range, strict=False):
test_start, test_end = otio_range_to_frame_range(test_otio_range)
main_start, main_end = otio_range_to_frame_range(main_otio_range)
covering_exp = bool(
(test_start <= main_start) and (test_end >= main_end)
)
inside_exp = bool(
(test_start >= main_start) and (test_end <= main_end)
)
overlaying_right_exp = bool(
(test_start <= main_end) and (test_end >= main_end)
)
overlaying_left_exp = bool(
(test_end >= main_start) and (test_start <= main_start)
)
if not strict:
return any((
covering_exp,
inside_exp,
overlaying_right_exp,
overlaying_left_exp
))
else:
return covering_exp
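The overlap test being deprecated above boils down to comparing frame-range endpoints. A self-contained sketch with plain `(start, end)` tuples instead of OTIO `TimeRange` objects:

```python
def is_overlapping(test_range, main_range, strict=False):
    """Frame-range overlap test mirroring the function above;
    ranges are (start, end) tuples instead of OTIO TimeRanges."""
    test_start, test_end = test_range
    main_start, main_end = main_range
    covering = test_start <= main_start and test_end >= main_end
    if strict:
        # strict mode only accepts a range fully covering the main range
        return covering
    inside = test_start >= main_start and test_end <= main_end
    overlap_right = test_start <= main_end and test_end >= main_end
    overlap_left = test_end >= main_start and test_start <= main_start
    return any((covering, inside, overlap_right, overlap_left))

assert is_overlapping((0, 100), (10, 20)) is True
assert is_overlapping((0, 100), (10, 20), strict=True) is True
assert is_overlapping((15, 18), (10, 20), strict=True) is False
assert is_overlapping((30, 40), (10, 20)) is False
```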
def convert_to_padded_path(path, padding):
It will result in a warning being emitted when the function is used.
"""
Return correct padding in sequence string
Args:
path (str): path url or simple file name
padding (int): number of padding
Returns:
type: string with reformatted path
Example:
convert_to_padded_path("plate.%d.exr") > plate.%04d.exr
"""
if "%d" in path:
path = re.sub("%d", "%0{padding}d".format(padding=padding), path)
return path
@functools.wraps(func)
def new_func(*args, **kwargs):
warnings.simplefilter("always", EditorialDeprecatedWarning)
warnings.warn(
(
"Call to deprecated function '{}'."
" Function was moved to 'openpype.pipeline.editorial'."
).format(func.__name__),
category=EditorialDeprecatedWarning,
stacklevel=2
)
return func(*args, **kwargs)
return new_func
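The `editorial_deprecated` decorator added above is a standard `functools.wraps` deprecation shim: warn on every call, then delegate. A runnable sketch of the same pattern (`old_api` is a made-up example function):

```python
import functools
import warnings


class EditorialDeprecatedWarning(DeprecationWarning):
    pass


def editorial_deprecated(func):
    """Same pattern as the decorator above: warn on each call, delegate."""
    @functools.wraps(func)
    def new_func(*args, **kwargs):
        # "always" makes the warning fire on every call, not just the first
        warnings.simplefilter("always", EditorialDeprecatedWarning)
        warnings.warn(
            "Call to deprecated function '{}'.".format(func.__name__),
            category=EditorialDeprecatedWarning,
            stacklevel=2,
        )
        return func(*args, **kwargs)
    return new_func


@editorial_deprecated
def old_api(x):
    return x * 2


with warnings.catch_warnings(record=True) as caught:
    result = old_api(21)

assert result == 42
assert any(issubclass(w.category, EditorialDeprecatedWarning) for w in caught)
```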
def trim_media_range(media_range, source_range):
"""
Trim input media range with clip source range.
@editorial_deprecated
def otio_range_to_frame_range(*args, **kwargs):
from openpype.pipeline.editorial import otio_range_to_frame_range
Args:
media_range (otio._ot._ot.TimeRange): available range of media
source_range (otio._ot._ot.TimeRange): clip required range
Returns:
otio._ot._ot.TimeRange: trimmed media range
"""
rw_media_start = _ot.RationalTime(
media_range.start_time.value + source_range.start_time.value,
media_range.start_time.rate
)
rw_media_duration = _ot.RationalTime(
source_range.duration.value,
media_range.duration.rate
)
return _ot.TimeRange(
rw_media_start, rw_media_duration)
return otio_range_to_frame_range(*args, **kwargs)
def range_from_frames(start, duration, fps):
"""
Returns otio time range.
@editorial_deprecated
def otio_range_with_handles(*args, **kwargs):
from openpype.pipeline.editorial import otio_range_with_handles
Args:
start (int): frame start
duration (int): frame duration
fps (float): frame range
Returns:
otio._ot._ot.TimeRange: created range
"""
return _ot.TimeRange(
_ot.RationalTime(start, fps),
_ot.RationalTime(duration, fps)
)
return otio_range_with_handles(*args, **kwargs)
def frames_to_secons(frames, framerate):
"""
Returns seconds.
@editorial_deprecated
def is_overlapping_otio_ranges(*args, **kwargs):
from openpype.pipeline.editorial import is_overlapping_otio_ranges
Args:
frames (int): frame
framerate (float): frame rate
Returns:
float: second value
"""
rt = _ot.from_frames(frames, framerate)
return _ot.to_seconds(rt)
return is_overlapping_otio_ranges(*args, **kwargs)
def frames_to_timecode(frames, framerate):
rt = _ot.from_frames(frames, framerate)
return _ot.to_timecode(rt)
@editorial_deprecated
def convert_to_padded_path(*args, **kwargs):
from openpype.pipeline.editorial import convert_to_padded_path
return convert_to_padded_path(*args, **kwargs)
def make_sequence_collection(path, otio_range, metadata):
"""
Make collection from path otio range and otio metadata.
@editorial_deprecated
def trim_media_range(*args, **kwargs):
from openpype.pipeline.editorial import trim_media_range
Args:
path (str): path to image sequence with `%d`
otio_range (otio._ot._ot.TimeRange): range to be used
metadata (dict): data where padding value can be found
Returns:
list: dir_path (str): path to sequence, collection object
"""
if "%" not in path:
return None
file_name = os.path.basename(path)
dir_path = os.path.dirname(path)
head = file_name.split("%")[0]
tail = os.path.splitext(file_name)[-1]
first, last = otio_range_to_frame_range(otio_range)
collection = clique.Collection(
head=head, tail=tail, padding=metadata["padding"])
collection.indexes.update([i for i in range(first, last)])
return dir_path, collection
return trim_media_range(*args, **kwargs)
def _sequence_resize(source, length):
step = float(len(source) - 1) / (length - 1)
for i in range(length):
low, ratio = divmod(i * step, 1)
high = low + 1 if ratio > 0 else low
yield (1 - ratio) * source[int(low)] + ratio * source[int(high)]
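The `_sequence_resize` generator above resamples a sequence of values to a new length by linear interpolation. The same logic as a plain function, with a couple of worked values:

```python
def sequence_resize(source, length):
    """Linear resampling of `source` to `length` samples,
    as in the generator above."""
    step = float(len(source) - 1) / (length - 1)
    result = []
    for i in range(length):
        # split position into integer index and fractional blend ratio
        low, ratio = divmod(i * step, 1)
        high = low + 1 if ratio > 0 else low
        result.append(
            (1 - ratio) * source[int(low)] + ratio * source[int(high)]
        )
    return result

# Upsampling two endpoints inserts the interpolated midpoint
assert sequence_resize([0.0, 10.0], 3) == [0.0, 5.0, 10.0]
# Resampling to the same length returns the same values
assert sequence_resize([1.0, 2.0, 3.0], 3) == [1.0, 2.0, 3.0]
```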
@editorial_deprecated
def range_from_frames(*args, **kwargs):
from openpype.pipeline.editorial import range_from_frames
return range_from_frames(*args, **kwargs)
def get_media_range_with_retimes(otio_clip, handle_start, handle_end):
source_range = otio_clip.source_range
available_range = otio_clip.available_range()
media_in = available_range.start_time.value
media_out = available_range.end_time_inclusive().value
@editorial_deprecated
def frames_to_secons(*args, **kwargs):
from openpype.pipeline.editorial import frames_to_seconds
# modifiers
time_scalar = 1.
offset_in = 0
offset_out = 0
time_warp_nodes = []
return frames_to_seconds(*args, **kwargs)
# Check for speed effects and adjust playback speed accordingly
for effect in otio_clip.effects:
if isinstance(effect, otio.schema.LinearTimeWarp):
time_scalar = effect.time_scalar
elif isinstance(effect, otio.schema.FreezeFrame):
# For freeze frame, playback speed must be set after range
time_scalar = 0.
@editorial_deprecated
def frames_to_timecode(*args, **kwargs):
from openpype.pipeline.editorial import frames_to_timecode
elif isinstance(effect, otio.schema.TimeEffect):
# For freeze frame, playback speed must be set after range
name = effect.name
effect_name = effect.effect_name
if "TimeWarp" not in effect_name:
continue
metadata = effect.metadata
lookup = metadata.get("lookup")
if not lookup:
continue
return frames_to_timecode(*args, **kwargs)
# time warp node
tw_node = {
"Class": "TimeWarp",
"name": name
}
tw_node.update(metadata)
tw_node["lookup"] = list(lookup)
# get first and last frame offsets
offset_in += lookup[0]
offset_out += lookup[-1]
@editorial_deprecated
def make_sequence_collection(*args, **kwargs):
from openpype.pipeline.editorial import make_sequence_collection
# add to timewarp nodes
time_warp_nodes.append(tw_node)
return make_sequence_collection(*args, **kwargs)
# multiply by time scalar
offset_in *= time_scalar
offset_out *= time_scalar
# flip offsets if reversed speed
if time_scalar < 0:
_offset_in = offset_out
_offset_out = offset_in
offset_in = _offset_in
offset_out = _offset_out
@editorial_deprecated
def get_media_range_with_retimes(*args, **kwargs):
from openpype.pipeline.editorial import get_media_range_with_retimes
# scale handles
handle_start *= abs(time_scalar)
handle_end *= abs(time_scalar)
# flip handles if reversed speed
if time_scalar < 0:
_handle_start = handle_end
_handle_end = handle_start
handle_start = _handle_start
handle_end = _handle_end
source_in = source_range.start_time.value
media_in_trimmed = (
media_in + source_in + offset_in)
media_out_trimmed = (
media_in + source_in + (
((source_range.duration.value - 1) * abs(
time_scalar)) + offset_out))
# calculate available handles
if (media_in_trimmed - media_in) < handle_start:
handle_start = (media_in_trimmed - media_in)
if (media_out - media_out_trimmed) < handle_end:
handle_end = (media_out - media_out_trimmed)
# create version data
version_data = {
"versionData": {
"retime": True,
"speed": time_scalar,
"timewarps": time_warp_nodes,
"handleStart": round(handle_start),
"handleEnd": round(handle_end)
}
}
returning_dict = {
"mediaIn": media_in_trimmed,
"mediaOut": media_out_trimmed,
"handleStart": round(handle_start),
"handleEnd": round(handle_end)
}
# add version data only if retime
if time_warp_nodes or time_scalar != 1.:
returning_dict.update(version_data)
return returning_dict
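The core arithmetic of the retime function above is the trimmed media range: the out point advances by the retimed source duration. A hedged sketch of just that computation (the helper name and flat arguments are simplifications of the OTIO-based original):

```python
def trimmed_media_range(media_in, source_in, duration, time_scalar=1.0,
                        offset_in=0, offset_out=0):
    """Sketch of the trimmed-range arithmetic in the function above."""
    media_in_trimmed = media_in + source_in + offset_in
    media_out_trimmed = (
        media_in + source_in
        + (duration - 1) * abs(time_scalar) + offset_out
    )
    return media_in_trimmed, media_out_trimmed

# 5 source frames starting at frame 10 of the media, no retime
assert trimmed_media_range(0, 10, 5) == (10, 14)
# doubled speed consumes twice as many media frames
assert trimmed_media_range(0, 10, 5, time_scalar=2.0) == (10, 18)
```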
return get_media_range_with_retimes(*args, **kwargs)

View file

@ -1,25 +0,0 @@
import os
import sys
import importlib
from .log import PypeLogger as Logger
log = Logger().get_logger(__name__)
def discover_host_vendor_module(module_name):
host = os.environ["AVALON_APP"]
pype_root = os.environ["OPENPYPE_REPOS_ROOT"]
main_module = module_name.split(".")[0]
module_path = os.path.join(
pype_root, "hosts", host, "vendor", main_module)
log.debug(
"Importing module from host vendor path: `{}`".format(module_path))
if not os.path.exists(module_path):
log.warning(
"Path not existing: `{}`".format(module_path))
return None
sys.path.insert(1, module_path)
return importlib.import_module(module_name)

View file

@ -9,7 +9,6 @@ import platform
from openpype.client import get_project
from openpype.settings import get_project_settings
from .anatomy import Anatomy
from .profiles_filtering import filter_profiles
log = logging.getLogger(__name__)
@ -227,6 +226,7 @@ def fill_paths(path_list, anatomy):
def create_project_folders(basic_paths, project_name):
from openpype.pipeline import Anatomy
anatomy = Anatomy(project_name)
concat_paths = concatenate_splitted_paths(basic_paths, anatomy)

View file

@ -5,7 +5,12 @@ from bson.objectid import ObjectId
from aiohttp.web_response import Response
from openpype.pipeline import AvalonMongoDB
from openpype.client import (
get_projects,
get_project,
get_assets,
get_asset_by_name,
)
from openpype_modules.webserver.base_routes import RestApiEndpoint
@ -14,19 +19,13 @@ class _RestApiEndpoint(RestApiEndpoint):
self.resource = resource
super(_RestApiEndpoint, self).__init__()
@property
def dbcon(self):
return self.resource.dbcon
class AvalonProjectsEndpoint(_RestApiEndpoint):
async def get(self) -> Response:
output = []
for project_name in self.dbcon.database.collection_names():
project_doc = self.dbcon.database[project_name].find_one({
"type": "project"
})
output.append(project_doc)
output = [
project_doc
for project_doc in get_projects()
]
return Response(
status=200,
body=self.resource.encode(output),
@ -36,9 +35,7 @@ class AvalonProjectsEndpoint(_RestApiEndpoint):
class AvalonProjectEndpoint(_RestApiEndpoint):
async def get(self, project_name) -> Response:
project_doc = self.dbcon.database[project_name].find_one({
"type": "project"
})
project_doc = get_project(project_name)
if project_doc:
return Response(
status=200,
@ -53,9 +50,7 @@ class AvalonProjectEndpoint(_RestApiEndpoint):
class AvalonAssetsEndpoint(_RestApiEndpoint):
async def get(self, project_name) -> Response:
asset_docs = list(self.dbcon.database[project_name].find({
"type": "asset"
}))
asset_docs = list(get_assets(project_name))
return Response(
status=200,
body=self.resource.encode(asset_docs),
@ -65,10 +60,7 @@ class AvalonAssetsEndpoint(_RestApiEndpoint):
class AvalonAssetEndpoint(_RestApiEndpoint):
async def get(self, project_name, asset_name) -> Response:
asset_doc = self.dbcon.database[project_name].find_one({
"type": "asset",
"name": asset_name
})
asset_doc = get_asset_by_name(project_name, asset_name)
if asset_doc:
return Response(
status=200,
@ -88,9 +80,6 @@ class AvalonRestApiResource:
self.module = avalon_module
self.server_manager = server_manager
self.dbcon = AvalonMongoDB()
self.dbcon.install()
self.prefix = "/avalon"
self.endpoint_defs = (

View file

@ -1,16 +1,9 @@
from openpype.api import Logger
from openpype.pipeline import (
legacy_io,
LauncherAction,
)
from openpype.client import get_asset_by_name
from openpype.pipeline import LauncherAction
from openpype_modules.clockify.clockify_api import ClockifyAPI
log = Logger.get_logger(__name__)
class ClockifyStart(LauncherAction):
name = "clockify_start_timer"
label = "Clockify - Start Timer"
icon = "clockify_icon"
@ -24,20 +17,19 @@ class ClockifyStart(LauncherAction):
return False
def process(self, session, **kwargs):
project_name = session['AVALON_PROJECT']
asset_name = session['AVALON_ASSET']
task_name = session['AVALON_TASK']
project_name = session["AVALON_PROJECT"]
asset_name = session["AVALON_ASSET"]
task_name = session["AVALON_TASK"]
description = asset_name
asset = legacy_io.find_one({
'type': 'asset',
'name': asset_name
})
if asset is not None:
desc_items = asset.get('data', {}).get('parents', [])
asset_doc = get_asset_by_name(
project_name, asset_name, fields=["data.parents"]
)
if asset_doc is not None:
desc_items = asset_doc.get("data", {}).get("parents", [])
desc_items.append(asset_name)
desc_items.append(task_name)
description = '/'.join(desc_items)
description = "/".join(desc_items)
project_id = self.clockapi.get_project_id(project_name)
tag_ids = []
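The rewritten `process` assembles the timer description by joining the asset's parent folders with the asset and task names; a standalone sketch of that assembly (the helper name is illustrative, not part of the plugin):

```python
def build_description(parents, asset_name, task_name):
    """Join asset parents, asset name and task name into one description."""
    # mirrors the plugin: parents from "data.parents", then asset, then task
    desc_items = list(parents or [])
    desc_items.append(asset_name)
    desc_items.append(task_name)
    return "/".join(desc_items)

description = build_description(["shots", "sq01"], "sh010", "animation")
```

With no parents the description degrades gracefully to `asset/task`.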

View file

@ -1,11 +1,6 @@
from openpype.client import get_projects, get_project
from openpype_modules.clockify.clockify_api import ClockifyAPI
from openpype.api import Logger
from openpype.pipeline import (
legacy_io,
LauncherAction,
)
log = Logger.get_logger(__name__)
from openpype.pipeline import LauncherAction
class ClockifySync(LauncherAction):
@ -22,39 +17,36 @@ class ClockifySync(LauncherAction):
return self.have_permissions
def process(self, session, **kwargs):
project_name = session.get('AVALON_PROJECT', None)
project_name = session.get("AVALON_PROJECT") or ""
projects_to_sync = []
if project_name.strip() == '' or project_name is None:
for project in legacy_io.projects():
projects_to_sync.append(project)
if project_name.strip():
projects_to_sync = [get_project(project_name)]
else:
project = legacy_io.find_one({'type': 'project'})
projects_to_sync.append(project)
projects_to_sync = get_projects()
projects_info = {}
for project in projects_to_sync:
task_types = project['config']['tasks'].keys()
projects_info[project['name']] = task_types
task_types = project["config"]["tasks"].keys()
projects_info[project["name"]] = task_types
clockify_projects = self.clockapi.get_projects()
for project_name, task_types in projects_info.items():
if project_name not in clockify_projects:
response = self.clockapi.add_project(project_name)
if 'id' not in response:
self.log.error('Project {} can\'t be created'.format(
project_name
))
continue
project_id = response['id']
else:
project_id = clockify_projects[project_name]
if project_name in clockify_projects:
continue
response = self.clockapi.add_project(project_name)
if "id" not in response:
self.log.error("Project {} can't be created".format(
project_name
))
continue
clockify_workspace_tags = self.clockapi.get_tags()
for task_type in task_types:
if task_type not in clockify_workspace_tags:
response = self.clockapi.add_tag(task_type)
if 'id' not in response:
if "id" not in response:
self.log.error('Task {} can\'t be created'.format(
task_type
))

View file

@ -710,7 +710,9 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
new_payload["JobInfo"].update(tiles_data["JobInfo"])
new_payload["PluginInfo"].update(tiles_data["PluginInfo"])
job_hash = hashlib.sha256("{}_{}".format(file_index, file))
self.log.info("hashing {} - {}".format(file_index, file))
job_hash = hashlib.sha256(
("{}_{}".format(file_index, file)).encode("utf-8"))
frame_jobs[frame] = job_hash.hexdigest()
new_payload["JobInfo"]["ExtraInfo0"] = job_hash.hexdigest()
new_payload["JobInfo"]["ExtraInfo1"] = file
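The change above fixes `hashlib` usage for Python 3, where `sha256` accepts only bytes; a minimal sketch of the encode-then-hash pattern (the file name is illustrative):

```python
import hashlib

def tile_job_hash(file_index, file_path):
    # Python 3: hashlib rejects str input, so encode the key to UTF-8 first
    payload = "{}_{}".format(file_index, file_path).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

digest = tile_job_hash(0, "render.0001.exr")
```

The hex digest is deterministic, which matters here because it is stored in the job's `ExtraInfo0` and reused later.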

View file

@ -1045,7 +1045,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
get publish_path
Args:
anatomy (pype.lib.anatomy.Anatomy):
anatomy (openpype.pipeline.anatomy.Anatomy):
template_data (dict): pre-calculated collected data for process
asset (string): asset name
subset (string): subset name (actually group name of subset)

View file

@ -2,11 +2,11 @@ import re
import subprocess
from openpype.client import get_asset_by_id, get_asset_by_name
from openpype.settings import get_project_settings
from openpype.pipeline import Anatomy
from openpype_modules.ftrack.lib import BaseEvent
from openpype_modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY
from openpype.api import Anatomy, get_project_settings
class UserAssigmentEvent(BaseEvent):
"""

View file

@ -1,7 +1,7 @@
import os
import collections
import copy
from openpype.api import Anatomy
from openpype.pipeline import Anatomy
from openpype_modules.ftrack.lib import BaseAction, statics_icon

View file

@ -11,9 +11,8 @@ from openpype.client import (
get_versions,
get_representations
)
from openpype.api import Anatomy
from openpype.lib import StringTemplate, TemplateUnsolved
from openpype.pipeline import AvalonMongoDB
from openpype.pipeline import AvalonMongoDB, Anatomy
from openpype_modules.ftrack.lib import BaseAction, statics_icon

View file

@ -10,12 +10,13 @@ from openpype.client import (
get_versions,
get_representations
)
from openpype.api import Anatomy, config
from openpype.pipeline import Anatomy
from openpype_modules.ftrack.lib import BaseAction, statics_icon
from openpype_modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY
from openpype_modules.ftrack.lib.custom_attributes import (
query_custom_attributes
)
from openpype.lib import config
from openpype.lib.delivery import (
path_from_representation,
get_format_dict,

View file

@ -11,13 +11,13 @@ from openpype.client import (
get_project,
get_assets,
)
from openpype.api import get_project_settings
from openpype.settings import get_project_settings
from openpype.lib import (
get_workfile_template_key,
get_workdir_data,
Anatomy,
StringTemplate,
)
from openpype.pipeline import Anatomy
from openpype_modules.ftrack.lib import BaseAction, statics_icon
from openpype_modules.ftrack.lib.avalon_sync import create_chunks

View file

@ -11,10 +11,10 @@ from openpype.client import (
get_version_by_name,
get_representation_by_name
)
from openpype.api import Anatomy
from openpype.pipeline import (
get_representation_path,
AvalonMongoDB,
Anatomy,
)
from openpype_modules.ftrack.lib import BaseAction, statics_icon

View file

@ -14,8 +14,7 @@ from openpype.client import (
get_representations
)
from openpype_modules.ftrack.lib import BaseAction, statics_icon
from openpype.api import Anatomy
from openpype.pipeline import AvalonMongoDB
from openpype.pipeline import AvalonMongoDB, Anatomy
from openpype_modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY

View file

@ -1,3 +1,4 @@
import html
from Qt import QtCore, QtWidgets
import qtawesome
from .models import LogModel, LogsFilterProxy
@ -286,7 +287,7 @@ class OutputWidget(QtWidgets.QWidget):
if level == "debug":
line_f = (
"<font color=\"Yellow\"> -"
" <font color=\"Lime\">{{ {loggerName} }}: ["
" <font color=\"Lime\">{{ {logger_name} }}: ["
" <font color=\"White\">{message}"
" <font color=\"Lime\">]"
)
@ -299,7 +300,7 @@ class OutputWidget(QtWidgets.QWidget):
elif level == "warning":
line_f = (
"<font color=\"Yellow\">*** WRN:"
" <font color=\"Lime\"> >>> {{ {loggerName} }}: ["
" <font color=\"Lime\"> >>> {{ {logger_name} }}: ["
" <font color=\"White\">{message}"
" <font color=\"Lime\">]"
)
@ -307,16 +308,25 @@ class OutputWidget(QtWidgets.QWidget):
line_f = (
"<font color=\"Red\">!!! ERR:"
" <font color=\"White\">{timestamp}"
" <font color=\"Lime\">>>> {{ {loggerName} }}: ["
" <font color=\"Lime\">>>> {{ {logger_name} }}: ["
" <font color=\"White\">{message}"
" <font color=\"Lime\">]"
)
logger_name = log["loggerName"]
timestamp = ""
if not show_timecode:
timestamp = log["timestamp"]
message = log["message"]
exc = log.get("exception")
if exc:
log["message"] = exc["message"]
message = exc["message"]
line = line_f.format(**log)
line = line_f.format(
message=html.escape(message),
logger_name=logger_name,
timestamp=timestamp
)
if show_timecode:
timestamp = log["timestamp"]
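The widget renders log records as HTML, so `html.escape` keeps `<`, `>` and `&` in messages from being parsed as markup; a simplified sketch (the format string is abbreviated from the widget above):

```python
import html

def format_log_line(logger_name, message):
    # escape the user-controlled message before substituting it into HTML
    line_f = '<font color="Lime">{{ {logger_name} }}: <font color="White">{message}'
    return line_f.format(logger_name=logger_name, message=html.escape(message))

line = format_log_line("core", "missing <node> & friends")
```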

View file

@ -4,10 +4,11 @@ import shutil
import threading
import time
from openpype.api import Logger, Anatomy
from openpype.api import Logger
from openpype.pipeline import Anatomy
from .abstract_provider import AbstractProvider
log = Logger().get_logger("SyncServer")
log = Logger.get_logger("SyncServer")
class LocalDriveHandler(AbstractProvider):

View file

@ -9,14 +9,12 @@ from collections import deque, defaultdict
from openpype.modules import OpenPypeModule
from openpype_interfaces import ITrayModule
from openpype.api import (
Anatomy,
from openpype.settings import (
get_project_settings,
get_system_settings,
get_local_site_id
)
from openpype.lib import PypeLogger
from openpype.pipeline import AvalonMongoDB
from openpype.lib import PypeLogger, get_local_site_id
from openpype.pipeline import AvalonMongoDB, Anatomy
from openpype.settings.lib import (
get_default_anatomy_settings,
get_anatomy_settings
@ -28,7 +26,7 @@ from .providers import lib
from .utils import time_function, SyncStatus, SiteAlreadyPresentError
log = PypeLogger().get_logger("SyncServer")
log = PypeLogger.get_logger("SyncServer")
class SyncServerModule(OpenPypeModule, ITrayModule):

View file

@ -1,6 +1,7 @@
import os
import attr
from bson.objectid import ObjectId
import datetime
from Qt import QtCore
from Qt.QtCore import Qt
@ -413,6 +414,23 @@ class _SyncRepresentationModel(QtCore.QAbstractTableModel):
return index
return None
def _convert_date(self, date_value, current_date):
"""Convert 'date_value' to a string.
Value of 'date_value' might be a date in the future, which is used only
to sort queued items nicely next to the last downloaded ones.
"""
try:
converted_date = None
# ignore date in the future - for sorting only
if date_value and date_value < current_date:
converted_date = date_value.strftime("%Y%m%dT%H%M%SZ")
except (AttributeError, TypeError):
# ignore unparseable values
pass
return converted_date
class SyncRepresentationSummaryModel(_SyncRepresentationModel):
"""
@ -560,7 +578,7 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
remote_provider = lib.translate_provider_for_icon(self.sync_server,
self.project,
remote_site)
current_date = datetime.datetime.now()
for repre in result.get("paginatedResults"):
files = repre.get("files", [])
if isinstance(files, dict): # aggregate returns dictionary
@ -570,14 +588,10 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
if not files:
continue
local_updated = remote_updated = None
if repre.get('updated_dt_local'):
local_updated = \
repre.get('updated_dt_local').strftime("%Y%m%dT%H%M%SZ")
if repre.get('updated_dt_remote'):
remote_updated = \
repre.get('updated_dt_remote').strftime("%Y%m%dT%H%M%SZ")
local_updated = self._convert_date(repre.get('updated_dt_local'),
current_date)
remote_updated = self._convert_date(repre.get('updated_dt_remote'),
current_date)
avg_progress_remote = lib.convert_progress(
repre.get('avg_progress_remote', '0'))
@ -645,6 +659,8 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
if limit == 0:
limit = SyncRepresentationSummaryModel.PAGE_SIZE
# replace null with value in the future for better sorting
dummy_max_date = datetime.datetime(2099, 1, 1)
aggr = [
{"$match": self.get_match_part()},
{'$unwind': '$files'},
@ -687,7 +703,7 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
{'$cond': [
{'$size': "$order_remote.last_failed_dt"},
"$order_remote.last_failed_dt",
[]
[dummy_max_date]
]}
]}},
'updated_dt_local': {'$first': {
@ -696,7 +712,7 @@ class SyncRepresentationSummaryModel(_SyncRepresentationModel):
{'$cond': [
{'$size': "$order_local.last_failed_dt"},
"$order_local.last_failed_dt",
[]
[dummy_max_date]
]}
]}},
'files_size': {'$ifNull': ["$files.size", 0]},
@ -1039,6 +1055,7 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
self.project,
remote_site)
current_date = datetime.datetime.now()
for repre in result.get("paginatedResults"):
# log.info("!!! repre:: {}".format(repre))
files = repre.get("files", [])
@ -1046,16 +1063,12 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
files = [files]
for file in files:
local_updated = remote_updated = None
if repre.get('updated_dt_local'):
local_updated = \
repre.get('updated_dt_local').strftime(
"%Y%m%dT%H%M%SZ")
if repre.get('updated_dt_remote'):
remote_updated = \
repre.get('updated_dt_remote').strftime(
"%Y%m%dT%H%M%SZ")
local_updated = self._convert_date(
repre.get('updated_dt_local'),
current_date)
remote_updated = self._convert_date(
repre.get('updated_dt_remote'),
current_date)
remote_progress = lib.convert_progress(
repre.get('progress_remote', '0'))
@ -1104,6 +1117,7 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
if limit == 0:
limit = SyncRepresentationSummaryModel.PAGE_SIZE
dummy_max_date = datetime.datetime(2099, 1, 1)
aggr = [
{"$match": self.get_match_part()},
{"$unwind": "$files"},
@ -1147,7 +1161,7 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
'$cond': [
{'$size': "$order_remote.last_failed_dt"},
"$order_remote.last_failed_dt",
[]
[dummy_max_date]
]
}
]
@ -1160,7 +1174,7 @@ class SyncRepresentationDetailModel(_SyncRepresentationModel):
'$cond': [
{'$size': "$order_local.last_failed_dt"},
"$order_local.last_failed_dt",
[]
[dummy_max_date]
]
}
]
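The sentinel date works together with `_convert_date`: missing `last_failed_dt` values are replaced with a far-future date so queued items sort next to the freshest ones, and the sentinel is dropped again before display. A simplified sketch of the idea (names mirror, but are not, the model code):

```python
import datetime

DUMMY_MAX_DATE = datetime.datetime(2099, 1, 1)

def sort_key(updated_dt):
    # missing timestamps sort as "far future", i.e. next to freshest items
    return updated_dt or DUMMY_MAX_DATE

def convert_date(date_value, current_date):
    # dates in the future are the sorting sentinel - never display them
    if date_value and date_value < current_date:
        return date_value.strftime("%Y%m%dT%H%M%SZ")
    return None

now = datetime.datetime(2022, 7, 1)
dates = [None, datetime.datetime(2022, 6, 1), datetime.datetime(2022, 6, 15)]
ordered = sorted(dates, key=sort_key, reverse=True)
```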

View file

@ -0,0 +1,283 @@
r"""
===============
CORS Middleware
===============
.. versionadded:: 0.2.0
Dealing with CORS headers for aiohttp applications.
**IMPORTANT:** There is an `aiohttp-cors
<https://pypi.org/project/aiohttp_cors/>`_ library, which handles CORS
headers by attaching additional handlers to the aiohttp application for
OPTIONS (preflight) requests. In contrast, this CORS middleware mimics the
logic of `django-cors-headers <https://pypi.org/project/django-cors-headers>`_,
where all handling is done in the middleware without any additional handlers.
This approach allows the aiohttp application to respond with CORS headers for
OPTIONS or wildcard handlers, which is not possible with ``aiohttp-cors`` due
to the https://github.com/aio-libs/aiohttp-cors/issues/241 issue.
For detailed information about CORS (Cross Origin Resource Sharing) please
visit:
- `Wikipedia <https://en.m.wikipedia.org/wiki/Cross-origin_resource_sharing>`_
- Or `MDN <https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS>`_
Configuration
=============
**IMPORTANT:** By default, the CORS middleware does not allow any origins to
access content from your aiohttp application. This means you need to
carefully check the possible options and provide custom values for your needs.
Usage
=====
.. code-block:: python
import re
from aiohttp import web
from aiohttp_middlewares import cors_middleware
from aiohttp_middlewares.cors import DEFAULT_ALLOW_HEADERS
# Unsecure configuration to allow all CORS requests
app = web.Application(
middlewares=[cors_middleware(allow_all=True)]
)
# Allow CORS requests from URL http://localhost:3000
app = web.Application(
middlewares=[
cors_middleware(origins=["http://localhost:3000"])
]
)
# Allow CORS requests from all localhost urls
app = web.Application(
middlewares=[
cors_middleware(
origins=[re.compile(r"^https?\:\/\/localhost")]
)
]
)
# Allow CORS requests from https://frontend.myapp.com as well
# as allow credentials
CORS_ALLOW_ORIGINS = ["https://frontend.myapp.com"]
app = web.Application(
middlewares=[
cors_middleware(
origins=CORS_ALLOW_ORIGINS,
allow_credentials=True,
)
]
)
# Allow CORS requests only for API urls
app = web.Application(
middlewares=[
cors_middleware(
origins=CORS_ALLOW_ORIGINS,
urls=[re.compile(r"^\/api")],
)
]
)
# Allow CORS requests for POST & PATCH methods, and for all
# default headers and `X-Client-UID`
app = web.Application(
middlewares=[
cors_middleware(
origins=CORS_ALLOW_ORIGINS,
allow_methods=("POST", "PATCH"),
allow_headers=DEFAULT_ALLOW_HEADERS
+ ("X-Client-UID",),
)
]
)
"""
import logging
import re
from typing import Pattern, Tuple
from aiohttp import web
from aiohttp_middlewares.annotations import (
Handler,
Middleware,
StrCollection,
UrlCollection,
)
from aiohttp_middlewares.utils import match_path
ACCESS_CONTROL = "Access-Control"
ACCESS_CONTROL_ALLOW = f"{ACCESS_CONTROL}-Allow"
ACCESS_CONTROL_ALLOW_CREDENTIALS = f"{ACCESS_CONTROL_ALLOW}-Credentials"
ACCESS_CONTROL_ALLOW_HEADERS = f"{ACCESS_CONTROL_ALLOW}-Headers"
ACCESS_CONTROL_ALLOW_METHODS = f"{ACCESS_CONTROL_ALLOW}-Methods"
ACCESS_CONTROL_ALLOW_ORIGIN = f"{ACCESS_CONTROL_ALLOW}-Origin"
ACCESS_CONTROL_EXPOSE_HEADERS = f"{ACCESS_CONTROL}-Expose-Headers"
ACCESS_CONTROL_MAX_AGE = f"{ACCESS_CONTROL}-Max-Age"
ACCESS_CONTROL_REQUEST_METHOD = f"{ACCESS_CONTROL}-Request-Method"
DEFAULT_ALLOW_HEADERS = (
"accept",
"accept-encoding",
"authorization",
"content-type",
"dnt",
"origin",
"user-agent",
"x-csrftoken",
"x-requested-with",
)
DEFAULT_ALLOW_METHODS = ("DELETE", "GET", "OPTIONS", "PATCH", "POST", "PUT")
DEFAULT_URLS: Tuple[Pattern[str]] = (re.compile(r".*"),)
logger = logging.getLogger(__name__)
def cors_middleware(
*,
allow_all: bool = False,
origins: UrlCollection = None,
urls: UrlCollection = None,
expose_headers: StrCollection = None,
allow_headers: StrCollection = DEFAULT_ALLOW_HEADERS,
allow_methods: StrCollection = DEFAULT_ALLOW_METHODS,
allow_credentials: bool = False,
max_age: int = None,
) -> Middleware:
"""Middleware to provide CORS headers for aiohttp applications.
:param allow_all:
When enabled, allow any Origin to access content from your aiohttp web
application. **Please be careful with enabling this option as it may
result in security issues for your application.** By default: ``False``
:param origins:
Allow content access for the given list of origins. Supports supplying
strings for exact origin match or regex instances. By default: ``None``
:param urls:
Allow content access for the given list of URLs in the aiohttp application.
By default: *apply CORS headers for all URLs*
:param expose_headers:
List of headers to be exposed with every CORS request. By default:
``None``
:param allow_headers:
List of allowed headers. By default:
.. code-block:: python
(
"accept",
"accept-encoding",
"authorization",
"content-type",
"dnt",
"origin",
"user-agent",
"x-csrftoken",
"x-requested-with",
)
:param allow_methods:
List of allowed methods. By default:
.. code-block:: python
("DELETE", "GET", "OPTIONS", "PATCH", "POST", "PUT")
:param allow_credentials:
When enabled, apply the allow-credentials header in responses, which
results in sharing cookies on shared resources. **Please be careful with
allowing credentials for CORS requests.** By default: ``False``
:param max_age: Access control max age in seconds. By default: ``None``
"""
check_urls: UrlCollection = DEFAULT_URLS if urls is None else urls
@web.middleware
async def middleware(
request: web.Request, handler: Handler
) -> web.StreamResponse:
# Initial vars
request_method = request.method
request_path = request.rel_url.path
# Is this an OPTIONS request
is_options_request = request_method == "OPTIONS"
# Is this a preflight request
is_preflight_request = (
is_options_request
and ACCESS_CONTROL_REQUEST_METHOD in request.headers
)
# Log extra data
log_extra = {
"is_preflight_request": is_preflight_request,
"method": request_method.lower(),
"path": request_path,
}
# Check whether CORS should be enabled for given URL or not. By default
# CORS enabled for all URLs
if not match_items(check_urls, request_path):
logger.debug(
"Request should not be processed via CORS middleware",
extra=log_extra,
)
return await handler(request)
# If this is a preflight request - generate empty response
if is_preflight_request:
response = web.StreamResponse()
# Otherwise - call actual handler
else:
response = await handler(request)
# Now check the Origin header
origin = request.headers.get("Origin")
# Empty origin - do nothing
if not origin:
logger.debug(
"Request does not have Origin header. CORS headers not "
"available for the given request",
extra=log_extra,
)
return response
# Set allow credentials header if necessary
if allow_credentials:
response.headers[ACCESS_CONTROL_ALLOW_CREDENTIALS] = "true"
# Check whether current origin satisfies CORS policy
if not allow_all and not (origins and match_items(origins, origin)):
logger.debug(
"CORS headers not allowed for given Origin", extra=log_extra
)
return response
# Now start supplying CORS headers
# First one is Access-Control-Allow-Origin
if allow_all and not allow_credentials:
cors_origin = "*"
else:
cors_origin = origin
response.headers[ACCESS_CONTROL_ALLOW_ORIGIN] = cors_origin
# Then Access-Control-Expose-Headers
if expose_headers:
response.headers[ACCESS_CONTROL_EXPOSE_HEADERS] = ", ".join(
expose_headers
)
# Now, if this is an options request, respond with extra Allow headers
if is_options_request:
response.headers[ACCESS_CONTROL_ALLOW_HEADERS] = ", ".join(
allow_headers
)
response.headers[ACCESS_CONTROL_ALLOW_METHODS] = ", ".join(
allow_methods
)
if max_age is not None:
response.headers[ACCESS_CONTROL_MAX_AGE] = str(max_age)
# If this is preflight request - do not allow other middlewares to
# process this request
if is_preflight_request:
logger.debug(
"Provide CORS headers with empty response for preflight "
"request",
extra=log_extra,
)
raise web.HTTPOk(text="", headers=response.headers)
# Otherwise return normal response
logger.debug("Provide CORS headers for request", extra=log_extra)
return response
return middleware
def match_items(items: UrlCollection, value: str) -> bool:
"""Go through all items and try to match item with given value."""
return any(match_path(item, value) for item in items)
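`match_items` delegates per-item matching to `aiohttp_middlewares.utils.match_path`. The sketch below substitutes a simplified `match_path` (exact string equality, or `regex.match` for compiled patterns) to illustrate how mixed origin lists behave; the real helper's semantics may differ:

```python
import re

def match_path(item, value):
    # simplified stand-in: plain strings match exactly, regexes via match()
    if isinstance(item, str):
        return item == value
    return bool(item.match(value))

def match_items(items, value):
    """Go through all items and try to match item with given value."""
    return any(match_path(item, value) for item in items)

origins = ["https://frontend.myapp.com", re.compile(r"^https?://localhost")]
```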

View file

@ -1,15 +1,18 @@
import re
import threading
import asyncio
from aiohttp import web
from openpype.lib import PypeLogger
from .cors_middleware import cors_middleware
log = PypeLogger.get_logger("WebServer")
class WebServerManager:
"""Manager that takes care of the web server thread."""
def __init__(self, port=None, host=None):
self.port = port or 8079
self.host = host or "localhost"
@ -18,7 +21,13 @@ class WebServerManager:
self.handlers = {}
self.on_stop_callbacks = []
self.app = web.Application()
self.app = web.Application(
middlewares=[
cors_middleware(
origins=[re.compile(r"^https?\:\/\/localhost")]
)
]
)
# add route with multiple methods for single "external app"

View file

@ -6,6 +6,7 @@ from .constants import (
from .mongodb import (
AvalonMongoDB,
)
from .anatomy import Anatomy
from .create import (
BaseCreator,
@ -96,6 +97,9 @@ __all__ = (
# --- MongoDB ---
"AvalonMongoDB",
# --- Anatomy ---
"Anatomy",
# --- Create ---
"BaseCreator",
"Creator",

openpype/pipeline/anatomy.py (new file, 1257 lines)

File diff suppressed because it is too large

View file

@ -1,11 +1,9 @@
"""Core pipeline functionality"""
import os
import sys
import json
import types
import logging
import inspect
import platform
import pyblish.api
@ -14,11 +12,8 @@ from pyblish.lib import MessageHandler
import openpype
from openpype.modules import load_modules, ModulesManager
from openpype.settings import get_project_settings
from openpype.lib import (
Anatomy,
filter_pyblish_plugins,
)
from openpype.lib import filter_pyblish_plugins
from .anatomy import Anatomy
from . import (
legacy_io,
register_loader_plugin_path,
@ -235,73 +230,10 @@ def register_host(host):
required, or browse the source code.
"""
signatures = {
"ls": []
}
_validate_signature(host, signatures)
_registered_host["_"] = host
def _validate_signature(module, signatures):
# Required signatures for each member
missing = list()
invalid = list()
success = True
for member in signatures:
if not hasattr(module, member):
missing.append(member)
success = False
else:
attr = getattr(module, member)
if sys.version_info.major >= 3:
signature = inspect.getfullargspec(attr)[0]
else:
signature = inspect.getargspec(attr)[0]
required_signature = signatures[member]
assert isinstance(signature, list)
assert isinstance(required_signature, list)
if not all(member in signature
for member in required_signature):
invalid.append({
"member": member,
"signature": ", ".join(signature),
"required": ", ".join(required_signature)
})
success = False
if not success:
report = list()
if missing:
report.append(
"Incomplete interface for module: '%s'\n"
"Missing: %s" % (module, ", ".join(
"'%s'" % member for member in missing))
)
if invalid:
report.append(
"'%s': One or more members were found, but didn't "
"have the right argument signature." % module.__name__
)
for member in invalid:
report.append(
" Found: {member}({signature})".format(**member)
)
report.append(
" Expected: {member}({required})".format(**member)
)
raise ValueError("\n".join(report))
def registered_host():
"""Return currently registered host"""
return _registered_host["_"]

View file

@ -6,6 +6,7 @@ import inspect
from uuid import uuid4
from contextlib import contextmanager
from openpype.host import INewPublisher
from openpype.pipeline import legacy_io
from openpype.pipeline.mongodb import (
AvalonMongoDB,
@ -651,12 +652,6 @@ class CreateContext:
discover_publish_plugins(bool): Discover publish plugins during reset
phase.
"""
# Methods required in host implementaion to be able create instances
# or change context data.
required_methods = (
"get_context_data",
"update_context_data"
)
def __init__(
self, host, dbcon=None, headless=False, reset=True,
@ -738,10 +733,10 @@ class CreateContext:
Args:
host(ModuleType): Host implementation.
"""
missing = set()
for attr_name in cls.required_methods:
if not hasattr(host, attr_name):
missing.add(attr_name)
missing = set(
INewPublisher.get_missing_publish_methods(host)
)
return missing
@property

View file

@ -0,0 +1,282 @@
import os
import re
import clique
import opentimelineio as otio
from opentimelineio import opentime as _ot
def otio_range_to_frame_range(otio_range):
start = _ot.to_frames(
otio_range.start_time, otio_range.start_time.rate)
end = start + _ot.to_frames(
otio_range.duration, otio_range.duration.rate)
return start, end
def otio_range_with_handles(otio_range, instance):
handle_start = instance.data["handleStart"]
handle_end = instance.data["handleEnd"]
handles_duration = handle_start + handle_end
fps = float(otio_range.start_time.rate)
start = _ot.to_frames(otio_range.start_time, fps)
duration = _ot.to_frames(otio_range.duration, fps)
return _ot.TimeRange(
start_time=_ot.RationalTime((start - handle_start), fps),
duration=_ot.RationalTime((duration + handles_duration), fps)
)
def is_overlapping_otio_ranges(test_otio_range, main_otio_range, strict=False):
test_start, test_end = otio_range_to_frame_range(test_otio_range)
main_start, main_end = otio_range_to_frame_range(main_otio_range)
covering_exp = bool(
(test_start <= main_start) and (test_end >= main_end)
)
inside_exp = bool(
(test_start >= main_start) and (test_end <= main_end)
)
overlaying_right_exp = bool(
(test_start <= main_end) and (test_end >= main_end)
)
overlaying_left_exp = bool(
(test_end >= main_start) and (test_start <= main_start)
)
if not strict:
return any((
covering_exp,
inside_exp,
overlaying_right_exp,
overlaying_left_exp
))
else:
return covering_exp
def convert_to_padded_path(path, padding):
"""
Return correct padding in sequence string
Args:
path (str): path url or simple file name
padding (int): number of padding
Returns:
type: string with reformatted path
Example:
convert_to_padded_path("plate.%d.exr", 4) -> "plate.%04d.exr"
"""
if "%d" in path:
path = re.sub("%d", "%0{padding}d".format(padding=padding), path)
return path
def trim_media_range(media_range, source_range):
"""
Trim input media range with clip source range.
Args:
media_range (otio._ot._ot.TimeRange): available range of media
source_range (otio._ot._ot.TimeRange): clip required range
Returns:
otio._ot._ot.TimeRange: trimmed media range
"""
rw_media_start = _ot.RationalTime(
media_range.start_time.value + source_range.start_time.value,
media_range.start_time.rate
)
rw_media_duration = _ot.RationalTime(
source_range.duration.value,
media_range.duration.rate
)
return _ot.TimeRange(
rw_media_start, rw_media_duration)
def range_from_frames(start, duration, fps):
"""
Returns otio time range.
Args:
start (int): frame start
duration (int): frame duration
fps (float): frame rate
Returns:
otio._ot._ot.TimeRange: created range
"""
return _ot.TimeRange(
_ot.RationalTime(start, fps),
_ot.RationalTime(duration, fps)
)
def frames_to_seconds(frames, framerate):
"""
Returning seconds.
Args:
frames (int): frame
framerate (float): frame rate
Returns:
float: second value
"""
rt = _ot.from_frames(frames, framerate)
return _ot.to_seconds(rt)
def frames_to_timecode(frames, framerate):
rt = _ot.from_frames(frames, framerate)
return _ot.to_timecode(rt)
def make_sequence_collection(path, otio_range, metadata):
"""
Make a collection from a path, an otio range and otio metadata.
Args:
path (str): path to image sequence with `%d`
otio_range (otio._ot._ot.TimeRange): range to be used
metadata (dict): data where padding value can be found
Returns:
tuple: (dir_path, collection) - directory path (str) and clique collection
"""
if "%" not in path:
return None
file_name = os.path.basename(path)
dir_path = os.path.dirname(path)
head = file_name.split("%")[0]
tail = os.path.splitext(file_name)[-1]
first, last = otio_range_to_frame_range(otio_range)
collection = clique.Collection(
head=head, tail=tail, padding=metadata["padding"])
collection.indexes.update([i for i in range(first, last)])
return dir_path, collection
def _sequence_resize(source, length):
step = float(len(source) - 1) / (length - 1)
for i in range(length):
low, ratio = divmod(i * step, 1)
high = low + 1 if ratio > 0 else low
yield (1 - ratio) * source[int(low)] + ratio * source[int(high)]
def get_media_range_with_retimes(otio_clip, handle_start, handle_end):
source_range = otio_clip.source_range
available_range = otio_clip.available_range()
media_in = available_range.start_time.value
media_out = available_range.end_time_inclusive().value
# modifiers
time_scalar = 1.
offset_in = 0
offset_out = 0
time_warp_nodes = []
# Check for speed effects and adjust playback speed accordingly
for effect in otio_clip.effects:
if isinstance(effect, otio.schema.LinearTimeWarp):
time_scalar = effect.time_scalar
elif isinstance(effect, otio.schema.FreezeFrame):
# For freeze frame, playback speed must be set after range
time_scalar = 0.
elif isinstance(effect, otio.schema.TimeEffect):
# Generic time effect; only TimeWarp effects are collected below
name = effect.name
effect_name = effect.effect_name
if "TimeWarp" not in effect_name:
continue
metadata = effect.metadata
lookup = metadata.get("lookup")
if not lookup:
continue
# time warp node
tw_node = {
"Class": "TimeWarp",
"name": name
}
tw_node.update(metadata)
tw_node["lookup"] = list(lookup)
# get first and last frame offsets
offset_in += lookup[0]
offset_out += lookup[-1]
# add to timewarp nodes
time_warp_nodes.append(tw_node)
# multiply by time scalar
offset_in *= time_scalar
offset_out *= time_scalar
# flip offsets if speed is reversed
if time_scalar < 0:
_offset_in = offset_out
_offset_out = offset_in
offset_in = _offset_in
offset_out = _offset_out
# scale handles
handle_start *= abs(time_scalar)
handle_end *= abs(time_scalar)
# flip handles if speed is reversed
if time_scalar < 0:
_handle_start = handle_end
_handle_end = handle_start
handle_start = _handle_start
handle_end = _handle_end
source_in = source_range.start_time.value
media_in_trimmed = (
media_in + source_in + offset_in)
media_out_trimmed = (
media_in + source_in + (
((source_range.duration.value - 1) * abs(
time_scalar)) + offset_out))
# calculate available handles
if (media_in_trimmed - media_in) < handle_start:
handle_start = (media_in_trimmed - media_in)
if (media_out - media_out_trimmed) < handle_end:
handle_end = (media_out - media_out_trimmed)
# create version data
version_data = {
"versionData": {
"retime": True,
"speed": time_scalar,
"timewarps": time_warp_nodes,
"handleStart": round(handle_start),
"handleEnd": round(handle_end)
}
}
returning_dict = {
"mediaIn": media_in_trimmed,
"mediaOut": media_out_trimmed,
"handleStart": round(handle_start),
"handleEnd": round(handle_end)
}
# add version data only if retime
if time_warp_nodes or time_scalar != 1.:
returning_dict.update(version_data)
return returning_dict
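`_sequence_resize` above linearly resamples a sequence of values to a new length; repeating it verbatim for a standalone trace on a small input makes the interpolation visible:

```python
def _sequence_resize(source, length):
    # linear resampling: each output index maps to a fractional source index
    step = float(len(source) - 1) / (length - 1)
    for i in range(length):
        low, ratio = divmod(i * step, 1)
        high = low + 1 if ratio > 0 else low
        yield (1 - ratio) * source[int(low)] + ratio * source[int(high)]

resized = list(_sequence_resize([0.0, 10.0, 20.0], 5))
```

Upsampling three samples to five interpolates the midpoints: `[0.0, 5.0, 10.0, 15.0, 20.0]`.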

View file

@ -18,9 +18,13 @@ _database = database = None
log = logging.getLogger(__name__)
def is_installed():
return module._is_installed
def install():
"""Establish a persistent connection to the database"""
if module._is_installed:
if is_installed():
return
session = session_data_from_environment(context_keys=True)
@ -55,7 +59,7 @@ def uninstall():
def requires_install(func):
@functools.wraps(func)
def decorated(*args, **kwargs):
if not module._is_installed:
if not is_installed():
install()
return func(*args, **kwargs)
return decorated
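The hunk replaces direct `module._is_installed` access with the new `is_installed()` helper; a self-contained sketch of the lazy-install decorator pattern (module state reduced to a simple flag):

```python
import functools

_is_installed = False

def is_installed():
    return _is_installed

def install():
    """Establish the persistent 'connection' (stubbed as a flag here)."""
    global _is_installed
    if is_installed():
        return
    _is_installed = True

def requires_install(func):
    # make sure install() has run before the wrapped function executes
    @functools.wraps(func)
    def decorated(*args, **kwargs):
        if not is_installed():
            install()
        return func(*args, **kwargs)
    return decorated

@requires_install
def query():
    return "ok"

result = query()
```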

View file

@ -6,13 +6,24 @@ import logging
import inspect
import numbers
import six
from bson.objectid import ObjectId
from openpype.lib import Anatomy
from openpype.client import (
get_project,
get_assets,
get_subsets,
get_versions,
get_version_by_id,
get_last_version_by_subset_id,
get_hero_version_by_subset_id,
get_version_by_name,
get_representations,
get_representation_by_id,
get_representation_by_name,
get_representation_parents
)
from openpype.pipeline import (
schema,
legacy_io,
Anatomy,
)
log = logging.getLogger(__name__)
@@ -52,13 +63,10 @@ def get_repres_contexts(representation_ids, dbcon=None):
Returns:
dict: The full representation context by representation id.
keys are repre_id, value is dictionary with full:
asset_doc
version_doc
subset_doc
repre_doc
keys are repre_id, value is dictionary with full documents of
asset, subset, version and representation.
"""
if not dbcon:
dbcon = legacy_io
@@ -66,26 +74,18 @@ def get_repres_contexts(representation_ids, dbcon=None):
if not representation_ids:
return contexts
_representation_ids = []
for repre_id in representation_ids:
if isinstance(repre_id, six.string_types):
repre_id = ObjectId(repre_id)
_representation_ids.append(repre_id)
project_name = dbcon.active_project()
repre_docs = get_representations(project_name, representation_ids)
repre_docs = dbcon.find({
"type": "representation",
"_id": {"$in": _representation_ids}
})
repre_docs_by_id = {}
version_ids = set()
for repre_doc in repre_docs:
version_ids.add(repre_doc["parent"])
repre_docs_by_id[repre_doc["_id"]] = repre_doc
version_docs = dbcon.find({
"type": {"$in": ["version", "hero_version"]},
"_id": {"$in": list(version_ids)}
})
version_docs = get_versions(
project_name, version_ids, hero=True
)
version_docs_by_id = {}
hero_version_docs = []
@@ -99,10 +99,7 @@ def get_repres_contexts(representation_ids, dbcon=None):
subset_ids.add(version_doc["parent"])
if versions_for_hero:
_version_docs = dbcon.find({
"type": "version",
"_id": {"$in": list(versions_for_hero)}
})
_version_docs = get_versions(project_name, versions_for_hero)
_version_data_by_id = {
version_doc["_id"]: version_doc["data"]
for version_doc in _version_docs
@@ -114,26 +111,20 @@ def get_repres_contexts(representation_ids, dbcon=None):
version_data = copy.deepcopy(_version_data_by_id[version_id])
version_docs_by_id[hero_version_id]["data"] = version_data
subset_docs = dbcon.find({
"type": "subset",
"_id": {"$in": list(subset_ids)}
})
subset_docs = get_subsets(project_name, subset_ids)
subset_docs_by_id = {}
asset_ids = set()
for subset_doc in subset_docs:
subset_docs_by_id[subset_doc["_id"]] = subset_doc
asset_ids.add(subset_doc["parent"])
asset_docs = dbcon.find({
"type": "asset",
"_id": {"$in": list(asset_ids)}
})
asset_docs = get_assets(project_name, asset_ids)
asset_docs_by_id = {
asset_doc["_id"]: asset_doc
for asset_doc in asset_docs
}
project_doc = dbcon.find_one({"type": "project"})
project_doc = get_project(project_name)
for repre_id, repre_doc in repre_docs_by_id.items():
version_doc = version_docs_by_id[repre_doc["parent"]]
@@ -173,32 +164,21 @@ def get_subset_contexts(subset_ids, dbcon=None):
if not subset_ids:
return contexts
_subset_ids = set()
for subset_id in subset_ids:
if isinstance(subset_id, six.string_types):
subset_id = ObjectId(subset_id)
_subset_ids.add(subset_id)
subset_docs = dbcon.find({
"type": "subset",
"_id": {"$in": list(_subset_ids)}
})
project_name = dbcon.active_project()
subset_docs = get_subsets(project_name, subset_ids)
subset_docs_by_id = {}
asset_ids = set()
for subset_doc in subset_docs:
subset_docs_by_id[subset_doc["_id"]] = subset_doc
asset_ids.add(subset_doc["parent"])
asset_docs = dbcon.find({
"type": "asset",
"_id": {"$in": list(asset_ids)}
})
asset_docs = get_assets(project_name, asset_ids)
asset_docs_by_id = {
asset_doc["_id"]: asset_doc
for asset_doc in asset_docs
}
project_doc = dbcon.find_one({"type": "project"})
project_doc = get_project(project_name)
for subset_id, subset_doc in subset_docs_by_id.items():
asset_doc = asset_docs_by_id[subset_doc["parent"]]
@@ -224,16 +204,17 @@ def get_representation_context(representation):
Returns:
dict: The full representation context.
"""
assert representation is not None, "This is a bug"
if isinstance(representation, (six.string_types, ObjectId)):
representation = legacy_io.find_one(
{"_id": ObjectId(str(representation))})
if not isinstance(representation, dict):
representation = get_representation_by_id(representation)
version, subset, asset, project = legacy_io.parenthood(representation)
project_name = legacy_io.active_project()
version, subset, asset, project = get_representation_parents(
project_name, representation
)
assert all([representation, version, subset, asset, project]), (
"This is a bug"
@@ -405,42 +386,36 @@ def update_container(container, version=-1):
"""Update a container"""
# Compute the different version from 'representation'
current_representation = legacy_io.find_one({
"_id": ObjectId(container["representation"])
})
project_name = legacy_io.active_project()
current_representation = get_representation_by_id(
project_name, container["representation"]
)
assert current_representation is not None, "This is a bug"
current_version, subset, asset, project = legacy_io.parenthood(
current_representation)
current_version = get_version_by_id(
project_name, current_representation["_id"], fields=["parent"]
)
if version == -1:
new_version = legacy_io.find_one({
"type": "version",
"parent": subset["_id"]
}, sort=[("name", -1)])
new_version = get_last_version_by_subset_id(
project_name, current_version["parent"], fields=["_id"]
)
elif isinstance(version, HeroVersionType):
new_version = get_hero_version_by_subset_id(
project_name, current_version["parent"], fields=["_id"]
)
else:
if isinstance(version, HeroVersionType):
version_query = {
"parent": subset["_id"],
"type": "hero_version"
}
else:
version_query = {
"parent": subset["_id"],
"type": "version",
"name": version
}
new_version = legacy_io.find_one(version_query)
new_version = get_version_by_name(
project_name, version, current_version["parent"], fields=["_id"]
)
assert new_version is not None, "This is a bug"
new_representation = legacy_io.find_one({
"type": "representation",
"parent": new_version["_id"],
"name": current_representation["name"]
})
new_representation = get_representation_by_name(
project_name, current_representation["name"], new_version["_id"]
)
assert new_representation is not None, "Representation wasn't found"
path = get_representation_path(new_representation)
@@ -482,10 +457,10 @@ def switch_container(container, representation, loader_plugin=None):
))
# Get the new representation to switch to
new_representation = legacy_io.find_one({
"type": "representation",
"_id": representation["_id"],
})
project_name = legacy_io.active_project()
new_representation = get_representation_by_id(
project_name, representation["_id"]
)
new_context = get_representation_context(new_representation)
if not is_compatible_loader(loader_plugin, new_context):


@@ -9,9 +9,8 @@ import qargparse
from Qt import QtWidgets, QtCore
from openpype import style
from openpype.pipeline import load, AvalonMongoDB
from openpype.pipeline import load, AvalonMongoDB, Anatomy
from openpype.lib import StringTemplate
from openpype.api import Anatomy
class DeleteOldVersions(load.SubsetLoaderPlugin):


@@ -3,8 +3,8 @@ from collections import defaultdict
from Qt import QtWidgets, QtCore, QtGui
from openpype.pipeline import load, AvalonMongoDB
from openpype.api import Anatomy, config
from openpype.lib import config
from openpype.pipeline import load, AvalonMongoDB, Anatomy
from openpype import resources, style
from openpype.lib.delivery import (


@@ -4,11 +4,11 @@ Requires:
os.environ -> AVALON_PROJECT
Provides:
context -> anatomy (pype.api.Anatomy)
context -> anatomy (openpype.pipeline.anatomy.Anatomy)
"""
import os
from openpype.api import Anatomy
import pyblish.api
from openpype.pipeline import Anatomy
class CollectAnatomyObject(pyblish.api.ContextPlugin):


@@ -8,8 +8,11 @@ Requires:
# import os
import opentimelineio as otio
import pyblish.api
import openpype.lib
from pprint import pformat
from openpype.pipeline.editorial import (
otio_range_to_frame_range,
otio_range_with_handles
)
class CollectOtioFrameRanges(pyblish.api.InstancePlugin):
@@ -31,9 +34,9 @@ class CollectOtioFrameRanges(pyblish.api.InstancePlugin):
otio_tl_range = otio_clip.range_in_parent()
otio_src_range = otio_clip.source_range
otio_avalable_range = otio_clip.available_range()
otio_tl_range_handles = openpype.lib.otio_range_with_handles(
otio_tl_range_handles = otio_range_with_handles(
otio_tl_range, instance)
otio_src_range_handles = openpype.lib.otio_range_with_handles(
otio_src_range_handles = otio_range_with_handles(
otio_src_range, instance)
# get source avalable start frame
@@ -42,7 +45,7 @@ class CollectOtioFrameRanges(pyblish.api.InstancePlugin):
otio_avalable_range.start_time.rate)
# convert to frames
range_convert = openpype.lib.otio_range_to_frame_range
range_convert = otio_range_to_frame_range
tl_start, tl_end = range_convert(otio_tl_range)
tl_start_h, tl_end_h = range_convert(otio_tl_range_handles)
src_start, src_end = range_convert(otio_src_range)


@@ -10,7 +10,11 @@ import os
import clique
import opentimelineio as otio
import pyblish.api
import openpype.lib as oplib
from openpype.pipeline.editorial import (
get_media_range_with_retimes,
range_from_frames,
make_sequence_collection
)
class CollectOtioSubsetResources(pyblish.api.InstancePlugin):
@@ -42,7 +46,7 @@ class CollectOtioSubsetResources(pyblish.api.InstancePlugin):
available_duration = otio_avalable_range.duration.value
# get available range trimmed with processed retimes
retimed_attributes = oplib.get_media_range_with_retimes(
retimed_attributes = get_media_range_with_retimes(
otio_clip, handle_start, handle_end)
self.log.debug(
">> retimed_attributes: {}".format(retimed_attributes))
@@ -64,7 +68,7 @@ class CollectOtioSubsetResources(pyblish.api.InstancePlugin):
a_frame_end_h = media_out + handle_end
# create trimmed otio time range
trimmed_media_range_h = oplib.range_from_frames(
trimmed_media_range_h = range_from_frames(
a_frame_start_h, (a_frame_end_h - a_frame_start_h) + 1,
media_fps
)
@@ -144,7 +148,7 @@ class CollectOtioSubsetResources(pyblish.api.InstancePlugin):
# in case it is file sequence but not new OTIO schema
# `ImageSequenceReference`
path = media_ref.target_url
collection_data = oplib.make_sequence_collection(
collection_data = make_sequence_collection(
path, trimmed_media_range_h, metadata)
self.staging_dir, collection = collection_data


@@ -19,7 +19,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
label = "Extract Thumbnail"
order = pyblish.api.ExtractorOrder
families = [
"imagesequence", "render", "render2d",
"imagesequence", "render", "render2d", "prerender",
"source", "plate", "take"
]
hosts = ["shell", "fusion", "resolve"]


@@ -19,6 +19,13 @@ import clique
import opentimelineio as otio
from pyblish import api
import openpype
from openpype.pipeline.editorial import (
otio_range_to_frame_range,
trim_media_range,
range_from_frames,
frames_to_seconds,
make_sequence_collection
)
class ExtractOTIOReview(openpype.api.Extractor):
@@ -161,7 +168,7 @@ class ExtractOTIOReview(openpype.api.Extractor):
dirname = media_ref.target_url_base
head = media_ref.name_prefix
tail = media_ref.name_suffix
first, last = openpype.lib.otio_range_to_frame_range(
first, last = otio_range_to_frame_range(
available_range)
collection = clique.Collection(
head=head,
@@ -180,7 +187,7 @@ class ExtractOTIOReview(openpype.api.Extractor):
# in case it is file sequence but not new OTIO schema
# `ImageSequenceReference`
path = media_ref.target_url
collection_data = openpype.lib.make_sequence_collection(
collection_data = make_sequence_collection(
path, available_range, metadata)
dir_path, collection = collection_data
@@ -305,8 +312,8 @@ class ExtractOTIOReview(openpype.api.Extractor):
duration = avl_durtation
# return correct trimmed range
return openpype.lib.trim_media_range(
avl_range, openpype.lib.range_from_frames(start, duration, fps)
return trim_media_range(
avl_range, range_from_frames(start, duration, fps)
)
def _render_seqment(self, sequence=None,
@@ -357,8 +364,8 @@
frame_start = otio_range.start_time.value
input_fps = otio_range.start_time.rate
frame_duration = otio_range.duration.value
sec_start = openpype.lib.frames_to_secons(frame_start, input_fps)
sec_duration = openpype.lib.frames_to_secons(
sec_start = frames_to_seconds(frame_start, input_fps)
sec_duration = frames_to_seconds(
frame_duration, input_fps
)
@@ -370,8 +377,7 @@
])
elif gap:
sec_duration = openpype.lib.frames_to_secons(
gap, self.actual_fps)
sec_duration = frames_to_seconds(gap, self.actual_fps)
# form command for rendering gap files
command.extend([


@@ -9,6 +9,7 @@ import os
from pyblish import api
import openpype
from copy import deepcopy
from openpype.pipeline.editorial import frames_to_seconds
class ExtractOTIOTrimmingVideo(openpype.api.Extractor):
@@ -81,8 +82,8 @@
frame_start = otio_range.start_time.value
input_fps = otio_range.start_time.rate
frame_duration = otio_range.duration.value - 1
sec_start = openpype.lib.frames_to_secons(frame_start, input_fps)
sec_duration = openpype.lib.frames_to_secons(frame_duration, input_fps)
sec_start = frames_to_seconds(frame_start, input_fps)
sec_duration = frames_to_seconds(frame_duration, input_fps)
# form command for rendering gap files
command.extend([
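Both extractors above lean on the renamed `frames_to_seconds` helper. Assuming the usual semantics (seconds = frames / fps; the real implementation lives in `openpype.pipeline.editorial`), a minimal sketch:

```python
# Assumed behavior of the renamed helper: plain frame-to-second conversion.
def frames_to_seconds(frames, fps):
    return frames / float(fps)

# e.g. a 48-frame duration at 24 fps
print(frames_to_seconds(48, 24.0))  # 2.0
```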


@@ -1,4 +1,20 @@
{
"load": {
"ImageSequenceLoader": {
"family": [
"shot",
"render",
"image",
"plate",
"reference"
],
"representations": [
"jpeg",
"png",
"jpg"
]
}
},
"publish": {
"CollectPalettes": {
"allowed_tasks": [


@@ -204,7 +204,8 @@
"ValidateFrameRange": {
"enabled": true,
"optional": true,
"active": true
"active": true,
"exclude_families": ["model", "rig", "staticMesh"]
},
"ValidateShaderName": {
"enabled": false,


@@ -5,6 +5,34 @@
"label": "Harmony",
"is_file": true,
"children": [
{
"type": "dict",
"collapsible": true,
"key": "load",
"label": "Loader plugins",
"children": [
{
"type": "dict",
"collapsible": true,
"key": "ImageSequenceLoader",
"label": "Load Image Sequence",
"children": [
{
"type": "list",
"key": "family",
"label": "Families",
"object_type": "text"
},
{
"type": "list",
"key": "representations",
"label": "Representations",
"object_type": "text"
}
]
}
]
},
{
"type": "dict",
"collapsible": true,


@@ -62,13 +62,36 @@
}
]
},
{
"type": "schema_template",
"name": "template_publish_plugin",
"template_data": [
{
"type": "dict",
"collapsible": true,
"key": "ValidateFrameRange",
"label": "Validate Frame Range",
"checkbox_key": "enabled",
"children": [
{
"key": "ValidateFrameRange",
"label": "Validate Frame Range"
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "boolean",
"key": "optional",
"label": "Optional"
},
{
"type": "boolean",
"key": "active",
"label": "Active"
},
{
"type": "splitter"
},
{
"key": "exclude_families",
"label": "Families",
"type": "list",
"object_type": "text"
}
]
},


@@ -1418,3 +1418,6 @@ InViewButton, InViewButton:disabled {
InViewButton:hover {
background: rgba(255, 255, 255, 37);
}
SupportLabel {
color: {color:font-disabled};
}


@@ -17,8 +17,7 @@ from openpype.client import (
get_thumbnail_id_from_source,
get_thumbnail,
)
from openpype.api import Anatomy
from openpype.pipeline import HeroVersionType
from openpype.pipeline import HeroVersionType, Anatomy
from openpype.pipeline.thumbnail import get_thumbnail_binary
from openpype.pipeline.load import (
discover_loader_plugins,


@@ -977,7 +977,12 @@ class CreateDialog(QtWidgets.QDialog):
elif variant:
self.variant_hints_menu.addAction(variant)
self.variant_input.setText(default_variant or "Main")
variant_text = default_variant or "Main"
# Make sure subset name is updated to new plugin
if variant_text == self.variant_input.text():
self._on_variant_change()
else:
self.variant_input.setText(variant_text)
def _on_variant_widget_resize(self):
self.variant_hints_btn.setFixedHeight(self.variant_input.height())
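The fix in the hunk above works around a Qt convention: a text setter only emits its change signal when the value actually differs, so setting an identical default would skip the subset-name refresh. A stand-in sketch in plain Python (illustrative names, not the real Qt widgets):

```python
# Minimal model of a Qt-style text input: the callback fires only
# when the value changes, mirroring textChanged behavior.
class VariantInput:
    def __init__(self, on_change):
        self._text = ""
        self._on_change = on_change

    def text(self):
        return self._text

    def setText(self, value):
        if value != self._text:  # no signal when value is unchanged
            self._text = value
            self._on_change()

calls = []
inp = VariantInput(lambda: calls.append("changed"))
inp.setText("Main")  # fires normally

# New plugin defaults to the same variant text; the setter would be
# silent, so the dialog invokes the change handler explicitly.
variant_text = "Main"
if variant_text == inp.text():
    inp._on_change()
else:
    inp.setText(variant_text)

print(calls)  # ['changed', 'changed']
```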


@@ -6,6 +6,7 @@ from collections import defaultdict
from Qt import QtCore, QtGui
import qtawesome
from openpype.host import ILoadHost
from openpype.client import (
get_asset_by_id,
get_subset_by_id,
@@ -193,7 +194,10 @@ class InventoryModel(TreeModel):
host = registered_host()
if not items: # for debugging or testing, injecting items from outside
items = host.ls()
if isinstance(host, ILoadHost):
items = host.get_containers()
else:
items = host.ls()
self.clear()


@@ -6,8 +6,7 @@ import speedcopy
from openpype.client import get_project, get_asset_by_name
from openpype.lib import Terminal
from openpype.api import Anatomy
from openpype.pipeline import legacy_io
from openpype.pipeline import legacy_io, Anatomy
t = Terminal()


@@ -6,7 +6,7 @@ use singleton approach with global functions (using helper anyway).
import os
import pyblish.api
from openpype.host import IWorkfileHost, ILoadHost
from openpype.pipeline import (
registered_host,
legacy_io,
@@ -49,12 +49,11 @@ class HostToolsHelper:
def get_workfiles_tool(self, parent):
"""Create, cache and return workfiles tool window."""
if self._workfiles_tool is None:
from openpype.tools.workfiles.app import (
Window, validate_host_requirements
)
from openpype.tools.workfiles.app import Window
# Host validation
host = registered_host()
validate_host_requirements(host)
IWorkfileHost.validate_workfile_methods(host)
workfiles_window = Window(parent=parent)
self._workfiles_tool = workfiles_window
@@ -92,6 +91,9 @@
if self._loader_tool is None:
from openpype.tools.loader import LoaderWindow
host = registered_host()
ILoadHost.validate_load_methods(host)
loader_window = LoaderWindow(parent=parent or self._parent)
self._loader_tool = loader_window
@@ -164,6 +166,9 @@
if self._scene_inventory_tool is None:
from openpype.tools.sceneinventory import SceneInventoryWindow
host = registered_host()
ILoadHost.validate_load_methods(host)
scene_inventory_window = SceneInventoryWindow(
parent=parent or self._parent
)


@@ -1,12 +1,10 @@
from .window import Window
from .app import (
show,
validate_host_requirements,
)
__all__ = [
"Window",
"show",
"validate_host_requirements",
]


@@ -1,6 +1,7 @@
import sys
import logging
from openpype.host import IWorkfileHost
from openpype.pipeline import (
registered_host,
legacy_io,
@@ -14,31 +15,6 @@ module = sys.modules[__name__]
module.window = None
def validate_host_requirements(host):
if host is None:
raise RuntimeError("No registered host.")
# Verify the host has implemented the api for Work Files
required = [
"open_file",
"save_file",
"current_file",
"has_unsaved_changes",
"work_root",
"file_extensions",
]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
if missing:
raise RuntimeError(
"Host is missing required Work Files interfaces: "
"%s (host: %s)" % (", ".join(missing), host)
)
return True
def show(root=None, debug=False, parent=None, use_context=True, save=True):
"""Show Work Files GUI"""
# todo: remove `root` argument to show()
@@ -50,7 +26,7 @@ def show(root=None, debug=False, parent=None, use_context=True, save=True):
pass
host = registered_host()
validate_host_requirements(host)
IWorkfileHost.validate_workfile_methods(host)
if debug:
legacy_io.Session["AVALON_ASSET"] = "Mock"

Some files were not shown because too many files have changed in this diff.