Merge remote-tracking branch 'upstream/develop' into loader_show_containers_in_scene

# Conflicts:
#	openpype/tools/loader/model.py
This commit is contained in:
Roy Nieterau 2022-09-08 20:09:44 +02:00
commit 8d048bd72a
740 changed files with 56493 additions and 9510 deletions

5
.gitmodules vendored
View file

@ -4,7 +4,4 @@
[submodule "tools/modules/powershell/PSWriteColor"]
path = tools/modules/powershell/PSWriteColor
url = https://github.com/EvotecIT/PSWriteColor.git
[submodule "vendor/configs/OpenColorIO-Configs"]
path = vendor/configs/OpenColorIO-Configs
url = https://github.com/imageworks/OpenColorIO-Configs

View file

@ -1,110 +1,149 @@
# Changelog
## [3.14.2-nightly.3](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.1...HEAD)
**🆕 New features**
- Nuke: Build workfile by template [\#3763](https://github.com/pypeclub/OpenPype/pull/3763)
**🚀 Enhancements**
- Photoshop: attempt to speed up ExtractImage [\#3793](https://github.com/pypeclub/OpenPype/pull/3793)
- SyncServer: Added cli commands for sync server [\#3765](https://github.com/pypeclub/OpenPype/pull/3765)
- Blender: Publisher collect workfile representation [\#3670](https://github.com/pypeclub/OpenPype/pull/3670)
- Maya: move set render settings menu entry [\#3669](https://github.com/pypeclub/OpenPype/pull/3669)
- Scene Inventory: Maya add actions to select from or to scene [\#3659](https://github.com/pypeclub/OpenPype/pull/3659)
**🐛 Bug fixes**
- Resolve: Addon import is Python 2 compatible [\#3798](https://github.com/pypeclub/OpenPype/pull/3798)
- nuke: validate write node is not failing due wrong type [\#3780](https://github.com/pypeclub/OpenPype/pull/3780)
- Fix - changed format of version string in pyproject.toml [\#3777](https://github.com/pypeclub/OpenPype/pull/3777)
- Ftrack status fix typo prgoress -\> progress [\#3761](https://github.com/pypeclub/OpenPype/pull/3761)
- Fix version resolution [\#3757](https://github.com/pypeclub/OpenPype/pull/3757)
- Maya: `containerise` don't skip empty values [\#3674](https://github.com/pypeclub/OpenPype/pull/3674)
**🔀 Refactored code**
- Photoshop: Use new Extractor location [\#3789](https://github.com/pypeclub/OpenPype/pull/3789)
- Blender: Use new Extractor location [\#3787](https://github.com/pypeclub/OpenPype/pull/3787)
- AfterEffects: Use new Extractor location [\#3784](https://github.com/pypeclub/OpenPype/pull/3784)
- General: Remove unused teshost [\#3773](https://github.com/pypeclub/OpenPype/pull/3773)
- General: Copied 'Extractor' plugin to publish pipeline [\#3771](https://github.com/pypeclub/OpenPype/pull/3771)
- General: Move queries of asset and representation links [\#3770](https://github.com/pypeclub/OpenPype/pull/3770)
- General: Create project function moved to client code [\#3766](https://github.com/pypeclub/OpenPype/pull/3766)
- General: Move delivery logic to pipeline [\#3751](https://github.com/pypeclub/OpenPype/pull/3751)
- General: Move hostdirname functionality into host [\#3749](https://github.com/pypeclub/OpenPype/pull/3749)
- General: Move publish utils to pipeline [\#3745](https://github.com/pypeclub/OpenPype/pull/3745)
- Houdini: Define houdini as addon [\#3735](https://github.com/pypeclub/OpenPype/pull/3735)
- Flame: Defined flame as addon [\#3732](https://github.com/pypeclub/OpenPype/pull/3732)
- Resolve: Define resolve as addon [\#3727](https://github.com/pypeclub/OpenPype/pull/3727)
**Merged pull requests:**
- Standalone Publisher: Ignore empty labels, then still use name like other asset models [\#3779](https://github.com/pypeclub/OpenPype/pull/3779)
## [3.14.1](https://github.com/pypeclub/OpenPype/tree/3.14.1) (2022-08-30)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.1-nightly.4...3.14.1)
### 📖 Documentation
- Documentation: Few updates [\#3698](https://github.com/pypeclub/OpenPype/pull/3698)
- Documentation: Settings development [\#3660](https://github.com/pypeclub/OpenPype/pull/3660)
**🆕 New features**
- Webpublisher: change create flatten image into tri state [\#3678](https://github.com/pypeclub/OpenPype/pull/3678)
- Blender: validators code correction with settings and defaults [\#3662](https://github.com/pypeclub/OpenPype/pull/3662)
**🚀 Enhancements**
- General: Thumbnail can use project roots [\#3750](https://github.com/pypeclub/OpenPype/pull/3750)
- Settings: Remove settings lock on tray exit [\#3720](https://github.com/pypeclub/OpenPype/pull/3720)
- General: Added helper getters to modules manager [\#3712](https://github.com/pypeclub/OpenPype/pull/3712)
- Unreal: Define unreal as module and use host class [\#3701](https://github.com/pypeclub/OpenPype/pull/3701)
- Settings: Lock settings UI session [\#3700](https://github.com/pypeclub/OpenPype/pull/3700)
- General: Benevolent context label collector [\#3686](https://github.com/pypeclub/OpenPype/pull/3686)
- Ftrack: Store ftrack entities on hierarchy integration to instances [\#3677](https://github.com/pypeclub/OpenPype/pull/3677)
- Blender: ops refresh manager after process events [\#3663](https://github.com/pypeclub/OpenPype/pull/3663)
**🐛 Bug fixes**
- Maya: Fix typo in getPanel argument `with\_focus` -\> `withFocus` [\#3753](https://github.com/pypeclub/OpenPype/pull/3753)
- General: Smaller fixes of imports [\#3748](https://github.com/pypeclub/OpenPype/pull/3748)
- General: Logger tweaks [\#3741](https://github.com/pypeclub/OpenPype/pull/3741)
- Nuke: missing job dependency if multiple bake streams [\#3737](https://github.com/pypeclub/OpenPype/pull/3737)
- Nuke: color-space settings from anatomy is working [\#3721](https://github.com/pypeclub/OpenPype/pull/3721)
- Settings: Fix studio default anatomy save [\#3716](https://github.com/pypeclub/OpenPype/pull/3716)
- Maya: Use project name instead of project code [\#3709](https://github.com/pypeclub/OpenPype/pull/3709)
- Settings: Fix project overrides save [\#3708](https://github.com/pypeclub/OpenPype/pull/3708)
- Workfiles tool: Fix published workfile filtering [\#3704](https://github.com/pypeclub/OpenPype/pull/3704)
- PS, AE: Provide default variant value for workfile subset [\#3703](https://github.com/pypeclub/OpenPype/pull/3703)
- Flame: retime is working on clip publishing [\#3684](https://github.com/pypeclub/OpenPype/pull/3684)
- Webpublisher: added check for empty context [\#3682](https://github.com/pypeclub/OpenPype/pull/3682)
**🔀 Refactored code**
- General: Host addons cleanup [\#3744](https://github.com/pypeclub/OpenPype/pull/3744)
- Webpublisher: Webpublisher is used as addon [\#3740](https://github.com/pypeclub/OpenPype/pull/3740)
- Photoshop: Defined photoshop as addon [\#3736](https://github.com/pypeclub/OpenPype/pull/3736)
- Harmony: Defined harmony as addon [\#3734](https://github.com/pypeclub/OpenPype/pull/3734)
- General: Module interfaces cleanup [\#3731](https://github.com/pypeclub/OpenPype/pull/3731)
- AfterEffects: Move AE functions from general lib [\#3730](https://github.com/pypeclub/OpenPype/pull/3730)
- Blender: Define blender as module [\#3729](https://github.com/pypeclub/OpenPype/pull/3729)
- AfterEffects: Define AfterEffects as module [\#3728](https://github.com/pypeclub/OpenPype/pull/3728)
- General: Replace PypeLogger with Logger [\#3725](https://github.com/pypeclub/OpenPype/pull/3725)
- Nuke: Define nuke as module [\#3724](https://github.com/pypeclub/OpenPype/pull/3724)
- General: Move subset name functionality [\#3723](https://github.com/pypeclub/OpenPype/pull/3723)
- General: Move creators plugin getter [\#3714](https://github.com/pypeclub/OpenPype/pull/3714)
- General: Move constants from lib to client [\#3713](https://github.com/pypeclub/OpenPype/pull/3713)
- Loader: Subset groups using client operations [\#3710](https://github.com/pypeclub/OpenPype/pull/3710)
- TVPaint: Defined as module [\#3707](https://github.com/pypeclub/OpenPype/pull/3707)
- StandalonePublisher: Define StandalonePublisher as module [\#3706](https://github.com/pypeclub/OpenPype/pull/3706)
- TrayPublisher: Define TrayPublisher as module [\#3705](https://github.com/pypeclub/OpenPype/pull/3705)
- General: Move context specific functions to context tools [\#3702](https://github.com/pypeclub/OpenPype/pull/3702)
**Merged pull requests:**
- Hiero: Define hiero as module [\#3717](https://github.com/pypeclub/OpenPype/pull/3717)
- Deadline: better logging for DL webservice failures [\#3694](https://github.com/pypeclub/OpenPype/pull/3694)
- Photoshop: resize saved images in ExtractReview for ffmpeg [\#3676](https://github.com/pypeclub/OpenPype/pull/3676)
## [3.14.0](https://github.com/pypeclub/OpenPype/tree/3.14.0) (2022-08-18)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.0-nightly.1...3.14.0)
**🚀 Enhancements**
- Ftrack: Additional component metadata [\#3685](https://github.com/pypeclub/OpenPype/pull/3685)
- Ftrack: Set task status on farm publishing [\#3680](https://github.com/pypeclub/OpenPype/pull/3680)
- Ftrack: Set task status on task creation in integrate hierarchy [\#3675](https://github.com/pypeclub/OpenPype/pull/3675)
- Maya: Disable rendering of all lights for render instances submitted through Deadline. [\#3661](https://github.com/pypeclub/OpenPype/pull/3661)
**🐛 Bug fixes**
- General: Switch from hero version to versioned works [\#3691](https://github.com/pypeclub/OpenPype/pull/3691)
- General: Fix finding of last version [\#3656](https://github.com/pypeclub/OpenPype/pull/3656)
**🔀 Refactored code**
- General: Use client projects getter [\#3673](https://github.com/pypeclub/OpenPype/pull/3673)
**Merged pull requests:**
- Deadline: Global job pre load is not Pype 2 compatible [\#3666](https://github.com/pypeclub/OpenPype/pull/3666)
## [3.13.0](https://github.com/pypeclub/OpenPype/tree/3.13.0) (2022-08-09)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.13.0-nightly.1...3.13.0)
**🆕 New features**
- Support for multiple installed versions - 3.13 [\#3605](https://github.com/pypeclub/OpenPype/pull/3605)
**🚀 Enhancements**
- Editorial: Mix audio use side file for ffmpeg filters [\#3630](https://github.com/pypeclub/OpenPype/pull/3630)
- Ftrack: Comment template can contain optional keys [\#3615](https://github.com/pypeclub/OpenPype/pull/3615)
- Ftrack: Add more metadata to ftrack components [\#3612](https://github.com/pypeclub/OpenPype/pull/3612)
- General: Add context to pyblish context [\#3594](https://github.com/pypeclub/OpenPype/pull/3594)
- Kitsu: Shot&Sequence name with prefix over appends [\#3593](https://github.com/pypeclub/OpenPype/pull/3593)
- Photoshop: implemented {layer} placeholder in subset template [\#3591](https://github.com/pypeclub/OpenPype/pull/3591)
- General: Python module appdirs from git [\#3589](https://github.com/pypeclub/OpenPype/pull/3589)
- Ftrack: Update ftrack api to 2.3.3 [\#3588](https://github.com/pypeclub/OpenPype/pull/3588)
- General: New Integrator small fixes [\#3583](https://github.com/pypeclub/OpenPype/pull/3583)
**🐛 Bug fixes**
- Maya: fix aov separator in Redshift [\#3625](https://github.com/pypeclub/OpenPype/pull/3625)
- Fix for multi-version build on Mac [\#3622](https://github.com/pypeclub/OpenPype/pull/3622)
- Ftrack: Sync hierarchical attributes can handle new created entities [\#3621](https://github.com/pypeclub/OpenPype/pull/3621)
- General: Extract review aspect ratio scale is calculated by ffmpeg [\#3620](https://github.com/pypeclub/OpenPype/pull/3620)
- Maya: Fix types of default settings [\#3617](https://github.com/pypeclub/OpenPype/pull/3617)
- Integrator: Don't force to have dot before frame [\#3611](https://github.com/pypeclub/OpenPype/pull/3611)
- AfterEffects: refactored integrate doesn't work for multi frame publishes [\#3610](https://github.com/pypeclub/OpenPype/pull/3610)
- Maya look data contents fails with custom attribute on group [\#3607](https://github.com/pypeclub/OpenPype/pull/3607)
- TrayPublisher: Fix wrong conflict merge [\#3600](https://github.com/pypeclub/OpenPype/pull/3600)
- Bugfix: Add OCIO as submodule to prepare for handling `maketx` color space conversion. [\#3590](https://github.com/pypeclub/OpenPype/pull/3590)
- Fix general settings environment variables resolution [\#3587](https://github.com/pypeclub/OpenPype/pull/3587)
- Editorial publishing workflow improvements [\#3580](https://github.com/pypeclub/OpenPype/pull/3580)
- General: Update imports in start script [\#3579](https://github.com/pypeclub/OpenPype/pull/3579)
- Nuke: render family integration consistency [\#3576](https://github.com/pypeclub/OpenPype/pull/3576)
- Ftrack: Handle missing published path in integrator [\#3570](https://github.com/pypeclub/OpenPype/pull/3570)
- Nuke: publish existing frames with slate with correct range [\#3555](https://github.com/pypeclub/OpenPype/pull/3555)
**🔀 Refactored code**
- General: Plugin settings handled by plugins [\#3623](https://github.com/pypeclub/OpenPype/pull/3623)
- General: Naive implementation of document create, update, delete [\#3601](https://github.com/pypeclub/OpenPype/pull/3601)
- General: Use query functions in general code [\#3596](https://github.com/pypeclub/OpenPype/pull/3596)
- General: Separate extraction of template data into more functions [\#3574](https://github.com/pypeclub/OpenPype/pull/3574)
- General: Lib cleanup [\#3571](https://github.com/pypeclub/OpenPype/pull/3571)
**Merged pull requests:**
- Webpublisher: timeout for PS studio processing [\#3619](https://github.com/pypeclub/OpenPype/pull/3619)
- Core: translated validate\_containers.py into New publisher style [\#3614](https://github.com/pypeclub/OpenPype/pull/3614)
- Enable write color sets on animation publish automatically [\#3582](https://github.com/pypeclub/OpenPype/pull/3582)
## [3.12.2](https://github.com/pypeclub/OpenPype/tree/3.12.2) (2022-07-27)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.2-nightly.4...3.12.2)
### 📖 Documentation
- Update website with more studios [\#3554](https://github.com/pypeclub/OpenPype/pull/3554)
- Documentation: Update publishing dev docs [\#3549](https://github.com/pypeclub/OpenPype/pull/3549)
**🚀 Enhancements**
- General: Global thumbnail extractor is ready for more cases [\#3561](https://github.com/pypeclub/OpenPype/pull/3561)
- Maya: add additional validators to Settings [\#3540](https://github.com/pypeclub/OpenPype/pull/3540)
- General: Interactive console in cli [\#3526](https://github.com/pypeclub/OpenPype/pull/3526)
- Ftrack: Automatic daily review session creation can define trigger hour [\#3516](https://github.com/pypeclub/OpenPype/pull/3516)
- Ftrack: add source into Note [\#3509](https://github.com/pypeclub/OpenPype/pull/3509)
- Add pack and unpack convenience scripts [\#3502](https://github.com/pypeclub/OpenPype/pull/3502)
- NewPublisher: Keep plugins with mismatch target in report [\#3498](https://github.com/pypeclub/OpenPype/pull/3498)
**🐛 Bug fixes**
- Maya: fix Review image plane attribute [\#3569](https://github.com/pypeclub/OpenPype/pull/3569)
- Maya: Fix animated attributes \(ie. overscan\) on loaded cameras breaking review publishing. [\#3562](https://github.com/pypeclub/OpenPype/pull/3562)
- NewPublisher: Python 2 compatible html escape [\#3559](https://github.com/pypeclub/OpenPype/pull/3559)
- Remove invalid submodules from `/vendor` [\#3557](https://github.com/pypeclub/OpenPype/pull/3557)
- General: Remove hosts filter on integrator plugins [\#3556](https://github.com/pypeclub/OpenPype/pull/3556)
- Settings: Clean default values of environments [\#3550](https://github.com/pypeclub/OpenPype/pull/3550)
- Module interfaces: Fix import error [\#3547](https://github.com/pypeclub/OpenPype/pull/3547)
- Workfiles tool: Show of tool and its flags [\#3539](https://github.com/pypeclub/OpenPype/pull/3539)
- General: Create workfile documents works again [\#3538](https://github.com/pypeclub/OpenPype/pull/3538)
- Additional fixes for powershell scripts [\#3525](https://github.com/pypeclub/OpenPype/pull/3525)
- Maya: Added wrapper around cmds.setAttr [\#3523](https://github.com/pypeclub/OpenPype/pull/3523)
- Nuke: double slate [\#3521](https://github.com/pypeclub/OpenPype/pull/3521)
- General: Fix hash of centos oiio archive [\#3519](https://github.com/pypeclub/OpenPype/pull/3519)
- Maya: Renderman display output fix [\#3514](https://github.com/pypeclub/OpenPype/pull/3514)
- TrayPublisher: Simple creation enhancements and fixes [\#3513](https://github.com/pypeclub/OpenPype/pull/3513)
- TrayPublisher: Make sure host name is filled [\#3504](https://github.com/pypeclub/OpenPype/pull/3504)
- NewPublisher: Groups work and enum multivalue [\#3501](https://github.com/pypeclub/OpenPype/pull/3501)
**🔀 Refactored code**
- General: Use query functions in integrator [\#3563](https://github.com/pypeclub/OpenPype/pull/3563)
- General: Mongo core connection moved to client [\#3531](https://github.com/pypeclub/OpenPype/pull/3531)
- Refactor Integrate Asset [\#3530](https://github.com/pypeclub/OpenPype/pull/3530)
- General: Client docstrings cleanup [\#3529](https://github.com/pypeclub/OpenPype/pull/3529)
- General: Move load related functions into pipeline [\#3527](https://github.com/pypeclub/OpenPype/pull/3527)
- General: Get current context document functions [\#3522](https://github.com/pypeclub/OpenPype/pull/3522)
**Merged pull requests:**
- Maya: fix active pane loss [\#3566](https://github.com/pypeclub/OpenPype/pull/3566)
## [3.12.1](https://github.com/pypeclub/OpenPype/tree/3.12.1) (2022-07-13)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.1-nightly.6...3.12.1)

View file

@ -63,7 +63,7 @@ class OpenPypeVersion(semver.VersionInfo):
"""
staging = False
path = None
_VERSION_REGEX = re.compile(r"(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$") # noqa: E501
_VERSION_REGEX = re.compile(r"(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?") # noqa: E501
_installed_version = None
def __init__(self, *args, **kwargs):
@ -381,7 +381,7 @@ class OpenPypeVersion(semver.VersionInfo):
@classmethod
def get_local_versions(
cls, production: bool = None,
staging: bool = None, compatible_with: OpenPypeVersion = None
staging: bool = None
) -> List:
"""Get all versions available on this machine.
@ -391,8 +391,10 @@ class OpenPypeVersion(semver.VersionInfo):
Args:
production (bool): Return production versions.
staging (bool): Return staging versions.
compatible_with (OpenPypeVersion): Return only those compatible
with specified version.
Returns:
list: of compatible versions available on the machine.
"""
# Return all local versions if arguments are set to None
if production is None and staging is None:
@ -411,16 +413,7 @@ class OpenPypeVersion(semver.VersionInfo):
# DEPRECATED: backwards compatible way to look for versions in root
dir_to_search = Path(user_data_dir("openpype", "pypeclub"))
versions = OpenPypeVersion.get_versions_from_directory(
dir_to_search, compatible_with=compatible_with
)
if compatible_with:
dir_to_search = Path(
user_data_dir("openpype", "pypeclub")) / f"{compatible_with.major}.{compatible_with.minor}" # noqa
versions += OpenPypeVersion.get_versions_from_directory(
dir_to_search, compatible_with=compatible_with
)
versions = OpenPypeVersion.get_versions_from_directory(dir_to_search)
filtered_versions = []
for version in versions:
@ -434,7 +427,7 @@ class OpenPypeVersion(semver.VersionInfo):
@classmethod
def get_remote_versions(
cls, production: bool = None,
staging: bool = None, compatible_with: OpenPypeVersion = None
staging: bool = None
) -> List:
"""Get all versions available in OpenPype Path.
@ -444,8 +437,7 @@ class OpenPypeVersion(semver.VersionInfo):
Args:
production (bool): Return production versions.
staging (bool): Return staging versions.
compatible_with (OpenPypeVersion): Return only those compatible
with specified version.
"""
# Return all local versions if arguments are set to None
if production is None and staging is None:
@ -479,13 +471,7 @@ class OpenPypeVersion(semver.VersionInfo):
if not dir_to_search:
return []
# DEPRECATED: look for version in root directory
versions = cls.get_versions_from_directory(
dir_to_search, compatible_with=compatible_with)
if compatible_with:
dir_to_search = dir_to_search / f"{compatible_with.major}.{compatible_with.minor}" # noqa
versions += cls.get_versions_from_directory(
dir_to_search, compatible_with=compatible_with)
versions = cls.get_versions_from_directory(dir_to_search)
filtered_versions = []
for version in versions:
@ -498,14 +484,11 @@ class OpenPypeVersion(semver.VersionInfo):
@staticmethod
def get_versions_from_directory(
openpype_dir: Path,
compatible_with: OpenPypeVersion = None) -> List:
openpype_dir: Path) -> List:
"""Get all detected OpenPype versions in directory.
Args:
openpype_dir (Path): Directory to scan.
compatible_with (OpenPypeVersion): Return only versions compatible
with build version specified as OpenPypeVersion.
Returns:
list of OpenPypeVersion
@ -514,15 +497,22 @@ class OpenPypeVersion(semver.VersionInfo):
ValueError: if invalid path is specified.
"""
_openpype_versions = []
openpype_versions = []
if not openpype_dir.exists() and not openpype_dir.is_dir():
return _openpype_versions
return openpype_versions
# iterate over directory in first level and find all that might
# contain OpenPype.
for item in openpype_dir.iterdir():
# if the item is directory with major.minor version, dive deeper
# if file, strip extension, in case of dir not.
if item.is_dir() and re.match(r"^\d+\.\d+$", item.name):
_versions = OpenPypeVersion.get_versions_from_directory(
item)
if _versions:
openpype_versions += _versions
# if file exists, strip extension, in case of dir don't.
name = item.name if item.is_dir() else item.stem
result = OpenPypeVersion.version_in_str(name)
@ -540,14 +530,10 @@ class OpenPypeVersion(semver.VersionInfo):
)[0]:
continue
if compatible_with and not detected_version.is_compatible(
compatible_with):
continue
detected_version.path = item
_openpype_versions.append(detected_version)
openpype_versions.append(detected_version)
return sorted(_openpype_versions)
return sorted(openpype_versions)
@staticmethod
def get_installed_version_str() -> str:
@ -575,15 +561,14 @@ class OpenPypeVersion(semver.VersionInfo):
def get_latest_version(
staging: bool = False,
local: bool = None,
remote: bool = None,
compatible_with: OpenPypeVersion = None
remote: bool = None
) -> Union[OpenPypeVersion, None]:
"""Get latest available version.
"""Get the latest available version.
The version does not contain information about path and source.
This is utility version to get latest version from all found. Build
version is not listed if staging is enabled.
This is utility version to get the latest version from all found.
Build version is not listed if staging is enabled.
Arguments 'local' and 'remote' define if local and remote repository
versions are used. All versions are used if both are not set (or set
@ -595,8 +580,9 @@ class OpenPypeVersion(semver.VersionInfo):
staging (bool, optional): List staging versions if True.
local (bool, optional): List local versions if True.
remote (bool, optional): List remote versions if True.
compatible_with (OpenPypeVersion, optional) Return only version
compatible with compatible_with.
Returns:
Latest OpenPypeVersion or None
"""
if local is None and remote is None:
@ -628,12 +614,7 @@ class OpenPypeVersion(semver.VersionInfo):
return None
all_versions.sort()
latest_version: OpenPypeVersion
latest_version = all_versions[-1]
if compatible_with and not latest_version.is_compatible(
compatible_with):
return None
return latest_version
return all_versions[-1]
@classmethod
def get_expected_studio_version(cls, staging=False, global_settings=None):
@ -764,9 +745,9 @@ class BootstrapRepos:
self, repo_dir: Path = None) -> Union[OpenPypeVersion, None]:
"""Copy zip created from OpenPype repositories to user data dir.
This detect OpenPype version either in local "live" OpenPype
This detects OpenPype version either in local "live" OpenPype
repository or in user provided path. Then it will zip it in temporary
directory and finally it will move it to destination which is user
directory, and finally it will move it to destination which is user
data directory. Existing files will be replaced.
Args:
@ -777,7 +758,7 @@ class BootstrapRepos:
"""
# if repo dir is not set, we detect local "live" OpenPype repository
# version and use it as a source. Otherwise repo_dir is user
# version and use it as a source. Otherwise, repo_dir is user
# entered location.
if repo_dir:
version = self.get_version(repo_dir)
@ -1141,28 +1122,27 @@ class BootstrapRepos:
@staticmethod
def find_openpype_version(
version: Union[str, OpenPypeVersion],
staging: bool,
compatible_with: OpenPypeVersion = None
staging: bool
) -> Union[OpenPypeVersion, None]:
"""Find location of specified OpenPype version.
Args:
version (Union[str, OpenPypeVersion): Version to find.
staging (bool): Filter staging versions.
compatible_with (OpenPypeVersion, optional): Find only
versions compatible with specified one.
Returns:
requested OpenPypeVersion.
"""
installed_version = OpenPypeVersion.get_installed_version()
if isinstance(version, str):
version = OpenPypeVersion(version=version)
installed_version = OpenPypeVersion.get_installed_version()
if installed_version == version:
return installed_version
local_versions = OpenPypeVersion.get_local_versions(
staging=staging, production=not staging,
compatible_with=compatible_with
staging=staging, production=not staging
)
zip_version = None
for local_version in local_versions:
@ -1176,8 +1156,7 @@ class BootstrapRepos:
return zip_version
remote_versions = OpenPypeVersion.get_remote_versions(
staging=staging, production=not staging,
compatible_with=compatible_with
staging=staging, production=not staging
)
for remote_version in remote_versions:
if remote_version == version:
@ -1186,13 +1165,23 @@ class BootstrapRepos:
@staticmethod
def find_latest_openpype_version(
staging, compatible_with: OpenPypeVersion = None):
staging: bool
) -> Union[OpenPypeVersion, None]:
"""Find the latest available OpenPype version in all location.
Args:
staging (bool): True to look for staging versions.
Returns:
Latest OpenPype version or None if nothing was found.
"""
installed_version = OpenPypeVersion.get_installed_version()
local_versions = OpenPypeVersion.get_local_versions(
staging=staging, compatible_with=compatible_with
staging=staging
)
remote_versions = OpenPypeVersion.get_remote_versions(
staging=staging, compatible_with=compatible_with
staging=staging
)
all_versions = local_versions + remote_versions
if not staging:
@ -1217,8 +1206,7 @@ class BootstrapRepos:
self,
openpype_path: Union[Path, str] = None,
staging: bool = False,
include_zips: bool = False,
compatible_with: OpenPypeVersion = None
include_zips: bool = False
) -> Union[List[OpenPypeVersion], None]:
"""Get ordered dict of detected OpenPype version.
@ -1235,8 +1223,6 @@ class BootstrapRepos:
otherwise.
include_zips (bool, optional): If set True it will try to find
OpenPype in zip files in given directory.
compatible_with (OpenPypeVersion, optional): Find only those
versions compatible with the one specified.
Returns:
dict of Path: Dictionary of detected OpenPype version.
@ -1255,52 +1241,34 @@ class BootstrapRepos:
("Finding OpenPype in non-filesystem locations is"
" not implemented yet."))
version_dir = ""
if compatible_with:
version_dir = f"{compatible_with.major}.{compatible_with.minor}"
# if checks below for OPENPYPE_PATH and registry fail, use data_dir
# DEPRECATED: lookup in root of this folder is deprecated in favour
# of major.minor sub-folders.
dirs_to_search = [
self.data_dir
]
if compatible_with:
dirs_to_search.append(self.data_dir / version_dir)
dirs_to_search = [self.data_dir]
if openpype_path:
dirs_to_search = [openpype_path]
if compatible_with:
dirs_to_search.append(openpype_path / version_dir)
else:
elif os.getenv("OPENPYPE_PATH") \
and Path(os.getenv("OPENPYPE_PATH")).exists():
# first try OPENPYPE_PATH and if that is not available,
# try registry.
if os.getenv("OPENPYPE_PATH") \
and Path(os.getenv("OPENPYPE_PATH")).exists():
dirs_to_search = [Path(os.getenv("OPENPYPE_PATH"))]
dirs_to_search = [Path(os.getenv("OPENPYPE_PATH"))]
else:
try:
registry_dir = Path(
str(self.registry.get_item("openPypePath")))
if registry_dir.exists():
dirs_to_search = [registry_dir]
if compatible_with:
dirs_to_search.append(
Path(os.getenv("OPENPYPE_PATH")) / version_dir)
else:
try:
registry_dir = Path(
str(self.registry.get_item("openPypePath")))
if registry_dir.exists():
dirs_to_search = [registry_dir]
if compatible_with:
dirs_to_search.append(registry_dir / version_dir)
except ValueError:
# nothing found in registry, we'll use data dir
pass
except ValueError:
# nothing found in registry, we'll use data dir
pass
openpype_versions = []
for dir_to_search in dirs_to_search:
try:
openpype_versions += self.get_openpype_versions(
dir_to_search, staging, compatible_with=compatible_with)
dir_to_search, staging)
except ValueError:
# location is invalid, skip it
pass
@ -1668,15 +1636,12 @@ class BootstrapRepos:
def get_openpype_versions(
self,
openpype_dir: Path,
staging: bool = False,
compatible_with: OpenPypeVersion = None) -> list:
staging: bool = False) -> list:
"""Get all detected OpenPype versions in directory.
Args:
openpype_dir (Path): Directory to scan.
staging (bool, optional): Find staging versions if True.
compatible_with (OpenPypeVersion, optional): Get only versions
compatible with the one specified.
Returns:
list of OpenPypeVersion
@ -1688,12 +1653,18 @@ class BootstrapRepos:
if not openpype_dir.exists() and not openpype_dir.is_dir():
raise ValueError(f"specified directory {openpype_dir} is invalid")
_openpype_versions = []
openpype_versions = []
# iterate over directory in first level and find all that might
# contain OpenPype.
for item in openpype_dir.iterdir():
# if the item is directory with major.minor version, dive deeper
if item.is_dir() and re.match(r"^\d+\.\d+$", item.name):
_versions = self.get_openpype_versions(
item, staging=staging)
if _versions:
openpype_versions += _versions
# if file, strip extension, in case of dir not.
# if it is file, strip extension, in case of dir don't.
name = item.name if item.is_dir() else item.stem
result = OpenPypeVersion.version_in_str(name)
@ -1711,18 +1682,14 @@ class BootstrapRepos:
):
continue
if compatible_with and \
not detected_version.is_compatible(compatible_with):
continue
detected_version.path = item
if staging and detected_version.is_staging():
_openpype_versions.append(detected_version)
openpype_versions.append(detected_version)
if not staging and not detected_version.is_staging():
_openpype_versions.append(detected_version)
openpype_versions.append(detected_version)
return sorted(_openpype_versions)
return sorted(openpype_versions)
class OpenPypeVersionExists(Exception):

View file

@ -62,7 +62,7 @@ class InstallThread(QThread):
progress_callback=self.set_progress, message=self.message)
local_version = OpenPypeVersion.get_installed_version_str()
# if user did entered nothing, we install OpenPype from local version.
# if user did enter nothing, we install OpenPype from local version.
# zip content of `repos`, copy it to user data dir and append
# version to it.
if not self._path:
@ -93,6 +93,23 @@ class InstallThread(QThread):
detected = bs.find_openpype(include_zips=True)
if detected:
if not OpenPypeVersion.get_installed_version().is_compatible(
detected[-1]):
self.message.emit((
f"Latest detected version {detected[-1]} "
"is not compatible with the currently running "
f"{local_version}"
), True)
self.message.emit((
"Filtering detected versions to compatible ones..."
), False)
detected = [
version for version in detected
if version.is_compatible(
OpenPypeVersion.get_installed_version())
]
if OpenPypeVersion(
version=local_version, path=Path()) < detected[-1]:
self.message.emit((

View file

@ -1,44 +1,82 @@
# absolute_import is needed to counter the `module has no cmds error` in Maya
from __future__ import absolute_import
import warnings
import functools
import pyblish.api
def get_errored_instances_from_context(context):
instances = list()
for result in context.data["results"]:
if result["instance"] is None:
# When instance is None we are on the "context" result
continue
if result["error"]:
instances.append(result["instance"])
return instances
class ActionDeprecatedWarning(DeprecationWarning):
pass
def get_errored_plugins_from_data(context):
"""Get all failed validation plugins
Args:
context (object):
Returns:
list of plugins which failed during validation
def deprecated(new_destination):
"""Mark functions as deprecated.
It will result in a warning being emitted when the function is used.
"""
plugins = list()
results = context.data.get("results", [])
for result in results:
if result["success"] is True:
continue
plugins.append(result["plugin"])
func = None
if callable(new_destination):
func = new_destination
new_destination = None
return plugins
def _decorator(decorated_func):
if new_destination is None:
warning_message = (
" Please check content of deprecated function to figure out"
" possible replacement."
)
else:
warning_message = " Please replace your usage with '{}'.".format(
new_destination
)
@functools.wraps(decorated_func)
def wrapper(*args, **kwargs):
warnings.simplefilter("always", ActionDeprecatedWarning)
warnings.warn(
(
"Call to deprecated function '{}'"
"\nFunction was moved or removed.{}"
).format(decorated_func.__name__, warning_message),
category=ActionDeprecatedWarning,
stacklevel=4
)
return decorated_func(*args, **kwargs)
return wrapper
if func is None:
return _decorator
return _decorator(func)
@deprecated("openpype.pipeline.publish.get_errored_instances_from_context")
def get_errored_instances_from_context(context):
"""
Deprecated:
Since 3.14.* will be removed in 3.16.* or later.
"""
from openpype.pipeline.publish import get_errored_instances_from_context
return get_errored_instances_from_context(context)
@deprecated("openpype.pipeline.publish.get_errored_plugins_from_context")
def get_errored_plugins_from_data(context):
"""
Deprecated:
Since 3.14.* will be removed in 3.16.* or later.
"""
from openpype.pipeline.publish import get_errored_plugins_from_context
return get_errored_plugins_from_context(context)
# 'RepairAction' and 'RepairContextAction' were moved to
# 'openpype.pipeline.publish', please change your imports.
# There is no "reasonable" way to mark these classes as deprecated to show
# a warning on wrong import.
# Deprecated since 3.14.* will be removed in 3.16.*
class RepairAction(pyblish.api.Action):
"""Repairs the action
@ -65,6 +103,7 @@ class RepairAction(pyblish.api.Action):
plugin.repair(instance)
# Deprecated since 3.14.* will be removed in 3.16.*
class RepairContextAction(pyblish.api.Action):
"""Repairs the action

View file

@ -49,7 +49,6 @@ from .plugin import (
ValidateContentsOrder,
ValidateSceneOrder,
ValidateMeshOrder,
ValidationException
)
# temporary fix, might
@ -94,8 +93,6 @@ __all__ = [
"RepairAction",
"RepairContextAction",
"ValidationException",
# get contextual data
"version_up",
"get_asset",

View file

@ -40,18 +40,6 @@ def settings(dev):
PypeCommands().launch_settings_gui(dev)
@main.command()
def standalonepublisher():
"""Show Pype Standalone publisher UI."""
PypeCommands().launch_standalone_publisher()
@main.command()
def traypublisher():
"""Show new OpenPype Standalone publisher UI."""
PypeCommands().launch_traypublisher()
@main.command()
def tray():
"""Launch pype tray.

View file

@ -45,6 +45,17 @@ from .entities import (
get_workfile_info,
)
from .entity_links import (
get_linked_asset_ids,
get_linked_assets,
get_linked_representation_id,
)
from .operations import (
create_project,
)
__all__ = (
"OpenPypeMongoConnection",
@ -88,4 +99,10 @@ __all__ = (
"get_thumbnail_id_from_source",
"get_workfile_info",
"get_linked_asset_ids",
"get_linked_assets",
"get_linked_representation_id",
"create_project",
)

View file

@ -6,6 +6,7 @@ that has project name as a context (e.g. on 'ProjectEntity'?).
+ We will need more specific functions doing very specific queries really fast.
"""
import re
import collections
import six
@ -31,17 +32,37 @@ def _prepare_fields(fields, required_fields=None):
return output
def _convert_id(in_id):
def convert_id(in_id):
"""Helper function for conversion of id from string to ObjectId.
Args:
in_id (Union[str, ObjectId, Any]): Entity id that should be converted
to right type for queries.
Returns:
Union[ObjectId, Any]: Converted ids to ObjectId or in type.
"""
if isinstance(in_id, six.string_types):
return ObjectId(in_id)
return in_id
def _convert_ids(in_ids):
def convert_ids(in_ids):
"""Helper function for conversion of ids from string to ObjectId.
Args:
in_ids (Iterable[Union[str, ObjectId, Any]]): List of entity ids that
should be converted to right type for queries.
Returns:
List[ObjectId]: Converted ids to ObjectId.
"""
_output = set()
for in_id in in_ids:
if in_id is not None:
_output.add(_convert_id(in_id))
_output.add(convert_id(in_id))
return list(_output)
@ -57,7 +78,7 @@ def get_projects(active=True, inactive=False, fields=None):
yield project_doc
def get_project(project_name, active=True, inactive=False, fields=None):
def get_project(project_name, active=True, inactive=True, fields=None):
# Skip if both are disabled
if not active and not inactive:
return None
@ -114,7 +135,7 @@ def get_asset_by_id(project_name, asset_id, fields=None):
None: Asset was not found by id.
"""
asset_id = _convert_id(asset_id)
asset_id = convert_id(asset_id)
if not asset_id:
return None
@ -195,7 +216,7 @@ def _get_assets(
query_filter = {"type": {"$in": asset_types}}
if asset_ids is not None:
asset_ids = _convert_ids(asset_ids)
asset_ids = convert_ids(asset_ids)
if not asset_ids:
return []
query_filter["_id"] = {"$in": asset_ids}
@ -206,7 +227,7 @@ def _get_assets(
query_filter["name"] = {"$in": list(asset_names)}
if parent_ids is not None:
parent_ids = _convert_ids(parent_ids)
parent_ids = convert_ids(parent_ids)
if not parent_ids:
return []
query_filter["data.visualParent"] = {"$in": parent_ids}
@ -306,7 +327,7 @@ def get_asset_ids_with_subsets(project_name, asset_ids=None):
"type": "subset"
}
if asset_ids is not None:
asset_ids = _convert_ids(asset_ids)
asset_ids = convert_ids(asset_ids)
if not asset_ids:
return []
subset_query["parent"] = {"$in": asset_ids}
@ -346,7 +367,7 @@ def get_subset_by_id(project_name, subset_id, fields=None):
Dict: Subset document which can be reduced to specified 'fields'.
"""
subset_id = _convert_id(subset_id)
subset_id = convert_id(subset_id)
if not subset_id:
return None
@ -373,7 +394,7 @@ def get_subset_by_name(project_name, subset_name, asset_id, fields=None):
if not subset_name:
return None
asset_id = _convert_id(asset_id)
asset_id = convert_id(asset_id)
if not asset_id:
return None
@ -427,13 +448,13 @@ def get_subsets(
query_filter = {"type": {"$in": subset_types}}
if asset_ids is not None:
asset_ids = _convert_ids(asset_ids)
asset_ids = convert_ids(asset_ids)
if not asset_ids:
return []
query_filter["parent"] = {"$in": asset_ids}
if subset_ids is not None:
subset_ids = _convert_ids(subset_ids)
subset_ids = convert_ids(subset_ids)
if not subset_ids:
return []
query_filter["_id"] = {"$in": subset_ids}
@ -448,7 +469,7 @@ def get_subsets(
for asset_id, names in names_by_asset_ids.items():
if asset_id and names:
or_query.append({
"parent": _convert_id(asset_id),
"parent": convert_id(asset_id),
"name": {"$in": list(names)}
})
if not or_query:
@ -509,7 +530,7 @@ def get_version_by_id(project_name, version_id, fields=None):
Dict: Version document which can be reduced to specified 'fields'.
"""
version_id = _convert_id(version_id)
version_id = convert_id(version_id)
if not version_id:
return None
@ -536,7 +557,7 @@ def get_version_by_name(project_name, version, subset_id, fields=None):
Dict: Version document which can be reduced to specified 'fields'.
"""
subset_id = _convert_id(subset_id)
subset_id = convert_id(subset_id)
if not subset_id:
return None
@ -566,7 +587,7 @@ def version_is_latest(project_name, version_id):
bool: True if is latest version from subset else False.
"""
version_id = _convert_id(version_id)
version_id = convert_id(version_id)
if not version_id:
return False
version_doc = get_version_by_id(
@ -609,13 +630,13 @@ def _get_versions(
query_filter = {"type": {"$in": version_types}}
if subset_ids is not None:
subset_ids = _convert_ids(subset_ids)
subset_ids = convert_ids(subset_ids)
if not subset_ids:
return []
query_filter["parent"] = {"$in": subset_ids}
if version_ids is not None:
version_ids = _convert_ids(version_ids)
version_ids = convert_ids(version_ids)
if not version_ids:
return []
query_filter["_id"] = {"$in": version_ids}
@ -689,7 +710,7 @@ def get_hero_version_by_subset_id(project_name, subset_id, fields=None):
Dict: Hero version entity data.
"""
subset_id = _convert_id(subset_id)
subset_id = convert_id(subset_id)
if not subset_id:
return None
@ -719,7 +740,7 @@ def get_hero_version_by_id(project_name, version_id, fields=None):
Dict: Hero version entity data.
"""
version_id = _convert_id(version_id)
version_id = convert_id(version_id)
if not version_id:
return None
@ -785,7 +806,7 @@ def get_output_link_versions(project_name, version_id, fields=None):
links for passed version.
"""
version_id = _convert_id(version_id)
version_id = convert_id(version_id)
if not version_id:
return []
@ -811,7 +832,7 @@ def get_last_versions(project_name, subset_ids, fields=None):
dict[ObjectId, int]: Key is subset id and value is last version name.
"""
subset_ids = _convert_ids(subset_ids)
subset_ids = convert_ids(subset_ids)
if not subset_ids:
return {}
@ -897,7 +918,7 @@ def get_last_version_by_subset_id(project_name, subset_id, fields=None):
Dict: Version document which can be reduced to specified 'fields'.
"""
subset_id = _convert_id(subset_id)
subset_id = convert_id(subset_id)
if not subset_id:
return None
@ -970,7 +991,7 @@ def get_representation_by_id(project_name, representation_id, fields=None):
"type": {"$in": repre_types}
}
if representation_id is not None:
query_filter["_id"] = _convert_id(representation_id)
query_filter["_id"] = convert_id(representation_id)
conn = get_project_connection(project_name)
@ -995,7 +1016,7 @@ def get_representation_by_name(
to specified 'fields'.
"""
version_id = _convert_id(version_id)
version_id = convert_id(version_id)
if not version_id or not representation_name:
return None
repre_types = ["representation", "archived_representations"]
@ -1009,17 +1030,70 @@ def get_representation_by_name(
return conn.find_one(query_filter, _prepare_fields(fields))
def _flatten_dict(data):
flatten_queue = collections.deque()
flatten_queue.append(data)
output = {}
while flatten_queue:
item = flatten_queue.popleft()
for key, value in item.items():
if not isinstance(value, dict):
output[key] = value
continue
tmp = {}
for subkey, subvalue in value.items():
new_key = "{}.{}".format(key, subkey)
tmp[new_key] = subvalue
flatten_queue.append(tmp)
return output
def _regex_filters(filters):
output = []
for key, value in filters.items():
regexes = []
a_values = []
if isinstance(value, re.Pattern):
regexes.append(value)
elif isinstance(value, (list, tuple, set)):
for item in value:
if isinstance(item, re.Pattern):
regexes.append(item)
else:
a_values.append(item)
else:
a_values.append(value)
key_filters = []
if len(a_values) == 1:
key_filters.append({key: a_values[0]})
elif a_values:
key_filters.append({key: {"$in": a_values}})
for regex in regexes:
key_filters.append({key: {"$regex": regex}})
if len(key_filters) == 1:
output.append(key_filters[0])
else:
output.append({"$or": key_filters})
return output
def _get_representations(
project_name,
representation_ids,
representation_names,
version_ids,
extensions,
context_filters,
names_by_version_ids,
standard,
archived,
fields
):
default_output = []
repre_types = []
if standard:
repre_types.append("representation")
@ -1027,7 +1101,7 @@ def _get_representations(
repre_types.append("archived_representation")
if not repre_types:
return []
return default_output
if len(repre_types) == 1:
query_filter = {"type": repre_types[0]}
@ -1035,38 +1109,62 @@ def _get_representations(
query_filter = {"type": {"$in": repre_types}}
if representation_ids is not None:
representation_ids = _convert_ids(representation_ids)
representation_ids = convert_ids(representation_ids)
if not representation_ids:
return []
return default_output
query_filter["_id"] = {"$in": representation_ids}
if representation_names is not None:
if not representation_names:
return []
return default_output
query_filter["name"] = {"$in": list(representation_names)}
if version_ids is not None:
version_ids = _convert_ids(version_ids)
version_ids = convert_ids(version_ids)
if not version_ids:
return []
return default_output
query_filter["parent"] = {"$in": version_ids}
if extensions is not None:
if not extensions:
return []
query_filter["context.ext"] = {"$in": list(extensions)}
or_queries = []
if names_by_version_ids is not None:
or_query = []
for version_id, names in names_by_version_ids.items():
if version_id and names:
or_query.append({
"parent": _convert_id(version_id),
"parent": convert_id(version_id),
"name": {"$in": list(names)}
})
if not or_query:
return default_output
or_queries.append(or_query)
if context_filters is not None:
if not context_filters:
return []
query_filter["$or"] = or_query
_flatten_filters = _flatten_dict(context_filters)
flatten_filters = {}
for key, value in _flatten_filters.items():
if not key.startswith("context"):
key = "context.{}".format(key)
flatten_filters[key] = value
for item in _regex_filters(flatten_filters):
for key, value in item.items():
if key != "$or":
query_filter[key] = value
elif value:
or_queries.append(value)
if len(or_queries) == 1:
query_filter["$or"] = or_queries[0]
elif or_queries:
and_query = []
for or_query in or_queries:
if isinstance(or_query, list):
or_query = {"$or": or_query}
and_query.append(or_query)
query_filter["$and"] = and_query
conn = get_project_connection(project_name)
@ -1078,7 +1176,7 @@ def get_representations(
representation_ids=None,
representation_names=None,
version_ids=None,
extensions=None,
context_filters=None,
names_by_version_ids=None,
archived=False,
standard=True,
@ -1096,8 +1194,8 @@ def get_representations(
as filter. Filter ignored if 'None' is passed.
version_ids (Iterable[str]): Subset ids used as parent filter. Filter
ignored if 'None' is passed.
extensions (Iterable[str]): Filter by extension of main representation
file (without dot).
context_filters (Dict[str, List[str, re.Pattern]]): Filter by
representation context fields.
names_by_version_ids (dict[ObjectId, list[str]]): Complex filtering
using version ids and list of names under the version.
archived (bool): Output will also contain archived representations.
@ -1113,7 +1211,7 @@ def get_representations(
representation_ids=representation_ids,
representation_names=representation_names,
version_ids=version_ids,
extensions=extensions,
context_filters=context_filters,
names_by_version_ids=names_by_version_ids,
standard=True,
archived=archived,
@ -1126,7 +1224,7 @@ def get_archived_representations(
representation_ids=None,
representation_names=None,
version_ids=None,
extensions=None,
context_filters=None,
names_by_version_ids=None,
fields=None
):
@ -1142,8 +1240,8 @@ def get_archived_representations(
as filter. Filter ignored if 'None' is passed.
version_ids (Iterable[str]): Subset ids used as parent filter. Filter
ignored if 'None' is passed.
extensions (Iterable[str]): Filter by extension of main representation
file (without dot).
context_filters (Dict[str, List[str, re.Pattern]]): Filter by
representation context fields.
names_by_version_ids (dict[ObjectId, List[str]]): Complex filtering
using version ids and list of names under the version.
fields (Iterable[str]): Fields that should be returned. All fields are
@ -1158,7 +1256,7 @@ def get_archived_representations(
representation_ids=representation_ids,
representation_names=representation_names,
version_ids=version_ids,
extensions=extensions,
context_filters=context_filters,
names_by_version_ids=names_by_version_ids,
standard=False,
archived=True,
@ -1181,58 +1279,64 @@ def get_representations_parents(project_name, representations):
dict[ObjectId, tuple]: Parents by representation id.
"""
repres_by_version_id = collections.defaultdict(list)
versions_by_version_id = {}
versions_by_subset_id = collections.defaultdict(list)
subsets_by_subset_id = {}
subsets_by_asset_id = collections.defaultdict(list)
repre_docs_by_version_id = collections.defaultdict(list)
version_docs_by_version_id = {}
version_docs_by_subset_id = collections.defaultdict(list)
subset_docs_by_subset_id = {}
subset_docs_by_asset_id = collections.defaultdict(list)
output = {}
for representation in representations:
repre_id = representation["_id"]
for repre_doc in representations:
repre_id = repre_doc["_id"]
version_id = repre_doc["parent"]
output[repre_id] = (None, None, None, None)
version_id = representation["parent"]
repres_by_version_id[version_id].append(representation)
repre_docs_by_version_id[version_id].append(repre_doc)
versions = get_versions(
project_name, version_ids=repres_by_version_id.keys()
version_docs = get_versions(
project_name,
version_ids=repre_docs_by_version_id.keys(),
hero=True
)
for version in versions:
version_id = version["_id"]
subset_id = version["parent"]
versions_by_version_id[version_id] = version
versions_by_subset_id[subset_id].append(version)
for version_doc in version_docs:
version_id = version_doc["_id"]
subset_id = version_doc["parent"]
version_docs_by_version_id[version_id] = version_doc
version_docs_by_subset_id[subset_id].append(version_doc)
subsets = get_subsets(
project_name, subset_ids=versions_by_subset_id.keys()
subset_docs = get_subsets(
project_name, subset_ids=version_docs_by_subset_id.keys()
)
for subset in subsets:
subset_id = subset["_id"]
asset_id = subset["parent"]
subsets_by_subset_id[subset_id] = subset
subsets_by_asset_id[asset_id].append(subset)
for subset_doc in subset_docs:
subset_id = subset_doc["_id"]
asset_id = subset_doc["parent"]
subset_docs_by_subset_id[subset_id] = subset_doc
subset_docs_by_asset_id[asset_id].append(subset_doc)
assets = get_assets(project_name, asset_ids=subsets_by_asset_id.keys())
assets_by_id = {
asset["_id"]: asset
for asset in assets
asset_docs = get_assets(
project_name, asset_ids=subset_docs_by_asset_id.keys()
)
asset_docs_by_id = {
asset_doc["_id"]: asset_doc
for asset_doc in asset_docs
}
project = get_project(project_name)
project_doc = get_project(project_name)
for version_id, representations in repres_by_version_id.items():
asset = None
subset = None
version = versions_by_version_id.get(version_id)
if version:
subset_id = version["parent"]
subset = subsets_by_subset_id.get(subset_id)
if subset:
asset_id = subset["parent"]
asset = assets_by_id.get(asset_id)
for version_id, repre_docs in repre_docs_by_version_id.items():
asset_doc = None
subset_doc = None
version_doc = version_docs_by_version_id.get(version_id)
if version_doc:
subset_id = version_doc["parent"]
subset_doc = subset_docs_by_subset_id.get(subset_id)
if subset_doc:
asset_id = subset_doc["parent"]
asset_doc = asset_docs_by_id.get(asset_id)
for representation in representations:
repre_id = representation["_id"]
output[repre_id] = (version, subset, asset, project)
for repre_doc in repre_docs:
repre_id = repre_doc["_id"]
output[repre_id] = (
version_doc, subset_doc, asset_doc, project_doc
)
return output
@ -1277,7 +1381,7 @@ def get_thumbnail_id_from_source(project_name, src_type, src_id):
if not src_type or not src_id:
return None
query_filter = {"_id": _convert_id(src_id)}
query_filter = {"_id": convert_id(src_id)}
conn = get_project_connection(project_name)
src_doc = conn.find_one(query_filter, {"data.thumbnail_id"})
@ -1304,7 +1408,7 @@ def get_thumbnails(project_name, thumbnail_ids, fields=None):
"""
if thumbnail_ids:
thumbnail_ids = _convert_ids(thumbnail_ids)
thumbnail_ids = convert_ids(thumbnail_ids)
if not thumbnail_ids:
return []
@ -1332,7 +1436,7 @@ def get_thumbnail(project_name, thumbnail_id, fields=None):
if not thumbnail_id:
return None
query_filter = {"type": "thumbnail", "_id": _convert_id(thumbnail_id)}
query_filter = {"type": "thumbnail", "_id": convert_id(thumbnail_id)}
conn = get_project_connection(project_name)
return conn.find_one(query_filter, _prepare_fields(fields))
@ -1360,7 +1464,7 @@ def get_workfile_info(
query_filter = {
"type": "workfile",
"parent": _convert_id(asset_id),
"parent": convert_id(asset_id),
"task_name": task_name,
"filename": filename
}
@ -1371,7 +1475,7 @@ def get_workfile_info(
"""
## Custom data storage:
- Settings - OP settings overrides and local settings
- Logging - logs from PypeLogger
- Logging - logs from Logger
- Webpublisher - jobs
- Ftrack - events
- Maya - Shaders

View file

@ -0,0 +1,232 @@
from .mongo import get_project_connection
from .entities import (
get_assets,
get_asset_by_id,
get_representation_by_id,
convert_id,
)
def get_linked_asset_ids(project_name, asset_doc=None, asset_id=None):
"""Extract linked asset ids from asset document.
One of asset document or asset id must be passed.
Note:
Asset links currently work only from asset to assets.
Args:
asset_doc (dict): Asset document from DB.
Returns:
List[Union[ObjectId, str]]: Asset ids of input links.
"""
output = []
if not asset_doc and not asset_id:
return output
if not asset_doc:
asset_doc = get_asset_by_id(
project_name, asset_id, fields=["data.inputLinks"]
)
input_links = asset_doc["data"].get("inputLinks")
if not input_links:
return output
for item in input_links:
# Backwards compatibility for "_id" key which was replaced with
# "id"
if "_id" in item:
link_id = item["_id"]
else:
link_id = item["id"]
output.append(link_id)
return output
def get_linked_assets(
project_name, asset_doc=None, asset_id=None, fields=None
):
"""Return linked assets based on passed asset document.
One of asset document or asset id must be passed.
Args:
project_name (str): Name of project where to look for queried entities.
asset_doc (Dict[str, Any]): Asset document from database.
asset_id (Union[ObjectId, str]): Asset id. Can be used instead of
asset document.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
Returns:
List[Dict[str, Any]]: Asset documents of input links for passed
asset doc.
"""
if not asset_doc:
if not asset_id:
return []
asset_doc = get_asset_by_id(
project_name,
asset_id,
fields=["data.inputLinks"]
)
if not asset_doc:
return []
link_ids = get_linked_asset_ids(project_name, asset_doc=asset_doc)
if not link_ids:
return []
return list(get_assets(project_name, asset_ids=link_ids, fields=fields))
def get_linked_representation_id(
project_name, repre_doc=None, repre_id=None, link_type=None, max_depth=None
):
"""Returns list of linked ids of particular type (if provided).
One of representation document or representation id must be passed.
Note:
Representation links currently work only from a representation through its
version back to representations.
Args:
project_name (str): Name of project where to look for links.
repre_doc (Dict[str, Any]): Representation document.
repre_id (Union[ObjectId, str]): Representation id.
link_type (str): Type of link (e.g. 'reference', ...).
max_depth (int): Limit recursion level. Default: 0
Returns:
List[ObjectId] Linked representation ids.
"""
if repre_doc:
repre_id = repre_doc["_id"]
if repre_id:
repre_id = convert_id(repre_id)
if not repre_id and not repre_doc:
return []
version_id = None
if repre_doc:
version_id = repre_doc.get("parent")
if not version_id:
repre_doc = get_representation_by_id(
project_name, repre_id, fields=["parent"]
)
version_id = repre_doc["parent"]
if not version_id:
return []
if max_depth is None:
max_depth = 0
match = {
"_id": version_id,
"type": {"$in": ["version", "hero_version"]}
}
graph_lookup = {
"from": project_name,
"startWith": "$data.inputLinks.id",
"connectFromField": "data.inputLinks.id",
"connectToField": "_id",
"as": "outputs_recursive",
"depthField": "depth"
}
if max_depth != 0:
# We offset by -1 since 0 basically means no recursion
# but the recursion only happens after the initial lookup
# for outputs.
graph_lookup["maxDepth"] = max_depth - 1
query_pipeline = [
# Match
{"$match": match},
# Recursive graph lookup for inputs
{"$graphLookup": graph_lookup}
]
conn = get_project_connection(project_name)
result = conn.aggregate(query_pipeline)
referenced_version_ids = _process_referenced_pipeline_result(
result, link_type
)
if not referenced_version_ids:
return []
ref_ids = conn.distinct(
"_id",
filter={
"parent": {"$in": list(referenced_version_ids)},
"type": "representation"
}
)
return list(ref_ids)
def _process_referenced_pipeline_result(result, link_type):
"""Filters result from pipeline for particular link_type.
Pipeline cannot use link_type directly in a query.
Returns:
(list)
"""
referenced_version_ids = set()
correctly_linked_ids = set()
for item in result:
input_links = item["data"].get("inputLinks")
if not input_links:
continue
_filter_input_links(
input_links,
link_type,
correctly_linked_ids
)
# outputs_recursive in random order, sort by depth
outputs_recursive = item.get("outputs_recursive")
if not outputs_recursive:
continue
for output in sorted(outputs_recursive, key=lambda o: o["depth"]):
output_links = output["data"].get("inputLinks")
if not output_links:
continue
# Leaf
if output["_id"] not in correctly_linked_ids:
continue
_filter_input_links(
output_links,
link_type,
correctly_linked_ids
)
referenced_version_ids.add(output["_id"])
return referenced_version_ids
def _filter_input_links(input_links, link_type, correctly_linked_ids):
for input_link in input_links:
if link_type and input_link["type"] != link_type:
continue
link_id = input_link.get("id") or input_link.get("_id")
if link_id is not None:
correctly_linked_ids.add(link_id)

openpype/client/notes.md Normal file

@ -0,0 +1,39 @@
# Client functionality
## Reason
Preparation for the OpenPype v4 server. The goal is to remove direct mongo calls from the codebase and prepare it for a different source of data, so database calls are treated less as mongo calls and more universally. To do so, a simple wrapper around database calls was implemented so that code does not rely on pymongo-specific functionality.
The current goal is not to build a universal database model that could easily be replaced with any other data source, but to get as close to that as possible. The current implementation of OpenPype is too tightly coupled to pymongo and its abilities, so we're getting closer through long-term changes that can be used even in the current state.
## Queries
Query functions don't use the full potential of mongo queries, such as very specific queries based on subdictionaries or unknown structures. We avoid such calls as much as possible because they probably won't be available in the future. If one is really necessary, a new function can be added, but only if it is reasonable for the overall logic. All query functions were moved to `~/client/entities.py`. Each function exposes arguments for the available filters and allows reducing the returned keys for each entity.
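As an illustration only - the project and asset names below are made up, and the import path is assumed from the module layout described above - a typical query chain looks like this:

```python
from openpype.client import get_asset_by_name, get_linked_assets

project_name = "demo_project"  # hypothetical project name

# Query a single asset and reduce returned keys to what is actually needed.
asset_doc = get_asset_by_name(
    project_name, "sh010", fields=["_id", "name", "data.inputLinks"]
)

# Follow its input links (in v3 asset links point only to other assets).
for linked in get_linked_assets(
    project_name, asset_doc=asset_doc, fields=["name"]
):
    print(linked["name"])
```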
## Changes
Changes are a little bit more complicated. Mongo offers many ways an update can happen, which had to be reduced, and at this stage it would be complicated to validate values that are created or updated, so there is almost no automation at this point. Changes are made using the operations available in `~/client/operations.py`. Each operation requires a project name and an entity type, and may require operation-specific data.
### Create
Create operations expect already prepared document data; helper functions creating skeletal document structures are available for that (they do not fill all required data), and except for `_id` all the data should be correct. Existence of the entity is not validated, so if the same creation operation is sent n times it will create the entity n times, which can cause issues.
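A minimal sketch of a create operation, assuming a hypothetical project name and asset id (the import path follows the module layout described above):

```python
from openpype.client.operations import OperationsSession, new_workfile_info_doc

project_name = "demo_project"          # hypothetical project
asset_id = "507f1f77bcf86cd799439011"  # hypothetical asset id

# Skeleton document - everything except '_id' should already be correct.
workfile_doc = new_workfile_info_doc(
    filename="sh010_layout_v001.blend",
    asset_id=asset_id,
    task_name="layout",
    files=["{root[work]}/demo_project/sh010/sh010_layout_v001.blend"],
    data={"note": "first version"},
)

session = OperationsSession()
session.create_entity(project_name, workfile_doc["type"], workfile_doc)
session.commit()
```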
### Update
Update operations require the entity id and the keys that should be changed; the update dictionary must have the form `{"key": value}`. If a value should be set in a nested dictionary, the key must contain all subkeys joined with a dot `.` (e.g. `{"data": {"fps": 25}}` -> `{"data.fps": 25}`). To simplify building update dictionaries, helper functions named after the template `prepare_<entity type>_update_data` do that for you; they work by comparing the previous document with the new document. If a function is missing for a requested entity type, it is because we didn't need it yet and it still requires implementation.
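A sketch continuing the hypothetical workfile example above; `update_entity` is assumed to exist on the session next to `create_entity`/`delete_entity`, and the exact keys in the prepared dict depend on the compare logic:

```python
import copy

from openpype.client.operations import (
    OperationsSession,
    prepare_workfile_info_update_data,
)

new_workfile_doc = copy.deepcopy(workfile_doc)
new_workfile_doc["data"]["note"] = "approved by supervisor"

# Nested changes are flattened to dot-joined keys, e.g. {"data.note": "..."}.
# An empty dict means the documents are identical.
update_data = prepare_workfile_info_update_data(workfile_doc, new_workfile_doc)

if update_data:
    session = OperationsSession()
    session.update_entity(  # assumed counterpart of 'create_entity'
        project_name, "workfile", workfile_doc["_id"], update_data
    )
    session.commit()
```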
### Delete
Delete operations need only the entity id. The entity will be deleted from mongo.
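And the matching delete, again continuing the hypothetical workfile example:

```python
from openpype.client.operations import OperationsSession

session = OperationsSession()
session.delete_entity(project_name, "workfile", workfile_doc["_id"])
session.commit()
```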
## What (probably) won't be replaced
Some parts of the code still use direct mongo calls. In most cases these are very specific, module-specific calls, or their usage will change completely in the future.
- Mongo calls that are not project specific (outside the `avalon` collection) will be removed or will have to use a different mechanism for storing the data. At this moment this concerns OpenPype settings and logs, ftrack server events and some other data.
- Sync server queries. They're complex and very specific to the sync server module. Their replacement will require specific calls to the OpenPype server in v4, so abstracting them with a wrapper is irrelevant and would complicate production in v3.
- Project managers (ftrack, kitsu, shotgrid, the embedded Project Manager, etc.). Project managers create, update or remove assets in v3, but in v4 they will create folders with a different structure. Wrapping creation of assets would not help prepare for v4 because of the new data structures. The same can be said about the editorial Extract Hierarchy Avalon plugin, which creates the project structure.
- Code parts that are marked as deprecated in v3 or will be deprecated in v4.
- integrate asset legacy publish plugin - already legacy, kept for safety
- integrate thumbnail - thumbnails will be stored in a different way in v4
- input links - links will be stored in a different way and will have a different linking mechanism. In v3, links are limited to the same entity type: "asset <-> asset" or "representation <-> representation".
## Known missing replacements
- change subset group in loader tool
- integrate subset group
- query input links in openpype lib
- create project in openpype lib
- save/create workfile doc in openpype lib
- integrate hero version


@ -1,3 +1,4 @@
import re
import uuid
import copy
import collections
@ -8,15 +9,23 @@ from bson.objectid import ObjectId
from pymongo import DeleteOne, InsertOne, UpdateOne
from .mongo import get_project_connection
from .entities import get_project
REMOVED_VALUE = object()
PROJECT_NAME_ALLOWED_SYMBOLS = "a-zA-Z0-9_"
PROJECT_NAME_REGEX = re.compile(
"^[{}]+$".format(PROJECT_NAME_ALLOWED_SYMBOLS)
)
CURRENT_PROJECT_SCHEMA = "openpype:project-3.0"
CURRENT_PROJECT_CONFIG_SCHEMA = "openpype:config-2.0"
CURRENT_ASSET_DOC_SCHEMA = "openpype:asset-3.0"
CURRENT_SUBSET_SCHEMA = "openpype:subset-3.0"
CURRENT_VERSION_SCHEMA = "openpype:version-3.0"
CURRENT_REPRESENTATION_SCHEMA = "openpype:representation-2.0"
CURRENT_WORKFILE_INFO_SCHEMA = "openpype:workfile-1.0"
CURRENT_THUMBNAIL_SCHEMA = "openpype:thumbnail-1.0"
def _create_or_convert_to_mongo_id(mongo_id):
@ -188,6 +197,61 @@ def new_representation_doc(
}
def new_thumbnail_doc(data=None, entity_id=None):
"""Create skeleton data of thumbnail document.
Args:
data (Dict[str, Any]): Thumbnail document data.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of thumbnail document.
"""
if data is None:
data = {}
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"type": "thumbnail",
"schema": CURRENT_THUMBNAIL_SCHEMA,
"data": data
}
def new_workfile_info_doc(
filename, asset_id, task_name, files, data=None, entity_id=None
):
"""Create skeleton data of workfile info document.
Workfile document is at this moment used primarily for artist notes.
Args:
filename (str): Filename of workfile.
asset_id (Union[str, ObjectId]): Id of asset under which the workfile lives.
task_name (str): Task under which the workfile was created.
files (List[str]): List of rootless filepaths related to the workfile.
data (Dict[str, Any]): Additional metadata.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of workfile info document.
"""
if not data:
data = {}
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"type": "workfile",
"parent": ObjectId(asset_id),
"task_name": task_name,
"filename": filename,
"data": data,
"files": files
}
def _prepare_update_data(old_doc, new_doc, replace):
changes = {}
for key, value in new_doc.items():
@ -243,6 +307,20 @@ def prepare_representation_update_data(old_doc, new_doc, replace=True):
return _prepare_update_data(old_doc, new_doc, replace)
def prepare_workfile_info_update_data(old_doc, new_doc, replace=True):
"""Compare two workfile info documents and prepare update data.
Based on compared values will create update data for 'UpdateOperation'.
Empty output means that documents are identical.
Returns:
Dict[str, Any]: Changes between old and new document.
"""
return _prepare_update_data(old_doc, new_doc, replace)
@six.add_metaclass(ABCMeta)
class AbstractOperation(object):
"""Base operation class.
@ -397,7 +475,7 @@ class UpdateOperation(AbstractOperation):
set_data = {}
for key, value in self._update_data.items():
if value is REMOVED_VALUE:
unset_data[key] = value
unset_data[key] = None
else:
set_data[key] = value
@ -585,3 +663,89 @@ class OperationsSession(object):
operation = DeleteOperation(project_name, entity_type, entity_id)
self.add(operation)
return operation
def create_project(project_name, project_code, library_project=False):
"""Create project using OpenPype settings.
This project creation function does not validate the project document on
creation. That is because the project document is created blindly with only
the minimum required information about the project, which is its name, code,
type and schema.
The entered project name must be unique and the project must not exist yet.
Note:
This function is here to be OP v4 ready, but in v3 it has more logic
to do. That's why the inner imports are in the body.
Args:
project_name(str): New project name. Should be unique.
project_code(str): Project's code should be unique too.
library_project(bool): Project is library project.
Raises:
ValueError: When project name already exists in MongoDB.
Returns:
dict: Created project document.
"""
from openpype.settings import ProjectSettings, SaveWarningExc
from openpype.pipeline.schema import validate
if get_project(project_name, fields=["name"]):
raise ValueError("Project with name \"{}\" already exists".format(
project_name
))
if not PROJECT_NAME_REGEX.match(project_name):
raise ValueError((
"Project name \"{}\" contain invalid characters"
).format(project_name))
project_doc = {
"type": "project",
"name": project_name,
"data": {
"code": project_code,
"library_project": library_project
},
"schema": CURRENT_PROJECT_SCHEMA
}
op_session = OperationsSession()
# Insert document with basic data
create_op = op_session.create_entity(
project_name, project_doc["type"], project_doc
)
op_session.commit()
# Load ProjectSettings for the project and save it to store all attributes
# and Anatomy
try:
project_settings_entity = ProjectSettings(project_name)
project_settings_entity.save()
except SaveWarningExc as exc:
print(str(exc))
except Exception:
op_session.delete_entity(
project_name, project_doc["type"], create_op.entity_id
)
op_session.commit()
raise
project_doc = get_project(project_name)
try:
# Validate created project document
validate(project_doc)
except Exception:
# Remove project if is not valid
op_session.delete_entity(
project_name, project_doc["type"], create_op.entity_id
)
op_session.commit()
raise
return project_doc


@ -1,11 +1,11 @@
import os
import shutil
from openpype.lib import (
PreLaunchHook,
get_custom_workfile_template_by_context,
from openpype.lib import PreLaunchHook
from openpype.settings import get_project_settings
from openpype.pipeline.workfile import (
get_custom_workfile_template,
get_custom_workfile_template_by_string_context
)
from openpype.settings import get_project_settings
class CopyTemplateWorkfile(PreLaunchHook):
@ -54,41 +54,22 @@ class CopyTemplateWorkfile(PreLaunchHook):
project_name = self.data["project_name"]
asset_name = self.data["asset_name"]
task_name = self.data["task_name"]
host_name = self.application.host_name
project_settings = get_project_settings(project_name)
host_settings = project_settings[self.application.host_name]
workfile_builder_settings = host_settings.get("workfile_builder")
if not workfile_builder_settings:
# TODO remove warning when deprecated
self.log.warning((
"Seems like old version of settings is used."
" Can't access custom templates in host \"{}\"."
).format(self.application.full_label))
return
if not workfile_builder_settings["create_first_version"]:
self.log.info((
"Project \"{}\" has turned off to create first workfile for"
" application \"{}\""
).format(project_name, self.application.full_label))
return
# Backwards compatibility
template_profiles = workfile_builder_settings.get("custom_templates")
if not template_profiles:
self.log.info(
"Custom templates are not filled. Skipping template copy."
)
return
project_doc = self.data.get("project_doc")
asset_doc = self.data.get("asset_doc")
anatomy = self.data.get("anatomy")
if project_doc and asset_doc:
self.log.debug("Started filtering of custom template paths.")
template_path = get_custom_workfile_template_by_context(
template_profiles, project_doc, asset_doc, task_name, anatomy
template_path = get_custom_workfile_template(
project_doc,
asset_doc,
task_name,
host_name,
anatomy,
project_settings
)
else:
@ -96,10 +77,13 @@ class CopyTemplateWorkfile(PreLaunchHook):
"Global data collection probably did not execute."
" Using backup solution."
))
dbcon = self.data.get("dbcon")
template_path = get_custom_workfile_template_by_string_context(
template_profiles, project_name, asset_name, task_name,
dbcon, anatomy
project_name,
asset_name,
task_name,
host_name,
anatomy,
project_settings
)
if not template_path:


@ -1,8 +1,6 @@
import os
from openpype.lib import (
PreLaunchHook,
create_workdir_extra_folders
)
from openpype.lib import PreLaunchHook
from openpype.pipeline.workfile import create_workdir_extra_folders
class AddLastWorkfileToLaunchArgs(PreLaunchHook):


@ -1,13 +1,22 @@
from .host import (
HostBase,
)
from .interfaces import (
IWorkfileHost,
ILoadHost,
INewPublisher,
)
from .dirmap import HostDirmap
__all__ = (
"HostBase",
"IWorkfileHost",
"ILoadHost",
"INewPublisher",
"HostDirmap",
)

openpype/host/dirmap.py Normal file

@ -0,0 +1,205 @@
"""Dirmap functionality used in host integrations inside DCCs.
The idea for the current dirmap implementation comes from Maya, where it is
possible to enter source and destination roots and Maya will try to replace
each source found in a referenced file with each of the destination paths.
The first path which exists is used.
"""
import os
from abc import ABCMeta, abstractmethod
import six
from openpype.lib import Logger
from openpype.modules import ModulesManager
from openpype.settings import get_project_settings
from openpype.settings.lib import get_site_local_overrides
@six.add_metaclass(ABCMeta)
class HostDirmap(object):
"""Abstract class for running dirmap on a workfile in a host.
Dirmap is used to translate paths inside of host workfile from one
OS to another. (Eg. arstist created workfile on Win, different artists
opens same file on Linux.)
Expects methods to be implemented inside of host:
on_dirmap_enabled: run host code for enabling dirmap
do_dirmap: run host code to do actual remapping
"""
def __init__(
self, host_name, project_name, project_settings=None, sync_module=None
):
self.host_name = host_name
self.project_name = project_name
self._project_settings = project_settings
self._sync_module = sync_module # to limit reinit of Modules
self._log = None
self._mapping = None # cache mapping
@property
def sync_module(self):
if self._sync_module is None:
manager = ModulesManager()
self._sync_module = manager["sync_server"]
return self._sync_module
@property
def project_settings(self):
if self._project_settings is None:
self._project_settings = get_project_settings(self.project_name)
return self._project_settings
@property
def log(self):
if self._log is None:
self._log = Logger.get_logger(self.__class__.__name__)
return self._log
@abstractmethod
def on_enable_dirmap(self):
"""Run host dependent operation for enabling dirmap if necessary."""
pass
@abstractmethod
def dirmap_routine(self, source_path, destination_path):
"""Run host dependent remapping from source_path to destination_path"""
pass
def process_dirmap(self):
# type: () -> None
"""Go through all paths in settings and remap them using `dirmap`.
If the artist has Site Sync enabled, take the dirmap mapping directly
from Local Settings when the artist is syncing the workfile locally.
"""
if not self._mapping:
self._mapping = self.get_mappings(self.project_settings)
if not self._mapping:
return
self.log.info("Processing directory mapping ...")
self.on_enable_dirmap()
self.log.info("mapping:: {}".format(self._mapping))
for k, sp in enumerate(self._mapping["source-path"]):
dst = self._mapping["destination-path"][k]
try:
print("{} -> {}".format(sp, dst))
self.dirmap_routine(sp, dst)
except IndexError:
# missing corresponding destination path
self.log.error((
"invalid dirmap mapping, missing corresponding"
" destination directory."
))
break
except RuntimeError:
self.log.error(
"invalid path {} -> {}, mapping not registered".format(
sp, dst
)
)
continue
def get_mappings(self, project_settings):
"""Get translation from source-path to destination-path.
It checks if Site Sync is enabled and the user chose to use the local
site; in that case the configuration in Local Settings takes precedence.
"""
local_mapping = self._get_local_sync_dirmap(project_settings)
dirmap_label = "{}-dirmap".format(self.host_name)
if (
not self.project_settings[self.host_name].get(dirmap_label)
and not local_mapping
):
return {}
mapping_settings = self.project_settings[self.host_name][dirmap_label]
mapping_enabled = mapping_settings["enabled"] or bool(local_mapping)
if not mapping_enabled:
return {}
mapping = (
local_mapping
or mapping_settings["paths"]
or {}
)
if (
not mapping
or not mapping.get("destination-path")
or not mapping.get("source-path")
):
return {}
return mapping
def _get_local_sync_dirmap(self, project_settings):
"""
Return dirmap if sync to a local project is enabled.
The only valid mapping is from the roots of the remote site to the local
site set in Local Settings.
Args:
project_settings (dict)
Returns:
dict: {"source-path": [XXX], "destination-path": [YYYY]}
"""
mapping = {}
if not project_settings["global"]["sync_server"]["enabled"]:
return mapping
project_name = os.getenv("AVALON_PROJECT")
active_site = self.sync_module.get_local_normalized_site(
self.sync_module.get_active_site(project_name))
remote_site = self.sync_module.get_local_normalized_site(
self.sync_module.get_remote_site(project_name))
self.log.debug(
"active {} - remote {}".format(active_site, remote_site)
)
if (
active_site == "local"
and project_name in self.sync_module.get_enabled_projects()
and active_site != remote_site
):
sync_settings = self.sync_module.get_sync_project_setting(
project_name,
exclude_locals=False,
cached=False)
active_overrides = get_site_local_overrides(
project_name, active_site)
remote_overrides = get_site_local_overrides(
project_name, remote_site)
self.log.debug("local overrides {}".format(active_overrides))
self.log.debug("remote overrides {}".format(remote_overrides))
for root_name, active_site_dir in active_overrides.items():
remote_site_dir = (
remote_overrides.get(root_name)
or sync_settings["sites"][remote_site]["root"][root_name]
)
if os.path.isdir(active_site_dir):
if "destination-path" not in mapping:
mapping["destination-path"] = []
mapping["destination-path"].append(active_site_dir)
if "source-path" not in mapping:
mapping["source-path"] = []
mapping["source-path"].append(remote_site_dir)
self.log.debug("local sync mapping:: {}".format(mapping))
return mapping


@ -1,30 +1,12 @@
import logging
import contextlib
from abc import ABCMeta, abstractproperty, abstractmethod
from abc import ABCMeta, abstractproperty
import six
# NOTE can't import 'typing' because of issues in Maya 2020
# - shiboken crashes on 'typing' module import
class MissingMethodsError(ValueError):
"""Exception when host miss some required methods for specific workflow.
Args:
host (HostBase): Host implementation where are missing methods.
missing_methods (list[str]): List of missing methods.
"""
def __init__(self, host, missing_methods):
joined_missing = ", ".join(
['"{}"'.format(item) for item in missing_methods]
)
message = (
"Host \"{}\" miss methods {}".format(host.name, joined_missing)
)
super(MissingMethodsError, self).__init__(message)
@six.add_metaclass(ABCMeta)
class HostBase(object):
"""Base of host implementation class.
@ -178,347 +160,3 @@ class HostBase(object):
yield
finally:
pass
class ILoadHost:
"""Implementation requirements to be able use reference of representations.
The load plugins can do referencing even without implementation of methods
here, but switch and removement of containers would not be possible.
Questions:
- Is list container dependency of host or load plugins?
- Should this be directly in HostBase?
- how to find out if referencing is available?
- do we need to know that?
"""
@staticmethod
def get_missing_load_methods(host):
"""Look for missing methods on "old type" host implementation.
Method is used for validation of implemented functions related to
loading. Checks only existence of methods.
Args:
Union[ModuleType, HostBase]: Object of host where to look for
required methods.
Returns:
list[str]: Missing method implementations for loading workflow.
"""
if isinstance(host, ILoadHost):
return []
required = ["ls"]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
return missing
@staticmethod
def validate_load_methods(host):
"""Validate implemented methods of "old type" host for load workflow.
Args:
Union[ModuleType, HostBase]: Object of host to validate.
Raises:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = ILoadHost.get_missing_load_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@abstractmethod
def get_containers(self):
"""Retreive referenced containers from scene.
This can be implemented in hosts where referencing can be used.
Todo:
Rename function to something more self explanatory.
Suggestion: 'get_containers'
Returns:
list[dict]: Information about loaded containers.
"""
pass
# --- Deprecated method names ---
def ls(self):
"""Deprecated variant of 'get_containers'.
Todo:
Remove when all usages are replaced.
"""
return self.get_containers()
@six.add_metaclass(ABCMeta)
class IWorkfileHost:
"""Implementation requirements to be able use workfile utils and tool."""
@staticmethod
def get_missing_workfile_methods(host):
"""Look for missing methods on "old type" host implementation.
Method is used for validation of implemented functions related to
workfiles. Checks only existence of methods.
Args:
Union[ModuleType, HostBase]: Object of host where to look for
required methods.
Returns:
list[str]: Missing method implementations for workfiles workflow.
"""
if isinstance(host, IWorkfileHost):
return []
required = [
"open_file",
"save_file",
"current_file",
"has_unsaved_changes",
"file_extensions",
"work_root",
]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
return missing
@staticmethod
def validate_workfile_methods(host):
"""Validate methods of "old type" host for workfiles workflow.
Args:
Union[ModuleType, HostBase]: Object of host to validate.
Raises:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = IWorkfileHost.get_missing_workfile_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@abstractmethod
def get_workfile_extensions(self):
"""Extensions that can be used as save.
Questions:
This could potentially use 'HostDefinition'.
"""
return []
@abstractmethod
def save_workfile(self, dst_path=None):
"""Save currently opened scene.
Args:
dst_path (str): Where the current scene should be saved. Or use
current path if 'None' is passed.
"""
pass
@abstractmethod
def open_workfile(self, filepath):
"""Open passed filepath in the host.
Args:
filepath (str): Path to workfile.
"""
pass
@abstractmethod
def get_current_workfile(self):
"""Retreive path to current opened file.
Returns:
str: Path to file which is currently opened.
None: If nothing is opened.
"""
return None
def workfile_has_unsaved_changes(self):
"""Currently opened scene is saved.
Not all hosts can know if current scene is saved because the API of
DCC does not support it.
Returns:
bool: True if scene is saved and False if has unsaved
modifications.
None: Can't tell if workfiles has modifications.
"""
return None
def work_root(self, session):
"""Modify workdir per host.
Default implementation keeps workdir untouched.
Warnings:
We must handle this modification with more sofisticated way because
this can't be called out of DCC so opening of last workfile
(calculated before DCC is launched) is complicated. Also breaking
defined work template is not a good idea.
Only place where it's really used and can make sense is Maya. There
workspace.mel can modify subfolders where to look for maya files.
Args:
session (dict): Session context data.
Returns:
str: Path to new workdir.
"""
return session["AVALON_WORKDIR"]
# --- Deprecated method names ---
def file_extensions(self):
"""Deprecated variant of 'get_workfile_extensions'.
Todo:
Remove when all usages are replaced.
"""
return self.get_workfile_extensions()
def save_file(self, dst_path=None):
"""Deprecated variant of 'save_workfile'.
Todo:
Remove when all usages are replaced.
"""
self.save_workfile()
def open_file(self, filepath):
"""Deprecated variant of 'open_workfile'.
Todo:
Remove when all usages are replaced.
"""
return self.open_workfile(filepath)
def current_file(self):
"""Deprecated variant of 'get_current_workfile'.
Todo:
Remove when all usages are replaced.
"""
return self.get_current_workfile()
def has_unsaved_changes(self):
"""Deprecated variant of 'workfile_has_unsaved_changes'.
Todo:
Remove when all usages are replaced.
"""
return self.workfile_has_unsaved_changes()
class INewPublisher:
"""Functions related to new creation system in new publisher.
New publisher is not storing information only about each created instance
but also some global data. At this moment are data related only to context
publish plugins but that can extend in future.
"""
@staticmethod
def get_missing_publish_methods(host):
"""Look for missing methods on "old type" host implementation.
Method is used for validation of implemented functions related to
new publish creation. Checks only existence of methods.
Args:
Union[ModuleType, HostBase]: Host module where to look for
required methods.
Returns:
list[str]: Missing method implementations for new publsher
workflow.
"""
if isinstance(host, INewPublisher):
return []
required = [
"get_context_data",
"update_context_data",
]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
return missing
@staticmethod
def validate_publish_methods(host):
"""Validate implemented methods of "old type" host.
Args:
Union[ModuleType, HostBase]: Host module to validate.
Raises:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = INewPublisher.get_missing_publish_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@abstractmethod
def get_context_data(self):
"""Get global data related to creation-publishing from workfile.
These data are not related to any created instance but to whole
publishing context. Not saving/returning them will cause that each
reset of publishing resets all values to default ones.
Context data can contain information about enabled/disabled publish
plugins or other values that can be filled by artist.
Returns:
dict: Context data stored using 'update_context_data'.
"""
pass
@abstractmethod
def update_context_data(self, data, changes):
"""Store global context data to workfile.
Called when some values in context data has changed.
Without storing the values in a way that 'get_context_data' would
return them will each reset of publishing cause loose of filled values
by artist. Best practice is to store values into workfile, if possible.
Args:
data (dict): New data as are.
changes (dict): Only data that has been changed. Each value has
tuple with '(<old>, <new>)' value.
"""
pass

openpype/host/interfaces.py Normal file

@ -0,0 +1,370 @@
from abc import ABCMeta, abstractmethod
import six
class MissingMethodsError(ValueError):
"""Exception when host miss some required methods for specific workflow.
Args:
host (HostBase): Host implementation where are missing methods.
missing_methods (list[str]): List of missing methods.
"""
def __init__(self, host, missing_methods):
joined_missing = ", ".join(
['"{}"'.format(item) for item in missing_methods]
)
host_name = getattr(host, "name", None)
if not host_name:
try:
host_name = host.__file__.replace("\\", "/").split("/")[-3]
except Exception:
host_name = str(host)
message = (
"Host \"{}\" miss methods {}".format(host_name, joined_missing)
)
super(MissingMethodsError, self).__init__(message)
class ILoadHost:
"""Implementation requirements to be able use reference of representations.
The load plugins can do referencing even without implementation of methods
here, but switch and removement of containers would not be possible.
Questions:
- Is list container dependency of host or load plugins?
- Should this be directly in HostBase?
- how to find out if referencing is available?
- do we need to know that?
"""
@staticmethod
def get_missing_load_methods(host):
"""Look for missing methods on "old type" host implementation.
Method is used for validation of implemented functions related to
loading. Checks only existence of methods.
Args:
Union[ModuleType, HostBase]: Object of host where to look for
required methods.
Returns:
list[str]: Missing method implementations for loading workflow.
"""
if isinstance(host, ILoadHost):
return []
required = ["ls"]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
return missing
@staticmethod
def validate_load_methods(host):
"""Validate implemented methods of "old type" host for load workflow.
Args:
Union[ModuleType, HostBase]: Object of host to validate.
Raises:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = ILoadHost.get_missing_load_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@abstractmethod
def get_containers(self):
"""Retreive referenced containers from scene.
This can be implemented in hosts where referencing can be used.
Todo:
Rename function to something more self explanatory.
Suggestion: 'get_containers'
Returns:
list[dict]: Information about loaded containers.
"""
pass
# --- Deprecated method names ---
def ls(self):
"""Deprecated variant of 'get_containers'.
Todo:
Remove when all usages are replaced.
"""
return self.get_containers()
@six.add_metaclass(ABCMeta)
class IWorkfileHost:
"""Implementation requirements to be able use workfile utils and tool."""
@staticmethod
def get_missing_workfile_methods(host):
"""Look for missing methods on "old type" host implementation.
Method is used for validation of implemented functions related to
workfiles. Checks only existence of methods.
Args:
Union[ModuleType, HostBase]: Object of host where to look for
required methods.
Returns:
list[str]: Missing method implementations for workfiles workflow.
"""
if isinstance(host, IWorkfileHost):
return []
required = [
"open_file",
"save_file",
"current_file",
"has_unsaved_changes",
"file_extensions",
"work_root",
]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
return missing
@staticmethod
def validate_workfile_methods(host):
"""Validate methods of "old type" host for workfiles workflow.
Args:
Union[ModuleType, HostBase]: Object of host to validate.
Raises:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = IWorkfileHost.get_missing_workfile_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@abstractmethod
def get_workfile_extensions(self):
"""Extensions that can be used as save.
Questions:
This could potentially use 'HostDefinition'.
"""
return []
@abstractmethod
def save_workfile(self, dst_path=None):
"""Save currently opened scene.
Args:
dst_path (str): Where the current scene should be saved. Or use
current path if 'None' is passed.
"""
pass
@abstractmethod
def open_workfile(self, filepath):
"""Open passed filepath in the host.
Args:
filepath (str): Path to workfile.
"""
pass
@abstractmethod
def get_current_workfile(self):
"""Retreive path to current opened file.
Returns:
str: Path to file which is currently opened.
None: If nothing is opened.
"""
return None
def workfile_has_unsaved_changes(self):
"""Currently opened scene is saved.
Not all hosts can know if current scene is saved because the API of
DCC does not support it.
Returns:
bool: True if scene is saved and False if has unsaved
modifications.
None: Can't tell if workfiles has modifications.
"""
return None
def work_root(self, session):
"""Modify workdir per host.
Default implementation keeps workdir untouched.
Warnings:
We must handle this modification in a more sophisticated way because
this can't be called outside of the DCC, so opening the last workfile
(calculated before the DCC is launched) is complicated. Also, breaking
the defined work template is not a good idea.
The only place where it's really used and makes sense is Maya. There,
workspace.mel can modify the subfolders where to look for maya files.
Args:
session (dict): Session context data.
Returns:
str: Path to new workdir.
"""
return session["AVALON_WORKDIR"]
# --- Deprecated method names ---
def file_extensions(self):
"""Deprecated variant of 'get_workfile_extensions'.
Todo:
Remove when all usages are replaced.
"""
return self.get_workfile_extensions()
def save_file(self, dst_path=None):
"""Deprecated variant of 'save_workfile'.
Todo:
Remove when all usages are replaced.
"""
self.save_workfile()
def open_file(self, filepath):
"""Deprecated variant of 'open_workfile'.
Todo:
Remove when all usages are replaced.
"""
return self.open_workfile(filepath)
def current_file(self):
"""Deprecated variant of 'get_current_workfile'.
Todo:
Remove when all usages are replaced.
"""
return self.get_current_workfile()
def has_unsaved_changes(self):
"""Deprecated variant of 'workfile_has_unsaved_changes'.
Todo:
Remove when all usages are replaced.
"""
return self.workfile_has_unsaved_changes()
class INewPublisher:
"""Functions related to new creation system in new publisher.
The new publisher stores not only information about each created instance
but also some global data. At this moment the data are related only to context
publish plugins, but that can be extended in the future.
"""
@staticmethod
def get_missing_publish_methods(host):
"""Look for missing methods on "old type" host implementation.
Method is used for validation of implemented functions related to
new publish creation. Checks only existence of methods.
Args:
Union[ModuleType, HostBase]: Host module where to look for
required methods.
Returns:
list[str]: Missing method implementations for the new publisher
workflow.
"""
if isinstance(host, INewPublisher):
return []
required = [
"get_context_data",
"update_context_data",
]
missing = []
for name in required:
if not hasattr(host, name):
missing.append(name)
return missing
@staticmethod
def validate_publish_methods(host):
"""Validate implemented methods of "old type" host.
Args:
Union[ModuleType, HostBase]: Host module to validate.
Raises:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = INewPublisher.get_missing_publish_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@abstractmethod
def get_context_data(self):
"""Get global data related to creation-publishing from workfile.
These data are not related to any created instance but to the whole
publishing context. Not saving/returning them will cause each reset of
publishing to reset all values to default ones.
Context data can contain information about enabled/disabled publish
plugins or other values that can be filled by artist.
Returns:
dict: Context data stored using 'update_context_data'.
"""
pass
@abstractmethod
def update_context_data(self, data, changes):
"""Store global context data to workfile.
Called when some values in the context data have changed.
Without storing the values in a way that 'get_context_data' would
return them, each reset of publishing causes loss of the values filled
by the artist. Best practice is to store the values into the workfile, if possible.
Args:
data (dict): New data as are.
changes (dict): Only data that has been changed. Each value has
tuple with '(<old>, <new>)' value.
"""
pass


@ -1,9 +1,6 @@
def add_implementation_envs(env, _app):
"""Modify environments to contain all required for implementation."""
defaults = {
"OPENPYPE_LOG_NO_COLORS": "True",
"WEBSOCKET_URL": "ws://localhost:8097/ws/"
}
for key, value in defaults.items():
if not env.get(key):
env[key] = value
from .addon import AfterEffectsAddon
__all__ = (
"AfterEffectsAddon",
)


@ -0,0 +1,23 @@
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
class AfterEffectsAddon(OpenPypeModule, IHostAddon):
name = "aftereffects"
host_name = "aftereffects"
def initialize(self, module_settings):
self.enabled = True
def add_implementation_envs(self, env, _app):
"""Modify environments to contain all required for implementation."""
defaults = {
"OPENPYPE_LOG_NO_COLORS": "True",
"WEBSOCKET_URL": "ws://localhost:8097/ws/"
}
for key, value in defaults.items():
if not env.get(key):
env[key] = value
def get_workfile_extensions(self):
return [".aep"]


@ -1,13 +1,16 @@
import os
import sys
import re
import json
import contextlib
import traceback
import logging
from functools import partial
from Qt import QtWidgets
from openpype.pipeline import install_host
from openpype.lib.remote_publish import headless_publish
from openpype.modules import ModulesManager
from openpype.tools.utils import host_tools
from .launch_logic import ProcessLauncher, get_stub
@ -35,10 +38,18 @@ def main(*subprocess_args):
launcher.start()
if os.environ.get("HEADLESS_PUBLISH"):
launcher.execute_in_main_thread(lambda: headless_publish(
log,
"CloseAE",
os.environ.get("IS_TEST")))
manager = ModulesManager()
webpublisher_addon = manager["webpublisher"]
launcher.execute_in_main_thread(
partial(
webpublisher_addon.headless_publish,
log,
"CloseAE",
os.environ.get("IS_TEST")
)
)
elif os.environ.get("AVALON_PHOTOSHOP_WORKFILES_ON_LAUNCH", True):
save = False
if os.getenv("WORKFILES_SAVE_AS"):
@ -68,3 +79,57 @@ def get_extension_manifest_path():
"CSXS",
"manifest.xml"
)
def get_unique_layer_name(layers, name):
"""
Gets all layer names and, if 'name' is present among them, increases
the suffix by 1 (e.g. creates a unique layer name - for the Loader).
Args:
layers (list): of strings, names only
name (string): checked value
Returns:
(string): name_00X (without version)
"""
names = {}
for layer in layers:
layer_name = re.sub(r'_\d{3}$', '', layer)
if layer_name in names.keys():
names[layer_name] = names[layer_name] + 1
else:
names[layer_name] = 1
occurrences = names.get(name, 0)
return "{}_{:0>3d}".format(name, occurrences + 1)
def get_background_layers(file_url):
"""
Pulls file names from the background json file and enriches them with the
folder url so AE is able to import the files.
Order is important; it follows the order in the json.
Args:
file_url (str): abs url of background json
Returns:
(list): of abs paths to images
"""
with open(file_url) as json_file:
data = json.load(json_file)
layers = list()
bg_folder = os.path.dirname(file_url)
for child in data['children']:
if child.get("filename"):
layers.append(os.path.join(bg_folder, child.get("filename")).
replace("\\", "/"))
else:
for layer in child['children']:
if layer.get("filename"):
layers.append(os.path.join(bg_folder,
layer.get("filename")).
replace("\\", "/"))
return layers


@ -1,12 +1,11 @@
"""Host API required Work Files tool"""
import os
from openpype.pipeline import HOST_WORKFILE_EXTENSIONS
from .launch_logic import get_stub
def file_extensions():
return HOST_WORKFILE_EXTENSIONS["aftereffects"]
return [".aep"]
def has_unsaved_changes():


@ -11,6 +11,8 @@ class AEWorkfileCreator(AutoCreator):
identifier = "workfile"
family = "workfile"
default_variant = "Main"
def get_instance_attr_defs(self):
return []
@ -35,7 +37,6 @@ class AEWorkfileCreator(AutoCreator):
existing_instance = instance
break
variant = ''
project_name = legacy_io.Session["AVALON_PROJECT"]
asset_name = legacy_io.Session["AVALON_ASSET"]
task_name = legacy_io.Session["AVALON_TASK"]
@ -44,15 +45,17 @@ class AEWorkfileCreator(AutoCreator):
if existing_instance is None:
asset_doc = get_asset_by_name(project_name, asset_name)
subset_name = self.get_subset_name(
variant, task_name, asset_doc, project_name, host_name
self.default_variant, task_name, asset_doc,
project_name, host_name
)
data = {
"asset": asset_name,
"task": task_name,
"variant": variant
"variant": self.default_variant
}
data.update(self.get_dynamic_data(
variant, task_name, asset_doc, project_name, host_name
self.default_variant, task_name, asset_doc,
project_name, host_name
))
new_instance = CreatedInstance(
@ -69,7 +72,9 @@ class AEWorkfileCreator(AutoCreator):
):
asset_doc = get_asset_by_name(project_name, asset_name)
subset_name = self.get_subset_name(
variant, task_name, asset_doc, project_name, host_name
self.default_variant, task_name, asset_doc,
project_name, host_name
)
existing_instance["asset"] = asset_name
existing_instance["task"] = task_name
existing_instance["subset"] = subset_name


@ -1,14 +1,14 @@
import re
from openpype.lib import (
get_background_layers,
get_unique_layer_name
)
from openpype.pipeline import get_representation_path
from openpype.hosts.aftereffects.api import (
AfterEffectsLoader,
containerise
)
from openpype.hosts.aftereffects.api.lib import (
get_background_layers,
get_unique_layer_name,
)
class BackgroundLoader(AfterEffectsLoader):


@ -1,12 +1,11 @@
import re
from openpype import lib
from openpype.pipeline import get_representation_path
from openpype.hosts.aftereffects.api import (
AfterEffectsLoader,
containerise
)
from openpype.hosts.aftereffects.api.lib import get_unique_layer_name
class FileLoader(AfterEffectsLoader):
@ -28,7 +27,7 @@ class FileLoader(AfterEffectsLoader):
stub = self.get_stub()
layers = stub.get_items(comps=True, folders=True, footages=True)
existing_layers = [layer.name for layer in layers]
comp_name = lib.get_unique_layer_name(
comp_name = get_unique_layer_name(
existing_layers, "{}_{}".format(context["asset"]["name"], name))
import_options = {}
@ -87,7 +86,7 @@ class FileLoader(AfterEffectsLoader):
if namespace_from_container != layer_name:
layers = stub.get_items(comps=True)
existing_layers = [layer.name for layer in layers]
layer_name = lib.get_unique_layer_name(
layer_name = get_unique_layer_name(
existing_layers,
"{}_{}".format(context["asset"], context["subset"]))
else: # switching version - keep same name


@ -1,8 +1,8 @@
import os
import pyblish.api
from openpype.lib import get_subset_name_with_asset_doc
from openpype.pipeline import legacy_io
from openpype.pipeline.create import get_subset_name
class CollectWorkfile(pyblish.api.ContextPlugin):
@ -11,6 +11,8 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
label = "Collect After Effects Workfile Instance"
order = pyblish.api.CollectorOrder + 0.1
default_variant = "Main"
def process(self, context):
existing_instance = None
for instance in context:
@ -69,13 +71,14 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
# workfile instance
family = "workfile"
subset = get_subset_name_with_asset_doc(
subset = get_subset_name(
family,
"",
self.default_variant,
context.data["anatomyData"]["task"]["name"],
context.data["assetEntity"],
context.data["anatomyData"]["project"]["name"],
host_name=context.data["hostName"]
host_name=context.data["hostName"],
project_settings=context.data["project_settings"]
)
# Create instance
instance = context.create_instance(subset)


@ -2,14 +2,18 @@ import os
import sys
import six
import openpype.api
from openpype.lib import (
get_ffmpeg_tool_path,
run_subprocess,
)
from openpype.pipeline import publish
from openpype.hosts.aftereffects.api import get_stub
class ExtractLocalRender(openpype.api.Extractor):
class ExtractLocalRender(publish.Extractor):
"""Render RenderQueue locally."""
order = openpype.api.Extractor.order - 0.47
order = publish.Extractor.order - 0.47
label = "Extract Local Render"
hosts = ["aftereffects"]
families = ["renderLocal", "render.local"]
@ -53,7 +57,7 @@ class ExtractLocalRender(openpype.api.Extractor):
instance.data["representations"] = [repre_data]
ffmpeg_path = openpype.lib.get_ffmpeg_tool_path("ffmpeg")
ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")
# Generate thumbnail.
thumbnail_path = os.path.join(staging_dir, "thumbnail.jpg")
@ -66,7 +70,7 @@ class ExtractLocalRender(openpype.api.Extractor):
]
self.log.debug("Thumbnail args:: {}".format(args))
try:
output = openpype.lib.run_subprocess(args)
output = run_subprocess(args)
except TypeError:
self.log.warning("Error in creating thumbnail")
six.reraise(*sys.exc_info())


@ -1,13 +1,13 @@
import pyblish.api
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.aftereffects.api import get_stub
class ExtractSaveScene(pyblish.api.ContextPlugin):
"""Save scene before extraction."""
order = openpype.api.Extractor.order - 0.48
order = publish.Extractor.order - 0.48
label = "Extract Save Scene"
hosts = ["aftereffects"]


@ -1,6 +1,6 @@
import pyblish.api
from openpype.action import get_errored_plugins_from_data
from openpype.lib import version_up
from openpype.pipeline.publish import get_errored_plugins_from_context
from openpype.hosts.aftereffects.api import get_stub
@ -18,7 +18,7 @@ class IncrementWorkfile(pyblish.api.InstancePlugin):
optional = True
def process(self, instance):
errored_plugins = get_errored_plugins_from_data(instance.context)
errored_plugins = get_errored_plugins_from_context(instance.context)
if errored_plugins:
raise RuntimeError(
"Skipping incrementing current file because publishing failed."


@ -1,8 +1,8 @@
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.aftereffects.api import get_stub
class RemovePublishHighlight(openpype.api.Extractor):
class RemovePublishHighlight(publish.Extractor):
"""Clean utf characters which are not working in DL
Published compositions are marked with unicode icon which causes
@ -10,7 +10,7 @@ class RemovePublishHighlight(openpype.api.Extractor):
rendering, add it later back to avoid confusion.
"""
order = openpype.api.Extractor.order - 0.49 # just before save
order = publish.Extractor.order - 0.49 # just before save
label = "Clean render comp"
hosts = ["aftereffects"]
families = ["render.farm"]


@ -1,9 +1,9 @@
import pyblish.api
import openpype.api
from openpype.pipeline import (
from openpype.pipeline import legacy_io
from openpype.pipeline.publish import (
ValidateContentsOrder,
PublishXmlValidationError,
legacy_io,
)
from openpype.hosts.aftereffects.api import get_stub
@ -50,7 +50,7 @@ class ValidateInstanceAsset(pyblish.api.InstancePlugin):
label = "Validate Instance Asset"
hosts = ["aftereffects"]
actions = [ValidateInstanceAssetRepair]
order = openpype.api.ValidateContentsOrder
order = ValidateContentsOrder
def process(self, instance):
instance_asset = instance.data["asset"]


@ -1,52 +1,6 @@
import os
from .addon import BlenderAddon
def add_implementation_envs(env, _app):
"""Modify environments to contain all required for implementation."""
# Prepare path to implementation script
implementation_user_script_path = os.path.join(
os.path.dirname(os.path.abspath(__file__)),
"blender_addon"
)
# Add blender implementation script path to PYTHONPATH
python_path = env.get("PYTHONPATH") or ""
python_path_parts = [
path
for path in python_path.split(os.pathsep)
if path
]
python_path_parts.insert(0, implementation_user_script_path)
env["PYTHONPATH"] = os.pathsep.join(python_path_parts)
# Modify Blender user scripts path
previous_user_scripts = set()
# Implementation path is added to set for easier paths check inside loops
# - will be removed at the end
previous_user_scripts.add(implementation_user_script_path)
openpype_blender_user_scripts = (
env.get("OPENPYPE_BLENDER_USER_SCRIPTS") or ""
)
for path in openpype_blender_user_scripts.split(os.pathsep):
if path:
previous_user_scripts.add(os.path.normpath(path))
blender_user_scripts = env.get("BLENDER_USER_SCRIPTS") or ""
for path in blender_user_scripts.split(os.pathsep):
if path:
previous_user_scripts.add(os.path.normpath(path))
# Remove implementation path from user script paths as is set to
# `BLENDER_USER_SCRIPTS`
previous_user_scripts.remove(implementation_user_script_path)
env["BLENDER_USER_SCRIPTS"] = implementation_user_script_path
# Set custom user scripts env
env["OPENPYPE_BLENDER_USER_SCRIPTS"] = os.pathsep.join(
previous_user_scripts
)
# Define Qt binding if not defined
if not env.get("QT_PREFERRED_BINDING"):
env["QT_PREFERRED_BINDING"] = "PySide2"
__all__ = (
"BlenderAddon",
)


@ -0,0 +1,73 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
BLENDER_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
class BlenderAddon(OpenPypeModule, IHostAddon):
name = "blender"
host_name = "blender"
def initialize(self, module_settings):
self.enabled = True
def add_implementation_envs(self, env, _app):
"""Modify environments to contain all required for implementation."""
# Prepare path to implementation script
implementation_user_script_path = os.path.join(
BLENDER_ROOT_DIR,
"blender_addon"
)
# Add blender implementation script path to PYTHONPATH
python_path = env.get("PYTHONPATH") or ""
python_path_parts = [
path
for path in python_path.split(os.pathsep)
if path
]
python_path_parts.insert(0, implementation_user_script_path)
env["PYTHONPATH"] = os.pathsep.join(python_path_parts)
# Modify Blender user scripts path
previous_user_scripts = set()
# Implementation path is added to set for easier paths check inside
# loops - will be removed at the end
previous_user_scripts.add(implementation_user_script_path)
openpype_blender_user_scripts = (
env.get("OPENPYPE_BLENDER_USER_SCRIPTS") or ""
)
for path in openpype_blender_user_scripts.split(os.pathsep):
if path:
previous_user_scripts.add(os.path.normpath(path))
blender_user_scripts = env.get("BLENDER_USER_SCRIPTS") or ""
for path in blender_user_scripts.split(os.pathsep):
if path:
previous_user_scripts.add(os.path.normpath(path))
# Remove implementation path from user script paths as is set to
# `BLENDER_USER_SCRIPTS`
previous_user_scripts.remove(implementation_user_script_path)
env["BLENDER_USER_SCRIPTS"] = implementation_user_script_path
# Set custom user scripts env
env["OPENPYPE_BLENDER_USER_SCRIPTS"] = os.pathsep.join(
previous_user_scripts
)
# Define Qt binding if not defined
if not env.get("QT_PREFERRED_BINDING"):
env["QT_PREFERRED_BINDING"] = "PySide2"
def get_launch_hook_paths(self, app):
if app.host_name != self.host_name:
return []
return [
os.path.join(BLENDER_ROOT_DIR, "hooks")
]
def get_workfile_extensions(self):
return [".blend"]


@ -2,7 +2,7 @@ import bpy
import pyblish.api
from openpype.api import get_errored_instances_from_context
from openpype.pipeline.publish import get_errored_instances_from_context
class SelectInvalidAction(pyblish.api.Action):


@ -234,7 +234,7 @@ def lsattrs(attrs: Dict) -> List:
def read(node: bpy.types.bpy_struct_meta_idprop):
"""Return user-defined attributes from `node`"""
data = dict(node.get(pipeline.AVALON_PROPERTY))
data = dict(node.get(pipeline.AVALON_PROPERTY, {}))
# Ignore hidden/internal data
data = {


@ -26,7 +26,7 @@ PREVIEW_COLLECTIONS: Dict = dict()
# This seems like a good value to keep the Qt app responsive and not slow
# down Blender. At least on macOS the interface of Blender gets very laggy if
# you make it smaller.
TIMER_INTERVAL: float = 0.01
TIMER_INTERVAL: float = 0.01 if platform.system() == "Windows" else 0.1
class BlenderApplication(QtWidgets.QApplication):
@ -164,6 +164,12 @@ def _process_app_events() -> Optional[float]:
dialog.setDetailedText(detail)
dialog.exec_()
# Refresh Manager
if GlobalClass.app:
manager = GlobalClass.app.get_window("WM_OT_avalon_manager")
if manager:
manager.refresh()
if not GlobalClass.is_windows:
if OpenFileCacher.opening_file:
return TIMER_INTERVAL
@ -192,10 +198,11 @@ class LaunchQtApp(bpy.types.Operator):
self._app = BlenderApplication.get_app()
GlobalClass.app = self._app
bpy.app.timers.register(
_process_app_events,
persistent=True
)
if not bpy.app.timers.is_registered(_process_app_events):
bpy.app.timers.register(
_process_app_events,
persistent=True
)
def execute(self, context):
"""Execute the operator.


@ -5,8 +5,6 @@ from typing import List, Optional
import bpy
from openpype.pipeline import HOST_WORKFILE_EXTENSIONS
class OpenFileCacher:
"""Store information about opening file.
@ -78,7 +76,7 @@ def has_unsaved_changes() -> bool:
def file_extensions() -> List[str]:
"""Return the supported file extensions for Blender scene files."""
return HOST_WORKFILE_EXTENSIONS["blender"]
return [".blend"]
def work_root(session: dict) -> str:


@ -1,4 +1,10 @@
from openpype.pipeline import install_host
from openpype.hosts.blender import api
install_host(api)
def register():
install_host(api)
def unregister():
pass


@ -6,12 +6,12 @@ from typing import Dict, List, Optional
import bpy
from openpype import lib
from openpype.pipeline import (
legacy_create,
get_representation_path,
AVALON_CONTAINER_ID,
)
from openpype.pipeline.create import get_legacy_creator_by_name
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import (
AVALON_CONTAINERS,
@ -157,7 +157,7 @@ class BlendLayoutLoader(plugin.AssetLoader):
t.id = local_obj
elif local_obj.type == 'EMPTY':
creator_plugin = lib.get_creator_by_name("CreateAnimation")
creator_plugin = get_legacy_creator_by_name("CreateAnimation")
if not creator_plugin:
raise ValueError("Creator plugin \"CreateAnimation\" was "
"not found.")


@ -118,7 +118,7 @@ class JsonLayoutLoader(plugin.AssetLoader):
# Camera creation when loading a layout is not necessary for now,
# but the code is worth keeping in case we need it in the future.
# # Create the camera asset and the camera instance
# creator_plugin = lib.get_creator_by_name("CreateCamera")
# creator_plugin = get_legacy_creator_by_name("CreateCamera")
# if not creator_plugin:
# raise ValueError("Creator plugin \"CreateCamera\" was "
# "not found.")


@ -6,12 +6,12 @@ from typing import Dict, List, Optional
import bpy
from openpype import lib
from openpype.pipeline import (
legacy_create,
get_representation_path,
AVALON_CONTAINER_ID,
)
from openpype.pipeline.create import get_legacy_creator_by_name
from openpype.hosts.blender.api import (
plugin,
get_selection,
@ -244,7 +244,7 @@ class BlendRigLoader(plugin.AssetLoader):
objects = self._process(libpath, asset_group, group_name, action)
if create_animation:
creator_plugin = lib.get_creator_by_name("CreateAnimation")
creator_plugin = get_legacy_creator_by_name("CreateAnimation")
if not creator_plugin:
raise ValueError("Creator plugin \"CreateAnimation\" was "
"not found.")


@ -1,6 +1,19 @@
import os
import bpy
import pyblish.api
from openpype.pipeline import legacy_io
from openpype.hosts.blender.api import workio
class SaveWorkfiledAction(pyblish.api.Action):
"""Save Workfile."""
label = "Save Workfile"
on = "failed"
icon = "save"
def process(self, context, plugin):
bpy.ops.wm.avalon_workfiles()
class CollectBlenderCurrentFile(pyblish.api.ContextPlugin):
@ -8,12 +21,52 @@ class CollectBlenderCurrentFile(pyblish.api.ContextPlugin):
order = pyblish.api.CollectorOrder - 0.5
label = "Blender Current File"
hosts = ['blender']
hosts = ["blender"]
actions = [SaveWorkfiledAction]
def process(self, context):
"""Inject the current working file"""
current_file = bpy.data.filepath
context.data['currentFile'] = current_file
current_file = workio.current_file()
assert current_file != '', "Current file is empty. " \
"Save the file before continuing."
context.data["currentFile"] = current_file
assert current_file, (
"Current file is empty. Save the file before continuing."
)
folder, file = os.path.split(current_file)
filename, ext = os.path.splitext(file)
task = legacy_io.Session["AVALON_TASK"]
data = {}
# create instance
instance = context.create_instance(name=filename)
subset = "workfile" + task.capitalize()
data.update({
"subset": subset,
"asset": os.getenv("AVALON_ASSET", None),
"label": subset,
"publish": True,
"family": "workfile",
"families": ["workfile"],
"setMembers": [current_file],
"frameStart": bpy.context.scene.frame_start,
"frameEnd": bpy.context.scene.frame_end,
})
data["representations"] = [{
"name": ext.lstrip("."),
"ext": ext.lstrip("."),
"files": file,
"stagingDir": folder,
}]
instance.data.update(data)
self.log.info("Collected instance: {}".format(file))
self.log.info("Scene path: {}".format(current_file))
self.log.info("staging Dir: {}".format(folder))
self.log.info("subset: {}".format(subset))

View file

@ -2,12 +2,12 @@ import os
import bpy
from openpype import api
from openpype.pipeline import publish
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
class ExtractABC(api.Extractor):
class ExtractABC(publish.Extractor):
"""Extract as ABC."""
label = "Extract ABC"

View file

@ -2,10 +2,10 @@ import os
import bpy
import openpype.api
from openpype.pipeline import publish
class ExtractBlend(openpype.api.Extractor):
class ExtractBlend(publish.Extractor):
"""Extract a blend file."""
label = "Extract Blend"

View file

@ -2,10 +2,10 @@ import os
import bpy
import openpype.api
from openpype.pipeline import publish
class ExtractBlendAnimation(openpype.api.Extractor):
class ExtractBlendAnimation(publish.Extractor):
"""Extract a blend file."""
label = "Extract Blend"

View file

@ -2,11 +2,11 @@ import os
import bpy
from openpype import api
from openpype.pipeline import publish
from openpype.hosts.blender.api import plugin
class ExtractCamera(api.Extractor):
class ExtractCamera(publish.Extractor):
"""Extract as the camera as FBX."""
label = "Extract Camera"

View file

@ -2,12 +2,12 @@ import os
import bpy
from openpype import api
from openpype.pipeline import publish
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
class ExtractFBX(api.Extractor):
class ExtractFBX(publish.Extractor):
"""Extract as FBX."""
label = "Extract FBX"

View file

@ -5,12 +5,12 @@ import bpy
import bpy_extras
import bpy_extras.anim_utils
from openpype import api
from openpype.pipeline import publish
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
class ExtractAnimationFBX(api.Extractor):
class ExtractAnimationFBX(publish.Extractor):
"""Extract as animation."""
label = "Extract FBX"

View file

@ -6,12 +6,12 @@ import bpy_extras
import bpy_extras.anim_utils
from openpype.client import get_representation_by_name
from openpype.pipeline import publish
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
import openpype.api
class ExtractLayout(openpype.api.Extractor):
class ExtractLayout(publish.Extractor):
"""Extract a layout."""
label = "Extract Layout"
@ -180,7 +180,7 @@ class ExtractLayout(openpype.api.Extractor):
"rotation": {
"x": asset.rotation_euler.x,
"y": asset.rotation_euler.y,
"z": asset.rotation_euler.z,
"z": asset.rotation_euler.z
},
"scale": {
"x": asset.scale.x,
@ -189,6 +189,18 @@ class ExtractLayout(openpype.api.Extractor):
}
}
json_element["transform_matrix"] = []
for row in list(asset.matrix_world.transposed()):
json_element["transform_matrix"].append(list(row))
json_element["basis"] = [
[1, 0, 0, 0],
[0, -1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]
]
# Extract the animation as well
if family == "rig":
f, n = self._export_animation(

View file

@ -1,9 +1,11 @@
from typing import List
import mathutils
import bpy
import pyblish.api
import openpype.api
import openpype.hosts.blender.api.action
from openpype.pipeline.publish import ValidateContentsOrder
class ValidateCameraZeroKeyframe(pyblish.api.InstancePlugin):
@ -14,21 +16,18 @@ class ValidateCameraZeroKeyframe(pyblish.api.InstancePlugin):
in Unreal and Blender.
"""
order = openpype.api.ValidateContentsOrder
order = ValidateContentsOrder
hosts = ["blender"]
families = ["camera"]
category = "geometry"
version = (0, 1, 0)
label = "Zero Keyframe"
actions = [openpype.hosts.blender.api.action.SelectInvalidAction]
_identity = mathutils.Matrix()
@classmethod
def get_invalid(cls, instance) -> List:
@staticmethod
def get_invalid(instance) -> List:
invalid = []
for obj in [obj for obj in instance]:
if obj.type == "CAMERA":
for obj in instance:
if isinstance(obj, bpy.types.Object) and obj.type == "CAMERA":
if obj.animation_data and obj.animation_data.action:
action = obj.animation_data.action
frames_set = set()
@ -45,4 +44,5 @@ class ValidateCameraZeroKeyframe(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError(
f"Object found in instance is not in Object Mode: {invalid}")
f"Camera must have a keyframe at frame 0: {invalid}"
)

View file

@ -3,13 +3,14 @@ from typing import List
import bpy
import pyblish.api
import openpype.api
import openpype.hosts.blender.api.action
class ValidateMeshHasUvs(pyblish.api.InstancePlugin):
"""Validate that the current mesh has UV's."""
order = pyblish.api.ValidatorOrder
order = openpype.api.ValidateContentsOrder
hosts = ["blender"]
families = ["model"]
category = "geometry"
@ -25,7 +26,10 @@ class ValidateMeshHasUvs(pyblish.api.InstancePlugin):
for uv_layer in obj.data.uv_layers:
for polygon in obj.data.polygons:
for loop_index in polygon.loop_indices:
if not uv_layer.data[loop_index].uv:
if (
loop_index >= len(uv_layer.data)
or not uv_layer.data[loop_index].uv
):
return False
return True
@ -33,20 +37,20 @@ class ValidateMeshHasUvs(pyblish.api.InstancePlugin):
@classmethod
def get_invalid(cls, instance) -> List:
invalid = []
# TODO (jasper): only check objects in the collection that will be published?
for obj in [
obj for obj in instance]:
try:
if obj.type == 'MESH':
# Make sure we are in object mode.
bpy.ops.object.mode_set(mode='OBJECT')
if not cls.has_uvs(obj):
invalid.append(obj)
except:
continue
for obj in instance:
if isinstance(obj, bpy.types.Object) and obj.type == 'MESH':
if obj.mode != "OBJECT":
cls.log.warning(
f"Mesh object {obj.name} should be in 'OBJECT' mode"
" to be properly checked."
)
if not cls.has_uvs(obj):
invalid.append(obj)
return invalid
def process(self, instance):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError(f"Meshes found in instance without valid UV's: {invalid}")
raise RuntimeError(
f"Meshes found in instance without valid UV's: {invalid}"
)

View file

@ -3,28 +3,27 @@ from typing import List
import bpy
import pyblish.api
import openpype.api
import openpype.hosts.blender.api.action
class ValidateMeshNoNegativeScale(pyblish.api.Validator):
"""Ensure that meshes don't have a negative scale."""
order = pyblish.api.ValidatorOrder
order = openpype.api.ValidateContentsOrder
hosts = ["blender"]
families = ["model"]
category = "geometry"
label = "Mesh No Negative Scale"
actions = [openpype.hosts.blender.api.action.SelectInvalidAction]
@staticmethod
def get_invalid(instance) -> List:
invalid = []
# TODO (jasper): only check objects in the collection that will be published?
for obj in [
obj for obj in bpy.data.objects if obj.type == 'MESH'
]:
if any(v < 0 for v in obj.scale):
invalid.append(obj)
for obj in instance:
if isinstance(obj, bpy.types.Object) and obj.type == 'MESH':
if any(v < 0 for v in obj.scale):
invalid.append(obj)
return invalid
def process(self, instance):

View file

@ -1,7 +1,11 @@
from typing import List
import bpy
import pyblish.api
import openpype.api
import openpype.hosts.blender.api.action
from openpype.pipeline.publish import ValidateContentsOrder
class ValidateNoColonsInName(pyblish.api.InstancePlugin):
@ -12,20 +16,20 @@ class ValidateNoColonsInName(pyblish.api.InstancePlugin):
"""
order = openpype.api.ValidateContentsOrder
order = ValidateContentsOrder
hosts = ["blender"]
families = ["model", "rig"]
version = (0, 1, 0)
label = "No Colons in names"
actions = [openpype.hosts.blender.api.action.SelectInvalidAction]
@classmethod
def get_invalid(cls, instance) -> List:
@staticmethod
def get_invalid(instance) -> List:
invalid = []
for obj in [obj for obj in instance]:
for obj in instance:
if ':' in obj.name:
invalid.append(obj)
if obj.type == 'ARMATURE':
if isinstance(obj, bpy.types.Object) and obj.type == 'ARMATURE':
for bone in obj.data.bones:
if ':' in bone.name:
invalid.append(obj)
@ -36,4 +40,5 @@ class ValidateNoColonsInName(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError(
f"Objects found with colon in name: {invalid}")
f"Objects found with colon in name: {invalid}"
)

View file

@ -1,5 +1,7 @@
from typing import List
import bpy
import pyblish.api
import openpype.hosts.blender.api.action
@ -10,26 +12,21 @@ class ValidateObjectIsInObjectMode(pyblish.api.InstancePlugin):
order = pyblish.api.ValidatorOrder - 0.01
hosts = ["blender"]
families = ["model", "rig", "layout"]
category = "geometry"
label = "Validate Object Mode"
actions = [openpype.hosts.blender.api.action.SelectInvalidAction]
optional = False
@classmethod
def get_invalid(cls, instance) -> List:
@staticmethod
def get_invalid(instance) -> List:
invalid = []
for obj in [obj for obj in instance]:
try:
if obj.type == 'MESH' or obj.type == 'ARMATURE':
# Check if the object is in object mode.
if not obj.mode == 'OBJECT':
invalid.append(obj)
except Exception:
continue
for obj in instance:
if isinstance(obj, bpy.types.Object) and obj.mode != "OBJECT":
invalid.append(obj)
return invalid
def process(self, instance):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError(
f"Object found in instance is not in Object Mode: {invalid}")
f"Object found in instance is not in Object Mode: {invalid}"
)

View file

@ -1,9 +1,12 @@
from typing import List
import mathutils
import bpy
import pyblish.api
import openpype.api
import openpype.hosts.blender.api.action
from openpype.pipeline.publish import ValidateContentsOrder
class ValidateTransformZero(pyblish.api.InstancePlugin):
@ -15,10 +18,9 @@ class ValidateTransformZero(pyblish.api.InstancePlugin):
"""
order = openpype.api.ValidateContentsOrder
order = ValidateContentsOrder
hosts = ["blender"]
families = ["model"]
category = "geometry"
version = (0, 1, 0)
label = "Transform Zero"
actions = [openpype.hosts.blender.api.action.SelectInvalidAction]
@ -28,8 +30,11 @@ class ValidateTransformZero(pyblish.api.InstancePlugin):
@classmethod
def get_invalid(cls, instance) -> List:
invalid = []
for obj in [obj for obj in instance]:
if obj.matrix_basis != cls._identity:
for obj in instance:
if (
isinstance(obj, bpy.types.Object)
and obj.matrix_basis != cls._identity
):
invalid.append(obj)
return invalid
@ -37,4 +42,6 @@ class ValidateTransformZero(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError(
f"Object found in instance is not in Object Mode: {invalid}")
"Object found in instance has not"
f" transform to zero: {invalid}"
)

View file

@ -14,7 +14,7 @@ from openpype.tools.utils import host_tools
from openpype.pipeline import install_openpype_plugins
log = Logger().get_logger("Celaction_cli_publisher")
log = Logger.get_logger("Celaction_cli_publisher")
publish_host = "celaction"

View file

@ -1,22 +1,10 @@
import os
HOST_DIR = os.path.dirname(
os.path.abspath(__file__)
from .addon import (
HOST_DIR,
FlameAddon,
)
def add_implementation_envs(env, _app):
# Add requirements to DL_PYTHON_HOOK_PATH
pype_root = os.environ["OPENPYPE_REPOS_ROOT"]
env["DL_PYTHON_HOOK_PATH"] = os.path.join(
pype_root, "openpype", "hosts", "flame", "startup")
env.pop("QT_AUTO_SCREEN_SCALE_FACTOR", None)
# Set default values if they are not already set via settings
defaults = {
"LOGLEVEL": "DEBUG"
}
for key, value in defaults.items():
if not env.get(key):
env[key] = value
__all__ = (
"HOST_DIR",
"FlameAddon",
)

View file

@ -0,0 +1,36 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
HOST_DIR = os.path.dirname(os.path.abspath(__file__))
class FlameAddon(OpenPypeModule, IHostAddon):
name = "flame"
host_name = "flame"
def initialize(self, module_settings):
self.enabled = True
def add_implementation_envs(self, env, _app):
# Add requirements to DL_PYTHON_HOOK_PATH
env["DL_PYTHON_HOOK_PATH"] = os.path.join(HOST_DIR, "startup")
env.pop("QT_AUTO_SCREEN_SCALE_FACTOR", None)
# Set default values if they are not already set via settings
defaults = {
"LOGLEVEL": "DEBUG"
}
for key, value in defaults.items():
if not env.get(key):
env[key] = value
def get_launch_hook_paths(self, app):
if app.host_name != self.host_name:
return []
return [
os.path.join(HOST_DIR, "hooks")
]
def get_workfile_extensions(self):
return [".otoc"]

View file

@ -30,7 +30,8 @@ from .lib import (
maintained_temp_file_path,
get_clip_segment,
get_batch_group_from_desktop,
MediaInfoFile
MediaInfoFile,
TimeEffectMetadata
)
from .utils import (
setup,
@ -107,6 +108,7 @@ __all__ = [
"get_clip_segment",
"get_batch_group_from_desktop",
"MediaInfoFile",
"TimeEffectMetadata",
# pipeline
"install",

View file

@ -5,10 +5,11 @@ import json
import pickle
import clique
import tempfile
import traceback
import itertools
import contextlib
import xml.etree.cElementTree as cET
from copy import deepcopy
from copy import deepcopy, copy
from xml.etree import ElementTree as ET
from pprint import pformat
from .constants import (
@ -266,7 +267,7 @@ def get_current_sequence(selection):
def rescan_hooks():
import flame
try:
flame.execute_shortcut('Rescan Python Hooks')
flame.execute_shortcut("Rescan Python Hooks")
except Exception:
pass
@ -1082,21 +1083,21 @@ class MediaInfoFile(object):
xml_data (ET.Element): clip data
"""
try:
for out_track in xml_data.iter('track'):
for out_feed in out_track.iter('feed'):
for out_track in xml_data.iter("track"):
for out_feed in out_track.iter("feed"):
# start frame
out_feed_nb_ticks_obj = out_feed.find(
'startTimecode/nbTicks')
"startTimecode/nbTicks")
self.start_frame = out_feed_nb_ticks_obj.text
# fps
out_feed_fps_obj = out_feed.find(
'startTimecode/rate')
"startTimecode/rate")
self.fps = out_feed_fps_obj.text
# drop frame mode
out_feed_drop_mode_obj = out_feed.find(
'startTimecode/dropMode')
"startTimecode/dropMode")
self.drop_mode = out_feed_drop_mode_obj.text
break
except Exception as msg:
@ -1118,8 +1119,153 @@ class MediaInfoFile(object):
tree = cET.ElementTree(xml_element_data)
tree.write(
fpath, xml_declaration=True,
method='xml', encoding='UTF-8'
method="xml", encoding="UTF-8"
)
except IOError as error:
raise IOError(
"Not able to write data to file: {}".format(error))
class TimeEffectMetadata(object):
log = log
_data = {}
_retime_modes = {
0: "speed",
1: "timewarp",
2: "duration"
}
def __init__(self, segment, logger=None):
if logger:
self.log = logger
self._data = self._get_metadata(segment)
@property
def data(self):
""" Returns timewarp effect data
Returns:
dict: retime data
"""
return self._data
def _get_metadata(self, segment):
effects = segment.effects or []
for effect in effects:
if effect.type == "Timewarp":
with maintained_temp_file_path(".timewarp_node") as tmp_path:
self.log.info("Temp File: {}".format(tmp_path))
effect.save_setup(tmp_path)
return self._get_attributes_from_xml(tmp_path)
return {}
def _get_attributes_from_xml(self, tmp_path):
with open(tmp_path, "r") as tw_setup_file:
tw_setup_string = tw_setup_file.read()
tw_setup_file.close()
tw_setup_xml = ET.fromstring(tw_setup_string)
tw_setup = self._dictify(tw_setup_xml)
# pprint(tw_setup)
try:
tw_setup_state = tw_setup["Setup"]["State"][0]
mode = int(
tw_setup_state["TW_RetimerMode"][0]["_text"]
)
r_data = {
"type": self._retime_modes[mode],
"effectStart": int(
tw_setup["Setup"]["Base"][0]["Range"][0]["Start"]),
"effectEnd": int(
tw_setup["Setup"]["Base"][0]["Range"][0]["End"])
}
if mode == 0: # speed
r_data[self._retime_modes[mode]] = float(
tw_setup_state["TW_Speed"]
[0]["Channel"][0]["Value"][0]["_text"]
) / 100
elif mode == 1: # timewarp
print("timing")
r_data[self._retime_modes[mode]] = self._get_anim_keys(
tw_setup_state["TW_Timing"]
)
elif mode == 2: # duration
r_data[self._retime_modes[mode]] = {
"start": {
"source": int(
tw_setup_state["TW_DurationTiming"][0]["Channel"]
[0]["KFrames"][0]["Key"][0]["Value"][0]["_text"]
),
"timeline": int(
tw_setup_state["TW_DurationTiming"][0]["Channel"]
[0]["KFrames"][0]["Key"][0]["Frame"][0]["_text"]
)
},
"end": {
"source": int(
tw_setup_state["TW_DurationTiming"][0]["Channel"]
[0]["KFrames"][0]["Key"][1]["Value"][0]["_text"]
),
"timeline": int(
tw_setup_state["TW_DurationTiming"][0]["Channel"]
[0]["KFrames"][0]["Key"][1]["Frame"][0]["_text"]
)
}
}
except Exception:
lines = traceback.format_exception(*sys.exc_info())
self.log.error("\n".join(lines))
return
return r_data
def _get_anim_keys(self, setup_cat, index=None):
return_data = {
"extrapolation": (
setup_cat[0]["Channel"][0]["Extrap"][0]["_text"]
),
"animKeys": []
}
for key in setup_cat[0]["Channel"][0]["KFrames"][0]["Key"]:
if index and int(key["Index"]) != index:
continue
key_data = {
"source": float(key["Value"][0]["_text"]),
"timeline": float(key["Frame"][0]["_text"]),
"index": int(key["Index"]),
"curveMode": key["CurveMode"][0]["_text"],
"curveOrder": key["CurveOrder"][0]["_text"]
}
if key.get("TangentMode"):
key_data["tangentMode"] = key["TangentMode"][0]["_text"]
return_data["animKeys"].append(key_data)
return return_data
def _dictify(self, xml_, root=True):
""" Convert xml object to dictionary
Args:
xml_ (xml.etree.ElementTree.Element): xml data
root (bool, optional): whether the element is the XML root. Defaults to True.
Returns:
dict: dictionarized xml
"""
if root:
return {xml_.tag: self._dictify(xml_, False)}
d = copy(xml_.attrib)
if xml_.text:
d["_text"] = xml_.text
for x in xml_.findall("./*"):
if x.tag not in d:
d[x.tag] = []
d[x.tag].append(self._dictify(x, False))
return d
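A toy illustration (assumed input, not a real Timewarp setup) of what _dictify produces: child elements become lists keyed by tag, and element text lands under "_text":

from xml.etree import ElementTree as ET

xml_ = ET.fromstring(
    "<Setup><State><TW_RetimerMode>0</TW_RetimerMode></State></Setup>"
)
# TimeEffectMetadata._dictify(xml_) would return roughly:
# {"Setup": {"State": [{"TW_RetimerMode": [{"_text": "0"}]}]}}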

View file

@ -275,7 +275,7 @@ def create_otio_reference(clip_data, fps=None):
def create_otio_clip(clip_data):
from openpype.hosts.flame.api import MediaInfoFile
from openpype.hosts.flame.api import MediaInfoFile, TimeEffectMetadata
segment = clip_data["PySegment"]
@ -284,14 +284,31 @@ def create_otio_clip(clip_data):
media_timecode_start = media_info.start_frame
media_fps = media_info.fps
# Timewarp metadata
tw_data = TimeEffectMetadata(segment, logger=log).data
log.debug("__ tw_data: {}".format(tw_data))
# define first frame
first_frame = media_timecode_start or utils.get_frame_from_filename(
clip_data["fpath"]) or 0
file_first_frame = utils.get_frame_from_filename(
clip_data["fpath"])
if file_first_frame:
file_first_frame = int(file_first_frame)
first_frame = media_timecode_start or file_first_frame or 0
_clip_source_in = int(clip_data["source_in"])
_clip_source_out = int(clip_data["source_out"])
_clip_record_in = clip_data["record_in"]
_clip_record_out = clip_data["record_out"]
_clip_record_duration = int(clip_data["record_duration"])
log.debug("_ file_first_frame: {}".format(file_first_frame))
log.debug("_ first_frame: {}".format(first_frame))
log.debug("_ _clip_source_in: {}".format(_clip_source_in))
log.debug("_ _clip_source_out: {}".format(_clip_source_out))
log.debug("_ _clip_record_in: {}".format(_clip_record_in))
log.debug("_ _clip_record_out: {}".format(_clip_record_out))
# first solve if the reverse timing
speed = 1
if clip_data["source_in"] > clip_data["source_out"]:
@ -302,16 +319,28 @@ def create_otio_clip(clip_data):
source_in = _clip_source_in - int(first_frame)
source_out = _clip_source_out - int(first_frame)
log.debug("_ source_in: {}".format(source_in))
log.debug("_ source_out: {}".format(source_out))
if file_first_frame:
log.debug("_ file_source_in: {}".format(
file_first_frame + source_in))
log.debug("_ file_source_in: {}".format(
file_first_frame + source_out))
source_duration = (source_out - source_in + 1)
# secondly check if any change of speed
if source_duration != _clip_record_duration:
retime_speed = float(source_duration) / float(_clip_record_duration)
log.debug("_ retime_speed: {}".format(retime_speed))
log.debug("_ calculated speed: {}".format(retime_speed))
speed *= retime_speed
log.debug("_ source_in: {}".format(source_in))
log.debug("_ source_out: {}".format(source_out))
# get speed from metadata if available
if tw_data.get("speed"):
speed = tw_data["speed"]
log.debug("_ metadata speed: {}".format(speed))
log.debug("_ speed: {}".format(speed))
log.debug("_ source_duration: {}".format(source_duration))
log.debug("_ _clip_record_duration: {}".format(_clip_record_duration))

View file

@ -1,9 +1,9 @@
import pyblish.api
import openpype.lib as oplib
from openpype.pipeline import legacy_io
import openpype.hosts.flame.api as opfapi
from openpype.hosts.flame.otio import flame_export
from openpype.pipeline import legacy_io
from openpype.pipeline.create import get_subset_name
class CollecTimelineOTIO(pyblish.api.ContextPlugin):
@ -24,11 +24,14 @@ class CollecTimelineOTIO(pyblish.api.ContextPlugin):
sequence = opfapi.get_current_sequence(opfapi.CTX.selection)
# create subset name
subset_name = oplib.get_subset_name_with_asset_doc(
subset_name = get_subset_name(
family,
variant,
task_name,
asset_doc,
context.data["projectName"],
context.data["hostName"],
project_settings=context.data["project_settings"]
)
# adding otio timeline to context

View file

@ -8,6 +8,9 @@ import pyblish.api
import openpype.api
from openpype.hosts.flame import api as opfapi
from openpype.hosts.flame.api import MediaInfoFile
from openpype.pipeline.editorial import (
get_media_range_with_retimes
)
import flame
@ -47,7 +50,6 @@ class ExtractSubsetResources(openpype.api.Extractor):
export_presets_mapping = {}
def process(self, instance):
if not self.keep_original_representation:
# remove previous representation if not needed
instance.data["representations"] = []
@ -67,18 +69,60 @@ class ExtractSubsetResources(openpype.api.Extractor):
# get media source first frame
source_first_frame = instance.data["sourceFirstFrame"]
self.log.debug("_ frame_start: {}".format(frame_start))
self.log.debug("_ source_first_frame: {}".format(source_first_frame))
# get timeline in/out of segment
clip_in = instance.data["clipIn"]
clip_out = instance.data["clipOut"]
# get retimed attributes
retimed_data = self._get_retimed_attributes(instance)
# get individual keys
r_handle_start = retimed_data["handle_start"]
r_handle_end = retimed_data["handle_end"]
r_source_dur = retimed_data["source_duration"]
r_speed = retimed_data["speed"]
# get handles value - take only the max from both
handle_start = instance.data["handleStart"]
handle_end = instance.data["handleStart"]
handle_end = instance.data["handleEnd"]
handles = max(handle_start, handle_end)
include_handles = instance.data.get("includeHandles")
# get media source range with handles
source_start_handles = instance.data["sourceStartH"]
source_end_handles = instance.data["sourceEndH"]
# retime if needed
if r_speed != 1.0:
source_start_handles = (
instance.data["sourceStart"] - r_handle_start)
source_end_handles = (
source_start_handles
+ (r_source_dur - 1)
+ r_handle_start
+ r_handle_end
)
# get frame range with handles for representation range
frame_start_handle = frame_start - handle_start
repre_frame_start = frame_start_handle
if include_handles:
if r_speed == 1.0:
frame_start_handle = frame_start
else:
frame_start_handle = (
frame_start - handle_start) + r_handle_start
self.log.debug("_ frame_start_handle: {}".format(
frame_start_handle))
self.log.debug("_ repre_frame_start: {}".format(
repre_frame_start))
# calculate duration with handles
source_duration_handles = (
source_end_handles - source_start_handles) + 1
# create staging dir path
staging_dir = self.staging_dir(instance)
@ -93,6 +137,28 @@ class ExtractSubsetResources(openpype.api.Extractor):
}
export_presets.update(self.export_presets_mapping)
if not instance.data.get("versionData"):
instance.data["versionData"] = {}
# set versiondata if any retime
version_data = retimed_data.get("version_data")
self.log.debug("_ version_data: {}".format(version_data))
if version_data:
instance.data["versionData"].update(version_data)
if r_speed != 1.0:
instance.data["versionData"].update({
"frameStart": frame_start_handle,
"frameEnd": (
(frame_start_handle + source_duration_handles - 1)
- (r_handle_start + r_handle_end)
)
})
self.log.debug("_ i_version_data: {}".format(
instance.data["versionData"]
))
# loop all preset names and
for unique_name, preset_config in export_presets.items():
modify_xml_data = {}
@ -115,20 +181,10 @@ class ExtractSubsetResources(openpype.api.Extractor):
)
)
# get frame range with handles for representation range
frame_start_handle = frame_start - handle_start
# calculate duration with handles
source_duration_handles = (
source_end_handles - source_start_handles)
# define in/out marks
in_mark = (source_start_handles - source_first_frame) + 1
out_mark = in_mark + source_duration_handles
exporting_clip = None
name_patern_xml = "<name>_{}.".format(
unique_name)
if export_type == "Sequence Publish":
# change export clip to sequence
exporting_clip = flame.duplicate(sequence_clip)
@ -142,19 +198,25 @@ class ExtractSubsetResources(openpype.api.Extractor):
"<segment name>_<shot name>_{}.").format(
unique_name)
# change in/out marks to timeline in/out
# only for h264 with baked retime
in_mark = clip_in
out_mark = clip_out
out_mark = clip_out + 1
modify_xml_data.update({
"exportHandles": True,
"nbHandles": handles
})
else:
in_mark = (source_start_handles - source_first_frame) + 1
out_mark = in_mark + source_duration_handles
exporting_clip = self.import_clip(clip_path)
exporting_clip.name.set_value("{}_{}".format(
asset_name, segment_name))
# add xml tags modifications
modify_xml_data.update({
"exportHandles": True,
"nbHandles": handles,
"startFrame": frame_start,
# enum position low start from 0
"frameIndex": 0,
"startFrame": repre_frame_start,
"namePattern": name_patern_xml
})
@ -162,6 +224,9 @@ class ExtractSubsetResources(openpype.api.Extractor):
# add any xml overrides collected form segment.comment
modify_xml_data.update(instance.data["xml_overrides"])
self.log.debug("_ in_mark: {}".format(in_mark))
self.log.debug("_ out_mark: {}".format(out_mark))
export_kwargs = {}
# validate xml preset file is filled
if preset_file == "":
@ -196,9 +261,8 @@ class ExtractSubsetResources(openpype.api.Extractor):
"namePattern": "__thumbnail"
})
thumb_frame_number = int(in_mark + (
source_duration_handles / 2))
(out_mark - in_mark + 1) / 2))
self.log.debug("__ in_mark: {}".format(in_mark))
self.log.debug("__ thumb_frame_number: {}".format(
thumb_frame_number
))
@ -210,9 +274,6 @@ class ExtractSubsetResources(openpype.api.Extractor):
"out_mark": out_mark
})
self.log.debug("__ modify_xml_data: {}".format(
pformat(modify_xml_data)
))
preset_path = opfapi.modify_preset_file(
preset_orig_xml_path, staging_dir, modify_xml_data)
@ -281,9 +342,9 @@ class ExtractSubsetResources(openpype.api.Extractor):
# add frame range
if preset_config["representation_add_range"]:
representation_data.update({
"frameStart": frame_start_handle,
"frameStart": repre_frame_start,
"frameEnd": (
frame_start_handle + source_duration_handles),
repre_frame_start + source_duration_handles) - 1,
"fps": instance.data["fps"]
})
@ -300,8 +361,32 @@ class ExtractSubsetResources(openpype.api.Extractor):
# at the end remove the duplicated clip
flame.delete(exporting_clip)
self.log.debug("All representations: {}".format(
pformat(instance.data["representations"])))
def _get_retimed_attributes(self, instance):
handle_start = instance.data["handleStart"]
handle_end = instance.data["handleEnd"]
# get basic variables
otio_clip = instance.data["otioClip"]
# get available range trimmed with processed retimes
retimed_attributes = get_media_range_with_retimes(
otio_clip, handle_start, handle_end)
self.log.debug(
">> retimed_attributes: {}".format(retimed_attributes))
r_media_in = int(retimed_attributes["mediaIn"])
r_media_out = int(retimed_attributes["mediaOut"])
version_data = retimed_attributes.get("versionData")
return {
"version_data": version_data,
"handle_start": int(retimed_attributes["handleStart"]),
"handle_end": int(retimed_attributes["handleEnd"]),
"source_duration": (
(r_media_out - r_media_in) + 1
),
"speed": float(retimed_attributes["speed"])
}
def _should_skip(self, preset_config, clip_path, unique_name):
# get activating attributes
@ -313,8 +398,6 @@ class ExtractSubsetResources(openpype.api.Extractor):
unique_name, activated_preset, filter_path_regex
)
)
self.log.debug(
"__ clip_path: `{}`".format(clip_path))
# skip if preset is not activated
if not activated_preset:

View file

@ -3,9 +3,9 @@ import copy
from collections import OrderedDict
from pprint import pformat
import pyblish
from openpype.lib import get_workdir
import openpype.hosts.flame.api as opfapi
import openpype.pipeline as op_pipeline
from openpype.pipeline.workfile import get_workdir
class IntegrateBatchGroup(pyblish.api.InstancePlugin):
@ -324,7 +324,13 @@ class IntegrateBatchGroup(pyblish.api.InstancePlugin):
project_doc = instance.data["projectEntity"]
asset_entity = instance.data["assetEntity"]
anatomy = instance.context.data["anatomy"]
project_settings = instance.context.data["project_settings"]
return get_workdir(
project_doc, asset_entity, task_data["name"], "flame", anatomy
project_doc,
asset_entity,
task_data["name"],
"flame",
anatomy,
project_settings=project_settings
)

View file

@ -8,7 +8,7 @@ import contextlib
import pyblish.api
from openpype.api import Logger
from openpype.lib import Logger
from openpype.pipeline import (
register_loader_plugin_path,
register_creator_plugin_path,
@ -20,7 +20,7 @@ from openpype.pipeline import (
)
import openpype.hosts.fusion
log = Logger().get_logger(__name__)
log = Logger.get_logger(__name__)
HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.fusion.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")

View file

@ -17,9 +17,9 @@ class FusionIncrementCurrentFile(pyblish.api.ContextPlugin):
def process(self, context):
from openpype.lib import version_up
from openpype.action import get_errored_plugins_from_data
from openpype.pipeline.publish import get_errored_plugins_from_context
errored_plugins = get_errored_plugins_from_data(context)
errored_plugins = get_errored_plugins_from_context(context)
if any(plugin.__name__ == "FusionSubmitDeadline"
for plugin in errored_plugins):
raise RuntimeError("Skipping incrementing current file because "

View file

@ -1,6 +1,6 @@
import pyblish.api
from openpype import action
from openpype.pipeline.publish import RepairAction
class ValidateBackgroundDepth(pyblish.api.InstancePlugin):
@ -8,7 +8,7 @@ class ValidateBackgroundDepth(pyblish.api.InstancePlugin):
order = pyblish.api.ValidatorOrder
label = "Validate Background Depth 32 bit"
actions = [action.RepairAction]
actions = [RepairAction]
hosts = ["fusion"]
families = ["render"]
optional = True

View file

@ -1,6 +1,6 @@
import pyblish.api
from openpype import action
from openpype.pipeline.publish import RepairAction
class ValidateCreateFolderChecked(pyblish.api.InstancePlugin):
@ -11,7 +11,7 @@ class ValidateCreateFolderChecked(pyblish.api.InstancePlugin):
"""
order = pyblish.api.ValidatorOrder
actions = [action.RepairAction]
actions = [RepairAction]
label = "Validate Create Folder Checked"
families = ["render"]
hosts = ["fusion"]

View file

@ -15,7 +15,7 @@ from openpype.pipeline import (
from openpype.lib import version_up
from openpype.hosts.fusion import api
from openpype.hosts.fusion.api import lib
from openpype.lib.avalon_context import get_workdir_from_session
from openpype.pipeline.context_tools import get_workdir_from_session
log = logging.getLogger("Update Slap Comp")

View file

@ -1,14 +1,12 @@
import os
import sys
from openpype.api import Logger
from openpype.lib import Logger
from openpype.pipeline import (
install_host,
registered_host,
)
log = Logger().get_logger(__name__)
def main(env):
from openpype.hosts.fusion import api
@ -17,6 +15,7 @@ def main(env):
# activate resolve from pype
install_host(api)
log = Logger.get_logger(__name__)
log.info(f"Registered host: {registered_host()}")
menu.launch_openpype_menu()

View file

@ -14,7 +14,7 @@ from openpype.pipeline import (
legacy_io,
)
from openpype.hosts.fusion import api
from openpype.lib.avalon_context import get_workdir_from_session
from openpype.pipeline.context_tools import get_workdir_from_session
log = logging.getLogger("Fusion Switch Shot")

View file

@ -1,11 +1,10 @@
import os
from .addon import (
HARMONY_HOST_DIR,
HarmonyAddon,
)
def add_implementation_envs(env, _app):
"""Modify environments to contain all required for implementation."""
openharmony_path = os.path.join(
os.environ["OPENPYPE_REPOS_ROOT"], "openpype", "hosts",
"harmony", "vendor", "OpenHarmony"
)
# TODO check if is already set? What to do if is already set?
env["LIB_OPENHARMONY_PATH"] = openharmony_path
__all__ = (
"HARMONY_HOST_DIR",
"HarmonyAddon",
)

View file

@ -0,0 +1,24 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
HARMONY_HOST_DIR = os.path.dirname(os.path.abspath(__file__))
class HarmonyAddon(OpenPypeModule, IHostAddon):
name = "harmony"
host_name = "harmony"
def initialize(self, module_settings):
self.enabled = True
def add_implementation_envs(self, env, _app):
"""Modify environments to contain all required for implementation."""
openharmony_path = os.path.join(
HARMONY_HOST_DIR, "vendor", "OpenHarmony"
)
# TODO check if is already set? What to do if is already set?
env["LIB_OPENHARMONY_PATH"] = openharmony_path
def get_workfile_extensions(self):
return [".zip"]

View file

@ -14,14 +14,14 @@ from openpype.pipeline import (
)
from openpype.pipeline.load import get_outdated_containers
from openpype.pipeline.context_tools import get_current_project_asset
import openpype.hosts.harmony
from openpype.hosts.harmony import HARMONY_HOST_DIR
import openpype.hosts.harmony.api as harmony
log = logging.getLogger("openpype.hosts.harmony")
HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.harmony.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PLUGINS_DIR = os.path.join(HARMONY_HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")

View file

@ -2,8 +2,6 @@
import os
import shutil
from openpype.pipeline import HOST_WORKFILE_EXTENSIONS
from .lib import (
ProcessContext,
get_local_harmony_path,
@ -16,7 +14,7 @@ save_disabled = False
def file_extensions():
return HOST_WORKFILE_EXTENSIONS["harmony"]
return [".zip"]
def has_unsaved_changes():

View file

@ -1,9 +1,9 @@
# -*- coding: utf-8 -*-
"""Collect current workfile from Harmony."""
import pyblish.api
import os
import pyblish.api
from openpype.lib import get_subset_name_with_asset_doc
from openpype.pipeline.create import get_subset_name
class CollectWorkfile(pyblish.api.ContextPlugin):
@ -17,13 +17,14 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
"""Plugin entry point."""
family = "workfile"
basename = os.path.basename(context.data["currentFile"])
subset = get_subset_name_with_asset_doc(
subset = get_subset_name(
family,
"",
context.data["anatomyData"]["task"]["name"],
context.data["assetEntity"],
context.data["anatomyData"]["project"]["name"],
host_name=context.data["hostName"]
host_name=context.data["hostName"],
project_settings=context.data["project_settings"]
)
# Create instance

View file

@ -1,7 +1,7 @@
import os
import pyblish.api
from openpype.action import get_errored_plugins_from_data
from openpype.pipeline.publish import get_errored_plugins_from_context
from openpype.lib import version_up
import openpype.hosts.harmony.api as harmony
@ -19,7 +19,7 @@ class IncrementWorkfile(pyblish.api.InstancePlugin):
optional = True
def process(self, instance):
errored_plugins = get_errored_plugins_from_data(instance.context)
errored_plugins = get_errored_plugins_from_context(instance.context)
if errored_plugins:
raise RuntimeError(
"Skipping incrementing current file because publishing failed."

View file

@ -1,9 +1,12 @@
import os
import pyblish.api
import openpype.api
from openpype.pipeline import PublishXmlValidationError
import openpype.hosts.harmony.api as harmony
from openpype.pipeline.publish import (
ValidateContentsOrder,
PublishXmlValidationError,
)
class ValidateInstanceRepair(pyblish.api.Action):
@ -37,7 +40,7 @@ class ValidateInstance(pyblish.api.InstancePlugin):
label = "Validate Instance"
hosts = ["harmony"]
actions = [ValidateInstanceRepair]
order = openpype.api.ValidateContentsOrder
order = ValidateContentsOrder
def process(self, instance):
instance_asset = instance.data["asset"]

View file

@ -1,41 +1,10 @@
import os
import platform
from .addon import (
HIERO_ROOT_DIR,
HieroAddon,
)
def add_implementation_envs(env, _app):
# Add requirements to HIERO_PLUGIN_PATH
pype_root = os.environ["OPENPYPE_REPOS_ROOT"]
new_hiero_paths = [
os.path.join(pype_root, "openpype", "hosts", "hiero", "api", "startup")
]
old_hiero_path = env.get("HIERO_PLUGIN_PATH") or ""
for path in old_hiero_path.split(os.pathsep):
if not path:
continue
norm_path = os.path.normpath(path)
if norm_path not in new_hiero_paths:
new_hiero_paths.append(norm_path)
env["HIERO_PLUGIN_PATH"] = os.pathsep.join(new_hiero_paths)
env.pop("QT_AUTO_SCREEN_SCALE_FACTOR", None)
# Try to add QuickTime to PATH
quick_time_path = "C:/Program Files (x86)/QuickTime/QTSystem"
if platform.system() == "windows" and os.path.exists(quick_time_path):
path_value = env.get("PATH") or ""
path_paths = [
path
for path in path_value.split(os.pathsep)
if path
]
path_paths.append(quick_time_path)
env["PATH"] = os.pathsep.join(path_paths)
# Set default values if they are not already set via settings
defaults = {
"LOGLEVEL": "DEBUG"
}
for key, value in defaults.items():
if not env.get(key):
env[key] = value
__all__ = (
"HIERO_ROOT_DIR",
"HieroAddon",
)

View file

@ -0,0 +1,63 @@
import os
import platform
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
HIERO_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
class HieroAddon(OpenPypeModule, IHostAddon):
name = "hiero"
host_name = "hiero"
def initialize(self, module_settings):
self.enabled = True
def add_implementation_envs(self, env, _app):
# Add requirements to HIERO_PLUGIN_PATH
new_hiero_paths = [
os.path.join(HIERO_ROOT_DIR, "api", "startup")
]
old_hiero_path = env.get("HIERO_PLUGIN_PATH") or ""
for path in old_hiero_path.split(os.pathsep):
if not path:
continue
norm_path = os.path.normpath(path)
if norm_path not in new_hiero_paths:
new_hiero_paths.append(norm_path)
env["HIERO_PLUGIN_PATH"] = os.pathsep.join(new_hiero_paths)
env.pop("QT_AUTO_SCREEN_SCALE_FACTOR", None)
# Add vendor to PYTHONPATH
python_path = env["PYTHONPATH"]
python_path_parts = []
if python_path:
python_path_parts = python_path.split(os.pathsep)
vendor_path = os.path.join(HIERO_ROOT_DIR, "vendor")
python_path_parts.insert(0, vendor_path)
env["PYTHONPATH"] = os.pathsep.join(python_path_parts)
# Set default values if they are not already set via settings
defaults = {
"LOGLEVEL": "DEBUG"
}
for key, value in defaults.items():
if not env.get(key):
env[key] = value
# Try to add QuickTime to PATH
quick_time_path = "C:/Program Files (x86)/QuickTime/QTSystem"
if platform.system() == "windows" and os.path.exists(quick_time_path):
path_value = env.get("PATH") or ""
path_paths = [
path
for path in path_value.split(os.pathsep)
if path
]
path_paths.append(quick_time_path)
env["PATH"] = os.pathsep.join(path_paths)
def get_workfile_extensions(self):
return [".hrox"]

View file

@ -1,7 +1,6 @@
import os
import hiero.core.events
from openpype.api import Logger
from openpype.lib import register_event_callback
from openpype.lib import Logger, register_event_callback
from .lib import (
sync_avalon_data_to_workfile,
launch_workfiles_app,
@ -11,7 +10,7 @@ from .lib import (
from .tags import add_tags_to_workfile
from .menu import update_menu_task_label
log = Logger().get_logger(__name__)
log = Logger.get_logger(__name__)
def startupCompleted(event):

View file

@ -21,7 +21,7 @@ from openpype.client import (
)
from openpype.settings import get_anatomy_settings
from openpype.pipeline import legacy_io, Anatomy
from openpype.api import Logger
from openpype.lib import Logger
from . import tags
try:
@ -34,7 +34,7 @@ except ImportError:
# from opentimelineio import opentime
# from pprint import pformat
log = Logger().get_logger(__name__)
log = Logger.get_logger(__name__)
self = sys.modules[__name__]
self._has_been_setup = False

View file

@ -6,7 +6,7 @@ import contextlib
from collections import OrderedDict
from pyblish import api as pyblish
from openpype.api import Logger
from openpype.lib import Logger
from openpype.pipeline import (
schema,
register_creator_plugin_path,
@ -18,7 +18,7 @@ from openpype.pipeline import (
from openpype.tools.utils import host_tools
from . import lib, menu, events
log = Logger().get_logger(__name__)
log = Logger.get_logger(__name__)
# plugin paths
API_DIR = os.path.dirname(os.path.abspath(__file__))

View file

@ -9,11 +9,12 @@ from Qt import QtWidgets, QtCore
import qargparse
import openpype.api as openpype
from openpype.lib import Logger
from openpype.pipeline import LoaderPlugin, LegacyCreator
from openpype.pipeline.context_tools import get_current_project_asset
from . import lib
log = openpype.Logger().get_logger(__name__)
log = Logger.get_logger(__name__)
def load_stylesheet():

View file

@ -2,13 +2,12 @@ import os
import hiero
from openpype.api import Logger
from openpype.pipeline import HOST_WORKFILE_EXTENSIONS
log = Logger.get_logger(__name__)
def file_extensions():
return HOST_WORKFILE_EXTENSIONS["hiero"]
return [".hrox"]
def has_unsaved_changes():

View file

@ -318,10 +318,9 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
@staticmethod
def create_otio_time_range_from_timeline_item_data(track_item):
speed = track_item.playbackSpeed()
timeline = phiero.get_current_sequence()
frame_start = int(track_item.timelineIn())
frame_duration = int((track_item.duration() - 1) / speed)
frame_duration = int(track_item.duration())
fps = timeline.framerate().toFloat()
return hiero_export.create_otio_time_range(

View file

@ -0,0 +1,33 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
# Copyright 2007 Google Inc. All Rights Reserved.
__version__ = '3.20.1'

View file

@ -0,0 +1,26 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/any.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x19google/protobuf/any.proto\x12\x0fgoogle.protobuf\"&\n\x03\x41ny\x12\x10\n\x08type_url\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\x0c\x42v\n\x13\x63om.google.protobufB\x08\x41nyProtoP\x01Z,google.golang.org/protobuf/types/known/anypb\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.any_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\010AnyProtoP\001Z,google.golang.org/protobuf/types/known/anypb\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
_ANY._serialized_start=46
_ANY._serialized_end=84
# @@protoc_insertion_point(module_scope)

View file

@ -0,0 +1,32 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/api.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import source_context_pb2 as google_dot_protobuf_dot_source__context__pb2
from google.protobuf import type_pb2 as google_dot_protobuf_dot_type__pb2
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x19google/protobuf/api.proto\x12\x0fgoogle.protobuf\x1a$google/protobuf/source_context.proto\x1a\x1agoogle/protobuf/type.proto\"\x81\x02\n\x03\x41pi\x12\x0c\n\x04name\x18\x01 \x01(\t\x12(\n\x07methods\x18\x02 \x03(\x0b\x32\x17.google.protobuf.Method\x12(\n\x07options\x18\x03 \x03(\x0b\x32\x17.google.protobuf.Option\x12\x0f\n\x07version\x18\x04 \x01(\t\x12\x36\n\x0esource_context\x18\x05 \x01(\x0b\x32\x1e.google.protobuf.SourceContext\x12&\n\x06mixins\x18\x06 \x03(\x0b\x32\x16.google.protobuf.Mixin\x12\'\n\x06syntax\x18\x07 \x01(\x0e\x32\x17.google.protobuf.Syntax\"\xd5\x01\n\x06Method\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x18\n\x10request_type_url\x18\x02 \x01(\t\x12\x19\n\x11request_streaming\x18\x03 \x01(\x08\x12\x19\n\x11response_type_url\x18\x04 \x01(\t\x12\x1a\n\x12response_streaming\x18\x05 \x01(\x08\x12(\n\x07options\x18\x06 \x03(\x0b\x32\x17.google.protobuf.Option\x12\'\n\x06syntax\x18\x07 \x01(\x0e\x32\x17.google.protobuf.Syntax\"#\n\x05Mixin\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0c\n\x04root\x18\x02 \x01(\tBv\n\x13\x63om.google.protobufB\x08\x41piProtoP\x01Z,google.golang.org/protobuf/types/known/apipb\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.api_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\010ApiProtoP\001Z,google.golang.org/protobuf/types/known/apipb\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
_API._serialized_start=113
_API._serialized_end=370
_METHOD._serialized_start=373
_METHOD._serialized_end=586
_MIXIN._serialized_start=588
_MIXIN._serialized_end=623
# @@protoc_insertion_point(module_scope)

View file

@ -0,0 +1,35 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/compiler/plugin.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.protobuf import descriptor_pb2 as google_dot_protobuf_dot_descriptor__pb2
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n%google/protobuf/compiler/plugin.proto\x12\x18google.protobuf.compiler\x1a google/protobuf/descriptor.proto\"F\n\x07Version\x12\r\n\x05major\x18\x01 \x01(\x05\x12\r\n\x05minor\x18\x02 \x01(\x05\x12\r\n\x05patch\x18\x03 \x01(\x05\x12\x0e\n\x06suffix\x18\x04 \x01(\t\"\xba\x01\n\x14\x43odeGeneratorRequest\x12\x18\n\x10\x66ile_to_generate\x18\x01 \x03(\t\x12\x11\n\tparameter\x18\x02 \x01(\t\x12\x38\n\nproto_file\x18\x0f \x03(\x0b\x32$.google.protobuf.FileDescriptorProto\x12;\n\x10\x63ompiler_version\x18\x03 \x01(\x0b\x32!.google.protobuf.compiler.Version\"\xc1\x02\n\x15\x43odeGeneratorResponse\x12\r\n\x05\x65rror\x18\x01 \x01(\t\x12\x1a\n\x12supported_features\x18\x02 \x01(\x04\x12\x42\n\x04\x66ile\x18\x0f \x03(\x0b\x32\x34.google.protobuf.compiler.CodeGeneratorResponse.File\x1a\x7f\n\x04\x46ile\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x17\n\x0finsertion_point\x18\x02 \x01(\t\x12\x0f\n\x07\x63ontent\x18\x0f \x01(\t\x12?\n\x13generated_code_info\x18\x10 \x01(\x0b\x32\".google.protobuf.GeneratedCodeInfo\"8\n\x07\x46\x65\x61ture\x12\x10\n\x0c\x46\x45\x41TURE_NONE\x10\x00\x12\x1b\n\x17\x46\x45\x41TURE_PROTO3_OPTIONAL\x10\x01\x42W\n\x1c\x63om.google.protobuf.compilerB\x0cPluginProtosZ)google.golang.org/protobuf/types/pluginpb')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.compiler.plugin_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\034com.google.protobuf.compilerB\014PluginProtosZ)google.golang.org/protobuf/types/pluginpb'
_VERSION._serialized_start=101
_VERSION._serialized_end=171
_CODEGENERATORREQUEST._serialized_start=174
_CODEGENERATORREQUEST._serialized_end=360
_CODEGENERATORRESPONSE._serialized_start=363
_CODEGENERATORRESPONSE._serialized_end=684
_CODEGENERATORRESPONSE_FILE._serialized_start=499
_CODEGENERATORRESPONSE_FILE._serialized_end=626
_CODEGENERATORRESPONSE_FEATURE._serialized_start=628
_CODEGENERATORRESPONSE_FEATURE._serialized_end=684
# @@protoc_insertion_point(module_scope)

File diff suppressed because it is too large

View file

@ -0,0 +1,177 @@
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Provides a container for DescriptorProtos."""
__author__ = 'matthewtoia@google.com (Matt Toia)'
import warnings
class Error(Exception):
pass
class DescriptorDatabaseConflictingDefinitionError(Error):
"""Raised when a proto is added with the same name & different descriptor."""
class DescriptorDatabase(object):
"""A container accepting FileDescriptorProtos and maps DescriptorProtos."""
def __init__(self):
self._file_desc_protos_by_file = {}
self._file_desc_protos_by_symbol = {}
def Add(self, file_desc_proto):
"""Adds the FileDescriptorProto and its types to this database.
Args:
file_desc_proto: The FileDescriptorProto to add.
Raises:
DescriptorDatabaseConflictingDefinitionError: if an attempt is made to
add a proto with the same name but different definition than an
existing proto in the database.
"""
proto_name = file_desc_proto.name
if proto_name not in self._file_desc_protos_by_file:
self._file_desc_protos_by_file[proto_name] = file_desc_proto
elif self._file_desc_protos_by_file[proto_name] != file_desc_proto:
raise DescriptorDatabaseConflictingDefinitionError(
'%s already added, but with different descriptor.' % proto_name)
else:
return
# Add all the top-level descriptors to the index.
package = file_desc_proto.package
for message in file_desc_proto.message_type:
for name in _ExtractSymbols(message, package):
self._AddSymbol(name, file_desc_proto)
for enum in file_desc_proto.enum_type:
self._AddSymbol(('.'.join((package, enum.name))), file_desc_proto)
for enum_value in enum.value:
self._file_desc_protos_by_symbol[
'.'.join((package, enum_value.name))] = file_desc_proto
for extension in file_desc_proto.extension:
self._AddSymbol(('.'.join((package, extension.name))), file_desc_proto)
for service in file_desc_proto.service:
self._AddSymbol(('.'.join((package, service.name))), file_desc_proto)
def FindFileByName(self, name):
"""Finds the file descriptor proto by file name.
Typically the file name is a relative path ending to a .proto file. The
proto with the given name will have to have been added to this database
using the Add method or else an error will be raised.
Args:
name: The file name to find.
Returns:
The file descriptor proto matching the name.
Raises:
KeyError if no file by the given name was added.
"""
return self._file_desc_protos_by_file[name]
def FindFileContainingSymbol(self, symbol):
"""Finds the file descriptor proto containing the specified symbol.
The symbol should be a fully qualified name including the file descriptor's
package and any containing messages. Some examples:
'some.package.name.Message'
'some.package.name.Message.NestedEnum'
'some.package.name.Message.some_field'
The file descriptor proto containing the specified symbol must be added to
this database using the Add method or else an error will be raised.
Args:
symbol: The fully qualified symbol name.
Returns:
The file descriptor proto containing the symbol.
Raises:
KeyError if no file contains the specified symbol.
"""
try:
return self._file_desc_protos_by_symbol[symbol]
except KeyError:
# Fields, enum values, and nested extensions are not in
# _file_desc_protos_by_symbol. Try to find the top level
# descriptor. Non-existent nested symbol under a valid top level
# descriptor can also be found. The behavior is the same with
# protobuf C++.
top_level, _, _ = symbol.rpartition('.')
try:
return self._file_desc_protos_by_symbol[top_level]
except KeyError:
# Raise the original symbol as a KeyError for better diagnostics.
raise KeyError(symbol)
def FindFileContainingExtension(self, extendee_name, extension_number):
# TODO(jieluo): implement this API.
return None
def FindAllExtensionNumbers(self, extendee_name):
# TODO(jieluo): implement this API.
return []
def _AddSymbol(self, name, file_desc_proto):
if name in self._file_desc_protos_by_symbol:
warn_msg = ('Conflict register for file "' + file_desc_proto.name +
'": ' + name +
' is already defined in file "' +
self._file_desc_protos_by_symbol[name].name + '"')
warnings.warn(warn_msg, RuntimeWarning)
self._file_desc_protos_by_symbol[name] = file_desc_proto
def _ExtractSymbols(desc_proto, package):
"""Pulls out all the symbols from a descriptor proto.
Args:
desc_proto: The proto to extract symbols from.
package: The package containing the descriptor type.
Yields:
The fully qualified name found in the descriptor.
"""
message_name = package + '.' + desc_proto.name if package else desc_proto.name
yield message_name
for nested_type in desc_proto.nested_type:
for symbol in _ExtractSymbols(nested_type, message_name):
yield symbol
for enum_type in desc_proto.enum_type:
yield '.'.join((message_name, enum_type.name))

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

View file

@ -0,0 +1,26 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/duration.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1egoogle/protobuf/duration.proto\x12\x0fgoogle.protobuf\"*\n\x08\x44uration\x12\x0f\n\x07seconds\x18\x01 \x01(\x03\x12\r\n\x05nanos\x18\x02 \x01(\x05\x42\x83\x01\n\x13\x63om.google.protobufB\rDurationProtoP\x01Z1google.golang.org/protobuf/types/known/durationpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.duration_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\rDurationProtoP\001Z1google.golang.org/protobuf/types/known/durationpb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
_DURATION._serialized_start=51
_DURATION._serialized_end=93
# @@protoc_insertion_point(module_scope)
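For context, a brief usage sketch of the generated Duration message (field numbers per the serialized descriptor above: seconds=1, nanos=2). The ToTimedelta helper comes from protobuf's well-known-types support rather than from the stub itself, so treat that line as an assumption.

```python
import datetime
from google.protobuf import duration_pb2

d = duration_pb2.Duration(seconds=90, nanos=500000000)  # 90.5 seconds
data = d.SerializeToString()

parsed = duration_pb2.Duration()
parsed.ParseFromString(data)

# Well-known-type helper (assumed): convert to a datetime.timedelta.
assert parsed.ToTimedelta() == datetime.timedelta(seconds=90, microseconds=500000)
```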


@ -0,0 +1,26 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/empty.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1bgoogle/protobuf/empty.proto\x12\x0fgoogle.protobuf\"\x07\n\x05\x45mptyB}\n\x13\x63om.google.protobufB\nEmptyProtoP\x01Z.google.golang.org/protobuf/types/known/emptypb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.empty_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\nEmptyProtoP\001Z.google.golang.org/protobuf/types/known/emptypb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
_EMPTY._serialized_start=48
_EMPTY._serialized_end=55
# @@protoc_insertion_point(module_scope)
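Empty declares no fields; a one-line sketch of what that means in practice:

```python
from google.protobuf import empty_pb2

# An Empty message carries no data and serializes to zero bytes.
assert empty_pb2.Empty().SerializeToString() == b''
```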


@ -0,0 +1,26 @@
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: google/protobuf/field_mask.proto
"""Generated protocol buffer code."""
from google.protobuf.internal import builder as _builder
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n google/protobuf/field_mask.proto\x12\x0fgoogle.protobuf\"\x1a\n\tFieldMask\x12\r\n\x05paths\x18\x01 \x03(\tB\x85\x01\n\x13\x63om.google.protobufB\x0e\x46ieldMaskProtoP\x01Z2google.golang.org/protobuf/types/known/fieldmaskpb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3')
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, globals())
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.field_mask_pb2', globals())
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\016FieldMaskProtoP\001Z2google.golang.org/protobuf/types/known/fieldmaskpb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes'
_FIELDMASK._serialized_start=53
_FIELDMASK._serialized_end=79
# @@protoc_insertion_point(module_scope)
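A short sketch of the generated FieldMask message follows; the ToJsonString helper is part of protobuf's well-known-types support, not the stub above, so that call is an assumption.

```python
from google.protobuf import field_mask_pb2

# 'paths' is the single repeated string field declared in the descriptor above.
mask = field_mask_pb2.FieldMask(paths=['title', 'author.name'])
assert list(mask.paths) == ['title', 'author.name']

# Well-known-type helper (assumed): the JSON form is a comma-separated string.
assert mask.ToJsonString() == 'title,author.name'
```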


@ -0,0 +1,443 @@
#! /usr/bin/env python
#
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# https://developers.google.com/protocol-buffers/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Adds support for parameterized tests to Python's unittest TestCase class.
A parameterized test is a method in a test case that is invoked with different
argument tuples.
A simple example:
class AdditionExample(parameterized.TestCase):
@parameterized.parameters(
(1, 2, 3),
(4, 5, 9),
(1, 1, 3))
def testAddition(self, op1, op2, result):
self.assertEqual(result, op1 + op2)
Each invocation is a separate test case and properly isolated just
like a normal test method, with its own setUp/tearDown cycle. In the
example above, there are three separate testcases, one of which will
fail due to an assertion error (1 + 1 != 3).
Parameters for individual test cases can be tuples (with positional parameters)
or dictionaries (with named parameters):
class AdditionExample(parameterized.TestCase):
@parameterized.parameters(
{'op1': 1, 'op2': 2, 'result': 3},
{'op1': 4, 'op2': 5, 'result': 9},
)
def testAddition(self, op1, op2, result):
self.assertEqual(result, op1 + op2)
If a parameterized test fails, the error message will show the
original test name (which is modified internally) and the arguments
for the specific invocation, which are part of the string returned by
the shortDescription() method on test cases.
The id method of the test, used internally by the unittest framework,
is also modified to show the arguments. To make sure that test names
stay the same across several invocations, object representations like
>>> class Foo(object):
... pass
>>> repr(Foo())
'<__main__.Foo object at 0x23d8610>'
are turned into '<__main__.Foo>'. For even more descriptive names,
especially in test logs, you can use the named_parameters decorator. In
this case, only tuples are supported, and the first parameter has to
be a string (or an object that returns an apt name when converted via
str()):
class NamedExample(parameterized.TestCase):
@parameterized.named_parameters(
('Normal', 'aa', 'aaa', True),
('EmptyPrefix', '', 'abc', True),
('BothEmpty', '', '', True))
def testStartsWith(self, prefix, string, result):
self.assertEqual(result, string.startswith(prefix))
Named tests also have the benefit that they can be run individually
from the command line:
$ testmodule.py NamedExample.testStartsWithNormal
.
--------------------------------------------------------------------
Ran 1 test in 0.000s
OK
Parameterized Classes
=====================
If invocation arguments are shared across test methods in a single
TestCase class, instead of decorating all test methods
individually, the class itself can be decorated:
@parameterized.parameters(
(1, 2, 3),
(4, 5, 9))
class ArithmeticTest(parameterized.TestCase):
def testAdd(self, arg1, arg2, result):
self.assertEqual(arg1 + arg2, result)
def testSubtract(self, arg1, arg2, result):
self.assertEqual(result - arg1, arg2)
Inputs from Iterables
=====================
If parameters should be shared across several test cases, or are dynamically
created from other sources, a single non-tuple iterable can be passed into
the decorator. This iterable will be used to obtain the test cases:
class AdditionExample(parameterized.TestCase):
@parameterized.parameters(
(c.op1, c.op2, c.result) for c in testcases
)
def testAddition(self, op1, op2, result):
self.assertEqual(result, op1 + op2)
Single-Argument Test Methods
============================
If a test method takes only one argument, the single argument does not need to
be wrapped into a tuple:
class NegativeNumberExample(parameterized.TestCase):
@parameterized.parameters(
-1, -3, -4, -5
)
def testIsNegative(self, arg):
self.assertTrue(IsNegative(arg))
"""
__author__ = 'tmarek@google.com (Torsten Marek)'
import functools
import re
import types
import unittest
import uuid
try:
# Since python 3
import collections.abc as collections_abc
except ImportError:
# Won't work after python 3.8
import collections as collections_abc
ADDR_RE = re.compile(r'\<([a-zA-Z0-9_\-\.]+) object at 0x[a-fA-F0-9]+\>')
_SEPARATOR = uuid.uuid1().hex
_FIRST_ARG = object()
_ARGUMENT_REPR = object()
def _CleanRepr(obj):
return ADDR_RE.sub(r'<\1>', repr(obj))
# Helper function formerly from the unittest module, removed from it in
# Python 2.7.
def _StrClass(cls):
return '%s.%s' % (cls.__module__, cls.__name__)
def _NonStringIterable(obj):
return (isinstance(obj, collections_abc.Iterable) and
not isinstance(obj, str))
def _FormatParameterList(testcase_params):
if isinstance(testcase_params, collections_abc.Mapping):
return ', '.join('%s=%s' % (argname, _CleanRepr(value))
for argname, value in testcase_params.items())
elif _NonStringIterable(testcase_params):
return ', '.join(map(_CleanRepr, testcase_params))
else:
return _FormatParameterList((testcase_params,))
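# Editor's note (illustrative sketch, not part of the original module): the
# helper above renders one invocation's parameters for use in test names and
# docstrings, e.g.:
#   _FormatParameterList({'op1': 1, 'op2': 2})  ->  "op1=1, op2=2"
#   _FormatParameterList((1, 'a'))              ->  "1, 'a'"
#   _FormatParameterList(5)                     ->  "5"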
class _ParameterizedTestIter(object):
"""Callable and iterable class for producing new test cases."""
def __init__(self, test_method, testcases, naming_type):
"""Returns concrete test functions for a test and a list of parameters.
The naming_type is used to determine the name of the concrete
functions as reported by the unittest framework. If naming_type is
_FIRST_ARG, the testcases must be tuples, and the first element must
have a string representation that is a valid Python identifier.
Args:
test_method: The decorated test method.
testcases: (list of tuple/dict) A list of parameter
tuples/dicts for individual test invocations.
naming_type: The test naming type, either _FIRST_ARG or _ARGUMENT_REPR.
"""
self._test_method = test_method
self.testcases = testcases
self._naming_type = naming_type
def __call__(self, *args, **kwargs):
raise RuntimeError('You appear to be running a parameterized test case '
'without having inherited from parameterized.'
'TestCase. This is bad because none of '
'your test cases are actually being run.')
def __iter__(self):
test_method = self._test_method
naming_type = self._naming_type
def MakeBoundParamTest(testcase_params):
@functools.wraps(test_method)
def BoundParamTest(self):
if isinstance(testcase_params, collections_abc.Mapping):
test_method(self, **testcase_params)
elif _NonStringIterable(testcase_params):
test_method(self, *testcase_params)
else:
test_method(self, testcase_params)
if naming_type is _FIRST_ARG:
# Signal the metaclass that the name of the test function is unique
# and descriptive.
BoundParamTest.__x_use_name__ = True
BoundParamTest.__name__ += str(testcase_params[0])
testcase_params = testcase_params[1:]
elif naming_type is _ARGUMENT_REPR:
# __x_extra_id__ is used to pass naming information to the __new__
# method of TestGeneratorMetaclass.
# The metaclass will make sure to create a unique, but nondescriptive
# name for this test.
BoundParamTest.__x_extra_id__ = '(%s)' % (
_FormatParameterList(testcase_params),)
else:
raise RuntimeError('%s is not a valid naming type.' % (naming_type,))
BoundParamTest.__doc__ = '%s(%s)' % (
BoundParamTest.__name__, _FormatParameterList(testcase_params))
if test_method.__doc__:
BoundParamTest.__doc__ += '\n%s' % (test_method.__doc__,)
return BoundParamTest
return (MakeBoundParamTest(c) for c in self.testcases)
def _IsSingletonList(testcases):
"""True iff testcases contains only a single non-tuple element."""
return len(testcases) == 1 and not isinstance(testcases[0], tuple)
def _ModifyClass(class_object, testcases, naming_type):
assert not getattr(class_object, '_id_suffix', None), (
'Cannot add parameters to %s,'
' which already has parameterized methods.' % (class_object,))
class_object._id_suffix = id_suffix = {}
# We change the size of __dict__ while we iterate over it,
# which Python 3.x will complain about, so use copy().
for name, obj in class_object.__dict__.copy().items():
if (name.startswith(unittest.TestLoader.testMethodPrefix)
and isinstance(obj, types.FunctionType)):
delattr(class_object, name)
methods = {}
_UpdateClassDictForParamTestCase(
methods, id_suffix, name,
_ParameterizedTestIter(obj, testcases, naming_type))
for name, meth in methods.items():
setattr(class_object, name, meth)
def _ParameterDecorator(naming_type, testcases):
"""Implementation of the parameterization decorators.
Args:
naming_type: The naming type.
testcases: Testcase parameters.
Returns:
A function for modifying the decorated object.
"""
def _Apply(obj):
if isinstance(obj, type):
_ModifyClass(
obj,
list(testcases) if not isinstance(testcases, collections_abc.Sequence)
else testcases,
naming_type)
return obj
else:
return _ParameterizedTestIter(obj, testcases, naming_type)
if _IsSingletonList(testcases):
assert _NonStringIterable(testcases[0]), (
'Single parameter argument must be a non-string iterable')
testcases = testcases[0]
return _Apply
def parameters(*testcases): # pylint: disable=invalid-name
"""A decorator for creating parameterized tests.
See the module docstring for a usage example.
Args:
*testcases: Parameters for the decorated method, either a single
iterable, or a list of tuples/dicts/objects (for tests
with only one argument).
Returns:
A test generator to be handled by TestGeneratorMetaclass.
"""
return _ParameterDecorator(_ARGUMENT_REPR, testcases)
def named_parameters(*testcases): # pylint: disable=invalid-name
"""A decorator for creating parameterized tests.
See the module docstring for a usage example. The first element of
each parameter tuple should be a string and will be appended to the
name of the test method.
Args:
*testcases: Parameters for the decorated method, either a single
iterable, or a list of tuples.
Returns:
A test generator to be handled by TestGeneratorMetaclass.
"""
return _ParameterDecorator(_FIRST_ARG, testcases)
class TestGeneratorMetaclass(type):
"""Metaclass for test cases with test generators.
A test generator is an iterable in a testcase that produces callables. These
callables must be single-argument methods. These methods are injected into
the class namespace and the original iterable is removed. If the name of the
iterable conforms to the test pattern, the injected methods will be picked
up as tests by the unittest framework.
In general, it is supposed to be used in conjunction with the
parameters decorator.
"""
def __new__(mcs, class_name, bases, dct):
dct['_id_suffix'] = id_suffix = {}
for name, obj in dct.copy().items():
if (name.startswith(unittest.TestLoader.testMethodPrefix) and
_NonStringIterable(obj)):
iterator = iter(obj)
dct.pop(name)
_UpdateClassDictForParamTestCase(dct, id_suffix, name, iterator)
return type.__new__(mcs, class_name, bases, dct)
def _UpdateClassDictForParamTestCase(dct, id_suffix, name, iterator):
"""Adds individual test cases to a dictionary.
Args:
dct: The target dictionary.
id_suffix: The dictionary for mapping names to test IDs.
name: The original name of the test case.
iterator: The iterator generating the individual test cases.
"""
for idx, func in enumerate(iterator):
assert callable(func), 'Test generators must yield callables, got %r' % (
func,)
if getattr(func, '__x_use_name__', False):
new_name = func.__name__
else:
new_name = '%s%s%d' % (name, _SEPARATOR, idx)
assert new_name not in dct, (
'Name of parameterized test case "%s" not unique' % (new_name,))
dct[new_name] = func
id_suffix[new_name] = getattr(func, '__x_extra_id__', '')
class TestCase(unittest.TestCase, metaclass=TestGeneratorMetaclass):
"""Base class for test cases using the parameters decorator."""
def _OriginalName(self):
return self._testMethodName.split(_SEPARATOR)[0]
def __str__(self):
return '%s (%s)' % (self._OriginalName(), _StrClass(self.__class__))
def id(self): # pylint: disable=invalid-name
"""Returns the descriptive ID of the test.
This is used internally by the unittesting framework to get a name
for the test to be used in reports.
Returns:
The test id.
"""
return '%s.%s%s' % (_StrClass(self.__class__),
self._OriginalName(),
self._id_suffix.get(self._testMethodName, ''))
def CoopTestCase(other_base_class):
"""Returns a new base class with a cooperative metaclass base.
This enables the TestCase to be used in combination
with other base classes that have custom metaclasses, such as
mox.MoxTestBase.
Only works with metaclasses that do not override type.__new__.
Example:
import google3
import mox
from google3.testing.pybase import parameterized
class ExampleTest(parameterized.CoopTestCase(mox.MoxTestBase)):
...
Args:
other_base_class: (class) A test case base class.
Returns:
A new class object.
"""
metaclass = type(
'CoopMetaclass',
(other_base_class.__metaclass__,
TestGeneratorMetaclass), {})
return metaclass(
'CoopTestCase',
(other_base_class, TestCase), {})
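To round out the file, here is a minimal sketch of the decorator in use; the import path google.protobuf.internal._parameterized is an assumption about where this vendored module lives, and SquareTest is a made-up example class.

```python
import unittest
from google.protobuf.internal import _parameterized as parameterized  # assumed path

class SquareTest(parameterized.TestCase):

    @parameterized.parameters(
        (2, 4),
        (3, 9),
    )
    def testSquare(self, value, expected):
        # Each tuple becomes its own test case with an isolated setUp/tearDown.
        self.assertEqual(expected, value * value)

if __name__ == '__main__':
    unittest.main()
```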

Some files were not shown because too many files have changed in this diff.