Merge branch 'develop' into feature/OP-3878_Change-extractor-usage-in-houdini

Jakub Trllo 2022-09-22 09:45:24 +02:00
commit bc17820ef3
223 changed files with 5429 additions and 3383 deletions


@ -6,6 +6,8 @@ labels: bug
assignees: ''
---
**Running version**
[ex. 3.14.1-nightly.2]
**Describe the bug**
A clear and concise description of what the bug is.

.gitignore

@ -107,3 +107,6 @@ website/.docusaurus
mypy.ini
tools/run_eventserver.*
# Developer tools
tools/dev_*


@ -1,48 +1,105 @@
# Changelog
## [3.14.2-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.14.3-nightly.3](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.1...HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.2...HEAD)
**🆕 New features**
**🚀 Enhancements**
- Houdini: Publishing workfiles [\#3697](https://github.com/pypeclub/OpenPype/pull/3697)
- Maya: better logging in Maketx [\#3886](https://github.com/pypeclub/OpenPype/pull/3886)
- TrayPublisher: added persisting of last selected project [\#3871](https://github.com/pypeclub/OpenPype/pull/3871)
- TrayPublisher: added text filter on project name to Tray Publisher [\#3867](https://github.com/pypeclub/OpenPype/pull/3867)
- Github issues adding `running version` section [\#3864](https://github.com/pypeclub/OpenPype/pull/3864)
- Publisher: Increase size of main window [\#3862](https://github.com/pypeclub/OpenPype/pull/3862)
- Photoshop: synchronize image version with workfile [\#3854](https://github.com/pypeclub/OpenPype/pull/3854)
- General: Simple script for getting license information about used packages [\#3843](https://github.com/pypeclub/OpenPype/pull/3843)
- Houdini: Increment current file on workfile publish [\#3840](https://github.com/pypeclub/OpenPype/pull/3840)
- Publisher: Add new publisher to host tools [\#3833](https://github.com/pypeclub/OpenPype/pull/3833)
- General: Lock task workfiles while they are being worked on [\#3810](https://github.com/pypeclub/OpenPype/pull/3810)
- Maya: Workspace mel loaded from settings [\#3790](https://github.com/pypeclub/OpenPype/pull/3790)
**🐛 Bug fixes**
- Ftrack status fix typo prgoress -\> progress [\#3761](https://github.com/pypeclub/OpenPype/pull/3761)
- Settings: Add missing default settings [\#3870](https://github.com/pypeclub/OpenPype/pull/3870)
- General: Copy of workfile does not use 'copy' function but 'copyfile' [\#3869](https://github.com/pypeclub/OpenPype/pull/3869)
- Tray Publisher: skip plugin if otioTimeline is missing [\#3856](https://github.com/pypeclub/OpenPype/pull/3856)
- Maya: Extract Playblast fix textures + labelize viewport show settings [\#3852](https://github.com/pypeclub/OpenPype/pull/3852)
- Ftrack: Url validation does not require ftrackapp [\#3834](https://github.com/pypeclub/OpenPype/pull/3834)
- Maya+Ftrack: Change typo in family name `mayaascii` -\> `mayaAscii` [\#3820](https://github.com/pypeclub/OpenPype/pull/3820)
- Maya Deadline: Fix Tile Rendering by forcing integer pixel values [\#3758](https://github.com/pypeclub/OpenPype/pull/3758)
**🔀 Refactored code**
- Hiero: Use new Extractor location [\#3851](https://github.com/pypeclub/OpenPype/pull/3851)
- Maya: Remove old legacy \(ftrack\) plug-ins that are of no use anymore [\#3819](https://github.com/pypeclub/OpenPype/pull/3819)
- Nuke: Use new Extractor location [\#3799](https://github.com/pypeclub/OpenPype/pull/3799)
**Merged pull requests:**
- Maya: RenderSettings set default image format for V-Ray+Redshift to exr [\#3879](https://github.com/pypeclub/OpenPype/pull/3879)
- Remove lockfile during publish [\#3874](https://github.com/pypeclub/OpenPype/pull/3874)
## [3.14.2](https://github.com/pypeclub/OpenPype/tree/3.14.2) (2022-09-12)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.2-nightly.5...3.14.2)
**🆕 New features**
- Nuke: Build workfile by template [\#3763](https://github.com/pypeclub/OpenPype/pull/3763)
**🚀 Enhancements**
- Flame: Adding Creator's retimed shot and handles switch [\#3826](https://github.com/pypeclub/OpenPype/pull/3826)
- Flame: OpenPype submenu to batch and media manager [\#3825](https://github.com/pypeclub/OpenPype/pull/3825)
- General: Better pixmap scaling [\#3809](https://github.com/pypeclub/OpenPype/pull/3809)
- Photoshop: attempt to speed up ExtractImage [\#3793](https://github.com/pypeclub/OpenPype/pull/3793)
- SyncServer: Added cli commands for sync server [\#3765](https://github.com/pypeclub/OpenPype/pull/3765)
- Kitsu: Drop 'entities root' setting. [\#3739](https://github.com/pypeclub/OpenPype/pull/3739)
**🐛 Bug fixes**
- General: Fix Pattern access in client code [\#3828](https://github.com/pypeclub/OpenPype/pull/3828)
- Launcher: Skip opening last work file works for groups [\#3822](https://github.com/pypeclub/OpenPype/pull/3822)
- Maya: Publishing data key change [\#3811](https://github.com/pypeclub/OpenPype/pull/3811)
- Igniter: Fix status handling when version is already installed [\#3804](https://github.com/pypeclub/OpenPype/pull/3804)
- Resolve: Addon import is Python 2 compatible [\#3798](https://github.com/pypeclub/OpenPype/pull/3798)
- Hiero: retimed clip publishing is working [\#3792](https://github.com/pypeclub/OpenPype/pull/3792)
- Nuke: validate write node is not failing due to wrong type [\#3780](https://github.com/pypeclub/OpenPype/pull/3780)
- Fix - changed format of version string in pyproject.toml [\#3777](https://github.com/pypeclub/OpenPype/pull/3777)
- Ftrack status fix typo prgoress -\> progress [\#3761](https://github.com/pypeclub/OpenPype/pull/3761)
- Fix version resolution [\#3757](https://github.com/pypeclub/OpenPype/pull/3757)
**🔀 Refactored code**
- Photoshop: Use new Extractor location [\#3789](https://github.com/pypeclub/OpenPype/pull/3789)
- Blender: Use new Extractor location [\#3787](https://github.com/pypeclub/OpenPype/pull/3787)
- AfterEffects: Use new Extractor location [\#3784](https://github.com/pypeclub/OpenPype/pull/3784)
- Maya: Use new Extractor location [\#3775](https://github.com/pypeclub/OpenPype/pull/3775)
- General: Remove unused teshost [\#3773](https://github.com/pypeclub/OpenPype/pull/3773)
- General: Copied 'Extractor' plugin to publish pipeline [\#3771](https://github.com/pypeclub/OpenPype/pull/3771)
- General: Move queries of asset and representation links [\#3770](https://github.com/pypeclub/OpenPype/pull/3770)
- General: Move create project folders to pipeline [\#3768](https://github.com/pypeclub/OpenPype/pull/3768)
- General: Create project function moved to client code [\#3766](https://github.com/pypeclub/OpenPype/pull/3766)
- Maya: Refactor submit deadline to use AbstractSubmitDeadline [\#3759](https://github.com/pypeclub/OpenPype/pull/3759)
- General: Change publish template settings location [\#3755](https://github.com/pypeclub/OpenPype/pull/3755)
- General: Move hostdirname functionality into host [\#3749](https://github.com/pypeclub/OpenPype/pull/3749)
- Webpublisher: Webpublisher is used as addon [\#3740](https://github.com/pypeclub/OpenPype/pull/3740)
- Houdini: Define houdini as addon [\#3735](https://github.com/pypeclub/OpenPype/pull/3735)
- Fusion: Defined fusion as addon [\#3733](https://github.com/pypeclub/OpenPype/pull/3733)
- Flame: Defined flame as addon [\#3732](https://github.com/pypeclub/OpenPype/pull/3732)
- Resolve: Define resolve as addon [\#3727](https://github.com/pypeclub/OpenPype/pull/3727)
**Merged pull requests:**
- Standalone Publisher: Ignore empty labels, then still use name like other asset models [\#3779](https://github.com/pypeclub/OpenPype/pull/3779)
- Kitsu - sync\_all\_project - add list ignore\_projects [\#3776](https://github.com/pypeclub/OpenPype/pull/3776)
## [3.14.1](https://github.com/pypeclub/OpenPype/tree/3.14.1) (2022-08-30)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.1-nightly.4...3.14.1)
### 📖 Documentation
- Documentation: Few updates [\#3698](https://github.com/pypeclub/OpenPype/pull/3698)
- Documentation: Settings development [\#3660](https://github.com/pypeclub/OpenPype/pull/3660)
**🆕 New features**
- Webpublisher: change create flatten image into tri-state [\#3678](https://github.com/pypeclub/OpenPype/pull/3678)
- Blender: validators code correction with settings and defaults [\#3662](https://github.com/pypeclub/OpenPype/pull/3662)
**🚀 Enhancements**
- General: Thumbnail can use project roots [\#3750](https://github.com/pypeclub/OpenPype/pull/3750)
- Settings: Remove settings lock on tray exit [\#3720](https://github.com/pypeclub/OpenPype/pull/3720)
- General: Added helper getters to modules manager [\#3712](https://github.com/pypeclub/OpenPype/pull/3712)
- Unreal: Define unreal as module and use host class [\#3701](https://github.com/pypeclub/OpenPype/pull/3701)
- Settings: Lock settings UI session [\#3700](https://github.com/pypeclub/OpenPype/pull/3700)
- General: Benevolent context label collector [\#3686](https://github.com/pypeclub/OpenPype/pull/3686)
- Ftrack: Store ftrack entities on hierarchy integration to instances [\#3677](https://github.com/pypeclub/OpenPype/pull/3677)
- Blender: ops refresh manager after process events [\#3663](https://github.com/pypeclub/OpenPype/pull/3663)
**🐛 Bug fixes**
@ -50,19 +107,13 @@
- General: Smaller fixes of imports [\#3748](https://github.com/pypeclub/OpenPype/pull/3748)
- General: Logger tweaks [\#3741](https://github.com/pypeclub/OpenPype/pull/3741)
- Nuke: missing job dependency if multiple bake streams [\#3737](https://github.com/pypeclub/OpenPype/pull/3737)
- Nuke: color-space settings from anatomy is working [\#3721](https://github.com/pypeclub/OpenPype/pull/3721)
- Settings: Fix studio default anatomy save [\#3716](https://github.com/pypeclub/OpenPype/pull/3716)
- Maya: Use project name instead of project code [\#3709](https://github.com/pypeclub/OpenPype/pull/3709)
- Settings: Fix project overrides save [\#3708](https://github.com/pypeclub/OpenPype/pull/3708)
- Workfiles tool: Fix published workfile filtering [\#3704](https://github.com/pypeclub/OpenPype/pull/3704)
- PS, AE: Provide default variant value for workfile subset [\#3703](https://github.com/pypeclub/OpenPype/pull/3703)
- Flame: retime is working on clip publishing [\#3684](https://github.com/pypeclub/OpenPype/pull/3684)
- Webpublisher: added check for empty context [\#3682](https://github.com/pypeclub/OpenPype/pull/3682)
**🔀 Refactored code**
- General: Move delivery logic to pipeline [\#3751](https://github.com/pypeclub/OpenPype/pull/3751)
- General: Move publish utils to pipeline [\#3745](https://github.com/pypeclub/OpenPype/pull/3745)
- General: Host addons cleanup [\#3744](https://github.com/pypeclub/OpenPype/pull/3744)
- Webpublisher: Webpublisher is used as addon [\#3740](https://github.com/pypeclub/OpenPype/pull/3740)
- Photoshop: Defined photoshop as addon [\#3736](https://github.com/pypeclub/OpenPype/pull/3736)
- Harmony: Defined harmony as addon [\#3734](https://github.com/pypeclub/OpenPype/pull/3734)
- General: Module interfaces cleanup [\#3731](https://github.com/pypeclub/OpenPype/pull/3731)
@ -71,75 +122,15 @@
- AfterEffects: Define AfterEffects as module [\#3728](https://github.com/pypeclub/OpenPype/pull/3728)
- General: Replace PypeLogger with Logger [\#3725](https://github.com/pypeclub/OpenPype/pull/3725)
- Nuke: Define nuke as module [\#3724](https://github.com/pypeclub/OpenPype/pull/3724)
- General: Move subset name functionality [\#3723](https://github.com/pypeclub/OpenPype/pull/3723)
- General: Move creators plugin getter [\#3714](https://github.com/pypeclub/OpenPype/pull/3714)
- General: Move constants from lib to client [\#3713](https://github.com/pypeclub/OpenPype/pull/3713)
- Loader: Subset groups using client operations [\#3710](https://github.com/pypeclub/OpenPype/pull/3710)
- TVPaint: Defined as module [\#3707](https://github.com/pypeclub/OpenPype/pull/3707)
- StandalonePublisher: Define StandalonePublisher as module [\#3706](https://github.com/pypeclub/OpenPype/pull/3706)
- TrayPublisher: Define TrayPublisher as module [\#3705](https://github.com/pypeclub/OpenPype/pull/3705)
- General: Move context specific functions to context tools [\#3702](https://github.com/pypeclub/OpenPype/pull/3702)
**Merged pull requests:**
- Hiero: Define hiero as module [\#3717](https://github.com/pypeclub/OpenPype/pull/3717)
- Deadline: better logging for DL webservice failures [\#3694](https://github.com/pypeclub/OpenPype/pull/3694)
- Photoshop: resize saved images in ExtractReview for ffmpeg [\#3676](https://github.com/pypeclub/OpenPype/pull/3676)
## [3.14.0](https://github.com/pypeclub/OpenPype/tree/3.14.0) (2022-08-18)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.0-nightly.1...3.14.0)
**🚀 Enhancements**
- Ftrack: Additional component metadata [\#3685](https://github.com/pypeclub/OpenPype/pull/3685)
- Ftrack: Set task status on task creation in integrate hierarchy [\#3675](https://github.com/pypeclub/OpenPype/pull/3675)
- Maya: Disable rendering of all lights for render instances submitted through Deadline. [\#3661](https://github.com/pypeclub/OpenPype/pull/3661)
- General: Optimized OCIO configs [\#3650](https://github.com/pypeclub/OpenPype/pull/3650)
**🐛 Bug fixes**
- General: Switch from hero version to versioned works [\#3691](https://github.com/pypeclub/OpenPype/pull/3691)
- General: Fix finding of last version [\#3656](https://github.com/pypeclub/OpenPype/pull/3656)
- General: Extract Review can scale with pixel aspect ratio [\#3644](https://github.com/pypeclub/OpenPype/pull/3644)
- Maya: Refactor moved usage of CreateRender settings [\#3643](https://github.com/pypeclub/OpenPype/pull/3643)
- General: Hero version representations have full context [\#3638](https://github.com/pypeclub/OpenPype/pull/3638)
- Nuke: color settings for render write node is working now [\#3632](https://github.com/pypeclub/OpenPype/pull/3632)
- Maya: FBX support for update in reference loader [\#3631](https://github.com/pypeclub/OpenPype/pull/3631)
**🔀 Refactored code**
- General: Use client projects getter [\#3673](https://github.com/pypeclub/OpenPype/pull/3673)
- Resolve: Match folder structure to other hosts [\#3653](https://github.com/pypeclub/OpenPype/pull/3653)
- Maya: Hosts as modules [\#3647](https://github.com/pypeclub/OpenPype/pull/3647)
- TimersManager: Plugins are in timers manager module [\#3639](https://github.com/pypeclub/OpenPype/pull/3639)
- General: Move workfiles functions into pipeline [\#3637](https://github.com/pypeclub/OpenPype/pull/3637)
**Merged pull requests:**
- Deadline: Global job pre load is not Pype 2 compatible [\#3666](https://github.com/pypeclub/OpenPype/pull/3666)
- Maya: Remove unused get current renderer logic [\#3645](https://github.com/pypeclub/OpenPype/pull/3645)
- Kitsu|Fix: Movie project type fails & first loop children names [\#3636](https://github.com/pypeclub/OpenPype/pull/3636)
- fix the bug of failing to extract look when UDIMs format used in AiImage [\#3628](https://github.com/pypeclub/OpenPype/pull/3628)
## [3.13.0](https://github.com/pypeclub/OpenPype/tree/3.13.0) (2022-08-09)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.13.0-nightly.1...3.13.0)
**🚀 Enhancements**
- Editorial: Mix audio use side file for ffmpeg filters [\#3630](https://github.com/pypeclub/OpenPype/pull/3630)
**🐛 Bug fixes**
- Maya: fix aov separator in Redshift [\#3625](https://github.com/pypeclub/OpenPype/pull/3625)
- Fix for multi-version build on Mac [\#3622](https://github.com/pypeclub/OpenPype/pull/3622)
- Ftrack: Sync hierarchical attributes can handle new created entities [\#3621](https://github.com/pypeclub/OpenPype/pull/3621)
**🔀 Refactored code**
- General: Plugin settings handled by plugins [\#3623](https://github.com/pypeclub/OpenPype/pull/3623)
## [3.12.2](https://github.com/pypeclub/OpenPype/tree/3.12.2) (2022-07-27)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.2-nightly.4...3.12.2)


@ -41,7 +41,7 @@ It can be built and run on all common platforms. We develop and test on the foll
- **Linux**
- **Ubuntu** 20.04 LTS
- **Centos** 7
- **Mac OSX**
- **10.15** Catalina
- **11.1** Big Sur (using Rosetta2)
@ -287,6 +287,14 @@ To run tests, execute `.\tools\run_tests(.ps1|.sh)`.
**Note that it needs existing virtual environment.**
Developer tools
-------------
In case you wish to add your own tools to the `.\tools` folder without git tracking them, you can do so by naming them with the `dev_*` prefix (example: `dev_clear_pyc(.ps1|.sh)`).
## Contributors ✨
Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):


@ -63,7 +63,7 @@ class OpenPypeVersion(semver.VersionInfo):
"""
staging = False
path = None
_VERSION_REGEX = re.compile(r"(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$") # noqa: E501
_VERSION_REGEX = re.compile(r"(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?") # noqa: E501
_installed_version = None
def __init__(self, *args, **kwargs):
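The only functional change to `_VERSION_REGEX` above is dropping the trailing `$` anchor, so a version can now be matched even when extra text follows the version string. A minimal sketch of the difference, using trimmed-down patterns and a hypothetical input (the prerelease and build groups behave the same way):

import re

# assumption: simplified variants of _VERSION_REGEX, with and without
# the trailing anchor
ANCHORED = re.compile(r"(\d+)\.(\d+)\.(\d+)(?:-([0-9a-zA-Z.-]+))?$")
RELAXED = re.compile(r"(\d+)\.(\d+)\.(\d+)(?:-([0-9a-zA-Z.-]+))?")

text = "3.14.2-nightly.3 (staging)"
print(ANCHORED.search(text))          # None - trailing text blocks the anchor
print(RELAXED.search(text).groups())  # ('3', '14', '2', 'nightly.3')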


@ -388,8 +388,11 @@ class InstallDialog(QtWidgets.QDialog):
install_thread.start()
def _installation_finished(self):
# TODO we should find out why status can be set to 'None'?
# - 'InstallThread.run' should handle all cases so not sure where
# that comes from
status = self._install_thread.result()
if status >= 0:
if status is not None and status >= 0:
self._update_progress(100)
QtWidgets.QApplication.processEvents()
self.done(3)


@ -45,6 +45,12 @@ from .entities import (
get_workfile_info,
)
from .entity_links import (
get_linked_asset_ids,
get_linked_assets,
get_linked_representation_id,
)
from .operations import (
create_project,
)
@ -94,5 +100,9 @@ __all__ = (
"get_workfile_info",
"get_linked_asset_ids",
"get_linked_assets",
"get_linked_representation_id",
"create_project",
)


@ -14,6 +14,8 @@ from bson.objectid import ObjectId
from .mongo import get_project_database, get_project_connection
# 're.Pattern' is unavailable on Python 2, so derive the compiled
# pattern type at runtime
PatternType = type(re.compile(""))
def _prepare_fields(fields, required_fields=None):
if not fields:
@ -32,17 +34,37 @@ def _prepare_fields(fields, required_fields=None):
return output
def _convert_id(in_id):
def convert_id(in_id):
"""Helper function for conversion of id from string to ObjectId.
Args:
in_id (Union[str, ObjectId, Any]): Entity id that should be converted
to right type for queries.
Returns:
Union[ObjectId, Any]: Id converted to ObjectId, or the input value unchanged.
"""
if isinstance(in_id, six.string_types):
return ObjectId(in_id)
return in_id
def _convert_ids(in_ids):
def convert_ids(in_ids):
"""Helper function for conversion of ids from string to ObjectId.
Args:
in_ids (Iterable[Union[str, ObjectId, Any]]): List of entity ids that
should be converted to right type for queries.
Returns:
List[ObjectId]: Ids converted to ObjectId, without duplicates or None values.
"""
_output = set()
for in_id in in_ids:
if in_id is not None:
_output.add(_convert_id(in_id))
_output.add(convert_id(in_id))
return list(_output)
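Both helpers are now public (renamed from `_convert_id`/`_convert_ids`), so client code can rely on this behaviour; a short sketch with a hypothetical id:

from bson.objectid import ObjectId

doc_id = "507f1f77bcf86cd799439011"  # hypothetical 24-character hex id
assert convert_id(doc_id) == ObjectId(doc_id)
assert convert_id(None) is None  # non-string values pass through unchanged
# duplicates collapse via the set and None values are skipped
assert convert_ids([doc_id, doc_id, None]) == [ObjectId(doc_id)]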
@ -115,7 +137,7 @@ def get_asset_by_id(project_name, asset_id, fields=None):
None: Asset was not found by id.
"""
asset_id = _convert_id(asset_id)
asset_id = convert_id(asset_id)
if not asset_id:
return None
@ -196,7 +218,7 @@ def _get_assets(
query_filter = {"type": {"$in": asset_types}}
if asset_ids is not None:
asset_ids = _convert_ids(asset_ids)
asset_ids = convert_ids(asset_ids)
if not asset_ids:
return []
query_filter["_id"] = {"$in": asset_ids}
@ -207,7 +229,7 @@ def _get_assets(
query_filter["name"] = {"$in": list(asset_names)}
if parent_ids is not None:
parent_ids = _convert_ids(parent_ids)
parent_ids = convert_ids(parent_ids)
if not parent_ids:
return []
query_filter["data.visualParent"] = {"$in": parent_ids}
@ -307,7 +329,7 @@ def get_asset_ids_with_subsets(project_name, asset_ids=None):
"type": "subset"
}
if asset_ids is not None:
asset_ids = _convert_ids(asset_ids)
asset_ids = convert_ids(asset_ids)
if not asset_ids:
return []
subset_query["parent"] = {"$in": asset_ids}
@ -347,7 +369,7 @@ def get_subset_by_id(project_name, subset_id, fields=None):
Dict: Subset document which can be reduced to specified 'fields'.
"""
subset_id = _convert_id(subset_id)
subset_id = convert_id(subset_id)
if not subset_id:
return None
@ -374,7 +396,7 @@ def get_subset_by_name(project_name, subset_name, asset_id, fields=None):
if not subset_name:
return None
asset_id = _convert_id(asset_id)
asset_id = convert_id(asset_id)
if not asset_id:
return None
@ -428,13 +450,13 @@ def get_subsets(
query_filter = {"type": {"$in": subset_types}}
if asset_ids is not None:
asset_ids = _convert_ids(asset_ids)
asset_ids = convert_ids(asset_ids)
if not asset_ids:
return []
query_filter["parent"] = {"$in": asset_ids}
if subset_ids is not None:
subset_ids = _convert_ids(subset_ids)
subset_ids = convert_ids(subset_ids)
if not subset_ids:
return []
query_filter["_id"] = {"$in": subset_ids}
@ -449,7 +471,7 @@ def get_subsets(
for asset_id, names in names_by_asset_ids.items():
if asset_id and names:
or_query.append({
"parent": _convert_id(asset_id),
"parent": convert_id(asset_id),
"name": {"$in": list(names)}
})
if not or_query:
@ -510,7 +532,7 @@ def get_version_by_id(project_name, version_id, fields=None):
Dict: Version document which can be reduced to specified 'fields'.
"""
version_id = _convert_id(version_id)
version_id = convert_id(version_id)
if not version_id:
return None
@ -537,7 +559,7 @@ def get_version_by_name(project_name, version, subset_id, fields=None):
Dict: Version document which can be reduced to specified 'fields'.
"""
subset_id = _convert_id(subset_id)
subset_id = convert_id(subset_id)
if not subset_id:
return None
@ -567,7 +589,7 @@ def version_is_latest(project_name, version_id):
bool: True if is latest version from subset else False.
"""
version_id = _convert_id(version_id)
version_id = convert_id(version_id)
if not version_id:
return False
version_doc = get_version_by_id(
@ -610,13 +632,13 @@ def _get_versions(
query_filter = {"type": {"$in": version_types}}
if subset_ids is not None:
subset_ids = _convert_ids(subset_ids)
subset_ids = convert_ids(subset_ids)
if not subset_ids:
return []
query_filter["parent"] = {"$in": subset_ids}
if version_ids is not None:
version_ids = _convert_ids(version_ids)
version_ids = convert_ids(version_ids)
if not version_ids:
return []
query_filter["_id"] = {"$in": version_ids}
@ -690,7 +712,7 @@ def get_hero_version_by_subset_id(project_name, subset_id, fields=None):
Dict: Hero version entity data.
"""
subset_id = _convert_id(subset_id)
subset_id = convert_id(subset_id)
if not subset_id:
return None
@ -720,7 +742,7 @@ def get_hero_version_by_id(project_name, version_id, fields=None):
Dict: Hero version entity data.
"""
version_id = _convert_id(version_id)
version_id = convert_id(version_id)
if not version_id:
return None
@ -786,7 +808,7 @@ def get_output_link_versions(project_name, version_id, fields=None):
links for passed version.
"""
version_id = _convert_id(version_id)
version_id = convert_id(version_id)
if not version_id:
return []
@ -812,7 +834,7 @@ def get_last_versions(project_name, subset_ids, fields=None):
dict[ObjectId, int]: Key is subset id and value is last version name.
"""
subset_ids = _convert_ids(subset_ids)
subset_ids = convert_ids(subset_ids)
if not subset_ids:
return {}
@ -898,7 +920,7 @@ def get_last_version_by_subset_id(project_name, subset_id, fields=None):
Dict: Version document which can be reduced to specified 'fields'.
"""
subset_id = _convert_id(subset_id)
subset_id = convert_id(subset_id)
if not subset_id:
return None
@ -971,7 +993,7 @@ def get_representation_by_id(project_name, representation_id, fields=None):
"type": {"$in": repre_types}
}
if representation_id is not None:
query_filter["_id"] = _convert_id(representation_id)
query_filter["_id"] = convert_id(representation_id)
conn = get_project_connection(project_name)
@ -996,7 +1018,7 @@ def get_representation_by_name(
to specified 'fields'.
"""
version_id = _convert_id(version_id)
version_id = convert_id(version_id)
if not version_id or not representation_name:
return None
repre_types = ["representation", "archived_representations"]
@ -1034,11 +1056,11 @@ def _regex_filters(filters):
for key, value in filters.items():
regexes = []
a_values = []
if isinstance(value, re.Pattern):
if isinstance(value, PatternType):
regexes.append(value)
elif isinstance(value, (list, tuple, set)):
for item in value:
if isinstance(item, re.Pattern):
if isinstance(item, PatternType):
regexes.append(item)
else:
a_values.append(item)
@ -1089,7 +1111,7 @@ def _get_representations(
query_filter = {"type": {"$in": repre_types}}
if representation_ids is not None:
representation_ids = _convert_ids(representation_ids)
representation_ids = convert_ids(representation_ids)
if not representation_ids:
return default_output
query_filter["_id"] = {"$in": representation_ids}
@ -1100,7 +1122,7 @@ def _get_representations(
query_filter["name"] = {"$in": list(representation_names)}
if version_ids is not None:
version_ids = _convert_ids(version_ids)
version_ids = convert_ids(version_ids)
if not version_ids:
return default_output
query_filter["parent"] = {"$in": version_ids}
@ -1111,7 +1133,7 @@ def _get_representations(
for version_id, names in names_by_version_ids.items():
if version_id and names:
or_query.append({
"parent": _convert_id(version_id),
"parent": convert_id(version_id),
"name": {"$in": list(names)}
})
if not or_query:
@ -1174,7 +1196,7 @@ def get_representations(
as filter. Filter ignored if 'None' is passed.
version_ids (Iterable[str]): Subset ids used as parent filter. Filter
ignored if 'None' is passed.
context_filters (Dict[str, List[str, re.Pattern]]): Filter by
context_filters (Dict[str, List[str, PatternType]]): Filter by
representation context fields.
names_by_version_ids (dict[ObjectId, list[str]]): Complex filtering
using version ids and list of names under the version.
@ -1220,7 +1242,7 @@ def get_archived_representations(
as filter. Filter ignored if 'None' is passed.
version_ids (Iterable[str]): Subset ids used as parent filter. Filter
ignored if 'None' is passed.
context_filters (Dict[str, List[str, re.Pattern]]): Filter by
context_filters (Dict[str, List[str, PatternType]]): Filter by
representation context fields.
names_by_version_ids (dict[ObjectId, List[str]]): Complex filtering
using version ids and list of names under the version.
@ -1361,7 +1383,7 @@ def get_thumbnail_id_from_source(project_name, src_type, src_id):
if not src_type or not src_id:
return None
query_filter = {"_id": _convert_id(src_id)}
query_filter = {"_id": convert_id(src_id)}
conn = get_project_connection(project_name)
src_doc = conn.find_one(query_filter, {"data.thumbnail_id"})
@ -1388,7 +1410,7 @@ def get_thumbnails(project_name, thumbnail_ids, fields=None):
"""
if thumbnail_ids:
thumbnail_ids = _convert_ids(thumbnail_ids)
thumbnail_ids = convert_ids(thumbnail_ids)
if not thumbnail_ids:
return []
@ -1416,7 +1438,7 @@ def get_thumbnail(project_name, thumbnail_id, fields=None):
if not thumbnail_id:
return None
query_filter = {"type": "thumbnail", "_id": _convert_id(thumbnail_id)}
query_filter = {"type": "thumbnail", "_id": convert_id(thumbnail_id)}
conn = get_project_connection(project_name)
return conn.find_one(query_filter, _prepare_fields(fields))
@ -1444,7 +1466,7 @@ def get_workfile_info(
query_filter = {
"type": "workfile",
"parent": _convert_id(asset_id),
"parent": convert_id(asset_id),
"task_name": task_name,
"filename": filename
}


@ -0,0 +1,232 @@
from .mongo import get_project_connection
from .entities import (
get_assets,
get_asset_by_id,
get_representation_by_id,
convert_id,
)
def get_linked_asset_ids(project_name, asset_doc=None, asset_id=None):
"""Extract linked asset ids from asset document.
One of asset document or asset id must be passed.
Note:
Asset links currently work only from asset to assets.
Args:
project_name (str): Name of project where to look for queried entities.
asset_doc (dict): Asset document from DB. Can be used instead of
asset id.
asset_id (Union[ObjectId, str]): Asset id. Can be used instead of
asset document.
Returns:
List[Union[ObjectId, str]]: Asset ids of input links.
"""
output = []
if not asset_doc and not asset_id:
return output
if not asset_doc:
asset_doc = get_asset_by_id(
project_name, asset_id, fields=["data.inputLinks"]
)
input_links = asset_doc["data"].get("inputLinks")
if not input_links:
return output
for item in input_links:
# Backwards compatibility for "_id" key which was replaced with
# "id"
if "_id" in item:
link_id = item["_id"]
else:
link_id = item["id"]
output.append(link_id)
return output
def get_linked_assets(
project_name, asset_doc=None, asset_id=None, fields=None
):
"""Return linked assets based on passed asset document.
One of asset document or asset id must be passed.
Args:
project_name (str): Name of project where to look for queried entities.
asset_doc (Dict[str, Any]): Asset document from database.
asset_id (Union[ObjectId, str]): Asset id. Can be used instead of
asset document.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
Returns:
List[Dict[str, Any]]: Asset documents of input links for passed
asset doc.
"""
if not asset_doc:
if not asset_id:
return []
asset_doc = get_asset_by_id(
project_name,
asset_id,
fields=["data.inputLinks"]
)
if not asset_doc:
return []
link_ids = get_linked_asset_ids(project_name, asset_doc=asset_doc)
if not link_ids:
return []
return list(get_assets(project_name, asset_ids=link_ids, fields=fields))
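Both entry points accept either the asset document or the asset id; a hypothetical call with a made-up project name and id:

link_ids = get_linked_asset_ids(
    "my_project", asset_id="507f1f77bcf86cd799439011"
)
linked_docs = get_linked_assets(
    "my_project", asset_id="507f1f77bcf86cd799439011", fields=["name"]
)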
def get_linked_representation_id(
project_name, repre_doc=None, repre_id=None, link_type=None, max_depth=None
):
"""Returns list of linked ids of particular type (if provided).
One of representation document or representation id must be passed.
Note:
Representation links currently work only from a representation through
its version back to representations.
Args:
project_name (str): Name of project where look for links.
repre_doc (Dict[str, Any]): Representation document.
repre_id (Union[ObjectId, str]): Representation id.
link_type (str): Type of link (e.g. 'reference', ...).
max_depth (int): Limit recursion level. Default: 0
Returns:
List[ObjectId]: Linked representation ids.
"""
if repre_doc:
repre_id = repre_doc["_id"]
if repre_id:
repre_id = convert_id(repre_id)
if not repre_id and not repre_doc:
return []
version_id = None
if repre_doc:
version_id = repre_doc.get("parent")
if not version_id:
repre_doc = get_representation_by_id(
project_name, repre_id, fields=["parent"]
)
version_id = repre_doc["parent"]
if not version_id:
return []
if max_depth is None:
max_depth = 0
match = {
"_id": version_id,
"type": {"$in": ["version", "hero_version"]}
}
graph_lookup = {
"from": project_name,
"startWith": "$data.inputLinks.id",
"connectFromField": "data.inputLinks.id",
"connectToField": "_id",
"as": "outputs_recursive",
"depthField": "depth"
}
if max_depth != 0:
# We offset by -1 since 0 basically means no recursion
# but the recursion only happens after the initial lookup
# for outputs.
graph_lookup["maxDepth"] = max_depth - 1
query_pipeline = [
# Match
{"$match": match},
# Recursive graph lookup for inputs
{"$graphLookup": graph_lookup}
]
conn = get_project_connection(project_name)
result = conn.aggregate(query_pipeline)
referenced_version_ids = _process_referenced_pipeline_result(
result, link_type
)
if not referenced_version_ids:
return []
ref_ids = conn.distinct(
"_id",
filter={
"parent": {"$in": list(referenced_version_ids)},
"type": "representation"
}
)
return list(ref_ids)
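The `$graphLookup` stage walks `data.inputLinks.id` recursively from the matched version, so a single aggregation resolves nested links. A hypothetical call, assuming the project and representation exist:

# follow 'reference' links through versions and return ids of
# representations referenced up to two levels deep
linked_repre_ids = get_linked_representation_id(
    "my_project",
    repre_id="507f1f77bcf86cd799439011",
    link_type="reference",
    max_depth=2,
)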
def _process_referenced_pipeline_result(result, link_type):
"""Filters result from pipeline for particular link_type.
Pipeline cannot use link_type directly in a query.
Returns:
Set[ObjectId]: Version ids that contain links of the requested type.
"""
referenced_version_ids = set()
correctly_linked_ids = set()
for item in result:
input_links = item["data"].get("inputLinks")
if not input_links:
continue
_filter_input_links(
input_links,
link_type,
correctly_linked_ids
)
# 'outputs_recursive' comes back in random order, sort by depth
outputs_recursive = item.get("outputs_recursive")
if not outputs_recursive:
continue
for output in sorted(outputs_recursive, key=lambda o: o["depth"]):
output_links = output["data"].get("inputLinks")
if not output_links:
continue
# Leaf
if output["_id"] not in correctly_linked_ids:
continue
_filter_input_links(
output_links,
link_type,
correctly_linked_ids
)
referenced_version_ids.add(output["_id"])
return referenced_version_ids
def _filter_input_links(input_links, link_type, correctly_linked_ids):
for input_link in input_links:
if link_type and input_link["type"] != link_type:
continue
link_id = input_link.get("id") or input_link.get("_id")
if link_id is not None:
correctly_linked_ids.add(link_id)


@ -19,6 +19,7 @@ class AddLastWorkfileToLaunchArgs(PreLaunchHook):
"hiero",
"houdini",
"nukestudio",
"fusion",
"blender",
"photoshop",
"tvpaint",


@ -1,8 +1,6 @@
import os
from openpype.lib import (
PreLaunchHook,
create_workdir_extra_folders
)
from openpype.lib import PreLaunchHook
from openpype.pipeline.workfile import create_workdir_extra_folders
class AddLastWorkfileToLaunchArgs(PreLaunchHook):


@ -2,14 +2,18 @@ import os
import sys
import six
import openpype.api
from openpype.lib import (
get_ffmpeg_tool_path,
run_subprocess,
)
from openpype.pipeline import publish
from openpype.hosts.aftereffects.api import get_stub
class ExtractLocalRender(openpype.api.Extractor):
class ExtractLocalRender(publish.Extractor):
"""Render RenderQueue locally."""
order = openpype.api.Extractor.order - 0.47
order = publish.Extractor.order - 0.47
label = "Extract Local Render"
hosts = ["aftereffects"]
families = ["renderLocal", "render.local"]
@ -53,7 +57,7 @@ class ExtractLocalRender(openpype.api.Extractor):
instance.data["representations"] = [repre_data]
ffmpeg_path = openpype.lib.get_ffmpeg_tool_path("ffmpeg")
ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")
# Generate thumbnail.
thumbnail_path = os.path.join(staging_dir, "thumbnail.jpg")
@ -66,7 +70,7 @@ class ExtractLocalRender(openpype.api.Extractor):
]
self.log.debug("Thumbnail args:: {}".format(args))
try:
output = openpype.lib.run_subprocess(args)
output = run_subprocess(args)
except TypeError:
self.log.warning("Error in creating thumbnail")
six.reraise(*sys.exc_info())


@ -1,13 +1,13 @@
import pyblish.api
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.aftereffects.api import get_stub
class ExtractSaveScene(pyblish.api.ContextPlugin):
"""Save scene before extraction."""
order = openpype.api.Extractor.order - 0.48
order = publish.Extractor.order - 0.48
label = "Extract Save Scene"
hosts = ["aftereffects"]


@ -1,8 +1,8 @@
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.aftereffects.api import get_stub
class RemovePublishHighlight(openpype.api.Extractor):
class RemovePublishHighlight(publish.Extractor):
"""Clean utf characters which are not working in DL
Published compositions are marked with unicode icon which causes
@ -10,7 +10,7 @@ class RemovePublishHighlight(openpype.api.Extractor):
rendering, add it later back to avoid confusion.
"""
order = openpype.api.Extractor.order - 0.49 # just before save
order = publish.Extractor.order - 0.49 # just before save
label = "Clean render comp"
hosts = ["aftereffects"]
families = ["render.farm"]


@ -1,6 +1,19 @@
import os
import bpy
import pyblish.api
from openpype.pipeline import legacy_io
from openpype.hosts.blender.api import workio
class SaveWorkfileAction(pyblish.api.Action):
"""Save Workfile."""
label = "Save Workfile"
on = "failed"
icon = "save"
def process(self, context, plugin):
bpy.ops.wm.avalon_workfiles()
class CollectBlenderCurrentFile(pyblish.api.ContextPlugin):
@ -8,12 +21,52 @@ class CollectBlenderCurrentFile(pyblish.api.ContextPlugin):
order = pyblish.api.CollectorOrder - 0.5
label = "Blender Current File"
hosts = ['blender']
hosts = ["blender"]
actions = [SaveWorkfileAction]
def process(self, context):
"""Inject the current working file"""
current_file = bpy.data.filepath
context.data['currentFile'] = current_file
current_file = workio.current_file()
assert current_file != '', "Current file is empty. " \
"Save the file before continuing."
context.data["currentFile"] = current_file
assert current_file, (
"Current file is empty. Save the file before continuing."
)
folder, file = os.path.split(current_file)
filename, ext = os.path.splitext(file)
task = legacy_io.Session["AVALON_TASK"]
data = {}
# create instance
instance = context.create_instance(name=filename)
subset = "workfile" + task.capitalize()
data.update({
"subset": subset,
"asset": os.getenv("AVALON_ASSET", None),
"label": subset,
"publish": True,
"family": "workfile",
"families": ["workfile"],
"setMembers": [current_file],
"frameStart": bpy.context.scene.frame_start,
"frameEnd": bpy.context.scene.frame_end,
})
data["representations"] = [{
"name": ext.lstrip("."),
"ext": ext.lstrip("."),
"files": file,
"stagingDir": folder,
}]
instance.data.update(data)
self.log.info("Collected instance: {}".format(file))
self.log.info("Scene path: {}".format(current_file))
self.log.info("staging Dir: {}".format(folder))
self.log.info("subset: {}".format(subset))


@ -2,12 +2,12 @@ import os
import bpy
from openpype import api
from openpype.pipeline import publish
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
class ExtractABC(api.Extractor):
class ExtractABC(publish.Extractor):
"""Extract as ABC."""
label = "Extract ABC"


@ -2,10 +2,10 @@ import os
import bpy
import openpype.api
from openpype.pipeline import publish
class ExtractBlend(openpype.api.Extractor):
class ExtractBlend(publish.Extractor):
"""Extract a blend file."""
label = "Extract Blend"


@ -2,10 +2,10 @@ import os
import bpy
import openpype.api
from openpype.pipeline import publish
class ExtractBlendAnimation(openpype.api.Extractor):
class ExtractBlendAnimation(publish.Extractor):
"""Extract a blend file."""
label = "Extract Blend"


@ -2,11 +2,11 @@ import os
import bpy
from openpype import api
from openpype.pipeline import publish
from openpype.hosts.blender.api import plugin
class ExtractCamera(api.Extractor):
class ExtractCamera(publish.Extractor):
"""Extract as the camera as FBX."""
label = "Extract Camera"


@ -2,12 +2,12 @@ import os
import bpy
from openpype import api
from openpype.pipeline import publish
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
class ExtractFBX(api.Extractor):
class ExtractFBX(publish.Extractor):
"""Extract as FBX."""
label = "Extract FBX"


@ -5,12 +5,12 @@ import bpy
import bpy_extras
import bpy_extras.anim_utils
from openpype import api
from openpype.pipeline import publish
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
class ExtractAnimationFBX(api.Extractor):
class ExtractAnimationFBX(publish.Extractor):
"""Extract as animation."""
label = "Extract FBX"


@ -6,12 +6,12 @@ import bpy_extras
import bpy_extras.anim_utils
from openpype.client import get_representation_by_name
from openpype.pipeline import publish
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
import openpype.api
class ExtractLayout(openpype.api.Extractor):
class ExtractLayout(publish.Extractor):
"""Extract a layout."""
label = "Extract Layout"


@ -1,113 +0,0 @@
import os
import collections
from pprint import pformat
import pyblish.api
from openpype.client import (
get_subsets,
get_last_versions,
get_representations
)
from openpype.pipeline import legacy_io
class AppendCelactionAudio(pyblish.api.ContextPlugin):
label = "Colect Audio for publishing"
order = pyblish.api.CollectorOrder + 0.1
def process(self, context):
self.log.info('Collecting Audio Data')
asset_doc = context.data["assetEntity"]
# get all available representations
subsets = self.get_subsets(
asset_doc,
representations=["audio", "wav"]
)
self.log.info(f"subsets is: {pformat(subsets)}")
if not subsets.get("audioMain"):
raise AttributeError("`audioMain` subset does not exist")
reprs = subsets.get("audioMain", {}).get("representations", [])
self.log.info(f"reprs is: {pformat(reprs)}")
repr = next((r for r in reprs), None)
if not repr:
raise "Missing `audioMain` representation"
self.log.info(f"representation is: {repr}")
audio_file = repr.get('data', {}).get('path', "")
if os.path.exists(audio_file):
context.data["audioFile"] = audio_file
self.log.info(
'audio_file: {}, has been added to context'.format(audio_file))
else:
self.log.warning("Couldn't find any audio file on Ftrack.")
def get_subsets(self, asset_doc, representations):
"""
Query subsets with filter on name.
The method will return all found subsets and its defined version
and subsets. Version could be specified with number. Representation
can be filtered.
Arguments:
asset_doct (dict): Asset (shot) mongo document
representations (list): list for all representations
Returns:
dict: subsets with version and representations in keys
"""
# Query all subsets for asset
project_name = legacy_io.active_project()
subset_docs = get_subsets(
project_name, asset_ids=[asset_doc["_id"]], fields=["_id"]
)
# Collect all subset ids
subset_ids = [
subset_doc["_id"]
for subset_doc in subset_docs
]
# Check if we found anything
assert subset_ids, (
"No subsets found. Check correct filter. "
"Try this for start `r'.*'`: asset: `{}`"
).format(asset_doc["name"])
last_versions_by_subset_id = get_last_versions(
project_name, subset_ids, fields=["_id", "parent"]
)
version_docs_by_id = {}
for version_doc in last_versions_by_subset_id.values():
version_docs_by_id[version_doc["_id"]] = version_doc
repre_docs = get_representations(
project_name,
version_ids=version_docs_by_id.keys(),
representation_names=representations
)
repre_docs_by_version_id = collections.defaultdict(list)
for repre_doc in repre_docs:
version_id = repre_doc["parent"]
repre_docs_by_version_id[version_id].append(repre_doc)
output_dict = {}
for version_id, repre_docs in repre_docs_by_version_id.items():
version_doc = version_docs_by_id[version_id]
subset_id = version_doc["parent"]
subset_doc = last_versions_by_subset_id[subset_id]
# Store queried docs by subset name
output_dict[subset_doc["name"]] = {
"representations": repre_docs,
"version": version_doc
}
return output_dict


@ -51,7 +51,8 @@ from .pipeline import (
)
from .menu import (
FlameMenuProjectConnect,
FlameMenuTimeline
FlameMenuTimeline,
FlameMenuUniversal
)
from .plugin import (
Creator,
@ -131,6 +132,7 @@ __all__ = [
# menu
"FlameMenuProjectConnect",
"FlameMenuTimeline",
"FlameMenuUniversal",
# plugin
"Creator",


@ -201,3 +201,53 @@ class FlameMenuTimeline(_FlameMenuApp):
if self.flame:
self.flame.execute_shortcut('Rescan Python Hooks')
self.log.info('Rescan Python Hooks')
class FlameMenuUniversal(_FlameMenuApp):
# flameMenuProjectconnect app takes care of the preferences dialog as well
def __init__(self, framework):
_FlameMenuApp.__init__(self, framework)
def __getattr__(self, name):
def method(*args, **kwargs):
project = self.dynamic_menu_data.get(name)
if project:
self.link_project(project)
return method
def build_menu(self):
if not self.flame:
return []
menu = deepcopy(self.menu)
menu['actions'].append({
"name": "Load...",
"execute": lambda x: self.tools_helper.show_loader()
})
menu['actions'].append({
"name": "Manage...",
"execute": lambda x: self.tools_helper.show_scene_inventory()
})
menu['actions'].append({
"name": "Library...",
"execute": lambda x: self.tools_helper.show_library_loader()
})
return menu
def refresh(self, *args, **kwargs):
self.rescan()
def rescan(self, *args, **kwargs):
if not self.flame:
try:
import flame
self.flame = flame
except ImportError:
self.flame = None
if self.flame:
self.flame.execute_shortcut('Rescan Python Hooks')
self.log.info('Rescan Python Hooks')


@ -361,6 +361,8 @@ class PublishableClip:
index_from_segment_default = False
use_shot_name_default = False
include_handles_default = False
retimed_handles_default = True
retimed_framerange_default = True
def __init__(self, segment, **kwargs):
self.rename_index = kwargs["rename_index"]
@ -496,6 +498,14 @@ class PublishableClip:
"audio", {}).get("value") or False
self.include_handles = self.ui_inputs.get(
"includeHandles", {}).get("value") or self.include_handles_default
self.retimed_handles = (
self.ui_inputs.get("retimedHandles", {}).get("value")
or self.retimed_handles_default
)
self.retimed_framerange = (
self.ui_inputs.get("retimedFramerange", {}).get("value")
or self.retimed_framerange_default
)
# build subset name from layer name
if self.subset_name == "[ track name ]":


@ -22,6 +22,7 @@ class FlamePrelaunch(PreLaunchHook):
in environment var FLAME_SCRIPT_DIR.
"""
app_groups = ["flame"]
permissions = 0o777
wtc_script_path = os.path.join(
opflame.HOST_DIR, "api", "scripts", "wiretap_com.py")
@ -38,6 +39,7 @@ class FlamePrelaunch(PreLaunchHook):
"""Hook entry method."""
project_doc = self.data["project_doc"]
project_name = project_doc["name"]
volume_name = _env.get("FLAME_WIRETAP_VOLUME")
# get image io
project_anatomy = self.data["anatomy"]
@ -81,7 +83,7 @@ class FlamePrelaunch(PreLaunchHook):
data_to_script = {
# from settings
"host_name": _env.get("FLAME_WIRETAP_HOSTNAME") or hostname,
"volume_name": _env.get("FLAME_WIRETAP_VOLUME"),
"volume_name": volume_name,
"group_name": _env.get("FLAME_WIRETAP_GROUP"),
"color_policy": str(imageio_flame["project"]["colourPolicy"]),
@ -99,8 +101,41 @@ class FlamePrelaunch(PreLaunchHook):
app_arguments = self._get_launch_arguments(data_to_script)
# fix project data permission issue
self._fix_permissions(project_name, volume_name)
self.launch_context.launch_args.extend(app_arguments)
def _fix_permissions(self, project_name, volume_name):
"""Work around for project data permissions
Reported issue: when a project is created locally on one machine,
it is impossible to migrate it to another machine. Autodesk Flame
creates some unmanageable files which need to be opened up to 0o777.
Args:
project_name (str): project name
volume_name (str): studio volume
"""
dirs_to_modify = [
"/usr/discreet/project/{}".format(project_name),
"/opt/Autodesk/clip/{}/{}.prj".format(volume_name, project_name),
"/usr/discreet/clip/{}/{}.prj".format(volume_name, project_name)
]
for dirtm in dirs_to_modify:
for root, dirs, files in os.walk(dirtm):
try:
for name in set(dirs) | set(files):
path = os.path.join(root, name)
st = os.stat(path)
# 'st_mode' includes file-type bits, compare permission bits only
if (st.st_mode & 0o777) != self.permissions:
os.chmod(path, self.permissions)
except OSError as exc:
self.log.warning("Not able to open files: {}".format(exc))
def _get_flame_fps(self, fps_num):
fps_table = {
float(23.976): "23.976 fps",


@ -276,6 +276,22 @@ class CreateShotClip(opfapi.Creator):
"target": "tag",
"toolTip": "By default handles are excluded", # noqa
"order": 3
},
"retimedHandles": {
"value": True,
"type": "QCheckBox",
"label": "Retimed handles",
"target": "tag",
"toolTip": "By default handles are retimed.", # noqa
"order": 4
},
"retimedFramerange": {
"value": True,
"type": "QCheckBox",
"label": "Retimed framerange",
"target": "tag",
"toolTip": "By default framerange is retimed.", # noqa
"order": 5
}
}
}


@ -131,6 +131,10 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
"fps": self.fps,
"workfileFrameStart": workfile_start,
"sourceFirstFrame": int(first_frame),
"notRetimedHandles": (
not marker_data.get("retimedHandles")),
"notRetimedFramerange": (
not marker_data.get("retimedFramerange")),
"path": file_path,
"flameAddTasks": self.add_tasks,
"tasks": {


@ -90,26 +90,38 @@ class ExtractSubsetResources(openpype.api.Extractor):
handle_end = instance.data["handleEnd"]
handles = max(handle_start, handle_end)
include_handles = instance.data.get("includeHandles")
retimed_handles = instance.data.get("retimedHandles")
# get media source range with handles
source_start_handles = instance.data["sourceStartH"]
source_end_handles = instance.data["sourceEndH"]
# retime if needed
if r_speed != 1.0:
source_start_handles = (
instance.data["sourceStart"] - r_handle_start)
source_end_handles = (
source_start_handles
+ (r_source_dur - 1)
+ r_handle_start
+ r_handle_end
)
if retimed_handles:
# handles are retimed
source_start_handles = (
instance.data["sourceStart"] - r_handle_start)
source_end_handles = (
source_start_handles
+ (r_source_dur - 1)
+ r_handle_start
+ r_handle_end
)
else:
# handles are not retimed
source_end_handles = (
source_start_handles
+ (r_source_dur - 1)
+ handle_start
+ handle_end
)
# get frame range with handles for representation range
frame_start_handle = frame_start - handle_start
repre_frame_start = frame_start_handle
if include_handles:
if r_speed == 1.0:
if r_speed == 1.0 or not retimed_handles:
frame_start_handle = frame_start
else:
frame_start_handle = (


@ -73,6 +73,8 @@ def load_apps():
opfapi.FlameMenuProjectConnect(opfapi.CTX.app_framework))
opfapi.CTX.flame_apps.append(
opfapi.FlameMenuTimeline(opfapi.CTX.app_framework))
opfapi.CTX.flame_apps.append(
opfapi.FlameMenuUniversal(opfapi.CTX.app_framework))
opfapi.CTX.app_framework.log.info("Apps are loaded")
@ -191,3 +193,27 @@ def get_timeline_custom_ui_actions():
openpype_install()
return _build_app_menu("FlameMenuTimeline")
def get_batch_custom_ui_actions():
"""Hook to create submenu in batch
Returns:
list: menu object
"""
# install openpype and the host
openpype_install()
return _build_app_menu("FlameMenuUniversal")
def get_media_panel_custom_ui_actions():
"""Hook to create submenu in desktop
Returns:
list: menu object
"""
# install openpype and the host
openpype_install()
return _build_app_menu("FlameMenuUniversal")


@ -0,0 +1,10 @@
from .addon import (
FusionAddon,
FUSION_HOST_DIR,
)
__all__ = (
"FusionAddon",
"FUSION_HOST_DIR",
)


@ -0,0 +1,32 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
FUSION_HOST_DIR = os.path.dirname(os.path.abspath(__file__))
class FusionAddon(OpenPypeModule, IHostAddon):
name = "fusion"
host_name = "fusion"
def initialize(self, module_settings):
self.enabled = True
def get_launch_hook_paths(self, app):
if app.host_name != self.host_name:
return []
return [
os.path.join(FUSION_HOST_DIR, "hooks")
]
def add_implementation_envs(self, env, _app):
# Set default values if are not already set via settings
defaults = {
"OPENPYPE_LOG_NO_COLORS": "Yes"
}
for key, value in defaults.items():
if not env.get(key):
env[key] = value
def get_workfile_extensions(self):
return [".comp"]


@ -5,10 +5,7 @@ from .pipeline import (
ls,
imprint_container,
parse_container,
get_current_comp,
comp_lock_and_undo_chunk
parse_container
)
from .workio import (
@ -22,8 +19,10 @@ from .workio import (
from .lib import (
maintained_selection,
get_additional_data,
update_frame_range
update_frame_range,
set_asset_framerange,
get_current_comp,
comp_lock_and_undo_chunk
)
from .menu import launch_openpype_menu
@ -38,9 +37,6 @@ __all__ = [
"imprint_container",
"parse_container",
"get_current_comp",
"comp_lock_and_undo_chunk",
# workio
"open_file",
"save_file",
@ -51,8 +47,10 @@ __all__ = [
# lib
"maintained_selection",
"get_additional_data",
"update_frame_range",
"set_asset_framerange",
"get_current_comp",
"comp_lock_and_undo_chunk",
# menu
"launch_openpype_menu",


@ -5,6 +5,7 @@ import contextlib
from Qt import QtGui
from openpype.lib import Logger
from openpype.client import (
get_asset_by_name,
get_subset_by_name,
@ -17,13 +18,14 @@ from openpype.pipeline import (
switch_container,
legacy_io,
)
from .pipeline import get_current_comp, comp_lock_and_undo_chunk
from openpype.pipeline.context_tools import get_current_project_asset
self = sys.modules[__name__]
self._project = None
def update_frame_range(start, end, comp=None, set_render_range=True):
def update_frame_range(start, end, comp=None, set_render_range=True,
handle_start=0, handle_end=0):
"""Set Fusion comp's start and end frame range
Args:
@ -32,6 +34,8 @@ def update_frame_range(start, end, comp=None, set_render_range=True):
comp (object, Optional): comp object from fusion
set_render_range (bool, Optional): When True this will also set the
composition's render start and end frame.
handle_start (float, int, Optional): frame handles before start frame
handle_end (float, int, Optional): frame handles after end frame
Returns:
None
@ -41,11 +45,16 @@ def update_frame_range(start, end, comp=None, set_render_range=True):
if not comp:
comp = get_current_comp()
# Convert any potential none type to zero
handle_start = handle_start or 0
handle_end = handle_end or 0
attrs = {
"COMPN_GlobalStart": start,
"COMPN_GlobalEnd": end
"COMPN_GlobalStart": start - handle_start,
"COMPN_GlobalEnd": end + handle_end
}
# set frame range
if set_render_range:
attrs.update({
"COMPN_RenderStart": start,
@ -56,24 +65,116 @@ def update_frame_range(start, end, comp=None, set_render_range=True):
comp.SetAttrs(attrs)
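With the new parameters the handles widen only the comp's global range, while the render range keeps the trimmed start and end. A hypothetical call:

# hypothetical: asset range 1001-1100 with 8 frame handles results in
# a global range of 993-1108 while the render range stays 1001-1100
update_frame_range(1001, 1100, set_render_range=True,
                   handle_start=8, handle_end=8)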
def get_additional_data(container):
"""Get Fusion related data for the container
def set_asset_framerange():
"""Set Comp's frame range based on current asset"""
asset_doc = get_current_project_asset()
start = asset_doc["data"]["frameStart"]
end = asset_doc["data"]["frameEnd"]
handle_start = asset_doc["data"]["handleStart"]
handle_end = asset_doc["data"]["handleEnd"]
update_frame_range(start, end, set_render_range=True,
handle_start=handle_start,
handle_end=handle_end)
Args:
container(dict): the container found by the ls() function
Returns:
dict
def set_asset_resolution():
"""Set Comp's resolution width x height default based on current asset"""
asset_doc = get_current_project_asset()
width = asset_doc["data"]["resolutionWidth"]
height = asset_doc["data"]["resolutionHeight"]
comp = get_current_comp()
print("Setting comp frame format resolution to {}x{}".format(width,
height))
comp.SetPrefs({
"Comp.FrameFormat.Width": width,
"Comp.FrameFormat.Height": height,
})
def validate_comp_prefs(comp=None):
"""Validate current comp defaults with asset settings.
Validates fps, resolutionWidth, resolutionHeight, aspectRatio.
This does *not* validate frameStart, frameEnd, handleStart and handleEnd.
"""
tool = container["_tool"]
tile_color = tool.TileColor
if tile_color is None:
return {}
if comp is None:
comp = get_current_comp()
return {"color": QtGui.QColor.fromRgbF(tile_color["R"],
tile_color["G"],
tile_color["B"])}
log = Logger.get_logger("validate_comp_prefs")
fields = [
"name",
"data.fps",
"data.resolutionWidth",
"data.resolutionHeight",
"data.pixelAspect"
]
asset_doc = get_current_project_asset(fields=fields)
asset_data = asset_doc["data"]
comp_frame_format_prefs = comp.GetPrefs("Comp.FrameFormat")
# Pixel aspect ratio in Fusion is set as AspectX and AspectY so we convert
# the data to something that is more sensible to Fusion
asset_data["pixelAspectX"] = asset_data.pop("pixelAspect")
asset_data["pixelAspectY"] = 1.0
validations = [
("fps", "Rate", "FPS"),
("resolutionWidth", "Width", "Resolution Width"),
("resolutionHeight", "Height", "Resolution Height"),
("pixelAspectX", "AspectX", "Pixel Aspect Ratio X"),
("pixelAspectY", "AspectY", "Pixel Aspect Ratio Y")
]
invalid = []
for key, comp_key, label in validations:
asset_value = asset_data[key]
comp_value = comp_frame_format_prefs.get(comp_key)
if asset_value != comp_value:
# todo: Actually show dialog to user instead of just logging
log.warning(
"Comp {pref} {value} does not match asset "
"'{asset_name}' {pref} {asset_value}".format(
pref=label,
value=comp_value,
asset_name=asset_doc["name"],
asset_value=asset_value)
)
invalid_msg = "{} {} should be {}".format(label,
comp_value,
asset_value)
invalid.append(invalid_msg)
if invalid:
def _on_repair():
attributes = dict()
for key, comp_key, _label in validations:
value = asset_data[key]
comp_key_full = "Comp.FrameFormat.{}".format(comp_key)
attributes[comp_key_full] = value
comp.SetPrefs(attributes)
from . import menu
from openpype.widgets import popup
from openpype.style import load_stylesheet
dialog = popup.Popup(parent=menu.menu)
dialog.setWindowTitle("Fusion comp has invalid configuration")
msg = "Comp preferences mismatches '{}'".format(asset_doc["name"])
msg += "\n" + "\n".join(invalid)
dialog.setMessage(msg)
dialog.setButtonText("Repair")
dialog.on_clicked.connect(_on_repair)
dialog.show()
dialog.raise_()
dialog.activateWindow()
dialog.setStyleSheet(load_stylesheet())
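A sketch of how the check might be triggered; with no argument it falls back to the currently active comp:

# hypothetical usage: compare the active comp against the asset settings
# and pop the repair dialog when they differ
validate_comp_prefs()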
def switch_item(container,
@ -195,3 +296,21 @@ def get_frame_path(path):
padding = 4 # default Fusion padding
return filename, padding, ext
def get_current_comp():
"""Hack to get current comp in this session"""
fusion = getattr(sys.modules["__main__"], "fusion", None)
return fusion.CurrentComp if fusion else None
@contextlib.contextmanager
def comp_lock_and_undo_chunk(comp, undo_queue_name="Script CMD"):
"""Lock comp and open an undo chunk during the context"""
try:
comp.Lock()
comp.StartUndo(undo_queue_name)
yield
finally:
comp.Unlock()
comp.EndUndo()
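# Usage sketch for the two helpers above -- a hypothetical example, not part
# of the module API. Everything inside the context manager becomes a single
# undo step while the comp is locked against UI updates:
#
# comp = get_current_comp()
# with comp_lock_and_undo_chunk(comp, "Add saver"):
#     comp.AddTool("Saver")  # AddTool creates a tool by its registry ID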

View file

@@ -1,43 +1,25 @@
import os
import sys
from Qt import QtWidgets, QtCore
from Qt import QtWidgets, QtCore, QtGui
from openpype import style
from openpype.tools.utils import host_tools
from openpype.style import load_stylesheet
from openpype.lib import register_event_callback
from openpype.hosts.fusion.scripts import (
set_rendermode,
duplicate_with_inputs
)
from openpype.hosts.fusion.api.lib import (
set_asset_framerange,
set_asset_resolution
)
from openpype.pipeline import legacy_io
from openpype.resources import get_openpype_icon_filepath
from .pulse import FusionPulse
def load_stylesheet():
path = os.path.join(os.path.dirname(__file__), "menu_style.qss")
if not os.path.exists(path):
print("Unable to load stylesheet, file not found in resources")
return ""
with open(path, "r") as file_stream:
stylesheet = file_stream.read()
return stylesheet
class Spacer(QtWidgets.QWidget):
def __init__(self, height, *args, **kwargs):
super(Spacer, self).__init__(*args, **kwargs)
self.setFixedHeight(height)
real_spacer = QtWidgets.QWidget(self)
real_spacer.setObjectName("Spacer")
real_spacer.setFixedHeight(height)
layout = QtWidgets.QVBoxLayout(self)
layout.setContentsMargins(0, 0, 0, 0)
layout.addWidget(real_spacer)
self.setLayout(layout)
self = sys.modules[__name__]
self.menu = None
class OpenPypeMenu(QtWidgets.QWidget):
@@ -46,15 +28,29 @@ class OpenPypeMenu(QtWidgets.QWidget):
self.setObjectName("OpenPypeMenu")
icon_path = get_openpype_icon_filepath()
icon = QtGui.QIcon(icon_path)
self.setWindowIcon(icon)
self.setWindowFlags(
QtCore.Qt.Window
| QtCore.Qt.CustomizeWindowHint
| QtCore.Qt.WindowTitleHint
| QtCore.Qt.WindowMinimizeButtonHint
| QtCore.Qt.WindowCloseButtonHint
| QtCore.Qt.WindowStaysOnTopHint
)
self.render_mode_widget = None
self.setWindowTitle("OpenPype")
asset_label = QtWidgets.QLabel("Context", self)
asset_label.setStyleSheet("""QLabel {
font-size: 14px;
font-weight: 600;
color: #5f9fb8;
}""")
asset_label.setAlignment(QtCore.Qt.AlignHCenter)
workfiles_btn = QtWidgets.QPushButton("Workfiles...", self)
create_btn = QtWidgets.QPushButton("Create...", self)
publish_btn = QtWidgets.QPushButton("Publish...", self)
@@ -62,77 +58,107 @@ class OpenPypeMenu(QtWidgets.QWidget):
manager_btn = QtWidgets.QPushButton("Manage...", self)
libload_btn = QtWidgets.QPushButton("Library...", self)
rendermode_btn = QtWidgets.QPushButton("Set render mode...", self)
set_framerange_btn = QtWidgets.QPushButton("Set Frame Range", self)
set_resolution_btn = QtWidgets.QPushButton("Set Resolution", self)
duplicate_with_inputs_btn = QtWidgets.QPushButton(
"Duplicate with input connections", self
)
reset_resolution_btn = QtWidgets.QPushButton(
"Reset Resolution from project", self
)
layout = QtWidgets.QVBoxLayout(self)
layout.setContentsMargins(10, 20, 10, 20)
layout.addWidget(asset_label)
layout.addSpacing(20)
layout.addWidget(workfiles_btn)
layout.addSpacing(20)
layout.addWidget(create_btn)
layout.addWidget(publish_btn)
layout.addWidget(load_btn)
layout.addWidget(publish_btn)
layout.addWidget(manager_btn)
layout.addWidget(Spacer(15, self))
layout.addSpacing(20)
layout.addWidget(libload_btn)
layout.addWidget(Spacer(15, self))
layout.addSpacing(20)
layout.addWidget(set_framerange_btn)
layout.addWidget(set_resolution_btn)
layout.addWidget(rendermode_btn)
layout.addWidget(Spacer(15, self))
layout.addSpacing(20)
layout.addWidget(duplicate_with_inputs_btn)
layout.addWidget(reset_resolution_btn)
self.setLayout(layout)
# Store reference so we can update the label
self.asset_label = asset_label
workfiles_btn.clicked.connect(self.on_workfile_clicked)
create_btn.clicked.connect(self.on_create_clicked)
publish_btn.clicked.connect(self.on_publish_clicked)
load_btn.clicked.connect(self.on_load_clicked)
manager_btn.clicked.connect(self.on_manager_clicked)
libload_btn.clicked.connect(self.on_libload_clicked)
rendermode_btn.clicked.connect(self.on_rendernode_clicked)
rendermode_btn.clicked.connect(self.on_rendermode_clicked)
duplicate_with_inputs_btn.clicked.connect(
self.on_duplicate_with_inputs_clicked)
reset_resolution_btn.clicked.connect(self.on_reset_resolution_clicked)
set_resolution_btn.clicked.connect(self.on_set_resolution_clicked)
set_framerange_btn.clicked.connect(self.on_set_framerange_clicked)
self._callbacks = []
self.register_callback("taskChanged", self.on_task_changed)
self.on_task_changed()
# Force close current process if Fusion is closed
self._pulse = FusionPulse(parent=self)
self._pulse.start()
def on_task_changed(self):
# Update current context label
label = legacy_io.Session["AVALON_ASSET"]
self.asset_label.setText(label)
def register_callback(self, name, fn):
# Create a wrapper callback that we only store
# for as long as we want it to persist as callback
def _callback(*args):
fn()
self._callbacks.append(_callback)
register_event_callback(name, _callback)
def deregister_all_callbacks(self):
self._callbacks[:] = []
def on_workfile_clicked(self):
print("Clicked Workfile")
host_tools.show_workfiles()
def on_create_clicked(self):
print("Clicked Create")
host_tools.show_creator()
def on_publish_clicked(self):
print("Clicked Publish")
host_tools.show_publish()
def on_load_clicked(self):
print("Clicked Load")
host_tools.show_loader(use_context=True)
def on_manager_clicked(self):
print("Clicked Manager")
host_tools.show_scene_inventory()
def on_libload_clicked(self):
print("Clicked Library")
host_tools.show_library_loader()
def on_rendernode_clicked(self):
print("Clicked Set Render Mode")
def on_rendermode_clicked(self):
if self.render_mode_widget is None:
window = set_rendermode.SetRenderMode()
window.setStyleSheet(style.load_stylesheet())
window.setStyleSheet(load_stylesheet())
window.show()
self.render_mode_widget = window
else:
@@ -140,15 +166,16 @@ class OpenPypeMenu(QtWidgets.QWidget):
def on_duplicate_with_inputs_clicked(self):
duplicate_with_inputs.duplicate_with_input_connections()
print("Clicked Set Colorspace")
def on_reset_resolution_clicked(self):
print("Clicked Reset Resolution")
def on_set_resolution_clicked(self):
set_asset_resolution()
def on_set_framerange_clicked(self):
set_asset_framerange()
def launch_openpype_menu():
app = QtWidgets.QApplication(sys.argv)
app.setQuitOnLastWindowClosed(False)
pype_menu = OpenPypeMenu()
@@ -156,5 +183,8 @@ def launch_openpype_menu():
pype_menu.setStyleSheet(stylesheet)
pype_menu.show()
self.menu = pype_menu
sys.exit(app.exec_())
result = app.exec_()
print("Shutting down..")
sys.exit(result)

View file

@@ -1,29 +0,0 @@
QWidget {
background-color: #282828;
border-radius: 3;
}
QPushButton {
border: 1px solid #090909;
background-color: #201f1f;
color: #ffffff;
padding: 5;
}
QPushButton:focus {
background-color: "#171717";
color: #d0d0d0;
}
QPushButton:hover {
background-color: "#171717";
color: #e64b3d;
}
#OpenPypeMenu {
border: 1px solid #fef9ef;
}
#Spacer {
background-color: #282828;
}

View file

@@ -2,13 +2,14 @@
Basic avalon integration
"""
import os
import sys
import logging
import contextlib
import pyblish.api
from openpype.lib import Logger
from openpype.lib import (
Logger,
register_event_callback
)
from openpype.pipeline import (
register_loader_plugin_path,
register_creator_plugin_path,
@@ -18,12 +19,19 @@ from openpype.pipeline import (
deregister_inventory_action_path,
AVALON_CONTAINER_ID,
)
import openpype.hosts.fusion
from openpype.pipeline.load import any_outdated_containers
from openpype.hosts.fusion import FUSION_HOST_DIR
from openpype.tools.utils import host_tools
from .lib import (
get_current_comp,
comp_lock_and_undo_chunk,
validate_comp_prefs
)
log = Logger.get_logger(__name__)
HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.fusion.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PLUGINS_DIR = os.path.join(FUSION_HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
@@ -40,7 +48,7 @@ class CompLogHandler(logging.Handler):
def install():
"""Install fusion-specific functionality of avalon-core.
"""Install fusion-specific functionality of OpenPype.
This is where you install menus and register families, data
and loaders into fusion.
@@ -52,7 +60,7 @@ def install():
"""
# Remove all handlers associated with the root logger object, because
# that one sometimes logs as "warnings" incorrectly.
# that one always logs as "warnings" incorrectly.
for handler in logging.root.handlers[:]:
logging.root.removeHandler(handler)
@@ -64,8 +72,6 @@ def install():
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
log.info("openpype.hosts.fusion installed")
pyblish.api.register_host("fusion")
pyblish.api.register_plugin_path(PUBLISH_PATH)
log.info("Registering Fusion plug-ins..")
@@ -78,6 +84,11 @@ def install():
"instanceToggled", on_pyblish_instance_toggled
)
# Fusion integration currently does not attach to direct callbacks of
# the application. So we use workfile callbacks to allow similar behavior
# on save and open
register_event_callback("workfile.open.after", on_after_open)
def uninstall():
"""Uninstall all that was installed
@@ -103,7 +114,7 @@ def uninstall():
)
def on_pyblish_instance_toggled(instance, new_value, old_value):
def on_pyblish_instance_toggled(instance, old_value, new_value):
"""Toggle saver tool passthrough states on instance toggles."""
comp = instance.context.data.get("currentComp")
if not comp:
@@ -126,6 +137,38 @@ def on_pyblish_instance_toggled(instance, new_value, old_value):
tool.SetAttrs({"TOOLB_PassThrough": passthrough})
def on_after_open(_event):
comp = get_current_comp()
validate_comp_prefs(comp)
if any_outdated_containers():
log.warning("Scene has outdated content.")
# Find OpenPype menu to attach to
from . import menu
def _on_show_scene_inventory():
# ensure that comp is active
frame = comp.CurrentFrame
if not frame:
print("Comp is closed, skipping show scene inventory")
return
frame.ActivateFrame() # raise comp window
host_tools.show_scene_inventory()
from openpype.widgets import popup
from openpype.style import load_stylesheet
dialog = popup.Popup(parent=menu.menu)
dialog.setWindowTitle("Fusion comp has outdated content")
dialog.setMessage("There are outdated containers in "
"your Fusion comp.")
dialog.on_clicked.connect(_on_show_scene_inventory)
dialog.show()
dialog.raise_()
dialog.activateWindow()
dialog.setStyleSheet(load_stylesheet())
def ls():
"""List containers from active Fusion scene
@@ -211,19 +254,3 @@ def parse_container(tool):
return container
def get_current_comp():
"""Hack to get current comp in this session"""
fusion = getattr(sys.modules["__main__"], "fusion", None)
return fusion.CurrentComp if fusion else None
@contextlib.contextmanager
def comp_lock_and_undo_chunk(comp, undo_queue_name="Script CMD"):
"""Lock comp and open an undo chunk during the context"""
try:
comp.Lock()
comp.StartUndo(undo_queue_name)
yield
finally:
comp.Unlock()
comp.EndUndo()

View file

@@ -0,0 +1,60 @@
import os
import sys
from Qt import QtCore
class PulseThread(QtCore.QThread):
no_response = QtCore.Signal()
def __init__(self, parent=None):
super(PulseThread, self).__init__(parent=parent)
def run(self):
app = getattr(sys.modules["__main__"], "app", None)
# Interval in milliseconds
interval = int(os.environ.get("OPENPYPE_FUSION_PULSE_INTERVAL", 1000))
while True:
if self.isInterruptionRequested():
return
try:
app.Test()
except Exception:
self.no_response.emit()
self.msleep(interval)
class FusionPulse(QtCore.QObject):
"""A Timer that checks whether host app is still alive.
This checks whether the Fusion process is still active at a certain
interval. This is useful due to how Fusion runs its scripts. Each script
runs in its own environment and process (a `fusionscript` process each).
If Fusion would go down and we have a UI process running at the same time
then it can happen that the `fusionscript.exe` will remain running in the
background in limbo due to e.g. a Qt interface's QApplication that keeps
running infinitely.
Warning:
When the host is not detected this will automatically exit
the current process.
"""
def __init__(self, parent=None):
super(FusionPulse, self).__init__(parent=parent)
self._thread = PulseThread(parent=self)
self._thread.no_response.connect(self.on_no_response)
def on_no_response(self):
print("Pulse detected no response from Fusion..")
sys.exit(1)
def start(self):
self._thread.start()
def stop(self):
self._thread.requestInterruption()
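# Wiring sketch -- hypothetical, assuming a Qt event loop is already running
# in this fusionscript process (as it is while the OpenPype menu is shown):
#
# pulse = FusionPulse(parent=menu_widget)
# pulse.start()  # polls Fusion on a background thread
# ...
# pulse.stop()   # requests the thread to interrupt
#
# Once app.Test() stops responding the process exits itself, so no orphaned
# fusionscript.exe lingers after Fusion quits.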

View file

@@ -2,13 +2,11 @@
import sys
import os
from openpype.pipeline import HOST_WORKFILE_EXTENSIONS
from .pipeline import get_current_comp
from .lib import get_current_comp
def file_extensions():
return HOST_WORKFILE_EXTENSIONS["fusion"]
return [".comp"]
def has_unsaved_changes():

View file

@@ -0,0 +1,60 @@
{
Action
{
ID = "OpenPype_Menu",
Category = "OpenPype",
Name = "OpenPype Menu",
Targets =
{
Composition =
{
Execute = _Lua [=[
local scriptPath = app:MapPath("OpenPype:MenuScripts/openpype_menu.py")
if bmd.fileexists(scriptPath) == false then
print("[OpenPype Error] Can't run file: " .. scriptPath)
else
target:RunScript(scriptPath)
end
]=],
},
},
},
Action
{
ID = "OpenPype_Install_PySide2",
Category = "OpenPype",
Name = "Install PySide2",
Targets =
{
Composition =
{
Execute = _Lua [=[
local scriptPath = app:MapPath("OpenPype:MenuScripts/install_pyside2.py")
if bmd.fileexists(scriptPath) == false then
print("[OpenPype Error] Can't run file: " .. scriptPath)
else
target:RunScript(scriptPath)
end
]=],
},
},
},
Menus
{
Target = "ChildFrame",
Before "Help"
{
Sub "OpenPype"
{
"OpenPype_Menu{}",
"_",
Sub "Admin" {
"OpenPype_Install_PySide2{}"
}
}
},
},
}

View file

@@ -0,0 +1,6 @@
### OpenPype deploy MenuScripts
Note that this `MenuScripts` is not an official Fusion folder.
OpenPype only uses this folder in `{fusion}/deploy/` to trigger the OpenPype menu actions.
They are used in the actions defined in `.fu` files in `{fusion}/deploy/Config`.

View file

@@ -0,0 +1,29 @@
# This is just a quick hack for users running Py3 locally but having no
# Qt library installed
import os
import subprocess
import importlib
try:
from Qt import QtWidgets # noqa: F401
from Qt import __binding__
print(f"Qt binding: {__binding__}")
mod = importlib.import_module(__binding__)
print(f"Qt path: {mod.__file__}")
print("Qt library found, nothing to do..")
except ImportError:
print("Assuming no Qt library is installed..")
print('Installing PySide2 for Python 3.6: '
f'{os.environ["FUSION16_PYTHON36_HOME"]}')
# Get full path to python executable
exe = "python.exe" if os.name == 'nt' else "python"
python = os.path.join(os.environ["FUSION16_PYTHON36_HOME"], exe)
assert os.path.exists(python), f"Python doesn't exist: {python}"
# Do python -m pip install PySide2
args = [python, "-m", "pip", "install", "PySide2"]
print(f"Args: {args}")
subprocess.Popen(args)

View file

@@ -9,6 +9,10 @@ from openpype.pipeline import (
def main(env):
# This script working directory starts in Fusion application folder.
# However the contents of that folder can conflict with Qt library dlls
# so we make sure to move out of it to avoid DLL Load Failed errors.
os.chdir("..")
from openpype.hosts.fusion import api
from openpype.hosts.fusion.api import menu
@@ -20,6 +24,11 @@ def main(env):
menu.launch_openpype_menu()
# Initiate a QTimer to check if Fusion is still alive every X interval
# If Fusion is not found - kill itself
# todo(roy): Implement timer that ensures UI doesn't remain when e.g.
# Fusion closes down
if __name__ == "__main__":
result = main(os.environ)

View file

@@ -0,0 +1,19 @@
{
Locked = true,
Global = {
Paths = {
Map = {
["OpenPype:"] = "$(OPENPYPE_FUSION)/deploy",
["Reactor:"] = "$(REACTOR)",
["Config:"] = "UserPaths:Config;OpenPype:Config",
["Scripts:"] = "UserPaths:Scripts;Reactor:System/Scripts;OpenPype:Scripts",
["UserPaths:"] = "UserData:;AllData:;Fusion:;Reactor:Deploy"
},
},
Script = {
PythonVersion = 3,
Python3Forced = true
},
},
}

View file

@@ -0,0 +1,40 @@
import os
import platform
from openpype.lib import PreLaunchHook, ApplicationLaunchFailed
class FusionPreLaunchOCIO(PreLaunchHook):
"""Set OCIO environment variable for Fusion"""
app_groups = ["fusion"]
def execute(self):
"""Hook entry method."""
# get image io
project_settings = self.data["project_settings"]
# make sure anatomy settings are having flame key
imageio_fusion = project_settings.get("fusion", {}).get("imageio")
if not imageio_fusion:
raise ApplicationLaunchFailed((
"Anatomy project settings are missing `fusion` key. "
"Please make sure you remove project overrides on "
"Anatomy ImageIO")
)
ocio = imageio_fusion.get("ocio")
enabled = ocio.get("enabled", False)
if not enabled:
return
platform_key = platform.system().lower()
ocio_path = ocio["configFilePath"][platform_key]
if not ocio_path:
raise ApplicationLaunchFailed(
"Fusion OCIO is enabled in project settings but no OCIO config"
f"path is set for your current platform: {platform_key}"
)
self.log.info(f"Setting OCIO config path: {ocio_path}")
self.launch_context.env["OCIO"] = os.pathsep.join(ocio_path)

View file

@@ -1,114 +1,61 @@
import os
import shutil
import openpype.hosts.fusion
from openpype.lib import PreLaunchHook, ApplicationLaunchFailed
from openpype.hosts.fusion import FUSION_HOST_DIR
class FusionPrelaunch(PreLaunchHook):
"""
This hook will check if current workfile path has Fusion
project inside.
"""Prepares OpenPype Fusion environment
Requires FUSION_PYTHON3_HOME to be defined in the environment for Fusion
to point at a valid Python 3 build for Fusion. That is Python 3.3-3.10
for Fusion 18 and Python 3.6 for Fusion 16 and 17.
This also sets FUSION16_MasterPrefs to apply the fusion master prefs
as set in openpype/hosts/fusion/deploy/fusion_shared.prefs to enable
the OpenPype menu and force Python 3 over Python 2.
"""
app_groups = ["fusion"]
def execute(self):
# making sure python 3.6 is installed at provided path
py36_dir = self.launch_context.env.get("PYTHON36")
if not py36_dir:
# making sure python 3 is installed at provided path
# Py 3.3-3.10 for Fusion 18+ or Py 3.6 for Fu 16-17
py3_var = "FUSION_PYTHON3_HOME"
fusion_python3_home = self.launch_context.env.get(py3_var, "")
self.log.info(f"Looking for Python 3 in: {fusion_python3_home}")
for path in fusion_python3_home.split(os.pathsep):
# Allow defining multiple paths to allow "fallback" to other
# path. But make to set only a single path as final variable.
py3_dir = os.path.normpath(path)
if os.path.isdir(py3_dir):
break
else:
raise ApplicationLaunchFailed(
"Required environment variable \"PYTHON36\" is not set."
"\n\nFusion implementation requires to have"
" installed Python 3.6"
"Python 3 is not installed at the provided path.\n"
"Make sure the environment in fusion settings has "
"'FUSION_PYTHON3_HOME' set correctly and make sure "
"Python 3 is installed in the given path."
f"\n\nPYTHON36: {fusion_python3_home}"
)
py36_dir = os.path.normpath(py36_dir)
if not os.path.isdir(py36_dir):
raise ApplicationLaunchFailed(
"Python 3.6 is not installed at the provided path.\n"
"Either make sure the environments in fusion settings has"
" 'PYTHON36' set corectly or make sure Python 3.6 is installed"
f" in the given path.\n\nPYTHON36: {py36_dir}"
)
self.log.info(f"Path to Fusion Python folder: '{py36_dir}'...")
self.launch_context.env["PYTHON36"] = py36_dir
self.log.info(f"Setting {py3_var}: '{py3_dir}'...")
self.launch_context.env[py3_var] = py3_dir
utility_dir = self.launch_context.env.get("FUSION_UTILITY_SCRIPTS_DIR")
if not utility_dir:
raise ApplicationLaunchFailed(
"Required Fusion utility script dir environment variable"
" \"FUSION_UTILITY_SCRIPTS_DIR\" is not set."
)
# Fusion 18+ requires FUSION_PYTHON3_HOME to also be on PATH
self.launch_context.env["PATH"] += ";" + py3_dir
# setting utility scripts dir for scripts syncing
utility_dir = os.path.normpath(utility_dir)
if not os.path.isdir(utility_dir):
raise ApplicationLaunchFailed(
"Fusion utility script dir does not exist. Either make sure "
"the environments in fusion settings has"
" 'FUSION_UTILITY_SCRIPTS_DIR' set correctly or reinstall "
f"Fusion.\n\nFUSION_UTILITY_SCRIPTS_DIR: '{utility_dir}'"
)
# Fusion 16 and 17 use FUSION16_PYTHON36_HOME instead of
# FUSION_PYTHON3_HOME and will only work with a Python 3.6 version
# TODO: Detect Fusion version to only set for specific Fusion build
self.launch_context.env["FUSION16_PYTHON36_HOME"] = py3_dir
self._sync_utility_scripts(self.launch_context.env)
self.log.info("Fusion Pype wrapper has been installed")
# Add our Fusion Master Prefs which is the only way to customize
# Fusion to define where it can read custom scripts and tools from
self.log.info(f"Setting OPENPYPE_FUSION: {FUSION_HOST_DIR}")
self.launch_context.env["OPENPYPE_FUSION"] = FUSION_HOST_DIR
def _sync_utility_scripts(self, env):
""" Synchronizing basic utlility scripts for resolve.
To be able to run scripts from inside `Fusion/Workspace/Scripts` menu
all scripts has to be accessible from defined folder.
"""
if not env:
env = {k: v for k, v in os.environ.items()}
# initiate inputs
scripts = {}
us_env = env.get("FUSION_UTILITY_SCRIPTS_SOURCE_DIR")
us_dir = env.get("FUSION_UTILITY_SCRIPTS_DIR", "")
us_paths = [os.path.join(
os.path.dirname(os.path.abspath(openpype.hosts.fusion.__file__)),
"utility_scripts"
)]
# collect script dirs
if us_env:
self.log.info(f"Utility Scripts Env: `{us_env}`")
us_paths = us_env.split(
os.pathsep) + us_paths
# collect scripts from dirs
for path in us_paths:
scripts.update({path: os.listdir(path)})
self.log.info(f"Utility Scripts Dir: `{us_paths}`")
self.log.info(f"Utility Scripts: `{scripts}`")
# make sure no script file is in folder
if next((s for s in os.listdir(us_dir)), None):
for s in os.listdir(us_dir):
path = os.path.normpath(
os.path.join(us_dir, s))
self.log.info(f"Removing `{path}`...")
# remove file or directory if not in our folders
if not os.path.isdir(path):
os.remove(path)
else:
shutil.rmtree(path)
# copy scripts into Resolve's utility scripts dir
for d, sl in scripts.items():
# directory and scripts list
for s in sl:
# script in script list
src = os.path.normpath(os.path.join(d, s))
dst = os.path.normpath(os.path.join(us_dir, s))
self.log.info(f"Copying `{src}` to `{dst}`...")
# copy file or directory from our folders to fusion's folder
if not os.path.isdir(src):
shutil.copy2(src, dst)
else:
shutil.copytree(src, dst)
pref_var = "FUSION16_MasterPrefs" # used by Fusion 16, 17 and 18
prefs = os.path.join(FUSION_HOST_DIR, "deploy", "fusion_shared.prefs")
self.log.info(f"Setting {pref_var}: {prefs}")
self.launch_context.env[pref_var] = prefs
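# A worked example of the fallback behaviour above (hypothetical paths):
#
# FUSION_PYTHON3_HOME = "C:/Python39;C:/Python310"  # os.pathsep separated
#
# The first entry that is an existing directory wins; the variable is then
# narrowed to that single path, appended to PATH (required by Fusion 18+)
# and mirrored to FUSION16_PYTHON36_HOME for Fusion 16/17.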

View file

@@ -1,6 +1,9 @@
import os
from openpype.pipeline import LegacyCreator
from openpype.pipeline import (
LegacyCreator,
legacy_io
)
from openpype.hosts.fusion.api import (
get_current_comp,
comp_lock_and_undo_chunk
@@ -21,12 +24,9 @@ class CreateOpenEXRSaver(LegacyCreator):
comp = get_current_comp()
# todo: improve method of getting current environment
# todo: pref avalon.Session over os.environ
workdir = os.path.normpath(legacy_io.Session["AVALON_WORKDIR"])
workdir = os.path.normpath(os.environ["AVALON_WORKDIR"])
filename = "{}..tiff".format(self.name)
filename = "{}..exr".format(self.name)
filepath = os.path.join(workdir, "render", filename)
with comp_lock_and_undo_chunk(comp):
@@ -39,10 +39,10 @@ class CreateOpenEXRSaver(LegacyCreator):
saver["Clip"] = filepath
saver["OutputFormat"] = file_format
# # # Set standard TIFF settings
# Check file format settings are available
if saver[file_format] is None:
raise RuntimeError("File format is not set to TiffFormat, "
"this is a bug")
raise RuntimeError("File format is not set to {}, "
"this is a bug".format(file_format))
# Set file format attributes
saver[file_format]["Depth"] = 1 # int8 | int16 | float32 | other

View file

@@ -101,6 +101,9 @@ def loader_shift(loader, frame, relative=True):
else:
shift = frame - old_in
if not shift:
return 0
# Shifting global in will try to automatically compensate for the change
# in the "ClipTimeStart" and "HoldFirstFrame" inputs, so we preserve those
# input values to "just shift" the clip
@@ -149,9 +152,8 @@ class FusionLoadSequence(load.LoaderPlugin):
tool["Clip"] = path
# Set global in point to start frame (if in version.data)
start = context["version"]["data"].get("frameStart", None)
if start is not None:
loader_shift(tool, start, relative=False)
start = self._get_start(context["version"], tool)
loader_shift(tool, start, relative=False)
imprint_container(tool,
name=name,
@@ -214,12 +216,7 @@ class FusionLoadSequence(load.LoaderPlugin):
# Get start frame from version data
project_name = legacy_io.active_project()
version = get_version_by_id(project_name, representation["parent"])
start = version["data"].get("frameStart")
if start is None:
self.log.warning("Missing start frame for updated version"
"assuming starts at frame 0 for: "
"{} ({})".format(tool.Name, representation))
start = 0
start = self._get_start(version, tool)
with comp_lock_and_undo_chunk(comp, "Update Loader"):
@@ -256,3 +253,27 @@ class FusionLoadSequence(load.LoaderPlugin):
"""Get first file in representation root"""
files = sorted(os.listdir(root))
return os.path.join(root, files[0])
def _get_start(self, version_doc, tool):
"""Return real start frame of published files (incl. handles)"""
data = version_doc["data"]
# Get start frame directly with handle if it's in data
start = data.get("frameStartHandle")
if start is not None:
return start
# Get frame start without handles
start = data.get("frameStart")
if start is None:
self.log.warning("Missing start frame for version "
"assuming starts at frame 0 for: "
"{}".format(tool.Name))
return 0
# Use `handleStart` if the data is available
handle_start = data.get("handleStart")
if handle_start:
start -= handle_start
return start
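# Worked examples of the fallback order above (hypothetical version docs):
#
# {"frameStartHandle": 991}               -> 991
# {"frameStart": 1001, "handleStart": 10} -> 991 (1001 - 10)
# {"frameStart": 1001}                    -> 1001
# {}                                      -> 0 (plus a warning)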

View file

@@ -0,0 +1,114 @@
from bson.objectid import ObjectId
import pyblish.api
from openpype.pipeline import registered_host
def collect_input_containers(tools):
"""Collect containers that contain any of the node in `nodes`.
This will return any loaded Avalon container that contains at least one of
the nodes. As such, the Avalon container is an input for it. Or in short,
there are member nodes of that container.
Returns:
list: Input avalon containers
"""
# Lookup by tool names
lookup = frozenset([tool.Name for tool in tools])
containers = []
host = registered_host()
for container in host.ls():
name = container["_tool"].Name
# We currently assume no "groups" as containers but just single tools
# like a single "Loader" operator. As such we just check whether the
# Loader is part of the processing queue.
if name in lookup:
containers.append(container)
return containers
def iter_upstream(tool):
"""Yields all upstream inputs for the current tool.
Yields:
tool: The input tools.
"""
def get_connected_input_tools(tool):
"""Helper function that returns connected input tools for a tool."""
inputs = []
# Filter only to actual types that will have sensible upstream
# connections. So we ignore just "Number" inputs as they can be
# many to iterate, slowing things down quite a bit - and in practice
# they don't have upstream connections.
VALID_INPUT_TYPES = ['Image', 'Particles', 'Mask', 'DataType3D']
for type_ in VALID_INPUT_TYPES:
for input_ in tool.GetInputList(type_).values():
output = input_.GetConnectedOutput()
if output:
input_tool = output.GetTool()
inputs.append(input_tool)
return inputs
# Initialize process queue with the node's inputs itself
queue = get_connected_input_tools(tool)
# We keep track of which node names we have processed so far, to ensure we
# don't process the same hierarchy again. We are not pushing the tool
# itself into the set as that doesn't correctly recognize the same tool.
# Since tool names are unique in a comp in Fusion we rely on that.
collected = set(tool.Name for tool in queue)
# Traverse upstream references for all nodes and yield them as we
# process the queue.
while queue:
upstream_tool = queue.pop()
yield upstream_tool
# Find upstream tools that are not collected yet.
upstream_inputs = get_connected_input_tools(upstream_tool)
upstream_inputs = [t for t in upstream_inputs if
t.Name not in collected]
queue.extend(upstream_inputs)
collected.update(tool.Name for tool in upstream_inputs)
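# Traversal sketch -- hypothetical usage, assuming `comp` is the current
# Fusion composition; each upstream tool is yielded exactly once because
# visited tool names are tracked in `collected`:
#
# saver = comp.FindTool("Saver1")
# upstream_names = [t.Name for t in iter_upstream(saver)]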
class CollectUpstreamInputs(pyblish.api.InstancePlugin):
"""Collect source input containers used for this publish.
This will include `inputs` data of which loaded publishes were used in the
generation of this publish. This leaves an upstream trace to what was used
as input.
"""
label = "Collect Inputs"
order = pyblish.api.CollectorOrder + 0.2
hosts = ["fusion"]
def process(self, instance):
# Get all upstream and include itself
tool = instance[0]
nodes = list(iter_upstream(tool))
nodes.append(tool)
# Collect containers for the given set of nodes
containers = collect_input_containers(nodes)
inputs = [ObjectId(c["representation"]) for c in containers]
instance.data["inputRepresentations"] = inputs
self.log.info("Collected inputs: %s" % inputs)

View file

@@ -4,19 +4,21 @@ import pyblish.api
def get_comp_render_range(comp):
"""Return comp's start and end render range."""
"""Return comp's start-end render range and global start-end range."""
comp_attrs = comp.GetAttrs()
start = comp_attrs["COMPN_RenderStart"]
end = comp_attrs["COMPN_RenderEnd"]
global_start = comp_attrs["COMPN_GlobalStart"]
global_end = comp_attrs["COMPN_GlobalEnd"]
# Whenever render ranges are undefined fall back
# to the comp's global start and end
if start == -1000000000:
start = comp_attrs["COMPN_GlobalEnd"]
start = global_start
if end == -1000000000:
end = comp_attrs["COMPN_GlobalStart"]
end = global_end
return start, end
return start, end, global_start, global_end
class CollectInstances(pyblish.api.ContextPlugin):
@@ -42,9 +44,11 @@ class CollectInstances(pyblish.api.ContextPlugin):
tools = comp.GetToolList(False).values()
savers = [tool for tool in tools if tool.ID == "Saver"]
start, end = get_comp_render_range(comp)
start, end, global_start, global_end = get_comp_render_range(comp)
context.data["frameStart"] = int(start)
context.data["frameEnd"] = int(end)
context.data["frameStartHandle"] = int(global_start)
context.data["frameEndHandle"] = int(global_end)
for tool in savers:
path = tool["Clip"][comp.TIME_UNDEFINED]
@@ -78,8 +82,10 @@ class CollectInstances(pyblish.api.ContextPlugin):
"label": label,
"frameStart": context.data["frameStart"],
"frameEnd": context.data["frameEnd"],
"frameStartHandle": context.data["frameStartHandle"],
"frameEndHandle": context.data["frameStartHandle"],
"fps": context.data["fps"],
"families": ["render", "review", "ftrack"],
"families": ["render", "review"],
"family": "render",
"active": active,
"publish": active # backwards compatibility

View file

@@ -20,6 +20,8 @@ class Fusionlocal(pyblish.api.InstancePlugin):
def process(self, instance):
# This plug-in runs only once and thus assumes all instances
# currently will render the same frame range
context = instance.context
key = "__hasRun{}".format(self.__class__.__name__)
if context.data.get(key, False):
@@ -28,8 +30,8 @@ class Fusionlocal(pyblish.api.InstancePlugin):
context.data[key] = True
current_comp = context.data["currentComp"]
frame_start = current_comp.GetAttrs("COMPN_RenderStart")
frame_end = current_comp.GetAttrs("COMPN_RenderEnd")
frame_start = context.data["frameStartHandle"]
frame_end = context.data["frameEndHandle"]
path = instance.data["path"]
output_dir = instance.data["outputDir"]
@@ -40,7 +42,11 @@ class Fusionlocal(pyblish.api.InstancePlugin):
self.log.info("End frame: {}".format(frame_end))
with comp_lock_and_undo_chunk(current_comp):
result = current_comp.Render()
result = current_comp.Render({
"Start": frame_start,
"End": frame_end,
"Wait": True
})
if "representations" not in instance.data:
instance.data["representations"] = []

View file

@@ -1,284 +0,0 @@
import os
import re
import sys
import logging
from openpype.client import (
get_asset_by_name,
get_versions,
)
from openpype.pipeline import (
legacy_io,
install_host,
registered_host,
)
from openpype.lib import version_up
from openpype.hosts.fusion import api
from openpype.hosts.fusion.api import lib
from openpype.pipeline.context_tools import get_workdir_from_session
log = logging.getLogger("Update Slap Comp")
def _format_version_folder(folder):
"""Format a version folder based on the filepath
Assumption here is made that, if the path does not exists the folder
will be "v001"
Args:
folder: file path to a folder
Returns:
str: new version folder name
"""
new_version = 1
if os.path.isdir(folder):
re_version = re.compile(r"v\d+$")
versions = [i for i in os.listdir(folder) if os.path.isdir(i)
and re_version.match(i)]
if versions:
# ensure the "v" is not included
new_version = int(max(versions)[1:]) + 1
version_folder = "v{:03d}".format(new_version)
return version_folder
def _get_fusion_instance():
fusion = getattr(sys.modules["__main__"], "fusion", None)
if fusion is None:
try:
# Support for FuScript.exe, BlackmagicFusion module for py2 only
import BlackmagicFusion as bmf
fusion = bmf.scriptapp("Fusion")
except ImportError:
raise RuntimeError("Could not find a Fusion instance")
return fusion
def _format_filepath(session):
project = session["AVALON_PROJECT"]
asset = session["AVALON_ASSET"]
# Save updated slap comp
work_path = get_workdir_from_session(session)
walk_to_dir = os.path.join(work_path, "scenes", "slapcomp")
slapcomp_dir = os.path.abspath(walk_to_dir)
# Ensure destination exists
if not os.path.isdir(slapcomp_dir):
log.warning("Folder did not exist, creating folder structure")
os.makedirs(slapcomp_dir)
# Compute output path
new_filename = "{}_{}_slapcomp_v001.comp".format(project, asset)
new_filepath = os.path.join(slapcomp_dir, new_filename)
# Create new unique filepath
if os.path.exists(new_filepath):
new_filepath = version_up(new_filepath)
return new_filepath
def _update_savers(comp, session):
"""Update all savers of the current comp to ensure the output is correct
This will refactor the Saver file outputs to the renders of the new session
that is provided.
If the original saver path was set relative to a /fusion/ folder then that
relative path is preserved, except that all "version" references (e.g. v010)
are reset to v001. Otherwise only a version folder is computed in the new
session's work "render" folder to dump the files in, keeping the original
filenames.
Args:
comp (object): current comp instance
session (dict): the current Avalon session
Returns:
None
"""
new_work = get_workdir_from_session(session)
renders = os.path.join(new_work, "renders")
version_folder = _format_version_folder(renders)
renders_version = os.path.join(renders, version_folder)
comp.Print("New renders to: %s\n" % renders)
with api.comp_lock_and_undo_chunk(comp):
savers = comp.GetToolList(False, "Saver").values()
for saver in savers:
filepath = saver.GetAttrs("TOOLST_Clip_Name")[1.0]
# Get old relative path to the "fusion" app folder so we can apply
# the same relative path afterwards. If not found fall back to
# using just a version folder with the filename in it.
# todo: can we make this less magical?
relpath = filepath.replace("\\", "/").rsplit("/fusion/", 1)[-1]
if os.path.isabs(relpath):
# If not relative to a "/fusion/" folder then just use filename
filename = os.path.basename(filepath)
log.warning("Can't parse relative path, refactoring to only"
"filename in a version folder: %s" % filename)
new_path = os.path.join(renders_version, filename)
else:
# Else reuse the relative path
# Reset version in folder and filename in the relative path
# to v001. The version is only detected when prefixed
# with either `_v` (underscore) or `/v` (folder)
version_pattern = r"(/|_)v[0-9]+"
if re.search(version_pattern, relpath):
new_relpath = re.sub(version_pattern,
r"\1v001",
relpath)
log.info("Resetting version folders to v001: "
"%s -> %s" % (relpath, new_relpath))
relpath = new_relpath
new_path = os.path.join(new_work, relpath)
saver["Clip"] = new_path
def update_frame_range(comp, representations):
"""Update the frame range of the comp and render length
The start and end frame are based on the lowest start frame and the highest
end frame
Args:
comp (object): current focused comp
representations (list) collection of dicts
Returns:
None
"""
project_name = legacy_io.active_project()
version_ids = {r["parent"] for r in representations}
versions = list(get_versions(project_name, version_ids))
versions = [v for v in versions
if v["data"].get("frameStart", None) is not None]
if not versions:
log.warning("No versions loaded to match frame range to.\n")
return
start = min(v["data"]["frameStart"] for v in versions)
end = max(v["data"]["frameEnd"] for v in versions)
lib.update_frame_range(start, end, comp=comp)
def switch(asset_name, filepath=None, new=True):
"""Switch the current containers of the file to the other asset (shot)
Args:
filepath (str): file path of the comp file
asset_name (str): name of the asset (shot)
new (bool): Save updated comp under a different name
Returns:
comp path (str): new filepath of the updated comp
"""
# If filepath provided, ensure it is valid absolute path
if filepath is not None:
if not os.path.isabs(filepath):
filepath = os.path.abspath(filepath)
assert os.path.exists(filepath), "%s must exist " % filepath
# Assert asset name exists
# It is better to do this here than to wait till switch_shot does it
project_name = legacy_io.active_project()
asset = get_asset_by_name(project_name, asset_name)
assert asset, "Could not find '%s' in the database" % asset_name
# Go to comp
if not filepath:
current_comp = api.get_current_comp()
assert current_comp is not None, "Could not find current comp"
else:
fusion = _get_fusion_instance()
current_comp = fusion.LoadComp(filepath, quiet=True)
assert current_comp is not None, (
"Fusion could not load '{}'").format(filepath)
host = registered_host()
containers = list(host.ls())
assert containers, "Nothing to update"
representations = []
for container in containers:
try:
representation = lib.switch_item(
container,
asset_name=asset_name)
representations.append(representation)
except Exception as e:
current_comp.Print("Error in switching! %s\n" % e.message)
message = "Switched %i Loaders of the %i\n" % (len(representations),
len(containers))
current_comp.Print(message)
# Build the session to switch to
switch_to_session = legacy_io.Session.copy()
switch_to_session["AVALON_ASSET"] = asset['name']
if new:
comp_path = _format_filepath(switch_to_session)
# Update savers output based on new session
_update_savers(current_comp, switch_to_session)
else:
comp_path = version_up(filepath)
current_comp.Print(comp_path)
current_comp.Print("\nUpdating frame range")
update_frame_range(current_comp, representations)
current_comp.Save(comp_path)
return comp_path
if __name__ == '__main__':
# QUESTION: can we convert this to gui rather then standalone script?
# TODO: convert to gui tool
import argparse
parser = argparse.ArgumentParser(description="Switch to a shot within an"
"existing comp file")
parser.add_argument("--file_path",
type=str,
default=True,
help="File path of the comp to use")
parser.add_argument("--asset_name",
type=str,
default=True,
help="Name of the asset (shot) to switch")
args, unknown = parser.parse_args()
install_host(api)
switch(args.asset_name, args.file_path)
sys.exit(0)

View file

@@ -2,10 +2,11 @@
import os
import json
import pyblish.api
import openpype
from openpype.pipeline import publish
class ExtractClipEffects(openpype.api.Extractor):
class ExtractClipEffects(publish.Extractor):
"""Extract clip effects instances."""
order = pyblish.api.ExtractorOrder

View file

@@ -1,9 +1,14 @@
import os
import pyblish.api
import openpype
from openpype.lib import (
get_oiio_tools_path,
run_subprocess,
)
from openpype.pipeline import publish
class ExtractFrames(openpype.api.Extractor):
class ExtractFrames(publish.Extractor):
"""Extracts frames"""
order = pyblish.api.ExtractorOrder
@@ -13,7 +18,7 @@ class ExtractFrames(openpype.api.Extractor):
movie_extensions = ["mov", "mp4"]
def process(self, instance):
oiio_tool_path = openpype.lib.get_oiio_tools_path()
oiio_tool_path = get_oiio_tools_path()
staging_dir = self.staging_dir(instance)
output_template = os.path.join(staging_dir, instance.data["name"])
sequence = instance.context.data["activeTimeline"]
@@ -43,7 +48,7 @@ class ExtractFrames(openpype.api.Extractor):
args.extend(["--powc", "0.45,0.45,0.45,1.0"])
args.extend([input_path, "-o", output_path])
output = openpype.api.run_subprocess(args)
output = run_subprocess(args)
failed_output = "oiiotool produced no output."
if failed_output in output:

View file

@@ -1,9 +1,10 @@
import os
import pyblish.api
import openpype.api
from openpype.pipeline import publish
class ExtractThumnail(openpype.api.Extractor):
class ExtractThumnail(publish.Extractor):
"""
Extractor for track item's thumbnails
"""

View file

@@ -318,10 +318,9 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
@staticmethod
def create_otio_time_range_from_timeline_item_data(track_item):
speed = track_item.playbackSpeed()
timeline = phiero.get_current_sequence()
frame_start = int(track_item.timelineIn())
frame_duration = int((track_item.duration() - 1) / speed)
frame_duration = int(track_item.duration())
fps = timeline.framerate().toFloat()
return hiero_export.create_otio_time_range(

View file

@@ -14,7 +14,7 @@ from openpype.pipeline import (
)
from openpype.pipeline.load import any_outdated_containers
from openpype.hosts.houdini import HOUDINI_HOST_DIR
from openpype.hosts.houdini.api import lib
from openpype.hosts.houdini.api import lib, shelves
from openpype.lib import (
register_event_callback,
@@ -73,6 +73,7 @@ def install():
# so it initializes into the correct scene FPS, Frame Range, etc.
# todo: make sure this doesn't trigger when opening with last workfile
_set_context_settings()
shelves.generate_shelves()
def uninstall():

View file

@@ -0,0 +1,204 @@
import os
import logging
import platform
import six
from openpype.settings import get_project_settings
import hou
log = logging.getLogger("openpype.hosts.houdini.shelves")
if six.PY2:
FileNotFoundError = IOError
def generate_shelves():
"""This function generates complete shelves from shelf set to tools
in Houdini from openpype project settings houdini shelf definition.
Raises:
FileNotFoundError: Raised when the shelf set filepath does not exist
"""
current_os = platform.system().lower()
# load configuration of houdini shelves
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
shelves_set_config = project_settings["houdini"]["shelves"]
if not shelves_set_config:
log.debug(
"No custom shelves found in project settings."
)
return
for shelf_set_config in shelves_set_config:
shelf_set_filepath = shelf_set_config.get('shelf_set_source_path')
if shelf_set_filepath[current_os]:
if not os.path.isfile(shelf_set_filepath[current_os]):
raise FileNotFoundError(
"This path doesn't exist - {}".format(
shelf_set_filepath[current_os]
)
)
hou.shelves.newShelfSet(file_path=shelf_set_filepath[current_os])
continue
shelf_set_name = shelf_set_config.get('shelf_set_name')
if not shelf_set_name:
log.warning(
"No name found in shelf set definition."
)
return
shelf_set = get_or_create_shelf_set(shelf_set_name)
shelves_definition = shelf_set_config.get('shelf_definition')
if not shelves_definition:
log.debug(
"No shelf definition found for shelf set named '{}'".format(
shelf_set_name
)
)
return
for shelf_definition in shelves_definition:
shelf_name = shelf_definition.get('shelf_name')
if not shelf_name:
log.warning(
"No name found in shelf definition."
)
return
shelf = get_or_create_shelf(shelf_name)
if not shelf_definition.get('tools_list'):
log.debug(
"No tool definition found for shelf named {}".format(
shelf_name
)
)
return
mandatory_attributes = {'name', 'script'}
for tool_definition in shelf_definition.get('tools_list'):
# We verify that the name and script attributes of the tool
# are set
if not all(
tool_definition[key] for key in mandatory_attributes
):
log.warning(
"You need to specify at least the name and \
the script path of the tool.")
continue
tool = get_or_create_tool(tool_definition, shelf)
if not tool:
return
# Add the tool to the shelf if not already in it
if tool not in shelf.tools():
shelf.setTools(list(shelf.tools()) + [tool])
# Add the shelf in the shelf set if not already in it
if shelf not in shelf_set.shelves():
shelf_set.setShelves(shelf_set.shelves() + (shelf,))
def get_or_create_shelf_set(shelf_set_label):
"""This function verifies if the shelf set label exists. If not,
creates a new shelf set.
Arguments:
shelf_set_label (str): The label of the shelf set
Returns:
hou.ShelfSet: The shelf set existing or the new one
"""
all_shelves_sets = hou.shelves.shelfSets().values()
shelf_sets = [
shelf for shelf in all_shelves_sets if shelf.label() == shelf_set_label
]
if shelf_sets:
return shelf_sets[0]
shelf_set_name = shelf_set_label.replace(' ', '_').lower()
new_shelf_set = hou.shelves.newShelfSet(
name=shelf_set_name,
label=shelf_set_label
)
return new_shelf_set
def get_or_create_shelf(shelf_label):
"""This function verifies if the shelf label exists. If not, creates
a new shelf.
Arguments:
shelf_label (str): The label of the shelf
Returns:
hou.Shelf: The shelf existing or the new one
"""
all_shelves = hou.shelves.shelves().values()
shelf = [s for s in all_shelves if s.label() == shelf_label]
if shelf:
return shelf[0]
shelf_name = shelf_label.replace(' ', '_').lower()
new_shelf = hou.shelves.newShelf(
name=shelf_name,
label=shelf_label
)
return new_shelf
def get_or_create_tool(tool_definition, shelf):
"""This function verifies if the tool exists and updates it. If not, creates
a new one.
Arguments:
tool_definition (dict): Dict with label, script, icon and help
shelf (hou.Shelf): The parent shelf of the tool
Returns:
hou.Tool: The tool updated or the new one
"""
existing_tools = shelf.tools()
tool_label = tool_definition.get('label')
existing_tool = [
tool for tool in existing_tools if tool.label() == tool_label
]
if existing_tool:
tool_definition.pop('name', None)
tool_definition.pop('label', None)
existing_tool[0].setData(**tool_definition)
return existing_tool[0]
tool_name = tool_label.replace(' ', '_').lower()
if not os.path.exists(tool_definition['script']):
log.warning(
"This path doesn't exist - {}".format(
tool_definition['script']
)
)
return
with open(tool_definition['script']) as f:
script = f.read()
tool_definition.update({'script': script})
new_tool = hou.shelves.newTool(name=tool_name, **tool_definition)
return new_tool
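# For reference, a hypothetical project settings payload matching the keys
# read above (houdini/shelves); "script" points to a Python file whose
# contents become the tool's script:
#
# [{
#     "shelf_set_source_path": {"windows": "", "darwin": "", "linux": ""},
#     "shelf_set_name": "OpenPype Tools",
#     "shelf_definition": [{
#         "shelf_name": "Rendering",
#         "tools_list": [{
#             "name": "submit_render",
#             "label": "Submit Render",
#             "script": "/studio/scripts/submit_render.py",
#             "icon": "",
#             "help": ""
#         }]
#     }]
# }]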

View file

@@ -1,3 +1,5 @@
from bson.objectid import ObjectId
import pyblish.api
from openpype.pipeline import registered_host
@@ -115,7 +117,7 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin):
# Collect containers for the given set of nodes
containers = collect_input_containers(nodes)
inputs = [c["representation"] for c in containers]
instance.data["inputs"] = inputs
inputs = [ObjectId(c["representation"]) for c in containers]
instance.data["inputRepresentations"] = inputs
self.log.info("Collected inputs: %s" % inputs)

View file

@@ -2,10 +2,9 @@ import pyblish.api
from openpype.lib import version_up
from openpype.pipeline import registered_host
from openpype.pipeline.publish import get_errored_plugins_from_context
class IncrementCurrentFile(pyblish.api.InstancePlugin):
class IncrementCurrentFile(pyblish.api.ContextPlugin):
"""Increment the current file.
Saves the current scene with an increased version number.
@@ -15,30 +14,10 @@ class IncrementCurrentFile(pyblish.api.InstancePlugin):
label = "Increment current file"
order = pyblish.api.IntegratorOrder + 9.0
hosts = ["houdini"]
families = ["colorbleed.usdrender", "redshift_rop"]
targets = ["local"]
families = ["workfile"]
optional = True
def process(self, instance):
# This should be a ContextPlugin, but this is a workaround
# for a bug in pyblish to run once for a family: issue #250
context = instance.context
key = "__hasRun{}".format(self.__class__.__name__)
if context.data.get(key, False):
return
else:
context.data[key] = True
context = instance.context
errored_plugins = get_errored_plugins_from_context(context)
if any(
plugin.__name__ == "HoudiniSubmitPublishDeadline"
for plugin in errored_plugins
):
raise RuntimeError(
"Skipping incrementing current file because "
"submission to deadline failed."
)
def process(self, context):
# Filename must not have changed since collecting
host = registered_host()

View file

@@ -1,35 +0,0 @@
import pyblish.api
import hou
from openpype.lib import version_up
from openpype.pipeline.publish import get_errored_plugins_from_context
class IncrementCurrentFileDeadline(pyblish.api.ContextPlugin):
"""Increment the current file.
Saves the current scene with an increased version number.
"""
label = "Increment current file"
order = pyblish.api.IntegratorOrder + 9.0
hosts = ["houdini"]
targets = ["deadline"]
def process(self, context):
errored_plugins = get_errored_plugins_from_context(context)
if any(
plugin.__name__ == "HoudiniSubmitPublishDeadline"
for plugin in errored_plugins
):
raise RuntimeError(
"Skipping incrementing current file because "
"submission to deadline failed."
)
current_filepath = context.data["currentFile"]
new_filepath = version_up(current_filepath)
hou.hipFile.save(file_name=new_filepath, save_to_recent_files=True)

View file

@@ -2483,7 +2483,7 @@ def load_capture_preset(data=None):
# DISPLAY OPTIONS
id = 'Display Options'
disp_options = {}
for key in preset['Display Options']:
for key in preset[id]:
if key.startswith('background'):
disp_options[key] = preset['Display Options'][key]
if len(disp_options[key]) == 4:

View file

@@ -5,6 +5,7 @@ import maya.mel as mel
import six
import sys
from openpype.lib import Logger
from openpype.api import (
get_project_settings,
get_current_project_settings
@@ -38,6 +39,8 @@ class RenderSettings(object):
"underscore": "_"
}
log = Logger.get_logger("RenderSettings")
@classmethod
def get_image_prefix_attr(cls, renderer):
return cls._image_prefix_nodes[renderer]
@@ -133,20 +136,7 @@ class RenderSettings(object):
cmds.setAttr(
"defaultArnoldDriver.mergeAOVs", multi_exr)
# Passes additional options in from the schema as a list
# but converts it to a dictionary because ftrack doesn't
# allow fullstops in custom attributes. Then checks for
# type of MtoA attribute passed to adjust the `setAttr`
# command accordingly.
self._additional_attribs_setter(additional_options)
for item in additional_options:
attribute, value = item
if (cmds.getAttr(str(attribute), type=True)) == "long":
cmds.setAttr(str(attribute), int(value))
elif (cmds.getAttr(str(attribute), type=True)) == "bool":
cmds.setAttr(str(attribute), int(value), type = "Boolean") # noqa
elif (cmds.getAttr(str(attribute), type=True)) == "string":
cmds.setAttr(str(attribute), str(value), type = "string") # noqa
reset_frame_range()
def _set_redshift_settings(self, width, height):
@@ -230,12 +220,20 @@ class RenderSettings(object):
cmds.setAttr("defaultRenderGlobals.extensionPadding", 4)
def _additional_attribs_setter(self, additional_attribs):
print(additional_attribs)
for item in additional_attribs:
attribute, value = item
if (cmds.getAttr(str(attribute), type=True)) == "long":
cmds.setAttr(str(attribute), int(value))
elif (cmds.getAttr(str(attribute), type=True)) == "bool":
cmds.setAttr(str(attribute), int(value)) # noqa
elif (cmds.getAttr(str(attribute), type=True)) == "string":
cmds.setAttr(str(attribute), str(value), type = "string") # noqa
attribute = str(attribute) # ensure str conversion from settings
attribute_type = cmds.getAttr(attribute, type=True)
if attribute_type in {"long", "bool"}:
cmds.setAttr(attribute, int(value))
elif attribute_type == "string":
cmds.setAttr(attribute, str(value), type="string")
elif attribute_type in {"double", "doubleAngle", "doubleLinear"}:
cmds.setAttr(attribute, float(value))
else:
self.log.error(
"Attribute {attribute} can not be set due to unsupported "
"type: {attribute_type}".format(
attribute=attribute,
attribute_type=attribute_type)
)
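# Example input -- hypothetical (attribute, value) pairs as they would come
# from the render settings schema; the queried attribute type decides which
# setAttr call is used:
#
# self._additional_attribs_setter([
#     ("defaultArnoldRenderOptions.AASamples", 6),  # long -> int
#     ("defaultRenderGlobals.preMel", "print 1;"),  # string
# ])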

View file

@@ -348,3 +348,71 @@ def get_attr_overrides(node_attr, layer,
break
return reversed(plug_overrides)
def get_shader_in_layer(node, layer):
"""Return the assigned shader in a renderlayer without switching layers.
This has been developed and tested for Legacy Renderlayers and *not* for
Render Setup.
Note: This will also return the shader for any face assignments, however
it will *not* return the components they are assigned to. This could
be implemented, but since Maya's renderlayers are famous for breaking
with face assignments there has been no need for this function to
support that.
Returns:
list: The list of assigned shaders in the given layer.
"""
def _get_connected_shader(plug):
"""Return current shader"""
return cmds.listConnections(plug,
source=False,
destination=True,
plugs=False,
connections=False,
type="shadingEngine") or []
# We check the instObjGroups (shader connection) for layer overrides.
plug = node + ".instObjGroups"
# Ignore complex query if we're in the layer anyway (optimization)
current_layer = cmds.editRenderLayerGlobals(query=True,
currentRenderLayer=True)
if layer == current_layer:
return _get_connected_shader(plug)
connections = cmds.listConnections(plug,
plugs=True,
source=False,
destination=True,
type="renderLayer") or []
connections = [c for c in connections if c.endswith(".outPlug")]
if not connections:
# If no overrides anywhere on the shader, just get the current shader
return _get_connected_shader(plug)
def _get_override(connections, layer):
"""Return the overridden connection for that layer in connections"""
# If there's an override on that layer, return that.
for connection in connections:
if (connection.startswith(layer + ".outAdjustments") and
connection.endswith(".outPlug")):
# This is a shader override on that layer so get the shader
# connected to .outValue of the .outAdjustment[i]
out_adjustment = connection.rsplit(".", 1)[0]
connection_attr = out_adjustment + ".outValue"
override = cmds.listConnections(connection_attr) or []
return override
override_shader = _get_override(connections, layer)
if override_shader is not None:
return override_shader
else:
# Get the override for "defaultRenderLayer" (=masterLayer)
return _get_override(connections, layer="defaultRenderLayer")

View file

@@ -104,13 +104,6 @@ def install():
cmds.menuItem(divider=True)
cmds.menuItem(
"Set Render Settings",
command=lambda *args: lib_rendersettings.RenderSettings().set_default_renderer_settings() # noqa
)
cmds.menuItem(divider=True)
cmds.menuItem(
"Work Files...",
command=lambda *args: host_tools.show_workfiles(
@@ -132,6 +125,12 @@ def install():
"Set Colorspace",
command=lambda *args: lib.set_colorspace(),
)
cmds.menuItem(
"Set Render Settings",
command=lambda *args: lib_rendersettings.RenderSettings().set_default_renderer_settings() # noqa
)
cmds.menuItem(divider=True, parent=MENU_NAME)
cmds.menuItem(
"Build First Workfile",

View file

@@ -16,6 +16,7 @@ from openpype.host import (
HostDirmap,
)
from openpype.tools.utils import host_tools
from openpype.tools.workfiles.lock_dialog import WorkfileLockDialog
from openpype.lib import (
register_event_callback,
emit_event
@@ -31,8 +32,14 @@ from openpype.pipeline import (
AVALON_CONTAINER_ID,
)
from openpype.pipeline.load import any_outdated_containers
from openpype.pipeline.workfile.lock_workfile import (
create_workfile_lock,
remove_workfile_lock,
is_workfile_locked,
is_workfile_lock_enabled
)
from openpype.hosts.maya import MAYA_ROOT_DIR
from openpype.hosts.maya.lib import copy_workspace_mel
from openpype.hosts.maya.lib import create_workspace_mel
from . import menu, lib
from .workio import (
@@ -63,7 +70,7 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost):
self._op_events = {}
def install(self):
project_name = os.getenv("AVALON_PROJECT")
project_name = legacy_io.active_project()
project_settings = get_project_settings(project_name)
# process path mapping
dirmap_processor = MayaDirmap("maya", project_name, project_settings)
@@ -99,8 +106,13 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost):
register_event_callback("open", on_open)
register_event_callback("new", on_new)
register_event_callback("before.save", on_before_save)
register_event_callback("after.save", on_after_save)
register_event_callback("before.close", on_before_close)
register_event_callback("before.file.open", before_file_open)
register_event_callback("taskChanged", on_task_changed)
register_event_callback("workfile.open.before", before_workfile_open)
register_event_callback("workfile.save.before", before_workfile_save)
register_event_callback("workfile.save.before", after_workfile_save)
def open_workfile(self, filepath):
return open_file(filepath)
@@ -143,6 +155,13 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost):
OpenMaya.MSceneMessage.kBeforeSave, _on_scene_save
)
self._op_events[_after_scene_save] = (
OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterSave,
_after_scene_save
)
)
self._op_events[_before_scene_save] = (
OpenMaya.MSceneMessage.addCheckCallback(
OpenMaya.MSceneMessage.kBeforeSaveCheck,
@ -161,15 +180,35 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost):
)
)
self._op_events[_on_scene_open] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterOpen, _on_scene_open
self._op_events[_on_scene_open] = (
OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterOpen,
_on_scene_open
)
)
self._op_events[_before_scene_open] = (
OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kBeforeOpen,
_before_scene_open
)
)
self._op_events[_before_close_maya] = (
OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kMayaExiting,
_before_close_maya
)
)
self.log.info("Installed event handler _on_scene_save..")
self.log.info("Installed event handler _before_scene_save..")
self.log.info("Installed event handler _on_after_save..")
self.log.info("Installed event handler _on_scene_new..")
self.log.info("Installed event handler _on_maya_initialized..")
self.log.info("Installed event handler _on_scene_open..")
self.log.info("Installed event handler _check_lock_file..")
self.log.info("Installed event handler _before_close_maya..")
def _set_project():
@ -208,6 +247,10 @@ def _on_scene_new(*args):
emit_event("new")
def _after_scene_save(*arg):
emit_event("after.save")
def _on_scene_save(*args):
emit_event("save")
@ -216,6 +259,14 @@ def _on_scene_open(*args):
emit_event("open")
def _before_close_maya(*args):
emit_event("before.close")
def _before_scene_open(*args):
emit_event("before.file.open")
def _before_scene_save(return_code, client_data):
# Default to allowing the action. Registered
@ -229,6 +280,23 @@ def _before_scene_save(return_code, client_data):
)
def _remove_workfile_lock():
"""Remove workfile lock on current file"""
if not handle_workfile_locks():
return
filepath = current_file()
log.info("Removing lock on current file {}...".format(filepath))
if filepath:
remove_workfile_lock(filepath)
def handle_workfile_locks():
if lib.IS_HEADLESS:
return False
project_name = legacy_io.active_project()
return is_workfile_lock_enabled(MayaHost.name, project_name)
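# Illustration: a minimal sketch of the lock lifecycle using the
# lock_workfile helpers imported above (the filepath is hypothetical):
filepath = "C:/projects/demo/work/shot010/scene_v001.ma"
if handle_workfile_locks():
    if is_workfile_locked(filepath):
        pass  # another session holds the lock; show WorkfileLockDialog
    else:
        create_workfile_lock(filepath)
    # ... user works on the file ...
    remove_workfile_lock(filepath)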
def uninstall():
pyblish.api.deregister_plugin_path(PUBLISH_PATH)
pyblish.api.deregister_host("mayabatch")
@ -349,21 +417,13 @@ def containerise(name,
("id", AVALON_CONTAINER_ID),
("name", name),
("namespace", namespace),
("loader", str(loader)),
("loader", loader),
("representation", context["representation"]["_id"]),
]
for key, value in data:
if not value:
continue
if isinstance(value, (int, float)):
cmds.addAttr(container, longName=key, attributeType="short")
cmds.setAttr(container + "." + key, value)
else:
cmds.addAttr(container, longName=key, dataType="string")
cmds.setAttr(container + "." + key, value, type="string")
cmds.addAttr(container, longName=key, dataType="string")
cmds.setAttr(container + "." + key, str(value), type="string")
main_container = cmds.ls(AVALON_CONTAINERS, type="objectSet")
if not main_container:
@ -434,6 +494,46 @@ def on_before_save():
return lib.validate_fps()
def on_after_save():
"""Check if there is a lockfile after save"""
check_lock_on_current_file()
def check_lock_on_current_file():
"""Check if there is a user opening the file"""
if not handle_workfile_locks():
return
log.info("Running callback on checking the lock file...")
# add the lock file when opening the file
filepath = current_file()
if is_workfile_locked(filepath):
# add lockfile dialog
workfile_dialog = WorkfileLockDialog(filepath)
if not workfile_dialog.exec_():
cmds.file(new=True)
return
create_workfile_lock(filepath)
def on_before_close():
"""Delete the lock file after user quitting the Maya Scene"""
log.info("Closing Maya...")
# delete the lock file
filepath = current_file()
if handle_workfile_locks():
remove_workfile_lock(filepath)
def before_file_open():
"""check lock file when the file changed"""
# delete the lock file
_remove_workfile_lock()
def on_save():
"""Automatically add IDs to new nodes
@ -442,6 +542,8 @@ def on_save():
"""
log.info("Running callback on save..")
# remove lockfile if the user jumps from one scene to another
_remove_workfile_lock()
# # Update current task for the current scene
# update_task_from_path(cmds.file(query=True, sceneName=True))
@ -499,6 +601,9 @@ def on_open():
dialog.on_clicked.connect(_on_show_inventory)
dialog.show()
# create lock file for the maya scene
check_lock_on_current_file()
def on_new():
"""Set project resolution and fps when create a new file"""
@ -514,6 +619,7 @@ def on_new():
"from openpype.hosts.maya.api import lib;"
"lib.add_render_layer_change_observer()")
lib.set_context_settings()
_remove_workfile_lock()
def on_task_changed():
@ -541,7 +647,7 @@ def on_task_changed():
lib.update_content_on_context_change()
msg = " project: {}\n asset: {}\n task:{}".format(
legacy_io.Session["AVALON_PROJECT"],
legacy_io.active_project(),
legacy_io.Session["AVALON_ASSET"],
legacy_io.Session["AVALON_TASK"]
)
@ -552,10 +658,26 @@ def on_task_changed():
)
def before_workfile_open():
if handle_workfile_locks():
_remove_workfile_lock()
def before_workfile_save(event):
project_name = legacy_io.active_project()
if handle_workfile_locks():
_remove_workfile_lock()
workdir_path = event["workdir_path"]
if workdir_path:
copy_workspace_mel(workdir_path)
create_workspace_mel(workdir_path, project_name)
def after_workfile_save(event):
workfile_name = event["filename"]
if handle_workfile_locks():
if workfile_name:
if not is_workfile_locked(workfile_name):
create_workfile_lock(workfile_name)
class MayaDirmap(HostDirmap):

View file

@ -1,5 +1,5 @@
from openpype.lib import PreLaunchHook
from openpype.hosts.maya.lib import copy_workspace_mel
from openpype.hosts.maya.lib import create_workspace_mel
class PreCopyMel(PreLaunchHook):
@ -10,9 +10,10 @@ class PreCopyMel(PreLaunchHook):
app_groups = ["maya"]
def execute(self):
project_name = self.launch_context.env.get("AVALON_PROJECT")
workdir = self.launch_context.env.get("AVALON_WORKDIR")
if not workdir:
self.log.warning("BUG: Workdir is not filled.")
return
copy_workspace_mel(workdir)
create_workspace_mel(workdir, project_name)

View file

@ -1,26 +1,24 @@
import os
import shutil
from openpype.settings import get_project_settings
from openpype.lib import Logger
def copy_workspace_mel(workdir):
# Check that source mel exists
current_dir = os.path.dirname(os.path.abspath(__file__))
src_filepath = os.path.join(current_dir, "resources", "workspace.mel")
if not os.path.exists(src_filepath):
print("Source mel file does not exist. {}".format(src_filepath))
return
# Skip if workspace.mel already exists
def create_workspace_mel(workdir, project_name):
dst_filepath = os.path.join(workdir, "workspace.mel")
if os.path.exists(dst_filepath):
return
# Create workdir if it does not exist yet
if not os.path.exists(workdir):
os.makedirs(workdir)
# Copy file
print("Copying workspace mel \"{}\" -> \"{}\"".format(
src_filepath, dst_filepath
))
shutil.copy(src_filepath, dst_filepath)
project_setting = get_project_settings(project_name)
mel_script = project_setting["maya"].get("mel_workspace")
# Skip if mel script in settings is empty
if not mel_script:
log = Logger.get_logger("create_workspace_mel")
log.debug("File 'workspace.mel' not created. Settings value is empty.")
return
with open(dst_filepath, "w") as mel_file:
mel_file.write(mel_script)
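# Illustration: a usage sketch with hypothetical workdir and project name.
# Writes workspace.mel from the project's Maya "mel_workspace" setting,
# unless the file already exists or the setting is empty:
create_workspace_mel(
    workdir="C:/projects/demo/work/shot010",
    project_name="demo",
)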

View file

@ -0,0 +1,46 @@
from maya import cmds
from openpype.pipeline import InventoryAction, registered_host
from openpype.hosts.maya.api.lib import get_container_members
class SelectInScene(InventoryAction):
"""Select nodes in the scene from selected containers in scene inventory"""
label = "Select in scene"
icon = "search"
color = "#888888"
order = 99
def process(self, containers):
all_members = []
for container in containers:
members = get_container_members(container)
all_members.extend(members)
cmds.select(all_members, replace=True, noExpand=True)
class HighlightBySceneSelection(InventoryAction):
"""Select containers in scene inventory from the current scene selection"""
label = "Highlight by scene selection"
icon = "search"
color = "#888888"
order = 100
def process(self, containers):
selection = set(cmds.ls(selection=True, long=True, objectsOnly=True))
host = registered_host()
to_select = []
for container in host.get_containers():
members = get_container_members(container)
if any(member in selection for member in members):
to_select.append(container["objectName"])
return {
"objectNames": to_select,
"options": {"clear": True}
}
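# Illustration: how the returned mapping is presumably consumed by the
# Scene Inventory tool (a sketch; requires a registered Maya host):
result = HighlightBySceneSelection().process(containers=[])
# result["objectNames"] lists the container sets to highlight and
# result["options"]["clear"] asks the tool to drop its previous selection.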

View file

@ -70,7 +70,7 @@ class CollectAssembly(pyblish.api.InstancePlugin):
data[representation_id].append(instance_data)
instance.data["scenedata"] = dict(data)
instance.data["hierarchy"] = list(set(hierarchy_nodes))
instance.data["nodesHierarchy"] = list(set(hierarchy_nodes))
def get_file_rule(self, rule):
return mel.eval('workspace -query -fileRuleEntry "{}"'.format(rule))

View file

@ -0,0 +1,215 @@
import copy
from bson.objectid import ObjectId
from maya import cmds
import maya.api.OpenMaya as om
import pyblish.api
from openpype.pipeline import registered_host
from openpype.hosts.maya.api.lib import get_container_members
from openpype.hosts.maya.api.lib_rendersetup import get_shader_in_layer
def iter_history(nodes,
filter=om.MFn.kInvalid,
direction=om.MItDependencyGraph.kUpstream):
"""Iterate unique upstream history for list of nodes.
This acts as a replacement for maya.cmds.listHistory.
It's faster by about 2x-3x. It returns fewer nodes than
maya.cmds.listHistory because it excludes the input nodes
from the output (unless an input node is history
for another input node). It also excludes duplicates.
Args:
nodes (list): Maya node names to start search from.
filter (om.MFn.Type): Filter to only specific types.
e.g. to dag nodes using om.MFn.kDagNode
direction (om.MItDependencyGraph.Direction): Direction to traverse in.
Defaults to upstream.
Yields:
str: Node names in upstream history.
"""
if not nodes:
return
sel = om.MSelectionList()
for node in nodes:
sel.add(node)
it = om.MItDependencyGraph(sel.getDependNode(0)) # init iterator
handle = om.MObjectHandle
traversed = set()
fn_dep = om.MFnDependencyNode()
fn_dag = om.MFnDagNode()
for i in range(sel.length()):
start_node = sel.getDependNode(i)
start_node_hash = handle(start_node).hashCode()
if start_node_hash in traversed:
continue
it.resetTo(start_node,
filter=filter,
direction=direction)
while not it.isDone():
node = it.currentNode()
node_hash = handle(node).hashCode()
if node_hash in traversed:
it.prune()
it.next() # noqa: B305
continue
traversed.add(node_hash)
if node.hasFn(om.MFn.kDagNode):
fn_dag.setObject(node)
yield fn_dag.fullPathName()
else:
fn_dep.setObject(node)
yield fn_dep.name()
it.next() # noqa: B305
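# Illustration: a usage sketch mirroring how the collector below calls it;
# node names come from the open Maya scene:
meshes = cmds.ls(type="mesh", noIntermediate=True, long=True)
upstream = list(iter_history(meshes, filter=om.MFn.kShape))
# Upstream shape history, excluding the input nodes and any duplicates.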
def collect_input_containers(containers, nodes):
"""Collect containers that contain any of the node in `nodes`.
This will return any loaded Avalon container that contains at least one of
the nodes. As such, the Avalon container is an input for it. Or in short,
there are member nodes of that container.
Returns:
list: Input avalon containers
"""
# Assume the containers have collected their cached '_members' data
# in the collector.
return [container for container in containers
if any(node in container["_members"] for node in nodes)]
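# Illustration with hypothetical cached members:
containers = [
    {"objectName": "modelMain_CON", "_members": {"|char|body"}},
    {"objectName": "rigMain_CON", "_members": {"|rig|ctl"}},
]
collect_input_containers(containers, ["|char|body"])
# -> [the "modelMain_CON" container]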
class CollectUpstreamInputs(pyblish.api.InstancePlugin):
"""Collect input source inputs for this publish.
This will include `inputs` data of which loaded publishes were used in the
generation of this publish. This leaves an upstream trace to what was used
as input.
"""
label = "Collect Inputs"
order = pyblish.api.CollectorOrder + 0.34
hosts = ["maya"]
def process(self, instance):
# For large scenes, querying "host.ls()" can be relatively slow,
# e.g. up to a second. Many instances calling it quickly adds up,
# so we cache the result and trigger the query only once.
# todo: Instead of hidden cache make "CollectContainers" plug-in
cache_key = "__cache_containers"
scene_containers = instance.context.data.get(cache_key, None)
if scene_containers is None:
# Query the scenes' containers if there's no cache yet
host = registered_host()
scene_containers = list(host.ls())
for container in scene_containers:
# Embed the members into the container dictionary
container_members = set(get_container_members(container))
container["_members"] = container_members
instance.context.data["__cache_containers"] = scene_containers
# Collect the relevant input containers for this instance
if "renderlayer" in set(instance.data.get("families", [])):
# Special behavior for renderlayers
self.log.debug("Collecting renderlayer inputs....")
containers = self._collect_renderlayer_inputs(scene_containers,
instance)
else:
# Basic behavior
nodes = instance[:]
# Include any input connections of history with long names
# For optimization purposes only trace upstream from shape nodes
# looking for used dag nodes. This way having just a constraint
# on a transform is also ignored which tended to give irrelevant
# inputs for the majority of our use cases. We tend to care more
# about geometry inputs.
shapes = cmds.ls(nodes,
type=("mesh", "nurbsSurface", "nurbsCurve"),
noIntermediate=True)
if shapes:
history = list(iter_history(shapes, filter=om.MFn.kShape))
history = cmds.ls(history, long=True)
# Include the transforms in the collected history as shapes
# are excluded from containers
transforms = cmds.listRelatives(cmds.ls(history, shapes=True),
parent=True,
fullPath=True,
type="transform")
if transforms:
history.extend(transforms)
if history:
nodes = list(set(nodes + history))
# Collect containers for the given set of nodes
containers = collect_input_containers(scene_containers,
nodes)
inputs = [ObjectId(c["representation"]) for c in containers]
instance.data["inputRepresentations"] = inputs
self.log.info("Collected inputs: %s" % inputs)
def _collect_renderlayer_inputs(self, scene_containers, instance):
"""Collects inputs from nodes in renderlayer, incl. shaders + camera"""
# Get the renderlayer
renderlayer = instance.data.get("setMembers")
if renderlayer == "defaultRenderLayer":
# Assume all loaded containers in the scene are inputs
# for the masterlayer
return copy.deepcopy(scene_containers)
else:
# Get the members of the layer
members = cmds.editRenderLayerMembers(renderlayer,
query=True,
fullNames=True) or []
# In some cases invalid objects are returned from
# `editRenderLayerMembers` so we filter them out
members = cmds.ls(members, long=True)
# Include all children
children = cmds.listRelatives(members,
allDescendents=True,
fullPath=True) or []
members.extend(children)
# Include assigned shaders in renderlayer
shapes = cmds.ls(members, shapes=True, long=True)
shaders = set()
for shape in shapes:
shape_shaders = get_shader_in_layer(shape, layer=renderlayer)
if not shape_shaders:
continue
shaders.update(shape_shaders)
members.extend(shaders)
# Explicitly include the camera being rendered in renderlayer
cameras = instance.data.get("cameras")
members.extend(cameras)
containers = collect_input_containers(scene_containers, members)
return containers

View file

@ -1,25 +0,0 @@
from maya import cmds
import pyblish.api
class CollectMayaScene(pyblish.api.InstancePlugin):
"""Collect Maya Scene Data
"""
order = pyblish.api.CollectorOrder + 0.2
label = 'Collect Model Data'
families = ["mayaScene"]
def process(self, instance):
# Extract only current frame (override)
frame = cmds.currentTime(query=True)
instance.data["frameStart"] = frame
instance.data["frameEnd"] = frame
# make ftrack publishable
if instance.data.get('families'):
instance.data['families'].append('ftrack')
else:
instance.data['families'] = ['ftrack']

View file

@ -0,0 +1,26 @@
from maya import cmds
import pyblish.api
class CollectMayaSceneTime(pyblish.api.InstancePlugin):
"""Collect Maya Scene playback range
This allows to reproduce the playback range for the content to be loaded.
It does *not* limit the extracted data to only data inside that time range.
"""
order = pyblish.api.CollectorOrder + 0.2
label = 'Collect Maya Scene Time'
families = ["mayaScene"]
def process(self, instance):
instance.data.update({
"frameStart": cmds.playbackOptions(query=True, minTime=True),
"frameEnd": cmds.playbackOptions(query=True, maxTime=True),
"frameStartHandle": cmds.playbackOptions(query=True,
animationStartTime=True),
"frameEndHandle": cmds.playbackOptions(query=True,
animationEndTime=True)
})

View file

@ -293,6 +293,7 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
"source": filepath,
"expectedFiles": full_exp_files,
"publishRenderMetadataFolder": common_publish_meta_path,
"renderProducts": layer_render_products,
"resolutionWidth": lib.get_attr_in_layer(
"defaultResolution.width", layer=layer_name
),
@ -359,7 +360,6 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
instance.data["label"] = label
instance.data["farm"] = True
instance.data.update(data)
self.log.debug("data: {}".format(json.dumps(data, indent=4)))
def parse_options(self, render_globals):
"""Get all overrides with a value, skip those without.

View file

@ -1,22 +0,0 @@
from maya import cmds
import pyblish.api
class CollectRigData(pyblish.api.InstancePlugin):
"""Collect rig data
Ensures rigs are published to Ftrack.
"""
order = pyblish.api.CollectorOrder + 0.2
label = 'Collect Rig Data'
families = ["rig"]
def process(self, instance):
# make ftrack publishable
if instance.data.get('families'):
instance.data['families'].append('ftrack')
else:
instance.data['families'] = ['ftrack']

View file

@ -1,12 +1,12 @@
import os
import openpype.api
from maya import cmds
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractAssStandin(openpype.api.Extractor):
class ExtractAssStandin(publish.Extractor):
"""Extract the content of the instance to a ass file
Things to pay attention to:

View file

@ -1,14 +1,13 @@
import os
import json
import os
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import extract_alembic
from maya import cmds
class ExtractAssembly(openpype.api.Extractor):
class ExtractAssembly(publish.Extractor):
"""Produce an alembic of just point positions and normals.
Positions and normals are preserved, but nothing more,
@ -33,7 +32,7 @@ class ExtractAssembly(openpype.api.Extractor):
json.dump(instance.data["scenedata"], filepath, ensure_ascii=False)
self.log.info("Extracting point cache ..")
cmds.select(instance.data["hierarchy"])
cmds.select(instance.data["nodesHierarchy"])
# Run basic alembic exporter
extract_alembic(file=hierarchy_path,

View file

@ -3,17 +3,17 @@ import contextlib
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractAssProxy(openpype.api.Extractor):
class ExtractAssProxy(publish.Extractor):
"""Extract proxy model as Maya Ascii to use as arnold standin
"""
order = openpype.api.Extractor.order + 0.2
order = publish.Extractor.order + 0.2
label = "Ass Proxy (Maya ASCII)"
hosts = ["maya"]
families = ["ass"]

View file

@ -2,11 +2,11 @@ import os
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api import lib
class ExtractCameraAlembic(openpype.api.Extractor):
class ExtractCameraAlembic(publish.Extractor):
"""Extract a Camera as Alembic.
The camera gets baked to world space by default. Only when the instance's

View file

@ -5,7 +5,7 @@ import itertools
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api import lib
@ -78,7 +78,7 @@ def unlock(plug):
cmds.disconnectAttr(source, destination)
class ExtractCameraMayaScene(openpype.api.Extractor):
class ExtractCameraMayaScene(publish.Extractor):
"""Extract a Camera as Maya Scene.
This will create a duplicate of the camera that will be baked *with*

View file

@ -4,13 +4,13 @@ import os
from maya import cmds # noqa
import maya.mel as mel # noqa
import pyblish.api
import openpype.api
from openpype.hosts.maya.api.lib import maintained_selection
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
from openpype.hosts.maya.api import fbx
class ExtractFBX(openpype.api.Extractor):
class ExtractFBX(publish.Extractor):
"""Extract FBX from Maya.
This extracts reproducible FBX exports ignoring any of the

View file

@ -5,13 +5,11 @@ import json
from maya import cmds
from maya.api import OpenMaya as om
from bson.objectid import ObjectId
from openpype.pipeline import legacy_io
import openpype.api
from openpype.client import get_representation_by_id
from openpype.pipeline import legacy_io, publish
class ExtractLayout(openpype.api.Extractor):
class ExtractLayout(publish.Extractor):
"""Extract a layout."""
label = "Extract Layout"
@ -30,6 +28,8 @@ class ExtractLayout(openpype.api.Extractor):
instance.data["representations"] = []
json_data = []
# TODO representation queries can be refactored to be faster
project_name = legacy_io.active_project()
for asset in cmds.sets(str(instance), query=True):
# Find the container
@ -43,11 +43,11 @@ class ExtractLayout(openpype.api.Extractor):
representation_id = cmds.getAttr(f"{container}.representation")
representation = legacy_io.find_one(
{
"type": "representation",
"_id": ObjectId(representation_id)
}, projection={"parent": True, "context.family": True})
representation = get_representation_by_id(
project_name,
representation_id,
fields=["parent", "context.family"]
)
self.log.info(representation)
@ -102,9 +102,10 @@ class ExtractLayout(openpype.api.Extractor):
for i in range(0, len(t_matrix_list), row_length):
t_matrix.append(t_matrix_list[i:i + row_length])
json_element["transform_matrix"] = []
for row in t_matrix:
json_element["transform_matrix"].append(list(row))
json_element["transform_matrix"] = [
list(row)
for row in t_matrix
]
basis_list = [
1, 0, 0, 0,

View file

@ -13,8 +13,8 @@ from maya import cmds # noqa
import pyblish.api
import openpype.api
from openpype.pipeline import legacy_io
from openpype.lib import source_hash, run_subprocess
from openpype.pipeline import legacy_io, publish
from openpype.hosts.maya.api import lib
# Modes for transfer
@ -68,7 +68,7 @@ def find_paths_by_hash(texture_hash):
return legacy_io.distinct(key, {"type": "version"})
def maketx(source, destination, *args):
def maketx(source, destination, args, logger):
"""Make `.tx` using `maketx` with some default settings.
The settings are based on the defaults as used in Arnold's
@ -79,7 +79,8 @@ def maketx(source, destination, *args):
Args:
source (str): Path to source file.
destination (str): Writing destination path.
*args: Additional arguments for `maketx`.
args (list): Additional arguments for `maketx`.
logger (logging.Logger): Logger to log messages to.
Returns:
str: Output of `maketx` command.
@ -94,7 +95,7 @@ def maketx(source, destination, *args):
"OIIO tool not found in {}".format(maketx_path))
raise AssertionError("OIIO tool not found")
cmd = [
subprocess_args = [
maketx_path,
"-v", # verbose
"-u", # update mode
@ -103,27 +104,20 @@ def maketx(source, destination, *args):
"--checknan",
# use oiio-optimized settings for tile-size, planarconfig, metadata
"--oiio",
"--filter lanczos3",
escape_space(source)
"--filter", "lanczos3",
source
]
cmd.extend(args)
cmd.extend(["-o", escape_space(destination)])
subprocess_args.extend(args)
subprocess_args.extend(["-o", destination])
cmd = " ".join(cmd)
cmd = " ".join(subprocess_args)
logger.debug(cmd)
CREATE_NO_WINDOW = 0x08000000 # noqa
kwargs = dict(args=cmd, stderr=subprocess.STDOUT)
if sys.platform == "win32":
kwargs["creationflags"] = CREATE_NO_WINDOW
try:
out = subprocess.check_output(**kwargs)
except subprocess.CalledProcessError as exc:
print(exc)
import traceback
traceback.print_exc()
out = run_subprocess(subprocess_args)
except Exception:
logger.error("Maketx converion failed", exc_info=True)
raise
return out
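# Illustration: a usage sketch of the new signature (paths and the hash
# value are hypothetical):
import logging
out = maketx(
    source="C:/textures/diffuse.png",
    destination="C:/publish/resources/diffuse.tx",
    args=["--sattrib", "sourceHash", "abc123"],
    logger=logging.getLogger("maketx"),
)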
@ -161,7 +155,7 @@ def no_workspace_dir():
os.rmdir(fake_workspace_dir)
class ExtractLook(openpype.api.Extractor):
class ExtractLook(publish.Extractor):
"""Extract Look (Maya Scene + JSON)
Only extracts the sets (shadingEngines and the like) alongside a .json file
@ -505,7 +499,7 @@ class ExtractLook(openpype.api.Extractor):
args = []
if do_maketx:
args.append("maketx")
texture_hash = openpype.api.source_hash(filepath, *args)
texture_hash = source_hash(filepath, *args)
# If source has been published before with the same settings,
# then don't reprocess but hardlink from the original
@ -524,15 +518,17 @@ class ExtractLook(openpype.api.Extractor):
if do_maketx and ext != ".tx":
# Produce .tx file in staging if source file is not .tx
converted = os.path.join(staging, "resources", fname + ".tx")
additional_args = [
"--sattrib",
"sourceHash",
texture_hash
]
if linearize:
self.log.info("tx: converting sRGB -> linear")
colorconvert = "--colorconvert sRGB linear"
else:
colorconvert = ""
additional_args.extend(["--colorconvert", "sRGB", "linear"])
config_path = get_ocio_config_path("nuke-default")
color_config = "--colorconfig {0}".format(config_path)
additional_args.extend(["--colorconfig", config_path])
# Ensure folder exists
if not os.path.exists(os.path.dirname(converted)):
os.makedirs(os.path.dirname(converted))
@ -541,12 +537,8 @@ class ExtractLook(openpype.api.Extractor):
maketx(
filepath,
converted,
# Include `source-hash` as string metadata
"--sattrib",
"sourceHash",
escape_space(texture_hash),
colorconvert,
color_config
additional_args,
self.log
)
return converted, COPY, texture_hash

View file

@ -4,12 +4,11 @@ import os
from maya import cmds
import openpype.api
from openpype.hosts.maya.api.lib import maintained_selection
from openpype.pipeline import AVALON_CONTAINER_ID
from openpype.pipeline import AVALON_CONTAINER_ID, publish
class ExtractMayaSceneRaw(openpype.api.Extractor):
class ExtractMayaSceneRaw(publish.Extractor):
"""Extract as Maya Scene (raw).
This will preserve all references, construction history, etc.

View file

@ -4,11 +4,11 @@ import os
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api import lib
class ExtractModel(openpype.api.Extractor):
class ExtractModel(publish.Extractor):
"""Extract as Model (Maya Scene).
Only extracts contents based on the original "setMembers" data to ensure

View file

@ -2,11 +2,11 @@ import os
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractMultiverseLook(openpype.api.Extractor):
class ExtractMultiverseLook(publish.Extractor):
"""Extractor for Multiverse USD look data.
This will extract:

View file

@ -3,11 +3,11 @@ import six
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractMultiverseUsd(openpype.api.Extractor):
class ExtractMultiverseUsd(publish.Extractor):
"""Extractor for Multiverse USD Asset data.
This will extract settings for a Multiverse Write Asset operation:

View file

@ -2,11 +2,11 @@ import os
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractMultiverseUsdComposition(openpype.api.Extractor):
class ExtractMultiverseUsdComposition(publish.Extractor):
"""Extractor of Multiverse USD Composition data.
This will extract settings for a Multiverse Write Composition operation:

View file

@ -1,12 +1,12 @@
import os
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
from maya import cmds
class ExtractMultiverseUsdOverride(openpype.api.Extractor):
class ExtractMultiverseUsdOverride(publish.Extractor):
"""Extractor for Multiverse USD Override data.
This will extract settings for a Multiverse Write Override operation:

View file

@ -1,18 +1,16 @@
import os
import glob
import contextlib
import clique
import capture
from openpype.pipeline import publish
from openpype.hosts.maya.api import lib
import openpype.api
from maya import cmds
import pymel.core as pm
class ExtractPlayblast(openpype.api.Extractor):
class ExtractPlayblast(publish.Extractor):
"""Extract viewport playblast.
Takes review camera and creates review Quicktime video based on viewport

View file

@ -2,7 +2,7 @@ import os
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import (
extract_alembic,
suspended_refresh,
@ -11,7 +11,7 @@ from openpype.hosts.maya.api.lib import (
)
class ExtractAlembic(openpype.api.Extractor):
class ExtractAlembic(publish.Extractor):
"""Produce an alembic of just point positions and normals.
Positions and normals, uvs, creases are preserved, but nothing more,

View file

@ -4,11 +4,11 @@ import os
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractRedshiftProxy(openpype.api.Extractor):
class ExtractRedshiftProxy(publish.Extractor):
"""Extract the content of the instance to a redshift proxy file."""
label = "Redshift Proxy (.rs)"

View file

@ -1,10 +1,11 @@
import json
import os
import openpype.api
import json
import maya.app.renderSetup.model.renderSetup as renderSetup
from openpype.pipeline import publish
class ExtractRenderSetup(openpype.api.Extractor):
class ExtractRenderSetup(publish.Extractor):
"""
Produce renderSetup template file

Some files were not shown because too many files have changed in this diff.