diff --git a/.gitignore b/.gitignore index ea5b20eb69..4b773e97ed 100644 --- a/.gitignore +++ b/.gitignore @@ -107,3 +107,6 @@ website/.docusaurus mypy.ini tools/run_eventserver.* + +# Developer tools +tools/dev_* diff --git a/CHANGELOG.md b/CHANGELOG.md index 2a8e962085..0ffb6a996b 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,44 +1,94 @@ # Changelog -## [3.14.1-nightly.3](https://github.com/pypeclub/OpenPype/tree/HEAD) +## [3.14.2-nightly.4](https://github.com/pypeclub/OpenPype/tree/HEAD) -[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.0...HEAD) +[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.1...HEAD) + +**🆕 New features** + +- Nuke: Build workfile by template [\#3763](https://github.com/pypeclub/OpenPype/pull/3763) +- Houdini: Publishing workfiles [\#3697](https://github.com/pypeclub/OpenPype/pull/3697) +- Global: making collect audio plugin global [\#3679](https://github.com/pypeclub/OpenPype/pull/3679) + +**🚀 Enhancements** + +- Flame: Adding Creator's retimed shot and handles switch [\#3826](https://github.com/pypeclub/OpenPype/pull/3826) +- Flame: OpenPype submenu to batch and media manager [\#3825](https://github.com/pypeclub/OpenPype/pull/3825) +- General: Better pixmap scaling [\#3809](https://github.com/pypeclub/OpenPype/pull/3809) +- Photoshop: attempt to speed up ExtractImage [\#3793](https://github.com/pypeclub/OpenPype/pull/3793) +- SyncServer: Added cli commands for sync server [\#3765](https://github.com/pypeclub/OpenPype/pull/3765) +- Kitsu: Drop 'entities root' setting. [\#3739](https://github.com/pypeclub/OpenPype/pull/3739) + +**🐛 Bug fixes** + +- General: Fix Pattern access in client code [\#3828](https://github.com/pypeclub/OpenPype/pull/3828) +- Launcher: Skip opening last work file works for groups [\#3822](https://github.com/pypeclub/OpenPype/pull/3822) +- Maya: Publishing data key change [\#3811](https://github.com/pypeclub/OpenPype/pull/3811) +- Igniter: Fix status handling when version is already installed [\#3804](https://github.com/pypeclub/OpenPype/pull/3804) +- Resolve: Addon import is Python 2 compatible [\#3798](https://github.com/pypeclub/OpenPype/pull/3798) +- Hiero: retimed clip publishing is working [\#3792](https://github.com/pypeclub/OpenPype/pull/3792) +- nuke: validate write node is not failing due wrong type [\#3780](https://github.com/pypeclub/OpenPype/pull/3780) +- Fix - changed format of version string in pyproject.toml [\#3777](https://github.com/pypeclub/OpenPype/pull/3777) +- Ftrack status fix typo prgoress -\> progress [\#3761](https://github.com/pypeclub/OpenPype/pull/3761) +- Fix version resolution [\#3757](https://github.com/pypeclub/OpenPype/pull/3757) + +**🔀 Refactored code** + +- Photoshop: Use new Extractor location [\#3789](https://github.com/pypeclub/OpenPype/pull/3789) +- Blender: Use new Extractor location [\#3787](https://github.com/pypeclub/OpenPype/pull/3787) +- AfterEffects: Use new Extractor location [\#3784](https://github.com/pypeclub/OpenPype/pull/3784) +- General: Remove unused teshost [\#3773](https://github.com/pypeclub/OpenPype/pull/3773) +- General: Copied 'Extractor' plugin to publish pipeline [\#3771](https://github.com/pypeclub/OpenPype/pull/3771) +- General: Move queries of asset and representation links [\#3770](https://github.com/pypeclub/OpenPype/pull/3770) +- General: Move create project folders to pipeline [\#3768](https://github.com/pypeclub/OpenPype/pull/3768) +- General: Create project function moved to client code 
[\#3766](https://github.com/pypeclub/OpenPype/pull/3766) +- General: Move hostdirname functionality into host [\#3749](https://github.com/pypeclub/OpenPype/pull/3749) +- General: Move publish utils to pipeline [\#3745](https://github.com/pypeclub/OpenPype/pull/3745) +- Houdini: Define houdini as addon [\#3735](https://github.com/pypeclub/OpenPype/pull/3735) +- Fusion: Defined fusion as addon [\#3733](https://github.com/pypeclub/OpenPype/pull/3733) +- Flame: Defined flame as addon [\#3732](https://github.com/pypeclub/OpenPype/pull/3732) +- Resolve: Define resolve as addon [\#3727](https://github.com/pypeclub/OpenPype/pull/3727) + +**Merged pull requests:** + +- Standalone Publisher: Ignore empty labels, then still use name like other asset models [\#3779](https://github.com/pypeclub/OpenPype/pull/3779) +- Kitsu - sync\_all\_project - add list ignore\_projects [\#3776](https://github.com/pypeclub/OpenPype/pull/3776) + +## [3.14.1](https://github.com/pypeclub/OpenPype/tree/3.14.1) (2022-08-30) + +[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.1-nightly.4...3.14.1) ### 📖 Documentation - Documentation: Few updates [\#3698](https://github.com/pypeclub/OpenPype/pull/3698) -- Documentation: Settings development [\#3660](https://github.com/pypeclub/OpenPype/pull/3660) - -**🆕 New features** - -- Webpublisher:change create flatten image into tri state [\#3678](https://github.com/pypeclub/OpenPype/pull/3678) **🚀 Enhancements** +- General: Thumbnail can use project roots [\#3750](https://github.com/pypeclub/OpenPype/pull/3750) +- git: update gitignore [\#3722](https://github.com/pypeclub/OpenPype/pull/3722) - Settings: Remove settings lock on tray exit [\#3720](https://github.com/pypeclub/OpenPype/pull/3720) - General: Added helper getters to modules manager [\#3712](https://github.com/pypeclub/OpenPype/pull/3712) - Unreal: Define unreal as module and use host class [\#3701](https://github.com/pypeclub/OpenPype/pull/3701) - Settings: Lock settings UI session [\#3700](https://github.com/pypeclub/OpenPype/pull/3700) - General: Benevolent context label collector [\#3686](https://github.com/pypeclub/OpenPype/pull/3686) -- Ftrack: Store ftrack entities on hierarchy integration to instances [\#3677](https://github.com/pypeclub/OpenPype/pull/3677) -- Ftrack: More logs related to auto sync value change [\#3671](https://github.com/pypeclub/OpenPype/pull/3671) -- Blender: ops refresh manager after process events [\#3663](https://github.com/pypeclub/OpenPype/pull/3663) **🐛 Bug fixes** +- Maya: Fix typo in getPanel argument `with\_focus` -\> `withFocus` [\#3753](https://github.com/pypeclub/OpenPype/pull/3753) +- General: Smaller fixes of imports [\#3748](https://github.com/pypeclub/OpenPype/pull/3748) - General: Logger tweaks [\#3741](https://github.com/pypeclub/OpenPype/pull/3741) +- Nuke: missing job dependency if multiple bake streams [\#3737](https://github.com/pypeclub/OpenPype/pull/3737) - Nuke: color-space settings from anatomy is working [\#3721](https://github.com/pypeclub/OpenPype/pull/3721) - Settings: Fix studio default anatomy save [\#3716](https://github.com/pypeclub/OpenPype/pull/3716) - Maya: Use project name instead of project code [\#3709](https://github.com/pypeclub/OpenPype/pull/3709) - Settings: Fix project overrides save [\#3708](https://github.com/pypeclub/OpenPype/pull/3708) - Workfiles tool: Fix published workfile filtering [\#3704](https://github.com/pypeclub/OpenPype/pull/3704) - PS, AE: Provide default variant value for workfile subset 
[\#3703](https://github.com/pypeclub/OpenPype/pull/3703) -- RoyalRender: handle host name that is not set [\#3695](https://github.com/pypeclub/OpenPype/pull/3695) -- Flame: retime is working on clip publishing [\#3684](https://github.com/pypeclub/OpenPype/pull/3684) - Webpublisher: added check for empty context [\#3682](https://github.com/pypeclub/OpenPype/pull/3682) **🔀 Refactored code** +- General: Move delivery logic to pipeline [\#3751](https://github.com/pypeclub/OpenPype/pull/3751) - General: Host addons cleanup [\#3744](https://github.com/pypeclub/OpenPype/pull/3744) - Webpublisher: Webpublisher is used as addon [\#3740](https://github.com/pypeclub/OpenPype/pull/3740) - Photoshop: Defined photoshop as addon [\#3736](https://github.com/pypeclub/OpenPype/pull/3736) @@ -62,7 +112,6 @@ - Hiero: Define hiero as module [\#3717](https://github.com/pypeclub/OpenPype/pull/3717) - Deadline: better logging for DL webservice failures [\#3694](https://github.com/pypeclub/OpenPype/pull/3694) -- Photoshop: resize saved images in ExtractReview for ffmpeg [\#3676](https://github.com/pypeclub/OpenPype/pull/3676) ## [3.14.0](https://github.com/pypeclub/OpenPype/tree/3.14.0) (2022-08-18) @@ -72,71 +121,16 @@ - Ftrack: Addiotional component metadata [\#3685](https://github.com/pypeclub/OpenPype/pull/3685) - Ftrack: Set task status on farm publishing [\#3680](https://github.com/pypeclub/OpenPype/pull/3680) -- Ftrack: Set task status on task creation in integrate hierarchy [\#3675](https://github.com/pypeclub/OpenPype/pull/3675) -- Maya: Disable rendering of all lights for render instances submitted through Deadline. [\#3661](https://github.com/pypeclub/OpenPype/pull/3661) -- General: Optimized OCIO configs [\#3650](https://github.com/pypeclub/OpenPype/pull/3650) **🐛 Bug fixes** - General: Switch from hero version to versioned works [\#3691](https://github.com/pypeclub/OpenPype/pull/3691) -- General: Fix finding of last version [\#3656](https://github.com/pypeclub/OpenPype/pull/3656) -- General: Extract Review can scale with pixel aspect ratio [\#3644](https://github.com/pypeclub/OpenPype/pull/3644) -- Maya: Refactor moved usage of CreateRender settings [\#3643](https://github.com/pypeclub/OpenPype/pull/3643) -- General: Hero version representations have full context [\#3638](https://github.com/pypeclub/OpenPype/pull/3638) -- Nuke: color settings for render write node is working now [\#3632](https://github.com/pypeclub/OpenPype/pull/3632) -- Maya: FBX support for update in reference loader [\#3631](https://github.com/pypeclub/OpenPype/pull/3631) - -**🔀 Refactored code** - -- General: Use client projects getter [\#3673](https://github.com/pypeclub/OpenPype/pull/3673) -- Resolve: Match folder structure to other hosts [\#3653](https://github.com/pypeclub/OpenPype/pull/3653) -- Maya: Hosts as modules [\#3647](https://github.com/pypeclub/OpenPype/pull/3647) -- TimersManager: Plugins are in timers manager module [\#3639](https://github.com/pypeclub/OpenPype/pull/3639) -- General: Move workfiles functions into pipeline [\#3637](https://github.com/pypeclub/OpenPype/pull/3637) - -**Merged pull requests:** - -- Deadline: Global job pre load is not Pype 2 compatible [\#3666](https://github.com/pypeclub/OpenPype/pull/3666) -- Maya: Remove unused get current renderer logic [\#3645](https://github.com/pypeclub/OpenPype/pull/3645) -- Kitsu|Fix: Movie project type fails & first loop children names [\#3636](https://github.com/pypeclub/OpenPype/pull/3636) -- fix the bug of failing to extract look when UDIMs format used in 
AiImage [\#3628](https://github.com/pypeclub/OpenPype/pull/3628) +- Flame: retime is working on clip publishing [\#3684](https://github.com/pypeclub/OpenPype/pull/3684) ## [3.13.0](https://github.com/pypeclub/OpenPype/tree/3.13.0) (2022-08-09) [Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.13.0-nightly.1...3.13.0) -**🆕 New features** - -- Support for mutliple installed versions - 3.13 [\#3605](https://github.com/pypeclub/OpenPype/pull/3605) - -**🚀 Enhancements** - -- Editorial: Mix audio use side file for ffmpeg filters [\#3630](https://github.com/pypeclub/OpenPype/pull/3630) -- Ftrack: Comment template can contain optional keys [\#3615](https://github.com/pypeclub/OpenPype/pull/3615) -- Ftrack: Add more metadata to ftrack components [\#3612](https://github.com/pypeclub/OpenPype/pull/3612) - -**🐛 Bug fixes** - -- Maya: fix aov separator in Redshift [\#3625](https://github.com/pypeclub/OpenPype/pull/3625) -- Fix for multi-version build on Mac [\#3622](https://github.com/pypeclub/OpenPype/pull/3622) -- Ftrack: Sync hierarchical attributes can handle new created entities [\#3621](https://github.com/pypeclub/OpenPype/pull/3621) -- General: Extract review aspect ratio scale is calculated by ffmpeg [\#3620](https://github.com/pypeclub/OpenPype/pull/3620) -- Maya: Fix types of default settings [\#3617](https://github.com/pypeclub/OpenPype/pull/3617) -- Integrator: Don't force to have dot before frame [\#3611](https://github.com/pypeclub/OpenPype/pull/3611) -- AfterEffects: refactored integrate doesnt work formulti frame publishes [\#3610](https://github.com/pypeclub/OpenPype/pull/3610) -- Maya look data contents fails with custom attribute on group [\#3607](https://github.com/pypeclub/OpenPype/pull/3607) -- TrayPublisher: Fix wrong conflict merge [\#3600](https://github.com/pypeclub/OpenPype/pull/3600) - -**🔀 Refactored code** - -- General: Plugin settings handled by plugins [\#3623](https://github.com/pypeclub/OpenPype/pull/3623) -- General: Naive implementation of document create, update, delete [\#3601](https://github.com/pypeclub/OpenPype/pull/3601) - -**Merged pull requests:** - -- Webpublisher: timeout for PS studio processing [\#3619](https://github.com/pypeclub/OpenPype/pull/3619) -- Core: translated validate\_containers.py into New publisher style [\#3614](https://github.com/pypeclub/OpenPype/pull/3614) - ## [3.12.2](https://github.com/pypeclub/OpenPype/tree/3.12.2) (2022-07-27) [Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.2-nightly.4...3.12.2) diff --git a/README.md b/README.md index 8a1e88b1f3..b8d33e65b3 100644 --- a/README.md +++ b/README.md @@ -41,7 +41,7 @@ It can be built and ran on all common platforms. We develop and test on the foll - **Linux** - **Ubuntu** 20.04 LTS - **Centos** 7 -- **Mac OSX** +- **Mac OSX** - **10.15** Catalina - **11.1** Big Sur (using Rosetta2) @@ -287,6 +287,14 @@ To run tests, execute `.\tools\run_tests(.ps1|.sh)`. **Note that it needs existing virtual environment.** + +Developer tools +------------- + +If you wish to add your own tools to the `.\tools` folder without git tracking them, you can do so by giving them the `dev_` prefix (example: `dev_clear_pyc(.ps1|.sh)`).
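The `dev_` prefix pairs with the `tools/dev_*` pattern added to `.gitignore` above. As an illustration only, a minimal sketch of what such an untracked helper could look like in Python; the filename `tools/dev_clear_pyc.py` and its behaviour are hypothetical, not a tool shipped with the repository:

```python
# Hypothetical tools/dev_clear_pyc.py - ignored by git thanks to the
# 'tools/dev_*' pattern added to .gitignore in this change.
import os
import shutil


def clear_pyc(root="."):
    """Remove '__pycache__' folders and stray '*.pyc' files under 'root'."""
    for dirpath, dirnames, filenames in os.walk(root):
        if "__pycache__" in dirnames:
            shutil.rmtree(os.path.join(dirpath, "__pycache__"))
            dirnames.remove("__pycache__")
        for filename in filenames:
            if filename.endswith(".pyc"):
                os.remove(os.path.join(dirpath, filename))


if __name__ == "__main__":
    clear_pyc()
```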
+ + + ## Contributors ✨ Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)): diff --git a/igniter/bootstrap_repos.py b/igniter/bootstrap_repos.py index c5003b062e..ccc9d4ac52 100644 --- a/igniter/bootstrap_repos.py +++ b/igniter/bootstrap_repos.py @@ -63,7 +63,7 @@ class OpenPypeVersion(semver.VersionInfo): """ staging = False path = None - _VERSION_REGEX = re.compile(r"(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$") # noqa: E501 + _VERSION_REGEX = re.compile(r"(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?") # noqa: E501 _installed_version = None def __init__(self, *args, **kwargs): diff --git a/igniter/install_dialog.py b/igniter/install_dialog.py index b09529f5c5..65ddd58735 100644 --- a/igniter/install_dialog.py +++ b/igniter/install_dialog.py @@ -388,8 +388,11 @@ class InstallDialog(QtWidgets.QDialog): install_thread.start() def _installation_finished(self): + # TODO we should find out why status can be set to 'None' + # - 'InstallThread.run' should handle all cases so it is not clear + # where that comes from status = self._install_thread.result() - if status >= 0: + if status is not None and status >= 0: self._update_progress(100) QtWidgets.QApplication.processEvents() self.done(3) diff --git a/openpype/action.py b/openpype/action.py index 50741875e4..de9cdee010 100644 --- a/openpype/action.py +++ b/openpype/action.py @@ -1,44 +1,82 @@ -# absolute_import is needed to counter the `module has no cmds error` in Maya -from __future__ import absolute_import - +import warnings +import functools import pyblish.api -def get_errored_instances_from_context(context): - - instances = list() - for result in context.data["results"]: - if result["instance"] is None: - # When instance is None we are on the "context" result - continue - - if result["error"]: - instances.append(result["instance"]) - - return instances +class ActionDeprecatedWarning(DeprecationWarning): + pass -def get_errored_plugins_from_data(context): - """Get all failed validation plugins - - Args: - context (object): - - Returns: - list of plugins which failed during validation +def deprecated(new_destination): + """Mark functions as deprecated. + It will result in a warning being emitted when the function is used. """ - plugins = list() - results = context.data.get("results", []) - for result in results: - if result["success"] is True: - continue - plugins.append(result["plugin"]) + func = None + if callable(new_destination): + func = new_destination + new_destination = None - return plugins + def _decorator(decorated_func): + if new_destination is None: + warning_message = ( + " Please check content of deprecated function to figure out" + " possible replacement."
+ ) + else: + warning_message = " Please replace your usage with '{}'.".format( + new_destination + ) + + @functools.wraps(decorated_func) + def wrapper(*args, **kwargs): + warnings.simplefilter("always", ActionDeprecatedWarning) + warnings.warn( + ( + "Call to deprecated function '{}'" + "\nFunction was moved or removed.{}" + ).format(decorated_func.__name__, warning_message), + category=ActionDeprecatedWarning, + stacklevel=4 + ) + return decorated_func(*args, **kwargs) + return wrapper + + if func is None: + return _decorator + return _decorator(func) +@deprecated("openpype.pipeline.publish.get_errored_instances_from_context") +def get_errored_instances_from_context(context): + """ + Deprecated: + Since 3.14.*, will be removed in 3.16.* or later. + """ + + from openpype.pipeline.publish import get_errored_instances_from_context + + return get_errored_instances_from_context(context) + + +@deprecated("openpype.pipeline.publish.get_errored_plugins_from_context") +def get_errored_plugins_from_data(context): + """ + Deprecated: + Since 3.14.*, will be removed in 3.16.* or later. + """ + + from openpype.pipeline.publish import get_errored_plugins_from_context + + return get_errored_plugins_from_context(context) + + +# 'RepairAction' and 'RepairContextAction' were moved to +# 'openpype.pipeline.publish', please change your imports. +# There is no "reasonable" way to mark these classes as deprecated and show +# a warning on wrong import. +# Deprecated since 3.14.*, will be removed in 3.16.* class RepairAction(pyblish.api.Action): """Repairs the action @@ -65,6 +103,7 @@ class RepairAction(pyblish.api.Action): plugin.repair(instance) +# Deprecated since 3.14.*, will be removed in 3.16.* class RepairContextAction(pyblish.api.Action): """Repairs the action diff --git a/openpype/api.py b/openpype/api.py index c2227c1a52..0466eb7f78 100644 --- a/openpype/api.py +++ b/openpype/api.py @@ -49,7 +49,6 @@ from .plugin import ( ValidateContentsOrder, ValidateSceneOrder, ValidateMeshOrder, - ValidationException ) # temporary fix, might @@ -94,8 +93,6 @@ __all__ = [ "RepairAction", "RepairContextAction", - "ValidationException", - # get contextual data "version_up", "get_asset", diff --git a/openpype/client/__init__.py b/openpype/client/__init__.py index 64a82334d9..7831afd8ad 100644 --- a/openpype/client/__init__.py +++ b/openpype/client/__init__.py @@ -45,6 +45,17 @@ from .entities import ( get_workfile_info, ) +from .entity_links import ( + get_linked_asset_ids, + get_linked_assets, + get_linked_representation_id, +) + +from .operations import ( + create_project, +) + + __all__ = ( "OpenPypeMongoConnection", @@ -88,4 +99,10 @@ __all__ = ( "get_thumbnail_id_from_source", "get_workfile_info", + + "get_linked_asset_ids", + "get_linked_assets", + "get_linked_representation_id", + + "create_project", ) diff --git a/openpype/client/entities.py b/openpype/client/entities.py index 3d2730a17c..43afccf2f1 100644 --- a/openpype/client/entities.py +++ b/openpype/client/entities.py @@ -14,6 +14,8 @@ from bson.objectid import ObjectId from .mongo import get_project_database, get_project_connection +PatternType = type(re.compile("")) + def _prepare_fields(fields, required_fields=None): if not fields: @@ -32,17 +34,37 @@ def _prepare_fields(fields, required_fields=None): return output -def _convert_id(in_id): +def convert_id(in_id): + """Helper function for conversion of id from string to ObjectId. + + Args: + in_id (Union[str, ObjectId, Any]): Entity id that should be converted + to the right type for queries.
+ + Returns: + Union[ObjectId, Any]: Converted ids to ObjectId or in type. + """ + if isinstance(in_id, six.string_types): return ObjectId(in_id) return in_id -def _convert_ids(in_ids): +def convert_ids(in_ids): + """Helper function for conversion of ids from string to ObjectId. + + Args: + in_ids (Iterable[Union[str, ObjectId, Any]]): List of entity ids that + should be converted to right type for queries. + + Returns: + List[ObjectId]: Converted ids to ObjectId. + """ + _output = set() for in_id in in_ids: if in_id is not None: - _output.add(_convert_id(in_id)) + _output.add(convert_id(in_id)) return list(_output) @@ -58,7 +80,7 @@ def get_projects(active=True, inactive=False, fields=None): yield project_doc -def get_project(project_name, active=True, inactive=False, fields=None): +def get_project(project_name, active=True, inactive=True, fields=None): # Skip if both are disabled if not active and not inactive: return None @@ -115,7 +137,7 @@ def get_asset_by_id(project_name, asset_id, fields=None): None: Asset was not found by id. """ - asset_id = _convert_id(asset_id) + asset_id = convert_id(asset_id) if not asset_id: return None @@ -196,7 +218,7 @@ def _get_assets( query_filter = {"type": {"$in": asset_types}} if asset_ids is not None: - asset_ids = _convert_ids(asset_ids) + asset_ids = convert_ids(asset_ids) if not asset_ids: return [] query_filter["_id"] = {"$in": asset_ids} @@ -207,7 +229,7 @@ def _get_assets( query_filter["name"] = {"$in": list(asset_names)} if parent_ids is not None: - parent_ids = _convert_ids(parent_ids) + parent_ids = convert_ids(parent_ids) if not parent_ids: return [] query_filter["data.visualParent"] = {"$in": parent_ids} @@ -307,7 +329,7 @@ def get_asset_ids_with_subsets(project_name, asset_ids=None): "type": "subset" } if asset_ids is not None: - asset_ids = _convert_ids(asset_ids) + asset_ids = convert_ids(asset_ids) if not asset_ids: return [] subset_query["parent"] = {"$in": asset_ids} @@ -347,7 +369,7 @@ def get_subset_by_id(project_name, subset_id, fields=None): Dict: Subset document which can be reduced to specified 'fields'. """ - subset_id = _convert_id(subset_id) + subset_id = convert_id(subset_id) if not subset_id: return None @@ -374,7 +396,7 @@ def get_subset_by_name(project_name, subset_name, asset_id, fields=None): if not subset_name: return None - asset_id = _convert_id(asset_id) + asset_id = convert_id(asset_id) if not asset_id: return None @@ -428,13 +450,13 @@ def get_subsets( query_filter = {"type": {"$in": subset_types}} if asset_ids is not None: - asset_ids = _convert_ids(asset_ids) + asset_ids = convert_ids(asset_ids) if not asset_ids: return [] query_filter["parent"] = {"$in": asset_ids} if subset_ids is not None: - subset_ids = _convert_ids(subset_ids) + subset_ids = convert_ids(subset_ids) if not subset_ids: return [] query_filter["_id"] = {"$in": subset_ids} @@ -449,7 +471,7 @@ def get_subsets( for asset_id, names in names_by_asset_ids.items(): if asset_id and names: or_query.append({ - "parent": _convert_id(asset_id), + "parent": convert_id(asset_id), "name": {"$in": list(names)} }) if not or_query: @@ -510,7 +532,7 @@ def get_version_by_id(project_name, version_id, fields=None): Dict: Version document which can be reduced to specified 'fields'. """ - version_id = _convert_id(version_id) + version_id = convert_id(version_id) if not version_id: return None @@ -537,7 +559,7 @@ def get_version_by_name(project_name, version, subset_id, fields=None): Dict: Version document which can be reduced to specified 'fields'. 
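Since `_convert_id`/`_convert_ids` are now public as `convert_id`/`convert_ids`, client code can normalize ids before building queries. A short usage sketch (the hex id is a placeholder):

```python
from bson.objectid import ObjectId
from openpype.client.entities import convert_id, convert_ids

# Strings are converted, existing ObjectId instances pass through unchanged.
entity_id = convert_id("633b2d6c8f5b9a2f1c7e4d01")
assert isinstance(entity_id, ObjectId)

# 'convert_ids' additionally deduplicates and drops 'None' values.
ids = convert_ids(["633b2d6c8f5b9a2f1c7e4d01", entity_id, None])
assert ids == [entity_id]
```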
""" - subset_id = _convert_id(subset_id) + subset_id = convert_id(subset_id) if not subset_id: return None @@ -567,7 +589,7 @@ def version_is_latest(project_name, version_id): bool: True if is latest version from subset else False. """ - version_id = _convert_id(version_id) + version_id = convert_id(version_id) if not version_id: return False version_doc = get_version_by_id( @@ -610,13 +632,13 @@ def _get_versions( query_filter = {"type": {"$in": version_types}} if subset_ids is not None: - subset_ids = _convert_ids(subset_ids) + subset_ids = convert_ids(subset_ids) if not subset_ids: return [] query_filter["parent"] = {"$in": subset_ids} if version_ids is not None: - version_ids = _convert_ids(version_ids) + version_ids = convert_ids(version_ids) if not version_ids: return [] query_filter["_id"] = {"$in": version_ids} @@ -690,7 +712,7 @@ def get_hero_version_by_subset_id(project_name, subset_id, fields=None): Dict: Hero version entity data. """ - subset_id = _convert_id(subset_id) + subset_id = convert_id(subset_id) if not subset_id: return None @@ -720,7 +742,7 @@ def get_hero_version_by_id(project_name, version_id, fields=None): Dict: Hero version entity data. """ - version_id = _convert_id(version_id) + version_id = convert_id(version_id) if not version_id: return None @@ -786,7 +808,7 @@ def get_output_link_versions(project_name, version_id, fields=None): links for passed version. """ - version_id = _convert_id(version_id) + version_id = convert_id(version_id) if not version_id: return [] @@ -812,7 +834,7 @@ def get_last_versions(project_name, subset_ids, fields=None): dict[ObjectId, int]: Key is subset id and value is last version name. """ - subset_ids = _convert_ids(subset_ids) + subset_ids = convert_ids(subset_ids) if not subset_ids: return {} @@ -898,7 +920,7 @@ def get_last_version_by_subset_id(project_name, subset_id, fields=None): Dict: Version document which can be reduced to specified 'fields'. """ - subset_id = _convert_id(subset_id) + subset_id = convert_id(subset_id) if not subset_id: return None @@ -971,7 +993,7 @@ def get_representation_by_id(project_name, representation_id, fields=None): "type": {"$in": repre_types} } if representation_id is not None: - query_filter["_id"] = _convert_id(representation_id) + query_filter["_id"] = convert_id(representation_id) conn = get_project_connection(project_name) @@ -996,7 +1018,7 @@ def get_representation_by_name( to specified 'fields'. 
""" - version_id = _convert_id(version_id) + version_id = convert_id(version_id) if not version_id or not representation_name: return None repre_types = ["representation", "archived_representations"] @@ -1034,11 +1056,11 @@ def _regex_filters(filters): for key, value in filters.items(): regexes = [] a_values = [] - if isinstance(value, re.Pattern): + if isinstance(value, PatternType): regexes.append(value) elif isinstance(value, (list, tuple, set)): for item in value: - if isinstance(item, re.Pattern): + if isinstance(item, PatternType): regexes.append(item) else: a_values.append(item) @@ -1089,7 +1111,7 @@ def _get_representations( query_filter = {"type": {"$in": repre_types}} if representation_ids is not None: - representation_ids = _convert_ids(representation_ids) + representation_ids = convert_ids(representation_ids) if not representation_ids: return default_output query_filter["_id"] = {"$in": representation_ids} @@ -1100,7 +1122,7 @@ def _get_representations( query_filter["name"] = {"$in": list(representation_names)} if version_ids is not None: - version_ids = _convert_ids(version_ids) + version_ids = convert_ids(version_ids) if not version_ids: return default_output query_filter["parent"] = {"$in": version_ids} @@ -1111,7 +1133,7 @@ def _get_representations( for version_id, names in names_by_version_ids.items(): if version_id and names: or_query.append({ - "parent": _convert_id(version_id), + "parent": convert_id(version_id), "name": {"$in": list(names)} }) if not or_query: @@ -1174,7 +1196,7 @@ def get_representations( as filter. Filter ignored if 'None' is passed. version_ids (Iterable[str]): Subset ids used as parent filter. Filter ignored if 'None' is passed. - context_filters (Dict[str, List[str, re.Pattern]]): Filter by + context_filters (Dict[str, List[str, PatternType]]): Filter by representation context fields. names_by_version_ids (dict[ObjectId, list[str]]): Complex filtering using version ids and list of names under the version. @@ -1220,7 +1242,7 @@ def get_archived_representations( as filter. Filter ignored if 'None' is passed. version_ids (Iterable[str]): Subset ids used as parent filter. Filter ignored if 'None' is passed. - context_filters (Dict[str, List[str, re.Pattern]]): Filter by + context_filters (Dict[str, List[str, PatternType]]): Filter by representation context fields. names_by_version_ids (dict[ObjectId, List[str]]): Complex filtering using version ids and list of names under the version. 
@@ -1361,7 +1383,7 @@ def get_thumbnail_id_from_source(project_name, src_type, src_id): if not src_type or not src_id: return None - query_filter = {"_id": _convert_id(src_id)} + query_filter = {"_id": convert_id(src_id)} conn = get_project_connection(project_name) src_doc = conn.find_one(query_filter, {"data.thumbnail_id"}) @@ -1388,7 +1410,7 @@ def get_thumbnails(project_name, thumbnail_ids, fields=None): """ if thumbnail_ids: - thumbnail_ids = _convert_ids(thumbnail_ids) + thumbnail_ids = convert_ids(thumbnail_ids) if not thumbnail_ids: return [] @@ -1416,7 +1438,7 @@ def get_thumbnail(project_name, thumbnail_id, fields=None): if not thumbnail_id: return None - query_filter = {"type": "thumbnail", "_id": _convert_id(thumbnail_id)} + query_filter = {"type": "thumbnail", "_id": convert_id(thumbnail_id)} conn = get_project_connection(project_name) return conn.find_one(query_filter, _prepare_fields(fields)) @@ -1444,7 +1466,7 @@ def get_workfile_info( query_filter = { "type": "workfile", - "parent": _convert_id(asset_id), + "parent": convert_id(asset_id), "task_name": task_name, "filename": filename } diff --git a/openpype/client/entity_links.py b/openpype/client/entity_links.py new file mode 100644 index 0000000000..66214f469c --- /dev/null +++ b/openpype/client/entity_links.py @@ -0,0 +1,232 @@ +from .mongo import get_project_connection +from .entities import ( + get_assets, + get_asset_by_id, + get_representation_by_id, + convert_id, +) + + +def get_linked_asset_ids(project_name, asset_doc=None, asset_id=None): + """Extract linked asset ids from asset document. + + One of asset document or asset id must be passed. + + Note: + Asset links currently work only from asset to asset. + + Args: + project_name (str): Name of project where to look for the asset. + asset_doc (dict): Asset document from DB. + asset_id (Union[ObjectId, str]): Asset id. Can be used instead of + asset document. + + Returns: + List[Union[ObjectId, str]]: Asset ids of input links. + """ + + output = [] + if not asset_doc and not asset_id: + return output + + if not asset_doc: + asset_doc = get_asset_by_id( + project_name, asset_id, fields=["data.inputLinks"] + ) + + input_links = asset_doc["data"].get("inputLinks") + if not input_links: + return output + + for item in input_links: + # Backwards compatibility for "_id" key which was replaced with + # "id" + if "_id" in item: + link_id = item["_id"] + else: + link_id = item["id"] + output.append(link_id) + return output + + +def get_linked_assets( + project_name, asset_doc=None, asset_id=None, fields=None +): + """Return linked assets based on passed asset document. + + One of asset document or asset id must be passed. + + Args: + project_name (str): Name of project where to look for queried entities. + asset_doc (Dict[str, Any]): Asset document from database. + asset_id (Union[ObjectId, str]): Asset id. Can be used instead of + asset document. + fields (Iterable[str]): Fields that should be returned. All fields are + returned if 'None' is passed. + + Returns: + List[Dict[str, Any]]: Asset documents of input links for passed + asset doc. + """ + + if not asset_doc: + if not asset_id: + return [] + asset_doc = get_asset_by_id( + project_name, + asset_id, + fields=["data.inputLinks"] + ) + if not asset_doc: + return [] + + link_ids = get_linked_asset_ids(project_name, asset_doc=asset_doc) + if not link_ids: + return [] + + return list(get_assets(project_name, asset_ids=link_ids, fields=fields)) + + +def get_linked_representation_id( + project_name, repre_doc=None, repre_id=None, link_type=None, max_depth=None +): + """Returns list of linked ids of particular type (if provided).
+ + One of representation document or representation id must be passed. + Note: + Representation links currently work only from a representation through + its version back to representations. + + Args: + project_name (str): Name of project where to look for links. + repre_doc (Dict[str, Any]): Representation document. + repre_id (Union[ObjectId, str]): Representation id. + link_type (str): Type of link (e.g. 'reference', ...). + max_depth (int): Limit recursion level. Default: 0 + + Returns: + List[ObjectId]: Linked representation ids. + """ + + if repre_doc: + repre_id = repre_doc["_id"] + + if repre_id: + repre_id = convert_id(repre_id) + + if not repre_id and not repre_doc: + return [] + + version_id = None + if repre_doc: + version_id = repre_doc.get("parent") + + if not version_id: + repre_doc = get_representation_by_id( + project_name, repre_id, fields=["parent"] + ) + version_id = repre_doc["parent"] + + if not version_id: + return [] + + if max_depth is None: + max_depth = 0 + + match = { + "_id": version_id, + "type": {"$in": ["version", "hero_version"]} + } + + graph_lookup = { + "from": project_name, + "startWith": "$data.inputLinks.id", + "connectFromField": "data.inputLinks.id", + "connectToField": "_id", + "as": "outputs_recursive", + "depthField": "depth" + } + if max_depth != 0: + # We offset by -1 since 0 basically means no recursion + # but the recursion only happens after the initial lookup + # for outputs. + graph_lookup["maxDepth"] = max_depth - 1 + + query_pipeline = [ + # Match + {"$match": match}, + # Recursive graph lookup for inputs + {"$graphLookup": graph_lookup} + ] + + conn = get_project_connection(project_name) + result = conn.aggregate(query_pipeline) + referenced_version_ids = _process_referenced_pipeline_result( + result, link_type + ) + if not referenced_version_ids: + return [] + + ref_ids = conn.distinct( + "_id", + filter={ + "parent": {"$in": list(referenced_version_ids)}, + "type": "representation" + } + ) + + return list(ref_ids) + + +def _process_referenced_pipeline_result(result, link_type): + """Filters result from pipeline for particular link_type. + + Pipeline cannot use link_type directly in a query.
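A usage sketch of the traversal above; the project name, representation id and link type are placeholders:

```python
from openpype.client import get_linked_representation_id

# Representations referenced by the version of the given representation,
# following 'reference' links up to two levels deep.
linked_repre_ids = get_linked_representation_id(
    "my_project",
    repre_id="633b2d6c8f5b9a2f1c7e4d01",
    link_type="reference",
    max_depth=2,
)
print(linked_repre_ids)
```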
+ + Returns: + Set[ObjectId]: Ids of versions linked with the requested link type. + """ + + referenced_version_ids = set() + correctly_linked_ids = set() + for item in result: + input_links = item["data"].get("inputLinks") + if not input_links: + continue + + _filter_input_links( + input_links, + link_type, + correctly_linked_ids + ) + + # outputs_recursive come in random order, sort by depth + outputs_recursive = item.get("outputs_recursive") + if not outputs_recursive: + continue + + for output in sorted(outputs_recursive, key=lambda o: o["depth"]): + output_links = output["data"].get("inputLinks") + if not output_links: + continue + + # Leaf + if output["_id"] not in correctly_linked_ids: + continue + + _filter_input_links( + output_links, + link_type, + correctly_linked_ids + ) + + referenced_version_ids.add(output["_id"]) + + return referenced_version_ids + + +def _filter_input_links(input_links, link_type, correctly_linked_ids): + for input_link in input_links: + if link_type and input_link["type"] != link_type: + continue + + link_id = input_link.get("id") or input_link.get("_id") + if link_id is not None: + correctly_linked_ids.add(link_id) diff --git a/openpype/client/operations.py b/openpype/client/operations.py index c0716ee109..48e8645726 100644 --- a/openpype/client/operations.py +++ b/openpype/client/operations.py @@ -9,6 +9,7 @@ from bson.objectid import ObjectId from pymongo import DeleteOne, InsertOne, UpdateOne from .mongo import get_project_connection +from .entities import get_project REMOVED_VALUE = object() @@ -24,6 +25,7 @@ CURRENT_SUBSET_SCHEMA = "openpype:subset-3.0" CURRENT_VERSION_SCHEMA = "openpype:version-3.0" CURRENT_REPRESENTATION_SCHEMA = "openpype:representation-2.0" CURRENT_WORKFILE_INFO_SCHEMA = "openpype:workfile-1.0" +CURRENT_THUMBNAIL_SCHEMA = "openpype:thumbnail-1.0" def _create_or_convert_to_mongo_id(mongo_id): @@ -195,6 +197,29 @@ def new_representation_doc( } +def new_thumbnail_doc(data=None, entity_id=None): + """Create skeleton data of thumbnail document. + + Args: + data (Dict[str, Any]): Thumbnail document data. + entity_id (Union[str, ObjectId]): Predefined id of document. New id is + created if not passed. + + Returns: + Dict[str, Any]: Skeleton of thumbnail document. + """ + + if data is None: + data = {} + + return { + "_id": _create_or_convert_to_mongo_id(entity_id), + "type": "thumbnail", + "schema": CURRENT_THUMBNAIL_SCHEMA, + "data": data + } + + def new_workfile_info_doc( filename, asset_id, task_name, files, data=None, entity_id=None ): @@ -638,3 +663,89 @@ class OperationsSession(object): operation = DeleteOperation(project_name, entity_type, entity_id) self.add(operation) return operation + + +def create_project(project_name, project_code, library_project=False): + """Create project using OpenPype settings. + + This project creation function does not validate the project document on + creation. The project document is created blindly with only the minimum + required information about the project, which are its name, code, type + and schema. + + Entered project name must be unique and the project must not exist yet. + + Note: + This function is here to be OP v4 ready, but in v3 it has more work + to do. That's why the imports are inside the function body. + + Args: + project_name(str): New project name. Should be unique. + project_code(str): Project's code. Should be unique too. + library_project(bool): Project is a library project. + + Raises: + ValueError: When project name already exists in MongoDB. + + Returns: + dict: Created project document.
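A usage sketch of the new `create_project` helper, which is also re-exported from `openpype.client`; the names are placeholders:

```python
from openpype.client import create_project

try:
    project_doc = create_project("demo_project", "demo")
except ValueError as exc:
    # Raised when the name is already taken or contains invalid characters.
    print("Project creation failed: {}".format(exc))
```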
+ """ + + from openpype.settings import ProjectSettings, SaveWarningExc + from openpype.pipeline.schema import validate + + if get_project(project_name, fields=["name"]): + raise ValueError("Project with name \"{}\" already exists".format( + project_name + )) + + if not PROJECT_NAME_REGEX.match(project_name): + raise ValueError(( + "Project name \"{}\" contain invalid characters" + ).format(project_name)) + + project_doc = { + "type": "project", + "name": project_name, + "data": { + "code": project_code, + "library_project": library_project + }, + "schema": CURRENT_PROJECT_SCHEMA + } + + op_session = OperationsSession() + # Insert document with basic data + create_op = op_session.create_entity( + project_name, project_doc["type"], project_doc + ) + op_session.commit() + + # Load ProjectSettings for the project and save it to store all attributes + # and Anatomy + try: + project_settings_entity = ProjectSettings(project_name) + project_settings_entity.save() + except SaveWarningExc as exc: + print(str(exc)) + except Exception: + op_session.delete_entity( + project_name, project_doc["type"], create_op.entity_id + ) + op_session.commit() + raise + + project_doc = get_project(project_name) + + try: + # Validate created project document + validate(project_doc) + except Exception: + # Remove project if is not valid + op_session.delete_entity( + project_name, project_doc["type"], create_op.entity_id + ) + op_session.commit() + raise + + return project_doc diff --git a/openpype/hooks/pre_create_extra_workdir_folders.py b/openpype/hooks/pre_create_extra_workdir_folders.py index d79c5831ee..c5af620c87 100644 --- a/openpype/hooks/pre_create_extra_workdir_folders.py +++ b/openpype/hooks/pre_create_extra_workdir_folders.py @@ -1,8 +1,6 @@ import os -from openpype.lib import ( - PreLaunchHook, - create_workdir_extra_folders -) +from openpype.lib import PreLaunchHook +from openpype.pipeline.workfile import create_workdir_extra_folders class AddLastWorkfileToLaunchArgs(PreLaunchHook): diff --git a/openpype/host/__init__.py b/openpype/host/__init__.py index 84a2fa930a..519888fce3 100644 --- a/openpype/host/__init__.py +++ b/openpype/host/__init__.py @@ -1,13 +1,22 @@ from .host import ( HostBase, +) + +from .interfaces import ( IWorkfileHost, ILoadHost, INewPublisher, ) +from .dirmap import HostDirmap + + __all__ = ( "HostBase", + "IWorkfileHost", "ILoadHost", "INewPublisher", + + "HostDirmap", ) diff --git a/openpype/host/dirmap.py b/openpype/host/dirmap.py new file mode 100644 index 0000000000..88d68f27bf --- /dev/null +++ b/openpype/host/dirmap.py @@ -0,0 +1,205 @@ +"""Dirmap functionality used in host integrations inside DCCs. + +Idea for current dirmap implementation was used from Maya where is possible to +enter source and destination roots and maya will try each found source +in referenced file replace with each destionation paths. First path which +exists is used. +""" + +import os +from abc import ABCMeta, abstractmethod + +import six + +from openpype.lib import Logger +from openpype.modules import ModulesManager +from openpype.settings import get_project_settings +from openpype.settings.lib import get_site_local_overrides + + +@six.add_metaclass(ABCMeta) +class HostDirmap(object): + """Abstract class for running dirmap on a workfile in a host. + + Dirmap is used to translate paths inside of host workfile from one + OS to another. (Eg. arstist created workfile on Win, different artists + opens same file on Linux.) 
+ + Expected methods to be implemented by the host: + on_enable_dirmap: run host code for enabling dirmap + dirmap_routine: run host code to do the actual remapping + """ + + def __init__( + self, host_name, project_name, project_settings=None, sync_module=None + ): + self.host_name = host_name + self.project_name = project_name + self._project_settings = project_settings + self._sync_module = sync_module # to limit reinit of Modules + self._log = None + self._mapping = None # cache mapping + + @property + def sync_module(self): + if self._sync_module is None: + manager = ModulesManager() + self._sync_module = manager["sync_server"] + return self._sync_module + + @property + def project_settings(self): + if self._project_settings is None: + self._project_settings = get_project_settings(self.project_name) + return self._project_settings + + @property + def log(self): + if self._log is None: + self._log = Logger.get_logger(self.__class__.__name__) + return self._log + + @abstractmethod + def on_enable_dirmap(self): + """Run host dependent operation for enabling dirmap if necessary.""" + pass + + @abstractmethod + def dirmap_routine(self, source_path, destination_path): + """Run host dependent remapping from source_path to destination_path.""" + pass + + def process_dirmap(self): + # type: () -> None + """Go through all paths in Settings and set them using `dirmap`. + + If the artist has Site Sync enabled, take the dirmap mapping directly + from Local Settings when the artist is syncing a workfile locally. + """ + + if not self._mapping: + self._mapping = self.get_mappings(self.project_settings) + if not self._mapping: + return + + self.log.info("Processing directory mapping ...") + self.on_enable_dirmap() + self.log.info("mapping:: {}".format(self._mapping)) + + for k, sp in enumerate(self._mapping["source-path"]): + dst = self._mapping["destination-path"][k] + try: + print("{} -> {}".format(sp, dst)) + self.dirmap_routine(sp, dst) + except IndexError: + # missing corresponding destination path + self.log.error(( + "invalid dirmap mapping, missing corresponding" + " destination directory." + )) + break + except RuntimeError: + self.log.error( + "invalid path {} -> {}, mapping not registered".format( + sp, dst + ) + ) + continue + + def get_mappings(self, project_settings): + """Get translation from source-path to destination-path. + + It checks if Site Sync is enabled and the user chose to use the local + site; in that case the configuration in Local Settings takes + precedence. + """ + + local_mapping = self._get_local_sync_dirmap(project_settings) + dirmap_label = "{}-dirmap".format(self.host_name) + if ( + not self.project_settings[self.host_name].get(dirmap_label) + and not local_mapping + ): + return {} + mapping_settings = self.project_settings[self.host_name][dirmap_label] + mapping_enabled = mapping_settings["enabled"] or bool(local_mapping) + if not mapping_enabled: + return {} + + mapping = ( + local_mapping + or mapping_settings["paths"] + or {} + ) + + if ( + not mapping + or not mapping.get("destination-path") + or not mapping.get("source-path") + ): + return {} + return mapping + + def _get_local_sync_dirmap(self, project_settings): + """ + Returns dirmap if sync to local project is enabled. + + Only valid mapping is from roots of remote site to local site set + in Local Settings.
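A sketch of a host-side subclass for a hypothetical 'myhost' integration; only the two abstract methods must be provided, everything else comes from the base class:

```python
from openpype.host.dirmap import HostDirmap


class MyHostDirmap(HostDirmap):
    """Dirmap for a hypothetical 'myhost' integration."""

    def on_enable_dirmap(self):
        # Nothing to toggle in this imaginary DCC; Maya, for example,
        # would enable its dirmap feature here.
        pass

    def dirmap_routine(self, source_path, destination_path):
        # Host specific call that remaps 'source_path' to
        # 'destination_path' in the opened workfile.
        print("remapping {} -> {}".format(source_path, destination_path))


# dirmap = MyHostDirmap("myhost", "my_project")
# dirmap.process_dirmap()
```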
+ + Args: + project_settings (dict) + Returns: + dict : { "source-path": [XXX], "destination-path": [YYYY]} + """ + + mapping = {} + + if not project_settings["global"]["sync_server"]["enabled"]: + return mapping + + project_name = os.getenv("AVALON_PROJECT") + + active_site = self.sync_module.get_local_normalized_site( + self.sync_module.get_active_site(project_name)) + remote_site = self.sync_module.get_local_normalized_site( + self.sync_module.get_remote_site(project_name)) + self.log.debug( + "active {} - remote {}".format(active_site, remote_site) + ) + + if ( + active_site == "local" + and project_name in self.sync_module.get_enabled_projects() + and active_site != remote_site + ): + sync_settings = self.sync_module.get_sync_project_setting( + project_name, + exclude_locals=False, + cached=False) + + active_overrides = get_site_local_overrides( + project_name, active_site) + remote_overrides = get_site_local_overrides( + project_name, remote_site) + + self.log.debug("local overrides {}".format(active_overrides)) + self.log.debug("remote overrides {}".format(remote_overrides)) + for root_name, active_site_dir in active_overrides.items(): + remote_site_dir = ( + remote_overrides.get(root_name) + or sync_settings["sites"][remote_site]["root"][root_name] + ) + if os.path.isdir(active_site_dir): + if "destination-path" not in mapping: + mapping["destination-path"] = [] + mapping["destination-path"].append(active_site_dir) + + if "source-path" not in mapping: + mapping["source-path"] = [] + mapping["source-path"].append(remote_site_dir) + + self.log.debug("local sync mapping:: {}".format(mapping)) + return mapping diff --git a/openpype/host/host.py b/openpype/host/host.py index 9cdbb819e1..99f7868727 100644 --- a/openpype/host/host.py +++ b/openpype/host/host.py @@ -1,37 +1,12 @@ import logging import contextlib -from abc import ABCMeta, abstractproperty, abstractmethod +from abc import ABCMeta, abstractproperty import six # NOTE can't import 'typing' because of issues in Maya 2020 # - shiboken crashes on 'typing' module import -class MissingMethodsError(ValueError): - """Exception when host miss some required methods for specific workflow. - - Args: - host (HostBase): Host implementation where are missing methods. - missing_methods (list[str]): List of missing methods. - """ - - def __init__(self, host, missing_methods): - joined_missing = ", ".join( - ['"{}"'.format(item) for item in missing_methods] - ) - if isinstance(host, HostBase): - host_name = host.name - else: - try: - host_name = host.__file__.replace("\\", "/").split("/")[-3] - except Exception: - host_name = str(host) - message = ( - "Host \"{}\" miss methods {}".format(host_name, joined_missing) - ) - super(MissingMethodsError, self).__init__(message) - - @six.add_metaclass(ABCMeta) class HostBase(object): """Base of host implementation class. @@ -185,347 +160,3 @@ class HostBase(object): yield finally: pass - - -class ILoadHost: - """Implementation requirements to be able use reference of representations. - - The load plugins can do referencing even without implementation of methods - here, but switch and removement of containers would not be possible. - - Questions: - - Is list container dependency of host or load plugins? - - Should this be directly in HostBase? - - how to find out if referencing is available? - - do we need to know that? - """ - - @staticmethod - def get_missing_load_methods(host): - """Look for missing methods on "old type" host implementation. 
- - Method is used for validation of implemented functions related to - loading. Checks only existence of methods. - - Args: - Union[ModuleType, HostBase]: Object of host where to look for - required methods. - - Returns: - list[str]: Missing method implementations for loading workflow. - """ - - if isinstance(host, ILoadHost): - return [] - - required = ["ls"] - missing = [] - for name in required: - if not hasattr(host, name): - missing.append(name) - return missing - - @staticmethod - def validate_load_methods(host): - """Validate implemented methods of "old type" host for load workflow. - - Args: - Union[ModuleType, HostBase]: Object of host to validate. - - Raises: - MissingMethodsError: If there are missing methods on host - implementation. - """ - missing = ILoadHost.get_missing_load_methods(host) - if missing: - raise MissingMethodsError(host, missing) - - @abstractmethod - def get_containers(self): - """Retreive referenced containers from scene. - - This can be implemented in hosts where referencing can be used. - - Todo: - Rename function to something more self explanatory. - Suggestion: 'get_containers' - - Returns: - list[dict]: Information about loaded containers. - """ - - pass - - # --- Deprecated method names --- - def ls(self): - """Deprecated variant of 'get_containers'. - - Todo: - Remove when all usages are replaced. - """ - - return self.get_containers() - - -@six.add_metaclass(ABCMeta) -class IWorkfileHost: - """Implementation requirements to be able use workfile utils and tool.""" - - @staticmethod - def get_missing_workfile_methods(host): - """Look for missing methods on "old type" host implementation. - - Method is used for validation of implemented functions related to - workfiles. Checks only existence of methods. - - Args: - Union[ModuleType, HostBase]: Object of host where to look for - required methods. - - Returns: - list[str]: Missing method implementations for workfiles workflow. - """ - - if isinstance(host, IWorkfileHost): - return [] - - required = [ - "open_file", - "save_file", - "current_file", - "has_unsaved_changes", - "file_extensions", - "work_root", - ] - missing = [] - for name in required: - if not hasattr(host, name): - missing.append(name) - return missing - - @staticmethod - def validate_workfile_methods(host): - """Validate methods of "old type" host for workfiles workflow. - - Args: - Union[ModuleType, HostBase]: Object of host to validate. - - Raises: - MissingMethodsError: If there are missing methods on host - implementation. - """ - - missing = IWorkfileHost.get_missing_workfile_methods(host) - if missing: - raise MissingMethodsError(host, missing) - - @abstractmethod - def get_workfile_extensions(self): - """Extensions that can be used as save. - - Questions: - This could potentially use 'HostDefinition'. - """ - - return [] - - @abstractmethod - def save_workfile(self, dst_path=None): - """Save currently opened scene. - - Args: - dst_path (str): Where the current scene should be saved. Or use - current path if 'None' is passed. - """ - - pass - - @abstractmethod - def open_workfile(self, filepath): - """Open passed filepath in the host. - - Args: - filepath (str): Path to workfile. - """ - - pass - - @abstractmethod - def get_current_workfile(self): - """Retreive path to current opened file. - - Returns: - str: Path to file which is currently opened. - None: If nothing is opened. - """ - - return None - - def workfile_has_unsaved_changes(self): - """Currently opened scene is saved. 
- - Not all hosts can know if current scene is saved because the API of - DCC does not support it. - - Returns: - bool: True if scene is saved and False if has unsaved - modifications. - None: Can't tell if workfiles has modifications. - """ - - return None - - def work_root(self, session): - """Modify workdir per host. - - Default implementation keeps workdir untouched. - - Warnings: - We must handle this modification with more sofisticated way because - this can't be called out of DCC so opening of last workfile - (calculated before DCC is launched) is complicated. Also breaking - defined work template is not a good idea. - Only place where it's really used and can make sense is Maya. There - workspace.mel can modify subfolders where to look for maya files. - - Args: - session (dict): Session context data. - - Returns: - str: Path to new workdir. - """ - - return session["AVALON_WORKDIR"] - - # --- Deprecated method names --- - def file_extensions(self): - """Deprecated variant of 'get_workfile_extensions'. - - Todo: - Remove when all usages are replaced. - """ - return self.get_workfile_extensions() - - def save_file(self, dst_path=None): - """Deprecated variant of 'save_workfile'. - - Todo: - Remove when all usages are replaced. - """ - - self.save_workfile() - - def open_file(self, filepath): - """Deprecated variant of 'open_workfile'. - - Todo: - Remove when all usages are replaced. - """ - - return self.open_workfile(filepath) - - def current_file(self): - """Deprecated variant of 'get_current_workfile'. - - Todo: - Remove when all usages are replaced. - """ - - return self.get_current_workfile() - - def has_unsaved_changes(self): - """Deprecated variant of 'workfile_has_unsaved_changes'. - - Todo: - Remove when all usages are replaced. - """ - - return self.workfile_has_unsaved_changes() - - -class INewPublisher: - """Functions related to new creation system in new publisher. - - New publisher is not storing information only about each created instance - but also some global data. At this moment are data related only to context - publish plugins but that can extend in future. - """ - - @staticmethod - def get_missing_publish_methods(host): - """Look for missing methods on "old type" host implementation. - - Method is used for validation of implemented functions related to - new publish creation. Checks only existence of methods. - - Args: - Union[ModuleType, HostBase]: Host module where to look for - required methods. - - Returns: - list[str]: Missing method implementations for new publsher - workflow. - """ - - if isinstance(host, INewPublisher): - return [] - - required = [ - "get_context_data", - "update_context_data", - ] - missing = [] - for name in required: - if not hasattr(host, name): - missing.append(name) - return missing - - @staticmethod - def validate_publish_methods(host): - """Validate implemented methods of "old type" host. - - Args: - Union[ModuleType, HostBase]: Host module to validate. - - Raises: - MissingMethodsError: If there are missing methods on host - implementation. - """ - missing = INewPublisher.get_missing_publish_methods(host) - if missing: - raise MissingMethodsError(host, missing) - - @abstractmethod - def get_context_data(self): - """Get global data related to creation-publishing from workfile. - - These data are not related to any created instance but to whole - publishing context. Not saving/returning them will cause that each - reset of publishing resets all values to default ones. 
- - Context data can contain information about enabled/disabled publish - plugins or other values that can be filled by artist. - - Returns: - dict: Context data stored using 'update_context_data'. - """ - - pass - - @abstractmethod - def update_context_data(self, data, changes): - """Store global context data to workfile. - - Called when some values in context data has changed. - - Without storing the values in a way that 'get_context_data' would - return them will each reset of publishing cause loose of filled values - by artist. Best practice is to store values into workfile, if possible. - - Args: - data (dict): New data as are. - changes (dict): Only data that has been changed. Each value has - tuple with '(, )' value. - """ - - pass diff --git a/openpype/host/interfaces.py b/openpype/host/interfaces.py new file mode 100644 index 0000000000..cbf12b0d13 --- /dev/null +++ b/openpype/host/interfaces.py @@ -0,0 +1,370 @@ +from abc import ABCMeta, abstractmethod +import six + + +class MissingMethodsError(ValueError): + """Exception raised when host misses required methods for a workflow. + + Args: + host (HostBase): Host implementation where the methods are missing. + missing_methods (list[str]): List of missing methods. + """ + + def __init__(self, host, missing_methods): + joined_missing = ", ".join( + ['"{}"'.format(item) for item in missing_methods] + ) + host_name = getattr(host, "name", None) + if not host_name: + try: + host_name = host.__file__.replace("\\", "/").split("/")[-3] + except Exception: + host_name = str(host) + message = ( + "Host \"{}\" misses methods {}".format(host_name, joined_missing) + ) + super(MissingMethodsError, self).__init__(message) + + +class ILoadHost: + """Implementation requirements to be able to use references of representations. + + The load plugins can do referencing even without implementation of methods + here, but switching and removal of containers would not be possible. + + Questions: + - Is list container dependency of host or load plugins? + - Should this be directly in HostBase? + - how to find out if referencing is available? + - do we need to know that? + """ + + @staticmethod + def get_missing_load_methods(host): + """Look for missing methods on "old type" host implementation. + + Method is used for validation of implemented functions related to + loading. Checks only existence of methods. + + Args: + host (Union[ModuleType, HostBase]): Object of host where to look + for required methods. + + Returns: + list[str]: Missing method implementations for loading workflow. + """ + + if isinstance(host, ILoadHost): + return [] + + required = ["ls"] + missing = [] + for name in required: + if not hasattr(host, name): + missing.append(name) + return missing + + @staticmethod + def validate_load_methods(host): + """Validate implemented methods of "old type" host for load workflow. + + Args: + host (Union[ModuleType, HostBase]): Object of host to validate. + + Raises: + MissingMethodsError: If there are missing methods on host + implementation. + """ + missing = ILoadHost.get_missing_load_methods(host) + if missing: + raise MissingMethodsError(host, missing) + + @abstractmethod + def get_containers(self): + """Retrieve referenced containers from scene. + + This can be implemented in hosts where referencing can be used. + + Returns: + list[dict]: Information about loaded containers.
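A sketch of a host class opting into the interface; the host name and returned container values are placeholders, and a real host would read the containers from the scene:

```python
from openpype.host import HostBase, ILoadHost


class MyHost(HostBase, ILoadHost):
    """Hypothetical host storing container metadata in the scene."""

    name = "myhost"

    def get_containers(self):
        # A real host would collect this from scene nodes or metadata;
        # the returned dict is a placeholder.
        return [{
            "name": "modelMain",
            "representation": "633b2d6c8f5b9a2f1c7e4d01",
        }]
```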
+ """ + + pass + + # --- Deprecated method names --- + def ls(self): + """Deprecated variant of 'get_containers'. + + Todo: + Remove when all usages are replaced. + """ + + return self.get_containers() + + +@six.add_metaclass(ABCMeta) +class IWorkfileHost: + """Implementation requirements to be able use workfile utils and tool.""" + + @staticmethod + def get_missing_workfile_methods(host): + """Look for missing methods on "old type" host implementation. + + Method is used for validation of implemented functions related to + workfiles. Checks only existence of methods. + + Args: + Union[ModuleType, HostBase]: Object of host where to look for + required methods. + + Returns: + list[str]: Missing method implementations for workfiles workflow. + """ + + if isinstance(host, IWorkfileHost): + return [] + + required = [ + "open_file", + "save_file", + "current_file", + "has_unsaved_changes", + "file_extensions", + "work_root", + ] + missing = [] + for name in required: + if not hasattr(host, name): + missing.append(name) + return missing + + @staticmethod + def validate_workfile_methods(host): + """Validate methods of "old type" host for workfiles workflow. + + Args: + Union[ModuleType, HostBase]: Object of host to validate. + + Raises: + MissingMethodsError: If there are missing methods on host + implementation. + """ + + missing = IWorkfileHost.get_missing_workfile_methods(host) + if missing: + raise MissingMethodsError(host, missing) + + @abstractmethod + def get_workfile_extensions(self): + """Extensions that can be used as save. + + Questions: + This could potentially use 'HostDefinition'. + """ + + return [] + + @abstractmethod + def save_workfile(self, dst_path=None): + """Save currently opened scene. + + Args: + dst_path (str): Where the current scene should be saved. Or use + current path if 'None' is passed. + """ + + pass + + @abstractmethod + def open_workfile(self, filepath): + """Open passed filepath in the host. + + Args: + filepath (str): Path to workfile. + """ + + pass + + @abstractmethod + def get_current_workfile(self): + """Retreive path to current opened file. + + Returns: + str: Path to file which is currently opened. + None: If nothing is opened. + """ + + return None + + def workfile_has_unsaved_changes(self): + """Currently opened scene is saved. + + Not all hosts can know if current scene is saved because the API of + DCC does not support it. + + Returns: + bool: True if scene is saved and False if has unsaved + modifications. + None: Can't tell if workfiles has modifications. + """ + + return None + + def work_root(self, session): + """Modify workdir per host. + + Default implementation keeps workdir untouched. + + Warnings: + We must handle this modification with more sofisticated way because + this can't be called out of DCC so opening of last workfile + (calculated before DCC is launched) is complicated. Also breaking + defined work template is not a good idea. + Only place where it's really used and can make sense is Maya. There + workspace.mel can modify subfolders where to look for maya files. + + Args: + session (dict): Session context data. + + Returns: + str: Path to new workdir. + """ + + return session["AVALON_WORKDIR"] + + # --- Deprecated method names --- + def file_extensions(self): + """Deprecated variant of 'get_workfile_extensions'. + + Todo: + Remove when all usages are replaced. + """ + return self.get_workfile_extensions() + + def save_file(self, dst_path=None): + """Deprecated variant of 'save_workfile'. 
+
+        Todo:
+            Remove when all usages are replaced.
+        """
+
+        self.save_workfile(dst_path)
+
+    def open_file(self, filepath):
+        """Deprecated variant of 'open_workfile'.
+
+        Todo:
+            Remove when all usages are replaced.
+        """
+
+        return self.open_workfile(filepath)
+
+    def current_file(self):
+        """Deprecated variant of 'get_current_workfile'.
+
+        Todo:
+            Remove when all usages are replaced.
+        """
+
+        return self.get_current_workfile()
+
+    def has_unsaved_changes(self):
+        """Deprecated variant of 'workfile_has_unsaved_changes'.
+
+        Todo:
+            Remove when all usages are replaced.
+        """
+
+        return self.workfile_has_unsaved_changes()
+
+
+class INewPublisher:
+    """Functions related to the new creation system in the new publisher.
+
+    The new publisher stores not only information about each created
+    instance but also some global data. At this moment the data relate only
+    to context publish plugins, but that may be extended in the future.
+    """
+
+    @staticmethod
+    def get_missing_publish_methods(host):
+        """Look for missing methods on "old type" host implementation.
+
+        Method is used for validation of implemented functions related to
+        new publish creation. Checks only existence of methods.
+
+        Args:
+            host (Union[ModuleType, HostBase]): Host module where to look
+                for required methods.
+
+        Returns:
+            list[str]: Missing method implementations for new publisher
+                workflow.
+        """
+
+        if isinstance(host, INewPublisher):
+            return []
+
+        required = [
+            "get_context_data",
+            "update_context_data",
+        ]
+        missing = []
+        for name in required:
+            if not hasattr(host, name):
+                missing.append(name)
+        return missing
+
+    @staticmethod
+    def validate_publish_methods(host):
+        """Validate implemented methods of "old type" host.
+
+        Args:
+            host (Union[ModuleType, HostBase]): Host module to validate.
+
+        Raises:
+            MissingMethodsError: If there are missing methods on host
+                implementation.
+        """
+        missing = INewPublisher.get_missing_publish_methods(host)
+        if missing:
+            raise MissingMethodsError(host, missing)
+
+    @abstractmethod
+    def get_context_data(self):
+        """Get global data related to creation-publishing from workfile.
+
+        These data are not related to any created instance but to the whole
+        publishing context. If they are not saved and returned, each reset
+        of publishing resets all values to their defaults.
+
+        Context data can contain information about enabled/disabled publish
+        plugins or other values that can be filled by artist.
+
+        Returns:
+            dict: Context data stored using 'update_context_data'.
+        """
+
+        pass
+
+    @abstractmethod
+    def update_context_data(self, data, changes):
+        """Store global context data to workfile.
+
+        Called when some values in the context data have changed.
+
+        If the values are not stored in a way that 'get_context_data' can
+        return them, each reset of publishing loses the values filled in by
+        the artist. Best practice is to store the values in the workfile,
+        if possible.
+
+        Args:
+            data (dict): Whole new context data.
+            changes (dict): Only the data that has changed. Each value is a
+                tuple of '(<old value>, <new value>)'.
+ """ + + pass diff --git a/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py b/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py index 7323a0b125..dc65cee61d 100644 --- a/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py +++ b/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py @@ -2,14 +2,18 @@ import os import sys import six -import openpype.api +from openpype.lib import ( + get_ffmpeg_tool_path, + run_subprocess, +) +from openpype.pipeline import publish from openpype.hosts.aftereffects.api import get_stub -class ExtractLocalRender(openpype.api.Extractor): +class ExtractLocalRender(publish.Extractor): """Render RenderQueue locally.""" - order = openpype.api.Extractor.order - 0.47 + order = publish.Extractor.order - 0.47 label = "Extract Local Render" hosts = ["aftereffects"] families = ["renderLocal", "render.local"] @@ -53,7 +57,7 @@ class ExtractLocalRender(openpype.api.Extractor): instance.data["representations"] = [repre_data] - ffmpeg_path = openpype.lib.get_ffmpeg_tool_path("ffmpeg") + ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") # Generate thumbnail. thumbnail_path = os.path.join(staging_dir, "thumbnail.jpg") @@ -66,7 +70,7 @@ class ExtractLocalRender(openpype.api.Extractor): ] self.log.debug("Thumbnail args:: {}".format(args)) try: - output = openpype.lib.run_subprocess(args) + output = run_subprocess(args) except TypeError: self.log.warning("Error in creating thumbnail") six.reraise(*sys.exc_info()) diff --git a/openpype/hosts/aftereffects/plugins/publish/extract_save_scene.py b/openpype/hosts/aftereffects/plugins/publish/extract_save_scene.py index eb2977309f..343838eb49 100644 --- a/openpype/hosts/aftereffects/plugins/publish/extract_save_scene.py +++ b/openpype/hosts/aftereffects/plugins/publish/extract_save_scene.py @@ -1,13 +1,13 @@ import pyblish.api -import openpype.api +from openpype.pipeline import publish from openpype.hosts.aftereffects.api import get_stub class ExtractSaveScene(pyblish.api.ContextPlugin): """Save scene before extraction.""" - order = openpype.api.Extractor.order - 0.48 + order = publish.Extractor.order - 0.48 label = "Extract Save Scene" hosts = ["aftereffects"] diff --git a/openpype/hosts/aftereffects/plugins/publish/increment_workfile.py b/openpype/hosts/aftereffects/plugins/publish/increment_workfile.py index 0829355f3b..d8f6ef5d27 100644 --- a/openpype/hosts/aftereffects/plugins/publish/increment_workfile.py +++ b/openpype/hosts/aftereffects/plugins/publish/increment_workfile.py @@ -1,6 +1,6 @@ import pyblish.api -from openpype.action import get_errored_plugins_from_data from openpype.lib import version_up +from openpype.pipeline.publish import get_errored_plugins_from_context from openpype.hosts.aftereffects.api import get_stub @@ -18,7 +18,7 @@ class IncrementWorkfile(pyblish.api.InstancePlugin): optional = True def process(self, instance): - errored_plugins = get_errored_plugins_from_data(instance.context) + errored_plugins = get_errored_plugins_from_context(instance.context) if errored_plugins: raise RuntimeError( "Skipping incrementing current file because publishing failed." 
diff --git a/openpype/hosts/aftereffects/plugins/publish/remove_publish_highlight.py b/openpype/hosts/aftereffects/plugins/publish/remove_publish_highlight.py index 5f3fcc3089..370f916f04 100644 --- a/openpype/hosts/aftereffects/plugins/publish/remove_publish_highlight.py +++ b/openpype/hosts/aftereffects/plugins/publish/remove_publish_highlight.py @@ -1,8 +1,8 @@ -import openpype.api +from openpype.pipeline import publish from openpype.hosts.aftereffects.api import get_stub -class RemovePublishHighlight(openpype.api.Extractor): +class RemovePublishHighlight(publish.Extractor): """Clean utf characters which are not working in DL Published compositions are marked with unicode icon which causes @@ -10,7 +10,7 @@ class RemovePublishHighlight(openpype.api.Extractor): rendering, add it later back to avoid confusion. """ - order = openpype.api.Extractor.order - 0.49 # just before save + order = publish.Extractor.order - 0.49 # just before save label = "Clean render comp" hosts = ["aftereffects"] families = ["render.farm"] diff --git a/openpype/hosts/aftereffects/plugins/publish/validate_instance_asset.py b/openpype/hosts/aftereffects/plugins/publish/validate_instance_asset.py index 7a9356f020..6c36136b20 100644 --- a/openpype/hosts/aftereffects/plugins/publish/validate_instance_asset.py +++ b/openpype/hosts/aftereffects/plugins/publish/validate_instance_asset.py @@ -1,9 +1,9 @@ import pyblish.api -import openpype.api -from openpype.pipeline import ( +from openpype.pipeline import legacy_io +from openpype.pipeline.publish import ( + ValidateContentsOrder, PublishXmlValidationError, - legacy_io, ) from openpype.hosts.aftereffects.api import get_stub @@ -50,7 +50,7 @@ class ValidateInstanceAsset(pyblish.api.InstancePlugin): label = "Validate Instance Asset" hosts = ["aftereffects"] actions = [ValidateInstanceAssetRepair] - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder def process(self, instance): instance_asset = instance.data["asset"] diff --git a/openpype/hosts/blender/api/action.py b/openpype/hosts/blender/api/action.py index 09ef76326e..fe0833e39f 100644 --- a/openpype/hosts/blender/api/action.py +++ b/openpype/hosts/blender/api/action.py @@ -2,7 +2,7 @@ import bpy import pyblish.api -from openpype.api import get_errored_instances_from_context +from openpype.pipeline.publish import get_errored_instances_from_context class SelectInvalidAction(pyblish.api.Action): diff --git a/openpype/hosts/blender/plugins/publish/collect_current_file.py b/openpype/hosts/blender/plugins/publish/collect_current_file.py index 72976c490b..c3097a0694 100644 --- a/openpype/hosts/blender/plugins/publish/collect_current_file.py +++ b/openpype/hosts/blender/plugins/publish/collect_current_file.py @@ -1,6 +1,19 @@ +import os import bpy import pyblish.api +from openpype.pipeline import legacy_io +from openpype.hosts.blender.api import workio + + +class SaveWorkfiledAction(pyblish.api.Action): + """Save Workfile.""" + label = "Save Workfile" + on = "failed" + icon = "save" + + def process(self, context, plugin): + bpy.ops.wm.avalon_workfiles() class CollectBlenderCurrentFile(pyblish.api.ContextPlugin): @@ -8,12 +21,52 @@ class CollectBlenderCurrentFile(pyblish.api.ContextPlugin): order = pyblish.api.CollectorOrder - 0.5 label = "Blender Current File" - hosts = ['blender'] + hosts = ["blender"] + actions = [SaveWorkfiledAction] def process(self, context): """Inject the current working file""" - current_file = bpy.data.filepath - context.data['currentFile'] = current_file + current_file = 
workio.current_file() - assert current_file != '', "Current file is empty. " \ - "Save the file before continuing." + context.data["currentFile"] = current_file + + assert current_file, ( + "Current file is empty. Save the file before continuing." + ) + + folder, file = os.path.split(current_file) + filename, ext = os.path.splitext(file) + + task = legacy_io.Session["AVALON_TASK"] + + data = {} + + # create instance + instance = context.create_instance(name=filename) + subset = "workfile" + task.capitalize() + + data.update({ + "subset": subset, + "asset": os.getenv("AVALON_ASSET", None), + "label": subset, + "publish": True, + "family": "workfile", + "families": ["workfile"], + "setMembers": [current_file], + "frameStart": bpy.context.scene.frame_start, + "frameEnd": bpy.context.scene.frame_end, + }) + + data["representations"] = [{ + "name": ext.lstrip("."), + "ext": ext.lstrip("."), + "files": file, + "stagingDir": folder, + }] + + instance.data.update(data) + + self.log.info("Collected instance: {}".format(file)) + self.log.info("Scene path: {}".format(current_file)) + self.log.info("staging Dir: {}".format(folder)) + self.log.info("subset: {}".format(subset)) diff --git a/openpype/hosts/blender/plugins/publish/extract_abc.py b/openpype/hosts/blender/plugins/publish/extract_abc.py index a26a92f7e4..1cab9d225b 100644 --- a/openpype/hosts/blender/plugins/publish/extract_abc.py +++ b/openpype/hosts/blender/plugins/publish/extract_abc.py @@ -2,12 +2,12 @@ import os import bpy -from openpype import api +from openpype.pipeline import publish from openpype.hosts.blender.api import plugin from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY -class ExtractABC(api.Extractor): +class ExtractABC(publish.Extractor): """Extract as ABC.""" label = "Extract ABC" diff --git a/openpype/hosts/blender/plugins/publish/extract_blend.py b/openpype/hosts/blender/plugins/publish/extract_blend.py index 9add633f05..6a001b6f65 100644 --- a/openpype/hosts/blender/plugins/publish/extract_blend.py +++ b/openpype/hosts/blender/plugins/publish/extract_blend.py @@ -2,10 +2,10 @@ import os import bpy -import openpype.api +from openpype.pipeline import publish -class ExtractBlend(openpype.api.Extractor): +class ExtractBlend(publish.Extractor): """Extract a blend file.""" label = "Extract Blend" diff --git a/openpype/hosts/blender/plugins/publish/extract_blend_animation.py b/openpype/hosts/blender/plugins/publish/extract_blend_animation.py index 4917223331..477411b73d 100644 --- a/openpype/hosts/blender/plugins/publish/extract_blend_animation.py +++ b/openpype/hosts/blender/plugins/publish/extract_blend_animation.py @@ -2,10 +2,10 @@ import os import bpy -import openpype.api +from openpype.pipeline import publish -class ExtractBlendAnimation(openpype.api.Extractor): +class ExtractBlendAnimation(publish.Extractor): """Extract a blend file.""" label = "Extract Blend" diff --git a/openpype/hosts/blender/plugins/publish/extract_camera.py b/openpype/hosts/blender/plugins/publish/extract_camera.py index b2c7611b58..9fd181825c 100644 --- a/openpype/hosts/blender/plugins/publish/extract_camera.py +++ b/openpype/hosts/blender/plugins/publish/extract_camera.py @@ -2,11 +2,11 @@ import os import bpy -from openpype import api +from openpype.pipeline import publish from openpype.hosts.blender.api import plugin -class ExtractCamera(api.Extractor): +class ExtractCamera(publish.Extractor): """Extract as the camera as FBX.""" label = "Extract Camera" diff --git a/openpype/hosts/blender/plugins/publish/extract_fbx.py 
b/openpype/hosts/blender/plugins/publish/extract_fbx.py index 3ac66f33a4..0ad797c226 100644 --- a/openpype/hosts/blender/plugins/publish/extract_fbx.py +++ b/openpype/hosts/blender/plugins/publish/extract_fbx.py @@ -2,12 +2,12 @@ import os import bpy -from openpype import api +from openpype.pipeline import publish from openpype.hosts.blender.api import plugin from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY -class ExtractFBX(api.Extractor): +class ExtractFBX(publish.Extractor): """Extract as FBX.""" label = "Extract FBX" diff --git a/openpype/hosts/blender/plugins/publish/extract_fbx_animation.py b/openpype/hosts/blender/plugins/publish/extract_fbx_animation.py index 4b4a92932a..062b42e99d 100644 --- a/openpype/hosts/blender/plugins/publish/extract_fbx_animation.py +++ b/openpype/hosts/blender/plugins/publish/extract_fbx_animation.py @@ -5,12 +5,12 @@ import bpy import bpy_extras import bpy_extras.anim_utils -from openpype import api +from openpype.pipeline import publish from openpype.hosts.blender.api import plugin from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY -class ExtractAnimationFBX(api.Extractor): +class ExtractAnimationFBX(publish.Extractor): """Extract as animation.""" label = "Extract FBX" diff --git a/openpype/hosts/blender/plugins/publish/extract_layout.py b/openpype/hosts/blender/plugins/publish/extract_layout.py index 8502c6fbd4..f2d04f1178 100644 --- a/openpype/hosts/blender/plugins/publish/extract_layout.py +++ b/openpype/hosts/blender/plugins/publish/extract_layout.py @@ -6,12 +6,12 @@ import bpy_extras import bpy_extras.anim_utils from openpype.client import get_representation_by_name +from openpype.pipeline import publish from openpype.hosts.blender.api import plugin from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY -import openpype.api -class ExtractLayout(openpype.api.Extractor): +class ExtractLayout(publish.Extractor): """Extract a layout.""" label = "Extract Layout" diff --git a/openpype/hosts/blender/plugins/publish/validate_camera_zero_keyframe.py b/openpype/hosts/blender/plugins/publish/validate_camera_zero_keyframe.py index 39b9b67511..9ac0561ff3 100644 --- a/openpype/hosts/blender/plugins/publish/validate_camera_zero_keyframe.py +++ b/openpype/hosts/blender/plugins/publish/validate_camera_zero_keyframe.py @@ -1,9 +1,11 @@ from typing import List -import mathutils +import bpy import pyblish.api +import openpype.api import openpype.hosts.blender.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateCameraZeroKeyframe(pyblish.api.InstancePlugin): @@ -14,21 +16,18 @@ class ValidateCameraZeroKeyframe(pyblish.api.InstancePlugin): in Unreal and Blender. 
""" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ["blender"] families = ["camera"] - category = "geometry" version = (0, 1, 0) label = "Zero Keyframe" actions = [openpype.hosts.blender.api.action.SelectInvalidAction] - _identity = mathutils.Matrix() - - @classmethod - def get_invalid(cls, instance) -> List: + @staticmethod + def get_invalid(instance) -> List: invalid = [] - for obj in [obj for obj in instance]: - if obj.type == "CAMERA": + for obj in instance: + if isinstance(obj, bpy.types.Object) and obj.type == "CAMERA": if obj.animation_data and obj.animation_data.action: action = obj.animation_data.action frames_set = set() @@ -45,4 +44,5 @@ class ValidateCameraZeroKeyframe(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: raise RuntimeError( - f"Object found in instance is not in Object Mode: {invalid}") + f"Camera must have a keyframe at frame 0: {invalid}" + ) diff --git a/openpype/hosts/blender/plugins/publish/validate_mesh_has_uv.py b/openpype/hosts/blender/plugins/publish/validate_mesh_has_uv.py index 1c73476fc8..83146c641e 100644 --- a/openpype/hosts/blender/plugins/publish/validate_mesh_has_uv.py +++ b/openpype/hosts/blender/plugins/publish/validate_mesh_has_uv.py @@ -3,13 +3,14 @@ from typing import List import bpy import pyblish.api +import openpype.api import openpype.hosts.blender.api.action class ValidateMeshHasUvs(pyblish.api.InstancePlugin): """Validate that the current mesh has UV's.""" - order = pyblish.api.ValidatorOrder + order = openpype.api.ValidateContentsOrder hosts = ["blender"] families = ["model"] category = "geometry" @@ -25,7 +26,10 @@ class ValidateMeshHasUvs(pyblish.api.InstancePlugin): for uv_layer in obj.data.uv_layers: for polygon in obj.data.polygons: for loop_index in polygon.loop_indices: - if not uv_layer.data[loop_index].uv: + if ( + loop_index >= len(uv_layer.data) + or not uv_layer.data[loop_index].uv + ): return False return True @@ -33,20 +37,20 @@ class ValidateMeshHasUvs(pyblish.api.InstancePlugin): @classmethod def get_invalid(cls, instance) -> List: invalid = [] - # TODO (jasper): only check objects in the collection that will be published? - for obj in [ - obj for obj in instance]: - try: - if obj.type == 'MESH': - # Make sure we are in object mode. - bpy.ops.object.mode_set(mode='OBJECT') - if not cls.has_uvs(obj): - invalid.append(obj) - except: - continue + for obj in instance: + if isinstance(obj, bpy.types.Object) and obj.type == 'MESH': + if obj.mode != "OBJECT": + cls.log.warning( + f"Mesh object {obj.name} should be in 'OBJECT' mode" + " to be properly checked." 
+ ) + if not cls.has_uvs(obj): + invalid.append(obj) return invalid def process(self, instance): invalid = self.get_invalid(instance) if invalid: - raise RuntimeError(f"Meshes found in instance without valid UV's: {invalid}") + raise RuntimeError( + f"Meshes found in instance without valid UV's: {invalid}" + ) diff --git a/openpype/hosts/blender/plugins/publish/validate_mesh_no_negative_scale.py b/openpype/hosts/blender/plugins/publish/validate_mesh_no_negative_scale.py index 00159a2d36..329a8d80c3 100644 --- a/openpype/hosts/blender/plugins/publish/validate_mesh_no_negative_scale.py +++ b/openpype/hosts/blender/plugins/publish/validate_mesh_no_negative_scale.py @@ -3,28 +3,27 @@ from typing import List import bpy import pyblish.api +import openpype.api import openpype.hosts.blender.api.action class ValidateMeshNoNegativeScale(pyblish.api.Validator): """Ensure that meshes don't have a negative scale.""" - order = pyblish.api.ValidatorOrder + order = openpype.api.ValidateContentsOrder hosts = ["blender"] families = ["model"] + category = "geometry" label = "Mesh No Negative Scale" actions = [openpype.hosts.blender.api.action.SelectInvalidAction] @staticmethod def get_invalid(instance) -> List: invalid = [] - # TODO (jasper): only check objects in the collection that will be published? - for obj in [ - obj for obj in bpy.data.objects if obj.type == 'MESH' - ]: - if any(v < 0 for v in obj.scale): - invalid.append(obj) - + for obj in instance: + if isinstance(obj, bpy.types.Object) and obj.type == 'MESH': + if any(v < 0 for v in obj.scale): + invalid.append(obj) return invalid def process(self, instance): diff --git a/openpype/hosts/blender/plugins/publish/validate_no_colons_in_name.py b/openpype/hosts/blender/plugins/publish/validate_no_colons_in_name.py index 261ff864d5..3d7c5294f6 100644 --- a/openpype/hosts/blender/plugins/publish/validate_no_colons_in_name.py +++ b/openpype/hosts/blender/plugins/publish/validate_no_colons_in_name.py @@ -1,7 +1,11 @@ from typing import List +import bpy + import pyblish.api +import openpype.api import openpype.hosts.blender.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateNoColonsInName(pyblish.api.InstancePlugin): @@ -12,20 +16,20 @@ class ValidateNoColonsInName(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ["blender"] families = ["model", "rig"] version = (0, 1, 0) label = "No Colons in names" actions = [openpype.hosts.blender.api.action.SelectInvalidAction] - @classmethod - def get_invalid(cls, instance) -> List: + @staticmethod + def get_invalid(instance) -> List: invalid = [] - for obj in [obj for obj in instance]: + for obj in instance: if ':' in obj.name: invalid.append(obj) - if obj.type == 'ARMATURE': + if isinstance(obj, bpy.types.Object) and obj.type == 'ARMATURE': for bone in obj.data.bones: if ':' in bone.name: invalid.append(obj) @@ -36,4 +40,5 @@ class ValidateNoColonsInName(pyblish.api.InstancePlugin): invalid = self.get_invalid(instance) if invalid: raise RuntimeError( - f"Objects found with colon in name: {invalid}") + f"Objects found with colon in name: {invalid}" + ) diff --git a/openpype/hosts/blender/plugins/publish/validate_object_mode.py b/openpype/hosts/blender/plugins/publish/validate_object_mode.py index 90ef0b7c41..ac60e00f89 100644 --- a/openpype/hosts/blender/plugins/publish/validate_object_mode.py +++ b/openpype/hosts/blender/plugins/publish/validate_object_mode.py @@ -1,5 +1,7 @@ from typing import List +import bpy + 
import pyblish.api

 import openpype.hosts.blender.api.action


@@ -10,26 +12,21 @@ class ValidateObjectIsInObjectMode(pyblish.api.InstancePlugin):
     order = pyblish.api.ValidatorOrder - 0.01
     hosts = ["blender"]
     families = ["model", "rig", "layout"]
-    category = "geometry"
     label = "Validate Object Mode"
     actions = [openpype.hosts.blender.api.action.SelectInvalidAction]
     optional = False

-    @classmethod
-    def get_invalid(cls, instance) -> List:
+    @staticmethod
+    def get_invalid(instance) -> List:
         invalid = []
-        for obj in [obj for obj in instance]:
-            try:
-                if obj.type == 'MESH' or obj.type == 'ARMATURE':
-                    # Check if the object is in object mode.
-                    if not obj.mode == 'OBJECT':
-                        invalid.append(obj)
-            except Exception:
-                continue
+        for obj in instance:
+            if isinstance(obj, bpy.types.Object) and obj.mode != "OBJECT":
+                invalid.append(obj)
         return invalid

     def process(self, instance):
         invalid = self.get_invalid(instance)
         if invalid:
             raise RuntimeError(
-                f"Object found in instance is not in Object Mode: {invalid}")
+                f"Object found in instance is not in Object Mode: {invalid}"
+            )
diff --git a/openpype/hosts/blender/plugins/publish/validate_transform_zero.py b/openpype/hosts/blender/plugins/publish/validate_transform_zero.py
index 7456dbc423..249b14743b 100644
--- a/openpype/hosts/blender/plugins/publish/validate_transform_zero.py
+++ b/openpype/hosts/blender/plugins/publish/validate_transform_zero.py
@@ -1,9 +1,12 @@ from typing import List

 import mathutils
+import bpy

 import pyblish.api

+import openpype.api
 import openpype.hosts.blender.api.action
+from openpype.pipeline.publish import ValidateContentsOrder


 class ValidateTransformZero(pyblish.api.InstancePlugin):
@@ -15,10 +18,9 @@
     """

-    order = openpype.api.ValidateContentsOrder
+    order = ValidateContentsOrder
     hosts = ["blender"]
     families = ["model"]
-    category = "geometry"
     version = (0, 1, 0)
     label = "Transform Zero"
     actions = [openpype.hosts.blender.api.action.SelectInvalidAction]
@@ -28,8 +30,11 @@
     @classmethod
     def get_invalid(cls, instance) -> List:
         invalid = []
-        for obj in [obj for obj in instance]:
-            if obj.matrix_basis != cls._identity:
+        for obj in instance:
+            if (
+                isinstance(obj, bpy.types.Object)
+                and obj.matrix_basis != cls._identity
+            ):
                 invalid.append(obj)
         return invalid
@@ -37,4 +42,6 @@
         invalid = self.get_invalid(instance)
         if invalid:
             raise RuntimeError(
-                f"Object found in instance is not in Object Mode: {invalid}")
+                "Object found in instance does not have"
+                f" a zero transform: {invalid}"
+            )
diff --git a/openpype/hosts/celaction/plugins/publish/collect_audio.py deleted file mode 100644
index c6e3bf2c03..0000000000
--- a/openpype/hosts/celaction/plugins/publish/collect_audio.py
+++ /dev/null
@@ -1,113 +0,0 @@
-import os
-import collections
-from pprint import pformat
-
-import pyblish.api
-
-from openpype.client import (
-    get_subsets,
-    get_last_versions,
-    get_representations
-)
-from openpype.pipeline import legacy_io
-
-
-class AppendCelactionAudio(pyblish.api.ContextPlugin):
-
-    label = "Colect Audio for publishing"
-    order = pyblish.api.CollectorOrder + 0.1
-
-    def process(self, context):
-        self.log.info('Collecting Audio Data')
-        asset_doc = context.data["assetEntity"]
-
-        # get all available representations
-        subsets = self.get_subsets(
-            asset_doc,
-            representations=["audio", "wav"]
-        )
-
self.log.info(f"subsets is: {pformat(subsets)}") - - if not subsets.get("audioMain"): - raise AttributeError("`audioMain` subset does not exist") - - reprs = subsets.get("audioMain", {}).get("representations", []) - self.log.info(f"reprs is: {pformat(reprs)}") - - repr = next((r for r in reprs), None) - if not repr: - raise "Missing `audioMain` representation" - self.log.info(f"representation is: {repr}") - - audio_file = repr.get('data', {}).get('path', "") - - if os.path.exists(audio_file): - context.data["audioFile"] = audio_file - self.log.info( - 'audio_file: {}, has been added to context'.format(audio_file)) - else: - self.log.warning("Couldn't find any audio file on Ftrack.") - - def get_subsets(self, asset_doc, representations): - """ - Query subsets with filter on name. - - The method will return all found subsets and its defined version - and subsets. Version could be specified with number. Representation - can be filtered. - - Arguments: - asset_doct (dict): Asset (shot) mongo document - representations (list): list for all representations - - Returns: - dict: subsets with version and representations in keys - """ - - # Query all subsets for asset - project_name = legacy_io.active_project() - subset_docs = get_subsets( - project_name, asset_ids=[asset_doc["_id"]], fields=["_id"] - ) - # Collect all subset ids - subset_ids = [ - subset_doc["_id"] - for subset_doc in subset_docs - ] - - # Check if we found anything - assert subset_ids, ( - "No subsets found. Check correct filter. " - "Try this for start `r'.*'`: asset: `{}`" - ).format(asset_doc["name"]) - - last_versions_by_subset_id = get_last_versions( - project_name, subset_ids, fields=["_id", "parent"] - ) - - version_docs_by_id = {} - for version_doc in last_versions_by_subset_id.values(): - version_docs_by_id[version_doc["_id"]] = version_doc - - repre_docs = get_representations( - project_name, - version_ids=version_docs_by_id.keys(), - representation_names=representations - ) - repre_docs_by_version_id = collections.defaultdict(list) - for repre_doc in repre_docs: - version_id = repre_doc["parent"] - repre_docs_by_version_id[version_id].append(repre_doc) - - output_dict = {} - for version_id, repre_docs in repre_docs_by_version_id.items(): - version_doc = version_docs_by_id[version_id] - subset_id = version_doc["parent"] - subset_doc = last_versions_by_subset_id[subset_id] - # Store queried docs by subset name - output_dict[subset_doc["name"]] = { - "representations": repre_docs, - "version": version_doc - } - - return output_dict diff --git a/openpype/hosts/flame/__init__.py b/openpype/hosts/flame/__init__.py index f839357147..b45f107747 100644 --- a/openpype/hosts/flame/__init__.py +++ b/openpype/hosts/flame/__init__.py @@ -1,22 +1,10 @@ -import os - -HOST_DIR = os.path.dirname( - os.path.abspath(__file__) +from .addon import ( + HOST_DIR, + FlameAddon, ) -def add_implementation_envs(env, _app): - # Add requirements to DL_PYTHON_HOOK_PATH - pype_root = os.environ["OPENPYPE_REPOS_ROOT"] - - env["DL_PYTHON_HOOK_PATH"] = os.path.join( - pype_root, "openpype", "hosts", "flame", "startup") - env.pop("QT_AUTO_SCREEN_SCALE_FACTOR", None) - - # Set default values if are not already set via settings - defaults = { - "LOGLEVEL": "DEBUG" - } - for key, value in defaults.items(): - if not env.get(key): - env[key] = value +__all__ = ( + "HOST_DIR", + "FlameAddon", +) diff --git a/openpype/hosts/flame/addon.py b/openpype/hosts/flame/addon.py new file mode 100644 index 0000000000..5a34413bb0 --- /dev/null +++ 
b/openpype/hosts/flame/addon.py @@ -0,0 +1,36 @@ +import os +from openpype.modules import OpenPypeModule +from openpype.modules.interfaces import IHostAddon + +HOST_DIR = os.path.dirname(os.path.abspath(__file__)) + + +class FlameAddon(OpenPypeModule, IHostAddon): + name = "flame" + host_name = "flame" + + def initialize(self, module_settings): + self.enabled = True + + def add_implementation_envs(self, env, _app): + # Add requirements to DL_PYTHON_HOOK_PATH + env["DL_PYTHON_HOOK_PATH"] = os.path.join(HOST_DIR, "startup") + env.pop("QT_AUTO_SCREEN_SCALE_FACTOR", None) + + # Set default values if are not already set via settings + defaults = { + "LOGLEVEL": "DEBUG" + } + for key, value in defaults.items(): + if not env.get(key): + env[key] = value + + def get_launch_hook_paths(self, app): + if app.host_name != self.host_name: + return [] + return [ + os.path.join(HOST_DIR, "hooks") + ] + + def get_workfile_extensions(self): + return [".otoc"] diff --git a/openpype/hosts/flame/api/__init__.py b/openpype/hosts/flame/api/__init__.py index 76c1c93379..c00ee958b6 100644 --- a/openpype/hosts/flame/api/__init__.py +++ b/openpype/hosts/flame/api/__init__.py @@ -51,7 +51,8 @@ from .pipeline import ( ) from .menu import ( FlameMenuProjectConnect, - FlameMenuTimeline + FlameMenuTimeline, + FlameMenuUniversal ) from .plugin import ( Creator, @@ -131,6 +132,7 @@ __all__ = [ # menu "FlameMenuProjectConnect", "FlameMenuTimeline", + "FlameMenuUniversal", # plugin "Creator", diff --git a/openpype/hosts/flame/api/menu.py b/openpype/hosts/flame/api/menu.py index 7f1a6a24e2..f72a352bba 100644 --- a/openpype/hosts/flame/api/menu.py +++ b/openpype/hosts/flame/api/menu.py @@ -201,3 +201,53 @@ class FlameMenuTimeline(_FlameMenuApp): if self.flame: self.flame.execute_shortcut('Rescan Python Hooks') self.log.info('Rescan Python Hooks') + + +class FlameMenuUniversal(_FlameMenuApp): + + # flameMenuProjectconnect app takes care of the preferences dialog as well + + def __init__(self, framework): + _FlameMenuApp.__init__(self, framework) + + def __getattr__(self, name): + def method(*args, **kwargs): + project = self.dynamic_menu_data.get(name) + if project: + self.link_project(project) + return method + + def build_menu(self): + if not self.flame: + return [] + + menu = deepcopy(self.menu) + + menu['actions'].append({ + "name": "Load...", + "execute": lambda x: self.tools_helper.show_loader() + }) + menu['actions'].append({ + "name": "Manage...", + "execute": lambda x: self.tools_helper.show_scene_inventory() + }) + menu['actions'].append({ + "name": "Library...", + "execute": lambda x: self.tools_helper.show_library_loader() + }) + return menu + + def refresh(self, *args, **kwargs): + self.rescan() + + def rescan(self, *args, **kwargs): + if not self.flame: + try: + import flame + self.flame = flame + except ImportError: + self.flame = None + + if self.flame: + self.flame.execute_shortcut('Rescan Python Hooks') + self.log.info('Rescan Python Hooks') diff --git a/openpype/hosts/flame/api/plugin.py b/openpype/hosts/flame/api/plugin.py index efbabb6a55..145b1f0921 100644 --- a/openpype/hosts/flame/api/plugin.py +++ b/openpype/hosts/flame/api/plugin.py @@ -361,6 +361,8 @@ class PublishableClip: index_from_segment_default = False use_shot_name_default = False include_handles_default = False + retimed_handles_default = True + retimed_framerange_default = True def __init__(self, segment, **kwargs): self.rename_index = kwargs["rename_index"] @@ -496,6 +498,14 @@ class PublishableClip: "audio", {}).get("value") or False 
self.include_handles = self.ui_inputs.get( "includeHandles", {}).get("value") or self.include_handles_default + self.retimed_handles = ( + self.ui_inputs.get("retimedHandles", {}).get("value") + or self.retimed_handles_default + ) + self.retimed_framerange = ( + self.ui_inputs.get("retimedFramerange", {}).get("value") + or self.retimed_framerange_default + ) # build subset name from layer name if self.subset_name == "[ track name ]": diff --git a/openpype/hosts/flame/plugins/create/create_shot_clip.py b/openpype/hosts/flame/plugins/create/create_shot_clip.py index fa239ea420..b03a39a7ca 100644 --- a/openpype/hosts/flame/plugins/create/create_shot_clip.py +++ b/openpype/hosts/flame/plugins/create/create_shot_clip.py @@ -276,6 +276,22 @@ class CreateShotClip(opfapi.Creator): "target": "tag", "toolTip": "By default handles are excluded", # noqa "order": 3 + }, + "retimedHandles": { + "value": True, + "type": "QCheckBox", + "label": "Retimed handles", + "target": "tag", + "toolTip": "By default handles are retimed.", # noqa + "order": 4 + }, + "retimedFramerange": { + "value": True, + "type": "QCheckBox", + "label": "Retimed framerange", + "target": "tag", + "toolTip": "By default framerange is retimed.", # noqa + "order": 5 } } } diff --git a/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py b/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py index 992db62c75..d6ff13b059 100644 --- a/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py +++ b/openpype/hosts/flame/plugins/publish/collect_timeline_instances.py @@ -131,6 +131,10 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin): "fps": self.fps, "workfileFrameStart": workfile_start, "sourceFirstFrame": int(first_frame), + "notRetimedHandles": ( + not marker_data.get("retimedHandles")), + "notRetimedFramerange": ( + not marker_data.get("retimedFramerange")), "path": file_path, "flameAddTasks": self.add_tasks, "tasks": { diff --git a/openpype/hosts/flame/plugins/publish/extract_subset_resources.py b/openpype/hosts/flame/plugins/publish/extract_subset_resources.py index 3e1e8db986..1d42330e23 100644 --- a/openpype/hosts/flame/plugins/publish/extract_subset_resources.py +++ b/openpype/hosts/flame/plugins/publish/extract_subset_resources.py @@ -90,26 +90,38 @@ class ExtractSubsetResources(openpype.api.Extractor): handle_end = instance.data["handleEnd"] handles = max(handle_start, handle_end) include_handles = instance.data.get("includeHandles") + retimed_handles = instance.data.get("retimedHandles") # get media source range with handles source_start_handles = instance.data["sourceStartH"] source_end_handles = instance.data["sourceEndH"] + # retime if needed if r_speed != 1.0: - source_start_handles = ( - instance.data["sourceStart"] - r_handle_start) - source_end_handles = ( - source_start_handles - + (r_source_dur - 1) - + r_handle_start - + r_handle_end - ) + if retimed_handles: + # handles are retimed + source_start_handles = ( + instance.data["sourceStart"] - r_handle_start) + source_end_handles = ( + source_start_handles + + (r_source_dur - 1) + + r_handle_start + + r_handle_end + ) + else: + # handles are not retimed + source_end_handles = ( + source_start_handles + + (r_source_dur - 1) + + handle_start + + handle_end + ) # get frame range with handles for representation range frame_start_handle = frame_start - handle_start repre_frame_start = frame_start_handle if include_handles: - if r_speed == 1.0: + if r_speed == 1.0 or not retimed_handles: frame_start_handle = frame_start else: 
frame_start_handle = ( diff --git a/openpype/hosts/flame/startup/openpype_in_flame.py b/openpype/hosts/flame/startup/openpype_in_flame.py index f2ac23b19e..d07aaa6b7d 100644 --- a/openpype/hosts/flame/startup/openpype_in_flame.py +++ b/openpype/hosts/flame/startup/openpype_in_flame.py @@ -73,6 +73,8 @@ def load_apps(): opfapi.FlameMenuProjectConnect(opfapi.CTX.app_framework)) opfapi.CTX.flame_apps.append( opfapi.FlameMenuTimeline(opfapi.CTX.app_framework)) + opfapi.CTX.flame_apps.append( + opfapi.FlameMenuUniversal(opfapi.CTX.app_framework)) opfapi.CTX.app_framework.log.info("Apps are loaded") @@ -191,3 +193,27 @@ def get_timeline_custom_ui_actions(): openpype_install() return _build_app_menu("FlameMenuTimeline") + + +def get_batch_custom_ui_actions(): + """Hook to create submenu in batch + + Returns: + list: menu object + """ + # install openpype and the host + openpype_install() + + return _build_app_menu("FlameMenuUniversal") + + +def get_media_panel_custom_ui_actions(): + """Hook to create submenu in desktop + + Returns: + list: menu object + """ + # install openpype and the host + openpype_install() + + return _build_app_menu("FlameMenuUniversal") diff --git a/openpype/hosts/fusion/__init__.py b/openpype/hosts/fusion/__init__.py index e69de29bb2..ddae01890b 100644 --- a/openpype/hosts/fusion/__init__.py +++ b/openpype/hosts/fusion/__init__.py @@ -0,0 +1,10 @@ +from .addon import ( + FusionAddon, + FUSION_HOST_DIR, +) + + +__all__ = ( + "FusionAddon", + "FUSION_HOST_DIR", +) diff --git a/openpype/hosts/fusion/addon.py b/openpype/hosts/fusion/addon.py new file mode 100644 index 0000000000..e257005061 --- /dev/null +++ b/openpype/hosts/fusion/addon.py @@ -0,0 +1,23 @@ +import os +from openpype.modules import OpenPypeModule +from openpype.modules.interfaces import IHostAddon + +FUSION_HOST_DIR = os.path.dirname(os.path.abspath(__file__)) + + +class FusionAddon(OpenPypeModule, IHostAddon): + name = "fusion" + host_name = "fusion" + + def initialize(self, module_settings): + self.enabled = True + + def get_launch_hook_paths(self, app): + if app.host_name != self.host_name: + return [] + return [ + os.path.join(FUSION_HOST_DIR, "hooks") + ] + + def get_workfile_extensions(self): + return [".comp"] diff --git a/openpype/hosts/fusion/api/pipeline.py b/openpype/hosts/fusion/api/pipeline.py index 54a6c94b60..987eae214b 100644 --- a/openpype/hosts/fusion/api/pipeline.py +++ b/openpype/hosts/fusion/api/pipeline.py @@ -18,12 +18,11 @@ from openpype.pipeline import ( deregister_inventory_action_path, AVALON_CONTAINER_ID, ) -import openpype.hosts.fusion +from openpype.hosts.fusion import FUSION_HOST_DIR log = Logger.get_logger(__name__) -HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.fusion.__file__)) -PLUGINS_DIR = os.path.join(HOST_DIR, "plugins") +PLUGINS_DIR = os.path.join(FUSION_HOST_DIR, "plugins") PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish") LOAD_PATH = os.path.join(PLUGINS_DIR, "load") diff --git a/openpype/hosts/fusion/api/workio.py b/openpype/hosts/fusion/api/workio.py index a1710c6e3a..89752d3e6d 100644 --- a/openpype/hosts/fusion/api/workio.py +++ b/openpype/hosts/fusion/api/workio.py @@ -2,13 +2,11 @@ import sys import os -from openpype.pipeline import HOST_WORKFILE_EXTENSIONS - from .pipeline import get_current_comp def file_extensions(): - return HOST_WORKFILE_EXTENSIONS["fusion"] + return [".comp"] def has_unsaved_changes(): diff --git a/openpype/hosts/fusion/plugins/publish/increment_current_file_deadline.py 
b/openpype/hosts/fusion/plugins/publish/increment_current_file_deadline.py index 6483454d96..5c595638e9 100644 --- a/openpype/hosts/fusion/plugins/publish/increment_current_file_deadline.py +++ b/openpype/hosts/fusion/plugins/publish/increment_current_file_deadline.py @@ -17,9 +17,9 @@ class FusionIncrementCurrentFile(pyblish.api.ContextPlugin): def process(self, context): from openpype.lib import version_up - from openpype.action import get_errored_plugins_from_data + from openpype.pipeline.publish import get_errored_plugins_from_context - errored_plugins = get_errored_plugins_from_data(context) + errored_plugins = get_errored_plugins_from_context(context) if any(plugin.__name__ == "FusionSubmitDeadline" for plugin in errored_plugins): raise RuntimeError("Skipping incrementing current file because " diff --git a/openpype/hosts/fusion/plugins/publish/validate_background_depth.py b/openpype/hosts/fusion/plugins/publish/validate_background_depth.py index a0734d8278..4268fab528 100644 --- a/openpype/hosts/fusion/plugins/publish/validate_background_depth.py +++ b/openpype/hosts/fusion/plugins/publish/validate_background_depth.py @@ -1,6 +1,6 @@ import pyblish.api -from openpype import action +from openpype.pipeline.publish import RepairAction class ValidateBackgroundDepth(pyblish.api.InstancePlugin): @@ -8,7 +8,7 @@ class ValidateBackgroundDepth(pyblish.api.InstancePlugin): order = pyblish.api.ValidatorOrder label = "Validate Background Depth 32 bit" - actions = [action.RepairAction] + actions = [RepairAction] hosts = ["fusion"] families = ["render"] optional = True diff --git a/openpype/hosts/fusion/plugins/publish/validate_create_folder_checked.py b/openpype/hosts/fusion/plugins/publish/validate_create_folder_checked.py index 45ed53f65c..f6beefefc1 100644 --- a/openpype/hosts/fusion/plugins/publish/validate_create_folder_checked.py +++ b/openpype/hosts/fusion/plugins/publish/validate_create_folder_checked.py @@ -1,6 +1,6 @@ import pyblish.api -from openpype import action +from openpype.pipeline.publish import RepairAction class ValidateCreateFolderChecked(pyblish.api.InstancePlugin): @@ -11,7 +11,7 @@ class ValidateCreateFolderChecked(pyblish.api.InstancePlugin): """ order = pyblish.api.ValidatorOrder - actions = [action.RepairAction] + actions = [RepairAction] label = "Validate Create Folder Checked" families = ["render"] hosts = ["fusion"] diff --git a/openpype/hosts/harmony/plugins/publish/increment_workfile.py b/openpype/hosts/harmony/plugins/publish/increment_workfile.py index 417377fff8..1caf581567 100644 --- a/openpype/hosts/harmony/plugins/publish/increment_workfile.py +++ b/openpype/hosts/harmony/plugins/publish/increment_workfile.py @@ -1,7 +1,7 @@ import os import pyblish.api -from openpype.action import get_errored_plugins_from_data +from openpype.pipeline.publish import get_errored_plugins_from_context from openpype.lib import version_up import openpype.hosts.harmony.api as harmony @@ -19,7 +19,7 @@ class IncrementWorkfile(pyblish.api.InstancePlugin): optional = True def process(self, instance): - errored_plugins = get_errored_plugins_from_data(instance.context) + errored_plugins = get_errored_plugins_from_context(instance.context) if errored_plugins: raise RuntimeError( "Skipping incrementing current file because publishing failed." 
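The fusion and harmony hunks above repeat the import migration that runs through this whole changeset: openpype.api.Extractor becomes publish.Extractor from openpype.pipeline, and openpype.action.get_errored_plugins_from_data becomes openpype.pipeline.publish.get_errored_plugins_from_context. Below is a minimal sketch of plugins written directly against the new locations; the "examplehost" host and both plugin classes are hypothetical, and the sketch assumes the relocated Extractor keeps its staging_dir helper.

    import pyblish.api

    from openpype.lib import version_up
    from openpype.pipeline import publish
    from openpype.pipeline.publish import get_errored_plugins_from_context


    class ExtractExample(publish.Extractor):
        """Extractor built on the relocated base class."""

        label = "Extract Example"
        hosts = ["examplehost"]
        families = ["example"]

        def process(self, instance):
            # Assumed helper from the Extractor base class.
            staging_dir = self.staging_dir(instance)
            self.log.info("Extracting to {}".format(staging_dir))


    class IncrementExampleWorkfile(pyblish.api.ContextPlugin):
        """Mirrors the increment plugins updated in the hunks above."""

        order = pyblish.api.IntegratorOrder + 1
        label = "Increment Example Workfile"
        hosts = ["examplehost"]

        def process(self, context):
            # Renamed helper; it still receives the publish context.
            errored_plugins = get_errored_plugins_from_context(context)
            if errored_plugins:
                raise RuntimeError(
                    "Skipping incrementing current file because "
                    "publishing failed."
                )
            new_path = version_up(context.data["currentFile"])
            self.log.info("Incremented workfile would be: {}".format(new_path))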
diff --git a/openpype/hosts/harmony/plugins/publish/validate_instances.py b/openpype/hosts/harmony/plugins/publish/validate_instances.py index 373ef94cc3..ac367082ef 100644 --- a/openpype/hosts/harmony/plugins/publish/validate_instances.py +++ b/openpype/hosts/harmony/plugins/publish/validate_instances.py @@ -1,9 +1,12 @@ import os import pyblish.api -import openpype.api -from openpype.pipeline import PublishXmlValidationError + import openpype.hosts.harmony.api as harmony +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishXmlValidationError, +) class ValidateInstanceRepair(pyblish.api.Action): @@ -37,7 +40,7 @@ class ValidateInstance(pyblish.api.InstancePlugin): label = "Validate Instance" hosts = ["harmony"] actions = [ValidateInstanceRepair] - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder def process(self, instance): instance_asset = instance.data["asset"] diff --git a/openpype/hosts/hiero/plugins/publish/precollect_instances.py b/openpype/hosts/hiero/plugins/publish/precollect_instances.py index 0c7dbc1f22..84f2927fc7 100644 --- a/openpype/hosts/hiero/plugins/publish/precollect_instances.py +++ b/openpype/hosts/hiero/plugins/publish/precollect_instances.py @@ -318,10 +318,9 @@ class PrecollectInstances(pyblish.api.ContextPlugin): @staticmethod def create_otio_time_range_from_timeline_item_data(track_item): - speed = track_item.playbackSpeed() timeline = phiero.get_current_sequence() frame_start = int(track_item.timelineIn()) - frame_duration = int((track_item.duration() - 1) / speed) + frame_duration = int(track_item.duration()) fps = timeline.framerate().toFloat() return hiero_export.create_otio_time_range( diff --git a/openpype/hosts/houdini/__init__.py b/openpype/hosts/houdini/__init__.py index a3ee38db8d..38bf1fcc2d 100644 --- a/openpype/hosts/houdini/__init__.py +++ b/openpype/hosts/houdini/__init__.py @@ -1,38 +1,10 @@ -import os +from .addon import ( + HoudiniAddon, + HOUDINI_HOST_DIR, +) -def add_implementation_envs(env, _app): - # Add requirements to HOUDINI_PATH and HOUDINI_MENU_PATH - pype_root = os.environ["OPENPYPE_REPOS_ROOT"] - - startup_path = os.path.join( - pype_root, "openpype", "hosts", "houdini", "startup" - ) - new_houdini_path = [startup_path] - new_houdini_menu_path = [startup_path] - - old_houdini_path = env.get("HOUDINI_PATH") or "" - old_houdini_menu_path = env.get("HOUDINI_MENU_PATH") or "" - - for path in old_houdini_path.split(os.pathsep): - if not path: - continue - - norm_path = os.path.normpath(path) - if norm_path not in new_houdini_path: - new_houdini_path.append(norm_path) - - for path in old_houdini_menu_path.split(os.pathsep): - if not path: - continue - - norm_path = os.path.normpath(path) - if norm_path not in new_houdini_menu_path: - new_houdini_menu_path.append(norm_path) - - # Add ampersand for unknown reason (Maybe is needed in Houdini?) 
- new_houdini_path.append("&") - new_houdini_menu_path.append("&") - - env["HOUDINI_PATH"] = os.pathsep.join(new_houdini_path) - env["HOUDINI_MENU_PATH"] = os.pathsep.join(new_houdini_menu_path) +__all__ = ( + "HoudiniAddon", + "HOUDINI_HOST_DIR", +) diff --git a/openpype/hosts/houdini/addon.py b/openpype/hosts/houdini/addon.py new file mode 100644 index 0000000000..8d88e83c56 --- /dev/null +++ b/openpype/hosts/houdini/addon.py @@ -0,0 +1,55 @@ +import os +from openpype.modules import OpenPypeModule +from openpype.modules.interfaces import IHostAddon + +HOUDINI_HOST_DIR = os.path.dirname(os.path.abspath(__file__)) + + +class HoudiniAddon(OpenPypeModule, IHostAddon): + name = "houdini" + host_name = "houdini" + + def initialize(self, module_settings): + self.enabled = True + + def add_implementation_envs(self, env, _app): + # Add requirements to HOUDINI_PATH and HOUDINI_MENU_PATH + startup_path = os.path.join(HOUDINI_HOST_DIR, "startup") + new_houdini_path = [startup_path] + new_houdini_menu_path = [startup_path] + + old_houdini_path = env.get("HOUDINI_PATH") or "" + old_houdini_menu_path = env.get("HOUDINI_MENU_PATH") or "" + + for path in old_houdini_path.split(os.pathsep): + if not path: + continue + + norm_path = os.path.normpath(path) + if norm_path not in new_houdini_path: + new_houdini_path.append(norm_path) + + for path in old_houdini_menu_path.split(os.pathsep): + if not path: + continue + + norm_path = os.path.normpath(path) + if norm_path not in new_houdini_menu_path: + new_houdini_menu_path.append(norm_path) + + # Add ampersand for unknown reason (Maybe is needed in Houdini?) + new_houdini_path.append("&") + new_houdini_menu_path.append("&") + + env["HOUDINI_PATH"] = os.pathsep.join(new_houdini_path) + env["HOUDINI_MENU_PATH"] = os.pathsep.join(new_houdini_menu_path) + + def get_launch_hook_paths(self, app): + if app.host_name != self.host_name: + return [] + return [ + os.path.join(HOUDINI_HOST_DIR, "hooks") + ] + + def get_workfile_extensions(self): + return [".hip", ".hiplc", ".hipnc"] diff --git a/openpype/hosts/houdini/api/pipeline.py b/openpype/hosts/houdini/api/pipeline.py index b5f5459392..2ae8a4dbf7 100644 --- a/openpype/hosts/houdini/api/pipeline.py +++ b/openpype/hosts/houdini/api/pipeline.py @@ -13,7 +13,7 @@ from openpype.pipeline import ( AVALON_CONTAINER_ID, ) from openpype.pipeline.load import any_outdated_containers -import openpype.hosts.houdini +from openpype.hosts.houdini import HOUDINI_HOST_DIR from openpype.hosts.houdini.api import lib from openpype.lib import ( @@ -28,8 +28,7 @@ log = logging.getLogger("openpype.hosts.houdini") AVALON_CONTAINERS = "/obj/AVALON_CONTAINERS" IS_HEADLESS = not hasattr(hou, "ui") -HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.houdini.__file__)) -PLUGINS_DIR = os.path.join(HOST_DIR, "plugins") +PLUGINS_DIR = os.path.join(HOUDINI_HOST_DIR, "plugins") PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish") LOAD_PATH = os.path.join(PLUGINS_DIR, "load") CREATE_PATH = os.path.join(PLUGINS_DIR, "create") @@ -66,7 +65,7 @@ def install(): self._has_been_setup = True # add houdini vendor packages - hou_pythonpath = os.path.join(os.path.dirname(HOST_DIR), "vendor") + hou_pythonpath = os.path.join(HOUDINI_HOST_DIR, "vendor") sys.path.append(hou_pythonpath) diff --git a/openpype/hosts/houdini/api/workio.py b/openpype/hosts/houdini/api/workio.py index e0213023fd..5f7efff333 100644 --- a/openpype/hosts/houdini/api/workio.py +++ b/openpype/hosts/houdini/api/workio.py @@ -2,11 +2,10 @@ import os import hou -from openpype.pipeline import 
HOST_WORKFILE_EXTENSIONS


 def file_extensions():
-    return HOST_WORKFILE_EXTENSIONS["houdini"]
+    return [".hip", ".hiplc", ".hipnc"]


 def has_unsaved_changes():
diff --git a/openpype/hosts/houdini/hooks/set_paths.py b/openpype/hosts/houdini/hooks/set_paths.py
index cd2f98fb76..04a33b1643 100644
--- a/openpype/hosts/houdini/hooks/set_paths.py
+++ b/openpype/hosts/houdini/hooks/set_paths.py
@@ -1,5 +1,4 @@ from openpype.lib import PreLaunchHook
-import os


 class SetPath(PreLaunchHook):
@@ -15,4 +14,4 @@
             self.log.warning("BUG: Workdir is not filled.")
             return

-        os.chdir(workdir)
+        self.launch_context.kwargs["cwd"] = workdir
diff --git a/openpype/hosts/houdini/plugins/publish/collect_current_file.py b/openpype/hosts/houdini/plugins/publish/collect_current_file.py
index c0b987ebbc..1383c274a2 100644
--- a/openpype/hosts/houdini/plugins/publish/collect_current_file.py
+++ b/openpype/hosts/houdini/plugins/publish/collect_current_file.py
@@ -1,27 +1,28 @@ import os

 import hou
+from openpype.pipeline import legacy_io
 import pyblish.api


 class CollectHoudiniCurrentFile(pyblish.api.ContextPlugin):
     """Inject the current working file into context"""

-    order = pyblish.api.CollectorOrder - 0.5
+    order = pyblish.api.CollectorOrder - 0.01
     label = "Houdini Current File"
     hosts = ["houdini"]

     def process(self, context):
         """Inject the current working file"""
-        filepath = hou.hipFile.path()
-        if not os.path.exists(filepath):
+        current_file = hou.hipFile.path()
+        if not os.path.exists(current_file):
             # By default Houdini will even point a new scene to a path.
+            # However if the file is not saved at all and does not exist,
+            # we assume the user never set it.
-            filepath = ""
+            current_file = ""
-        elif os.path.basename(filepath) == "untitled.hip":
+        elif os.path.basename(current_file) == "untitled.hip":
             # Due to even a new file being called 'untitled.hip' we are unable
             # to confirm the current scene was ever saved because the file
             # could have existed already. We will allow it if the file exists,
@@ -33,4 +34,43 @@
             "saved correctly."
) - context.data["currentFile"] = filepath + context.data["currentFile"] = current_file + + folder, file = os.path.split(current_file) + filename, ext = os.path.splitext(file) + + task = legacy_io.Session["AVALON_TASK"] + + data = {} + + # create instance + instance = context.create_instance(name=filename) + subset = 'workfile' + task.capitalize() + + data.update({ + "subset": subset, + "asset": os.getenv("AVALON_ASSET", None), + "label": subset, + "publish": True, + "family": 'workfile', + "families": ['workfile'], + "setMembers": [current_file], + "frameStart": context.data['frameStart'], + "frameEnd": context.data['frameEnd'], + "handleStart": context.data['handleStart'], + "handleEnd": context.data['handleEnd'] + }) + + data['representations'] = [{ + 'name': ext.lstrip("."), + 'ext': ext.lstrip("."), + 'files': file, + "stagingDir": folder, + }] + + instance.data.update(data) + + self.log.info('Collected instance: {}'.format(file)) + self.log.info('Scene path: {}'.format(current_file)) + self.log.info('staging Dir: {}'.format(folder)) + self.log.info('subset: {}'.format(subset)) diff --git a/openpype/hosts/houdini/plugins/publish/increment_current_file.py b/openpype/hosts/houdini/plugins/publish/increment_current_file.py index c5cacd1880..5cb14d732a 100644 --- a/openpype/hosts/houdini/plugins/publish/increment_current_file.py +++ b/openpype/hosts/houdini/plugins/publish/increment_current_file.py @@ -1,8 +1,8 @@ import pyblish.api -from openpype.api import version_up -from openpype.action import get_errored_plugins_from_data +from openpype.lib import version_up from openpype.pipeline import registered_host +from openpype.pipeline.publish import get_errored_plugins_from_context class IncrementCurrentFile(pyblish.api.InstancePlugin): @@ -30,7 +30,7 @@ class IncrementCurrentFile(pyblish.api.InstancePlugin): context.data[key] = True context = instance.context - errored_plugins = get_errored_plugins_from_data(context) + errored_plugins = get_errored_plugins_from_context(context) if any( plugin.__name__ == "HoudiniSubmitPublishDeadline" for plugin in errored_plugins diff --git a/openpype/hosts/houdini/plugins/publish/increment_current_file_deadline.py b/openpype/hosts/houdini/plugins/publish/increment_current_file_deadline.py index faa015f739..cb0d7e3680 100644 --- a/openpype/hosts/houdini/plugins/publish/increment_current_file_deadline.py +++ b/openpype/hosts/houdini/plugins/publish/increment_current_file_deadline.py @@ -1,8 +1,8 @@ import pyblish.api import hou -from openpype.api import version_up -from openpype.action import get_errored_plugins_from_data +from openpype.lib import version_up +from openpype.pipeline.publish import get_errored_plugins_from_context class IncrementCurrentFileDeadline(pyblish.api.ContextPlugin): @@ -19,7 +19,7 @@ class IncrementCurrentFileDeadline(pyblish.api.ContextPlugin): def process(self, context): - errored_plugins = get_errored_plugins_from_data(context) + errored_plugins = get_errored_plugins_from_context(context) if any( plugin.__name__ == "HoudiniSubmitPublishDeadline" for plugin in errored_plugins diff --git a/openpype/hosts/houdini/plugins/publish/valiate_vdb_input_node.py b/openpype/hosts/houdini/plugins/publish/valiate_vdb_input_node.py index 0ae1bc94eb..ac408bc842 100644 --- a/openpype/hosts/houdini/plugins/publish/valiate_vdb_input_node.py +++ b/openpype/hosts/houdini/plugins/publish/valiate_vdb_input_node.py @@ -1,5 +1,5 @@ import pyblish.api -import openpype.api +from openpype.pipeline.publish import ValidateContentsOrder class 
ValidateVDBInputNode(pyblish.api.InstancePlugin): @@ -16,7 +16,7 @@ class ValidateVDBInputNode(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + 0.1 + order = ValidateContentsOrder + 0.1 families = ["vdbcache"] hosts = ["houdini"] label = "Validate Input Node (VDB)" diff --git a/openpype/hosts/houdini/plugins/publish/validate_abc_primitive_to_detail.py b/openpype/hosts/houdini/plugins/publish/validate_abc_primitive_to_detail.py index 3e17d3e8de..ea800707fb 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_abc_primitive_to_detail.py +++ b/openpype/hosts/houdini/plugins/publish/validate_abc_primitive_to_detail.py @@ -1,8 +1,9 @@ import pyblish.api -import openpype.api from collections import defaultdict +from openpype.pipeline.publish import ValidateContentsOrder + class ValidateAbcPrimitiveToDetail(pyblish.api.InstancePlugin): """Validate Alembic ROP Primitive to Detail attribute is consistent. @@ -15,7 +16,7 @@ class ValidateAbcPrimitiveToDetail(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + 0.1 + order = ValidateContentsOrder + 0.1 families = ["pointcache"] hosts = ["houdini"] label = "Validate Primitive to Detail (Abc)" diff --git a/openpype/hosts/houdini/plugins/publish/validate_alembic_face_sets.py b/openpype/hosts/houdini/plugins/publish/validate_alembic_face_sets.py index e9126ffef0..cbed3ea235 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_alembic_face_sets.py +++ b/openpype/hosts/houdini/plugins/publish/validate_alembic_face_sets.py @@ -1,5 +1,6 @@ import pyblish.api -import openpype.api + +from openpype.pipeline.publish import ValidateContentsOrder class ValidateAlembicROPFaceSets(pyblish.api.InstancePlugin): @@ -17,7 +18,7 @@ class ValidateAlembicROPFaceSets(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + 0.1 + order = ValidateContentsOrder + 0.1 families = ["pointcache"] hosts = ["houdini"] label = "Validate Alembic ROP Face Sets" diff --git a/openpype/hosts/houdini/plugins/publish/validate_alembic_input_node.py b/openpype/hosts/houdini/plugins/publish/validate_alembic_input_node.py index 8d7e3b611f..2625ae5f83 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_alembic_input_node.py +++ b/openpype/hosts/houdini/plugins/publish/validate_alembic_input_node.py @@ -1,5 +1,6 @@ import pyblish.api -import colorbleed.api + +from openpype.pipeline.publish import ValidateContentsOrder class ValidateAlembicInputNode(pyblish.api.InstancePlugin): @@ -11,7 +12,7 @@ class ValidateAlembicInputNode(pyblish.api.InstancePlugin): """ - order = colorbleed.api.ValidateContentsOrder + 0.1 + order = ValidateContentsOrder + 0.1 families = ["pointcache"] hosts = ["houdini"] label = "Validate Input Node (Abc)" diff --git a/openpype/hosts/houdini/plugins/publish/validate_bypass.py b/openpype/hosts/houdini/plugins/publish/validate_bypass.py index fc4e18f701..7cf8da69d6 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_bypass.py +++ b/openpype/hosts/houdini/plugins/publish/validate_bypass.py @@ -1,5 +1,5 @@ import pyblish.api -import openpype.api +from openpype.pipeline.publish import ValidateContentsOrder class ValidateBypassed(pyblish.api.InstancePlugin): @@ -11,7 +11,7 @@ class ValidateBypassed(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder - 0.1 + order = ValidateContentsOrder - 0.1 families = ["*"] hosts = ["houdini"] label = "Validate ROP Bypass" diff --git a/openpype/hosts/houdini/plugins/publish/validate_camera_rop.py 
b/openpype/hosts/houdini/plugins/publish/validate_camera_rop.py index a0919e1323..d414920f8b 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_camera_rop.py +++ b/openpype/hosts/houdini/plugins/publish/validate_camera_rop.py @@ -1,11 +1,11 @@ import pyblish.api -import openpype.api +from openpype.pipeline.publish import ValidateContentsOrder class ValidateCameraROP(pyblish.api.InstancePlugin): """Validate Camera ROP settings.""" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ["camera"] hosts = ["houdini"] label = "Camera ROP" diff --git a/openpype/hosts/houdini/plugins/publish/validate_mkpaths_toggled.py b/openpype/hosts/houdini/plugins/publish/validate_mkpaths_toggled.py index cd72877949..be6a798a95 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_mkpaths_toggled.py +++ b/openpype/hosts/houdini/plugins/publish/validate_mkpaths_toggled.py @@ -1,11 +1,11 @@ import pyblish.api -import openpype.api +from openpype.pipeline.publish import ValidateContentsOrder class ValidateIntermediateDirectoriesChecked(pyblish.api.InstancePlugin): """Validate Create Intermediate Directories is enabled on ROP node.""" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ["pointcache", "camera", "vdbcache"] hosts = ["houdini"] label = "Create Intermediate Directories Checked" diff --git a/openpype/hosts/houdini/plugins/publish/validate_no_errors.py b/openpype/hosts/houdini/plugins/publish/validate_no_errors.py index f58e5f8d7d..76635d4ed5 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_no_errors.py +++ b/openpype/hosts/houdini/plugins/publish/validate_no_errors.py @@ -1,6 +1,6 @@ import pyblish.api -import openpype.api import hou +from openpype.pipeline.publish import ValidateContentsOrder def cook_in_range(node, start, end): @@ -28,7 +28,7 @@ def get_errors(node): class ValidateNoErrors(pyblish.api.InstancePlugin): """Validate the Instance has no current cooking errors.""" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ["houdini"] label = "Validate no errors" diff --git a/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py b/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py index 1eb36763bb..7a8cd04f15 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py +++ b/openpype/hosts/houdini/plugins/publish/validate_primitive_hierarchy_paths.py @@ -1,5 +1,5 @@ import pyblish.api -import openpype.api +from openpype.pipeline.publish import ValidateContentsOrder class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin): @@ -11,7 +11,7 @@ class ValidatePrimitiveHierarchyPaths(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + 0.1 + order = ValidateContentsOrder + 0.1 families = ["pointcache"] hosts = ["houdini"] label = "Validate Prims Hierarchy Path" diff --git a/openpype/hosts/houdini/plugins/publish/validate_remote_publish.py b/openpype/hosts/houdini/plugins/publish/validate_remote_publish.py index 95c66edff0..0ab182c584 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_remote_publish.py +++ b/openpype/hosts/houdini/plugins/publish/validate_remote_publish.py @@ -1,7 +1,7 @@ import pyblish.api -import openpype.api from openpype.hosts.houdini.api import lib +from openpype.pipeline.publish import RepairContextAction import hou @@ -14,7 +14,7 @@ class ValidateRemotePublishOutNode(pyblish.api.ContextPlugin): hosts = ["houdini"] targets = 
["deadline"] label = "Remote Publish ROP node" - actions = [openpype.api.RepairContextAction] + actions = [RepairContextAction] def process(self, context): diff --git a/openpype/hosts/houdini/plugins/publish/validate_remote_publish_enabled.py b/openpype/hosts/houdini/plugins/publish/validate_remote_publish_enabled.py index b681fd0ee1..afc8df7528 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_remote_publish_enabled.py +++ b/openpype/hosts/houdini/plugins/publish/validate_remote_publish_enabled.py @@ -1,7 +1,7 @@ import pyblish.api -import openpype.api import hou +from openpype.pipeline.publish import RepairContextAction class ValidateRemotePublishEnabled(pyblish.api.ContextPlugin): @@ -12,7 +12,7 @@ class ValidateRemotePublishEnabled(pyblish.api.ContextPlugin): hosts = ["houdini"] targets = ["deadline"] label = "Remote Publish ROP enabled" - actions = [openpype.api.RepairContextAction] + actions = [RepairContextAction] def process(self, context): diff --git a/openpype/hosts/houdini/plugins/publish/validate_usd_shade_model_exists.py b/openpype/hosts/houdini/plugins/publish/validate_usd_shade_model_exists.py index b979b87d84..f08c7c72c5 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_usd_shade_model_exists.py +++ b/openpype/hosts/houdini/plugins/publish/validate_usd_shade_model_exists.py @@ -3,14 +3,14 @@ import re import pyblish.api from openpype.client import get_subset_by_name -import openpype.api from openpype.pipeline import legacy_io +from openpype.pipeline.publish import ValidateContentsOrder class ValidateUSDShadeModelExists(pyblish.api.InstancePlugin): """Validate the Instance has no current cooking errors.""" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ["houdini"] families = ["usdShade"] label = "USD Shade model exists" diff --git a/openpype/hosts/houdini/plugins/publish/validate_usd_shade_workspace.py b/openpype/hosts/houdini/plugins/publish/validate_usd_shade_workspace.py index a77ca2f3cb..a4902b48a9 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_usd_shade_workspace.py +++ b/openpype/hosts/houdini/plugins/publish/validate_usd_shade_workspace.py @@ -1,5 +1,5 @@ import pyblish.api -import openpype.api +from openpype.pipeline.publish import ValidateContentsOrder import hou @@ -12,7 +12,7 @@ class ValidateUsdShadeWorkspace(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ["houdini"] families = ["usdShade"] label = "USD Shade Workspace" diff --git a/openpype/hosts/houdini/plugins/publish/validate_vdb_input_node.py b/openpype/hosts/houdini/plugins/publish/validate_vdb_input_node.py index 0ae1bc94eb..ac408bc842 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_vdb_input_node.py +++ b/openpype/hosts/houdini/plugins/publish/validate_vdb_input_node.py @@ -1,5 +1,5 @@ import pyblish.api -import openpype.api +from openpype.pipeline.publish import ValidateContentsOrder class ValidateVDBInputNode(pyblish.api.InstancePlugin): @@ -16,7 +16,7 @@ class ValidateVDBInputNode(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + 0.1 + order = ValidateContentsOrder + 0.1 families = ["vdbcache"] hosts = ["houdini"] label = "Validate Input Node (VDB)" diff --git a/openpype/hosts/houdini/plugins/publish/validate_vdb_output_node.py b/openpype/hosts/houdini/plugins/publish/validate_vdb_output_node.py index 1ba840b71d..55ed581d4c 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_vdb_output_node.py +++ 
b/openpype/hosts/houdini/plugins/publish/validate_vdb_output_node.py @@ -1,6 +1,6 @@ import pyblish.api -import openpype.api import hou +from openpype.pipeline.publish import ValidateContentsOrder class ValidateVDBOutputNode(pyblish.api.InstancePlugin): @@ -17,7 +17,7 @@ class ValidateVDBOutputNode(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + 0.1 + order = ValidateContentsOrder + 0.1 families = ["vdbcache"] hosts = ["houdini"] label = "Validate Output Node (VDB)" diff --git a/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py b/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py new file mode 100644 index 0000000000..79b3e894e5 --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/validate_workfile_paths.py @@ -0,0 +1,57 @@ +# -*- coding: utf-8 -*- +import openpype.api +import pyblish.api +import hou + + +class ValidateWorkfilePaths(pyblish.api.InstancePlugin): + """Validate workfile paths so they are absolute.""" + + order = pyblish.api.ValidatorOrder + families = ["workfile"] + hosts = ["houdini"] + label = "Validate Workfile Paths" + actions = [openpype.api.RepairAction] + optional = True + + node_types = ["file", "alembic"] + prohibited_vars = ["$HIP", "$JOB"] + + def process(self, instance): + invalid = self.get_invalid() + self.log.info( + "node types to check: {}".format(", ".join(self.node_types))) + self.log.info( + "prohibited vars: {}".format(", ".join(self.prohibited_vars)) + ) + if invalid: + for param in invalid: + self.log.error( + "{}: {}".format(param.path(), param.unexpandedString())) + + raise RuntimeError("Invalid paths found") + + @classmethod + def get_invalid(cls): + invalid = [] + for param, _ in hou.fileReferences(): + # skip nodes we are not interested in + if param.node().type().name() not in cls.node_types: + continue + + if any( + v for v in cls.prohibited_vars + if v in param.unexpandedString()): + invalid.append(param) + + return invalid + + @classmethod + def repair(cls, instance): + invalid = cls.get_invalid() + for param in invalid: + cls.log.info("processing: {}".format(param.path())) + cls.log.info("Replacing {} for {}".format( + param.unexpandedString(), + hou.text.expandString(param.unexpandedString()))) + param.set(hou.text.expandString(param.unexpandedString())) diff --git a/openpype/hosts/houdini/startup/python3.9libs/pythonrc.py b/openpype/hosts/houdini/startup/python3.9libs/pythonrc.py new file mode 100644 index 0000000000..afadbffd3e --- /dev/null +++ b/openpype/hosts/houdini/startup/python3.9libs/pythonrc.py @@ -0,0 +1,10 @@ +from openpype.pipeline import install_host +from openpype.hosts.houdini import api + + +def main(): + print("Installing OpenPype ...") + install_host(api) + + +main() diff --git a/openpype/hosts/maya/__init__.py b/openpype/hosts/maya/__init__.py index 860db766f3..bb940a881b 100644 --- a/openpype/hosts/maya/__init__.py +++ b/openpype/hosts/maya/__init__.py @@ -1,6 +1,10 @@ -from .addon import MayaAddon +from .addon import ( + MayaAddon, + MAYA_ROOT_DIR, +) __all__ = ( "MayaAddon", + "MAYA_ROOT_DIR", ) diff --git a/openpype/hosts/maya/api/action.py b/openpype/hosts/maya/api/action.py index 90605734e7..065fdf3691 100644 --- a/openpype/hosts/maya/api/action.py +++ b/openpype/hosts/maya/api/action.py @@ -5,7 +5,7 @@ import pyblish.api from openpype.client import get_asset_by_name from openpype.pipeline import legacy_io -from openpype.api import get_errored_instances_from_context +from openpype.pipeline.publish import get_errored_instances_from_context class 
GenerateUUIDsOnInvalidAction(pyblish.api.Action): diff --git a/openpype/hosts/maya/api/menu.py b/openpype/hosts/maya/api/menu.py index ebba706a6c..666f555660 100644 --- a/openpype/hosts/maya/api/menu.py +++ b/openpype/hosts/maya/api/menu.py @@ -104,13 +104,6 @@ def install(): cmds.menuItem(divider=True) - cmds.menuItem( - "Set Render Settings", - command=lambda *args: lib_rendersettings.RenderSettings().set_default_renderer_settings() # noqa - ) - - cmds.menuItem(divider=True) - cmds.menuItem( "Work Files...", command=lambda *args: host_tools.show_workfiles( @@ -132,6 +125,12 @@ def install(): "Set Colorspace", command=lambda *args: lib.set_colorspace(), ) + + cmds.menuItem( + "Set Render Settings", + command=lambda *args: lib_rendersettings.RenderSettings().set_default_renderer_settings() # noqa + ) + cmds.menuItem(divider=True, parent=MENU_NAME) cmds.menuItem( "Build First Workfile", diff --git a/openpype/hosts/maya/api/pipeline.py b/openpype/hosts/maya/api/pipeline.py index f565f6a308..c963b5d996 100644 --- a/openpype/hosts/maya/api/pipeline.py +++ b/openpype/hosts/maya/api/pipeline.py @@ -9,14 +9,17 @@ import maya.api.OpenMaya as om import pyblish.api from openpype.settings import get_project_settings -from openpype.host import HostBase, IWorkfileHost, ILoadHost -import openpype.hosts.maya +from openpype.host import ( + HostBase, + IWorkfileHost, + ILoadHost, + HostDirmap, +) from openpype.tools.utils import host_tools from openpype.lib import ( register_event_callback, emit_event ) -from openpype.lib.path_tools import HostDirmap from openpype.pipeline import ( legacy_io, register_loader_plugin_path, @@ -28,7 +31,9 @@ from openpype.pipeline import ( AVALON_CONTAINER_ID, ) from openpype.pipeline.load import any_outdated_containers +from openpype.hosts.maya import MAYA_ROOT_DIR from openpype.hosts.maya.lib import copy_workspace_mel + from . import menu, lib from .workio import ( open_file, @@ -41,8 +46,7 @@ from .workio import ( log = logging.getLogger("openpype.hosts.maya") -HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.maya.__file__)) -PLUGINS_DIR = os.path.join(HOST_DIR, "plugins") +PLUGINS_DIR = os.path.join(MAYA_ROOT_DIR, "plugins") PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish") LOAD_PATH = os.path.join(PLUGINS_DIR, "load") CREATE_PATH = os.path.join(PLUGINS_DIR, "create") @@ -59,9 +63,10 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost): self._op_events = {} def install(self): - project_settings = get_project_settings(os.getenv("AVALON_PROJECT")) + project_name = os.getenv("AVALON_PROJECT") + project_settings = get_project_settings(project_name) # process path mapping - dirmap_processor = MayaDirmap("maya", project_settings) + dirmap_processor = MayaDirmap("maya", project_name, project_settings) dirmap_processor.process_dirmap() pyblish.api.register_plugin_path(PUBLISH_PATH) @@ -344,21 +349,13 @@ def containerise(name, ("id", AVALON_CONTAINER_ID), ("name", name), ("namespace", namespace), - ("loader", str(loader)), + ("loader", loader), ("representation", context["representation"]["_id"]), ] for key, value in data: - if not value: - continue - - if isinstance(value, (int, float)): - cmds.addAttr(container, longName=key, attributeType="short") - cmds.setAttr(container + "." + key, value) - - else: - cmds.addAttr(container, longName=key, dataType="string") - cmds.setAttr(container + "." + key, value, type="string") + cmds.addAttr(container, longName=key, dataType="string") + cmds.setAttr(container + "." 
+ key, str(value), type="string") main_container = cmds.ls(AVALON_CONTAINERS, type="objectSet") if not main_container: diff --git a/openpype/hosts/maya/plugins/inventory/select_containers.py b/openpype/hosts/maya/plugins/inventory/select_containers.py new file mode 100644 index 0000000000..f85bf17ab0 --- /dev/null +++ b/openpype/hosts/maya/plugins/inventory/select_containers.py @@ -0,0 +1,46 @@ +from maya import cmds + +from openpype.pipeline import InventoryAction, registered_host +from openpype.hosts.maya.api.lib import get_container_members + + +class SelectInScene(InventoryAction): + """Select nodes in the scene from selected containers in scene inventory""" + + label = "Select in scene" + icon = "search" + color = "#888888" + order = 99 + + def process(self, containers): + + all_members = [] + for container in containers: + members = get_container_members(container) + all_members.extend(members) + cmds.select(all_members, replace=True, noExpand=True) + + +class HighlightBySceneSelection(InventoryAction): + """Select containers in scene inventory from the current scene selection""" + + label = "Highlight by scene selection" + icon = "search" + color = "#888888" + order = 100 + + def process(self, containers): + + selection = set(cmds.ls(selection=True, long=True, objectsOnly=True)) + host = registered_host() + + to_select = [] + for container in host.get_containers(): + members = get_container_members(container) + if any(member in selection for member in members): + to_select.append(container["objectName"]) + + return { + "objectNames": to_select, + "options": {"clear": True} + } diff --git a/openpype/hosts/maya/plugins/publish/collect_assembly.py b/openpype/hosts/maya/plugins/publish/collect_assembly.py index 1a65bf1fde..2aef9ab908 100644 --- a/openpype/hosts/maya/plugins/publish/collect_assembly.py +++ b/openpype/hosts/maya/plugins/publish/collect_assembly.py @@ -70,7 +70,7 @@ class CollectAssembly(pyblish.api.InstancePlugin): data[representation_id].append(instance_data) instance.data["scenedata"] = dict(data) - instance.data["hierarchy"] = list(set(hierarchy_nodes)) + instance.data["nodesHierarchy"] = list(set(hierarchy_nodes)) def get_file_rule(self, rule): return mel.eval('workspace -query -fileRuleEntry "{}"'.format(rule)) diff --git a/openpype/hosts/maya/plugins/publish/collect_render.py b/openpype/hosts/maya/plugins/publish/collect_render.py index ebda5e190d..14aac2f206 100644 --- a/openpype/hosts/maya/plugins/publish/collect_render.py +++ b/openpype/hosts/maya/plugins/publish/collect_render.py @@ -293,6 +293,7 @@ class CollectMayaRender(pyblish.api.ContextPlugin): "source": filepath, "expectedFiles": full_exp_files, "publishRenderMetadataFolder": common_publish_meta_path, + "renderProducts": layer_render_products, "resolutionWidth": lib.get_attr_in_layer( "defaultResolution.width", layer=layer_name ), @@ -359,7 +360,6 @@ class CollectMayaRender(pyblish.api.ContextPlugin): instance.data["label"] = label instance.data["farm"] = True instance.data.update(data) - self.log.debug("data: {}".format(json.dumps(data, indent=4))) def parse_options(self, render_globals): """Get all overrides with a value, skip those without. 
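For context on the `containerise` hunk in `openpype/hosts/maya/api/pipeline.py` above: the old write loop branched on value type (numbers became `short` attributes, everything else `string` attributes) and silently skipped falsy values via `if not value: continue`. The new loop has a single code path that passes every value, including the loader class, through `str()`. A minimal sketch of the resulting behaviour, assuming a running Maya session and an existing `container` object set; the helper name `write_container_data` is illustrative, not part of the PR:

```python
from maya import cmds


def write_container_data(container, data):
    """Store container metadata uniformly as string attributes.

    `data` is a list of (key, value) pairs. Every value is cast with
    str(), so loader classes, ObjectIds and numbers all round-trip as
    strings; readers are expected to parse them back as needed.
    """
    for key, value in data:
        cmds.addAttr(container, longName=key, dataType="string")
        cmds.setAttr(
            "{}.{}".format(container, key), str(value), type="string")
```

Because the falsy-value skip is gone, empty values are now written as well, which keeps the attribute set identical across containers.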
diff --git a/openpype/hosts/maya/plugins/publish/extract_assembly.py b/openpype/hosts/maya/plugins/publish/extract_assembly.py index 482930b76e..120805894e 100644 --- a/openpype/hosts/maya/plugins/publish/extract_assembly.py +++ b/openpype/hosts/maya/plugins/publish/extract_assembly.py @@ -33,7 +33,7 @@ class ExtractAssembly(openpype.api.Extractor): json.dump(instance.data["scenedata"], filepath, ensure_ascii=False) self.log.info("Extracting point cache ..") - cmds.select(instance.data["hierarchy"]) + cmds.select(instance.data["nodesHierarchy"]) # Run basic alembic exporter extract_alembic(file=hierarchy_path, diff --git a/openpype/hosts/maya/plugins/publish/extract_playblast.py b/openpype/hosts/maya/plugins/publish/extract_playblast.py index 54ef09e060..871adda0c3 100644 --- a/openpype/hosts/maya/plugins/publish/extract_playblast.py +++ b/openpype/hosts/maya/plugins/publish/extract_playblast.py @@ -128,7 +128,7 @@ class ExtractPlayblast(openpype.api.Extractor): # Update preset with current panel setting # if override_viewport_options is turned off if not override_viewport_options: - panel = cmds.getPanel(with_focus=True) + panel = cmds.getPanel(withFocus=True) panel_preset = capture.parse_active_view() preset.update(panel_preset) cmds.setFocus(panel) diff --git a/openpype/hosts/maya/plugins/publish/extract_thumbnail.py b/openpype/hosts/maya/plugins/publish/extract_thumbnail.py index 01980578cf..9380da5128 100644 --- a/openpype/hosts/maya/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/maya/plugins/publish/extract_thumbnail.py @@ -100,9 +100,9 @@ class ExtractThumbnail(openpype.api.Extractor): # camera. if preset.pop("isolate_view", False) and instance.data.get("isolate"): preset["isolate"] = instance.data["setMembers"] - + # Show or Hide Image Plane - image_plane = instance.data.get("imagePlane", True) + image_plane = instance.data.get("imagePlane", True) if "viewport_options" in preset: preset["viewport_options"]["imagePlane"] = image_plane else: @@ -117,7 +117,7 @@ class ExtractThumbnail(openpype.api.Extractor): # Update preset with current panel setting # if override_viewport_options is turned off if not override_viewport_options: - panel = cmds.getPanel(with_focus=True) + panel = cmds.getPanel(withFocus=True) panel_preset = capture.parse_active_view() preset.update(panel_preset) cmds.setFocus(panel) diff --git a/openpype/hosts/maya/plugins/publish/increment_current_file_deadline.py b/openpype/hosts/maya/plugins/publish/increment_current_file_deadline.py index f9cfac3eb9..b5d5847e9f 100644 --- a/openpype/hosts/maya/plugins/publish/increment_current_file_deadline.py +++ b/openpype/hosts/maya/plugins/publish/increment_current_file_deadline.py @@ -16,12 +16,11 @@ class IncrementCurrentFileDeadline(pyblish.api.ContextPlugin): def process(self, context): - import os from maya import cmds from openpype.lib import version_up - from openpype.action import get_errored_plugins_from_data + from openpype.pipeline.publish import get_errored_plugins_from_context - errored_plugins = get_errored_plugins_from_data(context) + errored_plugins = get_errored_plugins_from_context(context) if any(plugin.__name__ == "MayaSubmitDeadline" for plugin in errored_plugins): raise RuntimeError("Skipping incrementing current file because " diff --git a/openpype/hosts/maya/plugins/publish/validate_animation_content.py b/openpype/hosts/maya/plugins/publish/validate_animation_content.py index 7638c44b87..6f7a6b905a 100644 --- a/openpype/hosts/maya/plugins/publish/validate_animation_content.py +++ 
b/openpype/hosts/maya/plugins/publish/validate_animation_content.py @@ -1,6 +1,7 @@ import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateAnimationContent(pyblish.api.InstancePlugin): @@ -11,7 +12,7 @@ class ValidateAnimationContent(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ["maya"] families = ["animation"] label = "Animation Content" diff --git a/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py b/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py index 05d63f1d56..aa27633402 100644 --- a/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py +++ b/openpype/hosts/maya/plugins/publish/validate_animation_out_set_related_node_ids.py @@ -4,6 +4,10 @@ import pyblish.api import openpype.api import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, +) class ValidateOutRelatedNodeIds(pyblish.api.InstancePlugin): @@ -16,13 +20,13 @@ class ValidateOutRelatedNodeIds(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ['animation', "pointcache"] hosts = ['maya'] label = 'Animation Out Set Related Node Ids' actions = [ openpype.hosts.maya.api.action.SelectInvalidAction, - openpype.api.RepairAction + RepairAction ] def process(self, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_ass_relative_paths.py b/openpype/hosts/maya/plugins/publish/validate_ass_relative_paths.py index 5fb9bd98b1..ac6ce4d22d 100644 --- a/openpype/hosts/maya/plugins/publish/validate_ass_relative_paths.py +++ b/openpype/hosts/maya/plugins/publish/validate_ass_relative_paths.py @@ -4,18 +4,20 @@ import types import maya.cmds as cmds import pyblish.api -import openpype.api -import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, +) class ValidateAssRelativePaths(pyblish.api.InstancePlugin): """Ensure exporting ass file has set relative texture paths""" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ['maya'] families = ['ass'] label = "ASS has relative texture paths" - actions = [openpype.api.RepairAction] + actions = [RepairAction] def process(self, instance): # we cannot ask this until user open render settings as diff --git a/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py b/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py index dca59b147b..fb25b617be 100644 --- a/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py +++ b/openpype/hosts/maya/plugins/publish/validate_assembly_transforms.py @@ -4,6 +4,7 @@ import openpype.api from maya import cmds import openpype.hosts.maya.api.action +from openpype.pipeline.publish import RepairAction class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin): @@ -29,7 +30,7 @@ class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin): label = "Assembly Model Transforms" families = ["assembly"] actions = [openpype.hosts.maya.api.action.SelectInvalidAction, - openpype.api.RepairAction] + RepairAction] prompt_message = ("You are about to reset the matrix to the default values." " This can alter the look of your scene. 
" @@ -47,7 +48,7 @@ class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin): from openpype.hosts.maya.api import lib # Get all transforms in the loaded containers - container_roots = cmds.listRelatives(instance.data["hierarchy"], + container_roots = cmds.listRelatives(instance.data["nodesHierarchy"], children=True, type="transform", fullPath=True) diff --git a/openpype/hosts/maya/plugins/publish/validate_attributes.py b/openpype/hosts/maya/plugins/publish/validate_attributes.py index e2a22f80b6..136c38bc1d 100644 --- a/openpype/hosts/maya/plugins/publish/validate_attributes.py +++ b/openpype/hosts/maya/plugins/publish/validate_attributes.py @@ -1,7 +1,10 @@ import pymel.core as pm import pyblish.api -import openpype.api +from openpype.pipeline.publish import ( + RepairContextAction, + ValidateContentsOrder, +) class ValidateAttributes(pyblish.api.ContextPlugin): @@ -16,10 +19,10 @@ class ValidateAttributes(pyblish.api.ContextPlugin): } """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder label = "Attributes" hosts = ["maya"] - actions = [openpype.api.RepairContextAction] + actions = [RepairContextAction] optional = True attributes = None diff --git a/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py b/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py index e019788aff..19c1179e52 100644 --- a/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py +++ b/openpype/hosts/maya/plugins/publish/validate_camera_attributes.py @@ -3,6 +3,7 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateCameraAttributes(pyblish.api.InstancePlugin): @@ -14,7 +15,7 @@ class ValidateCameraAttributes(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ['camera'] hosts = ['maya'] label = 'Camera Attributes' diff --git a/openpype/hosts/maya/plugins/publish/validate_camera_contents.py b/openpype/hosts/maya/plugins/publish/validate_camera_contents.py index 5f6faddbe7..f846319807 100644 --- a/openpype/hosts/maya/plugins/publish/validate_camera_contents.py +++ b/openpype/hosts/maya/plugins/publish/validate_camera_contents.py @@ -3,6 +3,7 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateCameraContents(pyblish.api.InstancePlugin): @@ -15,7 +16,7 @@ class ValidateCameraContents(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ['camera'] hosts = ['maya'] label = 'Camera Contents' diff --git a/openpype/hosts/maya/plugins/publish/validate_color_sets.py b/openpype/hosts/maya/plugins/publish/validate_color_sets.py index 45224b0672..cab9d6ebab 100644 --- a/openpype/hosts/maya/plugins/publish/validate_color_sets.py +++ b/openpype/hosts/maya/plugins/publish/validate_color_sets.py @@ -3,6 +3,10 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + RepairAction, + ValidateMeshOrder, +) class ValidateColorSets(pyblish.api.Validator): @@ -13,13 +17,13 @@ class ValidateColorSets(pyblish.api.Validator): """ - order = openpype.api.ValidateMeshOrder + order = ValidateMeshOrder hosts = ['maya'] families = ['model'] category = 'geometry' label = 'Mesh ColorSets' actions = 
[openpype.hosts.maya.api.action.SelectInvalidAction, - openpype.api.RepairAction] + RepairAction] optional = True @staticmethod diff --git a/openpype/hosts/maya/plugins/publish/validate_current_renderlayer_renderable.py b/openpype/hosts/maya/plugins/publish/validate_current_renderlayer_renderable.py index 3c3ea68fc6..f072e5e323 100644 --- a/openpype/hosts/maya/plugins/publish/validate_current_renderlayer_renderable.py +++ b/openpype/hosts/maya/plugins/publish/validate_current_renderlayer_renderable.py @@ -1,7 +1,7 @@ import pyblish.api from maya import cmds -from openpype.plugin import contextplugin_should_run +from openpype.pipeline.publish import context_plugin_should_run class ValidateCurrentRenderLayerIsRenderable(pyblish.api.ContextPlugin): @@ -24,7 +24,7 @@ class ValidateCurrentRenderLayerIsRenderable(pyblish.api.ContextPlugin): def process(self, context): # Workaround bug pyblish-base#250 - if not contextplugin_should_run(self, context): + if not context_plugin_should_run(self, context): return layer = cmds.editRenderLayerGlobals(query=True, currentRenderLayer=True) diff --git a/openpype/hosts/maya/plugins/publish/validate_cycle_error.py b/openpype/hosts/maya/plugins/publish/validate_cycle_error.py index 4dfe0b8add..d3b8316d94 100644 --- a/openpype/hosts/maya/plugins/publish/validate_cycle_error.py +++ b/openpype/hosts/maya/plugins/publish/validate_cycle_error.py @@ -5,12 +5,13 @@ import pyblish.api import openpype.api import openpype.hosts.maya.api.action from openpype.hosts.maya.api.lib import maintained_selection +from openpype.pipeline.publish import ValidateContentsOrder class ValidateCycleError(pyblish.api.InstancePlugin): """Validate nodes produce no cycle errors.""" - order = openpype.api.ValidateContentsOrder + 0.05 + order = ValidateContentsOrder + 0.05 label = "Cycle Errors" hosts = ["maya"] families = ["rig"] diff --git a/openpype/hosts/maya/plugins/publish/validate_frame_range.py b/openpype/hosts/maya/plugins/publish/validate_frame_range.py index c51766379e..b467a7c232 100644 --- a/openpype/hosts/maya/plugins/publish/validate_frame_range.py +++ b/openpype/hosts/maya/plugins/publish/validate_frame_range.py @@ -1,7 +1,10 @@ import pyblish.api -import openpype.api from maya import cmds +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, +) class ValidateFrameRange(pyblish.api.InstancePlugin): @@ -18,7 +21,7 @@ class ValidateFrameRange(pyblish.api.InstancePlugin): """ label = "Validate Frame Range" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ["animation", "pointcache", "camera", @@ -26,7 +29,7 @@ class ValidateFrameRange(pyblish.api.InstancePlugin): "review", "yeticache"] optional = True - actions = [openpype.api.RepairAction] + actions = [RepairAction] exclude_families = [] def process(self, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py b/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py index e04a26e4fd..bf92ac5099 100644 --- a/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py +++ b/openpype/hosts/maya/plugins/publish/validate_instance_has_members.py @@ -1,12 +1,13 @@ import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateInstanceHasMembers(pyblish.api.InstancePlugin): """Validates instance objectSet has *any* members.""" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ["maya"] label 
= 'Instance has members' actions = [openpype.hosts.maya.api.action.SelectInvalidAction] diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py b/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py index 7b8c335062..41bb414829 100644 --- a/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py +++ b/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py @@ -3,7 +3,7 @@ from __future__ import absolute_import import pyblish.api -import openpype.api +from openpype.pipeline.publish import ValidateContentsOrder from maya import cmds @@ -98,7 +98,7 @@ class ValidateInstanceInContext(pyblish.api.InstancePlugin): Action on this validator will select invalid instances in Outliner. """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder label = "Instance in same Context" optional = True hosts = ["maya"] diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_subset.py b/openpype/hosts/maya/plugins/publish/validate_instance_subset.py index 539f3f9d3c..bb3dde761c 100644 --- a/openpype/hosts/maya/plugins/publish/validate_instance_subset.py +++ b/openpype/hosts/maya/plugins/publish/validate_instance_subset.py @@ -1,8 +1,8 @@ import pyblish.api -import openpype.api import string import six +from openpype.pipeline.publish import ValidateContentsOrder # Allow only characters, numbers and underscore allowed = set(string.ascii_lowercase + @@ -18,7 +18,7 @@ def validate_name(subset): class ValidateSubsetName(pyblish.api.InstancePlugin): """Validates subset name has only valid characters""" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ["*"] label = "Subset Name" diff --git a/openpype/hosts/maya/plugins/publish/validate_loaded_plugin.py b/openpype/hosts/maya/plugins/publish/validate_loaded_plugin.py index 9306d8ce15..624074aaf9 100644 --- a/openpype/hosts/maya/plugins/publish/validate_loaded_plugin.py +++ b/openpype/hosts/maya/plugins/publish/validate_loaded_plugin.py @@ -1,7 +1,8 @@ +import os import pyblish.api import maya.cmds as cmds -import openpype.api -import os + +from openpype.pipeline.publish import RepairContextAction class ValidateLoadedPlugin(pyblish.api.ContextPlugin): @@ -10,7 +11,7 @@ class ValidateLoadedPlugin(pyblish.api.ContextPlugin): label = "Loaded Plugin" order = pyblish.api.ValidatorOrder - host = ["maya"] + hosts = ["maya"] - actions = [openpype.api.RepairContextAction] + actions = [RepairContextAction] @classmethod def get_invalid(cls, context): diff --git a/openpype/hosts/maya/plugins/publish/validate_look_contents.py b/openpype/hosts/maya/plugins/publish/validate_look_contents.py index b1e1d5416b..d9819b05d5 100644 --- a/openpype/hosts/maya/plugins/publish/validate_look_contents.py +++ b/openpype/hosts/maya/plugins/publish/validate_look_contents.py @@ -1,6 +1,7 @@ import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateLookContents(pyblish.api.InstancePlugin): @@ -17,7 +18,7 @@ class ValidateLookContents(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ['look'] hosts = ['maya'] label = 'Look Data Contents' diff --git a/openpype/hosts/maya/plugins/publish/validate_look_default_shaders_connections.py b/openpype/hosts/maya/plugins/publish/validate_look_default_shaders_connections.py index 262dd10b74..20f561a892 100644 ---
a/openpype/hosts/maya/plugins/publish/validate_look_default_shaders_connections.py +++ b/openpype/hosts/maya/plugins/publish/validate_look_default_shaders_connections.py @@ -1,7 +1,7 @@ from maya import cmds import pyblish.api -import openpype.api +from openpype.pipeline.publish import ValidateContentsOrder class ValidateLookDefaultShadersConnections(pyblish.api.InstancePlugin): @@ -16,7 +16,7 @@ class ValidateLookDefaultShadersConnections(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ['look'] hosts = ['maya'] label = 'Look Default Shader Connections' diff --git a/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py b/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py index 9d074f927b..f223c1a42b 100644 --- a/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py +++ b/openpype/hosts/maya/plugins/publish/validate_look_id_reference_edits.py @@ -4,6 +4,10 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, +) class ValidateLookIdReferenceEdits(pyblish.api.InstancePlugin): @@ -16,12 +20,12 @@ class ValidateLookIdReferenceEdits(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ['look'] hosts = ['maya'] label = 'Look Id Reference Edits' actions = [openpype.hosts.maya.api.action.SelectInvalidAction, - openpype.api.RepairAction] + RepairAction] def process(self, instance): invalid = self.get_invalid(instance) diff --git a/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py b/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py index 2367602d05..210fcb174d 100644 --- a/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py +++ b/openpype/hosts/maya/plugins/publish/validate_look_members_unique.py @@ -3,6 +3,7 @@ from collections import defaultdict import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidatePipelineOrder class ValidateUniqueRelationshipMembers(pyblish.api.InstancePlugin): @@ -20,7 +21,7 @@ class ValidateUniqueRelationshipMembers(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidatePipelineOrder + order = ValidatePipelineOrder label = 'Look members unique' hosts = ['maya'] families = ['look'] diff --git a/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py b/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py index 8ba6cde988..95f8fa20d0 100644 --- a/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py +++ b/openpype/hosts/maya/plugins/publish/validate_look_no_default_shaders.py @@ -3,6 +3,7 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateLookNoDefaultShaders(pyblish.api.InstancePlugin): @@ -23,7 +24,7 @@ class ValidateLookNoDefaultShaders(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + 0.01 + order = ValidateContentsOrder + 0.01 families = ['look'] hosts = ['maya'] label = 'Look No Default Shaders' diff --git a/openpype/hosts/maya/plugins/publish/validate_look_sets.py b/openpype/hosts/maya/plugins/publish/validate_look_sets.py index 5e737ca876..3a60b771f4 100644 --- 
a/openpype/hosts/maya/plugins/publish/validate_look_sets.py +++ b/openpype/hosts/maya/plugins/publish/validate_look_sets.py @@ -1,8 +1,8 @@ -import openpype.hosts.maya.api.action -from openpype.hosts.maya.api import lib - import pyblish.api import openpype.api +import openpype.hosts.maya.api.action +from openpype.hosts.maya.api import lib +from openpype.pipeline.publish import ValidateContentsOrder class ValidateLookSets(pyblish.api.InstancePlugin): @@ -38,7 +38,7 @@ class ValidateLookSets(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ['look'] hosts = ['maya'] label = 'Look Sets' diff --git a/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py b/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py index e8affac036..7d043eddb8 100644 --- a/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py +++ b/openpype/hosts/maya/plugins/publish/validate_look_shading_group.py @@ -3,6 +3,10 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, +) class ValidateShadingEngine(pyblish.api.InstancePlugin): @@ -11,12 +15,12 @@ class ValidateShadingEngine(pyblish.api.InstancePlugin): Shading engines should be named "{surface_shader}SG" """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ["look"] hosts = ["maya"] label = "Look Shading Engine Naming" actions = [ - openpype.hosts.maya.api.action.SelectInvalidAction, openpype.api.RepairAction + openpype.hosts.maya.api.action.SelectInvalidAction, RepairAction ] # The default connections to check diff --git a/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py b/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py index 2b32ccf492..51e1232bb7 100644 --- a/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py +++ b/openpype/hosts/maya/plugins/publish/validate_look_single_shader.py @@ -3,6 +3,7 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateSingleShader(pyblish.api.InstancePlugin): @@ -12,7 +13,7 @@ class ValidateSingleShader(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ['look'] hosts = ['maya'] label = 'Look Single Shader Per Shape' diff --git a/openpype/hosts/maya/plugins/publish/validate_maya_units.py b/openpype/hosts/maya/plugins/publish/validate_maya_units.py index 5f67adec76..5698d795ff 100644 --- a/openpype/hosts/maya/plugins/publish/validate_maya_units.py +++ b/openpype/hosts/maya/plugins/publish/validate_maya_units.py @@ -1,10 +1,14 @@ import maya.cmds as cmds import pyblish.api -import openpype.api + import openpype.hosts.maya.api.lib as mayalib from openpype.pipeline.context_tools import get_current_project_asset from math import ceil +from openpype.pipeline.publish import ( + RepairContextAction, + ValidateSceneOrder, +) def float_round(num, places=0, direction=ceil): @@ -14,10 +18,10 @@ def float_round(num, places=0, direction=ceil): class ValidateMayaUnits(pyblish.api.ContextPlugin): """Check if the Maya units are set correct""" - order = openpype.api.ValidateSceneOrder + order = ValidateSceneOrder label = "Maya Units" hosts = ['maya'] - actions = [openpype.api.RepairContextAction] + actions = [RepairContextAction] 
validate_linear_units = True linear_units = "cm" diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py b/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py index 90eb01aa12..abfe1213a0 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_arnold_attributes.py @@ -4,6 +4,10 @@ import pyblish.api import openpype.api import openpype.hosts.maya.api.action from openpype.hosts.maya.api.lib import maintained_selection +from openpype.pipeline.publish import ( + RepairAction, + ValidateMeshOrder, +) class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin): @@ -13,14 +17,14 @@ class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin): later published looks can discover non-default Arnold attributes. """ - order = openpype.api.ValidateMeshOrder + order = ValidateMeshOrder hosts = ["maya"] families = ["model"] category = "geometry" label = "Mesh Arnold Attributes" actions = [ openpype.hosts.maya.api.action.SelectInvalidAction, - openpype.api.RepairAction + RepairAction ] optional = True if cmds.getAttr( diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py b/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py index 8f9b5d1c4e..4d2885d6e2 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_has_uv.py @@ -5,6 +5,7 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateMeshOrder def len_flattened(components): @@ -45,7 +46,7 @@ class ValidateMeshHasUVs(pyblish.api.InstancePlugin): UVs for every face. """ - order = openpype.api.ValidateMeshOrder + order = ValidateMeshOrder hosts = ['maya'] families = ['model'] category = 'geometry' diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_lamina_faces.py b/openpype/hosts/maya/plugins/publish/validate_mesh_lamina_faces.py index 8fa1f3cf3b..e7a73c21b0 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_lamina_faces.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_lamina_faces.py @@ -3,6 +3,7 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateMeshOrder class ValidateMeshLaminaFaces(pyblish.api.InstancePlugin): @@ -12,7 +13,7 @@ class ValidateMeshLaminaFaces(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateMeshOrder + order = ValidateMeshOrder hosts = ['maya'] families = ['model'] category = 'geometry' diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_ngons.py b/openpype/hosts/maya/plugins/publish/validate_mesh_ngons.py index ab0beb2a9c..24d6188ec8 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_ngons.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_ngons.py @@ -4,6 +4,7 @@ import pyblish.api import openpype.api import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib +from openpype.pipeline.publish import ValidateContentsOrder class ValidateMeshNgons(pyblish.api.Validator): @@ -16,7 +17,7 @@ class ValidateMeshNgons(pyblish.api.Validator): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ["maya"] families = ["model"] label = "Mesh ngons" diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py 
b/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py index 5ccfa7377a..18ceccaa28 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_no_negative_scale.py @@ -3,6 +3,7 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateMeshOrder class ValidateMeshNoNegativeScale(pyblish.api.Validator): @@ -17,7 +18,7 @@ class ValidateMeshNoNegativeScale(pyblish.api.Validator): """ - order = openpype.api.ValidateMeshOrder + order = ValidateMeshOrder hosts = ['maya'] families = ['model'] label = 'Mesh No Negative Scale' diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py b/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py index 9bd584bbbf..e75a132d50 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_non_manifold.py @@ -3,6 +3,7 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateMeshOrder class ValidateMeshNonManifold(pyblish.api.Validator): @@ -13,7 +14,7 @@ class ValidateMeshNonManifold(pyblish.api.Validator): """ - order = openpype.api.ValidateMeshOrder + order = ValidateMeshOrder hosts = ['maya'] families = ['model'] label = 'Mesh Non-Manifold Vertices/Edges' diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py b/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py index 5e6f24cf79..8c03b54971 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_non_zero_edge.py @@ -4,6 +4,7 @@ import pyblish.api import openpype.api import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib +from openpype.pipeline.publish import ValidateMeshOrder class ValidateMeshNonZeroEdgeLength(pyblish.api.InstancePlugin): @@ -16,7 +17,7 @@ class ValidateMeshNonZeroEdgeLength(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateMeshOrder + order = ValidateMeshOrder families = ['model'] hosts = ['maya'] category = 'geometry' diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py b/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py index 750932df54..7d88161058 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_normals_unlocked.py @@ -4,6 +4,10 @@ import maya.api.OpenMaya as om2 import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + RepairAction, + ValidateMeshOrder, +) class ValidateMeshNormalsUnlocked(pyblish.api.Validator): @@ -14,14 +18,14 @@ class ValidateMeshNormalsUnlocked(pyblish.api.Validator): """ - order = openpype.api.ValidateMeshOrder + order = ValidateMeshOrder hosts = ['maya'] families = ['model'] category = 'geometry' version = (0, 1, 0) label = 'Mesh Normals Unlocked' actions = [openpype.hosts.maya.api.action.SelectInvalidAction, - openpype.api.RepairAction] + RepairAction] optional = True @staticmethod diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py b/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py index bf95d8ba09..dde3e4fead 100644 --- 
a/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py @@ -6,6 +6,7 @@ import maya.api.OpenMaya as om import pymel.core as pm from six.moves import xrange +from openpype.pipeline.publish import ValidateMeshOrder class GetOverlappingUVs(object): @@ -232,7 +233,7 @@ class ValidateMeshHasOverlappingUVs(pyblish.api.InstancePlugin): It is optional to warn publisher about it. """ - order = openpype.api.ValidateMeshOrder + order = ValidateMeshOrder hosts = ['maya'] families = ['model'] category = 'geometry' diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py b/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py index e0835000f0..9621fd5aa8 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_shader_connections.py @@ -3,6 +3,10 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + RepairAction, + ValidateMeshOrder, +) def pairs(iterable): @@ -86,12 +90,12 @@ class ValidateMeshShaderConnections(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateMeshOrder + order = ValidateMeshOrder hosts = ['maya'] families = ['model'] label = "Mesh Shader Connections" actions = [openpype.hosts.maya.api.action.SelectInvalidAction, - openpype.api.RepairAction] + RepairAction] def process(self, instance): """Process all the nodes in the instance 'objectSet'""" diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py b/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py index 9d2aeb7d99..3fb09356d3 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_single_uv_set.py @@ -4,6 +4,10 @@ import pyblish.api import openpype.api import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib +from openpype.pipeline.publish import ( + RepairAction, + ValidateMeshOrder, +) class ValidateMeshSingleUVSet(pyblish.api.InstancePlugin): @@ -15,7 +19,7 @@ class ValidateMeshSingleUVSet(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateMeshOrder + order = ValidateMeshOrder hosts = ['maya'] families = ['model', 'pointcache'] category = 'uv' @@ -23,7 +27,7 @@ class ValidateMeshSingleUVSet(pyblish.api.InstancePlugin): version = (0, 1, 0) label = "Mesh Single UV Set" actions = [openpype.hosts.maya.api.action.SelectInvalidAction, - openpype.api.RepairAction] + RepairAction] @staticmethod def get_invalid(instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py b/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py index 52c45d3b0c..2711682f76 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_uv_set_map1.py @@ -3,6 +3,10 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + RepairAction, + ValidateMeshOrder, +) class ValidateMeshUVSetMap1(pyblish.api.InstancePlugin): @@ -15,13 +19,13 @@ class ValidateMeshUVSetMap1(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateMeshOrder + order = ValidateMeshOrder hosts = ['maya'] families = ['model'] optional = True label = "Mesh has map1 UV Set" actions = 
[openpype.hosts.maya.api.action.SelectInvalidAction, - openpype.api.RepairAction] + RepairAction] @staticmethod def get_invalid(instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py b/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py index 463c3c4c50..350a5f4789 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_vertices_have_edges.py @@ -5,6 +5,10 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + RepairAction, + ValidateMeshOrder, +) def len_flattened(components): @@ -57,13 +61,13 @@ class ValidateMeshVerticesHaveEdges(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateMeshOrder + order = ValidateMeshOrder hosts = ['maya'] families = ['model'] category = 'geometry' label = 'Mesh Vertices Have Edges' actions = [openpype.hosts.maya.api.action.SelectInvalidAction, - openpype.api.RepairAction] + RepairAction] @classmethod def repair(cls, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_model_content.py b/openpype/hosts/maya/plugins/publish/validate_model_content.py index aee0ea52f0..0557858639 100644 --- a/openpype/hosts/maya/plugins/publish/validate_model_content.py +++ b/openpype/hosts/maya/plugins/publish/validate_model_content.py @@ -4,6 +4,7 @@ import pyblish.api import openpype.api import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib +from openpype.pipeline.publish import ValidateContentsOrder class ValidateModelContent(pyblish.api.InstancePlugin): @@ -14,7 +15,7 @@ class ValidateModelContent(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ["maya"] families = ["model"] label = "Model Content" diff --git a/openpype/hosts/maya/plugins/publish/validate_model_name.py b/openpype/hosts/maya/plugins/publish/validate_model_name.py index 02107d5732..99a4b2654e 100644 --- a/openpype/hosts/maya/plugins/publish/validate_model_name.py +++ b/openpype/hosts/maya/plugins/publish/validate_model_name.py @@ -7,6 +7,7 @@ import pyblish.api import openpype.api from openpype.pipeline import legacy_io +from openpype.pipeline.publish import ValidateContentsOrder import openpype.hosts.maya.api.action from openpype.hosts.maya.api.shader_definition_editor import ( DEFINITION_FILENAME) @@ -23,7 +24,7 @@ class ValidateModelName(pyblish.api.InstancePlugin): """ optional = True - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ["maya"] families = ["model"] label = "Model Name" diff --git a/openpype/hosts/maya/plugins/publish/validate_muster_connection.py b/openpype/hosts/maya/plugins/publish/validate_muster_connection.py index 6dc7bd3bc4..c31ccf405c 100644 --- a/openpype/hosts/maya/plugins/publish/validate_muster_connection.py +++ b/openpype/hosts/maya/plugins/publish/validate_muster_connection.py @@ -5,8 +5,10 @@ import appdirs import pyblish.api from openpype.lib import requests_get -from openpype.plugin import contextplugin_should_run -import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + context_plugin_should_run, + RepairAction, +) class ValidateMusterConnection(pyblish.api.ContextPlugin): @@ -21,12 +23,12 @@ class ValidateMusterConnection(pyblish.api.ContextPlugin): token = None if not os.environ.get("MUSTER_REST_URL"): active = False - actions = [openpype.api.RepairAction] + 
actions = [RepairAction] def process(self, context): # Workaround bug pyblish-base#250 - if not contextplugin_should_run(self, context): + if not context_plugin_should_run(self, context): return # test if we have environment set (redundant as this plugin shouldn' diff --git a/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py b/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py index bac2c030c8..62f360cd86 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py +++ b/openpype/hosts/maya/plugins/publish/validate_mvlook_contents.py @@ -1,15 +1,16 @@ +import os import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder -import os COLOUR_SPACES = ['sRGB', 'linear', 'auto'] MIPMAP_EXTENSIONS = ['tdl'] class ValidateMvLookContents(pyblish.api.InstancePlugin): - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ['mvLook'] hosts = ['maya'] label = 'Validate mvLook Data' diff --git a/openpype/hosts/maya/plugins/publish/validate_no_animation.py b/openpype/hosts/maya/plugins/publish/validate_no_animation.py index 6621e452f0..177de1468d 100644 --- a/openpype/hosts/maya/plugins/publish/validate_no_animation.py +++ b/openpype/hosts/maya/plugins/publish/validate_no_animation.py @@ -3,6 +3,7 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateNoAnimation(pyblish.api.Validator): @@ -14,7 +15,7 @@ class ValidateNoAnimation(pyblish.api.Validator): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder label = "No Animation" hosts = ["maya"] families = ["model"] diff --git a/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py b/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py index c3f6f3c38e..d4ddb28070 100644 --- a/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py +++ b/openpype/hosts/maya/plugins/publish/validate_no_default_camera.py @@ -3,6 +3,7 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateNoDefaultCameras(pyblish.api.InstancePlugin): @@ -13,7 +14,7 @@ class ValidateNoDefaultCameras(pyblish.api.InstancePlugin): settings when being loaded and sometimes being skipped. 
""" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ['maya'] families = ['camera'] version = (0, 1, 0) diff --git a/openpype/hosts/maya/plugins/publish/validate_no_namespace.py b/openpype/hosts/maya/plugins/publish/validate_no_namespace.py index 5b3d6bc9c4..95caa1007f 100644 --- a/openpype/hosts/maya/plugins/publish/validate_no_namespace.py +++ b/openpype/hosts/maya/plugins/publish/validate_no_namespace.py @@ -3,6 +3,11 @@ import maya.cmds as cmds import pyblish.api import openpype.api +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, +) + import openpype.hosts.maya.api.action @@ -16,14 +21,14 @@ def get_namespace(node_name): class ValidateNoNamespace(pyblish.api.InstancePlugin): """Ensure the nodes don't have a namespace""" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ['maya'] families = ['model'] category = 'cleanup' version = (0, 1, 0) label = 'No Namespaces' actions = [openpype.hosts.maya.api.action.SelectInvalidAction, - openpype.api.RepairAction] + RepairAction] @staticmethod def get_invalid(instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py b/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py index 36d61b03e8..f31fd09c95 100644 --- a/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py +++ b/openpype/hosts/maya/plugins/publish/validate_no_null_transforms.py @@ -3,6 +3,10 @@ import maya.cmds as cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, +) def has_shape_children(node): @@ -37,13 +41,13 @@ class ValidateNoNullTransforms(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ['maya'] families = ['model'] category = 'cleanup' version = (0, 1, 0) label = 'No Empty/Null Transforms' - actions = [openpype.api.RepairAction, + actions = [RepairAction, openpype.hosts.maya.api.action.SelectInvalidAction] @staticmethod diff --git a/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py b/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py index d140a1f24a..20fe34f2fd 100644 --- a/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py +++ b/openpype/hosts/maya/plugins/publish/validate_no_unknown_nodes.py @@ -3,6 +3,7 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateNoUnknownNodes(pyblish.api.InstancePlugin): @@ -16,7 +17,7 @@ class ValidateNoUnknownNodes(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ['maya'] families = ['model', 'rig'] optional = True diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids.py b/openpype/hosts/maya/plugins/publish/validate_node_ids.py index d17d34117f..877ba0e781 100644 --- a/openpype/hosts/maya/plugins/publish/validate_node_ids.py +++ b/openpype/hosts/maya/plugins/publish/validate_node_ids.py @@ -1,7 +1,7 @@ import pyblish.api import openpype.api +from openpype.pipeline.publish import ValidatePipelineOrder import openpype.hosts.maya.api.action - from openpype.hosts.maya.api import lib @@ -14,7 +14,7 @@ class ValidateNodeIDs(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidatePipelineOrder + order = ValidatePipelineOrder label = 'Instance Nodes Have ID' 
hosts = ['maya'] families = ["model", diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py index 0324be9fc9..1fe4a34e07 100644 --- a/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py +++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_deformed_shapes.py @@ -4,6 +4,10 @@ import pyblish.api import openpype.api import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, +) class ValidateNodeIdsDeformedShape(pyblish.api.InstancePlugin): @@ -16,13 +20,13 @@ class ValidateNodeIdsDeformedShape(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ['look'] hosts = ['maya'] label = 'Deformed shape ids' actions = [ openpype.hosts.maya.api.action.SelectInvalidAction, - openpype.api.RepairAction + RepairAction ] def process(self, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py index 632b531668..a5b1215f30 100644 --- a/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py +++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_in_database.py @@ -3,6 +3,7 @@ import pyblish.api import openpype.api from openpype.client import get_assets from openpype.pipeline import legacy_io +from openpype.pipeline.publish import ValidatePipelineOrder import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib @@ -18,7 +19,7 @@ class ValidateNodeIdsInDatabase(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidatePipelineOrder + order = ValidatePipelineOrder label = 'Node Ids in Database' hosts = ['maya'] families = ["*"] diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py index c8bac6e569..a7595d7392 100644 --- a/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py +++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_related.py @@ -1,6 +1,7 @@ import pyblish.api import openpype.api +from openpype.pipeline.publish import ValidatePipelineOrder import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib @@ -10,7 +11,7 @@ class ValidateNodeIDsRelated(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidatePipelineOrder + order = ValidatePipelineOrder label = 'Node Ids Related (ID)' hosts = ['maya'] families = ["model", diff --git a/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py b/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py index ed9ef526d6..5ff18358e2 100644 --- a/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py +++ b/openpype/hosts/maya/plugins/publish/validate_node_ids_unique.py @@ -2,6 +2,7 @@ from collections import defaultdict import pyblish.api import openpype.api +from openpype.pipeline.publish import ValidatePipelineOrder import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib @@ -12,7 +13,7 @@ class ValidateNodeIdsUnique(pyblish.api.InstancePlugin): Here we ensure that what has been added to the instance is unique """ - order = openpype.api.ValidatePipelineOrder + order = ValidatePipelineOrder label = 'Non Duplicate Instance Members (ID)' hosts = ['maya'] families = ["model", diff --git 
a/openpype/hosts/maya/plugins/publish/validate_node_no_ghosting.py b/openpype/hosts/maya/plugins/publish/validate_node_no_ghosting.py index 38f3ab1e68..2f22d6da1e 100644 --- a/openpype/hosts/maya/plugins/publish/validate_node_no_ghosting.py +++ b/openpype/hosts/maya/plugins/publish/validate_node_no_ghosting.py @@ -3,6 +3,7 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateNodeNoGhosting(pyblish.api.InstancePlugin): @@ -17,7 +18,7 @@ class ValidateNodeNoGhosting(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ['maya'] families = ['model', 'rig'] label = "No Ghosting" diff --git a/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py b/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py index 4d3796e429..78bb022785 100644 --- a/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py +++ b/openpype/hosts/maya/plugins/publish/validate_render_image_rule.py @@ -1,7 +1,10 @@ from maya import cmds import pyblish.api -import openpype.api +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, +) class ValidateRenderImageRule(pyblish.api.InstancePlugin): @@ -13,11 +16,11 @@ class ValidateRenderImageRule(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder label = "Images File Rule (Workspace)" hosts = ["maya"] families = ["renderlayer"] - actions = [openpype.api.RepairAction] + actions = [RepairAction] def process(self, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py b/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py index 044cc7c6a2..da35f42291 100644 --- a/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py +++ b/openpype/hosts/maya/plugins/publish/validate_render_no_default_cameras.py @@ -3,12 +3,13 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateRenderNoDefaultCameras(pyblish.api.InstancePlugin): """Ensure no default (startup) cameras are to be rendered.""" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ['maya'] families = ['renderlayer'] label = "No Default Cameras Renderable" diff --git a/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py b/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py index 35b87fd0ab..fc41b1cf5b 100644 --- a/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py +++ b/openpype/hosts/maya/plugins/publish/validate_render_single_camera.py @@ -6,6 +6,7 @@ from maya import cmds import openpype.api import openpype.hosts.maya.api.action from openpype.hosts.maya.api.render_settings import RenderSettings +from openpype.pipeline.publish import ValidateContentsOrder class ValidateRenderSingleCamera(pyblish.api.InstancePlugin): @@ -15,7 +16,7 @@ class ValidateRenderSingleCamera(pyblish.api.InstancePlugin): prefix must contain token. 
""" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder label = "Render Single Camera" hosts = ['maya'] families = ["renderlayer", diff --git a/openpype/hosts/maya/plugins/publish/validate_rendersettings.py b/openpype/hosts/maya/plugins/publish/validate_rendersettings.py index f19c0bff36..08ecc0d149 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rendersettings.py +++ b/openpype/hosts/maya/plugins/publish/validate_rendersettings.py @@ -6,7 +6,10 @@ from collections import OrderedDict from maya import cmds, mel import pyblish.api -import openpype.api +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, +) from openpype.hosts.maya.api import lib @@ -39,11 +42,11 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder label = "Render Settings" hosts = ["maya"] families = ["renderlayer"] - actions = [openpype.api.RepairAction] + actions = [RepairAction] ImagePrefixes = { 'mentalray': 'defaultRenderGlobals.imageFilePrefix', diff --git a/openpype/hosts/maya/plugins/publish/validate_resources.py b/openpype/hosts/maya/plugins/publish/validate_resources.py index 08f0f5467c..b7bd47ad0a 100644 --- a/openpype/hosts/maya/plugins/publish/validate_resources.py +++ b/openpype/hosts/maya/plugins/publish/validate_resources.py @@ -2,7 +2,7 @@ import os from collections import defaultdict import pyblish.api -import openpype.api +from openpype.pipeline.publish import ValidateContentsOrder class ValidateResources(pyblish.api.InstancePlugin): @@ -17,7 +17,7 @@ class ValidateResources(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder label = "Resources Unique" def process(self, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_review_subset_uniqueness.py b/openpype/hosts/maya/plugins/publish/validate_review_subset_uniqueness.py index 04cc9ab5fb..361c594013 100644 --- a/openpype/hosts/maya/plugins/publish/validate_review_subset_uniqueness.py +++ b/openpype/hosts/maya/plugins/publish/validate_review_subset_uniqueness.py @@ -1,14 +1,16 @@ # -*- coding: utf-8 -*- import collections import pyblish.api -import openpype.api -from openpype.pipeline import PublishXmlValidationError +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishXmlValidationError, +) class ValidateReviewSubsetUniqueness(pyblish.api.ContextPlugin): """Validates that review subset has unique name.""" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ["maya"] families = ["review"] label = "Validate Review Subset Unique" diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_contents.py b/openpype/hosts/maya/plugins/publish/validate_rig_contents.py index 6fe51d7b51..1096c95486 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_contents.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_contents.py @@ -1,7 +1,7 @@ from maya import cmds import pyblish.api -import openpype.api +from openpype.pipeline.publish import ValidateContentsOrder class ValidateRigContents(pyblish.api.InstancePlugin): @@ -13,7 +13,7 @@ class ValidateRigContents(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder label = "Rig Contents" hosts = ["maya"] families = ["rig"] diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py b/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py index 
d5a1fd3529..1e42abdcd9 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_controllers.py @@ -2,7 +2,10 @@ from maya import cmds import pyblish.api -import openpype.api +from openpype.pipeline.publish import ( + ValidateContentsOrder, + RepairAction, +) import openpype.hosts.maya.api.action from openpype.hosts.maya.api.lib import undo_chunk @@ -25,11 +28,11 @@ class ValidateRigControllers(pyblish.api.InstancePlugin): - Break all incoming connections to keyable attributes """ - order = openpype.api.ValidateContentsOrder + 0.05 + order = ValidateContentsOrder + 0.05 label = "Rig Controllers" hosts = ["maya"] families = ["rig"] - actions = [openpype.api.RepairAction, + actions = [RepairAction, openpype.hosts.maya.api.action.SelectInvalidAction] # Default controller values diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py b/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py index 1f1db9156b..3d486cf7a4 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_controllers_arnold_attributes.py @@ -3,6 +3,10 @@ from maya import cmds import pyblish.api import openpype.api +from openpype.pipeline.publish import ( + ValidateContentsOrder, + RepairAction, +) from openpype.hosts.maya.api import lib import openpype.hosts.maya.api.action @@ -26,11 +30,11 @@ class ValidateRigControllersArnoldAttributes(pyblish.api.InstancePlugin): This validator will ensure they are hidden or unkeyable attributes. """ - order = openpype.api.ValidateContentsOrder + 0.05 + order = ValidateContentsOrder + 0.05 label = "Rig Controllers (Arnold Attributes)" hosts = ["maya"] families = ["rig"] - actions = [openpype.api.RepairAction, + actions = [RepairAction, openpype.hosts.maya.api.action.SelectInvalidAction] attributes = [ diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_joints_hidden.py b/openpype/hosts/maya/plugins/publish/validate_rig_joints_hidden.py index 5df754fff4..86967d7502 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_joints_hidden.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_joints_hidden.py @@ -4,6 +4,10 @@ import pyblish.api import openpype.api import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, +) class ValidateRigJointsHidden(pyblish.api.InstancePlugin): @@ -17,13 +21,13 @@ class ValidateRigJointsHidden(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ['maya'] families = ['rig'] version = (0, 1, 0) label = "Joints Hidden" actions = [openpype.hosts.maya.api.action.SelectInvalidAction, - openpype.api.RepairAction] + RepairAction] @staticmethod def get_invalid(instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py b/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py index cc3723a6e1..70128ac493 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_out_set_node_ids.py @@ -4,6 +4,10 @@ import pyblish.api import openpype.api import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, +) class 
ValidateRigOutSetNodeIds(pyblish.api.InstancePlugin): @@ -16,13 +20,13 @@ class ValidateRigOutSetNodeIds(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ["rig"] hosts = ['maya'] label = 'Rig Out Set Node Ids' actions = [ openpype.hosts.maya.api.action.SelectInvalidAction, - openpype.api.RepairAction + RepairAction ] allow_history_only = False diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py index 7c5c540c60..f075f42ff2 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py @@ -4,6 +4,10 @@ import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + RepairAction, + ValidateContentsOrder, +) class ValidateRigOutputIds(pyblish.api.InstancePlugin): @@ -13,11 +17,11 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin): to ensure the id from the model is preserved through animation. """ - order = openpype.api.ValidateContentsOrder + 0.05 + order = ValidateContentsOrder + 0.05 label = "Rig Output Ids" hosts = ["maya"] families = ["rig"] - actions = [openpype.api.RepairAction, + actions = [RepairAction, openpype.hosts.maya.api.action.SelectInvalidAction] def process(self, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_scene_set_workspace.py b/openpype/hosts/maya/plugins/publish/validate_scene_set_workspace.py index 174bc44a6f..ec2bea220d 100644 --- a/openpype/hosts/maya/plugins/publish/validate_scene_set_workspace.py +++ b/openpype/hosts/maya/plugins/publish/validate_scene_set_workspace.py @@ -3,7 +3,8 @@ import os import maya.cmds as cmds import pyblish.api -import openpype.api + +from openpype.pipeline.publish import ValidatePipelineOrder def is_subdir(path, root_dir): @@ -28,7 +29,7 @@ def is_subdir(path, root_dir): class ValidateSceneSetWorkspace(pyblish.api.ContextPlugin): """Validate the scene is inside the currently set Maya workspace""" - order = openpype.api.ValidatePipelineOrder + order = ValidatePipelineOrder hosts = ['maya'] category = 'scene' version = (0, 1, 0) diff --git a/openpype/hosts/maya/plugins/publish/validate_setdress_root.py b/openpype/hosts/maya/plugins/publish/validate_setdress_root.py index 8e23a7c04f..5fd971f8c4 100644 --- a/openpype/hosts/maya/plugins/publish/validate_setdress_root.py +++ b/openpype/hosts/maya/plugins/publish/validate_setdress_root.py @@ -1,12 +1,11 @@ - import pyblish.api -import openpype.api +from openpype.pipeline.publish import ValidateContentsOrder class ValidateSetdressRoot(pyblish.api.InstancePlugin): """Validate if set dress top root node is published.""" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder label = "SetDress Root" hosts = ["maya"] families = ["setdress"] diff --git a/openpype/hosts/maya/plugins/publish/validate_shader_name.py b/openpype/hosts/maya/plugins/publish/validate_shader_name.py index 24111f0ad4..522b42fd00 100644 --- a/openpype/hosts/maya/plugins/publish/validate_shader_name.py +++ b/openpype/hosts/maya/plugins/publish/validate_shader_name.py @@ -1,9 +1,10 @@ +import re from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action -import re +from openpype.pipeline.publish import ValidateContentsOrder class ValidateShaderName(pyblish.api.InstancePlugin): @@ -13,7 +14,7 @@ class 
ValidateShaderName(pyblish.api.InstancePlugin): """ optional = True - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ["look"] hosts = ['maya'] label = 'Validate Shaders Name' diff --git a/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py b/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py index e08e06b50e..25bd3442a3 100644 --- a/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py +++ b/openpype/hosts/maya/plugins/publish/validate_shape_default_names.py @@ -5,6 +5,10 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + ValidateContentsOrder, + RepairAction, +) def short_name(node): @@ -31,7 +35,7 @@ class ValidateShapeDefaultNames(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ['maya'] families = ['model'] category = 'cleanup' @@ -39,7 +43,7 @@ class ValidateShapeDefaultNames(pyblish.api.InstancePlugin): version = (0, 1, 0) label = "Shape Default Naming" actions = [openpype.hosts.maya.api.action.SelectInvalidAction, - openpype.api.RepairAction] + RepairAction] @staticmethod def _define_default_name(shape): diff --git a/openpype/hosts/maya/plugins/publish/validate_shape_render_stats.py b/openpype/hosts/maya/plugins/publish/validate_shape_render_stats.py index 714451bb98..0980d6b4b6 100644 --- a/openpype/hosts/maya/plugins/publish/validate_shape_render_stats.py +++ b/openpype/hosts/maya/plugins/publish/validate_shape_render_stats.py @@ -4,17 +4,21 @@ import openpype.api from maya import cmds import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ( + RepairAction, + ValidateMeshOrder, +) class ValidateShapeRenderStats(pyblish.api.Validator): """Ensure all render stats are set to the default values.""" - order = openpype.api.ValidateMeshOrder + order = ValidateMeshOrder hosts = ['maya'] families = ['model'] label = 'Shape Default Render Stats' actions = [openpype.hosts.maya.api.action.SelectInvalidAction, - openpype.api.RepairAction] + RepairAction] defaults = {'castsShadows': 1, 'receiveShadows': 1, diff --git a/openpype/hosts/maya/plugins/publish/validate_shape_zero.py b/openpype/hosts/maya/plugins/publish/validate_shape_zero.py index 343eaccb7d..9e30735d40 100644 --- a/openpype/hosts/maya/plugins/publish/validate_shape_zero.py +++ b/openpype/hosts/maya/plugins/publish/validate_shape_zero.py @@ -4,6 +4,10 @@ import pyblish.api import openpype.api import openpype.hosts.maya.api.action from openpype.hosts.maya.api import lib +from openpype.pipeline.publish import ( + ValidateContentsOrder, + RepairAction, +) class ValidateShapeZero(pyblish.api.Validator): @@ -13,13 +17,13 @@ class ValidateShapeZero(pyblish.api.Validator): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ["maya"] families = ["model"] label = "Shape Zero (Freeze)" actions = [ openpype.hosts.maya.api.action.SelectInvalidAction, - openpype.api.RepairAction + RepairAction ] @staticmethod diff --git a/openpype/hosts/maya/plugins/publish/validate_single_assembly.py b/openpype/hosts/maya/plugins/publish/validate_single_assembly.py index 9fb3a47e6d..8771ca58d1 100644 --- a/openpype/hosts/maya/plugins/publish/validate_single_assembly.py +++ b/openpype/hosts/maya/plugins/publish/validate_single_assembly.py @@ -1,5 +1,5 @@ import pyblish.api -import openpype.api +from openpype.pipeline.publish import ValidateContentsOrder class 
ValidateSingleAssembly(pyblish.api.InstancePlugin): @@ -17,7 +17,7 @@ class ValidateSingleAssembly(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ['maya'] families = ['rig', 'animation'] label = 'Single Assembly' diff --git a/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_hierarchy.py b/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_hierarchy.py index 54a86d27cf..8221c18b17 100644 --- a/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_hierarchy.py +++ b/openpype/hosts/maya/plugins/publish/validate_skeletalmesh_hierarchy.py @@ -1,7 +1,10 @@ # -*- coding: utf-8 -*- import pyblish.api -import openpype.api -from openpype.pipeline import PublishXmlValidationError + +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishXmlValidationError, +) from maya import cmds @@ -9,7 +12,7 @@ from maya import cmds class ValidateSkeletalMeshHierarchy(pyblish.api.InstancePlugin): """Validates that nodes has common root.""" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ["maya"] families = ["skeletalMesh"] label = "Skeletal Mesh Top Node" diff --git a/openpype/hosts/maya/plugins/publish/validate_skinCluster_deformer_set.py b/openpype/hosts/maya/plugins/publish/validate_skinCluster_deformer_set.py index 8c804786f3..86ff914cb0 100644 --- a/openpype/hosts/maya/plugins/publish/validate_skinCluster_deformer_set.py +++ b/openpype/hosts/maya/plugins/publish/validate_skinCluster_deformer_set.py @@ -3,6 +3,7 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateSkinclusterDeformerSet(pyblish.api.InstancePlugin): @@ -14,7 +15,7 @@ class ValidateSkinclusterDeformerSet(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ['maya'] families = ['fbx'] label = "Skincluster Deformer Relationships" diff --git a/openpype/hosts/maya/plugins/publish/validate_step_size.py b/openpype/hosts/maya/plugins/publish/validate_step_size.py index 172ac5f26e..552a936966 100644 --- a/openpype/hosts/maya/plugins/publish/validate_step_size.py +++ b/openpype/hosts/maya/plugins/publish/validate_step_size.py @@ -1,6 +1,7 @@ import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateStepSize(pyblish.api.InstancePlugin): @@ -10,7 +11,7 @@ class ValidateStepSize(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder label = 'Step size' families = ['camera', 'pointcache', diff --git a/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py b/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py index 6f5ff24b9c..64faf9ecb6 100644 --- a/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py +++ b/openpype/hosts/maya/plugins/publish/validate_transform_naming_suffix.py @@ -5,6 +5,7 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin): @@ -27,7 +28,7 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ['maya'] families = ['model'] category = 
'cleanup' diff --git a/openpype/hosts/maya/plugins/publish/validate_transform_zero.py b/openpype/hosts/maya/plugins/publish/validate_transform_zero.py index fdd09658d1..9e232f6023 100644 --- a/openpype/hosts/maya/plugins/publish/validate_transform_zero.py +++ b/openpype/hosts/maya/plugins/publish/validate_transform_zero.py @@ -3,6 +3,7 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateTransformZero(pyblish.api.Validator): @@ -14,7 +15,7 @@ class ValidateTransformZero(pyblish.api.Validator): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ["maya"] families = ["model"] category = "geometry" diff --git a/openpype/hosts/maya/plugins/publish/validate_unreal_mesh_triangulated.py b/openpype/hosts/maya/plugins/publish/validate_unreal_mesh_triangulated.py index c05121a1b0..1ed3e5531c 100644 --- a/openpype/hosts/maya/plugins/publish/validate_unreal_mesh_triangulated.py +++ b/openpype/hosts/maya/plugins/publish/validate_unreal_mesh_triangulated.py @@ -3,12 +3,13 @@ from maya import cmds import pyblish.api import openpype.api +from openpype.pipeline.publish import ValidateMeshOrder class ValidateUnrealMeshTriangulated(pyblish.api.InstancePlugin): """Validate if mesh is made of triangles for Unreal Engine""" - order = openpype.api.ValidateMeshOrder + order = ValidateMeshOrder hosts = ["maya"] families = ["staticMesh"] category = "geometry" diff --git a/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py b/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py index 33788d1835..a4bb54f5af 100644 --- a/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py +++ b/openpype/hosts/maya/plugins/publish/validate_unreal_staticmesh_naming.py @@ -6,7 +6,8 @@ import pyblish.api import openpype.api import openpype.hosts.maya.api.action from openpype.pipeline import legacy_io -from openpype.api import get_project_settings +from openpype.settings import get_project_settings +from openpype.pipeline.publish import ValidateContentsOrder class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin): @@ -50,7 +51,7 @@ class ValidateUnrealStaticMeshName(pyblish.api.InstancePlugin): """ optional = True - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ["maya"] families = ["staticMesh"] label = "Unreal Static Mesh Name" diff --git a/openpype/hosts/maya/plugins/publish/validate_unreal_up_axis.py b/openpype/hosts/maya/plugins/publish/validate_unreal_up_axis.py index 5e1b04889f..dd699735d9 100644 --- a/openpype/hosts/maya/plugins/publish/validate_unreal_up_axis.py +++ b/openpype/hosts/maya/plugins/publish/validate_unreal_up_axis.py @@ -2,7 +2,11 @@ from maya import cmds import pyblish.api -import openpype.api + +from openpype.pipeline.publish import ( + ValidateContentsOrder, + RepairAction, +) class ValidateUnrealUpAxis(pyblish.api.ContextPlugin): @@ -10,11 +14,11 @@ class ValidateUnrealUpAxis(pyblish.api.ContextPlugin): optional = True active = False - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ["maya"] families = ["staticMesh"] label = "Unreal Up-Axis check" - actions = [openpype.api.RepairAction] + actions = [RepairAction] def process(self, context): assert cmds.upAxis(q=True, axis=True) == "z", ( diff --git a/openpype/hosts/maya/plugins/publish/validate_visible_only.py 
b/openpype/hosts/maya/plugins/publish/validate_visible_only.py index 59a7f976ab..f326b91796 100644 --- a/openpype/hosts/maya/plugins/publish/validate_visible_only.py +++ b/openpype/hosts/maya/plugins/publish/validate_visible_only.py @@ -3,6 +3,7 @@ import pyblish.api import openpype.api from openpype.hosts.maya.api.lib import iter_visible_nodes_in_range import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateAlembicVisibleOnly(pyblish.api.InstancePlugin): @@ -12,7 +13,7 @@ class ValidateAlembicVisibleOnly(pyblish.api.InstancePlugin): on the instance - otherwise the validation is skipped. """ - order = openpype.api.ValidateContentsOrder + 0.05 + order = ValidateContentsOrder + 0.05 label = "Alembic Visible Only" hosts = ["maya"] families = ["pointcache", "animation"] diff --git a/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py b/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py index 5e35565383..366f3bd10e 100644 --- a/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py +++ b/openpype/hosts/maya/plugins/publish/validate_vray_distributed_rendering.py @@ -1,6 +1,9 @@ import pyblish.api -import openpype.api from openpype.hosts.maya.api import lib +from openpype.pipeline.publish import ( + ValidateContentsOrder, + RepairAction, +) from maya import cmds @@ -15,10 +18,10 @@ class ValidateVRayDistributedRendering(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder label = "VRay Distributed Rendering" families = ["renderlayer"] - actions = [openpype.api.RepairAction] + actions = [RepairAction] # V-Ray attribute names enabled_attr = "vraySettings.sys_distributed_rendering_on" diff --git a/openpype/hosts/maya/plugins/publish/validate_vray_referenced_aovs.py b/openpype/hosts/maya/plugins/publish/validate_vray_referenced_aovs.py index 7a48c29b7d..39c721e717 100644 --- a/openpype/hosts/maya/plugins/publish/validate_vray_referenced_aovs.py +++ b/openpype/hosts/maya/plugins/publish/validate_vray_referenced_aovs.py @@ -4,7 +4,7 @@ import pyblish.api import types from maya import cmds -import openpype.hosts.maya.api.action +from openpype.pipeline.publish import RepairContextAction class ValidateVrayReferencedAOVs(pyblish.api.InstancePlugin): @@ -20,7 +20,7 @@ class ValidateVrayReferencedAOVs(pyblish.api.InstancePlugin): label = 'VRay Referenced AOVs' hosts = ['maya'] families = ['renderlayer'] - actions = [openpype.api.RepairContextAction] + actions = [RepairContextAction] def process(self, instance): """Plugin main entry point.""" diff --git a/openpype/hosts/maya/plugins/publish/validate_vray_translator_settings.py b/openpype/hosts/maya/plugins/publish/validate_vray_translator_settings.py index 1deabde4a2..f49811c2c0 100644 --- a/openpype/hosts/maya/plugins/publish/validate_vray_translator_settings.py +++ b/openpype/hosts/maya/plugins/publish/validate_vray_translator_settings.py @@ -1,8 +1,11 @@ # -*- coding: utf-8 -*- """Validate VRay Translator settings.""" import pyblish.api -import openpype.api -from openpype.plugin import contextplugin_should_run +from openpype.pipeline.publish import ( + context_plugin_should_run, + RepairContextAction, + ValidateContentsOrder, +) from maya import cmds @@ -10,15 +13,15 @@ from maya import cmds class ValidateVRayTranslatorEnabled(pyblish.api.ContextPlugin): """Validate VRay Translator settings for extracting vrscenes.""" - order = openpype.api.ValidateContentsOrder + order = 
ValidateContentsOrder label = "VRay Translator Settings" families = ["vrayscene_layer"] - actions = [openpype.api.RepairContextAction] + actions = [RepairContextAction] def process(self, context): """Plugin entry point.""" # Workaround bug pyblish-base#250 - if not contextplugin_should_run(self, context): + if not context_plugin_should_run(self, context): return invalid = self.get_invalid(context) diff --git a/openpype/hosts/maya/plugins/publish/validate_yeti_renderscript_callbacks.py b/openpype/hosts/maya/plugins/publish/validate_yeti_renderscript_callbacks.py index 79cd09315e..a864a18cee 100644 --- a/openpype/hosts/maya/plugins/publish/validate_yeti_renderscript_callbacks.py +++ b/openpype/hosts/maya/plugins/publish/validate_yeti_renderscript_callbacks.py @@ -1,7 +1,7 @@ from maya import cmds import pyblish.api -import openpype.api +from openpype.pipeline.publish import ValidateContentsOrder class ValidateYetiRenderScriptCallbacks(pyblish.api.InstancePlugin): @@ -20,7 +20,7 @@ class ValidateYetiRenderScriptCallbacks(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder label = "Yeti Render Script Callbacks" hosts = ["maya"] families = ["renderlayer"] diff --git a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_cache_state.py b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_cache_state.py index 5610733577..4842134b12 100644 --- a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_cache_state.py +++ b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_cache_state.py @@ -1,7 +1,7 @@ import pyblish.api -import openpype.action import maya.cmds as cmds import openpype.hosts.maya.api.action +from openpype.pipeline.publish import RepairAction class ValidateYetiRigCacheState(pyblish.api.InstancePlugin): @@ -17,7 +17,7 @@ class ValidateYetiRigCacheState(pyblish.api.InstancePlugin): label = "Yeti Rig Cache State" hosts = ["maya"] families = ["yetiRig"] - actions = [openpype.action.RepairAction, + actions = [RepairAction, openpype.hosts.maya.api.action.SelectInvalidAction] def process(self, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py index 651c8da849..0fe89634f5 100644 --- a/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py +++ b/openpype/hosts/maya/plugins/publish/validate_yeti_rig_input_in_instance.py @@ -3,12 +3,13 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateYetiRigInputShapesInInstance(pyblish.api.Validator): """Validate if all input nodes are part of the instance's hierarchy""" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ["maya"] families = ["yetiRig"] label = "Yeti Rig Input Shapes In Instance" diff --git a/openpype/hosts/nuke/api/actions.py b/openpype/hosts/nuke/api/actions.py index c4a6f0fb84..92b83560da 100644 --- a/openpype/hosts/nuke/api/actions.py +++ b/openpype/hosts/nuke/api/actions.py @@ -1,6 +1,6 @@ import pyblish.api -from openpype.api import get_errored_instances_from_context +from openpype.pipeline.publish import get_errored_instances_from_context from .lib import ( reset_selection, select_nodes diff --git a/openpype/hosts/nuke/api/lib.py b/openpype/hosts/nuke/api/lib.py index b14f1a1eb1..e55fdbfcb2 100644 --- a/openpype/hosts/nuke/api/lib.py +++ b/openpype/hosts/nuke/api/lib.py @@ 
-19,17 +19,19 @@ from openpype.client import ( get_last_versions, get_representations, ) -from openpype.api import ( + +from openpype.host import HostDirmap +from openpype.tools.utils import host_tools +from openpype.lib import ( + env_value_to_bool, Logger, get_version_from_path, - get_current_project_settings, ) -from openpype.tools.utils import host_tools -from openpype.lib import env_value_to_bool -from openpype.lib.path_tools import HostDirmap + from openpype.settings import ( get_project_settings, get_anatomy_settings, + get_current_project_settings, ) from openpype.modules import ModulesManager from openpype.pipeline.template_data import get_template_data_with_names @@ -76,6 +78,23 @@ class Context: _project_doc = None +def get_main_window(): + """Acquire Nuke's main window""" + if Context.main_window is None: + from Qt import QtWidgets + + top_widgets = QtWidgets.QApplication.topLevelWidgets() + name = "Foundry::UI::DockMainWindow" + for widget in top_widgets: + if ( + widget.inherits("QMainWindow") + and widget.metaObject().className() == name + ): + Context.main_window = widget + break + return Context.main_window + + class Knobby(object): """For creating knob which it's type isn't mapped in `create_knobs` @@ -2651,20 +2670,16 @@ def add_scripts_gizmo(): class NukeDirmap(HostDirmap): - def __init__(self, host_name, project_settings, sync_module, file_name): + def __init__(self, file_name, *args, **kwargs): """ - Args: - host_name (str): Nuke - project_settings (dict): settings of current project - sync_module (SyncServerModule): to limit reinitialization - file_name (str): full path of referenced file from workfiles + Args: + file_name (str): full path of referenced file from workfiles + *args (tuple): Positional arguments for 'HostDirmap' class + **kwargs (dict): Keyword arguments for 'HostDirmap' class """ - self.host_name = host_name - self.project_settings = project_settings - self.file_name = file_name - self.sync_module = sync_module - self._mapping = None # cache mapping + self.file_name = file_name + super(NukeDirmap, self).__init__(*args, **kwargs) def on_enable_dirmap(self): pass @@ -2684,14 +2699,20 @@ class NukeDirmap(HostDirmap): class DirmapCache: """Caching class to get settings and sync_module easily and only once.""" + _project_name = None _project_settings = None _sync_module = None + @classmethod + def project_name(cls): + if cls._project_name is None: + cls._project_name = os.getenv("AVALON_PROJECT") + return cls._project_name + @classmethod def project_settings(cls): if cls._project_settings is None: - cls._project_settings = get_project_settings( - os.getenv("AVALON_PROJECT")) + cls._project_settings = get_project_settings(cls.project_name()) return cls._project_settings @classmethod @@ -2702,32 +2723,25 @@ class DirmapCache: @contextlib.contextmanager -def _duplicate_node_temp(): +def node_tempfile(): """Create a temp file where node is pasted during duplication. This is to avoid using clipboard for node duplication. """ - duplicate_node_temp_path = os.path.join( - tempfile.gettempdir(), - "openpype_nuke_duplicate_temp_{}".format(os.getpid()) + tmp_file = tempfile.NamedTemporaryFile( + mode="w", prefix="openpype_nuke_temp_", suffix=".nk", delete=False ) - - # This can happen only if 'duplicate_node' would be - if os.path.exists(duplicate_node_temp_path): - log.warning(( - "Temp file for node duplication already exists." 
- " Trying to remove {}" - ).format(duplicate_node_temp_path)) - os.remove(duplicate_node_temp_path) + tmp_file.close() + node_tempfile_path = tmp_file.name try: # Yield the path where node can be copied - yield duplicate_node_temp_path + yield node_tempfile_path finally: # Remove the file at the end - os.remove(duplicate_node_temp_path) + os.remove(node_tempfile_path) def duplicate_node(node): @@ -2736,7 +2750,7 @@ def duplicate_node(node): # select required node for duplication node.setSelected(True) - with _duplicate_node_temp() as filepath: + with node_tempfile() as filepath: # copy selected to temp filepath nuke.nodeCopy(filepath) @@ -2757,10 +2771,14 @@ def dirmap_file_name_filter(file_name): Checks project settings for potential mapping from source to dest. """ - dirmap_processor = NukeDirmap("nuke", - DirmapCache.project_settings(), - DirmapCache.sync_module(), - file_name) + + dirmap_processor = NukeDirmap( + file_name, + "nuke", + DirmapCache.project_name(), + DirmapCache.project_settings(), + DirmapCache.sync_module(), + ) dirmap_processor.process_dirmap() if os.path.exists(dirmap_processor.file_name): return dirmap_processor.file_name @@ -2807,3 +2825,100 @@ def ls_img_sequence(path): } return False + + +def get_group_io_nodes(nodes): + """Get the input and the output of a group of nodes.""" + + if not nodes: + raise ValueError("there is no nodes in the list") + + input_node = None + output_node = None + + if len(nodes) == 1: + input_node = output_node = nodes[0] + + else: + for node in nodes: + if "Input" in node.name(): + input_node = node + + if "Output" in node.name(): + output_node = node + + if input_node is not None and output_node is not None: + break + + if input_node is None: + raise ValueError("No Input found") + + if output_node is None: + raise ValueError("No Output found") + return input_node, output_node + + +def get_extreme_positions(nodes): + """Get the 4 numbers that represent the box of a group of nodes.""" + + if not nodes: + raise ValueError("there is no nodes in the list") + + nodes_xpos = [n.xpos() for n in nodes] + \ + [n.xpos() + n.screenWidth() for n in nodes] + + nodes_ypos = [n.ypos() for n in nodes] + \ + [n.ypos() + n.screenHeight() for n in nodes] + + min_x, min_y = (min(nodes_xpos), min(nodes_ypos)) + max_x, max_y = (max(nodes_xpos), max(nodes_ypos)) + return min_x, min_y, max_x, max_y + + +def refresh_node(node): + """Correct a bug caused by the multi-threading of nuke. + + Refresh the node to make sure that it takes the desired attributes. + """ + + x = node.xpos() + y = node.ypos() + nuke.autoplaceSnap(node) + node.setXYpos(x, y) + + +def refresh_nodes(nodes): + for node in nodes: + refresh_node(node) + + +def get_names_from_nodes(nodes): + """Get list of nodes names. + + Args: + nodes(List[nuke.Node]): List of nodes to convert into names. + + Returns: + List[str]: Name of passed nodes. + """ + + return [ + node.name() + for node in nodes + ] + + +def get_nodes_by_names(names): + """Get list of nuke nodes based on their names. + + Args: + names (List[str]): List of node names to be found. + + Returns: + List[nuke.Node]: List of nodes found by name. 
+ """ + + return [ + nuke.toNode(name) + for name in names + ] diff --git a/openpype/hosts/nuke/api/lib_template_builder.py b/openpype/hosts/nuke/api/lib_template_builder.py new file mode 100644 index 0000000000..61baa23928 --- /dev/null +++ b/openpype/hosts/nuke/api/lib_template_builder.py @@ -0,0 +1,220 @@ +from collections import OrderedDict + +import qargparse + +import nuke + +from openpype.tools.utils.widgets import OptionDialog + +from .lib import imprint, get_main_window + + +# To change as enum +build_types = ["context_asset", "linked_asset", "all_assets"] + + +def get_placeholder_attributes(node, enumerate=False): + list_atts = { + "builder_type", + "family", + "representation", + "loader", + "loader_args", + "order", + "asset", + "subset", + "hierarchy", + "siblings", + "last_loaded" + } + attributes = {} + for attr in node.knobs().keys(): + if attr in list_atts: + if enumerate: + try: + attributes[attr] = node.knob(attr).values() + except AttributeError: + attributes[attr] = node.knob(attr).getValue() + else: + attributes[attr] = node.knob(attr).getValue() + + return attributes + + +def delete_placeholder_attributes(node): + """Delete all extra placeholder attributes.""" + + extra_attributes = get_placeholder_attributes(node) + for attribute in extra_attributes.keys(): + try: + node.removeKnob(node.knob(attribute)) + except ValueError: + continue + + +def hide_placeholder_attributes(node): + """Hide all extra placeholder attributes.""" + + extra_attributes = get_placeholder_attributes(node) + for attribute in extra_attributes.keys(): + try: + node.knob(attribute).setVisible(False) + except ValueError: + continue + + +def create_placeholder(): + args = placeholder_window() + if not args: + # operation canceled, no locator created + return + + placeholder = nuke.nodes.NoOp() + placeholder.setName("PLACEHOLDER") + placeholder.knob("tile_color").setValue(4278190335) + + # custom arg parse to force empty data query + # and still imprint them on placeholder + # and getting items when arg is of type Enumerator + options = OrderedDict() + for arg in args: + if not type(arg) == qargparse.Separator: + options[str(arg)] = arg._data.get("items") or arg.read() + imprint(placeholder, options) + imprint(placeholder, {"is_placeholder": True}) + placeholder.knob("is_placeholder").setVisible(False) + + +def update_placeholder(): + placeholder = nuke.selectedNodes() + if not placeholder: + raise ValueError("No node selected") + if len(placeholder) > 1: + raise ValueError("Too many selected nodes") + placeholder = placeholder[0] + + args = placeholder_window(get_placeholder_attributes(placeholder)) + if not args: + return # operation canceled + # delete placeholder attributes + delete_placeholder_attributes(placeholder) + + options = OrderedDict() + for arg in args: + if not type(arg) == qargparse.Separator: + options[str(arg)] = arg._data.get("items") or arg.read() + imprint(placeholder, options) + + +def imprint_enum(placeholder, args): + """ + Imprint method doesn't act properly with enums. 
+ Replacing the functionality with this for now + """ + + enum_values = { + str(arg): arg.read() + for arg in args + if arg._data.get("items") + } + string_to_value_enum_table = { + build: idx + for idx, build in enumerate(build_types) + } + attrs = {} + for key, value in enum_values.items(): + attrs[key] = string_to_value_enum_table[value] + # NOTE: 'attrs' is built here but not yet applied to the placeholder + + +def placeholder_window(options=None): + options = options or dict() + dialog = OptionDialog(parent=get_main_window()) + dialog.setWindowTitle("Create Placeholder") + + args = [ + qargparse.Separator("Main attributes"), + qargparse.Enum( + "builder_type", + label="Asset Builder Type", + default=options.get("builder_type", 0), + items=build_types, + help="""Asset Builder Type +Builder type describes what the template loader will look for. + +context_asset : Template loader will look for subsets of +the current context asset (asset "bob" will find subsets of "bob") + +linked_asset : Template loader will look for assets linked +to the current context asset. +Linked assets are looked up in the OpenPype database under the field "inputLinks" +""" + ), + qargparse.String( + "family", + default=options.get("family", ""), + label="OpenPype Family", + placeholder="ex: image, plate ..."), + qargparse.String( + "representation", + default=options.get("representation", ""), + label="OpenPype Representation", + placeholder="ex: mov, png ..."), + qargparse.String( + "loader", + default=options.get("loader", ""), + label="Loader", + placeholder="ex: LoadClip, LoadImage ...", + help="""Loader + +Defines what OpenPype loader will be used to load assets. +Usable loaders depend on the current host's loader list. +Field is case sensitive. +"""), + qargparse.String( + "loader_args", + default=options.get("loader_args", ""), + label="Loader Arguments", + placeholder='ex: {"camera":"persp", "lights":True}', + help="""Loader Arguments + +Defines a dictionary of arguments used to load assets. +Usable arguments depend on the current placeholder Loader. +Field should be a valid Python dict. Anything else will be ignored. +"""), + qargparse.Integer( + "order", + default=options.get("order", 0), + min=0, + max=999, + label="Order", + placeholder="ex: 0, 100 ... 
(smallest order loaded first)", + help="""Order + +Order defines asset loading priority (0 to 999). +Priority rule is: "lowest is first to load"."""), + qargparse.Separator( + "Optional attributes"), + qargparse.String( + "asset", + default=options.get("asset", ""), + label="Asset filter", + placeholder="regex filtering by asset name", + help="Filtering assets by matching the regex against the asset's name"), + qargparse.String( + "subset", + default=options.get("subset", ""), + label="Subset filter", + placeholder="regex filtering by subset name", + help="Filtering assets by matching the regex against the subset's name"), + qargparse.String( + "hierarchy", + default=options.get("hierarchy", ""), + label="Hierarchy filter", + placeholder="regex filtering by asset's hierarchy", + help="Filtering assets by matching the regex against the asset's hierarchy") + ] + dialog.create(args) + if not dialog.exec_(): + return None + + return args diff --git a/openpype/hosts/nuke/api/pipeline.py b/openpype/hosts/nuke/api/pipeline.py index c1cd8f771a..bac42128cc 100644 --- a/openpype/hosts/nuke/api/pipeline.py +++ b/openpype/hosts/nuke/api/pipeline.py @@ -22,10 +22,16 @@ from openpype.pipeline import ( AVALON_CONTAINER_ID, ) from openpype.pipeline.workfile import BuildWorkfile +from openpype.pipeline.workfile.build_template import ( + build_workfile_template, + update_workfile_template +) from openpype.tools.utils import host_tools from .command import viewer_update_and_undo_stop from .lib import ( + Context, + get_main_window, add_publish_knob, WorkfileSettings, process_workfile_builder, @@ -33,7 +39,9 @@ from .lib import ( check_inventory_versions, set_avalon_knob_data, read_avalon_data, - Context +) +from .lib_template_builder import ( + create_placeholder, update_placeholder ) log = Logger.get_logger(__name__) @@ -53,23 +61,6 @@ if os.getenv("PYBLISH_GUI", None): pyblish.api.register_gui(os.getenv("PYBLISH_GUI", None)) -def get_main_window(): - """Acquire Nuke's main window""" - if Context.main_window is None: - from Qt import QtWidgets - - top_widgets = QtWidgets.QApplication.topLevelWidgets() - name = "Foundry::UI::DockMainWindow" - for widget in top_widgets: - if ( - widget.inherits("QMainWindow") - and widget.metaObject().className() == name - ): - Context.main_window = widget - break - return Context.main_window - - def reload_config(): """Attempt to reload pipeline at run-time. 
@@ -219,6 +210,24 @@ def _install_menu(): lambda: BuildWorkfile().process() ) + menu_template = menu.addMenu("Template Builder") # create the Template Builder menu + menu_template.addCommand( + "Build Workfile from template", + lambda: build_workfile_template() + ) + menu_template.addCommand( + "Update Workfile", + lambda: update_workfile_template() + ) + menu_template.addSeparator() + menu_template.addCommand( + "Create Placeholder", + lambda: create_placeholder() + ) + menu_template.addCommand( + "Update Placeholder", + lambda: update_placeholder() + ) menu.addSeparator() menu.addCommand( "Experimental tools...", diff --git a/openpype/hosts/nuke/api/template_loader.py b/openpype/hosts/nuke/api/template_loader.py new file mode 100644 index 0000000000..5ff4b8fc41 --- /dev/null +++ b/openpype/hosts/nuke/api/template_loader.py @@ -0,0 +1,639 @@ +import re +import collections + +import nuke + +from openpype.client import get_representations +from openpype.pipeline import legacy_io +from openpype.pipeline.workfile.abstract_template_loader import ( + AbstractPlaceholder, + AbstractTemplateLoader, +) + +from .lib import ( + find_free_space_to_paste_nodes, + get_extreme_positions, + get_group_io_nodes, + imprint, + refresh_node, + refresh_nodes, + reset_selection, + get_names_from_nodes, + get_nodes_by_names, + select_nodes, + duplicate_node, + node_tempfile, +) + +from .lib_template_builder import ( + delete_placeholder_attributes, + get_placeholder_attributes, + hide_placeholder_attributes +) + +PLACEHOLDER_SET = "PLACEHOLDERS_SET" + + +class NukeTemplateLoader(AbstractTemplateLoader): + """Concrete implementation of AbstractTemplateLoader for Nuke + + """ + + def import_template(self, path): + """Import template into current scene. + Block if a template is already loaded. 
+ + Args: + path (str): A path to current template (usually given by + get_template_path implementation) + + Returns: + bool: Whether the template was successfully imported or not + """ + + # TODO check if the template is already imported + + nuke.nodePaste(path) + reset_selection() + + return True + + def preload(self, placeholder, loaders_by_name, last_representation): + placeholder.data["nodes_init"] = nuke.allNodes() + placeholder.data["last_repre_id"] = str(last_representation["_id"]) + + def populate_template(self, ignored_ids=None): + processed_key = "_node_processed" + + processed_nodes = [] + nodes = self.get_template_nodes() + while nodes: + # Mark nodes as processed so they're not re-executed + # - that can happen if processing of placeholder node fails + for node in nodes: + imprint(node, {processed_key: True}) + processed_nodes.append(node) + + super(NukeTemplateLoader, self).populate_template(ignored_ids) + + # Recollect nodes to repopulate + nodes = [] + for node in self.get_template_nodes(): + # Skip already processed nodes + if ( + processed_key in node.knobs() + and node.knob(processed_key).value() + ): + continue + nodes.append(node) + + for node in processed_nodes: + knob = node.knob(processed_key) + if knob is not None: + node.removeKnob(knob) + + @staticmethod + def get_template_nodes(): + placeholders = [] + all_groups = collections.deque() + all_groups.append(nuke.thisGroup()) + while all_groups: + group = all_groups.popleft() + for node in group.nodes(): + if isinstance(node, nuke.Group): + all_groups.append(node) + + node_knobs = node.knobs() + if ( + "builder_type" not in node_knobs + or "is_placeholder" not in node_knobs + or not node.knob("is_placeholder").value() + ): + continue + + if "empty" in node_knobs and node.knob("empty").value(): + continue + + placeholders.append(node) + + return placeholders + + def update_missing_containers(self): + nodes_by_id = collections.defaultdict(list) + + for node in nuke.allNodes(): + node_knobs = node.knobs().keys() + if "repre_id" in node_knobs: + repre_id = node.knob("repre_id").getValue() + nodes_by_id[repre_id].append(node.name()) + + if "empty" in node_knobs: + node.removeKnob(node.knob("empty")) + imprint(node, {"empty": False}) + + for node_names in nodes_by_id.values(): + node = None + for node_name in node_names: + node_by_name = nuke.toNode(node_name) + if "builder_type" in node_by_name.knobs().keys(): + node = node_by_name + break + + if node is None: + continue + + placeholder = nuke.nodes.NoOp() + placeholder.setName("PLACEHOLDER") + placeholder.knob("tile_color").setValue(4278190335) + attributes = get_placeholder_attributes(node, enumerate=True) + imprint(placeholder, attributes) + pos_x = int(node.knob("x").getValue()) + pos_y = int(node.knob("y").getValue()) + placeholder.setXYpos(pos_x, pos_y) + imprint(placeholder, {"nb_children": 1}) + refresh_node(placeholder) + + self.populate_template(self.get_loaded_containers_by_id()) + + def get_loaded_containers_by_id(self): + repre_ids = set() + for node in nuke.allNodes(): + if "repre_id" in node.knobs(): + repre_ids.add(node.knob("repre_id").getValue()) + + # The set has already removed duplicates; return as a list + return list(repre_ids) + + def delete_placeholder(self, placeholder): + placeholder_node = placeholder.data["node"] + last_loaded = placeholder.data["last_loaded"] + if not placeholder.data["delete"]: + if "empty" in placeholder_node.knobs().keys(): + placeholder_node.removeKnob(placeholder_node.knob("empty")) + imprint(placeholder_node, {"empty": True}) + return + + if not 
last_loaded: + nuke.delete(placeholder_node) + return + + if "last_loaded" in placeholder_node.knobs().keys(): + for node_name in placeholder_node.knob("last_loaded").values(): + node = nuke.toNode(node_name) + try: + delete_placeholder_attributes(node) + except Exception: + pass + + last_loaded_names = [ + loaded_node.name() + for loaded_node in last_loaded + ] + imprint(placeholder_node, {"last_loaded": last_loaded_names}) + + for node in last_loaded: + refresh_node(node) + refresh_node(placeholder_node) + if "builder_type" not in node.knobs().keys(): + attributes = get_placeholder_attributes(placeholder_node, True) + imprint(node, attributes) + imprint(node, {"is_placeholder": False}) + hide_placeholder_attributes(node) + node.knob("is_placeholder").setVisible(False) + imprint( + node, + { + "x": placeholder_node.xpos(), + "y": placeholder_node.ypos() + } + ) + node.knob("x").setVisible(False) + node.knob("y").setVisible(False) + nuke.delete(placeholder_node) + + +class NukePlaceholder(AbstractPlaceholder): + """Concrete implementation of AbstractPlaceholder for Nuke""" + + optional_keys = {"asset", "subset", "hierarchy"} + + def get_data(self, node): + user_data = dict() + node_knobs = node.knobs() + for attr in self.required_keys.union(self.optional_keys): + if attr in node_knobs: + user_data[attr] = node_knobs[attr].getValue() + user_data["node"] = node + + nb_children = 0 + if "nb_children" in node_knobs: + nb_children = int(node_knobs["nb_children"].getValue()) + user_data["nb_children"] = nb_children + + siblings = [] + if "siblings" in node_knobs: + siblings = node_knobs["siblings"].values() + user_data["siblings"] = siblings + + node_full_name = node.fullName() + user_data["group_name"] = node_full_name.rpartition(".")[0] + user_data["last_loaded"] = [] + user_data["delete"] = False + self.data = user_data + + def parent_in_hierarchy(self, containers): + return + + def create_sib_copies(self): + """Create copies of the placeholder siblings (the ones that were + loaded with it) for the newly added nodes. + + Returns: + copies (dict): original node names mapped to their copies + """ + + copies = {} + siblings = get_nodes_by_names(self.data["siblings"]) + for node in siblings: + new_node = duplicate_node(node) + + x_init = int(new_node.knob("x_init").getValue()) + y_init = int(new_node.knob("y_init").getValue()) + new_node.setXYpos(x_init, y_init) + if isinstance(new_node, nuke.BackdropNode): + w_init = new_node.knob("w_init").getValue() + h_init = new_node.knob("h_init").getValue() + new_node.knob("bdwidth").setValue(w_init) + new_node.knob("bdheight").setValue(h_init) + refresh_node(node) + + if "repre_id" in node.knobs().keys(): + node.removeKnob(node.knob("repre_id")) + copies[node.name()] = new_node + return copies + + def fix_z_order(self): + """Fix the problem of z_order when a backdrop is loaded.""" + + nodes_loaded = self.data["last_loaded"] + loaded_backdrops = [] + bd_orders = set() + for node in nodes_loaded: + if isinstance(node, nuke.BackdropNode): + loaded_backdrops.append(node) + bd_orders.add(node.knob("z_order").getValue()) + + if not bd_orders: + return + + sib_orders = set() + for node_name in self.data["siblings"]: + node = nuke.toNode(node_name) + if isinstance(node, nuke.BackdropNode): + sib_orders.add(node.knob("z_order").getValue()) + + if not sib_orders: + return + + min_order = min(bd_orders) + max_order = max(sib_orders) + for backdrop_node in loaded_backdrops: + z_order = backdrop_node.knob("z_order").getValue() + backdrop_node.knob("z_order").setValue( 
+ z_order + max_order - min_order + 1) + + def update_nodes(self, nodes, considered_nodes, offset_y=None): + """Adjust backdrop node dimensions and positions, + taking the sizes of the considered nodes into account. + + Args: + nodes (list): list of nodes to update + considered_nodes (list): list of nodes to consider while updating + positions and dimensions + offset_y (int): vertical distance between copies + """ + + placeholder_node = self.data["node"] + + min_x, min_y, max_x, max_y = get_extreme_positions(considered_nodes) + + diff_x = diff_y = 0 + contained_nodes = [] # for backdrops + + if offset_y is None: + width_ph = placeholder_node.screenWidth() + height_ph = placeholder_node.screenHeight() + diff_y = max_y - min_y - height_ph + diff_x = max_x - min_x - width_ph + contained_nodes = [placeholder_node] + min_x = placeholder_node.xpos() + min_y = placeholder_node.ypos() + else: + siblings = get_nodes_by_names(self.data["siblings"]) + min_x_sib, _, max_x_sib, _ = get_extreme_positions(siblings) + diff_y = max_y - min_y + 20 + diff_x = abs(max_x - min_x - max_x_sib + min_x_sib) + contained_nodes = considered_nodes + + if diff_y <= 0 and diff_x <= 0: + return + + for node in nodes: + refresh_node(node) + + if ( + node == placeholder_node + or node in considered_nodes + ): + continue + + if ( + not isinstance(node, nuke.BackdropNode) + or not set(contained_nodes) <= set(node.getNodes()) + ): + if offset_y is None and node.xpos() >= min_x: + node.setXpos(node.xpos() + diff_x) + + if node.ypos() >= min_y: + node.setYpos(node.ypos() + diff_y) + + else: + width = node.screenWidth() + height = node.screenHeight() + node.knob("bdwidth").setValue(width + diff_x) + node.knob("bdheight").setValue(height + diff_y) + + refresh_node(node) + + def imprint_inits(self): + """Add initial positions and dimensions to the attributes""" + + for node in nuke.allNodes(): + refresh_node(node) + imprint(node, {"x_init": node.xpos(), "y_init": node.ypos()}) + node.knob("x_init").setVisible(False) + node.knob("y_init").setVisible(False) + width = node.screenWidth() + height = node.screenHeight() + if "bdwidth" in node.knobs(): + imprint(node, {"w_init": width, "h_init": height}) + node.knob("w_init").setVisible(False) + node.knob("h_init").setVisible(False) + refresh_node(node) + + def imprint_siblings(self): + """ + - add sibling names to placeholder attributes (nodes loaded with it) + - add the representation id to the attributes of all the other nodes + """ + + loaded_nodes = self.data["last_loaded"] + loaded_nodes_set = set(loaded_nodes) + data = {"repre_id": str(self.data["last_repre_id"])} + + for node in loaded_nodes: + node_knobs = node.knobs() + if "builder_type" not in node_knobs: + # save the id of representation for all imported nodes + imprint(node, data) + node.knob("repre_id").setVisible(False) + refresh_node(node) + continue + + if ( + "is_placeholder" not in node_knobs + or node.knob("is_placeholder").value() + ): + siblings = list(loaded_nodes_set - {node}) + siblings_name = get_names_from_nodes(siblings) + siblings = {"siblings": siblings_name} + imprint(node, siblings) + + def set_loaded_connections(self): + """Set inputs and outputs of loaded nodes.""" + + placeholder_node = self.data["node"] + input_node, output_node = get_group_io_nodes(self.data["last_loaded"]) + for node in placeholder_node.dependent(): + for idx in range(node.inputs()): + if node.input(idx) == placeholder_node: + node.setInput(idx, output_node) + + for node in placeholder_node.dependencies(): + 
for idx in range(placeholder_node.inputs()): + if placeholder_node.input(idx) == node: + input_node.setInput(0, node) + + def set_copies_connections(self, copies): + """Set inputs and outputs of the copies. + + Args: + copies (dict): Copied nodes by their names. + """ + + last_input, last_output = get_group_io_nodes(self.data["last_loaded"]) + siblings = get_nodes_by_names(self.data["siblings"]) + siblings_input, siblings_output = get_group_io_nodes(siblings) + copy_input = copies[siblings_input.name()] + copy_output = copies[siblings_output.name()] + + for node_init in siblings: + if node_init == siblings_output: + continue + + node_copy = copies[node_init.name()] + for node in node_init.dependent(): + for idx in range(node.inputs()): + if node.input(idx) != node_init: + continue + + if node in siblings: + copies[node.name()].setInput(idx, node_copy) + else: + last_input.setInput(0, node_copy) + + for node in node_init.dependencies(): + for idx in range(node_init.inputs()): + if node_init.input(idx) != node: + continue + + if node_init == siblings_input: + copy_input.setInput(idx, node) + elif node in siblings: + node_copy.setInput(idx, copies[node.name()]) + else: + node_copy.setInput(idx, last_output) + + siblings_input.setInput(0, copy_output) + + def move_to_placeholder_group(self, nodes_loaded): + """Open the placeholder's group and copy loaded nodes into it. + + Returns: + nodes_loaded (list): the new list of pasted nodes + """ + + groups_name = self.data["group_name"] + reset_selection() + select_nodes(nodes_loaded) + if groups_name: + with node_tempfile() as filepath: + nuke.nodeCopy(filepath) + for node in nuke.selectedNodes(): + nuke.delete(node) + group = nuke.toNode(groups_name) + group.begin() + nuke.nodePaste(filepath) + nodes_loaded = nuke.selectedNodes() + return nodes_loaded + + def clean(self): + placeholder_node = self.data["node"] + + # collect the nodes that were added by the load + nodes_init = self.data["nodes_init"] + nodes_loaded = list(set(nuke.allNodes()) - set(nodes_init)) + self.log.debug("Loaded nodes: {}".format(nodes_loaded)) + if not nodes_loaded: + return + + self.data["delete"] = True + + nodes_loaded = self.move_to_placeholder_group(nodes_loaded) + self.data["last_loaded"] = nodes_loaded + refresh_nodes(nodes_loaded) + + # positioning of the loaded nodes + min_x, min_y, _, _ = get_extreme_positions(nodes_loaded) + for node in nodes_loaded: + xpos = (node.xpos() - min_x) + placeholder_node.xpos() + ypos = (node.ypos() - min_y) + placeholder_node.ypos() + node.setXYpos(xpos, ypos) + refresh_nodes(nodes_loaded) + + self.fix_z_order() # fix the problem of z_order for backdrops + self.imprint_siblings() + + if self.data["nb_children"] == 0: + # save initial nodes positions and dimensions, update them + # and set inputs and outputs of loaded nodes + + self.imprint_inits() + self.update_nodes(nuke.allNodes(), nodes_loaded) + self.set_loaded_connections() + + elif self.data["siblings"]: + # create copies of placeholder siblings for the new loaded nodes, + # set their inputs and outputs and update all nodes positions and + # dimensions and siblings names + + siblings = get_nodes_by_names(self.data["siblings"]) + refresh_nodes(siblings) + copies = self.create_sib_copies() + new_nodes = list(copies.values()) # copied nodes + self.update_nodes(new_nodes, nodes_loaded) + placeholder_node.removeKnob(placeholder_node.knob("siblings")) + new_nodes_name = get_names_from_nodes(new_nodes) + imprint(placeholder_node, {"siblings": new_nodes_name}) + 
self.set_copies_connections(copies) + + self.update_nodes( + nuke.allNodes(), + new_nodes + nodes_loaded, + 20 + ) + + new_siblings = get_names_from_nodes(new_nodes) + self.data["siblings"] = new_siblings + + else: + # if the placeholder doesn't have siblings, the loaded + # nodes will be placed in a free space + + xpointer, ypointer = find_free_space_to_paste_nodes( + nodes_loaded, direction="bottom", offset=200 + ) + node = nuke.createNode("NoOp") + reset_selection() + nuke.delete(node) + for node in nodes_loaded: + xpos = (node.xpos() - min_x) + xpointer + ypos = (node.ypos() - min_y) + ypointer + node.setXYpos(xpos, ypos) + + self.data["nb_children"] += 1 + reset_selection() + # go back to root group + nuke.root().begin() + + def get_representations(self, current_asset_doc, linked_asset_docs): + project_name = legacy_io.active_project() + + builder_type = self.data["builder_type"] + if builder_type == "context_asset": + context_filters = { + "asset": [re.compile(self.data["asset"])], + "subset": [re.compile(self.data["subset"])], + "hierarchy": [re.compile(self.data["hierarchy"])], + "representations": [self.data["representation"]], + "family": [self.data["family"]] + } + + elif builder_type != "linked_asset": + context_filters = { + "asset": [ + current_asset_doc["name"], + re.compile(self.data["asset"]) + ], + "subset": [re.compile(self.data["subset"])], + "hierarchy": [re.compile(self.data["hierarchy"])], + "representation": [self.data["representation"]], + "family": [self.data["family"]] + } + + else: + asset_regex = re.compile(self.data["asset"]) + linked_asset_names = [] + for asset_doc in linked_asset_docs: + asset_name = asset_doc["name"] + if asset_regex.match(asset_name): + linked_asset_names.append(asset_name) + + if not linked_asset_names: + return [] + + context_filters = { + "asset": linked_asset_names, + "subset": [re.compile(self.data["subset"])], + "hierarchy": [re.compile(self.data["hierarchy"])], + "representation": [self.data["representation"]], + "family": [self.data["family"]], + } + + return list(get_representations( + project_name, + context_filters=context_filters + )) + + def err_message(self): + return ( + "Error while trying to load a representation.\n" + "Either the subset wasn't published or the template is malformed." + "\n\n" + "Builder was looking for:\n{attributes}".format( + attributes="\n".join([ + "{}: {}".format(key.title(), value) + for key, value in self.data.items()] + ) + ) + ) diff --git a/openpype/hosts/nuke/plugins/publish/precollect_writes.py b/openpype/hosts/nuke/plugins/publish/precollect_writes.py index e37cc8a80a..17c4bc30cf 100644 --- a/openpype/hosts/nuke/plugins/publish/precollect_writes.py +++ b/openpype/hosts/nuke/plugins/publish/precollect_writes.py @@ -201,34 +201,6 @@ class CollectNukeWrites(pyblish.api.InstancePlugin): if not instance.data["review"]: instance.data["useSequenceForReview"] = False - project_name = legacy_io.active_project() - asset_name = instance.data["asset"] - # * Add audio to instance if exists. 
- # Find latest versions document - last_version_doc = get_last_version_by_subset_name( - project_name, "audioMain", asset_name=asset_name, fields=["_id"] - ) - - repre_doc = None - if last_version_doc: - # Try to find it's representation (Expected there is only one) - repre_docs = list(get_representations( - project_name, version_ids=[last_version_doc["_id"]] - )) - if not repre_docs: - self.log.warning( - "Version document does not contain any representations" - ) - else: - repre_doc = repre_docs[0] - - # Add audio to instance if representation was found - if repre_doc: - instance.data["audio"] = [{ - "offset": 0, - "filename": get_representation_path(repre_doc) - }] - self.log.debug("instance.data: {}".format(pformat(instance.data))) def is_prerender(self, families): diff --git a/openpype/hosts/nuke/plugins/publish/validate_asset_name.py b/openpype/hosts/nuke/plugins/publish/validate_asset_name.py index 7647471f8a..52731140ff 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_asset_name.py +++ b/openpype/hosts/nuke/plugins/publish/validate_asset_name.py @@ -4,10 +4,13 @@ from __future__ import absolute_import import nuke import pyblish.api -import openpype.api + import openpype.hosts.nuke.api.lib as nlib import openpype.hosts.nuke.api as nuke_api -from openpype.pipeline import PublishXmlValidationError +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishXmlValidationError, +) class SelectInvalidInstances(pyblish.api.Action): @@ -97,7 +100,7 @@ class ValidateCorrectAssetName(pyblish.api.InstancePlugin): Action on this validator will select invalid instances in Outliner. """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder label = "Validate correct asset name" hosts = ["nuke"] actions = [ diff --git a/openpype/hosts/nuke/plugins/publish/validate_knobs.py b/openpype/hosts/nuke/plugins/publish/validate_knobs.py index e2b11892e5..d44f27791a 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_knobs.py +++ b/openpype/hosts/nuke/plugins/publish/validate_knobs.py @@ -1,8 +1,11 @@ import nuke import six import pyblish.api -import openpype.api -from openpype.pipeline import PublishXmlValidationError + +from openpype.pipeline.publish import ( + RepairContextAction, + PublishXmlValidationError, +) class ValidateKnobs(pyblish.api.ContextPlugin): @@ -24,7 +27,7 @@ class ValidateKnobs(pyblish.api.ContextPlugin): order = pyblish.api.ValidatorOrder label = "Validate Knobs" hosts = ["nuke"] - actions = [openpype.api.RepairContextAction] + actions = [RepairContextAction] optional = True def process(self, context): diff --git a/openpype/hosts/nuke/plugins/publish/validate_output_resolution.py b/openpype/hosts/nuke/plugins/publish/validate_output_resolution.py index fc07e9b83b..1e59880f90 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_output_resolution.py +++ b/openpype/hosts/nuke/plugins/publish/validate_output_resolution.py @@ -1,8 +1,8 @@ - import pyblish.api -import openpype.api + from openpype.hosts.nuke.api import maintained_selection from openpype.pipeline import PublishXmlValidationError +from openpype.pipeline.publish import RepairAction import nuke @@ -18,7 +18,7 @@ class ValidateOutputResolution(pyblish.api.InstancePlugin): families = ["render", "render.local", "render.farm"] label = "Write Resolution" hosts = ["nuke"] - actions = [openpype.api.RepairAction] + actions = [RepairAction] missing_msg = "Missing Reformat node in render group node" resolution_msg = "Reformat is set to wrong format" diff --git 
a/openpype/hosts/nuke/plugins/publish/validate_script_attributes.py b/openpype/hosts/nuke/plugins/publish/validate_script_attributes.py index 106d7a2524..f0632f8080 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_script_attributes.py +++ b/openpype/hosts/nuke/plugins/publish/validate_script_attributes.py @@ -1,8 +1,8 @@ from pprint import pformat import pyblish.api -import openpype.api from openpype.pipeline import PublishXmlValidationError +from openpype.pipeline.publish import RepairAction from openpype.hosts.nuke.api.lib import ( get_avalon_knob_data, WorkfileSettings @@ -19,7 +19,7 @@ class ValidateScriptAttributes(pyblish.api.InstancePlugin): label = "Validate script attributes" hosts = ["nuke"] optional = True - actions = [openpype.api.RepairAction] + actions = [RepairAction] def process(self, instance): root = nuke.root() diff --git a/openpype/hosts/nuke/plugins/publish/validate_write_legacy.py b/openpype/hosts/nuke/plugins/publish/validate_write_legacy.py index 9fb57c1698..699526ef57 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_write_legacy.py +++ b/openpype/hosts/nuke/plugins/publish/validate_write_legacy.py @@ -3,8 +3,9 @@ import toml import nuke import pyblish.api -import openpype.api + from openpype.pipeline import discover_creator_plugins +from openpype.pipeline.publish import RepairAction from openpype.hosts.nuke.api.lib import get_avalon_knob_data @@ -16,7 +17,7 @@ class ValidateWriteLegacy(pyblish.api.InstancePlugin): families = ["write"] label = "Validate Write Legacy" hosts = ["nuke"] - actions = [openpype.api.RepairAction] + actions = [RepairAction] def process(self, instance): node = instance[0] diff --git a/openpype/hosts/nuke/plugins/publish/validate_write_nodes.py b/openpype/hosts/nuke/plugins/publish/validate_write_nodes.py index 362ff31174..26a563b13b 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_write_nodes.py +++ b/openpype/hosts/nuke/plugins/publish/validate_write_nodes.py @@ -1,8 +1,9 @@ import pyblish.api -from openpype.api import get_errored_instances_from_context +from openpype.pipeline.publish import get_errored_instances_from_context from openpype.hosts.nuke.api.lib import ( get_write_node_template_attr, - set_node_knobs_from_settings + set_node_knobs_from_settings, + color_gui_to_int ) from openpype.pipeline import PublishXmlValidationError @@ -76,8 +77,11 @@ class ValidateNukeWriteNode(pyblish.api.InstancePlugin): # fix type differences if type(node_value) in (int, float): - value = float(value) - node_value = float(node_value) + if isinstance(value, list): + value = color_gui_to_int(value) + else: + value = float(value) + node_value = float(node_value) else: value = str(value) node_value = str(node_value) diff --git a/openpype/hosts/photoshop/api/README.md b/openpype/hosts/photoshop/api/README.md index 80792a4da0..4a36746cb2 100644 --- a/openpype/hosts/photoshop/api/README.md +++ b/openpype/hosts/photoshop/api/README.md @@ -127,11 +127,11 @@ class CollectInstances(pyblish.api.ContextPlugin): ```python import os -import openpype.api -from avalon import photoshop +from openpype.pipeline import publish +from openpype.hosts.photoshop import api as photoshop -class ExtractImage(openpype.api.Extractor): +class ExtractImage(publish.Extractor): """Produce a flattened image file from instance This plug-in takes into account only the layers in the group. 
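The validate_write_nodes hunk above starts routing list values through `color_gui_to_int` before comparing them with knob values. The helper itself lives in `openpype/hosts/nuke/api/lib.py` and is not shown in this diff, so the following is only a hypothetical sketch of the packing it presumably performs, assuming GUI colors arrive as `[r, g, b, a]` floats in the 0-1 range and Nuke stores colors as packed 32-bit `0xRRGGBBAA` integers (note that `4278190335`, the `tile_color` set on placeholder nodes earlier in this PR, is exactly `0xFF0000FF`, opaque red):

```python
# Hypothetical sketch only -- the real color_gui_to_int is imported from
# openpype.hosts.nuke.api.lib and is not part of this diff.
def color_gui_to_int_sketch(color):
    """Pack [r, g, b, a] floats (0-1) into a 32-bit 0xRRGGBBAA integer."""
    packed = 0
    for component in color:
        packed = (packed << 8) | int(round(component * 255))
    return packed


# Opaque red packs to 4278190335, the tile_color seen on placeholder nodes.
assert color_gui_to_int_sketch([1.0, 0.0, 0.0, 1.0]) == 0xFF0000FF == 4278190335
```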
diff --git a/openpype/hosts/photoshop/api/lib.py b/openpype/hosts/photoshop/api/lib.py index 73a546604f..221b4314e6 100644 --- a/openpype/hosts/photoshop/api/lib.py +++ b/openpype/hosts/photoshop/api/lib.py @@ -64,10 +64,15 @@ def maintained_selection(): @contextlib.contextmanager -def maintained_visibility(): - """Maintain visibility during context.""" +def maintained_visibility(layers=None): + """Maintain visibility during context. + + Args: + layers (list of PSItem): used for caching + """ visibility = {} - layers = stub().get_layers() + if not layers: + layers = stub().get_layers() for layer in layers: visibility[layer.id] = layer.visible try: diff --git a/openpype/hosts/photoshop/api/ws_stub.py b/openpype/hosts/photoshop/api/ws_stub.py index b49bf1c73f..2c4d0ad5fc 100644 --- a/openpype/hosts/photoshop/api/ws_stub.py +++ b/openpype/hosts/photoshop/api/ws_stub.py @@ -229,10 +229,11 @@ class PhotoshopServerStub: return self._get_layers_in_layers(parent_ids) - def get_layers_in_layers_ids(self, layers_ids): + def get_layers_in_layers_ids(self, layers_ids, layers=None): """Return all layers that belong to layers (might be groups). Args: + layers_ids (list): ids of parent layers (can be groups) + layers (list of PSItem): used for caching Returns: """ parent_ids = set(layers_ids) - return self._get_layers_in_layers(parent_ids) + return self._get_layers_in_layers(parent_ids, layers) - def _get_layers_in_layers(self, parent_ids): - all_layers = self.get_layers() + def _get_layers_in_layers(self, parent_ids, layers=None): + if not layers: + layers = self.get_layers() + + all_layers = layers ret = [] for layer in all_layers: @@ -394,14 +398,17 @@ class PhotoshopServerStub: self.hide_all_others_layers_ids(extract_ids) - def hide_all_others_layers_ids(self, extract_ids): + def hide_all_others_layers_ids(self, extract_ids, layers=None): """Hide all layers that are not part of the list or that are not children of this list. Args: extract_ids (list): ids of layers that should stay visible + layers (list of PSItem): used for caching """ - for layer in self.get_layers(): + if not layers: + layers = self.get_layers() + for layer in layers: if layer.visible and layer.id not in extract_ids: self.set_visible(layer.id, False) diff --git a/openpype/hosts/photoshop/plugins/publish/extract_image.py b/openpype/hosts/photoshop/plugins/publish/extract_image.py index a133e33409..cdb28c742d 100644 --- a/openpype/hosts/photoshop/plugins/publish/extract_image.py +++ b/openpype/hosts/photoshop/plugins/publish/extract_image.py @@ -1,61 +1,99 @@ import os -import openpype.api +import pyblish.api +from openpype.pipeline import publish from openpype.hosts.photoshop import api as photoshop -class ExtractImage(openpype.api.Extractor): - """Produce a flattened image file from instance +class ExtractImage(pyblish.api.ContextPlugin): + """Extract all layers (groups) marked for publish. - This plug-in takes into account only the layers in the group. + Usually a publishable instance is created as a wrapper of layer(s). For each + publishable instance, as many images are created as there are items in 'formats'. + + The logic tries to hide/unhide layers as few times as possible. + + Called once for all publishable instances. 
""" + order = publish.Extractor.order - 0.48 label = "Extract Image" hosts = ["photoshop"] + families = ["image", "background"] formats = ["png", "jpg"] - def process(self, instance): - staging_dir = self.staging_dir(instance) - self.log.info("Outputting image to {}".format(staging_dir)) - - # Perform extraction + def process(self, context): stub = photoshop.stub() - files = {} + hidden_layer_ids = set() + + all_layers = stub.get_layers() + for layer in all_layers: + if not layer.visible: + hidden_layer_ids.add(layer.id) + stub.hide_all_others_layers_ids([], layers=all_layers) + with photoshop.maintained_selection(): - self.log.info("Extracting %s" % str(list(instance))) - with photoshop.maintained_visibility(): - ids = set() - layer = instance.data.get("layer") - if layer: - ids.add(layer.id) - add_ids = instance.data.pop("ids", None) - if add_ids: - ids.update(set(add_ids)) - extract_ids = set([ll.id for ll in stub. - get_layers_in_layers_ids(ids)]) - stub.hide_all_others_layers_ids(extract_ids) + with photoshop.maintained_visibility(layers=all_layers): + for instance in context: + if instance.data["family"] not in self.families: + continue - file_basename = os.path.splitext( - stub.get_active_document_name() - )[0] - for extension in self.formats: - _filename = "{}.{}".format(file_basename, extension) - files[extension] = _filename + staging_dir = self.staging_dir(instance) + self.log.info("Outputting image to {}".format(staging_dir)) - full_filename = os.path.join(staging_dir, _filename) - stub.saveAs(full_filename, extension, True) - self.log.info(f"Extracted: {extension}") + # Perform extraction + files = {} + ids = set() + layer = instance.data.get("layer") + if layer: + ids.add(layer.id) + add_ids = instance.data.pop("ids", None) + if add_ids: + ids.update(set(add_ids)) + extract_ids = set([ll.id for ll in stub. 
+ get_layers_in_layers_ids(ids, all_layers) + if ll.id not in hidden_layer_ids]) - representations = [] - for extension, filename in files.items(): - representations.append({ - "name": extension, - "ext": extension, - "files": filename, - "stagingDir": staging_dir - }) - instance.data["representations"] = representations - instance.data["stagingDir"] = staging_dir + for extracted_id in extract_ids: + stub.set_visible(extracted_id, True) - self.log.info(f"Extracted {instance} to {staging_dir}") + file_basename = os.path.splitext( + stub.get_active_document_name() + )[0] + for extension in self.formats: + _filename = "{}.{}".format(file_basename, + extension) + files[extension] = _filename + + full_filename = os.path.join(staging_dir, + _filename) + stub.saveAs(full_filename, extension, True) + self.log.info(f"Extracted: {extension}") + + representations = [] + for extension, filename in files.items(): + representations.append({ + "name": extension, + "ext": extension, + "files": filename, + "stagingDir": staging_dir + }) + instance.data["representations"] = representations + instance.data["stagingDir"] = staging_dir + + self.log.info(f"Extracted {instance} to {staging_dir}") + + for extracted_id in extract_ids: + stub.set_visible(extracted_id, False) + + def staging_dir(self, instance): + """Provide a temporary directory in which to store extracted files. + + Upon calling this method the staging directory is stored inside + instance.data['stagingDir']. + """ + + from openpype.pipeline.publish import get_instance_staging_dir + + return get_instance_staging_dir(instance) diff --git a/openpype/hosts/photoshop/plugins/publish/extract_review.py b/openpype/hosts/photoshop/plugins/publish/extract_review.py index 5d37c86ed8..e5fee311f8 100644 --- a/openpype/hosts/photoshop/plugins/publish/extract_review.py +++ b/openpype/hosts/photoshop/plugins/publish/extract_review.py @@ -2,12 +2,15 @@ import os import shutil from PIL import Image -import openpype.api -import openpype.lib +from openpype.lib import ( + run_subprocess, + get_ffmpeg_tool_path, +) +from openpype.pipeline import publish from openpype.hosts.photoshop import api as photoshop -class ExtractReview(openpype.api.Extractor): +class ExtractReview(publish.Extractor): """Produce flattened or sequence image files from all 'image' instances. 
@@ -72,7 +75,7 @@ class ExtractReview(openpype.api.Extractor): }) processed_img_names = [img_list] - ffmpeg_path = openpype.lib.get_ffmpeg_tool_path("ffmpeg") + ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") instance.data["stagingDir"] = staging_dir @@ -93,7 +96,7 @@ class ExtractReview(openpype.api.Extractor): thumbnail_path ] self.log.debug("thumbnail args:: {}".format(args)) - output = openpype.lib.run_subprocess(args) + output = run_subprocess(args) instance.data["representations"].append({ "name": "thumbnail", @@ -116,7 +119,7 @@ class ExtractReview(openpype.api.Extractor): mov_path ] self.log.debug("mov args:: {}".format(args)) - output = openpype.lib.run_subprocess(args) + output = run_subprocess(args) self.log.debug(output) instance.data["representations"].append({ "name": "mov", diff --git a/openpype/hosts/photoshop/plugins/publish/extract_save_scene.py b/openpype/hosts/photoshop/plugins/publish/extract_save_scene.py index 03086f389f..aa900fec9f 100644 --- a/openpype/hosts/photoshop/plugins/publish/extract_save_scene.py +++ b/openpype/hosts/photoshop/plugins/publish/extract_save_scene.py @@ -1,11 +1,11 @@ -import openpype.api +from openpype.pipeline import publish from openpype.hosts.photoshop import api as photoshop -class ExtractSaveScene(openpype.api.Extractor): +class ExtractSaveScene(publish.Extractor): """Save scene before extraction.""" - order = openpype.api.Extractor.order - 0.49 + order = publish.Extractor.order - 0.49 label = "Extract Save Scene" hosts = ["photoshop"] families = ["workfile"] diff --git a/openpype/hosts/photoshop/plugins/publish/increment_workfile.py b/openpype/hosts/photoshop/plugins/publish/increment_workfile.py index 92132c393b..665dd67fc5 100644 --- a/openpype/hosts/photoshop/plugins/publish/increment_workfile.py +++ b/openpype/hosts/photoshop/plugins/publish/increment_workfile.py @@ -1,6 +1,6 @@ import os import pyblish.api -from openpype.action import get_errored_plugins_from_data +from openpype.pipeline.publish import get_errored_plugins_from_context from openpype.lib import version_up from openpype.hosts.photoshop import api as photoshop @@ -19,7 +19,7 @@ class IncrementWorkfile(pyblish.api.InstancePlugin): optional = True def process(self, instance): - errored_plugins = get_errored_plugins_from_data(instance.context) + errored_plugins = get_errored_plugins_from_context(instance.context) if errored_plugins: raise RuntimeError( "Skipping incrementing current file because publishing failed." 
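The ExtractImage rewrite above is mostly about minimizing round-trips to Photoshop: layers are fetched once via `stub.get_layers()` and the cached list is threaded through `hide_all_others_layers_ids`, `maintained_visibility` and `get_layers_in_layers_ids`. A self-contained mock of that caching pattern follows; the `FakeStub`/`FakeLayer` classes are illustrative stand-ins, not the real `PhotoshopServerStub`, and the call order is slightly simplified relative to the plugin.

```python
import contextlib


class FakeLayer:
    """Stand-in for a PSItem; only 'id' and 'visible' are used here."""
    def __init__(self, id_, visible=True):
        self.id = id_
        self.visible = visible


class FakeStub:
    """Mock mirroring the call shapes added in the diff."""
    def __init__(self, layers):
        self.layers = layers
        self.round_trips = 0

    def get_layers(self):
        self.round_trips += 1  # the expensive call the diff tries to do once
        return list(self.layers)

    def hide_all_others_layers_ids(self, extract_ids, layers=None):
        if not layers:
            layers = self.get_layers()
        for layer in layers:
            if layer.visible and layer.id not in extract_ids:
                layer.visible = False

    def set_visible(self, layer_id, state):
        for layer in self.layers:
            if layer.id == layer_id:
                layer.visible = state


@contextlib.contextmanager
def maintained_visibility(stub, layers=None):
    """Same caching idea as the lib.py change: accept pre-fetched layers."""
    if not layers:
        layers = stub.get_layers()
    visibility = {layer.id: layer.visible for layer in layers}
    try:
        yield
    finally:
        for layer in layers:
            layer.visible = visibility[layer.id]


stub = FakeStub([FakeLayer(1), FakeLayer(2), FakeLayer(3, visible=False)])
all_layers = stub.get_layers()  # one round-trip up front
with maintained_visibility(stub, layers=all_layers):
    stub.hide_all_others_layers_ids([], layers=all_layers)
    for extract_id in (1, 2):  # one publishable "instance" per layer here
        stub.set_visible(extract_id, True)   # expose only this instance
        # ... saveAs() would run per format here ...
        stub.set_visible(extract_id, False)  # hide again for the next one
assert stub.round_trips == 1  # layers were fetched exactly once
assert [layer.visible for layer in stub.layers] == [True, True, False]
```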
diff --git a/openpype/hosts/photoshop/plugins/publish/validate_instance_asset.py b/openpype/hosts/photoshop/plugins/publish/validate_instance_asset.py index b65f9d259f..2609f7a8cf 100644 --- a/openpype/hosts/photoshop/plugins/publish/validate_instance_asset.py +++ b/openpype/hosts/photoshop/plugins/publish/validate_instance_asset.py @@ -1,7 +1,7 @@ import pyblish.api -import openpype.api from openpype.pipeline import legacy_io +from openpype.pipeline.publish import ValidateContentsOrder from openpype.hosts.photoshop import api as photoshop @@ -45,7 +45,7 @@ class ValidateInstanceAsset(pyblish.api.InstancePlugin): label = "Validate Instance Asset" hosts = ["photoshop"] actions = [ValidateInstanceAssetRepair] - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder def process(self, instance): instance_asset = instance.data["asset"] diff --git a/openpype/hosts/photoshop/plugins/publish/validate_naming.py b/openpype/hosts/photoshop/plugins/publish/validate_naming.py index 8106d6ff16..0665aff9d0 100644 --- a/openpype/hosts/photoshop/plugins/publish/validate_naming.py +++ b/openpype/hosts/photoshop/plugins/publish/validate_naming.py @@ -1,10 +1,13 @@ import re import pyblish.api -import openpype.api -from openpype.pipeline import PublishXmlValidationError + from openpype.hosts.photoshop import api as photoshop from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishXmlValidationError, +) class ValidateNamingRepair(pyblish.api.Action): @@ -72,7 +75,7 @@ class ValidateNaming(pyblish.api.InstancePlugin): label = "Validate Naming" hosts = ["photoshop"] - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ["image"] actions = [ValidateNamingRepair] diff --git a/openpype/hosts/photoshop/plugins/publish/validate_unique_subsets.py b/openpype/hosts/photoshop/plugins/publish/validate_unique_subsets.py index 01f2323157..78e84729ce 100644 --- a/openpype/hosts/photoshop/plugins/publish/validate_unique_subsets.py +++ b/openpype/hosts/photoshop/plugins/publish/validate_unique_subsets.py @@ -1,7 +1,9 @@ import collections import pyblish.api -import openpype.api -from openpype.pipeline import PublishXmlValidationError +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishXmlValidationError, +) class ValidateSubsetUniqueness(pyblish.api.ContextPlugin): @@ -11,7 +13,7 @@ class ValidateSubsetUniqueness(pyblish.api.ContextPlugin): label = "Validate Subset Uniqueness" hosts = ["photoshop"] - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ["image"] def process(self, context): diff --git a/openpype/hosts/resolve/__init__.py b/openpype/hosts/resolve/__init__.py index e69de29bb2..b4a994bbaa 100644 --- a/openpype/hosts/resolve/__init__.py +++ b/openpype/hosts/resolve/__init__.py @@ -0,0 +1,6 @@ +from .addon import ResolveAddon + + +__all__ = ( + "ResolveAddon", +) diff --git a/openpype/hosts/resolve/addon.py b/openpype/hosts/resolve/addon.py new file mode 100644 index 0000000000..a31da52a6d --- /dev/null +++ b/openpype/hosts/resolve/addon.py @@ -0,0 +1,24 @@ +import os + +from openpype.modules import OpenPypeModule +from openpype.modules.interfaces import IHostAddon + +from .utils import RESOLVE_ROOT_DIR + + +class ResolveAddon(OpenPypeModule, IHostAddon): + name = "resolve" + host_name = "resolve" + + def initialize(self, module_settings): + self.enabled = True + + def get_launch_hook_paths(self, app): + if app.host_name != 
self.host_name: + return [] + return [ + os.path.join(RESOLVE_ROOT_DIR, "hooks") + ] + + def get_workfile_extensions(self): + return [".drp"] diff --git a/openpype/hosts/resolve/api/action.py b/openpype/hosts/resolve/api/action.py index d55a24a39a..ceedc2cc54 100644 --- a/openpype/hosts/resolve/api/action.py +++ b/openpype/hosts/resolve/api/action.py @@ -4,7 +4,7 @@ from __future__ import absolute_import import pyblish.api -from openpype.action import get_errored_instances_from_context +from openpype.pipeline.publish import get_errored_instances_from_context class SelectInvalidAction(pyblish.api.Action): diff --git a/openpype/hosts/resolve/utils.py b/openpype/hosts/resolve/utils.py index 382a7cf344..d5c133bbf5 100644 --- a/openpype/hosts/resolve/utils.py +++ b/openpype/hosts/resolve/utils.py @@ -17,7 +17,7 @@ def setup(env): # collect script dirs if us_env: - log.info(f"Utility Scripts Env: `{us_env}`") + log.info("Utility Scripts Env: `{}`".format(us_env)) us_paths = us_env.split( os.pathsep) + us_paths @@ -25,13 +25,13 @@ def setup(env): for path in us_paths: scripts.update({path: os.listdir(path)}) - log.info(f"Utility Scripts Dir: `{us_paths}`") - log.info(f"Utility Scripts: `{scripts}`") + log.info("Utility Scripts Dir: `{}`".format(us_paths)) + log.info("Utility Scripts: `{}`".format(scripts)) # make sure no script file is in folder for s in os.listdir(us_dir): path = os.path.join(us_dir, s) - log.info(f"Removing `{path}`...") + log.info("Removing `{}`...".format(path)) if os.path.isdir(path): shutil.rmtree(path, onerror=None) else: @@ -44,7 +44,7 @@ def setup(env): # script in script list src = os.path.join(d, s) dst = os.path.join(us_dir, s) - log.info(f"Copying `{src}` to `{dst}`...") + log.info("Copying `{}` to `{}`...".format(src, dst)) if os.path.isdir(src): shutil.copytree( src, dst, symlinks=False, diff --git a/openpype/hosts/standalonepublisher/plugins/publish/validate_editorial_resources.py b/openpype/hosts/standalonepublisher/plugins/publish/validate_editorial_resources.py index afb828474d..3d2b6d04ad 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/validate_editorial_resources.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/validate_editorial_resources.py @@ -1,6 +1,8 @@ import pyblish.api -import openpype.api -from openpype.pipeline import PublishXmlValidationError +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishXmlValidationError, +) class ValidateEditorialResources(pyblish.api.InstancePlugin): @@ -13,7 +15,7 @@ class ValidateEditorialResources(pyblish.api.InstancePlugin): # make sure it is enabled only if at least both families are available match = pyblish.api.Subset - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder def process(self, instance): self.log.debug( diff --git a/openpype/hosts/standalonepublisher/plugins/publish/validate_frame_ranges.py b/openpype/hosts/standalonepublisher/plugins/publish/validate_frame_ranges.py index ff7f60354e..074c62ea0e 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/validate_frame_ranges.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/validate_frame_ranges.py @@ -2,9 +2,11 @@ import re import pyblish.api -import openpype.api -from openpype.pipeline import PublishXmlValidationError from openpype.pipeline.context_tools import get_current_project_asset +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishXmlValidationError, +) class ValidateFrameRange(pyblish.api.InstancePlugin): @@ -13,7 +15,7 @@ class 
ValidateFrameRange(pyblish.api.InstancePlugin): label = "Validate Frame Range" hosts = ["standalonepublisher"] families = ["render"] - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder optional = True # published data might be sequence (.mov, .mp4) in that counting files diff --git a/openpype/hosts/standalonepublisher/plugins/publish/validate_shot_duplicates.py b/openpype/hosts/standalonepublisher/plugins/publish/validate_shot_duplicates.py index fe655f6b74..df04ae3b66 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/validate_shot_duplicates.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/validate_shot_duplicates.py @@ -1,14 +1,17 @@ import pyblish.api -import openpype.api -from openpype.pipeline import PublishXmlValidationError +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishXmlValidationError, +) + class ValidateShotDuplicates(pyblish.api.ContextPlugin): """Validating no duplicate names are in context.""" label = "Validate Shot Duplicates" hosts = ["standalonepublisher"] - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder def process(self, context): shot_names = [] diff --git a/openpype/hosts/standalonepublisher/plugins/publish/validate_simple_unreal_texture_naming.py b/openpype/hosts/standalonepublisher/plugins/publish/validate_simple_unreal_texture_naming.py index ef8da9f280..c123bef4f8 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/validate_simple_unreal_texture_naming.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/validate_simple_unreal_texture_naming.py @@ -1,16 +1,19 @@ # -*- coding: utf-8 -*- """Validator for correct file naming.""" -import pyblish.api -import openpype.api import re -from openpype.pipeline import PublishXmlValidationError +import pyblish.api + +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishXmlValidationError, +) class ValidateSimpleUnrealTextureNaming(pyblish.api.InstancePlugin): label = "Validate Unreal Texture Names" hosts = ["standalonepublisher"] families = ["simpleUnrealTexture"] - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder regex = "^T_{asset}.*" def process(self, instance): diff --git a/openpype/hosts/standalonepublisher/plugins/publish/validate_sources.py b/openpype/hosts/standalonepublisher/plugins/publish/validate_sources.py index 316f58988f..1782f53de2 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/validate_sources.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/validate_sources.py @@ -2,8 +2,10 @@ import os import pyblish.api -import openpype.api -from openpype.pipeline import PublishXmlValidationError +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishXmlValidationError, +) class ValidateSources(pyblish.api.InstancePlugin): @@ -13,7 +15,7 @@ class ValidateSources(pyblish.api.InstancePlugin): got deleted between starting of SP and now. 
""" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder label = "Check source files" optional = True # only for unforeseeable cases diff --git a/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_batch.py b/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_batch.py index d66fb257bb..44f69e48f7 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_batch.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_batch.py @@ -1,7 +1,9 @@ import pyblish.api -import openpype.api -from openpype.pipeline import PublishXmlValidationError +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishXmlValidationError, +) class ValidateTextureBatch(pyblish.api.InstancePlugin): @@ -9,7 +11,7 @@ class ValidateTextureBatch(pyblish.api.InstancePlugin): label = "Validate Texture Presence" hosts = ["standalonepublisher"] - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ["texture_batch_workfile"] optional = False diff --git a/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_has_workfile.py b/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_has_workfile.py index 0e67464f59..f489d37f59 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_has_workfile.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_has_workfile.py @@ -1,7 +1,9 @@ import pyblish.api -import openpype.api -from openpype.pipeline import PublishXmlValidationError +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishXmlValidationError, +) class ValidateTextureHasWorkfile(pyblish.api.InstancePlugin): @@ -12,7 +14,7 @@ class ValidateTextureHasWorkfile(pyblish.api.InstancePlugin): """ label = "Validate Texture Has Workfile" hosts = ["standalonepublisher"] - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ["textures"] optional = True diff --git a/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_name.py b/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_name.py index 751ad917ca..22f4a0eafc 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_name.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_name.py @@ -1,14 +1,16 @@ import pyblish.api -import openpype.api -from openpype.pipeline import PublishXmlValidationError +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishXmlValidationError, +) class ValidateTextureBatchNaming(pyblish.api.InstancePlugin): """Validates that all instances had properly formatted name.""" label = "Validate Texture Batch Naming" hosts = ["standalonepublisher"] - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ["texture_batch_workfile", "textures"] optional = False diff --git a/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_versions.py b/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_versions.py index 84d9def895..dab160d537 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_versions.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_versions.py @@ -1,7 +1,9 @@ import pyblish.api -import openpype.api -from openpype.pipeline import PublishXmlValidationError +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishXmlValidationError, +) class 
ValidateTextureBatchVersions(pyblish.api.InstancePlugin): @@ -14,7 +16,7 @@ class ValidateTextureBatchVersions(pyblish.api.InstancePlugin): """ label = "Validate Texture Batch Versions" hosts = ["standalonepublisher"] - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ["textures"] optional = False diff --git a/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_workfiles.py b/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_workfiles.py index fa492a80d8..56ea82f6b6 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_workfiles.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/validate_texture_workfiles.py @@ -1,7 +1,9 @@ import pyblish.api -import openpype.api -from openpype.pipeline import PublishXmlValidationError +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishXmlValidationError, +) class ValidateTextureBatchWorkfiles(pyblish.api.InstancePlugin): @@ -12,7 +14,7 @@ class ValidateTextureBatchWorkfiles(pyblish.api.InstancePlugin): label = "Validate Texture Workfile Has Resources" hosts = ["standalonepublisher"] - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ["texture_batch_workfile"] optional = True diff --git a/openpype/hosts/testhost/README.md b/openpype/hosts/testhost/README.md deleted file mode 100644 index f69e02a3b3..0000000000 --- a/openpype/hosts/testhost/README.md +++ /dev/null @@ -1,16 +0,0 @@ -# What is `testhost` -Host `testhost` was created to fake running host for testing of publisher. - -Does not have any proper launch mechanism at the moment. There is python script `./run_publish.py` which will show publisher window. The script requires to set few variables to run. Execution will register host `testhost`, register global publish plugins and register creator and publish plugins from `./plugins`. - -## Data -Created instances and context data are stored into json files inside `./api` folder. Can be easily modified to save them to a different place. - -## Plugins -Test host has few plugins to be able test publishing. - -### Creators -They are just example plugins using functions from `api` to create/remove/update data. One of them is auto creator which means that is triggered on each reset of create context. Others are manual creators both creating the same family. - -### Publishers -Collectors are example plugin to use `get_attribute_defs` to define attributes for specific families or for context. Validators are to test `PublishValidationError`. 
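Before the testhost removal below, note the pattern the validator and extractor hunks above all share: helpers move out of the flat `openpype.api` and `openpype.action` namespaces into `openpype.pipeline.publish`. A condensed before/after, using only names that actually appear in the hunks:

```python
# Before (deprecated flat namespaces):
#
#     import openpype.api
#     from openpype.action import get_errored_plugins_from_data
#
#     order = openpype.api.ValidateContentsOrder
#     actions = [openpype.api.RepairAction]

# After, as applied throughout this PR:
from openpype.pipeline.publish import (
    PublishXmlValidationError,
    RepairAction,
    ValidateContentsOrder,
    get_errored_plugins_from_context,  # renamed from get_errored_plugins_from_data
)
```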
diff --git a/openpype/hosts/testhost/__init__.py b/openpype/hosts/testhost/__init__.py deleted file mode 100644 index e69de29bb2..0000000000 diff --git a/openpype/hosts/testhost/api/__init__.py b/openpype/hosts/testhost/api/__init__.py deleted file mode 100644 index a929a891aa..0000000000 --- a/openpype/hosts/testhost/api/__init__.py +++ /dev/null @@ -1,43 +0,0 @@ -import os -import logging -import pyblish.api - -from openpype.pipeline import register_creator_plugin_path - -from .pipeline import ( - ls, - list_instances, - update_instances, - remove_instances, - get_context_data, - update_context_data, - get_context_title -) - - -HOST_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) -PLUGINS_DIR = os.path.join(HOST_DIR, "plugins") -PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish") -CREATE_PATH = os.path.join(PLUGINS_DIR, "create") - -log = logging.getLogger(__name__) - - -def install(): - log.info("OpenPype - Installing TestHost integration") - pyblish.api.register_host("testhost") - pyblish.api.register_plugin_path(PUBLISH_PATH) - register_creator_plugin_path(CREATE_PATH) - - -__all__ = ( - "ls", - "list_instances", - "update_instances", - "remove_instances", - "get_context_data", - "update_context_data", - "get_context_title", - - "install" -) diff --git a/openpype/hosts/testhost/api/context.json b/openpype/hosts/testhost/api/context.json deleted file mode 100644 index 0967ef424b..0000000000 --- a/openpype/hosts/testhost/api/context.json +++ /dev/null @@ -1 +0,0 @@ -{} diff --git a/openpype/hosts/testhost/api/instances.json b/openpype/hosts/testhost/api/instances.json deleted file mode 100644 index d955012514..0000000000 --- a/openpype/hosts/testhost/api/instances.json +++ /dev/null @@ -1,108 +0,0 @@ -[ - { - "id": "pyblish.avalon.instance", - "active": true, - "family": "test", - "subset": "testMyVariant", - "version": 1, - "asset": "sq01_sh0010", - "task": "Compositing", - "variant": "myVariant", - "instance_id": "a485f148-9121-46a5-8157-aa64df0fb449", - "creator_attributes": { - "number_key": 10, - "ha": 10 - }, - "publish_attributes": { - "CollectFtrackApi": { - "add_ftrack_family": false - } - }, - "creator_identifier": "test_one" - }, - { - "id": "pyblish.avalon.instance", - "active": true, - "family": "test", - "subset": "testMyVariant2", - "version": 1, - "asset": "sq01_sh0010", - "task": "Compositing", - "variant": "myVariant2", - "creator_attributes": {}, - "instance_id": "a485f148-9121-46a5-8157-aa64df0fb444", - "publish_attributes": { - "CollectFtrackApi": { - "add_ftrack_family": true - } - }, - "creator_identifier": "test_one" - }, - { - "id": "pyblish.avalon.instance", - "active": true, - "family": "test", - "subset": "testMain", - "version": 1, - "asset": "sq01_sh0010", - "task": "Compositing", - "variant": "Main", - "creator_attributes": {}, - "instance_id": "3607bc95-75f6-4648-a58d-e699f413d09f", - "publish_attributes": { - "CollectFtrackApi": { - "add_ftrack_family": true - } - }, - "creator_identifier": "test_two" - }, - { - "id": "pyblish.avalon.instance", - "active": true, - "family": "test", - "subset": "testMain2", - "version": 1, - "asset": "sq01_sh0020", - "task": "Compositing", - "variant": "Main2", - "instance_id": "4ccf56f6-9982-4837-967c-a49695dbe8eb", - "creator_attributes": {}, - "publish_attributes": { - "CollectFtrackApi": { - "add_ftrack_family": true - } - }, - "creator_identifier": "test_two" - }, - { - "id": "pyblish.avalon.instance", - "family": "test_three", - "subset": "test_threeMain2", - "active": true, - "version": 1, - 
"asset": "sq01_sh0020", - "task": "Compositing", - "variant": "Main2", - "instance_id": "4ccf56f6-9982-4837-967c-a49695dbe8ec", - "creator_attributes": {}, - "publish_attributes": { - "CollectFtrackApi": { - "add_ftrack_family": true - } - } - }, - { - "id": "pyblish.avalon.instance", - "family": "workfile", - "subset": "workfileMain", - "active": true, - "creator_identifier": "workfile", - "version": 1, - "asset": "Alpaca_01", - "task": "modeling", - "variant": "Main", - "instance_id": "7c9ddfc7-9f9c-4c1c-b233-38c966735fb6", - "creator_attributes": {}, - "publish_attributes": {} - } -] \ No newline at end of file diff --git a/openpype/hosts/testhost/api/pipeline.py b/openpype/hosts/testhost/api/pipeline.py deleted file mode 100644 index 1e05f336fb..0000000000 --- a/openpype/hosts/testhost/api/pipeline.py +++ /dev/null @@ -1,155 +0,0 @@ -import os -import json -from openpype.client import get_asset_by_name - - -class HostContext: - instances_json_path = None - context_json_path = None - - @classmethod - def get_context_title(cls): - project_name = os.environ.get("AVALON_PROJECT") - if not project_name: - return "TestHost" - - asset_name = os.environ.get("AVALON_ASSET") - if not asset_name: - return project_name - - asset_doc = get_asset_by_name( - project_name, asset_name, fields=["data.parents"] - ) - - parents = asset_doc.get("data", {}).get("parents") or [] - - hierarchy = [project_name] - hierarchy.extend(parents) - hierarchy.append("{}".format(asset_name)) - task_name = os.environ.get("AVALON_TASK") - if task_name: - hierarchy.append(task_name) - - return "/".join(hierarchy) - - @classmethod - def get_current_dir_filepath(cls, filename): - return os.path.join( - os.path.dirname(os.path.abspath(__file__)), - filename - ) - - @classmethod - def get_instances_json_path(cls): - if cls.instances_json_path is None: - cls.instances_json_path = cls.get_current_dir_filepath( - "instances.json" - ) - return cls.instances_json_path - - @classmethod - def get_context_json_path(cls): - if cls.context_json_path is None: - cls.context_json_path = cls.get_current_dir_filepath( - "context.json" - ) - return cls.context_json_path - - @classmethod - def add_instance(cls, instance): - instances = cls.get_instances() - instances.append(instance) - cls.save_instances(instances) - - @classmethod - def save_instances(cls, instances): - json_path = cls.get_instances_json_path() - with open(json_path, "w") as json_stream: - json.dump(instances, json_stream, indent=4) - - @classmethod - def get_instances(cls): - json_path = cls.get_instances_json_path() - if not os.path.exists(json_path): - instances = [] - with open(json_path, "w") as json_stream: - json.dump(json_stream, instances) - else: - with open(json_path, "r") as json_stream: - instances = json.load(json_stream) - return instances - - @classmethod - def get_context_data(cls): - json_path = cls.get_context_json_path() - if not os.path.exists(json_path): - data = {} - with open(json_path, "w") as json_stream: - json.dump(data, json_stream) - else: - with open(json_path, "r") as json_stream: - data = json.load(json_stream) - return data - - @classmethod - def save_context_data(cls, data): - json_path = cls.get_context_json_path() - with open(json_path, "w") as json_stream: - json.dump(data, json_stream, indent=4) - - -def ls(): - return [] - - -def list_instances(): - return HostContext.get_instances() - - -def update_instances(update_list): - updated_instances = {} - for instance, _changes in update_list: - updated_instances[instance.id] = 
instance.data_to_store() - - instances = HostContext.get_instances() - for instance_data in instances: - instance_id = instance_data["instance_id"] - if instance_id in updated_instances: - new_instance_data = updated_instances[instance_id] - old_keys = set(instance_data.keys()) - new_keys = set(new_instance_data.keys()) - instance_data.update(new_instance_data) - for key in (old_keys - new_keys): - instance_data.pop(key) - - HostContext.save_instances(instances) - - -def remove_instances(instances): - if not isinstance(instances, (tuple, list)): - instances = [instances] - - current_instances = HostContext.get_instances() - for instance in instances: - instance_id = instance.data["instance_id"] - found_idx = None - for idx, _instance in enumerate(current_instances): - if instance_id == _instance["instance_id"]: - found_idx = idx - break - - if found_idx is not None: - current_instances.pop(found_idx) - HostContext.save_instances(current_instances) - - -def get_context_data(): - return HostContext.get_context_data() - - -def update_context_data(data, changes): - HostContext.save_context_data(data) - - -def get_context_title(): - return HostContext.get_context_title() diff --git a/openpype/hosts/testhost/plugins/create/auto_creator.py b/openpype/hosts/testhost/plugins/create/auto_creator.py deleted file mode 100644 index 8d59fc3242..0000000000 --- a/openpype/hosts/testhost/plugins/create/auto_creator.py +++ /dev/null @@ -1,75 +0,0 @@ -from openpype.lib import NumberDef -from openpype.client import get_asset_by_name -from openpype.pipeline import ( - legacy_io, - AutoCreator, - CreatedInstance, -) -from openpype.hosts.testhost.api import pipeline - - -class MyAutoCreator(AutoCreator): - identifier = "workfile" - family = "workfile" - - def get_instance_attr_defs(self): - output = [ - NumberDef("number_key", label="Number") - ] - return output - - def collect_instances(self): - for instance_data in pipeline.list_instances(): - creator_id = instance_data.get("creator_identifier") - if creator_id == self.identifier: - subset_name = instance_data["subset"] - instance = CreatedInstance( - self.family, subset_name, instance_data, self - ) - self._add_instance_to_context(instance) - - def update_instances(self, update_list): - pipeline.update_instances(update_list) - - def create(self): - existing_instance = None - for instance in self.create_context.instances: - if instance.family == self.family: - existing_instance = instance - break - - variant = "Main" - project_name = legacy_io.Session["AVALON_PROJECT"] - asset_name = legacy_io.Session["AVALON_ASSET"] - task_name = legacy_io.Session["AVALON_TASK"] - host_name = legacy_io.Session["AVALON_APP"] - - if existing_instance is None: - asset_doc = get_asset_by_name(project_name, asset_name) - subset_name = self.get_subset_name( - variant, task_name, asset_doc, project_name, host_name - ) - data = { - "asset": asset_name, - "task": task_name, - "variant": variant - } - data.update(self.get_dynamic_data( - variant, task_name, asset_doc, project_name, host_name - )) - - new_instance = CreatedInstance( - self.family, subset_name, data, self - ) - self._add_instance_to_context(new_instance) - - elif ( - existing_instance["asset"] != asset_name - or existing_instance["task"] != task_name - ): - asset_doc = get_asset_by_name(project_name, asset_name) - subset_name = self.get_subset_name( - variant, task_name, asset_doc, project_name, host_name - ) - existing_instance["asset"] = asset_name - existing_instance["task"] = task_name diff --git 
a/openpype/hosts/testhost/plugins/create/test_creator_1.py b/openpype/hosts/testhost/plugins/create/test_creator_1.py deleted file mode 100644 index 7664276fa2..0000000000 --- a/openpype/hosts/testhost/plugins/create/test_creator_1.py +++ /dev/null @@ -1,94 +0,0 @@ -import json -from openpype import resources -from openpype.hosts.testhost.api import pipeline -from openpype.lib import ( - UISeparatorDef, - UILabelDef, - BoolDef, - NumberDef, - FileDef, -) -from openpype.pipeline import ( - Creator, - CreatedInstance, -) - - -class TestCreatorOne(Creator): - identifier = "test_one" - label = "test" - family = "test" - description = "Testing creator of testhost" - - create_allow_context_change = False - - def get_icon(self): - return resources.get_openpype_splash_filepath() - - def collect_instances(self): - for instance_data in pipeline.list_instances(): - creator_id = instance_data.get("creator_identifier") - if creator_id == self.identifier: - instance = CreatedInstance.from_existing( - instance_data, self - ) - self._add_instance_to_context(instance) - - def update_instances(self, update_list): - pipeline.update_instances(update_list) - - def remove_instances(self, instances): - pipeline.remove_instances(instances) - for instance in instances: - self._remove_instance_from_context(instance) - - def create(self, subset_name, data, pre_create_data): - print("Data that can be used in create:\n{}".format( - json.dumps(pre_create_data, indent=4) - )) - new_instance = CreatedInstance(self.family, subset_name, data, self) - pipeline.HostContext.add_instance(new_instance.data_to_store()) - self.log.info(new_instance.data) - self._add_instance_to_context(new_instance) - - def get_default_variants(self): - return [ - "myVariant", - "variantTwo", - "different_variant" - ] - - def get_instance_attr_defs(self): - output = [ - NumberDef("number_key", label="Number"), - ] - return output - - def get_pre_create_attr_defs(self): - output = [ - BoolDef("use_selection", label="Use selection"), - UISeparatorDef(), - UILabelDef("Testing label"), - FileDef("filepath", folders=True, label="Filepath"), - FileDef( - "filepath_2", multipath=True, folders=True, label="Filepath 2" - ) - ] - return output - - def get_detail_description(self): - return """# Relictus funes est Nyseides currusque nunc oblita - -## Causa sed - -Lorem markdownum posito consumptis, *plebe Amorque*, abstitimus rogatus fictaque -gladium Circe, nos? Bos aeternum quae. Utque me, si aliquem cladis, et vestigia -arbor, sic mea ferre lacrimae agantur prospiciens hactenus. Amanti dentes pete, -vos quid laudemque rastrorumque terras in gratantibus **radix** erat cedemus? - -Pudor tu ponderibus verbaque illa; ire ergo iam Venus patris certe longae -cruentum lecta, et quaeque. Sit doce nox. Anteit ad tempora magni plenaque et -videres mersit sibique auctor in tendunt mittit cunctos ventisque gravitate -volucris quemquam Aeneaden. Pectore Mensis somnus; pectora -[ferunt](http://www.mox.org/oculosbracchia)? Fertilitatis bella dulce et suum? 
- """ diff --git a/openpype/hosts/testhost/plugins/create/test_creator_2.py b/openpype/hosts/testhost/plugins/create/test_creator_2.py deleted file mode 100644 index f54adee8a2..0000000000 --- a/openpype/hosts/testhost/plugins/create/test_creator_2.py +++ /dev/null @@ -1,74 +0,0 @@ -from openpype.lib import NumberDef, TextDef -from openpype.hosts.testhost.api import pipeline -from openpype.pipeline import ( - Creator, - CreatedInstance, -) - - -class TestCreatorTwo(Creator): - identifier = "test_two" - label = "test" - family = "test" - description = "A second testing creator" - - def get_icon(self): - return "cube" - - def create(self, subset_name, data, pre_create_data): - new_instance = CreatedInstance(self.family, subset_name, data, self) - pipeline.HostContext.add_instance(new_instance.data_to_store()) - self.log.info(new_instance.data) - self._add_instance_to_context(new_instance) - - def collect_instances(self): - for instance_data in pipeline.list_instances(): - creator_id = instance_data.get("creator_identifier") - if creator_id == self.identifier: - instance = CreatedInstance.from_existing( - instance_data, self - ) - self._add_instance_to_context(instance) - - def update_instances(self, update_list): - pipeline.update_instances(update_list) - - def remove_instances(self, instances): - pipeline.remove_instances(instances) - for instance in instances: - self._remove_instance_from_context(instance) - - def get_instance_attr_defs(self): - output = [ - NumberDef("number_key"), - TextDef("text_key") - ] - return output - - def get_detail_description(self): - return """# Lorem ipsum, dolor sit amet. [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome) - -> A curated list of awesome lorem ipsum generators. - -Inspired by the [awesome](https://github.com/sindresorhus/awesome) list thing. - - -## Table of Contents - -- [Legend](#legend) -- [Practical](#briefcase-practical) -- [Whimsical](#roller_coaster-whimsical) - - [Animals](#rabbit-animals) - - [Eras](#tophat-eras) - - [Famous Individuals](#sunglasses-famous-individuals) - - [Music](#microphone-music) - - [Food and Drink](#pizza-food-and-drink) - - [Geographic and Dialects](#earth_africa-geographic-and-dialects) - - [Literature](#books-literature) - - [Miscellaneous](#cyclone-miscellaneous) - - [Sports and Fitness](#bicyclist-sports-and-fitness) - - [TV and Film](#movie_camera-tv-and-film) -- [Tools, Apps, and Extensions](#wrench-tools-apps-and-extensions) -- [Contribute](#contribute) -- [TODO](#todo) -""" diff --git a/openpype/hosts/testhost/plugins/publish/collect_context.py b/openpype/hosts/testhost/plugins/publish/collect_context.py deleted file mode 100644 index 0ab98fb84b..0000000000 --- a/openpype/hosts/testhost/plugins/publish/collect_context.py +++ /dev/null @@ -1,34 +0,0 @@ -import pyblish.api - -from openpype.pipeline import ( - OpenPypePyblishPluginMixin, - attribute_definitions -) - - -class CollectContextDataTestHost( - pyblish.api.ContextPlugin, OpenPypePyblishPluginMixin -): - """ - Collecting temp json data sent from a host context - and path for returning json data back to hostself. 
- """ - - label = "Collect Source - Test Host" - order = pyblish.api.CollectorOrder - 0.4 - hosts = ["testhost"] - - @classmethod - def get_attribute_defs(cls): - return [ - attribute_definitions.BoolDef( - "test_bool", - True, - label="Bool input" - ) - ] - - def process(self, context): - # get json paths from os and load them - for instance in context: - instance.data["source"] = "testhost" diff --git a/openpype/hosts/testhost/plugins/publish/collect_instance_1.py b/openpype/hosts/testhost/plugins/publish/collect_instance_1.py deleted file mode 100644 index c7241a15a8..0000000000 --- a/openpype/hosts/testhost/plugins/publish/collect_instance_1.py +++ /dev/null @@ -1,52 +0,0 @@ -import json -import pyblish.api - -from openpype.lib import attribute_definitions -from openpype.pipeline import OpenPypePyblishPluginMixin - - -class CollectInstanceOneTestHost( - pyblish.api.InstancePlugin, OpenPypePyblishPluginMixin -): - """ - Collecting temp json data sent from a host context - and path for returning json data back to hostself. - """ - - label = "Collect Instance 1 - Test Host" - order = pyblish.api.CollectorOrder - 0.3 - hosts = ["testhost"] - - @classmethod - def get_attribute_defs(cls): - return [ - attribute_definitions.NumberDef( - "version", - default=1, - minimum=1, - maximum=999, - decimals=0, - label="Version" - ) - ] - - def process(self, instance): - self._debug_log(instance) - - publish_attributes = instance.data.get("publish_attributes") - if not publish_attributes: - return - - values = publish_attributes.get(self.__class__.__name__) - if not values: - return - - instance.data["version"] = values["version"] - - def _debug_log(self, instance): - def _default_json(value): - return str(value) - - self.log.info( - json.dumps(instance.data, indent=4, default=_default_json) - ) diff --git a/openpype/hosts/testhost/plugins/publish/validate_context_with_error.py b/openpype/hosts/testhost/plugins/publish/validate_context_with_error.py deleted file mode 100644 index 46e996a569..0000000000 --- a/openpype/hosts/testhost/plugins/publish/validate_context_with_error.py +++ /dev/null @@ -1,57 +0,0 @@ -import pyblish.api -from openpype.pipeline import PublishValidationError - - -class ValidateInstanceAssetRepair(pyblish.api.Action): - """Repair the instance asset.""" - - label = "Repair" - icon = "wrench" - on = "failed" - - def process(self, context, plugin): - pass - - -description = """ -## Publish plugins - -### Validate Scene Settings - -#### Skip Resolution Check for Tasks - -Set regex pattern(s) to look for in a Task name to skip resolution check against values from DB. - -#### Skip Timeline Check for Tasks - -Set regex pattern(s) to look for in a Task name to skip `frameStart`, `frameEnd` check against values from DB. - -### AfterEffects Submit to Deadline - -* `Use Published scene` - Set to True (green) when Deadline should take published scene as a source instead of uploaded local one. -* `Priority` - priority of job on farm -* `Primary Pool` - here is list of pool fetched from server you can select from. -* `Secondary Pool` -* `Frames Per Task` - number of sequence division between individual tasks (chunks) -making one job on farm. -""" - - -class ValidateContextWithError(pyblish.api.ContextPlugin): - """Validate the instance asset is the current selected context asset. - - As it might happen that multiple worfiles are opened, switching - between them would mess with selected context. - In that case outputs might be output under wrong asset! 
- - Repair action will use Context asset value (from Workfiles or Launcher) - Closing and reopening with Workfiles will refresh Context value. - """ - - label = "Validate Context With Error" - hosts = ["testhost"] - actions = [ValidateInstanceAssetRepair] - order = pyblish.api.ValidatorOrder - - def process(self, context): - raise PublishValidationError("Crashing", "Context error", description) diff --git a/openpype/hosts/testhost/plugins/publish/validate_with_error.py b/openpype/hosts/testhost/plugins/publish/validate_with_error.py deleted file mode 100644 index 5a2888a8b0..0000000000 --- a/openpype/hosts/testhost/plugins/publish/validate_with_error.py +++ /dev/null @@ -1,57 +0,0 @@ -import pyblish.api -from openpype.pipeline import PublishValidationError - - -class ValidateInstanceAssetRepair(pyblish.api.Action): - """Repair the instance asset.""" - - label = "Repair" - icon = "wrench" - on = "failed" - - def process(self, context, plugin): - pass - - -description = """ -## Publish plugins - -### Validate Scene Settings - -#### Skip Resolution Check for Tasks - -Set regex pattern(s) to look for in a Task name to skip resolution check against values from DB. - -#### Skip Timeline Check for Tasks - -Set regex pattern(s) to look for in a Task name to skip `frameStart`, `frameEnd` check against values from DB. - -### AfterEffects Submit to Deadline - -* `Use Published scene` - Set to True (green) when Deadline should take published scene as a source instead of uploaded local one. -* `Priority` - priority of job on farm -* `Primary Pool` - here is list of pool fetched from server you can select from. -* `Secondary Pool` -* `Frames Per Task` - number of sequence division between individual tasks (chunks) -making one job on farm. -""" - - -class ValidateWithError(pyblish.api.InstancePlugin): - """Validate the instance asset is the current selected context asset. - - As it might happen that multiple worfiles are opened, switching - between them would mess with selected context. - In that case outputs might be output under wrong asset! - - Repair action will use Context asset value (from Workfiles or Launcher) - Closing and reopening with Workfiles will refresh Context value. 
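Both test validators follow the same pattern: raise PublishValidationError(message, title, description) so the publisher UI can present the markdown description next to the failed plugin and offer the attached repair action. A condensed sketch of that shape, with illustrative names:

    import pyblish.api
    from openpype.pipeline import PublishValidationError

    class ValidateExampleAsset(pyblish.api.InstancePlugin):
        # Minimal validator sketch mirroring the test plugins above.
        label = "Validate Example Asset"
        hosts = ["testhost"]
        order = pyblish.api.ValidatorOrder

        def process(self, instance):
            if not instance.data.get("asset"):
                # message, title and markdown description, in that order
                raise PublishValidationError(
                    "Instance has no asset set",
                    "Missing asset",
                    "## Missing asset\nSet the asset and publish again."
                )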
- """ - - label = "Validate With Error" - hosts = ["testhost"] - actions = [ValidateInstanceAssetRepair] - order = pyblish.api.ValidatorOrder - - def process(self, instance): - raise PublishValidationError("Crashing", "Instance error", description) diff --git a/openpype/hosts/testhost/run_publish.py b/openpype/hosts/testhost/run_publish.py deleted file mode 100644 index c7ad63aafd..0000000000 --- a/openpype/hosts/testhost/run_publish.py +++ /dev/null @@ -1,68 +0,0 @@ -import os -import sys - -mongo_url = "" -project_name = "" -asset_name = "" -task_name = "" -ftrack_url = "" -ftrack_username = "" -ftrack_api_key = "" - - -def multi_dirname(path, times=1): - for _ in range(times): - path = os.path.dirname(path) - return path - - -host_name = "testhost" -current_file = os.path.abspath(__file__) -openpype_dir = multi_dirname(current_file, 4) - -os.environ["OPENPYPE_MONGO"] = mongo_url -os.environ["OPENPYPE_ROOT"] = openpype_dir -os.environ["AVALON_PROJECT"] = project_name -os.environ["AVALON_ASSET"] = asset_name -os.environ["AVALON_TASK"] = task_name -os.environ["AVALON_APP"] = host_name -os.environ["OPENPYPE_DATABASE_NAME"] = "openpype" -os.environ["AVALON_TIMEOUT"] = "1000" -os.environ["AVALON_DB"] = "avalon" -os.environ["FTRACK_SERVER"] = ftrack_url -os.environ["FTRACK_API_USER"] = ftrack_username -os.environ["FTRACK_API_KEY"] = ftrack_api_key -for path in [ - openpype_dir, - r"{}\repos\avalon-core".format(openpype_dir), - r"{}\.venv\Lib\site-packages".format(openpype_dir) -]: - sys.path.append(path) - -from Qt import QtWidgets, QtCore - -from openpype.tools.publisher.window import PublisherWindow - - -def main(): - """Main function for testing purposes.""" - import pyblish.api - from openpype.pipeline import install_host - from openpype.modules import ModulesManager - from openpype.hosts.testhost import api as testhost - - manager = ModulesManager() - for plugin_path in manager.collect_plugin_paths()["publish"]: - pyblish.api.register_plugin_path(plugin_path) - - install_host(testhost) - - QtWidgets.QApplication.setAttribute(QtCore.Qt.AA_EnableHighDpiScaling) - app = QtWidgets.QApplication([]) - window = PublisherWindow() - window.show() - app.exec_() - - -if __name__ == "__main__": - main() diff --git a/openpype/hosts/traypublisher/plugins/publish/validate_frame_ranges.py b/openpype/hosts/traypublisher/plugins/publish/validate_frame_ranges.py index 947624100a..b962ea464a 100644 --- a/openpype/hosts/traypublisher/plugins/publish/validate_frame_ranges.py +++ b/openpype/hosts/traypublisher/plugins/publish/validate_frame_ranges.py @@ -2,10 +2,10 @@ import re import pyblish.api -import openpype.api -from openpype.pipeline import ( +from openpype.pipeline.publish import ( + ValidateContentsOrder, PublishXmlValidationError, - OptionalPyblishPluginMixin + OptionalPyblishPluginMixin, ) @@ -16,7 +16,7 @@ class ValidateFrameRange(OptionalPyblishPluginMixin, label = "Validate Frame Range" hosts = ["traypublisher"] families = ["render"] - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder optional = True # published data might be sequence (.mov, .mp4) in that counting files diff --git a/openpype/lib/__init__.py b/openpype/lib/__init__.py index adb857a056..17aafc3e8b 100644 --- a/openpype/lib/__init__.py +++ b/openpype/lib/__init__.py @@ -192,6 +192,8 @@ from .plugin_tools import ( ) from .path_tools import ( + format_file_size, + collect_frames, create_hard_link, version_up, get_version_from_path, @@ -353,6 +355,8 @@ __all__ = [ "set_plugin_attributes_from_settings", 
"source_hash", + "format_file_size", + "collect_frames", "create_hard_link", "version_up", "get_version_from_path", diff --git a/openpype/lib/avalon_context.py b/openpype/lib/avalon_context.py index 7d56d039d4..12f4a5198b 100644 --- a/openpype/lib/avalon_context.py +++ b/openpype/lib/avalon_context.py @@ -84,6 +84,7 @@ def deprecated(new_destination): return _decorator(func) +@deprecated("openpype.client.operations.create_project") def create_project( project_name, project_code, library_project=False, dbcon=None ): @@ -107,59 +108,14 @@ def create_project( Returns: dict: Created project document. + + Deprecated: + Function will be removed after release version 3.16.* """ - from openpype.settings import ProjectSettings, SaveWarningExc - from openpype.pipeline import AvalonMongoDB - from openpype.pipeline.schema import validate + from openpype.client.operations import create_project - if get_project(project_name, fields=["name"]): - raise ValueError("Project with name \"{}\" already exists".format( - project_name - )) - - if dbcon is None: - dbcon = AvalonMongoDB() - - if not PROJECT_NAME_REGEX.match(project_name): - raise ValueError(( - "Project name \"{}\" contain invalid characters" - ).format(project_name)) - - database = dbcon.database - project_doc = { - "type": "project", - "name": project_name, - "data": { - "code": project_code, - "library_project": library_project - }, - "schema": CURRENT_DOC_SCHEMAS["project"] - } - # Insert document with basic data - database[project_name].insert_one(project_doc) - # Load ProjectSettings for the project and save it to store all attributes - # and Anatomy - try: - project_settings_entity = ProjectSettings(project_name) - project_settings_entity.save() - except SaveWarningExc as exc: - print(str(exc)) - except Exception: - database[project_name].delete_one({"type": "project"}) - raise - - project_doc = get_project(project_name) - - try: - # Validate created project document - validate(project_doc) - except Exception: - # Remove project if is not valid - database[project_name].delete_one({"type": "project"}) - raise - - return project_doc + return create_project(project_name, project_code, library_project) def with_pipeline_io(func): @@ -236,6 +192,7 @@ def get_system_general_anatomy_data(system_settings=None): return get_general_template_data(system_settings) +@deprecated("openpype.client.get_linked_asset_ids") def get_linked_asset_ids(asset_doc): """Return linked asset ids for `asset_doc` from DB @@ -244,26 +201,20 @@ def get_linked_asset_ids(asset_doc): Returns: (list): MongoDB ids of input links. + + Deprecated: + Function will be removed after release version 3.16.* """ - output = [] - if not asset_doc: - return output - input_links = asset_doc["data"].get("inputLinks") or [] - if input_links: - for item in input_links: - # Backwards compatibility for "_id" key which was replaced with - # "id" - if "_id" in item: - link_id = item["_id"] - else: - link_id = item["id"] - output.append(link_id) + from openpype.client import get_linked_asset_ids + from openpype.pipeline import legacy_io - return output + project_name = legacy_io.active_project() + + return get_linked_asset_ids(project_name, asset_doc=asset_doc) -@with_pipeline_io +@deprecated("openpype.client.get_linked_assets") def get_linked_assets(asset_doc): """Return linked assets for `asset_doc` from DB @@ -272,14 +223,17 @@ def get_linked_assets(asset_doc): Returns: (list) Asset documents of input links for passed asset doc. 
+ + Deprecated: + Function will be removed after release version 3.15.* """ - link_ids = get_linked_asset_ids(asset_doc) - if not link_ids: - return [] + from openpype.pipeline import legacy_io + from openpype.client import get_linked_assets project_name = legacy_io.active_project() - return list(get_assets(project_name, link_ids)) + + return get_linked_assets(project_name, asset_doc=asset_doc) @deprecated("openpype.client.get_last_version_by_subset_name") @@ -1085,9 +1039,10 @@ def get_last_workfile( ) -@with_pipeline_io -def get_linked_ids_for_representations(project_name, repre_ids, dbcon=None, - link_type=None, max_depth=0): +@deprecated("openpype.client.get_linked_representation_id") +def get_linked_ids_for_representations( + project_name, repre_ids, dbcon=None, link_type=None, max_depth=0 +): """Returns list of linked ids of particular type (if provided). Goes from representations to version, back to representations @@ -1098,104 +1053,25 @@ def get_linked_ids_for_representations(project_name, repre_ids, dbcon=None, with Session. link_type (str): ['reference', '..] max_depth (int): limit how many levels of recursion + Returns: (list) of ObjectId - linked representations + + Deprecated: + Function will be removed after release version 3.16.* """ - # Create new dbcon if not passed and use passed project name - if not dbcon: - from openpype.pipeline import AvalonMongoDB - dbcon = AvalonMongoDB() - dbcon.Session["AVALON_PROJECT"] = project_name - # Validate that passed dbcon has same project - elif dbcon.Session["AVALON_PROJECT"] != project_name: - raise ValueError("Passed connection does not have right project") + + from openpype.client import get_linked_representation_id if not isinstance(repre_ids, list): repre_ids = [repre_ids] - version_ids = dbcon.distinct("parent", { - "_id": {"$in": repre_ids}, - "type": "representation" - }) - - match = { - "_id": {"$in": version_ids}, - "type": "version" - } - - graph_lookup = { - "from": project_name, - "startWith": "$data.inputLinks.id", - "connectFromField": "data.inputLinks.id", - "connectToField": "_id", - "as": "outputs_recursive", - "depthField": "depth" - } - if max_depth != 0: - # We offset by -1 since 0 basically means no recursion - # but the recursion only happens after the initial lookup - # for outputs. - graph_lookup["maxDepth"] = max_depth - 1 - - pipeline_ = [ - # Match - {"$match": match}, - # Recursive graph lookup for inputs - {"$graphLookup": graph_lookup} - ] - - result = dbcon.aggregate(pipeline_) - referenced_version_ids = _process_referenced_pipeline_result(result, - link_type) - - ref_ids = dbcon.distinct( - "_id", - filter={ - "parent": {"$in": list(referenced_version_ids)}, - "type": "representation" - } - ) - - return list(ref_ids) - - -def _process_referenced_pipeline_result(result, link_type): - """Filters result from pipeline for particular link_type. - - Pipeline cannot use link_type directly in a query. 
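The replacement helpers in openpype.client take the project name explicitly instead of reading it from the active session. A migration sketch, assuming an active legacy_io session; the asset name and representation id are illustrative placeholders:

    from openpype.client import (
        get_asset_by_name,
        get_linked_asset_ids,
        get_linked_assets,
        get_linked_representation_id,
    )
    from openpype.pipeline import legacy_io

    project_name = legacy_io.active_project()
    asset_doc = get_asset_by_name(project_name, "sq01_sh0010")

    # Replaces avalon_context.get_linked_asset_ids(asset_doc)
    link_ids = get_linked_asset_ids(project_name, asset_doc=asset_doc)

    # Replaces avalon_context.get_linked_assets(asset_doc)
    linked_docs = get_linked_assets(project_name, asset_doc=asset_doc)

    # Replaces get_linked_ids_for_representations(project_name, repre_ids)
    repre_id = "<existing representation id>"  # placeholder value
    linked_repre_ids = get_linked_representation_id(
        project_name, repre_id=repre_id, link_type="reference", max_depth=0
    )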
- Returns: - (list) - """ - referenced_version_ids = set() - correctly_linked_ids = set() - for item in result: - input_links = item["data"].get("inputLinks", []) - correctly_linked_ids = _filter_input_links(input_links, - link_type, - correctly_linked_ids) - - # outputs_recursive in random order, sort by depth - outputs_recursive = sorted(item.get("outputs_recursive", []), - key=lambda d: d["depth"]) - - for output in outputs_recursive: - if output["_id"] not in correctly_linked_ids: # leaf - continue - - correctly_linked_ids = _filter_input_links( - output["data"].get("inputLinks", []), - link_type, - correctly_linked_ids) - - referenced_version_ids.add(output["_id"]) - - return referenced_version_ids - - -def _filter_input_links(input_links, link_type, correctly_linked_ids): - for input_link in input_links: - if not link_type or input_link["type"] == link_type: - correctly_linked_ids.add(input_link.get("id") or - input_link.get("_id")) # legacy - - return correctly_linked_ids + output = [] + for repre_id in repre_ids: + output.extend(get_linked_representation_id( + project_name, + repre_id=repre_id, + link_type=link_type, + max_depth=max_depth + )) + return output diff --git a/openpype/lib/delivery.py b/openpype/lib/delivery.py index ffcfe9fa4d..efb542de75 100644 --- a/openpype/lib/delivery.py +++ b/openpype/lib/delivery.py @@ -1,81 +1,113 @@ """Functions useful for delivery action or loader""" import os import shutil -import glob -import clique -import collections - -from .path_templates import ( - StringTemplate, - TemplateUnsolved, -) +import functools +import warnings +class DeliveryDeprecatedWarning(DeprecationWarning): + pass + + +def deprecated(new_destination): + """Mark functions as deprecated. + + It will result in a warning being emitted when the function is used. + """ + + func = None + if callable(new_destination): + func = new_destination + new_destination = None + + def _decorator(decorated_func): + if new_destination is None: + warning_message = ( + " Please check content of deprecated function to figure out" + " possible replacement." + ) + else: + warning_message = " Please replace your usage with '{}'.".format( + new_destination + ) + + @functools.wraps(decorated_func) + def wrapper(*args, **kwargs): + warnings.simplefilter("always", DeliveryDeprecatedWarning) + warnings.warn( + ( + "Call to deprecated function '{}'" + "\nFunction was moved or removed.{}" + ).format(decorated_func.__name__, warning_message), + category=DeliveryDeprecatedWarning, + stacklevel=4 + ) + return decorated_func(*args, **kwargs) + return wrapper + + if func is None: + return _decorator + return _decorator(func) + + +@deprecated("openpype.lib.path_tools.collect_frames") def collect_frames(files): + """Returns dict of source path and its frame, if from sequence + + Uses clique as most precise solution, used when anatomy template that + created files is not known. + + Assumption is that frames are separated by '.', negative frames are not + allowed. + + Args: + files(list) or (set with single value): list of source paths + + Returns: + (dict): {'/asset/subset_v001.0001.png': '0001', ....} + + Deprecated: + Function was moved to different location and will be removed + after 3.16.* release. """ - Returns dict of source path and its frame, if from sequence - Uses clique as most precise solution, used when anatomy template that - created files is not known. + from .path_tools import collect_frames - Assumption is that frames are separated by '.', negative frames are not - allowed. 
+ return collect_frames(files) - Args: - files(list) or (set with single value): list of source paths - Returns: - (dict): {'/asset/subset_v001.0001.png': '0001', ....} + +@deprecated("openpype.lib.path_tools.format_file_size") +def sizeof_fmt(num, suffix=None): + """Returns formatted string with size in appropriate unit + + Deprecated: + Function was moved to different location and will be removed + after 3.16.* release. """ - patterns = [clique.PATTERNS["frames"]] - collections, remainder = clique.assemble(files, minimum_items=1, - patterns=patterns) - sources_and_frames = {} - if collections: - for collection in collections: - src_head = collection.head - src_tail = collection.tail - - for index in collection.indexes: - src_frame = collection.format("{padding}") % index - src_file_name = "{}{}{}".format(src_head, src_frame, - src_tail) - sources_and_frames[src_file_name] = src_frame - else: - sources_and_frames[remainder.pop()] = None - - return sources_and_frames - - -def sizeof_fmt(num, suffix='B'): - """Returns formatted string with size in appropriate unit""" - for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']: - if abs(num) < 1024.0: - return "%3.1f%s%s" % (num, unit, suffix) - num /= 1024.0 - return "%.1f%s%s" % (num, 'Yi', suffix) + from .path_tools import format_file_size + return format_file_size(num, suffix) +@deprecated("openpype.pipeline.load.get_representation_path_with_anatomy") def path_from_representation(representation, anatomy): - try: - template = representation["data"]["template"] + """Get representation path using representation document and anatomy. - except KeyError: - return None + Args: + representation (Dict[str, Any]): Representation document. + anatomy (Anatomy): Project anatomy. - try: - context = representation["context"] - context["root"] = anatomy.roots - path = StringTemplate.format_strict_template(template, context) - return os.path.normpath(path) + Deprecated: + Function was moved to different location and will be removed + after 3.16.* release. + """ - except TemplateUnsolved: - # Template references unavailable data - return None + from openpype.pipeline.load import get_representation_path_with_anatomy - return path + return get_representation_path_with_anatomy(representation, anatomy) +@deprecated def copy_file(src_path, dst_path): """Hardlink file if possible(to save space), copy if not""" from openpype.lib import create_hard_link # safer importing @@ -91,131 +123,96 @@ def copy_file(src_path, dst_path): shutil.copyfile(src_path, dst_path) +@deprecated("openpype.pipeline.delivery.get_format_dict") def get_format_dict(anatomy, location_path): """Returns replaced root values from user provider value. - Args: - anatomy (Anatomy) - location_path (str): user provided value - Returns: - (dict): prepared for formatting of a template + Args: + anatomy (Anatomy) + location_path (str): user provided value + + Returns: + (dict): prepared for formatting of a template + + Deprecated: + Function was moved to different location and will be removed + after 3.16.* release. 
""" - format_dict = {} - if location_path: - location_path = location_path.replace("\\", "/") - root_names = anatomy.root_names_from_templates( - anatomy.templates["delivery"] - ) - if root_names is None: - format_dict["root"] = location_path - else: - format_dict["root"] = {} - for name in root_names: - format_dict["root"][name] = location_path - return format_dict + + from openpype.pipeline.delivery import get_format_dict + + return get_format_dict(anatomy, location_path) +@deprecated("openpype.pipeline.delivery.check_destination_path") def check_destination_path(repre_id, anatomy, anatomy_data, datetime_data, template_name): """ Try to create destination path based on 'template_name'. - In the case that path cannot be filled, template contains unmatched - keys, provide error message to filter out repre later. + In the case that path cannot be filled, template contains unmatched + keys, provide error message to filter out repre later. - Args: - anatomy (Anatomy) - anatomy_data (dict): context to fill anatomy - datetime_data (dict): values with actual date - template_name (str): to pick correct delivery template - Returns: - (collections.defauldict): {"TYPE_OF_ERROR":"ERROR_DETAIL"} + Args: + anatomy (Anatomy) + anatomy_data (dict): context to fill anatomy + datetime_data (dict): values with actual date + template_name (str): to pick correct delivery template + + Returns: + (collections.defauldict): {"TYPE_OF_ERROR":"ERROR_DETAIL"} + + Deprecated: + Function was moved to different location and will be removed + after 3.16.* release. """ - anatomy_data.update(datetime_data) - anatomy_filled = anatomy.format_all(anatomy_data) - dest_path = anatomy_filled["delivery"][template_name] - report_items = collections.defaultdict(list) - if not dest_path.solved: - msg = ( - "Missing keys in Representation's context" - " for anatomy template \"{}\"." - ).format(template_name) + from openpype.pipeline.delivery import check_destination_path - sub_msg = ( - "Representation: {}
" - ).format(repre_id) - - if dest_path.missing_keys: - keys = ", ".join(dest_path.missing_keys) - sub_msg += ( - "- Missing keys: \"{}\"
" - ).format(keys) - - if dest_path.invalid_types: - items = [] - for key, value in dest_path.invalid_types.items(): - items.append("\"{}\" {}".format(key, str(value))) - - keys = ", ".join(items) - sub_msg += ( - "- Invalid value DataType: \"{}\"
" - ).format(keys) - - report_items[msg].append(sub_msg) - - return report_items + return check_destination_path( + repre_id, + anatomy, + anatomy_data, + datetime_data, + template_name + ) +@deprecated("openpype.pipeline.delivery.deliver_single_file") def process_single_file( src_path, repre, anatomy, template_name, anatomy_data, format_dict, report_items, log ): """Copy single file to calculated path based on template - Args: - src_path(str): path of source representation file - _repre (dict): full repre, used only in process_sequence, here only - as to share same signature - anatomy (Anatomy) - template_name (string): user selected delivery template name - anatomy_data (dict): data from repre to fill anatomy with - format_dict (dict): root dictionary with names and values - report_items (collections.defaultdict): to return error messages - log (Logger): for log printing - Returns: - (collections.defaultdict , int) + Args: + src_path(str): path of source representation file + _repre (dict): full repre, used only in process_sequence, here only + as to share same signature + anatomy (Anatomy) + template_name (string): user selected delivery template name + anatomy_data (dict): data from repre to fill anatomy with + format_dict (dict): root dictionary with names and values + report_items (collections.defaultdict): to return error messages + log (Logger): for log printing + + Returns: + (collections.defaultdict , int) + + Deprecated: + Function was moved to different location and will be removed + after 3.16.* release. """ - # Make sure path is valid for all platforms - src_path = os.path.normpath(src_path.replace("\\", "/")) - if not os.path.exists(src_path): - msg = "{} doesn't exist for {}".format(src_path, repre["_id"]) - report_items["Source file was not found"].append(msg) - return report_items, 0 + from openpype.pipeline.delivery import deliver_single_file - anatomy_filled = anatomy.format(anatomy_data) - if format_dict: - template_result = anatomy_filled["delivery"][template_name] - delivery_path = template_result.rootless.format(**format_dict) - else: - delivery_path = anatomy_filled["delivery"][template_name] - - # Backwards compatibility when extension contained `.` - delivery_path = delivery_path.replace("..", ".") - # Make sure path is valid for all platforms - delivery_path = os.path.normpath(delivery_path.replace("\\", "/")) - - delivery_folder = os.path.dirname(delivery_path) - if not os.path.exists(delivery_folder): - os.makedirs(delivery_folder) - - log.debug("Copying single: {} -> {}".format(src_path, delivery_path)) - copy_file(src_path, delivery_path) - - return report_items, 1 + return deliver_single_file( + src_path, repre, anatomy, template_name, anatomy_data, format_dict, + report_items, log + ) +@deprecated("openpype.pipeline.delivery.deliver_sequence") def process_sequence( src_path, repre, anatomy, template_name, anatomy_data, format_dict, report_items, log @@ -223,128 +220,33 @@ def process_sequence( """ For Pype2(mainly - works in 3 too) where representation might not contain files. - Uses listing physical files (not 'files' on repre as a)might not be - present, b)might not be reliable for representation and copying them. + Uses listing physical files (not 'files' on repre as a)might not be + present, b)might not be reliable for representation and copying them. - TODO Should be refactored when files are sufficient to drive all - representations. + TODO Should be refactored when files are sufficient to drive all + representations. 
- Args: - src_path(str): path of source representation file - repre (dict): full representation - anatomy (Anatomy) - template_name (string): user selected delivery template name - anatomy_data (dict): data from repre to fill anatomy with - format_dict (dict): root dictionary with names and values - report_items (collections.defaultdict): to return error messages - log (Logger): for log printing - Returns: - (collections.defaultdict , int) + Args: + src_path(str): path of source representation file + repre (dict): full representation + anatomy (Anatomy) + template_name (string): user selected delivery template name + anatomy_data (dict): data from repre to fill anatomy with + format_dict (dict): root dictionary with names and values + report_items (collections.defaultdict): to return error messages + log (Logger): for log printing + + Returns: + (collections.defaultdict , int) + + Deprecated: + Function was moved to different location and will be removed + after 3.16.* release. """ - src_path = os.path.normpath(src_path.replace("\\", "/")) - def hash_path_exist(myPath): - res = myPath.replace('#', '*') - glob_search_results = glob.glob(res) - if len(glob_search_results) > 0: - return True - return False + from openpype.pipeline.delivery import deliver_sequence - if not hash_path_exist(src_path): - msg = "{} doesn't exist for {}".format(src_path, - repre["_id"]) - report_items["Source file was not found"].append(msg) - return report_items, 0 - - delivery_templates = anatomy.templates.get("delivery") or {} - delivery_template = delivery_templates.get(template_name) - if delivery_template is None: - msg = ( - "Delivery template \"{}\" in anatomy of project \"{}\"" - " was not found" - ).format(template_name, anatomy.project_name) - report_items[""].append(msg) - return report_items, 0 - - # Check if 'frame' key is available in template which is required - # for sequence delivery - if "{frame" not in delivery_template: - msg = ( - "Delivery template \"{}\" in anatomy of project \"{}\"" - "does not contain '{{frame}}' key to fill. Delivery of sequence" - " can't be processed." - ).format(template_name, anatomy.project_name) - report_items[""].append(msg) - return report_items, 0 - - dir_path, file_name = os.path.split(str(src_path)) - - context = repre["context"] - ext = context.get("ext", context.get("representation")) - - if not ext: - msg = "Source extension not found, cannot find collection" - report_items[msg].append(src_path) - log.warning("{} <{}>".format(msg, context)) - return report_items, 0 - - ext = "." 
+ ext - # context.representation could be .psd - ext = ext.replace("..", ".") - - src_collections, remainder = clique.assemble(os.listdir(dir_path)) - src_collection = None - for col in src_collections: - if col.tail != ext: - continue - - src_collection = col - break - - if src_collection is None: - msg = "Source collection of files was not found" - report_items[msg].append(src_path) - log.warning("{} <{}>".format(msg, src_path)) - return report_items, 0 - - frame_indicator = "@####@" - - anatomy_data["frame"] = frame_indicator - anatomy_filled = anatomy.format(anatomy_data) - - if format_dict: - template_result = anatomy_filled["delivery"][template_name] - delivery_path = template_result.rootless.format(**format_dict) - else: - delivery_path = anatomy_filled["delivery"][template_name] - - delivery_path = os.path.normpath(delivery_path.replace("\\", "/")) - delivery_folder = os.path.dirname(delivery_path) - dst_head, dst_tail = delivery_path.split(frame_indicator) - dst_padding = src_collection.padding - dst_collection = clique.Collection( - head=dst_head, - tail=dst_tail, - padding=dst_padding + return deliver_sequence( + src_path, repre, anatomy, template_name, anatomy_data, format_dict, + report_items, log ) - - if not os.path.exists(delivery_folder): - os.makedirs(delivery_folder) - - src_head = src_collection.head - src_tail = src_collection.tail - uploaded = 0 - for index in src_collection.indexes: - src_padding = src_collection.format("{padding}") % index - src_file_name = "{}{}{}".format(src_head, src_padding, src_tail) - src = os.path.normpath( - os.path.join(dir_path, src_file_name) - ) - - dst_padding = dst_collection.format("{padding}") % index - dst = "{}{}{}".format(dst_head, dst_padding, dst_tail) - log.debug("Copying single: {} -> {}".format(src, dst)) - copy_file(src, dst) - uploaded += 1 - - return report_items, uploaded diff --git a/openpype/lib/path_tools.py b/openpype/lib/path_tools.py index 4f28be3302..0b6d0a3391 100644 --- a/openpype/lib/path_tools.py +++ b/openpype/lib/path_tools.py @@ -1,19 +1,81 @@ import os import re -import abc -import json import logging -import six import platform +import functools +import warnings -from openpype.client import get_project -from openpype.settings import get_project_settings - -from .profiles_filtering import filter_profiles +import clique log = logging.getLogger(__name__) +class PathToolsDeprecatedWarning(DeprecationWarning): + pass + + +def deprecated(new_destination): + """Mark functions as deprecated. + + It will result in a warning being emitted when the function is used. + """ + + func = None + if callable(new_destination): + func = new_destination + new_destination = None + + def _decorator(decorated_func): + if new_destination is None: + warning_message = ( + " Please check content of deprecated function to figure out" + " possible replacement." + ) + else: + warning_message = " Please replace your usage with '{}'.".format( + new_destination + ) + + @functools.wraps(decorated_func) + def wrapper(*args, **kwargs): + warnings.simplefilter("always", PathToolsDeprecatedWarning) + warnings.warn( + ( + "Call to deprecated function '{}'" + "\nFunction was moved or removed.{}" + ).format(decorated_func.__name__, warning_message), + category=PathToolsDeprecatedWarning, + stacklevel=4 + ) + return decorated_func(*args, **kwargs) + return wrapper + + if func is None: + return _decorator + return _decorator(func) + + +def format_file_size(file_size, suffix=None): + """Returns formatted string with size in appropriate unit. 
+ + Args: + file_size (int): Size of file in bytes. + suffix (str): Suffix for formatted size. Default is 'B' (as bytes). + + Returns: + str: Formatted size using proper unit and passed suffix (e.g. 7 MiB). + """ + + if suffix is None: + suffix = "B" + + for unit in ["", "Ki", "Mi", "Gi", "Ti", "Pi", "Ei", "Zi"]: + if abs(file_size) < 1024.0: + return "%3.1f%s%s" % (file_size, unit, suffix) + file_size /= 1024.0 + return "%.1f%s%s" % (file_size, "Yi", suffix) + + def create_hard_link(src_path, dst_path): """Create hardlink of file. @@ -50,6 +112,43 @@ def create_hard_link(src_path, dst_path): ) +def collect_frames(files): + """Returns dict of source path and its frame, if from sequence + + Uses clique as most precise solution, used when anatomy template that + created files is not known. + + Assumption is that frames are separated by '.', negative frames are not + allowed. + + Args: + files(list) or (set with single value): list of source paths + + Returns: + (dict): {'/asset/subset_v001.0001.png': '0001', ....} + """ + + patterns = [clique.PATTERNS["frames"]] + collections, remainder = clique.assemble( + files, minimum_items=1, patterns=patterns) + + sources_and_frames = {} + if collections: + for collection in collections: + src_head = collection.head + src_tail = collection.tail + + for index in collection.indexes: + src_frame = collection.format("{padding}") % index + src_file_name = "{}{}{}".format( + src_head, src_frame, src_tail) + sources_and_frames[src_file_name] = src_frame + else: + sources_and_frames[remainder.pop()] = None + + return sources_and_frames + + def _rreplace(s, a, b, n=1): """Replace a with b in string s from right side n times.""" return b.join(s.rsplit(a, n)) @@ -119,12 +218,12 @@ def get_version_from_path(file): """Find version number in file path string. Args: - file (string): file path + file (str): file path Returns: - v: version number in string ('001') - + str: version number in string ('001') """ + pattern = re.compile(r"[\._]v([0-9]+)", re.IGNORECASE) try: return pattern.findall(file)[-1] @@ -140,16 +239,17 @@ def get_last_version_from_path(path_dir, filter): """Find last version of given directory content. 
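The three helpers above are easy to exercise directly; a short usage sketch with illustrative paths:

    from openpype.lib import (
        collect_frames,
        format_file_size,
        get_version_from_path,
    )

    print(format_file_size(7 * 1024 * 1024))
    # -> 7.0MiB

    print(get_version_from_path("shot010_compositing_v012.nk"))
    # -> 012

    files = [
        "/asset/subset_v001.0001.png",
        "/asset/subset_v001.0002.png",
    ]
    print(collect_frames(files))
    # -> {'/asset/subset_v001.0001.png': '0001',
    #     '/asset/subset_v001.0002.png': '0002'}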
Args: - path_dir (string): directory path + path_dir (str): directory path filter (list): list of strings used as file name filter Returns: - string: file name with last version + str: file name with last version Example: last_version_file = get_last_version_from_path( "/project/shots/shot01/work", ["shot01", "compositing", "nk"]) """ + assert os.path.isdir(path_dir), "`path_dir` argument needs to be directory" assert isinstance(filter, list) and ( len(filter) != 0), "`filter` argument needs to be list and not empty" @@ -171,107 +271,69 @@ def get_last_version_from_path(path_dir, filter): return None +@deprecated("openpype.pipeline.project_folders.concatenate_splitted_paths") def concatenate_splitted_paths(split_paths, anatomy): - pattern_array = re.compile(r"\[.*\]") - output = [] - for path_items in split_paths: - clean_items = [] - if isinstance(path_items, str): - path_items = [path_items] + """ + Deprecated: + Function will be removed after release version 3.16.* + """ - for path_item in path_items: - if not re.match(r"{.+}", path_item): - path_item = re.sub(pattern_array, "", path_item) - clean_items.append(path_item) + from openpype.pipeline.project_folders import concatenate_splitted_paths - # backward compatibility - if "__project_root__" in path_items: - for root, root_path in anatomy.roots.items(): - if not os.path.exists(str(root_path)): - log.debug("Root {} path path {} not exist on \ - computer!".format(root, root_path)) - continue - clean_items = ["{{root[{}]}}".format(root), - r"{project[name]}"] + clean_items[1:] - output.append(os.path.normpath(os.path.sep.join(clean_items))) - continue - - output.append(os.path.normpath(os.path.sep.join(clean_items))) - - return output + return concatenate_splitted_paths(split_paths, anatomy) +@deprecated def get_format_data(anatomy): - project_doc = get_project(anatomy.project_name, fields=["data.code"]) - project_code = project_doc["data"]["code"] + """ + Deprecated: + Function will be removed after release version 3.16.* + """ - return { - "root": anatomy.roots, - "project": { - "name": anatomy.project_name, - "code": project_code - }, - } + from openpype.pipeline.template_data import get_project_template_data + + data = get_project_template_data(project_name=anatomy.project_name) + data["root"] = anatomy.roots + return data +@deprecated("openpype.pipeline.project_folders.fill_paths") def fill_paths(path_list, anatomy): - format_data = get_format_data(anatomy) - filled_paths = [] + """ + Deprecated: + Function will be removed after release version 3.16.* + """ - for path in path_list: - new_path = path.format(**format_data) - filled_paths.append(new_path) + from openpype.pipeline.project_folders import fill_paths - return filled_paths + return fill_paths(path_list, anatomy) +@deprecated("openpype.pipeline.project_folders.create_project_folders") def create_project_folders(basic_paths, project_name): - from openpype.pipeline import Anatomy - anatomy = Anatomy(project_name) + """ + Deprecated: + Function will be removed after release version 3.16.* + """ - concat_paths = concatenate_splitted_paths(basic_paths, anatomy) - filled_paths = fill_paths(concat_paths, anatomy) + from openpype.pipeline.project_folders import create_project_folders - # Create folders - for path in filled_paths: - if os.path.exists(path): - log.debug("Folder already exists: {}".format(path)) - else: - log.debug("Creating folder: {}".format(path)) - os.makedirs(path) - - -def _list_path_items(folder_structure): - output = [] - for key, value in 
folder_structure.items(): - if not value: - output.append(key) - else: - paths = _list_path_items(value) - for path in paths: - if not isinstance(path, (list, tuple)): - path = [path] - - item = [key] - item.extend(path) - output.append(item) - - return output + return create_project_folders(project_name, basic_paths) +@deprecated("openpype.pipeline.project_folders.get_project_basic_paths") def get_project_basic_paths(project_name): - project_settings = get_project_settings(project_name) - folder_structure = ( - project_settings["global"]["project_folder_structure"] - ) - if not folder_structure: - return [] + """ + Deprecated: + Function will be removed after release version 3.16.* + """ - if isinstance(folder_structure, str): - folder_structure = json.loads(folder_structure) - return _list_path_items(folder_structure) + from openpype.pipeline.project_folders import get_project_basic_paths + + return get_project_basic_paths(project_name) +@deprecated("openpype.pipeline.workfile.create_workdir_extra_folders") def create_workdir_extra_folders( workdir, host_name, task_type, task_name, project_name, project_settings=None @@ -288,193 +350,18 @@ def create_workdir_extra_folders( project_name (str): Name of project on which task is. project_settings (dict): Prepared project settings. Are loaded if not passed. - """ - # Load project settings if not set - if not project_settings: - project_settings = get_project_settings(project_name) - # Load extra folders profiles - extra_folders_profiles = ( - project_settings["global"]["tools"]["Workfiles"]["extra_folders"] + Deprecated: + Function will be removed after release version 3.16.* + """ + + from openpype.pipeline.project_folders import create_workdir_extra_folders + + return create_workdir_extra_folders( + workdir, + host_name, + task_type, + task_name, + project_name, + project_settings ) - # Skip if are empty - if not extra_folders_profiles: - return - - # Prepare profiles filters - filter_data = { - "task_types": task_type, - "task_names": task_name, - "hosts": host_name - } - profile = filter_profiles(extra_folders_profiles, filter_data) - if profile is None: - return - - for subfolder in profile["folders"]: - # Make sure backslashes are converted to forwards slashes - # and does not start with slash - subfolder = subfolder.replace("\\", "/").lstrip("/") - # Skip empty strings - if not subfolder: - continue - - fullpath = os.path.join(workdir, subfolder) - if not os.path.exists(fullpath): - os.makedirs(fullpath) - - -@six.add_metaclass(abc.ABCMeta) -class HostDirmap: - """ - Abstract class for running dirmap on a workfile in a host. - - Dirmap is used to translate paths inside of host workfile from one - OS to another. (Eg. arstist created workfile on Win, different artists - opens same file on Linux.) - - Expects methods to be implemented inside of host: - on_dirmap_enabled: run host code for enabling dirmap - do_dirmap: run host code to do actual remapping - """ - - def __init__(self, host_name, project_settings, sync_module=None): - self.host_name = host_name - self.project_settings = project_settings - self.sync_module = sync_module # to limit reinit of Modules - - self._mapping = None # cache mapping - - @abc.abstractmethod - def on_enable_dirmap(self): - """ - Run host dependent operation for enabling dirmap if necessary. 
- """ - - @abc.abstractmethod - def dirmap_routine(self, source_path, destination_path): - """ - Run host dependent remapping from source_path to destination_path - """ - - def process_dirmap(self): - # type: (dict) -> None - """Go through all paths in Settings and set them using `dirmap`. - - If artists has Site Sync enabled, take dirmap mapping directly from - Local Settings when artist is syncing workfile locally. - - Args: - project_settings (dict): Settings for current project. - - """ - if not self._mapping: - self._mapping = self.get_mappings(self.project_settings) - if not self._mapping: - return - - log.info("Processing directory mapping ...") - self.on_enable_dirmap() - log.info("mapping:: {}".format(self._mapping)) - - for k, sp in enumerate(self._mapping["source-path"]): - try: - print("{} -> {}".format(sp, - self._mapping["destination-path"][k])) - self.dirmap_routine(sp, - self._mapping["destination-path"][k]) - except IndexError: - # missing corresponding destination path - log.error(("invalid dirmap mapping, missing corresponding" - " destination directory.")) - break - except RuntimeError: - log.error("invalid path {} -> {}, mapping not registered".format( # noqa: E501 - sp, self._mapping["destination-path"][k] - )) - continue - - def get_mappings(self, project_settings): - """Get translation from source-path to destination-path. - - It checks if Site Sync is enabled and user chose to use local - site, in that case configuration in Local Settings takes precedence - """ - local_mapping = self._get_local_sync_dirmap(project_settings) - dirmap_label = "{}-dirmap".format(self.host_name) - if not self.project_settings[self.host_name].get(dirmap_label) and \ - not local_mapping: - return [] - mapping = local_mapping or \ - self.project_settings[self.host_name][dirmap_label]["paths"] or {} - enbled = self.project_settings[self.host_name][dirmap_label]["enabled"] - mapping_enabled = enbled or bool(local_mapping) - - if not mapping or not mapping_enabled or \ - not mapping.get("destination-path") or \ - not mapping.get("source-path"): - return [] - return mapping - - def _get_local_sync_dirmap(self, project_settings): - """ - Returns dirmap if synch to local project is enabled. - - Only valid mapping is from roots of remote site to local site set - in Local Settings. 
- - Args: - project_settings (dict) - Returns: - dict : { "source-path": [XXX], "destination-path": [YYYY]} - """ - import json - mapping = {} - - if not project_settings["global"]["sync_server"]["enabled"]: - return mapping - - from openpype.settings.lib import get_site_local_overrides - - if not self.sync_module: - from openpype.modules import ModulesManager - manager = ModulesManager() - self.sync_module = manager.modules_by_name["sync_server"] - - project_name = os.getenv("AVALON_PROJECT") - - active_site = self.sync_module.get_local_normalized_site( - self.sync_module.get_active_site(project_name)) - remote_site = self.sync_module.get_local_normalized_site( - self.sync_module.get_remote_site(project_name)) - log.debug("active {} - remote {}".format(active_site, remote_site)) - - if active_site == "local" \ - and project_name in self.sync_module.get_enabled_projects()\ - and active_site != remote_site: - - sync_settings = self.sync_module.get_sync_project_setting( - os.getenv("AVALON_PROJECT"), exclude_locals=False, - cached=False) - - active_overrides = get_site_local_overrides( - os.getenv("AVALON_PROJECT"), active_site) - remote_overrides = get_site_local_overrides( - os.getenv("AVALON_PROJECT"), remote_site) - - log.debug("local overrides".format(active_overrides)) - log.debug("remote overrides".format(remote_overrides)) - for root_name, active_site_dir in active_overrides.items(): - remote_site_dir = remote_overrides.get(root_name) or\ - sync_settings["sites"][remote_site]["root"][root_name] - if os.path.isdir(active_site_dir): - if not mapping.get("destination-path"): - mapping["destination-path"] = [] - mapping["destination-path"].append(active_site_dir) - - if not mapping.get("source-path"): - mapping["source-path"] = [] - mapping["source-path"].append(remote_site_dir) - - log.debug("local sync mapping:: {}".format(mapping)) - return mapping diff --git a/openpype/lib/plugin_tools.py b/openpype/lib/plugin_tools.py index 81d268ea1c..1e157dfbfd 100644 --- a/openpype/lib/plugin_tools.py +++ b/openpype/lib/plugin_tools.py @@ -3,7 +3,6 @@ import os import logging import re -import json import warnings import functools diff --git a/openpype/modules/deadline/abstract_submit_deadline.py b/openpype/modules/deadline/abstract_submit_deadline.py index 0bad981fdf..512ff800ee 100644 --- a/openpype/modules/deadline/abstract_submit_deadline.py +++ b/openpype/modules/deadline/abstract_submit_deadline.py @@ -9,6 +9,7 @@ import os from abc import abstractmethod import platform import getpass +from functools import partial from collections import OrderedDict import six @@ -66,6 +67,96 @@ def requests_get(*args, **kwargs): return requests.get(*args, **kwargs) +class DeadlineKeyValueVar(dict): + """ + + Serializes dictionary key values as "{key}={value}" like Deadline uses + for EnvironmentKeyValue. + + As an example: + EnvironmentKeyValue0="A_KEY=VALUE_A" + EnvironmentKeyValue1="OTHER_KEY=VALUE_B" + + The keys are serialized in alphabetical order (sorted). 
+
+    Example:
+        >>> var = DeadlineKeyValueVar("EnvironmentKeyValue")
+        >>> var["my_var"] = "hello"
+        >>> var["my_other_var"] = "hello2"
+        >>> var.serialize()
+        {'EnvironmentKeyValue0': 'my_other_var=hello2', 'EnvironmentKeyValue1': 'my_var=hello'}
+
+    """
+    def __init__(self, key):
+        super(DeadlineKeyValueVar, self).__init__()
+        self.__key = key
+
+    def serialize(self):
+        key = self.__key
+
+        # Allow custom location for index in serialized string
+        if "{}" not in key:
+            key = key + "{}"
+
+        return {
+            key.format(index): "{}={}".format(var_key, var_value)
+            for index, (var_key, var_value) in enumerate(sorted(self.items()))
+        }
+
+
+class DeadlineIndexedVar(dict):
+    """
+
+    Allows setting and querying values by integer indices:
+        Query: var[1] or var.get(1)
+        Set: var[1] = "my_value"
+        Append: var += "value"
+
+    Note: Iterating the instance is not guaranteed to be in the order of
+        the indices. To iterate in index order, use `sorted()`.
+
+    """
+    def __init__(self, key):
+        super(DeadlineIndexedVar, self).__init__()
+        self.__key = key
+
+    def serialize(self):
+        key = self.__key
+
+        # Allow custom location for index in serialized string
+        if "{}" not in key:
+            key = key + "{}"
+
+        return {
+            key.format(index): value for index, value in sorted(self.items())
+        }
+
+    def next_available_index(self):
+        # Add as first unused entry
+        i = 0
+        while i in self.keys():
+            i += 1
+        return i
+
+    def update(self, data):
+        # Force the integer key check
+        for key, value in data.items():
+            self.__setitem__(key, value)
+
+    def __iadd__(self, other):
+        index = self.next_available_index()
+        self[index] = other
+        return self
+
+    def __setitem__(self, key, value):
+        if not isinstance(key, int):
+            raise TypeError("Key must be an integer: {}".format(key))
+
+        if key < 0:
+            raise ValueError("Negative index can't be set: {}".format(key))
+        dict.__setitem__(self, key, value)
+
+
 @attr.s
 class DeadlineJobInfo(object):
     """Mapping of all Deadline *JobInfo* attributes.
@@ -218,24 +309,8 @@ class DeadlineJobInfo(object):
     # Environment
     # ----------------------------------------------
-    _environmentKeyValue = attr.ib(factory=list)
-
-    @property
-    def EnvironmentKeyValue(self):  # noqa: N802
-        """Return all environment key values formatted for Deadline.
-
-        Returns:
-            dict: as `{'EnvironmentKeyValue0', 'key=value'}`
-
-        """
-        out = {}
-        for index, v in enumerate(self._environmentKeyValue):
-            out["EnvironmentKeyValue{}".format(index)] = v
-        return out
-
-    @EnvironmentKeyValue.setter
-    def EnvironmentKeyValue(self, val):  # noqa: N802
-        self._environmentKeyValue.append(val)
+    EnvironmentKeyValue = attr.ib(factory=partial(DeadlineKeyValueVar,
+                                                  "EnvironmentKeyValue"))
 
     IncludeEnvironment = attr.ib(default=None)  # Default: false
     UseJobEnvironmentOnly = attr.ib(default=None)  # Default: false
@@ -243,121 +318,29 @@ class DeadlineJobInfo(object):
     # Job Extra Info
     # ----------------------------------------------
-    _extraInfos = attr.ib(factory=list)
-    _extraInfoKeyValues = attr.ib(factory=list)
-
-    @property
-    def ExtraInfo(self):  # noqa: N802
-        """Return all ExtraInfo values formatted for Deadline.
-
-        Returns:
-            dict: as `{'ExtraInfo0': 'value'}`
-
-        """
-        out = {}
-        for index, v in enumerate(self._extraInfos):
-            out["ExtraInfo{}".format(index)] = v
-        return out
-
-    @ExtraInfo.setter
-    def ExtraInfo(self, val):  # noqa: N802
-        self._extraInfos.append(val)
-
-    @property
-    def ExtraInfoKeyValue(self):  # noqa: N802
-        """Return all ExtraInfoKeyValue values formatted for Deadline.
- - Returns: - dict: as {'ExtraInfoKeyValue0': 'key=value'}` - - """ - out = {} - for index, v in enumerate(self._extraInfoKeyValues): - out["ExtraInfoKeyValue{}".format(index)] = v - return out - - @ExtraInfoKeyValue.setter - def ExtraInfoKeyValue(self, val): # noqa: N802 - self._extraInfoKeyValues.append(val) + ExtraInfo = attr.ib(factory=partial(DeadlineIndexedVar, "ExtraInfo")) + ExtraInfoKeyValue = attr.ib(factory=partial(DeadlineKeyValueVar, + "ExtraInfoKeyValue")) # Task Extra Info Names # ---------------------------------------------- OverrideTaskExtraInfoNames = attr.ib(default=None) # Default: false - _taskExtraInfos = attr.ib(factory=list) - - @property - def TaskExtraInfoName(self): # noqa: N802 - """Return all TaskExtraInfoName values formatted for Deadline. - - Returns: - dict: as `{'TaskExtraInfoName0': 'value'}` - - """ - out = {} - for index, v in enumerate(self._taskExtraInfos): - out["TaskExtraInfoName{}".format(index)] = v - return out - - @TaskExtraInfoName.setter - def TaskExtraInfoName(self, val): # noqa: N802 - self._taskExtraInfos.append(val) + TaskExtraInfoName = attr.ib(factory=partial(DeadlineIndexedVar, + "TaskExtraInfoName")) # Output # ---------------------------------------------- - _outputFilename = attr.ib(factory=list) - _outputFilenameTile = attr.ib(factory=list) - _outputDirectory = attr.ib(factory=list) + OutputFilename = attr.ib(factory=partial(DeadlineIndexedVar, + "OutputFilename")) + OutputFilenameTile = attr.ib(factory=partial(DeadlineIndexedVar, + "OutputFilename{}Tile")) + OutputDirectory = attr.ib(factory=partial(DeadlineIndexedVar, + "OutputDirectory")) - @property - def OutputFilename(self): # noqa: N802 - """Return all OutputFilename values formatted for Deadline. - - Returns: - dict: as `{'OutputFilename0': 'filename'}` - - """ - out = {} - for index, v in enumerate(self._outputFilename): - out["OutputFilename{}".format(index)] = v - return out - - @OutputFilename.setter - def OutputFilename(self, val): # noqa: N802 - self._outputFilename.append(val) - - @property - def OutputFilenameTile(self): # noqa: N802 - """Return all OutputFilename#Tile values formatted for Deadline. - - Returns: - dict: as `{'OutputFilenme#Tile': 'tile'}` - - """ - out = {} - for index, v in enumerate(self._outputFilenameTile): - out["OutputFilename{}Tile".format(index)] = v - return out - - @OutputFilenameTile.setter - def OutputFilenameTile(self, val): # noqa: N802 - self._outputFilenameTile.append(val) - - @property - def OutputDirectory(self): # noqa: N802 - """Return all OutputDirectory values formatted for Deadline. 
- - Returns: - dict: as `{'OutputDirectory0': 'dir'}` - - """ - out = {} - for index, v in enumerate(self._outputDirectory): - out["OutputDirectory{}".format(index)] = v - return out - - @OutputDirectory.setter - def OutputDirectory(self, val): # noqa: N802 - self._outputDirectory.append(val) + # Asset Dependency + # ---------------------------------------------- + AssetDependency = attr.ib(factory=partial(DeadlineIndexedVar, + "AssetDependency")) # Tile Job # ---------------------------------------------- @@ -381,7 +364,7 @@ class DeadlineJobInfo(object): """ def filter_data(a, v): - if a.name.startswith("_"): + if isinstance(v, (DeadlineIndexedVar, DeadlineKeyValueVar)): return False if v is None: return False @@ -389,15 +372,27 @@ class DeadlineJobInfo(object): serialized = attr.asdict( self, dict_factory=OrderedDict, filter=filter_data) - serialized.update(self.EnvironmentKeyValue) - serialized.update(self.ExtraInfo) - serialized.update(self.ExtraInfoKeyValue) - serialized.update(self.TaskExtraInfoName) - serialized.update(self.OutputFilename) - serialized.update(self.OutputFilenameTile) - serialized.update(self.OutputDirectory) + + # Custom serialize these attributes + for attribute in [ + self.EnvironmentKeyValue, + self.ExtraInfo, + self.ExtraInfoKeyValue, + self.TaskExtraInfoName, + self.OutputFilename, + self.OutputFilenameTile, + self.OutputDirectory, + self.AssetDependency + ]: + serialized.update(attribute.serialize()) + return serialized + def update(self, data): + """Update instance with data dict""" + for key, value in data.items(): + setattr(self, key, value) + @six.add_metaclass(AbstractMetaInstancePlugin) class AbstractSubmitDeadline(pyblish.api.InstancePlugin): @@ -521,68 +516,72 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin): published. """ - anatomy = self._instance.context.data['anatomy'] - file_path = None - for i in self._instance.context: - if "workfile" in i.data["families"] \ - or i.data["family"] == "workfile": - # test if there is instance of workfile waiting - # to be published. - assert i.data["publish"] is True, ( - "Workfile (scene) must be published along") - # determine published path from Anatomy. - template_data = i.data.get("anatomyData") - rep = i.data.get("representations")[0].get("ext") - template_data["representation"] = rep - template_data["ext"] = rep - template_data["comment"] = None - anatomy_filled = anatomy.format(template_data) - template_filled = anatomy_filled["publish"]["path"] - file_path = os.path.normpath(template_filled) - self.log.info("Using published scene for render {}".format( - file_path)) + instance = self._instance + workfile_instance = self._get_workfile_instance(instance.context) + if workfile_instance is None: + return - if not os.path.exists(file_path): - self.log.error("published scene does not exist!") - raise + # determine published path from Anatomy. 
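Before the published-scene lookup continues below, a quick illustrative sketch (not part of the patch itself) of what the variable types above buy us; it only assumes the classes are importable from this module:

    from openpype_modules.deadline.abstract_submit_deadline import (
        DeadlineJobInfo,
    )

    job_info = DeadlineJobInfo(Plugin="MayaBatch")
    job_info.EnvironmentKeyValue["AVALON_PROJECT"] = "demo"  # example value
    job_info.OutputFilename += "render.0001.exr"  # appends at next free index
    job_info.OutputFilenameTile[0] = "tile.exr"   # "{}" positions the index

    # serialize() folds these into flat Deadline JobInfo keys, e.g.:
    #   "EnvironmentKeyValue0": "AVALON_PROJECT=demo"
    #   "OutputFilename0": "render.0001.exr"
    #   "OutputFilename0Tile": "tile.exr"
    payload = job_info.serialize()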
+        template_data = workfile_instance.data.get("anatomyData")
+        rep = workfile_instance.data.get("representations")[0]
+        template_data["representation"] = rep.get("name")
+        template_data["ext"] = rep.get("ext")
+        template_data["comment"] = None
-        if not replace_in_path:
-            return file_path
+        anatomy = instance.context.data['anatomy']
+        anatomy_filled = anatomy.format(template_data)
+        template_filled = anatomy_filled["publish"]["path"]
+        file_path = os.path.normpath(template_filled)
-        # now we need to switch scene in expected files
-        # because token will now point to published
-        # scene file and that might differ from current one
-        new_scene = os.path.splitext(
-            os.path.basename(file_path))[0]
-        orig_scene = os.path.splitext(
-            os.path.basename(
-                self._instance.context.data["currentFile"]))[0]
-        exp = self._instance.data.get("expectedFiles")
+        self.log.info("Using published scene for render {}".format(file_path))
-        if isinstance(exp[0], dict):
-            # we have aovs and we need to iterate over them
-            new_exp = {}
-            for aov, files in exp[0].items():
-                replaced_files = []
-                for f in files:
-                    replaced_files.append(
-                        str(f).replace(orig_scene, new_scene)
-                    )
-                new_exp[aov] = replaced_files
-            # [] might be too much here, TODO
-            self._instance.data["expectedFiles"] = [new_exp]
-        else:
-            new_exp = []
-            for f in exp:
-                new_exp.append(
-                    str(f).replace(orig_scene, new_scene)
-                )
-            self._instance.data["expectedFiles"] = new_exp
+        if not os.path.exists(file_path):
+            self.log.error("published scene does not exist!")
+            raise AssertionError("Published scene does not exist!")
-        self.log.info("Scene name was switched {} -> {}".format(
-            orig_scene, new_scene
-        ))
+        if not replace_in_path:
+            return file_path
+
+        # now we need to switch scene in expected files
+        # because token will now point to published
+        # scene file and that might differ from current one
+        def _clean_name(path):
+            return os.path.splitext(os.path.basename(path))[0]
+
+        new_scene = _clean_name(file_path)
+        orig_scene = _clean_name(instance.context.data["currentFile"])
+        expected_files = instance.data.get("expectedFiles")
+
+        if isinstance(expected_files[0], dict):
+            # we have aovs and we need to iterate over them
+            new_exp = {}
+            for aov, files in expected_files[0].items():
+                replaced_files = []
+                for f in files:
+                    replaced_files.append(
+                        str(f).replace(orig_scene, new_scene)
+                    )
+                new_exp[aov] = replaced_files
+            # [] might be too much here, TODO
+            instance.data["expectedFiles"] = [new_exp]
+        else:
+            new_exp = []
+            for f in expected_files:
+                new_exp.append(
+                    str(f).replace(orig_scene, new_scene)
+                )
+            instance.data["expectedFiles"] = new_exp
+
+        metadata_folder = instance.data.get("publishRenderMetadataFolder")
+        if metadata_folder:
+            metadata_folder = metadata_folder.replace(orig_scene,
+                                                      new_scene)
+            instance.data["publishRenderMetadataFolder"] = metadata_folder
+
+        self.log.info("Scene name was switched {} -> {}".format(
+            orig_scene, new_scene
+        ))
 
         return file_path
 
@@ -645,3 +644,22 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
         self._instance.data["deadlineSubmissionJob"] = result
 
         return result["_id"]
+
+    @staticmethod
+    def _get_workfile_instance(context):
+        """Find workfile instance in context"""
+        for i in context:
+
+            is_workfile = (
+                "workfile" in i.data.get("families", []) or
+                i.data["family"] == "workfile"
+            )
+            if not is_workfile:
+                continue
+
+            # test if there is instance of workfile waiting
+            # to be published.
+ assert i.data["publish"] is True, ( + "Workfile (scene) must be published along") + + return i diff --git a/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py b/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py index a7035cd99f..9981bead3e 100644 --- a/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py +++ b/openpype/modules/deadline/plugins/publish/collect_deadline_server_from_instance.py @@ -13,7 +13,7 @@ class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin): order = pyblish.api.CollectorOrder + 0.415 label = "Deadline Webservice from the Instance" - families = ["rendering"] + families = ["rendering", "renderlayer"] def process(self, instance): instance.data["deadlineUrl"] = self._collect_deadline_url(instance) diff --git a/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py b/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py index c55f85c8da..0c1ffa6bd7 100644 --- a/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_aftereffects_deadline.py @@ -3,8 +3,10 @@ import attr import getpass import pyblish.api -from openpype.lib import env_value_to_bool -from openpype.lib.delivery import collect_frames +from openpype.lib import ( + env_value_to_bool, + collect_frames, +) from openpype.pipeline import legacy_io from openpype_modules.deadline import abstract_submit_deadline from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo @@ -65,9 +67,9 @@ class AfterEffectsSubmitDeadline( dln_job_info.Group = self.group dln_job_info.Department = self.department dln_job_info.ChunkSize = self.chunk_size - dln_job_info.OutputFilename = \ + dln_job_info.OutputFilename += \ os.path.basename(self._instance.data["expectedFiles"][0]) - dln_job_info.OutputDirectory = \ + dln_job_info.OutputDirectory += \ os.path.dirname(self._instance.data["expectedFiles"][0]) dln_job_info.JobDelay = "00:00:00" @@ -90,13 +92,12 @@ class AfterEffectsSubmitDeadline( environment = dict({key: os.environ[key] for key in keys if key in os.environ}, **legacy_io.Session) for key in keys: - val = environment.get(key) - if val: - dln_job_info.EnvironmentKeyValue = "{key}={value}".format( - key=key, - value=val) + value = environment.get(key) + if value: + dln_job_info.EnvironmentKeyValue[key] = value + # to recognize job from PYPE for turning Event On/Off - dln_job_info.EnvironmentKeyValue = "OPENPYPE_RENDER_JOB=1" + dln_job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" return dln_job_info diff --git a/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py b/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py index 3f9c09b592..6327143623 100644 --- a/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_harmony_deadline.py @@ -284,14 +284,12 @@ class HarmonySubmitDeadline( environment = dict({key: os.environ[key] for key in keys if key in os.environ}, **legacy_io.Session) for key in keys: - val = environment.get(key) - if val: - job_info.EnvironmentKeyValue = "{key}={value}".format( - key=key, - value=val) + value = environment.get(key) + if value: + job_info.EnvironmentKeyValue[key] = value # to recognize job from PYPE for turning Event On/Off - job_info.EnvironmentKeyValue = "OPENPYPE_RENDER_JOB=1" + job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" return job_info 
diff --git a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py
index 7966861358..7c486b7c34 100644
--- a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py
@@ -18,7 +18,6 @@ Attributes:
 from __future__ import print_function
 import os
-import json
 import getpass
 import copy
 import re
@@ -27,45 +26,686 @@ from datetime import datetime
 import itertools
 from collections import OrderedDict
 
-import clique
-import requests
+import attr
 
 from maya import cmds
 
-import pyblish.api
-
-from openpype.lib import requests_post
-from openpype.hosts.maya.api import lib
 from openpype.pipeline import legacy_io
 
-# Documentation for keys available at:
-# https://docs.thinkboxsoftware.com
-#    /products/deadline/8.0/1_User%20Manual/manual
-#    /manual-submission.html#job-info-file-options
+from openpype_modules.deadline import abstract_submit_deadline
+from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo
 
-payload_skeleton_template = {
-    "JobInfo": {
-        "BatchName": None,  # Top-level group name
-        "Name": None,  # Job name, as seen in Monitor
-        "UserName": None,
-        "Plugin": "MayaBatch",
-        "Frames": "{start}-{end}x{step}",
-        "Comment": None,
-        "Priority": 50,
-    },
-    "PluginInfo": {
-        "SceneFile": None,  # Input
-        "OutputFilePath": None,  # Output directory and filename
-        "OutputFilePrefix": None,
-        "Version": cmds.about(version=True),  # Mandatory for Deadline
-        "UsingRenderLayers": True,
-        "RenderLayer": None,  # Render only this layer
-        "Renderer": None,
-        "ProjectPath": None,  # Resolve relative references
-        "RenderSetupIncludeLights": None,  # Include all lights flag.
-    },
-    "AuxFiles": []  # Mandatory for Deadline, may be empty
-}
+
+@attr.s
+class MayaPluginInfo:
+    SceneFile = attr.ib(default=None)  # Input
+    OutputFilePath = attr.ib(default=None)  # Output directory and filename
+    OutputFilePrefix = attr.ib(default=None)
+    Version = attr.ib(default=None)  # Mandatory for Deadline
+    UsingRenderLayers = attr.ib(default=True)
+    RenderLayer = attr.ib(default=None)  # Render only this layer
+    Renderer = attr.ib(default=None)
+    ProjectPath = attr.ib(default=None)  # Resolve relative references
+    RenderSetupIncludeLights = attr.ib(default=None)  # Include all lights flag
+
+
+@attr.s
+class PythonPluginInfo:
+    ScriptFile = attr.ib()
+    Version = attr.ib(default="3.6")
+    Arguments = attr.ib(default=None)
+    SingleFrameOnly = attr.ib(default=None)
+
+
+@attr.s
+class VRayPluginInfo:
+    InputFilename = attr.ib(default=None)  # Input
+    SeparateFilesPerFrame = attr.ib(default=None)
+    VRayEngine = attr.ib(default="V-Ray")
+    Width = attr.ib(default=None)
+    Height = attr.ib(default=None)
+    OutputFilePath = attr.ib(default=True)
+    OutputFileName = attr.ib(default=None)
+
+
+@attr.s
+class ArnoldPluginInfo:
+    ArnoldFile = attr.ib(default=None)
+
+
+class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
+
+    label = "Submit Render to Deadline"
+    hosts = ["maya"]
+    families = ["renderlayer"]
+    targets = ["local"]
+
+    tile_assembler_plugin = "OpenPypeTileAssembler"
+    priority = 50
+    tile_priority = 50
+    limit = []  # limit groups
+    jobInfo = {}
+    pluginInfo = {}
+    group = "none"
+
+    def get_job_info(self):
+        job_info = DeadlineJobInfo(Plugin="MayaBatch")
+
+        # todo: test whether this works for existing production cases
+        #       where custom jobInfo was stored in the project settings
+
job_info.update(self.jobInfo) + + instance = self._instance + context = instance.context + + # Always use the original work file name for the Job name even when + # rendering is done from the published Work File. The original work + # file name is clearer because it can also have subversion strings, + # etc. which are stripped for the published file. + src_filepath = context.data["currentFile"] + src_filename = os.path.basename(src_filepath) + + job_info.Name = "%s - %s" % (src_filename, instance.name) + job_info.BatchName = src_filename + job_info.Plugin = instance.data.get("mayaRenderPlugin", "MayaBatch") + job_info.UserName = context.data.get("deadlineUser", getpass.getuser()) + + # Deadline requires integers in frame range + frames = "{start}-{end}x{step}".format( + start=int(instance.data["frameStartHandle"]), + end=int(instance.data["frameEndHandle"]), + step=int(instance.data["byFrameStep"]), + ) + job_info.Frames = frames + + job_info.Pool = instance.data.get("primaryPool") + job_info.SecondaryPool = instance.data.get("secondaryPool") + job_info.ChunkSize = instance.data.get("chunkSize", 10) + job_info.Comment = context.data.get("comment") + job_info.Priority = instance.data.get("priority", self.priority) + job_info.FramesPerTask = instance.data.get("framesPerTask", 1) + + if self.group != "none" and self.group: + job_info.Group = self.group + + if self.limit: + job_info.LimitGroups = ",".join(self.limit) + + # Add options from RenderGlobals + render_globals = instance.data.get("renderGlobals", {}) + job_info.update(render_globals) + + keys = [ + "FTRACK_API_KEY", + "FTRACK_API_USER", + "FTRACK_SERVER", + "OPENPYPE_SG_USER", + "AVALON_PROJECT", + "AVALON_ASSET", + "AVALON_TASK", + "AVALON_APP_NAME", + "OPENPYPE_DEV", + "OPENPYPE_VERSION" + ] + # Add mongo url if it's enabled + if self._instance.context.data.get("deadlinePassMongoUrl"): + keys.append("OPENPYPE_MONGO") + + environment = dict({key: os.environ[key] for key in keys + if key in os.environ}, **legacy_io.Session) + + for key in keys: + value = environment.get(key) + if not value: + continue + job_info.EnvironmentKeyValue[key] = value + + # to recognize job from PYPE for turning Event On/Off + job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1" + job_info.EnvironmentKeyValue["OPENPYPE_LOG_NO_COLORS"] = "1" + + # Adding file dependencies. 
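Since the `update()` helper added to `DeadlineJobInfo` earlier in this diff simply does `setattr(self, key, value)` per item, the `renderGlobals` override above can replace any JobInfo attribute. A hedged sketch with hypothetical values (the file-dependency handling this comment introduces continues right below):

    render_globals = {"Priority": 60, "Pool": "farm_gpu"}  # example values
    job_info.update(render_globals)
    assert job_info.Priority == 60
    assert job_info.Pool == "farm_gpu"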
+ if self.asset_dependencies: + dependencies = instance.context.data["fileDependencies"] + dependencies.append(context.data["currentFile"]) + for dependency in dependencies: + job_info.AssetDependency += dependency + + # Add list of expected files to job + # --------------------------------- + exp = instance.data.get("expectedFiles") + for filepath in self._iter_expected_files(exp): + job_info.OutputDirectory += os.path.dirname(filepath) + job_info.OutputFilename += os.path.basename(filepath) + + return job_info + + def get_plugin_info(self): + + instance = self._instance + context = instance.context + + plugin_info = MayaPluginInfo( + SceneFile=self.scene_path, + Version=cmds.about(version=True), + RenderLayer=instance.data['setMembers'], + Renderer=instance.data["renderer"], + RenderSetupIncludeLights=instance.data.get("renderSetupIncludeLights"), # noqa + ProjectPath=context.data["workspaceDir"], + UsingRenderLayers=True, + ) + + plugin_payload = attr.asdict(plugin_info) + + # Patching with pluginInfo from settings + for key, value in self.pluginInfo.items(): + plugin_payload[key] = value + + return plugin_payload + + def process_submission(self): + + instance = self._instance + context = instance.context + + filepath = self.scene_path # publish if `use_publish` else workfile + + # TODO: Avoid the need for this logic here, needed for submit publish + # Store output dir for unified publisher (filesequence) + expected_files = instance.data["expectedFiles"] + first_file = next(self._iter_expected_files(expected_files)) + output_dir = os.path.dirname(first_file) + instance.data["outputDir"] = output_dir + instance.data["toBeRenderedOn"] = "deadline" + + # Patch workfile (only when use_published is enabled) + if self.use_published: + self._patch_workfile() + + # Gather needed data ------------------------------------------------ + workspace = context.data["workspaceDir"] + default_render_file = instance.context.data.get('project_settings')\ + .get('maya')\ + .get('RenderSettings')\ + .get('default_render_image_folder') + filename = os.path.basename(filepath) + dirname = os.path.join(workspace, default_render_file) + + # Fill in common data to payload ------------------------------------ + # TODO: Replace these with collected data from CollectRender + payload_data = { + "filename": filename, + "dirname": dirname, + } + + # Submit preceding export jobs ------------------------------------- + export_job = None + assert not all(x in instance.data["families"] + for x in ['vrayscene', 'assscene']), ( + "Vray Scene and Ass Scene options are mutually exclusive") + + if "vrayscene" in instance.data["families"]: + self.log.debug("Submitting V-Ray scene render..") + vray_export_payload = self._get_vray_export_payload(payload_data) + export_job = self.submit(vray_export_payload) + + payload = self._get_vray_render_payload(payload_data) + + elif "assscene" in instance.data["families"]: + self.log.debug("Submitting Arnold .ass standalone render..") + ass_export_payload = self._get_arnold_export_payload(payload_data) + export_job = self.submit(ass_export_payload) + + payload = self._get_arnold_render_payload(payload_data) + else: + self.log.debug("Submitting MayaBatch render..") + payload = self._get_maya_payload(payload_data) + + # Add export job as dependency -------------------------------------- + if export_job: + job_info, _ = payload + job_info.JobDependency = export_job + + if instance.data.get("tileRendering"): + # Prepare tiles data + self._tile_render(payload) + else: + # Submit main render job + 
            job_info, plugin_info = payload
+            self.submit(self.assemble_payload(job_info, plugin_info))
+
+    def _tile_render(self, payload):
+        """Submit as tile render per frame with dependent assembly jobs."""
+
+        # As collected by super process()
+        instance = self._instance
+
+        payload_job_info, payload_plugin_info = payload
+        job_info = copy.deepcopy(payload_job_info)
+        plugin_info = copy.deepcopy(payload_plugin_info)
+
+        # if we have sequence of files, we need to create tile job for
+        # every frame
+        job_info.TileJob = True
+        job_info.TileJobTilesInX = instance.data.get("tilesX")
+        job_info.TileJobTilesInY = instance.data.get("tilesY")
+
+        tiles_count = job_info.TileJobTilesInX * job_info.TileJobTilesInY
+
+        plugin_info["ImageHeight"] = instance.data.get("resolutionHeight")
+        plugin_info["ImageWidth"] = instance.data.get("resolutionWidth")
+        plugin_info["RegionRendering"] = True
+
+        R_FRAME_NUMBER = re.compile(
+            r".+\.(?P<frame>[0-9]+)\..+")  # noqa: N806, E501
+        REPL_FRAME_NUMBER = re.compile(
+            r"(.+\.)([0-9]+)(\..+)")  # noqa: N806, E501
+
+        exp = instance.data["expectedFiles"]
+        if isinstance(exp[0], dict):
+            # we have aovs and we need to iterate over them
+            # get files from `beauty`
+            files = exp[0].get("beauty")
+            # assembly files are used for assembly jobs as we need to put
+            # together all AOVs
+            assembly_files = list(
+                itertools.chain.from_iterable(
+                    [f for _, f in exp[0].items()]))
+            if not files:
+                # if beauty doesn't exist, use first aov we found
+                files = exp[0].get(list(exp[0].keys())[0])
+        else:
+            files = exp
+            assembly_files = files
+
+        # Define frame tile jobs
+        frame_file_hash = {}
+        frame_payloads = {}
+        file_index = 1
+        for file in files:
+            frame = re.search(R_FRAME_NUMBER, file).group("frame")
+
+            new_job_info = copy.deepcopy(job_info)
+            new_job_info.Name += " (Frame {} - {} tiles)".format(frame,
+                                                                 tiles_count)
+            new_job_info.TileJobFrame = frame
+
+            new_plugin_info = copy.deepcopy(plugin_info)
+
+            # Add tile data into job info and plugin info
+            tiles_data = _format_tiles(
+                file, 0,
+                instance.data.get("tilesX"),
+                instance.data.get("tilesY"),
+                instance.data.get("resolutionWidth"),
+                instance.data.get("resolutionHeight"),
+                payload_plugin_info["OutputFilePrefix"]
+            )[0]
+
+            new_job_info.update(tiles_data["JobInfo"])
+            new_plugin_info.update(tiles_data["PluginInfo"])
+
+            self.log.info("hashing {} - {}".format(file_index, file))
+            job_hash = hashlib.sha256(
+                ("{}_{}".format(file_index, file)).encode("utf-8"))
+
+            file_hash = job_hash.hexdigest()
+            frame_file_hash[frame] = file_hash
+
+            new_job_info.ExtraInfo[0] = file_hash
+            new_job_info.ExtraInfo[1] = file
+
+            frame_payloads[frame] = self.assemble_payload(
+                job_info=new_job_info,
+                plugin_info=new_plugin_info
+            )
+            file_index += 1
+
+        self.log.info(
+            "Submitting tile job(s) [{}] ...".format(len(frame_payloads)))
+
+        # Submit frame tile jobs
+        frame_tile_job_id = {}
+        for frame, tile_job_payload in frame_payloads.items():
+            job_id = self.submit(tile_job_payload)
+            frame_tile_job_id[frame] = job_id
+
+        # Define assembly payloads
+        assembly_job_info = copy.deepcopy(job_info)
+        assembly_job_info.Plugin = self.tile_assembler_plugin
+        assembly_job_info.Name += " - Tile Assembly Job"
+        assembly_job_info.Frames = 1
+        assembly_job_info.MachineLimit = 1
+        assembly_job_info.Priority = instance.data.get("tile_priority",
+                                                       self.tile_priority)
+
+        assembly_plugin_info = {
+            "CleanupTiles": 1,
+            "ErrorOnMissing": True,
+            "Renderer": self._instance.data["renderer"]
+        }
+
+        assembly_payloads = []
+        output_dir = self.job_info.OutputDirectory[0]
+
for file in assembly_files: + frame = re.search(R_FRAME_NUMBER, file).group("frame") + + frame_assembly_job_info = copy.deepcopy(assembly_job_info) + frame_assembly_job_info.Name += " (Frame {})".format(frame) + frame_assembly_job_info.OutputFilename[0] = re.sub( + REPL_FRAME_NUMBER, + "\\1{}\\3".format("#" * len(frame)), file) + + file_hash = frame_file_hash[frame] + tile_job_id = frame_tile_job_id[frame] + + frame_assembly_job_info.ExtraInfo[0] = file_hash + frame_assembly_job_info.ExtraInfo[1] = file + frame_assembly_job_info.JobDependency = tile_job_id + + # write assembly job config files + now = datetime.now() + + config_file = os.path.join( + output_dir, + "{}_config_{}.txt".format( + os.path.splitext(file)[0], + now.strftime("%Y_%m_%d_%H_%M_%S") + ) + ) + try: + if not os.path.isdir(output_dir): + os.makedirs(output_dir) + except OSError: + # directory is not available + self.log.warning("Path is unreachable: " + "`{}`".format(output_dir)) + + with open(config_file, "w") as cf: + print("TileCount={}".format(tiles_count), file=cf) + print("ImageFileName={}".format(file), file=cf) + print("ImageWidth={}".format( + instance.data.get("resolutionWidth")), file=cf) + print("ImageHeight={}".format( + instance.data.get("resolutionHeight")), file=cf) + + tiles = _format_tiles( + file, 0, + instance.data.get("tilesX"), + instance.data.get("tilesY"), + instance.data.get("resolutionWidth"), + instance.data.get("resolutionHeight"), + payload_plugin_info["OutputFilePrefix"] + )[1] + for k, v in sorted(tiles.items()): + print("{}={}".format(k, v), file=cf) + + payload = self.assemble_payload( + job_info=frame_assembly_job_info, + plugin_info=assembly_plugin_info.copy(), + # todo: aux file transfers don't work with deadline webservice + # add config file as job auxFile + # aux_files=[config_file] + ) + assembly_payloads.append(payload) + + # Submit assembly jobs + assembly_job_ids = [] + num_assemblies = len(assembly_payloads) + for i, payload in enumerate(assembly_payloads): + self.log.info( + "submitting assembly job {} of {}".format(i + 1, + num_assemblies) + ) + assembly_job_id = self.submit(payload) + assembly_job_ids.append(assembly_job_id) + + instance.data["assemblySubmissionJobs"] = assembly_job_ids + + def _get_maya_payload(self, data): + + job_info = copy.deepcopy(self.job_info) + + if self.asset_dependencies: + # Asset dependency to wait for at least the scene file to sync. + job_info.AssetDependency += self.scene_path + + # Get layer prefix + render_products = self._instance.data["renderProducts"] + layer_metadata = render_products.layer_data + layer_prefix = layer_metadata.filePrefix + + # This hack is here because of how Deadline handles Renderman version. + # it considers everything with `renderman` set as version older than + # Renderman 22, and so if we are using renderman > 21 we need to set + # renderer string on the job to `renderman22`. We will have to change + # this when Deadline releases new version handling this. 
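As an aside before the renderer-specific branch continues: the assembly config file written in `_tile_render` above comes out roughly like this for a hypothetical 2x1 tiling of a 1920x1080 frame (paths shortened; the per-tile keys are produced by `_format_tiles`, whose rewrite appears later in this diff):

    TileCount=2
    ImageFileName=renders/beauty.0001.exr
    ImageWidth=1920
    ImageHeight=1080
    Tile0=renders/_tile_1x1_2x1_beauty.0001.exr
    Tile0FileName=renders/_tile_1x1_2x1_beauty.0001.exr
    Tile0Height=1080
    Tile0Tile=renders/_tile_1x1_2x1_beauty.0001.exr
    Tile0Width=960
    Tile0X=0
    Tile0Y=0
    Tile1=renders/_tile_2x1_2x1_beauty.0001.exr
    ...
    TilesCropped=False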
+ renderer = self._instance.data["renderer"] + if renderer == "renderman": + try: + from rfm2.config import cfg # noqa + except ImportError: + raise Exception("Cannot determine renderman version") + + rman_version = cfg().build_info.version() # type: str + if int(rman_version.split(".")[0]) > 22: + renderer = "renderman22" + + plugin_info = copy.deepcopy(self.plugin_info) + plugin_info.update({ + # Output directory and filename + "OutputFilePath": data["dirname"].replace("\\", "/"), + "OutputFilePrefix": layer_prefix, + }) + + return job_info, plugin_info + + def _get_vray_export_payload(self, data): + + job_info = copy.deepcopy(self.job_info) + job_info.Name = self._job_info_label("Export") + + # Get V-Ray settings info to compute output path + vray_scene = self.format_vray_output_filename() + + plugin_info = { + "Renderer": "vray", + "SkipExistingFrames": True, + "UseLegacyRenderLayers": True, + "OutputFilePath": os.path.dirname(vray_scene) + } + + return job_info, attr.asdict(plugin_info) + + def _get_arnold_export_payload(self, data): + + try: + from openpype.scripts import export_maya_ass_job + except Exception: + raise AssertionError( + "Expected module 'export_maya_ass_job' to be available") + + module_path = export_maya_ass_job.__file__ + if module_path.endswith(".pyc"): + module_path = module_path[: -len(".pyc")] + ".py" + + script = os.path.normpath(module_path) + + job_info = copy.deepcopy(self.job_info) + job_info.Name = self._job_info_label("Export") + + # Force a single frame Python job + job_info.Plugin = "Python" + job_info.Frames = 1 + + renderlayer = self._instance.data["setMembers"] + + # add required env vars for the export script + envs = { + "AVALON_APP_NAME": os.environ.get("AVALON_APP_NAME"), + "OPENPYPE_ASS_EXPORT_RENDER_LAYER": renderlayer, + "OPENPYPE_ASS_EXPORT_SCENE_FILE": self.scene_path, + "OPENPYPE_ASS_EXPORT_OUTPUT": job_info.OutputFilename[0], + "OPENPYPE_ASS_EXPORT_START": int(self._instance.data["frameStartHandle"]), # noqa + "OPENPYPE_ASS_EXPORT_END": int(self._instance.data["frameEndHandle"]), # noqa + "OPENPYPE_ASS_EXPORT_STEP": 1 + } + for key, value in envs.items(): + if not value: + continue + job_info.EnvironmentKeyValue[key] = value + + plugin_info = PythonPluginInfo( + ScriptFile=script, + Version="3.6", + Arguments="", + SingleFrameOnly="True" + ) + + return job_info, attr.asdict(plugin_info) + + def _get_vray_render_payload(self, data): + + # Job Info + job_info = copy.deepcopy(self.job_info) + job_info.Name = self._job_info_label("Render") + job_info.Plugin = "Vray" + job_info.OverrideTaskExtraInfoNames = False + + # Plugin Info + plugin_info = VRayPluginInfo( + InputFilename=self.format_vray_output_filename(), + SeparateFilesPerFrame=False, + VRayEngine="V-Ray", + Width=self._instance.data["resolutionWidth"], + Height=self._instance.data["resolutionHeight"], + OutputFilePath=job_info.OutputDirectory[0], + OutputFileName=job_info.OutputFilename[0] + ) + + return job_info, attr.asdict(plugin_info) + + def _get_arnold_render_payload(self, data): + + # Job Info + job_info = copy.deepcopy(self.job_info) + job_info.Name = self._job_info_label("Render") + job_info.Plugin = "Arnold" + job_info.OverrideTaskExtraInfoNames = False + + # Plugin Info + ass_file, _ = os.path.splitext(data["output_filename_0"]) + ass_filepath = ass_file + ".ass" + + plugin_info = ArnoldPluginInfo( + ArnoldFile=ass_filepath + ) + + return job_info, attr.asdict(plugin_info) + + def format_vray_output_filename(self): + """Format the expected output file of the Export job. 
+
+        Example:
+            <Scene>/<Scene>_<Layer>/<Layer>
+            "shot010_v006/shot010_v006_CHARS/CHARS_0001.vrscene"
+
+        Returns:
+            str
+
+        """
+
+        # "vrayscene/<Scene>/<Scene>_<Layer>/<Layer>"
+        vray_settings = cmds.ls(type="VRaySettingsNode")
+        node = vray_settings[0]
+        template = cmds.getAttr("{}.vrscene_filename".format(node))
+        scene, _ = os.path.splitext(self.scene_path)
+
+        def smart_replace(string, key_values):
+            new_string = string
+            for key, value in key_values.items():
+                new_string = new_string.replace(key, value)
+            return new_string
+
+        # Get workfile scene path without extension to format vrscene_filename
+        scene_filename = os.path.basename(self.scene_path)
+        scene_filename_no_ext, _ = os.path.splitext(scene_filename)
+
+        layer = self._instance.data['setMembers']
+
+        # Reformat without tokens
+        output_path = smart_replace(
+            template,
+            {"<Scene>": scene_filename_no_ext,
+             "<Layer>": layer})
+
+        start_frame = int(self._instance.data["frameStartHandle"])
+        workspace = self._instance.context.data["workspace"]
+        filename_zero = "{}_{:04d}.vrscene".format(output_path, start_frame)
+        filepath_zero = os.path.join(workspace, filename_zero)
+
+        return filepath_zero.replace("\\", "/")
+
+    def _patch_workfile(self):
+        """Patch Maya scene.
+
+        This will take list of patches (lines to add) and apply them to
+        *published* Maya scene file (that is used later for rendering).
+
+        Patches are dict with following structure::
+            {
+                "name": "Name of patch",
+                "regex": "regex of line before patch",
+                "line": "line to insert"
+            }
+
+        """
+        project_settings = self._instance.context.data["project_settings"]
+        patches = (
+            project_settings.get(
+                "deadline", {}).get(
+                "publish", {}).get(
+                "MayaSubmitDeadline", {}).get(
+                "scene_patches", {})
+        )
+        if not patches:
+            return
+
+        if os.path.splitext(self.scene_path)[1].lower() != ".ma":
+            self.log.debug("Skipping workfile patch since workfile is not "
+                           ".ma file")
+            return
+
+        compiled_regex = [re.compile(p["regex"]) for p in patches]
+        with open(self.scene_path, "r+") as pf:
+            scene_data = pf.readlines()
+            for ln, line in enumerate(scene_data):
+                for i, r in enumerate(compiled_regex):
+                    if re.match(r, line):
+                        scene_data.insert(ln + 1, patches[i]["line"])
+                        pf.seek(0)
+                        pf.writelines(scene_data)
+                        pf.truncate()
+                        self.log.info("Applied {} patch to scene.".format(
+                            patches[i]["name"]
+                        ))
+
+    def _job_info_label(self, label):
+        return "{label} {job.Name} [{start}-{end}]".format(
+            label=label,
+            job=self.job_info,
+            start=int(self._instance.data["frameStartHandle"]),
+            end=int(self._instance.data["frameEndHandle"]),
+        )
+
+    @staticmethod
+    def _iter_expected_files(exp):
+        if isinstance(exp[0], dict):
+            for _aov, files in exp[0].items():
+                for file in files:
+                    yield file
+        else:
+            for file in exp:
+                yield file
 
 
 def _format_tiles(
@@ -114,14 +754,21 @@
         used for assembler configuration.
 
     """
-    tile = 0
+    # Math used requires integers for correct output - as such
+    # we ensure our inputs are correct.
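Worked through for a small case, the integer region math in the `_format_tiles` hunk that follows behaves like this (a sketch; it prints tile index, top, bottom, left, right):

    width, height, tiles_x, tiles_y = 1920, 1080, 2, 2
    w_space = width // tiles_x   # 960
    h_space = height // tiles_y  # 540

    tile = 0
    for tile_x in range(1, tiles_x + 1):
        for tile_y in reversed(range(1, tiles_y + 1)):
            top = height - (tile_y * h_space)
            bottom = height - ((tile_y - 1) * h_space) - 1
            left = (tile_x - 1) * w_space
            right = (tile_x * w_space) - 1
            print(tile, top, bottom, left, right)
            tile += 1

    # 0 0 539 0 959
    # 1 540 1079 0 959
    # 2 0 539 960 1919
    # 3 540 1079 960 1919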
+ assert type(tiles_x) is int, "tiles_x must be an integer" + assert type(tiles_y) is int, "tiles_y must be an integer" + assert type(width) is int, "width must be an integer" + assert type(height) is int, "height must be an integer" + out = {"JobInfo": {}, "PluginInfo": {}} cfg = OrderedDict() - w_space = width / tiles_x - h_space = height / tiles_y + w_space = width // tiles_x + h_space = height // tiles_y cfg["TilesCropped"] = "False" + tile = 0 for tile_x in range(1, tiles_x + 1): for tile_y in reversed(range(1, tiles_y + 1)): tile_prefix = "_tile_{}x{}_{}x{}_".format( @@ -129,1034 +776,35 @@ def _format_tiles( tiles_x, tiles_y ) - out_tile_index = "OutputFilename{}Tile{}".format( - str(index), tile - ) + top = height - (tile_y * h_space) + bottom = height - ((tile_y - 1) * h_space) - 1 + left = (tile_x - 1) * w_space + right = (tile_x * w_space) - 1 + + # Job Info new_filename = "{}/{}{}".format( os.path.dirname(filename), tile_prefix, os.path.basename(filename) ) - out["JobInfo"][out_tile_index] = new_filename - out["PluginInfo"]["RegionPrefix{}".format(str(tile))] = \ - "/{}".format(tile_prefix).join(prefix.rsplit("/", 1)) + out["JobInfo"]["OutputFilename{}Tile{}".format(index, tile)] = new_filename # noqa - out["PluginInfo"]["RegionTop{}".format(tile)] = int(height) - (tile_y * h_space) # noqa: E501 - out["PluginInfo"]["RegionBottom{}".format(tile)] = int(height) - ((tile_y - 1) * h_space) - 1 # noqa: E501 - out["PluginInfo"]["RegionLeft{}".format(tile)] = (tile_x - 1) * w_space # noqa: E501 - out["PluginInfo"]["RegionRight{}".format(tile)] = (tile_x * w_space) - 1 # noqa: E501 + # Plugin Info + out["PluginInfo"]["RegionPrefix{}".format(tile)] = "/{}".format(tile_prefix).join(prefix.rsplit("/", 1)) # noqa: E501 + out["PluginInfo"]["RegionTop{}".format(tile)] = top + out["PluginInfo"]["RegionBottom{}".format(tile)] = bottom + out["PluginInfo"]["RegionLeft{}".format(tile)] = left + out["PluginInfo"]["RegionRight{}".format(tile)] = right + # Tile config cfg["Tile{}".format(tile)] = new_filename cfg["Tile{}Tile".format(tile)] = new_filename cfg["Tile{}FileName".format(tile)] = new_filename - cfg["Tile{}X".format(tile)] = (tile_x - 1) * w_space - - cfg["Tile{}Y".format(tile)] = int(height) - (tile_y * h_space) - + cfg["Tile{}X".format(tile)] = left + cfg["Tile{}Y".format(tile)] = top cfg["Tile{}Width".format(tile)] = w_space cfg["Tile{}Height".format(tile)] = h_space tile += 1 + return out, cfg - - -def get_renderer_variables(renderlayer, root): - """Retrieve the extension which has been set in the VRay settings. - - Will return None if the current renderer is not VRay - For Maya 2016.5 and up the renderSetup creates renderSetupLayer node which - start with `rs`. Use the actual node name, do NOT use the `nice name` - - Args: - renderlayer (str): the node name of the renderlayer. 
-        root (str): base path to render
-
-    Returns:
-        dict
-
-    """
-    renderer = lib.get_renderer(renderlayer or lib.get_current_renderlayer())
-    render_attrs = lib.RENDER_ATTRS.get(renderer, lib.RENDER_ATTRS["default"])
-
-    padding = cmds.getAttr("{}.{}".format(render_attrs["node"],
-                                          render_attrs["padding"]))
-
-    filename_0 = cmds.renderSettings(
-        fullPath=True,
-        gin="#" * int(padding),
-        lut=True,
-        layer=renderlayer or lib.get_current_renderlayer())[0]
-    filename_0 = re.sub('_<RenderPass>', '_beauty',
-                        filename_0, flags=re.IGNORECASE)
-    prefix_attr = "defaultRenderGlobals.imageFilePrefix"
-
-    scene = cmds.file(query=True, sceneName=True)
-    scene, _ = os.path.splitext(os.path.basename(scene))
-
-    if renderer == "vray":
-        renderlayer = renderlayer.split("_")[-1]
-        # Maya's renderSettings function does not return V-Ray file extension
-        # so we get the extension from vraySettings
-        extension = cmds.getAttr("vraySettings.imageFormatStr")
-
-        # When V-Ray image format has not been switched once from default .png
-        # the getAttr command above returns None. As such we explicitly set
-        # it to `.png`
-        if extension is None:
-            extension = "png"
-
-        if extension in ["exr (multichannel)", "exr (deep)"]:
-            extension = "exr"
-
-        prefix_attr = "vraySettings.fileNamePrefix"
-        filename_prefix = cmds.getAttr(prefix_attr)
-        # we need to determine path for vray as maya `renderSettings` query
-        # does not work for vray.
-
-        filename_0 = re.sub('<Scene>', scene, filename_prefix, flags=re.IGNORECASE)  # noqa: E501
-        filename_0 = re.sub('<Layer>', renderlayer, filename_0, flags=re.IGNORECASE)  # noqa: E501
-        filename_0 = "{}.{}.{}".format(
-            filename_0, "#" * int(padding), extension)
-        filename_0 = os.path.normpath(os.path.join(root, filename_0))
-    elif renderer == "renderman":
-        prefix_attr = "rmanGlobals.imageFileFormat"
-        # NOTE: This is guessing extensions from renderman display types.
-        #       Some of them are just framebuffers, d_texture format can be
-        #       set in display setting. We set those now to None, but it
-        #       should be handled more gracefully.
-        display_types = {
-            "d_deepexr": "exr",
-            "d_it": None,
-            "d_null": None,
-            "d_openexr": "exr",
-            "d_png": "png",
-            "d_pointcloud": "ptc",
-            "d_targa": "tga",
-            "d_texture": None,
-            "d_tiff": "tif"
-        }
-
-        extension = display_types.get(
-            cmds.listConnections("rmanDefaultDisplay.displayType")[0],
-            "exr"
-        ) or "exr"
-
-        filename_prefix = "{}/{}".format(
-            cmds.getAttr("rmanGlobals.imageOutputDir"),
-            cmds.getAttr("rmanGlobals.imageFileFormat")
-        )
-
-        renderlayer = renderlayer.split("_")[-1]
-
-        filename_0 = re.sub('<scene>', scene, filename_prefix, flags=re.IGNORECASE)  # noqa: E501
-        filename_0 = re.sub('<layer>', renderlayer, filename_0, flags=re.IGNORECASE)  # noqa: E501
-        filename_0 = re.sub('<f4>', "#" * int(padding), filename_0, flags=re.IGNORECASE)  # noqa: E501
-        filename_0 = re.sub('<ext>', extension, filename_0, flags=re.IGNORECASE)  # noqa: E501
-        filename_0 = os.path.normpath(os.path.join(root, filename_0))
-    elif renderer == "redshift":
-        # mapping redshift extension dropdown values to strings
-        ext_mapping = ["iff", "exr", "tif", "png", "tga", "jpg"]
-        extension = ext_mapping[
-            cmds.getAttr("redshiftOptions.imageFormat")
-        ]
-    else:
-        # Get the extension, getAttr defaultRenderGlobals.imageFormat
-        # returns an index number.
- filename_base = os.path.basename(filename_0) - extension = os.path.splitext(filename_base)[-1].strip(".") - - filename_prefix = cmds.getAttr(prefix_attr) - return {"ext": extension, - "filename_prefix": filename_prefix, - "padding": padding, - "filename_0": filename_0} - - -class MayaSubmitDeadline(pyblish.api.InstancePlugin): - """Submit available render layers to Deadline. - - Renders are submitted to a Deadline Web Service as - supplied via settings key "DEADLINE_REST_URL". - - Attributes: - use_published (bool): Use published scene to render instead of the - one in work area. - - """ - - label = "Submit to Deadline" - order = pyblish.api.IntegratorOrder + 0.1 - hosts = ["maya"] - families = ["renderlayer"] - targets = ["local"] - - use_published = True - tile_assembler_plugin = "OpenPypeTileAssembler" - asset_dependencies = False - priority = 50 - tile_priority = 50 - limit_groups = [] - jobInfo = {} - pluginInfo = {} - group = "none" - - def process(self, instance): - """Plugin entry point.""" - instance.data["toBeRenderedOn"] = "deadline" - context = instance.context - - self._instance = instance - self.payload_skeleton = copy.deepcopy(payload_skeleton_template) - - # get default deadline webservice url from deadline module - self.deadline_url = instance.context.data.get("defaultDeadline") - # if custom one is set in instance, use that - if instance.data.get("deadlineUrl"): - self.deadline_url = instance.data.get("deadlineUrl") - assert self.deadline_url, "Requires Deadline Webservice URL" - - # just using existing names from Setting - self._job_info = self.jobInfo - - self._plugin_info = self.pluginInfo - - self.limit_groups = self.limit - - context = instance.context - workspace = context.data["workspaceDir"] - anatomy = context.data['anatomy'] - instance.data["toBeRenderedOn"] = "deadline" - - filepath = None - patches = ( - context.data["project_settings"].get( - "deadline", {}).get( - "publish", {}).get( - "MayaSubmitDeadline", {}).get( - "scene_patches", {}) - ) - - # Handle render/export from published scene or not ------------------ - if self.use_published: - patched_files = [] - for i in context: - if "workfile" not in i.data["families"]: - continue - assert i.data["publish"] is True, ( - "Workfile (scene) must be published along") - template_data = i.data.get("anatomyData") - rep = i.data.get("representations")[0].get("name") - template_data["representation"] = rep - template_data["ext"] = rep - template_data["comment"] = None - anatomy_filled = anatomy.format(template_data) - template_filled = anatomy_filled["publish"]["path"] - filepath = os.path.normpath(template_filled) - self.log.info("Using published scene for render {}".format( - filepath)) - - if not os.path.exists(filepath): - self.log.error("published scene does not exist!") - raise - # now we need to switch scene in expected files - # because token will now point to published - # scene file and that might differ from current one - new_scene = os.path.splitext( - os.path.basename(filepath))[0] - orig_scene = os.path.splitext( - os.path.basename(context.data["currentFile"]))[0] - exp = instance.data.get("expectedFiles") - - if isinstance(exp[0], dict): - # we have aovs and we need to iterate over them - new_exp = {} - for aov, files in exp[0].items(): - replaced_files = [] - for f in files: - replaced_files.append( - f.replace(orig_scene, new_scene) - ) - new_exp[aov] = replaced_files - instance.data["expectedFiles"] = [new_exp] - else: - new_exp = [] - for f in exp: - new_exp.append( - f.replace(orig_scene, 
new_scene) - ) - instance.data["expectedFiles"] = [new_exp] - - if instance.data.get("publishRenderMetadataFolder"): - instance.data["publishRenderMetadataFolder"] = \ - instance.data["publishRenderMetadataFolder"].replace( - orig_scene, new_scene) - self.log.info("Scene name was switched {} -> {}".format( - orig_scene, new_scene - )) - # patch workfile is needed - if filepath not in patched_files: - patched_file = self._patch_workfile(filepath, patches) - patched_files.append(patched_file) - - all_instances = [] - for result in context.data["results"]: - if (result["instance"] is not None and - result["instance"] not in all_instances): # noqa: E128 - all_instances.append(result["instance"]) - - # fallback if nothing was set - if not filepath: - self.log.warning("Falling back to workfile") - filepath = context.data["currentFile"] - - self.log.debug(filepath) - - # Gather needed data ------------------------------------------------ - default_render_file = instance.context.data.get('project_settings')\ - .get('maya')\ - .get('RenderSettings')\ - .get('default_render_image_folder') - filename = os.path.basename(filepath) - comment = context.data.get("comment", "") - dirname = os.path.join(workspace, default_render_file) - renderlayer = instance.data['setMembers'] # rs_beauty - deadline_user = context.data.get("user", getpass.getuser()) - - # Always use the original work file name for the Job name even when - # rendering is done from the published Work File. The original work - # file name is clearer because it can also have subversion strings, - # etc. which are stripped for the published file. - src_filename = os.path.basename(context.data["currentFile"]) - jobname = "%s - %s" % (src_filename, instance.name) - - # Get the variables depending on the renderer - render_variables = get_renderer_variables(renderlayer, dirname) - filename_0 = render_variables["filename_0"] - if self.use_published: - new_scene = os.path.splitext(filename)[0] - orig_scene = os.path.splitext( - os.path.basename(context.data["currentFile"]))[0] - filename_0 = render_variables["filename_0"].replace( - orig_scene, new_scene) - - output_filename_0 = filename_0 - - # this is needed because renderman handles directory and file - # prefixes separately - if self._instance.data["renderer"] == "renderman": - dirname = os.path.dirname(output_filename_0) - - # Create render folder ---------------------------------------------- - try: - # Ensure render folder exists - os.makedirs(dirname) - except OSError: - pass - - # Fill in common data to payload ------------------------------------ - payload_data = {} - payload_data["filename"] = filename - payload_data["filepath"] = filepath - payload_data["jobname"] = jobname - payload_data["deadline_user"] = deadline_user - payload_data["comment"] = comment - payload_data["output_filename_0"] = output_filename_0 - payload_data["render_variables"] = render_variables - payload_data["renderlayer"] = renderlayer - payload_data["workspace"] = workspace - payload_data["dirname"] = dirname - - self.log.info("--- Submission data:") - for k, v in payload_data.items(): - self.log.info("- {}: {}".format(k, v)) - self.log.info("-" * 20) - - frame_pattern = self.payload_skeleton["JobInfo"]["Frames"] - self.payload_skeleton["JobInfo"]["Frames"] = frame_pattern.format( - start=int(self._instance.data["frameStartHandle"]), - end=int(self._instance.data["frameEndHandle"]), - step=int(self._instance.data["byFrameStep"])) - - self.payload_skeleton["JobInfo"]["Plugin"] = self._instance.data.get( - 
"mayaRenderPlugin", "MayaBatch") - - self.payload_skeleton["JobInfo"]["BatchName"] = src_filename - # Job name, as seen in Monitor - self.payload_skeleton["JobInfo"]["Name"] = jobname - # Arbitrary username, for visualisation in Monitor - self.payload_skeleton["JobInfo"]["UserName"] = deadline_user - # Set job priority - self.payload_skeleton["JobInfo"]["Priority"] = \ - self._instance.data.get("priority", self.priority) - - if self.group != "none" and self.group: - self.payload_skeleton["JobInfo"]["Group"] = self.group - - if self.limit_groups: - self.payload_skeleton["JobInfo"]["LimitGroups"] = \ - ",".join(self.limit_groups) - # Optional, enable double-click to preview rendered - # frames from Deadline Monitor - self.payload_skeleton["JobInfo"]["OutputDirectory0"] = \ - os.path.dirname(output_filename_0).replace("\\", "/") - self.payload_skeleton["JobInfo"]["OutputFilename0"] = \ - output_filename_0.replace("\\", "/") - - self.payload_skeleton["JobInfo"]["Comment"] = comment - self.payload_skeleton["PluginInfo"]["RenderLayer"] = renderlayer - - self.payload_skeleton["PluginInfo"]["RenderSetupIncludeLights"] = instance.data.get("renderSetupIncludeLights") # noqa - # Adding file dependencies. - dependencies = instance.context.data["fileDependencies"] - dependencies.append(filepath) - if self.asset_dependencies: - for dependency in dependencies: - key = "AssetDependency" + str(dependencies.index(dependency)) - self.payload_skeleton["JobInfo"][key] = dependency - - # Handle environments ----------------------------------------------- - # We need those to pass them to pype for it to set correct context - keys = [ - "FTRACK_API_KEY", - "FTRACK_API_USER", - "FTRACK_SERVER", - "OPENPYPE_SG_USER", - "AVALON_PROJECT", - "AVALON_ASSET", - "AVALON_TASK", - "AVALON_APP_NAME", - "OPENPYPE_DEV", - "OPENPYPE_LOG_NO_COLORS", - "OPENPYPE_VERSION" - ] - # Add mongo url if it's enabled - if instance.context.data.get("deadlinePassMongoUrl"): - keys.append("OPENPYPE_MONGO") - - environment = dict({key: os.environ[key] for key in keys - if key in os.environ}, **legacy_io.Session) - environment["OPENPYPE_LOG_NO_COLORS"] = "1" - environment["OPENPYPE_MAYA_VERSION"] = cmds.about(v=True) - # to recognize job from PYPE for turning Event On/Off - environment["OPENPYPE_RENDER_JOB"] = "1" - self.payload_skeleton["JobInfo"].update({ - "EnvironmentKeyValue%d" % index: "{key}={value}".format( - key=key, - value=environment[key] - ) for index, key in enumerate(environment) - }) - # Add options from RenderGlobals------------------------------------- - render_globals = instance.data.get("renderGlobals", {}) - self.payload_skeleton["JobInfo"].update(render_globals) - - # Submit preceding export jobs ------------------------------------- - export_job = None - assert not all(x in instance.data["families"] - for x in ['vrayscene', 'assscene']), ( - "Vray Scene and Ass Scene options are mutually exclusive") - if "vrayscene" in instance.data["families"]: - export_job = self._submit_export(payload_data, "vray") - - if "assscene" in instance.data["families"]: - export_job = self._submit_export(payload_data, "arnold") - - # Prepare main render job ------------------------------------------- - if "vrayscene" in instance.data["families"]: - payload = self._get_vray_render_payload(payload_data) - elif "assscene" in instance.data["families"]: - payload = self._get_arnold_render_payload(payload_data) - else: - payload = self._get_maya_payload(payload_data) - - # Add export job as dependency -------------------------------------- - if 
export_job:
-            payload["JobInfo"]["JobDependency0"] = export_job
-
-        # Add list of expected files to job ---------------------------------
-        exp = instance.data.get("expectedFiles")
-        exp_index = 0
-        output_filenames = {}
-
-        if isinstance(exp[0], dict):
-            # we have aovs and we need to iterate over them
-            for _aov, files in exp[0].items():
-                col, rem = clique.assemble(files)
-                if not col and rem:
-                    # we couldn't find any collections but have
-                    # individual files.
-                    assert len(rem) == 1, ("Found multiple non related files "
-                                           "to render, don't know what to do "
-                                           "with them.")
-                    output_file = rem[0]
-                    if not instance.data.get("tileRendering"):
-                        payload['JobInfo']['OutputFilename' + str(exp_index)] = output_file  # noqa: E501
-                else:
-                    output_file = col[0].format('{head}{padding}{tail}')
-                    if not instance.data.get("tileRendering"):
-                        payload['JobInfo']['OutputFilename' + str(exp_index)] = output_file  # noqa: E501
-
-                output_filenames['OutputFilename' + str(exp_index)] = output_file  # noqa: E501
-                exp_index += 1
-        else:
-            col, rem = clique.assemble(exp)
-            if not col and rem:
-                # we couldn't find any collections but have
-                # individual files.
-                assert len(rem) == 1, ("Found multiple non related files "
-                                       "to render, don't know what to do "
-                                       "with them.")
-
-                output_file = rem[0]
-                if not instance.data.get("tileRendering"):
-                    payload['JobInfo']['OutputFilename' + str(exp_index)] = output_file  # noqa: E501
-            else:
-                output_file = col[0].format('{head}{padding}{tail}')
-                if not instance.data.get("tileRendering"):
-                    payload['JobInfo']['OutputFilename' + str(exp_index)] = output_file  # noqa: E501
-
-            output_filenames['OutputFilename' + str(exp_index)] = output_file
-
-        plugin = payload["JobInfo"]["Plugin"]
-        self.log.info("using render plugin : {}".format(plugin))
-
-        # Store output dir for unified publisher (filesequence)
-        instance.data["outputDir"] = os.path.dirname(output_filename_0)
-
-        self.preflight_check(instance)
-
-        # add jobInfo and pluginInfo variables from Settings
-        payload["JobInfo"].update(self._job_info)
-        payload["PluginInfo"].update(self._plugin_info)
-
-        # Prepare tiles data ------------------------------------------------
-        if instance.data.get("tileRendering"):
-            # if we have sequence of files, we need to create tile job for
-            # every frame
-
-            payload["JobInfo"]["TileJob"] = True
-            payload["JobInfo"]["TileJobTilesInX"] = instance.data.get("tilesX")
-            payload["JobInfo"]["TileJobTilesInY"] = instance.data.get("tilesY")
-            payload["PluginInfo"]["ImageHeight"] = instance.data.get("resolutionHeight")  # noqa: E501
-            payload["PluginInfo"]["ImageWidth"] = instance.data.get("resolutionWidth")  # noqa: E501
-            payload["PluginInfo"]["RegionRendering"] = True
-
-            assembly_payload = {
-                "AuxFiles": [],
-                "JobInfo": {
-                    "BatchName": payload["JobInfo"]["BatchName"],
-                    "Frames": 1,
-                    "Name": "{} - Tile Assembly Job".format(
-                        payload["JobInfo"]["Name"]),
-                    "OutputDirectory0":
-                        payload["JobInfo"]["OutputDirectory0"].replace(
-                            "\\", "/"),
-                    "Plugin": self.tile_assembler_plugin,
-                    "MachineLimit": 1
-                },
-                "PluginInfo": {
-                    "CleanupTiles": 1,
-                    "ErrorOnMissing": True
-                }
-            }
-            assembly_payload["JobInfo"].update(output_filenames)
-            assembly_payload["JobInfo"]["Priority"] = self._instance.data.get(
-                "tile_priority", self.tile_priority)
-            assembly_payload["JobInfo"]["UserName"] = deadline_user
-
-            frame_payloads = []
-            assembly_payloads = []
-
-            R_FRAME_NUMBER = re.compile(r".+\.(?P<frame>[0-9]+)\..+")  # noqa: N806, E501
-            REPL_FRAME_NUMBER = re.compile(r"(.+\.)([0-9]+)(\..+)")  # noqa: N806, E501
-
-            if
isinstance(exp[0], dict): - # we have aovs and we need to iterate over them - # get files from `beauty` - files = exp[0].get("beauty") - # assembly files are used for assembly jobs as we need to put - # together all AOVs - assembly_files = list( - itertools.chain.from_iterable( - [f for _, f in exp[0].items()])) - if not files: - # if beauty doesn't exists, use first aov we found - files = exp[0].get(list(exp[0].keys())[0]) - else: - files = exp - assembly_files = files - - frame_jobs = {} - - file_index = 1 - for file in files: - frame = re.search(R_FRAME_NUMBER, file).group("frame") - new_payload = copy.deepcopy(payload) - new_payload["JobInfo"]["Name"] = \ - "{} (Frame {} - {} tiles)".format( - payload["JobInfo"]["Name"], - frame, - instance.data.get("tilesX") * instance.data.get("tilesY") # noqa: E501 - ) - self.log.info( - "... preparing job {}".format( - new_payload["JobInfo"]["Name"])) - new_payload["JobInfo"]["TileJobFrame"] = frame - - tiles_data = _format_tiles( - file, 0, - instance.data.get("tilesX"), - instance.data.get("tilesY"), - instance.data.get("resolutionWidth"), - instance.data.get("resolutionHeight"), - payload["PluginInfo"]["OutputFilePrefix"] - )[0] - new_payload["JobInfo"].update(tiles_data["JobInfo"]) - new_payload["PluginInfo"].update(tiles_data["PluginInfo"]) - - self.log.info("hashing {} - {}".format(file_index, file)) - job_hash = hashlib.sha256( - ("{}_{}".format(file_index, file)).encode("utf-8")) - frame_jobs[frame] = job_hash.hexdigest() - new_payload["JobInfo"]["ExtraInfo0"] = job_hash.hexdigest() - new_payload["JobInfo"]["ExtraInfo1"] = file - - frame_payloads.append(new_payload) - file_index += 1 - - file_index = 1 - for file in assembly_files: - frame = re.search(R_FRAME_NUMBER, file).group("frame") - new_assembly_payload = copy.deepcopy(assembly_payload) - new_assembly_payload["JobInfo"]["Name"] = \ - "{} (Frame {})".format( - assembly_payload["JobInfo"]["Name"], - frame) - new_assembly_payload["JobInfo"]["OutputFilename0"] = re.sub( - REPL_FRAME_NUMBER, - "\\1{}\\3".format("#" * len(frame)), file) - - new_assembly_payload["PluginInfo"]["Renderer"] = self._instance.data["renderer"] # noqa: E501 - new_assembly_payload["JobInfo"]["ExtraInfo0"] = frame_jobs[frame] # noqa: E501 - new_assembly_payload["JobInfo"]["ExtraInfo1"] = file - assembly_payloads.append(new_assembly_payload) - file_index += 1 - - self.log.info( - "Submitting tile job(s) [{}] ...".format(len(frame_payloads))) - - url = "{}/api/jobs".format(self.deadline_url) - tiles_count = instance.data.get("tilesX") * instance.data.get("tilesY") # noqa: E501 - - for tile_job in frame_payloads: - response = requests_post(url, json=tile_job) - if not response.ok: - raise Exception(response.text) - - job_id = response.json()["_id"] - hash = response.json()["Props"]["Ex0"] - - for assembly_job in assembly_payloads: - if assembly_job["JobInfo"]["ExtraInfo0"] == hash: - assembly_job["JobInfo"]["JobDependency0"] = job_id - - for assembly_job in assembly_payloads: - file = assembly_job["JobInfo"]["ExtraInfo1"] - # write assembly job config files - now = datetime.now() - - config_file = os.path.join( - os.path.dirname(output_filename_0), - "{}_config_{}.txt".format( - os.path.splitext(file)[0], - now.strftime("%Y_%m_%d_%H_%M_%S") - ) - ) - - try: - if not os.path.isdir(os.path.dirname(config_file)): - os.makedirs(os.path.dirname(config_file)) - except OSError: - # directory is not available - self.log.warning( - "Path is unreachable: `{}`".format( - os.path.dirname(config_file))) - - # add config file as job 
auxFile - assembly_job["AuxFiles"] = [config_file] - - with open(config_file, "w") as cf: - print("TileCount={}".format(tiles_count), file=cf) - print("ImageFileName={}".format(file), file=cf) - print("ImageWidth={}".format( - instance.data.get("resolutionWidth")), file=cf) - print("ImageHeight={}".format( - instance.data.get("resolutionHeight")), file=cf) - - tiles = _format_tiles( - file, 0, - instance.data.get("tilesX"), - instance.data.get("tilesY"), - instance.data.get("resolutionWidth"), - instance.data.get("resolutionHeight"), - payload["PluginInfo"]["OutputFilePrefix"] - )[1] - sorted(tiles) - for k, v in tiles.items(): - print("{}={}".format(k, v), file=cf) - - job_idx = 1 - instance.data["assemblySubmissionJobs"] = [] - for ass_job in assembly_payloads: - self.log.info("submitting assembly job {} of {}".format( - job_idx, len(assembly_payloads) - )) - self.log.debug(json.dumps(ass_job, indent=4, sort_keys=True)) - response = requests_post(url, json=ass_job) - if not response.ok: - raise Exception(response.text) - - instance.data["assemblySubmissionJobs"].append( - response.json()["_id"]) - job_idx += 1 - - instance.data["jobBatchName"] = payload["JobInfo"]["BatchName"] - self.log.info("Setting batch name on instance: {}".format( - instance.data["jobBatchName"])) - else: - # Submit job to farm -------------------------------------------- - self.log.info("Submitting ...") - self.log.debug(json.dumps(payload, indent=4, sort_keys=True)) - - # E.g. http://192.168.0.1:8082/api/jobs - url = "{}/api/jobs".format(self.deadline_url) - response = requests_post(url, json=payload) - if not response.ok: - raise Exception(response.text) - instance.data["deadlineSubmissionJob"] = response.json() - - def _get_maya_payload(self, data): - payload = copy.deepcopy(self.payload_skeleton) - - if not self.asset_dependencies: - job_info_ext = {} - - else: - job_info_ext = { - # Asset dependency to wait for at least the scene file to sync. - "AssetDependency0": data["filepath"], - } - - renderer = self._instance.data["renderer"] - - # This hack is here because of how Deadline handles Renderman version. - # it considers everything with `renderman` set as version older than - # Renderman 22, and so if we are using renderman > 21 we need to set - # renderer string on the job to `renderman22`. We will have to change - # this when Deadline releases new version handling this. 
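# For illustration only: a minimal sketch of the version check implemented
# just below, assuming 'cfg().build_info.version()' returns a dotted string
# such as "24.1":
#     major = int("24.1".split(".")[0])  # -> 24
#     renderer = "renderman22" if major > 22 else "renderman"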
- if self._instance.data["renderer"] == "renderman": - try: - from rfm2.config import cfg # noqa - except ImportError: - raise Exception("Cannot determine renderman version") - - rman_version = cfg().build_info.version() # type: str - if int(rman_version.split(".")[0]) > 22: - renderer = "renderman22" - - plugin_info = { - "SceneFile": data["filepath"], - # Output directory and filename - "OutputFilePath": data["dirname"].replace("\\", "/"), - "OutputFilePrefix": data["render_variables"]["filename_prefix"], # noqa: E501 - - # Only render layers are considered renderable in this pipeline - "UsingRenderLayers": True, - - # Render only this layer - "RenderLayer": data["renderlayer"], - - # Determine which renderer to use from the file itself - "Renderer": renderer, - - # Resolve relative references - "ProjectPath": data["workspace"], - } - payload["JobInfo"].update(job_info_ext) - payload["PluginInfo"].update(plugin_info) - return payload - - def _get_vray_export_payload(self, data): - payload = copy.deepcopy(self.payload_skeleton) - vray_settings = cmds.ls(type="VRaySettingsNode") - node = vray_settings[0] - template = cmds.getAttr("{}.vrscene_filename".format(node)) - scene, _ = os.path.splitext(data["filename"]) - first_file = self.format_vray_output_filename(scene, template) - first_file = "{}/{}".format(data["workspace"], first_file) - output = os.path.dirname(first_file) - job_info_ext = { - # Job name, as seen in Monitor - "Name": "Export {} [{}-{}]".format( - data["jobname"], - int(self._instance.data["frameStartHandle"]), - int(self._instance.data["frameEndHandle"])), - - "Plugin": self._instance.data.get( - "mayaRenderPlugin", "MayaPype"), - "FramesPerTask": self._instance.data.get("framesPerTask", 1) - } - - plugin_info_ext = { - # Renderer - "Renderer": "vray", - # Input - "SceneFile": data["filepath"], - "SkipExistingFrames": True, - "UsingRenderLayers": True, - "UseLegacyRenderLayers": True, - "RenderLayer": data["renderlayer"], - "ProjectPath": data["workspace"], - "OutputFilePath": output - } - - payload["JobInfo"].update(job_info_ext) - payload["PluginInfo"].update(plugin_info_ext) - return payload - - def _get_arnold_export_payload(self, data): - - try: - from openpype.scripts import export_maya_ass_job - except Exception: - raise AssertionError( - "Expected module 'export_maya_ass_job' to be available") - - module_path = export_maya_ass_job.__file__ - if module_path.endswith(".pyc"): - module_path = module_path[: -len(".pyc")] + ".py" - - script = os.path.normpath(module_path) - - payload = copy.deepcopy(self.payload_skeleton) - job_info_ext = { - # Job name, as seen in Monitor - "Name": "Export {} [{}-{}]".format( - data["jobname"], - int(self._instance.data["frameStartHandle"]), - int(self._instance.data["frameEndHandle"])), - - "Plugin": "Python", - "FramesPerTask": self._instance.data.get("framesPerTask", 1), - "Frames": 1 - } - - plugin_info_ext = { - "Version": "3.6", - "ScriptFile": script, - "Arguments": "", - "SingleFrameOnly": "True", - } - payload["JobInfo"].update(job_info_ext) - payload["PluginInfo"].update(plugin_info_ext) - - envs = [ - v - for k, v in payload["JobInfo"].items() - if k.startswith("EnvironmentKeyValue") - ] - - # add app name to environment - envs.append( - "AVALON_APP_NAME={}".format(os.environ.get("AVALON_APP_NAME"))) - envs.append( - "OPENPYPE_ASS_EXPORT_RENDER_LAYER={}".format(data["renderlayer"])) - envs.append( - "OPENPYPE_ASS_EXPORT_SCENE_FILE={}".format(data["filepath"])) - envs.append( - "OPENPYPE_ASS_EXPORT_OUTPUT={}".format( - 
payload['JobInfo']['OutputFilename0'])) - envs.append( - "OPENPYPE_ASS_EXPORT_START={}".format( - int(self._instance.data["frameStartHandle"]))) - envs.append( - "OPENPYPE_ASS_EXPORT_END={}".format( - int(self._instance.data["frameEndHandle"]))) - envs.append( - "OPENPYPE_ASS_EXPORT_STEP={}".format(1)) - - for i, e in enumerate(envs): - payload["JobInfo"]["EnvironmentKeyValue{}".format(i)] = e - return payload - - def _get_vray_render_payload(self, data): - payload = copy.deepcopy(self.payload_skeleton) - vray_settings = cmds.ls(type="VRaySettingsNode") - node = vray_settings[0] - template = cmds.getAttr("{}.vrscene_filename".format(node)) - # "vrayscene//_/" - - scene, _ = os.path.splitext(data["filename"]) - first_file = self.format_vray_output_filename(scene, template) - first_file = "{}/{}".format(data["workspace"], first_file) - job_info_ext = { - "Name": "Render {} [{}-{}]".format( - data["jobname"], - int(self._instance.data["frameStartHandle"]), - int(self._instance.data["frameEndHandle"])), - - "Plugin": "Vray", - "OverrideTaskExtraInfoNames": False, - } - - plugin_info = { - "InputFilename": first_file, - "SeparateFilesPerFrame": True, - "VRayEngine": "V-Ray", - - "Width": self._instance.data["resolutionWidth"], - "Height": self._instance.data["resolutionHeight"], - "OutputFilePath": payload["JobInfo"]["OutputDirectory0"], - "OutputFileName": payload["JobInfo"]["OutputFilename0"] - } - - payload["JobInfo"].update(job_info_ext) - payload["PluginInfo"].update(plugin_info) - return payload - - def _get_arnold_render_payload(self, data): - payload = copy.deepcopy(self.payload_skeleton) - ass_file, _ = os.path.splitext(data["output_filename_0"]) - first_file = ass_file + ".ass" - job_info_ext = { - "Name": "Render {} [{}-{}]".format( - data["jobname"], - int(self._instance.data["frameStartHandle"]), - int(self._instance.data["frameEndHandle"])), - - "Plugin": "Arnold", - "OverrideTaskExtraInfoNames": False, - } - - plugin_info = { - "ArnoldFile": first_file, - } - - payload["JobInfo"].update(job_info_ext) - payload["PluginInfo"].update(plugin_info) - return payload - - def _submit_export(self, data, format): - if format == "vray": - payload = self._get_vray_export_payload(data) - self.log.info("Submitting vrscene export job.") - elif format == "arnold": - payload = self._get_arnold_export_payload(data) - self.log.info("Submitting ass export job.") - - url = "{}/api/jobs".format(self.deadline_url) - response = requests_post(url, json=payload) - if not response.ok: - self.log.error("Submition failed!") - self.log.error(response.status_code) - self.log.error(response.content) - self.log.debug(payload) - raise RuntimeError(response.text) - - dependency = response.json() - return dependency["_id"] - - def preflight_check(self, instance): - """Ensure the startFrame, endFrame and byFrameStep are integers.""" - for key in ("frameStartHandle", "frameEndHandle", "byFrameStep"): - value = instance.data[key] - - if int(value) == value: - continue - - self.log.warning( - "%f=%d was rounded off to nearest integer" - % (value, int(value)) - ) - - def format_vray_output_filename(self, filename, template, dir=False): - """Format the expected output file of the Export job. 
- - Example: - /_/ - "shot010_v006/shot010_v006_CHARS/CHARS" - - Args: - instance: - filename(str): - dir(bool): - - Returns: - str - - """ - def smart_replace(string, key_values): - new_string = string - for key, value in key_values.items(): - new_string = new_string.replace(key, value) - return new_string - - # Ensure filename has no extension - file_name, _ = os.path.splitext(filename) - - layer = self._instance.data['setMembers'] - - # Reformat without tokens - output_path = smart_replace( - template, - {"": file_name, - "": layer}) - - if dir: - return output_path.replace("\\", "/") - - start_frame = int(self._instance.data["frameStartHandle"]) - filename_zero = "{}_{:04d}.vrscene".format(output_path, start_frame) - - result = filename_zero.replace("\\", "/") - - return result - - def _patch_workfile(self, file, patches): - # type: (str, dict) -> [str, None] - """Patch Maya scene. - - This will take list of patches (lines to add) and apply them to - *published* Maya scene file (that is used later for rendering). - - Patches are dict with following structure:: - { - "name": "Name of patch", - "regex": "regex of line before patch", - "line": "line to insert" - } - - Args: - file (str): File to patch. - patches (dict): Dictionary defining patches. - - Returns: - str: Patched file path or None - - """ - if os.path.splitext(file)[1].lower() != ".ma" or not patches: - return None - - compiled_regex = [re.compile(p["regex"]) for p in patches] - with open(file, "r+") as pf: - scene_data = pf.readlines() - for ln, line in enumerate(scene_data): - for i, r in enumerate(compiled_regex): - if re.match(r, line): - scene_data.insert(ln + 1, patches[i]["line"]) - pf.seek(0) - pf.writelines(scene_data) - pf.truncate() - self.log.info( - "Applied {} patch to scene.".format( - patches[i]["name"])) - return file diff --git a/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py b/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py index 336a56ec45..b09d2935ab 100644 --- a/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py @@ -114,6 +114,13 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin): instance.data["deadlineSubmissionJob"] = resp.json() instance.data["publishJobState"] = "Suspended" + # add to list of job Id + if not instance.data.get("bakingSubmissionJobs"): + instance.data["bakingSubmissionJobs"] = [] + + instance.data["bakingSubmissionJobs"].append( + resp.json()["_id"]) + # redefinition of families if "render.farm" in families: instance.data['family'] = 'write' diff --git a/openpype/modules/deadline/plugins/publish/submit_publish_job.py b/openpype/modules/deadline/plugins/publish/submit_publish_job.py index 379953c9e4..c9d1daffd1 100644 --- a/openpype/modules/deadline/plugins/publish/submit_publish_job.py +++ b/openpype/modules/deadline/plugins/publish/submit_publish_job.py @@ -296,6 +296,12 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): for assembly_id in instance.data.get("assemblySubmissionJobs"): payload["JobInfo"]["JobDependency{}".format(job_index)] = assembly_id # noqa: E501 job_index += 1 + elif instance.data.get("bakingSubmissionJobs"): + self.log.info("Adding baking submission jobs as dependencies...") + job_index = 0 + for assembly_id in instance.data["bakingSubmissionJobs"]: + payload["JobInfo"]["JobDependency{}".format(job_index)] = assembly_id # noqa: E501 + job_index += 1 else: payload["JobInfo"]["JobDependency0"] = job["_id"] @@ -694,9 +700,6 @@ 
class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): self.context = context self.anatomy = instance.context.data["anatomy"] - if hasattr(instance, "_log"): - data['_log'] = instance._log - asset = data.get("asset") or legacy_io.Session["AVALON_ASSET"] subset = data.get("subset") diff --git a/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py b/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py index c2426e0d78..f0a3ddd246 100644 --- a/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py +++ b/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py @@ -3,7 +3,7 @@ import requests import pyblish.api -from openpype.lib.delivery import collect_frames +from openpype.lib import collect_frames from openpype_modules.deadline.abstract_submit_deadline import requests_get diff --git a/openpype/modules/ftrack/event_handlers_server/action_prepare_project.py b/openpype/modules/ftrack/event_handlers_server/action_prepare_project.py index 713a4d9aba..332648cd02 100644 --- a/openpype/modules/ftrack/event_handlers_server/action_prepare_project.py +++ b/openpype/modules/ftrack/event_handlers_server/action_prepare_project.py @@ -1,10 +1,8 @@ import json import copy -from openpype.client import get_project -from openpype.api import ProjectSettings -from openpype.lib import create_project -from openpype.settings import SaveWarningExc +from openpype.client import get_project, create_project +from openpype.settings import ProjectSettings, SaveWarningExc from openpype_modules.ftrack.lib import ( ServerAction, diff --git a/openpype/modules/ftrack/event_handlers_user/action_create_project_structure.py b/openpype/modules/ftrack/event_handlers_user/action_create_project_structure.py index df914de854..7c896570b1 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_create_project_structure.py +++ b/openpype/modules/ftrack/event_handlers_user/action_create_project_structure.py @@ -1,7 +1,10 @@ import re +from openpype.pipeline.project_folders import ( + get_project_basic_paths, + create_project_folders, +) from openpype_modules.ftrack.lib import BaseAction, statics_icon -from openpype.api import get_project_basic_paths, create_project_folders class CreateProjectFolders(BaseAction): @@ -81,7 +84,7 @@ class CreateProjectFolders(BaseAction): } # Invoking OpenPype API to create the project folders - create_project_folders(basic_paths, project_name) + create_project_folders(project_name, basic_paths) self.create_ftrack_entities(basic_paths, project_entity) self.trigger_event( diff --git a/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py b/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py index 79d04a7854..c543dc8834 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py +++ b/openpype/modules/ftrack/event_handlers_user/action_delete_old_versions.py @@ -11,7 +11,11 @@ from openpype.client import ( get_versions, get_representations ) -from openpype.lib import StringTemplate, TemplateUnsolved +from openpype.lib import ( + StringTemplate, + TemplateUnsolved, + format_file_size, +) from openpype.pipeline import AvalonMongoDB, Anatomy from openpype_modules.ftrack.lib import BaseAction, statics_icon @@ -134,13 +138,6 @@ class DeleteOldVersions(BaseAction): "title": self.inteface_title } - def sizeof_fmt(self, num, suffix='B'): - for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']: - if abs(num) < 1024.0: - return 
"%3.1f%s%s" % (num, unit, suffix) - num /= 1024.0 - return "%.1f%s%s" % (num, 'Yi', suffix) - def launch(self, session, entities, event): values = event["data"].get("values") if not values: @@ -359,7 +356,7 @@ class DeleteOldVersions(BaseAction): dir_paths, file_paths_by_dir, delete=False ) - msg = "Total size of files: " + self.sizeof_fmt(size) + msg = "Total size of files: {}".format(format_file_size(size)) self.log.warning(msg) @@ -430,7 +427,7 @@ class DeleteOldVersions(BaseAction): "message": msg } - msg = "Total size of files deleted: " + self.sizeof_fmt(size) + msg = "Total size of files deleted: {}".format(format_file_size(size)) self.log.warning(msg) diff --git a/openpype/modules/ftrack/event_handlers_user/action_delivery.py b/openpype/modules/ftrack/event_handlers_user/action_delivery.py index eec245070c..a400c8f5f0 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_delivery.py +++ b/openpype/modules/ftrack/event_handlers_user/action_delivery.py @@ -10,19 +10,19 @@ from openpype.client import ( get_versions, get_representations ) -from openpype.pipeline import Anatomy from openpype_modules.ftrack.lib import BaseAction, statics_icon from openpype_modules.ftrack.lib.avalon_sync import CUST_ATTR_ID_KEY from openpype_modules.ftrack.lib.custom_attributes import ( query_custom_attributes ) from openpype.lib.dateutils import get_datetime_data -from openpype.lib.delivery import ( - path_from_representation, +from openpype.pipeline import Anatomy +from openpype.pipeline.load import get_representation_path_with_anatomy +from openpype.pipeline.delivery import ( get_format_dict, check_destination_path, - process_single_file, - process_sequence + deliver_single_file, + deliver_sequence, ) @@ -580,7 +580,7 @@ class Delivery(BaseAction): if frame: repre["context"]["frame"] = len(str(frame)) * "#" - repre_path = path_from_representation(repre, anatomy) + repre_path = get_representation_path_with_anatomy(repre, anatomy) # TODO add backup solution where root of path from component # is replaced with root args = ( @@ -594,9 +594,9 @@ class Delivery(BaseAction): self.log ) if not frame: - process_single_file(*args) + deliver_single_file(*args) else: - process_sequence(*args) + deliver_sequence(*args) return self.report(report_items) diff --git a/openpype/modules/ftrack/event_handlers_user/action_prepare_project.py b/openpype/modules/ftrack/event_handlers_user/action_prepare_project.py index e89595109e..e825198180 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_prepare_project.py +++ b/openpype/modules/ftrack/event_handlers_user/action_prepare_project.py @@ -1,10 +1,8 @@ import json import copy -from openpype.client import get_project -from openpype.api import ProjectSettings -from openpype.lib import create_project -from openpype.settings import SaveWarningExc +from openpype.client import get_project, create_project +from openpype.settings import ProjectSettings, SaveWarningExc from openpype_modules.ftrack.lib import ( BaseAction, diff --git a/openpype/modules/ftrack/plugins/publish/collect_ftrack_family.py b/openpype/modules/ftrack/plugins/publish/collect_ftrack_family.py index 5758068f86..576a7d36c4 100644 --- a/openpype/modules/ftrack/plugins/publish/collect_ftrack_family.py +++ b/openpype/modules/ftrack/plugins/publish/collect_ftrack_family.py @@ -8,7 +8,7 @@ Provides: import pyblish.api from openpype.pipeline import legacy_io -from openpype.lib.plugin_tools import filter_profiles +from openpype.lib import filter_profiles class 
CollectFtrackFamily(pyblish.api.InstancePlugin): diff --git a/openpype/modules/ftrack/plugins/publish/collect_username.py b/openpype/modules/ftrack/plugins/publish/collect_username.py index a9b746ea51..798f3960a8 100644 --- a/openpype/modules/ftrack/plugins/publish/collect_username.py +++ b/openpype/modules/ftrack/plugins/publish/collect_username.py @@ -1,5 +1,8 @@ """Loads publishing context from json and continues in publish process. +Should run before 'CollectAnatomyContextData' so the user on context is +changed before it's stored to context anatomy data or instance anatomy data. + Requires: anatomy -> context["anatomy"] *(pyblish.api.CollectorOrder - 0.11) @@ -13,7 +16,7 @@ import os import pyblish.api -class CollectUsername(pyblish.api.ContextPlugin): +class CollectUsernameForWebpublish(pyblish.api.ContextPlugin): """ Translates user email to Ftrack username. @@ -32,10 +35,8 @@ class CollectUsername(pyblish.api.ContextPlugin): hosts = ["webpublisher", "photoshop"] targets = ["remotepublish", "filespublish", "tvpaint_worker"] - _context = None - def process(self, context): - self.log.info("CollectUsername") + self.log.info("{}".format(self.__class__.__name__)) os.environ["FTRACK_API_USER"] = os.environ["FTRACK_BOT_API_USER"] os.environ["FTRACK_API_KEY"] = os.environ["FTRACK_BOT_API_KEY"] @@ -54,12 +55,14 @@ class CollectUsername(pyblish.api.ContextPlugin): return session = ftrack_api.Session(auto_connect_event_hub=False) - user = session.query("User where email like '{}'".format(user_email)) + user = session.query( + "User where email like '{}'".format(user_email) + ).first() if not user: raise ValueError( "Couldn't find user with {} email".format(user_email)) - user = user[0] + username = user.get("username") self.log.debug("Resolved ftrack username:: {}".format(username)) os.environ["FTRACK_API_USER"] = username @@ -67,5 +70,4 @@ class CollectUsername(pyblish.api.ContextPlugin): burnin_name = username if '@' in burnin_name: burnin_name = burnin_name[:burnin_name.index('@')] - os.environ["WEBPUBLISH_OPENPYPE_USERNAME"] = burnin_name context.data["user"] = burnin_name diff --git a/openpype/modules/ftrack/plugins/publish/validate_custom_ftrack_attributes.py b/openpype/modules/ftrack/plugins/publish/validate_custom_ftrack_attributes.py index dc80bf4eb3..489f291c0f 100644 --- a/openpype/modules/ftrack/plugins/publish/validate_custom_ftrack_attributes.py +++ b/openpype/modules/ftrack/plugins/publish/validate_custom_ftrack_attributes.py @@ -1,5 +1,5 @@ import pyblish.api -import openpype.api +from openpype.pipeline.publish import ValidateContentsOrder class ValidateFtrackAttributes(pyblish.api.InstancePlugin): @@ -34,7 +34,7 @@ class ValidateFtrackAttributes(pyblish.api.InstancePlugin): """ label = "Validate Custom Ftrack Attributes" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder families = ["ftrack"] optional = True # Ignore standalone host, because it does not have an Ftrack entity diff --git a/openpype/modules/kitsu/utils/credentials.py b/openpype/modules/kitsu/utils/credentials.py index 0529380d6d..adcfb07cd5 100644 --- a/openpype/modules/kitsu/utils/credentials.py +++ b/openpype/modules/kitsu/utils/credentials.py @@ -5,6 +5,7 @@ from typing import Tuple import gazu from openpype.lib.local_settings import OpenPypeSecureRegistry +from openpype.lib import emit_event def validate_credentials( @@ -32,6 +33,8 @@ def validate_credentials( except gazu.exception.AuthFailedException: return False + emit_event("kitsu.user.logged", data={"username": login}, 
source="kitsu") + return True diff --git a/openpype/modules/kitsu/utils/update_op_with_zou.py b/openpype/modules/kitsu/utils/update_op_with_zou.py index e03cf2b30e..4a064f6a16 100644 --- a/openpype/modules/kitsu/utils/update_op_with_zou.py +++ b/openpype/modules/kitsu/utils/update_op_with_zou.py @@ -15,10 +15,10 @@ from openpype.client import ( get_assets, get_asset_by_id, get_asset_by_name, + create_project, ) from openpype.pipeline import AvalonMongoDB -from openpype.api import get_project_settings -from openpype.lib import create_project +from openpype.settings import get_project_settings from openpype.modules.kitsu.utils.credentials import validate_credentials @@ -166,50 +166,21 @@ def update_op_assets( # Substitute item type for general classification (assets or shots) if item_type in ["Asset", "AssetType"]: - substitute_item_type = "assets" + entity_root_asset_name = "Assets" elif item_type in ["Episode", "Sequence"]: - substitute_item_type = "shots" - else: - substitute_item_type = f"{item_type.lower()}s" - entity_parent_folders = [ - f - for f in project_module_settings["entities_root"] - .get(substitute_item_type) - .split("/") - if f - ] + entity_root_asset_name = "Shots" # Root parent folder if exist visual_parent_doc_id = ( asset_doc_ids[parent_zou_id]["_id"] if parent_zou_id else None ) if visual_parent_doc_id is None: - # Find root folder docs - root_folder_docs = get_assets( + # Find root folder doc ("Assets" or "Shots") + root_folder_doc = get_asset_by_name( project_name, - asset_names=[entity_parent_folders[-1]], + asset_name=entity_root_asset_name, fields=["_id", "data.root_of"], ) - # NOTE: Not sure why it's checking for entity type? - # OP3 does not support multiple assets with same names so type - # filtering is irelevant. - # This way mimics previous implementation: - # ``` - # root_folder_doc = dbcon.find_one( - # { - # "type": "asset", - # "name": entity_parent_folders[-1], - # "data.root_of": substitute_item_type, - # }, - # ["_id"], - # ) - # ``` - root_folder_doc = None - for folder_doc in root_folder_docs: - root_of = folder_doc.get("data", {}).get("root_of") - if root_of == substitute_item_type: - root_folder_doc = folder_doc - break if root_folder_doc: visual_parent_doc_id = root_folder_doc["_id"] @@ -240,7 +211,7 @@ def update_op_assets( item_name = item["name"] # Set root folders parents - item_data["parents"] = entity_parent_folders + item_data["parents"] + item_data["parents"] = [entity_root_asset_name] + item_data["parents"] # Update 'data' different in zou DB updated_data = { @@ -278,7 +249,7 @@ def write_project_to_op(project: dict, dbcon: AvalonMongoDB) -> UpdateOne: project_doc = get_project(project_name) if not project_doc: print(f"Creating project '{project_name}'") - project_doc = create_project(project_name, project_name, dbcon=dbcon) + project_doc = create_project(project_name, project_name) # Project data and tasks project_data = project_doc["data"] or {} @@ -318,13 +289,13 @@ def write_project_to_op(project: dict, dbcon: AvalonMongoDB) -> UpdateOne: ) -def sync_all_projects(login: str, password: str): +def sync_all_projects(login: str, password: str, ignore_projects: list = None): """Update all OP projects in DB with Zou data. 
Args: login (str): Kitsu user login password (str): Kitsu user password - + ignore_projects (list): List of unsynced project names Raises: gazu.exception.AuthFailedException: Wrong user login and/or password """ @@ -340,6 +311,8 @@ def sync_all_projects(login: str, password: str): dbcon.install() all_projects = gazu.project.all_open_projects() for project in all_projects: + if ignore_projects and project["name"] in ignore_projects: + continue sync_project_from_kitsu(dbcon, project) @@ -396,54 +369,30 @@ def sync_project_from_kitsu(dbcon: AvalonMongoDB, project: dict): zou_ids_and_asset_docs[project["id"]] = project_doc # Create entities root folders - project_module_settings = get_project_settings(project_name)["kitsu"] - for entity_type, root in project_module_settings["entities_root"].items(): - parent_folders = root.split("/") - direct_parent_doc = None - for i, folder in enumerate(parent_folders, 1): - parent_doc = get_asset_by_name( - project_name, folder, fields=["_id", "data.root_of"] - ) - # NOTE: Not sure why it's checking for entity type? - # OP3 does not support multiple assets with same names so type - # filtering is irelevant. - # Also all of the entities could find be queried at once using - # 'get_assets'. - # This way mimics previous implementation: - # ``` - # parent_doc = dbcon.find_one( - # {"type": "asset", "name": folder, "data.root_of": entity_type} - # ) - # ``` - if ( - parent_doc - and parent_doc.get("data", {}).get("root_of") != entity_type - ): - parent_doc = None - - if not parent_doc: - direct_parent_doc = dbcon.insert_one( - { - "name": folder, - "type": "asset", - "schema": "openpype:asset-3.0", - "data": { - "root_of": entity_type, - "parents": parent_folders[:i], - "visualParent": direct_parent_doc.inserted_id - if direct_parent_doc - else None, - "tasks": {}, - }, - } - ) + to_insert = [ + { + "name": r, + "type": "asset", + "schema": "openpype:asset-3.0", + "data": { + "root_of": r, + "tasks": {}, + }, + } + for r in ["Assets", "Shots"] + if not get_asset_by_name( + project_name, r, fields=["_id", "data.root_of"] + ) + ] # Create - to_insert = [ - create_op_asset(item) - for item in all_entities - if item["id"] not in zou_ids_and_asset_docs.keys() - ] + to_insert.extend( + [ + create_op_asset(item) + for item in all_entities + if item["id"] not in zou_ids_and_asset_docs.keys() + ] + ) if to_insert: # Insert doc in DB dbcon.insert_many(to_insert) diff --git a/openpype/modules/shotgrid/plugins/publish/validate_shotgrid_user.py b/openpype/modules/shotgrid/plugins/publish/validate_shotgrid_user.py index c14c980e2a..48b320e15e 100644 --- a/openpype/modules/shotgrid/plugins/publish/validate_shotgrid_user.py +++ b/openpype/modules/shotgrid/plugins/publish/validate_shotgrid_user.py @@ -1,5 +1,5 @@ import pyblish.api -import openpype.api +from openpype.pipeline.publish import ValidateContentsOrder class ValidateShotgridUser(pyblish.api.ContextPlugin): @@ -8,7 +8,7 @@ class ValidateShotgridUser(pyblish.api.ContextPlugin): """ label = "Validate Shotgrid User" - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder def process(self, context): sg = context.data.get("shotgridSession") diff --git a/openpype/modules/slack/plugins/publish/integrate_slack_api.py b/openpype/modules/slack/plugins/publish/integrate_slack_api.py index c3b288f0cd..643e55915b 100644 --- a/openpype/modules/slack/plugins/publish/integrate_slack_api.py +++ b/openpype/modules/slack/plugins/publish/integrate_slack_api.py @@ -95,13 +95,15 @@ class 
IntegrateSlackAPI(pyblish.api.InstancePlugin): Reviews might be large, so allow only adding link to message instead of uploading only. """ + fill_data = copy.deepcopy(instance.context.data["anatomyData"]) + username = fill_data.get("user") fill_pairs = [ ("asset", instance.data.get("asset", fill_data.get("asset"))), ("subset", instance.data.get("subset", fill_data.get("subset"))), - ("username", instance.data.get("username", - fill_data.get("username"))), + ("user", username), + ("username", username), ("app", instance.data.get("app", fill_data.get("app"))), ("family", instance.data.get("family", fill_data.get("family"))), ("version", str(instance.data.get("version", @@ -110,13 +112,19 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin): if review_path: fill_pairs.append(("review_filepath", review_path)) - task_data = instance.data.get("task") - if not task_data: - task_data = fill_data.get("task") - for key, value in task_data.items(): - fill_key = "task[{}]".format(key) - fill_pairs.append((fill_key, value)) - fill_pairs.append(("task", task_data["name"])) + task_data = fill_data.get("task") + if task_data: + if ( + "{task}" in message_templ + or "{Task}" in message_templ + or "{TASK}" in message_templ + ): + fill_pairs.append(("task", task_data["name"])) + + else: + for key, value in task_data.items(): + fill_key = "task[{}]".format(key) + fill_pairs.append((fill_key, value)) self.log.debug("fill_pairs ::{}".format(fill_pairs)) multiple_case_variants = prepare_template_data(fill_pairs) diff --git a/openpype/modules/sync_server/sync_server_module.py b/openpype/modules/sync_server/sync_server_module.py index 634b68c55f..a478faa9ef 100644 --- a/openpype/modules/sync_server/sync_server_module.py +++ b/openpype/modules/sync_server/sync_server_module.py @@ -1,11 +1,16 @@ import os -from bson.objectid import ObjectId +import sys +import time from datetime import datetime import threading import platform import copy +import signal from collections import deque, defaultdict +import click +from bson.objectid import ObjectId + from openpype.client import get_projects from openpype.modules import OpenPypeModule from openpype_interfaces import ITrayModule @@ -2080,3 +2085,46 @@ class SyncServerModule(OpenPypeModule, ITrayModule): settings ('presets') """ return presets[project_name]['sites'][site_name]['root'] + + def cli(self, click_group): + click_group.add_command(cli_main) + + +@click.group(SyncServerModule.name, help="SyncServer module related commands.") +def cli_main(): + pass + + +@cli_main.command() +@click.option( + "-a", + "--active_site", + required=True, + help="Name of active site") +def syncservice(active_site): + """Launch sync server under entered site. + + This should ideally be run by a system service (such as systemd or upstart + on Linux, or a Windows service). + """ + + from openpype.modules import ModulesManager + + os.environ["OPENPYPE_LOCAL_ID"] = active_site + + def signal_handler(sig, frame): + print("You pressed Ctrl+C. 
Process ended.") + sync_server_module.server_exit() + sys.exit(0) + + signal.signal(signal.SIGINT, signal_handler) + signal.signal(signal.SIGTERM, signal_handler) + + manager = ModulesManager() + sync_server_module = manager.modules_by_name["sync_server"] + + sync_server_module.server_init() + sync_server_module.server_start() + + while True: + time.sleep(1.0) diff --git a/openpype/pipeline/create/creator_plugins.py b/openpype/pipeline/create/creator_plugins.py index 9e1530a6a7..bf2fdd2c5f 100644 --- a/openpype/pipeline/create/creator_plugins.py +++ b/openpype/pipeline/create/creator_plugins.py @@ -9,7 +9,7 @@ from abc import ( import six from openpype.settings import get_system_settings, get_project_settings -from openpype.lib import get_subset_name_with_asset_doc +from .subset_name import get_subset_name from openpype.pipeline.plugin_discover import ( discover, register_plugin, @@ -75,6 +75,7 @@ class BaseCreator: ): # Reference to CreateContext self.create_context = create_context + self.project_settings = project_settings # Creator is running in headless mode (without UI elemets) # - we may use UI inside processing this attribute should be checked @@ -276,14 +277,15 @@ class BaseCreator: variant, task_name, asset_doc, project_name, host_name ) - return get_subset_name_with_asset_doc( + return get_subset_name( self.family, variant, task_name, asset_doc, project_name, host_name, - dynamic_data=dynamic_data + dynamic_data=dynamic_data, + project_settings=self.project_settings ) def get_instance_attr_defs(self): diff --git a/openpype/pipeline/delivery.py b/openpype/pipeline/delivery.py new file mode 100644 index 0000000000..8cf9a43aac --- /dev/null +++ b/openpype/pipeline/delivery.py @@ -0,0 +1,310 @@ +"""Functions useful for delivery of published representations.""" +import os +import shutil +import glob +import clique +import collections + +from openpype.lib import create_hard_link + + +def _copy_file(src_path, dst_path): + """Hardlink file if possible(to save space), copy if not. + + Because of using hardlinks should not be function used in other parts + of pipeline. + """ + + if os.path.exists(dst_path): + return + try: + create_hard_link( + src_path, + dst_path + ) + except OSError: + shutil.copyfile(src_path, dst_path) + + +def get_format_dict(anatomy, location_path): + """Returns replaced root values from user provider value. + + Args: + anatomy (Anatomy): Project anatomy. + location_path (str): User provided value. + + Returns: + (dict): Prepared data for formatting of a template. + """ + + format_dict = {} + if not location_path: + return format_dict + + location_path = location_path.replace("\\", "/") + root_names = anatomy.root_names_from_templates( + anatomy.templates["delivery"] + ) + format_dict["root"] = {} + for name in root_names: + format_dict["root"][name] = location_path + return format_dict + + +def check_destination_path( + repre_id, + anatomy, + anatomy_data, + datetime_data, + template_name +): + """ Try to create destination path based on 'template_name'. + + In the case that path cannot be filled, template contains unmatched + keys, provide error message to filter out repre later. + + Args: + repre_id (str): Representation id. + anatomy (Anatomy): Project anatomy. + anatomy_data (dict): Template data to fill anatomy templates. + datetime_data (dict): Values with actual date. + template_name (str): Name of template which should be used from anatomy + templates. + Returns: + Dict[str, List[str]]: Report of happened errors. 
Key is message title, value is detailed information. + """ + + anatomy_data.update(datetime_data) + anatomy_filled = anatomy.format_all(anatomy_data) + dest_path = anatomy_filled["delivery"][template_name] + report_items = collections.defaultdict(list) + + if not dest_path.solved: + msg = ( + "Missing keys in Representation's context" + " for anatomy template \"{}\"." + ).format(template_name) + + sub_msg = ( + "Representation: {}<br>
" + ).format(repre_id) + + if dest_path.missing_keys: + keys = ", ".join(dest_path.missing_keys) + sub_msg += ( + "- Missing keys: \"{}\"
" + ).format(keys) + + if dest_path.invalid_types: + items = [] + for key, value in dest_path.invalid_types.items(): + items.append("\"{}\" {}".format(key, str(value))) + + keys = ", ".join(items) + sub_msg += ( + "- Invalid value DataType: \"{}\"
" + ).format(keys) + + report_items[msg].append(sub_msg) + + return report_items + + +def deliver_single_file( + src_path, + repre, + anatomy, + template_name, + anatomy_data, + format_dict, + report_items, + log +): + """Copy single file to calculated path based on template + + Args: + src_path(str): path of source representation file + repre (dict): full repre, used only in deliver_sequence, here only + as to share same signature + anatomy (Anatomy) + template_name (string): user selected delivery template name + anatomy_data (dict): data from repre to fill anatomy with + format_dict (dict): root dictionary with names and values + report_items (collections.defaultdict): to return error messages + log (logging.Logger): for log printing + + Returns: + (collections.defaultdict, int) + """ + + # Make sure path is valid for all platforms + src_path = os.path.normpath(src_path.replace("\\", "/")) + + if not os.path.exists(src_path): + msg = "{} doesn't exist for {}".format(src_path, repre["_id"]) + report_items["Source file was not found"].append(msg) + return report_items, 0 + + anatomy_filled = anatomy.format(anatomy_data) + if format_dict: + template_result = anatomy_filled["delivery"][template_name] + delivery_path = template_result.rootless.format(**format_dict) + else: + delivery_path = anatomy_filled["delivery"][template_name] + + # Backwards compatibility when extension contained `.` + delivery_path = delivery_path.replace("..", ".") + # Make sure path is valid for all platforms + delivery_path = os.path.normpath(delivery_path.replace("\\", "/")) + + delivery_folder = os.path.dirname(delivery_path) + if not os.path.exists(delivery_folder): + os.makedirs(delivery_folder) + + log.debug("Copying single: {} -> {}".format(src_path, delivery_path)) + _copy_file(src_path, delivery_path) + + return report_items, 1 + + +def deliver_sequence( + src_path, + repre, + anatomy, + template_name, + anatomy_data, + format_dict, + report_items, + log +): + """ For Pype2(mainly - works in 3 too) where representation might not + contain files. + + Uses listing physical files (not 'files' on repre as a)might not be + present, b)might not be reliable for representation and copying them. + + TODO Should be refactored when files are sufficient to drive all + representations. 
+ + Args: + src_path(str): path of source representation file + repre (dict): full representation + anatomy (Anatomy) + template_name (string): user selected delivery template name + anatomy_data (dict): data from repre to fill anatomy with + format_dict (dict): root dictionary with names and values + report_items (collections.defaultdict): to return error messages + log (logging.Logger): for log printing + + Returns: + (collections.defaultdict, int) + """ + + src_path = os.path.normpath(src_path.replace("\\", "/")) + + def hash_path_exist(myPath): + res = myPath.replace('#', '*') + glob_search_results = glob.glob(res) + if len(glob_search_results) > 0: + return True + return False + + if not hash_path_exist(src_path): + msg = "{} doesn't exist for {}".format( + src_path, repre["_id"]) + report_items["Source file was not found"].append(msg) + return report_items, 0 + + delivery_templates = anatomy.templates.get("delivery") or {} + delivery_template = delivery_templates.get(template_name) + if delivery_template is None: + msg = ( + "Delivery template \"{}\" in anatomy of project \"{}\"" + " was not found" + ).format(template_name, anatomy.project_name) + report_items[""].append(msg) + return report_items, 0 + + # Check if 'frame' key is available in template which is required + # for sequence delivery + if "{frame" not in delivery_template: + msg = ( + "Delivery template \"{}\" in anatomy of project \"{}\"" + " does not contain '{{frame}}' key to fill. Delivery of sequence" + " can't be processed." + ).format(template_name, anatomy.project_name) + report_items[""].append(msg) + return report_items, 0 + + dir_path, file_name = os.path.split(str(src_path)) + + context = repre["context"] + ext = context.get("ext", context.get("representation")) + + if not ext: + msg = "Source extension not found, cannot find collection" + report_items[msg].append(src_path) + log.warning("{} <{}>".format(msg, context)) + return report_items, 0 + + ext = "." 
+ ext + # context.representation could be .psd + ext = ext.replace("..", ".") + + src_collections, remainder = clique.assemble(os.listdir(dir_path)) + src_collection = None + for col in src_collections: + if col.tail != ext: + continue + + src_collection = col + break + + if src_collection is None: + msg = "Source collection of files was not found" + report_items[msg].append(src_path) + log.warning("{} <{}>".format(msg, src_path)) + return report_items, 0 + + frame_indicator = "@####@" + + anatomy_data["frame"] = frame_indicator + anatomy_filled = anatomy.format(anatomy_data) + + if format_dict: + template_result = anatomy_filled["delivery"][template_name] + delivery_path = template_result.rootless.format(**format_dict) + else: + delivery_path = anatomy_filled["delivery"][template_name] + + delivery_path = os.path.normpath(delivery_path.replace("\\", "/")) + delivery_folder = os.path.dirname(delivery_path) + dst_head, dst_tail = delivery_path.split(frame_indicator) + dst_padding = src_collection.padding + dst_collection = clique.Collection( + head=dst_head, + tail=dst_tail, + padding=dst_padding + ) + + if not os.path.exists(delivery_folder): + os.makedirs(delivery_folder) + + src_head = src_collection.head + src_tail = src_collection.tail + uploaded = 0 + for index in src_collection.indexes: + src_padding = src_collection.format("{padding}") % index + src_file_name = "{}{}{}".format(src_head, src_padding, src_tail) + src = os.path.normpath( + os.path.join(dir_path, src_file_name) + ) + + dst_padding = dst_collection.format("{padding}") % index + dst = "{}{}{}".format(dst_head, dst_padding, dst_tail) + log.debug("Copying single: {} -> {}".format(src, dst)) + _copy_file(src, dst) + uploaded += 1 + + return report_items, uploaded diff --git a/openpype/pipeline/load/__init__.py b/openpype/pipeline/load/__init__.py index b6bdd13d50..bf38a0b3c8 100644 --- a/openpype/pipeline/load/__init__.py +++ b/openpype/pipeline/load/__init__.py @@ -1,6 +1,8 @@ from .utils import ( HeroVersionType, + IncompatibleLoaderError, + InvalidRepresentationContext, get_repres_contexts, get_subset_contexts, @@ -20,6 +22,7 @@ from .utils import ( get_representation_path_from_context, get_representation_path, + get_representation_path_with_anatomy, is_compatible_loader, @@ -46,7 +49,9 @@ from .plugins import ( __all__ = ( # utils.py "HeroVersionType", + "IncompatibleLoaderError", + "InvalidRepresentationContext", "get_repres_contexts", "get_subset_contexts", @@ -66,6 +71,7 @@ __all__ = ( "get_representation_path_from_context", "get_representation_path", + "get_representation_path_with_anatomy", "is_compatible_loader", diff --git a/openpype/pipeline/load/utils.py b/openpype/pipeline/load/utils.py index 99d6876d4b..83b904e4a7 100644 --- a/openpype/pipeline/load/utils.py +++ b/openpype/pipeline/load/utils.py @@ -23,6 +23,10 @@ from openpype.client import ( get_representation_by_name, get_representation_parents ) +from openpype.lib import ( + StringTemplate, + TemplateUnsolved, +) from openpype.pipeline import ( schema, legacy_io, @@ -61,6 +65,11 @@ class IncompatibleLoaderError(ValueError): pass +class InvalidRepresentationContext(ValueError): + """Representation path can't be received using representation document.""" + pass + + def get_repres_contexts(representation_ids, dbcon=None): """Return parenthood context for representation. 
@@ -515,6 +524,52 @@ def get_representation_path_from_context(context): return get_representation_path(representation, root) + +def get_representation_path_with_anatomy(repre_doc, anatomy): + """Receive representation path using representation document and anatomy. + + Anatomy is used to replace 'root' key in representation file. Ideally + should be used instead of 'get_representation_path' which is based on + "current context". + + Future notes: + We also want to be able to store resources in the representation, and + the result should then also contain paths to possible resources. + + Args: + repre_doc (Dict[str, Any]): Representation document. + anatomy (Anatomy): Project anatomy object. + + Returns: + Union[None, TemplateResult]: None if path can't be received + + Raises: + InvalidRepresentationContext: When representation data are probably + invalid or not available. + """ + + try: + template = repre_doc["data"]["template"] + + except KeyError: + raise InvalidRepresentationContext(( + "Representation document does not" + " contain template in data ('data.template')" + )) + + try: + context = repre_doc["context"] + context["root"] = anatomy.roots + path = StringTemplate.format_strict_template(template, context) + + except TemplateUnsolved as exc: + raise InvalidRepresentationContext(( + "Couldn't resolve representation template with available data." + " Reason: {}".format(str(exc)) + )) + + return path.normalized() + + def get_representation_path(representation, root=None, dbcon=None): """Get filename from representation document @@ -533,8 +588,6 @@ def get_representation_path(representation, root=None, dbcon=None): """ - from openpype.lib import StringTemplate, TemplateUnsolved - if dbcon is None: dbcon = legacy_io @@ -737,6 +790,7 @@ def get_outdated_containers(host=None, project_name=None): if host is None: from openpype.pipeline import registered_host + host = registered_host() if project_name is None: diff --git a/openpype/pipeline/project_folders.py b/openpype/pipeline/project_folders.py new file mode 100644 index 0000000000..1bcba5c320 --- /dev/null +++ b/openpype/pipeline/project_folders.py @@ -0,0 +1,107 @@ +import os +import re +import json + +import six + +from openpype.settings import get_project_settings +from openpype.lib import Logger + +from .anatomy import Anatomy +from .template_data import get_project_template_data + + +def concatenate_splitted_paths(split_paths, anatomy): + log = Logger.get_logger("concatenate_splitted_paths") + pattern_array = re.compile(r"\[.*\]") + output = [] + for path_items in split_paths: + clean_items = [] + if isinstance(path_items, str): + path_items = [path_items] + + for path_item in path_items: + if not re.match(r"{.+}", path_item): + path_item = re.sub(pattern_array, "", path_item) + clean_items.append(path_item) + + # backward compatibility + if "__project_root__" in path_items: + for root, root_path in anatomy.roots.items(): + if not os.path.exists(str(root_path)): + log.debug("Root {} path {} does not exist on \ computer!".format(root, root_path)) + continue + clean_items = ["{{root[{}]}}".format(root), + r"{project[name]}"] + clean_items[1:] + output.append(os.path.normpath(os.path.sep.join(clean_items))) + continue + + output.append(os.path.normpath(os.path.sep.join(clean_items))) + + return output + + +def fill_paths(path_list, anatomy): + format_data = get_project_template_data(project_name=anatomy.project_name) + format_data["root"] = anatomy.roots + filled_paths = [] + + for path in path_list: + new_path = 
path.format(**format_data) + filled_paths.append(new_path) + + return filled_paths + + +def create_project_folders(project_name, basic_paths=None): + log = Logger.get_logger("create_project_folders") + anatomy = Anatomy(project_name) + if basic_paths is None: + basic_paths = get_project_basic_paths(project_name) + + if not basic_paths: + return + + concat_paths = concatenate_splitted_paths(basic_paths, anatomy) + filled_paths = fill_paths(concat_paths, anatomy) + + # Create folders + for path in filled_paths: + if os.path.exists(path): + log.debug("Folder already exists: {}".format(path)) + else: + log.debug("Creating folder: {}".format(path)) + os.makedirs(path) + + +def _list_path_items(folder_structure): + output = [] + for key, value in folder_structure.items(): + if not value: + output.append(key) + continue + + paths = _list_path_items(value) + for path in paths: + if not isinstance(path, (list, tuple)): + path = [path] + + item = [key] + item.extend(path) + output.append(item) + + return output + + +def get_project_basic_paths(project_name): + project_settings = get_project_settings(project_name) + folder_structure = ( + project_settings["global"]["project_folder_structure"] + ) + if not folder_structure: + return [] + + if isinstance(folder_structure, six.string_types): + folder_structure = json.loads(folder_structure) + return _list_path_items(folder_structure) diff --git a/openpype/pipeline/publish/__init__.py b/openpype/pipeline/publish/__init__.py index aa7fe0bdbf..04b1a66f3a 100644 --- a/openpype/pipeline/publish/__init__.py +++ b/openpype/pipeline/publish/__init__.py @@ -1,3 +1,10 @@ +from .constants import ( + ValidatePipelineOrder, + ValidateContentsOrder, + ValidateSceneOrder, + ValidateMeshOrder, +) + from .publish_plugins import ( AbstractMetaInstancePlugin, AbstractMetaContextPlugin, @@ -7,13 +14,27 @@ from .publish_plugins import ( KnownPublishError, OpenPypePyblishPluginMixin, OptionalPyblishPluginMixin, + + RepairAction, + RepairContextAction, + + Extractor, ) from .lib import ( + get_publish_template_name, + DiscoverResult, publish_plugins_discover, load_help_content_from_plugin, load_help_content_from_filepath, + + get_errored_instances_from_context, + get_errored_plugins_from_context, + + filter_instances_for_context_plugin, + context_plugin_should_run, + get_instance_staging_dir, ) from .abstract_expected_files import ExpectedFiles @@ -24,6 +45,11 @@ from .abstract_collect_render import ( __all__ = ( + "ValidatePipelineOrder", + "ValidateContentsOrder", + "ValidateSceneOrder", + "ValidateMeshOrder", + "AbstractMetaInstancePlugin", "AbstractMetaContextPlugin", @@ -33,11 +59,25 @@ __all__ = ( "OpenPypePyblishPluginMixin", "OptionalPyblishPluginMixin", + "RepairAction", + "RepairContextAction", + + "Extractor", + + "get_publish_template_name", + "DiscoverResult", "publish_plugins_discover", "load_help_content_from_plugin", "load_help_content_from_filepath", + "get_errored_instances_from_context", + "get_errored_plugins_from_context", + + "filter_instances_for_context_plugin", + "context_plugin_should_run", + "get_instance_staging_dir", + "ExpectedFiles", "RenderInstance", diff --git a/openpype/pipeline/publish/constants.py b/openpype/pipeline/publish/constants.py new file mode 100644 index 0000000000..dcd3445200 --- /dev/null +++ b/openpype/pipeline/publish/constants.py @@ -0,0 +1,7 @@ +import pyblish.api + + +ValidatePipelineOrder = pyblish.api.ValidatorOrder + 0.05 +ValidateContentsOrder = pyblish.api.ValidatorOrder + 0.1 +ValidateSceneOrder = 
pyblish.api.ValidatorOrder + 0.2 +ValidateMeshOrder = pyblish.api.ValidatorOrder + 0.3 diff --git a/openpype/pipeline/publish/contants.py b/openpype/pipeline/publish/contants.py new file mode 100644 index 0000000000..169eca2e5c --- /dev/null +++ b/openpype/pipeline/publish/contants.py @@ -0,0 +1,2 @@ +DEFAULT_PUBLISH_TEMPLATE = "publish" +DEFAULT_HERO_PUBLISH_TEMPLATE = "hero" diff --git a/openpype/pipeline/publish/lib.py b/openpype/pipeline/publish/lib.py index 9060a0bf4b..c76671fa39 100644 --- a/openpype/pipeline/publish/lib.py +++ b/openpype/pipeline/publish/lib.py @@ -2,14 +2,198 @@ import os import sys import types import inspect +import copy +import tempfile import xml.etree.ElementTree import six import pyblish.plugin import pyblish.api -from openpype.lib import Logger -from openpype.settings import get_project_settings, get_system_settings +from openpype.lib import Logger, filter_profiles +from openpype.settings import ( + get_project_settings, + get_system_settings, +) + +from .contants import ( + DEFAULT_PUBLISH_TEMPLATE, + DEFAULT_HERO_PUBLISH_TEMPLATE, +) + + +def get_template_name_profiles( + project_name, project_settings=None, logger=None +): + """Receive profiles for publish template keys. + + At least one of the arguments must be passed. + + Args: + project_name (str): Name of project where to look for templates. + project_settings(Dict[str, Any]): Prepared project settings. + + Returns: + List[Dict[str, Any]]: Publish template profiles. + """ + + if not project_name and not project_settings: + raise ValueError(( + "Both project name and project settings are missing." + " At least one must be entered." + )) + + if not project_settings: + project_settings = get_project_settings(project_name) + + profiles = ( + project_settings + ["global"] + ["tools"] + ["publish"] + ["template_name_profiles"] + ) + if profiles: + return copy.deepcopy(profiles) + + # Use legacy approach for cases when new settings are not yet filled for + # the project + legacy_profiles = ( + project_settings + ["global"] + ["publish"] + ["IntegrateAssetNew"] + ["template_name_profiles"] + ) + if legacy_profiles: + if not logger: + logger = Logger.get_logger("get_template_name_profiles") + + logger.warning(( + "Project \"{}\" is using legacy access to publish template." + " It is recommended to move settings to new location" + " 'project_settings/global/tools/publish/template_name_profiles'." + ).format(project_name)) + + # Replace "tasks" key with "task_names" + profiles = [] + for profile in copy.deepcopy(legacy_profiles): + profile["task_names"] = profile.pop("tasks", []) + profiles.append(profile) + return profiles + + +def get_hero_template_name_profiles( + project_name, project_settings=None, logger=None +): + """Receive profiles for hero publish template keys. + + At least one of the arguments must be passed. + + Args: + project_name (str): Name of project where to look for templates. + project_settings(Dict[str, Any]): Prepared project settings. + + Returns: + List[Dict[str, Any]]: Publish template profiles. + """ + + if not project_name and not project_settings: + raise ValueError(( + "Both project name and project settings are missing." + " At least one must be entered."
+ )) + + if not project_settings: + project_settings = get_project_settings(project_name) + + profiles = ( + project_settings + ["global"] + ["tools"] + ["publish"] + ["hero_template_name_profiles"] + ) + if profiles: + return copy.deepcopy(profiles) + + # Use legacy approach for cases when new settings are not filled yet for + # the project + legacy_profiles = copy.deepcopy( + project_settings + ["global"] + ["publish"] + ["IntegrateHeroVersion"] + ["template_name_profiles"] + ) + if legacy_profiles: + if not logger: + logger = Logger.get_logger("get_hero_template_name_profiles") + + logger.warning(( + "Project \"{}\" is using legacy access to hero publish template." + " It is recommended to move settings to new location" + " 'project_settings/global/tools/publish/" + "hero_template_name_profiles'." + ).format(project_name)) + return legacy_profiles + + +def get_publish_template_name( + project_name, + host_name, + family, + task_name, + task_type, + project_settings=None, + hero=False, + logger=None +): + """Get template name which should be used for passed context. + + Publish templates are filtered by host name, family, task name and + task type. + + Default template, used when profiles are not available or the profile + has an empty value, is defined by the 'DEFAULT_PUBLISH_TEMPLATE' constant. + + Args: + project_name (str): Name of project where to look for settings. + host_name (str): Name of host integration. + family (str): Family for which the template should be found. + task_name (str): Task name the instance is working on. + task_type (str): Task type the instance is working on. + project_settings (Dict[str, Any]): Prepared project settings. + logger (logging.Logger): Custom logger used for 'filter_profiles' + function. + + Returns: + str: Template name which should be used for integration. + """ + + template = None + filter_criteria = { + "hosts": host_name, + "families": family, + "task_names": task_name, + "task_types": task_type, + } + if hero: + default_template = DEFAULT_HERO_PUBLISH_TEMPLATE + profiles = get_hero_template_name_profiles( + project_name, project_settings, logger + ) + + else: + profiles = get_template_name_profiles( + project_name, project_settings, logger + ) + default_template = DEFAULT_PUBLISH_TEMPLATE + + profile = filter_profiles(profiles, filter_criteria, logger=logger) + if profile: + template = profile["template_name"] + return template or default_template class DiscoverResult: @@ -313,3 +497,138 @@ def remote_publish(log, close_plugin_name=None, raise_error=False): # Fatal Error is because of Deadline error_message = "Fatal Error: " + error_format.format(**result) raise RuntimeError(error_message) + + +def get_errored_instances_from_context(context): + """Collect failed instances from pyblish context. + + Args: + context (pyblish.api.Context): Publish context where we're looking + for failed instances. + + Returns: + List[pyblish.lib.Instance]: Instances which failed during processing. + """ + + instances = list() + for result in context.data["results"]: + if result["instance"] is None: + # When instance is None we are on the "context" result + continue + + if result["error"]: + instances.append(result["instance"]) + + return instances + + +def get_errored_plugins_from_context(context): + """Collect failed plugins from pyblish context. + + Args: + context (pyblish.api.Context): Publish context where we're looking + for failed plugins. + + Returns: + List[pyblish.api.Plugin]: Plugins which failed during processing.
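# --- Editor's usage sketch for 'get_publish_template_name' above; the
# project and context values are hypothetical:
from openpype.pipeline.publish import get_publish_template_name

template_name = get_publish_template_name(
    "demo_project",
    host_name="maya",
    family="render",
    task_name="lighting",
    task_type="Lighting",
)
# Resolves to DEFAULT_PUBLISH_TEMPLATE ("publish") when no profile matches;
# pass hero=True to resolve against the hero profiles and fall back to
# DEFAULT_HERO_PUBLISH_TEMPLATE ("hero") instead.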
+ """ + + plugins = list() + results = context.data.get("results", []) + for result in results: + if result["success"] is True: + continue + plugins.append(result["plugin"]) + + return plugins + + +def filter_instances_for_context_plugin(plugin, context): + """Filter instances on context by context plugin filters. + + This is for cases when context plugin need similar filtering like instance + plugin have, but for some reason must run on context or should find out + if there is at least one instance with a family. + + Args: + plugin (pyblish.api.Plugin): Plugin with filters. + context (pyblish.api.Context): Pyblish context with insances. + + Returns: + Iterator[pyblish.lib.Instance]: Iteration of valid instances. + """ + + instances = [] + plugin_families = set() + all_families = False + if plugin.families: + instances = context + plugin_families = set(plugin.families) + all_families = "*" in plugin_families + + for instance in instances: + # Ignore inactive instances + if ( + not instance.data.get("publish", True) + or not instance.data.get("active", True) + ): + continue + + family = instance.data.get("family") + families = instance.data.get("families") or [] + if ( + all_families + or (family and family in plugin_families) + or any(f in plugin_families for f in families) + ): + yield instance + + +def context_plugin_should_run(plugin, context): + """Return whether the ContextPlugin should run on the given context. + + This is a helper function to work around a bug pyblish-base#250 + Whenever a ContextPlugin sets specific families it will still trigger even + when no instances are present that have those families. + + This actually checks it correctly and returns whether it should run. + + Args: + plugin (pyblish.api.Plugin): Plugin with filters. + context (pyblish.api.Context): Pyblish context with insances. + + Returns: + bool: Context plugin should run based on valid instances. + """ + + for _ in filter_instances_for_context_plugin(plugin, context): + return True + return False + + +def get_instance_staging_dir(instance): + """Unified way how staging dir is stored and created on instances. + + First check if 'stagingDir' is already set in instance data. If there is + not create new in tempdir. + + Note: + Staging dir does not have to be necessarily in tempdir so be carefull + about it's usage. + + Args: + instance (pyblish.lib.Instance): Instance for which we want to get + staging dir. + + Returns: + str: Path to staging dir of instance. 
+ """ + + staging_dir = instance.data.get("stagingDir") + if not staging_dir: + staging_dir = os.path.normpath( + tempfile.mkdtemp(prefix="pyblish_tmp_") + ) + instance.data["stagingDir"] = staging_dir + + return staging_dir diff --git a/openpype/pipeline/publish/publish_plugins.py b/openpype/pipeline/publish/publish_plugins.py index 71a2c675b6..6e2be1ce2c 100644 --- a/openpype/pipeline/publish/publish_plugins.py +++ b/openpype/pipeline/publish/publish_plugins.py @@ -1,7 +1,16 @@ from abc import ABCMeta + +import pyblish.api from pyblish.plugin import MetaPlugin, ExplicitMetaPlugin + from openpype.lib import BoolDef -from .lib import load_help_content_from_plugin + +from .lib import ( + load_help_content_from_plugin, + get_errored_instances_from_context, + get_errored_plugins_from_context, + get_instance_staging_dir, +) class AbstractMetaInstancePlugin(ABCMeta, MetaPlugin): @@ -184,3 +193,74 @@ class OptionalPyblishPluginMixin(OpenPypePyblishPluginMixin): if active is None: active = getattr(self, "active", True) return active + + +class RepairAction(pyblish.api.Action): + """Repairs the action + + To process the repairing this requires a static `repair(instance)` method + is available on the plugin. + """ + + label = "Repair" + on = "failed" # This action is only available on a failed plug-in + icon = "wrench" # Icon from Awesome Icon + + def process(self, context, plugin): + if not hasattr(plugin, "repair"): + raise RuntimeError("Plug-in does not have repair method.") + + # Get the errored instances + self.log.info("Finding failed instances..") + errored_instances = get_errored_instances_from_context(context) + + # Apply pyblish.logic to get the instances for the plug-in + instances = pyblish.api.instances_by_plugin(errored_instances, plugin) + for instance in instances: + plugin.repair(instance) + + +class RepairContextAction(pyblish.api.Action): + """Repairs the action + + To process the repairing this requires a static `repair(instance)` method + is available on the plugin. + """ + + label = "Repair" + on = "failed" # This action is only available on a failed plug-in + + def process(self, context, plugin): + if not hasattr(plugin, "repair"): + raise RuntimeError("Plug-in does not have repair method.") + + # Get the errored instances + self.log.info("Finding failed instances..") + errored_plugins = get_errored_plugins_from_context(context) + + # Apply pyblish.logic to get the instances for the plug-in + if plugin in errored_plugins: + self.log.info("Attempting fix ...") + plugin.repair(context) + + +class Extractor(pyblish.api.InstancePlugin): + """Extractor base class. + + The extractor base class implements a "staging_dir" function used to + generate a temporary directory for an instance to extract to. 
+ + This temporary directory is generated through `tempfile.mkdtemp()` + + """ + + order = 2.0 + + def staging_dir(self, instance): + """Provide a temporary directory in which to store extracted files + + Upon calling this method the staging directory is stored inside + the instance.data['stagingDir'] + """ + + return get_instance_staging_dir(instance) diff --git a/openpype/pipeline/template_data.py b/openpype/pipeline/template_data.py index 824a25127c..627eba5c3d 100644 --- a/openpype/pipeline/template_data.py +++ b/openpype/pipeline/template_data.py @@ -28,27 +28,37 @@ def get_general_template_data(system_settings=None): } -def get_project_template_data(project_doc): +def get_project_template_data(project_doc=None, project_name=None): """Extract data from project document that are used in templates. Project document must have 'name' and (at this moment) optional key 'data.code'. + One of 'project_name' or 'project_doc' must be passed. With a prepared + project document the function is much faster because it does not have + to query. + Output contains formatting keys: - 'project[name]' - Project name - 'project[code]' - Project code Args: project_doc (Dict[str, Any]): Queried project document. + project_name (str): Name of project. Returns: Dict[str, Dict[str, str]]: Template data based on project document. """ + if not project_name: + project_name = project_doc["name"] + + if not project_doc: + project_doc = get_project(project_name, fields=["data.code"]) + project_code = project_doc.get("data", {}).get("code") return { "project": { - "name": project_doc["name"], + "name": project_name, "code": project_code } } diff --git a/openpype/pipeline/thumbnail.py b/openpype/pipeline/thumbnail.py index eb383b16d9..39f3e17893 100644 --- a/openpype/pipeline/thumbnail.py +++ b/openpype/pipeline/thumbnail.py @@ -4,6 +4,7 @@ import logging from openpype.client import get_project from .
import legacy_io +from .anatomy import Anatomy +from .plugin_discover import ( discover, register_plugin, @@ -73,19 +74,20 @@ class ThumbnailResolver(object): class TemplateResolver(ThumbnailResolver): - priority = 90 def process(self, thumbnail_entity, thumbnail_type): - - if not os.environ.get("AVALON_THUMBNAIL_ROOT"): - return - template = thumbnail_entity["data"].get("template") if not template: self.log.debug("Thumbnail entity does not have set template") return + thumbnail_root_format_key = "{thumbnail_root}" + thumbnail_root = os.environ.get("AVALON_THUMBNAIL_ROOT") or "" + # Check if template requires thumbnail root and if it is available + if thumbnail_root_format_key in template and not thumbnail_root: + return + project_name = self.dbcon.active_project() project = get_project(project_name, fields=["name", "data.code"]) @@ -95,12 +97,16 @@ class TemplateResolver(ThumbnailResolver): template_data.update({ "_id": str(thumbnail_entity["_id"]), "thumbnail_type": thumbnail_type, - "thumbnail_root": os.environ.get("AVALON_THUMBNAIL_ROOT"), + "thumbnail_root": thumbnail_root, "project": { "name": project["name"], "code": project["data"].get("code") - } + }, }) + # Add anatomy roots if used in template + if "{root" in template: + anatomy = Anatomy(project_name) + template_data["root"] = anatomy.roots try: filepath = os.path.normpath(template.format(**template_data)) diff --git a/openpype/pipeline/workfile/__init__.py b/openpype/pipeline/workfile/__init__.py index 0aad29b6f9..94ecc81bd6 100644 --- a/openpype/pipeline/workfile/__init__.py +++ b/openpype/pipeline/workfile/__init__.py @@ -9,6 +9,8 @@ from .path_resolving import ( get_custom_workfile_template, get_custom_workfile_template_by_string_context, + + create_workdir_extra_folders, ) from .build_workfile import BuildWorkfile @@ -26,5 +28,7 @@ __all__ = ( "get_custom_workfile_template", "get_custom_workfile_template_by_string_context", + "create_workdir_extra_folders", + "BuildWorkfile", ) diff --git a/openpype/pipeline/workfile/abstract_template_loader.py b/openpype/pipeline/workfile/abstract_template_loader.py index 05a98a1ddc..e2fbea98ca 100644 --- a/openpype/pipeline/workfile/abstract_template_loader.py +++ b/openpype/pipeline/workfile/abstract_template_loader.py @@ -5,13 +5,15 @@ import six import logging from functools import reduce -from openpype.client import get_asset_by_name +from openpype.client import ( + get_asset_by_name, + get_linked_assets, +) from openpype.settings import get_project_settings from openpype.lib import ( StringTemplate, Logger, filter_profiles, - get_linked_assets, ) from openpype.pipeline import legacy_io, Anatomy from openpype.pipeline.load import ( @@ -177,7 +179,7 @@ class AbstractTemplateLoader: build_info["profiles"], { "task_types": task_type, - "tasks": task_name + "task_names": task_name } ) diff --git a/openpype/pipeline/workfile/build_template.py b/openpype/pipeline/workfile/build_template.py index e6396578c5..3328dfbc9e 100644 --- a/openpype/pipeline/workfile/build_template.py +++ b/openpype/pipeline/workfile/build_template.py @@ -1,3 +1,4 @@ +import os from importlib import import_module from openpype.lib import classes_from_module from openpype.host import HostBase @@ -30,7 +31,7 @@ def build_workfile_template(*args): template_loader.populate_template() -def update_workfile_template(args): +def update_workfile_template(*args): template_loader = build_template_loader() template_loader.update_missing_containers() @@ -42,7 +43,10 @@ def build_template_loader(): if isinstance(host, HostBase):
host_name = host.name else: - host_name = host.__name__.partition('.')[2] + host_name = os.environ.get("AVALON_APP") + if not host_name: + host_name = host.__name__.split(".")[-2] + module_path = _module_path_format.format(host=host_name) module = import_module(module_path) if not module: diff --git a/openpype/pipeline/workfile/build_workfile.py b/openpype/pipeline/workfile/build_workfile.py index bb6fcb4189..0b8a444436 100644 --- a/openpype/pipeline/workfile/build_workfile.py +++ b/openpype/pipeline/workfile/build_workfile.py @@ -8,10 +8,10 @@ from openpype.client import ( get_subsets, get_last_versions, get_representations, + get_linked_assets, ) from openpype.settings import get_project_settings from openpype.lib import ( - get_linked_assets, filter_profiles, Logger, ) diff --git a/openpype/pipeline/workfile/path_resolving.py b/openpype/pipeline/workfile/path_resolving.py index 6d9e72dbd2..1243e84148 100644 --- a/openpype/pipeline/workfile/path_resolving.py +++ b/openpype/pipeline/workfile/path_resolving.py @@ -467,3 +467,60 @@ def get_custom_workfile_template_by_string_context( return get_custom_workfile_template( project_doc, asset_doc, task_name, host_name, anatomy, project_settings ) + + +def create_workdir_extra_folders( + workdir, + host_name, + task_type, + task_name, + project_name, + project_settings=None +): + """Create extra folders in work directory based on context. + + Args: + workdir (str): Path to workdir where workfiles is stored. + host_name (str): Name of host implementation. + task_type (str): Type of task for which extra folders should be + created. + task_name (str): Name of task for which extra folders should be + created. + project_name (str): Name of project on which task is. + project_settings (dict): Prepared project settings. Are loaded if not + passed. 
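# --- Editor's usage sketch for 'create_workdir_extra_folders'; all context
# values are hypothetical:
from openpype.pipeline.workfile import create_workdir_extra_folders

create_workdir_extra_folders(
    workdir="/projects/demo/shots/sh010/work/compositing",
    host_name="nuke",
    task_type="Compositing",
    task_name="compositing",
    project_name="demo_project",
)
# Profiles come from project_settings/global/tools/Workfiles/extra_folders;
# when no profile matches the filters, no extra folders are created.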
+ """ + + # Load project settings if not set + if not project_settings: + project_settings = get_project_settings(project_name) + + # Load extra folders profiles + extra_folders_profiles = ( + project_settings["global"]["tools"]["Workfiles"]["extra_folders"] + ) + # Skip if are empty + if not extra_folders_profiles: + return + + # Prepare profiles filters + filter_data = { + "task_types": task_type, + "task_names": task_name, + "hosts": host_name + } + profile = filter_profiles(extra_folders_profiles, filter_data) + if profile is None: + return + + for subfolder in profile["folders"]: + # Make sure backslashes are converted to forwards slashes + # and does not start with slash + subfolder = subfolder.replace("\\", "/").lstrip("/") + # Skip empty strings + if not subfolder: + continue + + fullpath = os.path.join(workdir, subfolder) + if not os.path.exists(fullpath): + os.makedirs(fullpath) diff --git a/openpype/plugin.py b/openpype/plugin.py index bb9bc2ff85..7e906b4451 100644 --- a/openpype/plugin.py +++ b/openpype/plugin.py @@ -1,24 +1,91 @@ -import tempfile -import os +import functools +import warnings + import pyblish.api +# New location of orders: openpype.pipeline.publish.constants +# - can be imported as +# 'from openpype.pipeline.publish import ValidatePipelineOrder' ValidatePipelineOrder = pyblish.api.ValidatorOrder + 0.05 ValidateContentsOrder = pyblish.api.ValidatorOrder + 0.1 ValidateSceneOrder = pyblish.api.ValidatorOrder + 0.2 ValidateMeshOrder = pyblish.api.ValidatorOrder + 0.3 +class PluginDeprecatedWarning(DeprecationWarning): + pass + + +def _deprecation_warning(item_name, warning_message): + warnings.simplefilter("always", PluginDeprecatedWarning) + warnings.warn( + ( + "Call to deprecated function '{}'" + "\nFunction was moved or removed.{}" + ).format(item_name, warning_message), + category=PluginDeprecatedWarning, + stacklevel=4 + ) + + +def deprecated(new_destination): + """Mark functions as deprecated. + + It will result in a warning being emitted when the function is used. + """ + + func = None + if callable(new_destination): + func = new_destination + new_destination = None + + def _decorator(decorated_func): + if new_destination is None: + warning_message = ( + " Please check content of deprecated function to figure out" + " possible replacement." + ) + else: + warning_message = " Please replace your usage with '{}'.".format( + new_destination + ) + + @functools.wraps(decorated_func) + def wrapper(*args, **kwargs): + _deprecation_warning(decorated_func.__name__, warning_message) + return decorated_func(*args, **kwargs) + return wrapper + + if func is None: + return _decorator + return _decorator(func) + + +# Classes just inheriting from pyblish classes +# - seems to be unused in code (not 100% sure) +# - they should be removed but because it is not clear if they're used +# we'll keep then and log deprecation warning +# Deprecated since 3.14.* will be removed in 3.16.* class ContextPlugin(pyblish.api.ContextPlugin): - def process(cls, *args, **kwargs): - super(ContextPlugin, cls).process(cls, *args, **kwargs) + def __init__(self, *args, **kwargs): + _deprecation_warning( + "openpype.plugin.ContextPlugin", + " Please replace your usage with 'pyblish.api.ContextPlugin'." 
+ ) + super(ContextPlugin, self).__init__(*args, **kwargs) +# Deprecated since 3.14.*, will be removed in 3.16.* class InstancePlugin(pyblish.api.InstancePlugin): - def process(cls, *args, **kwargs): - super(InstancePlugin, cls).process(cls, *args, **kwargs) + def __init__(self, *args, **kwargs): + _deprecation_warning( + "openpype.plugin.InstancePlugin", + " Please replace your usage with 'pyblish.api.InstancePlugin'." + ) + super(InstancePlugin, self).__init__(*args, **kwargs) -class Extractor(InstancePlugin): +class Extractor(pyblish.api.InstancePlugin): """Extractor base class. The extractor base class implements a "staging_dir" function used to @@ -36,17 +103,13 @@ class Extractor(InstancePlugin): Upon calling this method the staging directory is stored inside the instance.data['stagingDir'] """ - staging_dir = instance.data.get('stagingDir', None) - if not staging_dir: - staging_dir = os.path.normpath( - tempfile.mkdtemp(prefix="pyblish_tmp_") - ) - instance.data['stagingDir'] = staging_dir + from openpype.pipeline.publish import get_instance_staging_dir - return staging_dir + return get_instance_staging_dir(instance) +@deprecated("openpype.pipeline.publish.context_plugin_should_run") def contextplugin_should_run(plugin, context): """Return whether the ContextPlugin should run on the given context. @@ -56,30 +119,10 @@ def contextplugin_should_run(plugin, context): This actually checks it correctly and returns whether it should run. + Deprecated: + Since 3.14.*, will be removed in 3.16.* or later. """ - required = set(plugin.families) - # When no filter always run - if "*" in required: - return True + from openpype.pipeline.publish import context_plugin_should_run - for instance in context: - - # Ignore inactive instances - if (not instance.data.get("publish", True) or - not instance.data.get("active", True)): - continue - - families = instance.data.get("families", []) - if any(f in required for f in families): - return True - - family = instance.data.get("family") - if family and family in required: - return True - - return False - - -class ValidationException(Exception): - pass + return context_plugin_should_run(plugin, context) diff --git a/openpype/plugins/load/add_site.py b/openpype/plugins/load/add_site.py index 55fda55d17..ac931e41db 100644 --- a/openpype/plugins/load/add_site.py +++ b/openpype/plugins/load/add_site.py @@ -1,6 +1,6 @@ +from openpype.client import get_linked_representation_id from openpype.modules import ModulesManager from openpype.pipeline import load -from openpype.lib.avalon_context import get_linked_ids_for_representations from openpype.modules.sync_server.utils import SiteAlreadyPresentError @@ -45,9 +45,11 @@ class AddSyncSite(load.LoaderPlugin): force=True) if family == "workfile": - links = get_linked_ids_for_representations(project_name, - [repre_id], - link_type="reference") + links = get_linked_representation_id( + project_name, + repre_id=repre_id, + link_type="reference" + ) for link_repre_id in links: try: self.sync_server.add_site(project_name, link_repre_id, diff --git a/openpype/plugins/load/delete_old_versions.py b/openpype/plugins/load/delete_old_versions.py index 6e0b464cc1..b7ac015268 100644 --- a/openpype/plugins/load/delete_old_versions.py +++ b/openpype/plugins/load/delete_old_versions.py @@ -7,11 +7,15 @@ from pymongo import UpdateOne import qargparse from Qt import QtWidgets, QtCore -from openpype.client import get_versions, get_representations from openpype import style -from openpype.pipeline import load, AvalonMongoDB, Anatomy -from
openpype.lib import StringTemplate +from openpype.client import get_versions, get_representations from openpype.modules import ModulesManager +from openpype.lib import format_file_size +from openpype.pipeline import load, AvalonMongoDB, Anatomy +from openpype.pipeline.load import ( + get_representation_path_with_anatomy, + InvalidRepresentationContext, +) class DeleteOldVersions(load.SubsetLoaderPlugin): @@ -38,13 +42,6 @@ class DeleteOldVersions(load.SubsetLoaderPlugin): ) ] - def sizeof_fmt(self, num, suffix='B'): - for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']: - if abs(num) < 1024.0: - return "%3.1f%s%s" % (num, unit, suffix) - num /= 1024.0 - return "%.1f%s%s" % (num, 'Yi', suffix) - def delete_whole_dir_paths(self, dir_paths, delete=True): size = 0 @@ -80,27 +77,28 @@ class DeleteOldVersions(load.SubsetLoaderPlugin): def path_from_representation(self, representation, anatomy): try: - template = representation["data"]["template"] - + context = representation["context"] except KeyError: return (None, None) + try: + path = get_representation_path_with_anatomy( + representation, anatomy + ) + except InvalidRepresentationContext: + return (None, None) + sequence_path = None - try: - context = representation["context"] - context["root"] = anatomy.roots - path = str(StringTemplate.format_template(template, context)) - if "frame" in context: - context["frame"] = self.sequence_splitter - sequence_path = os.path.normpath(str( - StringTemplate.format_template(template, context) - )) + if "frame" in context: + context["frame"] = self.sequence_splitter + sequence_path = get_representation_path_with_anatomy( + representation, anatomy + ) - except KeyError: - # Template references unavailable data - return (None, None) + if sequence_path: + sequence_path = sequence_path.normalized() - return (os.path.normpath(path), sequence_path) + return (path.normalized(), sequence_path) def delete_only_repre_files(self, dir_paths, file_paths, delete=True): size = 0 @@ -456,7 +454,7 @@ class DeleteOldVersions(load.SubsetLoaderPlugin): size += self.main(project_name, data, remove_publish_folder) print("Progressing {}/{}".format(count + 1, len(contexts))) - msg = "Total size of files: " + self.sizeof_fmt(size) + msg = "Total size of files: {}".format(format_file_size(size)) self.log.info(msg) self.message(msg) diff --git a/openpype/plugins/load/delivery.py b/openpype/plugins/load/delivery.py index f6e1d4f06b..89c24f2402 100644 --- a/openpype/plugins/load/delivery.py +++ b/openpype/plugins/load/delivery.py @@ -7,15 +7,17 @@ from openpype.client import get_representations from openpype.pipeline import load, Anatomy from openpype import resources, style -from openpype.lib.dateutils import get_datetime_data -from openpype.lib.delivery import ( - sizeof_fmt, - path_from_representation, +from openpype.lib import ( + format_file_size, + collect_frames, + get_datetime_data, +) +from openpype.pipeline.load import get_representation_path_with_anatomy +from openpype.pipeline.delivery import ( get_format_dict, check_destination_path, - process_single_file, - process_sequence, - collect_frames + deliver_single_file, + deliver_sequence, ) @@ -167,7 +169,9 @@ class DeliveryOptionsDialog(QtWidgets.QDialog): if repre["name"] not in selected_repres: continue - repre_path = path_from_representation(repre, self.anatomy) + repre_path = get_representation_path_with_anatomy( + repre, self.anatomy + ) anatomy_data = copy.deepcopy(repre["context"]) new_report_items = check_destination_path(str(repre["_id"]), @@ -202,7 +206,7 
@@ class DeliveryOptionsDialog(QtWidgets.QDialog): args[0] = src_path if frame: anatomy_data["frame"] = frame - new_report_items, uploaded = process_single_file(*args) + new_report_items, uploaded = deliver_single_file(*args) report_items.update(new_report_items) self._update_progress(uploaded) else: # fallback for Pype2 and representations without files @@ -211,9 +215,9 @@ class DeliveryOptionsDialog(QtWidgets.QDialog): repre["context"]["frame"] = len(str(frame)) * "#" if not frame: - new_report_items, uploaded = process_single_file(*args) + new_report_items, uploaded = deliver_single_file(*args) else: - new_report_items, uploaded = process_sequence(*args) + new_report_items, uploaded = deliver_sequence(*args) report_items.update(new_report_items) self._update_progress(uploaded) @@ -263,8 +267,9 @@ class DeliveryOptionsDialog(QtWidgets.QDialog): def _prepare_label(self): """Provides text with no of selected files and their size.""" - label = "{} files, size {}".format(self.files_selected, - sizeof_fmt(self.size_selected)) + label = "{} files, size {}".format( + self.files_selected, + format_file_size(self.size_selected)) return label def _get_selected_repres(self): diff --git a/openpype/plugins/load/open_file.py b/openpype/plugins/load/open_file.py index f21cd07c7f..00b2ecd7c5 100644 --- a/openpype/plugins/load/open_file.py +++ b/openpype/plugins/load/open_file.py @@ -15,8 +15,8 @@ def open(filepath): subprocess.call(('xdg-open', filepath)) -class Openfile(load.LoaderPlugin): - """Open Image Sequence with system default""" +class OpenFile(load.LoaderPlugin): + """Open Image Sequence or Video with system default""" families = ["render2d"] representations = ["*"] @@ -27,32 +27,10 @@ class Openfile(load.LoaderPlugin): color = "orange" def load(self, context, name, namespace, data): - import clique - directory = os.path.dirname(self.fname) - pattern = clique.PATTERNS["frames"] + path = self.fname + if not os.path.exists(path): + raise RuntimeError("File not found: {}".format(path)) - files = os.listdir(directory) - representation = context["representation"] - - ext = representation["name"] - path = representation["data"]["path"] - - if ext in ["#"]: - collections, remainder = clique.assemble(files, - patterns=[pattern], - minimum_items=1) - - seqeunce = collections[0] - - first_image = list(seqeunce)[0] - filepath = os.path.normpath(os.path.join(directory, first_image)) - else: - file = [f for f in files - if ext in f - if "#" not in f][0] - filepath = os.path.normpath(os.path.join(directory, file)) - - self.log.info("Opening : {}".format(filepath)) - - open(filepath) + self.log.info("Opening : {}".format(path)) + open(path) diff --git a/openpype/plugins/publish/collect_audio.py b/openpype/plugins/publish/collect_audio.py new file mode 100644 index 0000000000..7d53b24e54 --- /dev/null +++ b/openpype/plugins/publish/collect_audio.py @@ -0,0 +1,105 @@ +import pyblish.api + +from openpype.client import ( + get_last_version_by_subset_name, + get_representations, +) +from openpype.pipeline import ( + legacy_io, + get_representation_path, +) + + +class CollectAudio(pyblish.api.InstancePlugin): + """Collect asset's last published audio. 
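# --- Editor's sketch of the anatomy-aware path helper the loaders above now
# share ('repre_doc' stands for an already queried representation document):
from openpype.pipeline import Anatomy
from openpype.pipeline.load import (
    get_representation_path_with_anatomy,
    InvalidRepresentationContext,
)

anatomy = Anatomy("demo_project")  # hypothetical project name
try:
    repre_path = get_representation_path_with_anatomy(repre_doc, anatomy)
except InvalidRepresentationContext:
    # Raised when the representation's stored context cannot be filled
    # into the anatomy template
    repre_path = None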
+ + The audio subset name searched for is defined in: + project settings > Collect Audio + """ + label = "Collect Asset Audio" + order = pyblish.api.CollectorOrder + 0.1 + families = ["review"] + hosts = [ + "nuke", + "maya", + "shell", + "hiero", + "premiere", + "harmony", + "traypublisher", + "standalonepublisher", + "fusion", + "tvpaint", + "resolve", + "webpublisher", + "aftereffects", + "flame", + "unreal" + ] + + audio_subset_name = "audioMain" + + def process(self, instance): + if instance.data.get("audio"): + self.log.info( + "Skipping audio collection. It is already collected" + ) + return + + # Add audio to instance if it exists. + self.log.info(( + "Searching for audio subset '{subset}'" + " in asset '{asset}'" + ).format( + subset=self.audio_subset_name, + asset=instance.data["asset"] + )) + + repre_doc = self._get_repre_doc(instance) + + # Add audio to instance if representation was found + if repre_doc: + instance.data["audio"] = [{ + "offset": 0, + "filename": get_representation_path(repre_doc) + }] + self.log.info("Audio data added to instance ...") + + def _get_repre_doc(self, instance): + cache = instance.context.data.get("__cache_asset_audio") + if cache is None: + cache = {} + instance.context.data["__cache_asset_audio"] = cache + asset_name = instance.data["asset"] + + # First try to get it from cache + if asset_name in cache: + return cache[asset_name] + + project_name = legacy_io.active_project() + + # Find latest version document + last_version_doc = get_last_version_by_subset_name( + project_name, + self.audio_subset_name, + asset_name=asset_name, + fields=["_id"] + ) + + repre_doc = None + if last_version_doc: + # Try to find its representation (expected there is only one) + repre_docs = list(get_representations( + project_name, version_ids=[last_version_doc["_id"]] + )) + if not repre_docs: + self.log.warning( + "Version document does not contain any representations" + ) + else: + repre_doc = repre_docs[0] + + # Update cache + cache[asset_name] = repre_doc + + return repre_doc diff --git a/openpype/plugins/publish/collect_otio_frame_ranges.py b/openpype/plugins/publish/collect_otio_frame_ranges.py index 40e89e29bc..cfb0318950 100644 --- a/openpype/plugins/publish/collect_otio_frame_ranges.py +++ b/openpype/plugins/publish/collect_otio_frame_ranges.py @@ -29,6 +29,7 @@ class CollectOtioFrameRanges(pyblish.api.InstancePlugin): # get basic variables otio_clip = instance.data["otioClip"] workfile_start = instance.data["workfileFrameStart"] + workfile_source_duration = instance.data.get("notRetimedFramerange") # get ranges otio_tl_range = otio_clip.range_in_parent() @@ -54,6 +55,11 @@ class CollectOtioFrameRanges(pyblish.api.InstancePlugin): frame_end = frame_start + otio.opentime.to_frames( otio_tl_range.duration, otio_tl_range.duration.rate) - 1 + # In case of a retimed clip whose frame range should not be retimed + if workfile_source_duration: + frame_end = frame_start + otio.opentime.to_frames( + otio_src_range.duration, otio_src_range.duration.rate) - 1 + data = { "frameStart": frame_start, "frameEnd": frame_end, diff --git a/openpype/plugins/publish/extract_burnin.py b/openpype/plugins/publish/extract_burnin.py index 88093fb92f..4179199317 100644 --- a/openpype/plugins/publish/extract_burnin.py +++ b/openpype/plugins/publish/extract_burnin.py @@ -8,10 +8,10 @@ import shutil import clique import six -import pyblish +import pyblish.api -import openpype -import openpype.api +from openpype import resources, PACKAGE_DIR +from openpype.pipeline import publish from
openpype.lib import ( run_openpype_process, @@ -23,7 +23,7 @@ from openpype.lib import ( ) -class ExtractBurnin(openpype.api.Extractor): +class ExtractBurnin(publish.Extractor): """ Extractor to create video with pre-defined burnins from existing extracted video representation. @@ -400,7 +400,7 @@ class ExtractBurnin(openpype.api.Extractor): # Use OpenPype default font if not font_filepath: - font_filepath = openpype.api.resources.get_liberation_font_path() + font_filepath = resources.get_liberation_font_path() burnin_options["font"] = font_filepath @@ -488,12 +488,6 @@ class ExtractBurnin(openpype.api.Extractor): "frame_end_handle": frame_end_handle } - # use explicit username for webpublishes as rewriting - # OPENPYPE_USERNAME might have side effects - webpublish_user_name = os.environ.get("WEBPUBLISH_OPENPYPE_USERNAME") - if webpublish_user_name: - burnin_data["username"] = webpublish_user_name - self.log.debug( "Basic burnin_data: {}".format(json.dumps(burnin_data, indent=4)) ) @@ -981,7 +975,7 @@ class ExtractBurnin(openpype.api.Extractor): """Return path to python script for burnin processing.""" scriptpath = os.path.normpath( os.path.join( - openpype.PACKAGE_DIR, + PACKAGE_DIR, "scripts", "otio_burnin.py" ) diff --git a/openpype/plugins/publish/extract_otio_file.py b/openpype/plugins/publish/extract_otio_file.py index 4d310ce109..c692205d81 100644 --- a/openpype/plugins/publish/extract_otio_file.py +++ b/openpype/plugins/publish/extract_otio_file.py @@ -1,10 +1,11 @@ import os import pyblish.api -import openpype.api import opentimelineio as otio +from openpype.pipeline import publish -class ExtractOTIOFile(openpype.api.Extractor): + +class ExtractOTIOFile(publish.Extractor): """ Extractor exporting OTIO file """ diff --git a/openpype/plugins/publish/extract_otio_review.py b/openpype/plugins/publish/extract_otio_review.py index 2ce5323468..169ff9e136 100644 --- a/openpype/plugins/publish/extract_otio_review.py +++ b/openpype/plugins/publish/extract_otio_review.py @@ -18,7 +18,12 @@ import os import clique import opentimelineio as otio from pyblish import api -import openpype + +from openpype.lib import ( + get_ffmpeg_tool_path, + run_subprocess, +) +from openpype.pipeline import publish from openpype.pipeline.editorial import ( otio_range_to_frame_range, trim_media_range, @@ -28,7 +33,7 @@ ) -class ExtractOTIOReview(openpype.api.Extractor): +class ExtractOTIOReview(publish.Extractor): """ Extract OTIO timeline into one concatenated image sequence file.
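# --- Editor's sketch of the relocated ffmpeg helpers the OTIO extractors
# now import from 'openpype.lib' (the file paths are hypothetical):
from openpype.lib import get_ffmpeg_tool_path, run_subprocess

ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")
command = [ffmpeg_path, "-y", "-i", "input.mov", "output.mov"]
run_subprocess(command)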
@@ -334,7 +339,7 @@ class ExtractOTIOReview(openpype.api.Extractor): otio.time.TimeRange: trimmed available range """ # get rendering app path - ffmpeg_path = openpype.lib.get_ffmpeg_tool_path("ffmpeg") + ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") # create path and frame start to destination output_path, out_frame_start = self._get_ffmpeg_output() @@ -397,7 +402,7 @@ class ExtractOTIOReview(openpype.api.Extractor): ]) # execute self.log.debug("Executing: {}".format(" ".join(command))) - output = openpype.api.run_subprocess( + output = run_subprocess( command, logger=self.log ) self.log.debug("Output: {}".format(output)) diff --git a/openpype/plugins/publish/extract_otio_trimming_video.py b/openpype/plugins/publish/extract_otio_trimming_video.py index 19625fa568..70726338aa 100644 --- a/openpype/plugins/publish/extract_otio_trimming_video.py +++ b/openpype/plugins/publish/extract_otio_trimming_video.py @@ -6,18 +6,24 @@ Requires: */ import os -from pyblish import api -import openpype from copy import deepcopy + +import pyblish.api + +from openpype.lib import ( + get_ffmpeg_tool_path, + run_subprocess, +) +from openpype.pipeline import publish from openpype.pipeline.editorial import frames_to_seconds -class ExtractOTIOTrimmingVideo(openpype.api.Extractor): +class ExtractOTIOTrimmingVideo(publish.Extractor): """ Trim video file longer than required length """ - order = api.ExtractorOrder + order = pyblish.api.ExtractorOrder label = "Extract OTIO trim longer video" families = ["trim"] hosts = ["resolve", "hiero", "flame"] @@ -70,7 +76,7 @@ class ExtractOTIOTrimmingVideo(openpype.api.Extractor): """ # get rendering app path - ffmpeg_path = openpype.lib.get_ffmpeg_tool_path("ffmpeg") + ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") # create path to destination output_path = self._get_ffmpeg_output(input_file_path) @@ -96,7 +102,7 @@ class ExtractOTIOTrimmingVideo(openpype.api.Extractor): # execute self.log.debug("Executing: {}".format(" ".join(command))) - output = openpype.api.run_subprocess( + output = run_subprocess( command, logger=self.log ) self.log.debug("Output: {}".format(output)) diff --git a/openpype/plugins/publish/extract_review_slate.py b/openpype/plugins/publish/extract_review_slate.py index 69043ee261..fca3d96ca6 100644 --- a/openpype/plugins/publish/extract_review_slate.py +++ b/openpype/plugins/publish/extract_review_slate.py @@ -1,19 +1,22 @@ import os -from pprint import pformat import re -import openpype.api -import pyblish +from pprint import pformat + +import pyblish.api + from openpype.lib import ( path_to_subprocess_arg, + run_subprocess, get_ffmpeg_tool_path, get_ffprobe_data, get_ffprobe_streams, get_ffmpeg_codec_args, get_ffmpeg_format_args, ) +from openpype.pipeline import publish -class ExtractReviewSlate(openpype.api.Extractor): +class ExtractReviewSlate(publish.Extractor): """ Will add slate frame at the start of the video files """ @@ -158,7 +161,7 @@ class ExtractReviewSlate(openpype.api.Extractor): input_args.extend([ "-loop", "1", - "-i", openpype.lib.path_to_subprocess_arg(slate_path), + "-i", path_to_subprocess_arg(slate_path), "-r", str(input_frame_rate), "-frames:v", "1", ]) @@ -267,7 +270,7 @@ class ExtractReviewSlate(openpype.api.Extractor): self.log.debug( "Slate Executing: {}".format(slate_subprocess_cmd) ) - openpype.api.run_subprocess( + run_subprocess( slate_subprocess_cmd, shell=True, logger=self.log ) @@ -348,7 +351,7 @@ class ExtractReviewSlate(openpype.api.Extractor): "Executing concat filter: {}".format (" ".join(concat_args)) ) -
openpype.api.run_subprocess( + run_subprocess( concat_args, logger=self.log ) @@ -533,7 +536,7 @@ class ExtractReviewSlate(openpype.api.Extractor): self.log.debug("Silent Slate Executing: {}".format( " ".join(slate_silent_args) )) - openpype.api.run_subprocess( + run_subprocess( slate_silent_args, logger=self.log ) diff --git a/openpype/plugins/publish/extract_trim_video_audio.py b/openpype/plugins/publish/extract_trim_video_audio.py index 06817c4b5a..b951136391 100644 --- a/openpype/plugins/publish/extract_trim_video_audio.py +++ b/openpype/plugins/publish/extract_trim_video_audio.py @@ -1,14 +1,16 @@ import os +from pprint import pformat + import pyblish.api -import openpype.api from openpype.lib import ( get_ffmpeg_tool_path, + run_subprocess, ) -from pprint import pformat +from openpype.pipeline import publish -class ExtractTrimVideoAudio(openpype.api.Extractor): +class ExtractTrimVideoAudio(publish.Extractor): """Trim with ffmpeg "mov" and "wav" files.""" # must be before `ExtractThumbnailSP` @@ -98,7 +100,7 @@ class ExtractTrimVideoAudio(openpype.api.Extractor): joined_args = " ".join(ffmpeg_args) self.log.info(f"Processing: {joined_args}") - openpype.api.run_subprocess( + run_subprocess( ffmpeg_args, logger=self.log ) diff --git a/openpype/plugins/publish/integrate.py b/openpype/plugins/publish/integrate.py index f99c718f8a..8972e6ab70 100644 --- a/openpype/plugins/publish/integrate.py +++ b/openpype/plugins/publish/integrate.py @@ -5,6 +5,9 @@ import copy import clique import six +from bson.objectid import ObjectId +import pyblish.api + from openpype.client.operations import ( OperationsSession, new_subset_document, @@ -14,8 +17,6 @@ from openpype.client.operations import ( prepare_version_update_data, prepare_representation_update_data, ) -from bson.objectid import ObjectId -import pyblish.api from openpype.client import ( get_representations, @@ -23,10 +24,12 @@ from openpype.client import ( get_version_by_name, ) from openpype.lib import source_hash -from openpype.lib.profiles_filtering import filter_profiles from openpype.lib.file_transaction import FileTransaction from openpype.pipeline import legacy_io -from openpype.pipeline.publish import KnownPublishError +from openpype.pipeline.publish import ( + KnownPublishError, + get_publish_template_name, +) log = logging.getLogger(__name__) @@ -135,7 +138,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin): # the database even if not used by the destination template db_representation_context_keys = [ "project", "asset", "task", "subset", "version", "representation", - "family", "hierarchy", "username", "output" + "family", "hierarchy", "username", "user", "output" ] skip_host_families = [] @@ -792,52 +795,26 @@ class IntegrateAsset(pyblish.api.InstancePlugin): def get_template_name(self, instance): """Return anatomy template name to use for integration""" - # Define publish template name from profiles - filter_criteria = self.get_profile_filter_criteria(instance) - template_name_profiles = self._get_template_name_profiles(instance) - profile = filter_profiles( - template_name_profiles, - filter_criteria, - logger=self.log - ) - - if profile: - return profile["template_name"] - return self.default_template_name - - def _get_template_name_profiles(self, instance): - """Receive profiles for publish template keys. - - Reuse template name profiles from legacy integrator. 
Goal is to move - the profile settings out of plugin settings but until that happens we - want to be able set it at one place and don't break backwards - compatibility (more then once). - """ - - return ( - instance.context.data["project_settings"] - ["global"] - ["publish"] - ["IntegrateAssetNew"] - ["template_name_profiles"] - ) - - def get_profile_filter_criteria(self, instance): - """Return filter criteria for `filter_profiles`""" # Anatomy data is pre-filled by Collectors - anatomy_data = instance.data["anatomyData"] + + project_name = legacy_io.active_project() # Task can be optional in anatomy data - task = anatomy_data.get("task", {}) + host_name = instance.context.data["hostName"] + anatomy_data = instance.data["anatomyData"] + family = anatomy_data["family"] + task_info = anatomy_data.get("task") or {} - # Return filter criteria - return { - "families": anatomy_data["family"], - "tasks": task.get("name"), - "task_types": task.get("type"), - "hosts": instance.context.data["hostName"], - } + return get_publish_template_name( + project_name, + host_name, + family, + task_name=task_info.get("name"), + task_type=task_info.get("type"), + project_settings=instance.context.data["project_settings"], + logger=self.log + ) def get_rootless_path(self, anatomy, path): """Returns, if possible, path without absolute portion from root diff --git a/openpype/plugins/publish/integrate_hero_version.py b/openpype/plugins/publish/integrate_hero_version.py index 7d698ff98d..96d768e1c1 100644 --- a/openpype/plugins/publish/integrate_hero_version.py +++ b/openpype/plugins/publish/integrate_hero_version.py @@ -14,14 +14,12 @@ from openpype.client import ( get_archived_representations, get_representations, ) -from openpype.lib import ( - create_hard_link, - filter_profiles -) +from openpype.lib import create_hard_link from openpype.pipeline import ( schema, legacy_io, ) +from openpype.pipeline.publish import get_publish_template_name class IntegrateHeroVersion(pyblish.api.InstancePlugin): @@ -46,7 +44,7 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin): ignored_representation_names = [] db_representation_context_keys = [ "project", "asset", "task", "subset", "representation", - "family", "hierarchy", "task", "username" + "family", "hierarchy", "task", "username", "user" ] # QUESTION/TODO this process should happen on server if crashed due to # permissions error on files (files were used or user didn't have perms) @@ -68,10 +66,11 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin): ) return - template_key = self._get_template_key(instance) - anatomy = instance.context.data["anatomy"] project_name = anatomy.project_name + + template_key = self._get_template_key(project_name, instance) + if template_key not in anatomy.templates: self.log.warning(( "!!! Anatomy of project \"{}\" does not have set" @@ -527,30 +526,24 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin): return publish_folder - def _get_template_key(self, instance): + def _get_template_key(self, project_name, instance): anatomy_data = instance.data["anatomyData"] - task_data = anatomy_data.get("task") or {} - task_name = task_data.get("name") - task_type = task_data.get("type") + task_info = anatomy_data.get("task") or {} host_name = instance.context.data["hostName"] + # TODO raise error if Hero not set? 
family = self.main_family_from_instance(instance) - key_values = { - "families": family, - "task_names": task_name, - "task_types": task_type, - "hosts": host_name - } - profile = filter_profiles( - self.template_name_profiles, - key_values, + + return get_publish_template_name( + project_name, + host_name, + family, + task_info.get("name"), + task_info.get("type"), + project_settings=instance.context.data["project_settings"], + hero=True, logger=self.log ) - if profile: - template_name = profile["template_name"] - else: - template_name = self._default_template_name - return template_name def main_family_from_instance(self, instance): """Returns main family of entered instance.""" diff --git a/openpype/plugins/publish/integrate_legacy.py b/openpype/plugins/publish/integrate_legacy.py index b90b61f587..536ab83f2c 100644 --- a/openpype/plugins/publish/integrate_legacy.py +++ b/openpype/plugins/publish/integrate_legacy.py @@ -15,7 +15,6 @@ from bson.objectid import ObjectId from pymongo import DeleteOne, InsertOne import pyblish.api -import openpype.api from openpype.client import ( get_asset_by_name, get_subset_by_id, @@ -25,14 +24,17 @@ from openpype.client import ( get_representations, get_archived_representations, ) -from openpype.lib.profiles_filtering import filter_profiles from openpype.lib import ( prepare_template_data, create_hard_link, StringTemplate, - TemplateUnsolved + TemplateUnsolved, + source_hash, + filter_profiles, + get_local_site_id, ) from openpype.pipeline import legacy_io +from openpype.pipeline.publish import get_publish_template_name # this is needed until speedcopy for linux is fixed if sys.platform == "win32": @@ -127,7 +129,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): exclude_families = ["render.farm"] db_representation_context_keys = [ "project", "asset", "task", "subset", "version", "representation", - "family", "hierarchy", "task", "username" + "family", "hierarchy", "task", "username", "user" ] default_template_name = "publish" @@ -138,7 +140,6 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): integrated_file_sizes = {} # Attributes set by settings - template_name_profiles = None subset_grouping_profiles = None def process(self, instance): @@ -388,22 +389,16 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): family = self.main_family_from_instance(instance) - key_values = { - "families": family, - "tasks": task_name, - "hosts": instance.context.data["hostName"], - "task_types": task_type - } - profile = filter_profiles( - self.template_name_profiles, - key_values, + template_name = get_publish_template_name( + project_name, + instance.context.data["hostName"], + family, + task_name=task_info.get("name"), + task_type=task_info.get("type"), + project_settings=instance.context.data["project_settings"], logger=self.log ) - template_name = "publish" - if profile: - template_name = profile["template_name"] - published_representations = {} for idx, repre in enumerate(repres): published_files = [] @@ -1058,7 +1053,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): for _src, dest in resources: path = self.get_rootless_path(anatomy, dest) dest = self.get_dest_temp_url(dest) - file_hash = openpype.api.source_hash(dest) + file_hash = source_hash(dest) if self.TMP_FILE_EXT and \ ',{}'.format(self.TMP_FILE_EXT) in file_hash: file_hash = file_hash.replace(',{}'.format(self.TMP_FILE_EXT), @@ -1168,7 +1163,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): def _get_sites(self, sync_project_presets): """Returns tuple (local_site, 
remote_site)""" - local_site_id = openpype.api.get_local_site_id() + local_site_id = get_local_site_id() local_site = sync_project_presets["config"]. \ get("active_site", "studio").strip() diff --git a/openpype/plugins/publish/integrate_thumbnail.py b/openpype/plugins/publish/integrate_thumbnail.py index 8ae0dd2d60..d86cec10ad 100644 --- a/openpype/plugins/publish/integrate_thumbnail.py +++ b/openpype/plugins/publish/integrate_thumbnail.py @@ -6,10 +6,9 @@ import copy import six import pyblish.api -from bson.objectid import ObjectId from openpype.client import get_version_by_id -from openpype.pipeline import legacy_io +from openpype.client.operations import OperationsSession, new_thumbnail_doc class IntegrateThumbnails(pyblish.api.InstancePlugin): @@ -24,13 +23,9 @@ class IntegrateThumbnails(pyblish.api.InstancePlugin): ] def process(self, instance): - - if not os.environ.get("AVALON_THUMBNAIL_ROOT"): - self.log.warning( - "AVALON_THUMBNAIL_ROOT is not set." - " Skipping thumbnail integration." - ) - return + env_key = "AVALON_THUMBNAIL_ROOT" + thumbnail_root_format_key = "{thumbnail_root}" + thumbnail_root = os.environ.get(env_key) or "" published_repres = instance.data.get("published_representations") if not published_repres: @@ -51,6 +46,16 @@ class IntegrateThumbnails(pyblish.api.InstancePlugin): ).format(project_name)) return + thumbnail_template = anatomy.templates["publish"]["thumbnail"] + if ( + not thumbnail_root + and thumbnail_root_format_key in thumbnail_template + ): + self.log.warning(( + "{} is not set. Skipping thumbnail integration." + ).format(env_key)) + return + thumb_repre = None thumb_repre_anatomy_data = None for repre_info in published_repres.values(): @@ -66,10 +71,6 @@ class IntegrateThumbnails(pyblish.api.InstancePlugin): ) return - legacy_io.install() - - thumbnail_template = anatomy.templates["publish"]["thumbnail"] - version = get_version_by_id(project_name, thumb_repre["parent"]) if not version: raise AssertionError( @@ -88,14 +89,15 @@ class IntegrateThumbnails(pyblish.api.InstancePlugin): filename, file_extension = os.path.splitext(src_full_path) # Create id for mongo entity now to fill anatomy template - thumbnail_id = ObjectId() + thumbnail_doc = new_thumbnail_doc() + thumbnail_id = thumbnail_doc["_id"] # Prepare anatomy template fill data template_data = copy.deepcopy(thumb_repre_anatomy_data) template_data.update({ "_id": str(thumbnail_id), - "thumbnail_root": os.environ.get("AVALON_THUMBNAIL_ROOT"), "ext": file_extension[1:], + "thumbnail_root": thumbnail_root, "thumbnail_type": "thumbnail" }) @@ -117,8 +119,8 @@ class IntegrateThumbnails(pyblish.api.InstancePlugin): shutil.copy(src_full_path, dst_full_path) # Clean template data from keys that are dynamic - template_data.pop("_id") - template_data.pop("thumbnail_root") + for key in ("_id", "thumbnail_root"): + template_data.pop(key, None) repre_context = template_filled.used_values for key in self.required_context_keys: @@ -127,34 +129,40 @@ class IntegrateThumbnails(pyblish.api.InstancePlugin): continue repre_context[key] = template_data[key] - thumbnail_entity = { - "_id": thumbnail_id, - "type": "thumbnail", - "schema": "openpype:thumbnail-1.0", - "data": { - "template": thumbnail_template, - "template_data": repre_context - } + op_session = OperationsSession() + + thumbnail_doc["data"] = { + "template": thumbnail_template, + "template_data": repre_context } - # Create thumbnail entity - legacy_io.insert_one(thumbnail_entity) - self.log.debug( - "Creating entity in database 
{}".format(str(thumbnail_entity)) + op_session.create_entity( + project_name, thumbnail_doc["type"], thumbnail_doc ) + # Create thumbnail entity + self.log.debug( + "Creating entity in database {}".format(str(thumbnail_doc)) + ) + # Set thumbnail id for version - legacy_io.update_many( - {"_id": version["_id"]}, - {"$set": {"data.thumbnail_id": thumbnail_id}} + op_session.update_entity( + project_name, + version["type"], + version["_id"], + {"data.thumbnail_id": thumbnail_id} ) self.log.debug("Setting thumbnail for version \"{}\" <{}>".format( version["name"], str(version["_id"]) )) asset_entity = instance.data["assetEntity"] - legacy_io.update_many( - {"_id": asset_entity["_id"]}, - {"$set": {"data.thumbnail_id": thumbnail_id}} + op_session.update_entity( + project_name, + asset_entity["type"], + asset_entity["_id"], + {"data.thumbnail_id": thumbnail_id} ) self.log.debug("Setting thumbnail for asset \"{}\" <{}>".format( asset_entity["name"], str(version["_id"]) )) + + op_session.commit() diff --git a/openpype/plugins/publish/validate_resources.py b/openpype/plugins/publish/validate_resources.py index 644977ecd4..7911c70c2d 100644 --- a/openpype/plugins/publish/validate_resources.py +++ b/openpype/plugins/publish/validate_resources.py @@ -1,7 +1,6 @@ -import pyblish.api -import openpype.api - import os +import pyblish.api +from openpype.pipeline.publish import ValidateContentsOrder class ValidateResources(pyblish.api.InstancePlugin): @@ -17,7 +16,7 @@ class ValidateResources(pyblish.api.InstancePlugin): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder label = "Resources" def process(self, instance): diff --git a/openpype/plugins/publish/validate_unique_names.py b/openpype/plugins/publish/validate_unique_names.py index 459c90e6c1..33a460f7cc 100644 --- a/openpype/plugins/publish/validate_unique_names.py +++ b/openpype/plugins/publish/validate_unique_names.py @@ -3,6 +3,7 @@ from maya import cmds import pyblish.api import openpype.api import openpype.hosts.maya.api.action +from openpype.pipeline.publish import ValidateContentsOrder class ValidateUniqueNames(pyblish.api.Validator): @@ -12,7 +13,7 @@ class ValidateUniqueNames(pyblish.api.Validator): """ - order = openpype.api.ValidateContentsOrder + order = ValidateContentsOrder hosts = ["maya"] families = ["model"] label = "Unique transform name" diff --git a/openpype/pype_commands.py b/openpype/pype_commands.py index fe46a4bc54..85561495fd 100644 --- a/openpype/pype_commands.py +++ b/openpype/pype_commands.py @@ -4,6 +4,7 @@ import os import sys import json import time +import signal class PypeCommands: @@ -315,8 +316,12 @@ class PypeCommands: pytest.main(args) def syncserver(self, active_site): - """Start running sync_server in background.""" - import signal + """Start running sync_server in background. + + This functionality is available in directly in module cli commands. 
+ `~/openpype_console module sync_server syncservice` + """ + os.environ["OPENPYPE_LOCAL_ID"] = active_site def signal_handler(sig, frame): diff --git a/openpype/settings/defaults/project_settings/blender.json b/openpype/settings/defaults/project_settings/blender.json index a7262dcb5d..2720e0286d 100644 --- a/openpype/settings/defaults/project_settings/blender.json +++ b/openpype/settings/defaults/project_settings/blender.json @@ -2,5 +2,69 @@ "workfile_builder": { "create_first_version": false, "custom_templates": [] + }, + "publish": { + "ValidateCameraZeroKeyframe": { + "enabled": true, + "optional": true, + "active": true + }, + "ValidateMeshHasUvs": { + "enabled": true, + "optional": true, + "active": true + }, + "ValidateMeshNoNegativeScale": { + "enabled": true, + "optional": false, + "active": true + }, + "ValidateTransformZero": { + "enabled": true, + "optional": false, + "active": true + }, + "ExtractBlend": { + "enabled": true, + "optional": true, + "active": true, + "families": [ + "model", + "camera", + "rig", + "action", + "layout" + ] + }, + "ExtractBlendAnimation": { + "enabled": true, + "optional": true, + "active": true + }, + "ExtractCamera": { + "enabled": true, + "optional": true, + "active": true + }, + "ExtractFBX": { + "enabled": true, + "optional": true, + "active": false + }, + "ExtractAnimationFBX": { + "enabled": true, + "optional": true, + "active": false + }, + "ExtractABC": { + "enabled": true, + "optional": true, + "active": false + }, + "ExtractLayout": { + "enabled": true, + "optional": true, + "active": false + } } -} \ No newline at end of file +} diff --git a/openpype/settings/defaults/project_settings/ftrack.json b/openpype/settings/defaults/project_settings/ftrack.json index 41bed7751b..09b194e21c 100644 --- a/openpype/settings/defaults/project_settings/ftrack.json +++ b/openpype/settings/defaults/project_settings/ftrack.json @@ -56,7 +56,7 @@ "Not Ready" ], "__ignore__": [ - "in prgoress", + "in progress", "omitted", "on hold" ] diff --git a/openpype/settings/defaults/project_settings/global.json b/openpype/settings/defaults/project_settings/global.json index 0ff9363ba7..14957d2b48 100644 --- a/openpype/settings/defaults/project_settings/global.json +++ b/openpype/settings/defaults/project_settings/global.json @@ -3,6 +3,10 @@ "CollectAnatomyInstanceData": { "follow_workfile_version": false }, + "CollectAudio": { + "enabled": false, + "audio_subset_name": "audioMain" + }, "CollectSceneVersion": { "hosts": [ "aftereffects", @@ -414,6 +418,10 @@ "filter_families": [] } ] + }, + "publish": { + "template_name_profiles": [], + "hero_template_name_profiles": [] } }, "project_folder_structure": "{\"__project_root__\": {\"prod\": {}, \"resources\": {\"footage\": {\"plates\": {}, \"offline\": {}}, \"audio\": {}, \"art_dept\": {}}, \"editorial\": {}, \"assets\": {\"characters\": {}, \"locations\": {}}, \"shots\": {}}}", diff --git a/openpype/settings/defaults/project_settings/houdini.json b/openpype/settings/defaults/project_settings/houdini.json index 911bf82d9b..b7d2104ba1 100644 --- a/openpype/settings/defaults/project_settings/houdini.json +++ b/openpype/settings/defaults/project_settings/houdini.json @@ -47,6 +47,18 @@ } }, "publish": { + "ValidateWorkfilePaths": { + "enabled": true, + "optional": true, + "node_types": [ + "file", + "alembic" + ], + "prohibited_vars": [ + "$HIP", + "$JOB" + ] + }, "ValidateContainers": { "enabled": true, "optional": true, diff --git a/openpype/settings/defaults/project_settings/kitsu.json 
b/openpype/settings/defaults/project_settings/kitsu.json index ba02d8d259..3a9723b9c0 100644 --- a/openpype/settings/defaults/project_settings/kitsu.json +++ b/openpype/settings/defaults/project_settings/kitsu.json @@ -1,8 +1,4 @@ { - "entities_root": { - "assets": "Assets", - "shots": "Shots" - }, "entities_naming_pattern": { "episode": "E##", "sequence": "SQ##", diff --git a/openpype/settings/defaults/project_settings/maya.json b/openpype/settings/defaults/project_settings/maya.json index 28f6d23e4d..38063bc2c1 100644 --- a/openpype/settings/defaults/project_settings/maya.json +++ b/openpype/settings/defaults/project_settings/maya.json @@ -980,4 +980,4 @@ "ValidateNoAnimation": false } } -} +} \ No newline at end of file diff --git a/openpype/settings/defaults/project_settings/nuke.json b/openpype/settings/defaults/project_settings/nuke.json index f40ec1fe9e..c3eda2cbb4 100644 --- a/openpype/settings/defaults/project_settings/nuke.json +++ b/openpype/settings/defaults/project_settings/nuke.json @@ -325,5 +325,8 @@ } ] }, + "templated_workfile_build": { + "profiles": [] + }, "filters": {} } \ No newline at end of file diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json b/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json index af09329a03..4c72ebda2f 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json @@ -12,6 +12,10 @@ "workfile_builder/builder_on_start", "workfile_builder/profiles" ] + }, + { + "type": "schema", + "name": "schema_blender_publish" } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_houdini.json b/openpype/settings/entities/schemas/projects_schema/schema_project_houdini.json index cad99dde22..d8728c0f4b 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_houdini.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_houdini.json @@ -10,22 +10,8 @@ "name": "schema_houdini_create" }, { - "type": "dict", - "collapsible": true, - "key": "publish", - "label": "Publish plugins", - "children": [ - { - "type": "schema_template", - "name": "template_publish_plugin", - "template_data": [ - { - "key": "ValidateContainers", - "label": "ValidateContainers" - } - ] - } - ] + "type": "schema", + "name": "schema_houdini_publish" } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_kitsu.json b/openpype/settings/entities/schemas/projects_schema/schema_project_kitsu.json index 014a1b7886..fb47670e74 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_kitsu.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_kitsu.json @@ -5,23 +5,6 @@ "collapsible": true, "is_file": true, "children": [ - { - "type": "dict", - "key": "entities_root", - "label": "Entities root folder", - "children": [ - { - "type": "text", - "key": "assets", - "label": "Assets:" - }, - { - "type": "text", - "key": "shots", - "label": "Shots (includes Episodes & Sequences if any):" - } - ] - }, { "type": "dict", "key": "entities_naming_pattern", diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json b/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json index 03d67a57ba..7cf82b9e69 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json +++ 
b/openpype/settings/entities/schemas/projects_schema/schema_project_nuke.json @@ -308,6 +308,10 @@ "type": "schema_template", "name": "template_workfile_options" }, + { + "type": "schema", + "name": "schema_templated_workfile_build" + }, { "type": "schema", "name": "schema_publish_gui_filter" diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json new file mode 100644 index 0000000000..58428ad60a --- /dev/null +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json @@ -0,0 +1,113 @@ +{ + "type": "dict", + "collapsible": true, + "key": "publish", + "label": "Publish plugins", + "children": [ + { + "type": "label", + "label": "Validators" + }, + { + "type": "schema_template", + "name": "template_publish_plugin", + "template_data": [ + { + "key": "ValidateCameraZeroKeyframe", + "label": "Validate Camera Zero Keyframe" + } + ] + }, + { + "type": "collapsible-wrap", + "label": "Model", + "children": [ + { + "type": "schema_template", + "name": "template_publish_plugin", + "template_data": [ + { + "key": "ValidateMeshHasUvs", + "label": "Validate Mesh Has UVs" + }, + { + "key": "ValidateMeshNoNegativeScale", + "label": "Validate Mesh No Negative Scale" + }, + { + "key": "ValidateTransformZero", + "label": "Validate Transform Zero" + } + ] + } + ] + }, + { + "type": "splitter" + }, + { + "type": "label", + "label": "Extractors" + }, + { + "type": "dict", + "collapsible": true, + "key": "ExtractBlend", + "label": "Extract Blend", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" + }, + { + "key": "families", + "label": "Families", + "type": "list", + "object_type": "text" + } + ] + }, + { + "type": "schema_template", + "name": "template_publish_plugin", + "template_data": [ + { + "key": "ExtractFBX", + "label": "Extract FBX (model and rig)" + }, + { + "key": "ExtractABC", + "label": "Extract ABC (model and pointcache)" + }, + { + "key": "ExtractBlendAnimation", + "label": "Extract Animation as Blend" + }, + { + "key": "ExtractAnimationFBX", + "label": "Extract Animation as FBX" + }, + { + "key": "ExtractCamera", + "label": "Extract FBX Camera as FBX" + }, + { + "key": "ExtractLayout", + "label": "Extract Layout as JSON" + } + ] + } + ] +} diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json index e1aa230b49..297f96aa8c 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_publish.json @@ -18,6 +18,27 @@ } ] }, + { + "type": "dict", + "collapsible": true, + "checkbox_key": "enabled", + "key": "CollectAudio", + "label": "Collect Audio", + "is_group": true, + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "key": "audio_subset_name", + "label": "Name of audio variant", + "type": "text", + "placeholder": "audioMain" + } + ] + }, { "type": "dict", "collapsible": true, @@ -642,10 +663,14 @@ ] } }, + { + "type": "label", + "label": "NOTE: Publish template profiles settings were moved to Tools/Publish/Template name profiles. 
Please move values there." + }, { "type": "list", "key": "template_name_profiles", - "label": "Template name profiles", + "label": "Template name profiles (DEPRECATED)", "use_label_wrap": true, "object_type": { "type": "dict", @@ -750,10 +775,14 @@ "type": "list", "object_type": "text" }, + { + "type": "label", + "label": "NOTE: Hero publish template profiles settings were moved to Tools/Publish/Hero template name profiles. Please move values there." + }, { "type": "list", "key": "template_name_profiles", - "label": "Template name profiles", + "label": "Template name profiles (DEPRECATED)", "use_label_wrap": true, "object_type": { "type": "dict", diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json index f8c9482e5f..c919cd73c5 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_global_tools.json @@ -284,6 +284,102 @@ } } ] + }, + { + "type": "dict", + "key": "publish", + "label": "Publish", + "children": [ + { + "type": "label", + "label": "NOTE: For backwards compatibility the value can be empty, in which case the values from IntegrateAssetNew are used. This will change in the future, so please move all values here as soon as possible." + }, + { + "type": "list", + "key": "template_name_profiles", + "label": "Template name profiles", + "use_label_wrap": true, + "object_type": { + "type": "dict", + "children": [ + { + "key": "families", + "label": "Families", + "type": "list", + "object_type": "text" + }, + { + "type": "hosts-enum", + "key": "hosts", + "label": "Hosts", + "multiselection": true + }, + { + "key": "task_types", + "label": "Task types", + "type": "task-types-enum" + }, + { + "key": "task_names", + "label": "Task names", + "type": "list", + "object_type": "text" + }, + { + "type": "separator" + }, + { + "type": "text", + "key": "template_name", + "label": "Template name" + } + ] + } + }, + { + "type": "list", + "key": "hero_template_name_profiles", + "label": "Hero template name profiles", + "use_label_wrap": true, + "object_type": { + "type": "dict", + "children": [ + { + "key": "families", + "label": "Families", + "type": "list", + "object_type": "text" + }, + { + "type": "hosts-enum", + "key": "hosts", + "label": "Hosts", + "multiselection": true + }, + { + "key": "task_types", + "label": "Task types", + "type": "task-types-enum" + }, + { + "key": "task_names", + "label": "Task names", + "type": "list", + "object_type": "text" + }, + { + "type": "separator" + }, + { + "type": "text", + "key": "template_name", + "label": "Template name", + "tooltip": "Name of template from Anatomy templates" + } + ] + } + } + ] } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_publish.json new file mode 100644 index 0000000000..aa6eaf5164 --- /dev/null +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_houdini_publish.json @@ -0,0 +1,50 @@ +{ + "type": "dict", + "collapsible": true, + "key": "publish", + "label": "Publish plugins", + "children": [ + { + "type": "dict", + "collapsible": true, + "checkbox_key": "enabled", + "key": "ValidateWorkfilePaths", + "label": "Validate Workfile Paths", + "is_group": true, + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": 
"boolean", + "key": "optional", + "label": "Optional" + }, + { + "key": "node_types", + "label": "Node types", + "type": "list", + "object_type": "text" + }, + { + "key": "prohibited_vars", + "label": "Prohibited variables", + "type": "list", + "object_type": "text" + } + ] + }, + { + "type": "schema_template", + "name": "template_publish_plugin", + "template_data": [ + { + "key": "ValidateContainers", + "label": "ValidateContainers" + } + ] + } + ] +} \ No newline at end of file diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_templated_workfile_build.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_templated_workfile_build.json index a591facf98..99a29beb27 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_templated_workfile_build.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_templated_workfile_build.json @@ -17,7 +17,7 @@ "type": "task-types-enum" }, { - "key": "tasks", + "key": "task_names", "label": "Task names", "type": "list", "object_type": "text" diff --git a/openpype/tools/launcher/models.py b/openpype/tools/launcher/models.py index 6d40d21f96..6e3b531018 100644 --- a/openpype/tools/launcher/models.py +++ b/openpype/tools/launcher/models.py @@ -281,18 +281,25 @@ class ActionModel(QtGui.QStandardItemModel): if not action_item: return - action = action_item.data(ACTION_ROLE) - actual_data = self._prepare_compare_data(action) + actions = action_item.data(ACTION_ROLE) + if not isinstance(actions, list): + actions = [actions] + + action_actions_data = [ + self._prepare_compare_data(action) + for action in actions + ] stored = self.launcher_registry.get_item("force_not_open_workfile") - if is_checked: - stored.append(actual_data) - else: - final_values = [] - for config in stored: - if config != actual_data: - final_values.append(config) - stored = final_values + for actual_data in action_actions_data: + if is_checked: + stored.append(actual_data) + else: + final_values = [] + for config in stored: + if config != actual_data: + final_values.append(config) + stored = final_values self.launcher_registry.set_item("force_not_open_workfile", stored) self.launcher_registry._get_item.cache_clear() @@ -329,21 +336,24 @@ class ActionModel(QtGui.QStandardItemModel): item (QStandardItem) stored (list) of dict """ - action = item.data(ACTION_ROLE) - if not self.is_application_action(action): + + actions = item.data(ACTION_ROLE) + if not isinstance(actions, list): + actions = [actions] + + if not self.is_application_action(actions[0]): return False - actual_data = self._prepare_compare_data(action) + action_actions_data = [ + self._prepare_compare_data(action) + for action in actions + ] for config in stored: - if config == actual_data: + if config in action_actions_data: return True - return False def _prepare_compare_data(self, action): - if isinstance(action, list) and action: - action = action[0] - compare_data = {} if action and action.label: compare_data = { diff --git a/openpype/tools/launcher/widgets.py b/openpype/tools/launcher/widgets.py index 62599664fe..774ceb659d 100644 --- a/openpype/tools/launcher/widgets.py +++ b/openpype/tools/launcher/widgets.py @@ -312,11 +312,12 @@ class ActionBar(QtWidgets.QWidget): is_group = index.data(GROUP_ROLE) is_variant_group = index.data(VARIANT_GROUP_ROLE) + force_not_open_workfile = index.data(FORCE_NOT_OPEN_WORKFILE_ROLE) if not is_group and not is_variant_group: action = index.data(ACTION_ROLE) # Change data of application action if 
issubclass(action, ApplicationAction): - if index.data(FORCE_NOT_OPEN_WORKFILE_ROLE): + if force_not_open_workfile: action.data["start_last_workfile"] = False else: action.data.pop("start_last_workfile", None) @@ -385,10 +386,18 @@ class ActionBar(QtWidgets.QWidget): menu.addMenu(sub_menu) result = menu.exec_(QtGui.QCursor.pos()) - if result: - action = actions_mapping[result] - self._start_animation(index) - self.action_clicked.emit(action) + if not result: + return + + action = actions_mapping[result] + if issubclass(action, ApplicationAction): + if force_not_open_workfile: + action.data["start_last_workfile"] = False + else: + action.data.pop("start_last_workfile", None) + + self._start_animation(index) + self.action_clicked.emit(action) class ActionHistory(QtWidgets.QPushButton): diff --git a/openpype/tools/project_manager/project_manager/widgets.py b/openpype/tools/project_manager/project_manager/widgets.py index d0715f204d..4bc968347a 100644 --- a/openpype/tools/project_manager/project_manager/widgets.py +++ b/openpype/tools/project_manager/project_manager/widgets.py @@ -1,14 +1,13 @@ import re -from openpype.client import get_projects +from openpype.client import get_projects, create_project from .constants import ( NAME_ALLOWED_SYMBOLS, NAME_REGEX ) -from openpype.lib import create_project from openpype.client.operations import ( PROJECT_NAME_ALLOWED_SYMBOLS, - PROJECT_NAME_REGEX + PROJECT_NAME_REGEX, ) from openpype.style import load_stylesheet from openpype.pipeline import AvalonMongoDB @@ -266,7 +265,7 @@ class CreateProjectDialog(QtWidgets.QDialog): project_name = self.project_name_input.text() project_code = self.project_code_input.text() library_project = self.library_project_input.isChecked() - create_project(project_name, project_code, library_project, self.dbcon) + create_project(project_name, project_code, library_project) self.done(1) diff --git a/openpype/tools/project_manager/project_manager/window.py b/openpype/tools/project_manager/project_manager/window.py index c6ae0ff352..3b2dea8ca3 100644 --- a/openpype/tools/project_manager/project_manager/window.py +++ b/openpype/tools/project_manager/project_manager/window.py @@ -1,5 +1,12 @@ from Qt import QtWidgets, QtCore, QtGui +from openpype import resources +from openpype.style import load_stylesheet +from openpype.widgets import PasswordDialog +from openpype.lib import is_admin_password_required, Logger +from openpype.pipeline import AvalonMongoDB +from openpype.pipeline.project_folders import create_project_folders + from . import ( ProjectModel, ProjectProxyFilter, @@ -13,17 +20,6 @@ from . 
import ( ) from .widgets import ConfirmProjectDeletion from .style import ResourceCache -from openpype.style import load_stylesheet -from openpype.lib import is_admin_password_required -from openpype.widgets import PasswordDialog -from openpype.pipeline import AvalonMongoDB - -from openpype import resources -from openpype.api import ( - get_project_basic_paths, - create_project_folders, - Logger -) class ProjectManagerWindow(QtWidgets.QWidget): @@ -259,12 +255,8 @@ class ProjectManagerWindow(QtWidgets.QWidget): qm.Yes | qm.No) if ans == qm.Yes: try: - # Get paths based on presets - basic_paths = get_project_basic_paths(project_name) - if not basic_paths: - pass # Invoking OpenPype API to create the project folders - create_project_folders(basic_paths, project_name) + create_project_folders(project_name) except Exception as exc: self.log.warning( "Cannot create starting folders: {}".format(exc), diff --git a/openpype/tools/sceneinventory/model.py b/openpype/tools/sceneinventory/model.py index 63fbe04c5c..1a3b7c7055 100644 --- a/openpype/tools/sceneinventory/model.py +++ b/openpype/tools/sceneinventory/model.py @@ -34,7 +34,8 @@ from .lib import ( class InventoryModel(TreeModel): """The model for the inventory""" - Columns = ["Name", "version", "count", "family", "loader", "objectName"] + Columns = ["Name", "version", "count", "family", + "group", "loader", "objectName"] OUTDATED_COLOR = QtGui.QColor(235, 30, 30) CHILD_OUTDATED_COLOR = QtGui.QColor(200, 160, 30) @@ -157,8 +158,13 @@ class InventoryModel(TreeModel): # Family icon return item.get("familyIcon", None) + column_name = self.Columns[index.column()] + + if column_name == "group" and item.get("group"): + return qtawesome.icon("fa.object-group", + color=get_default_entity_icon_color()) + if item.get("isGroupNode"): - column_name = self.Columns[index.column()] if column_name == "active_site": provider = item.get("active_site_provider") return self._site_icons.get(provider) @@ -423,6 +429,7 @@ class InventoryModel(TreeModel): group_node["familyIcon"] = family_icon group_node["count"] = len(group_items) group_node["isGroupNode"] = True + group_node["group"] = subset["data"].get("subsetGroup") if self.sync_enabled: progress = get_progress_for_repre( diff --git a/openpype/tools/sceneinventory/window.py b/openpype/tools/sceneinventory/window.py index 463280b71c..578f47d1c0 100644 --- a/openpype/tools/sceneinventory/window.py +++ b/openpype/tools/sceneinventory/window.py @@ -89,7 +89,8 @@ class SceneInventoryWindow(QtWidgets.QDialog): view.setColumnWidth(1, 55) # version view.setColumnWidth(2, 55) # count view.setColumnWidth(3, 150) # family - view.setColumnWidth(4, 100) # namespace + view.setColumnWidth(4, 120) # group + view.setColumnWidth(5, 150) # loader # apply delegates version_delegate = VersionDelegate(legacy_io, self) diff --git a/openpype/tools/standalonepublish/widgets/model_asset.py b/openpype/tools/standalonepublish/widgets/model_asset.py index abfc0a2145..9fed46b3fe 100644 --- a/openpype/tools/standalonepublish/widgets/model_asset.py +++ b/openpype/tools/standalonepublish/widgets/model_asset.py @@ -81,7 +81,7 @@ class AssetModel(TreeModel): for asset in current_assets: # get label from data, otherwise use name data = asset.get("data", {}) - label = data.get("label", asset["name"]) + label = data.get("label") or asset["name"] tags = data.get("tags", []) # store for the asset for optimization diff --git a/openpype/tools/tray/pype_tray.py b/openpype/tools/tray/pype_tray.py index 85bc00ead6..348573a191 100644 --- 
a/openpype/tools/tray/pype_tray.py +++ b/openpype/tools/tray/pype_tray.py @@ -9,11 +9,11 @@ import platform from Qt import QtCore, QtGui, QtWidgets import openpype.version -from openpype.api import ( - resources, - get_system_settings +from openpype import resources, style +from openpype.lib import ( + get_openpype_execute_args, + Logger, ) -from openpype.lib import get_openpype_execute_args, Logger from openpype.lib.openpype_version import ( op_version_control_available, get_expected_version, @@ -25,8 +25,8 @@ from openpype.lib.openpype_version import ( get_openpype_version, ) from openpype.modules import TrayModulesManager -from openpype import style from openpype.settings import ( + get_system_settings, SystemSettings, ProjectSettings, DefaultsNotDefined @@ -774,10 +774,24 @@ class PypeTrayStarter(QtCore.QObject): def main(): + log = Logger.get_logger(__name__) app = QtWidgets.QApplication.instance() if not app: app = QtWidgets.QApplication([]) + for attr_name in ( + "AA_EnableHighDpiScaling", + "AA_UseHighDpiPixmaps" + ): + attr = getattr(QtCore.Qt, attr_name, None) + if attr is None: + log.debug(( + "Missing QtCore.Qt attribute \"{}\"." + " UI quality may be affected." + ).format(attr_name)) + else: + app.setAttribute(attr) + starter = PypeTrayStarter(app) # TODO remove when pype.exe will have an icon diff --git a/openpype/tools/workfiles/files_widget.py b/openpype/tools/workfiles/files_widget.py index a5d5b14bb6..b4f5e422bc 100644 --- a/openpype/tools/workfiles/files_widget.py +++ b/openpype/tools/workfiles/files_widget.py @@ -10,10 +10,7 @@ from openpype.host import IWorkfileHost from openpype.client import get_asset_by_id from openpype.tools.utils import PlaceholderLineEdit from openpype.tools.utils.delegates import PrettyTimeDelegate -from openpype.lib import ( - emit_event, - create_workdir_extra_folders, -) +from openpype.lib import emit_event from openpype.pipeline import ( registered_host, legacy_io, @@ -23,7 +20,10 @@ from openpype.pipeline.context_tools import ( compute_session_changes, change_current_context ) -from openpype.pipeline.workfile import get_workfile_template_key +from openpype.pipeline.workfile import ( + get_workfile_template_key, + create_workdir_extra_folders, +) from .model import ( WorkAreaFilesModel, diff --git a/openpype/version.py b/openpype/version.py index 7894bb8bf4..142bd51a30 100644 --- a/openpype/version.py +++ b/openpype/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring Pype version.""" -__version__ = "3.14.1-nightly.3" +__version__ = "3.14.2-nightly.4" diff --git a/pyproject.toml b/pyproject.toml index 6ac5170e88..dd79dfb93f 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "OpenPype" -version = "3.14.1-nightly.3" # OpenPype +version = "3.14.2-nightly.2" # OpenPype description = "Open VFX and Animation pipeline with support." 
authors = ["OpenPype Team "] license = "MIT License" diff --git a/tools/ci_tools.py b/tools/ci_tools.py index 4c59cd6af6..c8f0cd48b4 100644 --- a/tools/ci_tools.py +++ b/tools/ci_tools.py @@ -101,7 +101,7 @@ def bump_file_versions(version): # bump pyproject.toml filename = "pyproject.toml" - regex = "version = \"(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(-((0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(\.(0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(\+([0-9a-zA-Z-]+(\.[0-9a-zA-Z-]+)*))?\" # OpenPype" + regex = "version = \"(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(\+((0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(\.(0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(\+([0-9a-zA-Z-]+(\.[0-9a-zA-Z-]+)*))?\" # OpenPype" pyproject_version = f"version = \"{version}\" # OpenPype" file_regex_replace(filename, regex, pyproject_version) diff --git a/tools/create_env.ps1 b/tools/create_env.ps1 index ea38ae4745..01123780e3 100644 --- a/tools/create_env.ps1 +++ b/tools/create_env.ps1 @@ -68,6 +68,7 @@ function Install-Poetry() { } $env:POETRY_HOME="$openpype_root\.poetry" + $env:POETRY_VERSION="1.1.15" (Invoke-WebRequest -Uri https://install.python-poetry.org/ -UseBasicParsing).Content | & $($python) - } diff --git a/tools/create_env.sh b/tools/create_env.sh index ae27370227..6fc40f54e2 100755 --- a/tools/create_env.sh +++ b/tools/create_env.sh @@ -109,6 +109,7 @@ detect_python () { install_poetry () { echo -e "${BIGreen}>>>${RST} Installing Poetry ..." export POETRY_HOME="$openpype_root/.poetry" + export POETRY_VERSION="1.1.15" command -v curl >/dev/null 2>&1 || { echo -e "${BIRed}!!!${RST}${BIYellow} Missing ${RST}${BIBlue}curl${BIYellow} command.${RST}"; return 1; } curl -sSL https://install.python-poetry.org/ | python - } diff --git a/tools/create_zip.ps1 b/tools/create_zip.ps1 index 7b852b7c54..e5ebc7678b 100644 --- a/tools/create_zip.ps1 +++ b/tools/create_zip.ps1 @@ -96,6 +96,7 @@ Write-Color -Text ">>> ", "Cleaning cache files ... " -Color Green, Gray -NoNewl Get-ChildItem $openpype_root -Filter "__pycache__" -Force -Recurse| Where-Object {( $_.FullName -inotmatch '\\build\\' ) -and ( $_.FullName -inotmatch '\\.venv' )} | Remove-Item -Force -Recurse Get-ChildItem $openpype_root -Filter "*.pyc" -Force -Recurse | Where-Object {( $_.FullName -inotmatch '\\build\\' ) -and ( $_.FullName -inotmatch '\\.venv' )} | Remove-Item -Force Get-ChildItem $openpype_root -Filter "*.pyo" -Force -Recurse | Where-Object {( $_.FullName -inotmatch '\\build\\' ) -and ( $_.FullName -inotmatch '\\.venv' )} | Remove-Item -Force +Write-Color -Text "OK" -Color Green Write-Color -Text ">>> ", "Generating zip from current sources ..." -Color Green, Gray $env:PYTHONPATH="$($openpype_root);$($env:PYTHONPATH)" diff --git a/website/docs/admin_settings_project_anatomy.md b/website/docs/admin_settings_project_anatomy.md index 106faeb806..2068c5cde2 100644 --- a/website/docs/admin_settings_project_anatomy.md +++ b/website/docs/admin_settings_project_anatomy.md @@ -20,17 +20,17 @@ It defines: - Colour Management - File Formats -Anatomy is the only configuration that is always saved as project override. This is to make sure, that any updates to OpenPype or Studio default values, don't affect currently running productions. +Anatomy is the only configuration that is always saved as an project override. This is to make sure that any updates to OpenPype or Studio default values, don't affect currently running productions. ![anatomy_01](assets/settings/anatomy_01.png) ## Roots -Roots define where files are stored with path to shared folder. 
It is required to set root path for each platform you are using in studio. All paths must point to same folder! +Roots define where files are stored, with a path to a shared folder. It is required to set the root path for each platform you are using in the studio. All paths must point to the same folder! ![roots01](assets/settings/anatomy_roots01.png) -It is possible to set multiple roots when necessary. That may be handy when you need to store specific type of data on another disk. +It is possible to set multiple roots when necessary. That may be handy when you need to store a specific type of data on another disk. ![roots02](assets/settings/anatomy_roots02.png) @@ -40,7 +40,7 @@ Note how multiple roots are used here, to push different types of files to diffe ## Templates -Templates define project's folder structure and filenames. +Templates define the project's folder structure and filenames. We have a few required anatomy templates for OpenPype to work properly, however we keep adding more when needed. @@ -100,14 +100,36 @@ We have a few required anatomy templates for OpenPype to work properly, however +### Anatomy reference keys + +Anatomy templates can use "referenced keys". The best example is `path` in the publish or work templates, which just contains references to `folder` and `file` (`{@folder}/{@file}`). Any change in the folder or file template is propagated to the path template. Another example is the simplification of version and frame formatting with padding. You can notice that the keys `{@version}` and `{@frame}` are used in the default templates. They reference `Anatomy` -> `Templates` -> `Version` or `Frame`, which handle version and frame formatting with padding. + +So if you set `project_anatomy/templates/defaults/version_padding` to `5`, the `{@version}` key is automatically transformed to `v{version:0>5}` and version numbers in paths will have 5 digits -> `v00001`.
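As a quick illustration of what that padding rule produces, here is a plain-Python sketch using bare `str.format` (OpenPype's own `Anatomy` resolver does the real work):

```python
# "{version:0>5}" right-aligns the value and pads it with zeros to 5 digits.
template = "v{version:0>5}"

print(template.format(version=1))    # v00001
print(template.format(version=142))  # v00142
```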
+### Optional keys + +In some cases of template formatting, not all keys are available and the missing ones should simply be ignored. For example, `{frame}` should be available only for sequences, but we have a single publish template. To handle these cases it is possible to use special characters to mark a segment of the template which should be ignored when it can't be filled because of missing keys. To mark these segments use `<` and `>`. + +Template `{project[code]}_{asset}_{subset}<_{output}><.{@frame}>.{ext}` can handle all 4 possible situations where the `output` and `frame` keys are available or not. Optional segments can contain additional text, like the dot (`.`) for frame and the underscore (`_`) for output in the example; that text is also ignored when the keys are not available. Optional segments without any formatting keys are kept untouched, e.g. a segment like `<text>` stays as `<text>`.
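To make the drop-or-fill behaviour concrete, here is a minimal sketch of optional-segment resolution. This is not OpenPype's actual implementation: it handles a single nesting level only and uses simplified keys (`{code}` instead of `{project[code]}`, no `@` references):

```python
import re

def resolve(template, data):
    """Fill a template, dropping any <...> segment whose keys are missing."""
    def fill_optional(match):
        segment = match.group(1)
        keys = re.findall(r"{(\w+)", segment)
        if keys and not all(key in data for key in keys):
            return ""  # a key is missing -> drop the whole optional segment
        return segment.format(**data)

    # Resolve optional segments first, then fill the mandatory keys.
    return re.sub(r"<([^<>]*)>", fill_optional, template).format(**data)

tmpl = "{code}_{asset}_{subset}<_{output}><.{frame}>.{ext}"
data = {"code": "prj", "asset": "sh010", "subset": "plate", "ext": "exr"}

print(resolve(tmpl, data))                         # prj_sh010_plate.exr
print(resolve(tmpl, dict(data, frame=101)))        # prj_sh010_plate.101.exr
print(resolve(tmpl, dict(data, output="beauty")))  # prj_sh010_plate_beauty.exr
```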
It is possible to nest optional segments inside optional segments, e.g. `<{asset}<.{@frame}>>`, which may result in an empty string if the `asset` key is not available. ## Attributes +Project attributes are used as default values for new assets created under the project, except `Applications` and `Active project`, which are project specific. Values of attributes that are **not** project specific are always taken from assets. So if `tools` are not loading as expected, it is because the assets have different values. + +![anatomy_attributes](assets/settings/anatomy_attributes.png) + +**Most attributes don't need a detailed explanation.** + +| Attribute | Description | | --- | --- | | `Applications` | List of applications that can be used in the project. At the moment used only as a possible filter of applications. | | `Tools` | List of application tools. This value can be overridden per asset. | | `Active project` | Project won't be visible in tools if disabled.
- To revert, check the `Show Inactive projects` checkbox in project settings. | ## Task Types -Current state of default Task descriptors. +Available task types on a project. Each task on an asset references a task type on the project, which allows access to additional task type attributes. At this moment only `short_name` is available (it can be used in templates as `{task[short_name]}`). ![tasks](assets/settings/anatomy_tasks.png) diff --git a/website/docs/artist_hosts_hiero.md b/website/docs/artist_hosts_hiero.md index dc6f1696e7..d14dcd1c01 100644 --- a/website/docs/artist_hosts_hiero.md +++ b/website/docs/artist_hosts_hiero.md @@ -202,3 +202,67 @@ This video shows a way to publish shot look as effect from Hiero to Nuke. ### Assembling edit from published shot versions + + +# Nuke Build Workfile +This tool initialises the Node Graph from a pre-created template. + +### Add a profile +The path to the template that will be used in the initialisation must be added as a profile in Project Settings. + +![Create menu](assets/nuke_addProfile.png) + +### Create Place Holder + +![Create menu](assets/nuke_createPlaceHolder.png) + +This tool creates a Place Holder, which is a node that will be replaced by published instances. + +![Create menu](assets/nuke_placeHolderNode.png) +#### Result +- Creates a red node called `PLACEHOLDER` which can be manipulated like any other node in the Node Graph. + +![Create menu](assets/nuke_placeholder.png) + +:::note +All published instances that will replace the Place Holder must contain unique input and output nodes if they are not imported as a single node. +::: + +![Create menu](assets/nuke_publishedinstance.png) + + +The information about these objects is provided by the user by filling in the extra attributes of the Place Holder. + +![Create menu](assets/nuke_fillingExtraAttributes.png) + + + +### Update Place Holder +This tool allows the user to change the information provided in the extra attributes of the selected Place Holder. + +![Create menu](assets/nuke_updatePlaceHolder.png) + + + +### Build Workfile from template +This tool imports the template and replaces the existing Place Holders with the corresponding published objects (which can contain Place Holders too). If there are no published items matching the given description, the Place Holder remains in the Node Graph. + +![Create menu](assets/nuke_buildWorfileFromTemplate.png) + +#### Result +- Replaces the `PLACEHOLDER` node in the template with the published instance corresponding to the information provided in the extra attributes of the Place Holder. + +![Create menu](assets/nuke_buildworkfile.png) + +:::note +If the instance that replaces Place Holder **A** contains another Place Holder **B** that points to multiple published elements, all the nodes imported with **A** except **B** are duplicated for each element that replaces **B**. +::: + +### Update Workfile +This tool checks whether new instances were published after the last build and imports them. + +![Create menu](assets/nuke_updateWorkfile.png) + +:::note +Imported instances must not be deleted, because they contain extra attributes that are used to update the workfile after the Place Holder has been deleted. 
+::: \ No newline at end of file diff --git a/website/docs/assets/nuke_addProfile.png b/website/docs/assets/nuke_addProfile.png new file mode 100644 index 0000000000..37578df7f5 Binary files /dev/null and b/website/docs/assets/nuke_addProfile.png differ diff --git a/website/docs/assets/nuke_buildWorfileFromTemplate.png b/website/docs/assets/nuke_buildWorfileFromTemplate.png new file mode 100644 index 0000000000..77d2de2ff8 Binary files /dev/null and b/website/docs/assets/nuke_buildWorfileFromTemplate.png differ diff --git a/website/docs/assets/nuke_buildworkfile.png b/website/docs/assets/nuke_buildworkfile.png new file mode 100644 index 0000000000..e3d8d07f7c Binary files /dev/null and b/website/docs/assets/nuke_buildworkfile.png differ diff --git a/website/docs/assets/nuke_createPlaceHolder.png b/website/docs/assets/nuke_createPlaceHolder.png new file mode 100644 index 0000000000..93fb4de9d0 Binary files /dev/null and b/website/docs/assets/nuke_createPlaceHolder.png differ diff --git a/website/docs/assets/nuke_fillingExtraAttributes.png b/website/docs/assets/nuke_fillingExtraAttributes.png new file mode 100644 index 0000000000..146c4d9db6 Binary files /dev/null and b/website/docs/assets/nuke_fillingExtraAttributes.png differ diff --git a/website/docs/assets/nuke_placeHolderNode.png b/website/docs/assets/nuke_placeHolderNode.png new file mode 100644 index 0000000000..ac9e83b9d6 Binary files /dev/null and b/website/docs/assets/nuke_placeHolderNode.png differ diff --git a/website/docs/assets/nuke_placeholder.png b/website/docs/assets/nuke_placeholder.png new file mode 100644 index 0000000000..d899ff742c Binary files /dev/null and b/website/docs/assets/nuke_placeholder.png differ diff --git a/website/docs/assets/nuke_publishedinstance.png b/website/docs/assets/nuke_publishedinstance.png new file mode 100644 index 0000000000..494b9e40a4 Binary files /dev/null and b/website/docs/assets/nuke_publishedinstance.png differ diff --git a/website/docs/assets/nuke_updatePlaceHolder.png b/website/docs/assets/nuke_updatePlaceHolder.png new file mode 100644 index 0000000000..58fac2a7e4 Binary files /dev/null and b/website/docs/assets/nuke_updatePlaceHolder.png differ diff --git a/website/docs/assets/nuke_updateWorkfile.png b/website/docs/assets/nuke_updateWorkfile.png new file mode 100644 index 0000000000..aeeebb123f Binary files /dev/null and b/website/docs/assets/nuke_updateWorkfile.png differ diff --git a/website/docs/assets/settings/anatomy_attributes.png b/website/docs/assets/settings/anatomy_attributes.png new file mode 100644 index 0000000000..777b1c36ac Binary files /dev/null and b/website/docs/assets/settings/anatomy_attributes.png differ