Merge branch 'develop' into feature/OP-3879_Change-extractor-usage-in-maya

commit ad81d94de6
Jakub Trllo 2022-09-12 09:52:55 +02:00
74 changed files with 1312 additions and 753 deletions

.gitignore

@@ -107,3 +107,6 @@ website/.docusaurus
 mypy.ini
 tools/run_eventserver.*
+
+# Developer tools
+tools/dev_*

CHANGELOG.md

@@ -1,6 +1,6 @@
 # Changelog
 
-## [3.14.2-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)
+## [3.14.2-nightly.4](https://github.com/pypeclub/OpenPype/tree/HEAD)
 
 [Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.1...HEAD)
@@ -8,35 +8,51 @@
 - Nuke: Build workfile by template [\#3763](https://github.com/pypeclub/OpenPype/pull/3763)
 - Houdini: Publishing workfiles [\#3697](https://github.com/pypeclub/OpenPype/pull/3697)
+- Global: making collect audio plugin global [\#3679](https://github.com/pypeclub/OpenPype/pull/3679)
 
 **🚀 Enhancements**
 
+- Flame: Adding Creator's retimed shot and handles switch [\#3826](https://github.com/pypeclub/OpenPype/pull/3826)
+- Flame: OpenPype submenu to batch and media manager [\#3825](https://github.com/pypeclub/OpenPype/pull/3825)
+- General: Better pixmap scaling [\#3809](https://github.com/pypeclub/OpenPype/pull/3809)
+- Photoshop: attempt to speed up ExtractImage [\#3793](https://github.com/pypeclub/OpenPype/pull/3793)
 - SyncServer: Added cli commands for sync server [\#3765](https://github.com/pypeclub/OpenPype/pull/3765)
-- Maya: move set render settings menu entry [\#3669](https://github.com/pypeclub/OpenPype/pull/3669)
-- Scene Inventory: Maya add actions to select from or to scene [\#3659](https://github.com/pypeclub/OpenPype/pull/3659)
+- Kitsu: Drop 'entities root' setting. [\#3739](https://github.com/pypeclub/OpenPype/pull/3739)
 
 **🐛 Bug fixes**
 
+- General: Fix Pattern access in client code [\#3828](https://github.com/pypeclub/OpenPype/pull/3828)
+- Launcher: Skip opening last work file works for groups [\#3822](https://github.com/pypeclub/OpenPype/pull/3822)
+- Maya: Publishing data key change [\#3811](https://github.com/pypeclub/OpenPype/pull/3811)
+- Igniter: Fix status handling when version is already installed [\#3804](https://github.com/pypeclub/OpenPype/pull/3804)
+- Resolve: Addon import is Python 2 compatible [\#3798](https://github.com/pypeclub/OpenPype/pull/3798)
+- Hiero: retimed clip publishing is working [\#3792](https://github.com/pypeclub/OpenPype/pull/3792)
+- nuke: validate write node is not failing due wrong type [\#3780](https://github.com/pypeclub/OpenPype/pull/3780)
 - Fix - changed format of version string in pyproject.toml [\#3777](https://github.com/pypeclub/OpenPype/pull/3777)
 - Ftrack status fix typo prgoress -\> progress [\#3761](https://github.com/pypeclub/OpenPype/pull/3761)
 - Fix version resolution [\#3757](https://github.com/pypeclub/OpenPype/pull/3757)
+- Maya: `containerise` dont skip empty values [\#3674](https://github.com/pypeclub/OpenPype/pull/3674)
 
 **🔀 Refactored code**
 
+- Photoshop: Use new Extractor location [\#3789](https://github.com/pypeclub/OpenPype/pull/3789)
+- Blender: Use new Extractor location [\#3787](https://github.com/pypeclub/OpenPype/pull/3787)
 - AfterEffects: Use new Extractor location [\#3784](https://github.com/pypeclub/OpenPype/pull/3784)
 - General: Remove unused teshost [\#3773](https://github.com/pypeclub/OpenPype/pull/3773)
 - General: Copied 'Extractor' plugin to publish pipeline [\#3771](https://github.com/pypeclub/OpenPype/pull/3771)
+- General: Move queries of asset and representation links [\#3770](https://github.com/pypeclub/OpenPype/pull/3770)
+- General: Move create project folders to pipeline [\#3768](https://github.com/pypeclub/OpenPype/pull/3768)
 - General: Create project function moved to client code [\#3766](https://github.com/pypeclub/OpenPype/pull/3766)
 - General: Move hostdirname functionality into host [\#3749](https://github.com/pypeclub/OpenPype/pull/3749)
 - General: Move publish utils to pipeline [\#3745](https://github.com/pypeclub/OpenPype/pull/3745)
 - Houdini: Define houdini as addon [\#3735](https://github.com/pypeclub/OpenPype/pull/3735)
+- Fusion: Defined fusion as addon [\#3733](https://github.com/pypeclub/OpenPype/pull/3733)
 - Flame: Defined flame as addon [\#3732](https://github.com/pypeclub/OpenPype/pull/3732)
 - Resolve: Define resolve as addon [\#3727](https://github.com/pypeclub/OpenPype/pull/3727)
 
 **Merged pull requests:**
 
 - Standalone Publisher: Ignore empty labels, then still use name like other asset models [\#3779](https://github.com/pypeclub/OpenPype/pull/3779)
+- Kitsu - sync\_all\_project - add list ignore\_projects [\#3776](https://github.com/pypeclub/OpenPype/pull/3776)
 
 ## [3.14.1](https://github.com/pypeclub/OpenPype/tree/3.14.1) (2022-08-30)
@@ -45,23 +61,16 @@
 ### 📖 Documentation
 
 - Documentation: Few updates [\#3698](https://github.com/pypeclub/OpenPype/pull/3698)
-- Documentation: Settings development [\#3660](https://github.com/pypeclub/OpenPype/pull/3660)
-
-**🆕 New features**
-
-- Webpublisher:change create flatten image into tri state [\#3678](https://github.com/pypeclub/OpenPype/pull/3678)
-- Blender: validators code correction with settings and defaults [\#3662](https://github.com/pypeclub/OpenPype/pull/3662)
 
 **🚀 Enhancements**
 
 - General: Thumbnail can use project roots [\#3750](https://github.com/pypeclub/OpenPype/pull/3750)
-- git: update gitignore [\#3722](https://github.com/pypeclub/OpenPype/pull/3722)
 - Settings: Remove settings lock on tray exit [\#3720](https://github.com/pypeclub/OpenPype/pull/3720)
 - General: Added helper getters to modules manager [\#3712](https://github.com/pypeclub/OpenPype/pull/3712)
 - Unreal: Define unreal as module and use host class [\#3701](https://github.com/pypeclub/OpenPype/pull/3701)
 - Settings: Lock settings UI session [\#3700](https://github.com/pypeclub/OpenPype/pull/3700)
 - General: Benevolent context label collector [\#3686](https://github.com/pypeclub/OpenPype/pull/3686)
-- Ftrack: Store ftrack entities on hierarchy integration to instances [\#3677](https://github.com/pypeclub/OpenPype/pull/3677)
-- Blender: ops refresh manager after process events [\#3663](https://github.com/pypeclub/OpenPype/pull/3663)
 
 **🐛 Bug fixes**
 
@@ -75,7 +84,6 @@
 - Settings: Fix project overrides save [\#3708](https://github.com/pypeclub/OpenPype/pull/3708)
 - Workfiles tool: Fix published workfile filtering [\#3704](https://github.com/pypeclub/OpenPype/pull/3704)
 - PS, AE: Provide default variant value for workfile subset [\#3703](https://github.com/pypeclub/OpenPype/pull/3703)
-- Flame: retime is working on clip publishing [\#3684](https://github.com/pypeclub/OpenPype/pull/3684)
 - Webpublisher: added check for empty context [\#3682](https://github.com/pypeclub/OpenPype/pull/3682)
 
 **🔀 Refactored code**
 
@@ -104,7 +112,6 @@
 - Hiero: Define hiero as module [\#3717](https://github.com/pypeclub/OpenPype/pull/3717)
 - Deadline: better logging for DL webservice failures [\#3694](https://github.com/pypeclub/OpenPype/pull/3694)
-- Photoshop: resize saved images in ExtractReview for ffmpeg [\#3676](https://github.com/pypeclub/OpenPype/pull/3676)
 
 ## [3.14.0](https://github.com/pypeclub/OpenPype/tree/3.14.0) (2022-08-18)
@@ -114,27 +121,11 @@
 - Ftrack: Addiotional component metadata [\#3685](https://github.com/pypeclub/OpenPype/pull/3685)
 - Ftrack: Set task status on farm publishing [\#3680](https://github.com/pypeclub/OpenPype/pull/3680)
-- Ftrack: Set task status on task creation in integrate hierarchy [\#3675](https://github.com/pypeclub/OpenPype/pull/3675)
-- Maya: Disable rendering of all lights for render instances submitted through Deadline. [\#3661](https://github.com/pypeclub/OpenPype/pull/3661)
-- General: Optimized OCIO configs [\#3650](https://github.com/pypeclub/OpenPype/pull/3650)
 
 **🐛 Bug fixes**
 
 - General: Switch from hero version to versioned works [\#3691](https://github.com/pypeclub/OpenPype/pull/3691)
-- General: Fix finding of last version [\#3656](https://github.com/pypeclub/OpenPype/pull/3656)
-- General: Extract Review can scale with pixel aspect ratio [\#3644](https://github.com/pypeclub/OpenPype/pull/3644)
-- Maya: Refactor moved usage of CreateRender settings [\#3643](https://github.com/pypeclub/OpenPype/pull/3643)
+- Flame: retime is working on clip publishing [\#3684](https://github.com/pypeclub/OpenPype/pull/3684)
-
-**🔀 Refactored code**
-
-- General: Use client projects getter [\#3673](https://github.com/pypeclub/OpenPype/pull/3673)
-- Resolve: Match folder structure to other hosts [\#3653](https://github.com/pypeclub/OpenPype/pull/3653)
-- Maya: Hosts as modules [\#3647](https://github.com/pypeclub/OpenPype/pull/3647)
-
-**Merged pull requests:**
-
-- Deadline: Global job pre load is not Pype 2 compatible [\#3666](https://github.com/pypeclub/OpenPype/pull/3666)
-- Maya: Remove unused get current renderer logic [\#3645](https://github.com/pypeclub/OpenPype/pull/3645)
 
 ## [3.13.0](https://github.com/pypeclub/OpenPype/tree/3.13.0) (2022-08-09)

README.md

@@ -41,7 +41,7 @@ It can be built and ran on all common platforms. We develop and test on the following:
 - **Linux**
   - **Ubuntu** 20.04 LTS
   - **Centos** 7
 - **Mac OSX**
   - **10.15** Catalina
   - **11.1** Big Sur (using Rosetta2)
@@ -287,6 +287,14 @@ To run tests, execute `.\tools\run_tests(.ps1|.sh)`.
 **Note that it needs existing virtual environment.**
 
+Developer tools
+-------------
+In case you wish to add your own tools to the `.\tools` folder without git tracking them, you can do so by adding the `dev_` prefix (example: `dev_clear_pyc(.ps1|.sh)`).
 ## Contributors ✨
 
 Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):

igniter/install_dialog.py

@@ -388,8 +388,11 @@ class InstallDialog(QtWidgets.QDialog):
         install_thread.start()
 
     def _installation_finished(self):
+        # TODO we should find out why status can be set to 'None'?
+        # - 'InstallThread.run' should handle all cases so not sure where
+        #   that come from
         status = self._install_thread.result()
-        if status >= 0:
+        if status is not None and status >= 0:
             self._update_progress(100)
             QtWidgets.QApplication.processEvents()
             self.done(3)

openpype/client/__init__.py

@@ -45,6 +45,12 @@ from .entities import (
     get_workfile_info,
 )
+
+from .entity_links import (
+    get_linked_asset_ids,
+    get_linked_assets,
+    get_linked_representation_id,
+)
 from .operations import (
     create_project,
 )
@@ -94,5 +100,9 @@ __all__ = (
     "get_workfile_info",
 
+    "get_linked_asset_ids",
+    "get_linked_assets",
+    "get_linked_representation_id",
+
     "create_project",
 )

openpype/client/entities.py

@@ -14,6 +14,8 @@ from bson.objectid import ObjectId
 
 from .mongo import get_project_database, get_project_connection
 
+PatternType = type(re.compile(""))
+
 
 def _prepare_fields(fields, required_fields=None):
     if not fields:
@@ -32,17 +34,37 @@ def _prepare_fields(fields, required_fields=None):
     return output
 
 
-def _convert_id(in_id):
+def convert_id(in_id):
+    """Helper function for conversion of id from string to ObjectId.
+
+    Args:
+        in_id (Union[str, ObjectId, Any]): Entity id that should be converted
+            to right type for queries.
+
+    Returns:
+        Union[ObjectId, Any]: Converted ids to ObjectId or in type.
+    """
+
     if isinstance(in_id, six.string_types):
         return ObjectId(in_id)
     return in_id
 
 
-def _convert_ids(in_ids):
+def convert_ids(in_ids):
+    """Helper function for conversion of ids from string to ObjectId.
+
+    Args:
+        in_ids (Iterable[Union[str, ObjectId, Any]]): List of entity ids that
+            should be converted to right type for queries.
+
+    Returns:
+        List[ObjectId]: Converted ids to ObjectId.
+    """
+
     _output = set()
     for in_id in in_ids:
         if in_id is not None:
-            _output.add(_convert_id(in_id))
+            _output.add(convert_id(in_id))
     return list(_output)
@@ -115,7 +137,7 @@ def get_asset_by_id(project_name, asset_id, fields=None):
         None: Asset was not found by id.
     """
 
-    asset_id = _convert_id(asset_id)
+    asset_id = convert_id(asset_id)
     if not asset_id:
         return None
 
@@ -196,7 +218,7 @@ def _get_assets(
     query_filter = {"type": {"$in": asset_types}}
 
     if asset_ids is not None:
-        asset_ids = _convert_ids(asset_ids)
+        asset_ids = convert_ids(asset_ids)
         if not asset_ids:
             return []
         query_filter["_id"] = {"$in": asset_ids}
@@ -207,7 +229,7 @@ def _get_assets(
         query_filter["name"] = {"$in": list(asset_names)}
 
     if parent_ids is not None:
-        parent_ids = _convert_ids(parent_ids)
+        parent_ids = convert_ids(parent_ids)
         if not parent_ids:
             return []
         query_filter["data.visualParent"] = {"$in": parent_ids}
@@ -307,7 +329,7 @@ def get_asset_ids_with_subsets(project_name, asset_ids=None):
         "type": "subset"
     }
     if asset_ids is not None:
-        asset_ids = _convert_ids(asset_ids)
+        asset_ids = convert_ids(asset_ids)
         if not asset_ids:
             return []
         subset_query["parent"] = {"$in": asset_ids}
@@ -347,7 +369,7 @@ def get_subset_by_id(project_name, subset_id, fields=None):
         Dict: Subset document which can be reduced to specified 'fields'.
     """
 
-    subset_id = _convert_id(subset_id)
+    subset_id = convert_id(subset_id)
     if not subset_id:
         return None
 
@@ -374,7 +396,7 @@ def get_subset_by_name(project_name, subset_name, asset_id, fields=None):
     if not subset_name:
         return None
 
-    asset_id = _convert_id(asset_id)
+    asset_id = convert_id(asset_id)
     if not asset_id:
         return None
@@ -428,13 +450,13 @@ def get_subsets(
     query_filter = {"type": {"$in": subset_types}}
 
     if asset_ids is not None:
-        asset_ids = _convert_ids(asset_ids)
+        asset_ids = convert_ids(asset_ids)
         if not asset_ids:
             return []
         query_filter["parent"] = {"$in": asset_ids}
 
     if subset_ids is not None:
-        subset_ids = _convert_ids(subset_ids)
+        subset_ids = convert_ids(subset_ids)
         if not subset_ids:
             return []
         query_filter["_id"] = {"$in": subset_ids}
@@ -449,7 +471,7 @@ def get_subsets(
         for asset_id, names in names_by_asset_ids.items():
             if asset_id and names:
                 or_query.append({
-                    "parent": _convert_id(asset_id),
+                    "parent": convert_id(asset_id),
                     "name": {"$in": list(names)}
                 })
         if not or_query:
@@ -510,7 +532,7 @@ def get_version_by_id(project_name, version_id, fields=None):
         Dict: Version document which can be reduced to specified 'fields'.
     """
 
-    version_id = _convert_id(version_id)
+    version_id = convert_id(version_id)
     if not version_id:
         return None
 
@@ -537,7 +559,7 @@ def get_version_by_name(project_name, version, subset_id, fields=None):
         Dict: Version document which can be reduced to specified 'fields'.
     """
 
-    subset_id = _convert_id(subset_id)
+    subset_id = convert_id(subset_id)
     if not subset_id:
         return None
 
@@ -567,7 +589,7 @@ def version_is_latest(project_name, version_id):
         bool: True if is latest version from subset else False.
     """
 
-    version_id = _convert_id(version_id)
+    version_id = convert_id(version_id)
     if not version_id:
         return False
 
     version_doc = get_version_by_id(
@@ -610,13 +632,13 @@ def _get_versions(
     query_filter = {"type": {"$in": version_types}}
 
    if subset_ids is not None:
-        subset_ids = _convert_ids(subset_ids)
+        subset_ids = convert_ids(subset_ids)
         if not subset_ids:
             return []
         query_filter["parent"] = {"$in": subset_ids}
 
     if version_ids is not None:
-        version_ids = _convert_ids(version_ids)
+        version_ids = convert_ids(version_ids)
         if not version_ids:
             return []
         query_filter["_id"] = {"$in": version_ids}
@@ -690,7 +712,7 @@ def get_hero_version_by_subset_id(project_name, subset_id, fields=None):
         Dict: Hero version entity data.
     """
 
-    subset_id = _convert_id(subset_id)
+    subset_id = convert_id(subset_id)
     if not subset_id:
         return None
 
@@ -720,7 +742,7 @@ def get_hero_version_by_id(project_name, version_id, fields=None):
         Dict: Hero version entity data.
     """
 
-    version_id = _convert_id(version_id)
+    version_id = convert_id(version_id)
     if not version_id:
         return None
 
@@ -786,7 +808,7 @@ def get_output_link_versions(project_name, version_id, fields=None):
             links for passed version.
     """
 
-    version_id = _convert_id(version_id)
+    version_id = convert_id(version_id)
     if not version_id:
         return []
 
@@ -812,7 +834,7 @@ def get_last_versions(project_name, subset_ids, fields=None):
         dict[ObjectId, int]: Key is subset id and value is last version name.
     """
 
-    subset_ids = _convert_ids(subset_ids)
+    subset_ids = convert_ids(subset_ids)
     if not subset_ids:
         return {}
 
@@ -898,7 +920,7 @@ def get_last_version_by_subset_id(project_name, subset_id, fields=None):
         Dict: Version document which can be reduced to specified 'fields'.
     """
 
-    subset_id = _convert_id(subset_id)
+    subset_id = convert_id(subset_id)
     if not subset_id:
         return None
 
@@ -971,7 +993,7 @@ def get_representation_by_id(project_name, representation_id, fields=None):
         "type": {"$in": repre_types}
     }
     if representation_id is not None:
-        query_filter["_id"] = _convert_id(representation_id)
+        query_filter["_id"] = convert_id(representation_id)
 
     conn = get_project_connection(project_name)
 
@@ -996,7 +1018,7 @@ def get_representation_by_name(
             to specified 'fields'.
     """
 
-    version_id = _convert_id(version_id)
+    version_id = convert_id(version_id)
     if not version_id or not representation_name:
         return None
     repre_types = ["representation", "archived_representations"]
@@ -1034,11 +1056,11 @@ def _regex_filters(filters):
     for key, value in filters.items():
         regexes = []
         a_values = []
-        if isinstance(value, re.Pattern):
+        if isinstance(value, PatternType):
             regexes.append(value)
         elif isinstance(value, (list, tuple, set)):
             for item in value:
-                if isinstance(item, re.Pattern):
+                if isinstance(item, PatternType):
                     regexes.append(item)
                 else:
                     a_values.append(item)
@@ -1089,7 +1111,7 @@ def _get_representations(
     query_filter = {"type": {"$in": repre_types}}
 
     if representation_ids is not None:
-        representation_ids = _convert_ids(representation_ids)
+        representation_ids = convert_ids(representation_ids)
         if not representation_ids:
             return default_output
         query_filter["_id"] = {"$in": representation_ids}
@@ -1100,7 +1122,7 @@ def _get_representations(
         query_filter["name"] = {"$in": list(representation_names)}
 
     if version_ids is not None:
-        version_ids = _convert_ids(version_ids)
+        version_ids = convert_ids(version_ids)
         if not version_ids:
             return default_output
         query_filter["parent"] = {"$in": version_ids}
@@ -1111,7 +1133,7 @@ def _get_representations(
         for version_id, names in names_by_version_ids.items():
             if version_id and names:
                 or_query.append({
-                    "parent": _convert_id(version_id),
+                    "parent": convert_id(version_id),
                     "name": {"$in": list(names)}
                 })
         if not or_query:
@@ -1174,7 +1196,7 @@ def get_representations(
             as filter. Filter ignored if 'None' is passed.
         version_ids (Iterable[str]): Subset ids used as parent filter. Filter
             ignored if 'None' is passed.
-        context_filters (Dict[str, List[str, re.Pattern]]): Filter by
+        context_filters (Dict[str, List[str, PatternType]]): Filter by
             representation context fields.
         names_by_version_ids (dict[ObjectId, list[str]]): Complex filtering
             using version ids and list of names under the version.
@@ -1220,7 +1242,7 @@ def get_archived_representations(
             as filter. Filter ignored if 'None' is passed.
         version_ids (Iterable[str]): Subset ids used as parent filter. Filter
             ignored if 'None' is passed.
-        context_filters (Dict[str, List[str, re.Pattern]]): Filter by
+        context_filters (Dict[str, List[str, PatternType]]): Filter by
             representation context fields.
         names_by_version_ids (dict[ObjectId, List[str]]): Complex filtering
             using version ids and list of names under the version.
@@ -1361,7 +1383,7 @@ def get_thumbnail_id_from_source(project_name, src_type, src_id):
     if not src_type or not src_id:
         return None
 
-    query_filter = {"_id": _convert_id(src_id)}
+    query_filter = {"_id": convert_id(src_id)}
 
     conn = get_project_connection(project_name)
     src_doc = conn.find_one(query_filter, {"data.thumbnail_id"})
@@ -1388,7 +1410,7 @@ def get_thumbnails(project_name, thumbnail_ids, fields=None):
     """
 
     if thumbnail_ids:
-        thumbnail_ids = _convert_ids(thumbnail_ids)
+        thumbnail_ids = convert_ids(thumbnail_ids)
 
     if not thumbnail_ids:
         return []
 
@@ -1416,7 +1438,7 @@ def get_thumbnail(project_name, thumbnail_id, fields=None):
     if not thumbnail_id:
         return None
 
-    query_filter = {"type": "thumbnail", "_id": _convert_id(thumbnail_id)}
+    query_filter = {"type": "thumbnail", "_id": convert_id(thumbnail_id)}
 
     conn = get_project_connection(project_name)
     return conn.find_one(query_filter, _prepare_fields(fields))
 
@@ -1444,7 +1466,7 @@ def get_workfile_info(
     query_filter = {
         "type": "workfile",
-        "parent": _convert_id(asset_id),
+        "parent": convert_id(asset_id),
         "task_name": task_name,
         "filename": filename
     }
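
For orientation, a hedged sketch of the now-public helpers (the example id is made up, and the import path assumes the module above is `openpype/client/entities.py`; `PatternType` exists because `re.Pattern` was only added in Python 3.8, while some DCC-embedded interpreters are older):

```python
# Hypothetical usage of the helpers made public above (the id is made up).
import re

from openpype.client.entities import PatternType, convert_id, convert_ids

object_id = convert_id("633f1b7a0c1d2e3f4a5b6c7d")        # str -> ObjectId
id_list = convert_ids(["633f1b7a0c1d2e3f4a5b6c7d", None])  # None is skipped

# 'PatternType' works where 're.Pattern' would fail on older interpreters:
assert isinstance(re.compile(r"sh\d{3}"), PatternType)
```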

openpype/client/entity_links.py

@@ -0,0 +1,232 @@
+from .mongo import get_project_connection
+from .entities import (
+    get_assets,
+    get_asset_by_id,
+    get_representation_by_id,
+    convert_id,
+)
+
+
+def get_linked_asset_ids(project_name, asset_doc=None, asset_id=None):
+    """Extract linked asset ids from asset document.
+
+    One of asset document or asset id must be passed.
+
+    Note:
+        Asset links now works only from asset to assets.
+
+    Args:
+        asset_doc (dict): Asset document from DB.
+
+    Returns:
+        List[Union[ObjectId, str]]: Asset ids of input links.
+    """
+
+    output = []
+    if not asset_doc and not asset_id:
+        return output
+
+    if not asset_doc:
+        asset_doc = get_asset_by_id(
+            project_name, asset_id, fields=["data.inputLinks"]
+        )
+
+    input_links = asset_doc["data"].get("inputLinks")
+    if not input_links:
+        return output
+
+    for item in input_links:
+        # Backwards compatibility for "_id" key which was replaced with
+        #   "id"
+        if "_id" in item:
+            link_id = item["_id"]
+        else:
+            link_id = item["id"]
+        output.append(link_id)
+    return output
+
+
+def get_linked_assets(
+    project_name, asset_doc=None, asset_id=None, fields=None
+):
+    """Return linked assets based on passed asset document.
+
+    One of asset document or asset id must be passed.
+
+    Args:
+        project_name (str): Name of project where to look for queried entities.
+        asset_doc (Dict[str, Any]): Asset document from database.
+        asset_id (Union[ObjectId, str]): Asset id. Can be used instead of
+            asset document.
+        fields (Iterable[str]): Fields that should be returned. All fields are
+            returned if 'None' is passed.
+
+    Returns:
+        List[Dict[str, Any]]: Asset documents of input links for passed
+            asset doc.
+    """
+
+    if not asset_doc:
+        if not asset_id:
+            return []
+        asset_doc = get_asset_by_id(
+            project_name,
+            asset_id,
+            fields=["data.inputLinks"]
+        )
+        if not asset_doc:
+            return []
+
+    link_ids = get_linked_asset_ids(project_name, asset_doc=asset_doc)
+    if not link_ids:
+        return []
+
+    return list(get_assets(project_name, asset_ids=link_ids, fields=fields))
+
+
+def get_linked_representation_id(
+    project_name, repre_doc=None, repre_id=None, link_type=None, max_depth=None
+):
+    """Returns list of linked ids of particular type (if provided).
+
+    One of representation document or representation id must be passed.
+
+    Note:
+        Representation links now works only from representation through version
+            back to representations.
+
+    Args:
+        project_name (str): Name of project where look for links.
+        repre_doc (Dict[str, Any]): Representation document.
+        repre_id (Union[ObjectId, str]): Representation id.
+        link_type (str): Type of link (e.g. 'reference', ...).
+        max_depth (int): Limit recursion level. Default: 0
+
+    Returns:
+        List[ObjectId] Linked representation ids.
+    """
+
+    if repre_doc:
+        repre_id = repre_doc["_id"]
+
+    if repre_id:
+        repre_id = convert_id(repre_id)
+
+    if not repre_id and not repre_doc:
+        return []
+
+    version_id = None
+    if repre_doc:
+        version_id = repre_doc.get("parent")
+
+    if not version_id:
+        repre_doc = get_representation_by_id(
+            project_name, repre_id, fields=["parent"]
+        )
+        version_id = repre_doc["parent"]
+
+    if not version_id:
+        return []
+
+    if max_depth is None:
+        max_depth = 0
+
+    match = {
+        "_id": version_id,
+        "type": {"$in": ["version", "hero_version"]}
+    }
+
+    graph_lookup = {
+        "from": project_name,
+        "startWith": "$data.inputLinks.id",
+        "connectFromField": "data.inputLinks.id",
+        "connectToField": "_id",
+        "as": "outputs_recursive",
+        "depthField": "depth"
+    }
+    if max_depth != 0:
+        # We offset by -1 since 0 basically means no recursion
+        #   but the recursion only happens after the initial lookup
+        #   for outputs.
+        graph_lookup["maxDepth"] = max_depth - 1
+
+    query_pipeline = [
+        # Match
+        {"$match": match},
+        # Recursive graph lookup for inputs
+        {"$graphLookup": graph_lookup}
+    ]
+
+    conn = get_project_connection(project_name)
+    result = conn.aggregate(query_pipeline)
+    referenced_version_ids = _process_referenced_pipeline_result(
+        result, link_type
+    )
+    if not referenced_version_ids:
+        return []
+
+    ref_ids = conn.distinct(
+        "_id",
+        filter={
+            "parent": {"$in": list(referenced_version_ids)},
+            "type": "representation"
+        }
+    )
+
+    return list(ref_ids)
+
+
+def _process_referenced_pipeline_result(result, link_type):
+    """Filters result from pipeline for particular link_type.
+
+    Pipeline cannot use link_type directly in a query.
+
+    Returns:
+        (list)
+    """
+
+    referenced_version_ids = set()
+    correctly_linked_ids = set()
+    for item in result:
+        input_links = item["data"].get("inputLinks")
+        if not input_links:
+            continue
+
+        _filter_input_links(
+            input_links,
+            link_type,
+            correctly_linked_ids
+        )
+
+        # outputs_recursive in random order, sort by depth
+        outputs_recursive = item.get("outputs_recursive")
+        if not outputs_recursive:
+            continue
+
+        for output in sorted(outputs_recursive, key=lambda o: o["depth"]):
+            output_links = output["data"].get("inputLinks")
+            if not output_links:
+                continue
+
+            # Leaf
+            if output["_id"] not in correctly_linked_ids:
+                continue
+
+            _filter_input_links(
+                output_links,
+                link_type,
+                correctly_linked_ids
+            )
+
+            referenced_version_ids.add(output["_id"])
+
+    return referenced_version_ids
+
+
+def _filter_input_links(input_links, link_type, correctly_linked_ids):
+    for input_link in input_links:
+        if link_type and input_link["type"] != link_type:
+            continue
+
+        link_id = input_link.get("id") or input_link.get("_id")
+        if link_id is not None:
+            correctly_linked_ids.add(link_id)
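
A hedged usage sketch of the new link helpers (the project name and ids below are illustrative; the import path assumes the re-exports added to `openpype/client/__init__.py` above):

```python
# Illustrative only: the ids and project name below are made up.
from openpype.client import (
    get_linked_assets,
    get_linked_representation_id,
)

project_name = "demo_project"

# Asset documents linked as inputs of one asset, reduced to their names:
for asset_doc in get_linked_assets(
    project_name, asset_id="633f1b7a0c1d2e3f4a5b6c7d", fields=["name"]
):
    print(asset_doc["name"])

# Representation ids reachable through 'reference' links, two levels deep:
ref_ids = get_linked_representation_id(
    project_name,
    repre_id="633f1b7a0c1d2e3f4a5b6c7e",
    link_type="reference",
    max_depth=2,
)
```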


@@ -1,8 +1,6 @@
 import os
 
-from openpype.lib import (
-    PreLaunchHook,
-    create_workdir_extra_folders
-)
+from openpype.lib import PreLaunchHook
+from openpype.pipeline.workfile import create_workdir_extra_folders
 
 
 class AddLastWorkfileToLaunchArgs(PreLaunchHook):


@@ -1,6 +1,19 @@
+import os
+
 import bpy
 
 import pyblish.api
 
+from openpype.pipeline import legacy_io
+from openpype.hosts.blender.api import workio
+
+
+class SaveWorkfiledAction(pyblish.api.Action):
+    """Save Workfile."""
+    label = "Save Workfile"
+    on = "failed"
+    icon = "save"
+
+    def process(self, context, plugin):
+        bpy.ops.wm.avalon_workfiles()
+
 
 class CollectBlenderCurrentFile(pyblish.api.ContextPlugin):
@@ -8,12 +21,52 @@ class CollectBlenderCurrentFile(pyblish.api.ContextPlugin):
 
     order = pyblish.api.CollectorOrder - 0.5
     label = "Blender Current File"
-    hosts = ['blender']
+    hosts = ["blender"]
+    actions = [SaveWorkfiledAction]
 
     def process(self, context):
        """Inject the current working file"""
-        current_file = bpy.data.filepath
+        current_file = workio.current_file()
 
-        context.data['currentFile'] = current_file
+        context.data["currentFile"] = current_file
 
-        assert current_file != '', "Current file is empty. " \
-            "Save the file before continuing."
+        assert current_file, (
+            "Current file is empty. Save the file before continuing."
+        )
+
+        folder, file = os.path.split(current_file)
+        filename, ext = os.path.splitext(file)
+        task = legacy_io.Session["AVALON_TASK"]
+
+        data = {}
+
+        # create instance
+        instance = context.create_instance(name=filename)
+        subset = "workfile" + task.capitalize()
+
+        data.update({
+            "subset": subset,
+            "asset": os.getenv("AVALON_ASSET", None),
+            "label": subset,
+            "publish": True,
+            "family": "workfile",
+            "families": ["workfile"],
+            "setMembers": [current_file],
+            "frameStart": bpy.context.scene.frame_start,
+            "frameEnd": bpy.context.scene.frame_end,
+        })
+
+        data["representations"] = [{
+            "name": ext.lstrip("."),
+            "ext": ext.lstrip("."),
+            "files": file,
+            "stagingDir": folder,
+        }]
+
+        instance.data.update(data)
+
+        self.log.info("Collected instance: {}".format(file))
+        self.log.info("Scene path: {}".format(current_file))
+        self.log.info("staging Dir: {}".format(folder))
+        self.log.info("subset: {}".format(subset))


@@ -2,12 +2,12 @@ import os
 
 import bpy
 
-from openpype import api
+from openpype.pipeline import publish
 from openpype.hosts.blender.api import plugin
 from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
 
 
-class ExtractABC(api.Extractor):
+class ExtractABC(publish.Extractor):
     """Extract as ABC."""
 
     label = "Extract ABC"


@@ -2,10 +2,10 @@ import os
 
 import bpy
 
-import openpype.api
+from openpype.pipeline import publish
 
 
-class ExtractBlend(openpype.api.Extractor):
+class ExtractBlend(publish.Extractor):
     """Extract a blend file."""
 
     label = "Extract Blend"


@@ -2,10 +2,10 @@ import os
 
 import bpy
 
-import openpype.api
+from openpype.pipeline import publish
 
 
-class ExtractBlendAnimation(openpype.api.Extractor):
+class ExtractBlendAnimation(publish.Extractor):
     """Extract a blend file."""
 
     label = "Extract Blend"


@@ -2,11 +2,11 @@ import os
 
 import bpy
 
-from openpype import api
+from openpype.pipeline import publish
 from openpype.hosts.blender.api import plugin
 
 
-class ExtractCamera(api.Extractor):
+class ExtractCamera(publish.Extractor):
     """Extract as the camera as FBX."""
 
     label = "Extract Camera"


@@ -2,12 +2,12 @@ import os
 
 import bpy
 
-from openpype import api
+from openpype.pipeline import publish
 from openpype.hosts.blender.api import plugin
 from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
 
 
-class ExtractFBX(api.Extractor):
+class ExtractFBX(publish.Extractor):
     """Extract as FBX."""
 
     label = "Extract FBX"


@@ -5,12 +5,12 @@ import bpy
 import bpy_extras
 import bpy_extras.anim_utils
 
-from openpype import api
+from openpype.pipeline import publish
 from openpype.hosts.blender.api import plugin
 from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
 
 
-class ExtractAnimationFBX(api.Extractor):
+class ExtractAnimationFBX(publish.Extractor):
     """Extract as animation."""
 
     label = "Extract FBX"


@@ -6,12 +6,12 @@ import bpy_extras
 import bpy_extras.anim_utils
 
 from openpype.client import get_representation_by_name
+from openpype.pipeline import publish
 from openpype.hosts.blender.api import plugin
 from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
-import openpype.api
 
 
-class ExtractLayout(openpype.api.Extractor):
+class ExtractLayout(publish.Extractor):
     """Extract a layout."""
 
     label = "Extract Layout"


@@ -1,113 +0,0 @@
-import os
-import collections
-from pprint import pformat
-
-import pyblish.api
-
-from openpype.client import (
-    get_subsets,
-    get_last_versions,
-    get_representations
-)
-from openpype.pipeline import legacy_io
-
-
-class AppendCelactionAudio(pyblish.api.ContextPlugin):
-
-    label = "Colect Audio for publishing"
-    order = pyblish.api.CollectorOrder + 0.1
-
-    def process(self, context):
-        self.log.info('Collecting Audio Data')
-
-        asset_doc = context.data["assetEntity"]
-
-        # get all available representations
-        subsets = self.get_subsets(
-            asset_doc,
-            representations=["audio", "wav"]
-        )
-        self.log.info(f"subsets is: {pformat(subsets)}")
-
-        if not subsets.get("audioMain"):
-            raise AttributeError("`audioMain` subset does not exist")
-
-        reprs = subsets.get("audioMain", {}).get("representations", [])
-        self.log.info(f"reprs is: {pformat(reprs)}")
-
-        repr = next((r for r in reprs), None)
-        if not repr:
-            raise "Missing `audioMain` representation"
-        self.log.info(f"representation is: {repr}")
-
-        audio_file = repr.get('data', {}).get('path', "")
-
-        if os.path.exists(audio_file):
-            context.data["audioFile"] = audio_file
-            self.log.info(
-                'audio_file: {}, has been added to context'.format(audio_file))
-        else:
-            self.log.warning("Couldn't find any audio file on Ftrack.")
-
-    def get_subsets(self, asset_doc, representations):
-        """
-            Query subsets with filter on name.
-
-            The method will return all found subsets and its defined version
-            and subsets. Version could be specified with number. Representation
-            can be filtered.
-
-        Arguments:
-            asset_doct (dict): Asset (shot) mongo document
-            representations (list): list for all representations
-
-        Returns:
-            dict: subsets with version and representations in keys
-        """
-
-        # Query all subsets for asset
-        project_name = legacy_io.active_project()
-        subset_docs = get_subsets(
-            project_name, asset_ids=[asset_doc["_id"]], fields=["_id"]
-        )
-        # Collect all subset ids
-        subset_ids = [
-            subset_doc["_id"]
-            for subset_doc in subset_docs
-        ]
-
-        # Check if we found anything
-        assert subset_ids, (
-            "No subsets found. Check correct filter. "
-            "Try this for start `r'.*'`: asset: `{}`"
-        ).format(asset_doc["name"])
-
-        last_versions_by_subset_id = get_last_versions(
-            project_name, subset_ids, fields=["_id", "parent"]
-        )
-
-        version_docs_by_id = {}
-        for version_doc in last_versions_by_subset_id.values():
-            version_docs_by_id[version_doc["_id"]] = version_doc
-
-        repre_docs = get_representations(
-            project_name,
-            version_ids=version_docs_by_id.keys(),
-            representation_names=representations
-        )
-        repre_docs_by_version_id = collections.defaultdict(list)
-        for repre_doc in repre_docs:
-            version_id = repre_doc["parent"]
-            repre_docs_by_version_id[version_id].append(repre_doc)
-
-        output_dict = {}
-        for version_id, repre_docs in repre_docs_by_version_id.items():
-            version_doc = version_docs_by_id[version_id]
-            subset_id = version_doc["parent"]
-            subset_doc = last_versions_by_subset_id[subset_id]
-            # Store queried docs by subset name
-            output_dict[subset_doc["name"]] = {
-                "representations": repre_docs,
-                "version": version_doc
-            }
-
-        return output_dict


@@ -51,7 +51,8 @@ from .pipeline import (
 )
 from .menu import (
     FlameMenuProjectConnect,
-    FlameMenuTimeline
+    FlameMenuTimeline,
+    FlameMenuUniversal
 )
 from .plugin import (
     Creator,
@@ -131,6 +132,7 @@ __all__ = [
     # menu
     "FlameMenuProjectConnect",
     "FlameMenuTimeline",
+    "FlameMenuUniversal",
 
     # plugin
     "Creator",


@@ -201,3 +201,53 @@ class FlameMenuTimeline(_FlameMenuApp):
         if self.flame:
             self.flame.execute_shortcut('Rescan Python Hooks')
             self.log.info('Rescan Python Hooks')
+
+
+class FlameMenuUniversal(_FlameMenuApp):
+
+    # flameMenuProjectconnect app takes care of the preferences dialog as well
+
+    def __init__(self, framework):
+        _FlameMenuApp.__init__(self, framework)
+
+    def __getattr__(self, name):
+        def method(*args, **kwargs):
+            project = self.dynamic_menu_data.get(name)
+            if project:
+                self.link_project(project)
+        return method
+
+    def build_menu(self):
+        if not self.flame:
+            return []
+
+        menu = deepcopy(self.menu)
+
+        menu['actions'].append({
+            "name": "Load...",
+            "execute": lambda x: self.tools_helper.show_loader()
+        })
+        menu['actions'].append({
+            "name": "Manage...",
+            "execute": lambda x: self.tools_helper.show_scene_inventory()
+        })
+        menu['actions'].append({
+            "name": "Library...",
+            "execute": lambda x: self.tools_helper.show_library_loader()
+        })
+        return menu
+
+    def refresh(self, *args, **kwargs):
+        self.rescan()
+
+    def rescan(self, *args, **kwargs):
+        if not self.flame:
+            try:
+                import flame
+                self.flame = flame
+            except ImportError:
+                self.flame = None
+
+        if self.flame:
+            self.flame.execute_shortcut('Rescan Python Hooks')
+            self.log.info('Rescan Python Hooks')


@@ -361,6 +361,8 @@ class PublishableClip:
     index_from_segment_default = False
     use_shot_name_default = False
     include_handles_default = False
+    retimed_handles_default = True
+    retimed_framerange_default = True
 
     def __init__(self, segment, **kwargs):
         self.rename_index = kwargs["rename_index"]
@@ -496,6 +498,14 @@ class PublishableClip:
             "audio", {}).get("value") or False
         self.include_handles = self.ui_inputs.get(
             "includeHandles", {}).get("value") or self.include_handles_default
+        self.retimed_handles = (
+            self.ui_inputs.get("retimedHandles", {}).get("value")
+            or self.retimed_handles_default
+        )
+        self.retimed_framerange = (
+            self.ui_inputs.get("retimedFramerange", {}).get("value")
+            or self.retimed_framerange_default
+        )
 
         # build subset name from layer name
         if self.subset_name == "[ track name ]":


@@ -276,6 +276,22 @@ class CreateShotClip(opfapi.Creator):
                 "target": "tag",
                 "toolTip": "By default handles are excluded", # noqa
                 "order": 3
-            }
+            },
+            "retimedHandles": {
+                "value": True,
+                "type": "QCheckBox",
+                "label": "Retimed handles",
+                "target": "tag",
+                "toolTip": "By default handles are retimed.", # noqa
+                "order": 4
+            },
+            "retimedFramerange": {
+                "value": True,
+                "type": "QCheckBox",
+                "label": "Retimed framerange",
+                "target": "tag",
+                "toolTip": "By default framerange is retimed.", # noqa
+                "order": 5
+            }
         }
     }


@@ -131,6 +131,10 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
                 "fps": self.fps,
                 "workfileFrameStart": workfile_start,
                 "sourceFirstFrame": int(first_frame),
+                "notRetimedHandles": (
+                    not marker_data.get("retimedHandles")),
+                "notRetimedFramerange": (
+                    not marker_data.get("retimedFramerange")),
                 "path": file_path,
                 "flameAddTasks": self.add_tasks,
                 "tasks": {


@@ -90,26 +90,38 @@ class ExtractSubsetResources(openpype.api.Extractor):
         handle_end = instance.data["handleEnd"]
         handles = max(handle_start, handle_end)
         include_handles = instance.data.get("includeHandles")
+        retimed_handles = instance.data.get("retimedHandles")
 
         # get media source range with handles
         source_start_handles = instance.data["sourceStartH"]
         source_end_handles = instance.data["sourceEndH"]
 
         # retime if needed
         if r_speed != 1.0:
-            source_start_handles = (
-                instance.data["sourceStart"] - r_handle_start)
-            source_end_handles = (
-                source_start_handles
-                + (r_source_dur - 1)
-                + r_handle_start
-                + r_handle_end
-            )
+            if retimed_handles:
+                # handles are retimed
+                source_start_handles = (
+                    instance.data["sourceStart"] - r_handle_start)
+                source_end_handles = (
+                    source_start_handles
+                    + (r_source_dur - 1)
+                    + r_handle_start
+                    + r_handle_end
+                )
+            else:
+                # handles are not retimed
+                source_end_handles = (
+                    source_start_handles
+                    + (r_source_dur - 1)
+                    + handle_start
+                    + handle_end
+                )
 
         # get frame range with handles for representation range
         frame_start_handle = frame_start - handle_start
         repre_frame_start = frame_start_handle
         if include_handles:
-            if r_speed == 1.0:
+            if r_speed == 1.0 or not retimed_handles:
                 frame_start_handle = frame_start
             else:
                 frame_start_handle = (
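
A toy recomputation of the two retime branches above may help; variable names mirror the diff and all sample numbers are made up:

```python
# Toy numbers for the two branches above (illustrative only).
r_speed = 2.0                 # clip is retimed to double speed
r_source_dur = 50             # retimed source duration
source_start = 1020           # instance.data["sourceStart"]
source_start_handles = 1000   # instance.data["sourceStartH"]
handle_start = handle_end = 10       # plain handles
r_handle_start = r_handle_end = 20   # retimed handles

# "retimed handles" branch: both start and handles use retimed values.
retimed_start = source_start - r_handle_start                      # 1000
retimed_end = retimed_start + (r_source_dur - 1) \
    + r_handle_start + r_handle_end                                # 1089

# "not retimed handles" branch: retimed duration, original handles.
plain_end = source_start_handles + (r_source_dur - 1) \
    + handle_start + handle_end                                    # 1069
```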


@@ -73,6 +73,8 @@ def load_apps():
     opfapi.CTX.flame_apps.append(
         opfapi.FlameMenuProjectConnect(opfapi.CTX.app_framework))
     opfapi.CTX.flame_apps.append(
         opfapi.FlameMenuTimeline(opfapi.CTX.app_framework))
+    opfapi.CTX.flame_apps.append(
+        opfapi.FlameMenuUniversal(opfapi.CTX.app_framework))
     opfapi.CTX.app_framework.log.info("Apps are loaded")
 
@@ -191,3 +193,27 @@ def get_timeline_custom_ui_actions():
     openpype_install()
 
     return _build_app_menu("FlameMenuTimeline")
+
+
+def get_batch_custom_ui_actions():
+    """Hook to create submenu in batch
+
+    Returns:
+        list: menu object
+    """
+    # install openpype and the host
+    openpype_install()
+
+    return _build_app_menu("FlameMenuUniversal")
+
+
+def get_media_panel_custom_ui_actions():
+    """Hook to create submenu in desktop
+
+    Returns:
+        list: menu object
+    """
+    # install openpype and the host
+    openpype_install()
+
+    return _build_app_menu("FlameMenuUniversal")

openpype/hosts/fusion/__init__.py

@@ -0,0 +1,10 @@
+from .addon import (
+    FusionAddon,
+    FUSION_HOST_DIR,
+)
+
+
+__all__ = (
+    "FusionAddon",
+    "FUSION_HOST_DIR",
+)

openpype/hosts/fusion/addon.py

@@ -0,0 +1,23 @@
+import os
+
+from openpype.modules import OpenPypeModule
+from openpype.modules.interfaces import IHostAddon
+
+FUSION_HOST_DIR = os.path.dirname(os.path.abspath(__file__))
+
+
+class FusionAddon(OpenPypeModule, IHostAddon):
+    name = "fusion"
+    host_name = "fusion"
+
+    def initialize(self, module_settings):
+        self.enabled = True
+
+    def get_launch_hook_paths(self, app):
+        if app.host_name != self.host_name:
+            return []
+        return [
+            os.path.join(FUSION_HOST_DIR, "hooks")
+        ]
+
+    def get_workfile_extensions(self):
+        return [".comp"]


@@ -18,12 +18,11 @@ from openpype.pipeline import (
     deregister_inventory_action_path,
     AVALON_CONTAINER_ID,
 )
-import openpype.hosts.fusion
+from openpype.hosts.fusion import FUSION_HOST_DIR
 
 log = Logger.get_logger(__name__)
 
-HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.fusion.__file__))
-PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
+PLUGINS_DIR = os.path.join(FUSION_HOST_DIR, "plugins")
 
 PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
 LOAD_PATH = os.path.join(PLUGINS_DIR, "load")


@@ -2,13 +2,11 @@
 import sys
 import os
 
-from openpype.pipeline import HOST_WORKFILE_EXTENSIONS
-
 from .pipeline import get_current_comp
 
 
 def file_extensions():
-    return HOST_WORKFILE_EXTENSIONS["fusion"]
+    return [".comp"]
 
 
 def has_unsaved_changes():


@@ -318,10 +318,9 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
 
     @staticmethod
     def create_otio_time_range_from_timeline_item_data(track_item):
-        speed = track_item.playbackSpeed()
         timeline = phiero.get_current_sequence()
         frame_start = int(track_item.timelineIn())
-        frame_duration = int((track_item.duration() - 1) / speed)
+        frame_duration = int(track_item.duration())
         fps = timeline.framerate().toFloat()
 
         return hiero_export.create_otio_time_range(


@@ -70,7 +70,7 @@ class CollectAssembly(pyblish.api.InstancePlugin):
             data[representation_id].append(instance_data)
 
         instance.data["scenedata"] = dict(data)
-        instance.data["hierarchy"] = list(set(hierarchy_nodes))
+        instance.data["nodesHierarchy"] = list(set(hierarchy_nodes))
 
     def get_file_rule(self, rule):
         return mel.eval('workspace -query -fileRuleEntry "{}"'.format(rule))


@@ -32,7 +32,7 @@ class ExtractAssembly(publish.Extractor):
             json.dump(instance.data["scenedata"], filepath, ensure_ascii=False)
 
         self.log.info("Extracting point cache ..")
-        cmds.select(instance.data["hierarchy"])
+        cmds.select(instance.data["nodesHierarchy"])
 
         # Run basic alembic exporter
         extract_alembic(file=hierarchy_path,


@@ -48,7 +48,7 @@ class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin):
         from openpype.hosts.maya.api import lib
 
         # Get all transforms in the loaded containers
-        container_roots = cmds.listRelatives(instance.data["hierarchy"],
+        container_roots = cmds.listRelatives(instance.data["nodesHierarchy"],
                                              children=True,
                                              type="transform",
                                              fullPath=True)


@@ -201,34 +201,6 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
         if not instance.data["review"]:
             instance.data["useSequenceForReview"] = False
 
-        project_name = legacy_io.active_project()
-        asset_name = instance.data["asset"]
-
-        # * Add audio to instance if exists.
-        # Find latest versions document
-        last_version_doc = get_last_version_by_subset_name(
-            project_name, "audioMain", asset_name=asset_name, fields=["_id"]
-        )
-
-        repre_doc = None
-        if last_version_doc:
-            # Try to find it's representation (Expected there is only one)
-            repre_docs = list(get_representations(
-                project_name, version_ids=[last_version_doc["_id"]]
-            ))
-            if not repre_docs:
-                self.log.warning(
-                    "Version document does not contain any representations"
-                )
-            else:
-                repre_doc = repre_docs[0]
-
-        # Add audio to instance if representation was found
-        if repre_doc:
-            instance.data["audio"] = [{
-                "offset": 0,
-                "filename": get_representation_path(repre_doc)
-            }]
-
         self.log.debug("instance.data: {}".format(pformat(instance.data)))
 
     def is_prerender(self, families):


@@ -2,7 +2,8 @@ import pyblish.api
 from openpype.pipeline.publish import get_errored_instances_from_context
 from openpype.hosts.nuke.api.lib import (
     get_write_node_template_attr,
-    set_node_knobs_from_settings
+    set_node_knobs_from_settings,
+    color_gui_to_int
 )
 
 from openpype.pipeline import PublishXmlValidationError
@@ -76,8 +77,11 @@ class ValidateNukeWriteNode(pyblish.api.InstancePlugin):
 
             # fix type differences
             if type(node_value) in (int, float):
-                value = float(value)
-                node_value = float(node_value)
+                if isinstance(value, list):
+                    value = color_gui_to_int(value)
+                else:
+                    value = float(value)
+                    node_value = float(node_value)
             else:
                 value = str(value)
                 node_value = str(node_value)


@@ -127,11 +127,11 @@ class CollectInstances(pyblish.api.ContextPlugin):
     ```python
     import os
 
-    import openpype.api
-    from avalon import photoshop
+    from openpype.pipeline import publish
+    from openpype.hosts.photoshop import api as photoshop
 
 
-    class ExtractImage(openpype.api.Extractor):
+    class ExtractImage(publish.Extractor):
         """Produce a flattened image file from instance
 
         This plug-in takes into account only the layers in the group.


@@ -64,10 +64,15 @@ def maintained_selection():
 
 @contextlib.contextmanager
-def maintained_visibility():
-    """Maintain visibility during context."""
+def maintained_visibility(layers=None):
+    """Maintain visibility during context.
+
+    Args:
+        layers (list) of PSItem (used for caching)
+    """
     visibility = {}
-    layers = stub().get_layers()
+    if not layers:
+        layers = stub().get_layers()
     for layer in layers:
         visibility[layer.id] = layer.visible
     try:


@@ -229,10 +229,11 @@ class PhotoshopServerStub:
         return self._get_layers_in_layers(parent_ids)
 
-    def get_layers_in_layers_ids(self, layers_ids):
+    def get_layers_in_layers_ids(self, layers_ids, layers=None):
         """Return all layers that belong to layers (might be groups).
 
         Args:
+            layers_ids <list of Int>
             layers <list of PSItem>:
 
         Returns:
@@ -240,10 +241,13 @@ class PhotoshopServerStub:
         """
         parent_ids = set(layers_ids)
 
-        return self._get_layers_in_layers(parent_ids)
+        return self._get_layers_in_layers(parent_ids, layers)
 
-    def _get_layers_in_layers(self, parent_ids):
-        all_layers = self.get_layers()
+    def _get_layers_in_layers(self, parent_ids, layers=None):
+        if not layers:
+            layers = self.get_layers()
+        all_layers = layers
 
         ret = []
         for layer in all_layers:
@@ -394,14 +398,17 @@ class PhotoshopServerStub:
         self.hide_all_others_layers_ids(extract_ids)
 
-    def hide_all_others_layers_ids(self, extract_ids):
+    def hide_all_others_layers_ids(self, extract_ids, layers=None):
         """hides all layers that are not part of the list or that are not
         children of this list
 
         Args:
             extract_ids (list): list of integer that should be visible
+            layers (list) of PSItem (used for caching)
         """
-        for layer in self.get_layers():
+        if not layers:
+            layers = self.get_layers()
+        for layer in layers:
             if layer.visible and layer.id not in extract_ids:
                 self.set_visible(layer.id, False)

View file

@@ -1,61 +1,99 @@
 import os

-import openpype.api
+import pyblish.api
+from openpype.pipeline import publish
 from openpype.hosts.photoshop import api as photoshop


-class ExtractImage(openpype.api.Extractor):
-    """Produce a flattened image file from instance
+class ExtractImage(pyblish.api.ContextPlugin):
+    """Extract all layers (groups) marked for publish.

-    This plug-in takes into account only the layers in the group.
+    Usually publishable instance is created as a wrapper of layer(s). For each
+    publishable instance so many images as there is 'formats' is created.
+
+    Logic tries to hide/unhide layers minimum times.
+
+    Called once for all publishable instances.
     """

+    order = publish.Extractor.order - 0.48
     label = "Extract Image"
     hosts = ["photoshop"]
     families = ["image", "background"]
     formats = ["png", "jpg"]

-    def process(self, instance):
-        staging_dir = self.staging_dir(instance)
-        self.log.info("Outputting image to {}".format(staging_dir))
-
-        # Perform extraction
+    def process(self, context):
         stub = photoshop.stub()
-        files = {}
+        hidden_layer_ids = set()
+        all_layers = stub.get_layers()
+        for layer in all_layers:
+            if not layer.visible:
+                hidden_layer_ids.add(layer.id)
+
+        stub.hide_all_others_layers_ids([], layers=all_layers)
+
         with photoshop.maintained_selection():
-            self.log.info("Extracting %s" % str(list(instance)))
-            with photoshop.maintained_visibility():
-                ids = set()
-                layer = instance.data.get("layer")
-                if layer:
-                    ids.add(layer.id)
-                add_ids = instance.data.pop("ids", None)
-                if add_ids:
-                    ids.update(set(add_ids))
-                extract_ids = set([ll.id for ll in stub.
-                                   get_layers_in_layers_ids(ids)])
-                stub.hide_all_others_layers_ids(extract_ids)
-
-                file_basename = os.path.splitext(
-                    stub.get_active_document_name()
-                )[0]
-                for extension in self.formats:
-                    _filename = "{}.{}".format(file_basename, extension)
-                    files[extension] = _filename
-
-                    full_filename = os.path.join(staging_dir, _filename)
-                    stub.saveAs(full_filename, extension, True)
-                    self.log.info(f"Extracted: {extension}")
-
-        representations = []
-        for extension, filename in files.items():
-            representations.append({
-                "name": extension,
-                "ext": extension,
-                "files": filename,
-                "stagingDir": staging_dir
-            })
-        instance.data["representations"] = representations
-        instance.data["stagingDir"] = staging_dir
-
-        self.log.info(f"Extracted {instance} to {staging_dir}")
+            with photoshop.maintained_visibility(layers=all_layers):
+                for instance in context:
+                    if instance.data["family"] not in self.families:
+                        continue
+
+                    staging_dir = self.staging_dir(instance)
+                    self.log.info("Outputting image to {}".format(staging_dir))
+
+                    # Perform extraction
+                    files = {}
+                    ids = set()
+                    layer = instance.data.get("layer")
+                    if layer:
+                        ids.add(layer.id)
+                    add_ids = instance.data.pop("ids", None)
+                    if add_ids:
+                        ids.update(set(add_ids))
+                    extract_ids = set([ll.id for ll in stub.
+                                       get_layers_in_layers_ids(ids,
+                                                                all_layers)
+                                       if ll.id not in hidden_layer_ids])
+
+                    for extracted_id in extract_ids:
+                        stub.set_visible(extracted_id, True)
+
+                    file_basename = os.path.splitext(
+                        stub.get_active_document_name()
+                    )[0]
+                    for extension in self.formats:
+                        _filename = "{}.{}".format(file_basename,
+                                                   extension)
+                        files[extension] = _filename
+
+                        full_filename = os.path.join(staging_dir,
+                                                     _filename)
+                        stub.saveAs(full_filename, extension, True)
+                        self.log.info(f"Extracted: {extension}")
+
+                    representations = []
+                    for extension, filename in files.items():
+                        representations.append({
+                            "name": extension,
+                            "ext": extension,
+                            "files": filename,
+                            "stagingDir": staging_dir
+                        })
+                    instance.data["representations"] = representations
+                    instance.data["stagingDir"] = staging_dir
+
+                    self.log.info(f"Extracted {instance} to {staging_dir}")
+
+                    for extracted_id in extract_ids:
+                        stub.set_visible(extracted_id, False)
+
+    def staging_dir(self, instance):
+        """Provide a temporary directory in which to store extracted files
+
+        Upon calling this method the staging directory is stored inside
+        the instance.data['stagingDir']
+        """
+        from openpype.pipeline.publish import get_instance_staging_dir
+
+        return get_instance_staging_dir(instance)
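The new order, `publish.Extractor.order - 0.48`, places this context plug-in right after `ExtractSaveScene` (`order - 0.49`, below), so the scene is saved once before any image is rendered. The speed-up itself comes from batching visibility changes: hide everything once, then show and hide only each instance's layers. A standalone sketch of that pattern, using a hypothetical in-memory stub in place of the Photoshop connection:

```python
# Hypothetical stand-in for PhotoshopServerStub, to show the batching idea.
class FakeStub:
    def __init__(self, layer_ids):
        self.visible = {layer_id: True for layer_id in layer_ids}

    def hide_all_others_layers_ids(self, keep_ids, layers=None):
        # One bulk hide up front instead of per-instance hides.
        for layer_id in self.visible:
            if layer_id not in keep_ids:
                self.visible[layer_id] = False

    def set_visible(self, layer_id, state):
        self.visible[layer_id] = state


stub = FakeStub([1, 2, 3, 4])
stub.hide_all_others_layers_ids([])
for extract_ids in ([1, 2], [3]):        # one loop pass per instance
    for layer_id in extract_ids:
        stub.set_visible(layer_id, True)
    # ... saveAs() for each format would run here ...
    for layer_id in extract_ids:
        stub.set_visible(layer_id, False)
```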

View file

@@ -2,12 +2,15 @@ import os
 import shutil

 from PIL import Image

-import openpype.api
-import openpype.lib
+from openpype.lib import (
+    run_subprocess,
+    get_ffmpeg_tool_path,
+)
+from openpype.pipeline import publish
 from openpype.hosts.photoshop import api as photoshop


-class ExtractReview(openpype.api.Extractor):
+class ExtractReview(publish.Extractor):
     """
         Produce a flattened or sequence image files from all 'image' instances.

@@ -72,7 +75,7 @@ class ExtractReview(openpype.api.Extractor):
             })
             processed_img_names = [img_list]

-        ffmpeg_path = openpype.lib.get_ffmpeg_tool_path("ffmpeg")
+        ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")

         instance.data["stagingDir"] = staging_dir

@@ -93,7 +96,7 @@ class ExtractReview(openpype.api.Extractor):
                 thumbnail_path
             ]
             self.log.debug("thumbnail args:: {}".format(args))
-            output = openpype.lib.run_subprocess(args)
+            output = run_subprocess(args)

             instance.data["representations"].append({
                 "name": "thumbnail",

@@ -116,7 +119,7 @@ class ExtractReview(openpype.api.Extractor):
                 mov_path
             ]
             self.log.debug("mov args:: {}".format(args))
-            output = openpype.lib.run_subprocess(args)
+            output = run_subprocess(args)
             self.log.debug(output)

             instance.data["representations"].append({
                 "name": "mov",

View file

@@ -1,11 +1,11 @@
-import openpype.api
+from openpype.pipeline import publish
 from openpype.hosts.photoshop import api as photoshop


-class ExtractSaveScene(openpype.api.Extractor):
+class ExtractSaveScene(publish.Extractor):
     """Save scene before extraction."""

-    order = openpype.api.Extractor.order - 0.49
+    order = publish.Extractor.order - 0.49
     label = "Extract Save Scene"
     hosts = ["photoshop"]
     families = ["workfile"]

View file

@@ -17,7 +17,7 @@ def setup(env):

     # collect script dirs
     if us_env:
-        log.info(f"Utility Scripts Env: `{us_env}`")
+        log.info("Utility Scripts Env: `{}`".format(us_env))
         us_paths = us_env.split(
             os.pathsep) + us_paths

@@ -25,13 +25,13 @@ def setup(env):
     for path in us_paths:
         scripts.update({path: os.listdir(path)})

-    log.info(f"Utility Scripts Dir: `{us_paths}`")
-    log.info(f"Utility Scripts: `{scripts}`")
+    log.info("Utility Scripts Dir: `{}`".format(us_paths))
+    log.info("Utility Scripts: `{}`".format(scripts))

     # make sure no script file is in folder
     for s in os.listdir(us_dir):
         path = os.path.join(us_dir, s)
-        log.info(f"Removing `{path}`...")
+        log.info("Removing `{}`...".format(path))
         if os.path.isdir(path):
             shutil.rmtree(path, onerror=None)
         else:

@@ -44,7 +44,7 @@ def setup(env):
             # script in script list
             src = os.path.join(d, s)
             dst = os.path.join(us_dir, s)
-            log.info(f"Copying `{src}` to `{dst}`...")
+            log.info("Copying `{}` to `{}`...".format(src, dst))
             if os.path.isdir(src):
                 shutil.copytree(
                     src, dst, symlinks=False,

View file

@@ -192,6 +192,7 @@ def get_system_general_anatomy_data(system_settings=None):
     return get_general_template_data(system_settings)


+@deprecated("openpype.client.get_linked_asset_ids")
 def get_linked_asset_ids(asset_doc):
     """Return linked asset ids for `asset_doc` from DB

@@ -200,26 +201,20 @@ def get_linked_asset_ids(asset_doc):

     Returns:
         (list): MongoDB ids of input links.
+
+    Deprecated:
+        Function will be removed after release version 3.16.*
     """
-    output = []
-    if not asset_doc:
-        return output
-
-    input_links = asset_doc["data"].get("inputLinks") or []
-    if input_links:
-        for item in input_links:
-            # Backwards compatibility for "_id" key which was replaced with
-            #   "id"
-            if "_id" in item:
-                link_id = item["_id"]
-            else:
-                link_id = item["id"]
-            output.append(link_id)
-    return output
+    from openpype.client import get_linked_asset_ids
+    from openpype.pipeline import legacy_io

+    project_name = legacy_io.active_project()
+    return get_linked_asset_ids(project_name, asset_doc=asset_doc)

-@with_pipeline_io
+
+@deprecated("openpype.client.get_linked_assets")
 def get_linked_assets(asset_doc):
     """Return linked assets for `asset_doc` from DB

@@ -228,14 +223,17 @@ def get_linked_assets(asset_doc):

     Returns:
         (list) Asset documents of input links for passed asset doc.
+
+    Deprecated:
+        Function will be removed after release version 3.15.*
     """
-    link_ids = get_linked_asset_ids(asset_doc)
-    if not link_ids:
-        return []
+    from openpype.pipeline import legacy_io
+    from openpype.client import get_linked_assets

     project_name = legacy_io.active_project()
-    return list(get_assets(project_name, link_ids))
+    return get_linked_assets(project_name, asset_doc=asset_doc)


 @deprecated("openpype.client.get_last_version_by_subset_name")
@@ -1041,9 +1039,10 @@ def get_last_workfile(
     )


-@with_pipeline_io
-def get_linked_ids_for_representations(project_name, repre_ids, dbcon=None,
-                                       link_type=None, max_depth=0):
+@deprecated("openpype.client.get_linked_representation_id")
+def get_linked_ids_for_representations(
+    project_name, repre_ids, dbcon=None, link_type=None, max_depth=0
+):
     """Returns list of linked ids of particular type (if provided).

     Goes from representations to version, back to representations

@@ -1054,104 +1053,25 @@ def get_linked_ids_for_representations(project_name, repre_ids, dbcon=None,
             with Session.
         link_type (str): ['reference', '..]
         max_depth (int): limit how many levels of recursion

     Returns:
         (list) of ObjectId - linked representations
+
+    Deprecated:
+        Function will be removed after release version 3.16.*
     """
-    # Create new dbcon if not passed and use passed project name
-    if not dbcon:
-        from openpype.pipeline import AvalonMongoDB
-
-        dbcon = AvalonMongoDB()
-        dbcon.Session["AVALON_PROJECT"] = project_name
-
-    # Validate that passed dbcon has same project
-    elif dbcon.Session["AVALON_PROJECT"] != project_name:
-        raise ValueError("Passed connection does not have right project")
+    from openpype.client import get_linked_representation_id

     if not isinstance(repre_ids, list):
         repre_ids = [repre_ids]

-    version_ids = dbcon.distinct("parent", {
-        "_id": {"$in": repre_ids},
-        "type": "representation"
-    })
-
-    match = {
-        "_id": {"$in": version_ids},
-        "type": "version"
-    }
-
-    graph_lookup = {
-        "from": project_name,
-        "startWith": "$data.inputLinks.id",
-        "connectFromField": "data.inputLinks.id",
-        "connectToField": "_id",
-        "as": "outputs_recursive",
-        "depthField": "depth"
-    }
-    if max_depth != 0:
-        # We offset by -1 since 0 basically means no recursion
-        # but the recursion only happens after the initial lookup
-        # for outputs.
-        graph_lookup["maxDepth"] = max_depth - 1
-
-    pipeline_ = [
-        # Match
-        {"$match": match},
-        # Recursive graph lookup for inputs
-        {"$graphLookup": graph_lookup}
-    ]
-
-    result = dbcon.aggregate(pipeline_)
-    referenced_version_ids = _process_referenced_pipeline_result(result,
-                                                                 link_type)
-    ref_ids = dbcon.distinct(
-        "_id",
-        filter={
-            "parent": {"$in": list(referenced_version_ids)},
-            "type": "representation"
-        }
-    )
-    return list(ref_ids)
-
-
-def _process_referenced_pipeline_result(result, link_type):
-    """Filters result from pipeline for particular link_type.
-
-    Pipeline cannot use link_type directly in a query.
-
-    Returns:
-        (list)
-    """
-    referenced_version_ids = set()
-    correctly_linked_ids = set()
-    for item in result:
-        input_links = item["data"].get("inputLinks", [])
-        correctly_linked_ids = _filter_input_links(input_links,
-                                                   link_type,
-                                                   correctly_linked_ids)
-
-        # outputs_recursive in random order, sort by depth
-        outputs_recursive = sorted(item.get("outputs_recursive", []),
-                                   key=lambda d: d["depth"])
-        for output in outputs_recursive:
-            if output["_id"] not in correctly_linked_ids:  # leaf
-                continue
-
-            correctly_linked_ids = _filter_input_links(
-                output["data"].get("inputLinks", []),
-                link_type,
-                correctly_linked_ids)
-
-            referenced_version_ids.add(output["_id"])
-
-    return referenced_version_ids
-
-
-def _filter_input_links(input_links, link_type, correctly_linked_ids):
-    for input_link in input_links:
-        if not link_type or input_link["type"] == link_type:
-            correctly_linked_ids.add(input_link.get("id") or
-                                     input_link.get("_id"))  # legacy
-    return correctly_linked_ids
+    output = []
+    for repre_id in repre_ids:
+        output.extend(get_linked_representation_id(
+            project_name,
+            repre_id=repre_id,
+            link_type=link_type,
+            max_depth=max_depth
+        ))
+    return output
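Callers migrating off the deprecated wrapper can query links per representation id directly. A minimal sketch, mirroring the wrapper body above:

```python
from openpype.client import get_linked_representation_id


def find_reference_links(project_name, repre_ids):
    # One call per representation id replaces the old batched query.
    output = []
    for repre_id in repre_ids:
        output.extend(get_linked_representation_id(
            project_name,
            repre_id=repre_id,
            link_type="reference",
        ))
    return output
```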

View file

@@ -1,21 +1,60 @@
 import os
 import re
-import abc
-import json
 import logging
-import six
 import platform
+import functools
+import warnings

 import clique

-from openpype.client import get_project
-from openpype.settings import get_project_settings
-
-from .profiles_filtering import filter_profiles
-
 log = logging.getLogger(__name__)


+class PathToolsDeprecatedWarning(DeprecationWarning):
+    pass
+
+
+def deprecated(new_destination):
+    """Mark functions as deprecated.
+
+    It will result in a warning being emitted when the function is used.
+    """
+
+    func = None
+    if callable(new_destination):
+        func = new_destination
+        new_destination = None
+
+    def _decorator(decorated_func):
+        if new_destination is None:
+            warning_message = (
+                " Please check content of deprecated function to figure out"
+                " possible replacement."
+            )
+        else:
+            warning_message = " Please replace your usage with '{}'.".format(
+                new_destination
+            )
+
+        @functools.wraps(decorated_func)
+        def wrapper(*args, **kwargs):
+            warnings.simplefilter("always", PathToolsDeprecatedWarning)
+            warnings.warn(
+                (
+                    "Call to deprecated function '{}'"
+                    "\nFunction was moved or removed.{}"
+                ).format(decorated_func.__name__, warning_message),
+                category=PathToolsDeprecatedWarning,
+                stacklevel=4
+            )
+            return decorated_func(*args, **kwargs)
+        return wrapper
+
+    if func is None:
+        return _decorator
+    return _decorator(func)
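The decorator supports both calling conventions, bare and with a replacement path. A quick usage sketch with hypothetical function names:

```python
@deprecated("openpype.pipeline.project_folders.fill_paths")
def fill_paths_old(path_list, anatomy):
    # Warns, naming the suggested replacement, then delegates.
    from openpype.pipeline.project_folders import fill_paths
    return fill_paths(path_list, anatomy)


@deprecated
def old_helper():
    # Bare usage: warns without a suggested replacement.
    pass
```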
 def format_file_size(file_size, suffix=None):
     """Returns formatted string with size in appropriate unit.
@@ -232,107 +271,69 @@ def get_last_version_from_path(path_dir, filter):
     return None


+@deprecated("openpype.pipeline.project_folders.concatenate_splitted_paths")
 def concatenate_splitted_paths(split_paths, anatomy):
-    pattern_array = re.compile(r"\[.*\]")
-    output = []
-    for path_items in split_paths:
-        clean_items = []
-        if isinstance(path_items, str):
-            path_items = [path_items]
-
-        for path_item in path_items:
-            if not re.match(r"{.+}", path_item):
-                path_item = re.sub(pattern_array, "", path_item)
-            clean_items.append(path_item)
-
-        # backward compatibility
-        if "__project_root__" in path_items:
-            for root, root_path in anatomy.roots.items():
-                if not os.path.exists(str(root_path)):
-                    log.debug("Root {} path path {} not exist on \
-                        computer!".format(root, root_path))
-                    continue
-
-                clean_items = ["{{root[{}]}}".format(root),
-                               r"{project[name]}"] + clean_items[1:]
-                output.append(os.path.normpath(os.path.sep.join(clean_items)))
-            continue
-
-        output.append(os.path.normpath(os.path.sep.join(clean_items)))
-    return output
+    """
+    Deprecated:
+        Function will be removed after release version 3.16.*
+    """
+    from openpype.pipeline.project_folders import concatenate_splitted_paths
+
+    return concatenate_splitted_paths(split_paths, anatomy)


+@deprecated
 def get_format_data(anatomy):
-    project_doc = get_project(anatomy.project_name, fields=["data.code"])
-    project_code = project_doc["data"]["code"]
-
-    return {
-        "root": anatomy.roots,
-        "project": {
-            "name": anatomy.project_name,
-            "code": project_code
-        },
-    }
+    """
+    Deprecated:
+        Function will be removed after release version 3.16.*
+    """
+    from openpype.pipeline.template_data import get_project_template_data
+
+    data = get_project_template_data(project_name=anatomy.project_name)
+    data["root"] = anatomy.roots
+    return data


+@deprecated("openpype.pipeline.project_folders.fill_paths")
 def fill_paths(path_list, anatomy):
-    format_data = get_format_data(anatomy)
-    filled_paths = []
-    for path in path_list:
-        new_path = path.format(**format_data)
-        filled_paths.append(new_path)
-
-    return filled_paths
+    """
+    Deprecated:
+        Function will be removed after release version 3.16.*
+    """
+    from openpype.pipeline.project_folders import fill_paths
+
+    return fill_paths(path_list, anatomy)


+@deprecated("openpype.pipeline.project_folders.create_project_folders")
 def create_project_folders(basic_paths, project_name):
-    from openpype.pipeline import Anatomy
-
-    anatomy = Anatomy(project_name)
-    concat_paths = concatenate_splitted_paths(basic_paths, anatomy)
-    filled_paths = fill_paths(concat_paths, anatomy)
-
-    # Create folders
-    for path in filled_paths:
-        if os.path.exists(path):
-            log.debug("Folder already exists: {}".format(path))
-        else:
-            log.debug("Creating folder: {}".format(path))
-            os.makedirs(path)
-
-
-def _list_path_items(folder_structure):
-    output = []
-    for key, value in folder_structure.items():
-        if not value:
-            output.append(key)
-        else:
-            paths = _list_path_items(value)
-            for path in paths:
-                if not isinstance(path, (list, tuple)):
-                    path = [path]
-                item = [key]
-                item.extend(path)
-                output.append(item)
-
-    return output
+    """
+    Deprecated:
+        Function will be removed after release version 3.16.*
+    """
+    from openpype.pipeline.project_folders import create_project_folders
+
+    return create_project_folders(project_name, basic_paths)


+@deprecated("openpype.pipeline.project_folders.get_project_basic_paths")
 def get_project_basic_paths(project_name):
-    project_settings = get_project_settings(project_name)
-    folder_structure = (
-        project_settings["global"]["project_folder_structure"]
-    )
-    if not folder_structure:
-        return []
-
-    if isinstance(folder_structure, str):
-        folder_structure = json.loads(folder_structure)
-    return _list_path_items(folder_structure)
+    """
+    Deprecated:
+        Function will be removed after release version 3.16.*
+    """
+    from openpype.pipeline.project_folders import get_project_basic_paths
+
+    return get_project_basic_paths(project_name)


+@deprecated("openpype.pipeline.workfile.create_workdir_extra_folders")
 def create_workdir_extra_folders(
     workdir, host_name, task_type, task_name, project_name,
     project_settings=None
@@ -349,37 +350,18 @@ def create_workdir_extra_folders(
         project_name (str): Name of project on which task is.
         project_settings (dict): Prepared project settings. Are loaded if not
             passed.
+
+    Deprecated:
+        Function will be removed after release version 3.16.*
     """
-    # Load project settings if not set
-    if not project_settings:
-        project_settings = get_project_settings(project_name)
-
-    # Load extra folders profiles
-    extra_folders_profiles = (
-        project_settings["global"]["tools"]["Workfiles"]["extra_folders"]
-    )
-
-    # Skip if are empty
-    if not extra_folders_profiles:
-        return
-
-    # Prepare profiles filters
-    filter_data = {
-        "task_types": task_type,
-        "task_names": task_name,
-        "hosts": host_name
-    }
-    profile = filter_profiles(extra_folders_profiles, filter_data)
-    if profile is None:
-        return
-
-    for subfolder in profile["folders"]:
-        # Make sure backslashes are converted to forwards slashes
-        #   and does not start with slash
-        subfolder = subfolder.replace("\\", "/").lstrip("/")
-        # Skip empty strings
-        if not subfolder:
-            continue
-
-        fullpath = os.path.join(workdir, subfolder)
-        if not os.path.exists(fullpath):
-            os.makedirs(fullpath)
+    from openpype.pipeline.workfile import create_workdir_extra_folders
+
+    return create_workdir_extra_folders(
+        workdir,
+        host_name,
+        task_type,
+        task_name,
+        project_name,
+        project_settings
+    )

View file

@@ -3,7 +3,6 @@
 import os
 import logging
 import re
-import json
 import warnings
 import functools

View file

@@ -700,9 +700,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
         self.context = context
         self.anatomy = instance.context.data["anatomy"]

-        if hasattr(instance, "_log"):
-            data['_log'] = instance._log
-
         asset = data.get("asset") or legacy_io.Session["AVALON_ASSET"]
         subset = data.get("subset")

View file

@@ -1,7 +1,10 @@
 import re

+from openpype.pipeline.project_folders import (
+    get_project_basic_paths,
+    create_project_folders,
+)
 from openpype_modules.ftrack.lib import BaseAction, statics_icon
-from openpype.api import get_project_basic_paths, create_project_folders


 class CreateProjectFolders(BaseAction):

@@ -81,7 +84,7 @@ class CreateProjectFolders(BaseAction):
             }

             # Invoking OpenPype API to create the project folders
-            create_project_folders(basic_paths, project_name)
+            create_project_folders(project_name, basic_paths)
             self.create_ftrack_entities(basic_paths, project_entity)

             self.trigger_event(

View file

@@ -1,5 +1,8 @@
 """Loads publishing context from json and continues in publish process.

+Should run before 'CollectAnatomyContextData' so the user on context is
+changed before it's stored to context anatomy data or instance anatomy data.
+
 Requires:
     anatomy -> context["anatomy"] *(pyblish.api.CollectorOrder - 0.11)

@@ -13,7 +16,7 @@ import os
 import pyblish.api


-class CollectUsername(pyblish.api.ContextPlugin):
+class CollectUsernameForWebpublish(pyblish.api.ContextPlugin):
     """
         Translates user email to Ftrack username.

@@ -32,10 +35,8 @@ class CollectUsername(pyblish.api.ContextPlugin):
     hosts = ["webpublisher", "photoshop"]
     targets = ["remotepublish", "filespublish", "tvpaint_worker"]

-    _context = None
-
     def process(self, context):
-        self.log.info("CollectUsername")
+        self.log.info("{}".format(self.__class__.__name__))
         os.environ["FTRACK_API_USER"] = os.environ["FTRACK_BOT_API_USER"]
         os.environ["FTRACK_API_KEY"] = os.environ["FTRACK_BOT_API_KEY"]

@@ -54,12 +55,14 @@ class CollectUsername(pyblish.api.ContextPlugin):
             return

         session = ftrack_api.Session(auto_connect_event_hub=False)
-        user = session.query("User where email like '{}'".format(user_email))
+        user = session.query(
+            "User where email like '{}'".format(user_email)
+        ).first()

         if not user:
             raise ValueError(
                 "Couldn't find user with {} email".format(user_email))

-        user = user[0]
         username = user.get("username")
         self.log.debug("Resolved ftrack username:: {}".format(username))
         os.environ["FTRACK_API_USER"] = username

@@ -67,5 +70,4 @@ class CollectUsername(pyblish.api.ContextPlugin):
         burnin_name = username
         if '@' in burnin_name:
             burnin_name = burnin_name[:burnin_name.index('@')]
-        os.environ["WEBPUBLISH_OPENPYPE_USERNAME"] = burnin_name
         context.data["user"] = burnin_name

View file

@@ -166,50 +166,21 @@ def update_op_assets(
         # Substitute item type for general classification (assets or shots)
         if item_type in ["Asset", "AssetType"]:
-            substitute_item_type = "assets"
+            entity_root_asset_name = "Assets"
         elif item_type in ["Episode", "Sequence"]:
-            substitute_item_type = "shots"
-        else:
-            substitute_item_type = f"{item_type.lower()}s"
-        entity_parent_folders = [
-            f
-            for f in project_module_settings["entities_root"]
-            .get(substitute_item_type)
-            .split("/")
-            if f
-        ]
+            entity_root_asset_name = "Shots"

         # Root parent folder if exist
         visual_parent_doc_id = (
             asset_doc_ids[parent_zou_id]["_id"] if parent_zou_id else None
         )
         if visual_parent_doc_id is None:
-            # Find root folder docs
-            root_folder_docs = get_assets(
+            # Find root folder doc ("Assets" or "Shots")
+            root_folder_doc = get_asset_by_name(
                 project_name,
-                asset_names=[entity_parent_folders[-1]],
+                asset_name=entity_root_asset_name,
                 fields=["_id", "data.root_of"],
             )
-            # NOTE: Not sure why it's checking for entity type?
-            #   OP3 does not support multiple assets with same names so type
-            #   filtering is irelevant.
-            # This way mimics previous implementation:
-            # ```
-            # root_folder_doc = dbcon.find_one(
-            #     {
-            #         "type": "asset",
-            #         "name": entity_parent_folders[-1],
-            #         "data.root_of": substitute_item_type,
-            #     },
-            #     ["_id"],
-            # )
-            # ```
-            root_folder_doc = None
-            for folder_doc in root_folder_docs:
-                root_of = folder_doc.get("data", {}).get("root_of")
-                if root_of == substitute_item_type:
-                    root_folder_doc = folder_doc
-                    break

             if root_folder_doc:
                 visual_parent_doc_id = root_folder_doc["_id"]

@@ -240,7 +211,7 @@ def update_op_assets(
         item_name = item["name"]

         # Set root folders parents
-        item_data["parents"] = entity_parent_folders + item_data["parents"]
+        item_data["parents"] = [entity_root_asset_name] + item_data["parents"]

         # Update 'data' different in zou DB
         updated_data = {

@@ -318,13 +289,13 @@ def write_project_to_op(project: dict, dbcon: AvalonMongoDB) -> UpdateOne:
     )


-def sync_all_projects(login: str, password: str):
+def sync_all_projects(login: str, password: str, ignore_projects: list = None):
     """Update all OP projects in DB with Zou data.

     Args:
         login (str): Kitsu user login
         password (str): Kitsu user password
+        ignore_projects (list): List of unsynced project names

     Raises:
         gazu.exception.AuthFailedException: Wrong user login and/or password
     """

@@ -340,6 +311,8 @@ def sync_all_projects(login: str, password: str):
     dbcon.install()

     all_projects = gazu.project.all_open_projects()
     for project in all_projects:
+        if ignore_projects and project["name"] in ignore_projects:
+            continue
         sync_project_from_kitsu(dbcon, project)

@@ -396,54 +369,30 @@ def sync_project_from_kitsu(dbcon: AvalonMongoDB, project: dict):
         zou_ids_and_asset_docs[project["id"]] = project_doc

     # Create entities root folders
-    project_module_settings = get_project_settings(project_name)["kitsu"]
-    for entity_type, root in project_module_settings["entities_root"].items():
-        parent_folders = root.split("/")
-        direct_parent_doc = None
-        for i, folder in enumerate(parent_folders, 1):
-            parent_doc = get_asset_by_name(
-                project_name, folder, fields=["_id", "data.root_of"]
-            )
-            # NOTE: Not sure why it's checking for entity type?
-            #   OP3 does not support multiple assets with same names so type
-            #   filtering is irelevant.
-            #   Also all of the entities could find be queried at once using
-            #   'get_assets'.
-            # This way mimics previous implementation:
-            # ```
-            # parent_doc = dbcon.find_one(
-            #     {"type": "asset", "name": folder, "data.root_of": entity_type}
-            # )
-            # ```
-            if (
-                parent_doc
-                and parent_doc.get("data", {}).get("root_of") != entity_type
-            ):
-                parent_doc = None
-
-            if not parent_doc:
-                direct_parent_doc = dbcon.insert_one(
-                    {
-                        "name": folder,
-                        "type": "asset",
-                        "schema": "openpype:asset-3.0",
-                        "data": {
-                            "root_of": entity_type,
-                            "parents": parent_folders[:i],
-                            "visualParent": direct_parent_doc.inserted_id
-                            if direct_parent_doc
-                            else None,
-                            "tasks": {},
-                        },
-                    }
-                )
+    to_insert = [
+        {
+            "name": r,
+            "type": "asset",
+            "schema": "openpype:asset-3.0",
+            "data": {
+                "root_of": r,
+                "tasks": {},
+            },
+        }
+        for r in ["Assets", "Shots"]
+        if not get_asset_by_name(
+            project_name, r, fields=["_id", "data.root_of"]
+        )
+    ]

     # Create
-    to_insert = [
-        create_op_asset(item)
-        for item in all_entities
-        if item["id"] not in zou_ids_and_asset_docs.keys()
-    ]
+    to_insert.extend(
+        [
+            create_op_asset(item)
+            for item in all_entities
+            if item["id"] not in zou_ids_and_asset_docs.keys()
+        ]
+    )
     if to_insert:
         # Insert doc in DB
         dbcon.insert_many(to_insert)

View file

@@ -95,13 +95,15 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
         Reviews might be large, so allow only adding link to message instead of
         uploading only.
         """
         fill_data = copy.deepcopy(instance.context.data["anatomyData"])

+        username = fill_data.get("user")
         fill_pairs = [
             ("asset", instance.data.get("asset", fill_data.get("asset"))),
             ("subset", instance.data.get("subset", fill_data.get("subset"))),
-            ("username", instance.data.get("username",
-                                           fill_data.get("username"))),
+            ("user", username),
+            ("username", username),
             ("app", instance.data.get("app", fill_data.get("app"))),
             ("family", instance.data.get("family", fill_data.get("family"))),
             ("version", str(instance.data.get("version",

@@ -110,13 +112,19 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
         if review_path:
             fill_pairs.append(("review_filepath", review_path))

-        task_data = instance.data.get("task")
-        if not task_data:
-            task_data = fill_data.get("task")
-        for key, value in task_data.items():
-            fill_key = "task[{}]".format(key)
-            fill_pairs.append((fill_key, value))
-        fill_pairs.append(("task", task_data["name"]))
+        task_data = fill_data.get("task")
+        if task_data:
+            if (
+                "{task}" in message_templ
+                or "{Task}" in message_templ
+                or "{TASK}" in message_templ
+            ):
+                fill_pairs.append(("task", task_data["name"]))
+            else:
+                for key, value in task_data.items():
+                    fill_key = "task[{}]".format(key)
+                    fill_pairs.append((fill_key, value))

         self.log.debug("fill_pairs ::{}".format(fill_pairs))
         multiple_case_variants = prepare_template_data(fill_pairs)

View file

@@ -0,0 +1,107 @@
+import os
+import re
+import json
+
+import six
+
+from openpype.settings import get_project_settings
+from openpype.lib import Logger
+
+from .anatomy import Anatomy
+from .template_data import get_project_template_data
+
+
+def concatenate_splitted_paths(split_paths, anatomy):
+    log = Logger.get_logger("concatenate_splitted_paths")
+    pattern_array = re.compile(r"\[.*\]")
+    output = []
+    for path_items in split_paths:
+        clean_items = []
+        if isinstance(path_items, str):
+            path_items = [path_items]
+
+        for path_item in path_items:
+            if not re.match(r"{.+}", path_item):
+                path_item = re.sub(pattern_array, "", path_item)
+            clean_items.append(path_item)
+
+        # backward compatibility
+        if "__project_root__" in path_items:
+            for root, root_path in anatomy.roots.items():
+                if not os.path.exists(str(root_path)):
+                    log.debug("Root {} path path {} not exist on \
+                        computer!".format(root, root_path))
+                    continue
+
+                clean_items = ["{{root[{}]}}".format(root),
+                               r"{project[name]}"] + clean_items[1:]
+                output.append(os.path.normpath(os.path.sep.join(clean_items)))
+            continue
+
+        output.append(os.path.normpath(os.path.sep.join(clean_items)))
+
+    return output
+
+
+def fill_paths(path_list, anatomy):
+    format_data = get_project_template_data(project_name=anatomy.project_name)
+    format_data["root"] = anatomy.roots
+    filled_paths = []
+    for path in path_list:
+        new_path = path.format(**format_data)
+        filled_paths.append(new_path)
+
+    return filled_paths
+
+
+def create_project_folders(project_name, basic_paths=None):
+    log = Logger.get_logger("create_project_folders")
+    anatomy = Anatomy(project_name)
+    if basic_paths is None:
+        basic_paths = get_project_basic_paths(project_name)
+
+    if not basic_paths:
+        return
+
+    concat_paths = concatenate_splitted_paths(basic_paths, anatomy)
+    filled_paths = fill_paths(concat_paths, anatomy)
+
+    # Create folders
+    for path in filled_paths:
+        if os.path.exists(path):
+            log.debug("Folder already exists: {}".format(path))
+        else:
+            log.debug("Creating folder: {}".format(path))
+            os.makedirs(path)
+
+
+def _list_path_items(folder_structure):
+    output = []
+    for key, value in folder_structure.items():
+        if not value:
+            output.append(key)
+            continue
+
+        paths = _list_path_items(value)
+        for path in paths:
+            if not isinstance(path, (list, tuple)):
+                path = [path]
+            item = [key]
+            item.extend(path)
+            output.append(item)
+
+    return output
+
+
+def get_project_basic_paths(project_name):
+    project_settings = get_project_settings(project_name)
+    folder_structure = (
+        project_settings["global"]["project_folder_structure"]
+    )
+    if not folder_structure:
+        return []
+
+    if isinstance(folder_structure, six.string_types):
+        folder_structure = json.loads(folder_structure)
+    return _list_path_items(folder_structure)
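A minimal usage sketch of the relocated helpers; the project name is a placeholder:

```python
from openpype.pipeline.project_folders import (
    get_project_basic_paths,
    create_project_folders,
)

project_name = "my_project"  # hypothetical project

# 'basic_paths' is optional in the new signature; when omitted it is
# loaded from the project's "project_folder_structure" setting.
create_project_folders(project_name)

# Or inspect the resolved folder structure first:
print(get_project_basic_paths(project_name))
```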

View file

@@ -53,7 +53,7 @@ def get_project_template_data(project_doc=None, project_name=None):
         project_name = project_doc["name"]

     if not project_doc:
-        project_code = get_project(project_name, fields=["data.code"])
+        project_doc = get_project(project_name, fields=["data.code"])

     project_code = project_doc.get("data", {}).get("code")

     return {

View file

@@ -9,6 +9,8 @@ from .path_resolving import (

     get_custom_workfile_template,
     get_custom_workfile_template_by_string_context,
+
+    create_workdir_extra_folders,
 )

 from .build_workfile import BuildWorkfile

@@ -26,5 +28,7 @@ __all__ = (
     "get_custom_workfile_template",
     "get_custom_workfile_template_by_string_context",

+    "create_workdir_extra_folders",
+
     "BuildWorkfile",
 )

View file

@@ -5,13 +5,15 @@ import six
 import logging
 from functools import reduce

-from openpype.client import get_asset_by_name
+from openpype.client import (
+    get_asset_by_name,
+    get_linked_assets,
+)
 from openpype.settings import get_project_settings
 from openpype.lib import (
     StringTemplate,
     Logger,
     filter_profiles,
-    get_linked_assets,
 )
 from openpype.pipeline import legacy_io, Anatomy
 from openpype.pipeline.load import (

View file

@@ -8,10 +8,10 @@ from openpype.client import (
     get_subsets,
     get_last_versions,
     get_representations,
+    get_linked_assets,
 )
 from openpype.settings import get_project_settings
 from openpype.lib import (
-    get_linked_assets,
     filter_profiles,
     Logger,
 )

View file

@@ -467,3 +467,60 @@ def get_custom_workfile_template_by_string_context(
     return get_custom_workfile_template(
         project_doc, asset_doc, task_name, host_name, anatomy, project_settings
     )
+
+
+def create_workdir_extra_folders(
+    workdir,
+    host_name,
+    task_type,
+    task_name,
+    project_name,
+    project_settings=None
+):
+    """Create extra folders in work directory based on context.
+
+    Args:
+        workdir (str): Path to workdir where workfiles is stored.
+        host_name (str): Name of host implementation.
+        task_type (str): Type of task for which extra folders should be
+            created.
+        task_name (str): Name of task for which extra folders should be
+            created.
+        project_name (str): Name of project on which task is.
+        project_settings (dict): Prepared project settings. Are loaded if not
+            passed.
+    """
+    # Load project settings if not set
+    if not project_settings:
+        project_settings = get_project_settings(project_name)
+
+    # Load extra folders profiles
+    extra_folders_profiles = (
+        project_settings["global"]["tools"]["Workfiles"]["extra_folders"]
+    )
+
+    # Skip if are empty
+    if not extra_folders_profiles:
+        return
+
+    # Prepare profiles filters
+    filter_data = {
+        "task_types": task_type,
+        "task_names": task_name,
+        "hosts": host_name
+    }
+    profile = filter_profiles(extra_folders_profiles, filter_data)
+    if profile is None:
+        return
+
+    for subfolder in profile["folders"]:
+        # Make sure backslashes are converted to forwards slashes
+        #   and does not start with slash
+        subfolder = subfolder.replace("\\", "/").lstrip("/")
+        # Skip empty strings
+        if not subfolder:
+            continue
+
+        fullpath = os.path.join(workdir, subfolder)
+        if not os.path.exists(fullpath):
+            os.makedirs(fullpath)
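A minimal call sketch; every context value below is a placeholder:

```python
from openpype.pipeline.workfile import create_workdir_extra_folders

create_workdir_extra_folders(
    workdir="/projects/my_project/assets/hero/work/modeling",  # placeholder
    host_name="maya",
    task_type="Modeling",
    task_name="modeling",
    project_name="my_project",
)
```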

View file

@@ -1,6 +1,6 @@
+from openpype.client import get_linked_representation_id
 from openpype.modules import ModulesManager
 from openpype.pipeline import load
-from openpype.lib.avalon_context import get_linked_ids_for_representations
 from openpype.modules.sync_server.utils import SiteAlreadyPresentError

@@ -45,9 +45,11 @@ class AddSyncSite(load.LoaderPlugin):
                                   force=True)

         if family == "workfile":
-            links = get_linked_ids_for_representations(project_name,
-                                                       [repre_id],
-                                                       link_type="reference")
+            links = get_linked_representation_id(
+                project_name,
+                repre_id=repre_id,
+                link_type="reference"
+            )
             for link_repre_id in links:
                 try:
                     self.sync_server.add_site(project_name, link_repre_id,

View file

@@ -0,0 +1,105 @@
+import pyblish.api
+
+from openpype.client import (
+    get_last_version_by_subset_name,
+    get_representations,
+)
+from openpype.pipeline import (
+    legacy_io,
+    get_representation_path,
+)
+
+
+class CollectAudio(pyblish.api.InstancePlugin):
+    """Collect asset's last published audio.
+
+    The audio subset name searched for is defined in:
+        project settings > Collect Audio
+    """
+
+    label = "Collect Asset Audio"
+    order = pyblish.api.CollectorOrder + 0.1
+    families = ["review"]
+    hosts = [
+        "nuke",
+        "maya",
+        "shell",
+        "hiero",
+        "premiere",
+        "harmony",
+        "traypublisher",
+        "standalonepublisher",
+        "fusion",
+        "tvpaint",
+        "resolve",
+        "webpublisher",
+        "aftereffects",
+        "flame",
+        "unreal"
+    ]
+
+    audio_subset_name = "audioMain"
+
+    def process(self, instance):
+        if instance.data.get("audio"):
+            self.log.info(
+                "Skipping audio collection. It is already collected."
+            )
+            return
+
+        # Add audio to instance if it exists.
+        self.log.info((
+            "Searching for audio subset '{subset}'"
+            " in asset '{asset}'"
+        ).format(
+            subset=self.audio_subset_name,
+            asset=instance.data["asset"]
+        ))
+
+        repre_doc = self._get_repre_doc(instance)
+
+        # Add audio to instance if representation was found
+        if repre_doc:
+            instance.data["audio"] = [{
+                "offset": 0,
+                "filename": get_representation_path(repre_doc)
+            }]
+            self.log.info("Audio data added to instance ...")
+
+    def _get_repre_doc(self, instance):
+        cache = instance.context.data.get("__cache_asset_audio")
+        if cache is None:
+            cache = {}
+            instance.context.data["__cache_asset_audio"] = cache
+
+        asset_name = instance.data["asset"]
+
+        # First try to get it from cache
+        if asset_name in cache:
+            return cache[asset_name]
+
+        project_name = legacy_io.active_project()
+
+        # Find latest versions document
+        last_version_doc = get_last_version_by_subset_name(
+            project_name,
+            self.audio_subset_name,
+            asset_name=asset_name,
+            fields=["_id"]
+        )
+
+        repre_doc = None
+        if last_version_doc:
+            # Try to find its representation (expected there is only one)
+            repre_docs = list(get_representations(
+                project_name, version_ids=[last_version_doc["_id"]]
+            ))
+            if not repre_docs:
+                self.log.warning(
+                    "Version document does not contain any representations"
+                )
+            else:
+                repre_doc = repre_docs[0]
+
+        # Update cache
+        cache[asset_name] = repre_doc
+        return repre_doc
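The context-level `__cache_asset_audio` dict means the database is hit once per asset even when many review instances share the same asset. The pattern in isolation, as a generic sketch:

```python
# Generic sketch of the per-context cache pattern used above.
def get_cached_doc(context_data, asset_name, query):
    cache = context_data.setdefault("__cache_asset_audio", {})
    if asset_name not in cache:
        cache[asset_name] = query(asset_name)  # DB query runs only once
    return cache[asset_name]
```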

View file

@@ -29,6 +29,7 @@ class CollectOtioFrameRanges(pyblish.api.InstancePlugin):
         # get basic variables
         otio_clip = instance.data["otioClip"]
         workfile_start = instance.data["workfileFrameStart"]
+        workfile_source_duration = instance.data.get("notRetimedFramerange")

         # get ranges
         otio_tl_range = otio_clip.range_in_parent()

@@ -54,6 +55,11 @@ class CollectOtioFrameRanges(pyblish.api.InstancePlugin):
         frame_end = frame_start + otio.opentime.to_frames(
             otio_tl_range.duration, otio_tl_range.duration.rate) - 1

+        # in case of retimed clip and frame range should not be retimed
+        if workfile_source_duration:
+            frame_end = frame_start + otio.opentime.to_frames(
+                otio_src_range.duration, otio_src_range.duration.rate) - 1
+
         data = {
             "frameStart": frame_start,
             "frameEnd": frame_end,

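For intuition, `otio.opentime.to_frames` converts a duration to a frame count at a given rate, and the trailing `- 1` keeps the end frame inclusive. A small illustration with made-up numbers:

```python
import opentimelineio as otio

# A 48-frame source duration at 24 fps.
duration = otio.opentime.RationalTime(48, 24)
frame_start = 1001
frame_end = frame_start + otio.opentime.to_frames(duration, duration.rate) - 1
print(frame_end)  # 1048 -> the inclusive range 1001..1048 is 48 frames
```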
View file

@@ -488,12 +488,6 @@ class ExtractBurnin(publish.Extractor):
             "frame_end_handle": frame_end_handle
         }

-        # use explicit username for webpublishes as rewriting
-        #   OPENPYPE_USERNAME might have side effects
-        webpublish_user_name = os.environ.get("WEBPUBLISH_OPENPYPE_USERNAME")
-        if webpublish_user_name:
-            burnin_data["username"] = webpublish_user_name
-
         self.log.debug(
             "Basic burnin_data: {}".format(json.dumps(burnin_data, indent=4))
         )

View file

@@ -135,7 +135,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
     #   the database even if not used by the destination template
     db_representation_context_keys = [
         "project", "asset", "task", "subset", "version", "representation",
-        "family", "hierarchy", "username", "output"
+        "family", "hierarchy", "username", "user", "output"
     ]
     skip_host_families = []

View file

@@ -46,7 +46,7 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
     ignored_representation_names = []
     db_representation_context_keys = [
         "project", "asset", "task", "subset", "representation",
-        "family", "hierarchy", "task", "username"
+        "family", "hierarchy", "task", "username", "user"
     ]
     # QUESTION/TODO this process should happen on server if crashed due to
     #   permissions error on files (files were used or user didn't have perms)

View file

@@ -127,7 +127,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
     exclude_families = ["render.farm"]
     db_representation_context_keys = [
         "project", "asset", "task", "subset", "version", "representation",
-        "family", "hierarchy", "task", "username"
+        "family", "hierarchy", "task", "username", "user"
    ]
     default_template_name = "publish"

View file

@@ -3,6 +3,10 @@
     "CollectAnatomyInstanceData": {
         "follow_workfile_version": false
     },
+    "CollectAudio": {
+        "enabled": false,
+        "audio_subset_name": "audioMain"
+    },
     "CollectSceneVersion": {
         "hosts": [
             "aftereffects",

View file

@@ -1,8 +1,4 @@
 {
-    "entities_root": {
-        "assets": "Assets",
-        "shots": "Shots"
-    },
     "entities_naming_pattern": {
         "episode": "E##",
         "sequence": "SQ##",

View file

@@ -5,23 +5,6 @@
     "collapsible": true,
     "is_file": true,
     "children": [
-        {
-            "type": "dict",
-            "key": "entities_root",
-            "label": "Entities root folder",
-            "children": [
-                {
-                    "type": "text",
-                    "key": "assets",
-                    "label": "Assets:"
-                },
-                {
-                    "type": "text",
-                    "key": "shots",
-                    "label": "Shots (includes Episodes & Sequences if any):"
-                }
-            ]
-        },
         {
             "type": "dict",
             "key": "entities_naming_pattern",

View file

@@ -18,6 +18,27 @@
                 }
             ]
         },
+        {
+            "type": "dict",
+            "collapsible": true,
+            "checkbox_key": "enabled",
+            "key": "CollectAudio",
+            "label": "Collect Audio",
+            "is_group": true,
+            "children": [
+                {
+                    "type": "boolean",
+                    "key": "enabled",
+                    "label": "Enabled"
+                },
+                {
+                    "key": "audio_subset_name",
+                    "label": "Name of audio variant",
+                    "type": "text",
+                    "placeholder": "audioMain"
+                }
+            ]
+        },
         {
             "type": "dict",
             "collapsible": true,

View file

@@ -281,18 +281,25 @@ class ActionModel(QtGui.QStandardItemModel):
         if not action_item:
             return

-        action = action_item.data(ACTION_ROLE)
-        actual_data = self._prepare_compare_data(action)
+        actions = action_item.data(ACTION_ROLE)
+        if not isinstance(actions, list):
+            actions = [actions]
+
+        action_actions_data = [
+            self._prepare_compare_data(action)
+            for action in actions
+        ]

         stored = self.launcher_registry.get_item("force_not_open_workfile")
-        if is_checked:
-            stored.append(actual_data)
-        else:
-            final_values = []
-            for config in stored:
-                if config != actual_data:
-                    final_values.append(config)
-            stored = final_values
+        for actual_data in action_actions_data:
+            if is_checked:
+                stored.append(actual_data)
+            else:
+                final_values = []
+                for config in stored:
+                    if config != actual_data:
+                        final_values.append(config)
+                stored = final_values

         self.launcher_registry.set_item("force_not_open_workfile", stored)
         self.launcher_registry._get_item.cache_clear()

@@ -329,21 +336,24 @@ class ActionModel(QtGui.QStandardItemModel):
             item (QStandardItem)
             stored (list) of dict
         """
-        action = item.data(ACTION_ROLE)
-        if not self.is_application_action(action):
+        actions = item.data(ACTION_ROLE)
+        if not isinstance(actions, list):
+            actions = [actions]
+
+        if not self.is_application_action(actions[0]):
             return False

-        actual_data = self._prepare_compare_data(action)
+        action_actions_data = [
+            self._prepare_compare_data(action)
+            for action in actions
+        ]
         for config in stored:
-            if config == actual_data:
+            if config in action_actions_data:
                 return True

         return False

     def _prepare_compare_data(self, action):
-        if isinstance(action, list) and action:
-            action = action[0]
-
         compare_data = {}
         if action and action.label:
             compare_data = {

View file

@@ -312,11 +312,12 @@ class ActionBar(QtWidgets.QWidget):

         is_group = index.data(GROUP_ROLE)
         is_variant_group = index.data(VARIANT_GROUP_ROLE)
+        force_not_open_workfile = index.data(FORCE_NOT_OPEN_WORKFILE_ROLE)
         if not is_group and not is_variant_group:
             action = index.data(ACTION_ROLE)
             # Change data of application action
             if issubclass(action, ApplicationAction):
-                if index.data(FORCE_NOT_OPEN_WORKFILE_ROLE):
+                if force_not_open_workfile:
                     action.data["start_last_workfile"] = False
                 else:
                     action.data.pop("start_last_workfile", None)

@@ -385,10 +386,18 @@ class ActionBar(QtWidgets.QWidget):
             menu.addMenu(sub_menu)

         result = menu.exec_(QtGui.QCursor.pos())
-        if result:
-            action = actions_mapping[result]
-            self._start_animation(index)
-            self.action_clicked.emit(action)
+        if not result:
+            return
+
+        action = actions_mapping[result]
+        if issubclass(action, ApplicationAction):
+            if force_not_open_workfile:
+                action.data["start_last_workfile"] = False
+            else:
+                action.data.pop("start_last_workfile", None)
+
+        self._start_animation(index)
+        self.action_clicked.emit(action)


 class ActionHistory(QtWidgets.QPushButton):

View file

@@ -1,5 +1,12 @@
 from Qt import QtWidgets, QtCore, QtGui

+from openpype import resources
+from openpype.style import load_stylesheet
+from openpype.widgets import PasswordDialog
+from openpype.lib import is_admin_password_required, Logger
+from openpype.pipeline import AvalonMongoDB
+from openpype.pipeline.project_folders import create_project_folders
+
 from . import (
     ProjectModel,
     ProjectProxyFilter,

@@ -13,17 +20,6 @@ from . import (
 )
 from .widgets import ConfirmProjectDeletion
 from .style import ResourceCache
-from openpype.style import load_stylesheet
-from openpype.lib import is_admin_password_required
-from openpype.widgets import PasswordDialog
-from openpype.pipeline import AvalonMongoDB
-from openpype import resources
-
-from openpype.api import (
-    get_project_basic_paths,
-    create_project_folders,
-    Logger
-)


 class ProjectManagerWindow(QtWidgets.QWidget):

@@ -259,12 +255,8 @@ class ProjectManagerWindow(QtWidgets.QWidget):
                        qm.Yes | qm.No)
         if ans == qm.Yes:
             try:
-                # Get paths based on presets
-                basic_paths = get_project_basic_paths(project_name)
-                if not basic_paths:
-                    pass
                 # Invoking OpenPype API to create the project folders
-                create_project_folders(basic_paths, project_name)
+                create_project_folders(project_name)
             except Exception as exc:
                 self.log.warning(
                     "Cannot create starting folders: {}".format(exc),

View file

@@ -34,7 +34,8 @@ from .lib import (
 class InventoryModel(TreeModel):
     """The model for the inventory"""

-    Columns = ["Name", "version", "count", "family", "loader", "objectName"]
+    Columns = ["Name", "version", "count", "family",
+               "group", "loader", "objectName"]

     OUTDATED_COLOR = QtGui.QColor(235, 30, 30)
     CHILD_OUTDATED_COLOR = QtGui.QColor(200, 160, 30)

@@ -157,8 +158,13 @@ class InventoryModel(TreeModel):
                 # Family icon
                 return item.get("familyIcon", None)

+            column_name = self.Columns[index.column()]
+
+            if column_name == "group" and item.get("group"):
+                return qtawesome.icon("fa.object-group",
+                                      color=get_default_entity_icon_color())
+
             if item.get("isGroupNode"):
-                column_name = self.Columns[index.column()]
                 if column_name == "active_site":
                     provider = item.get("active_site_provider")
                     return self._site_icons.get(provider)

@@ -423,6 +429,7 @@ class InventoryModel(TreeModel):
             group_node["familyIcon"] = family_icon
             group_node["count"] = len(group_items)
             group_node["isGroupNode"] = True
+            group_node["group"] = subset["data"].get("subsetGroup")

             if self.sync_enabled:
                 progress = get_progress_for_repre(

View file

@@ -89,7 +89,8 @@ class SceneInventoryWindow(QtWidgets.QDialog):
         view.setColumnWidth(1, 55)   # version
         view.setColumnWidth(2, 55)   # count
         view.setColumnWidth(3, 150)  # family
-        view.setColumnWidth(4, 100)  # namespace
+        view.setColumnWidth(4, 120)  # group
+        view.setColumnWidth(5, 150)  # loader

         # apply delegates
         version_delegate = VersionDelegate(legacy_io, self)

View file

@@ -9,11 +9,11 @@ import platform
 from Qt import QtCore, QtGui, QtWidgets

 import openpype.version
-from openpype.api import (
-    resources,
-    get_system_settings
+from openpype import resources, style
+from openpype.lib import (
+    get_openpype_execute_args,
+    Logger,
 )
-from openpype.lib import get_openpype_execute_args, Logger
 from openpype.lib.openpype_version import (
     op_version_control_available,
     get_expected_version,

@@ -25,8 +25,8 @@ from openpype.lib.openpype_version import (
     get_openpype_version,
 )
 from openpype.modules import TrayModulesManager
-from openpype import style
 from openpype.settings import (
+    get_system_settings,
     SystemSettings,
     ProjectSettings,
     DefaultsNotDefined

@@ -774,10 +774,24 @@ class PypeTrayStarter(QtCore.QObject):


 def main():
+    log = Logger.get_logger(__name__)
     app = QtWidgets.QApplication.instance()
     if not app:
         app = QtWidgets.QApplication([])

+    for attr_name in (
+        "AA_EnableHighDpiScaling",
+        "AA_UseHighDpiPixmaps"
+    ):
+        attr = getattr(QtCore.Qt, attr_name, None)
+        if attr is None:
+            log.debug((
+                "Missing QtCore.Qt attribute \"{}\"."
+                " UI quality may be affected."
+            ).format(attr_name))
+        else:
+            app.setAttribute(attr)
+
     starter = PypeTrayStarter(app)

     # TODO remove when pype.exe will have an icon

View file

@@ -10,10 +10,7 @@ from openpype.host import IWorkfileHost
 from openpype.client import get_asset_by_id
 from openpype.tools.utils import PlaceholderLineEdit
 from openpype.tools.utils.delegates import PrettyTimeDelegate
-from openpype.lib import (
-    emit_event,
-    create_workdir_extra_folders,
-)
+from openpype.lib import emit_event
 from openpype.pipeline import (
     registered_host,
     legacy_io,

@@ -23,7 +20,10 @@ from openpype.pipeline.context_tools import (
     compute_session_changes,
     change_current_context
 )
-from openpype.pipeline.workfile import get_workfile_template_key
+from openpype.pipeline.workfile import (
+    get_workfile_template_key,
+    create_workdir_extra_folders,
+)

 from .model import (
     WorkAreaFilesModel,

View file

@@ -1,3 +1,3 @@
 # -*- coding: utf-8 -*-
 """Package declaring Pype version."""
-__version__ = "3.14.2-nightly.2"
+__version__ = "3.14.2-nightly.4"