Compare commits

348 commits

Author SHA1 Message Date
Jakub Trllo
f9bbab9944
Merge pull request #1622 from ynput/bugfix/902-ay-3875_ayon-integrate-hero-for-review
Integrate Hero: Use FileTransaction in integrate plugin
2025-12-22 13:29:11 +01:00
Jakub Trllo
826d22b166
Merge branch 'develop' into bugfix/902-ay-3875_ayon-integrate-hero-for-review 2025-12-22 13:27:09 +01:00
Jakub Trllo
b6b2726795
Merge pull request #1621 from ynput/enhancement/YN-0290_provide_source_version_description
Library: provide source version description
2025-12-19 16:21:18 +01:00
Jakub Trllo
1612b0297d fix long lines 2025-12-19 11:35:26 +01:00
Petr Kalis
a802285a6c Removed unnecessary f 2025-12-19 11:25:30 +01:00
Petr Kalis
07edce9c9c Fix missing quote 2025-12-19 11:25:08 +01:00
Petr Kalis
0dc34c32d8
Merge branch 'develop' into enhancement/YN-0290_provide_source_version_description 2025-12-19 11:23:44 +01:00
Jakub Trllo
7485d99cf6 use FileTransaction in integrate hero 2025-12-19 11:23:37 +01:00
Petr Kalis
3d0cd51e65
Updates to description format
Co-authored-by: Jakub Trllo <43494761+iLLiCiTiT@users.noreply.github.com>
2025-12-19 11:22:39 +01:00
Petr Kalis
8f1eebfcbf
Refactor version parts concatenation
Co-authored-by: Jakub Trllo <43494761+iLLiCiTiT@users.noreply.github.com>
2025-12-19 11:22:20 +01:00
Petr Kalis
f46f1d2e8d
Refactor description concatenation
Co-authored-by: Jakub Trllo <43494761+iLLiCiTiT@users.noreply.github.com>
2025-12-19 11:21:54 +01:00
Jakub Trllo
92d4da9efa
Merge pull request #1620 from ynput/bugfix/thumbnail-safe-rescale-args
Extract thumbnail from source: Safe collection of rescale arguments
2025-12-18 17:13:33 +01:00
Petr Kalis
1be1a30b38 Always put src description on new line 2025-12-18 16:47:33 +01:00
Jakub Trllo
c55c6a2675 fix doubled line 2025-12-18 16:44:15 +01:00
Jakub Trllo
818a9f21f3 safe collection of rescale arguments 2025-12-18 16:39:52 +01:00
Petr Kalis
a83ebe3c8d Merge branch 'bugfix/thumbnail-args' into enhancement/YN-0290_provide_source_version_description 2025-12-18 16:25:09 +01:00
Petr Kalis
0b6c0f3de9 Added source version description
Links copied version to original with author information. Author is not passed on version as it might require admin privileges.
2025-12-18 16:24:06 +01:00
github-actions[bot]
4b4ccad085 chore(): update bug report / version 2025-12-18 13:13:01 +00:00
Ynbot
11f5c4ba8b [Automated] Update version in package.py for develop 2025-12-18 13:11:59 +00:00
Ynbot
ef93ab833a [Automated] Add generated package files from main 2025-12-18 13:11:14 +00:00
Jakub Trllo
9e067348bd
Merge pull request #1613 from ynput/enhancement/product-name-template-settings
Settings: Product name template profile filters
2025-12-18 12:23:45 +01:00
Jakub Trllo
69e4fb011a
Merge branch 'develop' into enhancement/product-name-template-settings 2025-12-18 12:19:41 +01:00
Jakub Trllo
46da65bf82
Merge pull request #1609 from BigRoy/chore/remove_asset_family_subset
Remove legacy usage of asset, family and subset
2025-12-18 12:04:50 +01:00
Roy Nieterau
15af2c051b
Merge branch 'develop' into enhancement/product-name-template-settings 2025-12-18 11:48:55 +01:00
Jakub Trllo
04958c3429
Merge branch 'develop' into chore/remove_asset_family_subset 2025-12-18 11:21:03 +01:00
Jakub Trllo
301b603775
Merge pull request #1616 from BigRoy/enhancement/integrate_inputlinks_no_workfile_to_debug
Integrate Input Links: Do not log warning if no workfile is present
2025-12-18 10:27:29 +01:00
Jakub Trllo
80f303c735
Merge branch 'develop' into enhancement/integrate_inputlinks_no_workfile_to_debug 2025-12-18 10:21:35 +01:00
Jakub Trllo
b22fbe3e77
Merge pull request #1614 from ynput/bugfix/thumbnail-args
Extract Thumbnail: Fix arguments
2025-12-18 10:14:01 +01:00
Roy Nieterau
b77b0583dd No workfile is allowed, and only looks scary in e.g. tray-publisher. It's not up to the Input integrator to have a strong opinion on it that it should be warning the user, so debug log level is better. 2025-12-17 23:58:30 +01:00
Jakub Trllo
bb35eccb57
Merge branch 'develop' into chore/remove_asset_family_subset 2025-12-17 15:59:18 +01:00
Jakub Trllo
4051d679dd
Merge branch 'develop' into enhancement/product-name-template-settings 2025-12-17 14:50:24 +01:00
Jakub Trllo
a9af964f4c added some typehints 2025-12-17 14:34:36 +01:00
Jakub Trllo
3fe508e773 pass thumbnail def to _create_colorspace_thumbnail 2025-12-17 14:32:11 +01:00
Jakub Trllo
9668623005
Merge pull request #1612 from ynput/enhancement/1470-yn-0067-publisher-crashed-plugins
Plugins discovery: Strict publish plugins discovery
2025-12-17 14:10:41 +01:00
Jakub Trllo
3e100408c3
Merge branch 'develop' into enhancement/1470-yn-0067-publisher-crashed-plugins 2025-12-17 14:09:09 +01:00
Jakub Trllo
e462dca889 always call '_update_footer_state' 2025-12-17 14:01:54 +01:00
Roy Nieterau
69e003c065
Merge branch 'develop' into chore/remove_asset_family_subset 2025-12-17 13:43:40 +01:00
Jakub Trllo
963e11e407
Merge branch 'develop' into enhancement/product-name-template-settings 2025-12-17 13:40:32 +01:00
Roy Nieterau
1daba76e3a
Merge pull request #1597 from BigRoy/enhancement/1524-yn-0156-usd-contribution-workflow-layer-strength-configured-hierarchically 2025-12-17 13:34:42 +01:00
Jakub Trllo
5982ad7944 call set_blocked only on reset 2025-12-17 12:32:19 +01:00
Jakub Trllo
de7b49e68f simplify update state 2025-12-17 12:31:49 +01:00
Roy Nieterau
2baffc253c Set min_items=1 for scope attribute in CollectUSDLayerContributions 2025-12-17 12:13:21 +01:00
Roy Nieterau
3e3cd49bea Add a warning if plug-in defaults are used 2025-12-17 11:59:48 +01:00
Jakub Trllo
108286aa34 fix refresh issue 2025-12-17 11:59:41 +01:00
Roy Nieterau
8047c70af2 Merge branch 'enhancement/1524-yn-0156-usd-contribution-workflow-layer-strength-configured-hierarchically' of https://github.com/BigRoy/ayon-core into enhancement/1524-yn-0156-usd-contribution-workflow-layer-strength-configured-hierarchically 2025-12-17 11:58:21 +01:00
Roy Nieterau
cd1c2cdb0f Set the default value for new entries to be scoped to asset, task so that copying from older releases automatically sets it to both. This way, also newly added entries will have both by default which is better than none. 2025-12-17 11:57:29 +01:00
Jakub Trllo
6a3f28cfb8 change defaults 2025-12-17 11:19:15 +01:00
Jakub Trllo
ad36a449fd fix filter criteria for backwards compatibility 2025-12-17 11:17:10 +01:00
Jakub Trllo
67d9ec366c added product base types to product name template settings 2025-12-17 11:16:48 +01:00
Jakub Trllo
d1db95d8cb fix conversion function name 2025-12-17 11:16:03 +01:00
Jakub Trllo
e1dc93cb44 unset icon if is not blocking anymore 2025-12-17 10:55:27 +01:00
Jakub Trllo
73cc4c53b4 use correct settings 2025-12-17 10:53:43 +01:00
Jakub Trllo
f4bd5d49f9 move settings to tools 2025-12-17 10:38:48 +01:00
Mustafa Zaky Jafar
78df19df44
Merge branch 'develop' into enhancement/1524-yn-0156-usd-contribution-workflow-layer-strength-configured-hierarchically 2025-12-17 11:35:36 +02:00
Jakub Trllo
5404153b94
Add new line character.
Co-authored-by: Roy Nieterau <roy_nieterau@hotmail.com>
2025-12-17 09:51:05 +01:00
Jakub Trllo
e8635725fa ruff fixes 2025-12-16 18:55:35 +01:00
Jakub Trllo
f2e014b3f8
Merge branch 'develop' into enhancement/1470-yn-0067-publisher-crashed-plugins 2025-12-16 18:44:36 +01:00
Jakub Trllo
ae7726bdef change tabs and mark blocking filepath 2025-12-16 18:31:35 +01:00
Jakub Trllo
856a58dc35 block publisher on blocking failed plugins 2025-12-16 18:03:12 +01:00
Jakub Trllo
c53a2f68e5 fix settings load 2025-12-16 18:01:10 +01:00
Jakub Trllo
096a5a809e fix imports 2025-12-16 16:28:30 +01:00
Jakub Trllo
09364a4f7e use the function in main cli publish 2025-12-16 16:20:40 +01:00
Jakub Trllo
3f72115a5e added option to filter crashed files 2025-12-16 16:12:35 +01:00
Jakub Trllo
7313025572 add return type hint 2025-12-16 12:30:53 +01:00
Jakub Trllo
b056d974f2
Merge branch 'develop' into chore/remove_asset_family_subset 2025-12-16 12:28:05 +01:00
Jakub Trllo
f7a2aa2792
Merge pull request #1610 from ynput/enhancement/remove-deprecated-cli
CLI: Remove deprecated 'extractenvironments' command
2025-12-16 12:16:18 +01:00
Jakub Trllo
d80fc97604 remove unused import 2025-12-16 12:09:38 +01:00
Jakub Trllo
0b14100976 remove line 2025-12-16 12:08:44 +01:00
Jakub Trllo
e2c9cacdd3 remove deprecated 'extractenvironments' 2025-12-16 12:02:57 +01:00
Jakub Trllo
18a4461e83 remove todo 2025-12-16 11:55:16 +01:00
Jakub Trllo
46791bc671
Merge branch 'develop' into chore/remove_asset_family_subset 2025-12-16 11:54:28 +01:00
Jakub Trllo
e32b54f911 fix 'get_versioning_start' 2025-12-16 11:53:34 +01:00
Jakub Trllo
a90eb2d54a fix burnins 2025-12-16 11:41:14 +01:00
Roy Nieterau
ea59a764cd Refactor/remove legacy usage of asset, family and subset 2025-12-16 11:19:50 +01:00
Mustafa Zaky Jafar
448d32fa42
Merge branch 'develop' into enhancement/1524-yn-0156-usd-contribution-workflow-layer-strength-configured-hierarchically 2025-12-16 11:47:26 +02:00
Petr Kalis
0f13d7a8e1
Merge pull request #1578 from ynput/bugfix/YN-0273_big_resolution_thumbnail_ftrack
Extract thumbnail from source: Big thumbnail resolution
2025-12-16 10:46:15 +01:00
Petr Kalis
46a8db48e7
Merge branch 'develop' into bugfix/YN-0273_big_resolution_thumbnail_ftrack 2025-12-16 10:45:43 +01:00
Jakub Trllo
a88e3bab77
Merge pull request #1608 from ynput/enhancement/console-override-full-std
Console Interpreter: Override full stdout and stderr
2025-12-16 10:31:42 +01:00
Jakub Trllo
cc712739ba
Merge branch 'develop' into enhancement/console-override-full-std 2025-12-16 10:30:32 +01:00
Jakub Trllo
e03c39dce1
Merge pull request #1607 from ynput/enhancement/console-allow-name-change
Console interpreter: Allow to change registry name
2025-12-16 10:30:19 +01:00
Jakub Trllo
1614737053
Merge branch 'develop' into enhancement/console-allow-name-change 2025-12-16 10:29:43 +01:00
Roy Nieterau
74971bd3dc Cosmetics 2025-12-15 22:14:14 +01:00
Roy Nieterau
69de145bb7 Merge branch 'develop' of https://github.com/ynput/ayon-core into enhancement/1524-yn-0156-usd-contribution-workflow-layer-strength-configured-hierarchically
# Conflicts:
#	client/ayon_core/plugins/publish/extract_usd_layer_contributions.py
2025-12-15 22:13:14 +01:00
Roy Nieterau
19f84805bd Fix scope defaults, fix order to int and enforce name to not be empty 2025-12-15 22:11:12 +01:00
Roy Nieterau
92d01a2ceb
Merge pull request #1596 from BigRoy/1595-yn-0304-usd-contributions-customizable-order-strength-per-instance 2025-12-15 21:13:47 +01:00
Roy Nieterau
34004ac538
Merge branch 'develop' into 1595-yn-0304-usd-contributions-customizable-order-strength-per-instance 2025-12-15 20:29:12 +01:00
Mustafa Zaky Jafar
362995d5f7
Merge branch 'develop' into enhancement/1524-yn-0156-usd-contribution-workflow-layer-strength-configured-hierarchically 2025-12-15 20:01:31 +02:00
Jakub Trllo
f9de1d13ba
Merge branch 'develop' into enhancement/console-allow-name-change 2025-12-15 14:57:59 +01:00
Jakub Trllo
c86631fcf3
Merge pull request #1589 from vincentullmann/enhancement/improve_oiio_info_performance
add verbose-flag to get_oiio_info_for_input
2025-12-15 14:56:18 +01:00
Jakub Trllo
0cfc959875 swap order of kwargs 2025-12-15 14:25:51 +01:00
Jakub Trllo
a2387d1856 require kwargs 2025-12-15 14:09:27 +01:00
Jakub Trllo
5ca04b0d6e Merge branch 'develop' into enhancement/improve_oiio_info_performance
# Conflicts:
#	client/ayon_core/lib/transcoding.py
2025-12-15 14:07:39 +01:00
Jakub Trllo
bd2e26ea50 use 'verbose=False' at other places 2025-12-15 14:03:27 +01:00
Jakub Trllo
dbdc4c590b remove unused import 2025-12-15 13:38:17 +01:00
Jakub Trllo
3c0dd4335e override full stdout and stderr 2025-12-15 12:24:22 +01:00
Petr Kalis
9cb97029bf
Merge branch 'develop' into bugfix/YN-0273_big_resolution_thumbnail_ftrack 2025-12-15 12:11:23 +01:00
Petr Kalis
e3b94654f8
Updated docstrings
Co-authored-by: Jakub Trllo <43494761+iLLiCiTiT@users.noreply.github.com>
2025-12-15 12:11:07 +01:00
Petr Kalis
e3fa6e446e
Updated docstring
Co-authored-by: Jakub Trllo <43494761+iLLiCiTiT@users.noreply.github.com>
2025-12-15 12:10:50 +01:00
Jakub Trllo
6b58d4fba7
Merge pull request #1582 from ynput/enhancement/AY-6586_Thumbnail_presets
Enhancement: thumbnail presets
2025-12-15 12:03:30 +01:00
Jakub Trllo
6558af5ff1
Merge branch 'develop' into enhancement/AY-6586_Thumbnail_presets 2025-12-15 12:02:15 +01:00
Jakub Trllo
3dacfec4ec allow to change registry name in controller 2025-12-15 10:24:49 +01:00
Jakub Trllo
cb06323e96
Merge pull request #1601 from BigRoy/1600-collect-versions-loaded-in-scene-fails-in-unreal-due-to-lack-of-name-key-in-containers
Collect Loaded Scene Versions: skip invalid containers
2025-12-15 10:17:58 +01:00
Jakub Trllo
dbdda81f94
Merge branch 'develop' into 1600-collect-versions-loaded-in-scene-fails-in-unreal-due-to-lack-of-name-key-in-containers 2025-12-15 10:00:20 +01:00
Ondřej Samohel
527d1d6c84
Merge pull request #1605 from ynput/bugfix/change_pytest-ayon_dependency
Chore: change pytest-ayon dependency
2025-12-12 23:28:16 +01:00
Ondrej Samohel
b39e09142f
♻️ change pytest-ayon dependency 2025-12-12 23:21:38 +01:00
Roy Nieterau
e19ca9e1d1
Merge branch 'develop' into 1595-yn-0304-usd-contributions-customizable-order-strength-per-instance 2025-12-12 22:54:17 +01:00
Roy Nieterau
be2dd92a7e Merge branch 'enhancement/1524-yn-0156-usd-contribution-workflow-layer-strength-configured-hierarchically' of https://github.com/BigRoy/ayon-core into enhancement/1524-yn-0156-usd-contribution-workflow-layer-strength-configured-hierarchically 2025-12-12 22:53:25 +01:00
Roy Nieterau
e2251ed76c
Merge branch 'develop' into enhancement/1524-yn-0156-usd-contribution-workflow-layer-strength-configured-hierarchically 2025-12-12 22:53:16 +01:00
Roy Nieterau
c93eb31b54 Fix typo 2025-12-12 22:53:05 +01:00
Roy Nieterau
2871ecac7d Fix setting the correct order 2025-12-12 22:52:52 +01:00
Roy Nieterau
7e1720d740
Merge branch 'develop' into 1600-collect-versions-loaded-in-scene-fails-in-unreal-due-to-lack-of-name-key-in-containers 2025-12-12 22:41:35 +01:00
Roy Nieterau
6f534f4ff0
Update client/ayon_core/plugins/publish/collect_scene_loaded_versions.py
Co-authored-by: Jakub Trllo <43494761+iLLiCiTiT@users.noreply.github.com>
2025-12-12 22:41:28 +01:00
Petr Kalis
9da077b52f
Merge branch 'develop' into enhancement/AY-6586_Thumbnail_presets 2025-12-12 16:39:39 +01:00
Petr Kalis
52e4932c97 Used renamed class name as variable 2025-12-12 16:39:22 +01:00
Jakub Trllo
7ca1a67d82
Merge pull request #1576 from ynput/enhancement/skip-base-classes
Chore: Skip base classes in plugin discovery
2025-12-12 16:20:27 +01:00
Jakub Trllo
65791a1d9f added 'skip_discovery' to loader plugin 2025-12-12 15:03:31 +01:00
Jakub Trllo
4faf61dd22 add logic description 2025-12-12 15:01:04 +01:00
Jakub Trllo
d0034b6007 use 'skip_discovery' instead 2025-12-12 15:00:51 +01:00
Jakub Trllo
2e1c9a3afb
Merge branch 'develop' into 1600-collect-versions-loaded-in-scene-fails-in-unreal-due-to-lack-of-name-key-in-containers 2025-12-12 14:13:46 +01:00
Jakub Trllo
70328e53c6
Merge branch 'develop' into enhancement/skip-base-classes 2025-12-12 12:58:08 +01:00
Jakub Trllo
7fa5b39ef6
Merge pull request #1217 from BigRoy/enhancement/transcoding_oiio_tool_for_ffmpeg_one_call
Transcoding: Use single `oiiotool` call for sequences, instead of frames one by one
2025-12-12 12:57:02 +01:00
Jakub Trllo
deb93bc95b
Merge branch 'develop' into enhancement/transcoding_oiio_tool_for_ffmpeg_one_call 2025-12-12 12:56:30 +01:00
Jakub Trllo
237cee6593
Merge pull request #1604 from ynput/enhancement/1603-yn-0309-csv-resolution-height-and-width-on-details-panel-at-frontend
Integrate: Add resolution and pixel aspect to version attributes
2025-12-12 12:55:39 +01:00
Jakub Trllo
15b0192d4e
move step next to frames
Co-authored-by: Roy Nieterau <roy_nieterau@hotmail.com>
2025-12-12 12:54:17 +01:00
Jakub Trllo
ea2642ab15 added resolution and pixel aspect to version 2025-12-12 12:21:06 +01:00
Petr Kalis
8fe830f5de Merge branch 'enhancement/AY-6586_Thumbnail_presets' of https://github.com/ynput/ayon-core into enhancement/AY-6586_Thumbnail_presets 2025-12-12 11:03:01 +01:00
Petr Kalis
7329725979 Used pop
IDK why
2025-12-12 11:02:01 +01:00
Petr Kalis
7d248880cc Changed variable resolution 2025-12-12 11:01:12 +01:00
Petr Kalis
f03ae1bc15 Formatting change 2025-12-12 10:59:02 +01:00
Petr Kalis
41fa48dbe7 Formatting change 2025-12-12 10:57:19 +01:00
Petr Kalis
3dbba063ca Renamed variables 2025-12-12 10:57:02 +01:00
Petr Kalis
4d8d9078b8 Returned None
IDK why, comment, must be really important though.
2025-12-12 10:56:39 +01:00
Roy Nieterau
55eb4cccbe Add soft compatibility requirement to ayon_third_party >=1.3.0 2025-12-12 10:44:46 +01:00
Roy Nieterau
2a7316b262 Type hints to the arguments 2025-12-12 10:40:10 +01:00
Roy Nieterau
fef45cebb3 Merge branch 'enhancement/transcoding_oiio_tool_for_ffmpeg_one_call' of https://github.com/BigRoy/ayon-core into enhancement/transcoding_oiio_tool_for_ffmpeg_one_call 2025-12-12 10:33:06 +01:00
Roy Nieterau
f9ca97ec71 Perform actual type hint 2025-12-12 10:32:54 +01:00
Roy Nieterau
af901213a2
Update client/ayon_core/lib/transcoding.py
Co-authored-by: Jakub Trllo <43494761+iLLiCiTiT@users.noreply.github.com>
2025-12-12 10:31:54 +01:00
Roy Nieterau
11ecc69b35 Refactor _input -> input_item 2025-12-12 10:31:25 +01:00
Roy Nieterau
775b0724bf Merge branch 'enhancement/transcoding_oiio_tool_for_ffmpeg_one_call' of https://github.com/BigRoy/ayon-core into enhancement/transcoding_oiio_tool_for_ffmpeg_one_call 2025-12-12 10:22:04 +01:00
Roy Nieterau
82427cb004 Fix merge conflicts 2025-12-12 10:21:45 +01:00
Roy Nieterau
197b74d1af Merge branch 'develop' of https://github.com/ynput/ayon-core into enhancement/transcoding_oiio_tool_for_ffmpeg_one_call
# Conflicts:
#	client/ayon_core/plugins/publish/extract_color_transcode.py
2025-12-12 10:20:05 +01:00
Petr Kalis
ec5766f656
Merge branch 'develop' into bugfix/YN-0273_big_resolution_thumbnail_ftrack 2025-12-11 18:32:39 +01:00
Petr Kalis
8865e7a2b4 Reverting change of order
Deemed unnecessary (by Kuba)
2025-12-11 18:32:11 +01:00
Petr Kalis
6ec302d01b Merge branch 'bugfix/YN-0273_big_resolution_thumbnail_ftrack' of https://github.com/ynput/ayon-core into bugfix/YN-0273_big_resolution_thumbnail_ftrack 2025-12-11 18:26:00 +01:00
Petr Kalis
e6eaf87272 Updated titles 2025-12-11 18:25:16 +01:00
Petr Kalis
738d9cf8d8 Updated docstring 2025-12-11 18:24:09 +01:00
Petr Kalis
061e9c5015 Renamed variable 2025-12-11 18:23:17 +01:00
Petr Kalis
c1f36199c2 Renamed method 2025-12-11 18:20:07 +01:00
Petr Kalis
9f6840a18d
Formatting change
Co-authored-by: Jakub Trllo <43494761+iLLiCiTiT@users.noreply.github.com>
2025-12-11 18:12:17 +01:00
Petr Kalis
bbff056268 Added psd to ExtractReview
ffmpeg and oiiotool seem to handle it fine.
2025-12-11 18:02:04 +01:00
Roy Nieterau
9ae72a1b21
Merge pull request #1599 from BigRoy/1598-publisher-create-enforces-current-task-if-no-task-is-selected-inside-dcc 2025-12-11 17:07:07 +01:00
Roy Nieterau
048cbddb43
Merge branch 'develop' into enhancement/1524-yn-0156-usd-contribution-workflow-layer-strength-configured-hierarchically 2025-12-11 15:56:21 +01:00
Roy Nieterau
b6709f9859 Also remove fallback for current folder in _get_folder_path 2025-12-11 15:36:45 +01:00
Roy Nieterau
2aaca57672
Merge branch 'develop' into 1598-publisher-create-enforces-current-task-if-no-task-is-selected-inside-dcc 2025-12-11 15:33:47 +01:00
Jakub Trllo
3c22320c43
Merge pull request #1462 from ynput/enhancement/initial-support-for-folder-in-product-name
Chore: Initial support for folder template data in product name
2025-12-11 13:52:19 +01:00
Petr Kalis
505021344b Fix logic for context thumbnail creation 2025-12-11 12:24:40 +01:00
Petr Kalis
55c74196ab Formatting changes 2025-12-11 12:20:24 +01:00
Petr Kalis
7a5d6ae77e Fix imports 2025-12-11 12:19:36 +01:00
Petr Kalis
ec510ab149
Formatting change
Co-authored-by: Jakub Trllo <43494761+iLLiCiTiT@users.noreply.github.com>
2025-12-11 12:16:45 +01:00
Roy Nieterau
5e674844b5 Cosmetics 2025-12-10 22:19:12 +01:00
Roy Nieterau
e3206796a7 Fix #1600: Filter out containers that lack required keys (looking at you ayon-unreal!) 2025-12-10 22:16:03 +01:00
Roy Nieterau
aff0ecf436 Fix #1598: Do not fallback to current task name 2025-12-10 20:29:10 +01:00
Petr Kalis
c52a7e367b Simplified ExtractThumbnailFrom source
Removed profiles
Changed defaults for smaller resolution
2025-12-10 18:35:22 +01:00
Petr Kalis
82dd0d0a76
Merge branch 'develop' into enhancement/AY-6586_Thumbnail_presets 2025-12-10 17:33:00 +01:00
Roy Nieterau
4eece5e6e9 Allow to define department layers scoped only to a particular department layer type, e.g. "shot" versus "asset". This way, you can scope same layer names for both shot and asset at different orders if they have differing target scopes 2025-12-10 16:55:44 +01:00
Roy Nieterau
9cdecbdee0
Merge branch 'develop' into 1595-yn-0304-usd-contributions-customizable-order-strength-per-instance 2025-12-10 16:35:49 +01:00
Roy Nieterau
ced9eadd3d Use the instance attribute Strength order as the in-layer order completely so that it's the exact value, not an offset to the department layer order 2025-12-10 16:31:05 +01:00
Jakub Trllo
17b09d608b unify indentation 2025-12-10 16:03:30 +01:00
Jakub Trllo
bceb645a80
fix typo
Co-authored-by: Roy Nieterau <roy_nieterau@hotmail.com>
2025-12-10 15:47:14 +01:00
Jakub Trllo
d4e5f96b3b upodate overload function 2025-12-10 15:46:48 +01:00
Jakub Trllo
5462c9516a
Merge branch 'develop' into enhancement/initial-support-for-folder-in-product-name 2025-12-10 13:45:46 +01:00
Roy Nieterau
97a8b13a4e Allow specifying a strength ordering offset for each contribution to a single department layer 2025-12-10 12:56:42 +01:00
Roy Nieterau
31e6b5a139
Merge pull request #1594 from BigRoy/1593-yn-0303-thumbnail-and-review-colorspace-is-off 2025-12-10 10:47:20 +01:00
Roy Nieterau
9e34f628e6
Merge branch 'develop' into 1593-yn-0303-thumbnail-and-review-colorspace-is-off 2025-12-10 00:40:50 +01:00
Roy Nieterau
b1be956994 Also invert if target_colorspace, which means - always invert source display/view if we have any target colorspace or a display/view that differs from the source display/view 2025-12-10 00:38:49 +01:00
Roy Nieterau
f0e603fe7c Do not invert source display/view if it already matches target display/view 2025-12-10 00:30:16 +01:00
Vincent Ullmann
3d321b4896 add verbose-flag to get_oiio_info_for_input and changed oiio_color_convert to use verbose=False 2025-12-09 15:21:23 +00:00
Jakub Trllo
8103135efd
Merge pull request #1587 from ynput/bugfix/1586-yn-0299-launcher-my-tasks-view-doesnt-refresh-on-new-assignments
Tools: Update my tasks filters on refresh
2025-12-09 15:04:42 +01:00
Jakub Trllo
8076615a5f use same approach in launcher as in other tools 2025-12-09 15:02:09 +01:00
Jakub Trllo
721c1fdd8d
Merge branch 'develop' into bugfix/1586-yn-0299-launcher-my-tasks-view-doesnt-refresh-on-new-assignments 2025-12-09 14:32:40 +01:00
Jakub Trllo
9ade73fb27 refresh my tasks in all tools 2025-12-09 14:28:35 +01:00
Jakub Trllo
f3a2cad425 refresh my tasks filters on refresh 2025-12-09 14:20:28 +01:00
Jakub Trllo
de3971ed56
Merge pull request #1585 from ynput/ayon_review_integrate_timeout_retries_and_fallback_with_help
Upload files: Added simple retries to upload of review and thumbnail
2025-12-09 14:17:58 +01:00
Jakub Trllo
ab78158d6e
Fixed typo and use debug level
Co-authored-by: Roy Nieterau <roy_nieterau@hotmail.com>
2025-12-09 14:17:00 +01:00
Jakub Trllo
5c17102d16 remove outdated docstring info 2025-12-09 12:38:51 +01:00
Jakub Trllo
dde471332f added more logs 2025-12-09 12:33:19 +01:00
Jakub Trllo
faff50ce33 don't use custom retries if are already handled by ayon api 2025-12-09 12:03:19 +01:00
Petr Kalis
94dc9d0484 Merge remote-tracking branch 'origin/enhancement/AY-6586_Thumbnail_presets' into enhancement/AY-6586_Thumbnail_presets 2025-12-09 10:34:42 +01:00
Petr Kalis
44251c93c7 Ruff 2025-12-09 10:33:12 +01:00
Petr Kalis
8624dcce60 Merge branch 'develop' of https://github.com/ynput/ayon-core into enhancement/AY-6586_Thumbnail_presets 2025-12-09 10:32:12 +01:00
Petr Kalis
a4ae90c16a
Merge branch 'develop' into enhancement/AY-6586_Thumbnail_presets 2025-12-09 10:30:27 +01:00
Jakub Trllo
647d91e288 update docstring 2025-12-08 17:26:02 +01:00
Jakub Trllo
e0597ac6de remove unnecessary imports 2025-12-08 17:20:02 +01:00
Jakub Trllo
9b35dd6cfc remove unused variable 2025-12-08 17:16:16 +01:00
Jakub Trllo
989c54001c added retries in thumbnail integration 2025-12-08 16:53:30 +01:00
Jakub Trllo
699673bbf2 slightly modified upload 2025-12-08 16:40:13 +01:00
Jakub Trllo
f0bd2b7e98 use different help file for integrate review 2025-12-08 16:39:44 +01:00
Jakub Trllo
fb2df33970 added option to define different help file 2025-12-08 16:39:13 +01:00
Petr Kalis
165f9c7e70
Merge branch 'develop' into bugfix/YN-0273_big_resolution_thumbnail_ftrack 2025-12-08 15:08:19 +01:00
Jakub Trllo
bd81f40156 Merge branch 'develop' into ayon_review_integrate_timeout_retries_and_fallback_with_help 2025-12-08 14:15:11 +01:00
Petr Kalis
00102dae85 Renamed ProfileConfig 2025-12-08 11:09:12 +01:00
Petr Kalis
ad0cbad663 Changed data types of rgb 2025-12-08 11:07:50 +01:00
Petr Kalis
cdac62aae7 Renamed hosts to host_names for ExtractThumbnailFromSource 2025-12-08 11:07:05 +01:00
Petr Kalis
14bead732c Removed unnecessary filtering
Already done in profile filter
2025-12-08 10:16:54 +01:00
Petr Kalis
f1288eb096 Renamed ProfileConfig to ThumbnailDef 2025-12-08 10:15:46 +01:00
Petr Kalis
d859ea2fc3 Explicit key values updates 2025-12-08 10:13:57 +01:00
Petr Kalis
6cfb22a4b5 Formatting change 2025-12-08 10:13:00 +01:00
Petr Kalis
a4559fe79e Changed datatype of rgb 2025-12-08 10:11:36 +01:00
Petr Kalis
89129dfeb4 Renamed hosts to host_names for ExtractThumbnail 2025-12-08 10:10:51 +01:00
Roy Nieterau
abc08e63c1
Merge pull request #1584 from ynput/bugfix/fix-kwarg-for-product-name 2025-12-06 22:08:51 +01:00
Jakub Trllo
cf28f96eda fix formatting in docstring 2025-12-05 17:49:11 +01:00
Jakub Trllo
b1db949ecc Merge branch 'develop' into enhancement/initial-support-for-folder-in-product-name
# Conflicts:
#	client/ayon_core/pipeline/create/product_name.py
2025-12-05 17:45:11 +01:00
Jakub Trllo
f665528ee7 fix 'product_name' to 'name' 2025-12-05 15:36:01 +01:00
Jakub Trllo
074c43ff68 added is_base_class to create base classes 2025-12-05 14:05:56 +01:00
Jakub Trllo
a657022919
Merge pull request #1583 from BigRoy/bugfix/fix_import_integrator
Integrator: Fix import
2025-12-05 12:25:39 +01:00
Roy Nieterau
f7f0005511 Fix import 2025-12-05 12:23:30 +01:00
Petr Kalis
32bc4248fc Typing 2025-12-05 12:11:08 +01:00
Petr Kalis
fa6e8b4478 Added missed argument 2025-12-05 12:05:54 +01:00
Petr Kalis
a59b264496 Updated Settings controlled variables 2025-12-05 11:59:50 +01:00
Petr Kalis
56df03848f Updated logging 2025-12-05 11:59:33 +01:00
Petr Kalis
c7672fd511 Fix querying of overrides 2025-12-04 18:53:30 +01:00
Jakub Trllo
523ac20121
Merge pull request #1306 from ynput/enhancement/1297-product-base-types-creation-and-creator-plugins
🏛️Product base types: Support in Creator logic
2025-12-04 18:09:28 +01:00
Jakub Trllo
24ff7f02d6
Fix wrongly resolved line 2025-12-04 18:05:42 +01:00
Jakub Trllo
aec589d9dd
Merge branch 'develop' into enhancement/1297-product-base-types-creation-and-creator-plugins 2025-12-04 18:03:57 +01:00
Jakub Trllo
c7e9789582
Merge pull request #1315 from ynput/enhancement/1296-product-base-types-support-in-integrator
🏛️Product base types: Support in the integrator
2025-12-04 18:03:08 +01:00
Jakub Trllo
49b736fb68
Merge branch 'develop' into enhancement/1296-product-base-types-support-in-integrator 2025-12-04 18:02:34 +01:00
Petr Kalis
c0ed22c4d7 Added profiles for ExtractThumbnail 2025-12-04 17:27:54 +01:00
Petr Kalis
7f40b6c6a2 Added conversion to profiles to ExtractThumbnail 2025-12-04 17:25:20 +01:00
Petr Kalis
2885ed1805 Added profiles to ExtractThumbnail 2025-12-04 17:24:37 +01:00
Petr Kalis
dabeb0d552 Ruff 2025-12-04 14:04:51 +01:00
Petr Kalis
a187a7fc56 Ruff 2025-12-04 13:32:14 +01:00
Petr Kalis
a426baf1a1 Typing 2025-12-04 13:30:46 +01:00
Petr Kalis
d9344239dd Fixes for resizing to actually work
Resizing argument must be before output arguments
2025-12-04 13:11:36 +01:00
Petr Kalis
bfca3175d6 Typing 2025-12-04 13:09:40 +01:00
Petr Kalis
2928c62d2b Added flag for integrate representation 2025-12-04 13:09:20 +01:00
Petr Kalis
daa9effd04 Reorganized position of context thumbnail 2025-12-04 13:08:57 +01:00
Petr Kalis
0ab00dbb4e Check for existing profile 2025-12-04 13:08:36 +01:00
Petr Kalis
ad83f76318 Introduced dataclass for config of selected profile
It makes it nicer than using dictionary
2025-12-04 13:08:11 +01:00
Petr Kalis
08c03e980b Moved order to trigger later
No need to trigger so early, but must be triggered before regular one to limit double creation of thumbnail.
2025-12-04 13:03:18 +01:00
Petr Kalis
3edb0148cd Added setting to match more from source to regular extract thumbnail 2025-12-04 13:02:23 +01:00
Petr Kalis
b39dd35af9 Removed webpublisher from regular extract_thumbnail
Must use regular one as customer uses `review` as product type. Adding `review` to generic plugin might have unforeseen consequences.
2025-12-04 13:01:49 +01:00
Jakub Trllo
74dc83d14a skip base classes 2025-12-03 17:06:19 +01:00
Jakub Trllo
a2a5f54857
Merge pull request #1570 from ynput/enhancement/1562-version-locking-does-not-work-nuke-maya-tested
Scene manager: Ignore locked containers when updating versions
2025-12-03 11:12:58 +01:00
Jakub Trllo
b25e3e27ad
Merge branch 'develop' into enhancement/1562-version-locking-does-not-work-nuke-maya-tested 2025-12-03 11:00:10 +01:00
github-actions[bot]
0ce6e70547 chore(): update bug report / version 2025-12-02 16:41:29 +00:00
Ynbot
83c4350277 [Automated] Update version in package.py for develop 2025-12-02 16:40:29 +00:00
Ynbot
7592bdcfcb [Automated] Add generated package files from main 2025-12-02 16:39:48 +00:00
Jakub Trllo
1943b897da
Merge pull request #1573 from ynput/bugfix/YN-0273_no_thumbnail_on_ftrack
Added webpublisher to extract thumbnail
2025-12-02 17:37:49 +01:00
Ondrej Samohel
acd1fcb0cf
Merge remote-tracking branch 'origin/enhancement/1296-product-base-types-support-in-integrator' into enhancement/1296-product-base-types-support-in-integrator 2025-12-02 16:01:18 +01:00
Ondrej Samohel
a9c7785700
🐶 fix linter 2025-12-02 15:59:14 +01:00
Ondřej Samohel
e413d88234
Merge branch 'develop' into enhancement/1296-product-base-types-support-in-integrator 2025-12-02 15:57:39 +01:00
Ondrej Samohel
206bcfe717
📝 add warning 2025-12-02 15:47:58 +01:00
Ondrej Samohel
1e66017861
♻️ use product base type if defined
when product base types are not supported by api, product base type should be the source of truth.
2025-12-02 15:47:39 +01:00
Ondrej Samohel
2efda3d3fe
🐛 fix import and function call/check 2025-12-02 15:41:58 +01:00
Petr Kalis
6ade0bb665 Added webpublisher to extract thumbnail for ftrack 2025-12-02 14:44:29 +01:00
Jakub Trllo
4e65d2a524
Merge pull request #1569 from ynput/enhancement/pass-host-name-to-getter-method
Chore: Pass host name to 'get_loader_action_plugin_paths'
2025-12-02 14:21:19 +01:00
Jakub Trllo
fc19076839
Merge branch 'develop' into enhancement/pass-host-name-to-getter-method 2025-12-02 14:20:45 +01:00
Jakub Trllo
2276f06733
Merge pull request #1571 from BigRoy/bugfix/yn-0264-extract-review-oiio-conversion
Extract Review: Fix layer name passed to FFMPEG after OIIO transcode
2025-12-02 14:12:43 +01:00
Roy Nieterau
c45fd481b3
Merge branch 'develop' into bugfix/yn-0264-extract-review-oiio-conversion 2025-12-02 11:01:37 +01:00
Roy Nieterau
b0c5b171c9 After OIIO transcoding force the layer name to be "" 2025-12-01 19:04:23 +01:00
Ondřej Samohel
43b557d95e
♻️ check for compatibility 2025-12-01 18:17:08 +01:00
Ondřej Samohel
85668a1b74
Merge remote-tracking branch 'origin/develop' into enhancement/1296-product-base-types-support-in-integrator 2025-12-01 16:39:35 +01:00
Jakub Trllo
055bf3fc17 ignore locked containers when updating versions 2025-12-01 16:31:36 +01:00
Jakub Trllo
cd499f4951 change IPluginPaths interface method signature 2025-12-01 15:18:16 +01:00
Jakub Trllo
215d077f31 pass host_name to 'get_loader_action_plugin_paths' 2025-12-01 13:15:19 +01:00
github-actions[bot]
31b65f22ae chore(): update bug report / version 2025-12-01 12:03:58 +00:00
Ynbot
0e34fb6474 [Automated] Update version in package.py for develop 2025-12-01 12:03:02 +00:00
Ondřej Samohel
07d88cd639
Merge branch 'develop' into enhancement/1296-product-base-types-support-in-integrator 2025-11-28 10:57:55 +01:00
Ondrej Samohel
feb1612200
remove argument 2025-11-27 17:18:35 +01:00
Ondrej Samohel
bb8f214e47
🐛 fix the abstract plugin debug print 2025-11-27 11:59:20 +01:00
Ondrej Samohel
f8e8ab2b27
🐶 fix long line 2025-11-27 10:58:13 +01:00
Ondrej Samohel
67364633f0
🤖 implement copilot suggestions 2025-11-27 10:56:02 +01:00
Ondřej Samohel
85e5024078
Merge branch 'develop' into enhancement/1296-product-base-types-support-in-integrator 2025-11-26 18:04:02 +01:00
Ondrej Samohel
1f88b0031d
♻️ fix discovery 2025-11-26 17:57:10 +01:00
Ondrej Samohel
64f549c495
ugly thing in name of compatibility? 2025-11-26 17:48:52 +01:00
Ondrej Samohel
3a24db94f5
📝 log deprecation warning 2025-11-26 17:45:17 +01:00
Ondrej Samohel
b0005180f2
⚗️ fix tests 2025-11-26 17:40:27 +01:00
Ondrej Samohel
bb430342d8
🐶 fix linter 2025-11-26 17:35:49 +01:00
Ondrej Samohel
700b025024
♻️ move plugin check earlier, fix hints 2025-11-26 17:34:01 +01:00
Ondrej Samohel
e6007b2cee
♻️ fixes
type hints, checks
2025-11-26 17:25:10 +01:00
Ondrej Samohel
00e2e3c2ad
🎛️ fix type hints 2025-11-26 15:33:43 +01:00
Ondrej Samohel
794bb716b2
♻️ small fixes 2025-11-26 15:25:54 +01:00
Ondrej Samohel
1cddb86918
⚗️ fix tests 2025-11-26 15:17:19 +01:00
Ondrej Samohel
b967f8f818
♻️ consolidate warninings 2025-11-26 15:01:25 +01:00
Ondrej Samohel
05547c752e
♻️ remove the check for product base type support - publisher model 2025-11-26 14:27:22 +01:00
Ondrej Samohel
2cf392633e
♻️ remove unnecessary checks 2025-11-26 14:08:50 +01:00
Jakub Trllo
d6431a4990 added overload functionality 2025-11-26 12:17:13 +01:00
Jakub Trllo
0576638603
Merge branch 'develop' into enhancement/initial-support-for-folder-in-product-name 2025-11-26 11:53:01 +01:00
Ondrej Samohel
04527b0061
♻️ change usage of product_base_types in plugins 2025-11-21 19:06:36 +01:00
Ondrej Samohel
90da1c9059
Merge branch 'develop' into enhancement/1297-product-base-types-creation-and-creator-plugins 2025-11-21 13:53:30 +01:00
Jakub Trllo
d7433f84d7 use setattr 2025-10-20 14:58:34 +02:00
Jakub Trllo
882c0bcc6a rename decorator and add more information to the example 2025-10-20 14:58:26 +02:00
Ondrej Samohel
0ca2d25ef6
⚗️ fix linting 2025-10-17 17:41:50 +02:00
Ondrej Samohel
f147d28c52
⚗️ add tests for product names 2025-10-17 17:36:47 +02:00
Ondrej Samohel
ff9167192a
Merge remote-tracking branch 'origin/enhancement/1297-product-base-types-creation-and-creator-plugins' into enhancement/1297-product-base-types-creation-and-creator-plugins 2025-10-17 15:15:16 +02:00
Ondřej Samohel
8906a1c903
Merge branch 'develop' into enhancement/1297-product-base-types-creation-and-creator-plugins 2025-10-17 15:13:21 +02:00
Ondrej Samohel
fa6d50c23e
Merge remote-tracking branch 'origin/develop' into enhancement/1297-product-base-types-creation-and-creator-plugins 2025-10-17 15:08:28 +02:00
Jakub Trllo
5fd5b73e91 fix type hints 2025-10-03 17:05:19 +02:00
Jakub Trllo
a35b179ed1 remove the private variant of the function 2025-10-03 16:59:46 +02:00
Jakub Trllo
16b4584609 mark the function with an attribute to know if entities are expected in arguments 2025-10-03 16:50:57 +02:00
Jakub Trllo
31b023b0fa use only new signature 2025-10-03 16:47:14 +02:00
Jakub Trllo
348e11f968 wrap get_product_name function 2025-10-03 16:40:12 +02:00
Jakub Trllo
f7e9f6e7c9 use kwargs in default implementation 2025-10-03 15:17:51 +02:00
Jakub Trllo
fc7ca39f39 move comment to correct place 2025-10-03 15:13:19 +02:00
Jakub Trllo
07650130c6 initial support to use folder in product name template 2025-10-03 15:11:24 +02:00
Ondřej Samohel
8edd6c583d
Merge remote-tracking branch 'origin/develop' into enhancement/1297-product-base-types-creation-and-creator-plugins 2025-09-24 12:41:26 +02:00
Ondrej Samohel
f5ac5c2cfb
Merge branch 'develop' into enhancement/1297-product-base-types-creation-and-creator-plugins 2025-09-23 13:34:35 +02:00
Ondřej Samohel
51965a9de1
🔥 remove unused import 2025-09-03 15:18:50 +02:00
Ondřej Samohel
2597469b30
🔥 remove deprecated code 2025-09-03 15:16:27 +02:00
Ondřej Samohel
93bf258978
Merge branch 'develop' into enhancement/1297-product-base-types-creation-and-creator-plugins 2025-09-03 14:54:20 +02:00
Aleks Berland
827cf15bf2
Merge branch 'develop' into ayon_review_integrate_timeout_retries_and_fallback_with_help 2025-08-26 09:56:03 -04:00
Aleks Berland
32c022cd4d Refactor upload retry logic to handle only transient network issues and improve error handling 2025-08-26 09:55:47 -04:00
Aleks Berland
a0f6a3f379 Implement upload retries for reviewable files and add user-friendly error handling in case of timeout. Update validation help documentation for upload failures. 2025-08-25 19:09:20 -04:00
Roy Nieterau
9dbaf15449
Merge branch 'develop' into enhancement/transcoding_oiio_tool_for_ffmpeg_one_call 2025-06-19 14:20:59 +02:00
Ondřej Samohel
3e77031d9c
Merge branch 'develop' into enhancement/1297-product-base-types-creation-and-creator-plugins 2025-06-13 16:42:27 +02:00
Ondřej Samohel
50045d71bd
support product base types in the integrator 2025-06-10 12:16:49 +02:00
Ondřej Samohel
e2a413f20e
🐶 remove unneeded f-string 2025-06-10 11:40:43 +02:00
Ondřej Samohel
da286e3cfb
♻️ remove check for attribute 2025-06-10 11:23:37 +02:00
Ondřej Samohel
7f21d39d81
Merge branch 'develop' into enhancement/1297-product-base-types-creation-and-creator-plugins 2025-06-09 14:06:28 +02:00
Ondřej Samohel
fa8c054889
♻️ refactor support feature check function name 2025-06-09 13:54:41 +02:00
Roy Nieterau
6061e8a82b Merge branch 'develop' of https://github.com/ynput/ayon-core into enhancement/transcoding_oiio_tool_for_ffmpeg_one_call
# Conflicts:
#	client/ayon_core/lib/transcoding.py
#	client/ayon_core/plugins/publish/extract_color_transcode.py
2025-06-06 14:50:06 +02:00
Roy Nieterau
4aa2f1bb86 Merge branch 'enhancement/transcoding_oiio_tool_for_ffmpeg_one_call' of https://github.com/BigRoy/ayon-core into enhancement/transcoding_oiio_tool_for_ffmpeg_one_call 2025-06-06 14:48:00 +02:00
Roy Nieterau
8fbb8c93c1 Allow more frames patterns 2025-06-06 14:47:41 +02:00
Ondřej Samohel
dfd8fe6e8c
🐛 report correctly skipped abstract creators 2025-06-06 10:33:25 +02:00
Ondrej Samohel
fce1ef248d
🐶 some more linter fixes 2025-06-04 11:28:07 +02:00
Ondrej Samohel
67db5c123f
🐶 linter fixes 2025-06-04 11:24:22 +02:00
Ondrej Samohel
d237e5f54c
🎨 add support for product base type to basic creator logic 2025-06-03 17:25:32 +02:00
Ondřej Samohel
9e730a6b5b
🔧 changes in Plugin anc CreateInstance WIP 2025-06-03 10:58:43 +02:00
Ondrej Samohel
bcdeba18ac
🔧 implementation WIP 2025-06-02 09:29:04 +02:00
Roy Nieterau
204625b5c8
Update client/ayon_core/lib/transcoding.py
Co-authored-by: Jakub Trllo <43494761+iLLiCiTiT@users.noreply.github.com>
2025-05-20 23:55:29 +02:00
Roy Nieterau
3248faff40 Fix int -> str frame padding argument to subprocess 2025-04-25 14:57:40 +02:00
Roy Nieterau
ec9c6c510a Split on any of the characters as intended, instead of on literal x- 2025-04-25 14:14:27 +02:00
Roy Nieterau
537dac6033 Fix get_oiio_info_for_input call for sequences in convert_colorspace 2025-04-25 12:47:20 +02:00
Roy Nieterau
422febf441 Fix variable usage 2025-04-25 12:26:50 +02:00
Roy Nieterau
7bf2bfd6b1 Improve docstring 2025-04-25 12:22:22 +02:00
Roy Nieterau
ea5f1c81d6 Fix passing sequence to oiiotool 2025-04-25 12:21:34 +02:00
Roy Nieterau
7b91c0da1e Merge branch 'enhancement/transcoding_oiio_tool_for_ffmpeg_one_call' of https://github.com/BigRoy/ayon-core into enhancement/oiio_transcode_parallel_frames 2025-04-25 09:27:27 +02:00
Roy Nieterau
849a999744 Fix TypeError message 2025-04-25 09:11:44 +02:00
Roy Nieterau
01174c9b11 Provide more sensible return type for _translate_to_sequence 2025-04-25 09:10:44 +02:00
Roy Nieterau
a94bda06f4 Merge branch 'develop' of https://github.com/ynput/ayon-core into enhancement/oiio_transcode_parallel_frames
# Conflicts:
#	client/ayon_core/plugins/publish/extract_color_transcode.py
2025-04-25 09:03:35 +02:00
Roy Nieterau
0aa0673b57 Use correct variable 2025-04-23 19:50:59 +02:00
Roy Nieterau
c79ae86c44
Merge branch 'develop' into enhancement/transcoding_oiio_tool_for_ffmpeg_one_call 2025-04-23 19:50:00 +02:00
Roy Nieterau
e8a0c69cf2 Merge branch 'enhancement/transcoding_oiio_tool_for_ffmpeg_one_call' of https://github.com/BigRoy/ayon-core into enhancement/oiio_transcode_parallel_frames
# Conflicts:
#	client/ayon_core/lib/transcoding.py
2025-04-23 19:49:15 +02:00
Roy Nieterau
98e0ec1051 Improve parallelization for ExtractReview and ExtractOIIOTranscode
- Support ExtractReview convert to FFMPEG in one `oiiotool` call for sequences
- Support sequences with holes in both plug-ins by using dedicated `--frames` argument to `oiiotool` for more complex frame patterns.
- Add `--parallel-frames` argument to `oiiotool` to allow parallelizing more of the OIIO tool process, improving throughput. Note: This requires OIIO 2.5.2.0 or higher. See f40f9800c8
2025-04-23 19:44:39 +02:00
Roy Nieterau
04c14cab7a Remove deprecated function import 2025-04-01 09:09:50 +02:00
Roy Nieterau
b43969da1c add from __future__ import annotations 2025-04-01 09:08:12 +02:00
Roy Nieterau
c7c2a4a7ec Cleanup 2025-04-01 09:07:02 +02:00
Roy Nieterau
cb125a192f Optimize oiio tool conversion for ffmpeg.
- Prepare attributes to remove list just once.
- Process sequences as a single `oiiotool` call
2025-03-31 23:15:17 +02:00
64 changed files with 2338 additions and 734 deletions

@@ -35,6 +35,9 @@ body:
         label: Version
         description: What version are you running? Look to AYON Tray
         options:
+          - 1.7.0
+          - 1.6.13
+          - 1.6.12
           - 1.6.11
           - 1.6.10
           - 1.6.9

@@ -185,9 +185,14 @@ class IPluginPaths(AYONInterface):
         """
         return self._get_plugin_paths_by_type("inventory")

-    def get_loader_action_plugin_paths(self) -> list[str]:
+    def get_loader_action_plugin_paths(
+        self, host_name: Optional[str]
+    ) -> list[str]:
         """Receive loader action plugin paths.

+        Args:
+            host_name (Optional[str]): Current host name.
+
         Returns:
             list[str]: Paths to loader action plugins.

@@ -6,7 +6,6 @@ import logging
 import code
 import traceback
 from pathlib import Path
-import warnings

 import click

@@ -90,54 +89,6 @@ def addon(ctx):
     pass


-@main_cli.command()
-@click.pass_context
-@click.argument("output_json_path")
-@click.option("--project", help="Project name", default=None)
-@click.option("--asset", help="Folder path", default=None)
-@click.option("--task", help="Task name", default=None)
-@click.option("--app", help="Application name", default=None)
-@click.option(
-    "--envgroup", help="Environment group (e.g. \"farm\")", default=None
-)
-def extractenvironments(
-    ctx, output_json_path, project, asset, task, app, envgroup
-):
-    """Extract environment variables for entered context to a json file.
-
-    Entered output filepath will be created if does not exists.
-
-    All context options must be passed otherwise only AYON's global
-    environments will be extracted.
-
-    Context options are "project", "asset", "task", "app"
-
-    Deprecated:
-        This function is deprecated and will be removed in future. Please use
-        'addon applications extractenvironments ...' instead.
-    """
-    warnings.warn(
-        (
-            "Command 'extractenvironments' is deprecated and will be"
-            " removed in future. Please use"
-            " 'addon applications extractenvironments ...' instead."
-        ),
-        DeprecationWarning
-    )
-    addons_manager = ctx.obj["addons_manager"]
-    applications_addon = addons_manager.get_enabled_addon("applications")
-    if applications_addon is None:
-        raise RuntimeError(
-            "Applications addon is not available or enabled."
-        )
-    # Please ignore the fact this is using private method
-    applications_addon._cli_extract_environments(
-        output_json_path, project, asset, task, app, envgroup
-    )
-
-
 @main_cli.command()
 @click.pass_context
 @click.argument("path", required=True)

@@ -137,7 +137,7 @@ class HostBase(AbstractHost):
     def get_current_folder_path(self) -> Optional[str]:
         """
         Returns:
-            Optional[str]: Current asset name.
+            Optional[str]: Current folder path.
         """
         return os.environ.get("AYON_FOLDER_PATH")

@@ -1,3 +1,4 @@
+from __future__ import annotations
 import os
 import re
 import logging

@@ -12,6 +13,8 @@ from typing import Optional
 import xml.etree.ElementTree

+import clique
+
 from .execute import run_subprocess
 from .vendor_bin_utils import (
     get_ffmpeg_tool_args,

@@ -131,16 +134,29 @@ def get_transcode_temp_directory():
     )


-def get_oiio_info_for_input(filepath, logger=None, subimages=False):
+def get_oiio_info_for_input(
+    filepath: str,
+    *,
+    subimages: bool = False,
+    verbose: bool = True,
+    logger: logging.Logger = None,
+):
     """Call oiiotool to get information about input and return stdout.

+    Args:
+        filepath (str): Path to file.
+        subimages (bool): include info about subimages in the output.
+        verbose (bool): get the full metadata about each input image.
+        logger (logging.Logger): Logger used for logging.
+
     Stdout should contain xml format string.
     """
     args = get_oiio_tool_args(
         "oiiotool",
         "--info",
-        "-v"
     )
+    if verbose:
+        args.append("-v")

     if subimages:
         args.append("-a")

@@ -570,7 +586,10 @@ def get_review_layer_name(src_filepath):
         return None

     # Load info about file from oiio tool
-    input_info = get_oiio_info_for_input(src_filepath)
+    input_info = get_oiio_info_for_input(
+        src_filepath,
+        verbose=False,
+    )
     if not input_info:
         return None

@@ -634,6 +653,37 @@ def should_convert_for_ffmpeg(src_filepath):
     return False


+def _get_attributes_to_erase(
+    input_info: dict, logger: logging.Logger
+) -> list[str]:
+    """FFMPEG does not support some attributes in metadata."""
+    erase_attrs: dict[str, str] = {}  # Attr name to reason mapping
+    for attr_name, attr_value in input_info["attribs"].items():
+        if not isinstance(attr_value, str):
+            continue
+
+        # Remove attributes that have string value longer than allowed length
+        # for ffmpeg or when contain prohibited symbols
+        if len(attr_value) > MAX_FFMPEG_STRING_LEN:
+            reason = f"has too long value ({len(attr_value)} chars)."
+            erase_attrs[attr_name] = reason
+            continue
+
+        for char in NOT_ALLOWED_FFMPEG_CHARS:
+            if char not in attr_value:
+                continue
+            reason = f"contains unsupported character \"{char}\"."
+            erase_attrs[attr_name] = reason
+            break
+
+    for attr_name, reason in erase_attrs.items():
+        logger.info(
+            f"Removed attribute \"{attr_name}\" from metadata"
+            f" because {reason}."
+        )
+
+    return list(erase_attrs.keys())
+
+
 def convert_input_paths_for_ffmpeg(
     input_paths,
     output_dir,

@@ -659,7 +709,7 @@ def convert_input_paths_for_ffmpeg(
     Raises:
         ValueError: If input filepath has extension not supported by function.
-            Currently is supported only ".exr" extension.
+            Currently, only ".exr" extension is supported.
     """
     if logger is None:
         logger = logging.getLogger(__name__)

@@ -684,7 +734,22 @@ def convert_input_paths_for_ffmpeg(
     # Collect channels to export
     input_arg, channels_arg = get_oiio_input_and_channel_args(input_info)

-    for input_path in input_paths:
+    # Find which attributes to strip
+    erase_attributes: list[str] = _get_attributes_to_erase(
+        input_info, logger=logger
+    )
+
+    # clique.PATTERNS["frames"] supports only `.1001.exr` not `_1001.exr` so
+    # we use a customized pattern.
+    pattern = "[_.](?P<index>(?P<padding>0*)\\d+)\\.\\D+\\d?$"
+    input_collections, input_remainder = clique.assemble(
+        input_paths,
+        patterns=[pattern],
+        assume_padded_when_ambiguous=True,
+    )
+    input_items = list(input_collections)
+    input_items.extend(input_remainder)
+
+    for input_item in input_items:
         # Prepare subprocess arguments
         oiio_cmd = get_oiio_tool_args(
             "oiiotool",

@@ -695,8 +760,23 @@ def convert_input_paths_for_ffmpeg(
         if compression:
             oiio_cmd.extend(["--compression", compression])

+        # Convert a sequence of files using a single oiiotool command
+        # using its sequence syntax
+        if isinstance(input_item, clique.Collection):
+            frames = input_item.format("{head}#{tail}").replace(" ", "")
+            oiio_cmd.extend([
+                "--framepadding", input_item.padding,
+                "--frames", frames,
+                "--parallel-frames"
+            ])
+            input_item: str = input_item.format("{head}#{tail}")
+        elif not isinstance(input_item, str):
+            raise TypeError(
+                f"Input is not a string or Collection: {input_item}"
+            )
+
         oiio_cmd.extend([
-            input_arg, input_path,
+            input_arg, input_item,
             # Tell oiiotool which channels should be put to top stack
             # (and output)
             "--ch", channels_arg,

@@ -704,38 +784,11 @@ def convert_input_paths_for_ffmpeg(
             "--subimage", "0"
         ])

-        for attr_name, attr_value in input_info["attribs"].items():
-            if not isinstance(attr_value, str):
-                continue
-
-            # Remove attributes that have string value longer than allowed
-            # length for ffmpeg or when containing prohibited symbols
-            erase_reason = "Missing reason"
-            erase_attribute = False
-            if len(attr_value) > MAX_FFMPEG_STRING_LEN:
-                erase_reason = "has too long value ({} chars).".format(
-                    len(attr_value)
-                )
-                erase_attribute = True
-
-            if not erase_attribute:
-                for char in NOT_ALLOWED_FFMPEG_CHARS:
-                    if char in attr_value:
-                        erase_attribute = True
-                        erase_reason = (
-                            "contains unsupported character \"{}\"."
-                        ).format(char)
-                        break
-
-            if erase_attribute:
-                # Set attribute to empty string
-                logger.info((
-                    "Removed attribute \"{}\" from metadata because {}."
-                ).format(attr_name, erase_reason))
-                oiio_cmd.extend(["--eraseattrib", attr_name])
+        for attr_name in erase_attributes:
+            oiio_cmd.extend(["--eraseattrib", attr_name])

         # Add last argument - path to output
-        base_filename = os.path.basename(input_path)
+        base_filename = os.path.basename(input_item)
         output_path = os.path.join(output_dir, base_filename)
         oiio_cmd.extend([
             "-o", output_path

@@ -1136,7 +1189,10 @@ def oiio_color_convert(
     target_display=None,
     target_view=None,
     additional_command_args=None,
-    logger=None,
+    frames: Optional[str] = None,
+    frame_padding: Optional[int] = None,
+    parallel_frames: bool = False,
+    logger: Optional[logging.Logger] = None,
 ):
     """Transcode source file to other with colormanagement.

@@ -1148,7 +1204,7 @@ def oiio_color_convert(
         input_path (str): Path that should be converted. It is expected that
             contains single file or image sequence of same type
             (sequence in format 'file.FRAMESTART-FRAMEEND#.ext', see oiio docs,
-            eg `big.1-3#.tif`)
+            eg `big.1-3#.tif` or `big.1-3%d.ext` with `frames` argument)
         output_path (str): Path to output filename.
             (must follow format of 'input_path', eg. single file or
             sequence in 'file.FRAMESTART-FRAMEEND#.ext', `output.1-3#.tif`)

@@ -1169,6 +1225,13 @@ def oiio_color_convert(
             both 'view' and 'display' must be filled (if 'target_colorspace')
         additional_command_args (list): arguments for oiiotool (like binary
             depth for .dpx)
+        frames (Optional[str]): Complex frame range to process. This requires
+            input path and output path to use frame token placeholder like
+            `#` or `%d`, e.g. file.#.exr
+        frame_padding (Optional[int]): Frame padding to use for the input and
+            output when using a sequence filepath.
+        parallel_frames (bool): If True, process frames in parallel inside
+            the `oiiotool` process. Only supported in OIIO 2.5.20.0+.
         logger (logging.Logger): Logger used for logging.

     Raises:

@@ -1178,7 +1241,20 @@ def oiio_color_convert(
     if logger is None:
         logger = logging.getLogger(__name__)

-    input_info = get_oiio_info_for_input(input_path, logger=logger)
+    # Get oiioinfo only from first image, otherwise file can't be found
+    first_input_path = input_path
+    if frames:
+        frames: str
+        first_frame = int(re.split("[ x-]", frames, 1)[0])
+        first_frame = str(first_frame).zfill(frame_padding or 0)
+        for token in ["#", "%d"]:
+            first_input_path = first_input_path.replace(token, first_frame)
+
+    input_info = get_oiio_info_for_input(
+        first_input_path,
+        verbose=False,
+        logger=logger,
+    )

     # Collect channels to export
     input_arg, channels_arg = get_oiio_input_and_channel_args(input_info)

@@ -1191,6 +1267,22 @@ def oiio_color_convert(
             "--colorconfig", config_path
         )

+    if frames:
+        # If `frames` is specified, then process the input and output
+        # as if it's a sequence of frames (must contain `%04d` as frame
+        # token placeholder in filepaths)
+        oiio_cmd.extend([
+            "--frames", frames,
+        ])
+        if frame_padding:
+            oiio_cmd.extend([
+                "--framepadding", str(frame_padding),
+            ])
+
+    if parallel_frames:
+        oiio_cmd.append("--parallel-frames")
+
     oiio_cmd.extend([
         input_arg, input_path,
         # Tell oiiotool which channels should be put to top stack

@@ -1234,17 +1326,11 @@ def oiio_color_convert(
     if source_view and source_display:
         color_convert_args = None
         ocio_display_args = None
-        oiio_cmd.extend([
-            "--ociodisplay:inverse=1:subimages=0",
-            source_display,
-            source_view,
-        ])

         if target_colorspace:
             # This is a two-step conversion process since there's no direct
             # display/view to colorspace command
             # This could be a config parameter or determined from OCIO config
-            # Use temporarty role space 'scene_linear'
+            # Use temporary role space 'scene_linear'
             color_convert_args = ("scene_linear", target_colorspace)
         elif source_display != target_display or source_view != target_view:
             # Complete display/view pair conversion

@@ -1256,6 +1342,15 @@ def oiio_color_convert(
                 " No color conversion needed."
             )

+        if color_convert_args or ocio_display_args:
+            # Invert source display/view so that we can go from there to the
+            # target colorspace or display/view
+            oiio_cmd.extend([
+                "--ociodisplay:inverse=1:subimages=0",
+                source_display,
+                source_view,
+            ])
+
         if color_convert_args:
             # Use colorconvert for colorspace target
             oiio_cmd.extend([

@@ -1373,7 +1468,11 @@ def get_rescaled_command_arguments(
         command_args.extend(["-vf", "{0},{1}".format(scale, pad)])

     elif application == "oiiotool":
-        input_info = get_oiio_info_for_input(input_path, logger=log)
+        input_info = get_oiio_info_for_input(
+            input_path,
+            verbose=False,
+            logger=log,
+        )
         # Collect channels to export
         _, channels_arg = get_oiio_input_and_channel_args(
             input_info, alpha_default=1.0)

@@ -1464,7 +1563,11 @@ def _get_image_dimensions(application, input_path, log):
     # fallback for weird files with width=0, height=0
     if (input_width == 0 or input_height == 0) and application == "oiiotool":
         # Load info about file from oiio tool
-        input_info = get_oiio_info_for_input(input_path, logger=log)
+        input_info = get_oiio_info_for_input(
+            input_path,
+            verbose=False,
+            logger=log,
+        )
         if input_info:
             input_width = int(input_info["width"])
             input_height = int(input_info["height"])

@@ -1513,10 +1616,13 @@ def get_oiio_input_and_channel_args(oiio_input_info, alpha_default=None):
     """Get input and channel arguments for oiiotool.

     Args:
         oiio_input_info (dict): Information about input from oiio tool.
-            Should be output of function `get_oiio_info_for_input`.
+            Should be output of function 'get_oiio_info_for_input' (can be
+            called with 'verbose=False').
         alpha_default (float, optional): Default value for alpha channel.

     Returns:
         tuple[str, str]: Tuple of input and channel arguments.

     """
     channel_names = oiio_input_info["channelnames"]
     review_channels = get_convert_rgb_channels(channel_names)
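For context on the sequence-collection step in this diff: the customized pattern lets the code group both `.1001.exr` and `_1001.exr` style frame names into one collection per sequence, so a whole sequence becomes a single oiiotool call. Below is a minimal stdlib-only sketch of that grouping idea; `collect_sequences` is a hypothetical helper for illustration, not the `clique.assemble` implementation the diff actually uses.

```python
import re

# Same customized frame pattern as in the diff: a "." or "_" separator,
# a (possibly zero-padded) frame index, then the file extension.
FRAME_PATTERN = re.compile(r"[_.](?P<index>(?P<padding>0*)\d+)\.\D+\d?$")


def collect_sequences(paths):
    """Group paths differing only by frame index; keep the rest as remainder."""
    sequences = {}
    remainder = []
    for path in paths:
        match = FRAME_PATTERN.search(path)
        if not match:
            remainder.append(path)
            continue
        start, end = match.span("index")
        index = match.group("index")
        # Key by head, tail and padding width, like a clique Collection
        key = (path[:start], path[end:], len(index))
        sequences.setdefault(key, []).append(int(index))
    return sequences, remainder


sequences, remainder = collect_sequences(
    ["shot.1001.exr", "shot.1002.exr", "shot.1003.exr", "single.exr"]
)
```

With this grouping, each key maps to the frame numbers of one sequence, while `single.exr` stays in the remainder and is processed as a plain file, mirroring how the diff iterates collections and strings in one loop.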

@@ -70,7 +70,7 @@ from dataclasses import dataclass
 import ayon_api

 from ayon_core import AYON_CORE_ROOT
-from ayon_core.lib import StrEnum, Logger
+from ayon_core.lib import StrEnum, Logger, is_func_signature_supported
 from ayon_core.host import AbstractHost
 from ayon_core.addon import AddonsManager, IPluginPaths
 from ayon_core.settings import get_studio_settings, get_project_settings

@@ -752,6 +752,7 @@ class LoaderActionsContext:
     def _get_plugins(self) -> dict[str, LoaderActionPlugin]:
         if self._plugins is None:
+            host_name = self.get_host_name()
             addons_manager = self.get_addons_manager()
             all_paths = [
                 os.path.join(AYON_CORE_ROOT, "plugins", "loader")

@@ -759,7 +760,24 @@ class LoaderActionsContext:
             for addon in addons_manager.addons:
                 if not isinstance(addon, IPluginPaths):
                     continue
-                paths = addon.get_loader_action_plugin_paths()
+                try:
+                    if is_func_signature_supported(
+                        addon.get_loader_action_plugin_paths,
+                        host_name
+                    ):
+                        paths = addon.get_loader_action_plugin_paths(
+                            host_name
+                        )
+                    else:
+                        paths = addon.get_loader_action_plugin_paths()
+                except Exception:
+                    self._log.warning(
+                        "Failed to get plugin paths for addon",
+                        exc_info=True
+                    )
+                    continue
+
                 if paths:
                     all_paths.extend(paths)
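The branching above calls an addon with the new `host_name` argument only when its override can actually accept it, keeping older addons working. A rough sketch of how such a check can be done with `inspect.signature`; the `supports_signature` helper and the two addon classes are hypothetical stand-ins, and the real `is_func_signature_supported` in `ayon_core.lib` may be implemented differently.

```python
import inspect


def supports_signature(func, *args, **kwargs):
    """Return True if 'func' could be called with the given arguments."""
    try:
        # bind() validates arguments against the signature without calling
        inspect.signature(func).bind(*args, **kwargs)
    except TypeError:
        return False
    return True


class OldAddon:
    """Addon written against the old, argument-less interface."""

    def get_loader_action_plugin_paths(self):
        return ["/old/paths"]


class NewAddon:
    """Addon that already accepts the new 'host_name' argument."""

    def get_loader_action_plugin_paths(self, host_name):
        return [f"/new/paths/{host_name}"]


def get_paths(addon, host_name):
    method = addon.get_loader_action_plugin_paths
    if supports_signature(method, host_name):
        return method(host_name)
    # Fall back to the legacy call for old overrides
    return method()
```

Dispatching on the signature rather than on a version flag means addons migrate at their own pace without any registry of who supports what.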

@@ -1,4 +1,5 @@
 """Package to handle compatibility checks for pipeline components."""
+import ayon_api


 def is_product_base_type_supported() -> bool:

@@ -13,4 +14,7 @@ def is_product_base_type_supported() -> bool:
         bool: True if product base types are supported, False otherwise.

     """
-    return False
+    if not hasattr(ayon_api, "is_product_base_type_supported"):
+        return False
+    return ayon_api.is_product_base_type_supported()
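The check above degrades gracefully when the installed `ayon_api` predates the capability query. A condensed sketch of the same feature-detection idea, with `FakeApiOld`/`FakeApiNew` as hypothetical stand-ins for older and newer api modules:

```python
class FakeApiOld:
    """Stand-in for an ayon_api version without the capability check."""


class FakeApiNew:
    """Stand-in for an ayon_api version that exposes the check."""

    @staticmethod
    def is_product_base_type_supported():
        return True


def product_base_type_supported(api) -> bool:
    # Probe the module for the capability before delegating to it
    if not hasattr(api, "is_product_base_type_supported"):
        return False
    return api.is_product_base_type_supported()
```

The `hasattr` probe answers "does this client even know about the feature" and only then asks the server-aware call whether the feature is actually enabled.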

@@ -15,6 +15,7 @@ from typing import (
     Any,
     Callable,
 )
+from warnings import warn

 import pyblish.logic
 import pyblish.api

@@ -752,13 +753,13 @@ class CreateContext:
         manual_creators = {}
         report = discover_creator_plugins(return_report=True)
         self.creator_discover_result = report
-        for creator_class in report.plugins:
-            if inspect.isabstract(creator_class):
-                self.log.debug(
-                    "Skipping abstract Creator {}".format(str(creator_class))
-                )
-                continue
+        for creator_class in report.abstract_plugins:
+            self.log.debug(
+                "Skipping abstract Creator '%s'",
+                str(creator_class)
+            )

+        for creator_class in report.plugins:
             creator_identifier = creator_class.identifier
             if creator_identifier in creators:
                 self.log.warning(

@@ -772,19 +773,17 @@ class CreateContext:
                 creator_class.host_name
                 and creator_class.host_name != self.host_name
             ):
-                self.log.info((
-                    "Creator's host name \"{}\""
-                    " is not supported for current host \"{}\""
-                ).format(creator_class.host_name, self.host_name))
+                self.log.info(
+                    (
+                        'Creator\'s host name "{}"'
+                        ' is not supported for current host "{}"'
+                    ).format(creator_class.host_name, self.host_name)
+                )
                 continue

             # TODO report initialization error
             try:
-                creator = creator_class(
-                    project_settings,
-                    self,
-                    self.headless
-                )
+                creator = creator_class(project_settings, self, self.headless)
             except Exception:
                 self.log.error(
                     f"Failed to initialize plugin: {creator_class}",

@@ -792,6 +791,19 @@ class CreateContext:
                 )
                 continue

+            if not creator.product_base_type:
+                message = (
+                    f"Provided creator {creator!r} doesn't have "
+                    "product base type attribute defined. This will be "
+                    "required in future."
+                )
+                warn(
+                    message,
+                    DeprecationWarning,
+                    stacklevel=2
+                )
+                self.log.warning(message)
+
             if not creator.enabled:
                 disabled_creators[creator_identifier] = creator
                 continue

@@ -1289,8 +1301,12 @@ class CreateContext:
             "folderPath": folder_entity["path"],
             "task": task_entity["name"] if task_entity else None,
             "productType": creator.product_type,
+            # Add product base type if supported. Fallback to product type
+            "productBaseType": (
+                creator.product_base_type or creator.product_type),
             "variant": variant
         }

         if active is not None:
             if not isinstance(active, bool):
                 self.log.warning(
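The new block in this diff warns, but does not fail, when a creator lacks a product base type, and the instance data falls back to the product type. A condensed sketch of that soft-deprecation pattern; the `Creator` class and `check_creator` helper here are hypothetical stand-ins for illustration.

```python
import warnings


class Creator:
    """Stand-in for a legacy creator without a product base type."""

    product_type = "render"
    product_base_type = None


def check_creator(creator):
    """Warn about a missing base type and fall back to the product type."""
    base_type = creator.product_base_type
    if not base_type:
        warnings.warn(
            f"Creator {creator!r} does not define 'product_base_type';"
            " this will be required in the future.",
            DeprecationWarning,
            stacklevel=2,
        )
        # Same fallback the instance data uses above
        base_type = creator.product_type
    return base_type
```

Emitting both a `DeprecationWarning` (visible to tooling and test runners) and a log warning (visible to artists in publish logs) gives plugin authors time to migrate before the attribute becomes mandatory.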


@@ -1,20 +1,21 @@
-# -*- coding: utf-8 -*-
-import os
-import copy
-import collections
-from typing import TYPE_CHECKING, Optional, Dict, Any
+"""Creator plugins for the create process."""
+from __future__ import annotations
+
+import collections
+import copy
+import os
from abc import ABC, abstractmethod
+from typing import TYPE_CHECKING, Any, Dict, Optional

from ayon_core.lib import Logger, get_version_from_path
from ayon_core.pipeline.plugin_discover import (
+    deregister_plugin,
+    deregister_plugin_path,
    discover,
    register_plugin,
    register_plugin_path,
-    deregister_plugin,
-    deregister_plugin_path
)
-from ayon_core.pipeline.staging_dir import get_staging_dir_info, StagingDir
+from ayon_core.pipeline.staging_dir import StagingDir, get_staging_dir_info

from .constants import DEFAULT_VARIANT_VALUE
from .product_name import get_product_name
@@ -23,6 +24,7 @@ from .structures import CreatedInstance
if TYPE_CHECKING:
    from ayon_core.lib import AbstractAttrDef
+
    # Avoid cyclic imports
    from .context import CreateContext, UpdateData  # noqa: F401
@@ -66,7 +68,6 @@ class ProductConvertorPlugin(ABC):
        Returns:
            logging.Logger: Logger with name of the plugin.
        """
-
        if self._log is None:
            self._log = Logger.get_logger(self.__class__.__name__)
        return self._log
@@ -82,9 +83,8 @@
        Returns:
            str: Converted identifier unique for all converters in host.
-        """
-        pass
+
+        """

    @abstractmethod
    def find_instances(self):
@@ -94,14 +94,10 @@
            convert.
        """
-        pass

    @abstractmethod
    def convert(self):
        """Conversion code."""
-        pass

    @property
    def create_context(self):
        """Quick access to create context.
@@ -109,7 +105,6 @@
        Returns:
            CreateContext: Context which initialized the plugin.
        """
-
        return self._create_context

    @property
@@ -122,7 +117,6 @@
        Raises:
            UnavailableSharedData: When called out of collection phase.
        """
-
        return self._create_context.collection_shared_data

    def add_convertor_item(self, label):
@@ -131,12 +125,10 @@
        Args:
            label (str): Label of item which will show in UI.
        """
-
        self._create_context.add_convertor_item(self.identifier, label)

    def remove_convertor_item(self):
        """Remove legacy item from create context when conversion finished."""
-
        self._create_context.remove_convertor_item(self.identifier)
@@ -154,7 +146,14 @@ class BaseCreator(ABC):
        project_settings (dict[str, Any]): Project settings.
        create_context (CreateContext): Context which initialized creator.
        headless (bool): Running in headless mode.
    """
+
+    # Attribute 'skip_discovery' is used during discovery phase to skip
+    # plugins, which can be used to mark base plugins that should not be
+    # considered as plugins "to use". The discovery logic does NOT use
+    # the attribute value from parent classes. Each base class has to define
+    # the attribute again.
+    skip_discovery = True

    # Label shown in UI
    label = None
@@ -219,7 +218,6 @@ class BaseCreator(ABC):
        Returns:
            Optional[dict[str, Any]]: Settings values or None.
        """
-
        settings = project_settings.get(category_name)
        if not settings:
            return None
@@ -265,7 +263,6 @@ class BaseCreator(ABC):
        Args:
            project_settings (dict[str, Any]): Project settings.
        """
-
        settings_category = self.settings_category
        if not settings_category:
            return
@@ -277,18 +274,17 @@ class BaseCreator(ABC):
            project_settings, settings_category, settings_name
        )
        if settings is None:
-            self.log.debug("No settings found for {}".format(cls_name))
+            self.log.debug(f"No settings found for {cls_name}")
            return

        for key, value in settings.items():
            # Log out attributes that are not defined on plugin object
            # - those may be potential dangerous typos in settings
            if not hasattr(self, key):
-                self.log.debug((
-                    "Applying settings to unknown attribute '{}' on '{}'."
-                ).format(
-                    key, cls_name
-                ))
+                self.log.debug(
+                    "Applying settings to unknown attribute '%s' on '%s'.",
+                    key, cls_name
+                )
            setattr(self, key, value)

    def register_callbacks(self):
@@ -297,23 +293,39 @@ class BaseCreator(ABC):
        Default implementation does nothing. It can be overridden to register
        callbacks for creator.
        """
-        pass

    @property
    def identifier(self):
        """Identifier of creator (must be unique).

-        Default implementation returns plugin's product type.
-        """
-        return self.product_type
+        Default implementation returns plugin's product base type,
+        or falls back to product type if product base type is not set.
+        """
+        identifier = self.product_base_type
+        if not identifier:
+            identifier = self.product_type
+        return identifier

    @property
    @abstractmethod
    def product_type(self):
        """Family that plugin represents."""
-        pass

+    @property
+    def product_base_type(self) -> Optional[str]:
+        """Base product type that plugin represents.
+
+        Todo (antirotor): This should be required in future - it
+            should be made abstract then.
+
+        Returns:
+            Optional[str]: Base product type that plugin represents.
+                If not set, it is assumed that the creator plugin is obsolete
+                and does not support product base type.
+
+        """
+        return None

    @property
    def project_name(self):
@@ -322,7 +334,6 @@ class BaseCreator(ABC):
        Returns:
            str: Name of a project.
        """
-
        return self.create_context.project_name

    @property
@@ -332,7 +343,6 @@ class BaseCreator(ABC):
        Returns:
            Anatomy: Project anatomy object.
        """
-
        return self.create_context.project_anatomy

    @property
@@ -344,13 +354,14 @@ class BaseCreator(ABC):
        Default implementation use attributes in this order:
            - 'group_label' -> 'label' -> 'identifier'
-        Keep in mind that 'identifier' use 'product_type' by default.
+
+        Keep in mind that 'identifier' uses 'product_base_type' by default.

        Returns:
            str: Group label that can be used for grouping of instances in UI.
-                Group label can be overridden by instance itself.
-        """
+                Group label can be overridden by the instance itself.
+
+        """
        if self._cached_group_label is None:
            label = self.identifier
            if self.group_label:
@@ -367,7 +378,6 @@ class BaseCreator(ABC):
        Returns:
            logging.Logger: Logger with name of the plugin.
        """
-
        if self._log is None:
            self._log = Logger.get_logger(self.__class__.__name__)
        return self._log
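The `identifier` property now prefers `product_base_type` and only falls back to `product_type` for plugins that predate the attribute. A self-contained sketch of that lookup (the classes below are illustrative stand-ins, not ayon-core's real `BaseCreator`):

```python
from abc import ABC, abstractmethod
from typing import Optional


class SketchCreator(ABC):
    """Illustrative stand-in for BaseCreator (not the real ayon-core class)."""

    @property
    @abstractmethod
    def product_type(self) -> str:
        ...

    @property
    def product_base_type(self) -> Optional[str]:
        # Obsolete plugins keep the default None; new plugins override this.
        return None

    @property
    def identifier(self) -> str:
        # New behaviour from the diff: prefer base type, fall back to type.
        return self.product_base_type or self.product_type


class LegacyRenderCreator(SketchCreator):
    product_type = "render"


class ModernRenderCreator(SketchCreator):
    product_type = "render"
    product_base_type = "image"


print(LegacyRenderCreator().identifier)  # render
print(ModernRenderCreator().identifier)  # image
```

Because the fallback lives in the base property, two creators sharing a base type can still keep distinct identifiers by overriding `identifier` directly.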
@@ -376,7 +386,8 @@ class BaseCreator(ABC):
        self,
        product_name: str,
        data: Dict[str, Any],
-        product_type: Optional[str] = None
+        product_type: Optional[str] = None,
+        product_base_type: Optional[str] = None
    ) -> CreatedInstance:
        """Create instance and add instance to context.
@@ -385,6 +396,8 @@ class BaseCreator(ABC):
            data (Dict[str, Any]): Instance data.
            product_type (Optional[str]): Product type, object attribute
                'product_type' is used if not passed.
+            product_base_type (Optional[str]): Product base type, object
+                attribute 'product_base_type' is used if not passed.

        Returns:
            CreatedInstance: Created instance.
@@ -392,11 +405,16 @@ class BaseCreator(ABC):
        """
        if product_type is None:
            product_type = self.product_type
+        if not product_base_type and not self.product_base_type:
+            product_base_type = product_type
+
        instance = CreatedInstance(
-            product_type,
-            product_name,
-            data,
+            product_type=product_type,
+            product_name=product_name,
+            data=data,
            creator=self,
+            product_base_type=product_base_type,
        )
        self._add_instance_to_context(instance)
        return instance
@@ -412,7 +430,6 @@ class BaseCreator(ABC):
        Args:
            instance (CreatedInstance): New created instance.
        """
-
        self.create_context.creator_adds_instance(instance)

    def _remove_instance_from_context(self, instance):
@@ -425,7 +442,6 @@ class BaseCreator(ABC):
        Args:
            instance (CreatedInstance): Instance which should be removed.
        """
-
        self.create_context.creator_removed_instance(instance)

    @abstractmethod
@@ -437,8 +453,6 @@ class BaseCreator(ABC):
            implementation
        """
-        pass
-
    @abstractmethod
    def collect_instances(self):
        """Collect existing instances related to this creator plugin.
@@ -464,8 +478,6 @@ class BaseCreator(ABC):
        ```
        """
-        pass
-
    @abstractmethod
    def update_instances(self, update_list):
        """Store changes of existing instances so they can be recollected.
@@ -475,8 +487,6 @@ class BaseCreator(ABC):
                contain changed instance and it's changes.
        """
-        pass
-
    @abstractmethod
    def remove_instances(self, instances):
        """Method called on instance removal.
@@ -489,14 +499,11 @@ class BaseCreator(ABC):
                removed.
        """
-        pass

    def get_icon(self):
        """Icon of creator (product type).

        Can return path to image file or awesome icon name.
        """
-
        return self.icon

    def get_dynamic_data(
@@ -512,19 +519,18 @@ class BaseCreator(ABC):
        These may be dynamically created based on current context of workfile.
        """
        return {}

    def get_product_name(
        self,
-        project_name,
-        folder_entity,
-        task_entity,
-        variant,
-        host_name=None,
-        instance=None,
-        project_entity=None,
-    ):
+        project_name: str,
+        folder_entity: dict[str, Any],
+        task_entity: Optional[dict[str, Any]],
+        variant: str,
+        host_name: Optional[str] = None,
+        instance: Optional[CreatedInstance] = None,
+        project_entity: Optional[dict[str, Any]] = None,
+    ) -> str:
        """Return product name for passed context.

        Method is also called on product name update. In that case origin
@@ -546,11 +552,6 @@ class BaseCreator(ABC):
        if host_name is None:
            host_name = self.create_context.host_name

-        task_name = task_type = None
-        if task_entity:
-            task_name = task_entity["name"]
-            task_type = task_entity["taskType"]
-
        dynamic_data = self.get_dynamic_data(
            project_name,
            folder_entity,
@@ -566,11 +567,12 @@ class BaseCreator(ABC):
        return get_product_name(
            project_name,
-            task_name,
-            task_type,
-            host_name,
-            self.product_type,
-            variant,
+            folder_entity=folder_entity,
+            task_entity=task_entity,
+            product_base_type=self.product_base_type,
+            product_type=self.product_type,
+            host_name=host_name,
+            variant=variant,
            dynamic_data=dynamic_data,
            project_settings=self.project_settings,
            project_entity=project_entity,
@@ -583,15 +585,15 @@ class BaseCreator(ABC):
        and values are stored to metadata for future usage and for publishing
        purposes.

-    NOTE:
-    Convert method should be implemented which should care about updating
-    keys/values when plugin attributes change.
+        Note:
+            Convert method should be implemented which should care about
+            updating keys/values when plugin attributes change.

        Returns:
            list[AbstractAttrDef]: Attribute definitions that can be tweaked
                for created instance.
-        """
+
+        """
        return self.instance_attr_defs

    def get_attr_defs_for_instance(self, instance):
@@ -614,12 +616,10 @@ class BaseCreator(ABC):
        Raises:
            UnavailableSharedData: When called out of collection phase.
        """
-
        return self.create_context.collection_shared_data

    def set_instance_thumbnail_path(self, instance_id, thumbnail_path=None):
        """Set path to thumbnail for instance."""
-
        self.create_context.thumbnail_paths_by_instance_id[instance_id] = (
            thumbnail_path
        )
@@ -640,7 +640,6 @@ class BaseCreator(ABC):
        Returns:
            dict[str, int]: Next versions by instance id.
        """
-
        return get_next_versions_for_instances(
            self.create_context.project_name, instances
        )
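With the keyword-based `get_product_name` call above, task values are derived from `task_entity` and the template can reference `{product[basetype]}`. A minimal sketch of that fill step using plain `str.format` (the real code goes through `StringTemplate` and `prepare_template_data` in `ayon_core.lib`; the helper name here is hypothetical):

```python
from typing import Any, Optional


def fill_product_name(
    template: str,
    product_type: str,
    task_entity: Optional[dict[str, Any]],
    variant: str,
    product_base_type: Optional[str] = None,
) -> str:
    """Fill a product-name template from entity data (illustrative sketch)."""
    task = {"name": None, "type": None}
    if task_entity:
        task = {"name": task_entity["name"], "type": task_entity["taskType"]}
    data = {
        "variant": variant,
        "family": product_type,  # legacy alias kept for old templates
        "task": task,
        "product": {
            "type": product_type,
            # Same fallback as the diff: basetype defaults to product type
            "basetype": product_base_type or product_type,
        },
    }
    return template.format(**data)


name = fill_product_name(
    "{product[basetype]}{task[name]}_{variant}",
    product_type="render",
    task_entity={"name": "compositing", "taskType": "Compositing"},
    variant="Main",
    product_base_type="image",
)
print(name)  # imagecompositing_Main
```

Note the real implementation also produces capitalized key variants (e.g. `{Variant}`) and raises `TaskNotSetError` when a task key is used without a task; this sketch omits both.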
@@ -651,7 +650,7 @@ class Creator(BaseCreator):
    Creation requires prepared product name and instance data.
    """
-
+    skip_discovery = True

    # GUI Purposes
    # - default_variants may not be used if `get_default_variants`
    #   is overridden
@@ -707,7 +706,6 @@ class Creator(BaseCreator):
            int: Order in which is creator shown (less == earlier). By default
                is using Creator's 'order' or processing.
        """
-
        return self.order

    @abstractmethod
@@ -722,11 +720,9 @@ class Creator(BaseCreator):
            pre_create_data(dict): Data based on pre creation attributes.
                Those may affect how creator works.
        """
        # instance = CreatedInstance(
        #     self.product_type, product_name, instance_data
        # )
-        pass

    def get_description(self):
        """Short description of product type and plugin.
@@ -734,7 +730,6 @@ class Creator(BaseCreator):
        Returns:
            str: Short description of product type.
        """
-
        return self.description

    def get_detail_description(self):
@@ -745,7 +740,6 @@ class Creator(BaseCreator):
        Returns:
            str: Detailed description of product type for artist.
        """
-
        return self.detailed_description

    def get_default_variants(self):
@@ -759,7 +753,6 @@ class Creator(BaseCreator):
        Returns:
            list[str]: Whisper variants for user input.
        """
-
        return copy.deepcopy(self.default_variants)

    def get_default_variant(self, only_explicit=False):
@@ -779,7 +772,6 @@ class Creator(BaseCreator):
        Returns:
            str: Variant value.
        """
-
        if only_explicit or self._default_variant:
            return self._default_variant
@@ -800,7 +792,6 @@ class Creator(BaseCreator):
        Returns:
            str: Variant value.
        """
-
        return self.get_default_variant()

    def _set_default_variant_wrap(self, variant):
@@ -812,7 +803,6 @@ class Creator(BaseCreator):
        Args:
            variant (str): New default variant value.
        """
-
        self._default_variant = variant

    default_variant = property(
@@ -949,6 +939,8 @@ class Creator(BaseCreator):

class HiddenCreator(BaseCreator):
+    skip_discovery = True
+
    @abstractmethod
    def create(self, instance_data, source_data):
        pass
@@ -959,10 +951,10 @@ class AutoCreator(BaseCreator):

    Can be used e.g. for `workfile`.
    """
+    skip_discovery = True

    def remove_instances(self, instances):
        """Skip removal."""
-        pass


def discover_creator_plugins(*args, **kwargs):
@@ -1020,7 +1012,6 @@ def cache_and_get_instances(creator, shared_key, list_instances_func):
        dict[str, dict[str, Any]]: Cached instances by creator identifier from
            result of passed function.
        """
-
    if shared_key not in creator.collection_shared_data:
        value = collections.defaultdict(list)
        for instance in list_instances_func():


@@ -1,24 +1,38 @@
+"""Functions for handling product names."""
+from __future__ import annotations
+
+import warnings
+from functools import wraps
+from typing import Any, Optional, Union, overload
+from warnings import warn
+
import ayon_api

from ayon_core.lib import (
    StringTemplate,
    filter_profiles,
    prepare_template_data,
+    Logger,
+    is_func_signature_supported,
)
+from ayon_core.lib.path_templates import TemplateResult
from ayon_core.settings import get_project_settings

from .constants import DEFAULT_PRODUCT_TEMPLATE
from .exceptions import TaskNotSetError, TemplateFillError

+log = Logger.get_logger(__name__)
+

def get_product_name_template(
-    project_name,
-    product_type,
-    task_name,
-    task_type,
-    host_name,
-    default_template=None,
-    project_settings=None
-):
+    project_name: str,
+    product_type: str,
+    task_name: Optional[str],
+    task_type: Optional[str],
+    host_name: str,
+    default_template: Optional[str] = None,
+    project_settings: Optional[dict[str, Any]] = None,
+    product_base_type: Optional[str] = None
+) -> str:
    """Get product name template based on passed context.

    Args:
@@ -26,26 +40,32 @@ def get_product_name_template(
        product_type (str): Product type for which the product name is
            calculated.
        host_name (str): Name of host in which the product name is calculated.
-        task_name (str): Name of task in which context the product is created.
-        task_type (str): Type of task in which context the product is created.
-        default_template (Union[str, None]): Default template which is used if
+        task_name (Optional[str]): Name of task in which context the
+            product is created.
+        task_type (Optional[str]): Type of task in which context the
+            product is created.
+        default_template (Optional[str]): Default template which is used if
            settings won't find any matching possibility. Constant
            'DEFAULT_PRODUCT_TEMPLATE' is used if not defined.
-        project_settings (Union[Dict[str, Any], None]): Prepared settings for
+        project_settings (Optional[dict[str, Any]]): Prepared settings for
            project. Settings are queried if not passed.
-    """
+        product_base_type (Optional[str]): Base type of product.
+
+    Returns:
+        str: Product name template.
+
+    """
    if project_settings is None:
        project_settings = get_project_settings(project_name)

    tools_settings = project_settings["core"]["tools"]
    profiles = tools_settings["creator"]["product_name_profiles"]
    filtering_criteria = {
+        "product_base_types": product_base_type or product_type,
        "product_types": product_type,
        "host_names": host_name,
        "task_names": task_name,
-        "task_types": task_type
+        "task_types": task_type,
    }
    matching_profile = filter_profiles(profiles, filtering_criteria)
    template = None
    if matching_profile:
@@ -69,6 +89,214 @@ def get_product_name_template(

    return template
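The filtering criteria above now include `product_base_types` so settings profiles can target a base type. A rough standalone stand-in for the profile selection (the real `ayon_core.lib.filter_profiles` has richer matching semantics, e.g. wildcards and specificity scoring; this is only a sketch of the idea):

```python
from typing import Any, Optional


def pick_template(
    profiles: list[dict[str, Any]],
    criteria: dict[str, Optional[str]],
    default_template: str,
) -> str:
    """Pick a product-name template from settings profiles (sketch).

    A profile matches when each of its non-empty filter lists contains
    the corresponding criterion value; empty/missing filters match anything.
    """
    for profile in profiles:
        if all(
            not profile.get(key) or value in profile[key]
            for key, value in criteria.items()
        ):
            return profile.get("template") or default_template
    return default_template


profiles = [
    {"product_base_types": ["image"], "template": "{product[basetype]}{Variant}"},
    {"product_base_types": [], "template": "{product[type]}{Variant}"},
]
criteria = {"product_base_types": "image", "host_names": "maya"}
print(pick_template(profiles, criteria, "{product[type]}{Variant}"))
```

Passing `product_base_type or product_type` as the criterion, as the diff does, lets legacy creators without a base type keep matching profiles filtered on the plain product type.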
def _get_product_name_old(
project_name: str,
task_name: Optional[str],
task_type: Optional[str],
host_name: str,
product_type: str,
variant: str,
default_template: Optional[str] = None,
dynamic_data: Optional[dict[str, Any]] = None,
project_settings: Optional[dict[str, Any]] = None,
product_type_filter: Optional[str] = None,
project_entity: Optional[dict[str, Any]] = None,
product_base_type: Optional[str] = None,
) -> TemplateResult:
warnings.warn(
"Used deprecated 'task_name' and 'task_type' arguments."
" Please use new signature with 'folder_entity' and 'task_entity'.",
DeprecationWarning,
stacklevel=2
)
if not product_type:
return StringTemplate("").format({})
template = get_product_name_template(
project_name=project_name,
product_type=product_type_filter or product_type,
task_name=task_name,
task_type=task_type,
host_name=host_name,
default_template=default_template,
project_settings=project_settings,
product_base_type=product_base_type,
)
template_low = template.lower()
# Simple check of task name existence for template with {task[name]} in
if not task_name and "{task" in template_low:
raise TaskNotSetError()
task_value = {
"name": task_name,
"type": task_type,
}
if "{task}" in template_low:
task_value = task_name
# NOTE this is message for TDs and Admins -> not really for users
# TODO validate this in settings and not allow it
log.warning(
"Found deprecated task key '{task}' in product name template."
" Please use '{task[name]}' instead."
)
elif "{task[short]}" in template_low:
if project_entity is None:
project_entity = ayon_api.get_project(project_name)
task_types_by_name = {
task["name"]: task for task in
project_entity["taskTypes"]
}
task_short = task_types_by_name.get(task_type, {}).get("shortName")
task_value["short"] = task_short
if not product_base_type and "{product[basetype]}" in template.lower():
warn(
"You have Product base type in product name template, "
"but it is not provided by the creator, please update your "
"creation code to include it. It will be required in "
"the future.",
DeprecationWarning,
stacklevel=2)
fill_pairs: dict[str, Union[str, dict[str, str]]] = {
"variant": variant,
"family": product_type,
"task": task_value,
"product": {
"type": product_type,
"basetype": product_base_type or product_type,
}
}
if dynamic_data:
# Dynamic data may override default values
for key, value in dynamic_data.items():
fill_pairs[key] = value
try:
return StringTemplate.format_strict_template(
template=template,
data=prepare_template_data(fill_pairs)
)
except KeyError as exp:
msg = (
f"Value for {exp} key is missing in template '{template}'."
f" Available values are {fill_pairs}"
)
raise TemplateFillError(msg) from exp
def _backwards_compatibility_product_name(func):
"""Helper to decide which variant of 'get_product_name' to use.
The old version expected 'task_name' and 'task_type' arguments. The new
version expects 'folder_entity' and 'task_entity' arguments instead.
The function is also marked with an attribute 'version' so other addons
can check if the function is using the new signature or is using
the old signature. That should allow addons to adapt to new signature.
>>> if getattr(get_product_name, "use_entities", None):
>>> # New signature is used
>>> path = get_product_name(project_name, folder_entity, ...)
>>> else:
>>> # Old signature is used
    >>> path = get_product_name(project_name, task_name, ...)
"""
# Add attribute to function to identify it as the new function
# so other addons can easily identify it.
    # >>> getattr(get_product_name, "use_entities", False)
setattr(func, "use_entities", True)
@wraps(func)
def inner(*args, **kwargs):
# ---
# Decide which variant of the function is used based on
# passed arguments.
# ---
# Entities in key-word arguments mean that the new function is used
if "folder_entity" in kwargs or "task_entity" in kwargs:
return func(*args, **kwargs)
# Using more than 7 positional arguments is not allowed
# in the new function
if len(args) > 7:
return _get_product_name_old(*args, **kwargs)
if len(args) > 1:
arg_2 = args[1]
# The second argument is a string -> task name
if isinstance(arg_2, str):
return _get_product_name_old(*args, **kwargs)
if is_func_signature_supported(func, *args, **kwargs):
return func(*args, **kwargs)
return _get_product_name_old(*args, **kwargs)
return inner
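The decorator above dispatches between old and new signatures by inspecting the call shape. A self-contained sketch of the same dispatch idea, using stdlib `inspect.signature().bind()` in place of ayon-core's `is_func_signature_supported` (function names below are illustrative):

```python
import inspect
from functools import wraps


def backwards_compatible(old_func):
    """Route calls to a legacy implementation when the old signature is used.

    Sketch of the dispatch logic above, under the assumption that the
    legacy signature passed a task name string as the second positional
    argument while the new one passes entity dictionaries.
    """
    def decorator(new_func):
        # Marker attribute so callers can detect the new signature.
        new_func.use_entities = True

        @wraps(new_func)
        def inner(*args, **kwargs):
            # Entity keyword arguments always mean the new signature.
            if "folder_entity" in kwargs or "task_entity" in kwargs:
                return new_func(*args, **kwargs)
            # A string second positional argument is a task name -> legacy.
            if len(args) > 1 and isinstance(args[1], str):
                return old_func(*args, **kwargs)
            # Otherwise accept the call only if it binds to the new signature.
            try:
                inspect.signature(new_func).bind(*args, **kwargs)
            except TypeError:
                return old_func(*args, **kwargs)
            return new_func(*args, **kwargs)

        return inner
    return decorator


def _old(project_name, task_name, **kwargs):
    return f"old:{task_name}"


@backwards_compatible(_old)
def get_name(project_name, folder_entity, task_entity=None, **kwargs):
    return f"new:{folder_entity['path']}"


print(get_name("proj", "modeling"))                    # old:modeling
print(get_name("proj", folder_entity={"path": "/a"}))  # new:/a
```

The string-check heuristic is the fragile part of any such shim: it only works because the new second argument is always a dict (or None), never a string.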
@overload
def get_product_name(
project_name: str,
folder_entity: dict[str, Any],
task_entity: Optional[dict[str, Any]],
product_base_type: str,
product_type: str,
host_name: str,
variant: str,
*,
dynamic_data: Optional[dict[str, Any]] = None,
project_settings: Optional[dict[str, Any]] = None,
project_entity: Optional[dict[str, Any]] = None,
default_template: Optional[str] = None,
product_base_type_filter: Optional[str] = None,
) -> TemplateResult:
"""Calculate product name based on passed context and AYON settings.
Subst name templates are defined in `project_settings/global/tools/creator
/product_name_profiles` where are profiles with host name, product type,
task name and task type filters. If context does not match any profile
then `DEFAULT_PRODUCT_TEMPLATE` is used as default template.
That's main reason why so many arguments are required to calculate product
name.
Args:
project_name (str): Project name.
folder_entity (Optional[dict[str, Any]]): Folder entity.
task_entity (Optional[dict[str, Any]]): Task entity.
host_name (str): Host name.
product_base_type (str): Product base type.
product_type (str): Product type.
variant (str): In most of the cases it is user input during creation.
dynamic_data (Optional[dict[str, Any]]): Dynamic data specific for
a creator which creates instance.
project_settings (Optional[dict[str, Any]]): Prepared settings
for project. Settings are queried if not passed.
project_entity (Optional[dict[str, Any]]): Project entity used when
task short name is required by template.
default_template (Optional[str]): Default template if any profile does
not match passed context. Constant 'DEFAULT_PRODUCT_TEMPLATE'
is used if is not passed.
product_base_type_filter (Optional[str]): Use different product base
type for product template filtering. Value of
`product_base_type_filter` is used when not passed.
Returns:
TemplateResult: Product name.
Raises:
TaskNotSetError: If template requires task which is not provided.
TemplateFillError: If filled template contains placeholder key which
is not collected.
"""
+@overload
def get_product_name(
    project_name,
    task_name,
@@ -81,25 +309,25 @@ def get_product_name(
    project_settings=None,
    product_type_filter=None,
    project_entity=None,
-):
+) -> TemplateResult:
    """Calculate product name based on passed context and AYON settings.

-    Subst name templates are defined in `project_settings/global/tools/creator
-    /product_name_profiles` where are profiles with host name, product type,
-    task name and task type filters. If context does not match any profile
-    then `DEFAULT_PRODUCT_TEMPLATE` is used as default template.
+    Product name templates are defined in `project_settings/global/tools
+    /creator/product_name_profiles` where are profiles with host name,
+    product type, task name and task type filters. If context does not match
+    any profile then `DEFAULT_PRODUCT_TEMPLATE` is used as default template.

    That's main reason why so many arguments are required to calculate product
    name.

-    Todos:
-        Find better filtering options to avoid requirement of
-            argument 'family_filter'.
+    Deprecated:
+        This function is using deprecated signature that does not support
+        folder entity data to be used.

    Args:
        project_name (str): Project name.
-        task_name (Union[str, None]): Task name.
-        task_type (Union[str, None]): Task type.
+        task_name (Optional[str]): Task name.
+        task_type (Optional[str]): Task type.
        host_name (str): Host name.
        product_type (str): Product type.
        variant (str): In most of the cases it is user input during creation.
@@ -117,7 +345,63 @@ def get_product_name(
        task short name is required by template.

    Returns:
-        str: Product name.
+        TemplateResult: Product name.

    """
+    pass
@_backwards_compatibility_product_name
def get_product_name(
project_name: str,
folder_entity: dict[str, Any],
task_entity: Optional[dict[str, Any]],
product_base_type: str,
product_type: str,
host_name: str,
variant: str,
*,
dynamic_data: Optional[dict[str, Any]] = None,
project_settings: Optional[dict[str, Any]] = None,
project_entity: Optional[dict[str, Any]] = None,
default_template: Optional[str] = None,
product_base_type_filter: Optional[str] = None,
) -> TemplateResult:
"""Calculate product name based on passed context and AYON settings.
Product name templates are defined in `project_settings/global/tools
/creator/product_name_profiles` where are profiles with host name,
product base type, product type, task name and task type filters.
If context does not match any profile then `DEFAULT_PRODUCT_TEMPLATE`
is used as default template.
That's main reason why so many arguments are required to calculate product
name.
Args:
project_name (str): Project name.
folder_entity (Optional[dict[str, Any]]): Folder entity.
task_entity (Optional[dict[str, Any]]): Task entity.
host_name (str): Host name.
product_base_type (str): Product base type.
product_type (str): Product type.
variant (str): In most of the cases it is user input during creation.
dynamic_data (Optional[dict[str, Any]]): Dynamic data specific for
a creator which creates instance.
project_settings (Optional[dict[str, Any]]): Prepared settings
for project. Settings are queried if not passed.
project_entity (Optional[dict[str, Any]]): Project entity used when
task short name is required by template.
default_template (Optional[str]): Default template if any profile does
not match passed context. Constant 'DEFAULT_PRODUCT_TEMPLATE'
is used if is not passed.
product_base_type_filter (Optional[str]): Use different product base
type for product template filtering. Value of
`product_base_type_filter` is used when not passed.
Returns:
TemplateResult: Product name.
Raises: Raises:
TaskNotSetError: If template requires task which is not provided. TaskNotSetError: If template requires task which is not provided.
@ -126,47 +410,68 @@ def get_product_name(
""" """
if not product_type: if not product_type:
return "" return StringTemplate("").format({})
task_name = task_type = None
if task_entity:
task_name = task_entity["name"]
task_type = task_entity["taskType"]
template = get_product_name_template( template = get_product_name_template(
project_name, project_name=project_name,
product_type_filter or product_type, product_base_type=product_base_type_filter or product_base_type,
task_name, product_type=product_type,
task_type, task_name=task_name,
host_name, task_type=task_type,
host_name=host_name,
default_template=default_template, default_template=default_template,
project_settings=project_settings project_settings=project_settings,
) )
# Simple check of task name existence for template with {task} in
# - missing task should be possible only in Standalone publisher template_low = template.lower()
if not task_name and "{task" in template.lower(): # Simple check of task name existence for template with {task[name]} in
if not task_name and "{task" in template_low:
raise TaskNotSetError() raise TaskNotSetError()
task_value = { task_value = {
"name": task_name, "name": task_name,
"type": task_type, "type": task_type,
} }
if "{task}" in template.lower(): if "{task}" in template_low:
task_value = task_name task_value = task_name
# NOTE this is message for TDs and Admins -> not really for users
# TODO validate this in settings and not allow it
log.warning(
"Found deprecated task key '{task}' in product name template."
" Please use '{task[name]}' instead."
)
elif "{task[short]}" in template.lower(): elif "{task[short]}" in template_low:
if project_entity is None: if project_entity is None:
project_entity = ayon_api.get_project(project_name) project_entity = ayon_api.get_project(project_name)
task_types_by_name = { task_types_by_name = {
task["name"]: task for task in task["name"]: task
project_entity["taskTypes"] for task in project_entity["taskTypes"]
} }
task_short = task_types_by_name.get(task_type, {}).get("shortName") task_short = task_types_by_name.get(task_type, {}).get("shortName")
task_value["short"] = task_short task_value["short"] = task_short
fill_pairs = { fill_pairs = {
"variant": variant, "variant": variant,
# TODO We should stop support 'family' key.
"family": product_type, "family": product_type,
"task": task_value, "task": task_value,
"product": { "product": {
"type": product_type "type": product_type,
"basetype": product_base_type,
} }
} }
if folder_entity:
fill_pairs["folder"] = {
"name": folder_entity["name"],
"type": folder_entity["folderType"],
}
if dynamic_data: if dynamic_data:
# Dynamic data may override default values # Dynamic data may override default values
for key, value in dynamic_data.items(): for key, value in dynamic_data.items():
@ -178,7 +483,8 @@ def get_product_name(
data=prepare_template_data(fill_pairs) data=prepare_template_data(fill_pairs)
) )
except KeyError as exp: except KeyError as exp:
raise TemplateFillError( msg = (
"Value for {} key is missing in template '{}'." f"Value for {exp} key is missing in template '{template}'."
" Available values are {}".format(str(exp), template, fill_pairs) f" Available values are {fill_pairs}"
) )
raise TemplateFillError(msg)
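The fill step above formats a template against a nested fill dictionary and turns a `KeyError` into a readable error. A minimal stand-alone sketch of that behavior, using plain `str.format` in place of AYON's `StringTemplate`/`prepare_template_data` (which do more, e.g. case variants):

```python
def fill_product_name(template, fill_pairs):
    """Format a product name template, raising a readable error on a
    missing key (mirrors the TemplateFillError branch above)."""
    try:
        return template.format(**fill_pairs)
    except KeyError as exp:
        raise ValueError(
            f"Value for {exp} key is missing in template '{template}'."
            f" Available values are {fill_pairs}"
        )

# Nested keys like {task[name]} resolve into sub-dictionaries.
fill_pairs = {
    "variant": "Main",
    "task": {"name": "modeling", "type": "Modeling"},
    "product": {"type": "model", "basetype": "geometry"},
}
name = fill_product_name("{product[type]}_{task[name]}_{variant}", fill_pairs)
```

A template referencing a key that is not in `fill_pairs` (e.g. `{folder[name]}` without a folder entity) raises the descriptive error instead of a bare `KeyError`.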
@@ -11,6 +11,8 @@ from ayon_core.lib.attribute_definitions import (
     serialize_attr_defs,
     deserialize_attr_defs,
 )
 from ayon_core.pipeline import (
     AYON_INSTANCE_ID,
     AVALON_INSTANCE_ID,
@@ -480,6 +482,10 @@ class CreatedInstance:
         data (Dict[str, Any]): Data used for filling product name or override
             data from already existing instance.
         creator (BaseCreator): Creator responsible for instance.
+        product_base_type (Optional[str]): Product base type that will be
+            created. If not provided then product base type is taken from
+            creator plugin. If creator does not have product base type then
+            deprecation warning is raised.

     """

     # Keys that can't be changed or removed from data after loading using
@@ -490,6 +496,7 @@ class CreatedInstance:
         "id",
         "instance_id",
         "productType",
+        "productBaseType",
         "creator_identifier",
         "creator_attributes",
         "publish_attributes"
@@ -509,7 +516,13 @@ class CreatedInstance:
         data: Dict[str, Any],
         creator: "BaseCreator",
         transient_data: Optional[Dict[str, Any]] = None,
+        product_base_type: Optional[str] = None
     ):
+        """Initialize CreatedInstance."""
+        # fallback to product type for backward compatibility
+        if not product_base_type:
+            product_base_type = creator.product_base_type or product_type
+
         self._creator = creator
         creator_identifier = creator.identifier
         group_label = creator.get_group_label()
@@ -562,6 +575,9 @@ class CreatedInstance:
         self._data["id"] = item_id
         self._data["productType"] = product_type
         self._data["productName"] = product_name
+        self._data["productBaseType"] = product_base_type
+
         self._data["active"] = data.get("active", True)
         self._data["creator_identifier"] = creator_identifier
@@ -21,6 +21,13 @@ from .utils import get_representation_path_from_context
 class LoaderPlugin(list):
     """Load representation into host application"""

+    # Attribute 'skip_discovery' is used during discovery phase to skip
+    # plugins, which can be used to mark base plugins that should not be
+    # considered as plugins "to use". The discovery logic does NOT use
+    # the attribute value from parent classes. Each base class has to define
+    # the attribute again.
+    skip_discovery = True
+
     product_types: set[str] = set()
     product_base_types: Optional[set[str]] = None
     representations = set()
@@ -948,7 +948,7 @@ def get_representation_by_names(
     version_name: Union[int, str],
     representation_name: str,
 ) -> Optional[dict]:
-    """Get representation entity for asset and subset.
+    """Get representation entity for folder and product.

     If version_name is "hero" then return the hero version
     If version_name is "latest" then return the latest version
@@ -966,7 +966,7 @@ def get_representation_by_names(
         return None

     if isinstance(product_name, dict) and "name" in product_name:
-        # Allow explicitly passing subset document
+        # Allow explicitly passing product entity document
         product_entity = product_name
     else:
         product_entity = ayon_api.get_product_by_name(
@@ -138,7 +138,14 @@ def discover_plugins(
     for item in modules:
         filepath, module = item
         result.add_module(module)
-        all_plugins.extend(classes_from_module(base_class, module))
+        for cls in classes_from_module(base_class, module):
+            if cls is base_class:
+                continue
+            # Class has defined 'skip_discovery = True'
+            skip_discovery = cls.__dict__.get("skip_discovery")
+            if skip_discovery is True:
+                continue
+            all_plugins.append(cls)

     if base_class not in ignored_classes:
         ignored_classes.append(base_class)
@@ -29,6 +29,7 @@ from .lib import (
     get_publish_template_name,

     publish_plugins_discover,
+    filter_crashed_publish_paths,
     load_help_content_from_plugin,
     load_help_content_from_filepath,
@@ -87,6 +88,7 @@ __all__ = (
     "get_publish_template_name",

     "publish_plugins_discover",
+    "filter_crashed_publish_paths",
     "load_help_content_from_plugin",
     "load_help_content_from_filepath",
@@ -1,6 +1,8 @@
 """Library functions for publishing."""

 from __future__ import annotations

 import os
+import platform
+import re
 import sys
 import inspect
 import copy
@@ -8,19 +10,19 @@ import warnings
 import hashlib
 import xml.etree.ElementTree
 from typing import TYPE_CHECKING, Optional, Union, List, Any
-import clique
-import speedcopy
 import logging
-import pyblish.util
-import pyblish.plugin
-import pyblish.api

 from ayon_api import (
     get_server_api_connection,
     get_representations,
     get_last_version_by_product_name
 )
+import clique
+import pyblish.util
+import pyblish.plugin
+import pyblish.api
+import speedcopy

 from ayon_core.lib import (
     import_filepath,
     Logger,
@@ -122,7 +124,8 @@ def get_publish_template_name(
     task_type,
     project_settings=None,
     hero=False,
-    logger=None
+    product_base_type: Optional[str] = None,
+    logger=None,
 ):
     """Get template name which should be used for passed context.
@@ -140,17 +143,29 @@ def get_publish_template_name(
         task_type (str): Task type on which is instance working.
         project_settings (Dict[str, Any]): Prepared project settings.
         hero (bool): Template is for hero version publishing.
+        product_base_type (Optional[str]): Product type for which should
+            be found template.
         logger (logging.Logger): Custom logger used for 'filter_profiles'
             function.

     Returns:
         str: Template name which should be used for integration.

     """
+    if not product_base_type:
+        msg = (
+            "Argument 'product_base_type' is not provided to"
+            " 'get_publish_template_name' function. This argument"
+            " will be required in future versions."
+        )
+        warnings.warn(msg, DeprecationWarning)
+        if logger:
+            logger.warning(msg)
+
     template = None
     filter_criteria = {
         "hosts": host_name,
         "product_types": product_type,
+        "product_base_types": product_base_type,
         "task_names": task_name,
         "task_types": task_type,
     }
@@ -179,7 +194,9 @@ class HelpContent:
         self.detail = detail


-def load_help_content_from_filepath(filepath):
+def load_help_content_from_filepath(
+    filepath: str
+) -> dict[str, dict[str, HelpContent]]:
     """Load help content from xml file.

     Xml file may contain errors and warnings.
     """
@@ -214,18 +231,84 @@ def load_help_content_from_filepath(filepath):
     return output


-def load_help_content_from_plugin(plugin):
+def load_help_content_from_plugin(
+    plugin: pyblish.api.Plugin,
+    help_filename: Optional[str] = None,
+) -> dict[str, dict[str, HelpContent]]:
     cls = plugin
     if not inspect.isclass(plugin):
         cls = plugin.__class__
     plugin_filepath = inspect.getfile(cls)
     plugin_dir = os.path.dirname(plugin_filepath)
-    basename = os.path.splitext(os.path.basename(plugin_filepath))[0]
-    filename = basename + ".xml"
-    filepath = os.path.join(plugin_dir, "help", filename)
+    if help_filename is None:
+        basename = os.path.splitext(os.path.basename(plugin_filepath))[0]
+        help_filename = basename + ".xml"
+    filepath = os.path.join(plugin_dir, "help", help_filename)
     return load_help_content_from_filepath(filepath)


+def filter_crashed_publish_paths(
+    project_name: str,
+    crashed_paths: set[str],
+    *,
+    project_settings: Optional[dict[str, Any]] = None,
+) -> set[str]:
+    """Filter crashed paths happened during plugins discovery.
+
+    Check if plugins discovery has enabled strict mode and filter crashed
+    paths that happened during discover based on regexes from settings.
+
+    Publishing should not start if any paths are returned.
+
+    Args:
+        project_name (str): Project name in which context plugins discovery
+            happened.
+        crashed_paths (set[str]): Crashed paths from plugins discovery report.
+        project_settings (Optional[dict[str, Any]]): Project settings.
+
+    Returns:
+        set[str]: Filtered crashed paths.
+
+    """
+    filtered_paths = set()
+    # Nothing crashed all good...
+    if not crashed_paths:
+        return filtered_paths
+
+    if project_settings is None:
+        project_settings = get_project_settings(project_name)
+
+    discover_validation = (
+        project_settings["core"]["tools"]["publish"]["discover_validation"]
+    )
+    # Strict mode is not enabled.
+    if not discover_validation["enabled"]:
+        return filtered_paths
+
+    regexes = [
+        re.compile(value, re.IGNORECASE)
+        for value in discover_validation["ignore_paths"]
+        if value
+    ]
+    is_windows = platform.system().lower() == "windows"
+    # Fitler path with regexes from settings
+    for path in crashed_paths:
+        # Normalize paths to use forward slashes on windows
+        if is_windows:
+            path = path.replace("\\", "/")
+        is_invalid = True
+        for regex in regexes:
+            if regex.match(path):
+                is_invalid = False
+                break
+        if is_invalid:
+            filtered_paths.add(path)
+    return filtered_paths
+
+
 def publish_plugins_discover(
         paths: Optional[list[str]] = None) -> DiscoverResult:
     """Find and return available pyblish plug-ins.
@@ -1079,14 +1162,16 @@ def main_cli_publish(
         except ValueError:
             pass

+    context = get_global_context()
+    project_settings = get_project_settings(context["project_name"])
+
     install_ayon_plugins()

     if addons_manager is None:
-        addons_manager = AddonsManager()
+        addons_manager = AddonsManager(project_settings)

     applications_addon = addons_manager.get_enabled_addon("applications")
     if applications_addon is not None:
-        context = get_global_context()
         env = applications_addon.get_farm_publish_environment_variables(
             context["project_name"],
             context["folder_path"],
@@ -1109,17 +1194,33 @@ def main_cli_publish(
     log.info("Running publish ...")

     discover_result = publish_plugins_discover()
-    publish_plugins = discover_result.plugins
     print(discover_result.get_report(only_errors=False))

+    filtered_crashed_paths = filter_crashed_publish_paths(
+        context["project_name"],
+        set(discover_result.crashed_file_paths),
+        project_settings=project_settings,
+    )
+    if filtered_crashed_paths:
+        joined_paths = "\n".join([
+            f"- {path}"
+            for path in filtered_crashed_paths
+        ])
+        log.error(
+            "Plugin discovery strict mode is enabled."
+            " Crashed plugin paths that prevent from publishing:"
+            f"\n{joined_paths}"
+        )
+        sys.exit(1)
+
+    publish_plugins = discover_result.plugins
+
     # Error exit as soon as any error occurs.
-    error_format = ("Failed {plugin.__name__}: "
-                    "{error} -- {error.traceback}")
+    error_format = "Failed {plugin.__name__}: {error} -- {error.traceback}"

     for result in pyblish.util.publish_iter(plugins=publish_plugins):
         if result["error"]:
             log.error(error_format.format(**result))
-            # uninstall()
             sys.exit(1)

     log.info("Publish finished.")
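The core of `filter_crashed_publish_paths` above is: normalize crashed paths to forward slashes on Windows, then keep only the paths that no `ignore_paths` regex matches, because those are the ones that should block publishing. A stand-alone sketch of just that loop (settings lookup omitted):

```python
import re

def filter_crashed(crashed_paths, ignore_patterns, is_windows=False):
    """Return crashed plugin paths not cleared by any ignore regex."""
    regexes = [
        re.compile(pattern, re.IGNORECASE)
        for pattern in ignore_patterns
        if pattern
    ]
    filtered = set()
    for path in crashed_paths:
        # Normalize paths to use forward slashes on windows
        if is_windows:
            path = path.replace("\\", "/")
        # Keep the path unless an ignore regex matches it
        if not any(regex.match(path) for regex in regexes):
            filtered.add(path)
    return filtered

paths = {
    "C:\\plugins\\legacy\\collect_old.py",
    "/studio/plugins/validate_mesh.py",
}
left = filter_crashed(paths, [".*/legacy/.*"], is_windows=True)
```

Anything left in `left` would make the CLI publish abort with the strict-mode error shown in `main_cli_publish`.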
@@ -1,7 +1,7 @@
 import inspect
 from abc import ABCMeta
 import typing
-from typing import Optional
+from typing import Optional, Any

 import pyblish.api
 import pyblish.logic
@@ -82,22 +82,51 @@ class PublishValidationError(PublishError):
 class PublishXmlValidationError(PublishValidationError):
+    """Raise an error from a dedicated xml file.
+
+    Can be useful to have one xml file with different possible messages that
+    helps to avoid flood code with dedicated artist messages.
+
+    XML files should live relative to the plugin file location:
+        '{plugin dir}/help/some_plugin.xml'.
+
+    Args:
+        plugin (pyblish.api.Plugin): Plugin that raised an error. Is used
+            to get path to xml file.
+        message (str): Exception message, can be technical, is used for
+            console output.
+        key (Optional[str]): XML file can contain multiple error messages, key
+            is used to get one of them. By default is used 'main'.
+        formatting_data (Optional[dict[str, Any]): Error message can have
+            variables to fill.
+        help_filename (Optional[str]): Name of xml file with messages. By
+            default, is used filename where plugin lives with .xml extension.
+
+    """
     def __init__(
-        self, plugin, message, key=None, formatting_data=None
-    ):
+        self,
+        plugin: pyblish.api.Plugin,
+        message: str,
+        key: Optional[str] = None,
+        formatting_data: Optional[dict[str, Any]] = None,
+        help_filename: Optional[str] = None,
+    ) -> None:
         if key is None:
             key = "main"

         if not formatting_data:
             formatting_data = {}
-        result = load_help_content_from_plugin(plugin)
+        result = load_help_content_from_plugin(plugin, help_filename)
         content_obj = result["errors"][key]
         description = content_obj.description.format(**formatting_data)
         detail = content_obj.detail
         if detail:
             detail = detail.format(**formatting_data)
-        super(PublishXmlValidationError, self).__init__(
-            message, content_obj.title, description, detail
+        super().__init__(
+            message,
+            content_obj.title,
+            description,
+            detail
         )
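`PublishXmlValidationError` pulls its artist-facing text from an XML help file keyed by `id` and fills `{placeholders}` with `formatting_data`. The sketch below shows that pattern on a hypothetical XML snippet; the element and attribute names (`error`, `id`, `title`, `description`, `detail`) are an assumption for illustration and may differ from AYON's actual help-file schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical help file content; real schemas may differ.
HELP_XML = """<root>
<error id="main">
    <title>Missing frames</title>
    <description>Sequence has holes: {holes}</description>
    <detail>Check your render output.</detail>
</error>
</root>"""

def parse_help(text):
    """Group help messages by element tag ('errors'/'warnings') and id."""
    output = {"errors": {}, "warnings": {}}
    root = ET.fromstring(text)
    for child in root:
        group = output.setdefault(child.tag + "s", {})
        group[child.get("id")] = {
            item.tag: (item.text or "").strip() for item in child
        }
    return output

content = parse_help(HELP_XML)
# Same lookup-then-format flow as in __init__ above
desc = content["errors"]["main"]["description"].format(holes="1001-1003")
```

The `key` argument of the exception corresponds to the `id` lookup, and `formatting_data` to the `.format(...)` call.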
@@ -96,7 +96,6 @@ def get_folder_template_data(folder_entity, project_name):
     Output dictionary contains keys:
     - 'folder' - dictionary with 'name' key filled with folder name
-    - 'asset' - folder name
     - 'hierarchy' - parent folder names joined with '/'
     - 'parent' - direct parent name, project name used if is under
         project
@@ -132,7 +131,6 @@ def get_folder_template_data(folder_entity, project_name):
             "path": path,
             "parents": parents,
         },
-        "asset": folder_name,
         "hierarchy": hierarchy,
         "parent": parent_name
     }
@@ -299,7 +299,6 @@ def add_ordered_sublayer(layer, contribution_path, layer_id, order=None,
         sdf format args metadata if enabled)

     """
-
     # Add the order with the contribution path so that for future
     # contributions we can again use it to magically fit into the
     # ordering. We put this in the path because sublayer paths do
@@ -317,20 +316,25 @@ def add_ordered_sublayer(layer, contribution_path, layer_id, order=None,
     # If the layer was already in the layers, then replace it
     for index, existing_path in enumerate(layer.subLayerPaths):
         args = get_sdf_format_args(existing_path)
-        existing_layer = args.get("layer_id")
-        if existing_layer == layer_id:
+        existing_layer_id = args.get("layer_id")
+        if existing_layer_id == layer_id:
+            existing_layer = layer.subLayerPaths[index]
+            existing_order = args.get("order")
+            existing_order = int(existing_order) if existing_order else None
+            if order is not None and order != existing_order:
+                # We need to move the layer, so we will remove this index
+                # and then re-insert it below at the right order
+                log.debug(f"Removing existing layer: {existing_layer}")
+                del layer.subLayerPaths[index]
+                break

             # Put it in the same position where it was before when swapping
             # it with the original, also take over its order metadata
-            order = args.get("order")
-            if order is not None:
-                order = int(order)
-            else:
-                order = None
             contribution_path = _format_path(contribution_path,
-                                             order=order,
+                                             order=existing_order,
                                              layer_id=layer_id)
             log.debug(
-                f"Replacing existing layer: {layer.subLayerPaths[index]} "
+                f"Replacing existing layer: {existing_layer} "
                 f"-> {contribution_path}"
             )
             layer.subLayerPaths[index] = contribution_path
@@ -1,16 +1,19 @@
+from __future__ import annotations
+from typing import Optional, Any
+
 from ayon_core.lib.profiles_filtering import filter_profiles
 from ayon_core.settings import get_project_settings


 def get_versioning_start(
-    project_name,
-    host_name,
-    task_name=None,
-    task_type=None,
-    product_type=None,
-    product_name=None,
-    project_settings=None,
-):
+    project_name: str,
+    host_name: str,
+    task_name: Optional[str] = None,
+    task_type: Optional[str] = None,
+    product_type: Optional[str] = None,
+    product_name: Optional[str] = None,
+    project_settings: Optional[dict[str, Any]] = None,
+) -> int:
     """Get anatomy versioning start"""
     if not project_settings:
         project_settings = get_project_settings(project_name)
@@ -22,14 +25,12 @@ def get_versioning_start(
     if not profiles:
         return version_start

-    # TODO use 'product_types' and 'product_name' instead of
-    #   'families' and 'subsets'
     filtering_criteria = {
         "host_names": host_name,
-        "families": product_type,
+        "product_types": product_type,
+        "product_names": product_name,
         "task_names": task_name,
         "task_types": task_type,
-        "subsets": product_name
     }
     profile = filter_profiles(profiles, filtering_criteria)
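`get_versioning_start` hands its criteria to `filter_profiles`, which picks the first settings profile whose filters accept the context. A simplified stand-alone sketch of that idea, assuming the common convention that an empty filter list means "matches anything" (ayon_core's real `filter_profiles` is more involved, with scoring and wildcard handling):

```python
def pick_profile(profiles, criteria):
    """Return the first profile whose filter lists accept every
    criterion; an empty list is treated as a match-all filter."""
    for profile in profiles:
        if all(
            not profile.get(key) or value in profile[key]
            for key, value in criteria.items()
        ):
            return profile
    return None

profiles = [
    {"host_names": ["maya"], "product_types": ["render"], "version_start": 0},
    # Empty filters: catch-all fallback profile
    {"host_names": [], "product_types": [], "version_start": 1},
]
profile = pick_profile(
    profiles, {"host_names": "nuke", "product_types": "model"}
)
```

With the criteria above the Maya/render profile is rejected and the catch-all profile wins, so versioning would start at 1.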
@@ -1483,7 +1483,7 @@ class PlaceholderLoadMixin(object):
                 tooltip=(
                     "Link Type\n"
                     "\nDefines what type of link will be used to"
-                    " link the asset to the current folder."
+                    " link the product to the current folder."
                 )
             ),
             attribute_definitions.EnumDef(
@@ -62,8 +62,8 @@ class CreateHeroVersion(load.ProductLoaderPlugin):
     ignored_representation_names: list[str] = []
     db_representation_context_keys = [
-        "project", "folder", "asset", "hierarchy", "task", "product",
-        "subset", "family", "representation", "username", "user", "output"
+        "project", "folder", "hierarchy", "task", "product",
+        "representation", "username", "user", "output"
     ]
     use_hardlinks = False
@@ -301,8 +301,6 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
         product_name = instance.data["productName"]
         product_type = instance.data["productType"]
         anatomy_data.update({
-            "family": product_type,
-            "subset": product_name,
             "product": {
                 "name": product_name,
                 "type": product_type,
@@ -25,7 +25,7 @@ class CollectManagedStagingDir(pyblish.api.InstancePlugin):
     Location of the folder is configured in:
         `ayon+anatomy://_/templates/staging`.

-    Which family/task type/subset is applicable is configured in:
+    Which product type/task type/product is applicable is configured in:
         `ayon+settings://core/tools/publish/custom_staging_dir_profiles`
     """
@@ -1,3 +1,5 @@
+from __future__ import annotations
+from typing import Any
+
 import ayon_api
 import ayon_api.utils

@@ -32,6 +34,8 @@ class CollectSceneLoadedVersions(pyblish.api.ContextPlugin):
             self.log.debug("No loaded containers found in scene.")
             return

+        containers = self._filter_invalid_containers(containers)
+
         repre_ids = {
             container["representation"]
             for container in containers
@@ -78,3 +82,28 @@ class CollectSceneLoadedVersions(pyblish.api.ContextPlugin):
         self.log.debug(f"Collected {len(loaded_versions)} loaded versions.")
         context.data["loadedVersions"] = loaded_versions
+
+    def _filter_invalid_containers(
+        self,
+        containers: list[dict[str, Any]]
+    ) -> list[dict[str, Any]]:
+        """Filter out invalid containers lacking required keys.
+
+        Skip any invalid containers that lack 'representation' or 'name'
+        keys to avoid KeyError.
+        """
+        # Only filter by what's required for this plug-in instead of
+        # validating a full container schema.
+        required_keys = {"name", "representation"}
+        valid = []
+        for container in containers:
+            missing = [key for key in required_keys if key not in container]
+            if missing:
+                self.log.warning(
+                    "Skipping invalid container, missing required keys:"
+                    " {}. {}".format(", ".join(missing), container)
+                )
+                continue
+            valid.append(container)
+        return valid
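The container filtering above only checks the two keys this plug-in reads, rather than validating a full container schema. The same logic in a stand-alone form, with `print` standing in for the plug-in's `self.log.warning`:

```python
def filter_valid_containers(containers, required_keys=("name", "representation")):
    """Drop containers missing any required key, reporting what was missing."""
    valid = []
    for container in containers:
        missing = [key for key in required_keys if key not in container]
        if missing:
            # Report and skip instead of raising KeyError later
            print("Skipping invalid container, missing: " + ", ".join(missing))
            continue
        valid.append(container)
    return valid

containers = [
    {"name": "modelMain", "representation": "abc123"},
    {"name": "incomplete"},  # lacks 'representation', will be skipped
]
valid = filter_valid_containers(containers)
```

Only the complete container survives; the incomplete one is skipped with a warning instead of crashing the collector.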
@@ -316,22 +316,8 @@ class ExtractBurnin(publish.Extractor):
             burnin_values = {}
             for key in self.positions:
                 value = burnin_def.get(key)
-                if not value:
-                    continue
-
-                # TODO remove replacements
-                burnin_values[key] = (
-                    value
-                    .replace("{task}", "{task[name]}")
-                    .replace("{product[name]}", "{subset}")
-                    .replace("{Product[name]}", "{Subset}")
-                    .replace("{PRODUCT[NAME]}", "{SUBSET}")
-                    .replace("{product[type]}", "{family}")
-                    .replace("{Product[type]}", "{Family}")
-                    .replace("{PRODUCT[TYPE]}", "{FAMILY}")
-                    .replace("{folder[name]}", "{asset}")
-                    .replace("{Folder[name]}", "{Asset}")
-                    .replace("{FOLDER[NAME]}", "{ASSET}")
-                )
+                if value:
+                    burnin_values[key] = value

             # Remove "delete" tag from new representation
             if "delete" in new_repre["tags"]:
@@ -172,20 +172,33 @@ class ExtractOIIOTranscode(publish.Extractor):
                 additional_command_args = (output_def["oiiotool_args"]
                                            ["additional_command_args"])

-                sequence_files = self._translate_to_sequence(files_to_convert)
+                sequence_files = self._translate_to_sequence(
+                    files_to_convert)
                 self.log.debug("Files to convert: {}".format(sequence_files))

                 missing_rgba_review_channels = False
                 for file_name in sequence_files:
                     if isinstance(file_name, clique.Collection):
-                        # Convert to filepath that can be directly converted
-                        # by oiio like `frame.1001-1025%04d.exr`
-                        file_name: str = file_name.format(
-                            "{head}{range}{padding}{tail}"
-                        )
+                        # Support sequences with holes by supplying
+                        # dedicated `--frames` argument to `oiiotool`.
+                        # Create `frames` string like "1001-1002,1004,1010-1012"
+                        # Create `filename` string like "file.#.exr"
+                        frames = file_name.format("{ranges}").replace(" ", "")
+                        frame_padding = file_name.padding
+                        file_name = file_name.format("{head}#{tail}")
+                        parallel_frames = True
+                    elif isinstance(file_name, str):
+                        # Single file
+                        frames = None
+                        frame_padding = None
+                        parallel_frames = False
+                    else:
+                        raise TypeError(
+                            f"Unsupported file name type: {type(file_name)}."
+                            " Expected str or clique.Collection."
+                        )

                     self.log.debug("Transcoding file: `{}`".format(file_name))
-                    input_path = os.path.join(original_staging_dir,
-                                              file_name)
+                    input_path = os.path.join(original_staging_dir, file_name)
                     output_path = self._get_output_file_path(input_path,
                                                              new_staging_dir,
                                                              output_extension)
@@ -201,6 +214,9 @@ class ExtractOIIOTranscode(publish.Extractor):
                             source_display=source_display,
                             source_view=source_view,
                             additional_command_args=additional_command_args,
+                            frames=frames,
+                            frame_padding=frame_padding,
+                            parallel_frames=parallel_frames,
                             logger=self.log
                         )
                     except MissingRGBAChannelsError as exc:
@@ -294,16 +310,18 @@ class ExtractOIIOTranscode(publish.Extractor):
         new_repre["files"] = renamed_files

     def _translate_to_sequence(self, files_to_convert):
-        """Returns original list or a clique.Collection of a sequence.
-
-        Uses clique to find frame sequence Collection.
-        If sequence not found, it returns original list.
-
-        Args:
-            files_to_convert (list): list of file names
-        Returns:
-            list[str | clique.Collection]: List of filepaths or a list
-                of Collections (usually one, unless there are holes)
-        """
+        """Returns original individual filepaths or list of clique.Collection.
+
+        Uses clique to find frame sequences and returns the collections
+        instead. If no sequence is detected in the input filenames, it
+        returns the original list.
+
+        Args:
+            files_to_convert (list[str]): list of file names
+        Returns:
+            list[str | clique.Collection]: List of
+                filepaths ['fileA.exr', 'fileB.exr']
+                or clique.Collection for a sequence.
+        """
         pattern = [clique.PATTERNS["frames"]]
         collections, _ = clique.assemble(
@@ -314,14 +332,7 @@ class ExtractOIIOTranscode(publish.Extractor):
                 raise ValueError(
                     "Too many collections {}".format(collections))

-            collection = collections[0]
-            # TODO: Technically oiiotool supports holes in the sequence as
-            #   well using the dedicated --frames argument to specify the
-            #   frames. We may want to use that too so conversions of
-            #   sequences with holes will perform faster as well.
-            # Separate the collection so that we have no holes/gaps per
-            #   collection.
-            return collection.separate()
+            return collections
         return files_to_convert
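The `frames` string built from a `clique.Collection` above (e.g. `"1001-1002,1004,1010-1012"`) is the collection's `{ranges}` token with whitespace stripped. For readers without `clique` at hand, the same range-collapsing can be sketched in plain Python:

```python
def format_ranges(frames):
    """Collapse sorted frame numbers into a ranges string like
    "1001-1002,1004" - the equivalent of clique's "{ranges}" token
    with the whitespace removed."""
    ranges = []
    start = prev = frames[0]
    for frame in frames[1:]:
        if frame == prev + 1:
            # still contiguous, extend the current range
            prev = frame
            continue
        # gap found, close the current range
        ranges.append((start, prev))
        start = prev = frame
    ranges.append((start, prev))
    return ",".join(
        str(a) if a == b else f"{a}-{b}" for a, b in ranges
    )


print(format_ranges([1001, 1002, 1004, 1010, 1011, 1012]))
# -> 1001-1002,1004,1010-1012
```

`oiiotool` can then consume such a string directly via its `--frames` argument, with `#` in the filename standing in for the padded frame number.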


@@ -169,7 +169,9 @@ class ExtractReview(pyblish.api.InstancePlugin):
     settings_category = "core"

     # Supported extensions
-    image_exts = {"exr", "jpg", "jpeg", "png", "dpx", "tga", "tiff", "tif"}
+    image_exts = {
+        "exr", "jpg", "jpeg", "png", "dpx", "tga", "tiff", "tif", "psd"
+    }
     video_exts = {"mov", "mp4"}
     supported_exts = image_exts | video_exts

@@ -401,6 +403,10 @@ class ExtractReview(pyblish.api.InstancePlugin):
                         new_staging_dir,
                         self.log
                     )
+                    # The OIIO conversion will remap the RGBA channels just to
+                    # `R,G,B,A` so we will pass the intermediate file to FFMPEG
+                    # without layer name.
+                    layer_name = ""

                 try:
                     self._render_output_definitions(


@@ -1,8 +1,9 @@
 import copy
+from dataclasses import dataclass, field, fields
 import os
 import subprocess
 import tempfile
-import re
+from typing import Dict, Any, List, Tuple, Optional

 import pyblish.api
 from ayon_core.lib import (
@@ -15,6 +16,7 @@ from ayon_core.lib import (
     path_to_subprocess_arg,
     run_subprocess,
+    filter_profiles,
 )
 from ayon_core.lib.transcoding import (
     MissingRGBAChannelsError,
@@ -26,6 +28,61 @@ from ayon_core.lib.transcoding import (
 from ayon_core.lib.transcoding import VIDEO_EXTENSIONS, IMAGE_EXTENSIONS


+@dataclass
+class ThumbnailDef:
+    """Data class representing the full configuration for selected profile.
+
+    Any change of controllable fields in Settings must propagate here!
+    """
+    integrate_thumbnail: bool = False
+    target_size: Dict[str, Any] = field(
+        default_factory=lambda: {
+            "type": "source",
+            "resize": {"width": 1920, "height": 1080},
+        }
+    )
+    duration_split: float = 0.5
+    oiiotool_defaults: Dict[str, str] = field(
+        default_factory=lambda: {
+            "type": "colorspace",
+            "colorspace": "color_picking"
+        }
+    )
+    ffmpeg_args: Dict[str, List[Any]] = field(
+        default_factory=lambda: {"input": [], "output": []}
+    )
+    # Background color defined as (R, G, B, A) tuple.
+    # Note: Use float for alpha channel (0.0 to 1.0).
+    background_color: Tuple[int, int, int, float] = (0, 0, 0, 0.0)
+
+    @classmethod
+    def from_dict(cls, data: Dict[str, Any]) -> "ThumbnailDef":
+        """Creates a ThumbnailDef instance from a dictionary, safely
+        ignoring any keys in the dictionary that are not fields in the
+        dataclass.
+
+        Args:
+            data (Dict[str, Any]): The dictionary containing
+                configuration data.
+
+        Returns:
+            ThumbnailDef: A new instance of the dataclass.
+        """
+        # Get all field names defined in the dataclass
+        field_names = {f.name for f in fields(cls)}
+        # Filter input dictionary to include only keys matching field names
+        filtered_data = {k: v for k, v in data.items() if k in field_names}
+        # Unpack the filtered dictionary into the constructor
+        return cls(**filtered_data)
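The `from_dict` classmethod above is a reusable pattern: building a dataclass from a loosely-shaped settings payload while silently dropping unknown keys. A standalone sketch (with a trimmed-down stand-in for the full `ThumbnailDef`):

```python
from dataclasses import dataclass, fields
from typing import Any, Dict


@dataclass
class ThumbnailDefSketch:
    # trimmed-down, hypothetical stand-in for ThumbnailDef
    integrate_thumbnail: bool = False
    duration_split: float = 0.5

    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> "ThumbnailDefSketch":
        # keep only keys that are actual dataclass fields
        known = {f.name for f in fields(cls)}
        return cls(**{k: v for k, v in data.items() if k in known})


# extra keys from a settings payload are silently ignored,
# missing keys fall back to the field defaults
definition = ThumbnailDefSketch.from_dict(
    {"duration_split": 0.25, "some_future_setting": True}
)
```

This keeps the plugin tolerant of settings schemas that grow new keys before the code catches up.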
+
+
 class ExtractThumbnail(pyblish.api.InstancePlugin):
     """Create jpg thumbnail from sequence using ffmpeg"""

@@ -52,30 +109,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
     settings_category = "core"
     enabled = False

-    integrate_thumbnail = False
-    target_size = {
-        "type": "source",
-        "resize": {
-            "width": 1920,
-            "height": 1080
-        }
-    }
-    background_color = (0, 0, 0, 0.0)
-    duration_split = 0.5
-    # attribute presets from settings
-    oiiotool_defaults = {
-        "type": "colorspace",
-        "colorspace": "color_picking",
-        "display_and_view": {
-            "display": "default",
-            "view": "sRGB"
-        }
-    }
-    ffmpeg_args = {
-        "input": [],
-        "output": []
-    }
-    product_names = []
+    profiles = []

     def process(self, instance):
         # run main process
@@ -98,6 +132,13 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
             instance.data["representations"].remove(repre)

     def _main_process(self, instance):
+        if not self.profiles:
+            self.log.debug("No profiles present for extract review thumbnail.")
+            return
+
+        thumbnail_def = self._get_config_from_profile(instance)
+        if not thumbnail_def:
+            return
+
         product_name = instance.data["productName"]
         instance_repres = instance.data.get("representations")
         if not instance_repres:
@@ -130,24 +171,6 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
             self.log.debug("Skipping crypto passes.")
             return

-        # We only want to process the produces needed from settings.
-        def validate_string_against_patterns(input_str, patterns):
-            for pattern in patterns:
-                if re.match(pattern, input_str):
-                    return True
-            return False
-
-        product_names = self.product_names
-        if product_names:
-            result = validate_string_against_patterns(
-                product_name, product_names
-            )
-            if not result:
-                self.log.debug((
-                    "Product name \"{}\" did not match settings filters: {}"
-                ).format(product_name, product_names))
-                return
-
         # first check for any explicitly marked representations for thumbnail
         explicit_repres = self._get_explicit_repres_for_thumbnail(instance)
         if explicit_repres:
@@ -192,7 +215,8 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
             )
             file_path = self._create_frame_from_video(
                 video_file_path,
-                dst_staging
+                dst_staging,
+                thumbnail_def
             )
             if file_path:
                 src_staging, input_file = os.path.split(file_path)
@@ -205,7 +229,8 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
                 if "slate-frame" in repre.get("tags", []):
                     repre_files_thumb = repre_files_thumb[1:]
                 file_index = int(
-                    float(len(repre_files_thumb)) * self.duration_split)
+                    float(len(repre_files_thumb)) * thumbnail_def.duration_split  # noqa: E501
+                )
                 input_file = repre_files[file_index]

             full_input_path = os.path.join(src_staging, input_file)
@@ -234,7 +259,8 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
                 repre_thumb_created = self._create_colorspace_thumbnail(
                     full_input_path,
                     full_output_path,
-                    colorspace_data
+                    colorspace_data,
+                    thumbnail_def,
                 )

             # Try to use FFMPEG if OIIO is not supported or for cases when
@@ -242,13 +268,13 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
             # colorspace data
             if not repre_thumb_created:
                 repre_thumb_created = self._create_thumbnail_ffmpeg(
-                    full_input_path, full_output_path
+                    full_input_path, full_output_path, thumbnail_def
                 )

             # Skip representation and try next one if wasn't created
             if not repre_thumb_created and oiio_supported:
                 repre_thumb_created = self._create_thumbnail_oiio(
-                    full_input_path, full_output_path
+                    full_input_path, full_output_path, thumbnail_def
                 )

             if not repre_thumb_created:
@@ -276,7 +302,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
             new_repre_tags = ["thumbnail"]
             # for workflows which needs to have thumbnails published as
             # separate representations `delete` tag should not be added
-            if not self.integrate_thumbnail:
+            if not thumbnail_def.integrate_thumbnail:
                 new_repre_tags.append("delete")

             new_repre = {
@@ -375,7 +401,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
         return review_repres + other_repres

-    def _is_valid_images_repre(self, repre):
+    def _is_valid_images_repre(self, repre: dict) -> bool:
         """Check if representation contains valid image files

         Args:
@@ -395,9 +421,10 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
     def _create_colorspace_thumbnail(
         self,
-        src_path,
-        dst_path,
-        colorspace_data,
+        src_path: str,
+        dst_path: str,
+        colorspace_data: dict,
+        thumbnail_def: ThumbnailDef,
     ):
         """Create thumbnail using OIIO tool oiiotool
@@ -410,12 +437,15 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
                 config (dict)
                 display (Optional[str])
                 view (Optional[str])
+            thumbnail_def (ThumbnailDef): Thumbnail definition.

         Returns:
             str: path to created thumbnail
         """
-        self.log.info("Extracting thumbnail {}".format(dst_path))
-        resolution_arg = self._get_resolution_arg("oiiotool", src_path)
+        self.log.info(f"Extracting thumbnail {dst_path}")
+        resolution_arg = self._get_resolution_args(
+            "oiiotool", src_path, thumbnail_def
+        )

         repre_display = colorspace_data.get("display")
         repre_view = colorspace_data.get("view")
@@ -434,12 +464,13 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
             )
         # if representation doesn't have display and view then use
         # oiiotool_defaults
-        elif self.oiiotool_defaults:
-            oiio_default_type = self.oiiotool_defaults["type"]
+        elif thumbnail_def.oiiotool_defaults:
+            oiiotool_defaults = thumbnail_def.oiiotool_defaults
+            oiio_default_type = oiiotool_defaults["type"]
             if "colorspace" == oiio_default_type:
-                oiio_default_colorspace = self.oiiotool_defaults["colorspace"]
+                oiio_default_colorspace = oiiotool_defaults["colorspace"]
             else:
-                display_and_view = self.oiiotool_defaults["display_and_view"]
+                display_and_view = oiiotool_defaults["display_and_view"]
                 oiio_default_display = display_and_view["display"]
                 oiio_default_view = display_and_view["view"]
@@ -466,18 +497,24 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
         return True

-    def _create_thumbnail_oiio(self, src_path, dst_path):
+    def _create_thumbnail_oiio(self, src_path, dst_path, thumbnail_def):
         self.log.debug(f"Extracting thumbnail with OIIO: {dst_path}")
         try:
-            resolution_arg = self._get_resolution_arg("oiiotool", src_path)
+            resolution_arg = self._get_resolution_args(
+                "oiiotool", src_path, thumbnail_def
+            )
         except RuntimeError:
             self.log.warning(
                 "Failed to create thumbnail using oiio", exc_info=True
             )
             return False

-        input_info = get_oiio_info_for_input(src_path, logger=self.log)
+        input_info = get_oiio_info_for_input(
+            src_path,
+            logger=self.log,
+            verbose=False,
+        )
         try:
             input_arg, channels_arg = get_oiio_input_and_channel_args(
                 input_info
@@ -510,9 +547,11 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
             )
             return False

-    def _create_thumbnail_ffmpeg(self, src_path, dst_path):
+    def _create_thumbnail_ffmpeg(self, src_path, dst_path, thumbnail_def):
         try:
-            resolution_arg = self._get_resolution_arg("ffmpeg", src_path)
+            resolution_arg = self._get_resolution_args(
+                "ffmpeg", src_path, thumbnail_def
+            )
         except RuntimeError:
             self.log.warning(
                 "Failed to create thumbnail using ffmpeg", exc_info=True
@@ -520,7 +559,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
             return False

         ffmpeg_path_args = get_ffmpeg_tool_args("ffmpeg")
-        ffmpeg_args = self.ffmpeg_args or {}
+        ffmpeg_args = thumbnail_def.ffmpeg_args or {}

         jpeg_items = [
             subprocess.list2cmdline(ffmpeg_path_args)
@@ -560,7 +599,12 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
             )
             return False

-    def _create_frame_from_video(self, video_file_path, output_dir):
+    def _create_frame_from_video(
+        self,
+        video_file_path: str,
+        output_dir: str,
+        thumbnail_def: ThumbnailDef,
+    ) -> Optional[str]:
         """Convert video file to one frame image via ffmpeg"""
         # create output file path
         base_name = os.path.basename(video_file_path)
@@ -585,7 +629,7 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
         seek_position = 0.0
         # Only use timestamp calculation for videos longer than 0.1 seconds
         if duration > 0.1:
-            seek_position = duration * self.duration_split
+            seek_position = duration * thumbnail_def.duration_split

         # Build command args
         cmd_args = []
@@ -659,16 +703,17 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
         ):
             os.remove(output_thumb_file_path)

-    def _get_resolution_arg(
+    def _get_resolution_args(
         self,
-        application,
-        input_path,
-    ):
+        application: str,
+        input_path: str,
+        thumbnail_def: ThumbnailDef,
+    ) -> list:
         # get settings
-        if self.target_size["type"] == "source":
+        if thumbnail_def.target_size["type"] == "source":
             return []

-        resize = self.target_size["resize"]
+        resize = thumbnail_def.target_size["resize"]
         target_width = resize["width"]
         target_height = resize["height"]
@@ -678,6 +723,43 @@ class ExtractThumbnail(pyblish.api.InstancePlugin):
             input_path,
             target_width,
             target_height,
-            bg_color=self.background_color,
+            bg_color=thumbnail_def.background_color,
             log=self.log
         )
+
+    def _get_config_from_profile(
+        self,
+        instance: pyblish.api.Instance
+    ) -> Optional[ThumbnailDef]:
+        """Return thumbnail definition for the profile matching instance."""
+        host_name = instance.context.data["hostName"]
+        product_type = instance.data["productType"]
+        product_name = instance.data["productName"]
+        task_data = instance.data["anatomyData"].get("task", {})
+        task_name = task_data.get("name")
+        task_type = task_data.get("type")
+
+        filtering_criteria = {
+            "host_names": host_name,
+            "product_types": product_type,
+            "product_names": product_name,
+            "task_names": task_name,
+            "task_types": task_type,
+        }
+        profile = filter_profiles(
+            self.profiles,
+            filtering_criteria,
+            logger=self.log
+        )
+        if not profile:
+            self.log.debug(
+                "Skipped instance. None of profiles in presets are for"
+                f' Host: "{host_name}"'
+                f' | Product types: "{product_type}"'
+                f' | Product names: "{product_name}"'
+                f' | Task name "{task_name}"'
+                f' | Task type "{task_type}"'
+            )
+            return None
+        return ThumbnailDef.from_dict(profile)
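`filter_profiles` is AYON's own helper; the gist of the profile matching it performs — pick a profile whose filter lists all accept the instance's criteria, with empty filters matching anything — can be illustrated with a hypothetical `pick_profile` (a simplified sketch, not the real implementation):

```python
from fnmatch import fnmatch


def pick_profile(profiles, criteria):
    # hypothetical, simplified matcher: a profile matches when every
    # non-empty filter list contains a pattern matching the criteria
    # value; an empty (or missing) filter list means "match anything"
    for profile in profiles:
        if all(
            any(fnmatch(str(value), str(pat)) for pat in profile[key])
            for key, value in criteria.items()
            if profile.get(key)
        ):
            return profile
    return None


profiles = [
    {"host_names": ["maya"], "product_types": ["render"],
     "integrate_thumbnail": True},
    # catch-all profile: no filters set
    {"host_names": [], "product_types": [],
     "integrate_thumbnail": False},
]
profile = pick_profile(
    profiles, {"host_names": "nuke", "product_types": "review"}
)
```

Here a Nuke review instance falls through to the catch-all profile, while a Maya render instance would match the first one.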


@@ -14,6 +14,7 @@ Todos:
 import os
 import tempfile
+from typing import List, Optional

 import pyblish.api
 from ayon_core.lib import (
@@ -22,6 +23,7 @@ from ayon_core.lib import (
     is_oiio_supported,
     run_subprocess,
+    get_rescaled_command_arguments,
 )


@@ -31,17 +33,20 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
     Thumbnail source must be a single image or video filepath.
     """

-    label = "Extract Thumbnail (from source)"
+    label = "Extract Thumbnail from source"
     # Before 'ExtractThumbnail' in global plugins
     order = pyblish.api.ExtractorOrder - 0.00001

-    def process(self, instance):
+    # Settings
+    target_size = {
+        "type": "resize",
+        "resize": {"width": 1920, "height": 1080}
+    }
+    background_color = (0, 0, 0, 0.0)
+
+    def process(self, instance: pyblish.api.Instance):
         self._create_context_thumbnail(instance.context)

-        product_name = instance.data["productName"]
-        self.log.debug(
-            "Processing instance with product name {}".format(product_name)
-        )
         thumbnail_source = instance.data.get("thumbnailSource")
         if not thumbnail_source:
             self.log.debug("Thumbnail source not filled. Skipping.")
@@ -69,6 +74,8 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
             "outputName": "thumbnail",
         }

+        new_repre["tags"].append("delete")
+
         # adding representation
         self.log.debug(
             "Adding thumbnail representation: {}".format(new_repre)
@@ -76,7 +83,11 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
         instance.data["representations"].append(new_repre)
         instance.data["thumbnailPath"] = dst_filepath

-    def _create_thumbnail(self, context, thumbnail_source):
+    def _create_thumbnail(
+        self,
+        context: pyblish.api.Context,
+        thumbnail_source: str,
+    ) -> Optional[str]:
         if not thumbnail_source:
             self.log.debug("Thumbnail source not filled. Skipping.")
             return
@@ -131,7 +142,7 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):

         self.log.warning("Thumbnail has not been created.")

-    def _instance_has_thumbnail(self, instance):
+    def _instance_has_thumbnail(self, instance: pyblish.api.Instance) -> bool:
         if "representations" not in instance.data:
             self.log.warning(
                 "Instance does not have 'representations' key filled"
@@ -143,14 +154,29 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
                 return True
         return False

-    def create_thumbnail_oiio(self, src_path, dst_path):
+    def create_thumbnail_oiio(
+        self,
+        src_path: str,
+        dst_path: str,
+    ) -> bool:
         self.log.debug("Outputting thumbnail with OIIO: {}".format(dst_path))
-        oiio_cmd = get_oiio_tool_args(
-            "oiiotool",
-            "-a", src_path,
-            "--ch", "R,G,B",
-            "-o", dst_path
-        )
+        try:
+            resolution_args = self._get_resolution_args(
+                "oiiotool", src_path
+            )
+        except Exception:
+            self.log.warning("Failed to get resolution args for OIIO.")
+            return False
+
+        oiio_cmd = get_oiio_tool_args("oiiotool", "-a", src_path)
+        if resolution_args:
+            # resize must be before -o
+            oiio_cmd.extend(resolution_args)
+        else:
+            # resize provides own --ch, must be only one
+            oiio_cmd.extend(["--ch", "R,G,B"])
+        oiio_cmd.extend(["-o", dst_path])
+
         self.log.debug("Running: {}".format(" ".join(oiio_cmd)))
         try:
             run_subprocess(oiio_cmd, logger=self.log)
@@ -162,7 +188,19 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
             )
             return False

-    def create_thumbnail_ffmpeg(self, src_path, dst_path):
+    def create_thumbnail_ffmpeg(
+        self,
+        src_path: str,
+        dst_path: str,
+    ) -> bool:
+        try:
+            resolution_args = self._get_resolution_args(
+                "ffmpeg", src_path
+            )
+        except Exception:
+            self.log.warning("Failed to get resolution args for ffmpeg.")
+            return False
+
         max_int = str(2147483647)
         ffmpeg_cmd = get_ffmpeg_tool_args(
             "ffmpeg",
@@ -171,9 +209,13 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
             "-probesize", max_int,
             "-i", src_path,
             "-frames:v", "1",
-            dst_path
         )
+        ffmpeg_cmd.extend(resolution_args)
+        # possible resize must be before output args
+        ffmpeg_cmd.append(dst_path)
+
         self.log.debug("Running: {}".format(" ".join(ffmpeg_cmd)))
         try:
             run_subprocess(ffmpeg_cmd, logger=self.log)
@@ -185,10 +227,37 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin):
             )
             return False

-    def _create_context_thumbnail(self, context):
+    def _create_context_thumbnail(
+        self,
+        context: pyblish.api.Context,
+    ):
         if "thumbnailPath" in context.data:
             return

         thumbnail_source = context.data.get("thumbnailSource")
-        thumbnail_path = self._create_thumbnail(context, thumbnail_source)
-        context.data["thumbnailPath"] = thumbnail_path
+        context.data["thumbnailPath"] = self._create_thumbnail(
+            context, thumbnail_source
+        )
+
+    def _get_resolution_args(
+        self,
+        application: str,
+        input_path: str,
+    ) -> List[str]:
+        # get settings
+        if self.target_size["type"] == "source":
+            return []

+        resize = self.target_size["resize"]
+        target_width = resize["width"]
+        target_height = resize["height"]
+
+        # form arg string per application
+        return get_rescaled_command_arguments(
+            application,
+            input_path,
+            target_width,
+            target_height,
+            bg_color=self.background_color,
+            log=self.log,
+        )
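The argument-ordering comments above ("resize must be before -o", "possible resize must be before output args") boil down to one rule: rescale arguments get spliced into the command list ahead of the output path. A sketch with a hypothetical stand-in for AYON's `get_rescaled_command_arguments`:

```python
def rescale_args(application: str, width: int, height: int) -> list:
    # hypothetical stand-in for ayon_core's get_rescaled_command_arguments;
    # the real helper also handles padding and background color
    if application == "ffmpeg":
        return ["-vf", f"scale={width}:{height}"]
    if application == "oiiotool":
        return ["--resize", f"{width}x{height}"]
    raise ValueError(f"Unknown application: {application}")


# ffmpeg output options apply to the output file that FOLLOWS them,
# so the scale filter has to be appended before the output path
cmd = ["ffmpeg", "-y", "-i", "input.mov", "-frames:v", "1"]
cmd.extend(rescale_args("ffmpeg", 1920, 1080))
cmd.append("thumbnail.jpg")
```

The same ordering holds for `oiiotool`, where `--resize` must appear before the `-o` output argument it should affect.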


@@ -2,6 +2,7 @@ from operator import attrgetter
 import dataclasses
 import os
 import platform
+from collections import defaultdict
 from typing import Any, Dict, List

 import pyblish.api
@@ -13,10 +14,11 @@ except ImportError:
 from ayon_core.lib import (
     TextDef,
     BoolDef,
+    NumberDef,
     UISeparatorDef,
     UILabelDef,
     EnumDef,
-    filter_profiles
+    filter_profiles,
 )
 try:
     from ayon_core.pipeline.usdlib import (
@@ -278,20 +280,24 @@ class CollectUSDLayerContributions(pyblish.api.InstancePlugin,
     # level, you can add it directly from the publisher at that particular
     # order. Future publishes will then see the existing contribution and will
     # persist adding it to future bootstraps at that order
-    contribution_layers: Dict[str, int] = {
-        # asset layers
-        "model": 100,
-        "assembly": 150,
-        "groom": 175,
-        "look": 200,
-        "rig": 300,
-        # shot layers
-        "layout": 200,
-        "animation": 300,
-        "simulation": 400,
-        "fx": 500,
-        "lighting": 600,
-    }
+    contribution_layers: Dict[str, Dict[str, int]] = {
+        # asset layers
+        "asset": {
+            "model": 100,
+            "assembly": 150,
+            "groom": 175,
+            "look": 200,
+            "rig": 300,
+        },
+        # shot layers
+        "shot": {
+            "layout": 200,
+            "animation": 300,
+            "simulation": 400,
+            "fx": 500,
+            "lighting": 600,
+        }
+    }
     # Default profiles to set certain instance attribute defaults based on
     # profiles in settings
     profiles: List[Dict[str, Any]] = []
@@ -305,12 +311,18 @@ class CollectUSDLayerContributions(pyblish.api.InstancePlugin,
         cls.enabled = plugin_settings.get("enabled", cls.enabled)

-        # Define contribution layers via settings
-        contribution_layers = {}
-        for entry in plugin_settings.get("contribution_layers", []):
-            contribution_layers[entry["name"]] = int(entry["order"])
+        # Define contribution layers via settings by their scope
+        contribution_layers = defaultdict(dict)
+        for entry in plugin_settings.get("contribution_layers", []):
+            for scope in entry.get("scope", []):
+                contribution_layers[scope][entry["name"]] = int(entry["order"])
         if contribution_layers:
-            cls.contribution_layers = contribution_layers
+            cls.contribution_layers = dict(contribution_layers)
+        else:
+            cls.log.warning(
+                "No scoped contribution layers found in settings, falling back"
+                " to CollectUSDLayerContributions plug-in defaults..."
+            )

         cls.profiles = plugin_settings.get("profiles", [])
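The scoped settings parsing above groups per-layer orders under their scope key via a `defaultdict`. A standalone sketch, with assumed example settings entries:

```python
from collections import defaultdict

# assumed shape of the "contribution_layers" settings entries
settings_entries = [
    {"name": "model", "order": 100, "scope": ["asset"]},
    {"name": "layout", "order": 200, "scope": ["shot"]},
    {"name": "look", "order": 200, "scope": ["asset"]},
]

# group {layer name: order} under each scope the entry declares
contribution_layers = defaultdict(dict)
for entry in settings_entries:
    for scope in entry.get("scope", []):
        contribution_layers[scope][entry["name"]] = int(entry["order"])

# convert to a plain dict, as the plug-in does before storing it
result = dict(contribution_layers)
```

An entry listing multiple scopes would land in each of the corresponding sub-dicts, which is why the inner loop iterates over `entry["scope"]`.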
@@ -334,10 +346,7 @@ class CollectUSDLayerContributions(pyblish.api.InstancePlugin,
             attr_values[key] = attr_values[key].format(**data)

         # Define contribution
-        order = self.contribution_layers.get(
-            attr_values["contribution_layer"], 0
-        )
+        in_layer_order: int = attr_values.get("contribution_in_layer_order", 0)
         if attr_values["contribution_apply_as_variant"]:
             contribution = VariantContribution(
                 instance=instance,
@@ -346,19 +355,23 @@ class CollectUSDLayerContributions(pyblish.api.InstancePlugin,
                 variant_set_name=attr_values["contribution_variant_set_name"],
                 variant_name=attr_values["contribution_variant"],
                 variant_is_default=attr_values["contribution_variant_is_default"],  # noqa: E501
-                order=order
+                order=in_layer_order
             )
         else:
             contribution = SublayerContribution(
                 instance=instance,
                 layer_id=attr_values["contribution_layer"],
                 target_product=attr_values["contribution_target_product"],
-                order=order
+                order=in_layer_order
             )

         asset_product = contribution.target_product
         layer_product = "{}_{}".format(asset_product, contribution.layer_id)

+        scope: str = attr_values["contribution_target_product_init"]
+        layer_order: int = (
+            self.contribution_layers[scope][attr_values["contribution_layer"]]
+        )
+
         # Layer contribution instance
         layer_instance = self.get_or_create_instance(
             product_name=layer_product,
@@ -370,7 +383,7 @@ class CollectUSDLayerContributions(pyblish.api.InstancePlugin,
             contribution
         )
         layer_instance.data["usd_layer_id"] = contribution.layer_id
-        layer_instance.data["usd_layer_order"] = contribution.order
+        layer_instance.data["usd_layer_order"] = layer_order

         layer_instance.data["productGroup"] = (
             instance.data.get("productGroup") or "USD Layer"
@@ -489,14 +502,14 @@ class CollectUSDLayerContributions(pyblish.api.InstancePlugin,
             profile = {}

         # Define defaults
-        default_enabled = profile.get("contribution_enabled", True)
+        default_enabled: bool = profile.get("contribution_enabled", True)
         default_contribution_layer = profile.get(
             "contribution_layer", None)
-        default_apply_as_variant = profile.get(
+        default_apply_as_variant: bool = profile.get(
             "contribution_apply_as_variant", False)
-        default_target_product = profile.get(
+        default_target_product: str = profile.get(
             "contribution_target_product", "usdAsset")
-        default_init_as = (
+        default_init_as: str = (
             "asset"
             if profile.get("contribution_target_product") == "usdAsset"
             else "shot")
@@ -509,6 +522,12 @@ class CollectUSDLayerContributions(pyblish.api.InstancePlugin,
         visible = publish_attributes.get("contribution_enabled", True)
         variant_visible = visible and publish_attributes.get(
             "contribution_apply_as_variant", True)
+        init_as: str = publish_attributes.get(
+            "contribution_target_product_init", default_init_as)
+        contribution_layers = cls.contribution_layers.get(
+            init_as, {}
+        )

         return [
             UISeparatorDef("usd_container_settings1"),
@@ -558,9 +577,22 @@ class CollectUSDLayerContributions(pyblish.api.InstancePlugin,
                         "predefined ordering.\nA higher order (further down "
                         "the list) will contribute as a stronger opinion."
                     ),
-                    items=list(cls.contribution_layers.keys()),
+                    items=list(contribution_layers.keys()),
                     default=default_contribution_layer,
                     visible=visible),
+            # TODO: We may want to make the visibility of this optional
+            #   based on studio preference, to avoid complexity when not needed
+            NumberDef("contribution_in_layer_order",
+                      label="Strength order",
+                      tooltip=(
+                          "The contribution inside the department layer will "
+                          "be made with this offset applied. A higher number "
+                          "means a stronger opinion."
+                      ),
+                      default=0,
+                      minimum=-99999,
+                      maximum=99999,
+                      visible=visible),
             BoolDef("contribution_apply_as_variant",
                     label="Add as variant",
                     tooltip=(
@@ -606,7 +638,11 @@ class CollectUSDLayerContributions(pyblish.api.InstancePlugin,
         # Update attributes if any of the following plug-in attributes
         # change:
-        keys = ["contribution_enabled", "contribution_apply_as_variant"]
+        keys = {
+            "contribution_enabled",
+            "contribution_apply_as_variant",
+            "contribution_target_product_init",
+        }

         for instance_change in event["changes"]:
             instance = instance_change["instance"]
@@ -729,7 +765,7 @@ class ExtractUSDLayerContribution(publish.Extractor):
                 layer=sdf_layer,
                 contribution_path=path,
                 layer_id=product_name,
-                order=None,  # unordered
+                order=contribution.order,
                 add_sdf_arguments_metadata=True
             )
         else:

View file

@@ -0,0 +1,21 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<root>
+    <error id="main">
+        <title>{upload_type} upload timed out</title>
+        <description>
+## {upload_type} upload failed after retries
+
+The connection to the AYON server timed out while uploading a file.
+
+### How to resolve?
+
+1. Try publishing again. Intermittent network hiccups often resolve on retry.
+2. Ensure your network/VPN is stable and large uploads are allowed.
+3. If it keeps failing, try again later or contact your admin.
+
+<pre>File: {file}
+Error: {error}</pre>
+        </description>
+    </error>
+</root>
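The `{upload_type}`, `{file}` and `{error}` placeholders in the template above are filled from the `formatting_data` dictionary passed to `PublishXmlValidationError` in the upload plugins later in this diff. A minimal sketch of that substitution, using plain `str.format` rather than AYON's actual XML help loader:

```python
# Illustrative only: the real template lives in the XML file above;
# here the description is inlined so the formatting step is visible.
description = (
    "## {upload_type} upload failed after retries\n"
    "File: {file}\n"
    "Error: {error}"
)

formatting_data = {
    "upload_type": "Review",
    "file": "/tmp/review_h264.mp4",
    "error": "ConnectionError: timed out",
}

# Fill the placeholders the same way a help template would be formatted
message = description.format(**formatting_data)
print(message)
```

Keeping the placeholders generic (`upload_type` instead of a hardcoded "Review" or "Thumbnail") is what lets both plugins share the single `upload_file.xml` template.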

View file

@@ -28,6 +28,7 @@ from ayon_core.pipeline.publish import (
     KnownPublishError,
     get_publish_template_name,
 )
+from ayon_core.pipeline import is_product_base_type_supported

 log = logging.getLogger(__name__)
@@ -122,10 +123,6 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
         "representation",
         "username",
         "output",
-        # OpenPype keys - should be removed
-        "asset",  # folder[name]
-        "subset",  # product[name]
-        "family",  # product[type]
     ]

     def process(self, instance):
@@ -367,6 +364,8 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
         folder_entity = instance.data["folderEntity"]
         product_name = instance.data["productName"]
         product_type = instance.data["productType"]
+        product_base_type = instance.data.get("productBaseType")
+
         self.log.debug("Product: {}".format(product_name))

         # Get existing product if it exists
@@ -394,15 +393,34 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
         product_id = None
         if existing_product_entity:
             product_id = existing_product_entity["id"]
-        product_entity = new_product_entity(
-            product_name,
-            product_type,
-            folder_entity["id"],
-            data=data,
-            attribs=attributes,
-            entity_id=product_id
-        )
+
+        new_product_entity_kwargs = {
+            "name": product_name,
+            "product_type": product_type,
+            "folder_id": folder_entity["id"],
+            "data": data,
+            "attribs": attributes,
+            "entity_id": product_id,
+            "product_base_type": product_base_type,
+        }
+        if not is_product_base_type_supported():
+            new_product_entity_kwargs.pop("product_base_type")
+            if (
+                product_base_type is not None
+                and product_base_type != product_type):
+                self.log.warning((
+                    "Product base type %s is not supported by the server, "
+                    "but it's defined - and it differs from product type %s. "
+                    "Using product base type as product type."
+                ), product_base_type, product_type)
+                new_product_entity_kwargs["product_type"] = (
+                    product_base_type
+                )
+
+        product_entity = new_product_entity(**new_product_entity_kwargs)

         if existing_product_entity is None:
             # Create a new product
             self.log.info(
@@ -902,8 +920,12 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
         # Include optional data if present in
         optionals = [
-            "frameStart", "frameEnd", "step",
-            "handleEnd", "handleStart", "sourceHashes"
+            "frameStart", "frameEnd",
+            "handleEnd", "handleStart",
+            "step",
+            "resolutionWidth", "resolutionHeight",
+            "pixelAspect",
+            "sourceHashes"
         ]
         for key in optionals:
             if key in instance.data:
@@ -927,6 +949,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
         host_name = context.data["hostName"]
         anatomy_data = instance.data["anatomyData"]
         product_type = instance.data["productType"]
+        product_base_type = instance.data.get("productBaseType")
         task_info = anatomy_data.get("task") or {}

         return get_publish_template_name(
@@ -936,7 +959,8 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
             task_name=task_info.get("name"),
             task_type=task_info.get("type"),
             project_settings=context.data["project_settings"],
-            logger=self.log
+            logger=self.log,
+            product_base_type=product_base_type
         )

     def get_rootless_path(self, anatomy, path):
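The hunk above builds the entity kwargs as a dict first so that an optional, server-version-dependent field can be dropped before the call. A standalone sketch of that capability-gated kwargs pattern; the factory and the capability check below are stand-ins for `new_product_entity` and `is_product_base_type_supported`, not the real AYON functions:

```python
def new_product_entity(name, product_type, **extra):
    """Stand-in for the real entity factory."""
    return {"name": name, "product_type": product_type, **extra}

def server_supports_base_type():
    # Hypothetical capability check; the real code asks the AYON server
    return False

kwargs = {
    "name": "modelMain",
    "product_type": "model",
    "product_base_type": "geometry",
}
if not server_supports_base_type():
    # Older servers reject unknown fields, so drop it before calling
    kwargs.pop("product_base_type")

entity = new_product_entity(**kwargs)
print(entity)  # → {'name': 'modelMain', 'product_type': 'model'}
```

Building kwargs as a dict keeps one call site regardless of which fields the connected server version accepts.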

View file

@@ -1,11 +1,8 @@
 import os
+import sys
 import copy
-import errno
 import itertools
 import shutil
-from concurrent.futures import ThreadPoolExecutor
-from speedcopy import copyfile

 import clique
 import pyblish.api
@@ -16,11 +13,15 @@ from ayon_api.operations import (
 )
 from ayon_api.utils import create_entity_id

-from ayon_core.lib import create_hard_link, source_hash
-from ayon_core.lib.file_transaction import wait_for_future_errors
+from ayon_core.lib import source_hash
+from ayon_core.lib.file_transaction import (
+    FileTransaction,
+    DuplicateDestinationError,
+)
 from ayon_core.pipeline.publish import (
     get_publish_template_name,
     OptionalPyblishPluginMixin,
+    KnownPublishError,
 )
@@ -81,12 +82,9 @@ class IntegrateHeroVersion(
     db_representation_context_keys = [
         "project",
         "folder",
-        "asset",
         "hierarchy",
         "task",
         "product",
-        "subset",
-        "family",
         "representation",
         "username",
         "output"
@@ -424,19 +422,40 @@ class IntegrateHeroVersion(
                 (repre_entity, dst_paths)
             )

-        self.path_checks = []
-
-        # Copy(hardlink) paths of source and destination files
-        # TODO should we *only* create hardlinks?
-        # TODO should we keep files for deletion until this is successful?
-        with ThreadPoolExecutor(max_workers=8) as executor:
-            futures = [
-                executor.submit(self.copy_file, src_path, dst_path)
-                for src_path, dst_path in itertools.chain(
-                    src_to_dst_file_paths, other_file_paths_mapping
-                )
-            ]
-            wait_for_future_errors(executor, futures)
+        file_transactions = FileTransaction(
+            log=self.log,
+            # Enforce unique transfers
+            allow_queue_replacements=False
+        )
+        mode = FileTransaction.MODE_COPY
+        if self.use_hardlinks:
+            mode = FileTransaction.MODE_LINK
+
+        try:
+            for src_path, dst_path in itertools.chain(
+                src_to_dst_file_paths,
+                other_file_paths_mapping
+            ):
+                file_transactions.add(src_path, dst_path, mode=mode)
+
+            self.log.debug("Integrating source files to destination ...")
+            file_transactions.process()
+        except DuplicateDestinationError as exc:
+            # Raise DuplicateDestinationError as KnownPublishError
+            # and rollback the transactions
+            file_transactions.rollback()
+            raise KnownPublishError(exc).with_traceback(sys.exc_info()[2])
+        except Exception as exc:
+            # Rollback the transactions
+            file_transactions.rollback()
+            self.log.critical("Error when copying files", exc_info=True)
+            raise exc
+
+        # Finalizing can't rollback safely so no use for moving it to
+        # the try, except.
+        file_transactions.finalize()

         # Update prepared representation entity data with files
         # and integrate it to server.
@@ -625,48 +644,6 @@ class IntegrateHeroVersion(
             ).format(path))
         return path

-    def copy_file(self, src_path, dst_path):
-        # TODO check drives if are the same to check if cas hardlink
-        dirname = os.path.dirname(dst_path)
-        try:
-            os.makedirs(dirname)
-            self.log.debug("Folder(s) created: \"{}\"".format(dirname))
-        except OSError as exc:
-            if exc.errno != errno.EEXIST:
-                self.log.error("An unexpected error occurred.", exc_info=True)
-                raise
-
-            self.log.debug("Folder already exists: \"{}\"".format(dirname))
-
-        if self.use_hardlinks:
-            # First try hardlink and copy if paths are cross drive
-            self.log.debug("Hardlinking file \"{}\" to \"{}\"".format(
-                src_path, dst_path
-            ))
-            try:
-                create_hard_link(src_path, dst_path)
-                # Return when successful
-                return
-            except OSError as exc:
-                # re-raise exception if different than
-                # EXDEV - cross drive path
-                # EINVAL - wrong format, must be NTFS
-                self.log.debug(
-                    "Hardlink failed with errno:'{}'".format(exc.errno))
-                if exc.errno not in [errno.EXDEV, errno.EINVAL]:
-                    raise
-
-            self.log.debug(
-                "Hardlinking failed, falling back to regular copy...")
-
-        self.log.debug("Copying file \"{}\" to \"{}\"".format(
-            src_path, dst_path
-        ))
-        copyfile(src_path, dst_path)
-
     def version_from_representations(self, project_name, repres):
         for repre in repres:
             version = ayon_api.get_version_by_id(
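The change above replaces the hand-rolled `copy_file` thread pool with `FileTransaction` from `ayon_core.lib.file_transaction`, which stages transfers and can roll them back if any of them fail. A toy illustration of the same add/process/rollback shape; this class is not the real `FileTransaction` API, just a minimal sketch of the pattern:

```python
import os
import shutil
import tempfile

class MiniFileTransaction:
    """Toy transaction: stage copies, then commit or roll back as one unit."""

    def __init__(self):
        self._queued = []       # (src, dst) pairs waiting to be transferred
        self._transferred = []  # destinations actually written so far

    def add(self, src, dst):
        self._queued.append((src, dst))

    def process(self):
        for src, dst in self._queued:
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)
            self._transferred.append(dst)

    def rollback(self):
        # Remove everything written so a failed publish leaves no partial files
        for dst in self._transferred:
            if os.path.exists(dst):
                os.remove(dst)
        self._transferred.clear()

# Usage: stage one copy and process it, rolling back on any error
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "a.txt")
with open(src, "w") as f:
    f.write("data")

dst = os.path.join(tmp, "out", "a.txt")
txn = MiniFileTransaction()
txn.add(src, dst)
try:
    txn.process()
except Exception:
    txn.rollback()
    raise
print(os.path.exists(dst))
```

The payoff of the transactional form is the single rollback point: whether one file or fifty fail mid-copy, the destination is restored to its pre-publish state.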

View file

@@ -105,7 +105,7 @@ class IntegrateInputLinksAYON(pyblish.api.ContextPlugin):
             created links by its type
         """
         if workfile_instance is None:
-            self.log.warning("No workfile in this publish session.")
+            self.log.debug("No workfile in this publish session.")
             return

         workfile_version_id = workfile_instance.data["versionEntity"]["id"]

View file

@@ -62,10 +62,8 @@ class IntegrateProductGroup(pyblish.api.InstancePlugin):
         product_type = instance.data["productType"]

         fill_pairs = prepare_template_data({
-            "family": product_type,
             "task": filter_criteria["tasks"],
             "host": filter_criteria["hosts"],
-            "subset": product_name,
             "product": {
                 "name": product_name,
                 "type": product_type,

View file

@@ -1,11 +1,17 @@
 import os
+import time

-import pyblish.api
 import ayon_api
+from ayon_api import TransferProgress
 from ayon_api.server_api import RequestTypes
+import pyblish.api

-from ayon_core.lib import get_media_mime_type
-from ayon_core.pipeline.publish import get_publish_repre_path
+from ayon_core.lib import get_media_mime_type, format_file_size
+from ayon_core.pipeline.publish import (
+    PublishXmlValidationError,
+    get_publish_repre_path,
+)
+import requests.exceptions


 class IntegrateAYONReview(pyblish.api.InstancePlugin):
@@ -44,7 +50,7 @@ class IntegrateAYONReview(pyblish.api.InstancePlugin):
             if "webreview" not in repre_tags:
                 continue

-            # exclude representations with are going to be published on farm
+            # exclude representations going to be published on farm
             if "publish_on_farm" in repre_tags:
                 continue

@@ -75,18 +81,13 @@ class IntegrateAYONReview(pyblish.api.InstancePlugin):
                 f"/projects/{project_name}"
                 f"/versions/{version_id}/reviewables{query}"
             )
-            filename = os.path.basename(repre_path)
-
-            # Upload the reviewable
-            self.log.info(f"Uploading reviewable '{label or filename}' ...")
-
-            headers = ayon_con.get_headers(content_type)
-            headers["x-file-name"] = filename

             self.log.info(f"Uploading reviewable {repre_path}")
-            ayon_con.upload_file(
+            # Upload with retries and clear help if it keeps failing
+            self._upload_with_retries(
+                ayon_con,
                 endpoint,
                 repre_path,
-                headers=headers,
-                request_type=RequestTypes.post,
+                content_type,
             )

     def _get_review_label(self, repre, uploaded_labels):
@@ -100,3 +101,74 @@ class IntegrateAYONReview(pyblish.api.InstancePlugin):
             idx += 1
             label = f"{orig_label}_{idx}"
         return label
+
+    def _upload_with_retries(
+        self,
+        ayon_con: ayon_api.ServerAPI,
+        endpoint: str,
+        repre_path: str,
+        content_type: str,
+    ):
+        """Upload file with simple retries."""
+        filename = os.path.basename(repre_path)
+        headers = ayon_con.get_headers(content_type)
+        headers["x-file-name"] = filename
+        max_retries = ayon_con.get_default_max_retries()
+        # Retries are already implemented in 'ayon_api.upload_file'
+        # - added in ayon api 1.2.7
+        if hasattr(TransferProgress, "get_attempt"):
+            max_retries = 1
+
+        size = os.path.getsize(repre_path)
+        self.log.info(
+            f"Uploading '{repre_path}' (size: {format_file_size(size)})"
+        )
+        # How long to sleep before next attempt
+        wait_time = 1
+        last_error = None
+        for attempt in range(max_retries):
+            attempt += 1
+            start = time.time()
+            try:
+                output = ayon_con.upload_file(
+                    endpoint,
+                    repre_path,
+                    headers=headers,
+                    request_type=RequestTypes.post,
+                )
+                self.log.debug(f"Uploaded in {time.time() - start}s.")
+                return output
+            except (
+                requests.exceptions.Timeout,
+                requests.exceptions.ConnectionError
+            ) as exc:
+                # Log and retry with backoff if attempts remain
+                if attempt >= max_retries:
+                    last_error = exc
+                    break
+                self.log.warning(
+                    f"Review upload failed ({attempt}/{max_retries})"
+                    f" after {time.time() - start}s."
+                    f" Retrying in {wait_time}s...",
+                    exc_info=True,
+                )
+                time.sleep(wait_time)
+
+        # Exhausted retries - raise a user-friendly validation error with help
+        raise PublishXmlValidationError(
+            self,
+            (
+                "Upload of reviewable timed out or failed after multiple"
+                " attempts. Please try publishing again."
+            ),
+            formatting_data={
+                "upload_type": "Review",
+                "file": repre_path,
+                "error": str(last_error),
+            },
+            help_filename="upload_file.xml",
+        )
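Both new `_upload_with_retries` helpers in this PR follow the same shape: try the call, catch transient network errors, wait, retry, and raise a user-facing error once attempts are exhausted. A generic, dependency-free sketch of that loop; the names here are illustrative and not part of the AYON API:

```python
import time

class TransientError(Exception):
    """Stands in for requests.exceptions.Timeout / ConnectionError."""

def call_with_retries(func, max_retries=3, wait_time=0.01):
    """Call func(), retrying on TransientError with a fixed wait between tries."""
    last_error = None
    for attempt in range(1, max_retries + 1):
        try:
            return func()
        except TransientError as exc:
            last_error = exc
            if attempt >= max_retries:
                break
            time.sleep(wait_time)
    # Exhausted retries: surface the last underlying error to the caller
    raise RuntimeError(f"Failed after {max_retries} attempts") from last_error

# Usage: a call that succeeds on its third attempt
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise TransientError("timeout")
    return "ok"

result = call_with_retries(flaky)
print(result, len(attempts))  # → ok 3
```

Note the guard in the plugins that sets `max_retries = 1` when `TransferProgress.get_attempt` exists: newer `ayon_api` versions retry internally, so the outer loop would otherwise multiply the attempt count.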

View file

@@ -24,11 +24,16 @@
 import os
 import collections
+import time

-import pyblish.api
 import ayon_api
-from ayon_api import RequestTypes
+from ayon_api import RequestTypes, TransferProgress
 from ayon_api.operations import OperationsSession
+import pyblish.api
+import requests
+
+from ayon_core.lib import get_media_mime_type, format_file_size
+from ayon_core.pipeline.publish import PublishXmlValidationError


 InstanceFilterResult = collections.namedtuple(
@@ -164,25 +169,17 @@ class IntegrateThumbnailsAYON(pyblish.api.ContextPlugin):
         return os.path.normpath(filled_path)

     def _create_thumbnail(self, project_name: str, src_filepath: str) -> str:
-        """Upload thumbnail to AYON and return its id.
-
-        This is temporary fix of 'create_thumbnail' function in ayon_api to
-        fix jpeg mime type.
-        """
-        mime_type = None
-        with open(src_filepath, "rb") as stream:
-            if b"\xff\xd8\xff" == stream.read(3):
-                mime_type = "image/jpeg"
+        """Upload thumbnail to AYON and return its id."""
+        mime_type = get_media_mime_type(src_filepath)
         if mime_type is None:
-            return ayon_api.create_thumbnail(project_name, src_filepath)
+            return ayon_api.create_thumbnail(
+                project_name, src_filepath
+            )

-        response = ayon_api.upload_file(
+        response = self._upload_with_retries(
             f"projects/{project_name}/thumbnails",
             src_filepath,
-            request_type=RequestTypes.post,
-            headers={"Content-Type": mime_type},
+            mime_type,
         )
         response.raise_for_status()
         return response.json()["id"]
@@ -248,3 +245,71 @@ class IntegrateThumbnailsAYON(pyblish.api.ContextPlugin):
             or instance.data.get("name")
             or "N/A"
         )
+
+    def _upload_with_retries(
+        self,
+        endpoint: str,
+        repre_path: str,
+        content_type: str,
+    ):
+        """Upload file with simple retries."""
+        ayon_con = ayon_api.get_server_api_connection()
+        headers = ayon_con.get_headers(content_type)
+        max_retries = ayon_con.get_default_max_retries()
+        # Retries are already implemented in 'ayon_api.upload_file'
+        # - added in ayon api 1.2.7
+        if hasattr(TransferProgress, "get_attempt"):
+            max_retries = 1
+
+        size = os.path.getsize(repre_path)
+        self.log.info(
+            f"Uploading '{repre_path}' (size: {format_file_size(size)})"
+        )
+        # How long to sleep before next attempt
+        wait_time = 1
+        last_error = None
+        for attempt in range(max_retries):
+            attempt += 1
+            start = time.time()
+            try:
+                output = ayon_con.upload_file(
+                    endpoint,
+                    repre_path,
+                    headers=headers,
+                    request_type=RequestTypes.post,
+                )
+                self.log.debug(f"Uploaded in {time.time() - start}s.")
+                return output
+            except (
+                requests.exceptions.Timeout,
+                requests.exceptions.ConnectionError
+            ) as exc:
+                # Log and retry with backoff if attempts remain
+                if attempt >= max_retries:
+                    last_error = exc
+                    break
+                self.log.warning(
+                    f"Review upload failed ({attempt}/{max_retries})"
+                    f" after {time.time() - start}s."
+                    f" Retrying in {wait_time}s...",
+                    exc_info=True,
+                )
+                time.sleep(wait_time)
+
+        # Exhausted retries - raise a user-friendly validation error with help
+        raise PublishXmlValidationError(
+            self,
+            (
+                "Upload of thumbnail timed out or failed after multiple"
+                " attempts. Please try publishing again."
+            ),
+            formatting_data={
+                "upload_type": "Thumbnail",
+                "file": repre_path,
+                "error": str(last_error),
+            },
+            help_filename="upload_file.xml",
+        )

View file

@@ -969,12 +969,6 @@ SearchItemDisplayWidget #ValueWidget {
     background: {color:bg-buttons};
 }

-/* Subset Manager */
-#SubsetManagerDetailsText {}
-#SubsetManagerDetailsText[state="invalid"] {
-    border: 1px solid #ff0000;
-}
-
 /* Creator */
 #CreatorsView::item {
     padding: 1px 5px;

View file

@@ -1,6 +1,8 @@
+from __future__ import annotations
+
 from abc import ABC, abstractmethod
 from dataclasses import dataclass, field
-from typing import List, Dict, Optional
+from typing import Optional


 @dataclass
@@ -13,8 +15,8 @@ class TabItem:
 class InterpreterConfig:
     width: Optional[int]
     height: Optional[int]
-    splitter_sizes: List[int] = field(default_factory=list)
-    tabs: List[TabItem] = field(default_factory=list)
+    splitter_sizes: list[int] = field(default_factory=list)
+    tabs: list[TabItem] = field(default_factory=list)


 class AbstractInterpreterController(ABC):
@@ -27,7 +29,7 @@ class AbstractInterpreterController(ABC):
         self,
         width: int,
         height: int,
-        splitter_sizes: List[int],
-        tabs: List[Dict[str, str]],
-    ):
+        splitter_sizes: list[int],
+        tabs: list[dict[str, str]],
+    ) -> None:
         pass

View file

@ -1,4 +1,5 @@
from typing import List, Dict from __future__ import annotations
from typing import Optional
from ayon_core.lib import JSONSettingRegistry from ayon_core.lib import JSONSettingRegistry
from ayon_core.lib.local_settings import get_launcher_local_dir from ayon_core.lib.local_settings import get_launcher_local_dir
@ -11,13 +12,15 @@ from .abstract import (
class InterpreterController(AbstractInterpreterController): class InterpreterController(AbstractInterpreterController):
def __init__(self): def __init__(self, name: Optional[str] = None) -> None:
if name is None:
name = "python_interpreter_tool"
self._registry = JSONSettingRegistry( self._registry = JSONSettingRegistry(
"python_interpreter_tool", name,
get_launcher_local_dir(), get_launcher_local_dir(),
) )
def get_config(self): def get_config(self) -> InterpreterConfig:
width = None width = None
height = None height = None
splitter_sizes = [] splitter_sizes = []
@ -54,9 +57,9 @@ class InterpreterController(AbstractInterpreterController):
self, self,
width: int, width: int,
height: int, height: int,
splitter_sizes: List[int], splitter_sizes: list[int],
tabs: List[Dict[str, str]], tabs: list[dict[str, str]],
): ) -> None:
self._registry.set_item("width", width) self._registry.set_item("width", width)
self._registry.set_item("height", height) self._registry.set_item("height", height)
self._registry.set_item("splitter_sizes", splitter_sizes) self._registry.set_item("splitter_sizes", splitter_sizes)

View file

@@ -1,42 +1,42 @@
-import os
 import sys
 import collections


+class _CustomSTD:
+    def __init__(self, orig_std, write_callback):
+        self.orig_std = orig_std
+        self._valid_orig = bool(orig_std)
+        self._write_callback = write_callback
+
+    def __getattr__(self, attr):
+        return getattr(self.orig_std, attr)
+
+    def __setattr__(self, key, value):
+        if key in ("orig_std", "_valid_orig", "_write_callback"):
+            super().__setattr__(key, value)
+        else:
+            setattr(self.orig_std, key, value)
+
+    def write(self, text):
+        if self._valid_orig:
+            self.orig_std.write(text)
+        self._write_callback(text)
+
+
 class StdOEWrap:
     def __init__(self):
-        self._origin_stdout_write = None
-        self._origin_stderr_write = None
-        self._listening = False
         self.lines = collections.deque()
-
-        if not sys.stdout:
-            sys.stdout = open(os.devnull, "w")
-
-        if not sys.stderr:
-            sys.stderr = open(os.devnull, "w")
-
-        if self._origin_stdout_write is None:
-            self._origin_stdout_write = sys.stdout.write
-
-        if self._origin_stderr_write is None:
-            self._origin_stderr_write = sys.stderr.write
-
         self._listening = True
-        sys.stdout.write = self._stdout_listener
-        sys.stderr.write = self._stderr_listener
+
+        self._stdout_wrap = _CustomSTD(sys.stdout, self._listener)
+        self._stderr_wrap = _CustomSTD(sys.stderr, self._listener)
+        sys.stdout = self._stdout_wrap
+        sys.stderr = self._stderr_wrap

     def stop_listen(self):
         self._listening = False

-    def _stdout_listener(self, text):
+    def _listener(self, text):
         if self._listening:
             self.lines.append(text)
-        if self._origin_stdout_write is not None:
-            self._origin_stdout_write(text)
-
-    def _stderr_listener(self, text):
-        if self._listening:
-            self.lines.append(text)
-        if self._origin_stderr_write is not None:
-            self._origin_stderr_write(text)
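The rewrite above stops monkey-patching `sys.stdout.write` (assigning to `write` fails on stream objects with read-only attributes) and instead swaps out `sys.stdout` itself for a proxy that forwards writes and delegates every other attribute. A minimal sketch of that proxying idea, independent of the `_CustomSTD` class above:

```python
import sys

class StreamProxy:
    """Forward writes to the original stream and to a callback."""

    def __init__(self, stream, callback):
        self._stream = stream
        self._callback = callback

    def write(self, text):
        if self._stream is not None:
            self._stream.write(text)
        self._callback(text)

    def __getattr__(self, name):
        # Delegate everything else (flush, encoding, ...) to the original
        return getattr(self._stream, name)

captured = []
original = sys.stdout
sys.stdout = StreamProxy(original, captured.append)
try:
    print("hello")
finally:
    # Always restore the real stream, even if the wrapped code raises
    sys.stdout = original

print(captured)  # captured holds "hello" and the trailing newline separately
```

Replacing the stream object rather than its `write` attribute also makes teardown trivial: restoring `sys.stdout = original` undoes everything.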

View file

@@ -112,6 +112,7 @@ class HierarchyPage(QtWidgets.QWidget):
         self._is_visible = False
         self._controller = controller

+        self._filters_widget = filters_widget
         self._btn_back = btn_back
         self._projects_combobox = projects_combobox
         self._folders_widget = folders_widget
@@ -136,6 +137,10 @@ class HierarchyPage(QtWidgets.QWidget):
             self._folders_widget.refresh()
             self._tasks_widget.refresh()
             self._workfiles_page.refresh()
+            # Update my tasks
+            self._on_my_tasks_checkbox_state_changed(
+                self._filters_widget.is_my_tasks_checked()
+            )

     def _on_back_clicked(self):
         self._controller.set_selected_project(None)
@@ -155,6 +160,7 @@ class HierarchyPage(QtWidgets.QWidget):
         )
         folder_ids = entity_ids["folder_ids"]
         task_ids = entity_ids["task_ids"]
+
         self._folders_widget.set_folder_ids_filter(folder_ids)
         self._tasks_widget.set_task_ids_filter(task_ids)

View file

@@ -527,6 +527,10 @@ class LoaderWindow(QtWidgets.QWidget):
         if not self._refresh_handler.project_refreshed:
             self._projects_combobox.refresh()
         self._update_filters()
+        # Update my tasks
+        self._on_my_tasks_checkbox_state_changed(
+            self._filters_widget.is_my_tasks_checked()
+        )

     def _on_load_finished(self, event):
         error_info = event["error_info"]

View file

@@ -35,6 +35,7 @@ from ayon_core.pipeline.create import (
     ConvertorsOperationFailed,
     ConvertorItem,
 )
+
 from ayon_core.tools.publisher.abstract import (
     AbstractPublisherBackend,
     CardMessageTypes,

View file

@@ -21,6 +21,7 @@ from ayon_core.pipeline.plugin_discover import DiscoverResult
 from ayon_core.pipeline.publish import (
     get_publish_instance_label,
     PublishError,
+    filter_crashed_publish_paths,
 )
 from ayon_core.tools.publisher.abstract import AbstractPublisherBackend
@@ -107,11 +108,14 @@ class PublishReportMaker:
         creator_discover_result: Optional[DiscoverResult] = None,
         convertor_discover_result: Optional[DiscoverResult] = None,
         publish_discover_result: Optional[DiscoverResult] = None,
+        blocking_crashed_paths: Optional[list[str]] = None,
     ):
         self._create_discover_result: Union[DiscoverResult, None] = None
         self._convert_discover_result: Union[DiscoverResult, None] = None
         self._publish_discover_result: Union[DiscoverResult, None] = None
+        self._blocking_crashed_paths: list[str] = []
+
         self._all_instances_by_id: Dict[str, pyblish.api.Instance] = {}
         self._plugin_data_by_id: Dict[str, Any] = {}
         self._current_plugin_id: Optional[str] = None
@@ -120,6 +124,7 @@ class PublishReportMaker:
             creator_discover_result,
             convertor_discover_result,
             publish_discover_result,
+            blocking_crashed_paths,
         )

     def reset(
@@ -127,12 +132,14 @@ class PublishReportMaker:
         creator_discover_result: Union[DiscoverResult, None],
         convertor_discover_result: Union[DiscoverResult, None],
         publish_discover_result: Union[DiscoverResult, None],
+        blocking_crashed_paths: list[str],
     ):
         """Reset report and clear all data."""
         self._create_discover_result = creator_discover_result
         self._convert_discover_result = convertor_discover_result
         self._publish_discover_result = publish_discover_result
+        self._blocking_crashed_paths = blocking_crashed_paths

         self._all_instances_by_id = {}
         self._plugin_data_by_id = {}
@@ -242,9 +249,10 @@ class PublishReportMaker:
             "instances": instances_details,
             "context": self._extract_context_data(publish_context),
             "crashed_file_paths": crashed_file_paths,
+            "blocking_crashed_paths": list(self._blocking_crashed_paths),
             "id": uuid.uuid4().hex,
             "created_at": now.isoformat(),
-            "report_version": "1.1.0",
+            "report_version": "1.1.1",
         }

     def _add_plugin_data_item(self, plugin: pyblish.api.Plugin):
@@ -959,11 +967,16 @@ class PublishModel:
         self._publish_plugins_proxy = PublishPluginsProxy(
             publish_plugins
         )
+        blocking_crashed_paths = filter_crashed_publish_paths(
+            create_context.get_current_project_name(),
+            set(create_context.publish_discover_result.crashed_file_paths),
+            project_settings=create_context.get_current_project_settings(),
+        )
         self._publish_report.reset(
             create_context.creator_discover_result,
             create_context.convertor_discover_result,
             create_context.publish_discover_result,
+            blocking_crashed_paths,
         )
         for plugin in create_context.publish_plugins_mismatch_targets:
             self._publish_report.set_plugin_skipped(plugin.id)

View file

@@ -139,3 +139,6 @@ class PublishReport:
         self.logs = logs
         self.crashed_plugin_paths = report_data["crashed_file_paths"]
+        self.blocking_crashed_paths = report_data.get(
+            "blocking_crashed_paths", []
+        )

View file

@@ -7,6 +7,7 @@ from ayon_core.tools.utils import (
     SeparatorWidget,
     IconButton,
     paint_image_with_color,
+    get_qt_icon,
 )
 from ayon_core.resources import get_image_path
 from ayon_core.style import get_objected_colors
@@ -46,10 +47,13 @@ def get_pretty_milliseconds(value):
 class PluginLoadReportModel(QtGui.QStandardItemModel):
+    _blocking_icon = None
+
     def __init__(self):
         super().__init__()
         self._traceback_by_filepath = {}
         self._items_by_filepath = {}
+        self._blocking_crashed_paths = set()
         self._is_active = True
         self._need_refresh = False
@@ -75,6 +79,7 @@ class PluginLoadReportModel(QtGui.QStandardItemModel):
         for filepath in to_remove:
             self._traceback_by_filepath.pop(filepath)
+        self._blocking_crashed_paths = set(report.blocking_crashed_paths)
         self._update_items()

     def _update_items(self):
@@ -83,6 +88,7 @@ class PluginLoadReportModel(QtGui.QStandardItemModel):
         parent = self.invisibleRootItem()
         if not self._traceback_by_filepath:
             parent.removeRows(0, parent.rowCount())
+            self._items_by_filepath = {}
             return

         new_items = []
@@ -91,13 +97,19 @@ class PluginLoadReportModel(QtGui.QStandardItemModel):
             set(self._items_by_filepath) - set(self._traceback_by_filepath)
         )
         for filepath in self._traceback_by_filepath:
-            if filepath in self._items_by_filepath:
-                continue
-            item = QtGui.QStandardItem(filepath)
-            new_items.append(item)
-            new_items_by_filepath[filepath] = item
-            self._items_by_filepath[filepath] = item
+            item = self._items_by_filepath.get(filepath)
+            if item is None:
+                item = QtGui.QStandardItem(filepath)
+                new_items.append(item)
+                new_items_by_filepath[filepath] = item
+                self._items_by_filepath[filepath] = item
+
+            icon = None
+            if filepath.replace("\\", "/") in self._blocking_crashed_paths:
+                icon = self._get_blocking_icon()
+            item.setData(icon, QtCore.Qt.DecorationRole)

         if new_items:
             parent.appendRows(new_items)
@@ -113,6 +125,16 @@ class PluginLoadReportModel(QtGui.QStandardItemModel):
             item = self._items_by_filepath.pop(filepath)
             parent.removeRow(item.row())

+    @classmethod
+    def _get_blocking_icon(cls):
+        if cls._blocking_icon is None:
+            cls._blocking_icon = get_qt_icon({
+                "type": "material-symbols",
+                "name": "block",
+                "color": "red",
+            })
+        return cls._blocking_icon
+

 class DetailWidget(QtWidgets.QTextEdit):
     def __init__(self, text, *args, **kwargs):
@@ -856,7 +878,7 @@ class PublishReportViewerWidget(QtWidgets.QFrame):
         report = PublishReport(report_data)
         self.set_report(report)

-    def set_report(self, report):
+    def set_report(self, report: PublishReport) -> None:
         self._ignore_selection_changes = True

         self._report_item = report
@@ -866,6 +888,10 @@ class PublishReportViewerWidget(QtWidgets.QFrame):
         self._logs_text_widget.set_report(report)
         self._plugin_load_report_widget.set_report(report)
         self._plugins_details_widget.set_report(report)
+        if report.blocking_crashed_paths:
+            self._details_tab_widget.setCurrentWidget(
+                self._plugin_load_report_widget
+            )

         self._ignore_selection_changes = False


@@ -221,6 +221,7 @@ class CreateContextWidget(QtWidgets.QWidget):
         filters_widget.text_changed.connect(self._on_folder_filter_change)
         filters_widget.my_tasks_changed.connect(self._on_my_tasks_change)

+        self._filters_widget = filters_widget
         self._current_context_btn = current_context_btn
         self._folders_widget = folders_widget
         self._tasks_widget = tasks_widget
@@ -290,6 +291,10 @@ class CreateContextWidget(QtWidgets.QWidget):
             self._hierarchy_controller.set_expected_selection(
                 self._last_project_name, folder_id, task_name
             )
+        # Update my tasks
+        self._on_my_tasks_change(
+            self._filters_widget.is_my_tasks_checked()
+        )

     def _clear_selection(self):
         self._folders_widget.set_selected_folder(None)


@@ -310,9 +310,6 @@ class CreateWidget(QtWidgets.QWidget):
         folder_path = None
         if self._context_change_is_enabled():
             folder_path = self._context_widget.get_selected_folder_path()
-
-        if folder_path is None:
-            folder_path = self.get_current_folder_path()
         return folder_path or None

     def _get_folder_id(self):
@@ -328,9 +325,6 @@ class CreateWidget(QtWidgets.QWidget):
             folder_path = self._context_widget.get_selected_folder_path()
             if folder_path:
                 task_name = self._context_widget.get_selected_task_name()
-
-        if not task_name:
-            task_name = self.get_current_task_name()
         return task_name

     def _set_context_enabled(self, enabled):


@@ -113,6 +113,7 @@ class FoldersDialog(QtWidgets.QDialog):
         self._soft_reset_enabled = False

         self._folders_widget.set_project_name(self._project_name)
+        self._on_my_tasks_change(self._filters_widget.is_my_tasks_checked())

     def get_selected_folder_path(self):
         """Get selected folder path."""


@@ -1,9 +1,11 @@
+from __future__ import annotations
+
 import os
 import json
 import time
 import collections
 import copy
-from typing import Optional
+from typing import Optional, Any

 from qtpy import QtWidgets, QtCore, QtGui
@@ -393,6 +395,9 @@ class PublisherWindow(QtWidgets.QDialog):
         self._publish_frame_visible = None
         self._tab_on_reset = None

+        self._create_context_valid: bool = True
+        self._blocked_by_crashed_paths: bool = False
+
         self._error_messages_to_show = collections.deque()
         self._errors_dialog_message_timer = errors_dialog_message_timer
@@ -406,6 +411,8 @@ class PublisherWindow(QtWidgets.QDialog):
         self._show_counter = 0
         self._window_is_visible = False

+        self._update_footer_state()
+
     @property
     def controller(self) -> AbstractPublisherFrontend:
         """Kept for compatibility with traypublisher."""
@@ -664,10 +671,32 @@ class PublisherWindow(QtWidgets.QDialog):
         self._tab_on_reset = tab

-    def _update_publish_details_widget(self, force=False):
-        if not force and not self._is_on_details_tab():
+    def set_current_tab(self, tab):
+        if tab == "create":
+            self._go_to_create_tab()
+        elif tab == "publish":
+            self._go_to_publish_tab()
+        elif tab == "report":
+            self._go_to_report_tab()
+        elif tab == "details":
+            self._go_to_details_tab()
+
+        if not self._window_is_visible:
+            self.set_tab_on_reset(tab)
+
+    def _update_publish_details_widget(
+        self,
+        force: bool = False,
+        report_data: Optional[dict[str, Any]] = None,
+    ) -> None:
+        if (
+            report_data is None
+            and not force
+            and not self._is_on_details_tab()
+        ):
             return

-        report_data = self._controller.get_publish_report()
+        if report_data is None:
+            report_data = self._controller.get_publish_report()
         self._publish_details_widget.set_report_data(report_data)
@@ -752,19 +781,6 @@ class PublisherWindow(QtWidgets.QDialog):
     def _set_current_tab(self, identifier):
         self._tabs_widget.set_current_tab(identifier)

-    def set_current_tab(self, tab):
-        if tab == "create":
-            self._go_to_create_tab()
-        elif tab == "publish":
-            self._go_to_publish_tab()
-        elif tab == "report":
-            self._go_to_report_tab()
-        elif tab == "details":
-            self._go_to_details_tab()
-
-        if not self._window_is_visible:
-            self.set_tab_on_reset(tab)
-
     def _is_current_tab(self, identifier):
         return self._tabs_widget.is_current_tab(identifier)
@@ -865,15 +881,40 @@ class PublisherWindow(QtWidgets.QDialog):
         # Reset style
         self._comment_input.setStyleSheet("")

-    def _set_footer_enabled(self, enabled):
-        self._save_btn.setEnabled(True)
+    def _set_create_context_valid(self, valid: bool) -> None:
+        self._create_context_valid = valid
+        self._update_footer_state()
+
+    def _set_blocked(self, blocked: bool) -> None:
+        self._blocked_by_crashed_paths = blocked
+        self._overview_widget.setEnabled(not blocked)
+        self._update_footer_state()
+        if not blocked:
+            return
+
+        self.set_tab_on_reset("details")
+        self._go_to_details_tab()
+        QtWidgets.QMessageBox.critical(
+            self,
+            "Failed to load plugins",
+            (
+                "Failed to load plugins, which prevents you from"
+                " using the publish tool.\n"
+                "Please contact your TD or administrator."
+            )
+        )
+
+    def _update_footer_state(self) -> None:
+        enabled = (
+            not self._blocked_by_crashed_paths
+            and self._create_context_valid
+        )
+        save_enabled = not self._blocked_by_crashed_paths
+        self._save_btn.setEnabled(save_enabled)
         self._reset_btn.setEnabled(True)
-        if enabled:
-            self._stop_btn.setEnabled(False)
-            self._validate_btn.setEnabled(True)
-            self._publish_btn.setEnabled(True)
-        else:
-            self._stop_btn.setEnabled(enabled)
-            self._validate_btn.setEnabled(enabled)
-            self._publish_btn.setEnabled(enabled)
+        self._stop_btn.setEnabled(False)
+        self._validate_btn.setEnabled(enabled)
+        self._publish_btn.setEnabled(enabled)
@@ -882,9 +923,14 @@ class PublisherWindow(QtWidgets.QDialog):
         self._set_comment_input_visiblity(True)
         self._set_publish_overlay_visibility(False)
         self._set_publish_visibility(False)
-        self._update_publish_details_widget()
+
+        report_data = self._controller.get_publish_report()
+        blocked = bool(report_data["blocking_crashed_paths"])
+        self._set_blocked(blocked)
+        self._update_publish_details_widget(report_data=report_data)

     def _on_controller_reset(self):
+        self._update_publish_details_widget(force=True)
         self._first_reset, first_reset = False, self._first_reset
         if self._tab_on_reset is not None:
             self._tab_on_reset, new_tab = None, self._tab_on_reset
@@ -952,7 +998,7 @@ class PublisherWindow(QtWidgets.QDialog):
     def _validate_create_instances(self):
         if not self._controller.is_host_valid():
-            self._set_footer_enabled(True)
+            self._set_create_context_valid(True)
             return

         active_instances_by_id = {
@@ -973,7 +1019,7 @@ class PublisherWindow(QtWidgets.QDialog):
         if all_valid is None:
             all_valid = True

-        self._set_footer_enabled(bool(all_valid))
+        self._set_create_context_valid(bool(all_valid))

     def _on_create_model_reset(self):
         self._validate_create_instances()


@@ -1045,10 +1045,23 @@ class ProjectPushItemProcess:
         copied_tags = self._get_transferable_tags(src_version_entity)
         copied_status = self._get_transferable_status(src_version_entity)

+        description_parts = []
+        dst_attr_description = dst_attrib.get("description")
+        if dst_attr_description:
+            description_parts.append(dst_attr_description)
+
+        description = self._create_src_version_description(
+            self._item.src_project_name,
+            src_version_entity
+        )
+        if description:
+            description_parts.append(description)
+        dst_attrib["description"] = "\n\n".join(description_parts)
+
         version_entity = new_version_entity(
             dst_version,
             product_id,
+            author=src_version_entity["author"],
             status=copied_status,
             tags=copied_tags,
             task_id=self._task_info.get("id"),
@@ -1129,8 +1142,6 @@ class ProjectPushItemProcess:
             self.host_name
         )
         formatting_data.update({
-            "subset": self._product_name,
-            "family": self._product_type,
             "product": {
                 "name": self._product_name,
                 "type": self._product_type,
@@ -1372,6 +1383,30 @@ class ProjectPushItemProcess:
             return copied_status["name"]
         return None

+    def _create_src_version_description(
+        self,
+        src_project_name: str,
+        src_version_entity: dict[str, Any]
+    ) -> str:
+        """Create description text about the source version."""
+        src_version_id = src_version_entity["id"]
+        src_author = src_version_entity["author"]
+        query = "&".join([
+            f"project={src_project_name}",
+            "type=version",
+            f"id={src_version_id}"
+        ])
+        version_url = (
+            f"{ayon_api.get_base_url()}"
+            f"/projects/{src_project_name}/products?{query}"
+        )
+        description = (
+            f"Version copied from {version_url} "
+            f"created by '{src_author}'."
+        )
+        return description
+

 class IntegrateModel:
     def __init__(self, controller):

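The hunk above builds the destination version description by joining the existing description and the generated source-version note with a blank line between them, skipping empty parts. A small standalone sketch of that joining rule (the function name is mine, not part of ayon_core):

```python
def join_description_parts(*parts):
    """Join non-empty description chunks with a blank line between them,
    mirroring how the push process appends the source-version note."""
    return "\n\n".join(part for part in parts if part)


# Hypothetical values for illustration only:
existing = "Original artist note."
src_note = "Version copied from <server url> created by 'jane'."
print(join_description_parts(existing, src_note))
print(join_description_parts("", src_note))  # no leading blank line
```

Filtering out empty parts avoids a stray leading or trailing blank line when the destination had no description yet.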

@@ -1114,6 +1114,8 @@ class SceneInventoryView(QtWidgets.QTreeView):
         try:
             for item_id, item_version in zip(item_ids, versions):
                 container = containers_by_id[item_id]
+                if container.get("version_locked"):
+                    continue
                 try:
                     update_container(container, item_version)
                 except Exception as exc:


@@ -32,8 +32,6 @@ class TextureCopy:
         product_type = "texture"
         template_data = get_template_data(project_entity, folder_entity)
         template_data.update({
-            "family": product_type,
-            "subset": product_name,
             "product": {
                 "name": product_name,
                 "type": product_type,


@@ -834,6 +834,12 @@ class FoldersFiltersWidget(QtWidgets.QWidget):
         self._folders_filter_input = folders_filter_input
         self._my_tasks_checkbox = my_tasks_checkbox

+    def is_my_tasks_checked(self) -> bool:
+        return self._my_tasks_checkbox.isChecked()
+
+    def text(self) -> str:
+        return self._folders_filter_input.text()
+
     def set_text(self, text: str) -> None:
         self._folders_filter_input.setText(text)


@@ -205,6 +205,8 @@ class WorkfilesToolWindow(QtWidgets.QWidget):
         self._folders_widget = folder_widget
+        self._filters_widget = filters_widget
+
         return col_widget

     def _create_col_3_widget(self, controller, parent):
@@ -343,6 +345,10 @@ class WorkfilesToolWindow(QtWidgets.QWidget):
         self._project_name = self._controller.get_current_project_name()
         self._folders_widget.set_project_name(self._project_name)
+        # Update my tasks
+        self._on_my_tasks_checkbox_state_changed(
+            self._filters_widget.is_my_tasks_checked()
+        )

     def _on_save_as_finished(self, event):
         if event["failed"]:

@@ -1,3 +1,3 @@
 # -*- coding: utf-8 -*-
 """Package declaring AYON addon 'core' version."""
-__version__ = "1.6.12"
+__version__ = "1.7.0+dev"


@@ -1,6 +1,6 @@
 name = "core"
 title = "Core"
-version = "1.6.12"
+version = "1.7.0+dev"
 client_dir = "ayon_core"
@@ -12,6 +12,7 @@ ayon_server_version = ">=1.8.4,<2.0.0"
 ayon_launcher_version = ">=1.0.2"
 ayon_required_addons = {}
 ayon_compatible_addons = {
+    "ayon_third_party": ">=1.3.0",
     "ayon_ocio": ">=1.2.1",
     "applications": ">=1.1.2",
     "harmony": ">0.4.0",


@@ -5,7 +5,7 @@
 [tool.poetry]
 name = "ayon-core"
-version = "1.6.12"
+version = "1.7.0+dev"
 description = ""
 authors = ["Ynput Team <team@ynput.io>"]
 readme = "README.md"
@@ -37,7 +37,7 @@ opentimelineio = "^0.17.0"
 speedcopy = "^2.1"
 qtpy="^2.4.3"
 pyside6 = "^6.5.2"
-pytest-ayon = { git = "https://github.com/ynput/pytest-ayon.git", branch = "chore/align-dependencies" }
+pytest-ayon = { git = "https://github.com/ynput/pytest-ayon.git", branch = "develop" }

 [tool.codespell]
 # Ignore words that are not in the dictionary.


@@ -7,7 +7,31 @@ from .publish_plugins import DEFAULT_PUBLISH_VALUES
 PRODUCT_NAME_REPL_REGEX = re.compile(r"[^<>{}\[\]a-zA-Z0-9_.]")

-def _convert_imageio_configs_1_6_5(overrides):
+def _convert_product_name_templates_1_7_0(overrides):
+    product_name_profiles = (
+        overrides
+        .get("tools", {})
+        .get("creator", {})
+        .get("product_name_profiles")
+    )
+    if (
+        not product_name_profiles
+        or not isinstance(product_name_profiles, list)
+    ):
+        return
+
+    # Already converted
+    item = product_name_profiles[0]
+    if "product_base_types" in item or "product_types" not in item:
+        return
+
+    # Move product types to product base types
+    for item in product_name_profiles:
+        item["product_base_types"] = item["product_types"]
+        item["product_types"] = []
+
+
+def _convert_product_name_templates_1_6_5(overrides):
     product_name_profiles = (
         overrides
         .get("tools", {})
@@ -158,12 +182,54 @@ def _convert_publish_plugins(overrides):
     _convert_oiio_transcode_0_4_5(overrides["publish"])


+def _convert_extract_thumbnail(overrides):
+    """ExtractThumbnail settings changed to profile-based configuration."""
+    extract_thumbnail_overrides = (
+        overrides.get("publish", {}).get("ExtractThumbnail")
+    )
+    if extract_thumbnail_overrides is None:
+        return
+
+    base_value = {
+        "product_types": [],
+        "host_names": [],
+        "task_types": [],
+        "task_names": [],
+        "product_names": [],
+        "integrate_thumbnail": True,
+        "target_size": {"type": "source"},
+        "duration_split": 0.5,
+        "oiiotool_defaults": {
+            "type": "colorspace",
+            "colorspace": "color_picking",
+        },
+        "ffmpeg_args": {"input": ["-apply_trc gamma22"], "output": []},
+    }
+    for key in (
+        "product_names",
+        "integrate_thumbnail",
+        "target_size",
+        "duration_split",
+        "oiiotool_defaults",
+        "ffmpeg_args",
+    ):
+        if key in extract_thumbnail_overrides:
+            base_value[key] = extract_thumbnail_overrides.pop(key)
+
+    extract_thumbnail_profiles = extract_thumbnail_overrides.setdefault(
+        "profiles", []
+    )
+    extract_thumbnail_profiles.append(base_value)
+
+
 def convert_settings_overrides(
     source_version: str,
     overrides: dict[str, Any],
 ) -> dict[str, Any]:
     _convert_imageio_configs_0_3_1(overrides)
     _convert_imageio_configs_0_4_5(overrides)
-    _convert_imageio_configs_1_6_5(overrides)
+    _convert_product_name_templates_1_6_5(overrides)
+    _convert_product_name_templates_1_7_0(overrides)
     _convert_publish_plugins(overrides)
+    _convert_extract_thumbnail(overrides)
     return overrides

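The `_convert_product_name_templates_1_7_0` conversion above is deliberately idempotent: it bails out when the first profile already carries `product_base_types`, so running it again on already-converted overrides changes nothing. A simplified, self-contained sketch of the same pattern (not the actual module, dictionary shape mirrors the diff):

```python
def convert_profiles(overrides):
    """Move legacy 'product_types' values into 'product_base_types', once."""
    profiles = (
        overrides.get("tools", {})
        .get("creator", {})
        .get("product_name_profiles")
    )
    if not profiles or not isinstance(profiles, list):
        return

    # Already converted (or nothing to move): bail out so the conversion
    # stays idempotent across repeated settings loads.
    first = profiles[0]
    if "product_base_types" in first or "product_types" not in first:
        return

    for item in profiles:
        item["product_base_types"] = item["product_types"]
        item["product_types"] = []
```

Checking only the first profile is enough here because the conversion always rewrites all profiles together, so a partially converted list cannot occur.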

@@ -74,13 +74,35 @@ class CollectFramesFixDefModel(BaseSettingsModel):
     )


+def usd_contribution_layer_types():
+    return [
+        {"value": "asset", "label": "Asset"},
+        {"value": "shot", "label": "Shot"},
+    ]
+
+
 class ContributionLayersModel(BaseSettingsModel):
     _layout = "compact"
-    name: str = SettingsField(title="Name")
-    order: str = SettingsField(
+    name: str = SettingsField(
+        default="",
+        regex="[A-Za-z0-9_-]+",
+        title="Name")
+    scope: list[str] = SettingsField(
+        # This should actually be returned from a callable to `default_factory`
+        # because lists are mutable. However, the frontend can't interpret
+        # the callable. It will fail to apply it as the default. Specifying
+        # this default directly did not show any ill side effects.
+        default=["asset", "shot"],
+        title="Scope",
+        min_items=1,
+        enum_resolver=usd_contribution_layer_types)
+    order: int = SettingsField(
+        default=0,
         title="Order",
-        description="Higher order means a higher strength and stacks the "
-                    "layer on top.")
+        description=(
+            "Higher order means a higher strength and stacks the layer on top."
+        )
+    )


 class CollectUSDLayerContributionsProfileModel(BaseSettingsModel):
@@ -400,24 +422,30 @@ class ExtractThumbnailOIIODefaultsModel(BaseSettingsModel):
     )


-class ExtractThumbnailModel(BaseSettingsModel):
-    _isGroup = True
-    enabled: bool = SettingsField(True)
+class ExtractThumbnailProfileModel(BaseSettingsModel):
+    product_types: list[str] = SettingsField(
+        default_factory=list, title="Product types"
+    )
+    host_names: list[str] = SettingsField(
+        default_factory=list, title="Host names"
+    )
+    task_types: list[str] = SettingsField(
+        default_factory=list, title="Task types", enum_resolver=task_types_enum
+    )
+    task_names: list[str] = SettingsField(
+        default_factory=list, title="Task names"
+    )
     product_names: list[str] = SettingsField(
-        default_factory=list,
-        title="Product names"
+        default_factory=list, title="Product names"
     )
     integrate_thumbnail: bool = SettingsField(
-        True,
-        title="Integrate Thumbnail Representation"
+        True, title="Integrate Thumbnail Representation"
     )
     target_size: ResizeModel = SettingsField(
-        default_factory=ResizeModel,
-        title="Target size"
+        default_factory=ResizeModel, title="Target size"
     )
     background_color: ColorRGBA_uint8 = SettingsField(
-        (0, 0, 0, 0.0),
-        title="Background color"
+        (0, 0, 0, 0.0), title="Background color"
     )
     duration_split: float = SettingsField(
         0.5,
@@ -434,6 +462,15 @@ class ExtractThumbnailModel(BaseSettingsModel):
     )


+class ExtractThumbnailModel(BaseSettingsModel):
+    _isGroup = True
+    enabled: bool = SettingsField(True)
+    profiles: list[ExtractThumbnailProfileModel] = SettingsField(
+        default_factory=list, title="Profiles"
+    )
+
+
 def _extract_oiio_transcoding_type():
     return [
         {"value": "colorspace", "label": "Use Colorspace"},
@@ -469,6 +506,18 @@ class UseDisplayViewModel(BaseSettingsModel):
     )


+class ExtractThumbnailFromSourceModel(BaseSettingsModel):
+    """Thumbnail extraction from source files using ffmpeg and oiiotool."""
+    enabled: bool = SettingsField(True)
+    target_size: ResizeModel = SettingsField(
+        default_factory=ResizeModel, title="Target size"
+    )
+    background_color: ColorRGBA_uint8 = SettingsField(
+        (0, 0, 0, 0.0), title="Background color"
+    )
+
+
 class ExtractOIIOTranscodeOutputModel(BaseSettingsModel):
     _layout = "expanded"
     name: str = SettingsField(
@@ -1244,6 +1293,16 @@ class PublishPuginsModel(BaseSettingsModel):
         default_factory=ExtractThumbnailModel,
         title="Extract Thumbnail"
     )
+    ExtractThumbnailFromSource: ExtractThumbnailFromSourceModel = SettingsField(  # noqa: E501
+        default_factory=ExtractThumbnailFromSourceModel,
+        title="Extract Thumbnail from source",
+        description=(
+            "Extract thumbnails from an explicit file set in"
+            " instance.data['thumbnailSource'] using oiiotool or ffmpeg."
+            " Used when the artist provided a thumbnail source."
+        )
+    )
     ExtractOIIOTranscode: ExtractOIIOTranscodeModel = SettingsField(
         default_factory=ExtractOIIOTranscodeModel,
         title="Extract OIIO Transcode"
@@ -1345,17 +1404,17 @@ DEFAULT_PUBLISH_VALUES = {
         "enabled": True,
         "contribution_layers": [
             # Asset layers
-            {"name": "model", "order": 100},
-            {"name": "assembly", "order": 150},
-            {"name": "groom", "order": 175},
-            {"name": "look", "order": 200},
-            {"name": "rig", "order": 300},
+            {"name": "model", "order": 100, "scope": ["asset"]},
+            {"name": "assembly", "order": 150, "scope": ["asset"]},
+            {"name": "groom", "order": 175, "scope": ["asset"]},
+            {"name": "look", "order": 200, "scope": ["asset"]},
+            {"name": "rig", "order": 300, "scope": ["asset"]},
             # Shot layers
-            {"name": "layout", "order": 200},
-            {"name": "animation", "order": 300},
-            {"name": "simulation", "order": 400},
-            {"name": "fx", "order": 500},
-            {"name": "lighting", "order": 600},
+            {"name": "layout", "order": 200, "scope": ["shot"]},
+            {"name": "animation", "order": 300, "scope": ["shot"]},
+            {"name": "simulation", "order": 400, "scope": ["shot"]},
+            {"name": "fx", "order": 500, "scope": ["shot"]},
+            {"name": "lighting", "order": 600, "scope": ["shot"]},
         ],
         "profiles": [
             {
@@ -1458,6 +1517,12 @@ DEFAULT_PUBLISH_VALUES = {
     },
     "ExtractThumbnail": {
         "enabled": True,
+        "profiles": [
+            {
+                "product_types": [],
+                "host_names": [],
+                "task_types": [],
+                "task_names": [],
                 "product_names": [],
                 "integrate_thumbnail": True,
                 "target_size": {
@@ -1474,6 +1539,18 @@ DEFAULT_PUBLISH_VALUES = {
                     ],
                     "output": []
                 }
+            }
+        ]
+    },
+    "ExtractThumbnailFromSource": {
+        "enabled": True,
+        "target_size": {
+            "type": "resize",
+            "resize": {
+                "width": 300,
+                "height": 170
+            }
+        },
     },
     "ExtractOIIOTranscode": {
         "enabled": True,

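The new `scope` and `order` fields in the contribution-layer defaults above imply two operations on the consumer side: filtering layers by scope and stacking them by order, higher order on top. A sketch of how such defaults could be consumed (illustrative only, not code from this PR):

```python
def layers_for_scope(layers, scope):
    """Layers applicable to a scope, weakest first (higher order stacks on top)."""
    return sorted(
        (layer for layer in layers if scope in layer["scope"]),
        key=lambda layer: layer["order"],
    )


# Trimmed copy of the defaults from the diff:
layers = [
    {"name": "model", "order": 100, "scope": ["asset"]},
    {"name": "layout", "order": 200, "scope": ["shot"]},
    {"name": "look", "order": 200, "scope": ["asset"]},
]
print([layer["name"] for layer in layers_for_scope(layers, "asset")])  # → ['model', 'look']
```

Sorting ascending by `order` means the strongest layer comes last, matching the field description that a higher order "stacks the layer on top".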

@@ -24,6 +24,10 @@ class ProductTypeSmartSelectModel(BaseSettingsModel):
 class ProductNameProfile(BaseSettingsModel):
     _layout = "expanded"

+    product_base_types: list[str] = SettingsField(
+        default_factory=list,
+        title="Product base types",
+    )
     product_types: list[str] = SettingsField(
         default_factory=list,
         title="Product types",
@@ -352,6 +356,27 @@ class CustomStagingDirProfileModel(BaseSettingsModel):
     )


+class DiscoverValidationModel(BaseSettingsModel):
+    """Strictly validate publish plugin discovery.
+
+    Artists won't be able to publish if a publish plugin path fails to
+    be imported.
+    """
+    _isGroup = True
+    enabled: bool = SettingsField(
+        False,
+        description="Enable strict mode of plugin discovery",
+    )
+    ignore_paths: list[str] = SettingsField(
+        default_factory=list,
+        title="Ignored paths (regex)",
+        description=(
+            "Paths that match any regex will be skipped during validation."
+        ),
+    )
+
+
 class PublishToolModel(BaseSettingsModel):
     template_name_profiles: list[PublishTemplateNameProfile] = SettingsField(
         default_factory=list,
@@ -369,6 +394,10 @@ class PublishToolModel(BaseSettingsModel):
         title="Custom Staging Dir Profiles"
     )
+    discover_validation: DiscoverValidationModel = SettingsField(
+        default_factory=DiscoverValidationModel,
+        title="Validate plugins discovery",
+    )
     comment_minimum_required_chars: int = SettingsField(
         0,
         title="Publish comment minimum required characters",
@@ -443,6 +472,7 @@ DEFAULT_TOOLS_VALUES = {
         ],
         "product_name_profiles": [
             {
+                "product_base_types": [],
                 "product_types": [],
                 "host_names": [],
                 "task_types": [],
@@ -450,28 +480,31 @@ DEFAULT_TOOLS_VALUES = {
                 "template": "{product[type]}{variant}"
             },
             {
-                "product_types": [
+                "product_base_types": [
                     "workfile"
                 ],
+                "product_types": [],
                 "host_names": [],
                 "task_types": [],
                 "task_names": [],
                 "template": "{product[type]}{Task[name]}"
             },
             {
-                "product_types": [
+                "product_base_types": [
                     "render"
                 ],
+                "product_types": [],
                 "host_names": [],
                 "task_types": [],
                 "task_names": [],
                 "template": "{product[type]}{Task[name]}{Variant}<_{Aov}>"
             },
             {
-                "product_types": [
+                "product_base_types": [
                     "renderLayer",
                     "renderPass"
                 ],
+                "product_types": [],
                 "host_names": [
                     "tvpaint"
                 ],
@@ -482,10 +515,11 @@ DEFAULT_TOOLS_VALUES = {
                 )
             },
             {
-                "product_types": [
+                "product_base_types": [
                     "review",
                     "workfile"
                 ],
+                "product_types": [],
                 "host_names": [
                     "aftereffects",
                     "tvpaint"
@@ -495,7 +529,8 @@ DEFAULT_TOOLS_VALUES = {
                 "template": "{product[type]}{Task[name]}"
             },
             {
-                "product_types": ["render"],
+                "product_base_types": ["render"],
+                "product_types": [],
                 "host_names": [
                     "aftereffects"
                 ],
@@ -504,9 +539,10 @@ DEFAULT_TOOLS_VALUES = {
                 "template": "{product[type]}{Task[name]}{Composition}{Variant}"
             },
             {
-                "product_types": [
+                "product_base_types": [
                     "staticMesh"
                 ],
+                "product_types": [],
                 "host_names": [
                     "maya"
                 ],
@@ -515,9 +551,10 @@ DEFAULT_TOOLS_VALUES = {
                 "template": "S_{folder[name]}{variant}"
             },
             {
-                "product_types": [
+                "product_base_types": [
                     "skeletalMesh"
                 ],
+                "product_types": [],
                 "host_names": [
                     "maya"
                 ],
@@ -526,9 +563,10 @@ DEFAULT_TOOLS_VALUES = {
                 "template": "SK_{folder[name]}{variant}"
             },
             {
-                "product_types": [
+                "product_base_types": [
                     "hda"
                 ],
+                "product_types": [],
                 "host_names": [
                     "houdini"
                 ],
@@ -537,9 +575,10 @@ DEFAULT_TOOLS_VALUES = {
                 "template": "{folder[name]}_{variant}"
             },
             {
-                "product_types": [
+                "product_base_types": [
                     "textureSet"
                 ],
+                "product_types": [],
                 "host_names": [
                     "substancedesigner"
                 ],
@@ -691,6 +730,10 @@ DEFAULT_TOOLS_VALUES = {
                 "template_name": "simpleUnrealTextureHero"
             }
         ],
+        "discover_validation": {
+            "enabled": False,
+            "ignore_paths": [],
+        },
         "comment_minimum_required_chars": 0,
     }
 }

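The `discover_validation.ignore_paths` setting above holds regex patterns: any plugin path matching one of them is skipped by the strict discovery validation. A sketch of what that matching could look like (the helper name is hypothetical, not the actual ayon_core implementation):

```python
import re


def is_ignored(path, ignore_patterns):
    """True when the plugin path matches any configured regex, meaning
    strict discovery validation should skip it."""
    # Normalize separators so Windows paths match forward-slash patterns.
    normalized = path.replace("\\", "/")
    return any(re.search(pattern, normalized) for pattern in ignore_patterns)


# Hypothetical patterns an admin might configure:
patterns = [r"/studio_overrides/", r"legacy_.*\.py$"]
print(is_ignored("plugins/publish/legacy_extract.py", patterns))   # → True
print(is_ignored("plugins/publish/validate_mesh.py", patterns))    # → False
```

Using `re.search` rather than `re.match` lets a pattern hit anywhere in the path, which is usually what "ignored paths (regex)" settings are expected to do.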

@@ -0,0 +1,333 @@
+"""Tests for product_name helpers."""
+import pytest
+
+from unittest.mock import patch
+
+from ayon_core.pipeline.create.product_name import (
+    get_product_name_template,
+    get_product_name,
+)
+from ayon_core.pipeline.create.constants import DEFAULT_PRODUCT_TEMPLATE
+from ayon_core.pipeline.create.exceptions import (
+    TaskNotSetError,
+    TemplateFillError,
+)
+
+
+class TestGetProductNameTemplate:
+    @patch("ayon_core.pipeline.create.product_name.get_project_settings")
+    @patch("ayon_core.pipeline.create.product_name.filter_profiles")
+    def test_matching_profile_with_replacements(
+        self,
+        mock_filter_profiles,
+        mock_get_settings,
+    ):
+        """Matching profile applies legacy replacement tokens."""
+        mock_get_settings.return_value = {
+            "core": {"tools": {"creator": {"product_name_profiles": []}}}
+        }
+        # The function should replace {task}/{family}/{asset} variants
+        mock_filter_profiles.return_value = {
+            "template": ("{task}-{Task}-{TASK}-{family}-{Family}"
+                         "-{FAMILY}-{asset}-{Asset}-{ASSET}")
+        }
+
+        result = get_product_name_template(
+            project_name="proj",
+            product_type="model",
+            task_name="modeling",
+            task_type="Modeling",
+            host_name="maya",
+        )
+        assert result == (
+            "{task[name]}-{Task[name]}-{TASK[NAME]}-"
+            "{product[type]}-{Product[type]}-{PRODUCT[TYPE]}-"
+            "{folder[name]}-{Folder[name]}-{FOLDER[NAME]}"
+        )
+
+    @patch("ayon_core.pipeline.create.product_name.get_project_settings")
+    @patch("ayon_core.pipeline.create.product_name.filter_profiles")
+    def test_no_matching_profile_uses_default(
+        self,
+        mock_filter_profiles,
+        mock_get_settings,
+    ):
+        mock_get_settings.return_value = {
+            "core": {"tools": {"creator": {"product_name_profiles": []}}}
+        }
+        mock_filter_profiles.return_value = None
+
+        assert (
+            get_product_name_template(
+                project_name="proj",
+                product_type="model",
+                task_name="modeling",
+                task_type="Modeling",
+                host_name="maya",
+            )
+            == DEFAULT_PRODUCT_TEMPLATE
+        )
+
+    @patch("ayon_core.pipeline.create.product_name.get_project_settings")
+    @patch("ayon_core.pipeline.create.product_name.filter_profiles")
+    def test_custom_default_template_used(
+        self,
+        mock_filter_profiles,
+        mock_get_settings,
+    ):
+        mock_get_settings.return_value = {
+            "core": {"tools": {"creator": {"product_name_profiles": []}}}
+        }
+        mock_filter_profiles.return_value = None
+
+        custom_default = "{variant}_{family}"
+        assert (
+            get_product_name_template(
+                project_name="proj",
+                product_type="model",
+                task_name="modeling",
+                task_type="Modeling",
+                host_name="maya",
+                default_template=custom_default,
+            )
+            == custom_default
+        )
+
+    @patch("ayon_core.pipeline.create.product_name.get_project_settings")
+    @patch("ayon_core.pipeline.create.product_name.filter_profiles")
+    def test_product_base_type_added_to_filtering_when_provided(
+        self,
+        mock_filter_profiles,
+        mock_get_settings,
+    ):
+        mock_get_settings.return_value = {
+            "core": {"tools": {"creator": {"product_name_profiles": []}}}
+        }
+        mock_filter_profiles.return_value = None
get_product_name_template(
project_name="proj",
product_type="model",
task_name="modeling",
task_type="Modeling",
host_name="maya",
product_base_type="asset",
)
args, kwargs = mock_filter_profiles.call_args
# args[1] is filtering_criteria
assert args[1]["product_base_types"] == "asset"
class TestGetProductName:
@patch("ayon_core.pipeline.create.product_name.get_product_name_template")
@patch("ayon_core.pipeline.create.product_name."
"StringTemplate.format_strict_template")
@patch("ayon_core.pipeline.create.product_name.prepare_template_data")
def test_empty_product_type_returns_empty(
self, mock_prepare, mock_format, mock_get_tmpl
):
assert (
get_product_name(
project_name="proj",
task_name="modeling",
task_type="Modeling",
host_name="maya",
product_type="",
variant="Main",
)
== ""
)
mock_get_tmpl.assert_not_called()
mock_format.assert_not_called()
mock_prepare.assert_not_called()
@patch("ayon_core.pipeline.create.product_name.get_product_name_template")
@patch("ayon_core.pipeline.create.product_name."
"StringTemplate.format_strict_template")
@patch("ayon_core.pipeline.create.product_name.prepare_template_data")
def test_happy_path(
self, mock_prepare, mock_format, mock_get_tmpl
):
mock_get_tmpl.return_value = "{task[name]}_{product[type]}_{variant}"
mock_prepare.return_value = {
"task": {"name": "modeling"},
"product": {"type": "model"},
"variant": "Main",
"family": "model",
}
mock_format.return_value = "modeling_model_Main"
result = get_product_name(
project_name="proj",
task_name="modeling",
task_type="Modeling",
host_name="maya",
product_type="model",
variant="Main",
)
assert result == "modeling_model_Main"
mock_get_tmpl.assert_called_once()
mock_prepare.assert_called_once()
mock_format.assert_called_once()
@patch("ayon_core.pipeline.create.product_name.get_product_name_template")
@patch("ayon_core.pipeline.create.product_name."
"StringTemplate.format_strict_template")
@patch("ayon_core.pipeline.create.product_name.prepare_template_data")
def test_product_name_with_base_type(
self, mock_prepare, mock_format, mock_get_tmpl
):
mock_get_tmpl.return_value = (
"{task[name]}_{product[basetype]}_{variant}"
)
mock_prepare.return_value = {
"task": {"name": "modeling"},
"product": {"type": "model"},
"variant": "Main",
"family": "model",
}
mock_format.return_value = "modeling_modelBase_Main"
result = get_product_name(
project_name="proj",
task_name="modeling",
task_type="Modeling",
host_name="maya",
product_type="model",
product_base_type="modelBase",
variant="Main",
)
assert result == "modeling_modelBase_Main"
mock_get_tmpl.assert_called_once()
mock_prepare.assert_called_once()
mock_format.assert_called_once()
@patch("ayon_core.pipeline.create.product_name.get_product_name_template")
def test_task_required_but_missing_raises(self, mock_get_tmpl):
mock_get_tmpl.return_value = "{task[name]}_{variant}"
with pytest.raises(TaskNotSetError):
get_product_name(
project_name="proj",
task_name="",
task_type="Modeling",
host_name="maya",
product_type="model",
variant="Main",
)
@patch("ayon_core.pipeline.create.product_name.get_product_name_template")
@patch("ayon_core.pipeline.create.product_name.ayon_api.get_project")
@patch("ayon_core.pipeline.create.product_name.StringTemplate."
"format_strict_template")
@patch("ayon_core.pipeline.create.product_name.prepare_template_data")
def test_task_short_name_is_used(
self, mock_prepare, mock_format, mock_get_project, mock_get_tmpl
):
mock_get_tmpl.return_value = "{task[short]}_{variant}"
mock_get_project.return_value = {
"taskTypes": [{"name": "Modeling", "shortName": "mdl"}]
}
mock_prepare.return_value = {
"task": {
"short": "mdl"
},
"variant": "Main"
}
mock_format.return_value = "mdl_Main"
result = get_product_name(
project_name="proj",
task_name="modeling",
task_type="Modeling",
host_name="maya",
product_type="model",
variant="Main",
)
assert result == "mdl_Main"
@patch("ayon_core.pipeline.create.product_name.get_product_name_template")
@patch("ayon_core.pipeline.create.product_name.StringTemplate."
"format_strict_template")
@patch("ayon_core.pipeline.create.product_name.prepare_template_data")
def test_template_fill_error_translated(
self, mock_prepare, mock_format, mock_get_tmpl
):
mock_get_tmpl.return_value = "{missing_key}_{variant}"
mock_prepare.return_value = {"variant": "Main"}
mock_format.side_effect = KeyError("missing_key")
with pytest.raises(TemplateFillError):
get_product_name(
project_name="proj",
task_name="modeling",
task_type="Modeling",
host_name="maya",
product_type="model",
variant="Main",
)
@patch("ayon_core.pipeline.create.product_name.warn")
@patch("ayon_core.pipeline.create.product_name.get_product_name_template")
@patch("ayon_core.pipeline.create.product_name."
"StringTemplate.format_strict_template")
@patch("ayon_core.pipeline.create.product_name.prepare_template_data")
def test_warns_when_template_needs_base_type_but_missing(
self,
mock_prepare,
mock_format,
mock_get_tmpl,
mock_warn,
):
mock_get_tmpl.return_value = "{product[basetype]}_{variant}"
mock_prepare.return_value = {
"product": {"type": "model"},
"variant": "Main",
"family": "model",
}
mock_format.return_value = "asset_Main"
_ = get_product_name(
project_name="proj",
task_name="modeling",
task_type="Modeling",
host_name="maya",
product_type="model",
variant="Main",
)
mock_warn.assert_called_once()
@patch("ayon_core.pipeline.create.product_name.get_product_name_template")
@patch("ayon_core.pipeline.create.product_name."
"StringTemplate.format_strict_template")
@patch("ayon_core.pipeline.create.product_name.prepare_template_data")
def test_dynamic_data_overrides_defaults(
self, mock_prepare, mock_format, mock_get_tmpl
):
mock_get_tmpl.return_value = "{custom}_{variant}"
mock_prepare.return_value = {"custom": "overridden", "variant": "Main"}
mock_format.return_value = "overridden_Main"
result = get_product_name(
project_name="proj",
task_name="modeling",
task_type="Modeling",
host_name="maya",
product_type="model",
variant="Main",
dynamic_data={"custom": "overridden"},
)
assert result == "overridden_Main"
@patch("ayon_core.pipeline.create.product_name.get_product_name_template")
def test_product_type_filter_is_used(self, mock_get_tmpl):
mock_get_tmpl.return_value = DEFAULT_PRODUCT_TEMPLATE
_ = get_product_name(
project_name="proj",
task_name="modeling",
task_type="Modeling",
host_name="maya",
product_type="model",
variant="Main",
product_type_filter="look",
)
args, kwargs = mock_get_tmpl.call_args
assert kwargs["product_type"] == "look"