mirror of https://github.com/ynput/ayon-core.git
synced 2025-12-24 21:04:40 +01:00

commit b71ca80bdb - resolved conflict
186 changed files with 11840 additions and 940 deletions
.github/ISSUE_TEMPLATE/bug_report.yml (vendored, 8 changes)

@@ -35,6 +35,10 @@ body:
       label: Version
       description: What version are you running? Look to OpenPype Tray
       options:
+        - 3.17.6-nightly.2
+        - 3.17.6-nightly.1
+        - 3.17.5
+        - 3.17.5-nightly.3
         - 3.17.5-nightly.2
         - 3.17.5-nightly.1
         - 3.17.4

@@ -131,10 +135,6 @@ body:
         - 3.15.2-nightly.1
         - 3.15.1
         - 3.15.1-nightly.6
         - 3.15.1-nightly.5
         - 3.15.1-nightly.4
         - 3.15.1-nightly.3
         - 3.15.1-nightly.2
       validations:
         required: true
   - type: dropdown
CHANGELOG.md (457 changes)

@@ -1,6 +1,463 @@
# Changelog

## [3.17.5](https://github.com/ynput/OpenPype/tree/3.17.5)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.17.4...3.17.5)

### **🆕 New features**

<details>
<summary>Fusion: Add USD loader <a href="https://github.com/ynput/OpenPype/pull/4896">#4896</a></summary>

Add an OpenPype-managed USD loader (`uLoader`) for Fusion.

___

</details>

<details>
<summary>Fusion: Resolution validator <a href="https://github.com/ynput/OpenPype/pull/5325">#5325</a></summary>

Added a resolution validator. The code comes from my old PR (https://github.com/ynput/OpenPype/pull/4921), which I closed because that PR also contained a frame range validator that is no longer needed.

___

</details>

<details>
<summary>Context Selection tool: Refactor Context tool (for AYON) <a href="https://github.com/ynput/OpenPype/pull/5766">#5766</a></summary>

The context selection tool now has an AYON variant.

___

</details>

<details>
<summary>AYON: Use AYON username for user in template data <a href="https://github.com/ynput/OpenPype/pull/5842">#5842</a></summary>

Use the AYON username for template data in AYON mode.

___

</details>

<details>
<summary>Testing: app_group flag <a href="https://github.com/ynput/OpenPype/pull/5869">#5869</a></summary>

Adds an `app_group` command flag for changing which flavour of the host to launch. In the case of Maya, you can launch Maya and MayaPy, but it can be used for the Nuke family as well. Split from #5644.

___

</details>
### **🚀 Enhancements**

<details>
<summary>Enhancement: Fusion fix saver creation + minor Blender/Fusion logging tweaks <a href="https://github.com/ynput/OpenPype/pull/5558">#5558</a></summary>

- Blender: change logs to `debug` level in preparation for the new publisher's artist-facing reports (note that it currently still uses the old publisher)
- Fusion: Create Saver - fix redeclaration of default_variants
- Fusion: Fix saver being created in an incorrect state when not saving directly after create
- Fusion: Allow resetting the frame range on the render family
- Fusion: Tweak logging level for the artist-facing report

___

</details>

<details>
<summary>Resolve: load clip to timeline at set time <a href="https://github.com/ynput/OpenPype/pull/5665">#5665</a></summary>

It is now possible to load a clip to the correct place on the timeline.

___

</details>

<details>
<summary>Nuke: Optional Deadline workfile dependency. <a href="https://github.com/ynput/OpenPype/pull/5732">#5732</a></summary>

Adds an option to add the workfile as a dependency for the Deadline job. I think it used to have something like this, but it disappeared. The use case is a remote workflow where the Nuke script needs to be synced before the job can start.

___

</details>

<details>
<summary>Enhancement/houdini rearrange ayon houdini settings files <a href="https://github.com/ynput/OpenPype/pull/5748">#5748</a></summary>

Rearranges the Houdini settings to be more readable and easier to edit and update (including all families/product types). This PR is mainly about giving the AYON settings more organized files. For OpenPype, I'll make sure that each Houdini setting in AYON has an equivalent in OpenPype.

- [x] Update AYON settings, fix typos and remove deprecated settings.
- [x] Sync with OpenPype
- [x] Test in OpenPype
- [x] Test in AYON

___

</details>

<details>
<summary>Chore: updating create ayon addon script <a href="https://github.com/ynput/OpenPype/pull/5822">#5822</a></summary>

Adds developer environment options.

___

</details>

<details>
<summary>Max: Implement Validator for Properties/Attributes Value Check <a href="https://github.com/ynput/OpenPype/pull/5824">#5824</a></summary>

Adds an optional validator which checks whether the property attributes are valid in Max.

___

</details>

<details>
<summary>Nuke: Remove unused 'get_render_path' function <a href="https://github.com/ynput/OpenPype/pull/5826">#5826</a></summary>

Remove the unused function `get_render_path` from the Nuke integration.

___

</details>

<details>
<summary>Chore: Limit current context template data function <a href="https://github.com/ynput/OpenPype/pull/5845">#5845</a></summary>

The current implementation of `get_current_context_template_data` returns the same values as the base template data function `get_template_data`.

___

</details>

<details>
<summary>Max: Make sure Collect Render not ignoring instance asset <a href="https://github.com/ynput/OpenPype/pull/5847">#5847</a></summary>

- Make sure Collect Render is not always using the asset from context.
- Make sure the scene version is being collected.
- Clean up unnecessary code in the collector.

___

</details>

<details>
<summary>Ftrack: Events are not processed if project is not available in OpenPype <a href="https://github.com/ynput/OpenPype/pull/5853">#5853</a></summary>

Events that happened on a project which is not available in OpenPype are not processed.

___

</details>

<details>
<summary>Nuke: Add Nuke 11.0 as default setting <a href="https://github.com/ynput/OpenPype/pull/5855">#5855</a></summary>

Found I needed Nuke 11.0 in the default settings to help with unit testing.

___

</details>

<details>
<summary>TVPaint: Code cleanup <a href="https://github.com/ynput/OpenPype/pull/5857">#5857</a></summary>

Removed an unused import. Use the `AYON` label in AYON mode. Removed the unused `"previous_context"` data from the publish context.

___

</details>

<details>
<summary>AYON settings: Use correct label for follow workfile version <a href="https://github.com/ynput/OpenPype/pull/5874">#5874</a></summary>

The follow workfile version label was mislabeled as the Collect Anatomy Instance Data label.

___

</details>
### **🐛 Bug fixes**

<details>
<summary>Nuke: Fix workfile template builder so representations get loaded next to each other <a href="https://github.com/ynput/OpenPype/pull/5061">#5061</a></summary>

Refactors when the cleanup of the placeholder happens for the cases where multiple representations are loaded by a single placeholder. The existing code didn't take into account that a template placeholder can load multiple representations, so it was trying to do the cleanup of the placeholder node and the re-arrangement of the imported nodes too early. I assume this was designed only for the cases where a single representation can load multiple nodes.

___

</details>

<details>
<summary>Nuke: Dont update node name on update <a href="https://github.com/ynput/OpenPype/pull/5704">#5704</a></summary>

When updating `Image` containers the code tries to set the name of the node, which results in a warning message from Nuke. Suggesting to not change the node name when updating.

___

</details>

<details>
<summary>UIDefLabel can be unique <a href="https://github.com/ynput/OpenPype/pull/5827">#5827</a></summary>

`UILabelDef` now implements comparison and uniqueness.

___

</details>

<details>
<summary>AYON: Skip kitsu module when creating ayon addons <a href="https://github.com/ynput/OpenPype/pull/5828">#5828</a></summary>

Create AYON packages now skips the kitsu module when creating modules/addons, and the kitsu module is not loaded from modules on start. The addon already has its own repository: https://github.com/ynput/ayon-kitsu.

___

</details>

<details>
<summary>Bugfix: Collect Rendered Files only collecting first instance <a href="https://github.com/ynput/OpenPype/pull/5832">#5832</a></summary>

Collect all instances from the metadata file - don't return on the first instance iteration.

___

</details>

<details>
<summary>Houdini: set frame range for the created composite ROP <a href="https://github.com/ynput/OpenPype/pull/5833">#5833</a></summary>

Quick bug fix for the created composite ROP: set its frame range to the frame range of the playbar.

___

</details>

<details>
<summary>Fix registering launcher actions from OpenPypeModules <a href="https://github.com/ynput/OpenPype/pull/5843">#5843</a></summary>

Fix typo `actions_dir` -> `path` to fix registering launcher actions from OpenPypeModule.

___

</details>

<details>
<summary>Bugfix in houdini shelves manager and beautify settings <a href="https://github.com/ynput/OpenPype/pull/5844">#5844</a></summary>

This PR fixes the problem in https://github.com/ynput/OpenPype/issues/5457 by using the right function to load a pre-made Houdini `.shelf` file. It also beautifies the Houdini shelves settings to provide better guidance for users, which helps with https://github.com/ynput/OpenPype/issues/5458. Rather than adding default shelf and set names, I'll educate users on how to use the tool correctly. Users are now able to select between the two options, in both OpenPype and AYON.

___

</details>

<details>
<summary>Blender: Fix missing Grease Pencils in review <a href="https://github.com/ynput/OpenPype/pull/5848">#5848</a></summary>

Fix Grease Pencil missing in review when isolating objects.

___

</details>

<details>
<summary>Blender: Fix Render Settings in Ayon <a href="https://github.com/ynput/OpenPype/pull/5849">#5849</a></summary>

Fix Render Settings in AYON for Blender.

___

</details>

<details>
<summary>Bugfix: houdini tab menu working as expected <a href="https://github.com/ynput/OpenPype/pull/5850">#5850</a></summary>

This PR: the tab menu name changes to AYON when using AYON, and `get_network_categories` is checked in all creator plugins.

| Product | Network Category |
| -- | -- |
| Alembic camera | rop, obj |
| Arnold Ass | rop |
| Arnold ROP | rop |
| Bgeo | rop, sop |
| Composite sequence | cop2, rop |
| HDA | obj |
| Karma ROP | rop |
| Mantra ROP | rop |
| ABC | rop, sop |
| RS proxy | rop, sop |
| RS ROP | rop |
| Review | rop |
| Static mesh | rop, obj, sop |
| USD | lop, rop |
| USD Render | rop |
| VDB | rop, obj, sop |
| V-Ray | rop |

___

</details>

<details>
<summary>Bugfix: Houdini skip frame_range_validator if node has no 'trange' parameter <a href="https://github.com/ynput/OpenPype/pull/5851">#5851</a></summary>

I faced a bug when publishing an HDA instance, as it has no `trange` parameter. As the PR title says: skip frame_range_validator if the node has no 'trange' parameter.

___

</details>

<details>
<summary>Bugfix: houdini image sequence loading and missing frames <a href="https://github.com/ynput/OpenPype/pull/5852">#5852</a></summary>

I made this PR to fix the issues mentioned in https://github.com/ynput/OpenPype/pull/5833#issuecomment-1789207727. In short:

- image load doesn't work
- the publisher only publishes one frame

___

</details>

<details>
<summary>Nuke: loaders' containers updating as nodes <a href="https://github.com/ynput/OpenPype/pull/5854">#5854</a></summary>

Loaded Nuke containers now update correctly even when they are duplicates of the originally loaded nodes. Previously, duplicated nodes were removed.

___

</details>

<details>
<summary>deadline: settings are not blocking extension input <a href="https://github.com/ynput/OpenPype/pull/5864">#5864</a></summary>

Settings no longer block user input.

___

</details>

<details>
<summary>Blender: Fix loading of blend layouts <a href="https://github.com/ynput/OpenPype/pull/5866">#5866</a></summary>

Fix a problem with loading blend layouts.

___

</details>

<details>
<summary>AYON: Launcher refresh issues <a href="https://github.com/ynput/OpenPype/pull/5867">#5867</a></summary>

Fixed a project refresh issue in the launcher tool, and renamed Qt models to contain `Qt` in their name (it was really hard to find out where they were used). It is no longer possible to click on a disabled item in the launcher's projects view.

___

</details>

<details>
<summary>Fix the Wrong key words for tycache workfile template settings in AYON <a href="https://github.com/ynput/OpenPype/pull/5870">#5870</a></summary>

Fix the wrong keywords for the tycache workfile template settings in AYON (i.e. `product_types` should be used instead of `families`).

___

</details>

<details>
<summary>AYON tools: Handle empty icon definition <a href="https://github.com/ynput/OpenPype/pull/5876">#5876</a></summary>

Ignore the passed icon definition if it is `None`.

___

</details>
### **🔀 Refactored code**

<details>
<summary>Houdini: Remove on instance toggled callback <a href="https://github.com/ynput/OpenPype/pull/5860">#5860</a></summary>

Remove the on-instance-toggled callback, which isn't relevant to the new publisher.

___

</details>

<details>
<summary>Chore: Remove unused `instanceToggled` callbacks <a href="https://github.com/ynput/OpenPype/pull/5862">#5862</a></summary>

The `instanceToggled` callbacks should be irrelevant for the new publisher.

___

</details>


## [3.17.4](https://github.com/ynput/OpenPype/tree/3.17.4)
@@ -282,6 +282,9 @@ def run(script):
               "--app_variant",
               help="Provide specific app variant for test, empty for latest",
               default=None)
+@click.option("--app_group",
+              help="Provide specific app group for test, empty for default",
+              default=None)
 @click.option("-t",
               "--timeout",
               help="Provide specific timeout value for test case",

@@ -294,11 +297,11 @@ def run(script):
               help="MongoDB for testing.",
               default=None)
 def runtests(folder, mark, pyargs, test_data_folder, persist, app_variant,
-             timeout, setup_only, mongo_url):
+             timeout, setup_only, mongo_url, app_group):
     """Run all automatic tests after proper initialization via start.py"""
     PypeCommands().run_tests(folder, mark, pyargs, test_data_folder,
                              persist, app_variant, timeout, setup_only,
-                             mongo_url)
+                             mongo_url, app_group)


 @main.command(help="DEPRECATED - run sync server")
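For illustration, a hypothetical invocation of the new flag (the exact entry point depends on the local setup): `python start.py runtests --app_group maya --app_variant 2024`.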
@@ -1,6 +1,7 @@
 from .mongo import (
     OpenPypeMongoConnection,
 )
+from .server.utils import get_ayon_server_api_connection

 from .entities import (
     get_projects,

@@ -59,6 +60,8 @@ from .operations import (
 __all__ = (
     "OpenPypeMongoConnection",

+    "get_ayon_server_api_connection",
+
     "get_projects",
     "get_project",
     "get_whole_project",
@@ -1,9 +1,8 @@
 import collections

-from ayon_api import get_server_api_connection
-
 from openpype.client.mongo.operations import CURRENT_THUMBNAIL_SCHEMA

+from .utils import get_ayon_server_api_connection
 from .openpype_comp import get_folders_with_tasks
 from .conversion_utils import (
     project_fields_v3_to_v4,

@@ -37,7 +36,7 @@ def get_projects(active=True, inactive=False, library=None, fields=None):
     elif inactive:
         active = False

-    con = get_server_api_connection()
+    con = get_ayon_server_api_connection()
     fields = project_fields_v3_to_v4(fields, con)
     for project in con.get_projects(active, library, fields=fields):
         yield convert_v4_project_to_v3(project)

@@ -45,7 +44,7 @@ def get_projects(active=True, inactive=False, library=None, fields=None):

 def get_project(project_name, active=True, inactive=False, fields=None):
     # Skip if both are disabled
-    con = get_server_api_connection()
+    con = get_ayon_server_api_connection()
     fields = project_fields_v3_to_v4(fields, con)
     return convert_v4_project_to_v3(
         con.get_project(project_name, fields=fields)

@@ -66,7 +65,7 @@ def _get_subsets(
     fields=None
 ):
     # Convert fields and add minimum required fields
-    con = get_server_api_connection()
+    con = get_ayon_server_api_connection()
     fields = subset_fields_v3_to_v4(fields, con)
     if fields is not None:
         for key in (

@@ -102,7 +101,7 @@ def _get_versions(
     active=None,
     fields=None
 ):
-    con = get_server_api_connection()
+    con = get_ayon_server_api_connection()

     fields = version_fields_v3_to_v4(fields, con)

@@ -198,7 +197,7 @@ def get_assets(
     if archived:
         active = None

-    con = get_server_api_connection()
+    con = get_ayon_server_api_connection()
     fields = folder_fields_v3_to_v4(fields, con)
     kwargs = dict(
         folder_ids=asset_ids,

@@ -236,7 +235,7 @@ def get_archived_assets(


 def get_asset_ids_with_subsets(project_name, asset_ids=None):
-    con = get_server_api_connection()
+    con = get_ayon_server_api_connection()
     return con.get_folder_ids_with_products(project_name, asset_ids)

@@ -282,7 +281,7 @@ def get_subsets(


 def get_subset_families(project_name, subset_ids=None):
-    con = get_server_api_connection()
+    con = get_ayon_server_api_connection()
     return con.get_product_type_names(project_name, subset_ids)

@@ -430,7 +429,7 @@ def get_output_link_versions(project_name, version_id, fields=None):
     if not version_id:
         return []

-    con = get_server_api_connection()
+    con = get_ayon_server_api_connection()
     version_links = con.get_version_links(
         project_name, version_id, link_direction="out")

@@ -446,7 +445,7 @@ def get_output_link_versions(project_name, version_id, fields=None):


 def version_is_latest(project_name, version_id):
-    con = get_server_api_connection()
+    con = get_ayon_server_api_connection()
     return con.version_is_latest(project_name, version_id)

@@ -501,7 +500,7 @@ def get_representations(
     else:
         active = None

-    con = get_server_api_connection()
+    con = get_ayon_server_api_connection()
     fields = representation_fields_v3_to_v4(fields, con)
     if fields and active is not None:
         fields.add("active")

@@ -535,7 +534,7 @@ def get_representations_parents(project_name, representations):
         repre["_id"]
         for repre in representations
     }
-    con = get_server_api_connection()
+    con = get_ayon_server_api_connection()
     parents_by_repre_id = con.get_representations_parents(project_name,
                                                           repre_ids)
     folder_ids = set()

@@ -677,7 +676,7 @@ def get_workfile_info(
     if not asset_id or not task_name or not filename:
         return None

-    con = get_server_api_connection()
+    con = get_ayon_server_api_connection()
     task = con.get_task_by_name(
         project_name, asset_id, task_name, fields=["id", "name", "folderId"]
     )
@@ -1,6 +1,4 @@
-import ayon_api
-from ayon_api import get_folder_links, get_versions_links
-
+from .utils import get_ayon_server_api_connection
 from .entities import get_assets, get_representation_by_id

@@ -28,7 +26,8 @@ def get_linked_asset_ids(project_name, asset_doc=None, asset_id=None):
     if not asset_id:
         asset_id = asset_doc["_id"]

-    links = get_folder_links(project_name, asset_id, link_direction="in")
+    con = get_ayon_server_api_connection()
+    links = con.get_folder_links(project_name, asset_id, link_direction="in")
     return [
         link["entityId"]
         for link in links

@@ -115,6 +114,7 @@ def get_linked_representation_id(
     if link_type:
         link_types = [link_type]

+    con = get_ayon_server_api_connection()
     # Store already found version ids to avoid recursion, and also to store
     # output -> Don't forget to remove 'version_id' at the end!!!
     linked_version_ids = {version_id}

@@ -124,7 +124,7 @@ def get_linked_representation_id(
         if not versions_to_check:
             break

-        links = get_versions_links(
+        links = con.get_versions_links(
             project_name,
             versions_to_check,
             link_types=link_types,

@@ -145,8 +145,8 @@ def get_linked_representation_id(
     linked_version_ids.remove(version_id)
     if not linked_version_ids:
         return []

-    representations = ayon_api.get_representations(
+    con = get_ayon_server_api_connection()
+    representations = con.get_representations(
         project_name,
         version_ids=linked_version_ids,
         fields=["id"])
@@ -5,7 +5,6 @@ import uuid
 import datetime

 from bson.objectid import ObjectId
-from ayon_api import get_server_api_connection

 from openpype.client.operations_base import (
     REMOVED_VALUE,

@@ -41,7 +40,7 @@ from .conversion_utils import (
     convert_update_representation_to_v4,
     convert_update_workfile_info_to_v4,
 )
-from .utils import create_entity_id
+from .utils import create_entity_id, get_ayon_server_api_connection


 def _create_or_convert_to_id(entity_id=None):

@@ -680,7 +679,7 @@ class OperationsSession(BaseOperationsSession):
     def __init__(self, con=None, *args, **kwargs):
         super(OperationsSession, self).__init__(*args, **kwargs)
         if con is None:
-            con = get_server_api_connection()
+            con = get_ayon_server_api_connection()
         self._con = con
         self._project_cache = {}
         self._nested_operations = collections.defaultdict(list)

@@ -858,7 +857,7 @@ def create_project(
     """

     if con is None:
-        con = get_server_api_connection()
+        con = get_ayon_server_api_connection()

     return con.create_project(
         project_name,

@@ -870,12 +869,12 @@ def create_project(

 def delete_project(project_name, con=None):
     if con is None:
-        con = get_server_api_connection()
+        con = get_ayon_server_api_connection()

     return con.delete_project(project_name)


 def create_thumbnail(project_name, src_filepath, thumbnail_id=None, con=None):
     if con is None:
-        con = get_server_api_connection()
+        con = get_ayon_server_api_connection()
     return con.create_thumbnail(project_name, src_filepath, thumbnail_id)
@@ -1,8 +1,33 @@
+import os
 import uuid

+import ayon_api
+
 from openpype.client.operations_base import REMOVED_VALUE


+class _GlobalCache:
+    initialized = False
+
+
+def get_ayon_server_api_connection():
+    if _GlobalCache.initialized:
+        con = ayon_api.get_server_api_connection()
+    else:
+        from openpype.lib.local_settings import get_local_site_id
+
+        _GlobalCache.initialized = True
+        site_id = get_local_site_id()
+        version = os.getenv("AYON_VERSION")
+        if ayon_api.is_connection_created():
+            con = ayon_api.get_server_api_connection()
+            con.set_site_id(site_id)
+            con.set_client_version(version)
+        else:
+            con = ayon_api.create_connection(site_id, version)
+    return con
+
+
 def create_entity_id():
     return uuid.uuid1().hex
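A minimal usage sketch of the new helper, assuming a reachable AYON server and the `ayon_api` package: the first call initializes the global connection with the local site id and client version, and later calls reuse it.

from openpype.client.server.utils import get_ayon_server_api_connection

con = get_ayon_server_api_connection()        # first call configures site id + client version
con_again = get_ayon_server_api_connection()  # later calls reuse the same global connection
projects = list(con.get_projects())           # regular ayon_api connection methods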
@@ -74,11 +74,6 @@ class AfterEffectsHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):

         register_loader_plugin_path(LOAD_PATH)
         register_creator_plugin_path(CREATE_PATH)
         log.info(PUBLISH_PATH)

-        pyblish.api.register_callback(
-            "instanceToggled", on_pyblish_instance_toggled
-        )
-
         register_event_callback("application.launched", application_launch)

@@ -186,11 +181,6 @@ def application_launch():
     check_inventory()


-def on_pyblish_instance_toggled(instance, old_value, new_value):
-    """Toggle layer visibility on instance toggles."""
-    instance[0].Visible = new_value
-
-
 def ls():
     """Yields containers from active AfterEffects document.
@@ -266,9 +266,57 @@ def read(node: bpy.types.bpy_struct_meta_idprop):
     return data


-def get_selection() -> List[bpy.types.Object]:
-    """Return the selected objects from the current scene."""
-    return [obj for obj in bpy.context.scene.objects if obj.select_get()]
+def get_selected_collections():
+    """
+    Returns a list of the currently selected collections in the outliner.
+
+    Raises:
+        RuntimeError: If the outliner cannot be found in the main Blender
+            window.
+
+    Returns:
+        list: A list of `bpy.types.Collection` objects that are currently
+            selected in the outliner.
+    """
+    try:
+        area = next(
+            area for area in bpy.context.window.screen.areas
+            if area.type == 'OUTLINER')
+        region = next(
+            region for region in area.regions
+            if region.type == 'WINDOW')
+    except StopIteration as e:
+        raise RuntimeError("Could not find outliner. An outliner space "
+                           "must be in the main Blender window.") from e
+
+    with bpy.context.temp_override(
+        window=bpy.context.window,
+        area=area,
+        region=region,
+        screen=bpy.context.window.screen
+    ):
+        ids = bpy.context.selected_ids
+
+    return [id for id in ids if isinstance(id, bpy.types.Collection)]
+
+
+def get_selection(include_collections: bool = False) -> List[bpy.types.Object]:
+    """
+    Returns a list of selected objects in the current Blender scene.
+
+    Args:
+        include_collections (bool, optional): Whether to include selected
+            collections in the result. Defaults to False.
+
+    Returns:
+        List[bpy.types.Object]: A list of selected objects.
+    """
+    selection = [obj for obj in bpy.context.scene.objects if obj.select_get()]
+
+    if include_collections:
+        selection.extend(get_selected_collections())
+
+    return selection


 @contextlib.contextmanager
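A short usage sketch of the extended selection helpers, assuming a recent Blender (the new helper relies on `bpy.context.temp_override` and `bpy.context.selected_ids`) with an outliner present in the main window, run from the Python console of a scene managed by OpenPype:

from openpype.hosts.blender.api import lib

objects = lib.get_selection()  # objects only, the previous behaviour
everything = lib.get_selection(include_collections=True)  # also collections selected in the outliner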
@@ -9,7 +9,10 @@ from openpype.pipeline import (
     LegacyCreator,
     LoaderPlugin,
 )
-from .pipeline import AVALON_CONTAINERS
+from .pipeline import (
+    AVALON_CONTAINERS,
+    AVALON_PROPERTY,
+)
 from .ops import (
     MainThreadItem,
     execute_in_main_thread

@@ -40,9 +43,16 @@ def get_unique_number(
     avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
     if not avalon_container:
         return "01"
-    asset_groups = avalon_container.all_objects
-
-    container_names = [c.name for c in asset_groups if c.type == 'EMPTY']
+    # Check the names of both object and collection containers
+    obj_asset_groups = avalon_container.objects
+    obj_group_names = {
+        c.name for c in obj_asset_groups
+        if c.type == 'EMPTY' and c.get(AVALON_PROPERTY)}
+    coll_asset_groups = avalon_container.children
+    coll_group_names = {
+        c.name for c in coll_asset_groups
+        if c.get(AVALON_PROPERTY)}
+    container_names = obj_group_names.union(coll_group_names)
     count = 1
     name = f"{asset}_{count:0>2}_{subset}"
     while name in container_names:
@@ -15,6 +15,8 @@ class CreateBlendScene(plugin.Creator):
     family = "blendScene"
     icon = "cubes"

+    maintain_selection = False
+
     def process(self):
         """ Run the creator on Blender main thread"""
         mti = ops.MainThreadItem(self._process)

@@ -31,21 +33,20 @@ class CreateBlendScene(plugin.Creator):
         asset = self.data["asset"]
         subset = self.data["subset"]
         name = plugin.asset_name(asset, subset)
-        asset_group = bpy.data.objects.new(name=name, object_data=None)
-        asset_group.empty_display_type = 'SINGLE_ARROW'
-        instances.objects.link(asset_group)
+
+        # Create the new asset group as collection
+        asset_group = bpy.data.collections.new(name=name)
+        instances.children.link(asset_group)
         self.data['task'] = get_current_task_name()
         lib.imprint(asset_group, self.data)

         # Add selected objects to instance
         if (self.options or {}).get("useSelection"):
-            bpy.context.view_layer.objects.active = asset_group
-            selected = lib.get_selection()
-            for obj in selected:
-                if obj.parent in selected:
-                    obj.select_set(False)
-                    continue
-            selected.append(asset_group)
-            bpy.ops.object.parent_set(keep_transform=True)
+            selection = lib.get_selection(include_collections=True)
+
+            for data in selection:
+                if isinstance(data, bpy.types.Collection):
+                    asset_group.children.link(data)
+                elif isinstance(data, bpy.types.Object):
+                    asset_group.objects.link(data)

         return asset_group
@@ -20,7 +20,7 @@ from openpype.hosts.blender.api.pipeline import (
 class BlendLoader(plugin.AssetLoader):
     """Load assets from a .blend file."""

-    families = ["model", "rig", "layout", "camera", "blendScene"]
+    families = ["model", "rig", "layout", "camera"]
     representations = ["blend"]

     label = "Append Blend"

@@ -32,7 +32,7 @@ class BlendLoader(plugin.AssetLoader):
         empties = [obj for obj in objects if obj.type == 'EMPTY']

         for empty in empties:
-            if empty.get(AVALON_PROPERTY):
+            if empty.get(AVALON_PROPERTY) and empty.parent is None:
                 return empty

         return None

@@ -90,6 +90,7 @@ class BlendLoader(plugin.AssetLoader):
             members.append(data)

         container = self._get_asset_container(data_to.objects)
+        print(container)
         assert container, "No asset group found"

         container.name = group_name

@@ -100,8 +101,11 @@ class BlendLoader(plugin.AssetLoader):

         # Link all the container children to the collection
         for obj in container.children_recursive:
+            print(obj)
             bpy.context.scene.collection.objects.link(obj)

+        print("")
+
         # Remove the library from the blend file
         library = bpy.data.libraries.get(bpy.path.basename(libpath))
         bpy.data.libraries.remove(library)
openpype/hosts/blender/plugins/load/load_blendscene.py (new file, 221 lines)

@@ -0,0 +1,221 @@
from typing import Dict, List, Optional
from pathlib import Path

import bpy

from openpype.pipeline import (
    get_representation_path,
    AVALON_CONTAINER_ID,
)
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.lib import imprint
from openpype.hosts.blender.api.pipeline import (
    AVALON_CONTAINERS,
    AVALON_PROPERTY,
)


class BlendSceneLoader(plugin.AssetLoader):
    """Load assets from a .blend file."""

    families = ["blendScene"]
    representations = ["blend"]

    label = "Append Blend"
    icon = "code-fork"
    color = "orange"

    @staticmethod
    def _get_asset_container(collections):
        for coll in collections:
            parents = [c for c in collections if c.user_of_id(coll)]
            if coll.get(AVALON_PROPERTY) and not parents:
                return coll

        return None

    def _process_data(self, libpath, group_name, family):
        # Append all the data from the .blend file
        with bpy.data.libraries.load(
            libpath, link=False, relative=False
        ) as (data_from, data_to):
            for attr in dir(data_to):
                setattr(data_to, attr, getattr(data_from, attr))

        members = []

        # Rename the object to add the asset name
        for attr in dir(data_to):
            for data in getattr(data_to, attr):
                data.name = f"{group_name}:{data.name}"
                members.append(data)

        container = self._get_asset_container(
            data_to.collections)
        assert container, "No asset group found"

        container.name = group_name

        # Link the group to the scene
        bpy.context.scene.collection.children.link(container)

        # Remove the library from the blend file
        library = bpy.data.libraries.get(bpy.path.basename(libpath))
        bpy.data.libraries.remove(library)

        return container, members

    def process_asset(
        self, context: dict, name: str, namespace: Optional[str] = None,
        options: Optional[Dict] = None
    ) -> Optional[List]:
        """
        Arguments:
            name: Use pre-defined name
            namespace: Use pre-defined namespace
            context: Full parenthood of representation to load
            options: Additional settings dictionary
        """
        libpath = self.filepath_from_context(context)
        asset = context["asset"]["name"]
        subset = context["subset"]["name"]

        try:
            family = context["representation"]["context"]["family"]
        except ValueError:
            family = "model"

        asset_name = plugin.asset_name(asset, subset)
        unique_number = plugin.get_unique_number(asset, subset)
        group_name = plugin.asset_name(asset, subset, unique_number)
        namespace = namespace or f"{asset}_{unique_number}"

        avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
        if not avalon_container:
            avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS)
            bpy.context.scene.collection.children.link(avalon_container)

        container, members = self._process_data(libpath, group_name, family)

        avalon_container.children.link(container)

        data = {
            "schema": "openpype:container-2.0",
            "id": AVALON_CONTAINER_ID,
            "name": name,
            "namespace": namespace or '',
            "loader": str(self.__class__.__name__),
            "representation": str(context["representation"]["_id"]),
            "libpath": libpath,
            "asset_name": asset_name,
            "parent": str(context["representation"]["parent"]),
            "family": context["representation"]["context"]["family"],
            "objectName": group_name,
            "members": members,
        }

        container[AVALON_PROPERTY] = data

        objects = [
            obj for obj in bpy.data.objects
            if obj.name.startswith(f"{group_name}:")
        ]

        self[:] = objects
        return objects

    def exec_update(self, container: Dict, representation: Dict):
        """
        Update the loaded asset.
        """
        group_name = container["objectName"]
        asset_group = bpy.data.collections.get(group_name)
        libpath = Path(get_representation_path(representation)).as_posix()

        assert asset_group, (
            f"The asset is not loaded: {container['objectName']}"
        )

        # Get the parents of the members of the asset group, so we can
        # re-link them after the update.
        # Also gets the transform for each object to reapply after the update.
        collection_parents = {}
        member_transforms = {}
        members = asset_group.get(AVALON_PROPERTY).get("members", [])
        loaded_collections = {c for c in bpy.data.collections if c in members}
        loaded_collections.add(bpy.data.collections.get(AVALON_CONTAINERS))
        for member in members:
            if isinstance(member, bpy.types.Object):
                member_parents = set(member.users_collection)
                member_transforms[member.name] = member.matrix_basis.copy()
            elif isinstance(member, bpy.types.Collection):
                member_parents = {
                    c for c in bpy.data.collections if c.user_of_id(member)}
            else:
                continue

            member_parents = member_parents.difference(loaded_collections)
            if member_parents:
                collection_parents[member.name] = list(member_parents)

        old_data = dict(asset_group.get(AVALON_PROPERTY))

        self.exec_remove(container)

        family = container["family"]
        asset_group, members = self._process_data(libpath, group_name, family)

        for member in members:
            if member.name in collection_parents:
                for parent in collection_parents[member.name]:
                    if isinstance(member, bpy.types.Object):
                        parent.objects.link(member)
                    elif isinstance(member, bpy.types.Collection):
                        parent.children.link(member)
            if member.name in member_transforms and isinstance(
                member, bpy.types.Object
            ):
                member.matrix_basis = member_transforms[member.name]

        avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
        avalon_container.children.link(asset_group)

        # Restore the old data, but reset members, as they don't exist anymore
        # This avoids a crash, because the memory addresses of those members
        # are not valid anymore
        old_data["members"] = []
        asset_group[AVALON_PROPERTY] = old_data

        new_data = {
            "libpath": libpath,
            "representation": str(representation["_id"]),
            "parent": str(representation["parent"]),
            "members": members,
        }

        imprint(asset_group, new_data)

    def exec_remove(self, container: Dict) -> bool:
        """
        Remove an existing container from a Blender scene.
        """
        group_name = container["objectName"]
        asset_group = bpy.data.collections.get(group_name)

        members = set(asset_group.get(AVALON_PROPERTY).get("members", []))

        if members:
            for attr_name in dir(bpy.data):
                attr = getattr(bpy.data, attr_name)
                if not isinstance(attr, bpy.types.bpy_prop_collection):
                    continue

                # ensure to make a list copy because
                # we remove members as we iterate
                for data in list(attr):
                    if data not in members or data == asset_group:
                        continue

                    attr.remove(data)

        bpy.data.collections.remove(asset_group)
@@ -1,4 +1,3 @@
-import json
 from typing import Generator

 import bpy

@@ -50,6 +49,7 @@ class CollectInstances(pyblish.api.ContextPlugin):

         for group in asset_groups:
             instance = self.create_instance(context, group)
+            instance.data["instance_group"] = group
             members = []
             if isinstance(group, bpy.types.Collection):
                 members = list(group.objects)

@@ -65,6 +65,6 @@ class CollectInstances(pyblish.api.ContextPlugin):

             members.append(group)
             instance[:] = members
-            self.log.debug(json.dumps(instance.data, indent=4))
+            self.log.debug(instance.data)
             for obj in instance:
                 self.log.debug(obj)
@@ -31,11 +31,12 @@ class CollectReview(pyblish.api.InstancePlugin):

         focal_length = cameras[0].data.lens

-        # get isolate objects list from meshes instance members .
+        # get isolate objects list from meshes instance members.
+        types = {"MESH", "GPENCIL"}
         isolate_objects = [
             obj
             for obj in instance
-            if isinstance(obj, bpy.types.Object) and obj.type == "MESH"
+            if isinstance(obj, bpy.types.Object) and obj.type in types
         ]

         if not instance.data.get("remove"):
@@ -25,19 +25,27 @@ class ExtractBlend(publish.Extractor):

         data_blocks = set()

-        for obj in instance:
-            data_blocks.add(obj)
+        for data in instance:
+            data_blocks.add(data)
             # Pack used images in the blend files.
-            if obj.type == 'MESH':
-                for material_slot in obj.material_slots:
-                    mat = material_slot.material
-                    if mat and mat.use_nodes:
-                        tree = mat.node_tree
-                        if tree.type == 'SHADER':
-                            for node in tree.nodes:
-                                if node.bl_idname == 'ShaderNodeTexImage':
-                                    if node.image:
-                                        node.image.pack()
+            if not (
+                isinstance(data, bpy.types.Object) and data.type == 'MESH'
+            ):
+                continue
+            for material_slot in data.material_slots:
+                mat = material_slot.material
+                if not (mat and mat.use_nodes):
+                    continue
+                tree = mat.node_tree
+                if tree.type != 'SHADER':
+                    continue
+                for node in tree.nodes:
+                    if node.bl_idname != 'ShaderNodeTexImage':
+                        continue
+                    # Check if image is not packed already
+                    # and pack it if not.
+                    if node.image and node.image.packed_file is None:
+                        node.image.pack()

         bpy.data.libraries.write(filepath, data_blocks)
@@ -0,0 +1,23 @@
import bpy

import pyblish.api


class ValidateInstanceEmpty(pyblish.api.InstancePlugin):
    """Validator to verify that the instance is not empty"""

    order = pyblish.api.ValidatorOrder - 0.01
    hosts = ["blender"]
    families = ["model", "pointcache", "rig", "camera", "layout", "blendScene"]
    label = "Validate Instance is not Empty"
    optional = False

    def process(self, instance):
        asset_group = instance.data["instance_group"]

        if isinstance(asset_group, bpy.types.Collection):
            if not (asset_group.objects or asset_group.children):
                raise RuntimeError(f"Instance {instance.name} is empty.")
        elif isinstance(asset_group, bpy.types.Object):
            if not asset_group.children:
                raise RuntimeError(f"Instance {instance.name} is empty.")
@@ -173,6 +173,7 @@ def install():
     os.remove(filepath)

     icon = get_openpype_icon_filepath()
+    tab_menu_label = os.environ.get("AVALON_LABEL") or "AYON"

     # Create context only to get creator plugins, so we don't reset and only
     # populate what we need to retrieve the list of creator plugins

@@ -197,14 +198,14 @@ def install():
         if not network_categories:
             continue

-        key = "openpype_create.{}".format(identifier)
+        key = "ayon_create.{}".format(identifier)
         log.debug(f"Registering {key}")
         script = CREATE_SCRIPT.format(identifier=identifier)
         data = {
             "script": script,
             "language": hou.scriptLanguage.Python,
             "icon": icon,
-            "help": "Create OpenPype publish instance for {}".format(
+            "help": "Create Ayon publish instance for {}".format(
                 creator.label
             ),
             "help_url": None,

@@ -213,7 +214,7 @@ def install():
             "cop_viewer_categories": [],
             "network_op_type": None,
             "viewer_op_type": None,
-            "locations": ["OpenPype"]
+            "locations": [tab_menu_label]
         }
         label = "Create {}".format(creator.label)
         tool = hou.shelves.tool(key)
@@ -569,9 +569,9 @@ def get_template_from_value(key, value):
     return parm


-def get_frame_data(node, handle_start=0, handle_end=0, log=None):
-    """Get the frame data: start frame, end frame, steps,
-    start frame with start handle and end frame with end handle.
+def get_frame_data(node, log=None):
+    """Get the frame data: `frameStartHandle`, `frameEndHandle`
+    and `byFrameStep`.

     This function uses Houdini node's `trange`, `t1`, `t2` and `t3`
     parameters as the source of truth for the full inclusive frame

@@ -579,20 +579,17 @@ def get_frame_data(node, handle_start=0, handle_end=0, log=None):
     range including the handles.

     The non-inclusive frame start and frame end without handles
-    are computed by subtracting the handles from the inclusive
+    can be computed by subtracting the handles from the inclusive
     frame range.

     Args:
         node (hou.Node): ROP node to retrieve frame range from,
             the frame range is assumed to be the frame range
             *including* the start and end handles.
-        handle_start (int): Start handles.
-        handle_end (int): End handles.
         log (logging.Logger): Logger to log to.

     Returns:
-        dict: frame data for start, end, steps,
-            start with handle and end with handle
+        dict: frame data for `frameStartHandle`, `frameEndHandle`
+            and `byFrameStep`.

     """

@@ -623,11 +620,6 @@ def get_frame_data(node, handle_start=0, handle_end=0, log=None):
     data["frameEndHandle"] = int(node.evalParm("f2"))
     data["byFrameStep"] = node.evalParm("f3")

-    data["handleStart"] = handle_start
-    data["handleEnd"] = handle_end
-    data["frameStart"] = data["frameStartHandle"] + data["handleStart"]
-    data["frameEnd"] = data["frameEndHandle"] - data["handleEnd"]
-
     return data
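A small illustrative sketch of the new split of responsibilities, with made-up frame numbers: `get_frame_data` now only reads the handle-inclusive range from the ROP, and the handle arithmetic moves to the new CollectAssetHandles plugin added later in this commit:

data = get_frame_data(rop_node)
# e.g. {"frameStartHandle": 991, "frameEndHandle": 1110, "byFrameStep": 1.0}
handle_start, handle_end = 10, 10  # hypothetical asset handles
frame_start = data["frameStartHandle"] + handle_start  # 1001, start without handles
frame_end = data["frameEndHandle"] - handle_end        # 1100, end without handles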
@@ -1018,7 +1010,7 @@ def self_publish():
 def add_self_publish_button(node):
     """Adds a self publish button to the rop node."""

-    label = os.environ.get("AVALON_LABEL") or "OpenPype"
+    label = os.environ.get("AVALON_LABEL") or "AYON"

     button_parm = hou.ButtonParmTemplate(
         "ayon_self_publish",
@@ -3,7 +3,6 @@
 import os
 import sys
 import logging
-import contextlib

 import hou  # noqa

@@ -66,10 +65,6 @@ class HoudiniHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
         register_event_callback("open", on_open)
         register_event_callback("new", on_new)

-        pyblish.api.register_callback(
-            "instanceToggled", on_pyblish_instance_toggled
-        )
-
         self._has_been_setup = True
         # add houdini vendor packages
         hou_pythonpath = os.path.join(HOUDINI_HOST_DIR, "vendor")

@@ -406,54 +401,3 @@ def _set_context_settings():

     lib.reset_framerange()
     lib.update_houdini_vars_context()
-
-
-def on_pyblish_instance_toggled(instance, new_value, old_value):
-    """Toggle saver tool passthrough states on instance toggles."""
-    @contextlib.contextmanager
-    def main_take(no_update=True):
-        """Enter root take during context"""
-        original_take = hou.takes.currentTake()
-        original_update_mode = hou.updateModeSetting()
-        root = hou.takes.rootTake()
-        has_changed = False
-        try:
-            if original_take != root:
-                has_changed = True
-                if no_update:
-                    hou.setUpdateMode(hou.updateMode.Manual)
-                hou.takes.setCurrentTake(root)
-            yield
-        finally:
-            if has_changed:
-                if no_update:
-                    hou.setUpdateMode(original_update_mode)
-                hou.takes.setCurrentTake(original_take)
-
-    if not instance.data.get("_allowToggleBypass", True):
-        return
-
-    nodes = instance[:]
-    if not nodes:
-        return
-
-    # Assume instance node is first node
-    instance_node = nodes[0]
-
-    if not hasattr(instance_node, "isBypassed"):
-        # Likely not a node that can actually be bypassed
-        log.debug("Can't bypass node: %s", instance_node.path())
-        return
-
-    if instance_node.isBypassed() != (not old_value):
-        print("%s old bypass state didn't match old instance state, "
-              "updating anyway.." % instance_node.path())
-
-    try:
-        # Go into the main take, because when in another take changing
-        # the bypass state of a note cannot be done due to it being locked
-        # by default.
-        with main_take(no_update=True):
-            instance_node.bypass(not new_value)
-    except hou.PermissionError as exc:
-        log.warning("%s - %s", instance_node.path(), exc)
@@ -24,29 +24,33 @@ def generate_shelves():
     # load configuration of houdini shelves
     project_name = get_current_project_name()
     project_settings = get_project_settings(project_name)
-    shelves_set_config = project_settings["houdini"]["shelves"]
+    shelves_configs = project_settings["houdini"]["shelves"]

-    if not shelves_set_config:
+    if not shelves_configs:
         log.debug("No custom shelves found in project settings.")
         return

     # Get Template data
     template_data = get_current_context_template_data_with_asset_data()

-    for shelf_set_config in shelves_set_config:
-        shelf_set_filepath = shelf_set_config.get('shelf_set_source_path')
-        shelf_set_os_filepath = shelf_set_filepath[current_os]
-        if shelf_set_os_filepath:
-            shelf_set_os_filepath = get_path_using_template_data(
-                shelf_set_os_filepath, template_data
-            )
-            if not os.path.isfile(shelf_set_os_filepath):
-                log.error("Shelf path doesn't exist - "
-                          "{}".format(shelf_set_os_filepath))
-                continue
+    for config in shelves_configs:
+        selected_option = config["options"]
+        shelf_set_config = config[selected_option]

-            hou.shelves.newShelfSet(file_path=shelf_set_os_filepath)
-            continue
+        shelf_set_filepath = shelf_set_config.get('shelf_set_source_path')
+        if shelf_set_filepath:
+            shelf_set_os_filepath = shelf_set_filepath[current_os]
+            if shelf_set_os_filepath:
+                shelf_set_os_filepath = get_path_using_template_data(
+                    shelf_set_os_filepath, template_data
+                )
+                if not os.path.isfile(shelf_set_os_filepath):
+                    log.error("Shelf path doesn't exist - "
+                              "{}".format(shelf_set_os_filepath))
+                    continue
+
+                hou.shelves.loadFile(shelf_set_os_filepath)
+                continue

         shelf_set_name = shelf_set_config.get('shelf_set_name')
         if not shelf_set_name:
@@ -3,6 +3,7 @@
 from openpype.hosts.houdini.api import plugin
 from openpype.pipeline import CreatedInstance, CreatorError
 from openpype.lib import EnumDef
+import hou


 class CreateBGEO(plugin.HoudiniCreator):

@@ -13,7 +14,6 @@ class CreateBGEO(plugin.HoudiniCreator):
     icon = "gears"

     def create(self, subset_name, instance_data, pre_create_data):
-        import hou

         instance_data.pop("active", None)

@@ -90,3 +90,9 @@ class CreateBGEO(plugin.HoudiniCreator):
         return attrs + [
             EnumDef("bgeo_type", bgeo_enum, label="BGEO Options"),
         ]
+
+    def get_network_categories(self):
+        return [
+            hou.ropNodeTypeCategory(),
+            hou.sopNodeTypeCategory()
+        ]
@@ -45,6 +45,11 @@ class CreateCompositeSequence(plugin.HoudiniCreator):

         instance_node.setParms(parms)

+        # Manually set f1 & f2 to $FSTART and $FEND respectively
+        # to match other Houdini nodes default.
+        instance_node.parm("f1").setExpression("$FSTART")
+        instance_node.parm("f2").setExpression("$FEND")
+
         # Lock any parameters in this list
         to_lock = ["prim_to_detail_pattern"]
         self.lock_parameters(instance_node, to_lock)
@@ -5,6 +5,7 @@ from openpype.client import (
     get_subsets,
 )
 from openpype.hosts.houdini.api import plugin
+import hou


 class CreateHDA(plugin.HoudiniCreator):

@@ -35,7 +36,6 @@ class CreateHDA(plugin.HoudiniCreator):

     def create_instance_node(
             self, node_name, parent, node_type="geometry"):
-        import hou

         parent_node = hou.node("/obj")
         if self.selected_nodes:

@@ -81,3 +81,8 @@ class CreateHDA(plugin.HoudiniCreator):
             pre_create_data)  # type: plugin.CreatedInstance

         return instance
+
+    def get_network_categories(self):
+        return [
+            hou.objNodeTypeCategory()
+        ]
@@ -1,7 +1,7 @@
 # -*- coding: utf-8 -*-
 """Creator plugin for creating Redshift proxies."""
 from openpype.hosts.houdini.api import plugin
-from openpype.pipeline import CreatedInstance
+import hou


 class CreateRedshiftProxy(plugin.HoudiniCreator):

@@ -12,7 +12,7 @@ class CreateRedshiftProxy(plugin.HoudiniCreator):
     icon = "magic"

     def create(self, subset_name, instance_data, pre_create_data):
-        import hou  # noqa

+        # Remove the active, we are checking the bypass flag of the nodes
         instance_data.pop("active", None)

@@ -28,7 +28,7 @@ class CreateRedshiftProxy(plugin.HoudiniCreator):
         instance = super(CreateRedshiftProxy, self).create(
             subset_name,
             instance_data,
-            pre_create_data)  # type: CreatedInstance
+            pre_create_data)

         instance_node = hou.node(instance.get("instance_node"))

@@ -44,3 +44,9 @@ class CreateRedshiftProxy(plugin.HoudiniCreator):
         # Lock some Avalon attributes
         to_lock = ["family", "id", "prim_to_detail_pattern"]
         self.lock_parameters(instance_node, to_lock)
+
+    def get_network_categories(self):
+        return [
+            hou.ropNodeTypeCategory(),
+            hou.sopNodeTypeCategory()
+        ]
@@ -54,6 +54,7 @@ class CreateStaticMesh(plugin.HoudiniCreator):
     def get_network_categories(self):
         return [
             hou.ropNodeTypeCategory(),
             hou.objNodeTypeCategory(),
             hou.sopNodeTypeCategory()
         ]
@@ -40,6 +40,7 @@ class CreateVDBCache(plugin.HoudiniCreator):
    def get_network_categories(self):
        return [
            hou.ropNodeTypeCategory(),
            hou.objNodeTypeCategory(),
            hou.sopNodeTypeCategory()
        ]

@@ -119,7 +119,8 @@ class ImageLoader(load.LoaderPlugin):
        if not parent.children():
            parent.destroy()

    def _get_file_sequence(self, root):
    def _get_file_sequence(self, file_path):
        root = os.path.dirname(file_path)
        files = sorted(os.listdir(root))

        first_fname = files[0]

@@ -21,8 +21,8 @@ class CollectArnoldROPRenderProducts(pyblish.api.InstancePlugin):

    label = "Arnold ROP Render Products"
    # This specific order value is used so that
    # this plugin runs after CollectRopFrameRange
    order = pyblish.api.CollectorOrder + 0.4999
    # this plugin runs after CollectFrames
    order = pyblish.api.CollectorOrder + 0.11
    hosts = ["houdini"]
    families = ["arnold_rop"]

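Note: the same order bump recurs in the Karma, Mantra, Redshift and V-Ray collectors below. A minimal sketch of why the offsets matter, assuming standard pyblish behaviour (plugins run sorted by their numeric `order`; `CollectorOrder` is 0.0):

```python
# Minimal sketch: pyblish sorts plugins by `order`, so these offsets
# pin the relative run order of this commit's collectors.
import pyblish.api

orders = {
    "CollectRopFrameRange": pyblish.api.CollectorOrder,             # 0.0
    "CollectFrames": pyblish.api.CollectorOrder + 0.1,
    "CollectArnoldROPRenderProducts": pyblish.api.CollectorOrder + 0.11,
    "CollectAssetHandles": pyblish.api.CollectorOrder + 0.499,
}
for name in sorted(orders, key=orders.get):
    print(name)  # prints the collectors in execution order
```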
openpype/hosts/houdini/plugins/publish/collect_asset_handles.py (new file, 124 lines)
@@ -0,0 +1,124 @@
# -*- coding: utf-8 -*-
"""Collector plugin for frames data on ROP instances."""
import hou  # noqa
import pyblish.api
from openpype.lib import BoolDef
from openpype.pipeline import OpenPypePyblishPluginMixin


class CollectAssetHandles(pyblish.api.InstancePlugin,
                          OpenPypePyblishPluginMixin):
    """Apply asset handles.

    If instance does not have:
        - frameStart
        - frameEnd
        - handleStart
        - handleEnd
    But it does have:
        - frameStartHandle
        - frameEndHandle

    Then we will retrieve the asset's handles to compute
    the exclusive frame range and actual handle ranges.
    """

    hosts = ["houdini"]

    # This specific order value is used so that
    # this plugin runs after CollectAnatomyInstanceData
    order = pyblish.api.CollectorOrder + 0.499

    label = "Collect Asset Handles"
    use_asset_handles = True

    def process(self, instance):
        # Only process instances without already existing handles data
        # but that do have frameStartHandle and frameEndHandle defined
        # like the data collected from CollectRopFrameRange
        if "frameStartHandle" not in instance.data:
            return
        if "frameEndHandle" not in instance.data:
            return

        has_existing_data = {
            "handleStart",
            "handleEnd",
            "frameStart",
            "frameEnd"
        }.issubset(instance.data)
        if has_existing_data:
            return

        attr_values = self.get_attr_values_from_data(instance.data)
        if attr_values.get("use_handles", self.use_asset_handles):
            asset_data = instance.data["assetEntity"]["data"]
            handle_start = asset_data.get("handleStart", 0)
            handle_end = asset_data.get("handleEnd", 0)
        else:
            handle_start = 0
            handle_end = 0

        frame_start = instance.data["frameStartHandle"] + handle_start
        frame_end = instance.data["frameEndHandle"] - handle_end

        instance.data.update({
            "handleStart": handle_start,
            "handleEnd": handle_end,
            "frameStart": frame_start,
            "frameEnd": frame_end
        })

        # Log debug message about the collected frame range
        if attr_values.get("use_handles", self.use_asset_handles):
            self.log.debug(
                "Full Frame range with Handles "
                "[{frame_start_handle} - {frame_end_handle}]"
                .format(
                    frame_start_handle=instance.data["frameStartHandle"],
                    frame_end_handle=instance.data["frameEndHandle"]
                )
            )
        else:
            self.log.debug(
                "Use handles is deactivated for this instance, "
                "start and end handles are set to 0."
            )

        # Log collected frame range to the user
        message = "Frame range [{frame_start} - {frame_end}]".format(
            frame_start=frame_start,
            frame_end=frame_end
        )
        if handle_start or handle_end:
            message += " with handles [{handle_start}]-[{handle_end}]".format(
                handle_start=handle_start,
                handle_end=handle_end
            )
        self.log.info(message)

        if instance.data.get("byFrameStep", 1.0) != 1.0:
            self.log.info(
                "Frame steps {}".format(instance.data["byFrameStep"]))

        # Add frame range to label if the instance has a frame range.
        label = instance.data.get("label", instance.data["name"])
        instance.data["label"] = (
            "{label} [{frame_start_handle} - {frame_end_handle}]"
            .format(
                label=label,
                frame_start_handle=instance.data["frameStartHandle"],
                frame_end_handle=instance.data["frameEndHandle"]
            )
        )

    @classmethod
    def get_attribute_defs(cls):
        return [
            BoolDef("use_handles",
                    tooltip="Disable this if you want the publisher to"
                            " ignore start and end handles specified in the"
                            " asset data for this publish instance",
                    default=cls.use_asset_handles,
                    label="Use asset handles")
        ]

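Note: the sign convention above is the part most easily misread; the inclusive ROP range (`frameStartHandle`/`frameEndHandle`) is shrunk by the asset handles to get the exclusive `frameStart`/`frameEnd`. A worked sketch with hypothetical values:

```python
# Worked example of the handle arithmetic above (hypothetical values).
frame_start_handle = 996    # first rendered frame, handles included
frame_end_handle = 1105     # last rendered frame, handles included
handle_start = 5            # from the asset entity data
handle_end = 5

frame_start = frame_start_handle + handle_start   # 1001
frame_end = frame_end_handle - handle_end         # 1100

# The exclusive range is the inclusive range minus both handles:
assert frame_end - frame_start + 1 == (
    frame_end_handle - frame_start_handle + 1) - handle_start - handle_end
```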
@@ -11,7 +11,9 @@ from openpype.hosts.houdini.api import lib
class CollectFrames(pyblish.api.InstancePlugin):
    """Collect all frames which would be saved from the ROP nodes"""

    order = pyblish.api.CollectorOrder + 0.01
    # This specific order value is used so that
    # this plugin runs after CollectRopFrameRange
    order = pyblish.api.CollectorOrder + 0.1
    label = "Collect Frames"
    families = ["vdbcache", "imagesequence", "ass",
                "redshiftproxy", "review", "bgeo"]

@@ -20,8 +22,8 @@ class CollectFrames(pyblish.api.InstancePlugin):

        ropnode = hou.node(instance.data["instance_node"])

        start_frame = instance.data.get("frameStart", None)
        end_frame = instance.data.get("frameEnd", None)
        start_frame = instance.data.get("frameStartHandle", None)
        end_frame = instance.data.get("frameEndHandle", None)

        output_parm = lib.get_output_parameter(ropnode)
        if start_frame is not None:

@@ -122,10 +122,6 @@ class CollectInstancesUsdLayered(pyblish.api.ContextPlugin):
            instance.data.update(save_data)
            instance.data["usdLayer"] = layer

            # Don't allow the Pyblish `instanceToggled` we have installed
            # to set this node to bypass.
            instance.data["_allowToggleBypass"] = False

            instances.append(instance)

        # Store the collected ROP node dependencies

@@ -25,8 +25,8 @@ class CollectKarmaROPRenderProducts(pyblish.api.InstancePlugin):

    label = "Karma ROP Render Products"
    # This specific order value is used so that
    # this plugin runs after CollectRopFrameRange
    order = pyblish.api.CollectorOrder + 0.4999
    # this plugin runs after CollectFrames
    order = pyblish.api.CollectorOrder + 0.11
    hosts = ["houdini"]
    families = ["karma_rop"]

@@ -25,8 +25,8 @@ class CollectMantraROPRenderProducts(pyblish.api.InstancePlugin):

    label = "Mantra ROP Render Products"
    # This specific order value is used so that
    # this plugin runs after CollectRopFrameRange
    order = pyblish.api.CollectorOrder + 0.4999
    # this plugin runs after CollectFrames
    order = pyblish.api.CollectorOrder + 0.11
    hosts = ["houdini"]
    families = ["mantra_rop"]

@@ -25,8 +25,8 @@ class CollectRedshiftROPRenderProducts(pyblish.api.InstancePlugin):

    label = "Redshift ROP Render Products"
    # This specific order value is used so that
    # this plugin runs after CollectRopFrameRange
    order = pyblish.api.CollectorOrder + 0.4999
    # this plugin runs after CollectFrames
    order = pyblish.api.CollectorOrder + 0.11
    hosts = ["houdini"]
    families = ["redshift_rop"]

@@ -6,6 +6,8 @@ class CollectHoudiniReviewData(pyblish.api.InstancePlugin):
    """Collect Review Data."""

    label = "Collect Review Data"
    # This specific order value is used so that
    # this plugin runs after CollectRopFrameRange
    order = pyblish.api.CollectorOrder + 0.1
    hosts = ["houdini"]
    families = ["review"]

@@ -41,8 +43,8 @@ class CollectHoudiniReviewData(pyblish.api.InstancePlugin):
            return

        if focal_length_parm.isTimeDependent():
            start = instance.data["frameStart"]
            end = instance.data["frameEnd"] + 1
            start = instance.data["frameStartHandle"]
            end = instance.data["frameEndHandle"] + 1
            focal_length = [
                focal_length_parm.evalAsFloatAtFrame(t)
                for t in range(int(start), int(end))

@@ -2,22 +2,15 @@
"""Collector plugin for frames data on ROP instances."""
import hou  # noqa
import pyblish.api
from openpype.lib import BoolDef
from openpype.hosts.houdini.api import lib
from openpype.pipeline import OpenPypePyblishPluginMixin


class CollectRopFrameRange(pyblish.api.InstancePlugin,
                           OpenPypePyblishPluginMixin):

class CollectRopFrameRange(pyblish.api.InstancePlugin):
    """Collect all frames which would be saved from the ROP nodes"""

    hosts = ["houdini"]
    # This specific order value is used so that
    # this plugin runs after CollectAnatomyInstanceData
    order = pyblish.api.CollectorOrder + 0.499
    order = pyblish.api.CollectorOrder
    label = "Collect RopNode Frame Range"
    use_asset_handles = True

    def process(self, instance):

@@ -30,78 +23,16 @@ class CollectRopFrameRange(pyblish.api.InstancePlugin,
            return

        ropnode = hou.node(node_path)

        attr_values = self.get_attr_values_from_data(instance.data)

        if attr_values.get("use_handles", self.use_asset_handles):
            asset_data = instance.data["assetEntity"]["data"]
            handle_start = asset_data.get("handleStart", 0)
            handle_end = asset_data.get("handleEnd", 0)
        else:
            handle_start = 0
            handle_end = 0

        frame_data = lib.get_frame_data(
            ropnode, handle_start, handle_end, self.log
            ropnode, self.log
        )

        if not frame_data:
            return

        # Log debug message about the collected frame range
        frame_start = frame_data["frameStart"]
        frame_end = frame_data["frameEnd"]

        if attr_values.get("use_handles", self.use_asset_handles):
            self.log.debug(
                "Full Frame range with Handles "
                "[{frame_start_handle} - {frame_end_handle}]"
                .format(
                    frame_start_handle=frame_data["frameStartHandle"],
                    frame_end_handle=frame_data["frameEndHandle"]
                )
            )
        else:
            self.log.debug(
                "Use handles is deactivated for this instance, "
                "start and end handles are set to 0."
            )

        # Log collected frame range to the user
        message = "Frame range [{frame_start} - {frame_end}]".format(
            frame_start=frame_start,
            frame_end=frame_end
        self.log.debug(
            "Collected frame_data: {}".format(frame_data)
        )
        if handle_start or handle_end:
            message += " with handles [{handle_start}]-[{handle_end}]".format(
                handle_start=handle_start,
                handle_end=handle_end
            )
        self.log.info(message)

        if frame_data.get("byFrameStep", 1.0) != 1.0:
            self.log.info("Frame steps {}".format(frame_data["byFrameStep"]))

        instance.data.update(frame_data)

        # Add frame range to label if the instance has a frame range.
        label = instance.data.get("label", instance.data["name"])
        instance.data["label"] = (
            "{label} [{frame_start} - {frame_end}]"
            .format(
                label=label,
                frame_start=frame_start,
                frame_end=frame_end
            )
        )

    @classmethod
    def get_attribute_defs(cls):
        return [
            BoolDef("use_handles",
                    tooltip="Disable this if you want the publisher to"
                            " ignore start and end handles specified in the"
                            " asset data for this publish instance",
                    default=cls.use_asset_handles,
                    label="Use asset handles")
        ]

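Note: after this refactor the collector only gathers the raw ROP range; the handle-aware keys are filled in later by `CollectAssetHandles` above. A sketch of the resulting data flow (the dict contents are hypothetical):

```python
# Sketch of the narrowed collector (hypothetical return value):
frame_data = lib.get_frame_data(ropnode, self.log)
# e.g. {"frameStartHandle": 996, "frameEndHandle": 1105, "byFrameStep": 1.0}
instance.data.update(frame_data)
# frameStart/frameEnd/handleStart/handleEnd are derived afterwards by
# CollectAssetHandles (CollectorOrder + 0.499).
```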
@@ -25,8 +25,8 @@ class CollectVrayROPRenderProducts(pyblish.api.InstancePlugin):

    label = "VRay ROP Render Products"
    # This specific order value is used so that
    # this plugin runs after CollectRopFrameRange
    order = pyblish.api.CollectorOrder + 0.4999
    # this plugin runs after CollectFrames
    order = pyblish.api.CollectorOrder + 0.11
    hosts = ["houdini"]
    families = ["vray_rop"]

@@ -56,7 +56,7 @@ class ExtractAss(publish.Extractor):
            'ext': ext,
            "files": files,
            "stagingDir": staging_dir,
            "frameStart": instance.data["frameStart"],
            "frameEnd": instance.data["frameEnd"],
            "frameStart": instance.data["frameStartHandle"],
            "frameEnd": instance.data["frameEndHandle"],
        }
        instance.data["representations"].append(representation)

@@ -47,7 +47,7 @@ class ExtractBGEO(publish.Extractor):
            "ext": ext.lstrip("."),
            "files": output,
            "stagingDir": staging_dir,
            "frameStart": instance.data["frameStart"],
            "frameEnd": instance.data["frameEnd"]
            "frameStart": instance.data["frameStartHandle"],
            "frameEnd": instance.data["frameEndHandle"]
        }
        instance.data["representations"].append(representation)

@@ -41,8 +41,8 @@ class ExtractComposite(publish.Extractor):
            "ext": ext,
            "files": output,
            "stagingDir": staging_dir,
            "frameStart": instance.data["frameStart"],
            "frameEnd": instance.data["frameEnd"],
            "frameStart": instance.data["frameStartHandle"],
            "frameEnd": instance.data["frameEndHandle"],
        }

        from pprint import pformat

@@ -40,9 +40,9 @@ class ExtractFBX(publish.Extractor):
        }

        # A single frame may also be rendered without start/end frame.
        if "frameStart" in instance.data and "frameEnd" in instance.data:
            representation["frameStart"] = instance.data["frameStart"]
            representation["frameEnd"] = instance.data["frameEnd"]
        if "frameStartHandle" in instance.data and "frameEndHandle" in instance.data:  # noqa
            representation["frameStart"] = instance.data["frameStartHandle"]
            representation["frameEnd"] = instance.data["frameEndHandle"]

        # set value type for 'representations' key to list
        if "representations" not in instance.data:

@@ -39,8 +39,8 @@ class ExtractOpenGL(publish.Extractor):
            "ext": instance.data["imageFormat"],
            "files": output,
            "stagingDir": staging_dir,
            "frameStart": instance.data["frameStart"],
            "frameEnd": instance.data["frameEnd"],
            "frameStart": instance.data["frameStartHandle"],
            "frameEnd": instance.data["frameEndHandle"],
            "tags": tags,
            "preview": True,
            "camera_name": instance.data.get("review_camera")

@@ -44,8 +44,8 @@ class ExtractRedshiftProxy(publish.Extractor):
        }

        # A single frame may also be rendered without start/end frame.
        if "frameStart" in instance.data and "frameEnd" in instance.data:
            representation["frameStart"] = instance.data["frameStart"]
            representation["frameEnd"] = instance.data["frameEnd"]
        if "frameStartHandle" in instance.data and "frameEndHandle" in instance.data:  # noqa
            representation["frameStart"] = instance.data["frameStartHandle"]
            representation["frameEnd"] = instance.data["frameEndHandle"]

        instance.data["representations"].append(representation)

@@ -40,7 +40,7 @@ class ExtractVDBCache(publish.Extractor):
            "ext": "vdb",
            "files": output,
            "stagingDir": staging_dir,
            "frameStart": instance.data["frameStart"],
            "frameEnd": instance.data["frameEnd"],
            "frameStart": instance.data["frameStartHandle"],
            "frameEnd": instance.data["frameEndHandle"],
        }
        instance.data["representations"].append(representation)

@@ -57,7 +57,17 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
            return

        rop_node = hou.node(instance.data["instance_node"])
        if instance.data["frameStart"] > instance.data["frameEnd"]:
        frame_start = instance.data.get("frameStart")
        frame_end = instance.data.get("frameEnd")

        if frame_start is None or frame_end is None:
            cls.log.debug(
                "Skipping frame range validation for "
                "instance without frame data: {}".format(rop_node.path())
            )
            return

        if frame_start > frame_end:
            cls.log.info(
                "The ROP node render range is set to "
                "{0[frameStartHandle]} - {0[frameEndHandle]} "

@@ -89,7 +99,7 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
                          .format(instance))
            return

        created_instance.publish_attributes["CollectRopFrameRange"]["use_handles"] = False  # noqa
        created_instance.publish_attributes["CollectAssetHandles"]["use_handles"] = False  # noqa

        create_context.save_changes()
        cls.log.debug("use asset handles is turned off for '{}'"

@@ -4,7 +4,7 @@
  <subMenu id="openpype_menu">
    <labelExpression><![CDATA[
import os
return os.environ.get("AVALON_LABEL") or "OpenPype"
return os.environ.get("AVALON_LABEL") or "AYON"
]]></labelExpression>
    <actionItem id="asset_name">
      <labelExpression><![CDATA[

@@ -16,7 +16,7 @@ return label

    <separatorItem/>

    <scriptItem id="openpype_create">
    <scriptItem id="ayon_create">
      <label>Create...</label>
      <scriptCode><![CDATA[
import hou

@@ -26,7 +26,7 @@ host_tools.show_publisher(parent, tab="create")
]]></scriptCode>
    </scriptItem>

    <scriptItem id="openpype_load">
    <scriptItem id="ayon_load">
      <label>Load...</label>
      <scriptCode><![CDATA[
import hou

@@ -46,7 +46,7 @@ host_tools.show_publisher(parent, tab="publish")
]]></scriptCode>
    </scriptItem>

    <scriptItem id="openpype_manage">
    <scriptItem id="ayon_manage">
      <label>Manage...</label>
      <scriptCode><![CDATA[
import hou

@@ -23,27 +23,36 @@ def play_preview_when_done(has_autoplay):


@contextlib.contextmanager
def viewport_camera(camera):
    """Set viewport camera during context
def viewport_layout_and_camera(camera, layout="layout_1"):
    """Set viewport layout and camera during context
    ***For 3dsMax 2024+
    Args:
        camera (str): viewport camera
        layout (str): layout to use in viewport, defaults to `layout_1`
            Use None to not change viewport layout during context.
    """
    original = rt.viewport.getCamera()
    if not original:
    original_camera = rt.viewport.getCamera()
    original_layout = rt.viewport.getLayout()
    if not original_camera:
        # if there is no original camera
        # use the current camera as original
        original = rt.getNodeByName(camera)
        original_camera = rt.getNodeByName(camera)
    review_camera = rt.getNodeByName(camera)
    try:
        if layout is not None:
            layout = rt.Name(layout)
            if rt.viewport.getLayout() != layout:
                rt.viewport.setLayout(layout)
        rt.viewport.setCamera(review_camera)
        yield
    finally:
        rt.viewport.setCamera(original)
        rt.viewport.setLayout(original_layout)
        rt.viewport.setCamera(original_camera)


@contextlib.contextmanager
def viewport_preference_setting(general_viewport,
                                nitrous_manager,
                                nitrous_viewport,
                                vp_button_mgr):
    """Function to set viewport setting during context

@@ -51,6 +60,7 @@ def viewport_preference_setting(general_viewport,
    Args:
        camera (str): Viewport camera for review render
        general_viewport (dict): General viewport setting
        nitrous_manager (dict): Nitrous graphic manager
        nitrous_viewport (dict): Nitrous setting for
            preview animation
        vp_button_mgr (dict): Viewport button manager Setting

@@ -64,6 +74,9 @@ def viewport_preference_setting(general_viewport,
    vp_button_mgr_original = {
        key: getattr(rt.ViewportButtonMgr, key) for key in vp_button_mgr
    }
    nitrous_manager_original = {
        key: getattr(nitrousGraphicMgr, key) for key in nitrous_manager
    }
    nitrous_viewport_original = {
        key: getattr(viewport_setting, key) for key in nitrous_viewport
    }

@@ -73,6 +86,8 @@ def viewport_preference_setting(general_viewport,
        rt.viewport.EnableSolidBackgroundColorMode(general_viewport["dspBkg"])
        for key, value in vp_button_mgr.items():
            setattr(rt.ViewportButtonMgr, key, value)
        for key, value in nitrous_manager.items():
            setattr(nitrousGraphicMgr, key, value)
        for key, value in nitrous_viewport.items():
            if nitrous_viewport[key] != nitrous_viewport_original[key]:
                setattr(viewport_setting, key, value)

@@ -83,6 +98,8 @@ def viewport_preference_setting(general_viewport,
        rt.viewport.EnableSolidBackgroundColorMode(orig_vp_bkg)
        for key, value in vp_button_mgr_original.items():
            setattr(rt.ViewportButtonMgr, key, value)
        for key, value in nitrous_manager_original.items():
            setattr(nitrousGraphicMgr, key, value)
        for key, value in nitrous_viewport_original.items():
            setattr(viewport_setting, key, value)

@@ -149,24 +166,27 @@ def _render_preview_animation_max_2024(


def _render_preview_animation_max_pre_2024(
        filepath, startFrame, endFrame, percentSize, ext):
        filepath, startFrame, endFrame,
        width, height, percentSize, ext):
    """Render viewport animation by creating bitmaps
    ***For 3dsMax Version <2024
    Args:
        filepath (str): filepath without frame numbers and extension
        startFrame (int): start frame
        endFrame (int): end frame
        width (int): render resolution width
        height (int): render resolution height
        percentSize (float): render resolution multiplier by 100
            e.g. 100.0 is 1x, 50.0 is 0.5x, 150.0 is 1.5x
        ext (str): image extension
    Returns:
        list: Created filepaths
    """

    # get the screenshot
    percent = percentSize / 100.0
    res_width = int(round(rt.renderWidth * percent))
    res_height = int(round(rt.renderHeight * percent))
    viewportRatio = float(res_width / res_height)
    res_width = width * percent
    res_height = height * percent
    frame_template = "{}.{{:04}}.{}".format(filepath, ext)
    frame_template.replace("\\", "/")
    files = []

@@ -178,23 +198,29 @@ def _render_preview_animation_max_pre_2024(
            res_width, res_height, filename=filepath
        )
        dib = rt.gw.getViewportDib()
        dib_width = float(dib.width)
        dib_height = float(dib.height)
        renderRatio = float(dib_width / dib_height)
        if viewportRatio <= renderRatio:
        dib_width = rt.renderWidth
        dib_height = rt.renderHeight
        # aspect ratio
        viewportRatio = dib_width / dib_height
        renderRatio = float(res_width / res_height)
        if viewportRatio < renderRatio:
            heightCrop = (dib_width / renderRatio)
            topEdge = int((dib_height - heightCrop) / 2.0)
            tempImage_bmp = rt.bitmap(dib_width, heightCrop)
            src_box_value = rt.Box2(0, topEdge, dib_width, heightCrop)
        else:
            rt.pasteBitmap(dib, tempImage_bmp, src_box_value, rt.Point2(0, 0))
            rt.copy(tempImage_bmp, preview_res)
            rt.close(tempImage_bmp)
        elif viewportRatio > renderRatio:
            widthCrop = dib_height * renderRatio
            leftEdge = int((dib_width - widthCrop) / 2.0)
            tempImage_bmp = rt.bitmap(widthCrop, dib_height)
            src_box_value = rt.Box2(0, leftEdge, dib_width, dib_height)
            rt.pasteBitmap(dib, tempImage_bmp, src_box_value, rt.Point2(0, 0))
            # copy the bitmap and close it
            rt.copy(tempImage_bmp, preview_res)
            rt.close(tempImage_bmp)
            src_box_value = rt.Box2(leftEdge, 0, widthCrop, dib_height)
            rt.pasteBitmap(dib, tempImage_bmp, src_box_value, rt.Point2(0, 0))
            rt.copy(tempImage_bmp, preview_res)
            rt.close(tempImage_bmp)
        else:
            rt.copy(dib, preview_res)
        rt.save(preview_res)
        rt.close(preview_res)
        rt.close(dib)

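Note: the crop branches above are easier to follow with numbers. A worked sketch of the aspect math, using hypothetical sizes:

```python
# Worked example of the crop math above (hypothetical sizes).
dib_width, dib_height = 1920.0, 1080.0   # captured viewport bitmap
res_width, res_height = 1000.0, 1000.0   # requested output size

viewport_ratio = dib_width / dib_height  # ~1.78
render_ratio = res_width / res_height    # 1.0

# Viewport is wider than the target -> trim width, keep full height.
width_crop = dib_height * render_ratio               # 1080.0
left_edge = int((dib_width - width_crop) / 2.0)      # 420 px per side
assert (left_edge, width_crop) == (420, 1080.0)
```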
@@ -243,22 +269,25 @@ def render_preview_animation(
    if viewport_options is None:
        viewport_options = viewport_options_for_preview_animation()
    with play_preview_when_done(False):
        with viewport_camera(camera):
            with render_resolution(width, height):
                if int(get_max_version()) < 2024:
                    with viewport_preference_setting(
                        viewport_options["general_viewport"],
                        viewport_options["nitrous_viewport"],
                        viewport_options["vp_btn_mgr"]
                    ):
                        return _render_preview_animation_max_pre_2024(
                            filepath,
                            start_frame,
                            end_frame,
                            percentSize,
                            ext
                        )
                else:
        with viewport_layout_and_camera(camera):
            if int(get_max_version()) < 2024:
                with viewport_preference_setting(
                    viewport_options["general_viewport"],
                    viewport_options["nitrous_manager"],
                    viewport_options["nitrous_viewport"],
                    viewport_options["vp_btn_mgr"]
                ):
                    return _render_preview_animation_max_pre_2024(
                        filepath,
                        start_frame,
                        end_frame,
                        width,
                        height,
                        percentSize,
                        ext
                    )
            else:
                with render_resolution(width, height):
                    return _render_preview_animation_max_2024(
                        filepath,
                        start_frame,

@@ -299,6 +328,9 @@ def viewport_options_for_preview_animation():
        "dspBkg": True,
        "dspGrid": False
    }
    viewport_options["nitrous_manager"] = {
        "AntialiasingQuality": "None"
    }
    viewport_options["nitrous_viewport"] = {
        "VisualStyleMode": "defaultshading",
        "ViewportPreset": "highquality",

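Note: a hypothetical usage of the new context manager, to show the restore-on-exit behaviour; the module path is an assumption, not confirmed by this diff:

```python
# Hypothetical usage (module path assumed):
from pymxs import runtime as rt
from openpype.hosts.max.api.lib import viewport_layout_and_camera

with viewport_layout_and_camera("reviewCam", layout="layout_1"):
    # single-pane layout looking through reviewCam
    rt.redrawViews()
# previous layout and camera are restored here, even if an error occurred
```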
@@ -108,7 +108,7 @@ class CreateReview(plugin.MaxCreator):
                    label="Pre-View Preset"),
            EnumDef("antialiasingQuality",
                    anti_aliasing_enum,
                    default=self.anti_aliasing,
                    default="None",
                    label="Anti-aliasing Quality"),
            BoolDef("vpTexture",
                    label="Viewport Texture",

@@ -4,11 +4,9 @@ import os
import pyblish.api

from pymxs import runtime as rt
from openpype.pipeline import get_current_asset_name
from openpype.hosts.max.api import colorspace
from openpype.hosts.max.api.lib import get_max_version, get_current_renderer
from openpype.hosts.max.api.lib_renderproducts import RenderProducts
from openpype.client import get_last_version_by_subset_name


class CollectRender(pyblish.api.InstancePlugin):

@@ -27,7 +25,6 @@ class CollectRender(pyblish.api.InstancePlugin):
        filepath = current_file.replace("\\", "/")

        context.data['currentFile'] = current_file
        asset = get_current_asset_name()

        files_by_aov = RenderProducts().get_beauty(instance.name)
        aovs = RenderProducts().get_aovs(instance.name)

@@ -49,19 +46,6 @@ class CollectRender(pyblish.api.InstancePlugin):
            instance.data["files"].append(files_by_aov)

        img_format = RenderProducts().image_format()
        project_name = context.data["projectName"]
        asset_doc = context.data["assetEntity"]
        asset_id = asset_doc["_id"]
        version_doc = get_last_version_by_subset_name(project_name,
                                                      instance.name,
                                                      asset_id)
        self.log.debug("version_doc: {0}".format(version_doc))
        version_int = 1
        if version_doc:
            version_int += int(version_doc["name"])

        self.log.debug(f"Setting {version_int} to context.")
        context.data["version"] = version_int
        # OCIO config not support in
        # most of the 3dsmax renderers
        # so this is currently hard coded

@@ -87,7 +71,7 @@ class CollectRender(pyblish.api.InstancePlugin):
        renderer = str(renderer_class).split(":")[0]
        # also need to get the render dir for conversion
        data = {
            "asset": asset,
            "asset": instance.data["asset"],
            "subset": str(instance.name),
            "publish": True,
            "maxversion": str(get_max_version()),

@@ -99,7 +83,6 @@ class CollectRender(pyblish.api.InstancePlugin):
            "plugin": "3dsmax",
            "frameStart": instance.data["frameStartHandle"],
            "frameEnd": instance.data["frameEndHandle"],
            "version": version_int,
            "farm": True
        }
        instance.data.update(data)

@@ -90,6 +90,9 @@ class CollectReview(pyblish.api.InstancePlugin,
            "dspBkg": attr_values.get("dspBkg"),
            "dspGrid": attr_values.get("dspGrid")
        }
        nitrous_manager = {
            "AntialiasingQuality": creator_attrs["antialiasingQuality"],
        }
        nitrous_viewport = {
            "VisualStyleMode": creator_attrs["visualStyleMode"],
            "ViewportPreset": creator_attrs["viewportPreset"],

@@ -97,6 +100,7 @@ class CollectReview(pyblish.api.InstancePlugin,
        }
        preview_data = {
            "general_viewport": general_viewport,
            "nitrous_manager": nitrous_manager,
            "nitrous_viewport": nitrous_viewport,
            "vp_btn_mgr": {"EnableButtons": False}
        }

openpype/hosts/max/plugins/publish/validate_attributes.py (new file, 131 lines)
@@ -0,0 +1,131 @@
# -*- coding: utf-8 -*-
"""Validator for Attributes."""
from pyblish.api import ContextPlugin, ValidatorOrder
from pymxs import runtime as rt

from openpype.pipeline.publish import (
    OptionalPyblishPluginMixin,
    PublishValidationError,
    RepairContextAction
)


def has_property(object_name, property_name):
    """Return whether an object has a property with given name"""
    return rt.Execute(f'isProperty {object_name} "{property_name}"')


def is_matching_value(object_name, property_name, value):
    """Return whether an existing property matches value `value`"""
    property_value = rt.Execute(f"{object_name}.{property_name}")

    # Wrap property value if value is a string valued attributes
    # starting with a `#`
    if (
        isinstance(value, str) and
        value.startswith("#") and
        not value.endswith(")")
    ):
        # prefix value with `#`
        # not applicable for #() array value type
        # and only applicable for enum i.e. #bob, #sally
        property_value = f"#{property_value}"

    return property_value == value


class ValidateAttributes(OptionalPyblishPluginMixin,
                         ContextPlugin):
    """Validates attributes in the project setting are consistent
    with the nodes from MaxWrapper Class in 3ds max.

    E.g. "renderers.current.separateAovFiles",
         "renderers.production.PrimaryGIEngine"
    Admin(s) need to put the dict below and enable this validator for a check:
    {
        "renderers.current":{
            "separateAovFiles" : True
        },
        "renderers.production":{
            "PrimaryGIEngine": "#RS_GIENGINE_BRUTE_FORCE"
        }
        ....
    }

    """

    order = ValidatorOrder
    hosts = ["max"]
    label = "Attributes"
    actions = [RepairContextAction]
    optional = True

    @classmethod
    def get_invalid(cls, context):
        attributes = (
            context.data["project_settings"]["max"]["publish"]
            ["ValidateAttributes"]["attributes"]
        )
        if not attributes:
            return
        invalid = []
        for object_name, required_properties in attributes.items():
            if not rt.Execute(f"isValidValue {object_name}"):
                # Skip checking if the node does not
                # exist in MaxWrapper Class
                cls.log.debug(f"Unable to find '{object_name}'."
                              " Skipping validation of attributes.")
                continue

            for property_name, value in required_properties.items():
                if not has_property(object_name, property_name):
                    cls.log.error(
                        "Non-existing property: "
                        f"{object_name}.{property_name}")
                    invalid.append((object_name, property_name))

                if not is_matching_value(object_name, property_name, value):
                    cls.log.error(
                        f"Invalid value for: {object_name}.{property_name}"
                        f" should be: {value}")
                    invalid.append((object_name, property_name))

        return invalid

    def process(self, context):
        if not self.is_active(context.data):
            self.log.debug("Skipping Validate Attributes...")
            return
        invalid_attributes = self.get_invalid(context)
        if invalid_attributes:
            bullet_point_invalid_statement = "\n".join(
                "- {}".format(invalid) for invalid
                in invalid_attributes
            )
            report = (
                "Required Attribute(s) have invalid value(s).\n\n"
                f"{bullet_point_invalid_statement}\n\n"
                "You can use repair action to fix them if they are not\n"
                "unknown property value(s)."
            )
            raise PublishValidationError(
                report, title="Invalid Value(s) for Required Attribute(s)")

    @classmethod
    def repair(cls, context):
        attributes = (
            context.data["project_settings"]["max"]["publish"]
            ["ValidateAttributes"]["attributes"]
        )
        invalid_attributes = cls.get_invalid(context)
        for attrs in invalid_attributes:
            prop, attr = attrs
            value = attributes[prop][attr]
            if isinstance(value, str) and not value.startswith("#"):
                attribute_fix = '{}.{}="{}"'.format(
                    prop, attr, value
                )
            else:
                attribute_fix = "{}.{}={}".format(
                    prop, attr, value
                )
            rt.Execute(attribute_fix)

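Note: a hypothetical settings payload matching the docstring above, and the MaxScript the helpers effectively run for each entry:

```python
# Hypothetical ValidateAttributes settings payload:
attributes = {
    "renderers.current": {"separateAovFiles": True},
    "renderers.production": {"PrimaryGIEngine": "#RS_GIENGINE_BRUTE_FORCE"},
}
# For each (object, property) pair the validator effectively executes:
#   isValidValue renderers.current
#   isProperty renderers.current "separateAovFiles"
#   renderers.current.separateAovFiles      (compared to the configured value)
```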
@@ -156,7 +156,7 @@ class FBXExtractor:
        # Parse export options
        options = self.default_options
        options = self.parse_overrides(instance, options)
        self.log.info("Export options: {0}".format(options))
        self.log.debug("Export options: {0}".format(options))

        # Collect the start and end including handles
        start = instance.data.get("frameStartHandle") or \

@@ -186,7 +186,7 @@ class FBXExtractor:
            template = "FBXExport{0} {1}" if key == "UpAxis" else \
                "FBXExport{0} -v {1}"  # noqa
            cmd = template.format(key, value)
            self.log.info(cmd)
            self.log.debug(cmd)
            mel.eval(cmd)

        # Never show the UI or generate a log

@@ -1,7 +1,5 @@
# -*- coding: utf-8 -*-
"""Class for handling Render Settings."""
from maya import cmds  # noqa
import maya.mel as mel
import six
import sys


@@ -63,6 +61,10 @@ class RenderSettings(object):

    def set_default_renderer_settings(self, renderer=None):
        """Set basic settings based on renderer."""
        # Not all hosts can import this module.
        from maya import cmds
        import maya.mel as mel

        if not renderer:
            renderer = cmds.getAttr(
                'defaultRenderGlobals.currentRenderer').lower()

@@ -771,7 +771,8 @@ class ReferenceLoader(Loader):
            "ma": "mayaAscii",
            "mb": "mayaBinary",
            "abc": "Alembic",
            "fbx": "FBX"
            "fbx": "FBX",
            "usd": "USD Import"
        }.get(representation["name"])

        assert file_type, "Unsupported representation: %s" % representation

@@ -1,7 +1,9 @@
import os
import difflib
import contextlib

from maya import cmds
import qargparse

from openpype.settings import get_project_settings
import openpype.hosts.maya.api.plugin

@@ -128,6 +130,12 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
        if not attach_to_root:
            group_name = namespace

        kwargs = {}
        if "file_options" in options:
            kwargs["options"] = options["file_options"]
        if "file_type" in options:
            kwargs["type"] = options["file_type"]

        path = self.filepath_from_context(context)
        with maintained_selection():
            cmds.loadPlugin("AbcImport.mll", quiet=True)

@@ -139,7 +147,8 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
                reference=True,
                returnNewNodes=True,
                groupReference=attach_to_root,
                groupName=group_name)
                groupName=group_name,
                **kwargs)

            shapes = cmds.ls(nodes, shapes=True, long=True)

@@ -251,3 +260,92 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
        else:
            self.log.warning("This version of Maya does not support locking of"
                             " transforms of cameras.")


class MayaUSDReferenceLoader(ReferenceLoader):
    """Reference USD file to native Maya nodes using MayaUSDImport reference"""

    families = ["usd"]
    representations = ["usd"]
    extensions = {"usd", "usda", "usdc"}

    options = ReferenceLoader.options + [
        qargparse.Boolean(
            "readAnimData",
            label="Load anim data",
            default=True,
            help="Load animation data from USD file"
        ),
        qargparse.Boolean(
            "useAsAnimationCache",
            label="Use as animation cache",
            default=True,
            help=(
                "Imports geometry prims with time-sampled point data using a "
                "point-based deformer that references the imported "
                "USD file.\n"
                "This provides better import and playback performance when "
                "importing time-sampled geometry from USD, and should "
                "reduce the weight of the resulting Maya scene."
            )
        ),
        qargparse.Boolean(
            "importInstances",
            label="Import instances",
            default=True,
            help=(
                "Import USD instanced geometries as Maya instanced shapes. "
                "Will flatten the scene otherwise."
            )
        ),
        qargparse.String(
            "primPath",
            label="Prim Path",
            default="/",
            help=(
                "Name of the USD scope where traversing will begin.\n"
                "The prim at the specified primPath (including the prim) will "
                "be imported.\n"
                "Specifying the pseudo-root (/) means you want "
                "to import everything in the file.\n"
                "If the passed prim path is empty, it will first try to "
                "import the defaultPrim for the rootLayer if it exists.\n"
                "Otherwise, it will behave as if the pseudo-root was passed "
                "in."
            )
        )
    ]

    file_type = "USD Import"

    def process_reference(self, context, name, namespace, options):
        cmds.loadPlugin("mayaUsdPlugin", quiet=True)

        def bool_option(key, default):
            # Shorthand for getting optional boolean file option from options
            value = int(bool(options.get(key, default)))
            return "{}={}".format(key, value)

        def string_option(key, default):
            # Shorthand for getting optional string file option from options
            value = str(options.get(key, default))
            return "{}={}".format(key, value)

        options["file_options"] = ";".join([
            string_option("primPath", default="/"),
            bool_option("importInstances", default=True),
            bool_option("useAsAnimationCache", default=True),
            bool_option("readAnimData", default=True),
            # TODO: Expose more parameters
            # "preferredMaterial=none",
            # "importRelativeTextures=Automatic",
            # "useCustomFrameRange=0",
            # "startTime=0",
            # "endTime=0",
            # "importUSDZTextures=0"
        ])
        options["file_type"] = self.file_type

        return super(MayaUSDReferenceLoader, self).process_reference(
            context, name, namespace, options
        )

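Note: the option-string builders above yield a semicolon-separated string that is passed to Maya's `file` command for the mayaUsdPlugin translator. With all defaults they produce (sketch):

```python
# What the builders above produce with all defaults (sketch):
file_options = ";".join([
    "primPath=/",
    "importInstances=1",
    "useAsAnimationCache=1",
    "readAnimData=1",
])
assert file_options == (
    "primPath=/;importInstances=1;useAsAnimationCache=1;readAnimData=1")
```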
@@ -42,6 +42,16 @@ class ExtractFBXAnimation(publish.Extractor):
        # Export from the rig's namespace so that the exported
        # FBX does not include the namespace but preserves the node
        # names as existing in the rig workfile
        if not out_members:
            skeleton_set = [
                i for i in instance
                if i.endswith("skeletonAnim_SET")
            ]
            self.log.debug(
                "Top group of animated skeleton not found in "
                "{}.\nSkipping fbx animation extraction.".format(skeleton_set))
            return

        namespace = get_namespace(out_members[0])
        relative_out_members = [
            strip_namespace(node, namespace) for node in out_members

@@ -129,9 +129,6 @@ class NukeHost(
        register_event_callback("workio.open_file", check_inventory_versions)
        register_event_callback("taskChanged", change_context_label)

        pyblish.api.register_callback(
            "instanceToggled", on_pyblish_instance_toggled)

        _install_menu()

        # add script menu

@@ -402,25 +399,6 @@ def add_shortcuts_from_presets():
                log.error(e)


def on_pyblish_instance_toggled(instance, old_value, new_value):
    """Toggle node passthrough states on instance toggles."""

    log.info("instance toggle: {}, old_value: {}, new_value:{} ".format(
        instance, old_value, new_value))

    # Whether instances should be passthrough based on new value

    with viewer_update_and_undo_stop():
        n = instance[0]
        try:
            n["publish"].value()
        except ValueError:
            n = add_publish_knob(n)
            log.info(" `Publish` knob was added to write node..")

        n["publish"].setValue(new_value)


def containerise(node,
                 name,
                 namespace,

@@ -478,8 +456,6 @@ def parse_container(node):
    """
    data = read_avalon_data(node)

    # (TODO) Remove key validation when `ls` has re-implemented.
    #
    # If not all required data return the empty container
    required = ["schema", "id", "name",
                "namespace", "loader", "representation"]

@@ -487,7 +463,10 @@ def parse_container(node):
        return

    # Store the node's name
    data["objectName"] = node["name"].value()
    data.update({
        "objectName": node.fullName(),
        "node": node,
    })

    return data

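Note: storing the live node object (and `node.fullName()`) means loaders no longer re-resolve the container by simple name, which fails for nodes inside groups where the short name is not unique. A sketch of the consumer-side change repeated throughout the loaders below:

```python
# Before: re-resolve by (potentially non-unique) simple name
node = nuke.toNode(container["objectName"])
# After: use the node object captured by parse_container
node = container["node"]
```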
@@ -537,6 +537,7 @@ class NukeLoader(LoaderPlugin):
            node.addKnob(knob)

    def clear_members(self, parent_node):
        parent_class = parent_node.Class()
        members = self.get_members(parent_node)

        dependent_nodes = None

@@ -549,6 +550,8 @@ class NukeLoader(LoaderPlugin):
                break

        for member in members:
            if member.Class() == parent_class:
                continue
            self.log.info("removing node: `{}".format(member.name()))
            nuke.delete(member)

@@ -64,8 +64,7 @@ class LoadBackdropNodes(load.LoaderPlugin):

        data_imprint = {
            "version": vname,
            "colorspaceInput": colorspace,
            "objectName": object_name
            "colorspaceInput": colorspace
        }

        for k in add_keys:

@@ -194,7 +193,7 @@ class LoadBackdropNodes(load.LoaderPlugin):
        version_doc = get_version_by_id(project_name, representation["parent"])

        # get corresponding node
        GN = nuke.toNode(container['objectName'])
        GN = container["node"]

        file = get_representation_path(representation).replace("\\", "/")

@@ -207,10 +206,11 @@ class LoadBackdropNodes(load.LoaderPlugin):

        add_keys = ["source", "author", "fps"]

        data_imprint = {"representation": str(representation["_id"]),
                        "version": vname,
                        "colorspaceInput": colorspace,
                        "objectName": object_name}
        data_imprint = {
            "representation": str(representation["_id"]),
            "version": vname,
            "colorspaceInput": colorspace,
        }

        for k in add_keys:
            data_imprint.update({k: version_data[k]})

@@ -252,6 +252,6 @@ class LoadBackdropNodes(load.LoaderPlugin):
        self.update(container, representation)

    def remove(self, container):
        node = nuke.toNode(container['objectName'])
        node = container["node"]
        with viewer_update_and_undo_stop():
            nuke.delete(node)

@@ -48,10 +48,11 @@ class AlembicCameraLoader(load.LoaderPlugin):
        # add additional metadata from the version to imprint to Avalon knob
        add_keys = ["source", "author", "fps"]

        data_imprint = {"frameStart": first,
                        "frameEnd": last,
                        "version": vname,
                        "objectName": object_name}
        data_imprint = {
            "frameStart": first,
            "frameEnd": last,
            "version": vname,
        }

        for k in add_keys:
            data_imprint.update({k: version_data[k]})

@@ -111,7 +112,7 @@ class AlembicCameraLoader(load.LoaderPlugin):
        project_name = get_current_project_name()
        version_doc = get_version_by_id(project_name, representation["parent"])

        object_name = container['objectName']
        object_name = container["node"]

        # get main variables
        version_data = version_doc.get("data", {})

@@ -124,11 +125,12 @@ class AlembicCameraLoader(load.LoaderPlugin):
        # add additional metadata from the version to imprint to Avalon knob
        add_keys = ["source", "author", "fps"]

        data_imprint = {"representation": str(representation["_id"]),
                        "frameStart": first,
                        "frameEnd": last,
                        "version": vname,
                        "objectName": object_name}
        data_imprint = {
            "representation": str(representation["_id"]),
            "frameStart": first,
            "frameEnd": last,
            "version": vname
        }

        for k in add_keys:
            data_imprint.update({k: version_data[k]})

@@ -194,6 +196,6 @@ class AlembicCameraLoader(load.LoaderPlugin):
        self.update(container, representation)

    def remove(self, container):
        node = nuke.toNode(container['objectName'])
        node = container["node"]
        with viewer_update_and_undo_stop():
            nuke.delete(node)

@@ -189,8 +189,6 @@ class LoadClip(plugin.NukeLoader):
                value_ = value_.replace("\\", "/")
            data_imprint[key] = value_

        data_imprint["objectName"] = read_name

        if add_retime and version_data.get("retime", None):
            data_imprint["addRetime"] = True

@@ -254,7 +252,7 @@ class LoadClip(plugin.NukeLoader):

        is_sequence = len(representation["files"]) > 1

        read_node = nuke.toNode(container['objectName'])
        read_node = container["node"]

        if is_sequence:
            representation = self._representation_with_hash_in_frame(

@@ -299,9 +297,6 @@ class LoadClip(plugin.NukeLoader):
                "Representation id `{}` is failing to load".format(repre_id))
            return

        read_name = self._get_node_name(representation)

        read_node["name"].setValue(read_name)
        read_node["file"].setValue(filepath)

        # to avoid multiple undo steps for rest of process

@@ -356,7 +351,7 @@ class LoadClip(plugin.NukeLoader):
        self.set_as_member(read_node)

    def remove(self, container):
        read_node = nuke.toNode(container['objectName'])
        read_node = container["node"]
        assert read_node.Class() == "Read", "Must be Read"

        with viewer_update_and_undo_stop():

@@ -62,11 +62,12 @@ class LoadEffects(load.LoaderPlugin):
        add_keys = ["frameStart", "frameEnd", "handleStart", "handleEnd",
                    "source", "author", "fps"]

        data_imprint = {"frameStart": first,
                        "frameEnd": last,
                        "version": vname,
                        "colorspaceInput": colorspace,
                        "objectName": object_name}
        data_imprint = {
            "frameStart": first,
            "frameEnd": last,
            "version": vname,
            "colorspaceInput": colorspace,
        }

        for k in add_keys:
            data_imprint.update({k: version_data[k]})

@@ -159,7 +160,7 @@ class LoadEffects(load.LoaderPlugin):
        version_doc = get_version_by_id(project_name, representation["parent"])

        # get corresponding node
        GN = nuke.toNode(container['objectName'])
        GN = container["node"]

        file = get_representation_path(representation).replace("\\", "/")
        name = container['name']

@@ -175,12 +176,13 @@ class LoadEffects(load.LoaderPlugin):
        add_keys = ["frameStart", "frameEnd", "handleStart", "handleEnd",
                    "source", "author", "fps"]

        data_imprint = {"representation": str(representation["_id"]),
                        "frameStart": first,
                        "frameEnd": last,
                        "version": vname,
                        "colorspaceInput": colorspace,
                        "objectName": object_name}
        data_imprint = {
            "representation": str(representation["_id"]),
            "frameStart": first,
            "frameEnd": last,
            "version": vname,
            "colorspaceInput": colorspace
        }

        for k in add_keys:
            data_imprint.update({k: version_data[k]})

@@ -212,7 +214,7 @@ class LoadEffects(load.LoaderPlugin):
            pre_node = nuke.createNode("Input")
            pre_node["name"].setValue("rgb")

            for ef_name, ef_val in nodes_order.items():
            for _, ef_val in nodes_order.items():
                node = nuke.createNode(ef_val["class"])
                for k, v in ef_val["node"].items():
                    if k in self.ignore_attr:

@@ -346,6 +348,6 @@ class LoadEffects(load.LoaderPlugin):
        self.update(container, representation)

    def remove(self, container):
        node = nuke.toNode(container['objectName'])
        node = container["node"]
        with viewer_update_and_undo_stop():
            nuke.delete(node)

@@ -63,11 +63,12 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
        add_keys = ["frameStart", "frameEnd", "handleStart", "handleEnd",
                    "source", "author", "fps"]

        data_imprint = {"frameStart": first,
                        "frameEnd": last,
                        "version": vname,
                        "colorspaceInput": colorspace,
                        "objectName": object_name}
        data_imprint = {
            "frameStart": first,
            "frameEnd": last,
            "version": vname,
            "colorspaceInput": colorspace,
        }

        for k in add_keys:
            data_imprint.update({k: version_data[k]})

@@ -98,7 +99,7 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
            pre_node = nuke.createNode("Input")
            pre_node["name"].setValue("rgb")

            for ef_name, ef_val in nodes_order.items():
            for _, ef_val in nodes_order.items():
                node = nuke.createNode(ef_val["class"])
                for k, v in ef_val["node"].items():
                    if k in self.ignore_attr:

@@ -164,28 +165,26 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
        version_doc = get_version_by_id(project_name, representation["parent"])

        # get corresponding node
        GN = nuke.toNode(container['objectName'])
        GN = container["node"]

        file = get_representation_path(representation).replace("\\", "/")
        name = container['name']
        version_data = version_doc.get("data", {})
        vname = version_doc.get("name", None)
        first = version_data.get("frameStart", None)
        last = version_data.get("frameEnd", None)
        workfile_first_frame = int(nuke.root()["first_frame"].getValue())
        namespace = container['namespace']
        colorspace = version_data.get("colorspace", None)
        object_name = "{}_{}".format(name, namespace)

        add_keys = ["frameStart", "frameEnd", "handleStart", "handleEnd",
                    "source", "author", "fps"]

        data_imprint = {"representation": str(representation["_id"]),
                        "frameStart": first,
                        "frameEnd": last,
                        "version": vname,
                        "colorspaceInput": colorspace,
                        "objectName": object_name}
        data_imprint = {
            "representation": str(representation["_id"]),
            "frameStart": first,
            "frameEnd": last,
            "version": vname,
            "colorspaceInput": colorspace,
        }

        for k in add_keys:
            data_imprint.update({k: version_data[k]})

@@ -217,7 +216,7 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
            pre_node = nuke.createNode("Input")
            pre_node["name"].setValue("rgb")

            for ef_name, ef_val in nodes_order.items():
            for _, ef_val in nodes_order.items():
                node = nuke.createNode(ef_val["class"])
                for k, v in ef_val["node"].items():
                    if k in self.ignore_attr:

@@ -251,11 +250,6 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
            output = nuke.createNode("Output")
            output.setInput(0, pre_node)

        # # try to place it under Viewer1
        # if not self.connect_active_viewer(GN):
        #     nuke.delete(GN)
        #     return

        # get all versions in list
        last_version_doc = get_last_version_by_subset_id(
            project_name, version_doc["parent"], fields=["_id"]

@@ -365,6 +359,6 @@ class LoadEffectsInputProcess(load.LoaderPlugin):
        self.update(container, representation)

    def remove(self, container):
        node = nuke.toNode(container['objectName'])
        node = container["node"]
        with viewer_update_and_undo_stop():
            nuke.delete(node)

@@ -64,11 +64,12 @@ class LoadGizmo(load.LoaderPlugin):
        add_keys = ["frameStart", "frameEnd", "handleStart", "handleEnd",
                    "source", "author", "fps"]

        data_imprint = {"frameStart": first,
                        "frameEnd": last,
                        "version": vname,
                        "colorspaceInput": colorspace,
                        "objectName": object_name}
        data_imprint = {
            "frameStart": first,
            "frameEnd": last,
            "version": vname,
            "colorspaceInput": colorspace
        }

        for k in add_keys:
            data_imprint.update({k: version_data[k]})

@@ -111,7 +112,7 @@ class LoadGizmo(load.LoaderPlugin):
        version_doc = get_version_by_id(project_name, representation["parent"])

        # get corresponding node
        group_node = nuke.toNode(container['objectName'])
        group_node = container["node"]

        file = get_representation_path(representation).replace("\\", "/")
        name = container['name']

@@ -126,12 +127,13 @@ class LoadGizmo(load.LoaderPlugin):
        add_keys = ["frameStart", "frameEnd", "handleStart", "handleEnd",
                    "source", "author", "fps"]

        data_imprint = {"representation": str(representation["_id"]),
                        "frameStart": first,
                        "frameEnd": last,
                        "version": vname,
                        "colorspaceInput": colorspace,
                        "objectName": object_name}
        data_imprint = {
            "representation": str(representation["_id"]),
            "frameStart": first,
            "frameEnd": last,
            "version": vname,
            "colorspaceInput": colorspace
        }

        for k in add_keys:
            data_imprint.update({k: version_data[k]})

@@ -175,6 +177,6 @@ class LoadGizmo(load.LoaderPlugin):
        self.update(container, representation)

    def remove(self, container):
        node = nuke.toNode(container['objectName'])
        node = container["node"]
        with viewer_update_and_undo_stop():
            nuke.delete(node)

@@ -66,11 +66,12 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
        add_keys = ["frameStart", "frameEnd", "handleStart", "handleEnd",
                    "source", "author", "fps"]

        data_imprint = {"frameStart": first,
                        "frameEnd": last,
                        "version": vname,
                        "colorspaceInput": colorspace,
                        "objectName": object_name}
        data_imprint = {
            "frameStart": first,
            "frameEnd": last,
            "version": vname,
            "colorspaceInput": colorspace
        }

        for k in add_keys:
            data_imprint.update({k: version_data[k]})

@@ -118,7 +119,7 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
        version_doc = get_version_by_id(project_name, representation["parent"])

        # get corresponding node
        group_node = nuke.toNode(container['objectName'])
        group_node = container["node"]

        file = get_representation_path(representation).replace("\\", "/")
        name = container['name']

@@ -133,12 +134,13 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
        add_keys = ["frameStart", "frameEnd", "handleStart", "handleEnd",
                    "source", "author", "fps"]

        data_imprint = {"representation": str(representation["_id"]),
                        "frameStart": first,
                        "frameEnd": last,
                        "version": vname,
                        "colorspaceInput": colorspace,
                        "objectName": object_name}
        data_imprint = {
            "representation": str(representation["_id"]),
            "frameStart": first,
            "frameEnd": last,
            "version": vname,
            "colorspaceInput": colorspace
        }

        for k in add_keys:
            data_imprint.update({k: version_data[k]})

@@ -256,6 +258,6 @@ class LoadGizmoInputProcess(load.LoaderPlugin):
        self.update(container, representation)

    def remove(self, container):
        node = nuke.toNode(container['objectName'])
        node = container["node"]
        with viewer_update_and_undo_stop():
            nuke.delete(node)

@ -146,8 +146,6 @@ class LoadImage(load.LoaderPlugin):
            data_imprint.update(
                {k: context["version"]['data'].get(k, str(None))})

        data_imprint.update({"objectName": read_name})

        r["tile_color"].setValue(int("0x4ecd25ff", 16))

        return containerise(r,

@ -168,7 +166,7 @@ class LoadImage(load.LoaderPlugin):
            inputs:

        """
        node = nuke.toNode(container["objectName"])
        node = container["node"]
        frame_number = node["first"].value()

        assert node.Class() == "Read", "Must be Read"

@ -237,7 +235,7 @@ class LoadImage(load.LoaderPlugin):
        self.log.info("updated to version: {}".format(version_doc.get("name")))

    def remove(self, container):
        node = nuke.toNode(container['objectName'])
        node = container["node"]
        assert node.Class() == "Read", "Must be Read"

        with viewer_update_and_undo_stop():
@ -46,10 +46,11 @@ class AlembicModelLoader(load.LoaderPlugin):
        # add additional metadata from the version to imprint to Avalon knob
        add_keys = ["source", "author", "fps"]

        data_imprint = {"frameStart": first,
                        "frameEnd": last,
                        "version": vname,
                        "objectName": object_name}
        data_imprint = {
            "frameStart": first,
            "frameEnd": last,
            "version": vname
        }

        for k in add_keys:
            data_imprint.update({k: version_data[k]})

@ -114,9 +115,9 @@ class AlembicModelLoader(load.LoaderPlugin):
        # Get version from io
        project_name = get_current_project_name()
        version_doc = get_version_by_id(project_name, representation["parent"])
        object_name = container['objectName']

        # get corresponding node
        model_node = nuke.toNode(object_name)
        model_node = container["node"]

        # get main variables
        version_data = version_doc.get("data", {})

@ -129,11 +130,12 @@ class AlembicModelLoader(load.LoaderPlugin):
        # add additional metadata from the version to imprint to Avalon knob
        add_keys = ["source", "author", "fps"]

        data_imprint = {"representation": str(representation["_id"]),
                        "frameStart": first,
                        "frameEnd": last,
                        "version": vname,
                        "objectName": object_name}
        data_imprint = {
            "representation": str(representation["_id"]),
            "frameStart": first,
            "frameEnd": last,
            "version": vname
        }

        for k in add_keys:
            data_imprint.update({k: version_data[k]})

@ -142,7 +144,6 @@ class AlembicModelLoader(load.LoaderPlugin):
        file = get_representation_path(representation).replace("\\", "/")

        with maintained_selection():
            model_node = nuke.toNode(object_name)
            model_node['selected'].setValue(True)

            # collect input output dependencies

@ -163,8 +164,10 @@ class AlembicModelLoader(load.LoaderPlugin):
            ypos = model_node.ypos()
            nuke.nodeCopy("%clipboard%")
            nuke.delete(model_node)

            # paste the node back and set the position
            nuke.nodePaste("%clipboard%")
            model_node = nuke.toNode(object_name)
            model_node = nuke.selectedNode()
            model_node.setXYpos(xpos, ypos)

            # link to original input nodes
@ -55,7 +55,7 @@ class LoadOcioLookNodes(load.LoaderPlugin):
        """
        namespace = namespace or context['asset']['name']
        suffix = secrets.token_hex(nbytes=4)
        object_name = "{}_{}_{}".format(
        node_name = "{}_{}_{}".format(
            name, namespace, suffix)

        # getting file path

@ -64,7 +64,9 @@ class LoadOcioLookNodes(load.LoaderPlugin):
        json_f = self._load_json_data(filepath)

        group_node = self._create_group_node(
            object_name, filepath, json_f["data"])
            filepath, json_f["data"])
        # renaming group node
        group_node["name"].setValue(node_name)

        self._node_version_color(context["version"], group_node)

@ -76,17 +78,14 @@ class LoadOcioLookNodes(load.LoaderPlugin):
            name=name,
            namespace=namespace,
            context=context,
            loader=self.__class__.__name__,
            data={
                "objectName": object_name,
            }
            loader=self.__class__.__name__
        )

    def _create_group_node(
        self,
        object_name,
        filepath,
        data
        data,
        group_node=None
    ):
        """Creates group node with all the nodes inside.

@ -94,9 +93,9 @@ class LoadOcioLookNodes(load.LoaderPlugin):
        in between - in case those are needed.

        Arguments:
            object_name (str): name of the group node
            filepath (str): path to json file
            data (dict): data from json file
            group_node (Optional[nuke.Node]): group node or None

        Returns:
            nuke.Node: group node with all the nodes inside

@ -117,7 +116,6 @@ class LoadOcioLookNodes(load.LoaderPlugin):

        input_node = None
        output_node = None
        group_node = nuke.toNode(object_name)
        if group_node:
            # remove all nodes between Input and Output nodes
            for node in group_node.nodes():

@ -130,7 +128,6 @@ class LoadOcioLookNodes(load.LoaderPlugin):
        else:
            group_node = nuke.createNode(
                "Group",
                "name {}_1".format(object_name),
                inpanel=False
            )

@ -227,16 +224,16 @@ class LoadOcioLookNodes(load.LoaderPlugin):
        project_name = get_current_project_name()
        version_doc = get_version_by_id(project_name, representation["parent"])

        object_name = container['objectName']
        group_node = container["node"]

        filepath = get_representation_path(representation)

        json_f = self._load_json_data(filepath)

        group_node = self._create_group_node(
            object_name,
            filepath,
            json_f["data"]
            json_f["data"],
            group_node
        )

        self._node_version_color(version_doc, group_node)
@ -46,8 +46,6 @@ class LinkAsGroup(load.LoaderPlugin):
        file = self.filepath_from_context(context).replace("\\", "/")
        self.log.info("file: {}\n".format(file))

        precomp_name = context["representation"]["context"]["subset"]

        self.log.info("versionData: {}\n".format(context["version"]["data"]))

        # add additional metadata from the version to imprint to Avalon knob

@ -62,7 +60,6 @@ class LinkAsGroup(load.LoaderPlugin):
        }
        for k in add_keys:
            data_imprint.update({k: context["version"]['data'][k]})
        data_imprint.update({"objectName": precomp_name})

        # group context is set to precomp, so back up one level.
        nuke.endGroup()

@ -118,7 +115,7 @@ class LinkAsGroup(load.LoaderPlugin):
            inputs:

        """
        node = nuke.toNode(container['objectName'])
        node = container["node"]

        root = get_representation_path(representation).replace("\\", "/")

@ -159,6 +156,6 @@ class LinkAsGroup(load.LoaderPlugin):
        self.log.info("updated to version: {}".format(version_doc.get("name")))

    def remove(self, container):
        node = nuke.toNode(container['objectName'])
        node = container["node"]
        with viewer_update_and_undo_stop():
            nuke.delete(node)
@ -48,11 +48,6 @@ class PhotoshopHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
        pyblish.api.register_plugin_path(PUBLISH_PATH)
        register_loader_plugin_path(LOAD_PATH)
        register_creator_plugin_path(CREATE_PATH)
        log.info(PUBLISH_PATH)

        pyblish.api.register_callback(
            "instanceToggled", on_pyblish_instance_toggled
        )

        register_event_callback("application.launched", on_application_launch)

@ -177,11 +172,6 @@ def on_application_launch():
    check_inventory()


def on_pyblish_instance_toggled(instance, old_value, new_value):
    """Toggle layer visibility on instance toggles."""
    instance[0].Visible = new_value


def ls():
    """Yields containers from active Photoshop document
@ -1,4 +1,4 @@
Updated as of 9 May 2022
Updated as of 26 May 2023
----------------------------
In this package, you will find a brief introduction to the Scripting API for DaVinci Resolve Studio. Apart from this README.txt file, this package contains folders containing the basic import
modules for scripting access (DaVinciResolve.py) and some representative examples.

@ -19,7 +19,7 @@ DaVinci Resolve scripting requires one of the following to be installed (for all

    Lua 5.1
    Python 2.7 64-bit
    Python 3.6 64-bit
    Python >= 3.6 64-bit


Using a script

@ -171,6 +171,10 @@ Project
    GetRenderResolutions(format, codec) --> [{Resolution}] # Returns list of resolutions applicable for the given render format (string) and render codec (string). Returns full list of resolutions if no argument is provided. Each element in the list is a dictionary with 2 keys "Width" and "Height".
    RefreshLUTList() --> Bool # Refreshes LUT List
    GetUniqueId() --> string # Returns a unique ID for the project item
    InsertAudioToCurrentTrackAtPlayhead(mediaPath, --> Bool # Inserts the media specified by mediaPath (string) with startOffsetInSamples (int) and durationInSamples (int) at the playhead on a selected track on the Fairlight page. Returns True if successful, otherwise False.
        startOffsetInSamples, durationInSamples)
    LoadBurnInPreset(presetName) --> Bool # Loads user defined data burn in preset for project when supplied presetName (string). Returns true if successful.
    ExportCurrentFrameAsStill(filePath) --> Bool # Exports current frame as still to supplied filePath. filePath must end in valid export file format. Returns True if successful, False otherwise.

MediaStorage
    GetMountedVolumeList() --> [paths...] # Returns list of folder paths corresponding to mounted volumes displayed in Resolve’s Media Storage.

@ -179,6 +183,7 @@ MediaStorage
    RevealInStorage(path) --> Bool # Expands and displays given file/folder path in Resolve’s Media Storage.
    AddItemListToMediaPool(item1, item2, ...) --> [clips...] # Adds specified file/folder paths from Media Storage into current Media Pool folder. Input is one or more file/folder paths. Returns a list of the MediaPoolItems created.
    AddItemListToMediaPool([items...]) --> [clips...] # Adds specified file/folder paths from Media Storage into current Media Pool folder. Input is an array of file/folder paths. Returns a list of the MediaPoolItems created.
    AddItemListToMediaPool([{itemInfo}, ...]) --> [clips...] # Adds list of itemInfos specified as dict of "media", "startFrame" (int), "endFrame" (int) from Media Storage into current Media Pool folder. Returns a list of the MediaPoolItems created.
    AddClipMattesToMediaPool(MediaPoolItem, [paths], stereoEye) --> Bool # Adds specified media files as mattes for the specified MediaPoolItem. StereoEye is an optional argument for specifying which eye to add the matte to for stereo clips ("left" or "right"). Returns True if successful.
    AddTimelineMattesToMediaPool([paths]) --> [MediaPoolItems] # Adds specified media files as timeline mattes in current media pool folder. Returns a list of created MediaPoolItems.

@ -189,20 +194,22 @@ MediaPool
    CreateEmptyTimeline(name) --> Timeline # Adds new timeline with given name.
    AppendToTimeline(clip1, clip2, ...) --> [TimelineItem] # Appends specified MediaPoolItem objects in the current timeline. Returns the list of appended timelineItems.
    AppendToTimeline([clips]) --> [TimelineItem] # Appends specified MediaPoolItem objects in the current timeline. Returns the list of appended timelineItems.
    AppendToTimeline([{clipInfo}, ...]) --> [TimelineItem] # Appends list of clipInfos specified as dict of "mediaPoolItem", "startFrame" (int), "endFrame" (int), (optional) "mediaType" (int; 1 - Video only, 2 - Audio only). Returns the list of appended timelineItems.
    AppendToTimeline([{clipInfo}, ...]) --> [TimelineItem] # Appends list of clipInfos specified as dict of "mediaPoolItem", "startFrame" (int), "endFrame" (int), (optional) "mediaType" (int; 1 - Video only, 2 - Audio only), "trackIndex" (int) and "recordFrame" (int). Returns the list of appended timelineItems.
    CreateTimelineFromClips(name, clip1, clip2,...) --> Timeline # Creates new timeline with specified name, and appends the specified MediaPoolItem objects.
    CreateTimelineFromClips(name, [clips]) --> Timeline # Creates new timeline with specified name, and appends the specified MediaPoolItem objects.
    CreateTimelineFromClips(name, [{clipInfo}]) --> Timeline # Creates new timeline with specified name, appending the list of clipInfos specified as a dict of "mediaPoolItem", "startFrame" (int), "endFrame" (int).
    ImportTimelineFromFile(filePath, {importOptions}) --> Timeline # Creates timeline based on parameters within given file and optional importOptions dict, with support for the keys:
        # "timelineName": string, specifies the name of the timeline to be created
        # "importSourceClips": Bool, specifies whether source clips should be imported, True by default
    CreateTimelineFromClips(name, [{clipInfo}]) --> Timeline # Creates new timeline with specified name, appending the list of clipInfos specified as a dict of "mediaPoolItem", "startFrame" (int), "endFrame" (int), "recordFrame" (int).
    ImportTimelineFromFile(filePath, {importOptions}) --> Timeline # Creates timeline based on parameters within given file (AAF/EDL/XML/FCPXML/DRT/ADL) and optional importOptions dict, with support for the keys:
        # "timelineName": string, specifies the name of the timeline to be created. Not valid for DRT import
        # "importSourceClips": Bool, specifies whether source clips should be imported, True by default. Not valid for DRT import
        # "sourceClipsPath": string, specifies a filesystem path to search for source clips if the media is inaccessible in their original path and if "importSourceClips" is True
        # "sourceClipsFolders": List of Media Pool folder objects to search for source clips if the media is not present in current folder and if "importSourceClips" is False
        # "sourceClipsFolders": List of Media Pool folder objects to search for source clips if the media is not present in current folder and if "importSourceClips" is False. Not valid for DRT import
        # "interlaceProcessing": Bool, specifies whether to enable interlace processing on the imported timeline being created. valid only for AAF import
    DeleteTimelines([timeline]) --> Bool # Deletes specified timelines in the media pool.
    GetCurrentFolder() --> Folder # Returns currently selected Folder.
    SetCurrentFolder(Folder) --> Bool # Sets current folder by given Folder.
    DeleteClips([clips]) --> Bool # Deletes specified clips or timeline mattes in the media pool
    ImportFolderFromFile(filePath, sourceClipsPath="") --> Bool # Returns true if import from given DRB filePath is successful, false otherwise
        # sourceClipsPath is a string that specifies a filesystem path to search for source clips if the media is inaccessible in their original path, empty by default
    DeleteFolders([subfolders]) --> Bool # Deletes specified subfolders in the media pool
    MoveClips([clips], targetFolder) --> Bool # Moves specified clips to target folder.
    MoveFolders([folders], targetFolder) --> Bool # Moves specified folders to target folder.
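The clipInfo dictionary form of AppendToTimeline is the variant that honours "trackIndex" and "recordFrame". A minimal sketch in Python, assuming the standard scripting bootstrap module and a clip picked purely for illustration:

    # Hedged sketch: append the first media pool clip at an explicit record frame.
    import DaVinciResolveScript as dvr_script  # assumed standard bootstrap module

    resolve = dvr_script.scriptapp("Resolve")
    project = resolve.GetProjectManager().GetCurrentProject()
    media_pool = project.GetMediaPool()
    clip = media_pool.GetCurrentFolder().GetClipList()[0]  # illustrative pick

    appended_items = media_pool.AppendToTimeline([{
        "mediaPoolItem": clip,
        "startFrame": 0,       # source in
        "endFrame": 100,       # source out
        "trackIndex": 1,       # target video track
        "recordFrame": 86400,  # timeline position; 01:00:00:00 at 24 fps
    }])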
@ -225,6 +232,7 @@ Folder
    GetSubFolderList() --> [folders...] # Returns a list of subfolders in the folder.
    GetIsFolderStale() --> bool # Returns true if folder is stale in collaboration mode, false otherwise
    GetUniqueId() --> string # Returns a unique ID for the media pool folder
    Export(filePath) --> bool # Returns true if export of DRB folder to filePath is successful, false otherwise

MediaPoolItem
    GetName() --> string # Returns the clip name.

@ -257,6 +265,8 @@ MediaPoolItem
    UnlinkProxyMedia() --> Bool # Unlinks any proxy media associated with clip.
    ReplaceClip(filePath) --> Bool # Replaces the underlying asset and metadata of MediaPoolItem with the specified absolute clip path.
    GetUniqueId() --> string # Returns a unique ID for the media pool item
    TranscribeAudio() --> Bool # Transcribes audio of the MediaPoolItem. Returns True if successful; False otherwise
    ClearTranscription() --> Bool # Clears audio transcription of the MediaPoolItem. Returns True if successful; False otherwise.

Timeline
    GetName() --> string # Returns the timeline name.

@ -266,6 +276,23 @@ Timeline
    SetStartTimecode(timecode) --> Bool # Set the start timecode of the timeline to the string 'timecode'. Returns true when the change is successful, false otherwise.
    GetStartTimecode() --> string # Returns the start timecode for the timeline.
    GetTrackCount(trackType) --> int # Returns the number of tracks for the given track type ("audio", "video" or "subtitle").
    AddTrack(trackType, optionalSubTrackType) --> Bool # Adds track of trackType ("video", "subtitle", "audio"). Second argument optionalSubTrackType is required for "audio"
        # optionalSubTrackType can be one of {"mono", "stereo", "5.1", "5.1film", "7.1", "7.1film", "adaptive1", ... , "adaptive24"}
    DeleteTrack(trackType, trackIndex) --> Bool # Deletes track of trackType ("video", "subtitle", "audio") and given trackIndex. 1 <= trackIndex <= GetTrackCount(trackType).
    SetTrackEnable(trackType, trackIndex, Bool) --> Bool # Enables/Disables track with given trackType and trackIndex
        # trackType is one of {"audio", "video", "subtitle"}
        # 1 <= trackIndex <= GetTrackCount(trackType).
    GetIsTrackEnabled(trackType, trackIndex) --> Bool # Returns True if track with given trackType and trackIndex is enabled and False otherwise.
        # trackType is one of {"audio", "video", "subtitle"}
        # 1 <= trackIndex <= GetTrackCount(trackType).
    SetTrackLock(trackType, trackIndex, Bool) --> Bool # Locks/Unlocks track with given trackType and trackIndex
        # trackType is one of {"audio", "video", "subtitle"}
        # 1 <= trackIndex <= GetTrackCount(trackType).
    GetIsTrackLocked(trackType, trackIndex) --> Bool # Returns True if track with given trackType and trackIndex is locked and False otherwise.
        # trackType is one of {"audio", "video", "subtitle"}
        # 1 <= trackIndex <= GetTrackCount(trackType).
    DeleteClips([timelineItems], Bool) --> Bool # Deletes specified TimelineItems from the timeline, performing ripple delete if the second argument is True. Second argument is optional (The default for this is False)
    SetClipsLinked([timelineItems], Bool) --> Bool # Links or unlinks the specified TimelineItems depending on second argument.
    GetItemListInTrack(trackType, index) --> [items...] # Returns a list of timeline items on that track (based on trackType and index). 1 <= index <= GetTrackCount(trackType).
    AddMarker(frameId, color, name, note, duration, --> Bool # Creates a new marker at given frameId position and with given marker information. 'customData' is optional and helps to attach user specific data to the marker.
        customData)
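Taken together, the track functions above let a script prepare a timeline before editing it. A short sketch, reusing the `project` handle from the earlier example:

    timeline = project.GetCurrentTimeline()
    for index in range(1, timeline.GetTrackCount("video") + 1):
        timeline.SetTrackEnable("video", index, True)   # switch the track on
        timeline.SetTrackLock("video", index, False)    # and leave it editable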
@ -301,7 +328,7 @@ Timeline
        # "sourceClipsFolders": string, list of Media Pool folder objects to search for source clips if the media is not present in current folder

    Export(fileName, exportType, exportSubtype) --> Bool # Exports timeline to 'fileName' as per input exportType & exportSubtype format.
        # Refer to section "Looking up timeline exports properties" for information on the parameters.
        # Refer to section "Looking up timeline export properties" for information on the parameters.
    GetSetting(settingName) --> string # Returns value of timeline setting (indicated by settingName : string). Check the section below for more information.
    SetSetting(settingName, settingValue) --> Bool # Sets timeline setting (indicated by settingName : string) to the value (settingValue : string). Check the section below for more information.
    InsertGeneratorIntoTimeline(generatorName) --> TimelineItem # Inserts a generator (indicated by generatorName : string) into the timeline.

@ -313,6 +340,8 @@ Timeline
    GrabStill() --> galleryStill # Grabs still from the current video clip. Returns a GalleryStill object.
    GrabAllStills(stillFrameSource) --> [galleryStill] # Grabs stills from all the clips of the timeline at 'stillFrameSource' (1 - First frame, 2 - Middle frame). Returns the list of GalleryStill objects.
    GetUniqueId() --> string # Returns a unique ID for the timeline
    CreateSubtitlesFromAudio() --> Bool # Creates subtitles from audio for the timeline. Returns True on success, False otherwise.
    DetectSceneCuts() --> Bool # Detects and makes scene cuts along the timeline. Returns True if successful, False otherwise.

TimelineItem
    GetName() --> string # Returns the item name.

@ -362,6 +391,7 @@ TimelineItem
    GetStereoLeftFloatingWindowParams() --> {keyframes...} # For the LEFT eye -> returns a dict (offset -> dict) of keyframe offsets and respective floating window params. Value at particular offset includes the left, right, top and bottom floating window values.
    GetStereoRightFloatingWindowParams() --> {keyframes...} # For the RIGHT eye -> returns a dict (offset -> dict) of keyframe offsets and respective floating window params. Value at particular offset includes the left, right, top and bottom floating window values.
    GetNumNodes() --> int # Returns the number of nodes in the current graph for the timeline item
    ApplyArriCdlLut() --> Bool # Applies ARRI CDL and LUT. Returns True if successful, False otherwise.
    SetLUT(nodeIndex, lutPath) --> Bool # Sets LUT on the node mapping the node index provided, 1 <= nodeIndex <= total number of nodes.
        # The lutPath can be an absolute path, or a relative path (based off custom LUT paths or the master LUT path).
        # The operation is successful for valid lut paths that Resolve has already discovered (see Project.RefreshLUTList).

@ -376,8 +406,16 @@ TimelineItem
    SelectTakeByIndex(idx) --> Bool # Selects a take by index, 1 <= idx <= number of takes.
    FinalizeTake() --> Bool # Finalizes take selection.
    CopyGrades([tgtTimelineItems]) --> Bool # Copies the current grade to all the items in tgtTimelineItems list. Returns True on success and False if any error occurred.
    SetClipEnabled(Bool) --> Bool # Sets clip enabled based on argument.
    GetClipEnabled() --> Bool # Gets clip enabled status.
    UpdateSidecar() --> Bool # Updates sidecar file for BRAW clips or RMD file for R3D clips.
    GetUniqueId() --> string # Returns a unique ID for the timeline item
    LoadBurnInPreset(presetName) --> Bool # Loads user defined data burn in preset for clip when supplied presetName (string). Returns true if successful.
    GetNodeLabel(nodeIndex) --> string # Returns the label of the node at nodeIndex.
    CreateMagicMask(mode) --> Bool # Returns True if magic mask was created successfully, False otherwise. mode can be "F" (forward), "B" (backward), or "BI" (bidirectional)
    RegenerateMagicMask() --> Bool # Returns True if magic mask was regenerated successfully, False otherwise.
    Stabilize() --> Bool # Returns True if stabilization was successful, False otherwise
    SmartReframe() --> Bool # Performs Smart Reframe. Returns True if successful, False otherwise.

Gallery
    GetAlbumName(galleryStillAlbum) --> string # Returns the name of the GalleryStillAlbum object 'galleryStillAlbum'.

@ -422,9 +460,11 @@ Invoke "Project:SetSetting", "Timeline:SetSetting" or "MediaPoolItem:SetClipProp
ensure the success of the operation. You can troubleshoot the validity of keys and values by setting the desired result from the UI and checking property snapshots before and after the change.

The following Project properties have specifically enumerated values:
"superScale" - the property value is an enumerated integer between 0 and 3 with these meanings: 0=Auto, 1=no scaling, and 2, 3 and 4 represent the Super Scale multipliers 2x, 3x and 4x.
"superScale" - the property value is an enumerated integer between 0 and 4 with these meanings: 0=Auto, 1=no scaling, and 2, 3 and 4 represent the Super Scale multipliers 2x, 3x and 4x.
    for super scale multiplier '2x Enhanced', exactly 4 arguments must be passed as outlined below. If less than 4 arguments are passed, it will default to 2x.
Affects:
• x = Project:GetSetting('superScale') and Project:SetSetting('superScale', x)
• for '2x Enhanced' --> Project:SetSetting('superScale', 2, sharpnessValue, noiseReductionValue), where sharpnessValue is a float in the range [0.0, 1.0] and noiseReductionValue is a float in the range [0.0, 1.0]

"timelineFrameRate" - the property value is one of the frame rates available to the user in project settings under "Timeline frame rate" option. Drop Frame can be configured for supported frame rates
    by appending the frame rate with "DF", e.g. "29.97 DF" will enable drop frame and "29.97" will disable drop frame

@ -432,9 +472,11 @@ Affects:
• x = Project:GetSetting('timelineFrameRate') and Project:SetSetting('timelineFrameRate', x)

The following Clip properties have specifically enumerated values:
"superScale" - the property value is an enumerated integer between 1 and 3 with these meanings: 1=no scaling, and 2, 3 and 4 represent the Super Scale multipliers 2x, 3x and 4x.
"Super Scale" - the property value is an enumerated integer between 1 and 4 with these meanings: 1=no scaling, and 2, 3 and 4 represent the Super Scale multipliers 2x, 3x and 4x.
    for super scale multiplier '2x Enhanced', exactly 4 arguments must be passed as outlined below. If less than 4 arguments are passed, it will default to 2x.
Affects:
• x = MediaPoolItem:GetClipProperty('Super Scale') and MediaPoolItem:SetClipProperty('Super Scale', x)
• for '2x Enhanced' --> MediaPoolItem:SetClipProperty('Super Scale', 2, sharpnessValue, noiseReductionValue), where sharpnessValue is a float in the range [0.0, 1.0] and noiseReductionValue is a float in the range [0.0, 1.0]
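Put concretely, in Python (a sketch with illustrative values, reusing `project` and a MediaPoolItem `clip` obtained as in the earlier examples):

    project.SetSetting("superScale", 3)               # plain 3x on the project
    project.SetSetting("superScale", 2, 0.5, 0.5)     # '2x Enhanced' needs all 4 arguments
    clip.SetClipProperty("Super Scale", 2, 0.5, 0.5)  # same rule for the clip property
    current = clip.GetClipProperty("Super Scale")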
Looking up Render Settings

@ -478,11 +520,6 @@ exportType can be one of the following constants:
    - resolve.EXPORT_DRT
    - resolve.EXPORT_EDL
    - resolve.EXPORT_FCP_7_XML
    - resolve.EXPORT_FCPXML_1_3
    - resolve.EXPORT_FCPXML_1_4
    - resolve.EXPORT_FCPXML_1_5
    - resolve.EXPORT_FCPXML_1_6
    - resolve.EXPORT_FCPXML_1_7
    - resolve.EXPORT_FCPXML_1_8
    - resolve.EXPORT_FCPXML_1_9
    - resolve.EXPORT_FCPXML_1_10

@ -492,6 +529,8 @@ exportType can be one of the following constants:
    - resolve.EXPORT_TEXT_TAB
    - resolve.EXPORT_DOLBY_VISION_VER_2_9
    - resolve.EXPORT_DOLBY_VISION_VER_4_0
    - resolve.EXPORT_DOLBY_VISION_VER_5_1
    - resolve.EXPORT_OTIO
exportSubtype can be one of the following enums:
    - resolve.EXPORT_NONE
    - resolve.EXPORT_AAF_NEW

@ -504,6 +543,16 @@ When exportType is resolve.EXPORT_AAF, valid exportSubtype values are resolve.EX
When exportType is resolve.EXPORT_EDL, valid exportSubtype values are resolve.EXPORT_CDL, resolve.EXPORT_SDL, resolve.EXPORT_MISSING_CLIPS and resolve.EXPORT_NONE.
Note: Replace 'resolve.' when using the constants above, if a different Resolve class instance name is used.
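For example (a sketch; the output paths are illustrative):

    timeline.Export("/tmp/cut_v001.aaf", resolve.EXPORT_AAF, resolve.EXPORT_AAF_NEW)
    timeline.Export("/tmp/cut_v001.edl", resolve.EXPORT_EDL, resolve.EXPORT_NONE)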
Unsupported exportType types
---------------------------------
Starting with DaVinci Resolve 18.1, the following export types are not supported:
    - resolve.EXPORT_FCPXML_1_3
    - resolve.EXPORT_FCPXML_1_4
    - resolve.EXPORT_FCPXML_1_5
    - resolve.EXPORT_FCPXML_1_6
    - resolve.EXPORT_FCPXML_1_7

Looking up Timeline item properties
-----------------------------------
This section covers additional notes for the function "TimelineItem:SetProperty" and "TimelineItem:GetProperty". These functions are used to get and set properties mentioned.
@ -6,7 +6,10 @@ import contextlib
from opentimelineio import opentime

from openpype.lib import Logger
from openpype.pipeline.editorial import is_overlapping_otio_ranges
from openpype.pipeline.editorial import (
    is_overlapping_otio_ranges,
    frames_to_timecode
)

from ..otio import davinci_export as otio_export

@ -246,18 +249,22 @@ def get_media_pool_item(filepath, root: object = None) -> object:
    return None


def create_timeline_item(media_pool_item: object,
                         timeline: object = None,
                         source_start: int = None,
                         source_end: int = None) -> object:
def create_timeline_item(
    media_pool_item: object,
    timeline: object = None,
    timeline_in: int = None,
    source_start: int = None,
    source_end: int = None,
) -> object:
    """
    Add media pool item to current or defined timeline.

    Args:
        media_pool_item (resolve.MediaPoolItem): resolve's object
        timeline (resolve.Timeline)[optional]: resolve's object
        source_start (int)[optional]: media source input frame (sequence frame)
        source_end (int)[optional]: media source output frame (sequence frame)
        timeline (Optional[resolve.Timeline]): resolve's object
        timeline_in (Optional[int]): timeline input frame (sequence frame)
        source_start (Optional[int]): media source input frame (sequence frame)
        source_end (Optional[int]): media source output frame (sequence frame)

    Returns:
        object: resolve.TimelineItem

@ -269,16 +276,29 @@ def create_timeline_item(media_pool_item: object,
    clip_name = _clip_property("File Name")
    timeline = timeline or get_current_timeline()

    # timing variables
    if all([timeline_in, source_start, source_end]):
        fps = timeline.GetSetting("timelineFrameRate")
        duration = source_end - source_start
        timecode_in = frames_to_timecode(timeline_in, fps)
        timecode_out = frames_to_timecode(timeline_in + duration, fps)
    else:
        timecode_in = None
        timecode_out = None

    # if timeline was used then switch it to current timeline
    with maintain_current_timeline(timeline):
        # Add input mediaPoolItem to clip data
        clip_data = {"mediaPoolItem": media_pool_item}
        clip_data = {
            "mediaPoolItem": media_pool_item,
        }

        # add source time range if input was given
        if source_start is not None:
            clip_data.update({"startFrame": source_start})
        if source_end is not None:
            clip_data.update({"endFrame": source_end})
        if source_start:
            clip_data["startFrame"] = source_start
        if source_end:
            clip_data["endFrame"] = source_end
        if timecode_in:
            clip_data["recordFrame"] = timecode_in

        # add to timeline
        media_pool.AppendToTimeline([clip_data])

@ -286,10 +306,15 @@ def create_timeline_item(media_pool_item: object,
    output_timeline_item = get_timeline_item(
        media_pool_item, timeline)

    assert output_timeline_item, AssertionError(
        "Track Item with name `{}` doesn't exist on the timeline: `{}`".format(
            clip_name, timeline.GetName()
        ))
    assert output_timeline_item, AssertionError((
        "Clip name '{}' wasn't created on the timeline: '{}' \n\n"
        "Please check if correct track position is activated, \n"
        "or if a clip is not already at the timeline in \n"
        "position: '{}' out: '{}'. \n\n"
        "Clip data: {}"
    ).format(
        clip_name, timeline.GetName(), timecode_in, timecode_out, clip_data
    ))
    return output_timeline_item

@ -490,7 +515,7 @@ def imprint(timeline_item, data=None):

    Arguments:
        timeline_item (hiero.core.TrackItem): hiero track item object
        data (dict): Any data which needst to be imprinted
        data (dict): Any data which needs to be imprinted

    Examples:
        data = {
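The record-frame math above hinges on frames_to_timecode converting an absolute frame index at the timeline rate. A small sketch of the expected values, assuming the helper returns an SMPTE-style string (figures are illustrative, not taken from this diff):

    from openpype.pipeline.editorial import frames_to_timecode

    # 86400 frames at 24 fps is exactly one hour of timeline time, so a
    # 100 frame clip placed there should span these two timecodes:
    print(frames_to_timecode(86400, 24))        # expected "01:00:00:00"
    print(frames_to_timecode(86400 + 100, 24))  # expected "01:00:04:04"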
@ -1,9 +1,10 @@
import os
import sys

from qtpy import QtWidgets, QtCore
from qtpy import QtWidgets, QtCore, QtGui

from openpype.tools.utils import host_tools
from openpype.pipeline import registered_host


def load_stylesheet():

@ -49,6 +50,7 @@ class OpenPypeMenu(QtWidgets.QWidget):
        )

        self.setWindowTitle("OpenPype")
        save_current_btn = QtWidgets.QPushButton("Save current file", self)
        workfiles_btn = QtWidgets.QPushButton("Workfiles ...", self)
        create_btn = QtWidgets.QPushButton("Create ...", self)
        publish_btn = QtWidgets.QPushButton("Publish ...", self)

@ -70,6 +72,10 @@ class OpenPypeMenu(QtWidgets.QWidget):
        layout = QtWidgets.QVBoxLayout(self)
        layout.setContentsMargins(10, 20, 10, 20)

        layout.addWidget(save_current_btn)

        layout.addWidget(Spacer(15, self))

        layout.addWidget(workfiles_btn)
        layout.addWidget(create_btn)
        layout.addWidget(publish_btn)

@ -94,6 +100,8 @@ class OpenPypeMenu(QtWidgets.QWidget):

        self.setLayout(layout)

        save_current_btn.clicked.connect(self.on_save_current_clicked)
        save_current_btn.setShortcut(QtGui.QKeySequence.Save)
        workfiles_btn.clicked.connect(self.on_workfile_clicked)
        create_btn.clicked.connect(self.on_create_clicked)
        publish_btn.clicked.connect(self.on_publish_clicked)

@ -106,6 +114,18 @@ class OpenPypeMenu(QtWidgets.QWidget):
        # reset_resolution_btn.clicked.connect(self.on_set_resolution_clicked)
        experimental_btn.clicked.connect(self.on_experimental_clicked)

    def on_save_current_clicked(self):
        host = registered_host()
        current_file = host.get_current_workfile()
        if not current_file:
            print("Current project is not saved. "
                  "Please save once first via workfiles tool.")
            host_tools.show_workfiles()
            return

        print(f"Saving current file to: {current_file}")
        host.save_workfile(current_file)

    def on_workfile_clicked(self):
        print("Clicked Workfile")
        host_tools.show_workfiles()
@ -306,11 +306,18 @@ class ClipLoader:
        self.active_project = lib.get_current_project()

        # try to get value from options or evaluate key value for `handles`
        self.with_handles = options.get("handles") or bool(
            options.get("handles") is True)
        self.with_handles = options.get("handles") is True

        # try to get value from options or evaluate key value for `load_to`
        self.new_timeline = options.get("newTimeline") or bool(
            "New timeline" in options.get("load_to", ""))
        self.new_timeline = (
            options.get("newTimeline") or
            options.get("load_to") == "New timeline"
        )
        # try to get value from options or evaluate key value for `load_how`
        self.sequential_load = (
            options.get("sequentially") or
            options.get("load_how") == "Sequentially in order"
        )

        assert self._populate_data(), str(
            "Cannot Load selected data, look into database "

@ -391,30 +398,70 @@ class ClipLoader:
        # create project bin for the media to be imported into
        self.active_bin = lib.create_bin(self.data["binPath"])

        handle_start = self.data["versionData"].get("handleStart") or 0
        handle_end = self.data["versionData"].get("handleEnd") or 0

        # create mediaItem in active project bin
        # create clip media
        media_pool_item = lib.create_media_pool_item(
            files,
            self.active_bin
        )
        _clip_property = media_pool_item.GetClipProperty

        # get handles
        handle_start = self.data["versionData"].get("handleStart")
        handle_end = self.data["versionData"].get("handleEnd")
        if handle_start is None:
            handle_start = int(self.data["assetData"]["handleStart"])
        if handle_end is None:
            handle_end = int(self.data["assetData"]["handleEnd"])

        # check frame duration from versionData or assetData
        frame_start = self.data["versionData"].get("frameStart")
        if frame_start is None:
            frame_start = self.data["assetData"]["frameStart"]

        # check frame duration from versionData or assetData
        frame_end = self.data["versionData"].get("frameEnd")
        if frame_end is None:
            frame_end = self.data["assetData"]["frameEnd"]

        db_frame_duration = int(frame_end) - int(frame_start) + 1

        # get timeline in
        timeline_start = self.active_timeline.GetStartFrame()
        if self.sequential_load:
            # set timeline start frame
            timeline_in = int(timeline_start)
        else:
            # set timeline start frame + original clip in frame
            timeline_in = int(
                timeline_start + self.data["assetData"]["clipIn"])

        source_in = int(_clip_property("Start"))
        source_out = int(_clip_property("End"))
        source_duration = int(_clip_property("Frames"))

        if _clip_property("Type") == "Video":
            # check if source duration is shorter than db frame duration
            source_with_handles = True
            if source_duration < db_frame_duration:
                source_with_handles = False

            # only exclude handles if source has no handles or
            # if user wants to load without handles
            if (
                not self.with_handles
                or not source_with_handles
            ):
                source_in += handle_start
                source_out -= handle_end

        # include handles
        if self.with_handles:
            source_in -= handle_start
            source_out += handle_end

        # make track item from source in bin as item
        timeline_item = lib.create_timeline_item(
            media_pool_item, self.active_timeline, source_in, source_out)
            media_pool_item,
            self.active_timeline,
            timeline_in,
            source_in,
            source_out,
        )

        print("Loading clips: `{}`".format(self.data["clip_name"]))
        return timeline_item

@ -455,7 +502,7 @@ class TimelineItemLoader(LoaderPlugin):
    """

    options = [
        qargparse.Toggle(
        qargparse.Boolean(
            "handles",
            label="Include handles",
            default=0,

@ -470,6 +517,16 @@ class TimelineItemLoader(LoaderPlugin):
            ],
            default=0,
            help="Where do you want clips to be loaded?"
        ),
        qargparse.Choice(
            "load_how",
            label="How to load clips",
            items=[
                "Original timing",
                "Sequentially in order"
            ],
            default="Original timing",
            help="Would you like to place it at original timing?"
        )
    ]
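The handle arithmetic in the loading code above is easiest to check with numbers. A worked case under assumed values (not taken from this diff):

    # versionData: handleStart=10, handleEnd=10, frameStart=1001, frameEnd=1100
    # -> db_frame_duration = 1100 - 1001 + 1 = 100
    # source clip properties: Start=0, End=119, Frames=120
    #
    # 120 source frames >= 100 DB frames, so the source is treated as
    # carrying handles. With the "Include handles" option off, they are
    # trimmed away:
    #   source_in  = 0 + 10   = 10
    #   source_out = 119 - 10 = 109   -> exactly the 100 frame editorial range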
@ -21,6 +21,7 @@ from aiohttp_json_rpc.protocol import (
)
from aiohttp_json_rpc.exceptions import RpcError

from openpype import AYON_SERVER_ENABLED
from openpype.lib import emit_event
from openpype.hosts.tvpaint.tvpaint_plugin import get_plugin_files_path

@ -834,8 +835,12 @@ class BaseCommunicator:


class QtCommunicator(BaseCommunicator):
    label = os.getenv("AVALON_LABEL")
    if not label:
        label = "AYON" if AYON_SERVER_ENABLED else "OpenPype"
    title = "{} Tools".format(label)
    menu_definitions = {
        "title": "OpenPype Tools",
        "title": title,
        "menu_items": [
            {
                "callback": "workfiles_tool",
@ -7,7 +7,7 @@ import requests

import pyblish.api

from openpype.client import get_project, get_asset_by_name
from openpype.client import get_asset_by_name
from openpype.host import HostBase, IWorkfileHost, ILoadHost, IPublishHost
from openpype.hosts.tvpaint import TVPAINT_ROOT_DIR
from openpype.settings import get_current_project_settings

@ -84,10 +84,6 @@ class TVPaintHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
        register_loader_plugin_path(load_dir)
        register_creator_plugin_path(create_dir)

        registered_callbacks = (
            pyblish.api.registered_callbacks().get("instanceToggled") or []
        )

        register_event_callback("application.launched", self.initial_launch)
        register_event_callback("application.exit", self.application_exit)
@ -69,7 +69,6 @@ class CollectWorkfileData(pyblish.api.ContextPlugin):
            "asset_name": context.data["asset"],
            "task_name": context.data["task"]
        }
        context.data["previous_context"] = current_context
        self.log.debug("Current context is: {}".format(current_context))

        # Collect context from workfile metadata
@ -36,6 +36,7 @@ from openpype.settings import (
)

from openpype.client.mongo import validate_mongo_connection
from openpype.client import get_ayon_server_api_connection

_PLACEHOLDER = object()

@ -613,9 +614,8 @@ def get_openpype_username():
    """

    if AYON_SERVER_ENABLED:
        import ayon_api

        return ayon_api.get_user()["name"]
        con = get_ayon_server_api_connection()
        return con.get_user()["name"]

    username = os.environ.get("OPENPYPE_USERNAME")
    if not username:
@ -16,9 +16,9 @@ from abc import ABCMeta, abstractmethod

import six
import appdirs
import ayon_api

from openpype import AYON_SERVER_ENABLED
from openpype.client import get_ayon_server_api_connection
from openpype.settings import (
    get_system_settings,
    SYSTEM_SETTINGS_KEY,

@ -106,7 +106,7 @@ class _ModuleClass(object):
        if attr_name in self.__attributes__:
            self.log.warning(
                "Duplicated name \"{}\" in {}. Overriding.".format(
                    self.name, attr_name
                    attr_name, self.name
                )
            )
        self.__attributes__[attr_name] = value

@ -319,8 +319,11 @@ def load_modules(force=False):


def _get_ayon_bundle_data():
    con = get_ayon_server_api_connection()
    bundles = con.get_bundles()["bundles"]

    bundle_name = os.getenv("AYON_BUNDLE_NAME")
    bundles = ayon_api.get_bundles()["bundles"]

    return next(
        (
            bundle

@ -345,7 +348,8 @@ def _get_ayon_addons_information(bundle_info):

    output = []
    bundle_addons = bundle_info["addons"]
    addons = ayon_api.get_addons_info()["addons"]
    con = get_ayon_server_api_connection()
    addons = con.get_addons_info()["addons"]
    for addon in addons:
        name = addon["name"]
        versions = addon.get("versions")

@ -408,6 +412,10 @@ def _load_ayon_addons(openpype_modules, modules_key, log):
        addon_name = addon_info["name"]
        addon_version = addon_info["version"]

        # OpenPype addon does not have any addon object
        if addon_name == "openpype":
            continue

        dev_addon_info = dev_addons_info.get(addon_name, {})
        use_dev_path = dev_addon_info.get("enabled", False)

@ -438,7 +446,7 @@ def _load_ayon_addons(openpype_modules, modules_key, log):
            # Ignore of files is implemented to be able to run code from code
            # where usually is more files than just the addon
            # Ignore start and setup scripts
            if name in ("setup.py", "start.py"):
            if name in ("setup.py", "start.py", "__pycache__"):
                continue

            path = os.path.join(addon_dir, name)

@ -454,7 +462,15 @@ def _load_ayon_addons(openpype_modules, modules_key, log):

        try:
            mod = __import__(basename, fromlist=("",))
            imported_modules.append(mod)
            for attr_name in dir(mod):
                attr = getattr(mod, attr_name)
                if (
                    inspect.isclass(attr)
                    and issubclass(attr, OpenPypeModule)
                ):
                    imported_modules.append(mod)
                    break

        except BaseException:
            log.warning(
                "Failed to import \"{}\"".format(basename),

@ -467,19 +483,26 @@ def _load_ayon_addons(openpype_modules, modules_key, log):
            ))
            continue

        if len(imported_modules) == 1:
            mod = imported_modules[0]
            addon_alias = getattr(mod, "V3_ALIAS", None)
            if not addon_alias:
                addon_alias = addon_name
            v3_addons_to_skip.append(addon_alias)
            new_import_str = "{}.{}".format(modules_key, addon_alias)
        if len(imported_modules) > 1:
            log.warning((
                "Skipping addon '{}'."
                " Multiple modules were found ({}) in dir {}."
            ).format(
                addon_name,
                ", ".join([m.__name__ for m in imported_modules]),
                addon_dir,
            ))
            continue

            sys.modules[new_import_str] = mod
            setattr(openpype_modules, addon_alias, mod)
        mod = imported_modules[0]
        addon_alias = getattr(mod, "V3_ALIAS", None)
        if not addon_alias:
            addon_alias = addon_name
        v3_addons_to_skip.append(addon_alias)
        new_import_str = "{}.{}".format(modules_key, addon_alias)

        else:
            log.info("More then one module was imported")
        sys.modules[new_import_str] = mod
        setattr(openpype_modules, addon_alias, mod)

    return v3_addons_to_skip

@ -997,7 +1020,18 @@ class ModulesManager:
                continue

            method = getattr(module, method_name)
            paths = method(*args, **kwargs)
            try:
                paths = method(*args, **kwargs)
            except Exception:
                self.log.warning(
                    (
                        "Failed to get plugin paths from module"
                        " '{}' using '{}'."
                    ).format(module.__class__.__name__, method_name),
                    exc_info=True
                )
                continue

            if paths:
                # Convert to list if value is not list
                if not isinstance(paths, (list, tuple, set)):
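The import loop above now keeps a candidate package only when it actually defines an OpenPypeModule subclass. The same inspect-based filter as a standalone sketch (the helper name is illustrative, not from the diff):

    import inspect

    def defines_openpype_module(mod):
        """Return True if `mod` exposes at least one OpenPypeModule subclass."""
        for attr_name in dir(mod):
            attr = getattr(mod, attr_name)
            if inspect.isclass(attr) and issubclass(attr, OpenPypeModule):
                return True
        return False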
@ -5,8 +5,6 @@ This is resolving index of server lists stored in `deadlineServers` instance
attribute or using default server if that attribute doesn't exists.

"""
from maya import cmds

import pyblish.api
from openpype.pipeline.publish import KnownPublishError

@ -44,7 +42,8 @@ class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin):
            str: Selected Deadline Webservice URL.

        """

        # Not all hosts can import this module.
        from maya import cmds
        deadline_settings = (
            render_instance.context.data
            ["system_settings"]
@ -6,8 +6,6 @@ import getpass
import attr
from datetime import datetime

import bpy

from openpype.lib import is_running_from_build
from openpype.pipeline import legacy_io
from openpype.pipeline.farm.tools import iter_expected_files

@ -142,6 +140,9 @@ class BlenderSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
        return job_info

    def get_plugin_info(self):
        # Not all hosts can import this module.
        import bpy

        plugin_info = BlenderPluginInfo(
            SceneFile=self.scene_path,
            Version=bpy.app.version_string,
@ -3,7 +3,6 @@ import json
from datetime import datetime

import requests
import hou

import pyblish.api

@ -31,6 +30,8 @@ class HoudiniSubmitPublishDeadline(pyblish.api.ContextPlugin):
    targets = ["deadline"]

    def process(self, context):
        # Not all hosts can import this module.
        import hou

        # Ensure no errors so far
        assert all(
@ -1,9 +1,8 @@
import hou

import os
import attr
import getpass
from datetime import datetime

import pyblish.api

from openpype.pipeline import legacy_io

@ -119,6 +118,8 @@ class HoudiniSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
        return job_info

    def get_plugin_info(self):
        # Not all hosts can import this module.
        import hou

        instance = self._instance
        context = instance.context
@ -1,8 +1,8 @@
import os
import getpass
import copy

import attr

from openpype.lib import (
    TextDef,
    BoolDef,

@ -15,11 +15,6 @@ from openpype.pipeline import (
from openpype.pipeline.publish.lib import (
    replace_with_published_scene_path
)
from openpype.hosts.max.api.lib import (
    get_current_renderer,
    get_multipass_setting
)
from openpype.hosts.max.api.lib_rendersettings import RenderSettings
from openpype_modules.deadline import abstract_submit_deadline
from openpype_modules.deadline.abstract_submit_deadline import DeadlineJobInfo
from openpype.lib import is_running_from_build

@ -191,6 +186,13 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
        self.submit(self.assemble_payload(job_info, plugin_info))

    def _use_published_name(self, data, project_settings):
        # Not all hosts can import these modules.
        from openpype.hosts.max.api.lib import (
            get_current_renderer,
            get_multipass_setting
        )
        from openpype.hosts.max.api.lib_rendersettings import RenderSettings

        instance = self._instance
        job_info = copy.deepcopy(self.job_info)
        plugin_info = copy.deepcopy(self.plugin_info)
@ -28,8 +28,6 @@ from collections import OrderedDict

import attr

from maya import cmds

from openpype.pipeline import (
    legacy_io,
    OpenPypePyblishPluginMixin

@ -246,6 +244,8 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
        return job_info

    def get_plugin_info(self):
        # Not all hosts can import this module.
        from maya import cmds

        instance = self._instance
        context = instance.context

@ -288,7 +288,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
        return plugin_payload

    def process_submission(self):

        from maya import cmds
        instance = self._instance

        filepath = self.scene_path  # publish if `use_publish` else workfile

@ -675,7 +675,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
            str

        """

        from maya import cmds
        # "vrayscene/<Scene>/<Scene>_<Layer>/<Layer>"
        vray_settings = cmds.ls(type="VRaySettingsNode")
        node = vray_settings[0]
@ -2,8 +2,6 @@ import os
import attr
from datetime import datetime

from maya import cmds

from openpype import AYON_SERVER_ENABLED
from openpype.pipeline import legacy_io, PublishXmlValidationError
from openpype.tests.lib import is_in_tests

@ -127,7 +125,8 @@ class MayaSubmitRemotePublishDeadline(
            job_info.EnvironmentKeyValue[key] = value

    def get_plugin_info(self):

        # Not all hosts can import this module.
        from maya import cmds
        scene = self._instance.context.data["currentFile"]

        plugin_info = MayaPluginInfo()
@ -7,8 +7,6 @@ from datetime import datetime
import requests
import pyblish.api

import nuke

from openpype import AYON_SERVER_ENABLED
from openpype.pipeline import legacy_io
from openpype.pipeline.publish import (

@ -498,6 +496,9 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
        Returning:
            list: captured groups list
        """
        # Not all hosts can import this module.
        import nuke

        captured_groups = []
        for lg_name, list_node_class in self.limit_groups.items():
            for node_class in list_node_class:
@ -1,3 +1,6 @@
|
|||
import collections
|
||||
|
||||
from openpype.client import get_project
|
||||
from openpype_modules.ftrack.lib import BaseEvent
|
||||
|
||||
|
||||
|
|
@ -73,8 +76,21 @@ class FirstVersionStatus(BaseEvent):
|
|||
if not self.task_status_map:
|
||||
return
|
||||
|
||||
entities_info = self.filter_event_ents(event)
|
||||
if not entities_info:
|
||||
filtered_entities_info = self.filter_entities_info(event)
|
||||
if not filtered_entities_info:
|
||||
return
|
||||
|
||||
for project_id, entities_info in filtered_entities_info.items():
|
||||
self.process_by_project(session, event, project_id, entities_info)
|
||||
|
||||
def process_by_project(self, session, event, project_id, entities_info):
|
||||
project_name = self.get_project_name_from_event(
|
||||
session, event, project_id
|
||||
)
|
||||
if get_project(project_name) is None:
|
||||
self.log.debug(
|
||||
f"Project '{project_name}' not found in OpenPype. Skipping"
|
||||
)
|
||||
return
|
||||
|
||||
entity_ids = []
|
||||
|
|
```diff
@@ -154,18 +170,18 @@ class FirstVersionStatus(BaseEvent):
                 exc_info=True
             )

-    def filter_event_ents(self, event):
-        filtered_ents = []
-        for entity in event["data"].get("entities", []):
+    def filter_entities_info(self, event):
+        filtered_entities_info = collections.defaultdict(list)
+        for entity_info in event["data"].get("entities", []):
             # Care only about add actions
-            if entity.get("action") != "add":
+            if entity_info.get("action") != "add":
                 continue

             # Filter AssetVersions
-            if entity["entityType"] != "assetversion":
+            if entity_info["entityType"] != "assetversion":
                 continue

-            entity_changes = entity.get("changes") or {}
+            entity_changes = entity_info.get("changes") or {}

             # Check if version of Asset Version is `1`
             version_num = entity_changes.get("version", {}).get("new")
```
```diff
@@ -177,9 +193,18 @@ class FirstVersionStatus(BaseEvent):
             if not task_id:
                 continue

-            filtered_ents.append(entity)
+            project_id = None
+            for parent_item in reversed(entity_info["parents"]):
+                if parent_item["entityType"] == "show":
+                    project_id = parent_item["entityId"]
+                    break

-        return filtered_ents
+            if project_id is None:
+                continue
+
+            filtered_entities_info[project_id].append(entity_info)
+
+        return filtered_entities_info


 def register(session):
```
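The refactor above replaces the flat `filtered_ents` list with entities grouped per project, so each project can be processed with its own settings. The grouping step in isolation, as a standalone sketch assuming the same ftrack event payload shape:

```python
import collections


def group_entities_by_project(event):
    """Group entity infos from an ftrack event by project id."""
    grouped = collections.defaultdict(list)
    for entity_info in event["data"].get("entities", []):
        # The "show" parent in the entity's parent chain identifies
        # the project the entity belongs to.
        project_id = None
        for parent_item in reversed(entity_info["parents"]):
            if parent_item["entityType"] == "show":
                project_id = parent_item["entityId"]
                break

        if project_id is None:
            continue
        grouped[project_id].append(entity_info)
    return grouped
```

Using `collections.defaultdict(list)` avoids the key-existence checks a plain dict would need before each append.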
```diff
@@ -1,4 +1,6 @@
 import collections
+
+from openpype.client import get_project
 from openpype_modules.ftrack.lib import BaseEvent
```
```diff
@@ -99,6 +101,10 @@ class NextTaskUpdate(BaseEvent):
         project_name = self.get_project_name_from_event(
             session, event, project_id
         )
+        if get_project(project_name) is None:
+            self.log.debug("Project not found in OpenPype. Skipping")
+            return
+
         # Load settings
         project_settings = self.get_project_settings_from_event(
             event, project_name
```
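This `get_project(project_name) is None` guard is added to each of the ftrack event handlers in this commit: projects that exist only in ftrack have no OpenPype counterpart, so the handlers bail out early instead of failing later in settings lookups. The shared idiom could be factored into a helper; a sketch (the helper name is hypothetical, `get_project` is the real `openpype.client` function):

```python
from openpype.client import get_project


def project_exists_in_openpype(log, project_name):
    """Return True when the ftrack project is synced to OpenPype."""
    if get_project(project_name) is None:
        log.debug("Project not found in OpenPype. Skipping")
        return False
    return True
```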
```diff
@@ -3,6 +3,8 @@ import copy
 from typing import Any

 import ftrack_api

+from openpype.client import get_project
 from openpype_modules.ftrack.lib import (
     BaseEvent,
     query_custom_attributes,
```
```diff
@@ -139,6 +141,10 @@ class PushHierValuesToNonHierEvent(BaseEvent):
         project_name: str = self.get_project_name_from_event(
             session, event, project_id
         )
+        if get_project(project_name) is None:
+            self.log.debug("Project not found in OpenPype. Skipping")
+            return set(), set()
+
         # Load settings
         project_settings: dict[str, Any] = (
             self.get_project_settings_from_event(event, project_name)
```
```diff
@@ -1,4 +1,6 @@
 import collections
+
+from openpype.client import get_project
 from openpype_modules.ftrack.lib import BaseEvent
```
```diff
@@ -60,6 +62,10 @@ class TaskStatusToParent(BaseEvent):
         project_name = self.get_project_name_from_event(
             session, event, project_id
         )
+        if get_project(project_name) is None:
+            self.log.debug("Project not found in OpenPype. Skipping")
+            return
+
         # Load settings
         project_settings = self.get_project_settings_from_event(
             event, project_name
```
```diff
@@ -1,4 +1,6 @@
 import collections
+
+from openpype.client import get_project
 from openpype_modules.ftrack.lib import BaseEvent
```
```diff
@@ -102,6 +104,10 @@ class TaskToVersionStatus(BaseEvent):
         project_name = self.get_project_name_from_event(
             session, event, project_id
         )
+        if get_project(project_name) is None:
+            self.log.debug("Project not found in OpenPype. Skipping")
+            return
+
         # Load settings
         project_settings = self.get_project_settings_from_event(
             event, project_name
```
```diff
@@ -1,4 +1,6 @@
 import collections
+
+from openpype.client import get_project
 from openpype_modules.ftrack.lib import BaseEvent
```
```diff
@@ -22,6 +24,10 @@ class ThumbnailEvents(BaseEvent):
         project_name = self.get_project_name_from_event(
             session, event, project_id
         )
+        if get_project(project_name) is None:
+            self.log.debug("Project not found in OpenPype. Skipping")
+            return
+
         # Load settings
         project_settings = self.get_project_settings_from_event(
             event, project_name
```
```diff
@@ -1,3 +1,4 @@
+from openpype.client import get_project
 from openpype_modules.ftrack.lib import BaseEvent
```
```diff
@@ -50,6 +51,10 @@ class VersionToTaskStatus(BaseEvent):
         project_name = self.get_project_name_from_event(
             session, event, project_id
         )
+        if get_project(project_name) is None:
+            self.log.debug("Project not found in OpenPype. Skipping")
+            return
+
         # Load settings
         project_settings = self.get_project_settings_from_event(
             event, project_name
```
```diff
@@ -5,7 +5,6 @@ import platform
 import collections
 import numbers

-import ayon_api
 import six
 import time

@@ -16,7 +15,7 @@ from openpype.settings.lib import (
 from openpype.settings.constants import (
     DEFAULT_PROJECT_KEY
 )
-from openpype.client import get_project
+from openpype.client import get_project, get_ayon_server_api_connection
 from openpype.lib import Logger, get_local_site_id
 from openpype.lib.path_templates import (
     TemplateUnsolved,

@@ -479,7 +478,8 @@ class Anatomy(BaseAnatomy):
         if AYON_SERVER_ENABLED:
             if not project_name:
                 return
-            return ayon_api.get_project_roots_for_site(
+            con = get_ayon_server_api_connection()
+            return con.get_project_roots_for_site(
                 project_name, get_local_site_id()
            )
```
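Instead of calling the module-level `ayon_api` functions, `Anatomy` now asks `get_ayon_server_api_connection()` for the shared `ServerAPI` instance, which already has the local site id and client version set. Roughly, the call site looks like this (the project name is a placeholder):

```python
from openpype.client import get_ayon_server_api_connection
from openpype.lib import get_local_site_id

# The shared connection is configured once; all callers reuse it.
con = get_ayon_server_api_connection()
roots = con.get_project_roots_for_site(
    "some_project", get_local_site_id()
)
```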
```diff
@@ -11,12 +11,14 @@ import pyblish.api
 from pyblish.lib import MessageHandler

 import openpype
+from openpype import AYON_SERVER_ENABLED
 from openpype.host import HostBase
 from openpype.client import (
     get_project,
     get_asset_by_id,
     get_asset_by_name,
     version_is_latest,
+    get_ayon_server_api_connection,
 )
 from openpype.lib.events import emit_event
 from openpype.modules import load_modules, ModulesManager

@@ -105,6 +107,10 @@ def install_host(host):

     _is_installed = True

+    # Make sure global AYON connection has set site id and version
+    if AYON_SERVER_ENABLED:
+        get_ayon_server_api_connection()
+
     legacy_io.install()
     modules_manager = _get_modules_manager()
```
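The addition to `install_host` touches the connection once during host installation, so the site id and client version are registered before any plugin performs a server call. Since the helper is expected to cache and return the same configured instance, later callers inherit that setup; a sketch of the assumed behavior:

```python
from openpype.client import get_ayon_server_api_connection

# Both calls should return the same, already configured connection
# object, which is why the early call in install_host() is enough.
con_a = get_ayon_server_api_connection()
con_b = get_ayon_server_api_connection()
assert con_a is con_b
```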
```diff
@@ -4,7 +4,7 @@ import logging

 from openpype import AYON_SERVER_ENABLED
 from openpype.lib import Logger
-from openpype.client import get_project
+from openpype.client import get_project, get_ayon_server_api_connection
 from . import legacy_io
 from .anatomy import Anatomy
 from .plugin_discover import (

@@ -153,8 +153,6 @@ class ServerThumbnailResolver(ThumbnailResolver):
         if not entity_type or not entity_id:
             return None

-        import ayon_api
-
         project_name = self.dbcon.active_project()
         thumbnail_id = thumbnail_entity["_id"]

@@ -169,7 +167,7 @@ class ServerThumbnailResolver(ThumbnailResolver):
         # NOTE Use 'get_server_api_connection' because public function
         # 'get_thumbnail_by_id' does not return output of 'ServerAPI'
         # method.
-        con = ayon_api.get_server_api_connection()
+        con = get_ayon_server_api_connection()
         if hasattr(con, "get_thumbnail_by_id"):
             result = con.get_thumbnail_by_id(thumbnail_id)
             if result.is_valid:
```
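The `hasattr` check above is a capability probe: older `ayon_api` releases may not expose `get_thumbnail_by_id` on `ServerAPI`, so the resolver degrades gracefully rather than comparing version strings. The idiom in isolation (`con` is any server connection object):

```python
def fetch_thumbnail(con, thumbnail_id):
    """Return thumbnail content if the API supports the newer call."""
    # Probe for the method instead of pinning an ayon_api version.
    if hasattr(con, "get_thumbnail_by_id"):
        result = con.get_thumbnail_by_id(thumbnail_id)
        if result.is_valid:
            return result
    return None
```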
```diff
@@ -5,15 +5,9 @@ Requires:
     masterLayer -> instance data attribute
     otioClipRange -> instance data attribute
 """
-# import os
-import opentimelineio as otio
-import pyblish.api
-from pprint import pformat
-from openpype.pipeline.editorial import (
-    get_media_range_with_retimes,
-    otio_range_to_frame_range,
-    otio_range_with_handles
-)
+
+import pyblish.api


 class CollectOtioFrameRanges(pyblish.api.InstancePlugin):
```
```diff
@@ -27,6 +21,14 @@ class CollectOtioFrameRanges(pyblish.api.InstancePlugin):
     hosts = ["resolve", "hiero", "flame", "traypublisher"]

     def process(self, instance):
+        # Not all hosts can import these modules.
+        import opentimelineio as otio
+        from openpype.pipeline.editorial import (
+            get_media_range_with_retimes,
+            otio_range_to_frame_range,
+            otio_range_with_handles
+        )
+
         # get basic variables
         otio_clip = instance.data["otioClip"]
         workfile_start = instance.data["workfileFrameStart"]
```
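The collector's deferred imports follow the same rule as the Deadline submitters: `opentimelineio` is not importable in every host, so it is loaded inside `process`. For reference, frame numbers are read off OTIO time ranges along these lines (values are made up):

```python
import opentimelineio as otio

# A 48-frame clip starting at frame 1001 at 24 fps.
start = otio.opentime.RationalTime(1001, 24.0)
duration = otio.opentime.RationalTime(48, 24.0)
clip_range = otio.opentime.TimeRange(start_time=start, duration=duration)

first_frame = clip_range.start_time.to_frames()
last_frame = clip_range.end_time_inclusive().to_frames()
print(first_frame, last_frame)  # 1001 1048
```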
Some files were not shown because too many files have changed in this diff.