Mirror of https://github.com/ynput/ayon-core.git (synced 2025-12-25 05:14:40 +01:00)

Merge branch 'develop' into feature/OP-4245Data_Exchange_Geometry

Commit 43453f3111 — 62 changed files with 1587 additions and 512 deletions.
.github/ISSUE_TEMPLATE/bug_report.yml (vendored) — 12 changes

@@ -35,6 +35,10 @@ body:
      label: Version
      description: What version are you running? Look to OpenPype Tray
      options:
        - 3.15.5
        - 3.15.5-nightly.2
        - 3.15.5-nightly.1
        - 3.15.4
        - 3.15.4-nightly.3
        - 3.15.4-nightly.2
        - 3.15.4-nightly.1

@@ -131,10 +135,6 @@ body:
        - 3.13.1-nightly.2
        - 3.13.1-nightly.1
        - 3.13.0
        - 3.13.0-nightly.1
        - 3.12.3-nightly.3
        - 3.12.3-nightly.2
        - 3.12.3-nightly.1
    validations:
      required: true
  - type: dropdown

@@ -166,8 +166,8 @@ body:
      label: Are there any labels you wish to add?
      description: Please search labels and identify those related to your bug.
      options:
        - label: I have added the relevant labels to the bug report.
          required: true
        - label: I have added the relevant labels to the bug report.
          required: true
  - type: textarea
    id: logs
    attributes:
.github/workflows/nightly_merge.yml (vendored) — 2 changes

@@ -25,5 +25,5 @@ jobs:
      - name: Invoke pre-release workflow
        uses: benc-uk/workflow-dispatch@v1
        with:
-         workflow: Nightly Prerelease
+         workflow: prerelease.yml
          token: ${{ secrets.YNPUT_BOT_TOKEN }}
.github/workflows/prerelease.yml (vendored) — 6 changes

@@ -65,3 +65,9 @@ jobs:
          source_ref: 'main'
          target_branch: 'develop'
          commit_message_template: '[Automated] Merged {source_ref} into {target_branch}'
+
+     - name: Invoke Update bug report workflow
+       uses: benc-uk/workflow-dispatch@v1
+       with:
+         workflow: update_bug_report.yml
+         token: ${{ secrets.YNPUT_BOT_TOKEN }}

.github/workflows/update_bug_report.yml (vendored) — 2 changes

@@ -18,6 +18,8 @@ jobs:
        uses: ynput/gha-populate-form-version@main
        with:
          github_token: ${{ secrets.YNPUT_BOT_TOKEN }}
          github_user: ${{ secrets.CI_USER }}
          github_email: ${{ secrets.CI_EMAIL }}
          registry: github
          dropdown: _version
          limit_to: 100
CHANGELOG.md — 303 changes

@@ -1,6 +1,309 @@

# Changelog

## [3.15.5](https://github.com/ynput/OpenPype/tree/3.15.5)

[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.4...3.15.5)

### **🚀 Enhancements**

<details>
<summary>Maya: Playblast profiles <a href="https://github.com/ynput/OpenPype/pull/4777">#4777</a></summary>

Support playblast profiles. This enables studios to customize which playblast settings should apply on a per-task and/or per-subset basis. For example, `modeling` should have `Wireframe On Shaded` enabled, while all other tasks should have it disabled.
</details>
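A minimal sketch of how this kind of per-task/per-subset profile matching can work. The profile structure, key names, and helper below are illustrative assumptions, not OpenPype's actual settings schema:

```python
def filter_playblast_profile(profiles, task_name, subset_name):
    """Return the first profile whose filters match, or None (illustrative)."""
    for profile in profiles:
        tasks = profile.get("task_names") or []
        subsets = profile.get("subset_names") or []
        # Empty filter lists are treated as "match anything".
        if tasks and task_name not in tasks:
            continue
        if subsets and subset_name not in subsets:
            continue
        return profile
    return None


# Hypothetical studio configuration: wireframe-on-shaded only for modeling.
profiles = [
    {"task_names": ["modeling"], "capture": {"wireframe_on_shaded": True}},
    {"task_names": [], "capture": {"wireframe_on_shaded": False}},
]

profile = filter_playblast_profile(profiles, "modeling", "renderMain")
print(profile["capture"])
```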
<details>
<summary>Maya: Support .abc files directly for Arnold standin look assignment <a href="https://github.com/ynput/OpenPype/pull/4856">#4856</a></summary>

If an `.abc` file is loaded into an Arnold standin, look assignment is supported through the `cbId` attributes stored in the alembic file.
</details>
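This is backed by the `get_alembic_ids_cache` helper added in this commit (see `openpype/hosts/maya/tools/mayalookassigner/alembic.py` further down). A short usage sketch, assuming a Maya session with an Arnold standin whose `.dso` attribute points at an Alembic file; the standin node name here is hypothetical:

```python
from maya import cmds

from openpype.hosts.maya.tools.mayalookassigner.alembic import (
    get_alembic_ids_cache,
)

# Hypothetical standin node; in practice it comes from the look assigner tool.
standin = "aiStandInShape1"
abc_path = cmds.getAttr(standin + ".dso")

if abc_path.endswith(".abc"):
    # Maps each cbId value to the Alembic node paths that carry it,
    # so published looks can be matched back to the standin contents.
    ids_to_nodes = get_alembic_ids_cache(abc_path)
    for _id, node_paths in ids_to_nodes.items():
        print(_id, node_paths)
```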
<details>
<summary>Maya: Hide animation instance in creator <a href="https://github.com/ynput/OpenPype/pull/4872">#4872</a></summary>

- Hide the animation instance in the creator.
- Add an inventory action to recreate the animation publish instance for loaded rigs.
</details>

<details>
<summary>Unreal: Render Creator enhancements <a href="https://github.com/ynput/OpenPype/pull/4477">#4477</a></summary>

<strong>Improvements to the creator for the render family.</strong> This PR introduces some enhancements to the creator for the render family in Unreal Engine:
- Added the option to create a new, empty sequence for the render.
- Added the option to not include the whole hierarchy for the selected sequence.
- Improved the error messages.
</details>

<details>
<summary>Unreal: Added settings for rendering <a href="https://github.com/ynput/OpenPype/pull/4575">#4575</a></summary>

<strong>Added settings for rendering in Unreal Engine.</strong> Two settings have been added:
- Pre-roll frames, to set how many frames are used to load the scene before the actual rendering starts.
- Configuration path, to allow saving a preset of settings from Unreal and using it for rendering.
</details>

<details>
<summary>Global: Optimize anatomy formatting by only formatting used templates instead <a href="https://github.com/ynput/OpenPype/pull/4784">#4784</a></summary>

Optimization to avoid formatting the full anatomy when only a single template is used; instead, only that single template is formatted.
</details>

<details>
<summary>Patchelf version locked <a href="https://github.com/ynput/OpenPype/pull/4853">#4853</a></summary>

For the CentOS dockerfile it is necessary to lock patchelf to an older version, otherwise the build process fails.
</details>

<details>
<summary>Houdini: Implement `switch` method on loaders <a href="https://github.com/ynput/OpenPype/pull/4866">#4866</a></summary>

Implement the `switch` method on loaders by delegating to each loader's existing `update` method.
</details>

<details>
<summary>Code: Tweak docstrings and return type hints <a href="https://github.com/ynput/OpenPype/pull/4875">#4875</a></summary>

Tweak docstrings and return type hints for functions in `openpype.client.entities`.
</details>

<details>
<summary>Publisher: Clear comment on successful publish and on window close <a href="https://github.com/ynput/OpenPype/pull/4885">#4885</a></summary>

Clear the comment text field on successful publish and on window close.
</details>

<details>
<summary>Publisher: Make sure to reset asset widget when hidden and reshown <a href="https://github.com/ynput/OpenPype/pull/4886">#4886</a></summary>

Make sure the asset widget is reset when it is hidden and reshown. Without this, the asset list in the set-asset widget would never refresh when changing context on an existing instance, so it would not show assets created after the first time that widget was launched.
</details>
### **🐛 Bug fixes**

<details>
<summary>Maya: Fix nested model instances. <a href="https://github.com/ynput/OpenPype/pull/4852">#4852</a></summary>

Fix nested model instances under a review instance, where data collection was not including "Display Lights" and "Focal Length".
</details>

<details>
<summary>Maya: Make default namespace naming backwards compatible <a href="https://github.com/ynput/OpenPype/pull/4873">#4873</a></summary>

Namespaces of loaded references are now _by default_ back to what they were before #4511.
</details>

<details>
<summary>Nuke: Legacy convertor skips deprecation warnings <a href="https://github.com/ynput/OpenPype/pull/4846">#4846</a></summary>

The Nuke legacy convertor was triggering a deprecated function, which produced a lot of log output and slowed down the whole process. The convertor now skips all nodes without `AVALON_TAB` to avoid the warnings.
</details>

<details>
<summary>3dsmax: move startup script logic to hook <a href="https://github.com/ynput/OpenPype/pull/4849">#4849</a></summary>

The startup script for OpenPype was interfering with the Open Last Workfile feature. Moving this logic from a simple command line argument in the Settings to a pre-launch hook fixes the order of command line arguments and makes both features work.
</details>

<details>
<summary>Maya: Don't change time slider ranges in `get_frame_range` <a href="https://github.com/ynput/OpenPype/pull/4858">#4858</a></summary>

Don't change the time slider ranges in `get_frame_range`; the function now only returns the asset's frame range data.
</details>
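This commit moves the time-slider side effects out of `get_frame_range` and into `reset_frame_range`; callers that still need the outer timeline values ask for them explicitly via the new `include_animation_range` keyword (see the Maya library diff further down). A small usage sketch, assuming a Maya session with a current project/asset context and that the function lives in `openpype.hosts.maya.api.lib` as in current OpenPype:

```python
from maya import cmds

from openpype.hosts.maya.api.lib import get_frame_range

# Pure query: no time slider changes happen here anymore.
frame_range = get_frame_range(include_animation_range=True)

# Applying the values is now an explicit, separate step
# (this mirrors what reset_frame_range() does internally).
cmds.playbackOptions(
    minTime=frame_range["frameStart"],
    maxTime=frame_range["frameEnd"],
    animationStartTime=frame_range["animationStart"],
    animationEndTime=frame_range["animationEnd"],
)
cmds.currentTime(frame_range["frameStart"])
```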
<details>
<summary>Maya: Looks - calculate hash for tx texture <a href="https://github.com/ynput/OpenPype/pull/4878">#4878</a></summary>

A texture hash is calculated for textures used in a published look and is used as a dictionary key. After recent changes this hash was not calculated for TX files, resulting in a `None` key in the dictionary and crashing publishing. This PR adds the texture hash for TX files to solve that issue.
</details>

<details>
<summary>Houdini: Collect `currentFile` context data separate from workfile instance <a href="https://github.com/ynput/OpenPype/pull/4883">#4883</a></summary>

Fix publishing without an active workfile instance, which failed due to missing `currentFile` data. `currentFile` is now collected into the context in Houdini through a context plugin, regardless of the active instances.
</details>

<details>
<summary>Nuke: fixed broken slate workflow once published on deadline <a href="https://github.com/ynput/OpenPype/pull/4887">#4887</a></summary>

The slate workflow now works as expected, and Validate Sequence Frames no longer raises once the slate frame is included.
</details>

<details>
<summary>Add fps as instance.data in collect review in Houdini. <a href="https://github.com/ynput/OpenPype/pull/4888">#4888</a></summary>

Fix the bug of failing to publish Extract Review in Houdini. Original error:

```python
  File "OpenPype\build\exe.win-amd64-3.9\openpype\plugins\publish\extract_review.py", line 516, in prepare_temp_data
    "fps": float(instance.data["fps"]),
KeyError: 'fps'
```
</details>
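The fix itself is a one-line addition to the Houdini review collector (shown later in this diff): the review instance copies the scene `fps` from the publish context so `ExtractReview.prepare_temp_data` can read it. A minimal sketch of that collector pattern; the plugin metadata below is a placeholder, not the real plugin's values:

```python
import pyblish.api


class CollectReviewFps(pyblish.api.InstancePlugin):
    """Copy the context fps onto review instances (illustrative sketch)."""

    # Order, label and families are assumptions for this sketch.
    order = pyblish.api.CollectorOrder + 0.1
    label = "Collect Review FPS"
    hosts = ["houdini"]
    families = ["review"]

    def process(self, instance):
        # Same assignment the bug fix adds to CollectHoudiniReviewData.
        instance.data["fps"] = instance.context.data["fps"]
```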
<details>
<summary>TrayPublisher: Fill missing data for instances with review <a href="https://github.com/ynput/OpenPype/pull/4891">#4891</a></summary>

Fill the required data on an instance in TrayPublisher if the instance has the review family. The data are required by ExtractReview, and a proper fix would be too complicated at this moment. The collector does for review instances what https://github.com/ynput/OpenPype/pull/4383 did.
</details>

<details>
<summary>Publisher: Keep track about current context and fix context selection widget <a href="https://github.com/ynput/OpenPype/pull/4892">#4892</a></summary>

Change the selected context to the current context on reset. Fix a bug when the context widget is re-enabled.
</details>

<details>
<summary>Scene inventory: Model refresh fix with cherry picking <a href="https://github.com/ynput/OpenPype/pull/4895">#4895</a></summary>

Fix a cherry-pick issue in the scene inventory model refresh.
</details>

<details>
<summary>Nuke: Pre-render and missing review flag on instance causing crash <a href="https://github.com/ynput/OpenPype/pull/4897">#4897</a></summary>

If an instance created in Nuke was missing the `review` flag, the collector crashed.
</details>
### **Merged pull requests**

<details>
<summary>After Effects: fix handles KeyError <a href="https://github.com/ynput/OpenPype/pull/4727">#4727</a></summary>

Sometimes when publishing with AE (we only saw this error on AE 2023), we got a KeyError for the handles in the "Collect Workfile" step. The handles are now taken from the context if there are no handles in the asset entity.
</details>
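A minimal sketch of that fallback, assuming the collector has the asset entity's data dict and the pyblish context data at hand; the function name and arguments here are illustrative, not the actual plugin code:

```python
def get_handles(asset_data, context_data):
    """Prefer the asset's handles, fall back to the context values."""
    handle_start = asset_data.get("handleStart")
    if handle_start is None:
        handle_start = context_data.get("handleStart", 0)

    handle_end = asset_data.get("handleEnd")
    if handle_end is None:
        handle_end = context_data.get("handleEnd", 0)

    return handle_start, handle_end
```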
## [3.15.4](https://github.com/ynput/OpenPype/tree/3.15.4)
@@ -52,7 +52,7 @@ RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.n
 # we need to build our own patchelf
 WORKDIR /temp-patchelf
-RUN git clone https://github.com/NixOS/patchelf.git . \
+RUN git clone -b 0.17.0 --single-branch https://github.com/NixOS/patchelf.git . \
     && source scl_source enable devtoolset-7 \
     && ./bootstrap.sh \
     && ./configure \
@@ -69,6 +69,19 @@ def convert_ids(in_ids):


def get_projects(active=True, inactive=False, fields=None):
    """Yield all project entity documents.

    Args:
        active (Optional[bool]): Include active projects. Defaults to True.
        inactive (Optional[bool]): Include inactive projects.
            Defaults to False.
        fields (Optional[Iterable[str]]): Fields that should be returned. All
            fields are returned if 'None' is passed.

    Yields:
        dict: Project entity data which can be reduced to specified 'fields'.
            None is returned if project with specified filters was not found.
    """
    mongodb = get_project_database()
    for project_name in mongodb.collection_names():
        if project_name in ("system.indexes",):

@@ -81,6 +94,20 @@ def get_projects(active=True, inactive=False, fields=None):


def get_project(project_name, active=True, inactive=True, fields=None):
    """Return project entity document by project name.

    Args:
        project_name (str): Name of project.
        active (Optional[bool]): Allow active project. Defaults to True.
        inactive (Optional[bool]): Allow inactive project. Defaults to True.
        fields (Optional[Iterable[str]]): Fields that should be returned. All
            fields are returned if 'None' is passed.

    Returns:
        Union[Dict, None]: Project entity data which can be reduced to
            specified 'fields'. None is returned if project with specified
            filters was not found.
    """
    # Skip if both are disabled
    if not active and not inactive:
        return None
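A short usage sketch of these query helpers, assuming they are re-exported from `openpype.client` as in current OpenPype; the project name below is hypothetical:

```python
from openpype.client import get_project, get_projects

# Reduce the returned document to only the fields we need.
project = get_project("my_project", fields=["name", "data"])
if project is None:
    print("Project not found or filtered out")

# Iterate all active projects (inactive ones are skipped by default).
for doc in get_projects(fields=["name"]):
    print(doc["name"])
```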
@ -124,17 +151,18 @@ def get_whole_project(project_name):
|
|||
|
||||
|
||||
def get_asset_by_id(project_name, asset_id, fields=None):
|
||||
"""Receive asset data by it's id.
|
||||
"""Receive asset data by its id.
|
||||
|
||||
Args:
|
||||
project_name (str): Name of project where to look for queried entities.
|
||||
asset_id (Union[str, ObjectId]): Asset's id.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
dict: Asset entity data.
|
||||
None: Asset was not found by id.
|
||||
Union[Dict, None]: Asset entity data which can be reduced to
|
||||
specified 'fields'. None is returned if asset with specified
|
||||
filters was not found.
|
||||
"""
|
||||
|
||||
asset_id = convert_id(asset_id)
|
||||
|
|
@ -147,17 +175,18 @@ def get_asset_by_id(project_name, asset_id, fields=None):
|
|||
|
||||
|
||||
def get_asset_by_name(project_name, asset_name, fields=None):
|
||||
"""Receive asset data by it's name.
|
||||
"""Receive asset data by its name.
|
||||
|
||||
Args:
|
||||
project_name (str): Name of project where to look for queried entities.
|
||||
asset_name (str): Asset's name.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
dict: Asset entity data.
|
||||
None: Asset was not found by name.
|
||||
Union[Dict, None]: Asset entity data which can be reduced to
|
||||
specified 'fields'. None is returned if asset with specified
|
||||
filters was not found.
|
||||
"""
|
||||
|
||||
if not asset_name:
|
||||
|
|
@ -195,8 +224,8 @@ def _get_assets(
|
|||
parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
|
||||
standard (bool): Query standard assets (type 'asset').
|
||||
archived (bool): Query archived assets (type 'archived_asset').
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
Cursor: Query cursor as iterable which returns asset documents matching
|
||||
|
|
@ -261,8 +290,8 @@ def get_assets(
|
|||
asset_names (Iterable[str]): Name assets that should be found.
|
||||
parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
|
||||
archived (bool): Add also archived assets.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
Cursor: Query cursor as iterable which returns asset documents matching
|
||||
|
|
@ -300,8 +329,8 @@ def get_archived_assets(
|
|||
be found.
|
||||
asset_names (Iterable[str]): Name assets that should be found.
|
||||
parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
Cursor: Query cursor as iterable which returns asset documents matching
|
||||
|
|
@ -356,17 +385,18 @@ def get_asset_ids_with_subsets(project_name, asset_ids=None):
|
|||
|
||||
|
||||
def get_subset_by_id(project_name, subset_id, fields=None):
|
||||
"""Single subset entity data by it's id.
|
||||
"""Single subset entity data by its id.
|
||||
|
||||
Args:
|
||||
project_name (str): Name of project where to look for queried entities.
|
||||
subset_id (Union[str, ObjectId]): Id of subset which should be found.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
None: If subset with specified filters was not found.
|
||||
Dict: Subset document which can be reduced to specified 'fields'.
|
||||
Union[Dict, None]: Subset entity data which can be reduced to
|
||||
specified 'fields'. None is returned if subset with specified
|
||||
filters was not found.
|
||||
"""
|
||||
|
||||
subset_id = convert_id(subset_id)
|
||||
|
|
@ -379,20 +409,19 @@ def get_subset_by_id(project_name, subset_id, fields=None):
|
|||
|
||||
|
||||
def get_subset_by_name(project_name, subset_name, asset_id, fields=None):
|
||||
"""Single subset entity data by it's name and it's version id.
|
||||
"""Single subset entity data by its name and its version id.
|
||||
|
||||
Args:
|
||||
project_name (str): Name of project where to look for queried entities.
|
||||
subset_name (str): Name of subset.
|
||||
asset_id (Union[str, ObjectId]): Id of parent asset.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
Union[None, Dict[str, Any]]: None if subset with specified filters was
|
||||
not found or dict subset document which can be reduced to
|
||||
specified 'fields'.
|
||||
|
||||
Union[Dict, None]: Subset entity data which can be reduced to
|
||||
specified 'fields'. None is returned if subset with specified
|
||||
filters was not found.
|
||||
"""
|
||||
if not subset_name:
|
||||
return None
|
||||
|
|
@ -434,8 +463,8 @@ def get_subsets(
|
|||
names_by_asset_ids (dict[ObjectId, List[str]]): Complex filtering
|
||||
using asset ids and list of subset names under the asset.
|
||||
archived (bool): Look for archived subsets too.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
Cursor: Iterable cursor yielding all matching subsets.
|
||||
|
|
@ -520,17 +549,18 @@ def get_subset_families(project_name, subset_ids=None):
|
|||
|
||||
|
||||
def get_version_by_id(project_name, version_id, fields=None):
|
||||
"""Single version entity data by it's id.
|
||||
"""Single version entity data by its id.
|
||||
|
||||
Args:
|
||||
project_name (str): Name of project where to look for queried entities.
|
||||
version_id (Union[str, ObjectId]): Id of version which should be found.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
None: If version with specified filters was not found.
|
||||
Dict: Version document which can be reduced to specified 'fields'.
|
||||
Union[Dict, None]: Version entity data which can be reduced to
|
||||
specified 'fields'. None is returned if version with specified
|
||||
filters was not found.
|
||||
"""
|
||||
|
||||
version_id = convert_id(version_id)
|
||||
|
|
@ -546,18 +576,19 @@ def get_version_by_id(project_name, version_id, fields=None):
|
|||
|
||||
|
||||
def get_version_by_name(project_name, version, subset_id, fields=None):
|
||||
"""Single version entity data by it's name and subset id.
|
||||
"""Single version entity data by its name and subset id.
|
||||
|
||||
Args:
|
||||
project_name (str): Name of project where to look for queried entities.
|
||||
version (int): name of version entity (it's version).
|
||||
version (int): name of version entity (its version).
|
||||
subset_id (Union[str, ObjectId]): Id of version which should be found.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
None: If version with specified filters was not found.
|
||||
Dict: Version document which can be reduced to specified 'fields'.
|
||||
Union[Dict, None]: Version entity data which can be reduced to
|
||||
specified 'fields'. None is returned if version with specified
|
||||
filters was not found.
|
||||
"""
|
||||
|
||||
subset_id = convert_id(subset_id)
|
||||
|
|
@ -574,7 +605,7 @@ def get_version_by_name(project_name, version, subset_id, fields=None):
|
|||
|
||||
|
||||
def version_is_latest(project_name, version_id):
|
||||
"""Is version the latest from it's subset.
|
||||
"""Is version the latest from its subset.
|
||||
|
||||
Note:
|
||||
Hero versions are considered as latest.
|
||||
|
|
@ -680,8 +711,8 @@ def get_versions(
|
|||
versions (Iterable[int]): Version names (as integers).
|
||||
Filter ignored if 'None' is passed.
|
||||
hero (bool): Look also for hero versions.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
Cursor: Iterable cursor yielding all matching versions.
|
||||
|
|
@ -705,12 +736,13 @@ def get_hero_version_by_subset_id(project_name, subset_id, fields=None):
|
|||
project_name (str): Name of project where to look for queried entities.
|
||||
subset_id (Union[str, ObjectId]): Subset id under which
|
||||
is hero version.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
None: If hero version for passed subset id does not exists.
|
||||
Dict: Hero version entity data.
|
||||
Union[Dict, None]: Hero version entity data which can be reduced to
|
||||
specified 'fields'. None is returned if hero version with specified
|
||||
filters was not found.
|
||||
"""
|
||||
|
||||
subset_id = convert_id(subset_id)
|
||||
|
|
@ -730,17 +762,18 @@ def get_hero_version_by_subset_id(project_name, subset_id, fields=None):
|
|||
|
||||
|
||||
def get_hero_version_by_id(project_name, version_id, fields=None):
|
||||
"""Hero version by it's id.
|
||||
"""Hero version by its id.
|
||||
|
||||
Args:
|
||||
project_name (str): Name of project where to look for queried entities.
|
||||
version_id (Union[str, ObjectId]): Hero version id.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
None: If hero version with passed id was not found.
|
||||
Dict: Hero version entity data.
|
||||
Union[Dict, None]: Hero version entity data which can be reduced to
|
||||
specified 'fields'. None is returned if hero version with specified
|
||||
filters was not found.
|
||||
"""
|
||||
|
||||
version_id = convert_id(version_id)
|
||||
|
|
@ -773,8 +806,8 @@ def get_hero_versions(
|
|||
should look for hero versions. Filter ignored if 'None' is passed.
|
||||
version_ids (Iterable[Union[str, ObjectId]]): Hero version ids. Filter
|
||||
ignored if 'None' is passed.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
Cursor|list: Iterable yielding hero versions matching passed filters.
|
||||
|
|
@ -801,8 +834,8 @@ def get_output_link_versions(project_name, version_id, fields=None):
|
|||
project_name (str): Name of project where to look for queried entities.
|
||||
version_id (Union[str, ObjectId]): Version id which can be used
|
||||
as input link for other versions.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
Iterable: Iterable cursor yielding versions that are used as input
|
||||
|
|
@ -828,8 +861,8 @@ def get_last_versions(project_name, subset_ids, fields=None):
|
|||
Args:
|
||||
project_name (str): Name of project where to look for queried entities.
|
||||
subset_ids (Iterable[Union[str, ObjectId]]): List of subset ids.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
dict[ObjectId, int]: Key is subset id and value is last version name.
|
||||
|
|
@ -913,12 +946,13 @@ def get_last_version_by_subset_id(project_name, subset_id, fields=None):
|
|||
Args:
|
||||
project_name (str): Name of project where to look for queried entities.
|
||||
subset_id (Union[str, ObjectId]): Id of version which should be found.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
None: If version with specified filters was not found.
|
||||
Dict: Version document which can be reduced to specified 'fields'.
|
||||
Union[Dict, None]: Version entity data which can be reduced to
|
||||
specified 'fields'. None is returned if version with specified
|
||||
filters was not found.
|
||||
"""
|
||||
|
||||
subset_id = convert_id(subset_id)
|
||||
|
|
@ -945,12 +979,13 @@ def get_last_version_by_subset_name(
|
|||
asset_id (Union[str, ObjectId]): Asset id which is parent of passed
|
||||
subset name.
|
||||
asset_name (str): Asset name which is parent of passed subset name.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
None: If version with specified filters was not found.
|
||||
Dict: Version document which can be reduced to specified 'fields'.
|
||||
Union[Dict, None]: Version entity data which can be reduced to
|
||||
specified 'fields'. None is returned if version with specified
|
||||
filters was not found.
|
||||
"""
|
||||
|
||||
if not asset_id and not asset_name:
|
||||
|
|
@ -972,18 +1007,18 @@ def get_last_version_by_subset_name(
|
|||
|
||||
|
||||
def get_representation_by_id(project_name, representation_id, fields=None):
|
||||
"""Representation entity data by it's id.
|
||||
"""Representation entity data by its id.
|
||||
|
||||
Args:
|
||||
project_name (str): Name of project where to look for queried entities.
|
||||
representation_id (Union[str, ObjectId]): Representation id.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
None: If representation with specified filters was not found.
|
||||
Dict: Representation entity data which can be reduced
|
||||
to specified 'fields'.
|
||||
Union[Dict, None]: Representation entity data which can be reduced to
|
||||
specified 'fields'. None is returned if representation with
|
||||
specified filters was not found.
|
||||
"""
|
||||
|
||||
if not representation_id:
|
||||
|
|
@ -1004,19 +1039,19 @@ def get_representation_by_id(project_name, representation_id, fields=None):
|
|||
def get_representation_by_name(
|
||||
project_name, representation_name, version_id, fields=None
|
||||
):
|
||||
"""Representation entity data by it's name and it's version id.
|
||||
"""Representation entity data by its name and its version id.
|
||||
|
||||
Args:
|
||||
project_name (str): Name of project where to look for queried entities.
|
||||
representation_name (str): Representation name.
|
||||
version_id (Union[str, ObjectId]): Id of parent version entity.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
None: If representation with specified filters was not found.
|
||||
Dict: Representation entity data which can be reduced
|
||||
to specified 'fields'.
|
||||
Union[dict[str, Any], None]: Representation entity data which can be
|
||||
reduced to specified 'fields'. None is returned if representation
|
||||
with specified filters was not found.
|
||||
"""
|
||||
|
||||
version_id = convert_id(version_id)
|
||||
|
|
@ -1202,8 +1237,8 @@ def get_representations(
|
|||
names_by_version_ids (dict[ObjectId, list[str]]): Complex filtering
|
||||
using version ids and list of names under the version.
|
||||
archived (bool): Output will also contain archived representations.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
Cursor: Iterable cursor yielding all matching representations.
|
||||
|
|
@ -1247,8 +1282,8 @@ def get_archived_representations(
|
|||
representation context fields.
|
||||
names_by_version_ids (dict[ObjectId, List[str]]): Complex filtering
|
||||
using version ids and list of names under the version.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
Cursor: Iterable cursor yielding all matching representations.
|
||||
|
|
@ -1377,8 +1412,8 @@ def get_thumbnail_id_from_source(project_name, src_type, src_id):
|
|||
src_id (Union[str, ObjectId]): Id of source entity.
|
||||
|
||||
Returns:
|
||||
ObjectId: Thumbnail id assigned to entity.
|
||||
None: If Source entity does not have any thumbnail id assigned.
|
||||
Union[ObjectId, None]: Thumbnail id assigned to entity. If Source
|
||||
entity does not have any thumbnail id assigned.
|
||||
"""
|
||||
|
||||
if not src_type or not src_id:
|
||||
|
|
@ -1397,14 +1432,14 @@ def get_thumbnails(project_name, thumbnail_ids, fields=None):
|
|||
"""Receive thumbnails entity data.
|
||||
|
||||
Thumbnail entity can be used to receive binary content of thumbnail based
|
||||
on it's content and ThumbnailResolvers.
|
||||
on its content and ThumbnailResolvers.
|
||||
|
||||
Args:
|
||||
project_name (str): Name of project where to look for queried entities.
|
||||
thumbnail_ids (Iterable[Union[str, ObjectId]]): Ids of thumbnail
|
||||
entities.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
cursor: Cursor of queried documents.
|
||||
|
|
@ -1429,12 +1464,13 @@ def get_thumbnail(project_name, thumbnail_id, fields=None):
|
|||
Args:
|
||||
project_name (str): Name of project where to look for queried entities.
|
||||
thumbnail_id (Union[str, ObjectId]): Id of thumbnail entity.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
None: If thumbnail with specified id was not found.
|
||||
Dict: Thumbnail entity data which can be reduced to specified 'fields'.
|
||||
Union[Dict, None]: Thumbnail entity data which can be reduced to
|
||||
specified 'fields'.None is returned if thumbnail with specified
|
||||
filters was not found.
|
||||
"""
|
||||
|
||||
if not thumbnail_id:
|
||||
|
|
@ -1458,8 +1494,13 @@ def get_workfile_info(
|
|||
project_name (str): Name of project where to look for queried entities.
|
||||
asset_id (Union[str, ObjectId]): Id of asset entity.
|
||||
task_name (str): Task name on asset.
|
||||
fields (Iterable[str]): Fields that should be returned. All fields are
|
||||
returned if 'None' is passed.
|
||||
fields (Optional[Iterable[str]]): Fields that should be returned. All
|
||||
fields are returned if 'None' is passed.
|
||||
|
||||
Returns:
|
||||
Union[Dict, None]: Workfile entity data which can be reduced to
|
||||
specified 'fields'.None is returned if workfile with specified
|
||||
filters was not found.
|
||||
"""
|
||||
|
||||
if not asset_id or not task_name or not filename:
|
||||
|
|
|
|||
|
|
@ -1,7 +1,5 @@
|
|||
import os
|
||||
|
||||
import qtawesome
|
||||
|
||||
from openpype.hosts.fusion.api import (
|
||||
get_current_comp,
|
||||
comp_lock_and_undo_chunk,
|
||||
|
|
@ -28,6 +26,7 @@ class CreateSaver(Creator):
|
|||
family = "render"
|
||||
default_variants = ["Main", "Mask"]
|
||||
description = "Fusion Saver to generate image sequence"
|
||||
icon = "fa5.eye"
|
||||
|
||||
instance_attributes = ["reviewable"]
|
||||
|
||||
|
|
@ -89,9 +88,6 @@ class CreateSaver(Creator):
|
|||
|
||||
self._add_instance_to_context(created_instance)
|
||||
|
||||
def get_icon(self):
|
||||
return qtawesome.icon("fa.eye", color="white")
|
||||
|
||||
def update_instances(self, update_list):
|
||||
for created_inst, _changes in update_list:
|
||||
new_data = created_inst.data_to_store()
|
||||
|
|
|
|||
|
|
@ -1,5 +1,3 @@
|
|||
import qtawesome
|
||||
|
||||
from openpype.hosts.fusion.api import (
|
||||
get_current_comp
|
||||
)
|
||||
|
|
@ -15,6 +13,7 @@ class FusionWorkfileCreator(AutoCreator):
|
|||
identifier = "workfile"
|
||||
family = "workfile"
|
||||
label = "Workfile"
|
||||
icon = "fa5.file"
|
||||
|
||||
default_variant = "Main"
|
||||
|
||||
|
|
@ -104,6 +103,3 @@ class FusionWorkfileCreator(AutoCreator):
|
|||
existing_instance["asset"] = asset_name
|
||||
existing_instance["task"] = task_name
|
||||
existing_instance["subset"] = subset_name
|
||||
|
||||
def get_icon(self):
|
||||
return qtawesome.icon("fa.file-o", color="white")
|
||||
|
|
|
|||
|
|
@ -104,3 +104,6 @@ class AbcLoader(load.LoaderPlugin):
|
|||
|
||||
node = container["node"]
|
||||
node.destroy()
|
||||
|
||||
def switch(self, container, representation):
|
||||
self.update(container, representation)
|
||||
|
|
|
|||
|
|
@ -73,3 +73,6 @@ class AbcArchiveLoader(load.LoaderPlugin):
|
|||
|
||||
node = container["node"]
|
||||
node.destroy()
|
||||
|
||||
def switch(self, container, representation):
|
||||
self.update(container, representation)
|
||||
|
|
|
|||
|
|
@ -106,3 +106,6 @@ class BgeoLoader(load.LoaderPlugin):
|
|||
|
||||
node = container["node"]
|
||||
node.destroy()
|
||||
|
||||
def switch(self, container, representation):
|
||||
self.update(container, representation)
|
||||
|
|
|
|||
|
|
@ -192,3 +192,6 @@ class CameraLoader(load.LoaderPlugin):
|
|||
|
||||
new_node.moveToGoodPosition()
|
||||
return new_node
|
||||
|
||||
def switch(self, container, representation):
|
||||
self.update(container, representation)
|
||||
|
|
|
|||
|
|
@ -125,3 +125,6 @@ class ImageLoader(load.LoaderPlugin):
|
|||
prefix, padding, suffix = first_fname.rsplit(".", 2)
|
||||
fname = ".".join([prefix, "$F{}".format(len(padding)), suffix])
|
||||
return os.path.join(root, fname).replace("\\", "/")
|
||||
|
||||
def switch(self, container, representation):
|
||||
self.update(container, representation)
|
||||
|
|
|
|||
|
|
@ -79,3 +79,6 @@ class USDSublayerLoader(load.LoaderPlugin):
|
|||
|
||||
node = container["node"]
|
||||
node.destroy()
|
||||
|
||||
def switch(self, container, representation):
|
||||
self.update(container, representation)
|
||||
|
|
|
|||
|
|
@ -79,3 +79,6 @@ class USDReferenceLoader(load.LoaderPlugin):
|
|||
|
||||
node = container["node"]
|
||||
node.destroy()
|
||||
|
||||
def switch(self, container, representation):
|
||||
self.update(container, representation)
|
||||
|
|
|
|||
|
|
@ -102,3 +102,6 @@ class VdbLoader(load.LoaderPlugin):
|
|||
|
||||
node = container["node"]
|
||||
node.destroy()
|
||||
|
||||
def switch(self, container, representation):
|
||||
self.update(container, representation)
|
||||
|
|
|
|||
|
|
@@ -4,15 +4,14 @@ import hou
 import pyblish.api


-class CollectHoudiniCurrentFile(pyblish.api.InstancePlugin):
+class CollectHoudiniCurrentFile(pyblish.api.ContextPlugin):
     """Inject the current working file into context"""

-    order = pyblish.api.CollectorOrder - 0.01
+    order = pyblish.api.CollectorOrder - 0.1
     label = "Houdini Current File"
     hosts = ["houdini"]
-    families = ["workfile"]

-    def process(self, instance):
+    def process(self, context):
         """Inject the current working file"""

         current_file = hou.hipFile.path()

@@ -34,26 +33,5 @@ class CollectHoudiniCurrentFile(pyblish.api.InstancePlugin):
                 "saved correctly."
             )

-        instance.context.data["currentFile"] = current_file
-
-        folder, file = os.path.split(current_file)
-        filename, ext = os.path.splitext(file)
-
-        instance.data.update({
-            "setMembers": [current_file],
-            "frameStart": instance.context.data['frameStart'],
-            "frameEnd": instance.context.data['frameEnd'],
-            "handleStart": instance.context.data['handleStart'],
-            "handleEnd": instance.context.data['handleEnd']
-        })
-
-        instance.data['representations'] = [{
-            'name': ext.lstrip("."),
-            'ext': ext.lstrip("."),
-            'files': file,
-            "stagingDir": folder,
-        }]
-
-        self.log.info('Collected instance: {}'.format(file))
-        self.log.info('Scene path: {}'.format(current_file))
-        self.log.info('staging Dir: {}'.format(folder))
+        context.data["currentFile"] = current_file
+        self.log.info('Current workfile path: {}'.format(current_file))

@@ -17,6 +17,7 @@ class CollectHoudiniReviewData(pyblish.api.InstancePlugin):
         # which isn't the actual frame range that this instance renders.
         instance.data["handleStart"] = 0
         instance.data["handleEnd"] = 0
+        instance.data["fps"] = instance.context.data["fps"]

         # Get the camera from the rop node to collect the focal length
         ropnode_path = instance.data["instance_node"]
openpype/hosts/houdini/plugins/publish/collect_workfile.py (new file) — 36 lines

@@ -0,0 +1,36 @@
import os

import pyblish.api


class CollectWorkfile(pyblish.api.InstancePlugin):
    """Inject workfile representation into instance"""

    order = pyblish.api.CollectorOrder - 0.01
    label = "Houdini Workfile Data"
    hosts = ["houdini"]
    families = ["workfile"]

    def process(self, instance):

        current_file = instance.context.data["currentFile"]
        folder, file = os.path.split(current_file)
        filename, ext = os.path.splitext(file)

        instance.data.update({
            "setMembers": [current_file],
            "frameStart": instance.context.data['frameStart'],
            "frameEnd": instance.context.data['frameEnd'],
            "handleStart": instance.context.data['handleStart'],
            "handleEnd": instance.context.data['handleEnd']
        })

        instance.data['representations'] = [{
            'name': ext.lstrip("."),
            'ext': ext.lstrip("."),
            'files': file,
            "stagingDir": folder,
        }]

        self.log.info('Collected instance: {}'.format(file))
        self.log.info('staging Dir: {}'.format(folder))
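With `currentFile` collected by a context plugin and the workfile representation by this instance plugin, the context data is available whether or not a workfile instance is active. A hedged sanity-check sketch using plain pyblish registration; it assumes a running Houdini session with the OpenPype publish host configured, and the plugin directory path is taken from the file paths in this commit:

```python
import os

import pyblish.api
import pyblish.util

# Register the Houdini publish plugins directory from this commit.
plugins_dir = os.path.join(
    "openpype", "hosts", "houdini", "plugins", "publish"
)
pyblish.api.register_plugin_path(plugins_dir)

# Collection runs the context plugin regardless of active instances,
# so "currentFile" should be present afterwards.
context = pyblish.util.collect()
print(context.data.get("currentFile"))
```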
|
@ -32,6 +32,10 @@ from openpype.pipeline import (
|
|||
load_container,
|
||||
registered_host,
|
||||
)
|
||||
from openpype.pipeline.create import (
|
||||
legacy_create,
|
||||
get_legacy_creator_by_name,
|
||||
)
|
||||
from openpype.pipeline.context_tools import (
|
||||
get_current_asset_name,
|
||||
get_current_project_asset,
|
||||
|
|
@ -2153,17 +2157,23 @@ def set_scene_resolution(width, height, pixelAspect):
|
|||
cmds.setAttr("%s.pixelAspect" % control_node, pixelAspect)
|
||||
|
||||
|
||||
def get_frame_range():
|
||||
"""Get the current assets frame range and handles."""
|
||||
def get_frame_range(include_animation_range=False):
|
||||
"""Get the current assets frame range and handles.
|
||||
|
||||
Args:
|
||||
include_animation_range (bool, optional): Whether to include
|
||||
`animationStart` and `animationEnd` keys to define the outer
|
||||
range of the timeline. It is excluded by default.
|
||||
|
||||
Returns:
|
||||
dict: Asset's expected frame range values.
|
||||
|
||||
"""
|
||||
|
||||
# Set frame start/end
|
||||
project_name = get_current_project_name()
|
||||
task_name = get_current_task_name()
|
||||
asset_name = get_current_asset_name()
|
||||
asset = get_asset_by_name(project_name, asset_name)
|
||||
settings = get_project_settings(project_name)
|
||||
include_handles_settings = settings["maya"]["include_handles"]
|
||||
current_task = asset.get("data").get("tasks").get(task_name)
|
||||
|
||||
frame_start = asset["data"].get("frameStart")
|
||||
frame_end = asset["data"].get("frameEnd")
|
||||
|
|
@ -2175,32 +2185,39 @@ def get_frame_range():
|
|||
handle_start = asset["data"].get("handleStart") or 0
|
||||
handle_end = asset["data"].get("handleEnd") or 0
|
||||
|
||||
animation_start = frame_start
|
||||
animation_end = frame_end
|
||||
|
||||
include_handles = include_handles_settings["include_handles_default"]
|
||||
for item in include_handles_settings["per_task_type"]:
|
||||
if current_task["type"] in item["task_type"]:
|
||||
include_handles = item["include_handles"]
|
||||
break
|
||||
if include_handles:
|
||||
animation_start -= int(handle_start)
|
||||
animation_end += int(handle_end)
|
||||
|
||||
cmds.playbackOptions(
|
||||
minTime=frame_start,
|
||||
maxTime=frame_end,
|
||||
animationStartTime=animation_start,
|
||||
animationEndTime=animation_end
|
||||
)
|
||||
cmds.currentTime(frame_start)
|
||||
|
||||
return {
|
||||
frame_range = {
|
||||
"frameStart": frame_start,
|
||||
"frameEnd": frame_end,
|
||||
"handleStart": handle_start,
|
||||
"handleEnd": handle_end
|
||||
}
|
||||
if include_animation_range:
|
||||
# The animation range values are only included to define whether
|
||||
# the Maya time slider should include the handles or not.
|
||||
# Some usages of this function use the full dictionary to define
|
||||
# instance attributes for which we want to exclude the animation
|
||||
# keys. That is why these are excluded by default.
|
||||
task_name = get_current_task_name()
|
||||
settings = get_project_settings(project_name)
|
||||
include_handles_settings = settings["maya"]["include_handles"]
|
||||
current_task = asset.get("data").get("tasks").get(task_name)
|
||||
|
||||
animation_start = frame_start
|
||||
animation_end = frame_end
|
||||
|
||||
include_handles = include_handles_settings["include_handles_default"]
|
||||
for item in include_handles_settings["per_task_type"]:
|
||||
if current_task["type"] in item["task_type"]:
|
||||
include_handles = item["include_handles"]
|
||||
break
|
||||
if include_handles:
|
||||
animation_start -= int(handle_start)
|
||||
animation_end += int(handle_end)
|
||||
|
||||
frame_range["animationStart"] = animation_start
|
||||
frame_range["animationEnd"] = animation_end
|
||||
|
||||
return frame_range
|
||||
|
||||
|
||||
def reset_frame_range(playback=True, render=True, fps=True):
|
||||
|
|
@ -2219,18 +2236,23 @@ def reset_frame_range(playback=True, render=True, fps=True):
|
|||
)
|
||||
set_scene_fps(fps)
|
||||
|
||||
frame_range = get_frame_range()
|
||||
frame_range = get_frame_range(include_animation_range=True)
|
||||
if not frame_range:
|
||||
# No frame range data found for asset
|
||||
return
|
||||
|
||||
frame_start = frame_range["frameStart"] - int(frame_range["handleStart"])
|
||||
frame_end = frame_range["frameEnd"] + int(frame_range["handleEnd"])
|
||||
frame_start = frame_range["frameStart"]
|
||||
frame_end = frame_range["frameEnd"]
|
||||
animation_start = frame_range["animationStart"]
|
||||
animation_end = frame_range["animationEnd"]
|
||||
|
||||
if playback:
|
||||
cmds.playbackOptions(minTime=frame_start)
|
||||
cmds.playbackOptions(maxTime=frame_end)
|
||||
cmds.playbackOptions(animationStartTime=frame_start)
|
||||
cmds.playbackOptions(animationEndTime=frame_end)
|
||||
cmds.playbackOptions(minTime=frame_start)
|
||||
cmds.playbackOptions(maxTime=frame_end)
|
||||
cmds.playbackOptions(
|
||||
minTime=frame_start,
|
||||
maxTime=frame_end,
|
||||
animationStartTime=animation_start,
|
||||
animationEndTime=animation_end
|
||||
)
|
||||
cmds.currentTime(frame_start)
|
||||
|
||||
if render:
|
||||
|
|
@ -3913,3 +3935,53 @@ def get_capture_preset(task_name, task_type, subset, project_settings, log):
|
|||
capture_preset = plugin_settings["capture_preset"]
|
||||
|
||||
return capture_preset or {}
|
||||
|
||||
|
||||
def create_rig_animation_instance(nodes, context, namespace, log=None):
|
||||
"""Create an animation publish instance for loaded rigs.
|
||||
|
||||
See the RecreateRigAnimationInstance inventory action on how to use this
|
||||
for loaded rig containers.
|
||||
|
||||
Arguments:
|
||||
nodes (list): Member nodes of the rig instance.
|
||||
context (dict): Representation context of the rig container
|
||||
namespace (str): Namespace of the rig container
|
||||
log (logging.Logger, optional): Logger to log to if provided
|
||||
|
||||
Returns:
|
||||
None
|
||||
|
||||
"""
|
||||
output = next((node for node in nodes if
|
||||
node.endswith("out_SET")), None)
|
||||
controls = next((node for node in nodes if
|
||||
node.endswith("controls_SET")), None)
|
||||
|
||||
assert output, "No out_SET in rig, this is a bug."
|
||||
assert controls, "No controls_SET in rig, this is a bug."
|
||||
|
||||
# Find the roots amongst the loaded nodes
|
||||
roots = (
|
||||
cmds.ls(nodes, assemblies=True, long=True) or
|
||||
get_highest_in_hierarchy(nodes)
|
||||
)
|
||||
assert roots, "No root nodes in rig, this is a bug."
|
||||
|
||||
asset = legacy_io.Session["AVALON_ASSET"]
|
||||
dependency = str(context["representation"]["_id"])
|
||||
|
||||
if log:
|
||||
log.info("Creating subset: {}".format(namespace))
|
||||
|
||||
# Create the animation instance
|
||||
creator_plugin = get_legacy_creator_by_name("CreateAnimation")
|
||||
with maintained_selection():
|
||||
cmds.select([output, controls] + roots, noExpand=True)
|
||||
legacy_create(
|
||||
creator_plugin,
|
||||
name=namespace,
|
||||
asset=asset,
|
||||
options={"useSelection": True},
|
||||
data={"dependencies": dependency}
|
||||
)
|
||||
|
|
|
|||
|
|
@ -7,6 +7,12 @@ from openpype.hosts.maya.api import (
|
|||
class CreateAnimation(plugin.Creator):
|
||||
"""Animation output for character rigs"""
|
||||
|
||||
# We hide the animation creator from the UI since the creation of it
|
||||
# is automated upon loading a rig. There's an inventory action to recreate
|
||||
# it for loaded rigs if by chance someone deleted the animation instance.
|
||||
# Note: This setting is actually applied from project settings
|
||||
enabled = False
|
||||
|
||||
name = "animationDefault"
|
||||
label = "Animation"
|
||||
family = "animation"
|
||||
|
|
|
|||
|
|
@ -0,0 +1,35 @@
|
|||
from openpype.pipeline import (
|
||||
InventoryAction,
|
||||
get_representation_context
|
||||
)
|
||||
from openpype.hosts.maya.api.lib import (
|
||||
create_rig_animation_instance,
|
||||
get_container_members,
|
||||
)
|
||||
|
||||
|
||||
class RecreateRigAnimationInstance(InventoryAction):
|
||||
"""Recreate animation publish instance for loaded rigs"""
|
||||
|
||||
label = "Recreate rig animation instance"
|
||||
icon = "wrench"
|
||||
color = "#888888"
|
||||
|
||||
@staticmethod
|
||||
def is_compatible(container):
|
||||
return (
|
||||
container.get("loader") == "ReferenceLoader"
|
||||
and container.get("name", "").startswith("rig")
|
||||
)
|
||||
|
||||
def process(self, containers):
|
||||
|
||||
for container in containers:
|
||||
# todo: delete an existing entry if it exist or skip creation
|
||||
|
||||
namespace = container["namespace"]
|
||||
representation_id = container["representation"]
|
||||
context = get_representation_context(representation_id)
|
||||
nodes = get_container_members(container)
|
||||
|
||||
create_rig_animation_instance(nodes, context, namespace)
|
||||
|
|
@ -4,16 +4,12 @@ import contextlib
|
|||
from maya import cmds
|
||||
|
||||
from openpype.settings import get_project_settings
|
||||
from openpype.pipeline import legacy_io
|
||||
from openpype.pipeline.create import (
|
||||
legacy_create,
|
||||
get_legacy_creator_by_name,
|
||||
)
|
||||
import openpype.hosts.maya.api.plugin
|
||||
from openpype.hosts.maya.api.lib import (
|
||||
maintained_selection,
|
||||
get_container_members,
|
||||
parent_nodes
|
||||
parent_nodes,
|
||||
create_rig_animation_instance
|
||||
)
|
||||
|
||||
|
||||
|
|
@ -114,9 +110,6 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
|
|||
icon = "code-fork"
|
||||
color = "orange"
|
||||
|
||||
# Name of creator class that will be used to create animation instance
|
||||
animation_creator_name = "CreateAnimation"
|
||||
|
||||
def process_reference(self, context, name, namespace, options):
|
||||
import maya.cmds as cmds
|
||||
|
||||
|
|
@ -220,37 +213,10 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
|
|||
self._lock_camera_transforms(members)
|
||||
|
||||
def _post_process_rig(self, name, namespace, context, options):
|
||||
|
||||
output = next((node for node in self if
|
||||
node.endswith("out_SET")), None)
|
||||
controls = next((node for node in self if
|
||||
node.endswith("controls_SET")), None)
|
||||
|
||||
assert output, "No out_SET in rig, this is a bug."
|
||||
assert controls, "No controls_SET in rig, this is a bug."
|
||||
|
||||
# Find the roots amongst the loaded nodes
|
||||
roots = cmds.ls(self[:], assemblies=True, long=True)
|
||||
assert roots, "No root nodes in rig, this is a bug."
|
||||
|
||||
asset = legacy_io.Session["AVALON_ASSET"]
|
||||
dependency = str(context["representation"]["_id"])
|
||||
|
||||
self.log.info("Creating subset: {}".format(namespace))
|
||||
|
||||
# Create the animation instance
|
||||
creator_plugin = get_legacy_creator_by_name(
|
||||
self.animation_creator_name
|
||||
nodes = self[:]
|
||||
create_rig_animation_instance(
|
||||
nodes, context, namespace, log=self.log
|
||||
)
|
||||
with maintained_selection():
|
||||
cmds.select([output, controls] + roots, noExpand=True)
|
||||
legacy_create(
|
||||
creator_plugin,
|
||||
name=namespace,
|
||||
asset=asset,
|
||||
options={"useSelection": True},
|
||||
data={"dependencies": dependency}
|
||||
)
|
||||
|
||||
def _lock_camera_transforms(self, nodes):
|
||||
cameras = cmds.ls(nodes, type="camera")
|
||||
|
|
|
|||
|
|
@ -280,7 +280,7 @@ class MakeTX(TextureProcessor):
|
|||
# Do nothing if the source file is already a .tx file.
|
||||
return TextureResult(
|
||||
path=source,
|
||||
file_hash=None, # todo: unknown texture hash?
|
||||
file_hash=source_hash(source),
|
||||
colorspace=colorspace,
|
||||
transfer_mode=COPY
|
||||
)
|
||||
|
|
|
|||
97 openpype/hosts/maya/tools/mayalookassigner/alembic.py (new file)

@@ -0,0 +1,97 @@
# -*- coding: utf-8 -*-
"""Tools for loading looks to vray proxies."""
import os
from collections import defaultdict
import logging

import six

import alembic.Abc


log = logging.getLogger(__name__)


def get_alembic_paths_by_property(filename, attr, verbose=False):
    # type: (str, str, bool) -> dict
    """Return attribute value per objects in the Alembic file.

    Reads an Alembic archive hierarchy and retrieves the
    value from the `attr` properties on the objects.

    Args:
        filename (str): Full path to Alembic archive to read.
        attr (str): Id attribute.
        verbose (bool): Whether to verbosely log missing attributes.

    Returns:
        dict: Mapping of node full path with its id

    """
    # Normalize alembic path
    filename = os.path.normpath(filename)
    filename = filename.replace("\\", "/")
    filename = str(filename)  # path must be string

    try:
        archive = alembic.Abc.IArchive(filename)
    except RuntimeError:
        # invalid alembic file - probably vrmesh
        log.warning("{} is not an alembic file".format(filename))
        return {}
    root = archive.getTop()

    iterator = list(root.children)
    obj_ids = {}

    for obj in iterator:
        name = obj.getFullName()

        # include children for coming iterations
        iterator.extend(obj.children)

        props = obj.getProperties()
        if props.getNumProperties() == 0:
            # Skip those without properties, e.g. '/materials' in a gpuCache
            continue

        # The custom attribute is under the properties' first container under
        # the ".arbGeomParams"
        prop = props.getProperty(0)  # get base property

        _property = None
        try:
            geo_params = prop.getProperty('.arbGeomParams')
            _property = geo_params.getProperty(attr)
        except KeyError:
            if verbose:
                log.debug("Missing attr on: {0}".format(name))
            continue

        if not _property.isConstant():
            log.warning("Id not constant on: {0}".format(name))

        # Get first value sample
        value = _property.getValue()[0]

        obj_ids[name] = value

    return obj_ids


def get_alembic_ids_cache(path):
    # type: (str) -> dict
    """Build an id to node mapping in Alembic file.

    Nodes without IDs are ignored.

    Returns:
        dict: Mapping of id to nodes in the Alembic.

    """
    node_ids = get_alembic_paths_by_property(path, attr="cbId")
    id_nodes = defaultdict(list)
    for node, _id in six.iteritems(node_ids):
        id_nodes[_id].append(node)

    return dict(six.iteritems(id_nodes))

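To show how the new helper is meant to be consumed: given an `.abc` file whose objects carry `cbId` attributes, `get_alembic_ids_cache` returns a `{cbId: [node paths]}` mapping that a look assigner can intersect with the ids stored on a published look. A hedged sketch; the archive path and look ids below are made up for illustration:

```python
# Sketch only: assumes an Alembic archive written with "cbId" attributes
# and a set of ids coming from a published look.
from openpype.hosts.maya.tools.mayalookassigner.alembic import (
    get_alembic_ids_cache,
)


def nodes_for_look(abc_path, look_ids):
    """Return alembic node paths whose cbId is used by the look."""
    id_to_nodes = get_alembic_ids_cache(abc_path)  # {cbId: [node paths]}
    return {
        node_id: nodes
        for node_id, nodes in id_to_nodes.items()
        if node_id in look_ids
    }


# Hypothetical usage:
# matched = nodes_for_look("/projects/demo/cache/tree.abc", {"5f7e1a2b:0001"})
# for node_id, paths in matched.items():
#     print(node_id, paths)
```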
@@ -9,6 +9,7 @@ from openpype.pipeline import legacy_io
from openpype.client import get_last_version_by_subset_name
from openpype.hosts.maya import api
from . import lib
from .alembic import get_alembic_ids_cache


log = logging.getLogger(__name__)


@@ -68,6 +69,11 @@ def get_nodes_by_id(standin):
        (dict): Dictionary with node full name/path and id.
    """
    path = cmds.getAttr(standin + ".dso")

    if path.endswith(".abc"):
        # Support alembic files directly
        return get_alembic_ids_cache(path)

    json_path = None
    for f in os.listdir(os.path.dirname(path)):
        if f.endswith(".json"):

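With this hunk, `get_nodes_by_id` short-circuits for `.abc` standins and only falls back to the sidecar `.json` lookup for other archive types. A hedged usage sketch follows; the standin shape name is hypothetical and the import path of the standin module is an assumption (the file path is not visible in this diff):

```python
# Sketch: query the id mapping for a selected aiStandIn shape.
# Assumes a Maya session with MtoA loaded; "treeShape_AIS" is a made-up
# standin shape name and the module path is assumed.
from maya import cmds

from openpype.hosts.maya.tools.mayalookassigner.arnold_standin import (
    get_nodes_by_id,
)

standin = "treeShape_AIS"
print("Standin archive:", cmds.getAttr(standin + ".dso"))

# For an .abc archive this reads cbId attributes straight from the file,
# otherwise it looks for the exported .json sidecar next to the archive.
for node_id, nodes in get_nodes_by_id(standin).items():
    print(node_id, "->", nodes)
```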
@ -1,108 +1,20 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
"""Tools for loading looks to vray proxies."""
|
||||
import os
|
||||
from collections import defaultdict
|
||||
import logging
|
||||
|
||||
import six
|
||||
|
||||
import alembic.Abc
|
||||
from maya import cmds
|
||||
|
||||
from openpype.client import get_last_version_by_subset_name
|
||||
from openpype.pipeline import legacy_io
|
||||
import openpype.hosts.maya.lib as maya_lib
|
||||
from . import lib
|
||||
from .alembic import get_alembic_ids_cache
|
||||
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def get_alembic_paths_by_property(filename, attr, verbose=False):
|
||||
# type: (str, str, bool) -> dict
|
||||
"""Return attribute value per objects in the Alembic file.
|
||||
|
||||
Reads an Alembic archive hierarchy and retrieves the
|
||||
value from the `attr` properties on the objects.
|
||||
|
||||
Args:
|
||||
filename (str): Full path to Alembic archive to read.
|
||||
attr (str): Id attribute.
|
||||
verbose (bool): Whether to verbosely log missing attributes.
|
||||
|
||||
Returns:
|
||||
dict: Mapping of node full path with its id
|
||||
|
||||
"""
|
||||
# Normalize alembic path
|
||||
filename = os.path.normpath(filename)
|
||||
filename = filename.replace("\\", "/")
|
||||
filename = str(filename) # path must be string
|
||||
|
||||
try:
|
||||
archive = alembic.Abc.IArchive(filename)
|
||||
except RuntimeError:
|
||||
# invalid alembic file - probably vrmesh
|
||||
log.warning("{} is not an alembic file".format(filename))
|
||||
return {}
|
||||
root = archive.getTop()
|
||||
|
||||
iterator = list(root.children)
|
||||
obj_ids = {}
|
||||
|
||||
for obj in iterator:
|
||||
name = obj.getFullName()
|
||||
|
||||
# include children for coming iterations
|
||||
iterator.extend(obj.children)
|
||||
|
||||
props = obj.getProperties()
|
||||
if props.getNumProperties() == 0:
|
||||
# Skip those without properties, e.g. '/materials' in a gpuCache
|
||||
continue
|
||||
|
||||
# The custom attribute is under the properties' first container under
|
||||
# the ".arbGeomParams"
|
||||
prop = props.getProperty(0) # get base property
|
||||
|
||||
_property = None
|
||||
try:
|
||||
geo_params = prop.getProperty('.arbGeomParams')
|
||||
_property = geo_params.getProperty(attr)
|
||||
except KeyError:
|
||||
if verbose:
|
||||
log.debug("Missing attr on: {0}".format(name))
|
||||
continue
|
||||
|
||||
if not _property.isConstant():
|
||||
log.warning("Id not constant on: {0}".format(name))
|
||||
|
||||
# Get first value sample
|
||||
value = _property.getValue()[0]
|
||||
|
||||
obj_ids[name] = value
|
||||
|
||||
return obj_ids
|
||||
|
||||
|
||||
def get_alembic_ids_cache(path):
|
||||
# type: (str) -> dict
|
||||
"""Build a id to node mapping in Alembic file.
|
||||
|
||||
Nodes without IDs are ignored.
|
||||
|
||||
Returns:
|
||||
dict: Mapping of id to nodes in the Alembic.
|
||||
|
||||
"""
|
||||
node_ids = get_alembic_paths_by_property(path, attr="cbId")
|
||||
id_nodes = defaultdict(list)
|
||||
for node, _id in six.iteritems(node_ids):
|
||||
id_nodes[_id].append(node)
|
||||
|
||||
return dict(six.iteritems(id_nodes))
|
||||
|
||||
|
||||
def assign_vrayproxy_shaders(vrayproxy, assignments):
|
||||
# type: (str, dict) -> None
|
||||
"""Assign shaders to content of Vray Proxy.
|
||||
|
|
|
|||
|
|
@@ -495,17 +495,17 @@ def get_avalon_knob_data(node, prefix="avalon:", create=True):
        data (dict)
    """

    data = {}
    if AVALON_TAB not in node.knobs():
        return data

    # check if lists
    if not isinstance(prefix, list):
        prefix = list([prefix])

    data = dict()
        prefix = [prefix]

    # loop prefix
    for p in prefix:
        # check if the node is avalon tracked
        if AVALON_TAB not in node.knobs():
            continue
        try:
            # check if data available on the node
            test = node[AVALON_DATA_GROUP].value()

@@ -516,8 +516,7 @@ def get_avalon_knob_data(node, prefix="avalon:", create=True):
            if create:
                node = set_avalon_knob_data(node)
                return get_avalon_knob_data(node)
            else:
                return {}
            return {}

    # get data from filtered knobs
    data.update({k.replace(p, ''): node[k].value()

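The loop above strips a list of knob-name prefixes (`avalon:`, `ak:`) while collecting data from the node. The same collection pattern can be exercised outside Nuke with a plain dictionary standing in for the node's knobs; a minimal sketch, not the actual `get_avalon_knob_data` implementation:

```python
# Pure-Python sketch of the prefix-stripping collection done above.
# The "knobs" dict stands in for node[k].value() lookups inside Nuke.
def collect_prefixed_data(knobs, prefixes=("avalon:", "ak:")):
    data = {}
    for prefix in prefixes:
        for name, value in knobs.items():
            if name.startswith(prefix):
                data[name[len(prefix):]] = value
    return data


knobs = {
    "avalon:asset": "sh010",
    "avalon:subset": "renderMain",
    "ak:task": "compositing",
    "label": "ignored, no known prefix",
}
print(collect_prefixed_data(knobs))
# {'asset': 'sh010', 'subset': 'renderMain', 'task': 'compositing'}
```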
@@ -2,7 +2,8 @@ from openpype.pipeline.create.creator_plugins import SubsetConvertorPlugin
from openpype.hosts.nuke.api.lib import (
    INSTANCE_DATA_KNOB,
    get_node_data,
    get_avalon_knob_data
    get_avalon_knob_data,
    AVALON_TAB,
)
from openpype.hosts.nuke.api.plugin import convert_to_valid_instaces


@@ -17,13 +18,15 @@ class LegacyConverted(SubsetConvertorPlugin):
        legacy_found = False
        # search for first available legacy item
        for node in nuke.allNodes(recurseGroups=True):

            if node.Class() in ["Viewer", "Dot"]:
                continue

            if get_node_data(node, INSTANCE_DATA_KNOB):
                continue

            if AVALON_TAB not in node.knobs():
                continue

            # get data from avalon knob
            avalon_knob_data = get_avalon_knob_data(
                node, ["avalon:", "ak:"], create=False)

@@ -190,7 +190,7 @@ class CollectNukeWrites(pyblish.api.InstancePlugin,

        # make sure rendered sequence on farm will
        # be used for extract review
        if not instance.data["review"]:
        if not instance.data.get("review"):
            instance.data["useSequenceForReview"] = False

        self.log.debug("instance.data: {}".format(pformat(instance.data)))

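The one-line change above is defensive: `instance.data["review"]` raises `KeyError` when a write instance never had the key collected, while `.get("review")` simply evaluates falsy. A tiny illustration:

```python
# Minimal illustration of why .get() is safer for optional instance data.
instance_data = {"family": "render"}  # "review" was never collected

print(not instance_data.get("review"))  # True, no exception

try:
    print(not instance_data["review"])
except KeyError as exc:
    print("direct indexing fails:", exc)
```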
@@ -0,0 +1,43 @@
# -*- coding: utf-8 -*-
import pyblish.api


class CollectReviewInfo(pyblish.api.InstancePlugin):
    """Collect data required for review instances.

    ExtractReview plugin requires frame start/end, fps on instance data which
    are missing on instances from TrayPublisher.

    Warning:
        This is a temporary solution to "make it work". Contains removed
        changes from https://github.com/ynput/OpenPype/pull/4383 reduced
        only for review instances.
    """

    label = "Collect Review Info"
    order = pyblish.api.CollectorOrder + 0.491
    families = ["review"]
    hosts = ["traypublisher"]

    def process(self, instance):
        asset_entity = instance.data.get("assetEntity")
        if instance.data.get("frameStart") is not None or not asset_entity:
            self.log.debug("Missing required data on instance")
            return

        asset_data = asset_entity["data"]
        # Store collected data for logging
        collected_data = {}
        for key in (
            "fps",
            "frameStart",
            "frameEnd",
            "handleStart",
            "handleEnd",
        ):
            if key in instance.data or key not in asset_data:
                continue
            value = asset_data[key]
            collected_data[key] = value
            instance.data[key] = value
        self.log.debug("Collected data: {}".format(str(collected_data)))

@@ -2,8 +2,10 @@ import os

import unreal

from openpype.settings import get_project_settings
from openpype.pipeline import Anatomy
from openpype.hosts.unreal.api import pipeline
from openpype.widgets.message_window import Window


queue = None

@@ -32,11 +34,20 @@ def start_rendering():
    """
    Start the rendering process.
    """
    print("Starting rendering...")
    unreal.log("Starting rendering...")

    # Get selected sequences
    assets = unreal.EditorUtilityLibrary.get_selected_assets()

    if not assets:
        Window(
            parent=None,
            title="No assets selected",
            message="No assets selected. Select a render instance.",
            level="warning")
        raise RuntimeError(
            "No assets selected. You need to select a render instance.")

    # instances = pipeline.ls_inst()
    instances = [
        a for a in assets

@@ -66,6 +77,13 @@ def start_rendering():

    ar = unreal.AssetRegistryHelpers.get_asset_registry()

    data = get_project_settings(project)
    config = None
    config_path = str(data.get("unreal").get("render_config_path"))
    if config_path and unreal.EditorAssetLibrary.does_asset_exist(config_path):
        unreal.log("Found saved render configuration")
        config = ar.get_asset_by_object_path(config_path).get_asset()

    for i in inst_data:
        sequence = ar.get_asset_by_object_path(i["sequence"]).get_asset()

@@ -81,55 +99,80 @@ def start_rendering():
    # Get all the sequences to render. If there are subsequences,
    # add them and their frame ranges to the render list. We also
    # use the names for the output paths.
    for s in sequences:
        subscenes = pipeline.get_subsequences(s.get('sequence'))
    for seq in sequences:
        subscenes = pipeline.get_subsequences(seq.get('sequence'))

        if subscenes:
            for ss in subscenes:
            for sub_seq in subscenes:
                sequences.append({
                    "sequence": ss.get_sequence(),
                    "output": (f"{s.get('output')}/"
                               f"{ss.get_sequence().get_name()}"),
                    "sequence": sub_seq.get_sequence(),
                    "output": (f"{seq.get('output')}/"
                               f"{sub_seq.get_sequence().get_name()}"),
                    "frame_range": (
                        ss.get_start_frame(), ss.get_end_frame())
                        sub_seq.get_start_frame(), sub_seq.get_end_frame())
                })
        else:
            # Avoid rendering camera sequences
            if "_camera" not in s.get('sequence').get_name():
                render_list.append(s)
            if "_camera" not in seq.get('sequence').get_name():
                render_list.append(seq)

    # Create the rendering jobs and add them to the queue.
    for r in render_list:
    for render_setting in render_list:
        job = queue.allocate_new_job(unreal.MoviePipelineExecutorJob)
        job.sequence = unreal.SoftObjectPath(i["master_sequence"])
        job.map = unreal.SoftObjectPath(i["master_level"])
        job.author = "OpenPype"

        # If we have a saved configuration, copy it to the job.
        if config:
            job.get_configuration().copy_from(config)

        # User data could be used to pass data to the job, that can be
        # read in the job's OnJobFinished callback. We could,
        # for instance, pass the AvalonPublishInstance's path to the job.
        # job.user_data = ""

        output_dir = render_setting.get('output')
        shot_name = render_setting.get('sequence').get_name()

        settings = job.get_configuration().find_or_add_setting_by_class(
            unreal.MoviePipelineOutputSetting)
        settings.output_resolution = unreal.IntPoint(1920, 1080)
        settings.custom_start_frame = r.get("frame_range")[0]
        settings.custom_end_frame = r.get("frame_range")[1]
        settings.custom_start_frame = render_setting.get("frame_range")[0]
        settings.custom_end_frame = render_setting.get("frame_range")[1]
        settings.use_custom_playback_range = True
        settings.file_name_format = "{sequence_name}.{frame_number}"
        settings.output_directory.path = f"{render_dir}/{r.get('output')}"

        renderPass = job.get_configuration().find_or_add_setting_by_class(
            unreal.MoviePipelineDeferredPassBase)
        renderPass.disable_multisample_effects = True
        settings.file_name_format = f"{shot_name}" + ".{frame_number}"
        settings.output_directory.path = f"{render_dir}/{output_dir}"

        job.get_configuration().find_or_add_setting_by_class(
            unreal.MoviePipelineImageSequenceOutput_PNG)
            unreal.MoviePipelineDeferredPassBase)

        render_format = data.get("unreal").get("render_format", "png")

        if render_format == "png":
            job.get_configuration().find_or_add_setting_by_class(
                unreal.MoviePipelineImageSequenceOutput_PNG)
        elif render_format == "exr":
            job.get_configuration().find_or_add_setting_by_class(
                unreal.MoviePipelineImageSequenceOutput_EXR)
        elif render_format == "jpg":
            job.get_configuration().find_or_add_setting_by_class(
                unreal.MoviePipelineImageSequenceOutput_JPG)
        elif render_format == "bmp":
            job.get_configuration().find_or_add_setting_by_class(
                unreal.MoviePipelineImageSequenceOutput_BMP)

    # If there are jobs in the queue, start the rendering process.
    if queue.get_jobs():
        global executor
        executor = unreal.MoviePipelinePIEExecutor()

        preroll_frames = data.get("unreal").get("preroll_frames", 0)

        settings = unreal.MoviePipelinePIEExecutorSettings()
        settings.set_editor_property(
            "initial_delay_frame_count", preroll_frames)

        executor.on_executor_finished_delegate.add_callable_unique(
            _queue_finish_callback)
        executor.on_individual_job_finished_delegate.add_callable_unique(

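The `if/elif` chain above picks the Movie Render Queue output class from the `render_format` project setting. A table-driven lookup is a common alternative when more formats are added; a hedged sketch using only the output classes and calls already shown in the diff (the PNG fallback mirrors the settings default):

```python
# Sketch: table-driven selection of the Movie Render Queue output class.
# Must run inside Unreal Editor Python; "job" is an existing
# unreal.MoviePipelineExecutorJob as in the code above.
import unreal

OUTPUT_CLASS_BY_FORMAT = {
    "png": unreal.MoviePipelineImageSequenceOutput_PNG,
    "exr": unreal.MoviePipelineImageSequenceOutput_EXR,
    "jpg": unreal.MoviePipelineImageSequenceOutput_JPG,
    "bmp": unreal.MoviePipelineImageSequenceOutput_BMP,
}


def add_output_setting(job, render_format):
    """Add the image sequence output matching the project setting."""
    output_class = OUTPUT_CLASS_BY_FORMAT.get(
        render_format, unreal.MoviePipelineImageSequenceOutput_PNG)
    return job.get_configuration().find_or_add_setting_by_class(output_class)
```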
@ -1,14 +1,22 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
from pathlib import Path
|
||||
|
||||
import unreal
|
||||
|
||||
from openpype.pipeline import CreatorError
|
||||
from openpype.hosts.unreal.api.pipeline import (
|
||||
get_subsequences
|
||||
UNREAL_VERSION,
|
||||
create_folder,
|
||||
get_subsequences,
|
||||
)
|
||||
from openpype.hosts.unreal.api.plugin import (
|
||||
UnrealAssetCreator
|
||||
)
|
||||
from openpype.lib import UILabelDef
|
||||
from openpype.lib import (
|
||||
UILabelDef,
|
||||
UISeparatorDef,
|
||||
BoolDef,
|
||||
NumberDef
|
||||
)
|
||||
|
||||
|
||||
class CreateRender(UnrealAssetCreator):
|
||||
|
|
@ -19,7 +27,92 @@ class CreateRender(UnrealAssetCreator):
|
|||
family = "render"
|
||||
icon = "eye"
|
||||
|
||||
def create(self, subset_name, instance_data, pre_create_data):
|
||||
def create_instance(
|
||||
self, instance_data, subset_name, pre_create_data,
|
||||
selected_asset_path, master_seq, master_lvl, seq_data
|
||||
):
|
||||
instance_data["members"] = [selected_asset_path]
|
||||
instance_data["sequence"] = selected_asset_path
|
||||
instance_data["master_sequence"] = master_seq
|
||||
instance_data["master_level"] = master_lvl
|
||||
instance_data["output"] = seq_data.get('output')
|
||||
instance_data["frameStart"] = seq_data.get('frame_range')[0]
|
||||
instance_data["frameEnd"] = seq_data.get('frame_range')[1]
|
||||
|
||||
super(CreateRender, self).create(
|
||||
subset_name,
|
||||
instance_data,
|
||||
pre_create_data)
|
||||
|
||||
def create_with_new_sequence(
|
||||
self, subset_name, instance_data, pre_create_data
|
||||
):
|
||||
# If the option to create a new level sequence is selected,
|
||||
# create a new level sequence and a master level.
|
||||
|
||||
root = f"/Game/OpenPype/Sequences"
|
||||
|
||||
# Create a new folder for the sequence in root
|
||||
sequence_dir_name = create_folder(root, subset_name)
|
||||
sequence_dir = f"{root}/{sequence_dir_name}"
|
||||
|
||||
unreal.log_warning(f"sequence_dir: {sequence_dir}")
|
||||
|
||||
# Create the level sequence
|
||||
asset_tools = unreal.AssetToolsHelpers.get_asset_tools()
|
||||
seq = asset_tools.create_asset(
|
||||
asset_name=subset_name,
|
||||
package_path=sequence_dir,
|
||||
asset_class=unreal.LevelSequence,
|
||||
factory=unreal.LevelSequenceFactoryNew())
|
||||
|
||||
seq.set_playback_start(pre_create_data.get("start_frame"))
|
||||
seq.set_playback_end(pre_create_data.get("end_frame"))
|
||||
|
||||
pre_create_data["members"] = [seq.get_path_name()]
|
||||
|
||||
unreal.EditorAssetLibrary.save_asset(seq.get_path_name())
|
||||
|
||||
# Create the master level
|
||||
if UNREAL_VERSION.major >= 5:
|
||||
curr_level = unreal.LevelEditorSubsystem().get_current_level()
|
||||
else:
|
||||
world = unreal.EditorLevelLibrary.get_editor_world()
|
||||
levels = unreal.EditorLevelUtils.get_levels(world)
|
||||
curr_level = levels[0] if len(levels) else None
|
||||
if not curr_level:
|
||||
raise RuntimeError("No level loaded.")
|
||||
curr_level_path = curr_level.get_outer().get_path_name()
|
||||
|
||||
# If the level path does not start with "/Game/", the current
|
||||
# level is a temporary, unsaved level.
|
||||
if curr_level_path.startswith("/Game/"):
|
||||
if UNREAL_VERSION.major >= 5:
|
||||
unreal.LevelEditorSubsystem().save_current_level()
|
||||
else:
|
||||
unreal.EditorLevelLibrary.save_current_level()
|
||||
|
||||
ml_path = f"{sequence_dir}/{subset_name}_MasterLevel"
|
||||
|
||||
if UNREAL_VERSION.major >= 5:
|
||||
unreal.LevelEditorSubsystem().new_level(ml_path)
|
||||
else:
|
||||
unreal.EditorLevelLibrary.new_level(ml_path)
|
||||
|
||||
seq_data = {
|
||||
"sequence": seq,
|
||||
"output": f"{seq.get_name()}",
|
||||
"frame_range": (
|
||||
seq.get_playback_start(),
|
||||
seq.get_playback_end())}
|
||||
|
||||
self.create_instance(
|
||||
instance_data, subset_name, pre_create_data,
|
||||
seq.get_path_name(), seq.get_path_name(), ml_path, seq_data)
|
||||
|
||||
def create_from_existing_sequence(
|
||||
self, subset_name, instance_data, pre_create_data
|
||||
):
|
||||
ar = unreal.AssetRegistryHelpers.get_asset_registry()
|
||||
|
||||
sel_objects = unreal.EditorUtilityLibrary.get_selected_assets()
|
||||
|
|
@ -27,8 +120,8 @@ class CreateRender(UnrealAssetCreator):
|
|||
a.get_path_name() for a in sel_objects
|
||||
if a.get_class().get_name() == "LevelSequence"]
|
||||
|
||||
if not selection:
|
||||
raise CreatorError("Please select at least one Level Sequence.")
|
||||
if len(selection) == 0:
|
||||
raise RuntimeError("Please select at least one Level Sequence.")
|
||||
|
||||
seq_data = None
|
||||
|
||||
|
|
@ -42,28 +135,38 @@ class CreateRender(UnrealAssetCreator):
|
|||
f"Skipping {selected_asset.get_name()}. It isn't a Level "
|
||||
"Sequence.")
|
||||
|
||||
# The asset name is the third element of the path which
|
||||
# contains the map.
|
||||
# To take the asset name, we remove from the path the prefix
|
||||
# "/Game/OpenPype/" and then we split the path by "/".
|
||||
sel_path = selected_asset_path
|
||||
asset_name = sel_path.replace("/Game/OpenPype/", "").split("/")[0]
|
||||
if pre_create_data.get("use_hierarchy"):
|
||||
# The asset name is the third element of the path which
|
||||
# contains the map.
|
||||
# To take the asset name, we remove from the path the prefix
|
||||
# "/Game/OpenPype/" and then we split the path by "/".
|
||||
sel_path = selected_asset_path
|
||||
asset_name = sel_path.replace(
|
||||
"/Game/OpenPype/", "").split("/")[0]
|
||||
|
||||
search_path = f"/Game/OpenPype/{asset_name}"
|
||||
else:
|
||||
search_path = Path(selected_asset_path).parent.as_posix()
|
||||
|
||||
# Get the master sequence and the master level.
|
||||
# There should be only one sequence and one level in the directory.
|
||||
ar_filter = unreal.ARFilter(
|
||||
class_names=["LevelSequence"],
|
||||
package_paths=[f"/Game/OpenPype/{asset_name}"],
|
||||
recursive_paths=False)
|
||||
sequences = ar.get_assets(ar_filter)
|
||||
master_seq = sequences[0].get_asset().get_path_name()
|
||||
master_seq_obj = sequences[0].get_asset()
|
||||
ar_filter = unreal.ARFilter(
|
||||
class_names=["World"],
|
||||
package_paths=[f"/Game/OpenPype/{asset_name}"],
|
||||
recursive_paths=False)
|
||||
levels = ar.get_assets(ar_filter)
|
||||
master_lvl = levels[0].get_asset().get_path_name()
|
||||
try:
|
||||
ar_filter = unreal.ARFilter(
|
||||
class_names=["LevelSequence"],
|
||||
package_paths=[search_path],
|
||||
recursive_paths=False)
|
||||
sequences = ar.get_assets(ar_filter)
|
||||
master_seq = sequences[0].get_asset().get_path_name()
|
||||
master_seq_obj = sequences[0].get_asset()
|
||||
ar_filter = unreal.ARFilter(
|
||||
class_names=["World"],
|
||||
package_paths=[search_path],
|
||||
recursive_paths=False)
|
||||
levels = ar.get_assets(ar_filter)
|
||||
master_lvl = levels[0].get_asset().get_path_name()
|
||||
except IndexError:
|
||||
raise RuntimeError(
|
||||
f"Could not find the hierarchy for the selected sequence.")
|
||||
|
||||
# If the selected asset is the master sequence, we get its data
|
||||
# and then we create the instance for the master sequence.
|
||||
|
|
@ -79,7 +182,8 @@ class CreateRender(UnrealAssetCreator):
|
|||
master_seq_obj.get_playback_start(),
|
||||
master_seq_obj.get_playback_end())}
|
||||
|
||||
if selected_asset_path == master_seq:
|
||||
if (selected_asset_path == master_seq or
|
||||
pre_create_data.get("use_hierarchy")):
|
||||
seq_data = master_seq_data
|
||||
else:
|
||||
seq_data_list = [master_seq_data]
|
||||
|
|
@ -119,20 +223,54 @@ class CreateRender(UnrealAssetCreator):
|
|||
"sub-sequence of the master sequence.")
|
||||
continue
|
||||
|
||||
instance_data["members"] = [selected_asset_path]
|
||||
instance_data["sequence"] = selected_asset_path
|
||||
instance_data["master_sequence"] = master_seq
|
||||
instance_data["master_level"] = master_lvl
|
||||
instance_data["output"] = seq_data.get('output')
|
||||
instance_data["frameStart"] = seq_data.get('frame_range')[0]
|
||||
instance_data["frameEnd"] = seq_data.get('frame_range')[1]
|
||||
self.create_instance(
|
||||
instance_data, subset_name, pre_create_data,
|
||||
selected_asset_path, master_seq, master_lvl, seq_data)
|
||||
|
||||
super(CreateRender, self).create(
|
||||
subset_name,
|
||||
instance_data,
|
||||
pre_create_data)
|
||||
def create(self, subset_name, instance_data, pre_create_data):
|
||||
if pre_create_data.get("create_seq"):
|
||||
self.create_with_new_sequence(
|
||||
subset_name, instance_data, pre_create_data)
|
||||
else:
|
||||
self.create_from_existing_sequence(
|
||||
subset_name, instance_data, pre_create_data)
|
||||
|
||||
def get_pre_create_attr_defs(self):
|
||||
return [
|
||||
UILabelDef("Select the sequence to render.")
|
||||
UILabelDef(
|
||||
"Select a Level Sequence to render or create a new one."
|
||||
),
|
||||
BoolDef(
|
||||
"create_seq",
|
||||
label="Create a new Level Sequence",
|
||||
default=False
|
||||
),
|
||||
UILabelDef(
|
||||
"WARNING: If you create a new Level Sequence, the current\n"
|
||||
"level will be saved and a new Master Level will be created."
|
||||
),
|
||||
NumberDef(
|
||||
"start_frame",
|
||||
label="Start Frame",
|
||||
default=0,
|
||||
minimum=-999999,
|
||||
maximum=999999
|
||||
),
|
||||
NumberDef(
|
||||
"end_frame",
|
||||
label="Start Frame",
|
||||
default=150,
|
||||
minimum=-999999,
|
||||
maximum=999999
|
||||
),
|
||||
UISeparatorDef(),
|
||||
UILabelDef(
|
||||
"The following settings are valid only if you are not\n"
|
||||
"creating a new sequence."
|
||||
),
|
||||
BoolDef(
|
||||
"use_hierarchy",
|
||||
label="Use Hierarchy",
|
||||
default=False
|
||||
),
|
||||
]
|
||||
|
|
|
|||
|
|
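The creator changes above locate the master Level Sequence and master World next to the selected sequence with `unreal.ARFilter`, and the new `try/except IndexError` turns an empty query into a readable error. A hedged sketch of that lookup in isolation; the package path is made up:

```python
# Sketch: find the single master LevelSequence and World in a folder.
# Must run inside Unreal Editor Python; the package path is hypothetical.
import unreal


def find_master_assets(search_path):
    ar = unreal.AssetRegistryHelpers.get_asset_registry()
    try:
        seq_filter = unreal.ARFilter(
            class_names=["LevelSequence"],
            package_paths=[search_path],
            recursive_paths=False)
        master_seq = ar.get_assets(seq_filter)[0].get_asset()

        lvl_filter = unreal.ARFilter(
            class_names=["World"],
            package_paths=[search_path],
            recursive_paths=False)
        master_lvl = ar.get_assets(lvl_filter)[0].get_asset()
    except IndexError:
        raise RuntimeError(
            "Could not find the hierarchy for the selected sequence.")
    return master_seq, master_lvl


# master_seq, master_lvl = find_master_assets("/Game/OpenPype/sh010")
```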
@@ -0,0 +1,42 @@
import clique

import pyblish.api


class ValidateSequenceFrames(pyblish.api.InstancePlugin):
    """Ensure the sequence of frames is complete

    The files found in the folder are checked against the frameStart and
    frameEnd of the instance. If the first or last file does not correspond
    to the first or last frame, the sequence is flagged as invalid.
    """

    order = pyblish.api.ValidatorOrder
    label = "Validate Sequence Frames"
    families = ["render"]
    hosts = ["unreal"]
    optional = True

    def process(self, instance):
        representations = instance.data.get("representations")
        for repr in representations:
            data = instance.data.get("assetEntity", {}).get("data", {})
            patterns = [clique.PATTERNS["frames"]]
            collections, remainder = clique.assemble(
                repr["files"], minimum_items=1, patterns=patterns)

            assert not remainder, "Must not have remainder"
            assert len(collections) == 1, "Must detect single collection"
            collection = collections[0]
            frames = list(collection.indexes)

            current_range = (frames[0], frames[-1])
            required_range = (data["frameStart"],
                              data["frameEnd"])

            if current_range != required_range:
                raise ValueError(f"Invalid frame range: {current_range} - "
                                 f"expected: {required_range}")

            missing = collection.holes().indexes
            assert not missing, "Missing frames: %s" % (missing,)

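The validator above leans on `clique` to detect the rendered frame range and any holes in it. The same check can be reproduced on a toy file list; this sketch uses made-up file names:

```python
# Sketch: detect frame range and missing frames with clique.
import clique

files = [
    "shot010.0001.png",
    "shot010.0002.png",
    "shot010.0004.png",  # 0003 is intentionally missing
]

collections, remainder = clique.assemble(
    files, minimum_items=1, patterns=[clique.PATTERNS["frames"]])
assert not remainder and len(collections) == 1

collection = collections[0]
frames = list(collection.indexes)
print("range:", (frames[0], frames[-1]))                # (1, 4)
print("missing:", sorted(collection.holes().indexes))   # [3]
```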
@@ -45,7 +45,7 @@ class PublishValidationError(Exception):

    def __init__(self, message, title=None, description=None, detail=None):
        self.message = message
        self.title = title or "< Missing title >"
        self.title = title
        self.description = description or message
        self.detail = detail
        super(PublishValidationError, self).__init__(message)

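With `title` no longer defaulting to a placeholder string, the fallback shown later in this diff (in `PublishValidationErrors.add_error`) fills the title from the plugin label, or the plugin class name as a last resort. A minimal sketch of that fallback logic detached from Pyblish, using dummy names for illustration:

```python
# Sketch of the title fallback: prefer the error's own title, then the
# plugin's label, then the plugin class name. Dummy classes only.
class DummyError(Exception):
    def __init__(self, message, title=None):
        super().__init__(message)
        self.title = title


class DummyPlugin:
    label = "Validate Something"


def resolve_title(error, plugin):
    if error.title:
        return error.title
    if getattr(plugin, "label", None):
        return plugin.label
    return plugin.__name__


print(resolve_title(DummyError("missing frames"), DummyPlugin))
# "Validate Something"
```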
@@ -49,7 +49,12 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin):
            collection = collections[0]
            frames = list(collection.indexes)

            if instance.data.get("slate"):
                # Slate is not part of the frame range
                frames = frames[1:]

            current_range = (frames[0], frames[-1])

            required_range = (instance.data["frameStart"],
                              instance.data["frameEnd"])

@@ -554,7 +554,7 @@
        "publish_mip_map": true
    },
    "CreateAnimation": {
        "enabled": true,
        "enabled": false,
        "write_color_sets": false,
        "write_face_sets": false,
        "include_parent_hierarchy": false,

@@ -1459,7 +1459,7 @@
        ]
    },
    "reference_loader": {
        "namespace": "{asset_name}_{subset}_##",
        "namespace": "{asset_name}_{subset}_##_",
        "group_name": "_GRP"
    }
},

@@ -11,6 +11,9 @@
    },
    "level_sequences_for_layouts": false,
    "delete_unmatched_assets": false,
    "render_config_path": "",
    "preroll_frames": 0,
    "render_format": "png",
    "project_setup": {
        "dev_mode": true
    }

@@ -32,6 +32,28 @@
        "key": "delete_unmatched_assets",
        "label": "Delete assets that are not matched"
    },
    {
        "type": "text",
        "key": "render_config_path",
        "label": "Render Config Path"
    },
    {
        "type": "number",
        "key": "preroll_frames",
        "label": "Pre-roll frames"
    },
    {
        "key": "render_format",
        "label": "Render format",
        "type": "enum",
        "multiselection": false,
        "enum_items": [
            {"png": "PNG"},
            {"exr": "EXR"},
            {"jpg": "JPG"},
            {"bmp": "BMP"}
        ]
    },
    {
        "type": "dict",
        "collapsible": true,

@@ -48,7 +48,7 @@
    "bg-view-selection-hover": "rgba(92, 173, 214, .8)",

    "border": "#373D48",
    "border-hover": "rgba(168, 175, 189, .3)",
    "border-hover": "rgb(92, 99, 111)",
    "border-focus": "rgb(92, 173, 214)",

    "restart-btn-bg": "#458056",

@ -35,6 +35,11 @@ QWidget:disabled {
|
|||
color: {color:font-disabled};
|
||||
}
|
||||
|
||||
/* Some DCCs have set borders to solid color */
|
||||
QScrollArea {
|
||||
border: none;
|
||||
}
|
||||
|
||||
QLabel {
|
||||
background: transparent;
|
||||
}
|
||||
|
|
@ -42,7 +47,7 @@ QLabel {
|
|||
/* Inputs */
|
||||
QAbstractSpinBox, QLineEdit, QPlainTextEdit, QTextEdit {
|
||||
border: 1px solid {color:border};
|
||||
border-radius: 0.3em;
|
||||
border-radius: 0.2em;
|
||||
background: {color:bg-inputs};
|
||||
padding: 0.1em;
|
||||
}
|
||||
|
|
@ -226,7 +231,7 @@ QMenu::separator {
|
|||
/* Combobox */
|
||||
QComboBox {
|
||||
border: 1px solid {color:border};
|
||||
border-radius: 3px;
|
||||
border-radius: 0.2em;
|
||||
padding: 1px 3px 1px 3px;
|
||||
background: {color:bg-inputs};
|
||||
}
|
||||
|
|
@ -474,7 +479,6 @@ QAbstractItemView:disabled{
|
|||
}
|
||||
|
||||
QAbstractItemView::item:hover {
|
||||
/* color: {color:bg-view-hover}; */
|
||||
background: {color:bg-view-hover};
|
||||
}
|
||||
|
||||
|
|
@ -743,7 +747,7 @@ OverlayMessageWidget QWidget {
|
|||
|
||||
#TypeEditor, #ToolEditor, #NameEditor, #NumberEditor {
|
||||
background: transparent;
|
||||
border-radius: 0.3em;
|
||||
border-radius: 0.2em;
|
||||
}
|
||||
|
||||
#TypeEditor:focus, #ToolEditor:focus, #NameEditor:focus, #NumberEditor:focus {
|
||||
|
|
@ -860,7 +864,13 @@ OverlayMessageWidget QWidget {
|
|||
background: {color:bg-view-hover};
|
||||
}
|
||||
|
||||
/* New Create/Publish UI */
|
||||
/* Publisher UI (Create/Publish) */
|
||||
#PublishWindow QAbstractSpinBox, QLineEdit, QPlainTextEdit, QTextEdit {
|
||||
padding: 1px;
|
||||
}
|
||||
#PublishWindow QComboBox {
|
||||
padding: 1px 1px 1px 0.2em;
|
||||
}
|
||||
PublisherTabsWidget {
|
||||
background: {color:publisher:tab-bg};
|
||||
}
|
||||
|
|
@ -944,6 +954,7 @@ PixmapButton:disabled {
|
|||
border-top-left-radius: 0px;
|
||||
padding-top: 0.5em;
|
||||
padding-bottom: 0.5em;
|
||||
width: 0.5em;
|
||||
}
|
||||
#VariantInput[state="new"], #VariantInput[state="new"]:focus, #VariantInput[state="new"]:hover {
|
||||
border-color: {color:publisher:success};
|
||||
|
|
@ -1072,7 +1083,7 @@ ValidationArtistMessage QLabel {
|
|||
#AssetNameInputWidget {
|
||||
background: {color:bg-inputs};
|
||||
border: 1px solid {color:border};
|
||||
border-radius: 0.3em;
|
||||
border-radius: 0.2em;
|
||||
}
|
||||
|
||||
#AssetNameInputWidget QWidget {
|
||||
|
|
@ -1465,6 +1476,12 @@ CreateNextPageOverlay {
|
|||
}
|
||||
|
||||
/* Attribute Definition widgets */
|
||||
AttributeDefinitionsWidget QAbstractSpinBox, QLineEdit, QPlainTextEdit, QTextEdit {
|
||||
padding: 1px;
|
||||
}
|
||||
AttributeDefinitionsWidget QComboBox {
|
||||
padding: 1px 1px 1px 0.2em;
|
||||
}
|
||||
InViewButton, InViewButton:disabled {
|
||||
background: transparent;
|
||||
}
|
||||
|
|
|
|||
|
|
@ -1,4 +1,3 @@
|
|||
import uuid
|
||||
import copy
|
||||
|
||||
from qtpy import QtWidgets, QtCore
|
||||
|
|
@ -126,7 +125,7 @@ class AttributeDefinitionsWidget(QtWidgets.QWidget):
|
|||
|
||||
row = 0
|
||||
for attr_def in attr_defs:
|
||||
if not isinstance(attr_def, UIDef):
|
||||
if attr_def.is_value_def:
|
||||
if attr_def.key in self._current_keys:
|
||||
raise KeyError(
|
||||
"Duplicated key \"{}\"".format(attr_def.key))
|
||||
|
|
@ -144,11 +143,16 @@ class AttributeDefinitionsWidget(QtWidgets.QWidget):
|
|||
|
||||
col_num = 2 - expand_cols
|
||||
|
||||
if attr_def.label:
|
||||
if attr_def.is_value_def and attr_def.label:
|
||||
label_widget = QtWidgets.QLabel(attr_def.label, self)
|
||||
tooltip = attr_def.tooltip
|
||||
if tooltip:
|
||||
label_widget.setToolTip(tooltip)
|
||||
if attr_def.is_label_horizontal:
|
||||
label_widget.setAlignment(
|
||||
QtCore.Qt.AlignRight
|
||||
| QtCore.Qt.AlignVCenter
|
||||
)
|
||||
layout.addWidget(
|
||||
label_widget, row, 0, 1, expand_cols
|
||||
)
|
||||
|
|
|
|||
|
|
@ -123,7 +123,7 @@ class BaseRepresentationModel(object):
|
|||
self.remote_provider = remote_provider
|
||||
|
||||
|
||||
class SubsetsModel(TreeModel, BaseRepresentationModel):
|
||||
class SubsetsModel(BaseRepresentationModel, TreeModel):
|
||||
doc_fetched = QtCore.Signal()
|
||||
refreshed = QtCore.Signal(bool)
|
||||
|
||||
|
|
|
|||
|
|
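The one-line base-class reorder above (`SubsetsModel(BaseRepresentationModel, TreeModel)`) matters because Python resolves attributes left to right along the MRO, so mixin methods must come before the concrete model class to take precedence. A generic illustration, not the actual loader classes:

```python
# Generic MRO illustration: the left-most base wins when both bases
# define the same method.
class Mixin:
    def refresh(self):
        return "mixin refresh"


class BaseModel:
    def refresh(self):
        return "base model refresh"


class ModelA(BaseModel, Mixin):
    pass


class ModelB(Mixin, BaseModel):
    pass


print(ModelA().refresh())  # "base model refresh"
print(ModelB().refresh())  # "mixin refresh"
print([cls.__name__ for cls in ModelB.__mro__])
# ['ModelB', 'Mixin', 'BaseModel', 'object']
```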
@ -2,7 +2,7 @@ from qtpy import QtCore, QtGui
|
|||
|
||||
# ID of context item in instance view
|
||||
CONTEXT_ID = "context"
|
||||
CONTEXT_LABEL = "Options"
|
||||
CONTEXT_LABEL = "Context"
|
||||
# Not showed anywhere - used as identifier
|
||||
CONTEXT_GROUP = "__ContextGroup__"
|
||||
|
||||
|
|
@ -15,6 +15,9 @@ VARIANT_TOOLTIP = (
|
|||
"\nnumerical characters (0-9) dot (\".\") or underscore (\"_\")."
|
||||
)
|
||||
|
||||
INPUTS_LAYOUT_HSPACING = 4
|
||||
INPUTS_LAYOUT_VSPACING = 2
|
||||
|
||||
# Roles for instance views
|
||||
INSTANCE_ID_ROLE = QtCore.Qt.UserRole + 1
|
||||
SORT_VALUE_ROLE = QtCore.Qt.UserRole + 2
|
||||
|
|
|
|||
|
|
@ -163,7 +163,7 @@ class AssetDocsCache:
|
|||
return copy.deepcopy(self._full_asset_docs_by_name[asset_name])
|
||||
|
||||
|
||||
class PublishReport:
|
||||
class PublishReportMaker:
|
||||
"""Report for single publishing process.
|
||||
|
||||
Report keeps current state of publishing and currently processed plugin.
|
||||
|
|
@ -784,6 +784,13 @@ class PublishValidationErrors:
|
|||
|
||||
# Make sure the cached report is cleared
|
||||
plugin_id = self._plugins_proxy.get_plugin_id(plugin)
|
||||
if not error.title:
|
||||
if hasattr(plugin, "label") and plugin.label:
|
||||
plugin_label = plugin.label
|
||||
else:
|
||||
plugin_label = plugin.__name__
|
||||
error.title = plugin_label
|
||||
|
||||
self._error_items.append(
|
||||
ValidationErrorItem.from_result(plugin_id, error, instance)
|
||||
)
|
||||
|
|
@ -1674,7 +1681,7 @@ class PublisherController(BasePublisherController):
|
|||
# pyblish.api.Context
|
||||
self._publish_context = None
|
||||
# Pyblish report
|
||||
self._publish_report = PublishReport(self)
|
||||
self._publish_report = PublishReportMaker(self)
|
||||
# Store exceptions of validation error
|
||||
self._publish_validation_errors = PublishValidationErrors()
|
||||
|
||||
|
|
|
|||
|
|
@ -211,6 +211,10 @@ class AssetsDialog(QtWidgets.QDialog):
|
|||
layout.addWidget(asset_view, 1)
|
||||
layout.addLayout(btns_layout, 0)
|
||||
|
||||
controller.event_system.add_callback(
|
||||
"controller.reset.finished", self._on_controller_reset
|
||||
)
|
||||
|
||||
asset_view.double_clicked.connect(self._on_ok_clicked)
|
||||
filter_input.textChanged.connect(self._on_filter_change)
|
||||
ok_btn.clicked.connect(self._on_ok_clicked)
|
||||
|
|
@ -245,6 +249,10 @@ class AssetsDialog(QtWidgets.QDialog):
|
|||
new_pos.setY(new_pos.y() - int(self.height() / 2))
|
||||
self.move(new_pos)
|
||||
|
||||
def _on_controller_reset(self):
|
||||
# Change reset enabled so model is reset on show event
|
||||
self._soft_reset_enabled = True
|
||||
|
||||
def showEvent(self, event):
|
||||
"""Refresh asset model on show."""
|
||||
super(AssetsDialog, self).showEvent(event)
|
||||
|
|
|
|||
|
|
@ -9,7 +9,7 @@ Only one item can be selected at a time.
|
|||
```
|
||||
<i> : Icon. Can have Warning icon when context is not right
|
||||
┌──────────────────────┐
|
||||
│ Options │
|
||||
│ Context │
|
||||
│ <Group 1> ────────── │
|
||||
│ <i> <Instance 1> [x]│
|
||||
│ <i> <Instance 2> [x]│
|
||||
|
|
@ -202,7 +202,7 @@ class ConvertorItemsGroupWidget(BaseGroupWidget):
|
|||
class InstanceGroupWidget(BaseGroupWidget):
|
||||
"""Widget wrapping instances under group."""
|
||||
|
||||
active_changed = QtCore.Signal()
|
||||
active_changed = QtCore.Signal(str, str, bool)
|
||||
|
||||
def __init__(self, group_icons, *args, **kwargs):
|
||||
super(InstanceGroupWidget, self).__init__(*args, **kwargs)
|
||||
|
|
@ -253,13 +253,16 @@ class InstanceGroupWidget(BaseGroupWidget):
|
|||
instance, group_icon, self
|
||||
)
|
||||
widget.selected.connect(self._on_widget_selection)
|
||||
widget.active_changed.connect(self.active_changed)
|
||||
widget.active_changed.connect(self._on_active_changed)
|
||||
self._widgets_by_id[instance.id] = widget
|
||||
self._content_layout.insertWidget(widget_idx, widget)
|
||||
widget_idx += 1
|
||||
|
||||
self._update_ordered_item_ids()
|
||||
|
||||
def _on_active_changed(self, instance_id, value):
|
||||
self.active_changed.emit(self.group_name, instance_id, value)
|
||||
|
||||
|
||||
class CardWidget(BaseClickableFrame):
|
||||
"""Clickable card used as bigger button."""
|
||||
|
|
@ -332,7 +335,7 @@ class ContextCardWidget(CardWidget):
|
|||
icon_layout.addWidget(icon_widget)
|
||||
|
||||
layout = QtWidgets.QHBoxLayout(self)
|
||||
layout.setContentsMargins(0, 5, 10, 5)
|
||||
layout.setContentsMargins(0, 2, 10, 2)
|
||||
layout.addLayout(icon_layout, 0)
|
||||
layout.addWidget(label_widget, 1)
|
||||
|
||||
|
|
@ -363,7 +366,7 @@ class ConvertorItemCardWidget(CardWidget):
|
|||
icon_layout.addWidget(icon_widget)
|
||||
|
||||
layout = QtWidgets.QHBoxLayout(self)
|
||||
layout.setContentsMargins(0, 5, 10, 5)
|
||||
layout.setContentsMargins(0, 2, 10, 2)
|
||||
layout.addLayout(icon_layout, 0)
|
||||
layout.addWidget(label_widget, 1)
|
||||
|
||||
|
|
@ -377,7 +380,7 @@ class ConvertorItemCardWidget(CardWidget):
|
|||
class InstanceCardWidget(CardWidget):
|
||||
"""Card widget representing instance."""
|
||||
|
||||
active_changed = QtCore.Signal()
|
||||
active_changed = QtCore.Signal(str, bool)
|
||||
|
||||
def __init__(self, instance, group_icon, parent):
|
||||
super(InstanceCardWidget, self).__init__(parent)
|
||||
|
|
@ -424,7 +427,7 @@ class InstanceCardWidget(CardWidget):
|
|||
top_layout.addWidget(expand_btn, 0)
|
||||
|
||||
layout = QtWidgets.QHBoxLayout(self)
|
||||
layout.setContentsMargins(0, 5, 10, 5)
|
||||
layout.setContentsMargins(0, 2, 10, 2)
|
||||
layout.addLayout(top_layout)
|
||||
layout.addWidget(detail_widget)
|
||||
|
||||
|
|
@ -445,6 +448,10 @@ class InstanceCardWidget(CardWidget):
|
|||
def set_active_toggle_enabled(self, enabled):
|
||||
self._active_checkbox.setEnabled(enabled)
|
||||
|
||||
@property
|
||||
def is_active(self):
|
||||
return self._active_checkbox.isChecked()
|
||||
|
||||
def set_active(self, new_value):
|
||||
"""Set instance as active."""
|
||||
checkbox_value = self._active_checkbox.isChecked()
|
||||
|
|
@ -515,7 +522,7 @@ class InstanceCardWidget(CardWidget):
|
|||
return
|
||||
|
||||
self.instance["active"] = new_value
|
||||
self.active_changed.emit()
|
||||
self.active_changed.emit(self._id, new_value)
|
||||
|
||||
def _on_expend_clicked(self):
|
||||
self._set_expanded()
|
||||
|
|
@ -584,6 +591,45 @@ class InstanceCardView(AbstractInstanceView):
|
|||
result.setWidth(width)
|
||||
return result
|
||||
|
||||
def _toggle_instances(self, value):
|
||||
if not self._active_toggle_enabled:
|
||||
return
|
||||
|
||||
widgets = self._get_selected_widgets()
|
||||
changed = False
|
||||
for widget in widgets:
|
||||
if not isinstance(widget, InstanceCardWidget):
|
||||
continue
|
||||
|
||||
is_active = widget.is_active
|
||||
if value == -1:
|
||||
widget.set_active(not is_active)
|
||||
changed = True
|
||||
continue
|
||||
|
||||
_value = bool(value)
|
||||
if is_active is not _value:
|
||||
widget.set_active(_value)
|
||||
changed = True
|
||||
|
||||
if changed:
|
||||
self.active_changed.emit()
|
||||
|
||||
def keyPressEvent(self, event):
|
||||
if event.key() == QtCore.Qt.Key_Space:
|
||||
self._toggle_instances(-1)
|
||||
return True
|
||||
|
||||
elif event.key() == QtCore.Qt.Key_Backspace:
|
||||
self._toggle_instances(0)
|
||||
return True
|
||||
|
||||
elif event.key() == QtCore.Qt.Key_Return:
|
||||
self._toggle_instances(1)
|
||||
return True
|
||||
|
||||
return super(InstanceCardView, self).keyPressEvent(event)
|
||||
|
||||
def _get_selected_widgets(self):
|
||||
output = []
|
||||
if (
|
||||
|
|
@ -742,7 +788,15 @@ class InstanceCardView(AbstractInstanceView):
|
|||
for widget in self._widgets_by_group.values():
|
||||
widget.update_instance_values()
|
||||
|
||||
def _on_active_changed(self):
|
||||
def _on_active_changed(self, group_name, instance_id, value):
|
||||
group_widget = self._widgets_by_group[group_name]
|
||||
instance_widget = group_widget.get_widget_by_item_id(instance_id)
|
||||
if instance_widget.is_selected:
|
||||
for widget in self._get_selected_widgets():
|
||||
if isinstance(widget, InstanceCardWidget):
|
||||
widget.set_active(value)
|
||||
else:
|
||||
self._select_item_clear(instance_id, group_name, instance_widget)
|
||||
self.active_changed.emit()
|
||||
|
||||
def _on_widget_selection(self, instance_id, group_name, selection_type):
|
||||
|
|
|
|||
|
|
@ -22,6 +22,8 @@ from ..constants import (
|
|||
CREATOR_IDENTIFIER_ROLE,
|
||||
CREATOR_THUMBNAIL_ENABLED_ROLE,
|
||||
CREATOR_SORT_ROLE,
|
||||
INPUTS_LAYOUT_HSPACING,
|
||||
INPUTS_LAYOUT_VSPACING,
|
||||
)
|
||||
|
||||
SEPARATORS = ("---separator---", "---")
|
||||
|
|
@ -198,6 +200,8 @@ class CreateWidget(QtWidgets.QWidget):
|
|||
|
||||
variant_subset_layout = QtWidgets.QFormLayout(variant_subset_widget)
|
||||
variant_subset_layout.setContentsMargins(0, 0, 0, 0)
|
||||
variant_subset_layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING)
|
||||
variant_subset_layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING)
|
||||
variant_subset_layout.addRow("Variant", variant_widget)
|
||||
variant_subset_layout.addRow("Subset", subset_name_input)
|
||||
|
||||
|
|
@ -282,6 +286,9 @@ class CreateWidget(QtWidgets.QWidget):
|
|||
thumbnail_widget.thumbnail_created.connect(self._on_thumbnail_create)
|
||||
thumbnail_widget.thumbnail_cleared.connect(self._on_thumbnail_clear)
|
||||
|
||||
controller.event_system.add_callback(
|
||||
"main.window.closed", self._on_main_window_close
|
||||
)
|
||||
controller.event_system.add_callback(
|
||||
"plugins.refresh.finished", self._on_plugins_refresh
|
||||
)
|
||||
|
|
@ -316,6 +323,10 @@ class CreateWidget(QtWidgets.QWidget):
|
|||
self._first_show = True
|
||||
self._last_thumbnail_path = None
|
||||
|
||||
self._last_current_context_asset = None
|
||||
self._last_current_context_task = None
|
||||
self._use_current_context = True
|
||||
|
||||
@property
|
||||
def current_asset_name(self):
|
||||
return self._controller.current_asset_name
|
||||
|
|
@ -356,12 +367,39 @@ class CreateWidget(QtWidgets.QWidget):
|
|||
if check_prereq:
|
||||
self._invalidate_prereq()
|
||||
|
||||
def _on_main_window_close(self):
|
||||
"""Publisher window was closed."""
|
||||
|
||||
# Use current context on next refresh
|
||||
self._use_current_context = True
|
||||
|
||||
def refresh(self):
|
||||
current_asset_name = self._controller.current_asset_name
|
||||
current_task_name = self._controller.current_task_name
|
||||
|
||||
# Get context before refresh to keep selection of asset and
|
||||
# task widgets
|
||||
asset_name = self._get_asset_name()
|
||||
task_name = self._get_task_name()
|
||||
|
||||
# Replace by current context if last loaded context was
|
||||
# 'current context' before reset
|
||||
if (
|
||||
self._use_current_context
|
||||
or (
|
||||
self._last_current_context_asset
|
||||
and asset_name == self._last_current_context_asset
|
||||
and task_name == self._last_current_context_task
|
||||
)
|
||||
):
|
||||
asset_name = current_asset_name
|
||||
task_name = current_task_name
|
||||
|
||||
# Store values for future refresh
|
||||
self._last_current_context_asset = current_asset_name
|
||||
self._last_current_context_task = current_task_name
|
||||
self._use_current_context = False
|
||||
|
||||
self._prereq_available = False
|
||||
|
||||
# Disable context widget so refresh of asset will use context asset
|
||||
|
|
@ -398,7 +436,10 @@ class CreateWidget(QtWidgets.QWidget):
|
|||
prereq_available = False
|
||||
creator_btn_tooltips.append("Creator is not selected")
|
||||
|
||||
if self._context_change_is_enabled() and self._asset_name is None:
|
||||
if (
|
||||
self._context_change_is_enabled()
|
||||
and self._get_asset_name() is None
|
||||
):
|
||||
# QUESTION how to handle invalid asset?
|
||||
prereq_available = False
|
||||
creator_btn_tooltips.append("Context is not selected")
|
||||
|
|
|
|||
|
|
@ -11,7 +11,7 @@ selection can be enabled disabled using checkbox or keyboard key presses:
|
|||
- Backspace - disable selection
|
||||
|
||||
```
|
||||
|- Options
|
||||
|- Context
|
||||
|- <Group 1> [x]
|
||||
| |- <Instance 1> [x]
|
||||
| |- <Instance 2> [x]
|
||||
|
|
@ -486,6 +486,9 @@ class InstanceListView(AbstractInstanceView):
|
|||
group_widget.set_expanded(expanded)
|
||||
|
||||
def _on_toggle_request(self, toggle):
|
||||
if not self._active_toggle_enabled:
|
||||
return
|
||||
|
||||
selected_instance_ids = self._instance_view.get_selected_instance_ids()
|
||||
if toggle == -1:
|
||||
active = None
|
||||
|
|
@ -1039,7 +1042,8 @@ class InstanceListView(AbstractInstanceView):
|
|||
proxy_index = proxy_model.mapFromSource(select_indexes[0])
|
||||
selection_model.setCurrentIndex(
|
||||
proxy_index,
|
||||
selection_model.ClearAndSelect | selection_model.Rows
|
||||
QtCore.QItemSelectionModel.ClearAndSelect
|
||||
| QtCore.QItemSelectionModel.Rows
|
||||
)
|
||||
return
|
||||
|
||||
|
|
|
|||
|
|
@ -2,6 +2,8 @@ from qtpy import QtWidgets, QtCore
|
|||
|
||||
from openpype.tools.attribute_defs import create_widget_for_attr_def
|
||||
|
||||
from ..constants import INPUTS_LAYOUT_HSPACING, INPUTS_LAYOUT_VSPACING
|
||||
|
||||
|
||||
class PreCreateWidget(QtWidgets.QWidget):
|
||||
def __init__(self, parent):
|
||||
|
|
@ -81,6 +83,8 @@ class AttributesWidget(QtWidgets.QWidget):
|
|||
|
||||
layout = QtWidgets.QGridLayout(self)
|
||||
layout.setContentsMargins(0, 0, 0, 0)
|
||||
layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING)
|
||||
layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING)
|
||||
|
||||
self._layout = layout
|
||||
|
||||
|
|
@ -117,8 +121,16 @@ class AttributesWidget(QtWidgets.QWidget):
|
|||
|
||||
col_num = 2 - expand_cols
|
||||
|
||||
if attr_def.label:
|
||||
if attr_def.is_value_def and attr_def.label:
|
||||
label_widget = QtWidgets.QLabel(attr_def.label, self)
|
||||
tooltip = attr_def.tooltip
|
||||
if tooltip:
|
||||
label_widget.setToolTip(tooltip)
|
||||
if attr_def.is_label_horizontal:
|
||||
label_widget.setAlignment(
|
||||
QtCore.Qt.AlignRight
|
||||
| QtCore.Qt.AlignVCenter
|
||||
)
|
||||
self._layout.addWidget(
|
||||
label_widget, row, 0, 1, expand_cols
|
||||
)
|
||||
|
|
|
|||
|
|
@ -9,7 +9,7 @@ import collections
|
|||
from qtpy import QtWidgets, QtCore, QtGui
|
||||
import qtawesome
|
||||
|
||||
from openpype.lib.attribute_definitions import UnknownDef, UIDef
|
||||
from openpype.lib.attribute_definitions import UnknownDef
|
||||
from openpype.tools.attribute_defs import create_widget_for_attr_def
|
||||
from openpype.tools import resources
|
||||
from openpype.tools.flickcharm import FlickCharm
|
||||
|
|
@ -36,6 +36,8 @@ from .icons import (
|
|||
from ..constants import (
|
||||
VARIANT_TOOLTIP,
|
||||
ResetKeySequence,
|
||||
INPUTS_LAYOUT_HSPACING,
|
||||
INPUTS_LAYOUT_VSPACING,
|
||||
)
|
||||
|
||||
|
||||
|
|
@ -1098,6 +1100,8 @@ class GlobalAttrsWidget(QtWidgets.QWidget):
|
|||
btns_layout.addWidget(cancel_btn)
|
||||
|
||||
main_layout = QtWidgets.QFormLayout(self)
|
||||
main_layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING)
|
||||
main_layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING)
|
||||
main_layout.addRow("Variant", variant_input)
|
||||
main_layout.addRow("Asset", asset_value_widget)
|
||||
main_layout.addRow("Task", task_value_widget)
|
||||
|
|
@ -1346,6 +1350,8 @@ class CreatorAttrsWidget(QtWidgets.QWidget):
|
|||
content_layout.setColumnStretch(0, 0)
|
||||
content_layout.setColumnStretch(1, 1)
|
||||
content_layout.setAlignment(QtCore.Qt.AlignTop)
|
||||
content_layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING)
|
||||
content_layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING)
|
||||
|
||||
row = 0
|
||||
for attr_def, attr_instances, values in result:
|
||||
|
|
@ -1371,9 +1377,19 @@ class CreatorAttrsWidget(QtWidgets.QWidget):
|
|||
|
||||
col_num = 2 - expand_cols
|
||||
|
||||
label = attr_def.label or attr_def.key
|
||||
label = None
|
||||
if attr_def.is_value_def:
|
||||
label = attr_def.label or attr_def.key
|
||||
if label:
|
||||
label_widget = QtWidgets.QLabel(label, self)
|
||||
tooltip = attr_def.tooltip
|
||||
if tooltip:
|
||||
label_widget.setToolTip(tooltip)
|
||||
if attr_def.is_label_horizontal:
|
||||
label_widget.setAlignment(
|
||||
QtCore.Qt.AlignRight
|
||||
| QtCore.Qt.AlignVCenter
|
||||
)
|
||||
content_layout.addWidget(
|
||||
label_widget, row, 0, 1, expand_cols
|
||||
)
|
||||
|
|
@ -1474,6 +1490,8 @@ class PublishPluginAttrsWidget(QtWidgets.QWidget):
|
|||
attr_def_layout = QtWidgets.QGridLayout(attr_def_widget)
|
||||
attr_def_layout.setColumnStretch(0, 0)
|
||||
attr_def_layout.setColumnStretch(1, 1)
|
||||
attr_def_layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING)
|
||||
attr_def_layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING)
|
||||
|
||||
content_layout = QtWidgets.QVBoxLayout(content_widget)
|
||||
content_layout.addWidget(attr_def_widget, 0)
|
||||
|
|
@ -1501,12 +1519,19 @@ class PublishPluginAttrsWidget(QtWidgets.QWidget):
|
|||
expand_cols = 1
|
||||
|
||||
col_num = 2 - expand_cols
|
||||
label = attr_def.label or attr_def.key
|
||||
label = None
|
||||
if attr_def.is_value_def:
|
||||
label = attr_def.label or attr_def.key
|
||||
if label:
|
||||
label_widget = QtWidgets.QLabel(label, content_widget)
|
||||
tooltip = attr_def.tooltip
|
||||
if tooltip:
|
||||
label_widget.setToolTip(tooltip)
|
||||
if attr_def.is_label_horizontal:
|
||||
label_widget.setAlignment(
|
||||
QtCore.Qt.AlignRight
|
||||
| QtCore.Qt.AlignVCenter
|
||||
)
|
||||
attr_def_layout.addWidget(
|
||||
label_widget, row, 0, 1, expand_cols
|
||||
)
|
||||
|
|
@ -1517,7 +1542,7 @@ class PublishPluginAttrsWidget(QtWidgets.QWidget):
|
|||
)
|
||||
row += 1
|
||||
|
||||
if isinstance(attr_def, UIDef):
|
||||
if not attr_def.is_value_def:
|
||||
continue
|
||||
|
||||
widget.value_changed.connect(self._input_value_changed)
|
||||
|
|
|
|||
|
|
@ -46,6 +46,8 @@ class PublisherWindow(QtWidgets.QDialog):
|
|||
def __init__(self, parent=None, controller=None, reset_on_show=None):
|
||||
super(PublisherWindow, self).__init__(parent)
|
||||
|
||||
self.setObjectName("PublishWindow")
|
||||
|
||||
self.setWindowTitle("OpenPype publisher")
|
||||
|
||||
icon = QtGui.QIcon(resources.get_openpype_icon_filepath())
|
||||
|
|
@ -284,6 +286,9 @@ class PublisherWindow(QtWidgets.QDialog):
|
|||
controller.event_system.add_callback(
|
||||
"publish.has_validated.changed", self._on_publish_validated_change
|
||||
)
|
||||
controller.event_system.add_callback(
|
||||
"publish.finished.changed", self._on_publish_finished_change
|
||||
)
|
||||
controller.event_system.add_callback(
|
||||
"publish.process.stopped", self._on_publish_stop
|
||||
)
|
||||
|
|
@ -400,8 +405,12 @@ class PublisherWindow(QtWidgets.QDialog):
|
|||
# TODO capture changes and ask user if wants to save changes on close
|
||||
if not self._controller.host_context_has_changed:
|
||||
self._save_changes(False)
|
||||
self._comment_input.setText("") # clear comment
|
||||
self._reset_on_show = True
|
||||
self._controller.clear_thumbnail_temp_dir_path()
|
||||
# Trigger custom event that should be captured only in UI
|
||||
# - backend (controller) must not be dependent on this event topic!!!
|
||||
self._controller.event_system.emit("main.window.closed", {}, "window")
|
||||
super(PublisherWindow, self).closeEvent(event)
|
||||
|
||||
def leaveEvent(self, event):
|
||||
|
|
@ -433,15 +442,24 @@ class PublisherWindow(QtWidgets.QDialog):
|
|||
event.accept()
|
||||
return
|
||||
|
||||
if event.matches(QtGui.QKeySequence.Save):
|
||||
save_match = event.matches(QtGui.QKeySequence.Save)
|
||||
if save_match == QtGui.QKeySequence.ExactMatch:
|
||||
if not self._controller.publish_has_started:
|
||||
self._save_changes(True)
|
||||
event.accept()
|
||||
return
|
||||
|
||||
if ResetKeySequence.matches(
|
||||
QtGui.QKeySequence(event.key() | event.modifiers())
|
||||
):
|
||||
# PySide6 Support
|
||||
if hasattr(event, "keyCombination"):
|
||||
reset_match_result = ResetKeySequence.matches(
|
||||
QtGui.QKeySequence(event.keyCombination())
|
||||
)
|
||||
else:
|
||||
reset_match_result = ResetKeySequence.matches(
|
||||
QtGui.QKeySequence(event.modifiers() | event.key())
|
||||
)
|
||||
|
||||
if reset_match_result == QtGui.QKeySequence.ExactMatch:
|
||||
if not self.controller.publish_is_running:
|
||||
self.reset()
|
||||
event.accept()
|
||||
|
|
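The `keyPressEvent` change above builds the pressed `QKeySequence` from `event.keyCombination()` when that accessor exists (PySide6/Qt6) and falls back to OR-ing `modifiers()` and `key()` on Qt5 bindings. A hedged helper capturing the same branch, written against `qtpy` like the rest of the tool (`ResetKeySequence` in the usage comment is the constant already imported in the file above):

```python
# Sketch: build a QKeySequence for the pressed keys on both Qt5 and Qt6.
from qtpy import QtGui


def pressed_sequence(event):
    """Return the QKeySequence for a QKeyEvent on Qt5 and Qt6 bindings."""
    if hasattr(event, "keyCombination"):
        # PySide6 / Qt6: QKeyCombination carries key and modifiers together
        return QtGui.QKeySequence(event.keyCombination())
    # Qt5 bindings: combine the integer enums manually
    return QtGui.QKeySequence(event.modifiers() | event.key())


# Usage inside a widget subclass:
# def keyPressEvent(self, event):
#     match = ResetKeySequence.matches(pressed_sequence(event))
#     if match == QtGui.QKeySequence.ExactMatch:
#         self.reset()
```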
@ -777,6 +795,11 @@ class PublisherWindow(QtWidgets.QDialog):
|
|||
if event["value"]:
|
||||
self._validate_btn.setEnabled(False)
|
||||
|
||||
def _on_publish_finished_change(self, event):
|
||||
if event["value"]:
|
||||
# Successful publish, remove comment from UI
|
||||
self._comment_input.setText("")
|
||||
|
||||
def _on_publish_stop(self):
|
||||
self._set_publish_overlay_visibility(False)
|
||||
self._reset_btn.setEnabled(True)
|
||||
|
|
|
|||
|
|
@ -199,90 +199,103 @@ class InventoryModel(TreeModel):
|
|||
"""Refresh the model"""
|
||||
|
||||
host = registered_host()
|
||||
if not items: # for debugging or testing, injecting items from outside
|
||||
# for debugging or testing, injecting items from outside
|
||||
if items is None:
|
||||
if isinstance(host, ILoadHost):
|
||||
items = host.get_containers()
|
||||
else:
|
||||
elif hasattr(host, "ls"):
|
||||
items = host.ls()
|
||||
else:
|
||||
items = []
|
||||
|
||||
self.clear()
|
||||
|
||||
if self._hierarchy_view and selected:
|
||||
if not hasattr(host.pipeline, "update_hierarchy"):
|
||||
# If host doesn't support hierarchical containers, then
|
||||
# cherry-pick only.
|
||||
self.add_items((item for item in items
|
||||
if item["objectName"] in selected))
|
||||
return
|
||||
|
||||
# Update hierarchy info for all containers
|
||||
items_by_name = {item["objectName"]: item
|
||||
for item in host.pipeline.update_hierarchy(items)}
|
||||
|
||||
selected_items = set()
|
||||
|
||||
def walk_children(names):
|
||||
"""Select containers and extend to chlid containers"""
|
||||
for name in [n for n in names if n not in selected_items]:
|
||||
selected_items.add(name)
|
||||
item = items_by_name[name]
|
||||
yield item
|
||||
|
||||
for child in walk_children(item["children"]):
|
||||
yield child
|
||||
|
||||
items = list(walk_children(selected)) # Cherry-picked and extended
|
||||
|
||||
# Cut unselected upstream containers
|
||||
for item in items:
|
||||
if not item.get("parent") in selected_items:
|
||||
# Parent not in selection, this is root item.
|
||||
item["parent"] = None
|
||||
|
||||
parents = [self._root_item]
|
||||
|
||||
# The length of `items` array is the maximum depth that a
|
||||
# hierarchy could be.
|
||||
# Take this as an easiest way to prevent looping forever.
|
||||
maximum_loop = len(items)
|
||||
count = 0
|
||||
while items:
|
||||
if count > maximum_loop:
|
||||
self.log.warning("Maximum loop count reached, possible "
|
||||
"missing parent node.")
|
||||
break
|
||||
|
||||
_parents = list()
|
||||
for parent in parents:
|
||||
_unparented = list()
|
||||
|
||||
def _children():
|
||||
"""Child item provider"""
|
||||
for item in items:
|
||||
if item.get("parent") == parent.get("objectName"):
|
||||
# (NOTE)
|
||||
# Since `self._root_node` has no "objectName"
|
||||
# entry, it will be paired with root item if
|
||||
# the value of key "parent" is None, or not
|
||||
# having the key.
|
||||
yield item
|
||||
else:
|
||||
# Not current parent's child, try next
|
||||
_unparented.append(item)
|
||||
|
||||
self.add_items(_children(), parent)
|
||||
|
||||
items[:] = _unparented
|
||||
|
||||
# Parents of next level
|
||||
for group_node in parent.children():
|
||||
_parents += group_node.children()
|
||||
|
||||
parents[:] = _parents
|
||||
count += 1
|
||||
|
||||
else:
|
||||
if not selected or not self._hierarchy_view:
|
||||
self.add_items(items)
|
||||
return
|
||||
|
||||
if (
|
||||
not hasattr(host, "pipeline")
|
||||
or not hasattr(host.pipeline, "update_hierarchy")
|
||||
):
|
||||
# If host doesn't support hierarchical containers, then
|
||||
# cherry-pick only.
|
||||
self.add_items((
|
||||
item
|
||||
for item in items
|
||||
if item["objectName"] in selected
|
||||
))
|
||||
return
|
||||
|
||||
# TODO find out what this part does. Function 'update_hierarchy' is
|
||||
# available only in 'blender' at this moment.
|
||||
|
||||
# Update hierarchy info for all containers
|
||||
items_by_name = {
|
||||
item["objectName"]: item
|
||||
for item in host.pipeline.update_hierarchy(items)
|
||||
}
|
||||
|
||||
selected_items = set()
|
||||
|
||||
def walk_children(names):
|
||||
"""Select containers and extend to chlid containers"""
|
||||
for name in [n for n in names if n not in selected_items]:
|
||||
selected_items.add(name)
|
||||
item = items_by_name[name]
|
||||
yield item
|
||||
|
||||
for child in walk_children(item["children"]):
|
||||
yield child
|
||||
|
||||
items = list(walk_children(selected)) # Cherry-picked and extended
|
||||
|
||||
# Cut unselected upstream containers
|
||||
for item in items:
|
||||
if not item.get("parent") in selected_items:
|
||||
# Parent not in selection, this is root item.
|
||||
item["parent"] = None
|
||||
|
||||
parents = [self._root_item]
|
||||
|
||||
# The length of `items` array is the maximum depth that a
|
||||
# hierarchy could be.
|
||||
# Take this as an easiest way to prevent looping forever.
|
||||
maximum_loop = len(items)
|
||||
count = 0
|
||||
while items:
|
||||
if count > maximum_loop:
|
||||
self.log.warning("Maximum loop count reached, possible "
|
||||
"missing parent node.")
|
||||
break
|
||||
|
||||
_parents = list()
|
||||
for parent in parents:
|
||||
_unparented = list()
|
||||
|
||||
def _children():
|
||||
"""Child item provider"""
|
||||
for item in items:
|
||||
if item.get("parent") == parent.get("objectName"):
|
||||
# (NOTE)
|
||||
# Since `self._root_node` has no "objectName"
|
||||
# entry, it will be paired with root item if
|
||||
# the value of key "parent" is None, or not
|
||||
# having the key.
|
||||
yield item
|
||||
else:
|
||||
# Not current parent's child, try next
|
||||
_unparented.append(item)
|
||||
|
||||
self.add_items(_children(), parent)
|
||||
|
||||
items[:] = _unparented
|
||||
|
||||
# Parents of next level
|
||||
for group_node in parent.children():
|
||||
_parents += group_node.children()
|
||||
|
||||
parents[:] = _parents
|
||||
count += 1
|
||||
|
||||
def add_items(self, items, parent=None):
|
||||
"""Add the items to the model.
|
||||
|
|
|
|||
|
|
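The refactored `refresh()` above also introduces an explicit fallback chain for listing scene containers. A standalone sketch of just that chain (imports assume the OpenPype 3.15 module layout used in this diff):

```python
from openpype.host import ILoadHost
from openpype.pipeline import registered_host


def list_containers():
    """Return scene containers from the current host.

    Prefer the ILoadHost interface, fall back to the legacy module-level
    'ls()' function, and return an empty list when neither is available.
    """
    host = registered_host()
    if isinstance(host, ILoadHost):
        return list(host.get_containers())
    if hasattr(host, "ls"):
        return list(host.ls())
    return []
```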
@@ -107,8 +107,8 @@ class SceneInventoryWindow(QtWidgets.QDialog):
        view.hierarchy_view_changed.connect(
            self._on_hierarchy_view_change
        )
        view.data_changed.connect(self.refresh)
        refresh_button.clicked.connect(self.refresh)
        view.data_changed.connect(self._on_refresh_request)
        refresh_button.clicked.connect(self._on_refresh_request)
        update_all_button.clicked.connect(self._on_update_all)

        self._update_all_button = update_all_button
@@ -139,6 +139,11 @@ class SceneInventoryWindow(QtWidgets.QDialog):

        """

    def _on_refresh_request(self):
        """Signal callback to trigger 'refresh' without any arguments."""

        self.refresh()

    def refresh(self, items=None):
        with preserve_expanded_rows(
            tree_view=self._view,
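The new `_on_refresh_request` slot most likely exists so that signal payloads cannot leak into `refresh(items=None)` - `QPushButton.clicked`, for example, emits a `checked` bool that a direct connection would pass as `items`. A small self-contained sketch of that pitfall (class and widget names are illustrative):

```python
from qtpy import QtWidgets


class Window(QtWidgets.QWidget):
    def __init__(self):
        super(Window, self).__init__()
        button = QtWidgets.QPushButton("Refresh", self)
        # Connecting 'clicked' straight to 'refresh' would call
        # refresh(False), because the signal's 'checked' bool lands in
        # the 'items' parameter. The intermediate slot drops it.
        button.clicked.connect(self._on_refresh_request)

    def _on_refresh_request(self):
        self.refresh()

    def refresh(self, items=None):
        print("refresh called with items={}".format(items))


if __name__ == "__main__":
    app = QtWidgets.QApplication([])
    window = Window()
    # Simulate a button press; prints "refresh called with items=None".
    window.findChild(QtWidgets.QPushButton).click()
```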
@@ -1,6 +1,7 @@
from .widgets import (
    FocusSpinBox,
    FocusDoubleSpinBox,
    ComboBox,
    CustomTextComboBox,
    PlaceholderLineEdit,
    BaseClickableFrame,

@@ -38,6 +39,7 @@ from .overlay_messages import (
__all__ = (
    "FocusSpinBox",
    "FocusDoubleSpinBox",
    "ComboBox",
    "CustomTextComboBox",
    "PlaceholderLineEdit",
    "BaseClickableFrame",
@@ -41,7 +41,28 @@ class FocusDoubleSpinBox(QtWidgets.QDoubleSpinBox):
        super(FocusDoubleSpinBox, self).wheelEvent(event)


class CustomTextComboBox(QtWidgets.QComboBox):
class ComboBox(QtWidgets.QComboBox):
    """Base combobox with pre-implemented changes used in tools.

    The combobox uses a styled delegate by default so stylesheets are propagated.

    Items are not changed on scroll until the combobox is in focus.
    """

    def __init__(self, *args, **kwargs):
        super(ComboBox, self).__init__(*args, **kwargs)
        delegate = QtWidgets.QStyledItemDelegate()
        self.setItemDelegate(delegate)
        self.setFocusPolicy(QtCore.Qt.StrongFocus)

        self._delegate = delegate

    def wheelEvent(self, event):
        if self.hasFocus():
            return super(ComboBox, self).wheelEvent(event)


class CustomTextComboBox(ComboBox):
    """Combobox which can have different text shown."""

    def __init__(self, *args, **kwargs):
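The new `ComboBox` base bundles two common Qt workarounds: without a `QStyledItemDelegate` the popup list of a `QComboBox` ignores most stylesheet rules, and without a focus guard a stray mouse wheel over an unfocused form silently changes the value. A standalone sketch of the same idea under a hypothetical name:

```python
from qtpy import QtCore, QtWidgets


class ScrollSafeComboBox(QtWidgets.QComboBox):
    """Combobox that styles its popup and ignores stray wheel scrolls."""

    def __init__(self, parent=None):
        super(ScrollSafeComboBox, self).__init__(parent)
        # Styled delegate -> 'QComboBox QAbstractItemView::item' rules apply.
        self.setItemDelegate(QtWidgets.QStyledItemDelegate(self))
        # Only react to the wheel once the widget was explicitly focused.
        self.setFocusPolicy(QtCore.Qt.StrongFocus)

    def wheelEvent(self, event):
        if self.hasFocus():
            super(ScrollSafeComboBox, self).wheelEvent(event)
        else:
            event.ignore()
```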
@@ -253,6 +274,9 @@ class PixmapLabel(QtWidgets.QLabel):
        self._empty_pixmap = QtGui.QPixmap(0, 0)
        self._source_pixmap = pixmap

        self._last_width = 0
        self._last_height = 0

    def set_source_pixmap(self, pixmap):
        """Change source image."""
        self._source_pixmap = pixmap
@@ -263,6 +287,12 @@ class PixmapLabel(QtWidgets.QLabel):
        size += size % 2
        return size, size

    def minimumSizeHint(self):
        width, height = self._get_pix_size()
        if width != self._last_width or height != self._last_height:
            self._set_resized_pix()
        return QtCore.QSize(width, height)

    def _set_resized_pix(self):
        if self._source_pixmap is None:
            self.setPixmap(self._empty_pixmap)
@@ -276,6 +306,8 @@ class PixmapLabel(QtWidgets.QLabel):
                QtCore.Qt.SmoothTransformation
            )
        )
        self._last_width = width
        self._last_height = height

    def resizeEvent(self, event):
        self._set_resized_pix()
@@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
__version__ = "3.15.4"
__version__ = "3.15.6-nightly.1"
@@ -1,6 +1,6 @@
[tool.poetry]
name = "OpenPype"
version = "3.15.4" # OpenPype
version = "3.15.5" # OpenPype
description = "Open VFX and Animation pipeline with support."
authors = ["OpenPype Team <info@openpype.io>"]
license = "MIT License"
@@ -180,5 +180,23 @@ class TestValidateSequenceFrames(BaseTest):
            plugin.process(instance)
        assert ("Missing frames: [1002]" in str(excinfo.value))

    def test_validate_sequence_frames_slate(self, instance, plugin):
        representations = [
            {
                "ext": "exr",
                "files": [
                    "Main_beauty.1000.exr",
                    "Main_beauty.1001.exr",
                    "Main_beauty.1002.exr",
                    "Main_beauty.1003.exr"
                ]
            }
        ]
        instance.data["slate"] = True
        instance.data["representations"] = representations
        instance.data["frameEnd"] = 1003

        plugin.process(instance)


test_case = TestValidateSequenceFrames()
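The new slate test relies on the convention that a slate adds one extra frame in front of the content range, so a slated sequence may contain one more file than `frameEnd - frameStart + 1`. A small sketch of that arithmetic (the helper name and the single-frame-slate assumption are mine, not taken from the plugin):

```python
def allowed_file_count(frame_start, frame_end, has_slate):
    """Number of files a frame-range validator should accept."""
    count = frame_end - frame_start + 1
    # A slate is one extra frame prepended before the first content frame.
    return count + 1 if has_slate else count


# E.g. a 1001-1003 range plus one slate frame gives 4 files, which would
# match the four files listed in the test above if its frameStart is 1001.
assert allowed_file_count(1001, 1003, has_slate=True) == 4
```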
@@ -238,12 +238,12 @@ For resolution and frame range, use **OpenPype → Set Frame Range** and

Creating and publishing rigs with OpenPype follows a similar workflow as with
other data types. Create your rig and mark parts of your hierarchy in sets to
help OpenPype validators and extractors to check it and publish it.
help OpenPype validators and extractors to check and publish it.

### Preparing rig for publish

When creating rigs, it is recommended (and it is in fact enforced by validators)
to separate bones or driving objects, their controllers and geometry so they are
to separate bones or driven objects, their controllers and geometry so they are
easily managed. Currently OpenPype doesn't allow publishing a model at the same time as
its rig, so for demonstration purposes I'll first create a simple model for a robotic
arm, just made out of simple boxes and I'll publish it.
@@ -252,41 +252,48 @@ arm, just made out of simple boxes and I'll publish it.

For more information about publishing models, see [Publishing models](artist_hosts_maya.md#publishing-models).

Now lets start with empty scene. Load your model - **OpenPype → Load...**, right
Now let's start with an empty scene. Load your model - **OpenPype → Load...**, right
click on it and select **Reference (abc)**.

I've created few bones and their controllers in two separate
groups - rig_GRP and controls_GRP. Naming is not important - just adhere to
your naming conventions.
I've created a few bones in rig_GRP, their controllers in controls_GRP and
placed the rig's output geometry in geometry_GRP. Naming of the groups is not important - just adhere to
your naming conventions. Then I parented everything into a single top group named arm_rig.

Then I've put everything into arm_rig group.

When you've prepared your hierarchy, it's time to create *Rig instance* in OpenPype.
Select your whole rig hierarchy and go **OpenPype → Create...**. Select **Rig**.
Set is created in your scene to mark rig parts for export. Notice that it has
two subsets - controls_SET and out_SET. Put your controls into controls_SET
With the prepared hierarchy it is time to create a *Rig instance* in OpenPype.
Select the top group of your rig and go to **OpenPype → Create...**. Select **Rig**.
A publish set for your rig is created in your scene to mark rig parts for export.
Notice that it has two subsets - controls_SET and out_SET. Put your controls into controls_SET
and geometry to out_SET. You should end up with something like this:



:::note controls_SET and out_SET contents
It is totally allowed to put the geometry_GRP in the out_SET as opposed to
the individual meshes - it's even **recommended**. However, the controls_SET
requires the individual controls in it that the artist is supposed to animate
and manipulate, so the publish validators can accurately check the rig's
controls.
:::
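The Rig creator builds `controls_SET` and `out_SET` for you; the sketch below only illustrates the membership described above with `maya.cmds`, using placeholder node names (the controls and the geometry group must already exist in the scene):

```python
from maya import cmds

# Placeholder names - replace with the nodes from your own rig hierarchy.
control_curves = ["arm_CTL", "elbow_CTL", "wrist_CTL"]
geometry_group = "geometry_GRP"

# Individual animation controls go into controls_SET ...
cmds.sets(control_curves, name="controls_SET")
# ... while out_SET can (and preferably should) contain just the top
# geometry group instead of every single mesh.
cmds.sets([geometry_group], name="out_SET")
```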
### Publishing rigs

Publishing rig is done in same way as publishing everything else. Save your scene
and go **OpenPype → Publish**. When you run validation you'll mostly run at first into
few issues. Although number of them will seem to be intimidating at first, you'll
find out they are mostly minor things easily fixed.
Publishing rigs is done in the same way as publishing everything else. Save your scene
and go to **OpenPype → Publish**. When you run validation you'll most likely run into
a few issues at first. Although the number of them may seem intimidating, you
will find out they are mostly minor things, easily fixed, that are there to optimize
your rig for consistency and safe usage by artists.

* **Non Duplicate Instance Members (ID)** - This will most likely fail because when
- **Non Duplicate Instance Members (ID)** - This will most likely fail because when
creating rigs we usually duplicate a few parts to reuse them. Duplication also copies
the ID of the original object, and OpenPype needs every object to have a unique ID.
This is easily fixed by the **Repair** action next to the validator name: click
the little arrow on the right side of the validator name and select **Repair** from the menu.

* **Joints Hidden** - This is enforcing joints (bones) to be hidden for user as
- **Joints Hidden** - This enforces that joints (bones) are hidden from the user, as
animators usually don't need to see them and they only clutter the viewport. A
well-behaved rig should have them hidden. The **Repair** action helps here as well.

* **Rig Controllers** will check if there are no transforms on unlocked attributes
- **Rig Controllers** will check that there are no transforms on unlocked attributes
of controllers. This is needed because the animator should have an easy way to reset the rig
to its default position. It also checks that those attributes don't have any
incoming connections from other parts of the scene to ensure that the published rig doesn't
@@ -297,6 +304,19 @@ have any missing dependencies.
You can load a rig with the [Loader](artist_tools_loader). Go to **OpenPype → Load...**,
select your rig, right-click on it and **Reference** it.

### Animation instances

Whenever you load a rig, an animation publish instance is automatically created
for it. This means that if you load a rig you don't need to create a pointcache
instance yourself to publish the geometry. This is all cleanly prepared for you
when loading a published rig.

:::tip Missing animation instance for your loaded rig?
Did you accidentally delete the animation instance for a loaded rig? You can
recreate it using the [**Recreate rig animation instance**](artist_hosts_maya.md#recreate-rig-animation-instance)
inventory action.
:::

## Point caches
OpenPype uses the Alembic format for point caches. The workflow is very similar to
other data types.
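For context, point cache geometry is written out through Maya's AbcExport command during publish; a rough, hypothetical manual equivalent (the frame range, root path and output file are placeholders, and the AbcExport plugin must be available):

```python
from maya import cmds

# Make sure the Alembic export plugin is loaded before calling AbcExport.
cmds.loadPlugin("AbcExport", quiet=True)
cmds.AbcExport(
    j=(
        "-frameRange 1001 1100 -uvWrite -worldSpace "
        "-root |arm_rig|geometry_GRP "
        "-file /tmp/pointcache.abc"
    )
)
```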
@@ -646,3 +666,15 @@ Select 1 container of type `animation` or `pointcache`, then 1+ container of any
The action searches the selected containers for 1 animation container of type `animation` or `pointcache`. This animation container will be connected to the rest of the selected containers. Matching geometries between containers is done by comparing the attribute `cbId`.

The connection between geometries is done with a live blendshape.

### Recreate rig animation instance

This action can regenerate an animation instance for a loaded rig, for example
when it was accidentally deleted by the user.



#### Usage

Select 1 or more containers of type `rig` for which you want to recreate the
animation instance.
Binary file not shown.
After Width: | Height: | Size: 46 KiB