Merge branch 'develop' into enhancement/houdini_review_opengl_extractor_not_optional

This commit is contained in:
Ondřej Samohel 2023-05-03 11:54:31 +02:00 committed by GitHub
commit 1fc0985186
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
93 changed files with 3434 additions and 1033 deletions

View file

@ -35,6 +35,10 @@ body:
label: Version
description: What version are you running? Look to OpenPype Tray
options:
- 3.15.5
- 3.15.5-nightly.2
- 3.15.5-nightly.1
- 3.15.4
- 3.15.4-nightly.3
- 3.15.4-nightly.2
- 3.15.4-nightly.1
@ -131,10 +135,6 @@ body:
- 3.13.1-nightly.2
- 3.13.1-nightly.1
- 3.13.0
- 3.13.0-nightly.1
- 3.12.3-nightly.3
- 3.12.3-nightly.2
- 3.12.3-nightly.1
validations:
required: true
- type: dropdown
@ -166,8 +166,8 @@ body:
label: Are there any labels you wish to add?
description: Please search labels and identify those related to your bug.
options:
- label: I have added the relevant labels to the bug report.
required: true
- label: I have added the relevant labels to the bug report.
required: true
- type: textarea
id: logs
attributes:

View file

@ -25,5 +25,5 @@ jobs:
- name: Invoke pre-release workflow
uses: benc-uk/workflow-dispatch@v1
with:
workflow: Nightly Prerelease
workflow: prerelease.yml
token: ${{ secrets.YNPUT_BOT_TOKEN }}

View file

@ -65,3 +65,9 @@ jobs:
source_ref: 'main'
target_branch: 'develop'
commit_message_template: '[Automated] Merged {source_ref} into {target_branch}'
- name: Invoke Update bug report workflow
uses: benc-uk/workflow-dispatch@v1
with:
workflow: update_bug_report.yml
token: ${{ secrets.YNPUT_BOT_TOKEN }}

View file

@ -18,6 +18,8 @@ jobs:
uses: ynput/gha-populate-form-version@main
with:
github_token: ${{ secrets.YNPUT_BOT_TOKEN }}
github_user: ${{ secrets.CI_USER }}
github_email: ${{ secrets.CI_EMAIL }}
registry: github
dropdown: _version
limit_to: 100

View file

@ -1,6 +1,309 @@
# Changelog
## [3.15.5](https://github.com/ynput/OpenPype/tree/3.15.5)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.4...3.15.5)
### **🚀 Enhancements**
<details>
<summary>Maya: Playblast profiles <a href="https://github.com/ynput/OpenPype/pull/4777">#4777</a></summary>
Support playblast profiles. This enables studios to customize which playblast settings should apply on a per-task and/or per-subset basis. For example, `modeling` could have `Wireframe On Shaded` enabled while all other tasks have it disabled (see the sketch after this entry).
___
</details>
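A minimal sketch of how such a per-task profile could be resolved. The profile keys (`task_types`, `capture_options`) and the helper below are illustrative assumptions, not the actual Maya settings schema.

```python
# Illustrative sketch only: the real schema lives in the Maya project
# settings; the keys below are assumptions for demonstration.
profiles = [
    {"task_types": ["modeling"], "capture_options": {"wireframeOnShaded": True}},
    {"task_types": [], "capture_options": {"wireframeOnShaded": False}},  # fallback
]


def resolve_playblast_profile(task_type, profiles):
    """Return the first profile matching the task type, else the fallback."""
    fallback = None
    for profile in profiles:
        if task_type in profile["task_types"]:
            return profile
        if not profile["task_types"]:
            fallback = profile
    return fallback


print(resolve_playblast_profile("modeling", profiles)["capture_options"])
# {'wireframeOnShaded': True}
```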
<details>
<summary>Maya: Support .abc files directly for Arnold standin look assignment <a href="https://github.com/ynput/OpenPype/pull/4856">#4856</a></summary>
If an `.abc` file is loaded into an Arnold standin, support look assignment through the `cbId` attributes stored in the Alembic file.
___
</details>
<details>
<summary>Maya: Hide animation instance in creator <a href="https://github.com/ynput/OpenPype/pull/4872">#4872</a></summary>
- Hide animation instance in creator
- Add inventory action to recreate animation publish instance for loaded rigs
___
</details>
<details>
<summary>Unreal: Render Creator enhancements <a href="https://github.com/ynput/OpenPype/pull/4477">#4477</a></summary>
<strong>Improvements to the creator for the render family.</strong>
This PR introduces some enhancements to the creator for the render family in Unreal Engine:
- Added the option to create a new, empty sequence for the render.
- Added the option to not include the whole hierarchy for the selected sequence.
- Improvements of the error messages.
___
</details>
<details>
<summary>Unreal: Added settings for rendering <a href="https://github.com/ynput/OpenPype/pull/4575">#4575</a></summary>
<strong>Added settings for rendering in Unreal Engine.</strong>
Two settings have been added:
- Pre-roll frames, to set how many frames are used to load the scene before the actual rendering starts.
- Configuration path, to allow saving a preset of settings from Unreal and using it for rendering.
___
</details>
<details>
<summary>Global: Optimize anatomy formatting by only formatting used templates instead <a href="https://github.com/ynput/OpenPype/pull/4784">#4784</a></summary>
Optimization to avoid formatting the full anatomy when only a single template is used; only that single template is formatted instead (see the sketch after this entry).
___
</details>
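The idea, sketched with plain `str.format` instead of the actual Anatomy/StringTemplate objects; the template names and data below are illustrative.

```python
# Sketch of the optimization: instead of formatting every anatomy template,
# pick the one template that is actually needed and format only that.
templates = {
    "work": "{root}/{project}/{asset}/work/{task}",
    "publish": "{root}/{project}/{asset}/publish/{subset}/v{version:0>3}",
    # ...a full anatomy contains many more templates...
}
data = {
    "root": "/mnt/projects",
    "project": "demo",
    "asset": "hero",
    "subset": "modelMain",
    "version": 7,
}

publish_path = templates["publish"].format(**data)
print(publish_path)  # /mnt/projects/demo/hero/publish/modelMain/v007
```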
<details>
<summary>Patchelf version locked <a href="https://github.com/ynput/OpenPype/pull/4853">#4853</a></summary>
For the CentOS dockerfile it is necessary to lock patchelf to an older version, otherwise the build process fails.
___
</details>
<details>
<summary>Houdini: Implement `switch` method on loaders <a href="https://github.com/ynput/OpenPype/pull/4866">#4866</a></summary>
Implement `switch` method on loaders
___
</details>
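As the Houdini loader diffs further down in this PR show, each loader's `switch` simply delegates to its existing `update` logic. A minimal sketch of the pattern:

```python
from openpype.pipeline import load


class ExampleHoudiniLoader(load.LoaderPlugin):
    """Minimal sketch of the pattern added to the Houdini loaders."""

    def update(self, container, representation):
        ...  # existing update logic of the loader

    def switch(self, container, representation):
        # Switching a container to another representation reuses update().
        self.update(container, representation)
```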
<details>
<summary>Code: Tweak docstrings and return type hints <a href="https://github.com/ynput/OpenPype/pull/4875">#4875</a></summary>
Tweak docstrings and return type hints for functions in `openpype.client.entities`.
___
</details>
<details>
<summary>Publisher: Clear comment on successful publish and on window close <a href="https://github.com/ynput/OpenPype/pull/4885">#4885</a></summary>
Clear comment text field on successful publish and on window close.
___
</details>
<details>
<summary>Publisher: Make sure to reset asset widget when hidden and reshown <a href="https://github.com/ynput/OpenPype/pull/4886">#4886</a></summary>
Make sure to reset the asset widget when it is hidden and reshown. Without this, the asset list in the set-asset widget would never refresh when changing context on an existing instance, and thus would not show assets created after the first time that widget was launched.
___
</details>
### **🐛 Bug fixes**
<details>
<summary>Maya: Fix nested model instances. <a href="https://github.com/ynput/OpenPype/pull/4852">#4852</a></summary>
Fix nested model instances under a review instance, where data collection was not including "Display Lights" and "Focal Length".
___
</details>
<details>
<summary>Maya: Make default namespace naming backwards compatible <a href="https://github.com/ynput/OpenPype/pull/4873">#4873</a></summary>
Namespaces of loaded references are now _by default_ back to what they were before #4511.
___
</details>
<details>
<summary>Nuke: Legacy convertor skips deprecation warnings <a href="https://github.com/ynput/OpenPype/pull/4846">#4846</a></summary>
The Nuke legacy convertor was triggering a deprecated function, which produced a lot of log output and slowed down the whole process. The convertor now skips all nodes without `AVALON_TAB` to avoid the warnings (see the sketch after this entry).
___
</details>
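A hedged sketch of the skip, assuming `AVALON_TAB` is the name of the knob tab that legacy instances carry; the convertor itself is not shown in this diff.

```python
import nuke  # only available inside a Nuke session

AVALON_TAB = "avalon"  # assumption: name of the legacy tab knob


def iter_legacy_candidates():
    """Yield only nodes carrying the legacy tab knob, so the deprecated
    per-node lookup (and its warning spam) is skipped for everything else."""
    for node in nuke.allNodes(recurseGroups=True):
        if AVALON_TAB not in node.knobs():
            continue
        yield node
```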
<details>
<summary>3dsmax: move startup script logic to hook <a href="https://github.com/ynput/OpenPype/pull/4849">#4849</a></summary>
The OpenPype startup script was interfering with the Open Last Workfile feature. Moving this logic from a simple command line argument in the Settings to a pre-launch hook fixes the order of command line arguments and makes both features work.
___
</details>
<details>
<summary>Maya: Don't change time slider ranges in `get_frame_range` <a href="https://github.com/ynput/OpenPype/pull/4858">#4858</a></summary>
Don't change time slider ranges in `get_frame_range`
___
</details>
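Per the Maya `lib.py` diff later in this PR, `get_frame_range` now only returns the range dictionary and callers apply it themselves. A usage sketch (requires a Maya session with an OpenPype context):

```python
from maya import cmds
from openpype.hosts.maya.api.lib import get_frame_range

# Query only; the function no longer modifies the Maya time slider.
frame_range = get_frame_range(include_animation_range=True)

# Callers that want to apply the range set the playback options themselves.
cmds.playbackOptions(
    minTime=frame_range["frameStart"],
    maxTime=frame_range["frameEnd"],
    animationStartTime=frame_range["animationStart"],
    animationEndTime=frame_range["animationEnd"],
)
cmds.currentTime(frame_range["frameStart"])
```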
<details>
<summary>Maya: Looks - calculate hash for tx texture <a href="https://github.com/ynput/OpenPype/pull/4878">#4878</a></summary>
A texture hash is calculated for textures used in a published look and is used as a dictionary key. After recent changes, this hash was not calculated for TX files, resulting in a `None` key in the dictionary and crashing publishing. This PR adds the texture hash for TX files to solve that issue.
___
</details>
<details>
<summary>Houdini: Collect `currentFile` context data separate from workfile instance <a href="https://github.com/ynput/OpenPype/pull/4883">#4883</a></summary>
Fix publishing without an active workfile instance due to missing `currentFile` data. The `currentFile` is now collected into the context in Houdini through a context plugin, regardless of the active instances.
___
</details>
<details>
<summary>Nuke: fixed broken slate workflow once published on deadline <a href="https://github.com/ynput/OpenPype/pull/4887">#4887</a></summary>
The slate workflow now works as expected and Validate Sequence Frames no longer raises once the slate frame is included.
___
</details>
<details>
<summary>Add fps as instance.data in collect review in Houdini. <a href="https://github.com/ynput/OpenPype/pull/4888">#4888</a></summary>
Fix the bug of Extract Review failing to publish in Houdini (see the sketch after this entry). Original error:
```python
File "OpenPype\build\exe.win-amd64-3.9\openpype\plugins\publish\extract_review.py", line 516, in prepare_temp_data
"fps": float(instance.data["fps"]),
KeyError: 'fps'
```
___
</details>
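The fix, visible in the Houdini review collector diff further down, forwards the context fps onto the instance so ExtractReview can read it. A minimal standalone sketch (the plugin name here is illustrative):

```python
import pyblish.api


class CollectReviewFps(pyblish.api.InstancePlugin):
    """Illustrative sketch of the fix: copy the context fps onto the
    instance so extract_review can read instance.data["fps"]."""

    order = pyblish.api.CollectorOrder
    hosts = ["houdini"]
    families = ["review"]

    def process(self, instance):
        instance.data["fps"] = instance.context.data["fps"]
```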
<details>
<summary>TrayPublisher: Fill missing data for instances with review <a href="https://github.com/ynput/OpenPype/pull/4891">#4891</a></summary>
Fill the required data on an instance in TrayPublisher if the instance has the review family. The data is required by ExtractReview, and it would be complicated to do a proper fix at this moment. The collector does for review instances what https://github.com/ynput/OpenPype/pull/4383 did.
___
</details>
<details>
<summary>Publisher: Keep track about current context and fix context selection widget <a href="https://github.com/ynput/OpenPype/pull/4892">#4892</a></summary>
Change the selected context to the current context on reset. Fix a bug when the context widget is re-enabled.
___
</details>
<details>
<summary>Scene inventory: Model refresh fix with cherry picking <a href="https://github.com/ynput/OpenPype/pull/4895">#4895</a></summary>
Fix a cherry-pick issue in the scene inventory.
___
</details>
<details>
<summary>Nuke: Pre-render and missing review flag on instance causing crash <a href="https://github.com/ynput/OpenPype/pull/4897">#4897</a></summary>
If an instance created in Nuke was missing the `review` flag, the collector crashed.
___
</details>
### **Merged pull requests**
<details>
<summary>After Effects: fix handles KeyError <a href="https://github.com/ynput/OpenPype/pull/4727">#4727</a></summary>
Sometimes when publishing with AE (we only saw this error on AE 2023), we got a KeyError for the handles in the "Collect Workfile" step. The handles are now taken from the context if there are no handles in the asset entity (see the sketch after this entry).
___
</details>
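The After Effects collector itself is not part of this diff; a hedged sketch of the described fallback:

```python
def get_handles(asset_data, context_data):
    """Hedged sketch: prefer handles from the asset entity, fall back to the
    publish context when the asset entity carries none (the AE 2023 case)."""
    return (
        asset_data.get("handleStart", context_data.get("handleStart", 0)),
        asset_data.get("handleEnd", context_data.get("handleEnd", 0)),
    )


# Asset entity without handles, context provides them instead.
print(get_handles({}, {"handleStart": 8, "handleEnd": 8}))  # (8, 8)
```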
## [3.15.4](https://github.com/ynput/OpenPype/tree/3.15.4)

View file

@ -52,7 +52,7 @@ RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.n
# we need to build our own patchelf
WORKDIR /temp-patchelf
RUN git clone https://github.com/NixOS/patchelf.git . \
RUN git clone -b 0.17.0 --single-branch https://github.com/NixOS/patchelf.git . \
&& source scl_source enable devtoolset-7 \
&& ./bootstrap.sh \
&& ./configure \

View file

@ -415,11 +415,12 @@ def repack_version(directory):
@main.command()
@click.option("--project", help="Project name")
@click.option(
"--dirpath", help="Directory where package is stored", default=None
)
def pack_project(project, dirpath):
"--dirpath", help="Directory where package is stored", default=None)
@click.option(
"--dbonly", help="Store only Database data", default=False, is_flag=True)
def pack_project(project, dirpath, dbonly):
"""Create a package of project with all files and database dump."""
PypeCommands().pack_project(project, dirpath)
PypeCommands().pack_project(project, dirpath, dbonly)
@main.command()
@ -427,9 +428,11 @@ def pack_project(project, dirpath):
@click.option(
"--root", help="Replace root which was stored in project", default=None
)
def unpack_project(zipfile, root):
@click.option(
"--dbonly", help="Store only Database data", default=False, is_flag=True)
def unpack_project(zipfile, root, dbonly):
"""Create a package of project with all files and database dump."""
PypeCommands().unpack_project(zipfile, root)
PypeCommands().unpack_project(zipfile, root, dbonly)
@main.command()

View file

@ -69,6 +69,19 @@ def convert_ids(in_ids):
def get_projects(active=True, inactive=False, fields=None):
"""Yield all project entity documents.
Args:
active (Optional[bool]): Include active projects. Defaults to True.
inactive (Optional[bool]): Include inactive projects.
Defaults to False.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Yields:
dict: Project entity data which can be reduced to specified 'fields'.
None is returned if project with specified filters was not found.
"""
mongodb = get_project_database()
for project_name in mongodb.collection_names():
if project_name in ("system.indexes",):
@ -81,6 +94,20 @@ def get_projects(active=True, inactive=False, fields=None):
def get_project(project_name, active=True, inactive=True, fields=None):
"""Return project entity document by project name.
Args:
project_name (str): Name of project.
active (Optional[bool]): Allow active project. Defaults to True.
inactive (Optional[bool]): Allow inactive project. Defaults to True.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Union[Dict, None]: Project entity data which can be reduced to
specified 'fields'. None is returned if project with specified
filters was not found.
"""
# Skip if both are disabled
if not active and not inactive:
return None
@ -124,17 +151,18 @@ def get_whole_project(project_name):
def get_asset_by_id(project_name, asset_id, fields=None):
"""Receive asset data by it's id.
"""Receive asset data by its id.
Args:
project_name (str): Name of project where to look for queried entities.
asset_id (Union[str, ObjectId]): Asset's id.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
dict: Asset entity data.
None: Asset was not found by id.
Union[Dict, None]: Asset entity data which can be reduced to
specified 'fields'. None is returned if asset with specified
filters was not found.
"""
asset_id = convert_id(asset_id)
@ -147,17 +175,18 @@ def get_asset_by_id(project_name, asset_id, fields=None):
def get_asset_by_name(project_name, asset_name, fields=None):
"""Receive asset data by it's name.
"""Receive asset data by its name.
Args:
project_name (str): Name of project where to look for queried entities.
asset_name (str): Asset's name.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
dict: Asset entity data.
None: Asset was not found by name.
Union[Dict, None]: Asset entity data which can be reduced to
specified 'fields'. None is returned if asset with specified
filters was not found.
"""
if not asset_name:
@ -195,8 +224,8 @@ def _get_assets(
parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
standard (bool): Query standard assets (type 'asset').
archived (bool): Query archived assets (type 'archived_asset').
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Query cursor as iterable which returns asset documents matching
@ -261,8 +290,8 @@ def get_assets(
asset_names (Iterable[str]): Name assets that should be found.
parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
archived (bool): Add also archived assets.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Query cursor as iterable which returns asset documents matching
@ -300,8 +329,8 @@ def get_archived_assets(
be found.
asset_names (Iterable[str]): Name assets that should be found.
parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Query cursor as iterable which returns asset documents matching
@ -356,17 +385,18 @@ def get_asset_ids_with_subsets(project_name, asset_ids=None):
def get_subset_by_id(project_name, subset_id, fields=None):
"""Single subset entity data by it's id.
"""Single subset entity data by its id.
Args:
project_name (str): Name of project where to look for queried entities.
subset_id (Union[str, ObjectId]): Id of subset which should be found.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If subset with specified filters was not found.
Dict: Subset document which can be reduced to specified 'fields'.
Union[Dict, None]: Subset entity data which can be reduced to
specified 'fields'. None is returned if subset with specified
filters was not found.
"""
subset_id = convert_id(subset_id)
@ -379,20 +409,19 @@ def get_subset_by_id(project_name, subset_id, fields=None):
def get_subset_by_name(project_name, subset_name, asset_id, fields=None):
"""Single subset entity data by it's name and it's version id.
"""Single subset entity data by its name and its version id.
Args:
project_name (str): Name of project where to look for queried entities.
subset_name (str): Name of subset.
asset_id (Union[str, ObjectId]): Id of parent asset.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Union[None, Dict[str, Any]]: None if subset with specified filters was
not found or dict subset document which can be reduced to
specified 'fields'.
Union[Dict, None]: Subset entity data which can be reduced to
specified 'fields'. None is returned if subset with specified
filters was not found.
"""
if not subset_name:
return None
@ -434,8 +463,8 @@ def get_subsets(
names_by_asset_ids (dict[ObjectId, List[str]]): Complex filtering
using asset ids and list of subset names under the asset.
archived (bool): Look for archived subsets too.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Iterable cursor yielding all matching subsets.
@ -520,17 +549,18 @@ def get_subset_families(project_name, subset_ids=None):
def get_version_by_id(project_name, version_id, fields=None):
"""Single version entity data by it's id.
"""Single version entity data by its id.
Args:
project_name (str): Name of project where to look for queried entities.
version_id (Union[str, ObjectId]): Id of version which should be found.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If version with specified filters was not found.
Dict: Version document which can be reduced to specified 'fields'.
Union[Dict, None]: Version entity data which can be reduced to
specified 'fields'. None is returned if version with specified
filters was not found.
"""
version_id = convert_id(version_id)
@ -546,18 +576,19 @@ def get_version_by_id(project_name, version_id, fields=None):
def get_version_by_name(project_name, version, subset_id, fields=None):
"""Single version entity data by it's name and subset id.
"""Single version entity data by its name and subset id.
Args:
project_name (str): Name of project where to look for queried entities.
version (int): name of version entity (it's version).
version (int): name of version entity (its version).
subset_id (Union[str, ObjectId]): Id of version which should be found.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If version with specified filters was not found.
Dict: Version document which can be reduced to specified 'fields'.
Union[Dict, None]: Version entity data which can be reduced to
specified 'fields'. None is returned if version with specified
filters was not found.
"""
subset_id = convert_id(subset_id)
@ -574,7 +605,7 @@ def get_version_by_name(project_name, version, subset_id, fields=None):
def version_is_latest(project_name, version_id):
"""Is version the latest from it's subset.
"""Is version the latest from its subset.
Note:
Hero versions are considered as latest.
@ -680,8 +711,8 @@ def get_versions(
versions (Iterable[int]): Version names (as integers).
Filter ignored if 'None' is passed.
hero (bool): Look also for hero versions.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Iterable cursor yielding all matching versions.
@ -705,12 +736,13 @@ def get_hero_version_by_subset_id(project_name, subset_id, fields=None):
project_name (str): Name of project where to look for queried entities.
subset_id (Union[str, ObjectId]): Subset id under which
is hero version.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If hero version for passed subset id does not exists.
Dict: Hero version entity data.
Union[Dict, None]: Hero version entity data which can be reduced to
specified 'fields'. None is returned if hero version with specified
filters was not found.
"""
subset_id = convert_id(subset_id)
@ -730,17 +762,18 @@ def get_hero_version_by_subset_id(project_name, subset_id, fields=None):
def get_hero_version_by_id(project_name, version_id, fields=None):
"""Hero version by it's id.
"""Hero version by its id.
Args:
project_name (str): Name of project where to look for queried entities.
version_id (Union[str, ObjectId]): Hero version id.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If hero version with passed id was not found.
Dict: Hero version entity data.
Union[Dict, None]: Hero version entity data which can be reduced to
specified 'fields'. None is returned if hero version with specified
filters was not found.
"""
version_id = convert_id(version_id)
@ -773,8 +806,8 @@ def get_hero_versions(
should look for hero versions. Filter ignored if 'None' is passed.
version_ids (Iterable[Union[str, ObjectId]]): Hero version ids. Filter
ignored if 'None' is passed.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor|list: Iterable yielding hero versions matching passed filters.
@ -801,8 +834,8 @@ def get_output_link_versions(project_name, version_id, fields=None):
project_name (str): Name of project where to look for queried entities.
version_id (Union[str, ObjectId]): Version id which can be used
as input link for other versions.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Iterable: Iterable cursor yielding versions that are used as input
@ -828,8 +861,8 @@ def get_last_versions(project_name, subset_ids, fields=None):
Args:
project_name (str): Name of project where to look for queried entities.
subset_ids (Iterable[Union[str, ObjectId]]): List of subset ids.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
dict[ObjectId, int]: Key is subset id and value is last version name.
@ -913,12 +946,13 @@ def get_last_version_by_subset_id(project_name, subset_id, fields=None):
Args:
project_name (str): Name of project where to look for queried entities.
subset_id (Union[str, ObjectId]): Id of version which should be found.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If version with specified filters was not found.
Dict: Version document which can be reduced to specified 'fields'.
Union[Dict, None]: Version entity data which can be reduced to
specified 'fields'. None is returned if version with specified
filters was not found.
"""
subset_id = convert_id(subset_id)
@ -945,12 +979,13 @@ def get_last_version_by_subset_name(
asset_id (Union[str, ObjectId]): Asset id which is parent of passed
subset name.
asset_name (str): Asset name which is parent of passed subset name.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If version with specified filters was not found.
Dict: Version document which can be reduced to specified 'fields'.
Union[Dict, None]: Version entity data which can be reduced to
specified 'fields'. None is returned if version with specified
filters was not found.
"""
if not asset_id and not asset_name:
@ -972,18 +1007,18 @@ def get_last_version_by_subset_name(
def get_representation_by_id(project_name, representation_id, fields=None):
"""Representation entity data by it's id.
"""Representation entity data by its id.
Args:
project_name (str): Name of project where to look for queried entities.
representation_id (Union[str, ObjectId]): Representation id.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If representation with specified filters was not found.
Dict: Representation entity data which can be reduced
to specified 'fields'.
Union[Dict, None]: Representation entity data which can be reduced to
specified 'fields'. None is returned if representation with
specified filters was not found.
"""
if not representation_id:
@ -1004,19 +1039,19 @@ def get_representation_by_id(project_name, representation_id, fields=None):
def get_representation_by_name(
project_name, representation_name, version_id, fields=None
):
"""Representation entity data by it's name and it's version id.
"""Representation entity data by its name and its version id.
Args:
project_name (str): Name of project where to look for queried entities.
representation_name (str): Representation name.
version_id (Union[str, ObjectId]): Id of parent version entity.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If representation with specified filters was not found.
Dict: Representation entity data which can be reduced
to specified 'fields'.
Union[dict[str, Any], None]: Representation entity data which can be
reduced to specified 'fields'. None is returned if representation
with specified filters was not found.
"""
version_id = convert_id(version_id)
@ -1202,8 +1237,8 @@ def get_representations(
names_by_version_ids (dict[ObjectId, list[str]]): Complex filtering
using version ids and list of names under the version.
archived (bool): Output will also contain archived representations.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Iterable cursor yielding all matching representations.
@ -1247,8 +1282,8 @@ def get_archived_representations(
representation context fields.
names_by_version_ids (dict[ObjectId, List[str]]): Complex filtering
using version ids and list of names under the version.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Iterable cursor yielding all matching representations.
@ -1377,8 +1412,8 @@ def get_thumbnail_id_from_source(project_name, src_type, src_id):
src_id (Union[str, ObjectId]): Id of source entity.
Returns:
ObjectId: Thumbnail id assigned to entity.
None: If Source entity does not have any thumbnail id assigned.
Union[ObjectId, None]: Thumbnail id assigned to entity. None is
returned if source entity does not have any thumbnail id assigned.
"""
if not src_type or not src_id:
@ -1397,14 +1432,14 @@ def get_thumbnails(project_name, thumbnail_ids, fields=None):
"""Receive thumbnails entity data.
Thumbnail entity can be used to receive binary content of thumbnail based
on it's content and ThumbnailResolvers.
on its content and ThumbnailResolvers.
Args:
project_name (str): Name of project where to look for queried entities.
thumbnail_ids (Iterable[Union[str, ObjectId]]): Ids of thumbnail
entities.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
cursor: Cursor of queried documents.
@ -1429,12 +1464,13 @@ def get_thumbnail(project_name, thumbnail_id, fields=None):
Args:
project_name (str): Name of project where to look for queried entities.
thumbnail_id (Union[str, ObjectId]): Id of thumbnail entity.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If thumbnail with specified id was not found.
Dict: Thumbnail entity data which can be reduced to specified 'fields'.
Union[Dict, None]: Thumbnail entity data which can be reduced to
specified 'fields'. None is returned if thumbnail with specified
filters was not found.
"""
if not thumbnail_id:
@ -1458,8 +1494,13 @@ def get_workfile_info(
project_name (str): Name of project where to look for queried entities.
asset_id (Union[str, ObjectId]): Id of asset entity.
task_name (str): Task name on asset.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Union[Dict, None]: Workfile entity data which can be reduced to
specified 'fields'. None is returned if workfile with specified
filters was not found.
"""
if not asset_id or not task_name or not filename:

View file

@ -5,6 +5,12 @@ import logging
import pymongo
import certifi
from bson.json_util import (
loads,
dumps,
CANONICAL_JSON_OPTIONS
)
if sys.version_info[0] == 2:
from urlparse import urlparse, parse_qs
else:
@ -15,6 +21,49 @@ class MongoEnvNotSet(Exception):
pass
def documents_to_json(docs):
"""Convert documents to json string.
Args:
docs (Union[list[dict[str, Any]], dict[str, Any]]): Document/s to convert to
json string.
Returns:
str: Json string with mongo documents.
"""
return dumps(docs, json_options=CANONICAL_JSON_OPTIONS)
def load_json_file(filepath):
"""Load mongo documents from a json file.
Args:
filepath (str): Path to a json file.
Returns:
Union[dict[str, Any], list[dict[str, Any]]]: Loaded content from a
json file.
"""
if not os.path.exists(filepath):
raise ValueError("Path {} was not found".format(filepath))
with open(filepath, "r") as stream:
content = stream.read()
return loads("".join(content))
def get_project_database_name():
"""Name of database name where projects are available.
Returns:
str: Name of database name where projects are.
"""
return os.environ.get("AVALON_DB") or "avalon"
def _decompose_url(url):
"""Decompose mongo url to basic components.
@ -210,12 +259,102 @@ class OpenPypeMongoConnection:
return mongo_client
def get_project_database():
db_name = os.environ.get("AVALON_DB") or "avalon"
return OpenPypeMongoConnection.get_mongo_client()[db_name]
# ------ Helper Mongo functions ------
# Functions can be helpful with custom tools to backup/restore mongo state.
# Not meant as API functionality that should be used in production codebase!
def get_collection_documents(database_name, collection_name, as_json=False):
"""Query all documents from a collection.
Args:
database_name (str): Name of database where to look for collection.
collection_name (str): Name of collection to query.
as_json (Optional[bool]): Output should be a json string.
Default: 'False'
Returns:
Union[list[dict[str, Any]], str]: Queried documents.
"""
client = OpenPypeMongoConnection.get_mongo_client()
output = list(client[database_name][collection_name].find({}))
if as_json:
output = documents_to_json(output)
return output
def get_project_connection(project_name):
def store_collection(filepath, database_name, collection_name):
"""Store collection documents to a json file.
Args:
filepath (str): Path to a json file where documents will be stored.
database_name (str): Name of database where to look for collection.
collection_name (str): Name of collection to store.
"""
# Make sure directory for output file exists
dirpath = os.path.dirname(filepath)
if not os.path.isdir(dirpath):
os.makedirs(dirpath)
content = get_collection_documents(database_name, collection_name, True)
with open(filepath, "w") as stream:
stream.write(content)
def replace_collection_documents(docs, database_name, collection_name):
"""Replace all documents in a collection with passed documents.
Warnings:
All existing documents in collection will be removed if there are any.
Args:
docs (list[dict[str, Any]]): New documents.
database_name (str): Name of database where to look for collection.
collection_name (str): Name of collection where new documents are
uploaded.
"""
client = OpenPypeMongoConnection.get_mongo_client()
database = client[database_name]
if collection_name in database.list_collection_names():
database.drop_collection(collection_name)
col = database[collection_name]
col.insert_many(docs)
def restore_collection(filepath, database_name, collection_name):
"""Restore/replace collection from a json filepath.
Warnings:
All existing documents in collection will be removed if there are any.
Args:
filepath (str): Path to a json with documents.
database_name (str): Name of database where to look for collection.
collection_name (str): Name of collection where new documents are
uploaded.
"""
docs = load_json_file(filepath)
replace_collection_documents(docs, database_name, collection_name)
def get_project_database(database_name=None):
"""Database object where project collections are.
Args:
database_name (Optional[str]): Custom name of database.
Returns:
pymongo.database.Database: Database where project collections are.
"""
if not database_name:
database_name = get_project_database_name()
return OpenPypeMongoConnection.get_mongo_client()[database_name]
def get_project_connection(project_name, database_name=None):
"""Direct access to mongo collection.
We're trying to avoid using direct access to mongo. This should be used
@ -223,13 +362,83 @@ def get_project_connection(project_name):
api calls for that.
Args:
project_name(str): Project name for which collection should be
project_name (str): Project name for which collection should be
returned.
database_name (Optional[str]): Custom name of database.
Returns:
pymongo.Collection: Collection realated to passed project.
pymongo.collection.Collection: Collection related to passed project.
"""
if not project_name:
raise ValueError("Invalid project name {}".format(str(project_name)))
return get_project_database()[project_name]
return get_project_database(database_name)[project_name]
def get_project_documents(project_name, database_name=None):
"""Query all documents from project collection.
Args:
project_name (str): Name of project.
database_name (Optional[str]): Name of mongo database where to look for
project.
Returns:
list[dict[str, Any]]: Documents in project collection.
"""
if not database_name:
database_name = get_project_database_name()
return get_collection_documents(database_name, project_name)
def store_project_documents(project_name, filepath, database_name=None):
"""Store project documents to a file as json string.
Args:
project_name (str): Name of project to store.
filepath (str): Path to a json file where output will be stored.
database_name (Optional[str]): Name of mongo database where to look for
project.
"""
if not database_name:
database_name = get_project_database_name()
store_collection(filepath, database_name, project_name)
def replace_project_documents(project_name, docs, database_name=None):
"""Replace documents in mongo with passed documents.
Warnings:
Existing project collection is removed if it exists in mongo.
Args:
project_name (str): Name of project.
docs (list[dict[str, Any]]): Documents to restore.
database_name (Optional[str]): Name of mongo database where project
collection will be created.
"""
if not database_name:
database_name = get_project_database_name()
replace_collection_documents(docs, database_name, project_name)
def restore_project_documents(project_name, filepath, database_name=None):
"""Replace documents in mongo with passed documents.
Warnings:
Existing project collection is removed if it exists in mongo.
Args:
project_name (str): Name of project.
filepath (str): Path to a json file with project documents.
database_name (Optional[str]): Name of mongo database where project
collection will be created.
"""
if not database_name:
database_name = get_project_database_name()
restore_collection(filepath, database_name, project_name)
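Usage sketch (not part of the diff) for the backup/restore helpers added above. The import path is an assumption since the file path is not visible here, and it requires a configured `OPENPYPE_MONGO` connection.

```python
# Assumed import path; adjust to wherever these helpers actually live.
from openpype.client.mongo import (
    store_project_documents,
    restore_project_documents,
)

# Dump the project's collection to a json file...
store_project_documents("my_project", "/tmp/my_project.json")

# ...and later replace the collection from that dump (drops existing docs!).
restore_project_documents("my_project", "/tmp/my_project.json")
```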

View file

@ -1,7 +1,5 @@
import os
import qtawesome
from openpype.hosts.fusion.api import (
get_current_comp,
comp_lock_and_undo_chunk,
@ -28,6 +26,7 @@ class CreateSaver(Creator):
family = "render"
default_variants = ["Main", "Mask"]
description = "Fusion Saver to generate image sequence"
icon = "fa5.eye"
instance_attributes = ["reviewable"]
@ -89,9 +88,6 @@ class CreateSaver(Creator):
self._add_instance_to_context(created_instance)
def get_icon(self):
return qtawesome.icon("fa.eye", color="white")
def update_instances(self, update_list):
for created_inst, _changes in update_list:
new_data = created_inst.data_to_store()

View file

@ -1,5 +1,3 @@
import qtawesome
from openpype.hosts.fusion.api import (
get_current_comp
)
@ -15,6 +13,7 @@ class FusionWorkfileCreator(AutoCreator):
identifier = "workfile"
family = "workfile"
label = "Workfile"
icon = "fa5.file"
default_variant = "Main"
@ -104,6 +103,3 @@ class FusionWorkfileCreator(AutoCreator):
existing_instance["asset"] = asset_name
existing_instance["task"] = task_name
existing_instance["subset"] = subset_name
def get_icon(self):
return qtawesome.icon("fa.file-o", color="white")

View file

@ -14,7 +14,7 @@ class CreateWorkfile(plugin.HoudiniCreatorBase, AutoCreator):
identifier = "io.openpype.creators.houdini.workfile"
label = "Workfile"
family = "workfile"
icon = "document"
icon = "fa5.file"
default_variant = "Main"

View file

@ -104,3 +104,6 @@ class AbcLoader(load.LoaderPlugin):
node = container["node"]
node.destroy()
def switch(self, container, representation):
self.update(container, representation)

View file

@ -73,3 +73,6 @@ class AbcArchiveLoader(load.LoaderPlugin):
node = container["node"]
node.destroy()
def switch(self, container, representation):
self.update(container, representation)

View file

@ -106,3 +106,6 @@ class BgeoLoader(load.LoaderPlugin):
node = container["node"]
node.destroy()
def switch(self, container, representation):
self.update(container, representation)

View file

@ -192,3 +192,6 @@ class CameraLoader(load.LoaderPlugin):
new_node.moveToGoodPosition()
return new_node
def switch(self, container, representation):
self.update(container, representation)

View file

@ -125,3 +125,6 @@ class ImageLoader(load.LoaderPlugin):
prefix, padding, suffix = first_fname.rsplit(".", 2)
fname = ".".join([prefix, "$F{}".format(len(padding)), suffix])
return os.path.join(root, fname).replace("\\", "/")
def switch(self, container, representation):
self.update(container, representation)

View file

@ -79,3 +79,6 @@ class USDSublayerLoader(load.LoaderPlugin):
node = container["node"]
node.destroy()
def switch(self, container, representation):
self.update(container, representation)

View file

@ -79,3 +79,6 @@ class USDReferenceLoader(load.LoaderPlugin):
node = container["node"]
node.destroy()
def switch(self, container, representation):
self.update(container, representation)

View file

@ -102,3 +102,6 @@ class VdbLoader(load.LoaderPlugin):
node = container["node"]
node.destroy()
def switch(self, container, representation):
self.update(container, representation)

View file

@ -4,15 +4,14 @@ import hou
import pyblish.api
class CollectHoudiniCurrentFile(pyblish.api.InstancePlugin):
class CollectHoudiniCurrentFile(pyblish.api.ContextPlugin):
"""Inject the current working file into context"""
order = pyblish.api.CollectorOrder - 0.01
order = pyblish.api.CollectorOrder - 0.1
label = "Houdini Current File"
hosts = ["houdini"]
families = ["workfile"]
def process(self, instance):
def process(self, context):
"""Inject the current working file"""
current_file = hou.hipFile.path()
@ -34,26 +33,5 @@ class CollectHoudiniCurrentFile(pyblish.api.InstancePlugin):
"saved correctly."
)
instance.context.data["currentFile"] = current_file
folder, file = os.path.split(current_file)
filename, ext = os.path.splitext(file)
instance.data.update({
"setMembers": [current_file],
"frameStart": instance.context.data['frameStart'],
"frameEnd": instance.context.data['frameEnd'],
"handleStart": instance.context.data['handleStart'],
"handleEnd": instance.context.data['handleEnd']
})
instance.data['representations'] = [{
'name': ext.lstrip("."),
'ext': ext.lstrip("."),
'files': file,
"stagingDir": folder,
}]
self.log.info('Collected instance: {}'.format(file))
self.log.info('Scene path: {}'.format(current_file))
self.log.info('staging Dir: {}'.format(folder))
context.data["currentFile"] = current_file
self.log.info('Current workfile path: {}'.format(current_file))

View file

@ -17,6 +17,7 @@ class CollectHoudiniReviewData(pyblish.api.InstancePlugin):
# which isn't the actual frame range that this instance renders.
instance.data["handleStart"] = 0
instance.data["handleEnd"] = 0
instance.data["fps"] = instance.context.data["fps"]
# Get the camera from the rop node to collect the focal length
ropnode_path = instance.data["instance_node"]

View file

@ -0,0 +1,36 @@
import os
import pyblish.api
class CollectWorkfile(pyblish.api.InstancePlugin):
"""Inject workfile representation into instance"""
order = pyblish.api.CollectorOrder - 0.01
label = "Houdini Workfile Data"
hosts = ["houdini"]
families = ["workfile"]
def process(self, instance):
current_file = instance.context.data["currentFile"]
folder, file = os.path.split(current_file)
filename, ext = os.path.splitext(file)
instance.data.update({
"setMembers": [current_file],
"frameStart": instance.context.data['frameStart'],
"frameEnd": instance.context.data['frameEnd'],
"handleStart": instance.context.data['handleStart'],
"handleEnd": instance.context.data['handleEnd']
})
instance.data['representations'] = [{
'name': ext.lstrip("."),
'ext': ext.lstrip("."),
'files': file,
"stagingDir": folder,
}]
self.log.info('Collected instance: {}'.format(file))
self.log.info('staging Dir: {}'.format(folder))

View file

@ -32,6 +32,10 @@ from openpype.pipeline import (
load_container,
registered_host,
)
from openpype.pipeline.create import (
legacy_create,
get_legacy_creator_by_name,
)
from openpype.pipeline.context_tools import (
get_current_asset_name,
get_current_project_asset,
@ -2153,17 +2157,23 @@ def set_scene_resolution(width, height, pixelAspect):
cmds.setAttr("%s.pixelAspect" % control_node, pixelAspect)
def get_frame_range():
"""Get the current assets frame range and handles."""
def get_frame_range(include_animation_range=False):
"""Get the current assets frame range and handles.
Args:
include_animation_range (bool, optional): Whether to include
`animationStart` and `animationEnd` keys to define the outer
range of the timeline. It is excluded by default.
Returns:
dict: Asset's expected frame range values.
"""
# Set frame start/end
project_name = get_current_project_name()
task_name = get_current_task_name()
asset_name = get_current_asset_name()
asset = get_asset_by_name(project_name, asset_name)
settings = get_project_settings(project_name)
include_handles_settings = settings["maya"]["include_handles"]
current_task = asset.get("data").get("tasks").get(task_name)
frame_start = asset["data"].get("frameStart")
frame_end = asset["data"].get("frameEnd")
@ -2175,32 +2185,39 @@ def get_frame_range():
handle_start = asset["data"].get("handleStart") or 0
handle_end = asset["data"].get("handleEnd") or 0
animation_start = frame_start
animation_end = frame_end
include_handles = include_handles_settings["include_handles_default"]
for item in include_handles_settings["per_task_type"]:
if current_task["type"] in item["task_type"]:
include_handles = item["include_handles"]
break
if include_handles:
animation_start -= int(handle_start)
animation_end += int(handle_end)
cmds.playbackOptions(
minTime=frame_start,
maxTime=frame_end,
animationStartTime=animation_start,
animationEndTime=animation_end
)
cmds.currentTime(frame_start)
return {
frame_range = {
"frameStart": frame_start,
"frameEnd": frame_end,
"handleStart": handle_start,
"handleEnd": handle_end
}
if include_animation_range:
# The animation range values are only included to define whether
# the Maya time slider should include the handles or not.
# Some usages of this function use the full dictionary to define
# instance attributes for which we want to exclude the animation
# keys. That is why these are excluded by default.
task_name = get_current_task_name()
settings = get_project_settings(project_name)
include_handles_settings = settings["maya"]["include_handles"]
current_task = asset.get("data").get("tasks").get(task_name)
animation_start = frame_start
animation_end = frame_end
include_handles = include_handles_settings["include_handles_default"]
for item in include_handles_settings["per_task_type"]:
if current_task["type"] in item["task_type"]:
include_handles = item["include_handles"]
break
if include_handles:
animation_start -= int(handle_start)
animation_end += int(handle_end)
frame_range["animationStart"] = animation_start
frame_range["animationEnd"] = animation_end
return frame_range
def reset_frame_range(playback=True, render=True, fps=True):
@ -2219,18 +2236,23 @@ def reset_frame_range(playback=True, render=True, fps=True):
)
set_scene_fps(fps)
frame_range = get_frame_range()
frame_range = get_frame_range(include_animation_range=True)
if not frame_range:
# No frame range data found for asset
return
frame_start = frame_range["frameStart"] - int(frame_range["handleStart"])
frame_end = frame_range["frameEnd"] + int(frame_range["handleEnd"])
frame_start = frame_range["frameStart"]
frame_end = frame_range["frameEnd"]
animation_start = frame_range["animationStart"]
animation_end = frame_range["animationEnd"]
if playback:
cmds.playbackOptions(minTime=frame_start)
cmds.playbackOptions(maxTime=frame_end)
cmds.playbackOptions(animationStartTime=frame_start)
cmds.playbackOptions(animationEndTime=frame_end)
cmds.playbackOptions(minTime=frame_start)
cmds.playbackOptions(maxTime=frame_end)
cmds.playbackOptions(
minTime=frame_start,
maxTime=frame_end,
animationStartTime=animation_start,
animationEndTime=animation_end
)
cmds.currentTime(frame_start)
if render:
@ -3913,3 +3935,53 @@ def get_capture_preset(task_name, task_type, subset, project_settings, log):
capture_preset = plugin_settings["capture_preset"]
return capture_preset or {}
def create_rig_animation_instance(nodes, context, namespace, log=None):
"""Create an animation publish instance for loaded rigs.
See the RecreateRigAnimationInstance inventory action on how to use this
for loaded rig containers.
Arguments:
nodes (list): Member nodes of the rig instance.
context (dict): Representation context of the rig container
namespace (str): Namespace of the rig container
log (logging.Logger, optional): Logger to log to if provided
Returns:
None
"""
output = next((node for node in nodes if
node.endswith("out_SET")), None)
controls = next((node for node in nodes if
node.endswith("controls_SET")), None)
assert output, "No out_SET in rig, this is a bug."
assert controls, "No controls_SET in rig, this is a bug."
# Find the roots amongst the loaded nodes
roots = (
cmds.ls(nodes, assemblies=True, long=True) or
get_highest_in_hierarchy(nodes)
)
assert roots, "No root nodes in rig, this is a bug."
asset = legacy_io.Session["AVALON_ASSET"]
dependency = str(context["representation"]["_id"])
if log:
log.info("Creating subset: {}".format(namespace))
# Create the animation instance
creator_plugin = get_legacy_creator_by_name("CreateAnimation")
with maintained_selection():
cmds.select([output, controls] + roots, noExpand=True)
legacy_create(
creator_plugin,
name=namespace,
asset=asset,
options={"useSelection": True},
data={"dependencies": dependency}
)

View file

@ -7,6 +7,12 @@ from openpype.hosts.maya.api import (
class CreateAnimation(plugin.Creator):
"""Animation output for character rigs"""
# We hide the animation creator from the UI since the creation of it
# is automated upon loading a rig. There's an inventory action to recreate
# it for loaded rigs if by chance someone deleted the animation instance.
# Note: This setting is actually applied from project settings
enabled = False
name = "animationDefault"
label = "Animation"
family = "animation"

View file

@ -0,0 +1,35 @@
from openpype.pipeline import (
InventoryAction,
get_representation_context
)
from openpype.hosts.maya.api.lib import (
create_rig_animation_instance,
get_container_members,
)
class RecreateRigAnimationInstance(InventoryAction):
"""Recreate animation publish instance for loaded rigs"""
label = "Recreate rig animation instance"
icon = "wrench"
color = "#888888"
@staticmethod
def is_compatible(container):
return (
container.get("loader") == "ReferenceLoader"
and container.get("name", "").startswith("rig")
)
def process(self, containers):
for container in containers:
# todo: delete an existing entry if it exist or skip creation
namespace = container["namespace"]
representation_id = container["representation"]
context = get_representation_context(representation_id)
nodes = get_container_members(container)
create_rig_animation_instance(nodes, context, namespace)

View file

@ -4,16 +4,12 @@ import contextlib
from maya import cmds
from openpype.settings import get_project_settings
from openpype.pipeline import legacy_io
from openpype.pipeline.create import (
legacy_create,
get_legacy_creator_by_name,
)
import openpype.hosts.maya.api.plugin
from openpype.hosts.maya.api.lib import (
maintained_selection,
get_container_members,
parent_nodes
parent_nodes,
create_rig_animation_instance
)
@ -114,9 +110,6 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
icon = "code-fork"
color = "orange"
# Name of creator class that will be used to create animation instance
animation_creator_name = "CreateAnimation"
def process_reference(self, context, name, namespace, options):
import maya.cmds as cmds
@ -169,9 +162,15 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
with parent_nodes(roots, parent=None):
cmds.xform(group_name, zeroTransformPivots=True)
cmds.setAttr("{}.displayHandle".format(group_name), 1)
settings = get_project_settings(os.environ['AVALON_PROJECT'])
display_handle = settings['maya']['load'].get(
'reference_loader', {}
).get('display_handle', True)
cmds.setAttr(
"{}.displayHandle".format(group_name), display_handle
)
colors = settings['maya']['load']['colors']
c = colors.get(family)
if c is not None:
@ -181,7 +180,9 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
(float(c[1]) / 255),
(float(c[2]) / 255))
cmds.setAttr("{}.displayHandle".format(group_name), 1)
cmds.setAttr(
"{}.displayHandle".format(group_name), display_handle
)
# get bounding box
bbox = cmds.exactWorldBoundingBox(group_name)
# get pivot position on world space
@ -220,37 +221,10 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
self._lock_camera_transforms(members)
def _post_process_rig(self, name, namespace, context, options):
output = next((node for node in self if
node.endswith("out_SET")), None)
controls = next((node for node in self if
node.endswith("controls_SET")), None)
assert output, "No out_SET in rig, this is a bug."
assert controls, "No controls_SET in rig, this is a bug."
# Find the roots amongst the loaded nodes
roots = cmds.ls(self[:], assemblies=True, long=True)
assert roots, "No root nodes in rig, this is a bug."
asset = legacy_io.Session["AVALON_ASSET"]
dependency = str(context["representation"]["_id"])
self.log.info("Creating subset: {}".format(namespace))
# Create the animation instance
creator_plugin = get_legacy_creator_by_name(
self.animation_creator_name
nodes = self[:]
create_rig_animation_instance(
nodes, context, namespace, log=self.log
)
with maintained_selection():
cmds.select([output, controls] + roots, noExpand=True)
legacy_create(
creator_plugin,
name=namespace,
asset=asset,
options={"useSelection": True},
data={"dependencies": dependency}
)
def _lock_camera_transforms(self, nodes):
cameras = cmds.ls(nodes, type="camera")

View file

@ -280,7 +280,7 @@ class MakeTX(TextureProcessor):
# Do nothing if the source file is already a .tx file.
return TextureResult(
path=source,
file_hash=None, # todo: unknown texture hash?
file_hash=source_hash(source),
colorspace=colorspace,
transfer_mode=COPY
)

View file

@ -217,7 +217,11 @@ class ExtractPlayblast(publish.Extractor):
instance.data["panel"], edit=True, **viewport_defaults
)
cmds.setAttr("{}.panZoomEnabled".format(preset["camera"]), pan_zoom)
try:
cmds.setAttr(
"{}.panZoomEnabled".format(preset["camera"]), pan_zoom)
except RuntimeError:
self.log.warning("Cannot restore Pan/Zoom settings.")
collected_files = os.listdir(stagingdir)
patterns = [clique.PATTERNS["frames"]]

View file

@ -6,7 +6,7 @@ import pyblish.api
from openpype.hosts.maya.api.lib import set_attribute
from openpype.pipeline.publish import (
RepairContextAction,
RepairAction,
ValidateContentsOrder,
)
@ -26,7 +26,7 @@ class ValidateAttributes(pyblish.api.InstancePlugin):
order = ValidateContentsOrder
label = "Attributes"
hosts = ["maya"]
actions = [RepairContextAction]
actions = [RepairAction]
optional = True
attributes = None
@ -81,7 +81,7 @@ class ValidateAttributes(pyblish.api.InstancePlugin):
if node_name not in attributes:
continue
for attr_name, expected in attributes.items():
for attr_name, expected in attributes[node_name].items():
# Skip if attribute does not exist
if not cmds.attributeQuery(attr_name, node=node, exists=True):

View file

@ -0,0 +1,97 @@
# -*- coding: utf-8 -*-
"""Tools for loading looks to vray proxies."""
import os
from collections import defaultdict
import logging
import six
import alembic.Abc
log = logging.getLogger(__name__)
def get_alembic_paths_by_property(filename, attr, verbose=False):
# type: (str, str, bool) -> dict
"""Return attribute value per objects in the Alembic file.
Reads an Alembic archive hierarchy and retrieves the
value from the `attr` properties on the objects.
Args:
filename (str): Full path to Alembic archive to read.
attr (str): Id attribute.
verbose (bool): Whether to verbosely log missing attributes.
Returns:
dict: Mapping of node full path with its id
"""
# Normalize alembic path
filename = os.path.normpath(filename)
filename = filename.replace("\\", "/")
filename = str(filename) # path must be string
try:
archive = alembic.Abc.IArchive(filename)
except RuntimeError:
# invalid alembic file - probably vrmesh
log.warning("{} is not an alembic file".format(filename))
return {}
root = archive.getTop()
iterator = list(root.children)
obj_ids = {}
for obj in iterator:
name = obj.getFullName()
# include children for coming iterations
iterator.extend(obj.children)
props = obj.getProperties()
if props.getNumProperties() == 0:
# Skip those without properties, e.g. '/materials' in a gpuCache
continue
# The custom attribute is under the properties' first container under
# the ".arbGeomParams"
prop = props.getProperty(0) # get base property
_property = None
try:
geo_params = prop.getProperty('.arbGeomParams')
_property = geo_params.getProperty(attr)
except KeyError:
if verbose:
log.debug("Missing attr on: {0}".format(name))
continue
if not _property.isConstant():
log.warning("Id not constant on: {0}".format(name))
# Get first value sample
value = _property.getValue()[0]
obj_ids[name] = value
return obj_ids
def get_alembic_ids_cache(path):
# type: (str) -> dict
"""Build a id to node mapping in Alembic file.
Nodes without IDs are ignored.
Returns:
dict: Mapping of id to nodes in the Alembic.
"""
node_ids = get_alembic_paths_by_property(path, attr="cbId")
id_nodes = defaultdict(list)
for node, _id in six.iteritems(node_ids):
id_nodes[_id].append(node)
return dict(six.iteritems(id_nodes))
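# Rough usage sketch (hedged; the file path and printed layout are only
# illustrative): map every cbId value to the Alembic node paths that carry it.
#
#     ids_to_nodes = get_alembic_ids_cache("/path/to/pointcache.abc")
#     for cb_id, node_paths in ids_to_nodes.items():
#         print(cb_id, node_paths)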

View file

@ -9,6 +9,7 @@ from openpype.pipeline import legacy_io
from openpype.client import get_last_version_by_subset_name
from openpype.hosts.maya import api
from . import lib
from .alembic import get_alembic_ids_cache
log = logging.getLogger(__name__)
@ -68,6 +69,11 @@ def get_nodes_by_id(standin):
(dict): Dictionary with node full name/path and id.
"""
path = cmds.getAttr(standin + ".dso")
if path.endswith(".abc"):
# Support alembic files directly
return get_alembic_ids_cache(path)
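# Otherwise fall back to the sidecar .json file next to the standin's
# source path (legacy id mapping for non-Alembic sources).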
json_path = None
for f in os.listdir(os.path.dirname(path)):
if f.endswith(".json"):

View file

@ -1,108 +1,20 @@
# -*- coding: utf-8 -*-
"""Tools for loading looks to vray proxies."""
import os
from collections import defaultdict
import logging
import six
import alembic.Abc
from maya import cmds
from openpype.client import get_last_version_by_subset_name
from openpype.pipeline import legacy_io
import openpype.hosts.maya.lib as maya_lib
from . import lib
from .alembic import get_alembic_ids_cache
log = logging.getLogger(__name__)
def get_alembic_paths_by_property(filename, attr, verbose=False):
# type: (str, str, bool) -> dict
"""Return attribute value per objects in the Alembic file.
Reads an Alembic archive hierarchy and retrieves the
value from the `attr` properties on the objects.
Args:
filename (str): Full path to Alembic archive to read.
attr (str): Id attribute.
verbose (bool): Whether to verbosely log missing attributes.
Returns:
dict: Mapping of node full path with its id
"""
# Normalize alembic path
filename = os.path.normpath(filename)
filename = filename.replace("\\", "/")
filename = str(filename) # path must be string
try:
archive = alembic.Abc.IArchive(filename)
except RuntimeError:
# invalid alembic file - probably vrmesh
log.warning("{} is not an alembic file".format(filename))
return {}
root = archive.getTop()
iterator = list(root.children)
obj_ids = {}
for obj in iterator:
name = obj.getFullName()
# include children for coming iterations
iterator.extend(obj.children)
props = obj.getProperties()
if props.getNumProperties() == 0:
# Skip those without properties, e.g. '/materials' in a gpuCache
continue
# THe custom attribute is under the properties' first container under
# the ".arbGeomParams"
prop = props.getProperty(0) # get base property
_property = None
try:
geo_params = prop.getProperty('.arbGeomParams')
_property = geo_params.getProperty(attr)
except KeyError:
if verbose:
log.debug("Missing attr on: {0}".format(name))
continue
if not _property.isConstant():
log.warning("Id not constant on: {0}".format(name))
# Get first value sample
value = _property.getValue()[0]
obj_ids[name] = value
return obj_ids
def get_alembic_ids_cache(path):
# type: (str) -> dict
"""Build a id to node mapping in Alembic file.
Nodes without IDs are ignored.
Returns:
dict: Mapping of id to nodes in the Alembic.
"""
node_ids = get_alembic_paths_by_property(path, attr="cbId")
id_nodes = defaultdict(list)
for node, _id in six.iteritems(node_ids):
id_nodes[_id].append(node)
return dict(six.iteritems(id_nodes))
def assign_vrayproxy_shaders(vrayproxy, assignments):
# type: (str, dict) -> None
"""Assign shaders to content of Vray Proxy.

View file

@ -495,17 +495,17 @@ def get_avalon_knob_data(node, prefix="avalon:", create=True):
data (dict)
"""
data = {}
if AVALON_TAB not in node.knobs():
return data
# check if lists
if not isinstance(prefix, list):
prefix = list([prefix])
data = dict()
prefix = [prefix]
# loop prefix
for p in prefix:
# check if the node is avalon tracked
if AVALON_TAB not in node.knobs():
continue
try:
# check if data available on the node
test = node[AVALON_DATA_GROUP].value()
@ -516,8 +516,7 @@ def get_avalon_knob_data(node, prefix="avalon:", create=True):
if create:
node = set_avalon_knob_data(node)
return get_avalon_knob_data(node)
else:
return {}
return {}
# get data from filtered knobs
data.update({k.replace(p, ''): node[k].value()

View file

@ -2,7 +2,8 @@ from openpype.pipeline.create.creator_plugins import SubsetConvertorPlugin
from openpype.hosts.nuke.api.lib import (
INSTANCE_DATA_KNOB,
get_node_data,
get_avalon_knob_data
get_avalon_knob_data,
AVALON_TAB,
)
from openpype.hosts.nuke.api.plugin import convert_to_valid_instaces
@ -17,13 +18,15 @@ class LegacyConverted(SubsetConvertorPlugin):
legacy_found = False
# search for first available legacy item
for node in nuke.allNodes(recurseGroups=True):
if node.Class() in ["Viewer", "Dot"]:
continue
if get_node_data(node, INSTANCE_DATA_KNOB):
continue
if AVALON_TAB not in node.knobs():
continue
# get data from avalon knob
avalon_knob_data = get_avalon_knob_data(
node, ["avalon:", "ak:"], create=False)

View file

@ -190,7 +190,7 @@ class CollectNukeWrites(pyblish.api.InstancePlugin,
# make sure rendered sequence on farm will
# be used for extract review
if not instance.data["review"]:
if not instance.data.get("review"):
instance.data["useSequenceForReview"] = False
self.log.debug("instance.data: {}".format(pformat(instance.data)))

View file

@ -7,28 +7,26 @@ from openpype.pipeline import (
from openpype.hosts.photoshop.api.pipeline import cache_and_get_instances
class PSWorkfileCreator(AutoCreator):
identifier = "workfile"
family = "workfile"
default_variant = "Main"
class PSAutoCreator(AutoCreator):
"""Generic autocreator to extend."""
def get_instance_attr_defs(self):
return []
def collect_instances(self):
for instance_data in cache_and_get_instances(self):
creator_id = instance_data.get("creator_identifier")
if creator_id == self.identifier:
subset_name = instance_data["subset"]
instance = CreatedInstance(
self.family, subset_name, instance_data, self
instance = CreatedInstance.from_existing(
instance_data, self
)
self._add_instance_to_context(instance)
def update_instances(self, update_list):
# nothing to change on workfiles
pass
self.log.debug("update_list:: {}".format(update_list))
for created_inst, _changes in update_list:
api.stub().imprint(created_inst.get("instance_id"),
created_inst.data_to_store())
def create(self, options=None):
existing_instance = None
@ -58,6 +56,9 @@ class PSWorkfileCreator(AutoCreator):
project_name, host_name, None
))
if not self.active_on_create:
data["active"] = False
new_instance = CreatedInstance(
self.family, subset_name, data, self
)

View file

@ -0,0 +1,120 @@
from openpype.pipeline import CreatedInstance
from openpype.lib import BoolDef
import openpype.hosts.photoshop.api as api
from openpype.hosts.photoshop.lib import PSAutoCreator
from openpype.pipeline.create import get_subset_name
from openpype.client import get_asset_by_name
class AutoImageCreator(PSAutoCreator):
"""Creates flatten image from all visible layers.
Used in simplified publishing as auto created instance.
Must be enabled in Setting and template for subset name provided
"""
identifier = "auto_image"
family = "image"
# Settings
default_variant = ""
# - Mark by default instance for review
mark_for_review = True
active_on_create = True
def create(self, options=None):
existing_instance = None
for instance in self.create_context.instances:
if instance.creator_identifier == self.identifier:
existing_instance = instance
break
context = self.create_context
project_name = context.get_current_project_name()
asset_name = context.get_current_asset_name()
task_name = context.get_current_task_name()
host_name = context.host_name
asset_doc = get_asset_by_name(project_name, asset_name)
if existing_instance is None:
subset_name = get_subset_name(
self.family, self.default_variant, task_name, asset_doc,
project_name, host_name
)
publishable_ids = [layer.id for layer in api.stub().get_layers()
if layer.visible]
data = {
"asset": asset_name,
"task": task_name,
# ids are "virtual" layers, won't get grouped as 'members' do
# same difference in color coded layers in WP
"ids": publishable_ids
}
if not self.active_on_create:
data["active"] = False
creator_attributes = {"mark_for_review": self.mark_for_review}
data.update({"creator_attributes": creator_attributes})
new_instance = CreatedInstance(
self.family, subset_name, data, self
)
self._add_instance_to_context(new_instance)
api.stub().imprint(new_instance.get("instance_id"),
new_instance.data_to_store())
elif ( # existing instance from different context
existing_instance["asset"] != asset_name
or existing_instance["task"] != task_name
):
subset_name = get_subset_name(
self.family, self.default_variant, task_name, asset_doc,
project_name, host_name
)
existing_instance["asset"] = asset_name
existing_instance["task"] = task_name
existing_instance["subset"] = subset_name
api.stub().imprint(existing_instance.get("instance_id"),
existing_instance.data_to_store())
def get_pre_create_attr_defs(self):
return [
BoolDef(
"mark_for_review",
label="Review",
default=self.mark_for_review
)
]
def get_instance_attr_defs(self):
return [
BoolDef(
"mark_for_review",
label="Review"
)
]
def apply_settings(self, project_settings, system_settings):
plugin_settings = (
project_settings["photoshop"]["create"]["AutoImageCreator"]
)
self.active_on_create = plugin_settings["active_on_create"]
self.default_variant = plugin_settings["default_variant"]
self.mark_for_review = plugin_settings["mark_for_review"]
self.enabled = plugin_settings["enabled"]
def get_detail_description(self):
return """Creator for flatten image.
Studio might configure simple publishing workflow. In that case
`image` instance is automatically created which will publish flat
image from all visible layers.
Artist might disable this instance from publishing or from creating
review for it though.
"""

View file

@ -23,6 +23,11 @@ class ImageCreator(Creator):
family = "image"
description = "Image creator"
# Settings
default_variants = ""
mark_for_review = False
active_on_create = True
def create(self, subset_name_from_ui, data, pre_create_data):
groups_to_create = []
top_layers_to_wrap = []
@ -94,6 +99,12 @@ class ImageCreator(Creator):
data.update({"layer_name": layer_name})
data.update({"long_name": "_".join(layer_names_in_hierarchy)})
creator_attributes = {"mark_for_review": self.mark_for_review}
data.update({"creator_attributes": creator_attributes})
if not self.active_on_create:
data["active"] = False
new_instance = CreatedInstance(self.family, subset_name, data,
self)
@ -134,11 +145,6 @@ class ImageCreator(Creator):
self.host.remove_instance(instance)
self._remove_instance_from_context(instance)
def get_default_variants(self):
return [
"Main"
]
def get_pre_create_attr_defs(self):
output = [
BoolDef("use_selection", default=True,
@ -148,10 +154,34 @@ class ImageCreator(Creator):
label="Create separate instance for each selected"),
BoolDef("use_layer_name",
default=False,
label="Use layer name in subset")
label="Use layer name in subset"),
BoolDef(
"mark_for_review",
label="Create separate review",
default=False
)
]
return output
def get_instance_attr_defs(self):
return [
BoolDef(
"mark_for_review",
label="Review"
)
]
def apply_settings(self, project_settings, system_settings):
plugin_settings = (
project_settings["photoshop"]["create"]["ImageCreator"]
)
self.active_on_create = plugin_settings["active_on_create"]
self.default_variants = plugin_settings["default_variants"]
self.mark_for_review = plugin_settings["mark_for_review"]
self.enabled = plugin_settings["enabled"]
def get_detail_description(self):
return """Creator for Image instances
@ -180,6 +210,11 @@ class ImageCreator(Creator):
but layer name should be used (set explicitly in UI or implicitly if
multiple images should be created), it is added in capitalized form
as a suffix to subset name.
Each image can have its own review created if necessary via the
`Create separate review` toggle.
The more common use case, though, is to use a separate `review` instance
to create a review from all published items.
"""
def _handle_legacy(self, instance_data):

View file

@ -0,0 +1,28 @@
from openpype.hosts.photoshop.lib import PSAutoCreator
class ReviewCreator(PSAutoCreator):
"""Creates review instance which might be disabled from publishing."""
identifier = "review"
family = "review"
default_variant = "Main"
def get_detail_description(self):
return """Auto creator for review.
Photoshop review is created from all published images, or from all
visible layers if no `image` instances were created.
Review might be disabled by an artist (the instance shouldn't be deleted,
as it will get recreated in the next publish either way).
"""
def apply_settings(self, project_settings, system_settings):
plugin_settings = (
project_settings["photoshop"]["create"]["ReviewCreator"]
)
self.default_variant = plugin_settings["default_variant"]
self.active_on_create = plugin_settings["active_on_create"]
self.enabled = plugin_settings["enabled"]

View file

@ -0,0 +1,28 @@
from openpype.hosts.photoshop.lib import PSAutoCreator
class WorkfileCreator(PSAutoCreator):
identifier = "workfile"
family = "workfile"
default_variant = "Main"
def get_detail_description(self):
return """Auto creator for workfile.
It is expected that each publish will also publish its source workfile
for safekeeping. This creator triggers automatically without the need for
an artist to remember and trigger it explicitly.
The workfile instance can be disabled if publishing the workfile is not
required. (The instance shouldn't be deleted though, as it will be
recreated in the next publish automatically).
"""
def apply_settings(self, project_settings, system_settings):
plugin_settings = (
project_settings["photoshop"]["create"]["WorkfileCreator"]
)
self.active_on_create = plugin_settings["active_on_create"]
self.enabled = plugin_settings["enabled"]

View file

@ -0,0 +1,101 @@
import pyblish.api
from openpype.hosts.photoshop import api as photoshop
from openpype.pipeline.create import get_subset_name
class CollectAutoImage(pyblish.api.ContextPlugin):
"""Creates auto image in non artist based publishes (Webpublisher).
'remotepublish' should be renamed to 'autopublish' or similar in the future
"""
label = "Collect Auto Image"
order = pyblish.api.CollectorOrder
hosts = ["photoshop"]
order = pyblish.api.CollectorOrder + 0.2
targets = ["remotepublish"]
def process(self, context):
family = "image"
for instance in context:
creator_identifier = instance.data.get("creator_identifier")
if creator_identifier and creator_identifier == "auto_image":
self.log.debug("Auto image instance found, won't create new")
return
project_name = context.data["anatomyData"]["project"]["name"]
proj_settings = context.data["project_settings"]
task_name = context.data["anatomyData"]["task"]["name"]
host_name = context.data["hostName"]
asset_doc = context.data["assetEntity"]
asset_name = asset_doc["name"]
auto_creator = proj_settings.get(
"photoshop", {}).get(
"create", {}).get(
"AutoImageCreator", {})
if not auto_creator or not auto_creator["enabled"]:
self.log.debug("Auto image creator disabled, won't create new")
return
stub = photoshop.stub()
stored_items = stub.get_layers_metadata()
for item in stored_items:
if item.get("creator_identifier") == "auto_image":
if not item.get("active"):
self.log.debug("Auto_image instance disabled")
return
layer_items = stub.get_layers()
publishable_ids = [layer.id for layer in layer_items
if layer.visible]
# collect stored image instances
instance_names = []
for layer_item in layer_items:
layer_meta_data = stub.read(layer_item, stored_items)
# Skip layers without metadata.
if layer_meta_data is None:
continue
# Skip containers.
if "container" in layer_meta_data["id"]:
continue
# active might not be in legacy meta
if layer_meta_data.get("active", True) and layer_item.visible:
instance_names.append(layer_meta_data["subset"])
if len(instance_names) == 0:
variants = proj_settings.get(
"photoshop", {}).get(
"create", {}).get(
"CreateImage", {}).get(
"default_variants", [''])
family = "image"
variant = context.data.get("variant") or variants[0]
subset_name = get_subset_name(
family, variant, task_name, asset_doc,
project_name, host_name
)
instance = context.create_instance(subset_name)
instance.data["family"] = family
instance.data["asset"] = asset_name
instance.data["subset"] = subset_name
instance.data["ids"] = publishable_ids
instance.data["publish"] = True
instance.data["creator_identifier"] = "auto_image"
if auto_creator["mark_for_review"]:
instance.data["creator_attributes"] = {"mark_for_review": True}
instance.data["families"] = ["review"]
self.log.info("auto image instance: {} ".format(instance.data))

View file

@ -0,0 +1,92 @@
"""
Requires:
None
Provides:
instance -> family ("review")
"""
import pyblish.api
from openpype.hosts.photoshop import api as photoshop
from openpype.pipeline.create import get_subset_name
class CollectAutoReview(pyblish.api.ContextPlugin):
"""Create review instance in non artist based workflow.
Called only if PS is triggered in Webpublisher or in tests.
"""
label = "Collect Auto Review"
hosts = ["photoshop"]
order = pyblish.api.CollectorOrder + 0.2
targets = ["remotepublish"]
publish = True
def process(self, context):
family = "review"
has_review = False
for instance in context:
if instance.data["family"] == family:
self.log.debug("Review instance found, won't create new")
has_review = True
creator_attributes = instance.data.get("creator_attributes", {})
if (creator_attributes.get("mark_for_review") and
"review" not in instance.data["families"]):
instance.data["families"].append("review")
if has_review:
return
stub = photoshop.stub()
stored_items = stub.get_layers_metadata()
for item in stored_items:
if item.get("creator_identifier") == family:
if not item.get("active"):
self.log.debug("Review instance disabled")
return
auto_creator = context.data["project_settings"].get(
"photoshop", {}).get(
"create", {}).get(
"ReviewCreator", {})
if not auto_creator or not auto_creator["enabled"]:
self.log.debug("Review creator disabled, won't create new")
return
variant = (context.data.get("variant") or
auto_creator["default_variant"])
project_name = context.data["anatomyData"]["project"]["name"]
proj_settings = context.data["project_settings"]
task_name = context.data["anatomyData"]["task"]["name"]
host_name = context.data["hostName"]
asset_doc = context.data["assetEntity"]
asset_name = asset_doc["name"]
subset_name = get_subset_name(
family,
variant,
task_name,
asset_doc,
project_name,
host_name=host_name,
project_settings=proj_settings
)
instance = context.create_instance(subset_name)
instance.data.update({
"subset": subset_name,
"label": subset_name,
"name": subset_name,
"family": family,
"families": [],
"representations": [],
"asset": asset_name,
"publish": self.publish
})
self.log.debug("auto review created::{}".format(instance.data))

View file

@ -0,0 +1,99 @@
import os
import pyblish.api
from openpype.hosts.photoshop import api as photoshop
from openpype.pipeline.create import get_subset_name
class CollectAutoWorkfile(pyblish.api.ContextPlugin):
"""Collect current script for publish."""
order = pyblish.api.CollectorOrder + 0.2
label = "Collect Workfile"
hosts = ["photoshop"]
targets = ["remotepublish"]
def process(self, context):
family = "workfile"
file_path = context.data["currentFile"]
_, ext = os.path.splitext(file_path)
staging_dir = os.path.dirname(file_path)
base_name = os.path.basename(file_path)
workfile_representation = {
"name": ext[1:],
"ext": ext[1:],
"files": base_name,
"stagingDir": staging_dir,
}
for instance in context:
if instance.data["family"] == family:
self.log.debug("Workfile instance found, won't create new")
instance.data.update({
"label": base_name,
"name": base_name,
"representations": [],
})
# creating representation
_, ext = os.path.splitext(file_path)
instance.data["representations"].append(
workfile_representation)
return
stub = photoshop.stub()
stored_items = stub.get_layers_metadata()
for item in stored_items:
if item.get("creator_identifier") == family:
if not item.get("active"):
self.log.debug("Workfile instance disabled")
return
project_name = context.data["anatomyData"]["project"]["name"]
proj_settings = context.data["project_settings"]
auto_creator = proj_settings.get(
"photoshop", {}).get(
"create", {}).get(
"WorkfileCreator", {})
if not auto_creator or not auto_creator["enabled"]:
self.log.debug("Workfile creator disabled, won't create new")
return
# context.data["variant"] might come only from collect_batch_data
variant = (context.data.get("variant") or
auto_creator["default_variant"])
task_name = context.data["anatomyData"]["task"]["name"]
host_name = context.data["hostName"]
asset_doc = context.data["assetEntity"]
asset_name = asset_doc["name"]
subset_name = get_subset_name(
family,
variant,
task_name,
asset_doc,
project_name,
host_name=host_name,
project_settings=proj_settings
)
# Create instance
instance = context.create_instance(subset_name)
instance.data.update({
"subset": subset_name,
"label": base_name,
"name": base_name,
"family": family,
"families": [],
"representations": [],
"asset": asset_name
})
# creating representation
instance.data["representations"].append(workfile_representation)
self.log.debug("auto workfile review created:{}".format(instance.data))

View file

@ -1,116 +0,0 @@
import pprint
import pyblish.api
from openpype.settings import get_project_settings
from openpype.hosts.photoshop import api as photoshop
from openpype.lib import prepare_template_data
from openpype.pipeline import legacy_io
class CollectInstances(pyblish.api.ContextPlugin):
"""Gather instances by LayerSet and file metadata
Collects publishable instances from file metadata or enhance
already collected by creator (family == "image").
If no image instances are explicitly created, it looks if there is value
in `flatten_subset_template` (configurable in Settings), in that case it
produces flatten image with all visible layers.
Identifier:
id (str): "pyblish.avalon.instance"
"""
label = "Collect Instances"
order = pyblish.api.CollectorOrder
hosts = ["photoshop"]
families_mapping = {
"image": []
}
# configurable in Settings
flatten_subset_template = ""
def process(self, context):
instance_by_layer_id = {}
for instance in context:
if (
instance.data["family"] == "image" and
instance.data.get("members")):
layer_id = str(instance.data["members"][0])
instance_by_layer_id[layer_id] = instance
stub = photoshop.stub()
layer_items = stub.get_layers()
layers_meta = stub.get_layers_metadata()
instance_names = []
all_layer_ids = []
for layer_item in layer_items:
layer_meta_data = stub.read(layer_item, layers_meta)
all_layer_ids.append(layer_item.id)
# Skip layers without metadata.
if layer_meta_data is None:
continue
# Skip containers.
if "container" in layer_meta_data["id"]:
continue
# active might not be in legacy meta
if not layer_meta_data.get("active", True):
continue
instance = instance_by_layer_id.get(str(layer_item.id))
if instance is None:
instance = context.create_instance(layer_meta_data["subset"])
instance.data["layer"] = layer_item
instance.data.update(layer_meta_data)
instance.data["families"] = self.families_mapping[
layer_meta_data["family"]
]
instance.data["publish"] = layer_item.visible
instance_names.append(layer_meta_data["subset"])
# Produce diagnostic message for any graphical
# user interface interested in visualising it.
self.log.info("Found: \"%s\" " % instance.data["name"])
self.log.info("instance: {} ".format(
pprint.pformat(instance.data, indent=4)))
if len(instance_names) != len(set(instance_names)):
self.log.warning("Duplicate instances found. " +
"Remove unwanted via Publisher")
if len(instance_names) == 0 and self.flatten_subset_template:
project_name = context.data["projectEntity"]["name"]
variants = get_project_settings(project_name).get(
"photoshop", {}).get(
"create", {}).get(
"CreateImage", {}).get(
"defaults", [''])
family = "image"
task_name = legacy_io.Session["AVALON_TASK"]
asset_name = context.data["assetEntity"]["name"]
variant = context.data.get("variant") or variants[0]
fill_pairs = {
"variant": variant,
"family": family,
"task": task_name
}
subset = self.flatten_subset_template.format(
**prepare_template_data(fill_pairs))
instance = context.create_instance(subset)
instance.data["family"] = family
instance.data["asset"] = asset_name
instance.data["subset"] = subset
instance.data["ids"] = all_layer_ids
instance.data["families"] = self.families_mapping[family]
instance.data["publish"] = True
self.log.info("flatten instance: {} ".format(instance.data))

View file

@ -14,10 +14,7 @@ from openpype.pipeline.create import get_subset_name
class CollectReview(pyblish.api.ContextPlugin):
"""Gather the active document as review instance.
Triggers once even if no 'image' is published as by defaults it creates
flatten image from a workfile.
"""Adds review to families for instances marked to be reviewable.
"""
label = "Collect Review"
@ -28,25 +25,8 @@ class CollectReview(pyblish.api.ContextPlugin):
publish = True
def process(self, context):
family = "review"
subset = get_subset_name(
family,
context.data.get("variant", ''),
context.data["anatomyData"]["task"]["name"],
context.data["assetEntity"],
context.data["anatomyData"]["project"]["name"],
host_name=context.data["hostName"],
project_settings=context.data["project_settings"]
)
instance = context.create_instance(subset)
instance.data.update({
"subset": subset,
"label": subset,
"name": subset,
"family": family,
"families": [],
"representations": [],
"asset": os.environ["AVALON_ASSET"],
"publish": self.publish
})
for instance in context:
creator_attributes = instance.data["creator_attributes"]
if (creator_attributes.get("mark_for_review") and
"review" not in instance.data["families"]):
instance.data["families"].append("review")

View file

@ -14,50 +14,19 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
default_variant = "Main"
def process(self, context):
existing_instance = None
for instance in context:
if instance.data["family"] == "workfile":
self.log.debug("Workfile instance found, won't create new")
existing_instance = instance
break
file_path = context.data["currentFile"]
_, ext = os.path.splitext(file_path)
staging_dir = os.path.dirname(file_path)
base_name = os.path.basename(file_path)
family = "workfile"
# context.data["variant"] might come only from collect_batch_data
variant = context.data.get("variant") or self.default_variant
subset = get_subset_name(
family,
variant,
context.data["anatomyData"]["task"]["name"],
context.data["assetEntity"],
context.data["anatomyData"]["project"]["name"],
host_name=context.data["hostName"],
project_settings=context.data["project_settings"]
)
file_path = context.data["currentFile"]
staging_dir = os.path.dirname(file_path)
base_name = os.path.basename(file_path)
# Create instance
if existing_instance is None:
instance = context.create_instance(subset)
instance.data.update({
"subset": subset,
"label": base_name,
"name": base_name,
"family": family,
"families": [],
"representations": [],
"asset": os.environ["AVALON_ASSET"]
})
else:
instance = existing_instance
# creating representation
_, ext = os.path.splitext(file_path)
instance.data["representations"].append({
"name": ext[1:],
"ext": ext[1:],
"files": base_name,
"stagingDir": staging_dir,
})
# creating representation
_, ext = os.path.splitext(file_path)
instance.data["representations"].append({
"name": ext[1:],
"ext": ext[1:],
"files": base_name,
"stagingDir": staging_dir,
})
return

View file

@ -47,32 +47,42 @@ class ExtractReview(publish.Extractor):
layers = self._get_layers_from_image_instances(instance)
self.log.info("Layers image instance found: {}".format(layers))
repre_name = "jpg"
repre_skeleton = {
"name": repre_name,
"ext": "jpg",
"stagingDir": staging_dir,
"tags": self.jpg_options['tags'],
}
if instance.data["family"] != "review":
# enable creation of review, without this jpg review would clash
# with jpg of the image family
output_name = repre_name
repre_name = "{}_{}".format(repre_name, output_name)
repre_skeleton.update({"name": repre_name,
"outputName": output_name})
if self.make_image_sequence and len(layers) > 1:
self.log.info("Extract layers to image sequence.")
img_list = self._save_sequence_images(staging_dir, layers)
instance.data["representations"].append({
"name": "jpg",
"ext": "jpg",
"files": img_list,
repre_skeleton.update({
"frameStart": 0,
"frameEnd": len(img_list),
"fps": fps,
"stagingDir": staging_dir,
"tags": self.jpg_options['tags'],
"files": img_list,
})
instance.data["representations"].append(repre_skeleton)
processed_img_names = img_list
else:
self.log.info("Extract layers to flatten image.")
img_list = self._save_flatten_image(staging_dir, layers)
instance.data["representations"].append({
"name": "jpg",
"ext": "jpg",
"files": img_list, # cannot be [] for single frame
"stagingDir": staging_dir,
"tags": self.jpg_options['tags']
repre_skeleton.update({
"files": img_list,
})
instance.data["representations"].append(repre_skeleton)
processed_img_names = [img_list]
ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")

View file

@ -0,0 +1,43 @@
# -*- coding: utf-8 -*-
import pyblish.api
class CollectReviewInfo(pyblish.api.InstancePlugin):
"""Collect data required for review instances.
ExtractReview plugin requires frame start/end and fps on instance data,
which are missing on instances from TrayPublisher.
Warning:
This is a temporary solution to "make it work". It contains removed changes
from https://github.com/ynput/OpenPype/pull/4383, reduced to review
instances only.
"""
label = "Collect Review Info"
order = pyblish.api.CollectorOrder + 0.491
families = ["review"]
hosts = ["traypublisher"]
def process(self, instance):
asset_entity = instance.data.get("assetEntity")
if instance.data.get("frameStart") is not None or not asset_entity:
self.log.debug("Missing required data on instance")
return
asset_data = asset_entity["data"]
# Store collected data for logging
collected_data = {}
for key in (
"fps",
"frameStart",
"frameEnd",
"handleStart",
"handleEnd",
):
if key in instance.data or key not in asset_data:
continue
value = asset_data[key]
collected_data[key] = value
instance.data[key] = value
self.log.debug("Collected data: {}".format(str(collected_data)))

View file

@ -2,8 +2,10 @@ import os
import unreal
from openpype.settings import get_project_settings
from openpype.pipeline import Anatomy
from openpype.hosts.unreal.api import pipeline
from openpype.widgets.message_window import Window
queue = None
@ -32,11 +34,20 @@ def start_rendering():
"""
Start the rendering process.
"""
print("Starting rendering...")
unreal.log("Starting rendering...")
# Get selected sequences
assets = unreal.EditorUtilityLibrary.get_selected_assets()
if not assets:
Window(
parent=None,
title="No assets selected",
message="No assets selected. Select a render instance.",
level="warning")
raise RuntimeError(
"No assets selected. You need to select a render instance.")
# instances = pipeline.ls_inst()
instances = [
a for a in assets
@ -66,6 +77,13 @@ def start_rendering():
ar = unreal.AssetRegistryHelpers.get_asset_registry()
data = get_project_settings(project)
config = None
config_path = str(data.get("unreal").get("render_config_path"))
if config_path and unreal.EditorAssetLibrary.does_asset_exist(config_path):
unreal.log("Found saved render configuration")
config = ar.get_asset_by_object_path(config_path).get_asset()
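# If a saved configuration was found, it is copied onto each render job below.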
for i in inst_data:
sequence = ar.get_asset_by_object_path(i["sequence"]).get_asset()
@ -81,55 +99,80 @@ def start_rendering():
# Get all the sequences to render. If there are subsequences,
# add them and their frame ranges to the render list. We also
# use the names for the output paths.
for s in sequences:
subscenes = pipeline.get_subsequences(s.get('sequence'))
for seq in sequences:
subscenes = pipeline.get_subsequences(seq.get('sequence'))
if subscenes:
for ss in subscenes:
for sub_seq in subscenes:
sequences.append({
"sequence": ss.get_sequence(),
"output": (f"{s.get('output')}/"
f"{ss.get_sequence().get_name()}"),
"sequence": sub_seq.get_sequence(),
"output": (f"{seq.get('output')}/"
f"{sub_seq.get_sequence().get_name()}"),
"frame_range": (
ss.get_start_frame(), ss.get_end_frame())
sub_seq.get_start_frame(), sub_seq.get_end_frame())
})
else:
# Avoid rendering camera sequences
if "_camera" not in s.get('sequence').get_name():
render_list.append(s)
if "_camera" not in seq.get('sequence').get_name():
render_list.append(seq)
# Create the rendering jobs and add them to the queue.
for r in render_list:
for render_setting in render_list:
job = queue.allocate_new_job(unreal.MoviePipelineExecutorJob)
job.sequence = unreal.SoftObjectPath(i["master_sequence"])
job.map = unreal.SoftObjectPath(i["master_level"])
job.author = "OpenPype"
# If we have a saved configuration, copy it to the job.
if config:
job.get_configuration().copy_from(config)
# User data could be used to pass data to the job, that can be
# read in the job's OnJobFinished callback. We could,
# for instance, pass the AvalonPublishInstance's path to the job.
# job.user_data = ""
output_dir = render_setting.get('output')
shot_name = render_setting.get('sequence').get_name()
settings = job.get_configuration().find_or_add_setting_by_class(
unreal.MoviePipelineOutputSetting)
settings.output_resolution = unreal.IntPoint(1920, 1080)
settings.custom_start_frame = r.get("frame_range")[0]
settings.custom_end_frame = r.get("frame_range")[1]
settings.custom_start_frame = render_setting.get("frame_range")[0]
settings.custom_end_frame = render_setting.get("frame_range")[1]
settings.use_custom_playback_range = True
settings.file_name_format = "{sequence_name}.{frame_number}"
settings.output_directory.path = f"{render_dir}/{r.get('output')}"
renderPass = job.get_configuration().find_or_add_setting_by_class(
unreal.MoviePipelineDeferredPassBase)
renderPass.disable_multisample_effects = True
settings.file_name_format = f"{shot_name}" + ".{frame_number}"
settings.output_directory.path = f"{render_dir}/{output_dir}"
job.get_configuration().find_or_add_setting_by_class(
unreal.MoviePipelineImageSequenceOutput_PNG)
unreal.MoviePipelineDeferredPassBase)
render_format = data.get("unreal").get("render_format", "png")
if render_format == "png":
job.get_configuration().find_or_add_setting_by_class(
unreal.MoviePipelineImageSequenceOutput_PNG)
elif render_format == "exr":
job.get_configuration().find_or_add_setting_by_class(
unreal.MoviePipelineImageSequenceOutput_EXR)
elif render_format == "jpg":
job.get_configuration().find_or_add_setting_by_class(
unreal.MoviePipelineImageSequenceOutput_JPG)
elif render_format == "bmp":
job.get_configuration().find_or_add_setting_by_class(
unreal.MoviePipelineImageSequenceOutput_BMP)
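# "render_format" comes from project settings; as written, any value other
# than png/exr/jpg/bmp leaves the job without an image sequence output.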
# If there are jobs in the queue, start the rendering process.
if queue.get_jobs():
global executor
executor = unreal.MoviePipelinePIEExecutor()
preroll_frames = data.get("unreal").get("preroll_frames", 0)
settings = unreal.MoviePipelinePIEExecutorSettings()
settings.set_editor_property(
"initial_delay_frame_count", preroll_frames)
executor.on_executor_finished_delegate.add_callable_unique(
_queue_finish_callback)
executor.on_individual_job_finished_delegate.add_callable_unique(

View file

@ -1,14 +1,22 @@
# -*- coding: utf-8 -*-
from pathlib import Path
import unreal
from openpype.pipeline import CreatorError
from openpype.hosts.unreal.api.pipeline import (
get_subsequences
UNREAL_VERSION,
create_folder,
get_subsequences,
)
from openpype.hosts.unreal.api.plugin import (
UnrealAssetCreator
)
from openpype.lib import UILabelDef
from openpype.lib import (
UILabelDef,
UISeparatorDef,
BoolDef,
NumberDef
)
class CreateRender(UnrealAssetCreator):
@ -19,7 +27,92 @@ class CreateRender(UnrealAssetCreator):
family = "render"
icon = "eye"
def create(self, subset_name, instance_data, pre_create_data):
def create_instance(
self, instance_data, subset_name, pre_create_data,
selected_asset_path, master_seq, master_lvl, seq_data
):
instance_data["members"] = [selected_asset_path]
instance_data["sequence"] = selected_asset_path
instance_data["master_sequence"] = master_seq
instance_data["master_level"] = master_lvl
instance_data["output"] = seq_data.get('output')
instance_data["frameStart"] = seq_data.get('frame_range')[0]
instance_data["frameEnd"] = seq_data.get('frame_range')[1]
super(CreateRender, self).create(
subset_name,
instance_data,
pre_create_data)
def create_with_new_sequence(
self, subset_name, instance_data, pre_create_data
):
# If the option to create a new level sequence is selected,
# create a new level sequence and a master level.
root = f"/Game/OpenPype/Sequences"
# Create a new folder for the sequence in root
sequence_dir_name = create_folder(root, subset_name)
sequence_dir = f"{root}/{sequence_dir_name}"
unreal.log_warning(f"sequence_dir: {sequence_dir}")
# Create the level sequence
asset_tools = unreal.AssetToolsHelpers.get_asset_tools()
seq = asset_tools.create_asset(
asset_name=subset_name,
package_path=sequence_dir,
asset_class=unreal.LevelSequence,
factory=unreal.LevelSequenceFactoryNew())
seq.set_playback_start(pre_create_data.get("start_frame"))
seq.set_playback_end(pre_create_data.get("end_frame"))
pre_create_data["members"] = [seq.get_path_name()]
unreal.EditorAssetLibrary.save_asset(seq.get_path_name())
# Create the master level
if UNREAL_VERSION.major >= 5:
curr_level = unreal.LevelEditorSubsystem().get_current_level()
else:
world = unreal.EditorLevelLibrary.get_editor_world()
levels = unreal.EditorLevelUtils.get_levels(world)
curr_level = levels[0] if len(levels) else None
if not curr_level:
raise RuntimeError("No level loaded.")
curr_level_path = curr_level.get_outer().get_path_name()
# If the level path does not start with "/Game/", the current
# level is a temporary, unsaved level.
if curr_level_path.startswith("/Game/"):
if UNREAL_VERSION.major >= 5:
unreal.LevelEditorSubsystem().save_current_level()
else:
unreal.EditorLevelLibrary.save_current_level()
ml_path = f"{sequence_dir}/{subset_name}_MasterLevel"
if UNREAL_VERSION.major >= 5:
unreal.LevelEditorSubsystem().new_level(ml_path)
else:
unreal.EditorLevelLibrary.new_level(ml_path)
seq_data = {
"sequence": seq,
"output": f"{seq.get_name()}",
"frame_range": (
seq.get_playback_start(),
seq.get_playback_end())}
self.create_instance(
instance_data, subset_name, pre_create_data,
seq.get_path_name(), seq.get_path_name(), ml_path, seq_data)
def create_from_existing_sequence(
self, subset_name, instance_data, pre_create_data
):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
sel_objects = unreal.EditorUtilityLibrary.get_selected_assets()
@ -27,8 +120,8 @@ class CreateRender(UnrealAssetCreator):
a.get_path_name() for a in sel_objects
if a.get_class().get_name() == "LevelSequence"]
if not selection:
raise CreatorError("Please select at least one Level Sequence.")
if len(selection) == 0:
raise RuntimeError("Please select at least one Level Sequence.")
seq_data = None
@ -42,28 +135,38 @@ class CreateRender(UnrealAssetCreator):
f"Skipping {selected_asset.get_name()}. It isn't a Level "
"Sequence.")
# The asset name is the third element of the path which
# contains the map.
# To take the asset name, we remove from the path the prefix
# "/Game/OpenPype/" and then we split the path by "/".
sel_path = selected_asset_path
asset_name = sel_path.replace("/Game/OpenPype/", "").split("/")[0]
if pre_create_data.get("use_hierarchy"):
# The asset name is the third element of the path which
# contains the map.
# To take the asset name, we remove from the path the prefix
# "/Game/OpenPype/" and then we split the path by "/".
sel_path = selected_asset_path
asset_name = sel_path.replace(
"/Game/OpenPype/", "").split("/")[0]
search_path = f"/Game/OpenPype/{asset_name}"
else:
search_path = Path(selected_asset_path).parent.as_posix()
# Get the master sequence and the master level.
# There should be only one sequence and one level in the directory.
ar_filter = unreal.ARFilter(
class_names=["LevelSequence"],
package_paths=[f"/Game/OpenPype/{asset_name}"],
recursive_paths=False)
sequences = ar.get_assets(ar_filter)
master_seq = sequences[0].get_asset().get_path_name()
master_seq_obj = sequences[0].get_asset()
ar_filter = unreal.ARFilter(
class_names=["World"],
package_paths=[f"/Game/OpenPype/{asset_name}"],
recursive_paths=False)
levels = ar.get_assets(ar_filter)
master_lvl = levels[0].get_asset().get_path_name()
try:
ar_filter = unreal.ARFilter(
class_names=["LevelSequence"],
package_paths=[search_path],
recursive_paths=False)
sequences = ar.get_assets(ar_filter)
master_seq = sequences[0].get_asset().get_path_name()
master_seq_obj = sequences[0].get_asset()
ar_filter = unreal.ARFilter(
class_names=["World"],
package_paths=[search_path],
recursive_paths=False)
levels = ar.get_assets(ar_filter)
master_lvl = levels[0].get_asset().get_path_name()
except IndexError:
raise RuntimeError(
f"Could not find the hierarchy for the selected sequence.")
# If the selected asset is the master sequence, we get its data
# and then we create the instance for the master sequence.
@ -79,7 +182,8 @@ class CreateRender(UnrealAssetCreator):
master_seq_obj.get_playback_start(),
master_seq_obj.get_playback_end())}
if selected_asset_path == master_seq:
if (selected_asset_path == master_seq or
pre_create_data.get("use_hierarchy")):
seq_data = master_seq_data
else:
seq_data_list = [master_seq_data]
@ -119,20 +223,54 @@ class CreateRender(UnrealAssetCreator):
"sub-sequence of the master sequence.")
continue
instance_data["members"] = [selected_asset_path]
instance_data["sequence"] = selected_asset_path
instance_data["master_sequence"] = master_seq
instance_data["master_level"] = master_lvl
instance_data["output"] = seq_data.get('output')
instance_data["frameStart"] = seq_data.get('frame_range')[0]
instance_data["frameEnd"] = seq_data.get('frame_range')[1]
self.create_instance(
instance_data, subset_name, pre_create_data,
selected_asset_path, master_seq, master_lvl, seq_data)
super(CreateRender, self).create(
subset_name,
instance_data,
pre_create_data)
def create(self, subset_name, instance_data, pre_create_data):
if pre_create_data.get("create_seq"):
self.create_with_new_sequence(
subset_name, instance_data, pre_create_data)
else:
self.create_from_existing_sequence(
subset_name, instance_data, pre_create_data)
def get_pre_create_attr_defs(self):
return [
UILabelDef("Select the sequence to render.")
UILabelDef(
"Select a Level Sequence to render or create a new one."
),
BoolDef(
"create_seq",
label="Create a new Level Sequence",
default=False
),
UILabelDef(
"WARNING: If you create a new Level Sequence, the current\n"
"level will be saved and a new Master Level will be created."
),
NumberDef(
"start_frame",
label="Start Frame",
default=0,
minimum=-999999,
maximum=999999
),
NumberDef(
"end_frame",
label="Start Frame",
default=150,
minimum=-999999,
maximum=999999
),
UISeparatorDef(),
UILabelDef(
"The following settings are valid only if you are not\n"
"creating a new sequence."
),
BoolDef(
"use_hierarchy",
label="Use Hierarchy",
default=False
),
]

View file

@ -0,0 +1,42 @@
import clique
import pyblish.api
class ValidateSequenceFrames(pyblish.api.InstancePlugin):
"""Ensure the sequence of frames is complete
The files found in the folder are checked against the frameStart and
frameEnd of the instance. If the first or last file does not correspond
to the first or last frame, it is flagged as invalid.
"""
order = pyblish.api.ValidatorOrder
label = "Validate Sequence Frames"
families = ["render"]
hosts = ["unreal"]
optional = True
def process(self, instance):
representations = instance.data.get("representations")
for repr in representations:
data = instance.data.get("assetEntity", {}).get("data", {})
patterns = [clique.PATTERNS["frames"]]
collections, remainder = clique.assemble(
repr["files"], minimum_items=1, patterns=patterns)
assert not remainder, "Must not have remainder"
assert len(collections) == 1, "Must detect single collection"
collection = collections[0]
frames = list(collection.indexes)
current_range = (frames[0], frames[-1])
required_range = (data["frameStart"],
data["frameEnd"])
if current_range != required_range:
raise ValueError(f"Invalid frame range: {current_range} - "
f"expected: {required_range}")
missing = collection.holes().indexes
assert not missing, "Missing frames: %s" % (missing,)

View file

@ -1,16 +1,19 @@
"""These lib functions are primarily for development purposes.
"""These lib functions are for development purposes.
WARNING: This is not meant for production data.
WARNING:
This is not meant for production data. Please don't write code which is
dependent on functionality here.
Goal is to be able create package of current state of project with related
documents from mongo and files from disk to zip file and then be able recreate
the project based on the zip.
The goal is to be able to create a package of the current state of a project,
with related documents from mongo and files from disk, into a zip file, and
then be able to recreate the project from that zip.
This gives the ability to create a project where changes and tests can be done.
Keep in mind that to be able create a package of project has few requirements.
Possible requirement should be listed in 'pack_project' function.
Keep in mind that creating a package of a project has a few requirements.
Possible requirements should be listed in the 'pack_project' function.
"""
import os
import json
import platform
@ -19,16 +22,12 @@ import shutil
import datetime
import zipfile
from bson.json_util import (
loads,
dumps,
CANONICAL_JSON_OPTIONS
from openpype.client.mongo import (
load_json_file,
get_project_connection,
replace_project_documents,
store_project_documents,
)
from openpype.client import (
get_project,
get_whole_project,
)
from openpype.pipeline import AvalonMongoDB
DOCUMENTS_FILE_NAME = "database"
METADATA_FILE_NAME = "metadata"
@ -43,7 +42,52 @@ def add_timestamp(filepath):
return new_base + ext
def pack_project(project_name, destination_dir=None):
def get_project_document(project_name, database_name=None):
"""Query project document.
Function 'get_project' from the client api cannot be used as it does not
allow changing which 'database_name' is used.
Args:
project_name (str): Name of project.
database_name (Optional[str]): Name of the mongo database in which to
look for the project.
Returns:
Union[dict[str, Any], None]: Project document or None.
"""
col = get_project_connection(project_name, database_name)
return col.find_one({"type": "project"})
def _pack_files_to_zip(zip_stream, source_path, root_path):
"""Pack files to a zip stream.
Args:
zip_stream (zipfile.ZipFile): Stream to a zipfile.
source_path (str): Path to a directory where files are.
root_path (str): Path to a directory which is used for calculation
of relative path.
"""
for root, _, filenames in os.walk(source_path):
for filename in filenames:
filepath = os.path.join(root, filename)
# TODO add one more folder
archive_name = os.path.join(
PROJECT_FILES_DIR,
os.path.relpath(filepath, root_path)
)
zip_stream.write(filepath, archive_name)
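# Inside the archive every file lands under
# "<PROJECT_FILES_DIR>/<path relative to root_path>", mirroring the on-disk
# layout so unpacking can move the tree back under a (possibly new) root.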
def pack_project(
project_name,
destination_dir=None,
only_documents=False,
database_name=None
):
"""Make a package of a project with mongo documents and files.
This function has a few restrictions:
@ -52,13 +96,18 @@ def pack_project(project_name, destination_dir=None):
"{root[...]}/{project[name]}"
Args:
project_name(str): Project that should be packaged.
destination_dir(str): Optional path where zip will be stored. Project's
root is used if not passed.
project_name (str): Project that should be packaged.
destination_dir (Optional[str]): Optional path where zip will be
stored. Project's root is used if not passed.
only_documents (Optional[bool]): Pack only Mongo documents and skip
files.
database_name (Optional[str]): Custom database name from which the
project is queried.
"""
print("Creating package of project \"{}\"".format(project_name))
# Validate existence of project
project_doc = get_project(project_name)
project_doc = get_project_document(project_name, database_name)
if not project_doc:
raise ValueError("Project \"{}\" was not found in database".format(
project_name
@ -119,12 +168,7 @@ def pack_project(project_name, destination_dir=None):
temp_docs_json = s.name
# Query all project documents and store them to temp json
docs = list(get_whole_project(project_name))
data = dumps(
docs, json_options=CANONICAL_JSON_OPTIONS
)
with open(temp_docs_json, "w") as stream:
stream.write(data)
store_project_documents(project_name, temp_docs_json, database_name)
print("Packing files into zip")
# Write all to zip file
@ -133,16 +177,10 @@ def pack_project(project_name, destination_dir=None):
zip_stream.write(temp_metadata_json, METADATA_FILE_NAME + ".json")
# Add database documents
zip_stream.write(temp_docs_json, DOCUMENTS_FILE_NAME + ".json")
# Add project files to zip
for root, _, filenames in os.walk(project_source_path):
for filename in filenames:
filepath = os.path.join(root, filename)
# TODO add one more folder
archive_name = os.path.join(
PROJECT_FILES_DIR,
os.path.relpath(filepath, root_path)
)
zip_stream.write(filepath, archive_name)
if not only_documents:
_pack_files_to_zip(zip_stream, project_source_path, root_path)
print("Cleaning up")
# Cleanup
@ -152,80 +190,30 @@ def pack_project(project_name, destination_dir=None):
print("*** Packing finished ***")
def unpack_project(path_to_zip, new_root=None):
"""Unpack project zip file to recreate project.
def _unpack_project_files(unzip_dir, root_path, project_name):
"""Move project files from unarchived temp folder to new root.
Unpacking is skipped if source files are not available in the zip. That can
happen if nothing was published yet or only documents were stored in the
package.
Args:
path_to_zip(str): Path to zip which was created using 'pack_project'
function.
new_root(str): Optional way how to set different root path for unpacked
project.
unzip_dir (str): Location where zip was unzipped.
root_path (str): Path to new root.
project_name (str): Name of project.
"""
print("Unpacking project from zip {}".format(path_to_zip))
if not os.path.exists(path_to_zip):
print("Zip file does not exists: {}".format(path_to_zip))
src_project_files_dir = os.path.join(
unzip_dir, PROJECT_FILES_DIR, project_name
)
# Skip if files are not in the zip
if not os.path.exists(src_project_files_dir):
return
tmp_dir = tempfile.mkdtemp(prefix="unpack_")
print("Zip is extracted to temp: {}".format(tmp_dir))
with zipfile.ZipFile(path_to_zip, "r") as zip_stream:
zip_stream.extractall(tmp_dir)
metadata_json_path = os.path.join(tmp_dir, METADATA_FILE_NAME + ".json")
with open(metadata_json_path, "r") as stream:
metadata = json.load(stream)
docs_json_path = os.path.join(tmp_dir, DOCUMENTS_FILE_NAME + ".json")
with open(docs_json_path, "r") as stream:
content = stream.readlines()
docs = loads("".join(content))
low_platform = platform.system().lower()
project_name = metadata["project_name"]
source_root = metadata["root"]
root_path = source_root[low_platform]
# Drop existing collection
dbcon = AvalonMongoDB()
database = dbcon.database
if project_name in database.list_collection_names():
database.drop_collection(project_name)
print("Removed existing project collection")
print("Creating project documents ({})".format(len(docs)))
# Create new collection with loaded docs
collection = database[project_name]
collection.insert_many(docs)
# Skip change of root if is the same as the one stored in metadata
if (
new_root
and (os.path.normpath(new_root) == os.path.normpath(root_path))
):
new_root = None
if new_root:
print("Using different root path {}".format(new_root))
root_path = new_root
project_doc = get_project(project_name)
roots = project_doc["config"]["roots"]
key = tuple(roots.keys())[0]
update_key = "config.roots.{}.{}".format(key, low_platform)
collection.update_one(
{"_id": project_doc["_id"]},
{"$set": {
update_key: new_root
}}
)
# Make sure root path exists
if not os.path.exists(root_path):
os.makedirs(root_path)
src_project_files_dir = os.path.join(
tmp_dir, PROJECT_FILES_DIR, project_name
)
dst_project_files_dir = os.path.normpath(
os.path.join(root_path, project_name)
)
@ -241,8 +229,83 @@ def unpack_project(path_to_zip, new_root=None):
))
shutil.move(src_project_files_dir, dst_project_files_dir)
def unpack_project(
path_to_zip, new_root=None, database_only=None, database_name=None
):
"""Unpack project zip file to recreate project.
Args:
path_to_zip (str): Path to zip which was created using 'pack_project'
function.
new_root (str): Optional way to set a different root path for the
unpacked project.
database_only (Optional[bool]): Unpack only database from zip.
database_name (str): Name of database where project will be recreated.
"""
if database_only is None:
database_only = False
print("Unpacking project from zip {}".format(path_to_zip))
if not os.path.exists(path_to_zip):
print("Zip file does not exists: {}".format(path_to_zip))
return
tmp_dir = tempfile.mkdtemp(prefix="unpack_")
print("Zip is extracted to temp: {}".format(tmp_dir))
with zipfile.ZipFile(path_to_zip, "r") as zip_stream:
if database_only:
for filename in (
"{}.json".format(METADATA_FILE_NAME),
"{}.json".format(DOCUMENTS_FILE_NAME),
):
zip_stream.extract(filename, tmp_dir)
else:
zip_stream.extractall(tmp_dir)
metadata_json_path = os.path.join(tmp_dir, METADATA_FILE_NAME + ".json")
with open(metadata_json_path, "r") as stream:
metadata = json.load(stream)
docs_json_path = os.path.join(tmp_dir, DOCUMENTS_FILE_NAME + ".json")
docs = load_json_file(docs_json_path)
low_platform = platform.system().lower()
project_name = metadata["project_name"]
source_root = metadata["root"]
root_path = source_root[low_platform]
# Replace existing project documents
replace_project_documents(project_name, docs, database_name)
print("Creating project documents ({})".format(len(docs)))
# Skip change of root if it is the same as the one stored in metadata
if (
new_root
and (os.path.normpath(new_root) == os.path.normpath(root_path))
):
new_root = None
if new_root:
print("Using different root path {}".format(new_root))
root_path = new_root
project_doc = get_project_document(project_name)
roots = project_doc["config"]["roots"]
key = tuple(roots.keys())[0]
update_key = "config.roots.{}.{}".format(key, low_platform)
collection = get_project_connection(project_name, database_name)
collection.update_one(
{"_id": project_doc["_id"]},
{"$set": {
update_key: new_root
}}
)
_unpack_project_files(tmp_dir, root_path, project_name)
# Cleanup
print("Cleaning up")
shutil.rmtree(tmp_dir)
dbcon.uninstall()
print("*** Unpack finished ***")

View file

@ -1,15 +1,20 @@
import os
import shutil
from time import sleep
from openpype.client.entities import (
get_last_version_by_subset_id,
get_representations,
get_subsets,
get_project
)
from openpype.lib import PreLaunchHook
from openpype.lib.local_settings import get_local_site_id
from openpype.lib.profiles_filtering import filter_profiles
from openpype.pipeline.load.utils import get_representation_path
from openpype.modules.sync_server.sync_server import (
download_last_published_workfile,
)
from openpype.pipeline.template_data import get_template_data
from openpype.pipeline.workfile.path_resolving import (
get_workfile_template_key,
)
from openpype.settings.lib import get_project_settings
@ -22,7 +27,11 @@ class CopyLastPublishedWorkfile(PreLaunchHook):
# Before `AddLastWorkfileToLaunchArgs`
order = -1
app_groups = ["blender", "photoshop", "tvpaint", "aftereffects"]
# Any DCC can be used here, except TrayPublisher and other special hosts
app_groups = ["blender", "photoshop", "tvpaint", "aftereffects",
"nuke", "nukeassist", "nukex", "hiero", "nukestudio",
"maya", "harmony", "celaction", "flame", "fusion",
"houdini"]
def execute(self):
"""Check if local workfile doesn't exist, else copy it.
@ -31,11 +40,11 @@ class CopyLastPublishedWorkfile(PreLaunchHook):
2- Check if workfile in work area doesn't exist
3- Check if published workfile exists and is copied locally in publish
4- Substitute the copied published workfile as the first workfile,
with its version incremented by 1
Returns:
None: This is a void method.
"""
sync_server = self.modules_manager.get("sync_server")
if not sync_server or not sync_server.enabled:
self.log.debug("Sync server module is not enabled or available")
@ -53,6 +62,7 @@ class CopyLastPublishedWorkfile(PreLaunchHook):
# Get data
project_name = self.data["project_name"]
asset_name = self.data["asset_name"]
task_name = self.data["task_name"]
task_type = self.data["task_type"]
host_name = self.application.host_name
@ -68,6 +78,8 @@ class CopyLastPublishedWorkfile(PreLaunchHook):
"hosts": host_name,
}
last_workfile_settings = filter_profiles(profiles, filter_data)
if not last_workfile_settings:
return
use_last_published_workfile = last_workfile_settings.get(
"use_last_published_workfile"
)
@ -92,57 +104,27 @@ class CopyLastPublishedWorkfile(PreLaunchHook):
)
return
max_retries = int((sync_server.sync_project_settings[project_name]
["config"]
["retry_cnt"]))
self.log.info("Trying to fetch last published workfile...")
project_doc = self.data.get("project_doc")
asset_doc = self.data.get("asset_doc")
anatomy = self.data.get("anatomy")
# Check it can proceed
if not project_doc and not asset_doc:
return
context_filters = {
"asset": asset_name,
"family": "workfile",
"task": {"name": task_name, "type": task_type}
}
# Get subset id
subset_id = next(
(
subset["_id"]
for subset in get_subsets(
project_name,
asset_ids=[asset_doc["_id"]],
fields=["_id", "data.family", "data.families"],
)
if subset["data"].get("family") == "workfile"
# Legacy compatibility
or "workfile" in subset["data"].get("families", {})
),
None,
)
if not subset_id:
self.log.debug(
'No any workfile for asset "{}".'.format(asset_doc["name"])
)
return
workfile_representations = list(get_representations(
project_name,
context_filters=context_filters
))
# Get workfile representation
last_version_doc = get_last_version_by_subset_id(
project_name, subset_id, fields=["_id"]
)
if not last_version_doc:
self.log.debug("Subset does not have any versions")
return
workfile_representation = next(
(
representation
for representation in get_representations(
project_name, version_ids=[last_version_doc["_id"]]
)
if representation["context"]["task"]["name"] == task_name
),
None,
)
if not workfile_representation:
if not workfile_representations:
self.log.debug(
'No published workfile for task "{}" and host "{}".'.format(
task_name, host_name
@ -150,28 +132,55 @@ class CopyLastPublishedWorkfile(PreLaunchHook):
)
return
local_site_id = get_local_site_id()
sync_server.add_site(
project_name,
workfile_representation["_id"],
local_site_id,
force=True,
priority=99,
reset_timer=True,
filtered_repres = filter(
lambda r: r["context"].get("version") is not None,
workfile_representations
)
while not sync_server.is_representation_on_site(
project_name, workfile_representation["_id"], local_site_id
):
sleep(5)
# Get paths
published_workfile_path = get_representation_path(
workfile_representation, root=anatomy.roots
workfile_representation = max(
filtered_repres, key=lambda r: r["context"]["version"]
)
local_workfile_dir = os.path.dirname(last_workfile)
# Copy file and substitute path
self.data["last_workfile_path"] = shutil.copy(
published_workfile_path, local_workfile_dir
last_published_workfile_path = download_last_published_workfile(
host_name,
project_name,
task_name,
workfile_representation,
max_retries,
anatomy=anatomy
)
if not last_published_workfile_path:
self.log.debug(
"Couldn't download {}".format(last_published_workfile_path)
)
return
project_doc = self.data["project_doc"]
project_settings = self.data["project_settings"]
template_key = get_workfile_template_key(
task_name, host_name, project_name, project_settings
)
# Get workfile data
workfile_data = get_template_data(
project_doc, asset_doc, task_name, host_name
)
extension = last_published_workfile_path.split(".")[-1]
workfile_data["version"] = (
workfile_representation["context"]["version"] + 1)
workfile_data["ext"] = extension
anatomy_result = anatomy.format(workfile_data)
local_workfile_path = anatomy_result[template_key]["path"]
# Copy last published workfile to local workfile directory
shutil.copy(
last_published_workfile_path,
local_workfile_path,
)
self.data["last_workfile_path"] = local_workfile_path
# Keep source filepath for further path conformation
self.data["source_filepath"] = last_published_workfile_path

View file

@ -3,10 +3,15 @@ import os
import asyncio
import threading
import concurrent.futures
from concurrent.futures._base import CancelledError
from time import sleep
from .providers import lib
from openpype.client.entity_links import get_linked_representation_id
from openpype.lib import Logger
from openpype.lib.local_settings import get_local_site_id
from openpype.modules.base import ModulesManager
from openpype.pipeline import Anatomy
from openpype.pipeline.load.utils import get_representation_path_with_anatomy
from .utils import SyncStatus, ResumableError
@ -189,6 +194,98 @@ def _site_is_working(module, project_name, site_name, site_config):
return handler.is_active()
def download_last_published_workfile(
host_name: str,
project_name: str,
task_name: str,
workfile_representation: dict,
max_retries: int,
anatomy: Anatomy = None,
) -> str:
"""Download the last published workfile
Args:
host_name (str): Host name.
project_name (str): Project name.
task_name (str): Task name.
workfile_representation (dict): Workfile representation.
max_retries (int): consider the file download failed only after this many attempts
anatomy (Anatomy, optional): Anatomy (Used for optimization).
Defaults to None.
Returns:
str: last published workfile path localized
"""
if not anatomy:
anatomy = Anatomy(project_name)
# Get sync server module
sync_server = ModulesManager().modules_by_name.get("sync_server")
if not sync_server or not sync_server.enabled:
print("Sync server module is disabled or unavailable.")
return
if not workfile_representation:
print(
"Not published workfile for task '{}' and host '{}'.".format(
task_name, host_name
)
)
return
last_published_workfile_path = get_representation_path_with_anatomy(
workfile_representation, anatomy
)
if (not last_published_workfile_path or
not os.path.exists(last_published_workfile_path)):
return
# If representation isn't available on remote site, then return.
if not sync_server.is_representation_on_site(
project_name,
workfile_representation["_id"],
sync_server.get_remote_site(project_name),
):
print(
"Representation for task '{}' and host '{}'".format(
task_name, host_name
)
)
return
# Get local site
local_site_id = get_local_site_id()
# Add workfile representation to local site
representation_ids = {workfile_representation["_id"]}
representation_ids.update(
get_linked_representation_id(
project_name, repre_id=workfile_representation["_id"]
)
)
for repre_id in representation_ids:
if not sync_server.is_representation_on_site(project_name, repre_id,
local_site_id):
sync_server.add_site(
project_name,
repre_id,
local_site_id,
force=True,
priority=99
)
sync_server.reset_timer()
print("Starting to download:{}".format(last_published_workfile_path))
# While representation unavailable locally, wait.
while not sync_server.is_representation_on_site(
project_name, workfile_representation["_id"], local_site_id,
max_retries=max_retries
):
sleep(5)
return last_published_workfile_path
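A hedged usage sketch of `download_last_published_workfile` (not part of the diff), e.g. from a prelaunch hook; the project, task and retry values are placeholders and the representation is assumed to be queried beforehand:

```python
# A minimal sketch. Assumes 'workfile_representation' was queried earlier
# (e.g. via 'get_representations') and that 3 retries are acceptable.
from openpype.modules.sync_server.sync_server import (
    download_last_published_workfile,
)

local_path = download_last_published_workfile(
    host_name="maya",
    project_name="test_project",
    task_name="modeling",
    workfile_representation=workfile_representation,
    max_retries=3,
)
if local_path:
    print("Last published workfile localized to: {}".format(local_path))
```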
class SyncServerThread(threading.Thread):
"""
Separate thread running synchronization server with asyncio loop.
@ -358,7 +455,6 @@ class SyncServerThread(threading.Thread):
duration = time.time() - start_time
self.log.debug("One loop took {:.2f}s".format(duration))
delay = self.module.get_loop_delay(project_name)
self.log.debug(
"Waiting for {} seconds to new loop".format(delay)
@ -370,8 +466,8 @@ class SyncServerThread(threading.Thread):
self.log.warning(
"ConnectionResetError in sync loop, trying next loop",
exc_info=True)
except CancelledError:
# just stopping server
except asyncio.exceptions.CancelledError:
# cancelling timer
pass
except ResumableError:
self.log.warning(

View file

@ -838,6 +838,18 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
return ret_dict
def get_launch_hook_paths(self):
"""Implementation for applications launch hooks.
Returns:
(str): full absolute path to the directory with hooks for the module
"""
return os.path.join(
os.path.dirname(os.path.abspath(__file__)),
"launch_hooks"
)
# Needs to be refactored after Settings are updated
# # Methods for Settings to get appropriate values to fill forms
# def get_configurable_items(self, scope=None):
@ -1045,9 +1057,23 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
self.sync_server_thread.reset_timer()
def is_representation_on_site(
self, project_name, representation_id, site_name
self, project_name, representation_id, site_name, max_retries=None
):
"""Checks if 'representation_id' has all files avail. on 'site_name'"""
"""Checks if 'representation_id' has all files avail. on 'site_name'
Args:
project_name (str)
representation_id (str)
site_name (str)
max_retries (int) (optional) - provide only if the method is used in a while
loop, to bail out
Returns:
(bool): True if 'representation_id' has all files correctly on the
'site_name'
Raises:
(ValueError): Only if 'max_retries' is provided and the upload/download
failed too many times, to limit an infinite loop check.
"""
representation = get_representation_by_id(project_name,
representation_id,
fields=["_id", "files"])
@ -1060,6 +1086,11 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
if site["name"] != site_name:
continue
if max_retries:
tries = self._get_tries_count_from_rec(site)
if tries >= max_retries:
raise ValueError("Failed too many times")
if (site.get("progress") or site.get("error") or
not site.get("created_dt")):
return False
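A hedged sketch of the `max_retries` bail-out described in the docstring above (not part of the diff); the `sync_server` instance, representation id and local site id are assumed to be available:

```python
# Illustrates the intended while-loop usage where 'max_retries' turns a
# stuck transfer into a ValueError instead of looping forever.
from time import sleep

try:
    while not sync_server.is_representation_on_site(
        "test_project", repre_id, local_site_id, max_retries=3
    ):
        sleep(5)
except ValueError:
    # Upload/download failed too many times - give up on this representation.
    pass
```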

View file

@ -45,7 +45,7 @@ class PublishValidationError(Exception):
def __init__(self, message, title=None, description=None, detail=None):
self.message = message
self.title = title or "< Missing title >"
self.title = title
self.description = description or message
self.detail = detail
super(PublishValidationError, self).__init__(message)

View file

@ -49,7 +49,12 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin):
collection = collections[0]
frames = list(collection.indexes)
if instance.data.get("slate"):
# Slate is not part of the frame range
frames = frames[1:]
current_range = (frames[0], frames[-1])
required_range = (instance.data["frameStart"],
instance.data["frameEnd"])

View file

@ -353,12 +353,12 @@ class PypeCommands:
version_packer = VersionRepacker(directory)
version_packer.process()
def pack_project(self, project_name, dirpath):
def pack_project(self, project_name, dirpath, database_only):
from openpype.lib.project_backpack import pack_project
pack_project(project_name, dirpath)
pack_project(project_name, dirpath, database_only)
def unpack_project(self, zip_filepath, new_root):
def unpack_project(self, zip_filepath, new_root, database_only):
from openpype.lib.project_backpack import unpack_project
unpack_project(zip_filepath, new_root)
unpack_project(zip_filepath, new_root, database_only)

View file

@ -554,7 +554,7 @@
"publish_mip_map": true
},
"CreateAnimation": {
"enabled": true,
"enabled": false,
"write_color_sets": false,
"write_face_sets": false,
"include_parent_hierarchy": false,
@ -1459,8 +1459,9 @@
]
},
"reference_loader": {
"namespace": "{asset_name}_{subset}_##",
"group_name": "_GRP"
"namespace": "{asset_name}_{subset}_##_",
"group_name": "_GRP",
"display_handle": true
}
},
"workfile_build": {

View file

@ -10,23 +10,40 @@
}
},
"create": {
"CreateImage": {
"defaults": [
"ImageCreator": {
"enabled": true,
"active_on_create": true,
"mark_for_review": false,
"default_variants": [
"Main"
]
},
"AutoImageCreator": {
"enabled": false,
"active_on_create": true,
"mark_for_review": false,
"default_variant": ""
},
"ReviewCreator": {
"enabled": true,
"active_on_create": true,
"default_variant": ""
},
"WorkfileCreator": {
"enabled": true,
"active_on_create": true,
"default_variant": "Main"
}
},
"publish": {
"CollectColorCodedInstances": {
"enabled": true,
"create_flatten_image": "no",
"flatten_subset_template": "",
"color_code_mapping": []
},
"CollectInstances": {
"flatten_subset_template": ""
},
"CollectReview": {
"publish": true
"enabled": true
},
"CollectVersion": {
"enabled": false

View file

@ -11,6 +11,9 @@
},
"level_sequences_for_layouts": false,
"delete_unmatched_assets": false,
"render_config_path": "",
"preroll_frames": 0,
"render_format": "png",
"project_setup": {
"dev_mode": true
}

View file

@ -31,16 +31,126 @@
{
"type": "dict",
"collapsible": true,
"key": "CreateImage",
"key": "ImageCreator",
"label": "Create Image",
"checkbox_key": "enabled",
"children": [
{
"type": "label",
"label": "Manually create instance from layer or group of layers. \n Separate review could be created for this image to be sent to Asset Management System."
},
{
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "boolean",
"key": "active_on_create",
"label": "Active by default"
},
{
"type": "boolean",
"key": "mark_for_review",
"label": "Review by default"
},
{
"type": "list",
"key": "defaults",
"label": "Default Subsets",
"key": "default_variants",
"label": "Default Variants",
"object_type": "text"
}
]
},
{
"type": "dict",
"collapsible": true,
"key": "AutoImageCreator",
"label": "Create Flatten Image",
"checkbox_key": "enabled",
"children": [
{
"type": "label",
"label": "Auto create image for all visible layers, used for simplified processing. \n Separate review could be created for this image to be sent to Asset Management System."
},
{
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "boolean",
"key": "active_on_create",
"label": "Active by default"
},
{
"type": "boolean",
"key": "mark_for_review",
"label": "Review by default"
},
{
"type": "text",
"key": "default_variant",
"label": "Default variant"
}
]
},
{
"type": "dict",
"collapsible": true,
"key": "ReviewCreator",
"label": "Create Review",
"checkbox_key": "enabled",
"children": [
{
"type": "label",
"label": "Auto create review instance containing all published image instances or visible layers if no image instance."
},
{
"type": "boolean",
"key": "enabled",
"label": "Enabled",
"default": true
},
{
"type": "boolean",
"key": "active_on_create",
"label": "Active by default"
},
{
"type": "text",
"key": "default_variant",
"label": "Default variant"
}
]
},
{
"type": "dict",
"collapsible": true,
"key": "WorkfileCreator",
"label": "Create Workfile",
"checkbox_key": "enabled",
"children": [
{
"type": "label",
"label": "Auto create workfile instance"
},
{
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "boolean",
"key": "active_on_create",
"label": "Active by default"
},
{
"type": "text",
"key": "default_variant",
"label": "Default variant"
}
]
}
]
},
@ -56,11 +166,18 @@
"is_group": true,
"key": "CollectColorCodedInstances",
"label": "Collect Color Coded Instances",
"checkbox_key": "enabled",
"children": [
{
"type": "label",
"label": "Set color for publishable layers, set its resulting family and template for subset name. \nCan create flatten image from published instances.(Applicable only for remote publishing!)"
},
{
"type": "boolean",
"key": "enabled",
"label": "Enabled",
"default": true
},
{
"key": "create_flatten_image",
"label": "Create flatten image",
@ -131,40 +248,26 @@
}
]
},
{
"type": "dict",
"collapsible": true,
"key": "CollectInstances",
"label": "Collect Instances",
"children": [
{
"type": "label",
"label": "Name for flatten image created if no image instance present"
},
{
"type": "text",
"key": "flatten_subset_template",
"label": "Subset template for flatten image"
}
]
},
{
"type": "dict",
"collapsible": true,
"key": "CollectReview",
"label": "Collect Review",
"checkbox_key": "enabled",
"children": [
{
"type": "boolean",
"key": "publish",
"label": "Active"
}
]
"key": "enabled",
"label": "Enabled",
"default": true
}
]
},
{
"type": "dict",
"key": "CollectVersion",
"label": "Collect Version",
"checkbox_key": "enabled",
"children": [
{
"type": "label",

View file

@ -32,6 +32,28 @@
"key": "delete_unmatched_assets",
"label": "Delete assets that are not matched"
},
{
"type": "text",
"key": "render_config_path",
"label": "Render Config Path"
},
{
"type": "number",
"key": "preroll_frames",
"label": "Pre-roll frames"
},
{
"key": "render_format",
"label": "Render format",
"type": "enum",
"multiselection": false,
"enum_items": [
{"png": "PNG"},
{"exr": "EXR"},
{"jpg": "JPG"},
{"bmp": "BMP"}
]
},
{
"type": "dict",
"collapsible": true,

View file

@ -111,6 +111,14 @@
{
"type": "label",
"label": "Here's a link to the doc where you can find explanations about customing the naming of referenced assets: https://openpype.io/docs/admin_hosts_maya#load-plugins"
},
{
"type": "separator"
},
{
"type": "boolean",
"key": "display_handle",
"label": "Display Handle On Load References"
}
]
}

View file

@ -48,7 +48,7 @@
"bg-view-selection-hover": "rgba(92, 173, 214, .8)",
"border": "#373D48",
"border-hover": "rgba(168, 175, 189, .3)",
"border-hover": "rgb(92, 99, 111)",
"border-focus": "rgb(92, 173, 214)",
"restart-btn-bg": "#458056",

View file

@ -35,6 +35,11 @@ QWidget:disabled {
color: {color:font-disabled};
}
/* Some DCCs have set borders to solid color */
QScrollArea {
border: none;
}
QLabel {
background: transparent;
}
@ -42,7 +47,7 @@ QLabel {
/* Inputs */
QAbstractSpinBox, QLineEdit, QPlainTextEdit, QTextEdit {
border: 1px solid {color:border};
border-radius: 0.3em;
border-radius: 0.2em;
background: {color:bg-inputs};
padding: 0.1em;
}
@ -226,7 +231,7 @@ QMenu::separator {
/* Combobox */
QComboBox {
border: 1px solid {color:border};
border-radius: 3px;
border-radius: 0.2em;
padding: 1px 3px 1px 3px;
background: {color:bg-inputs};
}
@ -474,7 +479,6 @@ QAbstractItemView:disabled{
}
QAbstractItemView::item:hover {
/* color: {color:bg-view-hover}; */
background: {color:bg-view-hover};
}
@ -743,7 +747,7 @@ OverlayMessageWidget QWidget {
#TypeEditor, #ToolEditor, #NameEditor, #NumberEditor {
background: transparent;
border-radius: 0.3em;
border-radius: 0.2em;
}
#TypeEditor:focus, #ToolEditor:focus, #NameEditor:focus, #NumberEditor:focus {
@ -860,7 +864,13 @@ OverlayMessageWidget QWidget {
background: {color:bg-view-hover};
}
/* New Create/Publish UI */
/* Publisher UI (Create/Publish) */
#PublishWindow QAbstractSpinBox, QLineEdit, QPlainTextEdit, QTextEdit {
padding: 1px;
}
#PublishWindow QComboBox {
padding: 1px 1px 1px 0.2em;
}
PublisherTabsWidget {
background: {color:publisher:tab-bg};
}
@ -944,6 +954,7 @@ PixmapButton:disabled {
border-top-left-radius: 0px;
padding-top: 0.5em;
padding-bottom: 0.5em;
width: 0.5em;
}
#VariantInput[state="new"], #VariantInput[state="new"]:focus, #VariantInput[state="new"]:hover {
border-color: {color:publisher:success};
@ -1072,7 +1083,7 @@ ValidationArtistMessage QLabel {
#AssetNameInputWidget {
background: {color:bg-inputs};
border: 1px solid {color:border};
border-radius: 0.3em;
border-radius: 0.2em;
}
#AssetNameInputWidget QWidget {
@ -1465,6 +1476,12 @@ CreateNextPageOverlay {
}
/* Attribute Definition widgets */
AttributeDefinitionsWidget QAbstractSpinBox, QLineEdit, QPlainTextEdit, QTextEdit {
padding: 1px;
}
AttributeDefinitionsWidget QComboBox {
padding: 1px 1px 1px 0.2em;
}
InViewButton, InViewButton:disabled {
background: transparent;
}

View file

@ -1,4 +1,3 @@
import uuid
import copy
from qtpy import QtWidgets, QtCore
@ -126,7 +125,7 @@ class AttributeDefinitionsWidget(QtWidgets.QWidget):
row = 0
for attr_def in attr_defs:
if not isinstance(attr_def, UIDef):
if attr_def.is_value_def:
if attr_def.key in self._current_keys:
raise KeyError(
"Duplicated key \"{}\"".format(attr_def.key))
@ -144,11 +143,16 @@ class AttributeDefinitionsWidget(QtWidgets.QWidget):
col_num = 2 - expand_cols
if attr_def.label:
if attr_def.is_value_def and attr_def.label:
label_widget = QtWidgets.QLabel(attr_def.label, self)
tooltip = attr_def.tooltip
if tooltip:
label_widget.setToolTip(tooltip)
if attr_def.is_label_horizontal:
label_widget.setAlignment(
QtCore.Qt.AlignRight
| QtCore.Qt.AlignVCenter
)
layout.addWidget(
label_widget, row, 0, 1, expand_cols
)

View file

@ -123,7 +123,7 @@ class BaseRepresentationModel(object):
self.remote_provider = remote_provider
class SubsetsModel(TreeModel, BaseRepresentationModel):
class SubsetsModel(BaseRepresentationModel, TreeModel):
doc_fetched = QtCore.Signal()
refreshed = QtCore.Signal(bool)

View file

@ -2,7 +2,7 @@ from qtpy import QtCore, QtGui
# ID of context item in instance view
CONTEXT_ID = "context"
CONTEXT_LABEL = "Options"
CONTEXT_LABEL = "Context"
# Not showed anywhere - used as identifier
CONTEXT_GROUP = "__ContextGroup__"
@ -15,6 +15,9 @@ VARIANT_TOOLTIP = (
"\nnumerical characters (0-9) dot (\".\") or underscore (\"_\")."
)
INPUTS_LAYOUT_HSPACING = 4
INPUTS_LAYOUT_VSPACING = 2
# Roles for instance views
INSTANCE_ID_ROLE = QtCore.Qt.UserRole + 1
SORT_VALUE_ROLE = QtCore.Qt.UserRole + 2

View file

@ -163,7 +163,7 @@ class AssetDocsCache:
return copy.deepcopy(self._full_asset_docs_by_name[asset_name])
class PublishReport:
class PublishReportMaker:
"""Report for single publishing process.
Report keeps current state of publishing and currently processed plugin.
@ -784,6 +784,13 @@ class PublishValidationErrors:
# Make sure the cached report is cleared
plugin_id = self._plugins_proxy.get_plugin_id(plugin)
if not error.title:
if hasattr(plugin, "label") and plugin.label:
plugin_label = plugin.label
else:
plugin_label = plugin.__name__
error.title = plugin_label
self._error_items.append(
ValidationErrorItem.from_result(plugin_id, error, instance)
)
@ -1674,7 +1681,7 @@ class PublisherController(BasePublisherController):
# pyblish.api.Context
self._publish_context = None
# Pyblish report
self._publish_report = PublishReport(self)
self._publish_report = PublishReportMaker(self)
# Store exceptions of validation error
self._publish_validation_errors = PublishValidationErrors()

View file

@ -211,6 +211,10 @@ class AssetsDialog(QtWidgets.QDialog):
layout.addWidget(asset_view, 1)
layout.addLayout(btns_layout, 0)
controller.event_system.add_callback(
"controller.reset.finished", self._on_controller_reset
)
asset_view.double_clicked.connect(self._on_ok_clicked)
filter_input.textChanged.connect(self._on_filter_change)
ok_btn.clicked.connect(self._on_ok_clicked)
@ -245,6 +249,10 @@ class AssetsDialog(QtWidgets.QDialog):
new_pos.setY(new_pos.y() - int(self.height() / 2))
self.move(new_pos)
def _on_controller_reset(self):
# Change reset enabled so model is reset on show event
self._soft_reset_enabled = True
def showEvent(self, event):
"""Refresh asset model on show."""
super(AssetsDialog, self).showEvent(event)

View file

@ -9,7 +9,7 @@ Only one item can be selected at a time.
```
<i> : Icon. Can have Warning icon when context is not right
Options
Context
<Group 1>
<i> <Instance 1> [x]
<i> <Instance 2> [x]
@ -202,7 +202,7 @@ class ConvertorItemsGroupWidget(BaseGroupWidget):
class InstanceGroupWidget(BaseGroupWidget):
"""Widget wrapping instances under group."""
active_changed = QtCore.Signal()
active_changed = QtCore.Signal(str, str, bool)
def __init__(self, group_icons, *args, **kwargs):
super(InstanceGroupWidget, self).__init__(*args, **kwargs)
@ -253,13 +253,16 @@ class InstanceGroupWidget(BaseGroupWidget):
instance, group_icon, self
)
widget.selected.connect(self._on_widget_selection)
widget.active_changed.connect(self.active_changed)
widget.active_changed.connect(self._on_active_changed)
self._widgets_by_id[instance.id] = widget
self._content_layout.insertWidget(widget_idx, widget)
widget_idx += 1
self._update_ordered_item_ids()
def _on_active_changed(self, instance_id, value):
self.active_changed.emit(self.group_name, instance_id, value)
class CardWidget(BaseClickableFrame):
"""Clickable card used as bigger button."""
@ -332,7 +335,7 @@ class ContextCardWidget(CardWidget):
icon_layout.addWidget(icon_widget)
layout = QtWidgets.QHBoxLayout(self)
layout.setContentsMargins(0, 5, 10, 5)
layout.setContentsMargins(0, 2, 10, 2)
layout.addLayout(icon_layout, 0)
layout.addWidget(label_widget, 1)
@ -363,7 +366,7 @@ class ConvertorItemCardWidget(CardWidget):
icon_layout.addWidget(icon_widget)
layout = QtWidgets.QHBoxLayout(self)
layout.setContentsMargins(0, 5, 10, 5)
layout.setContentsMargins(0, 2, 10, 2)
layout.addLayout(icon_layout, 0)
layout.addWidget(label_widget, 1)
@ -377,7 +380,7 @@ class ConvertorItemCardWidget(CardWidget):
class InstanceCardWidget(CardWidget):
"""Card widget representing instance."""
active_changed = QtCore.Signal()
active_changed = QtCore.Signal(str, bool)
def __init__(self, instance, group_icon, parent):
super(InstanceCardWidget, self).__init__(parent)
@ -424,7 +427,7 @@ class InstanceCardWidget(CardWidget):
top_layout.addWidget(expand_btn, 0)
layout = QtWidgets.QHBoxLayout(self)
layout.setContentsMargins(0, 5, 10, 5)
layout.setContentsMargins(0, 2, 10, 2)
layout.addLayout(top_layout)
layout.addWidget(detail_widget)
@ -445,6 +448,10 @@ class InstanceCardWidget(CardWidget):
def set_active_toggle_enabled(self, enabled):
self._active_checkbox.setEnabled(enabled)
@property
def is_active(self):
return self._active_checkbox.isChecked()
def set_active(self, new_value):
"""Set instance as active."""
checkbox_value = self._active_checkbox.isChecked()
@ -515,7 +522,7 @@ class InstanceCardWidget(CardWidget):
return
self.instance["active"] = new_value
self.active_changed.emit()
self.active_changed.emit(self._id, new_value)
def _on_expend_clicked(self):
self._set_expanded()
@ -584,6 +591,45 @@ class InstanceCardView(AbstractInstanceView):
result.setWidth(width)
return result
def _toggle_instances(self, value):
if not self._active_toggle_enabled:
return
widgets = self._get_selected_widgets()
changed = False
for widget in widgets:
if not isinstance(widget, InstanceCardWidget):
continue
is_active = widget.is_active
if value == -1:
widget.set_active(not is_active)
changed = True
continue
_value = bool(value)
if is_active is not _value:
widget.set_active(_value)
changed = True
if changed:
self.active_changed.emit()
def keyPressEvent(self, event):
if event.key() == QtCore.Qt.Key_Space:
self._toggle_instances(-1)
return True
elif event.key() == QtCore.Qt.Key_Backspace:
self._toggle_instances(0)
return True
elif event.key() == QtCore.Qt.Key_Return:
self._toggle_instances(1)
return True
return super(InstanceCardView, self).keyPressEvent(event)
def _get_selected_widgets(self):
output = []
if (
@ -742,7 +788,15 @@ class InstanceCardView(AbstractInstanceView):
for widget in self._widgets_by_group.values():
widget.update_instance_values()
def _on_active_changed(self):
def _on_active_changed(self, group_name, instance_id, value):
group_widget = self._widgets_by_group[group_name]
instance_widget = group_widget.get_widget_by_item_id(instance_id)
if instance_widget.is_selected:
for widget in self._get_selected_widgets():
if isinstance(widget, InstanceCardWidget):
widget.set_active(value)
else:
self._select_item_clear(instance_id, group_name, instance_widget)
self.active_changed.emit()
def _on_widget_selection(self, instance_id, group_name, selection_type):

View file

@ -22,6 +22,8 @@ from ..constants import (
CREATOR_IDENTIFIER_ROLE,
CREATOR_THUMBNAIL_ENABLED_ROLE,
CREATOR_SORT_ROLE,
INPUTS_LAYOUT_HSPACING,
INPUTS_LAYOUT_VSPACING,
)
SEPARATORS = ("---separator---", "---")
@ -198,6 +200,8 @@ class CreateWidget(QtWidgets.QWidget):
variant_subset_layout = QtWidgets.QFormLayout(variant_subset_widget)
variant_subset_layout.setContentsMargins(0, 0, 0, 0)
variant_subset_layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING)
variant_subset_layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING)
variant_subset_layout.addRow("Variant", variant_widget)
variant_subset_layout.addRow("Subset", subset_name_input)
@ -282,6 +286,9 @@ class CreateWidget(QtWidgets.QWidget):
thumbnail_widget.thumbnail_created.connect(self._on_thumbnail_create)
thumbnail_widget.thumbnail_cleared.connect(self._on_thumbnail_clear)
controller.event_system.add_callback(
"main.window.closed", self._on_main_window_close
)
controller.event_system.add_callback(
"plugins.refresh.finished", self._on_plugins_refresh
)
@ -316,6 +323,10 @@ class CreateWidget(QtWidgets.QWidget):
self._first_show = True
self._last_thumbnail_path = None
self._last_current_context_asset = None
self._last_current_context_task = None
self._use_current_context = True
@property
def current_asset_name(self):
return self._controller.current_asset_name
@ -356,12 +367,39 @@ class CreateWidget(QtWidgets.QWidget):
if check_prereq:
self._invalidate_prereq()
def _on_main_window_close(self):
"""Publisher window was closed."""
# Use current context on next refresh
self._use_current_context = True
def refresh(self):
current_asset_name = self._controller.current_asset_name
current_task_name = self._controller.current_task_name
# Get context before refresh to keep selection of asset and
# task widgets
asset_name = self._get_asset_name()
task_name = self._get_task_name()
# Replace by current context if last loaded context was
# 'current context' before reset
if (
self._use_current_context
or (
self._last_current_context_asset
and asset_name == self._last_current_context_asset
and task_name == self._last_current_context_task
)
):
asset_name = current_asset_name
task_name = current_task_name
# Store values for future refresh
self._last_current_context_asset = current_asset_name
self._last_current_context_task = current_task_name
self._use_current_context = False
self._prereq_available = False
# Disable context widget so refresh of asset will use context asset
@ -398,7 +436,10 @@ class CreateWidget(QtWidgets.QWidget):
prereq_available = False
creator_btn_tooltips.append("Creator is not selected")
if self._context_change_is_enabled() and self._asset_name is None:
if (
self._context_change_is_enabled()
and self._get_asset_name() is None
):
# QUESTION how to handle invalid asset?
prereq_available = False
creator_btn_tooltips.append("Context is not selected")

View file

@ -11,7 +11,7 @@ selection can be enabled disabled using checkbox or keyboard key presses:
- Backspace - disable selection
```
|- Options
|- Context
|- <Group 1> [x]
| |- <Instance 1> [x]
| |- <Instance 2> [x]
@ -486,6 +486,9 @@ class InstanceListView(AbstractInstanceView):
group_widget.set_expanded(expanded)
def _on_toggle_request(self, toggle):
if not self._active_toggle_enabled:
return
selected_instance_ids = self._instance_view.get_selected_instance_ids()
if toggle == -1:
active = None
@ -1039,7 +1042,8 @@ class InstanceListView(AbstractInstanceView):
proxy_index = proxy_model.mapFromSource(select_indexes[0])
selection_model.setCurrentIndex(
proxy_index,
selection_model.ClearAndSelect | selection_model.Rows
QtCore.QItemSelectionModel.ClearAndSelect
| QtCore.QItemSelectionModel.Rows
)
return

View file

@ -2,6 +2,8 @@ from qtpy import QtWidgets, QtCore
from openpype.tools.attribute_defs import create_widget_for_attr_def
from ..constants import INPUTS_LAYOUT_HSPACING, INPUTS_LAYOUT_VSPACING
class PreCreateWidget(QtWidgets.QWidget):
def __init__(self, parent):
@ -81,6 +83,8 @@ class AttributesWidget(QtWidgets.QWidget):
layout = QtWidgets.QGridLayout(self)
layout.setContentsMargins(0, 0, 0, 0)
layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING)
layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING)
self._layout = layout
@ -117,8 +121,16 @@ class AttributesWidget(QtWidgets.QWidget):
col_num = 2 - expand_cols
if attr_def.label:
if attr_def.is_value_def and attr_def.label:
label_widget = QtWidgets.QLabel(attr_def.label, self)
tooltip = attr_def.tooltip
if tooltip:
label_widget.setToolTip(tooltip)
if attr_def.is_label_horizontal:
label_widget.setAlignment(
QtCore.Qt.AlignRight
| QtCore.Qt.AlignVCenter
)
self._layout.addWidget(
label_widget, row, 0, 1, expand_cols
)

View file

@ -9,7 +9,7 @@ import collections
from qtpy import QtWidgets, QtCore, QtGui
import qtawesome
from openpype.lib.attribute_definitions import UnknownDef, UIDef
from openpype.lib.attribute_definitions import UnknownDef
from openpype.tools.attribute_defs import create_widget_for_attr_def
from openpype.tools import resources
from openpype.tools.flickcharm import FlickCharm
@ -36,6 +36,8 @@ from .icons import (
from ..constants import (
VARIANT_TOOLTIP,
ResetKeySequence,
INPUTS_LAYOUT_HSPACING,
INPUTS_LAYOUT_VSPACING,
)
@ -1098,6 +1100,8 @@ class GlobalAttrsWidget(QtWidgets.QWidget):
btns_layout.addWidget(cancel_btn)
main_layout = QtWidgets.QFormLayout(self)
main_layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING)
main_layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING)
main_layout.addRow("Variant", variant_input)
main_layout.addRow("Asset", asset_value_widget)
main_layout.addRow("Task", task_value_widget)
@ -1346,6 +1350,8 @@ class CreatorAttrsWidget(QtWidgets.QWidget):
content_layout.setColumnStretch(0, 0)
content_layout.setColumnStretch(1, 1)
content_layout.setAlignment(QtCore.Qt.AlignTop)
content_layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING)
content_layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING)
row = 0
for attr_def, attr_instances, values in result:
@ -1371,9 +1377,19 @@ class CreatorAttrsWidget(QtWidgets.QWidget):
col_num = 2 - expand_cols
label = attr_def.label or attr_def.key
label = None
if attr_def.is_value_def:
label = attr_def.label or attr_def.key
if label:
label_widget = QtWidgets.QLabel(label, self)
tooltip = attr_def.tooltip
if tooltip:
label_widget.setToolTip(tooltip)
if attr_def.is_label_horizontal:
label_widget.setAlignment(
QtCore.Qt.AlignRight
| QtCore.Qt.AlignVCenter
)
content_layout.addWidget(
label_widget, row, 0, 1, expand_cols
)
@ -1474,6 +1490,8 @@ class PublishPluginAttrsWidget(QtWidgets.QWidget):
attr_def_layout = QtWidgets.QGridLayout(attr_def_widget)
attr_def_layout.setColumnStretch(0, 0)
attr_def_layout.setColumnStretch(1, 1)
attr_def_layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING)
attr_def_layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING)
content_layout = QtWidgets.QVBoxLayout(content_widget)
content_layout.addWidget(attr_def_widget, 0)
@ -1501,12 +1519,19 @@ class PublishPluginAttrsWidget(QtWidgets.QWidget):
expand_cols = 1
col_num = 2 - expand_cols
label = attr_def.label or attr_def.key
label = None
if attr_def.is_value_def:
label = attr_def.label or attr_def.key
if label:
label_widget = QtWidgets.QLabel(label, content_widget)
tooltip = attr_def.tooltip
if tooltip:
label_widget.setToolTip(tooltip)
if attr_def.is_label_horizontal:
label_widget.setAlignment(
QtCore.Qt.AlignRight
| QtCore.Qt.AlignVCenter
)
attr_def_layout.addWidget(
label_widget, row, 0, 1, expand_cols
)
@ -1517,7 +1542,7 @@ class PublishPluginAttrsWidget(QtWidgets.QWidget):
)
row += 1
if isinstance(attr_def, UIDef):
if not attr_def.is_value_def:
continue
widget.value_changed.connect(self._input_value_changed)

View file

@ -46,6 +46,8 @@ class PublisherWindow(QtWidgets.QDialog):
def __init__(self, parent=None, controller=None, reset_on_show=None):
super(PublisherWindow, self).__init__(parent)
self.setObjectName("PublishWindow")
self.setWindowTitle("OpenPype publisher")
icon = QtGui.QIcon(resources.get_openpype_icon_filepath())
@ -284,6 +286,9 @@ class PublisherWindow(QtWidgets.QDialog):
controller.event_system.add_callback(
"publish.has_validated.changed", self._on_publish_validated_change
)
controller.event_system.add_callback(
"publish.finished.changed", self._on_publish_finished_change
)
controller.event_system.add_callback(
"publish.process.stopped", self._on_publish_stop
)
@ -400,8 +405,12 @@ class PublisherWindow(QtWidgets.QDialog):
# TODO capture changes and ask user if wants to save changes on close
if not self._controller.host_context_has_changed:
self._save_changes(False)
self._comment_input.setText("") # clear comment
self._reset_on_show = True
self._controller.clear_thumbnail_temp_dir_path()
# Trigger custom event that should be captured only in UI
# - backend (controller) must not be dependent on this event topic!!!
self._controller.event_system.emit("main.window.closed", {}, "window")
super(PublisherWindow, self).closeEvent(event)
def leaveEvent(self, event):
@ -433,15 +442,24 @@ class PublisherWindow(QtWidgets.QDialog):
event.accept()
return
if event.matches(QtGui.QKeySequence.Save):
save_match = event.matches(QtGui.QKeySequence.Save)
if save_match == QtGui.QKeySequence.ExactMatch:
if not self._controller.publish_has_started:
self._save_changes(True)
event.accept()
return
if ResetKeySequence.matches(
QtGui.QKeySequence(event.key() | event.modifiers())
):
# PySide6 Support
if hasattr(event, "keyCombination"):
reset_match_result = ResetKeySequence.matches(
QtGui.QKeySequence(event.keyCombination())
)
else:
reset_match_result = ResetKeySequence.matches(
QtGui.QKeySequence(event.modifiers() | event.key())
)
if reset_match_result == QtGui.QKeySequence.ExactMatch:
if not self.controller.publish_is_running:
self.reset()
event.accept()
@ -777,6 +795,11 @@ class PublisherWindow(QtWidgets.QDialog):
if event["value"]:
self._validate_btn.setEnabled(False)
def _on_publish_finished_change(self, event):
if event["value"]:
# Successful publish, remove comment from UI
self._comment_input.setText("")
def _on_publish_stop(self):
self._set_publish_overlay_visibility(False)
self._reset_btn.setEnabled(True)

View file

@ -199,90 +199,103 @@ class InventoryModel(TreeModel):
"""Refresh the model"""
host = registered_host()
if not items: # for debugging or testing, injecting items from outside
# for debugging or testing, injecting items from outside
if items is None:
if isinstance(host, ILoadHost):
items = host.get_containers()
else:
elif hasattr(host, "ls"):
items = host.ls()
else:
items = []
self.clear()
if self._hierarchy_view and selected:
if not hasattr(host.pipeline, "update_hierarchy"):
# If host doesn't support hierarchical containers, then
# cherry-pick only.
self.add_items((item for item in items
if item["objectName"] in selected))
return
# Update hierarchy info for all containers
items_by_name = {item["objectName"]: item
for item in host.pipeline.update_hierarchy(items)}
selected_items = set()
def walk_children(names):
"""Select containers and extend to chlid containers"""
for name in [n for n in names if n not in selected_items]:
selected_items.add(name)
item = items_by_name[name]
yield item
for child in walk_children(item["children"]):
yield child
items = list(walk_children(selected)) # Cherry-picked and extended
# Cut unselected upstream containers
for item in items:
if not item.get("parent") in selected_items:
# Parent not in selection, this is root item.
item["parent"] = None
parents = [self._root_item]
# The length of `items` array is the maximum depth that a
# hierarchy could be.
# Take this as an easiest way to prevent looping forever.
maximum_loop = len(items)
count = 0
while items:
if count > maximum_loop:
self.log.warning("Maximum loop count reached, possible "
"missing parent node.")
break
_parents = list()
for parent in parents:
_unparented = list()
def _children():
"""Child item provider"""
for item in items:
if item.get("parent") == parent.get("objectName"):
# (NOTE)
# Since `self._root_node` has no "objectName"
# entry, it will be paired with root item if
# the value of key "parent" is None, or not
# having the key.
yield item
else:
# Not current parent's child, try next
_unparented.append(item)
self.add_items(_children(), parent)
items[:] = _unparented
# Parents of next level
for group_node in parent.children():
_parents += group_node.children()
parents[:] = _parents
count += 1
else:
if not selected or not self._hierarchy_view:
self.add_items(items)
return
if (
not hasattr(host, "pipeline")
or not hasattr(host.pipeline, "update_hierarchy")
):
# If host doesn't support hierarchical containers, then
# cherry-pick only.
self.add_items((
item
for item in items
if item["objectName"] in selected
))
return
# TODO find out what this part does. Function 'update_hierarchy' is
# available only in 'blender' at this moment.
# Update hierarchy info for all containers
items_by_name = {
item["objectName"]: item
for item in host.pipeline.update_hierarchy(items)
}
selected_items = set()
def walk_children(names):
"""Select containers and extend to chlid containers"""
for name in [n for n in names if n not in selected_items]:
selected_items.add(name)
item = items_by_name[name]
yield item
for child in walk_children(item["children"]):
yield child
items = list(walk_children(selected)) # Cherry-picked and extended
# Cut unselected upstream containers
for item in items:
if not item.get("parent") in selected_items:
# Parent not in selection, this is root item.
item["parent"] = None
parents = [self._root_item]
# The length of `items` array is the maximum depth that a
# hierarchy could be.
# Take this as the easiest way to prevent looping forever.
maximum_loop = len(items)
count = 0
while items:
if count > maximum_loop:
self.log.warning("Maximum loop count reached, possible "
"missing parent node.")
break
_parents = list()
for parent in parents:
_unparented = list()
def _children():
"""Child item provider"""
for item in items:
if item.get("parent") == parent.get("objectName"):
# (NOTE)
# Since `self._root_node` has no "objectName"
# entry, it will be paired with root item if
# the value of key "parent" is None, or not
# having the key.
yield item
else:
# Not current parent's child, try next
_unparented.append(item)
self.add_items(_children(), parent)
items[:] = _unparented
# Parents of next level
for group_node in parent.children():
_parents += group_node.children()
parents[:] = _parents
count += 1
def add_items(self, items, parent=None):
"""Add the items to the model.

View file

@ -107,8 +107,8 @@ class SceneInventoryWindow(QtWidgets.QDialog):
view.hierarchy_view_changed.connect(
self._on_hierarchy_view_change
)
view.data_changed.connect(self.refresh)
refresh_button.clicked.connect(self.refresh)
view.data_changed.connect(self._on_refresh_request)
refresh_button.clicked.connect(self._on_refresh_request)
update_all_button.clicked.connect(self._on_update_all)
self._update_all_button = update_all_button
@ -139,6 +139,11 @@ class SceneInventoryWindow(QtWidgets.QDialog):
"""
def _on_refresh_request(self):
"""Signal callback to trigger 'refresh' without any arguments."""
self.refresh()
def refresh(self, items=None):
with preserve_expanded_rows(
tree_view=self._view,

View file

@ -1,6 +1,7 @@
from .widgets import (
FocusSpinBox,
FocusDoubleSpinBox,
ComboBox,
CustomTextComboBox,
PlaceholderLineEdit,
BaseClickableFrame,
@ -38,6 +39,7 @@ from .overlay_messages import (
__all__ = (
"FocusSpinBox",
"FocusDoubleSpinBox",
"ComboBox",
"CustomTextComboBox",
"PlaceholderLineEdit",
"BaseClickableFrame",

View file

@ -41,7 +41,28 @@ class FocusDoubleSpinBox(QtWidgets.QDoubleSpinBox):
super(FocusDoubleSpinBox, self).wheelEvent(event)
class CustomTextComboBox(QtWidgets.QComboBox):
class ComboBox(QtWidgets.QComboBox):
"""Base of combobox with pre-implement changes used in tools.
Combobox is using styled delegate by default so stylesheets are propagated.
Items are not changed on scroll until the combobox is in focus.
"""
def __init__(self, *args, **kwargs):
super(ComboBox, self).__init__(*args, **kwargs)
delegate = QtWidgets.QStyledItemDelegate()
self.setItemDelegate(delegate)
self.setFocusPolicy(QtCore.Qt.StrongFocus)
self._delegate = delegate
def wheelEvent(self, event):
if self.hasFocus():
return super(ComboBox, self).wheelEvent(event)
class CustomTextComboBox(ComboBox):
"""Combobox which can have different text showed."""
def __init__(self, *args, **kwargs):
@ -253,6 +274,9 @@ class PixmapLabel(QtWidgets.QLabel):
self._empty_pixmap = QtGui.QPixmap(0, 0)
self._source_pixmap = pixmap
self._last_width = 0
self._last_height = 0
def set_source_pixmap(self, pixmap):
"""Change source image."""
self._source_pixmap = pixmap
@ -263,6 +287,12 @@ class PixmapLabel(QtWidgets.QLabel):
size += size % 2
return size, size
def minimumSizeHint(self):
width, height = self._get_pix_size()
if width != self._last_width or height != self._last_height:
self._set_resized_pix()
return QtCore.QSize(width, height)
def _set_resized_pix(self):
if self._source_pixmap is None:
self.setPixmap(self._empty_pixmap)
@ -276,6 +306,8 @@ class PixmapLabel(QtWidgets.QLabel):
QtCore.Qt.SmoothTransformation
)
)
self._last_width = width
self._last_height = height
def resizeEvent(self, event):
self._set_resized_pix()

View file

@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
__version__ = "3.15.4"
__version__ = "3.15.6-nightly.3"

View file

@ -1,6 +1,6 @@
[tool.poetry]
name = "OpenPype"
version = "3.15.4" # OpenPype
version = "3.15.5" # OpenPype
description = "Open VFX and Animation pipeline with support."
authors = ["OpenPype Team <info@openpype.io>"]
license = "MIT License"

View file

@ -0,0 +1,93 @@
import logging
from tests.lib.assert_classes import DBAssert
from tests.integration.hosts.photoshop.lib import PhotoshopTestClass
log = logging.getLogger("test_publish_in_photoshop")
class TestPublishInPhotoshopAutoImage(PhotoshopTestClass):
"""Test for publish in Phohoshop with different review configuration.
Workfile contains 3 layers, auto image and review instances created.
Test contains updates to Settings!!!
"""
PERSIST = True
TEST_FILES = [
("1iLF6aNI31qlUCD1rGg9X9eMieZzxL-rc",
"test_photoshop_publish_auto_image.zip", "")
]
APP_GROUP = "photoshop"
# keep empty to locate latest installed variant or explicit
APP_VARIANT = ""
APP_NAME = "{}/{}".format(APP_GROUP, APP_VARIANT)
TIMEOUT = 120 # publish timeout
def test_db_asserts(self, dbcon, publish_finished):
"""Host and input data dependent expected results in DB."""
print("test_db_asserts")
failures = []
failures.append(DBAssert.count_of_types(dbcon, "version", 3))
failures.append(
DBAssert.count_of_types(dbcon, "version", 0, name={"$ne": 1}))
failures.append(
DBAssert.count_of_types(dbcon, "subset", 0,
name="imageMainForeground"))
failures.append(
DBAssert.count_of_types(dbcon, "subset", 0,
name="imageMainBackground"))
failures.append(
DBAssert.count_of_types(dbcon, "subset", 1,
name="workfileTest_task"))
failures.append(
DBAssert.count_of_types(dbcon, "representation", 5))
additional_args = {"context.subset": "imageMainForeground",
"context.ext": "png"}
failures.append(
DBAssert.count_of_types(dbcon, "representation", 0,
additional_args=additional_args))
additional_args = {"context.subset": "imageMainBackground",
"context.ext": "png"}
failures.append(
DBAssert.count_of_types(dbcon, "representation", 0,
additional_args=additional_args))
# review from image
additional_args = {"context.subset": "imageBeautyMain",
"context.ext": "jpg",
"name": "jpg_jpg"}
failures.append(
DBAssert.count_of_types(dbcon, "representation", 1,
additional_args=additional_args))
additional_args = {"context.subset": "imageBeautyMain",
"context.ext": "jpg",
"name": "jpg"}
failures.append(
DBAssert.count_of_types(dbcon, "representation", 1,
additional_args=additional_args))
additional_args = {"context.subset": "review"}
failures.append(
DBAssert.count_of_types(dbcon, "representation", 1,
additional_args=additional_args))
assert not any(failures)
if __name__ == "__main__":
test_case = TestPublishInPhotoshopAutoImage()

View file

@ -0,0 +1,111 @@
import logging
from tests.lib.assert_classes import DBAssert
from tests.integration.hosts.photoshop.lib import PhotoshopTestClass
log = logging.getLogger("test_publish_in_photoshop")
class TestPublishInPhotoshopImageReviews(PhotoshopTestClass):
"""Test for publish in Phohoshop with different review configuration.
Workfile contains 2 image instance, one has review flag, second doesn't.
Regular `review` family is disabled.
Expected result is to `imageMainForeground` to have additional file with
review, `imageMainBackground` without. No separate `review` family.
`test_project_test_asset_imageMainForeground_v001_jpg.jpg` is expected name
of imageForeground review, `_jpg` suffix is needed to differentiate between
image and review file.
"""
PERSIST = True
TEST_FILES = [
("12WGbNy9RJ3m9jlnk0Ib9-IZmONoxIz_p",
"test_photoshop_publish_review.zip", "")
]
APP_GROUP = "photoshop"
# keep empty to locate latest installed variant or explicit
APP_VARIANT = ""
APP_NAME = "{}/{}".format(APP_GROUP, APP_VARIANT)
TIMEOUT = 120 # publish timeout
def test_db_asserts(self, dbcon, publish_finished):
"""Host and input data dependent expected results in DB."""
print("test_db_asserts")
failures = []
failures.append(DBAssert.count_of_types(dbcon, "version", 3))
failures.append(
DBAssert.count_of_types(dbcon, "version", 0, name={"$ne": 1}))
failures.append(
DBAssert.count_of_types(dbcon, "subset", 1,
name="imageMainForeground"))
failures.append(
DBAssert.count_of_types(dbcon, "subset", 1,
name="imageMainBackground"))
failures.append(
DBAssert.count_of_types(dbcon, "subset", 1,
name="workfileTest_task"))
failures.append(
DBAssert.count_of_types(dbcon, "representation", 6))
additional_args = {"context.subset": "imageMainForeground",
"context.ext": "png"}
failures.append(
DBAssert.count_of_types(dbcon, "representation", 1,
additional_args=additional_args))
additional_args = {"context.subset": "imageMainForeground",
"context.ext": "jpg"}
failures.append(
DBAssert.count_of_types(dbcon, "representation", 2,
additional_args=additional_args))
additional_args = {"context.subset": "imageMainForeground",
"context.ext": "jpg",
"context.representation": "jpg_jpg"}
failures.append(
DBAssert.count_of_types(dbcon, "representation", 1,
additional_args=additional_args))
additional_args = {"context.subset": "imageMainBackground",
"context.ext": "png"}
failures.append(
DBAssert.count_of_types(dbcon, "representation", 1,
additional_args=additional_args))
additional_args = {"context.subset": "imageMainBackground",
"context.ext": "jpg"}
failures.append(
DBAssert.count_of_types(dbcon, "representation", 1,
additional_args=additional_args))
additional_args = {"context.subset": "imageMainBackground",
"context.ext": "jpg",
"context.representation": "jpg_jpg"}
failures.append(
DBAssert.count_of_types(dbcon, "representation", 0,
additional_args=additional_args))
additional_args = {"context.subset": "review"}
failures.append(
DBAssert.count_of_types(dbcon, "representation", 0,
additional_args=additional_args))
assert not any(failures)
if __name__ == "__main__":
test_case = TestPublishInPhotoshopImageReviews()

View file

@ -180,5 +180,23 @@ class TestValidateSequenceFrames(BaseTest):
plugin.process(instance)
assert ("Missing frames: [1002]" in str(excinfo.value))
def test_validate_sequence_frames_slate(self, instance, plugin):
representations = [
{
"ext": "exr",
"files": [
"Main_beauty.1000.exr",
"Main_beauty.1001.exr",
"Main_beauty.1002.exr",
"Main_beauty.1003.exr"
]
}
]
instance.data["slate"] = True
instance.data["representations"] = representations
instance.data["frameEnd"] = 1003
plugin.process(instance)
test_case = TestValidateSequenceFrames()

View file

@ -0,0 +1,127 @@
---
id: admin_hosts_photoshop
title: Photoshop Settings
sidebar_label: Photoshop
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
## Photoshop settings
There are a couple of settings that configure the publishing process for **Photoshop**.
All of them are project based, e.g. each project could have a different configuration.
Location: Settings > Project > Photoshop
![Photoshop Project Settings](assets/admin_hosts_photoshop_settings.png)
## Color Management (ImageIO)
Placeholder for Color Management. Currently not implemented yet.
## Creator plugins
Contains configurable items for creators used during publishing from Photoshop.
### Create Image
Provides a list of [variants](artist_concepts.md#variant) that will be shown to an artist in the Publisher. The default value is `Main`.
### Create Flatten Image
Provides a simplified publishing process. It will create a single `image` instance for the artist automatically. This instance will
produce a flattened image from all visible layers in the workfile.
- Subset template for flatten image - provides the template for this instance's subset name (example `imageBeauty`)
- Review - whether a separate review should be created for this instance
### Create Review
Creates a single `review` instance automatically. This allows artists to disable it if needed.
### Create Workfile
Creates a single `workfile` instance automatically. This allows artists to disable it if needed.
## Publish plugins
Contains configurable items for publish plugins used during publishing from Photoshop.
### Collect Color Coded Instances
Used only in remote publishing!
Allows creating `image` instances automatically for a configurable highlight color set on a layer or group in the workfile.
#### Create flatten image
- Flatten with images - produce additional `image` with all published `image` instances merged
- Flatten only - produce only merged `image` instance
- No - produce only separate `image` instances
#### Subset template for flatten image
Template used to create the subset name automatically (example `image{layer}Main` - uses the layer name in the subset name).
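For illustration, such a template behaves like an ordinary Python format string; a hedged sketch (the available keys depend on the collector, `layer` is assumed here):

```python
# A minimal sketch of how the template could expand; not the plugin code.
template = "image{layer}Main"
subset_name = template.format(layer="Foreground")
print(subset_name)  # imageForegroundMain
```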
### Collect Review
Disable if no review should be created
### Collect Version
If enabled it will push the version from the workfile name to all published items. E.g. if an artist is publishing `test_asset_workfile_v005.psd`,
the produced `image` and `review` files will contain `v005` (even if some previous versions were skipped for a particular family).
### Validate Containers
Checks if all assets imported to the workfile through `Loader` are at the latest version. Limits cases where an older version of an asset would be used.
If enabled, the artist might still decide to disable the validation for each publish (for special use cases).
Limit this optionality by toggling `Optional`.
The `Active` toggle denotes that artists see this optional validation as enabled by default.
### Validate naming of subsets and layers
A subset name cannot contain invalid characters, otherwise extraction to a file would fail.
#### Regex pattern of invalid characters
Contains problematic characters (like `/`) that might cause an issue when a file (which contains the subset name) is created on the OS disk.
#### Replacement character
Replaces all offending characters with this one. `_` is the default.
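As a rough illustration of what this validator's repair does (the pattern below is only an example; the real one comes from these settings):

```python
# A hedged sketch, assuming a simple character class as the invalid pattern.
import re

invalid_chars = r"[/\\]"              # hypothetical pattern from the settings
replacement = "_"                     # the configured replacement character
subset_name = "image/Beauty"
print(re.sub(invalid_chars, replacement, subset_name))  # image_Beauty
```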
### Extract Image
Controls extension formats of published instances of the `image` family. `png` and `jpg` are the defaults.
### Extract Review
Controls output definitions of extracted reviews to upload to the Asset Management (AM) system.
#### Makes an image sequence instead of flatten image
If multiple `image` instances are produced, glues the created images into an image sequence (`mov`) to review all of them separately.
Without it only a flattened image would be produced.
#### Maximum size of sources for review
Sets a byte limit for the review file. Applicable if gigantic `image` instances are produced and the full image size is unnecessary to upload to AM.
#### Extract jpg Options
Handles tags for produced `.jpg` representation. `Create review` and `Add review to Ftrack` are defaults.
#### Extract mov Options
Handles tags for produced `.mov` representation. `Create review` and `Add review to Ftrack` are defaults.
### Workfile Builder
Allows opening a prepared workfile for an artist when no workfile exists. Useful for sharing standards and additional helpful content in the workfile.
Could be configured per `Task type`, e.g. the `composition` task type could use a different `.psd` template file than the `art` task.
The workfile template must be accessible to all artists.
(Currently not handled by [SiteSync](module_site_sync.md).)

View file

@ -238,12 +238,12 @@ For resolution and frame range, use **OpenPype → Set Frame Range** and
Creating and publishing rigs with OpenPype follows similar workflow as with
other data types. Create your rig and mark parts of your hierarchy in sets to
help OpenPype validators and extractors to check it and publish it.
help OpenPype validators and extractors to check and publish it.
### Preparing rig for publish
When creating rigs, it is recommended (and it is in fact enforced by validators)
to separate bones or driving objects, their controllers and geometry so they are
to separate bones or driven objects, their controllers and geometry so they are
easily managed. Currently OpenPype doesn't allow to publish model at the same time as
its rig so for demonstration purposes, I'll first create simple model for robotic
arm, just made out of simple boxes and I'll publish it.
@ -252,41 +252,48 @@ arm, just made out of simple boxes and I'll publish it.
For more information about publishing models, see [Publishing models](artist_hosts_maya.md#publishing-models).
Now lets start with empty scene. Load your model - **OpenPype → Load...**, right
Now let's start with an empty scene. Load your model - **OpenPype → Load...**, right
click on it and select **Reference (abc)**.
I've created few bones and their controllers in two separate
groups - `rig_GRP` and `controls_GRP`. Naming is not important - just adhere to
your naming conventions.
I've created a few bones in `rig_GRP`, their controllers in `controls_GRP` and
placed the rig's output geometry in `geometry_GRP`. Naming of the groups is not important - just adhere to
your naming conventions. Then I parented everything into a single top group named `arm_rig`.
Then I've put everything into `arm_rig` group.
When you've prepared your hierarchy, it's time to create *Rig instance* in OpenPype.
Select your whole rig hierarchy and go **OpenPype → Create...**. Select **Rig**.
Set is created in your scene to mark rig parts for export. Notice that it has
two subsets - `controls_SET` and `out_SET`. Put your controls into `controls_SET`
With the prepared hierarchy it is time to create a *Rig instance* in OpenPype.
Select the top group of your rig and go to **OpenPype → Create...**. Select **Rig**.
A publish set for your rig is created in your scene to mark rig parts for export.
Notice that it has two subsets - `controls_SET` and `out_SET`. Put your controls into `controls_SET`
and geometry to `out_SET`. You should end up with something like this:
![Maya - Rig Hierarchy Example](assets/maya-rig_hierarchy_example.jpg)
:::note controls_SET and out_SET contents
It is totally allowed to put the `geometry_GRP` in the `out_SET` as opposed to
the individual meshes - it's even **recommended**. However, the `controls_SET`
requires the individual controls in it that the artist is supposed to animate
and manipulate so the publish validators can accurately check the rig's
controls.
:::
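If you prefer to script the set membership described above, a minimal `maya.cmds` sketch could look like the one below; the control and group names are placeholders from the example scene, and `controls_SET`/`out_SET` are the sets OpenPype created for you when the Rig instance was created.

```python
from maya import cmds

# Hypothetical control names from the example arm rig.
controls = ["arm_base_CTL", "arm_mid_CTL", "arm_tip_CTL"]

# Individual controls go into controls_SET ...
cmds.sets(controls, add="controls_SET")

# ... while the whole geometry group is enough for out_SET.
cmds.sets("geometry_GRP", add="out_SET")
```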
### Publishing rigs
Publishing rig is done in same way as publishing everything else. Save your scene
and go **OpenPype → Publish**. When you run validation you'll mostly run at first into
few issues. Although number of them will seem to be intimidating at first, you'll
find out they are mostly minor things easily fixed.
Publishing rigs is done in the same way as publishing everything else. Save your scene
and go **OpenPype → Publish**. When you run validation you'll most likely run into
a few issues at first. Although a number of them may seem intimidating, you
will find out they are mostly minor things that are easily fixed and are there to
optimize your rig for consistency and safe usage by the artist.
* **Non Duplicate Instance Members (ID)** - This will most likely fail because when
- **Non Duplicate Instance Members (ID)** - This will most likely fail because when
creating rigs we usually duplicate a few parts to reuse them. But duplication
also duplicates the ID of the original object, and OpenPype needs every object to
have a unique ID. This is easily fixed by the **Repair** action next to the validator
name: click the little up arrow on the right side of the validator name and select
**Repair** from the menu. (A minimal sketch of this duplicate-ID check follows the list below.)
* **Joints Hidden** - This is enforcing joints (bones) to be hidden for user as
- **Joints Hidden** - This is enforcing joints (bones) to be hidden for user as
the animator usually doesn't need to see them and they clutter the viewport. So a
well-behaving rig should have them hidden. The **Repair** action will help here as well.
* **Rig Controllers** will check if there are no transforms on unlocked attributes
- **Rig Controllers** will check if there are no transforms on unlocked attributes
of controllers. This is needed because the animator should have an easy way to reset
the rig to its default position. It also checks that those attributes don't have any
incoming connections from other parts of the scene to ensure that the published rig doesn't
@ -297,6 +304,19 @@ have any missing dependencies.
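For illustration only - not OpenPype's actual validator code - a duplicate-ID check along these lines can be sketched with `maya.cmds`, assuming the nodes carry the `cbId` attribute OpenPype adds; the `arm_rig` group name is from the example above.

```python
from collections import defaultdict
from maya import cmds

def find_duplicate_ids(nodes):
    """Group nodes by their cbId attribute and return values used more than once."""
    by_id = defaultdict(list)
    for node in nodes:
        if cmds.attributeQuery("cbId", node=node, exists=True):
            by_id[cmds.getAttr(node + ".cbId")].append(node)
    return {value: members for value, members in by_id.items() if len(members) > 1}

# e.g. check every node under the rig's top group (hypothetical group name)
descendants = cmds.listRelatives("arm_rig", allDescendents=True, fullPath=True) or []
duplicates = find_duplicate_ids(descendants)
```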
You can load rig with [Loader](artist_tools_loader). Go **OpenPype → Load...**,
select your rig, right click on it and **Reference** it.
### Animation instances
Whenever you load a rig an animation publish instance is automatically created
for it. This means that if you load a rig you don't need to create a pointcache
instance yourself to publish the geometry. This is all cleanly prepared for you
when loading a published rig.
:::tip Missing animation instance for your loaded rig?
Did you accidentally delete the animation instance for a loaded rig? You can
recreate it using the [**Recreate rig animation instance**](artist_hosts_maya.md#recreate-rig-animation-instance)
inventory action.
:::
## Point caches
OpenPype uses the Alembic format for point caches. The workflow is very similar to
other data types.
@ -646,3 +666,15 @@ Select 1 container of type `animation` or `pointcache`, then 1+ container of any
The action searches the selected containers for 1 animation container of type `animation` or `pointcache`. This animation container will be connected to the rest of the selected containers. Matching geometries between containers is done by comparing the attribute `cbId`.
The connection between geometries is done with a live blendshape.
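A minimal sketch of that idea - matching meshes by `cbId` and wiring them with a live blendshape - might look like this; it is an illustration only, not the inventory action's actual implementation.

```python
from maya import cmds

def cb_id(node):
    """Return the cbId attribute value, or None when the node has none."""
    if cmds.attributeQuery("cbId", node=node, exists=True):
        return cmds.getAttr(node + ".cbId")
    return None

def connect_by_id(animated_meshes, target_meshes):
    """Drive every target mesh with the animated mesh sharing the same cbId."""
    animated_by_id = {cb_id(mesh): mesh for mesh in animated_meshes}
    for target in target_meshes:
        source = animated_by_id.get(cb_id(target))
        if source:
            # weight index 0 set to 1.0 keeps the connection fully live
            cmds.blendShape(source, target, weight=(0, 1.0), origin="world")
```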
### Recreate rig animation instance
This action can regenerate an animation instance for a loaded rig, for example
for when it was accidentally deleted by the user.
![Maya - Inventory Action Recreate Rig Animation Instance](assets/maya-inventory_action_recreate_animation_instance.png)
#### Usage
Select 1 or more containers of type `rig` for which you want to recreate the
animation instance.
Binary file not shown. (new image, 14 KiB)

Binary file not shown. (new image, 46 KiB)
View file
@ -7,80 +7,112 @@ sidebar_label: Site Sync
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
Site Sync allows users and studios to synchronize published assets between
multiple 'sites'. Site denotes a storage location,
which could be a physical disk, server, cloud storage. To be able to use site
sync, it first needs to be configured.
:::warning
**This feature is** currently **in a beta stage** and it is not recommended to rely on it fully for production.
:::
Site Sync allows users and studios to synchronize published assets between multiple 'sites'. Site denotes a storage location,
which could be a physical disk, server, cloud storage. To be able to use site sync, it first needs to be configured.
The general idea is that each user acts as an individual site and can download and upload any published project files when they are needed. that way, artist can have access to the whole project, but only every store files that are relevant to them on their home workstation.
The general idea is that each user acts as an individual site and can download
and upload any published project files when they are needed. That way, artists
can have access to the whole project, but only ever store files that are
relevant to them on their home workstation.
:::note
At the moment site sync is only able to deal with publishes files. No workfiles will be synchronized unless they are published. We are working on making workfile synchronization possible as well.
At the moment Site Sync is only able to deal with published files. No workfiles
will be synchronized unless they are published. We are working on making
workfile synchronization possible as well.
:::
## System Settings
To use synchronization, *Site Sync* needs to be enabled globally in **OpenPype Settings/System/Modules/Site Sync**.
To use synchronization, *Site Sync* needs to be enabled globally in **OpenPype
Settings/System/Modules/Site Sync**.
![Configure module](assets/site_sync_system.png)
### Sites
### Sites
By default there are two sites created for each OpenPype installation:
- **studio** - default site - usually a centralized mounted disk accessible to all artists. Studio site is used if Site Sync is disabled.
- **local** - each workstation or server running OpenPype Tray receives its own with unique site name. Workstation refers to itself as "local"however all other sites will see it under it's unique ID.
Artists can explore their site ID by opening OpenPype Info tool by clicking on a version number in the tray app.
- **studio** - default site - usually a centralized mounted disk accessible to
all artists. Studio site is used if Site Sync is disabled.
- **local** - each workstation or server running OpenPype Tray receives its own
site with a unique name. The workstation refers to itself as "local", however all
other sites will see it under its unique ID.
Many different sites can be created and configured on the system level, and some or all can be assigned to each project.
Artists can find their site ID by opening the OpenPype Info tool - click on
the version number in the tray app.
Each OpenPype Tray app works with two sites at one time. (Sites can be the same, and no syncing is done in this setup).
Many different sites can be created and configured on the system level, and
some or all can be assigned to each project.
Sites could be configured differently per project basis.
Each OpenPype Tray app works with two sites at one time. (Sites can be the
same, and no syncing is done in this setup).
Each new site needs to be created first in `System Settings`. Most important feature of site is its Provider, select one from already prepared Providers.
Sites can be configured differently on a per-project basis.
#### Alternative sites
Each new site needs to be created first in `System Settings`. The most important
property of a site is its Provider; select one from the already prepared Providers.
#### Alternative sites
This attribute is meant for special use cases only.
One of the use cases is sftp site vendoring (exposing) same data as regular site (studio). Each site is accessible for different audience. 'studio' for artists in a studio via shared disk, 'sftp' for externals via sftp server with mounted 'studio' drive.
One of the use cases is an sftp site vendoring (exposing) the same data as a regular
site (studio). Each site is accessible to a different audience: 'studio' for
artists in a studio via a shared disk, 'sftp' for externals via an sftp server with
the mounted 'studio' drive.
Change of file status on one site actually means same change on 'alternate' site occurred too. (eg. artists publish to 'studio', 'sftp' is using
same location >> file is accessible on 'sftp' site right away, no need to sync it anyhow.)
A change of file status on one site actually means the same change occurred on the
'alternate' site too. (eg. artists publish to 'studio', 'sftp' is using the
same location >> the file is accessible on the 'sftp' site right away, no need to
sync it at all.)
##### Example
![Configure module](assets/site_sync_system_sites.png)
Admin created new `sftp` site which is handled by `SFTP` provider. Somewhere in the studio SFTP server is deployed on a machine that has access to `studio` drive.
The admin created a new `sftp` site which is handled by the `SFTP` provider. Somewhere
in the studio an SFTP server is deployed on a machine that has access to the `studio`
drive.
Alternative sites work both ways:
- everything published to `studio` is accessible on a `sftp` site too
- everything published to `sftp` (most probably via artist's local disk - artists publishes locally, representation is marked to be synced to `sftp`. Immediately after it is synced, it is marked to be available on `studio` too for artists in the studio to use.)
- everything published to `sftp` (most probably via an artist's local disk - the
artist publishes locally, the representation is marked to be synced to `sftp`.
Immediately after it is synced, it is marked as available on `studio` too
for artists in the studio to use.)
## Project Settings
Sites need to be made available for each project. Of course this is possible to do on the default project as well, in which case all other projects will inherit these settings until overridden explicitly.
Sites need to be made available for each project. Of course this is possible to
do on the default project as well, in which case all other projects will
inherit these settings until overridden explicitly.
You'll find the setting in **Settings/Project/Global/Site Sync**
The attributes that can be configured will vary between sites and their providers.
The attributes that can be configured will vary between sites and their
providers.
## Local settings
Each user should configure root folder for their 'local' site via **Local Settings** in OpenPype Tray. This folder will be used for all files that the user publishes or downloads while working on a project. Artist has the option to set the folder as "default"in which case it is used for all the projects, or it can be set on a project level individually.
Each user should configure a root folder for their 'local' site via **Local
Settings** in OpenPype Tray. This folder will be used for all files that the
user publishes or downloads while working on a project. The artist has the option
to set the folder as "default", in which case it is used for all projects, or
it can be set individually per project.
Artists can also override which site they use as active and remote if need be.
Artists can also override which site they use as active and remote if need be.
![Local overrides](assets/site_sync_local_setting.png)
## Providers
Each site implements a so called `provider` which handles most common operations (list files, copy files etc.) and provides interface with a particular type of storage. (disk, gdrive, aws, etc.)
Multiple configured sites could share the same provider with different settings (multiple mounted disks - each disk can be a separate site, while
Each site implements a so-called `provider` which handles the most common
operations (list files, copy files etc.) and provides an interface to a
particular type of storage (disk, gdrive, aws, etc.).
Multiple configured sites could share the same provider with different
settings (multiple mounted disks - each disk can be a separate site, while
all share the same provider).
**Currently implemented providers:**
@ -89,21 +121,30 @@ all share the same provider).
Handles files stored on disk storage.
Local drive provider is the most basic one that is used for accessing all standard hard disk storage scenarios. It will work with any storage that can be mounted on your system in a standard way. This could correspond to a physical external hard drive, network mounted storage, internal drive or even VPN connected network drive. It doesn't care about how the drive is mounted, but you must be able to point to it with a simple directory path.
Local drive provider is the most basic one that is used for accessing all
standard hard disk storage scenarios. It will work with any storage that can be
mounted on your system in a standard way. This could correspond to a physical
external hard drive, network mounted storage, internal drive or even VPN
connected network drive. It doesn't care about how the drive is mounted, but
you must be able to point to it with a simple directory path.
Default sites `local` and `studio` both use local drive provider.
### Google Drive
Handles files on Google Drive (this). GDrive is provided as a production example for implementing other cloud providers
Handles files on Google Drive. GDrive is provided as a production
example for implementing other cloud providers.
Let's imagine a small globally distributed studio which wants all published work for all their freelancers uploaded to Google Drive folder.
Let's imagine a small globally distributed studio which wants all published
work for all their freelancers uploaded to Google Drive folder.
For this use case admin needs to configure:
- how many times it tries to synchronize file in case of some issue (network, permissions)
- how many times it tries to synchronize a file in case of an issue (network,
permissions)
- how often should synchronization check for new assets
- sites for synchronization - 'local' and 'gdrive' (this can be overridden in local settings)
- sites for synchronization - 'local' and 'gdrive' (this can be overridden in
local settings)
- user credentials
- root folder location on Google Drive side
@ -111,30 +152,43 @@ Configuration would look like this:
![Configure project](assets/site_sync_project_settings.png)
*Site Sync* for Google Drive works using its API: https://developers.google.com/drive/api/v3/about-sdk
*Site Sync* for Google Drive works using its
API: https://developers.google.com/drive/api/v3/about-sdk
To configure Google Drive side you would need to have access to Google Cloud Platform project: https://console.cloud.google.com/
To configure Google Drive side you would need to have access to Google Cloud
Platform project: https://console.cloud.google.com/
To get working connection to Google Drive there are some necessary steps:
- first you need to enable GDrive API: https://developers.google.com/drive/api/v3/enable-drive-api
- next you need to create user, choose **Service Account** (for basic configuration no roles for account are necessary)
- first you need to enable GDrive
API: https://developers.google.com/drive/api/v3/enable-drive-api
- next you need to create a user, choose **Service Account** (for basic
configuration no roles for the account are necessary)
- add new key for created account and download .json file with credentials
- share destination folder on the Google Drive with created account (directly in GDrive web application)
- add new site back in OpenPype Settings, name as you want, provider needs to be 'gdrive'
- share destination folder on the Google Drive with created account (directly
in GDrive web application)
- add new site back in OpenPype Settings, name as you want, provider needs to
be 'gdrive'
- distribute credentials file via shared mounted disk location
:::note
If you are using regular personal GDrive for testing don't forget adding `/My Drive` as the prefix in root configuration. Business accounts and share drives don't need this.
If you are using a regular personal GDrive for testing, don't forget to
add `/My Drive` as the prefix in the root configuration. Business accounts and
shared drives don't need this.
:::
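To verify the service account outside OpenPype, a small sketch with the official `google-api-python-client` can list what the account sees; the credentials path is an assumption and this snippet is not part of OpenPype itself.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

CREDENTIALS_FILE = "/studio/configs/openpype-gdrive-credentials.json"  # assumed path
SCOPES = ["https://www.googleapis.com/auth/drive"]

credentials = service_account.Credentials.from_service_account_file(
    CREDENTIALS_FILE, scopes=SCOPES)
drive = build("drive", "v3", credentials=credentials)

# List a few files the service account can see - if the destination folder was
# shared correctly, its contents show up here.
response = drive.files().list(pageSize=10, fields="files(id, name)").execute()
for item in response.get("files", []):
    print(item["name"], item["id"])
```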
### SFTP
SFTP provider is used to connect to SFTP server. Currently authentication with `user:password` or `user:ssh key` is implemented.
Please provide only one combination, don't forget to provide password for ssh key if ssh key was created with a passphrase.
The SFTP provider is used to connect to an SFTP server. Currently authentication
with `user:password` or `user:ssh key` is implemented.
Please provide only one combination; don't forget to provide the password for the
ssh key if the key was created with a passphrase.
(SFTP connection could be a bit finicky, use FileZilla or WinSCP for testing connection, it will be mush faster.)
(SFTP connection could be a bit finicky, use FileZilla or WinSCP for testing the
connection, it will be much faster.)
Beware that ssh key expects OpenSSH format (`.pem`) not a Putty format (`.ppk`)!
Beware that the ssh key is expected in OpenSSH format (`.pem`), not Putty
format (`.ppk`)!
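A quick connection test can also be scripted with `paramiko` (a commonly used SFTP library in Python); the host, credentials and key path below are placeholders, and this snippet is independent of OpenPype.

```python
import paramiko

HOST, PORT = "sftp.example-studio.com", 22   # placeholders
USERNAME = "openpype"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Either password authentication ...
client.connect(HOST, port=PORT, username=USERNAME, password="secret")

# ... or an OpenSSH (.pem) key, with its passphrase if it has one:
# key = paramiko.RSAKey.from_private_key_file(
#     "/home/user/.ssh/openpype_key.pem", password="passphrase")
# client.connect(HOST, port=PORT, username=USERNAME, pkey=key)

sftp = client.open_sftp()
print(sftp.listdir("/upload"))   # the destination folder from the settings
client.close()
```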
#### How to set SFTP site
@ -143,60 +197,101 @@ Beware that ssh key expects OpenSSH format (`.pem`) not a Putty format (`.ppk`)!
![Enable syncing and create site](assets/site_sync_sftp_system.png)
- In Projects setting enable Site Sync (on default project - all project will be synched, or on specific project)
- Configure SFTP connection and destination folder on a SFTP server (in screenshot `/upload`)
- In Project settings enable Site Sync (on the default project - all projects will
be synced, or on a specific project)
- Configure the SFTP connection and destination folder on the SFTP server (in the
screenshot `/upload`)
![SFTP connection](assets/site_sync_project_sftp_settings.png)
- if you want to force syncing between local and sftp site for all users, use combination `active site: local`, `remote site: NAME_OF_SFTP_SITE`
- if you want to allow only specific users to use SFTP syncing (external users, not located in the office), use `active site: studio`, `remote site: studio`.
- if you want to force syncing between local and sftp site for all users, use
combination `active site: local`, `remote site: NAME_OF_SFTP_SITE`
- if you want to allow only specific users to use SFTP syncing (external users,
not located in the office), use `active site: studio`, `remote site: studio`.
![Select active and remote site on a project](assets/site_sync_sftp_project_setting_not_forced.png)
- Each artist can decide and configure syncing from his/her local to SFTP via `Local Settings`
- Each artist can decide and configure syncing from his/her local to SFTP
via `Local Settings`
![Select active and remote site on a project](assets/site_sync_sftp_settings_local.png)
### Custom providers
If a studio needs to use other services for cloud storage, or want to implement totally different storage providers, they can do so by writing their own provider plugin. We're working on a developer documentation, however, for now we recommend looking at `abstract_provider.py`and `gdrive.py` inside `openpype/modules/sync_server/providers` and using it as a template.
If a studio needs to use other services for cloud storage, or wants to implement
a totally different storage provider, they can do so by writing their own
provider plugin. We're working on developer documentation; however, for now
we recommend looking at `abstract_provider.py` and `gdrive.py`
inside `openpype/modules/sync_server/providers` and using them as a template.
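As a very rough orientation, a provider skeleton might look like the sketch below; the method names and constructor signature are assumptions for illustration - the authoritative interface is the one defined in `abstract_provider.py`.

```python
# Illustrative skeleton only - the real interface is defined by
# openpype/modules/sync_server/providers/abstract_provider.py, and the method
# names below are assumptions, not the actual required signatures.
class MyCloudProvider(object):
    CODE = "mycloud"
    LABEL = "My Cloud Storage"

    def __init__(self, project_name, site_name, presets=None):
        self.presets = presets or {}

    def is_active(self):
        """Return True when credentials are present and the service is reachable."""
        return bool(self.presets.get("credentials_url"))

    def upload_file(self, source_path, target_path, overwrite=True):
        """Copy a local file to the remote storage."""
        raise NotImplementedError

    def download_file(self, source_path, local_path):
        """Copy a remote file to local storage."""
        raise NotImplementedError

    def list_folder(self, folder_path):
        """Return the files available under 'folder_path' on the remote side."""
        raise NotImplementedError
```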
### Running Site Sync in background
Site Sync server synchronizes new published files from artist machine into configured remote location by default.
The Site Sync server synchronizes newly published files from the artist's machine
into the configured remote location by default.
There might be a use case where you need to synchronize between "non-artist" sites, for example between studio site and cloud. In this case
you need to run Site Sync as a background process from a command line (via service etc) 24/7.
There might be a use case where you need to synchronize between "non-artist"
sites, for example between studio site and cloud. In this case
you need to run Site Sync as a background process from a command line (via
service etc) 24/7.
To configure all sites where all published files should be synced eventually you need to configure `project_settings/global/sync_server/config/always_accessible_on` property in Settings (per project) first.
To configure all sites where all published files should be synced eventually
you need to
configure `project_settings/global/sync_server/config/always_accessible_on`
property in Settings (per project) first.
![Set another non artist remote site](assets/site_sync_always_on.png)
This is an example of:
- Site Sync is enabled for a project
- default active and remote sites are set to `studio` - eg. standard process: everyone is working in a studio, publishing to shared location etc.
- (but this also allows any of the artists to work remotely, they would change their active site in their own Local Settings to `local` and configure local root.
This would result in everything artist publishes is saved first onto his local folder AND synchronized to `studio` site eventually.)
- default active and remote sites are set to `studio` - eg. standard process:
everyone is working in a studio, publishing to shared location etc.
- (but this also allows any of the artists to work remotely, they would change
their active site in their own Local Settings to `local` and configure local
root.
This would result in everything the artist publishes being saved first into their
local folder AND synchronized to the `studio` site eventually.)
- everything exported must also be eventually uploaded to `sftp` site
This eventual synchronization between `studio` and `sftp` sites must be physically handled by background process.
This eventual synchronization between the `studio` and `sftp` sites must be
physically handled by a background process.
As current implementation relies heavily on Settings and Local Settings, background process for a specific site ('studio' for example) must be configured via Tray first to `syncserver` command to work.
As the current implementation relies heavily on Settings and Local Settings, the
background process for a specific site ('studio' for example) must be
configured via the Tray first for the `syncserver` command to work.
To do this:
- run OP `Tray` with environment variable OPENPYPE_LOCAL_ID set to name of active (source) site. In most use cases it would be studio (for cases of backups of everything published to studio site to different cloud site etc.)
- run OP `Tray` with the environment variable OPENPYPE_LOCAL_ID set to the name of
the active (source) site. In most use cases it would be `studio` (for cases of
backing up everything published to the studio site to a different cloud site etc.)
- start `Tray`
- check `Local ID` in information dialog after clicking on version number in the Tray
- check `Local ID` in information dialog after clicking on version number in
the Tray
- open `Local Settings` in the `Tray`
- configure for each project necessary active site and remote site
- close `Tray`
- run OP from a command line with `syncserver` and `--active_site` arguments
This is an example how to trigger background syncing process where active (source) site is `studio`.
(It is expected that OP is installed on a machine, `openpype_console` is on PATH. If not, add full path to executable.
This is an example of how to trigger the background syncing process where the
active (source) site is `studio`.
(It is expected that OP is installed on the machine and `openpype_console` is on
PATH. If not, add the full path to the executable.
)
```shell
openpype_console syncserver --active_site studio
```
### Syncing of last published workfile
Some DCCs might have
`project_settings/global/tools/Workfiles/last_workfile_on_startup` enabled, eg.
opening the DCC with the last opened workfile.
The `use_last_published_workfile` flag tells OpenPype that the last published
workfile should be used if no workfile is present locally.
This use case could happen if an artist starts working on a new task locally and
doesn't have any workfile present. In that case the last published workfile will be
synchronized locally and its version bumped by 1 (as the workfile's version is
always +1 from the published version).
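A tiny sketch of the "+1 from the published version" idea, assuming a `_v###` version token in the file name; the actual naming and version handling are driven by OpenPype's templates, so both the token and the helper below are illustrative.

```python
import re

def next_workfile_name(published_name):
    """Bump the version token by one, e.g. 'shot010_task_v012.ma' -> 'shot010_task_v013.ma'.

    Assumes a '_v###' version token, which is only one possible naming convention.
    """
    def bump(match):
        number = int(match.group(1)) + 1
        return "_v{0:0{1}d}".format(number, len(match.group(1)))
    return re.sub(r"_v(\d+)", bump, published_name, count=1)

print(next_workfile_name("shot010_animation_v012.ma"))  # shot010_animation_v013.ma
```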
View file
@ -126,6 +126,7 @@ module.exports = {
"admin_hosts_nuke",
"admin_hosts_resolve",
"admin_hosts_harmony",
"admin_hosts_photoshop",
"admin_hosts_aftereffects",
"admin_hosts_tvpaint"
],