Merge branch 'develop' into bugfix/houdini_review_no_camera_collect_error

Roy Nieterau 2023-04-24 15:59:20 +02:00 committed by GitHub
commit e43ede4b8d
41 changed files with 1020 additions and 457 deletions

View file

@ -25,5 +25,5 @@ jobs:
- name: Invoke pre-release workflow
uses: benc-uk/workflow-dispatch@v1
with:
workflow: Nightly Prerelease
workflow: prerelease.yml
token: ${{ secrets.YNPUT_BOT_TOKEN }}

View file

@ -65,3 +65,9 @@ jobs:
source_ref: 'main'
target_branch: 'develop'
commit_message_template: '[Automated] Merged {source_ref} into {target_branch}'
- name: Invoke Update bug report workflow
uses: benc-uk/workflow-dispatch@v1
with:
workflow: update_bug_report.yml
token: ${{ secrets.YNPUT_BOT_TOKEN }}

View file

@ -52,7 +52,7 @@ RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.n
# we need to build our own patchelf
WORKDIR /temp-patchelf
RUN git clone https://github.com/NixOS/patchelf.git . \
RUN git clone -b 0.17.0 --single-branch https://github.com/NixOS/patchelf.git . \
&& source scl_source enable devtoolset-7 \
&& ./bootstrap.sh \
&& ./configure \

View file

@ -69,6 +69,19 @@ def convert_ids(in_ids):
def get_projects(active=True, inactive=False, fields=None):
"""Yield all project entity documents.
Args:
active (Optional[bool]): Include active projects. Defaults to True.
inactive (Optional[bool]): Include inactive projects.
Defaults to False.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Yields:
dict: Project entity data which can be reduced to specified 'fields'.
Projects that do not match the 'active'/'inactive' filters are skipped.
"""
mongodb = get_project_database()
for project_name in mongodb.collection_names():
if project_name in ("system.indexes",):
@ -81,6 +94,20 @@ def get_projects(active=True, inactive=False, fields=None):
def get_project(project_name, active=True, inactive=True, fields=None):
"""Return project entity document by project name.
Args:
project_name (str): Name of project.
active (Optional[bool]): Allow active project. Defaults to True.
inactive (Optional[bool]): Allow inactive project. Defaults to True.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Union[Dict, None]: Project entity data which can be reduced to
specified 'fields'. None is returned if project with specified
filters was not found.
"""
# Skip if both are disabled
if not active and not inactive:
return None
@ -124,17 +151,18 @@ def get_whole_project(project_name):
def get_asset_by_id(project_name, asset_id, fields=None):
"""Receive asset data by it's id.
"""Receive asset data by its id.
Args:
project_name (str): Name of project where to look for queried entities.
asset_id (Union[str, ObjectId]): Asset's id.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
dict: Asset entity data.
None: Asset was not found by id.
Union[Dict, None]: Asset entity data which can be reduced to
specified 'fields'. None is returned if asset with specified
filters was not found.
"""
asset_id = convert_id(asset_id)
@ -147,17 +175,18 @@ def get_asset_by_id(project_name, asset_id, fields=None):
def get_asset_by_name(project_name, asset_name, fields=None):
"""Receive asset data by it's name.
"""Receive asset data by its name.
Args:
project_name (str): Name of project where to look for queried entities.
asset_name (str): Asset's name.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
dict: Asset entity data.
None: Asset was not found by name.
Union[Dict, None]: Asset entity data which can be reduced to
specified 'fields'. None is returned if asset with specified
filters was not found.
"""
if not asset_name:
@ -195,8 +224,8 @@ def _get_assets(
parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
standard (bool): Query standard assets (type 'asset').
archived (bool): Query archived assets (type 'archived_asset').
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Query cursor as iterable which returns asset documents matching
@ -261,8 +290,8 @@ def get_assets(
asset_names (Iterable[str]): Name assets that should be found.
parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
archived (bool): Add also archived assets.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Query cursor as iterable which returns asset documents matching
@ -300,8 +329,8 @@ def get_archived_assets(
be found.
asset_names (Iterable[str]): Name assets that should be found.
parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Query cursor as iterable which returns asset documents matching
@ -356,17 +385,18 @@ def get_asset_ids_with_subsets(project_name, asset_ids=None):
def get_subset_by_id(project_name, subset_id, fields=None):
"""Single subset entity data by it's id.
"""Single subset entity data by its id.
Args:
project_name (str): Name of project where to look for queried entities.
subset_id (Union[str, ObjectId]): Id of subset which should be found.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If subset with specified filters was not found.
Dict: Subset document which can be reduced to specified 'fields'.
Union[Dict, None]: Subset entity data which can be reduced to
specified 'fields'. None is returned if subset with specified
filters was not found.
"""
subset_id = convert_id(subset_id)
@ -379,20 +409,19 @@ def get_subset_by_id(project_name, subset_id, fields=None):
def get_subset_by_name(project_name, subset_name, asset_id, fields=None):
"""Single subset entity data by it's name and it's version id.
"""Single subset entity data by its name and its version id.
Args:
project_name (str): Name of project where to look for queried entities.
subset_name (str): Name of subset.
asset_id (Union[str, ObjectId]): Id of parent asset.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Union[None, Dict[str, Any]]: None if subset with specified filters was
not found or dict subset document which can be reduced to
specified 'fields'.
Union[Dict, None]: Subset entity data which can be reduced to
specified 'fields'. None is returned if subset with specified
filters was not found.
"""
if not subset_name:
return None
@ -434,8 +463,8 @@ def get_subsets(
names_by_asset_ids (dict[ObjectId, List[str]]): Complex filtering
using asset ids and list of subset names under the asset.
archived (bool): Look for archived subsets too.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Iterable cursor yielding all matching subsets.
@ -520,17 +549,18 @@ def get_subset_families(project_name, subset_ids=None):
def get_version_by_id(project_name, version_id, fields=None):
"""Single version entity data by it's id.
"""Single version entity data by its id.
Args:
project_name (str): Name of project where to look for queried entities.
version_id (Union[str, ObjectId]): Id of version which should be found.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If version with specified filters was not found.
Dict: Version document which can be reduced to specified 'fields'.
Union[Dict, None]: Version entity data which can be reduced to
specified 'fields'. None is returned if version with specified
filters was not found.
"""
version_id = convert_id(version_id)
@ -546,18 +576,19 @@ def get_version_by_id(project_name, version_id, fields=None):
def get_version_by_name(project_name, version, subset_id, fields=None):
"""Single version entity data by it's name and subset id.
"""Single version entity data by its name and subset id.
Args:
project_name (str): Name of project where to look for queried entities.
version (int): name of version entity (it's version).
version (int): name of version entity (its version).
subset_id (Union[str, ObjectId]): Id of version which should be found.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If version with specified filters was not found.
Dict: Version document which can be reduced to specified 'fields'.
Union[Dict, None]: Version entity data which can be reduced to
specified 'fields'. None is returned if version with specified
filters was not found.
"""
subset_id = convert_id(subset_id)
@ -574,7 +605,7 @@ def get_version_by_name(project_name, version, subset_id, fields=None):
def version_is_latest(project_name, version_id):
"""Is version the latest from it's subset.
"""Is version the latest from its subset.
Note:
Hero versions are considered as latest.
@ -680,8 +711,8 @@ def get_versions(
versions (Iterable[int]): Version names (as integers).
Filter ignored if 'None' is passed.
hero (bool): Look also for hero versions.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Iterable cursor yielding all matching versions.
@ -705,12 +736,13 @@ def get_hero_version_by_subset_id(project_name, subset_id, fields=None):
project_name (str): Name of project where to look for queried entities.
subset_id (Union[str, ObjectId]): Subset id under which
is hero version.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If hero version for passed subset id does not exists.
Dict: Hero version entity data.
Union[Dict, None]: Hero version entity data which can be reduced to
specified 'fields'. None is returned if hero version with specified
filters was not found.
"""
subset_id = convert_id(subset_id)
@ -730,17 +762,18 @@ def get_hero_version_by_subset_id(project_name, subset_id, fields=None):
def get_hero_version_by_id(project_name, version_id, fields=None):
"""Hero version by it's id.
"""Hero version by its id.
Args:
project_name (str): Name of project where to look for queried entities.
version_id (Union[str, ObjectId]): Hero version id.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If hero version with passed id was not found.
Dict: Hero version entity data.
Union[Dict, None]: Hero version entity data which can be reduced to
specified 'fields'. None is returned if hero version with specified
filters was not found.
"""
version_id = convert_id(version_id)
@ -773,8 +806,8 @@ def get_hero_versions(
should look for hero versions. Filter ignored if 'None' is passed.
version_ids (Iterable[Union[str, ObjectId]]): Hero version ids. Filter
ignored if 'None' is passed.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor|list: Iterable yielding hero versions matching passed filters.
@ -801,8 +834,8 @@ def get_output_link_versions(project_name, version_id, fields=None):
project_name (str): Name of project where to look for queried entities.
version_id (Union[str, ObjectId]): Version id which can be used
as input link for other versions.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Iterable: Iterable cursor yielding versions that are used as input
@ -828,8 +861,8 @@ def get_last_versions(project_name, subset_ids, fields=None):
Args:
project_name (str): Name of project where to look for queried entities.
subset_ids (Iterable[Union[str, ObjectId]]): List of subset ids.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
dict[ObjectId, int]: Key is subset id and value is last version name.
@ -913,12 +946,13 @@ def get_last_version_by_subset_id(project_name, subset_id, fields=None):
Args:
project_name (str): Name of project where to look for queried entities.
subset_id (Union[str, ObjectId]): Id of version which should be found.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If version with specified filters was not found.
Dict: Version document which can be reduced to specified 'fields'.
Union[Dict, None]: Version entity data which can be reduced to
specified 'fields'. None is returned if version with specified
filters was not found.
"""
subset_id = convert_id(subset_id)
@ -945,12 +979,13 @@ def get_last_version_by_subset_name(
asset_id (Union[str, ObjectId]): Asset id which is parent of passed
subset name.
asset_name (str): Asset name which is parent of passed subset name.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If version with specified filters was not found.
Dict: Version document which can be reduced to specified 'fields'.
Union[Dict, None]: Version entity data which can be reduced to
specified 'fields'. None is returned if version with specified
filters was not found.
"""
if not asset_id and not asset_name:
@ -972,18 +1007,18 @@ def get_last_version_by_subset_name(
def get_representation_by_id(project_name, representation_id, fields=None):
"""Representation entity data by it's id.
"""Representation entity data by its id.
Args:
project_name (str): Name of project where to look for queried entities.
representation_id (Union[str, ObjectId]): Representation id.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If representation with specified filters was not found.
Dict: Representation entity data which can be reduced
to specified 'fields'.
Union[Dict, None]: Representation entity data which can be reduced to
specified 'fields'. None is returned if representation with
specified filters was not found.
"""
if not representation_id:
@ -1004,19 +1039,19 @@ def get_representation_by_id(project_name, representation_id, fields=None):
def get_representation_by_name(
project_name, representation_name, version_id, fields=None
):
"""Representation entity data by it's name and it's version id.
"""Representation entity data by its name and its version id.
Args:
project_name (str): Name of project where to look for queried entities.
representation_name (str): Representation name.
version_id (Union[str, ObjectId]): Id of parent version entity.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If representation with specified filters was not found.
Dict: Representation entity data which can be reduced
to specified 'fields'.
Union[dict[str, Any], None]: Representation entity data which can be
reduced to specified 'fields'. None is returned if representation
with specified filters was not found.
"""
version_id = convert_id(version_id)
@ -1202,8 +1237,8 @@ def get_representations(
names_by_version_ids (dict[ObjectId, list[str]]): Complex filtering
using version ids and list of names under the version.
archived (bool): Output will also contain archived representations.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Iterable cursor yielding all matching representations.
@ -1247,8 +1282,8 @@ def get_archived_representations(
representation context fields.
names_by_version_ids (dict[ObjectId, List[str]]): Complex filtering
using version ids and list of names under the version.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Iterable cursor yielding all matching representations.
@ -1377,8 +1412,8 @@ def get_thumbnail_id_from_source(project_name, src_type, src_id):
src_id (Union[str, ObjectId]): Id of source entity.
Returns:
ObjectId: Thumbnail id assigned to entity.
None: If Source entity does not have any thumbnail id assigned.
Union[ObjectId, None]: Thumbnail id assigned to entity. None is returned
if the source entity does not have any thumbnail id assigned.
"""
if not src_type or not src_id:
@ -1397,14 +1432,14 @@ def get_thumbnails(project_name, thumbnail_ids, fields=None):
"""Receive thumbnails entity data.
Thumbnail entity can be used to receive binary content of thumbnail based
on it's content and ThumbnailResolvers.
on its content and ThumbnailResolvers.
Args:
project_name (str): Name of project where to look for queried entities.
thumbnail_ids (Iterable[Union[str, ObjectId]]): Ids of thumbnail
entities.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
cursor: Cursor of queried documents.
@ -1429,12 +1464,13 @@ def get_thumbnail(project_name, thumbnail_id, fields=None):
Args:
project_name (str): Name of project where to look for queried entities.
thumbnail_id (Union[str, ObjectId]): Id of thumbnail entity.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If thumbnail with specified id was not found.
Dict: Thumbnail entity data which can be reduced to specified 'fields'.
Union[Dict, None]: Thumbnail entity data which can be reduced to
specified 'fields'. None is returned if thumbnail with specified
filters was not found.
"""
if not thumbnail_id:
@ -1458,8 +1494,13 @@ def get_workfile_info(
project_name (str): Name of project where to look for queried entities.
asset_id (Union[str, ObjectId]): Id of asset entity.
task_name (str): Task name on asset.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Union[Dict, None]: Workfile entity data which can be reduced to
specified 'fields'. None is returned if workfile with specified
filters was not found.
"""
if not asset_id or not task_name or not filename:

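The docstring updates in this file converge on a single return contract: each getter returns the entity document, optionally reduced to 'fields', or None when nothing matches the filters. A minimal sketch of a caller relying on that contract, assuming a configured OpenPype Mongo connection; the project and asset names are hypothetical:

from openpype.client import get_asset_by_name

# "fields" limits the projection; passing None returns the full document,
# per the docstrings above.
asset = get_asset_by_name(
    "my_project",  # hypothetical project name
    "sh010",       # hypothetical asset name
    fields=["_id", "data.frameStart", "data.frameEnd"],
)
if asset is None:
    # None, not an exception, signals that nothing matched the filters
    raise RuntimeError("Asset not found")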
View file

@ -104,3 +104,6 @@ class AbcLoader(load.LoaderPlugin):
node = container["node"]
node.destroy()
def switch(self, container, representation):
self.update(container, representation)

View file

@ -73,3 +73,6 @@ class AbcArchiveLoader(load.LoaderPlugin):
node = container["node"]
node.destroy()
def switch(self, container, representation):
self.update(container, representation)

View file

@ -106,3 +106,6 @@ class BgeoLoader(load.LoaderPlugin):
node = container["node"]
node.destroy()
def switch(self, container, representation):
self.update(container, representation)

View file

@ -192,3 +192,6 @@ class CameraLoader(load.LoaderPlugin):
new_node.moveToGoodPosition()
return new_node
def switch(self, container, representation):
self.update(container, representation)

View file

@ -125,3 +125,6 @@ class ImageLoader(load.LoaderPlugin):
prefix, padding, suffix = first_fname.rsplit(".", 2)
fname = ".".join([prefix, "$F{}".format(len(padding)), suffix])
return os.path.join(root, fname).replace("\\", "/")
def switch(self, container, representation):
self.update(container, representation)

View file

@ -79,3 +79,6 @@ class USDSublayerLoader(load.LoaderPlugin):
node = container["node"]
node.destroy()
def switch(self, container, representation):
self.update(container, representation)

View file

@ -79,3 +79,6 @@ class USDReferenceLoader(load.LoaderPlugin):
node = container["node"]
node.destroy()
def switch(self, container, representation):
self.update(container, representation)

View file

@ -102,3 +102,6 @@ class VdbLoader(load.LoaderPlugin):
node = container["node"]
node.destroy()
def switch(self, container, representation):
self.update(container, representation)

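Each of the Houdini loaders above gains the same two-line switch method that delegates to update, since switching a container to another representation is the same operation as repointing its file node. A condensed, runnable sketch of the pattern using a hypothetical stand-in class:

class _LoaderSketch:
    """Hypothetical stand-in for load.LoaderPlugin."""

    def update(self, container, representation):
        # the real loaders repoint the file parm on container["node"]
        print("update {} -> {}".format(container["node"], representation))

    def switch(self, container, representation):
        # switching is implemented as a plain update, as in the loaders above
        self.update(container, representation)

_LoaderSketch().switch({"node": "/obj/alembic1"}, "representation-id")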
View file

@ -4,15 +4,14 @@ import hou
import pyblish.api
class CollectHoudiniCurrentFile(pyblish.api.InstancePlugin):
class CollectHoudiniCurrentFile(pyblish.api.ContextPlugin):
"""Inject the current working file into context"""
order = pyblish.api.CollectorOrder - 0.01
order = pyblish.api.CollectorOrder - 0.1
label = "Houdini Current File"
hosts = ["houdini"]
families = ["workfile"]
def process(self, instance):
def process(self, context):
"""Inject the current working file"""
current_file = hou.hipFile.path()
@ -34,26 +33,5 @@ class CollectHoudiniCurrentFile(pyblish.api.InstancePlugin):
"saved correctly."
)
instance.context.data["currentFile"] = current_file
folder, file = os.path.split(current_file)
filename, ext = os.path.splitext(file)
instance.data.update({
"setMembers": [current_file],
"frameStart": instance.context.data['frameStart'],
"frameEnd": instance.context.data['frameEnd'],
"handleStart": instance.context.data['handleStart'],
"handleEnd": instance.context.data['handleEnd']
})
instance.data['representations'] = [{
'name': ext.lstrip("."),
'ext': ext.lstrip("."),
'files': file,
"stagingDir": folder,
}]
self.log.info('Collected instance: {}'.format(file))
self.log.info('Scene path: {}'.format(current_file))
self.log.info('staging Dir: {}'.format(folder))
context.data["currentFile"] = current_file
self.log.info('Current workfile path: {}'.format(current_file))

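The collector above moves from an InstancePlugin to a ContextPlugin because the current workfile path is context-wide data, not per-instance data. A minimal sketch of that plugin shape, with the Houdini call replaced by a hardcoded path so it imports outside Houdini; the class name and path are hypothetical:

import pyblish.api

class CollectCurrentFileSketch(pyblish.api.ContextPlugin):
    """Hypothetical minimal version of the collector above."""

    # runs early so later plugins can rely on context.data["currentFile"]
    order = pyblish.api.CollectorOrder - 0.1
    label = "Current File (sketch)"
    hosts = ["houdini"]

    def process(self, context):
        # the real plugin uses hou.hipFile.path()
        context.data["currentFile"] = "/path/to/scene.hip"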
View file

@ -17,6 +17,7 @@ class CollectHoudiniReviewData(pyblish.api.InstancePlugin):
# which isn't the actual frame range that this instance renders.
instance.data["handleStart"] = 0
instance.data["handleEnd"] = 0
instance.data["fps"] = instance.context.data["fps"]
# Enable ftrack functionality
instance.data.setdefault("families", []).append('ftrack')

View file

@ -0,0 +1,36 @@
import os
import pyblish.api
class CollectWorkfile(pyblish.api.InstancePlugin):
"""Inject workfile representation into instance"""
order = pyblish.api.CollectorOrder - 0.01
label = "Houdini Workfile Data"
hosts = ["houdini"]
families = ["workfile"]
def process(self, instance):
current_file = instance.context.data["currentFile"]
folder, file = os.path.split(current_file)
filename, ext = os.path.splitext(file)
instance.data.update({
"setMembers": [current_file],
"frameStart": instance.context.data['frameStart'],
"frameEnd": instance.context.data['frameEnd'],
"handleStart": instance.context.data['handleStart'],
"handleEnd": instance.context.data['handleEnd']
})
instance.data['representations'] = [{
'name': ext.lstrip("."),
'ext': ext.lstrip("."),
'files': file,
"stagingDir": folder,
}]
self.log.info('Collected instance: {}'.format(file))
self.log.info('Staging dir: {}'.format(folder))

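A worked example of the representation this collector builds, using a hypothetical workfile path; the splitting logic is the same as in the plugin above:

import os

current_file = "/projects/demo/work/shot010_v001.hip"  # hypothetical path
folder, file = os.path.split(current_file)
filename, ext = os.path.splitext(file)

representation = {
    "name": ext.lstrip("."),  # "hip"
    "ext": ext.lstrip("."),   # "hip"
    "files": file,            # "shot010_v001.hip"
    "stagingDir": folder,     # "/projects/demo/work"
}
print(representation)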
View file

@ -32,6 +32,10 @@ from openpype.pipeline import (
load_container,
registered_host,
)
from openpype.pipeline.create import (
legacy_create,
get_legacy_creator_by_name,
)
from openpype.pipeline.context_tools import (
get_current_asset_name,
get_current_project_asset,
@ -2153,17 +2157,23 @@ def set_scene_resolution(width, height, pixelAspect):
cmds.setAttr("%s.pixelAspect" % control_node, pixelAspect)
def get_frame_range():
"""Get the current assets frame range and handles."""
def get_frame_range(include_animation_range=False):
"""Get the current assets frame range and handles.
Args:
include_animation_range (bool, optional): Whether to include
`animationStart` and `animationEnd` keys to define the outer
range of the timeline. It is excluded by default.
Returns:
dict: Asset's expected frame range values.
"""
# Set frame start/end
project_name = get_current_project_name()
task_name = get_current_task_name()
asset_name = get_current_asset_name()
asset = get_asset_by_name(project_name, asset_name)
settings = get_project_settings(project_name)
include_handles_settings = settings["maya"]["include_handles"]
current_task = asset.get("data").get("tasks").get(task_name)
frame_start = asset["data"].get("frameStart")
frame_end = asset["data"].get("frameEnd")
@ -2175,32 +2185,39 @@ def get_frame_range():
handle_start = asset["data"].get("handleStart") or 0
handle_end = asset["data"].get("handleEnd") or 0
animation_start = frame_start
animation_end = frame_end
include_handles = include_handles_settings["include_handles_default"]
for item in include_handles_settings["per_task_type"]:
if current_task["type"] in item["task_type"]:
include_handles = item["include_handles"]
break
if include_handles:
animation_start -= int(handle_start)
animation_end += int(handle_end)
cmds.playbackOptions(
minTime=frame_start,
maxTime=frame_end,
animationStartTime=animation_start,
animationEndTime=animation_end
)
cmds.currentTime(frame_start)
return {
frame_range = {
"frameStart": frame_start,
"frameEnd": frame_end,
"handleStart": handle_start,
"handleEnd": handle_end
}
if include_animation_range:
# The animation range values are only included to define whether
# the Maya time slider should include the handles or not.
# Some usages of this function use the full dictionary to define
# instance attributes for which we want to exclude the animation
# keys. That is why these are excluded by default.
task_name = get_current_task_name()
settings = get_project_settings(project_name)
include_handles_settings = settings["maya"]["include_handles"]
current_task = asset.get("data").get("tasks").get(task_name)
animation_start = frame_start
animation_end = frame_end
include_handles = include_handles_settings["include_handles_default"]
for item in include_handles_settings["per_task_type"]:
if current_task["type"] in item["task_type"]:
include_handles = item["include_handles"]
break
if include_handles:
animation_start -= int(handle_start)
animation_end += int(handle_end)
frame_range["animationStart"] = animation_start
frame_range["animationEnd"] = animation_end
return frame_range
def reset_frame_range(playback=True, render=True, fps=True):
@ -2219,18 +2236,23 @@ def reset_frame_range(playback=True, render=True, fps=True):
)
set_scene_fps(fps)
frame_range = get_frame_range()
frame_range = get_frame_range(include_animation_range=True)
if not frame_range:
# No frame range data found for asset
return
frame_start = frame_range["frameStart"] - int(frame_range["handleStart"])
frame_end = frame_range["frameEnd"] + int(frame_range["handleEnd"])
frame_start = frame_range["frameStart"]
frame_end = frame_range["frameEnd"]
animation_start = frame_range["animationStart"]
animation_end = frame_range["animationEnd"]
if playback:
cmds.playbackOptions(minTime=frame_start)
cmds.playbackOptions(maxTime=frame_end)
cmds.playbackOptions(animationStartTime=frame_start)
cmds.playbackOptions(animationEndTime=frame_end)
cmds.playbackOptions(minTime=frame_start)
cmds.playbackOptions(maxTime=frame_end)
cmds.playbackOptions(
minTime=frame_start,
maxTime=frame_end,
animationStartTime=animation_start,
animationEndTime=animation_end
)
cmds.currentTime(frame_start)
if render:
@ -3913,3 +3935,53 @@ def get_capture_preset(task_name, task_type, subset, project_settings, log):
capture_preset = plugin_settings["capture_preset"]
return capture_preset or {}
def create_rig_animation_instance(nodes, context, namespace, log=None):
"""Create an animation publish instance for loaded rigs.
See the RecreateRigAnimationInstance inventory action on how to use this
for loaded rig containers.
Arguments:
nodes (list): Member nodes of the rig instance.
context (dict): Representation context of the rig container
namespace (str): Namespace of the rig container
log (logging.Logger, optional): Logger to log to if provided
Returns:
None
"""
output = next((node for node in nodes if
node.endswith("out_SET")), None)
controls = next((node for node in nodes if
node.endswith("controls_SET")), None)
assert output, "No out_SET in rig, this is a bug."
assert controls, "No controls_SET in rig, this is a bug."
# Find the roots amongst the loaded nodes
roots = (
cmds.ls(nodes, assemblies=True, long=True) or
get_highest_in_hierarchy(nodes)
)
assert roots, "No root nodes in rig, this is a bug."
asset = legacy_io.Session["AVALON_ASSET"]
dependency = str(context["representation"]["_id"])
if log:
log.info("Creating subset: {}".format(namespace))
# Create the animation instance
creator_plugin = get_legacy_creator_by_name("CreateAnimation")
with maintained_selection():
cmds.select([output, controls] + roots, noExpand=True)
legacy_create(
creator_plugin,
name=namespace,
asset=asset,
options={"useSelection": True},
data={"dependencies": dependency}
)

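How the reworked get_frame_range is meant to be called, as a sketch that only runs inside a Maya session with an active OpenPype context; the commented result values are hypothetical:

from openpype.hosts.maya.api.lib import get_frame_range, reset_frame_range

# default call: instance attributes, no animation range keys
ranges = get_frame_range()
# e.g. {"frameStart": 1001, "frameEnd": 1050,
#       "handleStart": 10, "handleEnd": 10}

# timeline setup: also compute the outer slider range; the handles are
# included only when project settings enable them for the current task type
ranges = get_frame_range(include_animation_range=True)
# adds "animationStart" and "animationEnd"

reset_frame_range()  # applies the asset range to the Maya timeline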
View file

@ -7,6 +7,12 @@ from openpype.hosts.maya.api import (
class CreateAnimation(plugin.Creator):
"""Animation output for character rigs"""
# We hide the animation creator from the UI since the creation of it
# is automated upon loading a rig. There's an inventory action to recreate
# it for loaded rigs if by chance someone deleted the animation instance.
# Note: This setting is actually applied from project settings
enabled = False
name = "animationDefault"
label = "Animation"
family = "animation"

View file

@ -0,0 +1,35 @@
from openpype.pipeline import (
InventoryAction,
get_representation_context
)
from openpype.hosts.maya.api.lib import (
create_rig_animation_instance,
get_container_members,
)
class RecreateRigAnimationInstance(InventoryAction):
"""Recreate animation publish instance for loaded rigs"""
label = "Recreate rig animation instance"
icon = "wrench"
color = "#888888"
@staticmethod
def is_compatible(container):
return (
container.get("loader") == "ReferenceLoader"
and container.get("name", "").startswith("rig")
)
def process(self, containers):
for container in containers:
# todo: delete an existing entry if it exists or skip creation
namespace = container["namespace"]
representation_id = container["representation"]
context = get_representation_context(representation_id)
nodes = get_container_members(container)
create_rig_animation_instance(nodes, context, namespace)

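The compatibility filter above in isolation, as a runnable sketch with hypothetical container dicts:

def is_compatible(container):
    # mirrors RecreateRigAnimationInstance.is_compatible above
    return (
        container.get("loader") == "ReferenceLoader"
        and container.get("name", "").startswith("rig")
    )

print(is_compatible({"loader": "ReferenceLoader", "name": "rigMain_01"}))  # True
print(is_compatible({"loader": "AbcLoader", "name": "rigMain_01"}))        # False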
View file

@ -4,16 +4,12 @@ import contextlib
from maya import cmds
from openpype.settings import get_project_settings
from openpype.pipeline import legacy_io
from openpype.pipeline.create import (
legacy_create,
get_legacy_creator_by_name,
)
import openpype.hosts.maya.api.plugin
from openpype.hosts.maya.api.lib import (
maintained_selection,
get_container_members,
parent_nodes
parent_nodes,
create_rig_animation_instance
)
@ -114,9 +110,6 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
icon = "code-fork"
color = "orange"
# Name of creator class that will be used to create animation instance
animation_creator_name = "CreateAnimation"
def process_reference(self, context, name, namespace, options):
import maya.cmds as cmds
@ -220,37 +213,10 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
self._lock_camera_transforms(members)
def _post_process_rig(self, name, namespace, context, options):
output = next((node for node in self if
node.endswith("out_SET")), None)
controls = next((node for node in self if
node.endswith("controls_SET")), None)
assert output, "No out_SET in rig, this is a bug."
assert controls, "No controls_SET in rig, this is a bug."
# Find the roots amongst the loaded nodes
roots = cmds.ls(self[:], assemblies=True, long=True)
assert roots, "No root nodes in rig, this is a bug."
asset = legacy_io.Session["AVALON_ASSET"]
dependency = str(context["representation"]["_id"])
self.log.info("Creating subset: {}".format(namespace))
# Create the animation instance
creator_plugin = get_legacy_creator_by_name(
self.animation_creator_name
nodes = self[:]
create_rig_animation_instance(
nodes, context, namespace, log=self.log
)
with maintained_selection():
cmds.select([output, controls] + roots, noExpand=True)
legacy_create(
creator_plugin,
name=namespace,
asset=asset,
options={"useSelection": True},
data={"dependencies": dependency}
)
def _lock_camera_transforms(self, nodes):
cameras = cmds.ls(nodes, type="camera")

View file

@ -280,7 +280,7 @@ class MakeTX(TextureProcessor):
# Do nothing if the source file is already a .tx file.
return TextureResult(
path=source,
file_hash=None, # todo: unknown texture hash?
file_hash=source_hash(source),
colorspace=colorspace,
transfer_mode=COPY
)

View file

@ -0,0 +1,97 @@
# -*- coding: utf-8 -*-
"""Tools for loading looks to vray proxies."""
import os
from collections import defaultdict
import logging
import six
import alembic.Abc
log = logging.getLogger(__name__)
def get_alembic_paths_by_property(filename, attr, verbose=False):
# type: (str, str, bool) -> dict
"""Return attribute value per objects in the Alembic file.
Reads an Alembic archive hierarchy and retrieves the
value from the `attr` properties on the objects.
Args:
filename (str): Full path to Alembic archive to read.
attr (str): Id attribute.
verbose (bool): Whether to verbosely log missing attributes.
Returns:
dict: Mapping of node full path with its id
"""
# Normalize alembic path
filename = os.path.normpath(filename)
filename = filename.replace("\\", "/")
filename = str(filename) # path must be string
try:
archive = alembic.Abc.IArchive(filename)
except RuntimeError:
# invalid alembic file - probably vrmesh
log.warning("{} is not an alembic file".format(filename))
return {}
root = archive.getTop()
iterator = list(root.children)
obj_ids = {}
for obj in iterator:
name = obj.getFullName()
# include children for coming iterations
iterator.extend(obj.children)
props = obj.getProperties()
if props.getNumProperties() == 0:
# Skip those without properties, e.g. '/materials' in a gpuCache
continue
# The custom attribute is under the properties' first container under
# the ".arbGeomParams"
prop = props.getProperty(0) # get base property
_property = None
try:
geo_params = prop.getProperty('.arbGeomParams')
_property = geo_params.getProperty(attr)
except KeyError:
if verbose:
log.debug("Missing attr on: {0}".format(name))
continue
if not _property.isConstant():
log.warning("Id not constant on: {0}".format(name))
# Get first value sample
value = _property.getValue()[0]
obj_ids[name] = value
return obj_ids
def get_alembic_ids_cache(path):
# type: (str) -> dict
"""Build a id to node mapping in Alembic file.
Nodes without IDs are ignored.
Returns:
dict: Mapping of id to nodes in the Alembic.
"""
node_ids = get_alembic_paths_by_property(path, attr="cbId")
id_nodes = defaultdict(list)
for node, _id in six.iteritems(node_ids):
id_nodes[_id].append(node)
return dict(six.iteritems(id_nodes))

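The inversion step get_alembic_ids_cache performs, extracted into a runnable sketch with a hypothetical node-to-id mapping standing in for the Alembic read:

from collections import defaultdict

import six

# hypothetical result of get_alembic_paths_by_property(path, attr="cbId")
node_ids = {
    "/root/geo/body": "57f5a8d2:a1b2",
    "/root/geo/head": "57f5a8d2:a1b2",
    "/root/geo/prop": "57f5a8d2:c3d4",
}

# invert to id -> list of node paths, as get_alembic_ids_cache does
id_nodes = defaultdict(list)
for node, _id in six.iteritems(node_ids):
    id_nodes[_id].append(node)
print(dict(id_nodes))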
View file

@ -9,6 +9,7 @@ from openpype.pipeline import legacy_io
from openpype.client import get_last_version_by_subset_name
from openpype.hosts.maya import api
from . import lib
from .alembic import get_alembic_ids_cache
log = logging.getLogger(__name__)
@ -68,6 +69,11 @@ def get_nodes_by_id(standin):
(dict): Dictionary with node full name/path and id.
"""
path = cmds.getAttr(standin + ".dso")
if path.endswith(".abc"):
# Support alembic files directly
return get_alembic_ids_cache(path)
json_path = None
for f in os.listdir(os.path.dirname(path)):
if f.endswith(".json"):

View file

@ -1,108 +1,20 @@
# -*- coding: utf-8 -*-
"""Tools for loading looks to vray proxies."""
import os
from collections import defaultdict
import logging
import six
import alembic.Abc
from maya import cmds
from openpype.client import get_last_version_by_subset_name
from openpype.pipeline import legacy_io
import openpype.hosts.maya.lib as maya_lib
from . import lib
from .alembic import get_alembic_ids_cache
log = logging.getLogger(__name__)
def get_alembic_paths_by_property(filename, attr, verbose=False):
# type: (str, str, bool) -> dict
"""Return attribute value per objects in the Alembic file.
Reads an Alembic archive hierarchy and retrieves the
value from the `attr` properties on the objects.
Args:
filename (str): Full path to Alembic archive to read.
attr (str): Id attribute.
verbose (bool): Whether to verbosely log missing attributes.
Returns:
dict: Mapping of node full path with its id
"""
# Normalize alembic path
filename = os.path.normpath(filename)
filename = filename.replace("\\", "/")
filename = str(filename) # path must be string
try:
archive = alembic.Abc.IArchive(filename)
except RuntimeError:
# invalid alembic file - probably vrmesh
log.warning("{} is not an alembic file".format(filename))
return {}
root = archive.getTop()
iterator = list(root.children)
obj_ids = {}
for obj in iterator:
name = obj.getFullName()
# include children for coming iterations
iterator.extend(obj.children)
props = obj.getProperties()
if props.getNumProperties() == 0:
# Skip those without properties, e.g. '/materials' in a gpuCache
continue
# THe custom attribute is under the properties' first container under
# the ".arbGeomParams"
prop = props.getProperty(0) # get base property
_property = None
try:
geo_params = prop.getProperty('.arbGeomParams')
_property = geo_params.getProperty(attr)
except KeyError:
if verbose:
log.debug("Missing attr on: {0}".format(name))
continue
if not _property.isConstant():
log.warning("Id not constant on: {0}".format(name))
# Get first value sample
value = _property.getValue()[0]
obj_ids[name] = value
return obj_ids
def get_alembic_ids_cache(path):
# type: (str) -> dict
"""Build a id to node mapping in Alembic file.
Nodes without IDs are ignored.
Returns:
dict: Mapping of id to nodes in the Alembic.
"""
node_ids = get_alembic_paths_by_property(path, attr="cbId")
id_nodes = defaultdict(list)
for node, _id in six.iteritems(node_ids):
id_nodes[_id].append(node)
return dict(six.iteritems(id_nodes))
def assign_vrayproxy_shaders(vrayproxy, assignments):
# type: (str, dict) -> None
"""Assign shaders to content of Vray Proxy.

View file

@ -495,17 +495,17 @@ def get_avalon_knob_data(node, prefix="avalon:", create=True):
data (dict)
"""
data = {}
if AVALON_TAB not in node.knobs():
return data
# check if lists
if not isinstance(prefix, list):
prefix = list([prefix])
data = dict()
prefix = [prefix]
# loop prefix
for p in prefix:
# check if the node is avalon tracked
if AVALON_TAB not in node.knobs():
continue
try:
# check if data available on the node
test = node[AVALON_DATA_GROUP].value()
@ -516,8 +516,7 @@ def get_avalon_knob_data(node, prefix="avalon:", create=True):
if create:
node = set_avalon_knob_data(node)
return get_avalon_knob_data(node)
else:
return {}
return {}
# get data from filtered knobs
data.update({k.replace(p, ''): node[k].value()

View file

@ -2,7 +2,8 @@ from openpype.pipeline.create.creator_plugins import SubsetConvertorPlugin
from openpype.hosts.nuke.api.lib import (
INSTANCE_DATA_KNOB,
get_node_data,
get_avalon_knob_data
get_avalon_knob_data,
AVALON_TAB,
)
from openpype.hosts.nuke.api.plugin import convert_to_valid_instaces
@ -17,13 +18,15 @@ class LegacyConverted(SubsetConvertorPlugin):
legacy_found = False
# search for first available legacy item
for node in nuke.allNodes(recurseGroups=True):
if node.Class() in ["Viewer", "Dot"]:
continue
if get_node_data(node, INSTANCE_DATA_KNOB):
continue
if AVALON_TAB not in node.knobs():
continue
# get data from avalon knob
avalon_knob_data = get_avalon_knob_data(
node, ["avalon:", "ak:"], create=False)

View file

@ -0,0 +1,43 @@
# -*- coding: utf-8 -*-
import pyblish.api
class CollectReviewInfo(pyblish.api.InstancePlugin):
"""Collect data required for review instances.
ExtractReview plugin requires frame start/end and fps on instance data,
which are missing on instances from TrayPublisher.
Warning:
This is a temporary solution to "make it work". Contains removed changes
from https://github.com/ynput/OpenPype/pull/4383 reduced only for
review instances.
"""
label = "Collect Review Info"
order = pyblish.api.CollectorOrder + 0.491
families = ["review"]
hosts = ["traypublisher"]
def process(self, instance):
asset_entity = instance.data.get("assetEntity")
if instance.data.get("frameStart") is not None or not asset_entity:
self.log.debug("Missing required data on instance")
return
asset_data = asset_entity["data"]
# Store collected data for logging
collected_data = {}
for key in (
"fps",
"frameStart",
"frameEnd",
"handleStart",
"handleEnd",
):
if key in instance.data or key not in asset_data:
continue
value = asset_data[key]
collected_data[key] = value
instance.data[key] = value
self.log.debug("Collected data: {}".format(str(collected_data)))

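The key-copy loop above, reduced to a runnable sketch with hypothetical asset data, showing that keys already present on the instance are left untouched:

instance_data = {"family": "review"}  # hypothetical instance, no frame data
asset_data = {
    "fps": 25.0,
    "frameStart": 1001,
    "frameEnd": 1050,
    "handleStart": 0,
    "handleEnd": 0,
}

collected = {}
for key in ("fps", "frameStart", "frameEnd", "handleStart", "handleEnd"):
    if key in instance_data or key not in asset_data:
        continue
    collected[key] = instance_data[key] = asset_data[key]
print(collected)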
View file

@ -2,8 +2,10 @@ import os
import unreal
from openpype.settings import get_project_settings
from openpype.pipeline import Anatomy
from openpype.hosts.unreal.api import pipeline
from openpype.widgets.message_window import Window
queue = None
@ -32,11 +34,20 @@ def start_rendering():
"""
Start the rendering process.
"""
print("Starting rendering...")
unreal.log("Starting rendering...")
# Get selected sequences
assets = unreal.EditorUtilityLibrary.get_selected_assets()
if not assets:
Window(
parent=None,
title="No assets selected",
message="No assets selected. Select a render instance.",
level="warning")
raise RuntimeError(
"No assets selected. You need to select a render instance.")
# instances = pipeline.ls_inst()
instances = [
a for a in assets
@ -66,6 +77,13 @@ def start_rendering():
ar = unreal.AssetRegistryHelpers.get_asset_registry()
data = get_project_settings(project)
config = None
config_path = str(data.get("unreal").get("render_config_path"))
if config_path and unreal.EditorAssetLibrary.does_asset_exist(config_path):
unreal.log("Found saved render configuration")
config = ar.get_asset_by_object_path(config_path).get_asset()
for i in inst_data:
sequence = ar.get_asset_by_object_path(i["sequence"]).get_asset()
@ -81,55 +99,80 @@ def start_rendering():
# Get all the sequences to render. If there are subsequences,
# add them and their frame ranges to the render list. We also
# use the names for the output paths.
for s in sequences:
subscenes = pipeline.get_subsequences(s.get('sequence'))
for seq in sequences:
subscenes = pipeline.get_subsequences(seq.get('sequence'))
if subscenes:
for ss in subscenes:
for sub_seq in subscenes:
sequences.append({
"sequence": ss.get_sequence(),
"output": (f"{s.get('output')}/"
f"{ss.get_sequence().get_name()}"),
"sequence": sub_seq.get_sequence(),
"output": (f"{seq.get('output')}/"
f"{sub_seq.get_sequence().get_name()}"),
"frame_range": (
ss.get_start_frame(), ss.get_end_frame())
sub_seq.get_start_frame(), sub_seq.get_end_frame())
})
else:
# Avoid rendering camera sequences
if "_camera" not in s.get('sequence').get_name():
render_list.append(s)
if "_camera" not in seq.get('sequence').get_name():
render_list.append(seq)
# Create the rendering jobs and add them to the queue.
for r in render_list:
for render_setting in render_list:
job = queue.allocate_new_job(unreal.MoviePipelineExecutorJob)
job.sequence = unreal.SoftObjectPath(i["master_sequence"])
job.map = unreal.SoftObjectPath(i["master_level"])
job.author = "OpenPype"
# If we have a saved configuration, copy it to the job.
if config:
job.get_configuration().copy_from(config)
# User data could be used to pass data to the job, that can be
# read in the job's OnJobFinished callback. We could,
# for instance, pass the AvalonPublishInstance's path to the job.
# job.user_data = ""
output_dir = render_setting.get('output')
shot_name = render_setting.get('sequence').get_name()
settings = job.get_configuration().find_or_add_setting_by_class(
unreal.MoviePipelineOutputSetting)
settings.output_resolution = unreal.IntPoint(1920, 1080)
settings.custom_start_frame = r.get("frame_range")[0]
settings.custom_end_frame = r.get("frame_range")[1]
settings.custom_start_frame = render_setting.get("frame_range")[0]
settings.custom_end_frame = render_setting.get("frame_range")[1]
settings.use_custom_playback_range = True
settings.file_name_format = "{sequence_name}.{frame_number}"
settings.output_directory.path = f"{render_dir}/{r.get('output')}"
renderPass = job.get_configuration().find_or_add_setting_by_class(
unreal.MoviePipelineDeferredPassBase)
renderPass.disable_multisample_effects = True
settings.file_name_format = f"{shot_name}" + ".{frame_number}"
settings.output_directory.path = f"{render_dir}/{output_dir}"
job.get_configuration().find_or_add_setting_by_class(
unreal.MoviePipelineImageSequenceOutput_PNG)
unreal.MoviePipelineDeferredPassBase)
render_format = data.get("unreal").get("render_format", "png")
if render_format == "png":
job.get_configuration().find_or_add_setting_by_class(
unreal.MoviePipelineImageSequenceOutput_PNG)
elif render_format == "exr":
job.get_configuration().find_or_add_setting_by_class(
unreal.MoviePipelineImageSequenceOutput_EXR)
elif render_format == "jpg":
job.get_configuration().find_or_add_setting_by_class(
unreal.MoviePipelineImageSequenceOutput_JPG)
elif render_format == "bmp":
job.get_configuration().find_or_add_setting_by_class(
unreal.MoviePipelineImageSequenceOutput_BMP)
# If there are jobs in the queue, start the rendering process.
if queue.get_jobs():
global executor
executor = unreal.MoviePipelinePIEExecutor()
preroll_frames = data.get("unreal").get("preroll_frames", 0)
settings = unreal.MoviePipelinePIEExecutorSettings()
settings.set_editor_property(
"initial_delay_frame_count", preroll_frames)
executor.on_executor_finished_delegate.add_callable_unique(
_queue_finish_callback)
executor.on_individual_job_finished_delegate.add_callable_unique(

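The render_format dispatch above is a four-way if/elif over the same call. A table-driven alternative as a sketch, with the unreal output classes replaced by plain strings so it runs anywhere; inside the editor the values would be the unreal.MoviePipelineImageSequenceOutput_* classes used in the diff:

output_classes = {
    "png": "MoviePipelineImageSequenceOutput_PNG",
    "exr": "MoviePipelineImageSequenceOutput_EXR",
    "jpg": "MoviePipelineImageSequenceOutput_JPG",
    "bmp": "MoviePipelineImageSequenceOutput_BMP",
}
render_format = "exr"  # hypothetical value of the "render_format" setting
print(output_classes.get(render_format, output_classes["png"]))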
View file

@ -1,14 +1,22 @@
# -*- coding: utf-8 -*-
from pathlib import Path
import unreal
from openpype.pipeline import CreatorError
from openpype.hosts.unreal.api.pipeline import (
get_subsequences
UNREAL_VERSION,
create_folder,
get_subsequences,
)
from openpype.hosts.unreal.api.plugin import (
UnrealAssetCreator
)
from openpype.lib import UILabelDef
from openpype.lib import (
UILabelDef,
UISeparatorDef,
BoolDef,
NumberDef
)
class CreateRender(UnrealAssetCreator):
@ -19,7 +27,92 @@ class CreateRender(UnrealAssetCreator):
family = "render"
icon = "eye"
def create(self, subset_name, instance_data, pre_create_data):
def create_instance(
self, instance_data, subset_name, pre_create_data,
selected_asset_path, master_seq, master_lvl, seq_data
):
instance_data["members"] = [selected_asset_path]
instance_data["sequence"] = selected_asset_path
instance_data["master_sequence"] = master_seq
instance_data["master_level"] = master_lvl
instance_data["output"] = seq_data.get('output')
instance_data["frameStart"] = seq_data.get('frame_range')[0]
instance_data["frameEnd"] = seq_data.get('frame_range')[1]
super(CreateRender, self).create(
subset_name,
instance_data,
pre_create_data)
def create_with_new_sequence(
self, subset_name, instance_data, pre_create_data
):
# If the option to create a new level sequence is selected,
# create a new level sequence and a master level.
root = f"/Game/OpenPype/Sequences"
# Create a new folder for the sequence in root
sequence_dir_name = create_folder(root, subset_name)
sequence_dir = f"{root}/{sequence_dir_name}"
unreal.log_warning(f"sequence_dir: {sequence_dir}")
# Create the level sequence
asset_tools = unreal.AssetToolsHelpers.get_asset_tools()
seq = asset_tools.create_asset(
asset_name=subset_name,
package_path=sequence_dir,
asset_class=unreal.LevelSequence,
factory=unreal.LevelSequenceFactoryNew())
seq.set_playback_start(pre_create_data.get("start_frame"))
seq.set_playback_end(pre_create_data.get("end_frame"))
pre_create_data["members"] = [seq.get_path_name()]
unreal.EditorAssetLibrary.save_asset(seq.get_path_name())
# Create the master level
if UNREAL_VERSION.major >= 5:
curr_level = unreal.LevelEditorSubsystem().get_current_level()
else:
world = unreal.EditorLevelLibrary.get_editor_world()
levels = unreal.EditorLevelUtils.get_levels(world)
curr_level = levels[0] if len(levels) else None
if not curr_level:
raise RuntimeError("No level loaded.")
curr_level_path = curr_level.get_outer().get_path_name()
# If the level path does not start with "/Game/", the current
# level is a temporary, unsaved level.
if curr_level_path.startswith("/Game/"):
if UNREAL_VERSION.major >= 5:
unreal.LevelEditorSubsystem().save_current_level()
else:
unreal.EditorLevelLibrary.save_current_level()
ml_path = f"{sequence_dir}/{subset_name}_MasterLevel"
if UNREAL_VERSION.major >= 5:
unreal.LevelEditorSubsystem().new_level(ml_path)
else:
unreal.EditorLevelLibrary.new_level(ml_path)
seq_data = {
"sequence": seq,
"output": f"{seq.get_name()}",
"frame_range": (
seq.get_playback_start(),
seq.get_playback_end())}
self.create_instance(
instance_data, subset_name, pre_create_data,
seq.get_path_name(), seq.get_path_name(), ml_path, seq_data)
def create_from_existing_sequence(
self, subset_name, instance_data, pre_create_data
):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
sel_objects = unreal.EditorUtilityLibrary.get_selected_assets()
@ -27,8 +120,8 @@ class CreateRender(UnrealAssetCreator):
a.get_path_name() for a in sel_objects
if a.get_class().get_name() == "LevelSequence"]
if not selection:
raise CreatorError("Please select at least one Level Sequence.")
if len(selection) == 0:
raise RuntimeError("Please select at least one Level Sequence.")
seq_data = None
@ -42,28 +135,38 @@ class CreateRender(UnrealAssetCreator):
f"Skipping {selected_asset.get_name()}. It isn't a Level "
"Sequence.")
# The asset name is the third element of the path which
# contains the map.
# To take the asset name, we remove from the path the prefix
# "/Game/OpenPype/" and then we split the path by "/".
sel_path = selected_asset_path
asset_name = sel_path.replace("/Game/OpenPype/", "").split("/")[0]
if pre_create_data.get("use_hierarchy"):
# The asset name is the third element of the path which
# contains the map.
# To take the asset name, we remove from the path the prefix
# "/Game/OpenPype/" and then we split the path by "/".
sel_path = selected_asset_path
asset_name = sel_path.replace(
"/Game/OpenPype/", "").split("/")[0]
search_path = f"/Game/OpenPype/{asset_name}"
else:
search_path = Path(selected_asset_path).parent.as_posix()
# Get the master sequence and the master level.
# There should be only one sequence and one level in the directory.
ar_filter = unreal.ARFilter(
class_names=["LevelSequence"],
package_paths=[f"/Game/OpenPype/{asset_name}"],
recursive_paths=False)
sequences = ar.get_assets(ar_filter)
master_seq = sequences[0].get_asset().get_path_name()
master_seq_obj = sequences[0].get_asset()
ar_filter = unreal.ARFilter(
class_names=["World"],
package_paths=[f"/Game/OpenPype/{asset_name}"],
recursive_paths=False)
levels = ar.get_assets(ar_filter)
master_lvl = levels[0].get_asset().get_path_name()
try:
ar_filter = unreal.ARFilter(
class_names=["LevelSequence"],
package_paths=[search_path],
recursive_paths=False)
sequences = ar.get_assets(ar_filter)
master_seq = sequences[0].get_asset().get_path_name()
master_seq_obj = sequences[0].get_asset()
ar_filter = unreal.ARFilter(
class_names=["World"],
package_paths=[search_path],
recursive_paths=False)
levels = ar.get_assets(ar_filter)
master_lvl = levels[0].get_asset().get_path_name()
except IndexError:
raise RuntimeError(
f"Could not find the hierarchy for the selected sequence.")
# If the selected asset is the master sequence, we get its data
# and then we create the instance for the master sequence.
@ -79,7 +182,8 @@ class CreateRender(UnrealAssetCreator):
master_seq_obj.get_playback_start(),
master_seq_obj.get_playback_end())}
if selected_asset_path == master_seq:
if (selected_asset_path == master_seq or
pre_create_data.get("use_hierarchy")):
seq_data = master_seq_data
else:
seq_data_list = [master_seq_data]
@ -119,20 +223,54 @@ class CreateRender(UnrealAssetCreator):
"sub-sequence of the master sequence.")
continue
instance_data["members"] = [selected_asset_path]
instance_data["sequence"] = selected_asset_path
instance_data["master_sequence"] = master_seq
instance_data["master_level"] = master_lvl
instance_data["output"] = seq_data.get('output')
instance_data["frameStart"] = seq_data.get('frame_range')[0]
instance_data["frameEnd"] = seq_data.get('frame_range')[1]
self.create_instance(
instance_data, subset_name, pre_create_data,
selected_asset_path, master_seq, master_lvl, seq_data)
super(CreateRender, self).create(
subset_name,
instance_data,
pre_create_data)
def create(self, subset_name, instance_data, pre_create_data):
if pre_create_data.get("create_seq"):
self.create_with_new_sequence(
subset_name, instance_data, pre_create_data)
else:
self.create_from_existing_sequence(
subset_name, instance_data, pre_create_data)
def get_pre_create_attr_defs(self):
return [
UILabelDef("Select the sequence to render.")
UILabelDef(
"Select a Level Sequence to render or create a new one."
),
BoolDef(
"create_seq",
label="Create a new Level Sequence",
default=False
),
UILabelDef(
"WARNING: If you create a new Level Sequence, the current\n"
"level will be saved and a new Master Level will be created."
),
NumberDef(
"start_frame",
label="Start Frame",
default=0,
minimum=-999999,
maximum=999999
),
NumberDef(
"end_frame",
label="Start Frame",
default=150,
minimum=-999999,
maximum=999999
),
UISeparatorDef(),
UILabelDef(
"The following settings are valid only if you are not\n"
"creating a new sequence."
),
BoolDef(
"use_hierarchy",
label="Use Hierarchy",
default=False
),
]

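The create entry point above is a two-way dispatch on the create_seq pre-create option. A runnable sketch of that dispatch, with hypothetical option values matching the attribute definitions:

def create(subset_name, instance_data, pre_create_data):
    # mirrors the branch in CreateRender.create above
    if pre_create_data.get("create_seq"):
        return "create_with_new_sequence"
    return "create_from_existing_sequence"

pre_create_data = {
    "create_seq": False,     # render an existing Level Sequence
    "start_frame": 0,        # only used when create_seq is True
    "end_frame": 150,
    "use_hierarchy": False,  # only read when create_seq is False
}
print(create("renderMain", {}, pre_create_data))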
View file

@ -0,0 +1,42 @@
import clique
import pyblish.api
class ValidateSequenceFrames(pyblish.api.InstancePlugin):
"""Ensure the sequence of frames is complete
The files found in the folder are checked against the frameStart and
frameEnd of the instance. If the first or last file is not
corresponding with the first or last frame it is flagged as invalid.
"""
order = pyblish.api.ValidatorOrder
label = "Validate Sequence Frames"
families = ["render"]
hosts = ["unreal"]
optional = True
def process(self, instance):
representations = instance.data.get("representations")
for repre in representations:
data = instance.data.get("assetEntity", {}).get("data", {})
patterns = [clique.PATTERNS["frames"]]
collections, remainder = clique.assemble(
repr["files"], minimum_items=1, patterns=patterns)
assert not remainder, "Must not have remainder"
assert len(collections) == 1, "Must detect single collection"
collection = collections[0]
frames = list(collection.indexes)
current_range = (frames[0], frames[-1])
required_range = (data["frameStart"],
data["frameEnd"])
if current_range != required_range:
raise ValueError(f"Invalid frame range: {current_range} - "
f"expected: {required_range}")
missing = collection.holes().indexes
assert not missing, "Missing frames: %s" % (missing,)

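The clique calls this validator relies on, shown on a hypothetical three-file sequence with one missing frame; clique.PATTERNS["frames"] and Collection.holes() are existing clique APIs:

import clique

files = ["shot.1001.png", "shot.1002.png", "shot.1004.png"]  # hypothetical
collections, remainder = clique.assemble(
    files, minimum_items=1, patterns=[clique.PATTERNS["frames"]])

collection = collections[0]
frames = list(collection.indexes)
print(frames[0], frames[-1])             # 1001 1004
print(list(collection.holes().indexes))  # [1003] -> the missing frame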
View file

@ -49,7 +49,12 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin):
collection = collections[0]
frames = list(collection.indexes)
if instance.data.get("slate"):
# Slate is not part of the frame range
frames = frames[1:]
current_range = (frames[0], frames[-1])
required_range = (instance.data["frameStart"],
instance.data["frameEnd"])

View file

@ -554,7 +554,7 @@
"publish_mip_map": true
},
"CreateAnimation": {
"enabled": true,
"enabled": false,
"write_color_sets": false,
"write_face_sets": false,
"include_parent_hierarchy": false,
@ -1459,7 +1459,7 @@
]
},
"reference_loader": {
"namespace": "{asset_name}_{subset}_##",
"namespace": "{asset_name}_{subset}_##_",
"group_name": "_GRP"
}
},

View file

@ -11,6 +11,9 @@
},
"level_sequences_for_layouts": false,
"delete_unmatched_assets": false,
"render_config_path": "",
"preroll_frames": 0,
"render_format": "png",
"project_setup": {
"dev_mode": true
}

View file

@ -32,6 +32,28 @@
"key": "delete_unmatched_assets",
"label": "Delete assets that are not matched"
},
{
"type": "text",
"key": "render_config_path",
"label": "Render Config Path"
},
{
"type": "number",
"key": "preroll_frames",
"label": "Pre-roll frames"
},
{
"key": "render_format",
"label": "Render format",
"type": "enum",
"multiselection": false,
"enum_items": [
{"png": "PNG"},
{"exr": "EXR"},
{"jpg": "JPG"},
{"bmp": "BMP"}
]
},
{
"type": "dict",
"collapsible": true,

View file

@ -211,6 +211,10 @@ class AssetsDialog(QtWidgets.QDialog):
layout.addWidget(asset_view, 1)
layout.addLayout(btns_layout, 0)
controller.event_system.add_callback(
"controller.reset.finished", self._on_controller_reset
)
asset_view.double_clicked.connect(self._on_ok_clicked)
filter_input.textChanged.connect(self._on_filter_change)
ok_btn.clicked.connect(self._on_ok_clicked)
@ -245,6 +249,10 @@ class AssetsDialog(QtWidgets.QDialog):
new_pos.setY(new_pos.y() - int(self.height() / 2))
self.move(new_pos)
def _on_controller_reset(self):
# Change reset enabled so model is reset on show event
self._soft_reset_enabled = True
def showEvent(self, event):
"""Refresh asset model on show."""
super(AssetsDialog, self).showEvent(event)
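The callback itself only flips a flag; the actual model refresh is deferred to the next `showEvent`. A minimal sketch of that pattern, assuming a hypothetical `_refresh_model()` helper (the real method name may differ):

```python
def showEvent(self, event):
    super(AssetsDialog, self).showEvent(event)
    if self._soft_reset_enabled:
        # Reset lazily so a hidden dialog never refreshes for nothing.
        self._soft_reset_enabled = False
        self._refresh_model()  # assumed helper, not part of the diff above
```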

View file

@ -284,6 +284,9 @@ class PublisherWindow(QtWidgets.QDialog):
controller.event_system.add_callback(
"publish.has_validated.changed", self._on_publish_validated_change
)
controller.event_system.add_callback(
"publish.finished.changed", self._on_publish_finished_change
)
controller.event_system.add_callback(
"publish.process.stopped", self._on_publish_stop
)
@ -400,6 +403,7 @@ class PublisherWindow(QtWidgets.QDialog):
# TODO capture changes and ask user if wants to save changes on close
if not self._controller.host_context_has_changed:
self._save_changes(False)
self._comment_input.setText("") # clear comment
self._reset_on_show = True
self._controller.clear_thumbnail_temp_dir_path()
super(PublisherWindow, self).closeEvent(event)
@ -777,6 +781,11 @@ class PublisherWindow(QtWidgets.QDialog):
if event["value"]:
self._validate_btn.setEnabled(False)
def _on_publish_finished_change(self, event):
if event["value"]:
# Successful publish, remove comment from UI
self._comment_input.setText("")
def _on_publish_stop(self):
self._set_publish_overlay_visibility(False)
self._reset_btn.setEnabled(True)
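For context, the callbacks registered above follow the controller's event-system contract: each event carries a mapping accessed as `event["value"]`. A hedged sketch of the emitting side (the `emit` signature is assumed here, not shown in this diff):

```python
# Hypothetical controller side: publishing finished successfully.
controller.event_system.emit(
    "publish.finished.changed", {"value": True}, "publish.controller")
# -> _on_publish_finished_change fires and clears the comment input.
```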

View file

@ -199,90 +199,103 @@ class InventoryModel(TreeModel):
"""Refresh the model"""
host = registered_host()
if not items: # for debugging or testing, injecting items from outside
# for debugging or testing, injecting items from outside
if items is None:
if isinstance(host, ILoadHost):
items = host.get_containers()
else:
elif hasattr(host, "ls"):
items = host.ls()
else:
items = []
self.clear()
if self._hierarchy_view and selected:
if not hasattr(host.pipeline, "update_hierarchy"):
# If host doesn't support hierarchical containers, then
# cherry-pick only.
self.add_items((item for item in items
if item["objectName"] in selected))
return
# Update hierarchy info for all containers
items_by_name = {item["objectName"]: item
for item in host.pipeline.update_hierarchy(items)}
selected_items = set()
def walk_children(names):
"""Select containers and extend to chlid containers"""
for name in [n for n in names if n not in selected_items]:
selected_items.add(name)
item = items_by_name[name]
yield item
for child in walk_children(item["children"]):
yield child
items = list(walk_children(selected)) # Cherry-picked and extended
# Cut unselected upstream containers
for item in items:
if item.get("parent") not in selected_items:
# Parent not in selection, this is root item.
item["parent"] = None
parents = [self._root_item]
# The length of the `items` array is the maximum depth the
# hierarchy could have.
# Take this as the easiest way to prevent looping forever.
maximum_loop = len(items)
count = 0
while items:
if count > maximum_loop:
self.log.warning("Maximum loop count reached, possible "
"missing parent node.")
break
_parents = list()
for parent in parents:
_unparented = list()
def _children():
"""Child item provider"""
for item in items:
if item.get("parent") == parent.get("objectName"):
# (NOTE)
# Since `self._root_item` has no "objectName"
# entry, it will be paired with the root item if
# the value of the "parent" key is None or the
# key is missing.
yield item
else:
# Not current parent's child, try next
_unparented.append(item)
self.add_items(_children(), parent)
items[:] = _unparented
# Parents of next level
for group_node in parent.children():
_parents += group_node.children()
parents[:] = _parents
count += 1
else:
if not selected or not self._hierarchy_view:
self.add_items(items)
return
if (
not hasattr(host, "pipeline")
or not hasattr(host.pipeline, "update_hierarchy")
):
# If host doesn't support hierarchical containers, then
# cherry-pick only.
self.add_items((
item
for item in items
if item["objectName"] in selected
))
return
# TODO find out what this part does. Function 'update_hierarchy' is
# available only in 'blender' at this moment.
# Update hierarchy info for all containers
items_by_name = {
item["objectName"]: item
for item in host.pipeline.update_hierarchy(items)
}
selected_items = set()
def walk_children(names):
"""Select containers and extend to chlid containers"""
for name in [n for n in names if n not in selected_items]:
selected_items.add(name)
item = items_by_name[name]
yield item
for child in walk_children(item["children"]):
yield child
items = list(walk_children(selected)) # Cherry-picked and extended
# Cut unselected upstream containers
for item in items:
if item.get("parent") not in selected_items:
# Parent not in selection, this is root item.
item["parent"] = None
parents = [self._root_item]
# The length of the `items` array is the maximum depth the
# hierarchy could have.
# Take this as the easiest way to prevent looping forever.
maximum_loop = len(items)
count = 0
while items:
if count > maximum_loop:
self.log.warning("Maximum loop count reached, possible "
"missing parent node.")
break
_parents = list()
for parent in parents:
_unparented = list()
def _children():
"""Child item provider"""
for item in items:
if item.get("parent") == parent.get("objectName"):
# (NOTE)
# Since `self._root_item` has no "objectName"
# entry, it will be paired with the root item if
# the value of the "parent" key is None or the
# key is missing.
yield item
else:
# Not current parent's child, try next
_unparented.append(item)
self.add_items(_children(), parent)
items[:] = _unparented
# Parents of next level
for group_node in parent.children():
_parents += group_node.children()
parents[:] = _parents
count += 1
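The cherry-pick-and-extend traversal above is easier to follow in isolation; a self-contained toy rendering with hypothetical container data:

```python
items_by_name = {
    "rigMain": {"objectName": "rigMain", "children": ["geoMain"]},
    "geoMain": {"objectName": "geoMain", "children": []},
    "lightMain": {"objectName": "lightMain", "children": []},
}
selected_items = set()

def walk_children(names):
    """Yield selected containers, extended to all child containers."""
    for name in [n for n in names if n not in selected_items]:
        selected_items.add(name)
        item = items_by_name[name]
        yield item
        for child in walk_children(item["children"]):
            yield child

picked = [item["objectName"] for item in walk_children(["rigMain"])]
# picked == ["rigMain", "geoMain"]; "lightMain" was never selected
```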
def add_items(self, items, parent=None):
"""Add the items to the model.

View file

@ -107,8 +107,8 @@ class SceneInventoryWindow(QtWidgets.QDialog):
view.hierarchy_view_changed.connect(
self._on_hierarchy_view_change
)
view.data_changed.connect(self.refresh)
refresh_button.clicked.connect(self.refresh)
view.data_changed.connect(self._on_refresh_request)
refresh_button.clicked.connect(self._on_refresh_request)
update_all_button.clicked.connect(self._on_update_all)
self._update_all_button = update_all_button
@ -139,6 +139,11 @@ class SceneInventoryWindow(QtWidgets.QDialog):
"""
def _on_refresh_request(self):
"""Signal callback to trigger 'refresh' without any arguments."""
self.refresh()
def refresh(self, items=None):
with preserve_expanded_rows(
tree_view=self._view,
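The new wrapper is not cosmetic: `refresh` takes an optional `items` argument, and Qt passes a signal's payload on to the connected slot. A minimal sketch of the failure mode the change avoids (`window` stands in for the SceneInventoryWindow instance):

```python
# Direct connection: QPushButton.clicked emits a 'checked' bool, so this
# would call refresh(items=False) and skip the normal host query.
refresh_button.clicked.connect(window.refresh)

# Wrapper: the signal argument is swallowed; refresh() runs with items=None.
refresh_button.clicked.connect(window._on_refresh_request)
```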

View file

@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
__version__ = "3.15.4"
__version__ = "3.15.5-nightly.2"

View file

@ -180,5 +180,23 @@ class TestValidateSequenceFrames(BaseTest):
plugin.process(instance)
assert ("Missing frames: [1002]" in str(excinfo.value))
def test_validate_sequence_frames_slate(self, instance, plugin):
representations = [
{
"ext": "exr",
"files": [
"Main_beauty.1000.exr",
"Main_beauty.1001.exr",
"Main_beauty.1002.exr",
"Main_beauty.1003.exr"
]
}
]
instance.data["slate"] = True
instance.data["representations"] = representations
instance.data["frameEnd"] = 1003
plugin.process(instance)
test_case = TestValidateSequenceFrames()

View file

@ -238,12 +238,12 @@ For resolution and frame range, use **OpenPype → Set Frame Range** and
Creating and publishing rigs with OpenPype follows similar workflow as with
other data types. Create your rig and mark parts of your hierarchy in sets to
help OpenPype validators and extractors to check it and publish it.
help OpenPype validators and extractors to check and publish it.
### Preparing rig for publish
When creating rigs, it is recommended (and it is in fact enforced by validators)
to separate bones or driving objects, their controllers and geometry so they are
to separate bones or driven objects, their controllers and geometry so they are
easily managed. Currently OpenPype doesn't allow publishing a model at the same time as
its rig, so for demonstration purposes I'll first create a simple model for a robotic
arm, made out of simple boxes, and publish it.
@ -252,41 +252,48 @@ arm, just made out of simple boxes and I'll publish it.
For more information about publishing models, see [Publishing models](artist_hosts_maya.md#publishing-models).
Now lets start with empty scene. Load your model - **OpenPype → Load...**, right
Now let's start with an empty scene. Load your model - **OpenPype → Load...**, right
click on it and select **Reference (abc)**.
I've created few bones and their controllers in two separate
groups - `rig_GRP` and `controls_GRP`. Naming is not important - just adhere to
your naming conventions.
I've created a few bones in `rig_GRP`, their controllers in `controls_GRP` and
placed the rig's output geometry in `geometry_GRP`. Naming of the groups is not important - just adhere to
your naming conventions. Then I parented everything into a single top group named `arm_rig`.
Then I've put everything into `arm_rig` group.
When you've prepared your hierarchy, it's time to create *Rig instance* in OpenPype.
Select your whole rig hierarchy and go **OpenPype → Create...**. Select **Rig**.
Set is created in your scene to mark rig parts for export. Notice that it has
two subsets - `controls_SET` and `out_SET`. Put your controls into `controls_SET`
With the prepared hierarchy it is time to create a *Rig instance* in OpenPype.
Select the top group of your rig and go to **OpenPype → Create...**. Select **Rig**.
A publish set for your rig is created in your scene to mark rig parts for export.
Notice that it has two subsets - `controls_SET` and `out_SET`. Put your controls into `controls_SET`
and geometry to `out_SET`. You should end up with something like this:
![Maya - Rig Hierarchy Example](assets/maya-rig_hierarchy_example.jpg)
:::note controls_SET and out_SET contents
It is totally allowed to put the `geometry_GRP` in the `out_SET` as opposed to
the individual meshes - it's even **recommended**. However, the `controls_SET`
requires the individual controls that the artist is supposed to animate
and manipulate, so the publish validators can accurately check the rig's
controls.
:::
### Publishing rigs
Publishing rig is done in same way as publishing everything else. Save your scene
and go **OpenPype → Publish**. When you run validation you'll mostly run at first into
few issues. Although number of them will seem to be intimidating at first, you'll
find out they are mostly minor things easily fixed.
Publishing rigs is done in the same way as publishing everything else. Save your scene
and go **OpenPype → Publish**. When you run validation you'll most likely run into
a few issues at first. Although their number may seem intimidating, you
will find out they are mostly minor things that are easily fixed and exist to keep
your rig consistent and safe for the artist to use.
* **Non Duplicate Instance Members (ID)** - This will most likely fail because when
- **Non Duplicate Instance Members (ID)** - This will most likely fail because when
creating rigs, we usually duplicate a few parts of them to reuse. But duplication
also duplicates the ID of the original object, and OpenPype needs every object to have
a unique ID. This is easily fixed by the **Repair** action next to the validator name: click
on the little up arrow on the right side of the validator name and select **Repair** from the menu.
* **Joints Hidden** - This is enforcing joints (bones) to be hidden for user as
- **Joints Hidden** - This enforces that joints (bones) are hidden from the user, as
the animator usually doesn't need to see them and they clutter the viewports. A
well-behaved rig should have them hidden. The **Repair** action will help here too.
* **Rig Controllers** will check if there are no transforms on unlocked attributes
- **Rig Controllers** will check that there are no transforms on unlocked attributes
of controllers. This is needed because the animator should have an easy way to reset the rig
to its default position. It also checks that those attributes don't have any
incoming connections from other parts of the scene to ensure that the published rig doesn't
@ -297,6 +304,19 @@ have any missing dependencies.
You can load a rig with [Loader](artist_tools_loader). Go **OpenPype → Load...**,
select your rig, right click on it and **Reference** it.
### Animation instances
Whenever you load a rig, an animation publish instance is automatically created
for it. This means that if you load a rig you don't need to create a pointcache
instance yourself to publish the geometry. This is all cleanly prepared for you
when loading a published rig.
:::tip Missing animation instance for your loaded rig?
Did you accidentally delete the animation instance for a loaded rig? You can
recreate it using the [**Recreate rig animation instance**](artist_hosts_maya.md#recreate-rig-animation-instance)
inventory action.
:::
## Point caches
OpenPype uses the Alembic format for point caches. The workflow is very similar to
other data types.
@ -646,3 +666,15 @@ Select 1 container of type `animation` or `pointcache`, then 1+ container of any
The action searches the selected containers for 1 animation container of type `animation` or `pointcache`. This animation container will be connected to the rest of the selected containers. Matching geometries between containers is done by comparing the attribute `cbId`.
The connection between geometries is done with a live blendshape.
### Recreate rig animation instance
This action can regenerate an animation instance for a loaded rig, for example
when it was accidentally deleted by the user.
![Maya - Inventory Action Recreate Rig Animation Instance](assets/maya-inventory_action_recreate_animation_instance.png)
#### Usage
Select 1 or more containers of type `rig` for which you want to recreate the
animation instance.

Binary file not shown.
