Mirror of https://github.com/ynput/ayon-core.git, synced 2025-12-26 22:02:15 +01:00

Commit c4612ed686: Merge branch 'develop' of github.com:pypeclub/OpenPype into feature/OP-2765_AE-to-new-publisher

43 changed files with 713 additions and 1876 deletions

CHANGELOG.md (44)
@ -1,6 +1,6 @@
# Changelog

## [3.9.0-nightly.5](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.9.0-nightly.6](https://github.com/pypeclub/OpenPype/tree/HEAD)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.8.2...HEAD)

@ -12,56 +12,56 @@

- Documentation: fixed broken links [\#2799](https://github.com/pypeclub/OpenPype/pull/2799)
- Documentation: broken link fix [\#2785](https://github.com/pypeclub/OpenPype/pull/2785)
- Documentation: link fixes [\#2772](https://github.com/pypeclub/OpenPype/pull/2772)
- Update docusaurus to latest version [\#2760](https://github.com/pypeclub/OpenPype/pull/2760)
- Various testing updates [\#2726](https://github.com/pypeclub/OpenPype/pull/2726)

**🚀 Enhancements**

- Ftrack: Can sync fps as string [\#2836](https://github.com/pypeclub/OpenPype/pull/2836)
- General: Color dialog UI fixes [\#2817](https://github.com/pypeclub/OpenPype/pull/2817)
- Nuke: adding Reformat to baking mov plugin [\#2811](https://github.com/pypeclub/OpenPype/pull/2811)
- Manager: Update all to latest button [\#2805](https://github.com/pypeclub/OpenPype/pull/2805)
- General: Set context environments for non host applications [\#2803](https://github.com/pypeclub/OpenPype/pull/2803)
- Houdini: Remove duplicate ValidateOutputNode plug-in [\#2780](https://github.com/pypeclub/OpenPype/pull/2780)
- Tray publisher: New Tray Publisher host \(beta\) [\#2778](https://github.com/pypeclub/OpenPype/pull/2778)
- Slack: Added regex for filtering on subset names [\#2775](https://github.com/pypeclub/OpenPype/pull/2775)
- Houdini: Implement Reset Frame Range [\#2770](https://github.com/pypeclub/OpenPype/pull/2770)
- Pyblish Pype: Remove redundant new line in installed fonts printing [\#2758](https://github.com/pypeclub/OpenPype/pull/2758)
- Flame: use Shot Name on segment for asset name [\#2751](https://github.com/pypeclub/OpenPype/pull/2751)
- Flame: adding validator source clip [\#2746](https://github.com/pypeclub/OpenPype/pull/2746)
- Ftrack: Disable ftrack module by default [\#2732](https://github.com/pypeclub/OpenPype/pull/2732)
- Houdini: Move Houdini Save Current File to beginning of ExtractorOrder [\#2747](https://github.com/pypeclub/OpenPype/pull/2747)
- RoyalRender: Minor enhancements [\#2700](https://github.com/pypeclub/OpenPype/pull/2700)

**🐛 Bug fixes**

- Maya: Stop creation of reviews for Cryptomattes [\#2832](https://github.com/pypeclub/OpenPype/pull/2832)
- Deadline: Remove recreated event [\#2828](https://github.com/pypeclub/OpenPype/pull/2828)
- Deadline: Added missing events folder [\#2827](https://github.com/pypeclub/OpenPype/pull/2827)
- Settings: Missing document with OP versions may break start of OpenPype [\#2825](https://github.com/pypeclub/OpenPype/pull/2825)
- Deadline: more detailed temp file name for environment json [\#2824](https://github.com/pypeclub/OpenPype/pull/2824)
- General: Host name was formed from obsolete code [\#2821](https://github.com/pypeclub/OpenPype/pull/2821)
- Settings UI: Fix "Apply from" action [\#2820](https://github.com/pypeclub/OpenPype/pull/2820)
- Ftrack: Job killer with missing user [\#2819](https://github.com/pypeclub/OpenPype/pull/2819)
- StandalonePublisher: use dynamic groups in subset names [\#2816](https://github.com/pypeclub/OpenPype/pull/2816)
- Settings UI: Search case sensitivity [\#2810](https://github.com/pypeclub/OpenPype/pull/2810)
- Flame Babypublisher optimalization [\#2806](https://github.com/pypeclub/OpenPype/pull/2806)
- resolve: fixing fusion module loading [\#2802](https://github.com/pypeclub/OpenPype/pull/2802)
- Ftrack: Unset task ids from asset versions before tasks are removed [\#2800](https://github.com/pypeclub/OpenPype/pull/2800)
- Slack: fail gracefully if slack exception [\#2798](https://github.com/pypeclub/OpenPype/pull/2798)
- Flame: Fix version string in default settings [\#2783](https://github.com/pypeclub/OpenPype/pull/2783)
- After Effects: Fix typo in name `afftereffects` -\> `aftereffects` [\#2768](https://github.com/pypeclub/OpenPype/pull/2768)
- Avoid renaming udim indexes [\#2765](https://github.com/pypeclub/OpenPype/pull/2765)
- Houdini: Fix open last workfile [\#2767](https://github.com/pypeclub/OpenPype/pull/2767)
- Maya: Fix `unique\_namespace` when in an namespace that is empty [\#2759](https://github.com/pypeclub/OpenPype/pull/2759)
- Loader UI: Fix right click in representation widget [\#2757](https://github.com/pypeclub/OpenPype/pull/2757)
- Aftereffects 2022 and Deadline [\#2748](https://github.com/pypeclub/OpenPype/pull/2748)
- Flame: bunch of bugs [\#2745](https://github.com/pypeclub/OpenPype/pull/2745)
- Maya: Save current scene on workfile publish [\#2744](https://github.com/pypeclub/OpenPype/pull/2744)
- Version Up: Preserve parts of filename after version number \(like subversion\) on version\_up [\#2741](https://github.com/pypeclub/OpenPype/pull/2741)
- Maya: Remove some unused code [\#2709](https://github.com/pypeclub/OpenPype/pull/2709)
- Multiple hosts: unify menu style across hosts [\#2693](https://github.com/pypeclub/OpenPype/pull/2693)

**Merged pull requests:**

- General: Move change context functions [\#2839](https://github.com/pypeclub/OpenPype/pull/2839)
- Tools: Don't use avalon tools code [\#2829](https://github.com/pypeclub/OpenPype/pull/2829)
- Move Unreal Implementation to OpenPype [\#2823](https://github.com/pypeclub/OpenPype/pull/2823)
- Ftrack: Job killer with missing user [\#2819](https://github.com/pypeclub/OpenPype/pull/2819)
- Ftrack: Unset task ids from asset versions before tasks are removed [\#2800](https://github.com/pypeclub/OpenPype/pull/2800)
- Slack: fail gracefully if slack exception [\#2798](https://github.com/pypeclub/OpenPype/pull/2798)
- Nuke: Use AVALON\_APP to get value for "app" key [\#2818](https://github.com/pypeclub/OpenPype/pull/2818)
- Ftrack: Moved module one hierarchy level higher [\#2792](https://github.com/pypeclub/OpenPype/pull/2792)
- SyncServer: Moved module one hierarchy level higher [\#2791](https://github.com/pypeclub/OpenPype/pull/2791)
- Royal render: Move module one hierarchy level higher [\#2790](https://github.com/pypeclub/OpenPype/pull/2790)
- Deadline: Move module one hierarchy level higher [\#2789](https://github.com/pypeclub/OpenPype/pull/2789)
- Houdini: Remove duplicate ValidateOutputNode plug-in [\#2780](https://github.com/pypeclub/OpenPype/pull/2780)
- Slack: Added regex for filtering on subset names [\#2775](https://github.com/pypeclub/OpenPype/pull/2775)
- Houdini: Fix open last workfile [\#2767](https://github.com/pypeclub/OpenPype/pull/2767)
- General: Extract template formatting from anatomy [\#2766](https://github.com/pypeclub/OpenPype/pull/2766)
- Harmony: Rendering in Deadline didn't work in other machines than submitter [\#2754](https://github.com/pypeclub/OpenPype/pull/2754)
- Houdini: Move Houdini Save Current File to beginning of ExtractorOrder [\#2747](https://github.com/pypeclub/OpenPype/pull/2747)
- Maya: set Deadline job/batch name to original source workfile name instead of published workfile [\#2733](https://github.com/pypeclub/OpenPype/pull/2733)

## [3.8.2](https://github.com/pypeclub/OpenPype/tree/3.8.2) (2022-02-07)
@ -50,6 +50,10 @@ class ExtractCamera(api.Extractor):
filepath=filepath,
use_active_collection=False,
use_selection=True,
bake_anim_use_nla_strips=False,
bake_anim_use_all_actions=False,
add_leaf_bones=False,
armature_nodetype='ROOT',
object_types={'CAMERA'},
bake_anim_simplify_factor=0.0
)
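Side note: these keyword arguments match Blender's FBX export operator; a minimal, hedged sketch of the call they presumably belong to (the output path here is hypothetical):

    import bpy

    # Export only the selected camera object, without baking NLA strips or all actions.
    bpy.ops.export_scene.fbx(
        filepath="/tmp/camera.fbx",  # hypothetical path, for illustration only
        use_active_collection=False,
        use_selection=True,
        bake_anim_use_nla_strips=False,
        bake_anim_use_all_actions=False,
        add_leaf_bones=False,
        armature_nodetype='ROOT',
        object_types={'CAMERA'},
        bake_anim_simplify_factor=0.0
    )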
|
||||
|
|
|
|||
|
|
@ -5,11 +5,12 @@ import logging
|
|||
|
||||
# Pipeline imports
|
||||
import avalon.api
|
||||
from avalon import io, pipeline
|
||||
from avalon import io
|
||||
|
||||
from openpype.lib import version_up
|
||||
from openpype.hosts.fusion import api
|
||||
from openpype.hosts.fusion.api import lib
|
||||
from openpype.lib.avalon_context import get_workdir_from_session
|
||||
|
||||
log = logging.getLogger("Update Slap Comp")
|
||||
|
||||
|
|
@ -44,16 +45,6 @@ def _format_version_folder(folder):
|
|||
return version_folder
|
||||
|
||||
|
||||
def _get_work_folder(session):
|
||||
"""Convenience function to get the work folder path of the current asset"""
|
||||
|
||||
# Get new filename, create path based on asset and work template
|
||||
template_work = self._project["config"]["template"]["work"]
|
||||
work_path = pipeline._format_work_template(template_work, session)
|
||||
|
||||
return os.path.normpath(work_path)
|
||||
|
||||
|
||||
def _get_fusion_instance():
|
||||
fusion = getattr(sys.modules["__main__"], "fusion", None)
|
||||
if fusion is None:
|
||||
|
|
@ -72,7 +63,7 @@ def _format_filepath(session):
|
|||
asset = session["AVALON_ASSET"]
|
||||
|
||||
# Save updated slap comp
|
||||
work_path = _get_work_folder(session)
|
||||
work_path = get_workdir_from_session(session)
|
||||
walk_to_dir = os.path.join(work_path, "scenes", "slapcomp")
|
||||
slapcomp_dir = os.path.abspath(walk_to_dir)
|
||||
|
||||
|
|
@ -112,7 +103,7 @@ def _update_savers(comp, session):
|
|||
None
|
||||
"""
|
||||
|
||||
new_work = _get_work_folder(session)
|
||||
new_work = get_workdir_from_session(session)
|
||||
renders = os.path.join(new_work, "renders")
|
||||
version_folder = _format_version_folder(renders)
|
||||
renders_version = os.path.join(renders, version_folder)
|
||||
|
|
|
|||
|
|
@ -5,11 +5,12 @@ import logging
|
|||
from Qt import QtWidgets, QtCore
|
||||
|
||||
import avalon.api
|
||||
from avalon import io, pipeline
|
||||
from avalon import io
|
||||
from avalon.vendor import qtawesome as qta
|
||||
|
||||
from openpype import style
|
||||
from openpype.hosts.fusion import api
|
||||
from openpype.lib.avalon_context import get_workdir_from_session
|
||||
|
||||
log = logging.getLogger("Fusion Switch Shot")
|
||||
|
||||
|
|
@ -123,7 +124,7 @@ class App(QtWidgets.QWidget):
|
|||
|
||||
def _on_open_from_dir(self):
|
||||
|
||||
start_dir = self._get_context_directory()
|
||||
start_dir = get_workdir_from_session()
|
||||
comp_file, _ = QtWidgets.QFileDialog.getOpenFileName(
|
||||
self, "Choose comp", start_dir)
|
||||
|
||||
|
|
@ -157,17 +158,6 @@ class App(QtWidgets.QWidget):
|
|||
import colorbleed.scripts.fusion_switch_shot as switch_shot
|
||||
switch_shot.switch(asset_name=asset, filepath=file_name, new=True)
|
||||
|
||||
def _get_context_directory(self):
|
||||
|
||||
project = io.find_one({"type": "project",
|
||||
"name": avalon.api.Session["AVALON_PROJECT"]},
|
||||
projection={"config": True})
|
||||
|
||||
template = project["config"]["template"]["work"]
|
||||
dir = pipeline._format_work_template(template, avalon.api.Session)
|
||||
|
||||
return dir
|
||||
|
||||
def collect_slap_comps(self, directory):
|
||||
items = glob.glob("{}/*.comp".format(directory))
|
||||
return items
|
||||
|
|
|
|||
|
|
@ -1,6 +1,5 @@
|
|||
import os
|
||||
|
||||
from avalon import harmony
|
||||
import pyblish.api
|
||||
import openpype.api
|
||||
from openpype.pipeline import PublishXmlValidationError
|
||||
|
|
|
|||
|
|
@ -105,11 +105,9 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
|
|||
invalid_keys = set()
|
||||
for key, value in expected_settings.items():
|
||||
if value != current_settings[key]:
|
||||
invalid_settings.append({
|
||||
"name": key,
|
||||
"expected": value,
|
||||
"current": current_settings[key]
|
||||
})
|
||||
invalid_settings.append(
|
||||
"{} expected: {} found: {}".format(key, value,
|
||||
current_settings[key]))
|
||||
invalid_keys.add(key)
|
||||
|
||||
if ((expected_settings["handleStart"]
|
||||
|
|
|
|||
|
|
@ -1,15 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<root>
|
||||
<warning id="main">
|
||||
<title>Primitive to Detail</title>
|
||||
<description>## Invalid Primitive to Detail Attributes
|
||||
|
||||
Primitives with inconsistent primitive to detail attributes were found.
|
||||
|
||||
{message}
|
||||
|
||||
</description>
|
||||
<detail>
|
||||
</detail>
|
||||
</warning>
|
||||
</root>
|
||||
|
|
@ -1,22 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<root>
|
||||
<warning id="main">
|
||||
<title>Alembic ROP Face Sets</title>
|
||||
<description>## Invalid Alembic ROP Face Sets
|
||||
|
||||
When groups are saved as Face Sets with the Alembic these show up
|
||||
as shadingEngine connections in Maya - however, with animated groups
|
||||
these connections in Maya won't work as expected, it won't update per
|
||||
frame. Additionally, it can break shader assignments in some cases
|
||||
where it requires to first break this connection to allow a shader to
|
||||
be assigned.
|
||||
|
||||
It is allowed to include Face Sets, so only an issue is logged to
|
||||
identify that it could introduce issues down the pipeline.
|
||||
|
||||
|
||||
</description>
|
||||
<detail>
|
||||
</detail>
|
||||
</warning>
|
||||
</root>
|
||||
|
|
@ -1,21 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<root>
|
||||
<error id="main">
|
||||
<title>Alembic input</title>
|
||||
<description>## Invalid Alembic input
|
||||
|
||||
The node connected to the output is incorrect.
|
||||
It contains primitive types that are not supported for alembic output.
|
||||
|
||||
Problematic primitive is of type {primitive_type}
|
||||
|
||||
|
||||
</description>
|
||||
<detail>
|
||||
|
||||
The connected node cannot be of the following types for Alembic:
|
||||
- VDB
|
||||
- Volume
|
||||
</detail>
|
||||
</error>
|
||||
</root>
|
||||
|
|
@ -1,31 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<root>
|
||||
<error id="main">
|
||||
<title>Output frame token</title>
|
||||
<description>## Output path is missing frame token
|
||||
|
||||
This validator will check the output parameter of the node if
|
||||
the Valid Frame Range is not set to 'Render Current Frame'
|
||||
|
||||
No frame token found in: **{nodepath}**
|
||||
|
||||
### How to repair?
|
||||
|
||||
You need to add `$F4` or similar frame based token to your path.
|
||||
|
||||
**Example:**
|
||||
Good: 'my_vbd_cache.$F4.vdb'
|
||||
Bad: 'my_vbd_cache.vdb'
|
||||
|
||||
</description>
|
||||
<detail>
|
||||
|
||||
If you render out a frame range it is mandatory to have the
|
||||
frame token - '$F4' or similar - to ensure that each frame gets
|
||||
written. If this is not the case you will override the same file
|
||||
every time a frame is written out.
|
||||
|
||||
|
||||
</detail>
|
||||
</error>
|
||||
</root>
|
||||
|
|
@ -1,48 +0,0 @@
|
|||
<?xml version="1.0" encoding="UTF-8"?>
|
||||
<root>
|
||||
<error id="main">
|
||||
<title>VDB output node</title>
|
||||
<description>## Invalid VDB output nodes
|
||||
|
||||
Validate that the node connected to the output node is of type VDB.
|
||||
|
||||
Regardless of the amount of VDBs created the output will need to have an
|
||||
equal amount of VDBs, points, primitives and vertices
|
||||
|
||||
A VDB is an inherited type of Prim, holds the following data:
|
||||
|
||||
- Primitives: 1
|
||||
- Points: 1
|
||||
- Vertices: 1
|
||||
- VDBs: 1
|
||||
|
||||
</description>
|
||||
<detail>
|
||||
</detail>
|
||||
</error>
|
||||
|
||||
<error id="noSOP">
|
||||
<title>No SOP path</title>
|
||||
<description>## No SOP Path in output node
|
||||
|
||||
SOP Output node in '{node}' does not exist. Ensure a valid SOP output path is set.
|
||||
|
||||
</description>
|
||||
<detail>
|
||||
</detail>
|
||||
</error>
|
||||
|
||||
<error id="wrongSOP">
|
||||
<title>Wrong SOP path</title>
|
||||
<description>## Wrong SOP Path in output node
|
||||
|
||||
Output node {nodepath} is not a SOP node.
|
||||
SOP Path must point to a SOP node,
|
||||
instead found category type: {categoryname}
|
||||
|
||||
</description>
|
||||
<detail>
|
||||
</detail>
|
||||
</error>
|
||||
|
||||
</root>
|
||||
|
|
@ -0,0 +1,47 @@
|
|||
import pyblish.api
|
||||
import openpype.api
|
||||
|
||||
|
||||
class ValidateVDBInputNode(pyblish.api.InstancePlugin):
|
||||
"""Validate that the node connected to the output node is of type VDB.
|
||||
|
||||
Regardless of the amount of VDBs created the output will need to have an
|
||||
equal amount of VDBs, points, primitives and vertices
|
||||
|
||||
A VDB is an inherited type of Prim, holds the following data:
|
||||
- Primitives: 1
|
||||
- Points: 1
|
||||
- Vertices: 1
|
||||
- VDBs: 1
|
||||
|
||||
"""
|
||||
|
||||
order = openpype.api.ValidateContentsOrder + 0.1
|
||||
families = ["vdbcache"]
|
||||
hosts = ["houdini"]
|
||||
label = "Validate Input Node (VDB)"
|
||||
|
||||
def process(self, instance):
|
||||
invalid = self.get_invalid(instance)
|
||||
if invalid:
|
||||
raise RuntimeError(
|
||||
"Node connected to the output node is not" "of type VDB!"
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def get_invalid(cls, instance):
|
||||
|
||||
node = instance.data["output_node"]
|
||||
|
||||
prims = node.geometry().prims()
|
||||
nr_of_prims = len(prims)
|
||||
|
||||
nr_of_points = len(node.geometry().points())
|
||||
if nr_of_points != nr_of_prims:
|
||||
cls.log.error("The number of primitives and points do not match")
|
||||
return [instance]
|
||||
|
||||
for prim in prims:
|
||||
if prim.numVertices() != 1:
|
||||
cls.log.error("Found primitive with more than 1 vertex!")
|
||||
return [instance]
|
||||
|
|
@ -0,0 +1,51 @@
|
|||
import pyblish.api
|
||||
|
||||
from openpype.hosts.houdini.api import lib
|
||||
|
||||
|
||||
class ValidateAnimationSettings(pyblish.api.InstancePlugin):
|
||||
"""Validate if the unexpanded string contains the frame ('$F') token
|
||||
|
||||
This validator will only check the output parameter of the node if
|
||||
the Valid Frame Range is not set to 'Render Current Frame'
|
||||
|
||||
Rules:
|
||||
If you render out a frame range it is mandatory to have the
|
||||
frame token - '$F4' or similar - to ensure that each frame gets
|
||||
written. If this is not the case you will override the same file
|
||||
every time a frame is written out.
|
||||
|
||||
Examples:
|
||||
Good: 'my_vbd_cache.$F4.vdb'
|
||||
Bad: 'my_vbd_cache.vdb'
|
||||
|
||||
"""
|
||||
|
||||
order = pyblish.api.ValidatorOrder
|
||||
label = "Validate Frame Settings"
|
||||
families = ["vdbcache"]
|
||||
|
||||
def process(self, instance):
|
||||
|
||||
invalid = self.get_invalid(instance)
|
||||
if invalid:
|
||||
raise RuntimeError(
|
||||
"Output settings do no match for '%s'" % instance
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def get_invalid(cls, instance):
|
||||
|
||||
node = instance[0]
|
||||
|
||||
# Check trange parm, 0 means Render Current Frame
|
||||
frame_range = node.evalParm("trange")
|
||||
if frame_range == 0:
|
||||
return []
|
||||
|
||||
output_parm = lib.get_output_parameter(node)
|
||||
unexpanded_str = output_parm.unexpandedString()
|
||||
|
||||
if "$F" not in unexpanded_str:
|
||||
cls.log.error("No frame token found in '%s'" % node.path())
|
||||
return [instance]
|
||||
|
|
@ -1,12 +1,12 @@
|
|||
import pyblish.api
|
||||
|
||||
from openpype.hosts.houdini.api import lib
|
||||
from openpype.pipeline import PublishXmlValidationError
|
||||
|
||||
|
||||
class ValidateFrameToken(pyblish.api.InstancePlugin):
|
||||
"""Validate if the unexpanded string contains the frame ('$F') token
|
||||
"""Validate if the unexpanded string contains the frame ('$F') token.
|
||||
|
||||
This validator will only check the output parameter of the node if
|
||||
This validator will *only* check the output parameter of the node if
|
||||
the Valid Frame Range is not set to 'Render Current Frame'
|
||||
|
||||
Rules:
|
||||
|
|
@ -28,14 +28,9 @@ class ValidateFrameToken(pyblish.api.InstancePlugin):
|
|||
def process(self, instance):
|
||||
|
||||
invalid = self.get_invalid(instance)
|
||||
data = {
|
||||
"nodepath": instance
|
||||
}
|
||||
if invalid:
|
||||
raise PublishXmlValidationError(
|
||||
self,
|
||||
"Output path for '%s' is missing $F4 token" % instance,
|
||||
formatting_data=data
|
||||
raise RuntimeError(
|
||||
"Output settings do no match for '%s'" % instance
|
||||
)
|
||||
|
||||
@classmethod
|
||||
|
|
@ -52,5 +47,5 @@ class ValidateFrameToken(pyblish.api.InstancePlugin):
|
|||
unexpanded_str = output_parm.unexpandedString()
|
||||
|
||||
if "$F" not in unexpanded_str:
|
||||
# cls.log.info("No frame token found in '%s'" % node.path())
|
||||
cls.log.error("No frame token found in '%s'" % node.path())
|
||||
return [instance]
|
||||
|
|
|
|||
|
|
@ -14,7 +14,7 @@ class ValidateSopOutputNode(pyblish.api.InstancePlugin):
|
|||
"""
|
||||
|
||||
order = pyblish.api.ValidatorOrder
|
||||
families = ["pointcache"]
|
||||
families = ["pointcache", "vdbcache"]
|
||||
hosts = ["houdini"]
|
||||
label = "Validate Output Node"
|
||||
|
||||
|
|
|
|||
|
|
@ -0,0 +1,47 @@
|
|||
import pyblish.api
|
||||
import openpype.api
|
||||
|
||||
|
||||
class ValidateVDBInputNode(pyblish.api.InstancePlugin):
|
||||
"""Validate that the node connected to the output node is of type VDB.
|
||||
|
||||
Regardless of the amount of VDBs created the output will need to have an
|
||||
equal amount of VDBs, points, primitives and vertices
|
||||
|
||||
A VDB is an inherited type of Prim, holds the following data:
|
||||
- Primitives: 1
|
||||
- Points: 1
|
||||
- Vertices: 1
|
||||
- VDBs: 1
|
||||
|
||||
"""
|
||||
|
||||
order = openpype.api.ValidateContentsOrder + 0.1
|
||||
families = ["vdbcache"]
|
||||
hosts = ["houdini"]
|
||||
label = "Validate Input Node (VDB)"
|
||||
|
||||
def process(self, instance):
|
||||
invalid = self.get_invalid(instance)
|
||||
if invalid:
|
||||
raise RuntimeError(
|
||||
"Node connected to the output node is not" "of type VDB!"
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def get_invalid(cls, instance):
|
||||
|
||||
node = instance.data["output_node"]
|
||||
|
||||
prims = node.geometry().prims()
|
||||
nr_of_prims = len(prims)
|
||||
|
||||
nr_of_points = len(node.geometry().points())
|
||||
if nr_of_points != nr_of_prims:
|
||||
cls.log.error("The number of primitives and points do not match")
|
||||
return [instance]
|
||||
|
||||
for prim in prims:
|
||||
if prim.numVertices() != 1:
|
||||
cls.log.error("Found primitive with more than 1 vertex!")
|
||||
return [instance]
|
||||
|
|
@ -1,6 +1,5 @@
|
|||
import pyblish.api
|
||||
import openpype.api
|
||||
from openpype.pipeline import PublishXmlValidationError
|
||||
import hou
|
||||
|
||||
|
||||
|
|
@ -24,61 +23,32 @@ class ValidateVDBOutputNode(pyblish.api.InstancePlugin):
|
|||
label = "Validate Output Node (VDB)"
|
||||
|
||||
def process(self, instance):
|
||||
|
||||
data = {
|
||||
"node": instance
|
||||
}
|
||||
|
||||
output_node = instance.data["output_node"]
|
||||
if output_node is None:
|
||||
raise PublishXmlValidationError(
|
||||
self,
|
||||
"SOP Output node in '{node}' does not exist. Ensure a valid "
|
||||
"SOP output path is set.".format(**data),
|
||||
key="noSOP",
|
||||
formatting_data=data
|
||||
)
|
||||
|
||||
# Output node must be a Sop node.
|
||||
if not isinstance(output_node, hou.SopNode):
|
||||
data = {
|
||||
"nodepath": output_node.path(),
|
||||
"categoryname": output_node.type().category().name()
|
||||
}
|
||||
raise PublishXmlValidationError(
|
||||
self,
|
||||
"Output node {nodepath} is not a SOP node. SOP Path must"
|
||||
"point to a SOP node, instead found category"
|
||||
"type: {categoryname}".format(**data),
|
||||
key="wrongSOP",
|
||||
formatting_data=data
|
||||
)
|
||||
|
||||
invalid = self.get_invalid(instance)
|
||||
|
||||
if invalid:
|
||||
raise PublishXmlValidationError(
|
||||
self,
|
||||
"Output node(s) `{}` are incorrect. See plug-in"
|
||||
"log for details.".format(invalid),
|
||||
formatting_data=data
|
||||
raise RuntimeError(
|
||||
"Node connected to the output node is not" " of type VDB!"
|
||||
)
|
||||
|
||||
@classmethod
|
||||
def get_invalid(cls, instance):
|
||||
|
||||
output_node = instance.data["output_node"]
|
||||
node = instance.data["output_node"]
|
||||
if node is None:
|
||||
cls.log.error(
|
||||
"SOP path is not correctly set on "
|
||||
"ROP node '%s'." % instance[0].path()
|
||||
)
|
||||
return [instance]
|
||||
|
||||
frame = instance.data.get("frameStart", 0)
|
||||
geometry = output_node.geometryAtFrame(frame)
|
||||
geometry = node.geometryAtFrame(frame)
|
||||
if geometry is None:
|
||||
# No geometry data on this output_node
|
||||
# - maybe the node hasn't cooked?
|
||||
cls.log.debug(
|
||||
# No geometry data on this node, maybe the node hasn't cooked?
|
||||
cls.log.error(
|
||||
"SOP node has no geometry data. "
|
||||
"Is it cooked? %s" % output_node.path()
|
||||
"Is it cooked? %s" % node.path()
|
||||
)
|
||||
return [output_node]
|
||||
return [node]
|
||||
|
||||
prims = geometry.prims()
|
||||
nr_of_prims = len(prims)
|
||||
|
|
@ -87,17 +57,17 @@ class ValidateVDBOutputNode(pyblish.api.InstancePlugin):
|
|||
invalid_prim = False
|
||||
for prim in prims:
|
||||
if not isinstance(prim, hou.VDB):
|
||||
cls.log.debug("Found non-VDB primitive: %s" % prim)
|
||||
cls.log.error("Found non-VDB primitive: %s" % prim)
|
||||
invalid_prim = True
|
||||
if invalid_prim:
|
||||
return [instance]
|
||||
|
||||
nr_of_points = len(geometry.points())
|
||||
if nr_of_points != nr_of_prims:
|
||||
cls.log.debug("The number of primitives and points do not match")
|
||||
cls.log.error("The number of primitives and points do not match")
|
||||
return [instance]
|
||||
|
||||
for prim in prims:
|
||||
if prim.numVertices() != 1:
|
||||
cls.log.debug("Found primitive with more than 1 vertex!")
|
||||
cls.log.error("Found primitive with more than 1 vertex!")
|
||||
return [instance]
|
||||
|
|
|
|||
|
|
@ -4,7 +4,6 @@ import os
|
|||
import sys
|
||||
import json
|
||||
import tempfile
|
||||
import platform
|
||||
import contextlib
|
||||
import subprocess
|
||||
from collections import OrderedDict
|
||||
|
|
@ -64,10 +63,6 @@ def maketx(source, destination, *args):
|
|||
|
||||
maketx_path = get_oiio_tools_path("maketx")
|
||||
|
||||
if platform.system().lower() == "windows":
|
||||
# Ensure .exe extension
|
||||
maketx_path += ".exe"
|
||||
|
||||
if not os.path.exists(maketx_path):
|
||||
print(
|
||||
"OIIO tool not found in {}".format(maketx_path))
|
||||
|
|
|
|||
|
|
@ -152,6 +152,7 @@ class ExporterReview(object):
|
|||
|
||||
"""
|
||||
data = None
|
||||
publish_on_farm = False
|
||||
|
||||
def __init__(self,
|
||||
klass,
|
||||
|
|
@ -210,6 +211,9 @@ class ExporterReview(object):
|
|||
if self.multiple_presets:
|
||||
repre["outputName"] = self.name
|
||||
|
||||
if self.publish_on_farm:
|
||||
repre["tags"].append("publish_on_farm")
|
||||
|
||||
self.data["representations"].append(repre)
|
||||
|
||||
def get_view_input_process_node(self):
|
||||
|
|
@ -446,6 +450,7 @@ class ExporterReviewMov(ExporterReview):
|
|||
return path
|
||||
|
||||
def generate_mov(self, farm=False, **kwargs):
|
||||
self.publish_on_farm = farm
|
||||
reformat_node_add = kwargs["reformat_node_add"]
|
||||
reformat_node_config = kwargs["reformat_node_config"]
|
||||
bake_viewer_process = kwargs["bake_viewer_process"]
|
||||
|
|
@ -563,7 +568,7 @@ class ExporterReviewMov(ExporterReview):
|
|||
# ---------- end nodes creation
|
||||
|
||||
# ---------- render or save to nk
|
||||
if farm:
|
||||
if self.publish_on_farm:
|
||||
nuke.scriptSave()
|
||||
path_nk = self.save_file()
|
||||
self.data.update({
|
||||
|
|
@ -573,11 +578,12 @@ class ExporterReviewMov(ExporterReview):
|
|||
})
|
||||
else:
|
||||
self.render(write_node.name())
|
||||
# ---------- generate representation data
|
||||
self.get_representation_data(
|
||||
tags=["review", "delete"] + add_tags,
|
||||
range=True
|
||||
)
|
||||
|
||||
# ---------- generate representation data
|
||||
self.get_representation_data(
|
||||
tags=["review", "delete"] + add_tags,
|
||||
range=True
|
||||
)
|
||||
|
||||
self.log.debug("Representation... `{}`".format(self.data))
|
||||
|
||||
|
|
|
|||
|
|
@ -130,9 +130,11 @@ class ExtractReviewDataMov(openpype.api.Extractor):
|
|||
})
|
||||
else:
|
||||
data = exporter.generate_mov(**o_data)
|
||||
generated_repres.extend(data["representations"])
|
||||
|
||||
self.log.info(generated_repres)
|
||||
# add representation generated by exporter
|
||||
generated_repres.extend(data["representations"])
|
||||
self.log.debug(
|
||||
"__ generated_repres: {}".format(generated_repres))
|
||||
|
||||
if generated_repres:
|
||||
# assign to representations
|
||||
|
|
|
|||
|
|
@ -2,7 +2,6 @@ import pyblish.api
|
|||
from openpype.pipeline import PublishValidationError
|
||||
|
||||
|
||||
|
||||
class ValidateInstanceAssetRepair(pyblish.api.Action):
|
||||
"""Repair the instance asset."""
|
||||
|
||||
|
|
|
|||
|
|
@ -10,14 +10,18 @@ Provides:
|
|||
import os
|
||||
import clique
|
||||
import tempfile
|
||||
import math
|
||||
|
||||
from avalon import io
|
||||
import pyblish.api
|
||||
from openpype.lib import prepare_template_data
|
||||
from openpype.lib import prepare_template_data, get_asset, ffprobe_streams
|
||||
from openpype.lib.vendor_bin_utils import get_fps
|
||||
from openpype.lib.plugin_tools import (
|
||||
parse_json,
|
||||
get_subset_name_with_asset_doc
|
||||
)
|
||||
|
||||
|
||||
class CollectPublishedFiles(pyblish.api.ContextPlugin):
|
||||
"""
|
||||
This collector will try to find json files in provided
|
||||
|
|
@ -49,10 +53,7 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
|
|||
self.log.info("task_sub:: {}".format(task_subfolders))
|
||||
|
||||
asset_name = context.data["asset"]
|
||||
asset_doc = io.find_one({
|
||||
"type": "asset",
|
||||
"name": asset_name
|
||||
})
|
||||
asset_doc = get_asset()
|
||||
task_name = context.data["task"]
|
||||
task_type = context.data["taskType"]
|
||||
project_name = context.data["project_name"]
|
||||
|
|
@ -97,11 +98,26 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
|
|||
instance.data["frameEnd"] = \
|
||||
instance.data["representations"][0]["frameEnd"]
|
||||
else:
|
||||
instance.data["frameStart"] = 0
|
||||
instance.data["frameEnd"] = 1
|
||||
frame_start = asset_doc["data"]["frameStart"]
|
||||
instance.data["frameStart"] = frame_start
|
||||
instance.data["frameEnd"] = asset_doc["data"]["frameEnd"]
|
||||
instance.data["representations"] = self._get_single_repre(
|
||||
task_dir, task_data["files"], tags
|
||||
)
|
||||
file_url = os.path.join(task_dir, task_data["files"][0])
|
||||
duration = self._get_duration(file_url)
|
||||
if duration:
|
||||
try:
|
||||
frame_end = int(frame_start) + math.ceil(duration)
|
||||
instance.data["frameEnd"] = math.ceil(frame_end)
|
||||
self.log.debug("frameEnd:: {}".format(
|
||||
instance.data["frameEnd"]))
|
||||
except ValueError:
|
||||
self.log.warning("Unable to count frames "
|
||||
"duration {}".format(duration))
|
||||
|
||||
instance.data["handleStart"] = asset_doc["data"]["handleStart"]
|
||||
instance.data["handleEnd"] = asset_doc["data"]["handleEnd"]
|
||||
|
||||
self.log.info("instance.data:: {}".format(instance.data))
|
||||
|
||||
|
|
@ -127,7 +143,7 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
|
|||
return [repre_data]
|
||||
|
||||
def _process_sequence(self, files, task_dir, tags):
|
||||
"""Prepare reprentations for sequence of files."""
|
||||
"""Prepare representation for sequence of files."""
|
||||
collections, remainder = clique.assemble(files)
|
||||
assert len(collections) == 1, \
|
||||
"Too many collections in {}".format(files)
|
||||
|
|
@ -188,6 +204,7 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
|
|||
msg = "No family found for combination of " +\
|
||||
"task_type: {}, is_sequence:{}, extension: {}".format(
|
||||
task_type, is_sequence, extension)
|
||||
found_family = "render"
|
||||
assert found_family, msg
|
||||
|
||||
return (found_family,
|
||||
|
|
@ -243,3 +260,41 @@ class CollectPublishedFiles(pyblish.api.ContextPlugin):
|
|||
return version[0].get("version") or 0
|
||||
else:
|
||||
return 0
|
||||
|
||||
def _get_duration(self, file_url):
|
||||
"""Return duration in frames"""
|
||||
try:
|
||||
streams = ffprobe_streams(file_url, self.log)
|
||||
except Exception as exc:
|
||||
raise AssertionError((
|
||||
"FFprobe couldn't read information about input file: \"{}\"."
|
||||
" Error message: {}"
|
||||
).format(file_url, str(exc)))
|
||||
|
||||
first_video_stream = None
|
||||
for stream in streams:
|
||||
if "width" in stream and "height" in stream:
|
||||
first_video_stream = stream
|
||||
break
|
||||
|
||||
if first_video_stream:
|
||||
nb_frames = stream.get("nb_frames")
|
||||
if nb_frames:
|
||||
try:
|
||||
return int(nb_frames)
|
||||
except ValueError:
|
||||
self.log.warning(
|
||||
"nb_frames {} not convertible".format(nb_frames))
|
||||
|
||||
duration = stream.get("duration")
|
||||
frame_rate = get_fps(stream.get("r_frame_rate", '0/0'))
|
||||
self.log.debug("duration:: {} frame_rate:: {}".format(
|
||||
duration, frame_rate))
|
||||
try:
|
||||
return float(duration) * float(frame_rate)
|
||||
except ValueError:
|
||||
self.log.warning(
|
||||
"{} or {} cannot be converted".format(duration,
|
||||
frame_rate))
|
||||
|
||||
self.log.warning("Cannot get number of frames")
|
||||
|
|
|
|||
|
|
@ -16,6 +16,14 @@ sys.path.insert(0, python_version_dir)
|
|||
site.addsitedir(python_version_dir)
|
||||
|
||||
|
||||
from .vendor_bin_utils import (
|
||||
find_executable,
|
||||
get_vendor_bin_path,
|
||||
get_oiio_tools_path,
|
||||
get_ffmpeg_tool_path,
|
||||
ffprobe_streams,
|
||||
is_oiio_supported
|
||||
)
|
||||
from .env_tools import (
|
||||
env_value_to_bool,
|
||||
get_paths_from_environ,
|
||||
|
|
@ -57,14 +65,6 @@ from .anatomy import (
|
|||
|
||||
from .config import get_datetime_data
|
||||
|
||||
from .vendor_bin_utils import (
|
||||
get_vendor_bin_path,
|
||||
get_oiio_tools_path,
|
||||
get_ffmpeg_tool_path,
|
||||
ffprobe_streams,
|
||||
is_oiio_supported
|
||||
)
|
||||
|
||||
from .python_module_tools import (
|
||||
import_filepath,
|
||||
modules_from_path,
|
||||
|
|
@ -193,6 +193,7 @@ from .openpype_version import (
|
|||
terminal = Terminal
|
||||
|
||||
__all__ = [
|
||||
"find_executable",
|
||||
"get_openpype_execute_args",
|
||||
"get_pype_execute_args",
|
||||
"get_linux_launcher_args",
|
||||
|
|
|
|||
|
|
@ -7,7 +7,6 @@ import platform
|
|||
import collections
|
||||
import inspect
|
||||
import subprocess
|
||||
import distutils.spawn
|
||||
from abc import ABCMeta, abstractmethod
|
||||
|
||||
import six
|
||||
|
|
@ -36,8 +35,10 @@ from .python_module_tools import (
|
|||
modules_from_path,
|
||||
classes_from_module
|
||||
)
|
||||
from .execute import get_linux_launcher_args
|
||||
|
||||
from .execute import (
|
||||
find_executable,
|
||||
get_linux_launcher_args
|
||||
)
|
||||
|
||||
_logger = None
|
||||
|
||||
|
|
@ -647,7 +648,7 @@ class ApplicationExecutable:
|
|||
def _realpath(self):
|
||||
"""Check if path is valid executable path."""
|
||||
# Check for executable in PATH
|
||||
result = distutils.spawn.find_executable(self.executable_path)
|
||||
result = find_executable(self.executable_path)
|
||||
if result is not None:
|
||||
return result
|
||||
|
||||
|
|
|
|||
|
|
@ -644,6 +644,166 @@ def get_workdir(
|
|||
)
|
||||
|
||||
|
||||
def template_data_from_session(session=None):
|
||||
""" Return dictionary with template from session keys.
|
||||
|
||||
Args:
|
||||
session (dict, Optional): The Session to use. If not provided use the
|
||||
currently active global Session.
|
||||
Returns:
|
||||
dict: All available data from session.
|
||||
"""
|
||||
from avalon import io
|
||||
import avalon.api
|
||||
|
||||
if session is None:
|
||||
session = avalon.api.Session
|
||||
|
||||
project_name = session["AVALON_PROJECT"]
|
||||
project_doc = io._database[project_name].find_one({"type": "project"})
|
||||
asset_doc = io._database[project_name].find_one({
|
||||
"type": "asset",
|
||||
"name": session["AVALON_ASSET"]
|
||||
})
|
||||
task_name = session["AVALON_TASK"]
|
||||
host_name = session["AVALON_APP"]
|
||||
return get_workdir_data(project_doc, asset_doc, task_name, host_name)
|
||||
|
||||
|
||||
def compute_session_changes(
|
||||
session, task=None, asset=None, app=None, template_key=None
|
||||
):
|
||||
"""Compute the changes for a Session object on asset, task or app switch
|
||||
|
||||
This does *NOT* update the Session object, but returns the changes
|
||||
required for a valid update of the Session.
|
||||
|
||||
Args:
|
||||
session (dict): The initial session to compute changes to.
|
||||
This is required for computing the full Work Directory, as that
|
||||
also depends on the values that haven't changed.
|
||||
task (str, Optional): Name of task to switch to.
|
||||
asset (str or dict, Optional): Name of asset to switch to.
|
||||
You can also directly provide the Asset dictionary as returned
|
||||
from the database to avoid an additional query. (optimization)
|
||||
app (str, Optional): Name of app to switch to.
|
||||
|
||||
Returns:
|
||||
dict: The required changes in the Session dictionary.
|
||||
|
||||
"""
|
||||
changes = dict()
|
||||
|
||||
# If no changes, return directly
|
||||
if not any([task, asset, app]):
|
||||
return changes
|
||||
|
||||
# Get asset document and asset
|
||||
asset_document = None
|
||||
asset_tasks = None
|
||||
if isinstance(asset, dict):
|
||||
# Assume asset database document
|
||||
asset_document = asset
|
||||
asset_tasks = asset_document.get("data", {}).get("tasks")
|
||||
asset = asset["name"]
|
||||
|
||||
if not asset_document or not asset_tasks:
|
||||
from avalon import io
|
||||
|
||||
# Assume asset name
|
||||
asset_document = io.find_one(
|
||||
{
|
||||
"name": asset,
|
||||
"type": "asset"
|
||||
},
|
||||
{"data.tasks": True}
|
||||
)
|
||||
assert asset_document, "Asset must exist"
|
||||
|
||||
# Detect any changes compared session
|
||||
mapping = {
|
||||
"AVALON_ASSET": asset,
|
||||
"AVALON_TASK": task,
|
||||
"AVALON_APP": app,
|
||||
}
|
||||
changes = {
|
||||
key: value
|
||||
for key, value in mapping.items()
|
||||
if value and value != session.get(key)
|
||||
}
|
||||
if not changes:
|
||||
return changes
|
||||
|
||||
# Compute work directory (with the temporary changed session so far)
|
||||
_session = session.copy()
|
||||
_session.update(changes)
|
||||
|
||||
changes["AVALON_WORKDIR"] = get_workdir_from_session(_session)
|
||||
|
||||
return changes
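A hypothetical usage sketch for compute_session_changes (asset and task names are illustrative only):

    import avalon.api
    from openpype.lib.avalon_context import compute_session_changes

    # Preview what switching the active Session to another asset/task would change.
    changes = compute_session_changes(
        avalon.api.Session, asset="sh010", task="animation"
    )
    # e.g. {"AVALON_ASSET": "sh010", "AVALON_TASK": "animation", "AVALON_WORKDIR": "<new work dir>"}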
|
||||
|
||||
|
||||
def get_workdir_from_session(session=None, template_key=None):
|
||||
import avalon.api
|
||||
|
||||
if session is None:
|
||||
session = avalon.api.Session
|
||||
project_name = session["AVALON_PROJECT"]
|
||||
host_name = session["AVALON_APP"]
|
||||
anatomy = Anatomy(project_name)
|
||||
template_data = template_data_from_session(session)
|
||||
anatomy_filled = anatomy.format(template_data)
|
||||
|
||||
if not template_key:
|
||||
task_type = template_data["task"]["type"]
|
||||
template_key = get_workfile_template_key(
|
||||
task_type,
|
||||
host_name,
|
||||
project_name=project_name
|
||||
)
|
||||
return anatomy_filled[template_key]["folder"]
|
||||
|
||||
|
||||
def update_current_task(task=None, asset=None, app=None, template_key=None):
|
||||
"""Update active Session to a new task work area.
|
||||
|
||||
This updates the live Session to a different `asset`, `task` or `app`.
|
||||
|
||||
Args:
|
||||
task (str): The task to set.
|
||||
asset (str): The asset to set.
|
||||
app (str): The app to set.
|
||||
|
||||
Returns:
|
||||
dict: The changed key, values in the current Session.
|
||||
|
||||
"""
|
||||
import avalon.api
|
||||
from avalon.pipeline import emit
|
||||
|
||||
changes = compute_session_changes(
|
||||
avalon.api.Session,
|
||||
task=task,
|
||||
asset=asset,
|
||||
app=app,
|
||||
template_key=template_key
|
||||
)
|
||||
|
||||
# Update the Session and environments. Pop from environments all keys with
|
||||
# value set to None.
|
||||
for key, value in changes.items():
|
||||
avalon.api.Session[key] = value
|
||||
if value is None:
|
||||
os.environ.pop(key, None)
|
||||
else:
|
||||
os.environ[key] = value
|
||||
|
||||
# Emit session change
|
||||
emit("taskChanged", changes.copy())
|
||||
|
||||
return changes
|
||||
|
||||
|
||||
@with_avalon
|
||||
def get_workfile_doc(asset_id, task_name, filename, dbcon=None):
|
||||
"""Return workfile document for entered context.
|
||||
|
|
|
|||
|
|
@ -4,9 +4,9 @@ import subprocess
|
|||
import platform
|
||||
import json
|
||||
import tempfile
|
||||
import distutils.spawn
|
||||
|
||||
from .log import PypeLogger as Logger
|
||||
from .vendor_bin_utils import find_executable
|
||||
|
||||
# MSDN process creation flag (Windows only)
|
||||
CREATE_NO_WINDOW = 0x08000000
|
||||
|
|
@ -341,7 +341,7 @@ def get_linux_launcher_args(*args):
|
|||
os.path.dirname(openpype_executable),
|
||||
filename
|
||||
)
|
||||
executable_path = distutils.spawn.find_executable(new_executable)
|
||||
executable_path = find_executable(new_executable)
|
||||
if executable_path is None:
|
||||
return None
|
||||
launch_args = [executable_path]
|
||||
|
|
|
|||
|
|
@ -3,9 +3,87 @@ import logging
|
|||
import json
|
||||
import platform
|
||||
import subprocess
|
||||
import distutils
|
||||
|
||||
log = logging.getLogger("FFmpeg utils")
|
||||
log = logging.getLogger("Vendor utils")
|
||||
|
||||
|
||||
def is_file_executable(filepath):
|
||||
"""Filepath lead to executable file.
|
||||
|
||||
Args:
|
||||
filepath(str): Full path to file.
|
||||
"""
|
||||
if not filepath:
|
||||
return False
|
||||
|
||||
if os.path.isfile(filepath):
|
||||
if os.access(filepath, os.X_OK):
|
||||
return True
|
||||
|
||||
log.info(
|
||||
"Filepath is not available for execution \"{}\"".format(filepath)
|
||||
)
|
||||
return False
|
||||
|
||||
|
||||
def find_executable(executable):
|
||||
"""Find full path to executable.
|
||||
|
||||
Also tries additional extensions if passed executable does not contain one.
|
||||
|
||||
Paths that are searched are defined by the 'PATH' environment
variable, 'os.confstr("CS_PATH")' or 'os.defpath'.
|
||||
|
||||
Args:
|
||||
executable(str): Name of executable with or without extension. Can be
|
||||
path to file.
|
||||
|
||||
Returns:
|
||||
str: Full path to executable with extension (is file).
|
||||
None: When the executable was not found.
|
||||
"""
|
||||
# Skip if passed path is file
|
||||
if is_file_executable(executable):
|
||||
return executable
|
||||
|
||||
low_platform = platform.system().lower()
|
||||
_, ext = os.path.splitext(executable)
|
||||
|
||||
# Prepare variants for which it will be looked
|
||||
variants = [executable]
|
||||
# Add other extension variants only if passed executable does not have one
|
||||
if not ext:
|
||||
if low_platform == "windows":
|
||||
exts = [".exe", ".ps1", ".bat"]
|
||||
for ext in os.getenv("PATHEXT", "").split(os.pathsep):
|
||||
ext = ext.lower()
|
||||
if ext and ext not in exts:
|
||||
exts.append(ext)
|
||||
else:
|
||||
exts = [".sh"]
|
||||
|
||||
for ext in exts:
|
||||
variant = executable + ext
|
||||
if is_file_executable(variant):
|
||||
return variant
|
||||
variants.append(variant)
|
||||
|
||||
# Get paths where to look for executable
|
||||
path_str = os.environ.get("PATH", None)
|
||||
if path_str is None:
|
||||
if hasattr(os, "confstr"):
|
||||
path_str = os.confstr("CS_PATH")
|
||||
elif hasattr(os, "defpath"):
|
||||
path_str = os.defpath
|
||||
|
||||
if path_str:
|
||||
paths = path_str.split(os.pathsep)
|
||||
for path in paths:
|
||||
for variant in variants:
|
||||
filepath = os.path.abspath(os.path.join(path, variant))
|
||||
if is_file_executable(filepath):
|
||||
return filepath
|
||||
return None
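A brief usage sketch (the resolved path is hypothetical):

    from openpype.lib.vendor_bin_utils import find_executable

    # Resolves a bare name against PATH; on Windows it also tries ".exe", ".ps1",
    # ".bat" and anything listed in PATHEXT. Returns None when nothing matches.
    ffmpeg_path = find_executable("ffmpeg")  # e.g. "/usr/bin/ffmpeg" or None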
|
||||
|
||||
|
||||
def get_vendor_bin_path(bin_app):
|
||||
|
|
@ -41,11 +119,7 @@ def get_oiio_tools_path(tool="oiiotool"):
|
|||
Default is "oiiotool".
|
||||
"""
|
||||
oiio_dir = get_vendor_bin_path("oiio")
|
||||
if platform.system().lower() == "windows" and not tool.lower().endswith(
|
||||
".exe"
|
||||
):
|
||||
tool = "{}.exe".format(tool)
|
||||
return os.path.join(oiio_dir, tool)
|
||||
return find_executable(os.path.join(oiio_dir, tool))
|
||||
|
||||
|
||||
def get_ffmpeg_tool_path(tool="ffmpeg"):
|
||||
|
|
@ -61,7 +135,7 @@ def get_ffmpeg_tool_path(tool="ffmpeg"):
|
|||
ffmpeg_dir = get_vendor_bin_path("ffmpeg")
|
||||
if platform.system().lower() == "windows":
|
||||
ffmpeg_dir = os.path.join(ffmpeg_dir, "bin")
|
||||
return os.path.join(ffmpeg_dir, tool)
|
||||
return find_executable(os.path.join(ffmpeg_dir, tool))
|
||||
|
||||
|
||||
def ffprobe_streams(path_to_file, logger=None):
|
||||
|
|
@ -122,7 +196,7 @@ def is_oiio_supported():
|
|||
"""
|
||||
loaded_path = oiio_path = get_oiio_tools_path()
|
||||
if oiio_path:
|
||||
oiio_path = distutils.spawn.find_executable(oiio_path)
|
||||
oiio_path = find_executable(oiio_path)
|
||||
|
||||
if not oiio_path:
|
||||
log.debug("OIIOTool is not configured or not present at {}".format(
|
||||
|
|
@ -130,3 +204,23 @@ def is_oiio_supported():
|
|||
))
|
||||
return False
|
||||
return True
|
||||
|
||||
|
||||
def get_fps(str_value):
    """Returns (str) value of fps from ffprobe frame format (120/1)"""
    if str_value == "0/0":
        print("WARNING: Source has \"r_frame_rate\" value set to \"0/0\".")
        return "Unknown"

    items = str_value.split("/")
    if len(items) == 1:
        fps = float(items[0])

    elif len(items) == 2:
        fps = float(items[0]) / float(items[1])

    # Check if fps is integer or float number
    if int(fps) == fps:
        fps = int(fps)

    return str(fps)
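For illustration (behavior read directly from the function above):

    get_fps("25/1")        # -> "25"
    get_fps("24000/1001")  # -> str(24000 / 1001), i.e. "23.976..."
    get_fps("0/0")         # -> "Unknown" (also prints a warning)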
|
||||
|
|
|
|||
|
|
@ -516,7 +516,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
"""
|
||||
representations = []
|
||||
collections, remainders = clique.assemble(exp_files)
|
||||
bake_renders = instance.get("bakingNukeScripts", [])
|
||||
|
||||
# create representation for every collected sequence
|
||||
for collection in collections:
|
||||
|
|
@ -534,9 +533,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
preview = True
|
||||
break
|
||||
|
||||
if bake_renders:
|
||||
preview = False
|
||||
|
||||
# toggle preview on if multipart is on
|
||||
if instance.get("multipartExr", False):
|
||||
preview = True
|
||||
|
|
@ -610,16 +606,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
})
|
||||
self._solve_families(instance, True)
|
||||
|
||||
if (bake_renders
|
||||
and remainder in bake_renders[0]["bakeRenderPath"]):
|
||||
rep.update({
|
||||
"fps": instance.get("fps"),
|
||||
"tags": ["review", "delete"]
|
||||
})
|
||||
# solve families with `preview` attributes
|
||||
self._solve_families(instance, True)
|
||||
representations.append(rep)
|
||||
|
||||
return representations
|
||||
|
||||
def _solve_families(self, instance, preview=False):
|
||||
|
|
|
|||
|
|
@ -107,6 +107,10 @@ class ValidateExpectedFiles(pyblish.api.InstancePlugin):
|
|||
explicitly and manually changed the frame list on the Deadline job.
|
||||
|
||||
"""
|
||||
# no frames in file name at all, eg 'renderCompositingMain.withLut.mov'
|
||||
if not frame_placeholder:
|
||||
return set([file_name_template])
|
||||
|
||||
real_expected_rendered = set()
|
||||
src_padding_exp = "%0{}d".format(len(frame_placeholder))
|
||||
for frames in frame_list:
|
||||
|
|
@ -130,14 +134,13 @@ class ValidateExpectedFiles(pyblish.api.InstancePlugin):
|
|||
|
||||
# There might be cases where clique was unable to collect
|
||||
# collections in `collect_frames` - thus we capture that case
|
||||
if frame is None:
|
||||
self.log.warning("Unable to detect frame from filename: "
|
||||
"{}".format(file_name))
|
||||
continue
|
||||
if frame is not None:
|
||||
frame_placeholder = "#" * len(frame)
|
||||
|
||||
frame_placeholder = "#" * len(frame)
|
||||
file_name_template = os.path.basename(
|
||||
file_name.replace(frame, frame_placeholder))
|
||||
file_name_template = os.path.basename(
|
||||
file_name.replace(frame, frame_placeholder))
|
||||
else:
|
||||
file_name_template = file_name
|
||||
break
|
||||
|
||||
return file_name_template, frame_placeholder
|
||||
|
|
|
|||
|
|
@ -23,8 +23,11 @@ class CollectUsername(pyblish.api.ContextPlugin):
|
|||
Expects "pype.club" user created on Ftrack and FTRACK_BOT_API_KEY env
|
||||
var set up.
|
||||
|
||||
Resets `context.data["user"]` to correctly populate `version.author` and
|
||||
`representation.context.username`
|
||||
|
||||
"""
|
||||
order = pyblish.api.CollectorOrder - 0.488
|
||||
order = pyblish.api.CollectorOrder + 0.0015
|
||||
label = "Collect ftrack username"
|
||||
hosts = ["webpublisher", "photoshop"]
|
||||
targets = ["remotepublish", "filespublish", "tvpaint_worker"]
|
||||
|
|
@ -65,3 +68,4 @@ class CollectUsername(pyblish.api.ContextPlugin):
|
|||
if '@' in burnin_name:
|
||||
burnin_name = burnin_name[:burnin_name.index('@')]
|
||||
os.environ["WEBPUBLISH_OPENPYPE_USERNAME"] = burnin_name
|
||||
context.data["user"] = burnin_name
|
||||
|
|
|
|||
|
|
@ -19,7 +19,6 @@ from openpype.lib import (
|
|||
|
||||
should_convert_for_ffmpeg,
|
||||
convert_for_ffmpeg,
|
||||
get_transcode_temp_directory,
|
||||
get_transcode_temp_directory
|
||||
)
|
||||
import speedcopy
|
||||
|
|
@ -972,16 +971,12 @@ class ExtractReview(pyblish.api.InstancePlugin):
|
|||
def get_letterbox_filters(
|
||||
self,
|
||||
letter_box_def,
|
||||
input_res_ratio,
|
||||
output_res_ratio,
|
||||
pixel_aspect,
|
||||
scale_factor_by_width,
|
||||
scale_factor_by_height
|
||||
output_width,
|
||||
output_height
|
||||
):
|
||||
output = []
|
||||
|
||||
ratio = letter_box_def["ratio"]
|
||||
state = letter_box_def["state"]
|
||||
fill_color = letter_box_def["fill_color"]
|
||||
f_red, f_green, f_blue, f_alpha = fill_color
|
||||
fill_color_hex = "{0:0>2X}{1:0>2X}{2:0>2X}".format(
|
||||
|
|
@ -997,75 +992,129 @@ class ExtractReview(pyblish.api.InstancePlugin):
|
|||
)
|
||||
line_color_alpha = float(l_alpha) / 255
|
||||
|
||||
if input_res_ratio == output_res_ratio:
|
||||
ratio /= pixel_aspect
|
||||
elif input_res_ratio < output_res_ratio:
|
||||
ratio /= scale_factor_by_width
|
||||
else:
|
||||
ratio /= scale_factor_by_height
|
||||
# test ratios and define if pillar or letter boxes
|
||||
output_ratio = float(output_width) / float(output_height)
|
||||
self.log.debug("Output ratio: {} LetterBox ratio: {}".format(
|
||||
output_ratio, ratio
|
||||
))
|
||||
pillar = output_ratio > ratio
|
||||
need_mask = format(output_ratio, ".3f") != format(ratio, ".3f")
|
||||
if not need_mask:
|
||||
return []
|
||||
|
||||
if state == "letterbox":
|
||||
if not pillar:
|
||||
if fill_color_alpha > 0:
|
||||
top_box = (
|
||||
"drawbox=0:0:iw:round((ih-(iw*(1/{})))/2):t=fill:c={}@{}"
|
||||
).format(ratio, fill_color_hex, fill_color_alpha)
|
||||
"drawbox=0:0:{width}"
|
||||
":round(({height}-({width}/{ratio}))/2)"
|
||||
":t=fill:c={color}@{alpha}"
|
||||
).format(
|
||||
width=output_width,
|
||||
height=output_height,
|
||||
ratio=ratio,
|
||||
color=fill_color_hex,
|
||||
alpha=fill_color_alpha
|
||||
)
|
||||
|
||||
bottom_box = (
|
||||
"drawbox=0:ih-round((ih-(iw*(1/{0})))/2)"
|
||||
":iw:round((ih-(iw*(1/{0})))/2):t=fill:c={1}@{2}"
|
||||
).format(ratio, fill_color_hex, fill_color_alpha)
|
||||
|
||||
"drawbox=0"
|
||||
":{height}-round(({height}-({width}/{ratio}))/2)"
|
||||
":{width}"
|
||||
":round(({height}-({width}/{ratio}))/2)"
|
||||
":t=fill:c={color}@{alpha}"
|
||||
).format(
|
||||
width=output_width,
|
||||
height=output_height,
|
||||
ratio=ratio,
|
||||
color=fill_color_hex,
|
||||
alpha=fill_color_alpha
|
||||
)
|
||||
output.extend([top_box, bottom_box])
|
||||
|
||||
if line_color_alpha > 0 and line_thickness > 0:
|
||||
top_line = (
|
||||
"drawbox=0:round((ih-(iw*(1/{0})))/2)-{1}:iw:{1}:"
|
||||
"t=fill:c={2}@{3}"
|
||||
"drawbox=0"
|
||||
":round(({height}-({width}/{ratio}))/2)-{l_thick}"
|
||||
":{width}:{l_thick}:t=fill:c={l_color}@{l_alpha}"
|
||||
).format(
|
||||
ratio, line_thickness, line_color_hex, line_color_alpha
|
||||
width=output_width,
|
||||
height=output_height,
|
||||
ratio=ratio,
|
||||
l_thick=line_thickness,
|
||||
l_color=line_color_hex,
|
||||
l_alpha=line_color_alpha
|
||||
)
|
||||
bottom_line = (
|
||||
"drawbox=0:ih-round((ih-(iw*(1/{})))/2)"
|
||||
":iw:{}:t=fill:c={}@{}"
|
||||
"drawbox=0"
|
||||
":{height}-round(({height}-({width}/{ratio}))/2)"
|
||||
":{width}:{l_thick}:t=fill:c={l_color}@{l_alpha}"
|
||||
).format(
|
||||
ratio, line_thickness, line_color_hex, line_color_alpha
|
||||
width=output_width,
|
||||
height=output_height,
|
||||
ratio=ratio,
|
||||
l_thick=line_thickness,
|
||||
l_color=line_color_hex,
|
||||
l_alpha=line_color_alpha
|
||||
)
|
||||
output.extend([top_line, bottom_line])
|
||||
|
||||
elif state == "pillar":
|
||||
else:
|
||||
if fill_color_alpha > 0:
|
||||
left_box = (
|
||||
"drawbox=0:0:round((iw-(ih*{}))/2):ih:t=fill:c={}@{}"
|
||||
).format(ratio, fill_color_hex, fill_color_alpha)
|
||||
"drawbox=0:0"
|
||||
":round(({width}-({height}*{ratio}))/2)"
|
||||
":{height}"
|
||||
":t=fill:c={color}@{alpha}"
|
||||
).format(
|
||||
width=output_width,
|
||||
height=output_height,
|
||||
ratio=ratio,
|
||||
color=fill_color_hex,
|
||||
alpha=fill_color_alpha
|
||||
)
|
||||
|
||||
right_box = (
|
||||
"drawbox=iw-round((iw-(ih*{0}))/2))"
|
||||
":0:round((iw-(ih*{0}))/2):ih:t=fill:c={1}@{2}"
|
||||
).format(ratio, fill_color_hex, fill_color_alpha)
|
||||
|
||||
"drawbox="
|
||||
"{width}-round(({width}-({height}*{ratio}))/2)"
|
||||
":0"
|
||||
":round(({width}-({height}*{ratio}))/2)"
|
||||
":{height}"
|
||||
":t=fill:c={color}@{alpha}"
|
||||
).format(
|
||||
width=output_width,
|
||||
height=output_height,
|
||||
ratio=ratio,
|
||||
color=fill_color_hex,
|
||||
alpha=fill_color_alpha
|
||||
)
|
||||
output.extend([left_box, right_box])
|
||||
|
||||
if line_color_alpha > 0 and line_thickness > 0:
|
||||
left_line = (
|
||||
"drawbox=round((iw-(ih*{}))/2):0:{}:ih:t=fill:c={}@{}"
|
||||
"drawbox=round(({width}-({height}*{ratio}))/2)"
|
||||
":0:{l_thick}:{height}:t=fill:c={l_color}@{l_alpha}"
|
||||
).format(
|
||||
ratio, line_thickness, line_color_hex, line_color_alpha
|
||||
width=output_width,
|
||||
height=output_height,
|
||||
ratio=ratio,
|
||||
l_thick=line_thickness,
|
||||
l_color=line_color_hex,
|
||||
l_alpha=line_color_alpha
|
||||
)
|
||||
|
||||
right_line = (
|
||||
"drawbox=iw-round((iw-(ih*{}))/2))"
|
||||
":0:{}:ih:t=fill:c={}@{}"
|
||||
"drawbox={width}-round(({width}-({height}*{ratio}))/2)"
|
||||
":0:{l_thick}:{height}:t=fill:c={l_color}@{l_alpha}"
|
||||
).format(
|
||||
ratio, line_thickness, line_color_hex, line_color_alpha
|
||||
width=output_width,
|
||||
height=output_height,
|
||||
ratio=ratio,
|
||||
l_thick=line_thickness,
|
||||
l_color=line_color_hex,
|
||||
l_alpha=line_color_alpha
|
||||
)
|
||||
|
||||
output.extend([left_line, right_line])
|
||||
|
||||
else:
|
||||
raise ValueError(
|
||||
"Letterbox state \"{}\" is not recognized".format(state)
|
||||
)
|
||||
|
||||
return output
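As an illustrative example (assumed values): for a 1920x1080 output, a 2.35 letterbox ratio and an opaque black fill, the top-box template above expands to an ffmpeg filter such as:

    drawbox=0:0:1920:round((1080-(1920/2.35))/2):t=fill:c=000000@1.0

i.e. a bar across the full output width whose height is half of the vertical space left over by the 2.35 frame.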
|
||||
|
||||
def rescaling_filters(self, temp_data, output_def, new_repre):
|
||||
|
|
@ -1079,6 +1128,20 @@ class ExtractReview(pyblish.api.InstancePlugin):
|
|||
"""
|
||||
filters = []
|
||||
|
||||
# the input video file may already be reformatted by an upstream process
|
||||
reformat_in_baking = bool("reformated" in new_repre["tags"])
|
||||
self.log.debug("reformat_in_baking: `{}`".format(reformat_in_baking))
|
||||
|
||||
# Get instance data
|
||||
pixel_aspect = temp_data["pixel_aspect"]
|
||||
|
||||
if reformat_in_baking:
|
||||
self.log.debug((
|
||||
"Using resolution from input. It is already "
|
||||
"reformated from upstream process"
|
||||
))
|
||||
pixel_aspect = 1
|
||||
|
||||
# NOTE Skipped using instance's resolution
|
||||
full_input_path_single_file = temp_data["full_input_path_single_file"]
|
||||
try:
|
||||
|
|
@@ -1141,12 +1204,6 @@ class ExtractReview(pyblish.api.InstancePlugin):
output_width = input_width
output_height = input_height

letter_box_def = output_def["letter_box"]
letter_box_enabled = letter_box_def["enabled"]

# Get instance data
pixel_aspect = temp_data["pixel_aspect"]

# Make sure input width and height are not odd numbers
input_width_is_odd = bool(input_width % 2 != 0)
input_height_is_odd = bool(input_height % 2 != 0)
@@ -1171,9 +1228,6 @@ class ExtractReview(pyblish.api.InstancePlugin):
self.log.debug("input_width: `{}`".format(input_width))
self.log.debug("input_height: `{}`".format(input_height))

reformat_in_baking = bool("reformated" in new_repre["tags"])
self.log.debug("reformat_in_baking: `{}`".format(reformat_in_baking))

# Use instance resolution if output definition has not set it.
if output_width is None or output_height is None:
output_width = temp_data["resolution_width"]
@@ -1185,17 +1239,6 @@ class ExtractReview(pyblish.api.InstancePlugin):
output_width = input_width
output_height = input_height

if reformat_in_baking:
self.log.debug((
"Using resolution from input. It is already "
"reformated from baking process"
))
output_width = input_width
output_height = input_height
pixel_aspect = 1
new_repre["resolutionWidth"] = input_width
new_repre["resolutionHeight"] = input_height

output_width = int(output_width)
output_height = int(output_height)
@@ -1219,6 +1262,9 @@ class ExtractReview(pyblish.api.InstancePlugin):
"Output resolution is {}x{}".format(output_width, output_height)
)

letter_box_def = output_def["letter_box"]
letter_box_enabled = letter_box_def["enabled"]

# Skip processing if resolution is same as input's and letterbox is
# not set
if (
@@ -1262,25 +1308,6 @@ class ExtractReview(pyblish.api.InstancePlugin):
"scale_factor_by_height: `{}`".format(scale_factor_by_height)
)

# letter_box
if letter_box_enabled:
filters.extend([
"scale={}x{}:flags=lanczos".format(
output_width, output_height
),
"setsar=1"
])
filters.extend(
self.get_letterbox_filters(
letter_box_def,
input_res_ratio,
output_res_ratio,
pixel_aspect,
scale_factor_by_width,
scale_factor_by_height
)
)

# scaling non-square pixels and 1920 width
if (
input_height != output_height
@@ -1319,6 +1346,16 @@ class ExtractReview(pyblish.api.InstancePlugin):
"setsar=1"
])

# letter_box
if letter_box_enabled:
filters.extend(
self.get_letterbox_filters(
letter_box_def,
output_width,
output_height
)
)

new_repre["resolutionWidth"] = output_width
new_repre["resolutionHeight"] = output_height

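The hunks above assemble ffmpeg `drawbox`/`scale` filter strings from named placeholders. A minimal, self-contained sketch (not part of this commit) of how the "pillar" template shown above can be filled in and joined into a single `-vf` argument follows; the resolution, target ratio and fill colour are assumed example values, not project settings.

```python
# Sketch only: fill the named-placeholder "pillar" drawbox template from the
# diff above and join it with the scale/setsar filters into one -vf argument.
# All concrete values below are assumed examples.
output_width = 1920
output_height = 1080
ratio = 4.0 / 3.0          # content narrower than 16:9, so pillar bars apply
fill_color_hex = "black"   # assumed colour name understood by ffmpeg
fill_color_alpha = 1.0

left_box = (
    "drawbox=0:0"
    ":round(({width}-({height}*{ratio}))/2)"
    ":{height}"
    ":t=fill:c={color}@{alpha}"
).format(
    width=output_width,
    height=output_height,
    ratio=ratio,
    color=fill_color_hex,
    alpha=fill_color_alpha
)

filters = [
    "scale={}x{}:flags=lanczos".format(output_width, output_height),
    "setsar=1",
    left_box,
]
vf_arg = ",".join(filters)
# Roughly what would be handed to ffmpeg, e.g.:
#   ffmpeg -i input.mov -vf "<vf_arg>" output.mov
print(vf_arg)
```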
@@ -4,13 +4,15 @@ import sys
import logging

# Pipeline imports
from avalon import api, io, pipeline
from avalon import api, io
import avalon.fusion

# Config imports
import openpype.lib as pype
import openpype.hosts.fusion.lib as fusion_lib

from openpype.lib.avalon_context import get_workdir_from_session

log = logging.getLogger("Update Slap Comp")

self = sys.modules[__name__]
@@ -44,16 +46,6 @@ def _format_version_folder(folder):
return version_folder


def _get_work_folder(session):
"""Convenience function to get the work folder path of the current asset"""

# Get new filename, create path based on asset and work template
template_work = self._project["config"]["template"]["work"]
work_path = pipeline._format_work_template(template_work, session)

return os.path.normpath(work_path)


def _get_fusion_instance():
fusion = getattr(sys.modules["__main__"], "fusion", None)
if fusion is None:
@@ -72,7 +64,7 @@ def _format_filepath(session):
asset = session["AVALON_ASSET"]

# Save updated slap comp
work_path = _get_work_folder(session)
work_path = get_workdir_from_session(session)
walk_to_dir = os.path.join(work_path, "scenes", "slapcomp")
slapcomp_dir = os.path.abspath(walk_to_dir)
@@ -103,7 +95,7 @@ def _update_savers(comp, session):
None
"""

new_work = _get_work_folder(session)
new_work = get_workdir_from_session(session)
renders = os.path.join(new_work, "renders")
version_folder = _format_version_folder(renders)
renders_version = os.path.join(renders, version_folder)

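The two hunks above swap the script's local `_get_work_folder` helper for `get_workdir_from_session`. A rough usage sketch follows; it assumes a Session-like mapping with the usual Avalon keys and a configured OpenPype/Avalon environment, and every value shown is made up for illustration.

```python
# Rough sketch only: how the replacement call is used by this script. Requires
# a configured OpenPype/Avalon environment; the session values are assumed.
from openpype.lib.avalon_context import get_workdir_from_session

session = {
    "AVALON_PROJECT": "demo_project",   # assumed project name
    "AVALON_ASSET": "sh010",            # assumed asset name
    "AVALON_TASK": "compositing",       # assumed task name
    "AVALON_APP": "fusion",
}
work_path = get_workdir_from_session(session)
# e.g. the task work directory resolved from the project's anatomy templates
print(work_path)
```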
@@ -6,6 +6,7 @@ import platform
import json
import opentimelineio_contrib.adapters.ffmpeg_burnins as ffmpeg_burnins
import openpype.lib
from openpype.lib.vendor_bin_utils import get_fps


ffmpeg_path = openpype.lib.get_ffmpeg_tool_path("ffmpeg")
@@ -50,25 +51,6 @@ def _get_ffprobe_data(source):
return json.loads(out)


def get_fps(str_value):
if str_value == "0/0":
print("WARNING: Source has \"r_frame_rate\" value set to \"0/0\".")
return "Unknown"

items = str_value.split("/")
if len(items) == 1:
fps = float(items[0])

elif len(items) == 2:
fps = float(items[0]) / float(items[1])

# Check if fps is integer or float number
if int(fps) == fps:
fps = int(fps)

return str(fps)


def _prores_codec_args(stream_data, source_ffmpeg_cmd):
output = []

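The `get_fps` helper removed here now comes from `openpype.lib.vendor_bin_utils` (see the new import above). For reference, a small self-contained sketch of the same parsing of ffprobe's `r_frame_rate` strings; the warning print is omitted, an explicit error branch is added for inputs the removed code left undefined, and the sample values are assumed examples.

```python
# Self-contained sketch mirroring the removed helper (it now lives in
# openpype.lib.vendor_bin_utils.get_fps).
def get_fps(str_value):
    if str_value == "0/0":
        return "Unknown"

    items = str_value.split("/")
    if len(items) == 1:
        fps = float(items[0])
    elif len(items) == 2:
        fps = float(items[0]) / float(items[1])
    else:
        raise ValueError("Unexpected frame rate value: {}".format(str_value))

    # Collapse whole numbers so "25.0" is reported as "25"
    if int(fps) == fps:
        fps = int(fps)
    return str(fps)


print(get_fps("25"))          # "25"
print(get_fps("24000/1001"))  # "23.976..." (full float repr)
print(get_fps("0/0"))         # "Unknown"
```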
@@ -107,7 +107,6 @@
"letter_box": {
"enabled": false,
"ratio": 0.0,
"state": "letterbox",
"fill_color": [
0,
0,
@@ -366,19 +366,6 @@
"minimum": 0,
"maximum": 10000
},
{
"key": "state",
"label": "Type",
"type": "enum",
"enum_items": [
{
"letterbox": "Letterbox"
},
{
"pillar": "Pillar"
}
]
},
{
"type": "color",
"label": "Fill Color",
@@ -1,10 +0,0 @@

from .app import (
show,
cli
)

__all__ = [
"show",
"cli",
]
@@ -1,5 +0,0 @@
from . import cli

if __name__ == '__main__':
import sys
sys.exit(cli(sys.argv[1:]))
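The two deleted modules above are the removed package's entry points: an `__init__` re-exporting `show`/`cli` and a `__main__` forwarding `sys.argv`. A minimal, runnable sketch of that wiring pattern is shown below; the body is a stub and the printed message is purely illustrative, since the real tool set `AVALON_PROJECT`/`AVALON_ASSET` and opened the ftrack asset creator window instead.

```python
# Minimal sketch of the entry-point pattern being deleted above: an
# argparse-based cli() plus the __main__ forwarding of sys.argv.
import argparse
import sys


def cli(args):
    parser = argparse.ArgumentParser()
    parser.add_argument("project")
    parser.add_argument("asset")
    options = parser.parse_args(args)
    # Stub behaviour only; the removed tool updated the Avalon session and
    # called show() here.
    print("project={} asset={}".format(options.project, options.asset))
    return 0


if __name__ == "__main__":
    sys.exit(cli(sys.argv[1:]))
```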
@ -1,652 +0,0 @@
|
|||
import os
|
||||
import sys
|
||||
from subprocess import Popen
|
||||
|
||||
import ftrack_api
|
||||
from Qt import QtWidgets, QtCore
|
||||
from openpype.api import get_current_project_settings
|
||||
from openpype.tools.utils.lib import qt_app_context
|
||||
from avalon import io, api, style, schema
|
||||
from . import widget, model
|
||||
|
||||
module = sys.modules[__name__]
|
||||
module.window = None
|
||||
|
||||
|
||||
class Window(QtWidgets.QDialog):
|
||||
"""Asset creator interface
|
||||
|
||||
"""
|
||||
|
||||
def __init__(self, parent=None, context=None):
|
||||
super(Window, self).__init__(parent)
|
||||
self.context = context
|
||||
project_name = io.active_project()
|
||||
self.setWindowTitle("Asset creator ({0})".format(project_name))
|
||||
self.setFocusPolicy(QtCore.Qt.StrongFocus)
|
||||
self.setAttribute(QtCore.Qt.WA_DeleteOnClose)
|
||||
|
||||
# Validators
|
||||
self.valid_parent = False
|
||||
|
||||
self.session = None
|
||||
|
||||
# assets widget
|
||||
assets_widget = QtWidgets.QWidget()
|
||||
assets_widget.setContentsMargins(0, 0, 0, 0)
|
||||
assets_layout = QtWidgets.QVBoxLayout(assets_widget)
|
||||
assets = widget.AssetWidget()
|
||||
assets.view.setSelectionMode(assets.view.ExtendedSelection)
|
||||
assets_layout.addWidget(assets)
|
||||
|
||||
# Outlink
|
||||
label_outlink = QtWidgets.QLabel("Outlink:")
|
||||
input_outlink = QtWidgets.QLineEdit()
|
||||
input_outlink.setReadOnly(True)
|
||||
input_outlink.setStyleSheet("background-color: #333333;")
|
||||
checkbox_outlink = QtWidgets.QCheckBox("Use outlink")
|
||||
# Parent
|
||||
label_parent = QtWidgets.QLabel("*Parent:")
|
||||
input_parent = QtWidgets.QLineEdit()
|
||||
input_parent.setReadOnly(True)
|
||||
input_parent.setStyleSheet("background-color: #333333;")
|
||||
|
||||
# Name
|
||||
label_name = QtWidgets.QLabel("*Name:")
|
||||
input_name = QtWidgets.QLineEdit()
|
||||
input_name.setPlaceholderText("<asset name>")
|
||||
|
||||
# Asset Build
|
||||
label_assetbuild = QtWidgets.QLabel("Asset Build:")
|
||||
combo_assetbuilt = QtWidgets.QComboBox()
|
||||
|
||||
# Task template
|
||||
label_task_template = QtWidgets.QLabel("Task template:")
|
||||
combo_task_template = QtWidgets.QComboBox()
|
||||
|
||||
# Info widget
|
||||
info_widget = QtWidgets.QWidget()
|
||||
info_widget.setContentsMargins(10, 10, 10, 10)
|
||||
info_layout = QtWidgets.QVBoxLayout(info_widget)
|
||||
|
||||
# Inputs widget
|
||||
inputs_widget = QtWidgets.QWidget()
|
||||
inputs_widget.setContentsMargins(0, 0, 0, 0)
|
||||
|
||||
inputs_layout = QtWidgets.QFormLayout(inputs_widget)
|
||||
inputs_layout.addRow(label_outlink, input_outlink)
|
||||
inputs_layout.addRow(None, checkbox_outlink)
|
||||
inputs_layout.addRow(label_parent, input_parent)
|
||||
inputs_layout.addRow(label_name, input_name)
|
||||
inputs_layout.addRow(label_assetbuild, combo_assetbuilt)
|
||||
inputs_layout.addRow(label_task_template, combo_task_template)
|
||||
|
||||
# Add button
|
||||
btns_widget = QtWidgets.QWidget()
|
||||
btns_widget.setContentsMargins(0, 0, 0, 0)
|
||||
btn_layout = QtWidgets.QHBoxLayout(btns_widget)
|
||||
btn_create_asset = QtWidgets.QPushButton("Create asset")
|
||||
btn_create_asset.setToolTip(
|
||||
"Creates all necessary components for asset"
|
||||
)
|
||||
checkbox_app = None
|
||||
if self.context is not None:
|
||||
checkbox_app = QtWidgets.QCheckBox("Open {}".format(
|
||||
self.context.capitalize())
|
||||
)
|
||||
btn_layout.addWidget(checkbox_app)
|
||||
btn_layout.addWidget(btn_create_asset)
|
||||
|
||||
task_view = QtWidgets.QTreeView()
|
||||
task_view.setIndentation(0)
|
||||
task_model = model.TasksModel()
|
||||
task_view.setModel(task_model)
|
||||
|
||||
info_layout.addWidget(inputs_widget)
|
||||
info_layout.addWidget(task_view)
|
||||
info_layout.addWidget(btns_widget)
|
||||
|
||||
# Body
|
||||
body = QtWidgets.QSplitter()
|
||||
body.setContentsMargins(0, 0, 0, 0)
|
||||
body.setSizePolicy(QtWidgets.QSizePolicy.Expanding,
|
||||
QtWidgets.QSizePolicy.Expanding)
|
||||
body.setOrientation(QtCore.Qt.Horizontal)
|
||||
body.addWidget(assets_widget)
|
||||
body.addWidget(info_widget)
|
||||
body.setStretchFactor(0, 100)
|
||||
body.setStretchFactor(1, 150)
|
||||
|
||||
# statusbar
|
||||
message = QtWidgets.QLabel()
|
||||
message.setFixedHeight(20)
|
||||
|
||||
statusbar = QtWidgets.QWidget()
|
||||
layout = QtWidgets.QHBoxLayout(statusbar)
|
||||
layout.setContentsMargins(0, 0, 0, 0)
|
||||
layout.addWidget(message)
|
||||
|
||||
layout = QtWidgets.QVBoxLayout(self)
|
||||
layout.addWidget(body)
|
||||
layout.addWidget(statusbar)
|
||||
|
||||
self.data = {
|
||||
"label": {
|
||||
"message": message,
|
||||
},
|
||||
"view": {
|
||||
"tasks": task_view
|
||||
},
|
||||
"model": {
|
||||
"assets": assets,
|
||||
"tasks": task_model
|
||||
},
|
||||
"inputs": {
|
||||
"outlink": input_outlink,
|
||||
"outlink_cb": checkbox_outlink,
|
||||
"parent": input_parent,
|
||||
"name": input_name,
|
||||
"assetbuild": combo_assetbuilt,
|
||||
"tasktemplate": combo_task_template,
|
||||
"open_app": checkbox_app
|
||||
},
|
||||
"buttons": {
|
||||
"create_asset": btn_create_asset
|
||||
}
|
||||
}
|
||||
|
||||
# signals
|
||||
btn_create_asset.clicked.connect(self.create_asset)
|
||||
assets.selection_changed.connect(self.on_asset_changed)
|
||||
input_name.textChanged.connect(self.on_asset_name_change)
|
||||
checkbox_outlink.toggled.connect(self.on_outlink_checkbox_change)
|
||||
combo_task_template.currentTextChanged.connect(
|
||||
self.on_task_template_changed
|
||||
)
|
||||
if self.context is not None:
|
||||
checkbox_app.toggled.connect(self.on_app_checkbox_change)
|
||||
# on start
|
||||
self.on_start()
|
||||
|
||||
self.resize(600, 500)
|
||||
|
||||
self.echo("Connected to project: {0}".format(project_name))
|
||||
|
||||
def open_app(self):
|
||||
if self.context == 'maya':
|
||||
Popen("maya")
|
||||
else:
|
||||
message = QtWidgets.QMessageBox(self)
|
||||
message.setWindowTitle("App is not set")
|
||||
message.setIcon(QtWidgets.QMessageBox.Critical)
|
||||
message.show()
|
||||
|
||||
def on_start(self):
|
||||
project_name = io.Session['AVALON_PROJECT']
|
||||
project_query = 'Project where full_name is "{}"'.format(project_name)
|
||||
if self.session is None:
|
||||
session = ftrack_api.Session()
|
||||
self.session = session
|
||||
else:
|
||||
session = self.session
|
||||
ft_project = session.query(project_query).one()
|
||||
schema_name = ft_project['project_schema']['name']
|
||||
# Load config
|
||||
schemas_items = get_current_project_settings().get('ftrack', {}).get(
|
||||
'project_schemas', {}
|
||||
)
|
||||
# Get info if it is silo project
|
||||
self.silos = io.distinct("silo")
|
||||
if self.silos and None in self.silos:
|
||||
self.silos = None
|
||||
|
||||
key = "default"
|
||||
if schema_name in schemas_items:
|
||||
key = schema_name
|
||||
|
||||
self.config_data = schemas_items[key]
|
||||
|
||||
# set outlink
|
||||
input_outlink = self.data['inputs']['outlink']
|
||||
checkbox_outlink = self.data['inputs']['outlink_cb']
|
||||
outlink_text = io.Session.get('AVALON_ASSET', '')
|
||||
checkbox_outlink.setChecked(True)
|
||||
if outlink_text == '':
|
||||
outlink_text = '< No context >'
|
||||
checkbox_outlink.setChecked(False)
|
||||
checkbox_outlink.hide()
|
||||
input_outlink.setText(outlink_text)
|
||||
|
||||
# load asset build types
|
||||
self.load_assetbuild_types()
|
||||
|
||||
# Load task templates
|
||||
self.load_task_templates()
|
||||
self.data["model"]["assets"].refresh()
|
||||
self.on_asset_changed()
|
||||
|
||||
def create_asset(self):
|
||||
name_input = self.data['inputs']['name']
|
||||
name = name_input.text()
|
||||
test_name = name.replace(' ', '')
|
||||
error_message = None
|
||||
message = QtWidgets.QMessageBox(self)
|
||||
message.setWindowTitle("Some errors have occurred")
|
||||
message.setIcon(QtWidgets.QMessageBox.Critical)
|
||||
# TODO: show error messages on any error
|
||||
if self.valid_parent is not True and test_name == '':
|
||||
error_message = "Name is not set and Parent is not selected"
|
||||
elif self.valid_parent is not True:
|
||||
error_message = "Parent is not selected"
|
||||
elif test_name == '':
|
||||
error_message = "Name is not set"
|
||||
|
||||
if error_message is not None:
|
||||
message.setText(error_message)
|
||||
message.show()
|
||||
return
|
||||
|
||||
test_name_exists = io.find({
|
||||
'type': 'asset',
|
||||
'name': name
|
||||
})
|
||||
existing_assets = [x for x in test_name_exists]
|
||||
if len(existing_assets) > 0:
|
||||
message.setText("Entered Asset name is occupied")
|
||||
message.show()
|
||||
return
|
||||
|
||||
checkbox_app = self.data['inputs']['open_app']
|
||||
if checkbox_app is not None and checkbox_app.isChecked() is True:
|
||||
task_view = self.data["view"]["tasks"]
|
||||
task_model = self.data["model"]["tasks"]
|
||||
try:
|
||||
index = task_view.selectedIndexes()[0]
|
||||
task_name = task_model.itemData(index)[0]
|
||||
except Exception:
|
||||
message.setText("Please select task")
|
||||
message.show()
|
||||
return
|
||||
|
||||
# Get ftrack session
|
||||
if self.session is None:
|
||||
session = ftrack_api.Session()
|
||||
self.session = session
|
||||
else:
|
||||
session = self.session
|
||||
|
||||
# Get Ftrack project entity
|
||||
project_name = io.Session['AVALON_PROJECT']
|
||||
project_query = 'Project where full_name is "{}"'.format(project_name)
|
||||
try:
|
||||
ft_project = session.query(project_query).one()
|
||||
except Exception:
|
||||
message.setText("Ftrack project was not found")
|
||||
message.show()
|
||||
return
|
||||
|
||||
# Get Ftrack entity of parent
|
||||
ft_parent = None
|
||||
assets_model = self.data["model"]["assets"]
|
||||
selected = assets_model.get_selected_assets()
|
||||
parent = io.find_one({"_id": selected[0], "type": "asset"})
|
||||
asset_id = parent.get('data', {}).get('ftrackId', None)
|
||||
asset_entity_type = parent.get('data', {}).get('entityType', None)
|
||||
asset_query = '{} where id is "{}"'
|
||||
if asset_id is not None and asset_entity_type is not None:
|
||||
try:
|
||||
ft_parent = session.query(asset_query.format(
|
||||
asset_entity_type, asset_id)
|
||||
).one()
|
||||
except Exception:
|
||||
ft_parent = None
|
||||
|
||||
if ft_parent is None:
|
||||
ft_parent = self.get_ftrack_asset(parent, ft_project)
|
||||
|
||||
if ft_parent is None:
|
||||
message.setText("Parent's Ftrack entity was not found")
|
||||
message.show()
|
||||
return
|
||||
|
||||
asset_build_combo = self.data['inputs']['assetbuild']
|
||||
asset_type_name = asset_build_combo.currentText()
|
||||
asset_type_query = 'Type where name is "{}"'.format(asset_type_name)
|
||||
try:
|
||||
asset_type = session.query(asset_type_query).one()
|
||||
except Exception:
|
||||
message.setText("Selected Asset Build type does not exists")
|
||||
message.show()
|
||||
return
|
||||
|
||||
for children in ft_parent['children']:
|
||||
if children['name'] == name:
|
||||
message.setText("Entered Asset name is occupied")
|
||||
message.show()
|
||||
return
|
||||
|
||||
task_template_combo = self.data['inputs']['tasktemplate']
|
||||
task_template = task_template_combo.currentText()
|
||||
tasks = []
|
||||
for template in self.config_data['task_templates']:
|
||||
if template['name'] == task_template:
|
||||
tasks = template['task_types']
|
||||
break
|
||||
|
||||
available_task_types = []
|
||||
task_types = ft_project['project_schema']['_task_type_schema']
|
||||
for task_type in task_types['types']:
|
||||
available_task_types.append(task_type['name'])
|
||||
|
||||
not_possible_tasks = []
|
||||
for task in tasks:
|
||||
if task not in available_task_types:
|
||||
not_possible_tasks.append(task)
|
||||
|
||||
if len(not_possible_tasks) != 0:
|
||||
message.setText((
|
||||
"These Task types weren't found"
|
||||
" in Ftrack project schema:\n{}").format(
|
||||
', '.join(not_possible_tasks))
|
||||
)
|
||||
message.show()
|
||||
return
|
||||
|
||||
# Create asset build
|
||||
asset_build_data = {
|
||||
'name': name,
|
||||
'project_id': ft_project['id'],
|
||||
'parent_id': ft_parent['id'],
|
||||
'type': asset_type
|
||||
}
|
||||
|
||||
new_entity = session.create('AssetBuild', asset_build_data)
|
||||
|
||||
task_data = {
|
||||
'project_id': ft_project['id'],
|
||||
'parent_id': new_entity['id']
|
||||
}
|
||||
|
||||
for task in tasks:
|
||||
type = session.query('Type where name is "{}"'.format(task)).one()
|
||||
|
||||
task_data['type_id'] = type['id']
|
||||
task_data['name'] = task
|
||||
session.create('Task', task_data)
|
||||
|
||||
av_project = io.find_one({'type': 'project'})
|
||||
|
||||
hiearchy_items = []
|
||||
hiearchy_items.extend(self.get_avalon_parent(parent))
|
||||
hiearchy_items.append(parent['name'])
|
||||
|
||||
hierarchy = os.path.sep.join(hiearchy_items)
|
||||
new_asset_data = {
|
||||
'ftrackId': new_entity['id'],
|
||||
'entityType': new_entity.entity_type,
|
||||
'visualParent': parent['_id'],
|
||||
'tasks': tasks,
|
||||
'parents': hiearchy_items,
|
||||
'hierarchy': hierarchy
|
||||
}
|
||||
new_asset_info = {
|
||||
'parent': av_project['_id'],
|
||||
'name': name,
|
||||
'schema': "openpype:asset-3.0",
|
||||
'type': 'asset',
|
||||
'data': new_asset_data
|
||||
}
|
||||
|
||||
# Backwards compatibility (add silo from parent if is silo project)
|
||||
if self.silos:
|
||||
new_asset_info["silo"] = parent["silo"]
|
||||
|
||||
try:
|
||||
schema.validate(new_asset_info)
|
||||
except Exception:
|
||||
message.setText((
|
||||
'Asset information are not valid'
|
||||
' to create asset in avalon database'
|
||||
))
|
||||
message.show()
|
||||
session.rollback()
|
||||
return
|
||||
io.insert_one(new_asset_info)
|
||||
session.commit()
|
||||
|
||||
outlink_cb = self.data['inputs']['outlink_cb']
|
||||
if outlink_cb.isChecked() is True:
|
||||
outlink_input = self.data['inputs']['outlink']
|
||||
outlink_name = outlink_input.text()
|
||||
outlink_asset = io.find_one({
|
||||
'type': 'asset',
|
||||
'name': outlink_name
|
||||
})
|
||||
outlink_ft_id = outlink_asset.get('data', {}).get('ftrackId', None)
|
||||
outlink_entity_type = outlink_asset.get(
|
||||
'data', {}
|
||||
).get('entityType', None)
|
||||
if outlink_ft_id is not None and outlink_entity_type is not None:
|
||||
try:
|
||||
outlink_entity = session.query(asset_query.format()).one()
|
||||
except Exception:
|
||||
outlink_entity = None
|
||||
|
||||
if outlink_entity is None:
|
||||
outlink_entity = self.get_ftrack_asset(
|
||||
outlink_asset, ft_project
|
||||
)
|
||||
|
||||
if outlink_entity is None:
|
||||
message.setText("Outlink's Ftrack entity was not found")
|
||||
message.show()
|
||||
return
|
||||
|
||||
link_data = {
|
||||
'from_id': new_entity['id'],
|
||||
'to_id': outlink_entity['id']
|
||||
}
|
||||
session.create('TypedContextLink', link_data)
|
||||
session.commit()
|
||||
|
||||
if checkbox_app is not None and checkbox_app.isChecked() is True:
|
||||
origin_asset = api.Session.get('AVALON_ASSET', None)
|
||||
origin_task = api.Session.get('AVALON_TASK', None)
|
||||
asset_name = name
|
||||
task_view = self.data["view"]["tasks"]
|
||||
task_model = self.data["model"]["tasks"]
|
||||
try:
|
||||
index = task_view.selectedIndexes()[0]
|
||||
except Exception:
|
||||
message.setText("No task is selected. App won't be launched")
|
||||
message.show()
|
||||
return
|
||||
task_name = task_model.itemData(index)[0]
|
||||
try:
|
||||
api.update_current_task(task=task_name, asset=asset_name)
|
||||
self.open_app()
|
||||
|
||||
finally:
|
||||
if origin_task is not None and origin_asset is not None:
|
||||
api.update_current_task(
|
||||
task=origin_task, asset=origin_asset
|
||||
)
|
||||
|
||||
message.setWindowTitle("Asset Created")
|
||||
message.setText("Asset Created successfully")
|
||||
message.setIcon(QtWidgets.QMessageBox.Information)
|
||||
message.show()
|
||||
|
||||
def get_ftrack_asset(self, asset, ft_project):
|
||||
parenthood = []
|
||||
parenthood.extend(self.get_avalon_parent(asset))
|
||||
parenthood.append(asset['name'])
|
||||
parenthood = list(reversed(parenthood))
|
||||
output_entity = None
|
||||
ft_entity = ft_project
|
||||
index = len(parenthood) - 1
|
||||
while True:
|
||||
name = parenthood[index]
|
||||
found = False
|
||||
for children in ft_entity['children']:
|
||||
if children['name'] == name:
|
||||
ft_entity = children
|
||||
found = True
|
||||
break
|
||||
if found is False:
|
||||
return None
|
||||
if index == 0:
|
||||
output_entity = ft_entity
|
||||
break
|
||||
index -= 1
|
||||
|
||||
return output_entity
|
||||
|
||||
def get_avalon_parent(self, entity):
|
||||
parent_id = entity['data']['visualParent']
|
||||
parents = []
|
||||
if parent_id is not None:
|
||||
parent = io.find_one({'_id': parent_id})
|
||||
parents.extend(self.get_avalon_parent(parent))
|
||||
parents.append(parent['name'])
|
||||
return parents
|
||||
|
||||
def echo(self, message):
|
||||
widget = self.data["label"]["message"]
|
||||
widget.setText(str(message))
|
||||
|
||||
QtCore.QTimer.singleShot(5000, lambda: widget.setText(""))
|
||||
|
||||
print(message)
|
||||
|
||||
def load_task_templates(self):
|
||||
templates = self.config_data.get('task_templates', [])
|
||||
all_names = []
|
||||
for template in templates:
|
||||
all_names.append(template['name'])
|
||||
|
||||
tt_combobox = self.data['inputs']['tasktemplate']
|
||||
tt_combobox.clear()
|
||||
tt_combobox.addItems(all_names)
|
||||
|
||||
def load_assetbuild_types(self):
|
||||
types = []
|
||||
schemas = self.config_data.get('schemas', [])
|
||||
for _schema in schemas:
|
||||
if _schema['object_type'] == 'Asset Build':
|
||||
types = _schema['task_types']
|
||||
break
|
||||
ab_combobox = self.data['inputs']['assetbuild']
|
||||
ab_combobox.clear()
|
||||
ab_combobox.addItems(types)
|
||||
|
||||
def on_app_checkbox_change(self):
|
||||
task_model = self.data['model']['tasks']
|
||||
app_checkbox = self.data['inputs']['open_app']
|
||||
if app_checkbox.isChecked() is True:
|
||||
task_model.selectable = True
|
||||
else:
|
||||
task_model.selectable = False
|
||||
|
||||
def on_outlink_checkbox_change(self):
|
||||
checkbox_outlink = self.data['inputs']['outlink_cb']
|
||||
outlink_input = self.data['inputs']['outlink']
|
||||
if checkbox_outlink.isChecked() is True:
|
||||
outlink_text = io.Session['AVALON_ASSET']
|
||||
else:
|
||||
outlink_text = '< Outlinks won\'t be set >'
|
||||
|
||||
outlink_input.setText(outlink_text)
|
||||
|
||||
def on_task_template_changed(self):
|
||||
combobox = self.data['inputs']['tasktemplate']
|
||||
task_model = self.data['model']['tasks']
|
||||
name = combobox.currentText()
|
||||
tasks = []
|
||||
for template in self.config_data['task_templates']:
|
||||
if template['name'] == name:
|
||||
tasks = template['task_types']
|
||||
break
|
||||
task_model.set_tasks(tasks)
|
||||
|
||||
def on_asset_changed(self):
|
||||
"""Callback on asset selection changed
|
||||
|
||||
This updates the task view.
|
||||
|
||||
"""
|
||||
assets_model = self.data["model"]["assets"]
|
||||
parent_input = self.data['inputs']['parent']
|
||||
selected = assets_model.get_selected_assets()
|
||||
|
||||
self.valid_parent = False
|
||||
if len(selected) > 1:
|
||||
parent_input.setText('< Please select only one asset! >')
|
||||
elif len(selected) == 1:
|
||||
if isinstance(selected[0], io.ObjectId):
|
||||
self.valid_parent = True
|
||||
asset = io.find_one({"_id": selected[0], "type": "asset"})
|
||||
parent_input.setText(asset['name'])
|
||||
else:
|
||||
parent_input.setText('< Selected invalid parent(silo) >')
|
||||
else:
|
||||
parent_input.setText('< Nothing is selected >')
|
||||
|
||||
self.creatability_check()
|
||||
|
||||
def on_asset_name_change(self):
|
||||
self.creatability_check()
|
||||
|
||||
def creatability_check(self):
|
||||
name_input = self.data['inputs']['name']
|
||||
name = str(name_input.text()).strip()
|
||||
creatable = False
|
||||
if name and self.valid_parent:
|
||||
creatable = True
|
||||
|
||||
self.data["buttons"]["create_asset"].setEnabled(creatable)
|
||||
|
||||
|
||||
|
||||
def show(parent=None, debug=False, context=None):
|
||||
"""Display Loader GUI
|
||||
|
||||
Arguments:
|
||||
debug (bool, optional): Run loader in debug-mode,
|
||||
defaults to False
|
||||
|
||||
"""
|
||||
|
||||
try:
|
||||
module.window.close()
|
||||
del module.window
|
||||
except (RuntimeError, AttributeError):
|
||||
pass
|
||||
|
||||
if debug is True:
|
||||
io.install()
|
||||
|
||||
with qt_app_context():
|
||||
window = Window(parent, context)
|
||||
window.setStyleSheet(style.load_stylesheet())
|
||||
window.show()
|
||||
|
||||
module.window = window
|
||||
|
||||
|
||||
def cli(args):
|
||||
import argparse
|
||||
parser = argparse.ArgumentParser()
|
||||
parser.add_argument("project")
|
||||
parser.add_argument("asset")
|
||||
|
||||
args = parser.parse_args(args)
|
||||
project = args.project
|
||||
asset = args.asset
|
||||
io.install()
|
||||
|
||||
api.Session["AVALON_PROJECT"] = project
|
||||
if asset != '':
|
||||
api.Session["AVALON_ASSET"] = asset
|
||||
|
||||
show()
|
||||
|
|
@ -1,310 +0,0 @@
|
|||
import re
|
||||
import logging
|
||||
|
||||
from Qt import QtCore, QtWidgets
|
||||
from avalon.vendor import qtawesome
|
||||
from avalon import io
|
||||
from avalon import style
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class Item(dict):
|
||||
"""An item that can be represented in a tree view using `TreeModel`.
|
||||
|
||||
The item can store data just like a regular dictionary.
|
||||
|
||||
>>> data = {"name": "John", "score": 10}
|
||||
>>> item = Item(data)
|
||||
>>> assert item["name"] == "John"
|
||||
|
||||
"""
|
||||
|
||||
def __init__(self, data=None):
|
||||
super(Item, self).__init__()
|
||||
|
||||
self._children = list()
|
||||
self._parent = None
|
||||
|
||||
if data is not None:
|
||||
assert isinstance(data, dict)
|
||||
self.update(data)
|
||||
|
||||
def childCount(self):
|
||||
return len(self._children)
|
||||
|
||||
def child(self, row):
|
||||
|
||||
if row >= len(self._children):
|
||||
log.warning("Invalid row as child: {0}".format(row))
|
||||
return
|
||||
|
||||
return self._children[row]
|
||||
|
||||
def children(self):
|
||||
return self._children
|
||||
|
||||
def parent(self):
|
||||
return self._parent
|
||||
|
||||
def row(self):
|
||||
"""
|
||||
Returns:
|
||||
int: Index of this item under parent"""
|
||||
if self._parent is not None:
|
||||
siblings = self.parent().children()
|
||||
return siblings.index(self)
|
||||
|
||||
def add_child(self, child):
|
||||
"""Add a child to this item"""
|
||||
child._parent = self
|
||||
self._children.append(child)
|
||||
|
||||
|
||||
class TreeModel(QtCore.QAbstractItemModel):
|
||||
|
||||
Columns = list()
|
||||
ItemRole = QtCore.Qt.UserRole + 1
|
||||
|
||||
def __init__(self, parent=None):
|
||||
super(TreeModel, self).__init__(parent)
|
||||
self._root_item = Item()
|
||||
|
||||
def rowCount(self, parent):
|
||||
if parent.isValid():
|
||||
item = parent.internalPointer()
|
||||
else:
|
||||
item = self._root_item
|
||||
|
||||
return item.childCount()
|
||||
|
||||
def columnCount(self, parent):
|
||||
return len(self.Columns)
|
||||
|
||||
def data(self, index, role):
|
||||
|
||||
if not index.isValid():
|
||||
return None
|
||||
|
||||
if role == QtCore.Qt.DisplayRole or role == QtCore.Qt.EditRole:
|
||||
|
||||
item = index.internalPointer()
|
||||
column = index.column()
|
||||
|
||||
key = self.Columns[column]
|
||||
return item.get(key, None)
|
||||
|
||||
if role == self.ItemRole:
|
||||
return index.internalPointer()
|
||||
|
||||
def setData(self, index, value, role=QtCore.Qt.EditRole):
|
||||
"""Change the data on the items.
|
||||
|
||||
Returns:
|
||||
bool: Whether the edit was successful
|
||||
"""
|
||||
|
||||
if index.isValid():
|
||||
if role == QtCore.Qt.EditRole:
|
||||
|
||||
item = index.internalPointer()
|
||||
column = index.column()
|
||||
key = self.Columns[column]
|
||||
item[key] = value
|
||||
|
||||
# passing `list()` for PyQt5 (see PYSIDE-462)
|
||||
self.dataChanged.emit(index, index, list())
|
||||
|
||||
# must return true if successful
|
||||
return True
|
||||
|
||||
return False
|
||||
|
||||
def setColumns(self, keys):
|
||||
assert isinstance(keys, (list, tuple))
|
||||
self.Columns = keys
|
||||
|
||||
def headerData(self, section, orientation, role):
|
||||
|
||||
if role == QtCore.Qt.DisplayRole:
|
||||
if section < len(self.Columns):
|
||||
return self.Columns[section]
|
||||
|
||||
super(TreeModel, self).headerData(section, orientation, role)
|
||||
|
||||
def flags(self, index):
|
||||
flags = QtCore.Qt.ItemIsEnabled
|
||||
|
||||
item = index.internalPointer()
|
||||
if item.get("enabled", True):
|
||||
flags |= QtCore.Qt.ItemIsSelectable
|
||||
|
||||
return flags
|
||||
|
||||
def parent(self, index):
|
||||
|
||||
item = index.internalPointer()
|
||||
parent_item = item.parent()
|
||||
|
||||
# If it has no parents we return invalid
|
||||
if parent_item == self._root_item or not parent_item:
|
||||
return QtCore.QModelIndex()
|
||||
|
||||
return self.createIndex(parent_item.row(), 0, parent_item)
|
||||
|
||||
def index(self, row, column, parent):
|
||||
"""Return index for row/column under parent"""
|
||||
|
||||
if not parent.isValid():
|
||||
parent_item = self._root_item
|
||||
else:
|
||||
parent_item = parent.internalPointer()
|
||||
|
||||
child_item = parent_item.child(row)
|
||||
if child_item:
|
||||
return self.createIndex(row, column, child_item)
|
||||
else:
|
||||
return QtCore.QModelIndex()
|
||||
|
||||
def add_child(self, item, parent=None):
|
||||
if parent is None:
|
||||
parent = self._root_item
|
||||
|
||||
parent.add_child(item)
|
||||
|
||||
def column_name(self, column):
|
||||
"""Return column key by index"""
|
||||
|
||||
if column < len(self.Columns):
|
||||
return self.Columns[column]
|
||||
|
||||
def clear(self):
|
||||
self.beginResetModel()
|
||||
self._root_item = Item()
|
||||
self.endResetModel()
|
||||
|
||||
|
||||
class TasksModel(TreeModel):
|
||||
"""A model listing the tasks combined for a list of assets"""
|
||||
|
||||
Columns = ["Tasks"]
|
||||
|
||||
def __init__(self):
|
||||
super(TasksModel, self).__init__()
|
||||
self._num_assets = 0
|
||||
self._icons = {
|
||||
"__default__": qtawesome.icon("fa.male",
|
||||
color=style.colors.default),
|
||||
"__no_task__": qtawesome.icon("fa.exclamation-circle",
|
||||
color=style.colors.mid)
|
||||
}
|
||||
|
||||
self._get_task_icons()
|
||||
|
||||
def _get_task_icons(self):
|
||||
# Get the project configured icons from database
|
||||
project = io.find_one({"type": "project"})
|
||||
tasks = project["config"].get("tasks", [])
|
||||
for task in tasks:
|
||||
icon_name = task.get("icon", None)
|
||||
if icon_name:
|
||||
icon = qtawesome.icon("fa.{}".format(icon_name),
|
||||
color=style.colors.default)
|
||||
self._icons[task["name"]] = icon
|
||||
|
||||
def set_tasks(self, tasks):
|
||||
"""Set assets to track by their database id
|
||||
|
||||
Arguments:
|
||||
asset_ids (list): List of asset ids.
|
||||
|
||||
"""
|
||||
|
||||
self.clear()
|
||||
|
||||
# let cleared task view if no tasks are available
|
||||
if len(tasks) == 0:
|
||||
return
|
||||
|
||||
self.beginResetModel()
|
||||
|
||||
icon = self._icons["__default__"]
|
||||
for task in tasks:
|
||||
item = Item({
|
||||
"Tasks": task,
|
||||
"icon": icon
|
||||
})
|
||||
|
||||
self.add_child(item)
|
||||
|
||||
self.endResetModel()
|
||||
|
||||
def flags(self, index):
|
||||
return QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable
|
||||
|
||||
def headerData(self, section, orientation, role):
|
||||
|
||||
# Override header for count column to show amount of assets
|
||||
# it is listing the tasks for
|
||||
if role == QtCore.Qt.DisplayRole:
|
||||
if orientation == QtCore.Qt.Horizontal:
|
||||
if section == 1: # count column
|
||||
return "count ({0})".format(self._num_assets)
|
||||
|
||||
return super(TasksModel, self).headerData(section, orientation, role)
|
||||
|
||||
def data(self, index, role):
|
||||
|
||||
if not index.isValid():
|
||||
return
|
||||
|
||||
# Add icon to the first column
|
||||
if role == QtCore.Qt.DecorationRole:
|
||||
if index.column() == 0:
|
||||
return index.internalPointer()["icon"]
|
||||
|
||||
return super(TasksModel, self).data(index, role)
|
||||
|
||||
|
||||
class DeselectableTreeView(QtWidgets.QTreeView):
|
||||
"""A tree view that deselects on clicking on an empty area in the view"""
|
||||
|
||||
def mousePressEvent(self, event):
|
||||
|
||||
index = self.indexAt(event.pos())
|
||||
if not index.isValid():
|
||||
# clear the selection
|
||||
self.clearSelection()
|
||||
# clear the current index
|
||||
self.setCurrentIndex(QtCore.QModelIndex())
|
||||
|
||||
QtWidgets.QTreeView.mousePressEvent(self, event)
|
||||
|
||||
|
||||
class RecursiveSortFilterProxyModel(QtCore.QSortFilterProxyModel):
|
||||
"""Filters to the regex if any of the children matches allow parent"""
|
||||
def filterAcceptsRow(self, row, parent):
|
||||
|
||||
regex = self.filterRegExp()
|
||||
if not regex.isEmpty():
|
||||
pattern = regex.pattern()
|
||||
model = self.sourceModel()
|
||||
source_index = model.index(row, self.filterKeyColumn(), parent)
|
||||
if source_index.isValid():
|
||||
|
||||
# Check current index itself
|
||||
key = model.data(source_index, self.filterRole())
|
||||
if re.search(pattern, key, re.IGNORECASE):
|
||||
return True
|
||||
|
||||
# Check children
|
||||
rows = model.rowCount(source_index)
|
||||
for i in range(rows):
|
||||
if self.filterAcceptsRow(i, source_index):
|
||||
return True
|
||||
|
||||
# Otherwise filter it
|
||||
return False
|
||||
|
||||
return super(RecursiveSortFilterProxyModel,
|
||||
self).filterAcceptsRow(row, parent)
|
||||
|
|
@ -1,448 +0,0 @@
|
|||
import logging
|
||||
import contextlib
|
||||
import collections
|
||||
|
||||
from avalon.vendor import qtawesome
|
||||
from Qt import QtWidgets, QtCore, QtGui
|
||||
from avalon import style, io
|
||||
|
||||
from .model import (
|
||||
TreeModel,
|
||||
Item,
|
||||
RecursiveSortFilterProxyModel,
|
||||
DeselectableTreeView
|
||||
)
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
|
||||
def _iter_model_rows(model,
|
||||
column,
|
||||
include_root=False):
|
||||
"""Iterate over all row indices in a model"""
|
||||
indices = [QtCore.QModelIndex()] # start iteration at root
|
||||
|
||||
for index in indices:
|
||||
|
||||
# Add children to the iterations
|
||||
child_rows = model.rowCount(index)
|
||||
for child_row in range(child_rows):
|
||||
child_index = model.index(child_row, column, index)
|
||||
indices.append(child_index)
|
||||
|
||||
if not include_root and not index.isValid():
|
||||
continue
|
||||
|
||||
yield index
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def preserve_expanded_rows(tree_view,
|
||||
column=0,
|
||||
role=QtCore.Qt.DisplayRole):
|
||||
"""Preserves expanded row in QTreeView by column's data role.
|
||||
|
||||
This function is created to maintain the expand vs collapse status of
|
||||
the model items. When refresh is triggered the items which are expanded
|
||||
will stay expanded and vice versa.
|
||||
|
||||
Arguments:
|
||||
tree_view (QWidgets.QTreeView): the tree view which is
|
||||
nested in the application
|
||||
column (int): the column to retrieve the data from
|
||||
role (int): the role which dictates what will be returned
|
||||
|
||||
Returns:
|
||||
None
|
||||
|
||||
"""
|
||||
|
||||
model = tree_view.model()
|
||||
|
||||
expanded = set()
|
||||
|
||||
for index in _iter_model_rows(model,
|
||||
column=column,
|
||||
include_root=False):
|
||||
if tree_view.isExpanded(index):
|
||||
value = index.data(role)
|
||||
expanded.add(value)
|
||||
|
||||
try:
|
||||
yield
|
||||
finally:
|
||||
if not expanded:
|
||||
return
|
||||
|
||||
for index in _iter_model_rows(model,
|
||||
column=column,
|
||||
include_root=False):
|
||||
value = index.data(role)
|
||||
state = value in expanded
|
||||
if state:
|
||||
tree_view.expand(index)
|
||||
else:
|
||||
tree_view.collapse(index)
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def preserve_selection(tree_view,
|
||||
column=0,
|
||||
role=QtCore.Qt.DisplayRole,
|
||||
current_index=True):
|
||||
"""Preserves row selection in QTreeView by column's data role.
|
||||
|
||||
This function is created to maintain the selection status of
|
||||
the model items. When refresh is triggered the items which are expanded
|
||||
will stay expanded and vice versa.
|
||||
|
||||
tree_view (QWidgets.QTreeView): the tree view nested in the application
|
||||
column (int): the column to retrieve the data from
|
||||
role (int): the role which dictates what will be returned
|
||||
|
||||
Returns:
|
||||
None
|
||||
|
||||
"""
|
||||
|
||||
model = tree_view.model()
|
||||
selection_model = tree_view.selectionModel()
|
||||
flags = selection_model.Select | selection_model.Rows
|
||||
|
||||
if current_index:
|
||||
current_index_value = tree_view.currentIndex().data(role)
|
||||
else:
|
||||
current_index_value = None
|
||||
|
||||
selected_rows = selection_model.selectedRows()
|
||||
if not selected_rows:
|
||||
yield
|
||||
return
|
||||
|
||||
selected = set(row.data(role) for row in selected_rows)
|
||||
try:
|
||||
yield
|
||||
finally:
|
||||
if not selected:
|
||||
return
|
||||
|
||||
# Go through all indices, select the ones with similar data
|
||||
for index in _iter_model_rows(model,
|
||||
column=column,
|
||||
include_root=False):
|
||||
|
||||
value = index.data(role)
|
||||
state = value in selected
|
||||
if state:
|
||||
tree_view.scrollTo(index) # Ensure item is visible
|
||||
selection_model.select(index, flags)
|
||||
|
||||
if current_index_value and value == current_index_value:
|
||||
tree_view.setCurrentIndex(index)
|
||||
|
||||
|
||||
class AssetModel(TreeModel):
|
||||
"""A model listing assets in the silo in the active project.
|
||||
|
||||
The assets are displayed in a treeview, they are visually parented by
|
||||
a `visualParent` field in the database containing an `_id` to a parent
|
||||
asset.
|
||||
|
||||
"""
|
||||
|
||||
Columns = ["label"]
|
||||
Name = 0
|
||||
Deprecated = 2
|
||||
ObjectId = 3
|
||||
|
||||
DocumentRole = QtCore.Qt.UserRole + 2
|
||||
ObjectIdRole = QtCore.Qt.UserRole + 3
|
||||
|
||||
def __init__(self, parent=None):
|
||||
super(AssetModel, self).__init__(parent=parent)
|
||||
self.refresh()
|
||||
|
||||
def _add_hierarchy(self, assets, parent=None, silos=None):
|
||||
"""Add the assets that are related to the parent as children items.
|
||||
|
||||
This method does *not* query the database. These instead are queried
|
||||
in a single batch upfront as an optimization to reduce database
|
||||
queries. Resulting in up to 10x speed increase.
|
||||
|
||||
Args:
|
||||
assets (dict): All assets in the currently active silo stored
|
||||
by key/value
|
||||
|
||||
Returns:
|
||||
None
|
||||
|
||||
"""
|
||||
if silos:
|
||||
# WARNING: Silo item "_id" is set to silo value
|
||||
# mainly because GUI issue with preserve selection and expanded row
|
||||
# and because of easier hierarchy parenting (in "assets")
|
||||
for silo in silos:
|
||||
item = Item({
|
||||
"_id": silo,
|
||||
"name": silo,
|
||||
"label": silo,
|
||||
"type": "silo"
|
||||
})
|
||||
self.add_child(item, parent=parent)
|
||||
self._add_hierarchy(assets, parent=item)
|
||||
|
||||
parent_id = parent["_id"] if parent else None
|
||||
current_assets = assets.get(parent_id, list())
|
||||
|
||||
for asset in current_assets:
|
||||
# get label from data, otherwise use name
|
||||
data = asset.get("data", {})
|
||||
label = data.get("label", asset["name"])
|
||||
tags = data.get("tags", [])
|
||||
|
||||
# store for the asset for optimization
|
||||
deprecated = "deprecated" in tags
|
||||
|
||||
item = Item({
|
||||
"_id": asset["_id"],
|
||||
"name": asset["name"],
|
||||
"label": label,
|
||||
"type": asset["type"],
|
||||
"tags": ", ".join(tags),
|
||||
"deprecated": deprecated,
|
||||
"_document": asset
|
||||
})
|
||||
self.add_child(item, parent=parent)
|
||||
|
||||
# Add asset's children recursively if it has children
|
||||
if asset["_id"] in assets:
|
||||
self._add_hierarchy(assets, parent=item)
|
||||
|
||||
def refresh(self):
|
||||
"""Refresh the data for the model."""
|
||||
|
||||
self.clear()
|
||||
self.beginResetModel()
|
||||
|
||||
# Get all assets in current silo sorted by name
|
||||
db_assets = io.find({"type": "asset"}).sort("name", 1)
|
||||
silos = db_assets.distinct("silo") or None
|
||||
# if any silo is set to None then it's expected it should not be used
|
||||
if silos and None in silos:
|
||||
silos = None
|
||||
|
||||
# Group the assets by their visual parent's id
|
||||
assets_by_parent = collections.defaultdict(list)
|
||||
for asset in db_assets:
|
||||
parent_id = (
|
||||
asset.get("data", {}).get("visualParent") or
|
||||
asset.get("silo")
|
||||
)
|
||||
assets_by_parent[parent_id].append(asset)
|
||||
|
||||
# Build the hierarchical tree items recursively
|
||||
self._add_hierarchy(
|
||||
assets_by_parent,
|
||||
parent=None,
|
||||
silos=silos
|
||||
)
|
||||
|
||||
self.endResetModel()
|
||||
|
||||
def flags(self, index):
|
||||
return QtCore.Qt.ItemIsEnabled | QtCore.Qt.ItemIsSelectable
|
||||
|
||||
def data(self, index, role):
|
||||
|
||||
if not index.isValid():
|
||||
return
|
||||
|
||||
item = index.internalPointer()
|
||||
if role == QtCore.Qt.DecorationRole: # icon
|
||||
|
||||
column = index.column()
|
||||
if column == self.Name:
|
||||
|
||||
# Allow a custom icon and custom icon color to be defined
|
||||
data = item.get("_document", {}).get("data", {})
|
||||
icon = data.get("icon", None)
|
||||
if icon is None and item.get("type") == "silo":
|
||||
icon = "database"
|
||||
color = data.get("color", style.colors.default)
|
||||
|
||||
if icon is None:
|
||||
# Use default icons if no custom one is specified.
|
||||
# If it has children show a full folder, otherwise
|
||||
# show an open folder
|
||||
has_children = self.rowCount(index) > 0
|
||||
icon = "folder" if has_children else "folder-o"
|
||||
|
||||
# Make the color darker when the asset is deprecated
|
||||
if item.get("deprecated", False):
|
||||
color = QtGui.QColor(color).darker(250)
|
||||
|
||||
try:
|
||||
key = "fa.{0}".format(icon) # font-awesome key
|
||||
icon = qtawesome.icon(key, color=color)
|
||||
return icon
|
||||
except Exception as exception:
|
||||
# Log an error message instead of erroring out completely
|
||||
# when the icon couldn't be created (e.g. invalid name)
|
||||
log.error(exception)
|
||||
|
||||
return
|
||||
|
||||
if role == QtCore.Qt.ForegroundRole: # font color
|
||||
if "deprecated" in item.get("tags", []):
|
||||
return QtGui.QColor(style.colors.light).darker(250)
|
||||
|
||||
if role == self.ObjectIdRole:
|
||||
return item.get("_id", None)
|
||||
|
||||
if role == self.DocumentRole:
|
||||
return item.get("_document", None)
|
||||
|
||||
return super(AssetModel, self).data(index, role)
|
||||
|
||||
|
||||
class AssetWidget(QtWidgets.QWidget):
|
||||
"""A Widget to display a tree of assets with filter
|
||||
|
||||
To list the assets of the active project:
|
||||
>>> # widget = AssetWidget()
|
||||
>>> # widget.refresh()
|
||||
>>> # widget.show()
|
||||
|
||||
"""
|
||||
|
||||
assets_refreshed = QtCore.Signal() # on model refresh
|
||||
selection_changed = QtCore.Signal() # on view selection change
|
||||
current_changed = QtCore.Signal() # on view current index change
|
||||
|
||||
def __init__(self, parent=None):
|
||||
super(AssetWidget, self).__init__(parent=parent)
|
||||
self.setContentsMargins(0, 0, 0, 0)
|
||||
|
||||
layout = QtWidgets.QVBoxLayout(self)
|
||||
layout.setContentsMargins(0, 0, 0, 0)
|
||||
layout.setSpacing(4)
|
||||
|
||||
# Tree View
|
||||
model = AssetModel(self)
|
||||
proxy = RecursiveSortFilterProxyModel()
|
||||
proxy.setSourceModel(model)
|
||||
proxy.setFilterCaseSensitivity(QtCore.Qt.CaseInsensitive)
|
||||
|
||||
view = DeselectableTreeView()
|
||||
view.setIndentation(15)
|
||||
view.setContextMenuPolicy(QtCore.Qt.CustomContextMenu)
|
||||
view.setHeaderHidden(True)
|
||||
view.setModel(proxy)
|
||||
|
||||
# Header
|
||||
header = QtWidgets.QHBoxLayout()
|
||||
|
||||
icon = qtawesome.icon("fa.refresh", color=style.colors.light)
|
||||
refresh = QtWidgets.QPushButton(icon, "")
|
||||
refresh.setToolTip("Refresh items")
|
||||
|
||||
filter = QtWidgets.QLineEdit()
|
||||
filter.textChanged.connect(proxy.setFilterFixedString)
|
||||
filter.setPlaceholderText("Filter assets..")
|
||||
|
||||
header.addWidget(filter)
|
||||
header.addWidget(refresh)
|
||||
|
||||
# Layout
|
||||
layout.addLayout(header)
|
||||
layout.addWidget(view)
|
||||
|
||||
# Signals/Slots
|
||||
selection = view.selectionModel()
|
||||
selection.selectionChanged.connect(self.selection_changed)
|
||||
selection.currentChanged.connect(self.current_changed)
|
||||
refresh.clicked.connect(self.refresh)
|
||||
|
||||
self.refreshButton = refresh
|
||||
self.model = model
|
||||
self.proxy = proxy
|
||||
self.view = view
|
||||
|
||||
def _refresh_model(self):
|
||||
with preserve_expanded_rows(
|
||||
self.view, column=0, role=self.model.ObjectIdRole
|
||||
):
|
||||
with preserve_selection(
|
||||
self.view, column=0, role=self.model.ObjectIdRole
|
||||
):
|
||||
self.model.refresh()
|
||||
|
||||
self.assets_refreshed.emit()
|
||||
|
||||
def refresh(self):
|
||||
self._refresh_model()
|
||||
|
||||
def get_active_asset(self):
|
||||
"""Return the asset id the current asset."""
|
||||
current = self.view.currentIndex()
|
||||
return current.data(self.model.ItemRole)
|
||||
|
||||
def get_active_index(self):
|
||||
return self.view.currentIndex()
|
||||
|
||||
def get_selected_assets(self):
|
||||
"""Return the assets' ids that are selected."""
|
||||
selection = self.view.selectionModel()
|
||||
rows = selection.selectedRows()
|
||||
return [row.data(self.model.ObjectIdRole) for row in rows]
|
||||
|
||||
def select_assets(self, assets, expand=True, key="name"):
|
||||
"""Select assets by name.
|
||||
|
||||
Args:
|
||||
assets (list): List of asset names
|
||||
expand (bool): Whether to also expand to the asset in the view
|
||||
|
||||
Returns:
|
||||
None
|
||||
|
||||
"""
|
||||
# TODO: Instead of individual selection optimize for many assets
|
||||
|
||||
if not isinstance(assets, (tuple, list)):
|
||||
assets = [assets]
|
||||
assert isinstance(
|
||||
assets, (tuple, list)
|
||||
), "Assets must be list or tuple"
|
||||
|
||||
# convert to list - tuple cant be modified
|
||||
assets = list(assets)
|
||||
|
||||
# Clear selection
|
||||
selection_model = self.view.selectionModel()
|
||||
selection_model.clearSelection()
|
||||
|
||||
# Select
|
||||
mode = selection_model.Select | selection_model.Rows
|
||||
for index in iter_model_rows(
|
||||
self.proxy, column=0, include_root=False
|
||||
):
|
||||
# stop iteration if there are no assets to process
|
||||
if not assets:
|
||||
break
|
||||
|
||||
value = index.data(self.model.ItemRole).get(key)
|
||||
if value not in assets:
|
||||
continue
|
||||
|
||||
# Remove processed asset
|
||||
assets.pop(assets.index(value))
|
||||
|
||||
selection_model.select(index, mode)
|
||||
|
||||
if expand:
|
||||
# Expand parent index
|
||||
self.view.expand(self.proxy.parent(index))
|
||||
|
||||
# Set the currently active index
|
||||
self.view.setCurrentIndex(index)
|
||||
|
|
@@ -29,6 +29,10 @@ from openpype.lib import (
create_workdir_extra_folders,
get_system_general_anatomy_data
)
from openpype.lib.avalon_context import (
update_current_task,
compute_session_changes
)
from .model import FilesModel
from .view import FilesView

@@ -667,7 +671,7 @@ class FilesWidget(QtWidgets.QWidget):
session["AVALON_APP"],
project_name=session["AVALON_PROJECT"]
)
changes = pipeline.compute_session_changes(
changes = compute_session_changes(
session,
asset=self._get_asset_doc(),
task=self._task_name,
@@ -681,7 +685,7 @@ class FilesWidget(QtWidgets.QWidget):
"""Enter the asset and task session currently selected"""

session = api.Session.copy()
changes = pipeline.compute_session_changes(
changes = compute_session_changes(
session,
asset=self._get_asset_doc(),
task=self._task_name,
@@ -692,7 +696,7 @@ class FilesWidget(QtWidgets.QWidget):
# to avoid any unwanted Task Changed callbacks to be triggered.
return

api.update_current_task(
update_current_task(
asset=self._get_asset_doc(),
task=self._task_name,
template_key=self.template_key
@@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
__version__ = "3.9.0-nightly.5"
__version__ = "3.9.0-nightly.6"
@@ -1,6 +1,6 @@
[tool.poetry]
name = "OpenPype"
version = "3.9.0-nightly.5" # OpenPype
version = "3.9.0-nightly.6" # OpenPype
description = "Open VFX and Animation pipeline with support."
authors = ["OpenPype Team <info@openpype.io>"]
license = "MIT License"