Merge branch 'develop' into bugfix/OP-7281_Maya-Review---playblast-renders-without-textures
.github/ISSUE_TEMPLATE/bug_report.yml (12 changed lines, vendored)

@@ -35,6 +35,12 @@ body:
      label: Version
      description: What version are you running? Look to OpenPype Tray
      options:
        - 3.18.2-nightly.2
        - 3.18.2-nightly.1
        - 3.18.1
        - 3.18.1-nightly.1
        - 3.18.0
        - 3.17.7
        - 3.17.7-nightly.7
        - 3.17.7-nightly.6
        - 3.17.7-nightly.5
@@ -129,12 +135,6 @@ body:
        - 3.15.4
        - 3.15.4-nightly.3
        - 3.15.4-nightly.2
        - 3.15.4-nightly.1
        - 3.15.3
        - 3.15.3-nightly.4
        - 3.15.3-nightly.3
        - 3.15.3-nightly.2
        - 3.15.3-nightly.1
      validations:
        required: true
  - type: dropdown
CHANGELOG.md (1140 changed lines)

@@ -7,6 +7,10 @@ OpenPype

[](https://github.com/pypeclub/pype/actions/workflows/documentation.yml)

## Important Notice!

OpenPype as a standalone product has reached the end of its life and this repository is now used as the pipeline core code for [AYON](https://ynput.io/ayon/). You can read more details about the end-of-life process here: https://community.ynput.io/t/openpype-end-of-life-timeline/877

Introduction
------------
@@ -606,7 +606,7 @@ def convert_v4_version_to_v3(version):
        output_data[dst_key] = version[src_key]

    if "createdAt" in version:
        created_at = arrow.get(version["createdAt"])
        created_at = arrow.get(version["createdAt"]).to("local")
        output_data["time"] = created_at.strftime("%Y%m%dT%H%M%SZ")

    output["data"] = output_data
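The change above localizes the `createdAt` timestamp before formatting it. A stdlib sketch of the same conversion, using `datetime` in place of `arrow` (here `.astimezone()` plays the role of `.to("local")`):

```python
from datetime import datetime, timezone

# A UTC instant, standing in for the parsed "createdAt" value.
created_at = datetime(2023, 9, 1, 12, 0, 0, tzinfo=timezone.utc)

# Convert to the machine's local zone before formatting, as
# arrow.get(...).to("local") does in the diff above.
local_created_at = created_at.astimezone()

# The instant itself is unchanged; only the wall-clock representation moves.
assert local_created_at.timestamp() == created_at.timestamp()
print(local_created_at.strftime("%Y%m%dT%H%M%S"))
```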
@@ -1,6 +1,6 @@
# AfterEffects Integration

Requirements: This extension requires use of Javascript engine, which is
Requirements: This extension requires use of Javascript engine, which is
available since CC 16.0.
Please check your File>Project Settings>Expressions>Expressions Engine
@@ -13,26 +13,28 @@ The After Effects integration requires two components to work; `extension` and `

To install the extension download [Extension Manager Command Line tool (ExManCmd)](https://github.com/Adobe-CEP/Getting-Started-guides/tree/master/Package%20Distribute%20Install#option-2---exmancmd).

```
ExManCmd /install {path to avalon-core}\avalon\photoshop\extension.zxp
ExManCmd /install {path to addon}/api/extension.zxp
```
OR
download [Anastasiy’s Extension Manager](https://install.anastasiy.com/)

`{path to addon}` will most likely be in your AppData folder on Windows, or in your user data folder on Linux and macOS.

### Server

The easiest way to get the server and After Effects launch is with:

```
python -c ^"import avalon.photoshop;avalon.aftereffects.launch(""c:\Program Files\Adobe\Adobe After Effects 2020\Support Files\AfterFX.exe"")^"
python -c ^"import openpype.hosts.aftereffects;openpype.hosts.aftereffects.launch(""c:\Program Files\Adobe\Adobe After Effects 2020\Support Files\AfterFX.exe"")^"
```

`avalon.aftereffects.launch` launches the application and server, and also closes the server when After Effects exits.

## Usage

The After Effects extension can be found under `Window > Extensions > OpenPype`. Once launched you should be presented with a panel like this:
The After Effects extension can be found under `Window > Extensions > AYON`. Once launched you should be presented with a panel like this:




## Developing
@@ -43,8 +45,8 @@ When developing the extension you can load it [unsigned](https://github.com/Adob

When signing the extension you can use this [guide](https://github.com/Adobe-CEP/Getting-Started-guides/tree/master/Package%20Distribute%20Install#package-distribute-install-guide).

```
ZXPSignCmd -selfSignedCert NA NA Avalon Avalon-After-Effects avalon extension.p12
ZXPSignCmd -sign {path to avalon-core}\avalon\aftereffects\extension {path to avalon-core}\avalon\aftereffects\extension.zxp extension.p12 avalon
ZXPSignCmd -selfSignedCert NA NA Ayon Avalon-After-Effects Ayon extension.p12
ZXPSignCmd -sign {path to addon}/api/extension {path to addon}/api/extension.zxp extension.p12 Ayon
```

### Plugin Examples
@@ -52,14 +54,14 @@ ZXPSignCmd -sign {path to avalon-core}\avalon\aftereffects\extension {path to av

These plugins were made with the [polly config](https://github.com/mindbender-studio/config). To fully integrate and load, you will have to use this config and add `image` to the [integration plugin](https://github.com/mindbender-studio/config/blob/master/polly/plugins/publish/integrate_asset.py).

Expected deployed extension location on default Windows:
`c:\Program Files (x86)\Common Files\Adobe\CEP\extensions\com.openpype.AE.panel`
`c:\Program Files (x86)\Common Files\Adobe\CEP\extensions\io.ynput.AE.panel`

For easier debugging of Javascript:
https://community.adobe.com/t5/download-install/adobe-extension-debuger-problem/td-p/10911704?page=1
Add (optional) --enable-blink-features=ShadowDOMV0,CustomElementsV0 when starting Chrome
then localhost:8092

Or use Visual Studio Code https://medium.com/adobetech/extendscript-debugger-for-visual-studio-code-public-release-a2ff6161fa01
Or use Visual Studio Code https://medium.com/adobetech/extendscript-debugger-for-visual-studio-code-public-release-a2ff6161fa01

## Resources
- https://javascript-tools-guide.readthedocs.io/introduction/index.html
- https://github.com/Adobe-CEP/Getting-Started-guides
@@ -1,32 +1,31 @@
<?xml version="1.0" encoding="UTF-8"?>
<?xml version="1.0" encoding="UTF-8"?>
<ExtensionList>
    <Extension Id="com.openpype.AE.panel">
    <Extension Id="io.ynput.AE.panel">
        <HostList>

            <!-- Comment Host tags according to the apps you want your panel to support -->

            <!-- Photoshop -->
            <Host Name="PHXS" Port="8088"/>

            <!-- Illustrator -->
            <Host Name="ILST" Port="8089"/>

            <!-- InDesign -->
            <Host Name="IDSN" Port="8090" />

            <!-- Premiere -->
            <Host Name="PPRO" Port="8091" />

            <!-- AfterEffects -->
            <Host Name="AEFT" Port="8092" />

            <!-- PRELUDE -->
            <Host Name="PRLD" Port="8093" />

            <!-- FLASH Pro -->
            <Host Name="FLPR" Port="8094" />

        </HostList>
    </Extension>
</ExtensionList>
@@ -1,8 +1,8 @@
<?xml version="1.0" encoding="UTF-8"?>
<ExtensionManifest Version="8.0" ExtensionBundleId="com.openpype.AE.panel" ExtensionBundleVersion="1.0.27"
    ExtensionBundleName="com.openpype.AE.panel" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<ExtensionManifest Version="8.0" ExtensionBundleId="io.ynput.AE.panel" ExtensionBundleVersion="1.1.0"
    ExtensionBundleName="io.ynput.AE.panel" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <ExtensionList>
        <Extension Id="com.openpype.AE.panel" Version="1.0" />
        <Extension Id="io.ynput.AE.panel" Version="1.0" />
    </ExtensionList>
    <ExecutionEnvironment>
        <HostList>
@@ -38,7 +38,7 @@
        </RequiredRuntimeList>
    </ExecutionEnvironment>
    <DispatchInfoList>
        <Extension Id="com.openpype.AE.panel">
        <Extension Id="io.ynput.AE.panel">
            <DispatchInfo >
                <Resources>
                    <MainPath>./index.html</MainPath>
@@ -49,7 +49,7 @@
                </Lifecycle>
                <UI>
                    <Type>Panel</Type>
                    <Menu>OpenPype</Menu>
                    <Menu>AYON</Menu>
                    <Geometry>
                        <Size>
                            <Height>200</Height>
@@ -66,7 +66,7 @@

                    </Geometry>
                    <Icons>
                        <Icon Type="Normal">./icons/iconNormal.png</Icon>
                        <Icon Type="Normal">./icons/ayon_logo.png</Icon>
                        <Icon Type="RollOver">./icons/iconRollover.png</Icon>
                        <Icon Type="Disabled">./icons/iconDisabled.png</Icon>
                        <Icon Type="DarkNormal">./icons/iconDarkNormal.png</Icon>
BIN openpype/hosts/aftereffects/api/extension/icons/ayon_logo.png (new file; After: 3.5 KiB)
BIN openpype/hosts/aftereffects/api/panel.png (new file; After: 16 KiB, Before: 13 KiB)
BIN openpype/hosts/aftereffects/api/panel_failure.png (new file; After: 13 KiB)
@@ -60,8 +60,9 @@ class ExtractLocalRender(publish.Extractor):
            first_repre = not representations
            if instance.data["review"] and first_repre:
                repre_data["tags"] = ["review"]
                thumbnail_path = os.path.join(staging_dir, files[0])
                instance.data["thumbnailSource"] = thumbnail_path
                # TODO return back when Extract from source same as regular
                # thumbnail_path = os.path.join(staging_dir, files[0])
                # instance.data["thumbnailSource"] = thumbnail_path

            representations.append(repre_data)
@@ -14,7 +14,7 @@ from openpype.pipeline import (
    legacy_io,
    Creator as NewCreator,
    CreatedInstance,
    Anatomy
    Anatomy,
)
@@ -27,28 +27,21 @@ class CreateSaver(NewCreator):
    description = "Fusion Saver to generate image sequence"
    icon = "fa5.eye"

    instance_attributes = [
        "reviewable"
    ]
    instance_attributes = ["reviewable"]
    image_format = "exr"

    # TODO: This should be renamed together with Nuke so it is aligned
    temp_rendering_path_template = (
        "{workdir}/renders/fusion/{subset}/{subset}.{frame}.{ext}")
        "{workdir}/renders/fusion/{subset}/{subset}.{frame}.{ext}"
    )

    def create(self, subset_name, instance_data, pre_create_data):
        self.pass_pre_attributes_to_instance(
            instance_data,
            pre_create_data
        self.pass_pre_attributes_to_instance(instance_data, pre_create_data)

        instance_data.update(
            {"id": "pyblish.avalon.instance", "subset": subset_name}
        )

        instance_data.update({
            "id": "pyblish.avalon.instance",
            "subset": subset_name
        })

        # TODO: Add pre_create attributes to choose file format?
        file_format = "OpenEXRFormat"

        comp = get_current_comp()
        with comp_lock_and_undo_chunk(comp):
            args = (-32768, -32768)  # Magical position numbers
@@ -56,19 +49,6 @@ class CreateSaver(NewCreator):

        self._update_tool_with_data(saver, data=instance_data)

        saver["OutputFormat"] = file_format

        # Check file format settings are available
        if saver[file_format] is None:
            raise RuntimeError(
                f"File format is not set to {file_format}, this is a bug"
            )

        # Set file format attributes
        saver[file_format]["Depth"] = 0  # Auto | float16 | float32
        # TODO Is this needed?
        saver[file_format]["SaveAlpha"] = 1

        # Register the CreatedInstance
        instance = CreatedInstance(
            family=self.family,
@@ -140,8 +120,15 @@ class CreateSaver(NewCreator):
            return

        original_subset = tool.GetData("openpype.subset")
        original_format = tool.GetData(
            "openpype.creator_attributes.image_format"
        )

        subset = data["subset"]
        if original_subset != subset:
        if (
            original_subset != subset
            or original_format != data["creator_attributes"]["image_format"]
        ):
            self._configure_saver_tool(data, tool, subset)

    def _configure_saver_tool(self, data, tool, subset):
@@ -151,17 +138,17 @@ class CreateSaver(NewCreator):
        anatomy = Anatomy()
        frame_padding = anatomy.templates["frame_padding"]

        # get output format
        ext = data["creator_attributes"]["image_format"]

        # Subset change detected
        workdir = os.path.normpath(legacy_io.Session["AVALON_WORKDIR"])
        formatting_data.update({
            "workdir": workdir,
            "frame": "0" * frame_padding,
            "ext": "exr"
        })
        formatting_data.update(
            {"workdir": workdir, "frame": "0" * frame_padding, "ext": ext}
        )

        # build file path to render
        filepath = self.temp_rendering_path_template.format(
            **formatting_data)
        filepath = self.temp_rendering_path_template.format(**formatting_data)

        comp = get_current_comp()
        tool["Clip"] = comp.ReverseMapPath(os.path.normpath(filepath))
@@ -201,7 +188,8 @@ class CreateSaver(NewCreator):
        attr_defs = [
            self._get_render_target_enum(),
            self._get_reviewable_bool(),
            self._get_frame_range_enum()
            self._get_frame_range_enum(),
            self._get_image_format_enum(),
        ]
        return attr_defs
@@ -209,11 +197,7 @@ class CreateSaver(NewCreator):
        """Settings for publish page"""
        return self.get_pre_create_attr_defs()

    def pass_pre_attributes_to_instance(
        self,
        instance_data,
        pre_create_data
    ):
    def pass_pre_attributes_to_instance(self, instance_data, pre_create_data):
        creator_attrs = instance_data["creator_attributes"] = {}
        for pass_key in pre_create_data.keys():
            creator_attrs[pass_key] = pre_create_data[pass_key]
@@ -236,13 +220,13 @@ class CreateSaver(NewCreator):
        frame_range_options = {
            "asset_db": "Current asset context",
            "render_range": "From render in/out",
            "comp_range": "From composition timeline"
            "comp_range": "From composition timeline",
        }

        return EnumDef(
            "frame_range_source",
            items=frame_range_options,
            label="Frame range source"
            label="Frame range source",
        )

    def _get_reviewable_bool(self):
@@ -252,20 +236,33 @@ class CreateSaver(NewCreator):
            label="Review",
        )

    def _get_image_format_enum(self):
        image_format_options = ["exr", "tga", "tif", "png", "jpg"]
        return EnumDef(
            "image_format",
            items=image_format_options,
            default=self.image_format,
            label="Output Image Format",
        )

    def apply_settings(self, project_settings):
        """Method called on initialization of plugin to apply settings."""

        # plugin settings
        plugin_settings = (
            project_settings["fusion"]["create"][self.__class__.__name__]
        )
        plugin_settings = project_settings["fusion"]["create"][
            self.__class__.__name__
        ]

        # individual attributes
        self.instance_attributes = plugin_settings.get(
            "instance_attributes") or self.instance_attributes
        self.default_variants = plugin_settings.get(
            "default_variants") or self.default_variants
        self.temp_rendering_path_template = (
            plugin_settings.get("temp_rendering_path_template")
            or self.temp_rendering_path_template
            "instance_attributes", self.instance_attributes
        )
        self.default_variants = plugin_settings.get(
            "default_variants", self.default_variants
        )
        self.temp_rendering_path_template = plugin_settings.get(
            "temp_rendering_path_template", self.temp_rendering_path_template
        )
        self.image_format = plugin_settings.get(
            "image_format", self.image_format
        )
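The settings hunk above swaps `plugin_settings.get(key) or default` for `plugin_settings.get(key, default)`. The two differ when the stored value is falsy but meaningful (an empty list, empty string, or 0); a minimal sketch of the difference:

```python
# A setting that was explicitly configured as an empty list.
settings = {"instance_attributes": []}

default = ["reviewable"]

# `or` treats any falsy stored value as missing and silently
# falls back to the default:
via_or = settings.get("instance_attributes") or default

# `.get(key, default)` only falls back when the key is absent:
via_get = settings.get("instance_attributes", default)

print(via_or)   # ['reviewable']
print(via_get)  # []
```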
@@ -146,11 +146,15 @@ class FusionRenderLocal(

        staging_dir = os.path.dirname(path)

        files = [os.path.basename(f) for f in expected_files]
        if len(expected_files) == 1:
            files = files[0]

        repre = {
            "name": ext[1:],
            "ext": ext[1:],
            "frameStart": f"%0{padding}d" % start,
            "files": [os.path.basename(f) for f in expected_files],
            "files": files,
            "stagingDir": staging_dir,
        }
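The change above collapses a single-frame render to a bare file name instead of a one-item list. A standalone sketch of that logic (the `build_files_entry` helper is hypothetical, not part of the codebase):

```python
import os


def build_files_entry(expected_files):
    # Representations carry a list of file names for sequences, but a
    # plain string for a single file (the convention the diff encodes).
    files = [os.path.basename(f) for f in expected_files]
    if len(files) == 1:
        return files[0]
    return files


print(build_files_entry(["/renders/shot.0001.exr"]))
print(build_files_entry(["/renders/shot.0001.exr", "/renders/shot.0002.exr"]))
```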
@@ -14,18 +14,13 @@ class CollectChunkSize(pyblish.api.InstancePlugin,
    hosts = ["houdini"]
    targets = ["local", "remote"]
    label = "Collect Chunk Size"
    chunkSize = 999999
    chunk_size = 999999

    def process(self, instance):
        # need to get the chunk size info from the setting
        attr_values = self.get_attr_values_from_data(instance.data)
        instance.data["chunkSize"] = attr_values.get("chunkSize")

    @classmethod
    def apply_settings(cls, project_settings):
        project_setting = project_settings["houdini"]["publish"]["CollectChunkSize"]  # noqa
        cls.chunkSize = project_setting["chunk_size"]

    @classmethod
    def get_attribute_defs(cls):
        return [
@@ -33,7 +28,6 @@ class CollectChunkSize(pyblish.api.InstancePlugin,
            minimum=1,
            maximum=999999,
            decimals=0,
            default=cls.chunkSize,
            default=cls.chunk_size,
            label="Frame Per Task")
        ]
@@ -511,3 +511,20 @@ def render_resolution(width, height):
    finally:
        rt.renderWidth = current_renderWidth
        rt.renderHeight = current_renderHeight


@contextlib.contextmanager
def suspended_refresh():
    """Suspended refresh for scene and modify panel redraw."""
    if is_headless():
        yield
        return
    rt.disableSceneRedraw()
    rt.suspendEditing()
    try:
        yield

    finally:
        rt.enableSceneRedraw()
        rt.resumeEditing()
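The new `suspended_refresh` manager follows the standard `contextlib.contextmanager` shape: toggle state, `yield`, and restore in `finally` so the restore runs even if the wrapped export raises. A self-contained sketch of the same pattern, where the `calls` log stands in for the pymxs redraw and editing toggles:

```python
import contextlib

calls = []


@contextlib.contextmanager
def suspended_refresh_sketch():
    # Mirror of the diff: disable updates, always re-enable in `finally`.
    calls.append("disableSceneRedraw")
    calls.append("suspendEditing")
    try:
        yield
    finally:
        calls.append("enableSceneRedraw")
        calls.append("resumeEditing")


with suspended_refresh_sketch():
    calls.append("export")

print(calls)
```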
@@ -198,8 +198,8 @@ def _render_preview_animation_max_pre_2024(
        res_width, res_height, filename=filepath
    )
    dib = rt.gw.getViewportDib()
    dib_width = rt.renderWidth
    dib_height = rt.renderHeight
    dib_width = float(dib.width)
    dib_height = float(dib.height)
    # aspect ratio
    viewportRatio = dib_width / dib_height
    renderRatio = float(res_width / res_height)
@@ -39,45 +39,41 @@ Note:
"""
import os
import pyblish.api
from openpype.pipeline import publish
from openpype.pipeline import publish, OptionalPyblishPluginMixin
from pymxs import runtime as rt
from openpype.hosts.max.api import maintained_selection
from openpype.hosts.max.api.lib import suspended_refresh
from openpype.lib import BoolDef


class ExtractAlembic(publish.Extractor):
class ExtractAlembic(publish.Extractor,
                     OptionalPyblishPluginMixin):
    order = pyblish.api.ExtractorOrder
    label = "Extract Pointcache"
    hosts = ["max"]
    families = ["pointcache"]
    optional = True

    def process(self, instance):
        start = instance.data["frameStartHandle"]
        end = instance.data["frameEndHandle"]

        self.log.debug("Extracting pointcache ...")
        if not self.is_active(instance.data):
            return

        parent_dir = self.staging_dir(instance)
        file_name = "{name}.abc".format(**instance.data)
        path = os.path.join(parent_dir, file_name)

        # We run the render
        self.log.info("Writing alembic '%s' to '%s'" % (file_name, parent_dir))

        rt.AlembicExport.ArchiveType = rt.name("ogawa")
        rt.AlembicExport.CoordinateSystem = rt.name("maya")
        rt.AlembicExport.StartFrame = start
        rt.AlembicExport.EndFrame = end

        with maintained_selection():
            # select and export
            node_list = instance.data["members"]
            rt.Select(node_list)
            rt.exportFile(
                path,
                rt.name("noPrompt"),
                selectedOnly=True,
                using=rt.AlembicExport,
            )
        with suspended_refresh():
            self._set_abc_attributes(instance)
            with maintained_selection():
                # select and export
                node_list = instance.data["members"]
                rt.Select(node_list)
                rt.exportFile(
                    path,
                    rt.name("noPrompt"),
                    selectedOnly=True,
                    using=rt.AlembicExport,
                )

        if "representations" not in instance.data:
            instance.data["representations"] = []
@@ -89,3 +85,51 @@ class ExtractAlembic(publish.Extractor):
            "stagingDir": parent_dir,
        }
        instance.data["representations"].append(representation)

    def _set_abc_attributes(self, instance):
        start = instance.data["frameStartHandle"]
        end = instance.data["frameEndHandle"]
        attr_values = self.get_attr_values_from_data(instance.data)
        custom_attrs = attr_values.get("custom_attrs", False)
        if not custom_attrs:
            self.log.debug(
                "No Custom Attributes included in this abc export...")
        rt.AlembicExport.ArchiveType = rt.Name("ogawa")
        rt.AlembicExport.CoordinateSystem = rt.Name("maya")
        rt.AlembicExport.StartFrame = start
        rt.AlembicExport.EndFrame = end
        rt.AlembicExport.CustomAttributes = custom_attrs

    @classmethod
    def get_attribute_defs(cls):
        return [
            BoolDef("custom_attrs",
                    label="Custom Attributes",
                    default=False),
        ]


class ExtractCameraAlembic(ExtractAlembic):
    """Extract Camera with AlembicExport."""

    label = "Extract Alembic Camera"
    families = ["camera"]


class ExtractModel(ExtractAlembic):
    """Extract Geometry in Alembic Format"""
    label = "Extract Geometry (Alembic)"
    families = ["model"]

    def _set_abc_attributes(self, instance):
        attr_values = self.get_attr_values_from_data(instance.data)
        custom_attrs = attr_values.get("custom_attrs", False)
        if not custom_attrs:
            self.log.debug(
                "No Custom Attributes included in this abc export...")
        rt.AlembicExport.ArchiveType = rt.name("ogawa")
        rt.AlembicExport.CoordinateSystem = rt.name("maya")
        rt.AlembicExport.CustomAttributes = custom_attrs
        rt.AlembicExport.UVs = True
        rt.AlembicExport.VertexColors = True
        rt.AlembicExport.PreserveInstances = True
@@ -1,64 +0,0 @@
import os

import pyblish.api
from pymxs import runtime as rt

from openpype.hosts.max.api import maintained_selection
from openpype.pipeline import OptionalPyblishPluginMixin, publish


class ExtractCameraAlembic(publish.Extractor, OptionalPyblishPluginMixin):
    """Extract Camera with AlembicExport."""

    order = pyblish.api.ExtractorOrder - 0.1
    label = "Extract Alembic Camera"
    hosts = ["max"]
    families = ["camera"]
    optional = True

    def process(self, instance):
        if not self.is_active(instance.data):
            return
        start = instance.data["frameStartHandle"]
        end = instance.data["frameEndHandle"]

        self.log.info("Extracting Camera ...")

        stagingdir = self.staging_dir(instance)
        filename = "{name}.abc".format(**instance.data)
        path = os.path.join(stagingdir, filename)

        # We run the render
        self.log.info(f"Writing alembic '(unknown)' to '{stagingdir}'")

        rt.AlembicExport.ArchiveType = rt.Name("ogawa")
        rt.AlembicExport.CoordinateSystem = rt.Name("maya")
        rt.AlembicExport.StartFrame = start
        rt.AlembicExport.EndFrame = end
        rt.AlembicExport.CustomAttributes = True

        with maintained_selection():
            # select and export
            node_list = instance.data["members"]
            rt.Select(node_list)
            rt.ExportFile(
                path,
                rt.Name("noPrompt"),
                selectedOnly=True,
                using=rt.AlembicExport,
            )

        self.log.info("Performing Extraction ...")
        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            "name": "abc",
            "ext": "abc",
            "files": filename,
            "stagingDir": stagingdir,
            "frameStart": start,
            "frameEnd": end,
        }
        instance.data["representations"].append(representation)
        self.log.info(f"Extracted instance '{instance.name}' to: {path}")
@@ -20,13 +20,10 @@ class ExtractCameraFbx(publish.Extractor, OptionalPyblishPluginMixin):
        if not self.is_active(instance.data):
            return

        self.log.debug("Extracting Camera ...")
        stagingdir = self.staging_dir(instance)
        filename = "{name}.fbx".format(**instance.data)

        filepath = os.path.join(stagingdir, filename)
        self.log.info(f"Writing fbx file '(unknown)' to '{filepath}'")

        rt.FBXExporterSetParam("Animation", True)
        rt.FBXExporterSetParam("Cameras", True)
        rt.FBXExporterSetParam("AxisConversionMethod", "Animation")
@@ -26,7 +26,6 @@ class ExtractMaxSceneRaw(publish.Extractor, OptionalPyblishPluginMixin):
        filename = "{name}.max".format(**instance.data)

        max_path = os.path.join(stagingdir, filename)
        self.log.info("Writing max file '%s' to '%s'" % (filename, max_path))

        if "representations" not in instance.data:
            instance.data["representations"] = []
@@ -1,63 +0,0 @@
import os
import pyblish.api
from openpype.pipeline import publish, OptionalPyblishPluginMixin
from pymxs import runtime as rt
from openpype.hosts.max.api import maintained_selection


class ExtractModel(publish.Extractor, OptionalPyblishPluginMixin):
    """
    Extract Geometry in Alembic Format
    """

    order = pyblish.api.ExtractorOrder - 0.1
    label = "Extract Geometry (Alembic)"
    hosts = ["max"]
    families = ["model"]
    optional = True

    def process(self, instance):
        if not self.is_active(instance.data):
            return

        self.log.debug("Extracting Geometry ...")

        stagingdir = self.staging_dir(instance)
        filename = "{name}.abc".format(**instance.data)
        filepath = os.path.join(stagingdir, filename)

        # We run the render
        self.log.info("Writing alembic '%s' to '%s'" % (filename, stagingdir))

        rt.AlembicExport.ArchiveType = rt.name("ogawa")
        rt.AlembicExport.CoordinateSystem = rt.name("maya")
        rt.AlembicExport.CustomAttributes = True
        rt.AlembicExport.UVs = True
        rt.AlembicExport.VertexColors = True
        rt.AlembicExport.PreserveInstances = True

        with maintained_selection():
            # select and export
            node_list = instance.data["members"]
            rt.Select(node_list)
            rt.exportFile(
                filepath,
                rt.name("noPrompt"),
                selectedOnly=True,
                using=rt.AlembicExport,
            )

        self.log.info("Performing Extraction ...")
        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            "name": "abc",
            "ext": "abc",
            "files": filename,
            "stagingDir": stagingdir,
        }
        instance.data["representations"].append(representation)
        self.log.info(
            "Extracted instance '%s' to: %s" % (instance.name, filepath)
        )
@@ -20,12 +20,9 @@ class ExtractModelFbx(publish.Extractor, OptionalPyblishPluginMixin):
        if not self.is_active(instance.data):
            return

        self.log.debug("Extracting Geometry ...")

        stagingdir = self.staging_dir(instance)
        filename = "{name}.fbx".format(**instance.data)
        filepath = os.path.join(stagingdir, filename)
        self.log.info("Writing FBX '%s' to '%s'" % (filepath, stagingdir))

        rt.FBXExporterSetParam("Animation", False)
        rt.FBXExporterSetParam("Cameras", False)
@@ -46,7 +43,6 @@ class ExtractModelFbx(publish.Extractor, OptionalPyblishPluginMixin):
            using=rt.FBXEXP,
        )

        self.log.info("Performing Extraction ...")
        if "representations" not in instance.data:
            instance.data["representations"] = []
@@ -3,6 +3,7 @@ import pyblish.api
from openpype.pipeline import publish, OptionalPyblishPluginMixin
from pymxs import runtime as rt
from openpype.hosts.max.api import maintained_selection
from openpype.hosts.max.api.lib import suspended_refresh
from openpype.pipeline.publish import KnownPublishError
@@ -21,25 +22,21 @@ class ExtractModelObj(publish.Extractor, OptionalPyblishPluginMixin):
        if not self.is_active(instance.data):
            return

        self.log.debug("Extracting Geometry ...")

        stagingdir = self.staging_dir(instance)
        filename = "{name}.obj".format(**instance.data)
        filepath = os.path.join(stagingdir, filename)
        self.log.info("Writing OBJ '%s' to '%s'" % (filepath, stagingdir))

        self.log.info("Performing Extraction ...")
        with maintained_selection():
            # select and export
            node_list = instance.data["members"]
            rt.Select(node_list)
            rt.exportFile(
                filepath,
                rt.name("noPrompt"),
                selectedOnly=True,
                using=rt.ObjExp,
            )

        with suspended_refresh():
            with maintained_selection():
                # select and export
                node_list = instance.data["members"]
                rt.Select(node_list)
                rt.exportFile(
                    filepath,
                    rt.name("noPrompt"),
                    selectedOnly=True,
                    using=rt.ObjExp,
                )
        if not os.path.exists(filepath):
            raise KnownPublishError(
                "File {} wasn't produced by 3ds max, "
                "please check the logs.".format(filepath))
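The hunk above fails the publish when the exporter silently produced no file. A minimal standalone sketch of that guard (the `KnownPublishError` class here is a stand-in for `openpype.pipeline.publish.KnownPublishError`, and `check_exported` is a hypothetical helper):

```python
import os
import tempfile


class KnownPublishError(Exception):
    """Stand-in for openpype.pipeline.publish.KnownPublishError."""


def check_exported(filepath):
    # Mirror of the post-export guard in the diff above.
    if not os.path.exists(filepath):
        raise KnownPublishError(
            "File {} wasn't produced by 3ds max, "
            "please check the logs.".format(filepath))


# A file that exists passes silently; a missing one raises.
with tempfile.NamedTemporaryFile(delete=False) as handle:
    existing = handle.name
check_exported(existing)

try:
    check_exported(existing + ".missing")
    raised = False
except KnownPublishError:
    raised = True
print(raised)
```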
@@ -1,9 +1,12 @@
import pyblish.api
from pymxs import runtime as rt
from openpype.pipeline import (
    PublishValidationError,
    OptionalPyblishPluginMixin
)
from pymxs import runtime as rt
from openpype.pipeline.publish import (
    RepairAction,
    PublishValidationError
)
from openpype.hosts.max.api.lib import reset_scene_resolution
@@ -16,6 +19,7 @@ class ValidateResolutionSetting(pyblish.api.InstancePlugin,
    hosts = ["max"]
    label = "Validate Resolution Setting"
    optional = True
    actions = [RepairAction]

    def process(self, instance):
        if not self.is_active(instance.data):
@@ -260,7 +260,7 @@ def _install_menu():
        "Create...",
        lambda: host_tools.show_publisher(
            parent=(
                main_window if nuke.NUKE_VERSION_RELEASE >= 14 else None
                main_window if nuke.NUKE_VERSION_MAJOR >= 14 else None
            ),
            tab="create"
        )

@@ -271,7 +271,7 @@ def _install_menu():
        "Publish...",
        lambda: host_tools.show_publisher(
            parent=(
                main_window if nuke.NUKE_VERSION_RELEASE >= 14 else None
                main_window if nuke.NUKE_VERSION_MAJOR >= 14 else None
            ),
            tab="publish"
        )
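The menu hunks above switch the version gate from `nuke.NUKE_VERSION_RELEASE` to `nuke.NUKE_VERSION_MAJOR`, presumably because the release field holds the point-release number rather than the major version. A sketch of the gating logic (the `publisher_parent` helper is hypothetical; the constant names come from Nuke's Python API):

```python
def publisher_parent(main_window, nuke_version_major):
    # Nuke 14+ gets an explicit parent window; older versions pass None,
    # matching the condition in the diff above.
    if nuke_version_major >= 14:
        return main_window
    return None


print(publisher_parent("main-window", 14))
print(publisher_parent("main-window", 13))
```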
@@ -9,7 +9,7 @@ The Photoshop integration requires two components to work; `extension` and `serv

To install the extension download [Extension Manager Command Line tool (ExManCmd)](https://github.com/Adobe-CEP/Getting-Started-guides/tree/master/Package%20Distribute%20Install#option-2---exmancmd).

```
ExManCmd /install {path to avalon-core}\avalon\photoshop\extension.zxp
ExManCmd /install {path to addon}/api/extension.zxp
```

### Server
@@ -17,16 +17,16 @@ ExManCmd /install {path to avalon-core}\avalon\photoshop\extension.zxp

The easiest way to get the server and Photoshop launch is with:

```
python -c ^"import avalon.photoshop;avalon.photoshop.launch(""C:\Program Files\Adobe\Adobe Photoshop 2020\Photoshop.exe"")^"
python -c ^"import openpype.hosts.photoshop;openpype.hosts.photoshop.launch(""C:\Program Files\Adobe\Adobe Photoshop 2020\Photoshop.exe"")^"
```

`avalon.photoshop.launch` launches the application and server, and also closes the server when Photoshop exits.

## Usage

The Photoshop extension can be found under `Window > Extensions > Avalon`. Once launched you should be presented with a panel like this:
The Photoshop extension can be found under `Window > Extensions > Ayon`. Once launched you should be presented with a panel like this:




## Developing
@ -37,7 +37,7 @@ When developing the extension you can load it [unsigned](https://github.com/Adob
|
|||
When signing the extension you can use this [guide](https://github.com/Adobe-CEP/Getting-Started-guides/tree/master/Package%20Distribute%20Install#package-distribute-install-guide).
|
||||
|
||||
```
|
||||
ZXPSignCmd -selfSignedCert NA NA Avalon Avalon-Photoshop avalon extension.p12
|
||||
ZXPSignCmd -selfSignedCert NA NA Ayon Ayon-Photoshop Ayon extension.p12
|
||||
ZXPSignCmd -sign {path to avalon-core}\avalon\photoshop\extension {path to avalon-core}\avalon\photoshop\extension.zxp extension.p12 avalon
|
||||
```
|
||||
|
||||
|
|
|
|||
|
|
@@ -1,9 +1,9 @@
<?xml version="1.0" encoding="UTF-8"?>
<ExtensionList>
    <Extension Id="com.openpype.PS.panel">
    <Extension Id="io.ynput.PS.panel">
        <HostList>
            <Host Name="PHXS" Port="8078"/>
            <Host Name="FLPR" Port="8078"/>
        </HostList>
    </Extension>
</ExtensionList>
</ExtensionList>
@@ -1,7 +1,7 @@
<?xml version='1.0' encoding='UTF-8'?>
<ExtensionManifest ExtensionBundleId="com.openpype.PS.panel" ExtensionBundleVersion="1.0.12" Version="7.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<ExtensionManifest ExtensionBundleId="io.ynput.PS.panel" ExtensionBundleVersion="1.1.0" Version="7.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <ExtensionList>
        <Extension Id="com.openpype.PS.panel" Version="1.0.1" />
        <Extension Id="io.ynput.PS.panel" Version="1.0.1" />
    </ExtensionList>
    <ExecutionEnvironment>
        <HostList>
@@ -16,7 +16,7 @@
        </RequiredRuntimeList>
    </ExecutionEnvironment>
    <DispatchInfoList>
        <Extension Id="com.openpype.PS.panel">
        <Extension Id="io.ynput.PS.panel">
            <DispatchInfo>
                <Resources>
                    <MainPath>./index.html</MainPath>
@@ -32,7 +32,7 @@
                </Lifecycle>
                <UI>
                    <Type>Panel</Type>
                    <Menu>OpenPype</Menu>
                    <Menu>AYON</Menu>
                    <Geometry>
                        <Size>
                            <Width>300</Width>
@@ -44,7 +44,7 @@
                        </MaxSize>
                    </Geometry>
                    <Icons>
                        <Icon Type="Normal">./icons/avalon-logo-48.png</Icon>
                        <Icon Type="Normal">./icons/ayon_logo.png</Icon>
                    </Icons>
                </UI>
            </DispatchInfo>
Binary file removed (1.3 KiB)
openpype/hosts/photoshop/api/extension/icons/ayon_logo.png: new binary file (3.5 KiB)
Binary file removed (8.6 KiB)
Binary file modified (before: 8.6 KiB, after: 8.6 KiB)
Binary file removed (13 KiB)
openpype/hosts/photoshop/api/panel_failure.png: new binary file (13 KiB)
@@ -170,8 +170,7 @@ class ExtractReview(publish.Extractor):
        # Generate mov.
        mov_path = os.path.join(staging_dir, "review.mov")
        self.log.info(f"Generate mov review: {mov_path}")
        args = [
            ffmpeg_path,
        args = ffmpeg_path + [
            "-y",
            "-i", source_files_pattern,
            "-vf", "pad=ceil(iw/2)*2:ceil(ih/2)*2",
@@ -224,6 +223,7 @@ class ExtractReview(publish.Extractor):
            "stagingDir": staging_dir,
            "tags": ["thumbnail", "delete"]
        })
        instance.data["thumbnailPath"] = thumbnail_path

    def _check_and_resize(self, processed_img_names, source_files_pattern,
                          staging_dir):
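The hunk above changes `args = [ffmpeg_path, ...]` into `args = ffmpeg_path + [...]`, which suggests `ffmpeg_path` became a list of launcher arguments rather than a single path string. A minimal sketch of why list concatenation matters here (the `ffmpeg_path` value and file names are hypothetical):

```python
# Hypothetical launcher value: executable plus default flags.
ffmpeg_path = ["/usr/bin/ffmpeg", "-hide_banner"]

# Old style: only valid when ffmpeg_path is a single string; with a list
# it nests the list as one argument and subprocess would reject it.
broken = [ffmpeg_path, "-y", "-i", "input.%04d.png", "review.mov"]

# New style: flattens the launcher arguments into one command line.
args = ffmpeg_path + ["-y", "-i", "input.%04d.png", "review.mov"]

print(args[0])                      # /usr/bin/ffmpeg
print(len(args))                    # 6
print(isinstance(broken[0], list))  # True
```

This is the usual pattern when a tool path setting may carry extra arguments alongside the executable.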
@@ -257,8 +257,6 @@ class CollectHierarchyContext(pyblish.api.ContextPlugin):
            if 'shot' not in instance.data.get('family', ''):
                continue

            name = instance.data["asset"]

            # get handles
            handle_start = int(instance.data["handleStart"])
            handle_end = int(instance.data["handleEnd"])
@@ -286,6 +284,8 @@ class CollectHierarchyContext(pyblish.api.ContextPlugin):
            parents = instance.data.get('parents', [])
            self.log.debug(f"parents: {pformat(parents)}")

            # Split by '/' for AYON where asset is a path
            name = instance.data["asset"].split("/")[-1]
            actual = {name: in_info}

            for parent in reversed(parents):
@@ -5,6 +5,7 @@ from openpype.pipeline import (
)
from openpype.lib import EnumDef
from openpype.pipeline import colorspace
from openpype.pipeline.publish import KnownPublishError


class CollectColorspace(pyblish.api.InstancePlugin,
@@ -26,18 +27,44 @@ class CollectColorspace(pyblish.api.InstancePlugin,

    def process(self, instance):
        values = self.get_attr_values_from_data(instance.data)
        colorspace = values.get("colorspace", None)
        if colorspace is None:
        colorspace_value = values.get("colorspace", None)
        if colorspace_value is None:
            return

        self.log.debug("Explicit colorspace set to: {}".format(colorspace))
        color_data = colorspace.convert_colorspace_enumerator_item(
            colorspace_value, self.config_items)

        colorspace_name = self._colorspace_name_by_type(color_data)
        self.log.debug("Explicit colorspace name: {}".format(colorspace_name))

        context = instance.context
        for repre in instance.data.get("representations", {}):
            self.set_representation_colorspace(
                representation=repre,
                context=context,
                colorspace=colorspace
                colorspace=colorspace_name
            )

    def _colorspace_name_by_type(self, colorspace_data):
        """
        Returns colorspace name by type

        Arguments:
            colorspace_data (dict): colorspace data

        Returns:
            str: colorspace name
        """
        if colorspace_data["type"] == "colorspaces":
            return colorspace_data["name"]
        elif colorspace_data["type"] == "roles":
            return colorspace_data["colorspace"]
        else:
            raise KnownPublishError(
                (
                    "Collecting of colorspace failed. Used config is missing "
                    "colorspace type: '{}'. Please contact your pipeline TD."
                ).format(colorspace_data['type'])
            )

    @classmethod
@@ -155,8 +155,6 @@ class CollectShotInstance(pyblish.api.InstancePlugin):
            else {}
        )

        asset_name = instance.data["asset"]

        # get handles
        handle_start = int(instance.data["handleStart"])
        handle_end = int(instance.data["handleEnd"])
@@ -177,6 +175,8 @@ class CollectShotInstance(pyblish.api.InstancePlugin):

        parents = instance.data.get('parents', [])

        # Split by '/' for AYON where asset is a path
        asset_name = instance.data["asset"].split("/")[-1]
        actual = {asset_name: in_info}

        for parent in reversed(parents):
@@ -33,7 +33,19 @@ class ValidateColorspace(pyblish.api.InstancePlugin,
            config_path = colorspace_data["config"]["path"]
            if config_path not in config_colorspaces:
                colorspaces = get_ocio_config_colorspaces(config_path)
                config_colorspaces[config_path] = set(colorspaces)
                if not colorspaces.get("colorspaces"):
                    message = (
                        f"OCIO config '{config_path}' does not contain any "
                        "colorspaces. This is an error in the OCIO config. "
                        "Contact your pipeline TD."
                    )
                    raise PublishValidationError(
                        title="Colorspace validation",
                        message=message,
                        description=message
                    )
                config_colorspaces[config_path] = set(
                    colorspaces["colorspaces"])

            colorspace = colorspace_data["colorspace"]
            self.log.debug(
@@ -66,7 +66,7 @@ def get_openpype_attr(session, split_hierarchical=True, query_keys=None):
        "select {}"
        " from CustomAttributeConfiguration"
        # Kept `pype` for Backwards Compatibility
        " where group.name in (\"pype\", \"{}\")"
        " where group.name in (\"pype\", \"ayon\", \"{}\")"
    ).format(", ".join(query_keys), CUST_ATTR_GROUP)
    all_avalon_attr = session.query(cust_attrs_query).all()
    for cust_attr in all_avalon_attr:
@@ -21,7 +21,7 @@ def get_pype_attr(session, split_hierarchical=True):
        "select id, entity_type, object_type_id, is_hierarchical, default"
        " from CustomAttributeConfiguration"
        # Kept `pype` for Backwards Compatibility
        " where group.name in (\"pype\", \"{}\")"
        " where group.name in (\"pype\", \"ayon\", \"{}\")"
    ).format(CUST_ATTR_GROUP)
    all_avalon_attr = session.query(cust_attrs_query).all()
    for cust_attr in all_avalon_attr:
@@ -61,8 +61,9 @@ class CollectHierarchy(pyblish.api.ContextPlugin):
                "resolutionHeight": instance.data["resolutionHeight"],
                "pixelAspect": instance.data["pixelAspect"]
            }

            actual = {instance.data["asset"]: shot_data}
            # Split by '/' for AYON where asset is a path
            name = instance.data["asset"].split("/")[-1]
            actual = {name: shot_data}

            for parent in reversed(instance.data["parents"]):
                next_dict = {}
@@ -68,7 +68,6 @@ class CollectResourcesPath(pyblish.api.InstancePlugin):
    ]

    def process(self, instance):

        anatomy = instance.context.data["anatomy"]

        template_data = copy.deepcopy(instance.data["anatomyData"])
@@ -80,11 +79,18 @@ class CollectResourcesPath(pyblish.api.InstancePlugin):
            "representation": "TEMP"
        })

        # For the first time publish
        if instance.data.get("hierarchy"):
            template_data.update({
                "hierarchy": instance.data["hierarchy"]
            })
        # Add fill keys for editorial publishing creating new entity
        # TODO handle in editorial plugin
        if instance.data.get("newAssetPublishing"):
            if "hierarchy" not in template_data:
                template_data["hierarchy"] = instance.data["hierarchy"]

            if "asset" not in template_data:
                asset_name = instance.data["asset"].split("/")[-1]
                template_data["asset"] = asset_name
                template_data["folder"] = {
                    "name": asset_name
                }

        publish_templates = anatomy.templates_obj["publish"]
        if "folder" in publish_templates:
@@ -204,7 +204,8 @@ class ExtractHierarchyToAYON(pyblish.api.ContextPlugin):

        project_item = None
        project_children_context = None
        for key, value in context.data["hierarchyContext"].items():
        hierarchy_context = copy.deepcopy(context.data["hierarchyContext"])
        for key, value in hierarchy_context.items():
            project_item = copy.deepcopy(value)
            project_children_context = project_item.pop("childs", None)
            project_item["name"] = key
@@ -223,23 +224,24 @@ class ExtractHierarchyToAYON(pyblish.api.ContextPlugin):
        valid_ids = set()

        hierarchy_queue = collections.deque()
        hierarchy_queue.append((project_id, project_children_context))
        hierarchy_queue.append((project_id, "", project_children_context))
        while hierarchy_queue:
            queue_item = hierarchy_queue.popleft()
            parent_id, children_context = queue_item
            parent_id, parent_path, children_context = queue_item
            if not children_context:
                continue

            for asset, asset_info in children_context.items():
            for folder_name, folder_info in children_context.items():
                folder_path = "{}/{}".format(parent_path, folder_name)
                if (
                    asset not in active_folder_paths
                    and not asset_info.get("childs")
                    folder_path not in active_folder_paths
                    and not folder_info.get("childs")
                ):
                    continue
                asset_name = asset.split("/")[-1]

                item_id = uuid.uuid4().hex
                new_item = copy.deepcopy(asset_info)
                new_item["name"] = asset_name
                new_item = copy.deepcopy(folder_info)
                new_item["name"] = folder_name
                new_item["children"] = []
                new_children_context = new_item.pop("childs", None)
                tasks = new_item.pop("tasks", {})
@@ -253,9 +255,11 @@ class ExtractHierarchyToAYON(pyblish.api.ContextPlugin):
                items_by_id[item_id] = new_item
                parent_id_by_item_id[item_id] = parent_id

                if asset in active_folder_paths:
                if folder_path in active_folder_paths:
                    valid_ids.add(item_id)
                hierarchy_queue.append((item_id, new_children_context))
                hierarchy_queue.append(
                    (item_id, folder_path, new_children_context)
                )

        if not valid_ids:
            return None
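The queue items in the hunk above gain a `parent_path` element so each folder's full path can be compared against `active_folder_paths` instead of a bare name. A minimal standalone sketch of that traversal (the hierarchy data and active paths are hypothetical; the `childs` key shape is taken from the plugin):

```python
import collections

# Hypothetical hierarchy using the 'childs' key seen in the plugin.
hierarchy = {
    "shots": {"childs": {"sh010": {"childs": None}, "sh020": {"childs": None}}},
}
active_folder_paths = {"/shots/sh010"}

visited = []
queue = collections.deque()
queue.append(("", hierarchy))
while queue:
    parent_path, children = queue.popleft()
    if not children:
        continue
    for name, info in children.items():
        # Build the full path from the parent's path, as the diff does.
        folder_path = "{}/{}".format(parent_path, name)
        # Skip folders that are inactive and have nothing to recurse into.
        if folder_path not in active_folder_paths and not info.get("childs"):
            continue
        visited.append(folder_path)
        queue.append((folder_path, info.get("childs")))

print(visited)  # ['/shots', '/shots/sh010']
```

Carrying the accumulated path in the queue item is what lets a breadth-first walk match on full paths without a separate lookup structure.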
@@ -5,7 +5,21 @@
    pull into a scene.

    This one is used only as image describing content of published item and
    shows up only in Loader in right column section.
    shows up only in Loader or WebUI.

    Instance must have 'published_representations' to
    be able to integrate thumbnail.
    Possible sources of thumbnail paths:
    - instance.data["thumbnailPath"]
    - representation with 'thumbnail' name in 'published_representations'
    - context.data["thumbnailPath"]

    Notes:
        Issue with 'thumbnail' representation is that we most likely don't
        want to integrate it as representation. Integrated representation
        is polluting Loader and database without real usage. That's why
        they usually have 'delete' tag to skip the integration.

"""

import os
@@ -92,11 +106,8 @@ class IntegrateThumbnailsAYON(pyblish.api.ContextPlugin):
                continue

            # Find thumbnail path on instance
            thumbnail_source = instance.data.get("thumbnailSource")
            thumbnail_path = instance.data.get("thumbnailPath")
            thumbnail_path = (
                thumbnail_source
                or thumbnail_path
                instance.data.get("thumbnailPath")
                or self._get_instance_thumbnail_path(published_repres)
            )
            if thumbnail_path:
@@ -25,7 +25,8 @@
        "instance_attributes": [
            "reviewable",
            "farm_rendering"
        ]
        ],
        "image_format": "exr"
    }
},
"publish": {
@@ -62,6 +62,12 @@
            "Main"
        ]
    },
    "CreateMantraIFD": {
        "enabled": true,
        "default_variants": [
            "Main"
        ]
    },
    "CreateMantraROP": {
        "enabled": true,
        "default_variants": [
@@ -137,14 +143,14 @@
        }
    },
    "publish": {
        "CollectAssetHandles": {
            "use_asset_handles": true
        },
        "CollectChunkSize": {
            "enabled": true,
            "optional": true,
            "chunk_size": 999999
        },
        "CollectAssetHandles": {
            "use_asset_handles": true
        },
        "ValidateContainers": {
            "enabled": true,
            "optional": true,
@@ -436,7 +436,7 @@
        "viewTransform": "sRGB gamma"
    }
},
"mel_workspace": "workspace -fr \"shaders\" \"renderData/shaders\";\nworkspace -fr \"images\" \"renders/maya\";\nworkspace -fr \"particles\" \"particles\";\nworkspace -fr \"mayaAscii\" \"\";\nworkspace -fr \"mayaBinary\" \"\";\nworkspace -fr \"scene\" \"\";\nworkspace -fr \"alembicCache\" \"cache/alembic\";\nworkspace -fr \"renderData\" \"renderData\";\nworkspace -fr \"sourceImages\" \"sourceimages\";\nworkspace -fr \"fileCache\" \"cache/nCache\";\n",
"mel_workspace": "workspace -fr \"shaders\" \"renderData/shaders\";\nworkspace -fr \"images\" \"renders/maya\";\nworkspace -fr \"particles\" \"particles\";\nworkspace -fr \"mayaAscii\" \"\";\nworkspace -fr \"mayaBinary\" \"\";\nworkspace -fr \"scene\" \"\";\nworkspace -fr \"alembicCache\" \"cache/alembic\";\nworkspace -fr \"renderData\" \"renderData\";\nworkspace -fr \"sourceImages\" \"sourceimages\";\nworkspace -fr \"fileCache\" \"cache/nCache\";\nworkspace -fr \"autoSave\" \"autosave\";",
"ext_mapping": {
    "model": "ma",
    "mayaAscii": "ma",
@@ -80,6 +80,19 @@
                "farm_rendering": "Farm rendering"
            }
        ]
    },
    {
        "key": "image_format",
        "label": "Output Image Format",
        "type": "enum",
        "multiselect": false,
        "enum_items": [
            {"exr": "exr"},
            {"tga": "tga"},
            {"png": "png"},
            {"tif": "tif"},
            {"jpg": "jpg"}
        ]
    }
]
}
@@ -69,6 +69,10 @@
        "key": "CreateKarmaROP",
        "label": "Create Karma ROP"
    },
    {
        "key": "CreateMantraIFD",
        "label": "Create Mantra IFD"
    },
    {
        "key": "CreateMantraROP",
        "label": "Create Mantra ROP"
@@ -25,6 +25,31 @@
        }
    ]
},
{
    "type": "dict",
    "collapsible": true,
    "checkbox_key": "enabled",
    "key": "CollectChunkSize",
    "label": "Collect Chunk Size",
    "is_group": true,
    "children": [
        {
            "type": "boolean",
            "key": "enabled",
            "label": "Enabled"
        },
        {
            "type": "boolean",
            "key": "optional",
            "label": "Optional"
        },
        {
            "type": "number",
            "key": "chunk_size",
            "label": "Frames Per Task"
        }
    ]
},
{
    "type": "label",
    "label": "Validators"
@@ -55,31 +80,6 @@
        }
    ]
},
{
    "type": "dict",
    "collapsible": true,
    "checkbox_key": "enabled",
    "key": "CollectChunkSize",
    "label": "Collect Chunk Size",
    "is_group": true,
    "children": [
        {
            "type": "boolean",
            "key": "enabled",
            "label": "Enabled"
        },
        {
            "type": "boolean",
            "key": "optional",
            "label": "Optional"
        },
        {
            "type": "number",
            "key": "chunk_size",
            "label": "Frames Per Task"
        }
    ]
},
{
    "type": "dict",
    "collapsible": true,
@@ -512,52 +512,58 @@ QAbstractItemView::item:selected:hover {
}

/* Row colors (alternate colors) are from left - right */
QAbstractItemView:branch {
    background: transparent;
QTreeView::branch {
    background: {color:bg-view};
}
QTreeView::branch:hover {
    background: {color:bg-view};
}
QTreeView::branch:selected {
    background: {color:bg-view};
}

QAbstractItemView::branch:open:has-children:!has-siblings,
QAbstractItemView::branch:open:has-children:has-siblings {
    border-image: none;
    image: url(:/openpype/images/branch_open.png);
    background: transparent;
    background: {color:bg-view};
}
QAbstractItemView::branch:open:has-children:!has-siblings:hover,
QAbstractItemView::branch:open:has-children:has-siblings:hover {
    border-image: none;
    image: url(:/openpype/images/branch_open_on.png);
    background: transparent;
    background: {color:bg-view};
}

QAbstractItemView::branch:has-children:!has-siblings:closed,
QAbstractItemView::branch:closed:has-children:has-siblings {
    border-image: none;
    image: url(:/openpype/images/branch_closed.png);
    background: transparent;
    background: {color:bg-view};
}
QAbstractItemView::branch:has-children:!has-siblings:closed:hover,
QAbstractItemView::branch:closed:has-children:has-siblings:hover {
    border-image: none;
    image: url(:/openpype/images/branch_closed_on.png);
    background: transparent;
    background: {color:bg-view};
}

QAbstractItemView::branch:has-siblings:!adjoins-item {
    border-image: none;
    image: url(:/openpype/images/transparent.png);
    background: transparent;
    background: {color:bg-view};
}

QAbstractItemView::branch:has-siblings:adjoins-item {
    border-image: none;
    image: url(:/openpype/images/transparent.png);
    background: transparent;
    background: {color:bg-view};
}

QAbstractItemView::branch:!has-children:!has-siblings:adjoins-item {
    border-image: none;
    image: url(:/openpype/images/transparent.png);
    background: transparent;
    background: {color:bg-view};
}

CompleterView {
@@ -44,7 +44,7 @@ def version_item_from_entity(version):
    # NOTE There is also 'updatedAt'; should that be used instead?
    # TODO skip conversion - converting to '%Y%m%dT%H%M%SZ' is because
    # 'PrettyTimeDelegate' expects it
    created_at = arrow.get(version["createdAt"])
    created_at = arrow.get(version["createdAt"]).to("local")
    published_time = created_at.strftime("%Y%m%dT%H%M%SZ")
    author = version["author"]
    version_num = version["version"]
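The change above appends `.to("local")` so the `createdAt` timestamp is displayed in the user's timezone instead of UTC. arrow is a third-party library; a stdlib-only sketch of the same conversion (the example timestamp is hypothetical):

```python
from datetime import datetime

# Server stores timestamps in UTC (ISO 8601 with offset).
created_at_utc = datetime.fromisoformat("2023-10-05T12:00:00+00:00")

# astimezone() with no argument converts to the local timezone,
# mirroring what arrow's .to("local") does.
created_at_local = created_at_utc.astimezone()

# Both represent the same instant; only the display offset differs.
print(created_at_local == created_at_utc)  # True
print(created_at_local.strftime("%Y%m%dT%H%M%SZ"))
```

Note the trailing "Z" in the format string conventionally marks UTC, so formatting a localized value this way is a display quirk the surrounding TODO comment already hints at.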
@@ -606,7 +606,7 @@ class PublishWorkfilesModel:
            print("Failed to format workfile path: {}".format(exc))

        dirpath, filename = os.path.split(workfile_path)
        created_at = arrow.get(repre_entity["createdAt"])
        created_at = arrow.get(repre_entity["createdAt"]).to("local")
        return FileItem(
            dirpath,
            filename,
@@ -75,8 +75,6 @@ from ._api import (
    download_installer,
    upload_installer,

    get_dependencies_info,
    update_dependency_info,
    get_dependency_packages,
    create_dependency_package,
    update_dependency_package,
@@ -277,8 +275,6 @@ __all__ = (
    "download_installer",
    "upload_installer",

    "get_dependencies_info",
    "update_dependency_info",
    "get_dependency_packages",
    "create_dependency_package",
    "update_dependency_package",
openpype/vendor/python/common/ayon_api/_api.py (vendored)
@@ -611,16 +611,6 @@ def upload_installer(*args, **kwargs):


# Dependency packages
def get_dependencies_info(*args, **kwargs):
    con = get_server_api_connection()
    return con.get_dependencies_info(*args, **kwargs)


def update_dependency_info(*args, **kwargs):
    con = get_server_api_connection()
    return con.update_dependency_info(*args, **kwargs)


def download_dependency_package(*args, **kwargs):
    con = get_server_api_connection()
    return con.download_dependency_package(*args, **kwargs)
@@ -7,9 +7,21 @@ import six
from ._api import get_server_api_connection
from .utils import create_entity_id, convert_entity_id, slugify_string

UNKNOWN_VALUE = object()
PROJECT_PARENT_ID = object()
_NOT_SET = object()

class _CustomNone(object):
    def __init__(self, name=None):
        self._name = name or "CustomNone"

    def __repr__(self):
        return "<{}>".format(self._name)

    def __bool__(self):
        return False


UNKNOWN_VALUE = _CustomNone("UNKNOWN_VALUE")
PROJECT_PARENT_ID = _CustomNone("PROJECT_PARENT_ID")
_NOT_SET = _CustomNone("_NOT_SET")


class EntityHub(object):
@@ -1284,7 +1296,10 @@ class BaseEntity(object):
            changes["name"] = self._name

        if self._entity_hub.allow_data_changes:
            if self._orig_data != self._data:
            if (
                self._data is not UNKNOWN_VALUE
                and self._orig_data != self._data
            ):
                changes["data"] = self._data

        if self._orig_thumbnail_id != self._thumbnail_id:
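The hunk above replaces bare `object()` sentinels with a `_CustomNone` class that is falsy and has a readable repr, while identity checks like `value is UNKNOWN_VALUE` keep working. A standalone reproduction of that pattern (the `get_value` helper is hypothetical):

```python
class _CustomNone(object):
    """Falsy sentinel with a readable repr, as added in the diff."""

    def __init__(self, name=None):
        self._name = name or "CustomNone"

    def __repr__(self):
        return "<{}>".format(self._name)

    def __bool__(self):
        return False


UNKNOWN_VALUE = _CustomNone("UNKNOWN_VALUE")


def get_value(data, key):
    # The sentinel distinguishes "key missing" from "key stored as None".
    value = data.get(key, UNKNOWN_VALUE)
    if value is UNKNOWN_VALUE:
        return "missing"
    return value


print(get_value({"a": None}, "a"))  # None
print(get_value({}, "a"))           # missing
print(bool(UNKNOWN_VALUE), repr(UNKNOWN_VALUE))  # False <UNKNOWN_VALUE>
```

The readable repr is the practical gain: a stray sentinel in a log prints as `<UNKNOWN_VALUE>` instead of an anonymous `<object object at 0x...>`.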
openpype/vendor/python/common/ayon_api/server_api.py (vendored)
@@ -8,6 +8,7 @@ import collections
import platform
import copy
import uuid
import warnings
from contextlib import contextmanager

import six
@@ -1022,17 +1023,10 @@ class ServerAPI(object):
        for attr, filter_value in filters.items():
            query.set_variable_value(attr, filter_value)

        # Backwards compatibility for server 0.3.x
        # - will be removed in future releases
        major, minor, _, _, _ = self.server_version_tuple
        access_groups_field = "accessGroups"
        if major == 0 and minor <= 3:
            access_groups_field = "roles"

        for parsed_data in query.continuous_query(self):
            for user in parsed_data["users"]:
                user[access_groups_field] = json.loads(
                    user[access_groups_field])
                user["accessGroups"] = json.loads(
                    user["accessGroups"])
                yield user

    def get_user(self, username=None):
@@ -2044,14 +2038,6 @@ class ServerAPI(object):

        elif entity_type == "user":
            entity_type_defaults = set(DEFAULT_USER_FIELDS)
            # Backwards compatibility for server 0.3.x
            # - will be removed in future releases
            major, minor, _, _, _ = self.server_version_tuple
            if major == 0 and minor <= 3:
                entity_type_defaults.discard("accessGroups")
                entity_type_defaults.discard("defaultAccessGroups")
                entity_type_defaults.add("roles")
                entity_type_defaults.add("defaultRoles")

        else:
            raise ValueError("Unknown entity type \"{}\"".format(entity_type))
@@ -2306,125 +2292,8 @@ class ServerAPI(object):
            progress=progress
        )

    def get_dependencies_info(self):
        """Information about dependency packages on server.

        Example data structure:
            {
                "packages": [
                    {
                        "name": str,
                        "platform": str,
                        "checksum": str,
                        "sources": list[dict[str, Any]],
                        "supportedAddons": dict[str, str],
                        "pythonModules": dict[str, str]
                    }
                ],
                "productionPackage": str
            }

        Deprecated:
            Deprecated since server version 0.2.1. Use
            'get_dependency_packages' instead.

        Returns:
            dict[str, Any]: Information about dependency packages known for
                server.
        """

        major, minor, patch, _, _ = self.server_version_tuple
        if major == 0 and (minor < 2 or (minor == 2 and patch < 1)):
            result = self.get("dependencies")
            return result.data
        packages = self.get_dependency_packages()
        packages["productionPackage"] = None
        return packages

    def update_dependency_info(
        self,
        name,
        platform_name,
        size,
        checksum,
        checksum_algorithm=None,
        supported_addons=None,
        python_modules=None,
        sources=None
    ):
        """Update or create dependency package for identifiers.

        The endpoint can be used to create or update dependency package.

        Deprecated:
            Deprecated since server version 0.2.1. Use
            'create_dependency_package' instead.

        Args:
            name (str): Name of dependency package.
            platform_name (Literal["windows", "linux", "darwin"]): Platform
                for which is dependency package targeted.
            size (int): Size of dependency package in bytes.
            checksum (str): Checksum of archive file where dependencies are.
            checksum_algorithm (Optional[str]): Algorithm used to calculate
                checksum. By default, is used 'md5' (defined by server).
            supported_addons (Optional[dict[str, str]]): Name of addons for
                which was the package created.
                '{"<addon name>": "<addon version>", ...}'
            python_modules (Optional[dict[str, str]]): Python modules in
                dependencies package.
                '{"<module name>": "<module version>", ...}'
            sources (Optional[list[dict[str, Any]]]): Information about
                sources where dependency package is available.
        """

        kwargs = {
            key: value
            for key, value in (
                ("checksumAlgorithm", checksum_algorithm),
                ("supportedAddons", supported_addons),
                ("pythonModules", python_modules),
                ("sources", sources),
            )
            if value
        }

        response = self.put(
            "dependencies",
            name=name,
            platform=platform_name,
            size=size,
            checksum=checksum,
            **kwargs
        )
        response.raise_for_status("Failed to create/update dependency")
        return response.data

    def _get_dependency_package_route(
        self, filename=None, platform_name=None
    ):
        major, minor, patch, _, _ = self.server_version_tuple
        if (major, minor, patch) <= (0, 2, 0):
            # Backwards compatibility for AYON server 0.2.0 and lower
            self.log.warning((
                "Using deprecated dependency package route."
                " Please update your AYON server to version 0.2.1 or higher."
                " Backwards compatibility for this route will be removed"
                " in future releases of ayon-python-api."
            ))
            if platform_name is None:
                platform_name = platform.system().lower()
            base = "dependencies"
            if not filename:
                return base
            return "{}/{}/{}".format(base, filename, platform_name)

        if (major, minor) <= (0, 3):
            endpoint = "desktop/dependency_packages"
        else:
            endpoint = "desktop/dependencyPackages"

    def _get_dependency_package_route(self, filename=None):
        endpoint = "desktop/dependencyPackages"
        if filename:
            return "{}/{}".format(endpoint, filename)
        return endpoint
@@ -2535,14 +2404,21 @@ class ServerAPI(object):
        """Remove dependency package for specific platform.

        Args:
            filename (str): Filename of dependency package. Or name of package
                for server version 0.2.0 or lower.
            platform_name (Optional[str]): Which platform of the package
                should be removed. Current platform is used if not passed.
                Deprecated since version 0.2.1
            filename (str): Filename of dependency package.
            platform_name (Optional[str]): Deprecated.
        """

        route = self._get_dependency_package_route(filename, platform_name)
        if platform_name is not None:
            warnings.warn(
                (
                    "Argument 'platform_name' is deprecated in"
                    " 'delete_dependency_package'. The argument will be"
                    " removed, please modify your code accordingly."
                ),
                DeprecationWarning
            )

        route = self._get_dependency_package_route(filename)
        response = self.delete(route)
        response.raise_for_status("Failed to delete dependency file")
        return response.data
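The deprecated `platform_name` argument above now triggers a `DeprecationWarning` while the call keeps working. A minimal sketch of that pattern and of how a caller can observe the warning (the `delete_package` function is a hypothetical stand-in):

```python
import warnings


def delete_package(filename, platform_name=None):
    """Hypothetical stand-in for the deprecated-argument pattern above."""
    if platform_name is not None:
        warnings.warn(
            "Argument 'platform_name' is deprecated in 'delete_package'."
            " The argument will be removed, please modify your code.",
            DeprecationWarning
        )
    return "deleted:{}".format(filename)


# Old callers still work, but the warning is observable:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = delete_package("pkg.zip", platform_name="windows")

print(result)       # deleted:pkg.zip
print(len(caught))  # 1
print(caught[0].category is DeprecationWarning)  # True
```

Warning instead of raising is the conventional way to phase out an argument across releases without breaking existing integrations.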
@@ -2567,18 +2443,25 @@ class ServerAPI(object):
                to download.
            dst_directory (str): Where the file should be downloaded.
            dst_filename (str): Name of destination filename.
            platform_name (Optional[str]): Name of platform for which the
                dependency package is targeted. Default value is
                current platform. Deprecated since server version 0.2.1.
            platform_name (Optional[str]): Deprecated.
            chunk_size (Optional[int]): Download chunk size.
            progress (Optional[TransferProgress]): Object that gives ability
                to track download progress.

        Returns:
            str: Filepath to downloaded file.
        """
        """

        route = self._get_dependency_package_route(src_filename, platform_name)
        if platform_name is not None:
            warnings.warn(
                (
                    "Argument 'platform_name' is deprecated in"
                    " 'download_dependency_package'. The argument will be"
                    " removed, please modify your code accordingly."
                ),
                DeprecationWarning
            )
        route = self._get_dependency_package_route(src_filename)
        package_filepath = os.path.join(dst_directory, dst_filename)
        self.download_file(
            route,
@@ -2597,32 +2480,24 @@ class ServerAPI(object):
            src_filepath (str): Path to a package file.
            dst_filename (str): Dependency package filename or name of package
                for server version 0.2.0 or lower. Must be unique.
            platform_name (Optional[str]): For which platform is the
                package targeted. Deprecated since server version 0.2.1.
            platform_name (Optional[str]): Deprecated.
            progress (Optional[TransferProgress]): Object to keep track about
                upload state.
        """

        route = self._get_dependency_package_route(dst_filename, platform_name)
        if platform_name is not None:
            warnings.warn(
                (
                    "Argument 'platform_name' is deprecated in"
                    " 'upload_dependency_package'. The argument will be"
                    " removed, please modify your code accordingly."
                ),
                DeprecationWarning
            )

        route = self._get_dependency_package_route(dst_filename)
        self.upload_file(route, src_filepath, progress=progress)

    def create_dependency_package_basename(self, platform_name=None):
        """Create basename for dependency package file.

        Deprecated:
            Use 'create_dependency_package_basename' from `ayon_api` or
            `ayon_api.utils` instead.

        Args:
            platform_name (Optional[str]): Name of platform for which the
                bundle is targeted. Default value is current platform.

        Returns:
            str: Dependency package name with timestamp and platform.
        """

        return create_dependency_package_basename(platform_name)

    def upload_addon_zip(self, src_filepath, progress=None):
        """Upload addon zip file to server.

@@ -2650,14 +2525,6 @@ class ServerAPI(object):
        )
        return response.json()

    def _get_bundles_route(self):
        major, minor, patch, _, _ = self.server_version_tuple
        # Backwards compatibility for AYON server 0.3.0
        # - first version where bundles were available
        if major == 0 and minor == 3 and patch == 0:
            return "desktop/bundles"
        return "bundles"

    def get_bundles(self):
        """Server bundles with basic information.

@@ -2688,7 +2555,7 @@ class ServerAPI(object):
            dict[str, Any]: Server bundles with basic information.
        """

        response = self.get(self._get_bundles_route())
        response = self.get("bundles")
        response.raise_for_status()
        return response.data
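The removed `_get_bundles_route` above branched on `server_version_tuple` because AYON server 0.3.0, the first release with bundles, served them under `desktop/bundles`. A standalone sketch of that version check (illustrative helper, not part of `ayon_api`):

```python
def get_bundles_route(server_version_tuple):
    # Mirrors the removed backwards-compatibility branch: AYON server
    # exactly 0.3.0 exposed bundles under "desktop/bundles"; every later
    # version serves plain "bundles".
    major, minor, patch = server_version_tuple[:3]
    if major == 0 and minor == 3 and patch == 0:
        return "desktop/bundles"
    return "bundles"

print(get_bundles_route((0, 3, 0, None, None)))  # desktop/bundles
print(get_bundles_route((1, 0, 2, None, None)))  # bundles
```

Dropping the helper in favor of a hardcoded `"bundles"` route is what lets the hunks below inline the endpoint string.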
@@ -2731,7 +2598,7 @@ class ServerAPI(object):
        if value is not None:
            body[key] = value

        response = self.post(self._get_bundles_route(), **body)
        response = self.post("bundles", **body)
        response.raise_for_status()

    def update_bundle(

@@ -2766,7 +2633,7 @@ class ServerAPI(object):
            if value is not None
        }
        response = self.patch(
            "{}/{}".format(self._get_bundles_route(), bundle_name),
            "{}/{}".format("bundles", bundle_name),
            **body
        )
        response.raise_for_status()

@@ -2779,7 +2646,7 @@ class ServerAPI(object):
        """

        response = self.delete(
            "{}/{}".format(self._get_bundles_route(), bundle_name)
            "{}/{}".format("bundles", bundle_name)
        )
        response.raise_for_status()
@@ -3102,16 +2969,13 @@ class ServerAPI(object):
            - test how it behaves if there is not any production/staging
                bundle.

        Warnings:
            For AYON server < 0.3.0 bundle name will be ignored.

        Example output:
            {
                "addons": [
                    {
                        "name": "addon-name",
                        "version": "addon-version",
                        "settings": {...}
                        "settings": {...},
                        "siteSettings": {...}
                    }
                ]

@@ -3121,7 +2985,6 @@ class ServerAPI(object):
            dict[str, Any]: All settings for single bundle.
        """

        major, minor, _, _, _ = self.server_version_tuple
        query_values = {
            key: value
            for key, value in (
@@ -3137,21 +3000,8 @@ class ServerAPI(object):
        if site_id:
            query_values["site_id"] = site_id

        if major == 0 and minor >= 3:
            url = "settings"
        else:
            # Backward compatibility for AYON server < 0.3.0
            url = "settings/addons"
            query_values.pop("bundle_name", None)
            for new_key, old_key in (
                ("project_name", "project"),
                ("site_id", "site"),
            ):
                if new_key in query_values:
                    query_values[old_key] = query_values.pop(new_key)

        query = prepare_query_string(query_values)
        response = self.get("{}{}".format(url, query))
        response = self.get("settings{}".format(query))
        response.raise_for_status()
        return response.data
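The removed `else` branch above shows what talking to an AYON server older than 0.3.0 required: drop the unsupported `bundle_name` key and rename query keys to their old spellings. A sketch of just that remapping, with a hypothetical helper name:

```python
def remap_for_old_server(query_values):
    # For AYON server < 0.3.0: bundles did not exist, and the settings
    # endpoint used "project"/"site" instead of "project_name"/"site_id".
    # (Hypothetical helper mirroring the branch removed above.)
    query_values = dict(query_values)
    query_values.pop("bundle_name", None)
    for new_key, old_key in (
        ("project_name", "project"),
        ("site_id", "site"),
    ):
        if new_key in query_values:
            query_values[old_key] = query_values.pop(new_key)
    return query_values

print(remap_for_old_server(
    {"bundle_name": "prod", "project_name": "demo", "site_id": "s1"}
))  # {'project': 'demo', 'site': 's1'}
```

With the old servers out of support, the cleanup collapses the endpoint to a single `"settings"` route and the remapping disappears.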
@@ -3194,15 +3044,10 @@ class ServerAPI(object):
            use_site=use_site
        )
        if only_values:
            major, minor, patch, _, _ = self.server_version_tuple
            if major == 0 and minor >= 3:
                output = {
                    addon["name"]: addon["settings"]
                    for addon in output["addons"]
                }
            else:
                # Backward compatibility for AYON server < 0.3.0
                output = output["settings"]
            output = {
                addon["name"]: addon["settings"]
                for addon in output["addons"]
            }
        return output

    def get_addons_project_settings(

@@ -3263,15 +3108,10 @@ class ServerAPI(object):
            use_site=use_site
        )
        if only_values:
            major, minor, patch, _, _ = self.server_version_tuple
            if major == 0 and minor >= 3:
                output = {
                    addon["name"]: addon["settings"]
                    for addon in output["addons"]
                }
            else:
                # Backward compatibility for AYON server < 0.3.0
                output = output["settings"]
            output = {
                addon["name"]: addon["settings"]
                for addon in output["addons"]
            }
        return output

    def get_addons_settings(
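After the cleanup, the `only_values` path always reduces the response to a `name -> settings` mapping. A sketch of the comprehension with sample payload data (the payload shape follows the `Example output` docstring above; the values themselves are made up):

```python
def extract_addon_values(output):
    # With 'only_values' the full payload is reduced to a mapping of
    # addon name -> addon settings; sketch of the comprehension kept
    # after the version branch was removed.
    return {addon["name"]: addon["settings"] for addon in output["addons"]}

payload = {"addons": [
    {"name": "core", "version": "1.0", "settings": {"a": 1}},
    {"name": "maya", "version": "0.2", "settings": {"b": 2}},
]}
print(extract_addon_values(payload))  # {'core': {'a': 1}, 'maya': {'b': 2}}
```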
@@ -1,2 +1,2 @@
"""Package declaring Python API for Ayon server."""
__version__ = "1.0.0-rc.1"
__version__ = "1.0.0-rc.3"

@@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring Pype version."""
__version__ = "3.17.7-nightly.7"
__version__ = "3.18.2-nightly.2"

@@ -1,6 +1,6 @@
[tool.poetry]
name = "OpenPype"
version = "3.17.6" # OpenPype
version = "3.18.1" # OpenPype
description = "Open VFX and Animation pipeline with support."
authors = ["OpenPype Team <info@openpype.io>"]
license = "MIT License"
@@ -25,6 +25,16 @@ def _create_saver_instance_attributes_enum():
    ]


def _image_format_enum():
    return [
        {"value": "exr", "label": "exr"},
        {"value": "tga", "label": "tga"},
        {"value": "png", "label": "png"},
        {"value": "tif", "label": "tif"},
        {"value": "jpg", "label": "jpg"},
    ]


class CreateSaverPluginModel(BaseSettingsModel):
    _isGroup = True
    temp_rendering_path_template: str = Field(

@@ -39,6 +49,10 @@ class CreateSaverPluginModel(BaseSettingsModel):
        enum_resolver=_create_saver_instance_attributes_enum,
        title="Instance attributes"
    )
    image_format: str = Field(
        enum_resolver=_image_format_enum,
        title="Output Image Format"
    )


class CreatPluginsModel(BaseSettingsModel):

@@ -89,7 +103,8 @@ DEFAULT_VALUES = {
            "instance_attributes": [
                "reviewable",
                "farm_rendering"
            ]
            ],
            "image_format": "exr"
        }
    }
}
@@ -1 +1 @@
__version__ = "0.1.0"
__version__ = "0.1.1"

@@ -52,6 +52,9 @@ class CreatePluginsModel(BaseSettingsModel):
    CreateKarmaROP: CreatorModel = Field(
        default_factory=CreatorModel,
        title="Create Karma ROP")
    CreateMantraIFD: CreatorModel = Field(
        default_factory=CreatorModel,
        title="Create Mantra IFD")
    CreateMantraROP: CreatorModel = Field(
        default_factory=CreatorModel,
        title="Create Mantra ROP")

@@ -114,6 +117,10 @@ DEFAULT_HOUDINI_CREATE_SETTINGS = {
        "enabled": True,
        "default_variants": ["Main"]
    },
    "CreateMantraIFD": {
        "enabled": True,
        "default_variants": ["Main"]
    },
    "CreateMantraROP": {
        "enabled": True,
        "default_variants": ["Main"]
@@ -13,6 +13,14 @@ class CollectAssetHandlesModel(BaseSettingsModel):
        title="Use asset handles")


class CollectChunkSizeModel(BaseSettingsModel):
    """Collect Chunk Size."""
    enabled: bool = Field(title="Enabled")
    optional: bool = Field(title="Optional")
    chunk_size: int = Field(
        title="Frames Per Task")


class ValidateWorkfilePathsModel(BaseSettingsModel):
    enabled: bool = Field(title="Enabled")
    optional: bool = Field(title="Optional")

@@ -38,6 +46,10 @@ class PublishPluginsModel(BaseSettingsModel):
        title="Collect Asset Handles.",
        section="Collectors"
    )
    CollectChunkSize: CollectChunkSizeModel = Field(
        default_factory=CollectChunkSizeModel,
        title="Collect Chunk Size."
    )
    ValidateContainers: BasicValidateModel = Field(
        default_factory=BasicValidateModel,
        title="Validate Latest Containers.",

@@ -63,6 +75,11 @@ DEFAULT_HOUDINI_PUBLISH_SETTINGS = {
    "CollectAssetHandles": {
        "use_asset_handles": True
    },
    "CollectChunkSize": {
        "enabled": True,
        "optional": True,
        "chunk_size": 999999
    },
    "ValidateContainers": {
        "enabled": True,
        "optional": True,

@@ -1 +1 @@
__version__ = "0.2.9"
__version__ = "0.2.10"
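The new `CollectChunkSize` settings above expose a `chunk_size` ("Frames Per Task") value; the default of 999999 effectively keeps the whole frame range in a single task. How such a chunk size splits a frame range into farm tasks can be sketched as follows (illustrative only, not the actual submission code):

```python
def chunk_frames(start, end, chunk_size):
    # Split an inclusive frame range into lists of at most chunk_size
    # frames; each sub-list would become one farm task.
    frames = range(start, end + 1)
    return [
        list(frames[i:i + chunk_size])
        for i in range(0, len(frames), chunk_size)
    ]

print(chunk_frames(1001, 1010, 4))
# [[1001, 1002, 1003, 1004], [1005, 1006, 1007, 1008], [1009, 1010]]
```

With the 999999 default, `chunk_frames(start, end, 999999)` returns a single chunk for any realistic shot length.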
@@ -97,7 +97,7 @@ DEFAULT_MEL_WORKSPACE_SETTINGS = "\n".join((
    'workspace -fr "renderData" "renderData";',
    'workspace -fr "sourceImages" "sourceimages";',
    'workspace -fr "fileCache" "cache/nCache";',
    'workspace -fr "autoSave" "autosave"',
    'workspace -fr "autoSave" "autosave";',
    '',
))
@@ -29,7 +29,7 @@ class ColorCodeMappings(BaseSettingsModel):
    )

    layer_name_regex: list[str] = Field(
        "",
        default_factory=list,
        title="Layer name regex"
    )
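The `ColorCodeMappings` fix above replaces a stray `""` positional default on a `list[str]` field with `default_factory=list`. The same rule holds for stdlib dataclasses, which makes the motivation easy to demonstrate without pydantic: a factory gives each instance its own fresh list instead of a shared (or wrongly typed) default.

```python
from dataclasses import dataclass, field

@dataclass
class ColorCodeMapping:
    # default_factory=list: each instance gets its own empty list,
    # mirroring the fix in the settings model above.
    layer_name_regex: list = field(default_factory=list)

a = ColorCodeMapping()
b = ColorCodeMapping()
a.layer_name_regex.append(".*BG.*")
print(b.layer_name_regex)  # [] -- 'b' is unaffected by mutating 'a'
```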
@@ -1,3 +1,3 @@
# -*- coding: utf-8 -*-
"""Package declaring addon version."""
__version__ = "0.1.0"
__version__ = "0.1.1"

4
start.py
@@ -366,8 +366,8 @@ def run_disk_mapping_commands(settings):
            destination = destination.replace("/", "\\").rstrip("\\")
            source = source.replace("/", "\\").rstrip("\\")
            # Add slash after ':' ('G:' -> 'G:\')
            if destination.endswith(":"):
                destination += "\\"
            if source.endswith(":"):
                source += "\\"
        else:
            destination = destination.rstrip("/")
            source = source.rstrip("/")
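The `start.py` hunk above adds a drive-letter check after the `rstrip`, so a mapping target reduced to a bare `G:` gets its trailing backslash back (on Windows, `G:` refers to the current directory on that drive rather than its root). A self-contained sketch of the normalization, with the platform branch turned into a parameter (hypothetical helper, not the actual function):

```python
def normalize_pair(source, destination, is_windows=True):
    # Windows: normalize to backslashes and strip trailing separators,
    # then restore the root slash for bare drive letters ('G:' -> 'G:\').
    if is_windows:
        destination = destination.replace("/", "\\").rstrip("\\")
        source = source.replace("/", "\\").rstrip("\\")
        if destination.endswith(":"):
            destination += "\\"
        if source.endswith(":"):
            source += "\\"
    else:
        # POSIX paths only need trailing slashes stripped.
        destination = destination.rstrip("/")
        source = source.rstrip("/")
    return source, destination

print(normalize_pair("G:/", "//server/share/"))
# ('G:\\', '\\\\server\\share')
```

Without the check, `"G:/".rstrip("\\")` collapses to `G:`, which `subst`/mapping commands would interpret relative to the current directory.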
@@ -60,7 +60,7 @@ class TestDeadlinePublishInAfterEffects(AEDeadlinePublishTestClass):
                            name="renderTest_taskMain"))

        failures.append(
            DBAssert.count_of_types(dbcon, "representation", 4))
            DBAssert.count_of_types(dbcon, "representation", 3))

        additional_args = {"context.subset": "workfileTest_task",
                           "context.ext": "aep"}

@@ -77,7 +77,7 @@ class TestDeadlinePublishInAfterEffects(AEDeadlinePublishTestClass):
        additional_args = {"context.subset": "renderTest_taskMain",
                           "name": "thumbnail"}
        failures.append(
            DBAssert.count_of_types(dbcon, "representation", 1,
            DBAssert.count_of_types(dbcon, "representation", 0,
                                    additional_args=additional_args))

        additional_args = {"context.subset": "renderTest_taskMain",

@@ -71,7 +71,7 @@ class TestDeadlinePublishInAfterEffectsMultiComposition(AEDeadlinePublishTestCla
                            name="renderTest_taskMain2"))

        failures.append(
            DBAssert.count_of_types(dbcon, "representation", 5))
            DBAssert.count_of_types(dbcon, "representation", 4))

        additional_args = {"context.subset": "workfileTest_task",
                           "context.ext": "aep"}

@@ -89,7 +89,7 @@ class TestDeadlinePublishInAfterEffectsMultiComposition(AEDeadlinePublishTestCla
        additional_args = {"context.subset": "renderTest_taskMain",
                           "name": "thumbnail"}
        failures.append(
            DBAssert.count_of_types(dbcon, "representation", 1,
            DBAssert.count_of_types(dbcon, "representation", 0,
                                    additional_args=additional_args))

        additional_args = {"context.subset": "renderTest_taskMain",

@@ -58,7 +58,7 @@ class TestPublishInAfterEffects(AELocalPublishTestClass):
                            name="renderTest_taskMain"))

        failures.append(
            DBAssert.count_of_types(dbcon, "representation", 4))
            DBAssert.count_of_types(dbcon, "representation", 3))

        additional_args = {"context.subset": "workfileTest_task",
                           "context.ext": "aep"}

@@ -75,7 +75,7 @@ class TestPublishInAfterEffects(AELocalPublishTestClass):
        additional_args = {"context.subset": "renderTest_taskMain",
                           "name": "thumbnail"}
        failures.append(
            DBAssert.count_of_types(dbcon, "representation", 1,
            DBAssert.count_of_types(dbcon, "representation", 0,
                                    additional_args=additional_args))

        additional_args = {"context.subset": "renderTest_taskMain",

@@ -60,7 +60,7 @@ class TestPublishInAfterEffects(AELocalPublishTestClass):
                            name="renderTest_taskMain"))

        failures.append(
            DBAssert.count_of_types(dbcon, "representation", 4))
            DBAssert.count_of_types(dbcon, "representation", 2))

        additional_args = {"context.subset": "workfileTest_task",
                           "context.ext": "aep"}

@@ -77,7 +77,7 @@ class TestPublishInAfterEffects(AELocalPublishTestClass):
        additional_args = {"context.subset": "renderTest_taskMain",
                           "name": "thumbnail"}
        failures.append(
            DBAssert.count_of_types(dbcon, "representation", 1,
            DBAssert.count_of_types(dbcon, "representation", 0,
                                    additional_args=additional_args))

        additional_args = {"context.subset": "renderTest_taskMain",

@@ -89,7 +89,7 @@ class TestPublishInAfterEffects(AELocalPublishTestClass):
        additional_args = {"context.subset": "renderTest_taskMain",
                           "name": "thumbnail"}
        failures.append(
            DBAssert.count_of_types(dbcon, "representation", 1,
            DBAssert.count_of_types(dbcon, "representation", 0,
                                    additional_args=additional_args))

        additional_args = {"context.subset": "renderTest_taskMain",

@@ -45,7 +45,7 @@ class TestPublishInAfterEffects(AELocalPublishTestClass):
                            name="renderTest_taskMain"))

        failures.append(
            DBAssert.count_of_types(dbcon, "representation", 4))
            DBAssert.count_of_types(dbcon, "representation", 3))

        additional_args = {"context.subset": "workfileTest_task",
                           "context.ext": "aep"}

@@ -62,7 +62,7 @@ class TestPublishInAfterEffects(AELocalPublishTestClass):
        additional_args = {"context.subset": "renderTest_taskMain",
                           "name": "thumbnail"}
        failures.append(
            DBAssert.count_of_types(dbcon, "representation", 1,
            DBAssert.count_of_types(dbcon, "representation", 0,
                                    additional_args=additional_args))

        additional_args = {"context.subset": "renderTest_taskMain",
@@ -54,7 +54,7 @@ class TestDeadlinePublishInMaya(MayaDeadlinePublishTestClass):
            DBAssert.count_of_types(dbcon, "subset", 1,
                                    name="workfileTest_task"))

        failures.append(DBAssert.count_of_types(dbcon, "representation", 8))
        failures.append(DBAssert.count_of_types(dbcon, "representation", 7))

        # hero included
        additional_args = {"context.subset": "modelMain",

@@ -85,7 +85,7 @@ class TestDeadlinePublishInMaya(MayaDeadlinePublishTestClass):
        additional_args = {"context.subset": "renderTest_taskMain_beauty",
                           "context.ext": "jpg"}
        failures.append(
            DBAssert.count_of_types(dbcon, "representation", 1,
            DBAssert.count_of_types(dbcon, "representation", 0,
                                    additional_args=additional_args))

        additional_args = {"context.subset": "renderTest_taskMain_beauty",

Before Width: | Height: | Size: 75 KiB
@@ -69,7 +69,7 @@ class TestDeadlinePublishInNuke(NukeDeadlinePublishTestClass):
                            name="workfileTest_task"))

        failures.append(
            DBAssert.count_of_types(dbcon, "representation", 4))
            DBAssert.count_of_types(dbcon, "representation", 3))

        additional_args = {"context.subset": "workfileTest_task",
                           "context.ext": "nk"}

@@ -86,7 +86,7 @@ class TestDeadlinePublishInNuke(NukeDeadlinePublishTestClass):
        additional_args = {"context.subset": "renderTest_taskMain",
                           "name": "thumbnail"}
        failures.append(
            DBAssert.count_of_types(dbcon, "representation", 1,
            DBAssert.count_of_types(dbcon, "representation", 0,
                                    additional_args=additional_args))

        additional_args = {"context.subset": "renderTest_taskMain",