Mirror of https://github.com/ynput/ayon-core.git, synced 2025-12-24 21:04:40 +01:00

Commit fab3b02b3e: Merge remote-tracking branch 'origin/develop' into feature/OP-2042_nuke-testing-class

350 changed files with 16069 additions and 5785 deletions
.github/workflows/prerelease.yml (vendored): 2 lines changed

@@ -33,7 +33,7 @@ jobs:
id: version
if: steps.version_type.outputs.type != 'skip'
run: |
RESULT=$(python ./tools/ci_tools.py --nightly)
RESULT=$(python ./tools/ci_tools.py --nightly --github_token ${{ secrets.GITHUB_TOKEN }})
echo ::set-output name=next_tag::$RESULT
CHANGELOG.md: 187 lines changed

@@ -1,18 +1,114 @@
# Changelog

## [3.6.0-nightly.5](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.7.0-nightly.3](https://github.com/pypeclub/OpenPype/tree/HEAD)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.5.0...HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.6.4...HEAD)

### 📖 Documentation

- docs\[website\]: Add Ellipse Studio \(logo\) as an OpenPype contributor [\#2324](https://github.com/pypeclub/OpenPype/pull/2324)

**🆕 New features**

- Maya : Colorspace configuration [\#2170](https://github.com/pypeclub/OpenPype/pull/2170)
- Blender: Added support for audio [\#2168](https://github.com/pypeclub/OpenPype/pull/2168)
- Flame: a host basic integration [\#2165](https://github.com/pypeclub/OpenPype/pull/2165)
- Houdini: simple HDA workflow [\#2072](https://github.com/pypeclub/OpenPype/pull/2072)
- Store typed version dependencies for workfiles [\#2192](https://github.com/pypeclub/OpenPype/pull/2192)

**🚀 Enhancements**

- Hiero: Add experimental tools action [\#2323](https://github.com/pypeclub/OpenPype/pull/2323)
- Input links: Cleanup and unification of differences [\#2322](https://github.com/pypeclub/OpenPype/pull/2322)
- General: Run process log stderr as info log level [\#2309](https://github.com/pypeclub/OpenPype/pull/2309)
- Tools: Cleanup of unused classes [\#2304](https://github.com/pypeclub/OpenPype/pull/2304)
- Project Manager: Added ability to delete project [\#2298](https://github.com/pypeclub/OpenPype/pull/2298)
- Ftrack: Synchronize input links [\#2287](https://github.com/pypeclub/OpenPype/pull/2287)
- StandalonePublisher: Remove unused plugin ExtractHarmonyZip [\#2277](https://github.com/pypeclub/OpenPype/pull/2277)
- Ftrack: Support multiple reviews [\#2271](https://github.com/pypeclub/OpenPype/pull/2271)
- Ftrack: Remove unused clean component plugin [\#2269](https://github.com/pypeclub/OpenPype/pull/2269)
- Houdini: Add experimental tools action [\#2267](https://github.com/pypeclub/OpenPype/pull/2267)
- Tools: Assets widget [\#2265](https://github.com/pypeclub/OpenPype/pull/2265)
- Nuke: extract baked review videos presets [\#2248](https://github.com/pypeclub/OpenPype/pull/2248)
- TVPaint: Workers rendering [\#2209](https://github.com/pypeclub/OpenPype/pull/2209)

**🐛 Bug fixes**

- Fix - provider icons are pulled from a folder [\#2326](https://github.com/pypeclub/OpenPype/pull/2326)
- InputLinks: Typo in "inputLinks" key [\#2314](https://github.com/pypeclub/OpenPype/pull/2314)
- Deadline timeout and logging [\#2312](https://github.com/pypeclub/OpenPype/pull/2312)
- nuke: do not multiply representation on class method [\#2311](https://github.com/pypeclub/OpenPype/pull/2311)
- Workfiles tool: Fix task formatting [\#2306](https://github.com/pypeclub/OpenPype/pull/2306)
- Delivery: Fix delivery paths created on windows [\#2302](https://github.com/pypeclub/OpenPype/pull/2302)
- Maya: Deadline - fix limit groups [\#2295](https://github.com/pypeclub/OpenPype/pull/2295)
- New Publisher: Fix mapping of indexes [\#2285](https://github.com/pypeclub/OpenPype/pull/2285)
- Alternate site for site sync doesnt work for sequences [\#2284](https://github.com/pypeclub/OpenPype/pull/2284)
- FFmpeg: Execute ffprobe using list of arguments instead of string command [\#2281](https://github.com/pypeclub/OpenPype/pull/2281)
- Nuke: Anatomy fill data use task as dictionary [\#2278](https://github.com/pypeclub/OpenPype/pull/2278)
- Bug: fix variable name \_asset\_id in workfiles application [\#2274](https://github.com/pypeclub/OpenPype/pull/2274)
- Version handling fixes [\#2272](https://github.com/pypeclub/OpenPype/pull/2272)

## [3.6.4](https://github.com/pypeclub/OpenPype/tree/3.6.4) (2021-11-23)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.7.0-nightly.1...3.6.4)

**🐛 Bug fixes**

- Nuke: inventory update removes all loaded read nodes [\#2294](https://github.com/pypeclub/OpenPype/pull/2294)

## [3.6.3](https://github.com/pypeclub/OpenPype/tree/3.6.3) (2021-11-19)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.3-nightly.1...3.6.3)

**🐛 Bug fixes**

- Deadline: Fix publish targets [\#2280](https://github.com/pypeclub/OpenPype/pull/2280)

## [3.6.2](https://github.com/pypeclub/OpenPype/tree/3.6.2) (2021-11-18)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.2-nightly.2...3.6.2)

**🚀 Enhancements**

- Royal Render: Support for rr channels in separate dirs [\#2268](https://github.com/pypeclub/OpenPype/pull/2268)
- SceneInventory: Choose loader in asset switcher [\#2262](https://github.com/pypeclub/OpenPype/pull/2262)
- Style: New fonts in OpenPype style [\#2256](https://github.com/pypeclub/OpenPype/pull/2256)
- Tools: SceneInventory in OpenPype [\#2255](https://github.com/pypeclub/OpenPype/pull/2255)
- Tools: Tasks widget [\#2251](https://github.com/pypeclub/OpenPype/pull/2251)
- Added endpoint for configured extensions [\#2221](https://github.com/pypeclub/OpenPype/pull/2221)

**🐛 Bug fixes**

- Tools: Parenting of tools in Nuke and Hiero [\#2266](https://github.com/pypeclub/OpenPype/pull/2266)
- limiting validator to specific editorial hosts [\#2264](https://github.com/pypeclub/OpenPype/pull/2264)
- Tools: Select Context dialog attribute fix [\#2261](https://github.com/pypeclub/OpenPype/pull/2261)
- Maya: Render publishing fails on linux [\#2260](https://github.com/pypeclub/OpenPype/pull/2260)
- LookAssigner: Fix tool reopen [\#2259](https://github.com/pypeclub/OpenPype/pull/2259)
- Standalone: editorial not publishing thumbnails on all subsets [\#2258](https://github.com/pypeclub/OpenPype/pull/2258)
- Loader doesn't allow changing of version before loading [\#2254](https://github.com/pypeclub/OpenPype/pull/2254)
- Burnins: Support mxf metadata [\#2247](https://github.com/pypeclub/OpenPype/pull/2247)
- Maya: Support for configurable AOV separator characters [\#2197](https://github.com/pypeclub/OpenPype/pull/2197)
- Maya: texture colorspace modes in looks [\#2195](https://github.com/pypeclub/OpenPype/pull/2195)

## [3.6.1](https://github.com/pypeclub/OpenPype/tree/3.6.1) (2021-11-16)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.1-nightly.1...3.6.1)

## [3.6.0](https://github.com/pypeclub/OpenPype/tree/3.6.0) (2021-11-15)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.6.0-nightly.6...3.6.0)

### 📖 Documentation

- Add alternative sites for Site Sync [\#2206](https://github.com/pypeclub/OpenPype/pull/2206)
- Add command line way of running site sync server [\#2188](https://github.com/pypeclub/OpenPype/pull/2188)

**🚀 Enhancements**

- Tools: Creator in OpenPype [\#2244](https://github.com/pypeclub/OpenPype/pull/2244)
- Tools: Subset manager in OpenPype [\#2243](https://github.com/pypeclub/OpenPype/pull/2243)
- General: Skip module directories without init file [\#2239](https://github.com/pypeclub/OpenPype/pull/2239)
- General: Static interfaces [\#2238](https://github.com/pypeclub/OpenPype/pull/2238)
- Style: Fix transparent image in style [\#2235](https://github.com/pypeclub/OpenPype/pull/2235)
- Add a "following workfile versioning" option on publish [\#2225](https://github.com/pypeclub/OpenPype/pull/2225)
- Modules: Module can add cli commands [\#2224](https://github.com/pypeclub/OpenPype/pull/2224)
- Webpublisher: Separate webpublisher logic [\#2222](https://github.com/pypeclub/OpenPype/pull/2222)
- Add both side availability on Site Sync sites to Loader [\#2220](https://github.com/pypeclub/OpenPype/pull/2220)
- Tools: Center loader and library loader on show [\#2219](https://github.com/pypeclub/OpenPype/pull/2219)
- Maya : Validate shape zero [\#2212](https://github.com/pypeclub/OpenPype/pull/2212)

@@ -21,91 +117,30 @@
- Ftrack: Replace Queue with deque in event handlers logic [\#2204](https://github.com/pypeclub/OpenPype/pull/2204)
- Tools: New select context dialog [\#2200](https://github.com/pypeclub/OpenPype/pull/2200)
- Maya : Validate mesh ngons [\#2199](https://github.com/pypeclub/OpenPype/pull/2199)
- Dirmap in Nuke [\#2198](https://github.com/pypeclub/OpenPype/pull/2198)
- Delivery: Check 'frame' key in template for sequence delivery [\#2196](https://github.com/pypeclub/OpenPype/pull/2196)
- Settings: Site sync project settings improvement [\#2193](https://github.com/pypeclub/OpenPype/pull/2193)
- Usage of tools code [\#2185](https://github.com/pypeclub/OpenPype/pull/2185)
- Settings: Dictionary based on project roots [\#2184](https://github.com/pypeclub/OpenPype/pull/2184)
- Subset name: Be able to pass asset document to get subset name [\#2179](https://github.com/pypeclub/OpenPype/pull/2179)
- Tools: Experimental tools [\#2167](https://github.com/pypeclub/OpenPype/pull/2167)
- Loader: Refactor and use OpenPype stylesheets [\#2166](https://github.com/pypeclub/OpenPype/pull/2166)
- Add loader for linked smart objects in photoshop [\#2149](https://github.com/pypeclub/OpenPype/pull/2149)

**🐛 Bug fixes**

- Ftrack: Sync project ftrack id cache issue [\#2250](https://github.com/pypeclub/OpenPype/pull/2250)
- Ftrack: Session creation and Prepare project [\#2245](https://github.com/pypeclub/OpenPype/pull/2245)
- Added queue for studio processing in PS [\#2237](https://github.com/pypeclub/OpenPype/pull/2237)
- Python 2: Unicode to string conversion [\#2236](https://github.com/pypeclub/OpenPype/pull/2236)
- Fix - enum for color coding in PS [\#2234](https://github.com/pypeclub/OpenPype/pull/2234)
- Pyblish Tool: Fix targets handling [\#2232](https://github.com/pypeclub/OpenPype/pull/2232)
- Ftrack: Base event fix of 'get\_project\_from\_entity' method [\#2214](https://github.com/pypeclub/OpenPype/pull/2214)
- Maya : multiple subsets review broken [\#2210](https://github.com/pypeclub/OpenPype/pull/2210)
- Fix - different command used for Linux and Mac OS [\#2207](https://github.com/pypeclub/OpenPype/pull/2207)
- Tools: Workfiles tool don't use avalon widgets [\#2205](https://github.com/pypeclub/OpenPype/pull/2205)
- Ftrack: Fill missing ftrack id on mongo project [\#2203](https://github.com/pypeclub/OpenPype/pull/2203)
- Project Manager: Fix copying of tasks [\#2191](https://github.com/pypeclub/OpenPype/pull/2191)
- StandalonePublisher: Source validator don't expect representations [\#2190](https://github.com/pypeclub/OpenPype/pull/2190)
- Blender: Fix trying to pack an image when the shader node has no texture [\#2183](https://github.com/pypeclub/OpenPype/pull/2183)
- MacOS: Launching of applications may cause Permissions error [\#2175](https://github.com/pypeclub/OpenPype/pull/2175)
- Maya: Aspect ratio [\#2174](https://github.com/pypeclub/OpenPype/pull/2174)
- Blender: Fix 'Deselect All' with object not in 'Object Mode' [\#2163](https://github.com/pypeclub/OpenPype/pull/2163)
- Maya: Fix hotbox broken by scriptsmenu [\#2151](https://github.com/pypeclub/OpenPype/pull/2151)
- Added validator for source files for Standalone Publisher [\#2138](https://github.com/pypeclub/OpenPype/pull/2138)

**Merged pull requests:**

- Settings: Site sync project settings improvement [\#2193](https://github.com/pypeclub/OpenPype/pull/2193)
- Add validate active site button to sync queue on a project [\#2176](https://github.com/pypeclub/OpenPype/pull/2176)
- Bump pillow from 8.2.0 to 8.3.2 [\#2162](https://github.com/pypeclub/OpenPype/pull/2162)

## [3.5.0](https://github.com/pypeclub/OpenPype/tree/3.5.0) (2021-10-17)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.5.0-nightly.8...3.5.0)

**Deprecated:**

- Maya: Change mayaAscii family to mayaScene [\#2106](https://github.com/pypeclub/OpenPype/pull/2106)

**🆕 New features**

- Added project and task into context change message in Maya [\#2131](https://github.com/pypeclub/OpenPype/pull/2131)
- Add ExtractBurnin to photoshop review [\#2124](https://github.com/pypeclub/OpenPype/pull/2124)
- PYPE-1218 - changed namespace to contain subset name in Maya [\#2114](https://github.com/pypeclub/OpenPype/pull/2114)
- Added running configurable disk mapping command before start of OP [\#2091](https://github.com/pypeclub/OpenPype/pull/2091)
- SFTP provider [\#2073](https://github.com/pypeclub/OpenPype/pull/2073)

**🚀 Enhancements**

- Maya: make rig validators configurable in settings [\#2137](https://github.com/pypeclub/OpenPype/pull/2137)
- Settings: Updated readme for entity types in settings [\#2132](https://github.com/pypeclub/OpenPype/pull/2132)
- Nuke: unified clip loader [\#2128](https://github.com/pypeclub/OpenPype/pull/2128)
- Settings UI: Project model refreshing and sorting [\#2104](https://github.com/pypeclub/OpenPype/pull/2104)
- Create Read From Rendered - Disable Relative paths by default [\#2093](https://github.com/pypeclub/OpenPype/pull/2093)
- Added choosing different dirmap mapping if workfile synched locally [\#2088](https://github.com/pypeclub/OpenPype/pull/2088)
- General: Remove IdleManager module [\#2084](https://github.com/pypeclub/OpenPype/pull/2084)
- Tray UI: Message box about missing settings defaults [\#2080](https://github.com/pypeclub/OpenPype/pull/2080)
- Tray UI: Show menu where first click happened [\#2079](https://github.com/pypeclub/OpenPype/pull/2079)
- Global: add global validators to settings [\#2078](https://github.com/pypeclub/OpenPype/pull/2078)
- Use CRF for burnin when available [\#2070](https://github.com/pypeclub/OpenPype/pull/2070)
- Project manager: Filter first item after selection of project [\#2069](https://github.com/pypeclub/OpenPype/pull/2069)

**🐛 Bug fixes**

- Maya: fix model publishing [\#2130](https://github.com/pypeclub/OpenPype/pull/2130)
- Fix - oiiotool wasn't recognized even if present [\#2129](https://github.com/pypeclub/OpenPype/pull/2129)
- General: Disk mapping group [\#2120](https://github.com/pypeclub/OpenPype/pull/2120)
- Hiero: publishing effect first time makes wrong resources path [\#2115](https://github.com/pypeclub/OpenPype/pull/2115)
- Add startup script for Houdini Core. [\#2110](https://github.com/pypeclub/OpenPype/pull/2110)
- TVPaint: Behavior name of loop also accept repeat [\#2109](https://github.com/pypeclub/OpenPype/pull/2109)
- Ftrack: Project settings save custom attributes skip unknown attributes [\#2103](https://github.com/pypeclub/OpenPype/pull/2103)
- Blender: Fix NoneType error when animation\_data is missing for a rig [\#2101](https://github.com/pypeclub/OpenPype/pull/2101)
- Fix broken import in sftp provider [\#2100](https://github.com/pypeclub/OpenPype/pull/2100)
- Global: Fix docstring on publish plugin extract review [\#2097](https://github.com/pypeclub/OpenPype/pull/2097)
- Delivery Action Files Sequence fix [\#2096](https://github.com/pypeclub/OpenPype/pull/2096)
- General: Cloud mongo ca certificate issue [\#2095](https://github.com/pypeclub/OpenPype/pull/2095)
- TVPaint: Creator use context from workfile [\#2087](https://github.com/pypeclub/OpenPype/pull/2087)
- Blender: fix texture missing when publishing blend files [\#2085](https://github.com/pypeclub/OpenPype/pull/2085)
- General: Startup validations oiio tool path fix on linux [\#2083](https://github.com/pypeclub/OpenPype/pull/2083)
- Deadline: Collect deadline server does not check existence of deadline key [\#2082](https://github.com/pypeclub/OpenPype/pull/2082)
- Blender: fixed Curves with modifiers in Rigs [\#2081](https://github.com/pypeclub/OpenPype/pull/2081)
- Nuke UI scaling [\#2077](https://github.com/pypeclub/OpenPype/pull/2077)

**Merged pull requests:**

- Bump pywin32 from 300 to 301 [\#2086](https://github.com/pypeclub/OpenPype/pull/2086)

## [3.4.1](https://github.com/pypeclub/OpenPype/tree/3.4.1) (2021-09-23)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.4.1-nightly.1...3.4.1)
@@ -10,6 +10,7 @@ import tempfile
from pathlib import Path
from typing import Union, Callable, List, Tuple
import hashlib
import platform

from zipfile import ZipFile, BadZipFile
@@ -196,21 +197,23 @@ class OpenPypeVersion(semver.VersionInfo):
return str(self.finalize_version())

@staticmethod
def version_in_str(string: str) -> Tuple:
def version_in_str(string: str) -> Union[None, OpenPypeVersion]:
"""Find OpenPype version in given string.

Args:
string (str): string to search.

Returns:
tuple: True/False and OpenPypeVersion if found.
OpenPypeVersion: of detected or None.

"""
m = re.search(OpenPypeVersion._VERSION_REGEX, string)
if not m:
return False, None
return None
version = OpenPypeVersion.parse(string[m.start():m.end()])
return True, version
if "staging" in string[m.start():m.end()]:
version.staging = True
return version

@classmethod
def parse(cls, version):
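The hunk above changes the return contract of `version_in_str` from a `(bool, version)` tuple to the detected version object or `None`, so call sites can drop the tuple unpacking (the later hunk in this file does exactly that). A minimal sketch of the new calling convention; the input string is a made-up example:

```python
# Hedged sketch, not part of the patch: illustrates the new return value of
# OpenPypeVersion.version_in_str(), which is an OpenPypeVersion or None.
result = OpenPypeVersion.version_in_str("openpype-v3.6.4-staging")

if result:
    detected_version = result  # the version object itself, no [1] indexing
    print(detected_version, detected_version.staging)
else:
    print("no version found in the string")
```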
@@ -531,6 +534,7 @@ class BootstrapRepos:
processed_path = file
self._print(f"- processing {processed_path}")

checksums.append(
(
sha256sum(file.as_posix()),

@@ -542,7 +546,10 @@ class BootstrapRepos:
checksums_str = ""
for c in checksums:
checksums_str += "{}:{}\n".format(c[0], c[1])
file_str = c[1]
if platform.system().lower() == "windows":
file_str = c[1].as_posix().replace("\\", "/")
checksums_str += "{}:{}\n".format(c[0], file_str)
zip_file.writestr("checksums", checksums_str)
# test if zip is ok
zip_file.testzip()
@@ -563,6 +570,8 @@ class BootstrapRepos:
and string with reason as second.

"""
if os.getenv("OPENPYPE_DONT_VALIDATE_VERSION"):
return True, "Disabled validation"
if not path.exists():
return False, "Path doesn't exist"

@@ -589,13 +598,16 @@ class BootstrapRepos:
# calculate and compare checksums in the zip file
for file in checksums:
file_name = file[1]
if platform.system().lower() == "windows":
file_name = file_name.replace("/", "\\")
h = hashlib.sha256()
try:
h.update(zip_file.read(file[1]))
h.update(zip_file.read(file_name))
except FileNotFoundError:
return False, f"Missing file [ {file[1]} ]"
return False, f"Missing file [ {file_name} ]"
if h.hexdigest() != file[0]:
return False, f"Invalid checksum on {file[1]}"
return False, f"Invalid checksum on {file_name}"

# get list of files in zip minus `checksums` file itself
# and turn in to set to compare against list of files

@@ -604,7 +616,7 @@ class BootstrapRepos:
files_in_zip = zip_file.namelist()
files_in_zip.remove("checksums")
files_in_zip = set(files_in_zip)
files_in_checksum = set([file[1] for file in checksums])
files_in_checksum = {file[1] for file in checksums}
diff = files_in_zip.difference(files_in_checksum)
if diff:
return False, f"Missing files {diff}"
@@ -628,16 +640,19 @@ class BootstrapRepos:
]
files_in_dir.remove("checksums")
files_in_dir = set(files_in_dir)
files_in_checksum = set([file[1] for file in checksums])
files_in_checksum = {file[1] for file in checksums}

for file in checksums:
file_name = file[1]
if platform.system().lower() == "windows":
file_name = file_name.replace("/", "\\")
try:
current = sha256sum((path / file[1]).as_posix())
current = sha256sum((path / file_name).as_posix())
except FileNotFoundError:
return False, f"Missing file [ {file[1]} ]"
return False, f"Missing file [ {file_name} ]"

if file[0] != current:
return False, f"Invalid checksum on {file[1]}"
return False, f"Invalid checksum on {file_name}"
diff = files_in_dir.difference(files_in_checksum)
if diff:
return False, f"Missing files {diff}"

@@ -1161,9 +1176,9 @@ class BootstrapRepos:
name = item.name if item.is_dir() else item.stem
result = OpenPypeVersion.version_in_str(name)

if result[0]:
if result:
detected_version: OpenPypeVersion
detected_version = result[1]
detected_version = result

if item.is_dir() and not self._is_openpype_in_dir(
item, detected_version
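The checksum hunks above normalize path separators: checksum entries are written with POSIX separators, and when validating on Windows the stored name is converted back to backslashes before the file is read and hashed. A small illustrative helper capturing that idea; the helper name and example path are assumptions, not code from the patch:

```python
import platform


def checksum_lookup_name(rel_path):
    """Return the platform-local form of a checksum entry path.

    Checksum entries are stored with forward slashes; on Windows they are
    converted back to backslashes before the file lookup.
    """
    if platform.system().lower() == "windows":
        return rel_path.replace("/", "\\")
    return rel_path


print(checksum_lookup_name("openpype/lib/log.py"))
```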
@@ -59,7 +59,7 @@ def validate_mongo_connection(cnx: str) -> (bool, str):
return False, "Not mongodb schema"

kwargs = {
"serverSelectionTimeoutMS": 2000
"serverSelectionTimeoutMS": os.environ.get("AVALON_TIMEOUT", 2000)
}
# Add certificate path if should be required
if should_add_certificate_path_to_mongo_url(cnx):
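The hunk above lets the MongoDB server-selection timeout be overridden with the AVALON_TIMEOUT environment variable, falling back to 2000 ms. A minimal sketch of that lookup; the explicit int() cast is my assumption (environment values are strings), not something the patch itself adds:

```python
import os

# Sketch only: timeout comes from AVALON_TIMEOUT when set, else 2000 ms.
timeout_ms = int(os.environ.get("AVALON_TIMEOUT", 2000))
kwargs = {"serverSelectionTimeoutMS": timeout_ms}
print(kwargs)
```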
@@ -1,4 +1,4 @@
# -*- coding: utf-8 -*-
"""Definition of Igniter version."""

__version__ = "1.0.1"
__version__ = "1.0.2"
@@ -17,6 +17,7 @@ from .lib import (
version_up,
get_asset,
get_hierarchy,
get_workdir_data,
get_version_from_path,
get_last_version_from_path,
get_app_environments_for_context,
@@ -158,7 +158,9 @@ def extractenvironments(output_json_path, project, asset, task, app):
@click.option("-d", "--debug", is_flag=True, help="Print debug messages")
@click.option("-t", "--targets", help="Targets module", default=None,
multiple=True)
def publish(debug, paths, targets):
@click.option("-g", "--gui", is_flag=True,
help="Show Publish UI", default=False)
def publish(debug, paths, targets, gui):
"""Start CLI publishing.

Publish collects json from paths provided as an argument.

@@ -166,7 +168,7 @@ def publish(debug, paths, targets):
"""
if debug:
os.environ['OPENPYPE_DEBUG'] = '3'
PypeCommands.publish(list(paths), targets)
PypeCommands.publish(list(paths), targets, gui)


@main.command()
@@ -357,3 +359,40 @@ def run(script):
def runtests(folder, mark, pyargs):
"""Run all automatic tests after proper initialization via start.py"""
PypeCommands().run_tests(folder, mark, pyargs)


@main.command()
@click.option("-d", "--debug",
is_flag=True, help=("Run process in debug mode"))
@click.option("-a", "--active_site", required=True,
help="Name of active stie")
def syncserver(debug, active_site):
"""Run sync site server in background.

Some Site Sync use cases need to expose site to another one.
For example if majority of artists work in studio, they are not using
SS at all, but if you want to expose published assets to 'studio' site
to SFTP for only a couple of artists, some background process must
mark published assets to live on multiple sites (they might be
physically in same location - mounted shared disk).

Process mimics OP Tray with specific 'active_site' name, all
configuration for this "dummy" user comes from Setting or Local
Settings (configured by starting OP Tray with env
var OPENPYPE_LOCAL_ID set to 'active_site'.
"""
if debug:
os.environ['OPENPYPE_DEBUG'] = '3'
PypeCommands().syncserver(active_site)


@main.command()
@click.argument("directory")
def repack_version(directory):
"""Repack OpenPype version from directory.

This command will re-create zip file from specified directory,
recalculating file checksums. It will try to use version detected in
directory name.
"""
PypeCommands().repack_version(directory)
@@ -4,7 +4,7 @@ import logging
from avalon import io
from avalon import api as avalon
from avalon.vendor import Qt
from Qt import QtWidgets
from openpype import lib, api
import pyblish.api as pyblish
import openpype.hosts.aftereffects

@@ -41,10 +41,10 @@ def check_inventory():
# Warn about outdated containers.
print("Starting new QApplication..")
app = Qt.QtWidgets.QApplication(sys.argv)
app = QtWidgets.QApplication(sys.argv)

message_box = Qt.QtWidgets.QMessageBox()
message_box.setIcon(Qt.QtWidgets.QMessageBox.Warning)
message_box = QtWidgets.QMessageBox()
message_box.setIcon(QtWidgets.QMessageBox.Warning)
msg = "There are outdated containers in the scene."
message_box.setText(msg)
message_box.exec_()
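Many of the hunks in this commit swap `from avalon.vendor.Qt import ...` for `from Qt import ...`, i.e. they move from the vendored copy to the binding-agnostic Qt.py shim, which resolves to whichever binding is installed (PySide2, PyQt5, ...). A minimal sketch of the resulting import style, mirroring the message-box code above; this is an illustration, not code from the patch:

```python
# Sketch assuming the Qt.py shim is on the Python path; the same code then
# runs under PySide2 or PyQt5 without changes.
from Qt import QtWidgets

app = QtWidgets.QApplication.instance() or QtWidgets.QApplication([])
box = QtWidgets.QMessageBox()
box.setIcon(QtWidgets.QMessageBox.Warning)
box.setText("There are outdated containers in the scene.")
box.exec_()
```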
@@ -1,5 +1,5 @@
import openpype.api
from avalon.vendor import Qt
from Qt import QtWidgets
from avalon import aftereffects

import logging

@@ -56,7 +56,7 @@ class CreateRender(openpype.api.Creator):
stub.rename_item(item.id, stub.PUBLISH_ICON + self.data["subset"])

def _show_msg(self, txt):
msg = Qt.QtWidgets.QMessageBox()
msg.setIcon(Qt.QtWidgets.QMessageBox.Warning)
msg = QtWidgets.QMessageBox()
msg.setIcon(QtWidgets.QMessageBox.Warning)
msg.setText(txt)
msg.exec_()
@@ -1,6 +1,6 @@
import sys

from avalon.vendor.Qt import QtGui
from Qt import QtGui
import avalon.fusion
from avalon import io
@@ -1,5 +1,5 @@
from avalon import api, style
from avalon.vendor.Qt import QtGui, QtWidgets
from Qt import QtGui, QtWidgets

import avalon.fusion
@@ -1,4 +1,4 @@
from avalon.vendor.Qt import QtWidgets
from Qt import QtWidgets
from avalon.vendor import qtawesome
import avalon.fusion as avalon
@@ -2,12 +2,13 @@ import os
import glob
import logging

from Qt import QtWidgets, QtCore

import avalon.io as io
import avalon.api as api
import avalon.pipeline as pipeline
import avalon.fusion
import avalon.style as style
from avalon.vendor.Qt import QtWidgets, QtCore
from avalon.vendor import qtawesome as qta
@@ -126,7 +126,8 @@ class CollectFarmRender(openpype.lib.abstract_collect_render.
# because of using 'renderFarm' as a family, replace 'Farm' with
# capitalized task name - issue of avalon-core Creator app
subset_name = node.split("/")[1]
task_name = context.data["anatomyData"]["task"].capitalize()
task_name = context.data["anatomyData"]["task"][
"name"].capitalize()
replace_str = ""
if task_name.lower() not in subset_name.lower():
replace_str = task_name
@@ -28,7 +28,7 @@ class CollectPalettes(pyblish.api.ContextPlugin):
# skip collecting if not in allowed task
if self.allowed_tasks:
task_name = context.data["anatomyData"]["task"].lower()
task_name = context.data["anatomyData"]["task"]["name"].lower()
if (not any([re.search(pattern, task_name)
for pattern in self.allowed_tasks])):
return
@@ -5,13 +5,13 @@ import os
import re
import sys
import ast
import shutil
import hiero
from Qt import QtWidgets
import avalon.api as avalon
import avalon.io
from avalon.vendor.Qt import QtWidgets
from openpype.api import (Logger, Anatomy, get_anatomy_settings)
from . import tags
import shutil
from compiler.ast import flatten

try:

@@ -30,6 +30,7 @@ self = sys.modules[__name__]
self._has_been_setup = False
self._has_menu = False
self._registered_gui = None
self._parent = None
self.pype_tag_name = "openpypeData"
self.default_sequence_name = "openpypeSequence"
self.default_bin_name = "openpypeBin"

@@ -1029,3 +1030,15 @@ def before_project_save(event):
# also mark old versions of loaded containers
check_inventory_versions()


def get_main_window():
"""Acquire Nuke's main window"""
if self._parent is None:
top_widgets = QtWidgets.QApplication.topLevelWidgets()
name = "Foundry::UI::DockMainWindow"
main_window = next(widget for widget in top_widgets if
widget.inherits("QMainWindow") and
widget.metaObject().className() == name)
self._parent = main_window
return self._parent
@@ -37,12 +37,16 @@ def menu_install():
Installing menu into Hiero

"""
from Qt import QtGui
from . import (
publish, launch_workfiles_app, reload_config,
apply_colorspace_project, apply_colorspace_clips
)
from .lib import get_main_window

main_window = get_main_window()

# here is the best place to add menu
from avalon.vendor.Qt import QtGui

menu_name = os.environ['AVALON_LABEL']

@@ -86,18 +90,24 @@ def menu_install():
creator_action = menu.addAction("Create ...")
creator_action.setIcon(QtGui.QIcon("icons:CopyRectangle.png"))
creator_action.triggered.connect(host_tools.show_creator)
creator_action.triggered.connect(
lambda: host_tools.show_creator(parent=main_window)
)

loader_action = menu.addAction("Load ...")
loader_action.setIcon(QtGui.QIcon("icons:CopyRectangle.png"))
loader_action.triggered.connect(host_tools.show_loader)
loader_action.triggered.connect(
lambda: host_tools.show_loader(parent=main_window)
)

sceneinventory_action = menu.addAction("Manage ...")
sceneinventory_action.setIcon(QtGui.QIcon("icons:CopyRectangle.png"))
sceneinventory_action.triggered.connect(host_tools.show_scene_inventory)
menu.addSeparator()
sceneinventory_action.triggered.connect(
lambda: host_tools.show_scene_inventory(parent=main_window)
)

if os.getenv("OPENPYPE_DEVELOP"):
menu.addSeparator()
reload_action = menu.addAction("Reload pipeline")
reload_action.setIcon(QtGui.QIcon("icons:ColorAdd.png"))
reload_action.triggered.connect(reload_config)

@@ -110,3 +120,10 @@ def menu_install():
apply_colorspace_c_action = menu.addAction("Apply Colorspace Clips")
apply_colorspace_c_action.setIcon(QtGui.QIcon("icons:ColorAdd.png"))
apply_colorspace_c_action.triggered.connect(apply_colorspace_clips)

menu.addSeparator()

exeprimental_action = menu.addAction("Experimental tools...")
exeprimental_action.triggered.connect(
lambda: host_tools.show_experimental_tools_dialog(parent=main_window)
)
@@ -209,9 +209,11 @@ def update_container(track_item, data=None):
def launch_workfiles_app(*args):
''' Wrapping function for workfiles launcher '''
from .lib import get_main_window

main_window = get_main_window()
# show workfile gui
host_tools.show_workfiles()
host_tools.show_workfiles(parent=main_window)


def publish(parent):
@@ -6,6 +6,7 @@ from avalon.vendor import qargparse
import avalon.api as avalon
import openpype.api as openpype
from . import lib
from copy import deepcopy

log = openpype.Logger().get_logger(__name__)

@@ -799,7 +800,8 @@ class PublishClip:
# increasing steps by index of rename iteration
self.count_steps *= self.rename_index

hierarchy_formating_data = dict()
hierarchy_formating_data = {}
hierarchy_data = deepcopy(self.hierarchy_data)
_data = self.track_item_default_data.copy()
if self.ui_inputs:
# adding tag metadata from ui

@@ -824,19 +826,19 @@ class PublishClip:
_data.update({"shot": self.shot_num})

# solve # in test to pythonic expression
for _k, _v in self.hierarchy_data.items():
for _k, _v in hierarchy_data.items():
if "#" not in _v["value"]:
continue
self.hierarchy_data[
hierarchy_data[
_k]["value"] = self._replace_hash_to_expression(
_k, _v["value"])

# fill up pythonic expresisons in hierarchy data
for k, _v in self.hierarchy_data.items():
for k, _v in hierarchy_data.items():
hierarchy_formating_data[k] = _v["value"].format(**_data)
else:
# if no gui mode then just pass default data
hierarchy_formating_data = self.hierarchy_data
hierarchy_formating_data = hierarchy_data

tag_hierarchy_data = self._solve_tag_hierarchy_data(
hierarchy_formating_data

@@ -886,30 +888,38 @@ class PublishClip:
"families": [self.data["family"]]
}

def _convert_to_entity(self, key):
def _convert_to_entity(self, type, template):
""" Converting input key to key with type. """
# convert to entity type
entity_type = self.types.get(key, None)
entity_type = self.types.get(type, None)

assert entity_type, "Missing entity type for `{}`".format(
key
type
)

# first collect formating data to use for formating template
formating_data = {}
for _k, _v in self.hierarchy_data.items():
value = _v["value"].format(
**self.track_item_default_data)
formating_data[_k] = value

return {
"entity_type": entity_type,
"entity_name": self.hierarchy_data[key]["value"].format(
**self.track_item_default_data
"entity_name": template.format(
**formating_data
)
}

def _create_parents(self):
""" Create parents and return it in list. """
self.parents = list()
self.parents = []

patern = re.compile(self.parents_search_patern)
par_split = [patern.findall(t).pop()

par_split = [(patern.findall(t).pop(), t)
for t in self.hierarchy.split("/")]

for key in par_split:
parent = self._convert_to_entity(key)
for type, template in par_split:
parent = self._convert_to_entity(type, template)
self.parents.append(parent)
@@ -3,9 +3,10 @@
import contextlib

import logging
from Qt import QtCore, QtGui
from openpype.tools.utils.widgets import AssetWidget
from avalon import style, io
from Qt import QtWidgets, QtCore, QtGui
from avalon import io
from openpype import style
from openpype.tools.utils.assets_widget import SingleSelectAssetsWidget

from pxr import Sdf

@@ -13,6 +14,60 @@ from pxr import Sdf
log = logging.getLogger(__name__)


class SelectAssetDialog(QtWidgets.QWidget):
"""Frameless assets dialog to select asset with double click.

Args:
parm: Parameter where selected asset name is set.
"""
def __init__(self, parm):
self.setWindowTitle("Pick Asset")
self.setWindowFlags(QtCore.Qt.FramelessWindowHint | QtCore.Qt.Popup)

assets_widget = SingleSelectAssetsWidget(io, parent=self)

layout = QtWidgets.QHBoxLayout(self)
layout.addWidget(assets_widget)

assets_widget.double_clicked.connect(self._set_parameter)
self._assets_widget = assets_widget
self._parm = parm

def _set_parameter(self):
name = self._assets_widget.get_selected_asset_name()
self._parm.set(name)
self.close()

def _on_show(self):
pos = QtGui.QCursor.pos()
# Select the current asset if there is any
select_id = None
name = self._parm.eval()
if name:
db_asset = io.find_one(
{"name": name, "type": "asset"},
{"_id": True}
)
if db_asset:
select_id = db_asset["_id"]

# Set stylesheet
self.setStyleSheet(style.load_stylesheet())
# Refresh assets (is threaded)
self._assets_widget.refresh()
# Select asset - must be done after refresh
if select_id is not None:
self._assets_widget.select_asset(select_id)

# Show cursor (top right of window) near cursor
self.resize(250, 400)
self.move(self.mapFromGlobal(pos) - QtCore.QPoint(self.width(), 0))

def showEvent(self, event):
super(SelectAssetDialog, self).showEvent(event)
self._on_show()


def pick_asset(node):
"""Show a user interface to select an Asset in the project

@@ -21,43 +76,15 @@ def pick_asset(node):
"""

pos = QtGui.QCursor.pos()

parm = node.parm("asset_name")
if not parm:
log.error("Node has no 'asset' parameter: %s", node)
return

# Construct the AssetWidget as a frameless popup so it automatically
# Construct a frameless popup so it automatically
# closes when clicked outside of it.
global tool
tool = AssetWidget(io)
tool.setContentsMargins(5, 5, 5, 5)
tool.setWindowTitle("Pick Asset")
tool.setStyleSheet(style.load_stylesheet())
tool.setWindowFlags(QtCore.Qt.FramelessWindowHint | QtCore.Qt.Popup)
tool.refresh()

# Select the current asset if there is any
name = parm.eval()
if name:
db_asset = io.find_one({"name": name, "type": "asset"})
if db_asset:
silo = db_asset.get("silo")
if silo:
tool.set_silo(silo)
tool.select_assets([name], expand=True)

# Show cursor (top right of window) near cursor
tool.resize(250, 400)
tool.move(tool.mapFromGlobal(pos) - QtCore.QPoint(tool.width(), 0))

def set_parameter_callback(index):
name = index.data(tool.model.DocumentRole)["name"]
parm.set(name)
tool.close()

tool.view.doubleClicked.connect(set_parameter_callback)
tool = SelectAssetDialog(parm)
tool.show()
@@ -67,6 +67,16 @@ from avalon.houdini import pipeline
pipeline.reload_pipeline()]]></scriptCode>
</scriptItem>
</subMenu>

<separatorItem/>
<scriptItem id="experimental_tools">
<label>Experimental tools...</label>
<scriptCode><![CDATA[
import hou
from openpype.tools.utils import host_tools
parent = hou.qt.mainWindow()
host_tools.show_experimental_tools_dialog(parent)]]></scriptCode>
</scriptItem>
</subMenu>
</menuBar>
</mainMenu>
@@ -138,7 +138,7 @@ def on_save(_):
def on_open(_):
"""On scene open let's assume the containers have changed."""

from avalon.vendor.Qt import QtWidgets
from Qt import QtWidgets
from openpype.widgets import popup

cmds.evalDeferred(
@@ -6,19 +6,19 @@ import platform
import uuid
import math

import bson
import json
import logging
import itertools
import contextlib
from collections import OrderedDict, defaultdict
from math import ceil
from six import string_types
import bson

from maya import cmds, mel
import maya.api.OpenMaya as om

from avalon import api, maya, io, pipeline
from avalon.vendor.six import string_types
import avalon.maya.lib
import avalon.maya.interactive

@@ -1936,7 +1936,7 @@ def validate_fps():
if current_fps != fps:

from avalon.vendor.Qt import QtWidgets
from Qt import QtWidgets
from ...widgets import popup

# Find maya main window

@@ -2694,7 +2694,7 @@ def update_content_on_context_change():

def show_message(title, msg):
from avalon.vendor.Qt import QtWidgets
from Qt import QtWidgets
from openpype.widgets import message_window

# Find maya main window
@@ -180,6 +180,7 @@ class ARenderProducts:
self.layer = layer
self.render_instance = render_instance
self.multipart = False
self.aov_separator = render_instance.data.get("aovSeparator", "_")

# Initialize
self.layer_data = self._get_layer_data()

@@ -676,7 +677,7 @@ class RenderProductsVray(ARenderProducts):
"""
prefix = super(RenderProductsVray, self).get_renderer_prefix()
prefix = "{}.<aov>".format(prefix)
prefix = "{}{}<aov>".format(prefix, self.aov_separator)
return prefix

def _get_layer_data(self):
@@ -21,6 +21,7 @@ from openpype.api import (
from openpype.modules import ModulesManager

from avalon.api import Session
from avalon.api import CreatorError


class CreateRender(plugin.Creator):

@@ -81,13 +82,21 @@ class CreateRender(plugin.Creator):
}

_image_prefixes = {
'mentalray': 'maya/<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>',
'mentalray': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>',  # noqa
'vray': 'maya/<scene>/<Layer>/<Layer>',
'arnold': 'maya/<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>',
'renderman': 'maya/<Scene>/<layer>/<layer>_<aov>',
'redshift': 'maya/<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>'
'arnold': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>',  # noqa
'renderman': 'maya/<Scene>/<layer>/<layer>{aov_separator}<aov>',
'redshift': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>'  # noqa
}

_aov_chars = {
"dot": ".",
"dash": "-",
"underscore": "_"
}

_project_settings = None

def __init__(self, *args, **kwargs):
"""Constructor."""
super(CreateRender, self).__init__(*args, **kwargs)

@@ -95,12 +104,24 @@ class CreateRender(plugin.Creator):
if not deadline_settings["enabled"]:
self.deadline_servers = {}
return
project_settings = get_project_settings(Session["AVALON_PROJECT"])
self._project_settings = get_project_settings(
Session["AVALON_PROJECT"])

# project_settings/maya/create/CreateRender/aov_separator
try:
self.aov_separator = self._aov_chars[(
self._project_settings["maya"]
["create"]
["CreateRender"]
["aov_separator"]
)]
except KeyError:
self.aov_separator = "_"

try:
default_servers = deadline_settings["deadline_urls"]
project_servers = (
project_settings["deadline"]
["deadline_servers"]
self._project_settings["deadline"]["deadline_servers"]
)
self.deadline_servers = {
k: default_servers[k]

@@ -409,8 +430,10 @@ class CreateRender(plugin.Creator):
renderer (str): Renderer name.

"""
prefix = self._image_prefixes[renderer]
prefix = prefix.replace("{aov_separator}", self.aov_separator)
cmds.setAttr(self._image_prefix_nodes[renderer],
self._image_prefixes[renderer],
prefix,
type="string")

asset = get_asset()
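The hunks above introduce an `{aov_separator}` placeholder in the renderer image-prefix templates and a `_aov_chars` mapping that translates the project setting ("dot", "dash", "underscore") into the actual character before the prefix is written to Maya. A short sketch of that substitution; the settings value used here is illustrative, the real one comes from project_settings/maya/create/CreateRender/aov_separator:

```python
_aov_chars = {"dot": ".", "dash": "-", "underscore": "_"}

_image_prefixes = {
    "arnold": "maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>",
}

# Illustrative settings value; "_" is the fallback when the key is missing.
aov_separator = _aov_chars.get("dash", "_")

prefix = _image_prefixes["arnold"].replace("{aov_separator}", aov_separator)
print(prefix)  # maya/<Scene>/<RenderLayer>/<RenderLayer>-<RenderPass>
```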
@@ -446,37 +469,37 @@ class CreateRender(plugin.Creator):
self._set_global_output_settings()

@staticmethod
def _set_renderer_option(renderer_node, arg=None, value=None):
# type: (str, str, str) -> str
"""Set option on renderer node.

If renderer settings node doesn't exists, it is created first.

Args:
renderer_node (str): Renderer name.
arg (str, optional): Argument name.
value (str, optional): Argument value.

Returns:
str: Renderer settings node.

"""
settings = cmds.ls(type=renderer_node)
result = settings[0] if settings else cmds.createNode(renderer_node)
cmds.setAttr(arg.format(result), value)
return result

def _set_vray_settings(self, asset):
# type: (dict) -> None
"""Sets important settings for Vray."""
node = self._set_renderer_option(
"VRaySettingsNode", "{}.fileNameRenderElementSeparator", "_"
)
settings = cmds.ls(type="VRaySettingsNode")
node = settings[0] if settings else cmds.createNode("VRaySettingsNode")

# set separator
# set it in vray menu
if cmds.optionMenuGrp("vrayRenderElementSeparator", exists=True,
q=True):
items = cmds.optionMenuGrp(
"vrayRenderElementSeparator", ill=True, query=True)

separators = [cmds.menuItem(i, label=True, query=True) for i in items]  # noqa: E501
try:
sep_idx = separators.index(self.aov_separator)
except ValueError:
raise CreatorError(
"AOV character {} not in {}".format(
self.aov_separator, separators))

cmds.optionMenuGrp(
"vrayRenderElementSeparator", sl=sep_idx + 1, edit=True)
cmds.setAttr(
"{}.fileNameRenderElementSeparator".format(node),
self.aov_separator,
type="string"
)
# set format to exr
cmds.setAttr(
"{}.imageFormatStr".format(node), 5)
"{}.imageFormatStr".format(node), "exr", type="string")

# animType
cmds.setAttr(
@@ -133,7 +133,7 @@ class ImportMayaLoader(api.Loader):
"""

from avalon.vendor.Qt import QtWidgets
from Qt import QtWidgets

accept = QtWidgets.QMessageBox.Ok
buttons = accept | QtWidgets.QMessageBox.Cancel
@@ -532,7 +532,7 @@ class CollectLook(pyblish.api.InstancePlugin):
color_space = cmds.getAttr(color_space_attr)
except ValueError:
# node doesn't have colorspace attribute
color_space = "raw"
color_space = "Raw"
# Compare with the computed file path, e.g. the one with the <UDIM>
# pattern in it, to generate some logging information about this
# difference
@@ -41,6 +41,7 @@ Provides:
import re
import os
import platform
import json

from maya import cmds

@@ -61,6 +62,12 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
label = "Collect Render Layers"
sync_workfile_version = False

_aov_chars = {
"dot": ".",
"dash": "-",
"underscore": "_"
}

def process(self, context):
"""Entry point to collector."""
render_instance = None

@@ -166,6 +173,18 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
if renderer.startswith("renderman"):
renderer = "renderman"

try:
aov_separator = self._aov_chars[(
context.data["project_settings"]
["create"]
["CreateRender"]
["aov_separator"]
)]
except KeyError:
aov_separator = "_"

render_instance.data["aovSeparator"] = aov_separator

# return all expected files for all cameras and aovs in given
# frame range
layer_render_products = get_layer_render_products(

@@ -255,12 +274,28 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
common_publish_meta_path, part)
if part == expected_layer_name:
break

# TODO: replace this terrible linux hotfix with real solution :)
if platform.system().lower() in ["linux", "darwin"]:
common_publish_meta_path = "/" + common_publish_meta_path

self.log.info(
"Publish meta path: {}".format(common_publish_meta_path))

self.log.info(full_exp_files)
self.log.info("collecting layer: {}".format(layer_name))
# Get layer specific settings, might be overrides

try:
aov_separator = self._aov_chars[(
context.data["project_settings"]
["create"]
["CreateRender"]
["aov_separator"]
)]
except KeyError:
aov_separator = "_"

data = {
"subset": expected_layer_name,
"attachTo": attach_to,

@@ -302,7 +337,8 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
"convertToScanline") or False,
"useReferencedAovs": render_instance.data.get(
"useReferencedAovs") or render_instance.data.get(
"vrayUseReferencedAovs") or False
"vrayUseReferencedAovs") or False,
"aovSeparator": aov_separator
}

if deadline_url:
|||
|
|
@ -275,7 +275,7 @@ class CollectYetiRig(pyblish.api.InstancePlugin):
|
|||
list: file sequence.
|
||||
|
||||
"""
|
||||
from avalon.vendor import clique
|
||||
import clique
|
||||
|
||||
escaped = re.escape(filepath)
|
||||
re_pattern = escaped.replace(pattern, "-?[0-9]+")
|
||||
|
|
|
|||
|
|
@ -332,10 +332,10 @@ class ExtractLook(openpype.api.Extractor):
|
|||
if do_maketx and files_metadata[filepath]["color_space"].lower() == "srgb": # noqa: E501
|
||||
linearize = True
|
||||
# set its file node to 'raw' as tx will be linearized
|
||||
files_metadata[filepath]["color_space"] = "raw"
|
||||
files_metadata[filepath]["color_space"] = "Raw"
|
||||
|
||||
if do_maketx:
|
||||
color_space = "raw"
|
||||
# if do_maketx:
|
||||
# color_space = "Raw"
|
||||
|
||||
source, mode, texture_hash = self._process_texture(
|
||||
filepath,
|
||||
|
|
@ -383,11 +383,11 @@ class ExtractLook(openpype.api.Extractor):
|
|||
color_space = cmds.getAttr(color_space_attr)
|
||||
except ValueError:
|
||||
# node doesn't have color space attribute
|
||||
color_space = "raw"
|
||||
color_space = "Raw"
|
||||
else:
|
||||
if files_metadata[source]["color_space"] == "raw":
|
||||
if files_metadata[source]["color_space"] == "Raw":
|
||||
# set color space to raw if we linearized it
|
||||
color_space = "raw"
|
||||
color_space = "Raw"
|
||||
# Remap file node filename to destination
|
||||
remap[color_space_attr] = color_space
|
||||
attr = resource["attribute"]
|
||||
|
|
|
|||
|
|
@@ -1,13 +1,14 @@
import os
import json
import getpass
import appdirs
import platform

import appdirs
import requests

from maya import cmds

from avalon import api
from avalon.vendor import requests

import pyblish.api
from openpype.hosts.maya.api import lib
@@ -89,8 +89,8 @@ class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin):
"""

from Qt import QtWidgets
from openpype.hosts.maya.api import lib
from avalon.vendor.Qt import QtWidgets

# Store namespace in variable, cosmetics thingy
messagebox = QtWidgets.QMessageBox
@@ -1,9 +1,10 @@
import os
import json

import appdirs
import requests

import pyblish.api
from avalon.vendor import requests
from openpype.plugin import contextplugin_should_run
import openpype.hosts.maya.api.action
@@ -55,13 +55,19 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
ImagePrefixTokens = {

'arnold': 'maya/<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>',
'arnold': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>',  # noqa
'redshift': 'maya/<Scene>/<RenderLayer>/<RenderLayer>',
'vray': 'maya/<Scene>/<Layer>/<Layer>',
'renderman': '<layer>_<aov>.<f4>.<ext>'
'renderman': '<layer>{aov_separator}<aov>.<f4>.<ext>'  # noqa
}

redshift_AOV_prefix = "<BeautyPath>/<BeautyFile>_<RenderPass>"
_aov_chars = {
"dot": ".",
"dash": "-",
"underscore": "_"
}

redshift_AOV_prefix = "<BeautyPath>/<BeautyFile>{aov_separator}<RenderPass>"  # noqa: E501

# WARNING: There is bug? in renderman, translating <scene> token
# to something left behind mayas default image prefix. So instead

@@ -107,6 +113,9 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
anim_override = lib.get_attr_in_layer("defaultRenderGlobals.animation",
layer=layer)

prefix = prefix.replace(
"{aov_separator}", instance.data.get("aovSeparator", "_"))
if not anim_override:
invalid = True
cls.log.error("Animation needs to be enabled. Use the same "

@@ -138,12 +147,16 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
else:
node = vray_settings[0]

if cmds.getAttr(
"{}.fileNameRenderElementSeparator".format(node)) != "_":
invalid = False
scene_sep = cmds.getAttr(
"{}.fileNameRenderElementSeparator".format(node))
if scene_sep != instance.data.get("aovSeparator", "_"):
cls.log.error("AOV separator is not set correctly.")
invalid = True

if renderer == "redshift":
redshift_AOV_prefix = cls.redshift_AOV_prefix.replace(
"{aov_separator}", instance.data.get("aovSeparator", "_")
)
if re.search(cls.R_AOV_TOKEN, prefix):
invalid = True
cls.log.error(("Do not use AOV token [ {} ] - "

@@ -155,7 +168,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
for aov in rs_aovs:
aov_prefix = cmds.getAttr("{}.filePrefix".format(aov))
# check their image prefix
if aov_prefix != cls.redshift_AOV_prefix:
if aov_prefix != redshift_AOV_prefix:
cls.log.error(("AOV ({}) image prefix is not set "
"correctly {} != {}").format(
cmds.getAttr("{}.name".format(aov)),

@@ -181,7 +194,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
file_prefix = cmds.getAttr("rmanGlobals.imageFileFormat")
dir_prefix = cmds.getAttr("rmanGlobals.imageOutputDir")

if file_prefix.lower() != cls.ImagePrefixTokens[renderer].lower():
if file_prefix.lower() != prefix.lower():
invalid = True
cls.log.error("Wrong image prefix [ {} ]".format(file_prefix))

@@ -198,18 +211,20 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
cls.log.error("Wrong image prefix [ {} ] - "
"You can't use '<renderpass>' token "
"with merge AOVs turned on".format(prefix))
else:
if not re.search(cls.R_AOV_TOKEN, prefix):
invalid = True
cls.log.error("Wrong image prefix [ {} ] - "
"doesn't have: '<renderpass>' or "
"token".format(prefix))
elif not re.search(cls.R_AOV_TOKEN, prefix):
invalid = True
cls.log.error("Wrong image prefix [ {} ] - "
"doesn't have: '<renderpass>' or "
"token".format(prefix))

# prefix check
if prefix.lower() != cls.ImagePrefixTokens[renderer].lower():
default_prefix = cls.ImagePrefixTokens[renderer]
default_prefix = default_prefix.replace(
"{aov_separator}", instance.data.get("aovSeparator", "_"))
if prefix.lower() != default_prefix.lower():
cls.log.warning("warning: prefix differs from "
"recommended {}".format(
cls.ImagePrefixTokens[renderer]))
default_prefix))

if padding != cls.DEFAULT_PADDING:
invalid = True

@@ -257,9 +272,14 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
@classmethod
def repair(cls, instance):

renderer = instance.data['renderer']
layer_node = instance.data['setMembers']
redshift_AOV_prefix = cls.redshift_AOV_prefix.replace(
"{aov_separator}", instance.data.get("aovSeparator", "_")
)
default_prefix = cls.ImagePrefixTokens[renderer].replace(
"{aov_separator}", instance.data.get("aovSeparator", "_")
)

with lib.renderlayer(layer_node):
default = lib.RENDER_ATTRS['default']

@@ -270,7 +290,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
node = render_attrs["node"]
prefix_attr = render_attrs["prefix"]

fname_prefix = cls.ImagePrefixTokens[renderer]
fname_prefix = default_prefix
cmds.setAttr("{}.{}".format(node, prefix_attr),
fname_prefix, type="string")

@@ -281,7 +301,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
else:
# renderman handles stuff differently
cmds.setAttr("rmanGlobals.imageFileFormat",
cls.ImagePrefixTokens[renderer],
default_prefix,
type="string")
cmds.setAttr("rmanGlobals.imageOutputDir",
cls.RendermanDirPrefix,

@@ -294,10 +314,13 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
else:
node = vray_settings[0]

cmds.optionMenuGrp("vrayRenderElementSeparator",
v=instance.data.get("aovSeparator", "_"))
cmds.setAttr(
"{}.fileNameRenderElementSeparator".format(
node),
"_"
instance.data.get("aovSeparator", "_"),
type="string"
)

if renderer == "redshift":

@@ -306,7 +329,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
for aov in rs_aovs:
# fix AOV prefixes
cmds.setAttr(
"{}.filePrefix".format(aov), cls.redshift_AOV_prefix)
"{}.filePrefix".format(aov), redshift_AOV_prefix)
# fix AOV file format
default_ext = cmds.getAttr(
"redshiftOptions.imageFormat", asString=True)
@ -70,7 +70,8 @@ def install():
|
|||
family_states = [
|
||||
"write",
|
||||
"review",
|
||||
"nukenodes"
|
||||
"nukenodes",
|
||||
"model",
|
||||
"gizmo"
|
||||
]
|
||||
|
||||
|
|
|
|||
|
|
@ -18,7 +18,7 @@ from openpype.api import (
|
|||
BuildWorkfile,
|
||||
get_version_from_path,
|
||||
get_anatomy_settings,
|
||||
get_hierarchy,
|
||||
get_workdir_data,
|
||||
get_asset,
|
||||
get_current_project_settings,
|
||||
ApplicationManager
|
||||
|
|
@ -41,6 +41,10 @@ opnl.workfiles_launched = False
|
|||
opnl._node_tab_name = "{}".format(os.getenv("AVALON_LABEL") or "Avalon")
|
||||
|
||||
|
||||
def get_nuke_imageio_settings():
|
||||
return get_anatomy_settings(opnl.project_name)["imageio"]["nuke"]
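A short usage sketch of the new helper; the nested keys below are taken from how the settings are read elsewhere in this diff, while the surrounding call site is hypothetical:

# hypothetical call site inside the Nuke host integration
nuke_imageio = get_nuke_imageio_settings()
required_nodes = nuke_imageio["nodes"]["requiredNodes"]
regex_inputs = nuke_imageio["regexInputs"]["inputs"]
viewer_process = nuke_imageio["viewer"]["viewerProcess"]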
|
||||
|
||||
|
||||
def get_created_node_imageio_setting(**kwarg):
|
||||
''' Get preset data for dataflow (fileType, compression, bitDepth)
|
||||
'''
|
||||
|
|
@ -51,8 +55,7 @@ def get_created_node_imageio_setting(**kwarg):
|
|||
assert any([creator, nodeclass]), nuke.message(
|
||||
"`{}`: Missing mandatory kwargs `host`, `cls`".format(__file__))
|
||||
|
||||
imageio = get_anatomy_settings(opnl.project_name)["imageio"]
|
||||
imageio_nodes = imageio["nuke"]["nodes"]["requiredNodes"]
|
||||
imageio_nodes = get_nuke_imageio_settings()["nodes"]["requiredNodes"]
|
||||
|
||||
imageio_node = None
|
||||
for node in imageio_nodes:
|
||||
|
|
@ -70,8 +73,7 @@ def get_imageio_input_colorspace(filename):
|
|||
''' Get input file colorspace based on regex in settings.
|
||||
'''
|
||||
imageio_regex_inputs = (
|
||||
get_anatomy_settings(opnl.project_name)
|
||||
["imageio"]["nuke"]["regexInputs"]["inputs"])
|
||||
get_nuke_imageio_settings()["regexInputs"]["inputs"])
|
||||
|
||||
preset_clrsp = None
|
||||
for regexInput in imageio_regex_inputs:
|
||||
|
|
@ -268,15 +270,21 @@ def format_anatomy(data):
|
|||
if not version:
|
||||
file = script_name()
|
||||
data["version"] = get_version_from_path(file)
|
||||
project_document = io.find_one({"type": "project"})
|
||||
|
||||
project_doc = io.find_one({"type": "project"})
|
||||
asset_doc = io.find_one({
|
||||
"type": "asset",
|
||||
"name": data["avalon"]["asset"]
|
||||
})
|
||||
task_name = os.environ["AVALON_TASK"]
|
||||
host_name = os.environ["AVALON_APP"]
|
||||
context_data = get_workdir_data(
|
||||
project_doc, asset_doc, task_name, host_name
|
||||
)
|
||||
data.update(context_data)
|
||||
data.update({
|
||||
"subset": data["avalon"]["subset"],
|
||||
"asset": data["avalon"]["asset"],
|
||||
"task": os.environ["AVALON_TASK"],
|
||||
"family": data["avalon"]["family"],
|
||||
"project": {"name": project_document["name"],
|
||||
"code": project_document["data"].get("code", '')},
|
||||
"hierarchy": get_hierarchy(),
|
||||
"frame": "#" * padding,
|
||||
})
|
||||
return anatomy.format(data)
|
||||
|
|
@ -547,8 +555,7 @@ def add_rendering_knobs(node, farm=True):
|
|||
Return:
|
||||
node (obj): with added knobs
|
||||
'''
|
||||
knob_options = [
|
||||
"Use existing frames", "Local"]
|
||||
knob_options = ["Use existing frames", "Local"]
|
||||
if farm:
|
||||
knob_options.append("On farm")
|
||||
|
||||
|
|
@ -906,8 +913,7 @@ class WorkfileSettings(object):
|
|||
''' Setting colorpace following presets
|
||||
'''
|
||||
# get imageio
|
||||
imageio = get_anatomy_settings(opnl.project_name)["imageio"]
|
||||
nuke_colorspace = imageio["nuke"]
|
||||
nuke_colorspace = get_nuke_imageio_settings()
|
||||
|
||||
try:
|
||||
self.set_root_colorspace(nuke_colorspace["workfile"])
|
||||
|
|
@ -1164,386 +1170,6 @@ def get_write_node_template_attr(node):
|
|||
return anlib.fix_data_for_node_create(correct_data)
|
||||
|
||||
|
||||
class ExporterReview:
|
||||
"""
|
||||
Base class object for generating review data from Nuke
|
||||
|
||||
Args:
|
||||
klass (pyblish.plugin): pyblish plugin parent
|
||||
instance (pyblish.instance): instance of pyblish context
|
||||
|
||||
"""
|
||||
_temp_nodes = []
|
||||
data = dict({
|
||||
"representations": list()
|
||||
})
|
||||
|
||||
def __init__(self,
|
||||
klass,
|
||||
instance
|
||||
):
|
||||
|
||||
self.log = klass.log
|
||||
self.instance = instance
|
||||
self.path_in = self.instance.data.get("path", None)
|
||||
self.staging_dir = self.instance.data["stagingDir"]
|
||||
self.collection = self.instance.data.get("collection", None)
|
||||
|
||||
def get_file_info(self):
|
||||
if self.collection:
|
||||
self.log.debug("Collection: `{}`".format(self.collection))
|
||||
# get path
|
||||
self.fname = os.path.basename(self.collection.format(
|
||||
"{head}{padding}{tail}"))
|
||||
self.fhead = self.collection.format("{head}")
|
||||
|
||||
# get first and last frame
|
||||
self.first_frame = min(self.collection.indexes)
|
||||
self.last_frame = max(self.collection.indexes)
|
||||
if "slate" in self.instance.data["families"]:
|
||||
self.first_frame += 1
|
||||
else:
|
||||
self.fname = os.path.basename(self.path_in)
|
||||
self.fhead = os.path.splitext(self.fname)[0] + "."
|
||||
self.first_frame = self.instance.data.get("frameStartHandle", None)
|
||||
self.last_frame = self.instance.data.get("frameEndHandle", None)
|
||||
|
||||
if "#" in self.fhead:
|
||||
self.fhead = self.fhead.replace("#", "")[:-1]
|
||||
|
||||
def get_representation_data(self, tags=None, range=False):
|
||||
add_tags = []
|
||||
if tags:
|
||||
add_tags = tags
|
||||
|
||||
repre = {
|
||||
'name': self.name,
|
||||
'ext': self.ext,
|
||||
'files': self.file,
|
||||
"stagingDir": self.staging_dir,
|
||||
"tags": [self.name.replace("_", "-")] + add_tags
|
||||
}
|
||||
|
||||
if range:
|
||||
repre.update({
|
||||
"frameStart": self.first_frame,
|
||||
"frameEnd": self.last_frame,
|
||||
})
|
||||
|
||||
self.data["representations"].append(repre)
|
||||
|
||||
def get_view_process_node(self):
|
||||
"""
|
||||
Will get any active view process.
|
||||
|
||||
Arguments:
|
||||
self (class): in object definition
|
||||
|
||||
Returns:
|
||||
nuke.Node: copy node of Input Process node
|
||||
"""
|
||||
anlib.reset_selection()
|
||||
ipn_orig = None
|
||||
for v in nuke.allNodes(filter="Viewer"):
|
||||
ip = v['input_process'].getValue()
|
||||
ipn = v['input_process_node'].getValue()
|
||||
if "VIEWER_INPUT" not in ipn and ip:
|
||||
ipn_orig = nuke.toNode(ipn)
|
||||
ipn_orig.setSelected(True)
|
||||
|
||||
if ipn_orig:
|
||||
# copy selected to clipboard
|
||||
nuke.nodeCopy('%clipboard%')
|
||||
# reset selection
|
||||
anlib.reset_selection()
|
||||
# paste node and selection is on it only
|
||||
nuke.nodePaste('%clipboard%')
|
||||
# assign to variable
|
||||
ipn = nuke.selectedNode()
|
||||
|
||||
return ipn
|
||||
|
||||
def clean_nodes(self):
|
||||
for node in self._temp_nodes:
|
||||
nuke.delete(node)
|
||||
self._temp_nodes = []
|
||||
self.log.info("Deleted nodes...")
|
||||
|
||||
|
||||
class ExporterReviewLut(ExporterReview):
|
||||
"""
|
||||
Generator object for review lut from Nuke
|
||||
|
||||
Args:
|
||||
klass (pyblish.plugin): pyblish plugin parent
|
||||
instance (pyblish.instance): instance of pyblish context
|
||||
|
||||
|
||||
"""
|
||||
|
||||
def __init__(self,
|
||||
klass,
|
||||
instance,
|
||||
name=None,
|
||||
ext=None,
|
||||
cube_size=None,
|
||||
lut_size=None,
|
||||
lut_style=None):
|
||||
# initialize parent class
|
||||
ExporterReview.__init__(self, klass, instance)
|
||||
self._temp_nodes = []
|
||||
|
||||
# deal with now lut defined in viewer lut
|
||||
if hasattr(klass, "viewer_lut_raw"):
|
||||
self.viewer_lut_raw = klass.viewer_lut_raw
|
||||
else:
|
||||
self.viewer_lut_raw = False
|
||||
|
||||
self.name = name or "baked_lut"
|
||||
self.ext = ext or "cube"
|
||||
self.cube_size = cube_size or 32
|
||||
self.lut_size = lut_size or 1024
|
||||
self.lut_style = lut_style or "linear"
|
||||
|
||||
# set frame start / end and file name to self
|
||||
self.get_file_info()
|
||||
|
||||
self.log.info("File info was set...")
|
||||
|
||||
self.file = self.fhead + self.name + ".{}".format(self.ext)
|
||||
self.path = os.path.join(
|
||||
self.staging_dir, self.file).replace("\\", "/")
|
||||
|
||||
def generate_lut(self):
|
||||
# ---------- start nodes creation
|
||||
|
||||
# CMSTestPattern
|
||||
cms_node = nuke.createNode("CMSTestPattern")
|
||||
cms_node["cube_size"].setValue(self.cube_size)
|
||||
# connect
|
||||
self._temp_nodes.append(cms_node)
|
||||
self.previous_node = cms_node
|
||||
self.log.debug("CMSTestPattern... `{}`".format(self._temp_nodes))
|
||||
|
||||
# Node View Process
|
||||
ipn = self.get_view_process_node()
|
||||
if ipn is not None:
|
||||
# connect
|
||||
ipn.setInput(0, self.previous_node)
|
||||
self._temp_nodes.append(ipn)
|
||||
self.previous_node = ipn
|
||||
self.log.debug("ViewProcess... `{}`".format(self._temp_nodes))
|
||||
|
||||
if not self.viewer_lut_raw:
|
||||
# OCIODisplay
|
||||
dag_node = nuke.createNode("OCIODisplay")
|
||||
# connect
|
||||
dag_node.setInput(0, self.previous_node)
|
||||
self._temp_nodes.append(dag_node)
|
||||
self.previous_node = dag_node
|
||||
self.log.debug("OCIODisplay... `{}`".format(self._temp_nodes))
|
||||
|
||||
# GenerateLUT
|
||||
gen_lut_node = nuke.createNode("GenerateLUT")
|
||||
gen_lut_node["file"].setValue(self.path)
|
||||
gen_lut_node["file_type"].setValue(".{}".format(self.ext))
|
||||
gen_lut_node["lut1d"].setValue(self.lut_size)
|
||||
gen_lut_node["style1d"].setValue(self.lut_style)
|
||||
# connect
|
||||
gen_lut_node.setInput(0, self.previous_node)
|
||||
self._temp_nodes.append(gen_lut_node)
|
||||
self.log.debug("GenerateLUT... `{}`".format(self._temp_nodes))
|
||||
|
||||
# ---------- end nodes creation
|
||||
|
||||
# Export lut file
|
||||
nuke.execute(
|
||||
gen_lut_node.name(),
|
||||
int(self.first_frame),
|
||||
int(self.first_frame))
|
||||
|
||||
self.log.info("Exported...")
|
||||
|
||||
# ---------- generate representation data
|
||||
self.get_representation_data()
|
||||
|
||||
self.log.debug("Representation... `{}`".format(self.data))
|
||||
|
||||
# ---------- Clean up
|
||||
self.clean_nodes()
|
||||
|
||||
return self.data
|
||||
|
||||
|
||||
class ExporterReviewMov(ExporterReview):
|
||||
"""
|
||||
Metaclass for generating review mov files
|
||||
|
||||
Args:
|
||||
klass (pyblish.plugin): pyblish plugin parent
|
||||
instance (pyblish.instance): instance of pyblish context
|
||||
|
||||
"""
|
||||
|
||||
def __init__(self,
|
||||
klass,
|
||||
instance,
|
||||
name=None,
|
||||
ext=None,
|
||||
):
|
||||
# initialize parent class
|
||||
ExporterReview.__init__(self, klass, instance)
|
||||
|
||||
# passing presets for nodes to self
|
||||
if hasattr(klass, "nodes"):
|
||||
self.nodes = klass.nodes
|
||||
else:
|
||||
self.nodes = {}
|
||||
|
||||
# deal with now lut defined in viewer lut
|
||||
self.viewer_lut_raw = klass.viewer_lut_raw
|
||||
self.bake_colorspace_fallback = klass.bake_colorspace_fallback
|
||||
self.bake_colorspace_main = klass.bake_colorspace_main
|
||||
self.write_colorspace = instance.data["colorspace"]
|
||||
|
||||
self.name = name or "baked"
|
||||
self.ext = ext or "mov"
|
||||
|
||||
# set frame start / end and file name to self
|
||||
self.get_file_info()
|
||||
|
||||
self.log.info("File info was set...")
|
||||
|
||||
self.file = self.fhead + self.name + ".{}".format(self.ext)
|
||||
self.path = os.path.join(
|
||||
self.staging_dir, self.file).replace("\\", "/")
|
||||
|
||||
def render(self, render_node_name):
|
||||
self.log.info("Rendering... ")
|
||||
# Render Write node
|
||||
nuke.execute(
|
||||
render_node_name,
|
||||
int(self.first_frame),
|
||||
int(self.last_frame))
|
||||
|
||||
self.log.info("Rendered...")
|
||||
|
||||
def save_file(self):
|
||||
import shutil
|
||||
with anlib.maintained_selection():
|
||||
self.log.info("Saving nodes as file... ")
|
||||
# create nk path
|
||||
path = os.path.splitext(self.path)[0] + ".nk"
|
||||
# save file to the path
|
||||
shutil.copyfile(self.instance.context.data["currentFile"], path)
|
||||
|
||||
self.log.info("Nodes exported...")
|
||||
return path
|
||||
|
||||
def generate_mov(self, farm=False):
|
||||
# ---------- start nodes creation
|
||||
|
||||
# Read node
|
||||
r_node = nuke.createNode("Read")
|
||||
r_node["file"].setValue(self.path_in)
|
||||
r_node["first"].setValue(self.first_frame)
|
||||
r_node["origfirst"].setValue(self.first_frame)
|
||||
r_node["last"].setValue(self.last_frame)
|
||||
r_node["origlast"].setValue(self.last_frame)
|
||||
r_node["colorspace"].setValue(self.write_colorspace)
|
||||
|
||||
# connect
|
||||
self._temp_nodes.append(r_node)
|
||||
self.previous_node = r_node
|
||||
self.log.debug("Read... `{}`".format(self._temp_nodes))
|
||||
|
||||
# View Process node
|
||||
ipn = self.get_view_process_node()
|
||||
if ipn is not None:
|
||||
# connect
|
||||
ipn.setInput(0, self.previous_node)
|
||||
self._temp_nodes.append(ipn)
|
||||
self.previous_node = ipn
|
||||
self.log.debug("ViewProcess... `{}`".format(self._temp_nodes))
|
||||
|
||||
if not self.viewer_lut_raw:
|
||||
colorspaces = [
|
||||
self.bake_colorspace_main, self.bake_colorspace_fallback
|
||||
]
|
||||
|
||||
if any(colorspaces):
|
||||
# OCIOColorSpace with controled output
|
||||
dag_node = nuke.createNode("OCIOColorSpace")
|
||||
self._temp_nodes.append(dag_node)
|
||||
for c in colorspaces:
|
||||
test = dag_node["out_colorspace"].setValue(str(c))
|
||||
if test:
|
||||
self.log.info(
|
||||
"Baking in colorspace... `{}`".format(c))
|
||||
break
|
||||
|
||||
if not test:
|
||||
dag_node = nuke.createNode("OCIODisplay")
|
||||
else:
|
||||
# OCIODisplay
|
||||
dag_node = nuke.createNode("OCIODisplay")
|
||||
|
||||
# connect
|
||||
dag_node.setInput(0, self.previous_node)
|
||||
self._temp_nodes.append(dag_node)
|
||||
self.previous_node = dag_node
|
||||
self.log.debug("OCIODisplay... `{}`".format(self._temp_nodes))
|
||||
|
||||
# Write node
|
||||
write_node = nuke.createNode("Write")
|
||||
self.log.debug("Path: {}".format(self.path))
|
||||
write_node["file"].setValue(self.path)
|
||||
write_node["file_type"].setValue(self.ext)
|
||||
|
||||
# Knobs `meta_codec` and `mov64_codec` are not available on centos.
|
||||
# TODO change this to use conditions, if possible.
|
||||
try:
|
||||
write_node["meta_codec"].setValue("ap4h")
|
||||
except Exception:
|
||||
self.log.info("`meta_codec` knob was not found")
|
||||
|
||||
try:
|
||||
write_node["mov64_codec"].setValue("ap4h")
|
||||
except Exception:
|
||||
self.log.info("`mov64_codec` knob was not found")
|
||||
write_node["mov64_write_timecode"].setValue(1)
|
||||
write_node["raw"].setValue(1)
|
||||
# connect
|
||||
write_node.setInput(0, self.previous_node)
|
||||
self._temp_nodes.append(write_node)
|
||||
self.log.debug("Write... `{}`".format(self._temp_nodes))
|
||||
# ---------- end nodes creation
|
||||
|
||||
# ---------- render or save to nk
|
||||
if farm:
|
||||
nuke.scriptSave()
|
||||
path_nk = self.save_file()
|
||||
self.data.update({
|
||||
"bakeScriptPath": path_nk,
|
||||
"bakeWriteNodeName": write_node.name(),
|
||||
"bakeRenderPath": self.path
|
||||
})
|
||||
else:
|
||||
self.render(write_node.name())
|
||||
# ---------- generate representation data
|
||||
self.get_representation_data(
|
||||
tags=["review", "delete"],
|
||||
range=True
|
||||
)
|
||||
|
||||
self.log.debug("Representation... `{}`".format(self.data))
|
||||
|
||||
# ---------- Clean up
|
||||
self.clean_nodes()
|
||||
nuke.scriptSave()
|
||||
return self.data
|
||||
|
||||
|
||||
def get_dependent_nodes(nodes):
|
||||
"""Get all dependent nodes connected to the list of nodes.
|
||||
|
||||
|
|
@ -1654,6 +1280,8 @@ def launch_workfiles_app():
|
|||
from openpype.lib import (
|
||||
env_value_to_bool
|
||||
)
|
||||
from avalon.nuke.pipeline import get_main_window
|
||||
|
||||
# get all important settings
|
||||
open_at_start = env_value_to_bool(
|
||||
env_key="OPENPYPE_WORKFILE_TOOL_ON_START",
|
||||
|
|
@ -1665,7 +1293,8 @@ def launch_workfiles_app():
|
|||
|
||||
if not opnl.workfiles_launched:
|
||||
opnl.workfiles_launched = True
|
||||
host_tools.show_workfiles()
|
||||
main_window = get_main_window()
|
||||
host_tools.show_workfiles(parent=main_window)
|
||||
|
||||
|
||||
def process_workfile_builder():
|
||||
|
|
|
|||
|
|
@ -1,16 +1,21 @@
|
|||
import os
|
||||
import nuke
|
||||
from avalon.api import Session
|
||||
from avalon.nuke.pipeline import get_main_window
|
||||
|
||||
from .lib import WorkfileSettings
|
||||
from openpype.api import Logger, BuildWorkfile, get_current_project_settings
|
||||
from openpype.tools.utils import host_tools
|
||||
|
||||
from avalon.nuke.pipeline import get_main_window
|
||||
|
||||
log = Logger().get_logger(__name__)
|
||||
|
||||
menu_label = os.environ["AVALON_LABEL"]
|
||||
|
||||
|
||||
def install():
|
||||
main_window = get_main_window()
|
||||
menubar = nuke.menu("Nuke")
|
||||
menu = menubar.findItem(menu_label)
|
||||
|
||||
|
|
@ -25,7 +30,7 @@ def install():
|
|||
menu.removeItem(rm_item[1].name())
|
||||
menu.addCommand(
|
||||
name,
|
||||
host_tools.show_workfiles,
|
||||
lambda: host_tools.show_workfiles(parent=main_window),
|
||||
index=2
|
||||
)
|
||||
menu.addSeparator(index=3)
|
||||
|
|
@ -88,7 +93,7 @@ def install():
|
|||
menu.addSeparator()
|
||||
menu.addCommand(
|
||||
"Experimental tools...",
|
||||
host_tools.show_experimental_tools_dialog
|
||||
lambda: host_tools.show_experimental_tools_dialog(parent=main_window)
|
||||
)
|
||||
|
||||
# adding shortcuts
|
||||
|
|
|
|||
|
|
@ -1,3 +1,4 @@
|
|||
import os
|
||||
import random
|
||||
import string
|
||||
|
||||
|
|
@ -26,7 +27,7 @@ class PypeCreator(PypeCreatorMixin, avalon.nuke.pipeline.Creator):
|
|||
self.data["subset"]):
|
||||
msg = ("The subset name `{0}` is already used on a node in"
|
||||
"this workfile.".format(self.data["subset"]))
|
||||
self.log.error(msg + '\n\nPlease use other subset name!')
|
||||
self.log.error(msg + "\n\nPlease use other subset name!")
|
||||
raise NameError("`{0}: {1}".format(__name__, msg))
|
||||
return
|
||||
|
||||
|
|
@ -49,15 +50,18 @@ def get_review_presets_config():
|
|||
|
||||
class NukeLoader(api.Loader):
|
||||
container_id_knob = "containerId"
|
||||
container_id = ''.join(random.choice(
|
||||
string.ascii_uppercase + string.digits) for _ in range(10))
|
||||
container_id = None
|
||||
|
||||
def reset_container_id(self):
|
||||
self.container_id = "".join(random.choice(
|
||||
string.ascii_uppercase + string.digits) for _ in range(10))
|
||||
|
||||
def get_container_id(self, node):
|
||||
id_knob = node.knobs().get(self.container_id_knob)
|
||||
return id_knob.value() if id_knob else None
|
||||
|
||||
def get_members(self, source):
|
||||
"""Return nodes that has same 'containerId' as `source`"""
|
||||
"""Return nodes that has same "containerId" as `source`"""
|
||||
source_id = self.get_container_id(source)
|
||||
return [node for node in nuke.allNodes(recurseGroups=True)
|
||||
if self.get_container_id(node) == source_id
|
||||
|
|
@ -67,13 +71,16 @@ class NukeLoader(api.Loader):
|
|||
source_id = self.get_container_id(node)
|
||||
|
||||
if source_id:
|
||||
node[self.container_id_knob].setValue(self.container_id)
|
||||
node[self.container_id_knob].setValue(source_id)
|
||||
else:
|
||||
HIDEN_FLAG = 0x00040000
|
||||
_knob = anlib.Knobby(
|
||||
"String_Knob",
|
||||
self.container_id,
|
||||
flags=[nuke.READ_ONLY, HIDEN_FLAG])
|
||||
flags=[
|
||||
nuke.READ_ONLY,
|
||||
HIDEN_FLAG
|
||||
])
|
||||
knob = _knob.create(self.container_id_knob)
|
||||
node.addKnob(knob)
|
||||
|
||||
|
|
@ -94,3 +101,422 @@ class NukeLoader(api.Loader):
|
|||
nuke.delete(member)
|
||||
|
||||
return dependent_nodes
|
||||
|
||||
|
||||
class ExporterReview(object):
|
||||
"""
|
||||
Base class object for generating review data from Nuke
|
||||
|
||||
Args:
|
||||
klass (pyblish.plugin): pyblish plugin parent
|
||||
instance (pyblish.instance): instance of pyblish context
|
||||
|
||||
"""
|
||||
data = None
|
||||
|
||||
def __init__(self,
|
||||
klass,
|
||||
instance,
|
||||
multiple_presets=True
|
||||
):
|
||||
|
||||
self.log = klass.log
|
||||
self.instance = instance
|
||||
self.multiple_presets = multiple_presets
|
||||
self.path_in = self.instance.data.get("path", None)
|
||||
self.staging_dir = self.instance.data["stagingDir"]
|
||||
self.collection = self.instance.data.get("collection", None)
|
||||
self.data = dict({
|
||||
"representations": list()
|
||||
})
|
||||
|
||||
def get_file_info(self):
|
||||
if self.collection:
|
||||
self.log.debug("Collection: `{}`".format(self.collection))
|
||||
# get path
|
||||
self.fname = os.path.basename(self.collection.format(
|
||||
"{head}{padding}{tail}"))
|
||||
self.fhead = self.collection.format("{head}")
|
||||
|
||||
# get first and last frame
|
||||
self.first_frame = min(self.collection.indexes)
|
||||
self.last_frame = max(self.collection.indexes)
|
||||
if "slate" in self.instance.data["families"]:
|
||||
self.first_frame += 1
|
||||
else:
|
||||
self.fname = os.path.basename(self.path_in)
|
||||
self.fhead = os.path.splitext(self.fname)[0] + "."
|
||||
self.first_frame = self.instance.data.get("frameStartHandle", None)
|
||||
self.last_frame = self.instance.data.get("frameEndHandle", None)
|
||||
|
||||
if "#" in self.fhead:
|
||||
self.fhead = self.fhead.replace("#", "")[:-1]
|
||||
|
||||
def get_representation_data(self, tags=None, range=False):
|
||||
add_tags = tags or []
|
||||
repre = {
|
||||
"name": self.name,
|
||||
"ext": self.ext,
|
||||
"files": self.file,
|
||||
"stagingDir": self.staging_dir,
|
||||
"tags": [self.name.replace("_", "-")] + add_tags
|
||||
}
|
||||
|
||||
if range:
|
||||
repre.update({
|
||||
"frameStart": self.first_frame,
|
||||
"frameEnd": self.last_frame,
|
||||
})
|
||||
|
||||
if self.multiple_presets:
|
||||
repre["outputName"] = self.name
|
||||
|
||||
self.data["representations"].append(repre)
|
||||
|
||||
def get_view_input_process_node(self):
|
||||
"""
|
||||
Will get any active view process.
|
||||
|
||||
Arguments:
|
||||
self (class): in object definition
|
||||
|
||||
Returns:
|
||||
nuke.Node: copy node of Input Process node
|
||||
"""
|
||||
anlib.reset_selection()
|
||||
ipn_orig = None
|
||||
for v in nuke.allNodes(filter="Viewer"):
|
||||
ip = v["input_process"].getValue()
|
||||
ipn = v["input_process_node"].getValue()
|
||||
if "VIEWER_INPUT" not in ipn and ip:
|
||||
ipn_orig = nuke.toNode(ipn)
|
||||
ipn_orig.setSelected(True)
|
||||
|
||||
if ipn_orig:
|
||||
# copy selected to clipboard
|
||||
nuke.nodeCopy("%clipboard%")
|
||||
# reset selection
|
||||
anlib.reset_selection()
|
||||
# paste node and selection is on it only
|
||||
nuke.nodePaste("%clipboard%")
|
||||
# assign to variable
|
||||
ipn = nuke.selectedNode()
|
||||
|
||||
return ipn
|
||||
|
||||
def get_imageio_baking_profile(self):
|
||||
from . import lib as opnlib
|
||||
nuke_imageio = opnlib.get_nuke_imageio_settings()
|
||||
|
||||
# TODO: this is only securing backward compatibility, let's remove
|
||||
# this once all projects' anatomy settings are updated to the newer config
|
||||
if "baking" in nuke_imageio.keys():
|
||||
return nuke_imageio["baking"]["viewerProcess"]
|
||||
else:
|
||||
return nuke_imageio["viewer"]["viewerProcess"]
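For illustration, the two imageio settings shapes this backward-compatibility branch handles might look like the following (profile names are made up); the helper below simply restates the method above so it can be sanity-checked outside Nuke:

new_style = {"baking": {"viewerProcess": "rec709"}, "viewer": {"viewerProcess": "sRGB"}}
old_style = {"viewer": {"viewerProcess": "sRGB"}}

def pick_baking_profile(nuke_imageio):
    # prefer the dedicated baking profile, fall back to the viewer profile
    if "baking" in nuke_imageio:
        return nuke_imageio["baking"]["viewerProcess"]
    return nuke_imageio["viewer"]["viewerProcess"]

assert pick_baking_profile(new_style) == "rec709"
assert pick_baking_profile(old_style) == "sRGB"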
|
||||
|
||||
|
||||
|
||||
|
||||
class ExporterReviewLut(ExporterReview):
|
||||
"""
|
||||
Generator object for review lut from Nuke
|
||||
|
||||
Args:
|
||||
klass (pyblish.plugin): pyblish plugin parent
|
||||
instance (pyblish.instance): instance of pyblish context
|
||||
|
||||
|
||||
"""
|
||||
_temp_nodes = []
|
||||
|
||||
def __init__(self,
|
||||
klass,
|
||||
instance,
|
||||
name=None,
|
||||
ext=None,
|
||||
cube_size=None,
|
||||
lut_size=None,
|
||||
lut_style=None,
|
||||
multiple_presets=True):
|
||||
# initialize parent class
|
||||
super(ExporterReviewLut, self).__init__(
|
||||
klass, instance, multiple_presets)
|
||||
|
||||
# deal with viewer_lut_raw possibly not being defined on the plugin
|
||||
if hasattr(klass, "viewer_lut_raw"):
|
||||
self.viewer_lut_raw = klass.viewer_lut_raw
|
||||
else:
|
||||
self.viewer_lut_raw = False
|
||||
|
||||
self.name = name or "baked_lut"
|
||||
self.ext = ext or "cube"
|
||||
self.cube_size = cube_size or 32
|
||||
self.lut_size = lut_size or 1024
|
||||
self.lut_style = lut_style or "linear"
|
||||
|
||||
# set frame start / end and file name to self
|
||||
self.get_file_info()
|
||||
|
||||
self.log.info("File info was set...")
|
||||
|
||||
self.file = self.fhead + self.name + ".{}".format(self.ext)
|
||||
self.path = os.path.join(
|
||||
self.staging_dir, self.file).replace("\\", "/")
|
||||
|
||||
def clean_nodes(self):
|
||||
for node in self._temp_nodes:
|
||||
nuke.delete(node)
|
||||
self._temp_nodes = []
|
||||
self.log.info("Deleted nodes...")
|
||||
|
||||
def generate_lut(self, **kwargs):
|
||||
# use .get with defaults so callers that pass no kwargs keep working
bake_viewer_process = kwargs.get("bake_viewer_process", True)
bake_viewer_input_process_node = kwargs.get(
"bake_viewer_input_process", True)
|
||||
|
||||
# ---------- start nodes creation
|
||||
|
||||
# CMSTestPattern
|
||||
cms_node = nuke.createNode("CMSTestPattern")
|
||||
cms_node["cube_size"].setValue(self.cube_size)
|
||||
# connect
|
||||
self._temp_nodes.append(cms_node)
|
||||
self.previous_node = cms_node
|
||||
self.log.debug("CMSTestPattern... `{}`".format(self._temp_nodes))
|
||||
|
||||
if bake_viewer_process:
|
||||
# Node View Process
|
||||
if bake_viewer_input_process_node:
|
||||
ipn = self.get_view_input_process_node()
|
||||
if ipn is not None:
|
||||
# connect
|
||||
ipn.setInput(0, self.previous_node)
|
||||
self._temp_nodes.append(ipn)
|
||||
self.previous_node = ipn
|
||||
self.log.debug(
|
||||
"ViewProcess... `{}`".format(self._temp_nodes))
|
||||
|
||||
if not self.viewer_lut_raw:
|
||||
# OCIODisplay
|
||||
dag_node = nuke.createNode("OCIODisplay")
|
||||
# connect
|
||||
dag_node.setInput(0, self.previous_node)
|
||||
self._temp_nodes.append(dag_node)
|
||||
self.previous_node = dag_node
|
||||
self.log.debug("OCIODisplay... `{}`".format(self._temp_nodes))
|
||||
|
||||
# GenerateLUT
|
||||
gen_lut_node = nuke.createNode("GenerateLUT")
|
||||
gen_lut_node["file"].setValue(self.path)
|
||||
gen_lut_node["file_type"].setValue(".{}".format(self.ext))
|
||||
gen_lut_node["lut1d"].setValue(self.lut_size)
|
||||
gen_lut_node["style1d"].setValue(self.lut_style)
|
||||
# connect
|
||||
gen_lut_node.setInput(0, self.previous_node)
|
||||
self._temp_nodes.append(gen_lut_node)
|
||||
self.log.debug("GenerateLUT... `{}`".format(self._temp_nodes))
|
||||
|
||||
# ---------- end nodes creation
|
||||
|
||||
# Export lut file
|
||||
nuke.execute(
|
||||
gen_lut_node.name(),
|
||||
int(self.first_frame),
|
||||
int(self.first_frame))
|
||||
|
||||
self.log.info("Exported...")
|
||||
|
||||
# ---------- generate representation data
|
||||
self.get_representation_data()
|
||||
|
||||
self.log.debug("Representation... `{}`".format(self.data))
|
||||
|
||||
# ---------- Clean up
|
||||
self.clean_nodes()
|
||||
|
||||
return self.data
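A sketch of how an extractor might drive this exporter, assuming it runs inside Nuke within a pyblish plugin (`self`, `instance`) and that `generate_lut` accepts the bake kwargs shown in the signature above; names and values are illustrative, not part of the diff:

# hypothetical call site inside an extractor's process()
exporter = ExporterReviewLut(self, instance, name="baked_lut", ext="cube")
data = exporter.generate_lut(
    bake_viewer_process=True,
    bake_viewer_input_process=True)
instance.data.setdefault("representations", [])
instance.data["representations"] += data["representations"]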
|
||||
|
||||
|
||||
class ExporterReviewMov(ExporterReview):
|
||||
"""
|
||||
Metaclass for generating review mov files
|
||||
|
||||
Args:
|
||||
klass (pyblish.plugin): pyblish plugin parent
|
||||
instance (pyblish.instance): instance of pyblish context
|
||||
|
||||
"""
|
||||
_temp_nodes = {}
|
||||
|
||||
def __init__(self,
|
||||
klass,
|
||||
instance,
|
||||
name=None,
|
||||
ext=None,
|
||||
multiple_presets=True
|
||||
):
|
||||
# initialize parent class
|
||||
super(ExporterReviewMov, self).__init__(
|
||||
klass, instance, multiple_presets)
|
||||
# passing presets for nodes to self
|
||||
self.nodes = klass.nodes if hasattr(klass, "nodes") else {}
|
||||
|
||||
# deal with now lut defined in viewer lut
|
||||
self.viewer_lut_raw = klass.viewer_lut_raw
|
||||
self.write_colorspace = instance.data["colorspace"]
|
||||
|
||||
self.name = name or "baked"
|
||||
self.ext = ext or "mov"
|
||||
|
||||
# set frame start / end and file name to self
|
||||
self.get_file_info()
|
||||
|
||||
self.log.info("File info was set...")
|
||||
|
||||
self.file = self.fhead + self.name + ".{}".format(self.ext)
|
||||
self.path = os.path.join(
|
||||
self.staging_dir, self.file).replace("\\", "/")
|
||||
|
||||
def clean_nodes(self, node_name):
|
||||
for node in self._temp_nodes[node_name]:
|
||||
nuke.delete(node)
|
||||
self._temp_nodes[node_name] = []
|
||||
self.log.info("Deleted nodes...")
|
||||
|
||||
def render(self, render_node_name):
|
||||
self.log.info("Rendering... ")
|
||||
# Render Write node
|
||||
nuke.execute(
|
||||
render_node_name,
|
||||
int(self.first_frame),
|
||||
int(self.last_frame))
|
||||
|
||||
self.log.info("Rendered...")
|
||||
|
||||
def save_file(self):
|
||||
import shutil
|
||||
with anlib.maintained_selection():
|
||||
self.log.info("Saving nodes as file... ")
|
||||
# create nk path
|
||||
path = os.path.splitext(self.path)[0] + ".nk"
|
||||
# save file to the path
|
||||
shutil.copyfile(self.instance.context.data["currentFile"], path)
|
||||
|
||||
self.log.info("Nodes exported...")
|
||||
return path
|
||||
|
||||
def generate_mov(self, farm=False, **kwargs):
|
||||
bake_viewer_process = kwargs["bake_viewer_process"]
|
||||
bake_viewer_input_process_node = kwargs[
|
||||
"bake_viewer_input_process"]
|
||||
viewer_process_override = kwargs[
|
||||
"viewer_process_override"]
|
||||
|
||||
baking_view_profile = (
|
||||
viewer_process_override or self.get_imageio_baking_profile())
|
||||
|
||||
fps = self.instance.context.data["fps"]
|
||||
|
||||
self.log.debug(">> baking_view_profile `{}`".format(
|
||||
baking_view_profile))
|
||||
|
||||
add_tags = kwargs.get("add_tags", [])
|
||||
|
||||
self.log.info(
|
||||
"__ add_tags: `{0}`".format(add_tags))
|
||||
|
||||
subset = self.instance.data["subset"]
|
||||
self._temp_nodes[subset] = []
|
||||
# ---------- start nodes creation
|
||||
|
||||
# Read node
|
||||
r_node = nuke.createNode("Read")
|
||||
r_node["file"].setValue(self.path_in)
|
||||
r_node["first"].setValue(self.first_frame)
|
||||
r_node["origfirst"].setValue(self.first_frame)
|
||||
r_node["last"].setValue(self.last_frame)
|
||||
r_node["origlast"].setValue(self.last_frame)
|
||||
r_node["colorspace"].setValue(self.write_colorspace)
|
||||
|
||||
# connect
|
||||
self._temp_nodes[subset].append(r_node)
|
||||
self.previous_node = r_node
|
||||
self.log.debug("Read... `{}`".format(self._temp_nodes[subset]))
|
||||
|
||||
# only create colorspace baking if toggled on
|
||||
if bake_viewer_process:
|
||||
if bake_viewer_input_process_node:
|
||||
# View Process node
|
||||
ipn = self.get_view_input_process_node()
|
||||
if ipn is not None:
|
||||
# connect
|
||||
ipn.setInput(0, self.previous_node)
|
||||
self._temp_nodes[subset].append(ipn)
|
||||
self.previous_node = ipn
|
||||
self.log.debug(
|
||||
"ViewProcess... `{}`".format(
|
||||
self._temp_nodes[subset]))
|
||||
|
||||
if not self.viewer_lut_raw:
|
||||
# OCIODisplay
|
||||
dag_node = nuke.createNode("OCIODisplay")
|
||||
dag_node["view"].setValue(str(baking_view_profile))
|
||||
|
||||
# connect
|
||||
dag_node.setInput(0, self.previous_node)
|
||||
self._temp_nodes[subset].append(dag_node)
|
||||
self.previous_node = dag_node
|
||||
self.log.debug("OCIODisplay... `{}`".format(
|
||||
self._temp_nodes[subset]))
|
||||
|
||||
# Write node
|
||||
write_node = nuke.createNode("Write")
|
||||
self.log.debug("Path: {}".format(self.path))
|
||||
write_node["file"].setValue(str(self.path))
|
||||
write_node["file_type"].setValue(str(self.ext))
|
||||
|
||||
# Knobs `meta_codec` and `mov64_codec` are not available on centos.
|
||||
# TODO shouldn't this come from settings on outputs?
|
||||
try:
|
||||
write_node["meta_codec"].setValue("ap4h")
|
||||
except Exception:
|
||||
self.log.info("`meta_codec` knob was not found")
|
||||
|
||||
try:
|
||||
write_node["mov64_codec"].setValue("ap4h")
|
||||
write_node["mov64_fps"].setValue(float(fps))
|
||||
except Exception:
|
||||
self.log.info("`mov64_codec` knob was not found")
|
||||
|
||||
write_node["mov64_write_timecode"].setValue(1)
|
||||
write_node["raw"].setValue(1)
|
||||
# connect
|
||||
write_node.setInput(0, self.previous_node)
|
||||
self._temp_nodes[subset].append(write_node)
|
||||
self.log.debug("Write... `{}`".format(self._temp_nodes[subset]))
|
||||
# ---------- end nodes creation
|
||||
|
||||
# ---------- render or save to nk
|
||||
if farm:
|
||||
nuke.scriptSave()
|
||||
path_nk = self.save_file()
|
||||
self.data.update({
|
||||
"bakeScriptPath": path_nk,
|
||||
"bakeWriteNodeName": write_node.name(),
|
||||
"bakeRenderPath": self.path
|
||||
})
|
||||
else:
|
||||
self.render(write_node.name())
|
||||
# ---------- generate representation data
|
||||
self.get_representation_data(
|
||||
tags=["review", "delete"] + add_tags,
|
||||
range=True
|
||||
)
|
||||
|
||||
self.log.debug("Representation... `{}`".format(self.data))
|
||||
|
||||
self.clean_nodes(subset)
|
||||
nuke.scriptSave()
|
||||
|
||||
return self.data
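For reference, the keyword arguments read by `generate_mov` map to one entry of the extractor presets shown later in this diff; a hypothetical preset entry might look like this (concrete values are examples only):

# example o_data dict passed as **kwargs by ExtractReviewDataMov below
o_data = {
    "extension": "mov",
    "add_tags": ["burnin"],
    "bake_viewer_process": True,
    "bake_viewer_input_process": True,
    "viewer_process_override": "",
    "filter": {"families": ["render"], "task_types": []},
}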
|
||||
|
|
|
|||
85
openpype/hosts/nuke/plugins/create/create_model.py
Normal file
|
|
@ -0,0 +1,85 @@
|
|||
from avalon.nuke import lib as anlib
|
||||
from openpype.hosts.nuke.api import plugin
|
||||
import nuke
|
||||
|
||||
|
||||
class CreateModel(plugin.PypeCreator):
|
||||
"""Add Publishable Model Geometry"""
|
||||
|
||||
name = "model"
|
||||
label = "Create 3d Model"
|
||||
family = "model"
|
||||
icon = "cube"
|
||||
defaults = ["Main"]
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
super(CreateModel, self).__init__(*args, **kwargs)
|
||||
self.nodes = nuke.selectedNodes()
|
||||
self.node_color = "0xff3200ff"
|
||||
return
|
||||
|
||||
def process(self):
|
||||
nodes = list()
|
||||
if (self.options or {}).get("useSelection"):
|
||||
nodes = self.nodes
|
||||
for n in nodes:
|
||||
n['selected'].setValue(0)
|
||||
end_nodes = list()
|
||||
|
||||
# get the latest nodes in tree for selection
|
||||
for n in nodes:
|
||||
x = n
|
||||
end = 0
|
||||
while end == 0:
|
||||
try:
|
||||
x = x.dependent()[0]
|
||||
except IndexError:
|
||||
end_node = x
|
||||
end = 1
|
||||
end_nodes.append(end_node)
|
||||
|
||||
# set end_nodes
|
||||
end_nodes = list(set(end_nodes))
|
||||
|
||||
# check if nodes are 3d nodes
|
||||
for n in end_nodes:
|
||||
n['selected'].setValue(1)
|
||||
sn = nuke.createNode("Scene")
|
||||
if not sn.input(0):
|
||||
end_nodes.remove(n)
|
||||
nuke.delete(sn)
|
||||
|
||||
# loop over end nodes
|
||||
for n in end_nodes:
|
||||
n['selected'].setValue(1)
|
||||
|
||||
self.nodes = nuke.selectedNodes()
|
||||
nodes = self.nodes
|
||||
if len(nodes) >= 1:
|
||||
# loop selected nodes
|
||||
for n in nodes:
|
||||
data = self.data.copy()
|
||||
if len(nodes) > 1:
|
||||
# rename subset name only if more
|
||||
# than one node is selected
|
||||
subset = self.family + n["name"].value().capitalize()
|
||||
data["subset"] = subset
|
||||
|
||||
# change node color
|
||||
n["tile_color"].setValue(int(self.node_color, 16))
|
||||
# add avalon knobs
|
||||
anlib.set_avalon_knob_data(n, data)
|
||||
return True
|
||||
else:
|
||||
msg = str("Please select nodes you "
|
||||
"wish to add to a container")
|
||||
self.log.error(msg)
|
||||
nuke.message(msg)
|
||||
return
|
||||
else:
|
||||
# if selected is off then create one node
|
||||
model_node = nuke.createNode("WriteGeo")
|
||||
model_node["tile_color"].setValue(int(self.node_color, 16))
|
||||
# add avalon knobs
|
||||
instance = anlib.set_avalon_knob_data(model_node, self.data)
|
||||
return instance
|
||||
|
|
@ -4,7 +4,6 @@ import nukescripts
|
|||
from openpype.hosts.nuke.api import lib as pnlib
|
||||
from avalon.nuke import lib as anlib
|
||||
from avalon.nuke import containerise, update_container
|
||||
reload(pnlib)
|
||||
|
||||
class LoadBackdropNodes(api.Loader):
|
||||
"""Loading Published Backdrop nodes (workfile, nukenodes)"""
|
||||
|
|
|
|||
|
|
@ -67,6 +67,9 @@ class LoadClip(plugin.NukeLoader):
|
|||
|
||||
def load(self, context, name, namespace, options):
|
||||
|
||||
# reset container id so it is always unique for each instance
|
||||
self.reset_container_id()
|
||||
|
||||
is_sequence = len(context["representation"]["files"]) > 1
|
||||
|
||||
file = self.fname.replace("\\", "/")
|
||||
|
|
@ -251,8 +254,7 @@ class LoadClip(plugin.NukeLoader):
|
|||
"handleStart": str(self.handle_start),
|
||||
"handleEnd": str(self.handle_end),
|
||||
"fps": str(version_data.get("fps")),
|
||||
"author": version_data.get("author"),
|
||||
"outputDir": version_data.get("outputDir"),
|
||||
"author": version_data.get("author")
|
||||
}
|
||||
|
||||
# change color of read_node
|
||||
|
|
|
|||
|
|
@ -217,8 +217,7 @@ class LoadImage(api.Loader):
|
|||
"colorspace": version_data.get("colorspace"),
|
||||
"source": version_data.get("source"),
|
||||
"fps": str(version_data.get("fps")),
|
||||
"author": version_data.get("author"),
|
||||
"outputDir": version_data.get("outputDir"),
|
||||
"author": version_data.get("author")
|
||||
})
|
||||
|
||||
# change color of node
|
||||
|
|
|
|||
187
openpype/hosts/nuke/plugins/load/load_model.py
Normal file
|
|
@ -0,0 +1,187 @@
|
|||
from avalon import api, io
|
||||
from avalon.nuke import lib as anlib
|
||||
from avalon.nuke import containerise, update_container
|
||||
import nuke
|
||||
|
||||
|
||||
class AlembicModelLoader(api.Loader):
|
||||
"""
|
||||
This will load alembic model into script.
|
||||
"""
|
||||
|
||||
families = ["model"]
|
||||
representations = ["abc"]
|
||||
|
||||
label = "Load Alembic Model"
|
||||
icon = "cube"
|
||||
color = "orange"
|
||||
node_color = "0x4ecd91ff"
|
||||
|
||||
def load(self, context, name, namespace, data):
|
||||
# get main variables
|
||||
version = context['version']
|
||||
version_data = version.get("data", {})
|
||||
vname = version.get("name", None)
|
||||
first = version_data.get("frameStart", None)
|
||||
last = version_data.get("frameEnd", None)
|
||||
fps = version_data.get("fps") or nuke.root()["fps"].getValue()
|
||||
namespace = namespace or context['asset']['name']
|
||||
object_name = "{}_{}".format(name, namespace)
|
||||
|
||||
# prepare data for imprinting
|
||||
# add additional metadata from the version to imprint to Avalon knob
|
||||
add_keys = ["source", "author", "fps"]
|
||||
|
||||
data_imprint = {"frameStart": first,
|
||||
"frameEnd": last,
|
||||
"version": vname,
|
||||
"objectName": object_name}
|
||||
|
||||
for k in add_keys:
|
||||
data_imprint.update({k: version_data[k]})
|
||||
|
||||
# getting file path
|
||||
file = self.fname.replace("\\", "/")
|
||||
|
||||
with anlib.maintained_selection():
|
||||
model_node = nuke.createNode(
|
||||
"ReadGeo2",
|
||||
"name {} file {} ".format(
|
||||
object_name, file),
|
||||
inpanel=False
|
||||
)
|
||||
model_node.forceValidate()
|
||||
model_node["frame_rate"].setValue(float(fps))
|
||||
|
||||
# workaround because nuke's bug is not adding
|
||||
# animation keys properly
|
||||
xpos = model_node.xpos()
|
||||
ypos = model_node.ypos()
|
||||
nuke.nodeCopy("%clipboard%")
|
||||
nuke.delete(model_node)
|
||||
nuke.nodePaste("%clipboard%")
|
||||
model_node = nuke.toNode(object_name)
|
||||
model_node.setXYpos(xpos, ypos)
|
||||
|
||||
# color node by correct color by actual version
|
||||
self.node_version_color(version, model_node)
|
||||
|
||||
return containerise(
|
||||
node=model_node,
|
||||
name=name,
|
||||
namespace=namespace,
|
||||
context=context,
|
||||
loader=self.__class__.__name__,
|
||||
data=data_imprint)
|
||||
|
||||
def update(self, container, representation):
|
||||
"""
|
||||
Called by Scene Inventory when the loaded model should be updated
to the current version.
|
||||
|
||||
Args:
|
||||
container: object that has look to be updated
|
||||
representation: (dict): relationship data to get proper
|
||||
representation from DB and persisted
|
||||
data in .json
|
||||
Returns:
|
||||
None
|
||||
"""
|
||||
# Get version from io
|
||||
version = io.find_one({
|
||||
"type": "version",
|
||||
"_id": representation["parent"]
|
||||
})
|
||||
object_name = container['objectName']
|
||||
# get corresponding node
|
||||
model_node = nuke.toNode(object_name)
|
||||
|
||||
# get main variables
|
||||
version_data = version.get("data", {})
|
||||
vname = version.get("name", None)
|
||||
first = version_data.get("frameStart", None)
|
||||
last = version_data.get("frameEnd", None)
|
||||
fps = version_data.get("fps") or nuke.root()["fps"].getValue()
|
||||
|
||||
# prepare data for imprinting
|
||||
# add additional metadata from the version to imprint to Avalon knob
|
||||
add_keys = ["source", "author", "fps"]
|
||||
|
||||
data_imprint = {"representation": str(representation["_id"]),
|
||||
"frameStart": first,
|
||||
"frameEnd": last,
|
||||
"version": vname,
|
||||
"objectName": object_name}
|
||||
|
||||
for k in add_keys:
|
||||
data_imprint.update({k: version_data[k]})
|
||||
|
||||
# getting file path
|
||||
file = api.get_representation_path(representation).replace("\\", "/")
|
||||
|
||||
with anlib.maintained_selection():
|
||||
model_node = nuke.toNode(object_name)
|
||||
model_node['selected'].setValue(True)
|
||||
|
||||
# collect input output dependencies
|
||||
dependencies = model_node.dependencies()
|
||||
dependent = model_node.dependent()
|
||||
|
||||
model_node["frame_rate"].setValue(float(fps))
|
||||
model_node["file"].setValue(file)
|
||||
|
||||
# workaround because nuke's bug is
|
||||
# not adding animation keys properly
|
||||
xpos = model_node.xpos()
|
||||
ypos = model_node.ypos()
|
||||
nuke.nodeCopy("%clipboard%")
|
||||
nuke.delete(model_node)
|
||||
nuke.nodePaste("%clipboard%")
|
||||
model_node = nuke.toNode(object_name)
|
||||
model_node.setXYpos(xpos, ypos)
|
||||
|
||||
# link to original input nodes
|
||||
for i, input in enumerate(dependencies):
|
||||
model_node.setInput(i, input)
|
||||
# link to original output nodes
|
||||
for d in dependent:
|
||||
index = next((i for i, dpcy in enumerate(
|
||||
d.dependencies())
|
||||
if model_node is dpcy), 0)
|
||||
d.setInput(index, model_node)
|
||||
|
||||
# color node by correct color by actual version
|
||||
self.node_version_color(version, model_node)
|
||||
|
||||
self.log.info("udated to version: {}".format(version.get("name")))
|
||||
|
||||
return update_container(model_node, data_imprint)
|
||||
|
||||
def node_version_color(self, version, node):
|
||||
""" Coloring a node by correct color by actual version
|
||||
"""
|
||||
# get all versions in list
|
||||
versions = io.find({
|
||||
"type": "version",
|
||||
"parent": version["parent"]
|
||||
}).distinct('name')
|
||||
|
||||
max_version = max(versions)
|
||||
|
||||
# change color of node
|
||||
if version.get("name") not in [max_version]:
|
||||
node["tile_color"].setValue(int("0xd88467ff", 16))
|
||||
else:
|
||||
node["tile_color"].setValue(int(self.node_color, 16))
|
||||
|
||||
def switch(self, container, representation):
|
||||
self.update(container, representation)
|
||||
|
||||
def remove(self, container):
|
||||
from avalon.nuke import viewer_update_and_undo_stop
|
||||
node = nuke.toNode(container['objectName'])
|
||||
with viewer_update_and_undo_stop():
|
||||
nuke.delete(node)
|
||||
|
|
@ -135,8 +135,7 @@ class LinkAsGroup(api.Loader):
|
|||
"source": version["data"].get("source"),
|
||||
"handles": version["data"].get("handles"),
|
||||
"fps": version["data"].get("fps"),
|
||||
"author": version["data"].get("author"),
|
||||
"outputDir": version["data"].get("outputDir"),
|
||||
"author": version["data"].get("author")
|
||||
})
|
||||
|
||||
# Update the imprinted representation
|
||||
|
|
|
|||
49
openpype/hosts/nuke/plugins/publish/collect_model.py
Normal file
|
|
@ -0,0 +1,49 @@
|
|||
import pyblish.api
|
||||
import nuke
|
||||
|
||||
|
||||
@pyblish.api.log
|
||||
class CollectModel(pyblish.api.InstancePlugin):
|
||||
"""Collect Model node instance and its content
|
||||
"""
|
||||
|
||||
order = pyblish.api.CollectorOrder + 0.22
|
||||
label = "Collect Model"
|
||||
hosts = ["nuke"]
|
||||
families = ["model"]
|
||||
|
||||
def process(self, instance):
|
||||
|
||||
grpn = instance[0]
|
||||
|
||||
# add family to families
|
||||
instance.data["families"].insert(0, instance.data["family"])
|
||||
# make label nicer
|
||||
instance.data["label"] = grpn.name()
|
||||
|
||||
# Get frame range
|
||||
handle_start = instance.context.data["handleStart"]
|
||||
handle_end = instance.context.data["handleEnd"]
|
||||
first_frame = int(nuke.root()["first_frame"].getValue())
|
||||
last_frame = int(nuke.root()["last_frame"].getValue())
|
||||
|
||||
# Add version data to instance
|
||||
version_data = {
|
||||
"handles": handle_start,
|
||||
"handleStart": handle_start,
|
||||
"handleEnd": handle_end,
|
||||
"frameStart": first_frame + handle_start,
|
||||
"frameEnd": last_frame - handle_end,
|
||||
"colorspace": nuke.root().knob('workingSpaceLUT').value(),
|
||||
"families": [instance.data["family"]] + instance.data["families"],
|
||||
"subset": instance.data["subset"],
|
||||
"fps": instance.context.data["fps"]
|
||||
}
|
||||
|
||||
instance.data.update({
|
||||
"versionData": version_data,
|
||||
"frameStart": first_frame,
|
||||
"frameEnd": last_frame
|
||||
})
|
||||
self.log.info("Model content collected: `{}`".format(instance[:]))
|
||||
self.log.info("Model instance collected: `{}`".format(instance))
|
||||
|
|
@ -4,7 +4,6 @@ from openpype.hosts.nuke.api import lib as pnlib
|
|||
import nuke
|
||||
import os
|
||||
import openpype
|
||||
reload(pnlib)
|
||||
|
||||
class ExtractBackdropNode(openpype.api.Extractor):
|
||||
"""Extracting content of backdrop nodes
|
||||
|
|
|
|||
103
openpype/hosts/nuke/plugins/publish/extract_model.py
Normal file
|
|
@ -0,0 +1,103 @@
|
|||
import nuke
|
||||
import os
|
||||
import pyblish.api
|
||||
import openpype.api
|
||||
from avalon.nuke import lib as anlib
|
||||
from pprint import pformat
|
||||
|
||||
|
||||
class ExtractModel(openpype.api.Extractor):
|
||||
""" 3D model exctractor
|
||||
"""
|
||||
label = 'Extract Model'
|
||||
order = pyblish.api.ExtractorOrder
|
||||
families = ["model"]
|
||||
hosts = ["nuke"]
|
||||
|
||||
# presets
|
||||
write_geo_knobs = [
|
||||
("file_type", "abc"),
|
||||
("storageFormat", "Ogawa"),
|
||||
("writeGeometries", True),
|
||||
("writePointClouds", False),
|
||||
("writeAxes", False)
|
||||
]
|
||||
|
||||
def process(self, instance):
|
||||
handle_start = instance.context.data["handleStart"]
|
||||
handle_end = instance.context.data["handleEnd"]
|
||||
first_frame = int(nuke.root()["first_frame"].getValue())
|
||||
last_frame = int(nuke.root()["last_frame"].getValue())
|
||||
|
||||
self.log.info("instance.data: `{}`".format(
|
||||
pformat(instance.data)))
|
||||
|
||||
rm_nodes = list()
|
||||
model_node = instance[0]
|
||||
self.log.info("Crating additional nodes")
|
||||
subset = instance.data["subset"]
|
||||
staging_dir = self.staging_dir(instance)
|
||||
|
||||
extension = next((k[1] for k in self.write_geo_knobs
|
||||
if k[0] == "file_type"), None)
|
||||
if not extension:
|
||||
raise RuntimeError(
|
||||
"Bad config for extension in presets. "
|
||||
"Talk to your supervisor or pipeline admin")
|
||||
|
||||
# create file name and path
|
||||
filename = subset + ".{}".format(extension)
|
||||
file_path = os.path.join(staging_dir, filename).replace("\\", "/")
|
||||
|
||||
with anlib.maintained_selection():
|
||||
# select model node
|
||||
anlib.select_nodes([model_node])
|
||||
|
||||
# create write geo node
|
||||
wg_n = nuke.createNode("WriteGeo")
|
||||
wg_n["file"].setValue(file_path)
|
||||
# add path to write to
|
||||
for k, v in self.write_geo_knobs:
|
||||
wg_n[k].setValue(v)
|
||||
rm_nodes.append(wg_n)
|
||||
|
||||
# write out model
|
||||
nuke.execute(
|
||||
wg_n,
|
||||
int(first_frame),
|
||||
int(last_frame)
|
||||
)
|
||||
# erase additional nodes
|
||||
for n in rm_nodes:
|
||||
nuke.delete(n)
|
||||
|
||||
self.log.info(file_path)
|
||||
|
||||
# create representation data
|
||||
if "representations" not in instance.data:
|
||||
instance.data["representations"] = []
|
||||
|
||||
representation = {
|
||||
'name': extension,
|
||||
'ext': extension,
|
||||
'files': filename,
|
||||
"stagingDir": staging_dir,
|
||||
"frameStart": first_frame,
|
||||
"frameEnd": last_frame
|
||||
}
|
||||
instance.data["representations"].append(representation)
|
||||
|
||||
instance.data.update({
|
||||
"path": file_path,
|
||||
"outputDir": staging_dir,
|
||||
"ext": extension,
|
||||
"handleStart": handle_start,
|
||||
"handleEnd": handle_end,
|
||||
"frameStart": first_frame + handle_start,
|
||||
"frameEnd": last_frame - handle_end,
|
||||
"frameStartHandle": first_frame,
|
||||
"frameEndHandle": last_frame,
|
||||
})
|
||||
|
||||
self.log.info("Extracted instance '{0}' to: {1}".format(
|
||||
instance.name, file_path))
|
||||
|
|
@ -1,16 +1,9 @@
|
|||
import os
|
||||
import pyblish.api
|
||||
from avalon.nuke import lib as anlib
|
||||
from openpype.hosts.nuke.api import lib as pnlib
|
||||
from openpype.hosts.nuke.api import plugin
|
||||
import openpype
|
||||
|
||||
try:
|
||||
from __builtin__ import reload
|
||||
except ImportError:
|
||||
from importlib import reload
|
||||
|
||||
reload(pnlib)
|
||||
|
||||
|
||||
class ExtractReviewDataLut(openpype.api.Extractor):
|
||||
"""Extracts movie and thumbnail with baked in luts
|
||||
|
|
@ -45,7 +38,7 @@ class ExtractReviewDataLut(openpype.api.Extractor):
|
|||
|
||||
# generate data
|
||||
with anlib.maintained_selection():
|
||||
exporter = pnlib.ExporterReviewLut(
|
||||
exporter = plugin.ExporterReviewLut(
|
||||
self, instance
|
||||
)
|
||||
data = exporter.generate_lut()
|
||||
|
|
|
|||
|
|
@ -1,16 +1,9 @@
|
|||
import os
|
||||
import pyblish.api
|
||||
from avalon.nuke import lib as anlib
|
||||
from openpype.hosts.nuke.api import lib as pnlib
|
||||
from openpype.hosts.nuke.api import plugin
|
||||
import openpype
|
||||
|
||||
try:
|
||||
from __builtin__ import reload
|
||||
except ImportError:
|
||||
from importlib import reload
|
||||
|
||||
reload(pnlib)
|
||||
|
||||
|
||||
class ExtractReviewDataMov(openpype.api.Extractor):
|
||||
"""Extracts movie and thumbnail with baked in luts
|
||||
|
|
@ -27,46 +20,104 @@ class ExtractReviewDataMov(openpype.api.Extractor):
|
|||
|
||||
# presets
|
||||
viewer_lut_raw = None
|
||||
bake_colorspace_fallback = None
|
||||
bake_colorspace_main = None
|
||||
outputs = {}
|
||||
|
||||
def process(self, instance):
|
||||
families = instance.data["families"]
|
||||
task_type = instance.context.data["taskType"]
|
||||
self.log.info("Creating staging dir...")
|
||||
|
||||
if "representations" not in instance.data:
|
||||
instance.data["representations"] = list()
|
||||
instance.data["representations"] = []
|
||||
|
||||
staging_dir = os.path.normpath(
|
||||
os.path.dirname(instance.data['path']))
|
||||
os.path.dirname(instance.data["path"]))
|
||||
|
||||
instance.data["stagingDir"] = staging_dir
|
||||
|
||||
self.log.info(
|
||||
"StagingDir `{0}`...".format(instance.data["stagingDir"]))
|
||||
|
||||
self.log.info(self.outputs)
|
||||
|
||||
# generate data
|
||||
with anlib.maintained_selection():
|
||||
exporter = pnlib.ExporterReviewMov(
|
||||
self, instance)
|
||||
for o_name, o_data in self.outputs.items():
|
||||
f_families = o_data["filter"]["families"]
|
||||
f_task_types = o_data["filter"]["task_types"]
|
||||
|
||||
if "render.farm" in families:
|
||||
instance.data["families"].remove("review")
|
||||
data = exporter.generate_mov(farm=True)
|
||||
# test if family found in context
|
||||
test_families = any([
|
||||
# first if exact family set is matching
|
||||
# make sure the intersection of the lists is not empty
|
||||
bool(set(families).intersection(f_families)),
|
||||
# and if families are set at all
|
||||
# if not then return True because we want this preset
|
||||
# to be active if nothing is set
|
||||
bool(not f_families)
|
||||
])
|
||||
|
||||
self.log.debug(
|
||||
"_ data: {}".format(data))
|
||||
# test task types from filter
|
||||
test_task_types = any([
|
||||
# check if actual task type is defined in task types
|
||||
# set in preset's filter
|
||||
bool(task_type in f_task_types),
|
||||
# and if taskTypes are defined in preset filter
|
||||
# if not then return True, because we want this filter
|
||||
# to be active if no taskType is set
|
||||
bool(not f_task_types)
|
||||
])
|
||||
|
||||
instance.data.update({
|
||||
"bakeRenderPath": data.get("bakeRenderPath"),
|
||||
"bakeScriptPath": data.get("bakeScriptPath"),
|
||||
"bakeWriteNodeName": data.get("bakeWriteNodeName")
|
||||
})
|
||||
else:
|
||||
data = exporter.generate_mov()
|
||||
# we need all filters to be positive for this
|
||||
# preset to be activated
|
||||
test_all = all([
|
||||
test_families,
|
||||
test_task_types
|
||||
])
|
||||
|
||||
# assign to representations
|
||||
instance.data["representations"] += data["representations"]
|
||||
# if it is not positive then skip this preset
|
||||
if not test_all:
|
||||
continue
|
||||
|
||||
self.log.info(
|
||||
"Baking output `{}` with settings: {}".format(
|
||||
o_name, o_data))
|
||||
|
||||
# check if settings have more than one preset
|
||||
# so we don't need to add outputName to representation
|
||||
# in case there is only one preset
|
||||
multiple_presets = bool(len(self.outputs.keys()) > 1)
|
||||
|
||||
# create exporter instance
|
||||
exporter = plugin.ExporterReviewMov(
|
||||
self, instance, o_name, o_data["extension"],
|
||||
multiple_presets)
|
||||
|
||||
if "render.farm" in families:
|
||||
if "review" in instance.data["families"]:
|
||||
instance.data["families"].remove("review")
|
||||
|
||||
data = exporter.generate_mov(farm=True, **o_data)
|
||||
|
||||
self.log.debug(
|
||||
"_ data: {}".format(data))
|
||||
|
||||
if not instance.data.get("bakingNukeScripts"):
|
||||
instance.data["bakingNukeScripts"] = []
|
||||
|
||||
instance.data["bakingNukeScripts"].append({
|
||||
"bakeRenderPath": data.get("bakeRenderPath"),
|
||||
"bakeScriptPath": data.get("bakeScriptPath"),
|
||||
"bakeWriteNodeName": data.get("bakeWriteNodeName")
|
||||
})
|
||||
else:
|
||||
data = exporter.generate_mov(**o_data)
|
||||
|
||||
self.log.info(data["representations"])
|
||||
|
||||
# assign to representations
|
||||
instance.data["representations"] += data["representations"]
|
||||
|
||||
self.log.debug(
|
||||
"_ representations: {}".format(instance.data["representations"]))
|
||||
"_ representations: {}".format(
|
||||
instance.data["representations"]))
@ -2,9 +2,10 @@ import os
|
|||
import sys
|
||||
import logging
|
||||
|
||||
from Qt import QtWidgets
|
||||
|
||||
from avalon import io
|
||||
from avalon import api as avalon
|
||||
from avalon.vendor import Qt
|
||||
from openpype import lib
|
||||
from pyblish import api as pyblish
|
||||
import openpype.hosts.photoshop
|
||||
|
|
@ -38,10 +39,10 @@ def check_inventory():
|
|||
|
||||
# Warn about outdated containers.
|
||||
print("Starting new QApplication..")
|
||||
app = Qt.QtWidgets.QApplication(sys.argv)
|
||||
app = QtWidgets.QApplication(sys.argv)
|
||||
|
||||
message_box = Qt.QtWidgets.QMessageBox()
|
||||
message_box.setIcon(Qt.QtWidgets.QMessageBox.Warning)
|
||||
message_box = QtWidgets.QMessageBox()
|
||||
message_box.setIcon(QtWidgets.QMessageBox.Warning)
|
||||
msg = "There are outdated containers in the scene."
|
||||
message_box.setText(msg)
|
||||
message_box.exec_()
@ -1,5 +1,5 @@
|
|||
from Qt import QtWidgets
|
||||
import openpype.api
|
||||
from avalon.vendor import Qt
|
||||
from avalon import photoshop
|
||||
|
||||
|
||||
|
|
@ -26,21 +26,21 @@ class CreateImage(openpype.api.Creator):
|
|||
if len(selection) > 1:
|
||||
# Ask user whether to create one image or image per selected
|
||||
# item.
|
||||
msg_box = Qt.QtWidgets.QMessageBox()
|
||||
msg_box.setIcon(Qt.QtWidgets.QMessageBox.Warning)
|
||||
msg_box = QtWidgets.QMessageBox()
|
||||
msg_box.setIcon(QtWidgets.QMessageBox.Warning)
|
||||
msg_box.setText(
|
||||
"Multiple layers selected."
|
||||
"\nDo you want to make one image per layer?"
|
||||
)
|
||||
msg_box.setStandardButtons(
|
||||
Qt.QtWidgets.QMessageBox.Yes |
|
||||
Qt.QtWidgets.QMessageBox.No |
|
||||
Qt.QtWidgets.QMessageBox.Cancel
|
||||
QtWidgets.QMessageBox.Yes |
|
||||
QtWidgets.QMessageBox.No |
|
||||
QtWidgets.QMessageBox.Cancel
|
||||
)
|
||||
ret = msg_box.exec_()
|
||||
if ret == Qt.QtWidgets.QMessageBox.Yes:
|
||||
if ret == QtWidgets.QMessageBox.Yes:
|
||||
multiple_instances = True
|
||||
elif ret == Qt.QtWidgets.QMessageBox.Cancel:
|
||||
elif ret == QtWidgets.QMessageBox.Cancel:
|
||||
return
|
||||
|
||||
if multiple_instances:
@ -61,6 +61,9 @@ class OpenPypeMenu(QtWidgets.QWidget):
|
|||
inventory_btn = QtWidgets.QPushButton("Inventory ...", self)
|
||||
subsetm_btn = QtWidgets.QPushButton("Subset Manager ...", self)
|
||||
libload_btn = QtWidgets.QPushButton("Library ...", self)
|
||||
experimental_btn = QtWidgets.QPushButton(
|
||||
"Experimental tools ...", self
|
||||
)
|
||||
# rename_btn = QtWidgets.QPushButton("Rename", self)
|
||||
# set_colorspace_btn = QtWidgets.QPushButton(
|
||||
# "Set colorspace from presets", self
|
||||
|
|
@ -91,6 +94,8 @@ class OpenPypeMenu(QtWidgets.QWidget):
|
|||
|
||||
# layout.addWidget(set_colorspace_btn)
|
||||
# layout.addWidget(reset_resolution_btn)
|
||||
layout.addWidget(Spacer(15, self))
|
||||
layout.addWidget(experimental_btn)
|
||||
|
||||
self.setLayout(layout)
|
||||
|
||||
|
|
@ -104,6 +109,7 @@ class OpenPypeMenu(QtWidgets.QWidget):
|
|||
# rename_btn.clicked.connect(self.on_rename_clicked)
|
||||
# set_colorspace_btn.clicked.connect(self.on_set_colorspace_clicked)
|
||||
# reset_resolution_btn.clicked.connect(self.on_reset_resolution_clicked)
|
||||
experimental_btn.clicked.connect(self.on_experimental_clicked)
|
||||
|
||||
def on_workfile_clicked(self):
|
||||
print("Clicked Workfile")
|
||||
|
|
@ -142,6 +148,9 @@ class OpenPypeMenu(QtWidgets.QWidget):
|
|||
def on_reset_resolution_clicked(self):
|
||||
print("Clicked Reset Resolution")
|
||||
|
||||
def on_experimental_clicked(self):
|
||||
host_tools.show_experimental_tools_dialog()
|
||||
|
||||
|
||||
def launch_pype_menu():
|
||||
app = QtWidgets.QApplication(sys.argv)
|
||||
|
|
|
|||
|
|
@ -238,7 +238,7 @@ class CollectInstanceResources(pyblish.api.InstancePlugin):
|
|||
})
|
||||
|
||||
# exception for mp4 preview
|
||||
if ".mp4" in _reminding_file:
|
||||
if ext in ["mp4", "mov"]:
|
||||
frame_start = 0
|
||||
frame_end = (
|
||||
(instance_data["frameEnd"] - instance_data["frameStart"])
|
||||
|
|
@ -255,6 +255,7 @@ class CollectInstanceResources(pyblish.api.InstancePlugin):
|
|||
"step": 1,
|
||||
"fps": self.context.data.get("fps"),
|
||||
"name": "review",
|
||||
"thumbnail": True,
|
||||
"tags": ["review", "ftrackreview", "delete"],
|
||||
})
@ -49,10 +49,22 @@ class CollectHarmonyScenes(pyblish.api.InstancePlugin):
|
|||
|
||||
# fix anatomy data
|
||||
anatomy_data_new = copy.deepcopy(anatomy_data)
|
||||
|
||||
project_entity = context.data["projectEntity"]
|
||||
asset_entity = context.data["assetEntity"]
|
||||
|
||||
task_type = asset_entity["data"]["tasks"].get(task, {}).get("type")
|
||||
project_task_types = project_entity["config"]["tasks"]
|
||||
task_code = project_task_types.get(task_type, {}).get("short_name")
|
||||
|
||||
# updating hierarchy data
|
||||
anatomy_data_new.update({
|
||||
"asset": asset_data["name"],
|
||||
"task": task,
|
||||
"task": {
|
||||
"name": task,
|
||||
"type": task_type,
|
||||
"short": task_code,
|
||||
},
|
||||
"subset": subset_name
|
||||
})
|
||||
|
||||
|
|
|
|||
|
|
@ -27,6 +27,7 @@ class CollectHarmonyZips(pyblish.api.InstancePlugin):
|
|||
anatomy_data = instance.context.data["anatomyData"]
|
||||
repres = instance.data["representations"]
|
||||
files = repres[0]["files"]
|
||||
project_entity = context.data["projectEntity"]
|
||||
|
||||
if files.endswith(".zip"):
|
||||
# A zip file was dropped
|
||||
|
|
@ -45,14 +46,24 @@ class CollectHarmonyZips(pyblish.api.InstancePlugin):
|
|||
|
||||
self.log.info("Copied data: {}".format(new_instance.data))
|
||||
|
||||
task_type = asset_data["data"]["tasks"].get(task, {}).get("type")
|
||||
project_task_types = project_entity["config"]["tasks"]
|
||||
task_code = project_task_types.get(task_type, {}).get("short_name")
|
||||
|
||||
# fix anatomy data
|
||||
anatomy_data_new = copy.deepcopy(anatomy_data)
|
||||
# updating hierarchy data
|
||||
anatomy_data_new.update({
|
||||
"asset": asset_data["name"],
|
||||
"task": task,
|
||||
"subset": subset_name
|
||||
})
|
||||
anatomy_data_new.update(
|
||||
{
|
||||
"asset": asset_data["name"],
|
||||
"task": {
|
||||
"name": task,
|
||||
"type": task_type,
|
||||
"short": task_code,
|
||||
},
|
||||
"subset": subset_name
|
||||
}
|
||||
)
|
||||
|
||||
new_instance.data["label"] = f"{instance_name}"
|
||||
new_instance.data["subset"] = subset_name
@ -1,415 +0,0 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
"""Extract Harmony scene from zip file."""
|
||||
import glob
|
||||
import os
|
||||
import shutil
|
||||
import six
|
||||
import sys
|
||||
import tempfile
|
||||
import zipfile
|
||||
|
||||
import pyblish.api
|
||||
from avalon import api, io
|
||||
import openpype.api
|
||||
from openpype.lib import get_workfile_template_key_from_context
|
||||
|
||||
|
||||
class ExtractHarmonyZip(openpype.api.Extractor):
|
||||
"""Extract Harmony zip."""
|
||||
|
||||
# Pyblish settings
|
||||
label = "Extract Harmony zip"
|
||||
order = pyblish.api.ExtractorOrder + 0.02
|
||||
hosts = ["standalonepublisher"]
|
||||
families = ["scene"]
|
||||
|
||||
# Properties
|
||||
session = None
|
||||
task_types = None
|
||||
task_statuses = None
|
||||
assetversion_statuses = None
|
||||
|
||||
# Presets
|
||||
create_workfile = True
|
||||
default_task = "harmonyIngest"
|
||||
default_task_type = "Ingest"
|
||||
default_task_status = "Ingested"
|
||||
assetversion_status = "Ingested"
|
||||
|
||||
def process(self, instance):
|
||||
"""Plugin entry point."""
|
||||
context = instance.context
|
||||
self.session = context.data["ftrackSession"]
|
||||
asset_doc = context.data["assetEntity"]
|
||||
# asset_name = instance.data["asset"]
|
||||
subset_name = instance.data["subset"]
|
||||
instance_name = instance.data["name"]
|
||||
family = instance.data["family"]
|
||||
task = context.data["anatomyData"]["task"] or self.default_task
|
||||
project_entity = instance.context.data["projectEntity"]
|
||||
ftrack_id = asset_doc["data"]["ftrackId"]
|
||||
repres = instance.data["representations"]
|
||||
submitted_staging_dir = repres[0]["stagingDir"]
|
||||
submitted_files = repres[0]["files"]
|
||||
|
||||
# Get all the ftrack entities needed
|
||||
|
||||
# Asset Entity
|
||||
query = 'AssetBuild where id is "{}"'.format(ftrack_id)
|
||||
asset_entity = self.session.query(query).first()
|
||||
|
||||
# Project Entity
|
||||
query = 'Project where full_name is "{}"'.format(
|
||||
project_entity["name"]
|
||||
)
|
||||
project_entity = self.session.query(query).one()
|
||||
|
||||
# Get Task types and Statuses for creation if needed
|
||||
self.task_types = self._get_all_task_types(project_entity)
|
||||
self.task_statuses = self._get_all_task_statuses(project_entity)
|
||||
|
||||
# Get Statuses of AssetVersions
|
||||
self.assetversion_statuses = self._get_all_assetversion_statuses(
|
||||
project_entity
|
||||
)
|
||||
|
||||
# Setup the status that we want for the AssetVersion
|
||||
if self.assetversion_status:
|
||||
instance.data["assetversion_status"] = self.assetversion_status
|
||||
|
||||
# Create the default_task if it does not exist
|
||||
if task == self.default_task:
|
||||
existing_tasks = []
|
||||
entity_children = asset_entity.get('children', [])
|
||||
for child in entity_children:
|
||||
if child.entity_type.lower() == 'task':
|
||||
existing_tasks.append(child['name'].lower())
|
||||
|
||||
if task.lower() in existing_tasks:
|
||||
print("Task {} already exists".format(task))
|
||||
|
||||
else:
|
||||
self.create_task(
|
||||
name=task,
|
||||
task_type=self.default_task_type,
|
||||
task_status=self.default_task_status,
|
||||
parent=asset_entity,
|
||||
)
|
||||
|
||||
# Find latest version
|
||||
latest_version = self._find_last_version(subset_name, asset_doc)
|
||||
version_number = 1
|
||||
if latest_version is not None:
|
||||
version_number += latest_version
|
||||
|
||||
self.log.info(
|
||||
"Next version of instance \"{}\" will be {}".format(
|
||||
instance_name, version_number
|
||||
)
|
||||
)
|
||||
|
||||
# update instance info
|
||||
instance.data["task"] = task
|
||||
instance.data["version_name"] = "{}_{}".format(subset_name, task)
|
||||
instance.data["family"] = family
|
||||
instance.data["subset"] = subset_name
|
||||
instance.data["version"] = version_number
|
||||
instance.data["latestVersion"] = latest_version
|
||||
instance.data["anatomyData"].update({
|
||||
"subset": subset_name,
|
||||
"family": family,
|
||||
"version": version_number
|
||||
})
|
||||
|
||||
# Copy `families` and check if `family` is not in current families
|
||||
families = instance.data.get("families") or list()
|
||||
if families:
|
||||
families = list(set(families))
|
||||
|
||||
instance.data["families"] = families
|
||||
|
||||
# Prepare staging dir for new instance and zip + sanitize scene name
|
||||
staging_dir = tempfile.mkdtemp(prefix="pyblish_tmp_")
|
||||
|
||||
# Handle if the representation is a .zip and not an .xstage
|
||||
pre_staged = False
|
||||
if submitted_files.endswith(".zip"):
|
||||
submitted_zip_file = os.path.join(submitted_staging_dir,
|
||||
submitted_files
|
||||
).replace("\\", "/")
|
||||
|
||||
pre_staged = self.sanitize_prezipped_project(instance,
|
||||
submitted_zip_file,
|
||||
staging_dir)
|
||||
|
||||
# Get the file to work with
|
||||
source_dir = str(repres[0]["stagingDir"])
|
||||
source_file = str(repres[0]["files"])
|
||||
|
||||
staging_scene_dir = os.path.join(staging_dir, "scene")
|
||||
staging_scene = os.path.join(staging_scene_dir, source_file)
|
||||
|
||||
# If the file is an .xstage / directory, we must stage it
|
||||
if not pre_staged:
|
||||
shutil.copytree(source_dir, staging_scene_dir)
|
||||
|
||||
# Rename this latest file as 'scene.xstage'
|
||||
# This is determined in the collector from the latest scene in a
|
||||
# submitted directory / directory the submitted .xstage is in.
|
||||
# In the case of a zip file being submitted, this is determined within
|
||||
# the self.sanitize_project() method in this extractor.
|
||||
os.rename(staging_scene,
|
||||
os.path.join(staging_scene_dir, "scene.xstage")
|
||||
)
|
||||
|
||||
# Required to set the current directory where the zip will end up
|
||||
os.chdir(staging_dir)
|
||||
|
||||
# Create the zip file
|
||||
zip_filepath = shutil.make_archive(os.path.basename(source_dir),
|
||||
"zip",
|
||||
staging_scene_dir
|
||||
)
|
||||
|
||||
zip_filename = os.path.basename(zip_filepath)
|
||||
|
||||
self.log.info("Zip file: {}".format(zip_filepath))
|
||||
|
||||
# Setup representation
|
||||
new_repre = {
|
||||
"name": "zip",
|
||||
"ext": "zip",
|
||||
"files": zip_filename,
|
||||
"stagingDir": staging_dir
|
||||
}
|
||||
|
||||
self.log.debug(
|
||||
"Creating new representation: {}".format(new_repre)
|
||||
)
|
||||
instance.data["representations"] = [new_repre]
|
||||
|
||||
self.log.debug("Completed prep of zipped Harmony scene: {}"
|
||||
.format(zip_filepath)
|
||||
)
|
||||
|
||||
# If this extractor is setup to also extract a workfile...
|
||||
if self.create_workfile:
|
||||
workfile_path = self.extract_workfile(instance,
|
||||
staging_scene
|
||||
)
|
||||
|
||||
self.log.debug("Extracted Workfile to: {}".format(workfile_path))
|
||||
|
||||
def extract_workfile(self, instance, staging_scene):
|
||||
"""Extract a valid workfile for this corresponding publish.
|
||||
|
||||
Args:
|
||||
instance (:class:`pyblish.api.Instance`): Instance data.
|
||||
staging_scene (str): path of staging scene.
|
||||
|
||||
Returns:
|
||||
str: Path to workdir.
|
||||
|
||||
"""
|
||||
# Since the staging scene was renamed to "scene.xstage" for publish
|
||||
# rename the staging scene in the temp stagingdir
|
||||
staging_scene = os.path.join(os.path.dirname(staging_scene),
|
||||
"scene.xstage")
|
||||
|
||||
# Setup the data needed to form a valid work path filename
|
||||
anatomy = openpype.api.Anatomy()
|
||||
project_entity = instance.context.data["projectEntity"]
|
||||
|
||||
data = {
|
||||
"root": api.registered_root(),
|
||||
"project": {
|
||||
"name": project_entity["name"],
|
||||
"code": project_entity["data"].get("code", '')
|
||||
},
|
||||
"asset": instance.data["asset"],
|
||||
"hierarchy": openpype.api.get_hierarchy(instance.data["asset"]),
|
||||
"family": instance.data["family"],
|
||||
"task": instance.data.get("task"),
|
||||
"subset": instance.data["subset"],
|
||||
"version": 1,
|
||||
"ext": "zip",
|
||||
}
|
||||
host_name = "harmony"
|
||||
template_name = get_workfile_template_key_from_context(
|
||||
instance.data["asset"],
|
||||
instance.data.get("task"),
|
||||
host_name,
|
||||
project_name=project_entity["name"],
|
||||
dbcon=io
|
||||
)
|
||||
|
||||
# Get a valid work filename first with version 1
|
||||
file_template = anatomy.templates[template_name]["file"]
|
||||
anatomy_filled = anatomy.format(data)
|
||||
work_path = anatomy_filled[template_name]["path"]
|
||||
|
||||
# Get the final work filename with the proper version
|
||||
data["version"] = api.last_workfile_with_version(
|
||||
os.path.dirname(work_path),
|
||||
file_template,
|
||||
data,
|
||||
api.HOST_WORKFILE_EXTENSIONS[host_name]
|
||||
)[1]
|
||||
|
||||
base_name = os.path.splitext(os.path.basename(work_path))[0]
|
||||
|
||||
staging_work_path = os.path.join(os.path.dirname(staging_scene),
|
||||
base_name + ".xstage"
|
||||
)
|
||||
|
||||
# Rename this latest file after the workfile path filename
|
||||
os.rename(staging_scene, staging_work_path)
|
||||
|
||||
# Required to set the current directory where the zip will end up
|
||||
os.chdir(os.path.dirname(os.path.dirname(staging_scene)))
|
||||
|
||||
# Create the zip file
|
||||
zip_filepath = shutil.make_archive(base_name,
|
||||
"zip",
|
||||
os.path.dirname(staging_scene)
|
||||
)
|
||||
self.log.info(staging_scene)
|
||||
self.log.info(work_path)
|
||||
self.log.info(staging_work_path)
|
||||
self.log.info(os.path.dirname(os.path.dirname(staging_scene)))
|
||||
self.log.info(base_name)
|
||||
self.log.info(zip_filepath)
|
||||
|
||||
# Create the work path on disk if it does not exist
|
||||
os.makedirs(os.path.dirname(work_path), exist_ok=True)
|
||||
shutil.copy(zip_filepath, work_path)
|
||||
|
||||
return work_path
|
||||
|
||||
def sanitize_prezipped_project(
|
||||
self, instance, zip_filepath, staging_dir):
|
||||
"""Fix when a zip contains a folder.
|
||||
|
||||
Handle when the zip file root contains a folder instead of the project.
|
||||
|
||||
Args:
|
||||
instance (:class:`pyblish.api.Instance`): Instance data.
|
||||
zip_filepath (str): Path to zip.
|
||||
staging_dir (str): Path to staging directory.
|
||||
|
||||
"""
|
||||
zip = zipfile.ZipFile(zip_filepath)
|
||||
zip_contents = zipfile.ZipFile.namelist(zip)
|
||||
|
||||
# Determine if any xstage file is in root of zip
|
||||
project_in_root = [pth for pth in zip_contents
|
||||
if "/" not in pth and pth.endswith(".xstage")]
|
||||
|
||||
staging_scene_dir = os.path.join(staging_dir, "scene")
|
||||
|
||||
# The project is nested, so we must extract and move it
|
||||
if not project_in_root:
|
||||
|
||||
staging_tmp_dir = os.path.join(staging_dir, "tmp")
|
||||
|
||||
with zipfile.ZipFile(zip_filepath, "r") as zip_ref:
|
||||
zip_ref.extractall(staging_tmp_dir)
|
||||
|
||||
nested_project_folder = os.path.join(staging_tmp_dir,
|
||||
zip_contents[0]
|
||||
)
|
||||
|
||||
shutil.copytree(nested_project_folder, staging_scene_dir)
|
||||
|
||||
else:
|
||||
# The project is not nested, so we just extract to scene folder
|
||||
with zipfile.ZipFile(zip_filepath, "r") as zip_ref:
|
||||
zip_ref.extractall(staging_scene_dir)
|
||||
|
||||
latest_file = max(glob.iglob(staging_scene_dir + "/*.xstage"),
|
||||
key=os.path.getctime).replace("\\", "/")
|
||||
|
||||
instance.data["representations"][0]["stagingDir"] = staging_scene_dir
|
||||
instance.data["representations"][0]["files"] = os.path.basename(
|
||||
latest_file)
|
||||
|
||||
# We have staged the scene already so return True
|
||||
return True
|
||||
|
||||
def _find_last_version(self, subset_name, asset_doc):
|
||||
"""Find last version of subset."""
|
||||
subset_doc = io.find_one({
|
||||
"type": "subset",
|
||||
"name": subset_name,
|
||||
"parent": asset_doc["_id"]
|
||||
})
|
||||
|
||||
if subset_doc is None:
|
||||
self.log.debug("Subset entity does not exist yet.")
|
||||
else:
|
||||
version_doc = io.find_one(
|
||||
{
|
||||
"type": "version",
|
||||
"parent": subset_doc["_id"]
|
||||
},
|
||||
sort=[("name", -1)]
|
||||
)
|
||||
if version_doc:
|
||||
return int(version_doc["name"])
|
||||
return None
|
||||
|
||||
def _get_all_task_types(self, project):
|
||||
"""Get all task types."""
|
||||
tasks = {}
|
||||
proj_template = project['project_schema']
|
||||
temp_task_types = proj_template['_task_type_schema']['types']
|
||||
|
||||
for type in temp_task_types:
|
||||
if type['name'] not in tasks:
|
||||
tasks[type['name']] = type
|
||||
|
||||
return tasks
|
||||
|
||||
def _get_all_task_statuses(self, project):
|
||||
"""Get all statuses of tasks."""
|
||||
statuses = {}
|
||||
proj_template = project['project_schema']
|
||||
temp_task_statuses = proj_template.get_statuses("Task")
|
||||
|
||||
for status in temp_task_statuses:
|
||||
if status['name'] not in statuses:
|
||||
statuses[status['name']] = status
|
||||
|
||||
return statuses
|
||||
|
||||
def _get_all_assetversion_statuses(self, project):
|
||||
"""Get statuses of all asset versions."""
|
||||
statuses = {}
|
||||
proj_template = project['project_schema']
|
||||
temp_task_statuses = proj_template.get_statuses("AssetVersion")
|
||||
|
||||
for status in temp_task_statuses:
|
||||
if status['name'] not in statuses:
|
||||
statuses[status['name']] = status
|
||||
|
||||
return statuses
|
||||
|
||||
def _create_task(self, name, task_type, parent, task_status):
|
||||
"""Create task."""
|
||||
task_data = {
|
||||
'name': name,
|
||||
'parent': parent,
|
||||
}
|
||||
self.log.info(task_type)
|
||||
task_data['type'] = self.task_types[task_type]
|
||||
task_data['status'] = self.task_statuses[task_status]
|
||||
self.log.info(task_data)
|
||||
task = self.session.create('Task', task_data)
|
||||
try:
|
||||
self.session.commit()
|
||||
except Exception:
|
||||
tp, value, tb = sys.exc_info()
|
||||
self.session.rollback()
|
||||
six.reraise(tp, value, tb)
|
||||
|
||||
return task
|
||||
682
openpype/hosts/tvpaint/lib.py
Normal file
|
|
@ -0,0 +1,682 @@
|
|||
import os
|
||||
import shutil
|
||||
import collections
|
||||
from PIL import Image, ImageDraw
|
||||
|
||||
|
||||
def backwards_id_conversion(data_by_layer_id):
|
||||
"""Convert layer ids to strings from integers."""
|
||||
for key in tuple(data_by_layer_id.keys()):
|
||||
if not isinstance(key, str):
|
||||
data_by_layer_id[str(key)] = data_by_layer_id.pop(key)
|
||||
|
||||
|
||||
def get_frame_filename_template(frame_end, filename_prefix=None, ext=None):
|
||||
"""Get file template with frame key for rendered files.
|
||||
|
||||
This simple template contains `{frame}{ext}` for sequential outputs
|
||||
and `single_file{ext}` for single file output. Output is rendered to
|
||||
temporary folder so the filename should not matter as the integrator changes
|
||||
them.
|
||||
"""
|
||||
frame_padding = 4
|
||||
frame_end_str_len = len(str(frame_end))
|
||||
if frame_end_str_len > frame_padding:
|
||||
frame_padding = frame_end_str_len
|
||||
|
||||
ext = ext or ".png"
|
||||
filename_prefix = filename_prefix or ""
|
||||
|
||||
return "{}{{frame:0>{}}}{}".format(filename_prefix, frame_padding, ext)
|
||||
|
||||
|
||||
def get_layer_pos_filename_template(range_end, filename_prefix=None, ext=None):
|
||||
filename_prefix = filename_prefix or ""
|
||||
new_filename_prefix = filename_prefix + "pos_{pos}."
|
||||
return get_frame_filename_template(range_end, new_filename_prefix, ext)
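# --- Illustrative usage sketch (editor's note, not part of this commit) ---
# The per-layer template adds a "pos" placeholder for the layer position.
layer_template = get_layer_pos_filename_template(range_end=250)
assert layer_template.format(pos=3, frame=7) == "pos_3.0007.png"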
|
||||
|
||||
|
||||
def _calculate_pre_behavior_copy(
|
||||
range_start, exposure_frames, pre_beh,
|
||||
layer_frame_start, layer_frame_end,
|
||||
output_idx_by_frame_idx
|
||||
):
|
||||
"""Calculate frames before first exposure frame based on pre behavior.
|
||||
|
||||
Function may skip whole processing if first exposure frame is before
|
||||
layer's first frame. In that case pre behavior does not make sense.
|
||||
|
||||
Args:
|
||||
range_start(int): First frame of range which should be rendered.
|
||||
exposure_frames(list): List of all exposure frames on layer.
|
||||
pre_beh(str): Pre behavior of layer (enum of 4 strings).
|
||||
layer_frame_start(int): First frame of layer.
|
||||
layer_frame_end(int): Last frame of layer.
|
||||
output_idx_by_frame_idx(dict): References to already prepared frames
|
||||
and where result will be stored.
|
||||
"""
|
||||
# Check if last layer frame is after range end
|
||||
if layer_frame_start < range_start:
|
||||
return
|
||||
|
||||
first_exposure_frame = min(exposure_frames)
|
||||
# Skip if last exposure frame is after range end
|
||||
if first_exposure_frame < range_start:
|
||||
return
|
||||
|
||||
# Calculate frame count of layer
|
||||
frame_count = layer_frame_end - layer_frame_start + 1
|
||||
|
||||
if pre_beh == "none":
|
||||
# Just fill all frames from last exposure frame to range end with None
|
||||
for frame_idx in range(range_start, layer_frame_start):
|
||||
output_idx_by_frame_idx[frame_idx] = None
|
||||
|
||||
elif pre_beh == "hold":
|
||||
# Keep first frame for whole time
|
||||
for frame_idx in range(range_start, layer_frame_start):
|
||||
output_idx_by_frame_idx[frame_idx] = first_exposure_frame
|
||||
|
||||
elif pre_beh in ("loop", "repeat"):
|
||||
# Loop backwards from last frame of layer
|
||||
for frame_idx in reversed(range(range_start, layer_frame_start)):
|
||||
eq_frame_idx_offset = (
|
||||
(layer_frame_end - frame_idx) % frame_count
|
||||
)
|
||||
eq_frame_idx = layer_frame_end - eq_frame_idx_offset
|
||||
output_idx_by_frame_idx[frame_idx] = eq_frame_idx
|
||||
|
||||
elif pre_beh == "pingpong":
|
||||
half_seq_len = frame_count - 1
|
||||
seq_len = half_seq_len * 2
|
||||
for frame_idx in reversed(range(range_start, layer_frame_start)):
|
||||
eq_frame_idx_offset = (layer_frame_start - frame_idx) % seq_len
|
||||
if eq_frame_idx_offset > half_seq_len:
|
||||
eq_frame_idx_offset = (seq_len - eq_frame_idx_offset)
|
||||
eq_frame_idx = layer_frame_start + eq_frame_idx_offset
|
||||
output_idx_by_frame_idx[frame_idx] = eq_frame_idx
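# --- Illustrative worked example (editor's note, not part of this commit) ---
# Pre behavior for a hypothetical layer starting at frame 10 (first exposure
# on frame 10, layer ends at frame 20) when the rendered range starts at 5:
mapping = {}
_calculate_pre_behavior_copy(
    range_start=5,
    exposure_frames=[10],
    pre_beh="hold",
    layer_frame_start=10,
    layer_frame_end=20,
    output_idx_by_frame_idx=mapping,
)
assert mapping == {5: 10, 6: 10, 7: 10, 8: 10, 9: 10}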
|
||||
|
||||
|
||||
def _calculate_post_behavior_copy(
|
||||
range_end, exposure_frames, post_beh,
|
||||
layer_frame_start, layer_frame_end,
|
||||
output_idx_by_frame_idx
|
||||
):
|
||||
"""Calculate frames after last frame of layer based on post behavior.
|
||||
|
||||
Function may skip whole processing if last layer frame is after range_end.
|
||||
In that case post behavior does not make sense.
|
||||
|
||||
Args:
|
||||
range_end(int): Last frame of range which should be rendered.
|
||||
exposure_frames(list): List of all exposure frames on layer.
|
||||
post_beh(str): Post behavior of layer (enum of 4 strings).
|
||||
layer_frame_start(int): First frame of layer.
|
||||
layer_frame_end(int): Last frame of layer.
|
||||
output_idx_by_frame_idx(dict): References to already prepared frames
|
||||
and where result will be stored.
|
||||
"""
|
||||
# Check if last layer frame is after range end
|
||||
if layer_frame_end >= range_end:
|
||||
return
|
||||
|
||||
last_exposure_frame = max(exposure_frames)
|
||||
# Skip if last exposure frame is after range end
|
||||
# - this is probably irrelevant with layer frame end check?
|
||||
if last_exposure_frame >= range_end:
|
||||
return
|
||||
|
||||
# Calculate frame count of layer
|
||||
frame_count = layer_frame_end - layer_frame_start + 1
|
||||
|
||||
if post_beh == "none":
|
||||
# Just fill all frames from last exposure frame to range end with None
|
||||
for frame_idx in range(layer_frame_end + 1, range_end + 1):
|
||||
output_idx_by_frame_idx[frame_idx] = None
|
||||
|
||||
elif post_beh == "hold":
|
||||
# Keep last exposure frame to the end
|
||||
for frame_idx in range(layer_frame_end + 1, range_end + 1):
|
||||
output_idx_by_frame_idx[frame_idx] = last_exposure_frame
|
||||
|
||||
elif post_beh in ("loop", "repeat"):
|
||||
# Loop backwards from last frame of layer
|
||||
for frame_idx in range(layer_frame_end + 1, range_end + 1):
|
||||
eq_frame_idx = frame_idx % frame_count
|
||||
output_idx_by_frame_idx[frame_idx] = eq_frame_idx
|
||||
|
||||
elif post_beh == "pingpong":
|
||||
half_seq_len = frame_count - 1
|
||||
seq_len = half_seq_len * 2
|
||||
for frame_idx in range(layer_frame_end + 1, range_end + 1):
|
||||
eq_frame_idx_offset = (frame_idx - layer_frame_end) % seq_len
|
||||
if eq_frame_idx_offset > half_seq_len:
|
||||
eq_frame_idx_offset = seq_len - eq_frame_idx_offset
|
||||
eq_frame_idx = layer_frame_end - eq_frame_idx_offset
|
||||
output_idx_by_frame_idx[frame_idx] = eq_frame_idx
|
||||
|
||||
|
||||
def _calculate_in_range_frames(
|
||||
range_start, range_end,
|
||||
exposure_frames, layer_frame_end,
|
||||
output_idx_by_frame_idx
|
||||
):
|
||||
"""Calculate frame references in defined range.
|
||||
|
||||
Function may skip whole processing if last layer frame is after range_end.
|
||||
In that case post behavior does not make sense.
|
||||
|
||||
Args:
|
||||
range_start(int): First frame of range which should be rendered.
|
||||
range_end(int): Last frame of range which should be rendered.
|
||||
exposure_frames(list): List of all exposure frames on layer.
|
||||
layer_frame_end(int): Last frame of layer.
|
||||
output_idx_by_frame_idx(dict): References to already prepared frames
|
||||
and where result will be stored.
|
||||
"""
|
||||
# Calculate in range frames
|
||||
in_range_frames = []
|
||||
for frame_idx in exposure_frames:
|
||||
if range_start <= frame_idx <= range_end:
|
||||
output_idx_by_frame_idx[frame_idx] = frame_idx
|
||||
in_range_frames.append(frame_idx)
|
||||
|
||||
if in_range_frames:
|
||||
first_in_range_frame = min(in_range_frames)
|
||||
# Calculate frames from first exposure frames to range end or last
|
||||
# frame of layer (post behavior should be calculated since that time)
|
||||
previous_exposure = first_in_range_frame
|
||||
for frame_idx in range(first_in_range_frame, range_end + 1):
|
||||
if frame_idx > layer_frame_end:
|
||||
break
|
||||
|
||||
if frame_idx in exposure_frames:
|
||||
previous_exposure = frame_idx
|
||||
else:
|
||||
output_idx_by_frame_idx[frame_idx] = previous_exposure
|
||||
|
||||
# There can be frames before first exposure frame in range
|
||||
# First check if we don't already have the first range frame filled
|
||||
if range_start in output_idx_by_frame_idx:
|
||||
return
|
||||
|
||||
first_exposure_frame = max(exposure_frames)
|
||||
last_exposure_frame = max(exposure_frames)
|
||||
# Check if first exposure frame is smaller than the defined range
|
||||
# if not then skip
|
||||
if first_exposure_frame >= range_start:
|
||||
return
|
||||
|
||||
# Check if last exposure frame is also before range start
|
||||
# in that case we can't use fill frames before out range
|
||||
if last_exposure_frame < range_start:
|
||||
return
|
||||
|
||||
closest_exposure_frame = first_exposure_frame
|
||||
for frame_idx in exposure_frames:
|
||||
if frame_idx >= range_start:
|
||||
break
|
||||
if frame_idx > closest_exposure_frame:
|
||||
closest_exposure_frame = frame_idx
|
||||
|
||||
output_idx_by_frame_idx[closest_exposure_frame] = closest_exposure_frame
|
||||
for frame_idx in range(range_start, range_end + 1):
|
||||
if frame_idx in output_idx_by_frame_idx:
|
||||
break
|
||||
output_idx_by_frame_idx[frame_idx] = closest_exposure_frame
|
||||
|
||||
|
||||
def _cleanup_frame_references(output_idx_by_frame_idx):
|
||||
Clean up chained frame references.
|
||||
|
||||
Resolve indirect references so each points directly to a rendered frame.
|
||||
```
|
||||
// Example input
|
||||
{
|
||||
1: 1,
|
||||
2: 1,
|
||||
3: 2
|
||||
}
|
||||
// Result
|
||||
{
|
||||
1: 1,
|
||||
2: 1,
|
||||
3: 1 // Changed reference to final rendered frame
|
||||
}
|
||||
```
|
||||
Result is a dictionary where each key leads to the frame that should be rendered.
|
||||
"""
|
||||
for frame_idx in tuple(output_idx_by_frame_idx.keys()):
|
||||
reference_idx = output_idx_by_frame_idx[frame_idx]
|
||||
# Skip transparent frames
|
||||
if reference_idx is None or reference_idx == frame_idx:
|
||||
continue
|
||||
|
||||
real_reference_idx = reference_idx
|
||||
_tmp_reference_idx = reference_idx
|
||||
while True:
|
||||
_temp = output_idx_by_frame_idx[_tmp_reference_idx]
|
||||
if _temp == _tmp_reference_idx:
|
||||
real_reference_idx = _tmp_reference_idx
|
||||
break
|
||||
_tmp_reference_idx = _temp
|
||||
|
||||
if real_reference_idx != reference_idx:
|
||||
output_idx_by_frame_idx[frame_idx] = real_reference_idx
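# --- Illustrative worked example (editor's note, not part of this commit) ---
# Chained references are flattened so every frame points straight at the frame
# that will actually be rendered; the input mapping below is hypothetical.
refs = {1: 1, 2: 1, 3: 2, 4: None}
_cleanup_frame_references(refs)
assert refs == {1: 1, 2: 1, 3: 1, 4: None}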
|
||||
|
||||
|
||||
def _cleanup_out_range_frames(output_idx_by_frame_idx, range_start, range_end):
|
||||
"""Cleanup frame references to frames out of passed range.
|
||||
|
||||
First available frame in range is used
|
||||
```
|
||||
// Example input. Range 2-3
|
||||
{
|
||||
1: 1,
|
||||
2: 1,
|
||||
3: 1
|
||||
}
|
||||
// Result
|
||||
{
|
||||
2: 2, // Redirect to self as it is the first that referenced out of range
|
||||
3: 2 // Redirect to first redirected frame
|
||||
}
|
||||
```
|
||||
Result is a dictionary where each key leads to the frame that should be rendered.
|
||||
"""
|
||||
in_range_frames_by_out_frames = collections.defaultdict(set)
|
||||
out_range_frames = set()
|
||||
for frame_idx in tuple(output_idx_by_frame_idx.keys()):
|
||||
# Skip frames that are already out of range
|
||||
if frame_idx < range_start or frame_idx > range_end:
|
||||
out_range_frames.add(frame_idx)
|
||||
continue
|
||||
|
||||
reference_idx = output_idx_by_frame_idx[frame_idx]
|
||||
# Skip transparent frames
|
||||
if reference_idx is None:
|
||||
continue
|
||||
|
||||
# Skip references in range
|
||||
if reference_idx < range_start or reference_idx > range_end:
|
||||
in_range_frames_by_out_frames[reference_idx].add(frame_idx)
|
||||
|
||||
for reference_idx in tuple(in_range_frames_by_out_frames.keys()):
|
||||
frame_indexes = in_range_frames_by_out_frames.pop(reference_idx)
|
||||
new_reference = None
|
||||
for frame_idx in frame_indexes:
|
||||
if new_reference is None:
|
||||
new_reference = frame_idx
|
||||
output_idx_by_frame_idx[frame_idx] = new_reference
|
||||
|
||||
# Finally remove out of range frames
|
||||
for frame_idx in out_range_frames:
|
||||
output_idx_by_frame_idx.pop(frame_idx)
|
||||
|
||||
|
||||
def calculate_layer_frame_references(
|
||||
range_start, range_end,
|
||||
layer_frame_start,
|
||||
layer_frame_end,
|
||||
exposure_frames,
|
||||
pre_beh, post_beh
|
||||
):
|
||||
"""Calculate frame references for one layer based on it's data.
|
||||
|
||||
Output is a dictionary where each key is a frame index referencing the rendered frame
|
||||
index. If a frame index should be rendered then it references itself.
|
||||
|
||||
```
|
||||
// Example output
|
||||
{
|
||||
1: 1, // Reference to self - will be rendered
|
||||
2: 1, // Reference to frame 1 - will be copied
|
||||
3: 1, // Reference to frame 1 - will be copied
|
||||
4: 4, // Reference to self - will be rendered
|
||||
...
|
||||
20: 4 // Reference to frame 4 - will be copied
|
||||
21: None // Has reference to None - transparent image
|
||||
}
|
||||
```
|
||||
|
||||
Args:
|
||||
range_start(int): First frame of range which should be rendered.
|
||||
range_end(int): Last frame of range which should be rendered.
|
||||
layer_frame_start(int): First frame of layer.
|
||||
layer_frame_end(int): Last frame of layer.
|
||||
exposure_frames(list): List of all exposure frames on layer.
|
||||
pre_beh(str): Pre behavior of layer (enum of 4 strings).
|
||||
post_beh(str): Post behavior of layer (enum of 4 strings).
|
||||
"""
|
||||
# Output variable
|
||||
output_idx_by_frame_idx = {}
|
||||
# Skip if layer does not have any exposure frames
|
||||
if not exposure_frames:
|
||||
return output_idx_by_frame_idx
|
||||
|
||||
# First calculate in range frames
|
||||
_calculate_in_range_frames(
|
||||
range_start, range_end,
|
||||
exposure_frames, layer_frame_end,
|
||||
output_idx_by_frame_idx
|
||||
)
|
||||
# Calculate frames by pre behavior of layer
|
||||
_calculate_pre_behavior_copy(
|
||||
range_start, exposure_frames, pre_beh,
|
||||
layer_frame_start, layer_frame_end,
|
||||
output_idx_by_frame_idx
|
||||
)
|
||||
# Calculate frames by post behavior of layer
|
||||
_calculate_post_behavior_copy(
|
||||
range_end, exposure_frames, post_beh,
|
||||
layer_frame_start, layer_frame_end,
|
||||
output_idx_by_frame_idx
|
||||
)
|
||||
# Cleanup of referenced frames
|
||||
_cleanup_frame_references(output_idx_by_frame_idx)
|
||||
|
||||
# Remove frames out of range
|
||||
_cleanup_out_range_frames(output_idx_by_frame_idx, range_start, range_end)
|
||||
|
||||
return output_idx_by_frame_idx
|
||||
|
||||
|
||||
def calculate_layers_extraction_data(
|
||||
layers_data,
|
||||
exposure_frames_by_layer_id,
|
||||
behavior_by_layer_id,
|
||||
range_start,
|
||||
range_end,
|
||||
skip_not_visible=True,
|
||||
filename_prefix=None,
|
||||
ext=None
|
||||
):
|
||||
"""Calculate extraction data for passed layers data.
|
||||
|
||||
```
|
||||
{
|
||||
<layer_id>: {
|
||||
"frame_references": {...},
|
||||
"filenames_by_frame_index": {...}
|
||||
},
|
||||
...
|
||||
}
|
||||
```
|
||||
|
||||
Frame references map each frame index to the rendered frame index.
|
||||
|
||||
Filename by frame index represents the filename under which the frame should be
|
||||
stored. Directory is not handled here because each usage may need a different
|
||||
approach.
|
||||
|
||||
Args:
|
||||
layers_data(list): Layers data loaded from TVPaint.
|
||||
exposure_frames_by_layer_id(dict): Exposure frames of layers stored by
|
||||
layer id.
|
||||
behavior_by_layer_id(dict): Pre and Post behavior of layers stored by
|
||||
layer id.
|
||||
range_start(int): First frame of rendered range.
|
||||
range_end(int): Last frame of rendered range.
|
||||
skip_not_visible(bool): Skip calculations for hidden layers (Skipped
|
||||
by default).
|
||||
filename_prefix(str): Prefix before filename.
|
||||
ext(str): Extension which filenames will have ('.png' is default).
|
||||
|
||||
Returns:
|
||||
dict: Prepared data for rendering by layer position.
|
||||
"""
|
||||
# Make sure layer ids are strings
|
||||
# backwards compatibility when layer ids were integers
|
||||
backwards_id_conversion(exposure_frames_by_layer_id)
|
||||
backwards_id_conversion(behavior_by_layer_id)
|
||||
|
||||
layer_template = get_layer_pos_filename_template(
|
||||
range_end, filename_prefix, ext
|
||||
)
|
||||
output = {}
|
||||
for layer_data in layers_data:
|
||||
if skip_not_visible and not layer_data["visible"]:
|
||||
continue
|
||||
|
||||
orig_layer_id = layer_data["layer_id"]
|
||||
layer_id = str(orig_layer_id)
|
||||
|
||||
# Skip if layer does not have any exposure frames (empty layer)
|
||||
exposure_frames = exposure_frames_by_layer_id[layer_id]
|
||||
if not exposure_frames:
|
||||
continue
|
||||
|
||||
layer_position = layer_data["position"]
|
||||
layer_frame_start = layer_data["frame_start"]
|
||||
layer_frame_end = layer_data["frame_end"]
|
||||
|
||||
layer_behavior = behavior_by_layer_id[layer_id]
|
||||
|
||||
pre_behavior = layer_behavior["pre"]
|
||||
post_behavior = layer_behavior["post"]
|
||||
|
||||
frame_references = calculate_layer_frame_references(
|
||||
range_start, range_end,
|
||||
layer_frame_start,
|
||||
layer_frame_end,
|
||||
exposure_frames,
|
||||
pre_behavior, post_behavior
|
||||
)
|
||||
# All values in 'frame_references' reference to a frame that must be
|
||||
# rendered out
|
||||
frames_to_render = set(frame_references.values())
|
||||
# Remove 'None' reference (transparent image)
|
||||
if None in frames_to_render:
|
||||
frames_to_render.remove(None)
|
||||
|
||||
# Skip layer if has nothing to render
|
||||
if not frames_to_render:
|
||||
continue
|
||||
|
||||
# All filenames that should be as output (not final output)
|
||||
filename_frames = (
|
||||
set(range(range_start, range_end + 1))
|
||||
| frames_to_render
|
||||
)
|
||||
filenames_by_frame_index = {}
|
||||
for frame_idx in filename_frames:
|
||||
filenames_by_frame_index[frame_idx] = layer_template.format(
|
||||
pos=layer_position,
|
||||
frame=frame_idx
|
||||
)
|
||||
|
||||
# Store objects under the layer id
|
||||
output[orig_layer_id] = {
|
||||
"frame_references": frame_references,
|
||||
"filenames_by_frame_index": filenames_by_frame_index
|
||||
}
|
||||
return output
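# --- Illustrative usage sketch (editor's note, not part of this commit) ---
# How a caller might consume the result; "layers_data" and the id-keyed dicts
# are hypothetical stand-ins for values queried from TVPaint.
extraction_data = calculate_layers_extraction_data(
    layers_data=layers_data,
    exposure_frames_by_layer_id=exposure_frames_by_layer_id,
    behavior_by_layer_id=behavior_by_layer_id,
    range_start=1,
    range_end=24,
)
for layer_id, layer_info in extraction_data.items():
    frame_references = layer_info["frame_references"]
    filenames_by_frame = layer_info["filenames_by_frame_index"]
    # Only frames referencing themselves need an actual render; the rest are
    # filled afterwards with 'fill_reference_frames'.
    frames_to_render = {
        frame for frame, ref in frame_references.items() if ref == frame
    }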
|
||||
|
||||
|
||||
def create_transparent_image_from_source(src_filepath, dst_filepath):
|
||||
"""Create transparent image of same type and size as source image."""
|
||||
img_obj = Image.open(src_filepath)
|
||||
painter = ImageDraw.Draw(img_obj)
|
||||
painter.rectangle((0, 0, *img_obj.size), fill=(0, 0, 0, 0))
|
||||
img_obj.save(dst_filepath)
|
||||
|
||||
|
||||
def fill_reference_frames(frame_references, filepaths_by_frame):
|
||||
# Store path to first transparent image if there is any
|
||||
for frame_idx, ref_idx in frame_references.items():
|
||||
# Frame referencing to self should be rendered and used as source
|
||||
# and reference indexes with None can't be filled
|
||||
if ref_idx is None or frame_idx == ref_idx:
|
||||
continue
|
||||
|
||||
# Get destination filepath
|
||||
src_filepath = filepaths_by_frame[ref_idx]
|
||||
dst_filepath = filepaths_by_frame[frame_idx]
|
||||
|
||||
if hasattr(os, "link"):
|
||||
os.link(src_filepath, dst_filepath)
|
||||
else:
|
||||
shutil.copy(src_filepath, dst_filepath)
|
||||
|
||||
|
||||
def copy_render_file(src_path, dst_path):
|
||||
"""Create copy file of an image."""
|
||||
if hasattr(os, "link"):
|
||||
os.link(src_path, dst_path)
|
||||
else:
|
||||
shutil.copy(src_path, dst_path)
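# --- Editor's note (illustrative sketch, not part of this commit) ---
# Both helpers above prefer hard links to avoid duplicating identical frame
# data and fall back to a real copy where 'os.link' is unavailable. Hard links
# only work within one filesystem, so a defensive variant (hypothetical helper
# name) could catch the cross-device error as well:
def copy_render_file_safe(src_path, dst_path):
    try:
        os.link(src_path, dst_path)
    except (AttributeError, OSError):
        shutil.copy(src_path, dst_path)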
|
||||
|
||||
|
||||
def cleanup_rendered_layers(filepaths_by_layer_id):
|
||||
"""Delete all files for each individual layer files after compositing."""
|
||||
# Collect all filepaths from data
|
||||
all_filepaths = []
|
||||
for filepaths_by_frame in filepaths_by_layer_id.values():
|
||||
all_filepaths.extend(filepaths_by_frame.values())
|
||||
|
||||
# Loop over all collected filepaths
|
||||
for filepath in set(all_filepaths):
|
||||
if filepath is not None and os.path.exists(filepath):
|
||||
os.remove(filepath)
|
||||
|
||||
|
||||
def composite_rendered_layers(
|
||||
layers_data, filepaths_by_layer_id,
|
||||
range_start, range_end,
|
||||
dst_filepaths_by_frame, cleanup=True
|
||||
):
|
||||
"""Composite multiple rendered layers by their position.
|
||||
|
||||
Result is single frame sequence with transparency matching content
|
||||
created in TVPaint. Missing source filepaths are replaced with transparent
|
||||
images but at least one image must be rendered and exist.
|
||||
|
||||
Function can be used even if single layer was created to fill transparent
|
||||
filepaths.
|
||||
|
||||
Args:
|
||||
layers_data(list): Layers data loaded from TVPaint.
|
||||
filepaths_by_layer_id(dict): Rendered filepaths stored by frame index
|
||||
per layer id. Used as source for compositing.
|
||||
range_start(int): First frame of rendered range.
|
||||
range_end(int): Last frame of rendered range.
|
||||
dst_filepaths_by_frame(dict): Output filepaths by frame where final
|
||||
image after compositing will be stored. Path must not clash with
|
||||
source filepaths.
|
||||
cleanup(bool): Remove all source filepaths when done with compositing.
|
||||
"""
|
||||
# Prepare layers by their position
|
||||
# - position tells in which order will compositing happen
|
||||
layer_ids_by_position = {}
|
||||
for layer in layers_data:
|
||||
layer_position = layer["position"]
|
||||
layer_ids_by_position[layer_position] = layer["layer_id"]
|
||||
|
||||
# Sort layer positions
|
||||
sorted_positions = tuple(sorted(layer_ids_by_position.keys()))
|
||||
# Prepare variable for filepaths without any rendered content
|
||||
# - transparent images will be created for them
|
||||
transparent_filepaths = set()
|
||||
# Store first final filepath
|
||||
first_dst_filepath = None
|
||||
for frame_idx in range(range_start, range_end + 1):
|
||||
dst_filepath = dst_filepaths_by_frame[frame_idx]
|
||||
src_filepaths = []
|
||||
for layer_position in sorted_positions:
|
||||
layer_id = layer_ids_by_position[layer_position]
|
||||
filepaths_by_frame = filepaths_by_layer_id[layer_id]
|
||||
src_filepath = filepaths_by_frame.get(frame_idx)
|
||||
if src_filepath is not None:
|
||||
src_filepaths.append(src_filepath)
|
||||
|
||||
if not src_filepaths:
|
||||
transparent_filepaths.add(dst_filepath)
|
||||
continue
|
||||
|
||||
# Store first destination filepath to be used for transparent images
|
||||
if first_dst_filepath is None:
|
||||
first_dst_filepath = dst_filepath
|
||||
|
||||
if len(src_filepaths) == 1:
|
||||
src_filepath = src_filepaths[0]
|
||||
if cleanup:
|
||||
os.rename(src_filepath, dst_filepath)
|
||||
else:
|
||||
copy_render_file(src_filepath, dst_filepath)
|
||||
|
||||
else:
|
||||
composite_images(src_filepaths, dst_filepath)
|
||||
|
||||
# Store first transparent filepath to be able copy it
|
||||
transparent_filepath = None
|
||||
for dst_filepath in transparent_filepaths:
|
||||
if transparent_filepath is None:
|
||||
create_transparent_image_from_source(
|
||||
first_dst_filepath, dst_filepath
|
||||
)
|
||||
transparent_filepath = dst_filepath
|
||||
else:
|
||||
copy_render_file(transparent_filepath, dst_filepath)
|
||||
|
||||
# Remove all files that were used as source for compositing
|
||||
if cleanup:
|
||||
cleanup_rendered_layers(filepaths_by_layer_id)
|
||||
|
||||
|
||||
def composite_images(input_image_paths, output_filepath):
|
||||
"""Composite images in order from passed list.
|
||||
|
||||
Raises:
|
||||
ValueError: When entered list is empty.
|
||||
"""
|
||||
if not input_image_paths:
|
||||
raise ValueError("Nothing to composite.")
|
||||
|
||||
img_obj = None
|
||||
for image_filepath in input_image_paths:
|
||||
_img_obj = Image.open(image_filepath)
|
||||
if img_obj is None:
|
||||
img_obj = _img_obj
|
||||
else:
|
||||
img_obj.alpha_composite(_img_obj)
|
||||
img_obj.save(output_filepath)
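# --- Illustrative usage sketch (editor's note, not part of this commit) ---
# File paths are hypothetical. Images later in the list are alpha-composited
# over the earlier ones and the result is saved to the output path.
composite_images(
    ["/tmp/pos_2.0001.png", "/tmp/pos_1.0001.png", "/tmp/pos_0.0001.png"],
    "/tmp/composite.0001.png",
)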
|
||||
|
||||
|
||||
def rename_filepaths_by_frame_start(
|
||||
filepaths_by_frame, range_start, range_end, new_frame_start
|
||||
):
|
||||
"""Change frames in filenames of finished images to new frame start."""
|
||||
# Skip if source first frame is same as destination first frame
|
||||
if range_start == new_frame_start:
|
||||
return
|
||||
|
||||
# Calculate frame end
|
||||
new_frame_end = range_end + (new_frame_start - range_start)
|
||||
# Create filename template
|
||||
filename_template = get_frame_filename_template(
|
||||
max(range_end, new_frame_end)
|
||||
)
|
||||
|
||||
# Use different ranges based on Mark In and output Frame Start values
|
||||
# - this is to make sure that filename renaming won't affect files that
|
||||
# are not renamed yet
|
||||
if range_start < new_frame_start:
|
||||
source_range = range(range_end, range_start - 1, -1)
|
||||
output_range = range(new_frame_end, new_frame_start - 1, -1)
|
||||
else:
|
||||
# This is less possible situation as frame start will be in most
|
||||
# cases higher than Mark In.
|
||||
source_range = range(range_start, range_end + 1)
|
||||
output_range = range(new_frame_start, new_frame_end + 1)
|
||||
|
||||
new_dst_filepaths = {}
|
||||
for src_frame, dst_frame in zip(source_range, output_range):
|
||||
src_filepath = filepaths_by_frame[src_frame]
|
||||
src_dirpath = os.path.dirname(src_filepath)
|
||||
dst_filename = filename_template.format(frame=dst_frame)
|
||||
dst_filepath = os.path.join(src_dirpath, dst_filename)
|
||||
|
||||
os.rename(src_filepath, dst_filepath)
|
||||
|
||||
new_dst_filepaths[dst_frame] = dst_filepath
|
||||
return new_dst_filepaths
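# --- Illustrative usage sketch (editor's note, not part of this commit) ---
# Shifting a rendered mark range 5-10 so the output sequence starts at 1001;
# the paths are hypothetical and must already exist on disk for the renames.
filepaths = {
    frame: "/tmp/render/{:0>4}.png".format(frame) for frame in range(5, 11)
}
renamed = rename_filepaths_by_frame_start(
    filepaths, range_start=5, range_end=10, new_frame_start=1001
)
# 'renamed' maps frames 1001-1006 to the renamed files.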
|
||||
102
openpype/hosts/tvpaint/plugins/load/load_workfile.py
Normal file
|
|
@ -0,0 +1,102 @@
|
|||
import getpass
|
||||
import os
|
||||
|
||||
from avalon.tvpaint import lib, pipeline, get_current_workfile_context
|
||||
from avalon import api, io
|
||||
from openpype.lib import (
|
||||
get_workfile_template_key_from_context,
|
||||
get_workdir_data
|
||||
)
|
||||
from openpype.api import Anatomy
|
||||
|
||||
|
||||
class LoadWorkfile(pipeline.Loader):
|
||||
"""Load workfile."""
|
||||
|
||||
families = ["workfile"]
|
||||
representations = ["tvpp"]
|
||||
|
||||
label = "Load Workfile"
|
||||
|
||||
def load(self, context, name, namespace, options):
|
||||
# Load context of current workfile as first thing
|
||||
# - which context and extension has
|
||||
host = api.registered_host()
|
||||
current_file = host.current_file()
|
||||
|
||||
context = get_current_workfile_context()
|
||||
|
||||
filepath = self.fname.replace("\\", "/")
|
||||
|
||||
if not os.path.exists(filepath):
|
||||
raise FileExistsError(
|
||||
"The loaded file does not exist. Try downloading it first."
|
||||
)
|
||||
|
||||
george_script = "tv_LoadProject '\"'\"{}\"'\"'".format(
|
||||
filepath
|
||||
)
|
||||
lib.execute_george_through_file(george_script)
|
||||
|
||||
# Save workfile.
|
||||
host_name = "tvpaint"
|
||||
asset_name = context.get("asset")
|
||||
task_name = context.get("task")
|
||||
# For cases when there is a workfile without context
|
||||
if not asset_name:
|
||||
asset_name = io.Session["AVALON_ASSET"]
|
||||
task_name = io.Session["AVALON_TASK"]
|
||||
|
||||
project_doc = io.find_one({
|
||||
"type": "project"
|
||||
})
|
||||
asset_doc = io.find_one({
|
||||
"type": "asset",
|
||||
"name": asset_name
|
||||
})
|
||||
project_name = project_doc["name"]
|
||||
|
||||
template_key = get_workfile_template_key_from_context(
|
||||
asset_name,
|
||||
task_name,
|
||||
host_name,
|
||||
project_name=project_name,
|
||||
dbcon=io
|
||||
)
|
||||
anatomy = Anatomy(project_name)
|
||||
|
||||
data = get_workdir_data(project_doc, asset_doc, task_name, host_name)
|
||||
data["root"] = anatomy.roots
|
||||
data["user"] = getpass.getuser()
|
||||
|
||||
template = anatomy.templates[template_key]["file"]
|
||||
|
||||
# Define saving file extension
|
||||
if current_file:
|
||||
# Match the extension of current file
|
||||
_, extension = os.path.splitext(current_file)
|
||||
else:
|
||||
# Fall back to the first extension supported for this host.
|
||||
extension = host.file_extensions()[0]
|
||||
|
||||
data["ext"] = extension
|
||||
|
||||
work_root = api.format_template_with_optional_keys(
|
||||
data, anatomy.templates[template_key]["folder"]
|
||||
)
|
||||
version = api.last_workfile_with_version(
|
||||
work_root, template, data, host.file_extensions()
|
||||
)[1]
|
||||
|
||||
if version is None:
|
||||
version = 1
|
||||
else:
|
||||
version += 1
|
||||
|
||||
data["version"] = version
|
||||
|
||||
path = os.path.join(
|
||||
work_root,
|
||||
api.format_template_with_optional_keys(data, template)
|
||||
)
|
||||
host.save_file(path)
@ -1,12 +1,18 @@
|
|||
import os
|
||||
import shutil
|
||||
import copy
|
||||
import tempfile
|
||||
|
||||
import pyblish.api
|
||||
from avalon.tvpaint import lib
|
||||
from openpype.hosts.tvpaint.api.lib import composite_images
|
||||
from PIL import Image, ImageDraw
|
||||
from openpype.hosts.tvpaint.lib import (
|
||||
calculate_layers_extraction_data,
|
||||
get_frame_filename_template,
|
||||
fill_reference_frames,
|
||||
composite_rendered_layers,
|
||||
rename_filepaths_by_frame_start
|
||||
)
|
||||
from PIL import Image
|
||||
|
||||
|
||||
class ExtractSequence(pyblish.api.Extractor):
|
||||
|
|
@ -111,14 +117,6 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
|
||||
# -------------------------------------------------------------------
|
||||
|
||||
filename_template = self._get_filename_template(
|
||||
# Use the biggest number
|
||||
max(mark_out, frame_end)
|
||||
)
|
||||
ext = os.path.splitext(filename_template)[1].replace(".", "")
|
||||
|
||||
self.log.debug("Using file template \"{}\"".format(filename_template))
|
||||
|
||||
# Save to staging dir
|
||||
output_dir = instance.data.get("stagingDir")
|
||||
if not output_dir:
|
||||
|
|
@ -133,30 +131,30 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
)
|
||||
|
||||
if instance.data["family"] == "review":
|
||||
output_filenames, thumbnail_fullpath = self.render_review(
|
||||
filename_template, output_dir, mark_in, mark_out,
|
||||
scene_bg_color
|
||||
result = self.render_review(
|
||||
output_dir, mark_in, mark_out, scene_bg_color
|
||||
)
|
||||
else:
|
||||
# Render output
|
||||
output_filenames, thumbnail_fullpath = self.render(
|
||||
filename_template, output_dir,
|
||||
mark_in, mark_out,
|
||||
filtered_layers
|
||||
result = self.render(
|
||||
output_dir, mark_in, mark_out, filtered_layers
|
||||
)
|
||||
|
||||
output_filepaths_by_frame_idx, thumbnail_fullpath = result
|
||||
|
||||
# Change scene frame Start back to previous value
|
||||
lib.execute_george("tv_startframe {}".format(scene_start_frame))
|
||||
|
||||
# Sequence of one frame
|
||||
if not output_filenames:
|
||||
if not output_filepaths_by_frame_idx:
|
||||
self.log.warning("Extractor did not create any output.")
|
||||
return
|
||||
|
||||
repre_files = self._rename_output_files(
|
||||
filename_template, output_dir,
|
||||
mark_in, mark_out,
|
||||
output_frame_start, output_frame_end
|
||||
output_filepaths_by_frame_idx,
|
||||
mark_in,
|
||||
mark_out,
|
||||
output_frame_start
|
||||
)
|
||||
|
||||
# Fill tags and new families
|
||||
|
|
@ -169,9 +167,11 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
if single_file:
|
||||
repre_files = repre_files[0]
|
||||
|
||||
# Extension is hardcoded
|
||||
# - changing extension would require code changes
|
||||
new_repre = {
|
||||
"name": ext,
|
||||
"ext": ext,
|
||||
"name": "png",
|
||||
"ext": "png",
|
||||
"files": repre_files,
|
||||
"stagingDir": output_dir,
|
||||
"tags": tags
|
||||
|
|
@ -206,69 +206,28 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
}
|
||||
instance.data["representations"].append(thumbnail_repre)
|
||||
|
||||
def _get_filename_template(self, frame_end):
|
||||
"""Get filetemplate for rendered files.
|
||||
|
||||
This is simple template contains `{frame}{ext}` for sequential outputs
|
||||
and `single_file{ext}` for single file output. Output is rendered to
|
||||
temporary folder so filename should not matter as integrator change
|
||||
them.
|
||||
"""
|
||||
frame_padding = 4
|
||||
frame_end_str_len = len(str(frame_end))
|
||||
if frame_end_str_len > frame_padding:
|
||||
frame_padding = frame_end_str_len
|
||||
|
||||
return "{{frame:0>{}}}".format(frame_padding) + ".png"
|
||||
|
||||
def _rename_output_files(
|
||||
self, filename_template, output_dir,
|
||||
mark_in, mark_out, output_frame_start, output_frame_end
|
||||
self, filepaths_by_frame, mark_in, mark_out, output_frame_start
|
||||
):
|
||||
# Use different ranges based on Mark In and output Frame Start values
|
||||
# - this is to make sure that filename renaming won't affect files that
|
||||
# are not renamed yet
|
||||
mark_start_is_less = bool(mark_in < output_frame_start)
|
||||
if mark_start_is_less:
|
||||
marks_range = range(mark_out, mark_in - 1, -1)
|
||||
frames_range = range(output_frame_end, output_frame_start - 1, -1)
|
||||
else:
|
||||
# This is less possible situation as frame start will be in most
|
||||
# cases higher than Mark In.
|
||||
marks_range = range(mark_in, mark_out + 1)
|
||||
frames_range = range(output_frame_start, output_frame_end + 1)
|
||||
new_filepaths_by_frame = rename_filepaths_by_frame_start(
|
||||
filepaths_by_frame, mark_in, mark_out, output_frame_start
|
||||
)
|
||||
|
||||
repre_filepaths = []
|
||||
for mark, frame in zip(marks_range, frames_range):
|
||||
new_filename = filename_template.format(frame=frame)
|
||||
new_filepath = os.path.join(output_dir, new_filename)
|
||||
repre_filenames = []
|
||||
for filepath in new_filepaths_by_frame.values():
|
||||
repre_filenames.append(os.path.basename(filepath))
|
||||
|
||||
repre_filepaths.append(new_filepath)
|
||||
if mark_in < output_frame_start:
|
||||
repre_filenames = list(reversed(repre_filenames))
|
||||
|
||||
if mark != frame:
|
||||
old_filename = filename_template.format(frame=mark)
|
||||
old_filepath = os.path.join(output_dir, old_filename)
|
||||
os.rename(old_filepath, new_filepath)
|
||||
|
||||
# Reverse repre files order if output
|
||||
if mark_start_is_less:
|
||||
repre_filepaths = list(reversed(repre_filepaths))
|
||||
|
||||
return [
|
||||
os.path.basename(path)
|
||||
for path in repre_filepaths
|
||||
]
|
||||
return repre_filenames
|
||||
|
||||
def render_review(
|
||||
self, filename_template, output_dir, mark_in, mark_out, scene_bg_color
|
||||
self, output_dir, mark_in, mark_out, scene_bg_color
|
||||
):
|
||||
""" Export images from TVPaint using `tv_savesequence` command.
|
||||
|
||||
Args:
|
||||
filename_template (str): Filename template of an output. Template
|
||||
should already contain extension. Template may contain only
|
||||
keyword argument `{frame}` or index argument (for same value).
|
||||
Extension in template must match `save_mode`.
|
||||
output_dir (str): Directory where files will be stored.
|
||||
mark_in (int): Starting frame index from which export will begin.
|
||||
mark_out (int): On which frame index export will end.
|
||||
|
|
@ -279,6 +238,8 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
tuple: With 2 items first is list of filenames second is path to
|
||||
thumbnail.
|
||||
"""
|
||||
filename_template = get_frame_filename_template(mark_out)
|
||||
|
||||
self.log.debug("Preparing data for rendering.")
|
||||
first_frame_filepath = os.path.join(
|
||||
output_dir,
|
||||
|
|
@ -313,12 +274,13 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
lib.execute_george_through_file("\n".join(george_script_lines))
|
||||
|
||||
first_frame_filepath = None
|
||||
output_filenames = []
|
||||
for frame in range(mark_in, mark_out + 1):
|
||||
filename = filename_template.format(frame=frame)
|
||||
output_filenames.append(filename)
|
||||
|
||||
output_filepaths_by_frame_idx = {}
|
||||
for frame_idx in range(mark_in, mark_out + 1):
|
||||
filename = filename_template.format(frame=frame_idx)
|
||||
filepath = os.path.join(output_dir, filename)
|
||||
|
||||
output_filepaths_by_frame_idx[frame_idx] = filepath
|
||||
|
||||
if not os.path.exists(filepath):
|
||||
raise AssertionError(
|
||||
"Output was not rendered. File was not found {}".format(
|
||||
|
|
@ -337,16 +299,12 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
source_img = source_img.convert("RGB")
|
||||
source_img.save(thumbnail_filepath)
|
||||
|
||||
return output_filenames, thumbnail_filepath
|
||||
return output_filepaths_by_frame_idx, thumbnail_filepath
|
||||
|
||||
def render(self, filename_template, output_dir, mark_in, mark_out, layers):
|
||||
def render(self, output_dir, mark_in, mark_out, layers):
|
||||
""" Export images from TVPaint.
|
||||
|
||||
Args:
|
||||
filename_template (str): Filename template of an output. Template
|
||||
should already contain extension. Template may contain only
|
||||
keyword argument `{frame}` or index argument (for same value).
|
||||
Extension in template must match `save_mode`.
|
||||
output_dir (str): Directory where files will be stored.
|
||||
mark_in (int): Starting frame index from which export will begin.
|
||||
mark_out (int): On which frame index export will end.
|
||||
|
|
@ -360,12 +318,15 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
|
||||
# Map layers by position
|
||||
layers_by_position = {}
|
||||
layers_by_id = {}
|
||||
layer_ids = []
|
||||
for layer in layers:
|
||||
layer_id = layer["layer_id"]
|
||||
position = layer["position"]
|
||||
layers_by_position[position] = layer
|
||||
layers_by_id[layer_id] = layer
|
||||
|
||||
layer_ids.append(layer["layer_id"])
|
||||
layer_ids.append(layer_id)
|
||||
|
||||
# Sort layer positions in reverse order
|
||||
sorted_positions = list(reversed(sorted(layers_by_position.keys())))
|
||||
|
|
@ -374,59 +335,45 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
|
||||
self.log.debug("Collecting pre/post behavior of individual layers.")
|
||||
behavior_by_layer_id = lib.get_layers_pre_post_behavior(layer_ids)
|
||||
|
||||
tmp_filename_template = "pos_{pos}." + filename_template
|
||||
|
||||
files_by_position = {}
|
||||
for position in sorted_positions:
|
||||
layer = layers_by_position[position]
|
||||
behavior = behavior_by_layer_id[layer["layer_id"]]
|
||||
|
||||
files_by_frames = self._render_layer(
|
||||
layer,
|
||||
tmp_filename_template,
|
||||
output_dir,
|
||||
behavior,
|
||||
mark_in,
|
||||
mark_out
|
||||
)
|
||||
if files_by_frames:
|
||||
files_by_position[position] = files_by_frames
|
||||
else:
|
||||
self.log.warning((
|
||||
"Skipped layer \"{}\". Probably out of Mark In/Out range."
|
||||
).format(layer["name"]))
|
||||
|
||||
if not files_by_position:
|
||||
layer_names = set(layer["name"] for layer in layers)
|
||||
joined_names = ", ".join(
|
||||
["\"{}\"".format(name) for name in layer_names]
|
||||
)
|
||||
self.log.warning(
|
||||
"Layers {} do not have content in range {} - {}".format(
|
||||
joined_names, mark_in, mark_out
|
||||
)
|
||||
)
|
||||
return [], None
|
||||
|
||||
output_filepaths = self._composite_files(
|
||||
files_by_position,
|
||||
mark_in,
|
||||
mark_out,
|
||||
filename_template,
|
||||
output_dir
|
||||
exposure_frames_by_layer_id = lib.get_layers_exposure_frames(
|
||||
layer_ids, layers
|
||||
)
|
||||
self._cleanup_tmp_files(files_by_position)
|
||||
|
||||
output_filenames = [
|
||||
os.path.basename(filepath)
|
||||
for filepath in output_filepaths
|
||||
]
|
||||
extraction_data_by_layer_id = calculate_layers_extraction_data(
|
||||
layers,
|
||||
exposure_frames_by_layer_id,
|
||||
behavior_by_layer_id,
|
||||
mark_in,
|
||||
mark_out
|
||||
)
|
||||
# Render layers
|
||||
filepaths_by_layer_id = {}
|
||||
for layer_id, render_data in extraction_data_by_layer_id.items():
|
||||
layer = layers_by_id[layer_id]
|
||||
filepaths_by_layer_id[layer_id] = self._render_layer(
|
||||
render_data, layer, output_dir
|
||||
)
|
||||
|
||||
# Prepare final filepaths where compositing should store result
|
||||
output_filepaths_by_frame = {}
|
||||
thumbnail_src_filepath = None
|
||||
if output_filepaths:
|
||||
thumbnail_src_filepath = output_filepaths[0]
|
||||
finale_template = get_frame_filename_template(mark_out)
|
||||
for frame_idx in range(mark_in, mark_out + 1):
|
||||
filename = finale_template.format(frame=frame_idx)
|
||||
|
||||
filepath = os.path.join(output_dir, filename)
|
||||
output_filepaths_by_frame[frame_idx] = filepath
|
||||
|
||||
if thumbnail_src_filepath is None:
|
||||
thumbnail_src_filepath = filepath
|
||||
|
||||
self.log.info("Started compositing of layer frames.")
|
||||
composite_rendered_layers(
|
||||
layers, filepaths_by_layer_id,
|
||||
mark_in, mark_out,
|
||||
output_filepaths_by_frame
|
||||
)
|
||||
|
||||
self.log.info("Compositing finished")
|
||||
thumbnail_filepath = None
|
||||
if thumbnail_src_filepath and os.path.exists(thumbnail_src_filepath):
|
||||
source_img = Image.open(thumbnail_src_filepath)
|
||||
|
|
@ -449,7 +396,7 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
).format(source_img.mode))
|
||||
source_img.save(thumbnail_filepath)
|
||||
|
||||
return output_filenames, thumbnail_filepath
|
||||
return output_filepaths_by_frame, thumbnail_filepath
|
||||
|
||||
def _get_review_bg_color(self):
|
||||
red = green = blue = 255
|
||||
|
|
@ -460,338 +407,43 @@ class ExtractSequence(pyblish.api.Extractor):
|
|||
red, green, blue = self.review_bg
|
||||
return (red, green, blue)
|
||||
|
||||
def _render_layer(
|
||||
self,
|
||||
layer,
|
||||
tmp_filename_template,
|
||||
output_dir,
|
||||
behavior,
|
||||
mark_in_index,
|
||||
mark_out_index
|
||||
):
|
||||
def _render_layer(self, render_data, layer, output_dir):
|
||||
frame_references = render_data["frame_references"]
|
||||
filenames_by_frame_index = render_data["filenames_by_frame_index"]
|
||||
|
||||
layer_id = layer["layer_id"]
|
||||
frame_start_index = layer["frame_start"]
|
||||
frame_end_index = layer["frame_end"]
|
||||
|
||||
pre_behavior = behavior["pre"]
|
||||
post_behavior = behavior["post"]
|
||||
|
||||
# Check if layer is before mark in
|
||||
if frame_end_index < mark_in_index:
|
||||
# Skip layer if post behavior is "none"
|
||||
if post_behavior == "none":
|
||||
return {}
|
||||
|
||||
# Check if layer is after mark out
|
||||
elif frame_start_index > mark_out_index:
|
||||
# Skip layer if pre behavior is "none"
|
||||
if pre_behavior == "none":
|
||||
return {}
|
||||
|
||||
exposure_frames = lib.get_exposure_frames(
|
||||
layer_id, frame_start_index, frame_end_index
|
||||
)
|
||||
|
||||
if frame_start_index not in exposure_frames:
|
||||
exposure_frames.append(frame_start_index)
|
||||
|
||||
layer_files_by_frame = {}
|
||||
george_script_lines = [
|
||||
"tv_layerset {}".format(layer_id),
|
||||
"tv_SaveMode \"PNG\""
|
||||
]
|
||||
layer_position = layer["position"]
|
||||
|
||||
for frame_idx in exposure_frames:
|
||||
filename = tmp_filename_template.format(
|
||||
pos=layer_position,
|
||||
frame=frame_idx
|
||||
)
|
||||
filepaths_by_frame = {}
|
||||
frames_to_render = []
|
||||
for frame_idx, ref_idx in frame_references.items():
|
||||
# None reference is skipped because it does not have a source
|
||||
if ref_idx is None:
|
||||
filepaths_by_frame[frame_idx] = None
|
||||
continue
|
||||
filename = filenames_by_frame_index[frame_idx]
|
||||
dst_path = "/".join([output_dir, filename])
|
||||
layer_files_by_frame[frame_idx] = os.path.normpath(dst_path)
|
||||
filepaths_by_frame[frame_idx] = dst_path
|
||||
if frame_idx != ref_idx:
|
||||
continue
|
||||
|
||||
frames_to_render.append(str(frame_idx))
|
||||
# Go to frame
|
||||
george_script_lines.append("tv_layerImage {}".format(frame_idx))
|
||||
# Store image to output
|
||||
george_script_lines.append("tv_saveimage \"{}\"".format(dst_path))
|
||||
|
||||
self.log.debug("Rendering Exposure frames {} of layer {} ({})".format(
|
||||
str(exposure_frames), layer_id, layer["name"]
|
||||
",".join(frames_to_render), layer_id, layer["name"]
|
||||
))
|
||||
# Let TVPaint render layer's image
|
||||
lib.execute_george_through_file("\n".join(george_script_lines))
|
||||
|
||||
# Fill frames between `frame_start_index` and `frame_end_index`
|
||||
self.log.debug((
|
||||
"Filling frames between first and last frame of layer ({} - {})."
|
||||
).format(frame_start_index + 1, frame_end_index + 1))
|
||||
self.log.debug("Filling frames not rendered frames.")
|
||||
fill_reference_frames(frame_references, filepaths_by_frame)
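A made-up example of the frame_references mapping consumed above (the values are illustrative only):

# Hypothetical frame_references for a four frame range:
frame_references = {0: 0, 1: 0, 2: 2, 3: None}
# Frames 0 and 2 reference themselves, so they are rendered by TVPaint.
# Frame 1 reuses frame 0's image (copied by fill_reference_frames).
# Frame 3 has no source at all and keeps a None filepath.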
|
||||
|
||||
_debug_filled_frames = []
|
||||
prev_filepath = None
|
||||
for frame_idx in range(frame_start_index, frame_end_index + 1):
|
||||
if frame_idx in layer_files_by_frame:
|
||||
prev_filepath = layer_files_by_frame[frame_idx]
|
||||
continue
|
||||
|
||||
if prev_filepath is None:
|
||||
raise ValueError("BUG: First frame of layer was not rendered!")
|
||||
_debug_filled_frames.append(frame_idx)
|
||||
filename = tmp_filename_template.format(
|
||||
pos=layer_position,
|
||||
frame=frame_idx
|
||||
)
|
||||
new_filepath = "/".join([output_dir, filename])
|
||||
self._copy_image(prev_filepath, new_filepath)
|
||||
layer_files_by_frame[frame_idx] = new_filepath
|
||||
|
||||
self.log.debug("Filled frames {}".format(str(_debug_filled_frames)))
|
||||
|
||||
# Fill frames by pre/post behavior of layer
|
||||
self.log.debug((
|
||||
"Completing image sequence of layer by pre/post behavior."
|
||||
" PRE: {} | POST: {}"
|
||||
).format(pre_behavior, post_behavior))
|
||||
|
||||
# Pre behavior
|
||||
self._fill_frame_by_pre_behavior(
|
||||
layer,
|
||||
pre_behavior,
|
||||
mark_in_index,
|
||||
layer_files_by_frame,
|
||||
tmp_filename_template,
|
||||
output_dir
|
||||
)
|
||||
self._fill_frame_by_post_behavior(
|
||||
layer,
|
||||
post_behavior,
|
||||
mark_out_index,
|
||||
layer_files_by_frame,
|
||||
tmp_filename_template,
|
||||
output_dir
|
||||
)
|
||||
return layer_files_by_frame
|
||||
|
||||
def _fill_frame_by_pre_behavior(
|
||||
self,
|
||||
layer,
|
||||
pre_behavior,
|
||||
mark_in_index,
|
||||
layer_files_by_frame,
|
||||
filename_template,
|
||||
output_dir
|
||||
):
|
||||
layer_position = layer["position"]
|
||||
frame_start_index = layer["frame_start"]
|
||||
frame_end_index = layer["frame_end"]
|
||||
frame_count = frame_end_index - frame_start_index + 1
|
||||
if mark_in_index >= frame_start_index:
|
||||
self.log.debug((
|
||||
"Skipping pre-behavior."
|
||||
" All frames after Mark In are rendered."
|
||||
))
|
||||
return
|
||||
|
||||
if pre_behavior == "none":
|
||||
# Empty frames are handled during `_composite_files`
|
||||
pass
|
||||
|
||||
elif pre_behavior == "hold":
|
||||
# Keep first frame for whole time
|
||||
eq_frame_filepath = layer_files_by_frame[frame_start_index]
|
||||
for frame_idx in range(mark_in_index, frame_start_index):
|
||||
filename = filename_template.format(
|
||||
pos=layer_position,
|
||||
frame=frame_idx
|
||||
)
|
||||
new_filepath = "/".join([output_dir, filename])
|
||||
self._copy_image(eq_frame_filepath, new_filepath)
|
||||
layer_files_by_frame[frame_idx] = new_filepath
|
||||
|
||||
elif pre_behavior in ("loop", "repeat"):
|
||||
# Loop backwards from last frame of layer
|
||||
for frame_idx in reversed(range(mark_in_index, frame_start_index)):
|
||||
eq_frame_idx_offset = (
|
||||
(frame_end_index - frame_idx) % frame_count
|
||||
)
|
||||
eq_frame_idx = frame_end_index - eq_frame_idx_offset
|
||||
eq_frame_filepath = layer_files_by_frame[eq_frame_idx]
|
||||
|
||||
filename = filename_template.format(
|
||||
pos=layer_position,
|
||||
frame=frame_idx
|
||||
)
|
||||
new_filepath = "/".join([output_dir, filename])
|
||||
self._copy_image(eq_frame_filepath, new_filepath)
|
||||
layer_files_by_frame[frame_idx] = new_filepath
|
||||
|
||||
elif pre_behavior == "pingpong":
|
||||
half_seq_len = frame_count - 1
|
||||
seq_len = half_seq_len * 2
|
||||
for frame_idx in reversed(range(mark_in_index, frame_start_index)):
|
||||
eq_frame_idx_offset = (frame_start_index - frame_idx) % seq_len
|
||||
if eq_frame_idx_offset > half_seq_len:
|
||||
eq_frame_idx_offset = (seq_len - eq_frame_idx_offset)
|
||||
eq_frame_idx = frame_start_index + eq_frame_idx_offset
|
||||
|
||||
eq_frame_filepath = layer_files_by_frame[eq_frame_idx]
|
||||
|
||||
filename = filename_template.format(
|
||||
pos=layer_position,
|
||||
frame=frame_idx
|
||||
)
|
||||
new_filepath = "/".join([output_dir, filename])
|
||||
self._copy_image(eq_frame_filepath, new_filepath)
|
||||
layer_files_by_frame[frame_idx] = new_filepath
|
||||
|
||||
def _fill_frame_by_post_behavior(
|
||||
self,
|
||||
layer,
|
||||
post_behavior,
|
||||
mark_out_index,
|
||||
layer_files_by_frame,
|
||||
filename_template,
|
||||
output_dir
|
||||
):
|
||||
layer_position = layer["position"]
|
||||
frame_start_index = layer["frame_start"]
|
||||
frame_end_index = layer["frame_end"]
|
||||
frame_count = frame_end_index - frame_start_index + 1
|
||||
if mark_out_index <= frame_end_index:
|
||||
self.log.debug((
|
||||
"Skipping post-behavior."
|
||||
" All frames up to Mark Out are rendered."
|
||||
))
|
||||
return
|
||||
|
||||
if post_behavior == "none":
|
||||
# Empty frames are handled during `_composite_files`
|
||||
pass
|
||||
|
||||
elif post_behavior == "hold":
|
||||
# Keep first frame for whole time
|
||||
eq_frame_filepath = layer_files_by_frame[frame_end_index]
|
||||
for frame_idx in range(frame_end_index + 1, mark_out_index + 1):
|
||||
filename = filename_template.format(
|
||||
pos=layer_position,
|
||||
frame=frame_idx
|
||||
)
|
||||
new_filepath = "/".join([output_dir, filename])
|
||||
self._copy_image(eq_frame_filepath, new_filepath)
|
||||
layer_files_by_frame[frame_idx] = new_filepath
|
||||
|
||||
elif post_behavior in ("loop", "repeat"):
|
||||
# Loop backwards from last frame of layer
|
||||
for frame_idx in range(frame_end_index + 1, mark_out_index + 1):
|
||||
eq_frame_idx = frame_idx % frame_count
|
||||
eq_frame_filepath = layer_files_by_frame[eq_frame_idx]
|
||||
|
||||
filename = filename_template.format(
|
||||
pos=layer_position,
|
||||
frame=frame_idx
|
||||
)
|
||||
new_filepath = "/".join([output_dir, filename])
|
||||
self._copy_image(eq_frame_filepath, new_filepath)
|
||||
layer_files_by_frame[frame_idx] = new_filepath
|
||||
|
||||
elif post_behavior == "pingpong":
|
||||
half_seq_len = frame_count - 1
|
||||
seq_len = half_seq_len * 2
|
||||
for frame_idx in range(frame_end_index + 1, mark_out_index + 1):
|
||||
eq_frame_idx_offset = (frame_idx - frame_end_index) % seq_len
|
||||
if eq_frame_idx_offset > half_seq_len:
|
||||
eq_frame_idx_offset = seq_len - eq_frame_idx_offset
|
||||
eq_frame_idx = frame_end_index - eq_frame_idx_offset
|
||||
|
||||
eq_frame_filepath = layer_files_by_frame[eq_frame_idx]
|
||||
|
||||
filename = filename_template.format(
|
||||
pos=layer_position,
|
||||
frame=frame_idx
|
||||
)
|
||||
new_filepath = "/".join([output_dir, filename])
|
||||
self._copy_image(eq_frame_filepath, new_filepath)
|
||||
layer_files_by_frame[frame_idx] = new_filepath
|
||||
|
||||
def _composite_files(
|
||||
self, files_by_position, frame_start, frame_end,
|
||||
filename_template, output_dir
|
||||
):
|
||||
"""Composite frames when more that one layer was exported.
|
||||
|
||||
This method is used when more than one layer is rendered out and the
|
||||
output should be a composition of each frame of the rendered layers.
|
||||
Missing frames are filled with transparent images.
|
||||
"""
|
||||
self.log.debug("Preparing files for compisiting.")
|
||||
# Prepare paths to images by frames into list where are stored
|
||||
# in order of compositing.
|
||||
images_by_frame = {}
|
||||
for frame_idx in range(frame_start, frame_end + 1):
|
||||
images_by_frame[frame_idx] = []
|
||||
for position in sorted(files_by_position.keys(), reverse=True):
|
||||
position_data = files_by_position[position]
|
||||
if frame_idx in position_data:
|
||||
filepath = position_data[frame_idx]
|
||||
images_by_frame[frame_idx].append(filepath)
|
||||
|
||||
output_filepaths = []
|
||||
missing_frame_paths = []
|
||||
random_frame_path = None
|
||||
for frame_idx in sorted(images_by_frame.keys()):
|
||||
image_filepaths = images_by_frame[frame_idx]
|
||||
output_filename = filename_template.format(frame=frame_idx)
|
||||
output_filepath = os.path.join(output_dir, output_filename)
|
||||
output_filepaths.append(output_filepath)
|
||||
|
||||
# Store information about missing frame and skip
|
||||
if not image_filepaths:
|
||||
missing_frame_paths.append(output_filepath)
|
||||
continue
|
||||
|
||||
# Just rename the file if there is no need for compositing
|
||||
if len(image_filepaths) == 1:
|
||||
os.rename(image_filepaths[0], output_filepath)
|
||||
|
||||
# Composite images
|
||||
else:
|
||||
composite_images(image_filepaths, output_filepath)
|
||||
|
||||
# Store path of random output image that will 100% exist after all
|
||||
# processing, as a mockup for missing frames
|
||||
if random_frame_path is None:
|
||||
random_frame_path = output_filepath
|
||||
|
||||
self.log.debug(
|
||||
"Creating transparent images for frames without render {}.".format(
|
||||
str(missing_frame_paths)
|
||||
)
|
||||
)
|
||||
# Fill the sequence with transparent frames
|
||||
transparent_filepath = None
|
||||
for filepath in missing_frame_paths:
|
||||
if transparent_filepath is None:
|
||||
img_obj = Image.open(random_frame_path)
|
||||
painter = ImageDraw.Draw(img_obj)
|
||||
painter.rectangle((0, 0, *img_obj.size), fill=(0, 0, 0, 0))
|
||||
img_obj.save(filepath)
|
||||
transparent_filepath = filepath
|
||||
else:
|
||||
self._copy_image(transparent_filepath, filepath)
|
||||
return output_filepaths
|
||||
|
||||
def _cleanup_tmp_files(self, files_by_position):
|
||||
"""Remove temporary files that were used for compositing."""
|
||||
for data in files_by_position.values():
|
||||
for filepath in data.values():
|
||||
if os.path.exists(filepath):
|
||||
os.remove(filepath)
|
||||
|
||||
def _copy_image(self, src_path, dst_path):
|
||||
"""Create a copy of an image.
|
||||
|
||||
This was added to make it easier to change the copy method.
|
||||
"""
|
||||
# Create hardlink of image instead of copying if possible
|
||||
if hasattr(os, "link"):
|
||||
os.link(src_path, dst_path)
|
||||
else:
|
||||
shutil.copy(src_path, dst_path)
|
||||
return filepaths_by_frame
|
||||
|
|
|
|||
21
openpype/hosts/tvpaint/worker/__init__.py
Normal file
|
|
@ -0,0 +1,21 @@
|
|||
from .worker_job import (
|
||||
JobFailed,
|
||||
ExecuteSimpleGeorgeScript,
|
||||
ExecuteGeorgeScript,
|
||||
CollectSceneData,
|
||||
SenderTVPaintCommands,
|
||||
ProcessTVPaintCommands
|
||||
)
|
||||
|
||||
from .worker import main
|
||||
|
||||
__all__ = (
|
||||
"JobFailed",
|
||||
"ExecuteSimpleGeorgeScript",
|
||||
"ExecuteGeorgeScript",
|
||||
"CollectSceneData",
|
||||
"SenderTVPaintCommands",
|
||||
"ProcessTVPaintCommands",
|
||||
|
||||
"main"
|
||||
)
|
||||
133
openpype/hosts/tvpaint/worker/worker.py
Normal file
|
|
@ -0,0 +1,133 @@
|
|||
import signal
|
||||
import time
|
||||
import asyncio
|
||||
|
||||
from avalon.tvpaint.communication_server import (
|
||||
BaseCommunicator,
|
||||
CommunicationWrapper
|
||||
)
|
||||
from openpype_modules.job_queue.job_workers import WorkerJobsConnection
|
||||
|
||||
from .worker_job import ProcessTVPaintCommands
|
||||
|
||||
|
||||
class TVPaintWorkerCommunicator(BaseCommunicator):
|
||||
"""Modified commuicator which cares about processing jobs.
|
||||
|
||||
Received jobs are sent to TVPaint by parsing 'ProcessTVPaintCommands'.
|
||||
"""
|
||||
def __init__(self, server_url):
|
||||
super().__init__()
|
||||
|
||||
self.return_code = 1
|
||||
self._server_url = server_url
|
||||
self._worker_connection = None
|
||||
|
||||
def _start_webserver(self):
|
||||
"""Create connection to workers server before TVPaint server."""
|
||||
loop = self.websocket_server.loop
|
||||
self._worker_connection = WorkerJobsConnection(
|
||||
self._server_url, "tvpaint", loop
|
||||
)
|
||||
asyncio.ensure_future(
|
||||
self._worker_connection.main_loop(register_worker=False),
|
||||
loop=loop
|
||||
)
|
||||
|
||||
super()._start_webserver()
|
||||
|
||||
def _on_client_connect(self, *args, **kwargs):
|
||||
super()._on_client_connect(*args, **kwargs)
|
||||
# Register as "ready to work" worker
|
||||
self._worker_connection.register_as_worker()
|
||||
|
||||
def stop(self):
|
||||
"""Stop worker connection and TVPaint server."""
|
||||
self._worker_connection.stop()
|
||||
self.return_code = 0
|
||||
super().stop()
|
||||
|
||||
@property
|
||||
def current_job(self):
|
||||
"""Retrieve job which should be processed."""
|
||||
if self._worker_connection:
|
||||
return self._worker_connection.current_job
|
||||
return None
|
||||
|
||||
def _check_process(self):
|
||||
if self.process is None:
|
||||
return True
|
||||
|
||||
if self.process.poll() is not None:
|
||||
asyncio.ensure_future(
|
||||
self._worker_connection.disconnect(),
|
||||
loop=self.websocket_server.loop
|
||||
)
|
||||
self._exit()
|
||||
return False
|
||||
return True
|
||||
|
||||
def _process_job(self):
|
||||
job = self.current_job
|
||||
if job is None:
|
||||
return
|
||||
|
||||
# Prepare variables used for sending
|
||||
success = False
|
||||
message = "Unknown function"
|
||||
data = None
|
||||
job_data = job["data"]
|
||||
workfile = job_data["workfile"]
|
||||
# Currently can process only "commands" function
|
||||
if job_data.get("function") == "commands":
|
||||
try:
|
||||
commands = ProcessTVPaintCommands(
|
||||
workfile, job_data["commands"], self
|
||||
)
|
||||
commands.execute()
|
||||
data = commands.response_data()
|
||||
success = True
|
||||
message = "Executed"
|
||||
|
||||
except Exception as exc:
|
||||
message = "Error on worker: {}".format(str(exc))
|
||||
|
||||
self._worker_connection.finish_job(success, message, data)
|
||||
|
||||
def main_loop(self):
|
||||
"""Main loop where jobs are processed.
|
||||
|
||||
Server is stopped by killing this process or TVPaint process.
|
||||
"""
|
||||
while self.server_is_running:
|
||||
if self._check_process():
|
||||
self._process_job()
|
||||
time.sleep(1)
|
||||
|
||||
return self.return_code
|
||||
|
||||
|
||||
def _start_tvpaint(tvpaint_executable_path, server_url):
|
||||
communicator = TVPaintWorkerCommunicator(server_url)
|
||||
CommunicationWrapper.set_communicator(communicator)
|
||||
communicator.launch([tvpaint_executable_path])
|
||||
|
||||
|
||||
def main(tvpaint_executable_path, server_url):
|
||||
# Register terminal signal handler
|
||||
def signal_handler(*_args):
|
||||
print("Termination signal received. Stopping.")
|
||||
if CommunicationWrapper.communicator is not None:
|
||||
CommunicationWrapper.communicator.stop()
|
||||
|
||||
signal.signal(signal.SIGINT, signal_handler)
|
||||
signal.signal(signal.SIGTERM, signal_handler)
|
||||
|
||||
_start_tvpaint(tvpaint_executable_path, server_url)
|
||||
|
||||
communicator = CommunicationWrapper.communicator
|
||||
if communicator is None:
|
||||
print("Communicator is not set")
|
||||
return 1
|
||||
|
||||
return communicator.main_loop()
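A minimal sketch of how this entry point might be launched; the executable path and server URL arguments below are placeholders, not values from this commit:

# Hypothetical launcher script; arguments are placeholders.
import sys
from openpype.hosts.tvpaint.worker import main

if __name__ == "__main__":
    tvpaint_executable = sys.argv[1]  # e.g. path to the TVPaint binary
    job_server_url = sys.argv[2]      # e.g. address of the job_queue server
    sys.exit(main(tvpaint_executable, job_server_url))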
|
||||
537
openpype/hosts/tvpaint/worker/worker_job.py
Normal file
|
|
@ -0,0 +1,537 @@
|
|||
import os
|
||||
import tempfile
|
||||
import inspect
|
||||
import copy
|
||||
import json
|
||||
import time
|
||||
from uuid import uuid4
|
||||
from abc import ABCMeta, abstractmethod, abstractproperty
|
||||
|
||||
import six
|
||||
|
||||
from openpype.api import PypeLogger
|
||||
from openpype.modules import ModulesManager
|
||||
|
||||
|
||||
TMP_FILE_PREFIX = "opw_tvp_"
|
||||
|
||||
|
||||
class JobFailed(Exception):
|
||||
"""Raised when job was sent and finished unsuccessfully."""
|
||||
def __init__(self, job_status):
|
||||
job_state = job_status["state"]
|
||||
job_message = job_status["message"] or "Unknown issue"
|
||||
error_msg = (
|
||||
"Job didn't finish properly."
|
||||
" Job state: \"{}\" | Job message: \"{}\""
|
||||
).format(job_state, job_message)
|
||||
|
||||
self.job_status = job_status
|
||||
|
||||
super().__init__(error_msg)
|
||||
|
||||
|
||||
@six.add_metaclass(ABCMeta)
|
||||
class BaseCommand:
|
||||
"""Abstract TVPaint command which can be executed through worker.
|
||||
|
||||
Each command must have a unique name and implement 'execute' and
|
||||
'from_existing' methods.
|
||||
|
||||
Each command also has an id which is created on command creation.
|
||||
|
||||
The idea is that a command is just a data container on the sender side, sent
|
||||
through the server to a worker where commands are replicated one by one, executed and
|
||||
results sent back to the sender through the server.
|
||||
"""
|
||||
@abstractproperty
|
||||
def name(self):
|
||||
"""Command name (must be unique)."""
|
||||
pass
|
||||
|
||||
def __init__(self, data=None):
|
||||
if data is None:
|
||||
data = {}
|
||||
else:
|
||||
data = copy.deepcopy(data)
|
||||
|
||||
# Use 'id' from data when replicating on process side
|
||||
command_id = data.get("id")
|
||||
if command_id is None:
|
||||
command_id = str(uuid4())
|
||||
data["id"] = command_id
|
||||
data["command"] = self.name
|
||||
|
||||
self._parent = None
|
||||
self._result = None
|
||||
self._command_data = data
|
||||
self._done = False
|
||||
|
||||
def job_queue_root(self):
|
||||
"""Access to job queue root.
|
||||
|
||||
Job queue root is a shared access point to files shared across senders
|
||||
and workers.
|
||||
"""
|
||||
if self._parent is None:
|
||||
return None
|
||||
return self._parent.job_queue_root()
|
||||
|
||||
def set_parent(self, parent):
|
||||
self._parent = parent
|
||||
|
||||
@property
|
||||
def id(self):
|
||||
"""Command id."""
|
||||
return self._command_data["id"]
|
||||
|
||||
@property
|
||||
def parent(self):
|
||||
"""Parent of command expected type of 'TVPaintCommands'."""
|
||||
return self._parent
|
||||
|
||||
@property
|
||||
def communicator(self):
|
||||
"""TVPaint communicator.
|
||||
|
||||
Available only on worker side.
|
||||
"""
|
||||
return self._parent.communicator
|
||||
|
||||
@property
|
||||
def done(self):
|
||||
"""Is command done."""
|
||||
return self._done
|
||||
|
||||
def set_done(self):
|
||||
"""Change state of done."""
|
||||
self._done = True
|
||||
|
||||
def set_result(self, result):
|
||||
"""Set result of executed command."""
|
||||
self._result = result
|
||||
|
||||
def result(self):
|
||||
"""Result of command."""
|
||||
return copy.deepcopy(self._result)
|
||||
|
||||
def response_data(self):
|
||||
"""Data send as response to sender."""
|
||||
return {
|
||||
"id": self.id,
|
||||
"result": self._result,
|
||||
"done": self._done
|
||||
}
|
||||
|
||||
def command_data(self):
|
||||
"""Raw command data."""
|
||||
return copy.deepcopy(self._command_data)
|
||||
|
||||
@abstractmethod
|
||||
def execute(self):
|
||||
"""Execute command on worker side."""
|
||||
pass
|
||||
|
||||
@classmethod
|
||||
@abstractmethod
|
||||
def from_existing(cls, data):
|
||||
"""Recreate object based on passed data."""
|
||||
pass
|
||||
|
||||
def execute_george(self, george_script):
|
||||
"""Execute george script in TVPaint."""
|
||||
return self.parent.execute_george(george_script)
|
||||
|
||||
def execute_george_through_file(self, george_script):
|
||||
"""Execute george script through temp file in TVPaint."""
|
||||
return self.parent.execute_george_through_file(george_script)
|
||||
|
||||
|
||||
class ExecuteSimpleGeorgeScript(BaseCommand):
|
||||
"""Execute simple george script in TVPaint.
|
||||
|
||||
Args:
|
||||
script(str): Script that will be executed.
|
||||
"""
|
||||
name = "execute_george_simple"
|
||||
|
||||
def __init__(self, script, data=None):
|
||||
data = data or {}
|
||||
data["script"] = script
|
||||
self._script = script
|
||||
super().__init__(data)
|
||||
|
||||
def execute(self):
|
||||
self._result = self.execute_george(self._script)
|
||||
|
||||
@classmethod
|
||||
def from_existing(cls, data):
|
||||
script = data.pop("script")
|
||||
return cls(script, data)
|
||||
|
||||
|
||||
class ExecuteGeorgeScript(BaseCommand):
|
||||
"""Execute multiline george script in TVPaint.
|
||||
|
||||
Args:
|
||||
script_lines(list): Lines that will be executed in george script
|
||||
through temp george file.
|
||||
tmp_file_keys(list): List of formatting keys in george script that
|
||||
require replacement with a path to a temp file where the result will be
|
||||
stored. The content of the file is stored in the result under the key.
|
||||
root_dir_key(str): Formatting key that will be replaced in george
|
||||
script with job queue root which can be different on worker side.
|
||||
data(dict): Raw data about command.
|
||||
"""
|
||||
name = "execute_george_through_file"
|
||||
|
||||
def __init__(
|
||||
self, script_lines, tmp_file_keys=None, root_dir_key=None, data=None
|
||||
):
|
||||
data = data or {}
|
||||
if not tmp_file_keys:
|
||||
tmp_file_keys = data.get("tmp_file_keys") or []
|
||||
|
||||
data["script_lines"] = script_lines
|
||||
data["tmp_file_keys"] = tmp_file_keys
|
||||
data["root_dir_key"] = root_dir_key
|
||||
self._script_lines = script_lines
|
||||
self._tmp_file_keys = tmp_file_keys
|
||||
self._root_dir_key = root_dir_key
|
||||
super().__init__(data)
|
||||
|
||||
def execute(self):
|
||||
filepath_by_key = {}
|
||||
script = self._script_lines
|
||||
if isinstance(script, list):
|
||||
script = "\n".join(script)
|
||||
|
||||
# Replace temporary files in george script
|
||||
for key in self._tmp_file_keys:
|
||||
output_file = tempfile.NamedTemporaryFile(
|
||||
mode="w", prefix=TMP_FILE_PREFIX, suffix=".txt", delete=False
|
||||
)
|
||||
output_file.close()
|
||||
format_key = "{" + key + "}"
|
||||
output_path = output_file.name.replace("\\", "/")
|
||||
script = script.replace(format_key, output_path)
|
||||
filepath_by_key[key] = output_path
|
||||
|
||||
# Replace job queue root in script
|
||||
if self._root_dir_key:
|
||||
job_queue_root = self.job_queue_root()
|
||||
format_key = "{" + self._root_dir_key + "}"
|
||||
script = script.replace(
|
||||
format_key, job_queue_root.replace("\\", "/")
|
||||
)
|
||||
|
||||
# Execute the script
|
||||
self.execute_george_through_file(script)
|
||||
|
||||
# Store result of temporary files
|
||||
result = {}
|
||||
for key, filepath in filepath_by_key.items():
|
||||
with open(filepath, "r") as stream:
|
||||
data = stream.read()
|
||||
result[key] = data
|
||||
os.remove(filepath)
|
||||
|
||||
self._result = result
|
||||
|
||||
@classmethod
|
||||
def from_existing(cls, data):
|
||||
"""Recreate the object from data."""
|
||||
script_lines = data.pop("script_lines")
|
||||
tmp_file_keys = data.pop("tmp_file_keys", None)
|
||||
root_dir_key = data.pop("root_dir_key", None)
|
||||
return cls(script_lines, tmp_file_keys, root_dir_key, data)
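A sketch of constructing this command on the sender side; the george calls reuse lines that appear elsewhere in this commit, while the combination shown is only illustrative:

# Illustrative construction; only the ExecuteGeorgeScript signature and the
# george calls shown are taken from this commit.
from openpype.hosts.tvpaint.worker import ExecuteGeorgeScript

command = ExecuteGeorgeScript(
    [
        "tv_SaveMode \"PNG\"",
        "tv_startframe 0"
    ],
    root_dir_key="jobs_root"
)
# 'tmp_file_keys' could additionally map formatting keys to temp files whose
# content is read back into the command result on the worker.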
|
||||
|
||||
|
||||
class CollectSceneData(BaseCommand):
|
||||
"""Helper command which will collect all usefull info about workfile.
|
||||
|
||||
Result is a dictionary with all layers data, exposure frames by layer ids,
|
||||
pre/post behavior of layers by their ids, group information and scene data.
|
||||
"""
|
||||
name = "collect_scene_data"
|
||||
|
||||
def execute(self):
|
||||
from avalon.tvpaint.lib import (
|
||||
get_layers_data,
|
||||
get_groups_data,
|
||||
get_layers_pre_post_behavior,
|
||||
get_layers_exposure_frames,
|
||||
get_scene_data
|
||||
)
|
||||
|
||||
groups_data = get_groups_data(communicator=self.communicator)
|
||||
layers_data = get_layers_data(communicator=self.communicator)
|
||||
layer_ids = [
|
||||
layer_data["layer_id"]
|
||||
for layer_data in layers_data
|
||||
]
|
||||
pre_post_beh_by_layer_id = get_layers_pre_post_behavior(
|
||||
layer_ids, communicator=self.communicator
|
||||
)
|
||||
exposure_frames_by_layer_id = get_layers_exposure_frames(
|
||||
layer_ids, layers_data, communicator=self.communicator
|
||||
)
|
||||
|
||||
self._result = {
|
||||
"layers_data": layers_data,
|
||||
"exposure_frames_by_layer_id": exposure_frames_by_layer_id,
|
||||
"pre_post_beh_by_layer_id": pre_post_beh_by_layer_id,
|
||||
"groups_data": groups_data,
|
||||
"scene_data": get_scene_data(self.communicator)
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def from_existing(cls, data):
|
||||
return cls(data)
|
||||
|
||||
|
||||
@six.add_metaclass(ABCMeta)
|
||||
class TVPaintCommands:
|
||||
"""Wrapper around TVPaint commands to be able send multiple commands.
|
||||
|
||||
May send one or multiple commands at once. Also gives API access
|
||||
to command info.
|
||||
|
||||
Base for sender and receiver which extend the logic for their
|
||||
purposes. One of the differences is the preparation of the workfile path.
|
||||
|
||||
Args:
|
||||
workfile(str): Path to workfile.
|
||||
job_queue_module(JobQueueModule): Object of OpenPype module JobQueue.
|
||||
"""
|
||||
def __init__(self, workfile, job_queue_module=None):
|
||||
self._log = None
|
||||
self._commands = []
|
||||
self._command_classes_by_name = None
|
||||
if job_queue_module is None:
|
||||
manager = ModulesManager()
|
||||
job_queue_module = manager.modules_by_name["job_queue"]
|
||||
self._job_queue_module = job_queue_module
|
||||
|
||||
self._workfile = self._prepare_workfile(workfile)
|
||||
|
||||
@abstractmethod
|
||||
def _prepare_workfile(self, workfile):
|
||||
"""Modification of workfile path on initialization to match platorm."""
|
||||
pass
|
||||
|
||||
def job_queue_root(self):
|
||||
"""Job queue root for current platform using current settings."""
|
||||
return self._job_queue_module.get_jobs_root_from_settings()
|
||||
|
||||
@property
|
||||
def log(self):
|
||||
"""Access to logger object."""
|
||||
if self._log is None:
|
||||
self._log = PypeLogger.get_logger(self.__class__.__name__)
|
||||
return self._log
|
||||
|
||||
@property
|
||||
def classes_by_name(self):
|
||||
"""Prepare commands classes for validation and recreation of commands.
|
||||
|
||||
It is expected that all commands are defined in this python file so
|
||||
we're looking for all implementations of BaseCommand in globals.
|
||||
"""
|
||||
if self._command_classes_by_name is None:
|
||||
command_classes_by_name = {}
|
||||
for attr in globals().values():
|
||||
if (
|
||||
not inspect.isclass(attr)
|
||||
or not issubclass(attr, BaseCommand)
|
||||
or attr is BaseCommand
|
||||
):
|
||||
continue
|
||||
|
||||
if inspect.isabstract(attr):
|
||||
self.log.debug(
|
||||
"Skipping abstract class {}".format(attr.__name__)
|
||||
)
continue
|
||||
command_classes_by_name[attr.name] = attr
|
||||
self._command_classes_by_name = command_classes_by_name
|
||||
|
||||
return self._command_classes_by_name
|
||||
|
||||
def add_command(self, command):
|
||||
"""Add command to process."""
|
||||
command.set_parent(self)
|
||||
self._commands.append(command)
|
||||
|
||||
def result(self):
|
||||
"""Result of commands in list in which they were processed."""
|
||||
return [
|
||||
command.result()
|
||||
for command in self._commands
|
||||
]
|
||||
|
||||
def response_data(self):
|
||||
"""Data which should be send from worker."""
|
||||
return [
|
||||
command.response_data()
|
||||
for command in self._commands
|
||||
]
|
||||
|
||||
|
||||
class SenderTVPaintCommands(TVPaintCommands):
|
||||
"""Sender implementation of TVPaint Commands."""
|
||||
def _prepare_workfile(self, workfile):
|
||||
"""Remove job queue root from workfile path.
|
||||
|
||||
It is expected that the worker will add its root before the passed workfile.
|
||||
"""
|
||||
new_workfile = workfile.replace("\\", "/")
|
||||
job_queue_root = self.job_queue_root().replace("\\", "/")
|
||||
if job_queue_root not in new_workfile:
|
||||
raise ValueError((
|
||||
"Workfile is not located in JobQueue root."
|
||||
" Workfile path: \"{}\". JobQueue root: \"{}\""
|
||||
).format(workfile, job_queue_root))
|
||||
return new_workfile.replace(job_queue_root, "")
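A small sketch of the path handling with made-up roots (the worker later prepends its own root again):

# Hypothetical paths; only the strip/prepend behaviour mirrors this commit.
job_queue_root = "C:/jobs"
workfile = "C:/jobs/batch_01/scene.tvpp"

relative = workfile.replace(job_queue_root, "")
print(relative)  # -> /batch_01/scene.tvpp

worker_root = "/mnt/jobs"
on_worker = "/".join([worker_root, relative]).replace("//", "/")
print(on_worker)  # -> /mnt/jobs/batch_01/scene.tvpp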
|
||||
|
||||
def commands_data(self):
|
||||
"""Commands data to be able recreate them."""
|
||||
return [
|
||||
command.command_data()
|
||||
for command in self._commands
|
||||
]
|
||||
|
||||
def to_job_data(self):
|
||||
"""Convert commands to job data before sending to workers server."""
|
||||
return {
|
||||
"workfile": self._workfile,
|
||||
"function": "commands",
|
||||
"commands": self.commands_data()
|
||||
}
|
||||
|
||||
def set_result(self, result):
|
||||
commands_by_id = {
|
||||
command.id: command
|
||||
for command in self._commands
|
||||
}
|
||||
|
||||
for item in result:
|
||||
command = commands_by_id[item["id"]]
|
||||
command.set_result(item["result"])
|
||||
command.set_done()
|
||||
|
||||
def _send_job(self):
|
||||
"""Send job to a workers server."""
|
||||
# Send job data to job queue server
|
||||
job_data = self.to_job_data()
|
||||
self.log.debug("Sending job to JobQueue server.\n{}".format(
|
||||
json.dumps(job_data, indent=4)
|
||||
))
|
||||
job_id = self._job_queue_module.send_job("tvpaint", job_data)
|
||||
self.log.info((
|
||||
"Job sent to JobQueue server and got id \"{}\"."
|
||||
" Waiting for finishing the job."
|
||||
).format(job_id))
|
||||
|
||||
return job_id
|
||||
|
||||
def send_job_and_wait(self):
|
||||
"""Send job to workers server and wait for response.
|
||||
|
||||
Result of job is stored into the object.
|
||||
|
||||
Raises:
|
||||
JobFailed: When job was finished but not successfully.
|
||||
"""
|
||||
job_id = self._send_job()
|
||||
while True:
|
||||
job_status = self._job_queue_module.get_job_status(job_id)
|
||||
if job_status["done"]:
|
||||
break
|
||||
time.sleep(1)
|
||||
|
||||
# Check if job state is done
|
||||
if job_status["state"] != "done":
|
||||
raise JobFailed(job_status)
|
||||
|
||||
self.set_result(job_status["result"])
|
||||
|
||||
self.log.debug("Job is done and result is stored.")
|
||||
|
||||
|
||||
class ProcessTVPaintCommands(TVPaintCommands):
|
||||
"""Worker side of TVPaint Commands.
|
||||
|
||||
It is expected that this object is created only on the worker's side from existing
|
||||
data loaded from job.
|
||||
|
||||
Workfile path logic is based on 'SenderTVPaintCommands'.
|
||||
"""
|
||||
def __init__(self, workfile, commands, communicator):
|
||||
super(ProcessTVPaintCommands, self).__init__(workfile)
|
||||
|
||||
self._communicator = communicator
|
||||
|
||||
self.commands_from_data(commands)
|
||||
|
||||
def _prepare_workfile(self, workfile):
|
||||
"""Preprend job queue root before passed workfile."""
|
||||
workfile = workfile.replace("\\", "/")
|
||||
job_queue_root = self.job_queue_root().replace("\\", "/")
|
||||
new_workfile = "/".join([job_queue_root, workfile])
|
||||
while "//" in new_workfile:
|
||||
new_workfile = new_workfile.replace("//", "/")
|
||||
return os.path.normpath(new_workfile)
|
||||
|
||||
@property
|
||||
def communicator(self):
|
||||
"""Access to TVPaint communicator."""
|
||||
return self._communicator
|
||||
|
||||
def commands_from_data(self, commands_data):
|
||||
"""Recreate command from passed data."""
|
||||
for command_data in commands_data:
|
||||
command_name = command_data["command"]
|
||||
|
||||
klass = self.classes_by_name[command_name]
|
||||
command = klass.from_existing(command_data)
|
||||
self.add_command(command)
|
||||
|
||||
def execute_george(self, george_script):
|
||||
"""Helper method to execute george script."""
|
||||
return self.communicator.execute_george(george_script)
|
||||
|
||||
def execute_george_through_file(self, george_script):
|
||||
"""Helper method to execute george script through temp file."""
|
||||
temporary_file = tempfile.NamedTemporaryFile(
|
||||
mode="w", prefix=TMP_FILE_PREFIX, suffix=".grg", delete=False
|
||||
)
|
||||
temporary_file.write(george_script)
|
||||
temporary_file.close()
|
||||
temp_file_path = temporary_file.name.replace("\\", "/")
|
||||
self.execute_george("tv_runscript {}".format(temp_file_path))
|
||||
os.remove(temp_file_path)
|
||||
|
||||
def _open_workfile(self):
|
||||
"""Open workfile in TVPaint."""
|
||||
workfile = self._workfile
|
||||
print("Opening workfile {}".format(workfile))
|
||||
george_script = "tv_LoadProject '\"'\"{}\"'\"'".format(workfile)
|
||||
self.execute_george_through_file(george_script)
|
||||
|
||||
def _close_workfile(self):
|
||||
"""Close workfile in TVPaint."""
|
||||
print("Closing workfile")
|
||||
self.execute_george_through_file("tv_projectclose")
|
||||
|
||||
def execute(self):
|
||||
"""Execute commands."""
|
||||
# First open the workfile
|
||||
self._open_workfile()
|
||||
# Execute commands one by one
|
||||
# TODO maybe stop processing when command fails?
|
||||
print("Commands execution started ({})".format(len(self._commands)))
|
||||
for command in self._commands:
|
||||
command.execute()
|
||||
command.set_done()
|
||||
# Finally close workfile
|
||||
self._close_workfile()
|
||||
|
|
@ -0,0 +1,255 @@
|
|||
"""
|
||||
Requires:
|
||||
CollectTVPaintWorkfileData
|
||||
|
||||
Provides:
|
||||
Instances
|
||||
"""
|
||||
import os
|
||||
import re
|
||||
import copy
|
||||
import pyblish.api
|
||||
|
||||
from openpype.lib import get_subset_name_with_asset_doc
|
||||
|
||||
|
||||
class CollectTVPaintInstances(pyblish.api.ContextPlugin):
|
||||
label = "Collect TVPaint Instances"
|
||||
order = pyblish.api.CollectorOrder + 0.2
|
||||
hosts = ["webpublisher"]
|
||||
targets = ["tvpaint_worker"]
|
||||
|
||||
workfile_family = "workfile"
|
||||
workfile_variant = ""
|
||||
review_family = "review"
|
||||
review_variant = "Main"
|
||||
render_pass_family = "renderPass"
|
||||
render_layer_family = "renderLayer"
|
||||
render_layer_pass_name = "beauty"
|
||||
|
||||
# Set by settings
|
||||
# Regex must contain 'layer' and 'pass' groups which are extracted from the
|
||||
# name when instances are created
|
||||
layer_name_regex = r"(?P<layer>L[0-9]{3}_\w+)_(?P<pass>.+)"
|
||||
|
||||
def process(self, context):
|
||||
# Prepare compiled regex
|
||||
layer_name_regex = re.compile(self.layer_name_regex)
|
||||
|
||||
layers_data = context.data["layersData"]
|
||||
|
||||
host_name = "tvpaint"
|
||||
task_name = context.data.get("task")
|
||||
asset_doc = context.data["assetEntity"]
|
||||
project_doc = context.data["projectEntity"]
|
||||
project_name = project_doc["name"]
|
||||
|
||||
new_instances = []
|
||||
|
||||
# Workfile instance
|
||||
workfile_subset_name = get_subset_name_with_asset_doc(
|
||||
self.workfile_family,
|
||||
self.workfile_variant,
|
||||
task_name,
|
||||
asset_doc,
|
||||
project_name,
|
||||
host_name
|
||||
)
|
||||
workfile_instance = self._create_workfile_instance(
|
||||
context, workfile_subset_name
|
||||
)
|
||||
new_instances.append(workfile_instance)
|
||||
|
||||
# Review instance
|
||||
review_subset_name = get_subset_name_with_asset_doc(
|
||||
self.review_family,
|
||||
self.review_variant,
|
||||
task_name,
|
||||
asset_doc,
|
||||
project_name,
|
||||
host_name
|
||||
)
|
||||
review_instance = self._create_review_instance(
|
||||
context, review_subset_name
|
||||
)
|
||||
new_instances.append(review_instance)
|
||||
|
||||
# Get render layers and passes from TVPaint layers
|
||||
# - it's based on regex extraction
|
||||
layers_by_layer_and_pass = {}
|
||||
for layer in layers_data:
|
||||
# Filter only visible layers
|
||||
if not layer["visible"]:
|
||||
continue
|
||||
|
||||
result = layer_name_regex.search(layer["name"])
|
||||
# Layer name not matching layer name regex
|
||||
# should raise an exception?
|
||||
if result is None:
|
||||
continue
|
||||
render_layer = result.group("layer")
|
||||
render_pass = result.group("pass")
|
||||
|
||||
render_pass_maping = layers_by_layer_and_pass.get(
|
||||
render_layer
|
||||
)
|
||||
if render_pass_maping is None:
|
||||
render_pass_maping = {}
|
||||
layers_by_layer_and_pass[render_layer] = render_pass_maping
|
||||
|
||||
if render_pass not in render_pass_maping:
|
||||
render_pass_maping[render_pass] = []
|
||||
render_pass_maping[render_pass].append(copy.deepcopy(layer))
|
||||
|
||||
layers_by_render_layer = {}
|
||||
for render_layer, render_passes in layers_by_layer_and_pass.items():
|
||||
render_layer_layers = []
|
||||
layers_by_render_layer[render_layer] = render_layer_layers
|
||||
for render_pass, layers in render_passes.items():
|
||||
render_layer_layers.extend(copy.deepcopy(layers))
|
||||
dynamic_data = {
|
||||
"render_pass": render_pass,
|
||||
"render_layer": render_layer,
|
||||
# Override family for subset name
|
||||
"family": "render"
|
||||
}
|
||||
|
||||
subset_name = get_subset_name_with_asset_doc(
|
||||
self.render_pass_family,
|
||||
render_pass,
|
||||
task_name,
|
||||
asset_doc,
|
||||
project_name,
|
||||
host_name,
|
||||
dynamic_data=dynamic_data
|
||||
)
|
||||
|
||||
instance = self._create_render_pass_instance(
|
||||
context, layers, subset_name
|
||||
)
|
||||
new_instances.append(instance)
|
||||
|
||||
for render_layer, layers in layers_by_render_layer.items():
|
||||
variant = render_layer
|
||||
dynamic_data = {
|
||||
"render_pass": self.render_layer_pass_name,
|
||||
"render_layer": render_layer,
|
||||
# Override family for subset name
|
||||
"family": "render"
|
||||
}
|
||||
subset_name = get_subset_name_with_asset_doc(
|
||||
self.render_pass_family,
|
||||
variant,
|
||||
task_name,
|
||||
asset_doc,
|
||||
project_name,
|
||||
host_name,
|
||||
dynamic_data=dynamic_data
|
||||
)
|
||||
instance = self._create_render_layer_instance(
|
||||
context, layers, subset_name
|
||||
)
|
||||
new_instances.append(instance)
|
||||
|
||||
# Set data same for all instances
|
||||
frame_start = context.data.get("frameStart")
|
||||
frame_end = context.data.get("frameEnd")
|
||||
|
||||
for instance in new_instances:
|
||||
if (
|
||||
instance.data.get("frameStart") is None
|
||||
or instance.data.get("frameEnd") is None
|
||||
):
|
||||
instance.data["frameStart"] = frame_start
|
||||
instance.data["frameEnd"] = frame_end
|
||||
|
||||
if instance.data.get("asset") is None:
|
||||
instance.data["asset"] = asset_doc["name"]
|
||||
|
||||
if instance.data.get("task") is None:
|
||||
instance.data["task"] = task_name
|
||||
|
||||
if "representations" not in instance.data:
|
||||
instance.data["representations"] = []
|
||||
|
||||
if "source" not in instance.data:
|
||||
instance.data["source"] = "webpublisher"
|
||||
|
||||
def _create_workfile_instance(self, context, subset_name):
|
||||
workfile_path = context.data["workfilePath"]
|
||||
staging_dir = os.path.dirname(workfile_path)
|
||||
filename = os.path.basename(workfile_path)
|
||||
ext = os.path.splitext(filename)[-1]
|
||||
|
||||
return context.create_instance(**{
|
||||
"name": subset_name,
|
||||
"label": subset_name,
|
||||
"subset": subset_name,
|
||||
"family": self.workfile_family,
|
||||
"families": [],
|
||||
"stagingDir": staging_dir,
|
||||
"representations": [{
|
||||
"name": ext.lstrip("."),
|
||||
"ext": ext.lstrip("."),
|
||||
"files": filename,
|
||||
"stagingDir": staging_dir
|
||||
}]
|
||||
})
|
||||
|
||||
def _create_review_instance(self, context, subset_name):
|
||||
staging_dir = self._create_staging_dir(context, subset_name)
|
||||
layers_data = context.data["layersData"]
|
||||
# Filter hidden layers
|
||||
filtered_layers_data = [
|
||||
copy.deepcopy(layer)
|
||||
for layer in layers_data
|
||||
if layer["visible"]
|
||||
]
|
||||
return context.create_instance(**{
|
||||
"name": subset_name,
|
||||
"label": subset_name,
|
||||
"subset": subset_name,
|
||||
"family": self.review_family,
|
||||
"families": [],
|
||||
"layers": filtered_layers_data,
|
||||
"stagingDir": staging_dir
|
||||
})
|
||||
|
||||
def _create_render_pass_instance(self, context, layers, subset_name):
|
||||
staging_dir = self._create_staging_dir(context, subset_name)
|
||||
# Global instance data modifications
|
||||
# Fill families
|
||||
return context.create_instance(**{
|
||||
"name": subset_name,
|
||||
"subset": subset_name,
|
||||
"label": subset_name,
|
||||
"family": self.render_pass_family,
|
||||
# Add `review` family for thumbnail integration
|
||||
"families": [self.render_pass_family, "review"],
|
||||
"representations": [],
|
||||
"layers": layers,
|
||||
"stagingDir": staging_dir
|
||||
})
|
||||
|
||||
def _create_render_layer_instance(self, context, layers, subset_name):
|
||||
staging_dir = self._create_staging_dir(context, subset_name)
|
||||
# Global instance data modifications
|
||||
# Fill families
|
||||
return context.create_instance(**{
|
||||
"name": subset_name,
|
||||
"subset": subset_name,
|
||||
"label": subset_name,
|
||||
"family": self.render_pass_family,
|
||||
# Add `review` family for thumbnail integration
|
||||
"families": [self.render_pass_family, "review"],
|
||||
"representations": [],
|
||||
"layers": layers,
|
||||
"stagingDir": staging_dir
|
||||
})
|
||||
|
||||
def _create_staging_dir(self, context, subset_name):
|
||||
context_staging_dir = context.data["contextStagingDir"]
|
||||
staging_dir = os.path.join(context_staging_dir, subset_name)
|
||||
if not os.path.exists(staging_dir):
|
||||
os.makedirs(staging_dir)
|
||||
return staging_dir
|
||||
|
|
@ -0,0 +1,142 @@
|
|||
"""
|
||||
Requires:
|
||||
CollectPublishedFiles
|
||||
CollectModules
|
||||
|
||||
Provides:
|
||||
workfilePath - Path to tvpaint workfile
|
||||
sceneData - Scene data loaded from the workfile
|
||||
groupsData -
|
||||
layersData
|
||||
layersExposureFrames
|
||||
layersPrePostBehavior
|
||||
"""
|
||||
import os
|
||||
import uuid
|
||||
import json
|
||||
import shutil
|
||||
import pyblish.api
|
||||
from openpype.lib.plugin_tools import parse_json
|
||||
from openpype.hosts.tvpaint.worker import (
|
||||
SenderTVPaintCommands,
|
||||
CollectSceneData
|
||||
)
|
||||
|
||||
|
||||
class CollectTVPaintWorkfileData(pyblish.api.ContextPlugin):
|
||||
label = "Collect TVPaint Workfile data"
|
||||
order = pyblish.api.CollectorOrder - 0.4
|
||||
hosts = ["webpublisher"]
|
||||
targets = ["tvpaint_worker"]
|
||||
|
||||
def process(self, context):
|
||||
# Get JobQueue module
|
||||
modules = context.data["openPypeModules"]
|
||||
job_queue_module = modules["job_queue"]
|
||||
jobs_root = job_queue_module.get_jobs_root()
|
||||
if not jobs_root:
|
||||
raise ValueError("Job Queue root is not set.")
|
||||
|
||||
context.data["jobsRoot"] = jobs_root
|
||||
|
||||
context_staging_dir = self._create_context_staging_dir(jobs_root)
|
||||
workfile_path = self._extract_workfile_path(
|
||||
context, context_staging_dir
|
||||
)
|
||||
context.data["contextStagingDir"] = context_staging_dir
|
||||
context.data["workfilePath"] = workfile_path
|
||||
|
||||
# Prepare tvpaint command
|
||||
collect_scene_data_command = CollectSceneData()
|
||||
# Create TVPaint sender commands
|
||||
commands = SenderTVPaintCommands(workfile_path, job_queue_module)
|
||||
commands.add_command(collect_scene_data_command)
|
||||
|
||||
# Send job and wait for answer
|
||||
commands.send_job_and_wait()
|
||||
|
||||
collected_data = collect_scene_data_command.result()
|
||||
layers_data = collected_data["layers_data"]
|
||||
groups_data = collected_data["groups_data"]
|
||||
scene_data = collected_data["scene_data"]
|
||||
exposure_frames_by_layer_id = (
|
||||
collected_data["exposure_frames_by_layer_id"]
|
||||
)
|
||||
pre_post_beh_by_layer_id = (
|
||||
collected_data["pre_post_beh_by_layer_id"]
|
||||
)
|
||||
|
||||
# Store results
|
||||
# scene data store the same way as TVPaint collector
|
||||
scene_data = {
|
||||
"sceneWidth": scene_data["width"],
|
||||
"sceneHeight": scene_data["height"],
|
||||
"scenePixelAspect": scene_data["pixel_aspect"],
|
||||
"sceneFps": scene_data["fps"],
|
||||
"sceneFieldOrder": scene_data["field_order"],
|
||||
"sceneMarkIn": scene_data["mark_in"],
|
||||
# scene_data["mark_in_state"],
|
||||
"sceneMarkInState": scene_data["mark_in_set"],
|
||||
"sceneMarkOut": scene_data["mark_out"],
|
||||
# scene_data["mark_out_state"],
|
||||
"sceneMarkOutState": scene_data["mark_out_set"],
|
||||
"sceneStartFrame": scene_data["start_frame"],
|
||||
"sceneBgColor": scene_data["bg_color"]
|
||||
}
|
||||
context.data["sceneData"] = scene_data
|
||||
# Store only raw data
|
||||
context.data["groupsData"] = groups_data
|
||||
context.data["layersData"] = layers_data
|
||||
context.data["layersExposureFrames"] = exposure_frames_by_layer_id
|
||||
context.data["layersPrePostBehavior"] = pre_post_beh_by_layer_id
|
||||
|
||||
self.log.debug(
|
||||
(
|
||||
"Collected data"
|
||||
"\nScene data: {}"
|
||||
"\nLayers data: {}"
|
||||
"\nExposure frames: {}"
|
||||
"\nPre/Post behavior: {}"
|
||||
).format(
|
||||
json.dumps(scene_data, indent=4),
|
||||
json.dumps(layers_data, indent=4),
|
||||
json.dumps(exposure_frames_by_layer_id, indent=4),
|
||||
json.dumps(pre_post_beh_by_layer_id, indent=4)
|
||||
)
|
||||
)
|
||||
|
||||
def _create_context_staging_dir(self, jobs_root):
|
||||
if not os.path.exists(jobs_root):
|
||||
os.makedirs(jobs_root)
|
||||
|
||||
random_folder_name = str(uuid.uuid4())
|
||||
full_path = os.path.join(jobs_root, random_folder_name)
|
||||
if not os.path.exists(full_path):
|
||||
os.makedirs(full_path)
|
||||
return full_path
|
||||
|
||||
def _extract_workfile_path(self, context, context_staging_dir):
|
||||
"""Find first TVPaint file in tasks and use it."""
|
||||
batch_dir = context.data["batchDir"]
|
||||
batch_data = context.data["batchData"]
|
||||
src_workfile_path = None
|
||||
for task_id in batch_data["tasks"]:
|
||||
if src_workfile_path is not None:
|
||||
break
|
||||
task_dir = os.path.join(batch_dir, task_id)
|
||||
task_manifest_path = os.path.join(task_dir, "manifest.json")
|
||||
task_data = parse_json(task_manifest_path)
|
||||
task_files = task_data["files"]
|
||||
for filename in task_files:
|
||||
_, ext = os.path.splitext(filename)
|
||||
if ext.lower() == ".tvpp":
|
||||
src_workfile_path = os.path.join(task_dir, filename)
|
||||
break
|
||||
|
||||
# Copy workfile to job queue work root
|
||||
new_workfile_path = os.path.join(
|
||||
context_staging_dir, os.path.basename(src_workfile_path)
|
||||
)
|
||||
shutil.copy(src_workfile_path, new_workfile_path)
|
||||
|
||||
return new_workfile_path
|
||||
|
|
@ -0,0 +1,535 @@
|
|||
import os
|
||||
import copy
|
||||
|
||||
from openpype.hosts.tvpaint.worker import (
|
||||
SenderTVPaintCommands,
|
||||
ExecuteSimpleGeorgeScript,
|
||||
ExecuteGeorgeScript
|
||||
)
|
||||
|
||||
import pyblish.api
|
||||
from openpype.hosts.tvpaint.lib import (
|
||||
calculate_layers_extraction_data,
|
||||
get_frame_filename_template,
|
||||
fill_reference_frames,
|
||||
composite_rendered_layers,
|
||||
rename_filepaths_by_frame_start
|
||||
)
|
||||
from PIL import Image
|
||||
|
||||
|
||||
class ExtractTVPaintSequences(pyblish.api.Extractor):
|
||||
label = "Extract TVPaint Sequences"
|
||||
hosts = ["webpublisher"]
|
||||
targets = ["tvpaint_worker"]
|
||||
|
||||
# Context plugin does not have families filtering
|
||||
families_filter = ["review", "renderPass", "renderLayer"]
|
||||
|
||||
job_queue_root_key = "jobs_root"
|
||||
|
||||
# Modifiable with settings
|
||||
review_bg = [255, 255, 255, 255]
|
||||
|
||||
def process(self, context):
|
||||
# Get workfile path
|
||||
workfile_path = context.data["workfilePath"]
|
||||
jobs_root = context.data["jobsRoot"]
|
||||
jobs_root_slashed = jobs_root.replace("\\", "/")
|
||||
|
||||
# Prepare scene data
|
||||
scene_data = context.data["sceneData"]
|
||||
scene_mark_in = scene_data["sceneMarkIn"]
|
||||
scene_mark_out = scene_data["sceneMarkOut"]
|
||||
scene_start_frame = scene_data["sceneStartFrame"]
|
||||
scene_bg_color = scene_data["sceneBgColor"]
|
||||
|
||||
# Prepare layers behavior
|
||||
behavior_by_layer_id = context.data["layersPrePostBehavior"]
|
||||
exposure_frames_by_layer_id = context.data["layersExposureFrames"]
|
||||
|
||||
# Handles are not stored per instance but on Context
|
||||
handle_start = context.data["handleStart"]
|
||||
handle_end = context.data["handleEnd"]
|
||||
|
||||
# Get JobQueue module
|
||||
modules = context.data["openPypeModules"]
|
||||
job_queue_module = modules["job_queue"]
|
||||
|
||||
tvpaint_commands = SenderTVPaintCommands(
|
||||
workfile_path, job_queue_module
|
||||
)
|
||||
|
||||
# Change scene Start Frame to 0 to prevent frame index issues
|
||||
# - issue is that TVPaint versions deal with frame indexes in a
|
||||
# different way when Start Frame is not `0`
|
||||
# NOTE It will be set back after rendering
|
||||
tvpaint_commands.add_command(
|
||||
ExecuteSimpleGeorgeScript("tv_startframe 0")
|
||||
)
|
||||
|
||||
root_key_replacement = "{" + self.job_queue_root_key + "}"
|
||||
after_render_instances = []
|
||||
for instance in context:
|
||||
instance_families = set(instance.data.get("families", []))
|
||||
instance_families.add(instance.data["family"])
|
||||
valid = False
|
||||
for family in instance_families:
|
||||
if family in self.families_filter:
|
||||
valid = True
|
||||
break
|
||||
|
||||
if not valid:
|
||||
continue
|
||||
|
||||
self.log.info("* Preparing commands for instance \"{}\"".format(
|
||||
instance.data["label"]
|
||||
))
|
||||
# Get all layers and filter out not visible
|
||||
layers = instance.data["layers"]
|
||||
filtered_layers = [layer for layer in layers if layer["visible"]]
|
||||
if not filtered_layers:
|
||||
self.log.info(
|
||||
"None of the layers from the instance"
|
||||
" are visible. Extraction skipped."
|
||||
)
|
||||
continue
|
||||
|
||||
joined_layer_names = ", ".join([
|
||||
"\"{}\"".format(str(layer["name"]))
|
||||
for layer in filtered_layers
|
||||
])
|
||||
self.log.debug(
|
||||
"Instance has {} layers with names: {}".format(
|
||||
len(filtered_layers), joined_layer_names
|
||||
)
|
||||
)
|
||||
|
||||
# Staging dir must be created during collection
|
||||
staging_dir = instance.data["stagingDir"].replace("\\", "/")
|
||||
|
||||
job_root_template = staging_dir.replace(
|
||||
jobs_root_slashed, root_key_replacement
|
||||
)
|
||||
|
||||
# Frame start/end may be stored as float
|
||||
frame_start = int(instance.data["frameStart"])
|
||||
frame_end = int(instance.data["frameEnd"])
|
||||
|
||||
# Prepare output frames
|
||||
output_frame_start = frame_start - handle_start
|
||||
output_frame_end = frame_end + handle_end
|
||||
|
||||
# Change output frame start to 0 if handles make it negative
|
||||
if output_frame_start < 0:
|
||||
self.log.warning((
|
||||
"Frame start with handles has negative value."
|
||||
" Changed to \"0\". Frames start: {}, Handle Start: {}"
|
||||
).format(frame_start, handle_start))
|
||||
output_frame_start = 0
|
||||
|
||||
# Create copy of scene Mark In/Out
|
||||
mark_in, mark_out = scene_mark_in, scene_mark_out
|
||||
|
||||
# Fix possible changes of output frame
|
||||
mark_out, output_frame_end = self._fix_range_changes(
|
||||
mark_in, mark_out, output_frame_start, output_frame_end
|
||||
)
|
||||
filename_template = get_frame_filename_template(
|
||||
max(scene_mark_out, output_frame_end)
|
||||
)
|
||||
|
||||
# -----------------------------------------------------------------
|
||||
self.log.debug(
|
||||
"Files will be rendered to folder: {}".format(staging_dir)
|
||||
)
|
||||
|
||||
output_filepaths_by_frame_idx = {}
|
||||
for frame_idx in range(mark_in, mark_out + 1):
|
||||
filename = filename_template.format(frame=frame_idx)
|
||||
filepath = os.path.join(staging_dir, filename)
|
||||
output_filepaths_by_frame_idx[frame_idx] = filepath
|
||||
|
||||
# Prepare data for post render processing
|
||||
post_render_data = {
|
||||
"output_dir": staging_dir,
|
||||
"layers": filtered_layers,
|
||||
"output_filepaths_by_frame_idx": output_filepaths_by_frame_idx,
|
||||
"instance": instance,
|
||||
"is_layers_render": False,
|
||||
"output_frame_start": output_frame_start,
|
||||
"output_frame_end": output_frame_end
|
||||
}
|
||||
# Store them to list
|
||||
after_render_instances.append(post_render_data)
|
||||
|
||||
# Review rendering
|
||||
if instance.data["family"] == "review":
|
||||
self.add_render_review_command(
|
||||
tvpaint_commands, mark_in, mark_out, scene_bg_color,
|
||||
job_root_template, filename_template
|
||||
)
|
||||
continue
|
||||
|
||||
# Layers rendering
|
||||
extraction_data_by_layer_id = calculate_layers_extraction_data(
|
||||
filtered_layers,
|
||||
exposure_frames_by_layer_id,
|
||||
behavior_by_layer_id,
|
||||
mark_in,
|
||||
mark_out
|
||||
)
|
||||
filepaths_by_layer_id = self.add_render_command(
|
||||
tvpaint_commands,
|
||||
job_root_template,
|
||||
staging_dir,
|
||||
filtered_layers,
|
||||
extraction_data_by_layer_id
|
||||
)
|
||||
# Add more data to post render processing
|
||||
post_render_data.update({
|
||||
"is_layers_render": True,
|
||||
"extraction_data_by_layer_id": extraction_data_by_layer_id,
|
||||
"filepaths_by_layer_id": filepaths_by_layer_id
|
||||
})
|
||||
|
||||
# Change scene frame Start back to previous value
|
||||
tvpaint_commands.add_command(
|
||||
ExecuteSimpleGeorgeScript(
|
||||
"tv_startframe {}".format(scene_start_frame)
|
||||
)
|
||||
)
|
||||
self.log.info("Sending the job and waiting for response...")
|
||||
tvpaint_commands.send_job_and_wait()
|
||||
self.log.info("Render job finished")
|
||||
|
||||
for post_render_data in after_render_instances:
|
||||
self._post_render_processing(post_render_data, mark_in, mark_out)
|
||||
|
||||
def _fix_range_changes(
|
||||
self, mark_in, mark_out, output_frame_start, output_frame_end
|
||||
):
|
||||
# Check Marks range and output range
|
||||
output_range = output_frame_end - output_frame_start
|
||||
marks_range = mark_out - mark_in
|
||||
|
||||
# Lower Mark Out if mark range is bigger than output
|
||||
# - do not render unused frames
|
||||
if output_range < marks_range:
|
||||
new_mark_out = mark_out - (marks_range - output_range)
|
||||
self.log.warning((
|
||||
"Lowering render range to {} frames. Changed Mark Out {} -> {}"
|
||||
).format(marks_range + 1, mark_out, new_mark_out))
|
||||
# Assign new mark out to variable
|
||||
mark_out = new_mark_out
|
||||
|
||||
# Lower output frame end so representation has right `frameEnd` value
|
||||
elif output_range > marks_range:
|
||||
new_output_frame_end = (
|
||||
output_frame_end - (output_range - marks_range)
|
||||
)
|
||||
self.log.warning((
|
||||
"Lowering representation range to {} frames."
|
||||
" Changed frame end {} -> {}"
|
||||
).format(output_range + 1, mark_out, new_output_frame_end))
|
||||
output_frame_end = new_output_frame_end
|
||||
return mark_out, output_frame_end
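A minimal standalone sketch of the reconciliation above, with made-up frame values (not taken from the plugin), showing that only one of the two ranges is ever shrunk:

# Hypothetical values only.
mark_in, mark_out = 10, 30                          # 21 frames between marks
output_frame_start, output_frame_end = 990, 1005    # 16 output frames

output_range = output_frame_end - output_frame_start   # 15
marks_range = mark_out - mark_in                        # 20

if output_range < marks_range:
    # Render fewer frames: pull Mark Out in.
    mark_out -= marks_range - output_range
elif output_range > marks_range:
    # Publish fewer frames: pull the representation end in.
    output_frame_end -= output_range - marks_range

print(mark_out, output_frame_end)   # 25 1005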
|
||||
|
||||
def _post_render_processing(self, post_render_data, mark_in, mark_out):
|
||||
# Unpack values
|
||||
instance = post_render_data["instance"]
|
||||
output_filepaths_by_frame_idx = (
|
||||
post_render_data["output_filepaths_by_frame_idx"]
|
||||
)
|
||||
is_layers_render = post_render_data["is_layers_render"]
|
||||
output_dir = post_render_data["output_dir"]
|
||||
layers = post_render_data["layers"]
|
||||
output_frame_start = post_render_data["output_frame_start"]
|
||||
output_frame_end = post_render_data["output_frame_end"]
|
||||
|
||||
# Trigger post processing of layers rendering
|
||||
# - only a few frames were rendered; this will complete the sequence
# - multiple layers can be in a single instance; they must be
# composited over each other
|
||||
if is_layers_render:
|
||||
self._finish_layer_render(
|
||||
layers,
|
||||
post_render_data["extraction_data_by_layer_id"],
|
||||
post_render_data["filepaths_by_layer_id"],
|
||||
mark_in,
|
||||
mark_out,
|
||||
output_filepaths_by_frame_idx
|
||||
)
|
||||
|
||||
# Create thumbnail
|
||||
thumbnail_filepath = os.path.join(output_dir, "thumbnail.jpg")
|
||||
thumbnail_src_path = output_filepaths_by_frame_idx[mark_in]
|
||||
self._create_thumbnail(thumbnail_src_path, thumbnail_filepath)
|
||||
|
||||
# Rename filepaths to final frames
|
||||
repre_files = self._rename_output_files(
|
||||
output_filepaths_by_frame_idx,
|
||||
mark_in,
|
||||
mark_out,
|
||||
output_frame_start
|
||||
)
|
||||
|
||||
# Fill tags and new families
|
||||
family_lowered = instance.data["family"].lower()
|
||||
tags = []
|
||||
if family_lowered in ("review", "renderlayer"):
|
||||
tags.append("review")
|
||||
|
||||
# Sequence of one frame
|
||||
single_file = len(repre_files) == 1
|
||||
if single_file:
|
||||
repre_files = repre_files[0]
|
||||
|
||||
# Extension is hardcoded
# - changing the extension would require code changes
|
||||
new_repre = {
|
||||
"name": "png",
|
||||
"ext": "png",
|
||||
"files": repre_files,
|
||||
"stagingDir": output_dir,
|
||||
"tags": tags
|
||||
}
|
||||
|
||||
if not single_file:
|
||||
new_repre["frameStart"] = output_frame_start
|
||||
new_repre["frameEnd"] = output_frame_end
|
||||
|
||||
self.log.debug("Creating new representation: {}".format(new_repre))
|
||||
|
||||
instance.data["representations"].append(new_repre)
|
||||
|
||||
if family_lowered in ("renderpass", "renderlayer"):
|
||||
# Change family to render
|
||||
instance.data["family"] = "render"
|
||||
|
||||
thumbnail_ext = os.path.splitext(thumbnail_filepath)[1]
|
||||
# Create thumbnail representation
|
||||
thumbnail_repre = {
|
||||
"name": "thumbnail",
|
||||
"ext": thumbnail_ext.replace(".", ""),
|
||||
"outputName": "thumb",
|
||||
"files": os.path.basename(thumbnail_filepath),
|
||||
"stagingDir": output_dir,
|
||||
"tags": ["thumbnail"]
|
||||
}
|
||||
instance.data["representations"].append(thumbnail_repre)
|
||||
|
||||
def _rename_output_files(
|
||||
self, filepaths_by_frame, mark_in, mark_out, output_frame_start
|
||||
):
|
||||
new_filepaths_by_frame = rename_filepaths_by_frame_start(
|
||||
filepaths_by_frame, mark_in, mark_out, output_frame_start
|
||||
)
|
||||
|
||||
repre_filenames = []
|
||||
for filepath in new_filepaths_by_frame.values():
|
||||
repre_filenames.append(os.path.basename(filepath))
|
||||
|
||||
if mark_in < output_frame_start:
|
||||
repre_filenames = list(reversed(repre_filenames))
|
||||
|
||||
return repre_filenames
|
||||
|
||||
def add_render_review_command(
|
||||
self,
|
||||
tvpaint_commands,
|
||||
mark_in,
|
||||
mark_out,
|
||||
scene_bg_color,
|
||||
job_root_template,
|
||||
filename_template
|
||||
):
|
||||
""" Export images from TVPaint using `tv_savesequence` command.
|
||||
|
||||
Args:
tvpaint_commands (SenderTVPaintCommands): Commands object the george
script is added to.
mark_in (int): Starting frame index from which export will begin.
mark_out (int): On which frame index export will end.
scene_bg_color (list): Bg color set in scene. Result of george
script command `tv_background`.
job_root_template (str): Template of the job root directory where
files will be stored.
filename_template (str): Template for rendered frame file names.
|
||||
"""
|
||||
self.log.debug("Preparing data for rendering.")
|
||||
bg_color = self._get_review_bg_color()
|
||||
first_frame_filepath = "/".join([
|
||||
job_root_template,
|
||||
filename_template.format(frame=mark_in)
|
||||
])
|
||||
|
||||
george_script_lines = [
|
||||
# Change bg color to color from settings
|
||||
"tv_background \"color\" {} {} {}".format(*bg_color),
|
||||
"tv_SaveMode \"PNG\"",
|
||||
"export_path = \"{}\"".format(
|
||||
first_frame_filepath.replace("\\", "/")
|
||||
),
|
||||
"tv_savesequence '\"'export_path'\"' {} {}".format(
|
||||
mark_in, mark_out
|
||||
)
|
||||
]
|
||||
if scene_bg_color:
|
||||
# Change bg color back to previous scene bg color
|
||||
_scene_bg_color = copy.deepcopy(scene_bg_color)
|
||||
bg_type = _scene_bg_color.pop(0)
|
||||
orig_color_command = [
|
||||
"tv_background",
|
||||
"\"{}\"".format(bg_type)
|
||||
]
|
||||
orig_color_command.extend(_scene_bg_color)
|
||||
|
||||
george_script_lines.append(" ".join(orig_color_command))
|
||||
|
||||
tvpaint_commands.add_command(
|
||||
ExecuteGeorgeScript(
|
||||
george_script_lines,
|
||||
root_dir_key=self.job_queue_root_key
|
||||
)
|
||||
)
|
||||
|
||||
def add_render_command(
|
||||
self,
|
||||
tvpaint_commands,
|
||||
job_root_template,
|
||||
staging_dir,
|
||||
layers,
|
||||
extraction_data_by_layer_id
|
||||
):
|
||||
""" Export images from TVPaint.
|
||||
|
||||
Args:
tvpaint_commands (SenderTVPaintCommands): Commands object the render
commands are added to.
job_root_template (str): Template of the job root directory used in
the george script.
staging_dir (str): Directory where files will be stored.
layers (list): List of layers to be exported.
extraction_data_by_layer_id (dict): Pre-calculated extraction data
by layer id.

Returns:
dict: Rendered file paths by frame index, grouped by layer id.
|
||||
"""
|
||||
# Map layers by position
|
||||
layers_by_id = {
|
||||
layer["layer_id"]: layer
|
||||
for layer in layers
|
||||
}
|
||||
|
||||
# Render layers
|
||||
filepaths_by_layer_id = {}
|
||||
for layer_id, render_data in extraction_data_by_layer_id.items():
|
||||
layer = layers_by_id[layer_id]
|
||||
frame_references = render_data["frame_references"]
|
||||
filenames_by_frame_index = render_data["filenames_by_frame_index"]
|
||||
|
||||
filepaths_by_frame = {}
|
||||
command_filepath_by_frame = {}
|
||||
for frame_idx, ref_idx in frame_references.items():
|
||||
# None reference is skipped because it does not have a source
|
||||
if ref_idx is None:
|
||||
filepaths_by_frame[frame_idx] = None
|
||||
continue
|
||||
filename = filenames_by_frame_index[frame_idx]
|
||||
|
||||
filepaths_by_frame[frame_idx] = os.path.join(
|
||||
staging_dir, filename
|
||||
)
|
||||
if frame_idx == ref_idx:
|
||||
command_filepath_by_frame[frame_idx] = "/".join(
|
||||
[job_root_template, filename]
|
||||
)
|
||||
|
||||
self._add_render_layer_command(
|
||||
tvpaint_commands, layer, command_filepath_by_frame
|
||||
)
|
||||
filepaths_by_layer_id[layer_id] = filepaths_by_frame
|
||||
|
||||
return filepaths_by_layer_id
|
||||
|
||||
def _add_render_layer_command(
|
||||
self, tvpaint_commands, layer, filepaths_by_frame
|
||||
):
|
||||
george_script_lines = [
|
||||
# Set current layer by position
|
||||
"tv_layergetid {}".format(layer["position"]),
|
||||
"layer_id = result",
|
||||
"tv_layerset layer_id",
|
||||
"tv_SaveMode \"PNG\""
|
||||
]
|
||||
|
||||
for frame_idx, filepath in filepaths_by_frame.items():
|
||||
if filepath is None:
|
||||
continue
|
||||
|
||||
# Go to frame
|
||||
george_script_lines.append("tv_layerImage {}".format(frame_idx))
|
||||
# Store image to output
|
||||
george_script_lines.append(
|
||||
"tv_saveimage \"{}\"".format(filepath.replace("\\", "/"))
|
||||
)
|
||||
|
||||
tvpaint_commands.add_command(
|
||||
ExecuteGeorgeScript(
|
||||
george_script_lines,
|
||||
root_dir_key=self.job_queue_root_key
|
||||
)
|
||||
)
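For illustration, with a layer at position 2 and only frames 5 and 7 referencing themselves, the list assembled by the method above would contain roughly the following lines (file paths are placeholders, not real template output):

george_script_lines == [
    'tv_layergetid 2',
    'layer_id = result',
    'tv_layerset layer_id',
    'tv_SaveMode "PNG"',
    'tv_layerImage 5',
    'tv_saveimage "<job root>/frame_00005.png"',
    'tv_layerImage 7',
    'tv_saveimage "<job root>/frame_00007.png"',
]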
|
||||
|
||||
def _finish_layer_render(
|
||||
self,
|
||||
layers,
|
||||
extraction_data_by_layer_id,
|
||||
filepaths_by_layer_id,
|
||||
mark_in,
|
||||
mark_out,
|
||||
output_filepaths_by_frame_idx
|
||||
):
|
||||
# Fill frames between `frame_start_index` and `frame_end_index`
|
||||
self.log.debug("Filling frames not rendered frames.")
|
||||
for layer_id, render_data in extraction_data_by_layer_id.items():
|
||||
frame_references = render_data["frame_references"]
|
||||
filepaths_by_frame = filepaths_by_layer_id[layer_id]
|
||||
fill_reference_frames(frame_references, filepaths_by_frame)
|
||||
|
||||
# Prepare final filepaths where compositing should store result
|
||||
self.log.info("Started compositing of layer frames.")
|
||||
composite_rendered_layers(
|
||||
layers, filepaths_by_layer_id,
|
||||
mark_in, mark_out,
|
||||
output_filepaths_by_frame_idx
|
||||
)
|
||||
|
||||
def _create_thumbnail(self, thumbnail_src_path, thumbnail_filepath):
|
||||
if not os.path.exists(thumbnail_src_path):
|
||||
return
|
||||
|
||||
source_img = Image.open(thumbnail_src_path)
|
||||
|
||||
# Composite background only on rgba images
|
||||
# - just making sure
|
||||
if source_img.mode.lower() == "rgba":
|
||||
bg_color = self._get_review_bg_color()
|
||||
self.log.debug("Adding thumbnail background color {}.".format(
|
||||
" ".join([str(val) for val in bg_color])
|
||||
))
|
||||
bg_image = Image.new("RGBA", source_img.size, bg_color)
|
||||
thumbnail_obj = Image.alpha_composite(bg_image, source_img)
|
||||
thumbnail_obj.convert("RGB").save(thumbnail_filepath)
|
||||
|
||||
else:
|
||||
self.log.info((
|
||||
"Source for thumbnail has mode \"{}\" (Expected: RGBA)."
|
||||
" Can't use thubmanail background color."
|
||||
).format(source_img.mode))
|
||||
source_img.save(thumbnail_filepath)
|
||||
|
||||
def _get_review_bg_color(self):
|
||||
red = green = blue = 255
|
||||
if self.review_bg:
|
||||
if len(self.review_bg) == 4:
|
||||
red, green, blue, _ = self.review_bg
|
||||
elif len(self.review_bg) == 3:
|
||||
red, green, blue = self.review_bg
|
||||
return (red, green, blue)
|
||||
|
|
@ -0,0 +1,31 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
"""Cleanup leftover files from publish."""
|
||||
import os
|
||||
import shutil
|
||||
import pyblish.api
|
||||
|
||||
|
||||
class CleanUpJobRoot(pyblish.api.ContextPlugin):
|
||||
"""Cleans up the job root directory after a successful publish.
|
||||
|
||||
Remove all files in job root as all of them should be published.
|
||||
"""
|
||||
|
||||
order = pyblish.api.IntegratorOrder + 1
|
||||
label = "Clean Up Job Root"
|
||||
optional = True
|
||||
active = True
|
||||
|
||||
def process(self, context):
|
||||
context_staging_dir = context.data.get("contextStagingDir")
|
||||
if not context_staging_dir:
|
||||
self.log.info("Key 'contextStagingDir' is empty.")
|
||||
|
||||
elif not os.path.exists(context_staging_dir):
|
||||
self.log.info((
|
||||
"Job root directory for this publish does not"
|
||||
" exists anymore \"{}\"."
|
||||
).format(context_staging_dir))
|
||||
else:
|
||||
self.log.info("Deleting job root with all files.")
|
||||
shutil.rmtree(context_staging_dir)
|
||||
|
|
@ -0,0 +1,35 @@
|
|||
import pyblish.api
|
||||
|
||||
|
||||
class ValidateWorkfileData(pyblish.api.ContextPlugin):
|
||||
"""Validate mark in and out are enabled and it's duration.
|
||||
|
||||
Mark In/Out does not have to match frameStart and frameEnd but duration is
|
||||
important.
|
||||
"""
|
||||
|
||||
label = "Validate Workfile Data"
|
||||
order = pyblish.api.ValidatorOrder
|
||||
|
||||
def process(self, context):
|
||||
# Data collected in `CollectAvalonEntities`
|
||||
frame_start = context.data["frameStart"]
|
||||
frame_end = context.data["frameEnd"]
|
||||
handle_start = context.data["handleStart"]
|
||||
handle_end = context.data["handleEnd"]
|
||||
|
||||
scene_data = context.data["sceneData"]
|
||||
scene_mark_in = scene_data["sceneMarkIn"]
|
||||
scene_mark_out = scene_data["sceneMarkOut"]
|
||||
|
||||
expected_range = (
|
||||
(frame_end - frame_start + 1)
|
||||
+ handle_start
|
||||
+ handle_end
|
||||
)
|
||||
marks_range = scene_mark_out - scene_mark_in + 1
|
||||
if expected_range != marks_range:
|
||||
raise AssertionError((
|
||||
"Wrong Mark In/Out range."
|
||||
" Expected range is {} frames got {} frames"
|
||||
).format(expected_range, marks_range))
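Worked example of the check above, with made-up values: frameStart 1001, frameEnd 1020 and handles 5/5 give an expected range of 30 frames, so a scene with Mark In 0 and Mark Out 29 passes, while Mark Out 28 would raise:

expected_range = (1020 - 1001 + 1) + 5 + 5   # 30
marks_range = 29 - 0 + 1                     # 30 -> valid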
|
||||
|
|
@ -198,6 +198,15 @@ class WebpublisherBatchPublishEndpoint(_RestApiEndpoint):
|
|||
# - filter defines command and can extend arguments dictionary
|
||||
# This is used only if 'studio_processing' is enabled on batch
|
||||
studio_processing_filters = [
|
||||
# TVPaint filter
|
||||
{
|
||||
"extensions": [".tvpp"],
|
||||
"command": "remotepublish",
|
||||
"arguments": {
|
||||
"targets": ["tvpaint_worker"]
|
||||
},
|
||||
"add_to_queue": False
|
||||
},
|
||||
# Photoshop filter
|
||||
{
|
||||
"extensions": [".psd", ".psb"],
|
||||
|
|
|
|||
|
|
@ -89,8 +89,10 @@ class Anatomy:
|
|||
|
||||
self.project_name = project_name
|
||||
|
||||
self._data = get_anatomy_settings(project_name, site_name)
|
||||
|
||||
self._data = self._prepare_anatomy_data(
|
||||
get_anatomy_settings(project_name, site_name)
|
||||
)
|
||||
self._site_name = site_name
|
||||
self._templates_obj = Templates(self)
|
||||
self._roots_obj = Roots(self)
|
||||
|
||||
|
|
@ -121,9 +123,36 @@ class Anatomy:
|
|||
"""
|
||||
return get_default_anatomy_settings(clear_metadata=False)
|
||||
|
||||
@staticmethod
|
||||
def _prepare_anatomy_data(anatomy_data):
|
||||
"""Prepare anatomy data for futher processing.
|
||||
|
||||
Method added to replace `{task}` with `{task[name]}` in templates.
|
||||
"""
|
||||
templates_data = anatomy_data.get("templates")
|
||||
if templates_data:
|
||||
# Replace `{task}` with `{task[name]}` in templates
|
||||
value_queue = collections.deque()
|
||||
value_queue.append(templates_data)
|
||||
while value_queue:
|
||||
item = value_queue.popleft()
|
||||
if not isinstance(item, dict):
|
||||
continue
|
||||
|
||||
for key in tuple(item.keys()):
|
||||
value = item[key]
|
||||
if isinstance(value, dict):
|
||||
value_queue.append(value)
|
||||
|
||||
elif isinstance(value, StringType):
|
||||
item[key] = value.replace("{task}", "{task[name]}")
|
||||
return anatomy_data
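A small sketch of what that rewrite does to a hypothetical templates section (not the real settings content):

templates = {
    "work": {
        "file": "{project[code]}_{asset}_{task}_v{version:0>3}.{ext}",
        "path": "{root[work]}/{project[name]}/{hierarchy}/{asset}/work/{task}"
    }
}
# After _prepare_anatomy_data every "{task}" in string values becomes
# "{task[name]}", e.g.:
#   "{project[code]}_{asset}_{task[name]}_v{version:0>3}.{ext}"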
|
||||
|
||||
def reset(self):
|
||||
"""Reset values of cached data in templates and roots objects."""
|
||||
self._data = get_anatomy_settings(self.project_name)
|
||||
self._data = self._prepare_anatomy_data(
|
||||
get_anatomy_settings(self.project_name, self._site_name)
|
||||
)
|
||||
self.templates_obj.reset()
|
||||
self.roots_obj.reset()
|
||||
|
||||
|
|
@ -981,6 +1010,14 @@ class Templates:
|
|||
TemplateResult: Filled or partially filled template containing all
|
||||
data needed or missing for filling template.
|
||||
"""
|
||||
task_data = data.get("task")
|
||||
if (
|
||||
isinstance(task_data, StringType)
|
||||
and "{task[name]}" in orig_template
|
||||
):
|
||||
# Change task to dictionary if template expects a dictionary
|
||||
data["task"] = {"name": task_data}
|
||||
|
||||
template, missing_optional, invalid_optional = (
|
||||
self._filter_optional(orig_template, data)
|
||||
)
|
||||
|
|
@ -989,6 +1026,7 @@ class Templates:
|
|||
invalid_required = []
|
||||
missing_required = []
|
||||
replace_keys = []
|
||||
|
||||
for group in self.key_pattern.findall(template):
|
||||
orig_key = group[1:-1]
|
||||
key = str(orig_key)
|
||||
|
|
@ -1074,6 +1112,10 @@ class Templates:
|
|||
output = collections.defaultdict(dict)
|
||||
for key, orig_value in templates.items():
|
||||
if isinstance(orig_value, StringType):
|
||||
# Replace {task} by '{task[name]}' for backward compatibility
|
||||
if '{task}' in orig_value:
|
||||
orig_value = orig_value.replace('{task}', '{task[name]}')
|
||||
|
||||
output[key] = self._format(orig_value, data)
|
||||
continue
|
||||
|
||||
|
|
|
|||
|
|
@ -1280,23 +1280,12 @@ def prepare_context_environments(data):
|
|||
|
||||
anatomy = data["anatomy"]
|
||||
|
||||
asset_tasks = asset_doc.get("data", {}).get("tasks") or {}
|
||||
task_info = asset_tasks.get(task_name) or {}
|
||||
task_type = task_info.get("type")
|
||||
task_type = workdir_data["task"]["type"]
|
||||
# Temp solution how to pass task type to `_prepare_last_workfile`
|
||||
data["task_type"] = task_type
|
||||
|
||||
workfile_template_key = get_workfile_template_key(
|
||||
task_type,
|
||||
app.host_name,
|
||||
project_name=project_name,
|
||||
project_settings=project_settings
|
||||
)
|
||||
|
||||
try:
|
||||
workdir = get_workdir_with_workdir_data(
|
||||
workdir_data, anatomy, template_key=workfile_template_key
|
||||
)
|
||||
workdir = get_workdir_with_workdir_data(workdir_data, anatomy)
|
||||
|
||||
except Exception as exc:
|
||||
raise ApplicationLaunchFailed(
|
||||
|
|
@ -1329,10 +1318,10 @@ def prepare_context_environments(data):
|
|||
)
|
||||
data["env"].update(context_env)
|
||||
|
||||
_prepare_last_workfile(data, workdir, workfile_template_key)
|
||||
_prepare_last_workfile(data, workdir)
|
||||
|
||||
|
||||
def _prepare_last_workfile(data, workdir, workfile_template_key):
|
||||
def _prepare_last_workfile(data, workdir):
|
||||
"""last workfile workflow preparation.
|
||||
|
||||
Function check if should care about last workfile workflow and tries
|
||||
|
|
@ -1395,6 +1384,10 @@ def _prepare_last_workfile(data, workdir, workfile_template_key):
|
|||
anatomy = data["anatomy"]
|
||||
# Find last workfile
|
||||
file_template = anatomy.templates["work"]["file"]
|
||||
# Replace {task} by '{task[name]}' for backward compatibility
|
||||
if '{task}' in file_template:
|
||||
file_template = file_template.replace('{task}', '{task[name]}')
|
||||
|
||||
workdir_data.update({
|
||||
"version": 1,
|
||||
"user": get_openpype_username(),
|
||||
|
|
|
|||
|
|
@ -7,6 +7,7 @@ import platform
|
|||
import logging
|
||||
import collections
|
||||
import functools
|
||||
import getpass
|
||||
|
||||
from openpype.settings import get_project_settings
|
||||
from .anatomy import Anatomy
|
||||
|
|
@ -257,19 +258,48 @@ def get_hierarchy(asset_name=None):
|
|||
return "/".join(hierarchy_items)
|
||||
|
||||
|
||||
@with_avalon
|
||||
def get_linked_assets(asset_entity):
|
||||
"""Return linked assets for `asset_entity` from DB
|
||||
def get_linked_asset_ids(asset_doc):
|
||||
"""Return linked asset ids for `asset_doc` from DB
|
||||
|
||||
Args:
|
||||
asset_entity (dict): asset document from DB
|
||||
Args:
|
||||
asset_doc (dict): Asset document from DB.
|
||||
|
||||
Returns:
|
||||
(list) of MongoDB documents
|
||||
Returns:
|
||||
(list): MongoDB ids of input links.
|
||||
"""
|
||||
inputs = asset_entity["data"].get("inputs", [])
|
||||
inputs = [avalon.io.find_one({"_id": x}) for x in inputs]
|
||||
return inputs
|
||||
output = []
|
||||
if not asset_doc:
|
||||
return output
|
||||
|
||||
input_links = asset_doc["data"].get("inputLinks") or []
|
||||
if input_links:
|
||||
for item in input_links:
|
||||
# Backwards compatibility for "_id" key which was replaced with
|
||||
# "id"
|
||||
if "_id" in item:
|
||||
link_id = item["_id"]
|
||||
else:
|
||||
link_id = item["id"]
|
||||
output.append(link_id)
|
||||
|
||||
return output
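A hypothetical asset document showing both link formats the loop above accepts (ids are shortened placeholders):

asset_doc = {
    "data": {
        "inputLinks": [
            {"_id": "60f7c1...", "linkedBy": "ftrack", "type": "breakdown"},   # legacy key
            {"id": "61a0d3...", "linkedBy": "ftrack", "type": "breakdown"}     # current key
        ]
    }
}
# get_linked_asset_ids(asset_doc) -> ["60f7c1...", "61a0d3..."]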
|
||||
|
||||
|
||||
@with_avalon
|
||||
def get_linked_assets(asset_doc):
|
||||
"""Return linked assets for `asset_doc` from DB
|
||||
|
||||
Args:
|
||||
asset_doc (dict): Asset document from DB
|
||||
|
||||
Returns:
|
||||
(list) Asset documents of input links for passed asset doc.
|
||||
"""
|
||||
link_ids = get_linked_asset_ids(asset_doc)
|
||||
if not link_ids:
|
||||
return []
|
||||
|
||||
return list(avalon.io.find({"_id": {"$in": link_ids}}))
|
||||
|
||||
|
||||
@with_avalon
|
||||
|
|
@ -464,6 +494,7 @@ def get_workfile_template_key(
|
|||
return default
|
||||
|
||||
|
||||
# TODO rename function as it is not just "work" specific
|
||||
def get_workdir_data(project_doc, asset_doc, task_name, host_name):
|
||||
"""Prepare data for workdir template filling from entered information.
|
||||
|
||||
|
|
@ -479,22 +510,31 @@ def get_workdir_data(project_doc, asset_doc, task_name, host_name):
|
|||
"""
|
||||
hierarchy = "/".join(asset_doc["data"]["parents"])
|
||||
|
||||
task_type = asset_doc['data']['tasks'].get(task_name, {}).get('type')
|
||||
|
||||
project_task_types = project_doc["config"]["tasks"]
|
||||
task_code = project_task_types.get(task_type, {}).get("short_name")
|
||||
|
||||
data = {
|
||||
"project": {
|
||||
"name": project_doc["name"],
|
||||
"code": project_doc["data"].get("code")
|
||||
},
|
||||
"task": task_name,
|
||||
"task": {
|
||||
"name": task_name,
|
||||
"type": task_type,
|
||||
"short": task_code,
|
||||
},
|
||||
"asset": asset_doc["name"],
|
||||
"app": host_name,
|
||||
"hierarchy": hierarchy
|
||||
"user": getpass.getuser(),
|
||||
"hierarchy": hierarchy,
|
||||
}
|
||||
return data
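Filled with hypothetical project, asset and task documents, the returned dictionary would look roughly like:

{
    "project": {"name": "demo_project", "code": "demo"},
    "task": {"name": "modeling", "type": "Modeling", "short": "mdl"},
    "asset": "characterA",
    "app": "maya",
    "user": "artist01",
    "hierarchy": "assets/characters"
}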
|
||||
|
||||
|
||||
def get_workdir_with_workdir_data(
|
||||
workdir_data, anatomy=None, project_name=None,
|
||||
template_key=None, dbcon=None
|
||||
workdir_data, anatomy=None, project_name=None, template_key=None
|
||||
):
|
||||
"""Fill workdir path from entered data and project's anatomy.
|
||||
|
||||
|
|
@ -529,12 +569,10 @@ def get_workdir_with_workdir_data(
|
|||
anatomy = Anatomy(project_name)
|
||||
|
||||
if not template_key:
|
||||
template_key = get_workfile_template_key_from_context(
|
||||
workdir_data["asset"],
|
||||
workdir_data["task"],
|
||||
template_key = get_workfile_template_key(
|
||||
workdir_data["task"]["type"],
|
||||
workdir_data["app"],
|
||||
project_name=workdir_data["project"]["name"],
|
||||
dbcon=dbcon
|
||||
project_name=workdir_data["project"]["name"]
|
||||
)
|
||||
|
||||
anatomy_filled = anatomy.format(workdir_data)
|
||||
|
|
@ -648,7 +686,7 @@ def create_workfile_doc(asset_doc, task_name, filename, workdir, dbcon=None):
|
|||
anatomy = Anatomy(project_doc["name"])
|
||||
# Get workdir path (result is anatomy.TemplateResult)
|
||||
template_workdir = get_workdir_with_workdir_data(
|
||||
workdir_data, anatomy, dbcon=dbcon
|
||||
workdir_data, anatomy
|
||||
)
|
||||
template_workdir_path = str(template_workdir).replace("\\", "/")
|
||||
|
||||
|
|
|
|||
|
|
@ -60,12 +60,13 @@ def path_from_representation(representation, anatomy):
|
|||
path = pipeline.format_template_with_optional_keys(
|
||||
context, template
|
||||
)
|
||||
path = os.path.normpath(path.replace("/", "\\"))
|
||||
|
||||
except KeyError:
|
||||
# Template references unavailable data
|
||||
return None
|
||||
|
||||
return os.path.normpath(path)
|
||||
return path
|
||||
|
||||
|
||||
def copy_file(src_path, dst_path):
|
||||
|
|
@ -179,9 +180,11 @@ def process_single_file(
|
|||
Returns:
|
||||
(collections.defaultdict , int)
|
||||
"""
|
||||
# Make sure path is valid for all platforms
|
||||
src_path = os.path.normpath(src_path.replace("\\", "/"))
|
||||
|
||||
if not os.path.exists(src_path):
|
||||
msg = "{} doesn't exist for {}".format(src_path,
|
||||
repre["_id"])
|
||||
msg = "{} doesn't exist for {}".format(src_path, repre["_id"])
|
||||
report_items["Source file was not found"].append(msg)
|
||||
return report_items, 0
|
||||
|
||||
|
|
@ -192,8 +195,10 @@ def process_single_file(
|
|||
else:
|
||||
delivery_path = anatomy_filled["delivery"][template_name]
|
||||
|
||||
# context.representation could be .psd
|
||||
# Backwards compatibility when extension contained `.`
|
||||
delivery_path = delivery_path.replace("..", ".")
|
||||
# Make sure path is valid for all platforms
|
||||
delivery_path = os.path.normpath(delivery_path.replace("\\", "/"))
|
||||
|
||||
delivery_folder = os.path.dirname(delivery_path)
|
||||
if not os.path.exists(delivery_folder):
|
||||
|
|
@ -230,14 +235,14 @@ def process_sequence(
|
|||
Returns:
|
||||
(collections.defaultdict , int)
|
||||
"""
|
||||
src_path = os.path.normpath(src_path.replace("\\", "/"))
|
||||
|
||||
def hash_path_exist(myPath):
|
||||
res = myPath.replace('#', '*')
|
||||
glob_search_results = glob.glob(res)
|
||||
if len(glob_search_results) > 0:
|
||||
return True
|
||||
else:
|
||||
return False
|
||||
return False
|
||||
|
||||
if not hash_path_exist(src_path):
|
||||
msg = "{} doesn't exist for {}".format(src_path,
|
||||
|
|
@ -307,6 +312,7 @@ def process_sequence(
|
|||
else:
|
||||
delivery_path = anatomy_filled["delivery"][template_name]
|
||||
|
||||
delivery_path = os.path.normpath(delivery_path.replace("\\", "/"))
|
||||
delivery_folder = os.path.dirname(delivery_path)
|
||||
dst_head, dst_tail = delivery_path.split(frame_indicator)
|
||||
dst_padding = src_collection.padding
|
||||
|
|
|
|||
|
|
@ -124,7 +124,7 @@ def run_subprocess(*args, **kwargs):
|
|||
if full_output:
|
||||
full_output += "\n"
|
||||
full_output += _stderr
|
||||
logger.warning(_stderr)
|
||||
logger.info(_stderr)
|
||||
|
||||
if proc.returncode != 0:
|
||||
exc_msg = "Executing arguments was not successful: \"{}\"".format(args)
|
||||
|
|
|
|||
|
|
@ -522,6 +522,11 @@ def get_local_site_id():
|
|||
|
||||
Identifier is created if does not exists yet.
|
||||
"""
|
||||
# override local id from environment
|
||||
# used for background syncing
|
||||
if os.environ.get("OPENPYPE_LOCAL_ID"):
|
||||
return os.environ["OPENPYPE_LOCAL_ID"]
|
||||
|
||||
registry = OpenPypeSettingsRegistry()
|
||||
try:
|
||||
return registry.get_item("localId")
|
||||
|
|
|
|||
|
|
@ -531,12 +531,20 @@ def should_decompress(file_url):
|
|||
and we can decompress (oiiotool supported)
|
||||
"""
|
||||
if oiio_supported():
|
||||
output = run_subprocess([
|
||||
get_oiio_tools_path(),
|
||||
"--info", "-v", file_url])
|
||||
return "compression: \"dwaa\"" in output or \
|
||||
"compression: \"dwab\"" in output
|
||||
|
||||
try:
|
||||
output = run_subprocess([
|
||||
get_oiio_tools_path(),
|
||||
"--info", "-v", file_url])
|
||||
return "compression: \"dwaa\"" in output or \
|
||||
"compression: \"dwab\"" in output
|
||||
except RuntimeError:
|
||||
_name, ext = os.path.splitext(file_url)
|
||||
# TODO: shouldn't the list of allowed extensions be
# taken from an OIIO variable of supported formats?
|
||||
if ext not in [".mxf"]:
|
||||
# Reraise exception
|
||||
raise
|
||||
return False
|
||||
return False
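A rough sketch of the new behaviour (paths are hypothetical): probing failures still raise for formats oiiotool should be able to read, while an unreadable `.mxf` is treated as "no decompression needed":

should_decompress("/renders/plate.0001.exr")   # True when oiiotool reports DWAA/DWAB compression
should_decompress("/plates/plate.mxf")         # False; RuntimeError from oiiotool is swallowed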
|
||||
|
||||
|
||||
|
|
|
|||
|
|
@ -51,7 +51,8 @@ def start_webpublish_log(dbcon, batch_id, user):
|
|||
"batch_id": batch_id,
|
||||
"start_date": datetime.now(),
|
||||
"user": user,
|
||||
"status": "in_progress"
|
||||
"status": "in_progress",
|
||||
"progress": 0.0
|
||||
}).inserted_id
|
||||
|
||||
|
||||
|
|
|
|||
|
|
@ -71,18 +71,24 @@ def ffprobe_streams(path_to_file, logger=None):
|
|||
"Getting information about input \"{}\".".format(path_to_file)
|
||||
)
|
||||
args = [
|
||||
"\"{}\"".format(get_ffmpeg_tool_path("ffprobe")),
|
||||
"-v quiet",
|
||||
"-print_format json",
|
||||
get_ffmpeg_tool_path("ffprobe"),
|
||||
"-hide_banner",
|
||||
"-loglevel", "fatal",
|
||||
"-show_error",
|
||||
"-show_format",
|
||||
"-show_streams",
|
||||
"\"{}\"".format(path_to_file)
|
||||
"-show_programs",
|
||||
"-show_chapters",
|
||||
"-show_private_data",
|
||||
"-print_format", "json",
|
||||
path_to_file
|
||||
]
|
||||
command = " ".join(args)
|
||||
logger.debug("FFprobe command: \"{}\"".format(command))
|
||||
|
||||
logger.debug("FFprobe command: {}".format(
|
||||
subprocess.list2cmdline(args)
|
||||
))
|
||||
popen = subprocess.Popen(
|
||||
command,
|
||||
shell=True,
|
||||
args,
|
||||
stdout=subprocess.PIPE,
|
||||
stderr=subprocess.PIPE
|
||||
)
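Assuming the function goes on to parse the JSON from stdout and return its "streams" list (as its name suggests), a caller sketch might look like:

streams = ffprobe_streams("/path/to/review.mov")   # hypothetical path
video_streams = [s for s in streams if s.get("codec_type") == "video"]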
|
||||
|
|
|
|||
|
|
@ -1,10 +1,10 @@
|
|||
import os
|
||||
import json
|
||||
|
||||
import requests
|
||||
import hou
|
||||
|
||||
from avalon import api, io
|
||||
from avalon.vendor import requests
|
||||
|
||||
import pyblish.api
|
||||
|
||||
|
|
|
|||
|
|
@ -2,8 +2,8 @@ import os
|
|||
import json
|
||||
import getpass
|
||||
|
||||
import requests
|
||||
from avalon import api
|
||||
from avalon.vendor import requests
|
||||
|
||||
import pyblish.api
|
||||
|
||||
|
|
|
|||
|
|
@ -288,6 +288,22 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
|
|||
"pluginInfo", {})
|
||||
)
|
||||
|
||||
self.limit_groups = (
|
||||
context.data["project_settings"].get(
|
||||
"deadline", {}).get(
|
||||
"publish", {}).get(
|
||||
"MayaSubmitDeadline", {}).get(
|
||||
"limit", [])
|
||||
)
|
||||
|
||||
self.group = (
|
||||
context.data["project_settings"].get(
|
||||
"deadline", {}).get(
|
||||
"publish", {}).get(
|
||||
"MayaSubmitDeadline", {}).get(
|
||||
"group", "none")
|
||||
)
|
||||
|
||||
context = instance.context
|
||||
workspace = context.data["workspaceDir"]
|
||||
anatomy = context.data['anatomy']
|
||||
|
|
|
|||
|
|
@ -1,10 +1,11 @@
|
|||
import os
|
||||
import re
|
||||
import json
|
||||
import getpass
|
||||
|
||||
import requests
|
||||
|
||||
from avalon import api
|
||||
from avalon.vendor import requests
|
||||
import re
|
||||
import pyblish.api
|
||||
import nuke
|
||||
|
||||
|
|
@ -94,24 +95,27 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
|
|||
render_path).replace("\\", "/")
|
||||
instance.data["publishJobState"] = "Suspended"
|
||||
|
||||
if instance.data.get("bakeScriptPath"):
|
||||
render_path = instance.data.get("bakeRenderPath")
|
||||
script_path = instance.data.get("bakeScriptPath")
|
||||
exe_node_name = instance.data.get("bakeWriteNodeName")
|
||||
if instance.data.get("bakingNukeScripts"):
|
||||
for baking_script in instance.data["bakingNukeScripts"]:
|
||||
render_path = baking_script["bakeRenderPath"]
|
||||
script_path = baking_script["bakeScriptPath"]
|
||||
exe_node_name = baking_script["bakeWriteNodeName"]
|
||||
|
||||
# exception for slate workflow
|
||||
if "slate" in instance.data["families"]:
|
||||
self._frame_start += 1
|
||||
# exception for slate workflow
|
||||
if "slate" in instance.data["families"]:
|
||||
self._frame_start += 1
|
||||
|
||||
resp = self.payload_submit(instance,
|
||||
script_path,
|
||||
render_path,
|
||||
exe_node_name,
|
||||
response.json()
|
||||
)
|
||||
# Store output dir for unified publisher (filesequence)
|
||||
instance.data["deadlineSubmissionJob"] = resp.json()
|
||||
instance.data["publishJobState"] = "Suspended"
|
||||
resp = self.payload_submit(
|
||||
instance,
|
||||
script_path,
|
||||
render_path,
|
||||
exe_node_name,
|
||||
response.json()
|
||||
)
|
||||
|
||||
# Store output dir for unified publisher (filesequence)
|
||||
instance.data["deadlineSubmissionJob"] = resp.json()
|
||||
instance.data["publishJobState"] = "Suspended"
|
||||
|
||||
# redefinition of families
|
||||
if "render.farm" in families:
|
||||
|
|
|
|||
|
|
@ -5,10 +5,11 @@ import os
|
|||
import json
|
||||
import re
|
||||
from copy import copy, deepcopy
|
||||
import requests
|
||||
import clique
|
||||
import openpype.api
|
||||
|
||||
from avalon import api, io
|
||||
from avalon.vendor import requests, clique
|
||||
|
||||
import pyblish.api
|
||||
|
||||
|
|
@ -104,7 +105,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
families = ["render.farm", "prerender.farm",
|
||||
"renderlayer", "imagesequence", "vrayscene"]
|
||||
|
||||
aov_filter = {"maya": [r".*(?:\.|_)*([Bb]eauty)(?:\.|_)*.*"],
|
||||
aov_filter = {"maya": [r".*(?:[\._-])*([Bb]eauty)(?:[\.|_])*.*"],
|
||||
"aftereffects": [r".*"], # for everything from AE
|
||||
"harmony": [r".*"], # for everything from AE
|
||||
"celaction": [r".*"]}
|
||||
|
|
@ -142,8 +143,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
instance_transfer = {
|
||||
"slate": ["slateFrame"],
|
||||
"review": ["lutPath"],
|
||||
"render2d": ["bakeScriptPath", "bakeRenderPath",
|
||||
"bakeWriteNodeName", "version"],
|
||||
"render2d": ["bakingNukeScripts", "version"],
|
||||
"renderlayer": ["convertToScanline"]
|
||||
}
|
||||
|
||||
|
|
@ -231,7 +231,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
args = [
|
||||
'publish',
|
||||
roothless_metadata_path,
|
||||
"--targets {}".format("deadline")
|
||||
"--targets", "deadline",
|
||||
"--targets", "filesequence"
|
||||
]
|
||||
|
||||
# Generate the payload for Deadline submission
|
||||
|
|
@ -505,9 +506,9 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
"""
|
||||
representations = []
|
||||
collections, remainders = clique.assemble(exp_files)
|
||||
bake_render_path = instance.get("bakeRenderPath", [])
|
||||
bake_renders = instance.get("bakingNukeScripts", [])
|
||||
|
||||
# create representation for every collected sequence
|
||||
# create representation for every collected sequence
|
||||
for collection in collections:
|
||||
ext = collection.tail.lstrip(".")
|
||||
preview = False
|
||||
|
|
@ -523,7 +524,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
preview = True
|
||||
break
|
||||
|
||||
if bake_render_path:
|
||||
if bake_renders:
|
||||
preview = False
|
||||
|
||||
staging = os.path.dirname(list(collection)[0])
|
||||
|
|
@ -595,7 +596,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
})
|
||||
self._solve_families(instance, True)
|
||||
|
||||
if remainder in bake_render_path:
|
||||
if (bake_renders
|
||||
and remainder in bake_renders[0]["bakeRenderPath"]):
|
||||
rep.update({
|
||||
"fps": instance.get("fps"),
|
||||
"tags": ["review", "delete"]
|
||||
|
|
|
|||
|
|
@ -1,7 +1,7 @@
|
|||
import pyblish.api
|
||||
|
||||
from avalon.vendor import requests
|
||||
import os
|
||||
import requests
|
||||
|
||||
import pyblish.api
|
||||
|
||||
|
||||
class ValidateDeadlineConnection(pyblish.api.InstancePlugin):
|
||||
|
|
|
|||
|
|
@ -1,8 +1,8 @@
|
|||
import os
|
||||
import json
|
||||
import pyblish.api
|
||||
import requests
|
||||
|
||||
from avalon.vendor import requests
|
||||
import pyblish.api
|
||||
|
||||
from openpype.lib.abstract_submit_deadline import requests_get
|
||||
from openpype.lib.delivery import collect_frames
|
||||
|
|
|
|||
|
|
@ -0,0 +1,147 @@
|
|||
from pymongo import UpdateOne
|
||||
from bson.objectid import ObjectId
|
||||
|
||||
from avalon.api import AvalonMongoDB
|
||||
|
||||
from openpype_modules.ftrack.lib import (
|
||||
CUST_ATTR_ID_KEY,
|
||||
query_custom_attributes,
|
||||
|
||||
BaseEvent
|
||||
)
|
||||
|
||||
|
||||
class SyncLinksToAvalon(BaseEvent):
|
||||
"""Synchronize inpug linkts to avalon documents."""
|
||||
# Run after sync to avalon event handler
|
||||
priority = 110
|
||||
|
||||
def __init__(self, session):
|
||||
self.dbcon = AvalonMongoDB()
|
||||
|
||||
super(SyncLinksToAvalon, self).__init__(session)
|
||||
|
||||
def launch(self, session, event):
|
||||
# Collect dependency (link) changes from the event entities
|
||||
entities_info = event["data"]["entities"]
|
||||
dependency_changes = []
|
||||
removed_entities = set()
|
||||
for entity_info in entities_info:
|
||||
action = entity_info.get("action")
|
||||
entityType = entity_info.get("entityType")
|
||||
if action not in ("remove", "add"):
|
||||
continue
|
||||
|
||||
if entityType == "task":
|
||||
removed_entities.add(entity_info["entityId"])
|
||||
elif entityType == "dependency":
|
||||
dependency_changes.append(entity_info)
|
||||
|
||||
# Care only about dependency changes
|
||||
if not dependency_changes:
|
||||
return
|
||||
|
||||
project_id = None
|
||||
for entity_info in dependency_changes:
|
||||
for parent_info in entity_info["parents"]:
|
||||
if parent_info["entityType"] == "show":
|
||||
project_id = parent_info["entityId"]
|
||||
if project_id is not None:
|
||||
break
|
||||
|
||||
changed_to_ids = set()
|
||||
for entity_info in dependency_changes:
|
||||
to_id_change = entity_info["changes"]["to_id"]
|
||||
if to_id_change["new"] is not None:
|
||||
changed_to_ids.add(to_id_change["new"])
|
||||
|
||||
if to_id_change["old"] is not None:
|
||||
changed_to_ids.add(to_id_change["old"])
|
||||
|
||||
self._update_in_links(session, changed_to_ids, project_id)
|
||||
|
||||
def _update_in_links(self, session, ftrack_ids, project_id):
|
||||
if not ftrack_ids or project_id is None:
|
||||
return
|
||||
|
||||
attr_def = session.query((
|
||||
"select id from CustomAttributeConfiguration where key is \"{}\""
|
||||
).format(CUST_ATTR_ID_KEY)).first()
|
||||
if attr_def is None:
|
||||
return
|
||||
|
||||
project_entity = session.query((
|
||||
"select full_name from Project where id is \"{}\""
|
||||
).format(project_id)).first()
|
||||
if not project_entity:
|
||||
return
|
||||
|
||||
project_name = project_entity["full_name"]
|
||||
mongo_id_by_ftrack_id = self._get_mongo_ids_by_ftrack_ids(
|
||||
session, attr_def["id"], ftrack_ids
|
||||
)
|
||||
|
||||
filtered_ftrack_ids = tuple(mongo_id_by_ftrack_id.keys())
|
||||
context_links = session.query((
|
||||
"select from_id, to_id from TypedContextLink where to_id in ({})"
|
||||
).format(self.join_query_keys(filtered_ftrack_ids))).all()
|
||||
|
||||
mapping_by_to_id = {
|
||||
ftrack_id: set()
|
||||
for ftrack_id in filtered_ftrack_ids
|
||||
}
|
||||
all_from_ids = set()
|
||||
for context_link in context_links:
|
||||
to_id = context_link["to_id"]
|
||||
from_id = context_link["from_id"]
|
||||
if from_id == to_id:
|
||||
continue
|
||||
all_from_ids.add(from_id)
|
||||
mapping_by_to_id[to_id].add(from_id)
|
||||
|
||||
mongo_id_by_ftrack_id.update(self._get_mongo_ids_by_ftrack_ids(
|
||||
session, attr_def["id"], all_from_ids
|
||||
))
|
||||
self.log.info(mongo_id_by_ftrack_id)
|
||||
bulk_writes = []
|
||||
for to_id, from_ids in mapping_by_to_id.items():
|
||||
dst_mongo_id = mongo_id_by_ftrack_id[to_id]
|
||||
links = []
|
||||
for ftrack_id in from_ids:
|
||||
link_mongo_id = mongo_id_by_ftrack_id.get(ftrack_id)
|
||||
if link_mongo_id is None:
|
||||
continue
|
||||
|
||||
links.append({
|
||||
"id": ObjectId(link_mongo_id),
|
||||
"linkedBy": "ftrack",
|
||||
"type": "breakdown"
|
||||
})
|
||||
|
||||
bulk_writes.append(UpdateOne(
|
||||
{"_id": ObjectId(dst_mongo_id)},
|
||||
{"$set": {"data.inputLinks": links}}
|
||||
))
|
||||
|
||||
if bulk_writes:
|
||||
self.dbcon.database[project_name].bulk_write(bulk_writes)
|
||||
|
||||
def _get_mongo_ids_by_ftrack_ids(self, session, attr_id, ftrack_ids):
|
||||
output = query_custom_attributes(
|
||||
session, [attr_id], ftrack_ids
|
||||
)
|
||||
mongo_id_by_ftrack_id = {}
|
||||
for item in output:
|
||||
mongo_id = item["value"]
|
||||
if not mongo_id:
|
||||
continue
|
||||
|
||||
ftrack_id = item["entity_id"]
|
||||
|
||||
mongo_id_by_ftrack_id[ftrack_id] = mongo_id
|
||||
return mongo_id_by_ftrack_id
|
||||
|
||||
|
||||
def register(session):
|
||||
'''Register plugin. Called when used as a plugin.'''
|
||||
SyncLinksToAvalon(session).register()
|
||||
|
|
@ -194,6 +194,7 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
ftrack_id = proj["data"].get("ftrackId")
|
||||
if ftrack_id is None:
|
||||
ftrack_id = self._update_project_ftrack_id()
|
||||
proj["data"]["ftrackId"] = ftrack_id
|
||||
self._avalon_ents_by_ftrack_id[ftrack_id] = proj
|
||||
for ent in ents:
|
||||
ftrack_id = ent["data"].get("ftrackId")
|
||||
|
|
@ -584,6 +585,10 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
continue
|
||||
ftrack_id = ftrack_id[0]
|
||||
|
||||
# Skip deleted projects
|
||||
if action == "remove" and entityType == "show":
|
||||
return True
|
||||
|
||||
# task modified, collect parent id of task, handle separately
|
||||
if entity_type.lower() == "task":
|
||||
changes = ent_info.get("changes") or {}
|
||||
|
|
|
|||
|
|
@ -226,8 +226,8 @@ class FtrackModule(
|
|||
if not project_name:
|
||||
return
|
||||
|
||||
attributes_changes = changes.get("attributes")
|
||||
if not attributes_changes:
|
||||
new_attr_values = new_value.get("attributes")
|
||||
if not new_attr_values:
|
||||
return
|
||||
|
||||
import ftrack_api
|
||||
|
|
@ -277,7 +277,7 @@ class FtrackModule(
|
|||
|
||||
failed = {}
|
||||
missing = {}
|
||||
for key, value in attributes_changes.items():
|
||||
for key, value in new_attr_values.items():
|
||||
if key not in ca_keys:
|
||||
continue
|
||||
|
||||
|
|
@ -351,12 +351,24 @@ class FtrackModule(
|
|||
if "server_url" not in session_kwargs:
|
||||
session_kwargs["server_url"] = self.ftrack_url
|
||||
|
||||
if "api_key" not in session_kwargs or "api_user" not in session_kwargs:
|
||||
api_key = session_kwargs.get("api_key")
|
||||
api_user = session_kwargs.get("api_user")
|
||||
# First look into environments
|
||||
# - both OpenPype tray and ftrack event server should have set them
|
||||
# - ftrack event server may crash when credentials are tried to load
|
||||
# from keyring
|
||||
if not api_key or not api_user:
|
||||
api_key = os.environ.get("FTRACK_API_KEY")
|
||||
api_user = os.environ.get("FTRACK_API_USER")
|
||||
|
||||
if not api_key or not api_user:
|
||||
from .lib import credentials
|
||||
cred = credentials.get_credentials()
|
||||
session_kwargs["api_user"] = cred.get("username")
|
||||
session_kwargs["api_key"] = cred.get("api_key")
|
||||
api_user = cred.get("username")
|
||||
api_key = cred.get("api_key")
|
||||
|
||||
session_kwargs["api_user"] = api_user
|
||||
session_kwargs["api_key"] = api_key
|
||||
return ftrack_api.Session(**session_kwargs)
|
||||
|
||||
def tray_init(self):
|
||||
|
|
@ -412,6 +424,14 @@ class FtrackModule(
|
|||
hours_logged = (task_entity["time_logged"] / 60) / 60
|
||||
return hours_logged
|
||||
|
||||
def get_credentials(self):
|
||||
# type: () -> tuple
|
||||
"""Get local Ftrack credentials."""
|
||||
from .lib import credentials
|
||||
|
||||
cred = credentials.get_credentials(self.ftrack_url)
|
||||
return cred.get("username"), cred.get("api_key")
|
||||
|
||||
def cli(self, click_group):
|
||||
click_group.add_command(cli_main)
|
||||
|
||||
|
|
|
|||
|
|
@ -22,7 +22,7 @@ from .custom_attributes import get_openpype_attr
|
|||
|
||||
from bson.objectid import ObjectId
|
||||
from bson.errors import InvalidId
|
||||
from pymongo import UpdateOne
|
||||
from pymongo import UpdateOne, ReplaceOne
|
||||
import ftrack_api
|
||||
|
||||
log = Logger.get_logger(__name__)
|
||||
|
|
@ -328,7 +328,7 @@ class SyncEntitiesFactory:
|
|||
server_url=self._server_url,
|
||||
api_key=self._api_key,
|
||||
api_user=self._api_user,
|
||||
auto_connect_event_hub=True
|
||||
auto_connect_event_hub=False
|
||||
)
|
||||
|
||||
self.duplicates = {}
|
||||
|
|
@ -341,6 +341,7 @@ class SyncEntitiesFactory:
|
|||
}
|
||||
|
||||
self.create_list = []
|
||||
self.unarchive_list = []
|
||||
self.updates = collections.defaultdict(dict)
|
||||
|
||||
self.avalon_project = None
|
||||
|
|
@ -1169,16 +1170,43 @@ class SyncEntitiesFactory:
|
|||
entity
|
||||
)
|
||||
|
||||
def _get_input_links(self, ftrack_ids):
|
||||
tupled_ids = tuple(ftrack_ids)
|
||||
mapping_by_to_id = {
|
||||
ftrack_id: set()
|
||||
for ftrack_id in tupled_ids
|
||||
}
|
||||
ids_len = len(tupled_ids)
|
||||
chunk_size = int(5000 / ids_len)
|
||||
all_links = []
|
||||
for idx in range(0, ids_len, chunk_size):
|
||||
entity_ids_joined = join_query_keys(
|
||||
tupled_ids[idx:idx + chunk_size]
|
||||
)
|
||||
|
||||
all_links.extend(self.session.query((
|
||||
"select from_id, to_id from"
|
||||
" TypedContextLink where to_id in ({})"
|
||||
).format(entity_ids_joined)).all())
|
||||
|
||||
for context_link in all_links:
|
||||
to_id = context_link["to_id"]
|
||||
from_id = context_link["from_id"]
|
||||
if from_id == to_id:
|
||||
continue
|
||||
mapping_by_to_id[to_id].add(from_id)
|
||||
return mapping_by_to_id
|
||||
|
||||
def prepare_ftrack_ent_data(self):
|
||||
not_set_ids = []
|
||||
for id, entity_dict in self.entities_dict.items():
|
||||
for ftrack_id, entity_dict in self.entities_dict.items():
|
||||
entity = entity_dict["entity"]
|
||||
if entity is None:
|
||||
not_set_ids.append(id)
|
||||
not_set_ids.append(ftrack_id)
|
||||
continue
|
||||
|
||||
self.entities_dict[id]["final_entity"] = {}
|
||||
self.entities_dict[id]["final_entity"]["name"] = (
|
||||
self.entities_dict[ftrack_id]["final_entity"] = {}
|
||||
self.entities_dict[ftrack_id]["final_entity"]["name"] = (
|
||||
entity_dict["name"]
|
||||
)
|
||||
data = {}
|
||||
|
|
@ -1191,58 +1219,59 @@ class SyncEntitiesFactory:
|
|||
for key, val in entity_dict.get("hier_attrs", []).items():
|
||||
data[key] = val
|
||||
|
||||
if id == self.ft_project_id:
|
||||
project_name = entity["full_name"]
|
||||
data["code"] = entity["name"]
|
||||
self.entities_dict[id]["final_entity"]["data"] = data
|
||||
self.entities_dict[id]["final_entity"]["type"] = "project"
|
||||
if ftrack_id != self.ft_project_id:
|
||||
ent_path_items = [ent["name"] for ent in entity["link"]]
|
||||
parents = ent_path_items[1:len(ent_path_items) - 1:]
|
||||
|
||||
proj_schema = entity["project_schema"]
|
||||
task_types = proj_schema["_task_type_schema"]["types"]
|
||||
proj_apps, warnings = get_project_apps(
|
||||
data.pop("applications", [])
|
||||
)
|
||||
for msg, items in warnings.items():
|
||||
if not msg or not items:
|
||||
continue
|
||||
self.report_items["warning"][msg] = items
|
||||
|
||||
current_project_anatomy_data = get_anatomy_settings(
|
||||
project_name, exclude_locals=True
|
||||
)
|
||||
anatomy_tasks = current_project_anatomy_data["tasks"]
|
||||
tasks = {}
|
||||
default_type_data = {
|
||||
"short_name": ""
|
||||
}
|
||||
for task_type in task_types:
|
||||
task_type_name = task_type["name"]
|
||||
tasks[task_type_name] = copy.deepcopy(
|
||||
anatomy_tasks.get(task_type_name)
|
||||
or default_type_data
|
||||
)
|
||||
|
||||
project_config = {
|
||||
"tasks": tasks,
|
||||
"apps": proj_apps
|
||||
}
|
||||
for key, value in current_project_anatomy_data.items():
|
||||
if key in project_config or key == "attributes":
|
||||
continue
|
||||
project_config[key] = value
|
||||
|
||||
self.entities_dict[id]["final_entity"]["config"] = (
|
||||
project_config
|
||||
)
|
||||
data["parents"] = parents
|
||||
data["tasks"] = self.entities_dict[ftrack_id].pop("tasks", {})
|
||||
self.entities_dict[ftrack_id]["final_entity"]["data"] = data
|
||||
self.entities_dict[ftrack_id]["final_entity"]["type"] = "asset"
|
||||
continue
|
||||
project_name = entity["full_name"]
|
||||
data["code"] = entity["name"]
|
||||
self.entities_dict[ftrack_id]["final_entity"]["data"] = data
|
||||
self.entities_dict[ftrack_id]["final_entity"]["type"] = (
|
||||
"project"
|
||||
)
|
||||
|
||||
ent_path_items = [ent["name"] for ent in entity["link"]]
|
||||
parents = ent_path_items[1:len(ent_path_items) - 1:]
|
||||
proj_schema = entity["project_schema"]
|
||||
task_types = proj_schema["_task_type_schema"]["types"]
|
||||
proj_apps, warnings = get_project_apps(
|
||||
data.pop("applications", [])
|
||||
)
|
||||
for msg, items in warnings.items():
|
||||
if not msg or not items:
|
||||
continue
|
||||
self.report_items["warning"][msg] = items
|
||||
|
||||
data["parents"] = parents
|
||||
data["tasks"] = self.entities_dict[id].pop("tasks", {})
|
||||
self.entities_dict[id]["final_entity"]["data"] = data
|
||||
self.entities_dict[id]["final_entity"]["type"] = "asset"
|
||||
current_project_anatomy_data = get_anatomy_settings(
|
||||
project_name, exclude_locals=True
|
||||
)
|
||||
anatomy_tasks = current_project_anatomy_data["tasks"]
|
||||
tasks = {}
|
||||
default_type_data = {
|
||||
"short_name": ""
|
||||
}
|
||||
for task_type in task_types:
|
||||
task_type_name = task_type["name"]
|
||||
tasks[task_type_name] = copy.deepcopy(
|
||||
anatomy_tasks.get(task_type_name)
|
||||
or default_type_data
|
||||
)
|
||||
|
||||
project_config = {
|
||||
"tasks": tasks,
|
||||
"apps": proj_apps
|
||||
}
|
||||
for key, value in current_project_anatomy_data.items():
|
||||
if key in project_config or key == "attributes":
|
||||
continue
|
||||
project_config[key] = value
|
||||
|
||||
self.entities_dict[ftrack_id]["final_entity"]["config"] = (
|
||||
project_config
|
||||
)
|
||||
|
||||
if not_set_ids:
|
||||
self.log.debug((
|
||||
|
|
@ -1433,6 +1462,28 @@ class SyncEntitiesFactory:
|
|||
for child_id in entity_dict["children"]:
|
||||
children_queue.append(child_id)
|
||||
|
||||
def set_input_links(self):
|
||||
ftrack_ids = set(self.create_ftrack_ids) | set(self.update_ftrack_ids)
|
||||
|
||||
input_links_by_ftrack_id = self._get_input_links(ftrack_ids)
|
||||
|
||||
for ftrack_id in ftrack_ids:
|
||||
input_links = []
|
||||
final_entity = self.entities_dict[ftrack_id]["final_entity"]
|
||||
final_entity["data"]["inputLinks"] = input_links
|
||||
link_ids = input_links_by_ftrack_id[ftrack_id]
|
||||
if not link_ids:
|
||||
continue
|
||||
|
||||
for ftrack_link_id in link_ids:
|
||||
mongo_id = self.ftrack_avalon_mapper.get(ftrack_link_id)
|
||||
if mongo_id is not None:
|
||||
input_links.append({
|
||||
"id": ObjectId(mongo_id),
|
||||
"linkedBy": "ftrack",
|
||||
"type": "breakdown"
|
||||
})
|
||||
|
||||
def prepare_changes(self):
|
||||
self.log.debug("* Preparing changes for avalon/ftrack")
|
||||
hierarchy_changing_ids = []
|
||||
|
|
@ -1806,9 +1857,28 @@ class SyncEntitiesFactory:
|
|||
for ftrack_id in self.create_ftrack_ids:
|
||||
# CHECK it is possible that entity was already created
|
||||
# because is parent of another entity which was processed first
|
||||
if ftrack_id in self.ftrack_avalon_mapper:
|
||||
continue
|
||||
self.create_avalon_entity(ftrack_id)
|
||||
if ftrack_id not in self.ftrack_avalon_mapper:
|
||||
self.create_avalon_entity(ftrack_id)
|
||||
|
||||
self.set_input_links()
|
||||
|
||||
unarchive_writes = []
|
||||
for item in self.unarchive_list:
|
||||
mongo_id = item["_id"]
|
||||
unarchive_writes.append(ReplaceOne(
|
||||
{"_id": mongo_id},
|
||||
item
|
||||
))
|
||||
av_ent_path_items = item["data"]["parents"]
|
||||
av_ent_path_items.append(item["name"])
|
||||
av_ent_path = "/".join(av_ent_path_items)
|
||||
self.log.debug(
|
||||
"Entity was unarchived <{}>".format(av_ent_path)
|
||||
)
|
||||
self.remove_from_archived(mongo_id)
|
||||
|
||||
if unarchive_writes:
|
||||
self.dbcon.bulk_write(unarchive_writes)
|
||||
|
||||
if len(self.create_list) > 0:
|
||||
self.dbcon.insert_many(self.create_list)
|
||||
|
|
@@ -1899,14 +1969,8 @@ class SyncEntitiesFactory:

        if unarchive is False:
            self.create_list.append(item)
            return
        # If unarchive then replace entity data in database
        self.dbcon.replace_one({"_id": new_id}, item)
        self.remove_from_archived(mongo_id)
        av_ent_path_items = item["data"]["parents"]
        av_ent_path_items.append(item["name"])
        av_ent_path = "/".join(av_ent_path_items)
        self.log.debug("Entity was unarchived <{}>".format(av_ent_path))
        else:
            self.unarchive_list.append(item)

    def check_unarchivation(self, ftrack_id, mongo_id, name):
        archived_by_id = self.avalon_archived_by_id.get(mongo_id)
@@ -570,9 +570,15 @@ class BaseHandler(object):

        if low_entity_type == "assetversion":
            asset = entity["asset"]
            parent = None
            if asset:
                parent = asset["parent"]
                if parent:

            if parent:
                if parent.entity_type.lower() == "project":
                    return parent

                if "project" in parent:
                    return parent["project"]

        project_data = entity["link"][0]
@@ -0,0 +1,23 @@
# -*- coding: utf-8 -*-
"""Collect default Deadline server."""
import pyblish.api
import os


class CollectLocalFtrackCreds(pyblish.api.ContextPlugin):
    """Collect default Royal Render path."""

    order = pyblish.api.CollectorOrder + 0.01
    label = "Collect local ftrack credentials"
    targets = ["rr_control"]

    def process(self, context):
        if os.getenv("FTRACK_API_USER") and os.getenv("FTRACK_API_KEY") and \
                os.getenv("FTRACK_SERVER"):
            return
        ftrack_module = context.data["openPypeModules"]["ftrack"]
        if ftrack_module.enabled:
            creds = ftrack_module.get_credentials()
            os.environ["FTRACK_API_USER"] = creds[0]
            os.environ["FTRACK_API_KEY"] = creds[1]
            os.environ["FTRACK_SERVER"] = ftrack_module.ftrack_url
@@ -27,7 +27,7 @@ class CollectUsername(pyblish.api.ContextPlugin):
    order = pyblish.api.CollectorOrder - 0.488
    label = "Collect ftrack username"
    hosts = ["webpublisher", "photoshop"]
    targets = ["remotepublish", "filespublish"]
    targets = ["remotepublish", "filespublish", "tvpaint_worker"]

    _context = None
@@ -1,208 +1,266 @@
import pyblish.api
import json
import os
import json
import copy
import pyblish.api


class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
    """Collect ftrack component data
    """Collect ftrack component data (not integrate yet).

    Add ftrack component list to instance.


    """

    order = pyblish.api.IntegratorOrder + 0.48
    label = 'Integrate Ftrack Component'
    label = "Integrate Ftrack Component"
    families = ["ftrack"]

    family_mapping = {'camera': 'cam',
                      'look': 'look',
                      'mayaascii': 'scene',
                      'model': 'geo',
                      'rig': 'rig',
                      'setdress': 'setdress',
                      'pointcache': 'cache',
                      'render': 'render',
                      'render2d': 'render',
                      'nukescript': 'comp',
                      'write': 'render',
                      'review': 'mov',
                      'plate': 'img',
                      'audio': 'audio',
                      'workfile': 'scene',
                      'animation': 'cache',
                      'image': 'img',
                      'reference': 'reference'
                      }
    family_mapping = {
        "camera": "cam",
        "look": "look",
        "mayaascii": "scene",
        "model": "geo",
        "rig": "rig",
        "setdress": "setdress",
        "pointcache": "cache",
        "render": "render",
        "render2d": "render",
        "nukescript": "comp",
        "write": "render",
        "review": "mov",
        "plate": "img",
        "audio": "audio",
        "workfile": "scene",
        "animation": "cache",
        "image": "img",
        "reference": "reference"
    }

    def process(self, instance):
        self.ftrack_locations = {}
        self.log.debug('instance {}'.format(instance))
        self.log.debug("instance {}".format(instance))

        if instance.data.get('version'):
            version_number = int(instance.data.get('version'))
        else:
        instance_version = instance.data.get("version")
        if instance_version is None:
            raise ValueError("Instance version not set")

        family = instance.data['family'].lower()
        version_number = int(instance_version)

        family = instance.data["family"]
        family_low = instance.data["family"].lower()

        asset_type = instance.data.get("ftrackFamily")
        if not asset_type and family in self.family_mapping:
            asset_type = self.family_mapping[family]
        if not asset_type and family_low in self.family_mapping:
            asset_type = self.family_mapping[family_low]

        # Ignore this instance if neither "ftrackFamily" or a family mapping is
        # found.
        if not asset_type:
            self.log.info((
                "Family \"{}\" does not match any asset type mapping"
            ).format(family))
            return

        componentList = []
        instance_repres = instance.data.get("representations")
        if not instance_repres:
            self.log.info((
                "Skipping instance. Does not have any representations {}"
            ).format(str(instance)))
            return

        # Prepare FPS
        instance_fps = instance.data.get("fps")
        if instance_fps is None:
            instance_fps = instance.context.data["fps"]

        # Base of component item data
        # - create a copy of this object when want to use it
        base_component_item = {
            "assettype_data": {
                "short": asset_type,
            },
            "asset_data": {
                "name": instance.data["subset"],
            },
            "assetversion_data": {
                "version": version_number,
                "comment": instance.context.data.get("comment") or ""
            },
            "component_overwrite": False,
            # This can be change optionally
            "thumbnail": False,
            # These must be changed for each component
            "component_data": None,
            "component_path": None,
            "component_location": None
        }

        ft_session = instance.context.data["ftrackSession"]

        for comp in instance.data['representations']:
            self.log.debug('component {}'.format(comp))
        # Filter types of representations
        review_representations = []
        thumbnail_representations = []
        other_representations = []
        for repre in instance_repres:
            self.log.debug("Representation {}".format(repre))
            repre_tags = repre.get("tags") or []
            if repre.get("thumbnail") or "thumbnail" in repre_tags:
                thumbnail_representations.append(repre)

            if comp.get('thumbnail') or ("thumbnail" in comp.get('tags', [])):
                location = self.get_ftrack_location(
                    'ftrack.server', ft_session
                )
                component_data = {
                    "name": "thumbnail"  # Default component name is "main".
                }
                comp['thumbnail'] = True
                comp_files = comp["files"]
            elif "ftrackreview" in repre_tags:
                review_representations.append(repre)

            else:
                other_representations.append(repre)

        # Prepare ftrack locations
        unmanaged_location = ft_session.query(
            "Location where name is \"ftrack.unmanaged\""
        ).one()
        ftrack_server_location = ft_session.query(
            "Location where name is \"ftrack.server\""
        ).one()

        # Components data
        component_list = []
        # Components that will be duplicated to unmanaged location
        src_components_to_add = []

        # Create thumbnail components
        # TODO what if there is multiple thumbnails?
        first_thumbnail_component = None
        for repre in thumbnail_representations:
            published_path = repre.get("published_path")
            if not published_path:
                comp_files = repre["files"]
                if isinstance(comp_files, (tuple, list, set)):
                    filename = comp_files[0]
                else:
                    filename = comp_files

                comp['published_path'] = os.path.join(
                    comp['stagingDir'], filename
                )

            elif comp.get('ftrackreview') or ("ftrackreview" in comp.get('tags', [])):
                '''
                Ftrack bug requirement:
                - Start frame must be 0
                - End frame must be {duration}
                EXAMPLE: When mov has 55 frames:
                - Start frame should be 0
                - End frame should be 55 (do not ask why please!)
                '''
                start_frame = 0
                end_frame = 1
                if 'frameEndFtrack' in comp and 'frameStartFtrack' in comp:
                    end_frame += (
                        comp['frameEndFtrack'] - comp['frameStartFtrack']
                    )
                else:
                    end_frame += (
                        instance.data["frameEnd"] - instance.data["frameStart"]
                    )

                fps = comp.get('fps')
                if fps is None:
                    fps = instance.data.get(
                        "fps", instance.context.data['fps']
                    )

                comp['fps'] = fps

                location = self.get_ftrack_location(
                    'ftrack.server', ft_session
                published_path = os.path.join(
                    repre["stagingDir"], filename
                )
                component_data = {
                    # Default component name is "main".
                    "name": "ftrackreview-mp4",
                    "metadata": {'ftr_meta': json.dumps({
                        'frameIn': int(start_frame),
                        'frameOut': int(end_frame),
                        'frameRate': float(comp['fps'])})}
                }
                comp['thumbnail'] = False
            else:
                component_data = {
                    "name": comp['name']
                }
                location = self.get_ftrack_location(
                    'ftrack.unmanaged', ft_session
                )
                comp['thumbnail'] = False
            if not os.path.exists(published_path):
                continue
            repre["published_path"] = published_path

            self.log.debug('location {}'.format(location))

            component_item = {
                "assettype_data": {
                    "short": asset_type,
                },
                "asset_data": {
                    "name": instance.data["subset"],
                },
                "assetversion_data": {
                    "version": version_number,
                    "comment": instance.context.data.get("comment", "")
                },
                "component_data": component_data,
                "component_path": comp['published_path'],
                'component_location': location,
                "component_overwrite": False,
                "thumbnail": comp['thumbnail']
            # Create copy of base comp item and append it
            thumbnail_item = copy.deepcopy(base_component_item)
            thumbnail_item["component_path"] = repre["published_path"]
            thumbnail_item["component_data"] = {
                "name": "thumbnail"
            }
            thumbnail_item["thumbnail"] = True
            # Create copy of item before setting location
            src_components_to_add.append(copy.deepcopy(thumbnail_item))
            # Create copy of first thumbnail
            if first_thumbnail_component is None:
                first_thumbnail_component = copy.deepcopy(thumbnail_item)
            # Set location
            thumbnail_item["component_location"] = ftrack_server_location
            # Add item to component list
            component_list.append(thumbnail_item)

        # Add custom attributes for AssetVersion
        assetversion_cust_attrs = {}
        intent_val = instance.context.data.get("intent")
        if intent_val and isinstance(intent_val, dict):
            intent_val = intent_val.get("value")
        # Create review components
        # Change asset name of each new component for review
        is_first_review_repre = True
        not_first_components = []
        for repre in review_representations:
            frame_start = repre.get("frameStartFtrack")
            frame_end = repre.get("frameEndFtrack")
            if frame_start is None or frame_end is None:
                frame_start = instance.data["frameStart"]
                frame_end = instance.data["frameEnd"]

            if intent_val:
                assetversion_cust_attrs["intent"] = intent_val
            # Frame end of uploaded video file should be duration in frames
            # - frame start is always 0
            # - frame end is duration in frames
            duration = frame_end - frame_start + 1

            component_item["assetversion_data"]["custom_attributes"] = (
                assetversion_cust_attrs
            )
            fps = repre.get("fps")
            if fps is None:
                fps = instance_fps

            componentList.append(component_item)
            # Create copy with ftrack.unmanaged location if thumb or prev
            if comp.get('thumbnail') or comp.get('preview') \
                    or ("preview" in comp.get('tags', [])) \
                    or ("review" in comp.get('tags', [])) \
                    or ("thumbnail" in comp.get('tags', [])):
                unmanaged_loc = self.get_ftrack_location(
                    'ftrack.unmanaged', ft_session
                )

                component_data_src = component_data.copy()
                name = component_data['name'] + '_src'
                component_data_src['name'] = name

                component_item_src = {
                    "assettype_data": {
                        "short": asset_type,
                    },
                    "asset_data": {
                        "name": instance.data["subset"],
                    },
                    "assetversion_data": {
                        "version": version_number,
                    },
                    "component_data": component_data_src,
                    "component_path": comp['published_path'],
                    'component_location': unmanaged_loc,
                    "component_overwrite": False,
                    "thumbnail": False
            # Create copy of base comp item and append it
            review_item = copy.deepcopy(base_component_item)
            # Change location
            review_item["component_path"] = repre["published_path"]
            # Change component data
            review_item["component_data"] = {
                # Default component name is "main".
                "name": "ftrackreview-mp4",
                "metadata": {
                    "ftr_meta": json.dumps({
                        "frameIn": 0,
                        "frameOut": int(duration),
                        "frameRate": float(fps)
                    })
                }
            }
            # Create copy of item before setting location or changing asset
            src_components_to_add.append(copy.deepcopy(review_item))
            if is_first_review_repre:
                is_first_review_repre = False
            else:
                # Add representation name to asset name of "not first" review
                asset_name = review_item["asset_data"]["name"]
                review_item["asset_data"]["name"] = "_".join(
                    (asset_name, repre["name"])
                )
                not_first_components.append(review_item)

            componentList.append(component_item_src)
            # Set location
            review_item["component_location"] = ftrack_server_location
            # Add item to component list
            component_list.append(review_item)

        self.log.debug('componentsList: {}'.format(str(componentList)))
        instance.data["ftrackComponentsList"] = componentList
        # Duplicate thumbnail component for all not first reviews
        if first_thumbnail_component is not None:
            for component_item in not_first_components:
                asset_name = component_item["asset_data"]["name"]
                new_thumbnail_component = copy.deepcopy(
                    first_thumbnail_component
                )
                new_thumbnail_component["asset_data"]["name"] = asset_name
                new_thumbnail_component["component_location"] = (
                    ftrack_server_location
                )
                component_list.append(new_thumbnail_component)

    def get_ftrack_location(self, name, session):
        if name in self.ftrack_locations:
            return self.ftrack_locations[name]
        # Add source components for review and thubmnail components
        for copy_src_item in src_components_to_add:
            # Make sure thumbnail is disabled
            copy_src_item["thumbnail"] = False
            # Set location
            copy_src_item["component_location"] = unmanaged_location
            # Modify name of component to have suffix "_src"
            component_data = copy_src_item["component_data"]
            component_name = component_data["name"]
            component_data["name"] = component_name + "_src"
            component_list.append(copy_src_item)

        location = session.query(
            'Location where name is "{}"'.format(name)
        ).one()
        self.ftrack_locations[name] = location
        return location
        # Add others representations as component
        for repre in other_representations:
            published_path = repre.get("published_path")
            if not published_path:
                continue
            # Create copy of base comp item and append it
            other_item = copy.deepcopy(base_component_item)
            other_item["component_data"] = {
                "name": repre["name"]
            }
            other_item["component_location"] = unmanaged_location
            other_item["component_path"] = published_path
            component_list.append(other_item)

        def json_obj_parser(obj):
            return str(obj)

        self.log.debug("Components list: {}".format(
            json.dumps(
                component_list,
                sort_keys=True,
                indent=4,
                default=json_obj_parser
            )
        ))
        instance.data["ftrackComponentsList"] = component_list
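Editor's aside: the plugin above sorts representations by the keys `tags`, `thumbnail`, `published_path`, `stagingDir`, `files` and `fps`. A minimal illustrative sketch of what `instance.data["representations"]` might contain for one review and one thumbnail; all values are made up.

# Hypothetical representation entries consumed by IntegrateFtrackInstance
representations = [
    {
        "name": "h264",
        "files": "sh010_review.mp4",
        "stagingDir": "/tmp/publish/sh010",
        "published_path": "/projects/demo/sh010/review/sh010_review.mp4",
        "tags": ["ftrackreview"],          # routed to review_representations
        "fps": 25,
        "frameStartFtrack": 1001,
        "frameEndFtrack": 1055
    },
    {
        "name": "thumbnail",
        "files": "sh010_thumb.jpg",
        "stagingDir": "/tmp/publish/sh010",
        "thumbnail": True,                 # routed to thumbnail_representations
        "tags": []
    }
]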
@@ -1,30 +0,0 @@
import pyblish.api
import os


class IntegrateCleanComponentData(pyblish.api.InstancePlugin):
    """
    Cleaning up thumbnail an mov files after they have been integrated
    """

    order = pyblish.api.IntegratorOrder + 0.5
    label = 'Clean component data'
    families = ["ftrack"]
    optional = True
    active = False

    def process(self, instance):

        for comp in instance.data['representations']:
            self.log.debug('component {}'.format(comp))

            if "%" in comp['published_path'] or "#" in comp['published_path']:
                continue

            if comp.get('thumbnail') or ("thumbnail" in comp.get('tags', [])):
                os.remove(comp['published_path'])
                self.log.info('Thumbnail image was erased')

            elif comp.get('preview') or ("preview" in comp.get('tags', [])):
                os.remove(comp['published_path'])
                self.log.info('Preview mov file was erased')
6
openpype/modules/default_modules/job_queue/__init__.py
Normal file
@@ -0,0 +1,6 @@
from .module import JobQueueModule


__all__ = (
    "JobQueueModule",
)
@@ -0,0 +1,8 @@
from .server import WebServerManager
from .utils import main


__all__ = (
    "WebServerManager",
    "main"
)
@@ -0,0 +1,62 @@
import json

from aiohttp.web_response import Response


class JobQueueResource:
    def __init__(self, job_queue, server_manager):
        self.server_manager = server_manager

        self._prefix = "/api"

        self._job_queue = job_queue

        self.endpoint_defs = (
            ("POST", "/jobs", self.post_job),
            ("GET", "/jobs", self.get_jobs),
            ("GET", "/jobs/{job_id}", self.get_job)
        )

        self.register()

    def register(self):
        for methods, url, callback in self.endpoint_defs:
            final_url = self._prefix + url
            self.server_manager.add_route(
                methods, final_url, callback
            )

    async def get_jobs(self, request):
        jobs_data = []
        for job in self._job_queue.get_jobs():
            jobs_data.append(job.status())
        return Response(status=200, body=self.encode(jobs_data))

    async def post_job(self, request):
        data = await request.json()
        host_name = data.get("host_name")
        if not host_name:
            return Response(
                status=400, message="Key \"host_name\" not filled."
            )

        job = self._job_queue.create_job(host_name, data)
        return Response(status=201, text=job.id)

    async def get_job(self, request):
        job_id = request.match_info["job_id"]
        content = self._job_queue.get_job_status(job_id)
        if content is None:
            content = {}
        return Response(
            status=200,
            body=self.encode(content),
            content_type="application/json"
        )

    @classmethod
    def encode(cls, data):
        return json.dumps(
            data,
            indent=4
        ).encode("utf-8")
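Editor's aside: once the job server from this module is running, the routes registered above can be exercised with any HTTP client. A minimal sketch using `requests`; the host, port and payload keys other than "host_name" are assumptions.

import requests

base_url = "http://localhost:8079/api"   # default host/port taken from the utils.main defaults below

# Create a job for a given host; extra keys travel along as the job's data.
resp = requests.post(
    base_url + "/jobs",
    json={"host_name": "tvpaint", "workfile": "/path/to/scene.tvpp"}
)
job_id = resp.text                        # post_job() returns the new job id as plain text

# Poll the job until a worker marks it done.
status = requests.get("{}/jobs/{}".format(base_url, job_id)).json()
print(status["state"], status.get("result"))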
240
openpype/modules/default_modules/job_queue/job_server/jobs.py
Normal file
@@ -0,0 +1,240 @@
import datetime
import collections
from uuid import uuid4


class Job:
    """Job related to specific host name.

    Data must contain everything needed to finish the job.
    """
    # Remove done jobs each n days to clear memory
    keep_in_memory_days = 3

    def __init__(self, host_name, data, job_id=None, created_time=None):
        if job_id is None:
            job_id = str(uuid4())
        self._id = job_id
        if created_time is None:
            created_time = datetime.datetime.now()
        self._created_time = created_time
        self._started_time = None
        self._done_time = None
        self.host_name = host_name
        self.data = data
        self._result_data = None

        self._started = False
        self._done = False
        self._errored = False
        self._message = None
        self._deleted = False

        self._worker = None

    def keep_in_memory(self):
        if self._done_time is None:
            return True

        now = datetime.datetime.now()
        delta = now - self._done_time
        return delta.days < self.keep_in_memory_days

    @property
    def id(self):
        return self._id

    @property
    def done(self):
        return self._done

    def reset(self):
        self._started = False
        self._started_time = None
        self._done = False
        self._done_time = None
        self._errored = False
        self._message = None

        self._worker = None

    @property
    def started(self):
        return self._started

    @property
    def deleted(self):
        return self._deleted

    def set_deleted(self):
        self._deleted = True
        self.set_worker(None)

    def set_worker(self, worker):
        if worker is self._worker:
            return

        if self._worker is not None:
            self._worker.set_current_job(None)

        self._worker = worker
        if worker is not None:
            worker.set_current_job(self)

    def set_started(self):
        self._started_time = datetime.datetime.now()
        self._started = True

    def set_done(self, success=True, message=None, data=None):
        self._done = True
        self._done_time = datetime.datetime.now()
        self._errored = not success
        self._message = message
        self._result_data = data
        if self._worker is not None:
            self._worker.set_current_job(None)

    def status(self):
        worker_id = None
        if self._worker is not None:
            worker_id = self._worker.id
        output = {
            "id": self.id,
            "worker_id": worker_id,
            "done": self._done
        }
        output["message"] = self._message or None

        state = "waiting"
        if self._deleted:
            state = "deleted"
        elif self._errored:
            state = "error"
        elif self._done:
            state = "done"
        elif self._started:
            state = "started"

        output["result"] = self._result_data

        output["state"] = state

        return output

class JobQueue:
    """Queue holds jobs that should be done and workers that can do them.

    Also asign jobs to a worker.
    """
    old_jobs_check_minutes_interval = 30

    def __init__(self):
        self._last_old_jobs_check = datetime.datetime.now()
        self._jobs_by_id = {}
        self._job_queue_by_host_name = collections.defaultdict(
            collections.deque
        )
        self._workers_by_id = {}
        self._workers_by_host_name = collections.defaultdict(list)

    def workers(self):
        """All currently registered workers."""
        return self._workers_by_id.values()

    def add_worker(self, worker):
        host_name = worker.host_name
        print("Added new worker for \"{}\"".format(host_name))
        self._workers_by_id[worker.id] = worker
        self._workers_by_host_name[host_name].append(worker)

    def get_worker(self, worker_id):
        return self._workers_by_id.get(worker_id)

    def remove_worker(self, worker):
        # Look if worker had assigned job to do
        job = worker.current_job
        if job is not None and not job.done:
            # Reset job
            job.set_worker(None)
            job.reset()
            # Add job back to queue
            self._job_queue_by_host_name[job.host_name].appendleft(job)

        # Remove worker from registered workers
        self._workers_by_id.pop(worker.id, None)
        host_name = worker.host_name
        if worker in self._workers_by_host_name[host_name]:
            self._workers_by_host_name[host_name].remove(worker)

        print("Removed worker for \"{}\"".format(host_name))

    def assign_jobs(self):
        """Try to assign job for each idle worker.

        Error all jobs without needed worker.
        """
        available_host_names = set()
        for worker in self._workers_by_id.values():
            host_name = worker.host_name
            available_host_names.add(host_name)
            if worker.is_idle():
                jobs = self._job_queue_by_host_name[host_name]
                while jobs:
                    job = jobs.popleft()
                    if not job.deleted:
                        worker.set_current_job(job)
                        break

        for host_name in tuple(self._job_queue_by_host_name.keys()):
            if host_name in available_host_names:
                continue

            jobs_deque = self._job_queue_by_host_name[host_name]
            message = ("Not available workers for \"{}\"").format(host_name)
            while jobs_deque:
                job = jobs_deque.popleft()
                if not job.deleted:
                    job.set_done(False, message)
        self._remove_old_jobs()

    def get_jobs(self):
        return self._jobs_by_id.values()

    def get_job(self, job_id):
        """Job by it's id."""
        return self._jobs_by_id.get(job_id)

    def create_job(self, host_name, job_data):
        """Create new job from passed data and add it to queue."""
        job = Job(host_name, job_data)
        self._jobs_by_id[job.id] = job
        self._job_queue_by_host_name[host_name].append(job)
        return job

    def _remove_old_jobs(self):
        """Once in specific time look if should remove old finished jobs."""
        delta = datetime.datetime.now() - self._last_old_jobs_check
        if delta.seconds < self.old_jobs_check_minutes_interval:
            return

        for job_id in tuple(self._jobs_by_id.keys()):
            job = self._jobs_by_id[job_id]
            if not job.keep_in_memory():
                self._jobs_by_id.pop(job_id)

    def remove_job(self, job_id):
        """Delete job and eventually stop it."""
        job = self._jobs_by_id.get(job_id)
        if job is None:
            return

        job.set_deleted()
        self._jobs_by_id.pop(job.id)

    def get_job_status(self, job_id):
        """Job's status based on id."""
        job = self._jobs_by_id.get(job_id)
        if job is None:
            return {}
        return job.status()
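Editor's aside: to make the data flow concrete, a minimal sketch of driving `JobQueue` directly; the host name, payload and import path are placeholders, and in the real module workers arrive over the websocket RPC route rather than being created by hand.

from jobs import JobQueue   # assumed local import path for the sketch

queue = JobQueue()

# A client enqueues work for a specific host.
job = queue.create_job("tvpaint", {"workfile": "/path/to/scene.tvpp"})
print(queue.get_job_status(job.id)["state"])   # "waiting" - no worker yet

# Without a registered worker for that host, assign_jobs() errors the job out.
queue.assign_jobs()
print(queue.get_job_status(job.id)["state"])   # "error" ("Not available workers ...")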
154
openpype/modules/default_modules/job_queue/job_server/server.py
Normal file
@@ -0,0 +1,154 @@
import threading
import asyncio
import logging

from aiohttp import web

from .jobs import JobQueue
from .job_queue_route import JobQueueResource
from .workers_rpc_route import WorkerRpc

log = logging.getLogger(__name__)


class WebServerManager:
    """Manger that care about web server thread."""
    def __init__(self, port, host, loop=None):
        self.port = port
        self.host = host
        self.app = web.Application()
        if loop is None:
            loop = asyncio.new_event_loop()

        # add route with multiple methods for single "external app"
        self.webserver_thread = WebServerThread(self, loop)

    @property
    def url(self):
        return "http://{}:{}".format(self.host, self.port)

    def add_route(self, *args, **kwargs):
        self.app.router.add_route(*args, **kwargs)

    def add_static(self, *args, **kwargs):
        self.app.router.add_static(*args, **kwargs)

    def start_server(self):
        if self.webserver_thread and not self.webserver_thread.is_alive():
            self.webserver_thread.start()

    def stop_server(self):
        if not self.is_running:
            return

        try:
            log.debug("Stopping Web server")
            self.webserver_thread.stop()

        except Exception as exc:
            print("Errored", str(exc))
            log.warning(
                "Error has happened during Killing Web server",
                exc_info=True
            )

    @property
    def is_running(self):
        if self.webserver_thread is not None:
            return self.webserver_thread.is_running
        return False


class WebServerThread(threading.Thread):
    """ Listener for requests in thread."""
    def __init__(self, manager, loop):
        super(WebServerThread, self).__init__()

        self._is_running = False
        self._stopped = False
        self.manager = manager
        self.loop = loop
        self.runner = None
        self.site = None

        job_queue = JobQueue()
        self.job_queue_route = JobQueueResource(job_queue, manager)
        self.workers_route = WorkerRpc(job_queue, manager, loop=loop)

    @property
    def port(self):
        return self.manager.port

    @property
    def host(self):
        return self.manager.host

    @property
    def stopped(self):
        return self._stopped

    @property
    def is_running(self):
        return self._is_running

    def run(self):
        self._is_running = True

        try:
            log.info("Starting WebServer server")
            asyncio.set_event_loop(self.loop)
            self.loop.run_until_complete(self.start_server())

            asyncio.ensure_future(self.check_shutdown(), loop=self.loop)
            self.loop.run_forever()

        except Exception:
            log.warning(
                "Web Server service has failed", exc_info=True
            )
        finally:
            self.loop.close()

        self._is_running = False
        log.info("Web server stopped")

    async def start_server(self):
        """ Starts runner and TCPsite """
        self.runner = web.AppRunner(self.manager.app)
        await self.runner.setup()
        self.site = web.TCPSite(self.runner, self.host, self.port)
        await self.site.start()

    def stop(self):
        """Sets _stopped flag to True, 'check_shutdown' shuts server down"""
        self._stopped = True

    async def check_shutdown(self):
        """ Future that is running and checks if server should be running
        periodically.
        """
        while not self._stopped:
            await asyncio.sleep(0.5)

        print("Starting shutdown")
        if self.workers_route:
            await self.workers_route.stop()

        print("Stopping site")
        await self.site.stop()
        print("Site stopped")
        await self.runner.cleanup()

        print("Runner stopped")
        tasks = [
            task
            for task in asyncio.all_tasks()
            if task is not asyncio.current_task()
        ]
        list(map(lambda task: task.cancel(), tasks))  # cancel all the tasks
        results = await asyncio.gather(*tasks, return_exceptions=True)
        log.debug(f'Finished awaiting cancelled tasks, results: {results}...')
        await self.loop.shutdown_asyncgens()
        # to really make sure everything else has time to stop
        await asyncio.sleep(0.07)
        self.loop.stop()
@@ -0,0 +1,51 @@
import sys
import signal
import time
import socket

from .server import WebServerManager


class SharedObjects:
    stopped = False

    @classmethod
    def stop(cls):
        cls.stopped = True


def main(port=None, host=None):
    def signal_handler(sig, frame):
        print("Signal to kill process received. Termination starts.")
        SharedObjects.stop()

    signal.signal(signal.SIGINT, signal_handler)
    signal.signal(signal.SIGTERM, signal_handler)

    port = int(port or 8079)
    host = str(host or "localhost")

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as con:
        result_of_check = con.connect_ex((host, port))

    if result_of_check == 0:
        print((
            "Server {}:{} is already running or address is occupied."
        ).format(host, port))
        return 1

    print("Running server {}:{}".format(host, port))
    manager = WebServerManager(port, host)
    manager.start_server()

    stopped = False
    while manager.is_running:
        if not stopped and SharedObjects.stopped:
            stopped = True
            manager.stop_server()
        time.sleep(0.1)
    return 0


if __name__ == "__main__":
    sys.exit(main())
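Editor's aside: a small sketch of starting the job server from Python code rather than as a script. The import path follows the package layout shown in this diff but is an assumption.

# Hypothetical launcher; main() blocks until the server thread stops
# (SIGINT/SIGTERM trigger a clean shutdown through SharedObjects.stop()).
from openpype.modules.default_modules.job_queue.job_server import main

exit_code = main(port=8079, host="localhost")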
122
openpype/modules/default_modules/job_queue/job_server/workers.py
Normal file
@@ -0,0 +1,122 @@
import asyncio
from uuid import uuid4
from aiohttp import WSCloseCode
from aiohttp_json_rpc.protocol import encode_request


class WorkerState:
    IDLE = object()
    JOB_ASSIGNED = object()
    JOB_SENT = object()


class Worker:
    """Worker that can handle jobs of specific host."""
    def __init__(self, host_name, http_request):
        self._id = None
        self.host_name = host_name
        self._http_request = http_request
        self._state = WorkerState.IDLE
        self._job = None

        # Give ability to send requests to worker
        http_request.request_id = str(uuid4())
        http_request.pending_requests = {}

    async def send_job(self):
        if self._job is not None:
            data = {
                "job_id": self._job.id,
                "worker_id": self.id,
                "data": self._job.data
            }
            return await self.call("start_job", data)
        return False

    async def call(self, method, params=None, timeout=None):
        """Call method on worker's side."""
        request_id = self._http_request.request_id
        self._http_request.request_id = str(uuid4())
        pending_requests = self._http_request.pending_requests
        pending_requests[request_id] = asyncio.Future()

        request = encode_request(method, id=request_id, params=params)

        await self._http_request.ws.send_str(request)

        if timeout:
            await asyncio.wait_for(
                pending_requests[request_id],
                timeout=timeout
            )

        else:
            await pending_requests[request_id]

        result = pending_requests[request_id].result()
        del pending_requests[request_id]

        return result

    async def close(self):
        return await self.ws.close(
            code=WSCloseCode.GOING_AWAY,
            message="Server shutdown"
        )

    @property
    def id(self):
        if self._id is None:
            self._id = str(uuid4())
        return self._id

    @property
    def state(self):
        return self._state

    @property
    def current_job(self):
        return self._job

    @property
    def http_request(self):
        return self._http_request

    @property
    def ws(self):
        return self.http_request.ws

    def connection_is_alive(self):
        if self.ws.closed or self.ws._writer.transport.is_closing():
            return False
        return True

    def is_idle(self):
        return self._state is WorkerState.IDLE

    def job_assigned(self):
        return (
            self._state is WorkerState.JOB_ASSIGNED
            or self._state is WorkerState.JOB_SENT
        )

    def is_working(self):
        return self._state is WorkerState.JOB_SENT

    def set_current_job(self, job):
        if job is self._job:
            return

        self._job = job
        if job is None:
            self._set_idle()
        else:
            self._state = WorkerState.JOB_ASSIGNED
            job.set_worker(self)

    def _set_idle(self):
        self._job = None
        self._state = WorkerState.IDLE

    def set_working(self):
        self._state = WorkerState.JOB_SENT
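Editor's aside: tying the pieces together, a minimal sketch of how a registered worker and the queue interact on the server side. The worker normally wraps a live aiohttp websocket request, which is stubbed out here, and the variable names are placeholders.

# Hypothetical walk-through of the queue/worker state machine using the classes above.
queue = JobQueue()
worker = Worker("tvpaint", fake_http_request)   # fake_http_request stands in for the aiohttp request

queue.add_worker(worker)
job = queue.create_job("tvpaint", {"workfile": "/path/to/scene.tvpp"})

queue.assign_jobs()            # idle worker picks the job -> WorkerState.JOB_ASSIGNED
# ... the RPC route would now await worker.send_job() and later report back ...
job.set_started()
job.set_done(success=True, data={"output": "/path/to/render"})
print(job.status()["state"])   # "done"; the worker is idle again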
Some files were not shown because too many files have changed in this diff.