Merge branch 'develop' into feature/OP-3883_Change-extractor-usage-in-unreal

This commit is contained in:
Jakub Trllo 2022-09-26 14:20:28 +02:00
commit 700f34febf
275 changed files with 8768 additions and 5415 deletions

View file

@ -6,6 +6,8 @@ labels: bug
assignees: ''
---
**Running version**
[ex. 3.14.1-nightly.2]
**Describe the bug**
A clear and concise description of what the bug is.

.gitignore vendored
View file

@ -107,3 +107,6 @@ website/.docusaurus
mypy.ini
tools/run_eventserver.*
# Developer tools
tools/dev_*

View file

@ -1,145 +1,133 @@
# Changelog
## [3.14.2-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.14.3-nightly.4](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.1...HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.2...HEAD)
**🆕 New features**
**🚀 Enhancements**
- Houdini: Publishing workfiles [\#3697](https://github.com/pypeclub/OpenPype/pull/3697)
- Maya: better logging in Maketx [\#3886](https://github.com/pypeclub/OpenPype/pull/3886)
- Photoshop: review can be turned off [\#3885](https://github.com/pypeclub/OpenPype/pull/3885)
- TrayPublisher: added persisting of last selected project [\#3871](https://github.com/pypeclub/OpenPype/pull/3871)
- TrayPublisher: added text filter on project name to Tray Publisher [\#3867](https://github.com/pypeclub/OpenPype/pull/3867)
- Github issues adding `running version` section [\#3864](https://github.com/pypeclub/OpenPype/pull/3864)
- Publisher: Increase size of main window [\#3862](https://github.com/pypeclub/OpenPype/pull/3862)
- Flame: make migratable projects after creation [\#3860](https://github.com/pypeclub/OpenPype/pull/3860)
- Photoshop: synchronize image version with workfile [\#3854](https://github.com/pypeclub/OpenPype/pull/3854)
- General: Transcoding handle float2 attr type [\#3849](https://github.com/pypeclub/OpenPype/pull/3849)
- General: Simple script for getting license information about used packages [\#3843](https://github.com/pypeclub/OpenPype/pull/3843)
- Houdini: Increment current file on workfile publish [\#3840](https://github.com/pypeclub/OpenPype/pull/3840)
- Publisher: Add new publisher to host tools [\#3833](https://github.com/pypeclub/OpenPype/pull/3833)
- General: lock task workfiles when they are working on [\#3810](https://github.com/pypeclub/OpenPype/pull/3810)
- Maya: Workspace mel loaded from settings [\#3790](https://github.com/pypeclub/OpenPype/pull/3790)
**🐛 Bug fixes**
- Ftrack status fix typo prgoress -\> progress [\#3761](https://github.com/pypeclub/OpenPype/pull/3761)
- Flame: loading multilayer exr to batch/reel is working [\#3901](https://github.com/pypeclub/OpenPype/pull/3901)
- Hiero: Fix inventory check on launch [\#3895](https://github.com/pypeclub/OpenPype/pull/3895)
- WebPublisher: Fix import after refactor [\#3891](https://github.com/pypeclub/OpenPype/pull/3891)
- TVPaint: Fix renaming of rendered files [\#3882](https://github.com/pypeclub/OpenPype/pull/3882)
- Publisher: Nice checkbox visible in Python 2 [\#3877](https://github.com/pypeclub/OpenPype/pull/3877)
- Settings: Add missing default settings [\#3870](https://github.com/pypeclub/OpenPype/pull/3870)
- General: Copy of workfile does not use 'copy' function but 'copyfile' [\#3869](https://github.com/pypeclub/OpenPype/pull/3869)
- Tray Publisher: skip plugin if otioTimeline is missing [\#3856](https://github.com/pypeclub/OpenPype/pull/3856)
- Flame: retimed attributes are integrated with settings [\#3855](https://github.com/pypeclub/OpenPype/pull/3855)
- Maya: Extract Playblast fix textures + labelize viewport show settings [\#3852](https://github.com/pypeclub/OpenPype/pull/3852)
- Ftrack: Url validation does not require ftrackapp [\#3834](https://github.com/pypeclub/OpenPype/pull/3834)
- Maya+Ftrack: Change typo in family name `mayaascii` -\> `mayaAscii` [\#3820](https://github.com/pypeclub/OpenPype/pull/3820)
- Maya Deadline: Fix Tile Rendering by forcing integer pixel values [\#3758](https://github.com/pypeclub/OpenPype/pull/3758)
**🔀 Refactored code**
- Houdini: Use new Extractor location [\#3894](https://github.com/pypeclub/OpenPype/pull/3894)
- Harmony: Use new Extractor location [\#3893](https://github.com/pypeclub/OpenPype/pull/3893)
- Hiero: Use new Extractor location [\#3851](https://github.com/pypeclub/OpenPype/pull/3851)
- Maya: Remove old legacy \(ftrack\) plug-ins that are of no use anymore [\#3819](https://github.com/pypeclub/OpenPype/pull/3819)
- Nuke: Use new Extractor location [\#3799](https://github.com/pypeclub/OpenPype/pull/3799)
- Maya: Use new Extractor location [\#3775](https://github.com/pypeclub/OpenPype/pull/3775)
**Merged pull requests:**
- Maya: RenderSettings set default image format for V-Ray+Redshift to exr [\#3879](https://github.com/pypeclub/OpenPype/pull/3879)
- Remove lockfile during publish [\#3874](https://github.com/pypeclub/OpenPype/pull/3874)
## [3.14.2](https://github.com/pypeclub/OpenPype/tree/3.14.2) (2022-09-12)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.2-nightly.5...3.14.2)
**🆕 New features**
- Nuke: Build workfile by template [\#3763](https://github.com/pypeclub/OpenPype/pull/3763)
**🚀 Enhancements**
- Flame: Adding Creator's retimed shot and handles switch [\#3826](https://github.com/pypeclub/OpenPype/pull/3826)
- Flame: OpenPype submenu to batch and media manager [\#3825](https://github.com/pypeclub/OpenPype/pull/3825)
- General: Better pixmap scaling [\#3809](https://github.com/pypeclub/OpenPype/pull/3809)
- Photoshop: attempt to speed up ExtractImage [\#3793](https://github.com/pypeclub/OpenPype/pull/3793)
- SyncServer: Added cli commands for sync server [\#3765](https://github.com/pypeclub/OpenPype/pull/3765)
**🐛 Bug fixes**
- General: Fix Pattern access in client code [\#3828](https://github.com/pypeclub/OpenPype/pull/3828)
- Launcher: Skip opening last work file works for groups [\#3822](https://github.com/pypeclub/OpenPype/pull/3822)
- Maya: Publishing data key change [\#3811](https://github.com/pypeclub/OpenPype/pull/3811)
- Igniter: Fix status handling when version is already installed [\#3804](https://github.com/pypeclub/OpenPype/pull/3804)
- Resolve: Addon import is Python 2 compatible [\#3798](https://github.com/pypeclub/OpenPype/pull/3798)
- Hiero: retimed clip publishing is working [\#3792](https://github.com/pypeclub/OpenPype/pull/3792)
- nuke: validate write node is not failing due wrong type [\#3780](https://github.com/pypeclub/OpenPype/pull/3780)
- Fix - changed format of version string in pyproject.toml [\#3777](https://github.com/pypeclub/OpenPype/pull/3777)
- Ftrack status fix typo prgoress -\> progress [\#3761](https://github.com/pypeclub/OpenPype/pull/3761)
- Fix version resolution [\#3757](https://github.com/pypeclub/OpenPype/pull/3757)
**🔀 Refactored code**
- Photoshop: Use new Extractor location [\#3789](https://github.com/pypeclub/OpenPype/pull/3789)
- Blender: Use new Extractor location [\#3787](https://github.com/pypeclub/OpenPype/pull/3787)
- AfterEffects: Use new Extractor location [\#3784](https://github.com/pypeclub/OpenPype/pull/3784)
- General: Remove unused teshost [\#3773](https://github.com/pypeclub/OpenPype/pull/3773)
- General: Copied 'Extractor' plugin to publish pipeline [\#3771](https://github.com/pypeclub/OpenPype/pull/3771)
- General: Move queries of asset and representation links [\#3770](https://github.com/pypeclub/OpenPype/pull/3770)
- General: Move create project folders to pipeline [\#3768](https://github.com/pypeclub/OpenPype/pull/3768)
- General: Create project function moved to client code [\#3766](https://github.com/pypeclub/OpenPype/pull/3766)
- Maya: Refactor submit deadline to use AbstractSubmitDeadline [\#3759](https://github.com/pypeclub/OpenPype/pull/3759)
- General: Change publish template settings location [\#3755](https://github.com/pypeclub/OpenPype/pull/3755)
- General: Move hostdirname functionality into host [\#3749](https://github.com/pypeclub/OpenPype/pull/3749)
- Webpublisher: Webpublisher is used as addon [\#3740](https://github.com/pypeclub/OpenPype/pull/3740)
- Houdini: Define houdini as addon [\#3735](https://github.com/pypeclub/OpenPype/pull/3735)
- Flame: Defined flame as addon [\#3732](https://github.com/pypeclub/OpenPype/pull/3732)
- General: Move publish utils to pipeline [\#3745](https://github.com/pypeclub/OpenPype/pull/3745)
**Merged pull requests:**
- Standalone Publisher: Ignore empty labels, then still use name like other asset models [\#3779](https://github.com/pypeclub/OpenPype/pull/3779)
- Kitsu - sync\_all\_project - add list ignore\_projects [\#3776](https://github.com/pypeclub/OpenPype/pull/3776)
## [3.14.1](https://github.com/pypeclub/OpenPype/tree/3.14.1) (2022-08-30)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.1-nightly.4...3.14.1)
### 📖 Documentation
- Documentation: Few updates [\#3698](https://github.com/pypeclub/OpenPype/pull/3698)
- Documentation: Settings development [\#3660](https://github.com/pypeclub/OpenPype/pull/3660)
**🆕 New features**
- Webpublisher:change create flatten image into tri state [\#3678](https://github.com/pypeclub/OpenPype/pull/3678)
- Blender: validators code correction with settings and defaults [\#3662](https://github.com/pypeclub/OpenPype/pull/3662)
**🚀 Enhancements**
- General: Thumbnail can use project roots [\#3750](https://github.com/pypeclub/OpenPype/pull/3750)
- Settings: Remove settings lock on tray exit [\#3720](https://github.com/pypeclub/OpenPype/pull/3720)
- General: Added helper getters to modules manager [\#3712](https://github.com/pypeclub/OpenPype/pull/3712)
- Unreal: Define unreal as module and use host class [\#3701](https://github.com/pypeclub/OpenPype/pull/3701)
- Settings: Lock settings UI session [\#3700](https://github.com/pypeclub/OpenPype/pull/3700)
- General: Benevolent context label collector [\#3686](https://github.com/pypeclub/OpenPype/pull/3686)
- Ftrack: Store ftrack entities on hierarchy integration to instances [\#3677](https://github.com/pypeclub/OpenPype/pull/3677)
- Blender: ops refresh manager after process events [\#3663](https://github.com/pypeclub/OpenPype/pull/3663)
**🐛 Bug fixes**
- Maya: Fix typo in getPanel argument `with\_focus` -\> `withFocus` [\#3753](https://github.com/pypeclub/OpenPype/pull/3753)
- General: Smaller fixes of imports [\#3748](https://github.com/pypeclub/OpenPype/pull/3748)
- General: Logger tweaks [\#3741](https://github.com/pypeclub/OpenPype/pull/3741)
- Nuke: missing job dependency if multiple bake streams [\#3737](https://github.com/pypeclub/OpenPype/pull/3737)
- Nuke: color-space settings from anatomy is working [\#3721](https://github.com/pypeclub/OpenPype/pull/3721)
- Settings: Fix studio default anatomy save [\#3716](https://github.com/pypeclub/OpenPype/pull/3716)
- Maya: Use project name instead of project code [\#3709](https://github.com/pypeclub/OpenPype/pull/3709)
- Settings: Fix project overrides save [\#3708](https://github.com/pypeclub/OpenPype/pull/3708)
- Workfiles tool: Fix published workfile filtering [\#3704](https://github.com/pypeclub/OpenPype/pull/3704)
- PS, AE: Provide default variant value for workfile subset [\#3703](https://github.com/pypeclub/OpenPype/pull/3703)
- Flame: retime is working on clip publishing [\#3684](https://github.com/pypeclub/OpenPype/pull/3684)
- Webpublisher: added check for empty context [\#3682](https://github.com/pypeclub/OpenPype/pull/3682)
**🔀 Refactored code**
- General: Move delivery logic to pipeline [\#3751](https://github.com/pypeclub/OpenPype/pull/3751)
- General: Host addons cleanup [\#3744](https://github.com/pypeclub/OpenPype/pull/3744)
- Photoshop: Defined photoshop as addon [\#3736](https://github.com/pypeclub/OpenPype/pull/3736)
- Harmony: Defined harmony as addon [\#3734](https://github.com/pypeclub/OpenPype/pull/3734)
- General: Module interfaces cleanup [\#3731](https://github.com/pypeclub/OpenPype/pull/3731)
- AfterEffects: Move AE functions from general lib [\#3730](https://github.com/pypeclub/OpenPype/pull/3730)
- Blender: Define blender as module [\#3729](https://github.com/pypeclub/OpenPype/pull/3729)
- AfterEffects: Define AfterEffects as module [\#3728](https://github.com/pypeclub/OpenPype/pull/3728)
- General: Replace PypeLogger with Logger [\#3725](https://github.com/pypeclub/OpenPype/pull/3725)
- Nuke: Define nuke as module [\#3724](https://github.com/pypeclub/OpenPype/pull/3724)
- General: Move subset name functionality [\#3723](https://github.com/pypeclub/OpenPype/pull/3723)
- General: Move creators plugin getter [\#3714](https://github.com/pypeclub/OpenPype/pull/3714)
- General: Move constants from lib to client [\#3713](https://github.com/pypeclub/OpenPype/pull/3713)
- Loader: Subset groups using client operations [\#3710](https://github.com/pypeclub/OpenPype/pull/3710)
- TVPaint: Defined as module [\#3707](https://github.com/pypeclub/OpenPype/pull/3707)
- StandalonePublisher: Define StandalonePublisher as module [\#3706](https://github.com/pypeclub/OpenPype/pull/3706)
- TrayPublisher: Define TrayPublisher as module [\#3705](https://github.com/pypeclub/OpenPype/pull/3705)
- General: Move context specific functions to context tools [\#3702](https://github.com/pypeclub/OpenPype/pull/3702)
**Merged pull requests:**
- Hiero: Define hiero as module [\#3717](https://github.com/pypeclub/OpenPype/pull/3717)
- Deadline: better logging for DL webservice failures [\#3694](https://github.com/pypeclub/OpenPype/pull/3694)
- Photoshop: resize saved images in ExtractReview for ffmpeg [\#3676](https://github.com/pypeclub/OpenPype/pull/3676)
- Webpublisher: Webpublisher is used as addon [\#3740](https://github.com/pypeclub/OpenPype/pull/3740)
## [3.14.0](https://github.com/pypeclub/OpenPype/tree/3.14.0) (2022-08-18)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.0-nightly.1...3.14.0)
**🚀 Enhancements**
- Ftrack: Addiotional component metadata [\#3685](https://github.com/pypeclub/OpenPype/pull/3685)
- Ftrack: Set task status on task creation in integrate hierarchy [\#3675](https://github.com/pypeclub/OpenPype/pull/3675)
- Maya: Disable rendering of all lights for render instances submitted through Deadline. [\#3661](https://github.com/pypeclub/OpenPype/pull/3661)
- General: Optimized OCIO configs [\#3650](https://github.com/pypeclub/OpenPype/pull/3650)
**🐛 Bug fixes**
- General: Switch from hero version to versioned works [\#3691](https://github.com/pypeclub/OpenPype/pull/3691)
- General: Fix finding of last version [\#3656](https://github.com/pypeclub/OpenPype/pull/3656)
- General: Extract Review can scale with pixel aspect ratio [\#3644](https://github.com/pypeclub/OpenPype/pull/3644)
- Maya: Refactor moved usage of CreateRender settings [\#3643](https://github.com/pypeclub/OpenPype/pull/3643)
- General: Hero version representations have full context [\#3638](https://github.com/pypeclub/OpenPype/pull/3638)
- Nuke: color settings for render write node is working now [\#3632](https://github.com/pypeclub/OpenPype/pull/3632)
- Maya: FBX support for update in reference loader [\#3631](https://github.com/pypeclub/OpenPype/pull/3631)
**🔀 Refactored code**
- General: Use client projects getter [\#3673](https://github.com/pypeclub/OpenPype/pull/3673)
- Resolve: Match folder structure to other hosts [\#3653](https://github.com/pypeclub/OpenPype/pull/3653)
- Maya: Hosts as modules [\#3647](https://github.com/pypeclub/OpenPype/pull/3647)
- TimersManager: Plugins are in timers manager module [\#3639](https://github.com/pypeclub/OpenPype/pull/3639)
- General: Move workfiles functions into pipeline [\#3637](https://github.com/pypeclub/OpenPype/pull/3637)
**Merged pull requests:**
- Deadline: Global job pre load is not Pype 2 compatible [\#3666](https://github.com/pypeclub/OpenPype/pull/3666)
- Maya: Remove unused get current renderer logic [\#3645](https://github.com/pypeclub/OpenPype/pull/3645)
- Kitsu|Fix: Movie project type fails & first loop children names [\#3636](https://github.com/pypeclub/OpenPype/pull/3636)
- fix the bug of failing to extract look when UDIMs format used in AiImage [\#3628](https://github.com/pypeclub/OpenPype/pull/3628)
## [3.13.0](https://github.com/pypeclub/OpenPype/tree/3.13.0) (2022-08-09)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.13.0-nightly.1...3.13.0)
**🚀 Enhancements**
- Editorial: Mix audio use side file for ffmpeg filters [\#3630](https://github.com/pypeclub/OpenPype/pull/3630)
**🐛 Bug fixes**
- Maya: fix aov separator in Redshift [\#3625](https://github.com/pypeclub/OpenPype/pull/3625)
- Fix for multi-version build on Mac [\#3622](https://github.com/pypeclub/OpenPype/pull/3622)
- Ftrack: Sync hierarchical attributes can handle new created entities [\#3621](https://github.com/pypeclub/OpenPype/pull/3621)
**🔀 Refactored code**
- General: Plugin settings handled by plugins [\#3623](https://github.com/pypeclub/OpenPype/pull/3623)
## [3.12.2](https://github.com/pypeclub/OpenPype/tree/3.12.2) (2022-07-27)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.2-nightly.4...3.12.2)

View file

@ -41,7 +41,7 @@ It can be built and ran on all common platforms. We develop and test on the foll
- **Linux**
- **Ubuntu** 20.04 LTS
- **Centos** 7
- **Mac OSX**
- **Mac OSX**
- **10.15** Catalina
- **11.1** Big Sur (using Rosetta2)
@ -287,6 +287,14 @@ To run tests, execute `.\tools\run_tests(.ps1|.sh)`.
**Note that it needs existing virtual environment.**
Developer tools
-------------
If you wish to add your own tools to the `.\tools` folder without git tracking them, prefix them with `dev_` (example: `dev_clear_pyc(.ps1|.sh)`).
## Contributors ✨
Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):

View file

@ -0,0 +1,18 @@
Addon distribution tool
------------------------
The code in this folder is the backend portion of the Addon distribution logic for the v4 server.
In the future each host and module will be a separate Addon, and each v4 server could run a different set of Addons.
The client (running on the artist machine) first asks the v4 server for the list of enabled addons.
(It expects a list of JSON documents matching the `addon_distribution.py:AddonInfo` object.)
Next it checks whether each enabled addon version is present in the local folder. If a version of
an addon is missing, the client uses the information in the addon to download the zip file (from http/shared local disk/git)
and unzip it.
A required part of addon distribution will be sharing of dependencies (python libraries, utilities), which is not part of this folder.
The location of this folder might change in the future, as the client will need to add this folder to sys.path reliably.
This code needs to be as independent of OpenPype code as possible!
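
A minimal sketch of how a client might drive this code; the server endpoint URL and local addon folder are placeholder values, while the class and function names come from `addon_distribution.py` and `addon_info.py`:

```python
from common.openpype_common.distribution.addon_distribution import (
    AddonDownloader,
    OSAddonDownloader,
    HTTPAddonDownloader,
    check_addons,
)
from common.openpype_common.distribution.addon_info import UrlType

# Register one downloader per source type the server may return.
downloaders = AddonDownloader()
downloaders.register_format(UrlType.FILESYSTEM, OSAddonDownloader)
downloaders.register_format(UrlType.HTTP, HTTPAddonDownloader)

# Ask the v4 server for enabled addons and download/unzip any missing
# addon versions into the local addon folder.
check_addons(
    "https://v4.example.com/api/addons",  # placeholder server endpoint
    "/path/to/local/addons",              # placeholder local addon folder
    downloaders,
)
```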

View file

@ -0,0 +1,208 @@
import os
from enum import Enum
from abc import abstractmethod
import attr
import logging
import requests
import platform
import shutil
from .file_handler import RemoteFileHandler
from .addon_info import AddonInfo
class UpdateState(Enum):
EXISTS = "exists"
UPDATED = "updated"
FAILED = "failed"
class AddonDownloader:
log = logging.getLogger(__name__)
def __init__(self):
self._downloaders = {}
def register_format(self, downloader_type, downloader):
self._downloaders[downloader_type.value] = downloader
def get_downloader(self, downloader_type):
downloader = self._downloaders.get(downloader_type)
if not downloader:
raise ValueError(f"{downloader_type} not implemented")
return downloader()
@classmethod
@abstractmethod
def download(cls, source, destination):
"""Returns url to downloaded addon zip file.
Args:
source (dict): {type:"http", "url":"https://} ...}
destination (str): local folder to unzip
Returns:
(str) local path to addon zip file
"""
pass
@classmethod
def check_hash(cls, addon_path, addon_hash):
"""Compares 'hash' of downloaded 'addon_url' file.
Args:
addon_path (str): local path to addon zip file
addon_hash (str): sha256 hash of zip file
Raises:
ValueError if hashes doesn't match
"""
if not os.path.exists(addon_path):
raise ValueError(f"{addon_path} doesn't exist.")
if not RemoteFileHandler.check_integrity(addon_path,
addon_hash,
hash_type="sha256"):
raise ValueError(f"{addon_path} doesn't match expected hash.")
@classmethod
def unzip(cls, addon_zip_path, destination):
"""Unzips local 'addon_zip_path' to 'destination'.
Args:
addon_zip_path (str): local path to addon zip file
destination (str): local folder to unzip
"""
RemoteFileHandler.unzip(addon_zip_path, destination)
os.remove(addon_zip_path)
@classmethod
def remove(cls, addon_url):
pass
class OSAddonDownloader(AddonDownloader):
@classmethod
def download(cls, source, destination):
# filesystem source doesn't need downloading; return the path to unzip directly
addon_url = source["path"].get(platform.system().lower())
if not os.path.exists(addon_url):
raise ValueError("{} is not accessible".format(addon_url))
return addon_url
class HTTPAddonDownloader(AddonDownloader):
CHUNK_SIZE = 100000
@classmethod
def download(cls, source, destination):
source_url = source["url"]
cls.log.debug(f"Downloading {source_url} to {destination}")
file_name = os.path.basename(destination)
_, ext = os.path.splitext(file_name)
if (ext.replace(".", '') not
in set(RemoteFileHandler.IMPLEMENTED_ZIP_FORMATS)):
file_name += ".zip"
RemoteFileHandler.download_url(source_url,
destination,
filename=file_name)
return os.path.join(destination, file_name)
def get_addons_info(server_endpoint):
"""Returns list of addon information from Server"""
# TODO temp
# addon_info = AddonInfo(
# **{"name": "openpype_slack",
# "version": "1.0.0",
# "addon_url": "c:/projects/openpype_slack_1.0.0.zip",
# "type": UrlType.FILESYSTEM,
# "hash": "4be25eb6215e91e5894d3c5475aeb1e379d081d3f5b43b4ee15b0891cf5f5658"}) # noqa
#
# http_addon = AddonInfo(
# **{"name": "openpype_slack",
# "version": "1.0.0",
# "addon_url": "https://drive.google.com/file/d/1TcuV8c2OV8CcbPeWi7lxOdqWsEqQNPYy/view?usp=sharing", # noqa
# "type": UrlType.HTTP,
# "hash": "4be25eb6215e91e5894d3c5475aeb1e379d081d3f5b43b4ee15b0891cf5f5658"}) # noqa
response = requests.get(server_endpoint)
if not response.ok:
raise Exception(response.text)
addons_info = []
for addon in response.json():
addons_info.append(AddonInfo(**addon))
return addons_info
def update_addon_state(addon_infos, destination_folder, factory,
log=None):
"""Loops through all 'addon_infos', compares local version, unzips.
Loops through the server-provided list of dictionaries with information about
available addons. Checks whether each addon is already present and deployed.
If it isn't, the addon zip gets downloaded and unzipped into 'destination_folder'.
Args:
addon_infos (list of AddonInfo)
destination_folder (str): local path
factory (AddonDownloader): factory to get appropriate downloader per
addon type
log (logging.Logger)
Returns:
(dict): {"addon_full_name": UpdateState.value
(eg. "exists"|"updated"|"failed")
"""
if not log:
log = logging.getLogger(__name__)
download_states = {}
for addon in addon_infos:
full_name = "{}_{}".format(addon.name, addon.version)
addon_dest = os.path.join(destination_folder, full_name)
if os.path.isdir(addon_dest):
log.debug(f"Addon version folder {addon_dest} already exists.")
download_states[full_name] = UpdateState.EXISTS.value
continue
for source in addon.sources:
download_states[full_name] = UpdateState.FAILED.value
try:
downloader = factory.get_downloader(source.type)
zip_file_path = downloader.download(attr.asdict(source),
addon_dest)
downloader.check_hash(zip_file_path, addon.hash)
downloader.unzip(zip_file_path, addon_dest)
download_states[full_name] = UpdateState.UPDATED.value
break
except Exception:
log.warning(f"Error happened during updating {addon.name}",
exc_info=True)
if os.path.isdir(addon_dest):
log.debug(f"Cleaning {addon_dest}")
shutil.rmtree(addon_dest)
return download_states
def check_addons(server_endpoint, addon_folder, downloaders):
"""Main entry point to compare existing addons with those on server.
Args:
server_endpoint (str): url to v4 server endpoint
addon_folder (str): local dir path for addons
downloaders (AddonDownloader): factory of downloaders
Raises:
RuntimeError: If any addon failed to update.
"""
addons_info = get_addons_info(server_endpoint)
result = update_addon_state(addons_info,
addon_folder,
downloaders)
if UpdateState.FAILED.value in result.values():
raise RuntimeError(f"Unable to update some addons {result}")
def cli(*args):
raise NotImplementedError

View file

@ -0,0 +1,80 @@
import attr
from enum import Enum
class UrlType(Enum):
HTTP = "http"
GIT = "git"
FILESYSTEM = "filesystem"
@attr.s
class MultiPlatformPath(object):
windows = attr.ib(default=None)
linux = attr.ib(default=None)
darwin = attr.ib(default=None)
@attr.s
class AddonSource(object):
type = attr.ib()
@attr.s
class LocalAddonSource(AddonSource):
path = attr.ib(default=attr.Factory(MultiPlatformPath))
@attr.s
class WebAddonSource(AddonSource):
url = attr.ib(default=None)
@attr.s
class VersionData(object):
version_data = attr.ib(default=None)
@attr.s
class AddonInfo(object):
"""Object matching json payload from Server"""
name = attr.ib()
version = attr.ib()
title = attr.ib(default=None)
sources = attr.ib(default=attr.Factory(dict))
hash = attr.ib(default=None)
description = attr.ib(default=None)
license = attr.ib(default=None)
authors = attr.ib(default=None)
@classmethod
def from_dict(cls, data):
sources = []
production_version = data.get("productionVersion")
if not production_version:
return
# server payload contains info about all versions
# active addon must have 'productionVersion' and matching version info
version_data = data.get("versions", {})[production_version]
for source in version_data.get("clientSourceInfo", []):
if source.get("type") == UrlType.FILESYSTEM.value:
source_addon = LocalAddonSource(type=source["type"],
path=source["path"])
if source.get("type") == UrlType.HTTP.value:
source_addon = WebAddonSource(type=source["type"],
url=source["url"])
sources.append(source_addon)
return cls(name=data.get("name"),
version=production_version,
sources=sources,
hash=data.get("hash"),
description=data.get("description"),
title=data.get("title"),
license=data.get("license"),
authors=data.get("authors"))

View file

@ -21,7 +21,7 @@ class RemoteFileHandler:
'tar.gz', 'tar.xz', 'tar.bz2']
@staticmethod
def calculate_md5(fpath, chunk_size):
def calculate_md5(fpath, chunk_size=10000):
md5 = hashlib.md5()
with open(fpath, 'rb') as f:
for chunk in iter(lambda: f.read(chunk_size), b''):
@ -33,17 +33,45 @@ class RemoteFileHandler:
return md5 == RemoteFileHandler.calculate_md5(fpath, **kwargs)
@staticmethod
def check_integrity(fpath, md5=None):
def calculate_sha256(fpath):
"""Calculate sha256 for content of the file.
Args:
fpath (str): Path to file.
Returns:
str: hex encoded sha256
"""
h = hashlib.sha256()
b = bytearray(128 * 1024)
mv = memoryview(b)
with open(fpath, 'rb', buffering=0) as f:
for n in iter(lambda: f.readinto(mv), 0):
h.update(mv[:n])
return h.hexdigest()
@staticmethod
def check_sha256(fpath, sha256, **kwargs):
return sha256 == RemoteFileHandler.calculate_sha256(fpath, **kwargs)
@staticmethod
def check_integrity(fpath, hash_value=None, hash_type=None):
if not os.path.isfile(fpath):
return False
if md5 is None:
if hash_value is None:
return True
return RemoteFileHandler.check_md5(fpath, md5)
if not hash_type:
raise ValueError("Provide hash type, md5 or sha256")
if hash_type == 'md5':
return RemoteFileHandler.check_md5(fpath, hash_value)
if hash_type == "sha256":
return RemoteFileHandler.check_sha256(fpath, hash_value)
@staticmethod
def download_url(
url, root, filename=None,
md5=None, max_redirect_hops=3
sha256=None, max_redirect_hops=3
):
"""Download a file from a url and place it in root.
Args:
@ -51,7 +79,7 @@ class RemoteFileHandler:
root (str): Directory to place downloaded file in
filename (str, optional): Name to save the file under.
If None, use the basename of the URL
md5 (str, optional): MD5 checksum of the download.
sha256 (str, optional): sha256 checksum of the download.
If None, do not check
max_redirect_hops (int, optional): Maximum number of redirect
hops allowed
@ -64,7 +92,8 @@ class RemoteFileHandler:
os.makedirs(root, exist_ok=True)
# check if file is already present locally
if RemoteFileHandler.check_integrity(fpath, md5):
if RemoteFileHandler.check_integrity(fpath,
sha256, hash_type="sha256"):
print('Using downloaded and verified file: ' + fpath)
return
@ -76,7 +105,7 @@ class RemoteFileHandler:
file_id = RemoteFileHandler._get_google_drive_file_id(url)
if file_id is not None:
return RemoteFileHandler.download_file_from_google_drive(
file_id, root, filename, md5)
file_id, root, filename, sha256)
# download the file
try:
@ -92,20 +121,21 @@ class RemoteFileHandler:
raise e
# check integrity of downloaded file
if not RemoteFileHandler.check_integrity(fpath, md5):
if not RemoteFileHandler.check_integrity(fpath,
sha256, hash_type="sha256"):
raise RuntimeError("File not found or corrupted.")
@staticmethod
def download_file_from_google_drive(file_id, root,
filename=None,
md5=None):
sha256=None):
"""Download a Google Drive file from and place it in root.
Args:
file_id (str): id of file to be downloaded
root (str): Directory to place downloaded file in
filename (str, optional): Name to save the file under.
If None, use the id of the file.
md5 (str, optional): MD5 checksum of the download.
sha256 (str, optional): sha256 checksum of the download.
If None, do not check
"""
# Based on https://stackoverflow.com/questions/38511444/python-download-files-from-google-drive-using-url # noqa
@ -119,8 +149,8 @@ class RemoteFileHandler:
os.makedirs(root, exist_ok=True)
if os.path.isfile(fpath) and RemoteFileHandler.check_integrity(fpath,
md5):
if os.path.isfile(fpath) and RemoteFileHandler.check_integrity(
fpath, sha256, hash_type="sha256"):
print('Using downloaded and verified file: ' + fpath)
else:
session = requests.Session()

View file

@ -0,0 +1,167 @@
import pytest
import attr
import tempfile
from common.openpype_common.distribution.addon_distribution import (
AddonDownloader,
OSAddonDownloader,
HTTPAddonDownloader,
AddonInfo,
update_addon_state,
UpdateState
)
from common.openpype_common.distribution.addon_info import UrlType
@pytest.fixture
def addon_downloader():
addon_downloader = AddonDownloader()
addon_downloader.register_format(UrlType.FILESYSTEM, OSAddonDownloader)
addon_downloader.register_format(UrlType.HTTP, HTTPAddonDownloader)
yield addon_downloader
@pytest.fixture
def http_downloader(addon_downloader):
yield addon_downloader.get_downloader(UrlType.HTTP.value)
@pytest.fixture
def temp_folder():
yield tempfile.mkdtemp()
@pytest.fixture
def sample_addon_info():
addon_info = {
"versions": {
"1.0.0": {
"clientPyproject": {
"tool": {
"poetry": {
"dependencies": {
"nxtools": "^1.6",
"orjson": "^3.6.7",
"typer": "^0.4.1",
"email-validator": "^1.1.3",
"python": "^3.10",
"fastapi": "^0.73.0"
}
}
}
},
"hasSettings": True,
"clientSourceInfo": [
{
"type": "http",
"url": "https://drive.google.com/file/d/1TcuV8c2OV8CcbPeWi7lxOdqWsEqQNPYy/view?usp=sharing" # noqa
},
{
"type": "filesystem",
"path": {
"windows": ["P:/sources/some_file.zip",
"W:/sources/some_file.zip"], # noqa
"linux": ["/mnt/srv/sources/some_file.zip"],
"darwin": ["/Volumes/srv/sources/some_file.zip"]
}
}
],
"frontendScopes": {
"project": {
"sidebar": "hierarchy"
}
}
}
},
"description": "",
"title": "Slack addon",
"name": "openpype_slack",
"productionVersion": "1.0.0",
"hash": "4be25eb6215e91e5894d3c5475aeb1e379d081d3f5b43b4ee15b0891cf5f5658" # noqa
}
yield addon_info
def test_register(printer):
addon_downloader = AddonDownloader()
assert len(addon_downloader._downloaders) == 0, "Contains registered"
addon_downloader.register_format(UrlType.FILESYSTEM, OSAddonDownloader)
assert len(addon_downloader._downloaders) == 1, "Should contain one"
def test_get_downloader(printer, addon_downloader):
assert addon_downloader.get_downloader(UrlType.FILESYSTEM.value), "Should find" # noqa
with pytest.raises(ValueError):
addon_downloader.get_downloader("unknown"), "Shouldn't find"
def test_addon_info(printer, sample_addon_info):
"""Tests parsing of expected payload from v4 server into AadonInfo."""
valid_minimum = {
"name": "openpype_slack",
"productionVersion": "1.0.0",
"versions": {
"1.0.0": {
"clientSourceInfo": [
{
"type": "filesystem",
"path": {
"windows": [
"P:/sources/some_file.zip",
"W:/sources/some_file.zip"],
"linux": [
"/mnt/srv/sources/some_file.zip"],
"darwin": [
"/Volumes/srv/sources/some_file.zip"] # noqa
}
}
]
}
}
}
assert AddonInfo.from_dict(valid_minimum), "Missing required fields"
valid_minimum["versions"].pop("1.0.0")
with pytest.raises(KeyError):
assert not AddonInfo.from_dict(valid_minimum), "Must fail without version data" # noqa
valid_minimum.pop("productionVersion")
assert not AddonInfo.from_dict(
valid_minimum), "none if not productionVersion" # noqa
addon = AddonInfo.from_dict(sample_addon_info)
assert addon, "Should be created"
assert addon.name == "openpype_slack", "Incorrect name"
assert addon.version == "1.0.0", "Incorrect version"
with pytest.raises(TypeError):
assert addon["name"], "Dict approach not implemented"
addon_as_dict = attr.asdict(addon)
assert addon_as_dict["name"], "Dict approach should work"
def test_update_addon_state(printer, sample_addon_info,
temp_folder, addon_downloader):
"""Tests possible cases of addon update."""
addon_info = AddonInfo.from_dict(sample_addon_info)
orig_hash = addon_info.hash
addon_info.hash = "brokenhash"
result = update_addon_state([addon_info], temp_folder, addon_downloader)
assert result["openpype_slack_1.0.0"] == UpdateState.FAILED.value, \
"Update should failed because of wrong hash"
addon_info.hash = orig_hash
result = update_addon_state([addon_info], temp_folder, addon_downloader)
assert result["openpype_slack_1.0.0"] == UpdateState.UPDATED.value, \
"Addon should have been updated"
result = update_addon_state([addon_info], temp_folder, addon_downloader)
assert result["openpype_slack_1.0.0"] == UpdateState.EXISTS.value, \
"Addon should already exist"

View file

@ -63,7 +63,7 @@ class OpenPypeVersion(semver.VersionInfo):
"""
staging = False
path = None
_VERSION_REGEX = re.compile(r"(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$") # noqa: E501
_VERSION_REGEX = re.compile(r"(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?") # noqa: E501
_installed_version = None
def __init__(self, *args, **kwargs):

View file

@ -388,8 +388,11 @@ class InstallDialog(QtWidgets.QDialog):
install_thread.start()
def _installation_finished(self):
# TODO we should find out why status can be set to 'None'
# - 'InstallThread.run' should handle all cases so it is not clear where
# that comes from
status = self._install_thread.result()
if status >= 0:
if status is not None and status >= 0:
self._update_progress(100)
QtWidgets.QApplication.processEvents()
self.done(3)

View file

@ -45,6 +45,12 @@ from .entities import (
get_workfile_info,
)
from .entity_links import (
get_linked_asset_ids,
get_linked_assets,
get_linked_representation_id,
)
from .operations import (
create_project,
)
@ -94,5 +100,9 @@ __all__ = (
"get_workfile_info",
"get_linked_asset_ids",
"get_linked_assets",
"get_linked_representation_id",
"create_project",
)

View file

@ -14,6 +14,8 @@ from bson.objectid import ObjectId
from .mongo import get_project_database, get_project_connection
PatternType = type(re.compile(""))
def _prepare_fields(fields, required_fields=None):
if not fields:
@ -32,17 +34,37 @@ def _prepare_fields(fields, required_fields=None):
return output
def _convert_id(in_id):
def convert_id(in_id):
"""Helper function for conversion of id from string to ObjectId.
Args:
in_id (Union[str, ObjectId, Any]): Entity id that should be converted
to the right type for queries.
Returns:
Union[ObjectId, Any]: Id converted to ObjectId, or the input value unchanged.
"""
if isinstance(in_id, six.string_types):
return ObjectId(in_id)
return in_id
def _convert_ids(in_ids):
def convert_ids(in_ids):
"""Helper function for conversion of ids from string to ObjectId.
Args:
in_ids (Iterable[Union[str, ObjectId, Any]]): List of entity ids that
should be converted to the right type for queries.
Returns:
List[ObjectId]: Ids converted to ObjectId.
"""
_output = set()
for in_id in in_ids:
if in_id is not None:
_output.add(_convert_id(in_id))
_output.add(convert_id(in_id))
return list(_output)
@ -115,7 +137,7 @@ def get_asset_by_id(project_name, asset_id, fields=None):
None: Asset was not found by id.
"""
asset_id = _convert_id(asset_id)
asset_id = convert_id(asset_id)
if not asset_id:
return None
@ -196,7 +218,7 @@ def _get_assets(
query_filter = {"type": {"$in": asset_types}}
if asset_ids is not None:
asset_ids = _convert_ids(asset_ids)
asset_ids = convert_ids(asset_ids)
if not asset_ids:
return []
query_filter["_id"] = {"$in": asset_ids}
@ -207,7 +229,7 @@ def _get_assets(
query_filter["name"] = {"$in": list(asset_names)}
if parent_ids is not None:
parent_ids = _convert_ids(parent_ids)
parent_ids = convert_ids(parent_ids)
if not parent_ids:
return []
query_filter["data.visualParent"] = {"$in": parent_ids}
@ -307,7 +329,7 @@ def get_asset_ids_with_subsets(project_name, asset_ids=None):
"type": "subset"
}
if asset_ids is not None:
asset_ids = _convert_ids(asset_ids)
asset_ids = convert_ids(asset_ids)
if not asset_ids:
return []
subset_query["parent"] = {"$in": asset_ids}
@ -347,7 +369,7 @@ def get_subset_by_id(project_name, subset_id, fields=None):
Dict: Subset document which can be reduced to specified 'fields'.
"""
subset_id = _convert_id(subset_id)
subset_id = convert_id(subset_id)
if not subset_id:
return None
@ -374,7 +396,7 @@ def get_subset_by_name(project_name, subset_name, asset_id, fields=None):
if not subset_name:
return None
asset_id = _convert_id(asset_id)
asset_id = convert_id(asset_id)
if not asset_id:
return None
@ -428,13 +450,13 @@ def get_subsets(
query_filter = {"type": {"$in": subset_types}}
if asset_ids is not None:
asset_ids = _convert_ids(asset_ids)
asset_ids = convert_ids(asset_ids)
if not asset_ids:
return []
query_filter["parent"] = {"$in": asset_ids}
if subset_ids is not None:
subset_ids = _convert_ids(subset_ids)
subset_ids = convert_ids(subset_ids)
if not subset_ids:
return []
query_filter["_id"] = {"$in": subset_ids}
@ -449,7 +471,7 @@ def get_subsets(
for asset_id, names in names_by_asset_ids.items():
if asset_id and names:
or_query.append({
"parent": _convert_id(asset_id),
"parent": convert_id(asset_id),
"name": {"$in": list(names)}
})
if not or_query:
@ -510,7 +532,7 @@ def get_version_by_id(project_name, version_id, fields=None):
Dict: Version document which can be reduced to specified 'fields'.
"""
version_id = _convert_id(version_id)
version_id = convert_id(version_id)
if not version_id:
return None
@ -537,7 +559,7 @@ def get_version_by_name(project_name, version, subset_id, fields=None):
Dict: Version document which can be reduced to specified 'fields'.
"""
subset_id = _convert_id(subset_id)
subset_id = convert_id(subset_id)
if not subset_id:
return None
@ -567,7 +589,7 @@ def version_is_latest(project_name, version_id):
bool: True if is latest version from subset else False.
"""
version_id = _convert_id(version_id)
version_id = convert_id(version_id)
if not version_id:
return False
version_doc = get_version_by_id(
@ -610,13 +632,13 @@ def _get_versions(
query_filter = {"type": {"$in": version_types}}
if subset_ids is not None:
subset_ids = _convert_ids(subset_ids)
subset_ids = convert_ids(subset_ids)
if not subset_ids:
return []
query_filter["parent"] = {"$in": subset_ids}
if version_ids is not None:
version_ids = _convert_ids(version_ids)
version_ids = convert_ids(version_ids)
if not version_ids:
return []
query_filter["_id"] = {"$in": version_ids}
@ -690,7 +712,7 @@ def get_hero_version_by_subset_id(project_name, subset_id, fields=None):
Dict: Hero version entity data.
"""
subset_id = _convert_id(subset_id)
subset_id = convert_id(subset_id)
if not subset_id:
return None
@ -720,7 +742,7 @@ def get_hero_version_by_id(project_name, version_id, fields=None):
Dict: Hero version entity data.
"""
version_id = _convert_id(version_id)
version_id = convert_id(version_id)
if not version_id:
return None
@ -786,7 +808,7 @@ def get_output_link_versions(project_name, version_id, fields=None):
links for passed version.
"""
version_id = _convert_id(version_id)
version_id = convert_id(version_id)
if not version_id:
return []
@ -812,7 +834,7 @@ def get_last_versions(project_name, subset_ids, fields=None):
dict[ObjectId, int]: Key is subset id and value is last version name.
"""
subset_ids = _convert_ids(subset_ids)
subset_ids = convert_ids(subset_ids)
if not subset_ids:
return {}
@ -898,7 +920,7 @@ def get_last_version_by_subset_id(project_name, subset_id, fields=None):
Dict: Version document which can be reduced to specified 'fields'.
"""
subset_id = _convert_id(subset_id)
subset_id = convert_id(subset_id)
if not subset_id:
return None
@ -971,7 +993,7 @@ def get_representation_by_id(project_name, representation_id, fields=None):
"type": {"$in": repre_types}
}
if representation_id is not None:
query_filter["_id"] = _convert_id(representation_id)
query_filter["_id"] = convert_id(representation_id)
conn = get_project_connection(project_name)
@ -996,7 +1018,7 @@ def get_representation_by_name(
to specified 'fields'.
"""
version_id = _convert_id(version_id)
version_id = convert_id(version_id)
if not version_id or not representation_name:
return None
repre_types = ["representation", "archived_representations"]
@ -1034,11 +1056,11 @@ def _regex_filters(filters):
for key, value in filters.items():
regexes = []
a_values = []
if isinstance(value, re.Pattern):
if isinstance(value, PatternType):
regexes.append(value)
elif isinstance(value, (list, tuple, set)):
for item in value:
if isinstance(item, re.Pattern):
if isinstance(item, PatternType):
regexes.append(item)
else:
a_values.append(item)
@ -1089,7 +1111,7 @@ def _get_representations(
query_filter = {"type": {"$in": repre_types}}
if representation_ids is not None:
representation_ids = _convert_ids(representation_ids)
representation_ids = convert_ids(representation_ids)
if not representation_ids:
return default_output
query_filter["_id"] = {"$in": representation_ids}
@ -1100,7 +1122,7 @@ def _get_representations(
query_filter["name"] = {"$in": list(representation_names)}
if version_ids is not None:
version_ids = _convert_ids(version_ids)
version_ids = convert_ids(version_ids)
if not version_ids:
return default_output
query_filter["parent"] = {"$in": version_ids}
@ -1111,7 +1133,7 @@ def _get_representations(
for version_id, names in names_by_version_ids.items():
if version_id and names:
or_query.append({
"parent": _convert_id(version_id),
"parent": convert_id(version_id),
"name": {"$in": list(names)}
})
if not or_query:
@ -1174,7 +1196,7 @@ def get_representations(
as filter. Filter ignored if 'None' is passed.
version_ids (Iterable[str]): Subset ids used as parent filter. Filter
ignored if 'None' is passed.
context_filters (Dict[str, List[str, re.Pattern]]): Filter by
context_filters (Dict[str, List[str, PatternType]]): Filter by
representation context fields.
names_by_version_ids (dict[ObjectId, list[str]]): Complex filtering
using version ids and list of names under the version.
@ -1220,7 +1242,7 @@ def get_archived_representations(
as filter. Filter ignored if 'None' is passed.
version_ids (Iterable[str]): Subset ids used as parent filter. Filter
ignored if 'None' is passed.
context_filters (Dict[str, List[str, re.Pattern]]): Filter by
context_filters (Dict[str, List[str, PatternType]]): Filter by
representation context fields.
names_by_version_ids (dict[ObjectId, List[str]]): Complex filtering
using version ids and list of names under the version.
@ -1361,7 +1383,7 @@ def get_thumbnail_id_from_source(project_name, src_type, src_id):
if not src_type or not src_id:
return None
query_filter = {"_id": _convert_id(src_id)}
query_filter = {"_id": convert_id(src_id)}
conn = get_project_connection(project_name)
src_doc = conn.find_one(query_filter, {"data.thumbnail_id"})
@ -1388,7 +1410,7 @@ def get_thumbnails(project_name, thumbnail_ids, fields=None):
"""
if thumbnail_ids:
thumbnail_ids = _convert_ids(thumbnail_ids)
thumbnail_ids = convert_ids(thumbnail_ids)
if not thumbnail_ids:
return []
@ -1416,7 +1438,7 @@ def get_thumbnail(project_name, thumbnail_id, fields=None):
if not thumbnail_id:
return None
query_filter = {"type": "thumbnail", "_id": _convert_id(thumbnail_id)}
query_filter = {"type": "thumbnail", "_id": convert_id(thumbnail_id)}
conn = get_project_connection(project_name)
return conn.find_one(query_filter, _prepare_fields(fields))
@ -1444,7 +1466,7 @@ def get_workfile_info(
query_filter = {
"type": "workfile",
"parent": _convert_id(asset_id),
"parent": convert_id(asset_id),
"task_name": task_name,
"filename": filename
}

View file

@ -0,0 +1,232 @@
from .mongo import get_project_connection
from .entities import (
get_assets,
get_asset_by_id,
get_representation_by_id,
convert_id,
)
def get_linked_asset_ids(project_name, asset_doc=None, asset_id=None):
"""Extract linked asset ids from asset document.
One of asset document or asset id must be passed.
Note:
Asset links currently work only from asset to assets.
Args:
asset_doc (dict): Asset document from DB.
Returns:
List[Union[ObjectId, str]]: Asset ids of input links.
"""
output = []
if not asset_doc and not asset_id:
return output
if not asset_doc:
asset_doc = get_asset_by_id(
project_name, asset_id, fields=["data.inputLinks"]
)
input_links = asset_doc["data"].get("inputLinks")
if not input_links:
return output
for item in input_links:
# Backwards compatibility for "_id" key which was replaced with
# "id"
if "_id" in item:
link_id = item["_id"]
else:
link_id = item["id"]
output.append(link_id)
return output
def get_linked_assets(
project_name, asset_doc=None, asset_id=None, fields=None
):
"""Return linked assets based on passed asset document.
One of asset document or asset id must be passed.
Args:
project_name (str): Name of project where to look for queried entities.
asset_doc (Dict[str, Any]): Asset document from database.
asset_id (Union[ObjectId, str]): Asset id. Can be used instead of
asset document.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
Returns:
List[Dict[str, Any]]: Asset documents of input links for passed
asset doc.
"""
if not asset_doc:
if not asset_id:
return []
asset_doc = get_asset_by_id(
project_name,
asset_id,
fields=["data.inputLinks"]
)
if not asset_doc:
return []
link_ids = get_linked_asset_ids(project_name, asset_doc=asset_doc)
if not link_ids:
return []
return list(get_assets(project_name, asset_ids=link_ids, fields=fields))
def get_linked_representation_id(
project_name, repre_doc=None, repre_id=None, link_type=None, max_depth=None
):
"""Returns list of linked ids of particular type (if provided).
One of representation document or representation id must be passed.
Note:
Representation links currently work only from a representation through its
version back to representations.
Args:
project_name (str): Name of project where to look for links.
repre_doc (Dict[str, Any]): Representation document.
repre_id (Union[ObjectId, str]): Representation id.
link_type (str): Type of link (e.g. 'reference', ...).
max_depth (int): Limit recursion level. Default: 0
Returns:
List[ObjectId]: Linked representation ids.
"""
if repre_doc:
repre_id = repre_doc["_id"]
if repre_id:
repre_id = convert_id(repre_id)
if not repre_id and not repre_doc:
return []
version_id = None
if repre_doc:
version_id = repre_doc.get("parent")
if not version_id:
repre_doc = get_representation_by_id(
project_name, repre_id, fields=["parent"]
)
version_id = repre_doc["parent"]
if not version_id:
return []
if max_depth is None:
max_depth = 0
match = {
"_id": version_id,
"type": {"$in": ["version", "hero_version"]}
}
graph_lookup = {
"from": project_name,
"startWith": "$data.inputLinks.id",
"connectFromField": "data.inputLinks.id",
"connectToField": "_id",
"as": "outputs_recursive",
"depthField": "depth"
}
if max_depth != 0:
# We offset by -1 since 0 basically means no recursion
# but the recursion only happens after the initial lookup
# for outputs.
graph_lookup["maxDepth"] = max_depth - 1
query_pipeline = [
# Match
{"$match": match},
# Recursive graph lookup for inputs
{"$graphLookup": graph_lookup}
]
conn = get_project_connection(project_name)
result = conn.aggregate(query_pipeline)
referenced_version_ids = _process_referenced_pipeline_result(
result, link_type
)
if not referenced_version_ids:
return []
ref_ids = conn.distinct(
"_id",
filter={
"parent": {"$in": list(referenced_version_ids)},
"type": "representation"
}
)
return list(ref_ids)
def _process_referenced_pipeline_result(result, link_type):
"""Filters result from pipeline for particular link_type.
Pipeline cannot use link_type directly in a query.
Returns:
(list)
"""
referenced_version_ids = set()
correctly_linked_ids = set()
for item in result:
input_links = item["data"].get("inputLinks")
if not input_links:
continue
_filter_input_links(
input_links,
link_type,
correctly_linked_ids
)
# outputs_recursive in random order, sort by depth
outputs_recursive = item.get("outputs_recursive")
if not outputs_recursive:
continue
for output in sorted(outputs_recursive, key=lambda o: o["depth"]):
output_links = output["data"].get("inputLinks")
if not output_links:
continue
# Leaf
if output["_id"] not in correctly_linked_ids:
continue
_filter_input_links(
output_links,
link_type,
correctly_linked_ids
)
referenced_version_ids.add(output["_id"])
return referenced_version_ids
def _filter_input_links(input_links, link_type, correctly_linked_ids):
for input_link in input_links:
if link_type and input_link["type"] != link_type:
continue
link_id = input_link.get("id") or input_link.get("_id")
if link_id is not None:
correctly_linked_ids.add(link_id)
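
A minimal usage sketch of the link queries above, assuming `openpype.client` re-exports these functions as the package `__init__` in this changeset suggests; the project name and id values are placeholders:

```python
from openpype.client import get_linked_assets, get_linked_representation_id

project_name = "my_project"                       # placeholder project name
asset_id = "631f8c2a9f0a3d6c1a7b4e20"             # placeholder ObjectId string
repre_id = "631f8c2a9f0a3d6c1a7b4e21"             # placeholder ObjectId string

# Asset documents linked (via 'data.inputLinks') to a known asset id.
linked_assets = get_linked_assets(project_name, asset_id=asset_id)

# Representation ids reachable from a representation through version links,
# limited to 'reference' links and a single level of recursion.
linked_repre_ids = get_linked_representation_id(
    project_name,
    repre_id=repre_id,
    link_type="reference",
    max_depth=1,
)
```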

View file

@ -19,6 +19,7 @@ class AddLastWorkfileToLaunchArgs(PreLaunchHook):
"hiero",
"houdini",
"nukestudio",
"fusion",
"blender",
"photoshop",
"tvpaint",

View file

@ -1,8 +1,6 @@
import os
from openpype.lib import (
PreLaunchHook,
create_workdir_extra_folders
)
from openpype.lib import PreLaunchHook
from openpype.pipeline.workfile import create_workdir_extra_folders
class AddLastWorkfileToLaunchArgs(PreLaunchHook):

View file

@ -5,6 +5,7 @@ from .host import (
from .interfaces import (
IWorkfileHost,
ILoadHost,
IPublishHost,
INewPublisher,
)
@ -16,6 +17,7 @@ __all__ = (
"IWorkfileHost",
"ILoadHost",
"IPublishHost",
"INewPublisher",
"HostDirmap",

View file

@ -282,7 +282,7 @@ class IWorkfileHost:
return self.workfile_has_unsaved_changes()
class INewPublisher:
class IPublishHost:
"""Functions related to new creation system in new publisher.
New publisher is not storing information only about each created instance
@ -306,7 +306,7 @@ class INewPublisher:
workflow.
"""
if isinstance(host, INewPublisher):
if isinstance(host, IPublishHost):
return []
required = [
@ -330,7 +330,7 @@ class INewPublisher:
MissingMethodsError: If there are missing methods on host
implementation.
"""
missing = INewPublisher.get_missing_publish_methods(host)
missing = IPublishHost.get_missing_publish_methods(host)
if missing:
raise MissingMethodsError(host, missing)
@ -368,3 +368,17 @@ class INewPublisher:
"""
pass
class INewPublisher(IPublishHost):
"""Legacy interface replaced by 'IPublishHost'.
Deprecated:
'INewPublisher' is replaced by 'IPublishHost' please change your
imports.
There is no "reasonable" way hot mark these classes as deprecated
to show warning of wrong import. Deprecated since 3.14.* will be
removed in 3.15.*
"""
pass
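
A minimal sketch of the intended import migration in host code; `MyHost` is a hypothetical class, and the module path assumes these interfaces are exposed from `openpype.host` as the package exports above suggest:

```python
# Hypothetical host implementation updating to the renamed interface.
# Before (deprecated since 3.14.*, removed in 3.15.*):
#   from openpype.host import INewPublisher
# After:
from openpype.host import IPublishHost


class MyHost(IPublishHost):
    """Hypothetical host class; only the imported base interface changes."""
```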

View file

@ -2,14 +2,18 @@ import os
import sys
import six
import openpype.api
from openpype.lib import (
get_ffmpeg_tool_path,
run_subprocess,
)
from openpype.pipeline import publish
from openpype.hosts.aftereffects.api import get_stub
class ExtractLocalRender(openpype.api.Extractor):
class ExtractLocalRender(publish.Extractor):
"""Render RenderQueue locally."""
order = openpype.api.Extractor.order - 0.47
order = publish.Extractor.order - 0.47
label = "Extract Local Render"
hosts = ["aftereffects"]
families = ["renderLocal", "render.local"]
@ -53,7 +57,7 @@ class ExtractLocalRender(openpype.api.Extractor):
instance.data["representations"] = [repre_data]
ffmpeg_path = openpype.lib.get_ffmpeg_tool_path("ffmpeg")
ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")
# Generate thumbnail.
thumbnail_path = os.path.join(staging_dir, "thumbnail.jpg")
@ -66,7 +70,7 @@ class ExtractLocalRender(openpype.api.Extractor):
]
self.log.debug("Thumbnail args:: {}".format(args))
try:
output = openpype.lib.run_subprocess(args)
output = run_subprocess(args)
except TypeError:
self.log.warning("Error in creating thumbnail")
six.reraise(*sys.exc_info())

View file

@ -1,13 +1,13 @@
import pyblish.api
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.aftereffects.api import get_stub
class ExtractSaveScene(pyblish.api.ContextPlugin):
"""Save scene before extraction."""
order = openpype.api.Extractor.order - 0.48
order = publish.Extractor.order - 0.48
label = "Extract Save Scene"
hosts = ["aftereffects"]

View file

@ -1,8 +1,8 @@
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.aftereffects.api import get_stub
class RemovePublishHighlight(openpype.api.Extractor):
class RemovePublishHighlight(publish.Extractor):
"""Clean utf characters which are not working in DL
Published compositions are marked with unicode icon which causes
@ -10,7 +10,7 @@ class RemovePublishHighlight(openpype.api.Extractor):
rendering, add it later back to avoid confusion.
"""
order = openpype.api.Extractor.order - 0.49 # just before save
order = publish.Extractor.order - 0.49 # just before save
label = "Clean render comp"
hosts = ["aftereffects"]
families = ["render.farm"]

View file

@ -1,6 +1,19 @@
import os
import bpy
import pyblish.api
from openpype.pipeline import legacy_io
from openpype.hosts.blender.api import workio
class SaveWorkfiledAction(pyblish.api.Action):
"""Save Workfile."""
label = "Save Workfile"
on = "failed"
icon = "save"
def process(self, context, plugin):
bpy.ops.wm.avalon_workfiles()
class CollectBlenderCurrentFile(pyblish.api.ContextPlugin):
@ -8,12 +21,52 @@ class CollectBlenderCurrentFile(pyblish.api.ContextPlugin):
order = pyblish.api.CollectorOrder - 0.5
label = "Blender Current File"
hosts = ['blender']
hosts = ["blender"]
actions = [SaveWorkfiledAction]
def process(self, context):
"""Inject the current working file"""
current_file = bpy.data.filepath
context.data['currentFile'] = current_file
current_file = workio.current_file()
assert current_file != '', "Current file is empty. " \
"Save the file before continuing."
context.data["currentFile"] = current_file
assert current_file, (
"Current file is empty. Save the file before continuing."
)
folder, file = os.path.split(current_file)
filename, ext = os.path.splitext(file)
task = legacy_io.Session["AVALON_TASK"]
data = {}
# create instance
instance = context.create_instance(name=filename)
subset = "workfile" + task.capitalize()
data.update({
"subset": subset,
"asset": os.getenv("AVALON_ASSET", None),
"label": subset,
"publish": True,
"family": "workfile",
"families": ["workfile"],
"setMembers": [current_file],
"frameStart": bpy.context.scene.frame_start,
"frameEnd": bpy.context.scene.frame_end,
})
data["representations"] = [{
"name": ext.lstrip("."),
"ext": ext.lstrip("."),
"files": file,
"stagingDir": folder,
}]
instance.data.update(data)
self.log.info("Collected instance: {}".format(file))
self.log.info("Scene path: {}".format(current_file))
self.log.info("staging Dir: {}".format(folder))
self.log.info("subset: {}".format(subset))

View file

@ -2,12 +2,12 @@ import os
import bpy
from openpype import api
from openpype.pipeline import publish
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
class ExtractABC(api.Extractor):
class ExtractABC(publish.Extractor):
"""Extract as ABC."""
label = "Extract ABC"

View file

@ -2,10 +2,10 @@ import os
import bpy
import openpype.api
from openpype.pipeline import publish
class ExtractBlend(openpype.api.Extractor):
class ExtractBlend(publish.Extractor):
"""Extract a blend file."""
label = "Extract Blend"

View file

@ -2,10 +2,10 @@ import os
import bpy
import openpype.api
from openpype.pipeline import publish
class ExtractBlendAnimation(openpype.api.Extractor):
class ExtractBlendAnimation(publish.Extractor):
"""Extract a blend file."""
label = "Extract Blend"

View file

@ -2,11 +2,11 @@ import os
import bpy
from openpype import api
from openpype.pipeline import publish
from openpype.hosts.blender.api import plugin
class ExtractCamera(api.Extractor):
class ExtractCamera(publish.Extractor):
"""Extract as the camera as FBX."""
label = "Extract Camera"

View file

@ -2,12 +2,12 @@ import os
import bpy
from openpype import api
from openpype.pipeline import publish
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
class ExtractFBX(api.Extractor):
class ExtractFBX(publish.Extractor):
"""Extract as FBX."""
label = "Extract FBX"

View file

@ -5,12 +5,12 @@ import bpy
import bpy_extras
import bpy_extras.anim_utils
from openpype import api
from openpype.pipeline import publish
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
class ExtractAnimationFBX(api.Extractor):
class ExtractAnimationFBX(publish.Extractor):
"""Extract as animation."""
label = "Extract FBX"

View file

@ -6,12 +6,12 @@ import bpy_extras
import bpy_extras.anim_utils
from openpype.client import get_representation_by_name
from openpype.pipeline import publish
from openpype.hosts.blender.api import plugin
from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY
import openpype.api
class ExtractLayout(openpype.api.Extractor):
class ExtractLayout(publish.Extractor):
"""Extract a layout."""
label = "Extract Layout"

View file

@ -1,113 +0,0 @@
import os
import collections
from pprint import pformat
import pyblish.api
from openpype.client import (
get_subsets,
get_last_versions,
get_representations
)
from openpype.pipeline import legacy_io
class AppendCelactionAudio(pyblish.api.ContextPlugin):
label = "Colect Audio for publishing"
order = pyblish.api.CollectorOrder + 0.1
def process(self, context):
self.log.info('Collecting Audio Data')
asset_doc = context.data["assetEntity"]
# get all available representations
subsets = self.get_subsets(
asset_doc,
representations=["audio", "wav"]
)
self.log.info(f"subsets is: {pformat(subsets)}")
if not subsets.get("audioMain"):
raise AttributeError("`audioMain` subset does not exist")
reprs = subsets.get("audioMain", {}).get("representations", [])
self.log.info(f"reprs is: {pformat(reprs)}")
repr = next((r for r in reprs), None)
if not repr:
raise "Missing `audioMain` representation"
self.log.info(f"representation is: {repr}")
audio_file = repr.get('data', {}).get('path', "")
if os.path.exists(audio_file):
context.data["audioFile"] = audio_file
self.log.info(
'audio_file: {}, has been added to context'.format(audio_file))
else:
self.log.warning("Couldn't find any audio file on Ftrack.")
def get_subsets(self, asset_doc, representations):
"""
Query subsets with filter on name.
The method will return all found subsets and its defined version
and subsets. Version could be specified with number. Representation
can be filtered.
Arguments:
asset_doc (dict): Asset (shot) mongo document
representations (list): list for all representations
Returns:
dict: subsets with version and representations in keys
"""
# Query all subsets for asset
project_name = legacy_io.active_project()
subset_docs = get_subsets(
project_name, asset_ids=[asset_doc["_id"]], fields=["_id"]
)
# Collect all subset ids
subset_ids = [
subset_doc["_id"]
for subset_doc in subset_docs
]
# Check if we found anything
assert subset_ids, (
"No subsets found. Check correct filter. "
"Try this for start `r'.*'`: asset: `{}`"
).format(asset_doc["name"])
last_versions_by_subset_id = get_last_versions(
project_name, subset_ids, fields=["_id", "parent"]
)
version_docs_by_id = {}
for version_doc in last_versions_by_subset_id.values():
version_docs_by_id[version_doc["_id"]] = version_doc
repre_docs = get_representations(
project_name,
version_ids=version_docs_by_id.keys(),
representation_names=representations
)
repre_docs_by_version_id = collections.defaultdict(list)
for repre_doc in repre_docs:
version_id = repre_doc["parent"]
repre_docs_by_version_id[version_id].append(repre_doc)
output_dict = {}
for version_id, repre_docs in repre_docs_by_version_id.items():
version_doc = version_docs_by_id[version_id]
subset_id = version_doc["parent"]
subset_doc = last_versions_by_subset_id[subset_id]
# Store queried docs by subset name
output_dict[subset_doc["name"]] = {
"representations": repre_docs,
"version": version_doc
}
return output_dict
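
For context, the lookup above resolves an asset's subsets to their latest versions and matching representations, keyed by subset name. A minimal sketch of how `process` consumes that structure (field names taken from the code above; the `data.path` lookup is kept as-is) might be:
subsets = self.get_subsets(asset_doc, representations=["audio", "wav"])
audio_subset = subsets.get("audioMain", {})
repre = next(iter(audio_subset.get("representations", [])), None)
if repre:
    audio_file = repre.get("data", {}).get("path", "")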

View file

@ -51,7 +51,8 @@ from .pipeline import (
)
from .menu import (
FlameMenuProjectConnect,
FlameMenuTimeline
FlameMenuTimeline,
FlameMenuUniversal
)
from .plugin import (
Creator,
@ -131,6 +132,7 @@ __all__ = [
# menu
"FlameMenuProjectConnect",
"FlameMenuTimeline",
"FlameMenuUniversal",
# plugin
"Creator",

View file

@ -766,11 +766,11 @@ class MediaInfoFile(object):
_drop_mode = None
_file_pattern = None
def __init__(self, path, **kwargs):
def __init__(self, path, logger=None):
# replace log if any
if kwargs.get("logger"):
self.log = kwargs["logger"]
if logger:
self.log = logger
# test if `dl_get_media_info` path exists
self._validate_media_script_path()

View file

@ -201,3 +201,53 @@ class FlameMenuTimeline(_FlameMenuApp):
if self.flame:
self.flame.execute_shortcut('Rescan Python Hooks')
self.log.info('Rescan Python Hooks')
class FlameMenuUniversal(_FlameMenuApp):
# flameMenuProjectconnect app takes care of the preferences dialog as well
def __init__(self, framework):
_FlameMenuApp.__init__(self, framework)
def __getattr__(self, name):
def method(*args, **kwargs):
project = self.dynamic_menu_data.get(name)
if project:
self.link_project(project)
return method
def build_menu(self):
if not self.flame:
return []
menu = deepcopy(self.menu)
menu['actions'].append({
"name": "Load...",
"execute": lambda x: self.tools_helper.show_loader()
})
menu['actions'].append({
"name": "Manage...",
"execute": lambda x: self.tools_helper.show_scene_inventory()
})
menu['actions'].append({
"name": "Library...",
"execute": lambda x: self.tools_helper.show_library_loader()
})
return menu
def refresh(self, *args, **kwargs):
self.rescan()
def rescan(self, *args, **kwargs):
if not self.flame:
try:
import flame
self.flame = flame
except ImportError:
self.flame = None
if self.flame:
self.flame.execute_shortcut('Rescan Python Hooks')
self.log.info('Rescan Python Hooks')

View file

@ -90,8 +90,7 @@ def containerise(flame_clip_segment,
def ls():
"""List available containers.
"""
# TODO: ls
pass
return []
def parse_container(tl_segment, validate=True):
@ -107,6 +106,7 @@ def update_container(tl_segment, data=None):
# TODO: update_container
pass
def on_pyblish_instance_toggled(instance, old_value, new_value):
"""Toggle node passthrough states on instance toggles."""

View file

@ -361,6 +361,8 @@ class PublishableClip:
index_from_segment_default = False
use_shot_name_default = False
include_handles_default = False
retimed_handles_default = True
retimed_framerange_default = True
def __init__(self, segment, **kwargs):
self.rename_index = kwargs["rename_index"]
@ -496,6 +498,14 @@ class PublishableClip:
"audio", {}).get("value") or False
self.include_handles = self.ui_inputs.get(
"includeHandles", {}).get("value") or self.include_handles_default
self.retimed_handles = (
self.ui_inputs.get("retimedHandles", {}).get("value")
or self.retimed_handles_default
)
self.retimed_framerange = (
self.ui_inputs.get("retimedFramerange", {}).get("value")
or self.retimed_framerange_default
)
# build subset name from layer name
if self.subset_name == "[ track name ]":
@ -668,6 +678,7 @@ class ClipLoader(LoaderPlugin):
`update` logic.
"""
log = log
options = [
qargparse.Boolean(
@ -684,16 +695,20 @@ class OpenClipSolver(flib.MediaInfoFile):
log = log
def __init__(self, openclip_file_path, feed_data):
def __init__(self, openclip_file_path, feed_data, logger=None):
self.out_file = openclip_file_path
# replace log if any
if logger:
self.log = logger
# new feed variables:
feed_path = feed_data.pop("path")
# initialize parent class
super(OpenClipSolver, self).__init__(
feed_path,
**feed_data
logger=logger
)
# get other metadata
@ -741,17 +756,18 @@ class OpenClipSolver(flib.MediaInfoFile):
self.log.info("Building new openClip")
self.log.debug(">> self.clip_data: {}".format(self.clip_data))
# clip data coming from MediaInfoFile
tmp_xml_feeds = self.clip_data.find('tracks/track/feeds')
tmp_xml_feeds.set('currentVersion', self.feed_version_name)
for tmp_feed in tmp_xml_feeds:
tmp_feed.set('vuid', self.feed_version_name)
for tmp_xml_track in self.clip_data.iter("track"):
tmp_xml_feeds = tmp_xml_track.find('feeds')
tmp_xml_feeds.set('currentVersion', self.feed_version_name)
# add colorspace if any is set
if self.feed_colorspace:
self._add_colorspace(tmp_feed, self.feed_colorspace)
for tmp_feed in tmp_xml_track.iter("feed"):
tmp_feed.set('vuid', self.feed_version_name)
self._clear_handler(tmp_feed)
# add colorspace if any is set
if self.feed_colorspace:
self._add_colorspace(tmp_feed, self.feed_colorspace)
self._clear_handler(tmp_feed)
tmp_xml_versions_obj = self.clip_data.find('versions')
tmp_xml_versions_obj.set('currentVersion', self.feed_version_name)
@ -764,6 +780,17 @@ class OpenClipSolver(flib.MediaInfoFile):
self.write_clip_data_to_file(self.out_file, self.clip_data)
def _get_xml_track_obj_by_uid(self, xml_data, uid):
# loop all tracks of input xml data
for xml_track in xml_data.iter("track"):
track_uid = xml_track.get("uid")
self.log.debug(
">> track_uid:uid: {}:{}".format(track_uid, uid))
# get matching uids
if uid == track_uid:
return xml_track
def _update_open_clip(self):
self.log.info("Updating openClip ..")
@ -773,52 +800,81 @@ class OpenClipSolver(flib.MediaInfoFile):
self.log.debug(">> out_xml: {}".format(out_xml))
self.log.debug(">> self.clip_data: {}".format(self.clip_data))
# Get new feed from tmp file
tmp_xml_feed = self.clip_data.find('tracks/track/feeds/feed')
# loop tmp tracks
updated_any = False
for tmp_xml_track in self.clip_data.iter("track"):
# get tmp track uid
tmp_track_uid = tmp_xml_track.get("uid")
self.log.debug(">> tmp_track_uid: {}".format(tmp_track_uid))
self._clear_handler(tmp_xml_feed)
# get out data track by uid
out_track_element = self._get_xml_track_obj_by_uid(
out_xml, tmp_track_uid)
self.log.debug(
">> out_track_element: {}".format(out_track_element))
# update fps from MediaInfoFile class
if self.fps:
tmp_feed_fps_obj = tmp_xml_feed.find(
"startTimecode/rate")
tmp_feed_fps_obj.text = str(self.fps)
# loop tmp feeds
for tmp_xml_feed in tmp_xml_track.iter("feed"):
new_path_obj = tmp_xml_feed.find(
"spans/span/path")
new_path = new_path_obj.text
# update start_frame from MediaInfoFile class
if self.start_frame:
tmp_feed_nb_ticks_obj = tmp_xml_feed.find(
"startTimecode/nbTicks")
tmp_feed_nb_ticks_obj.text = str(self.start_frame)
# check if feed path already exists in track's feeds
if (
out_track_element is not None
and self._feed_exists(out_track_element, new_path)
):
continue
# update drop_mode from MediaInfoFile class
if self.drop_mode:
tmp_feed_drop_mode_obj = tmp_xml_feed.find(
"startTimecode/dropMode")
tmp_feed_drop_mode_obj.text = str(self.drop_mode)
# rename versions on feeds
tmp_xml_feed.set('vuid', self.feed_version_name)
self._clear_handler(tmp_xml_feed)
new_path_obj = tmp_xml_feed.find(
"spans/span/path")
new_path = new_path_obj.text
# update fps from MediaInfoFile class
if self.fps is not None:
tmp_feed_fps_obj = tmp_xml_feed.find(
"startTimecode/rate")
tmp_feed_fps_obj.text = str(self.fps)
feed_added = False
if not self._feed_exists(out_xml, new_path):
tmp_xml_feed.set('vuid', self.feed_version_name)
# Append new temp file feed to .clip source out xml
out_track = out_xml.find("tracks/track")
# add colorspace if any is set
if self.feed_colorspace:
self._add_colorspace(tmp_xml_feed, self.feed_colorspace)
# update start_frame from MediaInfoFile class
if self.start_frame is not None:
tmp_feed_nb_ticks_obj = tmp_xml_feed.find(
"startTimecode/nbTicks")
tmp_feed_nb_ticks_obj.text = str(self.start_frame)
out_feeds = out_track.find('feeds')
out_feeds.set('currentVersion', self.feed_version_name)
out_feeds.append(tmp_xml_feed)
# update drop_mode from MediaInfoFile class
if self.drop_mode is not None:
tmp_feed_drop_mode_obj = tmp_xml_feed.find(
"startTimecode/dropMode")
tmp_feed_drop_mode_obj.text = str(self.drop_mode)
self.log.info(
"Appending new feed: {}".format(
self.feed_version_name))
feed_added = True
# add colorspace if any is set
if self.feed_colorspace is not None:
self._add_colorspace(tmp_xml_feed, self.feed_colorspace)
if feed_added:
# then append/update feed to correct track in output
if out_track_element:
self.log.debug("updating track element ..")
# update already present track
out_feeds = out_track_element.find('feeds')
out_feeds.set('currentVersion', self.feed_version_name)
out_feeds.append(tmp_xml_feed)
self.log.info(
"Appending new feed: {}".format(
self.feed_version_name))
else:
self.log.debug("adding new track element ..")
# create new track as it doesn't exist yet
# set current version to feeds on tmp
tmp_xml_feeds = tmp_xml_track.find('feeds')
tmp_xml_feeds.set('currentVersion', self.feed_version_name)
out_tracks = out_xml.find("tracks")
out_tracks.append(tmp_xml_track)
updated_any = True
if updated_any:
# Append vUID to versions
out_xml_versions_obj = out_xml.find('versions')
out_xml_versions_obj.set(

View file

@ -22,6 +22,7 @@ class FlamePrelaunch(PreLaunchHook):
in environment var FLAME_SCRIPT_DIR.
"""
app_groups = ["flame"]
permissions = 0o777
wtc_script_path = os.path.join(
opflame.HOST_DIR, "api", "scripts", "wiretap_com.py")
@ -38,6 +39,7 @@ class FlamePrelaunch(PreLaunchHook):
"""Hook entry method."""
project_doc = self.data["project_doc"]
project_name = project_doc["name"]
volume_name = _env.get("FLAME_WIRETAP_VOLUME")
# get image io
project_anatomy = self.data["anatomy"]
@ -81,7 +83,7 @@ class FlamePrelaunch(PreLaunchHook):
data_to_script = {
# from settings
"host_name": _env.get("FLAME_WIRETAP_HOSTNAME") or hostname,
"volume_name": _env.get("FLAME_WIRETAP_VOLUME"),
"volume_name": volume_name,
"group_name": _env.get("FLAME_WIRETAP_GROUP"),
"color_policy": str(imageio_flame["project"]["colourPolicy"]),
@ -99,8 +101,41 @@ class FlamePrelaunch(PreLaunchHook):
app_arguments = self._get_launch_arguments(data_to_script)
# fix project data permission issue
self._fix_permissions(project_name, volume_name)
self.launch_context.launch_args.extend(app_arguments)
def _fix_permissions(self, project_name, volume_name):
"""Work around for project data permissions
Reported issue: when project is created locally on one machine,
it is impossible to migrate it to other machine. Autodesk Flame
is crating some unmanagable files which needs to be opened to 0o777.
Args:
project_name (str): project name
volume_name (str): studio volume
"""
dirs_to_modify = [
"/usr/discreet/project/{}".format(project_name),
"/opt/Autodesk/clip/{}/{}.prj".format(volume_name, project_name),
"/usr/discreet/clip/{}/{}.prj".format(volume_name, project_name)
]
for dirtm in dirs_to_modify:
for root, dirs, files in os.walk(dirtm):
try:
for name in set(dirs) | set(files):
path = os.path.join(root, name)
st = os.stat(path)
if oct(st.st_mode) != self.permissions:
os.chmod(path, self.permissions)
except OSError as exc:
self.log.warning("Not able to open files: {}".format(exc))
def _get_flame_fps(self, fps_num):
fps_table = {
float(23.976): "23.976 fps",

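Note that `oct(st.st_mode)` yields a string such as `'0o100777'`, so comparing it with the integer `0o777` never matches and `chmod` is applied to every entry. A sketch of a permission-bit comparison using `stat.S_IMODE` (an assumption about the intended check, not what the hook does) would be:
import os
import stat

def needs_chmod(path, permissions=0o777):
    # compare only the permission bits of the file mode
    return stat.S_IMODE(os.stat(path).st_mode) != permissions
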
View file

@ -23,10 +23,11 @@ class CreateShotClip(opfapi.Creator):
# nested dictionary (only one level allowed
# for sections and dict)
for _k, _v in v["value"].items():
if presets.get(_k):
if presets.get(_k) is not None:
gui_inputs[k][
"value"][_k]["value"] = presets[_k]
if presets.get(k):
if presets.get(k) is not None:
gui_inputs[k]["value"] = presets[k]
# open widget for plugins inputs
@ -276,6 +277,22 @@ class CreateShotClip(opfapi.Creator):
"target": "tag",
"toolTip": "By default handles are excluded", # noqa
"order": 3
},
"retimedHandles": {
"value": True,
"type": "QCheckBox",
"label": "Retimed handles",
"target": "tag",
"toolTip": "By default handles are retimed.", # noqa
"order": 4
},
"retimedFramerange": {
"value": True,
"type": "QCheckBox",
"label": "Retimed framerange",
"target": "tag",
"toolTip": "By default framerange is retimed.", # noqa
"order": 5
}
}
}
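
The switch from a truthiness test to `is not None` matters when a preset is explicitly `False` or `0`; a small illustration with a hypothetical preset dict:
presets = {"retimedHandles": False}  # hypothetical settings preset
value = True  # default from gui_inputs

if presets.get("retimedHandles"):  # truthiness: an explicit False is ignored
    value = presets["retimedHandles"]

if presets.get("retimedHandles") is not None:  # applies False as intended
    value = presets["retimedHandles"]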

View file

@ -4,6 +4,7 @@ from pprint import pformat
import openpype.hosts.flame.api as opfapi
from openpype.lib import StringTemplate
class LoadClip(opfapi.ClipLoader):
"""Load a subset to timeline as clip
@ -60,8 +61,6 @@ class LoadClip(opfapi.ClipLoader):
"path": self.fname.replace("\\", "/"),
"colorspace": colorspace,
"version": "v{:0>3}".format(version_name),
"logger": self.log
}
self.log.debug(pformat(
loading_context
@ -69,7 +68,8 @@ class LoadClip(opfapi.ClipLoader):
self.log.debug(openclip_path)
# make openpype clip file
opfapi.OpenClipSolver(openclip_path, loading_context).make()
opfapi.OpenClipSolver(
openclip_path, loading_context, logger=self.log).make()
# prepare Reel group in actual desktop
opc = self._get_clip(

View file

@ -64,8 +64,6 @@ class LoadClipBatch(opfapi.ClipLoader):
"path": self.fname.replace("\\", "/"),
"colorspace": colorspace,
"version": "v{:0>3}".format(version_name),
"logger": self.log
}
self.log.debug(pformat(
loading_context
@ -73,7 +71,8 @@ class LoadClipBatch(opfapi.ClipLoader):
self.log.debug(openclip_path)
# make openpype clip file
opfapi.OpenClipSolver(openclip_path, loading_context).make()
opfapi.OpenClipSolver(
openclip_path, loading_context, logger=self.log).make()
# prepare Reel group in actual desktop
opc = self._get_clip(

View file

@ -131,6 +131,9 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
"fps": self.fps,
"workfileFrameStart": workfile_start,
"sourceFirstFrame": int(first_frame),
"retimedHandles": marker_data.get("retimedHandles"),
"shotDurationFromSource": (
not marker_data.get("retimedFramerange")),
"path": file_path,
"flameAddTasks": self.add_tasks,
"tasks": {

View file

@ -1,7 +1,6 @@
import os
import re
import tempfile
from pprint import pformat
from copy import deepcopy
import pyblish.api
@ -80,40 +79,53 @@ class ExtractSubsetResources(openpype.api.Extractor):
retimed_data = self._get_retimed_attributes(instance)
# get individual keys
r_handle_start = retimed_data["handle_start"]
r_handle_end = retimed_data["handle_end"]
r_source_dur = retimed_data["source_duration"]
r_speed = retimed_data["speed"]
retimed_handle_start = retimed_data["handle_start"]
retimed_handle_end = retimed_data["handle_end"]
retimed_source_duration = retimed_data["source_duration"]
retimed_speed = retimed_data["speed"]
# get handles value - take only the max from both
handle_start = instance.data["handleStart"]
handle_end = instance.data["handleEnd"]
handles = max(handle_start, handle_end)
include_handles = instance.data.get("includeHandles")
retimed_handles = instance.data.get("retimedHandles")
# get media source range with handles
source_start_handles = instance.data["sourceStartH"]
source_end_handles = instance.data["sourceEndH"]
# retime if needed
if r_speed != 1.0:
source_start_handles = (
instance.data["sourceStart"] - r_handle_start)
source_end_handles = (
source_start_handles
+ (r_source_dur - 1)
+ r_handle_start
+ r_handle_end
)
if retimed_speed != 1.0:
if retimed_handles:
# handles are retimed
source_start_handles = (
instance.data["sourceStart"] - retimed_handle_start)
source_end_handles = (
source_start_handles
+ (retimed_source_duration - 1)
+ retimed_handle_start
+ retimed_handle_end
)
else:
# handles are not retimed
source_end_handles = (
source_start_handles
+ (retimed_source_duration - 1)
+ handle_start
+ handle_end
)
# get frame range with handles for representation range
frame_start_handle = frame_start - handle_start
repre_frame_start = frame_start_handle
if include_handles:
if r_speed == 1.0:
if retimed_speed == 1.0 or not retimed_handles:
frame_start_handle = frame_start
else:
frame_start_handle = (
frame_start - handle_start) + r_handle_start
frame_start - handle_start) + retimed_handle_start
self.log.debug("_ frame_start_handle: {}".format(
frame_start_handle))
@ -124,6 +136,9 @@ class ExtractSubsetResources(openpype.api.Extractor):
source_duration_handles = (
source_end_handles - source_start_handles) + 1
self.log.debug("_ source_duration_handles: {}".format(
source_duration_handles))
# create staging dir path
staging_dir = self.staging_dir(instance)
@ -147,15 +162,30 @@ class ExtractSubsetResources(openpype.api.Extractor):
if version_data:
instance.data["versionData"].update(version_data)
if r_speed != 1.0:
instance.data["versionData"].update({
"frameStart": frame_start_handle,
"frameEnd": (
(frame_start_handle + source_duration_handles - 1)
- (r_handle_start + r_handle_end)
)
})
self.log.debug("_ i_version_data: {}".format(
# version data start frame
version_frame_start = frame_start
if include_handles:
version_frame_start = frame_start_handle
if retimed_speed != 1.0:
if retimed_handles:
instance.data["versionData"].update({
"frameStart": version_frame_start,
"frameEnd": (
(version_frame_start + source_duration_handles - 1)
- (retimed_handle_start + retimed_handle_end)
)
})
else:
instance.data["versionData"].update({
"handleStart": handle_start,
"handleEnd": handle_end,
"frameStart": version_frame_start,
"frameEnd": (
(version_frame_start + source_duration_handles - 1)
- (handle_start + handle_end)
)
})
self.log.debug("_ version_data: {}".format(
instance.data["versionData"]
))
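
As a worked example of the retimed branch with retimed handles enabled (values are illustrative): with `sourceStart = 1001`, `retimed_handle_start = retimed_handle_end = 10` and `retimed_source_duration = 50`:
source_start_handles = 1001 - 10                    # 991
source_end_handles = 991 + (50 - 1) + 10 + 10       # 1060
source_duration_handles = (1060 - 991) + 1          # 70 frames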

View file

@ -73,6 +73,8 @@ def load_apps():
opfapi.FlameMenuProjectConnect(opfapi.CTX.app_framework))
opfapi.CTX.flame_apps.append(
opfapi.FlameMenuTimeline(opfapi.CTX.app_framework))
opfapi.CTX.flame_apps.append(
opfapi.FlameMenuUniversal(opfapi.CTX.app_framework))
opfapi.CTX.app_framework.log.info("Apps are loaded")
@ -191,3 +193,27 @@ def get_timeline_custom_ui_actions():
openpype_install()
return _build_app_menu("FlameMenuTimeline")
def get_batch_custom_ui_actions():
"""Hook to create submenu in batch
Returns:
list: menu object
"""
# install openpype and the host
openpype_install()
return _build_app_menu("FlameMenuUniversal")
def get_media_panel_custom_ui_actions():
"""Hook to create submenu in desktop
Returns:
list: menu object
"""
# install openpype and the host
openpype_install()
return _build_app_menu("FlameMenuUniversal")

View file

@ -0,0 +1,10 @@
from .addon import (
FusionAddon,
FUSION_HOST_DIR,
)
__all__ = (
"FusionAddon",
"FUSION_HOST_DIR",
)

View file

@ -0,0 +1,32 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
FUSION_HOST_DIR = os.path.dirname(os.path.abspath(__file__))
class FusionAddon(OpenPypeModule, IHostAddon):
name = "fusion"
host_name = "fusion"
def initialize(self, module_settings):
self.enabled = True
def get_launch_hook_paths(self, app):
if app.host_name != self.host_name:
return []
return [
os.path.join(FUSION_HOST_DIR, "hooks")
]
def add_implementation_envs(self, env, _app):
# Set default values if are not already set via settings
defaults = {
"OPENPYPE_LOG_NO_COLORS": "Yes"
}
for key, value in defaults.items():
if not env.get(key):
env[key] = value
def get_workfile_extensions(self):
return [".comp"]

View file

@ -5,10 +5,7 @@ from .pipeline import (
ls,
imprint_container,
parse_container,
get_current_comp,
comp_lock_and_undo_chunk
parse_container
)
from .workio import (
@ -22,8 +19,10 @@ from .workio import (
from .lib import (
maintained_selection,
get_additional_data,
update_frame_range
update_frame_range,
set_asset_framerange,
get_current_comp,
comp_lock_and_undo_chunk
)
from .menu import launch_openpype_menu
@ -38,9 +37,6 @@ __all__ = [
"imprint_container",
"parse_container",
"get_current_comp",
"comp_lock_and_undo_chunk",
# workio
"open_file",
"save_file",
@ -51,8 +47,10 @@ __all__ = [
# lib
"maintained_selection",
"get_additional_data",
"update_frame_range",
"set_asset_framerange",
"get_current_comp",
"comp_lock_and_undo_chunk",
# menu
"launch_openpype_menu",

View file

@ -5,6 +5,7 @@ import contextlib
from Qt import QtGui
from openpype.lib import Logger
from openpype.client import (
get_asset_by_name,
get_subset_by_name,
@ -17,13 +18,14 @@ from openpype.pipeline import (
switch_container,
legacy_io,
)
from .pipeline import get_current_comp, comp_lock_and_undo_chunk
from openpype.pipeline.context_tools import get_current_project_asset
self = sys.modules[__name__]
self._project = None
def update_frame_range(start, end, comp=None, set_render_range=True):
def update_frame_range(start, end, comp=None, set_render_range=True,
handle_start=0, handle_end=0):
"""Set Fusion comp's start and end frame range
Args:
@ -32,6 +34,8 @@ def update_frame_range(start, end, comp=None, set_render_range=True):
comp (object, Optional): comp object from fusion
set_render_range (bool, Optional): When True this will also set the
composition's render start and end frame.
handle_start (float, int, Optional): frame handles before start frame
handle_end (float, int, Optional): frame handles after end frame
Returns:
None
@ -41,11 +45,16 @@ def update_frame_range(start, end, comp=None, set_render_range=True):
if not comp:
comp = get_current_comp()
# Convert any potential none type to zero
handle_start = handle_start or 0
handle_end = handle_end or 0
attrs = {
"COMPN_GlobalStart": start,
"COMPN_GlobalEnd": end
"COMPN_GlobalStart": start - handle_start,
"COMPN_GlobalEnd": end + handle_end
}
# set frame range
if set_render_range:
attrs.update({
"COMPN_RenderStart": start,
@ -56,24 +65,116 @@ def update_frame_range(start, end, comp=None, set_render_range=True):
comp.SetAttrs(attrs)
def get_additional_data(container):
"""Get Fusion related data for the container
def set_asset_framerange():
"""Set Comp's frame range based on current asset"""
asset_doc = get_current_project_asset()
start = asset_doc["data"]["frameStart"]
end = asset_doc["data"]["frameEnd"]
handle_start = asset_doc["data"]["handleStart"]
handle_end = asset_doc["data"]["handleEnd"]
update_frame_range(start, end, set_render_range=True,
handle_start=handle_start,
handle_end=handle_end)
Args:
container(dict): the container found by the ls() function
Returns:
dict
def set_asset_resolution():
"""Set Comp's resolution width x height default based on current asset"""
asset_doc = get_current_project_asset()
width = asset_doc["data"]["resolutionWidth"]
height = asset_doc["data"]["resolutionHeight"]
comp = get_current_comp()
print("Setting comp frame format resolution to {}x{}".format(width,
height))
comp.SetPrefs({
"Comp.FrameFormat.Width": width,
"Comp.FrameFormat.Height": height,
})
def validate_comp_prefs(comp=None):
"""Validate current comp defaults with asset settings.
Validates fps, resolutionWidth, resolutionHeight, aspectRatio.
This does *not* validate frameStart, frameEnd, handleStart and handleEnd.
"""
tool = container["_tool"]
tile_color = tool.TileColor
if tile_color is None:
return {}
if comp is None:
comp = get_current_comp()
return {"color": QtGui.QColor.fromRgbF(tile_color["R"],
tile_color["G"],
tile_color["B"])}
log = Logger.get_logger("validate_comp_prefs")
fields = [
"name",
"data.fps",
"data.resolutionWidth",
"data.resolutionHeight",
"data.pixelAspect"
]
asset_doc = get_current_project_asset(fields=fields)
asset_data = asset_doc["data"]
comp_frame_format_prefs = comp.GetPrefs("Comp.FrameFormat")
# Pixel aspect ratio in Fusion is set as AspectX and AspectY so we convert
# the data to something that is more sensible to Fusion
asset_data["pixelAspectX"] = asset_data.pop("pixelAspect")
asset_data["pixelAspectY"] = 1.0
validations = [
("fps", "Rate", "FPS"),
("resolutionWidth", "Width", "Resolution Width"),
("resolutionHeight", "Height", "Resolution Height"),
("pixelAspectX", "AspectX", "Pixel Aspect Ratio X"),
("pixelAspectY", "AspectY", "Pixel Aspect Ratio Y")
]
invalid = []
for key, comp_key, label in validations:
asset_value = asset_data[key]
comp_value = comp_frame_format_prefs.get(comp_key)
if asset_value != comp_value:
# todo: Actually show dialog to user instead of just logging
log.warning(
"Comp {pref} {value} does not match asset "
"'{asset_name}' {pref} {asset_value}".format(
pref=label,
value=comp_value,
asset_name=asset_doc["name"],
asset_value=asset_value)
)
invalid_msg = "{} {} should be {}".format(label,
comp_value,
asset_value)
invalid.append(invalid_msg)
if invalid:
def _on_repair():
attributes = dict()
for key, comp_key, _label in validations:
value = asset_data[key]
comp_key_full = "Comp.FrameFormat.{}".format(comp_key)
attributes[comp_key_full] = value
comp.SetPrefs(attributes)
from . import menu
from openpype.widgets import popup
from openpype.style import load_stylesheet
dialog = popup.Popup(parent=menu.menu)
dialog.setWindowTitle("Fusion comp has invalid configuration")
msg = "Comp preferences mismatches '{}'".format(asset_doc["name"])
msg += "\n" + "\n".join(invalid)
dialog.setMessage(msg)
dialog.setButtonText("Repair")
dialog.on_clicked.connect(_on_repair)
dialog.show()
dialog.raise_()
dialog.activateWindow()
dialog.setStyleSheet(load_stylesheet())
def switch_item(container,
@ -195,3 +296,21 @@ def get_frame_path(path):
padding = 4 # default Fusion padding
return filename, padding, ext
def get_current_comp():
"""Hack to get current comp in this session"""
fusion = getattr(sys.modules["__main__"], "fusion", None)
return fusion.CurrentComp if fusion else None
@contextlib.contextmanager
def comp_lock_and_undo_chunk(comp, undo_queue_name="Script CMD"):
"""Lock comp and open an undo chunk during the context"""
try:
comp.Lock()
comp.StartUndo(undo_queue_name)
yield
finally:
comp.Unlock()
comp.EndUndo()
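
A minimal usage sketch of the frame range helpers above (asset values are illustrative): handles widen the comp's global range while the render range keeps the asset's start/end:
comp = get_current_comp()
update_frame_range(1001, 1100, comp=comp, set_render_range=True,
                   handle_start=10, handle_end=10)
# COMPN_GlobalStart -> 991, COMPN_GlobalEnd -> 1110
# COMPN_RenderStart -> 1001, COMPN_RenderEnd -> 1100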

View file

@ -1,43 +1,25 @@
import os
import sys
from Qt import QtWidgets, QtCore
from Qt import QtWidgets, QtCore, QtGui
from openpype import style
from openpype.tools.utils import host_tools
from openpype.style import load_stylesheet
from openpype.lib import register_event_callback
from openpype.hosts.fusion.scripts import (
set_rendermode,
duplicate_with_inputs
)
from openpype.hosts.fusion.api.lib import (
set_asset_framerange,
set_asset_resolution
)
from openpype.pipeline import legacy_io
from openpype.resources import get_openpype_icon_filepath
from .pulse import FusionPulse
def load_stylesheet():
path = os.path.join(os.path.dirname(__file__), "menu_style.qss")
if not os.path.exists(path):
print("Unable to load stylesheet, file not found in resources")
return ""
with open(path, "r") as file_stream:
stylesheet = file_stream.read()
return stylesheet
class Spacer(QtWidgets.QWidget):
def __init__(self, height, *args, **kwargs):
super(Spacer, self).__init__(*args, **kwargs)
self.setFixedHeight(height)
real_spacer = QtWidgets.QWidget(self)
real_spacer.setObjectName("Spacer")
real_spacer.setFixedHeight(height)
layout = QtWidgets.QVBoxLayout(self)
layout.setContentsMargins(0, 0, 0, 0)
layout.addWidget(real_spacer)
self.setLayout(layout)
self = sys.modules[__name__]
self.menu = None
class OpenPypeMenu(QtWidgets.QWidget):
@ -46,15 +28,29 @@ class OpenPypeMenu(QtWidgets.QWidget):
self.setObjectName("OpenPypeMenu")
icon_path = get_openpype_icon_filepath()
icon = QtGui.QIcon(icon_path)
self.setWindowIcon(icon)
self.setWindowFlags(
QtCore.Qt.Window
| QtCore.Qt.CustomizeWindowHint
| QtCore.Qt.WindowTitleHint
| QtCore.Qt.WindowMinimizeButtonHint
| QtCore.Qt.WindowCloseButtonHint
| QtCore.Qt.WindowStaysOnTopHint
)
self.render_mode_widget = None
self.setWindowTitle("OpenPype")
asset_label = QtWidgets.QLabel("Context", self)
asset_label.setStyleSheet("""QLabel {
font-size: 14px;
font-weight: 600;
color: #5f9fb8;
}""")
asset_label.setAlignment(QtCore.Qt.AlignHCenter)
workfiles_btn = QtWidgets.QPushButton("Workfiles...", self)
create_btn = QtWidgets.QPushButton("Create...", self)
publish_btn = QtWidgets.QPushButton("Publish...", self)
@ -62,77 +58,107 @@ class OpenPypeMenu(QtWidgets.QWidget):
manager_btn = QtWidgets.QPushButton("Manage...", self)
libload_btn = QtWidgets.QPushButton("Library...", self)
rendermode_btn = QtWidgets.QPushButton("Set render mode...", self)
set_framerange_btn = QtWidgets.QPushButton("Set Frame Range", self)
set_resolution_btn = QtWidgets.QPushButton("Set Resolution", self)
duplicate_with_inputs_btn = QtWidgets.QPushButton(
"Duplicate with input connections", self
)
reset_resolution_btn = QtWidgets.QPushButton(
"Reset Resolution from project", self
)
layout = QtWidgets.QVBoxLayout(self)
layout.setContentsMargins(10, 20, 10, 20)
layout.addWidget(asset_label)
layout.addSpacing(20)
layout.addWidget(workfiles_btn)
layout.addSpacing(20)
layout.addWidget(create_btn)
layout.addWidget(publish_btn)
layout.addWidget(load_btn)
layout.addWidget(publish_btn)
layout.addWidget(manager_btn)
layout.addWidget(Spacer(15, self))
layout.addSpacing(20)
layout.addWidget(libload_btn)
layout.addWidget(Spacer(15, self))
layout.addSpacing(20)
layout.addWidget(set_framerange_btn)
layout.addWidget(set_resolution_btn)
layout.addWidget(rendermode_btn)
layout.addWidget(Spacer(15, self))
layout.addSpacing(20)
layout.addWidget(duplicate_with_inputs_btn)
layout.addWidget(reset_resolution_btn)
self.setLayout(layout)
# Store reference so we can update the label
self.asset_label = asset_label
workfiles_btn.clicked.connect(self.on_workfile_clicked)
create_btn.clicked.connect(self.on_create_clicked)
publish_btn.clicked.connect(self.on_publish_clicked)
load_btn.clicked.connect(self.on_load_clicked)
manager_btn.clicked.connect(self.on_manager_clicked)
libload_btn.clicked.connect(self.on_libload_clicked)
rendermode_btn.clicked.connect(self.on_rendernode_clicked)
rendermode_btn.clicked.connect(self.on_rendermode_clicked)
duplicate_with_inputs_btn.clicked.connect(
self.on_duplicate_with_inputs_clicked)
reset_resolution_btn.clicked.connect(self.on_reset_resolution_clicked)
set_resolution_btn.clicked.connect(self.on_set_resolution_clicked)
set_framerange_btn.clicked.connect(self.on_set_framerange_clicked)
self._callbacks = []
self.register_callback("taskChanged", self.on_task_changed)
self.on_task_changed()
# Force close current process if Fusion is closed
self._pulse = FusionPulse(parent=self)
self._pulse.start()
def on_task_changed(self):
# Update current context label
label = legacy_io.Session["AVALON_ASSET"]
self.asset_label.setText(label)
def register_callback(self, name, fn):
# Create a wrapper callback that we only store
# for as long as we want it to persist as callback
def _callback(*args):
fn()
self._callbacks.append(_callback)
register_event_callback(name, _callback)
def deregister_all_callbacks(self):
self._callbacks[:] = []
def on_workfile_clicked(self):
print("Clicked Workfile")
host_tools.show_workfiles()
def on_create_clicked(self):
print("Clicked Create")
host_tools.show_creator()
def on_publish_clicked(self):
print("Clicked Publish")
host_tools.show_publish()
def on_load_clicked(self):
print("Clicked Load")
host_tools.show_loader(use_context=True)
def on_manager_clicked(self):
print("Clicked Manager")
host_tools.show_scene_inventory()
def on_libload_clicked(self):
print("Clicked Library")
host_tools.show_library_loader()
def on_rendernode_clicked(self):
print("Clicked Set Render Mode")
def on_rendermode_clicked(self):
if self.render_mode_widget is None:
window = set_rendermode.SetRenderMode()
window.setStyleSheet(style.load_stylesheet())
window.setStyleSheet(load_stylesheet())
window.show()
self.render_mode_widget = window
else:
@ -140,15 +166,16 @@ class OpenPypeMenu(QtWidgets.QWidget):
def on_duplicate_with_inputs_clicked(self):
duplicate_with_inputs.duplicate_with_input_connections()
print("Clicked Set Colorspace")
def on_reset_resolution_clicked(self):
print("Clicked Reset Resolution")
def on_set_resolution_clicked(self):
set_asset_resolution()
def on_set_framerange_clicked(self):
set_asset_framerange()
def launch_openpype_menu():
app = QtWidgets.QApplication(sys.argv)
app.setQuitOnLastWindowClosed(False)
pype_menu = OpenPypeMenu()
@ -156,5 +183,8 @@ def launch_openpype_menu():
pype_menu.setStyleSheet(stylesheet)
pype_menu.show()
self.menu = pype_menu
sys.exit(app.exec_())
result = app.exec_()
print("Shutting down..")
sys.exit(result)

View file

@ -1,29 +0,0 @@
QWidget {
background-color: #282828;
border-radius: 3;
}
QPushButton {
border: 1px solid #090909;
background-color: #201f1f;
color: #ffffff;
padding: 5;
}
QPushButton:focus {
background-color: "#171717";
color: #d0d0d0;
}
QPushButton:hover {
background-color: "#171717";
color: #e64b3d;
}
#OpenPypeMenu {
border: 1px solid #fef9ef;
}
#Spacer {
background-color: #282828;
}

View file

@ -2,13 +2,14 @@
Basic avalon integration
"""
import os
import sys
import logging
import contextlib
import pyblish.api
from openpype.lib import Logger
from openpype.lib import (
Logger,
register_event_callback
)
from openpype.pipeline import (
register_loader_plugin_path,
register_creator_plugin_path,
@ -18,12 +19,19 @@ from openpype.pipeline import (
deregister_inventory_action_path,
AVALON_CONTAINER_ID,
)
import openpype.hosts.fusion
from openpype.pipeline.load import any_outdated_containers
from openpype.hosts.fusion import FUSION_HOST_DIR
from openpype.tools.utils import host_tools
from .lib import (
get_current_comp,
comp_lock_and_undo_chunk,
validate_comp_prefs
)
log = Logger.get_logger(__name__)
HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.fusion.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PLUGINS_DIR = os.path.join(FUSION_HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
@ -40,7 +48,7 @@ class CompLogHandler(logging.Handler):
def install():
"""Install fusion-specific functionality of avalon-core.
"""Install fusion-specific functionality of OpenPype.
This is where you install menus and register families, data
and loaders into fusion.
@ -52,7 +60,7 @@ def install():
"""
# Remove all handlers associated with the root logger object, because
# that one sometimes logs as "warnings" incorrectly.
# that one always logs as "warnings" incorrectly.
for handler in logging.root.handlers[:]:
logging.root.removeHandler(handler)
@ -64,8 +72,6 @@ def install():
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
log.info("openpype.hosts.fusion installed")
pyblish.api.register_host("fusion")
pyblish.api.register_plugin_path(PUBLISH_PATH)
log.info("Registering Fusion plug-ins..")
@ -78,6 +84,11 @@ def install():
"instanceToggled", on_pyblish_instance_toggled
)
# Fusion integration currently does not attach to direct callbacks of
# the application. So we use workfile callbacks to allow similar behavior
# on save and open
register_event_callback("workfile.open.after", on_after_open)
def uninstall():
"""Uninstall all that was installed
@ -103,7 +114,7 @@ def uninstall():
)
def on_pyblish_instance_toggled(instance, new_value, old_value):
def on_pyblish_instance_toggled(instance, old_value, new_value):
"""Toggle saver tool passthrough states on instance toggles."""
comp = instance.context.data.get("currentComp")
if not comp:
@ -126,6 +137,38 @@ def on_pyblish_instance_toggled(instance, new_value, old_value):
tool.SetAttrs({"TOOLB_PassThrough": passthrough})
def on_after_open(_event):
comp = get_current_comp()
validate_comp_prefs(comp)
if any_outdated_containers():
log.warning("Scene has outdated content.")
# Find OpenPype menu to attach to
from . import menu
def _on_show_scene_inventory():
# ensure that comp is active
frame = comp.CurrentFrame
if not frame:
print("Comp is closed, skipping show scene inventory")
return
frame.ActivateFrame() # raise comp window
host_tools.show_scene_inventory()
from openpype.widgets import popup
from openpype.style import load_stylesheet
dialog = popup.Popup(parent=menu.menu)
dialog.setWindowTitle("Fusion comp has outdated content")
dialog.setMessage("There are outdated containers in "
"your Fusion comp.")
dialog.on_clicked.connect(_on_show_scene_inventory)
dialog.show()
dialog.raise_()
dialog.activateWindow()
dialog.setStyleSheet(load_stylesheet())
def ls():
"""List containers from active Fusion scene
@ -211,19 +254,3 @@ def parse_container(tool):
return container
def get_current_comp():
"""Hack to get current comp in this session"""
fusion = getattr(sys.modules["__main__"], "fusion", None)
return fusion.CurrentComp if fusion else None
@contextlib.contextmanager
def comp_lock_and_undo_chunk(comp, undo_queue_name="Script CMD"):
"""Lock comp and open an undo chunk during the context"""
try:
comp.Lock()
comp.StartUndo(undo_queue_name)
yield
finally:
comp.Unlock()
comp.EndUndo()

View file

@ -0,0 +1,60 @@
import os
import sys
from Qt import QtCore
class PulseThread(QtCore.QThread):
no_response = QtCore.Signal()
def __init__(self, parent=None):
super(PulseThread, self).__init__(parent=parent)
def run(self):
app = getattr(sys.modules["__main__"], "app", None)
# Interval in milliseconds
interval = int(os.environ.get("OPENPYPE_FUSION_PULSE_INTERVAL", 1000))
while True:
if self.isInterruptionRequested():
return
try:
app.Test()
except Exception:
self.no_response.emit()
self.msleep(interval)
class FusionPulse(QtCore.QObject):
"""A Timer that checks whether host app is still alive.
This checks whether the Fusion process is still active at a certain
interval. This is useful due to how Fusion runs its scripts. Each script
runs in its own environment and process (a `fusionscript` process each).
If Fusion goes down while a UI process is running at the same time,
the `fusionscript.exe` can remain running in the background in limbo,
e.g. because a Qt interface's QApplication keeps running indefinitely.
Warning:
When the host is not detected this will automatically exit
the current process.
"""
def __init__(self, parent=None):
super(FusionPulse, self).__init__(parent=parent)
self._thread = PulseThread(parent=self)
self._thread.no_response.connect(self.on_no_response)
def on_no_response(self):
print("Pulse detected no response from Fusion..")
sys.exit(1)
def start(self):
self._thread.start()
def stop(self):
self._thread.requestInterruption()
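
A minimal sketch of wiring the pulse into a UI (import path assumed from the file location; mirrors how the menu uses it):
from openpype.hosts.fusion.api.pulse import FusionPulse

pulse = FusionPulse(parent=None)
pulse.start()   # polls app.Test() on the configured interval
# ... UI event loop runs ...
pulse.stop()    # request interruption when the UI shuts down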

View file

@ -2,13 +2,11 @@
import sys
import os
from openpype.pipeline import HOST_WORKFILE_EXTENSIONS
from .pipeline import get_current_comp
from .lib import get_current_comp
def file_extensions():
return HOST_WORKFILE_EXTENSIONS["fusion"]
return [".comp"]
def has_unsaved_changes():

View file

@ -0,0 +1,60 @@
{
Action
{
ID = "OpenPype_Menu",
Category = "OpenPype",
Name = "OpenPype Menu",
Targets =
{
Composition =
{
Execute = _Lua [=[
local scriptPath = app:MapPath("OpenPype:MenuScripts/openpype_menu.py")
if bmd.fileexists(scriptPath) == false then
print("[OpenPype Error] Can't run file: " .. scriptPath)
else
target:RunScript(scriptPath)
end
]=],
},
},
},
Action
{
ID = "OpenPype_Install_PySide2",
Category = "OpenPype",
Name = "Install PySide2",
Targets =
{
Composition =
{
Execute = _Lua [=[
local scriptPath = app:MapPath("OpenPype:MenuScripts/install_pyside2.py")
if bmd.fileexists(scriptPath) == false then
print("[OpenPype Error] Can't run file: " .. scriptPath)
else
target:RunScript(scriptPath)
end
]=],
},
},
},
Menus
{
Target = "ChildFrame",
Before "Help"
{
Sub "OpenPype"
{
"OpenPype_Menu{}",
"_",
Sub "Admin" {
"OpenPype_Install_PySide2{}"
}
}
},
},
}

View file

@ -0,0 +1,6 @@
### OpenPype deploy MenuScripts
Note that this `MenuScripts` is not an official Fusion folder.
OpenPype only uses this folder in `{fusion}/deploy/` to trigger the OpenPype menu actions.
The scripts in it are referenced by the actions defined in `.fu` files in `{fusion}/deploy/Config`.

View file

@ -0,0 +1,29 @@
# This is just a quick hack for users running Py3 locally but having no
# Qt library installed
import os
import subprocess
import importlib
try:
from Qt import QtWidgets # noqa: F401
from Qt import __binding__
print(f"Qt binding: {__binding__}")
mod = importlib.import_module(__binding__)
print(f"Qt path: {mod.__file__}")
print("Qt library found, nothing to do..")
except ImportError:
print("Assuming no Qt library is installed..")
print('Installing PySide2 for Python 3.6: '
f'{os.environ["FUSION16_PYTHON36_HOME"]}')
# Get full path to python executable
exe = "python.exe" if os.name == 'nt' else "python"
python = os.path.join(os.environ["FUSION16_PYTHON36_HOME"], exe)
assert os.path.exists(python), f"Python doesn't exist: {python}"
# Do python -m pip install PySide2
args = [python, "-m", "pip", "install", "PySide2"]
print(f"Args: {args}")
subprocess.Popen(args)

View file

@ -9,6 +9,10 @@ from openpype.pipeline import (
def main(env):
# This script working directory starts in Fusion application folder.
# However the contents of that folder can conflict with Qt library dlls
# so we make sure to move out of it to avoid DLL Load Failed errors.
os.chdir("..")
from openpype.hosts.fusion import api
from openpype.hosts.fusion.api import menu
@ -20,6 +24,11 @@ def main(env):
menu.launch_openpype_menu()
# Initiate a QTimer to check if Fusion is still alive every X interval
# If Fusion is not found - kill itself
# todo(roy): Implement timer that ensures UI doesn't remain when e.g.
# Fusion closes down
if __name__ == "__main__":
result = main(os.environ)

View file

@ -0,0 +1,19 @@
{
Locked = true,
Global = {
Paths = {
Map = {
["OpenPype:"] = "$(OPENPYPE_FUSION)/deploy",
["Reactor:"] = "$(REACTOR)",
["Config:"] = "UserPaths:Config;OpenPype:Config",
["Scripts:"] = "UserPaths:Scripts;Reactor:System/Scripts;OpenPype:Scripts",
["UserPaths:"] = "UserData:;AllData:;Fusion:;Reactor:Deploy"
},
},
Script = {
PythonVersion = 3,
Python3Forced = true
},
},
}

View file

@ -0,0 +1,40 @@
import os
import platform
from openpype.lib import PreLaunchHook, ApplicationLaunchFailed
class FusionPreLaunchOCIO(PreLaunchHook):
"""Set OCIO environment variable for Fusion"""
app_groups = ["fusion"]
def execute(self):
"""Hook entry method."""
# get image io
project_settings = self.data["project_settings"]
# make sure anatomy settings are having flame key
imageio_fusion = project_settings.get("fusion", {}).get("imageio")
if not imageio_fusion:
raise ApplicationLaunchFailed((
"Anatomy project settings are missing `fusion` key. "
"Please make sure you remove project overrides on "
"Anatomy ImageIO")
)
ocio = imageio_fusion.get("ocio")
enabled = ocio.get("enabled", False)
if not enabled:
return
platform_key = platform.system().lower()
ocio_path = ocio["configFilePath"][platform_key]
if not ocio_path:
raise ApplicationLaunchFailed(
"Fusion OCIO is enabled in project settings but no OCIO config"
f"path is set for your current platform: {platform_key}"
)
self.log.info(f"Setting OCIO config path: {ocio_path}")
self.launch_context.env["OCIO"] = os.pathsep.join(ocio_path)

View file

@ -1,114 +1,61 @@
import os
import shutil
import openpype.hosts.fusion
from openpype.lib import PreLaunchHook, ApplicationLaunchFailed
from openpype.hosts.fusion import FUSION_HOST_DIR
class FusionPrelaunch(PreLaunchHook):
"""
This hook will check if current workfile path has Fusion
project inside.
"""Prepares OpenPype Fusion environment
Requires FUSION_PYTHON3_HOME to be defined in the environment for Fusion
to point at a valid Python 3 build for Fusion. That is Python 3.3-3.10
for Fusion 18 and Python 3.6 for Fusion 16 and 17.
This also sets FUSION16_MasterPrefs to apply the fusion master prefs
as set in openpype/hosts/fusion/deploy/fusion_shared.prefs to enable
the OpenPype menu and force Python 3 over Python 2.
"""
app_groups = ["fusion"]
def execute(self):
# making sure python 3.6 is installed at provided path
py36_dir = self.launch_context.env.get("PYTHON36")
if not py36_dir:
# making sure python 3 is installed at provided path
# Py 3.3-3.10 for Fusion 18+ or Py 3.6 for Fu 16-17
py3_var = "FUSION_PYTHON3_HOME"
fusion_python3_home = self.launch_context.env.get(py3_var, "")
self.log.info(f"Looking for Python 3 in: {fusion_python3_home}")
for path in fusion_python3_home.split(os.pathsep):
# Allow defining multiple paths to allow "fallback" to other
# path. But make to set only a single path as final variable.
py3_dir = os.path.normpath(path)
if os.path.isdir(py3_dir):
break
else:
raise ApplicationLaunchFailed(
"Required environment variable \"PYTHON36\" is not set."
"\n\nFusion implementation requires to have"
" installed Python 3.6"
"Python 3 is not installed at the provided path.\n"
"Make sure the environment in fusion settings has "
"'FUSION_PYTHON3_HOME' set correctly and make sure "
"Python 3 is installed in the given path."
f"\n\nPYTHON36: {fusion_python3_home}"
)
py36_dir = os.path.normpath(py36_dir)
if not os.path.isdir(py36_dir):
raise ApplicationLaunchFailed(
"Python 3.6 is not installed at the provided path.\n"
"Either make sure the environments in fusion settings has"
" 'PYTHON36' set corectly or make sure Python 3.6 is installed"
f" in the given path.\n\nPYTHON36: {py36_dir}"
)
self.log.info(f"Path to Fusion Python folder: '{py36_dir}'...")
self.launch_context.env["PYTHON36"] = py36_dir
self.log.info(f"Setting {py3_var}: '{py3_dir}'...")
self.launch_context.env[py3_var] = py3_dir
utility_dir = self.launch_context.env.get("FUSION_UTILITY_SCRIPTS_DIR")
if not utility_dir:
raise ApplicationLaunchFailed(
"Required Fusion utility script dir environment variable"
" \"FUSION_UTILITY_SCRIPTS_DIR\" is not set."
)
# Fusion 18+ requires FUSION_PYTHON3_HOME to also be on PATH
self.launch_context.env["PATH"] += ";" + py3_dir
# setting utility scripts dir for scripts syncing
utility_dir = os.path.normpath(utility_dir)
if not os.path.isdir(utility_dir):
raise ApplicationLaunchFailed(
"Fusion utility script dir does not exist. Either make sure "
"the environments in fusion settings has"
" 'FUSION_UTILITY_SCRIPTS_DIR' set correctly or reinstall "
f"Fusion.\n\nFUSION_UTILITY_SCRIPTS_DIR: '{utility_dir}'"
)
# Fusion 16 and 17 use FUSION16_PYTHON36_HOME instead of
# FUSION_PYTHON3_HOME and will only work with a Python 3.6 version
# TODO: Detect Fusion version to only set for specific Fusion build
self.launch_context.env["FUSION16_PYTHON36_HOME"] = py3_dir
self._sync_utility_scripts(self.launch_context.env)
self.log.info("Fusion Pype wrapper has been installed")
# Add our Fusion Master Prefs which is the only way to customize
# Fusion to define where it can read custom scripts and tools from
self.log.info(f"Setting OPENPYPE_FUSION: {FUSION_HOST_DIR}")
self.launch_context.env["OPENPYPE_FUSION"] = FUSION_HOST_DIR
def _sync_utility_scripts(self, env):
""" Synchronizing basic utlility scripts for resolve.
To be able to run scripts from inside `Fusion/Workspace/Scripts` menu
all scripts has to be accessible from defined folder.
"""
if not env:
env = {k: v for k, v in os.environ.items()}
# initiate inputs
scripts = {}
us_env = env.get("FUSION_UTILITY_SCRIPTS_SOURCE_DIR")
us_dir = env.get("FUSION_UTILITY_SCRIPTS_DIR", "")
us_paths = [os.path.join(
os.path.dirname(os.path.abspath(openpype.hosts.fusion.__file__)),
"utility_scripts"
)]
# collect script dirs
if us_env:
self.log.info(f"Utility Scripts Env: `{us_env}`")
us_paths = us_env.split(
os.pathsep) + us_paths
# collect scripts from dirs
for path in us_paths:
scripts.update({path: os.listdir(path)})
self.log.info(f"Utility Scripts Dir: `{us_paths}`")
self.log.info(f"Utility Scripts: `{scripts}`")
# make sure no script file is in folder
if next((s for s in os.listdir(us_dir)), None):
for s in os.listdir(us_dir):
path = os.path.normpath(
os.path.join(us_dir, s))
self.log.info(f"Removing `{path}`...")
# remove file or directory if not in our folders
if not os.path.isdir(path):
os.remove(path)
else:
shutil.rmtree(path)
# copy scripts into Resolve's utility scripts dir
for d, sl in scripts.items():
# directory and scripts list
for s in sl:
# script in script list
src = os.path.normpath(os.path.join(d, s))
dst = os.path.normpath(os.path.join(us_dir, s))
self.log.info(f"Copying `{src}` to `{dst}`...")
# copy file or directory from our folders to fusion's folder
if not os.path.isdir(src):
shutil.copy2(src, dst)
else:
shutil.copytree(src, dst)
pref_var = "FUSION16_MasterPrefs" # used by Fusion 16, 17 and 18
prefs = os.path.join(FUSION_HOST_DIR, "deploy", "fusion_shared.prefs")
self.log.info(f"Setting {pref_var}: {prefs}")
self.launch_context.env[pref_var] = prefs
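
Because the hook splits `FUSION_PYTHON3_HOME` on `os.pathsep` and keeps the first existing directory, the variable can carry fallback locations; a sketch of such a configuration (paths are illustrative):
import os

# first existing directory wins, later entries act as fallbacks
os.environ["FUSION_PYTHON3_HOME"] = os.pathsep.join([
    "C:/Program Files/Python310",
    "C:/Program Files/Python39",
])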

View file

@ -1,6 +1,9 @@
import os
from openpype.pipeline import LegacyCreator
from openpype.pipeline import (
LegacyCreator,
legacy_io
)
from openpype.hosts.fusion.api import (
get_current_comp,
comp_lock_and_undo_chunk
@ -21,12 +24,9 @@ class CreateOpenEXRSaver(LegacyCreator):
comp = get_current_comp()
# todo: improve method of getting current environment
# todo: pref avalon.Session over os.environ
workdir = os.path.normpath(legacy_io.Session["AVALON_WORKDIR"])
workdir = os.path.normpath(os.environ["AVALON_WORKDIR"])
filename = "{}..tiff".format(self.name)
filename = "{}..exr".format(self.name)
filepath = os.path.join(workdir, "render", filename)
with comp_lock_and_undo_chunk(comp):
@ -39,10 +39,10 @@ class CreateOpenEXRSaver(LegacyCreator):
saver["Clip"] = filepath
saver["OutputFormat"] = file_format
# # # Set standard TIFF settings
# Check file format settings are available
if saver[file_format] is None:
raise RuntimeError("File format is not set to TiffFormat, "
"this is a bug")
raise RuntimeError("File format is not set to {}, "
"this is a bug".format(file_format))
# Set file format attributes
saver[file_format]["Depth"] = 1 # int8 | int16 | float32 | other

View file

@ -101,6 +101,9 @@ def loader_shift(loader, frame, relative=True):
else:
shift = frame - old_in
if not shift:
return 0
# Shifting global in will try to automatically compensate for the change
# in the "ClipTimeStart" and "HoldFirstFrame" inputs, so we preserve those
# input values to "just shift" the clip
@ -149,9 +152,8 @@ class FusionLoadSequence(load.LoaderPlugin):
tool["Clip"] = path
# Set global in point to start frame (if in version.data)
start = context["version"]["data"].get("frameStart", None)
if start is not None:
loader_shift(tool, start, relative=False)
start = self._get_start(context["version"], tool)
loader_shift(tool, start, relative=False)
imprint_container(tool,
name=name,
@ -214,12 +216,7 @@ class FusionLoadSequence(load.LoaderPlugin):
# Get start frame from version data
project_name = legacy_io.active_project()
version = get_version_by_id(project_name, representation["parent"])
start = version["data"].get("frameStart")
if start is None:
self.log.warning("Missing start frame for updated version"
"assuming starts at frame 0 for: "
"{} ({})".format(tool.Name, representation))
start = 0
start = self._get_start(version, tool)
with comp_lock_and_undo_chunk(comp, "Update Loader"):
@ -256,3 +253,27 @@ class FusionLoadSequence(load.LoaderPlugin):
"""Get first file in representation root"""
files = sorted(os.listdir(root))
return os.path.join(root, files[0])
def _get_start(self, version_doc, tool):
"""Return real start frame of published files (incl. handles)"""
data = version_doc["data"]
# Get start frame directly with handle if it's in data
start = data.get("frameStartHandle")
if start is not None:
return start
# Get frame start without handles
start = data.get("frameStart")
if start is None:
self.log.warning("Missing start frame for version "
"assuming starts at frame 0 for: "
"{}".format(tool.Name))
return 0
# Use `handleStart` if the data is available
handle_start = data.get("handleStart")
if handle_start:
start -= handle_start
return start
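
A quick illustration of the fallback order in `_get_start` (version data values are hypothetical):
{"frameStartHandle": 991, "frameStart": 1001, "handleStart": 10}  # -> 991
{"frameStart": 1001, "handleStart": 10}                           # -> 1001 - 10 = 991
{}                                                                # -> 0 (with a warning)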

View file

@ -0,0 +1,114 @@
from bson.objectid import ObjectId
import pyblish.api
from openpype.pipeline import registered_host
def collect_input_containers(tools):
"""Collect containers that contain any of the node in `nodes`.
This will return any loaded Avalon container that contains at least one of
the nodes. As such, the Avalon container is an input for it. Or in short,
there are member nodes of that container.
Returns:
list: Input avalon containers
"""
# Lookup by node ids
lookup = frozenset([tool.Name for tool in tools])
containers = []
host = registered_host()
for container in host.ls():
name = container["_tool"].Name
# We currently assume no "groups" as containers but just single tools
# like a single "Loader" operator. As such we just check whether the
# Loader is part of the processing queue.
if name in lookup:
containers.append(container)
return containers
def iter_upstream(tool):
"""Yields all upstream inputs for the current tool.
Yields:
tool: The input tools.
"""
def get_connected_input_tools(tool):
"""Helper function that returns connected input tools for a tool."""
inputs = []
# Filter only to actual types that will have sensible upstream
# connections. So we ignore just "Number" inputs as they can be
# many to iterate, slowing things down quite a bit - and in practice
# they don't have upstream connections.
VALID_INPUT_TYPES = ['Image', 'Particles', 'Mask', 'DataType3D']
for type_ in VALID_INPUT_TYPES:
for input_ in tool.GetInputList(type_).values():
output = input_.GetConnectedOutput()
if output:
input_tool = output.GetTool()
inputs.append(input_tool)
return inputs
# Initialize process queue with the node's inputs itself
queue = get_connected_input_tools(tool)
# We keep track of which node names we have processed so far, to ensure we
# don't process the same hierarchy again. We are not pushing the tool
# itself into the set as that doesn't correctly recognize the same tool.
# Since tool names are unique in a comp in Fusion we rely on that.
collected = set(tool.Name for tool in queue)
# Traverse upstream references for all nodes and yield them as we
# process the queue.
while queue:
upstream_tool = queue.pop()
yield upstream_tool
# Find upstream tools that are not collected yet.
upstream_inputs = get_connected_input_tools(upstream_tool)
upstream_inputs = [t for t in upstream_inputs if
t.Name not in collected]
queue.extend(upstream_inputs)
collected.update(tool.Name for tool in upstream_inputs)
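A hedged usage sketch, assuming `comp` is the active Fusion composition and "Saver1" is a hypothetical Saver tool name:
saver = comp.FindTool("Saver1")
upstream_tools = list(iter_upstream(saver))
upstream_tools.append(saver)  # the collector also includes the tool itself
print([tool.Name for tool in upstream_tools])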
class CollectUpstreamInputs(pyblish.api.InstancePlugin):
"""Collect source input containers used for this publish.
This will include `inputs` data of which loaded publishes were used in the
generation of this publish. This leaves an upstream trace to what was used
as input.
"""
label = "Collect Inputs"
order = pyblish.api.CollectorOrder + 0.2
hosts = ["fusion"]
def process(self, instance):
# Get all upstream and include itself
tool = instance[0]
nodes = list(iter_upstream(tool))
nodes.append(tool)
# Collect containers for the given set of nodes
containers = collect_input_containers(nodes)
inputs = [ObjectId(c["representation"]) for c in containers]
instance.data["inputRepresentations"] = inputs
self.log.info("Collected inputs: %s" % inputs)

View file

@ -4,19 +4,21 @@ import pyblish.api
def get_comp_render_range(comp):
"""Return comp's start and end render range."""
"""Return comp's start-end render range and global start-end range."""
comp_attrs = comp.GetAttrs()
start = comp_attrs["COMPN_RenderStart"]
end = comp_attrs["COMPN_RenderEnd"]
global_start = comp_attrs["COMPN_GlobalStart"]
global_end = comp_attrs["COMPN_GlobalEnd"]
# Whenever render ranges are undefined fall back
# to the comp's global start and end
if start == -1000000000:
start = comp_attrs["COMPN_GlobalEnd"]
start = global_start
if end == -1000000000:
end = comp_attrs["COMPN_GlobalStart"]
end = global_end
return start, end
return start, end, global_start, global_end
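A short usage sketch, assuming `comp` is the active composition (the concrete numbers are hypothetical):
start, end, global_start, global_end = get_comp_render_range(comp)
# With no explicit render range and a global range of 1001-1100 this
# returns (1001, 1100, 1001, 1100); the -1000000000 sentinel never leaks out.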
class CollectInstances(pyblish.api.ContextPlugin):
@ -42,9 +44,11 @@ class CollectInstances(pyblish.api.ContextPlugin):
tools = comp.GetToolList(False).values()
savers = [tool for tool in tools if tool.ID == "Saver"]
start, end = get_comp_render_range(comp)
start, end, global_start, global_end = get_comp_render_range(comp)
context.data["frameStart"] = int(start)
context.data["frameEnd"] = int(end)
context.data["frameStartHandle"] = int(global_start)
context.data["frameEndHandle"] = int(global_end)
for tool in savers:
path = tool["Clip"][comp.TIME_UNDEFINED]
@ -78,8 +82,10 @@ class CollectInstances(pyblish.api.ContextPlugin):
"label": label,
"frameStart": context.data["frameStart"],
"frameEnd": context.data["frameEnd"],
"frameStartHandle": context.data["frameStartHandle"],
"frameEndHandle": context.data["frameStartHandle"],
"fps": context.data["fps"],
"families": ["render", "review", "ftrack"],
"families": ["render", "review"],
"family": "render",
"active": active,
"publish": active # backwards compatibility

View file

@ -20,6 +20,8 @@ class Fusionlocal(pyblish.api.InstancePlugin):
def process(self, instance):
# This plug-in runs only once and thus assumes all instances
# currently will render the same frame range
context = instance.context
key = "__hasRun{}".format(self.__class__.__name__)
if context.data.get(key, False):
@ -28,8 +30,8 @@ class Fusionlocal(pyblish.api.InstancePlugin):
context.data[key] = True
current_comp = context.data["currentComp"]
frame_start = current_comp.GetAttrs("COMPN_RenderStart")
frame_end = current_comp.GetAttrs("COMPN_RenderEnd")
frame_start = context.data["frameStartHandle"]
frame_end = context.data["frameEndHandle"]
path = instance.data["path"]
output_dir = instance.data["outputDir"]
@ -40,7 +42,11 @@ class Fusionlocal(pyblish.api.InstancePlugin):
self.log.info("End frame: {}".format(frame_end))
with comp_lock_and_undo_chunk(current_comp):
result = current_comp.Render()
result = current_comp.Render({
"Start": frame_start,
"End": frame_end,
"Wait": True
})
if "representations" not in instance.data:
instance.data["representations"] = []

View file

@ -1,284 +0,0 @@
import os
import re
import sys
import logging
from openpype.client import (
get_asset_by_name,
get_versions,
)
from openpype.pipeline import (
legacy_io,
install_host,
registered_host,
)
from openpype.lib import version_up
from openpype.hosts.fusion import api
from openpype.hosts.fusion.api import lib
from openpype.pipeline.context_tools import get_workdir_from_session
log = logging.getLogger("Update Slap Comp")
def _format_version_folder(folder):
"""Format a version folder based on the filepath
Assumption here is made that, if the path does not exists the folder
will be "v001"
Args:
folder: file path to a folder
Returns:
str: new version folder name
"""
new_version = 1
if os.path.isdir(folder):
re_version = re.compile(r"v\d+$")
versions = [i for i in os.listdir(folder) if os.path.isdir(i)
and re_version.match(i)]
if versions:
# ensure the "v" is not included
new_version = int(max(versions)[1:]) + 1
version_folder = "v{:03d}".format(new_version)
return version_folder
def _get_fusion_instance():
fusion = getattr(sys.modules["__main__"], "fusion", None)
if fusion is None:
try:
# Support for FuScript.exe, BlackmagicFusion module for py2 only
import BlackmagicFusion as bmf
fusion = bmf.scriptapp("Fusion")
except ImportError:
raise RuntimeError("Could not find a Fusion instance")
return fusion
def _format_filepath(session):
project = session["AVALON_PROJECT"]
asset = session["AVALON_ASSET"]
# Save updated slap comp
work_path = get_workdir_from_session(session)
walk_to_dir = os.path.join(work_path, "scenes", "slapcomp")
slapcomp_dir = os.path.abspath(walk_to_dir)
# Ensure destination exists
if not os.path.isdir(slapcomp_dir):
log.warning("Folder did not exist, creating folder structure")
os.makedirs(slapcomp_dir)
# Compute output path
new_filename = "{}_{}_slapcomp_v001.comp".format(project, asset)
new_filepath = os.path.join(slapcomp_dir, new_filename)
# Create new unique filepath
if os.path.exists(new_filepath):
new_filepath = version_up(new_filepath)
return new_filepath
def _update_savers(comp, session):
"""Update all savers of the current comp to ensure the output is correct
This will refactor the Saver file outputs to the renders of the new session
that is provided.
In the case the original saver path had a path set relative to a /fusion/
folder then that relative path will be matched with the exception of all
"version" (e.g. v010) references will be reset to v001. Otherwise only a
version folder will be computed in the new session's work "render" folder
to dump the files in and keeping the original filenames.
Args:
comp (object): current comp instance
session (dict): the current Avalon session
Returns:
None
"""
new_work = get_workdir_from_session(session)
renders = os.path.join(new_work, "renders")
version_folder = _format_version_folder(renders)
renders_version = os.path.join(renders, version_folder)
comp.Print("New renders to: %s\n" % renders)
with api.comp_lock_and_undo_chunk(comp):
savers = comp.GetToolList(False, "Saver").values()
for saver in savers:
filepath = saver.GetAttrs("TOOLST_Clip_Name")[1.0]
# Get old relative path to the "fusion" app folder so we can apply
# the same relative path afterwards. If not found fall back to
# using just a version folder with the filename in it.
# todo: can we make this less magical?
relpath = filepath.replace("\\", "/").rsplit("/fusion/", 1)[-1]
if os.path.isabs(relpath):
# If not relative to a "/fusion/" folder then just use filename
filename = os.path.basename(filepath)
log.warning("Can't parse relative path, refactoring to only"
"filename in a version folder: %s" % filename)
new_path = os.path.join(renders_version, filename)
else:
# Else reuse the relative path
# Reset version in folder and filename in the relative path
# to v001. The version should be is only detected when prefixed
# with either `_v` (underscore) or `/v` (folder)
version_pattern = r"(/|_)v[0-9]+"
if re.search(version_pattern, relpath):
new_relpath = re.sub(version_pattern,
r"\1v001",
relpath)
log.info("Resetting version folders to v001: "
"%s -> %s" % (relpath, new_relpath))
relpath = new_relpath
new_path = os.path.join(new_work, relpath)
saver["Clip"] = new_path
def update_frame_range(comp, representations):
"""Update the frame range of the comp and render length
The start and end frame are based on the lowest start frame and the highest
end frame
Args:
comp (object): current focused comp
representations (list) collection of dicts
Returns:
None
"""
project_name = legacy_io.active_project()
version_ids = {r["parent"] for r in representations}
versions = list(get_versions(project_name, version_ids))
versions = [v for v in versions
if v["data"].get("frameStart", None) is not None]
if not versions:
log.warning("No versions loaded to match frame range to.\n")
return
start = min(v["data"]["frameStart"] for v in versions)
end = max(v["data"]["frameEnd"] for v in versions)
lib.update_frame_range(start, end, comp=comp)
def switch(asset_name, filepath=None, new=True):
"""Switch the current containers of the file to the other asset (shot)
Args:
filepath (str): file path of the comp file
asset_name (str): name of the asset (shot)
new (bool): Save updated comp under a different name
Returns:
comp path (str): new filepath of the updated comp
"""
# If filepath provided, ensure it is valid absolute path
if filepath is not None:
if not os.path.isabs(filepath):
filepath = os.path.abspath(filepath)
assert os.path.exists(filepath), "%s must exist " % filepath
# Assert asset name exists
# It is better to do this here then to wait till switch_shot does it
project_name = legacy_io.active_project()
asset = get_asset_by_name(project_name, asset_name)
assert asset, "Could not find '%s' in the database" % asset_name
# Go to comp
if not filepath:
current_comp = api.get_current_comp()
assert current_comp is not None, "Could not find current comp"
else:
fusion = _get_fusion_instance()
current_comp = fusion.LoadComp(filepath, quiet=True)
assert current_comp is not None, (
"Fusion could not load '{}'").format(filepath)
host = registered_host()
containers = list(host.ls())
assert containers, "Nothing to update"
representations = []
for container in containers:
try:
representation = lib.switch_item(
container,
asset_name=asset_name)
representations.append(representation)
except Exception as e:
current_comp.Print("Error in switching! %s\n" % e.message)
message = "Switched %i Loaders of the %i\n" % (len(representations),
len(containers))
current_comp.Print(message)
# Build the session to switch to
switch_to_session = legacy_io.Session.copy()
switch_to_session["AVALON_ASSET"] = asset['name']
if new:
comp_path = _format_filepath(switch_to_session)
# Update savers output based on new session
_update_savers(current_comp, switch_to_session)
else:
comp_path = version_up(filepath)
current_comp.Print(comp_path)
current_comp.Print("\nUpdating frame range")
update_frame_range(current_comp, representations)
current_comp.Save(comp_path)
return comp_path
if __name__ == '__main__':
# QUESTION: can we convert this to gui rather then standalone script?
# TODO: convert to gui tool
import argparse
parser = argparse.ArgumentParser(description="Switch to a shot within an"
"existing comp file")
parser.add_argument("--file_path",
type=str,
default=True,
help="File path of the comp to use")
parser.add_argument("--asset_name",
type=str,
default=True,
help="Name of the asset (shot) to switch")
args, unknown = parser.parse_args()
install_host(api)
switch(args.asset_name, args.file_path)
sys.exit(0)

View file

@ -6,10 +6,10 @@ import csv
from PIL import Image, ImageDraw, ImageFont
import openpype.hosts.harmony.api as harmony
import openpype.api
from openpype.pipeline import publish
class ExtractPalette(openpype.api.Extractor):
class ExtractPalette(publish.Extractor):
"""Extract palette."""
label = "Extract Palette"

View file

@ -3,12 +3,11 @@
import os
import shutil
import openpype.api
from openpype.pipeline import publish
import openpype.hosts.harmony.api as harmony
import openpype.hosts.harmony
class ExtractTemplate(openpype.api.Extractor):
class ExtractTemplate(publish.Extractor):
"""Extract the connected nodes to the composite instance."""
label = "Extract Template"
@ -50,7 +49,7 @@ class ExtractTemplate(openpype.api.Extractor):
dependencies.remove(instance.data["setMembers"][0])
# Export template.
openpype.hosts.harmony.api.export_template(
harmony.export_template(
unique_backdrops, dependencies, filepath
)

View file

@ -4,10 +4,10 @@ import os
import shutil
from zipfile import ZipFile
import openpype.api
from openpype.pipeline import publish
class ExtractWorkfile(openpype.api.Extractor):
class ExtractWorkfile(publish.Extractor):
"""Extract and zip complete workfile folder into zip."""
label = "Extract Workfile"

View file

@ -13,14 +13,10 @@ import hiero
from Qt import QtWidgets
from openpype.client import (
get_project,
get_versions,
get_last_versions,
get_representations,
)
from openpype.client import get_project
from openpype.settings import get_anatomy_settings
from openpype.pipeline import legacy_io, Anatomy
from openpype.pipeline.load import filter_containers
from openpype.lib import Logger
from . import tags
@ -1055,6 +1051,10 @@ def sync_clip_name_to_data_asset(track_items_list):
print("asset was changed in clip: {}".format(ti_name))
def set_track_color(track_item, color):
track_item.source().binItem().setColor(color)
def check_inventory_versions(track_items=None):
"""
Actual version color identifier of loaded containers
@ -1066,68 +1066,29 @@ def check_inventory_versions(track_items=None):
"""
from . import parse_container
track_item = track_items or get_track_items()
track_items = track_items or get_track_items()
# presets
clip_color_last = "green"
clip_color = "red"
item_with_repre_id = []
repre_ids = set()
containers = []
# Find all containers and collect it's node and representation ids
for track_item in track_item:
for track_item in track_items:
container = parse_container(track_item)
if container:
repre_id = container["representation"]
repre_ids.add(repre_id)
item_with_repre_id.append((track_item, repre_id))
containers.append(container)
# Skip if nothing was found
if not repre_ids:
if not containers:
return
project_name = legacy_io.active_project()
# Find representations based on found containers
repre_docs = get_representations(
project_name,
repre_ids=repre_ids,
fields=["_id", "parent"]
)
# Store representations by id and collect version ids
repre_docs_by_id = {}
version_ids = set()
for repre_doc in repre_docs:
# Use stringed representation id to match value in containers
repre_id = str(repre_doc["_id"])
repre_docs_by_id[repre_id] = repre_doc
version_ids.add(repre_doc["parent"])
filter_result = filter_containers(containers, project_name)
for container in filter_result.latest:
set_track_color(container["_track_item"], clip_color_last)
version_docs = get_versions(
project_name, version_ids, fields=["_id", "name", "parent"]
)
# Store versions by id and collect subset ids
version_docs_by_id = {}
subset_ids = set()
for version_doc in version_docs:
version_docs_by_id[version_doc["_id"]] = version_doc
subset_ids.add(version_doc["parent"])
# Query last versions based on subset ids
last_versions_by_subset_id = get_last_versions(
project_name, subset_ids=subset_ids, fields=["_id", "parent"]
)
for item in item_with_repre_id:
# Some python versions of nuke can't unfold tuple in for loop
track_item, repre_id = item
repre_doc = repre_docs_by_id[repre_id]
version_doc = version_docs_by_id[repre_doc["parent"]]
last_version_doc = last_versions_by_subset_id[version_doc["parent"]]
# Check if last version is same as current version
if version_doc["_id"] == last_version_doc["_id"]:
track_item.source().binItem().setColor(clip_color_last)
else:
track_item.source().binItem().setColor(clip_color)
for container in filter_result.outdated:
set_track_color(container["_track_item"], clip_color)
def selection_changed_timeline(event):

View file

@ -2,10 +2,11 @@
import os
import json
import pyblish.api
import openpype
from openpype.pipeline import publish
class ExtractClipEffects(openpype.api.Extractor):
class ExtractClipEffects(publish.Extractor):
"""Extract clip effects instances."""
order = pyblish.api.ExtractorOrder

View file

@ -1,9 +1,14 @@
import os
import pyblish.api
import openpype
from openpype.lib import (
get_oiio_tools_path,
run_subprocess,
)
from openpype.pipeline import publish
class ExtractFrames(openpype.api.Extractor):
class ExtractFrames(publish.Extractor):
"""Extracts frames"""
order = pyblish.api.ExtractorOrder
@ -13,7 +18,7 @@ class ExtractFrames(openpype.api.Extractor):
movie_extensions = ["mov", "mp4"]
def process(self, instance):
oiio_tool_path = openpype.lib.get_oiio_tools_path()
oiio_tool_path = get_oiio_tools_path()
staging_dir = self.staging_dir(instance)
output_template = os.path.join(staging_dir, instance.data["name"])
sequence = instance.context.data["activeTimeline"]
@ -43,7 +48,7 @@ class ExtractFrames(openpype.api.Extractor):
args.extend(["--powc", "0.45,0.45,0.45,1.0"])
args.extend([input_path, "-o", output_path])
output = openpype.api.run_subprocess(args)
output = run_subprocess(args)
failed_output = "oiiotool produced no output."
if failed_output in output:

View file

@ -1,9 +1,10 @@
import os
import pyblish.api
import openpype.api
from openpype.pipeline import publish
class ExtractThumnail(openpype.api.Extractor):
class ExtractThumnail(publish.Extractor):
"""
Extractor for track item's thumbnails
"""

View file

@ -318,10 +318,9 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
@staticmethod
def create_otio_time_range_from_timeline_item_data(track_item):
speed = track_item.playbackSpeed()
timeline = phiero.get_current_sequence()
frame_start = int(track_item.timelineIn())
frame_duration = int((track_item.duration() - 1) / speed)
frame_duration = int(track_item.duration())
fps = timeline.framerate().toFloat()
return hiero_export.create_otio_time_range(

View file

@ -14,7 +14,7 @@ from openpype.pipeline import (
)
from openpype.pipeline.load import any_outdated_containers
from openpype.hosts.houdini import HOUDINI_HOST_DIR
from openpype.hosts.houdini.api import lib
from openpype.hosts.houdini.api import lib, shelves
from openpype.lib import (
register_event_callback,
@ -73,6 +73,7 @@ def install():
# so it initializes into the correct scene FPS, Frame Range, etc.
# todo: make sure this doesn't trigger when opening with last workfile
_set_context_settings()
shelves.generate_shelves()
def uninstall():

View file

@ -0,0 +1,204 @@
import os
import logging
import platform
import six
from openpype.settings import get_project_settings
import hou
log = logging.getLogger("openpype.hosts.houdini.shelves")
if six.PY2:
FileNotFoundError = IOError
def generate_shelves():
"""This function generates complete shelves from shelf set to tools
in Houdini from openpype project settings houdini shelf definition.
Raises:
FileNotFoundError: Raised when the shelf set filepath does not exist
"""
current_os = platform.system().lower()
# load configuration of houdini shelves
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
shelves_set_config = project_settings["houdini"]["shelves"]
if not shelves_set_config:
log.debug(
"No custom shelves found in project settings."
)
return
for shelf_set_config in shelves_set_config:
shelf_set_filepath = shelf_set_config.get('shelf_set_source_path')
if shelf_set_filepath[current_os]:
if not os.path.isfile(shelf_set_filepath[current_os]):
raise FileNotFoundError(
"This path doesn't exist - {}".format(
shelf_set_filepath[current_os]
)
)
hou.shelves.newShelfSet(file_path=shelf_set_filepath[current_os])
continue
shelf_set_name = shelf_set_config.get('shelf_set_name')
if not shelf_set_name:
log.warning(
"No name found in shelf set definition."
)
return
shelf_set = get_or_create_shelf_set(shelf_set_name)
shelves_definition = shelf_set_config.get('shelf_definition')
if not shelves_definition:
log.debug(
"No shelf definition found for shelf set named '{}'".format(
shelf_set_name
)
)
return
for shelf_definition in shelves_definition:
shelf_name = shelf_definition.get('shelf_name')
if not shelf_name:
log.warning(
"No name found in shelf definition."
)
return
shelf = get_or_create_shelf(shelf_name)
if not shelf_definition.get('tools_list'):
log.debug(
"No tool definition found for shelf named {}".format(
shelf_name
)
)
return
mandatory_attributes = {'name', 'script'}
for tool_definition in shelf_definition.get('tools_list'):
# We verify that the name and script attributes of the tool
# are set
if not all(
tool_definition[key] for key in mandatory_attributes
):
log.warning(
"You need to specify at least the name and \
the script path of the tool.")
continue
tool = get_or_create_tool(tool_definition, shelf)
if not tool:
return
# Add the tool to the shelf if not already in it
if tool not in shelf.tools():
shelf.setTools(list(shelf.tools()) + [tool])
# Add the shelf in the shelf set if not already in it
if shelf not in shelf_set.shelves():
shelf_set.setShelves(shelf_set.shelves() + (shelf,))
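For reference, a hedged sketch of the settings structure this code walks, using only the keys it reads; all values below are hypothetical:
shelves_set_config = [
    {
        # Either point at an existing .shelf file per platform...
        "shelf_set_source_path": {"windows": "", "darwin": "", "linux": ""},
        # ...or describe the shelf set explicitly:
        "shelf_set_name": "OpenPype Shelf",
        "shelf_definition": [
            {
                "shelf_name": "Studio Tools",
                "tools_list": [
                    {
                        "name": "my_tool",
                        "label": "My Tool",
                        "script": "/path/to/my_tool.py",
                        "icon": "",
                        "help": "",
                    }
                ],
            }
        ],
    }
]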
def get_or_create_shelf_set(shelf_set_label):
"""This function verifies if the shelf set label exists. If not,
creates a new shelf set.
Arguments:
shelf_set_label (str): The label of the shelf set
Returns:
hou.ShelfSet: The shelf set existing or the new one
"""
all_shelves_sets = hou.shelves.shelfSets().values()
shelf_sets = [
shelf for shelf in all_shelves_sets if shelf.label() == shelf_set_label
]
if shelf_sets:
return shelf_sets[0]
shelf_set_name = shelf_set_label.replace(' ', '_').lower()
new_shelf_set = hou.shelves.newShelfSet(
name=shelf_set_name,
label=shelf_set_label
)
return new_shelf_set
def get_or_create_shelf(shelf_label):
"""This function verifies if the shelf label exists. If not, creates
a new shelf.
Arguments:
shelf_label (str): The label of the shelf
Returns:
hou.Shelf: The shelf existing or the new one
"""
all_shelves = hou.shelves.shelves().values()
shelf = [s for s in all_shelves if s.label() == shelf_label]
if shelf:
return shelf[0]
shelf_name = shelf_label.replace(' ', '_').lower()
new_shelf = hou.shelves.newShelf(
name=shelf_name,
label=shelf_label
)
return new_shelf
def get_or_create_tool(tool_definition, shelf):
"""This function verifies if the tool exists and updates it. If not, creates
a new one.
Arguments:
tool_definition (dict): Dict with label, script, icon and help
shelf (hou.Shelf): The parent shelf of the tool
Returns:
hou.Tool: The tool updated or the new one
"""
existing_tools = shelf.tools()
tool_label = tool_definition.get('label')
existing_tool = [
tool for tool in existing_tools if tool.label() == tool_label
]
if existing_tool:
tool_definition.pop('name', None)
tool_definition.pop('label', None)
existing_tool[0].setData(**tool_definition)
return existing_tool[0]
tool_name = tool_label.replace(' ', '_').lower()
if not os.path.exists(tool_definition['script']):
log.warning(
"This path doesn't exist - {}".format(
tool_definition['script']
)
)
return
with open(tool_definition['script']) as f:
script = f.read()
tool_definition.update({'script': script})
new_tool = hou.shelves.newTool(name=tool_name, **tool_definition)
return new_tool
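A hedged call sketch with a hypothetical tool definition; note that get_or_create_tool() replaces the 'script' path with the file's content before creating the tool:
tool = get_or_create_tool(
    {
        "name": "my_tool",
        "label": "My Tool",
        "script": "/path/to/my_tool.py",  # must exist on disk
        "icon": "",
        "help": "",
    },
    get_or_create_shelf("Studio Tools"),
)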

View file

@ -1,3 +1,5 @@
from bson.objectid import ObjectId
import pyblish.api
from openpype.pipeline import registered_host
@ -115,7 +117,7 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin):
# Collect containers for the given set of nodes
containers = collect_input_containers(nodes)
inputs = [c["representation"] for c in containers]
instance.data["inputs"] = inputs
inputs = [ObjectId(c["representation"]) for c in containers]
instance.data["inputRepresentations"] = inputs
self.log.info("Collected inputs: %s" % inputs)

View file

@ -1,7 +1,7 @@
import pyblish.api
import openpype.api
import hou
from openpype.pipeline.publish import RepairAction
from openpype.hosts.houdini.api import lib
@ -13,7 +13,7 @@ class CollectRemotePublishSettings(pyblish.api.ContextPlugin):
hosts = ["houdini"]
targets = ["deadline"]
label = "Remote Publish Submission Settings"
actions = [openpype.api.RepairAction]
actions = [RepairAction]
def process(self, context):

View file

@ -1,11 +1,12 @@
import os
import pyblish.api
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.houdini.api.lib import render_rop
class ExtractAlembic(openpype.api.Extractor):
class ExtractAlembic(publish.Extractor):
order = pyblish.api.ExtractorOrder
label = "Extract Alembic"

View file

@ -1,11 +1,12 @@
import os
import pyblish.api
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.houdini.api.lib import render_rop
class ExtractAss(openpype.api.Extractor):
class ExtractAss(publish.Extractor):
order = pyblish.api.ExtractorOrder + 0.1
label = "Extract Ass"

View file

@ -1,12 +1,12 @@
import os
import pyblish.api
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.houdini.api.lib import render_rop
class ExtractComposite(openpype.api.Extractor):
class ExtractComposite(publish.Extractor):
order = pyblish.api.ExtractorOrder
label = "Extract Composite (Image Sequence)"

View file

@ -4,10 +4,11 @@ import os
from pprint import pformat
import pyblish.api
import openpype.api
from openpype.pipeline import publish
class ExtractHDA(openpype.api.Extractor):
class ExtractHDA(publish.Extractor):
order = pyblish.api.ExtractorOrder
label = "Extract HDA"

View file

@ -1,11 +1,12 @@
import os
import pyblish.api
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.houdini.api.lib import render_rop
class ExtractRedshiftProxy(openpype.api.Extractor):
class ExtractRedshiftProxy(publish.Extractor):
order = pyblish.api.ExtractorOrder + 0.1
label = "Extract Redshift Proxy"

View file

@ -1,11 +1,12 @@
import os
import pyblish.api
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.houdini.api.lib import render_rop
class ExtractUSD(openpype.api.Extractor):
class ExtractUSD(publish.Extractor):
order = pyblish.api.ExtractorOrder
label = "Extract USD"

View file

@ -5,7 +5,6 @@ import sys
from collections import deque
import pyblish.api
import openpype.api
from openpype.client import (
get_asset_by_name,
@ -16,6 +15,7 @@ from openpype.client import (
from openpype.pipeline import (
get_representation_path,
legacy_io,
publish,
)
import openpype.hosts.houdini.api.usd as hou_usdlib
from openpype.hosts.houdini.api.lib import render_rop
@ -160,7 +160,7 @@ def parm_values(overrides):
parm.set(value)
class ExtractUSDLayered(openpype.api.Extractor):
class ExtractUSDLayered(publish.Extractor):
order = pyblish.api.ExtractorOrder
label = "Extract Layered USD"

View file

@ -1,11 +1,12 @@
import os
import pyblish.api
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.houdini.api.lib import render_rop
class ExtractVDBCache(openpype.api.Extractor):
class ExtractVDBCache(publish.Extractor):
order = pyblish.api.ExtractorOrder + 0.1
label = "Extract VDB Cache"

View file

@ -2,10 +2,9 @@ import pyblish.api
from openpype.lib import version_up
from openpype.pipeline import registered_host
from openpype.pipeline.publish import get_errored_plugins_from_context
class IncrementCurrentFile(pyblish.api.InstancePlugin):
class IncrementCurrentFile(pyblish.api.ContextPlugin):
"""Increment the current file.
Saves the current scene with an increased version number.
@ -15,30 +14,10 @@ class IncrementCurrentFile(pyblish.api.InstancePlugin):
label = "Increment current file"
order = pyblish.api.IntegratorOrder + 9.0
hosts = ["houdini"]
families = ["colorbleed.usdrender", "redshift_rop"]
targets = ["local"]
families = ["workfile"]
optional = True
def process(self, instance):
# This should be a ContextPlugin, but this is a workaround
# for a bug in pyblish to run once for a family: issue #250
context = instance.context
key = "__hasRun{}".format(self.__class__.__name__)
if context.data.get(key, False):
return
else:
context.data[key] = True
context = instance.context
errored_plugins = get_errored_plugins_from_context(context)
if any(
plugin.__name__ == "HoudiniSubmitPublishDeadline"
for plugin in errored_plugins
):
raise RuntimeError(
"Skipping incrementing current file because "
"submission to deadline failed."
)
def process(self, context):
# Filename must not have changed since collecting
host = registered_host()

View file

@ -1,35 +0,0 @@
import pyblish.api
import hou
from openpype.lib import version_up
from openpype.pipeline.publish import get_errored_plugins_from_context
class IncrementCurrentFileDeadline(pyblish.api.ContextPlugin):
"""Increment the current file.
Saves the current scene with an increased version number.
"""
label = "Increment current file"
order = pyblish.api.IntegratorOrder + 9.0
hosts = ["houdini"]
targets = ["deadline"]
def process(self, context):
errored_plugins = get_errored_plugins_from_context(context)
if any(
plugin.__name__ == "HoudiniSubmitPublishDeadline"
for plugin in errored_plugins
):
raise RuntimeError(
"Skipping incrementing current file because "
"submission to deadline failed."
)
current_filepath = context.data["currentFile"]
new_filepath = version_up(current_filepath)
hou.hipFile.save(file_name=new_filepath, save_to_recent_files=True)

View file

@ -2483,7 +2483,7 @@ def load_capture_preset(data=None):
# DISPLAY OPTIONS
id = 'Display Options'
disp_options = {}
for key in preset['Display Options']:
for key in preset[id]:
if key.startswith('background'):
disp_options[key] = preset['Display Options'][key]
if len(disp_options[key]) == 4:

View file

@ -5,6 +5,7 @@ import maya.mel as mel
import six
import sys
from openpype.lib import Logger
from openpype.api import (
get_project_settings,
get_current_project_settings
@ -38,6 +39,8 @@ class RenderSettings(object):
"underscore": "_"
}
log = Logger.get_logger("RenderSettings")
@classmethod
def get_image_prefix_attr(cls, renderer):
return cls._image_prefix_nodes[renderer]
@ -133,20 +136,7 @@ class RenderSettings(object):
cmds.setAttr(
"defaultArnoldDriver.mergeAOVs", multi_exr)
# Passes additional options in from the schema as a list
# but converts it to a dictionary because ftrack doesn't
# allow fullstops in custom attributes. Then checks for
# type of MtoA attribute passed to adjust the `setAttr`
# command accordingly.
self._additional_attribs_setter(additional_options)
for item in additional_options:
attribute, value = item
if (cmds.getAttr(str(attribute), type=True)) == "long":
cmds.setAttr(str(attribute), int(value))
elif (cmds.getAttr(str(attribute), type=True)) == "bool":
cmds.setAttr(str(attribute), int(value), type = "Boolean") # noqa
elif (cmds.getAttr(str(attribute), type=True)) == "string":
cmds.setAttr(str(attribute), str(value), type = "string") # noqa
reset_frame_range()
def _set_redshift_settings(self, width, height):
@ -230,12 +220,20 @@ class RenderSettings(object):
cmds.setAttr("defaultRenderGlobals.extensionPadding", 4)
def _additional_attribs_setter(self, additional_attribs):
print(additional_attribs)
for item in additional_attribs:
attribute, value = item
if (cmds.getAttr(str(attribute), type=True)) == "long":
cmds.setAttr(str(attribute), int(value))
elif (cmds.getAttr(str(attribute), type=True)) == "bool":
cmds.setAttr(str(attribute), int(value)) # noqa
elif (cmds.getAttr(str(attribute), type=True)) == "string":
cmds.setAttr(str(attribute), str(value), type = "string") # noqa
attribute = str(attribute) # ensure str conversion from settings
attribute_type = cmds.getAttr(attribute, type=True)
if attribute_type in {"long", "bool"}:
cmds.setAttr(attribute, int(value))
elif attribute_type == "string":
cmds.setAttr(attribute, str(value), type="string")
elif attribute_type in {"double", "doubleAngle", "doubleLinear"}:
cmds.setAttr(attribute, float(value))
else:
self.log.error(
"Attribute {attribute} can not be set due to unsupported "
"type: {attribute_type}".format(
attribute=attribute,
attribute_type=attribute_type)
)
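The items are (attribute, value) pairs that arrive from settings as strings; a hedged illustration with hypothetical Arnold attributes:
additional_attribs = [
    ("defaultArnoldRenderOptions.AASamples", "6"),       # long   -> int()
    ("defaultArnoldRenderOptions.GIDiffuseDepth", "2"),  # long   -> int()
    ("defaultArnoldDriver.aiTranslator", "exr"),         # string -> setAttr(..., type="string")
]
# RenderSettings()._additional_attribs_setter(additional_attribs)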

View file

@ -348,3 +348,71 @@ def get_attr_overrides(node_attr, layer,
break
return reversed(plug_overrides)
def get_shader_in_layer(node, layer):
"""Return the assigned shader in a renderlayer without switching layers.
This has been developed and tested for Legacy Renderlayers and *not* for
Render Setup.
Note: This will also return the shader for any face assignments, however
it will *not* return the components they are assigned to. This could
be implemented, but since Maya's renderlayers are famous for breaking
with face assignments there has been no need for this function to
support that.
Returns:
list: The list of assigned shaders in the given layer.
"""
def _get_connected_shader(plug):
"""Return current shader"""
return cmds.listConnections(plug,
source=False,
destination=True,
plugs=False,
connections=False,
type="shadingEngine") or []
# We check the instObjGroups (shader connection) for layer overrides.
plug = node + ".instObjGroups"
# Ignore complex query if we're in the layer anyway (optimization)
current_layer = cmds.editRenderLayerGlobals(query=True,
currentRenderLayer=True)
if layer == current_layer:
return _get_connected_shader(plug)
connections = cmds.listConnections(plug,
plugs=True,
source=False,
destination=True,
type="renderLayer") or []
connections = filter(lambda x: x.endswith(".outPlug"), connections)
if not connections:
# If no overrides anywhere on the shader, just get the current shader
return _get_connected_shader(plug)
def _get_override(connections, layer):
"""Return the overridden connection for that layer in connections"""
# If there's an override on that layer, return that.
for connection in connections:
if (connection.startswith(layer + ".outAdjustments") and
connection.endswith(".outPlug")):
# This is a shader override on that layer so get the shader
# connected to .outValue of the .outAdjustment[i]
out_adjustment = connection.rsplit(".", 1)[0]
connection_attr = out_adjustment + ".outValue"
override = cmds.listConnections(connection_attr) or []
return override
override_shader = _get_override(connections, layer)
if override_shader is not None:
return override_shader
else:
# Get the override for "defaultRenderLayer" (=masterLayer)
return _get_override(connections, layer="defaultRenderLayer")
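A hedged usage sketch with hypothetical node and layer names:
# Query which shadingEngine drives a shape in a legacy render layer
# without having to switch the current layer.
shaders = get_shader_in_layer("pCubeShape1", layer="rl_beauty")
print(shaders)  # e.g. ["blinn1SG"]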

View file

@ -1,253 +0,0 @@
import json
from collections import OrderedDict
import maya.cmds as cmds
import qargparse
from openpype.tools.utils.widgets import OptionDialog
from .lib import get_main_window, imprint
# To change as enum
build_types = ["context_asset", "linked_asset", "all_assets"]
def get_placeholder_attributes(node):
return {
attr: cmds.getAttr("{}.{}".format(node, attr))
for attr in cmds.listAttr(node, userDefined=True)}
def delete_placeholder_attributes(node):
'''
function to delete all extra placeholder attributes
'''
extra_attributes = get_placeholder_attributes(node)
for attribute in extra_attributes:
cmds.deleteAttr(node + '.' + attribute)
def create_placeholder():
args = placeholder_window()
if not args:
return # operation canceled, no locator created
# custom arg parse to force empty data query
# and still imprint them on placeholder
# and getting items when arg is of type Enumerator
options = create_options(args)
# create placeholder name dynamically from args and options
placeholder_name = create_placeholder_name(args, options)
selection = cmds.ls(selection=True)
if not selection:
raise ValueError("Nothing is selected")
placeholder = cmds.spaceLocator(name=placeholder_name)[0]
# get the long name of the placeholder (with the groups)
placeholder_full_name = cmds.ls(selection[0], long=True)[
0] + '|' + placeholder.replace('|', '')
if selection:
cmds.parent(placeholder, selection[0])
imprint(placeholder_full_name, options)
# Some tweaks because imprint force enums to to default value so we get
# back arg read and force them to attributes
imprint_enum(placeholder_full_name, args)
# Add helper attributes to keep placeholder info
cmds.addAttr(
placeholder_full_name,
longName="parent",
hidden=True,
dataType="string"
)
cmds.addAttr(
placeholder_full_name,
longName="index",
hidden=True,
attributeType="short",
defaultValue=-1
)
cmds.setAttr(placeholder_full_name + '.parent', "", type="string")
def create_placeholder_name(args, options):
placeholder_builder_type = [
arg.read() for arg in args if 'builder_type' in str(arg)
][0]
placeholder_family = options['family']
placeholder_name = placeholder_builder_type.split('_')
# add famlily in any
if placeholder_family:
placeholder_name.insert(1, placeholder_family)
# add loader arguments if any
if options['loader_args']:
pos = 2
loader_args = options['loader_args'].replace('\'', '\"')
loader_args = json.loads(loader_args)
values = [v for v in loader_args.values()]
for i in range(len(values)):
placeholder_name.insert(i + pos, values[i])
placeholder_name = '_'.join(placeholder_name)
return placeholder_name.capitalize()
def update_placeholder():
placeholder = cmds.ls(selection=True)
if len(placeholder) == 0:
raise ValueError("No node selected")
if len(placeholder) > 1:
raise ValueError("Too many selected nodes")
placeholder = placeholder[0]
args = placeholder_window(get_placeholder_attributes(placeholder))
if not args:
return # operation canceled
# delete placeholder attributes
delete_placeholder_attributes(placeholder)
options = create_options(args)
imprint(placeholder, options)
imprint_enum(placeholder, args)
cmds.addAttr(
placeholder,
longName="parent",
hidden=True,
dataType="string"
)
cmds.addAttr(
placeholder,
longName="index",
hidden=True,
attributeType="short",
defaultValue=-1
)
cmds.setAttr(placeholder + '.parent', '', type="string")
def create_options(args):
options = OrderedDict()
for arg in args:
if not type(arg) == qargparse.Separator:
options[str(arg)] = arg._data.get("items") or arg.read()
return options
def imprint_enum(placeholder, args):
"""
Imprint method doesn't act properly with enums.
Replacing the functionnality with this for now
"""
enum_values = {str(arg): arg.read()
for arg in args if arg._data.get("items")}
string_to_value_enum_table = {
build: i for i, build
in enumerate(build_types)}
for key, value in enum_values.items():
cmds.setAttr(
placeholder + "." + key,
string_to_value_enum_table[value])
def placeholder_window(options=None):
options = options or dict()
dialog = OptionDialog(parent=get_main_window())
dialog.setWindowTitle("Create Placeholder")
args = [
qargparse.Separator("Main attributes"),
qargparse.Enum(
"builder_type",
label="Asset Builder Type",
default=options.get("builder_type", 0),
items=build_types,
help="""Asset Builder Type
Builder type describe what template loader will look for.
context_asset : Template loader will look for subsets of
current context asset (Asset bob will find asset)
linked_asset : Template loader will look for assets linked
to current context asset.
Linked asset are looked in avalon database under field "inputLinks"
"""
),
qargparse.String(
"family",
default=options.get("family", ""),
label="OpenPype Family",
placeholder="ex: model, look ..."),
qargparse.String(
"representation",
default=options.get("representation", ""),
label="OpenPype Representation",
placeholder="ex: ma, abc ..."),
qargparse.String(
"loader",
default=options.get("loader", ""),
label="Loader",
placeholder="ex: ReferenceLoader, LightLoader ...",
help="""Loader
Defines what openpype loader will be used to load assets.
Useable loader depends on current host's loader list.
Field is case sensitive.
"""),
qargparse.String(
"loader_args",
default=options.get("loader_args", ""),
label="Loader Arguments",
placeholder='ex: {"camera":"persp", "lights":True}',
help="""Loader
Defines a dictionnary of arguments used to load assets.
Useable arguments depend on current placeholder Loader.
Field should be a valid python dict. Anything else will be ignored.
"""),
qargparse.Integer(
"order",
default=options.get("order", 0),
min=0,
max=999,
label="Order",
placeholder="ex: 0, 100 ... (smallest order loaded first)",
help="""Order
Order defines asset loading priority (0 to 999)
Priority rule is : "lowest is first to load"."""),
qargparse.Separator(
"Optional attributes"),
qargparse.String(
"asset",
default=options.get("asset", ""),
label="Asset filter",
placeholder="regex filtering by asset name",
help="Filtering assets by matching field regex to asset's name"),
qargparse.String(
"subset",
default=options.get("subset", ""),
label="Subset filter",
placeholder="regex filtering by subset name",
help="Filtering assets by matching field regex to subset's name"),
qargparse.String(
"hierarchy",
default=options.get("hierarchy", ""),
label="Hierarchy filter",
placeholder="regex filtering by asset's hierarchy",
help="Filtering assets by matching field asset's hierarchy")
]
dialog.create(args)
if not dialog.exec_():
return None
return args

View file

@ -9,16 +9,17 @@ import maya.cmds as cmds
from openpype.settings import get_project_settings
from openpype.pipeline import legacy_io
from openpype.pipeline.workfile import BuildWorkfile
from openpype.pipeline.workfile.build_template import (
build_workfile_template,
update_workfile_template
)
from openpype.tools.utils import host_tools
from openpype.hosts.maya.api import lib, lib_rendersettings
from .lib import get_main_window, IS_HEADLESS
from .commands import reset_frame_range
from .lib_template_builder import create_placeholder, update_placeholder
from .workfile_template_builder import (
create_placeholder,
update_placeholder,
build_workfile_template,
update_workfile_template,
)
log = logging.getLogger(__name__)
@ -104,13 +105,6 @@ def install():
cmds.menuItem(divider=True)
cmds.menuItem(
"Set Render Settings",
command=lambda *args: lib_rendersettings.RenderSettings().set_default_renderer_settings() # noqa
)
cmds.menuItem(divider=True)
cmds.menuItem(
"Work Files...",
command=lambda *args: host_tools.show_workfiles(
@ -132,6 +126,12 @@ def install():
"Set Colorspace",
command=lambda *args: lib.set_colorspace(),
)
cmds.menuItem(
"Set Render Settings",
command=lambda *args: lib_rendersettings.RenderSettings().set_default_renderer_settings() # noqa
)
cmds.menuItem(divider=True, parent=MENU_NAME)
cmds.menuItem(
"Build First Workfile",
@ -162,12 +162,12 @@ def install():
cmds.menuItem(
"Create Placeholder",
parent=builder_menu,
command=lambda *args: create_placeholder()
command=create_placeholder
)
cmds.menuItem(
"Update Placeholder",
parent=builder_menu,
command=lambda *args: update_placeholder()
command=update_placeholder
)
cmds.menuItem(
"Build Workfile from template",

View file

@ -16,6 +16,7 @@ from openpype.host import (
HostDirmap,
)
from openpype.tools.utils import host_tools
from openpype.tools.workfiles.lock_dialog import WorkfileLockDialog
from openpype.lib import (
register_event_callback,
emit_event
@ -31,10 +32,17 @@ from openpype.pipeline import (
AVALON_CONTAINER_ID,
)
from openpype.pipeline.load import any_outdated_containers
from openpype.pipeline.workfile.lock_workfile import (
create_workfile_lock,
remove_workfile_lock,
is_workfile_locked,
is_workfile_lock_enabled
)
from openpype.hosts.maya import MAYA_ROOT_DIR
from openpype.hosts.maya.lib import copy_workspace_mel
from openpype.hosts.maya.lib import create_workspace_mel
from . import menu, lib
from .workfile_template_builder import MayaPlaceholderLoadPlugin
from .workio import (
open_file,
save_file,
@ -63,7 +71,7 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost):
self._op_events = {}
def install(self):
project_name = os.getenv("AVALON_PROJECT")
project_name = legacy_io.active_project()
project_settings = get_project_settings(project_name)
# process path mapping
dirmap_processor = MayaDirmap("maya", project_name, project_settings)
@ -99,8 +107,13 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost):
register_event_callback("open", on_open)
register_event_callback("new", on_new)
register_event_callback("before.save", on_before_save)
register_event_callback("after.save", on_after_save)
register_event_callback("before.close", on_before_close)
register_event_callback("before.file.open", before_file_open)
register_event_callback("taskChanged", on_task_changed)
register_event_callback("workfile.open.before", before_workfile_open)
register_event_callback("workfile.save.before", before_workfile_save)
register_event_callback("workfile.save.before", after_workfile_save)
def open_workfile(self, filepath):
return open_file(filepath)
@ -123,6 +136,11 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost):
def get_containers(self):
return ls()
def get_workfile_build_placeholder_plugins(self):
return [
MayaPlaceholderLoadPlugin
]
@contextlib.contextmanager
def maintained_selection(self):
with lib.maintained_selection():
@ -143,6 +161,13 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost):
OpenMaya.MSceneMessage.kBeforeSave, _on_scene_save
)
self._op_events[_after_scene_save] = (
OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterSave,
_after_scene_save
)
)
self._op_events[_before_scene_save] = (
OpenMaya.MSceneMessage.addCheckCallback(
OpenMaya.MSceneMessage.kBeforeSaveCheck,
@ -161,15 +186,35 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost):
)
)
self._op_events[_on_scene_open] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterOpen, _on_scene_open
self._op_events[_on_scene_open] = (
OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterOpen,
_on_scene_open
)
)
self._op_events[_before_scene_open] = (
OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kBeforeOpen,
_before_scene_open
)
)
self._op_events[_before_close_maya] = (
OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kMayaExiting,
_before_close_maya
)
)
self.log.info("Installed event handler _on_scene_save..")
self.log.info("Installed event handler _before_scene_save..")
self.log.info("Installed event handler _on_after_save..")
self.log.info("Installed event handler _on_scene_new..")
self.log.info("Installed event handler _on_maya_initialized..")
self.log.info("Installed event handler _on_scene_open..")
self.log.info("Installed event handler _check_lock_file..")
self.log.info("Installed event handler _before_close_maya..")
def _set_project():
@ -208,6 +253,10 @@ def _on_scene_new(*args):
emit_event("new")
def _after_scene_save(*arg):
emit_event("after.save")
def _on_scene_save(*args):
emit_event("save")
@ -216,6 +265,14 @@ def _on_scene_open(*args):
emit_event("open")
def _before_close_maya(*args):
emit_event("before.close")
def _before_scene_open(*args):
emit_event("before.file.open")
def _before_scene_save(return_code, client_data):
# Default to allowing the action. Registered
@ -229,6 +286,23 @@ def _before_scene_save(return_code, client_data):
)
def _remove_workfile_lock():
"""Remove workfile lock on current file"""
if not handle_workfile_locks():
return
filepath = current_file()
log.info("Removing lock on current file {}...".format(filepath))
if filepath:
remove_workfile_lock(filepath)
def handle_workfile_locks():
if lib.IS_HEADLESS:
return False
project_name = legacy_io.active_project()
return is_workfile_lock_enabled(MayaHost.name, project_name)
def uninstall():
pyblish.api.deregister_plugin_path(PUBLISH_PATH)
pyblish.api.deregister_host("mayabatch")
@ -349,21 +423,13 @@ def containerise(name,
("id", AVALON_CONTAINER_ID),
("name", name),
("namespace", namespace),
("loader", str(loader)),
("loader", loader),
("representation", context["representation"]["_id"]),
]
for key, value in data:
if not value:
continue
if isinstance(value, (int, float)):
cmds.addAttr(container, longName=key, attributeType="short")
cmds.setAttr(container + "." + key, value)
else:
cmds.addAttr(container, longName=key, dataType="string")
cmds.setAttr(container + "." + key, value, type="string")
cmds.addAttr(container, longName=key, dataType="string")
cmds.setAttr(container + "." + key, str(value), type="string")
main_container = cmds.ls(AVALON_CONTAINERS, type="objectSet")
if not main_container:
@ -434,6 +500,46 @@ def on_before_save():
return lib.validate_fps()
def on_after_save():
"""Check if there is a lockfile after save"""
check_lock_on_current_file()
def check_lock_on_current_file():
"""Check if there is a user opening the file"""
if not handle_workfile_locks():
return
log.info("Running callback on checking the lock file...")
# add the lock file when opening the file
filepath = current_file()
if is_workfile_locked(filepath):
# add lockfile dialog
workfile_dialog = WorkfileLockDialog(filepath)
if not workfile_dialog.exec_():
cmds.file(new=True)
return
create_workfile_lock(filepath)
def on_before_close():
"""Delete the lock file after user quitting the Maya Scene"""
log.info("Closing Maya...")
# delete the lock file
filepath = current_file()
if handle_workfile_locks():
remove_workfile_lock(filepath)
def before_file_open():
"""check lock file when the file changed"""
# delete the lock file
_remove_workfile_lock()
def on_save():
"""Automatically add IDs to new nodes
@ -442,6 +548,8 @@ def on_save():
"""
log.info("Running callback on save..")
# remove lockfile if users jumps over from one scene to another
_remove_workfile_lock()
# # Update current task for the current scene
# update_task_from_path(cmds.file(query=True, sceneName=True))
@ -499,6 +607,9 @@ def on_open():
dialog.on_clicked.connect(_on_show_inventory)
dialog.show()
# create lock file for the maya scene
check_lock_on_current_file()
def on_new():
"""Set project resolution and fps when create a new file"""
@ -514,6 +625,7 @@ def on_new():
"from openpype.hosts.maya.api import lib;"
"lib.add_render_layer_change_observer()")
lib.set_context_settings()
_remove_workfile_lock()
def on_task_changed():
@ -541,7 +653,7 @@ def on_task_changed():
lib.update_content_on_context_change()
msg = " project: {}\n asset: {}\n task:{}".format(
legacy_io.Session["AVALON_PROJECT"],
legacy_io.active_project(),
legacy_io.Session["AVALON_ASSET"],
legacy_io.Session["AVALON_TASK"]
)
@ -552,10 +664,26 @@ def on_task_changed():
)
def before_workfile_open():
if handle_workfile_locks():
_remove_workfile_lock()
def before_workfile_save(event):
project_name = legacy_io.active_project()
if handle_workfile_locks():
_remove_workfile_lock()
workdir_path = event["workdir_path"]
if workdir_path:
copy_workspace_mel(workdir_path)
create_workspace_mel(workdir_path, project_name)
def after_workfile_save(event):
workfile_name = event["filename"]
if handle_workfile_locks():
if workfile_name:
if not is_workfile_locked(workfile_name):
create_workfile_lock(workfile_name)
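Taken together, the callbacks above implement a simple claim/release cycle with the imported lock_workfile helpers; a hedged sketch of that cycle (the path is hypothetical):
filepath = "/projects/demo/work/sh010/maya/sh010_layout_v001.ma"
if is_workfile_lock_enabled(MayaHost.name, legacy_io.active_project()):
    # on open and after save: claim the file if nobody else holds it
    if not is_workfile_locked(filepath):
        create_workfile_lock(filepath)
    # before closing Maya or opening another file: release the claim
    remove_workfile_lock(filepath)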
class MayaDirmap(HostDirmap):

View file

@ -1,252 +0,0 @@
import re
from maya import cmds
from openpype.client import get_representations
from openpype.pipeline import legacy_io
from openpype.pipeline.workfile.abstract_template_loader import (
AbstractPlaceholder,
AbstractTemplateLoader
)
from openpype.pipeline.workfile.build_template_exceptions import (
TemplateAlreadyImported
)
PLACEHOLDER_SET = 'PLACEHOLDERS_SET'
class MayaTemplateLoader(AbstractTemplateLoader):
"""Concrete implementation of AbstractTemplateLoader for maya
"""
def import_template(self, path):
"""Import template into current scene.
Block if a template is already loaded.
Args:
path (str): A path to current template (usually given by
get_template_path implementation)
Returns:
bool: Wether the template was succesfully imported or not
"""
if cmds.objExists(PLACEHOLDER_SET):
raise TemplateAlreadyImported(
"Build template already loaded\n"
"Clean scene if needed (File > New Scene)")
cmds.sets(name=PLACEHOLDER_SET, empty=True)
self.new_nodes = cmds.file(path, i=True, returnNewNodes=True)
cmds.setAttr(PLACEHOLDER_SET + '.hiddenInOutliner', True)
for set in cmds.listSets(allSets=True):
if (cmds.objExists(set) and
cmds.attributeQuery('id', node=set, exists=True) and
cmds.getAttr(set + '.id') == 'pyblish.avalon.instance'):
if cmds.attributeQuery('asset', node=set, exists=True):
cmds.setAttr(
set + '.asset',
legacy_io.Session['AVALON_ASSET'], type='string'
)
return True
def template_already_imported(self, err_msg):
clearButton = "Clear scene and build"
updateButton = "Update template"
abortButton = "Abort"
title = "Scene already builded"
message = (
"It's seems a template was already build for this scene.\n"
"Error message reveived :\n\n\"{}\"".format(err_msg))
buttons = [clearButton, updateButton, abortButton]
defaultButton = clearButton
cancelButton = abortButton
dismissString = abortButton
answer = cmds.confirmDialog(
t=title,
m=message,
b=buttons,
db=defaultButton,
cb=cancelButton,
ds=dismissString)
if answer == clearButton:
cmds.file(newFile=True, force=True)
self.import_template(self.template_path)
self.populate_template()
elif answer == updateButton:
self.update_missing_containers()
elif answer == abortButton:
return
@staticmethod
def get_template_nodes():
attributes = cmds.ls('*.builder_type', long=True)
return [attribute.rpartition('.')[0] for attribute in attributes]
def get_loaded_containers_by_id(self):
try:
containers = cmds.sets("AVALON_CONTAINERS", q=True)
except ValueError:
return None
return [
cmds.getAttr(container + '.representation')
for container in containers]
class MayaPlaceholder(AbstractPlaceholder):
"""Concrete implementation of AbstractPlaceholder for maya
"""
optional_keys = {'asset', 'subset', 'hierarchy'}
def get_data(self, node):
user_data = dict()
for attr in self.required_keys.union(self.optional_keys):
attribute_name = '{}.{}'.format(node, attr)
if not cmds.attributeQuery(attr, node=node, exists=True):
print("{} not found".format(attribute_name))
continue
user_data[attr] = cmds.getAttr(
attribute_name,
asString=True)
user_data['parent'] = (
cmds.getAttr(node + '.parent', asString=True)
or node.rpartition('|')[0]
or ""
)
user_data['node'] = node
if user_data['parent']:
siblings = cmds.listRelatives(user_data['parent'], children=True)
else:
siblings = cmds.ls(assemblies=True)
node_shortname = user_data['node'].rpartition('|')[2]
current_index = cmds.getAttr(node + '.index', asString=True)
user_data['index'] = (
current_index if current_index >= 0
else siblings.index(node_shortname))
self.data = user_data
def parent_in_hierarchy(self, containers):
"""Parent loaded container to placeholder's parent
ie : Set loaded content as placeholder's sibling
Args:
containers (String): Placeholder loaded containers
"""
if not containers:
return
roots = cmds.sets(containers, q=True)
nodes_to_parent = []
for root in roots:
if root.endswith("_RN"):
refRoot = cmds.referenceQuery(root, n=True)[0]
refRoot = cmds.listRelatives(refRoot, parent=True) or [refRoot]
nodes_to_parent.extend(refRoot)
elif root in cmds.listSets(allSets=True):
if not cmds.sets(root, q=True):
return
else:
continue
else:
nodes_to_parent.append(root)
if self.data['parent']:
cmds.parent(nodes_to_parent, self.data['parent'])
# Move loaded nodes to correct index in outliner hierarchy
placeholder_node = self.data['node']
placeholder_form = cmds.xform(
placeholder_node,
q=True,
matrix=True,
worldSpace=True
)
for node in set(nodes_to_parent):
cmds.reorder(node, front=True)
cmds.reorder(node, relative=self.data['index'])
cmds.xform(node, matrix=placeholder_form, ws=True)
holding_sets = cmds.listSets(object=placeholder_node)
if not holding_sets:
return
for holding_set in holding_sets:
cmds.sets(roots, forceElement=holding_set)
def clean(self):
"""Hide placeholder, parent them to root
add them to placeholder set and register placeholder's parent
to keep placeholder info available for future use
"""
node = self.data['node']
if self.data['parent']:
cmds.setAttr(node + '.parent', self.data['parent'], type='string')
if cmds.getAttr(node + '.index') < 0:
cmds.setAttr(node + '.index', self.data['index'])
holding_sets = cmds.listSets(object=node)
if holding_sets:
for set in holding_sets:
cmds.sets(node, remove=set)
if cmds.listRelatives(node, p=True):
node = cmds.parent(node, world=True)[0]
cmds.sets(node, addElement=PLACEHOLDER_SET)
cmds.hide(node)
cmds.setAttr(node + '.hiddenInOutliner', True)
def get_representations(self, current_asset_doc, linked_asset_docs):
project_name = legacy_io.active_project()
builder_type = self.data["builder_type"]
if builder_type == "context_asset":
context_filters = {
"asset": [current_asset_doc["name"]],
"subset": [re.compile(self.data["subset"])],
"hierarchy": [re.compile(self.data["hierarchy"])],
"representations": [self.data["representation"]],
"family": [self.data["family"]]
}
elif builder_type != "linked_asset":
context_filters = {
"asset": [re.compile(self.data["asset"])],
"subset": [re.compile(self.data["subset"])],
"hierarchy": [re.compile(self.data["hierarchy"])],
"representation": [self.data["representation"]],
"family": [self.data["family"]]
}
else:
asset_regex = re.compile(self.data["asset"])
linked_asset_names = []
for asset_doc in linked_asset_docs:
asset_name = asset_doc["name"]
if asset_regex.match(asset_name):
linked_asset_names.append(asset_name)
context_filters = {
"asset": linked_asset_names,
"subset": [re.compile(self.data["subset"])],
"hierarchy": [re.compile(self.data["hierarchy"])],
"representation": [self.data["representation"]],
"family": [self.data["family"]],
}
return list(get_representations(
project_name,
context_filters=context_filters
))
def err_message(self):
return (
"Error while trying to load a representation.\n"
"Either the subset wasn't published or the template is malformed."
"\n\n"
"Builder was looking for :\n{attributes}".format(
attributes="\n".join([
"{}: {}".format(key.title(), value)
for key, value in self.data.items()]
)
)
)

Some files were not shown because too many files have changed in this diff.