diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml index 2d9915609a..387b5574ab 100644 --- a/.github/ISSUE_TEMPLATE/bug_report.yml +++ b/.github/ISSUE_TEMPLATE/bug_report.yml @@ -35,6 +35,10 @@ body: label: Version description: What version are you running? Look to OpenPype Tray options: + - 3.16.3-nightly.2 + - 3.16.3-nightly.1 + - 3.16.2 + - 3.16.2-nightly.2 - 3.16.2-nightly.1 - 3.16.1 - 3.16.0 @@ -131,10 +135,6 @@ body: - 3.14.7-nightly.3 - 3.14.7-nightly.2 - 3.14.7-nightly.1 - - 3.14.6 - - 3.14.6-nightly.3 - - 3.14.6-nightly.2 - - 3.14.6-nightly.1 validations: required: true - type: dropdown diff --git a/.gitignore b/.gitignore index e5019a4e74..622d55fb88 100644 --- a/.gitignore +++ b/.gitignore @@ -37,7 +37,7 @@ Temporary Items ########### /build /dist/ -/server_addon/package/* +/server_addon/packages/* /vendor/bin/* /vendor/python/* diff --git a/CHANGELOG.md b/CHANGELOG.md index 07b95c7343..f2930d45eb 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,6 +1,186 @@ # Changelog +## [3.16.2](https://github.com/ynput/OpenPype/tree/3.16.2) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.16.1...3.16.2) + +### **🆕 New features** + + +
+Fusion - Set selected tool to active #5327 + +When you run the action to select a node, this PR makes the node flow show the selected node and you'll see the node's controls in the inspector. + + +___ + +
+ +### **🚀 Enhancements** + + +
+Maya: All base create plugins #5326 + +Prepared base classes for each creator type in Maya. Extended `MayaCreatorBase` to provide default implementations of the common instance-handling logic used by each plugin type. + + +___ + +
+ + +
+Windows: Support long paths on zip updates. #5265 + +Support long paths for version extraction on Windows. The use case is having long paths, for example in an addon: you can install to the C drive, but because the zip files are extracted in the local user's folder, additional subdirectories are added to the paths, which quickly become too long for Windows to handle during zip updates. + + +___ + +
+ + +
+Blender: Added setting to set resolution and start/end frames at startup #5338 + +This PR adds the `set_resolution_startup` and `set_frames_startup` settings. They automatically set the resolution and the start/end frames and FPS, respectively, in Blender when opening a file or creating a new one. + + +___ + +
+ + +
+Blender: Support for ExtractBurnin #5339 + +This PR adds support for ExtractBurnin in Blender when publishing a review. + + +___ + +
+ + +
+Blender: Extract Camera as Alembic #5343 + +Added support to extract Alembic Cameras in Blender. + + +___ + +
+ +### **🐛 Bug fixes** + + +
+Maya: Validate Instance In Context #5335 + +Added the missing new publisher error so the repair action shows up. + + +___ + +
+ + +
+Settings: Fix default settings #5311 + +Fixed default settings for Shotgrid. Renamed `FarmRootEnumEntity` to `DynamicEnumEntity` and removed a duplicated ABC metaclass definition (all settings entities already have an abstract metaclass). + + +___ + +
+ + +
+Deadline: missing context argument #5312 + +Updated function arguments to include the missing context argument. + + +___ + +
+ + +
+Qt UI: Multiselection combobox PySide6 compatibility #5314 + +- The check states are replaced with the values for PySide6 +- `QtCore.Qt.ItemIsUserTristate` is used instead of `QtCore.Qt.ItemIsTristate` to avoid crashes on PySide6 + + +___ + +
+ + +
+Docker: handle openssl 1.1.1 for centos 7 docker build #5319 + +The move to Python 3.9 added the need for OpenSSL 1.1.x, which is not available by default on the CentOS 7 image. This fixes it. + + +___ + +
+ + +
+houdini: fix typo in redshift proxy #5320 + +There was a typo (an extra backtick) in the filename in `create_redshift_proxy.py`; this PR fixes it. + + +___ + +
+ + +
+Houdini: fix wrong creator identifier in pointCache workflow #5324 + +Fixed a bug in publishing Alembics where an invalid creator identifier caused a missing family association. + + +___ + +
+ + +
+Fix colorspace compatibility check #5334 + +A user may have `PyOpenColorIO` installed on their machine (for example, bundled with RenderMan). This can trick the compatibility check, because `import PyOpenColorIO` won't raise an error even though the installed version may be too old. Before this fix the compatibility check passed and the wrapper was used directly; after the fix the wrapper is used via a subprocess instead. + + +___ + +
+ +### **Merged pull requests** + + +
+Remove forgotten dev logging #5315 + + +___ + +
+ + + + ## [3.16.1](https://github.com/ynput/OpenPype/tree/3.16.1) diff --git a/ayon_start.py b/ayon_start.py deleted file mode 100644 index 458c46bba6..0000000000 --- a/ayon_start.py +++ /dev/null @@ -1,483 +0,0 @@ -# -*- coding: utf-8 -*- -"""Main entry point for AYON command. - -Bootstrapping process of AYON. -""" -import os -import sys -import site -import traceback -import contextlib - - -# Enabled logging debug mode when "--debug" is passed -if "--verbose" in sys.argv: - expected_values = ( - "Expected: notset, debug, info, warning, error, critical" - " or integer [0-50]." - ) - idx = sys.argv.index("--verbose") - sys.argv.pop(idx) - if idx < len(sys.argv): - value = sys.argv.pop(idx) - else: - raise RuntimeError(( - f"Expect value after \"--verbose\" argument. {expected_values}" - )) - - log_level = None - low_value = value.lower() - if low_value.isdigit(): - log_level = int(low_value) - elif low_value == "notset": - log_level = 0 - elif low_value == "debug": - log_level = 10 - elif low_value == "info": - log_level = 20 - elif low_value == "warning": - log_level = 30 - elif low_value == "error": - log_level = 40 - elif low_value == "critical": - log_level = 50 - - if log_level is None: - raise ValueError(( - "Unexpected value after \"--verbose\" " - f"argument \"{value}\". {expected_values}" - )) - - os.environ["OPENPYPE_LOG_LEVEL"] = str(log_level) - os.environ["AYON_LOG_LEVEL"] = str(log_level) - -# Enable debug mode, may affect log level if log level is not defined -if "--debug" in sys.argv: - sys.argv.remove("--debug") - os.environ["AYON_DEBUG"] = "1" - os.environ["OPENPYPE_DEBUG"] = "1" - -if "--automatic-tests" in sys.argv: - sys.argv.remove("--automatic-tests") - os.environ["IS_TEST"] = "1" - -SKIP_HEADERS = False -if "--skip-headers" in sys.argv: - sys.argv.remove("--skip-headers") - SKIP_HEADERS = True - -SKIP_BOOTSTRAP = False -if "--skip-bootstrap" in sys.argv: - sys.argv.remove("--skip-bootstrap") - SKIP_BOOTSTRAP = True - -if "--use-staging" in sys.argv: - sys.argv.remove("--use-staging") - os.environ["AYON_USE_STAGING"] = "1" - os.environ["OPENPYPE_USE_STAGING"] = "1" - -if "--headless" in sys.argv: - os.environ["AYON_HEADLESS_MODE"] = "1" - os.environ["OPENPYPE_HEADLESS_MODE"] = "1" - sys.argv.remove("--headless") - -elif ( - os.getenv("AYON_HEADLESS_MODE") != "1" - or os.getenv("OPENPYPE_HEADLESS_MODE") != "1" -): - os.environ.pop("AYON_HEADLESS_MODE", None) - os.environ.pop("OPENPYPE_HEADLESS_MODE", None) - -elif ( - os.getenv("AYON_HEADLESS_MODE") - != os.getenv("OPENPYPE_HEADLESS_MODE") -): - os.environ["OPENPYPE_HEADLESS_MODE"] = ( - os.environ["AYON_HEADLESS_MODE"] - ) - -IS_BUILT_APPLICATION = getattr(sys, "frozen", False) -HEADLESS_MODE_ENABLED = os.getenv("AYON_HEADLESS_MODE") == "1" - -_pythonpath = os.getenv("PYTHONPATH", "") -_python_paths = _pythonpath.split(os.pathsep) -if not IS_BUILT_APPLICATION: - # Code root defined by `start.py` directory - AYON_ROOT = os.path.dirname(os.path.abspath(__file__)) - _dependencies_path = site.getsitepackages()[-1] -else: - AYON_ROOT = os.path.dirname(sys.executable) - - # add dependencies folder to sys.pat for frozen code - _dependencies_path = os.path.normpath( - os.path.join(AYON_ROOT, "dependencies") - ) -# add stuff from `/dependencies` to PYTHONPATH. 
-sys.path.append(_dependencies_path) -_python_paths.append(_dependencies_path) - -# Vendored python modules that must not be in PYTHONPATH environment but -# are required for OpenPype processes -sys.path.insert(0, os.path.join(AYON_ROOT, "vendor", "python")) - -# Add common package to sys path -# - common contains common code for bootstraping and OpenPype processes -sys.path.insert(0, os.path.join(AYON_ROOT, "common")) - -# This is content of 'core' addon which is ATM part of build -common_python_vendor = os.path.join( - AYON_ROOT, - "openpype", - "vendor", - "python", - "common" -) -# Add tools dir to sys path for pyblish UI discovery -tools_dir = os.path.join(AYON_ROOT, "openpype", "tools") -for path in (AYON_ROOT, common_python_vendor, tools_dir): - while path in _python_paths: - _python_paths.remove(path) - - while path in sys.path: - sys.path.remove(path) - - _python_paths.insert(0, path) - sys.path.insert(0, path) - -os.environ["PYTHONPATH"] = os.pathsep.join(_python_paths) - -# enabled AYON state -os.environ["USE_AYON_SERVER"] = "1" -# Set this to point either to `python` from venv in case of live code -# or to `ayon` or `ayon_console` in case of frozen code -os.environ["AYON_EXECUTABLE"] = sys.executable -os.environ["OPENPYPE_EXECUTABLE"] = sys.executable -os.environ["AYON_ROOT"] = AYON_ROOT -os.environ["OPENPYPE_ROOT"] = AYON_ROOT -os.environ["OPENPYPE_REPOS_ROOT"] = AYON_ROOT -os.environ["AYON_MENU_LABEL"] = "AYON" -os.environ["AVALON_LABEL"] = "AYON" -# Set name of pyblish UI import -os.environ["PYBLISH_GUI"] = "pyblish_pype" -# Set builtin OCIO root -os.environ["BUILTIN_OCIO_ROOT"] = os.path.join( - AYON_ROOT, - "vendor", - "bin", - "ocioconfig", - "OpenColorIOConfigs" -) - -import blessed # noqa: E402 -import certifi # noqa: E402 - - -if sys.__stdout__: - term = blessed.Terminal() - - def _print(message: str): - if message.startswith("!!! "): - print(f'{term.orangered2("!!! ")}{message[4:]}') - elif message.startswith(">>> "): - print(f'{term.aquamarine3(">>> ")}{message[4:]}') - elif message.startswith("--- "): - print(f'{term.darkolivegreen3("--- ")}{message[4:]}') - elif message.startswith("*** "): - print(f'{term.gold("*** ")}{message[4:]}') - elif message.startswith(" - "): - print(f'{term.wheat(" - ")}{message[4:]}') - elif message.startswith(" . "): - print(f'{term.tan(" . ")}{message[4:]}') - elif message.startswith(" - "): - print(f'{term.seagreen3(" - ")}{message[7:]}') - elif message.startswith(" ! "): - print(f'{term.goldenrod(" ! ")}{message[7:]}') - elif message.startswith(" * "): - print(f'{term.aquamarine1(" * ")}{message[7:]}') - elif message.startswith(" "): - print(f'{term.darkseagreen3(" ")}{message[4:]}') - else: - print(message) -else: - def _print(message: str): - print(message) - - -# if SSL_CERT_FILE is not set prior to OpenPype launch, we set it to point -# to certifi bundle to make sure we have reasonably new CA certificates. 
-if not os.getenv("SSL_CERT_FILE"): - os.environ["SSL_CERT_FILE"] = certifi.where() -elif os.getenv("SSL_CERT_FILE") != certifi.where(): - _print("--- your system is set to use custom CA certificate bundle.") - -from ayon_api import get_base_url -from ayon_api.constants import SERVER_URL_ENV_KEY, SERVER_API_ENV_KEY -from ayon_common import is_staging_enabled -from ayon_common.connection.credentials import ( - ask_to_login_ui, - add_server, - need_server_or_login, - load_environments, - set_environments, - create_global_connection, - confirm_server_login, -) -from ayon_common.distribution import ( - AyonDistribution, - BundleNotFoundError, - show_missing_bundle_information, -) - - -def set_global_environments() -> None: - """Set global OpenPype's environments.""" - import acre - - from openpype.settings import get_general_environments - - general_env = get_general_environments() - - # first resolve general environment because merge doesn't expect - # values to be list. - # TODO: switch to OpenPype environment functions - merged_env = acre.merge( - acre.compute(acre.parse(general_env), cleanup=False), - dict(os.environ) - ) - env = acre.compute( - merged_env, - cleanup=False - ) - os.environ.clear() - os.environ.update(env) - - # Hardcoded default values - os.environ["PYBLISH_GUI"] = "pyblish_pype" - # Change scale factor only if is not set - if "QT_AUTO_SCREEN_SCALE_FACTOR" not in os.environ: - os.environ["QT_AUTO_SCREEN_SCALE_FACTOR"] = "1" - - -def set_addons_environments(): - """Set global environments for OpenPype modules. - - This requires to have OpenPype in `sys.path`. - """ - - import acre - from openpype.modules import ModulesManager - - modules_manager = ModulesManager() - - # Merge environments with current environments and update values - if module_envs := modules_manager.collect_global_environments(): - parsed_envs = acre.parse(module_envs) - env = acre.merge(parsed_envs, dict(os.environ)) - os.environ.clear() - os.environ.update(env) - - -def _connect_to_ayon_server(): - load_environments() - if not need_server_or_login(): - create_global_connection() - return - - if HEADLESS_MODE_ENABLED: - _print("!!! Cannot open v4 Login dialog in headless mode.") - _print(( - "!!! Please use `{}` to specify server address" - " and '{}' to specify user's token." - ).format(SERVER_URL_ENV_KEY, SERVER_API_ENV_KEY)) - sys.exit(1) - - current_url = os.environ.get(SERVER_URL_ENV_KEY) - url, token, username = ask_to_login_ui(current_url, always_on_top=True) - if url is not None and token is not None: - confirm_server_login(url, token, username) - return - - if url is not None: - add_server(url, username) - - _print("!!! Login was not successful.") - sys.exit(0) - - -def _check_and_update_from_ayon_server(): - """Gets addon info from v4, compares with local folder and updates it. - - Raises: - RuntimeError - """ - - distribution = AyonDistribution() - bundle = None - bundle_name = None - try: - bundle = distribution.bundle_to_use - if bundle is not None: - bundle_name = bundle.name - except BundleNotFoundError as exc: - bundle_name = exc.bundle_name - - if bundle is None: - url = get_base_url() - if not HEADLESS_MODE_ENABLED: - show_missing_bundle_information(url, bundle_name) - - elif bundle_name: - _print(( - f"!!! Requested release bundle '{bundle_name}'" - " is not available on server." - )) - _print( - "!!! Check if selected release bundle" - f" is available on the server '{url}'." - ) - - else: - mode = "staging" if is_staging_enabled() else "production" - _print( - f"!!! 
No release bundle is set as {mode} on the AYON server." - ) - _print( - "!!! Make sure there is a release bundle set" - f" as \"{mode}\" on the AYON server '{url}'." - ) - sys.exit(1) - - distribution.distribute() - distribution.validate_distribution() - os.environ["AYON_BUNDLE_NAME"] = bundle_name - - python_paths = [ - path - for path in os.getenv("PYTHONPATH", "").split(os.pathsep) - if path - ] - - for path in distribution.get_sys_paths(): - sys.path.insert(0, path) - if path not in python_paths: - python_paths.append(path) - os.environ["PYTHONPATH"] = os.pathsep.join(python_paths) - - -def boot(): - """Bootstrap OpenPype.""" - - from openpype.version import __version__ - - # TODO load version - os.environ["OPENPYPE_VERSION"] = __version__ - os.environ["AYON_VERSION"] = __version__ - - _connect_to_ayon_server() - _check_and_update_from_ayon_server() - - # delete OpenPype module and it's submodules from cache so it is used from - # specific version - modules_to_del = [ - sys.modules.pop(module_name) - for module_name in tuple(sys.modules) - if module_name == "openpype" or module_name.startswith("openpype.") - ] - - for module_name in modules_to_del: - with contextlib.suppress(AttributeError, KeyError): - del sys.modules[module_name] - - -def main_cli(): - from openpype import cli - from openpype.version import __version__ - from openpype.lib import terminal as t - - _print(">>> loading environments ...") - _print(" - global AYON ...") - set_global_environments() - _print(" - for addons ...") - set_addons_environments() - - # print info when not running scripts defined in 'silent commands' - if not SKIP_HEADERS: - info = get_info(is_staging_enabled()) - info.insert(0, f">>> Using AYON from [ {AYON_ROOT} ]") - - t_width = 20 - with contextlib.suppress(ValueError, OSError): - t_width = os.get_terminal_size().columns - 2 - - _header = f"*** AYON [{__version__}] " - info.insert(0, _header + "-" * (t_width - len(_header))) - - for i in info: - t.echo(i) - - try: - cli.main(obj={}, prog_name="ayon") - except Exception: # noqa - exc_info = sys.exc_info() - _print("!!! AYON crashed:") - traceback.print_exception(*exc_info) - sys.exit(1) - - -def script_cli(): - """Run and execute script.""" - - filepath = os.path.abspath(sys.argv[1]) - - # Find '__main__.py' in directory - if os.path.isdir(filepath): - new_filepath = os.path.join(filepath, "__main__.py") - if not os.path.exists(new_filepath): - raise RuntimeError( - f"can't find '__main__' module in '{filepath}'") - filepath = new_filepath - - # Add parent dir to sys path - sys.path.insert(0, os.path.dirname(filepath)) - - # Read content and execute - with open(filepath, "r") as stream: - content = stream.read() - - exec(compile(content, filepath, "exec"), globals()) - - -def get_info(use_staging=None) -> list: - """Print additional information to console.""" - - inf = [] - if use_staging: - inf.append(("AYON variant", "staging")) - else: - inf.append(("AYON variant", "production")) - inf.append(("AYON bundle", os.getenv("AYON_BUNDLE"))) - - # NOTE add addons information - - maximum = max(len(i[0]) for i in inf) - formatted = [] - for info in inf: - padding = (maximum - len(info[0])) + 1 - formatted.append(f'... 
{info[0]}:{" " * padding}[ {info[1]} ]') - return formatted - - -def main(): - if not SKIP_BOOTSTRAP: - boot() - - args = list(sys.argv) - args.pop(0) - if args and os.path.exists(args[0]): - script_cli() - else: - main_cli() - - -if __name__ == "__main__": - main() diff --git a/common/ayon_common/__init__.py b/common/ayon_common/__init__.py deleted file mode 100644 index ddabb7da2f..0000000000 --- a/common/ayon_common/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -from .utils import ( - IS_BUILT_APPLICATION, - is_staging_enabled, - get_local_site_id, - get_ayon_appdirs, - get_ayon_launch_args, -) - - -__all__ = ( - "IS_BUILT_APPLICATION", - "is_staging_enabled", - "get_local_site_id", - "get_ayon_appdirs", - "get_ayon_launch_args", -) diff --git a/common/ayon_common/connection/credentials.py b/common/ayon_common/connection/credentials.py deleted file mode 100644 index 7f70cb7992..0000000000 --- a/common/ayon_common/connection/credentials.py +++ /dev/null @@ -1,511 +0,0 @@ -"""Handle credentials and connection to server for client application. - -Cache and store used server urls. Store/load API keys to/from keyring if -needed. Store metadata about used urls, usernames for the urls and when was -the connection with the username established. - -On bootstrap is created global connection with information about site and -client version. The connection object lives in 'ayon_api'. -""" - -import os -import json -import platform -import datetime -import contextlib -import subprocess -import tempfile -from typing import Optional, Union, Any - -import ayon_api - -from ayon_api.constants import SERVER_URL_ENV_KEY, SERVER_API_ENV_KEY -from ayon_api.exceptions import UrlError -from ayon_api.utils import ( - validate_url, - is_token_valid, - logout_from_server, -) - -from ayon_common.utils import ( - get_ayon_appdirs, - get_local_site_id, - get_ayon_launch_args, - is_staging_enabled, -) - - -class ChangeUserResult: - def __init__( - self, logged_out, old_url, old_token, old_username, - new_url, new_token, new_username - ): - shutdown = logged_out - restart = new_url is not None and new_url != old_url - token_changed = new_token is not None and new_token != old_token - - self.logged_out = logged_out - self.old_url = old_url - self.old_token = old_token - self.old_username = old_username - self.new_url = new_url - self.new_token = new_token - self.new_username = new_username - - self.shutdown = shutdown - self.restart = restart - self.token_changed = token_changed - - -def _get_servers_path(): - return get_ayon_appdirs("used_servers.json") - - -def get_servers_info_data(): - """Metadata about used server on this machine. - - Store data about all used server urls, last used url and user username for - the url. Using this metadata we can remember which username was used per - url if token stored in keyring loose lifetime. - - Returns: - dict[str, Any]: Information about servers. - """ - - data = {} - servers_info_path = _get_servers_path() - if not os.path.exists(servers_info_path): - dirpath = os.path.dirname(servers_info_path) - if not os.path.exists(dirpath): - os.makedirs(dirpath) - - return data - - with open(servers_info_path, "r") as stream: - with contextlib.suppress(BaseException): - data = json.load(stream) - return data - - -def add_server(url: str, username: str): - """Add server to server info metadata. - - This function will also mark the url as last used url on the machine so on - next launch will be used. - - Args: - url (str): Server url. - username (str): Name of user used to log in. 
- """ - - servers_info_path = _get_servers_path() - data = get_servers_info_data() - data["last_server"] = url - if "urls" not in data: - data["urls"] = {} - data["urls"][url] = { - "updated_dt": datetime.datetime.now().strftime("%Y/%m/%d %H:%M:%S"), - "username": username, - } - - with open(servers_info_path, "w") as stream: - json.dump(data, stream) - - -def remove_server(url: str): - """Remove server url from servers information. - - This should be used on logout to completelly loose information about server - on the machine. - - Args: - url (str): Server url. - """ - - if not url: - return - - servers_info_path = _get_servers_path() - data = get_servers_info_data() - if data.get("last_server") == url: - data["last_server"] = None - - if "urls" in data: - data["urls"].pop(url, None) - - with open(servers_info_path, "w") as stream: - json.dump(data, stream) - - -def get_last_server( - data: Optional[dict[str, Any]] = None -) -> Union[str, None]: - """Last server used to log in on this machine. - - Args: - data (Optional[dict[str, Any]]): Prepared server information data. - - Returns: - Union[str, None]: Last used server url. - """ - - if data is None: - data = get_servers_info_data() - return data.get("last_server") - - -def get_last_username_by_url( - url: str, - data: Optional[dict[str, Any]] = None -) -> Union[str, None]: - """Get last username which was used for passed url. - - Args: - url (str): Server url. - data (Optional[dict[str, Any]]): Servers info. - - Returns: - Union[str, None]: Username. - """ - - if not url: - return None - - if data is None: - data = get_servers_info_data() - - if urls := data.get("urls"): - if url_info := urls.get(url): - return url_info.get("username") - return None - - -def get_last_server_with_username(): - """Receive last server and username used in last connection. - - Returns: - tuple[Union[str, None], Union[str, None]]: Url and username. - """ - - data = get_servers_info_data() - url = get_last_server(data) - username = get_last_username_by_url(url) - return url, username - - -class TokenKeyring: - # Fake username with hardcoded username - username_key = "username" - - def __init__(self, url): - try: - import keyring - - except Exception as exc: - raise NotImplementedError( - "Python module `keyring` is not available." - ) from exc - - # hack for cx_freeze and Windows keyring backend - if platform.system().lower() == "windows": - from keyring.backends import Windows - - keyring.set_keyring(Windows.WinVaultKeyring()) - - self._url = url - self._keyring_key = f"AYON/{url}" - - def get_value(self): - import keyring - - return keyring.get_password(self._keyring_key, self.username_key) - - def set_value(self, value): - import keyring - - if value is not None: - keyring.set_password(self._keyring_key, self.username_key, value) - return - - with contextlib.suppress(keyring.errors.PasswordDeleteError): - keyring.delete_password(self._keyring_key, self.username_key) - - -def load_token(url: str) -> Union[str, None]: - """Get token for url from keyring. - - Args: - url (str): Server url. - - Returns: - Union[str, None]: Token for passed url available in keyring. - """ - - return TokenKeyring(url).get_value() - - -def store_token(url: str, token: str): - """Store token by url to keyring. - - Args: - url (str): Server url. - token (str): User token to server. - """ - - TokenKeyring(url).set_value(token) - - -def ask_to_login_ui( - url: Optional[str] = None, - always_on_top: Optional[bool] = False -) -> tuple[str, str, str]: - """Ask user to login using UI. 
- - This should be used only when user is not yet logged in at all or available - credentials are invalid. To change credentials use 'change_user_ui' - function. - - Use a subprocess to show UI. - - Args: - url (Optional[str]): Server url that could be prefilled in UI. - always_on_top (Optional[bool]): Window will be drawn on top of - other windows. - - Returns: - tuple[str, str, str]: Url, user's token and username. - """ - - current_dir = os.path.dirname(os.path.abspath(__file__)) - ui_dir = os.path.join(current_dir, "ui") - - if url is None: - url = get_last_server() - username = get_last_username_by_url(url) - data = { - "url": url, - "username": username, - "always_on_top": always_on_top, - } - - with tempfile.NamedTemporaryFile( - mode="w", prefix="ayon_login", suffix=".json", delete=False - ) as tmp: - output = tmp.name - json.dump(data, tmp) - - code = subprocess.call( - get_ayon_launch_args(ui_dir, "--skip-bootstrap", output)) - if code != 0: - raise RuntimeError("Failed to show login UI") - - with open(output, "r") as stream: - data = json.load(stream) - os.remove(output) - return data["output"] - - -def change_user_ui() -> ChangeUserResult: - """Change user using UI. - - Show UI to user where he can change credentials or url. Output will contain - all information about old/new values of url, username, api key. If user - confirmed or declined values. - - Returns: - ChangeUserResult: Information about user change. - """ - - from .ui import change_user - - url, username = get_last_server_with_username() - token = load_token(url) - result = change_user(url, username, token) - new_url, new_token, new_username, logged_out = result - - output = ChangeUserResult( - logged_out, url, token, username, - new_url, new_token, new_username - ) - if output.logged_out: - logout(url, token) - - elif output.token_changed: - change_token( - output.new_url, - output.new_token, - output.new_username, - output.old_url - ) - return output - - -def change_token( - url: str, - token: str, - username: Optional[str] = None, - old_url: Optional[str] = None -): - """Change url and token in currently running session. - - Function can also change server url, in that case are previous credentials - NOT removed from cache. - - Args: - url (str): Url to server. - token (str): New token to be used for url connection. - username (Optional[str]): Username of logged user. - old_url (Optional[str]): Previous url. Value from 'get_last_server' - is used if not entered. - """ - - if old_url is None: - old_url = get_last_server() - if old_url and old_url == url: - remove_url_cache(old_url) - - # TODO check if ayon_api is already connected - add_server(url, username) - store_token(url, token) - ayon_api.change_token(url, token) - - -def remove_url_cache(url: str): - """Clear cache for server url. - - Args: - url (str): Server url which is removed from cache. - """ - - store_token(url, None) - - -def remove_token_cache(url: str, token: str): - """Remove token from local cache of url. - - Is skipped if cached token under the passed url is not the same - as passed token. - - Args: - url (str): Url to server. - token (str): Token to be removed from url cache. - """ - - if load_token(url) == token: - remove_url_cache(url) - - -def logout(url: str, token: str): - """Logout from server and throw token away. - - Args: - url (str): Url from which should be logged out. - token (str): Token which should be used to log out. 
- """ - - remove_server(url) - ayon_api.close_connection() - ayon_api.set_environments(None, None) - remove_token_cache(url, token) - logout_from_server(url, token) - - -def load_environments(): - """Load environments on startup. - - Handle environments needed for connection with server. Environments are - 'AYON_SERVER_URL' and 'AYON_API_KEY'. - - Server is looked up from environment. Already set environent is not - changed. If environemnt is not filled then last server stored in appdirs - is used. - - Token is skipped if url is not available. Otherwise, is also checked from - env and if is not available then uses 'load_token' to try to get token - based on server url. - """ - - server_url = os.environ.get(SERVER_URL_ENV_KEY) - if not server_url: - server_url = get_last_server() - if not server_url: - return - os.environ[SERVER_URL_ENV_KEY] = server_url - - if not os.environ.get(SERVER_API_ENV_KEY): - if token := load_token(server_url): - os.environ[SERVER_API_ENV_KEY] = token - - -def set_environments(url: str, token: str): - """Change url and token environemnts in currently running process. - - Args: - url (str): New server url. - token (str): User's token. - """ - - ayon_api.set_environments(url, token) - - -def create_global_connection(): - """Create global connection with site id and client version. - - Make sure the global connection in 'ayon_api' have entered site id and - client version. - - Set default settings variant to use based on 'is_staging_enabled'. - """ - - ayon_api.create_connection( - get_local_site_id(), os.environ.get("AYON_VERSION") - ) - ayon_api.set_default_settings_variant( - "staging" if is_staging_enabled() else "production" - ) - - -def need_server_or_login() -> bool: - """Check if server url or login to the server are needed. - - It is recommended to call 'load_environments' on startup before this check. - But in some cases this function could be called after startup. - - Returns: - bool: 'True' if server and token are available. Otherwise 'False'. - """ - - server_url = os.environ.get(SERVER_URL_ENV_KEY) - if not server_url: - return True - - try: - server_url = validate_url(server_url) - except UrlError: - return True - - token = os.environ.get(SERVER_API_ENV_KEY) - if token: - return not is_token_valid(server_url, token) - - token = load_token(server_url) - if token: - return not is_token_valid(server_url, token) - return True - - -def confirm_server_login(url, token, username): - """Confirm login of user and do necessary stepts to apply changes. - - This should not be used on "change" of user but on first login. - - Args: - url (str): Server url where user authenticated. - token (str): API token used for authentication to server. - username (Union[str, None]): Username related to API token. 
- """ - - add_server(url, username) - store_token(url, token) - set_environments(url, token) - create_global_connection() diff --git a/common/ayon_common/connection/ui/__init__.py b/common/ayon_common/connection/ui/__init__.py deleted file mode 100644 index 96e573df0d..0000000000 --- a/common/ayon_common/connection/ui/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -from .login_window import ( - ServerLoginWindow, - ask_to_login, - change_user, -) - - -__all__ = ( - "ServerLoginWindow", - "ask_to_login", - "change_user", -) diff --git a/common/ayon_common/connection/ui/__main__.py b/common/ayon_common/connection/ui/__main__.py deleted file mode 100644 index 719b2b8ef5..0000000000 --- a/common/ayon_common/connection/ui/__main__.py +++ /dev/null @@ -1,23 +0,0 @@ -import sys -import json - -from ayon_common.connection.ui.login_window import ask_to_login - - -def main(output_path): - with open(output_path, "r") as stream: - data = json.load(stream) - - url = data.get("url") - username = data.get("username") - always_on_top = data.get("always_on_top", False) - out_url, out_token, out_username = ask_to_login( - url, username, always_on_top=always_on_top) - - data["output"] = [out_url, out_token, out_username] - with open(output_path, "w") as stream: - json.dump(data, stream) - - -if __name__ == "__main__": - main(sys.argv[-1]) diff --git a/common/ayon_common/connection/ui/login_window.py b/common/ayon_common/connection/ui/login_window.py deleted file mode 100644 index 94c239852e..0000000000 --- a/common/ayon_common/connection/ui/login_window.py +++ /dev/null @@ -1,710 +0,0 @@ -import traceback - -from qtpy import QtWidgets, QtCore, QtGui - -from ayon_api.exceptions import UrlError -from ayon_api.utils import validate_url, login_to_server - -from ayon_common.resources import ( - get_resource_path, - get_icon_path, - load_stylesheet, -) -from ayon_common.ui_utils import set_style_property, get_qt_app - -from .widgets import ( - PressHoverButton, - PlaceholderLineEdit, -) - - -class LogoutConfirmDialog(QtWidgets.QDialog): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - self.setWindowTitle("Logout confirmation") - - message_widget = QtWidgets.QWidget(self) - - message_label = QtWidgets.QLabel( - ( - "You are going to logout. This action will close this" - " application and will invalidate your login." - " All other applications launched with this login won't be" - " able to use it anymore.
<br/><br/>" -            "You can cancel logout and only change server and user login" -            " in login dialog.<br/><br/>
" - "Press OK to confirm logout." - ), - message_widget - ) - message_label.setWordWrap(True) - - message_layout = QtWidgets.QHBoxLayout(message_widget) - message_layout.setContentsMargins(0, 0, 0, 0) - message_layout.addWidget(message_label, 1) - - sep_frame = QtWidgets.QFrame(self) - sep_frame.setObjectName("Separator") - sep_frame.setMinimumHeight(2) - sep_frame.setMaximumHeight(2) - - footer_widget = QtWidgets.QWidget(self) - - cancel_btn = QtWidgets.QPushButton("Cancel", footer_widget) - confirm_btn = QtWidgets.QPushButton("OK", footer_widget) - - footer_layout = QtWidgets.QHBoxLayout(footer_widget) - footer_layout.setContentsMargins(0, 0, 0, 0) - footer_layout.addStretch(1) - footer_layout.addWidget(cancel_btn, 0) - footer_layout.addWidget(confirm_btn, 0) - - main_layout = QtWidgets.QVBoxLayout(self) - main_layout.addWidget(message_widget, 0) - main_layout.addStretch(1) - main_layout.addWidget(sep_frame, 0) - main_layout.addWidget(footer_widget, 0) - - cancel_btn.clicked.connect(self._on_cancel_click) - confirm_btn.clicked.connect(self._on_confirm_click) - - self._cancel_btn = cancel_btn - self._confirm_btn = confirm_btn - self._result = False - - def showEvent(self, event): - super().showEvent(event) - self._match_btns_sizes() - - def resizeEvent(self, event): - super().resizeEvent(event) - self._match_btns_sizes() - - def _match_btns_sizes(self): - width = max( - self._cancel_btn.sizeHint().width(), - self._confirm_btn.sizeHint().width() - ) - self._cancel_btn.setMinimumWidth(width) - self._confirm_btn.setMinimumWidth(width) - - def _on_cancel_click(self): - self._result = False - self.reject() - - def _on_confirm_click(self): - self._result = True - self.accept() - - def get_result(self): - return self._result - - -class ServerLoginWindow(QtWidgets.QDialog): - default_width = 410 - default_height = 170 - - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - icon_path = get_icon_path() - icon = QtGui.QIcon(icon_path) - self.setWindowIcon(icon) - self.setWindowTitle("Login to server") - - edit_icon_path = get_resource_path("edit.png") - edit_icon = QtGui.QIcon(edit_icon_path) - - # --- URL page --- - login_widget = QtWidgets.QWidget(self) - - user_cred_widget = QtWidgets.QWidget(login_widget) - - url_label = QtWidgets.QLabel("URL:", user_cred_widget) - - url_widget = QtWidgets.QWidget(user_cred_widget) - - url_input = PlaceholderLineEdit(url_widget) - url_input.setPlaceholderText("< https://ayon.server.com >") - - url_preview = QtWidgets.QLineEdit(url_widget) - url_preview.setReadOnly(True) - url_preview.setObjectName("LikeDisabledInput") - - url_edit_btn = PressHoverButton(user_cred_widget) - url_edit_btn.setIcon(edit_icon) - url_edit_btn.setObjectName("PasswordBtn") - - url_layout = QtWidgets.QHBoxLayout(url_widget) - url_layout.setContentsMargins(0, 0, 0, 0) - url_layout.addWidget(url_input, 1) - url_layout.addWidget(url_preview, 1) - - # --- URL separator --- - url_cred_sep = QtWidgets.QFrame(self) - url_cred_sep.setObjectName("Separator") - url_cred_sep.setMinimumHeight(2) - url_cred_sep.setMaximumHeight(2) - - # --- Login page --- - username_label = QtWidgets.QLabel("Username:", user_cred_widget) - - username_widget = QtWidgets.QWidget(user_cred_widget) - - username_input = PlaceholderLineEdit(username_widget) - username_input.setPlaceholderText("< Artist >") - - username_preview = QtWidgets.QLineEdit(username_widget) - username_preview.setReadOnly(True) - username_preview.setObjectName("LikeDisabledInput") - - username_edit_btn = 
PressHoverButton(user_cred_widget) - username_edit_btn.setIcon(edit_icon) - username_edit_btn.setObjectName("PasswordBtn") - - username_layout = QtWidgets.QHBoxLayout(username_widget) - username_layout.setContentsMargins(0, 0, 0, 0) - username_layout.addWidget(username_input, 1) - username_layout.addWidget(username_preview, 1) - - password_label = QtWidgets.QLabel("Password:", user_cred_widget) - password_input = PlaceholderLineEdit(user_cred_widget) - password_input.setPlaceholderText("< *********** >") - password_input.setEchoMode(PlaceholderLineEdit.Password) - - api_label = QtWidgets.QLabel("API key:", user_cred_widget) - api_preview = QtWidgets.QLineEdit(user_cred_widget) - api_preview.setReadOnly(True) - api_preview.setObjectName("LikeDisabledInput") - - show_password_icon_path = get_resource_path("eye.png") - show_password_icon = QtGui.QIcon(show_password_icon_path) - show_password_btn = PressHoverButton(user_cred_widget) - show_password_btn.setObjectName("PasswordBtn") - show_password_btn.setIcon(show_password_icon) - show_password_btn.setFocusPolicy(QtCore.Qt.ClickFocus) - - cred_msg_sep = QtWidgets.QFrame(self) - cred_msg_sep.setObjectName("Separator") - cred_msg_sep.setMinimumHeight(2) - cred_msg_sep.setMaximumHeight(2) - - # --- Credentials inputs --- - user_cred_layout = QtWidgets.QGridLayout(user_cred_widget) - user_cred_layout.setContentsMargins(0, 0, 0, 0) - row = 0 - - user_cred_layout.addWidget(url_label, row, 0, 1, 1) - user_cred_layout.addWidget(url_widget, row, 1, 1, 1) - user_cred_layout.addWidget(url_edit_btn, row, 2, 1, 1) - row += 1 - - user_cred_layout.addWidget(url_cred_sep, row, 0, 1, 3) - row += 1 - - user_cred_layout.addWidget(username_label, row, 0, 1, 1) - user_cred_layout.addWidget(username_widget, row, 1, 1, 1) - user_cred_layout.addWidget(username_edit_btn, row, 2, 2, 1) - row += 1 - - user_cred_layout.addWidget(api_label, row, 0, 1, 1) - user_cred_layout.addWidget(api_preview, row, 1, 1, 1) - row += 1 - - user_cred_layout.addWidget(password_label, row, 0, 1, 1) - user_cred_layout.addWidget(password_input, row, 1, 1, 1) - user_cred_layout.addWidget(show_password_btn, row, 2, 1, 1) - row += 1 - - user_cred_layout.addWidget(cred_msg_sep, row, 0, 1, 3) - row += 1 - - user_cred_layout.setColumnStretch(0, 0) - user_cred_layout.setColumnStretch(1, 1) - user_cred_layout.setColumnStretch(2, 0) - - login_layout = QtWidgets.QVBoxLayout(login_widget) - login_layout.setContentsMargins(0, 0, 0, 0) - login_layout.addWidget(user_cred_widget, 1) - - # --- Messages --- - # Messages for users (e.g. invalid url etc.) 
- message_label = QtWidgets.QLabel(self) - message_label.setWordWrap(True) - message_label.setTextInteractionFlags(QtCore.Qt.TextBrowserInteraction) - - footer_widget = QtWidgets.QWidget(self) - logout_btn = QtWidgets.QPushButton("Logout", footer_widget) - user_message = QtWidgets.QLabel(footer_widget) - login_btn = QtWidgets.QPushButton("Login", footer_widget) - confirm_btn = QtWidgets.QPushButton("Confirm", footer_widget) - - footer_layout = QtWidgets.QHBoxLayout(footer_widget) - footer_layout.setContentsMargins(0, 0, 0, 0) - footer_layout.addWidget(logout_btn, 0) - footer_layout.addWidget(user_message, 1) - footer_layout.addWidget(login_btn, 0) - footer_layout.addWidget(confirm_btn, 0) - - main_layout = QtWidgets.QVBoxLayout(self) - main_layout.addWidget(login_widget, 0) - main_layout.addWidget(message_label, 0) - main_layout.addStretch(1) - main_layout.addWidget(footer_widget, 0) - - url_input.textChanged.connect(self._on_url_change) - url_input.returnPressed.connect(self._on_url_enter_press) - username_input.textChanged.connect(self._on_user_change) - username_input.returnPressed.connect(self._on_username_enter_press) - password_input.returnPressed.connect(self._on_password_enter_press) - show_password_btn.change_state.connect(self._on_show_password) - url_edit_btn.clicked.connect(self._on_url_edit_click) - username_edit_btn.clicked.connect(self._on_username_edit_click) - logout_btn.clicked.connect(self._on_logout_click) - login_btn.clicked.connect(self._on_login_click) - confirm_btn.clicked.connect(self._on_login_click) - - self._message_label = message_label - - self._url_widget = url_widget - self._url_input = url_input - self._url_preview = url_preview - self._url_edit_btn = url_edit_btn - - self._login_widget = login_widget - - self._user_cred_widget = user_cred_widget - self._username_input = username_input - self._username_preview = username_preview - self._username_edit_btn = username_edit_btn - - self._password_label = password_label - self._password_input = password_input - self._show_password_btn = show_password_btn - self._api_label = api_label - self._api_preview = api_preview - - self._logout_btn = logout_btn - self._user_message = user_message - self._login_btn = login_btn - self._confirm_btn = confirm_btn - - self._url_is_valid = None - self._credentials_are_valid = None - self._result = (None, None, None, False) - self._first_show = True - - self._allow_logout = False - self._logged_in = False - self._url_edit_mode = False - self._username_edit_mode = False - - def set_allow_logout(self, allow_logout): - if allow_logout is self._allow_logout: - return - self._allow_logout = allow_logout - - self._update_states_by_edit_mode() - - def _set_logged_in(self, logged_in): - if logged_in is self._logged_in: - return - self._logged_in = logged_in - - self._update_states_by_edit_mode() - - def _set_url_edit_mode(self, edit_mode): - if self._url_edit_mode is not edit_mode: - self._url_edit_mode = edit_mode - self._update_states_by_edit_mode() - - def _set_username_edit_mode(self, edit_mode): - if self._username_edit_mode is not edit_mode: - self._username_edit_mode = edit_mode - self._update_states_by_edit_mode() - - def _get_url_user_edit(self): - url_edit = True - if self._logged_in and not self._url_edit_mode: - url_edit = False - user_edit = url_edit - if not user_edit and self._logged_in and self._username_edit_mode: - user_edit = True - return url_edit, user_edit - - def _update_states_by_edit_mode(self): - url_edit, user_edit = self._get_url_user_edit() - - 
self._url_preview.setVisible(not url_edit) - self._url_input.setVisible(url_edit) - self._url_edit_btn.setVisible(self._allow_logout and not url_edit) - - self._username_preview.setVisible(not user_edit) - self._username_input.setVisible(user_edit) - self._username_edit_btn.setVisible( - self._allow_logout and not user_edit - ) - - self._api_preview.setVisible(not user_edit) - self._api_label.setVisible(not user_edit) - - self._password_label.setVisible(user_edit) - self._show_password_btn.setVisible(user_edit) - self._password_input.setVisible(user_edit) - - self._logout_btn.setVisible(self._allow_logout and self._logged_in) - self._login_btn.setVisible(not self._allow_logout) - self._confirm_btn.setVisible(self._allow_logout) - self._update_login_btn_state(url_edit, user_edit) - - def _update_login_btn_state(self, url_edit=None, user_edit=None, url=None): - if url_edit is None: - url_edit, user_edit = self._get_url_user_edit() - - if url is None: - url = self._url_input.text() - - enabled = bool(url) and (url_edit or user_edit) - - self._login_btn.setEnabled(enabled) - self._confirm_btn.setEnabled(enabled) - - def showEvent(self, event): - super().showEvent(event) - if self._first_show: - self._first_show = False - self._on_first_show() - - def _on_first_show(self): - self.setStyleSheet(load_stylesheet()) - self.resize(self.default_width, self.default_height) - self._center_window() - if self._allow_logout is None: - self.set_allow_logout(False) - - self._update_states_by_edit_mode() - if not self._url_input.text(): - widget = self._url_input - elif not self._username_input.text(): - widget = self._username_input - else: - widget = self._password_input - - self._set_input_focus(widget) - - def result(self): - """Result url and token or login. - - Returns: - Union[Tuple[str, str], Tuple[None, None]]: Url and token used for - login if was successful otherwise are both set to None. 
- """ - return self._result - - def _center_window(self): - """Move window to center of screen.""" - - if hasattr(QtWidgets.QApplication, "desktop"): - desktop = QtWidgets.QApplication.desktop() - screen_idx = desktop.screenNumber(self) - screen_geo = desktop.screenGeometry(screen_idx) - else: - screen = self.screen() - screen_geo = screen.geometry() - - geo = self.frameGeometry() - geo.moveCenter(screen_geo.center()) - if geo.y() < screen_geo.y(): - geo.setY(screen_geo.y()) - self.move(geo.topLeft()) - - def _on_url_change(self, text): - self._update_login_btn_state(url=text) - self._set_url_valid(None) - self._set_credentials_valid(None) - self._url_preview.setText(text) - - def _set_url_valid(self, valid): - if valid is self._url_is_valid: - return - - self._url_is_valid = valid - self._set_input_valid_state(self._url_input, valid) - - def _set_credentials_valid(self, valid): - if self._credentials_are_valid is valid: - return - - self._credentials_are_valid = valid - self._set_input_valid_state(self._username_input, valid) - self._set_input_valid_state(self._password_input, valid) - - def _on_url_enter_press(self): - self._set_input_focus(self._username_input) - - def _on_user_change(self, username): - self._username_preview.setText(username) - - def _on_username_enter_press(self): - self._set_input_focus(self._password_input) - - def _on_password_enter_press(self): - self._login() - - def _on_show_password(self, show_password): - if show_password: - placeholder_text = "< MySecret124 >" - echo_mode = QtWidgets.QLineEdit.Normal - else: - placeholder_text = "< *********** >" - echo_mode = QtWidgets.QLineEdit.Password - - self._password_input.setEchoMode(echo_mode) - self._password_input.setPlaceholderText(placeholder_text) - - def _on_username_edit_click(self): - self._username_edit_mode = True - self._update_states_by_edit_mode() - - def _on_url_edit_click(self): - self._url_edit_mode = True - self._update_states_by_edit_mode() - - def _on_logout_click(self): - dialog = LogoutConfirmDialog(self) - dialog.exec_() - if dialog.get_result(): - self._result = (None, None, None, True) - self.accept() - - def _on_login_click(self): - self._login() - - def _validate_url(self): - """Use url from input to connect and change window state on success. - - Todos: - Threaded check. - """ - - url = self._url_input.text() - valid_url = None - try: - valid_url = validate_url(url) - - except UrlError as exc: - parts = [f"{exc.title}"] - parts.extend(f"- {hint}" for hint in exc.hints) - self._set_message("
".join(parts)) - - except KeyboardInterrupt: - # Reraise KeyboardInterrupt error - raise - - except BaseException: - self._set_unexpected_error() - return - - if valid_url is None: - return False - - self._url_input.setText(valid_url) - return True - - def _login(self): - if ( - not self._login_btn.isEnabled() - and not self._confirm_btn.isEnabled() - ): - return - - if not self._url_is_valid: - self._set_url_valid(self._validate_url()) - - if not self._url_is_valid: - self._set_input_focus(self._url_input) - self._set_credentials_valid(None) - return - - self._clear_message() - - url = self._url_input.text() - username = self._username_input.text() - password = self._password_input.text() - try: - token = login_to_server(url, username, password) - except BaseException: - self._set_unexpected_error() - return - - if token is not None: - self._result = (url, token, username, False) - self.accept() - return - - self._set_credentials_valid(False) - message_lines = ["Invalid credentials"] - if not username.strip(): - message_lines.append("- Username is not filled") - - if not password.strip(): - message_lines.append("- Password is not filled") - - if username and password: - message_lines.append("- Check your credentials") - - self._set_message("
".join(message_lines)) - self._set_input_focus(self._username_input) - - def _set_input_focus(self, widget): - widget.setFocus(QtCore.Qt.MouseFocusReason) - - def _set_input_valid_state(self, widget, valid): - state = "" - if valid is True: - state = "valid" - elif valid is False: - state = "invalid" - set_style_property(widget, "state", state) - - def _set_message(self, message): - self._message_label.setText(message) - - def _clear_message(self): - self._message_label.setText("") - - def _set_unexpected_error(self): - # TODO add traceback somewhere - # - maybe a button to show or copy? - traceback.print_exc() - lines = [ - "Unexpected error happened", - "- Can be caused by wrong url (leading elsewhere)" - ] - self._set_message("
".join(lines)) - - def set_url(self, url): - self._url_preview.setText(url) - self._url_input.setText(url) - self._validate_url() - - def set_username(self, username): - self._username_preview.setText(username) - self._username_input.setText(username) - - def _set_api_key(self, api_key): - if not api_key or len(api_key) < 3: - self._api_preview.setText(api_key or "") - return - - api_key_len = len(api_key) - offset = 6 - if api_key_len < offset: - offset = api_key_len // 2 - api_key = api_key[:offset] + "." * (api_key_len - offset) - - self._api_preview.setText(api_key) - - def set_logged_in( - self, - logged_in, - url=None, - username=None, - api_key=None, - allow_logout=None - ): - if url is not None: - self.set_url(url) - - if username is not None: - self.set_username(username) - - if api_key: - self._set_api_key(api_key) - - if logged_in and allow_logout is None: - allow_logout = True - - self._set_logged_in(logged_in) - - if allow_logout: - self.set_allow_logout(True) - elif allow_logout is False: - self.set_allow_logout(False) - - -def ask_to_login(url=None, username=None, always_on_top=False): - """Ask user to login using Qt dialog. - - Function creates new QApplication if is not created yet. - - Args: - url (Optional[str]): Server url that will be prefilled in dialog. - username (Optional[str]): Username that will be prefilled in dialog. - always_on_top (Optional[bool]): Window will be drawn on top of - other windows. - - Returns: - tuple[str, str, str]: Returns Url, user's token and username. Url can - be changed during dialog lifetime that's why the url is returned. - """ - - app_instance = get_qt_app() - - window = ServerLoginWindow() - if always_on_top: - window.setWindowFlags( - window.windowFlags() - | QtCore.Qt.WindowStaysOnTopHint - ) - - if url: - window.set_url(url) - - if username: - window.set_username(username) - - if not app_instance.startingUp(): - window.exec_() - else: - window.open() - app_instance.exec_() - result = window.result() - out_url, out_token, out_username, _ = result - return out_url, out_token, out_username - - -def change_user(url, username, api_key, always_on_top=False): - """Ask user to login using Qt dialog. - - Function creates new QApplication if is not created yet. - - Args: - url (str): Server url that will be prefilled in dialog. - username (str): Username that will be prefilled in dialog. - api_key (str): API key that will be prefilled in dialog. - always_on_top (Optional[bool]): Window will be drawn on top of - other windows. - - Returns: - Tuple[str, str]: Returns Url and user's token. Url can be changed - during dialog lifetime that's why the url is returned. - """ - - app_instance = get_qt_app() - window = ServerLoginWindow() - if always_on_top: - window.setWindowFlags( - window.windowFlags() - | QtCore.Qt.WindowStaysOnTopHint - ) - window.set_logged_in(True, url, username, api_key) - - if not app_instance.startingUp(): - window.exec_() - else: - window.open() - # This can become main Qt loop. 
Maybe should live elsewhere - app_instance.exec_() - return window.result() diff --git a/common/ayon_common/connection/ui/widgets.py b/common/ayon_common/connection/ui/widgets.py deleted file mode 100644 index 78b73e056d..0000000000 --- a/common/ayon_common/connection/ui/widgets.py +++ /dev/null @@ -1,47 +0,0 @@ -from qtpy import QtWidgets, QtCore, QtGui - - -class PressHoverButton(QtWidgets.QPushButton): - """Keep track about mouse press/release and enter/leave.""" - - _mouse_pressed = False - _mouse_hovered = False - change_state = QtCore.Signal(bool) - - def mousePressEvent(self, event): - self._mouse_pressed = True - self._mouse_hovered = True - self.change_state.emit(self._mouse_hovered) - super(PressHoverButton, self).mousePressEvent(event) - - def mouseReleaseEvent(self, event): - self._mouse_pressed = False - self._mouse_hovered = False - self.change_state.emit(self._mouse_hovered) - super(PressHoverButton, self).mouseReleaseEvent(event) - - def mouseMoveEvent(self, event): - mouse_pos = self.mapFromGlobal(QtGui.QCursor.pos()) - under_mouse = self.rect().contains(mouse_pos) - if under_mouse != self._mouse_hovered: - self._mouse_hovered = under_mouse - self.change_state.emit(self._mouse_hovered) - - super(PressHoverButton, self).mouseMoveEvent(event) - - -class PlaceholderLineEdit(QtWidgets.QLineEdit): - """Set placeholder color of QLineEdit in Qt 5.12 and higher.""" - - def __init__(self, *args, **kwargs): - super(PlaceholderLineEdit, self).__init__(*args, **kwargs) - # Change placeholder palette color - if hasattr(QtGui.QPalette, "PlaceholderText"): - filter_palette = self.palette() - color = QtGui.QColor("#D3D8DE") - color.setAlpha(67) - filter_palette.setColor( - QtGui.QPalette.PlaceholderText, - color - ) - self.setPalette(filter_palette) diff --git a/common/ayon_common/distribution/README.md b/common/ayon_common/distribution/README.md deleted file mode 100644 index f1c34ba722..0000000000 --- a/common/ayon_common/distribution/README.md +++ /dev/null @@ -1,18 +0,0 @@ -Addon distribution tool ------------------------- - -Code in this folder is backend portion of Addon distribution logic for v4 server. - -Each host, module will be separate Addon in the future. Each v4 server could run different set of Addons. - -Client (running on artist machine) will in the first step ask v4 for list of enabled addons. -(It expects list of json documents matching to `addon_distribution.py:AddonInfo` object.) -Next it will compare presence of enabled addon version in local folder. In the case of missing version of -an addon, client will use information in the addon to download (from http/shared local disk/git) zip file -and unzip it. - -Required part of addon distribution will be sharing of dependencies (python libraries, utilities) which is not part of this folder. - -Location of this folder might change in the future as it will be required for a clint to add this folder to sys.path reliably. - -This code needs to be independent on Openpype code as much as possible! 
diff --git a/common/ayon_common/distribution/__init__.py b/common/ayon_common/distribution/__init__.py deleted file mode 100644 index e3c0f0e161..0000000000 --- a/common/ayon_common/distribution/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from .control import AyonDistribution, BundleNotFoundError -from .utils import show_missing_bundle_information - - -__all__ = ( - "AyonDistribution", - "BundleNotFoundError", - "show_missing_bundle_information", -) diff --git a/common/ayon_common/distribution/control.py b/common/ayon_common/distribution/control.py deleted file mode 100644 index 95c221d753..0000000000 --- a/common/ayon_common/distribution/control.py +++ /dev/null @@ -1,1116 +0,0 @@ -import os -import sys -import json -import traceback -import collections -import datetime -import logging -import shutil -import threading -import platform -import attr -from enum import Enum - -import ayon_api - -from ayon_common.utils import is_staging_enabled - -from .utils import ( - get_addons_dir, - get_dependencies_dir, -) -from .downloaders import get_default_download_factory -from .data_structures import ( - AddonInfo, - DependencyItem, - Bundle, -) - -NOT_SET = type("UNKNOWN", (), {"__bool__": lambda: False})() - - -class BundleNotFoundError(Exception): - """Bundle name is defined but is not available on server. - - Args: - bundle_name (str): Name of bundle that was not found. - """ - - def __init__(self, bundle_name): - self.bundle_name = bundle_name - super().__init__( - f"Bundle '{bundle_name}' is not available on server" - ) - - -class UpdateState(Enum): - UNKNOWN = "unknown" - UPDATED = "udated" - OUTDATED = "outdated" - UPDATE_FAILED = "failed" - MISS_SOURCE_FILES = "miss_source_files" - - -class DistributeTransferProgress: - """Progress of single source item in 'DistributionItem'. - - The item is to keep track of single source item. - """ - - def __init__(self): - self._transfer_progress = ayon_api.TransferProgress() - self._started = False - self._failed = False - self._fail_reason = None - self._unzip_started = False - self._unzip_finished = False - self._hash_check_started = False - self._hash_check_finished = False - - def set_started(self): - """Call when source distribution starts.""" - - self._started = True - - def set_failed(self, reason): - """Set source distribution as failed. - - Args: - reason (str): Error message why the transfer failed. - """ - - self._failed = True - self._fail_reason = reason - - def set_hash_check_started(self): - """Call just before hash check starts.""" - - self._hash_check_started = True - - def set_hash_check_finished(self): - """Call just after hash check finishes.""" - - self._hash_check_finished = True - - def set_unzip_started(self): - """Call just before unzip starts.""" - - self._unzip_started = True - - def set_unzip_finished(self): - """Call just after unzip finishes.""" - - self._unzip_finished = True - - @property - def is_running(self): - """Source distribution is in progress. - - Returns: - bool: Transfer is in progress. - """ - - return bool( - self._started - and not self._failed - and not self._hash_check_finished - ) - - @property - def transfer_progress(self): - """Source file 'download' progress tracker. - - Returns: - ayon_api.TransferProgress.: Content download progress. 
- """ - - return self._transfer_progress - - @property - def started(self): - return self._started - - @property - def hash_check_started(self): - return self._hash_check_started - - @property - def hash_check_finished(self): - return self._has_check_finished - - @property - def unzip_started(self): - return self._unzip_started - - @property - def unzip_finished(self): - return self._unzip_finished - - @property - def failed(self): - return self._failed or self._transfer_progress.failed - - @property - def fail_reason(self): - return self._fail_reason or self._transfer_progress.fail_reason - - -class DistributionItem: - """Distribution item with sources and target directories. - - Distribution item can be an addon or dependency package. Distribution item - can be already distributed and don't need any progression. The item keeps - track of the progress. The reason is to be able to use the distribution - items as source data for UI without implementing the same logic. - - Distribution is "state" based. Distribution can be 'UPDATED' or 'OUTDATED' - at the initialization. If item is 'UPDATED' the distribution is skipped - and 'OUTDATED' will trigger the distribution process. - - Because the distribution may have multiple sources each source has own - progress item. - - Args: - state (UpdateState): Initial state (UpdateState.UPDATED or - UpdateState.OUTDATED). - unzip_dirpath (str): Path to directory where zip is downloaded. - download_dirpath (str): Path to directory where file is unzipped. - file_hash (str): Hash of file for validation. - factory (DownloadFactory): Downloaders factory object. - sources (List[SourceInfo]): Possible sources to receive the - distribution item. - downloader_data (Dict[str, Any]): More information for downloaders. - item_label (str): Label used in log outputs (and in UI). - logger (logging.Logger): Logger object. - """ - - def __init__( - self, - state, - unzip_dirpath, - download_dirpath, - file_hash, - factory, - sources, - downloader_data, - item_label, - logger=None, - ): - if logger is None: - logger = logging.getLogger(self.__class__.__name__) - self.log = logger - self.state = state - self.unzip_dirpath = unzip_dirpath - self.download_dirpath = download_dirpath - self.file_hash = file_hash - self.factory = factory - self.sources = [ - (source, DistributeTransferProgress()) - for source in sources - ] - self.downloader_data = downloader_data - self.item_label = item_label - - self._need_distribution = state != UpdateState.UPDATED - self._current_source_progress = None - self._used_source_progress = None - self._used_source = None - self._dist_started = False - self._dist_finished = False - - self._error_msg = None - self._error_detail = None - - @property - def need_distribution(self): - """Need distribution based on initial state. - - Returns: - bool: Need distribution. - """ - - return self._need_distribution - - @property - def current_source_progress(self): - """Currently processed source progress object. - - Returns: - Union[DistributeTransferProgress, None]: Transfer progress or None. - """ - - return self._current_source_progress - - @property - def used_source_progress(self): - """Transfer progress that successfully distributed the item. - - Returns: - Union[DistributeTransferProgress, None]: Transfer progress or None. - """ - - return self._used_source_progress - - @property - def used_source(self): - """Data of source item. - - Returns: - Union[Dict[str, Any], None]: SourceInfo data or None. 
- """ - - return self._used_source - - @property - def error_message(self): - """Reason why distribution item failed. - - Returns: - Union[str, None]: Error message. - """ - - return self._error_msg - - @property - def error_detail(self): - """Detailed reason why distribution item failed. - - Returns: - Union[str, None]: Detailed information (maybe traceback). - """ - - return self._error_detail - - def _distribute(self): - if not self.sources: - message = ( - f"{self.item_label}: Don't have" - " any sources to download from." - ) - self.log.error(message) - self._error_msg = message - self.state = UpdateState.MISS_SOURCE_FILES - return - - download_dirpath = self.download_dirpath - unzip_dirpath = self.unzip_dirpath - for source, source_progress in self.sources: - self._current_source_progress = source_progress - source_progress.set_started() - - # Remove directory if exists - if os.path.isdir(unzip_dirpath): - self.log.debug(f"Cleaning {unzip_dirpath}") - shutil.rmtree(unzip_dirpath) - - # Create directory - os.makedirs(unzip_dirpath) - if not os.path.isdir(download_dirpath): - os.makedirs(download_dirpath) - - try: - downloader = self.factory.get_downloader(source.type) - except Exception: - message = f"Unknown downloader {source.type}" - source_progress.set_failed(message) - self.log.warning(message, exc_info=True) - continue - - source_data = attr.asdict(source) - cleanup_args = ( - source_data, - download_dirpath, - self.downloader_data - ) - - try: - zip_filepath = downloader.download( - source_data, - download_dirpath, - self.downloader_data, - source_progress.transfer_progress, - ) - except Exception: - message = "Failed to download source" - source_progress.set_failed(message) - self.log.warning( - f"{self.item_label}: {message}", - exc_info=True - ) - downloader.cleanup(*cleanup_args) - continue - - source_progress.set_hash_check_started() - try: - downloader.check_hash(zip_filepath, self.file_hash) - except Exception: - message = "File hash does not match" - source_progress.set_failed(message) - self.log.warning( - f"{self.item_label}: {message}", - exc_info=True - ) - downloader.cleanup(*cleanup_args) - continue - - source_progress.set_hash_check_finished() - source_progress.set_unzip_started() - try: - downloader.unzip(zip_filepath, unzip_dirpath) - except Exception: - message = "Couldn't unzip source file" - source_progress.set_failed(message) - self.log.warning( - f"{self.item_label}: {message}", - exc_info=True - ) - downloader.cleanup(*cleanup_args) - continue - - source_progress.set_unzip_finished() - downloader.cleanup(*cleanup_args) - self.state = UpdateState.UPDATED - self._used_source = source_data - break - - last_progress = self._current_source_progress - self._current_source_progress = None - if self.state == UpdateState.UPDATED: - self._used_source_progress = last_progress - self.log.info(f"{self.item_label}: Distributed") - return - - self.log.error(f"{self.item_label}: Failed to distribute") - self._error_msg = "Failed to receive or install source files" - - def distribute(self): - """Execute distribution logic.""" - - if not self.need_distribution or self._dist_started: - return - - self._dist_started = True - try: - if self.state == UpdateState.OUTDATED: - self._distribute() - - except Exception as exc: - self.state = UpdateState.UPDATE_FAILED - self._error_msg = str(exc) - self._error_detail = "".join( - traceback.format_exception(*sys.exc_info()) - ) - self.log.error( - f"{self.item_label}: Distibution filed", - exc_info=True - ) - - finally: - 
self._dist_finished = True - if self.state == UpdateState.OUTDATED: - self.state = UpdateState.UPDATE_FAILED - self._error_msg = "Distribution failed" - - if ( - self.state != UpdateState.UPDATED - and self.unzip_dirpath - and os.path.isdir(self.unzip_dirpath) - ): - self.log.debug(f"Cleaning {self.unzip_dirpath}") - shutil.rmtree(self.unzip_dirpath) - - -class AyonDistribution: - """Distribution control. - - Receive information from server what addons and dependency packages - should be available locally and prepare/validate their distribution. - - Arguments are available for testing of the class. - - Args: - addon_dirpath (Optional[str]): Where addons will be stored. - dependency_dirpath (Optional[str]): Where dependencies will be stored. - dist_factory (Optional[DownloadFactory]): Factory which cares about - downloading of items based on source type. - addons_info (Optional[list[dict[str, Any]]): List of prepared - addons' info. - dependency_packages_info (Optional[list[dict[str, Any]]): Info - about packages from server. - bundles_info (Optional[Dict[str, Any]]): Info about - bundles. - bundle_name (Optional[str]): Name of bundle to use. If not passed - an environment variable 'AYON_BUNDLE_NAME' is checked for value. - When both are not available the bundle is defined by 'use_staging' - value. - use_staging (Optional[bool]): Use staging versions of an addon. - If not passed, 'is_staging_enabled' is used as default value. - """ - - def __init__( - self, - addon_dirpath=None, - dependency_dirpath=None, - dist_factory=None, - addons_info=NOT_SET, - dependency_packages_info=NOT_SET, - bundles_info=NOT_SET, - bundle_name=NOT_SET, - use_staging=None - ): - self._log = None - - self._dist_started = False - self._dist_finished = False - - self._addons_dirpath = addon_dirpath or get_addons_dir() - self._dependency_dirpath = dependency_dirpath or get_dependencies_dir() - self._dist_factory = ( - dist_factory or get_default_download_factory() - ) - - if bundle_name is NOT_SET: - bundle_name = os.environ.get("AYON_BUNDLE_NAME", NOT_SET) - - # Raw addons data from server - self._addons_info = addons_info - # Prepared data as Addon objects - self._addon_items = NOT_SET - # Distrubtion items of addons - # - only those addons and versions that should be distributed - self._addon_dist_items = NOT_SET - - # Raw dependency packages data from server - self._dependency_packages_info = dependency_packages_info - # Prepared dependency packages as objects - self._dependency_packages_items = NOT_SET - # Dependency package item that should be used - self._dependency_package_item = NOT_SET - # Distribution item of dependency package - self._dependency_dist_item = NOT_SET - - # Raw bundles data from server - self._bundles_info = bundles_info - # Bundles as objects - self._bundle_items = NOT_SET - - # Bundle that should be used in production - self._production_bundle = NOT_SET - # Bundle that should be used in staging - self._staging_bundle = NOT_SET - # Boolean that defines if staging bundle should be used - self._use_staging = use_staging - - # Specific bundle name should be used - self._bundle_name = bundle_name - # Final bundle that will be used - self._bundle = NOT_SET - - @property - def use_staging(self): - """Staging version of a bundle should be used. - - This value is completely ignored if specific bundle name should - be used. - - Returns: - bool: True if staging version should be used. 
- """ - - if self._use_staging is None: - self._use_staging = is_staging_enabled() - return self._use_staging - - @property - def log(self): - """Helper to access logger. - - Returns: - logging.Logger: Logger instance. - """ - if self._log is None: - self._log = logging.getLogger(self.__class__.__name__) - return self._log - - @property - def bundles_info(self): - """ - - Returns: - dict[str, dict[str, Any]]: Bundles information from server. - """ - - if self._bundles_info is NOT_SET: - self._bundles_info = ayon_api.get_bundles() - return self._bundles_info - - @property - def bundle_items(self): - """ - - Returns: - list[Bundle]: List of bundles info. - """ - - if self._bundle_items is NOT_SET: - self._bundle_items = [ - Bundle.from_dict(info) - for info in self.bundles_info["bundles"] - ] - return self._bundle_items - - def _prepare_production_staging_bundles(self): - production_bundle = None - staging_bundle = None - for bundle in self.bundle_items: - if bundle.is_production: - production_bundle = bundle - if bundle.is_staging: - staging_bundle = bundle - self._production_bundle = production_bundle - self._staging_bundle = staging_bundle - - @property - def production_bundle(self): - """ - Returns: - Union[Bundle, None]: Bundle that should be used in production. - """ - - if self._production_bundle is NOT_SET: - self._prepare_production_staging_bundles() - return self._production_bundle - - @property - def staging_bundle(self): - """ - Returns: - Union[Bundle, None]: Bundle that should be used in staging. - """ - - if self._staging_bundle is NOT_SET: - self._prepare_production_staging_bundles() - return self._staging_bundle - - @property - def bundle_to_use(self): - """Bundle that will be used for distribution. - - Bundle that should be used can be affected by 'bundle_name' - or 'use_staging'. - - Returns: - Union[Bundle, None]: Bundle that will be used for distribution - or None. - - Raises: - BundleNotFoundError: When bundle name to use is defined - but is not available on server. - """ - - if self._bundle is NOT_SET: - if self._bundle_name is not NOT_SET: - bundle = next( - ( - bundle - for bundle in self.bundle_items - if bundle.name == self._bundle_name - ), - None - ) - if bundle is None: - raise BundleNotFoundError(self._bundle_name) - - self._bundle = bundle - elif self.use_staging: - self._bundle = self.staging_bundle - else: - self._bundle = self.production_bundle - return self._bundle - - @property - def bundle_name_to_use(self): - bundle = self.bundle_to_use - return None if bundle is None else bundle.name - - @property - def addons_info(self): - """Server information about available addons. - - Returns: - Dict[str, dict[str, Any]: Addon info by addon name. - """ - - if self._addons_info is NOT_SET: - server_info = ayon_api.get_addons_info(details=True) - self._addons_info = server_info["addons"] - return self._addons_info - - @property - def addon_items(self): - """Information about available addons on server. - - Addons may require distribution of files. For those addons will be - created 'DistributionItem' handling distribution itself. - - Returns: - Dict[str, AddonInfo]: Addon info object by addon name. - """ - - if self._addon_items is NOT_SET: - addons_info = {} - for addon in self.addons_info: - addon_info = AddonInfo.from_dict(addon) - addons_info[addon_info.name] = addon_info - self._addon_items = addons_info - return self._addon_items - - @property - def dependency_packages_info(self): - """Server information about available dependency packages. 
- - Notes: - For testing purposes it is possible to pass dependency packages - information to '__init__'. - - Returns: - list[dict[str, Any]]: Dependency packages information. - """ - - if self._dependency_packages_info is NOT_SET: - self._dependency_packages_info = ( - ayon_api.get_dependency_packages())["packages"] - return self._dependency_packages_info - - @property - def dependency_packages_items(self): - """Dependency packages as objects. - - Returns: - dict[str, DependencyItem]: Dependency packages as objects by name. - """ - - if self._dependency_packages_items is NOT_SET: - dependenc_package_items = {} - for item in self.dependency_packages_info: - item = DependencyItem.from_dict(item) - dependenc_package_items[item.name] = item - self._dependency_packages_items = dependenc_package_items - return self._dependency_packages_items - - @property - def dependency_package_item(self): - """Dependency package item that should be used by bundle. - - Returns: - Union[None, Dict[str, Any]]: None if bundle does not have - specified dependency package. - """ - - if self._dependency_package_item is NOT_SET: - dependency_package_item = None - bundle = self.bundle_to_use - if bundle is not None: - package_name = bundle.dependency_packages.get( - platform.system().lower() - ) - dependency_package_item = self.dependency_packages_items.get( - package_name) - self._dependency_package_item = dependency_package_item - return self._dependency_package_item - - def _prepare_current_addon_dist_items(self): - addons_metadata = self.get_addons_metadata() - output = [] - addon_versions = {} - bundle = self.bundle_to_use - if bundle is not None: - addon_versions = bundle.addon_versions - for addon_name, addon_item in self.addon_items.items(): - addon_version = addon_versions.get(addon_name) - # Addon is not in bundle -> Skip - if addon_version is None: - continue - - addon_version_item = addon_item.versions.get(addon_version) - # Addon version is not available in addons info - # - TODO handle this case (raise error, skip, store, report, ...) - if addon_version_item is None: - print( - f"Version '{addon_version}' of addon '{addon_name}'" - " is not available on server." - ) - continue - - if not addon_version_item.require_distribution: - continue - full_name = addon_version_item.full_name - addon_dest = os.path.join(self._addons_dirpath, full_name) - self.log.debug(f"Checking {full_name} in {addon_dest}") - addon_in_metadata = ( - addon_name in addons_metadata - and addon_version_item.version in addons_metadata[addon_name] - ) - if addon_in_metadata and os.path.isdir(addon_dest): - self.log.debug( - f"Addon version folder {addon_dest} already exists." 
- ) - state = UpdateState.UPDATED - - else: - state = UpdateState.OUTDATED - - downloader_data = { - "type": "addon", - "name": addon_name, - "version": addon_version - } - - dist_item = DistributionItem( - state, - addon_dest, - addon_dest, - addon_version_item.hash, - self._dist_factory, - list(addon_version_item.sources), - downloader_data, - full_name, - self.log - ) - output.append({ - "dist_item": dist_item, - "addon_name": addon_name, - "addon_version": addon_version, - "addon_item": addon_item, - "addon_version_item": addon_version_item, - }) - return output - - def _prepare_dependency_progress(self): - package = self.dependency_package_item - if package is None: - return None - - metadata = self.get_dependency_metadata() - downloader_data = { - "type": "dependency_package", - "name": package.name, - "platform": package.platform_name - } - zip_dir = package_dir = os.path.join( - self._dependency_dirpath, package.name - ) - self.log.debug(f"Checking {package.name} in {package_dir}") - - if not os.path.isdir(package_dir) or package.name not in metadata: - state = UpdateState.OUTDATED - else: - state = UpdateState.UPDATED - - return DistributionItem( - state, - zip_dir, - package_dir, - package.checksum, - self._dist_factory, - package.sources, - downloader_data, - package.name, - self.log, - ) - - def get_addon_dist_items(self): - """Addon distribution items. - - These items describe source files required by addon to be available on - machine. Each item may have 0-n source information from where can be - obtained. If file is already available it's state will be 'UPDATED'. - - Example output: - [ - { - "dist_item": DistributionItem, - "addon_name": str, - "addon_version": str, - "addon_item": AddonInfo, - "addon_version_item": AddonVersionInfo - }, { - ... - } - ] - - Returns: - list[dict[str, Any]]: Distribution items with addon version item. - """ - - if self._addon_dist_items is NOT_SET: - self._addon_dist_items = ( - self._prepare_current_addon_dist_items()) - return self._addon_dist_items - - def get_dependency_dist_item(self): - """Dependency package distribution item. - - Item describe source files required by server to be available on - machine. Item may have 0-n source information from where can be - obtained. If file is already available it's state will be 'UPDATED'. - - 'None' is returned if server does not have defined any dependency - package. - - Returns: - Union[None, DistributionItem]: Dependency item or None if server - does not have specified any dependency package. - """ - - if self._dependency_dist_item is NOT_SET: - self._dependency_dist_item = self._prepare_dependency_progress() - return self._dependency_dist_item - - def get_dependency_metadata_filepath(self): - """Path to distribution metadata file. - - Metadata contain information about distributed packages, used source, - expected file hash and time when file was distributed. - - Returns: - str: Path to a file where dependency package metadata are stored. - """ - - return os.path.join(self._dependency_dirpath, "dependency.json") - - def get_addons_metadata_filepath(self): - """Path to addons metadata file. - - Metadata contain information about distributed addons, used sources, - expected file hashes and time when files were distributed. - - Returns: - str: Path to a file where addons metadata are stored. - """ - - return os.path.join(self._addons_dirpath, "addons.json") - - def read_metadata_file(self, filepath, default_value=None): - """Read json file from path. 
- - Method creates the file when does not exist with default value. - - Args: - filepath (str): Path to json file. - default_value (Union[Dict[str, Any], List[Any], None]): Default - value if the file is not available (or valid). - - Returns: - Union[Dict[str, Any], List[Any]]: Value from file. - """ - - if default_value is None: - default_value = {} - - if not os.path.exists(filepath): - return default_value - - try: - with open(filepath, "r") as stream: - data = json.load(stream) - except ValueError: - data = default_value - return data - - def save_metadata_file(self, filepath, data): - """Store data to json file. - - Method creates the file when does not exist. - - Args: - filepath (str): Path to json file. - data (Union[Dict[str, Any], List[Any]]): Data to store into file. - """ - - if not os.path.exists(filepath): - dirpath = os.path.dirname(filepath) - if not os.path.exists(dirpath): - os.makedirs(dirpath) - with open(filepath, "w") as stream: - json.dump(data, stream, indent=4) - - def get_dependency_metadata(self): - filepath = self.get_dependency_metadata_filepath() - return self.read_metadata_file(filepath, {}) - - def update_dependency_metadata(self, package_name, data): - dependency_metadata = self.get_dependency_metadata() - dependency_metadata[package_name] = data - filepath = self.get_dependency_metadata_filepath() - self.save_metadata_file(filepath, dependency_metadata) - - def get_addons_metadata(self): - filepath = self.get_addons_metadata_filepath() - return self.read_metadata_file(filepath, {}) - - def update_addons_metadata(self, addons_information): - if not addons_information: - return - addons_metadata = self.get_addons_metadata() - for addon_name, version_value in addons_information.items(): - if addon_name not in addons_metadata: - addons_metadata[addon_name] = {} - for addon_version, version_data in version_value.items(): - addons_metadata[addon_name][addon_version] = version_data - - filepath = self.get_addons_metadata_filepath() - self.save_metadata_file(filepath, addons_metadata) - - def finish_distribution(self): - """Store metadata about distributed items.""" - - self._dist_finished = True - stored_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S") - dependency_dist_item = self.get_dependency_dist_item() - if ( - dependency_dist_item is not None - and dependency_dist_item.need_distribution - and dependency_dist_item.state == UpdateState.UPDATED - ): - package = self.dependency_package - source = dependency_dist_item.used_source - if source is not None: - data = { - "source": source, - "file_hash": dependency_dist_item.file_hash, - "distributed_dt": stored_time - } - self.update_dependency_metadata(package.name, data) - - addons_info = {} - for item in self.get_addon_dist_items(): - dist_item = item["dist_item"] - if ( - not dist_item.need_distribution - or dist_item.state != UpdateState.UPDATED - ): - continue - - source_data = dist_item.used_source - if not source_data: - continue - - addon_name = item["addon_name"] - addon_version = item["addon_version"] - addons_info.setdefault(addon_name, {}) - addons_info[addon_name][addon_version] = { - "source": source_data, - "file_hash": dist_item.file_hash, - "distributed_dt": stored_time - } - - self.update_addons_metadata(addons_info) - - def get_all_distribution_items(self): - """Distribution items required by server. - - Items contain dependency package item and all addons that are enabled - and have distribution requirements. - - Items can be already available on machine. 
- - Returns: - List[DistributionItem]: Distribution items required by server. - """ - - output = [ - item["dist_item"] - for item in self.get_addon_dist_items() - ] - dependency_dist_item = self.get_dependency_dist_item() - if dependency_dist_item is not None: - output.insert(0, dependency_dist_item) - - return output - - def distribute(self, threaded=False): - """Distribute all missing items. - - Method will try to distribute all items that are required by server. - - This method does not handle failed items. To validate the result call - 'validate_distribution' when this method finishes. - - Args: - threaded (bool): Distribute items in threads. - """ - - if self._dist_started: - raise RuntimeError("Distribution already started") - self._dist_started = True - threads = collections.deque() - for item in self.get_all_distribution_items(): - if threaded: - threads.append(threading.Thread(target=item.distribute)) - else: - item.distribute() - - while threads: - thread = threads.popleft() - if thread.is_alive(): - threads.append(thread) - else: - thread.join() - - self.finish_distribution() - - def validate_distribution(self): - """Check if all required distribution items are distributed. - - Raises: - RuntimeError: Any of items is not available. - """ - - invalid = [] - dependency_package = self.get_dependency_dist_item() - if ( - dependency_package is not None - and dependency_package.state != UpdateState.UPDATED - ): - invalid.append("Dependency package") - - for item in self.get_addon_dist_items(): - dist_item = item["dist_item"] - if dist_item.state != UpdateState.UPDATED: - invalid.append(item["addon_name"]) - - if not invalid: - return - - raise RuntimeError("Failed to distribute {}".format( - ", ".join([f'"{item}"' for item in invalid]) - )) - - def get_sys_paths(self): - """Get all paths to python packages that should be added to python. - - These paths lead to addon directories and python dependencies in - dependency package. - - Todos: - Add dependency package directory to output. ATM is not structure of - dependency package 100% defined. - - Returns: - List[str]: Paths that should be added to 'sys.path' and - 'PYTHONPATH'. - """ - - output = [] - for item in self.get_all_distribution_items(): - if item.state != UpdateState.UPDATED: - continue - unzip_dirpath = item.unzip_dirpath - if unzip_dirpath and os.path.exists(unzip_dirpath): - output.append(unzip_dirpath) - return output - - -def cli(*args): - raise NotImplementedError diff --git a/common/ayon_common/distribution/data_structures.py b/common/ayon_common/distribution/data_structures.py deleted file mode 100644 index aa93d4ed71..0000000000 --- a/common/ayon_common/distribution/data_structures.py +++ /dev/null @@ -1,265 +0,0 @@ -import attr -from enum import Enum - - -class UrlType(Enum): - HTTP = "http" - GIT = "git" - FILESYSTEM = "filesystem" - SERVER = "server" - - -@attr.s -class MultiPlatformValue(object): - windows = attr.ib(default=None) - linux = attr.ib(default=None) - darwin = attr.ib(default=None) - - -@attr.s -class SourceInfo(object): - type = attr.ib() - - -@attr.s -class LocalSourceInfo(SourceInfo): - path = attr.ib(default=attr.Factory(MultiPlatformValue)) - - -@attr.s -class WebSourceInfo(SourceInfo): - url = attr.ib(default=None) - headers = attr.ib(default=None) - filename = attr.ib(default=None) - - -@attr.s -class ServerSourceInfo(SourceInfo): - filename = attr.ib(default=None) - path = attr.ib(default=None) - - -def convert_source(source): - """Create source object from data information. 
- - Args: - source (Dict[str, any]): Information about source. - - Returns: - Union[None, SourceInfo]: Object with source information if type is - known. - """ - - source_type = source.get("type") - if not source_type: - return None - - if source_type == UrlType.FILESYSTEM.value: - return LocalSourceInfo( - type=source_type, - path=source["path"] - ) - - if source_type == UrlType.HTTP.value: - url = source["path"] - return WebSourceInfo( - type=source_type, - url=url, - headers=source.get("headers"), - filename=source.get("filename") - ) - - if source_type == UrlType.SERVER.value: - return ServerSourceInfo( - type=source_type, - filename=source.get("filename"), - path=source.get("path") - ) - - -def prepare_sources(src_sources): - sources = [] - unknown_sources = [] - for source in (src_sources or []): - dependency_source = convert_source(source) - if dependency_source is not None: - sources.append(dependency_source) - else: - print(f"Unknown source {source.get('type')}") - unknown_sources.append(source) - return sources, unknown_sources - - -@attr.s -class VersionData(object): - version_data = attr.ib(default=None) - - -@attr.s -class AddonVersionInfo(object): - version = attr.ib() - full_name = attr.ib() - title = attr.ib(default=None) - require_distribution = attr.ib(default=False) - sources = attr.ib(default=attr.Factory(list)) - unknown_sources = attr.ib(default=attr.Factory(list)) - hash = attr.ib(default=None) - - @classmethod - def from_dict( - cls, addon_name, addon_title, addon_version, version_data - ): - """Addon version info. - - Args: - addon_name (str): Name of addon. - addon_title (str): Title of addon. - addon_version (str): Version of addon. - version_data (dict[str, Any]): Addon version information from - server. - - Returns: - AddonVersionInfo: Addon version info. - """ - - full_name = f"{addon_name}_{addon_version}" - title = f"{addon_title} {addon_version}" - - source_info = version_data.get("clientSourceInfo") - require_distribution = source_info is not None - sources, unknown_sources = prepare_sources(source_info) - - return cls( - version=addon_version, - full_name=full_name, - require_distribution=require_distribution, - sources=sources, - unknown_sources=unknown_sources, - hash=version_data.get("hash"), - title=title - ) - - -@attr.s -class AddonInfo(object): - """Object matching json payload from Server""" - name = attr.ib() - versions = attr.ib(default=attr.Factory(dict)) - title = attr.ib(default=None) - description = attr.ib(default=None) - license = attr.ib(default=None) - authors = attr.ib(default=None) - - @classmethod - def from_dict(cls, data): - """Addon info by available versions. - - Args: - data (dict[str, Any]): Addon information from server. Should - contain information about every version under 'versions'. - - Returns: - AddonInfo: Addon info with available versions. 
- """ - - # server payload contains info about all versions - addon_name = data["name"] - title = data.get("title") or addon_name - - src_versions = data.get("versions") or {} - dst_versions = { - addon_version: AddonVersionInfo.from_dict( - addon_name, title, addon_version, version_data - ) - for addon_version, version_data in src_versions.items() - } - return cls( - name=addon_name, - versions=dst_versions, - description=data.get("description"), - title=data.get("title") or addon_name, - license=data.get("license"), - authors=data.get("authors") - ) - - -@attr.s -class DependencyItem(object): - """Object matching payload from Server about single dependency package""" - name = attr.ib() - platform_name = attr.ib() - checksum = attr.ib() - sources = attr.ib(default=attr.Factory(list)) - unknown_sources = attr.ib(default=attr.Factory(list)) - source_addons = attr.ib(default=attr.Factory(dict)) - python_modules = attr.ib(default=attr.Factory(dict)) - - @classmethod - def from_dict(cls, package): - src_sources = package.get("sources") or [] - for source in src_sources: - if source.get("type") == "server" and not source.get("filename"): - source["filename"] = package["filename"] - sources, unknown_sources = prepare_sources(src_sources) - return cls( - name=package["filename"], - platform_name=package["platform"], - sources=sources, - unknown_sources=unknown_sources, - checksum=package["checksum"], - source_addons=package["sourceAddons"], - python_modules=package["pythonModules"] - ) - - -@attr.s -class Installer: - version = attr.ib() - filename = attr.ib() - platform_name = attr.ib() - size = attr.ib() - checksum = attr.ib() - python_version = attr.ib() - python_modules = attr.ib() - sources = attr.ib(default=attr.Factory(list)) - unknown_sources = attr.ib(default=attr.Factory(list)) - - @classmethod - def from_dict(cls, installer_info): - sources, unknown_sources = prepare_sources( - installer_info.get("sources")) - - return cls( - version=installer_info["version"], - filename=installer_info["filename"], - platform_name=installer_info["platform"], - size=installer_info["size"], - sources=sources, - unknown_sources=unknown_sources, - checksum=installer_info["checksum"], - python_version=installer_info["pythonVersion"], - python_modules=installer_info["pythonModules"] - ) - - -@attr.s -class Bundle: - """Class representing bundle information.""" - - name = attr.ib() - installer_version = attr.ib() - addon_versions = attr.ib(default=attr.Factory(dict)) - dependency_packages = attr.ib(default=attr.Factory(dict)) - is_production = attr.ib(default=False) - is_staging = attr.ib(default=False) - - @classmethod - def from_dict(cls, data): - return cls( - name=data["name"], - installer_version=data.get("installerVersion"), - addon_versions=data.get("addons", {}), - dependency_packages=data.get("dependencyPackages", {}), - is_production=data["isProduction"], - is_staging=data["isStaging"], - ) diff --git a/common/ayon_common/distribution/downloaders.py b/common/ayon_common/distribution/downloaders.py deleted file mode 100644 index 23280176c3..0000000000 --- a/common/ayon_common/distribution/downloaders.py +++ /dev/null @@ -1,250 +0,0 @@ -import os -import logging -import platform -from abc import ABCMeta, abstractmethod - -import ayon_api - -from .file_handler import RemoteFileHandler -from .data_structures import UrlType - - -class SourceDownloader(metaclass=ABCMeta): - """Abstract class for source downloader.""" - - log = logging.getLogger(__name__) - - @classmethod - @abstractmethod - def 
download(cls, source, destination_dir, data, transfer_progress): - """Returns url of downloaded addon zip file. - - Tranfer progress can be ignored, in that case file transfer won't - be shown as 0-100% but as 'running'. First step should be to set - destination content size and then add transferred chunk sizes. - - Args: - source (dict): {type:"http", "url":"https://} ...} - destination_dir (str): local folder to unzip - data (dict): More information about download content. Always have - 'type' key in. - transfer_progress (ayon_api.TransferProgress): Progress of - transferred (copy/download) content. - - Returns: - (str) local path to addon zip file - """ - - pass - - @classmethod - @abstractmethod - def cleanup(cls, source, destination_dir, data): - """Cleanup files when distribution finishes or crashes. - - Cleanup e.g. temporary files (downloaded zip) or other related stuff - to downloader. - """ - - pass - - @classmethod - def check_hash(cls, addon_path, addon_hash, hash_type="sha256"): - """Compares 'hash' of downloaded 'addon_url' file. - - Args: - addon_path (str): Local path to addon file. - addon_hash (str): Hash of downloaded file. - hash_type (str): Type of hash. - - Raises: - ValueError if hashes doesn't match - """ - - if not os.path.exists(addon_path): - raise ValueError(f"{addon_path} doesn't exist.") - if not RemoteFileHandler.check_integrity( - addon_path, addon_hash, hash_type=hash_type - ): - raise ValueError(f"{addon_path} doesn't match expected hash.") - - @classmethod - def unzip(cls, addon_zip_path, destination_dir): - """Unzips local 'addon_zip_path' to 'destination'. - - Args: - addon_zip_path (str): local path to addon zip file - destination_dir (str): local folder to unzip - """ - - RemoteFileHandler.unzip(addon_zip_path, destination_dir) - os.remove(addon_zip_path) - - -class OSDownloader(SourceDownloader): - """Downloader using files from file drive.""" - - @classmethod - def download(cls, source, destination_dir, data, transfer_progress): - # OS doesn't need to download, unzip directly - addon_url = source["path"].get(platform.system().lower()) - if not os.path.exists(addon_url): - raise ValueError(f"{addon_url} is not accessible") - return addon_url - - @classmethod - def cleanup(cls, source, destination_dir, data): - # Nothing to do - download does not copy anything - pass - - -class HTTPDownloader(SourceDownloader): - """Downloader using http or https protocol.""" - - CHUNK_SIZE = 100000 - - @staticmethod - def get_filename(source): - source_url = source["url"] - filename = source.get("filename") - if not filename: - filename = os.path.basename(source_url) - basename, ext = os.path.splitext(filename) - allowed_exts = set(RemoteFileHandler.IMPLEMENTED_ZIP_FORMATS) - if ext.lower().lstrip(".") not in allowed_exts: - filename = f"{basename}.zip" - return filename - - @classmethod - def download(cls, source, destination_dir, data, transfer_progress): - source_url = source["url"] - cls.log.debug(f"Downloading {source_url} to {destination_dir}") - headers = source.get("headers") - filename = cls.get_filename(source) - - # TODO use transfer progress - RemoteFileHandler.download_url( - source_url, - destination_dir, - filename, - headers=headers - ) - - return os.path.join(destination_dir, filename) - - @classmethod - def cleanup(cls, source, destination_dir, data): - filename = cls.get_filename(source) - filepath = os.path.join(destination_dir, filename) - if os.path.exists(filepath) and os.path.isfile(filepath): - os.remove(filepath) - - -class 
AyonServerDownloader(SourceDownloader): - """Downloads static resource file from AYON Server. - - Expects filled env var AYON_SERVER_URL. - """ - - CHUNK_SIZE = 8192 - - @classmethod - def download(cls, source, destination_dir, data, transfer_progress): - path = source["path"] - filename = source["filename"] - if path and not filename: - filename = path.split("/")[-1] - - cls.log.debug(f"Downloading {filename} to {destination_dir}") - - _, ext = os.path.splitext(filename) - ext = ext.lower().lstrip(".") - valid_exts = set(RemoteFileHandler.IMPLEMENTED_ZIP_FORMATS) - if ext not in valid_exts: - raise ValueError(( - f"Invalid file extension \"{ext}\"." - f" Expected {', '.join(valid_exts)}" - )) - - if path: - filepath = os.path.join(destination_dir, filename) - return ayon_api.download_file( - path, - filepath, - chunk_size=cls.CHUNK_SIZE, - progress=transfer_progress - ) - - # dst_filepath = os.path.join(destination_dir, filename) - if data["type"] == "dependency_package": - return ayon_api.download_dependency_package( - data["name"], - destination_dir, - filename, - platform_name=data["platform"], - chunk_size=cls.CHUNK_SIZE, - progress=transfer_progress - ) - - if data["type"] == "addon": - return ayon_api.download_addon_private_file( - data["name"], - data["version"], - filename, - destination_dir, - chunk_size=cls.CHUNK_SIZE, - progress=transfer_progress - ) - - raise ValueError(f"Unknown type to download \"{data['type']}\"") - - @classmethod - def cleanup(cls, source, destination_dir, data): - filename = source["filename"] - filepath = os.path.join(destination_dir, filename) - if os.path.exists(filepath) and os.path.isfile(filepath): - os.remove(filepath) - - -class DownloadFactory: - """Factory for downloaders.""" - - def __init__(self): - self._downloaders = {} - - def register_format(self, downloader_type, downloader): - """Register downloader for download type. - - Args: - downloader_type (UrlType): Type of source. - downloader (SourceDownloader): Downloader which cares about - download, hash check and unzipping. - """ - - self._downloaders[downloader_type.value] = downloader - - def get_downloader(self, downloader_type): - """Registered downloader for type. - - Args: - downloader_type (UrlType): Type of source. - - Returns: - SourceDownloader: Downloader object which should care about file - distribution. - - Raises: - ValueError: If type does not have registered downloader. 
- """ - - if downloader := self._downloaders.get(downloader_type): - return downloader() - raise ValueError(f"{downloader_type} not implemented") - - -def get_default_download_factory(): - download_factory = DownloadFactory() - download_factory.register_format(UrlType.FILESYSTEM, OSDownloader) - download_factory.register_format(UrlType.HTTP, HTTPDownloader) - download_factory.register_format(UrlType.SERVER, AyonServerDownloader) - return download_factory diff --git a/common/ayon_common/distribution/tests/test_addon_distributtion.py b/common/ayon_common/distribution/tests/test_addon_distributtion.py deleted file mode 100644 index 3e7bd1bc6a..0000000000 --- a/common/ayon_common/distribution/tests/test_addon_distributtion.py +++ /dev/null @@ -1,248 +0,0 @@ -import os -import sys -import copy -import tempfile - - -import attr -import pytest - -current_dir = os.path.dirname(os.path.abspath(__file__)) -root_dir = os.path.abspath(os.path.join(current_dir, "..", "..", "..", "..")) -sys.path.append(root_dir) - -from common.ayon_common.distribution.downloaders import ( - DownloadFactory, - OSDownloader, - HTTPDownloader, -) -from common.ayon_common.distribution.control import ( - AyonDistribution, - UpdateState, -) -from common.ayon_common.distribution.data_structures import ( - AddonInfo, - UrlType, -) - - -@pytest.fixture -def download_factory(): - addon_downloader = DownloadFactory() - addon_downloader.register_format(UrlType.FILESYSTEM, OSDownloader) - addon_downloader.register_format(UrlType.HTTP, HTTPDownloader) - - yield addon_downloader - - -@pytest.fixture -def http_downloader(download_factory): - yield download_factory.get_downloader(UrlType.HTTP.value) - - -@pytest.fixture -def temp_folder(): - yield tempfile.mkdtemp(prefix="ayon_test_") - - -@pytest.fixture -def sample_bundles(): - yield { - "bundles": [ - { - "name": "TestBundle", - "createdAt": "2023-06-29T00:00:00.0+00:00", - "installerVersion": None, - "addons": { - "slack": "1.0.0" - }, - "dependencyPackages": {}, - "isProduction": True, - "isStaging": False - } - ], - "productionBundle": "TestBundle", - "stagingBundle": None - } - - -@pytest.fixture -def sample_addon_info(): - yield { - "name": "slack", - "title": "Slack addon", - "versions": { - "1.0.0": { - "hasSettings": True, - "hasSiteSettings": False, - "clientPyproject": { - "tool": { - "poetry": { - "dependencies": { - "nxtools": "^1.6", - "orjson": "^3.6.7", - "typer": "^0.4.1", - "email-validator": "^1.1.3", - "python": "^3.10", - "fastapi": "^0.73.0" - } - } - } - }, - "clientSourceInfo": [ - { - "type": "http", - "path": "https://drive.google.com/file/d/1TcuV8c2OV8CcbPeWi7lxOdqWsEqQNPYy/view?usp=sharing", # noqa - "filename": "dummy.zip" - }, - { - "type": "filesystem", - "path": { - "windows": "P:/sources/some_file.zip", - "linux": "/mnt/srv/sources/some_file.zip", - "darwin": "/Volumes/srv/sources/some_file.zip" - } - } - ], - "frontendScopes": { - "project": { - "sidebar": "hierarchy", - } - }, - "hash": "4be25eb6215e91e5894d3c5475aeb1e379d081d3f5b43b4ee15b0891cf5f5658" # noqa - } - }, - "description": "" - } - - -def test_register(printer): - download_factory = DownloadFactory() - - assert len(download_factory._downloaders) == 0, "Contains registered" - - download_factory.register_format(UrlType.FILESYSTEM, OSDownloader) - assert len(download_factory._downloaders) == 1, "Should contain one" - - -def test_get_downloader(printer, download_factory): - assert download_factory.get_downloader(UrlType.FILESYSTEM.value), "Should find" # noqa - - with 
pytest.raises(ValueError): - download_factory.get_downloader("unknown"), "Shouldn't find" - - -def test_addon_info(printer, sample_addon_info): - """Tests parsing of expected payload from v4 server into AadonInfo.""" - valid_minimum = { - "name": "slack", - "versions": { - "1.0.0": { - "clientSourceInfo": [ - { - "type": "filesystem", - "path": { - "windows": "P:/sources/some_file.zip", - "linux": "/mnt/srv/sources/some_file.zip", - "darwin": "/Volumes/srv/sources/some_file.zip" - } - } - ] - } - } - } - - assert AddonInfo.from_dict(valid_minimum), "Missing required fields" - - addon = AddonInfo.from_dict(sample_addon_info) - assert addon, "Should be created" - assert addon.name == "slack", "Incorrect name" - assert "1.0.0" in addon.versions, "Version is not in versions" - - with pytest.raises(TypeError): - assert addon["name"], "Dict approach not implemented" - - addon_as_dict = attr.asdict(addon) - assert addon_as_dict["name"], "Dict approach should work" - - -def _get_dist_item(dist_items, name, version): - final_dist_info = next( - ( - dist_info - for dist_info in dist_items - if ( - dist_info["addon_name"] == name - and dist_info["addon_version"] == version - ) - ), - {} - ) - return final_dist_info["dist_item"] - - -def test_update_addon_state( - printer, sample_addon_info, temp_folder, download_factory, sample_bundles -): - """Tests possible cases of addon update.""" - - addon_version = list(sample_addon_info["versions"])[0] - broken_addon_info = copy.deepcopy(sample_addon_info) - - # Cause crash because of invalid hash - broken_addon_info["versions"][addon_version]["hash"] = "brokenhash" - distribution = AyonDistribution( - addon_dirpath=temp_folder, - dependency_dirpath=temp_folder, - dist_factory=download_factory, - addons_info=[broken_addon_info], - dependency_packages_info=[], - bundles_info=sample_bundles - ) - distribution.distribute() - dist_items = distribution.get_addon_dist_items() - slack_dist_item = _get_dist_item( - dist_items, - sample_addon_info["name"], - addon_version - ) - slack_state = slack_dist_item.state - assert slack_state == UpdateState.UPDATE_FAILED, ( - "Update should have failed because of wrong hash") - - # Fix cache and validate if was updated - distribution = AyonDistribution( - addon_dirpath=temp_folder, - dependency_dirpath=temp_folder, - dist_factory=download_factory, - addons_info=[sample_addon_info], - dependency_packages_info=[], - bundles_info=sample_bundles - ) - distribution.distribute() - dist_items = distribution.get_addon_dist_items() - slack_dist_item = _get_dist_item( - dist_items, - sample_addon_info["name"], - addon_version - ) - assert slack_dist_item.state == UpdateState.UPDATED, ( - "Addon should have been updated") - - # Is UPDATED without calling distribute - distribution = AyonDistribution( - addon_dirpath=temp_folder, - dependency_dirpath=temp_folder, - dist_factory=download_factory, - addons_info=[sample_addon_info], - dependency_packages_info=[], - bundles_info=sample_bundles - ) - dist_items = distribution.get_addon_dist_items() - slack_dist_item = _get_dist_item( - dist_items, - sample_addon_info["name"], - addon_version - ) - assert slack_dist_item.state == UpdateState.UPDATED, ( - "Addon should already exist") diff --git a/common/ayon_common/distribution/ui/missing_bundle_window.py b/common/ayon_common/distribution/ui/missing_bundle_window.py deleted file mode 100644 index ae7a6a2976..0000000000 --- a/common/ayon_common/distribution/ui/missing_bundle_window.py +++ /dev/null @@ -1,146 +0,0 @@ -import sys - -from qtpy import 
QtWidgets, QtGui - -from ayon_common import is_staging_enabled -from ayon_common.resources import ( - get_icon_path, - load_stylesheet, -) -from ayon_common.ui_utils import get_qt_app - - -class MissingBundleWindow(QtWidgets.QDialog): - default_width = 410 - default_height = 170 - - def __init__( - self, url=None, bundle_name=None, use_staging=None, parent=None - ): - super().__init__(parent) - - icon_path = get_icon_path() - icon = QtGui.QIcon(icon_path) - self.setWindowIcon(icon) - self.setWindowTitle("Missing Bundle") - - self._url = url - self._bundle_name = bundle_name - self._use_staging = use_staging - self._first_show = True - - info_label = QtWidgets.QLabel("", self) - info_label.setWordWrap(True) - - btns_widget = QtWidgets.QWidget(self) - confirm_btn = QtWidgets.QPushButton("Exit", btns_widget) - - btns_layout = QtWidgets.QHBoxLayout(btns_widget) - btns_layout.setContentsMargins(0, 0, 0, 0) - btns_layout.addStretch(1) - btns_layout.addWidget(confirm_btn, 0) - - main_layout = QtWidgets.QVBoxLayout(self) - main_layout.addWidget(info_label, 0) - main_layout.addStretch(1) - main_layout.addWidget(btns_widget, 0) - - confirm_btn.clicked.connect(self._on_confirm_click) - - self._info_label = info_label - self._confirm_btn = confirm_btn - - self._update_label() - - def set_url(self, url): - if url == self._url: - return - self._url = url - self._update_label() - - def set_bundle_name(self, bundle_name): - if bundle_name == self._bundle_name: - return - self._bundle_name = bundle_name - self._update_label() - - def set_use_staging(self, use_staging): - if self._use_staging == use_staging: - return - self._use_staging = use_staging - self._update_label() - - def showEvent(self, event): - super().showEvent(event) - if self._first_show: - self._first_show = False - self._on_first_show() - self._recalculate_sizes() - - def resizeEvent(self, event): - super().resizeEvent(event) - self._recalculate_sizes() - - def _recalculate_sizes(self): - hint = self._confirm_btn.sizeHint() - new_width = max((hint.width(), hint.height() * 3)) - self._confirm_btn.setMinimumWidth(new_width) - - def _on_first_show(self): - self.setStyleSheet(load_stylesheet()) - self.resize(self.default_width, self.default_height) - - def _on_confirm_click(self): - self.accept() - self.close() - - def _update_label(self): - self._info_label.setText(self._get_label()) - - def _get_label(self): - url_part = f" {self._url}" if self._url else "" - - if self._bundle_name: - return ( - f"Requested release bundle {self._bundle_name}" - f" is not available on server{url_part}." - "
<br/><br/>
Try to restart AYON desktop launcher. Please" - " contact your administrator if the issue persists." - ) - mode = "staging" if self._use_staging else "production" - return ( - f"No release bundle is set as {mode} on the AYON" - f" server{url_part} so there is nothing to launch." - "
<br/><br/>
Please contact your administrator" - " to resolve the issue." - ) - - -def main(): - """Show message that server does not have set bundle to use. - - It is possible to pass url as argument to show it in the message. To use - this feature, pass `--url ` as argument to this script. - """ - - url = None - bundle_name = None - if "--url" in sys.argv: - url_index = sys.argv.index("--url") + 1 - if url_index < len(sys.argv): - url = sys.argv[url_index] - - if "--bundle" in sys.argv: - bundle_index = sys.argv.index("--bundle") + 1 - if bundle_index < len(sys.argv): - bundle_name = sys.argv[bundle_index] - - use_staging = is_staging_enabled() - app = get_qt_app() - window = MissingBundleWindow(url, bundle_name, use_staging) - window.show() - app.exec_() - - -if __name__ == "__main__": - main() diff --git a/common/ayon_common/distribution/utils.py b/common/ayon_common/distribution/utils.py deleted file mode 100644 index a8b755707a..0000000000 --- a/common/ayon_common/distribution/utils.py +++ /dev/null @@ -1,90 +0,0 @@ -import os -import subprocess - -from ayon_common.utils import get_ayon_appdirs, get_ayon_launch_args - - -def get_local_dir(*subdirs): - """Get product directory in user's home directory. - - Each user on machine have own local directory where are downloaded updates, - addons etc. - - Returns: - str: Path to product local directory. - """ - - if not subdirs: - raise ValueError("Must fill dir_name if nothing else provided!") - - local_dir = get_ayon_appdirs(*subdirs) - if not os.path.isdir(local_dir): - try: - os.makedirs(local_dir) - except Exception: # TODO fix exception - raise RuntimeError(f"Cannot create {local_dir}") - - return local_dir - - -def get_addons_dir(): - """Directory where addon packages are stored. - - Path to addons is defined using python module 'appdirs' which - - The path is stored into environment variable 'AYON_ADDONS_DIR'. - Value of environment variable can be overriden, but we highly recommended - to use that option only for development purposes. - - Returns: - str: Path to directory where addons should be downloaded. - """ - - addons_dir = os.environ.get("AYON_ADDONS_DIR") - if not addons_dir: - addons_dir = get_local_dir("addons") - os.environ["AYON_ADDONS_DIR"] = addons_dir - return addons_dir - - -def get_dependencies_dir(): - """Directory where dependency packages are stored. - - Path to addons is defined using python module 'appdirs' which - - The path is stored into environment variable 'AYON_DEPENDENCIES_DIR'. - Value of environment variable can be overriden, but we highly recommended - to use that option only for development purposes. - - Returns: - str: Path to directory where dependency packages should be downloaded. - """ - - dependencies_dir = os.environ.get("AYON_DEPENDENCIES_DIR") - if not dependencies_dir: - dependencies_dir = get_local_dir("dependency_packages") - os.environ["AYON_DEPENDENCIES_DIR"] = dependencies_dir - return dependencies_dir - - -def show_missing_bundle_information(url, bundle_name=None): - """Show missing bundle information window. - - This function should be called when server does not have set bundle for - production or staging, or when bundle that should be used is not available - on server. - - Using subprocess to show the dialog. Is blocking and is waiting until - dialog is closed. - - Args: - url (str): Server url where bundle is not set. - bundle_name (Optional[str]): Name of bundle that was not found. 
- """ - - ui_dir = os.path.join(os.path.dirname(__file__), "ui") - script_path = os.path.join(ui_dir, "missing_bundle_window.py") - args = get_ayon_launch_args(script_path, "--skip-bootstrap", "--url", url) - if bundle_name: - args.extend(["--bundle", bundle_name]) - subprocess.call(args) diff --git a/common/ayon_common/resources/AYON.icns b/common/ayon_common/resources/AYON.icns deleted file mode 100644 index 2ec66cf3e0..0000000000 Binary files a/common/ayon_common/resources/AYON.icns and /dev/null differ diff --git a/common/ayon_common/resources/AYON.ico b/common/ayon_common/resources/AYON.ico deleted file mode 100644 index e0ec3292f8..0000000000 Binary files a/common/ayon_common/resources/AYON.ico and /dev/null differ diff --git a/common/ayon_common/resources/AYON.png b/common/ayon_common/resources/AYON.png deleted file mode 100644 index ed13aeea52..0000000000 Binary files a/common/ayon_common/resources/AYON.png and /dev/null differ diff --git a/common/ayon_common/resources/AYON_staging.png b/common/ayon_common/resources/AYON_staging.png deleted file mode 100644 index 75dadfd56c..0000000000 Binary files a/common/ayon_common/resources/AYON_staging.png and /dev/null differ diff --git a/common/ayon_common/resources/__init__.py b/common/ayon_common/resources/__init__.py deleted file mode 100644 index 2b516feff3..0000000000 --- a/common/ayon_common/resources/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -import os - -from ayon_common.utils import is_staging_enabled - -RESOURCES_DIR = os.path.dirname(os.path.abspath(__file__)) - - -def get_resource_path(*args): - path_items = list(args) - path_items.insert(0, RESOURCES_DIR) - return os.path.sep.join(path_items) - - -def get_icon_path(): - if is_staging_enabled(): - return get_resource_path("AYON_staging.png") - return get_resource_path("AYON.png") - - -def load_stylesheet(): - stylesheet_path = get_resource_path("stylesheet.css") - - with open(stylesheet_path, "r") as stream: - content = stream.read() - return content diff --git a/common/ayon_common/resources/edit.png b/common/ayon_common/resources/edit.png deleted file mode 100644 index a5a07998a6..0000000000 Binary files a/common/ayon_common/resources/edit.png and /dev/null differ diff --git a/common/ayon_common/resources/eye.png b/common/ayon_common/resources/eye.png deleted file mode 100644 index 5a683e2974..0000000000 Binary files a/common/ayon_common/resources/eye.png and /dev/null differ diff --git a/common/ayon_common/resources/stylesheet.css b/common/ayon_common/resources/stylesheet.css deleted file mode 100644 index 01e664e9e8..0000000000 --- a/common/ayon_common/resources/stylesheet.css +++ /dev/null @@ -1,84 +0,0 @@ -* { - font-size: 10pt; - font-family: "Noto Sans"; - font-weight: 450; - outline: none; -} - -QWidget { - color: #D3D8DE; - background: #2C313A; - border-radius: 0px; -} - -QWidget:disabled { - color: #5b6779; -} - -QLabel { - background: transparent; -} - -QPushButton { - text-align:center center; - border: 0px solid transparent; - border-radius: 0.2em; - padding: 3px 5px 3px 5px; - background: #434a56; -} - -QPushButton:hover { - background: rgba(168, 175, 189, 0.3); - color: #F0F2F5; -} - -QPushButton:pressed {} - -QPushButton:disabled { - background: #434a56; -} - -QLineEdit { - border: 1px solid #373D48; - border-radius: 0.3em; - background: #21252B; - padding: 0.1em; -} - -QLineEdit:disabled { - background: #2C313A; -} -QLineEdit:hover { - border-color: rgba(168, 175, 189, .3); -} -QLineEdit:focus { - border-color: rgb(92, 173, 214); -} - 
-QLineEdit[state="invalid"] { - border-color: #AA5050; -} - -#Separator { - background: rgba(75, 83, 98, 127); -} - -#PasswordBtn { - border: none; - padding: 0.1em; - background: transparent; -} - -#PasswordBtn:hover { - background: #434a56; -} - -#LikeDisabledInput { - background: #2C313A; -} -#LikeDisabledInput:hover { - border-color: #373D48; -} -#LikeDisabledInput:focus { - border-color: #373D48; -} diff --git a/common/ayon_common/ui_utils.py b/common/ayon_common/ui_utils.py deleted file mode 100644 index a3894d0d9c..0000000000 --- a/common/ayon_common/ui_utils.py +++ /dev/null @@ -1,36 +0,0 @@ -import sys -from qtpy import QtWidgets, QtCore - - -def set_style_property(widget, property_name, property_value): - """Set widget's property that may affect style. - - Style of widget is polished if current property value is different. - """ - - cur_value = widget.property(property_name) - if cur_value == property_value: - return - widget.setProperty(property_name, property_value) - widget.style().polish(widget) - - -def get_qt_app(): - app = QtWidgets.QApplication.instance() - if app is not None: - return app - - for attr_name in ( - "AA_EnableHighDpiScaling", - "AA_UseHighDpiPixmaps", - ): - attr = getattr(QtCore.Qt, attr_name, None) - if attr is not None: - QtWidgets.QApplication.setAttribute(attr) - - if hasattr(QtWidgets.QApplication, "setHighDpiScaleFactorRoundingPolicy"): - QtWidgets.QApplication.setHighDpiScaleFactorRoundingPolicy( - QtCore.Qt.HighDpiScaleFactorRoundingPolicy.PassThrough - ) - - return QtWidgets.QApplication(sys.argv) diff --git a/common/ayon_common/utils.py b/common/ayon_common/utils.py deleted file mode 100644 index c0d0c7c0b1..0000000000 --- a/common/ayon_common/utils.py +++ /dev/null @@ -1,90 +0,0 @@ -import os -import sys -import appdirs - -IS_BUILT_APPLICATION = getattr(sys, "frozen", False) - - -def get_ayon_appdirs(*args): - """Local app data directory of AYON client. - - Args: - *args (Iterable[str]): Subdirectories/files in local app data dir. - - Returns: - str: Path to directory/file in local app data dir. - """ - - return os.path.join( - appdirs.user_data_dir("AYON", "Ynput"), - *args - ) - - -def is_staging_enabled(): - """Check if staging is enabled. - - Returns: - bool: True if staging is enabled. - """ - - return os.getenv("AYON_USE_STAGING") == "1" - - -def _create_local_site_id(): - """Create a local site identifier. - - Returns: - str: Randomly generated site id. - """ - - from coolname import generate_slug - - new_id = generate_slug(3) - - print("Created local site id \"{}\"".format(new_id)) - - return new_id - - -def get_local_site_id(): - """Get local site identifier. - - Site id is created if does not exist yet. - - Returns: - str: Site id. - """ - - # used for background syncing - site_id = os.environ.get("AYON_SITE_ID") - if site_id: - return site_id - - site_id_path = get_ayon_appdirs("site_id") - if os.path.exists(site_id_path): - with open(site_id_path, "r") as stream: - site_id = stream.read() - - if not site_id: - site_id = _create_local_site_id() - with open(site_id_path, "w") as stream: - stream.write(site_id) - return site_id - - -def get_ayon_launch_args(*args): - """Launch arguments that can be used to launch ayon process. - - Args: - *args (str): Additional arguments. - - Returns: - list[str]: Launch arguments. 
- """ - - output = [sys.executable] - if not IS_BUILT_APPLICATION: - output.append(sys.argv[0]) - output.extend(args) - return output diff --git a/openpype/action.py b/openpype/action.py deleted file mode 100644 index 6114c65fd4..0000000000 --- a/openpype/action.py +++ /dev/null @@ -1,135 +0,0 @@ -import warnings -import functools -import pyblish.api - - -class ActionDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." - ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", ActionDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=ActionDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -@deprecated("openpype.pipeline.publish.get_errored_instances_from_context") -def get_errored_instances_from_context(context, plugin=None): - """ - Deprecated: - Since 3.14.* will be removed in 3.16.* or later. - """ - - from openpype.pipeline.publish import get_errored_instances_from_context - - return get_errored_instances_from_context(context, plugin=plugin) - - -@deprecated("openpype.pipeline.publish.get_errored_plugins_from_context") -def get_errored_plugins_from_data(context): - """ - Deprecated: - Since 3.14.* will be removed in 3.16.* or later. - """ - - from openpype.pipeline.publish import get_errored_plugins_from_context - - return get_errored_plugins_from_context(context) - - -class RepairAction(pyblish.api.Action): - """Repairs the action - - To process the repairing this requires a static `repair(instance)` method - is available on the plugin. - - Deprecated: - 'RepairAction' and 'RepairContextAction' were moved to - 'openpype.pipeline.publish' please change you imports. - There is no "reasonable" way hot mark these classes as deprecated - to show warning of wrong import. Deprecated since 3.14.* will be - removed in 3.16.* - - """ - label = "Repair" - on = "failed" # This action is only available on a failed plug-in - icon = "wrench" # Icon from Awesome Icon - - def process(self, context, plugin): - - if not hasattr(plugin, "repair"): - raise RuntimeError("Plug-in does not have repair method.") - - # Get the errored instances - self.log.info("Finding failed instances..") - errored_instances = get_errored_instances_from_context(context, - plugin=plugin) - for instance in errored_instances: - plugin.repair(instance) - - -class RepairContextAction(pyblish.api.Action): - """Repairs the action - - To process the repairing this requires a static `repair(instance)` method - is available on the plugin. - - Deprecated: - 'RepairAction' and 'RepairContextAction' were moved to - 'openpype.pipeline.publish' please change you imports. - There is no "reasonable" way hot mark these classes as deprecated - to show warning of wrong import. 
Deprecated since 3.14.* will be - removed in 3.16.* - - """ - label = "Repair" - on = "failed" # This action is only available on a failed plug-in - - def process(self, context, plugin): - - if not hasattr(plugin, "repair"): - raise RuntimeError("Plug-in does not have repair method.") - - # Get the errored instances - self.log.info("Finding failed instances..") - errored_plugins = get_errored_plugins_from_data(context) - - # Apply pyblish.logic to get the instances for the plug-in - if plugin in errored_plugins: - self.log.info("Attempting fix ...") - plugin.repair(context) diff --git a/openpype/hosts/blender/api/ops.py b/openpype/hosts/blender/api/ops.py index 2c1b7245cd..62d7987b47 100644 --- a/openpype/hosts/blender/api/ops.py +++ b/openpype/hosts/blender/api/ops.py @@ -20,6 +20,7 @@ from openpype.pipeline import get_current_asset_name, get_current_task_name from openpype.tools.utils import host_tools from .workio import OpenFileCacher +from . import pipeline PREVIEW_COLLECTIONS: Dict = dict() @@ -344,6 +345,26 @@ class LaunchWorkFiles(LaunchQtApp): self._window.refresh() +class SetFrameRange(bpy.types.Operator): + bl_idname = "wm.ayon_set_frame_range" + bl_label = "Set Frame Range" + + def execute(self, context): + data = pipeline.get_asset_data() + pipeline.set_frame_range(data) + return {"FINISHED"} + + +class SetResolution(bpy.types.Operator): + bl_idname = "wm.ayon_set_resolution" + bl_label = "Set Resolution" + + def execute(self, context): + data = pipeline.get_asset_data() + pipeline.set_resolution(data) + return {"FINISHED"} + + class TOPBAR_MT_avalon(bpy.types.Menu): """Avalon menu.""" @@ -381,9 +402,11 @@ class TOPBAR_MT_avalon(bpy.types.Menu): layout.operator(LaunchManager.bl_idname, text="Manage...") layout.operator(LaunchLibrary.bl_idname, text="Library...") layout.separator() + layout.operator(SetFrameRange.bl_idname, text="Set Frame Range") + layout.operator(SetResolution.bl_idname, text="Set Resolution") + layout.separator() layout.operator(LaunchWorkFiles.bl_idname, text="Work Files...") - # TODO (jasper): maybe add 'Reload Pipeline', 'Set Frame Range' and - # 'Set Resolution'? 
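The Blender refactor above splits the old `set_start_end_frames` into `get_asset_data`, `set_frame_range` and `set_resolution`, so the new menu operators and the `set_frames_startup` / `set_resolution_startup` settings can apply them independently. A condensed sketch of what those two setters do with the asset's data dict follows; it only runs inside Blender's bundled Python, and the function name is hypothetical.

```python
import bpy


def apply_asset_data(data, set_frames=True, set_resolution=True):
    """Push frame range / resolution from an asset's data dict to the scene.

    'data' mirrors the asset document "data" dict used above
    (frameStart, frameEnd, fps, resolutionWidth, resolutionHeight).
    """
    if not data:
        return

    scene = bpy.context.scene

    if set_frames:
        scene.frame_start = int(data.get("frameStart", scene.frame_start))
        scene.frame_end = int(data.get("frameEnd", scene.frame_end))
        fps = data.get("fps")
        if fps:
            # Blender stores fps as an integer plus a divisor (fps_base),
            # so 23.976 becomes fps=24 with fps_base of roughly 1.001.
            scene.render.fps = round(fps)
            scene.render.fps_base = round(fps) / fps

    if set_resolution:
        scene.render.resolution_x = data.get(
            "resolutionWidth", scene.render.resolution_x)
        scene.render.resolution_y = data.get(
            "resolutionHeight", scene.render.resolution_y)
```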
+ # TODO (jasper): maybe add 'Reload Pipeline' def draw_avalon_menu(self, context): @@ -399,6 +422,8 @@ classes = [ LaunchManager, LaunchLibrary, LaunchWorkFiles, + SetFrameRange, + SetResolution, TOPBAR_MT_avalon, ] diff --git a/openpype/hosts/blender/api/pipeline.py b/openpype/hosts/blender/api/pipeline.py index eb696ec184..29339a512c 100644 --- a/openpype/hosts/blender/api/pipeline.py +++ b/openpype/hosts/blender/api/pipeline.py @@ -113,22 +113,21 @@ def message_window(title, message): _process_app_events() -def set_start_end_frames(): +def get_asset_data(): project_name = get_current_project_name() asset_name = get_current_asset_name() asset_doc = get_asset_by_name(project_name, asset_name) + return asset_doc.get("data") + + +def set_frame_range(data): scene = bpy.context.scene # Default scene settings frameStart = scene.frame_start frameEnd = scene.frame_end fps = scene.render.fps / scene.render.fps_base - resolution_x = scene.render.resolution_x - resolution_y = scene.render.resolution_y - - # Check if settings are set - data = asset_doc.get("data") if not data: return @@ -139,26 +138,47 @@ def set_start_end_frames(): frameEnd = data.get("frameEnd") if data.get("fps"): fps = data.get("fps") - if data.get("resolutionWidth"): - resolution_x = data.get("resolutionWidth") - if data.get("resolutionHeight"): - resolution_y = data.get("resolutionHeight") scene.frame_start = frameStart scene.frame_end = frameEnd scene.render.fps = round(fps) scene.render.fps_base = round(fps) / fps + + +def set_resolution(data): + scene = bpy.context.scene + + # Default scene settings + resolution_x = scene.render.resolution_x + resolution_y = scene.render.resolution_y + + if not data: + return + + if data.get("resolutionWidth"): + resolution_x = data.get("resolutionWidth") + if data.get("resolutionHeight"): + resolution_y = data.get("resolutionHeight") + scene.render.resolution_x = resolution_x scene.render.resolution_y = resolution_y def on_new(): - set_start_end_frames() - project = os.environ.get("AVALON_PROJECT") - settings = get_project_settings(project) + settings = get_project_settings(project).get("blender") - unit_scale_settings = settings.get("blender").get("unit_scale_settings") + set_resolution_startup = settings.get("set_resolution_startup") + set_frames_startup = settings.get("set_frames_startup") + + data = get_asset_data() + + if set_resolution_startup: + set_resolution(data) + if set_frames_startup: + set_frame_range(data) + + unit_scale_settings = settings.get("unit_scale_settings") unit_scale_enabled = unit_scale_settings.get("enabled") if unit_scale_enabled: unit_scale = unit_scale_settings.get("base_file_unit_scale") @@ -166,12 +186,20 @@ def on_new(): def on_open(): - set_start_end_frames() - project = os.environ.get("AVALON_PROJECT") - settings = get_project_settings(project) + settings = get_project_settings(project).get("blender") - unit_scale_settings = settings.get("blender").get("unit_scale_settings") + set_resolution_startup = settings.get("set_resolution_startup") + set_frames_startup = settings.get("set_frames_startup") + + data = get_asset_data() + + if set_resolution_startup: + set_resolution(data) + if set_frames_startup: + set_frame_range(data) + + unit_scale_settings = settings.get("unit_scale_settings") unit_scale_enabled = unit_scale_settings.get("enabled") apply_on_opening = unit_scale_settings.get("apply_on_opening") if unit_scale_enabled and apply_on_opening: diff --git a/openpype/hosts/blender/plugins/publish/collect_review.py 
b/openpype/hosts/blender/plugins/publish/collect_review.py index 82b3ca11eb..6459927015 100644 --- a/openpype/hosts/blender/plugins/publish/collect_review.py +++ b/openpype/hosts/blender/plugins/publish/collect_review.py @@ -29,6 +29,8 @@ class CollectReview(pyblish.api.InstancePlugin): camera = cameras[0].name self.log.debug(f"camera: {camera}") + focal_length = cameras[0].data.lens + # get isolate objects list from meshes instance members . isolate_objects = [ obj @@ -40,6 +42,10 @@ class CollectReview(pyblish.api.InstancePlugin): task = instance.context.data["task"] + # Store focal length in `burninDataMembers` + burninData = instance.data.setdefault("burninDataMembers", {}) + burninData["focalLength"] = focal_length + instance.data.update({ "subset": f"{task}Review", "review_camera": camera, diff --git a/openpype/hosts/blender/plugins/publish/extract_abc.py b/openpype/hosts/blender/plugins/publish/extract_abc.py index 1cab9d225b..f4babc94d3 100644 --- a/openpype/hosts/blender/plugins/publish/extract_abc.py +++ b/openpype/hosts/blender/plugins/publish/extract_abc.py @@ -22,8 +22,6 @@ class ExtractABC(publish.Extractor): filepath = os.path.join(stagingdir, filename) context = bpy.context - scene = context.scene - view_layer = context.view_layer # Perform extraction self.log.info("Performing extraction..") @@ -31,24 +29,25 @@ class ExtractABC(publish.Extractor): plugin.deselect_all() selected = [] - asset_group = None + active = None for obj in instance: obj.select_set(True) selected.append(obj) + # Set as active the asset group if obj.get(AVALON_PROPERTY): - asset_group = obj + active = obj context = plugin.create_blender_context( - active=asset_group, selected=selected) + active=active, selected=selected) - # We export the abc - bpy.ops.wm.alembic_export( - context, - filepath=filepath, - selected=True, - flatten=False - ) + with bpy.context.temp_override(**context): + # We export the abc + bpy.ops.wm.alembic_export( + filepath=filepath, + selected=True, + flatten=False + ) plugin.deselect_all() diff --git a/openpype/hosts/blender/plugins/publish/extract_camera_abc.py b/openpype/hosts/blender/plugins/publish/extract_camera_abc.py new file mode 100644 index 0000000000..a21a59b151 --- /dev/null +++ b/openpype/hosts/blender/plugins/publish/extract_camera_abc.py @@ -0,0 +1,73 @@ +import os + +import bpy + +from openpype.pipeline import publish +from openpype.hosts.blender.api import plugin +from openpype.hosts.blender.api.pipeline import AVALON_PROPERTY + + +class ExtractCameraABC(publish.Extractor): + """Extract camera as ABC.""" + + label = "Extract Camera (ABC)" + hosts = ["blender"] + families = ["camera"] + optional = True + + def process(self, instance): + # Define extract output file path + stagingdir = self.staging_dir(instance) + filename = f"{instance.name}.abc" + filepath = os.path.join(stagingdir, filename) + + context = bpy.context + + # Perform extraction + self.log.info("Performing extraction..") + + plugin.deselect_all() + + selected = [] + active = None + + asset_group = None + for obj in instance: + if obj.get(AVALON_PROPERTY): + asset_group = obj + break + assert asset_group, "No asset group found" + + # Need to cast to list because children is a tuple + selected = list(asset_group.children) + active = selected[0] + + for obj in selected: + obj.select_set(True) + + context = plugin.create_blender_context( + active=active, selected=selected) + + with bpy.context.temp_override(**context): + # We export the abc + bpy.ops.wm.alembic_export( + filepath=filepath, + 
selected=True, + flatten=True + ) + + plugin.deselect_all() + + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + 'name': 'abc', + 'ext': 'abc', + 'files': filename, + "stagingDir": stagingdir, + } + instance.data["representations"].append(representation) + + self.log.info("Extracted instance '%s' to: %s", + instance.name, representation) diff --git a/openpype/hosts/blender/plugins/publish/extract_camera.py b/openpype/hosts/blender/plugins/publish/extract_camera_fbx.py similarity index 98% rename from openpype/hosts/blender/plugins/publish/extract_camera.py rename to openpype/hosts/blender/plugins/publish/extract_camera_fbx.py index 9fd181825c..315994140e 100644 --- a/openpype/hosts/blender/plugins/publish/extract_camera.py +++ b/openpype/hosts/blender/plugins/publish/extract_camera_fbx.py @@ -9,7 +9,7 @@ from openpype.hosts.blender.api import plugin class ExtractCamera(publish.Extractor): """Extract as the camera as FBX.""" - label = "Extract Camera" + label = "Extract Camera (FBX)" hosts = ["blender"] families = ["camera"] optional = True diff --git a/openpype/hosts/fusion/api/action.py b/openpype/hosts/fusion/api/action.py index 347d552108..66b787c2f1 100644 --- a/openpype/hosts/fusion/api/action.py +++ b/openpype/hosts/fusion/api/action.py @@ -18,8 +18,10 @@ class SelectInvalidAction(pyblish.api.Action): icon = "search" # Icon from Awesome Icon def process(self, context, plugin): - errored_instances = get_errored_instances_from_context(context, - plugin=plugin) + errored_instances = get_errored_instances_from_context( + context, + plugin=plugin, + ) # Get the invalid nodes for the plug-ins self.log.info("Finding invalid nodes..") @@ -51,6 +53,7 @@ class SelectInvalidAction(pyblish.api.Action): names = set() for tool in invalid: flow.Select(tool, True) + comp.SetActiveTool(tool) names.add(tool.Name) self.log.info( "Selecting invalid tools: %s" % ", ".join(sorted(names)) diff --git a/openpype/hosts/harmony/plugins/publish/extract_render.py b/openpype/hosts/harmony/plugins/publish/extract_render.py index 38b09902c1..5825d95a4a 100644 --- a/openpype/hosts/harmony/plugins/publish/extract_render.py +++ b/openpype/hosts/harmony/plugins/publish/extract_render.py @@ -94,15 +94,14 @@ class ExtractRender(pyblish.api.InstancePlugin): # Generate thumbnail. 
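The Blender Alembic extractors above switch from passing an override dictionary as the first positional argument of the operator to the `bpy.context.temp_override()` context manager, the supported approach since Blender 3.2. A stripped-down sketch of that call pattern follows; the override keys are a simplified stand-in for what `plugin.create_blender_context` assembles, and the function only works inside Blender.

```python
import bpy


def export_objects_to_abc(objects, filepath, flatten=False):
    """Export the given objects to an Alembic file via a context override."""
    for obj in objects:
        obj.select_set(True)

    override = {
        "selected_objects": list(objects),
        "active_object": objects[0] if objects else None,
    }
    # temp_override() replaces the old bpy.ops.<op>(override_dict, ...) form.
    with bpy.context.temp_override(**override):
        bpy.ops.wm.alembic_export(
            filepath=filepath,
            selected=True,
            flatten=flatten,
        )

    for obj in objects:
        obj.select_set(False)
```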
thumbnail_path = os.path.join(path, "thumbnail.png") - ffmpeg_path = openpype.lib.get_ffmpeg_tool_path("ffmpeg") - args = [ - ffmpeg_path, + args = openpype.lib.get_ffmpeg_tool_args( + "ffmpeg", "-y", "-i", os.path.join(path, list(collections[0])[0]), "-vf", "scale=300:-1", "-vframes", "1", thumbnail_path - ] + ) process = subprocess.Popen( args, stdout=subprocess.PIPE, diff --git a/openpype/hosts/hiero/plugins/publish/extract_frames.py b/openpype/hosts/hiero/plugins/publish/extract_frames.py index f865d2fb39..803c338766 100644 --- a/openpype/hosts/hiero/plugins/publish/extract_frames.py +++ b/openpype/hosts/hiero/plugins/publish/extract_frames.py @@ -2,7 +2,7 @@ import os import pyblish.api from openpype.lib import ( - get_oiio_tools_path, + get_oiio_tool_args, run_subprocess, ) from openpype.pipeline import publish @@ -18,7 +18,7 @@ class ExtractFrames(publish.Extractor): movie_extensions = ["mov", "mp4"] def process(self, instance): - oiio_tool_path = get_oiio_tools_path() + oiio_tool_args = get_oiio_tool_args("oiiotool") staging_dir = self.staging_dir(instance) output_template = os.path.join(staging_dir, instance.data["name"]) sequence = instance.context.data["activeTimeline"] @@ -36,7 +36,7 @@ class ExtractFrames(publish.Extractor): output_path = output_template output_path += ".{:04d}.{}".format(int(frame), output_ext) - args = [oiio_tool_path] + args = list(oiio_tool_args) ext = os.path.splitext(input_path)[1][1:] if ext in self.movie_extensions: diff --git a/openpype/hosts/houdini/plugins/create/create_arnold_rop.py b/openpype/hosts/houdini/plugins/create/create_arnold_rop.py index bddf26dbd5..ca516619f6 100644 --- a/openpype/hosts/houdini/plugins/create/create_arnold_rop.py +++ b/openpype/hosts/houdini/plugins/create/create_arnold_rop.py @@ -1,5 +1,5 @@ from openpype.hosts.houdini.api import plugin -from openpype.lib import EnumDef +from openpype.lib import EnumDef, BoolDef class CreateArnoldRop(plugin.HoudiniCreator): @@ -24,7 +24,7 @@ class CreateArnoldRop(plugin.HoudiniCreator): # Add chunk size attribute instance_data["chunkSize"] = 1 # Submit for job publishing - instance_data["farm"] = True + instance_data["farm"] = pre_create_data.get("farm") instance = super(CreateArnoldRop, self).create( subset_name, @@ -64,6 +64,9 @@ class CreateArnoldRop(plugin.HoudiniCreator): ] return attrs + [ + BoolDef("farm", + label="Submitting to Farm", + default=True), EnumDef("image_format", image_format_enum, default=self.ext, diff --git a/openpype/hosts/houdini/plugins/create/create_karma_rop.py b/openpype/hosts/houdini/plugins/create/create_karma_rop.py index edfb992e1a..c7a9fe0968 100644 --- a/openpype/hosts/houdini/plugins/create/create_karma_rop.py +++ b/openpype/hosts/houdini/plugins/create/create_karma_rop.py @@ -21,7 +21,7 @@ class CreateKarmaROP(plugin.HoudiniCreator): # Add chunk size attribute instance_data["chunkSize"] = 10 # Submit for job publishing - instance_data["farm"] = True + instance_data["farm"] = pre_create_data.get("farm") instance = super(CreateKarmaROP, self).create( subset_name, @@ -67,6 +67,7 @@ class CreateKarmaROP(plugin.HoudiniCreator): camera = None for node in self.selected_nodes: if node.type().name() == "cam": + camera = node.path() has_camera = pre_create_data.get("cam_res") if has_camera: res_x = node.evalParm("resx") @@ -96,6 +97,9 @@ class CreateKarmaROP(plugin.HoudiniCreator): ] return attrs + [ + BoolDef("farm", + label="Submitting to Farm", + default=True), EnumDef("image_format", image_format_enum, default="exr", diff --git 
a/openpype/hosts/houdini/plugins/create/create_mantra_rop.py b/openpype/hosts/houdini/plugins/create/create_mantra_rop.py index 5ca53e96de..5c29adb33f 100644 --- a/openpype/hosts/houdini/plugins/create/create_mantra_rop.py +++ b/openpype/hosts/houdini/plugins/create/create_mantra_rop.py @@ -21,7 +21,7 @@ class CreateMantraROP(plugin.HoudiniCreator): # Add chunk size attribute instance_data["chunkSize"] = 10 # Submit for job publishing - instance_data["farm"] = True + instance_data["farm"] = pre_create_data.get("farm") instance = super(CreateMantraROP, self).create( subset_name, @@ -76,6 +76,9 @@ class CreateMantraROP(plugin.HoudiniCreator): ] return attrs + [ + BoolDef("farm", + label="Submitting to Farm", + default=True), EnumDef("image_format", image_format_enum, default="exr", diff --git a/openpype/hosts/houdini/plugins/create/create_redshift_rop.py b/openpype/hosts/houdini/plugins/create/create_redshift_rop.py index 4576e9a721..8f4aa1327d 100644 --- a/openpype/hosts/houdini/plugins/create/create_redshift_rop.py +++ b/openpype/hosts/houdini/plugins/create/create_redshift_rop.py @@ -3,7 +3,7 @@ import hou # noqa from openpype.hosts.houdini.api import plugin -from openpype.lib import EnumDef +from openpype.lib import EnumDef, BoolDef class CreateRedshiftROP(plugin.HoudiniCreator): @@ -23,7 +23,7 @@ class CreateRedshiftROP(plugin.HoudiniCreator): # Add chunk size attribute instance_data["chunkSize"] = 10 # Submit for job publishing - instance_data["farm"] = True + instance_data["farm"] = pre_create_data.get("farm") instance = super(CreateRedshiftROP, self).create( subset_name, @@ -100,6 +100,9 @@ class CreateRedshiftROP(plugin.HoudiniCreator): ] return attrs + [ + BoolDef("farm", + label="Submitting to Farm", + default=True), EnumDef("image_format", image_format_enum, default=self.ext, diff --git a/openpype/hosts/houdini/plugins/create/create_vray_rop.py b/openpype/hosts/houdini/plugins/create/create_vray_rop.py index 1de9be4ed6..58748d4c34 100644 --- a/openpype/hosts/houdini/plugins/create/create_vray_rop.py +++ b/openpype/hosts/houdini/plugins/create/create_vray_rop.py @@ -25,7 +25,7 @@ class CreateVrayROP(plugin.HoudiniCreator): # Add chunk size attribute instance_data["chunkSize"] = 10 # Submit for job publishing - instance_data["farm"] = True + instance_data["farm"] = pre_create_data.get("farm") instance = super(CreateVrayROP, self).create( subset_name, @@ -139,6 +139,9 @@ class CreateVrayROP(plugin.HoudiniCreator): ] return attrs + [ + BoolDef("farm", + label="Submitting to Farm", + default=True), EnumDef("image_format", image_format_enum, default=self.ext, diff --git a/openpype/hosts/houdini/plugins/publish/collect_pointcache_type.py b/openpype/hosts/houdini/plugins/publish/collect_pointcache_type.py index 6c527377e0..3323e97c20 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_pointcache_type.py +++ b/openpype/hosts/houdini/plugins/publish/collect_pointcache_type.py @@ -17,5 +17,5 @@ class CollectPointcacheType(pyblish.api.InstancePlugin): def process(self, instance): if instance.data["creator_identifier"] == "io.openpype.creators.houdini.bgeo": # noqa: E501 instance.data["families"] += ["bgeo"] - elif instance.data["creator_identifier"] == "io.openpype.creators.houdini.alembic": # noqa: E501 + elif instance.data["creator_identifier"] == "io.openpype.creators.houdini.pointcache": # noqa: E501 instance.data["families"] += ["abc"] diff --git a/openpype/hosts/maya/api/lib.py b/openpype/hosts/maya/api/lib.py index cdc722a409..40b3419e73 100644 --- 
a/openpype/hosts/maya/api/lib.py +++ b/openpype/hosts/maya/api/lib.py @@ -27,20 +27,16 @@ from openpype.settings import get_project_settings from openpype.pipeline import ( get_current_project_name, get_current_asset_name, + get_current_task_name, discover_loader_plugins, loaders_from_representation, get_representation_path, load_container, - registered_host, + registered_host ) from openpype.lib import NumberDef from openpype.pipeline.context_tools import get_current_project_asset from openpype.pipeline.create import CreateContext -from openpype.pipeline.context_tools import ( - get_current_asset_name, - get_current_project_name, - get_current_task_name -) from openpype.lib.profiles_filtering import filter_profiles diff --git a/openpype/hosts/maya/api/plugin.py b/openpype/hosts/maya/api/plugin.py index 2b5aee9700..34b61698a3 100644 --- a/openpype/hosts/maya/api/plugin.py +++ b/openpype/hosts/maya/api/plugin.py @@ -8,13 +8,24 @@ from maya import cmds from maya.app.renderSetup.model import renderSetup from openpype.lib import BoolDef, Logger -from openpype.pipeline import AVALON_CONTAINER_ID, Anatomy, CreatedInstance -from openpype.pipeline import Creator as NewCreator -from openpype.pipeline import ( - CreatorError, LegacyCreator, LoaderPlugin, get_representation_path, - legacy_io) -from openpype.pipeline.load import LoadError from openpype.settings import get_project_settings +from openpype.pipeline import ( + AVALON_CONTAINER_ID, + Anatomy, + + CreatedInstance, + Creator as NewCreator, + AutoCreator, + HiddenCreator, + + CreatorError, + LegacyCreator, + LoaderPlugin, + get_representation_path, + + legacy_io, +) +from openpype.pipeline.load import LoadError from . import lib from .lib import imprint, read @@ -177,10 +188,42 @@ class MayaCreatorBase(object): return node_data + def _default_collect_instances(self): + self.cache_subsets(self.collection_shared_data) + cached_subsets = self.collection_shared_data["maya_cached_subsets"] + for node in cached_subsets.get(self.identifier, []): + node_data = self.read_instance_node(node) + + created_instance = CreatedInstance.from_existing(node_data, self) + self._add_instance_to_context(created_instance) + + def _default_update_instances(self, update_list): + for created_inst, _changes in update_list: + data = created_inst.data_to_store() + node = data.get("instance_node") + + self.imprint_instance_node(node, data) + + def _default_remove_instances(self, instances): + """Remove specified instance from the scene. + + This is only removing `id` parameter so instance is no longer + instance, because it might contain valuable data for artist. 
+ + """ + for instance in instances: + node = instance.data.get("instance_node") + if node: + cmds.delete(node) + + self._remove_instance_from_context(instance) + @six.add_metaclass(ABCMeta) class MayaCreator(NewCreator, MayaCreatorBase): + settings_name = None + def create(self, subset_name, instance_data, pre_create_data): members = list() @@ -202,34 +245,13 @@ class MayaCreator(NewCreator, MayaCreatorBase): return instance def collect_instances(self): - self.cache_subsets(self.collection_shared_data) - cached_subsets = self.collection_shared_data["maya_cached_subsets"] - for node in cached_subsets.get(self.identifier, []): - node_data = self.read_instance_node(node) - - created_instance = CreatedInstance.from_existing(node_data, self) - self._add_instance_to_context(created_instance) + return self._default_collect_instances() def update_instances(self, update_list): - for created_inst, _changes in update_list: - data = created_inst.data_to_store() - node = data.get("instance_node") - - self.imprint_instance_node(node, data) + return self._default_update_instances(update_list) def remove_instances(self, instances): - """Remove specified instance from the scene. - - This is only removing `id` parameter so instance is no longer - instance, because it might contain valuable data for artist. - - """ - for instance in instances: - node = instance.data.get("instance_node") - if node: - cmds.delete(node) - - self._remove_instance_from_context(instance) + return self._default_remove_instances(instances) def get_pre_create_attr_defs(self): return [ @@ -238,6 +260,61 @@ class MayaCreator(NewCreator, MayaCreatorBase): default=True) ] + def apply_settings(self, project_settings, system_settings): + """Method called on initialization of plugin to apply settings.""" + + settings_name = self.settings_name + if settings_name is None: + settings_name = self.__class__.__name__ + + settings = project_settings["maya"]["create"] + settings = settings.get(settings_name) + if settings is None: + self.log.debug( + "No settings found for {}".format(self.__class__.__name__) + ) + return + + for key, value in settings.items(): + setattr(self, key, value) + + +class MayaAutoCreator(AutoCreator, MayaCreatorBase): + """Automatically triggered creator for Maya. + + The plugin is not visible in UI, and 'create' method does not expect + any arguments. + """ + + def collect_instances(self): + return self._default_collect_instances() + + def update_instances(self, update_list): + return self._default_update_instances(update_list) + + def remove_instances(self, instances): + return self._default_remove_instances(instances) + + +class MayaHiddenCreator(HiddenCreator, MayaCreatorBase): + """Hidden creator for Maya. + + The plugin is not visible in UI, and it does not have strictly defined + arguments for 'create' method. + """ + + def create(self, *args, **kwargs): + return MayaCreator.create(self, *args, **kwargs) + + def collect_instances(self): + return self._default_collect_instances() + + def update_instances(self, update_list): + return self._default_update_instances(update_list) + + def remove_instances(self, instances): + return self._default_remove_instances(instances) + def ensure_namespace(namespace): """Make sure the namespace exists. 
diff --git a/openpype/hosts/maya/plugins/create/convert_legacy.py b/openpype/hosts/maya/plugins/create/convert_legacy.py index 6133abc205..33a1e020dd 100644 --- a/openpype/hosts/maya/plugins/create/convert_legacy.py +++ b/openpype/hosts/maya/plugins/create/convert_legacy.py @@ -51,7 +51,7 @@ class MayaLegacyConvertor(SubsetConvertorPlugin, # From all current new style manual creators find the mapping # from family to identifier family_to_id = {} - for identifier, creator in self.create_context.manual_creators.items(): + for identifier, creator in self.create_context.creators.items(): family = getattr(creator, "family", None) if not family: continue @@ -70,7 +70,6 @@ class MayaLegacyConvertor(SubsetConvertorPlugin, # logic was thus to be live to the current task to begin with. data = dict() data["task"] = self.create_context.get_current_task_name() - for family, instance_nodes in legacy.items(): if family not in family_to_id: self.log.warning( @@ -81,7 +80,7 @@ class MayaLegacyConvertor(SubsetConvertorPlugin, continue creator_id = family_to_id[family] - creator = self.create_context.manual_creators[creator_id] + creator = self.create_context.creators[creator_id] data["creator_identifier"] = creator_id if isinstance(creator, plugin.RenderlayerCreator): diff --git a/openpype/hosts/maya/plugins/create/create_animation.py b/openpype/hosts/maya/plugins/create/create_animation.py index cade8603ce..214ac18aef 100644 --- a/openpype/hosts/maya/plugins/create/create_animation.py +++ b/openpype/hosts/maya/plugins/create/create_animation.py @@ -8,15 +8,13 @@ from openpype.lib import ( ) -class CreateAnimation(plugin.MayaCreator): - """Animation output for character rigs""" - - # We hide the animation creator from the UI since the creation of it - # is automated upon loading a rig. There's an inventory action to recreate - # it for loaded rigs if by chance someone deleted the animation instance. - # Note: This setting is actually applied from project settings - enabled = False +class CreateAnimation(plugin.MayaHiddenCreator): + """Animation output for character rigs + We hide the animation creator from the UI since the creation of it is + automated upon loading a rig. There's an inventory action to recreate it + for loaded rigs if by chance someone deleted the animation instance. + """ identifier = "io.openpype.creators.maya.animation" name = "animationDefault" label = "Animation" @@ -28,9 +26,6 @@ class CreateAnimation(plugin.MayaCreator): include_parent_hierarchy = False include_user_defined_attributes = False - # TODO: Would be great if we could visually hide this from the creator - # by default but do allow to generate it through code. - def get_instance_attr_defs(self): defs = lib.collect_animation_defs() @@ -85,3 +80,12 @@ class CreateAnimation(plugin.MayaCreator): """ return defs + + def apply_settings(self, project_settings, system_settings): + super(CreateAnimation, self).apply_settings( + project_settings, system_settings + ) + # Hardcoding creator to be enabled due to existing settings would + # disable the creator causing the creator plugin to not be + # discoverable. 
+ self.enabled = True diff --git a/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py b/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py index 0c8cf8d2bb..1ef132725f 100644 --- a/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py +++ b/openpype/hosts/maya/plugins/create/create_arnold_scene_source.py @@ -15,6 +15,7 @@ class CreateArnoldSceneSource(plugin.MayaCreator): label = "Arnold Scene Source" family = "ass" icon = "cube" + settings_name = "CreateAss" expandProcedurals = False motionBlur = True diff --git a/openpype/hosts/maya/plugins/create/create_model.py b/openpype/hosts/maya/plugins/create/create_model.py index 30f1a82281..5c3dd04af0 100644 --- a/openpype/hosts/maya/plugins/create/create_model.py +++ b/openpype/hosts/maya/plugins/create/create_model.py @@ -12,7 +12,7 @@ class CreateModel(plugin.MayaCreator): label = "Model" family = "model" icon = "cube" - defaults = ["Main", "Proxy", "_MD", "_HD", "_LD"] + default_variants = ["Main", "Proxy", "_MD", "_HD", "_LD"] write_color_sets = False write_face_sets = False diff --git a/openpype/hosts/maya/plugins/create/create_rig.py b/openpype/hosts/maya/plugins/create/create_rig.py index 04104cb7cb..345ab6c00d 100644 --- a/openpype/hosts/maya/plugins/create/create_rig.py +++ b/openpype/hosts/maya/plugins/create/create_rig.py @@ -20,6 +20,6 @@ class CreateRig(plugin.MayaCreator): instance_node = instance.get("instance_node") self.log.info("Creating Rig instance set up ...") - controls = cmds.sets(name="controls_SET", empty=True) - pointcache = cmds.sets(name="out_SET", empty=True) + controls = cmds.sets(name=subset_name + "_controls_SET", empty=True) + pointcache = cmds.sets(name=subset_name + "_out_SET", empty=True) cmds.sets([controls, pointcache], forceElement=instance_node) diff --git a/openpype/hosts/maya/plugins/create/create_setdress.py b/openpype/hosts/maya/plugins/create/create_setdress.py index 594a3dc46d..23a706380a 100644 --- a/openpype/hosts/maya/plugins/create/create_setdress.py +++ b/openpype/hosts/maya/plugins/create/create_setdress.py @@ -9,7 +9,7 @@ class CreateSetDress(plugin.MayaCreator): label = "Set Dress" family = "setdress" icon = "cubes" - defaults = ["Main", "Anim"] + default_variants = ["Main", "Anim"] def get_instance_attr_defs(self): return [ diff --git a/openpype/hosts/maya/plugins/publish/collect_review.py b/openpype/hosts/maya/plugins/publish/collect_review.py index 6cb10f9066..586939a3b8 100644 --- a/openpype/hosts/maya/plugins/publish/collect_review.py +++ b/openpype/hosts/maya/plugins/publish/collect_review.py @@ -107,6 +107,9 @@ class CollectReview(pyblish.api.InstancePlugin): data["displayLights"] = display_lights data["burninDataMembers"] = burninDataMembers + for key, value in instance.data["publish_attributes"].items(): + data["publish_attributes"][key] = value + # The review instance must be active cmds.setAttr(str(instance) + '.active', 1) diff --git a/openpype/hosts/maya/plugins/publish/extract_look.py b/openpype/hosts/maya/plugins/publish/extract_look.py index e2c88ef44a..b13568c781 100644 --- a/openpype/hosts/maya/plugins/publish/extract_look.py +++ b/openpype/hosts/maya/plugins/publish/extract_look.py @@ -15,8 +15,14 @@ import pyblish.api from maya import cmds # noqa -from openpype.lib.vendor_bin_utils import find_executable -from openpype.lib import source_hash, run_subprocess, get_oiio_tools_path +from openpype.lib import ( + find_executable, + source_hash, + run_subprocess, + get_oiio_tool_args, + ToolNotFoundError, +) + from openpype.pipeline import 
legacy_io, publish, KnownPublishError from openpype.hosts.maya.api import lib @@ -267,12 +273,11 @@ class MakeTX(TextureProcessor): """ - maketx_path = get_oiio_tools_path("maketx") - - if not maketx_path: - raise AssertionError( - "OIIO 'maketx' tool not found. Result: {}".format(maketx_path) - ) + try: + maketx_args = get_oiio_tool_args("maketx") + except ToolNotFoundError: + raise KnownPublishError( + "OpenImageIO is not available on the machine") # Define .tx filepath in staging if source file is not .tx fname, ext = os.path.splitext(os.path.basename(source)) @@ -328,8 +333,7 @@ class MakeTX(TextureProcessor): self.log.info("Generating .tx file for %s .." % source) - subprocess_args = [ - maketx_path, + subprocess_args = maketx_args + [ "-v", # verbose "-u", # update mode # --checknan doesn't influence the output file but aborts the diff --git a/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py b/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py index 41bb414829..b257add7e8 100644 --- a/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py +++ b/openpype/hosts/maya/plugins/publish/validate_instance_in_context.py @@ -3,7 +3,9 @@ from __future__ import absolute_import import pyblish.api -from openpype.pipeline.publish import ValidateContentsOrder +from openpype.pipeline.publish import ( + ValidateContentsOrder, PublishValidationError +) from maya import cmds @@ -108,4 +110,5 @@ class ValidateInstanceInContext(pyblish.api.InstancePlugin): asset = instance.data.get("asset") context_asset = instance.context.data["assetEntity"]["name"] msg = "{} has asset {}".format(instance.name, asset) - assert asset == context_asset, msg + if asset != context_asset: + raise PublishValidationError(msg) diff --git a/openpype/hosts/maya/plugins/publish/validate_model_content.py b/openpype/hosts/maya/plugins/publish/validate_model_content.py index 9ba458a416..19373efad9 100644 --- a/openpype/hosts/maya/plugins/publish/validate_model_content.py +++ b/openpype/hosts/maya/plugins/publish/validate_model_content.py @@ -63,15 +63,10 @@ class ValidateModelContent(pyblish.api.InstancePlugin): return True # Top group - assemblies = cmds.ls(content_instance, assemblies=True, long=True) - if len(assemblies) != 1 and cls.validate_top_group: + top_parents = set([x.split("|")[1] for x in content_instance]) + if cls.validate_top_group and len(top_parents) != 1: cls.log.error("Must have exactly one top group") - return assemblies - if len(assemblies) == 0: - cls.log.warning("No top group found. " - "(Are there objects in the instance?" 
- " Or is it parented in another group?)") - return assemblies or True + return top_parents def _is_visible(node): """Return whether node is visible""" @@ -82,11 +77,11 @@ class ValidateModelContent(pyblish.api.InstancePlugin): visibility=True) # The roots must be visible (the assemblies) - for assembly in assemblies: - if not _is_visible(assembly): - cls.log.error("Invisible assembly (root node) is not " - "allowed: {0}".format(assembly)) - invalid.add(assembly) + for parent in top_parents: + if not _is_visible(parent): + cls.log.error("Invisible parent (root node) is not " + "allowed: {0}".format(parent)) + invalid.add(parent) # Ensure at least one shape is visible if not any(_is_visible(shape) for shape in shapes): diff --git a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py index 75447fdfea..cbc750bace 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py +++ b/openpype/hosts/maya/plugins/publish/validate_rig_output_ids.py @@ -47,7 +47,7 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin): invalid = {} if compute: - out_set = next(x for x in instance if x.endswith("out_SET")) + out_set = next(x for x in instance if "out_SET" in x) instance_nodes = cmds.sets(out_set, query=True, nodesOnly=True) instance_nodes = cmds.ls(instance_nodes, long=True) diff --git a/openpype/hosts/photoshop/plugins/publish/extract_review.py b/openpype/hosts/photoshop/plugins/publish/extract_review.py index d5416a389d..4aa7a05bd1 100644 --- a/openpype/hosts/photoshop/plugins/publish/extract_review.py +++ b/openpype/hosts/photoshop/plugins/publish/extract_review.py @@ -1,10 +1,9 @@ import os -import shutil from PIL import Image from openpype.lib import ( run_subprocess, - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, ) from openpype.pipeline import publish from openpype.hosts.photoshop import api as photoshop @@ -85,7 +84,7 @@ class ExtractReview(publish.Extractor): instance.data["representations"].append(repre_skeleton) processed_img_names = [img_list] - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") + ffmpeg_args = get_ffmpeg_tool_args("ffmpeg") instance.data["stagingDir"] = staging_dir @@ -94,13 +93,21 @@ class ExtractReview(publish.Extractor): source_files_pattern = self._check_and_resize(processed_img_names, source_files_pattern, staging_dir) - self._generate_thumbnail(ffmpeg_path, instance, source_files_pattern, - staging_dir) + self._generate_thumbnail( + list(ffmpeg_args), + instance, + source_files_pattern, + staging_dir) no_of_frames = len(processed_img_names) if no_of_frames > 1: - self._generate_mov(ffmpeg_path, instance, fps, no_of_frames, - source_files_pattern, staging_dir) + self._generate_mov( + list(ffmpeg_args), + instance, + fps, + no_of_frames, + source_files_pattern, + staging_dir) self.log.info(f"Extracted {instance} to {staging_dir}") @@ -142,8 +149,9 @@ class ExtractReview(publish.Extractor): "tags": self.mov_options['tags'] }) - def _generate_thumbnail(self, ffmpeg_path, instance, source_files_pattern, - staging_dir): + def _generate_thumbnail( + self, ffmpeg_args, instance, source_files_pattern, staging_dir + ): """Generates scaled down thumbnail and adds it as representation. 
Args: @@ -157,8 +165,7 @@ class ExtractReview(publish.Extractor): # Generate thumbnail thumbnail_path = os.path.join(staging_dir, "thumbnail.jpg") self.log.info(f"Generate thumbnail {thumbnail_path}") - args = [ - ffmpeg_path, + args = ffmpeg_args + [ "-y", "-i", source_files_pattern, "-vf", "scale=300:-1", diff --git a/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py b/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py index 9f02d65d00..b99503b3c8 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/extract_thumbnail.py @@ -1,8 +1,9 @@ import os +import subprocess import tempfile import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, get_ffprobe_streams, path_to_subprocess_arg, run_subprocess, @@ -62,12 +63,12 @@ class ExtractThumbnailSP(pyblish.api.InstancePlugin): instance.context.data["cleanupFullPaths"].append(full_thumbnail_path) - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") + ffmpeg_executable_args = get_ffmpeg_tool_args("ffmpeg") ffmpeg_args = self.ffmpeg_args or {} jpeg_items = [ - path_to_subprocess_arg(ffmpeg_path), + subprocess.list2cmdline(ffmpeg_executable_args), # override file if already exists "-y" ] diff --git a/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py b/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py index ab5bbc5e2c..c10fc4de97 100644 --- a/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py +++ b/openpype/hosts/tvpaint/plugins/publish/extract_convert_to_exr.py @@ -9,7 +9,8 @@ import json import pyblish.api from openpype.lib import ( - get_oiio_tools_path, + get_oiio_tool_args, + ToolNotFoundError, run_subprocess, ) from openpype.pipeline import KnownPublishError @@ -34,11 +35,12 @@ class ExtractConvertToEXR(pyblish.api.InstancePlugin): if not repres: return - oiio_path = get_oiio_tools_path() - # Raise an exception when oiiotool is not available - # - this can currently happen on MacOS machines - if not os.path.exists(oiio_path): - KnownPublishError( + try: + oiio_args = get_oiio_tool_args("oiiotool") + except ToolNotFoundError: + # Raise an exception when oiiotool is not available + # - this can currently happen on MacOS machines + raise KnownPublishError( "OpenImageIO tool is not available on this machine." ) @@ -64,8 +66,8 @@ class ExtractConvertToEXR(pyblish.api.InstancePlugin): src_filepaths.add(src_filepath) - args = [ - oiio_path, src_filepath, + args = oiio_args + [ + src_filepath, "--compression", self.exr_compression, # TODO how to define color conversion? 
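Several extractors in this change (Harmony, Photoshop, Standalone Publisher) swap `get_ffmpeg_tool_path` for `get_ffmpeg_tool_args`, which returns the executable plus any required leading arguments as a list, so callers concatenate instead of prepending a single path. A sketch of building the same kind of thumbnail command with that helper, assuming the `openpype.lib` names shown in this diff:

```python
import subprocess

from openpype.lib import get_ffmpeg_tool_args


def make_thumbnail(source_path, thumbnail_path, width=300):
    """Render a single downscaled frame from 'source_path' with ffmpeg."""
    args = list(get_ffmpeg_tool_args("ffmpeg")) + [
        "-y",                                # overwrite existing output
        "-i", source_path,
        "-vf", "scale={}:-1".format(width),  # keep aspect ratio
        "-vframes", "1",
        thumbnail_path,
    ]
    subprocess.run(args, check=True)
```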
"--colorconvert", "sRGB", "linear", diff --git a/openpype/hosts/unreal/addon.py b/openpype/hosts/unreal/addon.py index b5c978d98f..3225d742a3 100644 --- a/openpype/hosts/unreal/addon.py +++ b/openpype/hosts/unreal/addon.py @@ -54,7 +54,8 @@ class UnrealAddon(OpenPypeModule, IHostAddon): # Set default environments if are not set via settings defaults = { - "OPENPYPE_LOG_NO_COLORS": "True" + "OPENPYPE_LOG_NO_COLORS": "True", + "UE_PYTHONPATH": os.environ.get("PYTHONPATH", ""), } for key, value in defaults.items(): if not env.get(key): diff --git a/openpype/hosts/unreal/hooks/pre_workfile_preparation.py b/openpype/hosts/unreal/hooks/pre_workfile_preparation.py index 760d55077a..e5010366b8 100644 --- a/openpype/hosts/unreal/hooks/pre_workfile_preparation.py +++ b/openpype/hosts/unreal/hooks/pre_workfile_preparation.py @@ -3,21 +3,21 @@ import os import copy from pathlib import Path -from openpype.widgets.splash_screen import SplashScreen + from qtpy import QtCore -from openpype.hosts.unreal.ue_workers import ( - UEProjectGenerationWorker, - UEPluginInstallWorker -) from openpype import resources from openpype.lib import ( PreLaunchHook, ApplicationLaunchFailed, - ApplicationNotFound, ) from openpype.pipeline.workfile import get_workfile_template_key import openpype.hosts.unreal.lib as unreal_lib +from openpype.hosts.unreal.ue_workers import ( + UEProjectGenerationWorker, + UEPluginInstallWorker +) +from openpype.hosts.unreal.ui import SplashScreen class UnrealPrelaunchHook(PreLaunchHook): @@ -187,24 +187,36 @@ class UnrealPrelaunchHook(PreLaunchHook): project_path.mkdir(parents=True, exist_ok=True) - # Set "AYON_UNREAL_PLUGIN" to current process environment for - # execution of `create_unreal_project` - - if self.launch_context.env.get("AYON_UNREAL_PLUGIN"): - self.log.info(( - f"{self.signature} using Ayon plugin from " - f"{self.launch_context.env.get('AYON_UNREAL_PLUGIN')}" - )) - env_key = "AYON_UNREAL_PLUGIN" - if self.launch_context.env.get(env_key): - os.environ[env_key] = self.launch_context.env[env_key] - # engine_path points to the specific Unreal Engine root # so, we are going up from the executable itself 3 levels. engine_path: Path = Path(executable).parents[3] - if not unreal_lib.check_plugin_existence(engine_path): - self.exec_plugin_install(engine_path) + # Check if new env variable exists, and if it does, if the path + # actually contains the plugin. If not, install it. 
+ + built_plugin_path = self.launch_context.env.get( + "AYON_BUILT_UNREAL_PLUGIN", None) + + if unreal_lib.check_built_plugin_existance(built_plugin_path): + self.log.info(( + f"{self.signature} using existing built Ayon plugin from " + f"{built_plugin_path}" + )) + unreal_lib.copy_built_plugin(engine_path, Path(built_plugin_path)) + else: + # Set "AYON_UNREAL_PLUGIN" to current process environment for + # execution of `create_unreal_project` + env_key = "AYON_UNREAL_PLUGIN" + if self.launch_context.env.get(env_key): + self.log.info(( + f"{self.signature} using Ayon plugin from " + f"{self.launch_context.env.get(env_key)}" + )) + if self.launch_context.env.get(env_key): + os.environ[env_key] = self.launch_context.env[env_key] + + if not unreal_lib.check_plugin_existence(engine_path): + self.exec_plugin_install(engine_path) project_file = project_path / unreal_project_filename diff --git a/openpype/hosts/unreal/lib.py b/openpype/hosts/unreal/lib.py index 67e7891344..0c39773c19 100644 --- a/openpype/hosts/unreal/lib.py +++ b/openpype/hosts/unreal/lib.py @@ -429,6 +429,36 @@ def get_build_id(engine_path: Path, ue_version: str) -> str: return "{" + loaded_modules.get("BuildId") + "}" +def check_built_plugin_existance(plugin_path) -> bool: + if not plugin_path: + return False + + integration_plugin_path = Path(plugin_path) + + if not integration_plugin_path.is_dir(): + raise RuntimeError("Path to the integration plugin is null!") + + if not (integration_plugin_path / "Binaries").is_dir() \ + or not (integration_plugin_path / "Intermediate").is_dir(): + return False + + return True + + +def copy_built_plugin(engine_path: Path, plugin_path: Path) -> None: + ayon_plugin_path: Path = engine_path / "Engine/Plugins/Marketplace/Ayon" + + if not ayon_plugin_path.is_dir(): + ayon_plugin_path.mkdir(parents=True, exist_ok=True) + + engine_plugin_config_path: Path = ayon_plugin_path / "Config" + engine_plugin_config_path.mkdir(exist_ok=True) + + dir_util._path_created = {} + + dir_util.copy_tree(plugin_path.as_posix(), ayon_plugin_path.as_posix()) + + def check_plugin_existence(engine_path: Path, env: dict = None) -> bool: env = env or os.environ integration_plugin_path: Path = Path(env.get("AYON_UNREAL_PLUGIN", "")) diff --git a/openpype/hosts/unreal/ui/__init__.py b/openpype/hosts/unreal/ui/__init__.py new file mode 100644 index 0000000000..606b21ef19 --- /dev/null +++ b/openpype/hosts/unreal/ui/__init__.py @@ -0,0 +1,5 @@ +from .splash_screen import SplashScreen + +__all__ = ( + "SplashScreen", +) diff --git a/openpype/widgets/splash_screen.py b/openpype/hosts/unreal/ui/splash_screen.py similarity index 98% rename from openpype/widgets/splash_screen.py rename to openpype/hosts/unreal/ui/splash_screen.py index 7c1ff72ecd..7ac77821d9 100644 --- a/openpype/widgets/splash_screen.py +++ b/openpype/hosts/unreal/ui/splash_screen.py @@ -1,6 +1,5 @@ from qtpy import QtWidgets, QtCore, QtGui from openpype import style, resources -from igniter.nice_progress_bar import NiceProgressBar class SplashScreen(QtWidgets.QDialog): @@ -143,7 +142,7 @@ class SplashScreen(QtWidgets.QDialog): button_layout.addWidget(self.close_btn) # Progress Bar - self.progress_bar = NiceProgressBar() + self.progress_bar = QtWidgets.QProgressBar() self.progress_bar.setValue(0) self.progress_bar.setAlignment(QtCore.Qt.AlignTop) diff --git a/openpype/lib/__init__.py b/openpype/lib/__init__.py index 06de486f2e..40df264452 100644 --- a/openpype/lib/__init__.py +++ b/openpype/lib/__init__.py @@ -22,11 +22,14 @@ from .events import ( ) from 
.vendor_bin_utils import ( + ToolNotFoundError, find_executable, get_vendor_bin_path, get_oiio_tools_path, + get_oiio_tool_args, get_ffmpeg_tool_path, - is_oiio_supported + get_ffmpeg_tool_args, + is_oiio_supported, ) from .attribute_definitions import ( @@ -53,7 +56,6 @@ from .env_tools import ( from .terminal import Terminal from .execute import ( get_openpype_execute_args, - get_pype_execute_args, get_linux_launcher_args, execute, run_subprocess, @@ -65,7 +67,6 @@ from .execute import ( ) from .log import ( Logger, - PypeLogger, ) from .path_templates import ( @@ -77,12 +78,6 @@ from .path_templates import ( FormatObject, ) -from .mongo import ( - get_default_components, - validate_mongo_connection, - OpenPypeMongoConnection -) - from .dateutils import ( get_datetime_data, get_timestamp, @@ -115,25 +110,6 @@ from .transcoding import ( convert_ffprobe_fps_value, convert_ffprobe_fps_to_float, ) -from .avalon_context import ( - CURRENT_DOC_SCHEMAS, - create_project, - - get_workfile_template_key, - get_workfile_template_key_from_context, - get_last_workfile_with_version, - get_last_workfile, - - BuildWorkfile, - - get_creator_by_name, - - get_custom_workfile_template, - - get_custom_workfile_template_by_context, - get_custom_workfile_template_by_string_context, - get_custom_workfile_template -) from .local_settings import ( IniSettingRegistry, @@ -163,9 +139,6 @@ from .applications import ( ) from .plugin_tools import ( - TaskNotSetError, - get_subset_name, - get_subset_name_with_asset_doc, prepare_template_data, source_hash, ) @@ -177,9 +150,6 @@ from .path_tools import ( version_up, get_version_from_path, get_last_version_from_path, - create_project_folders, - create_workdir_extra_folders, - get_project_basic_paths, ) from .openpype_version import ( @@ -205,9 +175,7 @@ __all__ = [ "emit_event", "register_event_callback", - "find_executable", "get_openpype_execute_args", - "get_pype_execute_args", "get_linux_launcher_args", "execute", "run_subprocess", @@ -220,9 +188,13 @@ __all__ = [ "env_value_to_bool", "get_paths_from_environ", + "ToolNotFoundError", + "find_executable", "get_vendor_bin_path", "get_oiio_tools_path", + "get_oiio_tool_args", "get_ffmpeg_tool_path", + "get_ffmpeg_tool_args", "is_oiio_supported", "AbstractAttrDef", @@ -257,22 +229,6 @@ __all__ = [ "convert_ffprobe_fps_value", "convert_ffprobe_fps_to_float", - "CURRENT_DOC_SCHEMAS", - "create_project", - - "get_workfile_template_key", - "get_workfile_template_key_from_context", - "get_last_workfile_with_version", - "get_last_workfile", - - "BuildWorkfile", - - "get_creator_by_name", - - "get_custom_workfile_template_by_context", - "get_custom_workfile_template_by_string_context", - "get_custom_workfile_template", - "IniSettingRegistry", "JSONSettingRegistry", "OpenPypeSecureRegistry", @@ -298,9 +254,7 @@ __all__ = [ "filter_profiles", - "TaskNotSetError", - "get_subset_name", - "get_subset_name_with_asset_doc", + "prepare_template_data", "source_hash", "format_file_size", @@ -323,15 +277,6 @@ __all__ = [ "get_formatted_current_time", "Logger", - "PypeLogger", - - "get_default_components", - "validate_mongo_connection", - "OpenPypeMongoConnection", - - "create_project_folders", - "create_workdir_extra_folders", - "get_project_basic_paths", "op_version_control_available", "get_openpype_version", diff --git a/openpype/lib/applications.py b/openpype/lib/applications.py index f47e11926c..fbde59ced5 100644 --- a/openpype/lib/applications.py +++ b/openpype/lib/applications.py @@ -1640,11 +1640,7 @@ def 
prepare_context_environments(data, env_group=None, modules_manager=None): project_doc = data["project_doc"] asset_doc = data["asset_doc"] task_name = data["task_name"] - if ( - not project_doc - or not asset_doc - or not task_name - ): + if not project_doc: log.info( "Skipping context environments preparation." " Launch context does not contain required data." @@ -1657,18 +1653,16 @@ def prepare_context_environments(data, env_group=None, modules_manager=None): system_settings = get_system_settings() data["project_settings"] = project_settings data["system_settings"] = system_settings - # Apply project specific environments on current env value - apply_project_environments_value( - project_name, data["env"], project_settings, env_group - ) app = data["app"] context_env = { "AVALON_PROJECT": project_doc["name"], - "AVALON_ASSET": asset_doc["name"], - "AVALON_TASK": task_name, "AVALON_APP_NAME": app.full_name } + if asset_doc: + context_env["AVALON_ASSET"] = asset_doc["name"] + if task_name: + context_env["AVALON_TASK"] = task_name log.debug( "Context environments set:\n{}".format( @@ -1676,9 +1670,25 @@ def prepare_context_environments(data, env_group=None, modules_manager=None): ) ) data["env"].update(context_env) + + # Apply project specific environments on current env value + # - apply them once the context environments are set + apply_project_environments_value( + project_name, data["env"], project_settings, env_group + ) + if not app.is_host: return + data["env"]["AVALON_APP"] = app.host_name + + if not asset_doc or not task_name: + # QUESTION replace with log.info and skip workfile discovery? + # - technically it should be possible to launch host without context + raise ApplicationLaunchFailed( + "Host launch require asset and task context." + ) + workdir_data = get_template_data( project_doc, asset_doc, task_name, app.host_name, system_settings ) @@ -1716,7 +1726,6 @@ def prepare_context_environments(data, env_group=None, modules_manager=None): "Couldn't create workdir because: {}".format(str(exc)) ) - data["env"]["AVALON_APP"] = app.host_name data["env"]["AVALON_WORKDIR"] = workdir _prepare_last_workfile(data, workdir, modules_manager) diff --git a/openpype/lib/avalon_context.py b/openpype/lib/avalon_context.py deleted file mode 100644 index a9ae27cb79..0000000000 --- a/openpype/lib/avalon_context.py +++ /dev/null @@ -1,654 +0,0 @@ -"""Should be used only inside of hosts.""" - -import platform -import logging -import functools -import warnings - -import six - -from openpype.client import ( - get_project, - get_asset_by_name, -) -from openpype.client.operations import ( - CURRENT_ASSET_DOC_SCHEMA, - CURRENT_PROJECT_SCHEMA, - CURRENT_PROJECT_CONFIG_SCHEMA, -) -from .profiles_filtering import filter_profiles -from .path_templates import StringTemplate - -legacy_io = None - -log = logging.getLogger("AvalonContext") - - -# Backwards compatibility - should not be used anymore -# - Will be removed in OP 3.16.* -CURRENT_DOC_SCHEMAS = { - "project": CURRENT_PROJECT_SCHEMA, - "asset": CURRENT_ASSET_DOC_SCHEMA, - "config": CURRENT_PROJECT_CONFIG_SCHEMA -} - - -class AvalonContextDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. 
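`prepare_context_environments` above now only injects `AVALON_ASSET` and `AVALON_TASK` when those pieces of context are actually known, applies project-specific environments after the context keys are set, and still fails explicitly when a host is launched without asset and task. The key-selection part boils down to something like the sketch below; the function name and the sample app name are hypothetical.

```python
def build_context_env(project_name, app_full_name,
                      asset_name=None, task_name=None):
    """Return only the context keys that are actually known."""
    env = {
        "AVALON_PROJECT": project_name,
        "AVALON_APP_NAME": app_full_name,
    }
    if asset_name:
        env["AVALON_ASSET"] = asset_name
    if task_name:
        env["AVALON_TASK"] = task_name
    return env


print(build_context_env("demo_project", "maya/2024"))
# {'AVALON_PROJECT': 'demo_project', 'AVALON_APP_NAME': 'maya/2024'}
```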
- """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." - ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", AvalonContextDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=AvalonContextDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -@deprecated("openpype.client.operations.create_project") -def create_project( - project_name, project_code, library_project=False, dbcon=None -): - """Create project using OpenPype settings. - - This project creation function is not validating project document on - creation. It is because project document is created blindly with only - minimum required information about project which is it's name, code, type - and schema. - - Entered project name must be unique and project must not exist yet. - - Args: - project_name(str): New project name. Should be unique. - project_code(str): Project's code should be unique too. - library_project(bool): Project is library project. - dbcon(AvalonMongoDB): Object of connection to MongoDB. - - Raises: - ValueError: When project name already exists in MongoDB. - - Returns: - dict: Created project document. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.client.operations import create_project - - return create_project(project_name, project_code, library_project) - - -def with_pipeline_io(func): - @functools.wraps(func) - def wrapped(*args, **kwargs): - global legacy_io - if legacy_io is None: - from openpype.pipeline import legacy_io - return func(*args, **kwargs) - return wrapped - - -@deprecated("openpype.client.get_linked_asset_ids") -def get_linked_asset_ids(asset_doc): - """Return linked asset ids for `asset_doc` from DB - - Args: - asset_doc (dict): Asset document from DB. - - Returns: - (list): MongoDB ids of input links. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.client import get_linked_asset_ids - from openpype.pipeline import legacy_io - - project_name = legacy_io.active_project() - - return get_linked_asset_ids(project_name, asset_doc=asset_doc) - - -@deprecated( - "openpype.pipeline.workfile.get_workfile_template_key_from_context") -def get_workfile_template_key_from_context( - asset_name, task_name, host_name, project_name=None, - dbcon=None, project_settings=None -): - """Helper function to get template key for workfile template. - - Do the same as `get_workfile_template_key` but returns value for "session - context". - - It is required to pass one of 'dbcon' with already set project name or - 'project_name' arguments. - - Args: - asset_name(str): Name of asset document. - task_name(str): Task name for which is template key retrieved. - Must be available on asset document under `data.tasks`. - host_name(str): Name of host implementation for which is workfile - used. - project_name(str): Project name where asset and task is. Not required - when 'dbcon' is passed. 
- dbcon(AvalonMongoDB): Connection to mongo with already set project - under `AVALON_PROJECT`. Not required when 'project_name' is passed. - project_settings(dict): Project settings for passed 'project_name'. - Not required at all but makes function faster. - Raises: - ValueError: When both 'dbcon' and 'project_name' were not - passed. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.workfile import ( - get_workfile_template_key_from_context - ) - - if not project_name: - if not dbcon: - raise ValueError(( - "`get_workfile_template_key_from_context` requires to pass" - " one of 'dbcon' or 'project_name' arguments." - )) - project_name = dbcon.active_project() - - return get_workfile_template_key_from_context( - asset_name, task_name, host_name, project_name, project_settings - ) - - -@deprecated( - "openpype.pipeline.workfile.get_workfile_template_key") -def get_workfile_template_key( - task_type, host_name, project_name=None, project_settings=None -): - """Workfile template key which should be used to get workfile template. - - Function is using profiles from project settings to return right template - for passet task type and host name. - - One of 'project_name' or 'project_settings' must be passed it is preferred - to pass settings if are already available. - - Args: - task_type(str): Name of task type. - host_name(str): Name of host implementation (e.g. "maya", "nuke", ...) - project_name(str): Name of project in which context should look for - settings. Not required if `project_settings` are passed. - project_settings(dict): Prepare project settings for project name. - Not needed if `project_name` is passed. - - Raises: - ValueError: When both 'project_name' and 'project_settings' were not - passed. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.workfile import get_workfile_template_key - - return get_workfile_template_key( - task_type, host_name, project_name, project_settings - ) - - -@deprecated("openpype.pipeline.context_tools.compute_session_changes") -def compute_session_changes( - session, task=None, asset=None, app=None, template_key=None -): - """Compute the changes for a Session object on asset, task or app switch - - This does *NOT* update the Session object, but returns the changes - required for a valid update of the Session. - - Args: - session (dict): The initial session to compute changes to. - This is required for computing the full Work Directory, as that - also depends on the values that haven't changed. - task (str, Optional): Name of task to switch to. - asset (str or dict, Optional): Name of asset to switch to. - You can also directly provide the Asset dictionary as returned - from the database to avoid an additional query. (optimization) - app (str, Optional): Name of app to switch to. - - Returns: - dict: The required changes in the Session dictionary. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline import legacy_io - from openpype.pipeline.context_tools import compute_session_changes - - if isinstance(asset, six.string_types): - project_name = legacy_io.active_project() - asset = get_asset_by_name(project_name, asset) - - return compute_session_changes( - session, - asset, - task, - template_key - ) - - -@deprecated("openpype.pipeline.context_tools.get_workdir_from_session") -def get_workdir_from_session(session=None, template_key=None): - """Calculate workdir path based on session data. 
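Editor's note: since these wrappers only forward to the relocated implementations, call sites can usually migrate with an import change. A minimal sketch with placeholder context values (not taken from this repository):

from openpype.pipeline.workfile import get_workfile_template_key

# Placeholder task type, host name and project name, for illustration only.
template_key = get_workfile_template_key("Modeling", "maya", "my_project")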
- - Args: - session (Union[None, Dict[str, str]]): Session to use. If not passed - current context session is used (from legacy_io). - template_key (Union[str, None]): Precalculate template key to define - workfile template name in Anatomy. - - Returns: - str: Workdir path. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.context_tools import get_workdir_from_session - - return get_workdir_from_session(session, template_key) - - -@deprecated("openpype.pipeline.context_tools.change_current_context") -def update_current_task(task=None, asset=None, app=None, template_key=None): - """Update active Session to a new task work area. - - This updates the live Session to a different `asset`, `task` or `app`. - - Args: - task (str): The task to set. - asset (str): The asset to set. - app (str): The app to set. - - Returns: - dict: The changed key, values in the current Session. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline import legacy_io - from openpype.pipeline.context_tools import change_current_context - - project_name = legacy_io.active_project() - if isinstance(asset, six.string_types): - asset = get_asset_by_name(project_name, asset) - - return change_current_context(asset, task, template_key) - - -@deprecated("openpype.pipeline.workfile.BuildWorkfile") -def BuildWorkfile(): - """Build workfile class was moved to workfile pipeline. - - Deprecated: - Function will be removed after release version 3.16.* - """ - from openpype.pipeline.workfile import BuildWorkfile - - return BuildWorkfile() - - -@deprecated("openpype.pipeline.create.get_legacy_creator_by_name") -def get_creator_by_name(creator_name, case_sensitive=False): - """Find creator plugin by name. - - Args: - creator_name (str): Name of creator class that should be returned. - case_sensitive (bool): Match of creator plugin name is case sensitive. - Set to `False` by default. - - Returns: - Creator: Return first matching plugin or `None`. - - Deprecated: - Function will be removed after release version 3.16.* - """ - from openpype.pipeline.create import get_legacy_creator_by_name - - return get_legacy_creator_by_name(creator_name, case_sensitive) - - -def _get_task_context_data_for_anatomy( - project_doc, asset_doc, task_name, anatomy=None -): - """Prepare Task context for anatomy data. - - WARNING: this data structure is currently used only in workfile templates. - Key "task" is currently in rest of pipeline used as string with task - name. - - Args: - project_doc (dict): Project document with available "name" and - "data.code" keys. - asset_doc (dict): Asset document from MongoDB. - task_name (str): Name of context task. - anatomy (Anatomy): Optionally Anatomy for passed project name can be - passed as Anatomy creation may be slow. - - Returns: - dict: With Anatomy context data. - """ - - from openpype.pipeline.template_data import get_general_template_data - - if anatomy is None: - from openpype.pipeline import Anatomy - anatomy = Anatomy(project_doc["name"]) - - asset_name = asset_doc["name"] - project_task_types = anatomy["tasks"] - - # get relevant task type from asset doc - assert task_name in asset_doc["data"]["tasks"], ( - "Task name \"{}\" not found on asset \"{}\"".format( - task_name, asset_name - ) - ) - - task_type = asset_doc["data"]["tasks"][task_name].get("type") - - assert task_type, ( - "Task name \"{}\" on asset \"{}\" does not have specified task type." 
- ).format(asset_name, task_name) - - # get short name for task type defined in default anatomy settings - project_task_type_data = project_task_types.get(task_type) - assert project_task_type_data, ( - "Something went wrong. Default anatomy tasks are not holding" - "requested task type: `{}`".format(task_type) - ) - - data = { - "project": { - "name": project_doc["name"], - "code": project_doc["data"].get("code") - }, - "asset": asset_name, - "task": { - "name": task_name, - "type": task_type, - "short": project_task_type_data["short_name"] - } - } - - system_general_data = get_general_template_data() - data.update(system_general_data) - - return data - - -@deprecated( - "openpype.pipeline.workfile.get_custom_workfile_template_by_context") -def get_custom_workfile_template_by_context( - template_profiles, project_doc, asset_doc, task_name, anatomy=None -): - """Filter and fill workfile template profiles by passed context. - - It is expected that passed argument are already queried documents of - project and asset as parents of processing task name. - - Existence of formatted path is not validated. - - Args: - template_profiles(list): Template profiles from settings. - project_doc(dict): Project document from MongoDB. - asset_doc(dict): Asset document from MongoDB. - task_name(str): Name of task for which templates are filtered. - anatomy(Anatomy): Optionally passed anatomy object for passed project - name. - - Returns: - str: Path to template or None if none of profiles match current - context. (Existence of formatted path is not validated.) - - Deprecated: - Function will be removed after release version 3.16.* - """ - - if anatomy is None: - from openpype.pipeline import Anatomy - anatomy = Anatomy(project_doc["name"]) - - # get project, asset, task anatomy context data - anatomy_context_data = _get_task_context_data_for_anatomy( - project_doc, asset_doc, task_name, anatomy - ) - # add root dict - anatomy_context_data["root"] = anatomy.roots - - # get task type for the task in context - current_task_type = anatomy_context_data["task"]["type"] - - # get path from matching profile - matching_item = filter_profiles( - template_profiles, - {"task_types": current_task_type} - ) - # when path is available try to format it in case - # there are some anatomy template strings - if matching_item: - template = matching_item["path"][platform.system().lower()] - return StringTemplate.format_strict_template( - template, anatomy_context_data - ) - - return None - - -@deprecated( - "openpype.pipeline.workfile.get_custom_workfile_template_by_string_context" -) -def get_custom_workfile_template_by_string_context( - template_profiles, project_name, asset_name, task_name, - dbcon=None, anatomy=None -): - """Filter and fill workfile template profiles by passed context. - - Passed context are string representations of project, asset and task. - Function will query documents of project and asset to be able use - `get_custom_workfile_template_by_context` for rest of logic. - - Args: - template_profiles(list): Loaded workfile template profiles. - project_name(str): Project name. - asset_name(str): Asset name. - task_name(str): Task name. - dbcon(AvalonMongoDB): Optional avalon implementation of mongo - connection with context Session. - anatomy(Anatomy): Optionally prepared anatomy object for passed - project. - - Returns: - str: Path to template or None if none of profiles match current - context. (Existence of formatted path is not validated.) 
- - Deprecated: - Function will be removed after release version 3.16.* - """ - - project_name = None - if anatomy is not None: - project_name = anatomy.project_name - - if not project_name and dbcon is not None: - project_name = dbcon.active_project() - - if not project_name: - raise ValueError("Can't determina project") - - project_doc = get_project(project_name, fields=["name", "data.code"]) - asset_doc = get_asset_by_name( - project_name, asset_name, fields=["name", "data.tasks"]) - - return get_custom_workfile_template_by_context( - template_profiles, project_doc, asset_doc, task_name, anatomy - ) - - -@deprecated("openpype.pipeline.context_tools.get_custom_workfile_template") -def get_custom_workfile_template(template_profiles): - """Filter and fill workfile template profiles by current context. - - Current context is defined by `legacy_io.Session`. That's why this - function should be used only inside host where context is set and stable. - - Args: - template_profiles(list): Template profiles from settings. - - Returns: - str: Path to template or None if none of profiles match current - context. (Existence of formatted path is not validated.) - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline import legacy_io - - return get_custom_workfile_template_by_string_context( - template_profiles, - legacy_io.Session["AVALON_PROJECT"], - legacy_io.Session["AVALON_ASSET"], - legacy_io.Session["AVALON_TASK"], - legacy_io - ) - - -@deprecated("openpype.pipeline.workfile.get_last_workfile_with_version") -def get_last_workfile_with_version( - workdir, file_template, fill_data, extensions -): - """Return last workfile version. - - Args: - workdir(str): Path to dir where workfiles are stored. - file_template(str): Template of file name. - fill_data(dict): Data for filling template. - extensions(list, tuple): All allowed file extensions of workfile. - - Returns: - tuple: Last workfile with version if there is any otherwise - returns (None, None). - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.workfile import get_last_workfile_with_version - - return get_last_workfile_with_version( - workdir, file_template, fill_data, extensions - ) - - -@deprecated("openpype.pipeline.workfile.get_last_workfile") -def get_last_workfile( - workdir, file_template, fill_data, extensions, full_path=False -): - """Return last workfile filename. - - Returns file with version 1 if there is not workfile yet. - - Args: - workdir(str): Path to dir where workfiles are stored. - file_template(str): Template of file name. - fill_data(dict): Data for filling template. - extensions(list, tuple): All allowed file extensions of workfile. - full_path(bool): Full path to file is returned if set to True. - - Returns: - str: Last or first workfile as filename of full path to filename. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.workfile import get_last_workfile - - return get_last_workfile( - workdir, file_template, fill_data, extensions, full_path - ) - - -@deprecated("openpype.client.get_linked_representation_id") -def get_linked_ids_for_representations( - project_name, repre_ids, dbcon=None, link_type=None, max_depth=0 -): - """Returns list of linked ids of particular type (if provided). 
- - Goes from representations to version, back to representations - Args: - project_name (str) - repre_ids (list) or (ObjectId) - dbcon (avalon.mongodb.AvalonMongoDB, optional): Avalon Mongo connection - with Session. - link_type (str): ['reference', '..] - max_depth (int): limit how many levels of recursion - - Returns: - (list) of ObjectId - linked representations - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.client import get_linked_representation_id - - if not isinstance(repre_ids, list): - repre_ids = [repre_ids] - - output = [] - for repre_id in repre_ids: - output.extend(get_linked_representation_id( - project_name, - repre_id=repre_id, - link_type=link_type, - max_depth=max_depth - )) - return output diff --git a/openpype/lib/delivery.py b/openpype/lib/delivery.py deleted file mode 100644 index efb542de75..0000000000 --- a/openpype/lib/delivery.py +++ /dev/null @@ -1,252 +0,0 @@ -"""Functions useful for delivery action or loader""" -import os -import shutil -import functools -import warnings - - -class DeliveryDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." - ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", DeliveryDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=DeliveryDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -@deprecated("openpype.lib.path_tools.collect_frames") -def collect_frames(files): - """Returns dict of source path and its frame, if from sequence - - Uses clique as most precise solution, used when anatomy template that - created files is not known. - - Assumption is that frames are separated by '.', negative frames are not - allowed. - - Args: - files(list) or (set with single value): list of source paths - - Returns: - (dict): {'/asset/subset_v001.0001.png': '0001', ....} - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from .path_tools import collect_frames - - return collect_frames(files) - - -@deprecated("openpype.lib.path_tools.format_file_size") -def sizeof_fmt(num, suffix=None): - """Returns formatted string with size in appropriate unit - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from .path_tools import format_file_size - return format_file_size(num, suffix) - - -@deprecated("openpype.pipeline.load.get_representation_path_with_anatomy") -def path_from_representation(representation, anatomy): - """Get representation path using representation document and anatomy. - - Args: - representation (Dict[str, Any]): Representation document. - anatomy (Anatomy): Project anatomy. 
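Editor's note: the delivery/loader helpers follow the same forwarding pattern; resolving a representation path, for instance, now goes straight through the pipeline package. A minimal sketch, assuming a queried representation document and a project name are already at hand:

from openpype.pipeline import Anatomy
from openpype.pipeline.load import get_representation_path_with_anatomy


def resolve_representation_path(project_name, repre_doc):
    # Anatomy creation can be slow, so reuse the instance when resolving
    # many representations of the same project.
    anatomy = Anatomy(project_name)
    return get_representation_path_with_anatomy(repre_doc, anatomy)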
- - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.load import get_representation_path_with_anatomy - - return get_representation_path_with_anatomy(representation, anatomy) - - -@deprecated -def copy_file(src_path, dst_path): - """Hardlink file if possible(to save space), copy if not""" - from openpype.lib import create_hard_link # safer importing - - if os.path.exists(dst_path): - return - try: - create_hard_link( - src_path, - dst_path - ) - except OSError: - shutil.copyfile(src_path, dst_path) - - -@deprecated("openpype.pipeline.delivery.get_format_dict") -def get_format_dict(anatomy, location_path): - """Returns replaced root values from user provider value. - - Args: - anatomy (Anatomy) - location_path (str): user provided value - - Returns: - (dict): prepared for formatting of a template - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.delivery import get_format_dict - - return get_format_dict(anatomy, location_path) - - -@deprecated("openpype.pipeline.delivery.check_destination_path") -def check_destination_path(repre_id, - anatomy, anatomy_data, - datetime_data, template_name): - """ Try to create destination path based on 'template_name'. - - In the case that path cannot be filled, template contains unmatched - keys, provide error message to filter out repre later. - - Args: - anatomy (Anatomy) - anatomy_data (dict): context to fill anatomy - datetime_data (dict): values with actual date - template_name (str): to pick correct delivery template - - Returns: - (collections.defauldict): {"TYPE_OF_ERROR":"ERROR_DETAIL"} - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.delivery import check_destination_path - - return check_destination_path( - repre_id, - anatomy, - anatomy_data, - datetime_data, - template_name - ) - - -@deprecated("openpype.pipeline.delivery.deliver_single_file") -def process_single_file( - src_path, repre, anatomy, template_name, anatomy_data, format_dict, - report_items, log -): - """Copy single file to calculated path based on template - - Args: - src_path(str): path of source representation file - _repre (dict): full repre, used only in process_sequence, here only - as to share same signature - anatomy (Anatomy) - template_name (string): user selected delivery template name - anatomy_data (dict): data from repre to fill anatomy with - format_dict (dict): root dictionary with names and values - report_items (collections.defaultdict): to return error messages - log (Logger): for log printing - - Returns: - (collections.defaultdict , int) - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.delivery import deliver_single_file - - return deliver_single_file( - src_path, repre, anatomy, template_name, anatomy_data, format_dict, - report_items, log - ) - - -@deprecated("openpype.pipeline.delivery.deliver_sequence") -def process_sequence( - src_path, repre, anatomy, template_name, anatomy_data, format_dict, - report_items, log -): - """ For Pype2(mainly - works in 3 too) where representation might not - contain files. - - Uses listing physical files (not 'files' on repre as a)might not be - present, b)might not be reliable for representation and copying them. 
- - TODO Should be refactored when files are sufficient to drive all - representations. - - Args: - src_path(str): path of source representation file - repre (dict): full representation - anatomy (Anatomy) - template_name (string): user selected delivery template name - anatomy_data (dict): data from repre to fill anatomy with - format_dict (dict): root dictionary with names and values - report_items (collections.defaultdict): to return error messages - log (Logger): for log printing - - Returns: - (collections.defaultdict , int) - - Deprecated: - Function was moved to different location and will be removed - after 3.16.* release. - """ - - from openpype.pipeline.delivery import deliver_sequence - - return deliver_sequence( - src_path, repre, anatomy, template_name, anatomy_data, format_dict, - report_items, log - ) diff --git a/openpype/lib/execute.py b/openpype/lib/execute.py index 6c1425fc63..b3c8185d3e 100644 --- a/openpype/lib/execute.py +++ b/openpype/lib/execute.py @@ -296,18 +296,6 @@ def path_to_subprocess_arg(path): return subprocess.list2cmdline([path]) -def get_pype_execute_args(*args): - """Backwards compatible function for 'get_openpype_execute_args'.""" - import traceback - - log = Logger.get_logger("get_pype_execute_args") - stack = "\n".join(traceback.format_stack()) - log.warning(( - "Using deprecated function 'get_pype_execute_args'. Called from:\n{}" - ).format(stack)) - return get_openpype_execute_args(*args) - - def get_openpype_execute_args(*args): """Arguments to run pype command. diff --git a/openpype/lib/log.py b/openpype/lib/log.py index dc2e6615fe..72071063ec 100644 --- a/openpype/lib/log.py +++ b/openpype/lib/log.py @@ -492,21 +492,3 @@ class Logger: cls.initialize() return OpenPypeMongoConnection.get_mongo_client() - - -class PypeLogger(Logger): - """Duplicate of 'Logger'. - - Deprecated: - Class will be removed after release version 3.16.* - """ - - @classmethod - def get_logger(cls, *args, **kwargs): - logger = Logger.get_logger(*args, **kwargs) - # TODO uncomment when replaced most of places - logger.warning(( - "'openpype.lib.PypeLogger' is deprecated class." - " Please use 'openpype.lib.Logger' instead." - )) - return logger diff --git a/openpype/lib/mongo.py b/openpype/lib/mongo.py deleted file mode 100644 index bb2ee6016a..0000000000 --- a/openpype/lib/mongo.py +++ /dev/null @@ -1,61 +0,0 @@ -import warnings -import functools -from openpype.client.mongo import ( - MongoEnvNotSet, - OpenPypeMongoConnection, -) - - -class MongoDeprecatedWarning(DeprecationWarning): - pass - - -def mongo_deprecated(func): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - @functools.wraps(func) - def new_func(*args, **kwargs): - warnings.simplefilter("always", MongoDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'." - " Function was moved to 'openpype.client.mongo'." 
- ).format(func.__name__), - category=MongoDeprecatedWarning, - stacklevel=2 - ) - return func(*args, **kwargs) - return new_func - - -@mongo_deprecated -def get_default_components(): - from openpype.client.mongo import get_default_components - - return get_default_components() - - -@mongo_deprecated -def should_add_certificate_path_to_mongo_url(mongo_url): - from openpype.client.mongo import should_add_certificate_path_to_mongo_url - - return should_add_certificate_path_to_mongo_url(mongo_url) - - -@mongo_deprecated -def validate_mongo_connection(mongo_uri): - from openpype.client.mongo import validate_mongo_connection - - return validate_mongo_connection(mongo_uri) - - -__all__ = ( - "MongoEnvNotSet", - "OpenPypeMongoConnection", - "get_default_components", - "should_add_certificate_path_to_mongo_url", - "validate_mongo_connection", -) diff --git a/openpype/lib/path_tools.py b/openpype/lib/path_tools.py index 0b6d0a3391..fec6a0c47d 100644 --- a/openpype/lib/path_tools.py +++ b/openpype/lib/path_tools.py @@ -2,59 +2,12 @@ import os import re import logging import platform -import functools -import warnings import clique log = logging.getLogger(__name__) -class PathToolsDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." - ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", PathToolsDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=PathToolsDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - def format_file_size(file_size, suffix=None): """Returns formatted string with size in appropriate unit. 
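Editor's note: the removed 'openpype.lib.mongo' module only re-exported names from 'openpype.client.mongo', so imports move there directly. A brief sketch:

from openpype.client.mongo import (
    OpenPypeMongoConnection,
    get_default_components,
    validate_mongo_connection,
)

# Same helpers, new home; 'openpype.lib.mongo' no longer exists.
mongo_client = OpenPypeMongoConnection.get_mongo_client()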
@@ -269,99 +222,3 @@ def get_last_version_from_path(path_dir, filter): return filtred_files[-1] return None - - -@deprecated("openpype.pipeline.project_folders.concatenate_splitted_paths") -def concatenate_splitted_paths(split_paths, anatomy): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import concatenate_splitted_paths - - return concatenate_splitted_paths(split_paths, anatomy) - - -@deprecated -def get_format_data(anatomy): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.template_data import get_project_template_data - - data = get_project_template_data(project_name=anatomy.project_name) - data["root"] = anatomy.roots - return data - - -@deprecated("openpype.pipeline.project_folders.fill_paths") -def fill_paths(path_list, anatomy): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import fill_paths - - return fill_paths(path_list, anatomy) - - -@deprecated("openpype.pipeline.project_folders.create_project_folders") -def create_project_folders(basic_paths, project_name): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import create_project_folders - - return create_project_folders(project_name, basic_paths) - - -@deprecated("openpype.pipeline.project_folders.get_project_basic_paths") -def get_project_basic_paths(project_name): - """ - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import get_project_basic_paths - - return get_project_basic_paths(project_name) - - -@deprecated("openpype.pipeline.workfile.create_workdir_extra_folders") -def create_workdir_extra_folders( - workdir, host_name, task_type, task_name, project_name, - project_settings=None -): - """Create extra folders in work directory based on context. - - Args: - workdir (str): Path to workdir where workfiles is stored. - host_name (str): Name of host implementation. - task_type (str): Type of task for which extra folders should be - created. - task_name (str): Name of task for which extra folders should be - created. - project_name (str): Name of project on which task is. - project_settings (dict): Prepared project settings. Are loaded if not - passed. - - Deprecated: - Function will be removed after release version 3.16.* - """ - - from openpype.pipeline.project_folders import create_workdir_extra_folders - - return create_workdir_extra_folders( - workdir, - host_name, - task_type, - task_name, - project_name, - project_settings - ) diff --git a/openpype/lib/plugin_tools.py b/openpype/lib/plugin_tools.py index 10fd3940b8..d204fc2c8f 100644 --- a/openpype/lib/plugin_tools.py +++ b/openpype/lib/plugin_tools.py @@ -4,157 +4,9 @@ import os import logging import re -import warnings -import functools - -from openpype.client import get_asset_by_id - log = logging.getLogger(__name__) -class PluginToolsDeprecatedWarning(DeprecationWarning): - pass - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." 
- ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - warnings.simplefilter("always", PluginToolsDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(decorated_func.__name__, warning_message), - category=PluginToolsDeprecatedWarning, - stacklevel=4 - ) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -@deprecated("openpype.pipeline.create.TaskNotSetError") -def TaskNotSetError(*args, **kwargs): - from openpype.pipeline.create import TaskNotSetError - - return TaskNotSetError(*args, **kwargs) - - -@deprecated("openpype.pipeline.create.get_subset_name") -def get_subset_name_with_asset_doc( - family, - variant, - task_name, - asset_doc, - project_name=None, - host_name=None, - default_template=None, - dynamic_data=None -): - """Calculate subset name based on passed context and OpenPype settings. - - Subst name templates are defined in `project_settings/global/tools/creator - /subset_name_profiles` where are profiles with host name, family, task name - and task type filters. If context does not match any profile then - `DEFAULT_SUBSET_TEMPLATE` is used as default template. - - That's main reason why so many arguments are required to calculate subset - name. - - Args: - family (str): Instance family. - variant (str): In most of cases it is user input during creation. - task_name (str): Task name on which context is instance created. - asset_doc (dict): Queried asset document with it's tasks in data. - Used to get task type. - project_name (str): Name of project on which is instance created. - Important for project settings that are loaded. - host_name (str): One of filtering criteria for template profile - filters. - default_template (str): Default template if any profile does not match - passed context. Constant 'DEFAULT_SUBSET_TEMPLATE' is used if - is not passed. - dynamic_data (dict): Dynamic data specific for a creator which creates - instance. - """ - - from openpype.pipeline.create import get_subset_name - - return get_subset_name( - family, - variant, - task_name, - asset_doc, - project_name, - host_name, - default_template, - dynamic_data - ) - - -@deprecated -def get_subset_name( - family, - variant, - task_name, - asset_id, - project_name=None, - host_name=None, - default_template=None, - dynamic_data=None, - dbcon=None -): - """Calculate subset name using OpenPype settings. - - This variant of function expects asset id as argument. - - This is legacy function should be replaced with - `get_subset_name_with_asset_doc` where asset document is expected. - """ - - from openpype.pipeline.create import get_subset_name - - if project_name is None: - project_name = dbcon.project_name - - asset_doc = get_asset_by_id(project_name, asset_id, fields=["data.tasks"]) - - return get_subset_name( - family, - variant, - task_name, - asset_doc, - project_name, - host_name, - default_template, - dynamic_data - ) - - def prepare_template_data(fill_pairs): """ Prepares formatted data for filling template. 
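Editor's note: subset name calculation keeps its signature but now lives in 'openpype.pipeline.create'. A hedged sketch with placeholder project, asset and task values (arguments are passed positionally, matching the forwarding order of the removed wrapper):

from openpype.client import get_asset_by_name
from openpype.pipeline.create import get_subset_name

project_name = "my_project"   # placeholder
asset_doc = get_asset_by_name(project_name, "my_asset", fields=["data.tasks"])

subset_name = get_subset_name(
    "model",      # family
    "Main",       # variant
    "modeling",   # task name
    asset_doc,
    project_name,
    "maya",       # host name
)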
diff --git a/openpype/lib/transcoding.py b/openpype/lib/transcoding.py index de6495900e..2bae28786e 100644 --- a/openpype/lib/transcoding.py +++ b/openpype/lib/transcoding.py @@ -11,8 +11,8 @@ import xml.etree.ElementTree from .execute import run_subprocess from .vendor_bin_utils import ( - get_ffmpeg_tool_path, - get_oiio_tools_path, + get_ffmpeg_tool_args, + get_oiio_tool_args, is_oiio_supported, ) @@ -83,11 +83,11 @@ def get_oiio_info_for_input(filepath, logger=None, subimages=False): Stdout should contain xml format string. """ - args = [ - get_oiio_tools_path(), + args = get_oiio_tool_args( + "oiiotool", "--info", "-v" - ] + ) if subimages: args.append("-a") @@ -486,12 +486,11 @@ def convert_for_ffmpeg( compression = "none" # Prepare subprocess arguments - oiio_cmd = [ - get_oiio_tools_path(), - + oiio_cmd = get_oiio_tool_args( + "oiiotool", # Don't add any additional attributes "--nosoftwareattrib", - ] + ) # Add input compression if available if compression: oiio_cmd.extend(["--compression", compression]) @@ -656,12 +655,11 @@ def convert_input_paths_for_ffmpeg( for input_path in input_paths: # Prepare subprocess arguments - oiio_cmd = [ - get_oiio_tools_path(), - + oiio_cmd = get_oiio_tool_args( + "oiiotool", # Don't add any additional attributes "--nosoftwareattrib", - ] + ) # Add input compression if available if compression: oiio_cmd.extend(["--compression", compression]) @@ -729,8 +727,8 @@ def get_ffprobe_data(path_to_file, logger=None): logger.info( "Getting information about input \"{}\".".format(path_to_file) ) - args = [ - get_ffmpeg_tool_path("ffprobe"), + ffprobe_args = get_ffmpeg_tool_args("ffprobe") + args = ffprobe_args + [ "-hide_banner", "-loglevel", "fatal", "-show_error", @@ -1084,13 +1082,13 @@ def convert_colorspace( if logger is None: logger = logging.getLogger(__name__) - oiio_cmd = [ - get_oiio_tools_path(), + oiio_cmd = get_oiio_tool_args( + "oiiotool", input_path, # Don't add any additional attributes "--nosoftwareattrib", "--colorconfig", config_path - ] + ) if all([target_colorspace, view, display]): raise ValueError("Colorspace and both screen and display" diff --git a/openpype/lib/vendor_bin_utils.py b/openpype/lib/vendor_bin_utils.py index f27c78d486..dc8bb7435e 100644 --- a/openpype/lib/vendor_bin_utils.py +++ b/openpype/lib/vendor_bin_utils.py @@ -3,9 +3,15 @@ import logging import platform import subprocess +from openpype import AYON_SERVER_ENABLED + log = logging.getLogger("Vendor utils") +class ToolNotFoundError(Exception): + """Raised when tool arguments are not found.""" + + class CachedToolPaths: """Cache already used and discovered tools and their executables. @@ -252,7 +258,7 @@ def _check_args_returncode(args): return proc.returncode == 0 -def _oiio_executable_validation(filepath): +def _oiio_executable_validation(args): """Validate oiio tool executable if can be executed. Validation has 2 steps. First is using 'find_executable' to fill possible @@ -270,32 +276,63 @@ def _oiio_executable_validation(filepath): should be used. Args: - filepath (str): Path to executable. + args (Union[str, list[str]]): Arguments to launch tool or + path to tool executable. Returns: bool: Filepath is valid executable. 
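Editor's note: the validation helpers above boil down to one pattern: a tool is considered usable when launching it with '--help' exits with return code 0. A standalone sketch of that idea (not the repository implementation):

import subprocess


def tool_responds(args):
    """Return True when the tool launched by 'args' exits cleanly on --help."""
    try:
        proc = subprocess.Popen(
            list(args) + ["--help"],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        proc.communicate()
    except OSError:
        # Executable missing or not runnable on this machine.
        return False
    return proc.returncode == 0


# Accepts both a bare executable and a multi-argument launcher.
print(tool_responds(["oiiotool"]))
print(tool_responds(["python", "-m", "pip"]))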
""" - filepath = find_executable(filepath) - if not filepath: + if not args: return False - return _check_args_returncode([filepath, "--help"]) + if not isinstance(args, list): + filepath = find_executable(args) + if not filepath: + return False + args = [filepath] + return _check_args_returncode(args + ["--help"]) + + +def _get_ayon_oiio_tool_args(tool_name): + try: + # Use 'ayon-third-party' addon to get oiio arguments + from ayon_third_party import get_oiio_arguments + except Exception: + print("!!! Failed to import 'ayon_third_party' addon.") + return None + + try: + return get_oiio_arguments(tool_name) + except Exception as exc: + print("!!! Failed to get OpenImageIO args. Reason: {}".format(exc)) + return None def get_oiio_tools_path(tool="oiiotool"): - """Path to vendorized OpenImageIO tool executables. + """Path to OpenImageIO tool executables. - On Window it adds .exe extension if missing from tool argument. + On Windows it adds .exe extension if missing from tool argument. Args: - tool (string): Tool name (oiiotool, maketx, ...). + tool (string): Tool name 'oiiotool', 'maketx', etc. Default is "oiiotool". """ if CachedToolPaths.is_tool_cached(tool): return CachedToolPaths.get_executable_path(tool) + if AYON_SERVER_ENABLED: + args = _get_ayon_oiio_tool_args(tool) + if args: + if len(args) > 1: + raise ValueError( + "AYON oiio arguments consist of multiple arguments." + ) + tool_executable_path = args[0] + CachedToolPaths.cache_executable_path(tool, tool_executable_path) + return tool_executable_path + custom_paths_str = os.environ.get("OPENPYPE_OIIO_PATHS") or "" tool_executable_path = find_tool_in_custom_paths( custom_paths_str.split(os.pathsep), @@ -321,7 +358,33 @@ def get_oiio_tools_path(tool="oiiotool"): return tool_executable_path -def _ffmpeg_executable_validation(filepath): +def get_oiio_tool_args(tool_name, *extra_args): + """Arguments to launch OpenImageIO tool. + + Args: + tool_name (str): Tool name 'oiiotool', 'maketx', etc. + *extra_args (str): Extra arguments to add to after tool arguments. + + Returns: + list[str]: List of arguments. + """ + + extra_args = list(extra_args) + + if AYON_SERVER_ENABLED: + args = _get_ayon_oiio_tool_args(tool_name) + if args: + return args + extra_args + + path = get_oiio_tools_path(tool_name) + if path: + return [path] + extra_args + raise ToolNotFoundError( + "OIIO '{}' tool not found.".format(tool_name) + ) + + +def _ffmpeg_executable_validation(args): """Validate ffmpeg tool executable if can be executed. Validation has 2 steps. First is using 'find_executable' to fill possible @@ -338,24 +401,45 @@ def _ffmpeg_executable_validation(filepath): It does not validate if the executable is really a ffmpeg tool. Args: - filepath (str): Path to executable. + args (Union[str, list[str]]): Arguments to launch tool or + path to tool executable. Returns: bool: Filepath is valid executable. """ - filepath = find_executable(filepath) - if not filepath: + if not args: return False - return _check_args_returncode([filepath, "-version"]) + if not isinstance(args, list): + filepath = find_executable(args) + if not filepath: + return False + args = [filepath] + return _check_args_returncode(args + ["--help"]) + + +def _get_ayon_ffmpeg_tool_args(tool_name): + try: + # Use 'ayon-third-party' addon to get ffmpeg arguments + from ayon_third_party import get_ffmpeg_arguments + + except Exception: + print("!!! Failed to import 'ayon_third_party' addon.") + return None + + try: + return get_ffmpeg_arguments(tool_name) + except Exception as exc: + print("!!! 
Failed to get FFmpeg args. Reason: {}".format(exc)) + return None def get_ffmpeg_tool_path(tool="ffmpeg"): """Path to vendorized FFmpeg executable. Args: - tool (string): Tool name (ffmpeg, ffprobe, ...). + tool (str): Tool name 'ffmpeg', 'ffprobe', etc. Default is "ffmpeg". Returns: @@ -365,6 +449,17 @@ def get_ffmpeg_tool_path(tool="ffmpeg"): if CachedToolPaths.is_tool_cached(tool): return CachedToolPaths.get_executable_path(tool) + if AYON_SERVER_ENABLED: + args = _get_ayon_ffmpeg_tool_args(tool) + if args is not None: + if len(args) > 1: + raise ValueError( + "AYON ffmpeg arguments consist of multiple arguments." + ) + tool_executable_path = args[0] + CachedToolPaths.cache_executable_path(tool, tool_executable_path) + return tool_executable_path + custom_paths_str = os.environ.get("OPENPYPE_FFMPEG_PATHS") or "" tool_executable_path = find_tool_in_custom_paths( custom_paths_str.split(os.pathsep), @@ -390,19 +485,44 @@ def get_ffmpeg_tool_path(tool="ffmpeg"): return tool_executable_path +def get_ffmpeg_tool_args(tool_name, *extra_args): + """Arguments to launch FFmpeg tool. + + Args: + tool_name (str): Tool name 'ffmpeg', 'ffprobe', exc. + *extra_args (str): Extra arguments to add to after tool arguments. + + Returns: + list[str]: List of arguments. + """ + + extra_args = list(extra_args) + + if AYON_SERVER_ENABLED: + args = _get_ayon_ffmpeg_tool_args(tool_name) + if args: + return args + extra_args + + executable_path = get_ffmpeg_tool_path(tool_name) + if executable_path: + return [executable_path] + extra_args + raise ToolNotFoundError( + "FFmpeg '{}' tool not found.".format(tool_name) + ) + + def is_oiio_supported(): """Checks if oiiotool is configured for this platform. Returns: bool: OIIO tool executable is available. """ - loaded_path = oiio_path = get_oiio_tools_path() - if oiio_path: - oiio_path = find_executable(oiio_path) - if not oiio_path: - log.debug("OIIOTool is not configured or not present at {}".format( - loaded_path - )) + try: + args = get_oiio_tool_args("oiiotool") + except ToolNotFoundError: + args = None + if not args: + log.debug("OIIOTool is not configured or not present.") return False - return True + return _oiio_executable_validation(args) diff --git a/openpype/modules/ftrack/event_handlers_server/action_sync_to_avalon.py b/openpype/modules/ftrack/event_handlers_server/action_sync_to_avalon.py index df9147bdf7..442206feba 100644 --- a/openpype/modules/ftrack/event_handlers_server/action_sync_to_avalon.py +++ b/openpype/modules/ftrack/event_handlers_server/action_sync_to_avalon.py @@ -40,6 +40,7 @@ class SyncToAvalonServer(ServerAction): #: Action description. 
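Editor's note: call sites migrate from a single executable path to an argument list, as the transcoding.py hunks above show. A hedged sketch of building an ffprobe command with the new helper (the probe flags mirror the ones used in 'get_ffprobe_data'):

from openpype.lib.vendor_bin_utils import (
    ToolNotFoundError,
    get_ffmpeg_tool_args,
)

try:
    # Returns the full launch arguments, which may be more than one item
    # when the AYON third-party addon provides the binaries.
    args = get_ffmpeg_tool_args(
        "ffprobe",
        "-hide_banner",
        "-loglevel", "fatal",
        "-show_error",
    )
except ToolNotFoundError:
    args = None

if args is None:
    print("ffprobe is not available")
else:
    print(args)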
description = "Send data from Ftrack to Avalon" role_list = {"Pypeclub", "Administrator", "Project Manager"} + settings_key = "sync_to_avalon" def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) @@ -48,11 +49,16 @@ class SyncToAvalonServer(ServerAction): def discover(self, session, entities, event): """ Validation """ # Check if selection is valid + is_valid = False for ent in event["data"]["selection"]: # Ignore entities that are not tasks or projects if ent["entityType"].lower() in ["show", "task"]: - return True - return False + is_valid = True + break + + if is_valid: + is_valid = self.valid_roles(session, entities, event) + return is_valid def launch(self, session, in_entities, event): self.log.debug("{}: Creating job".format(self.label)) diff --git a/openpype/modules/shotgrid/plugins/publish/collect_shotgrid_entities.py b/openpype/modules/shotgrid/plugins/publish/collect_shotgrid_entities.py index 43f5d1ef0e..db2e4eadc5 100644 --- a/openpype/modules/shotgrid/plugins/publish/collect_shotgrid_entities.py +++ b/openpype/modules/shotgrid/plugins/publish/collect_shotgrid_entities.py @@ -1,7 +1,5 @@ -import os - import pyblish.api -from openpype.lib.mongo import OpenPypeMongoConnection +from openpype.client.mongo import OpenPypeMongoConnection class CollectShotgridEntities(pyblish.api.ContextPlugin): diff --git a/openpype/pipeline/__init__.py b/openpype/pipeline/__init__.py index 5c15a5fa82..59f1655f91 100644 --- a/openpype/pipeline/__init__.py +++ b/openpype/pipeline/__init__.py @@ -13,6 +13,7 @@ from .create import ( BaseCreator, Creator, AutoCreator, + HiddenCreator, CreatedInstance, CreatorError, @@ -114,6 +115,7 @@ __all__ = ( "BaseCreator", "Creator", "AutoCreator", + "HiddenCreator", "CreatedInstance", "CreatorError", diff --git a/openpype/pipeline/colorspace.py b/openpype/pipeline/colorspace.py index 3f2d4891c1..caa0f6dcd7 100644 --- a/openpype/pipeline/colorspace.py +++ b/openpype/pipeline/colorspace.py @@ -237,10 +237,17 @@ def get_data_subprocess(config_path, data_type): def compatibility_check(): - """Making sure PyOpenColorIO is importable""" + """checking if user has a compatible PyOpenColorIO >= 2. + + It's achieved by checking if PyOpenColorIO is importable + and calling any version 2 specific function + """ try: - import PyOpenColorIO # noqa: F401 - except ImportError: + import PyOpenColorIO + + # ocio versions lower than 2 will raise AttributeError + PyOpenColorIO.GetVersion() + except (ImportError, AttributeError): return False return True diff --git a/openpype/pipeline/create/context.py b/openpype/pipeline/create/context.py index 6bdf7bb719..3076efcde7 100644 --- a/openpype/pipeline/create/context.py +++ b/openpype/pipeline/create/context.py @@ -1165,8 +1165,8 @@ class CreatedInstance: Args: instance_data (Dict[str, Any]): Data in a structure ready for 'CreatedInstance' object. - creator (Creator): Creator plugin which is creating the instance - of for which the instance belong. + creator (BaseCreator): Creator plugin which is creating the + instance of for which the instance belong. """ instance_data = copy.deepcopy(instance_data) @@ -1979,7 +1979,11 @@ class CreateContext: if pre_create_data is None: pre_create_data = {} - precreate_attr_defs = creator.get_pre_create_attr_defs() or [] + precreate_attr_defs = [] + # Hidden creators do not have or need the pre-create attributes. 
+ if isinstance(creator, Creator): + precreate_attr_defs = creator.get_pre_create_attr_defs() + # Create default values of precreate data _pre_create_data = get_default_values(precreate_attr_defs) # Update passed precreate data to default values diff --git a/openpype/pipeline/schema.py b/openpype/pipeline/schema/__init__.py similarity index 92% rename from openpype/pipeline/schema.py rename to openpype/pipeline/schema/__init__.py index 7e96bfe1b1..d7b33f2621 100644 --- a/openpype/pipeline/schema.py +++ b/openpype/pipeline/schema/__init__.py @@ -24,6 +24,7 @@ log_ = logging.getLogger(__name__) ValidationError = jsonschema.ValidationError SchemaError = jsonschema.SchemaError +CURRENT_DIR = os.path.dirname(os.path.abspath(__file__)) _CACHED = False @@ -121,17 +122,14 @@ def _precache(): """Store available schemas in-memory for reduced disk access""" global _CACHED - repos_root = os.environ["OPENPYPE_REPOS_ROOT"] - schema_dir = os.path.join(repos_root, "schema") - - for schema in os.listdir(schema_dir): + for schema in os.listdir(CURRENT_DIR): if schema.startswith(("_", ".")): continue if not schema.endswith(".json"): continue - if not os.path.isfile(os.path.join(schema_dir, schema)): + if not os.path.isfile(os.path.join(CURRENT_DIR, schema)): continue - with open(os.path.join(schema_dir, schema)) as f: + with open(os.path.join(CURRENT_DIR, schema)) as f: log_.debug("Installing schema '%s'.." % schema) _cache[schema] = json.load(f) _CACHED = True diff --git a/openpype/pipeline/schema/application-1.0.json b/openpype/pipeline/schema/application-1.0.json new file mode 100644 index 0000000000..953abee569 --- /dev/null +++ b/openpype/pipeline/schema/application-1.0.json @@ -0,0 +1,68 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:application-1.0", + "description": "An application definition.", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "label", + "application_dir", + "executable" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string" + }, + "label": { + "description": "Nice name of application.", + "type": "string" + }, + "application_dir": { + "description": "Name of directory used for application resources.", + "type": "string" + }, + "executable": { + "description": "Name of callable executable, this is called to launch the application", + "type": "string" + }, + "description": { + "description": "Description of application.", + "type": "string" + }, + "environment": { + "description": "Key/value pairs for environment variables related to this application. 
Supports lists for paths, such as PYTHONPATH.", + "type": "object", + "items": { + "oneOf": [ + {"type": "string"}, + {"type": "array", "items": {"type": "string"}} + ] + } + }, + "default_dirs": { + "type": "array", + "items": { + "type": "string" + } + }, + "copy": { + "type": "object", + "patternProperties": { + "^.*$": { + "anyOf": [ + {"type": "string"}, + {"type": "null"} + ] + } + }, + "additionalProperties": false + } + } +} diff --git a/openpype/pipeline/schema/asset-1.0.json b/openpype/pipeline/schema/asset-1.0.json new file mode 100644 index 0000000000..ab104c002a --- /dev/null +++ b/openpype/pipeline/schema/asset-1.0.json @@ -0,0 +1,35 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:asset-1.0", + "description": "A unit of data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "name", + "subsets" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string" + }, + "name": { + "description": "Name of directory", + "type": "string" + }, + "subsets": { + "type": "array", + "items": { + "$ref": "subset.json" + } + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/asset-2.0.json b/openpype/pipeline/schema/asset-2.0.json new file mode 100644 index 0000000000..b894d79792 --- /dev/null +++ b/openpype/pipeline/schema/asset-2.0.json @@ -0,0 +1,55 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:asset-2.0", + "description": "A unit of data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "name", + "silo", + "data" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:asset-2.0"], + "example": "openpype:asset-2.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["asset"], + "example": "asset" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of asset", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "Bruce" + }, + "silo": { + "description": "Group or container of asset", + "type": "string", + "example": "assets" + }, + "data": { + "description": "Document metadata", + "type": "object", + "example": {"key": "value"} + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/asset-3.0.json b/openpype/pipeline/schema/asset-3.0.json new file mode 100644 index 0000000000..948704d2a1 --- /dev/null +++ b/openpype/pipeline/schema/asset-3.0.json @@ -0,0 +1,55 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:asset-3.0", + "description": "A unit of data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "name", + "data" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:asset-3.0"], + "example": "openpype:asset-3.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["asset"], + "example": "asset" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of asset", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "Bruce" + }, + "silo": { + "description": "Group or container of asset", + "type": "string", + "pattern": 
"^[a-zA-Z0-9_.]*$", + "example": "assets" + }, + "data": { + "description": "Document metadata", + "type": "object", + "example": {"key": "value"} + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/config-1.0.json b/openpype/pipeline/schema/config-1.0.json new file mode 100644 index 0000000000..49398a57cd --- /dev/null +++ b/openpype/pipeline/schema/config-1.0.json @@ -0,0 +1,85 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:config-1.0", + "description": "A project configuration.", + + "type": "object", + + "additionalProperties": false, + "required": [ + "tasks", + "apps" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string" + }, + "template": { + "type": "object", + "additionalProperties": false, + "patternProperties": { + "^.*$": { + "type": "string" + } + } + }, + "tasks": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "group": {"type": "string"}, + "label": {"type": "string"} + }, + "required": ["name"] + } + }, + "apps": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "group": {"type": "string"}, + "label": {"type": "string"} + }, + "required": ["name"] + } + }, + "families": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "label": {"type": "string"}, + "hideFilter": {"type": "boolean"} + }, + "required": ["name"] + } + }, + "groups": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "color": {"type": "string"}, + "order": {"type": ["integer", "number"]} + }, + "required": ["name"] + } + }, + "copy": { + "type": "object" + } + } +} diff --git a/openpype/pipeline/schema/config-1.1.json b/openpype/pipeline/schema/config-1.1.json new file mode 100644 index 0000000000..6e15514aaf --- /dev/null +++ b/openpype/pipeline/schema/config-1.1.json @@ -0,0 +1,87 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:config-1.1", + "description": "A project configuration.", + + "type": "object", + + "additionalProperties": false, + "required": [ + "tasks", + "apps" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string" + }, + "template": { + "type": "object", + "additionalProperties": false, + "patternProperties": { + "^.*$": { + "type": "string" + } + } + }, + "tasks": { + "type": "object", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "group": {"type": "string"}, + "label": {"type": "string"} + }, + "required": [ + "short_name" + ] + } + }, + "apps": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "group": {"type": "string"}, + "label": {"type": "string"} + }, + "required": ["name"] + } + }, + "families": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "label": {"type": "string"}, + "hideFilter": {"type": "boolean"} + }, + "required": ["name"] + } + }, + "groups": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "color": {"type": "string"}, + 
"order": {"type": ["integer", "number"]} + }, + "required": ["name"] + } + }, + "copy": { + "type": "object" + } + } +} diff --git a/openpype/pipeline/schema/config-2.0.json b/openpype/pipeline/schema/config-2.0.json new file mode 100644 index 0000000000..54b226711a --- /dev/null +++ b/openpype/pipeline/schema/config-2.0.json @@ -0,0 +1,87 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:config-2.0", + "description": "A project configuration.", + + "type": "object", + + "additionalProperties": false, + "required": [ + "tasks", + "apps" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string" + }, + "templates": { + "type": "object" + }, + "roots": { + "type": "object" + }, + "imageio": { + "type": "object" + }, + "tasks": { + "type": "object", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "group": {"type": "string"}, + "label": {"type": "string"} + }, + "required": [ + "short_name" + ] + } + }, + "apps": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "group": {"type": "string"}, + "label": {"type": "string"} + }, + "required": ["name"] + } + }, + "families": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "label": {"type": "string"}, + "hideFilter": {"type": "boolean"} + }, + "required": ["name"] + } + }, + "groups": { + "type": "array", + "items": { + "type": "object", + "properties": { + "name": {"type": "string"}, + "icon": {"type": "string"}, + "color": {"type": "string"}, + "order": {"type": ["integer", "number"]} + }, + "required": ["name"] + } + }, + "copy": { + "type": "object" + } + } +} diff --git a/openpype/pipeline/schema/container-1.0.json b/openpype/pipeline/schema/container-1.0.json new file mode 100644 index 0000000000..012e8499e6 --- /dev/null +++ b/openpype/pipeline/schema/container-1.0.json @@ -0,0 +1,100 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:container-1.0", + "description": "A loaded asset", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "id", + "objectName", + "name", + "author", + "loader", + "families", + "time", + "subset", + "asset", + "representation", + "version", + "silo", + "path", + "source" + ], + "properties": { + "id": { + "description": "Identifier for finding object in host", + "type": "string", + "enum": ["pyblish.mindbender.container"], + "example": "pyblish.mindbender.container" + }, + "objectName": { + "description": "Name of internal object, such as the objectSet in Maya.", + "type": "string", + "example": "Bruce_:rigDefault_CON" + }, + "name": { + "description": "Full name of application object", + "type": "string", + "example": "modelDefault" + }, + "author": { + "description": "Name of the author of the published version", + "type": "string", + "example": "Marcus Ottosson" + }, + "loader": { + "description": "Name of loader plug-in used to produce this container", + "type": "string", + "example": "ModelLoader" + }, + "families": { + "description": "Families associated with the this subset", + "type": "string", + "example": "mindbender.model" + }, + "time": { + "description": "File-system safe, formatted time", + "type": "string", + "example": "20170329T131545Z" + }, + "subset": { + "description": "Name of source subset", + "type": "string", + 
"example": "modelDefault" + }, + "asset": { + "description": "Name of source asset", + "type": "string" , + "example": "Bruce" + }, + "representation": { + "description": "Name of source representation", + "type": "string" , + "example": ".ma" + }, + "version": { + "description": "Version number", + "type": "number", + "example": 12 + }, + "silo": { + "description": "Silo of parent asset", + "type": "string", + "example": "assets" + }, + "path": { + "description": "Absolute path on disk", + "type": "string", + "example": "{root}/assets/Bruce/publish/rigDefault/v002" + }, + "source": { + "description": "Absolute path to file from which this version was published", + "type": "string", + "example": "{root}/assets/Bruce/work/rigging/maya/scenes/rig_v001.ma" + } + } +} diff --git a/openpype/pipeline/schema/container-2.0.json b/openpype/pipeline/schema/container-2.0.json new file mode 100644 index 0000000000..1673ee5d1d --- /dev/null +++ b/openpype/pipeline/schema/container-2.0.json @@ -0,0 +1,59 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:container-2.0", + "description": "A loaded asset", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "id", + "objectName", + "name", + "namespace", + "loader", + "representation" + ], + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:container-2.0"], + "example": "openpype:container-2.0" + }, + "id": { + "description": "Identifier for finding object in host", + "type": "string", + "enum": ["pyblish.avalon.container"], + "example": "pyblish.avalon.container" + }, + "objectName": { + "description": "Name of internal object, such as the objectSet in Maya.", + "type": "string", + "example": "Bruce_:rigDefault_CON" + }, + "loader": { + "description": "Name of loader plug-in used to produce this container", + "type": "string", + "example": "ModelLoader" + }, + "name": { + "description": "Internal object name of container in application", + "type": "string", + "example": "modelDefault_01" + }, + "namespace": { + "description": "Internal namespace of container in application", + "type": "string", + "example": "Bruce_" + }, + "representation": { + "description": "Unique id of representation in database", + "type": "string", + "example": "59523f355f8c1b5f6c5e8348" + } + } +} diff --git a/openpype/pipeline/schema/hero_version-1.0.json b/openpype/pipeline/schema/hero_version-1.0.json new file mode 100644 index 0000000000..b720dc2887 --- /dev/null +++ b/openpype/pipeline/schema/hero_version-1.0.json @@ -0,0 +1,44 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:hero_version-1.0", + "description": "Hero version of asset", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "version_id", + "schema", + "type", + "parent" + ], + + "properties": { + "_id": { + "description": "Document's id (database will create it's if not entered)", + "example": "ObjectId(592c33475f8c1b064c4d1696)" + }, + "version_id": { + "description": "The version ID from which it was created", + "example": "ObjectId(592c33475f8c1b064c4d1695)" + }, + "schema": { + "description": "The schema associated with this document", + "type": "string", + "enum": ["openpype:hero_version-1.0"], + "example": "openpype:hero_version-1.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["hero_version"], + "example": "hero_version" + }, + "parent": { + "description": "Unique 
identifier to parent document", + "example": "ObjectId(592c33475f8c1b064c4d1697)" + } + } +} diff --git a/openpype/pipeline/schema/inventory-1.0.json b/openpype/pipeline/schema/inventory-1.0.json new file mode 100644 index 0000000000..2fe78794ab --- /dev/null +++ b/openpype/pipeline/schema/inventory-1.0.json @@ -0,0 +1,10 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:config-1.0", + "description": "A project configuration.", + + "type": "object", + + "additionalProperties": true +} diff --git a/openpype/pipeline/schema/inventory-1.1.json b/openpype/pipeline/schema/inventory-1.1.json new file mode 100644 index 0000000000..b61a76b32a --- /dev/null +++ b/openpype/pipeline/schema/inventory-1.1.json @@ -0,0 +1,10 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:config-1.1", + "description": "A project configuration.", + + "type": "object", + + "additionalProperties": true +} diff --git a/openpype/pipeline/schema/project-2.0.json b/openpype/pipeline/schema/project-2.0.json new file mode 100644 index 0000000000..0ed5a55599 --- /dev/null +++ b/openpype/pipeline/schema/project-2.0.json @@ -0,0 +1,86 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:project-2.0", + "description": "A unit of data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "name", + "data", + "config" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:project-2.0"], + "example": "openpype:project-2.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["project"], + "example": "project" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of directory", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "hulk" + }, + "data": { + "description": "Document metadata", + "type": "object", + "example": { + "fps": 24, + "width": 1920, + "height": 1080 + } + }, + "config": { + "type": "object", + "description": "Document metadata", + "example": { + "schema": "openpype:config-1.0", + "apps": [ + { + "name": "maya2016", + "label": "Autodesk Maya 2016" + }, + { + "name": "nuke10", + "label": "The Foundry Nuke 10.0" + } + ], + "tasks": [ + {"name": "model"}, + {"name": "render"}, + {"name": "animate"}, + {"name": "rig"}, + {"name": "lookdev"}, + {"name": "layout"} + ], + "template": { + "work": + "{root}/{project}/{silo}/{asset}/work/{task}/{app}", + "publish": + "{root}/{project}/{silo}/{asset}/publish/{subset}/v{version:0>3}/{subset}.{representation}" + } + }, + "$ref": "config-1.0.json" + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/project-2.1.json b/openpype/pipeline/schema/project-2.1.json new file mode 100644 index 0000000000..9413c9f691 --- /dev/null +++ b/openpype/pipeline/schema/project-2.1.json @@ -0,0 +1,86 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:project-2.1", + "description": "A unit of data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "name", + "data", + "config" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:project-2.1"], + "example": "openpype:project-2.1" + }, + "type": { + "description": "The type of document", + "type": "string", 
+ "enum": ["project"], + "example": "project" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of directory", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "hulk" + }, + "data": { + "description": "Document metadata", + "type": "object", + "example": { + "fps": 24, + "width": 1920, + "height": 1080 + } + }, + "config": { + "type": "object", + "description": "Document metadata", + "example": { + "schema": "openpype:config-1.1", + "apps": [ + { + "name": "maya2016", + "label": "Autodesk Maya 2016" + }, + { + "name": "nuke10", + "label": "The Foundry Nuke 10.0" + } + ], + "tasks": { + "Model": {"short_name": "mdl"}, + "Render": {"short_name": "rnd"}, + "Animate": {"short_name": "anim"}, + "Rig": {"short_name": "rig"}, + "Lookdev": {"short_name": "look"}, + "Layout": {"short_name": "lay"} + }, + "template": { + "work": + "{root}/{project}/{silo}/{asset}/work/{task}/{app}", + "publish": + "{root}/{project}/{silo}/{asset}/publish/{subset}/v{version:0>3}/{subset}.{representation}" + } + }, + "$ref": "config-1.1.json" + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/project-3.0.json b/openpype/pipeline/schema/project-3.0.json new file mode 100644 index 0000000000..be23e10c93 --- /dev/null +++ b/openpype/pipeline/schema/project-3.0.json @@ -0,0 +1,59 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:project-3.0", + "description": "A unit of data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "name", + "data", + "config" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:project-3.0"], + "example": "openpype:project-3.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["project"], + "example": "project" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of directory", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "hulk" + }, + "data": { + "description": "Document metadata", + "type": "object", + "example": { + "fps": 24, + "width": 1920, + "height": 1080 + } + }, + "config": { + "type": "object", + "description": "Document metadata", + "$ref": "config-2.0.json" + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/representation-1.0.json b/openpype/pipeline/schema/representation-1.0.json new file mode 100644 index 0000000000..347c585f52 --- /dev/null +++ b/openpype/pipeline/schema/representation-1.0.json @@ -0,0 +1,28 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:representation-1.0", + "description": "The inverse of an instance", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "format", + "path" + ], + + "properties": { + "schema": {"type": "string"}, + "format": { + "description": "File extension, including '.'", + "type": "string" + }, + "path": { + "description": "Unformatted path to version.", + "type": "string" + } + } +} diff --git a/openpype/pipeline/schema/representation-2.0.json b/openpype/pipeline/schema/representation-2.0.json new file mode 100644 index 0000000000..f47c16a10a --- /dev/null +++ b/openpype/pipeline/schema/representation-2.0.json @@ -0,0 +1,78 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + 
+ "title": "openpype:representation-2.0", + "description": "The inverse of an instance", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "parent", + "name", + "data" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:representation-2.0"], + "example": "openpype:representation-2.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["representation"], + "example": "representation" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of representation", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "abc" + }, + "data": { + "description": "Document metadata", + "type": "object", + "example": { + "label": "Alembic" + } + }, + "dependencies": { + "description": "Other representation that this representation depends on", + "type": "array", + "items": {"type": "string"}, + "example": [ + "592d547a5f8c1b388093c145" + ] + }, + "context": { + "description": "Summary of the context to which this representation belong.", + "type": "object", + "properties": { + "project": {"type": "object"}, + "asset": {"type": "string"}, + "silo": {"type": ["string", "null"]}, + "subset": {"type": "string"}, + "version": {"type": "number"}, + "representation": {"type": "string"} + }, + "example": { + "project": "hulk", + "asset": "Bruce", + "silo": "assets", + "subset": "rigDefault", + "version": 12, + "representation": "ma" + } + } + } +} diff --git a/openpype/pipeline/schema/session-1.0.json b/openpype/pipeline/schema/session-1.0.json new file mode 100644 index 0000000000..5ced0a6f08 --- /dev/null +++ b/openpype/pipeline/schema/session-1.0.json @@ -0,0 +1,143 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:session-1.0", + "description": "The Avalon environment", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "AVALON_PROJECTS", + "AVALON_PROJECT", + "AVALON_ASSET", + "AVALON_SILO", + "AVALON_CONFIG" + ], + + "properties": { + "AVALON_PROJECTS": { + "description": "Absolute path to root of project directories", + "type": "string", + "example": "/nas/projects" + }, + "AVALON_PROJECT": { + "description": "Name of project", + "type": "string", + "pattern": "^\\w*$", + "example": "Hulk" + }, + "AVALON_ASSET": { + "description": "Name of asset", + "type": "string", + "pattern": "^\\w*$", + "example": "Bruce" + }, + "AVALON_SILO": { + "description": "Name of asset group or container", + "type": "string", + "pattern": "^\\w*$", + "example": "assets" + }, + "AVALON_TASK": { + "description": "Name of task", + "type": "string", + "pattern": "^\\w*$", + "example": "modeling" + }, + "AVALON_CONFIG": { + "description": "Name of Avalon configuration", + "type": "string", + "pattern": "^\\w*$", + "example": "polly" + }, + "AVALON_APP": { + "description": "Name of application", + "type": "string", + "pattern": "^\\w*$", + "example": "maya2016" + }, + "AVALON_MONGO": { + "description": "Address to the asset database", + "type": "string", + "pattern": "^mongodb://[\\w/@:.]*$", + "example": "mongodb://localhost:27017", + "default": "mongodb://localhost:27017" + }, + "AVALON_DB": { + "description": "Name of database", + "type": "string", + "pattern": "^\\w*$", + "example": "avalon", + "default": "avalon" + }, + "AVALON_LABEL": { + "description": "Nice name of Avalon, used in e.g. 
graphical user interfaces", + "type": "string", + "example": "Mindbender", + "default": "Avalon" + }, + "AVALON_SENTRY": { + "description": "Address to Sentry", + "type": "string", + "pattern": "^http[\\w/@:.]*$", + "example": "https://5b872b280de742919b115bdc8da076a5:8d278266fe764361b8fa6024af004a9c@logs.mindbender.com/2", + "default": null + }, + "AVALON_DEADLINE": { + "description": "Address to Deadline", + "type": "string", + "pattern": "^http[\\w/@:.]*$", + "example": "http://192.168.99.101", + "default": null + }, + "AVALON_TIMEOUT": { + "description": "Wherever there is a need for a timeout, this is the default value.", + "type": "string", + "pattern": "^[0-9]*$", + "default": "1000", + "example": "1000" + }, + "AVALON_UPLOAD": { + "description": "Boolean of whether to upload published material to central asset repository", + "type": "string", + "default": null, + "example": "True" + }, + "AVALON_USERNAME": { + "description": "Generic username", + "type": "string", + "pattern": "^\\w*$", + "default": "avalon", + "example": "myself" + }, + "AVALON_PASSWORD": { + "description": "Generic password", + "type": "string", + "pattern": "^\\w*$", + "default": "secret", + "example": "abc123" + }, + "AVALON_INSTANCE_ID": { + "description": "Unique identifier for instances in a working file", + "type": "string", + "pattern": "^[\\w.]*$", + "default": "avalon.instance", + "example": "avalon.instance" + }, + "AVALON_CONTAINER_ID": { + "description": "Unique identifier for a loaded representation in a working file", + "type": "string", + "pattern": "^[\\w.]*$", + "default": "avalon.container", + "example": "avalon.container" + }, + "AVALON_DEBUG": { + "description": "Enable debugging mode. Some applications may use this for e.g. extended verbosity or mock plug-ins.", + "type": "string", + "default": null, + "example": "True" + } + } +} diff --git a/openpype/pipeline/schema/session-2.0.json b/openpype/pipeline/schema/session-2.0.json new file mode 100644 index 0000000000..0a4d51beb2 --- /dev/null +++ b/openpype/pipeline/schema/session-2.0.json @@ -0,0 +1,134 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:session-2.0", + "description": "The Avalon environment", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "AVALON_PROJECT", + "AVALON_ASSET", + "AVALON_CONFIG" + ], + + "properties": { + "AVALON_PROJECTS": { + "description": "Absolute path to root of project directories", + "type": "string", + "example": "/nas/projects" + }, + "AVALON_PROJECT": { + "description": "Name of project", + "type": "string", + "pattern": "^\\w*$", + "example": "Hulk" + }, + "AVALON_ASSET": { + "description": "Name of asset", + "type": "string", + "pattern": "^\\w*$", + "example": "Bruce" + }, + "AVALON_SILO": { + "description": "Name of asset group or container", + "type": "string", + "pattern": "^\\w*$", + "example": "assets" + }, + "AVALON_TASK": { + "description": "Name of task", + "type": "string", + "pattern": "^\\w*$", + "example": "modeling" + }, + "AVALON_CONFIG": { + "description": "Name of Avalon configuration", + "type": "string", + "pattern": "^\\w*$", + "example": "polly" + }, + "AVALON_APP": { + "description": "Name of application", + "type": "string", + "pattern": "^\\w*$", + "example": "maya2016" + }, + "AVALON_DB": { + "description": "Name of database", + "type": "string", + "pattern": "^\\w*$", + "example": "avalon", + "default": "avalon" + }, + "AVALON_LABEL": { + "description": "Nice name of Avalon, used in e.g. 
graphical user interfaces", + "type": "string", + "example": "Mindbender", + "default": "Avalon" + }, + "AVALON_SENTRY": { + "description": "Address to Sentry", + "type": "string", + "pattern": "^http[\\w/@:.]*$", + "example": "https://5b872b280de742919b115bdc8da076a5:8d278266fe764361b8fa6024af004a9c@logs.mindbender.com/2", + "default": null + }, + "AVALON_DEADLINE": { + "description": "Address to Deadline", + "type": "string", + "pattern": "^http[\\w/@:.]*$", + "example": "http://192.168.99.101", + "default": null + }, + "AVALON_TIMEOUT": { + "description": "Wherever there is a need for a timeout, this is the default value.", + "type": "string", + "pattern": "^[0-9]*$", + "default": "1000", + "example": "1000" + }, + "AVALON_UPLOAD": { + "description": "Boolean of whether to upload published material to central asset repository", + "type": "string", + "default": null, + "example": "True" + }, + "AVALON_USERNAME": { + "description": "Generic username", + "type": "string", + "pattern": "^\\w*$", + "default": "avalon", + "example": "myself" + }, + "AVALON_PASSWORD": { + "description": "Generic password", + "type": "string", + "pattern": "^\\w*$", + "default": "secret", + "example": "abc123" + }, + "AVALON_INSTANCE_ID": { + "description": "Unique identifier for instances in a working file", + "type": "string", + "pattern": "^[\\w.]*$", + "default": "avalon.instance", + "example": "avalon.instance" + }, + "AVALON_CONTAINER_ID": { + "description": "Unique identifier for a loaded representation in a working file", + "type": "string", + "pattern": "^[\\w.]*$", + "default": "avalon.container", + "example": "avalon.container" + }, + "AVALON_DEBUG": { + "description": "Enable debugging mode. Some applications may use this for e.g. extended verbosity or mock plug-ins.", + "type": "string", + "default": null, + "example": "True" + } + } +} diff --git a/openpype/pipeline/schema/session-3.0.json b/openpype/pipeline/schema/session-3.0.json new file mode 100644 index 0000000000..9f785939e4 --- /dev/null +++ b/openpype/pipeline/schema/session-3.0.json @@ -0,0 +1,81 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:session-3.0", + "description": "The Avalon environment", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "AVALON_PROJECT", + "AVALON_ASSET" + ], + + "properties": { + "AVALON_PROJECTS": { + "description": "Absolute path to root of project directories", + "type": "string", + "example": "/nas/projects" + }, + "AVALON_PROJECT": { + "description": "Name of project", + "type": "string", + "pattern": "^\\w*$", + "example": "Hulk" + }, + "AVALON_ASSET": { + "description": "Name of asset", + "type": "string", + "pattern": "^\\w*$", + "example": "Bruce" + }, + "AVALON_TASK": { + "description": "Name of task", + "type": "string", + "pattern": "^\\w*$", + "example": "modeling" + }, + "AVALON_APP": { + "description": "Name of host", + "type": "string", + "pattern": "^\\w*$", + "example": "maya2016" + }, + "AVALON_DB": { + "description": "Name of database", + "type": "string", + "pattern": "^\\w*$", + "example": "avalon", + "default": "avalon" + }, + "AVALON_LABEL": { + "description": "Nice name of Avalon, used in e.g. 
graphical user interfaces", + "type": "string", + "example": "Mindbender", + "default": "Avalon" + }, + "AVALON_TIMEOUT": { + "description": "Wherever there is a need for a timeout, this is the default value.", + "type": "string", + "pattern": "^[0-9]*$", + "default": "1000", + "example": "1000" + }, + "AVALON_INSTANCE_ID": { + "description": "Unique identifier for instances in a working file", + "type": "string", + "pattern": "^[\\w.]*$", + "default": "avalon.instance", + "example": "avalon.instance" + }, + "AVALON_CONTAINER_ID": { + "description": "Unique identifier for a loaded representation in a working file", + "type": "string", + "pattern": "^[\\w.]*$", + "default": "avalon.container", + "example": "avalon.container" + } + } +} diff --git a/openpype/pipeline/schema/shaders-1.0.json b/openpype/pipeline/schema/shaders-1.0.json new file mode 100644 index 0000000000..7102ba1861 --- /dev/null +++ b/openpype/pipeline/schema/shaders-1.0.json @@ -0,0 +1,32 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:shaders-1.0", + "description": "Relationships between shaders and Avalon IDs", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "shader" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string" + }, + "shader": { + "description": "Name of directory", + "type": "array", + "items": { + "type": "str", + "description": "Avalon ID and optional face indexes, e.g. 'f9520572-ac1d-11e6-b39e-3085a99791c9.f[5002:5185]'" + } + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/subset-1.0.json b/openpype/pipeline/schema/subset-1.0.json new file mode 100644 index 0000000000..a299a6d341 --- /dev/null +++ b/openpype/pipeline/schema/subset-1.0.json @@ -0,0 +1,35 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:subset-1.0", + "description": "A container of instances", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "name", + "versions" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string" + }, + "name": { + "description": "Name of directory", + "type": "string" + }, + "versions": { + "type": "array", + "items": { + "$ref": "version.json" + } + } + }, + + "definitions": {} +} diff --git a/openpype/pipeline/schema/subset-2.0.json b/openpype/pipeline/schema/subset-2.0.json new file mode 100644 index 0000000000..db256ec7fb --- /dev/null +++ b/openpype/pipeline/schema/subset-2.0.json @@ -0,0 +1,51 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:subset-2.0", + "description": "A container of instances", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "parent", + "name", + "data" + ], + + "properties": { + "schema": { + "description": "The schema associated with this document", + "type": "string", + "enum": ["openpype:subset-2.0"], + "example": "openpype:subset-2.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["subset"], + "example": "subset" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of directory", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "shot01" + }, + "data": { + "type": "object", + "description": "Document metadata", + "example": { + "frameStart": 1000, + "frameEnd": 1201 + } + 
} + } +} diff --git a/openpype/pipeline/schema/subset-3.0.json b/openpype/pipeline/schema/subset-3.0.json new file mode 100644 index 0000000000..1a0db53c04 --- /dev/null +++ b/openpype/pipeline/schema/subset-3.0.json @@ -0,0 +1,62 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:subset-3.0", + "description": "A container of instances", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "parent", + "name", + "data" + ], + + "properties": { + "schema": { + "description": "The schema associated with this document", + "type": "string", + "enum": ["openpype:subset-3.0"], + "example": "openpype:subset-3.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["subset"], + "example": "subset" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Name of directory", + "type": "string", + "pattern": "^[a-zA-Z0-9_.]*$", + "example": "shot01" + }, + "data": { + "description": "Document metadata", + "type": "object", + "required": ["families"], + "properties": { + "families": { + "type": "array", + "items": {"type": "string"}, + "description": "One or more families associated with this subset" + } + }, + "example": { + "families" : [ + "avalon.camera" + ], + "frameStart": 1000, + "frameEnd": 1201 + } + } + } +} diff --git a/openpype/pipeline/schema/thumbnail-1.0.json b/openpype/pipeline/schema/thumbnail-1.0.json new file mode 100644 index 0000000000..5bdf78a4b1 --- /dev/null +++ b/openpype/pipeline/schema/thumbnail-1.0.json @@ -0,0 +1,42 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:thumbnail-1.0", + "description": "Entity with thumbnail data", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "data" + ], + + "properties": { + "schema": { + "description": "The schema associated with this document", + "type": "string", + "enum": ["openpype:thumbnail-1.0"], + "example": "openpype:thumbnail-1.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["thumbnail"], + "example": "thumbnail" + }, + "data": { + "description": "Thumbnail data", + "type": "object", + "example": { + "binary_data": "Binary({byte data of image})", + "template": "{thumbnail_root}/{project[name]}/{_id}{ext}}", + "template_data": { + "ext": ".jpg" + } + } + } + } +} diff --git a/openpype/pipeline/schema/version-1.0.json b/openpype/pipeline/schema/version-1.0.json new file mode 100644 index 0000000000..daa1997721 --- /dev/null +++ b/openpype/pipeline/schema/version-1.0.json @@ -0,0 +1,50 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:version-1.0", + "description": "An individual version", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "version", + "path", + "time", + "author", + "source", + "representations" + ], + + "properties": { + "schema": {"type": "string"}, + "representations": { + "type": "array", + "items": { + "$ref": "representation.json" + } + }, + "time": { + "description": "ISO formatted, file-system compatible time", + "type": "string" + }, + "author": { + "description": "User logged on to the machine at time of publish", + "type": "string" + }, + "version": { + "description": "Number of this version", + "type": "number" + }, + "path": { + "description": "Unformatted path, e.g. 
'{root}/assets/Bruce/publish/lookdevDefault/v001", + "type": "string" + }, + "source": { + "description": "Original file from which this version was made.", + "type": "string" + } + } +} diff --git a/openpype/pipeline/schema/version-2.0.json b/openpype/pipeline/schema/version-2.0.json new file mode 100644 index 0000000000..099e9be70a --- /dev/null +++ b/openpype/pipeline/schema/version-2.0.json @@ -0,0 +1,92 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:version-2.0", + "description": "An individual version", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "parent", + "name", + "data" + ], + + "properties": { + "schema": { + "description": "The schema associated with this document", + "type": "string", + "enum": ["openpype:version-2.0"], + "example": "openpype:version-2.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["version"], + "example": "version" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Number of version", + "type": "number", + "example": 12 + }, + "locations": { + "description": "Where on the planet this version can be found.", + "type": "array", + "items": {"type": "string"}, + "example": ["data.avalon.com"] + }, + "data": { + "description": "Document metadata", + "type": "object", + "required": ["families", "author", "source", "time"], + "properties": { + "time": { + "description": "ISO formatted, file-system compatible time", + "type": "string" + }, + "timeFormat": { + "description": "ISO format of time", + "type": "string" + }, + "author": { + "description": "User logged on to the machine at time of publish", + "type": "string" + }, + "version": { + "description": "Number of this version", + "type": "number" + }, + "path": { + "description": "Unformatted path, e.g. 
'{root}/assets/Bruce/publish/lookdevDefault/v001", + "type": "string" + }, + "source": { + "description": "Original file from which this version was made.", + "type": "string" + }, + "families": { + "type": "array", + "items": {"type": "string"}, + "description": "One or more families associated with this version" + } + }, + "example": { + "source" : "{root}/f02_prod/assets/BubbleWitch/work/modeling/marcus/maya/scenes/model_v001.ma", + "author" : "marcus", + "families" : [ + "avalon.model" + ], + "time" : "20170510T090203Z" + } + } + } +} diff --git a/openpype/pipeline/schema/version-3.0.json b/openpype/pipeline/schema/version-3.0.json new file mode 100644 index 0000000000..3e07fc4499 --- /dev/null +++ b/openpype/pipeline/schema/version-3.0.json @@ -0,0 +1,84 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:version-3.0", + "description": "An individual version", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "parent", + "name", + "data" + ], + + "properties": { + "schema": { + "description": "The schema associated with this document", + "type": "string", + "enum": ["openpype:version-3.0"], + "example": "openpype:version-3.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["version"], + "example": "version" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "name": { + "description": "Number of version", + "type": "number", + "example": 12 + }, + "locations": { + "description": "Where on the planet this version can be found.", + "type": "array", + "items": {"type": "string"}, + "example": ["data.avalon.com"] + }, + "data": { + "description": "Document metadata", + "type": "object", + "required": ["author", "source", "time"], + "properties": { + "time": { + "description": "ISO formatted, file-system compatible time", + "type": "string" + }, + "timeFormat": { + "description": "ISO format of time", + "type": "string" + }, + "author": { + "description": "User logged on to the machine at time of publish", + "type": "string" + }, + "version": { + "description": "Number of this version", + "type": "number" + }, + "path": { + "description": "Unformatted path, e.g. 
'{root}/assets/Bruce/publish/lookdevDefault/v001", + "type": "string" + }, + "source": { + "description": "Original file from which this version was made.", + "type": "string" + } + }, + "example": { + "source" : "{root}/f02_prod/assets/BubbleWitch/work/modeling/marcus/maya/scenes/model_v001.ma", + "author" : "marcus", + "time" : "20170510T090203Z" + } + } + } +} diff --git a/openpype/pipeline/schema/workfile-1.0.json b/openpype/pipeline/schema/workfile-1.0.json new file mode 100644 index 0000000000..5f9600ef20 --- /dev/null +++ b/openpype/pipeline/schema/workfile-1.0.json @@ -0,0 +1,52 @@ +{ + "$schema": "http://json-schema.org/draft-04/schema#", + + "title": "openpype:workfile-1.0", + "description": "Workfile additional information.", + + "type": "object", + + "additionalProperties": true, + + "required": [ + "schema", + "type", + "filename", + "task_name", + "parent" + ], + + "properties": { + "schema": { + "description": "Schema identifier for payload", + "type": "string", + "enum": ["openpype:workfile-1.0"], + "example": "openpype:workfile-1.0" + }, + "type": { + "description": "The type of document", + "type": "string", + "enum": ["workfile"], + "example": "workfile" + }, + "parent": { + "description": "Unique identifier to parent document", + "example": "592c33475f8c1b064c4d1696" + }, + "filename": { + "description": "Workfile's filename", + "type": "string", + "example": "kuba_each_case_Alpaca_01_animation_v001.ma" + }, + "task_name": { + "description": "Task name", + "type": "string", + "example": "animation" + }, + "data": { + "description": "Document metadata", + "type": "object", + "example": {"key": "value"} + } + } +} diff --git a/openpype/plugin.py b/openpype/plugin.py deleted file mode 100644 index 7e906b4451..0000000000 --- a/openpype/plugin.py +++ /dev/null @@ -1,128 +0,0 @@ -import functools -import warnings - -import pyblish.api - -# New location of orders: openpype.pipeline.publish.constants -# - can be imported as -# 'from openpype.pipeline.publish import ValidatePipelineOrder' -ValidatePipelineOrder = pyblish.api.ValidatorOrder + 0.05 -ValidateContentsOrder = pyblish.api.ValidatorOrder + 0.1 -ValidateSceneOrder = pyblish.api.ValidatorOrder + 0.2 -ValidateMeshOrder = pyblish.api.ValidatorOrder + 0.3 - - -class PluginDeprecatedWarning(DeprecationWarning): - pass - - -def _deprecation_warning(item_name, warning_message): - warnings.simplefilter("always", PluginDeprecatedWarning) - warnings.warn( - ( - "Call to deprecated function '{}'" - "\nFunction was moved or removed.{}" - ).format(item_name, warning_message), - category=PluginDeprecatedWarning, - stacklevel=4 - ) - - -def deprecated(new_destination): - """Mark functions as deprecated. - - It will result in a warning being emitted when the function is used. - """ - - func = None - if callable(new_destination): - func = new_destination - new_destination = None - - def _decorator(decorated_func): - if new_destination is None: - warning_message = ( - " Please check content of deprecated function to figure out" - " possible replacement." 
- ) - else: - warning_message = " Please replace your usage with '{}'.".format( - new_destination - ) - - @functools.wraps(decorated_func) - def wrapper(*args, **kwargs): - _deprecation_warning(decorated_func.__name__, warning_message) - return decorated_func(*args, **kwargs) - return wrapper - - if func is None: - return _decorator - return _decorator(func) - - -# Classes just inheriting from pyblish classes -# - seems to be unused in code (not 100% sure) -# - they should be removed but because it is not clear if they're used -# we'll keep then and log deprecation warning -# Deprecated since 3.14.* will be removed in 3.16.* -class ContextPlugin(pyblish.api.ContextPlugin): - def __init__(self, *args, **kwargs): - _deprecation_warning( - "openpype.plugin.ContextPlugin", - " Please replace your usage with 'pyblish.api.ContextPlugin'." - ) - super(ContextPlugin, self).__init__(*args, **kwargs) - - -# Deprecated since 3.14.* will be removed in 3.16.* -class InstancePlugin(pyblish.api.InstancePlugin): - def __init__(self, *args, **kwargs): - _deprecation_warning( - "openpype.plugin.ContextPlugin", - " Please replace your usage with 'pyblish.api.InstancePlugin'." - ) - super(InstancePlugin, self).__init__(*args, **kwargs) - - -class Extractor(pyblish.api.InstancePlugin): - """Extractor base class. - - The extractor base class implements a "staging_dir" function used to - generate a temporary directory for an instance to extract to. - - This temporary directory is generated through `tempfile.mkdtemp()` - - """ - - order = 2.0 - - def staging_dir(self, instance): - """Provide a temporary directory in which to store extracted files - - Upon calling this method the staging directory is stored inside - the instance.data['stagingDir'] - """ - - from openpype.pipeline.publish import get_instance_staging_dir - - return get_instance_staging_dir(instance) - - -@deprecated("openpype.pipeline.publish.context_plugin_should_run") -def contextplugin_should_run(plugin, context): - """Return whether the ContextPlugin should run on the given context. - - This is a helper function to work around a bug pyblish-base#250 - Whenever a ContextPlugin sets specific families it will still trigger even - when no instances are present that have those families. - - This actually checks it correctly and returns whether it should run. - - Deprecated: - Since 3.14.* will be removed in 3.16.* or later. 
- """ - - from openpype.pipeline.publish import context_plugin_should_run - - return context_plugin_should_run(plugin, context) diff --git a/openpype/plugins/publish/extract_burnin.py b/openpype/plugins/publish/extract_burnin.py index e67739e842..4a64711bfd 100644 --- a/openpype/plugins/publish/extract_burnin.py +++ b/openpype/plugins/publish/extract_burnin.py @@ -52,7 +52,8 @@ class ExtractBurnin(publish.Extractor): "photoshop", "flame", "houdini", - "max" + "max", + "blender" # "resolve" ] diff --git a/openpype/plugins/publish/extract_otio_audio_tracks.py b/openpype/plugins/publish/extract_otio_audio_tracks.py index e19b7eeb13..4f17731452 100644 --- a/openpype/plugins/publish/extract_otio_audio_tracks.py +++ b/openpype/plugins/publish/extract_otio_audio_tracks.py @@ -1,7 +1,7 @@ import os import pyblish from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, run_subprocess ) import tempfile @@ -20,9 +20,6 @@ class ExtractOtioAudioTracks(pyblish.api.ContextPlugin): label = "Extract OTIO Audio Tracks" hosts = ["hiero", "resolve", "flame"] - # FFmpeg tools paths - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") - def process(self, context): """Convert otio audio track's content to audio representations @@ -91,13 +88,13 @@ class ExtractOtioAudioTracks(pyblish.api.ContextPlugin): # temp audio file audio_fpath = self.create_temp_file(name) - cmd = [ - self.ffmpeg_path, + cmd = get_ffmpeg_tool_args( + "ffmpeg", "-ss", str(start_sec), "-t", str(duration_sec), "-i", audio_file, audio_fpath - ] + ) # run subprocess self.log.debug("Executing: {}".format(" ".join(cmd))) @@ -210,13 +207,13 @@ class ExtractOtioAudioTracks(pyblish.api.ContextPlugin): max_duration_sec = max(end_secs) # create empty cmd - cmd = [ - self.ffmpeg_path, + cmd = get_ffmpeg_tool_args( + "ffmpeg", "-f", "lavfi", "-i", "anullsrc=channel_layout=stereo:sample_rate=48000", "-t", str(max_duration_sec), empty_fpath - ] + ) # generate empty with ffmpeg # run subprocess @@ -295,7 +292,7 @@ class ExtractOtioAudioTracks(pyblish.api.ContextPlugin): filters_tmp_filepath = tmp_file.name tmp_file.write(",".join(filters)) - args = [self.ffmpeg_path] + args = get_ffmpeg_tool_args("ffmpeg") args.extend(input_args) args.extend([ "-filter_complex_script", filters_tmp_filepath, diff --git a/openpype/plugins/publish/extract_otio_review.py b/openpype/plugins/publish/extract_otio_review.py index 9ebcad2af1..699207df8a 100644 --- a/openpype/plugins/publish/extract_otio_review.py +++ b/openpype/plugins/publish/extract_otio_review.py @@ -20,7 +20,7 @@ import opentimelineio as otio from pyblish import api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, run_subprocess, ) from openpype.pipeline import publish @@ -338,8 +338,6 @@ class ExtractOTIOReview(publish.Extractor): Returns: otio.time.TimeRange: trimmed available range """ - # get rendering app path - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") # create path and frame start to destination output_path, out_frame_start = self._get_ffmpeg_output() @@ -348,7 +346,7 @@ class ExtractOTIOReview(publish.Extractor): out_frame_start += end_offset # start command list - command = [ffmpeg_path] + command = get_ffmpeg_tool_args("ffmpeg") input_extension = None if sequence: diff --git a/openpype/plugins/publish/extract_otio_trimming_video.py b/openpype/plugins/publish/extract_otio_trimming_video.py index 70726338aa..67ff6c538c 100644 --- a/openpype/plugins/publish/extract_otio_trimming_video.py +++ b/openpype/plugins/publish/extract_otio_trimming_video.py @@ -11,7 +11,7 
@@ from copy import deepcopy import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, run_subprocess, ) from openpype.pipeline import publish @@ -75,14 +75,12 @@ class ExtractOTIOTrimmingVideo(publish.Extractor): otio_range (opentime.TimeRange): range to trim to """ - # get rendering app path - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") # create path to destination output_path = self._get_ffmpeg_output(input_file_path) # start command list - command = [ffmpeg_path] + command = get_ffmpeg_tool_args("ffmpeg") video_path = input_file_path frame_start = otio_range.start_time.value diff --git a/openpype/plugins/publish/extract_review.py b/openpype/plugins/publish/extract_review.py index f053d1b500..9cc456872e 100644 --- a/openpype/plugins/publish/extract_review.py +++ b/openpype/plugins/publish/extract_review.py @@ -3,6 +3,7 @@ import re import copy import json import shutil +import subprocess from abc import ABCMeta, abstractmethod import six @@ -11,7 +12,7 @@ import speedcopy import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, filter_profiles, path_to_subprocess_arg, run_subprocess, @@ -72,9 +73,6 @@ class ExtractReview(pyblish.api.InstancePlugin): alpha_exts = ["exr", "png", "dpx"] - # FFmpeg tools paths - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") - # Preset attributes profiles = None @@ -787,8 +785,9 @@ class ExtractReview(pyblish.api.InstancePlugin): arg = arg.replace(identifier, "").strip() audio_filters.append(arg) - all_args = [] - all_args.append(path_to_subprocess_arg(self.ffmpeg_path)) + all_args = [ + subprocess.list2cmdline(get_ffmpeg_tool_args("ffmpeg")) + ] all_args.extend(input_args) if video_filters: all_args.append("-filter:v") diff --git a/openpype/plugins/publish/extract_review_slate.py b/openpype/plugins/publish/extract_review_slate.py index fca3d96ca6..8f31f10c42 100644 --- a/openpype/plugins/publish/extract_review_slate.py +++ b/openpype/plugins/publish/extract_review_slate.py @@ -1,5 +1,6 @@ import os import re +import subprocess from pprint import pformat import pyblish.api @@ -7,7 +8,7 @@ import pyblish.api from openpype.lib import ( path_to_subprocess_arg, run_subprocess, - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, get_ffprobe_data, get_ffprobe_streams, get_ffmpeg_codec_args, @@ -47,8 +48,6 @@ class ExtractReviewSlate(publish.Extractor): self.log.info("_ slates_data: {}".format(pformat(slates_data))) - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") - if "reviewToWidth" in inst_data: use_legacy_code = True else: @@ -260,7 +259,7 @@ class ExtractReviewSlate(publish.Extractor): _remove_at_end.append(slate_v_path) slate_args = [ - path_to_subprocess_arg(ffmpeg_path), + subprocess.list2cmdline(get_ffmpeg_tool_args("ffmpeg")), " ".join(input_args), " ".join(output_args) ] @@ -281,7 +280,6 @@ class ExtractReviewSlate(publish.Extractor): os.path.splitext(slate_v_path)) _remove_at_end.append(slate_silent_path) self._create_silent_slate( - ffmpeg_path, slate_v_path, slate_silent_path, audio_codec, @@ -309,12 +307,12 @@ class ExtractReviewSlate(publish.Extractor): "[0:v] [1:v] concat=n=2:v=1:a=0 [v]", "-map", '[v]' ] - concat_args = [ - ffmpeg_path, + concat_args = get_ffmpeg_tool_args( + "ffmpeg", "-y", "-i", slate_v_path, "-i", input_path, - ] + ) concat_args.extend(fmap) if offset_timecode: concat_args.extend(["-timecode", offset_timecode]) @@ -490,7 +488,6 @@ class ExtractReviewSlate(publish.Extractor): def _create_silent_slate( self, - ffmpeg_path, src_path, dst_path, audio_codec, @@ 
-515,8 +512,8 @@ class ExtractReviewSlate(publish.Extractor): one_frame_duration = str(int(one_frame_duration)) + "us" self.log.debug("One frame duration is {}".format(one_frame_duration)) - slate_silent_args = [ - ffmpeg_path, + slate_silent_args = get_ffmpeg_tool_args( + "ffmpeg", "-i", src_path, "-f", "lavfi", "-i", "anullsrc=r={}:cl={}:d={}".format( @@ -531,7 +528,7 @@ class ExtractReviewSlate(publish.Extractor): "-shortest", "-y", dst_path - ] + ) # run slate generation subprocess self.log.debug("Silent Slate Executing: {}".format( " ".join(slate_silent_args) diff --git a/openpype/plugins/publish/extract_scanline_exr.py b/openpype/plugins/publish/extract_scanline_exr.py index 0e4c0ca65f..9f22794a79 100644 --- a/openpype/plugins/publish/extract_scanline_exr.py +++ b/openpype/plugins/publish/extract_scanline_exr.py @@ -5,7 +5,12 @@ import shutil import pyblish.api -from openpype.lib import run_subprocess, get_oiio_tools_path +from openpype.lib import ( + run_subprocess, + get_oiio_tool_args, + ToolNotFoundError, +) +from openpype.pipeline import KnownPublishError class ExtractScanlineExr(pyblish.api.InstancePlugin): @@ -45,11 +50,11 @@ class ExtractScanlineExr(pyblish.api.InstancePlugin): stagingdir = os.path.normpath(repre.get("stagingDir")) - oiio_tool_path = get_oiio_tools_path() - if not os.path.exists(oiio_tool_path): - self.log.error( - "OIIO tool not found in {}".format(oiio_tool_path)) - raise AssertionError("OIIO tool not found") + try: + oiio_tool_args = get_oiio_tool_args("oiiotool") + except ToolNotFoundError: + self.log.error("OIIO tool not found.") + raise KnownPublishError("OIIO tool not found") for file in input_files: @@ -57,8 +62,7 @@ class ExtractScanlineExr(pyblish.api.InstancePlugin): temp_name = os.path.join(stagingdir, "__{}".format(file)) # move original render to temp location shutil.move(original_name, temp_name) - oiio_cmd = [ - oiio_tool_path, + oiio_cmd = oiio_tool_args + [ os.path.join(stagingdir, temp_name), "--scanline", "-o", os.path.join(stagingdir, original_name) ] diff --git a/openpype/plugins/publish/extract_thumbnail.py b/openpype/plugins/publish/extract_thumbnail.py index b98ab64f56..b72a6d02ad 100644 --- a/openpype/plugins/publish/extract_thumbnail.py +++ b/openpype/plugins/publish/extract_thumbnail.py @@ -1,10 +1,11 @@ import os +import subprocess import tempfile import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, - get_oiio_tools_path, + get_ffmpeg_tool_args, + get_oiio_tool_args, is_oiio_supported, run_subprocess, @@ -174,12 +175,11 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): def create_thumbnail_oiio(self, src_path, dst_path): self.log.info("Extracting thumbnail {}".format(dst_path)) - oiio_tool_path = get_oiio_tools_path() - oiio_cmd = [ - oiio_tool_path, + oiio_cmd = get_oiio_tool_args( + "oiiotool", "-a", src_path, "-o", dst_path - ] + ) self.log.debug("running: {}".format(" ".join(oiio_cmd))) try: run_subprocess(oiio_cmd, logger=self.log) @@ -194,27 +194,27 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): def create_thumbnail_ffmpeg(self, src_path, dst_path): self.log.info("outputting {}".format(dst_path)) - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") + ffmpeg_path_args = get_ffmpeg_tool_args("ffmpeg") ffmpeg_args = self.ffmpeg_args or {} - jpeg_items = [] - jpeg_items.append(path_to_subprocess_arg(ffmpeg_path)) - # override file if already exists - jpeg_items.append("-y") + jpeg_items = [ + subprocess.list2cmdline(ffmpeg_path_args) + ] # flag for large file sizes max_int = 2147483647 - 
jpeg_items.append("-analyzeduration {}".format(max_int)) - jpeg_items.append("-probesize {}".format(max_int)) + jpeg_items.extend([ + "-y", + "-analyzeduration", str(max_int), + "-probesize", str(max_int), + ]) # use same input args like with mov jpeg_items.extend(ffmpeg_args.get("input") or []) # input file - jpeg_items.append("-i {}".format( - path_to_subprocess_arg(src_path) - )) + jpeg_items.extend(["-i", path_to_subprocess_arg(src_path)]) # output arguments from presets jpeg_items.extend(ffmpeg_args.get("output") or []) # we just want one frame from movie files - jpeg_items.append("-vframes 1") + jpeg_items.extend(["-vframes", "1"]) # output file jpeg_items.append(path_to_subprocess_arg(dst_path)) subprocess_command = " ".join(jpeg_items) diff --git a/openpype/plugins/publish/extract_thumbnail_from_source.py b/openpype/plugins/publish/extract_thumbnail_from_source.py index a9c95d6065..54622bb84e 100644 --- a/openpype/plugins/publish/extract_thumbnail_from_source.py +++ b/openpype/plugins/publish/extract_thumbnail_from_source.py @@ -17,8 +17,8 @@ import tempfile import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, - get_oiio_tools_path, + get_ffmpeg_tool_args, + get_oiio_tool_args, is_oiio_supported, run_subprocess, @@ -144,12 +144,11 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin): def create_thumbnail_oiio(self, src_path, dst_path): self.log.info("outputting {}".format(dst_path)) - oiio_tool_path = get_oiio_tools_path() - oiio_cmd = [ - oiio_tool_path, + oiio_cmd = get_oiio_tool_args( + "oiiotool", "-a", src_path, "-o", dst_path - ] + ) self.log.info("Running: {}".format(" ".join(oiio_cmd))) try: run_subprocess(oiio_cmd, logger=self.log) @@ -162,18 +161,16 @@ class ExtractThumbnailFromSource(pyblish.api.InstancePlugin): return False def create_thumbnail_ffmpeg(self, src_path, dst_path): - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") - max_int = str(2147483647) - ffmpeg_cmd = [ - ffmpeg_path, + ffmpeg_cmd = get_ffmpeg_tool_args( + "ffmpeg", "-y", "-analyzeduration", max_int, "-probesize", max_int, "-i", src_path, "-vframes", "1", dst_path - ] + ) self.log.info("Running: {}".format(" ".join(ffmpeg_cmd))) try: diff --git a/openpype/plugins/publish/extract_trim_video_audio.py b/openpype/plugins/publish/extract_trim_video_audio.py index b951136391..2907ae1839 100644 --- a/openpype/plugins/publish/extract_trim_video_audio.py +++ b/openpype/plugins/publish/extract_trim_video_audio.py @@ -4,7 +4,7 @@ from pprint import pformat import pyblish.api from openpype.lib import ( - get_ffmpeg_tool_path, + get_ffmpeg_tool_args, run_subprocess, ) from openpype.pipeline import publish @@ -32,7 +32,7 @@ class ExtractTrimVideoAudio(publish.Extractor): instance.data["representations"] = list() # get ffmpet path - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") + ffmpeg_tool_args = get_ffmpeg_tool_args("ffmpeg") # get staging dir staging_dir = self.staging_dir(instance) @@ -76,8 +76,7 @@ class ExtractTrimVideoAudio(publish.Extractor): if "trimming" not in fml ] - ffmpeg_args = [ - ffmpeg_path, + ffmpeg_args = ffmpeg_tool_args + [ "-ss", str(clip_start_h / fps), "-i", video_file_path, "-t", str(clip_dur_h / fps) diff --git a/openpype/scripts/otio_burnin.py b/openpype/scripts/otio_burnin.py index 085b62501c..189feaee3a 100644 --- a/openpype/scripts/otio_burnin.py +++ b/openpype/scripts/otio_burnin.py @@ -8,21 +8,15 @@ from string import Formatter import opentimelineio_contrib.adapters.ffmpeg_burnins as ffmpeg_burnins from openpype.lib import ( - get_ffmpeg_tool_path, + 
get_ffmpeg_tool_args, get_ffmpeg_codec_args, get_ffmpeg_format_args, convert_ffprobe_fps_value, - convert_ffprobe_fps_to_float, ) - -ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") -ffprobe_path = get_ffmpeg_tool_path("ffprobe") - - FFMPEG = ( - '"{}"%(input_args)s -i "%(input)s" %(filters)s %(args)s%(output)s' -).format(ffmpeg_path) + '{}%(input_args)s -i "%(input)s" %(filters)s %(args)s%(output)s' +).format(subprocess.list2cmdline(get_ffmpeg_tool_args("ffmpeg"))) DRAWTEXT = ( "drawtext@'%(label)s'=fontfile='%(font)s':text=\\'%(text)s\\':" @@ -46,14 +40,14 @@ def _get_ffprobe_data(source): :param str source: source media file :rtype: [{}, ...] """ - command = [ - ffprobe_path, + command = get_ffmpeg_tool_args( + "ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", "-show_streams", source - ] + ) kwargs = { "stdout": subprocess.PIPE, } diff --git a/openpype/settings/ayon_settings.py b/openpype/settings/ayon_settings.py index d2a2afbee0..90c7f33fd2 100644 --- a/openpype/settings/ayon_settings.py +++ b/openpype/settings/ayon_settings.py @@ -161,91 +161,95 @@ def _convert_general(ayon_settings, output, default_settings): output["general"] = general -def _convert_kitsu_system_settings(ayon_settings, output): - output["modules"]["kitsu"] = { - "server": ayon_settings["kitsu"]["server"] - } +def _convert_kitsu_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("kitsu") is not None + kitsu_settings = default_settings["modules"]["kitsu"] + kitsu_settings["enabled"] = enabled + if enabled: + kitsu_settings["server"] = ayon_settings["kitsu"]["server"] + output["modules"]["kitsu"] = kitsu_settings -def _convert_ftrack_system_settings(ayon_settings, output, defaults): - # Ftrack contains few keys that are needed for initialization in OpenPype - # mode and some are used on different places - ftrack_settings = defaults["modules"]["ftrack"] - ftrack_settings["ftrack_server"] = ( - ayon_settings["ftrack"]["ftrack_server"]) - output["modules"]["ftrack"] = ftrack_settings - - -def _convert_shotgrid_system_settings(ayon_settings, output): - ayon_shotgrid = ayon_settings["shotgrid"] - # Skip conversion if different ayon addon is used - if "leecher_manager_url" not in ayon_shotgrid: - output["shotgrid"] = ayon_shotgrid - return - - shotgrid_settings = {} - for key in ( - "leecher_manager_url", - "leecher_backend_url", - "filter_projects_by_login", - ): - shotgrid_settings[key] = ayon_shotgrid[key] - - new_items = {} - for item in ayon_shotgrid["shotgrid_settings"]: - name = item.pop("name") - new_items[name] = item - shotgrid_settings["shotgrid_settings"] = new_items - - output["modules"]["shotgrid"] = shotgrid_settings - - -def _convert_timers_manager_system_settings(ayon_settings, output): - ayon_manager = ayon_settings["timers_manager"] - manager_settings = { - key: ayon_manager[key] - for key in { - "auto_stop", "full_time", "message_time", "disregard_publishing" - } - } +def _convert_timers_manager_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("timers_manager") is not None + manager_settings = default_settings["modules"]["timers_manager"] + manager_settings["enabled"] = enabled + if enabled: + ayon_manager = ayon_settings["timers_manager"] + manager_settings.update({ + key: ayon_manager[key] + for key in { + "auto_stop", + "full_time", + "message_time", + "disregard_publishing" + } + }) output["modules"]["timers_manager"] = manager_settings -def 
_convert_clockify_system_settings(ayon_settings, output): - output["modules"]["clockify"] = ayon_settings["clockify"] +def _convert_clockify_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("clockify") is not None + clockify_settings = default_settings["modules"]["clockify"] + clockify_settings["enabled"] = enabled + if enabled: + clockify_settings["workspace_name"] = ( + ayon_settings["clockify"]["workspace_name"] + ) + output["modules"]["clockify"] = clockify_settings -def _convert_deadline_system_settings(ayon_settings, output): - ayon_deadline = ayon_settings["deadline"] - deadline_settings = { - "deadline_urls": { +def _convert_deadline_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("deadline") is not None + deadline_settings = default_settings["modules"]["deadline"] + deadline_settings["enabled"] = enabled + if enabled: + ayon_deadline = ayon_settings["deadline"] + deadline_settings["deadline_urls"] = { item["name"]: item["value"] for item in ayon_deadline["deadline_urls"] } - } + output["modules"]["deadline"] = deadline_settings -def _convert_muster_system_settings(ayon_settings, output): - ayon_muster = ayon_settings["muster"] - templates_mapping = { - item["name"]: item["value"] - for item in ayon_muster["templates_mapping"] - } - output["modules"]["muster"] = { - "templates_mapping": templates_mapping, - "MUSTER_REST_URL": ayon_muster["MUSTER_REST_URL"] - } +def _convert_muster_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("muster") is not None + muster_settings = default_settings["modules"]["muster"] + muster_settings["enabled"] = enabled + if enabled: + ayon_muster = ayon_settings["muster"] + muster_settings["MUSTER_REST_URL"] = ayon_muster["MUSTER_REST_URL"] + muster_settings["templates_mapping"] = { + item["name"]: item["value"] + for item in ayon_muster["templates_mapping"] + } + output["modules"]["muster"] = muster_settings -def _convert_royalrender_system_settings(ayon_settings, output): - ayon_royalrender = ayon_settings["royalrender"] - output["modules"]["royalrender"] = { - "rr_paths": { +def _convert_royalrender_system_settings( + ayon_settings, output, addon_versions, default_settings +): + enabled = addon_versions.get("royalrender") is not None + rr_settings = default_settings["modules"]["royalrender"] + rr_settings["enabled"] = enabled + if enabled: + ayon_royalrender = ayon_settings["royalrender"] + rr_settings["rr_paths"] = { item["name"]: item["value"] for item in ayon_royalrender["rr_paths"] } - } + output["modules"]["royalrender"] = rr_settings def _convert_modules_system( @@ -253,42 +257,29 @@ def _convert_modules_system( ): # TODO add all modules # TODO add 'enabled' values - for key, func in ( - ("kitsu", _convert_kitsu_system_settings), - ("shotgrid", _convert_shotgrid_system_settings), - ("timers_manager", _convert_timers_manager_system_settings), - ("clockify", _convert_clockify_system_settings), - ("deadline", _convert_deadline_system_settings), - ("muster", _convert_muster_system_settings), - ("royalrender", _convert_royalrender_system_settings), + for func in ( + _convert_kitsu_system_settings, + _convert_timers_manager_system_settings, + _convert_clockify_system_settings, + _convert_deadline_system_settings, + _convert_muster_system_settings, + _convert_royalrender_system_settings, ): - if key in ayon_settings: - func(ayon_settings, output) + func(ayon_settings, 
output, addon_versions, default_settings) - if "ftrack" in ayon_settings: - _convert_ftrack_system_settings( - ayon_settings, output, default_settings) - - output_modules = output["modules"] - # TODO remove when not needed - for module_name, value in default_settings["modules"].items(): - if module_name not in output_modules: - output_modules[module_name] = value - - for module_name, value in default_settings["modules"].items(): - if "enabled" not in value or module_name not in output_modules: - continue - - ayon_module_name = module_name - if module_name == "sync_server": - ayon_module_name = "sitesync" - output_modules[module_name]["enabled"] = ( - ayon_module_name in addon_versions) - - # Missing modules conversions - # - "sync_server" -> renamed to sitesync - # - "slack" -> only 'enabled' - # - "job_queue" -> completelly missing in ayon + for module_name in ( + "sync_server", + "log_viewer", + "standalonepublish_tool", + "project_manager", + "job_queue", + "avalon", + "addon_paths", + ): + settings = default_settings["modules"][module_name] + if "enabled" in settings: + settings["enabled"] = False + output["modules"][module_name] = settings def convert_system_settings(ayon_settings, default_settings, addon_versions): @@ -724,12 +715,6 @@ def _convert_nuke_project_settings(ayon_settings, output): item_filter["subsets"] = item_filter.pop("product_names") item_filter["families"] = item_filter.pop("product_types") - item["reformat_node_config"] = _convert_nuke_knobs( - item["reformat_node_config"]) - - for node in item["reformat_nodes_config"]["reposition_nodes"]: - node["knobs"] = _convert_nuke_knobs(node["knobs"]) - name = item.pop("name") new_review_data_outputs[name] = item ayon_publish["ExtractReviewDataMov"]["outputs"] = new_review_data_outputs @@ -990,8 +975,11 @@ def _convert_royalrender_project_settings(ayon_settings, output): if "royalrender" not in ayon_settings: return ayon_royalrender = ayon_settings["royalrender"] + rr_paths = ayon_royalrender.get("selected_rr_paths", []) + output["royalrender"] = { - "publish": ayon_royalrender["publish"] + "publish": ayon_royalrender["publish"], + "rr_paths": rr_paths, } diff --git a/openpype/settings/defaults/project_settings/blender.json b/openpype/settings/defaults/project_settings/blender.json index 29e61fe233..df865adeba 100644 --- a/openpype/settings/defaults/project_settings/blender.json +++ b/openpype/settings/defaults/project_settings/blender.json @@ -4,6 +4,8 @@ "apply_on_opening": false, "base_file_unit_scale": 0.01 }, + "set_resolution_startup": true, + "set_frames_startup": true, "imageio": { "activate_host_color_management": true, "ocio_config": { @@ -83,6 +85,11 @@ "optional": true, "active": true }, + "ExtractCameraABC": { + "enabled": true, + "optional": true, + "active": true + }, "ExtractLayout": { "enabled": true, "optional": true, diff --git a/openpype/settings/defaults/project_settings/ftrack.json b/openpype/settings/defaults/project_settings/ftrack.json index b87c45666d..e2ca334b5f 100644 --- a/openpype/settings/defaults/project_settings/ftrack.json +++ b/openpype/settings/defaults/project_settings/ftrack.json @@ -1,9 +1,10 @@ { "events": { "sync_to_avalon": { - "statuses_name_change": [ - "ready", - "not ready" + "role_list": [ + "Pypeclub", + "Administrator", + "Project manager" ] }, "prepare_project": { diff --git a/openpype/settings/defaults/project_settings/maya.json b/openpype/settings/defaults/project_settings/maya.json index a25775e592..8e1022f877 100644 --- 
a/openpype/settings/defaults/project_settings/maya.json +++ b/openpype/settings/defaults/project_settings/maya.json @@ -555,7 +555,6 @@ "publish_mip_map": true }, "CreateAnimation": { - "enabled": false, "write_color_sets": false, "write_face_sets": false, "include_parent_hierarchy": false, diff --git a/openpype/settings/defaults/project_settings/nuke.json b/openpype/settings/defaults/project_settings/nuke.json index 85e3c0d3c3..b736c462ff 100644 --- a/openpype/settings/defaults/project_settings/nuke.json +++ b/openpype/settings/defaults/project_settings/nuke.json @@ -465,34 +465,6 @@ "viewer_process_override": "", "bake_viewer_process": true, "bake_viewer_input_process": true, - "reformat_node_add": false, - "reformat_node_config": [ - { - "type": "text", - "name": "type", - "value": "to format" - }, - { - "type": "text", - "name": "format", - "value": "HD_1080" - }, - { - "type": "text", - "name": "filter", - "value": "Lanczos6" - }, - { - "type": "bool", - "name": "black_outside", - "value": true - }, - { - "type": "bool", - "name": "pbb", - "value": false - } - ], "reformat_nodes_config": { "enabled": false, "reposition_nodes": [ diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json b/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json index c549b577b2..aeb70dfd8c 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_blender.json @@ -31,6 +31,16 @@ } ] }, + { + "key": "set_resolution_startup", + "type": "boolean", + "label": "Set Resolution on Startup" + }, + { + "key": "set_frames_startup", + "type": "boolean", + "label": "Set Start/End Frames and FPS on Startup" + }, { "key": "imageio", "type": "dict", diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json b/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json index 157a8d297e..d6efb118b9 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_ftrack.json @@ -21,12 +21,9 @@ }, { "type": "list", - "key": "statuses_name_change", - "label": "Statuses", - "object_type": { - "type": "text", - "multiline": false - } + "key": "role_list", + "label": "Roles", + "object_type": "text" } ] }, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json index 1037519f57..2f0bf0a831 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json @@ -105,7 +105,11 @@ }, { "key": "ExtractCamera", - "label": "Extract FBX Camera as FBX" + "label": "Extract Camera as FBX" + }, + { + "key": "ExtractCameraABC", + "label": "Extract Camera as ABC" }, { "key": "ExtractLayout", @@ -174,4 +178,4 @@ ] } ] -} \ No newline at end of file +} diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json index 1c37638c90..d28d42c10c 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json @@ -120,12 +120,10 @@ "collapsible": true, 
"key": "CreateAnimation", "label": "Create Animation", - "checkbox_key": "enabled", "children": [ { - "type": "boolean", - "key": "enabled", - "label": "Enabled" + "type": "label", + "label": "This plugin is not optional due to implicit creation through loading the \"rig\" family.\nThis family is also hidden from creation due to complexity in setup." }, { "type": "boolean", diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json index 07c8d8715b..b115ee3faa 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json @@ -103,7 +103,7 @@ }, { "key": "exclude_families", - "label": "Families", + "label": "Exclude Families", "type": "list", "object_type": "text" } diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json index 3019c9b1b5..f006392bef 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json @@ -308,26 +308,6 @@ { "type": "separator" }, - { - "type": "label", - "label": "Currently we are supporting also multiple reposition nodes.
Older single reformat node is still supported
and if it is activated then preference will
be on it. If you want to use multiple reformat
nodes then you need to disable single reformat
node and enable multiple Reformat nodes here." - }, - { - "type": "boolean", - "key": "reformat_node_add", - "label": "Add Reformat Node", - "default": false - }, - { - "type": "schema_template", - "name": "template_nuke_knob_inputs", - "template_data": [ - { - "label": "Reformat Node Knobs", - "key": "reformat_node_config" - } - ] - }, { "key": "reformat_nodes_config", "type": "dict", diff --git a/openpype/settings/handlers.py b/openpype/settings/handlers.py index 1d4c838f1a..671cabfbc2 100644 --- a/openpype/settings/handlers.py +++ b/openpype/settings/handlers.py @@ -1803,10 +1803,7 @@ class MongoLocalSettingsHandler(LocalSettingsHandler): def __init__(self, local_site_id=None): # Get mongo connection - from openpype.lib import ( - OpenPypeMongoConnection, - get_local_site_id - ) + from openpype.lib import get_local_site_id if local_site_id is None: local_site_id = get_local_site_id() diff --git a/openpype/tools/publisher/widgets/thumbnail_widget.py b/openpype/tools/publisher/widgets/thumbnail_widget.py index b17ca0adc8..80d156185b 100644 --- a/openpype/tools/publisher/widgets/thumbnail_widget.py +++ b/openpype/tools/publisher/widgets/thumbnail_widget.py @@ -7,8 +7,8 @@ from openpype.style import get_objected_colors from openpype.lib import ( run_subprocess, is_oiio_supported, - get_oiio_tools_path, - get_ffmpeg_tool_path, + get_oiio_tool_args, + get_ffmpeg_tool_args, ) from openpype.lib.transcoding import ( IMAGE_EXTENSIONS, @@ -481,12 +481,12 @@ def _convert_thumbnail_oiio(src_path, dst_path): if not is_oiio_supported(): return None - oiio_cmd = [ - get_oiio_tools_path(), + oiio_cmd = get_oiio_tool_args( + "oiiotool", "-i", src_path, "--subimage", "0", "-o", dst_path - ] + ) try: _run_silent_subprocess(oiio_cmd) except Exception: @@ -495,12 +495,12 @@ def _convert_thumbnail_oiio(src_path, dst_path): def _convert_thumbnail_ffmpeg(src_path, dst_path): - ffmpeg_cmd = [ - get_ffmpeg_tool_path(), + ffmpeg_cmd = get_ffmpeg_tool_args( + "ffmpeg", "-y", "-i", src_path, dst_path - ] + ) try: _run_silent_subprocess(ffmpeg_cmd) except Exception: diff --git a/openpype/tools/standalonepublish/widgets/widget_drop_frame.py b/openpype/tools/standalonepublish/widgets/widget_drop_frame.py index f46e31786c..306c43e85d 100644 --- a/openpype/tools/standalonepublish/widgets/widget_drop_frame.py +++ b/openpype/tools/standalonepublish/widgets/widget_drop_frame.py @@ -5,6 +5,8 @@ import clique import subprocess import openpype.lib from qtpy import QtWidgets, QtCore + +from openpype.lib import get_ffprobe_data from . 
import DropEmpty, ComponentsList, ComponentItem @@ -269,26 +271,8 @@ class DropDataFrame(QtWidgets.QFrame): self._process_data(data) def load_data_with_probe(self, filepath): - ffprobe_path = openpype.lib.get_ffmpeg_tool_path("ffprobe") - args = [ - "\"{}\"".format(ffprobe_path), - '-v', 'quiet', - '-print_format json', - '-show_format', - '-show_streams', - '"{}"'.format(filepath) - ] - ffprobe_p = subprocess.Popen( - ' '.join(args), - stdout=subprocess.PIPE, - shell=True - ) - ffprobe_output = ffprobe_p.communicate()[0] - if ffprobe_p.returncode != 0: - raise RuntimeError( - 'Failed on ffprobe: check if ffprobe path is set in PATH env' - ) - return json.loads(ffprobe_output)['streams'][0] + ffprobe_data = get_ffprobe_data(filepath) + return ffprobe_data["streams"][0] def get_file_data(self, data): filepath = data['files'][0] diff --git a/openpype/vendor/python/common/ayon_api/__init__.py b/openpype/vendor/python/common/ayon_api/__init__.py index 4b4e0f3359..0540d7692d 100644 --- a/openpype/vendor/python/common/ayon_api/__init__.py +++ b/openpype/vendor/python/common/ayon_api/__init__.py @@ -78,6 +78,8 @@ from ._api import ( download_dependency_package, upload_dependency_package, + upload_addon_zip, + get_bundles, create_bundle, update_bundle, @@ -262,6 +264,8 @@ __all__ = ( "download_dependency_package", "upload_dependency_package", + "upload_addon_zip", + "get_bundles", "create_bundle", "update_bundle", diff --git a/openpype/vendor/python/common/ayon_api/_api.py b/openpype/vendor/python/common/ayon_api/_api.py index 82ffdc7527..26a4b1530a 100644 --- a/openpype/vendor/python/common/ayon_api/_api.py +++ b/openpype/vendor/python/common/ayon_api/_api.py @@ -25,12 +25,29 @@ class GlobalServerAPI(ServerAPI): but that can be filled afterwards with calling 'login' method. """ - def __init__(self, site_id=None, client_version=None): + def __init__( + self, + site_id=None, + client_version=None, + default_settings_variant=None, + ssl_verify=None, + cert=None, + ): url = self.get_url() token = self.get_token() - super(GlobalServerAPI, self).__init__(url, token, site_id, client_version) - + super(GlobalServerAPI, self).__init__( + url, + token, + site_id, + client_version, + default_settings_variant, + ssl_verify, + cert, + # We want to make sure that server and api key validation + # happens all the time in 'GlobalServerAPI'. 
+ create_session=False, + ) self.validate_server_availability() self.create_session() @@ -129,17 +146,6 @@ class ServiceContext: addon_version = None service_name = None - @staticmethod - def get_value_from_envs(env_keys, value=None): - if value: - return value - - for env_key in env_keys: - value = os.environ.get(env_key) - if value: - break - return value - @classmethod def init_service( cls, @@ -150,14 +156,8 @@ class ServiceContext: service_name=None, connect=True ): - token = cls.get_value_from_envs( - ("AY_API_KEY", "AYON_API_KEY"), - token - ) - server_url = cls.get_value_from_envs( - ("AY_SERVER_URL", "AYON_SERVER_URL"), - server_url - ) + token = token or os.environ.get("AYON_API_KEY") + server_url = server_url or os.environ.get("AYON_SERVER_URL") if not server_url: raise FailedServiceInit("URL to server is not set") @@ -166,18 +166,9 @@ class ServiceContext: "Token to server {} is not set".format(server_url) ) - addon_name = cls.get_value_from_envs( - ("AY_ADDON_NAME", "AYON_ADDON_NAME"), - addon_name - ) - addon_version = cls.get_value_from_envs( - ("AY_ADDON_VERSION", "AYON_ADDON_VERSION"), - addon_version - ) - service_name = cls.get_value_from_envs( - ("AY_SERVICE_NAME", "AYON_SERVICE_NAME"), - service_name - ) + addon_name = addon_name or os.environ.get("AYON_ADDON_NAME") + addon_version = addon_version or os.environ.get("AYON_ADDON_VERSION") + service_name = service_name or os.environ.get("AYON_SERVICE_NAME") cls.token = token cls.server_url = server_url @@ -618,6 +609,11 @@ def delete_dependency_package(*args, **kwargs): return con.delete_dependency_package(*args, **kwargs) +def upload_addon_zip(*args, **kwargs): + con = get_server_api_connection() + return con.upload_addon_zip(*args, **kwargs) + + def get_project_anatomy_presets(*args, **kwargs): con = get_server_api_connection() return con.get_project_anatomy_presets(*args, **kwargs) diff --git a/openpype/vendor/python/common/ayon_api/server_api.py b/openpype/vendor/python/common/ayon_api/server_api.py index c886fed976..c578124cfc 100644 --- a/openpype/vendor/python/common/ayon_api/server_api.py +++ b/openpype/vendor/python/common/ayon_api/server_api.py @@ -4,7 +4,6 @@ import io import json import logging import collections -import datetime import platform import copy import uuid @@ -325,6 +324,8 @@ class ServerAPI(object): available then 'True' is used. cert (Optional[str]): Path to certificate file. Looks for env variable value 'AYON_CERT_FILE' by default. + create_session (Optional[bool]): Create session for connection if + token is available. Default is True. 
""" def __init__( @@ -336,6 +337,7 @@ class ServerAPI(object): default_settings_variant=None, ssl_verify=None, cert=None, + create_session=True, ): if not base_url: raise ValueError("Invalid server URL {}".format(str(base_url))) @@ -367,6 +369,7 @@ class ServerAPI(object): self._access_token_is_service = None self._token_is_valid = None + self._token_validation_started = False self._server_available = None self._server_version = None self._server_version_tuple = None @@ -389,6 +392,11 @@ class ServerAPI(object): self._as_user_stack = _AsUserStack() self._thumbnail_cache = ThumbnailCache(True) + # Create session + if self._access_token and create_session: + self.validate_server_availability() + self.create_session() + @property def log(self): if self._log is None: @@ -652,6 +660,7 @@ class ServerAPI(object): def validate_token(self): try: + self._token_validation_started = True # TODO add other possible validations # - existence of 'user' key in info # - validate that 'site_id' is in 'sites' in info @@ -661,6 +670,9 @@ class ServerAPI(object): except UnauthorizedError: self._token_is_valid = False + + finally: + self._token_validation_started = False return self._token_is_valid def set_token(self, token): @@ -673,8 +685,25 @@ class ServerAPI(object): self._token_is_valid = None self.close_session() - def create_session(self): + def create_session(self, ignore_existing=True, force=False): + """Create a connection session. + + Session helps to keep connection with server without + need to reconnect on each call. + + Args: + ignore_existing (bool): If session already exists, + ignore creation. + force (bool): If session already exists, close it and + create new. + """ + + if force and self._session is not None: + self.close_session() + if self._session is not None: + if ignore_existing: + return raise ValueError("Session is already created.") self._as_user_stack.clear() @@ -841,7 +870,19 @@ class ServerAPI(object): self._access_token) return headers - def login(self, username, password): + def login(self, username, password, create_session=True): + """Login to server. + + Args: + username (str): Username. + password (str): Password. + create_session (Optional[bool]): Create session after login. + Default: True. + + Raises: + AuthenticationError: Login failed. 
+ """ + if self.has_valid_token: try: user_info = self.get_user() @@ -851,7 +892,8 @@ class ServerAPI(object): current_username = user_info.get("name") if current_username == username: self.close_session() - self.create_session() + if create_session: + self.create_session() return self.reset_token() @@ -875,7 +917,9 @@ class ServerAPI(object): if not self.has_valid_token: raise AuthenticationError("Invalid credentials") - self.create_session() + + if create_session: + self.create_session() def logout(self, soft=False): if self._access_token: @@ -888,6 +932,15 @@ class ServerAPI(object): def _do_rest_request(self, function, url, **kwargs): if self._session is None: + # Validate token if was not yet validated + # - ignore validation if we're in middle of + # validation + if ( + self._token_is_valid is None + and not self._token_validation_started + ): + self.validate_token() + if "headers" not in kwargs: kwargs["headers"] = self.get_headers() @@ -1328,6 +1381,7 @@ class ServerAPI(object): response = post_func(url, data=stream, **kwargs) response.raise_for_status() progress.set_transferred_size(size) + return response def upload_file( self, endpoint, filepath, progress=None, request_type=None @@ -1344,6 +1398,9 @@ class ServerAPI(object): to track upload progress. request_type (Optional[RequestType]): Type of request that will be used to upload file. + + Returns: + requests.Response: Response object. """ if endpoint.startswith(self._base_url): @@ -1362,7 +1419,7 @@ class ServerAPI(object): progress.set_started() try: - self._upload_file(url, filepath, progress, request_type) + return self._upload_file(url, filepath, progress, request_type) except Exception as exc: progress.set_failed(str(exc)) @@ -1640,7 +1697,7 @@ class ServerAPI(object): Args: addon_name (str): Name of addon. addon_version (str): Version of addon. - subpaths (tuple[str]): Any amount of subpaths that are added to + *subpaths (str): Any amount of subpaths that are added to addon url. Returns: @@ -1848,9 +1905,12 @@ class ServerAPI(object): dst_filename (str): Destination filename. progress (Optional[TransferProgress]): Object that gives ability to track download progress. + + Returns: + requests.Response: Response object. """ - self.upload_file( + return self.upload_file( "desktop/installers/{}".format(dst_filename), src_filepath, progress=progress @@ -2162,6 +2222,33 @@ class ServerAPI(object): return create_dependency_package_basename(platform_name) + def upload_addon_zip(self, src_filepath, progress=None): + """Upload addon zip file to server. + + File is validated on server. If it is valid, it is installed. It will + create an event job which can be tracked (tracking part is not + implemented yet). + + Example output: + {'eventId': 'a1bfbdee27c611eea7580242ac120003'} + + Args: + src_filepath (str): Path to a zip file. + progress (Optional[TransferProgress]): Object to keep track about + upload state. + + Returns: + dict[str, Any]: Response data from server. + """ + + response = self.upload_file( + "addons/install", + src_filepath, + progress=progress, + request_type=RequestTypes.post, + ) + return response.json() + def _get_bundles_route(self): major, minor, patch, _, _ = self.server_version_tuple # Backwards compatibility for AYON server 0.3.0 @@ -3051,6 +3138,65 @@ class ServerAPI(object): fill_own_attribs(project) return project + def get_folders_hierarchy( + self, + project_name, + search_string=None, + folder_types=None + ): + """Get project hierarchy. + + All folders in project in hierarchy data structure. 
+ + Example output: + { + "hierarchy": [ + { + "id": "...", + "name": "...", + "label": "...", + "status": "...", + "folderType": "...", + "hasTasks": False, + "taskNames": [], + "parents": [], + "parentId": None, + "children": [...children folders...] + }, + ... + ] + } + + Args: + project_name (str): Project where to look for folders. + search_string (Optional[str]): Search string to filter folders. + folder_types (Optional[Iterable[str]]): Folder types to filter. + + Returns: + dict[str, Any]: Response data from server. + """ + + if folder_types: + folder_types = ",".join(folder_types) + + query_fields = [ + "{}={}".format(key, value) + for key, value in ( + ("search", search_string), + ("types", folder_types), + ) + if value + ] + query = "" + if query_fields: + query = "?{}".format(",".join(query_fields)) + + response = self.get( + "projects/{}/hierarchy{}".format(project_name, query) + ) + response.raise_for_status() + return response.data + def get_folders( self, project_name, @@ -3622,7 +3768,6 @@ class ServerAPI(object): if filtered_product is not None: yield filtered_product - def get_product_by_id( self, project_name, diff --git a/openpype/vendor/python/common/ayon_api/thumbnails.py b/openpype/vendor/python/common/ayon_api/thumbnails.py index 11734ca762..50acd94dcb 100644 --- a/openpype/vendor/python/common/ayon_api/thumbnails.py +++ b/openpype/vendor/python/common/ayon_api/thumbnails.py @@ -50,7 +50,7 @@ class ThumbnailCache: """ if self._thumbnails_dir is None: - directory = appdirs.user_data_dir("AYON", "Ynput") + directory = appdirs.user_data_dir("ayon", "ynput") self._thumbnails_dir = os.path.join(directory, "thumbnails") return self._thumbnails_dir diff --git a/openpype/vendor/python/common/ayon_api/utils.py b/openpype/vendor/python/common/ayon_api/utils.py index 69fd8e9b41..93822a58ac 100644 --- a/openpype/vendor/python/common/ayon_api/utils.py +++ b/openpype/vendor/python/common/ayon_api/utils.py @@ -359,7 +359,7 @@ class TransferProgress: def __init__(self): self._started = False self._transfer_done = False - self._transfered = 0 + self._transferred = 0 self._content_size = None self._failed = False @@ -369,25 +369,66 @@ class TransferProgress: self._destination_url = "N/A" def get_content_size(self): + """Content size in bytes. + + Returns: + Union[int, None]: Content size in bytes or None + if is unknown. + """ + return self._content_size def set_content_size(self, content_size): + """Set content size in bytes. + + Args: + content_size (int): Content size in bytes. + + Raises: + ValueError: If content size was already set. + """ + if self._content_size is not None: raise ValueError("Content size was set more then once") self._content_size = content_size def get_started(self): + """Transfer was started. + + Returns: + bool: True if transfer started. + """ + return self._started def set_started(self): + """Mark that transfer started. + + Raises: + ValueError: If transfer was already started. + """ + if self._started: raise ValueError("Progress already started") self._started = True def get_transfer_done(self): + """Transfer finished. + + Returns: + bool: Transfer finished. + """ + return self._transfer_done def set_transfer_done(self): + """Mark progress as transfer finished. + + Raises: + ValueError: If progress was already marked as done + or wasn't started yet. 
+ """ + if self._transfer_done: raise ValueError("Progress was already marked as done") if not self._started: @@ -395,41 +436,117 @@ class TransferProgress: self._transfer_done = True def get_failed(self): + """Transfer failed. + + Returns: + bool: True if transfer failed. + """ + return self._failed def get_fail_reason(self): + """Get reason why transfer failed. + + Returns: + Union[str, None]: Reason why transfer + failed or None. + """ + return self._fail_reason def set_failed(self, reason): + """Mark progress as failed. + + Args: + reason (str): Reason why transfer failed. + """ + self._fail_reason = reason self._failed = True def get_transferred_size(self): - return self._transfered + """Already transferred size in bytes. - def set_transferred_size(self, transfered): - self._transfered = transfered + Returns: + int: Already transferred size in bytes. + """ + + return self._transferred + + def set_transferred_size(self, transferred): + """Set already transferred size in bytes. + + Args: + transferred (int): Already transferred size in bytes. + """ + + self._transferred = transferred def add_transferred_chunk(self, chunk_size): - self._transfered += chunk_size + """Add transferred chunk size in bytes. + + Args: + chunk_size (int): Add transferred chunk size + in bytes. + """ + + self._transferred += chunk_size def get_source_url(self): + """Source url from where transfer happens. + + Note: + Consider this as title. Must be set using + 'set_source_url' or 'N/A' will be returned. + + Returns: + str: Source url from where transfer happens. + """ + return self._source_url def set_source_url(self, url): + """Set source url from where transfer happens. + + Args: + url (str): Source url from where transfer happens. + """ + self._source_url = url def get_destination_url(self): + """Destination url where transfer happens. + + Note: + Consider this as title. Must be set using + 'set_source_url' or 'N/A' will be returned. + + Returns: + str: Destination url where transfer happens. + """ + return self._destination_url def set_destination_url(self, url): + """Set destination url where transfer happens. + + Args: + url (str): Destination url where transfer happens. + """ + self._destination_url = url @property def is_running(self): + """Check if transfer is running. + + Returns: + bool: True if transfer is running. + """ + if ( not self.started - or self.done + or self.transfer_done or self.failed ): return False @@ -437,9 +554,16 @@ class TransferProgress: @property def transfer_progress(self): + """Get transfer progress in percents. + + Returns: + Union[float, None]: Transfer progress in percents or 'None' + if content size is unknown. 
+ """ + if self._content_size is None: return None - return (self._transfered * 100.0) / float(self._content_size) + return (self._transferred * 100.0) / float(self._content_size) content_size = property(get_content_size, set_content_size) started = property(get_started) @@ -448,7 +572,6 @@ class TransferProgress: fail_reason = property(get_fail_reason) source_url = property(get_source_url, set_source_url) destination_url = property(get_destination_url, set_destination_url) - content_size = property(get_content_size, set_content_size) transferred_size = property(get_transferred_size, set_transferred_size) diff --git a/openpype/vendor/python/common/ayon_api/version.py b/openpype/vendor/python/common/ayon_api/version.py index 238f6e9426..93024ea5f2 100644 --- a/openpype/vendor/python/common/ayon_api/version.py +++ b/openpype/vendor/python/common/ayon_api/version.py @@ -1,2 +1,2 @@ """Package declaring Python API for Ayon server.""" -__version__ = "0.3.2" +__version__ = "0.3.3" diff --git a/openpype/version.py b/openpype/version.py index 40375bef43..61bb0f8288 100644 --- a/openpype/version.py +++ b/openpype/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring Pype version.""" -__version__ = "3.16.2-nightly.1" +__version__ = "3.16.3-nightly.2" diff --git a/pyproject.toml b/pyproject.toml index fb6e222f27..c4596a7edd 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "OpenPype" -version = "3.16.1" # OpenPype +version = "3.16.2" # OpenPype description = "Open VFX and Animation pipeline with support." authors = ["OpenPype Team "] license = "MIT License" diff --git a/server_addon/README.md b/server_addon/README.md index fa9a6001d2..c6d467adaa 100644 --- a/server_addon/README.md +++ b/server_addon/README.md @@ -1,5 +1,5 @@ -# OpenPype addon for AYON server -Convert openpype into AYON addon which can be installed on AYON server. The versioning of the addon is following versioning of OpenPype. +# Addons for AYON server +Preparation of AYON addons based on OpenPype codebase. The output is a bunch of zip files in `./packages` directory that can be uploaded to AYON server. One of the packages is `openpype` which is OpenPype code converted to AYON addon. The addon is must have requirement to be able to use `ayon-launcher`. The versioning of `openpype` addon is following versioning of OpenPype. The other addons contain only settings models. ## Intro OpenPype is transitioning to AYON, a dedicated server with its own database, moving away from MongoDB. During this transition period, OpenPype will remain compatible with both MongoDB and AYON. However, we will gradually update the codebase to align with AYON's data structure and separate individual components into addons. @@ -11,11 +11,24 @@ Since the implementation of the AYON Launcher is not yet fully completed, we wil During this transitional period, the AYON Launcher addon will be a requirement as the entry point for using the AYON Launcher. ## How to start -There is a `create_ayon_addon.py` python file which contains logic how to create server addon from OpenPype codebase. Just run the code. +There is a `create_ayon_addons.py` python file which contains logic how to create server addon from OpenPype codebase. Just run the code. ```shell -./.poetry/bin/poetry run python ./server_addon/create_ayon_addon.py +./.poetry/bin/poetry run python ./server_addon/create_ayon_addons.py ``` -It will create directory `./package/openpype//*` folder with all files necessary for AYON server. 
You can then copy `./package/openpype/` to server addons, or zip the folder and upload it to AYON server. Restart server to update addons information, add the addon version to server bundle and set the bundle for production or staging usage. +It will create directory `./packages/.zip` files for AYON server. You can then copy upload the zip files to AYON server. Restart server to update addons information, add the addon version to server bundle and set the bundle for production or staging usage. Once addon is on server and is enabled, you can just run AYON launcher. Content will be downloaded and used automatically. + +### Additional arguments +Additional arguments are useful for development purposes. + +To skip zip creation to keep only server ready folder structure, pass `--skip-zip` argument. +```shell +./.poetry/bin/poetry run python ./server_addon/create_ayon_addons.py --skip-zip +``` + +To create both zips and keep folder structure, pass `--keep-sources` argument. +```shell +./.poetry/bin/poetry run python ./server_addon/create_ayon_addons.py --keep-sources +``` diff --git a/server_addon/aftereffects/LICENSE b/server_addon/aftereffects/LICENSE new file mode 100644 index 0000000000..d645695673 --- /dev/null +++ b/server_addon/aftereffects/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. 
+ + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/server_addon/aftereffects/README.md b/server_addon/aftereffects/README.md new file mode 100644 index 0000000000..b2f34f3407 --- /dev/null +++ b/server_addon/aftereffects/README.md @@ -0,0 +1,4 @@ +AfterEffects Addon +=============== + +Integration with Adobe AfterEffects. 
diff --git a/server_addon/aftereffects/server/__init__.py b/server_addon/aftereffects/server/__init__.py new file mode 100644 index 0000000000..e14e76e9db --- /dev/null +++ b/server_addon/aftereffects/server/__init__.py @@ -0,0 +1,16 @@ +from ayon_server.addons import BaseServerAddon + +from .settings import AfterEffectsSettings, DEFAULT_AFTEREFFECTS_SETTING +from .version import __version__ + + +class AfterEffects(BaseServerAddon): + name = "aftereffects" + title = "AfterEffects" + version = __version__ + + settings_model = AfterEffectsSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_AFTEREFFECTS_SETTING) diff --git a/server_addon/aftereffects/server/settings/__init__.py b/server_addon/aftereffects/server/settings/__init__.py new file mode 100644 index 0000000000..4e96804b4a --- /dev/null +++ b/server_addon/aftereffects/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + AfterEffectsSettings, + DEFAULT_AFTEREFFECTS_SETTING, +) + + +__all__ = ( + "AfterEffectsSettings", + "DEFAULT_AFTEREFFECTS_SETTING", +) diff --git a/server_addon/aftereffects/server/settings/creator_plugins.py b/server_addon/aftereffects/server/settings/creator_plugins.py new file mode 100644 index 0000000000..ee52fadd40 --- /dev/null +++ b/server_addon/aftereffects/server/settings/creator_plugins.py @@ -0,0 +1,18 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class CreateRenderPlugin(BaseSettingsModel): + mark_for_review: bool = Field(True, title="Review") + defaults: list[str] = Field( + default_factory=list, + title="Default Variants" + ) + + +class AfterEffectsCreatorPlugins(BaseSettingsModel): + RenderCreator: CreateRenderPlugin = Field( + title="Create Render", + default_factory=CreateRenderPlugin, + ) diff --git a/server_addon/aftereffects/server/settings/imageio.py b/server_addon/aftereffects/server/settings/imageio.py new file mode 100644 index 0000000000..55160ffd11 --- /dev/null +++ b/server_addon/aftereffects/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class AfterEffectsImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/aftereffects/server/settings/main.py b/server_addon/aftereffects/server/settings/main.py new file mode 100644 index 
0000000000..04d2e51cc9 --- /dev/null +++ b/server_addon/aftereffects/server/settings/main.py @@ -0,0 +1,56 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + +from .imageio import AfterEffectsImageIOModel +from .creator_plugins import AfterEffectsCreatorPlugins +from .publish_plugins import ( + AfterEffectsPublishPlugins, + AE_PUBLISH_PLUGINS_DEFAULTS, +) +from .workfile_builder import WorkfileBuilderPlugin +from .templated_workfile_build import TemplatedWorkfileBuildModel + + +class AfterEffectsSettings(BaseSettingsModel): + """AfterEffects Project Settings.""" + + imageio: AfterEffectsImageIOModel = Field( + default_factory=AfterEffectsImageIOModel, + title="OCIO config" + ) + create: AfterEffectsCreatorPlugins = Field( + default_factory=AfterEffectsCreatorPlugins, + title="Creator plugins" + ) + publish: AfterEffectsPublishPlugins = Field( + default_factory=AfterEffectsPublishPlugins, + title="Publish plugins" + ) + workfile_builder: WorkfileBuilderPlugin = Field( + default_factory=WorkfileBuilderPlugin, + title="Workfile Builder" + ) + templated_workfile_build: TemplatedWorkfileBuildModel = Field( + default_factory=TemplatedWorkfileBuildModel, + title="Templated Workfile Build Settings" + ) + + +DEFAULT_AFTEREFFECTS_SETTING = { + "create": { + "RenderCreator": { + "mark_for_review": True, + "defaults": [ + "Main" + ] + } + }, + "publish": AE_PUBLISH_PLUGINS_DEFAULTS, + "workfile_builder": { + "create_first_version": False, + "custom_templates": [] + }, + "templated_workfile_build": { + "profiles": [] + }, +} diff --git a/server_addon/aftereffects/server/settings/publish_plugins.py b/server_addon/aftereffects/server/settings/publish_plugins.py new file mode 100644 index 0000000000..78445d3223 --- /dev/null +++ b/server_addon/aftereffects/server/settings/publish_plugins.py @@ -0,0 +1,68 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class CollectReviewPluginModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + + +class ValidateSceneSettingsModel(BaseSettingsModel): + """Validate naming of products and layers""" + + # _isGroup = True + enabled: bool = Field(True, title="Enabled") + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + skip_resolution_check: list[str] = Field( + default_factory=list, + title="Skip Resolution Check for Tasks", + ) + skip_timelines_check: list[str] = Field( + default_factory=list, + title="Skip Timeline Check for Tasks", + ) + + +class ValidateContainersModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + optional: bool = Field(True, title="Optional") + active: bool = Field(True, title="Active") + + +class AfterEffectsPublishPlugins(BaseSettingsModel): + CollectReview: CollectReviewPluginModel = Field( + default_factory=CollectReviewPluginModel, + title="Collect Review", + ) + ValidateSceneSettings: ValidateSceneSettingsModel = Field( + default_factory=ValidateSceneSettingsModel, + title="Validate Scene Settings", + ) + ValidateContainers: ValidateContainersModel = Field( + default_factory=ValidateContainersModel, + title="Validate Containers", + ) + + +AE_PUBLISH_PLUGINS_DEFAULTS = { + "CollectReview": { + "enabled": True + }, + "ValidateSceneSettings": { + "enabled": True, + "optional": True, + "active": True, + "skip_resolution_check": [ + ".*" + ], + "skip_timelines_check": [ + ".*" + ] + }, + "ValidateContainers": { + "enabled": True, + "optional": True, + "active": True, + } +} diff --git 
a/server_addon/aftereffects/server/settings/templated_workfile_build.py b/server_addon/aftereffects/server/settings/templated_workfile_build.py new file mode 100644 index 0000000000..e0245c8d06 --- /dev/null +++ b/server_addon/aftereffects/server/settings/templated_workfile_build.py @@ -0,0 +1,33 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + task_types_enum, +) + + +class TemplatedWorkfileProfileModel(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, + title="Task names" + ) + path: str = Field( + title="Path to template" + ) + keep_placeholder: bool = Field( + False, + title="Keep placeholders") + create_first_version: bool = Field( + True, + title="Create first version" + ) + + +class TemplatedWorkfileBuildModel(BaseSettingsModel): + profiles: list[TemplatedWorkfileProfileModel] = Field( + default_factory=list + ) diff --git a/server_addon/aftereffects/server/settings/workfile_builder.py b/server_addon/aftereffects/server/settings/workfile_builder.py new file mode 100644 index 0000000000..d45d3f7f24 --- /dev/null +++ b/server_addon/aftereffects/server/settings/workfile_builder.py @@ -0,0 +1,25 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel, MultiplatformPathModel + + +class CustomBuilderTemplate(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + ) + template_path: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel + ) + + +class WorkfileBuilderPlugin(BaseSettingsModel): + _title = "Workfile Builder" + create_first_version: bool = Field( + False, + title="Create first workfile" + ) + + custom_templates: list[CustomBuilderTemplate] = Field( + default_factory=list + ) diff --git a/server_addon/aftereffects/server/version.py b/server_addon/aftereffects/server/version.py new file mode 100644 index 0000000000..a242f0e757 --- /dev/null +++ b/server_addon/aftereffects/server/version.py @@ -0,0 +1,3 @@ +# -*- coding: utf-8 -*- +"""Package declaring addon version.""" +__version__ = "0.1.1" diff --git a/server_addon/applications/server/__init__.py b/server_addon/applications/server/__init__.py new file mode 100644 index 0000000000..fdec05006b --- /dev/null +++ b/server_addon/applications/server/__init__.py @@ -0,0 +1,154 @@ +import os +import json +import copy + +from ayon_server.addons import BaseServerAddon +from ayon_server.lib.postgres import Postgres + +from .version import __version__ +from .settings import ApplicationsAddonSettings, DEFAULT_VALUES + + +def get_enum_items_from_groups(groups): + label_by_name = {} + for group in groups: + group_name = group["name"] + group_label = group["label"] or group_name + for variant in group["variants"]: + variant_name = variant["name"] + if not variant_name: + continue + variant_label = variant["label"] or variant_name + full_name = f"{group_name}/{variant_name}" + full_label = f"{group_label} {variant_label}" + label_by_name[full_name] = full_label + enum_items = [] + for full_name in sorted(label_by_name): + enum_items.append( + {"value": full_name, "label": label_by_name[full_name]} + ) + return enum_items + + +class ApplicationsAddon(BaseServerAddon): + name = "applications" + title = "Applications" + version = __version__ + settings_model = ApplicationsAddonSettings + + async def get_default_settings(self): + applications_path = os.path.join(self.addon_dir, 
"applications.json") + tools_path = os.path.join(self.addon_dir, "tools.json") + default_values = copy.deepcopy(DEFAULT_VALUES) + with open(applications_path, "r") as stream: + default_values.update(json.load(stream)) + + with open(tools_path, "r") as stream: + default_values.update(json.load(stream)) + + return self.get_settings_model()(**default_values) + + async def setup(self): + need_restart = await self.create_applications_attribute() + if need_restart: + self.request_server_restart() + + async def create_applications_attribute(self) -> bool: + """Make sure there are required attributes which ftrack addon needs. + + Returns: + bool: 'True' if an attribute was created or updated. + """ + + settings_model = await self.get_studio_settings() + studio_settings = settings_model.dict() + applications = studio_settings["applications"] + _applications = applications.pop("additional_apps") + for name, value in applications.items(): + value["name"] = name + _applications.append(value) + + query = "SELECT name, position, scope, data from public.attributes" + + apps_attrib_name = "applications" + tools_attrib_name = "tools" + + apps_enum = get_enum_items_from_groups(_applications) + tools_enum = get_enum_items_from_groups(studio_settings["tool_groups"]) + apps_attribute_data = { + "type": "list_of_strings", + "title": "Applications", + "enum": apps_enum + } + tools_attribute_data = { + "type": "list_of_strings", + "title": "Tools", + "enum": tools_enum + } + apps_scope = ["project"] + tools_scope = ["project", "folder", "task"] + + apps_match_position = None + apps_matches = False + tools_match_position = None + tools_matches = False + position = 1 + async for row in Postgres.iterate(query): + position += 1 + if row["name"] == apps_attrib_name: + # Check if scope is matching ftrack addon requirements + if ( + set(row["scope"]) == set(apps_scope) + and row["data"].get("enum") == apps_enum + ): + apps_matches = True + apps_match_position = row["position"] + + elif row["name"] == tools_attrib_name: + if ( + set(row["scope"]) == set(tools_scope) + and row["data"].get("enum") == tools_enum + ): + tools_matches = True + tools_match_position = row["position"] + + if apps_matches and tools_matches: + return False + + postgre_query = "\n".join(( + "INSERT INTO public.attributes", + " (name, position, scope, data)", + "VALUES", + " ($1, $2, $3, $4)", + "ON CONFLICT (name)", + "DO UPDATE SET", + " scope = $3,", + " data = $4", + )) + if not apps_matches: + # Reuse position from found attribute + if apps_match_position is None: + apps_match_position = position + position += 1 + + await Postgres.execute( + postgre_query, + apps_attrib_name, + apps_match_position, + apps_scope, + apps_attribute_data, + ) + + if not tools_matches: + if tools_match_position is None: + tools_match_position = position + position += 1 + + await Postgres.execute( + postgre_query, + tools_attrib_name, + tools_match_position, + tools_scope, + tools_attribute_data, + ) + return True diff --git a/server_addon/applications/server/applications.json b/server_addon/applications/server/applications.json new file mode 100644 index 0000000000..b19308ee7c --- /dev/null +++ b/server_addon/applications/server/applications.json @@ -0,0 +1,1125 @@ +{ + "applications": { + "maya": { + "enabled": true, + "label": "Maya", + "icon": "{}/app_icons/maya.png", + "host_name": "maya", + "environment": "{\n \"MAYA_DISABLE_CLIC_IPM\": \"Yes\",\n \"MAYA_DISABLE_CIP\": \"Yes\",\n \"MAYA_DISABLE_CER\": \"Yes\",\n \"PYMEL_SKIP_MEL_INIT\": \"Yes\",\n \"LC_ALL\": 
\"C\"\n}\n", + "variants": [ + { + "name": "2023", + "label": "2023", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2023\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2023/bin/maya" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2023\"\n}", + "use_python_2": false + }, + { + "name": "2022", + "label": "2022", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2022\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2022/bin/maya" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2022\"\n}", + "use_python_2": false + }, + { + "name": "2020", + "label": "2020", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2020\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2020/bin/maya" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2020\"\n}", + "use_python_2": true + }, + { + "name": "2019", + "label": "2019", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2019\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2019/bin/maya" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2019\"\n}", + "use_python_2": true + }, + { + "name": "2018", + "label": "2018", + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\Maya2018\\bin\\maya.exe" + ], + "darwin": [], + "linux": [ + "/usr/autodesk/maya2018/bin/maya" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MAYA_VERSION\": \"2018\"\n}", + "use_python_2": true + } + ] + }, + "adsk_3dsmax": { + "enabled": true, + "label": "3ds Max", + "icon": "{}/app_icons/3dsmax.png", + "host_name": "max", + "environment": "{\n \"ADSK_3DSMAX_STARTUPSCRIPTS_ADDON_DIR\": \"{OPENPYPE_ROOT}/openpype/hosts/max/startup\"\n}", + "variants": [ + { + "name": "2023", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Autodesk\\3ds Max 2023\\3dsmax.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [ + "-U MAXScript {OPENPYPE_ROOT}\\openpype\\hosts\\max\\startup\\startup.ms" + ], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"3DSMAX_VERSION\": \"2023\"\n}" + } + ] + }, + "flame": { + "enabled": true, + "label": "Flame", + "icon": "{}/app_icons/flame.png", + "host_name": "flame", + "environment": "{\n \"FLAME_SCRIPT_DIRS\": {\n \"windows\": \"\",\n \"darwin\": \"\",\n \"linux\": \"\"\n },\n \"FLAME_WIRETAP_HOSTNAME\": \"\",\n \"FLAME_WIRETAP_VOLUME\": \"stonefs\",\n \"FLAME_WIRETAP_GROUP\": \"staff\"\n}", + "variants": [ + { + "name": "2021", + "label": "2021", + "executables": { + "windows": [], + "darwin": [ + "/opt/Autodesk/flame_2021/bin/flame.app/Contents/MacOS/startApp" + ], + "linux": [ + "/opt/Autodesk/flame_2021/bin/startApplication" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"OPENPYPE_FLAME_PYTHON_EXEC\": \"/opt/Autodesk/python/2021/bin/python2.7\",\n \"OPENPYPE_FLAME_PYTHONPATH\": \"/opt/Autodesk/flame_2021/python\",\n \"OPENPYPE_WIRETAP_TOOLS\": \"/opt/Autodesk/wiretap/tools/2021\"\n}", + "use_python_2": true + }, + { + "name": "2021_1", + "label": "2021.1", + "executables": { + "windows": [], + "darwin": [ + 
"/opt/Autodesk/flame_2021.1/bin/flame.app/Contents/MacOS/startApp" + ], + "linux": [ + "/opt/Autodesk/flame_2021.1/bin/startApplication" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"OPENPYPE_FLAME_PYTHON_EXEC\": \"/opt/Autodesk/python/2021.1/bin/python2.7\",\n \"OPENPYPE_FLAME_PYTHONPATH\": \"/opt/Autodesk/flame_2021.1/python\",\n \"OPENPYPE_WIRETAP_TOOLS\": \"/opt/Autodesk/wiretap/tools/2021.1\"\n}", + "use_python_2": true + } + ] + }, + "nuke": { + "enabled": true, + "label": "Nuke", + "icon": "{}/app_icons/nuke.png", + "host_name": "nuke", + "environment": "{\n \"NUKE_PATH\": [\n \"{NUKE_PATH}\",\n \"{OPENPYPE_STUDIO_PLUGINS}/nuke\"\n ]\n}", + "variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/Nuke14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/Nuke13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/Nuke13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "nukeassist": { + "enabled": true, + "label": "Nuke Assist", + "icon": "{}/app_icons/nuke.png", + "host_name": "nuke", + "environment": "{\n \"NUKE_PATH\": [\n \"{NUKE_PATH}\",\n \"{OPENPYPE_STUDIO_PLUGINS}/nuke\"\n ]\n}", + "variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/NukeAssist14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [ + "--nukeassist" + ], + "darwin": [], + "linux": [ + "--nukeassist" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/NukeAssist13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [ + "--nukeassist" + ], + "darwin": [], + "linux": [ + "--nukeassist" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/NukeAssist13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": { + "windows": [ + "--nukeassist" + ], + "darwin": [], + "linux": [ + "--nukeassist" + ] + }, + "environment": "{}" + } + ] + }, + "nukex": { + "enabled": true, + "label": "Nuke X", + "icon": "{}/app_icons/nukex.png", + "host_name": "nuke", + "environment": "{\n \"NUKE_PATH\": [\n \"{NUKE_PATH}\",\n \"{OPENPYPE_STUDIO_PLUGINS}/nuke\"\n ]\n}", + "variants": [ + { + "name": "14-0", + 
"label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/NukeX14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [ + "--nukex" + ], + "darwin": [], + "linux": [ + "--nukex" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/NukeX13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [ + "--nukex" + ], + "darwin": [], + "linux": [ + "--nukex" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/NukeX13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": { + "windows": [ + "--nukex" + ], + "darwin": [], + "linux": [ + "--nukex" + ] + }, + "environment": "{}" + } + ] + }, + "nukestudio": { + "enabled": true, + "label": "Nuke Studio", + "icon": "{}/app_icons/nukestudio.png", + "host_name": "hiero", + "environment": "{\n \"WORKFILES_STARTUP\": \"0\",\n \"TAG_ASSETBUILD_STARTUP\": \"0\"\n}", + "variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/NukeStudio14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [ + "--studio" + ], + "darwin": [], + "linux": [ + "--studio" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/NukeStudio13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [ + "--studio" + ], + "darwin": [], + "linux": [ + "--studio" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/NukeStudio13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": { + "windows": [ + "--studio" + ], + "darwin": [], + "linux": [ + "--studio" + ] + }, + "environment": "{}" + } + ] + }, + "hiero": { + "enabled": true, + "label": "Hiero", + "icon": "{}/app_icons/hiero.png", + "host_name": "hiero", + "environment": "{\n \"WORKFILES_STARTUP\": \"0\",\n \"TAG_ASSETBUILD_STARTUP\": \"0\"\n}", + "variants": [ + { + "name": "14-0", + "label": "14.0", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke14.0v4\\Nuke14.0.exe" + ], + "darwin": [ + "/Applications/Nuke14.0v4/Hiero14.0v4.app" + ], + "linux": [ + "/usr/local/Nuke14.0v4/Nuke14.0" + ] + }, + "arguments": { + "windows": [ + "--hiero" + ], + "darwin": [], + "linux": [ + "--hiero" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-2", + "label": "13.2", + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.2v5\\Nuke13.2.exe" + ], + "darwin": [ + "/Applications/Nuke13.2v5/Hiero13.2v5.app" + ], + "linux": [ + "/usr/local/Nuke13.2v5/Nuke13.2" + ] + }, + "arguments": { + "windows": [ + "--hiero" + ], + "darwin": [], + 
"linux": [ + "--hiero" + ] + }, + "environment": "{}", + "use_python_2": false + }, + { + "name": "13-0", + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Nuke13.0v1\\Nuke13.0.exe" + ], + "darwin": [ + "/Applications/Nuke13.0v1/Hiero13.0v1.app" + ], + "linux": [ + "/usr/local/Nuke13.0v1/Nuke13.0" + ] + }, + "arguments": { + "windows": [ + "--hiero" + ], + "darwin": [], + "linux": [ + "--hiero" + ] + }, + "environment": "{}" + } + ] + }, + "fusion": { + "enabled": true, + "label": "Fusion", + "icon": "{}/app_icons/fusion.png", + "host_name": "fusion", + "environment": "{\n \"FUSION_PYTHON3_HOME\": {\n \"windows\": \"{LOCALAPPDATA}/Programs/Python/Python36\",\n \"darwin\": \"~/Library/Python/3.6/bin\",\n \"linux\": \"/opt/Python/3.6/bin\"\n }\n}", + "variants": [ + { + "name": "17", + "label": "17", + "executables": { + "windows": [ + "C:\\Program Files\\Blackmagic Design\\Fusion 17\\Fusion.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "16", + "label": "16", + "executables": { + "windows": [ + "C:\\Program Files\\Blackmagic Design\\Fusion 16\\Fusion.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "9", + "label": "9", + "executables": { + "windows": [ + "C:\\Program Files\\Blackmagic Design\\Fusion 9\\Fusion.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "resolve": { + "enabled": true, + "label": "Resolve", + "icon": "{}/app_icons/resolve.png", + "host_name": "resolve", + "environment": "{\n \"RESOLVE_UTILITY_SCRIPTS_SOURCE_DIR\": [],\n \"RESOLVE_PYTHON3_HOME\": {\n \"windows\": \"{LOCALAPPDATA}/Programs/Python/Python36\",\n \"darwin\": \"~/Library/Python/3.6/bin\",\n \"linux\": \"/opt/Python/3.6/bin\"\n }\n}", + "variants": [ + { + "name": "stable", + "label": "stable", + "executables": { + "windows": [ + "C:/Program Files/Blackmagic Design/DaVinci Resolve/Resolve.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "houdini": { + "enabled": true, + "label": "Houdini", + "icon": "{}/app_icons/houdini.png", + "host_name": "houdini", + "environment": "{}", + "variants": [ + { + "name": "18-5", + "label": "18.5", + "executables": { + "windows": [ + "C:\\Program Files\\Side Effects Software\\Houdini 18.5.499\\bin\\houdini.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": true + }, + { + "name": "18", + "label": "18", + "executables": { + "windows": [], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": true + }, + { + "name": "17", + "label": "17", + "executables": { + "windows": [], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}", + "use_python_2": true + } + ] + }, + "blender": { + "enabled": true, + "label": "Blender", + "icon": "{}/app_icons/blender.png", + "host_name": "blender", + "environment": "{}", + "variants": [ + { + "name": "2-83", + "label": "2.83", + "executables": { + "windows": [ + "C:\\Program Files\\Blender Foundation\\Blender 2.83\\blender.exe" + ], + "darwin": 
[], + "linux": [] + }, + "arguments": { + "windows": [ + "--python-use-system-env" + ], + "darwin": [ + "--python-use-system-env" + ], + "linux": [ + "--python-use-system-env" + ] + }, + "environment": "{}" + }, + { + "name": "2-90", + "label": "2.90", + "executables": { + "windows": [ + "C:\\Program Files\\Blender Foundation\\Blender 2.90\\blender.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [ + "--python-use-system-env" + ], + "darwin": [ + "--python-use-system-env" + ], + "linux": [ + "--python-use-system-env" + ] + }, + "environment": "{}" + }, + { + "name": "2-91", + "label": "2.91", + "executables": { + "windows": [ + "C:\\Program Files\\Blender Foundation\\Blender 2.91\\blender.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [ + "--python-use-system-env" + ], + "darwin": [ + "--python-use-system-env" + ], + "linux": [ + "--python-use-system-env" + ] + }, + "environment": "{}" + } + ] + }, + "harmony": { + "enabled": true, + "label": "Harmony", + "icon": "{}/app_icons/harmony.png", + "host_name": "harmony", + "environment": "{\n \"AVALON_HARMONY_WORKFILES_ON_LAUNCH\": \"1\"\n}", + "variants": [ + { + "name": "21", + "label": "21", + "executables": { + "windows": [ + "c:\\Program Files (x86)\\Toon Boom Animation\\Toon Boom Harmony 21 Premium\\win64\\bin\\HarmonyPremium.exe" + ], + "darwin": [ + "/Applications/Toon Boom Harmony 21 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium" + ], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "20", + "label": "20", + "executables": { + "windows": [ + "c:\\Program Files (x86)\\Toon Boom Animation\\Toon Boom Harmony 20 Premium\\win64\\bin\\HarmonyPremium.exe" + ], + "darwin": [ + "/Applications/Toon Boom Harmony 20 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium" + ], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "17", + "label": "17", + "executables": { + "windows": [ + "c:\\Program Files (x86)\\Toon Boom Animation\\Toon Boom Harmony 17 Premium\\win64\\bin\\HarmonyPremium.exe" + ], + "darwin": [ + "/Applications/Toon Boom Harmony 17 Premium/Harmony Premium.app/Contents/MacOS/Harmony Premium" + ], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "tvpaint": { + "enabled": true, + "label": "TVPaint", + "icon": "{}/app_icons/tvpaint.png", + "host_name": "tvpaint", + "environment": "{}", + "variants": [ + { + "name": "animation_11-64bits", + "label": "11 (64bits)", + "executables": { + "windows": [ + "C:\\Program Files\\TVPaint Developpement\\TVPaint Animation 11 (64bits)\\TVPaint Animation 11 (64bits).exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "animation_11-32bits", + "label": "11 (32bits)", + "executables": { + "windows": [ + "C:\\Program Files (x86)\\TVPaint Developpement\\TVPaint Animation 11 (32bits)\\TVPaint Animation 11 (32bits).exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "photoshop": { + "enabled": true, + "label": "Photoshop", + "icon": "{}/app_icons/photoshop.png", + "host_name": "photoshop", + "environment": "{\n \"AVALON_PHOTOSHOP_WORKFILES_ON_LAUNCH\": \"1\",\n \"WORKFILES_SAVE_AS\": \"Yes\"\n}", + "variants": [ 
+ { + "name": "2020", + "label": "2020", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe Photoshop 2020\\Photoshop.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "2021", + "label": "2021", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe Photoshop 2021\\Photoshop.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "2022", + "label": "2022", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe Photoshop 2022\\Photoshop.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "aftereffects": { + "enabled": true, + "label": "AfterEffects", + "icon": "{}/app_icons/aftereffects.png", + "host_name": "aftereffects", + "environment": "{\n \"AVALON_AFTEREFFECTS_WORKFILES_ON_LAUNCH\": \"1\",\n \"WORKFILES_SAVE_AS\": \"Yes\"\n}", + "variants": [ + { + "name": "2020", + "label": "2020", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe After Effects 2020\\Support Files\\AfterFX.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "2021", + "label": "2021", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe After Effects 2021\\Support Files\\AfterFX.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + }, + { + "name": "2022", + "label": "2022", + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe After Effects 2022\\Support Files\\AfterFX.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{\n \"MULTIPROCESS\": \"No\"\n}" + } + ] + }, + "celaction": { + "enabled": true, + "label": "CelAction 2D", + "icon": "app_icons/celaction.png", + "host_name": "celaction", + "environment": "{\n \"CELACTION_TEMPLATE\": \"{OPENPYPE_REPOS_ROOT}/openpype/hosts/celaction/celaction_template_scene.scn\"\n}", + "variants": [ + { + "name": "local", + "label": "local", + "executables": { + "windows": [], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "unreal": { + "enabled": true, + "label": "Unreal Editor", + "icon": "{}/app_icons/ue4.png", + "host_name": "unreal", + "environment": "{}", + "variants": [ + { + "name": "4-26", + "label": "4.26", + "executables": {}, + "arguments": {}, + "environment": "{}" + } + ] + }, + "djvview": { + "enabled": true, + "label": "DJV View", + "icon": "{}/app_icons/djvView.png", + "host_name": "", + "environment": "{}", + "variants": [ + { + "name": "1-1", + "label": "1.1", + "executables": { + "windows": [], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": "{}" + } + ] + }, + "additional_apps": [] + } +} diff --git a/server_addon/applications/server/settings.py b/server_addon/applications/server/settings.py new file mode 100644 index 0000000000..fd481b6ce8 --- /dev/null +++ b/server_addon/applications/server/settings.py @@ -0,0 +1,201 @@ +import json +from pydantic import Field, validator + +from ayon_server.settings import BaseSettingsModel, 
ensure_unique_names +from ayon_server.exceptions import BadRequestException + + +def validate_json_dict(value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError as exc: + print(exc) + success = False + + if not success: + raise BadRequestException( + "Environment's can't be parsed as json object" + ) + return value + + +class MultiplatformStrList(BaseSettingsModel): + windows: list[str] = Field(default_factory=list, title="Windows") + linux: list[str] = Field(default_factory=list, title="Linux") + darwin: list[str] = Field(default_factory=list, title="MacOS") + + +class AppVariant(BaseSettingsModel): + name: str = Field("", title="Name") + label: str = Field("", title="Label") + executables: MultiplatformStrList = Field( + default_factory=MultiplatformStrList, title="Executables" + ) + arguments: MultiplatformStrList = Field( + default_factory=MultiplatformStrList, title="Arguments" + ) + environment: str = Field("{}", title="Environment", widget="textarea") + + @validator("environment") + def validate_json(cls, value): + return validate_json_dict(value) + + +class AppVariantWithPython(AppVariant): + use_python_2: bool = Field(False, title="Use Python 2") + + +class AppGroup(BaseSettingsModel): + enabled: bool = Field(True) + label: str = Field("", title="Label") + host_name: str = Field("", title="Host name") + icon: str = Field("", title="Icon") + environment: str = Field("{}", title="Environment", widget="textarea") + + variants: list[AppVariant] = Field( + default_factory=list, + title="Variants", + description="Different variants of the applications", + section="Variants", + ) + + @validator("variants") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class AppGroupWithPython(AppGroup): + variants: list[AppVariantWithPython] = Field( + default_factory=list, + title="Variants", + description="Different variants of the applications", + section="Variants", + ) + + +class AdditionalAppGroup(BaseSettingsModel): + enabled: bool = Field(True) + name: str = Field("", title="Name") + label: str = Field("", title="Label") + host_name: str = Field("", title="Host name") + icon: str = Field("", title="Icon") + environment: str = Field("{}", title="Environment", widget="textarea") + + variants: list[AppVariantWithPython] = Field( + default_factory=list, + title="Variants", + description="Different variants of the applications", + section="Variants", + ) + + @validator("variants") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class ToolVariantModel(BaseSettingsModel): + name: str = Field("", title="Name") + label: str = Field("", title="Label") + host_names: list[str] = Field(default_factory=list, title="Hosts") + # TODO use applications enum if possible + app_variants: list[str] = Field(default_factory=list, title="Applications") + environment: str = Field("{}", title="Environments", widget="textarea") + + @validator("environment") + def validate_json(cls, value): + return validate_json_dict(value) + + +class ToolGroupModel(BaseSettingsModel): + name: str = Field("", title="Name") + label: str = Field("", title="Label") + environment: str = Field("{}", title="Environments", widget="textarea") + variants: list[ToolVariantModel] = Field( + default_factory=ToolVariantModel + ) + + @validator("environment") + def validate_json(cls, value): + return validate_json_dict(value) + + @validator("variants") + def 
validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class ApplicationsSettings(BaseSettingsModel): + """Applications settings""" + + maya: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Autodesk Maya") + adsk_3dsmax: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Autodesk 3ds Max") + flame: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Autodesk Flame") + nuke: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Nuke") + nukeassist: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Nuke Assist") + nukex: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Nuke X") + nukestudio: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Nuke Studio") + hiero: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Hiero") + fusion: AppGroup = Field( + default_factory=AppGroupWithPython, title="Fusion") + resolve: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Resolve") + houdini: AppGroupWithPython = Field( + default_factory=AppGroupWithPython, title="Houdini") + blender: AppGroup = Field( + default_factory=AppGroupWithPython, title="Blender") + harmony: AppGroup = Field( + default_factory=AppGroupWithPython, title="Harmony") + tvpaint: AppGroup = Field( + default_factory=AppGroupWithPython, title="TVPaint") + photoshop: AppGroup = Field( + default_factory=AppGroupWithPython, title="Adobe Photoshop") + aftereffects: AppGroup = Field( + default_factory=AppGroupWithPython, title="Adobe After Effects") + celaction: AppGroup = Field( + default_factory=AppGroupWithPython, title="Celaction 2D") + unreal: AppGroup = Field( + default_factory=AppGroupWithPython, title="Unreal Editor") + additional_apps: list[AdditionalAppGroup] = Field( + default_factory=list, title="Additional Applications") + + @validator("additional_apps") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class ApplicationsAddonSettings(BaseSettingsModel): + applications: ApplicationsSettings = Field( + default_factory=ApplicationsSettings, + title="Applications", + scope=["studio"] + ) + tool_groups: list[ToolGroupModel] = Field( + default_factory=list, + scope=["studio"] + ) + only_available: bool = Field( + True, title="Show only available applications") + + @validator("tool_groups") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +DEFAULT_VALUES = { + "only_available": False +} diff --git a/server_addon/applications/server/tools.json b/server_addon/applications/server/tools.json new file mode 100644 index 0000000000..54bee11cf7 --- /dev/null +++ b/server_addon/applications/server/tools.json @@ -0,0 +1,55 @@ +{ + "tool_groups": [ + { + "environment": "{\n \"MTOA\": \"{STUDIO_SOFTWARE}/arnold/mtoa_{MAYA_VERSION}_{MTOA_VERSION}\",\n \"MAYA_RENDER_DESC_PATH\": \"{MTOA}\",\n \"MAYA_MODULE_PATH\": \"{MTOA}\",\n \"ARNOLD_PLUGIN_PATH\": \"{MTOA}/shaders\",\n \"MTOA_EXTENSIONS_PATH\": {\n \"darwin\": \"{MTOA}/extensions\",\n \"linux\": \"{MTOA}/extensions\",\n \"windows\": \"{MTOA}/extensions\"\n },\n \"MTOA_EXTENSIONS\": {\n \"darwin\": \"{MTOA}/extensions\",\n \"linux\": \"{MTOA}/extensions\",\n \"windows\": \"{MTOA}/extensions\"\n },\n \"DYLD_LIBRARY_PATH\": {\n \"darwin\": \"{MTOA}/bin\"\n },\n \"PATH\": {\n \"windows\": \"{PATH};{MTOA}/bin\"\n }\n}", + "name": "mtoa", + "label": "Autodesk Arnold", + "variants": [ + { + 
"host_names": [], + "app_variants": [], + "environment": "{\n \"MTOA_VERSION\": \"3.2\"\n}", + "name": "3-2", + "label": "3.2" + }, + { + "host_names": [], + "app_variants": [], + "environment": "{\n \"MTOA_VERSION\": \"3.1\"\n}", + "name": "3-1", + "label": "3.1" + } + ] + }, + { + "environment": "{}", + "name": "vray", + "label": "Chaos Group Vray", + "variants": [] + }, + { + "environment": "{}", + "name": "yeti", + "label": "Peregrine Labs Yeti", + "variants": [] + }, + { + "environment": "{}", + "name": "renderman", + "label": "Pixar Renderman", + "variants": [ + { + "host_names": [ + "maya" + ], + "app_variants": [ + "maya/2022" + ], + "environment": "{\n \"RFMTREE\": {\n \"windows\": \"C:\\\\Program Files\\\\Pixar\\\\RenderManForMaya-24.3\",\n \"darwin\": \"/Applications/Pixar/RenderManForMaya-24.3\",\n \"linux\": \"/opt/pixar/RenderManForMaya-24.3\"\n },\n \"RMANTREE\": {\n \"windows\": \"C:\\\\Program Files\\\\Pixar\\\\RenderManProServer-24.3\",\n \"darwin\": \"/Applications/Pixar/RenderManProServer-24.3\",\n \"linux\": \"/opt/pixar/RenderManProServer-24.3\"\n }\n}", + "name": "24-3-maya", + "label": "24.3 RFM" + } + ] + } + ] +} diff --git a/server_addon/applications/server/version.py b/server_addon/applications/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/applications/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/blender/server/__init__.py b/server_addon/blender/server/__init__.py new file mode 100644 index 0000000000..a7d6cb4400 --- /dev/null +++ b/server_addon/blender/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import BlenderSettings, DEFAULT_VALUES + + +class BlenderAddon(BaseServerAddon): + name = "blender" + title = "Blender" + version = __version__ + settings_model: Type[BlenderSettings] = BlenderSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/blender/server/settings/__init__.py b/server_addon/blender/server/settings/__init__.py new file mode 100644 index 0000000000..3d51e5c3e1 --- /dev/null +++ b/server_addon/blender/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + BlenderSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "BlenderSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/blender/server/settings/imageio.py b/server_addon/blender/server/settings/imageio.py new file mode 100644 index 0000000000..a6d3c5ff64 --- /dev/null +++ b/server_addon/blender/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + 
title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class BlenderImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/blender/server/settings/main.py b/server_addon/blender/server/settings/main.py new file mode 100644 index 0000000000..f6118d39cd --- /dev/null +++ b/server_addon/blender/server/settings/main.py @@ -0,0 +1,63 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + TemplateWorkfileBaseOptions, +) + +from .imageio import BlenderImageIOModel +from .publish_plugins import ( + PublishPuginsModel, + DEFAULT_BLENDER_PUBLISH_SETTINGS +) + + +class UnitScaleSettingsModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + apply_on_opening: bool = Field( + False, title="Apply on Opening Existing Files") + base_file_unit_scale: float = Field( + 1.0, title="Base File Unit Scale" + ) + + +class BlenderSettings(BaseSettingsModel): + unit_scale_settings: UnitScaleSettingsModel = Field( + default_factory=UnitScaleSettingsModel, + title="Set Unit Scale" + ) + set_resolution_startup: bool = Field( + True, + title="Set Resolution on Startup" + ) + set_frames_startup: bool = Field( + True, + title="Set Start/End Frames and FPS on Startup" + ) + imageio: BlenderImageIOModel = Field( + default_factory=BlenderImageIOModel, + title="Color Management (ImageIO)" + ) + workfile_builder: TemplateWorkfileBaseOptions = Field( + default_factory=TemplateWorkfileBaseOptions, + title="Workfile Builder" + ) + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish Plugins" + ) + + +DEFAULT_VALUES = { + "unit_scale_settings": { + "enabled": True, + "apply_on_opening": False, + "base_file_unit_scale": 0.01 + }, + "set_frames_startup": True, + "set_resolution_startup": True, + "publish": DEFAULT_BLENDER_PUBLISH_SETTINGS, + "workfile_builder": { + "create_first_version": False, + "custom_templates": [] + } +} diff --git a/server_addon/blender/server/settings/publish_plugins.py b/server_addon/blender/server/settings/publish_plugins.py new file mode 100644 index 0000000000..65dda78411 --- /dev/null +++ b/server_addon/blender/server/settings/publish_plugins.py @@ -0,0 +1,283 @@ +import json +from pydantic import Field, validator +from ayon_server.exceptions import BadRequestException +from ayon_server.settings import BaseSettingsModel + + +def validate_json_dict(value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "Environment's can't be parsed as json object" + ) + return value + + +class ValidatePluginModel(BaseSettingsModel): + enabled: bool = Field(True) + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class ExtractBlendModel(BaseSettingsModel): + enabled: bool = Field(True) + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + families: list[str] = Field( + default_factory=list, + title="Families" + ) + + +class ExtractPlayblastModel(BaseSettingsModel): + enabled: bool = Field(True) + 
optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + presets: str = Field("", title="Presets", widget="textarea") + + @validator("presets") + def validate_json(cls, value): + return validate_json_dict(value) + + +class PublishPuginsModel(BaseSettingsModel): + ValidateCameraZeroKeyframe: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Camera Zero Keyframe", + section="Validators" + ) + ValidateMeshHasUvs: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Mesh Has Uvs" + ) + ValidateMeshNoNegativeScale: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Mesh No Negative Scale" + ) + ValidateTransformZero: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Transform Zero" + ) + ValidateNoColonsInName: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate No Colons In Name" + ) + ExtractBlend: ExtractBlendModel = Field( + default_factory=ExtractBlendModel, + title="Extract Blend", + section="Extractors" + ) + ExtractFBX: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract FBX" + ) + ExtractABC: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract ABC" + ) + ExtractBlendAnimation: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Blend Animation" + ) + ExtractAnimationFBX: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Animation FBX" + ) + ExtractCamera: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Camera" + ) + ExtractCameraABC: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Camera as ABC" + ) + ExtractLayout: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Extract Layout" + ) + ExtractThumbnail: ExtractPlayblastModel = Field( + default_factory=ExtractPlayblastModel, + title="Extract Thumbnail" + ) + ExtractPlayblast: ExtractPlayblastModel = Field( + default_factory=ExtractPlayblastModel, + title="Extract Playblast" + ) + + +DEFAULT_BLENDER_PUBLISH_SETTINGS = { + "ValidateCameraZeroKeyframe": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshHasUvs": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshNoNegativeScale": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateTransformZero": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNoColonsInName": { + "enabled": True, + "optional": False, + "active": True + }, + "ExtractBlend": { + "enabled": True, + "optional": True, + "active": True, + "families": [ + "model", + "camera", + "rig", + "action", + "layout", + "blendScene" + ] + }, + "ExtractFBX": { + "enabled": True, + "optional": True, + "active": False + }, + "ExtractABC": { + "enabled": True, + "optional": True, + "active": False + }, + "ExtractBlendAnimation": { + "enabled": True, + "optional": True, + "active": True + }, + "ExtractAnimationFBX": { + "enabled": True, + "optional": True, + "active": False + }, + "ExtractCamera": { + "enabled": True, + "optional": True, + "active": True + }, + "ExtractCameraABC": { + "enabled": True, + "optional": True, + "active": True + }, + "ExtractLayout": { + "enabled": True, + "optional": True, + "active": False + }, + "ExtractThumbnail": { + "enabled": True, + "optional": True, + "active": True, + 
"presets": json.dumps( + { + "model": { + "image_settings": { + "file_format": "JPEG", + "color_mode": "RGB", + "quality": 100 + }, + "display_options": { + "shading": { + "light": "STUDIO", + "studio_light": "Default", + "type": "SOLID", + "color_type": "OBJECT", + "show_xray": False, + "show_shadows": False, + "show_cavity": True + }, + "overlay": { + "show_overlays": False + } + } + }, + "rig": { + "image_settings": { + "file_format": "JPEG", + "color_mode": "RGB", + "quality": 100 + }, + "display_options": { + "shading": { + "light": "STUDIO", + "studio_light": "Default", + "type": "SOLID", + "color_type": "OBJECT", + "show_xray": True, + "show_shadows": False, + "show_cavity": False + }, + "overlay": { + "show_overlays": True, + "show_ortho_grid": False, + "show_floor": False, + "show_axis_x": False, + "show_axis_y": False, + "show_axis_z": False, + "show_text": False, + "show_stats": False, + "show_cursor": False, + "show_annotation": False, + "show_extras": False, + "show_relationship_lines": False, + "show_outline_selected": False, + "show_motion_paths": False, + "show_object_origins": False, + "show_bones": True + } + } + } + }, + indent=4, + ) + }, + "ExtractPlayblast": { + "enabled": True, + "optional": True, + "active": True, + "presets": json.dumps( + { + "default": { + "image_settings": { + "file_format": "PNG", + "color_mode": "RGB", + "color_depth": "8", + "compression": 15 + }, + "display_options": { + "shading": { + "type": "MATERIAL", + "render_pass": "COMBINED" + }, + "overlay": { + "show_overlays": False + } + } + } + }, + indent=4 + ) + } +} diff --git a/server_addon/blender/server/version.py b/server_addon/blender/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/blender/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/celaction/server/__init__.py b/server_addon/celaction/server/__init__.py new file mode 100644 index 0000000000..90d3dbaa01 --- /dev/null +++ b/server_addon/celaction/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import CelActionSettings, DEFAULT_VALUES + + +class CelActionAddon(BaseServerAddon): + name = "celaction" + title = "CelAction" + version = __version__ + settings_model: Type[CelActionSettings] = CelActionSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/celaction/server/imageio.py b/server_addon/celaction/server/imageio.py new file mode 100644 index 0000000000..72da441528 --- /dev/null +++ b/server_addon/celaction/server/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: 
list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class CelActionImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/celaction/server/settings.py b/server_addon/celaction/server/settings.py new file mode 100644 index 0000000000..68d1d2dc31 --- /dev/null +++ b/server_addon/celaction/server/settings.py @@ -0,0 +1,92 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel +from .imageio import CelActionImageIOModel + + +class CollectRenderPathModel(BaseSettingsModel): + output_extension: str = Field( + "", + title="Output render file extension" + ) + anatomy_template_key_render_files: str = Field( + "", + title="Anatomy template key: render files" + ) + anatomy_template_key_metadata: str = Field( + "", + title="Anatomy template key: metadata job file" + ) + + +def _workfile_submit_overrides(): + return [ + { + "value": "render_chunk", + "label": "Pass chunk size" + }, + { + "value": "frame_range", + "label": "Pass frame range" + }, + { + "value": "resolution", + "label": "Pass resolution" + } + ] + + +class WorkfileModel(BaseSettingsModel): + submission_overrides: list[str] = Field( + default_factory=list, + title="Submission workfile overrides", + enum_resolver=_workfile_submit_overrides + ) + + +class PublishPuginsModel(BaseSettingsModel): + CollectRenderPath: CollectRenderPathModel = Field( + default_factory=CollectRenderPathModel, + title="Collect Render Path" + ) + + +class CelActionSettings(BaseSettingsModel): + imageio: CelActionImageIOModel = Field( + default_factory=CelActionImageIOModel, + title="Color Management (ImageIO)" + ) + workfile: WorkfileModel = Field( + title="Workfile" + ) + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish plugins", + ) + + +DEFAULT_VALUES = { + "imageio": { + "ocio_config": { + "enabled": False, + "filepath": [] + }, + "file_rules": { + "enabled": False, + "rules": [] + } + }, + "workfile": { + "submission_overrides": [ + "render_chunk", + "frame_range", + "resolution" + ] + }, + "publish": { + "CollectRenderPath": { + "output_extension": "png", + "anatomy_template_key_render_files": "render", + "anatomy_template_key_metadata": "render" + } + } +} diff --git a/server_addon/celaction/server/version.py b/server_addon/celaction/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/celaction/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/clockify/server/__init__.py b/server_addon/clockify/server/__init__.py new file mode 100644 index 0000000000..0fa453fdf4 --- /dev/null +++ b/server_addon/clockify/server/__init__.py @@ -0,0 +1,15 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import ClockifySettings + + +class ClockifyAddon(BaseServerAddon): + name = "clockify" + title = "Clockify" + version = __version__ + settings_model: Type[ClockifySettings] = ClockifySettings + frontend_scopes = {} + services = {} diff --git a/server_addon/clockify/server/settings.py 
b/server_addon/clockify/server/settings.py new file mode 100644 index 0000000000..9067cd4243 --- /dev/null +++ b/server_addon/clockify/server/settings.py @@ -0,0 +1,10 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class ClockifySettings(BaseSettingsModel): + workspace_name: str = Field( + "", + title="Workspace name", + scope=["studio"] + ) diff --git a/server_addon/clockify/server/version.py b/server_addon/clockify/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/clockify/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/core/server/__init__.py b/server_addon/core/server/__init__.py new file mode 100644 index 0000000000..4de2b038a5 --- /dev/null +++ b/server_addon/core/server/__init__.py @@ -0,0 +1,15 @@ +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import CoreSettings, DEFAULT_VALUES + + +class CoreAddon(BaseServerAddon): + name = "core" + title = "Core" + version = __version__ + settings_model = CoreSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/core/server/settings/__init__.py b/server_addon/core/server/settings/__init__.py new file mode 100644 index 0000000000..527a2bdc0c --- /dev/null +++ b/server_addon/core/server/settings/__init__.py @@ -0,0 +1,7 @@ +from .main import CoreSettings, DEFAULT_VALUES + + +__all__ = ( + "CoreSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/core/server/settings/main.py b/server_addon/core/server/settings/main.py new file mode 100644 index 0000000000..d19d732e71 --- /dev/null +++ b/server_addon/core/server/settings/main.py @@ -0,0 +1,160 @@ +import json +from pydantic import Field, validator +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathListModel, + ensure_unique_names, +) +from ayon_server.exceptions import BadRequestException + +from .publish_plugins import PublishPuginsModel, DEFAULT_PUBLISH_VALUES +from .tools import GlobalToolsModel, DEFAULT_TOOLS_VALUES + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class CoreImageIOFileRulesModel(BaseSettingsModel): + activate_global_file_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class CoreImageIOConfigModel(BaseSettingsModel): + filepath: list[str] = Field(default_factory=list, title="Config path") + + +class CoreImageIOBaseModel(BaseSettingsModel): + activate_global_color_management: bool = Field( + False, + title="Override global OCIO config" + ) + ocio_config: CoreImageIOConfigModel = Field( + default_factory=CoreImageIOConfigModel, title="OCIO config" + ) + file_rules: CoreImageIOFileRulesModel = Field( + default_factory=CoreImageIOFileRulesModel, title="File Rules" + ) + + +class CoreSettings(BaseSettingsModel): + studio_name: str = Field("", title="Studio name", scope=["studio"]) + studio_code: str = Field("", title="Studio code", scope=["studio"]) + environments: str = Field( + "{}", + title="Global environment variables", + widget="textarea", + scope=["studio"], + ) + tools: 
GlobalToolsModel = Field( + default_factory=GlobalToolsModel, + title="Tools" + ) + imageio: CoreImageIOBaseModel = Field( + default_factory=CoreImageIOBaseModel, + title="Color Management (ImageIO)" + ) + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish plugins" + ) + project_plugins: MultiplatformPathListModel = Field( + default_factory=MultiplatformPathListModel, + title="Additional Project Plugin Paths", + ) + project_folder_structure: str = Field( + "{}", + widget="textarea", + title="Project folder structure", + section="---" + ) + project_environments: str = Field( + "{}", + widget="textarea", + title="Project environments", + section="---" + ) + + @validator( + "environments", + "project_folder_structure", + "project_environments") + def validate_json(cls, value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "Environment's can't be parsed as json object" + ) + return value + + +DEFAULT_VALUES = { + "imageio": { + "activate_global_color_management": False, + "ocio_config": { + "filepath": [ + "{BUILTIN_OCIO_ROOT}/aces_1.2/config.ocio", + "{BUILTIN_OCIO_ROOT}/nuke-default/config.ocio" + ] + }, + "file_rules": { + "activate_global_file_rules": False, + "rules": [ + { + "name": "example", + "pattern": ".*(beauty).*", + "colorspace": "ACES - ACEScg", + "ext": "exr" + } + ] + } + }, + "studio_name": "", + "studio_code": "", + "environments": "{}", + "tools": DEFAULT_TOOLS_VALUES, + "publish": DEFAULT_PUBLISH_VALUES, + "project_folder_structure": json.dumps({ + "__project_root__": { + "prod": {}, + "resources": { + "footage": { + "plates": {}, + "offline": {} + }, + "audio": {}, + "art_dept": {} + }, + "editorial": {}, + "assets": { + "characters": {}, + "locations": {} + }, + "shots": {} + } + }, indent=4), + "project_plugins": { + "windows": [], + "darwin": [], + "linux": [] + }, + "project_environments": "{}" +} diff --git a/server_addon/core/server/settings/publish_plugins.py b/server_addon/core/server/settings/publish_plugins.py new file mode 100644 index 0000000000..c012312579 --- /dev/null +++ b/server_addon/core/server/settings/publish_plugins.py @@ -0,0 +1,959 @@ +from pydantic import Field, validator + +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathModel, + normalize_name, + ensure_unique_names, + task_types_enum, +) + +from ayon_server.types import ColorRGBA_uint8 + + +class ValidateBaseModel(BaseSettingsModel): + _isGroup = True + enabled: bool = Field(True) + optional: bool = Field(True, title="Optional") + active: bool = Field(True, title="Active") + + +class CollectAnatomyInstanceDataModel(BaseSettingsModel): + _isGroup = True + follow_workfile_version: bool = Field( + True, title="Collect Anatomy Instance Data" + ) + + +class CollectAudioModel(BaseSettingsModel): + _isGroup = True + enabled: bool = Field(True) + audio_product_name: str = Field( + "", title="Name of audio variant" + ) + + +class CollectSceneVersionModel(BaseSettingsModel): + _isGroup = True + hosts: list[str] = Field( + default_factory=list, + title="Host names" + ) + skip_hosts_headless_publish: list[str] = Field( + default_factory=list, + title="Skip for host if headless publish" + ) + + +class CollectCommentPIModel(BaseSettingsModel): + enabled: bool = Field(True) + families: list[str] = Field(default_factory=list, title="Families") + + +class 
CollectFramesFixDefModel(BaseSettingsModel): + enabled: bool = Field(True) + rewrite_version_enable: bool = Field( + True, + title="Show 'Rewrite latest version' toggle" + ) + + +class ValidateIntentProfile(BaseSettingsModel): + _layout = "expanded" + hosts: list[str] = Field(default_factory=list, title="Host names") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field(default_factory=list, title="Task names") + # TODO This was 'validate' in v3 + validate_intent: bool = Field(True, title="Validate") + + +class ValidateIntentModel(BaseSettingsModel): + """Validate if Publishing intent was selected. + + It is possible to disable validation for specific publishing context + with profiles. + """ + + _isGroup = True + enabled: bool = Field(False) + profiles: list[ValidateIntentProfile] = Field(default_factory=list) + + +class ExtractThumbnailFFmpegModel(BaseSettingsModel): + _layout = "expanded" + input: list[str] = Field( + default_factory=list, + title="FFmpeg input arguments" + ) + output: list[str] = Field( + default_factory=list, + title="FFmpeg input arguments" + ) + + +class ExtractThumbnailModel(BaseSettingsModel): + _isGroup = True + enabled: bool = Field(True) + ffmpeg_args: ExtractThumbnailFFmpegModel = Field( + default_factory=ExtractThumbnailFFmpegModel + ) + + +def _extract_oiio_transcoding_type(): + return [ + {"value": "colorspace", "label": "Use Colorspace"}, + {"value": "display", "label": "Use Display&View"} + ] + + +class OIIOToolArgumentsModel(BaseSettingsModel): + additional_command_args: list[str] = Field( + default_factory=list, title="Arguments") + + +class ExtractOIIOTranscodeOutputModel(BaseSettingsModel): + extension: str = Field("", title="Extension") + transcoding_type: str = Field( + "colorspace", + title="Transcoding type", + enum_resolver=_extract_oiio_transcoding_type + ) + colorspace: str = Field("", title="Colorspace") + display: str = Field("", title="Display") + view: str = Field("", title="View") + oiiotool_args: OIIOToolArgumentsModel = Field( + default_factory=OIIOToolArgumentsModel, + title="OIIOtool arguments") + + tags: list[str] = Field(default_factory=list, title="Tags") + custom_tags: list[str] = Field(default_factory=list, title="Custom Tags") + + +class ExtractOIIOTranscodeProfileModel(BaseSettingsModel): + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + hosts: list[str] = Field( + default_factory=list, + title="Host names" + ) + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, + title="Task names" + ) + product_names: list[str] = Field( + default_factory=list, + title="Product names" + ) + delete_original: bool = Field( + True, + title="Delete Original Representation" + ) + outputs: list[ExtractOIIOTranscodeOutputModel] = Field( + default_factory=list, + title="Output Definitions", + ) + + +class ExtractOIIOTranscodeModel(BaseSettingsModel): + enabled: bool = Field(True) + profiles: list[ExtractOIIOTranscodeProfileModel] = Field( + default_factory=list, title="Profiles" + ) + + +# --- [START] Extract Review --- +class ExtractReviewFFmpegModel(BaseSettingsModel): + video_filters: list[str] = Field( + default_factory=list, + title="Video filters" + ) + audio_filters: list[str] = Field( + default_factory=list, + title="Audio filters" + ) + input: list[str] = Field( + default_factory=list, + title="Input 
arguments" + ) + output: list[str] = Field( + default_factory=list, + title="Output arguments" + ) + + +def extract_review_filter_enum(): + return [ + { + "value": "everytime", + "label": "Always" + }, + { + "value": "single_frame", + "label": "Only if input has 1 image frame" + }, + { + "value": "multi_frame", + "label": "Only if input is video or sequence of frames" + } + ] + + +class ExtractReviewFilterModel(BaseSettingsModel): + families: list[str] = Field(default_factory=list, title="Families") + product_names: list[str] = Field( + default_factory=list, title="Product names") + custom_tags: list[str] = Field(default_factory=list, title="Custom Tags") + single_frame_filter: str = Field( + "everytime", + description=( + "Use output always / only if input is 1 frame" + " image / only if has 2+ frames or is video" + ), + enum_resolver=extract_review_filter_enum + ) + + +class ExtractReviewLetterBox(BaseSettingsModel): + enabled: bool = Field(True) + ratio: float = Field( + 0.0, + title="Ratio", + ge=0.0, + le=10000.0 + ) + fill_color: ColorRGBA_uint8 = Field( + (0, 0, 0, 0.0), + title="Fill Color" + ) + line_thickness: int = Field( + 0, + title="Line Thickness", + ge=0, + le=1000 + ) + line_color: ColorRGBA_uint8 = Field( + (0, 0, 0, 0.0), + title="Line Color" + ) + + +class ExtractReviewOutputDefModel(BaseSettingsModel): + _layout = "expanded" + name: str = Field("", title="Name") + ext: str = Field("", title="Output extension") + # TODO use some different source of tags + tags: list[str] = Field(default_factory=list, title="Tags") + burnins: list[str] = Field( + default_factory=list, title="Link to a burnin by name" + ) + ffmpeg_args: ExtractReviewFFmpegModel = Field( + default_factory=ExtractReviewFFmpegModel, + title="FFmpeg arguments" + ) + filter: ExtractReviewFilterModel = Field( + default_factory=ExtractReviewFilterModel, + title="Additional output filtering" + ) + overscan_crop: str = Field( + "", + title="Overscan crop", + description=( + "Crop input overscan. See the documentation for more information." + ) + ) + overscan_color: ColorRGBA_uint8 = Field( + (0, 0, 0, 0.0), + title="Overscan color", + description=( + "Overscan color is used when input aspect ratio is not" + " same as output aspect ratio." + ) + ) + width: int = Field( + 0, + ge=0, + le=100000, + title="Output width", + description=( + "Width and Height must be both set to higher" + " value than 0 else source resolution is used." + ) + ) + height: int = Field( + 0, + title="Output height", + ge=0, + le=100000, + ) + scale_pixel_aspect: bool = Field( + True, + title="Scale pixel aspect", + description=( + "Rescale input when it's pixel aspect ratio is not 1." + " Usefull for anamorph reviews." + ) + ) + bg_color: ColorRGBA_uint8 = Field( + (0, 0, 0, 0.0), + description=( + "Background color is used only when input have transparency" + " and Alpha is higher than 0." 
+        ),
+        title="Background color",
+    )
+    letter_box: ExtractReviewLetterBox = Field(
+        default_factory=ExtractReviewLetterBox,
+        title="Letter Box"
+    )
+
+    @validator("name")
+    def validate_name(cls, value):
+        """Ensure name does not contain weird characters"""
+        return normalize_name(value)
+
+
+class ExtractReviewProfileModel(BaseSettingsModel):
+    _layout = "expanded"
+    product_types: list[str] = Field(
+        default_factory=list, title="Product types"
+    )
+    # TODO use hosts enum
+    hosts: list[str] = Field(
+        default_factory=list, title="Host names"
+    )
+    outputs: list[ExtractReviewOutputDefModel] = Field(
+        default_factory=list, title="Output Definitions"
+    )
+
+    @validator("outputs")
+    def validate_unique_outputs(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+class ExtractReviewModel(BaseSettingsModel):
+    _isGroup = True
+    enabled: bool = Field(True)
+    profiles: list[ExtractReviewProfileModel] = Field(
+        default_factory=list,
+        title="Profiles"
+    )
+# --- [END] Extract Review ---
+
+
+# --- [Start] Extract Burnin ---
+class ExtractBurninOptionsModel(BaseSettingsModel):
+    font_size: int = Field(0, ge=0, title="Font size")
+    font_color: ColorRGBA_uint8 = Field(
+        (255, 255, 255, 1.0),
+        title="Font color"
+    )
+    bg_color: ColorRGBA_uint8 = Field(
+        (0, 0, 0, 1.0),
+        title="Background color"
+    )
+    x_offset: int = Field(0, title="X Offset")
+    y_offset: int = Field(0, title="Y Offset")
+    bg_padding: int = Field(0, title="Padding around text")
+    font_filepath: MultiplatformPathModel = Field(
+        default_factory=MultiplatformPathModel,
+        title="Font file path"
+    )
+
+
+class ExtractBurninDefFilter(BaseSettingsModel):
+    families: list[str] = Field(
+        default_factory=list,
+        title="Families"
+    )
+    tags: list[str] = Field(
+        default_factory=list,
+        title="Tags"
+    )
+
+
+class ExtractBurninDef(BaseSettingsModel):
+    _isGroup = True
+    _layout = "expanded"
+    name: str = Field("")
+    TOP_LEFT: str = Field("", title="Top Left")
+    TOP_CENTERED: str = Field("", title="Top Centered")
+    TOP_RIGHT: str = Field("", title="Top Right")
+    BOTTOM_LEFT: str = Field("", title="Bottom Left")
+    BOTTOM_CENTERED: str = Field("", title="Bottom Centered")
+    BOTTOM_RIGHT: str = Field("", title="Bottom Right")
+    filter: ExtractBurninDefFilter = Field(
+        default_factory=ExtractBurninDefFilter,
+        title="Additional filtering"
+    )
+
+    @validator("name")
+    def validate_name(cls, value):
+        """Ensure name does not contain weird characters"""
+        return normalize_name(value)
+
+
+class ExtractBurninProfile(BaseSettingsModel):
+    _layout = "expanded"
+    product_types: list[str] = Field(
+        default_factory=list,
+        title="Product types"
+    )
+    hosts: list[str] = Field(
+        default_factory=list,
+        title="Host names"
+    )
+    task_types: list[str] = Field(
+        default_factory=list,
+        title="Task types",
+        enum_resolver=task_types_enum
+    )
+    task_names: list[str] = Field(
+        default_factory=list,
+        title="Task names"
+    )
+    product_names: list[str] = Field(
+        default_factory=list,
+        title="Product names"
+    )
+    burnins: list[ExtractBurninDef] = Field(
+        default_factory=list,
+        title="Burnins"
+    )
+
+    @validator("burnins")
+    def validate_unique_outputs(cls, value):
+        ensure_unique_names(value)
+
+        return value
+
+
+class ExtractBurninModel(BaseSettingsModel):
+    _isGroup = True
+    enabled: bool = Field(True)
+    options: ExtractBurninOptionsModel = Field(
+        default_factory=ExtractBurninOptionsModel,
+        title="Burnin formatting options"
+    )
+    profiles: list[ExtractBurninProfile] = Field(
+        default_factory=list,
+        title="Profiles"
+    )
+# --- [END]
Extract Burnin --- + + +class PreIntegrateThumbnailsProfile(BaseSettingsModel): + _isGroup = True + product_types: list[str] = Field( + default_factory=list, + title="Product types", + ) + hosts: list[str] = Field( + default_factory=list, + title="Hosts", + ) + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + product_names: list[str] = Field( + default_factory=list, + title="Product names", + ) + integrate_thumbnail: bool = Field(True) + + +class PreIntegrateThumbnailsModel(BaseSettingsModel): + """Explicitly set if Thumbnail representation should be integrated. + + If no matching profile set, existing state from Host implementation + is kept. + """ + + _isGroup = True + enabled: bool = Field(True) + integrate_profiles: list[PreIntegrateThumbnailsProfile] = Field( + default_factory=list, + title="Integrate profiles" + ) + + +class IntegrateProductGroupProfile(BaseSettingsModel): + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field(default_factory=list, title="Task names") + template: str = Field("", title="Template") + + +class IntegrateProductGroupModel(BaseSettingsModel): + """Group published products by filtering logic. + + Set all published instances as a part of specific group named according + to 'Template'. + + Implemented all variants of placeholders '{task}', '{product[type]}', + '{host}', '{product[name]}', '{renderlayer}'. + """ + + _isGroup = True + product_grouping_profiles: list[IntegrateProductGroupProfile] = Field( + default_factory=list, + title="Product group profiles" + ) + + +class IntegrateANProductGroupProfileModel(BaseSettingsModel): + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + hosts: list[str] = Field( + default_factory=list, + title="Hosts" + ) + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field( + default_factory=list, + title="Task names" + ) + template: str = Field("", title="Template") + + +class IntegrateANTemplateNameProfileModel(BaseSettingsModel): + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + hosts: list[str] = Field( + default_factory=list, + title="Hosts" + ) + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field( + default_factory=list, + title="Task names" + ) + template_name: str = Field("", title="Template name") + + +class IntegrateHeroTemplateNameProfileModel(BaseSettingsModel): + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + hosts: list[str] = Field( + default_factory=list, + title="Hosts" + ) + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, + title="Task names" + ) + template_name: str = Field("", title="Template name") + + +class IntegrateHeroVersionModel(BaseSettingsModel): + _isGroup = True + enabled: bool = Field(True) + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + families: list[str] = Field(default_factory=list, title="Families") + # TODO remove when removed from 
client code + template_name_profiles: list[IntegrateHeroTemplateNameProfileModel] = ( + Field( + default_factory=list, + title="Template name profiles" + ) + ) + + +class CleanUpModel(BaseSettingsModel): + _isGroup = True + paterns: list[str] = Field( + default_factory=list, + title="Patterns (regex)" + ) + remove_temp_renders: bool = Field(False, title="Remove Temp renders") + + +class CleanUpFarmModel(BaseSettingsModel): + _isGroup = True + enabled: bool = Field(True) + + +class PublishPuginsModel(BaseSettingsModel): + CollectAnatomyInstanceData: CollectAnatomyInstanceDataModel = Field( + default_factory=CollectAnatomyInstanceDataModel, + title="Collect Anatomy Instance Data" + ) + CollectAudio: CollectAudioModel = Field( + default_factory=CollectAudioModel, + title="Collect Audio" + ) + CollectSceneVersion: CollectSceneVersionModel = Field( + default_factory=CollectSceneVersionModel, + title="Collect Version from Workfile" + ) + collect_comment_per_instance: CollectCommentPIModel = Field( + default_factory=CollectCommentPIModel, + title="Collect comment per instance", + ) + CollectFramesFixDef: CollectFramesFixDefModel = Field( + default_factory=CollectFramesFixDefModel, + title="Collect Frames to Fix", + ) + ValidateEditorialAssetName: ValidateBaseModel = Field( + default_factory=ValidateBaseModel, + title="Validate Editorial Asset Name" + ) + ValidateVersion: ValidateBaseModel = Field( + default_factory=ValidateBaseModel, + title="Validate Version" + ) + ValidateIntent: ValidateIntentModel = Field( + default_factory=ValidateIntentModel, + title="Validate Intent" + ) + ExtractThumbnail: ExtractThumbnailModel = Field( + default_factory=ExtractThumbnailModel, + title="Extract Thumbnail" + ) + ExtractOIIOTranscode: ExtractOIIOTranscodeModel = Field( + default_factory=ExtractOIIOTranscodeModel, + title="Extract OIIO Transcode" + ) + ExtractReview: ExtractReviewModel = Field( + default_factory=ExtractReviewModel, + title="Extract Review" + ) + ExtractBurnin: ExtractBurninModel = Field( + default_factory=ExtractBurninModel, + title="Extract Burnin" + ) + PreIntegrateThumbnails: PreIntegrateThumbnailsModel = Field( + default_factory=PreIntegrateThumbnailsModel, + title="Override Integrate Thumbnail Representations" + ) + IntegrateProductGroup: IntegrateProductGroupModel = Field( + default_factory=IntegrateProductGroupModel, + title="Integrate Product Group" + ) + IntegrateHeroVersion: IntegrateHeroVersionModel = Field( + default_factory=IntegrateHeroVersionModel, + title="Integrate Hero Version" + ) + CleanUp: CleanUpModel = Field( + default_factory=CleanUpModel, + title="Clean Up" + ) + CleanUpFarm: CleanUpFarmModel = Field( + default_factory=CleanUpFarmModel, + title="Clean Up Farm" + ) + + +DEFAULT_PUBLISH_VALUES = { + "CollectAnatomyInstanceData": { + "follow_workfile_version": False + }, + "CollectAudio": { + "enabled": False, + "audio_product_name": "audioMain" + }, + "CollectSceneVersion": { + "hosts": [ + "aftereffects", + "blender", + "celaction", + "fusion", + "harmony", + "hiero", + "houdini", + "maya", + "nuke", + "photoshop", + "resolve", + "tvpaint" + ], + "skip_hosts_headless_publish": [] + }, + "collect_comment_per_instance": { + "enabled": False, + "families": [] + }, + "CollectFramesFixDef": { + "enabled": True, + "rewrite_version_enable": True + }, + "ValidateEditorialAssetName": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVersion": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateIntent": { + "enabled": False, + 
"profiles": [] + }, + "ExtractThumbnail": { + "enabled": True, + "ffmpeg_args": { + "input": [ + "-apply_trc gamma22" + ], + "output": [] + } + }, + "ExtractOIIOTranscode": { + "enabled": True, + "profiles": [] + }, + "ExtractReview": { + "enabled": True, + "profiles": [ + { + "product_types": [], + "hosts": [], + "outputs": [ + { + "name": "png", + "ext": "png", + "tags": [ + "ftrackreview", + "kitsureview" + ], + "burnins": [], + "ffmpeg_args": { + "video_filters": [], + "audio_filters": [], + "input": [], + "output": [] + }, + "filter": { + "families": [ + "render", + "review", + "ftrack" + ], + "product_names": [], + "custom_tags": [], + "single_frame_filter": "single_frame" + }, + "overscan_crop": "", + "overscan_color": [0, 0, 0, 1.0], + "width": 1920, + "height": 1080, + "scale_pixel_aspect": True, + "bg_color": [0, 0, 0, 0.0], + "letter_box": { + "enabled": False, + "ratio": 0.0, + "fill_color": [0, 0, 0, 1.0], + "line_thickness": 0, + "line_color": [255, 0, 0, 1.0] + } + }, + { + "name": "h264", + "ext": "mp4", + "tags": [ + "burnin", + "ftrackreview", + "kitsureview" + ], + "burnins": [], + "ffmpeg_args": { + "video_filters": [], + "audio_filters": [], + "input": [ + "-apply_trc gamma22" + ], + "output": [ + "-pix_fmt yuv420p", + "-crf 18", + "-intra" + ] + }, + "filter": { + "families": [ + "render", + "review", + "ftrack" + ], + "product_names": [], + "custom_tags": [], + "single_frame_filter": "multi_frame" + }, + "overscan_crop": "", + "overscan_color": [0, 0, 0, 1.0], + "width": 0, + "height": 0, + "scale_pixel_aspect": True, + "bg_color": [0, 0, 0, 0.0], + "letter_box": { + "enabled": False, + "ratio": 0.0, + "fill_color": [0, 0, 0, 1.0], + "line_thickness": 0, + "line_color": [255, 0, 0, 1.0] + } + } + ] + } + ] + }, + "ExtractBurnin": { + "enabled": True, + "options": { + "font_size": 42, + "font_color": [255, 255, 255, 1.0], + "bg_color": [0, 0, 0, 0.5], + "x_offset": 5, + "y_offset": 5, + "bg_padding": 5, + "font_filepath": { + "windows": "", + "darwin": "", + "linux": "" + } + }, + "profiles": [ + { + "product_types": [], + "hosts": [], + "task_types": [], + "task_names": [], + "product_names": [], + "burnins": [ + { + "name": "burnin", + "TOP_LEFT": "{yy}-{mm}-{dd}", + "TOP_CENTERED": "", + "TOP_RIGHT": "{anatomy[version]}", + "BOTTOM_LEFT": "{username}", + "BOTTOM_CENTERED": "{folder[name]}", + "BOTTOM_RIGHT": "{frame_start}-{current_frame}-{frame_end}", + "filter": { + "families": [], + "tags": [] + } + }, + ] + }, + { + "product_types": ["review"], + "hosts": [ + "maya", + "houdini", + "max" + ], + "task_types": [], + "task_names": [], + "product_names": [], + "burnins": [ + { + "name": "focal_length_burnin", + "TOP_LEFT": "{yy}-{mm}-{dd}", + "TOP_CENTERED": "{focalLength:.2f} mm", + "TOP_RIGHT": "{anatomy[version]}", + "BOTTOM_LEFT": "{username}", + "BOTTOM_CENTERED": "{folder[name]}", + "BOTTOM_RIGHT": "{frame_start}-{current_frame}-{frame_end}", + "filter": { + "families": [], + "tags": [] + } + } + ] + } + ] + }, + "PreIntegrateThumbnails": { + "enabled": True, + "integrate_profiles": [] + }, + "IntegrateProductGroup": { + "product_grouping_profiles": [ + { + "product_types": [], + "hosts": [], + "task_types": [], + "tasks": [], + "template": "" + } + ] + }, + "IntegrateHeroVersion": { + "enabled": True, + "optional": True, + "active": True, + "families": [ + "model", + "rig", + "look", + "pointcache", + "animation", + "setdress", + "layout", + "mayaScene", + "simpleUnrealTexture" + ], + "template_name_profiles": [ + { + "product_types": [ + 
"simpleUnrealTexture" + ], + "hosts": [ + "standalonepublisher" + ], + "task_types": [], + "task_names": [], + "template_name": "simpleUnrealTextureHero" + } + ] + }, + "CleanUp": { + "paterns": [], + "remove_temp_renders": False + }, + "CleanUpFarm": { + "enabled": False + } +} diff --git a/server_addon/core/server/settings/tools.py b/server_addon/core/server/settings/tools.py new file mode 100644 index 0000000000..7befc795e4 --- /dev/null +++ b/server_addon/core/server/settings/tools.py @@ -0,0 +1,506 @@ +from pydantic import Field, validator +from ayon_server.settings import ( + BaseSettingsModel, + normalize_name, + ensure_unique_names, + task_types_enum, +) + + +class ProductTypeSmartSelectModel(BaseSettingsModel): + _layout = "expanded" + name: str = Field("", title="Product type") + task_names: list[str] = Field(default_factory=list, title="Task names") + + @validator("name") + def normalize_value(cls, value): + return normalize_name(value) + + +class ProductNameProfile(BaseSettingsModel): + _layout = "expanded" + product_types: list[str] = Field( + default_factory=list, title="Product types" + ) + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field(default_factory=list, title="Task names") + template: str = Field("", title="Template") + + +class CreatorToolModel(BaseSettingsModel): + # TODO this was dynamic dictionary '{name: task_names}' + product_types_smart_select: list[ProductTypeSmartSelectModel] = Field( + default_factory=list, + title="Create Smart Select" + ) + product_name_profiles: list[ProductNameProfile] = Field( + default_factory=list, + title="Product name profiles" + ) + + @validator("product_types_smart_select") + def validate_unique_name(cls, value): + ensure_unique_names(value) + return value + + +class WorkfileTemplateProfile(BaseSettingsModel): + _layout = "expanded" + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + # TODO this was using project anatomy template name + workfile_template: str = Field("", title="Workfile template") + + +class LastWorkfileOnStartupProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field(default_factory=list, title="Task names") + enabled: bool = Field(True, title="Enabled") + use_last_published_workfile: bool = Field( + True, title="Use last published workfile" + ) + + +class WorkfilesToolOnStartupProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field(default_factory=list, title="Task names") + enabled: bool = Field(True, title="Enabled") + + +class ExtraWorkFoldersProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = 
Field(default_factory=list, title="Task names") + folders: list[str] = Field(default_factory=list, title="Folders") + + +class WorkfilesLockProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + host_names: list[str] = Field(default_factory=list, title="Hosts") + enabled: bool = Field(True, title="Enabled") + + +class WorkfilesToolModel(BaseSettingsModel): + workfile_template_profiles: list[WorkfileTemplateProfile] = Field( + default_factory=list, + title="Workfile template profiles" + ) + last_workfile_on_startup: list[LastWorkfileOnStartupProfile] = Field( + default_factory=list, + title="Open last workfile on launch" + ) + open_workfile_tool_on_startup: list[WorkfilesToolOnStartupProfile] = Field( + default_factory=list, + title="Open workfile tool on launch" + ) + extra_folders: list[ExtraWorkFoldersProfile] = Field( + default_factory=list, + title="Extra work folders" + ) + workfile_lock_profiles: list[WorkfilesLockProfile] = Field( + default_factory=list, + title="Workfile lock profiles" + ) + + +def _product_types_enum(): + return [ + "action", + "animation", + "assembly", + "audio", + "backgroundComp", + "backgroundLayout", + "camera", + "editorial", + "gizmo", + "image", + "layout", + "look", + "matchmove", + "mayaScene", + "model", + "nukenodes", + "plate", + "pointcache", + "prerender", + "redshiftproxy", + "reference", + "render", + "review", + "rig", + "setdress", + "take", + "usdShade", + "vdbcache", + "vrayproxy", + "workfile", + "xgen", + "yetiRig", + "yeticache" + ] + + +class LoaderProductTypeFilterProfile(BaseSettingsModel): + _layout = "expanded" + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + is_include: bool = Field(True, title="Exclude / Include") + filter_product_types: list[str] = Field( + default_factory=list, + enum_resolver=_product_types_enum + ) + + +class LoaderToolModel(BaseSettingsModel): + product_type_filter_profiles: list[LoaderProductTypeFilterProfile] = Field( + default_factory=list, + title="Product type filtering" + ) + + +class PublishTemplateNameProfile(BaseSettingsModel): + _layout = "expanded" + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + # TODO this should use hosts enum + hosts: list[str] = Field(default_factory=list, title="Hosts") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field(default_factory=list, title="Task names") + template_name: str = Field("", title="Template name") + + +class CustomStagingDirProfileModel(BaseSettingsModel): + active: bool = Field(True, title="Is active") + hosts: list[str] = Field(default_factory=list, title="Host names") + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, title="Task names" + ) + product_types: list[str] = Field( + default_factory=list, title="Product types" + ) + product_names: list[str] = Field( + default_factory=list, title="Product names" + ) + custom_staging_dir_persistent: bool = Field( + False, title="Custom Staging Folder Persistent" + ) + template_name: str = Field("", title="Template Name") + + +class PublishToolModel(BaseSettingsModel): + template_name_profiles: list[PublishTemplateNameProfile] = Field( + default_factory=list, + 
title="Template name profiles" + ) + hero_template_name_profiles: list[PublishTemplateNameProfile] = Field( + default_factory=list, + title="Hero template name profiles" + ) + custom_staging_dir_profiles: list[CustomStagingDirProfileModel] = Field( + default_factory=list, + title="Custom Staging Dir Profiles" + ) + + +class GlobalToolsModel(BaseSettingsModel): + creator: CreatorToolModel = Field( + default_factory=CreatorToolModel, + title="Creator" + ) + Workfiles: WorkfilesToolModel = Field( + default_factory=WorkfilesToolModel, + title="Workfiles" + ) + loader: LoaderToolModel = Field( + default_factory=LoaderToolModel, + title="Loader" + ) + publish: PublishToolModel = Field( + default_factory=PublishToolModel, + title="Publish" + ) + + +DEFAULT_TOOLS_VALUES = { + "creator": { + "product_types_smart_select": [ + { + "name": "Render", + "task_names": [ + "light", + "render" + ] + }, + { + "name": "Model", + "task_names": [ + "model" + ] + }, + { + "name": "Layout", + "task_names": [ + "layout" + ] + }, + { + "name": "Look", + "task_names": [ + "look" + ] + }, + { + "name": "Rig", + "task_names": [ + "rigging", + "rig" + ] + } + ], + "product_name_profiles": [ + { + "product_types": [], + "hosts": [], + "task_types": [], + "tasks": [], + "template": "{product[type]}{variant}" + }, + { + "product_types": [ + "workfile" + ], + "hosts": [], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}" + }, + { + "product_types": [ + "render" + ], + "hosts": [], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}{Variant}" + }, + { + "product_types": [ + "renderLayer", + "renderPass" + ], + "hosts": [ + "tvpaint" + ], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}_{Renderlayer}_{Renderpass}" + }, + { + "product_types": [ + "review", + "workfile" + ], + "hosts": [ + "aftereffects", + "tvpaint" + ], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}" + }, + { + "product_types": ["render"], + "hosts": [ + "aftereffects" + ], + "task_types": [], + "tasks": [], + "template": "{product[type]}{Task[name]}{Composition}{Variant}" + }, + { + "product_types": [ + "staticMesh" + ], + "hosts": [ + "maya" + ], + "task_types": [], + "tasks": [], + "template": "S_{folder[name]}{variant}" + }, + { + "product_types": [ + "skeletalMesh" + ], + "hosts": [ + "maya" + ], + "task_types": [], + "tasks": [], + "template": "SK_{folder[name]}{variant}" + } + ] + }, + "Workfiles": { + "workfile_template_profiles": [ + { + "task_types": [], + "hosts": [], + "workfile_template": "work" + }, + { + "task_types": [], + "hosts": [ + "unreal" + ], + "workfile_template": "work_unreal" + } + ], + "last_workfile_on_startup": [ + { + "hosts": [], + "task_types": [], + "tasks": [], + "enabled": True, + "use_last_published_workfile": False + } + ], + "open_workfile_tool_on_startup": [ + { + "hosts": [], + "task_types": [], + "tasks": [], + "enabled": False + } + ], + "extra_folders": [], + "workfile_lock_profiles": [] + }, + "loader": { + "product_type_filter_profiles": [ + { + "hosts": [], + "task_types": [], + "is_include": True, + "filter_product_types": [] + } + ] + }, + "publish": { + "template_name_profiles": [ + { + "product_types": [], + "hosts": [], + "task_types": [], + "task_names": [], + "template_name": "publish" + }, + { + "product_types": [ + "review", + "render", + "prerender" + ], + "hosts": [], + "task_types": [], + "task_names": [], + "template_name": "publish_render" + }, + { + "product_types": [ + 
"simpleUnrealTexture" + ], + "hosts": [ + "standalonepublisher" + ], + "task_types": [], + "task_names": [], + "template_name": "publish_simpleUnrealTexture" + }, + { + "product_types": [ + "staticMesh", + "skeletalMesh" + ], + "hosts": [ + "maya" + ], + "task_types": [], + "task_names": [], + "template_name": "publish_maya2unreal" + }, + { + "product_types": [ + "online" + ], + "hosts": [ + "traypublisher" + ], + "task_types": [], + "task_names": [], + "template_name": "publish_online" + } + ], + "hero_template_name_profiles": [ + { + "product_types": [ + "simpleUnrealTexture" + ], + "hosts": [ + "standalonepublisher" + ], + "task_types": [], + "task_names": [], + "template_name": "hero_simpleUnrealTextureHero" + } + ] + } +} diff --git a/server_addon/core/server/version.py b/server_addon/core/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/core/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/create_ayon_addon.py b/server_addon/create_ayon_addon.py deleted file mode 100644 index 657f416441..0000000000 --- a/server_addon/create_ayon_addon.py +++ /dev/null @@ -1,140 +0,0 @@ -import os -import re -import shutil -import zipfile -import collections -from pathlib import Path -from typing import Any, Optional, Iterable - -# Patterns of directories to be skipped for server part of addon -IGNORE_DIR_PATTERNS: list[re.Pattern] = [ - re.compile(pattern) - for pattern in { - # Skip directories starting with '.' - r"^\.", - # Skip any pycache folders - "^__pycache__$" - } -] - -# Patterns of files to be skipped for server part of addon -IGNORE_FILE_PATTERNS: list[re.Pattern] = [ - re.compile(pattern) - for pattern in { - # Skip files starting with '.' - # NOTE this could be an issue in some cases - r"^\.", - # Skip '.pyc' files - r"\.pyc$" - } -] - - -def _value_match_regexes(value: str, regexes: Iterable[re.Pattern]) -> bool: - return any( - regex.search(value) - for regex in regexes - ) - - -def find_files_in_subdir( - src_path: str, - ignore_file_patterns: Optional[list[re.Pattern]] = None, - ignore_dir_patterns: Optional[list[re.Pattern]] = None -): - """Find all files to copy in subdirectories of given path. - - All files that match any of the patterns in 'ignore_file_patterns' will - be skipped and any directories that match any of the patterns in - 'ignore_dir_patterns' will be skipped with all subfiles. - - Args: - src_path (str): Path to directory to search in. - ignore_file_patterns (Optional[list[re.Pattern]]): List of regexes - to match files to ignore. - ignore_dir_patterns (Optional[list[re.Pattern]]): List of regexes - to match directories to ignore. - - Returns: - list[tuple[str, str]]: List of tuples with path to file and parent - directories relative to 'src_path'. 
- """ - - if ignore_file_patterns is None: - ignore_file_patterns = IGNORE_FILE_PATTERNS - - if ignore_dir_patterns is None: - ignore_dir_patterns = IGNORE_DIR_PATTERNS - output: list[tuple[str, str]] = [] - - hierarchy_queue = collections.deque() - hierarchy_queue.append((src_path, [])) - while hierarchy_queue: - item: tuple[str, str] = hierarchy_queue.popleft() - dirpath, parents = item - for name in os.listdir(dirpath): - path = os.path.join(dirpath, name) - if os.path.isfile(path): - if not _value_match_regexes(name, ignore_file_patterns): - items = list(parents) - items.append(name) - output.append((path, os.path.sep.join(items))) - continue - - if not _value_match_regexes(name, ignore_dir_patterns): - items = list(parents) - items.append(name) - hierarchy_queue.append((path, items)) - - return output - - -def main(): - openpype_addon_dir = Path(os.path.dirname(os.path.abspath(__file__))) - server_dir = openpype_addon_dir / "server" - package_root = openpype_addon_dir / "package" - pyproject_path = openpype_addon_dir / "client" / "pyproject.toml" - - root_dir = openpype_addon_dir.parent - openpype_dir = root_dir / "openpype" - version_path = openpype_dir / "version.py" - - # Read version - version_content: dict[str, Any] = {} - with open(str(version_path), "r") as stream: - exec(stream.read(), version_content) - addon_version: str = version_content["__version__"] - - output_dir = package_root / "openpype" / addon_version - private_dir = output_dir / "private" - - # Make sure package dir is empty - if package_root.exists(): - shutil.rmtree(str(package_root)) - # Make sure output dir is created - output_dir.mkdir(parents=True) - - # Copy version - shutil.copy(str(version_path), str(output_dir)) - for subitem in server_dir.iterdir(): - shutil.copy(str(subitem), str(output_dir / subitem.name)) - - # Make sure private dir exists - private_dir.mkdir(parents=True) - - # Copy pyproject.toml - shutil.copy( - str(pyproject_path), - (private_dir / pyproject_path.name) - ) - - # Zip client - zip_filepath = private_dir / "client.zip" - with zipfile.ZipFile(zip_filepath, "w", zipfile.ZIP_DEFLATED) as zipf: - # Add client code content to zip - for path, sub_path in find_files_in_subdir(str(openpype_dir)): - zipf.write(path, f"{openpype_dir.name}/{sub_path}") - - -if __name__ == "__main__": - main() diff --git a/server_addon/create_ayon_addons.py b/server_addon/create_ayon_addons.py new file mode 100644 index 0000000000..61dbd5c8d9 --- /dev/null +++ b/server_addon/create_ayon_addons.py @@ -0,0 +1,308 @@ +import os +import sys +import re +import json +import shutil +import zipfile +import platform +import collections +from pathlib import Path +from typing import Any, Optional, Iterable, Pattern, List, Tuple + +# Patterns of directories to be skipped for server part of addon +IGNORE_DIR_PATTERNS: List[Pattern] = [ + re.compile(pattern) + for pattern in { + # Skip directories starting with '.' + r"^\.", + # Skip any pycache folders + "^__pycache__$" + } +] + +# Patterns of files to be skipped for server part of addon +IGNORE_FILE_PATTERNS: List[Pattern] = [ + re.compile(pattern) + for pattern in { + # Skip files starting with '.' + # NOTE this could be an issue in some cases + r"^\.", + # Skip '.pyc' files + r"\.pyc$" + } +] + + +class ZipFileLongPaths(zipfile.ZipFile): + """Allows longer paths in zip files. + + Regular DOS paths are limited to MAX_PATH (260) characters, including + the string's terminating NUL character. 
+ That limit can be exceeded by using an extended-length path that + starts with the '\\?\' prefix. + """ + _is_windows = platform.system().lower() == "windows" + + def _extract_member(self, member, tpath, pwd): + if self._is_windows: + tpath = os.path.abspath(tpath) + if tpath.startswith("\\\\"): + tpath = "\\\\?\\UNC\\" + tpath[2:] + else: + tpath = "\\\\?\\" + tpath + + return super(ZipFileLongPaths, self)._extract_member( + member, tpath, pwd + ) + + +def _value_match_regexes(value: str, regexes: Iterable[Pattern]) -> bool: + return any( + regex.search(value) + for regex in regexes + ) + + +def find_files_in_subdir( + src_path: str, + ignore_file_patterns: Optional[List[Pattern]] = None, + ignore_dir_patterns: Optional[List[Pattern]] = None, + ignore_subdirs: Optional[Iterable[Tuple[str]]] = None +): + """Find all files to copy in subdirectories of given path. + + All files that match any of the patterns in 'ignore_file_patterns' will + be skipped and any directories that match any of the patterns in + 'ignore_dir_patterns' will be skipped with all subfiles. + + Args: + src_path (str): Path to directory to search in. + ignore_file_patterns (Optional[List[Pattern]]): List of regexes + to match files to ignore. + ignore_dir_patterns (Optional[List[Pattern]]): List of regexes + to match directories to ignore. + ignore_subdirs (Optional[Iterable[Tuple[str]]]): List of + subdirectories to ignore. + + Returns: + List[Tuple[str, str]]: List of tuples with path to file and parent + directories relative to 'src_path'. + """ + + if ignore_file_patterns is None: + ignore_file_patterns = IGNORE_FILE_PATTERNS + + if ignore_dir_patterns is None: + ignore_dir_patterns = IGNORE_DIR_PATTERNS + output: list[tuple[str, str]] = [] + + hierarchy_queue = collections.deque() + hierarchy_queue.append((src_path, [])) + while hierarchy_queue: + item: tuple[str, str] = hierarchy_queue.popleft() + dirpath, parents = item + if ignore_subdirs and parents in ignore_subdirs: + continue + for name in os.listdir(dirpath): + path = os.path.join(dirpath, name) + if os.path.isfile(path): + if not _value_match_regexes(name, ignore_file_patterns): + items = list(parents) + items.append(name) + output.append((path, os.path.sep.join(items))) + continue + + if not _value_match_regexes(name, ignore_dir_patterns): + items = list(parents) + items.append(name) + hierarchy_queue.append((path, items)) + + return output + + +def read_addon_version(version_path: Path) -> str: + # Read version + version_content: dict[str, Any] = {} + with open(str(version_path), "r") as stream: + exec(stream.read(), version_content) + return version_content["__version__"] + + +def get_addon_version(addon_dir: Path) -> str: + return read_addon_version(addon_dir / "server" / "version.py") + + +def create_addon_zip( + output_dir: Path, + addon_name: str, + addon_version: str, + keep_source: bool +): + zip_filepath = output_dir / f"{addon_name}-{addon_version}.zip" + addon_output_dir = output_dir / addon_name / addon_version + with ZipFileLongPaths(zip_filepath, "w", zipfile.ZIP_DEFLATED) as zipf: + zipf.writestr( + "manifest.json", + json.dumps({ + "addon_name": addon_name, + "addon_version": addon_version + }) + ) + # Add client code content to zip + src_root = os.path.normpath(str(addon_output_dir.absolute())) + src_root_offset = len(src_root) + 1 + for root, _, filenames in os.walk(str(addon_output_dir)): + rel_root = "" + if root != src_root: + rel_root = root[src_root_offset:] + + for filename in filenames: + src_path = os.path.join(root, filename) + 
if rel_root: + dst_path = os.path.join("addon", rel_root, filename) + else: + dst_path = os.path.join("addon", filename) + zipf.write(src_path, dst_path) + + if not keep_source: + shutil.rmtree(str(output_dir / addon_name)) + + +def create_openpype_package( + addon_dir: Path, + output_dir: Path, + root_dir: Path, + create_zip: bool, + keep_source: bool +): + server_dir = addon_dir / "server" + pyproject_path = addon_dir / "client" / "pyproject.toml" + + openpype_dir = root_dir / "openpype" + version_path = openpype_dir / "version.py" + addon_version = read_addon_version(version_path) + + addon_output_dir = output_dir / "openpype" / addon_version + private_dir = addon_output_dir / "private" + # Make sure dir exists + addon_output_dir.mkdir(parents=True) + private_dir.mkdir(parents=True) + + # Copy version + shutil.copy(str(version_path), str(addon_output_dir)) + for subitem in server_dir.iterdir(): + shutil.copy(str(subitem), str(addon_output_dir / subitem.name)) + + # Copy pyproject.toml + shutil.copy( + str(pyproject_path), + (private_dir / pyproject_path.name) + ) + + ignored_hosts = [] + ignored_modules = [ + "ftrack", + "shotgrid", + "sync_server", + "example_addons", + "slack" + ] + # Subdirs that won't be added to output zip file + ignored_subpaths = [ + ["addons"], + ["vendor", "common", "ayon_api"], + ] + ignored_subpaths.extend( + ["hosts", host_name] + for host_name in ignored_hosts + ) + ignored_subpaths.extend( + ["modules", module_name] + for module_name in ignored_modules + ) + + # Zip client + zip_filepath = private_dir / "client.zip" + with ZipFileLongPaths(zip_filepath, "w", zipfile.ZIP_DEFLATED) as zipf: + # Add client code content to zip + for path, sub_path in find_files_in_subdir( + str(openpype_dir), ignore_subdirs=ignored_subpaths + ): + zipf.write(path, f"{openpype_dir.name}/{sub_path}") + + if create_zip: + create_addon_zip(output_dir, "openpype", addon_version, keep_source) + + +def create_addon_package( + addon_dir: Path, + output_dir: Path, + create_zip: bool, + keep_source: bool +): + server_dir = addon_dir / "server" + addon_version = get_addon_version(addon_dir) + + addon_output_dir = output_dir / addon_dir.name / addon_version + if addon_output_dir.exists(): + shutil.rmtree(str(addon_output_dir)) + addon_output_dir.mkdir(parents=True) + + # Copy server content + src_root = os.path.normpath(str(server_dir.absolute())) + src_root_offset = len(src_root) + 1 + for root, _, filenames in os.walk(str(server_dir)): + dst_root = addon_output_dir + if root != src_root: + rel_root = root[src_root_offset:] + dst_root = dst_root / rel_root + + dst_root.mkdir(parents=True, exist_ok=True) + for filename in filenames: + src_path = os.path.join(root, filename) + shutil.copy(src_path, str(dst_root)) + + if create_zip: + create_addon_zip( + output_dir, addon_dir.name, addon_version, keep_source + ) + + +def main(create_zip=True, keep_source=False): + current_dir = Path(os.path.dirname(os.path.abspath(__file__))) + root_dir = current_dir.parent + output_dir = current_dir / "packages" + print("Package creation started...") + + # Make sure package dir is empty + if output_dir.exists(): + shutil.rmtree(str(output_dir)) + # Make sure output dir is created + output_dir.mkdir(parents=True) + + for addon_dir in current_dir.iterdir(): + if not addon_dir.is_dir(): + continue + + server_dir = addon_dir / "server" + if not server_dir.exists(): + continue + + if addon_dir.name == "openpype": + create_openpype_package( + addon_dir, output_dir, root_dir, create_zip, keep_source + ) + + else: 
+ create_addon_package( + addon_dir, output_dir, create_zip, keep_source + ) + + print(f"- package '{addon_dir.name}' created") + print(f"Package creation finished. Output directory: {output_dir}") + + +if __name__ == "__main__": + create_zip = "--skip-zip" not in sys.argv + keep_sources = "--keep-sources" in sys.argv + main(create_zip, keep_sources) diff --git a/server_addon/deadline/server/__init__.py b/server_addon/deadline/server/__init__.py new file mode 100644 index 0000000000..36d04189a9 --- /dev/null +++ b/server_addon/deadline/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import DeadlineSettings, DEFAULT_VALUES + + +class Deadline(BaseServerAddon): + name = "deadline" + title = "Deadline" + version = __version__ + settings_model: Type[DeadlineSettings] = DeadlineSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/deadline/server/settings/__init__.py b/server_addon/deadline/server/settings/__init__.py new file mode 100644 index 0000000000..0307862afa --- /dev/null +++ b/server_addon/deadline/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + DeadlineSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "DeadlineSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/deadline/server/settings/main.py b/server_addon/deadline/server/settings/main.py new file mode 100644 index 0000000000..f158b7464d --- /dev/null +++ b/server_addon/deadline/server/settings/main.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator + +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + +from .publish_plugins import ( + PublishPluginsModel, + DEFAULT_DEADLINE_PLUGINS_SETTINGS +) + + +class ServerListSubmodel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: str = Field(title="Value") + + +class DeadlineSettings(BaseSettingsModel): + deadline_urls: list[ServerListSubmodel] = Field( + default_factory=list, + title="System Deadline Webservice URLs", + scope=["studio"], + ) + deadline_servers: list[str] = Field( + title="Project deadline servers", + section="---", + ) + publish: PublishPluginsModel = Field( + default_factory=PublishPluginsModel, + title="Publish Plugins", + ) + + @validator("deadline_urls") + def validate_unique_names(cls, value): + ensure_unique_names(value) + return value + + +DEFAULT_VALUES = { + "deadline_urls": [ + { + "name": "default", + "value": "http://127.0.0.1:8082" + } + ], + # TODO: this needs to be dynamic from "deadline_urls" + "deadline_servers": [], + "publish": DEFAULT_DEADLINE_PLUGINS_SETTINGS +} diff --git a/server_addon/deadline/server/settings/publish_plugins.py b/server_addon/deadline/server/settings/publish_plugins.py new file mode 100644 index 0000000000..8d1b667345 --- /dev/null +++ b/server_addon/deadline/server/settings/publish_plugins.py @@ -0,0 +1,435 @@ +from pydantic import Field, validator + +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + + +class CollectDefaultDeadlineServerModel(BaseSettingsModel): + """Settings for event handlers running in ftrack service.""" + + pass_mongo_url: bool = Field(title="Pass Mongo url to job") + + +class CollectDeadlinePoolsModel(BaseSettingsModel): + """Settings Deadline default pools.""" + + primary_pool: str = Field(title="Primary Pool") + + secondary_pool: str = Field(title="Secondary 
Pool") + + +class ValidateExpectedFilesModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + active: bool = Field(True, title="Active") + allow_user_override: bool = Field( + True, title="Allow user change frame range" + ) + families: list[str] = Field( + default_factory=list, title="Trigger on families" + ) + targets: list[str] = Field( + default_factory=list, title="Trigger for plugins" + ) + + +def tile_assembler_enum(): + """Return a list of value/label dicts for the enumerator. + + Returning a list of dicts is used to allow for a custom label to be + displayed in the UI. + """ + return [ + { + "value": "DraftTileAssembler", + "label": "Draft Tile Assembler" + }, + { + "value": "OpenPypeTileAssembler", + "label": "Open Image IO" + } + ] + + +class ScenePatchesSubmodel(BaseSettingsModel): + _layout = "expanded" + name: str = Field(title="Patch name") + regex: str = Field(title="Patch regex") + line: str = Field(title="Patch line") + + +class MayaSubmitDeadlineModel(BaseSettingsModel): + """Maya deadline submitter settings.""" + + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + use_published: bool = Field(title="Use Published scene") + import_reference: bool = Field(title="Use Scene with Imported Reference") + asset_dependencies: bool = Field(title="Use Asset dependencies") + priority: int = Field(title="Priority") + tile_priority: int = Field(title="Tile Priority") + group: str = Field(title="Group") + limit: list[str] = Field( + default_factory=list, + title="Limit Groups" + ) + tile_assembler_plugin: str = Field( + title="Tile Assembler Plugin", + enum_resolver=tile_assembler_enum, + ) + jobInfo: str = Field( + title="Additional JobInfo data", + widget="textarea", + ) + pluginInfo: str = Field( + title="Additional PluginInfo data", + widget="textarea", + ) + + scene_patches: list[ScenePatchesSubmodel] = Field( + default_factory=list, + title="Scene patches", + ) + strict_error_checking: bool = Field( + title="Disable Strict Error Check profiles" + ) + + @validator("limit", "scene_patches") + def validate_unique_names(cls, value): + ensure_unique_names(value) + return value + + +class MaxSubmitDeadlineModel(BaseSettingsModel): + enabled: bool = Field(True) + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + use_published: bool = Field(title="Use Published scene") + priority: int = Field(title="Priority") + chunk_size: int = Field(title="Frame per Task") + group: str = Field("", title="Group Name") + + +class EnvSearchReplaceSubmodel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: str = Field(title="Value") + + +class LimitGroupsSubmodel(BaseSettingsModel): + _layout = "expanded" + name: str = Field(title="Name") + value: list[str] = Field( + default_factory=list, + title="Limit Groups" + ) + + +class FusionSubmitDeadlineModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + priority: int = Field(50, title="Priority") + chunk_size: int = Field(10, title="Frame per Task") + concurrent_tasks: int = Field(1, title="Number of concurrent tasks") + group: str = Field("", title="Group Name") + + +class NukeSubmitDeadlineModel(BaseSettingsModel): + """Nuke deadline submitter settings.""" + + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + 
priority: int = Field(title="Priority")
+    chunk_size: int = Field(title="Chunk Size")
+    concurrent_tasks: int = Field(title="Number of concurrent tasks")
+    group: str = Field(title="Group")
+    department: str = Field(title="Department")
+    use_gpu: bool = Field(title="Use GPU")
+
+    env_allowed_keys: list[str] = Field(
+        default_factory=list,
+        title="Allowed environment keys"
+    )
+
+    env_search_replace_values: list[EnvSearchReplaceSubmodel] = Field(
+        default_factory=list,
+        title="Search & replace in environment values",
+    )
+
+    limit_groups: list[LimitGroupsSubmodel] = Field(
+        default_factory=list,
+        title="Limit Groups",
+    )
+
+    @validator("limit_groups", "env_allowed_keys", "env_search_replace_values")
+    def validate_unique_names(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+class HarmonySubmitDeadlineModel(BaseSettingsModel):
+    """Harmony deadline submitter settings."""
+
+    enabled: bool = Field(title="Enabled")
+    optional: bool = Field(title="Optional")
+    active: bool = Field(title="Active")
+    use_published: bool = Field(title="Use Published scene")
+    priority: int = Field(title="Priority")
+    chunk_size: int = Field(title="Chunk Size")
+    group: str = Field(title="Group")
+    department: str = Field(title="Department")
+
+
+class AfterEffectsSubmitDeadlineModel(BaseSettingsModel):
+    """After Effects deadline submitter settings."""
+
+    enabled: bool = Field(title="Enabled")
+    optional: bool = Field(title="Optional")
+    active: bool = Field(title="Active")
+    use_published: bool = Field(title="Use Published scene")
+    priority: int = Field(title="Priority")
+    chunk_size: int = Field(title="Chunk Size")
+    group: str = Field(title="Group")
+    department: str = Field(title="Department")
+    multiprocess: bool = Field(title="Multiprocess")
+
+
+class CelactionSubmitDeadlineModel(BaseSettingsModel):
+    enabled: bool = Field(True, title="Enabled")
+    deadline_department: str = Field("", title="Deadline department")
+    deadline_priority: int = Field(50, title="Deadline priority")
+    deadline_pool: str = Field("", title="Deadline pool")
+    deadline_pool_secondary: str = Field("", title="Deadline pool (secondary)")
+    deadline_group: str = Field("", title="Deadline Group")
+    deadline_chunk_size: int = Field(10, title="Deadline Chunk size")
+    deadline_job_delay: str = Field(
+        "", title="Delay job (timecode dd:hh:mm:ss)"
+    )
+
+
+class AOVFilterSubmodel(BaseSettingsModel):
+    _layout = "expanded"
+    name: str = Field(title="Host")
+    value: list[str] = Field(
+        default_factory=list,
+        title="AOV regex"
+    )
+
+
+class ProcessSubmittedJobOnFarmModel(BaseSettingsModel):
+    """Process submitted job on farm."""
+
+    enabled: bool = Field(title="Enabled")
+    deadline_department: str = Field(title="Department")
+    deadline_pool: str = Field(title="Pool")
+    deadline_group: str = Field(title="Group")
+    deadline_chunk_size: int = Field(title="Chunk Size")
+    deadline_priority: int = Field(title="Priority")
+    publishing_script: str = Field(title="Publishing script path")
+    skip_integration_repre_list: list[str] = Field(
+        default_factory=list,
+        title="Skip integration of representation with ext"
+    )
+    aov_filter: list[AOVFilterSubmodel] = Field(
+        default_factory=list,
+        title="Reviewable products filter",
+    )
+
+    @validator("aov_filter", "skip_integration_repre_list")
+    def validate_unique_names(cls, value):
+        ensure_unique_names(value)
+        return value
+
+
+class PublishPluginsModel(BaseSettingsModel):
+    CollectDefaultDeadlineServer: CollectDefaultDeadlineServerModel = Field(
+        default_factory=CollectDefaultDeadlineServerModel,
+        title="Default Deadline Webservice")
+    CollectDeadlinePools: CollectDeadlinePoolsModel = Field(
+        default_factory=CollectDeadlinePoolsModel,
+        title="Default Pools")
+    ValidateExpectedFiles: ValidateExpectedFilesModel = Field(
+        default_factory=ValidateExpectedFilesModel,
+        title="Validate Expected Files"
+    )
+    MayaSubmitDeadline: MayaSubmitDeadlineModel = Field(
+        default_factory=MayaSubmitDeadlineModel,
+        title="Maya Submit to deadline")
+    MaxSubmitDeadline: MaxSubmitDeadlineModel = Field(
+        default_factory=MaxSubmitDeadlineModel,
+        title="Max Submit to deadline")
+    FusionSubmitDeadline: FusionSubmitDeadlineModel = Field(
+        default_factory=FusionSubmitDeadlineModel,
+        title="Fusion submit to Deadline")
+    NukeSubmitDeadline: NukeSubmitDeadlineModel = Field(
+        default_factory=NukeSubmitDeadlineModel,
+        title="Nuke Submit to deadline")
+    HarmonySubmitDeadline: HarmonySubmitDeadlineModel = Field(
+        default_factory=HarmonySubmitDeadlineModel,
+        title="Harmony Submit to deadline")
+    AfterEffectsSubmitDeadline: AfterEffectsSubmitDeadlineModel = Field(
+        default_factory=AfterEffectsSubmitDeadlineModel,
+        title="After Effects to deadline")
+    CelactionSubmitDeadline: CelactionSubmitDeadlineModel = Field(
+        default_factory=CelactionSubmitDeadlineModel,
+        title="Celaction Submit Deadline"
+    )
+    ProcessSubmittedJobOnFarm: ProcessSubmittedJobOnFarmModel = Field(
+        default_factory=ProcessSubmittedJobOnFarmModel,
+        title="Process submitted job on farm.")
+
+
+DEFAULT_DEADLINE_PLUGINS_SETTINGS = {
+    "CollectDefaultDeadlineServer": {
+        "pass_mongo_url": True
+    },
+    "CollectDeadlinePools": {
+        "primary_pool": "",
+        "secondary_pool": ""
+    },
+    "ValidateExpectedFiles": {
+        "enabled": True,
+        "active": True,
+        "allow_user_override": True,
+        "families": [
+            "render"
+        ],
+        "targets": [
+            "deadline"
+        ]
+    },
+    "MayaSubmitDeadline": {
+        "enabled": True,
+        "optional": False,
+        "active": True,
+        "tile_assembler_plugin": "DraftTileAssembler",
+        "use_published": True,
+        "import_reference": False,
+        "asset_dependencies": True,
+        "strict_error_checking": True,
+        "priority": 50,
+        "tile_priority": 50,
+        "group": "none",
+        "limit": [],
+        # this used to be empty dict
+        "jobInfo": "",
+        # this used to be empty dict
+        "pluginInfo": "",
+        "scene_patches": []
+    },
+    "MaxSubmitDeadline": {
+        "enabled": True,
+        "optional": False,
+        "active": True,
+        "use_published": True,
+        "priority": 50,
+        "chunk_size": 10,
+        "group": "none"
+    },
+    "FusionSubmitDeadline": {
+        "enabled": True,
+        "optional": False,
+        "active": True,
+        "priority": 50,
+        "chunk_size": 10,
+        "concurrent_tasks": 1,
+        "group": ""
+    },
+    "NukeSubmitDeadline": {
+        "enabled": True,
+        "optional": False,
+        "active": True,
+        "priority": 50,
+        "chunk_size": 10,
+        "concurrent_tasks": 1,
+        "group": "",
+        "department": "",
+        "use_gpu": True,
+        "env_allowed_keys": [],
+        "env_search_replace_values": [],
+        "limit_groups": []
+    },
+    "HarmonySubmitDeadline": {
+        "enabled": True,
+        "optional": False,
+        "active": True,
+        "use_published": True,
+        "priority": 50,
+        "chunk_size": 10000,
+        "group": "",
+        "department": ""
+    },
+    "AfterEffectsSubmitDeadline": {
+        "enabled": True,
+        "optional": False,
+        "active": True,
+        "use_published": True,
+        "priority": 50,
+        "chunk_size": 10000,
+        "group": "",
+        "department": "",
+        "multiprocess": True
+    },
"CelactionSubmitDeadline": { + "enabled": True, + "deadline_department": "", + "deadline_priority": 50, + "deadline_pool": "", + "deadline_pool_secondary": "", + "deadline_group": "", + "deadline_chunk_size": 10, + "deadline_job_delay": "00:00:00:00" + }, + "ProcessSubmittedJobOnFarm": { + "enabled": True, + "deadline_department": "", + "deadline_pool": "", + "deadline_group": "", + "deadline_chunk_size": 1, + "deadline_priority": 50, + "publishing_script": "", + "skip_integration_repre_list": [], + "aov_filter": [ + { + "name": "maya", + "value": [ + ".*([Bb]eauty).*" + ] + }, + { + "name": "aftereffects", + "value": [ + ".*" + ] + }, + { + "name": "celaction", + "value": [ + ".*" + ] + }, + { + "name": "harmony", + "value": [ + ".*" + ] + }, + { + "name": "max", + "value": [ + ".*" + ] + }, + { + "name": "fusion", + "value": [ + ".*" + ] + } + ] + } +} diff --git a/server_addon/deadline/server/version.py b/server_addon/deadline/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/deadline/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/flame/server/__init__.py b/server_addon/flame/server/__init__.py new file mode 100644 index 0000000000..7d5eb3960f --- /dev/null +++ b/server_addon/flame/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import FlameSettings, DEFAULT_VALUES + + +class FlameAddon(BaseServerAddon): + name = "flame" + title = "Flame" + version = __version__ + settings_model: Type[FlameSettings] = FlameSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/flame/server/settings/__init__.py b/server_addon/flame/server/settings/__init__.py new file mode 100644 index 0000000000..39b8220d40 --- /dev/null +++ b/server_addon/flame/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + FlameSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "FlameSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/flame/server/settings/create_plugins.py b/server_addon/flame/server/settings/create_plugins.py new file mode 100644 index 0000000000..374a7368d2 --- /dev/null +++ b/server_addon/flame/server/settings/create_plugins.py @@ -0,0 +1,120 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class CreateShotClipModel(BaseSettingsModel): + hierarchy: str = Field( + "shot", + title="Shot parent hierarchy", + section="Shot Hierarchy And Rename Settings" + ) + useShotName: bool = Field( + True, + title="Use Shot Name", + ) + clipRename: bool = Field( + False, + title="Rename clips", + ) + clipName: str = Field( + "{sequence}{shot}", + title="Clip name template" + ) + segmentIndex: bool = Field( + True, + title="Accept segment order" + ) + countFrom: int = Field( + 10, + title="Count sequence from" + ) + countSteps: int = Field( + 10, + title="Stepping number" + ) + + folder: str = Field( + "shots", + title="{folder}", + section="Shot Template Keywords" + ) + episode: str = Field( + "ep01", + title="{episode}" + ) + sequence: str = Field( + "a", + title="{sequence}" + ) + track: str = Field( + "{_track_}", + title="{track}" + ) + shot: str = Field( + "####", + title="{shot}" + ) + + vSyncOn: bool = Field( + False, + title="Enable Vertical Sync", + section="Vertical Synchronization Of Attributes" + ) + + 
workfileFrameStart: int = Field( + 1001, + title="Workfiles Start Frame", + section="Shot Attributes" + ) + handleStart: int = Field( + 10, + title="Handle start (head)" + ) + handleEnd: int = Field( + 10, + title="Handle end (tail)" + ) + includeHandles: bool = Field( + False, + title="Enable handles including" + ) + retimedHandles: bool = Field( + True, + title="Enable retimed handles" + ) + retimedFramerange: bool = Field( + True, + title="Enable retimed shot frameranges" + ) + + +class CreatePuginsModel(BaseSettingsModel): + CreateShotClip: CreateShotClipModel = Field( + default_factory=CreateShotClipModel, + title="Create Shot Clip" + ) + + +DEFAULT_CREATE_SETTINGS = { + "CreateShotClip": { + "hierarchy": "{folder}/{sequence}", + "useShotName": True, + "clipRename": False, + "clipName": "{sequence}{shot}", + "segmentIndex": True, + "countFrom": 10, + "countSteps": 10, + "folder": "shots", + "episode": "ep01", + "sequence": "a", + "track": "{_track_}", + "shot": "####", + "vSyncOn": False, + "workfileFrameStart": 1001, + "handleStart": 5, + "handleEnd": 5, + "includeHandles": False, + "retimedHandles": True, + "retimedFramerange": True + } +} diff --git a/server_addon/flame/server/settings/imageio.py b/server_addon/flame/server/settings/imageio.py new file mode 100644 index 0000000000..ef1e4721d1 --- /dev/null +++ b/server_addon/flame/server/settings/imageio.py @@ -0,0 +1,130 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIORemappingRulesModel(BaseSettingsModel): + host_native_name: str = Field( + title="Application native colorspace name" + ) + ocio_name: str = Field(title="OCIO colorspace name") + + +class ImageIORemappingModel(BaseSettingsModel): + rules: list[ImageIORemappingRulesModel] = Field( + default_factory=list + ) + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ProfileNamesMappingInputsModel(BaseSettingsModel): + _layout = "expanded" + + flameName: str = Field("", title="Flame name") + ocioName: str = Field("", title="OCIO name") + + +class ProfileNamesMappingModel(BaseSettingsModel): + _layout = "expanded" + + inputs: list[ProfileNamesMappingInputsModel] = Field( + default_factory=list, + title="Profile names mapping" + ) + + +class ImageIOProjectModel(BaseSettingsModel): + colourPolicy: str = Field( + "ACES 1.1", + title="Colour Policy (name or path)", + section="Project" + ) + frameDepth: str = Field( + "16-bit fp", + title="Image Depth" + ) + fieldDominance: str = Field( + "PROGRESSIVE", + title="Field Dominance" + ) + + +class FlameImageIOModel(BaseSettingsModel): + _isGroup = True + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + remapping: ImageIORemappingModel = Field( + title="Remapping colorspace names", + 
default_factory=ImageIORemappingModel + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) + # NOTE 'project' attribute was expanded to this model but that caused + # inconsistency with v3 settings and harder conversion handling + # - it can be moved back but keep in mind that it must be handled in v3 + # conversion script too + project: ImageIOProjectModel = Field( + default_factory=ImageIOProjectModel, + title="Project" + ) + profilesMapping: ProfileNamesMappingModel = Field( + default_factory=ProfileNamesMappingModel, + title="Profile names mapping" + ) + + +DEFAULT_IMAGEIO_SETTINGS = { + "project": { + "colourPolicy": "ACES 1.1", + "frameDepth": "16-bit fp", + "fieldDominance": "PROGRESSIVE" + }, + "profilesMapping": { + "inputs": [ + { + "flameName": "ACEScg", + "ocioName": "ACES - ACEScg" + }, + { + "flameName": "Rec.709 video", + "ocioName": "Output - Rec.709" + } + ] + } +} diff --git a/server_addon/flame/server/settings/loader_plugins.py b/server_addon/flame/server/settings/loader_plugins.py new file mode 100644 index 0000000000..6c27b926c2 --- /dev/null +++ b/server_addon/flame/server/settings/loader_plugins.py @@ -0,0 +1,99 @@ +from ayon_server.settings import Field, BaseSettingsModel + + +class LoadClipModel(BaseSettingsModel): + enabled: bool = Field(True) + + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + reel_group_name: str = Field( + "OpenPype_Reels", + title="Reel group name" + ) + reel_name: str = Field( + "Loaded", + title="Reel name" + ) + + clip_name_template: str = Field( + "{folder[name]}_{product[name]}<_{output}>", + title="Clip name template" + ) + layer_rename_template: str = Field("", title="Layer name template") + layer_rename_patterns: list[str] = Field( + default_factory=list, + title="Layer rename patters", + ) + + +class LoadClipBatchModel(BaseSettingsModel): + enabled: bool = Field(True) + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + reel_name: str = Field( + "OP_LoadedReel", + title="Reel name" + ) + clip_name_template: str = Field( + "{batch}_{folder[name]}_{product[name]}<_{output}>", + title="Clip name template" + ) + layer_rename_template: str = Field("", title="Layer name template") + layer_rename_patterns: list[str] = Field( + default_factory=list, + title="Layer rename patters", + ) + + +class LoaderPluginsModel(BaseSettingsModel): + LoadClip: LoadClipModel = Field( + default_factory=LoadClipModel, + title="Load Clip" + ) + LoadClipBatch: LoadClipBatchModel = Field( + default_factory=LoadClipBatchModel, + title="Load as clip to current batch" + ) + + +DEFAULT_LOADER_SETTINGS = { + "LoadClip": { + "enabled": True, + "product_types": [ + "render2d", + "source", + "plate", + "render", + "review" + ], + "reel_group_name": "OpenPype_Reels", + "reel_name": "Loaded", + "clip_name_template": "{folder[name]}_{product[name]}<_{output}>", + "layer_rename_template": "{folder[name]}_{product[name]}<_{output}>", + "layer_rename_patterns": [ + "rgb", + "rgba" + ] + }, + "LoadClipBatch": { + "enabled": True, + "product_types": [ + "render2d", + "source", + "plate", + "render", + "review" + ], + "reel_name": "OP_LoadedReel", + "clip_name_template": "{batch}_{folder[name]}_{product[name]}<_{output}>", + "layer_rename_template": "{folder[name]}_{product[name]}<_{output}>", + "layer_rename_patterns": [ + "rgb", + 
"rgba" + ] + } +} diff --git a/server_addon/flame/server/settings/main.py b/server_addon/flame/server/settings/main.py new file mode 100644 index 0000000000..f28de6641b --- /dev/null +++ b/server_addon/flame/server/settings/main.py @@ -0,0 +1,33 @@ +from ayon_server.settings import Field, BaseSettingsModel + +from .imageio import FlameImageIOModel, DEFAULT_IMAGEIO_SETTINGS +from .create_plugins import CreatePuginsModel, DEFAULT_CREATE_SETTINGS +from .publish_plugins import PublishPuginsModel, DEFAULT_PUBLISH_SETTINGS +from .loader_plugins import LoaderPluginsModel, DEFAULT_LOADER_SETTINGS + + +class FlameSettings(BaseSettingsModel): + imageio: FlameImageIOModel = Field( + default_factory=FlameImageIOModel, + title="Color Management (ImageIO)" + ) + create: CreatePuginsModel = Field( + default_factory=CreatePuginsModel, + title="Create plugins" + ) + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish plugins" + ) + load: LoaderPluginsModel = Field( + default_factory=LoaderPluginsModel, + title="Loader plugins" + ) + + +DEFAULT_VALUES = { + "imageio": DEFAULT_IMAGEIO_SETTINGS, + "create": DEFAULT_CREATE_SETTINGS, + "publish": DEFAULT_PUBLISH_SETTINGS, + "load": DEFAULT_LOADER_SETTINGS +} diff --git a/server_addon/flame/server/settings/publish_plugins.py b/server_addon/flame/server/settings/publish_plugins.py new file mode 100644 index 0000000000..ea7f109f73 --- /dev/null +++ b/server_addon/flame/server/settings/publish_plugins.py @@ -0,0 +1,190 @@ +from ayon_server.settings import Field, BaseSettingsModel, task_types_enum + + +class XMLPresetAttrsFromCommentsModel(BaseSettingsModel): + _layout = "expanded" + name: str = Field("", title="Attribute name") + type: str = Field( + default_factory=str, + title="Attribute type", + enum_resolver=lambda: ["number", "float", "string"] + ) + + +class AddTasksModel(BaseSettingsModel): + _layout = "expanded" + name: str = Field("", title="Task name") + type: str = Field( + default_factory=str, + title="Task type", + enum_resolver=task_types_enum + ) + create_batch_group: bool = Field( + True, + title="Create batch group" + ) + + +class CollectTimelineInstancesModel(BaseSettingsModel): + _isGroup = True + + xml_preset_attrs_from_comments: list[XMLPresetAttrsFromCommentsModel] = Field( + default_factory=list, + title="XML presets attributes parsable from segment comments" + ) + add_tasks: list[AddTasksModel] = Field( + default_factory=list, + title="Add tasks" + ) + + +class ExportPresetsMappingModel(BaseSettingsModel): + _layout = "expanded" + + name: str = Field( + ..., + title="Name" + ) + active: bool = Field(True, title="Is active") + export_type: str = Field( + "File Sequence", + title="Eport clip type", + enum_resolver=lambda: ["Movie", "File Sequence", "Sequence Publish"] + ) + ext: str = Field("exr", title="Output extension") + xml_preset_file: str = Field( + "OpenEXR (16-bit fp DWAA).xml", + title="XML preset file (with ext)" + ) + colorspace_out: str = Field( + "ACES - ACEScg", + title="Output color (imageio)" + ) + # TODO remove when resolved or v3 is not a thing anymore + # NOTE next 4 attributes were grouped under 'other_parameters' but that + # created inconsistency with v3 settings and harder conversion handling + # - it can be moved back but keep in mind that it must be handled in v3 + # conversion script too + xml_preset_dir: str = Field( + "", + title="XML preset directory" + ) + parsed_comment_attrs: bool = Field( + True, + title="Parsed comment attributes" + ) + representation_add_range: bool = 
Field( + True, + title="Add range to representation name" + ) + representation_tags: list[str] = Field( + default_factory=list, + title="Representation tags" + ) + load_to_batch_group: bool = Field( + True, + title="Load to batch group reel" + ) + batch_group_loader_name: str = Field( + "LoadClipBatch", + title="Use loader name" + ) + filter_path_regex: str = Field( + ".*", + title="Regex in clip path" + ) + + +class ExtractProductResourcesModel(BaseSettingsModel): + _isGroup = True + + keep_original_representation: bool = Field( + False, + title="Publish clip's original media" + ) + export_presets_mapping: list[ExportPresetsMappingModel] = Field( + default_factory=list, + title="Export presets mapping" + ) + + +class IntegrateBatchGroupModel(BaseSettingsModel): + enabled: bool = Field( + False, + title="Enabled" + ) + + +class PublishPuginsModel(BaseSettingsModel): + CollectTimelineInstances: CollectTimelineInstancesModel = Field( + default_factory=CollectTimelineInstancesModel, + title="Collect Timeline Instances" + ) + + ExtractProductResources: ExtractProductResourcesModel = Field( + default_factory=ExtractProductResourcesModel, + title="Extract Product Resources" + ) + + IntegrateBatchGroup: IntegrateBatchGroupModel = Field( + default_factory=IntegrateBatchGroupModel, + title="IntegrateBatchGroup" + ) + + +DEFAULT_PUBLISH_SETTINGS = { + "CollectTimelineInstances": { + "xml_preset_attrs_from_comments": [ + { + "name": "width", + "type": "number" + }, + { + "name": "height", + "type": "number" + }, + { + "name": "pixelRatio", + "type": "float" + }, + { + "name": "resizeType", + "type": "string" + }, + { + "name": "resizeFilter", + "type": "string" + } + ], + "add_tasks": [ + { + "name": "compositing", + "type": "Compositing", + "create_batch_group": True + } + ] + }, + "ExtractProductResources": { + "keep_original_representation": False, + "export_presets_mapping": [ + { + "name": "exr16fpdwaa", + "active": True, + "export_type": "File Sequence", + "ext": "exr", + "xml_preset_file": "OpenEXR (16-bit fp DWAA).xml", + "colorspace_out": "ACES - ACEScg", + "xml_preset_dir": "", + "parsed_comment_attrs": True, + "representation_add_range": True, + "representation_tags": [], + "load_to_batch_group": True, + "batch_group_loader_name": "LoadClipBatch", + "filter_path_regex": ".*" + } + ] + }, + "IntegrateBatchGroup": { + "enabled": False + } +} diff --git a/server_addon/flame/server/version.py b/server_addon/flame/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/flame/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/fusion/server/__init__.py b/server_addon/fusion/server/__init__.py new file mode 100644 index 0000000000..4d43f28812 --- /dev/null +++ b/server_addon/fusion/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import FusionSettings, DEFAULT_VALUES + + +class FusionAddon(BaseServerAddon): + name = "fusion" + title = "Fusion" + version = __version__ + settings_model: Type[FusionSettings] = FusionSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/fusion/server/imageio.py b/server_addon/fusion/server/imageio.py new file mode 100644 index 0000000000..fe867af424 --- /dev/null +++ b/server_addon/fusion/server/imageio.py @@ -0,0 +1,48 @@ +from 
pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class FusionImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/fusion/server/settings.py b/server_addon/fusion/server/settings.py new file mode 100644 index 0000000000..92fb362c66 --- /dev/null +++ b/server_addon/fusion/server/settings.py @@ -0,0 +1,95 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, +) + +from .imageio import FusionImageIOModel + + +class CopyFusionSettingsModel(BaseSettingsModel): + copy_path: str = Field("", title="Local Fusion profile directory") + copy_status: bool = Field(title="Copy profile on first launch") + force_sync: bool = Field(title="Resync profile on each launch") + + +def _create_saver_instance_attributes_enum(): + return [ + { + "value": "reviewable", + "label": "Reviewable" + }, + { + "value": "farm_rendering", + "label": "Farm rendering" + } + ] + + +class CreateSaverPluginModel(BaseSettingsModel): + _isGroup = True + temp_rendering_path_template: str = Field( + "", title="Temporary rendering path template" + ) + default_variants: list[str] = Field( + default_factory=list, + title="Default variants" + ) + instance_attributes: list[str] = Field( + default_factory=list, + enum_resolver=_create_saver_instance_attributes_enum, + title="Instance attributes" + ) + + +class CreatPluginsModel(BaseSettingsModel): + CreateSaver: CreateSaverPluginModel = Field( + default_factory=CreateSaverPluginModel, + title="Create Saver" + ) + + +class FusionSettings(BaseSettingsModel): + imageio: FusionImageIOModel = Field( + default_factory=FusionImageIOModel, + title="Color Management (ImageIO)" + ) + copy_fusion_settings: CopyFusionSettingsModel = Field( + default_factory=CopyFusionSettingsModel, + title="Local Fusion profile settings" + ) + create: CreatPluginsModel = Field( + default_factory=CreatPluginsModel, + title="Creator plugins" + ) + + +DEFAULT_VALUES = { + "imageio": { + "ocio_config": { + "enabled": False, + "filepath": [] + }, + "file_rules": { + "enabled": False, + "rules": [] + } + }, + "copy_fusion_settings": { + "copy_path": "~/.openpype/hosts/fusion/profiles", + "copy_status": False, + "force_sync": False + }, + "create": { + "CreateSaver": { + "temp_rendering_path_template": "{workdir}/renders/fusion/{product[name]}/{product[name]}.{frame}.{ext}", + "default_variants": [ + "Main", + "Mask" + ], + 
"instance_attributes": [ + "reviewable", + "farm_rendering" + ] + } + } +} diff --git a/server_addon/fusion/server/version.py b/server_addon/fusion/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/fusion/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/harmony/LICENSE b/server_addon/harmony/LICENSE new file mode 100644 index 0000000000..d645695673 --- /dev/null +++ b/server_addon/harmony/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. 
The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. 
+ + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/server_addon/harmony/README.md b/server_addon/harmony/README.md new file mode 100644 index 0000000000..d971fa39f9 --- /dev/null +++ b/server_addon/harmony/README.md @@ -0,0 +1,4 @@ +ToonBoom Harmony Addon +=============== + +Integration with ToonBoom Harmony. diff --git a/server_addon/harmony/server/__init__.py b/server_addon/harmony/server/__init__.py new file mode 100644 index 0000000000..4ecda1989e --- /dev/null +++ b/server_addon/harmony/server/__init__.py @@ -0,0 +1,16 @@ +from ayon_server.addons import BaseServerAddon + +from .settings import HarmonySettings, DEFAULT_HARMONY_SETTING +from .version import __version__ + + +class Harmony(BaseServerAddon): + name = "harmony" + title = "Harmony" + version = __version__ + + settings_model = HarmonySettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_HARMONY_SETTING) diff --git a/server_addon/harmony/server/settings/__init__.py b/server_addon/harmony/server/settings/__init__.py new file mode 100644 index 0000000000..4a8118d4da --- /dev/null +++ b/server_addon/harmony/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + HarmonySettings, + DEFAULT_HARMONY_SETTING, +) + + +__all__ = ( + "HarmonySettings", + "DEFAULT_HARMONY_SETTING", +) diff --git a/server_addon/harmony/server/settings/imageio.py b/server_addon/harmony/server/settings/imageio.py new file mode 100644 index 0000000000..4e01fae3d4 --- /dev/null +++ b/server_addon/harmony/server/settings/imageio.py @@ -0,0 +1,55 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + 
@validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIORemappingRulesModel(BaseSettingsModel): + host_native_name: str = Field( + title="Application native colorspace name" + ) + ocio_name: str = Field(title="OCIO colorspace name") + + +class HarmonyImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/harmony/server/settings/main.py b/server_addon/harmony/server/settings/main.py new file mode 100644 index 0000000000..0936bc1fc7 --- /dev/null +++ b/server_addon/harmony/server/settings/main.py @@ -0,0 +1,63 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + +from .imageio import HarmonyImageIOModel +from .publish_plugins import HarmonyPublishPlugins + + +class HarmonySettings(BaseSettingsModel): + """Harmony Project Settings.""" + + imageio: HarmonyImageIOModel = Field( + default_factory=HarmonyImageIOModel, + title="OCIO config" + ) + publish: HarmonyPublishPlugins = Field( + default_factory=HarmonyPublishPlugins, + title="Publish plugins" + ) + + +DEFAULT_HARMONY_SETTING = { + "load": { + "ImageSequenceLoader": { + "family": [ + "shot", + "render", + "image", + "plate", + "reference" + ], + "representations": [ + "jpeg", + "png", + "jpg" + ] + } + }, + "publish": { + "CollectPalettes": { + "allowed_tasks": [ + ".*" + ] + }, + "ValidateAudio": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateContainers": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateSceneSettings": { + "enabled": True, + "optional": True, + "active": True, + "frame_check_filter": [], + "skip_resolution_check": [], + "skip_timelines_check": [] + } + } +} diff --git a/server_addon/harmony/server/settings/publish_plugins.py b/server_addon/harmony/server/settings/publish_plugins.py new file mode 100644 index 0000000000..bdaec2bbd4 --- /dev/null +++ b/server_addon/harmony/server/settings/publish_plugins.py @@ -0,0 +1,76 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class CollectPalettesPlugin(BaseSettingsModel): + """Set regular expressions to filter triggering on specific task names. '.*' means on all.""" # noqa + + allowed_tasks: list[str] = Field( + default_factory=list, + title="Allowed tasks" + ) + + +class ValidateAudioPlugin(BaseSettingsModel): + """Check if scene contains audio track.""" # + _isGroup = True + enabled: bool = True + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + + +class ValidateContainersPlugin(BaseSettingsModel): + """Check if loaded containers in the scene are the latest versions.""" + _isGroup = True + enabled: bool = True + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + + +class ValidateSceneSettingsPlugin(BaseSettingsModel): + """Validate if FrameStart, FrameEnd and Resolution match shot data in DB.
+ Use regular expressions to limit validations only on particular asset + or task names.""" + _isGroup = True + enabled: bool = True + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + + frame_check_filter: list[str] = Field( + default_factory=list, + title="Skip Frame check for Assets with name containing" + ) + + skip_resolution_check: list[str] = Field( + default_factory=list, + title="Skip Resolution Check for Tasks" + ) + + skip_timelines_check: list[str] = Field( + default_factory=list, + title="Skip Timeline Check for Tasks" + ) + + +class HarmonyPublishPlugins(BaseSettingsModel): + + CollectPalettes: CollectPalettesPlugin = Field( + title="Collect Palettes", + default_factory=CollectPalettesPlugin, + ) + + ValidateAudio: ValidateAudioPlugin = Field( + title="Validate Audio", + default_factory=ValidateAudioPlugin, + ) + + ValidateContainers: ValidateContainersPlugin = Field( + title="Validate Containers", + default_factory=ValidateContainersPlugin, + ) + + ValidateSceneSettings: ValidateSceneSettingsPlugin = Field( + title="Validate Scene Settings", + default_factory=ValidateSceneSettingsPlugin, + ) diff --git a/server_addon/harmony/server/version.py b/server_addon/harmony/server/version.py new file mode 100644 index 0000000000..df0c92f1e2 --- /dev/null +++ b/server_addon/harmony/server/version.py @@ -0,0 +1,3 @@ +# -*- coding: utf-8 -*- +"""Package declaring addon version.""" +__version__ = "0.1.2" diff --git a/server_addon/hiero/server/__init__.py b/server_addon/hiero/server/__init__.py new file mode 100644 index 0000000000..d0f9bcefc3 --- /dev/null +++ b/server_addon/hiero/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import HieroSettings, DEFAULT_VALUES + + +class HieroAddon(BaseServerAddon): + name = "hiero" + title = "Hiero" + version = __version__ + settings_model: Type[HieroSettings] = HieroSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/hiero/server/settings/__init__.py b/server_addon/hiero/server/settings/__init__.py new file mode 100644 index 0000000000..246c8203e9 --- /dev/null +++ b/server_addon/hiero/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + HieroSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "HieroSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/hiero/server/settings/common.py b/server_addon/hiero/server/settings/common.py new file mode 100644 index 0000000000..eb4791f93e --- /dev/null +++ b/server_addon/hiero/server/settings/common.py @@ -0,0 +1,98 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel +from ayon_server.types import ( + ColorRGBA_float, + ColorRGB_uint8 +) + + +class Vector2d(BaseSettingsModel): + _layout = "compact" + + x: float = Field(1.0, title="X") + y: float = Field(1.0, title="Y") + + +class Vector3d(BaseSettingsModel): + _layout = "compact" + + x: float = Field(1.0, title="X") + y: float = Field(1.0, title="Y") + z: float = Field(1.0, title="Z") + + +def formatable_knob_type_enum(): + return [ + {"value": "text", "label": "Text"}, + {"value": "number", "label": "Number"}, + {"value": "decimal_number", "label": "Decimal number"}, + {"value": "2d_vector", "label": "2D vector"}, + # "3D vector" + ] + + +class Formatable(BaseSettingsModel): + _layout = 
"compact" + + template: str = Field( + "", + placeholder="""{{key}} or {{key}};{{key}}""", + title="Template" + ) + to_type: str = Field( + "Text", + title="To Knob type", + enum_resolver=formatable_knob_type_enum, + ) + + +knob_types_enum = [ + {"value": "text", "label": "Text"}, + {"value": "formatable", "label": "Formate from template"}, + {"value": "color_gui", "label": "Color GUI"}, + {"value": "boolean", "label": "Boolean"}, + {"value": "number", "label": "Number"}, + {"value": "decimal_number", "label": "Decimal number"}, + {"value": "vector_2d", "label": "2D vector"}, + {"value": "vector_3d", "label": "3D vector"}, + {"value": "color", "label": "Color"} +] + + +class KnobModel(BaseSettingsModel): + _layout = "expanded" + + type: str = Field( + title="Type", + description="Switch between different knob types", + enum_resolver=lambda: knob_types_enum, + conditionalEnum=True + ) + name: str = Field( + title="Name", + placeholder="Name" + ) + text: str = Field("", title="Value") + color_gui: ColorRGB_uint8 = Field( + (0, 0, 255), + title="RGB Uint8", + ) + boolean: bool = Field(False, title="Value") + number: int = Field(0, title="Value") + decimal_number: float = Field(0.0, title="Value") + vector_2d: Vector2d = Field( + default_factory=Vector2d, + title="Value" + ) + vector_3d: Vector3d = Field( + default_factory=Vector3d, + title="Value" + ) + color: ColorRGBA_float = Field( + (0.0, 0.0, 1.0, 1.0), + title="RGBA Float" + ) + formatable: Formatable = Field( + default_factory=Formatable, + title="Value" + ) diff --git a/server_addon/hiero/server/settings/create_plugins.py b/server_addon/hiero/server/settings/create_plugins.py new file mode 100644 index 0000000000..daec4a7cea --- /dev/null +++ b/server_addon/hiero/server/settings/create_plugins.py @@ -0,0 +1,97 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class CreateShotClipModels(BaseSettingsModel): + hierarchy: str = Field( + "{folder}/{sequence}", + title="Shot parent hierarchy", + section="Shot Hierarchy And Rename Settings" + ) + clipRename: bool = Field( + True, + title="Rename clips" + ) + clipName: str = Field( + "{track}{sequence}{shot}", + title="Clip name template" + ) + countFrom: int = Field( + 10, + title="Count sequence from" + ) + countSteps: int = Field( + 10, + title="Stepping number" + ) + + folder: str = Field( + "shots", + title="{folder}", + section="Shot Template Keywords" + ) + episode: str = Field( + "ep01", + title="{episode}" + ) + sequence: str = Field( + "sq01", + title="{sequence}" + ) + track: str = Field( + "{_track_}", + title="{track}" + ) + shot: str = Field( + "sh###", + title="{shot}" + ) + + vSyncOn: bool = Field( + False, + title="Enable Vertical Sync", + section="Vertical Synchronization Of Attributes" + ) + + workfileFrameStart: int = Field( + 1001, + title="Workfiles Start Frame", + section="Shot Attributes" + ) + handleStart: int = Field( + 10, + title="Handle start (head)" + ) + handleEnd: int = Field( + 10, + title="Handle end (tail)" + ) + + +class CreatorPluginsSettings(BaseSettingsModel): + CreateShotClip: CreateShotClipModels = Field( + default_factory=CreateShotClipModels, + title="Create Shot Clip" + ) + + +DEFAULT_CREATE_SETTINGS = { + "create": { + "CreateShotClip": { + "hierarchy": "{folder}/{sequence}", + "clipRename": True, + "clipName": "{track}{sequence}{shot}", + "countFrom": 10, + "countSteps": 10, + "folder": "shots", + "episode": "ep01", + "sequence": "sq01", + "track": "{_track_}", + "shot": "sh###", + "vSyncOn": False, + 
"workfileFrameStart": 1001, + "handleStart": 10, + "handleEnd": 10 + } + } +} diff --git a/server_addon/hiero/server/settings/filters.py b/server_addon/hiero/server/settings/filters.py new file mode 100644 index 0000000000..7e2702b3b7 --- /dev/null +++ b/server_addon/hiero/server/settings/filters.py @@ -0,0 +1,19 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + + +class PublishGUIFilterItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: bool = Field(True, title="Active") + + +class PublishGUIFiltersModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: list[PublishGUIFilterItemModel] = Field(default_factory=list) + + @validator("value") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value diff --git a/server_addon/hiero/server/settings/imageio.py b/server_addon/hiero/server/settings/imageio.py new file mode 100644 index 0000000000..f2c2728057 --- /dev/null +++ b/server_addon/hiero/server/settings/imageio.py @@ -0,0 +1,169 @@ +from pydantic import Field, validator + +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, +) + + +def ocio_configs_switcher_enum(): + return [ + {"value": "nuke-default", "label": "nuke-default"}, + {"value": "spi-vfx", "label": "spi-vfx"}, + {"value": "spi-anim", "label": "spi-anim"}, + {"value": "aces_0.1.1", "label": "aces_0.1.1"}, + {"value": "aces_0.7.1", "label": "aces_0.7.1"}, + {"value": "aces_1.0.1", "label": "aces_1.0.1"}, + {"value": "aces_1.0.3", "label": "aces_1.0.3"}, + {"value": "aces_1.1", "label": "aces_1.1"}, + {"value": "aces_1.2", "label": "aces_1.2"}, + {"value": "aces_1.3", "label": "aces_1.3"}, + {"value": "custom", "label": "custom"} + ] + + +class WorkfileColorspaceSettings(BaseSettingsModel): + """Hiero workfile colorspace preset. """ + """# TODO: enhance settings with host api: + we need to add mapping to resolve properly keys. 
+ Hiero is expecting camel case key names, + but for better code consistency we are using snake_case: + + ocio_config = ocioConfigName + working_space_name = workingSpace + int_16_name = sixteenBitLut + int_8_name = eightBitLut + float_name = floatLut + log_name = logLut + viewer_name = viewerLut + thumbnail_name = thumbnailLut + """ + + ocioConfigName: str = Field( + title="OpenColorIO Config", + description="Switch between OCIO configs", + enum_resolver=ocio_configs_switcher_enum, + conditionalEnum=True + ) + workingSpace: str = Field( + title="Working Space" + ) + viewerLut: str = Field( + title="Viewer" + ) + eightBitLut: str = Field( + title="8-bit files" + ) + sixteenBitLut: str = Field( + title="16-bit files" + ) + logLut: str = Field( + title="Log files" + ) + floatLut: str = Field( + title="Float files" + ) + thumbnailLut: str = Field( + title="Thumbnails" + ) + monitorOutLut: str = Field( + title="Monitor" + ) + + +class ClipColorspaceRulesItems(BaseSettingsModel): + _layout = "expanded" + + regex: str = Field("", title="Regex expression") + colorspace: str = Field("", title="Colorspace") + + +class RegexInputsModel(BaseSettingsModel): + inputs: list[ClipColorspaceRulesItems] = Field( + default_factory=list, + title="Inputs" + ) + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIOSettings(BaseSettingsModel): + """Hiero color management project settings. """ + _isGroup: bool = True + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) + workfile: WorkfileColorspaceSettings = Field( + default_factory=WorkfileColorspaceSettings, + title="Workfile" + ) + """# TODO: enhance settings with host api: + - old settings are using `regexInputs` key but we + need to rename to `regex_inputs` + - no need for `inputs` middle part.
It can stay + directly on `regex_inputs` + """ + regexInputs: RegexInputsModel = Field( + default_factory=RegexInputsModel, + title="Assign colorspace to clips via rules" + ) + + +DEFAULT_IMAGEIO_SETTINGS = { + "workfile": { + "ocioConfigName": "nuke-default", + "workingSpace": "linear", + "viewerLut": "sRGB", + "eightBitLut": "sRGB", + "sixteenBitLut": "sRGB", + "logLut": "Cineon", + "floatLut": "linear", + "thumbnailLut": "sRGB", + "monitorOutLut": "sRGB" + }, + "regexInputs": { + "inputs": [ + { + "regex": "[^-a-zA-Z0-9](plateRef).*(?=mp4)", + "colorspace": "sRGB" + } + ] + } +} diff --git a/server_addon/hiero/server/settings/loader_plugins.py b/server_addon/hiero/server/settings/loader_plugins.py new file mode 100644 index 0000000000..83b3564c2a --- /dev/null +++ b/server_addon/hiero/server/settings/loader_plugins.py @@ -0,0 +1,38 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class LoadClipModel(BaseSettingsModel): + enabled: bool = Field( + True, + title="Enabled" + ) + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + clip_name_template: str = Field( + title="Clip name template" + ) + + +class LoaderPuginsModel(BaseSettingsModel): + LoadClip: LoadClipModel = Field( + default_factory=LoadClipModel, + title="Load Clip" + ) + + +DEFAULT_LOADER_PLUGINS_SETTINGS = { + "LoadClip": { + "enabled": True, + "product_types": [ + "render2d", + "source", + "plate", + "render", + "review" + ], + "clip_name_template": "{folder[name]}_{product[name]}_{representation}" + } +} diff --git a/server_addon/hiero/server/settings/main.py b/server_addon/hiero/server/settings/main.py new file mode 100644 index 0000000000..47f8110c22 --- /dev/null +++ b/server_addon/hiero/server/settings/main.py @@ -0,0 +1,64 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + +from .imageio import ( + ImageIOSettings, + DEFAULT_IMAGEIO_SETTINGS +) +from .create_plugins import ( + CreatorPluginsSettings, + DEFAULT_CREATE_SETTINGS +) +from .loader_plugins import ( + LoaderPuginsModel, + DEFAULT_LOADER_PLUGINS_SETTINGS +) +from .publish_plugins import ( + PublishPuginsModel, + DEFAULT_PUBLISH_PLUGIN_SETTINGS +) +from .scriptsmenu import ( + ScriptsmenuSettings, + DEFAULT_SCRIPTSMENU_SETTINGS +) +from .filters import PublishGUIFilterItemModel + + +class HieroSettings(BaseSettingsModel): + """Nuke addon settings.""" + + imageio: ImageIOSettings = Field( + default_factory=ImageIOSettings, + title="Color Management (imageio)", + ) + + create: CreatorPluginsSettings = Field( + default_factory=CreatorPluginsSettings, + title="Creator Plugins", + ) + load: LoaderPuginsModel = Field( + default_factory=LoaderPuginsModel, + title="Loader plugins" + ) + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish plugins" + ) + scriptsmenu: ScriptsmenuSettings = Field( + default_factory=ScriptsmenuSettings, + title="Scripts Menu Definition", + ) + filters: list[PublishGUIFilterItemModel] = Field( + default_factory=list + ) + + +DEFAULT_VALUES = { + "imageio": DEFAULT_IMAGEIO_SETTINGS, + "create": DEFAULT_CREATE_SETTINGS, + "load": DEFAULT_LOADER_PLUGINS_SETTINGS, + "publish": DEFAULT_PUBLISH_PLUGIN_SETTINGS, + "scriptsmenu": DEFAULT_SCRIPTSMENU_SETTINGS, + "filters": [], +} diff --git a/server_addon/hiero/server/settings/publish_plugins.py b/server_addon/hiero/server/settings/publish_plugins.py new file mode 100644 index 0000000000..a85e62724b --- /dev/null +++ 
b/server_addon/hiero/server/settings/publish_plugins.py @@ -0,0 +1,48 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class CollectInstanceVersionModel(BaseSettingsModel): + enabled: bool = Field( + True, + title="Enabled" + ) + + +class ExtractReviewCutUpVideoModel(BaseSettingsModel): + enabled: bool = Field( + True, + title="Enabled" + ) + tags_addition: list[str] = Field( + default_factory=list, + title="Additional tags" + ) + + +class PublishPuginsModel(BaseSettingsModel): + CollectInstanceVersion: CollectInstanceVersionModel = Field( + default_factory=CollectInstanceVersionModel, + title="Collect Instance Version" + ) + """# TODO: enhance settings with host api: + Rename class name and plugin name + to match title (it makes more sense) + """ + ExtractReviewCutUpVideo: ExtractReviewCutUpVideoModel = Field( + default_factory=ExtractReviewCutUpVideoModel, + title="Extract Review Trim" + ) + + +DEFAULT_PUBLISH_PLUGIN_SETTINGS = { + "CollectInstanceVersion": { + "enabled": False, + }, + "ExtractReviewCutUpVideo": { + "enabled": True, + "tags_addition": [ + "review" + ] + } +} diff --git a/server_addon/hiero/server/settings/scriptsmenu.py b/server_addon/hiero/server/settings/scriptsmenu.py new file mode 100644 index 0000000000..51cb088298 --- /dev/null +++ b/server_addon/hiero/server/settings/scriptsmenu.py @@ -0,0 +1,41 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class ScriptsmenuSubmodel(BaseSettingsModel): + """Item Definition""" + _isGroup = True + + type: str = Field(title="Type") + command: str = Field(title="Command") + sourcetype: str = Field(title="Source Type") + title: str = Field(title="Title") + tooltip: str = Field(title="Tooltip") + + +class ScriptsmenuSettings(BaseSettingsModel): + """Hiero script menu project settings.""" + _isGroup = True + + """# TODO: enhance settings with host api: + - in api rename key `name` to `menu_name` + """ + name: str = Field(title="Menu name") + definition: list[ScriptsmenuSubmodel] = Field( + default_factory=list, + title="Definition", + description="Scriptmenu Items Definition") + + +DEFAULT_SCRIPTSMENU_SETTINGS = { + "name": "OpenPype Tools", + "definition": [ + { + "type": "action", + "sourcetype": "python", + "title": "OpenPype Docs", + "command": "import webbrowser;webbrowser.open(url='https://openpype.io/docs/artist_hosts_hiero')", + "tooltip": "Open the OpenPype Hiero user doc page" + } + ] +} diff --git a/server_addon/hiero/server/version.py b/server_addon/hiero/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/hiero/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/houdini/server/__init__.py b/server_addon/houdini/server/__init__.py new file mode 100644 index 0000000000..870ec2d0b7 --- /dev/null +++ b/server_addon/houdini/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import HoudiniSettings, DEFAULT_VALUES + + +class Houdini(BaseServerAddon): + name = "houdini" + title = "Houdini" + version = __version__ + settings_model: Type[HoudiniSettings] = HoudiniSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/houdini/server/settings/__init__.py b/server_addon/houdini/server/settings/__init__.py new file mode 100644 index 0000000000..9fd2678925
--- /dev/null +++ b/server_addon/houdini/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + HoudiniSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "HoudiniSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/houdini/server/settings/imageio.py b/server_addon/houdini/server/settings/imageio.py new file mode 100644 index 0000000000..88aa40ecd6 --- /dev/null +++ b/server_addon/houdini/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class HoudiniImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/houdini/server/settings/main.py b/server_addon/houdini/server/settings/main.py new file mode 100644 index 0000000000..fdb6838f5c --- /dev/null +++ b/server_addon/houdini/server/settings/main.py @@ -0,0 +1,79 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathModel, + MultiplatformPathListModel, +) + +from .imageio import HoudiniImageIOModel +from .publish_plugins import ( + PublishPluginsModel, + CreatePluginsModel, + DEFAULT_HOUDINI_PUBLISH_SETTINGS, + DEFAULT_HOUDINI_CREATE_SETTINGS +) + + +class ShelfToolsModel(BaseSettingsModel): + name: str = Field(title="Name") + help: str = Field(title="Help text") + script: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel, + title="Script Path " + ) + icon: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel, + title="Icon Path " + ) + + +class ShelfDefinitionModel(BaseSettingsModel): + _layout = "expanded" + shelf_name: str = Field(title="Shelf name") + tools_list: list[ShelfToolsModel] = Field( + default_factory=list, + title="Shelf Tools" + ) + + +class ShelvesModel(BaseSettingsModel): + _layout = "expanded" + shelf_set_name: str = Field(title="Shelfs set name") + + shelf_set_source_path: MultiplatformPathListModel = Field( + default_factory=MultiplatformPathListModel, + title="Shelf Set Path (optional)" + ) + + shelf_definition: list[ShelfDefinitionModel] = Field( + default_factory=list, + title="Shelf Definitions" + ) + + +class HoudiniSettings(BaseSettingsModel): + imageio: HoudiniImageIOModel = Field( + default_factory=HoudiniImageIOModel, + title="Color Management (ImageIO)" + ) + shelves: list[ShelvesModel] = Field( + default_factory=list, + title="Houdini Scripts Shelves", + ) + + publish: PublishPluginsModel = 
Field( + default_factory=PublishPluginsModel, + title="Publish Plugins", + ) + + create: CreatePluginsModel = Field( + default_factory=CreatePluginsModel, + title="Creator Plugins", + ) + + +DEFAULT_VALUES = { + "shelves": [], + "create": DEFAULT_HOUDINI_CREATE_SETTINGS, + "publish": DEFAULT_HOUDINI_PUBLISH_SETTINGS +} diff --git a/server_addon/houdini/server/settings/publish_plugins.py b/server_addon/houdini/server/settings/publish_plugins.py new file mode 100644 index 0000000000..ca5d0a4ea5 --- /dev/null +++ b/server_addon/houdini/server/settings/publish_plugins.py @@ -0,0 +1,150 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +# Creator Plugins +class CreatorModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + defaults: list[str] = Field(title="Default Products") + + +class CreateArnoldAssModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + defaults: list[str] = Field(title="Default Products") + ext: str = Field(title="Extension") + + +class CreatePluginsModel(BaseSettingsModel): + CreateArnoldAss: CreateArnoldAssModel = Field( + default_factory=CreateArnoldAssModel, + title="Create Arnold Ass") + CreateAlembicCamera: CreatorModel = Field( + default_factory=CreatorModel, + title="Create Alembic Camera") + CreateCompositeSequence: CreatorModel = Field( + default_factory=CreatorModel, + title="Create Composite Sequence") + CreatePointCache: CreatorModel = Field( + default_factory=CreatorModel, + title="Create Point Cache") + CreateRedshiftROP: CreatorModel = Field( + default_factory=CreatorModel, + title="Create RedshiftROP") + CreateRemotePublish: CreatorModel = Field( + default_factory=CreatorModel, + title="Create Remote Publish") + CreateVDBCache: CreatorModel = Field( + default_factory=CreatorModel, + title="Create VDB Cache") + CreateUSD: CreatorModel = Field( + default_factory=CreatorModel, + title="Create USD") + CreateUSDModel: CreatorModel = Field( + default_factory=CreatorModel, + title="Create USD model") + USDCreateShadingWorkspace: CreatorModel = Field( + default_factory=CreatorModel, + title="Create USD shading workspace") + CreateUSDRender: CreatorModel = Field( + default_factory=CreatorModel, + title="Create USD render") + + +DEFAULT_HOUDINI_CREATE_SETTINGS = { + "CreateArnoldAss": { + "enabled": True, + "defaults": [], + "ext": ".ass" + }, + "CreateAlembicCamera": { + "enabled": True, + "defaults": [] + }, + "CreateCompositeSequence": { + "enabled": True, + "defaults": [] + }, + "CreatePointCache": { + "enabled": True, + "defaults": [] + }, + "CreateRedshiftROP": { + "enabled": True, + "defaults": [] + }, + "CreateRemotePublish": { + "enabled": True, + "defaults": [] + }, + "CreateVDBCache": { + "enabled": True, + "defaults": [] + }, + "CreateUSD": { + "enabled": False, + "defaults": [] + }, + "CreateUSDModel": { + "enabled": False, + "defaults": [] + }, + "USDCreateShadingWorkspace": { + "enabled": False, + "defaults": [] + }, + "CreateUSDRender": { + "enabled": False, + "defaults": [] + } +} + + +# Publish Plugins +class ValidateWorkfilePathsModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + node_types: list[str] = Field( + default_factory=list, + title="Node Types" + ) + prohibited_vars: list[str] = Field( + default_factory=list, + title="Prohibited Variables" + ) + + +class ValidateContainersModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool =
Field(title="Active") + + +class PublishPluginsModel(BaseSettingsModel): + ValidateWorkfilePaths: ValidateWorkfilePathsModel = Field( + default_factory=ValidateWorkfilePathsModel, + title="Validate workfile paths settings.") + ValidateContainers: ValidateContainersModel = Field( + default_factory=ValidateContainersModel, + title="Validate Latest Containers.") + + +DEFAULT_HOUDINI_PUBLISH_SETTINGS = { + "ValidateWorkfilePaths": { + "enabled": True, + "optional": True, + "node_types": [ + "file", + "alembic" + ], + "prohibited_vars": [ + "$HIP", + "$JOB" + ] + }, + "ValidateContainers": { + "enabled": True, + "optional": True, + "active": True + } +} diff --git a/server_addon/houdini/server/version.py b/server_addon/houdini/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/houdini/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/kitsu/server/__init__.py b/server_addon/kitsu/server/__init__.py new file mode 100644 index 0000000000..69cf812dea --- /dev/null +++ b/server_addon/kitsu/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import KitsuSettings, DEFAULT_VALUES + + +class KitsuAddon(BaseServerAddon): + name = "kitsu" + title = "Kitsu" + version = __version__ + settings_model: Type[KitsuSettings] = KitsuSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/kitsu/server/settings.py b/server_addon/kitsu/server/settings.py new file mode 100644 index 0000000000..a4d10d889d --- /dev/null +++ b/server_addon/kitsu/server/settings.py @@ -0,0 +1,112 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class EntityPattern(BaseSettingsModel): + episode: str = Field(title="Episode") + sequence: str = Field(title="Sequence") + shot: str = Field(title="Shot") + + +def _status_change_cond_enum(): + return [ + {"value": "equal", "label": "Equal"}, + {"value": "not_equal", "label": "Not equal"} + ] + + +class StatusChangeCondition(BaseSettingsModel): + condition: str = Field( + "equal", + enum_resolver=_status_change_cond_enum, + title="Condition" + ) + short_name: str = Field("", title="Short name") + + +class StatusChangeProductTypeRequirementModel(BaseSettingsModel): + condition: str = Field( + "equal", + enum_resolver=_status_change_cond_enum, + title="Condition" + ) + product_type: str = Field("", title="Product type") + + +class StatusChangeConditionsModel(BaseSettingsModel): + status_conditions: list[StatusChangeCondition] = Field( + default_factory=list, + title="Status conditions" + ) + product_type_requirements: list[StatusChangeProductTypeRequirementModel] = Field( + default_factory=list, + title="Product type requirements") + + +class CustomCommentTemplateModel(BaseSettingsModel): + enabled: bool = Field(True) + comment_template: str = Field("", title="Custom comment") + + +class IntegrateKitsuNotes(BaseSettingsModel): + """Kitsu supports markdown and here you can create a custom comment template. + + You can use data from your publishing instance's data. 
+ """ + + set_status_note: bool = Field(title="Set status on note") + note_status_shortname: str = Field(title="Note shortname") + status_change_conditions: StatusChangeConditionsModel = Field( + default_factory=StatusChangeConditionsModel, + title="Status change conditions" + ) + custom_comment_template: CustomCommentTemplateModel = Field( + default_factory=CustomCommentTemplateModel, + title="Custom Comment Template", + ) + + +class PublishPlugins(BaseSettingsModel): + IntegrateKitsuNote: IntegrateKitsuNotes = Field( + default_factory=IntegrateKitsuNotes, + title="Integrate Kitsu Note" + ) + + +class KitsuSettings(BaseSettingsModel): + server: str = Field( + "", + title="Kitsu Server", + scope=["studio"], + ) + entities_naming_pattern: EntityPattern = Field( + default_factory=EntityPattern, + title="Entities naming pattern", + ) + publish: PublishPlugins = Field( + default_factory=PublishPlugins, + title="Publish plugins", + ) + + +DEFAULT_VALUES = { + "entities_naming_pattern": { + "episode": "E##", + "sequence": "SQ##", + "shot": "SH##" + }, + "publish": { + "IntegrateKitsuNote": { + "set_status_note": False, + "note_status_shortname": "wfa", + "status_change_conditions": { + "status_conditions": [], + "product_type_requirements": [] + }, + "custom_comment_template": { + "enabled": False, + "comment_template": "{comment}\n\n| | |\n|--|--|\n| version| `{version}` |\n| product type | `{product[type]}` |\n| name | `{name}` |" + } + } + } +} diff --git a/server_addon/kitsu/server/version.py b/server_addon/kitsu/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/kitsu/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/maya/LICENCE b/server_addon/maya/LICENCE new file mode 100644 index 0000000000..261eeb9e9f --- /dev/null +++ b/server_addon/maya/LICENCE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). 
+ + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
diff --git a/server_addon/maya/README.md b/server_addon/maya/README.md new file mode 100644 index 0000000000..c65c09fba0 --- /dev/null +++ b/server_addon/maya/README.md @@ -0,0 +1,4 @@ +Maya Integration Addon +====================== + +WIP diff --git a/server_addon/maya/server/__init__.py b/server_addon/maya/server/__init__.py new file mode 100644 index 0000000000..8784427dcf --- /dev/null +++ b/server_addon/maya/server/__init__.py @@ -0,0 +1,16 @@ +"""Maya Addon Module""" +from ayon_server.addons import BaseServerAddon + +from .settings.main import MayaSettings, DEFAULT_MAYA_SETTING +from .version import __version__ + + +class MayaAddon(BaseServerAddon): + name = "maya" + title = "Maya" + version = __version__ + settings_model = MayaSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_MAYA_SETTING) diff --git a/common/ayon_common/connection/__init__.py b/server_addon/maya/server/settings/__init__.py similarity index 100% rename from common/ayon_common/connection/__init__.py rename to server_addon/maya/server/settings/__init__.py diff --git a/server_addon/maya/server/settings/creators.py b/server_addon/maya/server/settings/creators.py new file mode 100644 index 0000000000..291b3ec660 --- /dev/null +++ b/server_addon/maya/server/settings/creators.py @@ -0,0 +1,406 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class CreateLookModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + make_tx: bool = Field(title="Make tx files") + rs_tex: bool = Field(title="Make Redshift texture files") + defaults: list[str] = Field( + default_factory=["Main"], title="Default Products" + ) + + +class BasicCreatorModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + defaults: list[str] = Field( + default_factory=list, + title="Default Products" + ) + + +class CreateUnrealStaticMeshModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + defaults: list[str] = Field( + default_factory=["", "_Main"], + title="Default Products" + ) + static_mesh_prefixes: str = Field("S", title="Static Mesh Prefix") + collision_prefixes: list[str] = Field( + default_factory=["UBX", "UCP", "USP", "UCX"], + title="Collision Prefixes" + ) + + +class CreateUnrealSkeletalMeshModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + defaults: list[str] = Field(default_factory=[], title="Default Products") + joint_hints: str = Field("jnt_org", title="Joint root hint") + + +class CreateMultiverseLookModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + publish_mip_map: bool = Field(title="publish_mip_map") + + +class BasicExportMeshModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + write_color_sets: bool = Field(title="Write Color Sets") + write_face_sets: bool = Field(title="Write Face Sets") + defaults: list[str] = Field( + default_factory=list, + title="Default Products" + ) + + +class CreateAnimationModel(BaseSettingsModel): + write_color_sets: bool = Field(title="Write Color Sets") + write_face_sets: bool = Field(title="Write Face Sets") + include_parent_hierarchy: bool = Field( + title="Include Parent Hierarchy") + include_user_defined_attributes: bool = Field( + title="Include User Defined Attributes") + defaults: list[str] = Field( + default_factory=list, + title="Default Products" + ) + + +class CreatePointCacheModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + write_color_sets: bool = Field(title="Write 
Color Sets") + write_face_sets: bool = Field(title="Write Face Sets") + include_user_defined_attributes: bool = Field( + title="Include User Defined Attributes" + ) + defaults: list[str] = Field( + default_factory=["Main"], + title="Default Products" + ) + + +class CreateProxyAlembicModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + write_color_sets: bool = Field(title="Write Color Sets") + write_face_sets: bool = Field(title="Write Face Sets") + defaults: list[str] = Field( + default_factory=["Main"], + title="Default Products" + ) + + +class CreateAssModel(BasicCreatorModel): + expandProcedurals: bool = Field(title="Expand Procedurals") + motionBlur: bool = Field(title="Motion Blur") + motionBlurKeys: int = Field(2, title="Motion Blur Keys") + motionBlurLength: float = Field(0.5, title="Motion Blur Length") + maskOptions: bool = Field(title="Mask Options") + maskCamera: bool = Field(title="Mask Camera") + maskLight: bool = Field(title="Mask Light") + maskShape: bool = Field(title="Mask Shape") + maskShader: bool = Field(title="Mask Shader") + maskOverride: bool = Field(title="Mask Override") + maskDriver: bool = Field(title="Mask Driver") + maskFilter: bool = Field(title="Mask Filter") + maskColor_manager: bool = Field(title="Mask Color Manager") + maskOperator: bool = Field(title="Mask Operator") + + +class CreateReviewModel(BasicCreatorModel): + useMayaTimeline: bool = Field(title="Use Maya Timeline for Frame Range.") + + +class CreateVrayProxyModel(BaseSettingsModel): + enabled: bool = Field(True) + vrmesh: bool = Field(title="VrMesh") + alembic: bool = Field(title="Alembic") + defaults: list[str] = Field(default_factory=list, title="Default Products") + + +class CreatorsModel(BaseSettingsModel): + CreateLook: CreateLookModel = Field( + default_factory=CreateLookModel, + title="Create Look" + ) + CreateRender: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Render" + ) + # "-" is not compatible in the new model + CreateUnrealStaticMesh: CreateUnrealStaticMeshModel = Field( + default_factory=CreateUnrealStaticMeshModel, + title="Create Unreal_Static Mesh" + ) + # "-" is not compatible in the new model + CreateUnrealSkeletalMesh: CreateUnrealSkeletalMeshModel = Field( + default_factory=CreateUnrealSkeletalMeshModel, + title="Create Unreal_Skeletal Mesh" + ) + CreateMultiverseLook: CreateMultiverseLookModel = Field( + default_factory=CreateMultiverseLookModel, + title="Create Multiverse Look" + ) + CreateAnimation: CreateAnimationModel = Field( + default_factory=CreateAnimationModel, + title="Create Animation" + ) + CreateModel: BasicExportMeshModel = Field( + default_factory=BasicExportMeshModel, + title="Create Model" + ) + CreatePointCache: CreatePointCacheModel = Field( + default_factory=CreatePointCacheModel, + title="Create Point Cache" + ) + CreateProxyAlembic: CreateProxyAlembicModel = Field( + default_factory=CreateProxyAlembicModel, + title="Create Proxy Alembic" + ) + CreateMultiverseUsd: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Multiverse USD" + ) + CreateMultiverseUsdComp: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Multiverse USD Composition" + ) + CreateMultiverseUsdOver: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Multiverse USD Override" + ) + CreateAss: CreateAssModel = Field( + default_factory=CreateAssModel, + title="Create Ass" + ) + CreateAssembly: BasicCreatorModel = Field( + 
default_factory=BasicCreatorModel, + title="Create Assembly" + ) + CreateCamera: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Camera" + ) + CreateLayout: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Layout" + ) + CreateMayaScene: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Maya Scene" + ) + CreateRenderSetup: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Render Setup" + ) + CreateReview: CreateReviewModel = Field( + default_factory=CreateReviewModel, + title="Create Review" + ) + CreateRig: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Rig" + ) + CreateSetDress: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Set Dress" + ) + CreateVrayProxy: CreateVrayProxyModel = Field( + default_factory=CreateVrayProxyModel, + title="Create VRay Proxy" + ) + CreateVRayScene: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create VRay Scene" + ) + CreateYetiRig: BasicCreatorModel = Field( + default_factory=BasicCreatorModel, + title="Create Yeti Rig" + ) + + +DEFAULT_CREATORS_SETTINGS = { + "CreateLook": { + "enabled": True, + "make_tx": True, + "rs_tex": False, + "defaults": [ + "Main" + ] + }, + "CreateRender": { + "enabled": True, + "defaults": [ + "Main" + ] + }, + "CreateUnrealStaticMesh": { + "enabled": True, + "defaults": [ + "", + "_Main" + ], + "static_mesh_prefix": "S", + "collision_prefixes": [ + "UBX", + "UCP", + "USP", + "UCX" + ] + }, + "CreateUnrealSkeletalMesh": { + "enabled": True, + "defaults": [], + "joint_hints": "jnt_org" + }, + "CreateMultiverseLook": { + "enabled": True, + "publish_mip_map": True + }, + "CreateAnimation": { + "write_color_sets": False, + "write_face_sets": False, + "include_parent_hierarchy": False, + "include_user_defined_attributes": False, + "defaults": [ + "Main" + ] + }, + "CreateModel": { + "enabled": True, + "write_color_sets": False, + "write_face_sets": False, + "defaults": [ + "Main", + "Proxy", + "Sculpt" + ] + }, + "CreatePointCache": { + "enabled": True, + "write_color_sets": False, + "write_face_sets": False, + "include_user_defined_attributes": False, + "defaults": [ + "Main" + ] + }, + "CreateProxyAlembic": { + "enabled": True, + "write_color_sets": False, + "write_face_sets": False, + "defaults": [ + "Main" + ] + }, + "CreateMultiverseUsd": { + "enabled": True, + "defaults": [ + "Main" + ] + }, + "CreateMultiverseUsdComp": { + "enabled": True, + "defaults": [ + "Main" + ] + }, + "CreateMultiverseUsdOver": { + "enabled": True, + "defaults": [ + "Main" + ] + }, + "CreateAss": { + "enabled": True, + "defaults": [ + "Main" + ], + "expandProcedurals": False, + "motionBlur": True, + "motionBlurKeys": 2, + "motionBlurLength": 0.5, + "maskOptions": False, + "maskCamera": False, + "maskLight": False, + "maskShape": False, + "maskShader": False, + "maskOverride": False, + "maskDriver": False, + "maskFilter": False, + "maskColor_manager": False, + "maskOperator": False + }, + "CreateAssembly": { + "enabled": True, + "defaults": [ + "Main" + ] + }, + "CreateCamera": { + "enabled": True, + "defaults": [ + "Main" + ] + }, + "CreateLayout": { + "enabled": True, + "defaults": [ + "Main" + ] + }, + "CreateMayaScene": { + "enabled": True, + "defaults": [ + "Main" + ] + }, + "CreateRenderSetup": { + "enabled": True, + "defaults": [ + "Main" + ] + }, + "CreateReview": { + "enabled": True, + "defaults": [ + "Main" + ], + "useMayaTimeline": 
True + }, + "CreateRig": { + "enabled": True, + "defaults": [ + "Main", + "Sim", + "Cloth" + ] + }, + "CreateSetDress": { + "enabled": True, + "defaults": [ + "Main", + "Anim" + ] + }, + "CreateVrayProxy": { + "enabled": True, + "vrmesh": True, + "alembic": True, + "defaults": [ + "Main" + ] + }, + "CreateVRayScene": { + "enabled": True, + "defaults": [ + "Main" + ] + }, + "CreateYetiRig": { + "enabled": True, + "defaults": [ + "Main" + ] + } +} diff --git a/server_addon/maya/server/settings/explicit_plugins_loading.py b/server_addon/maya/server/settings/explicit_plugins_loading.py new file mode 100644 index 0000000000..394adb728f --- /dev/null +++ b/server_addon/maya/server/settings/explicit_plugins_loading.py @@ -0,0 +1,429 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class PluginsModel(BaseSettingsModel): + _layout = "expanded" + enabled: bool = Field(title="Enabled") + name: str = Field("", title="Name") + + +class ExplicitPluginsLoadingModel(BaseSettingsModel): + """Maya Explicit Plugins Loading.""" + _isGroup: bool = True + enabled: bool = Field(title="enabled") + plugins_to_load: list[PluginsModel] = Field( + default_factory=list, title="Plugins To Load" + ) + + +DEFAULT_EXPLITCIT_PLUGINS_LOADING_SETTINGS = { + "enabled": False, + "plugins_to_load": [ + { + "enabled": False, + "name": "AbcBullet" + }, + { + "enabled": True, + "name": "AbcExport" + }, + { + "enabled": True, + "name": "AbcImport" + }, + { + "enabled": False, + "name": "animImportExport" + }, + { + "enabled": False, + "name": "ArubaTessellator" + }, + { + "enabled": False, + "name": "ATFPlugin" + }, + { + "enabled": False, + "name": "atomImportExport" + }, + { + "enabled": False, + "name": "AutodeskPacketFile" + }, + { + "enabled": False, + "name": "autoLoader" + }, + { + "enabled": False, + "name": "bifmeshio" + }, + { + "enabled": False, + "name": "bifrostGraph" + }, + { + "enabled": False, + "name": "bifrostshellnode" + }, + { + "enabled": False, + "name": "bifrostvisplugin" + }, + { + "enabled": False, + "name": "blast2Cmd" + }, + { + "enabled": False, + "name": "bluePencil" + }, + { + "enabled": False, + "name": "Boss" + }, + { + "enabled": False, + "name": "bullet" + }, + { + "enabled": True, + "name": "cacheEvaluator" + }, + { + "enabled": False, + "name": "cgfxShader" + }, + { + "enabled": False, + "name": "cleanPerFaceAssignment" + }, + { + "enabled": False, + "name": "clearcoat" + }, + { + "enabled": False, + "name": "convertToComponentTags" + }, + { + "enabled": False, + "name": "curveWarp" + }, + { + "enabled": False, + "name": "ddsFloatReader" + }, + { + "enabled": True, + "name": "deformerEvaluator" + }, + { + "enabled": False, + "name": "dgProfiler" + }, + { + "enabled": False, + "name": "drawUfe" + }, + { + "enabled": False, + "name": "dx11Shader" + }, + { + "enabled": False, + "name": "fbxmaya" + }, + { + "enabled": False, + "name": "fltTranslator" + }, + { + "enabled": False, + "name": "freeze" + }, + { + "enabled": False, + "name": "Fur" + }, + { + "enabled": False, + "name": "gameFbxExporter" + }, + { + "enabled": False, + "name": "gameInputDevice" + }, + { + "enabled": False, + "name": "GamePipeline" + }, + { + "enabled": False, + "name": "gameVertexCount" + }, + { + "enabled": False, + "name": "geometryReport" + }, + { + "enabled": False, + "name": "geometryTools" + }, + { + "enabled": False, + "name": "glslShader" + }, + { + "enabled": True, + "name": "GPUBuiltInDeformer" + }, + { + "enabled": False, + "name": "gpuCache" + }, + { + "enabled": False, + 
"name": "hairPhysicalShader" + }, + { + "enabled": False, + "name": "ik2Bsolver" + }, + { + "enabled": False, + "name": "ikSpringSolver" + }, + { + "enabled": False, + "name": "invertShape" + }, + { + "enabled": False, + "name": "lges" + }, + { + "enabled": False, + "name": "lookdevKit" + }, + { + "enabled": False, + "name": "MASH" + }, + { + "enabled": False, + "name": "matrixNodes" + }, + { + "enabled": False, + "name": "mayaCharacterization" + }, + { + "enabled": False, + "name": "mayaHIK" + }, + { + "enabled": False, + "name": "MayaMuscle" + }, + { + "enabled": False, + "name": "mayaUsdPlugin" + }, + { + "enabled": False, + "name": "mayaVnnPlugin" + }, + { + "enabled": False, + "name": "melProfiler" + }, + { + "enabled": False, + "name": "meshReorder" + }, + { + "enabled": True, + "name": "modelingToolkit" + }, + { + "enabled": False, + "name": "mtoa" + }, + { + "enabled": False, + "name": "mtoh" + }, + { + "enabled": False, + "name": "nearestPointOnMesh" + }, + { + "enabled": True, + "name": "objExport" + }, + { + "enabled": False, + "name": "OneClick" + }, + { + "enabled": False, + "name": "OpenEXRLoader" + }, + { + "enabled": False, + "name": "pgYetiMaya" + }, + { + "enabled": False, + "name": "pgyetiVrayMaya" + }, + { + "enabled": False, + "name": "polyBoolean" + }, + { + "enabled": False, + "name": "poseInterpolator" + }, + { + "enabled": False, + "name": "quatNodes" + }, + { + "enabled": False, + "name": "randomizerDevice" + }, + { + "enabled": False, + "name": "redshift4maya" + }, + { + "enabled": True, + "name": "renderSetup" + }, + { + "enabled": False, + "name": "retargeterNodes" + }, + { + "enabled": False, + "name": "RokokoMotionLibrary" + }, + { + "enabled": False, + "name": "rotateHelper" + }, + { + "enabled": False, + "name": "sceneAssembly" + }, + { + "enabled": False, + "name": "shaderFXPlugin" + }, + { + "enabled": False, + "name": "shotCamera" + }, + { + "enabled": False, + "name": "snapTransform" + }, + { + "enabled": False, + "name": "stage" + }, + { + "enabled": True, + "name": "stereoCamera" + }, + { + "enabled": False, + "name": "stlTranslator" + }, + { + "enabled": False, + "name": "studioImport" + }, + { + "enabled": False, + "name": "Substance" + }, + { + "enabled": False, + "name": "substancelink" + }, + { + "enabled": False, + "name": "substancemaya" + }, + { + "enabled": False, + "name": "substanceworkflow" + }, + { + "enabled": False, + "name": "svgFileTranslator" + }, + { + "enabled": False, + "name": "sweep" + }, + { + "enabled": False, + "name": "testify" + }, + { + "enabled": False, + "name": "tiffFloatReader" + }, + { + "enabled": False, + "name": "timeSliderBookmark" + }, + { + "enabled": False, + "name": "Turtle" + }, + { + "enabled": False, + "name": "Type" + }, + { + "enabled": False, + "name": "udpDevice" + }, + { + "enabled": False, + "name": "ufeSupport" + }, + { + "enabled": False, + "name": "Unfold3D" + }, + { + "enabled": False, + "name": "VectorRender" + }, + { + "enabled": False, + "name": "vrayformaya" + }, + { + "enabled": False, + "name": "vrayvolumegrid" + }, + { + "enabled": False, + "name": "xgenToolkit" + }, + { + "enabled": False, + "name": "xgenVray" + } + ] +} diff --git a/server_addon/maya/server/settings/imageio.py b/server_addon/maya/server/settings/imageio.py new file mode 100644 index 0000000000..7512bfe253 --- /dev/null +++ b/server_addon/maya/server/settings/imageio.py @@ -0,0 +1,126 @@ +"""Providing models and setting values for image IO in Maya. + +Note: Names were changed to get rid of the versions in class names. 
+""" +from pydantic import Field, validator + +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ColorManagementPreferenceV2Model(BaseSettingsModel): + """Color Management Preference v2 (Maya 2022+).""" + _layout = "expanded" + + enabled: bool = Field(True, title="Use Color Management Preference v2") + + renderSpace: str = Field(title="Rendering Space") + displayName: str = Field(title="Display") + viewName: str = Field(title="View") + + +class ColorManagementPreferenceModel(BaseSettingsModel): + """Color Management Preference (legacy).""" + _layout = "expanded" + + renderSpace: str = Field(title="Rendering Space") + viewTransform: str = Field(title="Viewer Transform ") + + +class WorkfileImageIOModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + renderSpace: str = Field(title="Rendering Space") + displayName: str = Field(title="Display") + viewName: str = Field(title="View") + + +class ImageIOSettings(BaseSettingsModel): + """Maya color management project settings. + + Todo: What to do with color management preferences version? 
+ """ + + _isGroup: bool = True + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) + workfile: WorkfileImageIOModel = Field( + default_factory=WorkfileImageIOModel, + title="Workfile" + ) + # Deprecated + colorManagementPreference_v2: ColorManagementPreferenceV2Model = Field( + default_factory=ColorManagementPreferenceV2Model, + title="Color Management Preference v2 (Maya 2022+)" + ) + colorManagementPreference: ColorManagementPreferenceModel = Field( + default_factory=ColorManagementPreferenceModel, + title="Color Management Preference (legacy)" + ) + + +DEFAULT_IMAGEIO_SETTINGS = { + "activate_host_color_management": True, + "ocio_config": { + "override_global_config": False, + "filepath": [] + }, + "file_rules": { + "activate_host_rules": False, + "rules": [] + }, + "workfile": { + "enabled": False, + "renderSpace": "ACES - ACEScg", + "displayName": "ACES", + "viewName": "sRGB" + }, + "colorManagementPreference_v2": { + "enabled": True, + "renderSpace": "ACEScg", + "displayName": "sRGB", + "viewName": "ACES 1.0 SDR-video" + }, + "colorManagementPreference": { + "renderSpace": "scene-linear Rec 709/sRGB", + "viewTransform": "sRGB gamma" + } +} diff --git a/server_addon/maya/server/settings/include_handles.py b/server_addon/maya/server/settings/include_handles.py new file mode 100644 index 0000000000..3ba6aca66b --- /dev/null +++ b/server_addon/maya/server/settings/include_handles.py @@ -0,0 +1,30 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel, task_types_enum + + +class IncludeByTaskTypeModel(BaseSettingsModel): + task_type: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + include_handles: bool = Field(True, title="Include handles") + + +class IncludeHandlesModel(BaseSettingsModel): + """Maya dirmap settings.""" + # _layout = "expanded" + include_handles_default: bool = Field( + True, title="Include handles by default" + ) + per_task_type: list[IncludeByTaskTypeModel] = Field( + default_factory=list, + title="Include/exclude handles by task type" + ) + + +DEFAULT_INCLUDE_HANDLES = { + "include_handles_default": False, + "per_task_type": [] +} diff --git a/server_addon/maya/server/settings/loaders.py b/server_addon/maya/server/settings/loaders.py new file mode 100644 index 0000000000..60fc2a1cdd --- /dev/null +++ b/server_addon/maya/server/settings/loaders.py @@ -0,0 +1,115 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel +from ayon_server.types import ColorRGBA_uint8 + + +class ColorsSetting(BaseSettingsModel): + model: ColorRGBA_uint8 = Field( + (209, 132, 30, 1.0), title="Model:") + rig: ColorRGBA_uint8 = Field( + (59, 226, 235, 1.0), title="Rig:") + pointcache: ColorRGBA_uint8 = Field( + (94, 209, 30, 1.0), title="Pointcache:") + animation: ColorRGBA_uint8 = Field( + (94, 209, 30, 1.0), title="Animation:") + ass: ColorRGBA_uint8 = Field( + (249, 135, 53, 1.0), title="Arnold StandIn:") + camera: ColorRGBA_uint8 = Field( + (136, 114, 244, 1.0), title="Camera:") + fbx: ColorRGBA_uint8 = Field( + (215, 166, 255, 1.0), title="FBX:") + mayaAscii: ColorRGBA_uint8 = Field( + (67, 174, 255, 1.0), title="Maya Ascii:") + mayaScene: ColorRGBA_uint8 = Field( + (67, 174, 255, 1.0), title="Maya Scene:") + setdress: 
ColorRGBA_uint8 = Field( + (255, 250, 90, 1.0), title="Set Dress:") + layout: ColorRGBA_uint8 = Field(( + 255, 250, 90, 1.0), title="Layout:") + vdbcache: ColorRGBA_uint8 = Field( + (249, 54, 0, 1.0), title="VDB Cache:") + vrayproxy: ColorRGBA_uint8 = Field( + (255, 150, 12, 1.0), title="VRay Proxy:") + vrayscene_layer: ColorRGBA_uint8 = Field( + (255, 150, 12, 1.0), title="VRay Scene:") + yeticache: ColorRGBA_uint8 = Field( + (99, 206, 220, 1.0), title="Yeti Cache:") + yetiRig: ColorRGBA_uint8 = Field( + (0, 205, 125, 1.0), title="Yeti Rig:") + + +class ReferenceLoaderModel(BaseSettingsModel): + namespace: str = Field(title="Namespace") + group_name: str = Field(title="Group name") + display_handle: bool = Field(title="Display Handle On Load References") + + +class LoadersModel(BaseSettingsModel): + colors: ColorsSetting = Field( + default_factory=ColorsSetting, + title="Loaded Products Outliner Colors") + + reference_loader: ReferenceLoaderModel = Field( + default_factory=ReferenceLoaderModel, + title="Reference Loader" + ) + + +DEFAULT_LOADERS_SETTING = { + "colors": { + "model": [ + 209, 132, 30, 1.0 + ], + "rig": [ + 59, 226, 235, 1.0 + ], + "pointcache": [ + 94, 209, 30, 1.0 + ], + "animation": [ + 94, 209, 30, 1.0 + ], + "ass": [ + 249, 135, 53, 1.0 + ], + "camera": [ + 136, 114, 244, 1.0 + ], + "fbx": [ + 215, 166, 255, 1.0 + ], + "mayaAscii": [ + 67, 174, 255, 1.0 + ], + "mayaScene": [ + 67, 174, 255, 1.0 + ], + "setdress": [ + 255, 250, 90, 1.0 + ], + "layout": [ + 255, 250, 90, 1.0 + ], + "vdbcache": [ + 249, 54, 0, 1.0 + ], + "vrayproxy": [ + 255, 150, 12, 1.0 + ], + "vrayscene_layer": [ + 255, 150, 12, 1.0 + ], + "yeticache": [ + 99, 206, 220, 1.0 + ], + "yetiRig": [ + 0, 205, 125, 1.0 + ] + }, + "reference_loader": { + "namespace": "{folder[name]}_{product[name]}_##_", + "group_name": "_GRP", + "display_handle": True + } +} diff --git a/server_addon/maya/server/settings/main.py b/server_addon/maya/server/settings/main.py new file mode 100644 index 0000000000..c8021614be --- /dev/null +++ b/server_addon/maya/server/settings/main.py @@ -0,0 +1,141 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel, ensure_unique_names +from .imageio import ImageIOSettings, DEFAULT_IMAGEIO_SETTINGS +from .maya_dirmap import MayaDirmapModel, DEFAULT_MAYA_DIRMAP_SETTINGS +from .include_handles import IncludeHandlesModel, DEFAULT_INCLUDE_HANDLES +from .explicit_plugins_loading import ( + ExplicitPluginsLoadingModel, DEFAULT_EXPLITCIT_PLUGINS_LOADING_SETTINGS +) +from .scriptsmenu import ScriptsmenuModel, DEFAULT_SCRIPTSMENU_SETTINGS +from .render_settings import RenderSettingsModel, DEFAULT_RENDER_SETTINGS +from .creators import CreatorsModel, DEFAULT_CREATORS_SETTINGS +from .publishers import PublishersModel, DEFAULT_PUBLISH_SETTINGS +from .loaders import LoadersModel, DEFAULT_LOADERS_SETTING +from .workfile_build_settings import ProfilesModel, DEFAULT_WORKFILE_SETTING +from .templated_workfile_settings import ( + TemplatedProfilesModel, DEFAULT_TEMPLATED_WORKFILE_SETTINGS +) + + +class ExtMappingItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Product type") + value: str = Field(title="Extension") + + +class PublishGUIFilterItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: bool = Field(True, title="Active") + + +class PublishGUIFiltersModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: list[PublishGUIFilterItemModel] = 
Field(default_factory=list) + + @validator("value") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class MayaSettings(BaseSettingsModel): + """Maya Project Settings.""" + + open_workfile_post_initialization: bool = Field( + True, title="Open Workfile Post Initialization") + explicit_plugins_loading: ExplicitPluginsLoadingModel = Field( + default_factory=ExplicitPluginsLoadingModel, + title="Explicit Plugins Loading") + imageio: ImageIOSettings = Field( + default_factory=ImageIOSettings, title="Color Management (imageio)") + mel_workspace: str = Field(title="Maya MEL Workspace", widget="textarea") + ext_mapping: list[ExtMappingItemModel] = Field( + default_factory=list, title="Extension Mapping") + maya_dirmap: MayaDirmapModel = Field( + default_factory=MayaDirmapModel, title="Maya dirmap Settings") + include_handles: IncludeHandlesModel = Field( + default_factory=IncludeHandlesModel, + title="Include/Exclude Handles in default playback & render range" + ) + scriptsmenu: ScriptsmenuModel = Field( + default_factory=ScriptsmenuModel, + title="Scriptsmenu Settings" + ) + render_settings: RenderSettingsModel = Field( + default_factory=RenderSettingsModel, title="Render Settings") + create: CreatorsModel = Field( + default_factory=CreatorsModel, title="Creators") + publish: PublishersModel = Field( + default_factory=PublishersModel, title="Publishers") + load: LoadersModel = Field( + default_factory=LoadersModel, title="Loaders") + workfile_build: ProfilesModel = Field( + default_factory=ProfilesModel, title="Workfile Build Settings") + templated_workfile_build: TemplatedProfilesModel = Field( + default_factory=TemplatedProfilesModel, + title="Templated Workfile Build Settings") + filters: list[PublishGUIFiltersModel] = Field( + default_factory=list, + title="Publish GUI Filters") + + @validator("filters", "ext_mapping") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +DEFAULT_MEL_WORKSPACE_SETTINGS = "\n".join(( + 'workspace -fr "shaders" "renderData/shaders";', + 'workspace -fr "images" "renders/maya";', + 'workspace -fr "particles" "particles";', + 'workspace -fr "mayaAscii" "";', + 'workspace -fr "mayaBinary" "";', + 'workspace -fr "scene" "";', + 'workspace -fr "alembicCache" "cache/alembic";', + 'workspace -fr "renderData" "renderData";', + 'workspace -fr "sourceImages" "sourceimages";', + 'workspace -fr "fileCache" "cache/nCache";', + '', +)) + +DEFAULT_MAYA_SETTING = { + "open_workfile_post_initialization": False, + "explicit_plugins_loading": DEFAULT_EXPLITCIT_PLUGINS_LOADING_SETTINGS, + "imageio": DEFAULT_IMAGEIO_SETTINGS, + "mel_workspace": DEFAULT_MEL_WORKSPACE_SETTINGS, + "ext_mapping": [ + {"name": "model", "value": "ma"}, + {"name": "mayaAscii", "value": "ma"}, + {"name": "camera", "value": "ma"}, + {"name": "rig", "value": "ma"}, + {"name": "workfile", "value": "ma"}, + {"name": "yetiRig", "value": "ma"} + ], + # `maya_dirmap` was originally with dash - `maya-dirmap` + "maya_dirmap": DEFAULT_MAYA_DIRMAP_SETTINGS, + "include_handles": DEFAULT_INCLUDE_HANDLES, + "scriptsmenu": DEFAULT_SCRIPTSMENU_SETTINGS, + "render_settings": DEFAULT_RENDER_SETTINGS, + "create": DEFAULT_CREATORS_SETTINGS, + "publish": DEFAULT_PUBLISH_SETTINGS, + "load": DEFAULT_LOADERS_SETTING, + "workfile_build": DEFAULT_WORKFILE_SETTING, + "templated_workfile_build": DEFAULT_TEMPLATED_WORKFILE_SETTINGS, + "filters": [ + { + "name": "preset 1", + "value": [ + {"name": "ValidateNoAnimation", "value": False}, + {"name": 
"ValidateShapeDefaultNames", "value": False}, + ] + }, + { + "name": "preset 2", + "value": [ + {"name": "ValidateNoAnimation", "value": False}, + ] + }, + ] +} diff --git a/server_addon/maya/server/settings/maya_dirmap.py b/server_addon/maya/server/settings/maya_dirmap.py new file mode 100644 index 0000000000..243261dc87 --- /dev/null +++ b/server_addon/maya/server/settings/maya_dirmap.py @@ -0,0 +1,40 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class MayaDirmapPathsSubmodel(BaseSettingsModel): + _layout = "compact" + source_path: list[str] = Field( + default_factory=list, title="Source Paths" + ) + destination_path: list[str] = Field( + default_factory=list, title="Destination Paths" + ) + + +class MayaDirmapModel(BaseSettingsModel): + """Maya dirmap settings.""" + # _layout = "expanded" + _isGroup: bool = True + + enabled: bool = Field(title="enabled") + # Use ${} placeholder instead of absolute value of a root in + # referenced filepaths. + use_env_var_as_root: bool = Field( + title="Use env var placeholder in referenced paths" + ) + paths: MayaDirmapPathsSubmodel = Field( + default_factory=MayaDirmapPathsSubmodel, + title="Dirmap Paths" + ) + + +DEFAULT_MAYA_DIRMAP_SETTINGS = { + "use_env_var_as_root": False, + "enabled": False, + "paths": { + "source-path": [], + "destination-path": [] + } +} diff --git a/server_addon/maya/server/settings/publish_playblast.py b/server_addon/maya/server/settings/publish_playblast.py new file mode 100644 index 0000000000..acfcaf5988 --- /dev/null +++ b/server_addon/maya/server/settings/publish_playblast.py @@ -0,0 +1,382 @@ +from pydantic import Field, validator + +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, + task_types_enum, +) +from ayon_server.types import ColorRGBA_uint8 + + +def hardware_falloff_enum(): + return [ + {"label": "Linear", "value": "0"}, + {"label": "Exponential", "value": "1"}, + {"label": "Exponential Squared", "value": "2"} + ] + + +def renderer_enum(): + return [ + {"label": "Viewport 2.0", "value": "vp2Renderer"} + ] + + +def displayLights_enum(): + return [ + {"label": "Default Lighting", "value": "default"}, + {"label": "All Lights", "value": "all"}, + {"label": "Selected Lights", "value": "selected"}, + {"label": "Flat Lighting", "value": "flat"}, + {"label": "No Lights", "value": "nolights"} + ] + + +def plugin_objects_default(): + return [ + { + "name": "gpuCacheDisplayFilter", + "value": False + } + ] + + +class CodecSetting(BaseSettingsModel): + _layout = "expanded" + compression: str = Field("png", title="Encoding") + format: str = Field("image", title="Format") + quality: int = Field(95, title="Quality", ge=0, le=100) + + +class DisplayOptionsSetting(BaseSettingsModel): + _layout = "expanded" + override_display: bool = Field(True, title="Override display options") + background: ColorRGBA_uint8 = Field( + (125, 125, 125, 1.0), title="Background Color" + ) + displayGradient: bool = Field(True, title="Display background gradient") + backgroundTop: ColorRGBA_uint8 = Field( + (125, 125, 125, 1.0), title="Background Top" + ) + backgroundBottom: ColorRGBA_uint8 = Field( + (125, 125, 125, 1.0), title="Background Bottom" + ) + + +class GenericSetting(BaseSettingsModel): + _layout = "expanded" + isolate_view: bool = Field(True, title="Isolate View") + off_screen: bool = Field(True, title="Off Screen") + pan_zoom: bool = Field(False, title="2D Pan/Zoom") + + +class RendererSetting(BaseSettingsModel): + _layout = "expanded" + rendererName: str = Field( 
+ "vp2Renderer", + enum_resolver=renderer_enum, + title="Renderer name" + ) + + +class ResolutionSetting(BaseSettingsModel): + _layout = "expanded" + width: int = Field(0, title="Width") + height: int = Field(0, title="Height") + + +class PluginObjectsModel(BaseSettingsModel): + name: str = Field("", title="Name") + value: bool = Field(True, title="Enabled") + + +class ViewportOptionsSetting(BaseSettingsModel): + override_viewport_options: bool = Field( + True, title="Override viewport options" + ) + displayLights: str = Field( + "default", enum_resolver=displayLights_enum, title="Display Lights" + ) + displayTextures: bool = Field(True, title="Display Textures") + textureMaxResolution: int = Field(1024, title="Texture Clamp Resolution") + renderDepthOfField: bool = Field( + True, title="Depth of Field", section="Depth of Field" + ) + shadows: bool = Field(True, title="Display Shadows") + twoSidedLighting: bool = Field(True, title="Two Sided Lighting") + lineAAEnable: bool = Field( + True, title="Enable Anti-Aliasing", section="Anti-Aliasing" + ) + multiSample: int = Field(8, title="Anti Aliasing Samples") + useDefaultMaterial: bool = Field(False, title="Use Default Material") + wireframeOnShaded: bool = Field(False, title="Wireframe On Shaded") + xray: bool = Field(False, title="X-Ray") + jointXray: bool = Field(False, title="X-Ray Joints") + backfaceCulling: bool = Field(False, title="Backface Culling") + ssaoEnable: bool = Field( + False, title="Screen Space Ambient Occlusion", section="SSAO" + ) + ssaoAmount: int = Field(1, title="SSAO Amount") + ssaoRadius: int = Field(16, title="SSAO Radius") + ssaoFilterRadius: int = Field(16, title="SSAO Filter Radius") + ssaoSamples: int = Field(16, title="SSAO Samples") + fogging: bool = Field(False, title="Enable Hardware Fog", section="Fog") + hwFogFalloff: str = Field( + "0", enum_resolver=hardware_falloff_enum, title="Hardware Falloff" + ) + hwFogDensity: float = Field(0.0, title="Fog Density") + hwFogStart: int = Field(0, title="Fog Start") + hwFogEnd: int = Field(100, title="Fog End") + hwFogAlpha: int = Field(0, title="Fog Alpha") + hwFogColorR: float = Field(1.0, title="Fog Color R") + hwFogColorG: float = Field(1.0, title="Fog Color G") + hwFogColorB: float = Field(1.0, title="Fog Color B") + motionBlurEnable: bool = Field( + False, title="Enable Motion Blur", section="Motion Blur" + ) + motionBlurSampleCount: int = Field(8, title="Motion Blur Sample Count") + motionBlurShutterOpenFraction: float = Field( + 0.2, title="Shutter Open Fraction" + ) + cameras: bool = Field(False, title="Cameras", section="Show") + clipGhosts: bool = Field(False, title="Clip Ghosts") + deformers: bool = Field(False, title="Deformers") + dimensions: bool = Field(False, title="Dimensions") + dynamicConstraints: bool = Field(False, title="Dynamic Constraints") + dynamics: bool = Field(False, title="Dynamics") + fluids: bool = Field(False, title="Fluids") + follicles: bool = Field(False, title="Follicles") + greasePencils: bool = Field(False, title="Grease Pencils") + grid: bool = Field(False, title="Grid") + hairSystems: bool = Field(True, title="Hair Systems") + handles: bool = Field(False, title="Handles") + headsUpDisplay: bool = Field(False, title="HUD") + ikHandles: bool = Field(False, title="IK Handles") + imagePlane: bool = Field(True, title="Image Plane") + joints: bool = Field(False, title="Joints") + lights: bool = Field(False, title="Lights") + locators: bool = Field(False, title="Locators") + manipulators: bool = Field(False, title="Manipulators") + 
motionTrails: bool = Field(False, title="Motion Trails") + nCloths: bool = Field(False, title="nCloths") + nParticles: bool = Field(False, title="nParticles") + nRigids: bool = Field(False, title="nRigids") + controlVertices: bool = Field(False, title="NURBS CVs") + nurbsCurves: bool = Field(False, title="NURBS Curves") + hulls: bool = Field(False, title="NURBS Hulls") + nurbsSurfaces: bool = Field(False, title="NURBS Surfaces") + particleInstancers: bool = Field(False, title="Particle Instancers") + pivots: bool = Field(False, title="Pivots") + planes: bool = Field(False, title="Planes") + pluginShapes: bool = Field(False, title="Plugin Shapes") + polymeshes: bool = Field(True, title="Polygons") + strokes: bool = Field(False, title="Strokes") + subdivSurfaces: bool = Field(False, title="Subdiv Surfaces") + textures: bool = Field(False, title="Texture Placements") + pluginObjects: list[PluginObjectsModel] = Field( + default_factory=plugin_objects_default, + title="Plugin Objects" + ) + + @validator("pluginObjects") + def validate_unique_plugin_objects(cls, value): + ensure_unique_names(value) + return value + + +class CameraOptionsSetting(BaseSettingsModel): + displayGateMask: bool = Field(False, title="Display Gate Mask") + displayResolution: bool = Field(False, title="Display Resolution") + displayFilmGate: bool = Field(False, title="Display Film Gate") + displayFieldChart: bool = Field(False, title="Display Field Chart") + displaySafeAction: bool = Field(False, title="Display Safe Action") + displaySafeTitle: bool = Field(False, title="Display Safe Title") + displayFilmPivot: bool = Field(False, title="Display Film Pivot") + displayFilmOrigin: bool = Field(False, title="Display Film Origin") + overscan: int = Field(1.0, title="Overscan") + + +class CapturePresetSetting(BaseSettingsModel): + Codec: CodecSetting = Field( + default_factory=CodecSetting, + title="Codec", + section="Codec") + DisplayOptions: DisplayOptionsSetting = Field( + default_factory=DisplayOptionsSetting, + title="Display Options", + section="Display Options") + Generic: GenericSetting = Field( + default_factory=GenericSetting, + title="Generic", + section="Generic") + Renderer: RendererSetting = Field( + default_factory=RendererSetting, + title="Renderer", + section="Renderer") + Resolution: ResolutionSetting = Field( + default_factory=ResolutionSetting, + title="Resolution", + section="Resolution") + ViewportOptions: ViewportOptionsSetting = Field( + default_factory=ViewportOptionsSetting, + title="Viewport Options") + CameraOptions: CameraOptionsSetting = Field( + default_factory=CameraOptionsSetting, + title="Camera Options") + + +class ProfilesModel(BaseSettingsModel): + _layout = "expanded" + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field(default_factory=list, title="Task names") + product_names: list[str] = Field(default_factory=list, title="Products names") + capture_preset: CapturePresetSetting = Field( + default_factory=CapturePresetSetting, + title="Capture Preset" + ) + + +class ExtractPlayblastSetting(BaseSettingsModel): + capture_preset: CapturePresetSetting = Field( + default_factory=CapturePresetSetting, + title="DEPRECATED! Please use \"Profiles\" below. 
Capture Preset" + ) + profiles: list[ProfilesModel] = Field( + default_factory=list, + title="Profiles" + ) + + +DEFAULT_PLAYBLAST_SETTING = { + "capture_preset": { + "Codec": { + "compression": "png", + "format": "image", + "quality": 95 + }, + "DisplayOptions": { + "override_display": True, + "background": [ + 125, + 125, + 125, + 1.0 + ], + "backgroundBottom": [ + 125, + 125, + 125, + 1.0 + ], + "backgroundTop": [ + 125, + 125, + 125, + 1.0 + ], + "displayGradient": True + }, + "Generic": { + "isolate_view": True, + "off_screen": True, + "pan_zoom": False + }, + "Renderer": { + "rendererName": "vp2Renderer" + }, + "Resolution": { + "width": 1920, + "height": 1080 + }, + "ViewportOptions": { + "override_viewport_options": True, + "displayLights": "default", + "displayTextures": True, + "textureMaxResolution": 1024, + "renderDepthOfField": True, + "shadows": True, + "twoSidedLighting": True, + "lineAAEnable": True, + "multiSample": 8, + "useDefaultMaterial": False, + "wireframeOnShaded": False, + "xray": False, + "jointXray": False, + "backfaceCulling": False, + "ssaoEnable": False, + "ssaoAmount": 1, + "ssaoRadius": 16, + "ssaoFilterRadius": 16, + "ssaoSamples": 16, + "fogging": False, + "hwFogFalloff": "0", + "hwFogDensity": 0.0, + "hwFogStart": 0, + "hwFogEnd": 100, + "hwFogAlpha": 0, + "hwFogColorR": 1.0, + "hwFogColorG": 1.0, + "hwFogColorB": 1.0, + "motionBlurEnable": False, + "motionBlurSampleCount": 8, + "motionBlurShutterOpenFraction": 0.2, + "cameras": False, + "clipGhosts": False, + "deformers": False, + "dimensions": False, + "dynamicConstraints": False, + "dynamics": False, + "fluids": False, + "follicles": False, + "greasePencils": False, + "grid": False, + "hairSystems": True, + "handles": False, + "headsUpDisplay": False, + "ikHandles": False, + "imagePlane": True, + "joints": False, + "lights": False, + "locators": False, + "manipulators": False, + "motionTrails": False, + "nCloths": False, + "nParticles": False, + "nRigids": False, + "controlVertices": False, + "nurbsCurves": False, + "hulls": False, + "nurbsSurfaces": False, + "particleInstancers": False, + "pivots": False, + "planes": False, + "pluginShapes": False, + "polymeshes": True, + "strokes": False, + "subdivSurfaces": False, + "textures": False, + "pluginObjects": [ + { + "name": "gpuCacheDisplayFilter", + "value": False + } + ] + }, + "CameraOptions": { + "displayGateMask": False, + "displayResolution": False, + "displayFilmGate": False, + "displayFieldChart": False, + "displaySafeAction": False, + "displaySafeTitle": False, + "displayFilmPivot": False, + "displayFilmOrigin": False, + "overscan": 1.0 + } + }, + "profiles": [] +} diff --git a/server_addon/maya/server/settings/publishers.py b/server_addon/maya/server/settings/publishers.py new file mode 100644 index 0000000000..bd7ccdf4d5 --- /dev/null +++ b/server_addon/maya/server/settings/publishers.py @@ -0,0 +1,1262 @@ +import json +from pydantic import Field, validator +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathModel, + ensure_unique_names, +) +from ayon_server.exceptions import BadRequestException +from .publish_playblast import ( + ExtractPlayblastSetting, + DEFAULT_PLAYBLAST_SETTING, +) + + +def linear_unit_enum(): + """Get linear units enumerator.""" + return [ + {"label": "mm", "value": "millimeter"}, + {"label": "cm", "value": "centimeter"}, + {"label": "m", "value": "meter"}, + {"label": "km", "value": "kilometer"}, + {"label": "in", "value": "inch"}, + {"label": "ft", "value": "foot"}, + {"label": "yd", "value": 
"yard"}, + {"label": "mi", "value": "mile"} + ] + + +def angular_unit_enum(): + """Get angular units enumerator.""" + return [ + {"label": "deg", "value": "degree"}, + {"label": "rad", "value": "radian"}, + ] + + +class BasicValidateModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class ValidateMeshUVSetMap1Model(BasicValidateModel): + """Validate model's default uv set exists and is named 'map1'.""" + pass + + +class ValidateNoAnimationModel(BasicValidateModel): + """Ensure no keyframes on nodes in the Instance.""" + pass + + +class ValidateRigOutSetNodeIdsModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateSkinclusterDeformerSet") + optional: bool = Field(title="Optional") + allow_history_only: bool = Field(title="Allow history only") + + +class ValidateModelNameModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + database: bool = Field(title="Use database shader name definitions") + material_file: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel, + title="Material File", + description=( + "Path to material file defining list of material names to check." + ) + ) + regex: str = Field( + "(.*)_(\\d)*_(?P.*)_(GEO)", + title="Validation regex", + description=( + "Regex for validating name of top level group name. You can use" + " named capturing groups:(?P.*) for Asset name" + ) + ) + top_level_regex: str = Field( + ".*_GRP", + title="Top level group name regex", + description=( + "To check for asset in name so *_some_asset_name_GRP" + " is valid, use:.*?_(?P.*)_GEO" + ) + ) + + +class ValidateModelContentModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + validate_top_group: bool = Field(title="Validate one top group") + + +class ValidateTransformNamingSuffixModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + SUFFIX_NAMING_TABLE: str = Field( + "{}", + title="Suffix Naming Tables", + widget="textarea", + description=( + "Validates transform suffix based on" + " the type of its children shapes." + ) + ) + + @validator("SUFFIX_NAMING_TABLE") + def validate_json(cls, value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "The text can't be parsed as json object" + ) + return value + ALLOW_IF_NOT_IN_SUFFIX_TABLE: bool = Field( + title="Allow if suffix not in table" + ) + + +class CollectMayaRenderModel(BaseSettingsModel): + sync_workfile_version: bool = Field( + title="Sync render version with workfile" + ) + + +class CollectFbxCameraModel(BaseSettingsModel): + enabled: bool = Field(title="CollectFbxCamera") + + +class CollectGLTFModel(BaseSettingsModel): + enabled: bool = Field(title="CollectGLTF") + + +class ValidateFrameRangeModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateFrameRange") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + exclude_product_types: list[str] = Field( + default_factory=list, + title="Exclude product types" + ) + + +class ValidateShaderNameModel(BaseSettingsModel): + """ + Shader name regex can use named capture group asset to validate against current asset name. 
+ """ + enabled: bool = Field(title="ValidateShaderName") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + regex: str = Field("(?P.*)_(.*)_SHD", title="Validation regex") + + +class ValidateAttributesModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateAttributes") + attributes: str = Field( + "{}", title="Attributes", widget="textarea") + + @validator("attributes") + def validate_json(cls, value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "The attibutes can't be parsed as json object" + ) + return value + + +class ValidateLoadedPluginModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateLoadedPlugin") + optional: bool = Field(title="Optional") + whitelist_native_plugins: bool = Field( + title="Whitelist Maya Native Plugins" + ) + authorized_plugins: list[str] = Field( + default_factory=list, title="Authorized plugins" + ) + + +class ValidateMayaUnitsModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateMayaUnits") + optional: bool = Field(title="Optional") + validate_linear_units: bool = Field(title="Validate linear units") + linear_units: str = Field( + enum_resolver=linear_unit_enum, title="Linear Units" + ) + validate_angular_units: bool = Field(title="Validate angular units") + angular_units: str = Field( + enum_resolver=angular_unit_enum, title="Angular units" + ) + validate_fps: bool = Field(title="Validate fps") + + +class ValidateUnrealStaticMeshNameModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateUnrealStaticMeshName") + optional: bool = Field(title="Optional") + validate_mesh: bool = Field(title="Validate mesh names") + validate_collision: bool = Field(title="Validate collison names") + + +class ValidateCycleErrorModel(BaseSettingsModel): + enabled: bool = Field(title="ValidateCycleError") + optional: bool = Field(title="Optional") + families: list[str] = Field(default_factory=list, title="Families") + + +class ValidatePluginPathAttributesAttrModel(BaseSettingsModel): + name: str = Field(title="Node type") + value: str = Field(title="Attribute") + + +class ValidatePluginPathAttributesModel(BaseSettingsModel): + """Fill in the node types and attributes you want to validate. + +

e.g. AlembicNode.abc_file, the node type is AlembicNode + and the node attribute is abc_file + """ + + enabled: bool = True + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + attribute: list[ValidatePluginPathAttributesAttrModel] = Field( + default_factory=list, + title="File Attribute" + ) + + @validator("attribute") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +# Validate Render Setting +class RendererAttributesModel(BaseSettingsModel): + _layout = "compact" + type: str = Field(title="Type") + value: str = Field(title="Value") + + +class ValidateRenderSettingsModel(BaseSettingsModel): + arnold_render_attributes: list[RendererAttributesModel] = Field( + default_factory=list, title="Arnold Render Attributes") + vray_render_attributes: list[RendererAttributesModel] = Field( + default_factory=list, title="VRay Render Attributes") + redshift_render_attributes: list[RendererAttributesModel] = Field( + default_factory=list, title="Redshift Render Attributes") + renderman_render_attributes: list[RendererAttributesModel] = Field( + default_factory=list, title="Renderman Render Attributes") + + +class BasicValidateModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class ValidateCameraContentsModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + validate_shapes: bool = Field(title="Validate presence of shapes") + + +class ExtractProxyAlembicModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + families: list[str] = Field( + default_factory=list, + title="Families") + + +class ExtractAlembicModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + families: list[str] = Field( + default_factory=list, + title="Families") + + +class ExtractObjModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + + +class ExtractMayaSceneRawModel(BaseSettingsModel): + """Add loaded instances to those published families:""" + enabled: bool = Field(title="ExtractMayaSceneRaw") + add_for_families: list[str] = Field(default_factory=list, title="Families") + + +class ExtractCameraAlembicModel(BaseSettingsModel): + """ + List of attributes that will be added to the baked alembic camera. Needs to be written in python list syntax. 
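+    e.g. ["focalLength", "focusDistance"] (illustrative example; attribute names assumed).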
+ """ + enabled: bool = Field(title="ExtractCameraAlembic") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + bake_attributes: str = Field( + "[]", title="Base Attributes", widget="textarea" + ) + + @validator("bake_attributes") + def validate_json_list(cls, value): + if not value.strip(): + return "[]" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, list) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "The text can't be parsed as json object" + ) + return value + + +class ExtractGLBModel(BaseSettingsModel): + enabled: bool = True + active: bool = Field(title="Active") + ogsfx_path: str = Field(title="GLSL Shader Directory") + + +class ExtractLookArgsModel(BaseSettingsModel): + argument: str = Field(title="Argument") + parameters: list[str] = Field(default_factory=list, title="Parameters") + + +class ExtractLookModel(BaseSettingsModel): + maketx_arguments: list[ExtractLookArgsModel] = Field( + default_factory=list, + title="Extra arguments for maketx command line" + ) + + +class ExtractGPUCacheModel(BaseSettingsModel): + enabled: bool = True + families: list[str] = Field(default_factory=list, title="Families") + step: float = Field(1.0, ge=1.0, title="Step") + stepSave: int = Field(1, ge=1, title="Step Save") + optimize: bool = Field(title="Optimize Hierarchy") + optimizationThreshold: int = Field(1, ge=1, title="Optimization Threshold") + optimizeAnimationsForMotionBlur: bool = Field( + title="Optimize Animations For Motion Blur" + ) + writeMaterials: bool = Field(title="Write Materials") + useBaseTessellation: bool = Field(title="User Base Tesselation") + + +class PublishersModel(BaseSettingsModel): + CollectMayaRender: CollectMayaRenderModel = Field( + default_factory=CollectMayaRenderModel, + title="Collect Render Layers", + section="Collectors" + ) + CollectFbxCamera: CollectFbxCameraModel = Field( + default_factory=CollectFbxCameraModel, + title="Collect Camera for FBX export", + ) + CollectGLTF: CollectGLTFModel = Field( + default_factory=CollectGLTFModel, + title="Collect Assets for GLB/GLTF export" + ) + ValidateInstanceInContext: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Instance In Context", + section="Validators" + ) + ValidateContainers: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Containers" + ) + ValidateFrameRange: ValidateFrameRangeModel = Field( + default_factory=ValidateFrameRangeModel, + title="Validate Frame Range" + ) + ValidateShaderName: ValidateShaderNameModel = Field( + default_factory=ValidateShaderNameModel, + title="Validate Shader Name" + ) + ValidateShadingEngine: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Look Shading Engine Naming" + ) + ValidateMayaColorSpace: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Colorspace" + ) + ValidateAttributes: ValidateAttributesModel = Field( + default_factory=ValidateAttributesModel, + title="Validate Attributes" + ) + ValidateLoadedPlugin: ValidateLoadedPluginModel = Field( + default_factory=ValidateLoadedPluginModel, + title="Validate Loaded Plugin" + ) + ValidateMayaUnits: ValidateMayaUnitsModel = Field( + default_factory=ValidateMayaUnitsModel, + title="Validate Maya Units" + ) + ValidateUnrealStaticMeshName: ValidateUnrealStaticMeshNameModel = Field( + default_factory=ValidateUnrealStaticMeshNameModel, + title="Validate Unreal Static Mesh 
Name" + ) + ValidateCycleError: ValidateCycleErrorModel = Field( + default_factory=ValidateCycleErrorModel, + title="Validate Cycle Error" + ) + ValidatePluginPathAttributes: ValidatePluginPathAttributesModel = Field( + default_factory=ValidatePluginPathAttributesModel, + title="Plug-in Path Attributes" + ) + ValidateRenderSettings: ValidateRenderSettingsModel = Field( + default_factory=ValidateRenderSettingsModel, + title="Validate Render Settings" + ) + ValidateCurrentRenderLayerIsRenderable: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Current Render Layer Has Renderable Camera" + ) + ValidateGLSLMaterial: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate GLSL Material" + ) + ValidateGLSLPlugin: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate GLSL Plugin" + ) + ValidateRenderImageRule: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Render Image Rule (Workspace)" + ) + ValidateRenderNoDefaultCameras: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Default Cameras Renderable" + ) + ValidateRenderSingleCamera: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Render Single Camera " + ) + ValidateRenderLayerAOVs: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Render Passes/AOVs Are Registered" + ) + ValidateStepSize: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Step Size" + ) + ValidateVRayDistributedRendering: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Distributed Rendering" + ) + ValidateVrayReferencedAOVs: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Referenced AOVs" + ) + ValidateVRayTranslatorEnabled: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Translator Settings" + ) + ValidateVrayProxy: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Proxy Settings" + ) + ValidateVrayProxyMembers: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="VRay Proxy Members" + ) + ValidateYetiRenderScriptCallbacks: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Yeti Render Script Callbacks" + ) + ValidateYetiRigCacheState: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Yeti Rig Cache State" + ) + ValidateYetiRigInputShapesInInstance: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Yeti Rig Input Shapes In Instance" + ) + ValidateYetiRigSettings: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Yeti Rig Settings" + ) + # Model - START + ValidateModelName: ValidateModelNameModel = Field( + default_factory=ValidateModelNameModel, + title="Validate Model Name", + section="Model", + ) + ValidateModelContent: ValidateModelContentModel = Field( + default_factory=ValidateModelContentModel, + title="Validate Model Content", + ) + ValidateTransformNamingSuffix: ValidateTransformNamingSuffixModel = Field( + default_factory=ValidateTransformNamingSuffixModel, + title="Validate Transform Naming Suffix", + ) + ValidateColorSets: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Color Sets", + ) + ValidateMeshHasOverlappingUVs: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Has Overlapping 
UVs", + ) + ValidateMeshArnoldAttributes: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Arnold Attributes", + ) + ValidateMeshShaderConnections: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Shader Connections", + ) + ValidateMeshSingleUVSet: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Single UV Set", + ) + ValidateMeshHasUVs: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Has UVs", + ) + ValidateMeshLaminaFaces: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Lamina Faces", + ) + ValidateMeshNgons: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Ngons", + ) + ValidateMeshNonManifold: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Non-Manifold", + ) + ValidateMeshNoNegativeScale: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh No Negative Scale", + ) + ValidateMeshNonZeroEdgeLength: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Edge Length Non Zero", + ) + ValidateMeshNormalsUnlocked: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Normals Unlocked", + ) + ValidateMeshUVSetMap1: ValidateMeshUVSetMap1Model = Field( + default_factory=ValidateMeshUVSetMap1Model, + title="Validate Mesh UV Set Map 1", + ) + ValidateMeshVerticesHaveEdges: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Mesh Vertices Have Edges", + ) + ValidateNoAnimation: ValidateNoAnimationModel = Field( + default_factory=ValidateNoAnimationModel, + title="Validate No Animation", + ) + ValidateNoNamespace: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Namespace", + ) + ValidateNoNullTransforms: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Null Transforms", + ) + ValidateNoUnknownNodes: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Unknown Nodes", + ) + ValidateNodeNoGhosting: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Node No Ghosting", + ) + ValidateShapeDefaultNames: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Shape Default Names", + ) + ValidateShapeRenderStats: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Shape Render Stats", + ) + ValidateShapeZero: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Shape Zero", + ) + ValidateTransformZero: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Transform Zero", + ) + ValidateUniqueNames: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Unique Names", + ) + ValidateNoVRayMesh: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No V-Ray Proxies (VRayMesh)", + ) + ValidateUnrealMeshTriangulated: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate if Mesh is Triangulated", + ) + ValidateAlembicVisibleOnly: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Alembic Visible Node", + ) + ExtractProxyAlembic: ExtractProxyAlembicModel = Field( + default_factory=ExtractProxyAlembicModel, + 
title="Extract Proxy Alembic", + section="Model Extractors", + ) + ExtractAlembic: ExtractAlembicModel = Field( + default_factory=ExtractAlembicModel, + title="Extract Alembic", + ) + ExtractObj: ExtractObjModel = Field( + default_factory=ExtractObjModel, + title="Extract OBJ" + ) + # Model - END + + # Rig - START + ValidateRigContents: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Rig Contents", + section="Rig", + ) + ValidateRigJointsHidden: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Rig Joints Hidden", + ) + ValidateRigControllers: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Rig Controllers", + ) + ValidateAnimationContent: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Animation Content", + ) + ValidateOutRelatedNodeIds: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Animation Out Set Related Node Ids", + ) + ValidateRigControllersArnoldAttributes: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Rig Controllers (Arnold Attributes)", + ) + ValidateSkeletalMeshHierarchy: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Skeletal Mesh Top Node", + ) + ValidateSkinclusterDeformerSet: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Skincluster Deformer Relationships", + ) + ValidateRigOutSetNodeIds: ValidateRigOutSetNodeIdsModel = Field( + default_factory=ValidateRigOutSetNodeIdsModel, + title="Validate Rig Out Set Node Ids", + ) + # Rig - END + ValidateCameraAttributes: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Camera Attributes" + ) + ValidateAssemblyName: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Assembly Name" + ) + ValidateAssemblyNamespaces: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Assembly Namespaces" + ) + ValidateAssemblyModelTransforms: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Assembly Model Transforms" + ) + ValidateAssRelativePaths: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Ass Relative Paths" + ) + ValidateInstancerContent: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Instancer Content" + ) + ValidateInstancerFrameRanges: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Instancer Cache Frame Ranges" + ) + ValidateNoDefaultCameras: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate No Default Cameras" + ) + ValidateUnrealUpAxis: BasicValidateModel = Field( + default_factory=BasicValidateModel, + title="Validate Unreal Up-Axis Check" + ) + ValidateCameraContents: ValidateCameraContentsModel = Field( + default_factory=ValidateCameraContentsModel, + title="Validate Camera Content" + ) + ExtractPlayblast: ExtractPlayblastSetting = Field( + default_factory=ExtractPlayblastSetting, + title="Extract Playblast Settings", + section="Extractors" + ) + ExtractMayaSceneRaw: ExtractMayaSceneRawModel = Field( + default_factory=ExtractMayaSceneRawModel, + title="Maya Scene(Raw)" + ) + ExtractCameraAlembic: ExtractCameraAlembicModel = Field( + default_factory=ExtractCameraAlembicModel, + title="Extract Camera Alembic" + ) + ExtractGLB: ExtractGLBModel = Field( + 
default_factory=ExtractGLBModel, + title="Extract GLB" + ) + ExtractLook: ExtractLookModel = Field( + default_factory=ExtractLookModel, + title="Extract Look" + ) + ExtractGPUCache: ExtractGPUCacheModel = Field( + default_factory=ExtractGPUCacheModel, + title="Extract GPU Cache", + ) + + +DEFAULT_SUFFIX_NAMING = { + "mesh": ["_GEO", "_GES", "_GEP", "_OSD"], + "nurbsCurve": ["_CRV"], + "nurbsSurface": ["_NRB"], + "locator": ["_LOC"], + "group": ["_GRP"] +} + +DEFAULT_PUBLISH_SETTINGS = { + "CollectMayaRender": { + "sync_workfile_version": False + }, + "CollectFbxCamera": { + "enabled": False + }, + "CollectGLTF": { + "enabled": False + }, + "ValidateInstanceInContext": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateContainers": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateFrameRange": { + "enabled": True, + "optional": True, + "active": True, + "exclude_product_types": [ + "model", + "rig", + "staticMesh" + ] + }, + "ValidateShaderName": { + "enabled": False, + "optional": True, + "active": True, + "regex": "(?P<asset>.*)_(.*)_SHD" + }, + "ValidateShadingEngine": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMayaColorSpace": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateAttributes": { + "enabled": False, + "attributes": "{}" + }, + "ValidateLoadedPlugin": { + "enabled": False, + "optional": True, + "whitelist_native_plugins": False, + "authorized_plugins": [] + }, + "ValidateMayaUnits": { + "enabled": True, + "optional": False, + "validate_linear_units": True, + "linear_units": "cm", + "validate_angular_units": True, + "angular_units": "deg", + "validate_fps": True + }, + "ValidateUnrealStaticMeshName": { + "enabled": True, + "optional": True, + "validate_mesh": False, + "validate_collision": True + }, + "ValidateCycleError": { + "enabled": True, + "optional": False, + "families": [ + "rig" + ] + }, + "ValidatePluginPathAttributes": { + "enabled": True, + "optional": False, + "active": True, + "attribute": [ + {"name": "AlembicNode", "value": "abc_File"}, + {"name": "VRayProxy", "value": "fileName"}, + {"name": "RenderManArchive", "value": "filename"}, + {"name": "pgYetiMaya", "value": "cacheFileName"}, + {"name": "aiStandIn", "value": "dso"}, + {"name": "RedshiftSprite", "value": "tex0"}, + {"name": "RedshiftBokeh", "value": "dofBokehImage"}, + {"name": "RedshiftCameraMap", "value": "tex0"}, + {"name": "RedshiftEnvironment", "value": "tex2"}, + {"name": "RedshiftDomeLight", "value": "tex1"}, + {"name": "RedshiftIESLight", "value": "profile"}, + {"name": "RedshiftLightGobo", "value": "tex0"}, + {"name": "RedshiftNormalMap", "value": "tex0"}, + {"name": "RedshiftProxyMesh", "value": "fileName"}, + {"name": "RedshiftVolumeShape", "value": "fileName"}, + {"name": "VRayTexGLSL", "value": "fileName"}, + {"name": "VRayMtlGLSL", "value": "fileName"}, + {"name": "VRayVRmatMtl", "value": "fileName"}, + {"name": "VRayPtex", "value": "ptexFile"}, + {"name": "VRayLightIESShape", "value": "iesFile"}, + {"name": "VRayMesh", "value": "materialAssignmentsFile"}, + {"name": "VRayMtlOSL", "value": "fileName"}, + {"name": "VRayTexOSL", "value": "fileName"}, + {"name": "VRayTexOCIO", "value": "ocioConfigFile"}, + {"name": "VRaySettingsNode", "value": "pmap_autoSaveFile2"}, + {"name": "VRayScannedMtl", "value": "file"}, + {"name": "VRayScene", "value": "parameterOverrideFilePath"}, + {"name": "VRayMtlMDL", "value": "filename"}, + {"name": "VRaySimbiont", "value": "file"}, + {"name": "dlOpenVDBShape", 
"value": "filename"}, + {"name": "pgYetiMayaShape", "value": "liveABCFilename"}, + {"name": "gpuCache", "value": "cacheFileName"}, + ] + }, + "ValidateRenderSettings": { + "arnold_render_attributes": [], + "vray_render_attributes": [], + "redshift_render_attributes": [], + "renderman_render_attributes": [] + }, + "ValidateCurrentRenderLayerIsRenderable": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateGLSLMaterial": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateGLSLPlugin": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRenderImageRule": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRenderNoDefaultCameras": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRenderSingleCamera": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRenderLayerAOVs": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateStepSize": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVRayDistributedRendering": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVrayReferencedAOVs": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVRayTranslatorEnabled": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVrayProxy": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateVrayProxyMembers": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateYetiRenderScriptCallbacks": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateYetiRigCacheState": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateYetiRigInputShapesInInstance": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateYetiRigSettings": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateModelName": { + "enabled": False, + "database": True, + "material_file": { + "windows": "", + "darwin": "", + "linux": "" + }, + "regex": "(.*)_(\\d)*_(?P.*)_(GEO)", + "top_level_regex": ".*_GRP" + }, + "ValidateModelContent": { + "enabled": True, + "optional": False, + "validate_top_group": True + }, + "ValidateTransformNamingSuffix": { + "enabled": True, + "optional": True, + "SUFFIX_NAMING_TABLE": json.dumps(DEFAULT_SUFFIX_NAMING, indent=4), + "ALLOW_IF_NOT_IN_SUFFIX_TABLE": True + }, + "ValidateColorSets": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshHasOverlappingUVs": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshArnoldAttributes": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshShaderConnections": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshSingleUVSet": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshHasUVs": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshLaminaFaces": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshNgons": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshNonManifold": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshNoNegativeScale": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateMeshNonZeroEdgeLength": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMeshNormalsUnlocked": { + "enabled": False, + "optional": True, + 
"active": True + }, + "ValidateMeshUVSetMap1": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateMeshVerticesHaveEdges": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateNoAnimation": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateNoNamespace": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNoNullTransforms": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNoUnknownNodes": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNodeNoGhosting": { + "enabled": False, + "optional": False, + "active": True + }, + "ValidateShapeDefaultNames": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateShapeRenderStats": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateShapeZero": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateTransformZero": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateUniqueNames": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateNoVRayMesh": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateUnrealMeshTriangulated": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateAlembicVisibleOnly": { + "enabled": True, + "optional": False, + "active": True + }, + "ExtractProxyAlembic": { + "enabled": True, + "families": [ + "proxyAbc" + ] + }, + "ExtractAlembic": { + "enabled": True, + "families": [ + "pointcache", + "model", + "vrayproxy.alembic" + ] + }, + "ExtractObj": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateRigContents": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateRigJointsHidden": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateRigControllers": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateAnimationContent": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateOutRelatedNodeIds": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRigControllersArnoldAttributes": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateSkeletalMeshHierarchy": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateSkinclusterDeformerSet": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateRigOutSetNodeIds": { + "enabled": True, + "optional": False, + "allow_history_only": False + }, + "ValidateCameraAttributes": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateAssemblyName": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateAssemblyNamespaces": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateAssemblyModelTransforms": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateAssRelativePaths": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateInstancerContent": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateInstancerFrameRanges": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateNoDefaultCameras": { + "enabled": True, + "optional": False, + "active": True + }, + "ValidateUnrealUpAxis": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateCameraContents": { + "enabled": True, + "optional": False, + "validate_shapes": True + }, + "ExtractPlayblast": 
DEFAULT_PLAYBLAST_SETTING, + "ExtractMayaSceneRaw": { + "enabled": True, + "add_for_families": [ + "layout" + ] + }, + "ExtractCameraAlembic": { + "enabled": True, + "optional": True, + "active": True, + "bake_attributes": "[]" + }, + "ExtractGLB": { + "enabled": True, + "active": True, + "ogsfx_path": "/maya2glTF/PBR/shaders/glTF_PBR.ogsfx" + }, + "ExtractLook": { + "maketx_arguments": [] + }, + "ExtractGPUCache": { + "enabled": False, + "families": [ + "model", + "animation", + "pointcache" + ], + "step": 1.0, + "stepSave": 1, + "optimize": True, + "optimizationThreshold": 40000, + "optimizeAnimationsForMotionBlur": True, + "writeMaterials": True, + "useBaseTessellation": True + } +} diff --git a/server_addon/maya/server/settings/render_settings.py b/server_addon/maya/server/settings/render_settings.py new file mode 100644 index 0000000000..b6163a04ce --- /dev/null +++ b/server_addon/maya/server/settings/render_settings.py @@ -0,0 +1,500 @@ +"""Providing models and values for Maya Render Settings.""" +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +def aov_separators_enum(): + return [ + {"value": "dash", "label": "- (dash)"}, + {"value": "underscore", "label": "_ (underscore)"}, + {"value": "dot", "label": ". (dot)"} + ] + + +def arnold_image_format_enum(): + """Return enumerator for Arnold output formats.""" + return [ + {"label": "jpeg", "value": "jpeg"}, + {"label": "png", "value": "png"}, + {"label": "deepexr", "value": "deep exr"}, + {"label": "tif", "value": "tif"}, + {"label": "exr", "value": "exr"}, + {"label": "maya", "value": "maya"}, + {"label": "mtoa_shaders", "value": "mtoa_shaders"} + ] + + +def arnold_aov_list_enum(): + """Return enumerator for Arnold AOVs. + + Note: Key is value, Value in this case is Label. This + was taken from v3 settings. 
+ """ + return [ + {"value": "empty", "label": "< empty >"}, + {"value": "ID", "label": "ID"}, + {"value": "N", "label": "N"}, + {"value": "P", "label": "P"}, + {"value": "Pref", "label": "Pref"}, + {"value": "RGBA", "label": "RGBA"}, + {"value": "Z", "label": "Z"}, + {"value": "albedo", "label": "albedo"}, + {"value": "background", "label": "background"}, + {"value": "coat", "label": "coat"}, + {"value": "coat_albedo", "label": "coat_albedo"}, + {"value": "coat_direct", "label": "coat_direct"}, + {"value": "coat_indirect", "label": "coat_indirect"}, + {"value": "cputime", "label": "cputime"}, + {"value": "crypto_asset", "label": "crypto_asset"}, + {"value": "crypto_material", "label": "cypto_material"}, + {"value": "crypto_object", "label": "crypto_object"}, + {"value": "diffuse", "label": "diffuse"}, + {"value": "diffuse_albedo", "label": "diffuse_albedo"}, + {"value": "diffuse_direct", "label": "diffuse_direct"}, + {"value": "diffuse_indirect", "label": "diffuse_indirect"}, + {"value": "direct", "label": "direct"}, + {"value": "emission", "label": "emission"}, + {"value": "highlight", "label": "highlight"}, + {"value": "indirect", "label": "indirect"}, + {"value": "motionvector", "label": "motionvector"}, + {"value": "opacity", "label": "opacity"}, + {"value": "raycount", "label": "raycount"}, + {"value": "rim_light", "label": "rim_light"}, + {"value": "shadow", "label": "shadow"}, + {"value": "shadow_diff", "label": "shadow_diff"}, + {"value": "shadow_mask", "label": "shadow_mask"}, + {"value": "shadow_matte", "label": "shadow_matte"}, + {"value": "sheen", "label": "sheen"}, + {"value": "sheen_albedo", "label": "sheen_albedo"}, + {"value": "sheen_direct", "label": "sheen_direct"}, + {"value": "sheen_indirect", "label": "sheen_indirect"}, + {"value": "specular", "label": "specular"}, + {"value": "specular_albedo", "label": "specular_albedo"}, + {"value": "specular_direct", "label": "specular_direct"}, + {"value": "specular_indirect", "label": "specular_indirect"}, + {"value": "sss", "label": "sss"}, + {"value": "sss_albedo", "label": "sss_albedo"}, + {"value": "sss_direct", "label": "sss_direct"}, + {"value": "sss_indirect", "label": "sss_indirect"}, + {"value": "transmission", "label": "transmission"}, + {"value": "transmission_albedo", "label": "transmission_albedo"}, + {"value": "transmission_direct", "label": "transmission_direct"}, + {"value": "transmission_indirect", "label": "transmission_indirect"}, + {"value": "volume", "label": "volume"}, + {"value": "volume_Z", "label": "volume_Z"}, + {"value": "volume_albedo", "label": "volume_albedo"}, + {"value": "volume_direct", "label": "volume_direct"}, + {"value": "volume_indirect", "label": "volume_indirect"}, + {"value": "volume_opacity", "label": "volume_opacity"}, + ] + + +def vray_image_output_enum(): + """Return output format for Vray enumerator.""" + return [ + {"label": "png", "value": "png"}, + {"label": "jpg", "value": "jpg"}, + {"label": "vrimg", "value": "vrimg"}, + {"label": "hdr", "value": "hdr"}, + {"label": "exr", "value": "exr"}, + {"label": "exr (multichannel)", "value": "exr (multichannel)"}, + {"label": "exr (deep)", "value": "exr (deep)"}, + {"label": "tga", "value": "tga"}, + {"label": "bmp", "value": "bmp"}, + {"label": "sgi", "value": "sgi"} + ] + + +def vray_aov_list_enum(): + """Return enumerator for Vray AOVs. + + Note: Key is value, Value in this case is Label. This + was taken from v3 settings. 
+ """ + + return [ + {"value": "empty", "label": "< empty >"}, + {"value": "atmosphereChannel", "label": "atmosphere"}, + {"value": "backgroundChannel", "label": "background"}, + {"value": "bumpNormalsChannel", "label": "bumpnormals"}, + {"value": "causticsChannel", "label": "caustics"}, + {"value": "coatFilterChannel", "label": "coat_filter"}, + {"value": "coatGlossinessChannel", "label": "coatGloss"}, + {"value": "coatReflectionChannel", "label": "coat_reflection"}, + {"value": "vrayCoatChannel", "label": "coat_specular"}, + {"value": "CoverageChannel", "label": "coverage"}, + {"value": "cryptomatteChannel", "label": "cryptomatte"}, + {"value": "customColor", "label": "custom_color"}, + {"value": "drBucketChannel", "label": "DR"}, + {"value": "denoiserChannel", "label": "denoiser"}, + {"value": "diffuseChannel", "label": "diffuse"}, + {"value": "ExtraTexElement", "label": "extraTex"}, + {"value": "giChannel", "label": "GI"}, + {"value": "LightMixElement", "label": "None"}, + {"value": "lightingChannel", "label": "lighting"}, + {"value": "LightingAnalysisChannel", "label": "LightingAnalysis"}, + {"value": "materialIDChannel", "label": "materialID"}, + {"value": "MaterialSelectElement", "label": "materialSelect"}, + {"value": "matteShadowChannel", "label": "matteShadow"}, + {"value": "MultiMatteElement", "label": "multimatte"}, + {"value": "multimatteIDChannel", "label": "multimatteID"}, + {"value": "normalsChannel", "label": "normals"}, + {"value": "nodeIDChannel", "label": "objectId"}, + {"value": "objectSelectChannel", "label": "objectSelect"}, + {"value": "rawCoatFilterChannel", "label": "raw_coat_filter"}, + {"value": "rawCoatReflectionChannel", "label": "raw_coat_reflection"}, + {"value": "rawDiffuseFilterChannel", "label": "rawDiffuseFilter"}, + {"value": "rawGiChannel", "label": "rawGI"}, + {"value": "rawLightChannel", "label": "rawLight"}, + {"value": "rawReflectionChannel", "label": "rawReflection"}, + { + "value": "rawReflectionFilterChannel", + "label": "rawReflectionFilter" + }, + {"value": "rawRefractionChannel", "label": "rawRefraction"}, + { + "value": "rawRefractionFilterChannel", + "label": "rawRefractionFilter" + }, + {"value": "rawShadowChannel", "label": "rawShadow"}, + {"value": "rawSheenFilterChannel", "label": "raw_sheen_filter"}, + { + "value": "rawSheenReflectionChannel", + "label": "raw_sheen_reflection" + }, + {"value": "rawTotalLightChannel", "label": "rawTotalLight"}, + {"value": "reflectIORChannel", "label": "reflIOR"}, + {"value": "reflectChannel", "label": "reflect"}, + {"value": "reflectionFilterChannel", "label": "reflectionFilter"}, + {"value": "reflectGlossinessChannel", "label": "reflGloss"}, + {"value": "refractChannel", "label": "refract"}, + {"value": "refractionFilterChannel", "label": "refractionFilter"}, + {"value": "refractGlossinessChannel", "label": "refrGloss"}, + {"value": "renderIDChannel", "label": "renderId"}, + {"value": "FastSSS2Channel", "label": "SSS"}, + {"value": "sampleRateChannel", "label": "sampleRate"}, + {"value": "samplerInfo", "label": "samplerInfo"}, + {"value": "selfIllumChannel", "label": "selfIllum"}, + {"value": "shadowChannel", "label": "shadow"}, + {"value": "sheenFilterChannel", "label": "sheen_filter"}, + {"value": "sheenGlossinessChannel", "label": "sheenGloss"}, + {"value": "sheenReflectionChannel", "label": "sheen_reflection"}, + {"value": "vraySheenChannel", "label": "sheen_specular"}, + {"value": "specularChannel", "label": "specular"}, + {"value": "Toon", "label": "Toon"}, + {"value": "toonLightingChannel", 
"label": "toonLighting"}, + {"value": "toonSpecularChannel", "label": "toonSpecular"}, + {"value": "totalLightChannel", "label": "totalLight"}, + {"value": "unclampedColorChannel", "label": "unclampedColor"}, + {"value": "VRScansPaintMaskChannel", "label": "VRScansPaintMask"}, + {"value": "VRScansZoneMaskChannel", "label": "VRScansZoneMask"}, + {"value": "velocityChannel", "label": "velocity"}, + {"value": "zdepthChannel", "label": "zDepth"}, + {"value": "LightSelectElement", "label": "lightselect"}, + ] + + +def redshift_engine_enum(): + """Get Redshift engine type enumerator.""" + return [ + {"value": "0", "label": "None"}, + {"value": "1", "label": "Photon Map"}, + {"value": "2", "label": "Irradiance Cache"}, + {"value": "3", "label": "Brute Force"} + ] + + +def redshift_image_output_enum(): + """Return output format for Redshift enumerator.""" + return [ + {"value": "iff", "label": "Maya IFF"}, + {"value": "exr", "label": "OpenEXR"}, + {"value": "tif", "label": "TIFF"}, + {"value": "png", "label": "PNG"}, + {"value": "tga", "label": "Targa"}, + {"value": "jpg", "label": "JPEG"} + ] + + +def redshift_aov_list_enum(): + """Return enumerator for Vray AOVs. + + Note: Key is value, Value in this case is Label. This + was taken from v3 settings. + """ + return [ + {"value": "empty", "label": "< none >"}, + {"value": "AO", "label": "Ambient Occlusion"}, + {"value": "Background", "label": "Background"}, + {"value": "Beauty", "label": "Beauty"}, + {"value": "BumpNormals", "label": "Bump Normals"}, + {"value": "Caustics", "label": "Caustics"}, + {"value": "CausticsRaw", "label": "Caustics Raw"}, + {"value": "Cryptomatte", "label": "Cryptomatte"}, + {"value": "Custom", "label": "Custom"}, + {"value": "Z", "label": "Depth"}, + {"value": "DiffuseFilter", "label": "Diffuse Filter"}, + {"value": "DiffuseLighting", "label": "Diffuse Lighting"}, + {"value": "DiffuseLightingRaw", "label": "Diffuse Lighting Raw"}, + {"value": "Emission", "label": "Emission"}, + {"value": "GI", "label": "Global Illumination"}, + {"value": "GIRaw", "label": "Global Illumination Raw"}, + {"value": "Matte", "label": "Matte"}, + {"value": "MotionVectors", "label": "Ambient Occlusion"}, + {"value": "N", "label": "Normals"}, + {"value": "ID", "label": "ObjectID"}, + {"value": "ObjectBumpNormal", "label": "Object-Space Bump Normals"}, + {"value": "ObjectPosition", "label": "Object-Space Positions"}, + {"value": "PuzzleMatte", "label": "Puzzle Matte"}, + {"value": "Reflections", "label": "Reflections"}, + {"value": "ReflectionsFilter", "label": "Reflections Filter"}, + {"value": "ReflectionsRaw", "label": "Reflections Raw"}, + {"value": "Refractions", "label": "Refractions"}, + {"value": "RefractionsFilter", "label": "Refractions Filter"}, + {"value": "RefractionsRaw", "label": "Refractions Filter"}, + {"value": "Shadows", "label": "Shadows"}, + {"value": "SpecularLighting", "label": "Specular Lighting"}, + {"value": "SSS", "label": "Sub Surface Scatter"}, + {"value": "SSSRaw", "label": "Sub Surface Scatter Raw"}, + { + "value": "TotalDiffuseLightingRaw", + "label": "Total Diffuse Lighting Raw" + }, + { + "value": "TotalTransLightingRaw", + "label": "Total Translucency Filter" + }, + {"value": "TransTint", "label": "Translucency Filter"}, + {"value": "TransGIRaw", "label": "Translucency Lighting Raw"}, + {"value": "VolumeFogEmission", "label": "Volume Fog Emission"}, + {"value": "VolumeFogTint", "label": "Volume Fog Tint"}, + {"value": "VolumeLighting", "label": "Volume Lighting"}, + {"value": "P", "label": "World Position"}, + ] 
+ + +class AdditionalOptionsModel(BaseSettingsModel): + """Additional Option""" + _layout = "compact" + + attribute: str = Field("", title="Attribute name") + value: str = Field("", title="Value") + + +class ArnoldSettingsModel(BaseSettingsModel): + image_prefix: str = Field(title="Image prefix template") + image_format: str = Field( + enum_resolver=arnold_image_format_enum, title="Output Image Format") + multilayer_exr: bool = Field(title="Multilayer (exr)") + tiled: bool = Field(title="Tiled (tif, exr)") + aov_list: list[str] = Field( + default_factory=list, + enum_resolver=arnold_aov_list_enum, + title="AOVs to create" + ) + additional_options: list[AdditionalOptionsModel] = Field( + default_factory=list, + title="Additional Arnold Options", + description=( + "Add additional options - put attribute and value, like AASamples" + ) + ) + + +class VraySettingsModel(BaseSettingsModel): + image_prefix: str = Field(title="Image prefix template") + # engine was str because of JSON limitation (key must be string) + engine: str = Field( + enum_resolver=lambda: [ + {"label": "V-Ray", "value": "1"}, + {"label": "V-Ray GPU", "value": "2"} + ], + title="Production Engine" + ) + image_format: str = Field( + enum_resolver=vray_image_output_enum, + title="Output Image Format" + ) + aov_list: list[str] = Field( + default_factory=list, + enum_resolver=vray_aov_list_enum, + title="AOVs to create" + ) + additional_options: list[AdditionalOptionsModel] = Field( + default_factory=list, + title="Additional Vray Options", + description=( + "Add additional options - put attribute and value," + " like aaFilterSize" + ) + ) + + +class RedshiftSettingsModel(BaseSettingsModel): + image_prefix: str = Field(title="Image prefix template") + # both engines are using the same enumerator, + # both were originally str because of JSON limitation. 
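+    # Illustrative note (assumption based on the enum above): the string
+    # values map to redshift_engine_enum(), e.g. "0" -> "None",
+    # "2" -> "Irradiance Cache", "3" -> "Brute Force".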
+ primary_gi_engine: str = Field( + enum_resolver=redshift_engine_enum, + title="Primary GI Engine" + ) + secondary_gi_engine: str = Field( + enum_resolver=redshift_engine_enum, + title="Secondary GI Engine" + ) + image_format: str = Field( + enum_resolver=redshift_image_output_enum, + title="Output Image Format" + ) + multilayer_exr: bool = Field(title="Multilayer (exr)") + force_combine: bool = Field(title="Force combine beauty and AOVs") + aov_list: list[str] = Field( + default_factory=list, + enum_resolver=redshift_aov_list_enum, + title="AOVs to create" + ) + additional_options: list[AdditionalOptionsModel] = Field( + default_factory=list, + title="Additional Vray Options", + description=( + "Add additional options - put attribute and value," + " like reflectionMaxTraceDepth" + ) + ) + + +def renderman_display_filters(): + return [ + "PxrBackgroundDisplayFilter", + "PxrCopyAOVDisplayFilter", + "PxrEdgeDetect", + "PxrFilmicTonemapperDisplayFilter", + "PxrGradeDisplayFilter", + "PxrHalfBufferErrorFilter", + "PxrImageDisplayFilter", + "PxrLightSaturation", + "PxrShadowDisplayFilter", + "PxrStylizedHatching", + "PxrStylizedLines", + "PxrStylizedToon", + "PxrWhitePointDisplayFilter" + ] + + +def renderman_sample_filters_enum(): + return [ + "PxrBackgroundSampleFilter", + "PxrCopyAOVSampleFilter", + "PxrCryptomatte", + "PxrFilmicTonemapperSampleFilter", + "PxrGradeSampleFilter", + "PxrShadowFilter", + "PxrWatermarkFilter", + "PxrWhitePointSampleFilter" + ] + + +class RendermanSettingsModel(BaseSettingsModel): + image_prefix: str = Field( + "", title="Image prefix template") + image_dir: str = Field( + "", title="Image Output Directory") + display_filters: list[str] = Field( + default_factory=list, + title="Display Filters", + enum_resolver=renderman_display_filters + ) + imageDisplay_dir: str = Field( + "", title="Image Display Filter Directory") + sample_filters: list[str] = Field( + default_factory=list, + title="Sample Filters", + enum_resolver=renderman_sample_filters_enum + ) + cryptomatte_dir: str = Field( + "", title="Cryptomatte Output Directory") + watermark_dir: str = Field( + "", title="Watermark Filter Directory") + additional_options: list[AdditionalOptionsModel] = Field( + default_factory=list, + title="Additional Renderer Options" + ) + + +class RenderSettingsModel(BaseSettingsModel): + apply_render_settings: bool = Field( + title="Apply Render Settings on creation" + ) + default_render_image_folder: str = Field( + title="Default render image folder" + ) + enable_all_lights: bool = Field( + title="Include all lights in Render Setup Layers by default" + ) + aov_separator: str = Field( + "underscore", + title="AOV Separator character", + enum_resolver=aov_separators_enum + ) + reset_current_frame: bool = Field( + title="Reset Current Frame") + remove_aovs: bool = Field( + title="Remove existing AOVs") + arnold_renderer: ArnoldSettingsModel = Field( + default_factory=ArnoldSettingsModel, + title="Arnold Renderer") + vray_renderer: VraySettingsModel = Field( + default_factory=VraySettingsModel, + title="Vray Renderer") + redshift_renderer: RedshiftSettingsModel = Field( + default_factory=RedshiftSettingsModel, + title="Redshift Renderer") + renderman_renderer: RendermanSettingsModel = Field( + default_factory=RendermanSettingsModel, + title="Renderman Renderer") + + +DEFAULT_RENDER_SETTINGS = { + "apply_render_settings": True, + "default_render_image_folder": "renders/maya", + "enable_all_lights": True, + "aov_separator": "underscore", + "reset_current_frame": False, + 
"remove_aovs": False, + "arnold_renderer": { + "image_prefix": "//_", + "image_format": "exr", + "multilayer_exr": True, + "tiled": True, + "aov_list": [], + "additional_options": [] + }, + "vray_renderer": { + "image_prefix": "//", + "engine": "1", + "image_format": "exr", + "aov_list": [], + "additional_options": [] + }, + "redshift_renderer": { + "image_prefix": "//", + "primary_gi_engine": "0", + "secondary_gi_engine": "0", + "image_format": "exr", + "multilayer_exr": True, + "force_combine": True, + "aov_list": [], + "additional_options": [] + }, + "renderman_renderer": { + "image_prefix": "{aov_separator}..", + "image_dir": "/", + "display_filters": [], + "imageDisplay_dir": "/{aov_separator}imageDisplayFilter..", + "sample_filters": [], + "cryptomatte_dir": "/{aov_separator}cryptomatte..", + "watermark_dir": "/{aov_separator}watermarkFilter..", + "additional_options": [] + } +} diff --git a/server_addon/maya/server/settings/scriptsmenu.py b/server_addon/maya/server/settings/scriptsmenu.py new file mode 100644 index 0000000000..82c1c2e53c --- /dev/null +++ b/server_addon/maya/server/settings/scriptsmenu.py @@ -0,0 +1,43 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class ScriptsmenuSubmodel(BaseSettingsModel): + """Item Definition""" + _isGroup = True + type: str = Field(title="Type") + command: str = Field(title="Command") + sourcetype: str = Field(title="Source Type") + title: str = Field(title="Title") + tooltip: str = Field(title="Tooltip") + tags: list[str] = Field(default_factory=list, title="A list of tags") + + +class ScriptsmenuModel(BaseSettingsModel): + _isGroup = True + + name: str = Field(title="Menu Name") + definition: list[ScriptsmenuSubmodel] = Field( + default_factory=list, + title="Menu Definition", + description="Scriptmenu Items Definition" + ) + + +DEFAULT_SCRIPTSMENU_SETTINGS = { + "name": "OpenPype Tools", + "definition": [ + { + "type": "action", + "command": "import openpype.hosts.maya.api.commands as op_cmds; op_cmds.edit_shader_definitions()", + "sourcetype": "python", + "title": "Edit shader name definitions", + "tooltip": "Edit shader name definitions used in validation and renaming.", + "tags": [ + "pipeline", + "shader" + ] + } + ] +} diff --git a/server_addon/maya/server/settings/templated_workfile_settings.py b/server_addon/maya/server/settings/templated_workfile_settings.py new file mode 100644 index 0000000000..ef81b31a07 --- /dev/null +++ b/server_addon/maya/server/settings/templated_workfile_settings.py @@ -0,0 +1,25 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel, task_types_enum + + +class WorkfileBuildProfilesModel(BaseSettingsModel): + _layout = "expanded" + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field(default_factory=list, title="Task names") + path: str = Field("", title="Path to template") + + +class TemplatedProfilesModel(BaseSettingsModel): + profiles: list[WorkfileBuildProfilesModel] = Field( + default_factory=list, + title="Profiles" + ) + + +DEFAULT_TEMPLATED_WORKFILE_SETTINGS = { + "profiles": [] +} diff --git a/server_addon/maya/server/settings/workfile_build_settings.py b/server_addon/maya/server/settings/workfile_build_settings.py new file mode 100644 index 0000000000..dc56d1a320 --- /dev/null +++ b/server_addon/maya/server/settings/workfile_build_settings.py @@ -0,0 +1,131 @@ +from pydantic import Field +from ayon_server.settings import 
BaseSettingsModel, task_types_enum + + +class ContextItemModel(BaseSettingsModel): + _layout = "expanded" + product_name_filters: list[str] = Field( + default_factory=list, title="Product name Filters") + product_types: list[str] = Field( + default_factory=list, title="Product types") + repre_names: list[str] = Field( + default_factory=list, title="Repre Names") + loaders: list[str] = Field( + default_factory=list, title="Loaders") + + +class WorkfileSettingModel(BaseSettingsModel): + _layout = "expanded" + task_types: list[str] = Field( + default_factory=list, + enum_resolver=task_types_enum, + title="Task types") + tasks: list[str] = Field( + default_factory=list, + title="Task names") + current_context: list[ContextItemModel] = Field( + default_factory=list, + title="Current Context") + linked_assets: list[ContextItemModel] = Field( + default_factory=list, + title="Linked Assets") + + +class ProfilesModel(BaseSettingsModel): + profiles: list[WorkfileSettingModel] = Field( + default_factory=list, + title="Profiles" + ) + + +DEFAULT_WORKFILE_SETTING = { + "profiles": [ + { + "task_types": [], + "tasks": [ + "Lighting" + ], + "current_context": [ + { + "product_name_filters": [ + ".+[Mm]ain" + ], + "product_types": [ + "model" + ], + "repre_names": [ + "abc", + "ma" + ], + "loaders": [ + "ReferenceLoader" + ] + }, + { + "product_name_filters": [], + "product_types": [ + "animation", + "pointcache", + "proxyAbc" + ], + "repre_names": [ + "abc" + ], + "loaders": [ + "ReferenceLoader" + ] + }, + { + "product_name_filters": [], + "product_types": [ + "rendersetup" + ], + "repre_names": [ + "json" + ], + "loaders": [ + "RenderSetupLoader" + ] + }, + { + "product_name_filters": [], + "product_types": [ + "camera" + ], + "repre_names": [ + "abc" + ], + "loaders": [ + "ReferenceLoader" + ] + } + ], + "linked_assets": [ + { + "product_name_filters": [], + "product_types": [ + "sedress" + ], + "repre_names": [ + "ma" + ], + "loaders": [ + "ReferenceLoader" + ] + }, + { + "product_name_filters": [], + "product_types": [ + "ArnoldStandin" + ], + "repre_names": [ + "ass" + ], + "loaders": [ + "assLoader" + ] + } + ] + } + ] +} diff --git a/server_addon/maya/server/version.py b/server_addon/maya/server/version.py new file mode 100644 index 0000000000..a242f0e757 --- /dev/null +++ b/server_addon/maya/server/version.py @@ -0,0 +1,3 @@ +# -*- coding: utf-8 -*- +"""Package declaring addon version.""" +__version__ = "0.1.1" diff --git a/server_addon/muster/server/__init__.py b/server_addon/muster/server/__init__.py new file mode 100644 index 0000000000..2cb8943554 --- /dev/null +++ b/server_addon/muster/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import MusterSettings, DEFAULT_VALUES + + +class MusterAddon(BaseServerAddon): + name = "muster" + version = __version__ + title = "Muster" + settings_model: Type[MusterSettings] = MusterSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/muster/server/settings.py b/server_addon/muster/server/settings.py new file mode 100644 index 0000000000..e37c762870 --- /dev/null +++ b/server_addon/muster/server/settings.py @@ -0,0 +1,41 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class TemplatesMapping(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: int = 
Field(title="mapping") + + +class MusterSettings(BaseSettingsModel): + enabled: bool = True + MUSTER_REST_URL: str = Field( + "", + title="Muster Rest URL", + scope=["studio"], + ) + + templates_mapping: list[TemplatesMapping] = Field( + default_factory=list, + title="Templates mapping", + ) + + +DEFAULT_VALUES = { + "enabled": False, + "MUSTER_REST_URL": "http://127.0.0.1:9890", + "templates_mapping": [ + {"name": "file_layers", "value": 7}, + {"name": "mentalray", "value": 2}, + {"name": "mentalray_sf", "value": 6}, + {"name": "redshift", "value": 55}, + {"name": "renderman", "value": 29}, + {"name": "software", "value": 1}, + {"name": "software_sf", "value": 5}, + {"name": "turtle", "value": 10}, + {"name": "vector", "value": 4}, + {"name": "vray", "value": 37}, + {"name": "ffmpeg", "value": 48} + ] +} diff --git a/server_addon/muster/server/version.py b/server_addon/muster/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/muster/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/nuke/server/__init__.py b/server_addon/nuke/server/__init__.py new file mode 100644 index 0000000000..032ceea5fb --- /dev/null +++ b/server_addon/nuke/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import NukeSettings, DEFAULT_VALUES + + +class NukeAddon(BaseServerAddon): + name = "nuke" + title = "Nuke" + version = __version__ + settings_model: Type[NukeSettings] = NukeSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/nuke/server/settings/__init__.py b/server_addon/nuke/server/settings/__init__.py new file mode 100644 index 0000000000..1e58865395 --- /dev/null +++ b/server_addon/nuke/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + NukeSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "NukeSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/nuke/server/settings/common.py b/server_addon/nuke/server/settings/common.py new file mode 100644 index 0000000000..f1bb46ff90 --- /dev/null +++ b/server_addon/nuke/server/settings/common.py @@ -0,0 +1,128 @@ +import json +from pydantic import Field +from ayon_server.exceptions import BadRequestException +from ayon_server.settings import BaseSettingsModel +from ayon_server.types import ( + ColorRGBA_float, + ColorRGB_uint8 +) + + +def validate_json_dict(value): + if not value.strip(): + return "{}" + try: + converted_value = json.loads(value) + success = isinstance(converted_value, dict) + except json.JSONDecodeError: + success = False + + if not success: + raise BadRequestException( + "Environment's can't be parsed as json object" + ) + return value + + +class Vector2d(BaseSettingsModel): + _layout = "compact" + + x: float = Field(1.0, title="X") + y: float = Field(1.0, title="Y") + + +class Vector3d(BaseSettingsModel): + _layout = "compact" + + x: float = Field(1.0, title="X") + y: float = Field(1.0, title="Y") + z: float = Field(1.0, title="Z") + + +def formatable_knob_type_enum(): + return [ + {"value": "text", "label": "Text"}, + {"value": "number", "label": "Number"}, + {"value": "decimal_number", "label": "Decimal number"}, + {"value": "2d_vector", "label": "2D vector"}, + # "3D vector" + ] + + +class Formatable(BaseSettingsModel): + _layout = "compact" + + template: str = Field( + "", + placeholder="""{{key}} or 
{{key}};{{key}}""", + title="Template" + ) + to_type: str = Field( + "Text", + title="To Knob type", + enum_resolver=formatable_knob_type_enum, + ) + + +knob_types_enum = [ + {"value": "text", "label": "Text"}, + {"value": "formatable", "label": "Formate from template"}, + {"value": "color_gui", "label": "Color GUI"}, + {"value": "boolean", "label": "Boolean"}, + {"value": "number", "label": "Number"}, + {"value": "decimal_number", "label": "Decimal number"}, + {"value": "vector_2d", "label": "2D vector"}, + {"value": "vector_3d", "label": "3D vector"}, + {"value": "color", "label": "Color"}, + {"value": "expression", "label": "Expression"} +] + + +class KnobModel(BaseSettingsModel): + """# TODO: new data structure + - v3 was having type, name, value but + ayon is not able to make it the same. Current model is + defining `type` as `text` and instead of `value` the key is `text`. + So if `type` is `boolean` then key is `boolean` (value). + """ + _layout = "expanded" + + type: str = Field( + title="Type", + description="Switch between different knob types", + enum_resolver=lambda: knob_types_enum, + conditionalEnum=True + ) + + name: str = Field( + title="Name", + placeholder="Name" + ) + text: str = Field("", title="Value") + color_gui: ColorRGB_uint8 = Field( + (0, 0, 255), + title="RGB Uint8", + ) + boolean: bool = Field(False, title="Value") + number: int = Field(0, title="Value") + decimal_number: float = Field(0.0, title="Value") + vector_2d: Vector2d = Field( + default_factory=Vector2d, + title="Value" + ) + vector_3d: Vector3d = Field( + default_factory=Vector3d, + title="Value" + ) + color: ColorRGBA_float = Field( + (0.0, 0.0, 1.0, 1.0), + title="RGBA Float" + ) + formatable: Formatable = Field( + default_factory=Formatable, + title="Formatable" + ) + expression: str = Field( + "", + title="Expression" + ) diff --git a/server_addon/nuke/server/settings/create_plugins.py b/server_addon/nuke/server/settings/create_plugins.py new file mode 100644 index 0000000000..0bbae4ee77 --- /dev/null +++ b/server_addon/nuke/server/settings/create_plugins.py @@ -0,0 +1,223 @@ +from pydantic import validator, Field +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names +) +from .common import KnobModel + + +def instance_attributes_enum(): + """Return create write instance attributes.""" + return [ + {"value": "reviewable", "label": "Reviewable"}, + {"value": "farm_rendering", "label": "Farm rendering"}, + {"value": "use_range_limit", "label": "Use range limit"} + ] + + +class PrenodeModel(BaseSettingsModel): + # TODO: missing in host api + # - good for `dependency` + name: str = Field( + title="Node name" + ) + + # TODO: `nodeclass` should be renamed to `nuke_node_class` + nodeclass: str = Field( + "", + title="Node class" + ) + dependent: str = Field( + "", + title="Incoming dependency" + ) + + """# TODO: Changes in host api: + - Need complete rework of knob types in nuke integration. + - We could not support v3 style of settings. 
+ """ + knobs: list[KnobModel] = Field( + title="Knobs", + ) + + @validator("knobs") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class CreateWriteRenderModel(BaseSettingsModel): + temp_rendering_path_template: str = Field( + title="Temporary rendering path template" + ) + default_variants: list[str] = Field( + title="Default variants", + default_factory=list + ) + instance_attributes: list[str] = Field( + default_factory=list, + enum_resolver=instance_attributes_enum, + title="Instance attributes" + ) + + """# TODO: Changes in host api: + - prenodes key was originally dict and now is list + (we could not support v3 style of settings) + """ + prenodes: list[PrenodeModel] = Field( + title="Preceding nodes", + ) + + @validator("prenodes") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class CreateWritePrerenderModel(BaseSettingsModel): + temp_rendering_path_template: str = Field( + title="Temporary rendering path template" + ) + default_variants: list[str] = Field( + title="Default variants", + default_factory=list + ) + instance_attributes: list[str] = Field( + default_factory=list, + enum_resolver=instance_attributes_enum, + title="Instance attributes" + ) + + """# TODO: Changes in host api: + - prenodes key was originally dict and now is list + (we could not support v3 style of settings) + """ + prenodes: list[PrenodeModel] = Field( + title="Preceding nodes", + ) + + @validator("prenodes") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class CreateWriteImageModel(BaseSettingsModel): + temp_rendering_path_template: str = Field( + title="Temporary rendering path template" + ) + default_variants: list[str] = Field( + title="Default variants", + default_factory=list + ) + instance_attributes: list[str] = Field( + default_factory=list, + enum_resolver=instance_attributes_enum, + title="Instance attributes" + ) + + """# TODO: Changes in host api: + - prenodes key was originally dict and now is list + (we could not support v3 style of settings) + """ + prenodes: list[PrenodeModel] = Field( + title="Preceding nodes", + ) + + @validator("prenodes") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class CreatorPluginsSettings(BaseSettingsModel): + CreateWriteRender: CreateWriteRenderModel = Field( + default_factory=CreateWriteRenderModel, + title="Create Write Render" + ) + CreateWritePrerender: CreateWritePrerenderModel = Field( + default_factory=CreateWritePrerenderModel, + title="Create Write Prerender" + ) + CreateWriteImage: CreateWriteImageModel = Field( + default_factory=CreateWriteImageModel, + title="Create Write Image" + ) + + +DEFAULT_CREATE_SETTINGS = { + "CreateWriteRender": { + "temp_rendering_path_template": "{work}/renders/nuke/{product[name]}/{product[name]}.{frame}.{ext}", + "default_variants": [ + "Main", + "Mask" + ], + "instance_attributes": [ + "reviewable", + "farm_rendering" + ], + "prenodes": [ + { + "name": "Reformat01", + "nodeclass": "Reformat", + "dependent": "", + "knobs": [ + { + "type": "text", + "name": "resize", + "text": "none" + }, + { + "type": "boolean", + "name": "black_outside", + "boolean": True + } + ] + } + ] + }, + "CreateWritePrerender": { + 
"temp_rendering_path_template": "{work}/renders/nuke/{product[name]}/{product[name]}.{frame}.{ext}", + "default_variants": [ + "Key01", + "Bg01", + "Fg01", + "Branch01", + "Part01" + ], + "instance_attributes": [ + "farm_rendering", + "use_range_limit" + ], + "prenodes": [] + }, + "CreateWriteImage": { + "temp_rendering_path_template": "{work}/renders/nuke/{product[name]}/{product[name]}.{ext}", + "default_variants": [ + "StillFrame", + "MPFrame", + "LayoutFrame" + ], + "instance_attributes": [ + "use_range_limit" + ], + "prenodes": [ + { + "name": "FrameHold01", + "nodeclass": "FrameHold", + "dependent": "", + "knobs": [ + { + "type": "expression", + "name": "first_frame", + "expression": "parent.first" + } + ] + } + ] + } +} diff --git a/server_addon/nuke/server/settings/dirmap.py b/server_addon/nuke/server/settings/dirmap.py new file mode 100644 index 0000000000..2da6d7bf60 --- /dev/null +++ b/server_addon/nuke/server/settings/dirmap.py @@ -0,0 +1,47 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class DirmapPathsSubmodel(BaseSettingsModel): + _layout = "compact" + source_path: list[str] = Field( + default_factory=list, + title="Source Paths" + ) + destination_path: list[str] = Field( + default_factory=list, + title="Destination Paths" + ) + + +class DirmapSettings(BaseSettingsModel): + """Nuke color management project settings.""" + _isGroup: bool = True + + enabled: bool = Field(title="enabled") + paths: DirmapPathsSubmodel = Field( + default_factory=DirmapPathsSubmodel, + title="Dirmap Paths" + ) + + +"""# TODO: +nuke is having originally implemented +following data inputs: + +"nuke-dirmap": { + "enabled": false, + "paths": { + "source-path": [], + "destination-path": [] + } +} +""" + +DEFAULT_DIRMAP_SETTINGS = { + "enabled": False, + "paths": { + "source_path": [], + "destination_path": [] + } +} diff --git a/server_addon/nuke/server/settings/filters.py b/server_addon/nuke/server/settings/filters.py new file mode 100644 index 0000000000..7e2702b3b7 --- /dev/null +++ b/server_addon/nuke/server/settings/filters.py @@ -0,0 +1,19 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel, ensure_unique_names + + +class PublishGUIFilterItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: bool = Field(True, title="Active") + + +class PublishGUIFiltersModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: list[PublishGUIFilterItemModel] = Field(default_factory=list) + + @validator("value") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value diff --git a/server_addon/nuke/server/settings/general.py b/server_addon/nuke/server/settings/general.py new file mode 100644 index 0000000000..bcbb183952 --- /dev/null +++ b/server_addon/nuke/server/settings/general.py @@ -0,0 +1,42 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class MenuShortcut(BaseSettingsModel): + """Nuke general project settings.""" + + create: str = Field( + title="Create..." + ) + publish: str = Field( + title="Publish..." + ) + load: str = Field( + title="Load..." + ) + manage: str = Field( + title="Manage..." + ) + build_workfile: str = Field( + title="Build Workfile..." 
+ ) + + +class GeneralSettings(BaseSettingsModel): + """Nuke general project settings.""" + + menu: MenuShortcut = Field( + default_factory=MenuShortcut, + title="Menu Shortcuts", + ) + + +DEFAULT_GENERAL_SETTINGS = { + "menu": { + "create": "ctrl+alt+c", + "publish": "ctrl+alt+p", + "load": "ctrl+alt+l", + "manage": "ctrl+alt+m", + "build_workfile": "ctrl+alt+b" + } +} diff --git a/server_addon/nuke/server/settings/gizmo.py b/server_addon/nuke/server/settings/gizmo.py new file mode 100644 index 0000000000..4cdd614da8 --- /dev/null +++ b/server_addon/nuke/server/settings/gizmo.py @@ -0,0 +1,79 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathModel, + MultiplatformPathListModel, +) + + +class SubGizmoItem(BaseSettingsModel): + title: str = Field( + title="Label" + ) + sourcetype: str = Field( + title="Type of usage" + ) + command: str = Field( + title="Python command" + ) + icon: str = Field( + title="Icon Path" + ) + shortcut: str = Field( + title="Hotkey" + ) + + +class GizmoDefinitionItem(BaseSettingsModel): + gizmo_toolbar_path: str = Field( + title="Gizmo Menu" + ) + sub_gizmo_list: list[SubGizmoItem] = Field( + default_factory=list, title="Sub Gizmo List") + + +class GizmoItem(BaseSettingsModel): + """Nuke gizmo item """ + + toolbar_menu_name: str = Field( + title="Toolbar Menu Name" + ) + gizmo_source_dir: MultiplatformPathListModel = Field( + default_factory=MultiplatformPathListModel, + title="Gizmo Directory Path" + ) + toolbar_icon_path: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel, + title="Toolbar Icon Path" + ) + gizmo_definition: list[GizmoDefinitionItem] = Field( + default_factory=list, title="Gizmo Definition") + + +DEFAULT_GIZMO_ITEM = { + "toolbar_menu_name": "OpenPype Gizmo", + "gizmo_source_dir": { + "windows": [], + "darwin": [], + "linux": [] + }, + "toolbar_icon_path": { + "windows": "", + "darwin": "", + "linux": "" + }, + "gizmo_definition": [ + { + "gizmo_toolbar_path": "/path/to/menu", + "sub_gizmo_list": [ + { + "sourcetype": "python", + "title": "Gizmo Note", + "command": "nuke.nodes.StickyNote(label='You can create your own toolbar menu in the Nuke GizmoMenu of OpenPype')", + "icon": "", + "shortcut": "" + } + ] + } + ] +} diff --git a/server_addon/nuke/server/settings/imageio.py b/server_addon/nuke/server/settings/imageio.py new file mode 100644 index 0000000000..b43017ef8b --- /dev/null +++ b/server_addon/nuke/server/settings/imageio.py @@ -0,0 +1,410 @@ +from typing import Literal +from pydantic import validator, Field +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, +) + +from .common import KnobModel + + +class NodesModel(BaseSettingsModel): + """# TODO: This needs to be somehow labeled in settings panel + or at least it could show gist of configuration + """ + _layout = "expanded" + plugins: list[str] = Field( + title="Used in plugins" + ) + # TODO: rename `nukeNodeClass` to `nuke_node_class` + nukeNodeClass: str = Field( + title="Nuke Node Class", + ) + + """ # TODO: Need complete rework of knob types + in nuke integration. We could not support v3 style of settings. 
+ """ + knobs: list[KnobModel] = Field( + title="Knobs", + ) + + @validator("knobs") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class NodesSetting(BaseSettingsModel): + # TODO: rename `requiredNodes` to `required_nodes` + requiredNodes: list[NodesModel] = Field( + title="Plugin required", + default_factory=list + ) + # TODO: rename `overrideNodes` to `override_nodes` + overrideNodes: list[NodesModel] = Field( + title="Plugin's node overrides", + default_factory=list + ) + + +def ocio_configs_switcher_enum(): + return [ + {"value": "nuke-default", "label": "nuke-default"}, + {"value": "spi-vfx", "label": "spi-vfx"}, + {"value": "spi-anim", "label": "spi-anim"}, + {"value": "aces_0.1.1", "label": "aces_0.1.1"}, + {"value": "aces_0.7.1", "label": "aces_0.7.1"}, + {"value": "aces_1.0.1", "label": "aces_1.0.1"}, + {"value": "aces_1.0.3", "label": "aces_1.0.3"}, + {"value": "aces_1.1", "label": "aces_1.1"}, + {"value": "aces_1.2", "label": "aces_1.2"}, + {"value": "aces_1.3", "label": "aces_1.3"}, + {"value": "custom", "label": "custom"} + ] + + +class WorkfileColorspaceSettings(BaseSettingsModel): + """Nuke workfile colorspace preset. """ + """# TODO: enhance settings with host api: + we need to add mapping to resolve properly keys. + Nuke is excpecting camel case key names, + but for better code consistency we need to + be using snake_case: + + color_management = colorManagement + ocio_config = OCIO_config + working_space_name = workingSpaceLUT + monitor_name = monitorLut + monitor_out_name = monitorOutLut + int_8_name = int8Lut + int_16_name = int16Lut + log_name = logLut + float_name = floatLut + """ + + colorManagement: Literal["Nuke", "OCIO"] = Field( + title="Color Management" + ) + + OCIO_config: str = Field( + title="OpenColorIO Config", + description="Switch between OCIO configs", + enum_resolver=ocio_configs_switcher_enum, + conditionalEnum=True + ) + + workingSpaceLUT: str = Field( + title="Working Space" + ) + monitorLut: str = Field( + title="Monitor" + ) + int8Lut: str = Field( + title="8-bit files" + ) + int16Lut: str = Field( + title="16-bit files" + ) + logLut: str = Field( + title="Log files" + ) + floatLut: str = Field( + title="Float files" + ) + + +class ReadColorspaceRulesItems(BaseSettingsModel): + _layout = "expanded" + + regex: str = Field("", title="Regex expression") + colorspace: str = Field("", title="Colorspace") + + +class RegexInputsModel(BaseSettingsModel): + inputs: list[ReadColorspaceRulesItems] = Field( + default_factory=list, + title="Inputs" + ) + + +class ViewProcessModel(BaseSettingsModel): + viewerProcess: str = Field( + title="Viewer Process Name" + ) + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIOSettings(BaseSettingsModel): 
+ """Nuke color management project settings. """ + _isGroup: bool = True + + """# TODO: enhance settings with host api: + to restruture settings for simplification. + + now: nuke/imageio/viewer/viewerProcess + future: nuke/imageio/viewer + """ + activate_host_color_management: bool = Field( + True, title="Enable Color Management") + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) + viewer: ViewProcessModel = Field( + default_factory=ViewProcessModel, + title="Viewer", + description="""Viewer profile is used during + Creation of new viewer node at knob viewerProcess""" + ) + + """# TODO: enhance settings with host api: + to restruture settings for simplification. + + now: nuke/imageio/baking/viewerProcess + future: nuke/imageio/baking + """ + baking: ViewProcessModel = Field( + default_factory=ViewProcessModel, + title="Baking", + description="""Baking profile is used during + publishing baked colorspace data at knob viewerProcess""" + ) + + workfile: WorkfileColorspaceSettings = Field( + default_factory=WorkfileColorspaceSettings, + title="Workfile" + ) + + nodes: NodesSetting = Field( + default_factory=NodesSetting, + title="Nodes" + ) + """# TODO: enhance settings with host api: + - old settings are using `regexInputs` key but we + need to rename to `regex_inputs` + - no need for `inputs` middle part. It can stay + directly on `regex_inputs` + """ + regexInputs: RegexInputsModel = Field( + default_factory=RegexInputsModel, + title="Assign colorspace to read nodes via rules" + ) + + +DEFAULT_IMAGEIO_SETTINGS = { + "viewer": { + "viewerProcess": "sRGB" + }, + "baking": { + "viewerProcess": "rec709" + }, + "workfile": { + "colorManagement": "Nuke", + "OCIO_config": "nuke-default", + "workingSpaceLUT": "linear", + "monitorLut": "sRGB", + "int8Lut": "sRGB", + "int16Lut": "sRGB", + "logLut": "Cineon", + "floatLut": "linear" + }, + "nodes": { + "requiredNodes": [ + { + "plugins": [ + "CreateWriteRender" + ], + "nukeNodeClass": "Write", + "knobs": [ + { + "type": "text", + "name": "file_type", + "text": "exr" + }, + { + "type": "text", + "name": "datatype", + "text": "16 bit half" + }, + { + "type": "text", + "name": "compression", + "text": "Zip (1 scanline)" + }, + { + "type": "boolean", + "name": "autocrop", + "boolean": True + }, + { + "type": "color_gui", + "name": "tile_color", + "color_gui": [ + 186, + 35, + 35 + ] + }, + { + "type": "text", + "name": "channels", + "text": "rgb" + }, + { + "type": "text", + "name": "colorspace", + "text": "linear" + }, + { + "type": "boolean", + "name": "create_directories", + "boolean": True + } + ] + }, + { + "plugins": [ + "CreateWritePrerender" + ], + "nukeNodeClass": "Write", + "knobs": [ + { + "type": "text", + "name": "file_type", + "text": "exr" + }, + { + "type": "text", + "name": "datatype", + "text": "16 bit half" + }, + { + "type": "text", + "name": "compression", + "text": "Zip (1 scanline)" + }, + { + "type": "boolean", + "name": "autocrop", + "boolean": True + }, + { + "type": "color_gui", + "name": "tile_color", + "color_gui": [ + 171, + 171, + 10 + ] + }, + { + "type": "text", + "name": "channels", + "text": "rgb" + }, + { + "type": "text", + "name": "colorspace", + "text": "linear" + }, + { + "type": "boolean", + "name": "create_directories", + "boolean": True + } + ] + }, + { + "plugins": [ + "CreateWriteImage" + ], + "nukeNodeClass": "Write", + "knobs": [ + { + "type": 
"text", + "name": "file_type", + "text": "tiff" + }, + { + "type": "text", + "name": "datatype", + "text": "16 bit" + }, + { + "type": "text", + "name": "compression", + "text": "Deflate" + }, + { + "type": "color_gui", + "name": "tile_color", + "color_gui": [ + 56, + 162, + 7 + ] + }, + { + "type": "text", + "name": "channels", + "text": "rgb" + }, + { + "type": "text", + "name": "colorspace", + "text": "sRGB" + }, + { + "type": "boolean", + "name": "create_directories", + "boolean": True + } + ] + } + ], + "overrideNodes": [] + }, + "regexInputs": { + "inputs": [ + { + "regex": "(beauty).*(?=.exr)", + "colorspace": "linear" + } + ] + } +} diff --git a/server_addon/nuke/server/settings/loader_plugins.py b/server_addon/nuke/server/settings/loader_plugins.py new file mode 100644 index 0000000000..6db381bffb --- /dev/null +++ b/server_addon/nuke/server/settings/loader_plugins.py @@ -0,0 +1,80 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class LoadImageModel(BaseSettingsModel): + enabled: bool = Field( + title="Enabled" + ) + """# TODO: v3 api used `_representation` + New api is hiding it so it had to be renamed + to `representations_include` + """ + representations_include: list[str] = Field( + default_factory=list, + title="Include representations" + ) + + node_name_template: str = Field( + title="Read node name template" + ) + + +class LoadClipOptionsModel(BaseSettingsModel): + start_at_workfile: bool = Field( + title="Start at workfile's start frame" + ) + add_retime: bool = Field( + title="Add retime" + ) + + +class LoadClipModel(BaseSettingsModel): + enabled: bool = Field( + title="Enabled" + ) + """# TODO: v3 api used `_representation` + New api is hiding it so it had to be renamed + to `representations_include` + """ + representations_include: list[str] = Field( + default_factory=list, + title="Include representations" + ) + + node_name_template: str = Field( + title="Read node name template" + ) + options_defaults: LoadClipOptionsModel = Field( + default_factory=LoadClipOptionsModel, + title="Loader option defaults" + ) + + +class LoaderPuginsModel(BaseSettingsModel): + LoadImage: LoadImageModel = Field( + default_factory=LoadImageModel, + title="Load Image" + ) + LoadClip: LoadClipModel = Field( + default_factory=LoadClipModel, + title="Load Clip" + ) + + +DEFAULT_LOADER_PLUGINS_SETTINGS = { + "LoadImage": { + "enabled": True, + "representations_include": [], + "node_name_template": "{class_name}_{ext}" + }, + "LoadClip": { + "enabled": True, + "representations_include": [], + "node_name_template": "{class_name}_{ext}", + "options_defaults": { + "start_at_workfile": True, + "add_retime": True + } + } +} diff --git a/server_addon/nuke/server/settings/main.py b/server_addon/nuke/server/settings/main.py new file mode 100644 index 0000000000..4687d48ac9 --- /dev/null +++ b/server_addon/nuke/server/settings/main.py @@ -0,0 +1,128 @@ +from pydantic import validator, Field + +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names +) + +from .general import ( + GeneralSettings, + DEFAULT_GENERAL_SETTINGS +) +from .imageio import ( + ImageIOSettings, + DEFAULT_IMAGEIO_SETTINGS +) +from .dirmap import ( + DirmapSettings, + DEFAULT_DIRMAP_SETTINGS +) +from .scriptsmenu import ( + ScriptsmenuSettings, + DEFAULT_SCRIPTSMENU_SETTINGS +) +from .gizmo import ( + GizmoItem, + DEFAULT_GIZMO_ITEM +) +from .create_plugins import ( + CreatorPluginsSettings, + DEFAULT_CREATE_SETTINGS +) +from .publish_plugins import ( + PublishPuginsModel, 
+ DEFAULT_PUBLISH_PLUGIN_SETTINGS +) +from .loader_plugins import ( + LoaderPuginsModel, + DEFAULT_LOADER_PLUGINS_SETTINGS +) +from .workfile_builder import ( + WorkfileBuilderModel, + DEFAULT_WORKFILE_BUILDER_SETTINGS +) +from .templated_workfile_build import ( + TemplatedWorkfileBuildModel +) +from .filters import PublishGUIFilterItemModel + + +class NukeSettings(BaseSettingsModel): + """Nuke addon settings.""" + + general: GeneralSettings = Field( + default_factory=GeneralSettings, + title="General", + ) + + imageio: ImageIOSettings = Field( + default_factory=ImageIOSettings, + title="Color Management (imageio)", + ) + """# TODO: fix host api: + - rename `nuke-dirmap` to `dirmap` was inevitable + """ + dirmap: DirmapSettings = Field( + default_factory=DirmapSettings, + title="Nuke Directory Mapping", + ) + + scriptsmenu: ScriptsmenuSettings = Field( + default_factory=ScriptsmenuSettings, + title="Scripts Menu Definition", + ) + + gizmo: list[GizmoItem] = Field( + default_factory=list, title="Gizmo Menu") + + create: CreatorPluginsSettings = Field( + default_factory=CreatorPluginsSettings, + title="Creator Plugins", + ) + + publish: PublishPuginsModel = Field( + default_factory=PublishPuginsModel, + title="Publish Plugins", + ) + + load: LoaderPuginsModel = Field( + default_factory=LoaderPuginsModel, + title="Loader Plugins", + ) + + workfile_builder: WorkfileBuilderModel = Field( + default_factory=WorkfileBuilderModel, + title="Workfile Builder", + ) + + templated_workfile_build: TemplatedWorkfileBuildModel = Field( + title="Templated Workfile Build", + default_factory=TemplatedWorkfileBuildModel + ) + + filters: list[PublishGUIFilterItemModel] = Field( + default_factory=list + ) + + @validator("filters") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +DEFAULT_VALUES = { + "general": DEFAULT_GENERAL_SETTINGS, + "imageio": DEFAULT_IMAGEIO_SETTINGS, + "dirmap": DEFAULT_DIRMAP_SETTINGS, + "scriptsmenu": DEFAULT_SCRIPTSMENU_SETTINGS, + "gizmo": [DEFAULT_GIZMO_ITEM], + "create": DEFAULT_CREATE_SETTINGS, + "publish": DEFAULT_PUBLISH_PLUGIN_SETTINGS, + "load": DEFAULT_LOADER_PLUGINS_SETTINGS, + "workfile_builder": DEFAULT_WORKFILE_BUILDER_SETTINGS, + "templated_workfile_build": { + "profiles": [] + }, + "filters": [] +} diff --git a/server_addon/nuke/server/settings/publish_plugins.py b/server_addon/nuke/server/settings/publish_plugins.py new file mode 100644 index 0000000000..7e898f8c9a --- /dev/null +++ b/server_addon/nuke/server/settings/publish_plugins.py @@ -0,0 +1,504 @@ +from pydantic import validator, Field +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, + task_types_enum +) +from .common import KnobModel, validate_json_dict + + +def nuke_render_publish_types_enum(): + """Return all nuke render families available in creators.""" + return [ + {"value": "render", "label": "Render"}, + {"value": "prerender", "label": "Prerender"}, + {"value": "image", "label": "Image"} + ] + + +def nuke_product_types_enum(): + """Return all nuke families available in creators.""" + return [ + {"value": "nukenodes", "label": "Nukenodes"}, + {"value": "model", "label": "Model"}, + {"value": "camera", "label": "Camera"}, + {"value": "gizmo", "label": "Gizmo"}, + {"value": "source", "label": "Source"} + ] + nuke_render_publish_types_enum() + + +class NodeModel(BaseSettingsModel): + # TODO: missing in host api + name: str = Field( + title="Node name" + ) + # TODO: `nodeclass` 
rename to `nuke_node_class` + nodeclass: str = Field( + "", + title="Node class" + ) + dependent: str = Field( + "", + title="Incoming dependency" + ) + """# TODO: Changes in host api: + - Need complete rework of knob types in nuke integration. + - We could not support v3 style of settings. + """ + knobs: list[KnobModel] = Field( + title="Knobs", + ) + + @validator("knobs") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class ThumbnailRepositionNodeModel(BaseSettingsModel): + node_class: str = Field(title="Node class") + knobs: list[KnobModel] = Field(title="Knobs", default_factory=list) + + @validator("knobs") + def ensure_unique_names(cls, value): + """Ensure name fields within the lists have unique names.""" + ensure_unique_names(value) + return value + + +class CollectInstanceDataModel(BaseSettingsModel): + sync_workfile_version_on_product_types: list[str] = Field( + default_factory=list, + enum_resolver=nuke_product_types_enum, + title="Sync workfile versions for familes" + ) + + +class OptionalPluginModel(BaseSettingsModel): + enabled: bool = Field(True) + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class ValidateKnobsModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + knobs: str = Field( + "{}", + title="Knobs", + widget="textarea", + ) + + @validator("knobs") + def validate_json(cls, value): + return validate_json_dict(value) + + +class ExtractThumbnailModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + use_rendered: bool = Field(title="Use rendered images") + bake_viewer_process: bool = Field(title="Bake view process") + bake_viewer_input_process: bool = Field(title="Bake viewer input process") + """# TODO: needs to rewrite from v3 to ayon + - `nodes` in v3 was dict but now `prenodes` is list of dict + - also later `nodes` should be `prenodes` + """ + + nodes: list[NodeModel] = Field( + title="Nodes (deprecated)" + ) + reposition_nodes: list[ThumbnailRepositionNodeModel] = Field( + title="Reposition nodes", + default_factory=list + ) + + +class ExtractReviewDataModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + + +class ExtractReviewDataLutModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + + +class BakingStreamFilterModel(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + product_types: list[str] = Field( + default_factory=list, + enum_resolver=nuke_render_publish_types_enum, + title="Sync workfile versions for familes" + ) + product_names: list[str] = Field( + default_factory=list, title="Product names") + + +class ReformatNodesRepositionNodes(BaseSettingsModel): + node_class: str = Field(title="Node class") + knobs: list[KnobModel] = Field( + default_factory=list, + title="Node knobs") + + +class ReformatNodesConfigModel(BaseSettingsModel): + """Only reposition nodes supported. + + You can add multiple reformat nodes and set their knobs. + Order of reformat nodes is important. First reformat node will + be applied first and last reformat node will be applied last. 
+ """ + enabled: bool = Field(False) + reposition_nodes: list[ReformatNodesRepositionNodes] = Field( + default_factory=list, + title="Reposition knobs" + ) + + +class BakingStreamModel(BaseSettingsModel): + name: str = Field(title="Output name") + filter: BakingStreamFilterModel = Field( + title="Filter", default_factory=BakingStreamFilterModel) + read_raw: bool = Field(title="Read raw switch") + viewer_process_override: str = Field(title="Viewer process override") + bake_viewer_process: bool = Field(title="Bake view process") + bake_viewer_input_process: bool = Field(title="Bake viewer input process") + reformat_nodes_config: ReformatNodesConfigModel = Field( + default_factory=ReformatNodesConfigModel, + title="Reformat Nodes") + extension: str = Field(title="File extension") + add_custom_tags: list[str] = Field( + title="Custom tags", default_factory=list) + + +class ExtractReviewDataMovModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + viewer_lut_raw: bool = Field(title="Viewer lut raw") + outputs: list[BakingStreamModel] = Field( + title="Baking streams" + ) + + +class FSubmissionNoteModel(BaseSettingsModel): + enabled: bool = Field(title="enabled") + template: str = Field(title="Template") + + +class FSubmistingForModel(BaseSettingsModel): + enabled: bool = Field(title="enabled") + template: str = Field(title="Template") + + +class FVFXScopeOfWorkModel(BaseSettingsModel): + enabled: bool = Field(title="enabled") + template: str = Field(title="Template") + + +class ExctractSlateFrameParamModel(BaseSettingsModel): + f_submission_note: FSubmissionNoteModel = Field( + title="f_submission_note", + default_factory=FSubmissionNoteModel + ) + f_submitting_for: FSubmistingForModel = Field( + title="f_submitting_for", + default_factory=FSubmistingForModel + ) + f_vfx_scope_of_work: FVFXScopeOfWorkModel = Field( + title="f_vfx_scope_of_work", + default_factory=FVFXScopeOfWorkModel + ) + + +class ExtractSlateFrameModel(BaseSettingsModel): + viewer_lut_raw: bool = Field(title="Viewer lut raw") + """# TODO: v3 api different model: + - not possible to replicate v3 model: + {"name": [bool, str]} + - not it is: + {"name": {"enabled": bool, "template": str}} + """ + key_value_mapping: ExctractSlateFrameParamModel = Field( + title="Key value mapping", + default_factory=ExctractSlateFrameParamModel + ) + + +class IncrementScriptVersionModel(BaseSettingsModel): + enabled: bool = Field(title="Enabled") + optional: bool = Field(title="Optional") + active: bool = Field(title="Active") + + +class PublishPuginsModel(BaseSettingsModel): + CollectInstanceData: CollectInstanceDataModel = Field( + title="Collect Instance Version", + default_factory=CollectInstanceDataModel, + section="Collectors" + ) + ValidateCorrectAssetName: OptionalPluginModel = Field( + title="Validate Correct Folder Name", + default_factory=OptionalPluginModel, + section="Validators" + ) + ValidateContainers: OptionalPluginModel = Field( + title="Validate Containers", + default_factory=OptionalPluginModel + ) + ValidateKnobs: ValidateKnobsModel = Field( + title="Validate Knobs", + default_factory=ValidateKnobsModel + ) + ValidateOutputResolution: OptionalPluginModel = Field( + title="Validate Output Resolution", + default_factory=OptionalPluginModel + ) + ValidateGizmo: OptionalPluginModel = Field( + title="Validate Gizmo", + default_factory=OptionalPluginModel + ) + ValidateBackdrop: OptionalPluginModel = Field( + title="Validate Backdrop", + default_factory=OptionalPluginModel + ) + ValidateScript: OptionalPluginModel = 
Field( + title="Validate Script", + default_factory=OptionalPluginModel + ) + ExtractThumbnail: ExtractThumbnailModel = Field( + title="Extract Thumbnail", + default_factory=ExtractThumbnailModel, + section="Extractors" + ) + ExtractReviewData: ExtractReviewDataModel = Field( + title="Extract Review Data", + default_factory=ExtractReviewDataModel + ) + ExtractReviewDataLut: ExtractReviewDataLutModel = Field( + title="Extract Review Data Lut", + default_factory=ExtractReviewDataLutModel + ) + ExtractReviewDataMov: ExtractReviewDataMovModel = Field( + title="Extract Review Data Mov", + default_factory=ExtractReviewDataMovModel + ) + ExtractSlateFrame: ExtractSlateFrameModel = Field( + title="Extract Slate Frame", + default_factory=ExtractSlateFrameModel + ) + # TODO: plugin should be renamed - `workfile` not `script` + IncrementScriptVersion: IncrementScriptVersionModel = Field( + title="Increment Workfile Version", + default_factory=IncrementScriptVersionModel, + section="Integrators" + ) + + +DEFAULT_PUBLISH_PLUGIN_SETTINGS = { + "CollectInstanceData": { + "sync_workfile_version_on_product_types": [ + "nukenodes", + "camera", + "gizmo", + "source", + "render", + "write" + ] + }, + "ValidateCorrectAssetName": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateContainers": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateKnobs": { + "enabled": False, + "knobs": "\n".join([ + '{', + ' "render": {', + ' "review": true', + ' }', + '}' + ]) + }, + "ValidateOutputResolution": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateGizmo": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateBackdrop": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateScript": { + "enabled": True, + "optional": True, + "active": True + }, + "ExtractThumbnail": { + "enabled": True, + "use_rendered": True, + "bake_viewer_process": True, + "bake_viewer_input_process": True, + "nodes": [ + { + "name": "Reformat01", + "nodeclass": "Reformat", + "dependency": "", + "knobs": [ + { + "type": "text", + "name": "type", + "text": "to format" + }, + { + "type": "text", + "name": "format", + "text": "HD_1080" + }, + { + "type": "text", + "name": "filter", + "text": "Lanczos6" + }, + { + "type": "boolean", + "name": "black_outside", + "boolean": True + }, + { + "type": "boolean", + "name": "pbb", + "boolean": False + } + ] + } + ], + "reposition_nodes": [ + { + "node_class": "Reformat", + "knobs": [ + { + "type": "text", + "name": "type", + "text": "to format" + }, + { + "type": "text", + "name": "format", + "text": "HD_1080" + }, + { + "type": "text", + "name": "filter", + "text": "Lanczos6" + }, + { + "type": "bool", + "name": "black_outside", + "boolean": True + }, + { + "type": "bool", + "name": "pbb", + "boolean": False + } + ] + } + ] + }, + "ExtractReviewData": { + "enabled": False + }, + "ExtractReviewDataLut": { + "enabled": False + }, + "ExtractReviewDataMov": { + "enabled": True, + "viewer_lut_raw": False, + "outputs": [ + { + "name": "baking", + "filter": { + "task_types": [], + "product_types": [], + "product_names": [] + }, + "read_raw": False, + "viewer_process_override": "", + "bake_viewer_process": True, + "bake_viewer_input_process": True, + "reformat_nodes_config": { + "enabled": False, + "reposition_nodes": [ + { + "node_class": "Reformat", + "knobs": [ + { + "type": "text", + "name": "type", + "text": "to format" + }, + { + "type": "text", + "name": "format", + "text": "HD_1080" + }, + { + "type": 
"text", + "name": "filter", + "text": "Lanczos6" + }, + { + "type": "bool", + "name": "black_outside", + "boolean": True + }, + { + "type": "bool", + "name": "pbb", + "boolean": False + } + ] + } + ] + }, + "extension": "mov", + "add_custom_tags": [] + } + ] + }, + "ExtractSlateFrame": { + "viewer_lut_raw": False, + "key_value_mapping": { + "f_submission_note": { + "enabled": True, + "template": "{comment}" + }, + "f_submitting_for": { + "enabled": True, + "template": "{intent[value]}" + }, + "f_vfx_scope_of_work": { + "enabled": False, + "template": "" + } + } + }, + "IncrementScriptVersion": { + "enabled": True, + "optional": True, + "active": True + } +} diff --git a/server_addon/nuke/server/settings/scriptsmenu.py b/server_addon/nuke/server/settings/scriptsmenu.py new file mode 100644 index 0000000000..9d1c32ebac --- /dev/null +++ b/server_addon/nuke/server/settings/scriptsmenu.py @@ -0,0 +1,54 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class ScriptsmenuSubmodel(BaseSettingsModel): + """Item Definition""" + _isGroup = True + + type: str = Field(title="Type") + command: str = Field(title="Command") + sourcetype: str = Field(title="Source Type") + title: str = Field(title="Title") + tooltip: str = Field(title="Tooltip") + + +class ScriptsmenuSettings(BaseSettingsModel): + """Nuke script menu project settings.""" + _isGroup = True + + # TODO: in api rename key `name` to `menu_name` + name: str = Field(title="Menu Name") + definition: list[ScriptsmenuSubmodel] = Field( + default_factory=list, + title="Definition", + description="Scriptmenu Items Definition" + ) + + +DEFAULT_SCRIPTSMENU_SETTINGS = { + "name": "OpenPype Tools", + "definition": [ + { + "type": "action", + "sourcetype": "python", + "title": "OpenPype Docs", + "command": "import webbrowser;webbrowser.open(url='https://openpype.io/docs/artist_hosts_nuke_tut')", + "tooltip": "Open the OpenPype Nuke user doc page" + }, + { + "type": "action", + "sourcetype": "python", + "title": "Set Frame Start (Read Node)", + "command": "from openpype.hosts.nuke.startup.frame_setting_for_read_nodes import main;main();", + "tooltip": "Set frame start for read node(s)" + }, + { + "type": "action", + "sourcetype": "python", + "title": "Set non publish output for Write Node", + "command": "from openpype.hosts.nuke.startup.custom_write_node import main;main();", + "tooltip": "Open the OpenPype Nuke user doc page" + } + ] +} diff --git a/server_addon/nuke/server/settings/templated_workfile_build.py b/server_addon/nuke/server/settings/templated_workfile_build.py new file mode 100644 index 0000000000..e0245c8d06 --- /dev/null +++ b/server_addon/nuke/server/settings/templated_workfile_build.py @@ -0,0 +1,33 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + task_types_enum, +) + + +class TemplatedWorkfileProfileModel(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + task_names: list[str] = Field( + default_factory=list, + title="Task names" + ) + path: str = Field( + title="Path to template" + ) + keep_placeholder: bool = Field( + False, + title="Keep placeholders") + create_first_version: bool = Field( + True, + title="Create first version" + ) + + +class TemplatedWorkfileBuildModel(BaseSettingsModel): + profiles: list[TemplatedWorkfileProfileModel] = Field( + default_factory=list + ) diff --git a/server_addon/nuke/server/settings/workfile_builder.py 
b/server_addon/nuke/server/settings/workfile_builder.py new file mode 100644 index 0000000000..ee67c7c16a --- /dev/null +++ b/server_addon/nuke/server/settings/workfile_builder.py @@ -0,0 +1,72 @@ +from pydantic import Field +from ayon_server.settings import ( + BaseSettingsModel, + task_types_enum, + MultiplatformPathModel, +) + + +class CustomTemplateModel(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + path: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel, + title="Gizmo Directory Path" + ) + + +class BuilderProfileItemModel(BaseSettingsModel): + product_name_filters: list[str] = Field( + default_factory=list, + title="Product name" + ) + product_types: list[str] = Field( + default_factory=list, + title="Product types" + ) + repre_names: list[str] = Field( + default_factory=list, + title="Representations" + ) + loaders: list[str] = Field( + default_factory=list, + title="Loader plugins" + ) + + +class BuilderProfileModel(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + tasks: list[str] = Field( + default_factory=list, + title="Task names" + ) + current_context: list[BuilderProfileItemModel] = Field( + title="Current context") + linked_assets: list[BuilderProfileItemModel] = Field( + title="Linked assets/shots") + + +class WorkfileBuilderModel(BaseSettingsModel): + create_first_version: bool = Field( + title="Create first workfile") + custom_templates: list[CustomTemplateModel] = Field( + title="Custom templates") + builder_on_start: bool = Field( + title="Run Builder at first workfile") + profiles: list[BuilderProfileModel] = Field( + title="Builder profiles") + + +DEFAULT_WORKFILE_BUILDER_SETTINGS = { + "create_first_version": False, + "custom_templates": [], + "builder_on_start": False, + "profiles": [] +} diff --git a/server_addon/nuke/server/version.py b/server_addon/nuke/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/nuke/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/client/pyproject.toml b/server_addon/openpype/client/pyproject.toml similarity index 100% rename from server_addon/client/pyproject.toml rename to server_addon/openpype/client/pyproject.toml diff --git a/server_addon/server/__init__.py b/server_addon/openpype/server/__init__.py similarity index 100% rename from server_addon/server/__init__.py rename to server_addon/openpype/server/__init__.py diff --git a/server_addon/photoshop/LICENSE b/server_addon/photoshop/LICENSE new file mode 100644 index 0000000000..d645695673 --- /dev/null +++ b/server_addon/photoshop/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. 
For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
diff --git a/server_addon/photoshop/README.md b/server_addon/photoshop/README.md new file mode 100644 index 0000000000..2d1e1c745c --- /dev/null +++ b/server_addon/photoshop/README.md @@ -0,0 +1,4 @@ +Photoshp Addon +=============== + +Integration with Adobe Photoshop. diff --git a/server_addon/photoshop/server/__init__.py b/server_addon/photoshop/server/__init__.py new file mode 100644 index 0000000000..3a45f7a809 --- /dev/null +++ b/server_addon/photoshop/server/__init__.py @@ -0,0 +1,16 @@ +from ayon_server.addons import BaseServerAddon + +from .settings import PhotoshopSettings, DEFAULT_PHOTOSHOP_SETTING +from .version import __version__ + + +class Photoshop(BaseServerAddon): + name = "photoshop" + title = "Photoshop" + version = __version__ + + settings_model = PhotoshopSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_PHOTOSHOP_SETTING) diff --git a/server_addon/photoshop/server/settings/__init__.py b/server_addon/photoshop/server/settings/__init__.py new file mode 100644 index 0000000000..9ae5764362 --- /dev/null +++ b/server_addon/photoshop/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + PhotoshopSettings, + DEFAULT_PHOTOSHOP_SETTING, +) + + +__all__ = ( + "PhotoshopSettings", + "DEFAULT_PHOTOSHOP_SETTING", +) diff --git a/server_addon/photoshop/server/settings/creator_plugins.py b/server_addon/photoshop/server/settings/creator_plugins.py new file mode 100644 index 0000000000..2fe63a7e3a --- /dev/null +++ b/server_addon/photoshop/server/settings/creator_plugins.py @@ -0,0 +1,79 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class CreateImagePluginModel(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + active_on_create: bool = Field(True, title="Active by default") + mark_for_review: bool = Field(False, title="Review by default") + default_variants: list[str] = Field( + default_factory=list, + title="Default Variants" + ) + + +class AutoImageCreatorPluginModel(BaseSettingsModel): + enabled: bool = Field(False, title="Enabled") + active_on_create: bool = Field(True, title="Active by default") + mark_for_review: bool = Field(False, title="Review by default") + default_variant: str = Field("", title="Default Variants") + + +class CreateReviewPlugin(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + active_on_create: bool = Field(True, title="Active by default") + default_variant: str = Field("", title="Default Variants") + + +class CreateWorkfilelugin(BaseSettingsModel): + enabled: bool = Field(True, title="Enabled") + active_on_create: bool = Field(True, title="Active by default") + default_variant: str = Field("", title="Default Variants") + + +class PhotoshopCreatorPlugins(BaseSettingsModel): + ImageCreator: CreateImagePluginModel = Field( + title="Create Image", + default_factory=CreateImagePluginModel, + ) + AutoImageCreator: AutoImageCreatorPluginModel = Field( + title="Create Flatten Image", + default_factory=AutoImageCreatorPluginModel, + ) + ReviewCreator: CreateReviewPlugin = Field( + title="Create Review", + default_factory=CreateReviewPlugin, + ) + WorkfileCreator: CreateWorkfilelugin = Field( + title="Create Workfile", + default_factory=CreateWorkfilelugin, + ) + + +DEFAULT_CREATE_SETTINGS = { + "ImageCreator": { + "enabled": True, + "active_on_create": True, + "mark_for_review": False, + "default_variants": [ + "Main" + ] + }, + "AutoImageCreator": { + "enabled": False, + 
"active_on_create": True, + "mark_for_review": False, + "default_variant": "" + }, + "ReviewCreator": { + "enabled": True, + "active_on_create": True, + "default_variant": "" + }, + "WorkfileCreator": { + "enabled": True, + "active_on_create": True, + "default_variant": "Main" + } +} diff --git a/server_addon/photoshop/server/settings/imageio.py b/server_addon/photoshop/server/settings/imageio.py new file mode 100644 index 0000000000..56b7f2fa32 --- /dev/null +++ b/server_addon/photoshop/server/settings/imageio.py @@ -0,0 +1,64 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIORemappingRulesModel(BaseSettingsModel): + host_native_name: str = Field( + title="Application native colorspace name" + ) + ocio_name: str = Field(title="OCIO colorspace name") + + +class ImageIORemappingModel(BaseSettingsModel): + rules: list[ImageIORemappingRulesModel] = Field( + default_factory=list) + + +class PhotoshopImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + remapping: ImageIORemappingModel = Field( + title="Remapping colorspace names", + default_factory=ImageIORemappingModel + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/photoshop/server/settings/main.py b/server_addon/photoshop/server/settings/main.py new file mode 100644 index 0000000000..ae7705b3db --- /dev/null +++ b/server_addon/photoshop/server/settings/main.py @@ -0,0 +1,41 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + +from .imageio import PhotoshopImageIOModel +from .creator_plugins import PhotoshopCreatorPlugins, DEFAULT_CREATE_SETTINGS +from .publish_plugins import PhotoshopPublishPlugins, DEFAULT_PUBLISH_SETTINGS +from .workfile_builder import WorkfileBuilderPlugin + + +class PhotoshopSettings(BaseSettingsModel): + """Photoshop Project Settings.""" + + imageio: PhotoshopImageIOModel = Field( + default_factory=PhotoshopImageIOModel, + title="OCIO config" + ) + + create: PhotoshopCreatorPlugins = Field( + default_factory=PhotoshopCreatorPlugins, + title="Creator plugins" + ) + + publish: PhotoshopPublishPlugins = Field( + default_factory=PhotoshopPublishPlugins, + title="Publish plugins" + ) + + workfile_builder: WorkfileBuilderPlugin = Field( + default_factory=WorkfileBuilderPlugin, + title="Workfile Builder" + ) + + +DEFAULT_PHOTOSHOP_SETTING = { + "create": DEFAULT_CREATE_SETTINGS, + "publish": DEFAULT_PUBLISH_SETTINGS, + "workfile_builder": { + 
"create_first_version": False, + "custom_templates": [] + } +} diff --git a/server_addon/photoshop/server/settings/publish_plugins.py b/server_addon/photoshop/server/settings/publish_plugins.py new file mode 100644 index 0000000000..6bc72b4072 --- /dev/null +++ b/server_addon/photoshop/server/settings/publish_plugins.py @@ -0,0 +1,221 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +create_flatten_image_enum = [ + {"value": "flatten_with_images", "label": "Flatten with images"}, + {"value": "flatten_only", "label": "Flatten only"}, + {"value": "no", "label": "No"}, +] + + +color_code_enum = [ + {"value": "red", "label": "Red"}, + {"value": "orange", "label": "Orange"}, + {"value": "yellowColor", "label": "Yellow"}, + {"value": "grain", "label": "Green"}, + {"value": "blue", "label": "Blue"}, + {"value": "violet", "label": "Violet"}, + {"value": "gray", "label": "Gray"}, +] + + +class ColorCodeMappings(BaseSettingsModel): + color_code: list[str] = Field( + title="Color codes for layers", + default_factory=list, + enum_resolver=lambda: color_code_enum, + ) + + layer_name_regex: list[str] = Field( + "", + title="Layer name regex" + ) + + product_type: str = Field( + "", + title="Resulting product type" + ) + + product_name_template: str = Field( + "", + title="Product name template" + ) + + +class ExtractedOptions(BaseSettingsModel): + tags: list[str] = Field( + title="Tags", + default_factory=list + ) + + +class CollectColorCodedInstancesPlugin(BaseSettingsModel): + """Set color for publishable layers, set its resulting product type + and template for product name. \n Can create flatten image from published + instances. + (Applicable only for remote publishing!)""" + + enabled: bool = Field(True, title="Enabled") + create_flatten_image: str = Field( + "", + title="Create flatten image", + enum_resolver=lambda: create_flatten_image_enum, + ) + + flatten_product_type_template: str = Field( + "", + title="Subset template for flatten image" + ) + + color_code_mapping: list[ColorCodeMappings] = Field( + title="Color code mappings", + default_factory=ColorCodeMappings, + ) + + +class CollectReviewPlugin(BaseSettingsModel): + """Should review product be created""" + enabled: bool = Field(True, title="Enabled") + + +class CollectVersionPlugin(BaseSettingsModel): + """Synchronize version for image and review instances by workfile version""" # noqa + enabled: bool = Field(True, title="Enabled") + + +class ValidateContainersPlugin(BaseSettingsModel): + """Check that workfile contains latest version of loaded items""" # noqa + _isGroup = True + enabled: bool = True + optional: bool = Field(False, title="Optional") + active: bool = Field(True, title="Active") + + +class ValidateNamingPlugin(BaseSettingsModel): + """Validate naming of products and layers""" # noqa + invalid_chars: str = Field( + '', + title="Regex pattern of invalid characters" + ) + + replace_char: str = Field( + '', + title="Replacement character" + ) + + +class ExtractImagePlugin(BaseSettingsModel): + """Currently only jpg and png are supported""" + formats: list[str] = Field( + title="Extract Formats", + default_factory=list, + ) + + +class ExtractReviewPlugin(BaseSettingsModel): + make_image_sequence: bool = Field( + False, + title="Make an image sequence instead of flatten image" + ) + + max_downscale_size: int = Field( + 8192, + title="Maximum size of sources for review", + description="FFMpeg can only handle limited resolution for creation of review and/or thumbnail", # noqa + gt=300, # 
greater than + le=16384, # less or equal + ) + + jpg_options: ExtractedOptions = Field( + title="Extracted jpg Options", + default_factory=ExtractedOptions + ) + + mov_options: ExtractedOptions = Field( + title="Extracted mov Options", + default_factory=ExtractedOptions + ) + + +class PhotoshopPublishPlugins(BaseSettingsModel): + CollectColorCodedInstances: CollectColorCodedInstancesPlugin = Field( + title="Collect Color Coded Instances", + default_factory=CollectColorCodedInstancesPlugin, + ) + CollectReview: CollectReviewPlugin = Field( + title="Collect Review", + default_factory=CollectReviewPlugin, + ) + + CollectVersion: CollectVersionPlugin = Field( + title="Create Image", + default_factory=CollectVersionPlugin, + ) + + ValidateContainers: ValidateContainersPlugin = Field( + title="Validate Containers", + default_factory=ValidateContainersPlugin, + ) + + ValidateNaming: ValidateNamingPlugin = Field( + title="Validate naming of products and layers", + default_factory=ValidateNamingPlugin, + ) + + ExtractImage: ExtractImagePlugin = Field( + title="Extract Image", + default_factory=ExtractImagePlugin, + ) + + ExtractReview: ExtractReviewPlugin = Field( + title="Extract Review", + default_factory=ExtractReviewPlugin, + ) + + +DEFAULT_PUBLISH_SETTINGS = { + "CollectColorCodedInstances": { + "create_flatten_image": "no", + "flatten_product_type_template": "", + "color_code_mapping": [] + }, + "CollectReview": { + "enabled": True + }, + "CollectVersion": { + "enabled": False + }, + "ValidateContainers": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateNaming": { + "invalid_chars": "[ \\\\/+\\*\\?\\(\\)\\[\\]\\{\\}:,;]", + "replace_char": "_" + }, + "ExtractImage": { + "formats": [ + "png", + "jpg" + ] + }, + "ExtractReview": { + "make_image_sequence": False, + "max_downscale_size": 8192, + "jpg_options": { + "tags": [ + "review", + "ftrackreview" + ] + }, + "mov_options": { + "tags": [ + "review", + "ftrackreview" + ] + } + } +} diff --git a/server_addon/photoshop/server/settings/workfile_builder.py b/server_addon/photoshop/server/settings/workfile_builder.py new file mode 100644 index 0000000000..ec2ee136ad --- /dev/null +++ b/server_addon/photoshop/server/settings/workfile_builder.py @@ -0,0 +1,41 @@ +from pydantic import Field +from pathlib import Path + +from ayon_server.settings import BaseSettingsModel + + +class PathsTemplate(BaseSettingsModel): + windows: Path = Field( + '', + title="Windows" + ) + darwin: Path = Field( + '', + title="MacOS" + ) + linux: Path = Field( + '', + title="Linux" + ) + + +class CustomBuilderTemplate(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + ) + template_path: PathsTemplate = Field( + default_factory=PathsTemplate + ) + + +class WorkfileBuilderPlugin(BaseSettingsModel): + _title = "Workfile Builder" + create_first_version: bool = Field( + False, + title="Create first workfile" + ) + + custom_templates: list[CustomBuilderTemplate] = Field( + default_factory=CustomBuilderTemplate + ) diff --git a/server_addon/photoshop/server/version.py b/server_addon/photoshop/server/version.py new file mode 100644 index 0000000000..d4b9e2d7f3 --- /dev/null +++ b/server_addon/photoshop/server/version.py @@ -0,0 +1,3 @@ +# -*- coding: utf-8 -*- +"""Package declaring addon version.""" +__version__ = "0.1.0" diff --git a/server_addon/resolve/server/__init__.py b/server_addon/resolve/server/__init__.py new file mode 100644 index 0000000000..a84180d0f5 --- /dev/null +++ 
b/server_addon/resolve/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import ResolveSettings, DEFAULT_VALUES + + +class ResolveAddon(BaseServerAddon): + name = "resolve" + title = "DaVinci Resolve" + version = __version__ + settings_model: Type[ResolveSettings] = ResolveSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/resolve/server/imageio.py b/server_addon/resolve/server/imageio.py new file mode 100644 index 0000000000..c2bfcd40d0 --- /dev/null +++ b/server_addon/resolve/server/imageio.py @@ -0,0 +1,64 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class ImageIORemappingRulesModel(BaseSettingsModel): + host_native_name: str = Field( + title="Application native colorspace name" + ) + ocio_name: str = Field(title="OCIO colorspace name") + + +class ImageIORemappingModel(BaseSettingsModel): + rules: list[ImageIORemappingRulesModel] = Field( + default_factory=list) + + +class ResolveImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + remapping: ImageIORemappingModel = Field( + title="Remapping colorspace names", + default_factory=ImageIORemappingModel + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/resolve/server/settings.py b/server_addon/resolve/server/settings.py new file mode 100644 index 0000000000..326f6bea1e --- /dev/null +++ b/server_addon/resolve/server/settings.py @@ -0,0 +1,114 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + +from .imageio import ResolveImageIOModel + + +class CreateShotClipModels(BaseSettingsModel): + hierarchy: str = Field( + "{folder}/{sequence}", + title="Shot parent hierarchy", + section="Shot Hierarchy And Rename Settings" + ) + clipRename: bool = Field( + True, + title="Rename clips" + ) + clipName: str = Field( + "{track}{sequence}{shot}", + title="Clip name template" + ) + countFrom: int = Field( + 10, + title="Count sequence from" + ) + countSteps: int = Field( + 10, + title="Stepping number" + ) + + folder: str = Field( + "shots", + title="{folder}", + section="Shot Template Keywords" + ) + episode: str = Field( + "ep01", + title="{episode}" + ) + sequence: str = Field( + "sq01", + title="{sequence}" + ) + 
track: str = Field( + "{_track_}", + title="{track}" + ) + shot: str = Field( + "sh###", + title="{shot}" + ) + + vSyncOn: bool = Field( + False, + title="Enable Vertical Sync", + section="Vertical Synchronization Of Attributes" + ) + + workfileFrameStart: int = Field( + 1001, + title="Workfiles Start Frame", + section="Shot Attributes" + ) + handleStart: int = Field( + 10, + title="Handle start (head)" + ) + handleEnd: int = Field( + 10, + title="Handle end (tail)" + ) + + +class CreatorPuginsModel(BaseSettingsModel): + CreateShotClip: CreateShotClipModels = Field( + default_factory=CreateShotClipModels, + title="Create Shot Clip" + ) + + +class ResolveSettings(BaseSettingsModel): + launch_openpype_menu_on_start: bool = Field( + False, title="Launch OpenPype menu on start of Resolve" + ) + imageio: ResolveImageIOModel = Field( + default_factory=ResolveImageIOModel, + title="Color Management (ImageIO)" + ) + create: CreatorPuginsModel = Field( + default_factory=CreatorPuginsModel, + title="Creator plugins", + ) + + +DEFAULT_VALUES = { + "launch_openpype_menu_on_start": False, + "create": { + "CreateShotClip": { + "hierarchy": "{folder}/{sequence}", + "clipRename": True, + "clipName": "{track}{sequence}{shot}", + "countFrom": 10, + "countSteps": 10, + "folder": "shots", + "episode": "ep01", + "sequence": "sq01", + "track": "{_track_}", + "shot": "sh###", + "vSyncOn": False, + "workfileFrameStart": 1001, + "handleStart": 10, + "handleEnd": 10 + } + } +} diff --git a/server_addon/resolve/server/version.py b/server_addon/resolve/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/resolve/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/royal_render/server/__init__.py b/server_addon/royal_render/server/__init__.py new file mode 100644 index 0000000000..c5f0aafa00 --- /dev/null +++ b/server_addon/royal_render/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import RoyalRenderSettings, DEFAULT_VALUES + + +class RoyalRenderAddon(BaseServerAddon): + name = "royalrender" + version = __version__ + title = "Royal Render" + settings_model: Type[RoyalRenderSettings] = RoyalRenderSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/royal_render/server/settings.py b/server_addon/royal_render/server/settings.py new file mode 100644 index 0000000000..677d7e2671 --- /dev/null +++ b/server_addon/royal_render/server/settings.py @@ -0,0 +1,70 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel, MultiplatformPathModel + + +class CustomPath(MultiplatformPathModel): + _layout = "expanded" + + +class ServerListSubmodel(BaseSettingsModel): + _layout = "expanded" + name: str = Field("", title="Name") + value: CustomPath = Field( + default_factory=CustomPath + ) + + +class CollectSequencesFromJobModel(BaseSettingsModel): + review: bool = Field(True, title="Generate reviews from sequences") + + +class PublishPluginsModel(BaseSettingsModel): + CollectSequencesFromJob: CollectSequencesFromJobModel = Field( + default_factory=CollectSequencesFromJobModel, + title="Collect Sequences from the Job" + ) + + +class RoyalRenderSettings(BaseSettingsModel): + enabled: bool = True + # WARNING/TODO this needs change + # - both system and project settings contained 'rr_path' + # where project 
settings did choose one of rr_path from system settings + # that is not possible in AYON + rr_paths: list[ServerListSubmodel] = Field( + default_factory=list, + title="Royal Render Root Paths", + scope=["studio"], + ) + # This was 'rr_paths' in project settings and should be enum of + # 'rr_paths' from system settings, but that's not possible in AYON + selected_rr_paths: list[str] = Field( + default_factory=list, + title="Selected Royal Render Paths", + section="---", + ) + publish: PublishPluginsModel = Field( + default_factory=PublishPluginsModel, + title="Publish plugins", + ) + + +DEFAULT_VALUES = { + "enabled": False, + "rr_paths": [ + { + "name": "default", + "value": { + "windows": "", + "darwin": "", + "linux": "" + } + } + ], + "selected_rr_paths": ["default"], + "publish": { + "CollectSequencesFromJob": { + "review": True + } + } +} diff --git a/server_addon/royal_render/server/version.py b/server_addon/royal_render/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/royal_render/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/timers_manager/server/__init__.py b/server_addon/timers_manager/server/__init__.py new file mode 100644 index 0000000000..29f9d47370 --- /dev/null +++ b/server_addon/timers_manager/server/__init__.py @@ -0,0 +1,13 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import TimersManagerSettings + + +class TimersManagerAddon(BaseServerAddon): + name = "timers_manager" + version = __version__ + title = "Timers Manager" + settings_model: Type[TimersManagerSettings] = TimersManagerSettings diff --git a/server_addon/timers_manager/server/settings.py b/server_addon/timers_manager/server/settings.py new file mode 100644 index 0000000000..a5c5721a57 --- /dev/null +++ b/server_addon/timers_manager/server/settings.py @@ -0,0 +1,25 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class TimersManagerSettings(BaseSettingsModel): + auto_stop: bool = Field( + True, + title="Auto stop timer", + scope=["studio"], + ) + full_time: int = Field( + 15, + title="Max idle time", + scope=["studio"], + ) + message_time: float = Field( + 0.5, + title="When dialog will show", + scope=["studio"], + ) + disregard_publishing: bool = Field( + False, + title="Disregard publishing", + scope=["studio"], + ) diff --git a/server_addon/timers_manager/server/version.py b/server_addon/timers_manager/server/version.py new file mode 100644 index 0000000000..485f44ac21 --- /dev/null +++ b/server_addon/timers_manager/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.1" diff --git a/server_addon/traypublisher/server/LICENSE b/server_addon/traypublisher/server/LICENSE new file mode 100644 index 0000000000..d645695673 --- /dev/null +++ b/server_addon/traypublisher/server/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. 
For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
diff --git a/server_addon/traypublisher/server/README.md b/server_addon/traypublisher/server/README.md new file mode 100644 index 0000000000..c0029bc782 --- /dev/null +++ b/server_addon/traypublisher/server/README.md @@ -0,0 +1,4 @@ +Photoshp Addon +=============== + +Integration with Adobe Traypublisher. diff --git a/server_addon/traypublisher/server/__init__.py b/server_addon/traypublisher/server/__init__.py new file mode 100644 index 0000000000..e6f079609f --- /dev/null +++ b/server_addon/traypublisher/server/__init__.py @@ -0,0 +1,16 @@ +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import TraypublisherSettings, DEFAULT_TRAYPUBLISHER_SETTING + + +class Traypublisher(BaseServerAddon): + name = "traypublisher" + title = "TrayPublisher" + version = __version__ + + settings_model = TraypublisherSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_TRAYPUBLISHER_SETTING) diff --git a/server_addon/traypublisher/server/settings/__init__.py b/server_addon/traypublisher/server/settings/__init__.py new file mode 100644 index 0000000000..bcf8beffa7 --- /dev/null +++ b/server_addon/traypublisher/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + TraypublisherSettings, + DEFAULT_TRAYPUBLISHER_SETTING, +) + + +__all__ = ( + "TraypublisherSettings", + "DEFAULT_TRAYPUBLISHER_SETTING", +) diff --git a/server_addon/traypublisher/server/settings/creator_plugins.py b/server_addon/traypublisher/server/settings/creator_plugins.py new file mode 100644 index 0000000000..345cb92e63 --- /dev/null +++ b/server_addon/traypublisher/server/settings/creator_plugins.py @@ -0,0 +1,46 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class BatchMovieCreatorPlugin(BaseSettingsModel): + """Allows to publish multiple video files in one go.
Name of matching + asset is parsed from file names ('asset.mov', 'asset_v001.mov', + 'my_asset_to_publish.mov')""" + + default_variants: list[str] = Field( + title="Default variants", + default_factory=list + ) + + default_tasks: list[str] = Field( + title="Default tasks", + default_factory=list + ) + + extensions: list[str] = Field( + title="Extensions", + default_factory=list + ) + + +class TrayPublisherCreatePluginsModel(BaseSettingsModel): + BatchMovieCreator: BatchMovieCreatorPlugin = Field( + title="Batch Movie Creator", + default_factory=BatchMovieCreatorPlugin + ) + + +DEFAULT_CREATORS = { + "BatchMovieCreator": { + "default_variants": [ + "Main" + ], + "default_tasks": [ + "Compositing" + ], + "extensions": [ + ".mov" + ] + }, +} diff --git a/server_addon/traypublisher/server/settings/editorial_creators.py b/server_addon/traypublisher/server/settings/editorial_creators.py new file mode 100644 index 0000000000..4111f22576 --- /dev/null +++ b/server_addon/traypublisher/server/settings/editorial_creators.py @@ -0,0 +1,181 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel, task_types_enum + + +class ClipNameTokenizerItem(BaseSettingsModel): + _layout = "expanded" + # TODO was 'dict-modifiable', is list of dicts now, must be fixed in code + name: str = Field("#TODO", title="Tokenizer name") + regex: str = Field("", title="Tokenizer regex") + + +class ShotAddTasksItem(BaseSettingsModel): + _layout = "expanded" + # TODO was 'dict-modifiable', is list of dicts now, must be fixed in code + name: str = Field('', title="Key") + task_type: list[str] = Field( + title="Task type", + default_factory=list, + enum_resolver=task_types_enum) + + +class ShotRenameSubmodel(BaseSettingsModel): + enabled: bool = True + shot_rename_template: str = Field( + "", + title="Shot rename template" + ) + + +parent_type_enum = [ + {"value": "Project", "label": "Project"}, + {"value": "Folder", "label": "Folder"}, + {"value": "Episode", "label": "Episode"}, + {"value": "Sequence", "label": "Sequence"}, +] + + +class TokenToParentConvertorItem(BaseSettingsModel): + # TODO - was 'type' must be renamed in code to `parent_type` + parent_type: str = Field( + "Project", + enum_resolver=lambda: parent_type_enum + ) + name: str = Field( + "", + title="Parent token name", + description="Unique name used in `Parent path template`" + ) + value: str = Field( + "", + title="Parent token value", + description="Template where any text, Anatomy keys and Tokens could be used" # noqa + ) + + +class ShotHierchySubmodel(BaseSettingsModel): + enabled: bool = True + parents_path: str = Field( + "", + title="Parents path template", + description="Using keys from \"Token to parent convertor\" or tokens directly" # noqa + ) + parents: list[TokenToParentConvertorItem] = Field( + default_factory=TokenToParentConvertorItem, + title="Token to parent convertor" + ) + + +output_file_type = [ + {"value": ".mp4", "label": "MP4"}, + {"value": ".mov", "label": "MOV"}, + {"value": ".wav", "label": "WAV"} +] + + +class ProductTypePresetItem(BaseSettingsModel): + product_type: str = Field("", title="Product type") + # TODO add placeholder '< Inherited >' + variant: str = Field("", title="Variant") + review: bool = Field(True, title="Review") + output_file_type: str = Field( + ".mp4", + enum_resolver=lambda: output_file_type + ) + + +class EditorialSimpleCreatorPlugin(BaseSettingsModel): + default_variants: list[str] = Field( + default_factory=list, + title="Default Variants" + ) + clip_name_tokenizer: 
list[ClipNameTokenizerItem] = Field( + default_factory=ClipNameTokenizerItem, + description=( + "Using Regex expression to create tokens. \nThose can be used" + " later in \"Shot rename\" creator \nor \"Shot hierarchy\"." + "\n\nTokens should be decorated with \"_\" on each side" + ) + ) + shot_rename: ShotRenameSubmodel = Field( + title="Shot Rename", + default_factory=ShotRenameSubmodel + ) + shot_hierarchy: ShotHierchySubmodel = Field( + title="Shot Hierarchy", + default_factory=ShotHierchySubmodel + ) + shot_add_tasks: list[ShotAddTasksItem] = Field( + title="Add tasks to shot", + default_factory=ShotAddTasksItem + ) + product_type_presets: list[ProductTypePresetItem] = Field( + default_factory=list + ) + + +class TraypublisherEditorialCreatorPlugins(BaseSettingsModel): + editorial_simple: EditorialSimpleCreatorPlugin = Field( + title="Editorial simple creator", + default_factory=EditorialSimpleCreatorPlugin, + ) + + +DEFAULT_EDITORIAL_CREATORS = { + "editorial_simple": { + "default_variants": [ + "Main" + ], + "clip_name_tokenizer": [ + {"name": "_sequence_", "regex": "(sc\\d{3})"}, + {"name": "_shot_", "regex": "(sh\\d{3})"} + ], + "shot_rename": { + "enabled": True, + "shot_rename_template": "{project[code]}_{_sequence_}_{_shot_}" + }, + "shot_hierarchy": { + "enabled": True, + "parents_path": "{project}/{folder}/{sequence}", + "parents": [ + { + "parent_type": "Project", + "name": "project", + "value": "{project[name]}" + }, + { + "parent_type": "Folder", + "name": "folder", + "value": "shots" + }, + { + "parent_type": "Sequence", + "name": "sequence", + "value": "{_sequence_}" + } + ] + }, + "shot_add_tasks": [], + "product_type_presets": [ + { + "product_type": "review", + "variant": "Reference", + "review": True, + "output_file_type": ".mp4" + }, + { + "product_type": "plate", + "variant": "", + "review": False, + "output_file_type": ".mov" + }, + { + "product_type": "audio", + "variant": "", + "review": False, + "output_file_type": ".wav" + } + ] + } +} diff --git a/server_addon/traypublisher/server/settings/imageio.py b/server_addon/traypublisher/server/settings/imageio.py new file mode 100644 index 0000000000..3df0d2f2fb --- /dev/null +++ b/server_addon/traypublisher/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class TrayPublisherImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) 
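The editorial_simple defaults above pair clip-name tokenizer regexes with a shot-rename template and parent tokens. The following is a minimal, standalone sketch of how those defaults could fit together; the clip file name and project code are hypothetical, and this is not the addon's actual creator implementation, only an illustration of the token/template mechanics described in the settings.

import re

# Values mirror DEFAULT_EDITORIAL_CREATORS above; token names are
# decorated with "_" on each side as the creator description requires.
clip_name_tokenizer = {
    "_sequence_": r"(sc\d{3})",
    "_shot_": r"(sh\d{3})",
}
shot_rename_template = "{project[code]}_{_sequence_}_{_shot_}"


def tokenize_clip_name(clip_name: str) -> dict:
    """Run each tokenizer regex on the clip name and collect token values."""
    tokens = {}
    for token_name, pattern in clip_name_tokenizer.items():
        match = re.search(pattern, clip_name)
        if match:
            tokens[token_name] = match.group(1)
    return tokens


# Hypothetical clip name and project code, for illustration only.
tokens = tokenize_clip_name("sc010_sh030_plate_v001.mov")
# tokens == {"_sequence_": "sc010", "_shot_": "sh030"}

shot_name = shot_rename_template.format(project={"code": "PRJ"}, **tokens)
# shot_name == "PRJ_sc010_sh030"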
diff --git a/server_addon/traypublisher/server/settings/main.py b/server_addon/traypublisher/server/settings/main.py new file mode 100644 index 0000000000..fad96bef2f --- /dev/null +++ b/server_addon/traypublisher/server/settings/main.py @@ -0,0 +1,52 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + +from .imageio import TrayPublisherImageIOModel +from .simple_creators import ( + SimpleCreatorPlugin, + DEFAULT_SIMPLE_CREATORS, +) +from .editorial_creators import ( + TraypublisherEditorialCreatorPlugins, + DEFAULT_EDITORIAL_CREATORS, +) +from .creator_plugins import ( + TrayPublisherCreatePluginsModel, + DEFAULT_CREATORS, +) +from .publish_plugins import ( + TrayPublisherPublishPlugins, + DEFAULT_PUBLISH_PLUGINS, +) + + +class TraypublisherSettings(BaseSettingsModel): + """Traypublisher Project Settings.""" + imageio: TrayPublisherImageIOModel = Field( + default_factory=TrayPublisherImageIOModel, + title="Color Management (ImageIO)" + ) + simple_creators: list[SimpleCreatorPlugin] = Field( + title="Simple Create Plugins", + default_factory=SimpleCreatorPlugin, + ) + editorial_creators: TraypublisherEditorialCreatorPlugins = Field( + title="Editorial Creators", + default_factory=TraypublisherEditorialCreatorPlugins, + ) + create: TrayPublisherCreatePluginsModel = Field( + title="Create", + default_factory=TrayPublisherCreatePluginsModel + ) + publish: TrayPublisherPublishPlugins = Field( + title="Publish Plugins", + default_factory=TrayPublisherPublishPlugins + ) + + +DEFAULT_TRAYPUBLISHER_SETTING = { + "simple_creators": DEFAULT_SIMPLE_CREATORS, + "editorial_creators": DEFAULT_EDITORIAL_CREATORS, + "create": DEFAULT_CREATORS, + "publish": DEFAULT_PUBLISH_PLUGINS, +} diff --git a/server_addon/traypublisher/server/settings/publish_plugins.py b/server_addon/traypublisher/server/settings/publish_plugins.py new file mode 100644 index 0000000000..8c844f29f2 --- /dev/null +++ b/server_addon/traypublisher/server/settings/publish_plugins.py @@ -0,0 +1,50 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class ValidatePluginModel(BaseSettingsModel): + _isGroup = True + enabled: bool = True + optional: bool = Field(True, title="Optional") + active: bool = Field(True, title="Active") + + +class ValidateFrameRangeModel(ValidatePluginModel): + """Allows to publish multiple video files in one go.
Name of matching + asset is parsed from file names ('asset.mov', 'asset_v001.mov', + 'my_asset_to_publish.mov')""" + + +class TrayPublisherPublishPlugins(BaseSettingsModel): + CollectFrameDataFromAssetEntity: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Collect Frame Data From Folder Entity", + ) + ValidateFrameRange: ValidateFrameRangeModel = Field( + title="Validate Frame Range", + default_factory=ValidateFrameRangeModel, + ) + ValidateExistingVersion: ValidatePluginModel = Field( + title="Validate Existing Version", + default_factory=ValidatePluginModel, + ) + + +DEFAULT_PUBLISH_PLUGINS = { + "CollectFrameDataFromAssetEntity": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateFrameRange": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateExistingVersion": { + "enabled": True, + "optional": True, + "active": True + } +} diff --git a/server_addon/traypublisher/server/settings/simple_creators.py b/server_addon/traypublisher/server/settings/simple_creators.py new file mode 100644 index 0000000000..94d6602738 --- /dev/null +++ b/server_addon/traypublisher/server/settings/simple_creators.py @@ -0,0 +1,292 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class SimpleCreatorPlugin(BaseSettingsModel): + _layout = "expanded" + product_type: str = Field("", title="Product type") + # TODO add placeholder + identifier: str = Field("", title="Identifier") + label: str = Field("", title="Label") + icon: str = Field("", title="Icon") + default_variants: list[str] = Field( + default_factory=list, + title="Default Variants" + ) + description: str = Field( + "", + title="Description", + widget="textarea" + ) + detailed_description: str = Field( + "", + title="Detailed Description", + widget="textarea" + ) + allow_sequences: bool = Field( + False, + title="Allow sequences" + ) + allow_multiple_items: bool = Field( + False, + title="Allow multiple items" + ) + allow_version_control: bool = Field( + False, + title="Allow version control" + ) + extensions: list[str] = Field( + default_factory=list, + title="Extensions" + ) + + +DEFAULT_SIMPLE_CREATORS = [ + { + "product_type": "workfile", + "identifier": "", + "label": "Workfile", + "icon": "fa.file", + "default_variants": [ + "Main" + ], + "description": "Backup of a working scene", + "detailed_description": "Workfiles are full scenes from any application that are directly edited by artists. They represent a state of work on a task at a given point and are usually not directly referenced into other scenes.", + "allow_sequences": False, + "allow_multiple_items": False, + "allow_version_control": False, + "extensions": [ + ".ma", + ".mb", + ".nk", + ".hrox", + ".hip", + ".hiplc", + ".hipnc", + ".blend", + ".scn", + ".tvpp", + ".comp", + ".zip", + ".prproj", + ".drp", + ".psd", + ".psb", + ".aep" + ] + }, + { + "product_type": "model", + "identifier": "", + "label": "Model", + "icon": "fa.cubes", + "default_variants": [ + "Main", + "Proxy", + "Sculpt" + ], + "description": "Clean models", + "detailed_description": "Models should only contain geometry data, without any extras like cameras, locators or bones.\n\nKeep in mind that models published from tray publisher are not validated for correctness. 
", + "allow_sequences": False, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".ma", + ".mb", + ".obj", + ".abc", + ".fbx", + ".bgeo", + ".bgeogz", + ".bgeosc", + ".usd", + ".blend" + ] + }, + { + "product_type": "pointcache", + "identifier": "", + "label": "Pointcache", + "icon": "fa.gears", + "default_variants": [ + "Main" + ], + "description": "Geometry Caches", + "detailed_description": "Alembic or bgeo cache of animated data", + "allow_sequences": True, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".abc", + ".bgeo", + ".bgeogz", + ".bgeosc" + ] + }, + { + "product_type": "plate", + "identifier": "", + "label": "Plate", + "icon": "mdi.camera-image", + "default_variants": [ + "Main", + "BG", + "Animatic", + "Reference", + "Offline" + ], + "description": "Footage Plates", + "detailed_description": "Any type of image seqeuence coming from outside of the studio. Usually camera footage, but could also be animatics used for reference.", + "allow_sequences": True, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".exr", + ".png", + ".dpx", + ".jpg", + ".tiff", + ".tif", + ".mov", + ".mp4", + ".avi" + ] + }, + { + "product_type": "render", + "identifier": "", + "label": "Render", + "icon": "mdi.folder-multiple-image", + "default_variants": [], + "description": "Rendered images or video", + "detailed_description": "Sequence or single file renders", + "allow_sequences": True, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".exr", + ".png", + ".dpx", + ".jpg", + ".jpeg", + ".tiff", + ".tif", + ".mov", + ".mp4", + ".avi" + ] + }, + { + "product_type": "camera", + "identifier": "", + "label": "Camera", + "icon": "fa.video-camera", + "default_variants": [], + "description": "3d Camera", + "detailed_description": "Ideally this should be only camera itself with baked animation, however, it can technically also include helper geometry.", + "allow_sequences": False, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".abc", + ".ma", + ".hip", + ".blend", + ".fbx", + ".usd" + ] + }, + { + "product_type": "image", + "identifier": "", + "label": "Image", + "icon": "fa.image", + "default_variants": [ + "Reference", + "Texture", + "Concept", + "Background" + ], + "description": "Single image", + "detailed_description": "Any image data can be published as image product type. References, textures, concept art, matte paints. 
This is a fallback 2d product type for everything that doesn't fit more specific product type.", + "allow_sequences": False, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".exr", + ".jpg", + ".jpeg", + ".dpx", + ".bmp", + ".tif", + ".tiff", + ".png", + ".psb", + ".psd" + ] + }, + { + "product_type": "vdb", + "identifier": "", + "label": "VDB Volumes", + "icon": "fa.cloud", + "default_variants": [], + "description": "Sparse volumetric data", + "detailed_description": "Hierarchical data structure for the efficient storage and manipulation of sparse volumetric data discretized on three-dimensional grids", + "allow_sequences": True, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [ + ".vdb" + ] + }, + { + "product_type": "matchmove", + "identifier": "", + "label": "Matchmove", + "icon": "fa.empire", + "default_variants": [ + "Camera", + "Object", + "Mocap" + ], + "description": "Matchmoving script", + "detailed_description": "Script exported from matchmoving application to be later processed into a tracked camera with additional data", + "allow_sequences": False, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [] + }, + { + "product_type": "rig", + "identifier": "", + "label": "Rig", + "icon": "fa.wheelchair", + "default_variants": [], + "description": "CG rig file", + "detailed_description": "CG rigged character or prop. Rig should be clean of any extra data and directly loadable into it's respective application\t", + "allow_sequences": False, + "allow_multiple_items": False, + "allow_version_control": False, + "extensions": [ + ".ma", + ".blend", + ".hip", + ".hda" + ] + }, + { + "product_type": "simpleUnrealTexture", + "identifier": "", + "label": "Simple UE texture", + "icon": "fa.image", + "default_variants": [], + "description": "Simple Unreal Engine texture", + "detailed_description": "Texture files with Unreal Engine naming conventions", + "allow_sequences": False, + "allow_multiple_items": True, + "allow_version_control": False, + "extensions": [] + } +] diff --git a/server_addon/traypublisher/server/version.py b/server_addon/traypublisher/server/version.py new file mode 100644 index 0000000000..df0c92f1e2 --- /dev/null +++ b/server_addon/traypublisher/server/version.py @@ -0,0 +1,3 @@ +# -*- coding: utf-8 -*- +"""Package declaring addon version.""" +__version__ = "0.1.2" diff --git a/server_addon/tvpaint/server/__init__.py b/server_addon/tvpaint/server/__init__.py new file mode 100644 index 0000000000..033d7d3792 --- /dev/null +++ b/server_addon/tvpaint/server/__init__.py @@ -0,0 +1,17 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import TvpaintSettings, DEFAULT_VALUES + + +class TvpaintAddon(BaseServerAddon): + name = "tvpaint" + title = "TVPaint" + version = __version__ + settings_model: Type[TvpaintSettings] = TvpaintSettings + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/tvpaint/server/settings/__init__.py b/server_addon/tvpaint/server/settings/__init__.py new file mode 100644 index 0000000000..abee32e897 --- /dev/null +++ b/server_addon/tvpaint/server/settings/__init__.py @@ -0,0 +1,10 @@ +from .main import ( + TvpaintSettings, + DEFAULT_VALUES, +) + + +__all__ = ( + "TvpaintSettings", + "DEFAULT_VALUES", +) diff --git a/server_addon/tvpaint/server/settings/create_plugins.py 
b/server_addon/tvpaint/server/settings/create_plugins.py new file mode 100644 index 0000000000..349bfdd288 --- /dev/null +++ b/server_addon/tvpaint/server/settings/create_plugins.py @@ -0,0 +1,133 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + + +class CreateWorkfileModel(BaseSettingsModel): + enabled: bool = Field(True) + default_variant: str = Field(title="Default variant") + default_variants: list[str] = Field( + default_factory=list, title="Default variants") + + +class CreateReviewModel(BaseSettingsModel): + enabled: bool = Field(True) + active_on_create: bool = Field(True, title="Active by default") + default_variant: str = Field(title="Default variant") + default_variants: list[str] = Field( + default_factory=list, title="Default variants") + + +class CreateRenderSceneModel(BaseSettingsModel): + enabled: bool = Field(True) + active_on_create: bool = Field(True, title="Active by default") + mark_for_review: bool = Field(True, title="Review by default") + default_pass_name: str = Field(title="Default beauty pass") + default_variant: str = Field(title="Default variant") + default_variants: list[str] = Field( + default_factory=list, title="Default variants") + + +class CreateRenderLayerModel(BaseSettingsModel): + mark_for_review: bool = Field(True, title="Review by default") + default_pass_name: str = Field(title="Default beauty pass") + default_variant: str = Field(title="Default variant") + default_variants: list[str] = Field( + default_factory=list, title="Default variants") + + +class CreateRenderPassModel(BaseSettingsModel): + mark_for_review: bool = Field(True, title="Review by default") + default_variant: str = Field(title="Default variant") + default_variants: list[str] = Field( + default_factory=list, title="Default variants") + + +class AutoDetectCreateRenderModel(BaseSettingsModel): + """The creator tries to auto-detect Render Layers and Render Passes in the scene. + + The group name is used as the variant for Render Layers and the TVPaint + layer name is used for Render Passes. + + Groups can be renamed based on the order in which they are used in the scene. + The renaming template may contain the '{group_index}' formatting key, which is + filled with the group's position index. + - Template: 'L{group_index}' + - Group offset: '10' + - Group padding: '3' + + Would create group names "L010", "L020", ... 
 + """ + + enabled: bool = Field(True) + allow_group_rename: bool = Field(title="Allow group rename") + group_name_template: str = Field(title="Group name template") + group_idx_offset: int = Field(1, title="Group index Offset", ge=1) + group_idx_padding: int = Field(4, title="Group index Padding", ge=1) + + +class CreatePluginsModel(BaseSettingsModel): + create_workfile: CreateWorkfileModel = Field( + default_factory=CreateWorkfileModel, + title="Create Workfile" + ) + create_review: CreateReviewModel = Field( + default_factory=CreateReviewModel, + title="Create Review" + ) + create_render_scene: CreateRenderSceneModel = Field( + default_factory=CreateRenderSceneModel, + title="Create Render Scene" + ) + create_render_layer: CreateRenderLayerModel = Field( + default_factory=CreateRenderLayerModel, + title="Create Render Layer" + ) + create_render_pass: CreateRenderPassModel = Field( + default_factory=CreateRenderPassModel, + title="Create Render Pass" + ) + auto_detect_render: AutoDetectCreateRenderModel = Field( + default_factory=AutoDetectCreateRenderModel, + title="Auto-Detect Create Render", + ) + + +DEFAULT_CREATE_SETTINGS = { + "create_workfile": { + "enabled": True, + "default_variant": "Main", + "default_variants": [] + }, + "create_review": { + "enabled": True, + "active_on_create": True, + "default_variant": "Main", + "default_variants": [] + }, + "create_render_scene": { + "enabled": True, + "active_on_create": False, + "mark_for_review": True, + "default_pass_name": "beauty", + "default_variant": "Main", + "default_variants": [] + }, + "create_render_layer": { + "mark_for_review": False, + "default_pass_name": "beauty", + "default_variant": "Main", + "default_variants": [] + }, + "create_render_pass": { + "mark_for_review": False, + "default_variant": "Main", + "default_variants": [] + }, + "auto_detect_render": { + "enabled": False, + "allow_group_rename": True, + "group_name_template": "L{group_index}", + "group_idx_offset": 10, + "group_idx_padding": 3 + } +} diff --git a/server_addon/tvpaint/server/settings/filters.py b/server_addon/tvpaint/server/settings/filters.py new file mode 100644 index 0000000000..009febae06 --- /dev/null +++ b/server_addon/tvpaint/server/settings/filters.py @@ -0,0 +1,19 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel + + +class FiltersSubmodel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: str = Field( + "", + title="Textarea", + widget="textarea", + ) + + +class PublishFiltersModel(BaseSettingsModel): + env_search_replace_values: list[FiltersSubmodel] = Field( + default_factory=list + ) diff --git a/server_addon/tvpaint/server/settings/imageio.py b/server_addon/tvpaint/server/settings/imageio.py new file mode 100644 index 0000000000..50f8b7eef4 --- /dev/null +++ b/server_addon/tvpaint/server/settings/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class 
ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class TVPaintImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/tvpaint/server/settings/main.py b/server_addon/tvpaint/server/settings/main.py new file mode 100644 index 0000000000..4cd6ac4b1a --- /dev/null +++ b/server_addon/tvpaint/server/settings/main.py @@ -0,0 +1,90 @@ +from pydantic import Field, validator +from ayon_server.settings import ( + BaseSettingsModel, + ensure_unique_names, +) + +from .imageio import TVPaintImageIOModel +from .workfile_builder import WorkfileBuilderPlugin +from .create_plugins import CreatePluginsModel, DEFAULT_CREATE_SETTINGS +from .publish_plugins import ( + PublishPluginsModel, + LoadPluginsModel, + DEFAULT_PUBLISH_SETTINGS, +) + + +class PublishGUIFilterItemModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: bool = Field(True, title="Active") + + +class PublishGUIFiltersModel(BaseSettingsModel): + _layout = "compact" + name: str = Field(title="Name") + value: list[PublishGUIFilterItemModel] = Field(default_factory=list) + + @validator("value") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class TvpaintSettings(BaseSettingsModel): + imageio: TVPaintImageIOModel = Field( + default_factory=TVPaintImageIOModel, + title="Color Management (ImageIO)" + ) + stop_timer_on_application_exit: bool = Field( + title="Stop timer on application exit") + create: CreatePluginsModel = Field( + default_factory=CreatePluginsModel, + title="Create plugins" + ) + publish: PublishPluginsModel = Field( + default_factory=PublishPluginsModel, + title="Publish plugins") + load: LoadPluginsModel = Field( + default_factory=LoadPluginsModel, + title="Load plugins") + workfile_builder: WorkfileBuilderPlugin = Field( + default_factory=WorkfileBuilderPlugin, + title="Workfile Builder" + ) + filters: list[PublishGUIFiltersModel] = Field( + default_factory=list, + title="Publish GUI Filters") + + @validator("filters") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +DEFAULT_VALUES = { + "stop_timer_on_application_exit": False, + "create": DEFAULT_CREATE_SETTINGS, + "publish": DEFAULT_PUBLISH_SETTINGS, + "load": { + "LoadImage": { + "defaults": { + "stretch": True, + "timestretch": True, + "preload": True + } + }, + "ImportImage": { + "defaults": { + "stretch": True, + "timestretch": True, + "preload": True + } + } + }, + "workfile_builder": { + "create_first_version": False, + "custom_templates": [] + }, + "filters": [] +} diff --git a/server_addon/tvpaint/server/settings/publish_plugins.py b/server_addon/tvpaint/server/settings/publish_plugins.py new file mode 100644 index 0000000000..76c7eaac01 --- /dev/null +++ b/server_addon/tvpaint/server/settings/publish_plugins.py @@ -0,0 +1,132 @@ +from pydantic import Field + +from ayon_server.settings import BaseSettingsModel +from ayon_server.types import ColorRGBA_uint8 + + +class 
CollectRenderInstancesModel(BaseSettingsModel): + ignore_render_pass_transparency: bool = Field( + title="Ignore Render Pass opacity" + ) + + +class ExtractSequenceModel(BaseSettingsModel): + """Review BG color is used for whole scene review and for thumbnails.""" + # TODO Use alpha color + review_bg: ColorRGBA_uint8 = Field( + (255, 255, 255, 1.0), + title="Review BG color") + + +class ValidatePluginModel(BaseSettingsModel): + enabled: bool = True + optional: bool = Field(True, title="Optional") + active: bool = Field(True, title="Active") + + +def compression_enum(): + return [ + {"value": "ZIP", "label": "ZIP"}, + {"value": "ZIPS", "label": "ZIPS"}, + {"value": "DWAA", "label": "DWAA"}, + {"value": "DWAB", "label": "DWAB"}, + {"value": "PIZ", "label": "PIZ"}, + {"value": "RLE", "label": "RLE"}, + {"value": "PXR24", "label": "PXR24"}, + {"value": "B44", "label": "B44"}, + {"value": "B44A", "label": "B44A"}, + {"value": "none", "label": "None"} + ] + + +class ExtractConvertToEXRModel(BaseSettingsModel): + """WARNING: This plugin does not work on MacOS (using OIIO tool).""" + enabled: bool = False + replace_pngs: bool = True + + exr_compression: str = Field( + "ZIP", + enum_resolver=compression_enum, + title="EXR Compression" + ) + + +class LoadImageDefaultModel(BaseSettingsModel): + _layout = "expanded" + stretch: bool = Field(title="Stretch") + timestretch: bool = Field(title="TimeStretch") + preload: bool = Field(title="Preload") + + +class LoadImageModel(BaseSettingsModel): + defaults: LoadImageDefaultModel = Field( + default_factory=LoadImageDefaultModel + ) + + +class PublishPluginsModel(BaseSettingsModel): + CollectRenderInstances: CollectRenderInstancesModel = Field( + default_factory=CollectRenderInstancesModel, + title="Collect Render Instances") + ExtractSequence: ExtractSequenceModel = Field( + default_factory=ExtractSequenceModel, + title="Extract Sequence") + ValidateProjectSettings: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Project Settings") + ValidateMarks: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate MarkIn/Out") + ValidateStartFrame: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Scene Start Frame") + ValidateAssetName: ValidatePluginModel = Field( + default_factory=ValidatePluginModel, + title="Validate Folder Name") + ExtractConvertToEXR: ExtractConvertToEXRModel = Field( + default_factory=ExtractConvertToEXRModel, + title="Extract Convert To EXR") + + +class LoadPluginsModel(BaseSettingsModel): + LoadImage: LoadImageModel = Field( + default_factory=LoadImageModel, + title="Load Image") + ImportImage: LoadImageModel = Field( + default_factory=LoadImageModel, + title="Import Image") + + +DEFAULT_PUBLISH_SETTINGS = { + "CollectRenderInstances": { + "ignore_render_pass_transparency": False + }, + "ExtractSequence": { + "review_bg": [255, 255, 255, 1.0] + }, + "ValidateProjectSettings": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateMarks": { + "enabled": True, + "optional": True, + "active": True + }, + "ValidateStartFrame": { + "enabled": False, + "optional": True, + "active": True + }, + "ValidateAssetName": { + "enabled": True, + "optional": True, + "active": True + }, + "ExtractConvertToEXR": { + "enabled": False, + "replace_pngs": True, + "exr_compression": "ZIP" + } +} diff --git a/server_addon/tvpaint/server/settings/workfile_builder.py b/server_addon/tvpaint/server/settings/workfile_builder.py new file mode 100644 
index 0000000000..e0aba5da7e --- /dev/null +++ b/server_addon/tvpaint/server/settings/workfile_builder.py @@ -0,0 +1,30 @@ +from pydantic import Field + +from ayon_server.settings import ( + BaseSettingsModel, + MultiplatformPathModel, + task_types_enum, +) + + +class CustomBuilderTemplate(BaseSettingsModel): + task_types: list[str] = Field( + default_factory=list, + title="Task types", + enum_resolver=task_types_enum + ) + template_path: MultiplatformPathModel = Field( + default_factory=MultiplatformPathModel + ) + + +class WorkfileBuilderPlugin(BaseSettingsModel): + _title = "Workfile Builder" + create_first_version: bool = Field( + False, + title="Create first workfile" + ) + + custom_templates: list[CustomBuilderTemplate] = Field( + default_factory=list + ) diff --git a/server_addon/tvpaint/server/version.py b/server_addon/tvpaint/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/tvpaint/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/server_addon/unreal/server/__init__.py b/server_addon/unreal/server/__init__.py new file mode 100644 index 0000000000..a5f3e9597d --- /dev/null +++ b/server_addon/unreal/server/__init__.py @@ -0,0 +1,19 @@ +from typing import Type + +from ayon_server.addons import BaseServerAddon + +from .version import __version__ +from .settings import UnrealSettings, DEFAULT_VALUES + + +class UnrealAddon(BaseServerAddon): + name = "unreal" + title = "Unreal" + version = __version__ + settings_model: Type[UnrealSettings] = UnrealSettings + frontend_scopes = {} + services = {} + + async def get_default_settings(self): + settings_model_cls = self.get_settings_model() + return settings_model_cls(**DEFAULT_VALUES) diff --git a/server_addon/unreal/server/imageio.py b/server_addon/unreal/server/imageio.py new file mode 100644 index 0000000000..dde042ba47 --- /dev/null +++ b/server_addon/unreal/server/imageio.py @@ -0,0 +1,48 @@ +from pydantic import Field, validator +from ayon_server.settings import BaseSettingsModel +from ayon_server.settings.validators import ensure_unique_names + + +class ImageIOConfigModel(BaseSettingsModel): + override_global_config: bool = Field( + False, + title="Override global OCIO config" + ) + filepath: list[str] = Field( + default_factory=list, + title="Config path" + ) + + +class ImageIOFileRuleModel(BaseSettingsModel): + name: str = Field("", title="Rule name") + pattern: str = Field("", title="Regex pattern") + colorspace: str = Field("", title="Colorspace name") + ext: str = Field("", title="File extension") + + +class ImageIOFileRulesModel(BaseSettingsModel): + activate_host_rules: bool = Field(False) + rules: list[ImageIOFileRuleModel] = Field( + default_factory=list, + title="Rules" + ) + + @validator("rules") + def validate_unique_outputs(cls, value): + ensure_unique_names(value) + return value + + +class UnrealImageIOModel(BaseSettingsModel): + activate_host_color_management: bool = Field( + True, title="Enable Color Management" + ) + ocio_config: ImageIOConfigModel = Field( + default_factory=ImageIOConfigModel, + title="OCIO config" + ) + file_rules: ImageIOFileRulesModel = Field( + default_factory=ImageIOFileRulesModel, + title="File Rules" + ) diff --git a/server_addon/unreal/server/settings.py b/server_addon/unreal/server/settings.py new file mode 100644 index 0000000000..479e041e25 --- /dev/null +++ b/server_addon/unreal/server/settings.py @@ -0,0 +1,64 @@ +from pydantic import Field +from ayon_server.settings import BaseSettingsModel + +from 
.imageio import UnrealImageIOModel + + +class ProjectSetup(BaseSettingsModel): + dev_mode: bool = Field( + False, + title="Dev mode" + ) + + +def _render_format_enum(): + return [ + {"value": "png", "label": "PNG"}, + {"value": "exr", "label": "EXR"}, + {"value": "jpg", "label": "JPG"}, + {"value": "bmp", "label": "BMP"} + ] + + +class UnrealSettings(BaseSettingsModel): + imageio: UnrealImageIOModel = Field( + default_factory=UnrealImageIOModel, + title="Color Management (ImageIO)" + ) + level_sequences_for_layouts: bool = Field( + False, + title="Generate level sequences when loading layouts" + ) + delete_unmatched_assets: bool = Field( + False, + title="Delete assets that are not matched" + ) + render_config_path: str = Field( + "", + title="Render Config Path" + ) + preroll_frames: int = Field( + 0, + title="Pre-roll frames" + ) + render_format: str = Field( + "png", + title="Render format", + enum_resolver=_render_format_enum + ) + project_setup: ProjectSetup = Field( + default_factory=ProjectSetup, + title="Project Setup", + ) + + +DEFAULT_VALUES = { + "level_sequences_for_layouts": False, + "delete_unmatched_assets": False, + "render_config_path": "", + "preroll_frames": 0, + "render_format": "png", + "project_setup": { + "dev_mode": False + } +} diff --git a/server_addon/unreal/server/version.py b/server_addon/unreal/server/version.py new file mode 100644 index 0000000000..3dc1f76bc6 --- /dev/null +++ b/server_addon/unreal/server/version.py @@ -0,0 +1 @@ +__version__ = "0.1.0" diff --git a/setup.py b/setup.py index 260728dde6..4b6f286730 100644 --- a/setup.py +++ b/setup.py @@ -1,7 +1,6 @@ # -*- coding: utf-8 -*- """Setup info for building OpenPype 3.0.""" import os -import sys import re import platform import distutils.spawn @@ -125,7 +124,6 @@ bin_includes = [ include_files = [ "igniter", "openpype", - "common", "schema", "LICENSE", "README.md" @@ -170,22 +168,7 @@ executables = [ target_name="openpype_console", icon=icon_path.as_posix() ), - Executable( - "ayon_start.py", - base=base, - target_name="ayon", - icon=icon_path.as_posix() - ), ] -if IS_WINDOWS: - executables.append( - Executable( - "ayon_start.py", - base=None, - target_name="ayon_console", - icon=icon_path.as_posix() - ) - ) if IS_LINUX: executables.append( diff --git a/common/ayon_common/distribution/file_handler.py b/tests/lib/file_handler.py similarity index 100% rename from common/ayon_common/distribution/file_handler.py rename to tests/lib/file_handler.py diff --git a/tests/lib/testing_classes.py b/tests/lib/testing_classes.py index f04607dc27..2af4af02de 100644 --- a/tests/lib/testing_classes.py +++ b/tests/lib/testing_classes.py @@ -12,7 +12,7 @@ import requests import re from tests.lib.db_handler import DBHandler -from common.ayon_common.distribution.file_handler import RemoteFileHandler +from tests.lib.file_handler import RemoteFileHandler from openpype.modules import ModulesManager from openpype.settings import get_project_settings diff --git a/tools/run_tray_ayon.ps1 b/tools/run_tray_ayon.ps1 deleted file mode 100644 index 54a80f93fd..0000000000 --- a/tools/run_tray_ayon.ps1 +++ /dev/null @@ -1,41 +0,0 @@ -<# -.SYNOPSIS - Helper script AYON Tray. 
- -.DESCRIPTION - - -.EXAMPLE - -PS> .\run_tray.ps1 - -#> -$current_dir = Get-Location -$script_dir = Split-Path -Path $MyInvocation.MyCommand.Definition -Parent -$ayon_root = (Get-Item $script_dir).parent.FullName - -# Install PSWriteColor to support colorized output to terminal -$env:PSModulePath = $env:PSModulePath + ";$($ayon_root)\tools\modules\powershell" - -$env:_INSIDE_OPENPYPE_TOOL = "1" - -# make sure Poetry is in PATH -if (-not (Test-Path 'env:POETRY_HOME')) { - $env:POETRY_HOME = "$ayon_root\.poetry" -} -$env:PATH = "$($env:PATH);$($env:POETRY_HOME)\bin" - - -Set-Location -Path $ayon_root - -Write-Color -Text ">>> ", "Reading Poetry ... " -Color Green, Gray -NoNewline -if (-not (Test-Path -PathType Container -Path "$($env:POETRY_HOME)\bin")) { - Write-Color -Text "NOT FOUND" -Color Yellow - Write-Color -Text "*** ", "We need to install Poetry create virtual env first ..." -Color Yellow, Gray - & "$ayon_root\tools\create_env.ps1" -} else { - Write-Color -Text "OK" -Color Green -} - -& "$($env:POETRY_HOME)\bin\poetry" run python "$($ayon_root)\ayon_start.py" tray --debug -Set-Location -Path $current_dir diff --git a/tools/run_tray_ayon.sh b/tools/run_tray_ayon.sh deleted file mode 100755 index 3039750b87..0000000000 --- a/tools/run_tray_ayon.sh +++ /dev/null @@ -1,78 +0,0 @@ -#!/usr/bin/env bash -# Run AYON Tray - -# Colors for terminal - -RST='\033[0m' # Text Reset - -# Regular Colors -Black='\033[0;30m' # Black -Red='\033[0;31m' # Red -Green='\033[0;32m' # Green -Yellow='\033[0;33m' # Yellow -Blue='\033[0;34m' # Blue -Purple='\033[0;35m' # Purple -Cyan='\033[0;36m' # Cyan -White='\033[0;37m' # White - -# Bold -BBlack='\033[1;30m' # Black -BRed='\033[1;31m' # Red -BGreen='\033[1;32m' # Green -BYellow='\033[1;33m' # Yellow -BBlue='\033[1;34m' # Blue -BPurple='\033[1;35m' # Purple -BCyan='\033[1;36m' # Cyan -BWhite='\033[1;37m' # White - -# Bold High Intensity -BIBlack='\033[1;90m' # Black -BIRed='\033[1;91m' # Red -BIGreen='\033[1;92m' # Green -BIYellow='\033[1;93m' # Yellow -BIBlue='\033[1;94m' # Blue -BIPurple='\033[1;95m' # Purple -BICyan='\033[1;96m' # Cyan -BIWhite='\033[1;97m' # White - - -############################################################################## -# Return absolute path -# Globals: -# None -# Arguments: -# Path to resolve -# Returns: -# None -############################################################################### -realpath () { - echo $(cd $(dirname "$1"); pwd)/$(basename "$1") -} - -# Main -main () { - # Directories - ayon_root=$(realpath $(dirname $(dirname "${BASH_SOURCE[0]}"))) - - _inside_openpype_tool="1" - - if [[ -z $POETRY_HOME ]]; then - export POETRY_HOME="$ayon_root/.poetry" - fi - - echo -e "${BIGreen}>>>${RST} Reading Poetry ... \c" - if [ -f "$POETRY_HOME/bin/poetry" ]; then - echo -e "${BIGreen}OK${RST}" - else - echo -e "${BIYellow}NOT FOUND${RST}" - echo -e "${BIYellow}***${RST} We need to install Poetry and virtual env ..." - . "$ayon_root/tools/create_env.sh" || { echo -e "${BIRed}!!!${RST} Poetry installation failed"; return; } - fi - - pushd "$ayon_root" > /dev/null || return > /dev/null - - echo -e "${BIGreen}>>>${RST} Running AYON Tray with debug option ..." 
- "$POETRY_HOME/bin/poetry" run python3 "$ayon_root/ayon_start.py" tray --debug -} - -main diff --git a/website/docs/admin_hosts_resolve.md b/website/docs/admin_hosts_resolve.md index 09e7df1d9f..8bb8440f78 100644 --- a/website/docs/admin_hosts_resolve.md +++ b/website/docs/admin_hosts_resolve.md @@ -4,100 +4,38 @@ title: DaVinci Resolve Setup sidebar_label: DaVinci Resolve --- -import Tabs from '@theme/Tabs'; -import TabItem from '@theme/TabItem'; +:::warning +Only Resolve Studio is supported due to Python API limitation in Resolve (free). +::: ## Resolve requirements Due to the way resolve handles python and python scripts there are a few steps required steps needed to be done on any machine that will be using OpenPype with resolve. -### Installing Resolve's own python 3.6 interpreter. -Resolve uses a hardcoded method to look for the python executable path. All of tho following paths are defined automatically by Python msi installer. We are using Python 3.6.2. +## Basic setup - +- Supported version is up to v18 +- Install Python 3.6.2 (latest tested v17) or up to 3.9.13 (latest tested on v18) +- pip install PySide2: + - Python 3.9.*: open terminal and go to python.exe directory, then `python -m pip install PySide2` +- pip install OpenTimelineIO: + - Python 3.9.*: open terminal and go to python.exe directory, then `python -m pip install OpenTimelineIO` + - Python 3.6: open terminal and go to python.exe directory, then `python -m pip install git+https://github.com/PixarAnimationStudios/OpenTimelineIO.git@5aa24fbe89d615448876948fe4b4900455c9a3e8` and move built files from `./Lib/site-packages/opentimelineio/cxx-libs/bin and lib` to `./Lib/site-packages/opentimelineio/`. I was building it on Win10 machine with Visual Studio Community 2019 and + ![image](https://user-images.githubusercontent.com/40640033/102792588-ffcb1c80-43a8-11eb-9c6b-bf2114ed578e.png) with installed CMake in PATH. +- make sure Resolve Fusion (Fusion Tab/menu/Fusion/Fusion Settings) is set to Python 3.6 + ![image](https://user-images.githubusercontent.com/40640033/102631545-280b0f00-414e-11eb-89fc-98ac268d209d.png) +- Open OpenPype **Tray/Admin/Studio settings** > `applications/resolve/environment` and add Python3 path to `RESOLVE_PYTHON3_HOME` platform related. - +## Editorial setup -`%LOCALAPPDATA%\Programs\Python\Python36` +This is how it looks on my testing project timeline +![image](https://user-images.githubusercontent.com/40640033/102637638-96ec6600-4156-11eb-9656-6e8e3ce4baf8.png) +Notice I had renamed tracks to `main` (holding metadata markers) and `review` used for generating review data with ffmpeg confersion to jpg sequence. - - - -`/opt/Python/3.6/bin` - - - - -`~/Library/Python/3.6/bin` - - - - - -### Installing PySide2 into python 3.6 for correct gui work - -OpenPype is using its own window widget inside Resolve, for that reason PySide2 has to be installed into the python 3.6 (as explained above). - - - - - -paste to any terminal of your choice - -```bash -%LOCALAPPDATA%\Programs\Python\Python36\python.exe -m pip install PySide2 -``` - - - - -paste to any terminal of your choice - -```bash -/opt/Python/3.6/bin/python -m pip install PySide2 -``` - - - - -paste to any terminal of your choice - -```bash -~/Library/Python/3.6/bin/python -m pip install PySide2 -``` - - - - -

- -### Set Resolve's Fusion settings for Python 3.6 interpereter - -
- - -As it is shown in below picture you have to go to Fusion Tab and then in Fusion menu find Fusion Settings. Go to Fusion/Script and find Default Python Version and switch to Python 3.6 - -
- -
- -![Create menu](assets/resolve_fusion_tab.png) -![Create menu](assets/resolve_fusion_menu.png) -![Create menu](assets/resolve_fusion_script_settings.png) - -
-
\ No newline at end of file +1. Start the OpenPype menu from Resolve/EditTab/Menu/Workspace/Scripts/Comp/**__OpenPype_Menu__** +2. Select any clips in the `main` track and change their color to `Chocolate` +3. In the OpenPype menu select `Create` +4. In the Creator select `Create Publishable Clip [New]` (temporary name) +5. Set `Rename clips` to True, Master Track to `main` and Use review track to `review`, as in the picture + ![image](https://user-images.githubusercontent.com/40640033/102643773-0d419600-4160-11eb-919e-9c2be0aecab8.png) +6. After you hit `ok`, all clips are colored `Pink` and marked with an OpenPype metadata tag +7. Hit `Publish` in the OpenPype menu and check that everything has been collected correctly. That is the last step for now, as the rest is work in progress. Next steps will follow.
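For step 2 above, the clip re-coloring can also be done programmatically instead of by hand. Below is a minimal sketch, not part of OpenPype, assuming Resolve Studio with its scripting API enabled and the `DaVinciResolveScript` module importable; the track index is an illustrative assumption and needs to match your timeline.

```python
# Illustrative sketch only (not part of OpenPype): color every clip on one
# video track "Chocolate" through the Resolve Studio scripting API.
import DaVinciResolveScript as dvr  # assumes Resolve's scripting module is on PYTHONPATH

resolve = dvr.scriptapp("Resolve")
timeline = resolve.GetProjectManager().GetCurrentProject().GetCurrentTimeline()

# Assumption: the "main" track is video track 1; adjust the index to your timeline.
for item in timeline.GetItemListInTrack("video", 1):
    item.SetClipColor("Chocolate")
    print(item.GetName(), "->", item.GetClipColor())
```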
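The `review` track mentioned in the editorial setup is used to generate review data with an ffmpeg conversion to a jpg sequence. Purely as an illustration of that kind of conversion (the file names are made up and this is not the exact command OpenPype runs), a rendered review movie can be turned into a numbered jpg sequence like this:

```python
# Illustration only: convert a rendered review movie into a numbered JPG
# sequence with ffmpeg; "review_input.mov" and the output pattern are made-up names.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-y",
        "-i", "review_input.mov",  # source movie rendered from the review track
        "-qscale:v", "2",          # high JPEG quality
        "review.%04d.jpg",         # numbered image sequence output
    ],
    check=True,
)
```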