mirror of
https://github.com/ynput/ayon-core.git
synced 2025-12-24 21:04:40 +01:00
Merge branch 'develop' into enhancement/houdini-hda-workflow
Commit 01dfbbcf8a
128 changed files with 4825 additions and 1159 deletions
118  CHANGELOG.md
@@ -1,23 +1,80 @@
# Changelog

## [3.5.0-nightly.6](https://github.com/pypeclub/OpenPype/tree/HEAD)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.4.1...HEAD)

**Deprecated:**

- Maya: Change mayaAscii family to mayaScene [\#2106](https://github.com/pypeclub/OpenPype/pull/2106)

**🆕 New features**

- Added running configurable disk mapping command before start of OP [\#2091](https://github.com/pypeclub/OpenPype/pull/2091)
- SFTP provider [\#2073](https://github.com/pypeclub/OpenPype/pull/2073)
- Maya: Validate setdress top group [\#2068](https://github.com/pypeclub/OpenPype/pull/2068)

**🚀 Enhancements**

- Settings UI: Project model refreshing and sorting [\#2104](https://github.com/pypeclub/OpenPype/pull/2104)
- Create Read From Rendered - Disable Relative paths by default [\#2093](https://github.com/pypeclub/OpenPype/pull/2093)
- Added choosing different dirmap mapping if workfile synched locally [\#2088](https://github.com/pypeclub/OpenPype/pull/2088)
- General: Remove IdleManager module [\#2084](https://github.com/pypeclub/OpenPype/pull/2084)
- Tray UI: Message box about missing settings defaults [\#2080](https://github.com/pypeclub/OpenPype/pull/2080)
- Tray UI: Show menu where first click happened [\#2079](https://github.com/pypeclub/OpenPype/pull/2079)
- Global: add global validators to settings [\#2078](https://github.com/pypeclub/OpenPype/pull/2078)
- Use CRF for burnin when available [\#2070](https://github.com/pypeclub/OpenPype/pull/2070)
- Project manager: Filter first item after selection of project [\#2069](https://github.com/pypeclub/OpenPype/pull/2069)
- Nuke: Adding `still` image family workflow [\#2064](https://github.com/pypeclub/OpenPype/pull/2064)
- Maya: validate authorized loaded plugins [\#2062](https://github.com/pypeclub/OpenPype/pull/2062)
- Tools: add support for pyenv on windows [\#2051](https://github.com/pypeclub/OpenPype/pull/2051)

**🐛 Bug fixes**

- General: Disk mapping group [\#2120](https://github.com/pypeclub/OpenPype/pull/2120)
- Hiero: publishing effect first time makes wrong resources path [\#2115](https://github.com/pypeclub/OpenPype/pull/2115)
- Add startup script for Houdini Core. [\#2110](https://github.com/pypeclub/OpenPype/pull/2110)
- TVPaint: Behavior name of loop also accept repeat [\#2109](https://github.com/pypeclub/OpenPype/pull/2109)
- Ftrack: Project settings save custom attributes skip unknown attributes [\#2103](https://github.com/pypeclub/OpenPype/pull/2103)
- Fix broken import in sftp provider [\#2100](https://github.com/pypeclub/OpenPype/pull/2100)
- Global: Fix docstring on publish plugin extract review [\#2097](https://github.com/pypeclub/OpenPype/pull/2097)
- General: Cloud mongo ca certificate issue [\#2095](https://github.com/pypeclub/OpenPype/pull/2095)
- TVPaint: Creator use context from workfile [\#2087](https://github.com/pypeclub/OpenPype/pull/2087)
- Blender: fix texture missing when publishing blend files [\#2085](https://github.com/pypeclub/OpenPype/pull/2085)
- General: Startup validations oiio tool path fix on linux [\#2083](https://github.com/pypeclub/OpenPype/pull/2083)
- Deadline: Collect deadline server does not check existence of deadline key [\#2082](https://github.com/pypeclub/OpenPype/pull/2082)
- Blender: fixed Curves with modifiers in Rigs [\#2081](https://github.com/pypeclub/OpenPype/pull/2081)
- Maya: Fix multi-camera renders [\#2065](https://github.com/pypeclub/OpenPype/pull/2065)
- Fix Sync Queue when project disabled [\#2063](https://github.com/pypeclub/OpenPype/pull/2063)

**Merged pull requests:**

- Blender: Fix NoneType error when animation\_data is missing for a rig [\#2101](https://github.com/pypeclub/OpenPype/pull/2101)
- Delivery Action Files Sequence fix [\#2096](https://github.com/pypeclub/OpenPype/pull/2096)
- Bump pywin32 from 300 to 301 [\#2086](https://github.com/pypeclub/OpenPype/pull/2086)
- Nuke UI scaling [\#2077](https://github.com/pypeclub/OpenPype/pull/2077)

## [3.4.1](https://github.com/pypeclub/OpenPype/tree/3.4.1) (2021-09-23)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.4.0...3.4.1)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.4.1-nightly.1...3.4.1)

**🆕 New features**

- Settings: Flag project as deactivated and hide from tools' view [\#2008](https://github.com/pypeclub/OpenPype/pull/2008)

**🚀 Enhancements**

- General: Startup validations [\#2054](https://github.com/pypeclub/OpenPype/pull/2054)
- Nuke: proxy mode validator [\#2052](https://github.com/pypeclub/OpenPype/pull/2052)
- Ftrack: Removed ftrack interface [\#2049](https://github.com/pypeclub/OpenPype/pull/2049)
- Settings UI: Deffered set value on entity [\#2044](https://github.com/pypeclub/OpenPype/pull/2044)
- Loader: Families filtering [\#2043](https://github.com/pypeclub/OpenPype/pull/2043)
- Settings UI: Project view enhancements [\#2042](https://github.com/pypeclub/OpenPype/pull/2042)
- Settings for Nuke IncrementScriptVersion [\#2039](https://github.com/pypeclub/OpenPype/pull/2039)
- Loader & Library loader: Use tools from OpenPype [\#2038](https://github.com/pypeclub/OpenPype/pull/2038)
- Adding predefined project folders creation in PM [\#2030](https://github.com/pypeclub/OpenPype/pull/2030)
- WebserverModule: Removed interface of webserver module [\#2028](https://github.com/pypeclub/OpenPype/pull/2028)
- TimersManager: Removed interface of timers manager [\#2024](https://github.com/pypeclub/OpenPype/pull/2024)
- Feature Maya import asset from scene inventory [\#2018](https://github.com/pypeclub/OpenPype/pull/2018)
- Settings: Flag project as deactivated and hide from tools' view [\#2008](https://github.com/pypeclub/OpenPype/pull/2008)

**🐛 Bug fixes**
@@ -35,14 +92,18 @@
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.4.0-nightly.6...3.4.0)

### 📖 Documentation

- Documentation: Ftrack launch argsuments update [\#2014](https://github.com/pypeclub/OpenPype/pull/2014)

**🆕 New features**

- Nuke: Compatibility with Nuke 13 [\#2003](https://github.com/pypeclub/OpenPype/pull/2003)

**🚀 Enhancements**

- Ftrack: Removed ftrack interface [\#2049](https://github.com/pypeclub/OpenPype/pull/2049)
- Added possibility to configure of synchronization of workfile version… [\#2041](https://github.com/pypeclub/OpenPype/pull/2041)
- Loader & Library loader: Use tools from OpenPype [\#2038](https://github.com/pypeclub/OpenPype/pull/2038)
- General: Task types in profiles [\#2036](https://github.com/pypeclub/OpenPype/pull/2036)
- Console interpreter: Handle invalid sizes on initialization [\#2022](https://github.com/pypeclub/OpenPype/pull/2022)
- Ftrack: Show OpenPype versions in event server status [\#2019](https://github.com/pypeclub/OpenPype/pull/2019)
@ -54,15 +115,6 @@
|
|||
- Configurable items for providers without Settings [\#1987](https://github.com/pypeclub/OpenPype/pull/1987)
|
||||
- Global: Example addons [\#1986](https://github.com/pypeclub/OpenPype/pull/1986)
|
||||
- Standalone Publisher: Extract harmony zip handle workfile template [\#1982](https://github.com/pypeclub/OpenPype/pull/1982)
|
||||
- Settings UI: Number sliders [\#1978](https://github.com/pypeclub/OpenPype/pull/1978)
|
||||
- Workfiles: Support more workfile templates [\#1966](https://github.com/pypeclub/OpenPype/pull/1966)
|
||||
- Launcher: Fix crashes on action click [\#1964](https://github.com/pypeclub/OpenPype/pull/1964)
|
||||
- Settings: Minor fixes in UI and missing default values [\#1963](https://github.com/pypeclub/OpenPype/pull/1963)
|
||||
- Blender: Toggle system console works on windows [\#1962](https://github.com/pypeclub/OpenPype/pull/1962)
|
||||
- Global: Settings defined by Addons/Modules [\#1959](https://github.com/pypeclub/OpenPype/pull/1959)
|
||||
- CI: change release numbering triggers [\#1954](https://github.com/pypeclub/OpenPype/pull/1954)
|
||||
- Global: Avalon Host name collector [\#1949](https://github.com/pypeclub/OpenPype/pull/1949)
|
||||
- OpenPype: Add version validation and `--headless` mode and update progress 🔄 [\#1939](https://github.com/pypeclub/OpenPype/pull/1939)
|
||||
|
||||
**🐛 Bug fixes**
|
||||
|
||||
|
|
@ -77,55 +129,15 @@
|
|||
- Nuke: last version from path gets correct version [\#1990](https://github.com/pypeclub/OpenPype/pull/1990)
|
||||
- nuke, resolve, hiero: precollector order lest then 0.5 [\#1984](https://github.com/pypeclub/OpenPype/pull/1984)
|
||||
- Last workfile with multiple work templates [\#1981](https://github.com/pypeclub/OpenPype/pull/1981)
|
||||
- Collectors order [\#1977](https://github.com/pypeclub/OpenPype/pull/1977)
|
||||
- Stop timer was within validator order range. [\#1975](https://github.com/pypeclub/OpenPype/pull/1975)
|
||||
- Ftrack: arrow submodule has https url source [\#1974](https://github.com/pypeclub/OpenPype/pull/1974)
|
||||
- Ftrack: Fix hosts attribute in collect ftrack username [\#1972](https://github.com/pypeclub/OpenPype/pull/1972)
|
||||
- Deadline: Houdini plugins in different hierarchy [\#1970](https://github.com/pypeclub/OpenPype/pull/1970)
|
||||
- Removed deprecated submodules [\#1967](https://github.com/pypeclub/OpenPype/pull/1967)
|
||||
- Global: ExtractJpeg can handle filepaths with spaces [\#1961](https://github.com/pypeclub/OpenPype/pull/1961)
|
||||
|
||||
### 📖 Documentation
|
||||
|
||||
- Documentation: Ftrack launch argsuments update [\#2014](https://github.com/pypeclub/OpenPype/pull/2014)
|
||||
- Nuke Quick Start / Tutorial [\#1952](https://github.com/pypeclub/OpenPype/pull/1952)
|
||||
|
||||
## [3.3.1](https://github.com/pypeclub/OpenPype/tree/3.3.1) (2021-08-20)
|
||||
|
||||
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.3.1-nightly.1...3.3.1)
|
||||
|
||||
**🐛 Bug fixes**
|
||||
|
||||
- TVPaint: Fixed rendered frame indexes [\#1946](https://github.com/pypeclub/OpenPype/pull/1946)
|
||||
- Maya: Menu actions fix [\#1945](https://github.com/pypeclub/OpenPype/pull/1945)
|
||||
- standalone: editorial shared object problem [\#1941](https://github.com/pypeclub/OpenPype/pull/1941)
|
||||
- Bugfix nuke deadline app name [\#1928](https://github.com/pypeclub/OpenPype/pull/1928)
|
||||
|
||||
## [3.3.0](https://github.com/pypeclub/OpenPype/tree/3.3.0) (2021-08-17)
|
||||
|
||||
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.3.0-nightly.11...3.3.0)
|
||||
|
||||
**🆕 New features**
|
||||
|
||||
- Settings UI: Breadcrumbs in settings [\#1932](https://github.com/pypeclub/OpenPype/pull/1932)
|
||||
|
||||
**🚀 Enhancements**
|
||||
|
||||
- Python console interpreter [\#1940](https://github.com/pypeclub/OpenPype/pull/1940)
|
||||
- Global: Updated logos and Default settings [\#1927](https://github.com/pypeclub/OpenPype/pull/1927)
|
||||
- Check for missing ✨ Python when using `pyenv` [\#1925](https://github.com/pypeclub/OpenPype/pull/1925)
|
||||
|
||||
**🐛 Bug fixes**
|
||||
|
||||
- Fix - ftrack family was added incorrectly in some cases [\#1935](https://github.com/pypeclub/OpenPype/pull/1935)
|
||||
- Fix - Deadline publish on Linux started Tray instead of headless publishing [\#1930](https://github.com/pypeclub/OpenPype/pull/1930)
|
||||
- Maya: Validate Model Name - repair accident deletion in settings defaults [\#1929](https://github.com/pypeclub/OpenPype/pull/1929)
|
||||
- Nuke: submit to farm failed due `ftrack` family remove [\#1926](https://github.com/pypeclub/OpenPype/pull/1926)
|
||||
|
||||
**Merged pull requests:**
|
||||
|
||||
- Fix - make AE workfile publish to Ftrack configurable [\#1937](https://github.com/pypeclub/OpenPype/pull/1937)
|
||||
|
||||
## [3.2.0](https://github.com/pypeclub/OpenPype/tree/3.2.0) (2021-07-13)
|
||||
|
||||
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.2.0-nightly.7...3.2.0)
|
||||
|
|
|
|||
148  igniter/tools.py
@@ -1,18 +1,12 @@
# -*- coding: utf-8 -*-
"""Tools used in **Igniter** GUI.

Functions ``compose_url()`` and ``decompose_url()`` are the same as in
``openpype.lib`` and they are here to avoid importing OpenPype module before its
version is decided.

"""
import sys
"""Tools used in **Igniter** GUI."""
import os
from typing import Dict, Union
from typing import Union
from urllib.parse import urlparse, parse_qs
from pathlib import Path
import platform

import certifi
from pymongo import MongoClient
from pymongo.errors import (
    ServerSelectionTimeoutError,
@ -22,89 +16,32 @@ from pymongo.errors import (
|
|||
)
|
||||
|
||||
|
||||
def decompose_url(url: str) -> Dict:
|
||||
"""Decompose mongodb url to its separate components.
|
||||
|
||||
Args:
|
||||
url (str): Mongodb url.
|
||||
|
||||
Returns:
|
||||
dict: Dictionary of components.
|
||||
def should_add_certificate_path_to_mongo_url(mongo_url):
|
||||
"""Check if should add ca certificate to mongo url.
|
||||
|
||||
Since 30.9.2021 cloud mongo requires newer certificates that are not
|
||||
available on most of workstation. This adds path to certifi certificate
|
||||
which is valid for it. To add the certificate path url must have scheme
|
||||
'mongodb+srv' or has 'ssl=true' or 'tls=true' in url query.
|
||||
"""
|
||||
components = {
|
||||
"scheme": None,
|
||||
"host": None,
|
||||
"port": None,
|
||||
"username": None,
|
||||
"password": None,
|
||||
"auth_db": None
|
||||
}
|
||||
parsed = urlparse(mongo_url)
|
||||
query = parse_qs(parsed.query)
|
||||
lowered_query_keys = set(key.lower() for key in query.keys())
|
||||
add_certificate = False
|
||||
# Check if url 'ssl' or 'tls' are set to 'true'
|
||||
for key in ("ssl", "tls"):
|
||||
if key in query and "true" in query["ssl"]:
|
||||
add_certificate = True
|
||||
break
|
||||
|
||||
result = urlparse(url)
|
||||
if result.scheme is None:
|
||||
_url = "mongodb://{}".format(url)
|
||||
result = urlparse(_url)
|
||||
# Check if url contains 'mongodb+srv'
|
||||
if not add_certificate and parsed.scheme == "mongodb+srv":
|
||||
add_certificate = True
|
||||
|
||||
components["scheme"] = result.scheme
|
||||
components["host"] = result.hostname
|
||||
try:
|
||||
components["port"] = result.port
|
||||
except ValueError:
|
||||
raise RuntimeError("invalid port specified")
|
||||
components["username"] = result.username
|
||||
components["password"] = result.password
|
||||
|
||||
try:
|
||||
components["auth_db"] = parse_qs(result.query)['authSource'][0]
|
||||
except KeyError:
|
||||
# no auth db provided, mongo will use the one we are connecting to
|
||||
pass
|
||||
|
||||
return components
|
||||
|
||||
|
||||
def compose_url(scheme: str = None,
|
||||
host: str = None,
|
||||
username: str = None,
|
||||
password: str = None,
|
||||
port: int = None,
|
||||
auth_db: str = None) -> str:
|
||||
"""Compose mongodb url from its individual components.
|
||||
|
||||
Args:
|
||||
scheme (str, optional):
|
||||
host (str, optional):
|
||||
username (str, optional):
|
||||
password (str, optional):
|
||||
port (str, optional):
|
||||
auth_db (str, optional):
|
||||
|
||||
Returns:
|
||||
str: mongodb url
|
||||
|
||||
"""
|
||||
|
||||
url = "{scheme}://"
|
||||
|
||||
if username and password:
|
||||
url += "{username}:{password}@"
|
||||
|
||||
url += "{host}"
|
||||
if port:
|
||||
url += ":{port}"
|
||||
|
||||
if auth_db:
|
||||
url += "?authSource={auth_db}"
|
||||
|
||||
return url.format(**{
|
||||
"scheme": scheme,
|
||||
"host": host,
|
||||
"username": username,
|
||||
"password": password,
|
||||
"port": port,
|
||||
"auth_db": auth_db
|
||||
})
|
||||
# Check if url does already contain certificate path
|
||||
if add_certificate and "tlscafile" in lowered_query_keys:
|
||||
add_certificate = False
|
||||
return add_certificate
|
||||
|
||||
|
||||
def validate_mongo_connection(cnx: str) -> (bool, str):
|
||||
|
|
@ -121,12 +58,18 @@ def validate_mongo_connection(cnx: str) -> (bool, str):
|
|||
if parsed.scheme not in ["mongodb", "mongodb+srv"]:
|
||||
return False, "Not mongodb schema"
|
||||
|
||||
kwargs = {
|
||||
"serverSelectionTimeoutMS": 2000
|
||||
}
|
||||
# Add certificate path if should be required
|
||||
if should_add_certificate_path_to_mongo_url(cnx):
|
||||
kwargs["ssl_ca_certs"] = certifi.where()
|
||||
|
||||
try:
|
||||
client = MongoClient(
|
||||
cnx,
|
||||
serverSelectionTimeoutMS=2000
|
||||
)
|
||||
client = MongoClient(cnx, **kwargs)
|
||||
client.server_info()
|
||||
with client.start_session():
|
||||
pass
|
||||
client.close()
|
||||
except ServerSelectionTimeoutError as e:
|
||||
return False, f"Cannot connect to server {cnx} - {e}"
|
||||
|
|
@ -152,10 +95,7 @@ def validate_mongo_string(mongo: str) -> (bool, str):
|
|||
"""
|
||||
if not mongo:
|
||||
return True, "empty string"
|
||||
parsed = urlparse(mongo)
|
||||
if parsed.scheme in ["mongodb", "mongodb+srv"]:
|
||||
return validate_mongo_connection(mongo)
|
||||
return False, "not valid mongodb schema"
|
||||
return validate_mongo_connection(mongo)
|
||||
|
||||
|
||||
def validate_path_string(path: str) -> (bool, str):
|
||||
|
@@ -195,21 +135,13 @@ def get_openpype_global_settings(url: str) -> dict:
    Returns:
        dict: With settings data. Empty dictionary is returned if not found.
    """
    try:
        components = decompose_url(url)
    except RuntimeError:
        return {}
    mongo_kwargs = {
        "host": compose_url(**components),
        "serverSelectionTimeoutMS": 2000
    }
    port = components.get("port")
    if port is not None:
        mongo_kwargs["port"] = int(port)
    kwargs = {}
    if should_add_certificate_path_to_mongo_url(url):
        kwargs["ssl_ca_certs"] = certifi.where()

    try:
        # Create mongo connection
        client = MongoClient(**mongo_kwargs)
        client = MongoClient(url, **kwargs)
        # Access settings collection
        col = client["openpype"]["settings"]
        # Query global settings
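The new `should_add_certificate_path_to_mongo_url` helper above reduces to a small URL-inspection rule: append a CA certificate for `mongodb+srv` URLs or when `ssl=true`/`tls=true` is in the query, unless a `tlsCAFile` is already set. A standalone sketch of that rule (function name shortened; behavior inferred from the diff, not an authoritative copy):

```python
from urllib.parse import urlparse, parse_qs


def should_add_certificate_path(mongo_url):
    """Return True when a certifi CA path should be appended to the url."""
    parsed = urlparse(mongo_url)
    query = parse_qs(parsed.query)
    lowered_keys = set(key.lower() for key in query)

    # 'mongodb+srv' scheme always implies TLS
    add_certificate = parsed.scheme == "mongodb+srv"
    if not add_certificate:
        # otherwise require an explicit ssl/tls=true query option
        for key in ("ssl", "tls"):
            if key in query and "true" in query[key]:
                add_certificate = True
                break

    # Respect a certificate path the url already configures
    if add_certificate and "tlscafile" in lowered_keys:
        add_certificate = False
    return add_certificate
```

With pymongo, the result would then gate passing `certifi.where()` as the CA file when building the client kwargs, as the hunk above does.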
@@ -283,3 +283,18 @@ def run(script):
    args_string = " ".join(args[1:])
    print(f"... running: {script} {args_string}")
    runpy.run_path(script, run_name="__main__")


@main.command()
@click.argument("folder", nargs=-1)
@click.option("-m",
              "--mark",
              help="Run tests marked by",
              default=None)
@click.option("-p",
              "--pyargs",
              help="Run tests from package",
              default=None)
def runtests(folder, mark, pyargs):
    """Run all automatic tests after proper initialization via start.py"""
    PypeCommands().run_tests(folder, mark, pyargs)
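The `runtests` command forwards its options to pytest via `PypeCommands().run_tests`. A minimal sketch of how such options could translate into a pytest argument list (`build_pytest_args` and the default `tests` folder are illustrative assumptions, not the actual implementation):

```python
def build_pytest_args(folder=None, mark=None, pyargs=None):
    """Translate runtests-style options into pytest CLI arguments."""
    # click passes `folder` as a tuple because of nargs=-1
    args = list(folder) if folder else ["tests"]
    if mark:
        args.extend(["-m", mark])          # run only tests marked by `mark`
    if pyargs:
        args.extend(["--pyargs", pyargs])  # run tests from an installed package
    return args
```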
@@ -43,6 +43,8 @@ class GlobalHostDataHook(PreLaunchHook):
        "env": self.launch_context.env,
        "last_workfile_path": self.data.get("last_workfile_path"),
        "log": self.log
    })
@@ -66,12 +66,16 @@ class BlendRigLoader(plugin.AssetLoader):
        objects = []
        nodes = list(container.children)

        for obj in nodes:
            obj.parent = asset_group
        allowed_types = ['ARMATURE', 'MESH']

        for obj in nodes:
            objects.append(obj)
            nodes.extend(list(obj.children))
            if obj.type in allowed_types:
                obj.parent = asset_group

        for obj in nodes:
            if obj.type in allowed_types:
                objects.append(obj)
                nodes.extend(list(obj.children))

        objects.reverse()
@ -107,7 +111,8 @@ class BlendRigLoader(plugin.AssetLoader):
|
|||
|
||||
if action is not None:
|
||||
local_obj.animation_data.action = action
|
||||
elif local_obj.animation_data.action is not None:
|
||||
elif (local_obj.animation_data and
|
||||
local_obj.animation_data.action is not None):
|
||||
plugin.prepare_data(
|
||||
local_obj.animation_data.action, group_name)
|
||||
|
||||
|
|
@ -126,7 +131,30 @@ class BlendRigLoader(plugin.AssetLoader):
|
|||
|
||||
objects.reverse()
|
||||
|
||||
bpy.data.orphans_purge(do_local_ids=False)
|
||||
curves = [obj for obj in data_to.objects if obj.type == 'CURVE']
|
||||
|
||||
for curve in curves:
|
||||
local_obj = plugin.prepare_data(curve, group_name)
|
||||
plugin.prepare_data(local_obj.data, group_name)
|
||||
|
||||
local_obj.use_fake_user = True
|
||||
|
||||
for mod in local_obj.modifiers:
|
||||
mod_target_name = mod.object.name
|
||||
mod.object = bpy.data.objects.get(
|
||||
f"{group_name}:{mod_target_name}")
|
||||
|
||||
if not local_obj.get(AVALON_PROPERTY):
|
||||
local_obj[AVALON_PROPERTY] = dict()
|
||||
|
||||
avalon_info = local_obj[AVALON_PROPERTY]
|
||||
avalon_info.update({"container_name": group_name})
|
||||
|
||||
local_obj.parent = asset_group
|
||||
objects.append(local_obj)
|
||||
|
||||
while bpy.data.orphans_purge(do_local_ids=False):
|
||||
pass
|
||||
|
||||
bpy.ops.object.select_all(action='DESELECT')
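The modifier-retargeting loop above depends on loaded objects having been renamed to a `<group_name>:<object_name>` convention. A hypothetical stand-in for the `bpy.data.objects` lookup makes the naming rule explicit (plain dict instead of Blender data; names are illustrative):

```python
def namespaced_lookup(group_name, target_name, objects_by_name):
    """Find the group-local copy of a modifier target.

    objects_by_name stands in for bpy.data.objects; returns None when the
    target was not loaded with the group, mirroring .get() in the diff.
    """
    return objects_by_name.get("{}:{}".format(group_name, target_name))
```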
@@ -28,6 +28,16 @@ class ExtractBlend(openpype.api.Extractor):
        for obj in instance:
            data_blocks.add(obj)
            # Pack used images in the blend files.
            if obj.type == 'MESH':
                for material_slot in obj.material_slots:
                    mat = material_slot.material
                    if mat and mat.use_nodes:
                        tree = mat.node_tree
                        if tree.type == 'SHADER':
                            for node in tree.nodes:
                                if node.bl_idname == 'ShaderNodeTexImage':
                                    node.image.pack()

        bpy.data.libraries.write(filepath, data_blocks)
9  openpype/hosts/houdini/startup/scripts/houdinicore.py  (new file)

@@ -0,0 +1,9 @@
from avalon import api, houdini


def main():
    print("Installing OpenPype ...")
    api.install(houdini)


main()
@ -64,14 +64,23 @@ def process_dirmap(project_settings):
|
|||
# type: (dict) -> None
|
||||
"""Go through all paths in Settings and set them using `dirmap`.
|
||||
|
||||
If artists has Site Sync enabled, take dirmap mapping directly from
|
||||
Local Settings when artist is syncing workfile locally.
|
||||
|
||||
Args:
|
||||
project_settings (dict): Settings for current project.
|
||||
|
||||
"""
|
||||
if not project_settings["maya"].get("maya-dirmap"):
|
||||
local_mapping = _get_local_sync_dirmap(project_settings)
|
||||
if not project_settings["maya"].get("maya-dirmap") and not local_mapping:
|
||||
return
|
||||
mapping = project_settings["maya"]["maya-dirmap"]["paths"] or {}
|
||||
mapping_enabled = project_settings["maya"]["maya-dirmap"]["enabled"]
|
||||
|
||||
mapping = local_mapping or \
|
||||
project_settings["maya"]["maya-dirmap"]["paths"] \
|
||||
or {}
|
||||
mapping_enabled = project_settings["maya"]["maya-dirmap"]["enabled"] \
|
||||
or bool(local_mapping)
|
||||
|
||||
if not mapping or not mapping_enabled:
|
||||
return
|
||||
if mapping.get("source-path") and mapping_enabled is True:
|
||||
|
@@ -94,10 +103,72 @@ def process_dirmap(project_settings):
            continue


def _get_local_sync_dirmap(project_settings):
    """
    Returns dirmap if sync to local project is enabled.

    Only valid mapping is from roots of remote site to local site set in
    Local Settings.

    Args:
        project_settings (dict)
    Returns:
        dict : { "source-path": [XXX], "destination-path": [YYYY]}
    """
    import json
    mapping = {}

    if not project_settings["global"]["sync_server"]["enabled"]:
        log.debug("Site Sync not enabled")
        return mapping

    from openpype.settings.lib import get_site_local_overrides
    from openpype.modules import ModulesManager

    manager = ModulesManager()
    sync_module = manager.modules_by_name["sync_server"]

    project_name = os.getenv("AVALON_PROJECT")
    sync_settings = sync_module.get_sync_project_setting(
        os.getenv("AVALON_PROJECT"), exclude_locals=False, cached=False)
    log.debug(json.dumps(sync_settings, indent=4))

    active_site = sync_module.get_local_normalized_site(
        sync_module.get_active_site(project_name))
    remote_site = sync_module.get_local_normalized_site(
        sync_module.get_remote_site(project_name))
    log.debug("active {} - remote {}".format(active_site, remote_site))

    if active_site == "local" \
            and project_name in sync_module.get_enabled_projects()\
            and active_site != remote_site:
        overrides = get_site_local_overrides(os.getenv("AVALON_PROJECT"),
                                             active_site)
        for root_name, value in overrides.items():
            if os.path.isdir(value):
                try:
                    mapping["destination-path"] = [value]
                    mapping["source-path"] = [sync_settings["sites"]
                                              [remote_site]
                                              ["root"]
                                              [root_name]]
                except IndexError:
                    # missing corresponding destination path
                    log.debug("overrides: {}".format(overrides))
                    log.error(
                        ("invalid dirmap mapping, missing corresponding"
                         " destination directory."))
                    break

    log.debug("local sync mapping:: {}".format(mapping))
    return mapping


def uninstall():
    pyblish.deregister_plugin_path(PUBLISH_PATH)
    avalon.deregister_plugin_path(avalon.Loader, LOAD_PATH)
    avalon.deregister_plugin_path(avalon.Creator, CREATE_PATH)
    avalon.deregister_plugin_path(avalon.InventoryAction, INVENTORY_PATH)

    menu.uninstall()
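At its core, `_get_local_sync_dirmap` pairs a remote root path with its local override and stops at the first usable pair. A simplified sketch of that pairing (the helper name and the flat `remote_roots` dict are assumptions; the real code reads nested sync-server settings):

```python
import os


def build_local_dirmap(overrides, remote_roots):
    """Build a {source-path, destination-path} dirmap from local overrides.

    overrides maps root names to local paths (as from get_site_local_overrides);
    remote_roots maps the same root names to remote-site paths. Only the first
    root whose local path exists on disk is used, matching the loop above.
    """
    mapping = {}
    for root_name, local_path in overrides.items():
        if not os.path.isdir(local_path):
            continue
        remote_path = remote_roots.get(root_name)
        if remote_path is None:
            # missing corresponding remote root - nothing to map
            continue
        mapping["destination-path"] = [local_path]
        mapping["source-path"] = [remote_path]
        break
    return mapping
```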
@@ -114,6 +114,8 @@ class RenderProduct(object):
    aov = attr.ib(default=None)  # source aov
    driver = attr.ib(default=None)  # source driver
    multipart = attr.ib(default=False)  # multichannel file
    camera = attr.ib(default=None)  # used only when rendering
                                    # from multiple cameras


def get(layer, render_instance=None):
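The hunk above adds a `camera` field to the attrs-based `RenderProduct`. A stdlib `dataclasses` equivalent shows the resulting record shape (the class name is changed deliberately so this is not mistaken for the real attrs definition):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class RenderProductSketch:
    """Stand-in for the attrs-based RenderProduct after the change."""
    productName: str
    ext: str
    aov: Optional[str] = None      # source aov
    driver: Optional[str] = None   # source driver
    multipart: bool = False        # multichannel file
    camera: Optional[str] = None   # used only when rendering from multiple cameras
```

Defaulting `camera` to `None` keeps single-camera call sites working unchanged while letting the new per-camera code attach a camera name.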
@ -183,6 +185,16 @@ class ARenderProducts:
|
|||
self.layer_data = self._get_layer_data()
|
||||
self.layer_data.products = self.get_render_products()
|
||||
|
||||
def has_camera_token(self):
|
||||
# type: () -> bool
|
||||
"""Check if camera token is in image prefix.
|
||||
|
||||
Returns:
|
||||
bool: True/False if camera token is present.
|
||||
|
||||
"""
|
||||
return "<camera>" in self.layer_data.filePrefix.lower()
|
||||
|
||||
@abstractmethod
|
||||
def get_render_products(self):
|
||||
"""To be implemented by renderer class.
|
||||
|
|
@ -307,7 +319,7 @@ class ARenderProducts:
|
|||
# Deadline allows submitting renders with a custom frame list
|
||||
# to support those cases we might want to allow 'custom frames'
|
||||
# to be overridden to `ExpectFiles` class?
|
||||
layer_data = LayerMetadata(
|
||||
return LayerMetadata(
|
||||
frameStart=int(self.get_render_attribute("startFrame")),
|
||||
frameEnd=int(self.get_render_attribute("endFrame")),
|
||||
frameStep=int(self.get_render_attribute("byFrameStep")),
|
||||
|
|
@ -321,7 +333,6 @@ class ARenderProducts:
|
|||
defaultExt=self._get_attr("defaultRenderGlobals.imfPluginKey"),
|
||||
filePrefix=file_prefix
|
||||
)
|
||||
return layer_data
|
||||
|
||||
def _generate_file_sequence(
|
||||
self, layer_data,
|
||||
|
|
@ -330,7 +341,7 @@ class ARenderProducts:
|
|||
force_cameras=None):
|
||||
# type: (LayerMetadata, str, str, list) -> list
|
||||
expected_files = []
|
||||
cameras = force_cameras if force_cameras else layer_data.cameras
|
||||
cameras = force_cameras or layer_data.cameras
|
||||
ext = force_ext or layer_data.defaultExt
|
||||
for cam in cameras:
|
||||
file_prefix = layer_data.filePrefix
|
||||
|
|
@ -361,8 +372,8 @@ class ARenderProducts:
|
|||
)
|
||||
return expected_files
|
||||
|
||||
def get_files(self, product, camera):
|
||||
# type: (RenderProduct, str) -> list
|
||||
def get_files(self, product):
|
||||
# type: (RenderProduct) -> list
|
||||
"""Return list of expected files.
|
||||
|
||||
It will translate render token strings ('<RenderPass>', etc.) to
|
||||
|
|
@ -373,7 +384,6 @@ class ARenderProducts:
|
|||
Args:
|
||||
product (RenderProduct): Render product to be used for file
|
||||
generation.
|
||||
camera (str): Camera name.
|
||||
|
||||
Returns:
|
||||
List of files
|
||||
|
|
@ -383,7 +393,7 @@ class ARenderProducts:
|
|||
self.layer_data,
|
||||
force_aov_name=product.productName,
|
||||
force_ext=product.ext,
|
||||
force_cameras=[camera]
|
||||
force_cameras=[product.camera]
|
||||
)
|
||||
|
||||
def get_renderable_cameras(self):
|
||||
|
|
@ -460,15 +470,21 @@ class RenderProductsArnold(ARenderProducts):
|
|||
|
||||
return prefix
|
||||
|
||||
def _get_aov_render_products(self, aov):
|
||||
def _get_aov_render_products(self, aov, cameras=None):
|
||||
"""Return all render products for the AOV"""
|
||||
|
||||
products = list()
|
||||
products = []
|
||||
aov_name = self._get_attr(aov, "name")
|
||||
ai_drivers = cmds.listConnections("{}.outputs".format(aov),
|
||||
source=True,
|
||||
destination=False,
|
||||
type="aiAOVDriver") or []
|
||||
if not cameras:
|
||||
cameras = [
|
||||
self.sanitize_camera_name(
|
||||
self.get_renderable_cameras()[0]
|
||||
)
|
||||
]
|
||||
|
||||
for ai_driver in ai_drivers:
|
||||
# todo: check aiAOVDriver.prefix as it could have
|
||||
|
|
@ -497,30 +513,37 @@ class RenderProductsArnold(ARenderProducts):
|
|||
name = "beauty"
|
||||
|
||||
# Support Arnold light groups for AOVs
|
||||
# Global AOV: When disabled the main layer is not written: `{pass}`
|
||||
# Global AOV: When disabled the main layer is
|
||||
# not written: `{pass}`
|
||||
# All Light Groups: When enabled, a `{pass}_lgroups` file is
|
||||
# written and is always merged into a single file
|
||||
# Light Groups List: When set, a product per light group is written
|
||||
# written and is always merged into a
|
||||
# single file
|
||||
# Light Groups List: When set, a product per light
|
||||
# group is written
|
||||
# e.g. {pass}_front, {pass}_rim
|
||||
global_aov = self._get_attr(aov, "globalAov")
|
||||
if global_aov:
|
||||
product = RenderProduct(productName=name,
|
||||
ext=ext,
|
||||
aov=aov_name,
|
||||
driver=ai_driver)
|
||||
products.append(product)
|
||||
for camera in cameras:
|
||||
product = RenderProduct(productName=name,
|
||||
ext=ext,
|
||||
aov=aov_name,
|
||||
driver=ai_driver,
|
||||
camera=camera)
|
||||
products.append(product)
|
||||
|
||||
all_light_groups = self._get_attr(aov, "lightGroups")
|
||||
if all_light_groups:
|
||||
# All light groups is enabled. A single multipart
|
||||
# Render Product
|
||||
product = RenderProduct(productName=name + "_lgroups",
|
||||
ext=ext,
|
||||
aov=aov_name,
|
||||
driver=ai_driver,
|
||||
# Always multichannel output
|
||||
multipart=True)
|
||||
products.append(product)
|
||||
for camera in cameras:
|
||||
product = RenderProduct(productName=name + "_lgroups",
|
||||
ext=ext,
|
||||
aov=aov_name,
|
||||
driver=ai_driver,
|
||||
# Always multichannel output
|
||||
multipart=True,
|
||||
camera=camera)
|
||||
products.append(product)
|
||||
else:
|
||||
value = self._get_attr(aov, "lightGroupsList")
|
||||
if not value:
|
||||
|
|
@@ -529,11 +552,15 @@ class RenderProductsArnold(ARenderProducts):
             for light_group in selected_light_groups:
                 # Render Product per selected light group
                 aov_light_group_name = "{}_{}".format(name, light_group)
-                product = RenderProduct(productName=aov_light_group_name,
-                                        aov=aov_name,
-                                        driver=ai_driver,
-                                        ext=ext)
-                products.append(product)
+                for camera in cameras:
+                    product = RenderProduct(
+                        productName=aov_light_group_name,
+                        aov=aov_name,
+                        driver=ai_driver,
+                        ext=ext,
+                        camera=camera
+                    )
+                    products.append(product)
 
         return products
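The per-light-group, per-camera naming in the hunks above can be sketched in isolation. `RenderProduct` here is a hypothetical stand-in namedtuple, not the actual OpenPype attribute container, and the function only reproduces the naming pattern, not the driver/extension plumbing:

```python
from collections import namedtuple

# Hypothetical stand-in for the real RenderProduct container.
RenderProduct = namedtuple("RenderProduct", ["productName", "aov", "camera"])


def light_group_products(name, aov_name, light_groups, cameras):
    """Build one product per (light group, camera) pair, as above."""
    products = []
    for light_group in light_groups:
        aov_light_group_name = "{}_{}".format(name, light_group)
        for camera in cameras:
            products.append(
                RenderProduct(aov_light_group_name, aov_name, camera))
    return products


products = light_group_products("beauty", "RGBA", ["front", "rim"], ["persp"])
print([p.productName for p in products])  # ['beauty_front', 'beauty_rim']
```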
@@ -556,17 +583,26 @@ class RenderProductsArnold(ARenderProducts):
             # anyway.
             return []
 
-        default_ext = self._get_attr("defaultRenderGlobals.imfPluginKey")
-        beauty_product = RenderProduct(productName="beauty",
-                                       ext=default_ext,
-                                       driver="defaultArnoldDriver")
+        # check if camera token is in prefix. If so, and we have list of
+        # renderable cameras, generate render product for each and every
+        # of them.
+        cameras = [
+            self.sanitize_camera_name(c)
+            for c in self.get_renderable_cameras()
+        ]
+
+        default_ext = self._get_attr("defaultRenderGlobals.imfPluginKey")
+        beauty_products = [RenderProduct(
+            productName="beauty",
+            ext=default_ext,
+            driver="defaultArnoldDriver",
+            camera=camera) for camera in cameras]
         # AOVs > Legacy > Maya Render View > Mode
         aovs_enabled = bool(
             self._get_attr("defaultArnoldRenderOptions.aovMode")
         )
         if not aovs_enabled:
-            return [beauty_product]
+            return beauty_products
 
         # Common > File Output > Merge AOVs or <RenderPass>
         # We don't need to check for Merge AOVs due to overridden
|
@@ -575,8 +611,9 @@ class RenderProductsArnold(ARenderProducts):
             "<renderpass>" in self.layer_data.filePrefix.lower()
         )
         if not has_renderpass_token:
-            beauty_product.multipart = True
-            return [beauty_product]
+            for product in beauty_products:
+                product.multipart = True
+            return beauty_products
 
         # AOVs are set to be rendered separately. We should expect
         # <RenderPass> token in path.
|
@@ -598,14 +635,14 @@ class RenderProductsArnold(ARenderProducts):
                 continue
 
             # For now stick to the legacy output format.
-            aov_products = self._get_aov_render_products(aov)
+            aov_products = self._get_aov_render_products(aov, cameras)
             products.extend(aov_products)
 
-        if not any(product.aov == "RGBA" for product in products):
+        if all(product.aov != "RGBA" for product in products):
             # Append default 'beauty' as this is arnolds default.
             # However, it is excluded whenever a RGBA pass is enabled.
             # For legibility add the beauty layer as first entry
-            products.insert(0, beauty_product)
+            products += beauty_products
 
         # TODO: Output Denoising AOVs?
|
@@ -670,6 +707,11 @@ class RenderProductsVray(ARenderProducts):
             # anyway.
             return []
 
+        cameras = [
+            self.sanitize_camera_name(c)
+            for c in self.get_renderable_cameras()
+        ]
+
         image_format_str = self._get_attr("vraySettings.imageFormatStr")
         default_ext = image_format_str
         if default_ext in {"exr (multichannel)", "exr (deep)"}:
|
@@ -680,13 +722,21 @@ class RenderProductsVray(ARenderProducts):
         # add beauty as default when not disabled
         dont_save_rgb = self._get_attr("vraySettings.dontSaveRgbChannel")
         if not dont_save_rgb:
-            products.append(RenderProduct(productName="", ext=default_ext))
+            for camera in cameras:
+                products.append(
+                    RenderProduct(productName="",
+                                  ext=default_ext,
+                                  camera=camera))
 
         # separate alpha file
         separate_alpha = self._get_attr("vraySettings.separateAlpha")
         if separate_alpha:
-            products.append(RenderProduct(productName="Alpha",
-                                          ext=default_ext))
+            for camera in cameras:
+                products.append(
+                    RenderProduct(productName="Alpha",
+                                  ext=default_ext,
+                                  camera=camera)
+                )
 
         if image_format_str == "exr (multichannel)":
             # AOVs are merged in m-channel file, only main layer is rendered
|
@@ -716,19 +766,23 @@ class RenderProductsVray(ARenderProducts):
                 # instead seems to output multiple Render Products,
                 # specifically "Self_Illumination" and "Environment"
                 product_names = ["Self_Illumination", "Environment"]
-                for name in product_names:
-                    product = RenderProduct(productName=name,
-                                            ext=default_ext,
-                                            aov=aov)
-                    products.append(product)
+                for camera in cameras:
+                    for name in product_names:
+                        product = RenderProduct(productName=name,
+                                                ext=default_ext,
+                                                aov=aov,
+                                                camera=camera)
+                        products.append(product)
                 # Continue as we've processed this special case AOV
                 continue
 
             aov_name = self._get_vray_aov_name(aov)
-            product = RenderProduct(productName=aov_name,
-                                    ext=default_ext,
-                                    aov=aov)
-            products.append(product)
+            for camera in cameras:
+                product = RenderProduct(productName=aov_name,
+                                        ext=default_ext,
+                                        aov=aov,
+                                        camera=camera)
+                products.append(product)
 
         return products
|
@@ -875,6 +929,11 @@ class RenderProductsRedshift(ARenderProducts):
             # anyway.
             return []
 
+        cameras = [
+            self.sanitize_camera_name(c)
+            for c in self.get_renderable_cameras()
+        ]
+
         # For Redshift we don't directly return upon forcing multilayer
         # due to some AOVs still being written into separate files,
         # like Cryptomatte.
|
@@ -933,11 +992,14 @@ class RenderProductsRedshift(ARenderProducts):
                 for light_group in light_groups:
                     aov_light_group_name = "{}_{}".format(aov_name,
                                                           light_group)
-                    product = RenderProduct(productName=aov_light_group_name,
-                                            aov=aov_name,
-                                            ext=ext,
-                                            multipart=aov_multipart)
-                    products.append(product)
+                    for camera in cameras:
+                        product = RenderProduct(
+                            productName=aov_light_group_name,
+                            aov=aov_name,
+                            ext=ext,
+                            multipart=aov_multipart,
+                            camera=camera)
+                        products.append(product)
 
                 if light_groups:
                     light_groups_enabled = True
|
@@ -945,11 +1007,13 @@ class RenderProductsRedshift(ARenderProducts):
                 # Redshift AOV Light Select always renders the global AOV
                 # even when light groups are present so we don't need to
                 # exclude it when light groups are active
-                product = RenderProduct(productName=aov_name,
-                                        aov=aov_name,
-                                        ext=ext,
-                                        multipart=aov_multipart)
-                products.append(product)
+                for camera in cameras:
+                    product = RenderProduct(productName=aov_name,
+                                            aov=aov_name,
+                                            ext=ext,
+                                            multipart=aov_multipart,
+                                            camera=camera)
+                    products.append(product)
 
         # When a Beauty AOV is added manually, it will be rendered as
         # 'Beauty_other' in file name and "standard" beauty will have
|
@@ -959,10 +1023,12 @@ class RenderProductsRedshift(ARenderProducts):
             return products
 
         beauty_name = "Beauty_other" if has_beauty_aov else ""
-        products.insert(0,
-                        RenderProduct(productName=beauty_name,
-                                      ext=ext,
-                                      multipart=multipart))
+        for camera in cameras:
+            products.insert(0,
+                            RenderProduct(productName=beauty_name,
+                                          ext=ext,
+                                          multipart=multipart,
+                                          camera=camera))
 
         return products
|
@@ -987,6 +1053,16 @@ class RenderProductsRenderman(ARenderProducts):
             :func:`ARenderProducts.get_render_products()`
 
         """
+        cameras = [
+            self.sanitize_camera_name(c)
+            for c in self.get_renderable_cameras()
+        ]
+
+        if not cameras:
+            cameras = [
+                self.sanitize_camera_name(
+                    self.get_renderable_cameras()[0])
+            ]
         products = []
 
         default_ext = "exr"
|
@@ -1000,9 +1076,11 @@ class RenderProductsRenderman(ARenderProducts):
             if aov_name == "rmanDefaultDisplay":
                 aov_name = "beauty"
 
-            product = RenderProduct(productName=aov_name,
-                                    ext=default_ext)
-            products.append(product)
+            for camera in cameras:
+                product = RenderProduct(productName=aov_name,
+                                        ext=default_ext,
+                                        camera=camera)
+                products.append(product)
 
         return products
@@ -123,7 +123,7 @@ class ReferenceLoader(api.Loader):
         count = options.get("count") or 1
         for c in range(0, count):
             namespace = namespace or lib.unique_namespace(
-                asset["name"] + "_",
+                "{}_{}_".format(asset["name"], context["subset"]["name"]),
                 prefix="_" if asset["name"][0].isdigit() else "",
                 suffix="_",
             )
@@ -1,11 +1,11 @@
 from openpype.hosts.maya.api import plugin
 
 
-class CreateMayaAscii(plugin.Creator):
-    """Raw Maya Ascii file export"""
+class CreateMayaScene(plugin.Creator):
+    """Raw Maya Scene file export"""
 
-    name = "mayaAscii"
-    label = "Maya Ascii"
-    family = "mayaAscii"
+    name = "mayaScene"
+    label = "Maya Scene"
+    family = "mayaScene"
     icon = "file-archive-o"
     defaults = ['Main']
@@ -9,3 +9,8 @@ class CreateSetDress(plugin.Creator):
     family = "setdress"
     icon = "cubes"
     defaults = ["Main", "Anim"]
+
+    def __init__(self, *args, **kwargs):
+        super(CreateSetDress, self).__init__(*args, **kwargs)
+
+        self.data["exactSetMembersOnly"] = True
openpype/hosts/maya/plugins/inventory/import_modelrender.py (new file, 92 lines)
@@ -0,0 +1,92 @@
+from avalon import api, io
+
+
+class ImportModelRender(api.InventoryAction):
+
+    label = "Import Model Render Sets"
+    icon = "industry"
+    color = "#55DDAA"
+
+    scene_type_regex = "meta.render.m[ab]"
+    look_data_type = "meta.render.json"
+
+    @staticmethod
+    def is_compatible(container):
+        return (
+            container.get("loader") == "ReferenceLoader"
+            and container.get("name", "").startswith("model")
+        )
+
+    def process(self, containers):
+        from maya import cmds
+
+        for container in containers:
+            con_name = container["objectName"]
+            nodes = []
+            for n in cmds.sets(con_name, query=True, nodesOnly=True) or []:
+                if cmds.nodeType(n) == "reference":
+                    nodes += cmds.referenceQuery(n, nodes=True)
+                else:
+                    nodes.append(n)
+
+            repr_doc = io.find_one({
+                "_id": io.ObjectId(container["representation"]),
+            })
+            version_id = repr_doc["parent"]
+
+            print("Importing render sets for model %r" % con_name)
+            self.assign_model_render_by_version(nodes, version_id)
+
+    def assign_model_render_by_version(self, nodes, version_id):
+        """Assign nodes a specific published model render data version by id.
+
+        This assumes the nodes correspond with the asset.
+
+        Args:
+            nodes (list): nodes to assign render data to
+            version_id (bson.ObjectId): database id of the version of model
+
+        Returns:
+            None
+        """
+        import json
+        from maya import cmds
+        from avalon import maya, io, pipeline
+        from openpype.hosts.maya.api import lib
+
+        # Get representations of shader file and relationships
+        look_repr = io.find_one({
+            "type": "representation",
+            "parent": version_id,
+            "name": {"$regex": self.scene_type_regex},
+        })
+        if not look_repr:
+            print("No model render sets for this model version..")
+            return
+
+        json_repr = io.find_one({
+            "type": "representation",
+            "parent": version_id,
+            "name": self.look_data_type,
+        })
+
+        context = pipeline.get_representation_context(look_repr["_id"])
+        maya_file = pipeline.get_representation_path_from_context(context)
+
+        context = pipeline.get_representation_context(json_repr["_id"])
+        json_file = pipeline.get_representation_path_from_context(context)
+
+        # Import the look file
+        with maya.maintained_selection():
+            shader_nodes = cmds.file(maya_file,
+                                     i=True,  # import
+                                     returnNewNodes=True)
+            # imprint context data
+
+        # Load relationships
+        shader_relation = json_file
+        with open(shader_relation, "r") as f:
+            relationships = json.load(f)
+
+        # Assign relationships
+        lib.apply_shaders(relationships, shader_nodes, nodes)
@@ -13,6 +13,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
                 "pointcache",
                 "animation",
                 "mayaAscii",
+                "mayaScene",
                 "setdress",
                 "layout",
                 "camera",
@@ -40,14 +41,13 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
             family = "model"
 
         with maya.maintained_selection():
-
-            groupName = "{}:{}".format(namespace, name)
+            groupName = "{}:_GRP".format(namespace)
             cmds.loadPlugin("AbcImport.mll", quiet=True)
             nodes = cmds.file(self.fname,
                               namespace=namespace,
                               sharedReferenceFile=False,
                               groupReference=True,
-                              groupName="{}:{}".format(namespace, name),
+                              groupName=groupName,
                               reference=True,
                               returnNewNodes=True)
@@ -71,7 +71,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
         except:  # noqa: E722
             pass
 
-        if family not in ["layout", "setdress", "mayaAscii"]:
+        if family not in ["layout", "setdress", "mayaAscii", "mayaScene"]:
             for root in roots:
                 root.setParent(world=True)
@@ -223,8 +223,8 @@ class CollectLook(pyblish.api.InstancePlugin):
 
     def process(self, instance):
        """Collect the Look in the instance with the correct layer settings"""
 
-        with lib.renderlayer(instance.data["renderlayer"]):
+        renderlayer = instance.data.get("renderlayer", "defaultRenderLayer")
+        with lib.renderlayer(renderlayer):
            self.collect(instance)
 
     def collect(self, instance):
@@ -357,6 +357,23 @@ class CollectLook(pyblish.api.InstancePlugin):
             for vray_node in vray_plugin_nodes:
                 history.extend(cmds.listHistory(vray_node))
 
+        # handling render attribute sets
+        render_set_types = [
+            "VRayDisplacement",
+            "VRayLightMesh",
+            "VRayObjectProperties",
+            "RedshiftObjectId",
+            "RedshiftMeshParameters",
+        ]
+        render_sets = cmds.ls(look_sets, type=render_set_types)
+        if render_sets:
+            history.extend(
+                cmds.listHistory(render_sets,
+                                 future=False,
+                                 pruneDagObjects=True)
+                or []
+            )
+
         files = cmds.ls(history, type="file", long=True)
         files.extend(cmds.ls(history, type="aiImage", long=True))
         files.extend(cmds.ls(history, type="RedshiftNormalMap", long=True))
@@ -550,3 +567,45 @@ class CollectLook(pyblish.api.InstancePlugin):
                 "source": source,  # required for resources
                 "files": files,
                 "color_space": color_space}  # required for resources
+
+
+class CollectModelRenderSets(CollectLook):
+    """Collect render attribute sets for model instance.
+
+    Collects additional render attribute sets so they can be
+    published with model.
+
+    """
+
+    order = pyblish.api.CollectorOrder + 0.21
+    families = ["model"]
+    label = "Collect Model Render Sets"
+    hosts = ["maya"]
+    maketx = True
+
+    def collect_sets(self, instance):
+        """Collect all related objectSets except shadingEngines
+
+        Args:
+            instance (list): all nodes to be published
+
+        Returns:
+            dict
+        """
+
+        sets = {}
+        for node in instance:
+            related_sets = lib.get_related_sets(node)
+            if not related_sets:
+                continue
+
+            for objset in related_sets:
+                if objset in sets:
+                    continue
+
+                if "shadingEngine" in cmds.nodeType(objset, inherited=True):
+                    continue
+
+                sets[objset] = {"uuid": lib.get_id(objset), "members": list()}
+
+        return sets
@@ -3,14 +3,14 @@ from maya import cmds
 import pyblish.api
 
 
-class CollectMayaAscii(pyblish.api.InstancePlugin):
-    """Collect May Ascii Data
+class CollectMayaScene(pyblish.api.InstancePlugin):
+    """Collect Maya Scene Data
 
     """
 
     order = pyblish.api.CollectorOrder + 0.2
     label = 'Collect Model Data'
-    families = ["mayaAscii"]
+    families = ["mayaScene"]
 
     def process(self, instance):
         # Extract only current frame (override)
@@ -174,10 +174,16 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
             assert render_products, "no render products generated"
             exp_files = []
             for product in render_products:
-                for camera in layer_render_products.layer_data.cameras:
-                    exp_files.append(
-                        {product.productName: layer_render_products.get_files(
-                            product, camera)})
+                product_name = product.productName
+                if product.camera and layer_render_products.has_camera_token():
+                    product_name = "{}{}".format(
+                        product.camera,
+                        "_" + product_name if product_name else "")
+                exp_files.append(
+                    {
+                        product_name: layer_render_products.get_files(
+                            product)
+                    })
 
             self.log.info("multipart: {}".format(
                 layer_render_products.multipart))
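The camera-prefix renaming in the hunk above can be reduced to a small helper. This is an isolated sketch: `has_camera_token` is passed in as a plain boolean here, rather than queried from the layer render products object as the real code does:

```python
def expected_file_key(product_name, camera, has_camera_token):
    # Mirrors the renaming above: when the file prefix carries a camera
    # token, the expected-files key becomes "<camera>_<product>", or just
    # "<camera>" for the nameless beauty product.
    if camera and has_camera_token:
        return "{}{}".format(camera, "_" + product_name if product_name else "")
    return product_name


print(expected_file_key("beauty", "persp", True))   # persp_beauty
print(expected_file_key("", "persp", True))         # persp
print(expected_file_key("beauty", "persp", False))  # beauty
```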
@@ -199,12 +205,14 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
 
             # replace relative paths with absolute. Render products are
             # returned as list of dictionaries.
+            publish_meta_path = None
             for aov in exp_files:
                 full_paths = []
                 for file in aov[aov.keys()[0]]:
                     full_path = os.path.join(workspace, "renders", file)
                     full_path = full_path.replace("\\", "/")
                     full_paths.append(full_path)
+                    publish_meta_path = os.path.dirname(full_path)
                 aov_dict[aov.keys()[0]] = full_paths
 
             frame_start_render = int(self.get_render_attribute(
@@ -230,6 +238,26 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
                 frame_end_handle = frame_end_render
 
             full_exp_files.append(aov_dict)
 
+            # find common path to store metadata
+            # so if image prefix is branching to many directories
+            # metadata file will be located in top-most common
+            # directory.
+            # TODO: use `os.path.commonpath()` after switch to Python 3
+            common_publish_meta_path = os.path.splitdrive(
+                publish_meta_path)[0]
+            if common_publish_meta_path:
+                common_publish_meta_path += os.path.sep
+            for part in publish_meta_path.split("/"):
+                common_publish_meta_path = os.path.join(
+                    common_publish_meta_path, part)
+                if part == expected_layer_name:
+                    break
+            common_publish_meta_path = common_publish_meta_path.replace(
+                "\\", "/")
+            self.log.info(
+                "Publish meta path: {}".format(common_publish_meta_path))
+
             self.log.info(full_exp_files)
             self.log.info("collecting layer: {}".format(layer_name))
             # Get layer specific settings, might be overrides
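The common-path walk added above predates the switch to Python 3's `os.path.commonpath` (hence the TODO). Extracted into a standalone function with an illustrative relative path, it behaves like this:

```python
import os


def common_publish_meta_path(publish_meta_path, expected_layer_name):
    # Re-implementation of the loop above: accumulate path parts until the
    # render layer directory is reached, yielding the top-most common
    # directory for the metadata file.
    common = os.path.splitdrive(publish_meta_path)[0]
    if common:
        common += os.path.sep
    for part in publish_meta_path.split("/"):
        common = os.path.join(common, part)
        if part == expected_layer_name:
            break
    return common.replace("\\", "/")


print(common_publish_meta_path("proj/shots/sh010/renderLayer1/beauty",
                               "renderLayer1"))
```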
@@ -262,6 +290,7 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
                 # which was submitted originally
                 "source": filepath,
                 "expectedFiles": full_exp_files,
+                "publishRenderMetadataFolder": common_publish_meta_path,
                 "resolutionWidth": cmds.getAttr("defaultResolution.width"),
                 "resolutionHeight": cmds.getAttr("defaultResolution.height"),
                 "pixelAspect": cmds.getAttr("defaultResolution.pixelAspect"),
@@ -4,7 +4,7 @@ import os
 from maya import cmds
 
 
-class CollectMayaScene(pyblish.api.ContextPlugin):
+class CollectWorkfile(pyblish.api.ContextPlugin):
     """Inject the current working file into context"""
 
     order = pyblish.api.CollectorOrder - 0.01
@@ -122,7 +122,7 @@ def no_workspace_dir():
 
 
 class ExtractLook(openpype.api.Extractor):
-    """Extract Look (Maya Ascii + JSON)
+    """Extract Look (Maya Scene + JSON)
 
     Only extracts the sets (shadingEngines and alike) alongside a .json file
     that stores it relationships for the sets and "attribute" data for the
@@ -130,11 +130,12 @@ class ExtractLook(openpype.api.Extractor):
 
     """
 
-    label = "Extract Look (Maya ASCII + JSON)"
+    label = "Extract Look (Maya Scene + JSON)"
     hosts = ["maya"]
     families = ["look"]
     order = pyblish.api.ExtractorOrder + 0.2
     scene_type = "ma"
+    look_data_type = "json"
 
     @staticmethod
     def get_renderer_name():
@@ -176,6 +177,8 @@ class ExtractLook(openpype.api.Extractor):
             # no preset found
             pass
 
+        return "mayaAscii" if self.scene_type == "ma" else "mayaBinary"
+
     def process(self, instance):
         """Plugin entry point.
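The added return line maps the extractor's `scene_type` attribute onto Maya's file type string. As a standalone sketch of that mapping (the function name here is hypothetical):

```python
def maya_file_type(scene_type):
    # Mirrors the return value above: the "ma" extension maps to Maya's
    # "mayaAscii" export type, anything else (i.e. "mb") to "mayaBinary".
    return "mayaAscii" if scene_type == "ma" else "mayaBinary"


print(maya_file_type("ma"))  # mayaAscii
print(maya_file_type("mb"))  # mayaBinary
```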
@@ -183,10 +186,12 @@ class ExtractLook(openpype.api.Extractor):
             instance: Instance to process.
 
         """
+        _scene_type = self.get_maya_scene_type(instance)
+
         # Define extract output file path
         dir_path = self.staging_dir(instance)
         maya_fname = "{0}.{1}".format(instance.name, self.scene_type)
-        json_fname = "{0}.json".format(instance.name)
+        json_fname = "{0}.{1}".format(instance.name, self.look_data_type)
 
         # Make texture dump folder
         maya_path = os.path.join(dir_path, maya_fname)
@@ -196,11 +201,100 @@ class ExtractLook(openpype.api.Extractor):
 
         # Remove all members of the sets so they are not included in the
         # exported file by accident
-        self.log.info("Extract sets (Maya ASCII) ...")
+        self.log.info("Extract sets (%s) ..." % _scene_type)
         lookdata = instance.data["lookData"]
         relationships = lookdata["relationships"]
         sets = relationships.keys()
 
+        results = self.process_resources(instance, staging_dir=dir_path)
+        transfers = results["fileTransfers"]
+        hardlinks = results["fileHardlinks"]
+        hashes = results["fileHashes"]
+        remap = results["attrRemap"]
+
+        # Extract in correct render layer
+        layer = instance.data.get("renderlayer", "defaultRenderLayer")
+        with lib.renderlayer(layer):
+            # TODO: Ensure membership edits don't become renderlayer overrides
+            with lib.empty_sets(sets, force=True):
+                # To avoid Maya trying to automatically remap the file
+                # textures relative to the `workspace -directory` we force
+                # it to a fake temporary workspace. This fixes textures
+                # getting incorrectly remapped. (LKD-17, PLN-101)
+                with no_workspace_dir():
+                    with lib.attribute_values(remap):
+                        with avalon.maya.maintained_selection():
+                            cmds.select(sets, noExpand=True)
+                            cmds.file(
+                                maya_path,
+                                force=True,
+                                typ=_scene_type,
+                                exportSelected=True,
+                                preserveReferences=False,
+                                channels=True,
+                                constraints=True,
+                                expressions=True,
+                                constructionHistory=True,
+                            )
+
+        # Write the JSON data
+        self.log.info("Extract json..")
+        data = {
+            "attributes": lookdata["attributes"],
+            "relationships": relationships
+        }
+
+        with open(json_path, "w") as f:
+            json.dump(data, f)
+
+        if "files" not in instance.data:
+            instance.data["files"] = []
+        if "hardlinks" not in instance.data:
+            instance.data["hardlinks"] = []
+        if "transfers" not in instance.data:
+            instance.data["transfers"] = []
+
+        instance.data["files"].append(maya_fname)
+        instance.data["files"].append(json_fname)
+
+        if instance.data.get("representations") is None:
+            instance.data["representations"] = []
+
+        instance.data["representations"].append(
+            {
+                "name": self.scene_type,
+                "ext": self.scene_type,
+                "files": os.path.basename(maya_fname),
+                "stagingDir": os.path.dirname(maya_fname),
+            }
+        )
+        instance.data["representations"].append(
+            {
+                "name": self.look_data_type,
+                "ext": self.look_data_type,
+                "files": os.path.basename(json_fname),
+                "stagingDir": os.path.dirname(json_fname),
+            }
+        )
+
+        # Set up the resources transfers/links for the integrator
+        instance.data["transfers"].extend(transfers)
+        instance.data["hardlinks"].extend(hardlinks)
+
+        # Source hash for the textures
+        instance.data["sourceHashes"] = hashes
+
+        """
+        self.log.info("Returning colorspaces to their original values ...")
+        for attr, value in remap.items():
+            self.log.info(" - {}: {}".format(attr, value))
+            cmds.setAttr(attr, value, type="string")
+        """
+        self.log.info("Extracted instance '%s' to: %s" % (instance.name,
+                                                          maya_path))
+
+    def process_resources(self, instance, staging_dir):
 
         # Extract the textures to transfer, possibly convert with maketx and
         # remap the node paths to the destination path. Note that a source
         # might be included more than once amongst the resources as they could
@@ -218,7 +312,6 @@ class ExtractLook(openpype.api.Extractor):
             color_space = resource.get("color_space")
 
             for f in resource["files"]:
-
                 files_metadata[os.path.normpath(f)] = {
                     "color_space": color_space}
                 # files.update(os.path.normpath(f))
@@ -244,7 +337,7 @@ class ExtractLook(openpype.api.Extractor):
                 source, mode, texture_hash = self._process_texture(
                     filepath,
                     do_maketx,
-                    staging=dir_path,
+                    staging=staging_dir,
                     linearize=linearize,
                     force=force_copy
                 )
@@ -299,85 +392,13 @@ class ExtractLook(openpype.api.Extractor):
 
         self.log.info("Finished remapping destinations ...")
 
-        # Extract in correct render layer
-        layer = instance.data.get("renderlayer", "defaultRenderLayer")
-        with lib.renderlayer(layer):
-            # TODO: Ensure membership edits don't become renderlayer overrides
-            with lib.empty_sets(sets, force=True):
-                # To avoid Maya trying to automatically remap the file
-                # textures relative to the `workspace -directory` we force
-                # it to a fake temporary workspace. This fixes textures
-                # getting incorrectly remapped. (LKD-17, PLN-101)
-                with no_workspace_dir():
-                    with lib.attribute_values(remap):
-                        with avalon.maya.maintained_selection():
-                            cmds.select(sets, noExpand=True)
-                            cmds.file(
-                                maya_path,
-                                force=True,
-                                typ="mayaAscii",
-                                exportSelected=True,
-                                preserveReferences=False,
-                                channels=True,
-                                constraints=True,
-                                expressions=True,
-                                constructionHistory=True,
-                            )
-
-        # Write the JSON data
-        self.log.info("Extract json..")
-        data = {
-            "attributes": lookdata["attributes"],
-            "relationships": relationships
+        return {
+            "fileTransfers": transfers,
+            "fileHardlinks": hardlinks,
+            "fileHashes": hashes,
+            "attrRemap": remap,
         }
-
-        with open(json_path, "w") as f:
-            json.dump(data, f)
-
-        if "files" not in instance.data:
-            instance.data["files"] = []
-        if "hardlinks" not in instance.data:
-            instance.data["hardlinks"] = []
-        if "transfers" not in instance.data:
-            instance.data["transfers"] = []
-
-        instance.data["files"].append(maya_fname)
-        instance.data["files"].append(json_fname)
-
-        instance.data["representations"] = []
-        instance.data["representations"].append(
-            {
-                "name": "ma",
-                "ext": "ma",
-                "files": os.path.basename(maya_fname),
-                "stagingDir": os.path.dirname(maya_fname),
-            }
-        )
-        instance.data["representations"].append(
-            {
-                "name": "json",
-                "ext": "json",
-                "files": os.path.basename(json_fname),
-                "stagingDir": os.path.dirname(json_fname),
-            }
-        )
-
-        # Set up the resources transfers/links for the integrator
-        instance.data["transfers"].extend(transfers)
-        instance.data["hardlinks"].extend(hardlinks)
-
-        # Source hash for the textures
-        instance.data["sourceHashes"] = hashes
-
-        """
-        self.log.info("Returning colorspaces to their original values ...")
-        for attr, value in remap.items():
-            self.log.info(" - {}: {}".format(attr, value))
-            cmds.setAttr(attr, value, type="string")
-        """
-        self.log.info("Extracted instance '%s' to: %s" % (instance.name,
-                                                          maya_path))
 
     def resource_destination(self, instance, filepath, do_maketx):
         """Get resource destination path.
@@ -467,3 +488,26 @@ class ExtractLook(openpype.api.Extractor):
             return converted, COPY, texture_hash
 
         return filepath, COPY, texture_hash
+
+
+class ExtractModelRenderSets(ExtractLook):
+    """Extract model render attribute sets as model metadata
+
+    Only extracts the render attrib sets (NO shadingEngines) alongside
+    a .json file that stores it relationships for the sets and "attribute"
+    data for the instance members.
+
+    """
+
+    label = "Model Render Sets"
+    hosts = ["maya"]
+    families = ["model"]
+    scene_type_prefix = "meta.render."
+    look_data_type = "meta.render.json"
+
+    def get_maya_scene_type(self, instance):
+        typ = super(ExtractModelRenderSets, self).get_maya_scene_type(instance)
+        # add prefix
+        self.scene_type = self.scene_type_prefix + self.scene_type
+
+        return typ
@@ -17,6 +17,7 @@ class ExtractMayaSceneRaw(openpype.api.Extractor):
     label = "Maya Scene (Raw)"
     hosts = ["maya"]
     families = ["mayaAscii",
+                "mayaScene",
                 "setdress",
                 "layout",
                 "camerarig",
@@ -5,6 +5,8 @@ from __future__ import absolute_import
 import pyblish.api
 import openpype.api
 
+from maya import cmds
+
 
 class SelectInvalidInstances(pyblish.api.Action):
     """Select invalid instances in Outliner."""
@@ -18,13 +20,12 @@ class SelectInvalidInstances(pyblish.api.Action):
         # Get the errored instances
         failed = []
         for result in context.data["results"]:
-            if result["error"] is None:
-                continue
-            if result["instance"] is None:
-                continue
-            if result["instance"] in failed:
-                continue
-            if result["plugin"] != plugin:
+            if (
+                result["error"] is None
+                or result["instance"] is None
+                or result["instance"] in failed
+                or result["plugin"] != plugin
+            ):
                 continue
 
             failed.append(result["instance"])
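The consolidated condition above can be exercised on its own with plain dicts standing in for pyblish result entries (the function name and the sample data are illustrative, not from the plugin):

```python
def collect_failed(results, plugin):
    # Same filter as the consolidated condition above: skip results with no
    # error, no instance, an already-collected instance, or another plugin.
    failed = []
    for result in results:
        if (
            result["error"] is None
            or result["instance"] is None
            or result["instance"] in failed
            or result["plugin"] != plugin
        ):
            continue
        failed.append(result["instance"])
    return failed


results = [
    {"error": "boom", "instance": "a", "plugin": "P"},
    {"error": None, "instance": "b", "plugin": "P"},    # no error: skipped
    {"error": "boom", "instance": "a", "plugin": "P"},  # duplicate: skipped
    {"error": "boom", "instance": "c", "plugin": "Q"},  # other plugin: skipped
]
print(collect_failed(results, "P"))  # ['a']
```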
@@ -44,25 +45,10 @@ class SelectInvalidInstances(pyblish.api.Action):
             self.deselect()
 
     def select(self, instances):
-        if "nuke" in pyblish.api.registered_hosts():
-            import avalon.nuke.lib
-            import nuke
-            avalon.nuke.lib.select_nodes(
-                [nuke.toNode(str(x)) for x in instances]
-            )
-
-        if "maya" in pyblish.api.registered_hosts():
-            from maya import cmds
-            cmds.select(instances, replace=True, noExpand=True)
+        cmds.select(instances, replace=True, noExpand=True)
 
     def deselect(self):
-        if "nuke" in pyblish.api.registered_hosts():
-            import avalon.nuke.lib
-            avalon.nuke.lib.reset_selection()
-
-        if "maya" in pyblish.api.registered_hosts():
-            from maya import cmds
-            cmds.select(deselect=True)
+        cmds.select(deselect=True)
 
 
 class RepairSelectInvalidInstances(pyblish.api.Action):
@@ -92,23 +78,14 @@ class RepairSelectInvalidInstances(pyblish.api.Action):

         context_asset = context.data["assetEntity"]["name"]
         for instance in instances:
-            if "nuke" in pyblish.api.registered_hosts():
-                import openpype.hosts.nuke.api as nuke_api
-                origin_node = instance[0]
-                nuke_api.lib.recreate_instance(
-                    origin_node, avalon_data={"asset": context_asset}
-                )
-            else:
-                self.set_attribute(instance, context_asset)
+            self.set_attribute(instance, context_asset)

     def set_attribute(self, instance, context_asset):
-        if "maya" in pyblish.api.registered_hosts():
-            from maya import cmds
-            cmds.setAttr(
-                instance.data.get("name") + ".asset",
-                context_asset,
-                type="string"
-            )
+        cmds.setAttr(
+            instance.data.get("name") + ".asset",
+            context_asset,
+            type="string"
+        )


 class ValidateInstanceInContext(pyblish.api.InstancePlugin):

@@ -124,7 +101,7 @@ class ValidateInstanceInContext(pyblish.api.InstancePlugin):
     order = openpype.api.ValidateContentsOrder
     label = "Instance in same Context"
     optional = True
-    hosts = ["maya", "nuke"]
+    hosts = ["maya"]
     actions = [SelectInvalidInstances, RepairSelectInvalidInstances]

     def process(self, instance):

@@ -0,0 +1,47 @@
+import pyblish.api
+import maya.cmds as cmds
+import openpype.api
+import os
+
+
+class ValidateLoadedPlugin(pyblish.api.ContextPlugin):
+    """Ensure there are no unauthorized loaded plugins"""
+
+    label = "Loaded Plugin"
+    order = pyblish.api.ValidatorOrder
+    host = ["maya"]
+    actions = [openpype.api.RepairContextAction]
+
+    @classmethod
+    def get_invalid(cls, context):
+
+        invalid = []
+        loaded_plugin = cmds.pluginInfo(query=True, listPlugins=True)
+        # get variable from OpenPype settings
+        whitelist_native_plugins = cls.whitelist_native_plugins
+        authorized_plugins = cls.authorized_plugins or []
+
+        for plugin in loaded_plugin:
+            if not whitelist_native_plugins and os.getenv('MAYA_LOCATION') \
+                    in cmds.pluginInfo(plugin, query=True, path=True):
+                continue
+            if plugin not in authorized_plugins:
+                invalid.append(plugin)
+
+        return invalid
+
+    def process(self, context):
+
+        invalid = self.get_invalid(context)
+        if invalid:
+            raise RuntimeError(
+                "Found forbidden plugin name: {}".format(", ".join(invalid))
+            )
+
+    @classmethod
+    def repair(cls, context):
+        """Unload forbidden plugins"""
+
+        for plugin in cls.get_invalid(context):
+            cmds.pluginInfo(plugin, edit=True, autoload=False)
+            cmds.unloadPlugin(plugin, force=True)

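The whitelist logic above skips plugins shipped inside `MAYA_LOCATION` when native-plugin whitelisting is disabled, then flags anything not explicitly authorized. A standalone sketch of that filtering, runnable outside Maya (the helper name `find_unauthorized` and all paths are illustrative, not part of the plugin's API):

```python
import os


def find_unauthorized(loaded, plugin_paths, authorized, whitelist_native):
    # Mirrors get_invalid above: plugins whose path sits under
    # MAYA_LOCATION are skipped unless native whitelisting is enabled.
    maya_location = os.getenv("MAYA_LOCATION", "")
    invalid = []
    for plugin in loaded:
        if not whitelist_native and maya_location \
                and maya_location in plugin_paths.get(plugin, ""):
            continue
        if plugin not in authorized:
            invalid.append(plugin)
    return invalid


os.environ["MAYA_LOCATION"] = "/opt/maya"  # example value
paths = {
    "mtoa": "/plugins/mtoa.so",
    "fbxmaya": "/opt/maya/plug-ins/fbxmaya.so",
}
print(find_unauthorized(["mtoa", "fbxmaya"], paths, ["mtoa"], False))  # []
```

Note the real plugin queries `cmds.pluginInfo` for the path; the dict here only stands in for that lookup.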
@@ -76,7 +76,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
         r'%a|<aov>|<renderpass>', re.IGNORECASE)
     R_LAYER_TOKEN = re.compile(
         r'%l|<layer>|<renderlayer>', re.IGNORECASE)
-    R_CAMERA_TOKEN = re.compile(r'%c|<camera>', re.IGNORECASE)
+    R_CAMERA_TOKEN = re.compile(r'%c|<Camera>')
     R_SCENE_TOKEN = re.compile(r'%s|<scene>', re.IGNORECASE)

     DEFAULT_PADDING = 4

@@ -126,7 +126,9 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
         if len(cameras) > 1 and not re.search(cls.R_CAMERA_TOKEN, prefix):
             invalid = True
             cls.log.error("Wrong image prefix [ {} ] - "
-                          "doesn't have: '<camera>' token".format(prefix))
+                          "doesn't have: '<Camera>' token".format(prefix))
+            cls.log.error(
+                "Note that it needs to have a capital 'C' at the beginning")

         # renderer specific checks
         if renderer == "vray":

@@ -0,0 +1,25 @@
+import pyblish.api
+import openpype.api
+
+
+class ValidateSetdressRoot(pyblish.api.InstancePlugin):
+    """
+    """
+
+    order = openpype.api.ValidateContentsOrder
+    label = "SetDress Root"
+    hosts = ["maya"]
+    families = ["setdress"]
+
+    def process(self, instance):
+        from maya import cmds
+
+        if instance.data.get("exactSetMembersOnly"):
+            return
+
+        set_member = instance.data["setMembers"]
+        root = cmds.ls(set_member, assemblies=True, long=True)
+
+        if not root or root[0] not in set_member:
+            raise Exception("Setdress top root node is not being published.")

@@ -21,6 +21,7 @@ def add_implementation_envs(env, _app):
         new_nuke_paths.append(norm_path)

     env["NUKE_PATH"] = os.pathsep.join(new_nuke_paths)
+    env.pop("QT_AUTO_SCREEN_SCALE_FACTOR", None)

     # Try to add QuickTime to PATH
     quick_time_path = "C:/Program Files (x86)/QuickTime/QTSystem"

@@ -288,14 +288,15 @@ def script_name():
 def add_button_write_to_read(node):
     name = "createReadNode"
     label = "Create Read From Rendered"
-    value = "import write_to_read;write_to_read.write_to_read(nuke.thisNode())"
+    value = "import write_to_read;\
+write_to_read.write_to_read(nuke.thisNode(), allow_relative=False)"
     knob = nuke.PyScript_Knob(name, label, value)
     knob.clearFlag(nuke.STARTLINE)
     node.addKnob(knob)


 def create_write_node(name, data, input=None, prenodes=None,
-                      review=True, linked_knobs=None):
+                      review=True, linked_knobs=None, farm=True):
     ''' Creating write node which is group node

     Arguments:

@@ -421,7 +422,15 @@ def create_write_node(name, data, input=None, prenodes=None,
                 ))
                 continue

-            if knob and value:
+            if not knob and not value:
+                continue
+
+            log.info((knob, value))
+
+            if isinstance(value, str):
+                if "[" in value:
+                    now_node[knob].setExpression(value)
+                else:
                     now_node[knob].setValue(value)

     # connect to previous node

@@ -466,7 +475,7 @@ def create_write_node(name, data, input=None, prenodes=None,
     # imprinting group node
     anlib.set_avalon_knob_data(GN, data["avalon"])
     anlib.add_publish_knob(GN)
-    add_rendering_knobs(GN)
+    add_rendering_knobs(GN, farm)

     if review:
         add_review_knob(GN)

@@ -526,7 +535,7 @@ def create_write_node(name, data, input=None, prenodes=None,
     return GN


-def add_rendering_knobs(node):
+def add_rendering_knobs(node, farm=True):
     ''' Adds additional rendering knobs to given node

     Arguments:

@@ -535,9 +544,13 @@ def add_rendering_knobs(node):
     Return:
         node (obj): with added knobs
     '''
+    knob_options = [
+        "Use existing frames", "Local"]
+    if farm:
+        knob_options.append("On farm")
+
     if "render" not in node.knobs():
-        knob = nuke.Enumeration_Knob("render", "", [
-            "Use existing frames", "Local", "On farm"])
+        knob = nuke.Enumeration_Knob("render", "", knob_options)
         knob.clearFlag(nuke.STARTLINE)
         node.addKnob(knob)
     return node

  141  openpype/hosts/nuke/plugins/create/create_write_still.py  (new file)

@@ -0,0 +1,141 @@
+from collections import OrderedDict
+from openpype.hosts.nuke.api import (
+    plugin,
+    lib)
+import nuke
+
+
+class CreateWriteStill(plugin.PypeCreator):
+    # change this to template preset
+    name = "WriteStillFrame"
+    label = "Create Write Still Image"
+    hosts = ["nuke"]
+    n_class = "Write"
+    family = "still"
+    icon = "image"
+    defaults = [
+        "ImageFrame{:0>4}".format(nuke.frame()),
+        "MPFrame{:0>4}".format(nuke.frame()),
+        "LayoutFrame{:0>4}".format(nuke.frame())
+    ]
+
+    def __init__(self, *args, **kwargs):
+        super(CreateWriteStill, self).__init__(*args, **kwargs)
+
+        data = OrderedDict()
+
+        data["family"] = self.family
+        data["families"] = self.n_class
+
+        for k, v in self.data.items():
+            if k not in data.keys():
+                data.update({k: v})
+
+        self.data = data
+        self.nodes = nuke.selectedNodes()
+        self.log.debug("_ self.data: '{}'".format(self.data))
+
+    def process(self):
+
+        inputs = []
+        outputs = []
+        instance = nuke.toNode(self.data["subset"])
+        selected_node = None
+
+        # use selection
+        if (self.options or {}).get("useSelection"):
+            nodes = self.nodes
+
+            if not (len(nodes) < 2):
+                msg = ("Select only one node. "
+                       "The node you want to connect to, "
+                       "or tick off `Use selection`")
+                self.log.error(msg)
+                nuke.message(msg)
+                return
+
+            if len(nodes) == 0:
+                msg = (
+                    "No nodes selected. Please select a single node to connect"
+                    " to or tick off `Use selection`"
+                )
+                self.log.error(msg)
+                nuke.message(msg)
+                return
+
+            selected_node = nodes[0]
+            inputs = [selected_node]
+            outputs = selected_node.dependent()
+
+            if instance:
+                if (instance.name() in selected_node.name()):
+                    selected_node = instance.dependencies()[0]
+
+        # if node already exist
+        if instance:
+            # collect input / outputs
+            inputs = instance.dependencies()
+            outputs = instance.dependent()
+            selected_node = inputs[0]
+            # remove old one
+            nuke.delete(instance)
+
+        # recreate new
+        write_data = {
+            "nodeclass": self.n_class,
+            "families": [self.family],
+            "avalon": self.data
+        }
+
+        # add creator data
+        creator_data = {"creator": self.__class__.__name__}
+        self.data.update(creator_data)
+        write_data.update(creator_data)
+
+        self.log.info("Adding template path from plugin")
+        write_data.update({
+            "fpath_template": (
+                "{work}/renders/nuke/{subset}/{subset}.{ext}")})
+
+        _prenodes = [
+            {
+                "name": "FrameHold01",
+                "class": "FrameHold",
+                "knobs": [
+                    ("first_frame", nuke.frame())
+                ],
+                "dependent": None
+            }
+        ]
+
+        write_node = lib.create_write_node(
+            self.name,
+            write_data,
+            input=selected_node,
+            review=False,
+            prenodes=_prenodes,
+            farm=False,
+            linked_knobs=["channels", "___", "first", "last", "use_limit"])
+
+        # relinking to collected connections
+        for i, input in enumerate(inputs):
+            write_node.setInput(i, input)
+
+        write_node.autoplace()
+
+        for output in outputs:
+            output.setInput(0, write_node)
+
+        # link frame hold to group node
+        write_node.begin()
+        for n in nuke.allNodes():
+            # get write node
+            if n.Class() in "Write":
+                w_node = n
+        write_node.end()
+
+        w_node["use_limit"].setValue(True)
+        w_node["first"].setValue(nuke.frame())
+        w_node["last"].setValue(nuke.frame())
+
+        return write_node

@@ -13,7 +13,7 @@ class LoadImage(api.Loader):
     """Load still image into Nuke"""

     families = ["render", "source", "plate", "review", "image"]
-    representations = ["exr", "dpx", "jpg", "jpeg", "png", "psd"]
+    representations = ["exr", "dpx", "jpg", "jpeg", "png", "psd", "tiff"]

     label = "Load Image"
     order = -10

@@ -17,7 +17,7 @@ class NukeRenderLocal(openpype.api.Extractor):
     order = pyblish.api.ExtractorOrder
     label = "Render Local"
     hosts = ["nuke"]
-    families = ["render.local", "prerender.local"]
+    families = ["render.local", "prerender.local", "still.local"]

     def process(self, instance):
         families = instance.data["families"]

@@ -66,13 +66,23 @@ class NukeRenderLocal(openpype.api.Extractor):
         instance.data["representations"] = []

         collected_frames = os.listdir(out_dir)
-        repre = {
-            'name': ext,
-            'ext': ext,
-            'frameStart': "%0{}d".format(len(str(last_frame))) % first_frame,
-            'files': collected_frames,
-            "stagingDir": out_dir
-        }
+
+        if len(collected_frames) == 1:
+            repre = {
+                'name': ext,
+                'ext': ext,
+                'files': collected_frames.pop(),
+                "stagingDir": out_dir
+            }
+        else:
+            repre = {
+                'name': ext,
+                'ext': ext,
+                'frameStart': "%0{}d".format(
+                    len(str(last_frame))) % first_frame,
+                'files': collected_frames,
+                "stagingDir": out_dir
+            }
         instance.data["representations"].append(repre)

         self.log.info("Extracted instance '{0}' to: {1}".format(

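The extractor now branches on frame count: a single rendered frame is stored as one file name string, while a sequence keeps the file list plus a zero-padded `frameStart`. A pure-Python sketch of that branching (the helper name `build_representation` is illustrative; dict keys follow the hunk above):

```python
def build_representation(ext, collected_frames, first_frame, last_frame,
                         staging_dir):
    # Single frame: 'files' is a plain string and no frameStart is needed.
    if len(collected_frames) == 1:
        return {
            'name': ext,
            'ext': ext,
            'files': collected_frames[0],
            "stagingDir": staging_dir,
        }
    # Sequence: keep the list plus a start frame padded to the width
    # of the last frame number.
    return {
        'name': ext,
        'ext': ext,
        'frameStart': "%0{}d".format(len(str(last_frame))) % first_frame,
        'files': collected_frames,
        "stagingDir": staging_dir,
    }


single = build_representation("exr", ["f.1001.exr"], 1001, 1001, "/tmp")
print(single["files"])  # f.1001.exr
seq = build_representation(
    "exr", ["f.1001.exr", "f.1002.exr"], 1001, 1100, "/tmp")
print(seq["frameStart"])  # 1001
```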
@@ -89,6 +99,9 @@ class NukeRenderLocal(openpype.api.Extractor):
             instance.data['family'] = 'prerender'
             families.remove('prerender.local')
             families.insert(0, "prerender")
+        elif "still.local" in families:
+            instance.data['family'] = 'image'
+            families.remove('still.local')
         instance.data["families"] = families

         collections, remainder = clique.assemble(collected_frames)

@@ -64,7 +64,7 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
         )

         if [fm for fm in _families_test
-                if fm in ["render", "prerender"]]:
+                if fm in ["render", "prerender", "still"]]:
             if "representations" not in instance.data:
                 instance.data["representations"] = list()

@@ -100,7 +100,13 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
                         frame_start_str, frame_slate_str)
                     collected_frames.insert(0, slate_frame)

-                representation['files'] = collected_frames
+                if collected_frames_len == 1:
+                    representation['files'] = collected_frames.pop()
+                    if "still" in _families_test:
+                        instance.data['family'] = 'image'
+                        instance.data["families"].remove('still')
+                else:
+                    representation['files'] = collected_frames
                 instance.data["representations"].append(representation)
             except Exception:
                 instance.data["representations"].append(representation)

@@ -0,0 +1,110 @@
+# -*- coding: utf-8 -*-
+"""Validate if instance asset is the same as context asset."""
+from __future__ import absolute_import
+
+import nuke
+
+import pyblish.api
+import openpype.api
+import avalon.nuke.lib
+import openpype.hosts.nuke.api as nuke_api
+
+
+class SelectInvalidInstances(pyblish.api.Action):
+    """Select invalid instances in Outliner."""
+
+    label = "Select Instances"
+    icon = "briefcase"
+    on = "failed"
+
+    def process(self, context, plugin):
+        """Process invalid validators and select invalid instances."""
+        # Get the errored instances
+        failed = []
+        for result in context.data["results"]:
+            if (
+                result["error"] is None
+                or result["instance"] is None
+                or result["instance"] in failed
+                or result["plugin"] != plugin
+            ):
+                continue
+
+            failed.append(result["instance"])
+
+        # Apply pyblish.logic to get the instances for the plug-in
+        instances = pyblish.api.instances_by_plugin(failed, plugin)
+
+        if instances:
+            self.log.info(
+                "Selecting invalid nodes: %s" % ", ".join(
+                    [str(x) for x in instances]
+                )
+            )
+            self.select(instances)
+        else:
+            self.log.info("No invalid nodes found.")
+            self.deselect()
+
+    def select(self, instances):
+        avalon.nuke.lib.select_nodes(
+            [nuke.toNode(str(x)) for x in instances]
+        )
+
+    def deselect(self):
+        avalon.nuke.lib.reset_selection()
+
+
+class RepairSelectInvalidInstances(pyblish.api.Action):
+    """Repair the instance asset."""
+
+    label = "Repair"
+    icon = "wrench"
+    on = "failed"
+
+    def process(self, context, plugin):
+        # Get the errored instances
+        failed = []
+        for result in context.data["results"]:
+            if (
+                result["error"] is None
+                or result["instance"] is None
+                or result["instance"] in failed
+                or result["plugin"] != plugin
+            ):
+                continue
+
+            failed.append(result["instance"])
+
+        # Apply pyblish.logic to get the instances for the plug-in
+        instances = pyblish.api.instances_by_plugin(failed, plugin)
+
+        context_asset = context.data["assetEntity"]["name"]
+        for instance in instances:
+            origin_node = instance[0]
+            nuke_api.lib.recreate_instance(
+                origin_node, avalon_data={"asset": context_asset}
+            )
+
+
+class ValidateInstanceInContext(pyblish.api.InstancePlugin):
+    """Validator to check if instance asset match context asset.
+
+    When working in per-shot style you always publish data in context of
+    current asset (shot). This validator checks if this is so. It is optional
+    so it can be disabled when needed.
+
+    Action on this validator will select invalid instances in Outliner.
+    """
+
+    order = openpype.api.ValidateContentsOrder
+    label = "Instance in same Context"
+    hosts = ["nuke"]
+    actions = [SelectInvalidInstances, RepairSelectInvalidInstances]
+    optional = True
+
+    def process(self, instance):
+        asset = instance.data.get("asset")
+        context_asset = instance.context.data["assetEntity"]["name"]
+        msg = "{} has asset {}".format(instance.name, asset)
+        assert asset == context_asset, msg

@@ -55,7 +55,7 @@ class ValidateRenderedFrames(pyblish.api.InstancePlugin):
     """ Validates file output. """

     order = pyblish.api.ValidatorOrder + 0.1
-    families = ["render", "prerender"]
+    families = ["render", "prerender", "still"]

     label = "Validate rendered frame"
     hosts = ["nuke", "nukestudio"]

@@ -71,6 +71,9 @@ class ValidateRenderedFrames(pyblish.api.InstancePlugin):
             self.log.error(msg)
             raise ValidationException(msg)

+        if isinstance(repre["files"], str):
+            return
+
         collections, remainder = clique.assemble(repre["files"])
         self.log.info("collections: {}".format(str(collections)))
         self.log.info("remainder: {}".format(str(remainder)))

@@ -9,7 +9,9 @@ SINGLE_FILE_FORMATS = ['avi', 'mp4', 'mxf', 'mov', 'mpg', 'mpeg', 'wmv', 'm4v',
                        'm2v']


-def evaluate_filepath_new(k_value, k_eval, project_dir, first_frame):
+def evaluate_filepath_new(
+        k_value, k_eval, project_dir, first_frame, allow_relative):
+
     # get combined relative path
     combined_relative_path = None
     if k_eval is not None and project_dir is not None:

@@ -26,8 +28,9 @@ def evaluate_filepath_new(k_value, k_eval, project_dir, first_frame):
             combined_relative_path = None

     try:
-        k_value = k_value % first_frame
-        if os.path.exists(k_value):
+        # k_value = k_value % first_frame
+        if os.path.isdir(os.path.basename(k_value)):
+            # doesn't check for file, only parent dir
             filepath = k_value
         elif os.path.exists(k_eval):
             filepath = k_eval

@@ -37,10 +40,12 @@ def evaluate_filepath_new(k_value, k_eval, project_dir, first_frame):

         filepath = os.path.abspath(filepath)
     except Exception as E:
-        log.error("Cannot create Read node. Perhaps it needs to be rendered first :) Error: `{}`".format(E))
+        log.error("Cannot create Read node. Perhaps it needs to be \
+rendered first :) Error: `{}`".format(E))
         return None

     filepath = filepath.replace('\\', '/')
+    # assumes last number is a sequence counter
     current_frame = re.findall(r'\d+', filepath)[-1]
     padding = len(current_frame)
     basename = filepath[: filepath.rfind(current_frame)]

@@ -51,11 +56,13 @@ def evaluate_filepath_new(k_value, k_eval, project_dir, first_frame):
         pass
     else:
         # Image sequence needs hashes
+        # to do still with no number not handled
         filepath = basename + '#' * padding + '.' + filetype

     # relative path? make it relative again
-    if not isinstance(project_dir, type(None)):
-        filepath = filepath.replace(project_dir, '.')
+    if allow_relative:
+        if (not isinstance(project_dir, type(None))) and project_dir != "":
+            filepath = filepath.replace(project_dir, '.')

     # get first and last frame from disk
     frames = []

@@ -95,41 +102,40 @@ def create_read_node(ndata, comp_start):
     return


-def write_to_read(gn):
+def write_to_read(gn,
+                  allow_relative=False):

     comp_start = nuke.Root().knob('first_frame').value()
     comp_end = nuke.Root().knob('last_frame').value()
     project_dir = nuke.Root().knob('project_directory').getValue()
     if not os.path.exists(project_dir):
         project_dir = nuke.Root().knob('project_directory').evaluate()

     group_read_nodes = []

     with gn:
         height = gn.screenHeight()  # get group height and position
         new_xpos = int(gn.knob('xpos').value())
         new_ypos = int(gn.knob('ypos').value()) + height + 20
         group_writes = [n for n in nuke.allNodes() if n.Class() == "Write"]
         print("__ group_writes: {}".format(group_writes))
         if group_writes != []:
             # there can be only 1 write node, taking first
             n = group_writes[0]

             if n.knob('file') is not None:
-                file_path_new = evaluate_filepath_new(
+                myfile, firstFrame, lastFrame = evaluate_filepath_new(
                     n.knob('file').getValue(),
                     n.knob('file').evaluate(),
                     project_dir,
-                    comp_start
+                    comp_start,
+                    allow_relative
                 )
-                if not file_path_new:
+                if not myfile:
                     return

-                myfiletranslated, firstFrame, lastFrame = file_path_new
                 # get node data
                 ndata = {
-                    'filepath': myfiletranslated,
-                    'firstframe': firstFrame,
-                    'lastframe': lastFrame,
+                    'filepath': myfile,
+                    'firstframe': int(firstFrame),
+                    'lastframe': int(lastFrame),
                     'new_xpos': new_xpos,
                     'new_ypos': new_ypos,
                     'colorspace': n.knob('colorspace').getValue(),

@@ -139,7 +145,6 @@ def write_to_read(gn):
                 }
                 group_read_nodes.append(ndata)

-
     # create reads in one go
     for oneread in group_read_nodes:
         # create read node

@@ -1,6 +1,8 @@
 import os
 import logging

+import requests
+
 import avalon.api
 import pyblish.api
 from avalon.tvpaint import pipeline

@@ -8,6 +10,7 @@ from avalon.tvpaint.communication_server import register_localization_file
 from .lib import set_context_settings

 from openpype.hosts import tvpaint
+from openpype.api import get_current_project_settings

 log = logging.getLogger(__name__)

@@ -51,6 +54,19 @@ def initial_launch():
     set_context_settings()


+def application_exit():
+    data = get_current_project_settings()
+    stop_timer = data["tvpaint"]["stop_timer_on_application_exit"]
+
+    if not stop_timer:
+        return
+
+    # Stop application timer.
+    webserver_url = os.environ.get("OPENPYPE_WEBSERVER_URL")
+    rest_api_url = "{}/timers_manager/stop_timer".format(webserver_url)
+    requests.post(rest_api_url)
+
+
 def install():
     log.info("OpenPype - Installing TVPaint integration")
     localization_file = os.path.join(HOST_DIR, "resources", "avalon.loc")

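The new `application_exit` handler stops the running timer by POSTing to the tray webserver's timers-manager endpoint. A minimal sketch of the URL construction it relies on (the port in the example value is made up; the real one comes from `OPENPYPE_WEBSERVER_URL`):

```python
import os


def build_stop_timer_url(webserver_url):
    # Mirrors the handler above: POST <webserver>/timers_manager/stop_timer
    return "{}/timers_manager/stop_timer".format(webserver_url)


os.environ["OPENPYPE_WEBSERVER_URL"] = "http://localhost:8079"  # example value
print(build_stop_timer_url(os.environ["OPENPYPE_WEBSERVER_URL"]))
# http://localhost:8079/timers_manager/stop_timer
```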
@@ -67,6 +83,7 @@ def install():
     pyblish.api.register_callback("instanceToggled", on_instance_toggle)

     avalon.api.on("application.launched", initial_launch)
+    avalon.api.on("application.exit", application_exit)


 def uninstall():

@@ -3,4 +3,17 @@ from avalon.tvpaint import pipeline


 class Creator(PypeCreatorMixin, pipeline.Creator):
-    pass
+    @classmethod
+    def get_dynamic_data(cls, *args, **kwargs):
+        dynamic_data = super(Creator, cls).get_dynamic_data(*args, **kwargs)
+
+        # Change asset and name by current workfile context
+        workfile_context = pipeline.get_current_workfile_context()
+        asset_name = workfile_context.get("asset")
+        task_name = workfile_context.get("task")
+        if "asset" not in dynamic_data and asset_name:
+            dynamic_data["asset"] = asset_name
+
+        if "task" not in dynamic_data and task_name:
+            dynamic_data["task"] = task_name
+        return dynamic_data

@@ -606,7 +606,7 @@ class ExtractSequence(pyblish.api.Extractor):
                 self._copy_image(eq_frame_filepath, new_filepath)
                 layer_files_by_frame[frame_idx] = new_filepath

-        elif pre_behavior == "loop":
+        elif pre_behavior in ("loop", "repeat"):
             # Loop backwards from last frame of layer
             for frame_idx in reversed(range(mark_in_index, frame_start_index)):
                 eq_frame_idx_offset = (

@@ -678,7 +678,7 @@ class ExtractSequence(pyblish.api.Extractor):
                 self._copy_image(eq_frame_filepath, new_filepath)
                 layer_files_by_frame[frame_idx] = new_filepath

-        elif post_behavior == "loop":
+        elif post_behavior in ("loop", "repeat"):
             # Loop backwards from last frame of layer
             for frame_idx in range(frame_end_index + 1, mark_out_index + 1):
                 eq_frame_idx = frame_idx % frame_count

@@ -1162,8 +1162,12 @@ def prepare_host_environments(data, implementation_envs=True):
     if final_env is None:
         final_env = loaded_env

+    keys_to_remove = set(data["env"].keys()) - set(final_env.keys())
+
     # Update env
     data["env"].update(final_env)
+    for key in keys_to_remove:
+        data["env"].pop(key, None)


 def apply_project_environments_value(project_name, env, project_settings=None):

@@ -1349,23 +1353,23 @@ def _prepare_last_workfile(data, workdir, workfile_template_key):
     )

     # Last workfile path
-    last_workfile_path = ""
-    extensions = avalon.api.HOST_WORKFILE_EXTENSIONS.get(
-        app.host_name
-    )
-    if extensions:
-        anatomy = data["anatomy"]
-        # Find last workfile
-        file_template = anatomy.templates[workfile_template_key]["file"]
-        workdir_data.update({
-            "version": 1,
-            "user": get_openpype_username(),
-            "ext": extensions[0]
-        })
+    last_workfile_path = data.get("last_workfile_path") or ""
+    if not last_workfile_path:
+        extensions = avalon.api.HOST_WORKFILE_EXTENSIONS.get(app.host_name)

-        last_workfile_path = avalon.api.last_workfile(
-            workdir, file_template, workdir_data, extensions, True
-        )
+        if extensions:
+            anatomy = data["anatomy"]
+            # Find last workfile
+            file_template = anatomy.templates["work"]["file"]
+            workdir_data.update({
+                "version": 1,
+                "user": get_openpype_username(),
+                "ext": extensions[0]
+            })
+
+            last_workfile_path = avalon.api.last_workfile(
+                workdir, file_template, workdir_data, extensions, True
+            )

     if os.path.exists(last_workfile_path):
         log.debug((

@@ -1,9 +1,11 @@
 """Functions useful for delivery action or loader"""
 import os
 import shutil
+import glob
 import clique
 import collections


 def collect_frames(files):
     """
     Returns dict of source path and its frame, if from sequence

@@ -228,7 +230,16 @@ def process_sequence(
     Returns:
         (collections.defaultdict , int)
     """
-    if not os.path.exists(src_path):
+
+    def hash_path_exist(myPath):
+        res = myPath.replace('#', '*')
+        glob_search_results = glob.glob(res)
+        if len(glob_search_results) > 0:
+            return True
+        else:
+            return False
+
+    if not hash_path_exist(src_path):
         msg = "{} doesn't exist for {}".format(src_path,
                                                repre["_id"])
         report_items["Source file was not found"].append(msg)

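`hash_path_exist` turns a `#`-padded sequence path into a glob pattern and checks whether any frame matches on disk, so a sequence template counts as existing even though no file is literally named with hashes. A self-contained sketch of the same idea (the temp files are made up for the demo):

```python
import glob
import os
import tempfile


def hash_path_exist(path):
    # '#' padding characters become '*' wildcards for glob matching
    return len(glob.glob(path.replace('#', '*'))) > 0


# demo: create one frame of a fake sequence and probe the '#' template
tmp = tempfile.mkdtemp()
open(os.path.join(tmp, "shot_0001.exr"), "w").close()
print(hash_path_exist(os.path.join(tmp, "shot_####.exr")))   # True
print(hash_path_exist(os.path.join(tmp, "other_####.exr")))  # False
```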
@@ -3,6 +3,7 @@ import sys
 import time
 import logging
 import pymongo
+import certifi

 if sys.version_info[0] == 2:
     from urlparse import urlparse, parse_qs

@@ -85,12 +86,33 @@ def get_default_components():
     return decompose_url(mongo_url)


-def extract_port_from_url(url):
-    parsed_url = urlparse(url)
-    if parsed_url.scheme is None:
-        _url = "mongodb://{}".format(url)
-        parsed_url = urlparse(_url)
-    return parsed_url.port
+def should_add_certificate_path_to_mongo_url(mongo_url):
+    """Check if should add ca certificate to mongo url.
+
+    Since 30.9.2021 cloud mongo requires newer certificates that are not
+    available on most of workstation. This adds path to certifi certificate
+    which is valid for it. To add the certificate path url must have scheme
+    'mongodb+srv' or has 'ssl=true' or 'tls=true' in url query.
+    """
+    parsed = urlparse(mongo_url)
+    query = parse_qs(parsed.query)
+    lowered_query_keys = set(key.lower() for key in query.keys())
+    add_certificate = False
+    # Check if url 'ssl' or 'tls' are set to 'true'
+    for key in ("ssl", "tls"):
+        if key in query and "true" in query["ssl"]:
+            add_certificate = True
+            break
+
+    # Check if url contains 'mongodb+srv'
+    if not add_certificate and parsed.scheme == "mongodb+srv":
+        add_certificate = True
+
+    # Check if url does already contain certificate path
+    if add_certificate and "tlscafile" in lowered_query_keys:
+        add_certificate = False
+
+    return add_certificate


 def validate_mongo_connection(mongo_uri):

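The certificate detection above keys entirely off the URL's scheme and query string. A standalone sketch of that decision using only the standard library (the helper name `needs_certificate` is illustrative, not part of the module's API):

```python
from urllib.parse import urlparse, parse_qs


def needs_certificate(mongo_url):
    # Mirror of the rules above: 'mongodb+srv' scheme, or 'ssl=true' /
    # 'tls=true' in the query, unless 'tlsCAFile' is already present.
    parsed = urlparse(mongo_url)
    query = parse_qs(parsed.query)
    lowered = {key.lower() for key in query}
    add = parsed.scheme == "mongodb+srv" or any(
        "true" in query.get(key, []) for key in ("ssl", "tls")
    )
    return add and "tlscafile" not in lowered


print(needs_certificate("mongodb+srv://cluster.example.net/db"))        # True
print(needs_certificate("mongodb://localhost:27017"))                   # False
print(needs_certificate("mongodb://host/?ssl=true&tlsCAFile=/ca.pem"))  # False
```

The committed helper additionally checks `"true" in query["ssl"]` inside the loop; the sketch reads the looped key instead, which matches the behavior the docstring describes.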
@ -106,26 +128,9 @@ def validate_mongo_connection(mongo_uri):
|
|||
passed so probably couldn't connect to mongo server.
|
||||
|
||||
"""
|
||||
parsed = urlparse(mongo_uri)
|
||||
# Force validation of scheme
|
||||
if parsed.scheme not in ["mongodb", "mongodb+srv"]:
|
||||
raise pymongo.errors.InvalidURI((
|
||||
"Invalid URI scheme:"
|
||||
" URI must begin with 'mongodb://' or 'mongodb+srv://'"
|
||||
))
|
||||
# we have mongo connection string. Let's try if we can connect.
|
||||
components = decompose_url(mongo_uri)
|
||||
mongo_args = {
|
||||
"host": compose_url(**components),
|
||||
"serverSelectionTimeoutMS": 1000
|
||||
}
|
||||
port = components.get("port")
|
||||
if port is not None:
|
||||
mongo_args["port"] = int(port)
|
||||
|
||||
# Create connection
|
||||
client = pymongo.MongoClient(**mongo_args)
|
||||
client.server_info()
|
||||
client = OpenPypeMongoConnection.create_connection(
|
||||
mongo_uri, retry_attempts=1
|
||||
)
|
||||
client.close()
|
||||
|
||||
|
||||
|
|
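The scheme guard kept in this hunk can be exercised on its own; `check_scheme` below is an illustrative name, not an OpenPype API:

```python
from urllib.parse import urlparse

def check_scheme(mongo_uri):
    # Same validation as above: only 'mongodb://' and 'mongodb+srv://'
    # uris are accepted before any connection attempt is made.
    return urlparse(mongo_uri).scheme in ("mongodb", "mongodb+srv")

print(check_scheme("mongodb://localhost:27017"))          # True
print(check_scheme("mongodb+srv://cluster.example.net"))  # True
print(check_scheme("http://localhost:27017"))             # False
```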
@@ -151,6 +156,8 @@ class OpenPypeMongoConnection:
        # Naive validation of existing connection
        try:
            connection.server_info()
            with connection.start_session():
                pass
        except Exception:
            connection = None

@@ -162,38 +169,53 @@ class OpenPypeMongoConnection:
        return connection

    @classmethod
    def create_connection(cls, mongo_url, timeout=None):
    def create_connection(cls, mongo_url, timeout=None, retry_attempts=None):
        parsed = urlparse(mongo_url)
        # Force validation of scheme
        if parsed.scheme not in ["mongodb", "mongodb+srv"]:
            raise pymongo.errors.InvalidURI((
                "Invalid URI scheme:"
                " URI must begin with 'mongodb://' or 'mongodb+srv://'"
            ))

        if timeout is None:
            timeout = int(os.environ.get("AVALON_TIMEOUT") or 1000)

        kwargs = {
            "host": mongo_url,
            "serverSelectionTimeoutMS": timeout
        }
        if should_add_certificate_path_to_mongo_url(mongo_url):
            kwargs["ssl_ca_certs"] = certifi.where()

        port = extract_port_from_url(mongo_url)
        if port is not None:
            kwargs["port"] = int(port)
        mongo_client = pymongo.MongoClient(mongo_url, **kwargs)

        mongo_client = pymongo.MongoClient(**kwargs)
        if retry_attempts is None:
            retry_attempts = 3

        for _retry in range(3):
        elif not retry_attempts:
            retry_attempts = 1

        last_exc = None
        valid = False
        t1 = time.time()
        for attempt in range(1, retry_attempts + 1):
            try:
                t1 = time.time()
                mongo_client.server_info()

            except Exception:
                cls.log.warning("Retrying...")
                time.sleep(1)
                timeout *= 1.5

            else:
                with mongo_client.start_session():
                    pass
                valid = True
                break

        else:
            raise IOError((
                "ERROR: Couldn't connect to {} in less than {:.3f}ms"
            ).format(mongo_url, timeout))
            except Exception as exc:
                last_exc = exc
                if attempt < retry_attempts:
                    cls.log.warning(
                        "Attempt {} failed. Retrying... ".format(attempt)
                    )
                    time.sleep(1)

        if not valid:
            raise last_exc

        cls.log.info("Connected to {}, delay {:.3f}s".format(
            mongo_url, time.time() - t1

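A minimal standalone sketch of the retry loop introduced in this hunk, with an illustrative `ping` callable standing in for `mongo_client.server_info()` (the real code additionally validates a session):

```python
import time

def connect_with_retries(ping, retry_attempts=3, delay=1.0):
    # Mirrors the loop above: remember the last exception, sleep
    # between attempts, and re-raise only after the final failure.
    last_exc = None
    for attempt in range(1, retry_attempts + 1):
        try:
            ping()
            return attempt
        except Exception as exc:
            last_exc = exc
            if attempt < retry_attempts:
                time.sleep(delay)
    raise last_exc

calls = []
def flaky_ping():
    # Fails twice, then succeeds - simulates a slow mongo startup.
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("server not ready")

print(connect_with_retries(flaky_ping, retry_attempts=3, delay=0))  # 3
```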
@@ -11,7 +11,7 @@ import pyblish.api
class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin):
    """Collect Deadline Webservice URL from instance."""

    order = pyblish.api.CollectorOrder
    order = pyblish.api.CollectorOrder + 0.02
    label = "Deadline Webservice from the Instance"
    families = ["rendering"]

@@ -46,24 +46,25 @@ class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin):
            ["deadline"]
        )

        try:
            default_servers = deadline_settings["deadline_urls"]
            project_servers = (
                render_instance.context.data
                ["project_settings"]
                ["deadline"]
                ["deadline_servers"]
            )
            deadline_servers = {
                k: default_servers[k]
                for k in project_servers
                if k in default_servers
            }

        except AttributeError:
            # Handle situation where we had only one url for deadline.
            return render_instance.context.data["defaultDeadline"]
        default_server = render_instance.context.data["defaultDeadline"]
        instance_server = render_instance.data.get("deadlineServers")
        if not instance_server:
            return default_server

        default_servers = deadline_settings["deadline_urls"]
        project_servers = (
            render_instance.context.data
            ["project_settings"]
            ["deadline"]
            ["deadline_servers"]
        )
        deadline_servers = {
            k: default_servers[k]
            for k in project_servers
            if k in default_servers
        }
        # This is Maya specific and may not reflect real selection of deadline
        # url as dictionary keys in Python 2 are not ordered
        return deadline_servers[
            list(deadline_servers.keys())[
                int(render_instance.data.get("deadlineServers"))

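The server-selection step above intersects project-level server names with the system-wide url mapping. A sketch with made-up payloads (these names and urls are illustrative, not the real OpenPype settings structures):

```python
# Illustrative settings payloads - not the real OpenPype structures.
default_servers = {
    "main": "http://deadline-main:8082",
    "cloud": "http://deadline-cloud:8082",
    "legacy": "http://deadline-legacy:8082",
}
project_servers = ["cloud", "renderfarm2"]  # second key has no default url

# Same intersection as in the plugin: keep only project servers that
# also exist in the system-wide url mapping.
deadline_servers = {
    k: default_servers[k]
    for k in project_servers
    if k in default_servers
}
print(deadline_servers)  # {'cloud': 'http://deadline-cloud:8082'}
```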
@@ -351,6 +351,11 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
                f.replace(orig_scene, new_scene)
            )
            instance.data["expectedFiles"] = [new_exp]

            if instance.data.get("publishRenderMetadataFolder"):
                instance.data["publishRenderMetadataFolder"] = \
                    instance.data["publishRenderMetadataFolder"].replace(
                        orig_scene, new_scene)
            self.log.info("Scene name was switched {} -> {}".format(
                orig_scene, new_scene
            ))

@@ -385,6 +385,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
        """
        task = os.environ["AVALON_TASK"]
        subset = instance_data["subset"]
        cameras = instance_data.get("cameras", [])
        instances = []
        # go through aovs in expected files
        for aov, files in exp_files[0].items():

@@ -410,7 +411,11 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
                task[0].upper(), task[1:],
                subset[0].upper(), subset[1:])

            subset_name = '{}_{}'.format(group_name, aov)
            cam = [c for c in cameras if c in col.head]
            if cam:
                subset_name = '{}_{}_{}'.format(group_name, cam, aov)
            else:
                subset_name = '{}_{}'.format(group_name, aov)

            if isinstance(col, (list, tuple)):
                staging = os.path.dirname(col[0])

@@ -10,6 +10,7 @@ from openpype_modules.ftrack.lib import (
    CUST_ATTR_GROUP,
    CUST_ATTR_TOOLS,
    CUST_ATTR_APPLICATIONS,
    CUST_ATTR_INTENT,

    default_custom_attributes_definition,
    app_definitions_from_app_manager,

@@ -431,7 +432,7 @@ class CustomAttributes(BaseAction):

        intent_custom_attr_data = {
            "label": "Intent",
            "key": "intent",
            "key": CUST_ATTR_INTENT,
            "type": "enumerator",
            "entity_type": "assetversion",
            "group": CUST_ATTR_GROUP,

@@ -1,7 +1,6 @@
import os
import json
import collections
import openpype
from openpype.modules import OpenPypeModule

from openpype_interfaces import (

@@ -230,7 +229,13 @@ class FtrackModule(
            return

        import ftrack_api
        from openpype_modules.ftrack.lib import get_openpype_attr
        from openpype_modules.ftrack.lib import (
            get_openpype_attr,
            default_custom_attributes_definition,
            CUST_ATTR_TOOLS,
            CUST_ATTR_APPLICATIONS,
            CUST_ATTR_INTENT
        )

        try:
            session = self.create_ftrack_session()

@@ -255,6 +260,15 @@ class FtrackModule(

        project_id = project_entity["id"]

        ca_defs = default_custom_attributes_definition()
        hierarchical_attrs = ca_defs.get("is_hierarchical") or {}
        project_attrs = ca_defs.get("show") or {}
        ca_keys = (
            set(hierarchical_attrs.keys())
            | set(project_attrs.keys())
            | {CUST_ATTR_TOOLS, CUST_ATTR_APPLICATIONS, CUST_ATTR_INTENT}
        )

        cust_attr, hier_attr = get_openpype_attr(session)
        cust_attr_by_key = {attr["key"]: attr for attr in cust_attr}
        hier_attrs_by_key = {attr["key"]: attr for attr in hier_attr}

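The `ca_keys` union above collects every custom-attribute key that is allowed to change. A standalone sketch (the attribute names besides the three constants are illustrative):

```python
# Keys are illustrative; the literals mirror CUST_ATTR_TOOLS,
# CUST_ATTR_APPLICATIONS and CUST_ATTR_INTENT from the diff above.
hierarchical_attrs = {"fps": {}, "resolutionWidth": {}}
project_attrs = {"library_project": {}}

ca_keys = (
    set(hierarchical_attrs.keys())
    | set(project_attrs.keys())
    | {"tools_env", "applications", "intent"}
)
print(sorted(ca_keys))
# ['applications', 'fps', 'intent', 'library_project', 'resolutionWidth', 'tools_env']
```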
@@ -262,6 +276,9 @@ class FtrackModule(
        failed = {}
        missing = {}
        for key, value in attributes_changes.items():
            if key not in ca_keys:
                continue

            configuration = hier_attrs_by_key.get(key)
            if not configuration:
                configuration = cust_attr_by_key.get(key)

@@ -379,3 +396,16 @@ class FtrackModule(
    def timer_stopped(self):
        if self._timers_manager_module is not None:
            self._timers_manager_module.timer_stopped(self.id)

    def get_task_time(self, project_name, asset_name, task_name):
        session = self.create_ftrack_session()
        query = (
            'Task where name is "{}"'
            ' and parent.name is "{}"'
            ' and project.full_name is "{}"'
        ).format(task_name, asset_name, project_name)
        task_entity = session.query(query).first()
        if not task_entity:
            return 0
        hours_logged = (task_entity["time_logged"] / 60) / 60
        return hours_logged

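`get_task_time` converts ftrack's `time_logged` from seconds to hours with two divisions by 60; a tiny sketch of that arithmetic (the helper name is made up):

```python
def hours_from_seconds(time_logged):
    # ftrack's 'time_logged' is in seconds; two divisions by 60
    # convert it to hours, matching the hunk above.
    return (time_logged / 60) / 60

print(hours_from_seconds(5400))  # 1.5
print(hours_from_seconds(0))     # 0.0
```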
@@ -17,7 +17,8 @@ from openpype.lib import (
    get_pype_execute_args,
    OpenPypeMongoConnection,
    get_openpype_version,
    get_build_version
    get_build_version,
    validate_mongo_connection
)
from openpype_modules.ftrack import FTRACK_MODULE_DIR
from openpype_modules.ftrack.lib import credentials

@@ -36,11 +37,15 @@ class MongoPermissionsError(Exception):
def check_mongo_url(mongo_uri, log_error=False):
    """Checks if mongo server is responding"""
    try:
        client = pymongo.MongoClient(mongo_uri)
        # Force connection on a request as the connect=True parameter of
        # MongoClient seems to be useless here
        client.server_info()
        client.close()
        validate_mongo_connection(mongo_uri)

    except pymongo.errors.InvalidURI as err:
        if log_error:
            print("Can't connect to MongoDB at {} because: {}".format(
                mongo_uri, err
            ))
        return False

    except pymongo.errors.ServerSelectionTimeoutError as err:
        if log_error:
            print("Can't connect to MongoDB at {} because: {}".format(

@@ -3,7 +3,8 @@ from .constants import (
    CUST_ATTR_AUTO_SYNC,
    CUST_ATTR_GROUP,
    CUST_ATTR_TOOLS,
    CUST_ATTR_APPLICATIONS
    CUST_ATTR_APPLICATIONS,
    CUST_ATTR_INTENT
)
from .settings import (
    get_ftrack_event_mongo_info

@@ -10,3 +10,5 @@ CUST_ATTR_AUTO_SYNC = "avalon_auto_sync"
CUST_ATTR_APPLICATIONS = "applications"
# Environment tools custom attribute
CUST_ATTR_TOOLS = "tools_env"
# Intent custom attribute name
CUST_ATTR_INTENT = "intent"

@@ -1,8 +0,0 @@
from .idle_module import (
    IdleManager
)


__all__ = (
    "IdleManager",
)
@@ -1,79 +0,0 @@
import platform
import collections

from openpype.modules import OpenPypeModule
from openpype_interfaces import (
    ITrayService,
    IIdleManager
)


class IdleManager(OpenPypeModule, ITrayService):
    """ Measure user's idle time in seconds.
    Idle time resets on keyboard/mouse input.
    Is able to emit signals at specific time idle.
    """
    label = "Idle Service"
    name = "idle_manager"

    def initialize(self, module_settings):
        enabled = True
        # Ignore on MacOs
        # - pynput need root permissions and enabled access for application
        if platform.system().lower() == "darwin":
            enabled = False
        self.enabled = enabled

        self.time_callbacks = collections.defaultdict(list)
        self.idle_thread = None

    def tray_init(self):
        return

    def tray_start(self):
        if self.time_callbacks:
            self.start_thread()

    def tray_exit(self):
        self.stop_thread()
        try:
            self.time_callbacks = {}
        except Exception:
            pass

    def connect_with_modules(self, enabled_modules):
        for module in enabled_modules:
            if not isinstance(module, IIdleManager):
                continue

            module.idle_manager = self
            callbacks_items = module.callbacks_by_idle_time() or {}
            for emit_time, callbacks in callbacks_items.items():
                if not isinstance(callbacks, (tuple, list, set)):
                    callbacks = [callbacks]
                self.time_callbacks[emit_time].extend(callbacks)

    @property
    def idle_time(self):
        if self.idle_thread and self.idle_thread.is_running:
            return self.idle_thread.idle_time

    def _create_thread(self):
        from .idle_threads import IdleManagerThread

        return IdleManagerThread(self)

    def start_thread(self):
        if self.idle_thread:
            self.idle_thread.stop()
            self.idle_thread.join()
        self.idle_thread = self._create_thread()
        self.idle_thread.start()

    def stop_thread(self):
        if self.idle_thread:
            self.idle_thread.stop()
            self.idle_thread.join()

    def on_thread_stop(self):
        self.set_service_failed_icon()
@@ -1,97 +0,0 @@
import time
import threading

from pynput import mouse, keyboard

from openpype.lib import PypeLogger


class MouseThread(mouse.Listener):
    """Listens user's mouse movement."""

    def __init__(self, callback):
        super(MouseThread, self).__init__(on_move=self.on_move)
        self.callback = callback

    def on_move(self, posx, posy):
        self.callback()


class KeyboardThread(keyboard.Listener):
    """Listens user's keyboard input."""

    def __init__(self, callback):
        super(KeyboardThread, self).__init__(on_press=self.on_press)

        self.callback = callback

    def on_press(self, key):
        self.callback()


class IdleManagerThread(threading.Thread):
    def __init__(self, module, *args, **kwargs):
        super(IdleManagerThread, self).__init__(*args, **kwargs)
        self.log = PypeLogger.get_logger(self.__class__.__name__)
        self.module = module
        self.threads = []
        self.is_running = False
        self.idle_time = 0

    def stop(self):
        self.is_running = False

    def reset_time(self):
        self.idle_time = 0

    @property
    def time_callbacks(self):
        return self.module.time_callbacks

    def on_stop(self):
        self.is_running = False
        self.log.info("IdleManagerThread has stopped")
        self.module.on_thread_stop()

    def run(self):
        self.log.info("IdleManagerThread has started")
        self.is_running = True
        thread_mouse = MouseThread(self.reset_time)
        thread_keyboard = KeyboardThread(self.reset_time)
        thread_mouse.start()
        thread_keyboard.start()
        try:
            while self.is_running:
                if self.idle_time in self.time_callbacks:
                    for callback in self.time_callbacks[self.idle_time]:
                        thread = threading.Thread(target=callback)
                        thread.start()
                        self.threads.append(thread)

                for thread in tuple(self.threads):
                    if not thread.isAlive():
                        thread.join()
                        self.threads.remove(thread)

                self.idle_time += 1
                time.sleep(1)

        except Exception:
            self.log.warning(
                'Idle Manager service has failed', exc_info=True
            )

        # Threads don't have their attrs when Qt application already finished
        try:
            thread_mouse.stop()
            thread_mouse.join()
        except AttributeError:
            pass

        try:
            thread_keyboard.stop()
            thread_keyboard.join()
        except AttributeError:
            pass

        self.on_stop()
@@ -1,26 +0,0 @@
from abc import abstractmethod
from openpype.modules import OpenPypeInterface


class IIdleManager(OpenPypeInterface):
    """Other modules' interface to return callbacks by idle time in seconds.

    Expected output is a dictionary with seconds <int> as keys and
    callback/s as value; the value may be a callback or a list of callbacks.
    EXAMPLE:
    ```
    {
        60: self.on_minute_idle
    }
    ```
    """
    idle_manager = None

    @abstractmethod
    def callbacks_by_idle_time(self):
        pass

    @property
    def idle_time(self):
        if self.idle_manager:
            return self.idle_manager.idle_time
@@ -80,7 +80,8 @@ class AbstractProvider:
            representation (dict): complete repre containing 'file'
            site (str): site name
        Returns:
            (string) file_id of created file, raises exception
            (string) file_id of created/modified file,
            throws FileExistsError, FileNotFoundError exceptions
        """
        pass

@@ -103,7 +104,8 @@ class AbstractProvider:
            representation (dict): complete repre containing 'file'
            site (str): site name
        Returns:
            None
            (string) file_id of created/modified file,
            throws FileExistsError, FileNotFoundError exceptions
        """
        pass

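The FileExistsError/FileNotFoundError contract these docstrings describe can be sketched for a plain local copy (`copy_with_contract` is a hypothetical helper, not a real provider method):

```python
import os
import shutil

def copy_with_contract(source, target, overwrite=False):
    # Same contract as the provider docstrings above: a missing source
    # raises FileNotFoundError, an existing target raises
    # FileExistsError unless 'overwrite' is passed.
    if not os.path.exists(source):
        raise FileNotFoundError("Source file {} doesn't exist.".format(source))
    if os.path.exists(target) and not overwrite:
        raise FileExistsError("File already exists, use 'overwrite' argument")
    shutil.copyfile(source, target)
    return target
```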
@@ -0,0 +1,423 @@
import os

import dropbox

from openpype.api import Logger
from .abstract_provider import AbstractProvider
from ..utils import EditableScopes

log = Logger().get_logger("SyncServer")


class DropboxHandler(AbstractProvider):
    CODE = 'dropbox'
    LABEL = 'Dropbox'

    def __init__(self, project_name, site_name, tree=None, presets=None):
        self.active = False
        self.site_name = site_name
        self.presets = presets

        if not self.presets:
            log.info(
                "Sync Server: There are no presets for {}.".format(site_name)
            )
            return

        provider_presets = self.presets.get(self.CODE)
        if not provider_presets:
            msg = "Sync Server: No provider presets for {}".format(self.CODE)
            log.info(msg)
            return

        token = self.presets[self.CODE].get("token", "")
        if not token:
            msg = "Sync Server: No access token for dropbox provider"
            log.info(msg)
            return

        team_folder_name = self.presets[self.CODE].get("team_folder_name", "")
        if not team_folder_name:
            msg = "Sync Server: No team folder name for dropbox provider"
            log.info(msg)
            return

        acting_as_member = self.presets[self.CODE].get("acting_as_member", "")
        if not acting_as_member:
            msg = (
                "Sync Server: No acting member for dropbox provider"
            )
            log.info(msg)
            return

        self.dbx = None
        try:
            self.dbx = self._get_service(
                token, acting_as_member, team_folder_name
            )
        except Exception as e:
            log.info("Could not establish dropbox object: {}".format(e))
            return

        super(AbstractProvider, self).__init__()

    @classmethod
    def get_system_settings_schema(cls):
        """
        Returns dict for editable properties on system settings level

        Returns:
            (list) of dict
        """
        return []

    @classmethod
    def get_project_settings_schema(cls):
        """
        Returns dict for editable properties on project settings level

        Returns:
            (list) of dict
        """
        # {platform} tells that value is multiplatform and only specific OS
        # should be returned
        return [
            {
                "type": "text",
                "key": "token",
                "label": "Access Token"
            },
            {
                "type": "text",
                "key": "team_folder_name",
                "label": "Team Folder Name"
            },
            {
                "type": "text",
                "key": "acting_as_member",
                "label": "Acting As Member"
            },
            # roots could be overridden only on Project level, User cannot
            {
                'key': "roots",
                'label': "Roots",
                'type': 'dict'
            }
        ]

    @classmethod
    def get_local_settings_schema(cls):
        """
        Returns dict for editable properties on local settings level

        Returns:
            (dict)
        """
        return []

    def _get_service(self, token, acting_as_member, team_folder_name):
        dbx = dropbox.DropboxTeam(token)

        # Getting member id.
        member_id = None
        member_names = []
        for member in dbx.team_members_list().members:
            member_names.append(member.profile.name.display_name)
            if member.profile.name.display_name == acting_as_member:
                member_id = member.profile.team_member_id

        if member_id is None:
            raise ValueError(
                "Could not find member \"{}\". Available members: {}".format(
                    acting_as_member, member_names
                )
            )

        # Getting team folder id.
        team_folder_id = None
        team_folder_names = []
        for entry in dbx.team_team_folder_list().team_folders:
            team_folder_names.append(entry.name)
            if entry.name == team_folder_name:
                team_folder_id = entry.team_folder_id

        if team_folder_id is None:
            raise ValueError(
                "Could not find team folder \"{}\". Available folders: "
                "{}".format(
                    team_folder_name, team_folder_names
                )
            )

        # Establish dropbox object.
        path_root = dropbox.common.PathRoot.namespace_id(team_folder_id)
        return dropbox.DropboxTeam(
            token
        ).with_path_root(path_root).as_user(member_id)

    def is_active(self):
        """
        Returns True if provider is activated, eg. has working credentials.
        Returns:
            (boolean)
        """
        return self.dbx is not None

    @classmethod
    def get_configurable_items(cls):
        """
        Returns filtered dict of editable properties

        Returns:
            (dict)
        """
        editable = {
            'token': {
                'scope': [EditableScopes.PROJECT],
                'label': "Access Token",
                'type': 'text',
                'namespace': (
                    '{project_settings}/global/sync_server/sites/{site}/token'
                )
            },
            'team_folder_name': {
                'scope': [EditableScopes.PROJECT],
                'label': "Team Folder Name",
                'type': 'text',
                'namespace': (
                    '{project_settings}/global/sync_server/sites/{site}'
                    '/team_folder_name'
                )
            },
            'acting_as_member': {
                'scope': [EditableScopes.PROJECT, EditableScopes.LOCAL],
                'label': "Acting As Member",
                'type': 'text',
                'namespace': (
                    '{project_settings}/global/sync_server/sites/{site}'
                    '/acting_as_member'
                )
            }
        }
        return editable

    def _path_exists(self, path):
        try:
            entries = self.dbx.files_list_folder(
                path=os.path.dirname(path)
            ).entries
        except dropbox.exceptions.ApiError:
            return False

        for entry in entries:
            if entry.name == os.path.basename(path):
                return True

        return False

    def upload_file(self, source_path, path,
                    server, collection, file, representation, site,
                    overwrite=False):
        """
        Copy file from 'source_path' to 'target_path' on provider.
        Use 'overwrite' boolean to rewrite existing file on provider

        Args:
            source_path (string):
            path (string): absolute path with or without name of the file
            overwrite (boolean): replace existing file

            arguments for saving progress:
            server (SyncServer): server instance to call update_db on
            collection (str): name of collection
            file (dict): info about uploaded file (matches structure from db)
            representation (dict): complete repre containing 'file'
            site (str): site name
        Returns:
            (string) file_id of created file, raises exception
        """
        # Check source path.
        if not os.path.exists(source_path):
            raise FileNotFoundError(
                "Source file {} doesn't exist.".format(source_path)
            )

        if self._path_exists(path) and not overwrite:
            raise FileExistsError(
                "File already exists, use 'overwrite' argument"
            )

        mode = dropbox.files.WriteMode("add", None)
        if overwrite:
            mode = dropbox.files.WriteMode.overwrite

        with open(source_path, "rb") as f:
            self.dbx.files_upload(f.read(), path, mode=mode)

        server.update_db(
            collection=collection,
            new_file_id=None,
            file=file,
            representation=representation,
            site=site,
            progress=100
        )

        return path

    def download_file(self, source_path, local_path,
                      server, collection, file, representation, site,
                      overwrite=False):
        """
        Download file from provider into local system

        Args:
            source_path (string): absolute path on provider
            local_path (string): absolute path with or without name of the file
            overwrite (boolean): replace existing file

            arguments for saving progress:
            server (SyncServer): server instance to call update_db on
            collection (str): name of collection
            file (dict): info about uploaded file (matches structure from db)
            representation (dict): complete repre containing 'file'
            site (str): site name
        Returns:
            None
        """
        # Check source path.
        if not self._path_exists(source_path):
            raise FileNotFoundError(
                "Source file {} doesn't exist.".format(source_path)
            )

        if os.path.exists(local_path) and not overwrite:
            raise FileExistsError(
                "File already exists, use 'overwrite' argument"
            )

        if os.path.exists(local_path) and overwrite:
            os.unlink(local_path)

        self.dbx.files_download_to_file(local_path, source_path)

        server.update_db(
            collection=collection,
            new_file_id=None,
            file=file,
            representation=representation,
            site=site,
            progress=100
        )

        return os.path.basename(source_path)

    def delete_file(self, path):
        """
        Deletes file from 'path'. Expects path to specific file.

        Args:
            path (string): absolute path to particular file

        Returns:
            None
        """
        if not self._path_exists(path):
            raise FileExistsError("File {} doesn't exist".format(path))

        self.dbx.files_delete(path)

    def list_folder(self, folder_path):
        """
        List all files and subfolders of particular path non-recursively.
        Args:
            folder_path (string): absolute path on provider

        Returns:
            (list)
        """
        if not self._path_exists(folder_path):
            raise FileExistsError(
                "Folder \"{}\" does not exist".format(folder_path)
            )

        entry_names = []
        for entry in self.dbx.files_list_folder(path=folder_path).entries:
            entry_names.append(entry.name)
        return entry_names

    def create_folder(self, folder_path):
        """
        Create all nonexistent folders and subfolders in 'path'.

        Args:
            path (string): absolute path

        Returns:
            (string) folder id of lowest subfolder from 'path'
        """
        if self._path_exists(folder_path):
            return folder_path

        self.dbx.files_create_folder_v2(folder_path)

        return folder_path

    def get_tree(self):
        """
        Creates folder structure for providers which do not provide
        tree folder structure (GDrive has no accessible tree structure,
        only parents and their parents)
        """
        pass

    def get_roots_config(self, anatomy=None):
        """
        Returns root values for path resolving

        Takes value from Anatomy which takes values from Settings
        overridden by Local Settings

        Returns:
            (dict) - {"root": {"root": "/My Drive"}}
            OR
            {"root": {"root_ONE": "value", "root_TWO": "value"}}
            Format is important for usage of python's format ** approach
        """
        return self.presets['root']

    def resolve_path(self, path, root_config=None, anatomy=None):
        """
        Replaces all root placeholders with proper values

        Args:
            path (string): root[work]/folder...
            root_config (dict): {'work': "c:/..."...}
            anatomy (Anatomy): object of Anatomy
        Returns:
            (string): proper url
        """
        if not root_config:
            root_config = self.get_roots_config(anatomy)

        if root_config and not root_config.get("root"):
            root_config = {"root": root_config}

        try:
            if not root_config:
                raise KeyError

            path = path.format(**root_config)
        except KeyError:
            try:
                path = anatomy.fill_root(path)
            except KeyError:
                msg = "Error in resolving local root from anatomy"
                log.error(msg)
                raise ValueError(msg)

        return path
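`resolve_path` relies on `str.format` keyword expansion with nested item access; a minimal standalone illustration (the root names and paths are made up):

```python
# Made-up root mapping in the same shape resolve_path expects.
root_config = {"root": {"work": "/mnt/projects", "publish": "/mnt/publish"}}

path = "{root[work]}/MyProject/shots/sh010"
print(path.format(**root_config))  # /mnt/projects/MyProject/shots/sh010
```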
@@ -61,7 +61,6 @@ class GDriveHandler(AbstractProvider):
    CHUNK_SIZE = 2097152  # must be divisible by 256! used for upload chunks

    def __init__(self, project_name, site_name, tree=None, presets=None):
        self.presets = None
        self.active = False
        self.project_name = project_name
        self.site_name = site_name

@@ -74,7 +73,13 @@ class GDriveHandler(AbstractProvider):
                        format(site_name))
            return

        cred_path = self.presets.get("credentials_url", {}).\
        provider_presets = self.presets.get(self.CODE)
        if not provider_presets:
            msg = "Sync Server: No provider presets for {}".format(self.CODE)
            log.info(msg)
            return

        cred_path = self.presets[self.CODE].get("credentials_url", {}).\
            get(platform.system().lower()) or ''
        if not os.path.exists(cred_path):
            msg = "Sync Server: No credentials for gdrive provider " + \

@@ -1,5 +1,7 @@
from .gdrive import GDriveHandler
from .dropbox import DropboxHandler
from .local_drive import LocalDriveHandler
from .sftp import SFTPHandler


class ProviderFactory:

@@ -111,4 +113,6 @@ factory = ProviderFactory()
# 7 denotes number of files that could be synced in single loop - learned by
# trial and error
factory.register_provider(GDriveHandler.CODE, GDriveHandler, 7)
factory.register_provider(DropboxHandler.CODE, DropboxHandler, 10)
factory.register_provider(LocalDriveHandler.CODE, LocalDriveHandler, 50)
factory.register_provider(SFTPHandler.CODE, SFTPHandler, 20)

Binary file not shown. (After: Width | Height | Size: 2 KiB)
Binary file not shown. (After: Width | Height | Size: 2.1 KiB)
461 openpype/modules/default_modules/sync_server/providers/sftp.py (new file)
@ -0,0 +1,461 @@
|
|||
import os
|
||||
import os.path
|
||||
import time
|
||||
import sys
|
||||
import six
|
||||
import threading
|
||||
import platform
|
||||
|
||||
from openpype.api import Logger
|
||||
from openpype.api import get_system_settings
|
||||
from .abstract_provider import AbstractProvider
|
||||
log = Logger().get_logger("SyncServer")
|
||||
|
||||
pysftp = None
|
||||
try:
|
||||
import pysftp
|
||||
except (ImportError, SyntaxError):
|
||||
pass
|
||||
|
||||
# handle imports from Python 2 hosts - in those only basic methods are used
|
||||
log.warning("Import failed, imported from Python 2, operations will fail.")
|
||||
|
||||
|
||||
class SFTPHandler(AbstractProvider):
|
||||
"""
|
||||
Implementation of SFTP API.
|
||||
|
||||
Authentication could be done in 2 ways:
|
||||
- user and password
|
||||
- ssh key file for user (optionally password for ssh key)
|
||||
|
||||
Settings could be overwritten per project.
|
||||
|
||||
"""
|
||||
CODE = 'sftp'
|
||||
LABEL = 'SFTP'
|
||||
|
||||
def __init__(self, project_name, site_name, tree=None, presets=None):
|
||||
self.presets = None
|
||||
self.active = False
|
||||
self.project_name = project_name
|
||||
self.site_name = site_name
|
||||
self.root = None
|
||||
self._conn = None
|
||||
|
||||
self.presets = presets
|
||||
if not self.presets:
|
||||
log.warning("Sync Server: There are no presets for {}.".
|
||||
format(site_name))
|
||||
return
|
||||
|
||||
provider_presets = self.presets.get(self.CODE)
|
||||
if not provider_presets:
|
||||
msg = "Sync Server: No provider presets for {}".format(self.CODE)
|
||||
log.warning(msg)
|
||||
return
|
||||
|
||||
# store to instance for reconnect
|
||||
self.sftp_host = provider_presets["sftp_host"]
|
||||
self.sftp_port = provider_presets["sftp_port"]
|
||||
self.sftp_user = provider_presets["sftp_user"]
|
||||
self.sftp_pass = provider_presets["sftp_pass"]
|
||||
self.sftp_key = provider_presets["sftp_key"]
|
||||
self.sftp_key_pass = provider_presets["sftp_key_pass"]
|
||||
|
||||
self._tree = None
|
||||
self.active = True
|
||||
|
||||
@property
|
||||
def conn(self):
|
||||
"""SFTP connection, cannot be used in all places though."""
|
||||
if not self._conn:
|
||||
self._conn = self._get_conn()
|
||||
|
||||
return self._conn
|
||||
|
||||
def is_active(self):
|
||||
"""
|
||||
Returns True if provider is activated, e.g. has working credentials.
|
||||
Returns:
|
||||
(boolean)
|
||||
"""
|
||||
return self.conn is not None
|
||||
|
||||
@classmethod
|
||||
def get_system_settings_schema(cls):
|
||||
"""
|
||||
Returns list of editable properties on system settings level
|
||||
|
||||
|
||||
Returns:
|
||||
(list) of dict
|
||||
"""
|
||||
return []
|
||||
|
||||
@classmethod
|
||||
def get_project_settings_schema(cls):
|
||||
"""
|
||||
Returns list of editable properties on project settings level
|
||||
|
||||
Currently not implemented in Settings yet!
|
||||
|
||||
Returns:
|
||||
(list) of dict
|
||||
"""
|
||||
# {platform} tells that value is multiplatform and only specific OS
|
||||
# should be returned
|
||||
editable = [
|
||||
# credentials could be overridden on Project or User level
|
||||
{
|
||||
'key': "sftp_server",
|
||||
'label': "SFTP host name",
|
||||
'type': 'text'
|
||||
},
|
||||
{
|
||||
"type": "number",
|
||||
"key": "sftp_port",
|
||||
"label": "SFTP port"
|
||||
},
|
||||
{
|
||||
'key': "sftp_user",
|
||||
'label': "SFTP user name",
|
||||
'type': 'text'
|
||||
},
|
||||
{
|
||||
'key': "sftp_pass",
|
||||
'label': "SFTP password",
|
||||
'type': 'text'
|
||||
},
|
||||
{
|
||||
'key': "sftp_key",
|
||||
'label': "SFTP user ssh key",
|
||||
'type': 'path'
|
||||
},
|
||||
{
|
||||
'key': "sftp_key_pass",
|
||||
'label': "SFTP user ssh key password",
|
||||
'type': 'text'
|
||||
},
|
||||
# roots could be overridden only on Project level, User cannot
|
||||
{
|
||||
'key': "roots",
|
||||
'label': "Roots",
|
||||
'type': 'dict'
|
||||
}
|
||||
]
|
||||
return editable
|
||||
|
||||
@classmethod
|
||||
def get_local_settings_schema(cls):
|
||||
"""
|
||||
Returns list of editable properties on local settings level
|
||||
|
||||
Currently not implemented in Settings yet!
|
||||
|
||||
Returns:
|
||||
(list) of dict
|
||||
"""
|
||||
editable = [
|
||||
# credentials could be overridden on Project or User level
|
||||
{
|
||||
'key': "sftp_user",
|
||||
'label': "SFTP user name",
|
||||
'type': 'text'
|
||||
},
|
||||
{
|
||||
'key': "sftp_pass",
|
||||
'label': "SFTP password",
|
||||
'type': 'text'
|
||||
},
|
||||
{
|
||||
'key': "sftp_key",
|
||||
'label': "SFTP user ssh key",
|
||||
'type': 'path'
|
||||
},
|
||||
{
|
||||
'key': "sftp_key_pass",
|
||||
'label': "SFTP user ssh key password",
|
||||
'type': 'text'
|
||||
}
|
||||
]
|
||||
return editable
|
||||
|
||||
def get_roots_config(self, anatomy=None):
|
||||
"""
|
||||
Returns root values for path resolving
|
||||
|
||||
Use only Settings as SFTP roots cannot be modified by Local Settings
|
||||
|
||||
Returns:
|
||||
(dict) - {"root": {"root": "/My Drive"}}
|
||||
OR
|
||||
{"root": {"root_ONE": "value", "root_TWO":"value}}
|
||||
Format is important for usage of python's format ** approach
|
||||
"""
|
||||
# roots cannot be locally overridden
|
||||
return self.presets['root']
|
||||
|
||||
def get_tree(self):
|
||||
"""
|
||||
Building of the folder tree could be potentially expensive,
|
||||
constructor provides argument that could inject previously created
|
||||
tree.
|
||||
Tree structure must be handled in thread safe fashion!
|
||||
Returns:
|
||||
(dictionary) - url to id mapping
|
||||
"""
|
||||
# not needed in this provider
|
||||
pass
|
||||
|
||||
def create_folder(self, path):
|
||||
"""
|
||||
Create all nonexistent folders and subfolders in 'path'.
|
||||
Updates self._tree structure with new paths
|
||||
|
||||
Args:
|
||||
path (string): absolute path, starts with SFTP root,
|
||||
without filename
|
||||
Returns:
|
||||
(string) folder id of lowest subfolder from 'path'
|
||||
"""
|
||||
self.conn.makedirs(path)
|
||||
|
||||
return os.path.basename(path)
|
||||
|
||||
def upload_file(self, source_path, target_path,
|
||||
server, collection, file, representation, site,
|
||||
overwrite=False):
|
||||
"""
|
||||
Uploads single file from 'source_path' to destination 'path'.
|
||||
It creates all folders on the path if they do not exist.
|
||||
|
||||
Args:
|
||||
source_path (string):
|
||||
target_path (string): absolute path with or without name of a file
|
||||
overwrite (boolean): replace existing file
|
||||
|
||||
arguments for saving progress:
|
||||
server (SyncServer): server instance to call update_db on
|
||||
collection (str): name of collection
|
||||
file (dict): info about uploaded file (matches structure from db)
|
||||
representation (dict): complete repre containing 'file'
|
||||
site (str): site name
|
||||
|
||||
Returns:
|
||||
(string) file_id of created/modified file,
|
||||
throws FileExistsError, FileNotFoundError exceptions
|
||||
"""
|
||||
if not os.path.isfile(source_path):
|
||||
raise FileNotFoundError("Source file {} doesn't exist."
|
||||
.format(source_path))
|
||||
|
||||
if self.file_path_exists(target_path):
|
||||
if not overwrite:
|
||||
raise ValueError("File {} exists, set overwrite".
|
||||
format(target_path))
|
||||
|
||||
thread = threading.Thread(target=self._upload,
|
||||
args=(source_path, target_path))
|
||||
thread.start()
|
||||
self._mark_progress(collection, file, representation, server,
|
||||
site, source_path, target_path, "upload")
|
||||
|
||||
return os.path.basename(target_path)
|
||||
|
||||
def _upload(self, source_path, target_path):
|
||||
print("copying {}->{}".format(source_path, target_path))
|
||||
conn = self._get_conn()
|
||||
conn.put(source_path, target_path)
|
||||
|
||||
def download_file(self, source_path, target_path,
|
||||
server, collection, file, representation, site,
|
||||
overwrite=False):
|
||||
"""
|
||||
Downloads single file from 'source_path' (remote) to 'target_path'.
|
||||
It creates all folders on the local_path if they do not exist.
|
||||
By default existing file on 'target_path' will trigger an exception
|
||||
|
||||
Args:
|
||||
source_path (string): absolute path on provider
|
||||
target_path (string): absolute path with or without name of a file
|
||||
overwrite (boolean): replace existing file
|
||||
|
||||
arguments for saving progress:
|
||||
server (SyncServer): server instance to call update_db on
|
||||
collection (str): name of collection
|
||||
file (dict): info about uploaded file (matches structure from db)
|
||||
representation (dict): complete repre containing 'file'
|
||||
site (str): site name
|
||||
|
||||
Returns:
|
||||
(string) file_id of created/modified file,
|
||||
throws FileExistsError, FileNotFoundError exceptions
|
||||
"""
|
||||
if not self.file_path_exists(source_path):
|
||||
raise FileNotFoundError("Source file {} doesn't exist."
|
||||
.format(source_path))
|
||||
|
||||
if os.path.isfile(target_path):
|
||||
if not overwrite:
|
||||
raise ValueError("File {} exists, set overwrite".
|
||||
format(target_path))
|
||||
|
||||
thread = threading.Thread(target=self._download,
|
||||
args=(source_path, target_path))
|
||||
thread.start()
|
||||
self._mark_progress(collection, file, representation, server,
|
||||
site, source_path, target_path, "download")
|
||||
|
||||
return os.path.basename(target_path)
|
||||
|
||||
def _download(self, source_path, target_path):
|
||||
print("downloading {}->{}".format(source_path, target_path))
|
||||
conn = self._get_conn()
|
||||
conn.get(source_path, target_path)
|
||||
|
||||
def delete_file(self, path):
|
||||
"""
|
||||
Deletes file from 'path'. Expects path to specific file.
|
||||
|
||||
Args:
|
||||
path: absolute path to particular file
|
||||
|
||||
Returns:
|
||||
None
|
||||
"""
|
||||
if not self.file_path_exists(path):
|
||||
raise FileNotFoundError("File {} to be deleted doesn't exist."
|
||||
.format(path))
|
||||
|
||||
self.conn.remove(path)
|
||||
|
||||
def list_folder(self, folder_path):
|
||||
"""
|
||||
List all files and subfolders of particular path non-recursively.
|
||||
|
||||
Args:
|
||||
folder_path (string): absolute path on provider
|
||||
Returns:
|
||||
(list)
|
||||
"""
|
||||
return list(pysftp.path_advance(folder_path))
|
||||
|
||||
def folder_path_exists(self, file_path):
|
||||
"""
|
||||
Checks if path from 'file_path' exists. If so, return its
|
||||
folder id.
|
||||
Args:
|
||||
file_path (string): path with / as a separator
|
||||
Returns:
|
||||
(string) folder id or False
|
||||
"""
|
||||
if not file_path:
|
||||
return False
|
||||
|
||||
return self.conn.isdir(file_path)
|
||||
|
||||
def file_path_exists(self, file_path):
|
||||
"""
|
||||
Checks if 'file_path' exists on SFTP server
|
||||
|
||||
Args:
|
||||
file_path (string): separated by '/', from root, with file name
|
||||
Returns:
|
||||
(dictionary|boolean) file metadata | False if not found
|
||||
"""
|
||||
if not file_path:
|
||||
return False
|
||||
|
||||
return self.conn.isfile(file_path)
|
||||
|
||||
@classmethod
|
||||
def get_presets(cls):
|
||||
"""
|
||||
Get presets for this provider
|
||||
Returns:
|
||||
(dictionary) of configured sites
|
||||
"""
|
||||
provider_presets = None
|
||||
try:
|
||||
provider_presets = (
|
||||
get_system_settings()["modules"]
|
||||
["sync_server"]
|
||||
["providers"]
|
||||
["sftp"]
|
||||
)
|
||||
except KeyError:
|
||||
log.info("Sync Server: There are no presets for SFTP provider.")
|
||||
return
|
||||
return provider_presets
|
||||
|
||||
def _get_conn(self):
|
||||
"""
|
||||
Returns fresh sftp connection.
|
||||
|
||||
It seems that connection cannot be cached into self.conn, at least
|
||||
for get and put which run in separate threads.
|
||||
|
||||
Returns:
|
||||
pysftp.Connection
|
||||
"""
|
||||
if not pysftp:
|
||||
raise ImportError
|
||||
|
||||
cnopts = pysftp.CnOpts()
|
||||
cnopts.hostkeys = None
|
||||
|
||||
conn_params = {
|
||||
'host': self.sftp_host,
|
||||
'port': self.sftp_port,
|
||||
'username': self.sftp_user,
|
||||
'cnopts': cnopts
|
||||
}
|
||||
if self.sftp_pass and self.sftp_pass.strip():
|
||||
conn_params['password'] = self.sftp_pass
|
||||
if self.sftp_key: # expects .pem format, not .ppk!
|
||||
conn_params['private_key'] = \
|
||||
self.sftp_key[platform.system().lower()]
|
||||
if self.sftp_key_pass:
|
||||
conn_params['private_key_pass'] = self.sftp_key_pass
|
||||
|
||||
return pysftp.Connection(**conn_params)
|
||||
|
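The conditional kwargs assembly in `_get_conn` above can be isolated into a small pure function, which also makes the password/key branching testable without pysftp. A sketch with assumed parameter names:

```python
def build_conn_params(host, port, username, password=None,
                      private_key=None, private_key_pass=None):
    """Build kwargs for pysftp.Connection, skipping unset credentials (sketch)."""
    params = {"host": host, "port": port, "username": username}
    # blank/whitespace-only passwords are treated as unset
    if password and password.strip():
        params["password"] = password
    if private_key:  # expects .pem format, not .ppk
        params["private_key"] = private_key
    if private_key_pass:
        params["private_key_pass"] = private_key_pass
    return params
```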
||||
def _mark_progress(self, collection, file, representation, server, site,
|
||||
source_path, target_path, direction):
|
||||
"""
|
||||
Updates progress field in DB by values 0-1.
|
||||
|
||||
Compares file sizes of source and target.
|
||||
"""
|
||||
|
||||
if direction == "upload":
|
||||
source_file_size = os.path.getsize(source_path)
|
||||
else:
|
||||
source_file_size = self.conn.stat(source_path).st_size
|
||||
|
||||
target_file_size = 0
|
||||
last_tick = status_val = None
|
||||
while source_file_size != target_file_size:
|
||||
if not last_tick or \
|
||||
time.time() - last_tick >= server.LOG_PROGRESS_SEC:
|
||||
status_val = target_file_size / source_file_size
|
||||
last_tick = time.time()
|
||||
log.debug(direction + "ed %d%%." % int(status_val * 100))
|
||||
server.update_db(collection=collection,
|
||||
new_file_id=None,
|
||||
file=file,
|
||||
representation=representation,
|
||||
site=site,
|
||||
progress=status_val
|
||||
)
|
||||
try:
|
||||
if direction == "upload":
|
||||
target_file_size = self.conn.stat(target_path).st_size
|
||||
else:
|
||||
target_file_size = os.path.getsize(target_path)
|
||||
except FileNotFoundError:
|
||||
pass
|
||||
time.sleep(0.5)
|
||||
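The `_mark_progress` loop above computes `target_file_size / source_file_size`, which divides by zero for empty source files. A hedged sketch of a guarded ratio helper (not part of the handler):

```python
def progress_ratio(source_size, target_size):
    """Progress value in 0-1 from file sizes; guards the zero-size case."""
    if source_size <= 0:
        # empty source: nothing to transfer, report as done
        return 1.0
    return min(target_size / float(source_size), 1.0)
```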
|
|
@ -221,6 +221,7 @@ def _get_configured_sites_from_setting(module, project_name, project_setting):
|
|||
|
||||
return configured_sites
|
||||
|
||||
|
||||
class SyncServerThread(threading.Thread):
|
||||
"""
|
||||
Separate thread running synchronization server with asyncio loop.
|
||||
|
|
|
|||
|
|
@ -398,6 +398,18 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
|
|||
|
||||
return remote_site
|
||||
|
||||
def get_local_normalized_site(self, site_name):
|
||||
"""
|
||||
Return 'site_name' or 'local' if 'site_name' is local id.
|
||||
|
||||
In some places Settings or Local Settings require 'local' instead
|
||||
of real site name.
|
||||
"""
|
||||
if site_name == get_local_site_id():
|
||||
site_name = self.LOCAL_SITE
|
||||
|
||||
return site_name
|
||||
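The normalization above maps a machine-specific site id to the generic `'local'` name used by Settings. A standalone sketch (the free-function form and parameter names are assumptions):

```python
LOCAL_SITE = "local"

def get_local_normalized_site(site_name, local_site_id):
    """Return 'local' when 'site_name' is this machine's site id (sketch)."""
    if site_name == local_site_id:
        site_name = LOCAL_SITE
    return site_name
```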
|
||||
# Methods for Settings UI to draw appropriate forms
|
||||
@classmethod
|
||||
def get_system_settings_schema(cls):
|
||||
|
|
|
|||
|
|
@ -29,7 +29,6 @@ def time_function(method):
|
|||
kw['log_time'][name] = int((te - ts) * 1000)
|
||||
else:
|
||||
log.debug('%r %2.2f ms' % (method.__name__, (te - ts) * 1000))
|
||||
print('%r %2.2f ms' % (method.__name__, (te - ts) * 1000))
|
||||
return result
|
||||
|
||||
return timed
|
||||
|
|
|
|||
160
openpype/modules/default_modules/timers_manager/idle_threads.py
Normal file
160
openpype/modules/default_modules/timers_manager/idle_threads.py
Normal file
|
|
@ -0,0 +1,160 @@
|
|||
import time
|
||||
from Qt import QtCore
|
||||
from pynput import mouse, keyboard
|
||||
|
||||
from openpype.lib import PypeLogger
|
||||
|
||||
|
||||
class IdleItem:
|
||||
"""Python object holds information if state of idle changed.
|
||||
|
||||
This item is used to stay independent of Qt objects.
|
||||
"""
|
||||
def __init__(self):
|
||||
self.changed = False
|
||||
|
||||
def reset(self):
|
||||
self.changed = False
|
||||
|
||||
def set_changed(self, changed=True):
|
||||
self.changed = changed
|
||||
|
||||
|
||||
class IdleManager(QtCore.QThread):
|
||||
""" Measure user's idle time in seconds.
|
||||
Idle time resets on keyboard/mouse input.
|
||||
Is able to emit signals at specific idle times.
|
||||
"""
|
||||
time_signals = {}
|
||||
idle_time = 0
|
||||
signal_reset_timer = QtCore.Signal()
|
||||
|
||||
def __init__(self):
|
||||
super(IdleManager, self).__init__()
|
||||
self.log = PypeLogger.get_logger(self.__class__.__name__)
|
||||
self.signal_reset_timer.connect(self._reset_time)
|
||||
|
||||
self.idle_item = IdleItem()
|
||||
|
||||
self._is_running = False
|
||||
self._mouse_thread = None
|
||||
self._keyboard_thread = None
|
||||
|
||||
def add_time_signal(self, emit_time, signal):
|
||||
""" If any module want to use IdleManager, need to use add_time_signal
|
||||
|
||||
Args:
|
||||
emit_time(int): Time when signal will be emitted.
|
||||
signal(QtCore.Signal): Signal that will be emitted
|
||||
(without objects).
|
||||
"""
|
||||
if emit_time not in self.time_signals:
|
||||
self.time_signals[emit_time] = []
|
||||
self.time_signals[emit_time].append(signal)
|
||||
|
||||
@property
|
||||
def is_running(self):
|
||||
return self._is_running
|
||||
|
||||
def _reset_time(self):
|
||||
self.idle_time = 0
|
||||
|
||||
def stop(self):
|
||||
self._is_running = False
|
||||
|
||||
def _on_mouse_destroy(self):
|
||||
self._mouse_thread = None
|
||||
|
||||
def _on_keyboard_destroy(self):
|
||||
self._keyboard_thread = None
|
||||
|
||||
def run(self):
|
||||
self.log.info('IdleManager has started')
|
||||
self._is_running = True
|
||||
|
||||
thread_mouse = MouseThread(self.idle_item)
|
||||
thread_keyboard = KeyboardThread(self.idle_item)
|
||||
|
||||
thread_mouse.destroyed.connect(self._on_mouse_destroy)
|
||||
thread_keyboard.destroyed.connect(self._on_keyboard_destroy)
|
||||
|
||||
self._mouse_thread = thread_mouse
|
||||
self._keyboard_thread = thread_keyboard
|
||||
|
||||
thread_mouse.start()
|
||||
thread_keyboard.start()
|
||||
|
||||
# Main loop: each second check whether the idle item changed state
|
||||
while self._is_running:
|
||||
if self.idle_item.changed:
|
||||
self.idle_item.reset()
|
||||
self.signal_reset_timer.emit()
|
||||
else:
|
||||
self.idle_time += 1
|
||||
|
||||
if self.idle_time in self.time_signals:
|
||||
for signal in self.time_signals[self.idle_time]:
|
||||
signal.emit()
|
||||
time.sleep(1)
|
||||
|
||||
self._post_run()
|
||||
self.log.info('IdleManager has stopped')
|
||||
|
||||
def _post_run(self):
|
||||
# Stop threads if still exist
|
||||
if self._mouse_thread is not None:
|
||||
self._mouse_thread.signal_stop.emit()
|
||||
self._mouse_thread.terminate()
|
||||
self._mouse_thread.wait()
|
||||
|
||||
if self._keyboard_thread is not None:
|
||||
self._keyboard_thread.signal_stop.emit()
|
||||
self._keyboard_thread.terminate()
|
||||
self._keyboard_thread.wait()
|
||||
|
||||
|
||||
class MouseThread(QtCore.QThread):
|
||||
"""Listens user's mouse movement."""
|
||||
signal_stop = QtCore.Signal()
|
||||
|
||||
def __init__(self, idle_item):
|
||||
super(MouseThread, self).__init__()
|
||||
self.signal_stop.connect(self.stop)
|
||||
self.m_listener = None
|
||||
self.idle_item = idle_item
|
||||
|
||||
def stop(self):
|
||||
if self.m_listener is not None:
|
||||
self.m_listener.stop()
|
||||
|
||||
def on_move(self, *args, **kwargs):
|
||||
self.idle_item.set_changed()
|
||||
|
||||
def run(self):
|
||||
self.m_listener = mouse.Listener(on_move=self.on_move)
|
||||
self.m_listener.start()
|
||||
|
||||
|
||||
class KeyboardThread(QtCore.QThread):
|
||||
"""Listens user's keyboard input
|
||||
"""
|
||||
signal_stop = QtCore.Signal()
|
||||
|
||||
def __init__(self, idle_item):
|
||||
super(KeyboardThread, self).__init__()
|
||||
self.signal_stop.connect(self.stop)
|
||||
self.k_listener = None
|
||||
self.idle_item = idle_item
|
||||
|
||||
def stop(self):
|
||||
if self.k_listener is not None:
|
||||
listener = self.k_listener
|
||||
self.k_listener = None
|
||||
listener.stop()
|
||||
|
||||
def on_press(self, *args, **kwargs):
|
||||
self.idle_item.set_changed()
|
||||
|
||||
def run(self):
|
||||
self.k_listener = keyboard.Listener(on_press=self.on_press)
|
||||
self.k_listener.start()
|
||||
|
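The IdleManager run loop above boils down to: once per second, reset the counter if the idle item changed, otherwise increment it. A Qt-free sketch of that tick logic:

```python
class IdleCounter:
    """Qt-free sketch of the IdleManager tick loop (one tick per second)."""

    def __init__(self):
        self.idle_time = 0
        self.changed = False  # set by input listeners on user activity

    def tick(self):
        if self.changed:
            # activity seen since last tick -> reset idle time
            self.changed = False
            self.idle_time = 0
        else:
            self.idle_time += 1
        return self.idle_time
```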
|
@ -1,3 +1,5 @@
|
|||
import json
|
||||
|
||||
from aiohttp.web_response import Response
|
||||
from openpype.api import Logger
|
||||
|
||||
|
|
@ -28,6 +30,11 @@ class TimersManagerModuleRestApi:
|
|||
self.prefix + "/stop_timer",
|
||||
self.stop_timer
|
||||
)
|
||||
self.server_manager.add_route(
|
||||
"GET",
|
||||
self.prefix + "/get_task_time",
|
||||
self.get_task_time
|
||||
)
|
||||
|
||||
async def start_timer(self, request):
|
||||
data = await request.json()
|
||||
|
|
@ -48,3 +55,20 @@ class TimersManagerModuleRestApi:
|
|||
async def stop_timer(self, request):
|
||||
self.module.stop_timers()
|
||||
return Response(status=200)
|
||||
|
||||
async def get_task_time(self, request):
|
||||
data = await request.json()
|
||||
try:
|
||||
project_name = data['project_name']
|
||||
asset_name = data['asset_name']
|
||||
task_name = data['task_name']
|
||||
except KeyError:
|
||||
message = (
|
||||
"Payload must contain fields 'project_name, 'asset_name',"
|
||||
" 'task_name'"
|
||||
)
|
||||
log.warning(message)
|
||||
return Response(text=message, status=404)
|
||||
|
||||
time = self.module.get_task_time(project_name, asset_name, task_name)
|
||||
return Response(text=json.dumps(time))
|
||||
|
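The `get_task_time` handler above guards required payload fields with a `try`/`except KeyError`. The same validation can be sketched as a plain function (the helper name is hypothetical):

```python
def extract_task_fields(payload):
    """Pull required fields from a request payload; mirrors the KeyError guard (sketch)."""
    try:
        return payload["project_name"], payload["asset_name"], payload["task_name"]
    except KeyError as exc:
        raise ValueError(
            "Payload must contain fields 'project_name', 'asset_name', 'task_name'"
        ) from exc
```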
|
|
|||
|
|
@ -1,10 +1,9 @@
|
|||
import os
|
||||
import collections
|
||||
import platform
|
||||
from openpype.modules import OpenPypeModule
|
||||
from openpype_interfaces import (
|
||||
ITimersManager,
|
||||
ITrayService,
|
||||
IIdleManager
|
||||
ITrayService
|
||||
)
|
||||
from avalon.api import AvalonMongoDB
|
||||
|
||||
|
|
@ -68,7 +67,7 @@ class ExampleTimersManagerConnector:
|
|||
self._timers_manager_module.timer_stopped(self._module.id)
|
||||
|
||||
|
||||
class TimersManager(OpenPypeModule, ITrayService, IIdleManager):
|
||||
class TimersManager(OpenPypeModule, ITrayService):
|
||||
""" Handles about Timers.
|
||||
|
||||
Should be able to start/stop all timers at once.
|
||||
|
|
@ -93,12 +92,16 @@ class TimersManager(OpenPypeModule, ITrayService, IIdleManager):
|
|||
|
||||
self.enabled = timers_settings["enabled"]
|
||||
|
||||
auto_stop = timers_settings["auto_stop"]
|
||||
# Time after which timers stop if idle manager is running (minutes)
|
||||
full_time = int(timers_settings["full_time"] * 60)
|
||||
# How many minutes before timers stop the warning message pops up
|
||||
message_time = int(timers_settings["message_time"] * 60)
|
||||
|
||||
auto_stop = timers_settings["auto_stop"]
|
||||
# Turn off auto stop on macOS because pynput requires root permissions
|
||||
if platform.system().lower() == "darwin" or full_time <= 0:
|
||||
auto_stop = False
|
||||
|
||||
self.auto_stop = auto_stop
|
||||
self.time_show_message = full_time - message_time
|
||||
self.time_stop_timer = full_time
|
||||
|
|
@ -107,24 +110,46 @@ class TimersManager(OpenPypeModule, ITrayService, IIdleManager):
|
|||
self.last_task = None
|
||||
|
||||
# Tray attributes
|
||||
self.signal_handler = None
|
||||
self.widget_user_idle = None
|
||||
self.signal_handler = None
|
||||
self._signal_handler = None
|
||||
self._widget_user_idle = None
|
||||
self._idle_manager = None
|
||||
|
||||
self._connectors_by_module_id = {}
|
||||
self._modules_by_id = {}
|
||||
|
||||
def tray_init(self):
|
||||
if not self.auto_stop:
|
||||
return
|
||||
|
||||
from .idle_threads import IdleManager
|
||||
from .widget_user_idle import WidgetUserIdle, SignalHandler
|
||||
self.widget_user_idle = WidgetUserIdle(self)
|
||||
self.signal_handler = SignalHandler(self)
|
||||
|
||||
signal_handler = SignalHandler(self)
|
||||
idle_manager = IdleManager()
|
||||
widget_user_idle = WidgetUserIdle(self)
|
||||
widget_user_idle.set_countdown_start(self.time_show_message)
|
||||
|
||||
idle_manager.signal_reset_timer.connect(
|
||||
widget_user_idle.reset_countdown
|
||||
)
|
||||
idle_manager.add_time_signal(
|
||||
self.time_show_message, signal_handler.signal_show_message
|
||||
)
|
||||
idle_manager.add_time_signal(
|
||||
self.time_stop_timer, signal_handler.signal_stop_timers
|
||||
)
|
||||
|
||||
self._signal_handler = signal_handler
|
||||
self._widget_user_idle = widget_user_idle
|
||||
self._idle_manager = idle_manager
|
||||
|
||||
def tray_start(self, *_a, **_kw):
|
||||
return
|
||||
if self._idle_manager:
|
||||
self._idle_manager.start()
|
||||
|
||||
def tray_exit(self):
|
||||
"""Nothing special for TimersManager."""
|
||||
return
|
||||
if self._idle_manager:
|
||||
self._idle_manager.stop()
|
||||
|
||||
def start_timer(self, project_name, asset_name, task_name, hierarchy):
|
||||
"""
|
||||
|
|
@ -166,6 +191,16 @@ class TimersManager(OpenPypeModule, ITrayService, IIdleManager):
|
|||
}
|
||||
self.timer_started(None, data)
|
||||
|
||||
def get_task_time(self, project_name, asset_name, task_name):
|
||||
times = {}
|
||||
for module_id, connector in self._connectors_by_module_id.items():
|
||||
if hasattr(connector, "get_task_time"):
|
||||
module = self._modules_by_id[module_id]
|
||||
times[module.name] = connector.get_task_time(
|
||||
project_name, asset_name, task_name
|
||||
)
|
||||
return times
|
||||
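The aggregation above collects times only from connectors that implement `get_task_time`, keyed by module name. A standalone sketch of that pattern (names assumed):

```python
def collect_task_times(connectors_by_id, modules_by_id, project, asset, task):
    """Aggregate per-module task times, skipping connectors without get_task_time."""
    times = {}
    for module_id, connector in connectors_by_id.items():
        if hasattr(connector, "get_task_time"):
            module = modules_by_id[module_id]
            times[module.name] = connector.get_task_time(project, asset, task)
    return times
```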
|
||||
def timer_started(self, source_id, data):
|
||||
for module_id, connector in self._connectors_by_module_id.items():
|
||||
if module_id == source_id:
|
||||
|
|
@ -205,8 +240,8 @@ class TimersManager(OpenPypeModule, ITrayService, IIdleManager):
|
|||
if self.is_running is False:
|
||||
return
|
||||
|
||||
self.widget_user_idle.bool_not_stopped = False
|
||||
self.widget_user_idle.refresh_context()
|
||||
if self._widget_user_idle is not None:
|
||||
self._widget_user_idle.set_timer_stopped()
|
||||
self.is_running = False
|
||||
|
||||
self.timer_stopped(None)
|
||||
|
|
@ -244,70 +279,12 @@ class TimersManager(OpenPypeModule, ITrayService, IIdleManager):
|
|||
" for connector of module \"{}\"."
|
||||
).format(module.name))
|
||||
|
||||
def callbacks_by_idle_time(self):
|
||||
"""Implementation of IIdleManager interface."""
|
||||
# Time when message is shown
|
||||
if not self.auto_stop:
|
||||
return {}
|
||||
|
||||
callbacks = collections.defaultdict(list)
|
||||
callbacks[self.time_show_message].append(lambda: self.time_callback(0))
|
||||
|
||||
# Times when idle is between show widget and stop timers
|
||||
show_to_stop_range = range(
|
||||
self.time_show_message - 1, self.time_stop_timer
|
||||
)
|
||||
for num in show_to_stop_range:
|
||||
callbacks[num].append(lambda: self.time_callback(1))
|
||||
|
||||
# Times when widget is already shown and user restart idle
|
||||
shown_and_moved_range = range(
|
||||
self.time_stop_timer - self.time_show_message
|
||||
)
|
||||
for num in shown_and_moved_range:
|
||||
callbacks[num].append(lambda: self.time_callback(1))
|
||||
|
||||
# Time when timers are stopped
|
||||
callbacks[self.time_stop_timer].append(lambda: self.time_callback(2))
|
||||
|
||||
return callbacks
|
||||
|
||||
def time_callback(self, int_def):
|
||||
if not self.signal_handler:
|
||||
return
|
||||
|
||||
if int_def == 0:
|
||||
self.signal_handler.signal_show_message.emit()
|
||||
elif int_def == 1:
|
||||
self.signal_handler.signal_change_label.emit()
|
||||
elif int_def == 2:
|
||||
self.signal_handler.signal_stop_timers.emit()
|
||||
|
||||
def change_label(self):
|
||||
if self.is_running is False:
|
||||
return
|
||||
|
||||
if (
|
||||
not self.idle_manager
|
||||
or self.widget_user_idle.bool_is_showed is False
|
||||
):
|
||||
return
|
||||
|
||||
if self.idle_manager.idle_time > self.time_show_message:
|
||||
value = self.time_stop_timer - self.idle_manager.idle_time
|
||||
else:
|
||||
value = 1 + (
|
||||
self.time_stop_timer -
|
||||
self.time_show_message -
|
||||
self.idle_manager.idle_time
|
||||
)
|
||||
self.widget_user_idle.change_count_widget(value)
|
||||
|
||||
def show_message(self):
|
||||
if self.is_running is False:
|
||||
return
|
||||
if self.widget_user_idle.bool_is_showed is False:
|
||||
self.widget_user_idle.show()
|
||||
if not self._widget_user_idle.is_showed():
|
||||
self._widget_user_idle.reset_countdown()
|
||||
self._widget_user_idle.show()
|
||||
|
||||
# Webserver module implementation
|
||||
def webserver_initialization(self, server_manager):
|
||||
|
|
|
|||
|
|
@ -3,168 +3,193 @@ from openpype import resources, style
|
|||
|
||||
|
||||
class WidgetUserIdle(QtWidgets.QWidget):
|
||||
|
||||
SIZE_W = 300
|
||||
SIZE_H = 160
|
||||
|
||||
def __init__(self, module):
|
||||
|
||||
super(WidgetUserIdle, self).__init__()
|
||||
|
||||
self.bool_is_showed = False
|
||||
self.bool_not_stopped = True
|
||||
|
||||
self.module = module
|
||||
self.setWindowTitle("OpenPype - Stop timers")
|
||||
|
||||
icon = QtGui.QIcon(resources.get_openpype_icon_filepath())
|
||||
self.setWindowIcon(icon)
|
||||
|
||||
self.setWindowFlags(
|
||||
QtCore.Qt.WindowCloseButtonHint
|
||||
| QtCore.Qt.WindowMinimizeButtonHint
|
||||
)
|
||||
|
||||
self._translate = QtCore.QCoreApplication.translate
|
||||
self._is_showed = False
|
||||
self._timer_stopped = False
|
||||
self._countdown = 0
|
||||
self._countdown_start = 0
|
||||
|
||||
self.font = QtGui.QFont()
|
||||
self.font.setFamily("DejaVu Sans Condensed")
|
||||
self.font.setPointSize(9)
|
||||
self.font.setBold(True)
|
||||
self.font.setWeight(50)
|
||||
self.font.setKerning(True)
|
||||
self.module = module
|
||||
|
||||
msg_info = "You didn't work for a long time."
|
||||
msg_question = "Would you like to stop Timers?"
|
||||
msg_stopped = (
|
||||
"Your Timers were stopped. Do you want to start them again?"
|
||||
)
|
||||
|
||||
lbl_info = QtWidgets.QLabel(msg_info, self)
|
||||
lbl_info.setTextFormat(QtCore.Qt.RichText)
|
||||
lbl_info.setWordWrap(True)
|
||||
|
||||
lbl_question = QtWidgets.QLabel(msg_question, self)
|
||||
lbl_question.setTextFormat(QtCore.Qt.RichText)
|
||||
lbl_question.setWordWrap(True)
|
||||
|
||||
lbl_stopped = QtWidgets.QLabel(msg_stopped, self)
|
||||
lbl_stopped.setTextFormat(QtCore.Qt.RichText)
|
||||
lbl_stopped.setWordWrap(True)
|
||||
|
||||
lbl_rest_time = QtWidgets.QLabel(self)
|
||||
lbl_rest_time.setTextFormat(QtCore.Qt.RichText)
|
||||
lbl_rest_time.setWordWrap(True)
|
||||
lbl_rest_time.setAlignment(QtCore.Qt.AlignCenter)
|
||||
|
||||
form = QtWidgets.QFormLayout()
|
||||
form.setContentsMargins(10, 15, 10, 5)
|
||||
|
||||
form.addRow(lbl_info)
|
||||
form.addRow(lbl_question)
|
||||
form.addRow(lbl_stopped)
|
||||
form.addRow(lbl_rest_time)
|
||||
|
||||
btn_stop = QtWidgets.QPushButton("Stop timer", self)
|
||||
btn_stop.setToolTip("Stop's All timers")
|
||||
|
||||
btn_continue = QtWidgets.QPushButton("Continue", self)
|
||||
btn_continue.setToolTip("Timer won't stop")
|
||||
|
||||
btn_close = QtWidgets.QPushButton("Close", self)
|
||||
btn_close.setToolTip("Close window")
|
||||
|
||||
btn_restart = QtWidgets.QPushButton("Start timers", self)
|
||||
btn_restart.setToolTip("Timer will be started again")
|
||||
|
||||
group_layout = QtWidgets.QHBoxLayout()
|
||||
group_layout.addStretch(1)
|
||||
group_layout.addWidget(btn_continue)
|
||||
group_layout.addWidget(btn_stop)
|
group_layout.addWidget(btn_restart)
group_layout.addWidget(btn_close)

layout = QtWidgets.QVBoxLayout(self)
layout.addLayout(form)
layout.addLayout(group_layout)

count_timer = QtCore.QTimer()
count_timer.setInterval(1000)

btn_stop.clicked.connect(self._on_stop_clicked)
btn_continue.clicked.connect(self._on_continue_clicked)
btn_close.clicked.connect(self._close_widget)
btn_restart.clicked.connect(self._on_restart_clicked)
count_timer.timeout.connect(self._on_count_timeout)

self.lbl_info = lbl_info
self.lbl_question = lbl_question
self.lbl_stopped = lbl_stopped
self.lbl_rest_time = lbl_rest_time

self.btn_stop = btn_stop
self.btn_continue = btn_continue
self.btn_close = btn_close
self.btn_restart = btn_restart

self._count_timer = count_timer

self.resize(self.SIZE_W, self.SIZE_H)
self.setMinimumSize(QtCore.QSize(self.SIZE_W, self.SIZE_H))
self.setMaximumSize(QtCore.QSize(self.SIZE_W+100, self.SIZE_H+100))
self.setStyleSheet(style.load_stylesheet())

self.setLayout(self._main())
self.refresh_context()
self.setWindowTitle('Pype - Stop timers')
def set_countdown_start(self, countdown):
self._countdown_start = countdown
if not self.is_showed():
self.reset_countdown()

def _main(self):
self.main = QtWidgets.QVBoxLayout()
self.main.setObjectName('main')
def reset_countdown(self):
self._countdown = self._countdown_start
self._update_countdown_label()

self.form = QtWidgets.QFormLayout()
self.form.setContentsMargins(10, 15, 10, 5)
self.form.setObjectName('form')
def is_showed(self):
return self._is_showed

msg_info = 'You didn\'t work for a long time.'
msg_question = 'Would you like to stop Timers?'
msg_stopped = (
'Your Timers were stopped. Do you want to start them again?'
)
def set_timer_stopped(self):
self._timer_stopped = True
self._refresh_context()

self.lbl_info = QtWidgets.QLabel(msg_info)
self.lbl_info.setFont(self.font)
self.lbl_info.setTextFormat(QtCore.Qt.RichText)
self.lbl_info.setObjectName("lbl_info")
self.lbl_info.setWordWrap(True)
def _update_countdown_label(self):
self.lbl_rest_time.setText(str(self._countdown))

self.lbl_question = QtWidgets.QLabel(msg_question)
self.lbl_question.setFont(self.font)
self.lbl_question.setTextFormat(QtCore.Qt.RichText)
self.lbl_question.setObjectName("lbl_question")
self.lbl_question.setWordWrap(True)
def _on_count_timeout(self):
if self._timer_stopped or not self._is_showed:
self._count_timer.stop()
return

self.lbl_stopped = QtWidgets.QLabel(msg_stopped)
self.lbl_stopped.setFont(self.font)
self.lbl_stopped.setTextFormat(QtCore.Qt.RichText)
self.lbl_stopped.setObjectName("lbl_stopped")
self.lbl_stopped.setWordWrap(True)

self.lbl_rest_time = QtWidgets.QLabel("")
self.lbl_rest_time.setFont(self.font)
self.lbl_rest_time.setTextFormat(QtCore.Qt.RichText)
self.lbl_rest_time.setObjectName("lbl_rest_time")
self.lbl_rest_time.setWordWrap(True)
self.lbl_rest_time.setAlignment(QtCore.Qt.AlignCenter)

self.form.addRow(self.lbl_info)
self.form.addRow(self.lbl_question)
self.form.addRow(self.lbl_stopped)
self.form.addRow(self.lbl_rest_time)

self.group_btn = QtWidgets.QHBoxLayout()
self.group_btn.addStretch(1)
self.group_btn.setObjectName("group_btn")

self.btn_stop = QtWidgets.QPushButton("Stop timer")
self.btn_stop.setToolTip('Stop\'s All timers')
self.btn_stop.clicked.connect(self.stop_timer)

self.btn_continue = QtWidgets.QPushButton("Continue")
self.btn_continue.setToolTip('Timer won\'t stop')
self.btn_continue.clicked.connect(self.continue_timer)

self.btn_close = QtWidgets.QPushButton("Close")
self.btn_close.setToolTip('Close window')
self.btn_close.clicked.connect(self.close_widget)

self.btn_restart = QtWidgets.QPushButton("Start timers")
self.btn_restart.setToolTip('Timer will be started again')
self.btn_restart.clicked.connect(self.restart_timer)

self.group_btn.addWidget(self.btn_continue)
self.group_btn.addWidget(self.btn_stop)
self.group_btn.addWidget(self.btn_restart)
self.group_btn.addWidget(self.btn_close)

self.main.addLayout(self.form)
self.main.addLayout(self.group_btn)

return self.main

def refresh_context(self):
self.lbl_question.setVisible(self.bool_not_stopped)
self.lbl_rest_time.setVisible(self.bool_not_stopped)
self.lbl_stopped.setVisible(not self.bool_not_stopped)

self.btn_continue.setVisible(self.bool_not_stopped)
self.btn_stop.setVisible(self.bool_not_stopped)
self.btn_restart.setVisible(not self.bool_not_stopped)
self.btn_close.setVisible(not self.bool_not_stopped)

def change_count_widget(self, time):
str_time = str(time)
self.lbl_rest_time.setText(str_time)

def stop_timer(self):
self.module.stop_timers()
self.close_widget()

def restart_timer(self):
self.module.restart_timers()
self.close_widget()

def continue_timer(self):
self.close_widget()

def closeEvent(self, event):
event.ignore()
if self.bool_not_stopped is True:
self.continue_timer()
if self._countdown <= 0:
self._stop_timers()
self.set_timer_stopped()
else:
self.close_widget()
self._countdown -= 1
self._update_countdown_label()

def close_widget(self):
self.bool_is_showed = False
self.bool_not_stopped = True
self.refresh_context()
def _refresh_context(self):
self.lbl_question.setVisible(not self._timer_stopped)
self.lbl_rest_time.setVisible(not self._timer_stopped)
self.lbl_stopped.setVisible(self._timer_stopped)

self.btn_continue.setVisible(not self._timer_stopped)
self.btn_stop.setVisible(not self._timer_stopped)
self.btn_restart.setVisible(self._timer_stopped)
self.btn_close.setVisible(self._timer_stopped)

def _stop_timers(self):
self.module.stop_timers()

def _on_stop_clicked(self):
self._stop_timers()
self._close_widget()

def _on_restart_clicked(self):
self.module.restart_timers()
self._close_widget()

def _on_continue_clicked(self):
self._close_widget()

def _close_widget(self):
self._is_showed = False
self._timer_stopped = False
self._refresh_context()
self.hide()

def showEvent(self, event):
self.bool_is_showed = True
if not self._is_showed:
self._is_showed = True
self._refresh_context()

if not self._count_timer.isActive():
self._count_timer.start()
super(WidgetUserIdle, self).showEvent(event)

def closeEvent(self, event):
event.ignore()
if self._timer_stopped:
self._close_widget()
else:
self._on_continue_clicked()


class SignalHandler(QtCore.QObject):
signal_show_message = QtCore.Signal()
signal_change_label = QtCore.Signal()
signal_stop_timers = QtCore.Signal()

def __init__(self, module):
super(SignalHandler, self).__init__()
self.module = module
self.signal_show_message.connect(module.show_message)
self.signal_change_label.connect(module.change_label)
self.signal_stop_timers.connect(module.stop_timers)
@@ -1,26 +1,28 @@
import os
import subprocess
from avalon import api
from openpype.api import ApplicationManager


def existing_djv_path():
djv_paths = os.environ.get("DJV_PATH") or ""
for path in djv_paths.split(os.pathsep):
if os.path.exists(path):
return path
return None
app_manager = ApplicationManager()
djv_list = []

for app_name, app in app_manager.applications.items():
if 'djv' in app_name and app.find_executable():
djv_list.append(app_name)

return djv_list

class OpenInDJV(api.Loader):
"""Open Image Sequence with system default"""

djv_path = existing_djv_path()
families = ["*"] if djv_path else []
djv_list = existing_djv_path()
families = ["*"] if djv_list else []
representations = [
"cin", "dpx", "avi", "dv", "gif", "flv", "mkv", "mov", "mpg", "mpeg",
"mp4", "m4v", "mxf", "iff", "z", "ifl", "jpeg", "jpg", "jfif", "lut",
"1dl", "exr", "pic", "png", "ppm", "pnm", "pgm", "pbm", "rla", "rpf",
"sgi", "rgba", "rgb", "bw", "tga", "tiff", "tif", "img"
"sgi", "rgba", "rgb", "bw", "tga", "tiff", "tif", "img", "h264",
]

label = "Open in DJV"

@@ -41,20 +43,18 @@ class OpenInDJV(api.Loader):
)

if not remainder:
seqeunce = collections[0]
first_image = list(seqeunce)[0]
sequence = collections[0]
first_image = list(sequence)[0]
else:
first_image = self.fname
filepath = os.path.normpath(os.path.join(directory, first_image))

self.log.info("Opening : {}".format(filepath))

cmd = [
# DJV path
os.path.normpath(self.djv_path),
# PATH TO COMPONENT
os.path.normpath(filepath)
]
last_djv_version = sorted(self.djv_list)[-1]

# Run DJV with these commands
subprocess.Popen(cmd)
app_manager = ApplicationManager()
djv = app_manager.applications.get(last_djv_version)
djv.arguments.append(filepath)

app_manager.launch(last_djv_version)
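The loader above picks the newest DJV variant simply by taking the last entry of `sorted()` over the collected application names. A minimal sketch of that selection (the variant names below are hypothetical, and plain lexicographic sort is only reliable while version suffixes are consistently formatted):

```python
def latest_djv_variant(app_names):
    """Return the lexicographically last 'djv' variant name, or None.

    Mirrors `sorted(self.djv_list)[-1]` from the loader; correct only
    while version suffixes sort lexicographically (e.g. zero-padded).
    """
    djv_names = sorted(name for name in app_names if "djv" in name)
    return djv_names[-1] if djv_names else None


# Hypothetical application names, as an ApplicationManager might list them.
print(latest_djv_variant(["djv/2-0", "djv/2-1", "maya/2022"]))  # djv/2-1
```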
@@ -22,6 +22,7 @@ class CollectAvalonEntities(pyblish.api.ContextPlugin):
io.install()
project_name = api.Session["AVALON_PROJECT"]
asset_name = api.Session["AVALON_ASSET"]
task_name = api.Session["AVALON_TASK"]

project_entity = io.find_one({
"type": "project",

@@ -48,6 +49,12 @@ class CollectAvalonEntities(pyblish.api.ContextPlugin):

data = asset_entity['data']

# Task type
asset_tasks = data.get("tasks") or {}
task_info = asset_tasks.get(task_name) or {}
task_type = task_info.get("type")
context.data["taskType"] = task_type

frame_start = data.get("frameStart")
if frame_start is None:
frame_start = 1
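The collector resolves the task type by looking the current task name up in the asset document's "tasks" data; a minimal sketch of that lookup (the sample data is hypothetical):

```python
def get_task_type(asset_data, task_name):
    """Resolve the task type from asset 'data', or None when unknown.

    Mirrors the collector's chain of `.get(...) or {}` fallbacks, so
    missing 'tasks' data or an unknown task name never raises.
    """
    asset_tasks = asset_data.get("tasks") or {}
    task_info = asset_tasks.get(task_name) or {}
    return task_info.get("type")


asset_data = {"tasks": {"animation": {"type": "Animation"}}}
print(get_task_type(asset_data, "animation"))    # Animation
print(get_task_type(asset_data, "compositing"))  # None
```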
@@ -26,6 +26,7 @@ class CollectResourcesPath(pyblish.api.InstancePlugin):
"animation",
"model",
"mayaAscii",
"mayaScene",
"setdress",
"layout",
"ass",

@@ -67,6 +68,12 @@ class CollectResourcesPath(pyblish.api.InstancePlugin):
"representation": "TEMP"
})

# For the first time publish
if instance.data.get("hierarchy"):
template_data.update({
"hierarchy": instance.data["hierarchy"]
})

anatomy_filled = anatomy.format(template_data)

if "folder" in anatomy.templates["publish"]:
@@ -1,6 +1,5 @@
import os
import re
import subprocess
import json
import copy
import tempfile

@@ -158,6 +157,11 @@ class ExtractBurnin(openpype.api.Extractor):
filled_anatomy = anatomy.format_all(burnin_data)
burnin_data["anatomy"] = filled_anatomy.get_solved()

# Add context data burnin_data.
burnin_data["custom"] = (
instance.data.get("custom_burnin_data") or {}
)

# Add source camera name to burnin data
camera_name = repre.get("camera_name")
if camera_name:

@@ -226,7 +230,8 @@ class ExtractBurnin(openpype.api.Extractor):
"options": copy.deepcopy(burnin_options),
"values": burnin_values,
"full_input_path": temp_data["full_input_paths"][0],
"first_frame": temp_data["first_frame"]
"first_frame": temp_data["first_frame"],
"ffmpeg_cmd": new_repre.get("ffmpeg_cmd", "")
}

self.log.debug(
@@ -30,8 +30,8 @@ class ExtractReview(pyblish.api.InstancePlugin):
otherwise the representation is ignored.

All new representations are created and encoded by ffmpeg following
presets found in `pype-config/presets/plugins/global/
publish.json:ExtractReview:outputs`.
presets found in OpenPype Settings interface at
`project_settings/global/publish/ExtractReview/profiles:outputs`.
"""

label = "Extract Review"

@@ -241,7 +241,8 @@ class ExtractReview(pyblish.api.InstancePlugin):
"outputName": output_name,
"outputDef": output_def,
"frameStartFtrack": temp_data["output_frame_start"],
"frameEndFtrack": temp_data["output_frame_end"]
"frameEndFtrack": temp_data["output_frame_end"],
"ffmpeg_cmd": subprcs_cmd
})

# Force to pop these key if are in new repre
@@ -63,6 +63,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
"animation",
"model",
"mayaAscii",
"mayaScene",
"setdress",
"layout",
"ass",
@@ -1,6 +1,5 @@
import pyblish.api

from openpype.api import get_system_settings
from openpype.lib import change_timer_to_current_context


@@ -10,6 +9,6 @@ class StartTimer(pyblish.api.ContextPlugin):
hosts = ["*"]

def process(self, context):
modules_settings = get_system_settings()["modules"]
modules_settings = context.data["system_settings"]["modules"]
if modules_settings["timers_manager"]["disregard_publishing"]:
change_timer_to_current_context()
@@ -3,8 +3,6 @@ import requests

import pyblish.api

from openpype.api import get_system_settings


class StopTimer(pyblish.api.ContextPlugin):
label = "Stop Timer"

@@ -12,7 +10,7 @@ class StopTimer(pyblish.api.ContextPlugin):
hosts = ["*"]

def process(self, context):
modules_settings = get_system_settings()["modules"]
modules_settings = context.data["system_settings"]["modules"]
if modules_settings["timers_manager"]["disregard_publishing"]:
webserver_url = os.environ.get("OPENPYPE_WEBSERVER_URL")
rest_api_url = "{}/timers_manager/stop_timer".format(webserver_url)
@@ -1,7 +1,5 @@
import pyblish.api

import openpype.lib
from avalon.tools import cbsceneinventory


class ShowInventory(pyblish.api.Action):

@@ -11,7 +9,9 @@ class ShowInventory(pyblish.api.Action):
on = "failed"

def process(self, context, plugin):
cbsceneinventory.show()
from avalon.tools import sceneinventory

sceneinventory.show()


class ValidateContainers(pyblish.api.ContextPlugin):
@@ -1,5 +1,7 @@
import pyblish.api
import os
import pyblish.api

from openpype.lib import filter_profiles


class ValidateIntent(pyblish.api.ContextPlugin):

@@ -12,20 +14,49 @@ class ValidateIntent(pyblish.api.ContextPlugin):
order = pyblish.api.ValidatorOrder

label = "Validate Intent"
# TODO: this should be off by default and only activated viac config
tasks = ["animation"]
hosts = ["harmony"]
if os.environ.get("AVALON_TASK") not in tasks:
active = False
enabled = False

# Can be modified by settings
profiles = [{
"hosts": [],
"task_types": [],
"tasks": [],
"validate": False
}]

def process(self, context):
# Skip if there are no profiles
validate = True
if self.profiles:
# Collect data from context
task_name = context.data.get("task")
task_type = context.data.get("taskType")
host_name = context.data.get("hostName")

filter_data = {
"hosts": host_name,
"task_types": task_type,
"tasks": task_name
}
matching_profile = filter_profiles(
self.profiles, filter_data, logger=self.log
)
if matching_profile:
validate = matching_profile["validate"]

if not validate:
self.log.debug((
"Validation of intent was skipped."
" Matching profile for current context disabled validation."
))
return

msg = (
"Please make sure that you select the intent of this publish."
)

intent = context.data.get("intent")
self.log.debug(intent)
assert intent, msg

intent = context.data.get("intent") or {}
self.log.debug(str(intent))
intent_value = intent.get("value")
assert intent is not "", msg
if not intent_value:
raise AssertionError(msg)
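The validator consults `openpype.lib.filter_profiles` to decide whether the current host/task context should be validated at all. A simplified stand-in for that first-match profile filtering (the real helper supports more matching rules than this sketch):

```python
def first_matching_profile(profiles, filter_data):
    """Return the first profile whose non-empty filter lists contain the
    context values; an empty list in a profile matches anything.

    Simplified stand-in for openpype.lib.filter_profiles.
    """
    for profile in profiles:
        if all(
            not (profile.get(key) or []) or value in profile[key]
            for key, value in filter_data.items()
        ):
            return profile
    return None


profiles = [
    {"hosts": ["harmony"], "task_types": [], "tasks": [], "validate": True}
]
filter_data = {"hosts": "harmony", "task_types": "Animation", "tasks": "anim"}
print(first_matching_profile(profiles, filter_data)["validate"])  # True
```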
@@ -12,6 +12,9 @@ class ValidateVersion(pyblish.api.InstancePlugin):
label = "Validate Version"
hosts = ["nuke", "maya", "blender", "standalonepublisher"]

optional = False
active = True

def process(self, instance):
version = instance.data.get("version")
latest_version = instance.data.get("latestVersion")
@@ -257,3 +257,30 @@ class PypeCommands:
def validate_jsons(self):
pass

def run_tests(self, folder, mark, pyargs):
"""
Runs tests from 'folder'

Args:
folder (str): relative path to folder with tests
mark (str): label to run tests marked by it (slow etc)
pyargs (str): package path to test
"""
print("run_tests")
import subprocess

if folder:
folder = " ".join(list(folder))
else:
folder = "../tests"

mark_str = pyargs_str = ''
if mark:
mark_str = "-m {}".format(mark)

if pyargs:
pyargs_str = "--pyargs {}".format(pyargs)

cmd = "pytest {} {} {}".format(folder, mark_str, pyargs_str)
print("Running {}".format(cmd))
subprocess.run(cmd)
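`run_tests` assembles the pytest invocation as a single string before handing it to `subprocess.run`. The assembly step can be sketched in isolation (note the real code passes the plain string, which on POSIX systems would additionally need `shell=True` or an argument list):

```python
def build_pytest_cmd(folder=None, mark=None, pyargs=None):
    """Assemble the pytest command string the way run_tests does.

    `folder` is an iterable of paths (as a CLI would collect them);
    falsy inputs fall back to the defaults used by run_tests.
    """
    folder = " ".join(list(folder)) if folder else "../tests"
    mark_str = "-m {}".format(mark) if mark else ""
    pyargs_str = "--pyargs {}".format(pyargs) if pyargs else ""
    return "pytest {} {} {}".format(folder, mark_str, pyargs_str)


print(build_pytest_cmd(mark="slow").split())
```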
@@ -69,7 +69,7 @@ def get_fps(str_value):
return str(fps)


def _prores_codec_args(ffprobe_data):
def _prores_codec_args(ffprobe_data, source_ffmpeg_cmd):
output = []

tags = ffprobe_data.get("tags") or {}

@@ -108,14 +108,24 @@ def _prores_codec_args(ffprobe_data):
return output


def _h264_codec_args(ffprobe_data):
def _h264_codec_args(ffprobe_data, source_ffmpeg_cmd):
output = []

output.extend(["-codec:v", "h264"])

bit_rate = ffprobe_data.get("bit_rate")
if bit_rate:
output.extend(["-b:v", bit_rate])
# Use arguments from source if are available source arguments
if source_ffmpeg_cmd:
copy_args = (
"-crf",
"-b:v", "-vb",
"-minrate", "-minrate:",
"-maxrate", "-maxrate:",
"-bufsize", "-bufsize:"
)
args = source_ffmpeg_cmd.split(" ")
for idx, arg in enumerate(args):
if arg in copy_args:
output.extend([arg, args[idx + 1]])

pix_fmt = ffprobe_data.get("pix_fmt")
if pix_fmt:

@@ -127,15 +137,15 @@ def _h264_codec_args(ffprobe_data):
return output


def get_codec_args(ffprobe_data):
def get_codec_args(ffprobe_data, source_ffmpeg_cmd):
codec_name = ffprobe_data.get("codec_name")
# Codec "prores"
if codec_name == "prores":
return _prores_codec_args(ffprobe_data)
return _prores_codec_args(ffprobe_data, source_ffmpeg_cmd)

# Codec "h264"
if codec_name == "h264":
return _h264_codec_args(ffprobe_data)
return _h264_codec_args(ffprobe_data, source_ffmpeg_cmd)

output = []
if codec_name:

@@ -469,7 +479,7 @@ def example(input_path, output_path):
def burnins_from_data(
input_path, output_path, data,
codec_data=None, options=None, burnin_values=None, overwrite=True,
full_input_path=None, first_frame=None
full_input_path=None, first_frame=None, source_ffmpeg_cmd=None
):
"""This method adds burnins to video/image file based on presets setting.


@@ -647,7 +657,7 @@ def burnins_from_data(

else:
ffprobe_data = burnin._streams[0]
ffmpeg_args.extend(get_codec_args(ffprobe_data))
ffmpeg_args.extend(get_codec_args(ffprobe_data, source_ffmpeg_cmd))

# Use group one (same as `-intra` argument, which is deprecated)
ffmpeg_args_str = " ".join(ffmpeg_args)

@@ -670,6 +680,7 @@ if __name__ == "__main__":
options=in_data.get("options"),
burnin_values=in_data.get("values"),
full_input_path=in_data.get("full_input_path"),
first_frame=in_data.get("first_frame")
first_frame=in_data.get("first_frame"),
source_ffmpeg_cmd=in_data.get("ffmpeg_cmd")
)
print("* Burnin script has finished")
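When re-encoding with burnins, `_h264_codec_args` now copies rate-control flags from the ffmpeg command that produced the source file, so the burnin pass keeps the source's quality settings. The flag-copy loop in isolation (a sketch; it assumes each copied flag is immediately followed by its value in the command string):

```python
def copy_rate_control_args(source_ffmpeg_cmd):
    """Copy h264 rate-control flags and their values from a source
    ffmpeg command string, mirroring the loop in _h264_codec_args."""
    copy_args = (
        "-crf",
        "-b:v", "-vb",
        "-minrate", "-minrate:",
        "-maxrate", "-maxrate:",
        "-bufsize", "-bufsize:",
    )
    output = []
    args = source_ffmpeg_cmd.split(" ")
    for idx, arg in enumerate(args):
        if arg in copy_args:
            # Flag plus the value token that follows it
            output.extend([arg, args[idx + 1]])
    return output


print(copy_rate_control_args("ffmpeg -i in.mov -crf 18 -maxrate 10M out.mp4"))
# ['-crf', '18', '-maxrate', '10M']
```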
@@ -25,7 +25,8 @@ from .lib import (
)
from .entities import (
SystemSettings,
ProjectSettings
ProjectSettings,
DefaultsNotDefined
)


@@ -51,6 +52,8 @@ __all__ = (
"get_anatomy_settings",
"get_environments",
"get_local_settings",

"SystemSettings",
"ProjectSettings"
"ProjectSettings",
"DefaultsNotDefined"
)
@@ -124,9 +124,47 @@
"value": "True"
}
]
},
{
"plugins": [
"CreateWriteStill"
],
"nukeNodeClass": "Write",
"knobs": [
{
"name": "file_type",
"value": "tiff"
},
{
"name": "datatype",
"value": "16 bit"
},
{
"name": "compression",
"value": "Deflate"
},
{
"name": "tile_color",
"value": "0x23ff00ff"
},
{
"name": "channels",
"value": "rgb"
},
{
"name": "colorspace",
"value": "sRGB"
},
{
"name": "create_directories",
"value": "True"
}
]
}
],
"customNodes": []
"customNodes": [

]
},
"regexInputs": {
"inputs": [
@@ -6,7 +6,12 @@
},
"ValidateVersion": {
"enabled": true,
"optional": false
"optional": false,
"active": true
},
"ValidateIntent": {
"enabled": false,
"profiles": []
},
"IntegrateHeroVersion": {
"enabled": true,

@@ -19,7 +24,7 @@
"animation",
"setdress",
"layout",
"mayaAscii"
"mayaScene"
]
},
"ExtractJpegEXR": {
@@ -156,6 +156,11 @@
"CollectMayaRender": {
"sync_workfile_version": false
},
"ValidateInstanceInContext": {
"enabled": true,
"optional": true,
"active": true
},
"ValidateContainers": {
"enabled": true,
"optional": true,

@@ -169,6 +174,11 @@
"enabled": false,
"attributes": {}
},
"ValidateLoadedPlugin": {
"enabled": false,
"whitelist_native_plugins": false,
"authorized_plugins": []
},
"ValidateRenderSettings": {
"arnold_render_attributes": [],
"vray_render_attributes": [],

@@ -479,6 +489,12 @@
255,
255
],
"mayaScene": [
67,
174,
255,
255
],
"setdress": [
255,
250,
@@ -38,6 +38,11 @@
"render"
]
},
"ValidateInstanceInContext": {
"enabled": true,
"optional": true,
"active": true
},
"ValidateContainers": {
"enabled": true,
"optional": true,

@@ -127,7 +132,8 @@
"jpg",
"jpeg",
"png",
"psd"
"psd",
"tiff"
],
"node_name_template": "{class_name}_{ext}"
},
@@ -1,4 +1,5 @@
{
"stop_timer_on_application_exit": false,
"publish": {
"ExtractSequence": {
"review_bg": [
@@ -7,6 +7,11 @@
"global": []
}
},
"disk_mapping": {
"windows": [],
"linux": [],
"darwin": []
},
"openpype_path": {
"windows": [],
"darwin": [],
@@ -179,4 +179,4 @@
"slack": {
"enabled": false
}
}
}
@@ -5,64 +5,134 @@
"collapsible": true,
"checkbox_key": "enabled",
"children": [
{
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "dict",
"key": "config",
"label": "Config",
"collapsible": true,
"children": [
{
"type": "text",
"key": "retry_cnt",
"label": "Retry Count"
},
{
"type": "text",
"key": "loop_delay",
"label": "Loop Delay"
},
{
"type": "text",
"key": "active_site",
"label": "Active Site"
},
{
"type": "text",
"key": "remote_site",
"label": "Remote Site"
}
]
}, {
"type": "dict-modifiable",
"collapsible": true,
"key": "sites",
"label": "Sites",
"collapsible_key": false,
"object_type":
{
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "dict",
"key": "config",
"label": "Config",
"collapsible": true,
"children": [
{
"type": "path",
"key": "credentials_url",
"label": "Credentials url",
"multiplatform": true
},
{
"type": "dict-modifiable",
"key": "root",
"label": "Roots",
"collapsable": false,
"collapsable_key": false,
"object_type": "text"
}
{
"type": "text",
"key": "retry_cnt",
"label": "Retry Count"
},
{
"type": "text",
"key": "loop_delay",
"label": "Loop Delay"
},
{
"type": "text",
"key": "active_site",
"label": "Active Site"
},
{
"type": "text",
"key": "remote_site",
"label": "Remote Site"
}
]
},
{
"type": "dict-modifiable",
"collapsible": true,
"key": "sites",
"label": "Sites",
"collapsible_key": false,
"object_type": {
"type": "dict",
"children": [
{
"type": "dict",
"key": "gdrive",
"label": "Google Drive",
"collapsible": true,
"children": [
{
"type": "path",
"key": "credentials_url",
"label": "Credentials url",
"multiplatform": true
}
]
},
{
"type": "dict",
"key": "dropbox",
"label": "Dropbox",
"collapsible": true,
"children": [
{
"type": "text",
"key": "token",
"label": "Access Token"
},
{
"type": "text",
"key": "team_folder_name",
"label": "Team Folder Name"
},
{
"type": "text",
"key": "acting_as_member",
"label": "Acting As Member"
}
]
},
{
"type": "dict",
"key": "sftp",
"label": "SFTP",
"collapsible": true,
"children": [
{
"type": "text",
"key": "sftp_host",
"label": "SFTP host"
},
{
"type": "number",
"key": "sftp_port",
"label": "SFTP port"
},
{
"type": "text",
"key": "sftp_user",
"label": "SFTP user"
},
{
"type": "text",
"key": "sftp_pass",
"label": "SFTP pass"
},
{
"type": "path",
"key": "sftp_key",
"label": "SFTP user ssh key",
"multiplatform": true
},
{
"type": "text",
"key": "sftp_key_pass",
"label": "SFTP user ssh key password"
}
]
},
{
"type": "dict-modifiable",
"key": "root",
"label": "Roots",
"collapsable": false,
"collapsable_key": false,
"object_type": "text"
}
]
}
}
}
]
}
@@ -5,6 +5,11 @@
"label": "TVPaint",
"is_file": true,
"children": [
{
"type": "boolean",
"key": "stop_timer_on_application_exit",
"label": "Stop timer on application exit"
},
{
"type": "dict",
"collapsible": true,
@@ -24,13 +24,22 @@
}
]
},
{
"type": "schema_template",
"name": "template_publish_plugin",
"template_data": [
{
"key": "ValidateVersion",
"label": "Validate Version"
}
]
},
{
"type": "dict",
"collapsible": true,
"checkbox_key": "enabled",
"key": "ValidateVersion",
"label": "Validate Version",
"label": "Validate Intent",
"key": "ValidateIntent",
"is_group": true,
"checkbox_key": "enabled",
"children": [
{
"type": "boolean",

@@ -38,9 +47,43 @@
"label": "Enabled"
},
{
"type": "boolean",
"key": "optional",
"label": "Optional"
"type": "label",
"label": "Validate if Publishing intent was selected. It is possible to disable validation for specific publishing context with profiles."
},
{
"type": "list",
"collapsible": true,
"key": "profiles",
"object_type": {
"type": "dict",
"children": [
{
"key": "hosts",
"label": "Host names",
"type": "hosts-enum",
"multiselection": true
},
{
"key": "task_types",
"label": "Task types",
"type": "task-types-enum"
},
{
"key": "tasks",
"label": "Task names",
"type": "list",
"object_type": "text"
},
{
"type": "separator"
},
{
"key": "validate",
"label": "Validate",
"type": "boolean"
}
]
}
}
]
},
@@ -47,9 +47,14 @@
},
{
"type": "color",
"label": "Maya Scene:",
"label": "Maya Ascii:",
"key": "mayaAscii"
},
{
"type": "color",
"label": "Maya Scene:",
"key": "mayaScene"
},
{
"type": "color",
"label": "Set Dress:",
@@ -28,6 +28,16 @@
"type": "label",
"label": "Validators"
},
{
"type": "schema_template",
"name": "template_publish_plugin",
"template_data": [
{
"key": "ValidateInstanceInContext",
"label": "Validate Instance In Context"
}
]
},
{
"type": "schema_template",
"name": "template_publish_plugin",

@@ -82,6 +92,32 @@
]
},

{
"type": "dict",
"collapsible": true,
"key": "ValidateLoadedPlugin",
"label": "Validate Loaded Plugin",
"checkbox_key": "enabled",
"children": [
{
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "boolean",
"key": "whitelist_native_plugins",
"label": "Whitelist Maya Native Plugins"
},
{
"type": "list",
"key": "authorized_plugins",
"label": "Authorized plugins",
"object_type": "text"
}
]
},

{
"type": "dict",
"collapsible": true,
@@ -50,6 +50,16 @@
"type": "label",
"label": "Validators"
},
{
"type": "schema_template",
"name": "template_publish_plugin",
"template_data": [
{
"key": "ValidateInstanceInContext",
"label": "Validate Instance In Context"
}
]
},
{
"type": "schema_template",
"name": "template_publish_plugin",
@@ -40,6 +40,76 @@
        {
            "type": "splitter"
        },
+        {
+            "type": "dict",
+            "key": "disk_mapping",
+            "label": "Disk mapping",
+            "is_group": true,
+            "use_label_wrap": false,
+            "collapsible": false,
+            "children": [
+                {
+                    "key": "windows",
+                    "label": "Windows",
+                    "type": "list",
+                    "object_type": {
+                        "type": "list-strict",
+                        "key": "item",
+                        "object_types": [
+                            {
+                                "label": "Source",
+                                "type": "path"
+                            },
+                            {
+                                "label": "Destination",
+                                "type": "path"
+                            }
+                        ]
+                    }
+                },
+                {
+                    "key": "linux",
+                    "label": "Linux",
+                    "type": "list",
+                    "object_type": {
+                        "type": "list-strict",
+                        "key": "item",
+                        "object_types": [
+                            {
+                                "label": "Source",
+                                "type": "path"
+                            },
+                            {
+                                "label": "Destination",
+                                "type": "path"
+                            }
+                        ]
+                    }
+                },
+                {
+                    "key": "darwin",
+                    "label": "MacOS",
+                    "type": "list",
+                    "object_type": {
+                        "type": "list-strict",
+                        "key": "item",
+                        "object_types": [
+                            {
+                                "label": "Source",
+                                "type": "path"
+                            },
+                            {
+                                "label": "Destination",
+                                "type": "path"
+                            }
+                        ]
+                    }
+                }
+            ]
+        },
+        {
+            "type": "splitter"
+        },
        {
            "type": "path",
            "key": "openpype_path",
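The `disk_mapping` setting above stores, per platform key (`windows`, `linux`, `darwin`), a list of `[Source, Destination]` path pairs. As a rough illustration of how such pairs could be applied (a hedged sketch, not OpenPype's actual dirmap code; `remap_path` and the sample mapping are hypothetical), a prefix remap might look like:

```python
import platform

# Hypothetical settings payload shaped like the "disk_mapping" schema above:
# one list of [source, destination] pairs per platform key.
DISK_MAPPING = {
    "windows": [["P:/projects", "D:/local_projects"]],
    "linux": [["/mnt/projects", "/home/user/projects"]],
    "darwin": [["/Volumes/projects", "/Users/user/projects"]],
}


def remap_path(path, mapping=DISK_MAPPING, platform_name=None):
    """Replace the first matching source prefix with its destination."""
    platform_name = platform_name or platform.system().lower()
    for source, destination in mapping.get(platform_name, []):
        if path.startswith(source):
            return destination + path[len(source):]
    return path
```

Under the sample mapping, `remap_path("/mnt/projects/show/shot.ma", platform_name="linux")` yields `"/home/user/projects/show/shot.ma"`; unmatched paths pass through unchanged.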
@@ -168,7 +168,7 @@ class CacheValues:

class MongoSettingsHandler(SettingsHandler):
    """Settings handler that use mongo for storing and loading of settings."""
-    global_general_keys = ("openpype_path", "admin_password")
+    global_general_keys = ("openpype_path", "admin_password", "disk_mapping")

    def __init__(self):
        # Get mongo connection
@@ -1,9 +1,11 @@
__all__ = (
    "IDENTIFIER_ROLE",
+    "PROJECT_NAME_ROLE",

    "HierarchyView",

    "ProjectModel",
+    "ProjectProxyFilter",
    "CreateProjectDialog",

    "HierarchyModel",
@@ -20,12 +22,14 @@ __all__ = (

from .constants import (
-    IDENTIFIER_ROLE
+    IDENTIFIER_ROLE,
+    PROJECT_NAME_ROLE
)
from .widgets import CreateProjectDialog
from .view import HierarchyView
from .model import (
    ProjectModel,
+    ProjectProxyFilter,

    HierarchyModel,
    HierarchySelectionModel,
@@ -17,6 +17,9 @@ ITEM_TYPE_ROLE = QtCore.Qt.UserRole + 5
# Item has opened editor (per column)
EDITOR_OPENED_ROLE = QtCore.Qt.UserRole + 6

+# Role for project model
+PROJECT_NAME_ROLE = QtCore.Qt.UserRole + 7
+
# Allowed symbols for any name
NAME_ALLOWED_SYMBOLS = "a-zA-Z0-9_"
NAME_REGEX = re.compile("^[" + NAME_ALLOWED_SYMBOLS + "]*$")
@@ -9,7 +9,8 @@ from .constants import (
    DUPLICATED_ROLE,
    HIERARCHY_CHANGE_ABLE_ROLE,
    REMOVED_ROLE,
-    EDITOR_OPENED_ROLE
+    EDITOR_OPENED_ROLE,
+    PROJECT_NAME_ROLE
)
from .style import ResourceCache
@@ -29,7 +30,7 @@ class ProjectModel(QtGui.QStandardItemModel):
    def __init__(self, dbcon, *args, **kwargs):
        self.dbcon = dbcon

+        self._project_names = set()
        self._items_by_name = {}

        super(ProjectModel, self).__init__(*args, **kwargs)
@@ -37,29 +38,62 @@ class ProjectModel(QtGui.QStandardItemModel):
        """Reload projects."""
        self.dbcon.Session["AVALON_PROJECT"] = None

-        project_items = []
+        new_project_items = []

-        none_project = QtGui.QStandardItem("< Select Project >")
-        none_project.setData(None)
-        project_items.append(none_project)
+        if None not in self._items_by_name:
+            none_project = QtGui.QStandardItem("< Select Project >")
+            self._items_by_name[None] = none_project
+            new_project_items.append(none_project)

-        for doc in sorted(
-            self.dbcon.projects(projection={"name": 1}, only_active=True),
-            key=lambda x: x["name"]
-        ):
-            project_name = doc.get("name")
-            if project_name:
-                project_names.add(project_name)
-                project_items.append(QtGui.QStandardItem(project_name))
+        project_docs = self.dbcon.projects(
+            projection={"name": 1},
+            only_active=True
+        )
+        project_names = set()
+        for project_doc in project_docs:
+            project_name = project_doc.get("name")
+            if not project_name:
+                continue
+
+            project_names.add(project_name)
+            if project_name not in self._items_by_name:
+                project_item = QtGui.QStandardItem(project_name)
+                project_item.setData(project_name, PROJECT_NAME_ROLE)
+                self._items_by_name[project_name] = project_item
+                new_project_items.append(project_item)

-        self.clear()
-        self.invisibleRootItem().appendRows(project_items)
+        root_item = self.invisibleRootItem()
+        for project_name in tuple(self._items_by_name.keys()):
+            if project_name is None or project_name in project_names:
+                continue
+            project_item = self._items_by_name.pop(project_name)
+            root_item.removeRow(project_item.row())
+
+        self._project_names = project_names
+        if new_project_items:
+            root_item.appendRows(new_project_items)
+
+
+class ProjectProxyFilter(QtCore.QSortFilterProxyModel):
+    """Filters default project item."""
+    def __init__(self, *args, **kwargs):
+        super(ProjectProxyFilter, self).__init__(*args, **kwargs)
+        self._filter_default = False
+
+    def set_filter_default(self, enabled=True):
+        """Set if filtering of default item is enabled."""
+        if enabled == self._filter_default:
+            return
+        self._filter_default = enabled
+        self.invalidateFilter()
+
+    def filterAcceptsRow(self, row, parent):
+        if not self._filter_default:
+            return True
+
+        model = self.sourceModel()
+        source_index = model.index(row, self.filterKeyColumn(), parent)
+        return source_index.data(PROJECT_NAME_ROLE) is not None


class HierarchySelectionModel(QtCore.QItemSelectionModel):
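The `refresh` rewrite above replaces a full `clear()` + rebuild with an incremental diff against cached items: new names are appended, stale names removed, and surviving items are left untouched (which keeps view state such as the current selection stable). The core pattern, shown on plain dicts rather than Qt items (names here are illustrative, not from the codebase):

```python
def refresh(cached, fetched_names):
    """Incrementally sync a cache of items against freshly fetched names.

    Returns (added, removed) so a caller could append/remove model rows
    instead of rebuilding everything from scratch.
    """
    # Create items only for names not seen before.
    added = [name for name in fetched_names if name not in cached]
    for name in added:
        cached[name] = {"name": name}  # stand-in for a QStandardItem

    # Drop cached items whose name disappeared from the source.
    removed = [name for name in tuple(cached) if name not in fetched_names]
    for name in removed:
        cached.pop(name)

    return added, removed
```

Calling it twice with overlapping name sets demonstrates that only the delta is touched on the second pass.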
@@ -2,15 +2,17 @@ from Qt import QtWidgets, QtCore, QtGui

from . import (
    ProjectModel,
+    ProjectProxyFilter,

    HierarchyModel,
    HierarchySelectionModel,
    HierarchyView,

-    CreateProjectDialog
+    CreateProjectDialog,
+    PROJECT_NAME_ROLE
)
-from openpype.style import load_stylesheet
from .style import ResourceCache
+from openpype.style import load_stylesheet
from openpype.lib import is_admin_password_required
from openpype.widgets import PasswordDialog
@@ -35,9 +37,6 @@ class ProjectManagerWindow(QtWidgets.QWidget):
        self._password_dialog = None
        self._user_passed = False

-        # keep track of the current project PM is viewing
-        self._current_project = None
-
        self.setWindowTitle("OpenPype Project Manager")
        self.setWindowIcon(QtGui.QIcon(resources.get_openpype_icon_filepath()))
@@ -50,11 +49,15 @@ class ProjectManagerWindow(QtWidgets.QWidget):
        dbcon = AvalonMongoDB()

        project_model = ProjectModel(dbcon)
+        project_proxy = ProjectProxyFilter()
+        project_proxy.setSourceModel(project_model)
+        project_proxy.setDynamicSortFilter(True)

        project_combobox = QtWidgets.QComboBox(project_widget)
        project_combobox.setSizeAdjustPolicy(
            QtWidgets.QComboBox.AdjustToContents
        )
-        project_combobox.setModel(project_model)
+        project_combobox.setModel(project_proxy)
+        project_combobox.setRootModelIndex(QtCore.QModelIndex())
        style_delegate = QtWidgets.QStyledItemDelegate()
        project_combobox.setItemDelegate(style_delegate)
@@ -72,6 +75,7 @@ class ProjectManagerWindow(QtWidgets.QWidget):
            "Create Starting Folders",
            project_widget
        )
+        create_folders_btn.setEnabled(False)

        project_layout = QtWidgets.QHBoxLayout(project_widget)
        project_layout.setContentsMargins(0, 0, 0, 0)
@@ -147,6 +151,7 @@ class ProjectManagerWindow(QtWidgets.QWidget):
        add_task_btn.clicked.connect(self._on_add_task)

        self._project_model = project_model
+        self._project_proxy_model = project_proxy

        self.hierarchy_view = hierarchy_view
        self.hierarchy_model = hierarchy_model
@@ -165,8 +170,17 @@ class ProjectManagerWindow(QtWidgets.QWidget):
        self.setStyleSheet(load_stylesheet())

    def _set_project(self, project_name=None):
        self._create_folders_btn.setEnabled(project_name is not None)
+        self._project_proxy_model.set_filter_default(project_name is not None)
        self.hierarchy_view.set_project(project_name)

+    def _current_project(self):
+        row = self._project_combobox.currentIndex()
+        if row < 0:
+            return None
+        index = self._project_proxy_model.index(row, 0)
+        return index.data(PROJECT_NAME_ROLE)
+
    def showEvent(self, event):
        super(ProjectManagerWindow, self).showEvent(event)
@@ -185,6 +199,7 @@ class ProjectManagerWindow(QtWidgets.QWidget):
        project_name = self._project_combobox.currentText()

        self._project_model.refresh()
+        self._project_proxy_model.sort(0, QtCore.Qt.AscendingOrder)

        if self._project_combobox.count() == 0:
            return self._set_project()
@@ -194,12 +209,12 @@ class ProjectManagerWindow(QtWidgets.QWidget):
        if row >= 0:
            self._project_combobox.setCurrentIndex(row)

-        self._set_project(self._project_combobox.currentText())
+        selected_project = self._current_project()
+        self._set_project(selected_project)

    def _on_project_change(self):
-        if self._project_combobox.currentIndex() != 0:
-            self._current_project = self._project_combobox.currentText()
-        self._set_project(self._current_project)
+        selected_project = self._current_project()
+        self._set_project(selected_project)

    def _on_project_refresh(self):
        self.refresh_projects()
@@ -214,7 +229,8 @@ class ProjectManagerWindow(QtWidgets.QWidget):
        self.hierarchy_view.add_task()

    def _on_create_folders(self):
-        if not self._current_project:
+        project_name = self._current_project()
+        if not project_name:
            return

        qm = QtWidgets.QMessageBox
@@ -225,11 +241,11 @@ class ProjectManagerWindow(QtWidgets.QWidget):
        if ans == qm.Yes:
            try:
                # Get paths based on presets
-                basic_paths = get_project_basic_paths(self._current_project)
+                basic_paths = get_project_basic_paths(project_name)
                if not basic_paths:
                    pass
                # Invoking OpenPype API to create the project folders
-                create_project_folders(basic_paths, self._current_project)
+                create_project_folders(basic_paths, project_name)
            except Exception as exc:
                self.log.warning(
                    "Cannot create starting folders: {}".format(exc),
@@ -246,11 +262,9 @@ class ProjectManagerWindow(QtWidgets.QWidget):
        if dialog.result() != 1:
            return

-        self._current_project = dialog.project_name
-        self.show_message(
-            "Created project \"{}\"".format(self._current_project)
-        )
-        self.refresh_projects(self._current_project)
+        project_name = dialog.project_name
+        self.show_message("Created project \"{}\"".format(project_name))
+        self.refresh_projects(project_name)

    def _show_password_dialog(self):
        if self._password_dialog:
openpype/tools/settings/settings/constants.py (new file, 16 additions)
@@ -0,0 +1,16 @@
+from Qt import QtCore
+
+
+DEFAULT_PROJECT_LABEL = "< Default >"
+PROJECT_NAME_ROLE = QtCore.Qt.UserRole + 1
+PROJECT_IS_ACTIVE_ROLE = QtCore.Qt.UserRole + 2
+PROJECT_IS_SELECTED_ROLE = QtCore.Qt.UserRole + 3
+
+
+__all__ = (
+    "DEFAULT_PROJECT_LABEL",
+
+    "PROJECT_NAME_ROLE",
+    "PROJECT_IS_ACTIVE_ROLE",
+    "PROJECT_IS_SELECTED_ROLE"
+)
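The new constants module claims custom item-data roles by offsetting from `Qt.UserRole`, which Qt reserves as the first application-defined role (0x0100 in Qt 5). Stripped of Qt, the pattern is just a set of distinct integer keys per item; the `item_data` dict below is a stand-in for a `QStandardItem`'s role storage, not real Qt code:

```python
# Qt's ItemDataRole reserves values below UserRole (0x0100 in Qt 5) for
# built-in roles; applications claim custom roles by offsetting from it.
USER_ROLE = 0x0100  # QtCore.Qt.UserRole in Qt 5

PROJECT_NAME_ROLE = USER_ROLE + 1
PROJECT_IS_ACTIVE_ROLE = USER_ROLE + 2
PROJECT_IS_SELECTED_ROLE = USER_ROLE + 3

# A model can then store several independent values on a single item,
# keyed by role, much like a dict keyed by integers:
item_data = {}
item_data[PROJECT_NAME_ROLE] = "my_project"
item_data[PROJECT_IS_ACTIVE_ROLE] = True
item_data[PROJECT_IS_SELECTED_ROLE] = False
```

Because each role is a unique integer, `item.setData(value, role)` / `index.data(role)` can carry name, active state, and selection state on the same row without interfering with Qt's display or font roles.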
@@ -7,6 +7,12 @@ from avalon.mongodb import (
)

from openpype.settings.lib import get_system_settings
+from .constants import (
+    DEFAULT_PROJECT_LABEL,
+    PROJECT_NAME_ROLE,
+    PROJECT_IS_ACTIVE_ROLE,
+    PROJECT_IS_SELECTED_ROLE
+)


class SettingsLineEdit(QtWidgets.QLineEdit):
@@ -602,10 +608,63 @@ class NiceCheckbox(QtWidgets.QFrame):
        return super(NiceCheckbox, self).mouseReleaseEvent(event)


-class ProjectListModel(QtGui.QStandardItemModel):
-    sort_role = QtCore.Qt.UserRole + 10
-    filter_role = QtCore.Qt.UserRole + 11
-    selected_role = QtCore.Qt.UserRole + 12
+class ProjectModel(QtGui.QStandardItemModel):
+    def __init__(self, only_active, *args, **kwargs):
+        super(ProjectModel, self).__init__(*args, **kwargs)
+
+        self.dbcon = None
+
+        self._only_active = only_active
+        self._default_item = None
+        self._items_by_name = {}
+
+    def set_dbcon(self, dbcon):
+        self.dbcon = dbcon
+
+    def refresh(self):
+        new_items = []
+        if self._default_item is None:
+            item = QtGui.QStandardItem(DEFAULT_PROJECT_LABEL)
+            item.setData(None, PROJECT_NAME_ROLE)
+            item.setData(True, PROJECT_IS_ACTIVE_ROLE)
+            item.setData(False, PROJECT_IS_SELECTED_ROLE)
+            new_items.append(item)
+            self._default_item = item
+
+        project_names = set()
+        if self.dbcon is not None:
+            for project_doc in self.dbcon.projects(
+                projection={"name": 1, "data.active": 1},
+                only_active=self._only_active
+            ):
+                project_name = project_doc["name"]
+                project_names.add(project_name)
+                if project_name in self._items_by_name:
+                    item = self._items_by_name[project_name]
+                else:
+                    item = QtGui.QStandardItem(project_name)
+
+                    self._items_by_name[project_name] = item
+                    new_items.append(item)
+
+                is_active = project_doc.get("data", {}).get("active", True)
+                item.setData(project_name, PROJECT_NAME_ROLE)
+                item.setData(is_active, PROJECT_IS_ACTIVE_ROLE)
+                item.setData(False, PROJECT_IS_SELECTED_ROLE)
+
+                if not is_active:
+                    font = item.font()
+                    font.setItalic(True)
+                    item.setFont(font)
+
+        root_item = self.invisibleRootItem()
+        for project_name in tuple(self._items_by_name.keys()):
+            if project_name not in project_names:
+                item = self._items_by_name.pop(project_name)
+                root_item.removeRow(item.row())
+
+        if new_items:
+            root_item.appendRows(new_items)


class ProjectListView(QtWidgets.QListView):
@@ -618,19 +677,36 @@ class ProjectListView(QtWidgets.QListView):
        super(ProjectListView, self).mouseReleaseEvent(event)


-class ProjectListSortFilterProxy(QtCore.QSortFilterProxyModel):
+class ProjectSortFilterProxy(QtCore.QSortFilterProxyModel):
    def __init__(self, *args, **kwargs):
-        super(ProjectListSortFilterProxy, self).__init__(*args, **kwargs)
+        super(ProjectSortFilterProxy, self).__init__(*args, **kwargs)
        self._enable_filter = True

+    def lessThan(self, left_index, right_index):
+        if left_index.data(PROJECT_NAME_ROLE) is None:
+            return True
+
+        if right_index.data(PROJECT_NAME_ROLE) is None:
+            return False
+
+        left_is_active = left_index.data(PROJECT_IS_ACTIVE_ROLE)
+        right_is_active = right_index.data(PROJECT_IS_ACTIVE_ROLE)
+        if right_is_active == left_is_active:
+            return super(ProjectSortFilterProxy, self).lessThan(
+                left_index, right_index
+            )
+
+        if left_is_active:
+            return True
+        return False
+
    def filterAcceptsRow(self, source_row, source_parent):
        if not self._enable_filter:
            return True

        index = self.sourceModel().index(source_row, 0, source_parent)
        is_active = bool(index.data(self.filterRole()))
-        is_selected = bool(index.data(ProjectListModel.selected_role))
+        is_selected = bool(index.data(PROJECT_IS_SELECTED_ROLE))

        return is_active or is_selected
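`ProjectSortFilterProxy.lessThan` above orders rows so the default item (whose `PROJECT_NAME_ROLE` is `None`) comes first and active projects sort before inactive ones, delegating to the base-class comparison only within a group. The same ordering as a plain Python sort key (a sketch; the case-insensitive within-group tiebreak is an assumption here, since the proxy defers that step to Qt):

```python
def project_sort_key(project):
    """Sort key mirroring ProjectSortFilterProxy.lessThan:
    default item (name is None) first, then active before inactive,
    then alphabetical within each group.
    """
    name, is_active = project
    return (
        name is not None,      # False sorts first -> default item on top
        not is_active,         # active (False) before inactive (True)
        (name or "").lower(),  # alphabetical inside each group
    )


projects = [
    ("zulu", True),
    ("alpha", False),
    ("mike", True),
    (None, True),      # the "< Default >" item
    ("bravo", True),
]
ordered = sorted(projects, key=project_sort_key)
```

Encoding the precedence as a tuple keeps the comparison total and transitive, which is exactly what `QSortFilterProxyModel` requires of `lessThan`.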
@@ -643,7 +719,6 @@ class ProjectSortFilterProxy(QtCore.QSortFilterProxyModel):


class ProjectListWidget(QtWidgets.QWidget):
-    default = "< Default >"
    project_changed = QtCore.Signal()

    def __init__(self, parent, only_active=False):
@@ -657,13 +732,10 @@ class ProjectListWidget(QtWidgets.QWidget):
        label_widget = QtWidgets.QLabel("Projects")

        project_list = ProjectListView(self)
-        project_model = ProjectListModel()
-        project_proxy = ProjectListSortFilterProxy()
-
-        project_proxy.setFilterRole(ProjectListModel.filter_role)
-        project_proxy.setSortRole(ProjectListModel.sort_role)
-        project_proxy.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive)
+        project_model = ProjectModel(only_active)
+        project_proxy = ProjectSortFilterProxy()
+
+        project_proxy.setFilterRole(PROJECT_IS_ACTIVE_ROLE)
        project_proxy.setSourceModel(project_model)
        project_list.setModel(project_proxy)
@@ -693,13 +765,14 @@ class ProjectListWidget(QtWidgets.QWidget):
        project_list.left_mouse_released_at.connect(self.on_item_clicked)

        self._default_project_item = None

        self.project_list = project_list
        self.project_proxy = project_proxy
+        self.project_model = project_model
        self.inactive_chk = inactive_chk

        self.dbcon = None
+        self._only_active = only_active

    def on_item_clicked(self, new_index):
        new_project_name = new_index.data(QtCore.Qt.DisplayRole)
@@ -746,12 +819,12 @@ class ProjectListWidget(QtWidgets.QWidget):
        return not self._parent.entity.has_unsaved_changes

    def project_name(self):
-        if self.current_project == self.default:
+        if self.current_project == DEFAULT_PROJECT_LABEL:
            return None
        return self.current_project

    def select_default_project(self):
-        self.select_project(self.default)
+        self.select_project(DEFAULT_PROJECT_LABEL)

    def select_project(self, project_name):
        model = self.project_model
@@ -759,10 +832,10 @@ class ProjectListWidget(QtWidgets.QWidget):

        found_items = model.findItems(project_name)
        if not found_items:
-            found_items = model.findItems(self.default)
+            found_items = model.findItems(DEFAULT_PROJECT_LABEL)

        index = model.indexFromItem(found_items[0])
-        model.setData(index, True, ProjectListModel.selected_role)
+        model.setData(index, True, PROJECT_IS_SELECTED_ROLE)

        index = proxy.mapFromSource(index)
@@ -777,9 +850,6 @@ class ProjectListWidget(QtWidgets.QWidget):
            selected_project = index.data(QtCore.Qt.DisplayRole)
            break

-        model = self.project_model
-        model.clear()
-
        mongo_url = os.environ["OPENPYPE_MONGO"]

        # Force uninstall of whole avalon connection if url does not match
@@ -797,35 +867,8 @@ class ProjectListWidget(QtWidgets.QWidget):
            self.dbcon = None
            self.current_project = None

-        items = [(self.default, True)]
-
-        if self.dbcon:
-            for doc in self.dbcon.projects(
-                projection={"name": 1, "data.active": 1},
-                only_active=self._only_active
-            ):
-                items.append(
-                    (doc["name"], doc.get("data", {}).get("active", True))
-                )
-
-        for project_name, is_active in items:
-            row = QtGui.QStandardItem(project_name)
-            row.setData(is_active, ProjectListModel.filter_role)
-            row.setData(False, ProjectListModel.selected_role)
-
-            if is_active:
-                row.setData(project_name, ProjectListModel.sort_role)
-            else:
-                row.setData("~" + project_name, ProjectListModel.sort_role)
-
-                font = row.font()
-                font.setItalic(True)
-                row.setFont(font)
-
-            model.appendRow(row)
+        self.project_model.set_dbcon(self.dbcon)
+        self.project_model.refresh()

        self.project_proxy.sort(0)
Some files were not shown because too many files have changed in this diff.