Merge branch 'develop' into bugfix/OP-3654_Flame-re-timing-produces-frame-range-discrepancy-

This commit is contained in:
Jakub Jezek 2022-08-17 12:07:51 +02:00
commit fa745048c2
No known key found for this signature in database
GPG key ID: 730D7C02726179A7
222 changed files with 10221 additions and 3212 deletions

3
.gitignore vendored
View file

@@ -102,5 +102,8 @@ website/.docusaurus
.poetry/
.python-version
.editorconfig
.pre-commit-config.yaml
mypy.ini
tools/run_eventserver.*

3
.gitmodules vendored
View file

@@ -5,3 +5,6 @@
[submodule "tools/modules/powershell/PSWriteColor"]
path = tools/modules/powershell/PSWriteColor
url = https://github.com/EvotecIT/PSWriteColor.git
[submodule "vendor/configs/OpenColorIO-Configs"]
path = vendor/configs/OpenColorIO-Configs
url = https://github.com/imageworks/OpenColorIO-Configs

View file

@@ -1,8 +1,85 @@
# Changelog
## [3.13.1-nightly.3](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.13.0...HEAD)
**🐛 Bug fixes**
- General: Extract Review can scale with pixel aspect ratio [\#3644](https://github.com/pypeclub/OpenPype/pull/3644)
- Maya: Refactor moved usage of CreateRender settings [\#3643](https://github.com/pypeclub/OpenPype/pull/3643)
- General: Hero version representations have full context [\#3638](https://github.com/pypeclub/OpenPype/pull/3638)
- Nuke: color settings for render write node are working now [\#3632](https://github.com/pypeclub/OpenPype/pull/3632)
- Maya: FBX support for update in reference loader [\#3631](https://github.com/pypeclub/OpenPype/pull/3631)
**🔀 Refactored code**
- TimersManager: Plugins are in timers manager module [\#3639](https://github.com/pypeclub/OpenPype/pull/3639)
- General: Move workfiles functions into pipeline [\#3637](https://github.com/pypeclub/OpenPype/pull/3637)
**Merged pull requests:**
- Deadline: Global job pre load is not Pype 2 compatible [\#3666](https://github.com/pypeclub/OpenPype/pull/3666)
- Maya: Remove unused get current renderer logic [\#3645](https://github.com/pypeclub/OpenPype/pull/3645)
- Kitsu|Fix: Movie project type fails & first loop children names [\#3636](https://github.com/pypeclub/OpenPype/pull/3636)
- Fix failing look extraction when UDIM format is used in AiImage [\#3628](https://github.com/pypeclub/OpenPype/pull/3628)
## [3.13.0](https://github.com/pypeclub/OpenPype/tree/3.13.0) (2022-08-09)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.13.0-nightly.1...3.13.0)
**🆕 New features**
- Support for multiple installed versions - 3.13 [\#3605](https://github.com/pypeclub/OpenPype/pull/3605)
**🚀 Enhancements**
- Editorial: Mix audio use side file for ffmpeg filters [\#3630](https://github.com/pypeclub/OpenPype/pull/3630)
- Ftrack: Comment template can contain optional keys [\#3615](https://github.com/pypeclub/OpenPype/pull/3615)
- Ftrack: Add more metadata to ftrack components [\#3612](https://github.com/pypeclub/OpenPype/pull/3612)
- General: Add context to pyblish context [\#3594](https://github.com/pypeclub/OpenPype/pull/3594)
- Kitsu: Shot&Sequence name with prefix over appends [\#3593](https://github.com/pypeclub/OpenPype/pull/3593)
- Photoshop: implemented {layer} placeholder in subset template [\#3591](https://github.com/pypeclub/OpenPype/pull/3591)
- General: Python module appdirs from git [\#3589](https://github.com/pypeclub/OpenPype/pull/3589)
- Ftrack: Update ftrack api to 2.3.3 [\#3588](https://github.com/pypeclub/OpenPype/pull/3588)
- General: New Integrator small fixes [\#3583](https://github.com/pypeclub/OpenPype/pull/3583)
**🐛 Bug fixes**
- Maya: fix aov separator in Redshift [\#3625](https://github.com/pypeclub/OpenPype/pull/3625)
- Fix for multi-version build on Mac [\#3622](https://github.com/pypeclub/OpenPype/pull/3622)
- Ftrack: Sync hierarchical attributes can handle new created entities [\#3621](https://github.com/pypeclub/OpenPype/pull/3621)
- General: Extract review aspect ratio scale is calculated by ffmpeg [\#3620](https://github.com/pypeclub/OpenPype/pull/3620)
- Maya: Fix types of default settings [\#3617](https://github.com/pypeclub/OpenPype/pull/3617)
- Integrator: Don't force to have dot before frame [\#3611](https://github.com/pypeclub/OpenPype/pull/3611)
- AfterEffects: refactored integrate doesn't work for multi-frame publishes [\#3610](https://github.com/pypeclub/OpenPype/pull/3610)
- Maya look data contents fails with custom attribute on group [\#3607](https://github.com/pypeclub/OpenPype/pull/3607)
- TrayPublisher: Fix wrong conflict merge [\#3600](https://github.com/pypeclub/OpenPype/pull/3600)
- Bugfix: Add OCIO as submodule to prepare for handling `maketx` color space conversion. [\#3590](https://github.com/pypeclub/OpenPype/pull/3590)
- Fix general settings environment variables resolution [\#3587](https://github.com/pypeclub/OpenPype/pull/3587)
- Editorial publishing workflow improvements [\#3580](https://github.com/pypeclub/OpenPype/pull/3580)
- General: Update imports in start script [\#3579](https://github.com/pypeclub/OpenPype/pull/3579)
- Nuke: render family integration consistency [\#3576](https://github.com/pypeclub/OpenPype/pull/3576)
- Ftrack: Handle missing published path in integrator [\#3570](https://github.com/pypeclub/OpenPype/pull/3570)
- Nuke: publish existing frames with slate with correct range [\#3555](https://github.com/pypeclub/OpenPype/pull/3555)
**🔀 Refactored code**
- General: Plugin settings handled by plugins [\#3623](https://github.com/pypeclub/OpenPype/pull/3623)
- General: Naive implementation of document create, update, delete [\#3601](https://github.com/pypeclub/OpenPype/pull/3601)
- General: Use query functions in general code [\#3596](https://github.com/pypeclub/OpenPype/pull/3596)
- General: Separate extraction of template data into more functions [\#3574](https://github.com/pypeclub/OpenPype/pull/3574)
- General: Lib cleanup [\#3571](https://github.com/pypeclub/OpenPype/pull/3571)
**Merged pull requests:**
- Webpublisher: timeout for PS studio processing [\#3619](https://github.com/pypeclub/OpenPype/pull/3619)
- Core: translated validate\_containers.py into New publisher style [\#3614](https://github.com/pypeclub/OpenPype/pull/3614)
- Enable write color sets on animation publish automatically [\#3582](https://github.com/pypeclub/OpenPype/pull/3582)
## [3.12.2](https://github.com/pypeclub/OpenPype/tree/3.12.2) (2022-07-27)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.12.1...3.12.2)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.2-nightly.4...3.12.2)
### 📖 Documentation
@@ -13,13 +90,6 @@
- General: Global thumbnail extractor is ready for more cases [\#3561](https://github.com/pypeclub/OpenPype/pull/3561)
- Maya: add additional validators to Settings [\#3540](https://github.com/pypeclub/OpenPype/pull/3540)
- General: Interactive console in cli [\#3526](https://github.com/pypeclub/OpenPype/pull/3526)
- Ftrack: Automatic daily review session creation can define trigger hour [\#3516](https://github.com/pypeclub/OpenPype/pull/3516)
- Ftrack: add source into Note [\#3509](https://github.com/pypeclub/OpenPype/pull/3509)
- Add pack and unpack convenience scripts [\#3502](https://github.com/pypeclub/OpenPype/pull/3502)
- NewPublisher: Keep plugins with mismatch target in report [\#3498](https://github.com/pypeclub/OpenPype/pull/3498)
- Nuke: load clip with options from settings [\#3497](https://github.com/pypeclub/OpenPype/pull/3497)
- TrayPublisher: implemented render\_mov\_batch [\#3486](https://github.com/pypeclub/OpenPype/pull/3486)
**🐛 Bug fixes**
@@ -32,15 +102,6 @@
- Module interfaces: Fix import error [\#3547](https://github.com/pypeclub/OpenPype/pull/3547)
- Workfiles tool: Show of tool and its flags [\#3539](https://github.com/pypeclub/OpenPype/pull/3539)
- General: Create workfile documents works again [\#3538](https://github.com/pypeclub/OpenPype/pull/3538)
- Additional fixes for powershell scripts [\#3525](https://github.com/pypeclub/OpenPype/pull/3525)
- Maya: Added wrapper around cmds.setAttr [\#3523](https://github.com/pypeclub/OpenPype/pull/3523)
- Nuke: double slate [\#3521](https://github.com/pypeclub/OpenPype/pull/3521)
- General: Fix hash of centos oiio archive [\#3519](https://github.com/pypeclub/OpenPype/pull/3519)
- Maya: Renderman display output fix [\#3514](https://github.com/pypeclub/OpenPype/pull/3514)
- TrayPublisher: Simple creation enhancements and fixes [\#3513](https://github.com/pypeclub/OpenPype/pull/3513)
- NewPublisher: Publish attributes are properly collected [\#3510](https://github.com/pypeclub/OpenPype/pull/3510)
- TrayPublisher: Make sure host name is filled [\#3504](https://github.com/pypeclub/OpenPype/pull/3504)
- NewPublisher: Groups work and enum multivalue [\#3501](https://github.com/pypeclub/OpenPype/pull/3501)
**🔀 Refactored code**
@@ -49,9 +110,6 @@
- Refactor Integrate Asset [\#3530](https://github.com/pypeclub/OpenPype/pull/3530)
- General: Client docstrings cleanup [\#3529](https://github.com/pypeclub/OpenPype/pull/3529)
- General: Move load related functions into pipeline [\#3527](https://github.com/pypeclub/OpenPype/pull/3527)
- General: Get current context document functions [\#3522](https://github.com/pypeclub/OpenPype/pull/3522)
- Kitsu: Use query function from client [\#3496](https://github.com/pypeclub/OpenPype/pull/3496)
- Deadline: Use query functions [\#3466](https://github.com/pypeclub/OpenPype/pull/3466)
**Merged pull requests:**
@@ -61,51 +119,6 @@
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.1-nightly.6...3.12.1)
### 📖 Documentation
- Docs: Added minimal permissions for MongoDB [\#3441](https://github.com/pypeclub/OpenPype/pull/3441)
**🚀 Enhancements**
- TrayPublisher: Added more options for grouping of instances [\#3494](https://github.com/pypeclub/OpenPype/pull/3494)
- NewPublisher: Align creator attributes from top to bottom [\#3487](https://github.com/pypeclub/OpenPype/pull/3487)
- NewPublisher: Added ability to use label of instance [\#3484](https://github.com/pypeclub/OpenPype/pull/3484)
- General: Creator Plugins have access to project [\#3476](https://github.com/pypeclub/OpenPype/pull/3476)
- General: Better arguments order in creator init [\#3475](https://github.com/pypeclub/OpenPype/pull/3475)
- Ftrack: Trigger custom ftrack events on project creation and preparation [\#3465](https://github.com/pypeclub/OpenPype/pull/3465)
- Windows installer: Clean old files and add version subfolder [\#3445](https://github.com/pypeclub/OpenPype/pull/3445)
**🐛 Bug fixes**
- TrayPublisher: Keep use instance label in list view [\#3493](https://github.com/pypeclub/OpenPype/pull/3493)
- General: Extract review use first frame of input sequence [\#3491](https://github.com/pypeclub/OpenPype/pull/3491)
- General: Fix Plist loading for application launch [\#3485](https://github.com/pypeclub/OpenPype/pull/3485)
- Nuke: Workfile tools open on start [\#3479](https://github.com/pypeclub/OpenPype/pull/3479)
- New Publisher: Disabled context change allows creation [\#3478](https://github.com/pypeclub/OpenPype/pull/3478)
- General: thumbnail extractor fix [\#3474](https://github.com/pypeclub/OpenPype/pull/3474)
- Kitsu: bugfix with sync-service and publish plugins [\#3473](https://github.com/pypeclub/OpenPype/pull/3473)
- Flame: solved problem with multi-selected loading [\#3470](https://github.com/pypeclub/OpenPype/pull/3470)
- General: Fix query function in update logic [\#3468](https://github.com/pypeclub/OpenPype/pull/3468)
- Resolve: removed a few bugs [\#3464](https://github.com/pypeclub/OpenPype/pull/3464)
- General: Delete old versions is safer when ftrack is disabled [\#3462](https://github.com/pypeclub/OpenPype/pull/3462)
- Nuke: fixing metadata slate TC difference [\#3455](https://github.com/pypeclub/OpenPype/pull/3455)
- Nuke: prerender reviewable fails [\#3450](https://github.com/pypeclub/OpenPype/pull/3450)
- Maya: fix hashing in Python 3 for tile rendering [\#3447](https://github.com/pypeclub/OpenPype/pull/3447)
- LogViewer: Escape html characters in log message [\#3443](https://github.com/pypeclub/OpenPype/pull/3443)
**🔀 Refactored code**
- Maya: Merge animation + pointcache extractor logic [\#3461](https://github.com/pypeclub/OpenPype/pull/3461)
- Maya: Re-use `maintained\_time` from lib [\#3460](https://github.com/pypeclub/OpenPype/pull/3460)
- General: Use query functions in global plugins [\#3459](https://github.com/pypeclub/OpenPype/pull/3459)
- Clockify: Use query functions in clockify actions [\#3458](https://github.com/pypeclub/OpenPype/pull/3458)
- General: Use query functions in rest api calls [\#3457](https://github.com/pypeclub/OpenPype/pull/3457)
- General: Use query functions in openpype lib functions [\#3454](https://github.com/pypeclub/OpenPype/pull/3454)
- General: Use query functions in load utils [\#3446](https://github.com/pypeclub/OpenPype/pull/3446)
- General: Move publish plugin and publish render abstractions [\#3442](https://github.com/pypeclub/OpenPype/pull/3442)
- General: Use Anatomy after move to pipeline [\#3436](https://github.com/pypeclub/OpenPype/pull/3436)
- General: Anatomy moved to pipeline [\#3435](https://github.com/pypeclub/OpenPype/pull/3435)
## [3.12.0](https://github.com/pypeclub/OpenPype/tree/3.12.0) (2022-06-28)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.12.0-nightly.3...3.12.0)

View file

@@ -122,7 +122,7 @@ class OpenPypeVersion(semver.VersionInfo):
if self.staging:
if kwargs.get("build"):
if "staging" not in kwargs.get("build"):
kwargs["build"] = "{}-staging".format(kwargs.get("build"))
kwargs["build"] = f"{kwargs.get('build')}-staging"
else:
kwargs["build"] = "staging"
@@ -136,8 +136,7 @@ class OpenPypeVersion(semver.VersionInfo):
return bool(result and self.staging == other.staging)
def __repr__(self):
return "<{}: {} - path={}>".format(
self.__class__.__name__, str(self), self.path)
return f"<{self.__class__.__name__}: {str(self)} - path={self.path}>"
def __lt__(self, other: OpenPypeVersion):
result = super().__lt__(other)
@@ -232,10 +231,7 @@ class OpenPypeVersion(semver.VersionInfo):
return openpype_version
def __hash__(self):
if self.path:
return hash(self.path)
else:
return hash(str(self))
return hash(self.path) if self.path else hash(str(self))
@staticmethod
def is_version_in_dir(
@@ -384,7 +380,8 @@ class OpenPypeVersion(semver.VersionInfo):
@classmethod
def get_local_versions(
cls, production: bool = None, staging: bool = None
cls, production: bool = None,
staging: bool = None, compatible_with: OpenPypeVersion = None
) -> List:
"""Get all versions available on this machine.
@@ -394,6 +391,8 @@ class OpenPypeVersion(semver.VersionInfo):
Args:
production (bool): Return production versions.
staging (bool): Return staging versions.
compatible_with (OpenPypeVersion): Return only those compatible
with specified version.
"""
# Return all local versions if arguments are set to None
if production is None and staging is None:
@@ -410,10 +409,19 @@ class OpenPypeVersion(semver.VersionInfo):
if not production and not staging:
return []
# DEPRECATED: backwards compatible way to look for versions in root
dir_to_search = Path(user_data_dir("openpype", "pypeclub"))
versions = OpenPypeVersion.get_versions_from_directory(
dir_to_search
dir_to_search, compatible_with=compatible_with
)
if compatible_with:
dir_to_search = Path(
user_data_dir("openpype", "pypeclub")) / f"{compatible_with.major}.{compatible_with.minor}" # noqa
versions += OpenPypeVersion.get_versions_from_directory(
dir_to_search, compatible_with=compatible_with
)
filtered_versions = []
for version in versions:
if version.is_staging():
@@ -425,7 +433,8 @@ class OpenPypeVersion(semver.VersionInfo):
@classmethod
def get_remote_versions(
cls, production: bool = None, staging: bool = None
cls, production: bool = None,
staging: bool = None, compatible_with: OpenPypeVersion = None
) -> List:
"""Get all versions available in OpenPype Path.
@@ -435,6 +444,8 @@ class OpenPypeVersion(semver.VersionInfo):
Args:
production (bool): Return production versions.
staging (bool): Return staging versions.
compatible_with (OpenPypeVersion): Return only those compatible
with specified version.
"""
# Return all local versions if arguments are set to None
if production is None and staging is None:
@@ -468,7 +479,14 @@ class OpenPypeVersion(semver.VersionInfo):
if not dir_to_search:
return []
versions = cls.get_versions_from_directory(dir_to_search)
# DEPRECATED: look for version in root directory
versions = cls.get_versions_from_directory(
dir_to_search, compatible_with=compatible_with)
if compatible_with:
dir_to_search = dir_to_search / f"{compatible_with.major}.{compatible_with.minor}" # noqa
versions += cls.get_versions_from_directory(
dir_to_search, compatible_with=compatible_with)
filtered_versions = []
for version in versions:
if version.is_staging():
@@ -479,11 +497,15 @@ class OpenPypeVersion(semver.VersionInfo):
return list(sorted(set(filtered_versions)))
@staticmethod
def get_versions_from_directory(openpype_dir: Path) -> List:
def get_versions_from_directory(
openpype_dir: Path,
compatible_with: OpenPypeVersion = None) -> List:
"""Get all detected OpenPype versions in directory.
Args:
openpype_dir (Path): Directory to scan.
compatible_with (OpenPypeVersion): Return only versions compatible
with build version specified as OpenPypeVersion.
Returns:
list of OpenPypeVersion
@@ -492,10 +514,10 @@ class OpenPypeVersion(semver.VersionInfo):
ValueError: if invalid path is specified.
"""
if not openpype_dir.exists() and not openpype_dir.is_dir():
raise ValueError("specified directory is invalid")
_openpype_versions = []
if not openpype_dir.exists() and not openpype_dir.is_dir():
return _openpype_versions
# iterate over directory in first level and find all that might
# contain OpenPype.
for item in openpype_dir.iterdir():
@@ -518,6 +540,10 @@ class OpenPypeVersion(semver.VersionInfo):
)[0]:
continue
if compatible_with and not detected_version.is_compatible(
compatible_with):
continue
detected_version.path = item
_openpype_versions.append(detected_version)
@@ -549,8 +575,9 @@ class OpenPypeVersion(semver.VersionInfo):
def get_latest_version(
staging: bool = False,
local: bool = None,
remote: bool = None
) -> OpenPypeVersion:
remote: bool = None,
compatible_with: OpenPypeVersion = None
) -> Union[OpenPypeVersion, None]:
"""Get latest available version.
The version does not contain information about path and source.
@@ -568,6 +595,9 @@ class OpenPypeVersion(semver.VersionInfo):
staging (bool, optional): List staging versions if True.
local (bool, optional): List local versions if True.
remote (bool, optional): List remote versions if True.
compatible_with (OpenPypeVersion, optional): Return only version
compatible with compatible_with.
"""
if local is None and remote is None:
local = True
@@ -598,7 +628,12 @@ class OpenPypeVersion(semver.VersionInfo):
return None
all_versions.sort()
return all_versions[-1]
latest_version: OpenPypeVersion
latest_version = all_versions[-1]
if compatible_with and not latest_version.is_compatible(
compatible_with):
return None
return latest_version
@classmethod
def get_expected_studio_version(cls, staging=False, global_settings=None):
@@ -621,6 +656,21 @@ class OpenPypeVersion(semver.VersionInfo):
return None
return OpenPypeVersion(version=result)
def is_compatible(self, version: OpenPypeVersion):
"""Test build compatibility.
This will simply compare major and minor versions (ignoring patch
and the rest).
Args:
version (OpenPypeVersion): Version to check compatibility with.
Returns:
bool: True if the version is compatible.
"""
return self.major == version.major and self.minor == version.minor
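To make the new rule concrete, a minimal hedged sketch of the compatibility check (version strings are hypothetical; `OpenPypeVersion` is imported from `igniter.bootstrap_repos`, as in the CLI change further below):

```python
from igniter.bootstrap_repos import OpenPypeVersion

installed = OpenPypeVersion(version="3.13.0")
# Only major.minor must match; patch and prerelease parts are ignored.
assert OpenPypeVersion(version="3.13.1-nightly.3").is_compatible(installed)
assert not OpenPypeVersion(version="3.12.2").is_compatible(installed)
```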
class BootstrapRepos:
"""Class for bootstrapping local OpenPype installation.
@@ -741,8 +791,9 @@ class BootstrapRepos:
return
# create destination directory
if not self.data_dir.exists():
self.data_dir.mkdir(parents=True)
destination = self.data_dir / f"{installed_version.major}.{installed_version.minor}" # noqa
if not destination.exists():
destination.mkdir(parents=True)
# create zip inside temporary directory.
with tempfile.TemporaryDirectory() as temp_dir:
@@ -770,7 +821,9 @@ class BootstrapRepos:
Path to moved zip on success.
"""
destination = self.data_dir / zip_file.name
version = OpenPypeVersion.version_in_str(zip_file.name)
destination_dir = self.data_dir / f"{version.major}.{version.minor}"
destination = destination_dir / zip_file.name
if destination.exists():
self._print(
@@ -782,7 +835,7 @@ class BootstrapRepos:
self._print(str(e), LOG_ERROR, exc_info=True)
return None
try:
shutil.move(zip_file.as_posix(), self.data_dir.as_posix())
shutil.move(zip_file.as_posix(), destination_dir.as_posix())
except shutil.Error as e:
self._print(str(e), LOG_ERROR, exc_info=True)
return None
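For orientation, a small hedged sketch of where a version zip now lands under the new `major.minor` layout (zip name hypothetical; `user_data_dir` is the appdirs helper already used for the deprecated root lookup above):

```python
from pathlib import Path
from appdirs import user_data_dir

zip_name = "openpype-v3.13.1.zip"  # hypothetical zip file name
major, minor = 3, 13               # parsed from the name via version_in_str

data_dir = Path(user_data_dir("openpype", "pypeclub"))
destination_dir = data_dir / f"{major}.{minor}"  # new major.minor sub-folder
destination = destination_dir / zip_name
# Zips used to be moved directly into data_dir; lookups now search both the
# deprecated root and the major.minor sub-folder.
print(destination)
```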
@@ -995,6 +1048,16 @@ class BootstrapRepos:
@staticmethod
def _validate_dir(path: Path) -> tuple:
"""Validate checksums in a given path.
Args:
path (Path): path to folder to validate.
Returns:
tuple(bool, str): returns status and reason as a bool
and str in a tuple.
"""
checksums_file = Path(path / "checksums")
if not checksums_file.exists():
# FIXME: This should be set to False sometimes in the future
@@ -1076,7 +1139,20 @@ class BootstrapRepos:
sys.path.insert(0, directory.as_posix())
@staticmethod
def find_openpype_version(version, staging):
def find_openpype_version(
version: Union[str, OpenPypeVersion],
staging: bool,
compatible_with: OpenPypeVersion = None
) -> Union[OpenPypeVersion, None]:
"""Find location of specified OpenPype version.
Args:
version (Union[str, OpenPypeVersion]): Version to find.
staging (bool): Filter staging versions.
compatible_with (OpenPypeVersion, optional): Find only
versions compatible with specified one.
"""
if isinstance(version, str):
version = OpenPypeVersion(version=version)
@@ -1085,7 +1161,8 @@ class BootstrapRepos:
return installed_version
local_versions = OpenPypeVersion.get_local_versions(
staging=staging, production=not staging
staging=staging, production=not staging,
compatible_with=compatible_with
)
zip_version = None
for local_version in local_versions:
@@ -1099,7 +1176,8 @@ class BootstrapRepos:
return zip_version
remote_versions = OpenPypeVersion.get_remote_versions(
staging=staging, production=not staging
staging=staging, production=not staging,
compatible_with=compatible_with
)
for remote_version in remote_versions:
if remote_version == version:
@@ -1107,13 +1185,14 @@ class BootstrapRepos:
return None
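A hedged usage sketch of the extended lookup (version string hypothetical): resolve a specific version, but only when it is build-compatible with the installed one.

```python
from igniter.bootstrap_repos import BootstrapRepos, OpenPypeVersion

installed = OpenPypeVersion.get_installed_version()
found = BootstrapRepos.find_openpype_version(
    "3.13.1", staging=False, compatible_with=installed
)
if found is None:
    # the version does not exist, or its major.minor differs from the build
    ...
```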
@staticmethod
def find_latest_openpype_version(staging):
def find_latest_openpype_version(
staging, compatible_with: OpenPypeVersion = None):
installed_version = OpenPypeVersion.get_installed_version()
local_versions = OpenPypeVersion.get_local_versions(
staging=staging
staging=staging, compatible_with=compatible_with
)
remote_versions = OpenPypeVersion.get_remote_versions(
staging=staging
staging=staging, compatible_with=compatible_with
)
all_versions = local_versions + remote_versions
if not staging:
@@ -1138,7 +1217,9 @@ class BootstrapRepos:
self,
openpype_path: Union[Path, str] = None,
staging: bool = False,
include_zips: bool = False) -> Union[List[OpenPypeVersion], None]:
include_zips: bool = False,
compatible_with: OpenPypeVersion = None
) -> Union[List[OpenPypeVersion], None]:
"""Get ordered dict of detected OpenPype version.
Resolution order for OpenPype is following:
@@ -1154,6 +1235,8 @@ class BootstrapRepos:
otherwise.
include_zips (bool, optional): If set True it will try to find
OpenPype in zip files in given directory.
compatible_with (OpenPypeVersion, optional): Find only those
versions compatible with the one specified.
Returns:
dict of Path: Dictionary of detected OpenPype version.
@@ -1172,30 +1255,56 @@ class BootstrapRepos:
("Finding OpenPype in non-filesystem locations is"
" not implemented yet."))
dir_to_search = self.data_dir
user_versions = self.get_openpype_versions(self.data_dir, staging)
# if we have openpype_path specified, search only there.
version_dir = ""
if compatible_with:
version_dir = f"{compatible_with.major}.{compatible_with.minor}"
# if checks below for OPENPYPE_PATH and registry fail, use data_dir
# DEPRECATED: lookup in root of this folder is deprecated in favour
# of major.minor sub-folders.
dirs_to_search = [
self.data_dir
]
if compatible_with:
dirs_to_search.append(self.data_dir / version_dir)
if openpype_path:
dir_to_search = openpype_path
dirs_to_search = [openpype_path]
if compatible_with:
dirs_to_search.append(openpype_path / version_dir)
else:
if os.getenv("OPENPYPE_PATH"):
if Path(os.getenv("OPENPYPE_PATH")).exists():
dir_to_search = Path(os.getenv("OPENPYPE_PATH"))
# first try OPENPYPE_PATH and if that is not available,
# try registry.
if os.getenv("OPENPYPE_PATH") \
and Path(os.getenv("OPENPYPE_PATH")).exists():
dirs_to_search = [Path(os.getenv("OPENPYPE_PATH"))]
if compatible_with:
dirs_to_search.append(
Path(os.getenv("OPENPYPE_PATH")) / version_dir)
else:
try:
registry_dir = Path(
str(self.registry.get_item("openPypePath")))
if registry_dir.exists():
dir_to_search = registry_dir
dirs_to_search = [registry_dir]
if compatible_with:
dirs_to_search.append(registry_dir / version_dir)
except ValueError:
# nothing found in registry, we'll use data dir
pass
openpype_versions = self.get_openpype_versions(dir_to_search, staging)
openpype_versions += user_versions
openpype_versions = []
for dir_to_search in dirs_to_search:
try:
openpype_versions += self.get_openpype_versions(
dir_to_search, staging, compatible_with=compatible_with)
except ValueError:
# location is invalid, skip it
pass
# remove zip file version if needed.
if not include_zips:
openpype_versions = [
v for v in openpype_versions if v.path.suffix != ".zip"
@@ -1308,9 +1417,8 @@ class BootstrapRepos:
raise ValueError(
f"version {version} is not associated with any file")
destination = self.data_dir / version.path.stem
if destination.exists():
assert destination.is_dir()
destination = self.data_dir / f"{version.major}.{version.minor}" / version.path.stem # noqa
if destination.exists() and destination.is_dir():
try:
shutil.rmtree(destination)
except OSError as e:
@@ -1379,7 +1487,7 @@ class BootstrapRepos:
else:
dir_name = openpype_version.path.stem
destination = self.data_dir / dir_name
destination = self.data_dir / f"{openpype_version.major}.{openpype_version.minor}" / dir_name # noqa
# test if destination directory already exists, if so let's delete it.
if destination.exists() and force:
@@ -1557,14 +1665,18 @@ class BootstrapRepos:
return False
return True
def get_openpype_versions(self,
openpype_dir: Path,
staging: bool = False) -> list:
def get_openpype_versions(
self,
openpype_dir: Path,
staging: bool = False,
compatible_with: OpenPypeVersion = None) -> list:
"""Get all detected OpenPype versions in directory.
Args:
openpype_dir (Path): Directory to scan.
staging (bool, optional): Find staging versions if True.
compatible_with (OpenPypeVersion, optional): Get only versions
compatible with the one specified.
Returns:
list of OpenPypeVersion
@@ -1574,7 +1686,7 @@ class BootstrapRepos:
"""
if not openpype_dir.exists() and not openpype_dir.is_dir():
raise ValueError("specified directory is invalid")
raise ValueError(f"specified directory {openpype_dir} is invalid")
_openpype_versions = []
# iterate over directory in first level and find all that might
@@ -1599,6 +1711,10 @@ class BootstrapRepos:
):
continue
if compatible_with and \
not detected_version.is_compatible(compatible_with):
continue
detected_version.path = item
if staging and detected_version.is_staging():
_openpype_versions.append(detected_version)

View file

@@ -21,6 +21,11 @@ class OpenPypeVersionNotFound(Exception):
pass
class OpenPypeVersionIncompatible(Exception):
"""OpenPype version is not compatible with the installed one (build)."""
pass
def should_add_certificate_path_to_mongo_url(mongo_url):
"""Check if should add ca certificate to mongo url.

View file

@@ -443,3 +443,26 @@ def interactive():
__version__, sys.version, sys.platform
)
code.interact(banner)
@main.command()
@click.option("--build", help="Print only build version",
is_flag=True, default=False)
def version(build):
"""Print OpenPype version."""
from openpype.version import __version__
from igniter.bootstrap_repos import BootstrapRepos, OpenPypeVersion
from pathlib import Path
import os
if getattr(sys, 'frozen', False):
local_version = BootstrapRepos.get_version(
Path(os.getenv("OPENPYPE_ROOT")))
else:
local_version = OpenPypeVersion.get_installed_version_str()
if build:
print(local_version)
return
print(f"{__version__} (booted: {local_version})")

View file

@@ -6,38 +6,12 @@ that has project name as a context (e.g. on 'ProjectEntity'?).
+ We will need more specific functions doing very specific queries really fast.
"""
import os
import collections
import six
from bson.objectid import ObjectId
from .mongo import OpenPypeMongoConnection
def _get_project_database():
db_name = os.environ.get("AVALON_DB") or "avalon"
return OpenPypeMongoConnection.get_mongo_client()[db_name]
def get_project_connection(project_name):
"""Direct access to mongo collection.
We're trying to avoid using direct access to mongo. This should be used
only for Create, Update and Remove operations until api calls for that
are implemented.
Args:
project_name(str): Project name for which collection should be
returned.
Returns:
pymongo.Collection: Collection related to passed project.
"""
if not project_name:
raise ValueError("Invalid project name {}".format(str(project_name)))
return _get_project_database()[project_name]
from .mongo import get_project_database, get_project_connection
def _prepare_fields(fields, required_fields=None):
@@ -72,7 +46,7 @@ def _convert_ids(in_ids):
def get_projects(active=True, inactive=False, fields=None):
mongodb = _get_project_database()
mongodb = get_project_database()
for project_name in mongodb.collection_names():
if project_name in ("system.indexes",):
continue
@@ -819,7 +793,7 @@ def get_output_link_versions(project_name, version_id, fields=None):
# Does make sense to look for hero versions?
query_filter = {
"type": "version",
"data.inputLinks.input": version_id
"data.inputLinks.id": version_id
}
return conn.find(query_filter, _prepare_fields(fields))
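For context, a hedged sketch of how the corrected filter is used with pymongo (project name and id are hypothetical; `get_project_connection` is the helper moved into `openpype.client.mongo` below):

```python
from bson.objectid import ObjectId
from openpype.client.mongo import get_project_connection

conn = get_project_connection("my_project")
version_id = ObjectId("633c6b7c2c0a9d9f6a1b2c3d")  # hypothetical version id

query_filter = {
    "type": "version",
    # output links are now matched on the link's 'id' field, not 'input'
    "data.inputLinks.id": version_id,
}
linked_versions = list(conn.find(query_filter, {"name": True}))
```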

View file

@@ -208,3 +208,28 @@ class OpenPypeMongoConnection:
mongo_url, time.time() - t1
))
return mongo_client
def get_project_database():
db_name = os.environ.get("AVALON_DB") or "avalon"
return OpenPypeMongoConnection.get_mongo_client()[db_name]
def get_project_connection(project_name):
"""Direct access to mongo collection.
We're trying to avoid using direct access to mongo. This should be used
only for Create, Update and Remove operations until api calls for that
are implemented.
Args:
project_name(str): Project name for which collection should be
returned.
Returns:
pymongo.Collection: Collection related to passed project.
"""
if not project_name:
raise ValueError("Invalid project name {}".format(str(project_name)))
return get_project_database()[project_name]
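A hedged usage sketch of the relocated helpers (module path assumed from the relative import above; project name hypothetical):

```python
from openpype.client.mongo import get_project_database, get_project_connection

db = get_project_database()  # database named by AVALON_DB, default "avalon"
conn = get_project_connection("my_project")  # pymongo.Collection per project
project_doc = conn.find_one({"type": "project"})
```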

View file

@@ -0,0 +1,634 @@
import uuid
import copy
import collections
from abc import ABCMeta, abstractmethod, abstractproperty
import six
from bson.objectid import ObjectId
from pymongo import DeleteOne, InsertOne, UpdateOne
from .mongo import get_project_connection
REMOVED_VALUE = object()
CURRENT_PROJECT_SCHEMA = "openpype:project-3.0"
CURRENT_PROJECT_CONFIG_SCHEMA = "openpype:config-2.0"
CURRENT_ASSET_DOC_SCHEMA = "openpype:asset-3.0"
CURRENT_SUBSET_SCHEMA = "openpype:subset-3.0"
CURRENT_VERSION_SCHEMA = "openpype:version-3.0"
CURRENT_REPRESENTATION_SCHEMA = "openpype:representation-2.0"
CURRENT_WORKFILE_INFO_SCHEMA = "openpype:workfile-1.0"
def _create_or_convert_to_mongo_id(mongo_id):
if mongo_id is None:
return ObjectId()
return ObjectId(mongo_id)
def new_project_document(
project_name, project_code, config, data=None, entity_id=None
):
"""Create skeleton data of project document.
Args:
project_name (str): Name of project. Used as identifier of a project.
project_code (str): Shorter version of project name without spaces and
special characters (in most cases). Should also be considered
a unique name across projects.
config (Dict[str, Any]): Project config consisting of roots, templates,
applications and other project Anatomy related data.
data (Dict[str, Any]): Project data with information about its
attributes (e.g. 'fps' etc.) or integration specific keys.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of project document.
"""
if data is None:
data = {}
data["code"] = project_code
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"name": project_name,
"type": CURRENT_PROJECT_SCHEMA,
"entity_data": data,
"config": config
}
def new_asset_document(
name, project_id, parent_id, parents, data=None, entity_id=None
):
"""Create skeleton data of asset document.
Args:
name (str): Is considered as unique identifier of asset in project.
project_id (Union[str, ObjectId]): Id of project document.
parent_id (Union[str, ObjectId]): Id of parent asset.
parents (List[str]): List of parent assets names.
data (Dict[str, Any]): Asset document data. Empty dictionary is used
if not passed. Value of 'parent_id' is used to fill 'visualParent'.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of asset document.
"""
if data is None:
data = {}
if parent_id is not None:
parent_id = ObjectId(parent_id)
data["visualParent"] = parent_id
data["parents"] = parents
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"type": "asset",
"name": name,
"parent": ObjectId(project_id),
"data": data,
"schema": CURRENT_ASSET_DOC_SCHEMA
}
def new_subset_document(name, family, asset_id, data=None, entity_id=None):
"""Create skeleton data of subset document.
Args:
name (str): Is considered as unique identifier of subset under asset.
family (str): Subset's family.
asset_id (Union[str, ObjectId]): Id of parent asset.
data (Dict[str, Any]): Subset document data. Empty dictionary is used
if not passed. Value of 'family' is used to fill 'family'.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of subset document.
"""
if data is None:
data = {}
data["family"] = family
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"schema": CURRENT_SUBSET_SCHEMA,
"type": "subset",
"name": name,
"data": data,
"parent": asset_id
}
def new_version_doc(version, subset_id, data=None, entity_id=None):
"""Create skeleton data of version document.
Args:
version (int): Is considered as unique identifier of version
under subset.
subset_id (Union[str, ObjectId]): Id of parent subset.
data (Dict[str, Any]): Version document data.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of version document.
"""
if data is None:
data = {}
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"schema": CURRENT_VERSION_SCHEMA,
"type": "version",
"name": int(version),
"parent": subset_id,
"data": data
}
def new_representation_doc(
name, version_id, context, data=None, entity_id=None
):
"""Create skeleton data of asset document.
Args:
version (int): Is considered as unique identifier of version
under subset.
version_id (Union[str, ObjectId]): Id of parent version.
context (Dict[str, Any]): Representation context used for fill template
of to query.
data (Dict[str, Any]): Representation document data.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of representation document.
"""
if data is None:
data = {}
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"schema": CURRENT_REPRESENTATION_SCHEMA,
"type": "representation",
"parent": version_id,
"name": name,
"data": data,
# Imprint shortcut to context for performance reasons.
"context": context
}
def new_workfile_info_doc(
filename, asset_id, task_name, files, data=None, entity_id=None
):
"""Create skeleton data of workfile info document.
Workfile document is at this moment used primarily for artist notes.
Args:
filename (str): Filename of workfile.
asset_id (Union[str, ObjectId]): Id of asset under which workfile lives.
task_name (str): Task under which the workfile was created.
files (List[str]): List of rootless filepaths related to workfile.
data (Dict[str, Any]): Additional metadata.
entity_id (Union[str, ObjectId]): Predefined id of document. New id is
created if not passed.
Returns:
Dict[str, Any]: Skeleton of workfile info document.
"""
if not data:
data = {}
return {
"_id": _create_or_convert_to_mongo_id(entity_id),
"type": "workfile",
"parent": ObjectId(asset_id),
"task_name": task_name,
"filename": filename,
"data": data,
"files": files
}
def _prepare_update_data(old_doc, new_doc, replace):
changes = {}
for key, value in new_doc.items():
if key not in old_doc or value != old_doc[key]:
changes[key] = value
if replace:
for key in old_doc.keys():
if key not in new_doc:
changes[key] = REMOVED_VALUE
return changes
def prepare_subset_update_data(old_doc, new_doc, replace=True):
"""Compare two subset documents and prepare update data.
Based on compared values this will create update data for 'UpdateOperation'.
Empty output means that documents are identical.
Returns:
Dict[str, Any]: Changes between old and new document.
"""
return _prepare_update_data(old_doc, new_doc, replace)
def prepare_version_update_data(old_doc, new_doc, replace=True):
"""Compare two version documents and prepare update data.
Based on compared values this will create update data for 'UpdateOperation'.
Empty output means that documents are identical.
Returns:
Dict[str, Any]: Changes between old and new document.
"""
return _prepare_update_data(old_doc, new_doc, replace)
def prepare_representation_update_data(old_doc, new_doc, replace=True):
"""Compare two representation documents and prepare update data.
Based on compared values this will create update data for 'UpdateOperation'.
Empty output means that documents are identical.
Returns:
Dict[str, Any]: Changes between old and new document.
"""
return _prepare_update_data(old_doc, new_doc, replace)
def prepare_workfile_info_update_data(old_doc, new_doc, replace=True):
"""Compare two workfile info documents and prepare update data.
Based on compared values this will create update data for 'UpdateOperation'.
Empty output means that documents are identical.
Returns:
Dict[str, Any]: Changes between old and new document.
"""
return _prepare_update_data(old_doc, new_doc, replace)
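A small hedged sketch of the change set these helpers produce (module path assumed to be `openpype.client.operations`). Note that only the first level is compared, so a changed nested dict is replaced wholesale, and with `replace=True` keys missing from the new document are marked with `REMOVED_VALUE`:

```python
from openpype.client.operations import (
    REMOVED_VALUE, prepare_subset_update_data
)

old_doc = {"name": "modelMain", "data": {"family": "model"}, "extra": 1}
new_doc = {"name": "modelMain", "data": {"family": "pointcache"}}

changes = prepare_subset_update_data(old_doc, new_doc)
# {"data": {"family": "pointcache"}, "extra": REMOVED_VALUE}
assert changes["extra"] is REMOVED_VALUE
```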
@six.add_metaclass(ABCMeta)
class AbstractOperation(object):
"""Base operation class.
Operation represents a call into the database. The call can create,
change or remove data.
Args:
project_name (str): On which project operation will happen.
entity_type (str): Type of entity on which change happens.
e.g. 'asset', 'representation' etc.
"""
def __init__(self, project_name, entity_type):
self._project_name = project_name
self._entity_type = entity_type
self._id = str(uuid.uuid4())
@property
def project_name(self):
return self._project_name
@property
def id(self):
"""Identifier of operation."""
return self._id
@property
def entity_type(self):
return self._entity_type
@abstractproperty
def operation_name(self):
"""Stringified type of operation."""
pass
@abstractmethod
def to_mongo_operation(self):
"""Convert operation to Mongo batch operation."""
pass
def to_data(self):
"""Convert opration to data that can be converted to json or others.
Warning:
Current state returns ObjectId objects which cannot be parsed by
json.
Returns:
Dict[str, Any]: Description of operation.
"""
return {
"id": self._id,
"entity_type": self.entity_type,
"project_name": self.project_name,
"operation": self.operation_name
}
class CreateOperation(AbstractOperation):
"""Opeartion to create an entity.
Args:
project_name (str): On which project operation will happen.
entity_type (str): Type of entity on which change happens.
e.g. 'asset', 'representation' etc.
data (Dict[str, Any]): Data of entity that will be created.
"""
operation_name = "create"
def __init__(self, project_name, entity_type, data):
super(CreateOperation, self).__init__(project_name, entity_type)
if not data:
data = {}
else:
data = copy.deepcopy(dict(data))
if "_id" not in data:
data["_id"] = ObjectId()
else:
data["_id"] = ObjectId(data["_id"])
self._entity_id = data["_id"]
self._data = data
def __setitem__(self, key, value):
self.set_value(key, value)
def __getitem__(self, key):
return self.data[key]
def set_value(self, key, value):
self.data[key] = value
def get(self, key, *args, **kwargs):
return self.data.get(key, *args, **kwargs)
@property
def entity_id(self):
return self._entity_id
@property
def data(self):
return self._data
def to_mongo_operation(self):
return InsertOne(copy.deepcopy(self._data))
def to_data(self):
output = super(CreateOperation, self).to_data()
output["data"] = copy.deepcopy(self.data)
return output
class UpdateOperation(AbstractOperation):
"""Opeartion to update an entity.
Args:
project_name (str): On which project operation will happen.
entity_type (str): Type of entity on which change happens.
e.g. 'asset', 'representation' etc.
entity_id (Union[str, ObjectId]): Identifier of an entity.
update_data (Dict[str, Any]): Key -> value changes that will be set in
database. If value is set to 'REMOVED_VALUE' the key will be
removed. Only first level of dictionary is checked (on purpose).
"""
operation_name = "update"
def __init__(self, project_name, entity_type, entity_id, update_data):
super(UpdateOperation, self).__init__(project_name, entity_type)
self._entity_id = ObjectId(entity_id)
self._update_data = update_data
@property
def entity_id(self):
return self._entity_id
@property
def update_data(self):
return self._update_data
def to_mongo_operation(self):
unset_data = {}
set_data = {}
for key, value in self._update_data.items():
if value is REMOVED_VALUE:
unset_data[key] = value
else:
set_data[key] = value
op_data = {}
if unset_data:
op_data["$unset"] = unset_data
if set_data:
op_data["$set"] = set_data
if not op_data:
return None
return UpdateOne(
{"_id": self.entity_id},
op_data
)
def to_data(self):
changes = {}
for key, value in self._update_data.items():
if value is REMOVED_VALUE:
value = None
changes[key] = value
output = super(UpdateOperation, self).to_data()
output.update({
"entity_id": self.entity_id,
"changes": changes
})
return output
class DeleteOperation(AbstractOperation):
"""Opeartion to delete an entity.
Args:
project_name (str): On which project operation will happen.
entity_type (str): Type of entity on which change happens.
e.g. 'asset', 'representation' etc.
entity_id (Union[str, ObjectId]): Entity id that will be removed.
"""
operation_name = "delete"
def __init__(self, project_name, entity_type, entity_id):
super(DeleteOperation, self).__init__(project_name, entity_type)
self._entity_id = ObjectId(entity_id)
@property
def entity_id(self):
return self._entity_id
def to_mongo_operation(self):
return DeleteOne({"_id": self.entity_id})
def to_data(self):
output = super(DeleteOperation, self).to_data()
output["entity_id"] = self.entity_id
return output
class OperationsSession(object):
"""Session storing operations that should happen in an order.
At this moment does not handle anything special can be sonsidered as
stupid list of operations that will happen after each other. If creation
of same entity is there multiple times it's handled in any way and document
values are not validated.
All operations must be related to single project.
Args:
project_name (str): Project name to which are operations related.
"""
def __init__(self):
self._operations = []
def add(self, operation):
"""Add operation to be processed.
Args:
operation (AbstractOperation): Operation that should be processed.
"""
if not isinstance(
operation,
(CreateOperation, UpdateOperation, DeleteOperation)
):
raise TypeError("Expected Operation object got {}".format(
str(type(operation))
))
self._operations.append(operation)
def append(self, operation):
"""Add operation to be processed.
Args:
operation (AbstractOperation): Operation that should be processed.
"""
self.add(operation)
def extend(self, operations):
"""Add operations to be processed.
Args:
operations (List[AbstractOperation]): Operations that should be
processed.
"""
for operation in operations:
self.add(operation)
def remove(self, operation):
"""Remove operation."""
self._operations.remove(operation)
def clear(self):
"""Clear all registered operations."""
self._operations = []
def to_data(self):
return [
operation.to_data()
for operation in self._operations
]
def commit(self):
"""Commit session operations."""
operations, self._operations = self._operations, []
if not operations:
return
operations_by_project = collections.defaultdict(list)
for operation in operations:
operations_by_project[operation.project_name].append(operation)
for project_name, operations in operations_by_project.items():
bulk_writes = []
for operation in operations:
mongo_op = operation.to_mongo_operation()
if mongo_op is not None:
bulk_writes.append(mongo_op)
if bulk_writes:
collection = get_project_connection(project_name)
collection.bulk_write(bulk_writes)
def create_entity(self, project_name, entity_type, data):
"""Fast access to 'CreateOperation'.
Returns:
CreateOperation: Object of create operation.
"""
operation = CreateOperation(project_name, entity_type, data)
self.add(operation)
return operation
def update_entity(self, project_name, entity_type, entity_id, update_data):
"""Fast access to 'UpdateOperation'.
Returns:
UpdateOperation: Object of update operation.
"""
operation = UpdateOperation(
project_name, entity_type, entity_id, update_data
)
self.add(operation)
return operation
def delete_entity(self, project_name, entity_type, entity_id):
"""Fast access to 'DeleteOperation'.
Returns:
DeleteOperation: Object of delete operation.
"""
operation = DeleteOperation(project_name, entity_type, entity_id)
self.add(operation)
return operation
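Putting the pieces together, a hedged end-to-end sketch of the session API introduced here (module path assumed to be `openpype.client.operations`; project name and id are hypothetical):

```python
from openpype.client.operations import (
    OperationsSession, new_subset_document, REMOVED_VALUE
)

session = OperationsSession()

subset_doc = new_subset_document(
    "modelMain", "model", asset_id="633c6b7c2c0a9d9f6a1b2c3d"
)
create_op = session.create_entity("my_project", "subset", subset_doc)

session.update_entity(
    "my_project", "subset", create_op.entity_id,
    {"data.family": "pointcache", "obsolete_key": REMOVED_VALUE}
)

session.commit()  # one bulk_write per project, in insertion order
```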

View file

@@ -1,11 +1,11 @@
import os
import shutil
from openpype.lib import (
PreLaunchHook,
get_custom_workfile_template_by_context,
from openpype.lib import PreLaunchHook
from openpype.settings import get_project_settings
from openpype.pipeline.workfile import (
get_custom_workfile_template,
get_custom_workfile_template_by_string_context
)
from openpype.settings import get_project_settings
class CopyTemplateWorkfile(PreLaunchHook):
@@ -54,41 +54,22 @@ class CopyTemplateWorkfile(PreLaunchHook):
project_name = self.data["project_name"]
asset_name = self.data["asset_name"]
task_name = self.data["task_name"]
host_name = self.application.host_name
project_settings = get_project_settings(project_name)
host_settings = project_settings[self.application.host_name]
workfile_builder_settings = host_settings.get("workfile_builder")
if not workfile_builder_settings:
# TODO remove warning when deprecated
self.log.warning((
"Seems like old version of settings is used."
" Can't access custom templates in host \"{}\"."
).format(self.application.full_label))
return
if not workfile_builder_settings["create_first_version"]:
self.log.info((
"Project \"{}\" has turned off to create first workfile for"
" application \"{}\""
).format(project_name, self.application.full_label))
return
# Backwards compatibility
template_profiles = workfile_builder_settings.get("custom_templates")
if not template_profiles:
self.log.info(
"Custom templates are not filled. Skipping template copy."
)
return
project_doc = self.data.get("project_doc")
asset_doc = self.data.get("asset_doc")
anatomy = self.data.get("anatomy")
if project_doc and asset_doc:
self.log.debug("Started filtering of custom template paths.")
template_path = get_custom_workfile_template_by_context(
template_profiles, project_doc, asset_doc, task_name, anatomy
template_path = get_custom_workfile_template(
project_doc,
asset_doc,
task_name,
host_name,
anatomy,
project_settings
)
else:
@@ -96,10 +77,13 @@ class CopyTemplateWorkfile(PreLaunchHook):
"Global data collection probably did not execute."
" Using backup solution."
))
dbcon = self.data.get("dbcon")
template_path = get_custom_workfile_template_by_string_context(
template_profiles, project_name, asset_name, task_name,
dbcon, anatomy
project_name,
asset_name,
task_name,
host_name,
anatomy,
project_settings
)
if not template_path:

View file

@@ -1,3 +1,4 @@
from openpype.client import get_project, get_asset_by_name
from openpype.lib import (
PreLaunchHook,
EnvironmentPrepData,
@@ -69,7 +70,7 @@ class GlobalHostDataHook(PreLaunchHook):
self.data["dbcon"] = dbcon
# Project document
project_doc = dbcon.find_one({"type": "project"})
project_doc = get_project(project_name)
self.data["project_doc"] = project_doc
asset_name = self.data.get("asset_name")
@@ -79,8 +80,5 @@
)
return
asset_doc = dbcon.find_one({
"type": "asset",
"name": asset_name
})
asset_doc = get_asset_by_name(project_name, asset_name)
self.data["asset_doc"] = asset_doc

View file

@@ -102,7 +102,6 @@ class CollectAERender(publish.AbstractCollectRender):
attachTo=False,
setMembers='',
publish=True,
renderer='aerender',
name=subset_name,
resolutionWidth=render_q.width,
resolutionHeight=render_q.height,
@@ -113,7 +112,6 @@
frameStart=frame_start,
frameEnd=frame_end,
frameStep=1,
toBeRenderedOn='deadline',
fps=fps,
app_version=app_version,
publish_attributes=inst.data.get("publish_attributes", {}),
@@ -138,6 +136,9 @@
fam = "render.farm"
if fam not in instance.families:
instance.families.append(fam)
instance.toBeRenderedOn = "deadline"
instance.renderer = "aerender"
instance.farm = True # to skip integrate
instances.append(instance)
instances_to_remove.append(inst)

View file

@@ -220,12 +220,9 @@ class LaunchQtApp(bpy.types.Operator):
self._app.store_window(self.bl_idname, window)
self._window = window
if not isinstance(
self._window,
(QtWidgets.QMainWindow, QtWidgets.QDialog, ModuleType)
):
if not isinstance(self._window, (QtWidgets.QWidget, ModuleType)):
raise AttributeError(
"`window` should be a `QDialog or module`. Got: {}".format(
"`window` should be a `QWidget or module`. Got: {}".format(
str(type(window))
)
)
@@ -249,9 +246,9 @@ class LaunchQtApp(bpy.types.Operator):
self._window.setWindowFlags(on_top_flags)
self._window.show()
if on_top_flags != origin_flags:
self._window.setWindowFlags(origin_flags)
self._window.show()
# if on_top_flags != origin_flags:
# self._window.setWindowFlags(origin_flags)
# self._window.show()
return {'FINISHED'}

View file

@@ -180,7 +180,7 @@ class ExtractLayout(openpype.api.Extractor):
"rotation": {
"x": asset.rotation_euler.x,
"y": asset.rotation_euler.y,
"z": asset.rotation_euler.z,
"z": asset.rotation_euler.z
},
"scale": {
"x": asset.scale.x,
@@ -189,6 +189,18 @@
}
}
json_element["transform_matrix"] = []
for row in list(asset.matrix_world.transposed()):
json_element["transform_matrix"].append(list(row))
json_element["basis"] = [
[1, 0, 0, 0],
[0, -1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, 1]
]
# Extract the animation as well
if family == "rig":
f, n = self._export_animation(

View file

@@ -3,9 +3,9 @@ import copy
from collections import OrderedDict
from pprint import pformat
import pyblish
from openpype.lib import get_workdir
import openpype.hosts.flame.api as opfapi
import openpype.pipeline as op_pipeline
from openpype.pipeline.workfile import get_workdir
class IntegrateBatchGroup(pyblish.api.InstancePlugin):
@@ -324,7 +324,13 @@ class IntegrateBatchGroup(pyblish.api.InstancePlugin):
project_doc = instance.data["projectEntity"]
asset_entity = instance.data["assetEntity"]
anatomy = instance.context.data["anatomy"]
project_settings = instance.context.data["project_settings"]
return get_workdir(
project_doc, asset_entity, task_data["name"], "flame", anatomy
project_doc,
asset_entity,
task_data["name"],
"flame",
anatomy,
project_settings=project_settings
)

View file

@@ -3,9 +3,7 @@ import re
import sys
import logging
# Pipeline imports
from openpype.client import (
get_project,
get_asset_by_name,
get_versions,
)
@@ -17,13 +15,10 @@ from openpype.pipeline import (
from openpype.lib import version_up
from openpype.hosts.fusion import api
from openpype.hosts.fusion.api import lib
from openpype.lib.avalon_context import get_workdir_from_session
from openpype.pipeline.context_tools import get_workdir_from_session
log = logging.getLogger("Update Slap Comp")
self = sys.modules[__name__]
self._project = None
def _format_version_folder(folder):
"""Format a version folder based on the filepath
@@ -212,9 +207,6 @@ def switch(asset_name, filepath=None, new=True):
asset = get_asset_by_name(project_name, asset_name)
assert asset, "Could not find '%s' in the database" % asset_name
# Get current project
self._project = get_project(project_name)
# Go to comp
if not filepath:
current_comp = api.get_current_comp()

View file

@@ -14,7 +14,7 @@ from openpype.pipeline import (
legacy_io,
)
from openpype.hosts.fusion import api
from openpype.lib.avalon_context import get_workdir_from_session
from openpype.pipeline.context_tools import get_workdir_from_session
log = logging.getLogger("Fusion Switch Shot")

View file

@@ -82,6 +82,14 @@ IMAGE_PREFIXES = {
RENDERMAN_IMAGE_DIR = "maya/<scene>/<layer>"
def has_tokens(string, tokens):
"""Return whether any of tokens is in input string (case-insensitive)"""
pattern = "({})".format("|".join(re.escape(token) for token in tokens))
match = re.search(pattern, string, re.IGNORECASE)
return bool(match)
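A quick check of the new helper's behavior (prefix strings hypothetical; relies on the function defined just above):

```python
print(has_tokens("maya/<Scene>/<RenderLayer>", ["<renderlayer>", "<layer>"]))
# True - matching is case-insensitive
print(has_tokens("maya/<Scene>/beauty", ["<renderlayer>", "<layer>"]))
# False
```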
@attr.s
class LayerMetadata(object):
"""Data class for Render Layer metadata."""
@@ -99,6 +107,12 @@ class LayerMetadata(object):
# Render Products
products = attr.ib(init=False, default=attr.Factory(list))
# The AOV separator token. Note that not all renderers define an explicit
# render separator but allow the AOV/RenderPass token to be put anywhere in
# the file path prefix. For those renderers we'll fall back to whatever
# is between the last occurrences of <RenderLayer> and <RenderPass> tokens.
aov_separator = attr.ib(default="_")
@attr.s
class RenderProduct(object):
@@ -183,7 +197,6 @@ class ARenderProducts:
self.layer = layer
self.render_instance = render_instance
self.multipart = False
self.aov_separator = render_instance.data.get("aovSeparator", "_")
# Initialize
self.layer_data = self._get_layer_data()
@@ -296,6 +309,42 @@ class ARenderProducts:
return lib.get_attr_in_layer(plug, layer=self.layer)
@staticmethod
def extract_separator(file_prefix):
"""Extract AOV separator character from the prefix.
Default behavior extracts the part between
last occurrences of <RenderLayer> and <RenderPass>
Todo:
This code also triggers for V-Ray, which overrides the separator
explicitly, so this code may incorrectly debug log that it couldn't
extract the AOV separator even though RenderProductsVray does set it.
Args:
file_prefix (str): File prefix with tokens.
Returns:
str or None: prefix character if it can be extracted.
"""
layer_tokens = ["<renderlayer>", "<layer>"]
aov_tokens = ["<aov>", "<renderpass>"]
def match_last(tokens, text):
"""regex match the last occurence from a list of tokens"""
pattern = "(?:.*)({})".format("|".join(tokens))
return re.search(pattern, text, re.IGNORECASE)
layer_match = match_last(layer_tokens, file_prefix)
aov_match = match_last(aov_tokens, file_prefix)
separator = None
if layer_match and aov_match:
matches = sorted((layer_match, aov_match),
key=lambda match: match.end(1))
separator = file_prefix[matches[0].end(1):matches[1].start(1)]
return separator
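A worked, self-contained sketch of the fallback extraction above (prefix hypothetical): the separator is whatever sits between the last layer token and the last AOV token.

```python
import re

def match_last(tokens, text):
    pattern = "(?:.*)({})".format("|".join(tokens))
    return re.search(pattern, text, re.IGNORECASE)

file_prefix = "maya/<Scene>/<RenderLayer>/<RenderLayer>_<RenderPass>"
layer_match = match_last(["<renderlayer>", "<layer>"], file_prefix)
aov_match = match_last(["<aov>", "<renderpass>"], file_prefix)
matches = sorted((layer_match, aov_match), key=lambda m: m.end(1))
print(file_prefix[matches[0].end(1):matches[1].start(1)])  # "_"
```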
def _get_layer_data(self):
# type: () -> LayerMetadata
# ______________________________________________
@@ -304,7 +353,7 @@ class ARenderProducts:
# ____________________/
_, scene_basename = os.path.split(cmds.file(q=True, loc=True))
scene_name, _ = os.path.splitext(scene_basename)
kwargs = {}
file_prefix = self.get_renderer_prefix()
# If the Render Layer belongs to a Render Setup layer then the
@@ -319,6 +368,13 @@
# defaultRenderLayer renders as masterLayer
layer_name = "masterLayer"
separator = self.extract_separator(file_prefix)
if separator:
kwargs["aov_separator"] = separator
else:
log.debug("Couldn't extract aov separator from "
"file prefix: {}".format(file_prefix))
# todo: Support Custom Frames sequences 0,5-10,100-120
# Deadline allows submitting renders with a custom frame list
# to support those cases we might want to allow 'custom frames'
@@ -335,7 +391,8 @@
layerName=layer_name,
renderer=self.renderer,
defaultExt=self._get_attr("defaultRenderGlobals.imfPluginKey"),
filePrefix=file_prefix
filePrefix=file_prefix,
**kwargs
)
def _generate_file_sequence(
@ -680,9 +737,17 @@ class RenderProductsVray(ARenderProducts):
"""
prefix = super(RenderProductsVray, self).get_renderer_prefix()
prefix = "{}{}<aov>".format(prefix, self.aov_separator)
aov_separator = self._get_aov_separator()
prefix = "{}{}<aov>".format(prefix, aov_separator)
return prefix
def _get_aov_separator(self):
# type: () -> str
"""Return the V-Ray AOV/Render Elements separator"""
return self._get_attr(
"vraySettings.fileNameRenderElementSeparator"
)
def _get_layer_data(self):
# type: () -> LayerMetadata
"""Override to get vray specific extension."""
@ -694,6 +759,8 @@ class RenderProductsVray(ARenderProducts):
layer_data.defaultExt = default_ext
layer_data.padding = self._get_attr("vraySettings.fileNamePadding")
layer_data.aov_separator = self._get_aov_separator()
return layer_data
def get_render_products(self):
@ -913,8 +980,9 @@ class RenderProductsRedshift(ARenderProducts):
:func:`ARenderProducts.get_renderer_prefix()`
"""
prefix = super(RenderProductsRedshift, self).get_renderer_prefix()
prefix = "{}{}<aov>".format(prefix, self.aov_separator)
file_prefix = super(RenderProductsRedshift, self).get_renderer_prefix()
separator = self.extract_separator(file_prefix)
prefix = "{}{}<aov>".format(file_prefix, separator or "_")
return prefix
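# Illustrative: a prefix "maya/<Scene>/<RenderLayer>/<RenderLayer>" has
# no AOV token, so extract_separator() returns None and the fallback "_"
# produces "maya/<Scene>/<RenderLayer>/<RenderLayer>_<aov>".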
def get_render_products(self):

View file

@ -0,0 +1,241 @@
# -*- coding: utf-8 -*-
"""Class for handling Render Settings."""
from maya import cmds # noqa
import maya.mel as mel
import six
import sys
from openpype.api import (
get_project_settings,
get_current_project_settings
)
from openpype.pipeline import legacy_io
from openpype.pipeline import CreatorError
from openpype.pipeline.context_tools import get_current_project_asset
from openpype.hosts.maya.api.commands import reset_frame_range
class RenderSettings(object):
_image_prefix_nodes = {
'vray': 'vraySettings.fileNamePrefix',
'arnold': 'defaultRenderGlobals.imageFilePrefix',
'renderman': 'defaultRenderGlobals.imageFilePrefix',
'redshift': 'defaultRenderGlobals.imageFilePrefix'
}
_image_prefixes = {
'vray': get_current_project_settings()["maya"]["RenderSettings"]["vray_renderer"]["image_prefix"], # noqa
'arnold': get_current_project_settings()["maya"]["RenderSettings"]["arnold_renderer"]["image_prefix"], # noqa
'renderman': 'maya/<Scene>/<layer>/<layer>{aov_separator}<aov>',
'redshift': get_current_project_settings()["maya"]["RenderSettings"]["redshift_renderer"]["image_prefix"] # noqa
}
_aov_chars = {
"dot": ".",
"dash": "-",
"underscore": "_"
}
@classmethod
def get_image_prefix_attr(cls, renderer):
return cls._image_prefix_nodes[renderer]
def __init__(self, project_settings=None):
self._project_settings = project_settings
if not self._project_settings:
self._project_settings = get_project_settings(
legacy_io.Session["AVALON_PROJECT"]
)
def set_default_renderer_settings(self, renderer=None):
"""Set basic settings based on renderer."""
if not renderer:
renderer = cmds.getAttr(
'defaultRenderGlobals.currentRenderer').lower()
asset_doc = get_current_project_asset()
# project_settings/maya/create/CreateRender/aov_separator
try:
aov_separator = self._aov_chars[(
self._project_settings["maya"]
["RenderSettings"]
["aov_separator"]
)]
except KeyError:
aov_separator = "_"
reset_frame = self._project_settings["maya"]["RenderSettings"]["reset_current_frame"] # noqa
if reset_frame:
start_frame = cmds.getAttr("defaultRenderGlobals.startFrame")
cmds.currentTime(start_frame, edit=True)
if renderer in self._image_prefix_nodes:
prefix = self._image_prefixes[renderer]
prefix = prefix.replace("{aov_separator}", aov_separator)
cmds.setAttr(self._image_prefix_nodes[renderer],
prefix, type="string") # noqa
else:
print("{0} isn't a supported renderer to autoset settings.".format(renderer)) # noqa
# TODO: handle not having res values in the doc
width = asset_doc["data"].get("resolutionWidth")
height = asset_doc["data"].get("resolutionHeight")
if renderer == "arnold":
# set renderer settings for Arnold from project settings
self._set_arnold_settings(width, height)
if renderer == "vray":
self._set_vray_settings(aov_separator, width, height)
if renderer == "redshift":
self._set_redshift_settings(width, height)
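# Minimal usage sketch (assumes an open Maya scene and available
# OpenPype project settings; not part of the original source):
# RenderSettings().set_default_renderer_settings("vray")
# would sync the V-Ray AOV separator, EXR output and scene resolution.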
def _set_arnold_settings(self, width, height):
"""Sets settings for Arnold."""
from mtoa.core import createOptions # noqa
from mtoa.aovs import AOVInterface # noqa
createOptions()
arnold_render_presets = self._project_settings["maya"]["RenderSettings"]["arnold_renderer"] # noqa
# Force resetting settings and AOV list to avoid having to deal with
# AOV checking logic, for now.
# This is a workaround because the standard
# function to revert render settings does not reset the AOV list in MtoA
# Fetch current aovs in case there's any.
current_aovs = AOVInterface().getAOVs()
# Remove fetched AOVs
AOVInterface().removeAOVs(current_aovs)
mel.eval("unifiedRenderGlobalsRevertToDefault")
img_ext = arnold_render_presets["image_format"]
img_prefix = arnold_render_presets["image_prefix"]
aovs = arnold_render_presets["aov_list"]
img_tiled = arnold_render_presets["tiled"]
multi_exr = arnold_render_presets["multilayer_exr"]
additional_options = arnold_render_presets["additional_options"]
for aov in aovs:
AOVInterface('defaultArnoldRenderOptions').addAOV(aov)
cmds.setAttr("defaultResolution.width", width)
cmds.setAttr("defaultResolution.height", height)
self._set_global_output_settings()
cmds.setAttr(
"defaultRenderGlobals.imageFilePrefix", img_prefix, type="string")
cmds.setAttr(
"defaultArnoldDriver.ai_translator", img_ext, type="string")
cmds.setAttr(
"defaultArnoldDriver.exrTiled", img_tiled)
cmds.setAttr(
"defaultArnoldDriver.mergeAOVs", multi_exr)
# Passes additional options in from the schema as a list
# but converts it to a dictionary because ftrack doesn't
# allow fullstops in custom attributes. Then checks for
# type of MtoA attribute passed to adjust the `setAttr`
# command accordingly.
self._additional_attribs_setter(additional_options)
reset_frame_range()
def _set_redshift_settings(self, width, height):
"""Sets settings for Redshift."""
redshift_render_presets = (
self._project_settings
["maya"]
["RenderSettings"]
["redshift_renderer"]
)
additional_options = redshift_render_presets["additional_options"]
ext = redshift_render_presets["image_format"]
img_exts = ["iff", "exr", "tif", "png", "tga", "jpg"]
img_ext = img_exts.index(ext)
self._set_global_output_settings()
cmds.setAttr("redshiftOptions.imageFormat", img_ext)
cmds.setAttr("defaultResolution.width", width)
cmds.setAttr("defaultResolution.height", height)
self._additional_attribs_setter(additional_options)
def _set_vray_settings(self, aov_separator, width, height):
# type: (str, int, int) -> None
"""Sets important settings for Vray."""
settings = cmds.ls(type="VRaySettingsNode")
node = settings[0] if settings else cmds.createNode("VRaySettingsNode")
vray_render_presets = (
self._project_settings
["maya"]
["RenderSettings"]
["vray_renderer"]
)
# Set aov separator
# First we need to explicitly set the UI items in Render Settings
# because that is also what V-Ray updates to when that Render Settings
# UI did initialize before and refreshes again.
MENU = "vrayRenderElementSeparator"
if cmds.optionMenuGrp(MENU, query=True, exists=True):
items = cmds.optionMenuGrp(MENU, query=True, ill=True)
separators = [cmds.menuItem(i, query=True, label=True) for i in items] # noqa: E501
try:
sep_idx = separators.index(aov_separator)
except ValueError as e:
six.reraise(
CreatorError,
CreatorError(
"AOV character {} not in {}".format(
aov_separator, separators)),
sys.exc_info()[2])
cmds.optionMenuGrp(MENU, edit=True, select=sep_idx + 1)
# Set the render element attribute as string. This is also what V-Ray
# sets whenever the `vrayRenderElementSeparator` menu items switch
cmds.setAttr(
"{}.fileNameRenderElementSeparator".format(node),
aov_separator,
type="string"
)
# Set render file format to exr
cmds.setAttr("{}.imageFormatStr".format(node), "exr", type="string")
# animType
cmds.setAttr("{}.animType".format(node), 1)
# resolution
cmds.setAttr("{}.width".format(node), width)
cmds.setAttr("{}.height".format(node), height)
additional_options = vray_render_presets["additional_options"]
self._additional_attribs_setter(additional_options)
@staticmethod
def _set_global_output_settings():
# enable animation
cmds.setAttr("defaultRenderGlobals.outFormatControl", 0)
cmds.setAttr("defaultRenderGlobals.animation", 1)
cmds.setAttr("defaultRenderGlobals.putFrameBeforeExt", 1)
cmds.setAttr("defaultRenderGlobals.extensionPadding", 4)
def _additional_attribs_setter(self, additional_attribs):
print(additional_attribs)
for item in additional_attribs:
attribute, value = item
if (cmds.getAttr(str(attribute), type=True)) == "long":
cmds.setAttr(str(attribute), int(value))
elif (cmds.getAttr(str(attribute), type=True)) == "bool":
cmds.setAttr(str(attribute), int(value)) # noqa
elif (cmds.getAttr(str(attribute), type=True)) == "string":
cmds.setAttr(str(attribute), str(value), type="string")
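# The expected `additional_attribs` shape is a list of [attribute, value]
# pairs, e.g. [["defaultArnoldRenderOptions.AASamples", "3"]] (values are
# hypothetical); the attribute type queried from Maya picks the setAttr form.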

View file

@ -6,11 +6,11 @@ from Qt import QtWidgets, QtGui
import maya.utils
import maya.cmds as cmds
from openpype.api import BuildWorkfile
from openpype.settings import get_project_settings
from openpype.pipeline import legacy_io
from openpype.pipeline.workfile import BuildWorkfile
from openpype.tools.utils import host_tools
from openpype.hosts.maya.api import lib
from openpype.hosts.maya.api import lib, lib_rendersettings
from .lib import get_main_window, IS_HEADLESS
from .commands import reset_frame_range
@ -44,6 +44,7 @@ def install():
parent="MayaWindow"
)
renderer = cmds.getAttr('defaultRenderGlobals.currentRenderer').lower()
# Create context menu
context_label = "{}, {}".format(
legacy_io.Session["AVALON_ASSET"],
@ -98,6 +99,13 @@ def install():
cmds.menuItem(divider=True)
cmds.menuItem(
"Set Render Settings",
command=lambda *args: lib_rendersettings.RenderSettings().set_default_renderer_settings() # noqa
)
cmds.menuItem(divider=True)
cmds.menuItem(
"Work Files...",
command=lambda *args: host_tools.show_workfiles(

View file

@ -208,7 +208,8 @@ class ReferenceLoader(Loader):
file_type = {
"ma": "mayaAscii",
"mb": "mayaBinary",
"abc": "Alembic"
"abc": "Alembic",
"fbx": "FBX"
}.get(representation["name"])
assert file_type, "Unsupported representation: %s" % representation
@ -234,7 +235,7 @@ class ReferenceLoader(Loader):
path = self.prepare_root_value(path,
representation["context"]
["project"]
["code"])
["name"])
content = cmds.file(path,
loadReference=reference_node,
type=file_type,

View file

@ -11,6 +11,7 @@ class CreateAnimation(plugin.Creator):
label = "Animation"
family = "animation"
icon = "male"
write_color_sets = False
def __init__(self, *args, **kwargs):
super(CreateAnimation, self).__init__(*args, **kwargs)
@ -22,7 +23,7 @@ class CreateAnimation(plugin.Creator):
self.data[key] = value
# Write vertex colors with the geometry.
self.data["writeColorSets"] = False
self.data["writeColorSets"] = self.write_color_sets
self.data["writeFaceSets"] = False
# Include only renderable visible shapes.

View file

@ -11,6 +11,7 @@ class CreatePointCache(plugin.Creator):
label = "Point Cache"
family = "pointcache"
icon = "gears"
write_color_sets = False
def __init__(self, *args, **kwargs):
super(CreatePointCache, self).__init__(*args, **kwargs)
@ -18,7 +19,8 @@ class CreatePointCache(plugin.Creator):
# Add animation data
self.data.update(lib.collect_animation_data())
self.data["writeColorSets"] = False # Vertex colors with the geometry.
# Vertex colors with the geometry.
self.data["writeColorSets"] = self.write_color_sets
self.data["writeFaceSets"] = False # Vertex colors with the geometry.
self.data["renderableOnly"] = False # Only renderable visible shapes
self.data["visibleOnly"] = False # only nodes that are visible

View file

@ -1,15 +1,21 @@
# -*- coding: utf-8 -*-
"""Create ``Render`` instance in Maya."""
import os
import json
import os
import appdirs
import requests
from maya import cmds
import maya.app.renderSetup.model.renderSetup as renderSetup
from maya.app.renderSetup.model import renderSetup
from openpype.api import (
get_system_settings,
get_project_settings,
)
from openpype.hosts.maya.api import (
lib,
lib_rendersettings,
plugin
)
from openpype.lib import requests_get
@ -17,6 +23,7 @@ from openpype.api import (
get_system_settings,
get_project_settings)
from openpype.modules import ModulesManager
from openpype.pipeline import legacy_io
from openpype.pipeline import (
CreatorError,
legacy_io,
@ -69,35 +76,6 @@ class CreateRender(plugin.Creator):
_user = None
_password = None
# renderSetup instance
_rs = None
_image_prefix_nodes = {
'mentalray': 'defaultRenderGlobals.imageFilePrefix',
'vray': 'vraySettings.fileNamePrefix',
'arnold': 'defaultRenderGlobals.imageFilePrefix',
'renderman': 'rmanGlobals.imageFileFormat',
'redshift': 'defaultRenderGlobals.imageFilePrefix',
'mayahardware2': 'defaultRenderGlobals.imageFilePrefix',
}
_image_prefixes = {
'mentalray': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>', # noqa
'vray': 'maya/<scene>/<Layer>/<Layer>',
'arnold': 'maya/<Scene>/<RenderLayer>/<RenderLayer>{aov_separator}<RenderPass>', # noqa
# this needs `imageOutputDir`
# (<ws>/renders/maya/<scene>) set separately
'renderman': '<layer>_<aov>.<f4>.<ext>',
'redshift': 'maya/<Scene>/<RenderLayer>/<RenderLayer>', # noqa
'mayahardware2': 'maya/<Scene>/<RenderLayer>/<RenderLayer>', # noqa
}
_aov_chars = {
"dot": ".",
"dash": "-",
"underscore": "_"
}
_project_settings = None
def __init__(self, *args, **kwargs):
@ -109,18 +87,8 @@ class CreateRender(plugin.Creator):
return
self._project_settings = get_project_settings(
legacy_io.Session["AVALON_PROJECT"])
# project_settings/maya/create/CreateRender/aov_separator
try:
self.aov_separator = self._aov_chars[(
self._project_settings["maya"]
["create"]
["CreateRender"]
["aov_separator"]
)]
except KeyError:
self.aov_separator = "_"
if self._project_settings["maya"]["RenderSettings"]["apply_render_settings"]: # noqa
lib_rendersettings.RenderSettings().set_default_renderer_settings()
manager = ModulesManager()
self.deadline_module = manager.modules_by_name["deadline"]
try:
@ -177,13 +145,13 @@ class CreateRender(plugin.Creator):
])
cmds.setAttr("{}.machineList".format(self.instance), lock=True)
self._rs = renderSetup.instance()
layers = self._rs.getRenderLayers()
rs = renderSetup.instance()
layers = rs.getRenderLayers()
if use_selection:
print(">>> processing existing layers")
self.log.info("Processing existing layers")
sets = []
for layer in layers:
print(" - creating set for {}:{}".format(
self.log.info(" - creating set for {}:{}".format(
namespace, layer.name()))
render_set = cmds.sets(
n="{}:{}".format(namespace, layer.name()))
@ -193,17 +161,10 @@ class CreateRender(plugin.Creator):
# if no render layers are present, create default one with
# asterisk selector
if not layers:
render_layer = self._rs.createRenderLayer('Main')
render_layer = rs.createRenderLayer('Main')
collection = render_layer.createCollection("defaultCollection")
collection.getSelector().setPattern('*')
renderer = cmds.getAttr(
'defaultRenderGlobals.currentRenderer').lower()
# handle various renderman names
if renderer.startswith('renderman'):
renderer = 'renderman'
self._set_default_renderer_settings(renderer)
return self.instance
def _deadline_webservice_changed(self):
@ -237,7 +198,7 @@ class CreateRender(plugin.Creator):
def _create_render_settings(self):
"""Create instance settings."""
# get pools
# get pools (slave machines of the render farm)
pool_names = []
default_priority = 50
@ -281,7 +242,8 @@ class CreateRender(plugin.Creator):
# if 'default' server is not between selected,
# use first one for initial list of pools.
deadline_url = next(iter(self.deadline_servers.values()))
# Uses function to get pool machines from the assigned deadline
# url in settings
pool_names = self.deadline_module.get_deadline_pools(deadline_url,
self.log)
maya_submit_dl = self._project_settings.get(
@ -400,102 +362,36 @@ class CreateRender(plugin.Creator):
self.log.error("Cannot show login form to Muster")
raise Exception("Cannot show login form to Muster")
def _set_default_renderer_settings(self, renderer):
"""Set basic settings based on renderer.
def _requests_post(self, *args, **kwargs):
"""Wrap request post method.
Args:
renderer (str): Renderer name.
Disables SSL certificate validation if the ``OPENPYPE_DONT_VERIFY_SSL``
environment variable is found. This is useful when the Deadline or
Muster server runs with a self-signed certificate that is not added
to the trusted certificates on client machines.
Warning:
Disabling SSL certificate validation defeats one line of defense
SSL provides and is not recommended.
"""
prefix = self._image_prefixes[renderer]
prefix = prefix.replace("{aov_separator}", self.aov_separator)
cmds.setAttr(self._image_prefix_nodes[renderer],
prefix,
type="string")
if "verify" not in kwargs:
kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
return requests.post(*args, **kwargs)
asset = get_current_project_asset()
def _requests_get(self, *args, **kwargs):
"""Wrap request get method.
if renderer == "arnold":
# set format to exr
Disables SSL certificate validation if the ``OPENPYPE_DONT_VERIFY_SSL``
environment variable is found. This is useful when the Deadline or
Muster server runs with a self-signed certificate that is not added
to the trusted certificates on client machines.
cmds.setAttr(
"defaultArnoldDriver.ai_translator", "exr", type="string")
self._set_global_output_settings()
# resolution
cmds.setAttr(
"defaultResolution.width",
asset["data"].get("resolutionWidth"))
cmds.setAttr(
"defaultResolution.height",
asset["data"].get("resolutionHeight"))
Warning:
Disabling SSL certificate validation defeats one line of defense
SSL provides and is not recommended.
if renderer == "vray":
self._set_vray_settings(asset)
if renderer == "redshift":
cmds.setAttr("redshiftOptions.imageFormat", 1)
# resolution
cmds.setAttr(
"defaultResolution.width",
asset["data"].get("resolutionWidth"))
cmds.setAttr(
"defaultResolution.height",
asset["data"].get("resolutionHeight"))
self._set_global_output_settings()
if renderer == "renderman":
cmds.setAttr("rmanGlobals.imageOutputDir",
"maya/<scene>/<layer>", type="string")
def _set_vray_settings(self, asset):
# type: (dict) -> None
"""Sets important settings for Vray."""
settings = cmds.ls(type="VRaySettingsNode")
node = settings[0] if settings else cmds.createNode("VRaySettingsNode")
# set separator
# set it in vray menu
if cmds.optionMenuGrp("vrayRenderElementSeparator", exists=True,
q=True):
items = cmds.optionMenuGrp(
"vrayRenderElementSeparator", ill=True, query=True)
separators = [cmds.menuItem(i, label=True, query=True) for i in items] # noqa: E501
try:
sep_idx = separators.index(self.aov_separator)
except ValueError:
raise CreatorError(
"AOV character {} not in {}".format(
self.aov_separator, separators))
cmds.optionMenuGrp(
"vrayRenderElementSeparator", sl=sep_idx + 1, edit=True)
cmds.setAttr(
"{}.fileNameRenderElementSeparator".format(node),
self.aov_separator,
type="string"
)
# set format to exr
cmds.setAttr(
"{}.imageFormatStr".format(node), "exr", type="string")
# animType
cmds.setAttr(
"{}.animType".format(node), 1)
# resolution
cmds.setAttr(
"{}.width".format(node),
asset["data"].get("resolutionWidth"))
cmds.setAttr(
"{}.height".format(node),
asset["data"].get("resolutionHeight"))
@staticmethod
def _set_global_output_settings():
# enable animation
cmds.setAttr("defaultRenderGlobals.outFormatControl", 0)
cmds.setAttr("defaultRenderGlobals.animation", 1)
cmds.setAttr("defaultRenderGlobals.putFrameBeforeExt", 1)
cmds.setAttr("defaultRenderGlobals.extensionPadding", 4)
"""
if "verify" not in kwargs:
kwargs["verify"] = not os.getenv("OPENPYPE_DONT_VERIFY_SSL", True)
return requests.get(*args, **kwargs)

View file

@ -551,7 +551,9 @@ class CollectLook(pyblish.api.InstancePlugin):
if cmds.getAttr(attribute, type=True) == "message":
continue
node_attributes[attr] = cmds.getAttr(attribute)
# Only include if there are any properties we care about
if not node_attributes:
continue
attributes.append({"name": node,
"uuid": lib.get_id(node),
"attributes": node_attributes})

View file

@ -72,7 +72,6 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
def process(self, context):
"""Entry point to collector."""
render_instance = None
deadline_url = None
for instance in context:
if "rendering" in instance.data["families"]:
@ -96,23 +95,12 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
asset = legacy_io.Session["AVALON_ASSET"]
workspace = context.data["workspaceDir"]
deadline_settings = (
context.data
["system_settings"]
["modules"]
["deadline"]
)
if deadline_settings["enabled"]:
deadline_url = render_instance.data.get("deadlineUrl")
self._rs = renderSetup.instance()
current_layer = self._rs.getVisibleRenderLayer()
# Retrieve render setup layers
rs = renderSetup.instance()
maya_render_layers = {
layer.name(): layer for layer in self._rs.getRenderLayers()
layer.name(): layer for layer in rs.getRenderLayers()
}
self.maya_layers = maya_render_layers
for layer in collected_render_layers:
try:
if layer.startswith("LAYER_"):
@ -147,49 +135,28 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
self.log.warning(msg)
continue
# test if there are sets (subsets) to attach render to
# detect if there are sets (subsets) to attach render to
sets = cmds.sets(layer, query=True) or []
attach_to = []
if sets:
for s in sets:
if "family" not in cmds.listAttr(s):
continue
for s in sets:
if not cmds.attributeQuery("family", node=s, exists=True):
continue
attach_to.append(
{
"version": None, # we need integrator for that
"subset": s,
"family": cmds.getAttr("{}.family".format(s)),
}
)
self.log.info(" -> attach render to: {}".format(s))
attach_to.append(
{
"version": None, # we need integrator for that
"subset": s,
"family": cmds.getAttr("{}.family".format(s)),
}
)
self.log.info(" -> attach render to: {}".format(s))
layer_name = "rs_{}".format(expected_layer_name)
# collect all frames we are expecting to be rendered
renderer = cmds.getAttr(
"defaultRenderGlobals.currentRenderer"
).lower()
# handle various renderman names
if renderer.startswith("renderman"):
renderer = "renderman"
try:
aov_separator = self._aov_chars[(
context.data["project_settings"]
["create"]
["CreateRender"]
["aov_separator"]
)]
except KeyError:
aov_separator = "_"
render_instance.data["aovSeparator"] = aov_separator
# return all expected files for all cameras and aovs in given
# frame range
layer_render_products = get_layer_render_products(
layer_name, render_instance)
layer_render_products = get_layer_render_products(layer_name)
render_products = layer_render_products.layer_data.products
assert render_products, "no render products generated"
exp_files = []
@ -226,13 +193,11 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
)
# append full path
full_exp_files = []
aov_dict = {}
default_render_file = context.data.get('project_settings')\
.get('maya')\
.get('create')\
.get('CreateRender')\
.get('default_render_image_folder')
.get('RenderSettings')\
.get('default_render_image_folder') or ""
# replace relative paths with absolute. Render products are
# returned as list of dictionaries.
publish_meta_path = None
@ -246,6 +211,7 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
full_paths.append(full_path)
publish_meta_path = os.path.dirname(full_path)
aov_dict[aov_first_key] = full_paths
full_exp_files = [aov_dict]
frame_start_render = int(self.get_render_attribute(
"startFrame", layer=layer_name))
@ -269,8 +235,6 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
frame_start_handle = frame_start_render
frame_end_handle = frame_end_render
full_exp_files.append(aov_dict)
# find common path to store metadata
# so if image prefix is branching to many directories
# metadata file will be located in top-most common
@ -299,16 +263,6 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
self.log.info("collecting layer: {}".format(layer_name))
# Get layer specific settings, might be overrides
try:
aov_separator = self._aov_chars[(
context.data["project_settings"]
["create"]
["CreateRender"]
["aov_separator"]
)]
except KeyError:
aov_separator = "_"
data = {
"subset": expected_layer_name,
"attachTo": attach_to,
@ -357,11 +311,15 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
"useReferencedAovs": render_instance.data.get(
"useReferencedAovs") or render_instance.data.get(
"vrayUseReferencedAovs") or False,
"aovSeparator": aov_separator
"aovSeparator": layer_render_products.layer_data.aov_separator # noqa: E501
}
if deadline_url:
data["deadlineUrl"] = deadline_url
# Collect Deadline url if Deadline module is enabled
deadline_settings = (
context.data["system_settings"]["modules"]["deadline"]
)
if deadline_settings["enabled"]:
data["deadlineUrl"] = render_instance.data.get("deadlineUrl")
if self.sync_workfile_version:
data["version"] = context.data["version"]
@ -370,19 +328,6 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
if instance.data['family'] == "workfile":
instance.data["version"] = context.data["version"]
# Apply each user defined attribute as data
for attr in cmds.listAttr(layer, userDefined=True) or list():
try:
value = cmds.getAttr("{}.{}".format(layer, attr))
except Exception:
# Some attributes cannot be read directly,
# such as mesh and color attributes. These
# are considered non-essential to this
# particular publishing pipeline.
value = None
data[attr] = value
# handle standalone renderers
if render_instance.data.get("vrayScene") is True:
data["families"].append("vrayscene_render")
@ -490,10 +435,6 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
return pool_a, pool_b
def _get_overrides(self, layer):
rset = self.maya_layers[layer].renderSettingsCollectionInstance()
return rset.getOverrides()
@staticmethod
def get_render_attribute(attr, layer):
"""Get attribute from render options.

View file

@ -0,0 +1,146 @@
import math
import os
import json
from maya import cmds
from maya.api import OpenMaya as om
from bson.objectid import ObjectId
from openpype.pipeline import legacy_io
import openpype.api
class ExtractLayout(openpype.api.Extractor):
"""Extract a layout."""
label = "Extract Layout"
hosts = ["maya"]
families = ["layout"]
optional = True
def process(self, instance):
# Define extract output file path
stagingdir = self.staging_dir(instance)
# Perform extraction
self.log.info("Performing extraction..")
if "representations" not in instance.data:
instance.data["representations"] = []
json_data = []
for asset in cmds.sets(str(instance), query=True):
# Find the container
grp_name = asset.split(':')[0]
containers = cmds.ls(f"{grp_name}*_CON")
assert len(containers) == 1, \
f"Expected exactly one container for {asset}, found {len(containers)}"
container = containers[0]
representation_id = cmds.getAttr(f"{container}.representation")
representation = legacy_io.find_one(
{
"type": "representation",
"_id": ObjectId(representation_id)
}, projection={"parent": True, "context.family": True})
self.log.info(representation)
version_id = representation.get("parent")
family = representation.get("context").get("family")
json_element = {
"family": family,
"instance_name": cmds.getAttr(f"{container}.name"),
"representation": str(representation_id),
"version": str(version_id)
}
loc = cmds.xform(asset, query=True, translation=True)
rot = cmds.xform(asset, query=True, rotation=True, euler=True)
scl = cmds.xform(asset, query=True, relative=True, scale=True)
json_element["transform"] = {
"translation": {
"x": loc[0],
"y": loc[1],
"z": loc[2]
},
"rotation": {
"x": math.radians(rot[0]),
"y": math.radians(rot[1]),
"z": math.radians(rot[2])
},
"scale": {
"x": scl[0],
"y": scl[1],
"z": scl[2]
}
}
row_length = 4
t_matrix_list = cmds.xform(asset, query=True, matrix=True)
transform_mm = om.MMatrix(t_matrix_list)
transform = om.MTransformationMatrix(transform_mm)
t = transform.translation(om.MSpace.kWorld)
t = om.MVector(t.x, t.z, -t.y)
transform.setTranslation(t, om.MSpace.kWorld)
transform.rotateBy(
om.MEulerRotation(math.radians(-90), 0, 0), om.MSpace.kWorld)
transform.scaleBy([1.0, 1.0, -1.0], om.MSpace.kObject)
t_matrix_list = list(transform.asMatrix())
t_matrix = []
for i in range(0, len(t_matrix_list), row_length):
t_matrix.append(t_matrix_list[i:i + row_length])
json_element["transform_matrix"] = []
for row in t_matrix:
json_element["transform_matrix"].append(list(row))
basis_list = [
1, 0, 0, 0,
0, 1, 0, 0,
0, 0, -1, 0,
0, 0, 0, 1
]
basis_mm = om.MMatrix(basis_list)
basis = om.MTransformationMatrix(basis_mm)
b_matrix_list = list(basis.asMatrix())
b_matrix = []
for i in range(0, len(b_matrix_list), row_length):
b_matrix.append(b_matrix_list[i:i + row_length])
json_element["basis"] = []
for row in b_matrix:
json_element["basis"].append(list(row))
json_data.append(json_element)
json_filename = "{}.json".format(instance.name)
json_path = os.path.join(stagingdir, json_filename)
with open(json_path, "w+") as file:
json.dump(json_data, fp=file, indent=2)
json_representation = {
'name': 'json',
'ext': 'json',
'files': json_filename,
"stagingDir": stagingdir,
}
instance.data["representations"].append(json_representation)
self.log.info("Extracted instance '%s' to: %s",
instance.name, json_representation)

View file

@ -27,6 +27,29 @@ def escape_space(path):
return '"{}"'.format(path) if " " in path else path
def get_ocio_config_path(profile_folder):
"""Path to OpenPype vendorized OCIO.
Vendorized OCIO config file path is grabbed from the specific path
hierarchy specified below.
"{OPENPYPE_ROOT}/vendor/OpenColorIO-Configs/{profile_folder}/config.ocio"
Args:
profile_folder (str): Name of folder to grab config file from.
Returns:
str: Path to vendorized config file.
"""
return os.path.join(
os.environ["OPENPYPE_ROOT"],
"vendor",
"configs",
"OpenColorIO-Configs",
profile_folder,
"config.ocio"
)
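# Illustrative call (not in the original source):
# get_ocio_config_path("nuke-default") resolves to
# {OPENPYPE_ROOT}/vendor/configs/OpenColorIO-Configs/nuke-default/config.ocio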
def find_paths_by_hash(texture_hash):
"""Find the texture hash key in the dictionary.
@ -79,10 +102,11 @@ def maketx(source, destination, *args):
# use oiio-optimized settings for tile-size, planarconfig, metadata
"--oiio",
"--filter lanczos3",
escape_space(source)
]
cmd.extend(args)
cmd.extend(["-o", escape_space(destination), escape_space(source)])
cmd.extend(["-o", escape_space(destination)])
cmd = " ".join(cmd)
@ -405,7 +429,19 @@ class ExtractLook(openpype.api.Extractor):
# node doesn't have color space attribute
color_space = "Raw"
else:
if files_metadata[source]["color_space"] == "Raw":
# get the resolved files
metadata = files_metadata.get(source)
# if the files are unresolved from `source`
# assume color space from the first file of
# the resource
if not metadata:
first_file = next(iter(resource.get(
"files", [])), None)
if not first_file:
continue
first_filepath = os.path.normpath(first_file)
metadata = files_metadata[first_filepath]
if metadata["color_space"] == "Raw":
# set color space to raw if we linearized it
color_space = "Raw"
# Remap file node filename to destination
@ -493,6 +529,8 @@ class ExtractLook(openpype.api.Extractor):
else:
colorconvert = ""
config_path = get_ocio_config_path("nuke-default")
color_config = "--colorconfig {0}".format(config_path)
# Ensure folder exists
if not os.path.exists(os.path.dirname(converted)):
os.makedirs(os.path.dirname(converted))
@ -502,10 +540,11 @@ class ExtractLook(openpype.api.Extractor):
filepath,
converted,
# Include `source-hash` as string metadata
"-sattrib",
"--sattrib",
"sourceHash",
escape_space(texture_hash),
colorconvert,
color_config
)
return converted, COPY, texture_hash

View file

@ -78,14 +78,13 @@ class ValidateLookContents(pyblish.api.InstancePlugin):
# Check if attributes are on a node with an ID, crucial for rebuild!
for attr_changes in lookdata["attributes"]:
if not attr_changes["uuid"]:
if not attr_changes["uuid"] and not attr_changes["attributes"]:
cls.log.error("Node '%s' has no cbId, please set the "
"attributes to its children if it has any"
% attr_changes["name"])
invalid.add(instance.name)
return list(invalid)
@classmethod
def validate_looks(cls, instance):

View file

@ -1,17 +1,15 @@
import maya.mel as mel
import pymel.core as pm
from maya import cmds
import pyblish.api
import openpype.api
def get_file_rule(rule):
"""Workaround for a bug in python with cmds.workspace"""
return mel.eval('workspace -query -fileRuleEntry "{}"'.format(rule))
class ValidateRenderImageRule(pyblish.api.InstancePlugin):
"""Validates "images" file rule is set to "renders/"
"""Validates Maya Workpace "images" file rule matches project settings.
This validates against the configured default render image folder:
Studio Settings > Project > Maya >
Render Settings > Default render image folder.
"""
@ -23,24 +21,29 @@ class ValidateRenderImageRule(pyblish.api.InstancePlugin):
def process(self, instance):
default_render_file = self.get_default_render_image_folder(instance)
required_images_rule = self.get_default_render_image_folder(instance)
current_images_rule = cmds.workspace(fileRuleEntry="images")
assert get_file_rule("images") == default_render_file, (
"Workspace's `images` file rule must be set to: {}".format(
default_render_file
assert current_images_rule == required_images_rule, (
"Invalid workspace `images` file rule value: '{}'. "
"Must be set to: '{}'".format(
current_images_rule, required_images_rule
)
)
@classmethod
def repair(cls, instance):
default = cls.get_default_render_image_folder(instance)
pm.workspace.fileRules["images"] = default
pm.system.Workspace.save()
required_images_rule = cls.get_default_render_image_folder(instance)
current_images_rule = cmds.workspace(fileRuleEntry="images")
if current_images_rule != required_images_rule:
cmds.workspace(fileRule=("images", required_images_rule))
cmds.workspace(saveWorkspace=True)
@staticmethod
def get_default_render_image_folder(instance):
return instance.context.data.get('project_settings')\
.get('maya') \
.get('create') \
.get('CreateRender') \
.get('RenderSettings') \
.get('default_render_image_folder')

View file

@ -1,20 +1,11 @@
import re
import pyblish.api
import openpype.api
import openpype.hosts.maya.api.action
from maya import cmds
ImagePrefixes = {
'mentalray': 'defaultRenderGlobals.imageFilePrefix',
'vray': 'vraySettings.fileNamePrefix',
'arnold': 'defaultRenderGlobals.imageFilePrefix',
'renderman': 'defaultRenderGlobals.imageFilePrefix',
'redshift': 'defaultRenderGlobals.imageFilePrefix',
'mayahardware2': 'defaultRenderGlobals.imageFilePrefix',
}
import openpype.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api.render_settings import RenderSettings
class ValidateRenderSingleCamera(pyblish.api.InstancePlugin):
@ -47,7 +38,11 @@ class ValidateRenderSingleCamera(pyblish.api.InstancePlugin):
# handle various renderman names
if renderer.startswith('renderman'):
renderer = 'renderman'
file_prefix = cmds.getAttr(ImagePrefixes[renderer])
file_prefix = cmds.getAttr(
RenderSettings.get_image_prefix_attr(renderer)
)
if len(cameras) > 1:
if re.search(cls.R_CAMERA_TOKEN, file_prefix):

View file

@ -21,9 +21,7 @@ from openpype.client import (
)
from openpype.api import (
Logger,
BuildWorkfile,
get_version_from_path,
get_workdir_data,
get_current_project_settings,
)
from openpype.tools.utils import host_tools
@ -34,12 +32,17 @@ from openpype.settings import (
get_anatomy_settings,
)
from openpype.modules import ModulesManager
from openpype.pipeline.template_data import get_template_data_with_names
from openpype.pipeline import (
discover_legacy_creator_plugins,
legacy_io,
Anatomy,
)
from openpype.pipeline.context_tools import get_current_project_asset
from openpype.pipeline.context_tools import (
get_current_project_asset,
get_custom_workfile_template_from_session
)
from openpype.pipeline.workfile import BuildWorkfile
from . import gizmo_menu
@ -910,19 +913,17 @@ def get_render_path(node):
''' Generate Render path from presets regarding avalon knob data
'''
avalon_knob_data = read_avalon_data(node)
data = {'avalon': avalon_knob_data}
nuke_imageio_writes = get_imageio_node_setting(
node_class=avalon_knob_data["family"],
node_class=avalon_knob_data["families"],
plugin_name=avalon_knob_data["creator"],
subset=avalon_knob_data["subset"]
)
host_name = os.environ.get("AVALON_APP")
data.update({
"app": host_name,
data = {
"avalon": avalon_knob_data,
"nuke_imageio_writes": nuke_imageio_writes
})
}
anatomy_filled = format_anatomy(data)
return anatomy_filled["render"]["path"].replace("\\", "/")
@ -965,12 +966,11 @@ def format_anatomy(data):
data["version"] = get_version_from_path(file)
project_name = anatomy.project_name
project_doc = get_project(project_name)
asset_doc = get_asset_by_name(project_name, data["avalon"]["asset"])
asset_name = data["avalon"]["asset"]
task_name = os.environ["AVALON_TASK"]
host_name = os.environ["AVALON_APP"]
context_data = get_workdir_data(
project_doc, asset_doc, task_name, host_name
context_data = get_template_data_with_names(
project_name, asset_name, task_name, host_name
)
data.update(context_data)
data.update({
@ -1128,10 +1128,8 @@ def create_write_node(
if knob["name"] == "file_type":
representation = knob["value"]
host_name = os.environ.get("AVALON_APP")
try:
data.update({
"app": host_name,
"imageio_writes": imageio_writes,
"representation": representation,
})
@ -1925,7 +1923,7 @@ class WorkfileSettings(object):
families.append(avalon_knob_data.get("families"))
nuke_imageio_writes = get_imageio_node_setting(
node_class=avalon_knob_data["family"],
node_class=avalon_knob_data["families"],
plugin_name=avalon_knob_data["creator"],
subset=avalon_knob_data["subset"]
)
@ -2224,7 +2222,7 @@ def get_write_node_template_attr(node):
avalon_knob_data = read_avalon_data(node)
# get template data
nuke_imageio_writes = get_imageio_node_setting(
node_class=avalon_knob_data["family"],
node_class=avalon_knob_data["families"],
plugin_name=avalon_knob_data["creator"],
subset=avalon_knob_data["subset"]
)
@ -2449,15 +2447,12 @@ def _launch_workfile_app():
def process_workfile_builder():
from openpype.lib import (
env_value_to_bool,
get_custom_workfile_template
)
# to avoid looping of the callback, remove it!
nuke.removeOnCreate(process_workfile_builder, nodeClass="Root")
# get state from settings
workfile_builder = get_current_project_settings()["nuke"].get(
project_settings = get_current_project_settings()
workfile_builder = project_settings["nuke"].get(
"workfile_builder", {})
# get all important settings
@ -2467,7 +2462,6 @@ def process_workfile_builder():
# get settings
createfv_on = workfile_builder.get("create_first_version") or None
custom_templates = workfile_builder.get("custom_templates") or None
builder_on = workfile_builder.get("builder_on_start") or None
last_workfile_path = os.environ.get("AVALON_LAST_WORKFILE")
@ -2475,8 +2469,8 @@ def process_workfile_builder():
# generate first version in file not existing and feature is enabled
if createfv_on and not os.path.exists(last_workfile_path):
# get custom template path if any
custom_template_path = get_custom_workfile_template(
custom_templates
custom_template_path = get_custom_workfile_template_from_session(
project_settings=project_settings
)
# if custom template is defined

View file

@ -9,7 +9,6 @@ import pyblish.api
import openpype
from openpype.api import (
Logger,
BuildWorkfile,
get_current_project_settings
)
from openpype.lib import register_event_callback
@ -22,6 +21,7 @@ from openpype.pipeline import (
deregister_inventory_action_path,
AVALON_CONTAINER_ID,
)
from openpype.pipeline.workfile import BuildWorkfile
from openpype.tools.utils import host_tools
from .command import viewer_update_and_undo_stop

View file

@ -181,8 +181,6 @@ class ExporterReview(object):
# get first and last frame
self.first_frame = min(self.collection.indexes)
self.last_frame = max(self.collection.indexes)
if "slate" in self.instance.data["families"]:
self.first_frame += 1
else:
self.fname = os.path.basename(self.path_in)
self.fhead = os.path.splitext(self.fname)[0] + "."

View file

@ -33,6 +33,7 @@ class CollectSlate(pyblish.api.InstancePlugin):
if slate_node:
instance.data["slateNode"] = slate_node
instance.data["slate"] = True
instance.data["families"].append("slate")
instance.data["versionData"]["families"].append("slate")
self.log.info(

View file

@ -31,10 +31,6 @@ class NukeRenderLocal(openpype.api.Extractor):
first_frame = instance.data.get("frameStartHandle", None)
# exception for slate workflow
if "slate" in families:
first_frame -= 1
last_frame = instance.data.get("frameEndHandle", None)
node_subset_name = instance.data.get("name", None)
@ -68,10 +64,6 @@ class NukeRenderLocal(openpype.api.Extractor):
int(last_frame)
)
# exception for slate workflow
if "slate" in families:
first_frame += 1
ext = node["file_type"].value()
if "representations" not in instance.data:
@ -88,8 +80,11 @@ class NukeRenderLocal(openpype.api.Extractor):
repre = {
'name': ext,
'ext': ext,
'frameStart': "%0{}d".format(
len(str(last_frame))) % first_frame,
'frameStart': (
"{{:0>{}}}"
.format(len(str(last_frame)))
.format(first_frame)
),
'files': filenames,
"stagingDir": out_dir
}
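# Illustrative: with first_frame=997 and last_frame=1001 the padding
# width is len("1001") == 4, so 'frameStart' becomes "0997".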
@ -105,13 +100,16 @@ class NukeRenderLocal(openpype.api.Extractor):
instance.data['family'] = 'render'
families.remove('render.local')
families.insert(0, "render2d")
instance.data["anatomyData"]["family"] = "render"
elif "prerender.local" in families:
instance.data['family'] = 'prerender'
families.remove('prerender.local')
families.insert(0, "prerender")
instance.data["anatomyData"]["family"] = "prerender"
elif "still.local" in families:
instance.data['family'] = 'image'
families.remove('still.local')
instance.data["anatomyData"]["family"] = "image"
instance.data["families"] = families
collections, remainder = clique.assemble(filenames)
@ -123,4 +121,4 @@ class NukeRenderLocal(openpype.api.Extractor):
self.log.info('Finished render')
self.log.debug("instance extracted: {}".format(instance.data))
self.log.debug("_ instance.data: {}".format(instance.data))

View file

@ -13,6 +13,7 @@ from openpype.hosts.nuke.api import (
get_view_process_node
)
class ExtractSlateFrame(openpype.api.Extractor):
"""Extracts movie and thumbnail with baked in luts
@ -236,6 +237,7 @@ class ExtractSlateFrame(openpype.api.Extractor):
def _render_slate_to_sequence(self, instance):
# set slate frame
first_frame = instance.data["frameStartHandle"]
last_frame = instance.data["frameEndHandle"]
slate_first_frame = first_frame - 1
# render slate as sequence frame
@ -284,6 +286,13 @@ class ExtractSlateFrame(openpype.api.Extractor):
matching_repre["files"] = [first_filename, slate_filename]
elif slate_filename not in matching_repre["files"]:
matching_repre["files"].insert(0, slate_filename)
matching_repre["frameStart"] = (
"{{:0>{}}}"
.format(len(str(last_frame)))
.format(slate_first_frame)
)
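# Illustrative: with frameStartHandle=1001 and frameEndHandle=1100 the
# slate frame is 1000, padded to len("1100") == 4 digits, i.e. "1000".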
self.log.debug(
"__ matching_repre: {}".format(pformat(matching_repre)))
self.log.warning("Added slate frame to representation files")

View file

@ -50,7 +50,7 @@ class PreCollectNukeInstances(pyblish.api.ContextPlugin):
# establish families
family = avalon_knob_data["family"]
families_ak = avalon_knob_data.get("families", [])
families = list()
families = []
# except disabled nodes but exclude backdrops in test
if ("nukenodes" not in family) and (node["disable"].value()):
@ -111,10 +111,10 @@ class PreCollectNukeInstances(pyblish.api.ContextPlugin):
self.log.debug("__ families: `{}`".format(families))
# Get format
format = root['format'].value()
resolution_width = format.width()
resolution_height = format.height()
pixel_aspect = format.pixelAspect()
format_ = root['format'].value()
resolution_width = format_.width()
resolution_height = format_.height()
pixel_aspect = format_.pixelAspect()
# get publish knob value
if "publish" not in node.knobs():
@ -125,8 +125,11 @@ class PreCollectNukeInstances(pyblish.api.ContextPlugin):
self.log.debug("__ _families_test: `{}`".format(_families_test))
for family_test in _families_test:
if family_test in self.sync_workfile_version_on_families:
self.log.debug("Syncing version with workfile for '{}'"
.format(family_test))
self.log.debug(
"Syncing version with workfile for '{}'".format(
family_test
)
)
# get version to instance for integration
instance.data['version'] = instance.context.data['version']

View file

@ -144,8 +144,10 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
self.log.debug("colorspace: `{}`".format(colorspace))
version_data = {
"families": [f.replace(".local", "").replace(".farm", "")
for f in _families_test if "write" not in f],
"families": [
_f.replace(".local", "").replace(".farm", "")
for _f in _families_test if "write" != _f
],
"colorspace": colorspace
}

View file

@ -98,7 +98,7 @@ class ValidateRenderedFrames(pyblish.api.InstancePlugin):
self.log.error(msg)
raise ValidationException(msg)
collected_frames_len = int(len(collection.indexes))
collected_frames_len = len(collection.indexes)
coll_start = min(collection.indexes)
coll_end = max(collection.indexes)

View file

@ -1,3 +1,5 @@
import re
from openpype.hosts.photoshop import api
from openpype.lib import BoolDef
from openpype.pipeline import (
@ -5,6 +7,8 @@ from openpype.pipeline import (
CreatedInstance,
legacy_io
)
from openpype.lib import prepare_template_data
from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS
class ImageCreator(Creator):
@ -38,17 +42,24 @@ class ImageCreator(Creator):
top_level_selected_items = stub.get_selected_layers()
if pre_create_data.get("use_selection"):
only_single_item_selected = len(top_level_selected_items) == 1
for selected_item in top_level_selected_items:
if (
only_single_item_selected or
pre_create_data.get("create_multiple")):
if (
only_single_item_selected or
pre_create_data.get("create_multiple")):
for selected_item in top_level_selected_items:
if selected_item.group:
groups_to_create.append(selected_item)
else:
top_layers_to_wrap.append(selected_item)
else:
group = stub.group_selected_layers(subset_name_from_ui)
groups_to_create.append(group)
else:
group = stub.group_selected_layers(subset_name_from_ui)
groups_to_create.append(group)
else:
stub.select_layers(stub.get_layers())
try:
group = stub.group_selected_layers(subset_name_from_ui)
except Exception:
raise ValueError("Cannot group locked Background layer!")
groups_to_create.append(group)
if not groups_to_create and not top_layers_to_wrap:
group = stub.create_group(subset_name_from_ui)
@ -60,6 +71,7 @@ class ImageCreator(Creator):
group = stub.group_selected_layers(layer.name)
groups_to_create.append(group)
layer_name = ''
creating_multiple_groups = len(groups_to_create) > 1
for group in groups_to_create:
subset_name = subset_name_from_ui # reset to name from creator UI
@ -67,8 +79,16 @@ class ImageCreator(Creator):
created_group_name = self._clean_highlights(stub, group.name)
if creating_multiple_groups:
# concatenate with layer name to differentiate subsets
subset_name += group.name.title().replace(" ", "")
layer_name = re.sub(
"[^{}]+".format(SUBSET_NAME_ALLOWED_SYMBOLS),
"",
group.name
)
if "{layer}" not in subset_name.lower():
subset_name += "{Layer}"
layer_fill = prepare_template_data({"layer": layer_name})
subset_name = subset_name.format(**layer_fill)
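# Illustrative result (hypothetical names, assuming prepare_template_data
# expands case variants): subset "imageMain" with group name "Group 1"
# strips disallowed symbols to layer_name "Group1", expands the template
# to "imageMain{Layer}" and formats to "imageMainGroup1".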
if group.long_name:
for directory in group.long_name[::-1]:
@ -143,3 +163,6 @@ class ImageCreator(Creator):
def _clean_highlights(self, stub, item):
return item.replace(stub.PUBLISH_ICON, '').replace(stub.LOADED_ICON,
'')
@classmethod
def get_dynamic_data(cls, *args, **kwargs):
return {"layer": "{layer}"}

View file

@ -1,7 +1,12 @@
import re
from Qt import QtWidgets
from openpype.pipeline import create
from openpype.hosts.photoshop import api as photoshop
from openpype.lib import prepare_template_data
from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS
class CreateImage(create.LegacyCreator):
"""Image folder for publish."""
@ -75,6 +80,7 @@ class CreateImage(create.LegacyCreator):
groups.append(group)
creator_subset_name = self.data["subset"]
layer_name = ''
for group in groups:
long_names = []
group.name = group.name.replace(stub.PUBLISH_ICON, ''). \
@ -82,7 +88,16 @@ class CreateImage(create.LegacyCreator):
subset_name = creator_subset_name
if len(groups) > 1:
subset_name += group.name.title().replace(" ", "")
layer_name = re.sub(
"[^{}]+".format(SUBSET_NAME_ALLOWED_SYMBOLS),
"",
group.name
)
if "{layer}" not in subset_name.lower():
subset_name += "{Layer}"
layer_fill = prepare_template_data({"layer": layer_name})
subset_name = subset_name.format(**layer_fill)
if group.long_name:
for directory in group.long_name[::-1]:
@ -98,3 +113,7 @@ class CreateImage(create.LegacyCreator):
# reusing existing group, need to rename afterwards
if not create_group:
stub.rename_layer(group.id, stub.PUBLISH_ICON + group.name)
@classmethod
def get_dynamic_data(cls, *args, **kwargs):
return {"layer": "{layer}"}

View file

@ -4,6 +4,7 @@ import pyblish.api
import openpype.api
from openpype.pipeline import PublishXmlValidationError
from openpype.hosts.photoshop import api as photoshop
from openpype.pipeline.create import SUBSET_NAME_ALLOWED_SYMBOLS
class ValidateNamingRepair(pyblish.api.Action):
@ -50,6 +51,13 @@ class ValidateNamingRepair(pyblish.api.Action):
subset_name = re.sub(invalid_chars, replace_char,
instance.data["subset"])
# format from Tool Creator
subset_name = re.sub(
"[^{}]+".format(SUBSET_NAME_ALLOWED_SYMBOLS),
"",
subset_name
)
layer_meta["subset"] = subset_name
stub.imprint(instance_id, layer_meta)

View file

@ -1,6 +1,6 @@
import os
import json
from openpype.pipeline import legacy_io
from openpype.client import get_asset_by_name
class HostContext:
@ -17,10 +17,10 @@ class HostContext:
if not asset_name:
return project_name
asset_doc = legacy_io.find_one(
{"type": "asset", "name": asset_name},
{"data.parents": 1}
asset_doc = get_asset_by_name(
project_name, asset_name, fields=["data.parents"]
)
parents = asset_doc.get("data", {}).get("parents") or []
hierarchy = [project_name]

View file

@ -1,10 +1,11 @@
from openpype.lib import NumberDef
from openpype.hosts.testhost.api import pipeline
from openpype.client import get_asset_by_name
from openpype.pipeline import (
legacy_io,
AutoCreator,
CreatedInstance,
)
from openpype.hosts.testhost.api import pipeline
class MyAutoCreator(AutoCreator):
@ -44,10 +45,7 @@ class MyAutoCreator(AutoCreator):
host_name = legacy_io.Session["AVALON_APP"]
if existing_instance is None:
asset_doc = legacy_io.find_one({
"type": "asset",
"name": asset_name
})
asset_doc = get_asset_by_name(project_name, asset_name)
subset_name = self.get_subset_name(
variant, task_name, asset_doc, project_name, host_name
)
@ -69,10 +67,7 @@ class MyAutoCreator(AutoCreator):
existing_instance["asset"] != asset_name
or existing_instance["task"] != task_name
):
asset_doc = legacy_io.find_one({
"type": "asset",
"name": asset_name
})
asset_doc = get_asset_by_name(project_name, asset_name)
subset_name = self.get_subset_name(
variant, task_name, asset_doc, project_name, host_name
)

View file

@ -0,0 +1,331 @@
import re
from copy import deepcopy
from openpype.client import get_asset_by_id
from openpype.pipeline.create import CreatorError
class ShotMetadataSolver:
""" Solving hierarchical metadata
Used during editorial publishing. Works with the input
clip name and settings that define a Python formattable
template. Settings also define search patterns and their
token keys used for formatting the templates.
"""
NO_DECOR_PATERN = re.compile(r"\{([a-z]*?)\}")
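# Illustrative: NO_DECOR_PATERN.findall("{folder}/{shot}") returns
# ["folder", "shot"]; only lowercase a-z names in curly braces match.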
# presets
clip_name_tokenizer = None
shot_rename = True
shot_hierarchy = None
shot_add_tasks = None
def __init__(
self,
clip_name_tokenizer,
shot_rename,
shot_hierarchy,
shot_add_tasks,
logger
):
self.clip_name_tokenizer = clip_name_tokenizer
self.shot_rename = shot_rename
self.shot_hierarchy = shot_hierarchy
self.shot_add_tasks = shot_add_tasks
self.log = logger
def _rename_template(self, data):
"""Shot renaming function
Args:
data (dict): formatting data
Raises:
CreatorError: If missing keys
Returns:
str: formatted new name
"""
shot_rename_template = self.shot_rename[
"shot_rename_template"]
try:
# format to new shot name
return shot_rename_template.format(**data)
except KeyError as _E:
raise CreatorError((
"Make sure all keys in settings are correct:: \n\n"
f"From template string {shot_rename_template} > "
f"`{_E}` has no equivalent in \n"
f"{list(data.keys())} input formating keys!"
))
def _generate_tokens(self, clip_name, source_data):
"""Token generator
Settings define token pairs: a key and a regex expression.
Args:
clip_name (str): name of clip in editorial
source_data (dict): data for formatting
Raises:
CreatorError: if missing key
Returns:
dict: updated source_data
"""
output_data = deepcopy(source_data["anatomy_data"])
output_data["clip_name"] = clip_name
if not self.clip_name_tokenizer:
return output_data
parent_name = source_data["selected_asset_doc"]["name"]
search_text = parent_name + clip_name
for token_key, pattern in self.clip_name_tokenizer.items():
p = re.compile(pattern)
match = p.findall(search_text)
if not match:
raise CreatorError((
"Make sure regex expression works with your data: \n\n"
f"'{token_key}' with regex '{pattern}' in your settings\n"
"can't find any match in your clip name "
f"'{search_text}'!\n\nLook to: "
"'project_settings/traypublisher/editorial_creators"
"/editorial_simple/clip_name_tokenizer'\n"
"at your project settings..."
))
# QUESTION: how to refactor `match[-1]` into something better?
output_data[token_key] = match[-1]
return output_data
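# Illustrative (hypothetical settings): with clip_name "sh010" and
# clip_name_tokenizer {"shot_id": "(sh\d+)"} the last regex match wins,
# so output_data["shot_id"] == "sh010".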
def _create_parents_from_settings(self, parents, data):
"""Formating parent components.
Args:
parents (list): list of dict parent components
data (dict): formatting data
Raises:
CreatorError: missing formatting key
CreatorError: missing token key
KeyError: missing parent token
Returns:
list: list of dict of parent components
"""
# fill the parents parts from presets
shot_hierarchy = deepcopy(self.shot_hierarchy)
hierarchy_parents = shot_hierarchy["parents"]
# fill parent keys data template from anatomy data
try:
_parent_tokens_formating_data = {
parent_token["name"]: parent_token["value"].format(**data)
for parent_token in hierarchy_parents
}
except KeyError as _E:
raise CreatorError((
"Make sure all keys in settings are correct : \n"
f"`{_E}` has no equivalent in \n{list(data.keys())}"
))
_parent_tokens_type = {
parent_token["name"]: parent_token["type"]
for parent_token in hierarchy_parents
}
for _index, _parent in enumerate(
shot_hierarchy["parents_path"].split("/")
):
# format parent token with a value that was already formatted
try:
parent_name = _parent.format(
**_parent_tokens_formating_data)
except KeyError as _E:
raise CreatorError((
"Make sure all keys in settings are correct : \n\n"
f"`{_E}` from template string "
f"{shot_hierarchy['parents_path']}, "
f" has no equivalent in \n"
f"{list(_parent_tokens_formating_data.keys())} parents"
))
parent_token_name = (
self.NO_DECOR_PATERN.findall(_parent).pop())
if not parent_token_name:
raise KeyError(
f"Parent token is not found in: `{_parent}`")
# find parent type
parent_token_type = _parent_tokens_type[parent_token_name]
# in case selected context is set to the same asset
if (
_index == 0
and parents[-1]["entity_name"] == parent_name
):
self.log.debug(f" skipping : {parent_name}")
continue
# in case first parent is project then start parents from start
if (
_index == 0
and parent_token_type == "Project"
):
self.log.debug("rebuilding parents from scratch")
project_parent = parents[0]
parents = [project_parent]
continue
parents.append({
"entity_type": parent_token_type,
"entity_name": parent_name
})
self.log.debug(f"__ parents: {parents}")
return parents
def _create_hierarchy_path(self, parents):
"""Converting hierarchy path from parents
Args:
parents (list): list of dict parent components
Returns:
str: hierarchy path
"""
return "/".join(
[
p["entity_name"] for p in parents
if p["entity_type"] != "Project"
]
) if parents else ""
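# Illustrative: parents such as [{"entity_type": "Project", ...},
# {"entity_type": "Episode", "entity_name": "ep01"},
# {"entity_type": "Sequence", "entity_name": "sq01"}]
# produce the hierarchy path "ep01/sq01" (entity names are hypothetical).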
def _get_parents_from_selected_asset(
self,
asset_doc,
project_doc
):
"""Returning parents from context on selected asset.
Context defined in Traypublisher project tree.
Args:
asset_doc (db obj): selected asset doc
project_doc (db obj): actual project doc
Returns:
list: list of dict parent components
"""
project_name = project_doc["name"]
visual_hierarchy = [asset_doc]
current_doc = asset_doc
# loop through all available visual parents;
# break once they are no longer available
while True:
visual_parent_id = current_doc["data"]["visualParent"]
visual_parent = None
if visual_parent_id:
visual_parent = get_asset_by_id(project_name, visual_parent_id)
if not visual_parent:
visual_hierarchy.append(project_doc)
break
visual_hierarchy.append(visual_parent)
current_doc = visual_parent
# add current selection context hierarchy
return [
{
"entity_type": entity["data"]["entityType"],
"entity_name": entity["name"]
}
for entity in reversed(visual_hierarchy)
]
def _generate_tasks_from_settings(self, project_doc):
"""Convert settings inputs to task data.
Args:
project_doc (db obj): actual project doc
Raises:
KeyError: Missing task type in project doc
Returns:
dict: tasks data
"""
tasks_to_add = {}
project_tasks = project_doc["config"]["tasks"]
for task_name, task_data in self.shot_add_tasks.items():
_task_data = deepcopy(task_data)
# check if task type in project task types
if _task_data["type"] in project_tasks.keys():
tasks_to_add[task_name] = _task_data
else:
raise KeyError(
"Missing task type `{}` for `{}` is not"
" existing in `{}``".format(
_task_data["type"],
task_name,
list(project_tasks.keys())
)
)
return tasks_to_add
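# Example (hypothetical settings): with shot_add_tasks set to
# {"compositing": {"type": "Compositing"}} and a project config that
# defines the "Compositing" task type, this returns
# {"compositing": {"type": "Compositing"}}; an unknown type raises KeyError.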
def generate_data(self, clip_name, source_data):
"""Metadata generator.
Converts input data to hierarchy metadata.
Args:
clip_name (str): clip name
source_data (dict): formatting data
Returns:
(str, dict): shot name and hierarchy data
"""
self.log.info(f"_ source_data: {source_data}")
tasks = {}
asset_doc = source_data["selected_asset_doc"]
project_doc = source_data["project_doc"]
# match clip to shot name at start
shot_name = clip_name
# parse all tokens and generate formatting data
formating_data = self._generate_tokens(shot_name, source_data)
# generate parents from selected asset
parents = self._get_parents_from_selected_asset(asset_doc, project_doc)
if self.shot_rename["enabled"]:
shot_name = self._rename_template(formating_data)
self.log.info(f"Renamed shot name: {shot_name}")
if self.shot_hierarchy["enabled"]:
parents = self._create_parents_from_settings(
parents, formating_data)
if self.shot_add_tasks:
tasks = self._generate_tasks_from_settings(
project_doc)
return shot_name, {
"hierarchy": self._create_hierarchy_path(parents),
"parents": parents,
"tasks": tasks
}
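# A worked example of the flow above (all names hypothetical):
# >>> shot_name, data = solver.generate_data(
# ...     "sh010",
# ...     {
# ...         "anatomy_data": {"project": {"name": "demo", "code": "dm"}},
# ...         "selected_asset_doc": asset_doc,
# ...         "project_doc": project_doc,
# ...     })
# With shot_rename enabled and a template like "{project[code]}_{clip_name}",
# shot_name could become "dm_sh010", and `data` carries the "hierarchy",
# "parents" and "tasks" keys consumed by the shot instance later on.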

View file

@ -1,6 +1,7 @@
from openpype.lib.attribute_definitions import FileDef
from openpype.pipeline import (
from openpype.pipeline.create import (
Creator,
HiddenCreator,
CreatedInstance
)
@ -11,7 +12,6 @@ from .pipeline import (
HostContext,
)
IMAGE_EXTENSIONS = [
".ani", ".anim", ".apng", ".art", ".bmp", ".bpg", ".bsave", ".cal",
".cin", ".cpc", ".cpt", ".dds", ".dpx", ".ecw", ".exr", ".fits",
@ -35,6 +35,42 @@ VIDEO_EXTENSIONS = [
REVIEW_EXTENSIONS = IMAGE_EXTENSIONS + VIDEO_EXTENSIONS
class HiddenTrayPublishCreator(HiddenCreator):
host_name = "traypublisher"
def collect_instances(self):
for instance_data in list_instances():
creator_id = instance_data.get("creator_identifier")
if creator_id == self.identifier:
instance = CreatedInstance.from_existing(
instance_data, self
)
self._add_instance_to_context(instance)
def update_instances(self, update_list):
update_instances(update_list)
def remove_instances(self, instances):
remove_instances(instances)
for instance in instances:
self._remove_instance_from_context(instance)
def _store_new_instance(self, new_instance):
"""Tray publisher specific method to store instance.
Instance is stored into "workfile" of traypublisher and also add it
to CreateContext.
Args:
new_instance (CreatedInstance): Instance that should be stored.
"""
# Host implementation of storing metadata about instance
HostContext.add_instance(new_instance.data_to_store())
# Add instance to current context
self._add_instance_to_context(new_instance)
class TrayPublishCreator(Creator):
create_allow_context_change = True
host_name = "traypublisher"
@ -56,10 +92,6 @@ class TrayPublishCreator(Creator):
for instance in instances:
self._remove_instance_from_context(instance)
def get_pre_create_attr_defs(self):
# Use the same attributes as the instance attributes
return self.get_instance_attr_defs()
def _store_new_instance(self, new_instance):
"""Tray publisher specific method to store instance.
@ -81,15 +113,6 @@ class SettingsCreator(TrayPublishCreator):
extensions = []
def collect_instances(self):
for instance_data in list_instances():
creator_id = instance_data.get("creator_identifier")
if creator_id == self.identifier:
instance = CreatedInstance.from_existing(
instance_data, self
)
self._add_instance_to_context(instance)
def create(self, subset_name, data, pre_create_data):
# Pass precreate data to creator attributes
data["creator_attributes"] = pre_create_data
@ -120,6 +143,10 @@ class SettingsCreator(TrayPublishCreator):
)
]
def get_pre_create_attr_defs(self):
# Use the same attributes as the instance attributes
return self.get_instance_attr_defs()
@classmethod
def from_settings(cls, item_data):
identifier = item_data["identifier"]

View file

@ -0,0 +1,869 @@
import os
from copy import deepcopy
from pprint import pformat
import opentimelineio as otio
from openpype.client import (
get_asset_by_name,
get_project
)
from openpype.hosts.traypublisher.api.plugin import (
TrayPublishCreator,
HiddenTrayPublishCreator
)
from openpype.hosts.traypublisher.api.editorial import (
ShotMetadataSolver
)
from openpype.pipeline import CreatedInstance
from openpype.lib import (
get_ffprobe_data,
convert_ffprobe_fps_value,
FileDef,
TextDef,
NumberDef,
EnumDef,
BoolDef,
UISeparatorDef,
UILabelDef
)
CLIP_ATTR_DEFS = [
EnumDef(
"fps",
items={
"from_selection": "From selection",
23.997: "23.976",
24: "24",
25: "25",
29.97: "29.97",
30: "30"
},
label="FPS"
),
NumberDef(
"workfile_start_frame",
default=1001,
label="Workfile start frame"
),
NumberDef(
"handle_start",
default=0,
label="Handle start"
),
NumberDef(
"handle_end",
default=0,
label="Handle end"
)
]
class EditorialClipInstanceCreatorBase(HiddenTrayPublishCreator):
""" Wrapper class for clip family creators
Args:
HiddenTrayPublishCreator (BaseCreator): hidden supporting class
"""
host_name = "traypublisher"
def create(self, instance_data, source_data=None):
self.log.info(f"instance_data: {instance_data}")
subset_name = instance_data["subset"]
# Create new instance
new_instance = CreatedInstance(
self.family, subset_name, instance_data, self
)
self.log.info(f"instance_data: {pformat(new_instance.data)}")
self._store_new_instance(new_instance)
return new_instance
def get_instance_attr_defs(self):
return [
BoolDef(
"add_review_family",
default=True,
label="Review"
)
]
class EditorialShotInstanceCreator(EditorialClipInstanceCreatorBase):
""" Shot family class
The shot metadata instance carrier.
Args:
EditorialClipInstanceCreatorBase (BaseCreator): hidden supporting class
"""
identifier = "editorial_shot"
family = "shot"
label = "Editorial Shot"
def get_instance_attr_defs(self):
attr_defs = [
TextDef(
"asset_name",
label="Asset name",
)
]
attr_defs.extend(CLIP_ATTR_DEFS)
return attr_defs
class EditorialPlateInstanceCreator(EditorialClipInstanceCreatorBase):
""" Plate family class
Plate representation instance.
Args:
EditorialClipInstanceCreatorBase (BaseCreator): hidden supporting class
"""
identifier = "editorial_plate"
family = "plate"
label = "Editorial Plate"
class EditorialAudioInstanceCreator(EditorialClipInstanceCreatorBase):
""" Audio family class
Audio representation instance.
Args:
EditorialClipInstanceCreatorBase (BaseCreator): hidden supporting class
"""
identifier = "editorial_audio"
family = "audio"
label = "Editorial Audio"
class EditorialReviewInstanceCreator(EditorialClipInstanceCreatorBase):
""" Review family class
Review representation instance.
Args:
EditorialClipInstanceCreatorBase (BaseCreator): hidden supporting class
"""
identifier = "editorial_review"
family = "review"
label = "Editorial Review"
class EditorialSimpleCreator(TrayPublishCreator):
""" Editorial creator class
Simple workflow creator. It dissects the input
video file into clip chunks and converts each to
the format defined in Settings for each subset preset.
Args:
TrayPublishCreator (Creator): Tray publisher plugin class
"""
label = "Editorial Simple"
family = "editorial"
identifier = "editorial_simple"
default_variants = [
"main"
]
description = "Editorial files to generate shots."
detailed_description = """
Supports publishing new shots to a project
or updating already created ones. Publishing will create an OTIO file.
"""
icon = "fa.file"
def __init__(
self, project_settings, *args, **kwargs
):
super(EditorialSimpleCreator, self).__init__(
project_settings, *args, **kwargs
)
editorial_creators = deepcopy(
project_settings["traypublisher"]["editorial_creators"]
)
# get this creator settings by identifier
self._creator_settings = editorial_creators.get(self.identifier)
clip_name_tokenizer = self._creator_settings["clip_name_tokenizer"]
shot_rename = self._creator_settings["shot_rename"]
shot_hierarchy = self._creator_settings["shot_hierarchy"]
shot_add_tasks = self._creator_settings["shot_add_tasks"]
self._shot_metadata_solver = ShotMetadataSolver(
clip_name_tokenizer,
shot_rename,
shot_hierarchy,
shot_add_tasks,
self.log
)
# try to set main attributes from settings
if self._creator_settings.get("default_variants"):
self.default_variants = self._creator_settings["default_variants"]
def create(self, subset_name, instance_data, pre_create_data):
allowed_family_presets = self._get_allowed_family_presets(
pre_create_data)
clip_instance_properties = {
k: v for k, v in pre_create_data.items()
if k != "sequence_filepath_data"
if k not in [
i["family"] for i in self._creator_settings["family_presets"]
]
}
# Create otio editorial instance
asset_name = instance_data["asset"]
asset_doc = get_asset_by_name(self.project_name, asset_name)
self.log.info(pre_create_data["fps"])
if pre_create_data["fps"] == "from_selection":
# get asset doc data attributes
fps = asset_doc["data"]["fps"]
else:
fps = float(pre_create_data["fps"])
instance_data.update({
"fps": fps
})
# get path of sequence
sequence_path_data = pre_create_data["sequence_filepath_data"]
media_path_data = pre_create_data["media_filepaths_data"]
sequence_path = self._get_path_from_file_data(sequence_path_data)
media_path = self._get_path_from_file_data(media_path_data)
# get otio timeline
otio_timeline = self._create_otio_timeline(
sequence_path, fps)
# Create all clip instances
clip_instance_properties.update({
"fps": fps,
"parent_asset_name": asset_name,
"variant": instance_data["variant"]
})
# create clip instances
self._get_clip_instances(
otio_timeline,
media_path,
clip_instance_properties,
family_presets=allowed_family_presets
)
# create otio editorial instance
self._create_otio_instance(
subset_name, instance_data,
sequence_path, media_path,
otio_timeline
)
def _create_otio_instance(
self,
subset_name,
data,
sequence_path,
media_path,
otio_timeline
):
"""Otio instance creating function
Args:
subset_name (str): name of subset
data (dict): instance data
sequence_path (str): path to sequence file
media_path (str): path to media file
otio_timeline (otio.Timeline): otio timeline object
"""
# Pass precreate data to creator attributes
data.update({
"sequenceFilePath": sequence_path,
"editorialSourcePath": media_path,
"otioTimeline": otio.adapters.write_to_string(otio_timeline)
})
new_instance = CreatedInstance(
self.family, subset_name, data, self
)
self._store_new_instance(new_instance)
def _create_otio_timeline(self, sequence_path, fps):
"""Creating otio timeline from sequence path
Args:
sequence_path (str): path to sequence file
fps (float): frame per second
Returns:
otio.Timeline: otio timeline object
"""
# get editorial sequence file into otio timeline object
extension = os.path.splitext(sequence_path)[1]
kwargs = {}
if extension == ".edl":
# EDL has no frame rate embedded so it needs an explicit
# frame rate, else 24 is assumed.
kwargs["rate"] = fps
kwargs["ignore_timecode_mismatch"] = True
self.log.info(f"kwargs: {kwargs}")
return otio.adapters.read_from_file(sequence_path, **kwargs)
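# Example: only EDL needs the explicit rate; other adapters read it from
# the file itself (paths are hypothetical).
# >>> otio.adapters.read_from_file(
# ...     "/edit/cut.edl", rate=25.0, ignore_timecode_mismatch=True)
# >>> otio.adapters.read_from_file("/edit/cut.xml")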
def _get_path_from_file_data(self, file_path_data):
"""Converting creator path data to single path string
Args:
file_path_data (FileDefItem): creator path data inputs
Raises:
FileExistsError: in case nothing had been set
Returns:
str: path string
"""
# TODO: just temporarily solving only one media file
if isinstance(file_path_data, list):
file_path_data = file_path_data.pop()
if len(file_path_data["filenames"]) == 0:
raise FileExistsError(
f"File path was not added: {file_path_data}")
return os.path.join(
file_path_data["directory"], file_path_data["filenames"][0])
def _get_clip_instances(
self,
otio_timeline,
media_path,
instance_data,
family_presets
):
"""Helping function fro creating clip instance
Args:
otio_timeline (otio.Timeline): otio timeline object
media_path (str): media file path string
instance_data (dict): clip instance data
family_presets (list): list of family preset dicts from settings
"""
self.asset_name_check = []
tracks = otio_timeline.each_child(
descended_from_type=otio.schema.Track
)
# media data for audio stream and reference solving
media_data = self._get_media_source_metadata(media_path)
for track in tracks:
self.log.debug(f"track.name: {track.name}")
try:
track_start_frame = (
abs(track.source_range.start_time.value)
)
self.log.debug(f"track_start_frame: {track_start_frame}")
track_start_frame -= self.timeline_frame_start
except AttributeError:
track_start_frame = 0
self.log.debug(f"track_start_frame: {track_start_frame}")
for clip in track.each_child():
if not self._validate_clip_for_processing(clip):
continue
# get available frames info to clip data
self._create_otio_reference(clip, media_path, media_data)
# convert timeline range to source range
self._restore_otio_source_range(clip)
base_instance_data = self._get_base_instance_data(
clip,
instance_data,
track_start_frame
)
parenting_data = {
"instance_label": None,
"instance_id": None
}
self.log.info((
"Creating subsets from presets: \n"
f"{pformat(family_presets)}"
))
for _fpreset in family_presets:
# exclude audio family if no audio stream
if (
_fpreset["family"] == "audio"
and not media_data.get("audio")
):
continue
instance = self._make_subset_instance(
clip,
_fpreset,
deepcopy(base_instance_data),
parenting_data
)
self.log.debug(f"{pformat(dict(instance.data))}")
def _restore_otio_source_range(self, otio_clip):
"""Infusing source range.
Otio clip is missing proper source clip range so
here we add them from from parent timeline frame range.
Args:
otio_clip (otio.Clip): otio clip object
"""
otio_clip.source_range = otio_clip.range_in_parent()
def _create_otio_reference(
self,
otio_clip,
media_path,
media_data
):
"""Creating otio reference at otio clip.
Args:
otio_clip (otio.Clip): otio clip object
media_path (str): media file path string
media_data (dict): media metadata
"""
start_frame = media_data["start_frame"]
frame_duration = media_data["duration"]
fps = media_data["fps"]
available_range = otio.opentime.TimeRange(
start_time=otio.opentime.RationalTime(
start_frame, fps),
duration=otio.opentime.RationalTime(
frame_duration, fps)
)
# in case old OTIO or video file create `ExternalReference`
media_reference = otio.schema.ExternalReference(
target_url=media_path,
available_range=available_range
)
otio_clip.media_reference = media_reference
def _get_media_source_metadata(self, path):
"""Get all available metadata from file
Args:
path (str): media file path string
Raises:
AssertionError: ffprobe couldn't read metadata
Returns:
dict: media file metadata
"""
return_data = {}
try:
media_data = get_ffprobe_data(
path, self.log
)
self.log.debug(f"__ media_data: {pformat(media_data)}")
# get video stream data
video_stream = media_data["streams"][0]
return_data = {
"video": True,
"start_frame": 0,
"duration": int(video_stream["nb_frames"]),
"fps": float(
convert_ffprobe_fps_value(
video_stream["r_frame_rate"]
)
)
}
# get audio streams data
audio_stream = [
stream for stream in media_data["streams"]
if stream["codec_type"] == "audio"
]
if audio_stream:
return_data["audio"] = True
except Exception as exc:
raise AssertionError((
"FFprobe couldn't read information about input file: "
f"\"{path}\". Error message: {exc}"
))
return return_data
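# Example of the returned structure for a movie with one audio track;
# the values depend entirely on the probed file:
# {
#     "video": True,
#     "start_frame": 0,
#     "duration": 1500,
#     "fps": 25.0,
#     "audio": True
# }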
def _make_subset_instance(
self,
otio_clip,
preset,
instance_data,
parenting_data
):
"""Making subset instance from input preset
Args:
otio_clip (otio.Clip): otio clip object
preset (dict): single family preset
instance_data (dict): instance data
parenting_data (dict): shot instance parent data
Returns:
CreatedInstance: creator instance object
"""
family = preset["family"]
label = self._make_subset_naming(
preset,
instance_data
)
instance_data["label"] = label
# add file extension filter only if it is not shot family
if family == "shot":
instance_data["otioClip"] = (
otio.adapters.write_to_string(otio_clip))
c_instance = self.create_context.creators[
"editorial_shot"].create(
instance_data)
parenting_data.update({
"instance_label": label,
"instance_id": c_instance.data["instance_id"]
})
else:
# add review family if defined
instance_data.update({
"outputFileType": preset["output_file_type"],
"parent_instance_id": parenting_data["instance_id"],
"creator_attributes": {
"parent_instance": parenting_data["instance_label"],
"add_review_family": preset.get("review")
}
})
creator_identifier = f"editorial_{family}"
editorial_clip_creator = self.create_context.creators[
creator_identifier]
c_instance = editorial_clip_creator.create(
instance_data)
return c_instance
def _make_subset_naming(
self,
preset,
instance_data
):
""" Subset name maker
Args:
preset (dict): single preset item
instance_data (dict): instance data
Returns:
str: label string
"""
shot_name = instance_data["shotName"]
variant_name = instance_data["variant"]
family = preset["family"]
# get variant name from preset or from inheritance
_variant_name = preset.get("variant") or variant_name
self.log.debug(f"__ family: {family}")
self.log.debug(f"__ preset: {preset}")
# subset name
subset_name = "{}{}".format(
family, _variant_name.capitalize()
)
label = "{}_{}".format(
shot_name,
subset_name
)
instance_data.update({
"family": family,
"label": label,
"variant": _variant_name,
"subset": subset_name,
})
return label
def _get_base_instance_data(
self,
otio_clip,
instance_data,
track_start_frame,
):
""" Factoring basic set of instance data.
Args:
otio_clip (otio.Clip): otio clip object
instance_data (dict): precreate instance data
track_start_frame (int): track start frame
Returns:
dict: instance data
"""
# get clip instance properties
parent_asset_name = instance_data["parent_asset_name"]
handle_start = instance_data["handle_start"]
handle_end = instance_data["handle_end"]
timeline_offset = instance_data["timeline_offset"]
workfile_start_frame = instance_data["workfile_start_frame"]
fps = instance_data["fps"]
variant_name = instance_data["variant"]
# basic unique asset name
clip_name = os.path.splitext(otio_clip.name)[0].lower()
project_doc = get_project(self.project_name)
shot_name, shot_metadata = self._shot_metadata_solver.generate_data(
clip_name,
{
"anatomy_data": {
"project": {
"name": self.project_name,
"code": project_doc["data"]["code"]
},
"parent": parent_asset_name,
"app": self.host_name
},
"selected_asset_doc": get_asset_by_name(
self.project_name, parent_asset_name),
"project_doc": project_doc
}
)
self._validate_name_uniqueness(shot_name)
timing_data = self._get_timing_data(
otio_clip,
timeline_offset,
track_start_frame,
workfile_start_frame
)
# create creator attributes
creator_attributes = {
"asset_name": shot_name,
"Parent hierarchy path": shot_metadata["hierarchy"],
"workfile_start_frame": workfile_start_frame,
"fps": fps,
"handle_start": int(handle_start),
"handle_end": int(handle_end)
}
creator_attributes.update(timing_data)
# create shared new instance data
base_instance_data = {
"shotName": shot_name,
"variant": variant_name,
# HACK: just a temporary bug workaround
# TODO: should look up shot name for update
"asset": parent_asset_name,
"task": "",
"newAssetPublishing": True,
# parent time properties
"trackStartFrame": track_start_frame,
"timelineOffset": timeline_offset,
# creator_attributes
"creator_attributes": creator_attributes
}
# add hierarchy shot metadata
base_instance_data.update(shot_metadata)
return base_instance_data
def _get_timing_data(
self,
otio_clip,
timeline_offset,
track_start_frame,
workfile_start_frame
):
"""Returning available timing data
Args:
otio_clip (otio.Clip): otio clip object
timeline_offset (int): offset value
track_start_frame (int): starting frame input
workfile_start_frame (int): start frame for shot's workfiles
Returns:
dict: timing metadata
"""
# frame ranges data
clip_in = otio_clip.range_in_parent().start_time.value
clip_in += track_start_frame
clip_out = otio_clip.range_in_parent().end_time_inclusive().value
clip_out += track_start_frame
self.log.info(f"clip_in: {clip_in} | clip_out: {clip_out}")
# add offset in case there is any
self.log.debug(f"__ timeline_offset: {timeline_offset}")
if timeline_offset:
clip_in += timeline_offset
clip_out += timeline_offset
clip_duration = otio_clip.duration().value
self.log.info(f"clip duration: {clip_duration}")
source_in = otio_clip.trimmed_range().start_time.value
source_out = source_in + clip_duration
# define starting frame for future shot
frame_start = (
clip_in if workfile_start_frame is None
else workfile_start_frame
)
frame_end = frame_start + (clip_duration - 1)
return {
"frameStart": int(frame_start),
"frameEnd": int(frame_end),
"clipIn": int(clip_in),
"clipOut": int(clip_out),
"clipDuration": int(otio_clip.duration().value),
"sourceIn": int(source_in),
"sourceOut": int(source_out)
}
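# Worked example (hypothetical numbers): a clip placed at frame 100 of
# its track, 48 frames long, with track_start_frame=0, timeline_offset=0
# and workfile_start_frame=1001, yields clipIn=100, clipOut=147
# (end inclusive), clipDuration=48, frameStart=1001 and
# frameEnd=1001 + (48 - 1) = 1048.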
def _get_allowed_family_presets(self, pre_create_data):
""" Filter out allowed family presets.
Args:
pre_create_data (dict): precreate attributes inputs
Returns:
list: list of dicts with preset items
"""
self.log.debug(f"__ pre_create_data: {pre_create_data}")
return [
{"family": "shot"},
*[
preset for preset in self._creator_settings["family_presets"]
if pre_create_data[preset["family"]]
]
]
def _validate_clip_for_processing(self, otio_clip):
"""Validate otio clip attribues
Args:
otio_clip (otio.Clip): otio clip object
Returns:
bool: True if all conditions pass
"""
if otio_clip.name is None:
return False
if isinstance(otio_clip, otio.schema.Gap):
return False
# skip all generators like black empty
if isinstance(
otio_clip.media_reference,
otio.schema.GeneratorReference):
return False
# Transitions are ignored, because Clips have the full frame
# range.
if isinstance(otio_clip, otio.schema.Transition):
return False
return True
def _validate_name_uniqueness(self, name):
""" Validating name uniqueness.
In context of other clip names in sequence file.
Args:
name (str): shot name string
"""
if name not in self.asset_name_check:
self.asset_name_check.append(name)
else:
self.log.warning(
f"Duplicate shot name: {name}! "
"Please check names in the input sequence files."
)
def get_pre_create_attr_defs(self):
""" Creating pre-create attributes at creator plugin.
Returns:
list: list of attribute object instances
"""
# Use the same attributes as the instance attributes
attr_defs = [
FileDef(
"sequence_filepath_data",
folders=False,
extensions=[
".edl",
".xml",
".aaf",
".fcpxml"
],
allow_sequences=False,
single_item=True,
label="Sequence file",
),
FileDef(
"media_filepaths_data",
folders=False,
extensions=[
".mov",
".mp4",
".wav"
],
allow_sequences=False,
single_item=False,
label="Media files",
),
# TODO: perhaps timecode and fps inputs would be better
NumberDef(
"timeline_offset",
default=0,
label="Timeline offset"
),
UISeparatorDef(),
UILabelDef("Clip instance attributes"),
UISeparatorDef()
]
# add family preset switchers
attr_defs.extend(
BoolDef(_var["family"], label=_var["family"])
for _var in self._creator_settings["family_presets"]
)
attr_defs.append(UISeparatorDef())
attr_defs.extend(CLIP_ATTR_DEFS)
return attr_defs

View file

@ -1,6 +1,7 @@
import os
from openpype.api import get_project_settings, Logger
from openpype.api import get_project_settings
log = Logger.get_logger(__name__)
def initialize():
@ -13,6 +14,7 @@ def initialize():
global_variables = globals()
for item in simple_creators:
dynamic_plugin = SettingsCreator.from_settings(item)
global_variables[dynamic_plugin.__name__] = dynamic_plugin

View file

@ -0,0 +1,36 @@
from pprint import pformat
import pyblish.api
class CollectClipInstance(pyblish.api.InstancePlugin):
"""Collect clip instances and resolve its parent"""
label = "Collect Clip Instances"
order = pyblish.api.CollectorOrder - 0.081
hosts = ["traypublisher"]
families = ["plate", "review", "audio"]
def process(self, instance):
creator_identifier = instance.data["creator_identifier"]
if creator_identifier not in [
"editorial_plate",
"editorial_audio",
"editorial_review"
]:
return
instance.data["families"].append("clip")
parent_instance_id = instance.data["parent_instance_id"]
edit_shared_data = instance.context.data["editorialSharedData"]
instance.data.update(
edit_shared_data[parent_instance_id]
)
if "editorialSourcePath" in instance.context.data.keys():
instance.data["editorialSourcePath"] = (
instance.context.data["editorialSourcePath"])
instance.data["families"].append("trimming")
self.log.debug(pformat(instance.data))

View file

@ -0,0 +1,48 @@
import os
from pprint import pformat
import pyblish.api
import opentimelineio as otio
class CollectEditorialInstance(pyblish.api.InstancePlugin):
"""Collect data for instances created by settings creators."""
label = "Collect Editorial Instances"
order = pyblish.api.CollectorOrder - 0.1
hosts = ["traypublisher"]
families = ["editorial"]
def process(self, instance):
if "families" not in instance.data:
instance.data["families"] = []
if "representations" not in instance.data:
instance.data["representations"] = []
fpath = instance.data["sequenceFilePath"]
otio_timeline_string = instance.data.pop("otioTimeline")
otio_timeline = otio.adapters.read_from_string(
otio_timeline_string)
instance.context.data["otioTimeline"] = otio_timeline
instance.context.data["editorialSourcePath"] = (
instance.data["editorialSourcePath"])
self.log.info(fpath)
instance.data["stagingDir"] = os.path.dirname(fpath)
_, ext = os.path.splitext(fpath)
instance.data["representations"].append({
"ext": ext[1:],
"name": ext[1:],
"stagingDir": instance.data["stagingDir"],
"files": os.path.basename(fpath)
})
self.log.debug("Created Editorial Instance {}".format(
pformat(instance.data)
))

View file

@ -0,0 +1,30 @@
import pyblish.api
class CollectEditorialReviewable(pyblish.api.InstancePlugin):
""" Collect review input from user.
Adds the input to instance data.
"""
label = "Collect Editorial Reviewable"
order = pyblish.api.CollectorOrder
families = ["plate", "review", "audio"]
hosts = ["traypublisher"]
def process(self, instance):
creator_identifier = instance.data["creator_identifier"]
if creator_identifier not in [
"editorial_plate",
"editorial_audio",
"editorial_review"
]:
return
creator_attributes = instance.data["creator_attributes"]
if creator_attributes["add_review_family"]:
instance.data["families"].append("review")
self.log.debug("instance.data {}".format(instance.data))

View file

@ -0,0 +1,213 @@
from pprint import pformat
import pyblish.api
import opentimelineio as otio
class CollectShotInstance(pyblish.api.InstancePlugin):
""" Collect shot instances
Resolves user inputs from creator attributes
into instance data.
"""
label = "Collect Shot Instances"
order = pyblish.api.CollectorOrder - 0.09
hosts = ["traypublisher"]
families = ["shot"]
SHARED_KEYS = [
"asset",
"fps",
"handleStart",
"handleEnd",
"frameStart",
"frameEnd",
"clipIn",
"clipOut",
"clipDuration",
"sourceIn",
"sourceOut",
"otioClip",
"workfileFrameStart"
]
def process(self, instance):
self.log.debug(pformat(instance.data))
creator_identifier = instance.data["creator_identifier"]
if "editorial" not in creator_identifier:
return
# get otio clip object
otio_clip = self._get_otio_clip(instance)
instance.data["otioClip"] = otio_clip
# first solve the inputs from creator attr
data = self._solve_inputs_to_data(instance)
instance.data.update(data)
# distribute all shared keys to clips instances
self._distribute_shared_data(instance)
self._solve_hierarchy_context(instance)
self.log.debug(pformat(instance.data))
def _get_otio_clip(self, instance):
""" Converts otio string data.
Convert them to proper otio object
and finds its equivalent at otio timeline.
This process is a hack to support also
resolving parent range.
Args:
instance (obj): publishing instance
Returns:
otio.Clip: otio clip object
"""
context = instance.context
# convert otio clip from string to object
otio_clip_string = instance.data.pop("otioClip")
otio_clip = otio.adapters.read_from_string(
otio_clip_string)
otio_timeline = context.data["otioTimeline"]
clips = [
clip for clip in otio_timeline.each_child(
descended_from_type=otio.schema.Clip)
if clip.name == otio_clip.name
]
otio_clip = clips.pop()
self.log.debug(f"__ otioclip.parent: {otio_clip.parent}")
return otio_clip
def _distribute_shared_data(self, instance):
""" Distribute all defined keys.
All data are shared between all related
instances in context.
Args:
instance (obj): publishing instance
"""
context = instance.context
instance_id = instance.data["instance_id"]
if not context.data.get("editorialSharedData"):
context.data["editorialSharedData"] = {}
context.data["editorialSharedData"][instance_id] = {
_k: _v for _k, _v in instance.data.items()
if _k in self.SHARED_KEYS
}
def _solve_inputs_to_data(self, instance):
""" Resolve all user inputs into instance data.
Args:
instance (obj): publishing instance
Returns:
dict: instance data updating data
"""
_cr_attrs = instance.data["creator_attributes"]
workfile_start_frame = _cr_attrs["workfile_start_frame"]
frame_start = _cr_attrs["frameStart"]
frame_end = _cr_attrs["frameEnd"]
frame_dur = frame_end - frame_start
return {
"asset": _cr_attrs["asset_name"],
"fps": float(_cr_attrs["fps"]),
"handleStart": _cr_attrs["handle_start"],
"handleEnd": _cr_attrs["handle_end"],
"frameStart": workfile_start_frame,
"frameEnd": workfile_start_frame + frame_dur,
"clipIn": _cr_attrs["clipIn"],
"clipOut": _cr_attrs["clipOut"],
"clipDuration": _cr_attrs["clipDuration"],
"sourceIn": _cr_attrs["sourceIn"],
"sourceOut": _cr_attrs["sourceOut"],
"workfileFrameStart": workfile_start_frame
}
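# Worked example (hypothetical creator attribute values):
# workfile_start_frame=1001, frameStart=86400, frameEnd=86447
# -> frame_dur = 47, so the shot gets frameStart=1001 and
# frameEnd=1001 + 47 = 1048, while clip and source ranges pass through.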
def _solve_hierarchy_context(self, instance):
""" Adding hierarchy data to context shared data.
Args:
instance (obj): publishing instance
"""
context = instance.context
final_context = (
context.data["hierarchyContext"]
if context.data.get("hierarchyContext")
else {}
)
name = instance.data["asset"]
# get handles
handle_start = int(instance.data["handleStart"])
handle_end = int(instance.data["handleEnd"])
in_info = {
"entity_type": "Shot",
"custom_attributes": {
"handleStart": handle_start,
"handleEnd": handle_end,
"frameStart": instance.data["frameStart"],
"frameEnd": instance.data["frameEnd"],
"clipIn": instance.data["clipIn"],
"clipOut": instance.data["clipOut"],
"fps": instance.data["fps"]
},
"tasks": instance.data["tasks"]
}
parents = instance.data.get('parents', [])
self.log.debug(f"parents: {pformat(parents)}")
actual = {name: in_info}
for parent in reversed(parents):
parent_name = parent["entity_name"]
next_dict = {
parent_name: {
"entity_type": parent["entity_type"],
"childs": actual
}
}
actual = next_dict
final_context = self._update_dict(final_context, actual)
# adding hierarchy context to instance
context.data["hierarchyContext"] = final_context
self.log.debug(pformat(final_context))
def _update_dict(self, ex_dict, new_dict):
""" Recursion function
Updating nested data with another nested data.
Args:
ex_dict (dict): nested data
new_dict (dict): nested data
Returns:
dict: updated nested data
"""
for key in ex_dict:
if key in new_dict and isinstance(ex_dict[key], dict):
new_dict[key] = self._update_dict(ex_dict[key], new_dict[key])
elif not ex_dict.get(key) or not new_dict.get(key):
new_dict[key] = ex_dict[key]
return new_dict
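# Example of the recursive merge (hypothetical hierarchy contexts):
# >>> ex = {"demo": {"childs": {"sq01": {"childs": {"sh010": {}}}}}}
# >>> new = {"demo": {"childs": {"sq01": {"childs": {"sh020": {}}}}}}
# >>> self._update_dict(ex, new)["demo"]["childs"]["sq01"]["childs"].keys()
# dict_keys(['sh020', 'sh010'])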

View file

@ -1,17 +1,16 @@
import os
from openpype.client import get_project, get_asset_by_name
from openpype.lib import (
StringTemplate,
get_workfile_template_key_from_context,
get_workdir_data,
get_last_workfile_with_version,
)
from openpype.lib import StringTemplate
from openpype.pipeline import (
registered_host,
legacy_io,
Anatomy,
)
from openpype.pipeline.workfile import (
get_workfile_template_key_from_context,
get_last_workfile_with_version,
)
from openpype.pipeline.template_data import get_template_data_with_names
from openpype.hosts.tvpaint.api import lib, pipeline, plugin
@ -54,19 +53,17 @@ class LoadWorkfile(plugin.Loader):
asset_name = legacy_io.Session["AVALON_ASSET"]
task_name = legacy_io.Session["AVALON_TASK"]
project_doc = get_project(project_name)
asset_doc = get_asset_by_name(project_name, asset_name)
template_key = get_workfile_template_key_from_context(
asset_name,
task_name,
host_name,
project_name=project_name,
dbcon=legacy_io
project_name=project_name
)
anatomy = Anatomy(project_name)
data = get_workdir_data(project_doc, asset_doc, task_name, host_name)
data = get_template_data_with_names(
project_name, asset_name, task_name, host_name
)
data["root"] = anatomy.roots
file_template = anatomy.templates[template_key]["file"]

View file

@ -1,13 +1,13 @@
# -*- coding: utf-8 -*-
"""Hook to launch Unreal and prepare projects."""
import os
import copy
from pathlib import Path
from openpype.lib import (
PreLaunchHook,
ApplicationLaunchFailed,
ApplicationNotFound,
get_workdir_data,
get_workfile_template_key
)
import openpype.hosts.unreal.lib as unreal_lib
@ -35,18 +35,13 @@ class UnrealPrelaunchHook(PreLaunchHook):
return last_workfile.name
# Prepare data for fill data and for getting workfile template key
task_name = self.data["task_name"]
anatomy = self.data["anatomy"]
asset_doc = self.data["asset_doc"]
project_doc = self.data["project_doc"]
asset_tasks = asset_doc.get("data", {}).get("tasks") or {}
task_info = asset_tasks.get(task_name) or {}
task_type = task_info.get("type")
# Use already prepared workdir data
workdir_data = copy.deepcopy(self.data["workdir_data"])
task_type = workdir_data.get("task", {}).get("type")
workdir_data = get_workdir_data(
project_doc, asset_doc, task_name, self.host_name
)
# QUESTION raise exception if version is part of filename template?
workdir_data["version"] = 1
workdir_data["ext"] = "uproject"

View file

@ -20,6 +20,34 @@ class SkeletalMeshAlembicLoader(plugin.Loader):
icon = "cube"
color = "orange"
def get_task(self, filename, asset_dir, asset_name, replace):
task = unreal.AssetImportTask()
options = unreal.AbcImportSettings()
sm_settings = unreal.AbcStaticMeshSettings()
conversion_settings = unreal.AbcConversionSettings(
preset=unreal.AbcConversionPreset.CUSTOM,
flip_u=False, flip_v=False,
rotation=[0.0, 0.0, 0.0],
scale=[1.0, 1.0, 1.0])
task.set_editor_property('filename', filename)
task.set_editor_property('destination_path', asset_dir)
task.set_editor_property('destination_name', asset_name)
task.set_editor_property('replace_existing', replace)
task.set_editor_property('automated', True)
task.set_editor_property('save', True)
# set import options here
# Unreal 4.24 ignores the settings. It works with Unreal 4.26
options.set_editor_property(
'import_type', unreal.AlembicImportType.SKELETAL)
options.static_mesh_settings = sm_settings
options.conversion_settings = conversion_settings
task.options = options
return task
def load(self, context, name, namespace, data):
"""Load and containerise representation into Content Browser.
@ -50,36 +78,24 @@ class SkeletalMeshAlembicLoader(plugin.Loader):
asset_name = "{}_{}".format(asset, name)
else:
asset_name = "{}".format(name)
version = context.get('version').get('name')
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
"{}/{}/{}".format(root, asset, name), suffix="")
f"{root}/{asset}/{name}_v{version:03d}", suffix="")
container_name += suffix
unreal.EditorAssetLibrary.make_directory(asset_dir)
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
unreal.EditorAssetLibrary.make_directory(asset_dir)
task = unreal.AssetImportTask()
task = self.get_task(self.fname, asset_dir, asset_name, False)
task.set_editor_property('filename', self.fname)
task.set_editor_property('destination_path', asset_dir)
task.set_editor_property('destination_name', asset_name)
task.set_editor_property('replace_existing', False)
task.set_editor_property('automated', True)
task.set_editor_property('save', True)
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501
# set import options here
# Unreal 4.24 ignores the settings. It works with Unreal 4.26
options = unreal.AbcImportSettings()
options.set_editor_property(
'import_type', unreal.AlembicImportType.SKELETAL)
task.options = options
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501
# Create Asset Container
unreal_pipeline.create_container(
container=container_name, path=asset_dir)
data = {
"schema": "openpype:container-2.0",
@ -110,23 +126,8 @@ class SkeletalMeshAlembicLoader(plugin.Loader):
source_path = get_representation_path(representation)
destination_path = container["namespace"]
task = unreal.AssetImportTask()
task = self.get_task(source_path, destination_path, name, True)
task.set_editor_property('filename', source_path)
task.set_editor_property('destination_path', destination_path)
# strip suffix
task.set_editor_property('destination_name', name)
task.set_editor_property('replace_existing', True)
task.set_editor_property('automated', True)
task.set_editor_property('save', True)
# set import options here
# Unreal 4.24 ignores the settings. It works with Unreal 4.26
options = unreal.AbcImportSettings()
options.set_editor_property(
'import_type', unreal.AlembicImportType.SKELETAL)
task.options = options
# do import fbx and replace existing data
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
container_path = "{}/{}".format(container["namespace"],

View file

@ -24,7 +24,11 @@ class StaticMeshAlembicLoader(plugin.Loader):
task = unreal.AssetImportTask()
options = unreal.AbcImportSettings()
sm_settings = unreal.AbcStaticMeshSettings()
conversion_settings = unreal.AbcConversionSettings()
conversion_settings = unreal.AbcConversionSettings(
preset=unreal.AbcConversionPreset.CUSTOM,
flip_u=False, flip_v=False,
rotation=[0.0, 0.0, 0.0],
scale=[1.0, 1.0, 1.0])
task.set_editor_property('filename', filename)
task.set_editor_property('destination_path', asset_dir)
@ -40,13 +44,6 @@ class StaticMeshAlembicLoader(plugin.Loader):
sm_settings.set_editor_property('merge_meshes', True)
conversion_settings.set_editor_property('flip_u', False)
conversion_settings.set_editor_property('flip_v', True)
conversion_settings.set_editor_property(
'scale', unreal.Vector(x=100.0, y=100.0, z=100.0))
conversion_settings.set_editor_property(
'rotation', unreal.Vector(x=-90.0, y=0.0, z=180.0))
options.static_mesh_settings = sm_settings
options.conversion_settings = conversion_settings
task.options = options
@ -83,22 +80,24 @@ class StaticMeshAlembicLoader(plugin.Loader):
asset_name = "{}_{}".format(asset, name)
else:
asset_name = "{}".format(name)
version = context.get('version').get('name')
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
"{}/{}/{}".format(root, asset, name), suffix="")
f"{root}/{asset}/{name}_v{version:03d}", suffix="")
container_name += suffix
unreal.EditorAssetLibrary.make_directory(asset_dir)
if not unreal.EditorAssetLibrary.does_directory_exist(asset_dir):
unreal.EditorAssetLibrary.make_directory(asset_dir)
task = self.get_task(self.fname, asset_dir, asset_name, False)
unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task]) # noqa: E501
# Create Asset Container
unreal_pipeline.create_container(
container=container_name, path=asset_dir)
data = {
"schema": "openpype:container-2.0",

View file

@ -9,7 +9,10 @@ from unreal import EditorLevelLibrary
from unreal import EditorLevelUtils
from unreal import AssetToolsHelpers
from unreal import FBXImportType
from unreal import MathLibrary as umath
from unreal import MovieSceneLevelVisibilityTrack
from unreal import MovieSceneSubTrack
from bson.objectid import ObjectId
from openpype.client import get_asset_by_name, get_assets
from openpype.pipeline import (
@ -21,6 +24,7 @@ from openpype.pipeline import (
legacy_io,
)
from openpype.pipeline.context_tools import get_current_project_asset
from openpype.api import get_current_project_settings
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
@ -159,9 +163,29 @@ class LayoutLoader(plugin.Loader):
hid_section.set_row_index(index)
hid_section.set_level_names(maps)
@staticmethod
def _transform_from_basis(self, transform, basis):
"""Transform a transform from a basis to a new basis."""
# Get the basis matrix
basis_matrix = unreal.Matrix(
basis[0],
basis[1],
basis[2],
basis[3]
)
transform_matrix = unreal.Matrix(
transform[0],
transform[1],
transform[2],
transform[3]
)
new_transform = (
basis_matrix.get_inverse() * transform_matrix * basis_matrix)
return new_transform.transform()
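# Sketch of the math above: the serialized transform T is expressed in
# the basis B the layout was exported with, so the converted transform
# is B^-1 * T * B; with an identity basis the transform is unchanged.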
def _process_family(
assets, class_name, transform, sequence, inst_name=None
self, assets, class_name, transform, basis, sequence, inst_name=None
):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
@ -171,30 +195,12 @@ class LayoutLoader(plugin.Loader):
for asset in assets:
obj = ar.get_asset_by_object_path(asset).get_asset()
if obj.get_class().get_name() == class_name:
t = self._transform_from_basis(transform, basis)
actor = EditorLevelLibrary.spawn_actor_from_object(
obj,
transform.get('translation')
obj, t.translation
)
if inst_name:
try:
# Rename method leads to crash
# actor.rename(name=inst_name)
# The label works, although it make it slightly more
# complicated to check for the names, as we need to
# loop through all the actors in the level
actor.set_actor_label(inst_name)
except Exception as e:
print(e)
actor.set_actor_rotation(unreal.Rotator(
umath.radians_to_degrees(
transform.get('rotation').get('x')),
-umath.radians_to_degrees(
transform.get('rotation').get('y')),
umath.radians_to_degrees(
transform.get('rotation').get('z')),
), False)
actor.set_actor_scale3d(transform.get('scale'))
actor.set_actor_rotation(t.rotation.rotator(), False)
actor.set_actor_scale3d(t.scale3d)
if class_name == 'SkeletalMesh':
skm_comp = actor.get_editor_property(
@ -203,16 +209,17 @@ class LayoutLoader(plugin.Loader):
actors.append(actor)
binding = None
for p in sequence.get_possessables():
if p.get_name() == actor.get_name():
binding = p
break
if sequence:
binding = None
for p in sequence.get_possessables():
if p.get_name() == actor.get_name():
binding = p
break
if not binding:
binding = sequence.add_possessable(actor)
if not binding:
binding = sequence.add_possessable(actor)
bindings.append(binding)
bindings.append(binding)
return actors, bindings
@ -301,52 +308,53 @@ class LayoutLoader(plugin.Loader):
actor.skeletal_mesh_component.animation_data.set_editor_property(
'anim_to_play', animation)
# Add animation to the sequencer
bindings = bindings_dict.get(instance_name)
if sequence:
# Add animation to the sequencer
bindings = bindings_dict.get(instance_name)
ar = unreal.AssetRegistryHelpers.get_asset_registry()
ar = unreal.AssetRegistryHelpers.get_asset_registry()
for binding in bindings:
tracks = binding.get_tracks()
track = None
track = tracks[0] if tracks else binding.add_track(
unreal.MovieSceneSkeletalAnimationTrack)
for binding in bindings:
tracks = binding.get_tracks()
track = None
track = tracks[0] if tracks else binding.add_track(
unreal.MovieSceneSkeletalAnimationTrack)
sections = track.get_sections()
section = None
if not sections:
section = track.add_section()
else:
section = sections[0]
sections = track.get_sections()
section = None
if not sections:
section = track.add_section()
else:
section = sections[0]
sec_params = section.get_editor_property('params')
curr_anim = sec_params.get_editor_property('animation')
if curr_anim:
# Checks if the animation path has a container.
# If it does, it means that the animation is
# already in the sequencer.
anim_path = str(Path(
curr_anim.get_path_name()).parent
).replace('\\', '/')
_filter = unreal.ARFilter(
class_names=["AssetContainer"],
package_paths=[anim_path],
recursive_paths=False)
containers = ar.get_assets(_filter)
if len(containers) > 0:
return
section.set_range(
sequence.get_playback_start(),
sequence.get_playback_end())
sec_params = section.get_editor_property('params')
curr_anim = sec_params.get_editor_property('animation')
if curr_anim:
# Checks if the animation path has a container.
# If it does, it means that the animation is already
# in the sequencer.
anim_path = str(Path(
curr_anim.get_path_name()).parent
).replace('\\', '/')
_filter = unreal.ARFilter(
class_names=["AssetContainer"],
package_paths=[anim_path],
recursive_paths=False)
containers = ar.get_assets(_filter)
if len(containers) > 0:
return
section.set_range(
sequence.get_playback_start(),
sequence.get_playback_end())
sec_params = section.get_editor_property('params')
sec_params.set_editor_property('animation', animation)
sec_params.set_editor_property('animation', animation)
@staticmethod
def _generate_sequence(self, h, h_dir):
def _generate_sequence(h, h_dir):
tools = unreal.AssetToolsHelpers().get_asset_tools()
sequence = tools.create_asset(
@ -402,7 +410,7 @@ class LayoutLoader(plugin.Loader):
return sequence, (min_frame, max_frame)
def _process(self, lib_path, asset_dir, sequence, loaded=None):
def _process(self, lib_path, asset_dir, sequence, repr_loaded=None):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
with open(lib_path, "r") as fp:
@ -410,8 +418,8 @@ class LayoutLoader(plugin.Loader):
all_loaders = discover_loader_plugins()
if not loaded:
loaded = []
if not repr_loaded:
repr_loaded = []
path = Path(lib_path)
@ -422,36 +430,65 @@ class LayoutLoader(plugin.Loader):
loaded_assets = []
for element in data:
reference = None
if element.get('reference_fbx'):
reference = element.get('reference_fbx')
representation = None
repr_format = None
if element.get('representation'):
# representation = element.get('representation')
self.log.info(element.get("version"))
valid_formats = ['fbx', 'abc']
repr_data = legacy_io.find_one({
"type": "representation",
"parent": ObjectId(element.get("version")),
"name": {"$in": valid_formats}
})
repr_format = repr_data.get('name')
if not repr_data:
self.log.error(
f"No valid representation found for version "
f"{element.get('version')}")
continue
representation = str(repr_data.get('_id'))
print(representation)
# This is to keep compatibility with old versions of the
# json format.
elif element.get('reference_fbx'):
representation = element.get('reference_fbx')
repr_format = 'fbx'
elif element.get('reference_abc'):
reference = element.get('reference_abc')
representation = element.get('reference_abc')
repr_format = 'abc'
# If reference is None, this element is skipped, as it cannot be
# imported in Unreal
if not reference:
if not representation:
continue
instance_name = element.get('instance_name')
skeleton = None
if reference not in loaded:
loaded.append(reference)
if representation not in repr_loaded:
repr_loaded.append(representation)
family = element.get('family')
loaders = loaders_from_representation(
all_loaders, reference)
all_loaders, representation)
loader = None
if reference == element.get('reference_fbx'):
if repr_format == 'fbx':
loader = self._get_fbx_loader(loaders, family)
elif reference == element.get('reference_abc'):
elif repr_format == 'abc':
loader = self._get_abc_loader(loaders, family)
if not loader:
self.log.error(
f"No valid loader found for {representation}")
continue
options = {
@ -460,7 +497,7 @@ class LayoutLoader(plugin.Loader):
assets = load_container(
loader,
reference,
representation,
namespace=instance_name,
options=options
)
@ -478,28 +515,36 @@ class LayoutLoader(plugin.Loader):
instances = [
item for item in data
if (item.get('reference_fbx') == reference or
item.get('reference_abc') == reference)]
if ((item.get('version') and
item.get('version') == element.get('version')) or
item.get('reference_fbx') == representation or
item.get('reference_abc') == representation)]
for instance in instances:
transform = instance.get('transform')
# transform = instance.get('transform')
transform = instance.get('transform_matrix')
basis = instance.get('basis')
inst = instance.get('instance_name')
actors = []
if family == 'model':
actors, _ = self._process_family(
assets, 'StaticMesh', transform, sequence, inst)
assets, 'StaticMesh', transform, basis,
sequence, inst
)
elif family == 'rig':
actors, bindings = self._process_family(
assets, 'SkeletalMesh', transform, sequence, inst)
assets, 'SkeletalMesh', transform, basis,
sequence, inst
)
actors_dict[inst] = actors
bindings_dict[inst] = bindings
if skeleton:
skeleton_dict[reference] = skeleton
skeleton_dict[representation] = skeleton
else:
skeleton = skeleton_dict.get(reference)
skeleton = skeleton_dict.get(representation)
animation_file = element.get('animation')
@ -573,6 +618,9 @@ class LayoutLoader(plugin.Loader):
Returns:
list(str): list of container content
"""
data = get_current_project_settings()
create_sequences = data["unreal"]["level_sequences_for_layouts"]
# Create directory for asset and avalon container
hierarchy = context.get('asset').get('data').get('parents')
root = self.ASSET_ROOT
@ -593,81 +641,88 @@ class LayoutLoader(plugin.Loader):
EditorAssetLibrary.make_directory(asset_dir)
# Create map for the shot, and create hierarchy of map. If the maps
# already exist, we will use them.
h_dir = hierarchy_dir_list[0]
h_asset = hierarchy[0]
master_level = f"{h_dir}/{h_asset}_map.{h_asset}_map"
if not EditorAssetLibrary.does_asset_exist(master_level):
EditorLevelLibrary.new_level(f"{h_dir}/{h_asset}_map")
master_level = None
shot = None
sequences = []
level = f"{asset_dir}/{asset}_map.{asset}_map"
EditorLevelLibrary.new_level(f"{asset_dir}/{asset}_map")
EditorLevelLibrary.load_level(master_level)
EditorLevelUtils.add_level_to_world(
EditorLevelLibrary.get_editor_world(),
level,
unreal.LevelStreamingDynamic
)
EditorLevelLibrary.save_all_dirty_levels()
EditorLevelLibrary.load_level(level)
if create_sequences:
# Create map for the shot, and create hierarchy of map. If the
# maps already exist, we will use them.
if hierarchy:
h_dir = hierarchy_dir_list[0]
h_asset = hierarchy[0]
master_level = f"{h_dir}/{h_asset}_map.{h_asset}_map"
if not EditorAssetLibrary.does_asset_exist(master_level):
EditorLevelLibrary.new_level(f"{h_dir}/{h_asset}_map")
# Get all the sequences in the hierarchy. It will create them, if
# they don't exist.
sequences = []
frame_ranges = []
for (h_dir, h) in zip(hierarchy_dir_list, hierarchy):
root_content = EditorAssetLibrary.list_assets(
h_dir, recursive=False, include_folder=False)
if master_level:
EditorLevelLibrary.load_level(master_level)
EditorLevelUtils.add_level_to_world(
EditorLevelLibrary.get_editor_world(),
level,
unreal.LevelStreamingDynamic
)
EditorLevelLibrary.save_all_dirty_levels()
EditorLevelLibrary.load_level(level)
existing_sequences = [
EditorAssetLibrary.find_asset_data(asset)
for asset in root_content
if EditorAssetLibrary.find_asset_data(
asset).get_class().get_name() == 'LevelSequence'
]
# Get all the sequences in the hierarchy. It will create them, if
# they don't exist.
frame_ranges = []
for (h_dir, h) in zip(hierarchy_dir_list, hierarchy):
root_content = EditorAssetLibrary.list_assets(
h_dir, recursive=False, include_folder=False)
if not existing_sequences:
sequence, frame_range = self._generate_sequence(h, h_dir)
existing_sequences = [
EditorAssetLibrary.find_asset_data(asset)
for asset in root_content
if EditorAssetLibrary.find_asset_data(
asset).get_class().get_name() == 'LevelSequence'
]
sequences.append(sequence)
frame_ranges.append(frame_range)
else:
for e in existing_sequences:
sequences.append(e.get_asset())
frame_ranges.append((
e.get_asset().get_playback_start(),
e.get_asset().get_playback_end()))
if not existing_sequences:
sequence, frame_range = self._generate_sequence(h, h_dir)
shot = tools.create_asset(
asset_name=asset,
package_path=asset_dir,
asset_class=unreal.LevelSequence,
factory=unreal.LevelSequenceFactoryNew()
)
sequences.append(sequence)
frame_ranges.append(frame_range)
else:
for e in existing_sequences:
sequences.append(e.get_asset())
frame_ranges.append((
e.get_asset().get_playback_start(),
e.get_asset().get_playback_end()))
# sequences and frame_ranges have the same length
for i in range(0, len(sequences) - 1):
self._set_sequence_hierarchy(
sequences[i], sequences[i + 1],
frame_ranges[i][1],
frame_ranges[i + 1][0], frame_ranges[i + 1][1],
[level])
shot = tools.create_asset(
asset_name=asset,
package_path=asset_dir,
asset_class=unreal.LevelSequence,
factory=unreal.LevelSequenceFactoryNew()
)
project_name = legacy_io.active_project()
data = get_asset_by_name(project_name, asset)["data"]
shot.set_display_rate(
unreal.FrameRate(data.get("fps"), 1.0))
shot.set_playback_start(0)
shot.set_playback_end(data.get('clipOut') - data.get('clipIn') + 1)
self._set_sequence_hierarchy(
sequences[-1], shot,
frame_ranges[-1][1],
data.get('clipIn'), data.get('clipOut'),
[level])
# sequences and frame_ranges have the same length
for i in range(0, len(sequences) - 1):
self._set_sequence_hierarchy(
sequences[i], sequences[i + 1],
frame_ranges[i][1],
frame_ranges[i + 1][0], frame_ranges[i + 1][1],
[level])
EditorLevelLibrary.load_level(level)
project_name = legacy_io.active_project()
data = get_asset_by_name(project_name, asset)["data"]
shot.set_display_rate(
unreal.FrameRate(data.get("fps"), 1.0))
shot.set_playback_start(0)
shot.set_playback_end(data.get('clipOut') - data.get('clipIn') + 1)
if sequences:
self._set_sequence_hierarchy(
sequences[-1], shot,
frame_ranges[-1][1],
data.get('clipIn'), data.get('clipOut'),
[level])
EditorLevelLibrary.load_level(level)
loaded_assets = self._process(self.fname, asset_dir, shot)
@ -702,32 +757,47 @@ class LayoutLoader(plugin.Loader):
for a in asset_content:
EditorAssetLibrary.save_asset(a)
EditorLevelLibrary.load_level(master_level)
if master_level:
EditorLevelLibrary.load_level(master_level)
return asset_content
def update(self, container, representation):
data = get_current_project_settings()
create_sequences = data["unreal"]["level_sequences_for_layouts"]
ar = unreal.AssetRegistryHelpers.get_asset_registry()
root = "/Game/OpenPype"
asset_dir = container.get('namespace')
context = representation.get("context")
hierarchy = context.get('hierarchy').split("/")
h_dir = f"{root}/{hierarchy[0]}"
h_asset = hierarchy[0]
master_level = f"{h_dir}/{h_asset}_map.{h_asset}_map"
sequence = None
master_level = None
# # Create a temporary level to delete the layout level.
# EditorLevelLibrary.save_all_dirty_levels()
# EditorAssetLibrary.make_directory(f"{root}/tmp")
# tmp_level = f"{root}/tmp/temp_map"
# if not EditorAssetLibrary.does_asset_exist(f"{tmp_level}.temp_map"):
# EditorLevelLibrary.new_level(tmp_level)
# else:
# EditorLevelLibrary.load_level(tmp_level)
if create_sequences:
hierarchy = context.get('hierarchy').split("/")
h_dir = f"{root}/{hierarchy[0]}"
h_asset = hierarchy[0]
master_level = f"{h_dir}/{h_asset}_map.{h_asset}_map"
filter = unreal.ARFilter(
class_names=["LevelSequence"],
package_paths=[asset_dir],
recursive_paths=False)
sequences = ar.get_assets(filter)
sequence = sequences[0].get_asset()
prev_level = None
if not master_level:
curr_level = unreal.LevelEditorSubsystem().get_current_level()
curr_level_path = curr_level.get_outer().get_path_name()
# If the level path does not start with "/Game/", the current
# level is a temporary, unsaved level.
if curr_level_path.startswith("/Game/"):
prev_level = curr_level_path
# Get layout level
filter = unreal.ARFilter(
@ -735,11 +805,6 @@ class LayoutLoader(plugin.Loader):
package_paths=[asset_dir],
recursive_paths=False)
levels = ar.get_assets(filter)
filter = unreal.ARFilter(
class_names=["LevelSequence"],
package_paths=[asset_dir],
recursive_paths=False)
sequences = ar.get_assets(filter)
layout_level = levels[0].get_editor_property('object_path')
@ -751,14 +816,14 @@ class LayoutLoader(plugin.Loader):
for actor in actors:
unreal.EditorLevelLibrary.destroy_actor(actor)
EditorLevelLibrary.save_current_level()
if create_sequences:
EditorLevelLibrary.save_current_level()
EditorAssetLibrary.delete_directory(f"{asset_dir}/animations/")
source_path = get_representation_path(representation)
loaded_assets = self._process(
source_path, asset_dir, sequences[0].get_asset())
loaded_assets = self._process(source_path, asset_dir, sequence)
data = {
"representation": str(representation["_id"]),
@ -776,13 +841,20 @@ class LayoutLoader(plugin.Loader):
for a in asset_content:
EditorAssetLibrary.save_asset(a)
EditorLevelLibrary.load_level(master_level)
if master_level:
EditorLevelLibrary.load_level(master_level)
elif prev_level:
EditorLevelLibrary.load_level(prev_level)
def remove(self, container):
"""
Delete the layout. First, check if the assets loaded with the layout
are used by other layouts. If not, delete the assets.
"""
data = get_current_project_settings()
create_sequences = data["unreal"]["level_sequences_for_layouts"]
root = "/Game/OpenPype"
path = Path(container.get("namespace"))
containers = unreal_pipeline.ls()
@ -793,7 +865,7 @@ class LayoutLoader(plugin.Loader):
# Check if the assets have been loaded by other layouts, and delete
# them if they haven't.
for asset in container.get('loaded_assets'):
for asset in eval(container.get('loaded_assets')):
layouts = [
lc for lc in layout_containers
if asset in lc.get('loaded_assets')]
@ -801,71 +873,87 @@ class LayoutLoader(plugin.Loader):
if not layouts:
EditorAssetLibrary.delete_directory(str(Path(asset).parent))
# Remove the Level Sequence from the parent.
# We need to traverse the hierarchy from the master sequence to find
# the level sequence.
root = "/Game/OpenPype"
namespace = container.get('namespace').replace(f"{root}/", "")
ms_asset = namespace.split('/')[0]
ar = unreal.AssetRegistryHelpers.get_asset_registry()
_filter = unreal.ARFilter(
class_names=["LevelSequence"],
package_paths=[f"{root}/{ms_asset}"],
recursive_paths=False)
sequences = ar.get_assets(_filter)
master_sequence = sequences[0].get_asset()
_filter = unreal.ARFilter(
class_names=["World"],
package_paths=[f"{root}/{ms_asset}"],
recursive_paths=False)
levels = ar.get_assets(_filter)
master_level = levels[0].get_editor_property('object_path')
# Delete the parent folder if there aren't any more
# layouts in it.
asset_content = EditorAssetLibrary.list_assets(
str(Path(asset).parent.parent), recursive=False,
include_folder=True
)
sequences = [master_sequence]
if len(asset_content) == 0:
EditorAssetLibrary.delete_directory(
str(Path(asset).parent.parent))
parent = None
for s in sequences:
tracks = s.get_master_tracks()
subscene_track = None
visibility_track = None
for t in tracks:
if t.get_class() == unreal.MovieSceneSubTrack.static_class():
subscene_track = t
if (t.get_class() ==
unreal.MovieSceneLevelVisibilityTrack.static_class()):
visibility_track = t
if subscene_track:
sections = subscene_track.get_sections()
for ss in sections:
if ss.get_sequence().get_name() == container.get('asset'):
parent = s
subscene_track.remove_section(ss)
break
sequences.append(ss.get_sequence())
# Update subscenes indexes.
i = 0
for ss in sections:
ss.set_row_index(i)
i += 1
master_sequence = None
master_level = None
sequences = []
if visibility_track:
sections = visibility_track.get_sections()
for ss in sections:
if (unreal.Name(f"{container.get('asset')}_map")
in ss.get_level_names()):
visibility_track.remove_section(ss)
# Update visibility sections indexes.
i = -1
prev_name = []
for ss in sections:
if prev_name != ss.get_level_names():
if create_sequences:
# Remove the Level Sequence from the parent.
# We need to traverse the hierarchy from the master sequence to
# find the level sequence.
namespace = container.get('namespace').replace(f"{root}/", "")
ms_asset = namespace.split('/')[0]
ar = unreal.AssetRegistryHelpers.get_asset_registry()
_filter = unreal.ARFilter(
class_names=["LevelSequence"],
package_paths=[f"{root}/{ms_asset}"],
recursive_paths=False)
sequences = ar.get_assets(_filter)
master_sequence = sequences[0].get_asset()
_filter = unreal.ARFilter(
class_names=["World"],
package_paths=[f"{root}/{ms_asset}"],
recursive_paths=False)
levels = ar.get_assets(_filter)
master_level = levels[0].get_editor_property('object_path')
sequences = [master_sequence]
parent = None
for s in sequences:
tracks = s.get_master_tracks()
subscene_track = None
visibility_track = None
for t in tracks:
if t.get_class() == MovieSceneSubTrack.static_class():
subscene_track = t
if (t.get_class() ==
MovieSceneLevelVisibilityTrack.static_class()):
visibility_track = t
if subscene_track:
sections = subscene_track.get_sections()
for ss in sections:
if (ss.get_sequence().get_name() ==
container.get('asset')):
parent = s
subscene_track.remove_section(ss)
break
sequences.append(ss.get_sequence())
# Update subscenes indexes.
i = 0
for ss in sections:
ss.set_row_index(i)
i += 1
ss.set_row_index(i)
prev_name = ss.get_level_names()
if parent:
break
assert parent, "Could not find the parent sequence"
if visibility_track:
sections = visibility_track.get_sections()
for ss in sections:
if (unreal.Name(f"{container.get('asset')}_map")
in ss.get_level_names()):
visibility_track.remove_section(ss)
# Update visibility sections indexes.
i = -1
prev_name = []
for ss in sections:
if prev_name != ss.get_level_names():
i += 1
ss.set_row_index(i)
prev_name = ss.get_level_names()
if parent:
break
assert parent, "Could not find the parent sequence"
# Create a temporary level to delete the layout level.
EditorLevelLibrary.save_all_dirty_levels()
@ -879,10 +967,9 @@ class LayoutLoader(plugin.Loader):
# Delete the layout directory.
EditorAssetLibrary.delete_directory(str(path))
EditorLevelLibrary.load_level(master_level)
EditorAssetLibrary.delete_directory(f"{root}/tmp")
EditorLevelLibrary.save_current_level()
if create_sequences:
EditorLevelLibrary.load_level(master_level)
EditorAssetLibrary.delete_directory(f"{root}/tmp")
# Delete the parent folder if there aren't any more layouts in it.
asset_content = EditorAssetLibrary.list_assets(

View file

@ -115,6 +115,7 @@ from .transcoding import (
get_ffmpeg_codec_args,
get_ffmpeg_format_args,
convert_ffprobe_fps_value,
convert_ffprobe_fps_to_float,
)
from .avalon_context import (
CURRENT_DOC_SCHEMAS,
@ -287,6 +288,7 @@ __all__ = [
"get_ffmpeg_codec_args",
"get_ffmpeg_format_args",
"convert_ffprobe_fps_value",
"convert_ffprobe_fps_to_float",
"CURRENT_DOC_SCHEMAS",
"PROJECT_NAME_ALLOWED_SYMBOLS",

View file

@ -27,12 +27,6 @@ from openpype.settings.constants import (
from . import PypeLogger
from .profiles_filtering import filter_profiles
from .local_settings import get_openpype_username
from .avalon_context import (
get_workdir_data,
get_workdir_with_workdir_data,
get_workfile_template_key,
get_last_workfile
)
from .python_module_tools import (
modules_from_path,
@ -1576,6 +1570,9 @@ def prepare_context_environments(data, env_group=None):
data (EnvironmentPrepData): Dictionary where result and intermediate
result will be stored.
"""
from openpype.pipeline.template_data import get_template_data
# Context environments
log = data["log"]
@ -1596,7 +1593,9 @@ def prepare_context_environments(data, env_group=None):
# Load project specific environments
project_name = project_doc["name"]
project_settings = get_project_settings(project_name)
system_settings = get_system_settings()
data["project_settings"] = project_settings
data["system_settings"] = system_settings
# Apply project specific environments on current env value
apply_project_environments_value(
project_name, data["env"], project_settings, env_group
@ -1619,8 +1618,8 @@ def prepare_context_environments(data, env_group=None):
if not app.is_host:
return
workdir_data = get_workdir_data(
project_doc, asset_doc, task_name, app.host_name
workdir_data = get_template_data(
project_doc, asset_doc, task_name, app.host_name, system_settings
)
data["workdir_data"] = workdir_data
@ -1631,7 +1630,14 @@ def prepare_context_environments(data, env_group=None):
data["task_type"] = task_type
try:
workdir = get_workdir_with_workdir_data(workdir_data, anatomy)
from openpype.pipeline.workfile import get_workdir_with_workdir_data
workdir = get_workdir_with_workdir_data(
workdir_data,
anatomy.project_name,
anatomy,
project_settings=project_settings
)
except Exception as exc:
raise ApplicationLaunchFailed(
@ -1721,11 +1727,19 @@ def _prepare_last_workfile(data, workdir):
if not last_workfile_path:
extensions = HOST_WORKFILE_EXTENSIONS.get(app.host_name)
if extensions:
from openpype.pipeline.workfile import (
get_workfile_template_key,
get_last_workfile
)
anatomy = data["anatomy"]
project_settings = data["project_settings"]
task_type = workdir_data["task"]["type"]
template_key = get_workfile_template_key(
task_type, app.host_name, project_settings=project_settings
task_type,
app.host_name,
project_name,
project_settings=project_settings
)
# Find last workfile
file_template = str(anatomy.templates[template_key]["file"])

File diff suppressed because it is too large

View file

@ -211,15 +211,28 @@ class StringTemplate(object):
if counted_symb > -1:
parts = tmp_parts.pop(counted_symb)
counted_symb -= 1
# If part contains only a single string, keep the value
# unchanged
if parts:
# Remove optional start char
parts.pop(0)
if counted_symb < 0:
out_parts = new_parts
else:
out_parts = tmp_parts[counted_symb]
# Store temp parts
out_parts.append(OptionalPart(parts))
if not parts:
value = "<>"
elif (
len(parts) == 1
and isinstance(parts[0], six.string_types)
):
value = "<{}>".format(parts[0])
else:
value = OptionalPart(parts)
if counted_symb < 0:
out_parts = new_parts
else:
out_parts = tmp_parts[counted_symb]
# Store value
out_parts.append(value)
continue
if counted_symb < 0:
@ -793,6 +806,7 @@ class OptionalPart:
parts(list): Parts of template. Can contain 'str', 'OptionalPart' or
'FormattingPart'.
"""
def __init__(self, parts):
self._parts = parts
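For context, a minimal usage sketch of how the optional "<...>" parts behave once parsed. The template string and fill data here are hypothetical, and this assumes the usual OpenPype template semantics where an optional section is dropped when its keys are missing:

    from openpype.lib import StringTemplate

    template = "{asset}<_{variant}>_v{version:0>3}"
    # "variant" is missing in the data, so the "<_{variant}>" section is dropped
    result = StringTemplate.format_template(
        template, {"asset": "hero", "version": 1})
    print(str(result))    # "hero_v001"
    print(result.solved)  # True - all required keys were filled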

View file

@ -1,11 +1,13 @@
# -*- coding: utf-8 -*-
"""Avalon/Pyblish plugin tools."""
import os
import inspect
import logging
import re
import json
import warnings
import functools
from openpype.client import get_asset_by_id
from openpype.settings import get_project_settings
@ -17,6 +19,51 @@ log = logging.getLogger(__name__)
DEFAULT_SUBSET_TEMPLATE = "{family}{Variant}"
class PluginToolsDeprecatedWarning(DeprecationWarning):
pass
def deprecated(new_destination):
"""Mark functions as deprecated.
It will result in a warning being emitted when the function is used.
"""
func = None
if callable(new_destination):
func = new_destination
new_destination = None
def _decorator(decorated_func):
if new_destination is None:
warning_message = (
" Please check content of deprecated function to figure out"
" possible replacement."
)
else:
warning_message = " Please replace your usage with '{}'.".format(
new_destination
)
@functools.wraps(decorated_func)
def wrapper(*args, **kwargs):
warnings.simplefilter("always", PluginToolsDeprecatedWarning)
warnings.warn(
(
"Call to deprecated function '{}'"
"\nFunction was moved or removed.{}"
).format(decorated_func.__name__, warning_message),
category=PluginToolsDeprecatedWarning,
stacklevel=4
)
return decorated_func(*args, **kwargs)
return wrapper
if func is None:
return _decorator
return _decorator(func)
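A brief usage sketch of the decorator defined above; the function names are hypothetical. Both the bare form and the form with a replacement path are supported:

    @deprecated
    def old_helper():
        pass

    @deprecated("openpype.pipeline.publish.lib.filter_pyblish_plugins")
    def legacy_filter(plugins):
        pass

    # Calling either emits a PluginToolsDeprecatedWarning; the second one
    # also points the caller at the suggested replacement.
    old_helper()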
class TaskNotSetError(KeyError):
def __init__(self, msg=None):
if not msg:
@ -197,6 +244,7 @@ def prepare_template_data(fill_pairs):
return fill_data
@deprecated("openpype.pipeline.publish.lib.filter_pyblish_plugins")
def filter_pyblish_plugins(plugins):
"""Filter pyblish plugins by presets.
@ -206,57 +254,14 @@ def filter_pyblish_plugins(plugins):
Args:
plugins (dict): Dictionary of plugins produced by :mod:`pyblish-base`
`discover()` method.
"""
from pyblish import api
host = api.current_host()
from openpype.pipeline.publish.lib import filter_pyblish_plugins
presets = get_project_settings(os.environ['AVALON_PROJECT']) or {}
# skip if there are no presets to process
if not presets:
return
# iterate over plugins
for plugin in plugins[:]:
try:
config_data = presets[host]["publish"][plugin.__name__]
except KeyError:
# host determined from path
file = os.path.normpath(inspect.getsourcefile(plugin))
file = os.path.normpath(file)
split_path = file.split(os.path.sep)
if len(split_path) < 4:
log.warning(
'plugin path too short to extract host {}'.format(file)
)
continue
host_from_file = split_path[-4]
plugin_kind = split_path[-2]
# TODO: change after all plugins are moved one level up
if host_from_file == "openpype":
host_from_file = "global"
try:
config_data = presets[host_from_file][plugin_kind][plugin.__name__] # noqa: E501
except KeyError:
continue
for option, value in config_data.items():
if option == "enabled" and value is False:
log.info('removing plugin {}'.format(plugin.__name__))
plugins.remove(plugin)
else:
log.info('setting {}:{} on plugin {}'.format(
option, value, plugin.__name__))
setattr(plugin, option, value)
filter_pyblish_plugins(plugins)
@deprecated
def set_plugin_attributes_from_settings(
plugins, superclass, host_name=None, project_name=None
):
@ -273,6 +278,8 @@ def set_plugin_attributes_from_settings(
project_name (str): Name of project for which settings will be loaded.
Value from environment `AVALON_PROJECT` is used if not entered.
"""
# Function is not used anymore
from openpype.pipeline import LegacyCreator, LoaderPlugin
# determine host application to use for finding presets

View file

@ -9,6 +9,8 @@ import pyblish.api
from openpype.client.mongo import OpenPypeMongoConnection
from openpype.lib.plugin_tools import parse_json
from openpype.lib.profiles_filtering import filter_profiles
from openpype.api import get_project_settings
ERROR_STATUS = "error"
IN_PROGRESS_STATUS = "in_progress"
@ -175,14 +177,12 @@ def publish_and_log(dbcon, _id, log, close_plugin_name=None, batch_id=None):
)
def fail_batch(_id, batches_in_progress, dbcon):
"""Set current batch as failed as there are some stuck batches."""
running_batches = [str(batch["_id"])
for batch in batches_in_progress
if batch["_id"] != _id]
msg = "There are still running batches {}\n". \
format("\n".join(running_batches))
msg += "Ask admin to check them and reprocess current batch"
def fail_batch(_id, dbcon, msg):
"""Set current batch as failed as there is some problem.
Raises:
ValueError
"""
dbcon.update_one(
{"_id": _id},
{"$set":
@ -259,3 +259,19 @@ def get_task_data(batch_dir):
"Cannot parse batch meta in {} folder".format(task_data))
return task_data
def get_timeout(project_name, host_name, task_type):
"""Returns timeout(seconds) from Setting profile."""
filter_data = {
"task_types": task_type,
"hosts": host_name
}
timeout_profiles = (get_project_settings(project_name)["webpublisher"]
["timeout_profiles"])
matching_item = filter_profiles(timeout_profiles, filter_data)
timeout = 3600
if matching_item:
timeout = matching_item["timeout"]
return timeout
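A hedged usage sketch; the project, host and task type names here are hypothetical and assume a "timeout_profiles" list exists in the project's webpublisher settings:

    timeout = get_timeout("MyProject", "photoshop", "Compositing")
    # -> value from the first matching profile, or the 3600 second fallback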

View file

@ -938,3 +938,40 @@ def convert_ffprobe_fps_value(str_value):
fps = int(fps)
return str(fps)
def convert_ffprobe_fps_to_float(value):
"""Convert string value of frame rate to float.
Copy of 'convert_ffprobe_fps_value' which raises exceptions on invalid
value, does not convert value to string and does not return "Unknown"
string.
Args:
value (str): Value to be converted.
Returns:
float: Converted frame rate. If the divisor in the value is '0',
then '0.0' is returned.
Raises:
ValueError: Passed value is invalid for conversion.
"""
if not value:
raise ValueError("Got empty value.")
items = value.split("/")
if len(items) == 1:
return float(items[0])
if len(items) > 2:
raise ValueError((
"FPS expression contains multiple dividers \"{}\"."
).format(value))
dividend = float(items.pop(0))
divisor = float(items.pop(0))
if divisor == 0.0:
return 0.0
return dividend / divisor
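The behaviour follows directly from the code above, for example:

    convert_ffprobe_fps_to_float("25")          # 25.0
    convert_ffprobe_fps_to_float("24000/1001")  # 23.976023976023978
    convert_ffprobe_fps_to_float("30/0")        # 0.0
    convert_ffprobe_fps_to_float("")            # raises ValueError
    convert_ffprobe_fps_to_float("1/2/3")       # raises ValueError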

View file

@ -80,7 +80,8 @@ class AfterEffectsSubmitDeadline(
"AVALON_TASK",
"AVALON_APP_NAME",
"OPENPYPE_DEV",
"OPENPYPE_LOG_NO_COLORS"
"OPENPYPE_LOG_NO_COLORS",
"OPENPYPE_VERSION"
]
# Add mongo url if it's enabled
if self._instance.context.data.get("deadlinePassMongoUrl"):

View file

@ -274,7 +274,8 @@ class HarmonySubmitDeadline(
"AVALON_TASK",
"AVALON_APP_NAME",
"OPENPYPE_DEV",
"OPENPYPE_LOG_NO_COLORS"
"OPENPYPE_LOG_NO_COLORS",
"OPENPYPE_VERSION"
]
# Add mongo url if it's enabled
if self._instance.context.data.get("deadlinePassMongoUrl"):

View file

@ -130,6 +130,7 @@ class HoudiniSubmitPublishDeadline(pyblish.api.ContextPlugin):
# this application with so the Render Slave can build its own
# similar environment using it, e.g. "houdini17.5;pluginx2.3"
"AVALON_TOOLS",
"OPENPYPE_VERSION"
]
# Add mongo url if it's enabled
if context.data.get("deadlinePassMongoUrl"):

View file

@ -101,6 +101,7 @@ class HoudiniSubmitRenderDeadline(pyblish.api.InstancePlugin):
# this application with so the Render Slave can build its own
# similar environment using it, e.g. "maya2018;vray4.x;yeti3.1.9"
"AVALON_TOOLS",
"OPENPYPE_VERSION"
]
# Add mongo url if it's enabled
if context.data.get("deadlinePassMongoUrl"):

View file

@ -413,8 +413,7 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
# Gather needed data ------------------------------------------------
default_render_file = instance.context.data.get('project_settings')\
.get('maya')\
.get('create')\
.get('CreateRender')\
.get('RenderSettings')\
.get('default_render_image_folder')
filename = os.path.basename(filepath)
comment = context.data.get("comment", "")
@ -519,12 +518,14 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
"FTRACK_API_KEY",
"FTRACK_API_USER",
"FTRACK_SERVER",
"OPENPYPE_SG_USER",
"AVALON_PROJECT",
"AVALON_ASSET",
"AVALON_TASK",
"AVALON_APP_NAME",
"OPENPYPE_DEV",
"OPENPYPE_LOG_NO_COLORS"
"OPENPYPE_LOG_NO_COLORS",
"OPENPYPE_VERSION"
]
# Add mongo url if it's enabled
if instance.context.data.get("deadlinePassMongoUrl"):

View file

@ -5,7 +5,6 @@ from maya import cmds
from openpype.pipeline import legacy_io, PublishXmlValidationError
from openpype.settings import get_project_settings
import openpype.api
import pyblish.api
@ -34,7 +33,9 @@ class MayaSubmitRemotePublishDeadline(pyblish.api.InstancePlugin):
targets = ["local"]
def process(self, instance):
settings = get_project_settings(os.getenv("AVALON_PROJECT"))
project_name = instance.context.data["projectName"]
# TODO settings can be received from 'context.data["project_settings"]'
settings = get_project_settings(project_name)
# use setting for publish job on farm, no reason to have it separately
deadline_publish_job_sett = (settings["deadline"]
["publish"]
@ -53,9 +54,6 @@ class MayaSubmitRemotePublishDeadline(pyblish.api.InstancePlugin):
scene = instance.context.data["currentFile"]
scenename = os.path.basename(scene)
# Get project code
project_name = legacy_io.Session["AVALON_PROJECT"]
job_name = "{scene} [PUBLISH]".format(scene=scenename)
batch_name = "{code} - {scene}".format(code=project_name,
scene=scenename)
@ -102,13 +100,14 @@ class MayaSubmitRemotePublishDeadline(pyblish.api.InstancePlugin):
keys = [
"FTRACK_API_USER",
"FTRACK_API_KEY",
"FTRACK_SERVER"
"FTRACK_SERVER",
"OPENPYPE_VERSION"
]
environment = dict({key: os.environ[key] for key in keys
if key in os.environ}, **legacy_io.Session)
# TODO replace legacy_io with context.data ?
environment["AVALON_PROJECT"] = legacy_io.Session["AVALON_PROJECT"]
# TODO replace legacy_io with context.data
environment["AVALON_PROJECT"] = project_name
environment["AVALON_ASSET"] = legacy_io.Session["AVALON_ASSET"]
environment["AVALON_TASK"] = legacy_io.Session["AVALON_TASK"]
environment["AVALON_APP_NAME"] = os.environ.get("AVALON_APP_NAME")

View file

@ -80,10 +80,6 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
"Using published scene for render {}".format(script_path)
)
# exception for slate workflow
if "slate" in instance.data["families"]:
submit_frame_start -= 1
response = self.payload_submit(
instance,
script_path,
@ -99,10 +95,6 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
instance.data["publishJobState"] = "Suspended"
if instance.data.get("bakingNukeScripts"):
# exception for slate workflow
if "slate" in instance.data["families"]:
submit_frame_start += 1
for baking_script in instance.data["bakingNukeScripts"]:
render_path = baking_script["bakeRenderPath"]
script_path = baking_script["bakeScriptPath"]
@ -261,7 +253,8 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
"PYBLISHPLUGINPATH",
"NUKE_PATH",
"TOOL_ENV",
"FOUNDRY_LICENSE"
"FOUNDRY_LICENSE",
"OPENPYPE_VERSION"
]
# Add mongo url if it's enabled
if instance.context.data.get("deadlinePassMongoUrl"):
@ -365,7 +358,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
if not instance.data.get("expectedFiles"):
instance.data["expectedFiles"] = []
dir = os.path.dirname(path)
dirname = os.path.dirname(path)
file = os.path.basename(path)
if "#" in file:
@ -377,9 +370,12 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin):
instance.data["expectedFiles"].append(path)
return
if instance.data.get("slate"):
start_frame -= 1
for i in range(start_frame, (end_frame + 1)):
instance.data["expectedFiles"].append(
os.path.join(dir, (file % i)).replace("\\", "/"))
os.path.join(dirname, (file % i)).replace("\\", "/"))
def get_limit_groups(self):
"""Search for limit group nodes and return group name.

View file

@ -141,7 +141,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
"OPENPYPE_USERNAME",
"OPENPYPE_RENDER_JOB",
"OPENPYPE_PUBLISH_JOB",
"OPENPYPE_MONGO"
"OPENPYPE_MONGO",
"OPENPYPE_VERSION"
]
# custom deadline attributes
@ -158,7 +159,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
# mapping of instance properties to be transfered to new instance for every
# specified family
instance_transfer = {
"slate": ["slateFrames"],
"slate": ["slateFrames", "slate"],
"review": ["lutPath"],
"render2d": ["bakingNukeScripts", "version"],
"renderlayer": ["convertToScanline"]
@ -585,11 +586,15 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
" This may cause issues on farm."
).format(staging))
frame_start = int(instance.get("frameStartHandle"))
if instance.get("slate"):
frame_start -= 1
rep = {
"name": ext,
"ext": ext,
"files": [os.path.basename(f) for f in list(collection)],
"frameStart": int(instance.get("frameStartHandle")),
"frameStart": frame_start,
"frameEnd": int(instance.get("frameEndHandle")),
# If expectedFile are absolute, we need only filenames
"stagingDir": staging,

View file

@ -6,13 +6,52 @@ import subprocess
import json
import platform
import uuid
from Deadline.Scripting import RepositoryUtils, FileUtils
import re
from Deadline.Scripting import RepositoryUtils, FileUtils, DirectoryUtils
def get_openpype_version_from_path(path, build=True):
"""Get OpenPype version from provided path.
Args:
path (str): Path to scan.
build (bool, optional): Get only builds, not sources.
Returns:
str or None: version of OpenPype if found.
"""
# fix path for application bundle on macos
if platform.system().lower() == "darwin":
path = os.path.join(path, "Contents", "MacOS", "lib", "Python")
version_file = os.path.join(path, "openpype", "version.py")
if not os.path.isfile(version_file):
return None
# skip if the version is not build
exe = os.path.join(path, "openpype_console.exe")
if platform.system().lower() in ["linux", "darwin"]:
exe = os.path.join(path, "openpype_console")
# if only builds are requested
if build and not os.path.isfile(exe): # noqa: E501
print(" ! path is not a build: {}".format(path))
return None
version = {}
with open(version_file, "r") as vf:
exec(vf.read(), version)
version_match = re.search(r"(\d+\.\d+\.\d+).*", version["__version__"])
return version_match[1]
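A usage sketch with a hypothetical install layout, matching the plugin's default installation directory:

    # C:\Program Files (x86)\OpenPype\3.13.0\openpype\version.py
    # C:\Program Files (x86)\OpenPype\3.13.0\openpype_console.exe
    version = get_openpype_version_from_path(
        r"C:\Program Files (x86)\OpenPype\3.13.0")
    # -> "3.13.0" for a build, None for sources (no openpype_console.exe)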
def get_openpype_executable():
"""Return OpenPype Executable from Event Plug-in Settings"""
config = RepositoryUtils.GetPluginConfig("OpenPype")
return config.GetConfigEntryWithDefault("OpenPypeExecutable", "")
exe_list = config.GetConfigEntryWithDefault("OpenPypeExecutable", "")
dir_list = config.GetConfigEntryWithDefault(
"OpenPypeInstallationDirs", "")
return exe_list, dir_list
def inject_openpype_environment(deadlinePlugin):
@ -25,16 +64,94 @@ def inject_openpype_environment(deadlinePlugin):
print(">>> Injecting OpenPype environments ...")
try:
print(">>> Getting OpenPype executable ...")
exe_list = get_openpype_executable()
openpype_app = FileUtils.SearchFileList(exe_list)
if openpype_app == "":
exe_list, dir_list = get_openpype_executable()
openpype_versions = []
# if the job requires specific OpenPype version,
# lets go over all available and find compatible build.
requested_version = job.GetJobEnvironmentKeyValue("OPENPYPE_VERSION")
if requested_version:
print((
">>> Scanning for compatible requested version {}"
).format(requested_version))
install_dir = DirectoryUtils.SearchDirectoryList(dir_list)
if install_dir:
print("--- Looking for OpenPype at: {}".format(install_dir))
sub_dirs = [
f.path for f in os.scandir(install_dir)
if f.is_dir()
]
for subdir in sub_dirs:
version = get_openpype_version_from_path(subdir)
if not version:
continue
print(" - found: {} - {}".format(version, subdir))
openpype_versions.append((version, subdir))
exe = FileUtils.SearchFileList(exe_list)
if openpype_versions:
# if looking for requested compatible version,
# add the implicitly specified to the list too.
print("Looking for OpenPype at: {}".format(os.path.dirname(exe)))
version = get_openpype_version_from_path(
os.path.dirname(exe))
if version:
print(" - found: {} - {}".format(
version, os.path.dirname(exe)
))
openpype_versions.append((version, os.path.dirname(exe)))
if requested_version:
# sort detected versions
if openpype_versions:
# use natural sorting
openpype_versions.sort(
key=lambda ver: [
int(t) if t.isdigit() else t.lower()
for t in re.split(r"(\d+)", ver[0])
])
print((
"*** Latest available version found is {}"
).format(openpype_versions[-1][0]))
requested_major, requested_minor, _ = requested_version.split(".")[:3] # noqa: E501
compatible_versions = []
for version in openpype_versions:
v = version[0].split(".")[:3]
if v[0] == requested_major and v[1] == requested_minor:
compatible_versions.append(version)
if not compatible_versions:
raise RuntimeError(
("Cannot find compatible version available "
"for version {} requested by the job. "
"Please add it through plugin configuration "
"in Deadline or install it to configured "
"directory.").format(requested_version))
# sort compatible versions and pick the last one
compatible_versions.sort(
key=lambda ver: [
int(t) if t.isdigit() else t.lower()
for t in re.split(r"(\d+)", ver[0])
])
print((
"*** Latest compatible version found is {}"
).format(compatible_versions[-1][0]))
# create list of executables for different platforms and let
# Deadline decide.
exe_list = [
os.path.join(
compatible_versions[-1][1], "openpype_console.exe"),
os.path.join(
compatible_versions[-1][1], "openpype_console")
]
exe = FileUtils.SearchFileList(";".join(exe_list))
if exe == "":
raise RuntimeError(
"OpenPype executable was not found " +
"in the semicolon separated list \"" + exe_list + "\". " +
"in the semicolon separated list " +
"\"" + ";".join(exe_list) + "\". " +
"The path to the render executable can be configured " +
"from the Plugin Configuration in the Deadline Monitor.")
print("--- OpenPype executable: {}".format(openpype_app))
print("--- OpenPype executable: {}".format(exe))
# tempfile.TemporaryFile cannot be used because of locking
temp_file_name = "{}_{}.json".format(
@ -45,7 +162,7 @@ def inject_openpype_environment(deadlinePlugin):
print(">>> Temporary path: {}".format(export_url))
args = [
openpype_app,
exe,
"--headless",
'extractenvironments',
export_url
@ -75,9 +192,9 @@ def inject_openpype_environment(deadlinePlugin):
env["OPENPYPE_HEADLESS_MODE"] = "1"
env["AVALON_TIMEOUT"] = "5000"
print(">>> Executing: {}".format(args))
print(">>> Executing: {}".format(" ".join(args)))
std_output = subprocess.check_output(args,
cwd=os.path.dirname(openpype_app),
cwd=os.path.dirname(exe),
env=env)
print(">>> Process result {}".format(std_output))
@ -122,78 +239,6 @@ def inject_render_job_id(deadlinePlugin):
print(">>> Injection end.")
def pype_command_line(executable, arguments, workingDirectory):
"""Remap paths in comand line argument string.
Using Deadline rempper it will remap all path found in command-line.
Args:
executable (str): path to executable
arguments (str): arguments passed to executable
workingDirectory (str): working directory path
Returns:
Tuple(executable, arguments, workingDirectory)
"""
print("-" * 40)
print("executable: {}".format(executable))
print("arguments: {}".format(arguments))
print("workingDirectory: {}".format(workingDirectory))
print("-" * 40)
print("Remapping arguments ...")
arguments = RepositoryUtils.CheckPathMapping(arguments)
print("* {}".format(arguments))
print("-" * 40)
return executable, arguments, workingDirectory
def pype(deadlinePlugin):
"""Remaps `PYPE_METADATA_FILE` and `PYPE_PYTHON_EXE` environment vars.
`PYPE_METADATA_FILE` is used on farm to point to rendered data. This path
originates on platform from which this job was published. To be able to
publish on different platform, this path needs to be remapped.
`PYPE_PYTHON_EXE` can be used to specify a custom location of the Python
interpreter to use for Pype. This is also remapped if present, even
though it probably doesn't make much sense.
Arguments:
deadlinePlugin: Deadline job plugin passed by Deadline
"""
print(">>> Getting job ...")
job = deadlinePlugin.GetJob()
# PYPE should be here, not OPENPYPE - backward compatibility!!
pype_metadata = job.GetJobEnvironmentKeyValue("PYPE_METADATA_FILE")
pype_python = job.GetJobEnvironmentKeyValue("PYPE_PYTHON_EXE")
print(">>> Having backward compatible env vars {}/{}".format(pype_metadata,
pype_python))
# test if it is pype publish job.
if pype_metadata:
pype_metadata = RepositoryUtils.CheckPathMapping(pype_metadata)
if platform.system().lower() == "linux":
pype_metadata = pype_metadata.replace("\\", "/")
print("- remapping PYPE_METADATA_FILE: {}".format(pype_metadata))
job.SetJobEnvironmentKeyValue("PYPE_METADATA_FILE", pype_metadata)
deadlinePlugin.SetProcessEnvironmentVariable(
"PYPE_METADATA_FILE", pype_metadata)
if pype_python:
pype_python = RepositoryUtils.CheckPathMapping(pype_python)
if platform.system().lower() == "linux":
pype_python = pype_python.replace("\\", "/")
print("- remapping PYPE_PYTHON_EXE: {}".format(pype_python))
job.SetJobEnvironmentKeyValue("PYPE_PYTHON_EXE", pype_python)
deadlinePlugin.SetProcessEnvironmentVariable(
"PYPE_PYTHON_EXE", pype_python)
deadlinePlugin.ModifyCommandLineCallback += pype_command_line
def __main__(deadlinePlugin):
print("*** GlobalJobPreload start ...")
print(">>> Getting job ...")
@ -217,5 +262,3 @@ def __main__(deadlinePlugin):
inject_render_job_id(deadlinePlugin)
elif openpype_render_job == '1' or openpype_remote_job == '1':
inject_openpype_environment(deadlinePlugin)
else:
pype(deadlinePlugin) # backward compatibility with Pype2

View file

@ -7,11 +7,20 @@ Index=0
Default=OpenPype Plugin for Deadline
Description=Not configurable
[OpenPypeInstallationDirs]
Type=multilinemultifolder
Label=Directories where OpenPype versions are installed
Category=OpenPype Installation Directories
CategoryOrder=0
Index=0
Default=C:\Program Files (x86)\OpenPype
Description=Path or paths to directories where multiple versions of OpenPype might be installed. Enter every such path on a separate line.
[OpenPypeExecutable]
Type=multilinemultifilename
Label=OpenPype Executable
Category=OpenPype Executables
CategoryOrder=0
CategoryOrder=1
Index=0
Default=
Description=The path to the OpenPype executable. Enter alternative paths on separate lines.

View file

@ -1,10 +1,19 @@
#!/usr/bin/env python3
from System.IO import Path
from System.Text.RegularExpressions import Regex
from Deadline.Plugins import PluginType, DeadlinePlugin
from Deadline.Scripting import StringUtils, FileUtils, RepositoryUtils
from Deadline.Scripting import (
StringUtils,
FileUtils,
DirectoryUtils,
RepositoryUtils
)
import re
import os
import platform
######################################################################
@ -52,13 +61,115 @@ class OpenPypeDeadlinePlugin(DeadlinePlugin):
self.AddStdoutHandlerCallback(
".*Progress: (\d+)%.*").HandleCallback += self.HandleProgress
@staticmethod
def get_openpype_version_from_path(path, build=True):
"""Get OpenPype version from provided path.
Args:
path (str): Path to scan.
build (bool, optional): Get only builds, not sources.
Returns:
str or None: version of OpenPype if found.
"""
# fix path for application bundle on macos
if platform.system().lower() == "darwin":
path = os.path.join(path, "Contents", "MacOS", "lib", "Python")
version_file = os.path.join(path, "openpype", "version.py")
if not os.path.isfile(version_file):
return None
# skip if the version is not build
exe = os.path.join(path, "openpype_console.exe")
if platform.system().lower() in ["linux", "darwin"]:
exe = os.path.join(path, "openpype_console")
# if only builds are requested
if build and not os.path.isfile(exe): # noqa: E501
print(f" ! path is not a build: {path}")
return None
version = {}
with open(version_file, "r") as vf:
exec(vf.read(), version)
version_match = re.search(r"(\d+\.\d+\.\d+).*", version["__version__"])
return version_match[1]
def RenderExecutable(self):
exeList = self.GetConfigEntry("OpenPypeExecutable")
exe = FileUtils.SearchFileList(exeList)
job = self.GetJob()
openpype_versions = []
# if the job requires specific OpenPype version,
# lets go over all available and find compatible build.
requested_version = job.GetJobEnvironmentKeyValue("OPENPYPE_VERSION")
if requested_version:
self.LogInfo((
"Scanning for compatible requested "
f"version {requested_version}"))
dir_list = self.GetConfigEntry("OpenPypeInstallationDirs")
install_dir = DirectoryUtils.SearchDirectoryList(dir_list)
if install_dir:
sub_dirs = [
f.path for f in os.scandir(install_dir)
if f.is_dir()
]
for subdir in sub_dirs:
version = self.get_openpype_version_from_path(subdir)
if not version:
continue
openpype_versions.append((version, subdir))
exe_list = self.GetConfigEntry("OpenPypeExecutable")
exe = FileUtils.SearchFileList(exe_list)
if openpype_versions:
# if looking for requested compatible version,
# add the implicitly specified to the list too.
version = self.get_openpype_version_from_path(
os.path.dirname(exe))
if version:
openpype_versions.append((version, os.path.dirname(exe)))
if requested_version:
# sort detected versions
if openpype_versions:
openpype_versions.sort(
key=lambda ver: [
int(t) if t.isdigit() else t.lower()
for t in re.split(r"(\d+)", ver[0])
])
requested_major, requested_minor, _ = requested_version.split(".")[:3] # noqa: E501
compatible_versions = []
for version in openpype_versions:
v = version[0].split(".")[:3]
if v[0] == requested_major and v[1] == requested_minor:
compatible_versions.append(version)
if not compatible_versions:
self.FailRender(("Cannot find compatible version available "
"for version {} requested by the job. "
"Please add it through plugin configuration "
"in Deadline or install it to configured "
"directory.").format(requested_version))
# sort compatible versions and pick the last one
compatible_versions.sort(
key=lambda ver: [
int(t) if t.isdigit() else t.lower()
for t in re.split(r"(\d+)", ver[0])
])
# create list of executables for different platforms and let
# Deadline decide.
exe_list = [
os.path.join(
compatible_versions[-1][1], "openpype_console.exe"),
os.path.join(
compatible_versions[-1][1], "openpype_console")
]
exe = FileUtils.SearchFileList(";".join(exe_list))
if exe == "":
self.FailRender(
"OpenPype executable was not found " +
"in the semicolon separated list \"" + exeList + "\". " +
"in the semicolon separated list " +
"\"" + ";".join(exe_list) + "\". " +
"The path to the render executable can be configured " +
"from the Plugin Configuration in the Deadline Monitor.")
return exe

View file

@ -1,10 +1,11 @@
import collections
import datetime
import copy
import ftrack_api
from openpype_modules.ftrack.lib import (
BaseEvent,
query_custom_attributes
query_custom_attributes,
)
@ -124,10 +125,15 @@ class PushFrameValuesToTaskEvent(BaseEvent):
# Separate value changes and task parent changes
_entities_info = []
added_entities = []
added_entity_ids = set()
task_parent_changes = []
for entity_info in entities_info:
if entity_info["entity_type"].lower() == "task":
task_parent_changes.append(entity_info)
elif entity_info.get("action") == "add":
added_entities.append(entity_info)
added_entity_ids.add(entity_info["entityId"])
else:
_entities_info.append(entity_info)
entities_info = _entities_info
@ -136,6 +142,13 @@ class PushFrameValuesToTaskEvent(BaseEvent):
interesting_data, changed_keys_by_object_id = self.filter_changes(
session, event, entities_info, interest_attributes
)
self.interesting_data_for_added(
session,
added_entities,
interest_attributes,
interesting_data,
changed_keys_by_object_id
)
if not interesting_data and not task_parent_changes:
return
@ -151,9 +164,13 @@ class PushFrameValuesToTaskEvent(BaseEvent):
# - it is a complex way how to find out
if interesting_data:
self.process_attribute_changes(
session, object_types_by_name,
interesting_data, changed_keys_by_object_id,
interest_entity_types, interest_attributes
session,
object_types_by_name,
interesting_data,
changed_keys_by_object_id,
interest_entity_types,
interest_attributes,
added_entity_ids
)
if task_parent_changes:
@ -163,8 +180,12 @@ class PushFrameValuesToTaskEvent(BaseEvent):
)
def process_task_parent_change(
self, session, object_types_by_name, task_parent_changes,
interest_entity_types, interest_attributes
self,
session,
object_types_by_name,
task_parent_changes,
interest_entity_types,
interest_attributes
):
"""Push custom attribute values if task parent has changed.
@ -176,6 +197,7 @@ class PushFrameValuesToTaskEvent(BaseEvent):
real hierarchical value and non hierarchical custom attribute value
should be set to hierarchical value.
"""
# Store task ids which were created or moved under parent with entity
# type defined in settings (interest_entity_types).
task_ids = set()
@ -380,33 +402,49 @@ class PushFrameValuesToTaskEvent(BaseEvent):
uncommited_changes = False
for idx, item in enumerate(changes):
new_value = item["new_value"]
old_value = item["old_value"]
attr_id = item["attr_id"]
entity_id = item["entity_id"]
attr_key = item["attr_key"]
entity_key = collections.OrderedDict()
entity_key["configuration_id"] = attr_id
entity_key["entity_id"] = entity_id
entity_key = collections.OrderedDict((
("configuration_id", attr_id),
("entity_id", entity_id)
))
self._cached_changes.append({
"attr_key": attr_key,
"entity_id": entity_id,
"value": new_value,
"time": datetime.datetime.now()
})
old_value_is_set = (
old_value is not ftrack_api.symbol.NOT_SET
and old_value is not None
)
if new_value is None:
if not old_value_is_set:
continue
op = ftrack_api.operation.DeleteEntityOperation(
"CustomAttributeValue",
entity_key
)
else:
elif old_value_is_set:
op = ftrack_api.operation.UpdateEntityOperation(
"ContextCustomAttributeValue",
"CustomAttributeValue",
entity_key,
"value",
ftrack_api.symbol.NOT_SET,
old_value,
new_value
)
else:
op = ftrack_api.operation.CreateEntityOperation(
"CustomAttributeValue",
entity_key,
{"value": new_value}
)
session.recorded_operations.push(op)
self.log.info((
"Changing Custom Attribute \"{}\" to value"
@ -432,9 +470,14 @@ class PushFrameValuesToTaskEvent(BaseEvent):
self.log.warning("Changing of values failed.", exc_info=True)
def process_attribute_changes(
self, session, object_types_by_name,
interesting_data, changed_keys_by_object_id,
interest_entity_types, interest_attributes
self,
session,
object_types_by_name,
interesting_data,
changed_keys_by_object_id,
interest_entity_types,
interest_attributes,
added_entity_ids
):
# Prepare task object id
task_object_id = object_types_by_name["task"]["id"]
@ -522,15 +565,26 @@ class PushFrameValuesToTaskEvent(BaseEvent):
parent_id_by_task_id[task_id] = task_entity["parent_id"]
self.finalize_attribute_changes(
session, interesting_data,
changed_keys, attrs_by_obj_id, hier_attrs,
task_entity_ids, parent_id_by_task_id
session,
interesting_data,
changed_keys,
attrs_by_obj_id,
hier_attrs,
task_entity_ids,
parent_id_by_task_id,
added_entity_ids
)
def finalize_attribute_changes(
self, session, interesting_data,
changed_keys, attrs_by_obj_id, hier_attrs,
task_entity_ids, parent_id_by_task_id
self,
session,
interesting_data,
changed_keys,
attrs_by_obj_id,
hier_attrs,
task_entity_ids,
parent_id_by_task_id,
added_entity_ids
):
attr_id_to_key = {}
for attr_confs in attrs_by_obj_id.values():
@ -550,7 +604,11 @@ class PushFrameValuesToTaskEvent(BaseEvent):
attr_ids = set(attr_id_to_key.keys())
current_values_by_id = self.get_current_values(
session, attr_ids, entity_ids, task_entity_ids, hier_attrs
session,
attr_ids,
entity_ids,
task_entity_ids,
hier_attrs
)
changes = []
@ -560,14 +618,25 @@ class PushFrameValuesToTaskEvent(BaseEvent):
parent_id = entity_id
values = interesting_data[parent_id]
added_entity = entity_id in added_entity_ids
for attr_id, old_value in current_values.items():
if added_entity and attr_id in hier_attrs:
continue
attr_key = attr_id_to_key.get(attr_id)
if not attr_key:
continue
# Convert new value from string
new_value = values.get(attr_key)
if new_value is not None and old_value is not None:
new_value_is_valid = (
old_value is not ftrack_api.symbol.NOT_SET
and new_value is not None
)
if added_entity and not new_value_is_valid:
continue
if new_value is not None and new_value_is_valid:
try:
new_value = type(old_value)(new_value)
except Exception:
@ -581,6 +650,7 @@ class PushFrameValuesToTaskEvent(BaseEvent):
changes.append({
"new_value": new_value,
"attr_id": attr_id,
"old_value": old_value,
"entity_id": entity_id,
"attr_key": attr_key
})
@ -599,6 +669,7 @@ class PushFrameValuesToTaskEvent(BaseEvent):
interesting_data = {}
changed_keys_by_object_id = {}
for entity_info in entities_info:
# Care only about changes of specific keys
entity_changes = {}
@ -644,16 +715,123 @@ class PushFrameValuesToTaskEvent(BaseEvent):
return interesting_data, changed_keys_by_object_id
def interesting_data_for_added(
self,
session,
added_entities,
interest_attributes,
interesting_data,
changed_keys_by_object_id
):
if not added_entities or not interest_attributes:
return
object_type_ids = set()
entity_ids = set()
all_entity_ids = set()
object_id_by_entity_id = {}
project_id = None
entity_ids_by_parent_id = collections.defaultdict(set)
for entity_info in added_entities:
object_id = entity_info["objectTypeId"]
entity_id = entity_info["entityId"]
object_type_ids.add(object_id)
entity_ids.add(entity_id)
object_id_by_entity_id[entity_id] = object_id
for item in entity_info["parents"]:
entity_id = item["entityId"]
all_entity_ids.add(entity_id)
parent_id = item["parentId"]
if not parent_id:
project_id = entity_id
else:
entity_ids_by_parent_id[parent_id].add(entity_id)
hier_attrs = self.get_hierarchical_configurations(
session, interest_attributes
)
if not hier_attrs:
return
hier_attrs_key_by_id = {
attr_conf["id"]: attr_conf["key"]
for attr_conf in hier_attrs
}
default_values_by_key = {
attr_conf["key"]: attr_conf["default"]
for attr_conf in hier_attrs
}
values = query_custom_attributes(
session, list(hier_attrs_key_by_id.keys()), all_entity_ids, True
)
values_per_entity_id = {}
for entity_id in all_entity_ids:
values_per_entity_id[entity_id] = {}
for attr_name in interest_attributes:
values_per_entity_id[entity_id][attr_name] = None
for item in values:
entity_id = item["entity_id"]
key = hier_attrs_key_by_id[item["configuration_id"]]
values_per_entity_id[entity_id][key] = item["value"]
fill_queue = collections.deque()
fill_queue.append((project_id, default_values_by_key))
while fill_queue:
item = fill_queue.popleft()
entity_id, values_by_key = item
entity_values = values_per_entity_id[entity_id]
new_values_by_key = copy.deepcopy(values_by_key)
for key, value in values_by_key.items():
current_value = entity_values[key]
if current_value is None:
entity_values[key] = value
else:
new_values_by_key[key] = current_value
for child_id in entity_ids_by_parent_id[entity_id]:
fill_queue.append((child_id, new_values_by_key))
for entity_id in entity_ids:
entity_changes = {}
for key, value in values_per_entity_id[entity_id].items():
if value is not None:
entity_changes[key] = value
if not entity_changes:
continue
interesting_data[entity_id] = entity_changes
object_id = object_id_by_entity_id[entity_id]
if object_id not in changed_keys_by_object_id:
changed_keys_by_object_id[object_id] = set()
changed_keys_by_object_id[object_id] |= set(entity_changes.keys())
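A simplified, standalone sketch of the top-down fill implemented above, with a hypothetical three-level hierarchy; a child inherits its parent's explicit value over the attribute default:

    import collections

    children = {"project": {"seq01"}, "seq01": {"shot010"}}
    values = {
        "project": {"fps": None},
        "seq01": {"fps": 24},
        "shot010": {"fps": None},
    }
    defaults = {"fps": 25}

    queue = collections.deque([("project", defaults)])
    while queue:
        entity_id, inherited = queue.popleft()
        own = values[entity_id]
        passed_down = dict(inherited)
        for key, value in inherited.items():
            if own[key] is None:
                own[key] = value              # fill from parent/default
            else:
                passed_down[key] = own[key]   # children inherit the override
        for child_id in children.get(entity_id, set()):
            queue.append((child_id, passed_down))

    # values["shot010"]["fps"] is now 24 (from seq01, not the default 25)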
def get_current_values(
self, session, attr_ids, entity_ids, task_entity_ids, hier_attrs
self,
session,
attr_ids,
entity_ids,
task_entity_ids,
hier_attrs
):
current_values_by_id = {}
if not attr_ids or not entity_ids:
return current_values_by_id
for entity_id in entity_ids:
current_values_by_id[entity_id] = {}
for attr_id in attr_ids:
current_values_by_id[entity_id][attr_id] = (
ftrack_api.symbol.NOT_SET
)
values = query_custom_attributes(
session, attr_ids, entity_ids, True
)
for item in values:
entity_id = item["entity_id"]
attr_id = item["configuration_id"]
@ -699,6 +877,18 @@ class PushFrameValuesToTaskEvent(BaseEvent):
output[obj_id][attr["key"]] = attr["id"]
return output, hiearchical
def get_hierarchical_configurations(self, session, interest_attributes):
hier_attr_query = (
"select id, key, object_type_id, is_hierarchical, default"
" from CustomAttributeConfiguration"
" where key in ({}) and is_hierarchical is true"
)
if not interest_attributes:
return []
return list(session.query(hier_attr_query.format(
self.join_query_keys(interest_attributes),
)).all())
def register(session):
PushFrameValuesToTaskEvent(session).register()

View file

@ -11,13 +11,11 @@ from openpype.client import (
get_project,
get_assets,
)
from openpype.settings import get_project_settings
from openpype.lib import (
get_workfile_template_key,
get_workdir_data,
StringTemplate,
)
from openpype.settings import get_project_settings, get_system_settings
from openpype.lib import StringTemplate
from openpype.pipeline import Anatomy
from openpype.pipeline.template_data import get_template_data
from openpype.pipeline.workfile import get_workfile_template_key
from openpype_modules.ftrack.lib import BaseAction, statics_icon
from openpype_modules.ftrack.lib.avalon_sync import create_chunks
@ -279,14 +277,19 @@ class FillWorkfileAttributeAction(BaseAction):
extension = "{ext}"
project_doc = get_project(project_name)
project_settings = get_project_settings(project_name)
system_settings = get_system_settings()
anatomy = Anatomy(project_name)
templates_by_key = {}
operations = []
for asset_doc, task_entities in asset_docs_with_task_entities:
for task_entity in task_entities:
workfile_data = get_workdir_data(
project_doc, asset_doc, task_entity["name"], host_name
workfile_data = get_template_data(
project_doc,
asset_doc,
task_entity["name"],
host_name,
system_settings
)
# Use version 1 for each workfile
workfile_data["version"] = 1
@ -294,7 +297,10 @@ class FillWorkfileAttributeAction(BaseAction):
task_type = workfile_data["task"]["type"]
template_key = get_workfile_template_key(
task_type, host_name, project_settings=project_settings
task_type,
host_name,
project_name,
project_settings=project_settings
)
if template_key in templates_by_key:
template = templates_by_key[template_key]

View file

@ -7,6 +7,7 @@ import threading
import datetime
import time
import queue
import collections
import appdirs
import pymongo
@ -309,7 +310,20 @@ class CustomEventHubSession(ftrack_api.session.Session):
# Currently pending operations.
self.recorded_operations = ftrack_api.operation.Operations()
self.record_operations = True
# OpenPype change - In new API are operations properties
new_api = hasattr(self.__class__, "record_operations")
if new_api:
self._record_operations = collections.defaultdict(
lambda: True
)
self._auto_populate = collections.defaultdict(
lambda: auto_populate
)
else:
self.record_operations = True
self.auto_populate = auto_populate
self.cache_key_maker = cache_key_maker
if self.cache_key_maker is None:
@ -328,6 +342,9 @@ class CustomEventHubSession(ftrack_api.session.Session):
if cache is not None:
self.cache.caches.append(cache)
if new_api:
self.merge_lock = threading.RLock()
self._managed_request = None
self._request = requests.Session()
self._request.auth = ftrack_api.session.SessionAuthentication(
@ -335,8 +352,6 @@ class CustomEventHubSession(ftrack_api.session.Session):
)
self.request_timeout = timeout
self.auto_populate = auto_populate
# Fetch server information and in doing so also check credentials.
self._server_information = self._fetch_server_information()

View file

@ -26,8 +26,6 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
families = ["ftrack"]
def process(self, instance):
session = instance.context.data["ftrackSession"]
context = instance.context
component_list = instance.data.get("ftrackComponentsList")
if not component_list:
self.log.info(
@ -36,8 +34,8 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
)
return
session = instance.context.data["ftrackSession"]
context = instance.context
session = context.data["ftrackSession"]
parent_entity = None
default_asset_name = None
@ -89,6 +87,7 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
asset_versions_data_by_id = {}
used_asset_versions = []
# Iterate over components and publish
for data in component_list:
self.log.debug("data: {}".format(data))
@ -118,9 +117,6 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
asset_version_status_ids_by_name
)
# Component
self.create_component(session, asset_version_entity, data)
# Store asset version and components items that were
version_id = asset_version_entity["id"]
if version_id not in asset_versions_data_by_id:
@ -137,6 +133,8 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
if asset_version_entity not in used_asset_versions:
used_asset_versions.append(asset_version_entity)
self._create_components(session, asset_versions_data_by_id)
instance.data["ftrackIntegratedAssetVersionsData"] = (
asset_versions_data_by_id
)
@ -625,3 +623,40 @@ class IntegrateFtrackApi(pyblish.api.InstancePlugin):
session.rollback()
session._configure_locations()
six.reraise(tp, value, tb)
def _create_components(self, session, asset_versions_data_by_id):
for item in asset_versions_data_by_id.values():
asset_version_entity = item["asset_version"]
component_items = item["component_items"]
component_entities = session.query(
(
"select id, name from Component where version_id is \"{}\""
).format(asset_version_entity["id"])
).all()
existing_component_names = {
component["name"]
for component in component_entities
}
contain_review = "ftrackreview-mp4" in existing_component_names
thumbnail_component_item = None
for component_item in component_items:
component_data = component_item.get("component_data") or {}
component_name = component_data.get("name")
if component_name == "ftrackreview-mp4":
contain_review = True
elif component_name == "ftrackreview-image":
thumbnail_component_item = component_item
if contain_review and thumbnail_component_item:
thumbnail_component_item["component_data"]["name"] = (
"thumbnail"
)
# Component
for component_item in component_items:
self.create_component(
session, asset_version_entity, component_item
)
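A minimal sketch of the renaming rule applied above, with hypothetical component items; the real method also checks components already existing on the server:

    items = [
        {"component_data": {"name": "ftrackreview-mp4"}},
        {"component_data": {"name": "ftrackreview-image"}},
    ]
    has_review = any(
        item["component_data"]["name"] == "ftrackreview-mp4"
        for item in items)
    for item in items:
        if has_review and item["component_data"]["name"] == "ftrackreview-image":
            # next to an mp4 review, the image only serves as a thumbnail
            item["component_data"]["name"] = "thumbnail"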

View file

@ -13,7 +13,10 @@ class IntegrateFtrackComponentOverwrite(pyblish.api.InstancePlugin):
active = False
def process(self, instance):
component_list = instance.data['ftrackComponentsList']
component_list = instance.data.get('ftrackComponentsList')
if not component_list:
self.log.info("No component to overwrite...")
return
for cl in component_list:
cl['component_overwrite'] = True

View file

@ -6,9 +6,11 @@ Requires:
"""
import sys
import json
import six
import pyblish.api
from openpype.lib import StringTemplate
class IntegrateFtrackDescription(pyblish.api.InstancePlugin):
@ -25,6 +27,10 @@ class IntegrateFtrackDescription(pyblish.api.InstancePlugin):
description_template = "{comment}"
def process(self, instance):
if not self.description_template:
self.log.info("Skipping. Description template is not set.")
return
# Check if there are any integrated AssetVersion entities
asset_versions_key = "ftrackIntegratedAssetVersionsData"
asset_versions_data_by_id = instance.data.get(asset_versions_key)
@ -38,39 +44,62 @@ class IntegrateFtrackDescription(pyblish.api.InstancePlugin):
else:
self.log.debug("Comment is set to `{}`".format(comment))
session = instance.context.data["ftrackSession"]
intent = instance.context.data.get("intent")
intent_label = None
if intent and isinstance(intent, dict):
intent_val = intent.get("value")
intent_label = intent.get("label")
else:
intent_val = intent
if intent and "{intent}" in self.description_template:
value = intent.get("value")
if value:
intent = intent.get("label") or value
if not intent_label:
intent_label = intent_val or ""
if not intent and not comment:
self.log.info("Skipping. Intent and comment are empty.")
return
# if intent label is set then format comment
# - it is possible that intent_label is equal to "" (empty string)
if intent_label:
self.log.debug(
"Intent label is set to `{}`.".format(intent_label)
)
if intent:
self.log.debug("Intent is set to `{}`.".format(intent))
else:
self.log.debug("Intent is not set.")
# If we would like to use more "optional" possibilities we would have
# come up with some expressions in templates or speicifc templates
# for all 3 possible combinations when comment and intent are
# set or not (when both are not set then description does not
# make sense).
fill_data = {}
if comment:
fill_data["comment"] = comment
if intent:
fill_data["intent"] = intent
description = StringTemplate.format_template(
self.description_template, fill_data
)
if not description.solved:
self.log.warning((
"Couldn't solve template \"{}\" with data {}"
).format(
self.description_template, json.dumps(fill_data, indent=4)
))
return
if not description:
self.log.debug((
"Skipping. Result of template is empty string."
" Template \"{}\" Fill data: {}"
).format(
self.description_template, json.dumps(fill_data, indent=4)
))
return
session = instance.context.data["ftrackSession"]
for asset_version_data in asset_versions_data_by_id.values():
asset_version = asset_version_data["asset_version"]
# Backwards compatibility for older settings using
# attribute 'note_with_intent_template'
comment = self.description_template.format(**{
"intent": intent_label,
"comment": comment
})
asset_version["comment"] = comment
asset_version["comment"] = description
try:
session.commit()

View file

@ -3,7 +3,10 @@ import json
import copy
import pyblish.api
from openpype.lib import get_ffprobe_streams
from openpype.lib.transcoding import (
get_ffprobe_streams,
convert_ffprobe_fps_to_float,
)
from openpype.lib.profiles_filtering import filter_profiles
@ -58,7 +61,7 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
version_number = int(instance_version)
family = instance.data["family"]
family_low = instance.data["family"].lower()
family_low = family.lower()
asset_type = instance.data.get("ftrackFamily")
if not asset_type and family_low in self.family_mapping:
@ -79,11 +82,6 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
).format(family))
return
# Prepare FPS
instance_fps = instance.data.get("fps")
if instance_fps is None:
instance_fps = instance.context.data["fps"]
status_name = self._get_asset_version_status_name(instance)
# Base of component item data
@ -140,24 +138,16 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
first_thumbnail_component = None
first_thumbnail_component_repre = None
for repre in thumbnail_representations:
published_path = repre.get("published_path")
if not published_path:
comp_files = repre["files"]
if isinstance(comp_files, (tuple, list, set)):
filename = comp_files[0]
else:
filename = comp_files
published_path = os.path.join(
repre["stagingDir"], filename
repre_path = self._get_repre_path(instance, repre, False)
if not repre_path:
self.log.warning(
"Published path is not set and source was removed."
)
if not os.path.exists(published_path):
continue
repre["published_path"] = published_path
continue
# Create copy of base comp item and append it
thumbnail_item = copy.deepcopy(base_component_item)
thumbnail_item["component_path"] = repre["published_path"]
thumbnail_item["component_path"] = repre_path
thumbnail_item["component_data"] = {
"name": "thumbnail"
}
@ -176,10 +166,7 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
# Add item to component list
component_list.append(thumbnail_item)
if (
not review_representations
and first_thumbnail_component is not None
):
if first_thumbnail_component is not None:
width = first_thumbnail_component_repre.get("width")
height = first_thumbnail_component_repre.get("height")
if not width or not height:
@ -216,6 +203,13 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
extended_asset_name = ""
multiple_reviewable = len(review_representations) > 1
for repre in review_representations:
repre_path = self._get_repre_path(instance, repre, False)
if not repre_path:
self.log.warning(
"Published path is not set and source was removed."
)
continue
# Create copy of base comp item and append it
review_item = copy.deepcopy(base_component_item)
@ -254,33 +248,18 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
first_thumbnail_component[
"asset_data"]["name"] = extended_asset_name
frame_start = repre.get("frameStartFtrack")
frame_end = repre.get("frameEndFtrack")
if frame_start is None or frame_end is None:
frame_start = instance.data["frameStart"]
frame_end = instance.data["frameEnd"]
# Frame end of uploaded video file should be duration in frames
# - frame start is always 0
# - frame end is duration in frames
duration = frame_end - frame_start + 1
fps = repre.get("fps")
if fps is None:
fps = instance_fps
component_meta = self._prepare_component_metadata(
instance, repre, repre_path, True
)
# Change location
review_item["component_path"] = repre["published_path"]
review_item["component_path"] = repre_path
# Change component data
review_item["component_data"] = {
# Default component name is "main".
"name": "ftrackreview-mp4",
"metadata": {
"ftr_meta": json.dumps({
"frameIn": 0,
"frameOut": int(duration),
"frameRate": float(fps)
})
"ftr_meta": json.dumps(component_meta)
}
}
@ -323,11 +302,18 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
component_data = copy_src_item["component_data"]
component_name = component_data["name"]
component_data["name"] = component_name + "_src"
component_meta = self._prepare_component_metadata(
instance, repre, copy_src_item["component_path"], False
)
if component_meta:
component_data["metadata"] = {
"ftr_meta": json.dumps(component_meta)
}
component_list.append(copy_src_item)
# Add others representations as component
for repre in other_representations:
published_path = repre.get("published_path")
published_path = self._get_repre_path(instance, repre, True)
if not published_path:
continue
# Create copy of base comp item and append it
@ -340,9 +326,17 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
):
other_item["asset_data"]["name"] = extended_asset_name
other_item["component_data"] = {
component_meta = self._prepare_component_metadata(
instance, repre, published_path, False
)
component_data = {
"name": repre["name"]
}
if component_meta:
component_data["metadata"] = {
"ftr_meta": json.dumps(component_meta)
}
other_item["component_data"] = component_data
other_item["component_location_name"] = unmanaged_location_name
other_item["component_path"] = published_path
component_list.append(other_item)
@ -360,6 +354,51 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
))
instance.data["ftrackComponentsList"] = component_list
def _get_repre_path(self, instance, repre, only_published):
"""Get representation path that can be used for integration.
When 'only_published' is set to true the validation of path is not
relevant. In that case we just need what is set in 'published_path'
as "reference". The reference is not used to get or upload the file but
for reference where the file was published.
Args:
instance (pyblish.Instance): Processed instance object. Used
for source of staging dir if representation does not have
filled it.
repre (dict): Representation on instance which could be and
could not be integrated with main integrator.
only_published (bool): Care only about published paths and
ignore if filepath is not existing anymore.
Returns:
str: Path to representation file.
None: Path is not filled or does not exists.
"""
published_path = repre.get("published_path")
if published_path:
published_path = os.path.normpath(published_path)
if os.path.exists(published_path):
return published_path
if only_published:
return published_path
comp_files = repre["files"]
if isinstance(comp_files, (tuple, list, set)):
filename = comp_files[0]
else:
filename = comp_files
staging_dir = repre.get("stagingDir")
if not staging_dir:
staging_dir = instance.data["stagingDir"]
src_path = os.path.normpath(os.path.join(staging_dir, filename))
if os.path.exists(src_path):
return src_path
return None
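
Taken on its own, the lookup order is: an existing 'published_path' first, then a staging-dir fallback. A minimal standalone sketch of the same logic, outside the plugin (function and argument names are illustrative, not part of the codebase):

```python
import os

def resolve_repre_path(repre, fallback_staging_dir, only_published=False):
    # 1) Prefer an existing 'published_path'.
    published_path = repre.get("published_path")
    if published_path:
        published_path = os.path.normpath(published_path)
        if os.path.exists(published_path):
            return published_path
    if only_published:
        # Reference-only mode: return whatever was recorded,
        # even if the file is gone already.
        return published_path
    # 2) Fall back to the first file in the staging directory.
    files = repre.get("files")
    filename = files[0] if isinstance(files, (tuple, list, set)) else files
    staging_dir = repre.get("stagingDir") or fallback_staging_dir
    src_path = os.path.normpath(os.path.join(staging_dir, filename))
    # 3) None when nothing usable exists on disk.
    return src_path if os.path.exists(src_path) else None
```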
def _get_asset_version_status_name(self, instance):
if not self.asset_versions_status_profiles:
return None
@ -380,3 +419,107 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
return None
return matching_profile["status"] or None
def _prepare_component_metadata(
self, instance, repre, component_path, is_review
):
extension = os.path.splitext(component_path)[-1]
streams = []
try:
streams = get_ffprobe_streams(component_path)
except Exception:
self.log.debug((
"Failed to retrieve information about intput {}"
).format(component_path))
# Find video streams
video_streams = [
stream
for stream in streams
if stream["codec_type"] == "video"
]
# Skip if there are no video streams
# - exr is a special case which can have issues with reading through
# ffmpeg, but we still want to set fps for it
if not video_streams and extension not in [".exr"]:
return {}
stream_width = None
stream_height = None
stream_fps = None
frame_out = None
for video_stream in video_streams:
tmp_width = video_stream.get("width")
tmp_height = video_stream.get("height")
if tmp_width and tmp_height:
stream_width = tmp_width
stream_height = tmp_height
input_framerate = video_stream.get("r_frame_rate")
duration = video_stream.get("duration")
if input_framerate is None or duration is None:
continue
try:
stream_fps = convert_ffprobe_fps_to_float(
input_framerate
)
except ValueError:
self.log.warning((
"Could not convert ffprobe fps to float \"{}\""
).format(input_framerate))
continue
stream_width = tmp_width
stream_height = tmp_height
self.log.debug("FPS from stream is {} and duration is {}".format(
input_framerate, duration
))
frame_out = float(duration) * stream_fps
break
# Prepare FPS
instance_fps = instance.data.get("fps")
if instance_fps is None:
instance_fps = instance.context.data["fps"]
if not is_review:
output = {}
fps = stream_fps or instance_fps
if fps:
output["frameRate"] = fps
if stream_width and stream_height:
output["width"] = int(stream_width)
output["height"] = int(stream_height)
return output
frame_start = repre.get("frameStartFtrack")
frame_end = repre.get("frameEndFtrack")
if frame_start is None or frame_end is None:
frame_start = instance.data["frameStart"]
frame_end = instance.data["frameEnd"]
fps = None
repre_fps = repre.get("fps")
if repre_fps is not None:
repre_fps = float(repre_fps)
fps = stream_fps or repre_fps or instance_fps
# Frame end of uploaded video file should be duration in frames
# - frame start is always 0
# - frame end is duration in frames
if not frame_out:
frame_out = frame_end - frame_start + 1
# Ftrack documentation says that 'width' and 'height' are required
# in a review component, but with those values the review video
# does not play.
component_meta = {
"frameIn": 0,
"frameOut": frame_out,
"frameRate": float(fps)
}
return component_meta
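
For reference, the review metadata ends up as a tiny JSON blob under the 'ftr_meta' key; an illustrative payload for a 100-frame review at 25 fps (values made up):

```python
import json

component_meta = {"frameIn": 0, "frameOut": 100, "frameRate": 25.0}
component_data = {
    "name": "ftrackreview-mp4",
    "metadata": {"ftr_meta": json.dumps(component_meta)},
}
print(component_data["metadata"]["ftr_meta"])
# {"frameIn": 0, "frameOut": 100, "frameRate": 25.0}
```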

View file

@ -9,9 +9,11 @@ Requires:
"""
import sys
import copy
import six
import pyblish.api
from openpype.lib import StringTemplate
class IntegrateFtrackNote(pyblish.api.InstancePlugin):
@ -53,14 +55,10 @@ class IntegrateFtrackNote(pyblish.api.InstancePlugin):
intent = instance.context.data.get("intent")
intent_label = None
if intent and isinstance(intent, dict):
intent_val = intent.get("value")
intent_label = intent.get("label")
else:
intent_val = intent
if not intent_label:
intent_label = intent_val or ""
if intent:
value = intent["value"]
if value:
intent_label = intent["label"] or value
# if intent label is set then format comment
# - it is possible that intent_label is equal to "" (empty string)
@ -96,6 +94,14 @@ class IntegrateFtrackNote(pyblish.api.InstancePlugin):
labels.append(label)
base_format_data = {
"host_name": host_name,
"app_name": app_name,
"app_label": app_label,
"source": instance.data.get("source", '')
}
if comment:
base_format_data["comment"] = comment
for asset_version_data in asset_versions_data_by_id.values():
asset_version = asset_version_data["asset_version"]
component_items = asset_version_data["component_items"]
@ -109,23 +115,31 @@ class IntegrateFtrackNote(pyblish.api.InstancePlugin):
template = self.note_template
if template is None:
template = self.note_with_intent_template
format_data = {
"intent": intent_label,
"comment": comment,
"host_name": host_name,
"app_name": app_name,
"app_label": app_label,
"published_paths": "<br/>".join(sorted(published_paths)),
"source": instance.data.get("source", '')
}
comment = template.format(**format_data)
if not comment:
format_data = copy.deepcopy(base_format_data)
format_data["published_paths"] = "<br/>".join(
sorted(published_paths)
)
if intent:
if "{intent}" in template:
format_data["intent"] = intent_label
else:
format_data["intent"] = intent
note_text = StringTemplate.format_template(template, format_data)
if not note_text.solved:
self.log.warning((
"Note template require more keys then can be provided."
"\nTemplate: {}\nData: {}"
).format(template, format_data))
continue
if not note_text:
self.log.info((
"Note for AssetVersion {} would be empty. Skipping."
"\nTemplate: {}\nData: {}"
).format(asset_version["id"], template, format_data))
continue
asset_version.create_note(comment, author=user, labels=labels)
asset_version.create_note(note_text, author=user, labels=labels)
try:
session.commit()
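
Since the note now goes through StringTemplate, templates with optional keys fail soft instead of raising a KeyError; a small sketch using the same call as the plugin (template text and data are illustrative):

```python
from openpype.lib import StringTemplate

template = "{intent}: {comment}<br/>{published_paths}"
format_data = {
    "comment": "Fixed the flicker on sh010",
    "published_paths": "/proj/publish/sh010_v002.mov",
}
note_text = StringTemplate.format_template(template, format_data)
if not note_text.solved:
    # "{intent}" had no value; the plugin above logs a warning and
    # skips the note instead of posting a literal placeholder.
    pass
```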

View file

@ -1,9 +1,12 @@
import sys
import collections
import six
import pyblish.api
from copy import deepcopy
import pyblish.api
from openpype.client import get_asset_by_id
from openpype.lib import filter_profiles
# Copy of constant `openpype_modules.ftrack.lib.avalon_sync.CUST_ATTR_AUTO_SYNC`
@ -65,8 +68,15 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
order = pyblish.api.IntegratorOrder - 0.04
label = 'Integrate Hierarchy To Ftrack'
families = ["shot"]
hosts = ["hiero", "resolve", "standalonepublisher", "flame"]
hosts = [
"hiero",
"resolve",
"standalonepublisher",
"flame",
"traypublisher"
]
optional = False
create_task_status_profiles = []
def process(self, context):
self.context = context
@ -76,14 +86,16 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
hierarchy_context = self._get_active_assets(context)
self.log.debug("__ hierarchy_context: {}".format(hierarchy_context))
self.session = self.context.data["ftrackSession"]
session = self.context.data["ftrackSession"]
project_name = self.context.data["projectEntity"]["name"]
query = 'Project where full_name is "{}"'.format(project_name)
project = self.session.query(query).one()
auto_sync_state = project[
"custom_attributes"][CUST_ATTR_AUTO_SYNC]
project = session.query(query).one()
auto_sync_state = project["custom_attributes"][CUST_ATTR_AUTO_SYNC]
self.ft_project = None
self.session = session
self.ft_project = project
self.task_types = self.get_all_task_types(project)
self.task_statuses = self.get_task_statuses(project)
# temporarily disable ftrack project's auto-syncing
if auto_sync_state:
@ -115,10 +127,7 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
self.log.debug(entity_type)
if entity_type.lower() == 'project':
query = 'Project where full_name is "{}"'.format(entity_name)
entity = self.session.query(query).one()
self.ft_project = entity
self.task_types = self.get_all_task_types(entity)
entity = self.ft_project
elif self.ft_project is None or parent is None:
raise AssertionError(
@ -211,13 +220,6 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
task_type=task_type,
parent=entity
)
try:
self.session.commit()
except Exception:
tp, value, tb = sys.exc_info()
self.session.rollback()
self.session._configure_locations()
six.reraise(tp, value, tb)
# Incoming links.
self.create_links(project_name, entity_data, entity)
@ -297,7 +299,37 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
return tasks
def get_task_statuses(self, project_entity):
project_schema = project_entity["project_schema"]
task_workflow_statuses = project_schema["_task_workflow"]["statuses"]
return {
status["id"]: status
for status in task_workflow_statuses
}
def create_task(self, name, task_type, parent):
filter_data = {
"task_names": name,
"task_types": task_type
}
profile = filter_profiles(
self.create_task_status_profiles,
filter_data
)
status_id = None
if profile:
status_name = profile["status_name"]
status_name_low = status_name.lower()
for _status_id, status in self.task_statuses.items():
if status["name"].lower() == status_name_low:
status_id = _status_id
break
if status_id is None:
self.log.warning(
"Task status \"{}\" was not found".format(status_name)
)
task = self.session.create('Task', {
'name': name,
'parent': parent
@ -306,6 +338,8 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
self.log.info(task_type)
self.log.info(self.task_types)
task['type'] = self.task_types[task_type]
if status_id is not None:
task["status_id"] = status_id
try:
self.session.commit()

View file

@ -39,10 +39,12 @@ class CollectKitsuEntities(pyblish.api.ContextPlugin):
kitsu_entity = gazu.asset.get_asset(zou_asset_data["id"])
if not kitsu_entity:
raise AssertionError(f"{entity_type} not found in kitsu!")
raise AssertionError("{} not found in kitsu!".format(entity_type))
context.data["kitsu_entity"] = kitsu_entity
self.log.debug(f"Collect kitsu {entity_type}: {kitsu_entity}")
self.log.debug(
"Collect kitsu {}: {}".format(entity_type, kitsu_entity)
)
if zou_task_data:
kitsu_task = gazu.task.get_task(zou_task_data["id"])

View file

@ -219,20 +219,25 @@ def update_op_assets(
# Add parents for hierarchy
item_data["parents"] = []
while parent_zou_id is not None:
parent_doc = asset_doc_ids[parent_zou_id]
ancestor_id = parent_zou_id
while ancestor_id is not None:
parent_doc = asset_doc_ids[ancestor_id]
item_data["parents"].insert(0, parent_doc["name"])
# Get parent entity
parent_entity = parent_doc["data"]["zou"]
parent_zou_id = parent_entity.get("parent_id")
ancestor_id = parent_entity.get("parent_id")
if item_type in ["Shot", "Sequence"]:
# Build OpenPype compatible name
if item_type in ["Shot", "Sequence"] and parent_zou_id is not None:
# Name with parents hierarchy "({episode}_){sequence}_{shot}"
# to avoid duplicate name issue
item_name = "_".join(item_data["parents"] + [item_doc["name"]])
item_name = f"{item_data['parents'][-1]}_{item['name']}"
# Update doc name
asset_doc_ids[item["id"]]["name"] = item_name
else:
item_name = item_doc["name"]
item_name = item["name"]
# Set root folders parents
item_data["parents"] = entity_parent_folders + item_data["parents"]
@ -276,7 +281,7 @@ def write_project_to_op(project: dict, dbcon: AvalonMongoDB) -> UpdateOne:
project_doc = create_project(project_name, project_name, dbcon=dbcon)
# Project data and tasks
project_data = project["data"] or {}
project_data = project_doc["data"] or {}
# Build project code and update Kitsu
project_code = project.get("code")
@ -305,6 +310,7 @@ def write_project_to_op(project: dict, dbcon: AvalonMongoDB) -> UpdateOne:
"config.tasks": {
t["name"]: {"short_name": t.get("short_name", t["name"])}
for t in gazu.task.all_task_types_for_project(project)
or gazu.task.all_task_types()
},
"data": project_data,
}

View file

@ -0,0 +1,19 @@
## Shotgrid Module
### Pre-requisites
Install and launch a [shotgrid leecher](https://github.com/Ellipsanime/shotgrid-leecher) server
### Quickstart
The goal of this tutorial is to synchronize an already existing shotgrid project with OpenPype.
- Activate the shotgrid module in the **system settings** and enter the shotgrid leecher server API url
- Create a new OpenPype project with the **project manager**
- Enter the shotgrid authentication info (url, script name, api key) and the shotgrid project ID related to this OpenPype project in the **project settings**
- Use the batch interface (Tray > shotgrid > Launch batch), select your project and click "batch"
- You can now access your shotgrid entities within the **avalon launcher** and publish information to shotgrid with **pyblish**

View file

@ -0,0 +1,5 @@
from .shotgrid_module import (
ShotgridModule,
)
__all__ = ("ShotgridModule",)

View file

@ -0,0 +1 @@
MODULE_NAME = "shotgrid"

View file

@ -0,0 +1,125 @@
from urllib.parse import urlparse
import shotgun_api3
from shotgun_api3.shotgun import AuthenticationFault
from openpype.lib import OpenPypeSecureRegistry, OpenPypeSettingsRegistry
from openpype.modules.shotgrid.lib.record import Credentials
def _get_shotgrid_secure_key(hostname, key):
"""Secure item key for entered hostname."""
return f"shotgrid/{hostname}/{key}"
def _get_secure_value_and_registry(
hostname,
name,
):
key = _get_shotgrid_secure_key(hostname, name)
registry = OpenPypeSecureRegistry(key)
return registry.get_item(name, None), registry
def get_shotgrid_hostname(shotgrid_url):
if not shotgrid_url:
raise Exception("Shotgrid url cannot be a null")
valid_shotgrid_url = (
f"//{shotgrid_url}" if "//" not in shotgrid_url else shotgrid_url
)
return urlparse(valid_shotgrid_url).hostname
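
A quick usage sketch of the hostname normalization (URLs illustrative):

```python
# Scheme-less URLs get a "//" prefix so urlparse can see the netloc.
assert get_shotgrid_hostname("studio.shotgunstudio.com") == (
    "studio.shotgunstudio.com"
)
assert get_shotgrid_hostname("https://studio.shotgunstudio.com") == (
    "studio.shotgunstudio.com"
)
```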
# Credentials storing function (using keyring)
def get_credentials(shotgrid_url):
hostname = get_shotgrid_hostname(shotgrid_url)
if not hostname:
return None
login_value, _ = _get_secure_value_and_registry(
hostname,
Credentials.login_key_prefix(),
)
password_value, _ = _get_secure_value_and_registry(
hostname,
Credentials.password_key_prefix(),
)
return Credentials(login_value, password_value)
def save_credentials(login, password, shotgrid_url):
hostname = get_shotgrid_hostname(shotgrid_url)
_, login_registry = _get_secure_value_and_registry(
hostname,
Credentials.login_key_prefix(),
)
_, password_registry = _get_secure_value_and_registry(
hostname,
Credentials.password_key_prefix(),
)
clear_credentials(shotgrid_url)
login_registry.set_item(Credentials.login_key_prefix(), login)
password_registry.set_item(Credentials.password_key_prefix(), password)
def clear_credentials(shotgrid_url):
hostname = get_shotgrid_hostname(shotgrid_url)
login_value, login_registry = _get_secure_value_and_registry(
hostname,
Credentials.login_key_prefix(),
)
password_value, password_registry = _get_secure_value_and_registry(
hostname,
Credentials.password_key_prefix(),
)
if login_value is not None:
login_registry.delete_item(Credentials.login_key_prefix())
if password_value is not None:
password_registry.delete_item(Credentials.password_key_prefix())
# Login storing function (using json)
def get_local_login():
reg = OpenPypeSettingsRegistry()
try:
return str(reg.get_item("shotgrid_login"))
except Exception:
return None
def save_local_login(login):
reg = OpenPypeSettingsRegistry()
reg.set_item("shotgrid_login", login)
def clear_local_login():
reg = OpenPypeSettingsRegistry()
reg.delete_item("shotgrid_login")
def check_credentials(
login,
password,
shotgrid_url,
):
if not shotgrid_url or not login or not password:
return False
try:
session = shotgun_api3.Shotgun(
shotgrid_url,
login=login,
password=password,
)
session.preferences_read()
session.close()
except AuthenticationFault:
return False
return True
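
A typical flow validates the credentials once and then persists them with the helpers above (values illustrative):

```python
url = "https://studio.shotgunstudio.com"  # illustrative server URL

if check_credentials("jane.doe", "s3cret", url):
    save_credentials("jane.doe", "s3cret", url)

creds = get_credentials(url)
if creds is not None and not creds.is_empty():
    print(creds.login)  # "jane.doe", read back from the secure registry
```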

View file

@ -0,0 +1,20 @@
class Credentials:
login = None
password = None
def __init__(self, login, password) -> None:
super().__init__()
self.login = login
self.password = password
def is_empty(self):
return not (self.login and self.password)
@staticmethod
def login_key_prefix():
return "login"
@staticmethod
def password_key_prefix():
return "password"

View file

@ -0,0 +1,18 @@
from openpype.api import get_system_settings, get_project_settings
from openpype.modules.shotgrid.lib.const import MODULE_NAME
def get_shotgrid_project_settings(project):
return get_project_settings(project).get(MODULE_NAME, {})
def get_shotgrid_settings():
return get_system_settings().get("modules", {}).get(MODULE_NAME, {})
def get_shotgrid_servers():
return get_shotgrid_settings().get("shotgrid_settings", {})
def get_leecher_backend_url():
return get_shotgrid_settings().get("leecher_backend_url")

View file

@ -0,0 +1,100 @@
import os
import pyblish.api
from openpype.lib.mongo import OpenPypeMongoConnection
class CollectShotgridEntities(pyblish.api.ContextPlugin):
"""Collect shotgrid entities according to the current context"""
order = pyblish.api.CollectorOrder + 0.499
label = "Shotgrid entities"
def process(self, context):
avalon_project = context.data.get("projectEntity")
avalon_asset = context.data.get("assetEntity")
avalon_task_name = os.getenv("AVALON_TASK")
self.log.info(avalon_project)
self.log.info(avalon_asset)
sg_project = _get_shotgrid_project(context)
sg_task = _get_shotgrid_task(
avalon_project,
avalon_asset,
avalon_task_name
)
sg_entity = _get_shotgrid_entity(avalon_project, avalon_asset)
if sg_project:
context.data["shotgridProject"] = sg_project
self.log.info(
"Collected correspondig shotgrid project : {}".format(
sg_project
)
)
if sg_task:
context.data["shotgridTask"] = sg_task
self.log.info(
"Collected correspondig shotgrid task : {}".format(sg_task)
)
if sg_entity:
context.data["shotgridEntity"] = sg_entity
self.log.info(
"Collected correspondig shotgrid entity : {}".format(sg_entity)
)
def _find_existing_version(self, code, context):
filters = [
["project", "is", context.data.get("shotgridProject")],
["sg_task", "is", context.data.get("shotgridTask")],
["entity", "is", context.data.get("shotgridEntity")],
["code", "is", code],
]
sg = context.data.get("shotgridSession")
return sg.find_one("Version", filters, [])
def _get_shotgrid_collection(project):
client = OpenPypeMongoConnection.get_mongo_client()
return client.get_database("shotgrid_openpype").get_collection(project)
def _get_shotgrid_project(context):
shotgrid_project_id = context.data["project_settings"].get(
"shotgrid_project_id")
if shotgrid_project_id:
return {"type": "Project", "id": shotgrid_project_id}
return {}
def _get_shotgrid_task(avalon_project, avalon_asset, avalon_task):
sg_col = _get_shotgrid_collection(avalon_project["name"])
shotgrid_task_hierarchy_row = sg_col.find_one(
{
"type": "Task",
"_id": {"$regex": "^" + avalon_task + "_[0-9]*"},
"parent": {"$regex": ".*," + avalon_asset["name"] + ","},
}
)
if shotgrid_task_hierarchy_row:
return {"type": "Task", "id": shotgrid_task_hierarchy_row["src_id"]}
return {}
def _get_shotgrid_entity(avalon_project, avalon_asset):
sg_col = _get_shotgrid_collection(avalon_project["name"])
shotgrid_entity_hierarchy_row = sg_col.find_one(
{"_id": avalon_asset["name"]}
)
if shotgrid_entity_hierarchy_row:
return {
"type": shotgrid_entity_hierarchy_row["type"],
"id": shotgrid_entity_hierarchy_row["src_id"],
}
return {}
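
Both lookups assume hierarchy rows synced into the `shotgrid_openpype` Mongo database by shotgrid-leecher; a document shape inferred from the queries above (field values are illustrative, not a documented schema):

```python
# Inferred from the regex queries above; values are made up.
task_row = {
    "_id": "compositing_123",        # "<task name>_<number>"
    "type": "Task",
    "src_id": 987,                   # the Shotgrid id of the task
    "parent": ",drt,sq01,sh010,",    # comma-delimited hierarchy path
}
```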

View file

@ -0,0 +1,123 @@
import os
import pyblish.api
import shotgun_api3
from shotgun_api3.shotgun import AuthenticationFault
from openpype.lib import OpenPypeSettingsRegistry
from openpype.modules.shotgrid.lib.settings import (
get_shotgrid_servers,
get_shotgrid_project_settings,
)
class CollectShotgridSession(pyblish.api.ContextPlugin):
"""Collect shotgrid session using user credentials"""
order = pyblish.api.CollectorOrder
label = "Shotgrid user session"
def process(self, context):
certificate_path = os.getenv("SHOTGUN_API_CACERTS")
if certificate_path is None or not os.path.exists(certificate_path):
self.log.info(
"SHOTGUN_API_CACERTS does not contains a valid \
path: {}".format(
certificate_path
)
)
certificate_path = get_shotgrid_certificate()
self.log.info("Get Certificate from shotgrid_api")
if not os.path.exists(certificate_path):
self.log.error(
"Could not find certificate in shotgun_api3: \
{}".format(
certificate_path
)
)
return
set_shotgrid_certificate(certificate_path)
self.log.info("Set Certificate: {}".format(certificate_path))
avalon_project = os.getenv("AVALON_PROJECT")
shotgrid_settings = get_shotgrid_project_settings(avalon_project)
self.log.info("shotgrid settings: {}".format(shotgrid_settings))
shotgrid_servers_settings = get_shotgrid_servers()
self.log.info(
"shotgrid_servers_settings: {}".format(shotgrid_servers_settings)
)
shotgrid_server = shotgrid_settings.get("shotgrid_server", "")
if not shotgrid_server:
self.log.error(
"No Shotgrid server found, please choose a credential"
"in script name and script key in OpenPype settings"
)
shotgrid_server_setting = shotgrid_servers_settings.get(
shotgrid_server, {}
)
shotgrid_url = shotgrid_server_setting.get("shotgrid_url", "")
shotgrid_script_name = shotgrid_server_setting.get(
"shotgrid_script_name", ""
)
shotgrid_script_key = shotgrid_server_setting.get(
"shotgrid_script_key", ""
)
if not shotgrid_script_name and not shotgrid_script_key:
self.log.error(
"No Shotgrid api credential found, please enter "
"script name and script key in OpenPype settings"
)
login = get_login() or os.getenv("OPENPYPE_SG_USER")
if not login:
self.log.error(
"No Shotgrid login found, please "
"login to shotgrid withing openpype Tray"
)
session = shotgun_api3.Shotgun(
base_url=shotgrid_url,
script_name=shotgrid_script_name,
api_key=shotgrid_script_key,
sudo_as_login=login,
)
try:
session.preferences_read()
except AuthenticationFault:
raise ValueError(
"Could not connect to shotgrid {} with user {}".format(
shotgrid_url, login
)
)
self.log.info(
"Logged to shotgrid {} with user {}".format(shotgrid_url, login)
)
context.data["shotgridSession"] = session
context.data["shotgridUser"] = login
def get_shotgrid_certificate():
shotgun_api_path = os.path.dirname(shotgun_api3.__file__)
return os.path.join(shotgun_api_path, "lib", "certifi", "cacert.pem")
def set_shotgrid_certificate(certificate):
os.environ["SHOTGUN_API_CACERTS"] = certificate
def get_login():
reg = OpenPypeSettingsRegistry()
try:
return str(reg.get_item("shotgrid_login"))
except Exception:
return None

View file

@ -0,0 +1,77 @@
import os
import pyblish.api
class IntegrateShotgridPublish(pyblish.api.InstancePlugin):
"""
Create PublishedFile entities from representations and add them to the
version. If a representation is tagged for shotgrid review, it is also
registered as the path to movie for a movie file or the path to frames
for an image sequence.
"""
order = pyblish.api.IntegratorOrder + 0.499
label = "Shotgrid Published Files"
def process(self, instance):
context = instance.context
self.sg = context.data.get("shotgridSession")
shotgrid_version = instance.data.get("shotgridVersion")
for representation in instance.data.get("representations", []):
local_path = representation.get("published_path")
code = os.path.basename(local_path)
if representation.get("tags", []):
continue
published_file = self._find_existing_publish(
code, context, shotgrid_version
)
published_file_data = {
"project": context.data.get("shotgridProject"),
"code": code,
"entity": context.data.get("shotgridEntity"),
"task": context.data.get("shotgridTask"),
"version": shotgrid_version,
"path": {"local_path": local_path},
}
if not published_file:
published_file = self._create_published(published_file_data)
self.log.info(
"Create Shotgrid PublishedFile: {}".format(published_file)
)
else:
self.sg.update(
published_file["type"],
published_file["id"],
published_file_data,
)
self.log.info(
"Update Shotgrid PublishedFile: {}".format(published_file)
)
if instance.data["family"] == "image":
self.sg.upload_thumbnail(
published_file["type"], published_file["id"], local_path
)
instance.data["shotgridPublishedFile"] = published_file
def _find_existing_publish(self, code, context, shotgrid_version):
filters = [
["project", "is", context.data.get("shotgridProject")],
["task", "is", context.data.get("shotgridTask")],
["entity", "is", context.data.get("shotgridEntity")],
["version", "is", shotgrid_version],
["code", "is", code],
]
return self.sg.find_one("PublishedFile", filters, [])
def _create_published(self, published_file_data):
return self.sg.create("PublishedFile", published_file_data)

View file

@ -0,0 +1,92 @@
import os
import pyblish.api
class IntegrateShotgridVersion(pyblish.api.InstancePlugin):
"""Integrate Shotgrid Version"""
order = pyblish.api.IntegratorOrder + 0.497
label = "Shotgrid Version"
sg = None
def process(self, instance):
context = instance.context
self.sg = context.data.get("shotgridSession")
# TODO: Use path template solver to build version code from settings
anatomy = instance.data.get("anatomyData", {})
code = "_".join(
[
anatomy["project"]["code"],
anatomy["parent"],
anatomy["asset"],
anatomy["task"]["name"],
"v{:03}".format(int(anatomy["version"])),
]
)
version = self._find_existing_version(code, context)
if not version:
version = self._create_version(code, context)
self.log.info("Create Shotgrid version: {}".format(version))
else:
self.log.info("Use existing Shotgrid version: {}".format(version))
data_to_update = {}
status = context.data.get("intent", {}).get("value")
if status:
data_to_update["sg_status_list"] = status
for representation in instance.data.get("representations", []):
local_path = representation.get("published_path")
code = os.path.basename(local_path)
if "shotgridreview" in representation.get("tags", []):
if representation["ext"] in ["mov", "avi"]:
self.log.info(
"Upload review: {} for version shotgrid {}".format(
local_path, version.get("id")
)
)
self.sg.upload(
"Version",
version.get("id"),
local_path,
field_name="sg_uploaded_movie",
)
data_to_update["sg_path_to_movie"] = local_path
elif representation["ext"] in ["jpg", "png", "exr", "tga"]:
path_to_frame = local_path.replace("0000", "#")
data_to_update["sg_path_to_frames"] = path_to_frame
self.log.info("Update Shotgrid version with {}".format(data_to_update))
self.sg.update("Version", version["id"], data_to_update)
instance.data["shotgridVersion"] = version
def _find_existing_version(self, code, context):
filters = [
["project", "is", context.data.get("shotgridProject")],
["sg_task", "is", context.data.get("shotgridTask")],
["entity", "is", context.data.get("shotgridEntity")],
["code", "is", code],
]
return self.sg.find_one("Version", filters, [])
def _create_version(self, code, context):
version_data = {
"project": context.data.get("shotgridProject"),
"sg_task": context.data.get("shotgridTask"),
"entity": context.data.get("shotgridEntity"),
"code": code,
}
return self.sg.create("Version", version_data)

View file

@ -0,0 +1,38 @@
import pyblish.api
import openpype.api
class ValidateShotgridUser(pyblish.api.ContextPlugin):
"""
Check that the user is valid and has access to the project.
"""
label = "Validate Shotgrid User"
order = openpype.api.ValidateContentsOrder
def process(self, context):
sg = context.data.get("shotgridSession")
login = context.data.get("shotgridUser")
self.log.info("Login shotgrid set in OpenPype is {}".format(login))
project = context.data.get("shotgridProject")
self.log.info("Current shotgun project is {}".format(project))
if not (login and sg and project):
raise KeyError()
user = sg.find_one("HumanUser", [["login", "is", login]], ["projects"])
self.log.info(user)
self.log.info(login)
user_projects_id = [p["id"] for p in user.get("projects", [])]
if not project.get("id") in user_projects_id:
raise PermissionError(
"Login {} don't have access to the project {}".format(
login, project
)
)
self.log.info(
"Login {} have access to the project {}".format(login, project)
)

View file

@ -0,0 +1,5 @@
### Shotgrid server
Please refer to the external project that covers OpenPype/Shotgrid communication:
- https://github.com/Ellipsanime/shotgrid-leecher

Some files were not shown because too many files have changed in this diff