Merge branch 'develop' into feature/1346-blender-publish-layout-json

Milan Kolar 2021-05-18 12:16:54 +02:00
commit bf14b418b7
393 changed files with 13471 additions and 6292 deletions

146
.dockerignore Normal file
View file

@ -0,0 +1,146 @@
# Created by .ignore support plugin (hsz.mobi)
### Python template
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/
.poetry/
.github/
vendor/bin/
docs/
website/

8
.gitignore vendored
View file

@ -64,7 +64,6 @@ coverage.xml
.hypothesis/
.pytest_cache/
# Node JS packages
##################
node_modules
@ -92,4 +91,9 @@ website/i18n/*
website/debug.log
website/.docusaurus
website/.docusaurus
# Poetry
########
.poetry/

View file

@ -1,5 +1,61 @@
# Changelog
## [2.17.1](https://github.com/pypeclub/openpype/tree/2.17.1) (2021-04-30)
[Full Changelog](https://github.com/pypeclub/openpype/compare/2.17.0...2.17.1)
**Enhancements:**
- TVPaint frame range definition [\#1424](https://github.com/pypeclub/OpenPype/pull/1424)
- PS - group all published instances [\#1415](https://github.com/pypeclub/OpenPype/pull/1415)
- Nuke: deadline submission with gpu [\#1414](https://github.com/pypeclub/OpenPype/pull/1414)
- Add task name to context pop up. [\#1383](https://github.com/pypeclub/OpenPype/pull/1383)
- AE add duration validation [\#1363](https://github.com/pypeclub/OpenPype/pull/1363)
- Maya: Support for Redshift proxies [\#1360](https://github.com/pypeclub/OpenPype/pull/1360)
**Fixed bugs:**
- Nuke: fixing undo for loaded mov and sequence [\#1433](https://github.com/pypeclub/OpenPype/pull/1433)
- AE - validation for duration was 1 frame shorter [\#1426](https://github.com/pypeclub/OpenPype/pull/1426)
- Houdini menu filename [\#1417](https://github.com/pypeclub/OpenPype/pull/1417)
- Maya: Vray - problem getting all file nodes for look publishing [\#1399](https://github.com/pypeclub/OpenPype/pull/1399)
## [2.17.0](https://github.com/pypeclub/openpype/tree/2.17.0) (2021-04-20)
[Full Changelog](https://github.com/pypeclub/openpype/compare/3.0.0-beta2...2.17.0)
**Enhancements:**
- Forward compatible ftrack group [\#1243](https://github.com/pypeclub/OpenPype/pull/1243)
- Maya: Make tx option configurable with presets [\#1328](https://github.com/pypeclub/OpenPype/pull/1328)
- TVPaint asset name validation [\#1302](https://github.com/pypeclub/OpenPype/pull/1302)
- TV Paint: Set initial project settings. [\#1299](https://github.com/pypeclub/OpenPype/pull/1299)
- TV Paint: Validate mark in and out. [\#1298](https://github.com/pypeclub/OpenPype/pull/1298)
- Validate project settings [\#1297](https://github.com/pypeclub/OpenPype/pull/1297)
- After Effects: added SubsetManager [\#1234](https://github.com/pypeclub/OpenPype/pull/1234)
- Show error message in pyblish UI [\#1206](https://github.com/pypeclub/OpenPype/pull/1206)
**Fixed bugs:**
- Hiero: fixing source frame from correct object [\#1362](https://github.com/pypeclub/OpenPype/pull/1362)
- Nuke: fix colourspace, prerenders and nuke panes opening [\#1308](https://github.com/pypeclub/OpenPype/pull/1308)
- AE remove orphaned instance from workfile - fix self.stub [\#1282](https://github.com/pypeclub/OpenPype/pull/1282)
- Nuke: deadline submission with search replaced env values from preset [\#1194](https://github.com/pypeclub/OpenPype/pull/1194)
- Ftrack custom attributes in bulks [\#1312](https://github.com/pypeclub/OpenPype/pull/1312)
- Ftrack optional pypclub role [\#1303](https://github.com/pypeclub/OpenPype/pull/1303)
- After Effects: remove orphaned instances [\#1275](https://github.com/pypeclub/OpenPype/pull/1275)
- Avalon schema names [\#1242](https://github.com/pypeclub/OpenPype/pull/1242)
- Handle duplication of Task name [\#1226](https://github.com/pypeclub/OpenPype/pull/1226)
- Modified path of plugin loads for Harmony and TVPaint [\#1217](https://github.com/pypeclub/OpenPype/pull/1217)
- Regex checks in profiles filtering [\#1214](https://github.com/pypeclub/OpenPype/pull/1214)
- Bulk mov strict task [\#1204](https://github.com/pypeclub/OpenPype/pull/1204)
- Update custom ftrack session attributes [\#1202](https://github.com/pypeclub/OpenPype/pull/1202)
- Nuke: write node colorspace ignore `default\(\)` label [\#1199](https://github.com/pypeclub/OpenPype/pull/1199)
- Nuke: reverse search to make it more versatile [\#1178](https://github.com/pypeclub/OpenPype/pull/1178)
## [2.16.1](https://github.com/pypeclub/pype/tree/2.16.1) (2021-04-13)
[Full Changelog](https://github.com/pypeclub/pype/compare/2.16.0...2.16.1)

82
Dockerfile Normal file
View file

@ -0,0 +1,82 @@
# Build Pype docker image
FROM centos:7 AS builder
ARG OPENPYPE_PYTHON_VERSION=3.7.10
LABEL org.opencontainers.image.name="pypeclub/openpype"
LABEL org.opencontainers.image.title="OpenPype Docker Image"
LABEL org.opencontainers.image.url="https://openpype.io/"
LABEL org.opencontainers.image.source="https://github.com/pypeclub/pype"
USER root
# update base
RUN yum -y install deltarpm \
&& yum -y update \
&& yum clean all
# add tools we need
RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm \
&& yum -y install centos-release-scl \
&& yum -y install \
bash \
which \
git \
devtoolset-7-gcc* \
make \
cmake \
curl \
wget \
gcc \
zlib-devel \
bzip2 \
bzip2-devel \
readline-devel \
sqlite sqlite-devel \
openssl-devel \
tk-devel libffi-devel \
qt5-qtbase-devel \
patchelf \
&& yum clean all
RUN mkdir /opt/openpype
# RUN useradd -m pype
# RUN chown pype /opt/openpype
# USER pype
RUN curl https://pyenv.run | bash
ENV PYTHON_CONFIGURE_OPTS --enable-shared
RUN echo 'export PATH="$HOME/.pyenv/bin:$PATH"'>> $HOME/.bashrc \
&& echo 'eval "$(pyenv init -)"' >> $HOME/.bashrc \
&& echo 'eval "$(pyenv virtualenv-init -)"' >> $HOME/.bashrc \
&& echo 'eval "$(pyenv init --path)"' >> $HOME/.bashrc
RUN source $HOME/.bashrc && pyenv install ${OPENPYPE_PYTHON_VERSION}
COPY . /opt/openpype/
RUN rm -rf /openpype/.poetry || echo "No Poetry installed yet."
# USER root
# RUN chown -R pype /opt/openpype
RUN chmod +x /opt/openpype/tools/create_env.sh && chmod +x /opt/openpype/tools/build.sh
# USER pype
WORKDIR /opt/openpype
RUN cd /opt/openpype \
&& source $HOME/.bashrc \
&& pyenv local ${OPENPYPE_PYTHON_VERSION}
RUN source $HOME/.bashrc \
&& ./tools/create_env.sh
RUN source $HOME/.bashrc \
&& ./tools/fetch_thirdparty_libs.sh
RUN source $HOME/.bashrc \
&& bash ./tools/build.sh \
&& cp /usr/lib64/libffi* ./build/exe.linux-x86_64-3.7/lib \
&& cp /usr/lib64/libssl* ./build/exe.linux-x86_64-3.7/lib \
&& cp /usr/lib64/libcrypto* ./build/exe.linux-x86_64-3.7/lib
RUN cd /opt/openpype \
&& rm -rf ./vendor/bin

93
igniter/Poppins/OFL.txt Normal file
View file

@ -0,0 +1,93 @@
Copyright 2020 The Poppins Project Authors (https://github.com/itfoundry/Poppins)
This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
http://scripts.sil.org/OFL
-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------
PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.
The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.
DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.
"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).
"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).
"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.
"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.
PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:
1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.
2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.
3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.
4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.
5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.
TERMINATION
This license becomes null and void if any of the above conditions are
not met.
DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

Binary file not shown.

View file

@ -10,29 +10,22 @@ from .bootstrap_repos import BootstrapRepos
from .version import __version__ as version
RESULT = 0
def get_result(res: int):
"""Sets result returned from dialog."""
global RESULT
RESULT = res
def open_dialog():
"""Show Igniter dialog."""
from Qt import QtWidgets
from Qt import QtWidgets, QtCore
from .install_dialog import InstallDialog
scale_attr = getattr(QtCore.Qt, "AA_EnableHighDpiScaling", None)
if scale_attr is not None:
QtWidgets.QApplication.setAttribute(scale_attr)
app = QtWidgets.QApplication(sys.argv)
d = InstallDialog()
d.finished.connect(get_result)
d.open()
app.exec()
return RESULT
app.exec_()
return d.result()
__all__ = [

View file

@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
"""Bootstrap OpenPype repositories."""
import functools
from __future__ import annotations
import logging as log
import os
import re
@ -9,10 +9,12 @@ import sys
import tempfile
from pathlib import Path
from typing import Union, Callable, List, Tuple
from zipfile import ZipFile, BadZipFile
from appdirs import user_data_dir
from speedcopy import copyfile
import semver
from .user_settings import (
OpenPypeSecureRegistry,
@ -26,159 +28,138 @@ LOG_WARNING = 1
LOG_ERROR = 3
@functools.total_ordering
class OpenPypeVersion:
class OpenPypeVersion(semver.VersionInfo):
"""Class for storing information about OpenPype version.
Attributes:
major (int): [1].2.3-client-variant
minor (int): 1.[2].3-client-variant
subversion (int): 1.2.[3]-client-variant
client (str): 1.2.3-[client]-variant
variant (str): 1.2.3-client-[variant]
staging (bool): True if it is staging version
path (str): path to OpenPype
"""
major = 0
minor = 0
subversion = 0
variant = ""
client = None
staging = False
path = None
_VERSION_REGEX = re.compile(r"(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)(?:-(?P<prerelease>(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+(?P<buildmetadata>[0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$") # noqa: E501
_version_regex = re.compile(
r"(?P<major>\d+)\.(?P<minor>\d+)\.(?P<sub>\d+)(-(?P<var1>staging)|-(?P<client>.+)(-(?P<var2>staging)))?") # noqa: E501
def __init__(self, *args, **kwargs):
"""Create OpenPype version.
@property
def version(self):
"""return formatted version string."""
return self._compose_version()
.. deprecated:: 3.0.0-rc.2
`client` and `variant` are removed.
@version.setter
def version(self, val):
decomposed = self._decompose_version(val)
self.major = decomposed[0]
self.minor = decomposed[1]
self.subversion = decomposed[2]
self.variant = decomposed[3]
self.client = decomposed[4]
def __init__(self, major: int = None, minor: int = None,
subversion: int = None, version: str = None,
variant: str = "", client: str = None,
path: Path = None):
self.path = path
Args:
major (int): version when you make incompatible API changes.
minor (int): version when you add functionality in a
backwards-compatible manner.
patch (int): version when you make backwards-compatible bug fixes.
prerelease (str): an optional prerelease string
build (str): an optional build string
version (str): if set, it will be parsed and will override
parameters like `major`, `minor` and so on.
staging (bool): set to True if version is staging.
path (Path): path to version location.
if (
major is None or minor is None or subversion is None
) and version is None:
raise ValueError("Need version specified in some way.")
if version:
values = self._decompose_version(version)
self.major = values[0]
self.minor = values[1]
self.subversion = values[2]
self.variant = values[3]
self.client = values[4]
else:
self.major = major
self.minor = minor
self.subversion = subversion
# variant is set only if it is "staging", otherwise "production" is
# implied and no need to mention it in version string.
if variant == "staging":
self.variant = variant
self.client = client
"""
self.path = None
self.staging = False
def _compose_version(self):
version = "{}.{}.{}".format(self.major, self.minor, self.subversion)
if "version" in kwargs.keys():
if not kwargs.get("version"):
raise ValueError("Invalid version specified")
v = OpenPypeVersion.parse(kwargs.get("version"))
kwargs["major"] = v.major
kwargs["minor"] = v.minor
kwargs["patch"] = v.patch
kwargs["prerelease"] = v.prerelease
kwargs["build"] = v.build
kwargs.pop("version")
if self.client:
version = "{}-{}".format(version, self.client)
if kwargs.get("path"):
if isinstance(kwargs.get("path"), str):
self.path = Path(kwargs.get("path"))
elif isinstance(kwargs.get("path"), Path):
self.path = kwargs.get("path")
else:
raise TypeError("Path must be str or Path")
kwargs.pop("path")
if self.variant == "staging":
version = "{}-{}".format(version, self.variant)
if "path" in kwargs.keys():
kwargs.pop("path")
return version
if kwargs.get("staging"):
self.staging = kwargs.get("staging", False)
kwargs.pop("staging")
@classmethod
def _decompose_version(cls, version_string: str) -> tuple:
m = re.search(cls._version_regex, version_string)
if not m:
raise ValueError(
"Cannot parse version string: {}".format(version_string))
if "staging" in kwargs.keys():
kwargs.pop("staging")
variant = None
if m.group("var1") == "staging" or m.group("var2") == "staging":
variant = "staging"
if self.staging:
if kwargs.get("build"):
if "staging" not in kwargs.get("build"):
kwargs["build"] = "{}-staging".format(kwargs.get("build"))
else:
kwargs["build"] = "staging"
client = m.group("client")
if kwargs.get("build") and "staging" in kwargs.get("build", ""):
self.staging = True
return (int(m.group("major")), int(m.group("minor")),
int(m.group("sub")), variant, client)
super().__init__(*args, **kwargs)
def __eq__(self, other):
if not isinstance(other, self.__class__):
return False
return self.version == other.version
def __str__(self):
return self.version
result = super().__eq__(other)
return bool(result and self.staging == other.staging)
def __repr__(self):
return "{}, {}: {}".format(
self.__class__.__name__, self.version, self.path)
def __hash__(self):
return hash(self.version)
def __lt__(self, other):
if (self.major, self.minor, self.subversion) < \
(other.major, other.minor, other.subversion):
return True
# 1.2.3-staging < 1.2.3-client-staging
if self.get_main_version() == other.get_main_version() and \
not self.client and self.variant and \
other.client and other.variant:
return True
# 1.2.3 < 1.2.3-staging
if self.get_main_version() == other.get_main_version() and \
not self.client and self.variant and \
not other.client and not other.variant:
return True
# 1.2.3 < 1.2.3-client
if self.get_main_version() == other.get_main_version() and \
not self.client and not self.variant and \
other.client and not other.variant:
return True
# 1.2.3 < 1.2.3-client-staging
if self.get_main_version() == other.get_main_version() and \
not self.client and not self.variant and other.client:
return True
# 1.2.3-client-staging < 1.2.3-client
if self.get_main_version() == other.get_main_version() and \
self.client and self.variant and \
other.client and not other.variant:
return True
return "<{}: {} - path={}>".format(
self.__class__.__name__, str(self), self.path)
def __lt__(self, other: OpenPypeVersion):
result = super().__lt__(other)
# prefer path over no path
if self.version == other.version and \
not self.path and other.path:
if self == other and not self.path and other.path:
return True
# prefer path with dir over path with file
return self.version == other.version and self.path and \
other.path and self.path.is_file() and \
other.path.is_dir()
if self == other and self.path and other.path and \
other.path.is_dir() and self.path.is_file():
return True
if self.finalize_version() == other.finalize_version() and \
self.prerelease == other.prerelease and \
self.is_staging() and not other.is_staging():
return True
return result
def set_staging(self) -> OpenPypeVersion:
"""Set version as staging and return it.
This will preserve current one.
Returns:
OpenPypeVersion: Set as staging.
"""
if self.staging:
return self
return self.replace(parts={"build": f"{self.build}-staging"})
def set_production(self) -> OpenPypeVersion:
"""Set version as production and return it.
This will preserve current one.
Returns:
OpenPypeVersion: Set as production.
"""
if not self.staging:
return self
return self.replace(
parts={"build": self.build.replace("-staging", "")})
def is_staging(self) -> bool:
"""Test if current version is staging one."""
return self.variant == "staging"
return self.staging
def get_main_version(self) -> str:
"""Return main version component.
@ -186,11 +167,13 @@ class OpenPypeVersion:
This returns x.x.x part of version from possibly more complex one
like x.x.x-foo-bar.
.. deprecated:: 3.0.0-rc.2
use `finalize_version()` instead.
Returns:
str: main version component
"""
return "{}.{}.{}".format(self.major, self.minor, self.subversion)
return str(self.finalize_version())
@staticmethod
def version_in_str(string: str) -> Tuple:
@ -203,15 +186,22 @@ class OpenPypeVersion:
tuple: True/False and OpenPypeVersion if found.
"""
try:
result = OpenPypeVersion._decompose_version(string)
except ValueError:
m = re.search(OpenPypeVersion._VERSION_REGEX, string)
if not m:
return False, None
return True, OpenPypeVersion(major=result[0],
minor=result[1],
subversion=result[2],
variant=result[3],
client=result[4])
version = OpenPypeVersion.parse(string[m.start():m.end()])
return True, version
@classmethod
def parse(cls, version):
"""Extends parse to handle ta handle staging variant."""
v = super().parse(version)
openpype_version = cls(major=v.major, minor=v.minor,
patch=v.patch, prerelease=v.prerelease,
build=v.build)
if v.build and "staging" in v.build:
openpype_version.staging = True
return openpype_version
class BootstrapRepos:
@ -223,7 +213,7 @@ class BootstrapRepos:
otherwise `None`.
registry (OpenPypeSettingsRegistry): OpenPype registry object.
zip_filter (list): List of files to exclude from zip
openpype_filter (list): list of top level directories not to
openpype_filter (list): list of top level directories to
include in zip in OpenPype repository.
"""
@ -246,7 +236,7 @@ class BootstrapRepos:
self.registry = OpenPypeSettingsRegistry()
self.zip_filter = [".pyc", "__pycache__"]
self.openpype_filter = [
"build", "docs", "tests", "tools", "venv", "coverage"
"openpype", "repos", "schema", "LICENSE"
]
self._message = message
@ -269,7 +259,7 @@ class BootstrapRepos:
"""Get path for specific version in list of OpenPype versions.
Args:
version (str): Version string to look for (1.2.4-staging)
version (str): Version string to look for (1.2.4+staging)
version_list (list of OpenPypeVersion): list of version to search.
Returns:
@ -285,7 +275,7 @@ class BootstrapRepos:
"""Get version of local OpenPype."""
version = {}
path = Path(os.path.dirname(__file__)).parent / "openpype" / "version.py"
path = Path(os.environ["OPENPYPE_ROOT"]) / "openpype" / "version.py"
with open(path, "r") as fp:
exec(fp.read(), version)
return version["__version__"]
@ -423,18 +413,13 @@ class BootstrapRepos:
"""
frozen_root = Path(sys.executable).parent
# from frozen code we need igniter, openpype, schema vendor
openpype_list = self._filter_dir(
frozen_root / "openpype", self.zip_filter)
openpype_list += self._filter_dir(
frozen_root / "igniter", self.zip_filter)
openpype_list += self._filter_dir(
frozen_root / "repos", self.zip_filter)
openpype_list += self._filter_dir(
frozen_root / "schema", self.zip_filter)
openpype_list += self._filter_dir(
frozen_root / "vendor", self.zip_filter)
openpype_list.append(frozen_root / "LICENSE")
openpype_list = []
for f in self.openpype_filter:
if (frozen_root / f).is_dir():
openpype_list += self._filter_dir(
frozen_root / f, self.zip_filter)
else:
openpype_list.append(frozen_root / f)
version = self.get_version(frozen_root)
@ -477,11 +462,16 @@ class BootstrapRepos:
openpype_path (Path): Path to OpenPype sources.
"""
openpype_list = []
openpype_inc = 0
# get filtered list of file in Pype repository
openpype_list = self._filter_dir(openpype_path, self.zip_filter)
# openpype_list = self._filter_dir(openpype_path, self.zip_filter)
openpype_list = []
for f in self.openpype_filter:
if (openpype_path / f).is_dir():
openpype_list += self._filter_dir(
openpype_path / f, self.zip_filter)
else:
openpype_list.append(openpype_path / f)
openpype_files = len(openpype_list)
openpype_inc = 98.0 / float(openpype_files)
@ -506,7 +496,7 @@ class BootstrapRepos:
except ValueError:
pass
if is_inside:
if not is_inside:
continue
processed_path = file
@ -575,7 +565,7 @@ class BootstrapRepos:
"""
sys.path.insert(0, directory.as_posix())
directory = directory / "repos"
directory /= "repos"
if not directory.exists() and not directory.is_dir():
raise ValueError("directory is invalid")
@ -632,7 +622,7 @@ class BootstrapRepos:
" not implemented yet."))
dir_to_search = self.data_dir
user_versions = self.get_openpype_versions(self.data_dir, staging)
# if we have openpype_path specified, search only there.
if openpype_path:
dir_to_search = openpype_path
@ -652,6 +642,7 @@ class BootstrapRepos:
pass
openpype_versions = self.get_openpype_versions(dir_to_search, staging)
openpype_versions += user_versions
# remove zip file version if needed.
if not include_zips:
@ -681,7 +672,7 @@ class BootstrapRepos:
openpype_path = None
# try to get OpenPype path from mongo.
if location.startswith("mongodb"):
pype_path = get_openpype_path_from_db(location)
openpype_path = get_openpype_path_from_db(location)
if not openpype_path:
self._print("cannot find OPENPYPE_PATH in settings.")
return None
@ -764,12 +755,13 @@ class BootstrapRepos:
destination = self.data_dir / version.path.stem
if destination.exists():
assert destination.is_dir()
try:
destination.unlink()
except OSError:
shutil.rmtree(destination)
except OSError as e:
msg = f"!!! Cannot remove already existing {destination}"
self._print(msg, LOG_ERROR, exc_info=True)
return None
raise e
destination.mkdir(parents=True)
@ -808,7 +800,7 @@ class BootstrapRepos:
"""Install OpenPype version to user data directory.
Args:
oepnpype_version (OpenPypeVersion): OpenPype version to install.
openpype_version (OpenPypeVersion): OpenPype version to install.
force (bool, optional): Force overwrite existing version.
Returns:
@ -821,7 +813,6 @@ class BootstrapRepos:
OpenPypeVersionIOError: If copying or zipping fail.
"""
if self.is_inside_user_data(openpype_version.path) and not openpype_version.path.is_file(): # noqa
raise OpenPypeVersionExists(
"OpenPype already inside user data dir")
@ -868,26 +859,20 @@ class BootstrapRepos:
# set zip as version source
openpype_version.path = temp_zip
if self.is_inside_user_data(openpype_version.path):
raise OpenPypeVersionInvalid(
"Version is in user data dir.")
openpype_version.path = self._copy_zip(
openpype_version.path, destination)
elif openpype_version.path.is_file():
# check if file is zip (by extension)
if openpype_version.path.suffix.lower() != ".zip":
raise OpenPypeVersionInvalid("Invalid file format")
if not self.is_inside_user_data(openpype_version.path):
try:
# copy file to destination
self._print("Copying zip to destination ...")
_destination_zip = destination.parent / openpype_version.path.name # noqa: E501
copyfile(
openpype_version.path.as_posix(),
_destination_zip.as_posix())
except OSError as e:
self._print(
"cannot copy version to user data directory", LOG_ERROR,
exc_info=True)
raise OpenPypeVersionIOError((
f"can't copy version {openpype_version.path.as_posix()} "
f"to destination {destination.parent.as_posix()}")) from e
if not self.is_inside_user_data(openpype_version.path):
openpype_version.path = self._copy_zip(
openpype_version.path, destination)
# extract zip there
self._print("extracting zip to destination ...")
@ -896,6 +881,23 @@ class BootstrapRepos:
return destination
def _copy_zip(self, source: Path, destination: Path) -> Path:
try:
# copy file to destination
self._print("Copying zip to destination ...")
_destination_zip = destination.parent / source.name # noqa: E501
copyfile(
source.as_posix(),
_destination_zip.as_posix())
except OSError as e:
self._print(
"cannot copy version to user data directory", LOG_ERROR,
exc_info=True)
raise OpenPypeVersionIOError((
f"can't copy version {source.as_posix()} "
f"to destination {destination.parent.as_posix()}")) from e
return _destination_zip
def _is_openpype_in_dir(self,
dir_item: Path,
detected_version: OpenPypeVersion) -> bool:
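The OpenPypeVersion rework above drops the hand-rolled major/minor/subversion fields and subclasses semver.VersionInfo, carrying the staging flag in the build metadata instead of a custom "-staging" suffix. A minimal sketch of that scheme using the plain semver library (the version strings below are made up for illustration, and the 2.x-style VersionInfo API imported in the diff is assumed):

```python
import semver

production = semver.VersionInfo.parse("3.0.0-rc.2")
staging = semver.VersionInfo.parse("3.0.0-rc.2+staging")

# Staging lives in the build metadata, not in the prerelease part.
assert staging.build == "staging"

# finalize_version() strips prerelease and build, which is what the new
# get_main_version() returns as a string.
assert str(staging.finalize_version()) == "3.0.0"

# Plain semver ignores build metadata when comparing (per the SemVer spec),
# so these two compare as equal; that is why OpenPypeVersion overrides
# __eq__/__lt__ to also look at the staging flag.
assert production == staging
```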

File diff suppressed because it is too large

View file

@ -17,12 +17,6 @@ from .bootstrap_repos import (
from .tools import validate_mongo_connection
class InstallResult(QObject):
"""Used to pass results back."""
def __init__(self, value):
self.status = value
class InstallThread(QThread):
"""Install Worker thread.
@ -36,15 +30,22 @@ class InstallThread(QThread):
"""
progress = Signal(int)
message = Signal((str, bool))
finished = Signal(object)
def __init__(self, callback, parent=None,):
def __init__(self, parent=None,):
self._mongo = None
self._path = None
self.result_callback = callback
self._result = None
QThread.__init__(self, parent)
self.finished.connect(callback)
def result(self):
"""Result of finished installation."""
return self._result
def _set_result(self, value):
if self._result is not None:
raise AssertionError("BUG: Result was set more than once!")
self._result = value
def run(self):
"""Thread entry point.
@ -76,7 +77,7 @@ class InstallThread(QThread):
except ValueError:
self.message.emit(
"!!! We need MongoDB URL to proceed.", True)
self.finished.emit(InstallResult(-1))
self._set_result(-1)
return
else:
self._mongo = os.getenv("OPENPYPE_MONGO")
@ -101,7 +102,7 @@ class InstallThread(QThread):
self.message.emit("Skipping OpenPype install ...", False)
if detected[-1].path.suffix.lower() == ".zip":
bs.extract_openpype(detected[-1])
self.finished.emit(InstallResult(0))
self._set_result(0)
return
if OpenPypeVersion(version=local_version).get_main_version() == detected[-1].get_main_version(): # noqa
@ -110,7 +111,7 @@ class InstallThread(QThread):
f"currently running {local_version}"
), False)
self.message.emit("Skipping OpenPype install ...", False)
self.finished.emit(InstallResult(0))
self._set_result(0)
return
self.message.emit((
@ -126,13 +127,13 @@ class InstallThread(QThread):
if not openpype_version:
self.message.emit(
f"!!! Install failed - {openpype_version}", True)
self.finished.emit(InstallResult(-1))
self._set_result(-1)
return
self.message.emit(f"Using: {openpype_version}", False)
bs.install_version(openpype_version)
self.message.emit(f"Installed as {openpype_version}", False)
self.progress.emit(100)
self.finished.emit(InstallResult(1))
self._set_result(1)
return
else:
self.message.emit("None detected.", False)
@ -144,7 +145,7 @@ class InstallThread(QThread):
if not local_openpype:
self.message.emit(
f"!!! Install failed - {local_openpype}", True)
self.finished.emit(InstallResult(-1))
self._set_result(-1)
return
try:
@ -154,11 +155,12 @@ class InstallThread(QThread):
OpenPypeVersionIOError) as e:
self.message.emit(f"Installed failed: ", True)
self.message.emit(str(e), True)
self.finished.emit(InstallResult(-1))
self._set_result(-1)
return
self.message.emit(f"Installed as {local_openpype}", False)
self.progress.emit(100)
self._set_result(1)
return
else:
# if we have mongo connection string, validate it, set it to
@ -167,7 +169,7 @@ class InstallThread(QThread):
if not validate_mongo_connection(self._mongo):
self.message.emit(
f"!!! invalid mongo url {self._mongo}", True)
self.finished.emit(InstallResult(-1))
self._set_result(-1)
return
bs.secure_registry.set_item("openPypeMongo", self._mongo)
os.environ["OPENPYPE_MONGO"] = self._mongo
@ -177,11 +179,11 @@ class InstallThread(QThread):
if not repo_file:
self.message.emit("!!! Cannot install", True)
self.finished.emit(InstallResult(-1))
self._set_result(-1)
return
self.progress.emit(100)
self.finished.emit(InstallResult(1))
self._set_result(1)
return
def set_path(self, path: str) -> None:

BIN
igniter/openpype.icns Normal file

Binary file not shown.

280
igniter/stylesheet.css Normal file
View file

@ -0,0 +1,280 @@
*{
font-size: 10pt;
font-family: "Poppins";
}
QWidget {
color: #bfccd6;
background-color: #282C34;
border-radius: 0px;
}
QMenu {
border: 1px solid #555555;
background-color: #21252B;
}
QMenu::item {
padding: 5px 10px 5px 10px;
border-left: 5px solid #313741;;
}
QMenu::item:selected {
border-left-color: rgb(84, 209, 178);
background-color: #222d37;
}
QLineEdit, QPlainTextEdit {
border: 1px solid #464b54;
border-radius: 3px;
background-color: #21252B;
padding: 0.5em;
}
QLineEdit[state="valid"] {
background-color: rgb(19, 19, 19);
color: rgb(64, 230, 132);
border-color: rgb(32, 64, 32);
}
QLineEdit[state="invalid"] {
background-color: rgb(32, 19, 19);
color: rgb(255, 69, 0);
border-color: rgb(64, 32, 32);
}
QLabel {
background: transparent;
color: #969b9e;
}
QLabel:hover {color: #b8c1c5;}
QPushButton {
border: 1px solid #aaaaaa;
border-radius: 3px;
padding: 5px;
}
QPushButton:hover {
background-color: #333840;
border: 1px solid #fff;
color: #fff;
}
QTableView {
border: 1px solid #444;
gridline-color: #6c6c6c;
background-color: #201F1F;
alternate-background-color:#21252B;
}
QTableView::item:pressed, QListView::item:pressed, QTreeView::item:pressed {
background: #78879b;
color: #FFFFFF;
}
QTableView::item:selected:active, QTreeView::item:selected:active, QListView::item:selected:active {
background: #3d8ec9;
}
QProgressBar {
border: 1px solid grey;
border-radius: 10px;
color: #222222;
font-weight: bold;
}
QProgressBar:horizontal {
height: 20px;
}
QProgressBar::chunk {
border-radius: 10px;
background-color: qlineargradient(
x1: 0,
y1: 0.5,
x2: 1,
y2: 0.5,
stop: 0 rgb(72, 200, 150),
stop: 1 rgb(82, 172, 215)
);
}
QScrollBar:horizontal {
height: 15px;
margin: 3px 15px 3px 15px;
border: 1px transparent #21252B;
border-radius: 4px;
background-color: #21252B;
}
QScrollBar::handle:horizontal {
background-color: #4B5362;
min-width: 5px;
border-radius: 4px;
}
QScrollBar::add-line:horizontal {
margin: 0px 3px 0px 3px;
border-image: url(:/qss_icons/rc/right_arrow_disabled.png);
width: 10px;
height: 10px;
subcontrol-position: right;
subcontrol-origin: margin;
}
QScrollBar::sub-line:horizontal {
margin: 0px 3px 0px 3px;
border-image: url(:/qss_icons/rc/left_arrow_disabled.png);
height: 10px;
width: 10px;
subcontrol-position: left;
subcontrol-origin: margin;
}
QScrollBar::add-line:horizontal:hover,QScrollBar::add-line:horizontal:on {
border-image: url(:/qss_icons/rc/right_arrow.png);
height: 10px;
width: 10px;
subcontrol-position: right;
subcontrol-origin: margin;
}
QScrollBar::sub-line:horizontal:hover, QScrollBar::sub-line:horizontal:on {
border-image: url(:/qss_icons/rc/left_arrow.png);
height: 10px;
width: 10px;
subcontrol-position: left;
subcontrol-origin: margin;
}
QScrollBar::up-arrow:horizontal, QScrollBar::down-arrow:horizontal {
background: none;
}
QScrollBar::add-page:horizontal, QScrollBar::sub-page:horizontal {
background: none;
}
QScrollBar:vertical {
background-color: #21252B;
width: 15px;
margin: 15px 3px 15px 3px;
border: 1px transparent #21252B;
border-radius: 4px;
}
QScrollBar::handle:vertical {
background-color: #4B5362;
min-height: 5px;
border-radius: 4px;
}
QScrollBar::sub-line:vertical {
margin: 3px 0px 3px 0px;
border-image: url(:/qss_icons/rc/up_arrow_disabled.png);
height: 10px;
width: 10px;
subcontrol-position: top;
subcontrol-origin: margin;
}
QScrollBar::add-line:vertical {
margin: 3px 0px 3px 0px;
border-image: url(:/qss_icons/rc/down_arrow_disabled.png);
height: 10px;
width: 10px;
subcontrol-position: bottom;
subcontrol-origin: margin;
}
QScrollBar::sub-line:vertical:hover,QScrollBar::sub-line:vertical:on {
border-image: url(:/qss_icons/rc/up_arrow.png);
height: 10px;
width: 10px;
subcontrol-position: top;
subcontrol-origin: margin;
}
QScrollBar::add-line:vertical:hover, QScrollBar::add-line:vertical:on {
border-image: url(:/qss_icons/rc/down_arrow.png);
height: 10px;
width: 10px;
subcontrol-position: bottom;
subcontrol-origin: margin;
}
QScrollBar::up-arrow:vertical, QScrollBar::down-arrow:vertical {
background: none;
}
QScrollBar::add-page:vertical, QScrollBar::sub-page:vertical {
background: none;
}
#MainLabel {
color: rgb(200, 200, 200);
font-size: 12pt;
}
#Console {
background-color: #21252B;
color: rgb(72, 200, 150);
font-family: "Roboto Mono";
font-size: 8pt;
}
#ExitBtn {
/* `border` must be set so the background of a flat button is painted. */
border: none;
color: rgb(39, 39, 39);
background-color: #828a97;
padding: 0.5em;
font-weight: 400;
}
#ExitBtn:hover{
background-color: #b2bece
}
#ExitBtn:disabled {
background-color: rgba(185, 185, 185, 31);
color: rgba(64, 64, 64, 63);
}
#ButtonWithOptions QPushButton{
border-top-right-radius: 0px;
border-bottom-right-radius: 0px;
border: none;
background-color: rgb(84, 209, 178);
color: rgb(39, 39, 39);
font-weight: 400;
padding: 0.5em;
}
#ButtonWithOptions QPushButton:hover{
background-color: rgb(85, 224, 189)
}
#ButtonWithOptions QPushButton:disabled {
background-color: rgba(72, 200, 150, 31);
color: rgba(64, 64, 64, 63);
}
#ButtonWithOptions QToolButton{
border: none;
border-top-left-radius: 0px;
border-bottom-left-radius: 0px;
border-top-right-radius: 3px;
border-bottom-right-radius: 3px;
background-color: rgb(84, 209, 178);
color: rgb(39, 39, 39);
}
#ButtonWithOptions QToolButton:hover{
background-color: rgb(85, 224, 189)
}
#ButtonWithOptions QToolButton:disabled {
background-color: rgba(72, 200, 150, 31);
color: rgba(64, 64, 64, 63);
}
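For context, a QSS file like the one above is typically applied to the whole application with QApplication.setStyleSheet(). This is only a sketch, not the igniter's actual loading code (which is not part of this diff); the `Qt` import assumes the Qt.py shim used elsewhere in the igniter:

```python
import sys
from pathlib import Path

from Qt import QtWidgets  # Qt.py shim, as used in igniter/__init__.py

app = QtWidgets.QApplication(sys.argv)
qss_path = Path(__file__).parent / "stylesheet.css"
app.setStyleSheet(qss_path.read_text())
```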

View file

@ -14,7 +14,12 @@ from pathlib import Path
import platform
from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError, InvalidURI
from pymongo.errors import (
ServerSelectionTimeoutError,
InvalidURI,
ConfigurationError,
OperationFailure
)
def decompose_url(url: str) -> Dict:
@ -115,30 +120,20 @@ def validate_mongo_connection(cnx: str) -> (bool, str):
parsed = urlparse(cnx)
if parsed.scheme not in ["mongodb", "mongodb+srv"]:
return False, "Not mongodb schema"
# we have mongo connection string. Let's try if we can connect.
try:
components = decompose_url(cnx)
except RuntimeError:
return False, f"Invalid port specified."
mongo_args = {
"host": compose_url(**components),
"serverSelectionTimeoutMS": 2000
}
port = components.get("port")
if port is not None:
mongo_args["port"] = int(port)
try:
client = MongoClient(**mongo_args)
client = MongoClient(
cnx,
serverSelectionTimeoutMS=2000
)
client.server_info()
client.close()
except ServerSelectionTimeoutError as e:
return False, f"Cannot connect to server {cnx} - {e}"
except ValueError:
return False, f"Invalid port specified {parsed.port}"
except InvalidURI as e:
return False, str(e)
except (ConfigurationError, OperationFailure, InvalidURI) as exc:
return False, str(exc)
else:
return True, "Connection is successful"
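The simplified check above hands the connection string straight to MongoClient and lets server_info() force a real round trip within the two-second server-selection timeout. A rough standalone equivalent (the `can_connect` helper and the URL are placeholders, not part of the codebase):

```python
from pymongo import MongoClient
from pymongo.errors import (
    ConfigurationError,
    InvalidURI,
    OperationFailure,
    ServerSelectionTimeoutError,
)


def can_connect(cnx: str) -> bool:
    """Return True if a MongoDB server answers on the given connection string."""
    try:
        client = MongoClient(cnx, serverSelectionTimeoutMS=2000)
        client.server_info()  # forces server selection, i.e. an actual request
        client.close()
    except (ConfigurationError, OperationFailure,
            InvalidURI, ServerSelectionTimeoutError):
        return False
    return True


print(can_connect("mongodb://localhost:27017"))
```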

View file

@ -1,4 +1,4 @@
# -*- coding: utf-8 -*-
"""Definition of Igniter version."""
__version__ = "1.0.0-beta"
__version__ = "1.0.0-rc1"

50
inno_setup.iss Normal file
View file

@ -0,0 +1,50 @@
; Script generated by the Inno Setup Script Wizard.
; SEE THE DOCUMENTATION FOR DETAILS ON CREATING INNO SETUP SCRIPT FILES!
#define MyAppName "OpenPype"
#define Build GetEnv("BUILD_DIR")
#define AppVer GetEnv("BUILD_VERSION")
[Setup]
; NOTE: The value of AppId uniquely identifies this application. Do not use the same AppId value in installers for other applications.
; (To generate a new GUID, click Tools | Generate GUID inside the IDE.)
AppId={{B9E9DF6A-5BDA-42DD-9F35-C09D564C4D93}
AppName={#MyAppName}
AppVersion={#AppVer}
AppVerName={#MyAppName} version {#AppVer}
AppPublisher=Orbi Tools s.r.o
AppPublisherURL=http://pype.club
AppSupportURL=http://pype.club
AppUpdatesURL=http://pype.club
DefaultDirName={autopf}\{#MyAppName}
DisableProgramGroupPage=yes
OutputBaseFilename={#MyAppName}-{#AppVer}-install
AllowCancelDuringInstall=yes
; Uncomment the following line to run in non administrative install mode (install for current user only.)
;PrivilegesRequired=lowest
PrivilegesRequiredOverridesAllowed=dialog
SetupIconFile=igniter\openpype.ico
OutputDir=build\
Compression=lzma
SolidCompression=yes
WizardStyle=modern
[Languages]
Name: "english"; MessagesFile: "compiler:Default.isl"
[Tasks]
Name: "desktopicon"; Description: "{cm:CreateDesktopIcon}"; GroupDescription: "{cm:AdditionalIcons}"; Flags: unchecked
[Files]
Source: "build\{#build}\*"; DestDir: "{app}"; Flags: ignoreversion recursesubdirs createallsubdirs
; NOTE: Don't use "Flags: ignoreversion" on any shared system files
[Icons]
Name: "{autoprograms}\{#MyAppName}"; Filename: "{app}\openpype_gui.exe"
Name: "{autodesktop}\{#MyAppName}"; Filename: "{app}\openpype_gui.exe"; Tasks: desktopicon
[Run]
Filename: "{app}\openpype_gui.exe"; Description: "{cm:LaunchProgram,OpenPype}"; Flags: nowait postinstall skipifsilent

View file

@ -9,6 +9,7 @@ from .settings import get_project_settings
from .lib import (
Anatomy,
filter_pyblish_plugins,
set_plugin_attributes_from_settings,
change_timer_to_current_context
)
@ -58,44 +59,23 @@ def patched_discover(superclass):
# run original discover and get plugins
plugins = _original_discover(superclass)
# determine host application to use for finding presets
if avalon.registered_host() is None:
return plugins
host = avalon.registered_host().__name__.split(".")[-1]
set_plugin_attributes_from_settings(plugins, superclass)
# map plugin superclass to preset json. Currently supported are load and
# create (avalon.api.Loader and avalon.api.Creator)
plugin_type = "undefined"
if superclass.__name__.split(".")[-1] == "Loader":
plugin_type = "load"
elif superclass.__name__.split(".")[-1] == "Creator":
plugin_type = "create"
print(">>> Finding presets for {}:{} ...".format(host, plugin_type))
try:
settings = (
get_project_settings(os.environ['AVALON_PROJECT'])
[host][plugin_type]
)
except KeyError:
print("*** no presets found.")
else:
for plugin in plugins:
if plugin.__name__ in settings:
print(">>> We have preset for {}".format(plugin.__name__))
for option, value in settings[plugin.__name__].items():
if option == "enabled" and value is False:
setattr(plugin, "active", False)
print(" - is disabled by preset")
else:
setattr(plugin, option, value)
print(" - setting `{}`: `{}`".format(option, value))
return plugins
@import_wrapper
def install():
"""Install Pype to Avalon."""
from pyblish.lib import MessageHandler
def modified_emit(obj, record):
"""Method replacing `emit` in Pyblish's MessageHandler."""
record.msg = record.getMessage()
obj.records.append(record)
MessageHandler.emit = modified_emit
log.info("Registering global plug-ins..")
pyblish.register_plugin_path(PUBLISH_PATH)
pyblish.register_discovery_filter(filter_pyblish_plugins)

View file

@ -224,17 +224,6 @@ def launch(app, project, asset, task,
PypeCommands().run_application(app, project, asset, task, tools, arguments)
@main.command()
@click.option("-p", "--path", help="Path to zip file", default=None)
def generate_zip(path):
"""Generate Pype zip from current sources.
If PATH is not provided, it will create zip file in user data dir.
"""
PypeCommands().generate_zip(path)
@main.command(
context_settings=dict(
ignore_unknown_options=True,

View file

@ -5,7 +5,7 @@ import logging
from avalon import io
from avalon import api as avalon
from avalon.vendor import Qt
from openpype import lib
from openpype import lib, api
import pyblish.api as pyblish
import openpype.hosts.aftereffects
@ -81,3 +81,35 @@ def uninstall():
def on_pyblish_instance_toggled(instance, old_value, new_value):
"""Toggle layer visibility on instance toggles."""
instance[0].Visible = new_value
def get_asset_settings():
"""Get settings on current asset from database.
Returns:
dict: Scene data.
"""
asset_data = lib.get_asset()["data"]
fps = asset_data.get("fps")
frame_start = asset_data.get("frameStart")
frame_end = asset_data.get("frameEnd")
handle_start = asset_data.get("handleStart")
handle_end = asset_data.get("handleEnd")
resolution_width = asset_data.get("resolutionWidth")
resolution_height = asset_data.get("resolutionHeight")
duration = (frame_end - frame_start + 1) + handle_start + handle_end
entity_type = asset_data.get("entityType")
scene_data = {
"fps": fps,
"frameStart": frame_start,
"frameEnd": frame_end,
"handleStart": handle_start,
"handleEnd": handle_end,
"resolutionWidth": resolution_width,
"resolutionHeight": resolution_height,
"duration": duration
}
return scene_data
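The duration computed above is the content frame range plus both handles; with made-up values:

```python
# Hypothetical asset values, only to illustrate the formula above.
frame_start, frame_end = 1001, 1100
handle_start, handle_end = 10, 10

duration = (frame_end - frame_start + 1) + handle_start + handle_end
assert duration == 120  # 100 content frames plus 20 frames of handles
```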

View file

@ -47,6 +47,10 @@ class CreateRender(openpype.api.Creator):
self.data["members"] = [item.id]
self.data["uuid"] = item.id # for SubsetManager
self.data["subset"] = self.data["subset"]\
.replace(stub.PUBLISH_ICON, '')\
.replace(stub.LOADED_ICON, '')
stub.imprint(item, self.data)
stub.set_label_color(item.id, 14) # Cyan options 0 - 16
stub.rename_item(item.id, stub.PUBLISH_ICON + self.data["subset"])

View file

@ -12,6 +12,7 @@ class AERenderInstance(RenderInstance):
# extend generic, composition name is needed
comp_name = attr.ib(default=None)
comp_id = attr.ib(default=None)
fps = attr.ib(default=None)
class CollectAERender(abstract_collect_render.AbstractCollectRender):
@ -45,6 +46,7 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
raise ValueError("Couldn't find id, unable to publish. " +
"Please recreate instance.")
item_id = inst["members"][0]
work_area_info = self.stub.get_work_area(int(item_id))
if not work_area_info:
@ -57,6 +59,8 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
frameEnd = round(work_area_info.workAreaStart +
float(work_area_info.workAreaDuration) *
float(work_area_info.frameRate)) - 1
fps = work_area_info.frameRate
# TODO add resolution when supported by extension
if inst["family"] == "render" and inst["active"]:
instance = AERenderInstance(
@ -86,7 +90,8 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
frameStart=frameStart,
frameEnd=frameEnd,
frameStep=1,
toBeRenderedOn='deadline'
toBeRenderedOn='deadline',
fps=fps
)
comp = compositions_by_id.get(int(item_id))
@ -102,7 +107,6 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
instances.append(instance)
self.log.debug("instances::{}".format(instances))
return instances
def get_expected_files(self, render_instance):

View file

@ -0,0 +1,126 @@
# -*- coding: utf-8 -*-
"""Validate scene settings."""
import os
import re
import pyblish.api
from avalon import aftereffects
import openpype.hosts.aftereffects.api as api
stub = aftereffects.stub()
class ValidateSceneSettings(pyblish.api.InstancePlugin):
"""
Ensures that Composition Settings (right-click on the composition) are the
same as the task attributes in Ftrack.
By default checks only duration - how many frames should be rendered.
Compares:
Frame start - Frame end + 1 from FTrack
against
Duration in Composition Settings.
If this complains:
Check the error message to see where the discrepancy is.
Check the 'pype' section of task attributes on the Ftrack task for the
expected values.
Check/modify the rendered Composition Settings.
If you know what you are doing, run publishing again and uncheck this
validation before the Validation phase.
"""
"""
Dev docu:
Could be configured by 'presets/plugins/aftereffects/publish'
skip_timelines_check - fill in task names for which to skip validation of
frameStart
frameEnd
fps
handleStart
handleEnd
skip_resolution_check - fill in entity types ('asset') for which to skip validation of
resolutionWidth
resolutionHeight
TODO support in extension is missing for now
By default validates duration (how many frames should be published)
"""
order = pyblish.api.ValidatorOrder
label = "Validate Scene Settings"
families = ["render.farm"]
hosts = ["aftereffects"]
optional = True
skip_timelines_check = [".*"] # * >> skip for all
skip_resolution_check = [".*"]
def process(self, instance):
"""Plugin entry point."""
expected_settings = api.get_asset_settings()
self.log.info("config from DB::{}".format(expected_settings))
if any(re.search(pattern, os.getenv('AVALON_TASK'))
for pattern in self.skip_resolution_check):
expected_settings.pop("resolutionWidth")
expected_settings.pop("resolutionHeight")
if any(re.search(pattern, os.getenv('AVALON_TASK'))
for pattern in self.skip_timelines_check):
expected_settings.pop('fps', None)
expected_settings.pop('frameStart', None)
expected_settings.pop('frameEnd', None)
expected_settings.pop('handleStart', None)
expected_settings.pop('handleEnd', None)
# handle case where ftrack uses only two decimal places
# 23.976023976023978 vs. 23.98
fps = instance.data.get("fps")
if fps:
if isinstance(fps, float):
fps = float(
"{:.2f}".format(fps))
expected_settings["fps"] = fps
duration = instance.data.get("frameEndHandle") - \
instance.data.get("frameStartHandle") + 1
self.log.debug("filtered config::{}".format(expected_settings))
current_settings = {
"fps": fps,
"frameStartHandle": instance.data.get("frameStartHandle"),
"frameEndHandle": instance.data.get("frameEndHandle"),
"resolutionWidth": instance.data.get("resolutionWidth"),
"resolutionHeight": instance.data.get("resolutionHeight"),
"duration": duration
}
self.log.info("current_settings:: {}".format(current_settings))
invalid_settings = []
for key, value in expected_settings.items():
if value != current_settings[key]:
invalid_settings.append(
"{} expected: {} found: {}".format(key, value,
current_settings[key])
)
if ((expected_settings.get("handleStart")
or expected_settings.get("handleEnd"))
and invalid_settings):
msg = "Handles included in calculation. Remove handles in DB " +\
"or extend frame range in Composition Setting."
invalid_settings[-1]["reason"] = msg
msg = "Found invalid settings:\n{}".format(
"\n".join(invalid_settings)
)
assert not invalid_settings, msg
assert os.path.exists(instance.data.get("source")), (
"Scene file not found (saved under wrong name)"
)
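The two-decimal rounding mentioned in the comment above can be checked in isolation (the NTSC rate is used as an example value):

```python
ae_fps = 23.976023976023978  # what the host reports
ftrack_fps = 23.98           # what ftrack stores with two decimal places
assert float("{:.2f}".format(ae_fps)) == ftrack_fps
```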

View file

@ -9,7 +9,7 @@ from avalon import api
import avalon.blender
from openpype.api import PypeCreatorMixin
VALID_EXTENSIONS = [".blend", ".json"]
VALID_EXTENSIONS = [".blend", ".json", ".abc"]
def asset_name(

View file

@ -1,4 +1,5 @@
import os
import re
import subprocess
from openpype.lib import PreLaunchHook
@ -31,10 +32,46 @@ class InstallPySideToBlender(PreLaunchHook):
def inner_execute(self):
# Get blender's python directory
version_regex = re.compile(r"^2\.[0-9]{2}$")
executable = self.launch_context.executable.executable_path
# Blender installation contains a subfolder named after its version where
# python binaries are stored.
version_subfolder = self.launch_context.app_name.split("_")[1]
if os.path.basename(executable).lower() != "blender.exe":
self.log.info((
"Executable does not lead to blender.exe file. Can't determine"
" blender's python to check/install PySide2."
))
return
executable_dir = os.path.dirname(executable)
version_subfolders = []
for name in os.listdir(executable_dir):
fullpath = os.path.join(executable_dir, name)
if not os.path.isdir(fullpath):
continue
if not version_regex.match(name):
continue
version_subfolders.append(name)
if not version_subfolders:
self.log.info(
"Didn't find version subfolder next to Blender executable"
)
return
if len(version_subfolders) > 1:
self.log.info((
"Found more than one version subfolder next"
" to blender executable. {}"
).format(", ".join([
'"./{}"'.format(name)
for name in version_subfolders
])))
return
version_subfolder = version_subfolders[0]
pythond_dir = os.path.join(
os.path.dirname(executable),
version_subfolder,
@ -65,6 +102,7 @@ class InstallPySideToBlender(PreLaunchHook):
# Check if PySide2 is installed and skip if yes
if self.is_pyside_installed(python_executable):
self.log.debug("Blender has already installed PySide2.")
return
# Install PySide2 in blender's python

View file

@ -0,0 +1,35 @@
"""Create a pointcache asset."""
import bpy
from avalon import api
from avalon.blender import lib
import openpype.hosts.blender.api.plugin
class CreatePointcache(openpype.hosts.blender.api.plugin.Creator):
"""Polygonal static geometry"""
name = "pointcacheMain"
label = "Point Cache"
family = "pointcache"
icon = "gears"
def process(self):
asset = self.data["asset"]
subset = self.data["subset"]
name = openpype.hosts.blender.api.plugin.asset_name(asset, subset)
collection = bpy.data.collections.new(name=name)
bpy.context.scene.collection.children.link(collection)
self.data['task'] = api.Session.get('AVALON_TASK')
lib.imprint(collection, self.data)
if (self.options or {}).get("useSelection"):
objects = lib.get_selection()
for obj in objects:
collection.objects.link(obj)
if obj.type == 'EMPTY':
objects.extend(obj.children)
return collection

View file

@ -0,0 +1,246 @@
"""Load an asset in Blender from an Alembic file."""
from pathlib import Path
from pprint import pformat
from typing import Dict, List, Optional
from avalon import api, blender
import bpy
import openpype.hosts.blender.api.plugin as plugin
class CacheModelLoader(plugin.AssetLoader):
"""Load cache models.
Stores the imported asset in a collection named after the asset.
Note:
At least for now it only supports Alembic files.
"""
families = ["model", "pointcache"]
representations = ["abc"]
label = "Link Alembic"
icon = "code-fork"
color = "orange"
def _remove(self, objects, container):
for obj in list(objects):
if obj.type == 'MESH':
bpy.data.meshes.remove(obj.data)
elif obj.type == 'EMPTY':
bpy.data.objects.remove(obj)
bpy.data.collections.remove(container)
def _process(self, libpath, container_name, parent_collection):
bpy.ops.object.select_all(action='DESELECT')
view_layer = bpy.context.view_layer
view_layer_collection = view_layer.active_layer_collection.collection
relative = bpy.context.preferences.filepaths.use_relative_paths
bpy.ops.wm.alembic_import(
filepath=libpath,
relative_path=relative
)
parent = parent_collection
if parent is None:
parent = bpy.context.scene.collection
model_container = bpy.data.collections.new(container_name)
parent.children.link(model_container)
for obj in bpy.context.selected_objects:
model_container.objects.link(obj)
view_layer_collection.objects.unlink(obj)
name = obj.name
obj.name = f"{name}:{container_name}"
# Groups are imported as Empty objects in Blender
if obj.type == 'MESH':
data_name = obj.data.name
obj.data.name = f"{data_name}:{container_name}"
if not obj.get(blender.pipeline.AVALON_PROPERTY):
obj[blender.pipeline.AVALON_PROPERTY] = dict()
avalon_info = obj[blender.pipeline.AVALON_PROPERTY]
avalon_info.update({"container_name": container_name})
bpy.ops.object.select_all(action='DESELECT')
return model_container
def process_asset(
self, context: dict, name: str, namespace: Optional[str] = None,
options: Optional[Dict] = None
) -> Optional[List]:
"""
Arguments:
name: Use pre-defined name
namespace: Use pre-defined namespace
context: Full parenthood of representation to load
options: Additional settings dictionary
"""
libpath = self.fname
asset = context["asset"]["name"]
subset = context["subset"]["name"]
lib_container = plugin.asset_name(
asset, subset
)
unique_number = plugin.get_unique_number(
asset, subset
)
namespace = namespace or f"{asset}_{unique_number}"
container_name = plugin.asset_name(
asset, subset, unique_number
)
container = bpy.data.collections.new(lib_container)
container.name = container_name
blender.pipeline.containerise_existing(
container,
name,
namespace,
context,
self.__class__.__name__,
)
container_metadata = container.get(
blender.pipeline.AVALON_PROPERTY)
container_metadata["libpath"] = libpath
container_metadata["lib_container"] = lib_container
obj_container = self._process(
libpath, container_name, None)
container_metadata["obj_container"] = obj_container
# Save the list of objects in the metadata container
container_metadata["objects"] = obj_container.all_objects
nodes = list(container.objects)
nodes.append(container)
self[:] = nodes
return nodes
def update(self, container: Dict, representation: Dict):
"""Update the loaded asset.
This will remove all objects of the current collection, load the new
ones and add them to the collection.
If the objects of the collection are used in another collection they
will not be removed, only unlinked. Normally this should not be the
case though.
Warning:
No nested collections are supported at the moment!
"""
collection = bpy.data.collections.get(
container["objectName"]
)
libpath = Path(api.get_representation_path(representation))
extension = libpath.suffix.lower()
self.log.info(
"Container: %s\nRepresentation: %s",
pformat(container, indent=2),
pformat(representation, indent=2),
)
assert collection, (
f"The asset is not loaded: {container['objectName']}"
)
assert not (collection.children), (
"Nested collections are not supported."
)
assert libpath, (
"No existing library file found for {container['objectName']}"
)
assert libpath.is_file(), (
f"The file doesn't exist: {libpath}"
)
assert extension in plugin.VALID_EXTENSIONS, (
f"Unsupported file: {libpath}"
)
collection_metadata = collection.get(
blender.pipeline.AVALON_PROPERTY)
collection_libpath = collection_metadata["libpath"]
obj_container = plugin.get_local_collection_with_name(
collection_metadata["obj_container"].name
)
objects = obj_container.all_objects
container_name = obj_container.name
normalized_collection_libpath = (
str(Path(bpy.path.abspath(collection_libpath)).resolve())
)
normalized_libpath = (
str(Path(bpy.path.abspath(str(libpath))).resolve())
)
self.log.debug(
"normalized_collection_libpath:\n %s\nnormalized_libpath:\n %s",
normalized_collection_libpath,
normalized_libpath,
)
if normalized_collection_libpath == normalized_libpath:
self.log.info("Library already loaded, not updating...")
return
parent = plugin.get_parent_collection(obj_container)
self._remove(objects, obj_container)
obj_container = self._process(
str(libpath), container_name, parent)
collection_metadata["obj_container"] = obj_container
collection_metadata["objects"] = obj_container.all_objects
collection_metadata["libpath"] = str(libpath)
collection_metadata["representation"] = str(representation["_id"])
def remove(self, container: Dict) -> bool:
"""Remove an existing container from a Blender scene.
Arguments:
container (openpype:container-1.0): Container to remove,
from `host.ls()`.
Returns:
bool: Whether the container was deleted.
Warning:
No nested collections are supported at the moment!
"""
collection = bpy.data.collections.get(
container["objectName"]
)
if not collection:
return False
assert not (collection.children), (
"Nested collections are not supported."
)
collection_metadata = collection.get(
blender.pipeline.AVALON_PROPERTY)
obj_container = plugin.get_local_collection_with_name(
collection_metadata["obj_container"].name
)
objects = obj_container.all_objects
self._remove(objects, obj_container)
bpy.data.collections.remove(collection)
return True
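
One detail worth calling out in the update logic above: reloads are skipped when the stored library path and the new representation path resolve to the same file. A minimal sketch of that comparison outside Blender (os.path.abspath stands in for bpy.path.abspath here, purely for illustration):

```python
import os
from pathlib import Path


def is_same_library(current_libpath, new_libpath):
    """Return True when both paths resolve to the same file on disk."""
    normalized_current = str(Path(os.path.abspath(current_libpath)).resolve())
    normalized_new = str(Path(os.path.abspath(new_libpath)).resolve())
    return normalized_current == normalized_new


# Relative and absolute spellings of the same cache compare equal.
print(is_same_library("caches/char_main.abc",
                      os.path.abspath("caches/char_main.abc")))
```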

View file

@ -244,65 +244,3 @@ class BlendModelLoader(plugin.AssetLoader):
bpy.data.collections.remove(collection)
return True
class CacheModelLoader(plugin.AssetLoader):
"""Load cache models.
Stores the imported asset in a collection named after the asset.
Note:
At least for now it only supports Alembic files.
"""
families = ["model"]
representations = ["abc"]
label = "Link Model"
icon = "code-fork"
color = "orange"
def process_asset(
self, context: dict, name: str, namespace: Optional[str] = None,
options: Optional[Dict] = None
) -> Optional[List]:
"""
Arguments:
name: Use pre-defined name
namespace: Use pre-defined namespace
context: Full parenthood of representation to load
options: Additional settings dictionary
"""
raise NotImplementedError(
"Loading of Alembic files is not yet implemented.")
# TODO (jasper): implement Alembic import.
libpath = self.fname
asset = context["asset"]["name"]
subset = context["subset"]["name"]
# TODO (jasper): evaluate use of namespace which is 'alien' to Blender.
lib_container = container_name = (
plugin.asset_name(asset, subset, namespace)
)
relative = bpy.context.preferences.filepaths.use_relative_paths
with bpy.data.libraries.load(
libpath, link=True, relative=relative
) as (data_from, data_to):
data_to.collections = [lib_container]
scene = bpy.context.scene
instance_empty = bpy.data.objects.new(
container_name, None
)
scene.collection.objects.link(instance_empty)
instance_empty.instance_type = 'COLLECTION'
collection = bpy.data.collections[lib_container]
collection.name = container_name
instance_empty.instance_collection = collection
nodes = list(collection.objects)
nodes.append(collection)
nodes.append(instance_empty)
self[:] = nodes
return nodes

View file

@ -11,14 +11,14 @@ class ExtractABC(openpype.api.Extractor):
label = "Extract ABC"
hosts = ["blender"]
families = ["model"]
families = ["model", "pointcache"]
optional = True
def process(self, instance):
# Define extract output file path
stagingdir = self.staging_dir(instance)
filename = f"{instance.name}.fbx"
filename = f"{instance.name}.abc"
filepath = os.path.join(stagingdir, filename)
context = bpy.context
@ -52,6 +52,8 @@ class ExtractABC(openpype.api.Extractor):
old_scale = scene.unit_settings.scale_length
bpy.ops.object.select_all(action='DESELECT')
selected = list()
for obj in instance:
@ -67,14 +69,11 @@ class ExtractABC(openpype.api.Extractor):
# We set the scale of the scene for the export
scene.unit_settings.scale_length = 0.01
self.log.info(new_context)
# We export the abc
bpy.ops.wm.alembic_export(
new_context,
filepath=filepath,
start=1,
end=1
selected=True
)
view_layer.active_layer_collection = old_active_layer_collection
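
The extractor change above renames the output to `.abc` and exports only the current selection. A minimal sketch of such an export call, intended to run inside Blender (the staging path and single-frame range are illustrative):

```python
import os
import bpy

staging_dir = bpy.app.tempdir  # illustrative staging location
filepath = os.path.join(staging_dir, "modelMain.abc")

# Export only the selected objects to Alembic for a single frame,
# mirroring the `selected=True` call in the extractor above.
bpy.ops.wm.alembic_export(
    filepath=filepath,
    start=1,
    end=1,
    selected=True,
)
print("Alembic written to", filepath)
```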

View file

@ -3,6 +3,7 @@
import os
from pathlib import Path
import logging
import re
from openpype import lib
from openpype.api import (get_current_project_settings)
@ -63,26 +64,9 @@ def get_asset_settings():
"handleStart": handle_start,
"handleEnd": handle_end,
"resolutionWidth": resolution_width,
"resolutionHeight": resolution_height
"resolutionHeight": resolution_height,
"entityType": entity_type
}
settings = get_current_project_settings()
try:
skip_resolution_check = \
settings["harmony"]["general"]["skip_resolution_check"]
skip_timelines_check = \
settings["harmony"]["general"]["skip_timelines_check"]
except KeyError:
skip_resolution_check = []
skip_timelines_check = []
if os.getenv('AVALON_TASK') in skip_resolution_check:
scene_data.pop("resolutionWidth")
scene_data.pop("resolutionHeight")
if entity_type in skip_timelines_check:
scene_data.pop('frameStart', None)
scene_data.pop('frameEnd', None)
return scene_data
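
With the skip logic moved out of `get_asset_settings`, the validator (next file) matches the task name against regex patterns instead of exact strings. A standalone sketch of that matching (the pattern list and task name are illustrative):

```python
import re

skip_resolution_check = ["render", "Render"]  # regex patterns, illustrative
task_name = "renderingLayout"

expected_settings = {
    "fps": 25,
    "resolutionWidth": 1920,
    "resolutionHeight": 1080,
}

# Drop the resolution keys when the task name matches any skip pattern.
if any(re.search(pattern, task_name) for pattern in skip_resolution_check):
    expected_settings.pop("resolutionWidth", None)
    expected_settings.pop("resolutionHeight", None)

print(expected_settings)  # -> {'fps': 25}
```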

View file

@ -2,6 +2,7 @@
"""Validate scene settings."""
import os
import json
import re
import pyblish.api
@ -41,22 +42,42 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
families = ["workfile"]
hosts = ["harmony"]
actions = [ValidateSceneSettingsRepair]
optional = True
frame_check_filter = ["_ch_", "_pr_", "_intd_", "_extd_"]
# used for skipping resolution validation for render tasks
render_check_filter = ["render", "Render"]
# skip frameEnd check if asset contains any of:
frame_check_filter = ["_ch_", "_pr_", "_intd_", "_extd_"] # regex
# skip resolution check if Task name matches any of regex patterns
skip_resolution_check = ["render", "Render"] # regex
# skip frameStart, frameEnd check if Task name matches any of regex patt.
skip_timelines_check = [] # regex
def process(self, instance):
"""Plugin entry point."""
expected_settings = openpype.hosts.harmony.api.get_asset_settings()
self.log.info(expected_settings)
self.log.info("scene settings from DB:".format(expected_settings))
expected_settings = _update_frames(dict.copy(expected_settings))
expected_settings["frameEndHandle"] = expected_settings["frameEnd"] +\
expected_settings["handleEnd"]
if any(string in instance.context.data['anatomyData']['asset']
for string in self.frame_check_filter):
if (any(re.search(pattern, os.getenv('AVALON_TASK'))
for pattern in self.skip_resolution_check)):
expected_settings.pop("resolutionWidth")
expected_settings.pop("resolutionHeight")
entity_type = expected_settings.get("entityType")
if (any(re.search(pattern, entity_type)
for pattern in self.skip_timelines_check)):
expected_settings.pop('frameStart', None)
expected_settings.pop('frameEnd', None)
expected_settings.pop("entityType") # not useful after the check
asset_name = instance.context.data['anatomyData']['asset']
if any(re.search(pattern, asset_name)
for pattern in self.frame_check_filter):
expected_settings.pop("frameEnd")
# handle case where ftrack uses only two decimal places
@ -66,13 +87,7 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
fps = float(
"{:.2f}".format(instance.context.data.get("frameRate")))
if any(string in instance.context.data['anatomyData']['task']
for string in self.render_check_filter):
self.log.debug("Render task detected, resolution check skipped")
expected_settings.pop("resolutionWidth")
expected_settings.pop("resolutionHeight")
self.log.debug(expected_settings)
self.log.debug("filtered settings: {}".format(expected_settings))
current_settings = {
"fps": fps,
@ -84,7 +99,7 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
"resolutionWidth": instance.context.data.get("resolutionWidth"),
"resolutionHeight": instance.context.data.get("resolutionHeight"),
}
self.log.debug("curr:: {}".format(current_settings))
self.log.debug("current scene settings {}".format(current_settings))
invalid_settings = []
for key, value in expected_settings.items():
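
The comparison loop above is truncated by the diff hunk; it collects every key whose current scene value differs from the expected one. A minimal sketch of that collection step with illustrative values:

```python
expected_settings = {"fps": 25.0, "frameStart": 1001, "resolutionWidth": 1920}
current_settings = {"fps": 24.0, "frameStart": 1001, "resolutionWidth": 1920}

invalid_settings = []
for key, value in expected_settings.items():
    if current_settings.get(key) != value:
        invalid_settings.append({
            "name": key,
            "expected": value,
            "current": current_settings.get(key),
        })

print(invalid_settings)
# -> [{'name': 'fps', 'expected': 25.0, 'current': 24.0}]
```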

View file

@ -22,6 +22,7 @@ from .pipeline import (
)
from .lib import (
pype_tag_name,
get_track_items,
get_current_project,
get_current_sequence,
@ -73,6 +74,7 @@ __all__ = [
"work_root",
# Lib functions
"pype_tag_name",
"get_track_items",
"get_current_project",
"get_current_sequence",

View file

@ -2,7 +2,12 @@ import os
import hiero.core.events
import avalon.api as avalon
from openpype.api import Logger
from .lib import sync_avalon_data_to_workfile, launch_workfiles_app
from .lib import (
sync_avalon_data_to_workfile,
launch_workfiles_app,
selection_changed_timeline,
before_project_save
)
from .tags import add_tags_to_workfile
from .menu import update_menu_task_label
@ -78,7 +83,7 @@ def register_hiero_events():
"Registering events for: kBeforeNewProjectCreated, "
"kAfterNewProjectCreated, kBeforeProjectLoad, kAfterProjectLoad, "
"kBeforeProjectSave, kAfterProjectSave, kBeforeProjectClose, "
"kAfterProjectClose, kShutdown, kStartup"
"kAfterProjectClose, kShutdown, kStartup, kSelectionChanged"
)
# hiero.core.events.registerInterest(
@ -91,8 +96,8 @@ def register_hiero_events():
hiero.core.events.registerInterest(
"kAfterProjectLoad", afterProjectLoad)
# hiero.core.events.registerInterest(
# "kBeforeProjectSave", beforeProjectSaved)
hiero.core.events.registerInterest(
"kBeforeProjectSave", before_project_save)
# hiero.core.events.registerInterest(
# "kAfterProjectSave", afterProjectSaved)
#
@ -104,10 +109,16 @@ def register_hiero_events():
# hiero.core.events.registerInterest("kShutdown", shutDown)
# hiero.core.events.registerInterest("kStartup", startupCompleted)
# workfiles
hiero.core.events.registerEventType("kStartWorkfiles")
hiero.core.events.registerInterest("kStartWorkfiles", launch_workfiles_app)
hiero.core.events.registerInterest(
("kSelectionChanged", "kTimeline"), selection_changed_timeline)
# workfiles
try:
hiero.core.events.registerEventType("kStartWorkfiles")
hiero.core.events.registerInterest(
"kStartWorkfiles", launch_workfiles_app)
except RuntimeError:
pass
def register_events():
"""

View file

@ -9,7 +9,7 @@ import hiero
import avalon.api as avalon
import avalon.io
from avalon.vendor.Qt import QtWidgets
from openpype.api import (Logger, Anatomy, config)
from openpype.api import (Logger, Anatomy, get_anatomy_settings)
from . import tags
import shutil
from compiler.ast import flatten
@ -30,9 +30,9 @@ self = sys.modules[__name__]
self._has_been_setup = False
self._has_menu = False
self._registered_gui = None
self.pype_tag_name = "Pype Data"
self.default_sequence_name = "PypeSequence"
self.default_bin_name = "PypeBin"
self.pype_tag_name = "openpypeData"
self.default_sequence_name = "openpypeSequence"
self.default_bin_name = "openpypeBin"
AVALON_CONFIG = os.getenv("AVALON_CONFIG", "pype")
@ -150,15 +150,27 @@ def get_track_items(
# get selected track items or all in active sequence
if selected:
selected_items = list(hiero.selection)
for item in selected_items:
if track_name and track_name in item.parent().name():
# filter only items fitting input track name
track_items.append(item)
elif not track_name:
# or add all if no track_name was defined
track_items.append(item)
else:
try:
selected_items = list(hiero.selection)
for item in selected_items:
if track_name and track_name in item.parent().name():
# filter only items fitting input track name
track_items.append(item)
elif not track_name:
# or add all if no track_name was defined
track_items.append(item)
except AttributeError:
pass
# check if any collected track items are
# `core.Hiero.Python.TrackItem` instance
if track_items:
any_track_item = track_items[0]
if not isinstance(any_track_item, hiero.core.TrackItem):
selected_items = []
# collect all available active sequence track items
if not track_items:
sequence = get_current_sequence(name=sequence_name)
# get all available tracks from sequence
tracks = list(sequence.audioTracks()) + list(sequence.videoTracks())
@ -240,7 +252,7 @@ def set_track_item_pype_tag(track_item, data=None):
# basic Tag's attribute
tag_data = {
"editable": "0",
"note": "Pype data holder",
"note": "OpenPype data container",
"icon": "openpype_icon.png",
"metadata": {k: v for k, v in data.items()}
}
@ -744,10 +756,13 @@ def _set_hrox_project_knobs(doc, **knobs):
# set attributes to Project Tag
proj_elem = doc.documentElement().firstChildElement("Project")
for k, v in knobs.items():
proj_elem.setAttribute(k, v)
if isinstance(v, dict):
continue
proj_elem.setAttribute(str(k), v)
def apply_colorspace_project():
project_name = os.getenv("AVALON_PROJECT")
# get path to the active project
project = get_current_project(remove_untitled=True)
current_file = project.path()
@ -756,9 +771,9 @@ def apply_colorspace_project():
project.close()
# get presets for hiero
presets = config.get_init_presets()
colorspace = presets["colorspace"]
hiero_project_clrs = colorspace.get("hiero", {}).get("project", {})
imageio = get_anatomy_settings(
project_name)["imageio"].get("hiero", None)
presets = imageio.get("workfile")
# save the workfile as subversion "comment:_colorspaceChange"
split_current_file = os.path.splitext(current_file)
@ -789,13 +804,13 @@ def apply_colorspace_project():
os.remove(copy_current_file_tmp)
# use the code below for changing xml hrox attributes
hiero_project_clrs.update({"name": os.path.basename(copy_current_file)})
presets.update({"name": os.path.basename(copy_current_file)})
# read HROX in as QDomSocument
doc = _read_doc_from_path(copy_current_file)
# apply project colorspace properties
_set_hrox_project_knobs(doc, **hiero_project_clrs)
_set_hrox_project_knobs(doc, **presets)
# write QDomSocument back as HROX
_write_doc_to_path(doc, copy_current_file)
@ -805,14 +820,17 @@ def apply_colorspace_project():
def apply_colorspace_clips():
project_name = os.getenv("AVALON_PROJECT")
project = get_current_project(remove_untitled=True)
clips = project.clips()
# get presets for hiero
presets = config.get_init_presets()
colorspace = presets["colorspace"]
hiero_clips_clrs = colorspace.get("hiero", {}).get("clips", {})
imageio = get_anatomy_settings(
project_name)["imageio"].get("hiero", None)
from pprint import pprint
presets = imageio.get("regexInputs", {}).get("inputs", {})
pprint(presets)
for clip in clips:
clip_media_source_path = clip.mediaSource().firstpath()
clip_name = clip.name()
@ -822,10 +840,11 @@ def apply_colorspace_clips():
continue
# check if any colorspace preset for the read is matching
preset_clrsp = next((hiero_clips_clrs[k]
for k in hiero_clips_clrs
if bool(re.search(k, clip_media_source_path))),
None)
preset_clrsp = None
for k in presets:
if not bool(re.search(k["regex"], clip_media_source_path)):
continue
preset_clrsp = k["colorspace"]
if preset_clrsp:
log.debug("Changing clip.path: {}".format(clip_media_source_path))
@ -893,3 +912,61 @@ def get_sequence_pattern_and_padding(file):
return found, padding
else:
return None, None
def sync_clip_name_to_data_asset(track_items_list):
# loop through all selected clips
for track_item in track_items_list:
# ignore if parent track is locked or disabled
if track_item.parent().isLocked():
continue
if not track_item.parent().isEnabled():
continue
# ignore if the track item is disabled
if not track_item.isEnabled():
continue
# get name and data
ti_name = track_item.name()
data = get_track_item_pype_data(track_item)
# ignore if no data on the clip or not publish instance
if not data:
continue
if data.get("id") != "pyblish.avalon.instance":
continue
# fix data if wrong name
if data["asset"] != ti_name:
data["asset"] = ti_name
# remove the original tag
tag = get_track_item_pype_tag(track_item)
track_item.removeTag(tag)
# create new tag with updated data
set_track_item_pype_tag(track_item, data)
print("asset was changed in clip: {}".format(ti_name))
def selection_changed_timeline(event):
"""Callback on timeline to check if asset in data is the same as clip name.
Args:
event (hiero.core.Event): timeline event
"""
timeline_editor = event.sender
selection = timeline_editor.selection()
# run checking function
sync_clip_name_to_data_asset(selection)
def before_project_save(event):
track_items = get_track_items(
selected=False,
track_type="video",
check_enabled=True,
check_locked=True,
check_tagged=True)
# run checking function
sync_clip_name_to_data_asset(track_items)
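
The clip colorspace lookup in `apply_colorspace_clips` above now walks a list of `{"regex": ..., "colorspace": ...}` presets and keeps the last matching entry. A standalone sketch of that matching (preset values and the path are illustrative):

```python
import re

presets = [
    {"regex": r"_plate_.*\.exr$", "colorspace": "ACES - ACEScg"},
    {"regex": r"\.mov$", "colorspace": "Output - Rec.709"},
]

clip_media_source_path = "/proj/sh010/sh010_plate_v001.1001.exr"

preset_clrsp = None
for preset in presets:
    if not re.search(preset["regex"], clip_media_source_path):
        continue
    preset_clrsp = preset["colorspace"]

print(preset_clrsp)  # -> ACES - ACEScg
```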

View file

@ -68,50 +68,45 @@ def menu_install():
menu.addSeparator()
workfiles_action = menu.addAction("Work Files...")
workfiles_action = menu.addAction("Work Files ...")
workfiles_action.setIcon(QtGui.QIcon("icons:Position.png"))
workfiles_action.triggered.connect(launch_workfiles_app)
default_tags_action = menu.addAction("Create Default Tags...")
default_tags_action = menu.addAction("Create Default Tags")
default_tags_action.setIcon(QtGui.QIcon("icons:Position.png"))
default_tags_action.triggered.connect(tags.add_tags_to_workfile)
menu.addSeparator()
publish_action = menu.addAction("Publish...")
publish_action = menu.addAction("Publish ...")
publish_action.setIcon(QtGui.QIcon("icons:Output.png"))
publish_action.triggered.connect(
lambda *args: publish(hiero.ui.mainWindow())
)
creator_action = menu.addAction("Create...")
creator_action = menu.addAction("Create ...")
creator_action.setIcon(QtGui.QIcon("icons:CopyRectangle.png"))
creator_action.triggered.connect(creator.show)
loader_action = menu.addAction("Load...")
loader_action = menu.addAction("Load ...")
loader_action.setIcon(QtGui.QIcon("icons:CopyRectangle.png"))
loader_action.triggered.connect(cbloader.show)
sceneinventory_action = menu.addAction("Manage...")
sceneinventory_action = menu.addAction("Manage ...")
sceneinventory_action.setIcon(QtGui.QIcon("icons:CopyRectangle.png"))
sceneinventory_action.triggered.connect(sceneinventory.show)
menu.addSeparator()
reload_action = menu.addAction("Reload pipeline...")
reload_action.setIcon(QtGui.QIcon("icons:ColorAdd.png"))
reload_action.triggered.connect(reload_config)
if os.getenv("OPENPYPE_DEVELOP"):
reload_action = menu.addAction("Reload pipeline")
reload_action.setIcon(QtGui.QIcon("icons:ColorAdd.png"))
reload_action.triggered.connect(reload_config)
menu.addSeparator()
apply_colorspace_p_action = menu.addAction("Apply Colorspace Project...")
apply_colorspace_p_action = menu.addAction("Apply Colorspace Project")
apply_colorspace_p_action.setIcon(QtGui.QIcon("icons:ColorAdd.png"))
apply_colorspace_p_action.triggered.connect(apply_colorspace_project)
apply_colorspace_c_action = menu.addAction("Apply Colorspace Clips...")
apply_colorspace_c_action = menu.addAction("Apply Colorspace Clips")
apply_colorspace_c_action.setIcon(QtGui.QIcon("icons:ColorAdd.png"))
apply_colorspace_c_action.triggered.connect(apply_colorspace_clips)
self.context_label_action = context_label_action
self.workfile_actions = workfiles_action
self.default_tags_action = default_tags_action
self.publish_action = publish_action
self.reload_action = reload_action

View file

@ -4,10 +4,10 @@ import hiero
from Qt import QtWidgets, QtCore
from avalon.vendor import qargparse
import avalon.api as avalon
import openpype.api as pype
import openpype.api as openpype
from . import lib
log = pype.Logger().get_logger(__name__)
log = openpype.Logger().get_logger(__name__)
def load_stylesheet():
@ -266,7 +266,8 @@ class CreatorWidget(QtWidgets.QDialog):
elif v["type"] == "QSpinBox":
data[k]["value"] = self.create_row(
content_layout, "QSpinBox", v["label"],
setValue=v["value"], setMaximum=10000, setToolTip=tool_tip)
setValue=v["value"], setMinimum=0,
setMaximum=100000, setToolTip=tool_tip)
return data
@ -387,7 +388,8 @@ class ClipLoader:
# try to get value from options or evaluate key value for `load_to`
self.new_sequence = options.get("newSequence") or bool(
"New timeline" in options.get("load_to", ""))
self.clip_name_template = options.get(
"clipNameTemplate") or "{asset}_{subset}_{representation}"
assert self._populate_data(), str(
"Cannot Load selected data, look into database "
"or call your supervisor")
@ -432,7 +434,7 @@ class ClipLoader:
asset = str(repr_cntx["asset"])
subset = str(repr_cntx["subset"])
representation = str(repr_cntx["representation"])
self.data["clip_name"] = "_".join([asset, subset, representation])
self.data["clip_name"] = self.clip_name_template.format(**repr_cntx)
self.data["track_name"] = "_".join([subset, representation])
self.data["versionData"] = self.context["version"]["data"]
# gets file path
@ -476,7 +478,7 @@ class ClipLoader:
"""
asset_name = self.context["representation"]["context"]["asset"]
self.data["assetData"] = pype.get_asset(asset_name)["data"]
self.data["assetData"] = openpype.get_asset(asset_name)["data"]
def _make_track_item(self, source_bin_item, audio=False):
""" Create track item with """
@ -543,15 +545,9 @@ class ClipLoader:
if "slate" in f),
# if nothing was found then use default None
# so other bool could be used
None) or bool(((
# put together duration of clip attributes
self.timeline_out - self.timeline_in + 1) \
+ self.handle_start \
+ self.handle_end
# and compare it with meda duration
) > self.media_duration)
print("__ slate_on: `{}`".format(slate_on))
None) or bool(int(
(self.timeline_out - self.timeline_in + 1)
+ self.handle_start + self.handle_end) < self.media_duration)
# if slate is on then remove the slate frame from beginning
if slate_on:
@ -592,7 +588,7 @@ class ClipLoader:
return track_item
class Creator(pype.Creator):
class Creator(openpype.Creator):
"""Creator class wrapper
"""
clip_color = "Purple"
@ -601,7 +597,7 @@ class Creator(pype.Creator):
def __init__(self, *args, **kwargs):
import openpype.hosts.hiero.api as phiero
super(Creator, self).__init__(*args, **kwargs)
self.presets = pype.get_current_project_settings()[
self.presets = openpype.get_current_project_settings()[
"hiero"]["create"].get(self.__class__.__name__, {})
# adding basic current context resolve objects
@ -674,6 +670,9 @@ class PublishClip:
if kwargs.get("avalon"):
self.tag_data.update(kwargs["avalon"])
# add publish attribute to tag data
self.tag_data.update({"publish": True})
# adding ui inputs if any
self.ui_inputs = kwargs.get("ui_inputs", {})
@ -687,6 +686,7 @@ class PublishClip:
self._create_parents()
def convert(self):
# solve track item data and add them to tag data
self._convert_to_tag_data()
@ -705,6 +705,12 @@ class PublishClip:
self.tag_data["asset"] = new_name
else:
self.tag_data["asset"] = self.ti_name
self.tag_data["hierarchyData"]["shot"] = self.ti_name
if self.tag_data["heroTrack"] and self.review_layer:
self.tag_data.update({"reviewTrack": self.review_layer})
else:
self.tag_data.update({"reviewTrack": None})
# create pype tag on track_item and add data
lib.imprint(self.track_item, self.tag_data)
@ -773,8 +779,8 @@ class PublishClip:
_spl = text.split("#")
_len = (len(_spl) - 1)
_repl = "{{{0}:0>{1}}}".format(name, _len)
new_text = text.replace(("#" * _len), _repl)
return new_text
return text.replace(("#" * _len), _repl)
def _convert_to_tag_data(self):
""" Convert internal data to tag data.
@ -782,13 +788,13 @@ class PublishClip:
Populating the tag data into internal variable self.tag_data
"""
# define vertical sync attributes
master_layer = True
hero_track = True
self.review_layer = ""
if self.vertical_sync:
# check if track name is not in driving layer
if self.track_name not in self.driving_layer:
# if it is not then define vertical sync as None
master_layer = False
hero_track = False
# increasing steps by index of rename iteration
self.count_steps *= self.rename_index
@ -802,7 +808,7 @@ class PublishClip:
self.tag_data[_k] = _v["value"]
# driving layer is set as positive match
if master_layer or self.vertical_sync:
if hero_track or self.vertical_sync:
# mark review layer
if self.review_track and (
self.review_track not in self.review_track_default):
@ -836,40 +842,40 @@ class PublishClip:
hierarchy_formating_data
)
tag_hierarchy_data.update({"masterLayer": True})
if master_layer and self.vertical_sync:
tag_hierarchy_data.update({"heroTrack": True})
if hero_track and self.vertical_sync:
self.vertical_clip_match.update({
(self.clip_in, self.clip_out): tag_hierarchy_data
})
if not master_layer and self.vertical_sync:
if not hero_track and self.vertical_sync:
# driving layer is set as negative match
for (_in, _out), master_data in self.vertical_clip_match.items():
master_data.update({"masterLayer": False})
for (_in, _out), hero_data in self.vertical_clip_match.items():
hero_data.update({"heroTrack": False})
if _in == self.clip_in and _out == self.clip_out:
data_subset = master_data["subset"]
# add track index in case duplicity of names in master data
data_subset = hero_data["subset"]
# add track index in case of duplicate names in hero data
if self.subset in data_subset:
master_data["subset"] = self.subset + str(
hero_data["subset"] = self.subset + str(
self.track_index)
# in case track name and subset name is the same then add
if self.subset_name == self.track_name:
master_data["subset"] = self.subset
hero_data["subset"] = self.subset
# assign hero data as the hierarchy data returned to the tag
tag_hierarchy_data = master_data
tag_hierarchy_data = hero_data
# add data to return data dict
self.tag_data.update(tag_hierarchy_data)
if master_layer and self.review_layer:
self.tag_data.update({"reviewTrack": self.review_layer})
def _solve_tag_hierarchy_data(self, hierarchy_formating_data):
""" Solve tag data from hierarchy data and templates. """
# fill up clip name and hierarchy keys
hierarchy_filled = self.hierarchy.format(**hierarchy_formating_data)
clip_name_filled = self.clip_name.format(**hierarchy_formating_data)
# remove shot from hierarchy data: is not needed anymore
hierarchy_formating_data.pop("shot")
return {
"newClipName": clip_name_filled,
"hierarchy": hierarchy_filled,

View file

@ -84,6 +84,13 @@ def update_tag(tag, data):
mtd = tag.metadata()
# get metadata key from data
data_mtd = data.get("metadata", {})
# due to a Hiero bug we have to make sure keys which are not present in
# data are cleared by setting their value to `None`
for _mk in mtd.keys():
if _mk.replace("tag.", "") not in data_mtd.keys():
mtd.setValue(_mk, str(None))
# set all data metadata to tag metadata
for k, v in data_mtd.items():
mtd.setValue(
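
The workaround above first neutralises any existing tag metadata key that is missing from the incoming data, then writes the new values. A standalone sketch of that two-pass update, using a plain dict in place of Hiero's metadata object:

```python
def update_metadata(existing, data_mtd):
    """existing: tag metadata keyed as 'tag.<name>'; data_mtd: new values."""
    # First pass: clear keys that are no longer present in the new data.
    for key in list(existing.keys()):
        if key.replace("tag.", "") not in data_mtd:
            existing[key] = str(None)
    # Second pass: write all new values.
    for key, value in data_mtd.items():
        existing["tag." + key] = str(value)
    return existing


print(update_metadata({"tag.asset": "sh010", "tag.note": "old"},
                      {"asset": "sh020"}))
# -> {'tag.asset': 'sh020', 'tag.note': 'None'}
```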

View file

View file

@ -0,0 +1,366 @@
""" compatibility OpenTimelineIO 0.12.0 and newer
"""
import os
import re
import sys
import ast
from compiler.ast import flatten
import opentimelineio as otio
from . import utils
import hiero.core
import hiero.ui
self = sys.modules[__name__]
self.track_types = {
hiero.core.VideoTrack: otio.schema.TrackKind.Video,
hiero.core.AudioTrack: otio.schema.TrackKind.Audio
}
self.project_fps = None
self.marker_color_map = {
"magenta": otio.schema.MarkerColor.MAGENTA,
"red": otio.schema.MarkerColor.RED,
"yellow": otio.schema.MarkerColor.YELLOW,
"green": otio.schema.MarkerColor.GREEN,
"cyan": otio.schema.MarkerColor.CYAN,
"blue": otio.schema.MarkerColor.BLUE,
}
self.timeline = None
self.include_tags = True
def get_current_hiero_project(remove_untitled=False):
projects = flatten(hiero.core.projects())
if not remove_untitled:
return next(iter(projects))
# if remove_untitled
for proj in projects:
if "Untitled" in proj.name():
proj.close()
else:
return proj
def create_otio_rational_time(frame, fps):
return otio.opentime.RationalTime(
float(frame),
float(fps)
)
def create_otio_time_range(start_frame, frame_duration, fps):
return otio.opentime.TimeRange(
start_time=create_otio_rational_time(start_frame, fps),
duration=create_otio_rational_time(frame_duration, fps)
)
def _get_metadata(item):
if hasattr(item, 'metadata'):
return {key: value for key, value in dict(item.metadata()).items()}
return {}
def create_otio_reference(clip):
metadata = _get_metadata(clip)
media_source = clip.mediaSource()
# get file info for path and start frame
file_info = media_source.fileinfos().pop()
frame_start = file_info.startFrame()
path = file_info.filename()
# get padding and other file infos
padding = media_source.filenamePadding()
file_head = media_source.filenameHead()
is_sequence = not media_source.singleFile()
frame_duration = media_source.duration()
fps = utils.get_rate(clip) or self.project_fps
extension = os.path.splitext(path)[-1]
if is_sequence:
metadata.update({
"isSequence": True,
"padding": padding
})
# add resolution metadata
metadata.update({
"openpype.source.colourtransform": clip.sourceMediaColourTransform(),
"openpype.source.width": int(media_source.width()),
"openpype.source.height": int(media_source.height()),
"openpype.source.pixelAspect": float(media_source.pixelAspect())
})
otio_ex_ref_item = None
if is_sequence:
# if it is file sequence try to create `ImageSequenceReference`
# the OTIO might not be compatible so return nothing and do it old way
try:
dirname = os.path.dirname(path)
otio_ex_ref_item = otio.schema.ImageSequenceReference(
target_url_base=dirname + os.sep,
name_prefix=file_head,
name_suffix=extension,
start_frame=frame_start,
frame_zero_padding=padding,
rate=fps,
available_range=create_otio_time_range(
frame_start,
frame_duration,
fps
)
)
except AttributeError:
pass
if not otio_ex_ref_item:
reformat_path = utils.get_reformated_path(path, padded=False)
# in case old OTIO or video file create `ExternalReference`
otio_ex_ref_item = otio.schema.ExternalReference(
target_url=reformat_path,
available_range=create_otio_time_range(
frame_start,
frame_duration,
fps
)
)
# add metadata to otio item
add_otio_metadata(otio_ex_ref_item, media_source, **metadata)
return otio_ex_ref_item
def get_marker_color(tag):
icon = tag.icon()
pat = r'icons:Tag(?P<color>\w+)\.\w+'
res = re.search(pat, icon)
if res:
color = res.groupdict().get('color')
if color.lower() in self.marker_color_map:
return self.marker_color_map[color.lower()]
return otio.schema.MarkerColor.RED
def create_otio_markers(otio_item, item):
for tag in item.tags():
if not tag.visible():
continue
if tag.name() == 'Copy':
# Hiero adds this tag to a lot of clips
continue
frame_rate = utils.get_rate(item) or self.project_fps
marked_range = otio.opentime.TimeRange(
start_time=otio.opentime.RationalTime(
tag.inTime(),
frame_rate
),
duration=otio.opentime.RationalTime(
int(tag.metadata().dict().get('tag.length', '0')),
frame_rate
)
)
# add tag metadata but remove "tag." string
metadata = {}
for key, value in tag.metadata().dict().items():
_key = key.replace("tag.", "")
try:
# capture exceptions which are related to strings only
_value = ast.literal_eval(value)
except (ValueError, SyntaxError):
_value = value
metadata.update({_key: _value})
# Store the source item for future import assignment
metadata['hiero_source_type'] = item.__class__.__name__
marker = otio.schema.Marker(
name=tag.name(),
color=get_marker_color(tag),
marked_range=marked_range,
metadata=metadata
)
otio_item.markers.append(marker)
def create_otio_clip(track_item):
clip = track_item.source()
source_in = track_item.sourceIn()
duration = track_item.sourceDuration()
fps = utils.get_rate(track_item) or self.project_fps
name = track_item.name()
media_reference = create_otio_reference(clip)
source_range = create_otio_time_range(
int(source_in),
int(duration),
fps
)
otio_clip = otio.schema.Clip(
name=name,
source_range=source_range,
media_reference=media_reference
)
# Add tags as markers
if self.include_tags:
create_otio_markers(otio_clip, track_item)
create_otio_markers(otio_clip, track_item.source())
return otio_clip
def create_otio_gap(gap_start, clip_start, tl_start_frame, fps):
return otio.schema.Gap(
source_range=create_otio_time_range(
gap_start,
(clip_start - tl_start_frame) - gap_start,
fps
)
)
def _create_otio_timeline():
project = get_current_hiero_project(remove_untitled=False)
metadata = _get_metadata(self.timeline)
metadata.update({
"openpype.timeline.width": int(self.timeline.format().width()),
"openpype.timeline.height": int(self.timeline.format().height()),
"openpype.timeline.pixelAspect": int(self.timeline.format().pixelAspect()), # noqa
"openpype.project.useOCIOEnvironmentOverride": project.useOCIOEnvironmentOverride(), # noqa
"openpype.project.lutSetting16Bit": project.lutSetting16Bit(),
"openpype.project.lutSetting8Bit": project.lutSetting8Bit(),
"openpype.project.lutSettingFloat": project.lutSettingFloat(),
"openpype.project.lutSettingLog": project.lutSettingLog(),
"openpype.project.lutSettingViewer": project.lutSettingViewer(),
"openpype.project.lutSettingWorkingSpace": project.lutSettingWorkingSpace(), # noqa
"openpype.project.lutUseOCIOForExport": project.lutUseOCIOForExport(),
"openpype.project.ocioConfigName": project.ocioConfigName(),
"openpype.project.ocioConfigPath": project.ocioConfigPath()
})
start_time = create_otio_rational_time(
self.timeline.timecodeStart(), self.project_fps)
return otio.schema.Timeline(
name=self.timeline.name(),
global_start_time=start_time,
metadata=metadata
)
def create_otio_track(track_type, track_name):
return otio.schema.Track(
name=track_name,
kind=self.track_types[track_type]
)
def add_otio_gap(track_item, otio_track, prev_out):
gap_length = track_item.timelineIn() - prev_out
if prev_out != 0:
gap_length -= 1
gap = otio.opentime.TimeRange(
duration=otio.opentime.RationalTime(
gap_length,
self.project_fps
)
)
otio_gap = otio.schema.Gap(source_range=gap)
otio_track.append(otio_gap)
def add_otio_metadata(otio_item, media_source, **kwargs):
metadata = _get_metadata(media_source)
# add additional metadata from kwargs
if kwargs:
metadata.update(kwargs)
# add metadata to otio item metadata
for key, value in metadata.items():
otio_item.metadata.update({key: value})
def create_otio_timeline():
# get current timeline
self.timeline = hiero.ui.activeSequence()
self.project_fps = self.timeline.framerate().toFloat()
# convert timeline to otio
otio_timeline = _create_otio_timeline()
# loop all defined track types
for track in self.timeline.items():
# skip if track is disabled
if not track.isEnabled():
continue
# convert track to otio
otio_track = create_otio_track(
type(track), track.name())
for itemindex, track_item in enumerate(track):
# skip offline track items
if not track_item.isMediaPresent():
continue
# skip if track item is disabled
if not track_item.isEnabled():
continue
# Add Gap if needed
if itemindex == 0:
# if it is the first track item on the track, use it
# as its own previous item
prev_item = track_item
else:
# get previous item
prev_item = track_item.parent().items()[itemindex - 1]
# calculate clip frame range difference from each other
clip_diff = track_item.timelineIn() - prev_item.timelineOut()
# add gap if first track item is not starting
# at first timeline frame
if itemindex == 0 and track_item.timelineIn() > 0:
add_otio_gap(track_item, otio_track, 0)
# or add a gap if consecutive track items have
# a frame range difference between them
elif itemindex and clip_diff != 1:
add_otio_gap(track_item, otio_track, prev_item.timelineOut())
# create otio clip and add it to track
otio_clip = create_otio_clip(track_item)
otio_track.append(otio_clip)
# Add tags as markers
if self.include_tags:
create_otio_markers(otio_track, track)
# add track to otio timeline
otio_timeline.tracks.append(otio_track)
return otio_timeline
def write_to_file(otio_timeline, path):
otio.adapters.write_to_file(otio_timeline, path)
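
The module above wraps `opentimelineio`'s time types and schema objects. A small standalone sketch of the same time-range construction and file write, independent of Hiero (names, values and the output path are illustrative):

```python
import opentimelineio as otio


def create_otio_time_range(start_frame, frame_duration, fps):
    return otio.opentime.TimeRange(
        start_time=otio.opentime.RationalTime(float(start_frame), float(fps)),
        duration=otio.opentime.RationalTime(float(frame_duration), float(fps)),
    )


timeline = otio.schema.Timeline(name="exampleTimeline")
track = otio.schema.Track(name="Video 1", kind=otio.schema.TrackKind.Video)
clip = otio.schema.Clip(
    name="sh010_plate",
    source_range=create_otio_time_range(1001, 48, 25.0),
)
track.append(clip)
timeline.tracks.append(track)

otio.adapters.write_to_file(timeline, "/tmp/example_timeline.otio")
```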

View file

@ -0,0 +1,545 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
__author__ = "Daniel Flehner Heen"
__credits__ = ["Jakub Jezek", "Daniel Flehner Heen"]
import os
import hiero.core
import hiero.ui
import PySide2.QtWidgets as qw
try:
from urllib import unquote
except ImportError:
from urllib.parse import unquote # lint:ok
import opentimelineio as otio
_otio_old = False
def inform(messages):
if isinstance(messages, type('')):
messages = [messages]
qw.QMessageBox.information(
hiero.ui.mainWindow(),
'OTIO Import',
'\n'.join(messages),
qw.QMessageBox.StandardButton.Ok
)
def get_transition_type(otio_item, otio_track):
_in, _out = otio_track.neighbors_of(otio_item)
if isinstance(_in, otio.schema.Gap):
_in = None
if isinstance(_out, otio.schema.Gap):
_out = None
if _in and _out:
return 'dissolve'
elif _in and not _out:
return 'fade_out'
elif not _in and _out:
return 'fade_in'
else:
return 'unknown'
def find_trackitem(otio_clip, hiero_track):
for item in hiero_track.items():
if item.timelineIn() == otio_clip.range_in_parent().start_time.value:
if item.name() == otio_clip.name:
return item
return None
def get_neighboring_trackitems(otio_item, otio_track, hiero_track):
_in, _out = otio_track.neighbors_of(otio_item)
trackitem_in = None
trackitem_out = None
if _in:
trackitem_in = find_trackitem(_in, hiero_track)
if _out:
trackitem_out = find_trackitem(_out, hiero_track)
return trackitem_in, trackitem_out
def apply_transition(otio_track, otio_item, track):
warning = None
# Figure out type of transition
transition_type = get_transition_type(otio_item, otio_track)
# Figure out track kind for getattr below
kind = ''
if isinstance(track, hiero.core.AudioTrack):
kind = 'Audio'
# Gather TrackItems involved in transition
item_in, item_out = get_neighboring_trackitems(
otio_item,
otio_track,
track
)
# Create transition object
if transition_type == 'dissolve':
transition_func = getattr(
hiero.core.Transition,
'create{kind}DissolveTransition'.format(kind=kind)
)
try:
transition = transition_func(
item_in,
item_out,
otio_item.in_offset.value,
otio_item.out_offset.value
)
# Catch error raised if transition is bigger than TrackItem source
except RuntimeError as e:
transition = None
warning = (
"Unable to apply transition \"{t.name}\": {e} "
"Ignoring the transition.").format(t=otio_item, e=str(e))
elif transition_type == 'fade_in':
transition_func = getattr(
hiero.core.Transition,
'create{kind}FadeInTransition'.format(kind=kind)
)
# Warn user if part of fade is outside of clip
if otio_item.in_offset.value:
warning = \
'First half of transition "{t.name}" is outside of clip and ' \
'not valid in Hiero. Only applied second half.' \
.format(t=otio_item)
transition = transition_func(
item_out,
otio_item.out_offset.value
)
elif transition_type == 'fade_out':
transition_func = getattr(
hiero.core.Transition,
'create{kind}FadeOutTransition'.format(kind=kind)
)
transition = transition_func(
item_in,
otio_item.in_offset.value
)
# Warn user if part of fade is outside of clip
if otio_item.out_offset.value:
warning = \
'Second half of transition "{t.name}" is outside of clip ' \
'and not valid in Hiero. Only applied first half.' \
.format(t=otio_item)
else:
# Unknown transition
return
# Apply transition to track
if transition:
track.addTransition(transition)
# Inform user about missing or adjusted transitions
return warning
def prep_url(url_in):
url = unquote(url_in)
if url.startswith('file://localhost/'):
return url
url = 'file://localhost{sep}{url}'.format(
sep=url.startswith(os.sep) and '' or os.sep,
url=url.startswith(os.sep) and url[1:] or url
)
return url
def create_offline_mediasource(otio_clip, path=None):
global _otio_old
hiero_rate = hiero.core.TimeBase(
otio_clip.source_range.start_time.rate
)
try:
legal_media_refs = (
otio.schema.ExternalReference,
otio.schema.ImageSequenceReference
)
except AttributeError:
_otio_old = True
legal_media_refs = (
otio.schema.ExternalReference
)
if isinstance(otio_clip.media_reference, legal_media_refs):
source_range = otio_clip.available_range()
else:
source_range = otio_clip.source_range
if path is None:
path = otio_clip.name
media = hiero.core.MediaSource.createOfflineVideoMediaSource(
prep_url(path),
source_range.start_time.value,
source_range.duration.value,
hiero_rate,
source_range.start_time.value
)
return media
def load_otio(otio_file, project=None, sequence=None):
otio_timeline = otio.adapters.read_from_file(otio_file)
build_sequence(otio_timeline, project=project, sequence=sequence)
marker_color_map = {
"PINK": "Magenta",
"RED": "Red",
"ORANGE": "Yellow",
"YELLOW": "Yellow",
"GREEN": "Green",
"CYAN": "Cyan",
"BLUE": "Blue",
"PURPLE": "Magenta",
"MAGENTA": "Magenta",
"BLACK": "Blue",
"WHITE": "Green"
}
def get_tag(tagname, tagsbin):
for tag in tagsbin.items():
if tag.name() == tagname:
return tag
if isinstance(tag, hiero.core.Bin):
tag = get_tag(tagname, tag)
if tag is not None:
return tag
return None
def add_metadata(metadata, hiero_item):
for key, value in metadata.get('Hiero', dict()).items():
if key == 'source_type':
# Only used internally to reassign tag to correct Hiero item
continue
if isinstance(value, dict):
add_metadata(value, hiero_item)
continue
if value is not None:
if not key.startswith('tag.'):
key = 'tag.' + key
hiero_item.metadata().setValue(key, str(value))
def add_markers(otio_item, hiero_item, tagsbin):
if isinstance(otio_item, (otio.schema.Stack, otio.schema.Clip)):
markers = otio_item.markers
elif isinstance(otio_item, otio.schema.Timeline):
markers = otio_item.tracks.markers
else:
markers = []
for marker in markers:
meta = marker.metadata.get('Hiero', dict())
if 'source_type' in meta:
if hiero_item.__class__.__name__ != meta.get('source_type'):
continue
marker_color = marker.color
_tag = get_tag(marker.name, tagsbin)
if _tag is None:
_tag = get_tag(marker_color_map[marker_color], tagsbin)
if _tag is None:
_tag = hiero.core.Tag(marker_color_map[marker.color])
start = marker.marked_range.start_time.value
end = (
marker.marked_range.start_time.value +
marker.marked_range.duration.value
)
if hasattr(hiero_item, 'addTagToRange'):
tag = hiero_item.addTagToRange(_tag, start, end)
else:
tag = hiero_item.addTag(_tag)
tag.setName(marker.name or marker_color_map[marker_color])
# tag.setNote(meta.get('tag.note', ''))
# Add metadata
add_metadata(marker.metadata, tag)
def create_track(otio_track, tracknum, track_kind):
if track_kind is None and hasattr(otio_track, 'kind'):
track_kind = otio_track.kind
# Create a Track
if track_kind == otio.schema.TrackKind.Video:
track = hiero.core.VideoTrack(
otio_track.name or 'Video{n}'.format(n=tracknum)
)
else:
track = hiero.core.AudioTrack(
otio_track.name or 'Audio{n}'.format(n=tracknum)
)
return track
def create_clip(otio_clip, tagsbin, sequencebin):
# Create MediaSource
url = None
media = None
otio_media = otio_clip.media_reference
if isinstance(otio_media, otio.schema.ExternalReference):
url = prep_url(otio_media.target_url)
media = hiero.core.MediaSource(url)
elif not _otio_old:
if isinstance(otio_media, otio.schema.ImageSequenceReference):
url = prep_url(otio_media.abstract_target_url('#'))
media = hiero.core.MediaSource(url)
if media is None or media.isOffline():
media = create_offline_mediasource(otio_clip, url)
# Reuse previous clip if possible
clip = None
for item in sequencebin.clips():
if item.activeItem().mediaSource() == media:
clip = item.activeItem()
break
if not clip:
# Create new Clip
clip = hiero.core.Clip(media)
# Add Clip to a Bin
sequencebin.addItem(hiero.core.BinItem(clip))
# Add markers
add_markers(otio_clip, clip, tagsbin)
return clip
def create_trackitem(playhead, track, otio_clip, clip):
source_range = otio_clip.source_range
trackitem = track.createTrackItem(otio_clip.name)
trackitem.setPlaybackSpeed(source_range.start_time.rate)
trackitem.setSource(clip)
time_scalar = 1.
# Check for speed effects and adjust playback speed accordingly
for effect in otio_clip.effects:
if isinstance(effect, otio.schema.LinearTimeWarp):
time_scalar = effect.time_scalar
# Only reverse effect can be applied here
if abs(time_scalar) == 1.:
trackitem.setPlaybackSpeed(
trackitem.playbackSpeed() * time_scalar)
elif isinstance(effect, otio.schema.FreezeFrame):
# For freeze frame, playback speed must be set after range
time_scalar = 0.
# If reverse playback speed swap source in and out
if trackitem.playbackSpeed() < 0:
source_out = source_range.start_time.value
source_in = source_range.end_time_inclusive().value
timeline_in = playhead + source_out
timeline_out = (
timeline_in +
source_range.duration.value
) - 1
else:
# Normal playback speed
source_in = source_range.start_time.value
source_out = source_range.end_time_inclusive().value
timeline_in = playhead
timeline_out = (
timeline_in +
source_range.duration.value
) - 1
# Set source and timeline in/out points
trackitem.setTimes(
timeline_in,
timeline_out,
source_in,
source_out
)
# Apply playback speed for freeze frames
if abs(time_scalar) != 1.:
trackitem.setPlaybackSpeed(trackitem.playbackSpeed() * time_scalar)
# Link audio to video when possible
if isinstance(track, hiero.core.AudioTrack):
for other in track.parent().trackItemsAt(playhead):
if other.source() == clip:
trackitem.link(other)
return trackitem
def build_sequence(
otio_timeline, project=None, sequence=None, track_kind=None):
if project is None:
if sequence:
project = sequence.project()
else:
# Per version 12.1v2 there is no way of getting active project
project = hiero.core.projects(hiero.core.Project.kUserProjects)[-1]
projectbin = project.clipsBin()
if not sequence:
# Create a Sequence
sequence = hiero.core.Sequence(otio_timeline.name or 'OTIOSequence')
# Set sequence settings from otio timeline if available
if (
hasattr(otio_timeline, 'global_start_time')
and otio_timeline.global_start_time
):
start_time = otio_timeline.global_start_time
sequence.setFramerate(start_time.rate)
sequence.setTimecodeStart(start_time.value)
# Create a Bin to hold clips
projectbin.addItem(hiero.core.BinItem(sequence))
sequencebin = hiero.core.Bin(sequence.name())
projectbin.addItem(sequencebin)
else:
sequencebin = projectbin
# Get tagsBin
tagsbin = hiero.core.project("Tag Presets").tagsBin()
# Add timeline markers
add_markers(otio_timeline, sequence, tagsbin)
if isinstance(otio_timeline, otio.schema.Timeline):
tracks = otio_timeline.tracks
else:
tracks = [otio_timeline]
for tracknum, otio_track in enumerate(tracks):
playhead = 0
_transitions = []
# Add track to sequence
track = create_track(otio_track, tracknum, track_kind)
sequence.addTrack(track)
# iterate over items in track
for _itemnum, otio_clip in enumerate(otio_track):
if isinstance(otio_clip, (otio.schema.Track, otio.schema.Stack)):
inform('Nested sequences/tracks are created separately.')
# Add gap where the nested sequence would have been
playhead += otio_clip.source_range.duration.value
# Process nested sequence
build_sequence(
otio_clip,
project=project,
track_kind=otio_track.kind
)
elif isinstance(otio_clip, otio.schema.Clip):
# Create a Clip
clip = create_clip(otio_clip, tagsbin, sequencebin)
# Create TrackItem
trackitem = create_trackitem(
playhead,
track,
otio_clip,
clip
)
# Add markers
add_markers(otio_clip, trackitem, tagsbin)
# Add trackitem to track
track.addTrackItem(trackitem)
# Update playhead
playhead = trackitem.timelineOut() + 1
elif isinstance(otio_clip, otio.schema.Transition):
# Store transitions for when all clips in the track are created
_transitions.append((otio_track, otio_clip))
elif isinstance(otio_clip, otio.schema.Gap):
# Hiero has no fillers, slugs or blanks at the moment
playhead += otio_clip.source_range.duration.value
# Apply transitions we stored earlier now that all clips are present
warnings = []
for otio_track, otio_item in _transitions:
# Catch warnings from transitions in case
# of unsupported transitions
warning = apply_transition(otio_track, otio_item, track)
if warning:
warnings.append(warning)
if warnings:
inform(warnings)
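
A minimal illustrative call into the importer above, from Hiero's script editor (the OTIO path is a placeholder):

```python
import hiero.core

# Import into an explicit project; without one, build_sequence falls back
# to the most recent user project as shown above.
project = hiero.core.projects(hiero.core.Project.kUserProjects)[-1]
load_otio("/path/to/editorial_cut.otio", project=project)
```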

View file

@ -0,0 +1,80 @@
import re
import opentimelineio as otio
def timecode_to_frames(timecode, framerate):
rt = otio.opentime.from_timecode(timecode, framerate)
return int(otio.opentime.to_frames(rt))
def frames_to_timecode(frames, framerate):
rt = otio.opentime.from_frames(frames, framerate)
return otio.opentime.to_timecode(rt)
def frames_to_secons(frames, framerate):
rt = otio.opentime.from_frames(frames, framerate)
return otio.opentime.to_seconds(rt)
def get_reformated_path(path, padded=True):
"""
Return fixed python expression path
Args:
path (str): path url or simple file name
Returns:
type: string with reformated path
Example:
get_reformated_path("plate.[0001-1008].exr") > plate.%04d.exr
"""
if "%" in path:
padding_pattern = r"(\d+)"
padding = int(re.findall(padding_pattern, path).pop())
num_pattern = r"(%\d+d)"
if padded:
path = re.sub(num_pattern, "%0{}d".format(padding), path)
else:
path = re.sub(num_pattern, "%d", path)
return path
def get_padding_from_path(path):
"""
Return padding number from DaVinci Resolve sequence path style
Args:
path (str): path url or simple file name
Returns:
int: padding number
Example:
get_padding_from_path("plate.[0001-1008].exr") > 4
"""
padding_pattern = "(\\d+)(?=-)"
if "[" in path:
return len(re.findall(padding_pattern, path).pop())
return None
def get_rate(item):
if not hasattr(item, 'framerate'):
return None
num, den = item.framerate().toRational()
try:
rate = float(num) / float(den)
except ZeroDivisionError:
return None
if rate.is_integer():
return rate
return round(rate, 4)
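
A quick standalone check of the conversion helpers above (values are illustrative; the frame rate argument is passed straight through to `opentimelineio`):

```python
import opentimelineio as otio


def timecode_to_frames(timecode, framerate):
    rt = otio.opentime.from_timecode(timecode, framerate)
    return int(otio.opentime.to_frames(rt))


def frames_to_timecode(frames, framerate):
    rt = otio.opentime.from_frames(frames, framerate)
    return otio.opentime.to_timecode(rt)


print(timecode_to_frames("00:00:04:00", 25))  # -> 100
print(frames_to_timecode(100, 25))            # -> 00:00:04:00
```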

View file

@ -120,9 +120,9 @@ class CreateShotClip(phiero.Creator):
"vSyncTrack": {
"value": gui_tracks, # noqa
"type": "QComboBox",
"label": "Master track",
"label": "Hero track",
"target": "ui",
"toolTip": "Select driving track name which should be mastering all others", # noqa
"toolTip": "Select driving track name which should be hero for all others", # noqa
"order": 1}
}
},

View file

@ -29,13 +29,19 @@ class LoadClip(phiero.SequenceLoader):
clip_color_last = "green"
clip_color = "red"
def load(self, context, name, namespace, options):
clip_name_template = "{asset}_{subset}_{representation}"
def load(self, context, name, namespace, options):
# add clip name template to options
options.update({
"clipNameTemplate": self.clip_name_template
})
# in case loader uses multiselection
if self.track and self.sequence:
options.update({
"sequence": self.sequence,
"track": self.track
"track": self.track,
"clipNameTemplate": self.clip_name_template
})
# load clip to timeline and get main variables
@ -45,7 +51,8 @@ class LoadClip(phiero.SequenceLoader):
version_data = version.get("data", {})
version_name = version.get("name", None)
colorspace = version_data.get("colorspace", None)
object_name = "{}_{}".format(name, namespace)
object_name = self.clip_name_template.format(
**context["representation"]["context"])
# add additional metadata from the version to imprint Avalon knob
add_keys = [

View file

@ -0,0 +1,59 @@
import os
import pyblish.api
import openpype.api
class ExtractThumnail(openpype.api.Extractor):
"""
Extractor for track item's thumbnails
"""
label = "Extract Thumbnail"
order = pyblish.api.ExtractorOrder
families = ["plate", "take"]
hosts = ["hiero"]
def process(self, instance):
# create representation data
if "representations" not in instance.data:
instance.data["representations"] = []
staging_dir = self.staging_dir(instance)
self.create_thumbnail(staging_dir, instance)
def create_thumbnail(self, staging_dir, instance):
track_item = instance.data["item"]
track_item_name = track_item.name()
# frames
duration = track_item.sourceDuration()
frame_start = track_item.sourceIn()
self.log.debug(
"__ frame_start: `{}`, duration: `{}`".format(
frame_start, duration))
# get thumbnail frame from the middle
thumb_frame = int(frame_start + (duration / 2))
thumb_file = "{}thumbnail{}{}".format(
track_item_name, thumb_frame, ".png")
thumb_path = os.path.join(staging_dir, thumb_file)
thumbnail = track_item.thumbnail(thumb_frame).save(
thumb_path,
format='png'
)
self.log.debug(
"__ thumb_path: `{}`, frame: `{}`".format(thumbnail, thumb_frame))
self.log.info("Thumnail was generated to: {}".format(thumb_path))
thumb_representation = {
'files': thumb_file,
'stagingDir': staging_dir,
'name': "thumbnail",
'thumbnail': True,
'ext': "png"
}
instance.data["representations"].append(
thumb_representation)

View file

@ -2,7 +2,7 @@ from pyblish import api
import openpype.api as pype
class VersionUpWorkfile(api.ContextPlugin):
class IntegrateVersionUpWorkfile(api.ContextPlugin):
"""Save as new workfile version"""
order = api.IntegratorOrder + 10.1

View file

@ -1,221 +1,260 @@
from compiler.ast import flatten
from pyblish import api
import pyblish
import openpype
from openpype.hosts.hiero import api as phiero
import hiero
# from openpype.hosts.hiero.api import lib
# reload(lib)
# reload(phiero)
from openpype.hosts.hiero.otio import hiero_export
# # developer reload modules
from pprint import pformat
class PreCollectInstances(api.ContextPlugin):
class PrecollectInstances(pyblish.api.ContextPlugin):
"""Collect all Track items selection."""
order = api.CollectorOrder - 0.509
label = "Pre-collect Instances"
order = pyblish.api.CollectorOrder - 0.59
label = "Precollect Instances"
hosts = ["hiero"]
def process(self, context):
track_items = phiero.get_track_items(
selected=True, check_tagged=True, check_enabled=True)
# only return enabled track items
if not track_items:
track_items = phiero.get_track_items(
check_enabled=True, check_tagged=True)
# get sequence and video tracks
sequence = context.data["activeSequence"]
tracks = sequence.videoTracks()
# add collection to context
tracks_effect_items = self.collect_sub_track_items(tracks)
context.data["tracksEffectItems"] = tracks_effect_items
otio_timeline = context.data["otioTimeline"]
selected_timeline_items = phiero.get_track_items(
selected=True, check_enabled=True, check_tagged=True)
self.log.info(
"Processing enabled track items: {}".format(len(track_items)))
"Processing enabled track items: {}".format(
selected_timeline_items))
for _ti in track_items:
data = dict()
clip = _ti.source()
for track_item in selected_timeline_items:
# get clip's subtracks and annotations
annotations = self.clip_annotations(clip)
subtracks = self.clip_subtrack(_ti)
self.log.debug("Annotations: {}".format(annotations))
self.log.debug(">> Subtracks: {}".format(subtracks))
data = {}
clip_name = track_item.name()
# get pype tag data
tag_parsed_data = phiero.get_track_item_pype_data(_ti)
# self.log.debug(pformat(tag_parsed_data))
# get openpype tag data
tag_data = phiero.get_track_item_pype_data(track_item)
self.log.debug("__ tag_data: {}".format(pformat(tag_data)))
if not tag_parsed_data:
if not tag_data:
continue
if tag_parsed_data.get("id") != "pyblish.avalon.instance":
if tag_data.get("id") != "pyblish.avalon.instance":
continue
# solve handles length
tag_data["handleStart"] = min(
tag_data["handleStart"], int(track_item.handleInLength()))
tag_data["handleEnd"] = min(
tag_data["handleEnd"], int(track_item.handleOutLength()))
# add audio to families
with_audio = False
if tag_data.pop("audio"):
with_audio = True
# add tag data to instance data
data.update({
k: v for k, v in tag_parsed_data.items()
k: v for k, v in tag_data.items()
if k not in ("id", "applieswhole", "label")
})
asset = tag_parsed_data["asset"]
subset = tag_parsed_data["subset"]
review = tag_parsed_data.get("review")
audio = tag_parsed_data.get("audio")
# remove audio attribute from data
data.pop("audio")
asset = tag_data["asset"]
subset = tag_data["subset"]
# insert family into families
family = tag_parsed_data["family"]
families = [str(f) for f in tag_parsed_data["families"]]
family = tag_data["family"]
families = [str(f) for f in tag_data["families"]]
families.insert(0, str(family))
track = _ti.parent()
media_source = _ti.source().mediaSource()
source_path = media_source.firstpath()
file_head = media_source.filenameHead()
file_info = media_source.fileinfos().pop()
source_first_frame = int(file_info.startFrame())
# apply only for review and master track instance
if review:
families += ["review", "ftrack"]
# form label
label = asset
if asset != clip_name:
label += " ({})".format(clip_name)
label += " {}".format(subset)
label += " {}".format("[" + ", ".join(families) + "]")
data.update({
"name": "{} {} {}".format(asset, subset, families),
"name": "{}_{}".format(asset, subset),
"label": label,
"asset": asset,
"item": _ti,
"item": track_item,
"families": families,
# tags
"tags": _ti.tags(),
# track item attributes
"track": track.name(),
"trackItem": track,
# version data
"versionData": {
"colorspace": _ti.sourceMediaColourTransform()
},
# source attribute
"source": source_path,
"sourceMedia": media_source,
"sourcePath": source_path,
"sourceFileHead": file_head,
"sourceFirst": source_first_frame,
# clip's effect
"clipEffectItems": subtracks
"publish": tag_data["publish"],
"fps": context.data["fps"]
})
# otio clip data
otio_data = self.get_otio_clip_instance_data(
otio_timeline, track_item) or {}
self.log.debug("__ otio_data: {}".format(pformat(otio_data)))
data.update(otio_data)
self.log.debug("__ data: {}".format(pformat(data)))
# add resolution
self.get_resolution_to_data(data, context)
# create instance
instance = context.create_instance(**data)
# create shot instance for shot attributes create/update
self.create_shot_instance(context, **data)
self.log.info("Creating instance: {}".format(instance))
self.log.debug(
"_ instance.data: {}".format(pformat(instance.data)))
if audio:
a_data = dict()
if not with_audio:
return
# add tag data to instance data
a_data.update({
k: v for k, v in tag_parsed_data.items()
if k not in ("id", "applieswhole", "label")
})
# create audio subset instance
self.create_audio_instance(context, **data)
# create main attributes
subset = "audioMain"
family = "audio"
families = ["clip", "ftrack"]
families.insert(0, str(family))
# add audioReview attribute to plate instance data
# if reviewTrack is on
if tag_data.get("reviewTrack") is not None:
instance.data["reviewAudio"] = True
name = "{} {} {}".format(asset, subset, families)
def get_resolution_to_data(self, data, context):
assert data.get("otioClip"), "Missing `otioClip` data"
a_data.update({
"name": name,
"subset": subset,
"asset": asset,
"family": family,
"families": families,
"item": _ti,
# solve source resolution option
if data.get("sourceResolution", None):
otio_clip_metadata = data[
"otioClip"].media_reference.metadata
data.update({
"resolutionWidth": otio_clip_metadata[
"openpype.source.width"],
"resolutionHeight": otio_clip_metadata[
"openpype.source.height"],
"pixelAspect": otio_clip_metadata[
"openpype.source.pixelAspect"]
})
else:
otio_tl_metadata = context.data["otioTimeline"].metadata
data.update({
"resolutionWidth": otio_tl_metadata["openpype.timeline.width"],
"resolutionHeight": otio_tl_metadata[
"openpype.timeline.height"],
"pixelAspect": otio_tl_metadata[
"openpype.timeline.pixelAspect"]
})
# tags
"tags": _ti.tags(),
})
def create_shot_instance(self, context, **data):
master_layer = data.get("heroTrack")
hierarchy_data = data.get("hierarchyData")
asset = data.get("asset")
item = data.get("item")
clip_name = item.name()
a_instance = context.create_instance(**a_data)
self.log.info("Creating audio instance: {}".format(a_instance))
if not master_layer:
return
if not hierarchy_data:
return
asset = data["asset"]
subset = "shotMain"
# insert family into families
family = "shot"
# form label
label = asset
if asset != clip_name:
label += " ({}) ".format(clip_name)
label += " {}".format(subset)
label += " [{}]".format(family)
data.update({
"name": "{}_{}".format(asset, subset),
"label": label,
"subset": subset,
"asset": asset,
"family": family,
"families": []
})
instance = context.create_instance(**data)
self.log.info("Creating instance: {}".format(instance))
self.log.debug(
"_ instance.data: {}".format(pformat(instance.data)))
def create_audio_instance(self, context, **data):
master_layer = data.get("heroTrack")
if not master_layer:
return
asset = data.get("asset")
item = data.get("item")
clip_name = item.name()
asset = data["asset"]
subset = "audioMain"
# insert family into families
family = "audio"
# form label
label = asset
if asset != clip_name:
label += " ({}) ".format(clip_name)
label += " {}".format(subset)
label += " [{}]".format(family)
data.update({
"name": "{}_{}".format(asset, subset),
"label": label,
"subset": subset,
"asset": asset,
"family": family,
"families": ["clip"]
})
# remove review track attr if any
data.pop("reviewTrack")
# create instance
instance = context.create_instance(**data)
self.log.info("Creating instance: {}".format(instance))
self.log.debug(
"_ instance.data: {}".format(pformat(instance.data)))
def get_otio_clip_instance_data(self, otio_timeline, track_item):
"""
Return otio objects for timeline, track and clip
Args:
otio_timeline (otio.schema.Timeline): otio timeline object
track_item (hiero.core.TrackItem): hiero track item
Returns:
dict: otio clip object
"""
ti_track_name = track_item.parent().name()
timeline_range = self.create_otio_time_range_from_timeline_item_data(
track_item)
for otio_clip in otio_timeline.each_clip():
track_name = otio_clip.parent().name
parent_range = otio_clip.range_in_parent()
if ti_track_name not in track_name:
continue
if otio_clip.name not in track_item.name():
continue
if openpype.lib.is_overlapping_otio_ranges(
parent_range, timeline_range, strict=True):
# add pypedata marker to otio_clip metadata
for marker in otio_clip.markers:
if phiero.pype_tag_name in marker.name:
otio_clip.metadata.update(marker.metadata)
return {"otioClip": otio_clip}
return None
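# a minimal sketch (illustrative values, 25 fps) of building the otio
# ranges that feed the overlap test above:
#
#     import opentimelineio as otio
#     parent_range = otio.opentime.TimeRange(
#         start_time=otio.opentime.RationalTime(100, 25),
#         duration=otio.opentime.RationalTime(50, 25))
#     timeline_range = otio.opentime.TimeRange(
#         start_time=otio.opentime.RationalTime(100, 25),
#         duration=otio.opentime.RationalTime(50, 25))
#     openpype.lib.is_overlapping_otio_ranges(
#         parent_range, timeline_range, strict=True)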
@staticmethod
def clip_annotations(clip):
"""
Returns list of Clip's hiero.core.Annotation
"""
annotations = []
subTrackItems = flatten(clip.subTrackItems())
annotations += [item for item in subTrackItems if isinstance(
item, hiero.core.Annotation)]
return annotations
def create_otio_time_range_from_timeline_item_data(track_item):
timeline = phiero.get_current_sequence()
frame_start = int(track_item.timelineIn())
frame_duration = int(track_item.sourceDuration())
fps = timeline.framerate().toFloat()
@staticmethod
def clip_subtrack(clip):
"""
Returns list of Clip's hiero.core.SubTrackItem
"""
subtracks = []
subTrackItems = flatten(clip.parent().subTrackItems())
for item in subTrackItems:
# skip annotation items
if isinstance(item, hiero.core.Annotation):
continue
# skip anything that is not enabled
if not item.isEnabled():
continue
subtracks.append(item)
return subtracks
@staticmethod
def collect_sub_track_items(tracks):
"""
Returns dictionary with track index as key and list of subtrack items as value
"""
# collect all subtrack items
sub_track_items = dict()
for track in tracks:
items = track.items()
# skip tracks that contain clips > we only want tracks with effect items
if items:
continue
# skip all disabled tracks
if not track.isEnabled():
continue
track_index = track.trackIndex()
_sub_track_items = flatten(track.subTrackItems())
# continue only if any subtrack items are collected
if len(_sub_track_items) < 1:
continue
enabled_sti = list()
# loop all found subtrack items and check if they are enabled
for _sti in _sub_track_items:
# checking if not enabled
if not _sti.isEnabled():
continue
if isinstance(_sti, hiero.core.Annotation):
continue
# collect the subtrack item
enabled_sti.append(_sti)
# continue only if any subtrack items are collected
if len(enabled_sti) < 1:
continue
# add collection of subtrackitems to dict
sub_track_items[track_index] = enabled_sti
return sub_track_items
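# hypothetical shape of the returned mapping, assuming effect-only
# tracks at indexes 2 and 3:
#     {2: [<hiero.core.EffectTrackItem ...>],
#      3: [<hiero.core.EffectTrackItem ...>, ...]}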
return hiero_export.create_otio_time_range(
frame_start, frame_duration, fps)

View file

@ -1,52 +1,57 @@
import os
import pyblish.api
import hiero.ui
from openpype.hosts.hiero import api as phiero
from avalon import api as avalon
from pprint import pformat
from openpype.hosts.hiero.otio import hiero_export
from Qt.QtGui import QPixmap
import tempfile
class PreCollectWorkfile(pyblish.api.ContextPlugin):
class PrecollectWorkfile(pyblish.api.ContextPlugin):
"""Inject the current working file into context"""
label = "Pre-collect Workfile"
order = pyblish.api.CollectorOrder - 0.51
label = "Precollect Workfile"
order = pyblish.api.CollectorOrder - 0.6
def process(self, context):
asset = avalon.Session["AVALON_ASSET"]
subset = "workfile"
project = phiero.get_current_project()
active_sequence = phiero.get_current_sequence()
video_tracks = active_sequence.videoTracks()
audio_tracks = active_sequence.audioTracks()
current_file = project.path()
staging_dir = os.path.dirname(current_file)
base_name = os.path.basename(current_file)
active_timeline = hiero.ui.activeSequence()
fps = active_timeline.framerate().toFloat()
# get workfile's colorspace properties
_clrs = {}
_clrs["useOCIOEnvironmentOverride"] = project.useOCIOEnvironmentOverride() # noqa
_clrs["lutSetting16Bit"] = project.lutSetting16Bit()
_clrs["lutSetting8Bit"] = project.lutSetting8Bit()
_clrs["lutSettingFloat"] = project.lutSettingFloat()
_clrs["lutSettingLog"] = project.lutSettingLog()
_clrs["lutSettingViewer"] = project.lutSettingViewer()
_clrs["lutSettingWorkingSpace"] = project.lutSettingWorkingSpace()
_clrs["lutUseOCIOForExport"] = project.lutUseOCIOForExport()
_clrs["ocioConfigName"] = project.ocioConfigName()
_clrs["ocioConfigPath"] = project.ocioConfigPath()
# adding otio timeline to context
otio_timeline = hiero_export.create_otio_timeline()
# set main project attributes to context
context.data["activeProject"] = project
context.data["activeSequence"] = active_sequence
context.data["videoTracks"] = video_tracks
context.data["audioTracks"] = audio_tracks
context.data["currentFile"] = current_file
context.data["colorspace"] = _clrs
# get workfile thumbnail paths
tmp_staging = tempfile.mkdtemp(prefix="pyblish_tmp_")
thumbnail_name = "workfile_thumbnail.png"
thumbnail_path = os.path.join(tmp_staging, thumbnail_name)
self.log.info("currentFile: {}".format(current_file))
# search for all windows with name of actual sequence
_windows = [w for w in hiero.ui.windowManager().windows()
if active_timeline.name() in w.windowTitle()]
# export window to thumb path
QPixmap.grabWidget(_windows[-1]).save(thumbnail_path, 'png')
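# note: QPixmap.grabWidget() is deprecated in Qt5; if it ever goes away,
# an equivalent call would be `_windows[-1].grab().save(thumbnail_path, 'png')`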
# thumbnail
thumb_representation = {
'files': thumbnail_name,
'stagingDir': tmp_staging,
'name': "thumbnail",
'thumbnail': True,
'ext': "png"
}
# get workfile paths
current_file = project.path()
staging_dir, base_name = os.path.split(current_file)
# creating workfile representation
representation = {
workfile_representation = {
'name': 'hrox',
'ext': 'hrox',
'files': base_name,
@ -59,16 +64,21 @@ class PreCollectWorkfile(pyblish.api.ContextPlugin):
"subset": "{}{}".format(asset, subset.capitalize()),
"item": project,
"family": "workfile",
# version data
"versionData": {
"colorspace": _clrs
},
# source attribute
"sourcePath": current_file,
"representations": [representation]
"representations": [workfile_representation, thumb_representation]
}
# create instance with workfile
instance = context.create_instance(**instance_data)
# update context with main project attributes
context_data = {
"activeProject": project,
"otioTimeline": otio_timeline,
"currentFile": curent_file,
"fps": fps,
}
context.data.update(context_data)
self.log.info("Creating instance: {}".format(instance))
self.log.debug("__ instance.data: {}".format(pformat(instance.data)))
self.log.debug("__ context_data: {}".format(pformat(context_data)))

View file

@ -5,7 +5,7 @@ class CollectFrameRanges(pyblish.api.InstancePlugin):
""" Collect all framranges.
"""
order = pyblish.api.CollectorOrder
order = pyblish.api.CollectorOrder - 0.1
label = "Collect Frame Ranges"
hosts = ["hiero"]
families = ["clip", "effect"]

View file

@ -39,8 +39,8 @@ class CollectHierarchy(pyblish.api.ContextPlugin):
if not set(self.families).intersection(families):
continue
# exclude if not masterLayer True
if not instance.data.get("masterLayer"):
# exclude if not heroTrack True
if not instance.data.get("heroTrack"):
continue
# update families to include `shot` for hierarchy integration

View file

@ -29,7 +29,7 @@ class CollectReview(api.InstancePlugin):
Exception: description
"""
review_track = instance.data.get("review")
review_track = instance.data.get("reviewTrack")
video_tracks = instance.context.data["videoTracks"]
for track in video_tracks:
if review_track not in track.name():

View file

@ -132,7 +132,7 @@ class ExtractReviewPreparation(openpype.api.Extractor):
).format(**locals())
self.log.debug("ffprob_cmd: {}".format(ffprob_cmd))
audio_check_output = openpype.api.subprocess(ffprob_cmd)
audio_check_output = openpype.api.run_subprocess(ffprob_cmd)
self.log.debug(
"audio_check_output: {}".format(audio_check_output))
@ -167,7 +167,7 @@ class ExtractReviewPreparation(openpype.api.Extractor):
# try to get video native resolution data
try:
resolution_output = openpype.api.subprocess((
resolution_output = openpype.api.run_subprocess((
"\"{ffprobe_path}\" -i \"{full_input_path}\""
" -v error "
"-select_streams v:0 -show_entries "
@ -280,7 +280,7 @@ class ExtractReviewPreparation(openpype.api.Extractor):
# run subprocess
self.log.debug("Executing: {}".format(subprcs_cmd))
output = openpype.api.subprocess(subprcs_cmd)
output = openpype.api.run_subprocess(subprcs_cmd)
self.log.debug("Output: {}".format(output))
repre_new = {

View file

@ -0,0 +1,223 @@
from compiler.ast import flatten
from pyblish import api
from openpype.hosts.hiero import api as phiero
import hiero
# from openpype.hosts.hiero.api import lib
# reload(lib)
# reload(phiero)
class PreCollectInstances(api.ContextPlugin):
"""Collect all Track items selection."""
order = api.CollectorOrder - 0.509
label = "Pre-collect Instances"
hosts = ["hiero"]
def process(self, context):
track_items = phiero.get_track_items(
selected=True, check_tagged=True, check_enabled=True)
# only return enabled track items
if not track_items:
track_items = phiero.get_track_items(
check_enabled=True, check_tagged=True)
# get sequence and video tracks
sequence = context.data["activeSequence"]
tracks = sequence.videoTracks()
# add collection to context
tracks_effect_items = self.collect_sub_track_items(tracks)
context.data["tracksEffectItems"] = tracks_effect_items
self.log.info(
"Processing enabled track items: {}".format(len(track_items)))
for _ti in track_items:
data = {}
clip = _ti.source()
# get clip's subtracks and annotations
annotations = self.clip_annotations(clip)
subtracks = self.clip_subtrack(_ti)
self.log.debug("Annotations: {}".format(annotations))
self.log.debug(">> Subtracks: {}".format(subtracks))
# get pype tag data
tag_parsed_data = phiero.get_track_item_pype_data(_ti)
# self.log.debug(pformat(tag_parsed_data))
if not tag_parsed_data:
continue
if tag_parsed_data.get("id") != "pyblish.avalon.instance":
continue
# add tag data to instance data
data.update({
k: v for k, v in tag_parsed_data.items()
if k not in ("id", "applieswhole", "label")
})
asset = tag_parsed_data["asset"]
subset = tag_parsed_data["subset"]
review_track = tag_parsed_data.get("reviewTrack")
hiero_track = tag_parsed_data.get("heroTrack")
audio = tag_parsed_data.get("audio")
# remove audio attribute from data
data.pop("audio")
# insert family into families
family = tag_parsed_data["family"]
families = [str(f) for f in tag_parsed_data["families"]]
families.insert(0, str(family))
track = _ti.parent()
media_source = _ti.source().mediaSource()
source_path = media_source.firstpath()
file_head = media_source.filenameHead()
file_info = media_source.fileinfos().pop()
source_first_frame = int(file_info.startFrame())
# apply only for review and master track instance
if review_track and hiero_track:
families += ["review", "ftrack"]
data.update({
"name": "{} {} {}".format(asset, subset, families),
"asset": asset,
"item": _ti,
"families": families,
# tags
"tags": _ti.tags(),
# track item attributes
"track": track.name(),
"trackItem": track,
"reviewTrack": review_track,
# version data
"versionData": {
"colorspace": _ti.sourceMediaColourTransform()
},
# source attribute
"source": source_path,
"sourceMedia": media_source,
"sourcePath": source_path,
"sourceFileHead": file_head,
"sourceFirst": source_first_frame,
# clip's effect
"clipEffectItems": subtracks
})
instance = context.create_instance(**data)
self.log.info("Creating instance.data: {}".format(instance.data))
if audio:
a_data = dict()
# add tag data to instance data
a_data.update({
k: v for k, v in tag_parsed_data.items()
if k not in ("id", "applieswhole", "label")
})
# create main attributes
subset = "audioMain"
family = "audio"
families = ["clip", "ftrack"]
families.insert(0, str(family))
name = "{} {} {}".format(asset, subset, families)
a_data.update({
"name": name,
"subset": subset,
"asset": asset,
"family": family,
"families": families,
"item": _ti,
# tags
"tags": _ti.tags(),
})
a_instance = context.create_instance(**a_data)
self.log.info("Creating audio instance: {}".format(a_instance))
@staticmethod
def clip_annotations(clip):
"""
Returns list of Clip's hiero.core.Annotation
"""
annotations = []
subTrackItems = flatten(clip.subTrackItems())
annotations += [item for item in subTrackItems if isinstance(
item, hiero.core.Annotation)]
return annotations
@staticmethod
def clip_subtrack(clip):
"""
Returns list of Clip's hiero.core.SubTrackItem
"""
subtracks = []
subTrackItems = flatten(clip.parent().subTrackItems())
for item in subTrackItems:
# skip annotation items
if isinstance(item, hiero.core.Annotation):
continue
# skip anything that is not enabled
if not item.isEnabled():
continue
subtracks.append(item)
return subtracks
@staticmethod
def collect_sub_track_items(tracks):
"""
Returns dictionary with track index as key and list of subtrack items as value
"""
# collect all subtrack items
sub_track_items = dict()
for track in tracks:
items = track.items()
# skip tracks that contain clips > we only want tracks with effect items
if items:
continue
# skip all disabled tracks
if not track.isEnabled():
continue
track_index = track.trackIndex()
_sub_track_items = flatten(track.subTrackItems())
# continue only if any subtrack items are collected
if len(_sub_track_items) < 1:
continue
enabled_sti = list()
# loop all found subtrack items and check if they are enabled
for _sti in _sub_track_items:
# checking if not enabled
if not _sti.isEnabled():
continue
if isinstance(_sti, hiero.core.Annotation):
continue
# collect the subtrack item
enabled_sti.append(_sti)
# continue only if any subtrack items are collected
if len(enabled_sti) < 1:
continue
# add collection of subtrackitems to dict
sub_track_items[track_index] = enabled_sti
return sub_track_items

View file

@ -0,0 +1,74 @@
import os
import pyblish.api
from openpype.hosts.hiero import api as phiero
from avalon import api as avalon
class PreCollectWorkfile(pyblish.api.ContextPlugin):
"""Inject the current working file into context"""
label = "Pre-collect Workfile"
order = pyblish.api.CollectorOrder - 0.51
def process(self, context):
asset = avalon.Session["AVALON_ASSET"]
subset = "workfile"
project = phiero.get_current_project()
active_sequence = phiero.get_current_sequence()
video_tracks = active_sequence.videoTracks()
audio_tracks = active_sequence.audioTracks()
current_file = project.path()
staging_dir = os.path.dirname(current_file)
base_name = os.path.basename(current_file)
# get workfile's colorspace properties
_clrs = {}
_clrs["useOCIOEnvironmentOverride"] = project.useOCIOEnvironmentOverride() # noqa
_clrs["lutSetting16Bit"] = project.lutSetting16Bit()
_clrs["lutSetting8Bit"] = project.lutSetting8Bit()
_clrs["lutSettingFloat"] = project.lutSettingFloat()
_clrs["lutSettingLog"] = project.lutSettingLog()
_clrs["lutSettingViewer"] = project.lutSettingViewer()
_clrs["lutSettingWorkingSpace"] = project.lutSettingWorkingSpace()
_clrs["lutUseOCIOForExport"] = project.lutUseOCIOForExport()
_clrs["ocioConfigName"] = project.ocioConfigName()
_clrs["ocioConfigPath"] = project.ocioConfigPath()
# set main project attributes to context
context.data["activeProject"] = project
context.data["activeSequence"] = active_sequence
context.data["videoTracks"] = video_tracks
context.data["audioTracks"] = audio_tracks
context.data["currentFile"] = current_file
context.data["colorspace"] = _clrs
self.log.info("currentFile: {}".format(current_file))
# creating workfile representation
representation = {
'name': 'hrox',
'ext': 'hrox',
'files': base_name,
"stagingDir": staging_dir,
}
instance_data = {
"name": "{}_{}".format(asset, subset),
"asset": asset,
"subset": "{}{}".format(asset, subset.capitalize()),
"item": project,
"family": "workfile",
# version data
"versionData": {
"colorspace": _clrs
},
# source attribute
"sourcePath": current_file,
"representations": [representation]
}
instance = context.create_instance(**instance_data)
self.log.info("Creating instance: {}".format(instance))

View file

@ -1,338 +1,28 @@
# MIT License
#
# Copyright (c) 2018 Daniel Flehner Heen (Storm Studios)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
__author__ = "Daniel Flehner Heen"
__credits__ = ["Jakub Jezek", "Daniel Flehner Heen"]
import os
import re
import hiero.core
from hiero.core import util
import opentimelineio as otio
marker_color_map = {
"magenta": otio.schema.MarkerColor.MAGENTA,
"red": otio.schema.MarkerColor.RED,
"yellow": otio.schema.MarkerColor.YELLOW,
"green": otio.schema.MarkerColor.GREEN,
"cyan": otio.schema.MarkerColor.CYAN,
"blue": otio.schema.MarkerColor.BLUE,
}
from openpype.hosts.hiero.otio import hiero_export
class OTIOExportTask(hiero.core.TaskBase):
def __init__(self, initDict):
"""Initialize"""
hiero.core.TaskBase.__init__(self, initDict)
self.otio_timeline = None
def name(self):
return str(type(self))
def get_rate(self, item):
if not hasattr(item, 'framerate'):
item = item.sequence()
num, den = item.framerate().toRational()
rate = float(num) / float(den)
if rate.is_integer():
return rate
return round(rate, 2)
def get_clip_ranges(self, trackitem):
# Get rate from source or sequence
if trackitem.source().mediaSource().hasVideo():
rate_item = trackitem.source()
else:
rate_item = trackitem.sequence()
source_rate = self.get_rate(rate_item)
# Reversed video/audio
if trackitem.playbackSpeed() < 0:
start = trackitem.sourceOut()
else:
start = trackitem.sourceIn()
source_start_time = otio.opentime.RationalTime(
start,
source_rate
)
source_duration = otio.opentime.RationalTime(
trackitem.duration(),
source_rate
)
source_range = otio.opentime.TimeRange(
start_time=source_start_time,
duration=source_duration
)
hiero_clip = trackitem.source()
available_range = None
if hiero_clip.mediaSource().isMediaPresent():
start_time = otio.opentime.RationalTime(
hiero_clip.mediaSource().startTime(),
source_rate
)
duration = otio.opentime.RationalTime(
hiero_clip.mediaSource().duration(),
source_rate
)
available_range = otio.opentime.TimeRange(
start_time=start_time,
duration=duration
)
return source_range, available_range
def add_gap(self, trackitem, otio_track, prev_out):
gap_length = trackitem.timelineIn() - prev_out
if prev_out != 0:
gap_length -= 1
rate = self.get_rate(trackitem.sequence())
gap = otio.opentime.TimeRange(
duration=otio.opentime.RationalTime(
gap_length,
rate
)
)
otio_gap = otio.schema.Gap(source_range=gap)
otio_track.append(otio_gap)
def get_marker_color(self, tag):
icon = tag.icon()
pat = r'icons:Tag(?P<color>\w+)\.\w+'
res = re.search(pat, icon)
if res:
color = res.groupdict().get('color')
if color.lower() in marker_color_map:
return marker_color_map[color.lower()]
return otio.schema.MarkerColor.RED
def add_markers(self, hiero_item, otio_item):
for tag in hiero_item.tags():
if not tag.visible():
continue
if tag.name() == 'Copy':
# Hiero adds this tag to a lot of clips
continue
frame_rate = self.get_rate(hiero_item)
marked_range = otio.opentime.TimeRange(
start_time=otio.opentime.RationalTime(
tag.inTime(),
frame_rate
),
duration=otio.opentime.RationalTime(
int(tag.metadata().dict().get('tag.length', '0')),
frame_rate
)
)
metadata = dict(
Hiero=tag.metadata().dict()
)
# Store the source item for future import assignment
metadata['Hiero']['source_type'] = hiero_item.__class__.__name__
marker = otio.schema.Marker(
name=tag.name(),
color=self.get_marker_color(tag),
marked_range=marked_range,
metadata=metadata
)
otio_item.markers.append(marker)
def add_clip(self, trackitem, otio_track, itemindex):
hiero_clip = trackitem.source()
# Add Gap if needed
if itemindex == 0:
prev_item = trackitem
else:
prev_item = trackitem.parent().items()[itemindex - 1]
clip_diff = trackitem.timelineIn() - prev_item.timelineOut()
if itemindex == 0 and trackitem.timelineIn() > 0:
self.add_gap(trackitem, otio_track, 0)
elif itemindex and clip_diff != 1:
self.add_gap(trackitem, otio_track, prev_item.timelineOut())
# Create Clip
source_range, available_range = self.get_clip_ranges(trackitem)
otio_clip = otio.schema.Clip(
name=trackitem.name(),
source_range=source_range
)
# Add media reference
media_reference = otio.schema.MissingReference()
if hiero_clip.mediaSource().isMediaPresent():
source = hiero_clip.mediaSource()
first_file = source.fileinfos()[0]
path = first_file.filename()
if "%" in path:
path = re.sub(r"%\d+d", "%d", path)
if "#" in path:
path = re.sub(r"#+", "%d", path)
media_reference = otio.schema.ExternalReference(
target_url=u'{}'.format(path),
available_range=available_range
)
otio_clip.media_reference = media_reference
# Add Time Effects
playbackspeed = trackitem.playbackSpeed()
if playbackspeed != 1:
if playbackspeed == 0:
time_effect = otio.schema.FreezeFrame()
else:
time_effect = otio.schema.LinearTimeWarp(
time_scalar=playbackspeed
)
otio_clip.effects.append(time_effect)
# Add tags as markers
if self._preset.properties()["includeTags"]:
self.add_markers(trackitem, otio_clip)
self.add_markers(trackitem.source(), otio_clip)
otio_track.append(otio_clip)
# Add Transition if needed
if trackitem.inTransition() or trackitem.outTransition():
self.add_transition(trackitem, otio_track)
def add_transition(self, trackitem, otio_track):
transitions = []
if trackitem.inTransition():
if trackitem.inTransition().alignment().name == 'kFadeIn':
transitions.append(trackitem.inTransition())
if trackitem.outTransition():
transitions.append(trackitem.outTransition())
for transition in transitions:
alignment = transition.alignment().name
if alignment == 'kFadeIn':
in_offset_frames = 0
out_offset_frames = (
transition.timelineOut() - transition.timelineIn()
) + 1
elif alignment == 'kFadeOut':
in_offset_frames = (
trackitem.timelineOut() - transition.timelineIn()
) + 1
out_offset_frames = 0
elif alignment == 'kDissolve':
in_offset_frames = (
transition.inTrackItem().timelineOut() -
transition.timelineIn()
)
out_offset_frames = (
transition.timelineOut() -
transition.outTrackItem().timelineIn()
)
else:
# kUnknown transition is ignored
continue
rate = trackitem.source().framerate().toFloat()
in_time = otio.opentime.RationalTime(in_offset_frames, rate)
out_time = otio.opentime.RationalTime(out_offset_frames, rate)
otio_transition = otio.schema.Transition(
name=alignment, # Consider placing Hiero name in metadata
transition_type=otio.schema.TransitionTypes.SMPTE_Dissolve,
in_offset=in_time,
out_offset=out_time
)
if alignment == 'kFadeIn':
otio_track.insert(-1, otio_transition)
else:
otio_track.append(otio_transition)
def add_tracks(self):
for track in self._sequence.items():
if isinstance(track, hiero.core.AudioTrack):
kind = otio.schema.TrackKind.Audio
else:
kind = otio.schema.TrackKind.Video
otio_track = otio.schema.Track(name=track.name(), kind=kind)
for itemindex, trackitem in enumerate(track):
if isinstance(trackitem.source(), hiero.core.Clip):
self.add_clip(trackitem, otio_track, itemindex)
self.otio_timeline.tracks.append(otio_track)
# Add tags as markers
if self._preset.properties()["includeTags"]:
self.add_markers(self._sequence, self.otio_timeline.tracks)
def create_OTIO(self):
self.otio_timeline = otio.schema.Timeline()
# Set global start time based on sequence
self.otio_timeline.global_start_time = otio.opentime.RationalTime(
self._sequence.timecodeStart(),
self._sequence.framerate().toFloat()
)
self.otio_timeline.name = self._sequence.name()
self.add_tracks()
def startTask(self):
self.create_OTIO()
self.otio_timeline = hiero_export.create_otio_timeline()
def taskStep(self):
return False
@ -350,7 +40,7 @@ class OTIOExportTask(hiero.core.TaskBase):
util.filesystem.makeDirs(dirname)
# write otio file
otio.adapters.write_to_file(self.otio_timeline, exportPath)
hiero_export.write_to_file(self.otio_timeline, exportPath)
# Catch all exceptions and log error
except Exception as e:
@ -370,7 +60,7 @@ class OTIOExportPreset(hiero.core.TaskPresetBase):
"""Initialise presets to default values"""
hiero.core.TaskPresetBase.__init__(self, OTIOExportTask, name)
self.properties()["includeTags"] = True
self.properties()["includeTags"] = hiero_export.include_tags = True
self.properties().update(properties)
def supportedItems(self):

View file

@ -1,3 +1,9 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
__author__ = "Daniel Flehner Heen"
__credits__ = ["Jakub Jezek", "Daniel Flehner Heen"]
import hiero.ui
import OTIOExportTask
@ -14,6 +20,7 @@ except ImportError:
FormLayout = QFormLayout # lint:ok
from openpype.hosts.hiero.otio import hiero_export
class OTIOExportUI(hiero.ui.TaskUIBase):
def __init__(self, preset):
@ -27,7 +34,7 @@ class OTIOExportUI(hiero.ui.TaskUIBase):
def includeMarkersCheckboxChanged(self, state):
# Slot to handle change of checkbox state
self._preset.properties()["includeTags"] = state == QtCore.Qt.Checked
hiero_export.include_tags = state == QtCore.Qt.Checked
def populateUI(self, widget, exportTemplate):
layout = widget.layout()

View file

@ -1,25 +1,3 @@
# MIT License
#
# Copyright (c) 2018 Daniel Flehner Heen (Storm Studios)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
from OTIOExportTask import OTIOExportTask
from OTIOExportUI import OTIOExportUI

View file

@ -1,42 +1,91 @@
# MIT License
#
# Copyright (c) 2018 Daniel Flehner Heen (Storm Studios)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
__author__ = "Daniel Flehner Heen"
__credits__ = ["Jakub Jezek", "Daniel Flehner Heen"]
import hiero.ui
import hiero.core
from otioimporter.OTIOImport import load_otio
import PySide2.QtWidgets as qw
from openpype.hosts.hiero.otio.hiero_import import load_otio
class OTIOProjectSelect(qw.QDialog):
def __init__(self, projects, *args, **kwargs):
super(OTIOProjectSelect, self).__init__(*args, **kwargs)
self.setWindowTitle('Please select active project')
self.layout = qw.QVBoxLayout()
self.label = qw.QLabel(
'Unable to determine which project to import sequence to.\n'
'Please select one.'
)
self.layout.addWidget(self.label)
self.projects = qw.QComboBox()
self.projects.addItems(map(lambda p: p.name(), projects))
self.layout.addWidget(self.projects)
QBtn = qw.QDialogButtonBox.Ok | qw.QDialogButtonBox.Cancel
self.buttonBox = qw.QDialogButtonBox(QBtn)
self.buttonBox.accepted.connect(self.accept)
self.buttonBox.rejected.connect(self.reject)
self.layout.addWidget(self.buttonBox)
self.setLayout(self.layout)
def get_sequence(view):
sequence = None
if isinstance(view, hiero.ui.TimelineEditor):
sequence = view.sequence()
elif isinstance(view, hiero.ui.BinView):
for item in view.selection():
if not hasattr(item, 'activeItem'):
continue
if isinstance(item.activeItem(), hiero.core.Sequence):
sequence = item.activeItem()
return sequence
def OTIO_menu_action(event):
otio_action = hiero.ui.createMenuAction(
'Import OTIO',
# Menu actions
otio_import_action = hiero.ui.createMenuAction(
'Import OTIO...',
open_otio_file,
icon=None
)
hiero.ui.registerAction(otio_action)
otio_add_track_action = hiero.ui.createMenuAction(
'New Track(s) from OTIO...',
open_otio_file,
icon=None
)
otio_add_track_action.setEnabled(False)
hiero.ui.registerAction(otio_import_action)
hiero.ui.registerAction(otio_add_track_action)
view = hiero.ui.currentContextMenuView()
if view:
sequence = get_sequence(view)
if sequence:
otio_add_track_action.setEnabled(True)
for action in event.menu.actions():
if action.text() == 'Import':
action.menu().addAction(otio_action)
break
action.menu().addAction(otio_import_action)
action.menu().addAction(otio_add_track_action)
elif action.text() == 'New Track':
action.menu().addAction(otio_add_track_action)
def open_otio_file():
@ -45,8 +94,39 @@ def open_otio_file():
pattern='*.otio',
requiredExtension='.otio'
)
selection = None
sequence = None
view = hiero.ui.currentContextMenuView()
if view:
sequence = get_sequence(view)
selection = view.selection()
if sequence:
project = sequence.project()
elif selection:
project = selection[0].project()
elif len(hiero.core.projects()) > 1:
dialog = OTIOProjectSelect(hiero.core.projects())
if dialog.exec_():
project = hiero.core.projects()[dialog.projects.currentIndex()]
else:
bar = hiero.ui.mainWindow().statusBar()
bar.showMessage(
'OTIO Import aborted by user',
timeout=3000
)
return
else:
project = hiero.core.projects()[-1]
for otio_file in files:
load_otio(otio_file)
load_otio(otio_file, project, sequence)
# HieroPlayer is quite limited and can't create transitions etc.
@ -55,3 +135,7 @@ if not hiero.core.isHieroPlayer():
"kShowContextMenu/kBin",
OTIO_menu_action
)
hiero.core.events.registerInterest(
"kShowContextMenu/kTimeline",
OTIO_menu_action
)

View file

@ -210,7 +210,7 @@ def validate_fps():
if current_fps != fps:
from ...widgets import popup
from openpype.widgets import popup
# Find main window
parent = hou.ui.mainQtWindow()
@ -219,8 +219,8 @@ def validate_fps():
else:
dialog = popup.Popup2(parent=parent)
dialog.setModal(True)
dialog.setWindowTitle("Maya scene not in line with project")
dialog.setMessage("The FPS is out of sync, please fix")
dialog.setWindowTitle("Houdini scene not in line with project")
dialog.setMessage("The FPS is out of sync, please fix it")
# Set new text for button (add optional argument for the popup?)
toggle = dialog.widgets["toggle"]

View file

@ -184,7 +184,7 @@ class AExpectedFiles:
(str): sanitized camera name
Example:
>>> sanizite_camera_name('test:camera_01')
>>> AExpectedFiles.sanizite_camera_name('test:camera_01')
test_camera_01
"""
@ -230,7 +230,7 @@ class AExpectedFiles:
if self.layer.startswith("rs_"):
layer_name = self.layer[3:]
scene_data = {
return {
"frameStart": int(self.get_render_attribute("startFrame")),
"frameEnd": int(self.get_render_attribute("endFrame")),
"frameStep": int(self.get_render_attribute("byFrameStep")),
@ -245,7 +245,6 @@ class AExpectedFiles:
"filePrefix": file_prefix,
"enabledAOVs": self.get_aovs(),
}
return scene_data
def _generate_single_file_sequence(
self, layer_data, force_aov_name=None):
@ -685,8 +684,6 @@ class ExpectedFilesRedshift(AExpectedFiles):
"""Expected files for Redshift renderer.
Attributes:
ext_mapping (list): Mapping redshift extension dropdown values
to strings.
unmerged_aovs (list): Name of aovs that are not merged into resulting
exr and we need them specified in expectedFiles output.
@ -695,8 +692,6 @@ class ExpectedFilesRedshift(AExpectedFiles):
unmerged_aovs = ["Cryptomatte"]
ext_mapping = ["iff", "exr", "tif", "png", "tga", "jpg"]
def __init__(self, layer, render_instance):
"""Construtor."""
super(ExpectedFilesRedshift, self).__init__(layer, render_instance)
@ -785,12 +780,10 @@ class ExpectedFilesRedshift(AExpectedFiles):
# anyway.
return enabled_aovs
default_ext = self.ext_mapping[
cmds.getAttr("redshiftOptions.imageFormat")
]
default_ext = cmds.getAttr(
"redshiftOptions.imageFormat", asString=True)
rs_aovs = cmds.ls(type="RedshiftAOV", referencedNodes=False)
# todo: find out how to detect multichannel exr for redshift
for aov in rs_aovs:
enabled = self.maya_is_true(cmds.getAttr("{}.enabled".format(aov)))
for override in self.get_layer_overrides(

View file

@ -1124,16 +1124,14 @@ def get_id_required_nodes(referenced_nodes=False, nodes=None):
def get_id(node):
"""
Get the `cbId` attribute of the given node
"""Get the `cbId` attribute of the given node.
Args:
node (str): the name of the node to retrieve the attribute from
Returns:
str
"""
if node is None:
return
@ -1872,7 +1870,7 @@ def set_context_settings():
# Set project fps
fps = asset_data.get("fps", project_data.get("fps", 25))
api.Session["AVALON_FPS"] = fps
api.Session["AVALON_FPS"] = str(fps)
set_scene_fps(fps)
# Set project resolution
@ -2688,3 +2686,69 @@ def show_message(title, msg):
pass
else:
message_window.message(title=title, message=msg, parent=parent)
def iter_shader_edits(relationships, shader_nodes, nodes_by_id, label=None):
"""Yield edits as a set of actions."""
attributes = relationships.get("attributes", [])
shader_data = relationships.get("relationships", {})
shading_engines = cmds.ls(shader_nodes, type="objectSet", long=True)
assert shading_engines, "Error in retrieving objectSets from reference"
# region compute lookup
shading_engines_by_id = defaultdict(list)
for shad in shading_engines:
shading_engines_by_id[get_id(shad)].append(shad)
# endregion
# region assign shading engines and other sets
for data in shader_data.values():
# collect all unique IDs of the set members
shader_uuid = data["uuid"]
member_uuids = [
(member["uuid"], member.get("components"))
for member in data["members"]]
filtered_nodes = list()
for _uuid, components in member_uuids:
nodes = nodes_by_id.get(_uuid, None)
if nodes is None:
continue
if components:
# Assign to the components
nodes = [".".join([node, components]) for node in nodes]
filtered_nodes.extend(nodes)
id_shading_engines = shading_engines_by_id[shader_uuid]
if not id_shading_engines:
log.error("{} - No shader found with cbId "
"'{}'".format(label, shader_uuid))
continue
elif len(id_shading_engines) > 1:
log.error("{} - Skipping shader assignment. "
"More than one shader found with cbId "
"'{}'. (found: {})".format(label, shader_uuid,
id_shading_engines))
continue
if not filtered_nodes:
log.warning("{} - No nodes found for shading engine "
"'{}'".format(label, id_shading_engines[0]))
continue
yield {"action": "assign",
"uuid": data["uuid"],
"nodes": filtered_nodes,
"shader": id_shading_engines[0]}
for data in attributes:
nodes = nodes_by_id.get(data["uuid"], [])
attr_value = data["attributes"]
yield {"action": "setattr",
"uuid": data["uuid"],
"nodes": nodes,
"attributes": attr_value}

View file

@ -12,7 +12,7 @@ from openpype.hosts.maya.api import (
lib,
plugin
)
from openpype.api import get_system_settings
from openpype.api import (get_system_settings, get_asset)
class CreateRender(plugin.Creator):
@ -104,7 +104,7 @@ class CreateRender(plugin.Creator):
# namespace is not empty, so we leave it untouched
pass
while(cmds.namespace(exists=namespace_name)):
while cmds.namespace(exists=namespace_name):
namespace_name = "_{}{}".format(str(instance), index)
index += 1
@ -125,7 +125,7 @@ class CreateRender(plugin.Creator):
cmds.sets(sets, forceElement=instance)
# if no render layers are present, create default one with
# asterix selector
# asterisk selector
if not layers:
render_layer = self._rs.createRenderLayer('Main')
collection = render_layer.createCollection("defaultCollection")
@ -137,9 +137,7 @@ class CreateRender(plugin.Creator):
if renderer.startswith('renderman'):
renderer = 'renderman'
cmds.setAttr(self._image_prefix_nodes[renderer],
self._image_prefixes[renderer],
type="string")
self._set_default_renderer_settings(renderer)
def _create_render_settings(self):
# get pools
@ -318,3 +316,86 @@ class CreateRender(plugin.Creator):
False if os.getenv("OPENPYPE_DONT_VERIFY_SSL", True) else True
) # noqa
return requests.get(*args, **kwargs)
def _set_default_renderer_settings(self, renderer):
"""Set basic settings based on renderer.
Args:
renderer (str): Renderer name.
"""
cmds.setAttr(self._image_prefix_nodes[renderer],
self._image_prefixes[renderer],
type="string")
asset = get_asset()
if renderer == "arnold":
# set format to exr
cmds.setAttr(
"defaultArnoldDriver.ai_translator", "exr", type="string")
# enable animation
cmds.setAttr("defaultRenderGlobals.outFormatControl", 0)
cmds.setAttr("defaultRenderGlobals.animation", 1)
cmds.setAttr("defaultRenderGlobals.putFrameBeforeExt", 1)
cmds.setAttr("defaultRenderGlobals.extensionPadding", 4)
# resolution
cmds.setAttr(
"defaultResolution.width",
asset["data"].get("resolutionWidth"))
cmds.setAttr(
"defaultResolution.height",
asset["data"].get("resolutionHeight"))
if renderer == "vray":
vray_settings = cmds.ls(type="VRaySettingsNode")
if not vray_settings:
node = cmds.createNode("VRaySettingsNode")
else:
node = vray_settings[0]
# set underscore as element separator instead of default `.`
cmds.setAttr(
"{}.fileNameRenderElementSeparator".format(
node),
"_"
)
# set format to exr
cmds.setAttr(
"{}.imageFormatStr".format(node), 5)
# animType
cmds.setAttr(
"{}.animType".format(node), 1)
# resolution
cmds.setAttr(
"{}.width".format(node),
asset["data"].get("resolutionWidth"))
cmds.setAttr(
"{}.height".format(node),
asset["data"].get("resolutionHeight"))
if renderer == "redshift":
redshift_settings = cmds.ls(type="RedshiftOptions")
if not redshift_settings:
node = cmds.createNode("RedshiftOptions")
else:
node = redshift_settings[0]
# set exr
cmds.setAttr("{}.imageFormat".format(node), 1)
# resolution
cmds.setAttr(
"defaultResolution.width",
asset["data"].get("resolutionWidth"))
cmds.setAttr(
"defaultResolution.height",
asset["data"].get("resolutionHeight"))
# enable animation
cmds.setAttr("defaultRenderGlobals.outFormatControl", 0)
cmds.setAttr("defaultRenderGlobals.animation", 1)
cmds.setAttr("defaultRenderGlobals.putFrameBeforeExt", 1)
cmds.setAttr("defaultRenderGlobals.extensionPadding", 4)

View file

@ -1,3 +1,5 @@
# -*- coding: utf-8 -*-
"""Look loader."""
import openpype.hosts.maya.api.plugin
from avalon import api, io
import json

View file

@ -19,7 +19,6 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
"rig",
"camerarig"]
representations = ["ma", "abc", "fbx", "mb"]
tool_names = ["loader"]
label = "Reference"
order = -10

View file

@ -1,12 +1,21 @@
from avalon.maya import lib
from avalon import api
from openpype.api import get_project_settings
# -*- coding: utf-8 -*-
"""Loader for Vray Proxy files.
If there are Alembics published along vray proxy (in the same version),
loader will use them instead of native vray vrmesh format.
"""
import os
import maya.cmds as cmds
from avalon.maya import lib
from avalon import api, io
from openpype.api import get_project_settings
class VRayProxyLoader(api.Loader):
"""Load VRayMesh proxy"""
"""Load VRay Proxy with Alembic or VrayMesh."""
families = ["vrayproxy"]
representations = ["vrmesh"]
@ -16,8 +25,17 @@ class VRayProxyLoader(api.Loader):
icon = "code-fork"
color = "orange"
def load(self, context, name, namespace, data):
def load(self, context, name=None, namespace=None, options=None):
# type: (dict, str, str, dict) -> None
"""Loader entry point.
Args:
context (dict): Loaded representation context.
name (str): Name of container.
namespace (str): Optional namespace name.
options (dict): Optional loader options.
"""
from avalon.maya.pipeline import containerise
from openpype.hosts.maya.api.lib import namespaced
@ -26,6 +44,9 @@ class VRayProxyLoader(api.Loader):
except ValueError:
family = "vrayproxy"
# get all representations for this version
self.fname = self._get_abc(context["version"]["_id"]) or self.fname
asset_name = context['asset']["name"]
namespace = namespace or lib.unique_namespace(
asset_name + "_",
@ -39,8 +60,8 @@ class VRayProxyLoader(api.Loader):
with lib.maintained_selection():
cmds.namespace(addNamespace=namespace)
with namespaced(namespace, new=False):
nodes, group_node = self.create_vray_proxy(name,
filename=self.fname)
nodes, group_node = self.create_vray_proxy(
name, filename=self.fname)
self[:] = nodes
if not nodes:
@ -63,7 +84,8 @@ class VRayProxyLoader(api.Loader):
loader=self.__class__.__name__)
def update(self, container, representation):
# type: (dict, dict) -> None
"""Update container with specified representation."""
node = container['objectName']
assert cmds.objExists(node), "Missing container"
@ -71,7 +93,8 @@ class VRayProxyLoader(api.Loader):
vraymeshes = cmds.ls(members, type="VRayMesh")
assert vraymeshes, "Cannot find VRayMesh in container"
filename = api.get_representation_path(representation)
# get all representations for this version
filename = self._get_abc(representation["parent"]) or api.get_representation_path(representation) # noqa: E501
for vray_mesh in vraymeshes:
cmds.setAttr("{}.fileName".format(vray_mesh),
@ -84,7 +107,8 @@ class VRayProxyLoader(api.Loader):
type="string")
def remove(self, container):
# type: (dict) -> None
"""Remove loaded container."""
# Delete container and its contents
if cmds.objExists(container['objectName']):
members = cmds.sets(container['objectName'], query=True) or []
@ -101,61 +125,62 @@ class VRayProxyLoader(api.Loader):
"still has members: %s", namespace)
def switch(self, container, representation):
# type: (dict, dict) -> None
"""Switch loaded representation."""
self.update(container, representation)
def create_vray_proxy(self, name, filename):
# type: (str, str) -> (list, str)
"""Re-create the structure created by VRay to support vrmeshes
Args:
name(str): name of the asset
name (str): Name of the asset.
filename (str): File name of vrmesh.
Returns:
nodes(list)
"""
# Create nodes
vray_mesh = cmds.createNode('VRayMesh', name="{}_VRMS".format(name))
mesh_shape = cmds.createNode("mesh", name="{}_GEOShape".format(name))
vray_mat = cmds.shadingNode("VRayMeshMaterial", asShader=True,
name="{}_VRMM".format(name))
vray_mat_sg = cmds.sets(name="{}_VRSG".format(name),
empty=True,
renderable=True,
noSurfaceShader=True)
if name is None:
name = os.path.splitext(os.path.basename(filename))[0]
cmds.setAttr("{}.fileName".format(vray_mesh),
filename,
type="string")
parent = cmds.createNode("transform", name=name)
proxy = cmds.createNode(
"VRayProxy", name="{}Shape".format(name), parent=parent)
cmds.setAttr(proxy + ".fileName", filename, type="string")
cmds.connectAttr("time1.outTime", proxy + ".currentFrame")
# Create important connections
cmds.connectAttr("time1.outTime",
"{0}.currentFrame".format(vray_mesh))
cmds.connectAttr("{}.fileName2".format(vray_mesh),
"{}.fileName".format(vray_mat))
cmds.connectAttr("{}.instancing".format(vray_mesh),
"{}.instancing".format(vray_mat))
cmds.connectAttr("{}.output".format(vray_mesh),
"{}.inMesh".format(mesh_shape))
cmds.connectAttr("{}.overrideFileName".format(vray_mesh),
"{}.overrideFileName".format(vray_mat))
cmds.connectAttr("{}.currentFrame".format(vray_mesh),
"{}.currentFrame".format(vray_mat))
return [parent, proxy], parent
# Set surface shader input
cmds.connectAttr("{}.outColor".format(vray_mat),
"{}.surfaceShader".format(vray_mat_sg))
def _get_abc(self, version_id):
# type: (str) -> str
"""Get abc representation file path if present.
# Connect mesh to shader
cmds.sets([mesh_shape], addElement=vray_mat_sg)
If there is an Alembic (abc) representation published along the
vray proxy, get its file path.
group_node = cmds.group(empty=True, name="{}_GRP".format(name))
mesh_transform = cmds.listRelatives(mesh_shape,
parent=True, fullPath=True)
cmds.parent(mesh_transform, group_node)
nodes = [vray_mesh, mesh_shape, vray_mat, vray_mat_sg, group_node]
Args:
version_id (str): Version hash id.
# Fix: Force refresh so the mesh shows correctly after creation
cmds.refresh()
cmds.setAttr("{}.geomType".format(vray_mesh), 2)
Returns:
str: Path to the abc file, or an empty string if none was found.
return nodes, group_node
"""
self.log.debug(
"Looking for abc in published representations of this version.")
abc_rep = io.find_one(
{
"type": "representation",
"parent": io.ObjectId(version_id),
"name": "abc"
})
if abc_rep:
self.log.debug("Found, we'll link alembic to vray proxy.")
file_name = api.get_representation_path(abc_rep)
self.log.debug("File: {}".format(self.fname))
return file_name
return ""

View file

@ -348,6 +348,13 @@ class CollectLook(pyblish.api.InstancePlugin):
history = []
for material in materials:
history.extend(cmds.listHistory(material))
# handle VrayPluginNodeMtl node - see #1397
vray_plugin_nodes = cmds.ls(
history, type="VRayPluginNodeMtl", long=True)
for vray_node in vray_plugin_nodes:
history.extend(cmds.listHistory(vray_node))
files = cmds.ls(history, type="file", long=True)
files.extend(cmds.ls(history, type="aiImage", long=True))

View file

@ -2,14 +2,9 @@ import pyblish.api
class CollectRemoveMarked(pyblish.api.ContextPlugin):
"""Collect model data
"""Remove marked data
Ensures always only a single frame is extracted (current frame).
Note:
This is a workaround so that the `pype.model` family can use the
same pointcache extractor implementation as animation and pointcaches.
This always enforces the "current" frame to be published.
Remove instances that have 'remove' in their instance.data
"""

View file

@ -358,9 +358,7 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
options["extendFrames"] = extend_frames
options["overrideExistingFrame"] = override_frames
maya_render_plugin = "MayaPype"
if attributes.get("useMayaBatch", True):
maya_render_plugin = "MayaBatch"
maya_render_plugin = "MayaBatch"
options["mayaRenderPlugin"] = maya_render_plugin

View file

@ -0,0 +1,18 @@
# -*- coding: utf-8 -*-
"""Collect Vray Proxy."""
import pyblish.api
class CollectVrayProxy(pyblish.api.InstancePlugin):
"""Collect Vray Proxy instance.
Add `pointcache` family for it.
"""
order = pyblish.api.CollectorOrder + 0.01
label = 'Collect Vray Proxy'
families = ["vrayproxy"]
def process(self, instance):
"""Collector entry point."""
if not instance.data.get('families'):
instance.data["families"] = []

View file

@ -96,19 +96,25 @@ class ExtractPlayblast(openpype.api.Extractor):
# Remove panel key since it's internal value to capture_gui
preset.pop("panel", None)
self.log.info('using viewport preset: {}'.format(preset))
path = capture.capture(**preset)
playblast = self._fix_playblast_output_path(path)
self.log.info("file list {}".format(playblast))
self.log.debug("playblast path {}".format(path))
collected_frames = os.listdir(stagingdir)
collections, remainder = clique.assemble(collected_frames)
input_path = os.path.join(
stagingdir, collections[0].format('{head}{padding}{tail}'))
self.log.info("input {}".format(input_path))
collected_files = os.listdir(stagingdir)
collections, remainder = clique.assemble(collected_files)
self.log.debug("filename {}".format(filename))
frame_collection = None
for collection in collections:
filebase = collection.format('{head}').rstrip(".")
self.log.debug("collection head {}".format(filebase))
if filebase in filename:
frame_collection = collection
self.log.info(
"we found collection of interest {}".format(
str(frame_collection)))
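# illustrative example: playblasted files ["review.0001.png",
# "review.0002.png"] assemble into a clique collection with head
# "review." whose stripped base "review" is found in `filename`,
# so that collection becomes frame_collection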
if "representations" not in instance.data:
instance.data["representations"] = []
@ -119,12 +125,11 @@ class ExtractPlayblast(openpype.api.Extractor):
# Add camera node name to representation data
camera_node_name = pm.ls(camera)[0].getTransform().name()
representation = {
'name': 'png',
'ext': 'png',
'files': collected_frames,
'files': list(frame_collection),
"stagingDir": stagingdir,
"frameStart": start,
"frameEnd": end,
@ -135,44 +140,6 @@ class ExtractPlayblast(openpype.api.Extractor):
}
instance.data["representations"].append(representation)
def _fix_playblast_output_path(self, filepath):
"""Workaround a bug in maya.cmds.playblast to return correct filepath.
When the `viewer` argument is set to False and maya.cmds.playblast
does not automatically open the playblasted file the returned
filepath does not have the file's extension added correctly.
To workaround this we just glob.glob() for any file extensions and
assume the latest modified file is the correct file and return it.
"""
# Catch cancelled playblast
if filepath is None:
self.log.warning("Playblast did not result in output path. "
"Playblast is probably interrupted.")
return None
# Fix: playblast not returning correct filename (with extension)
# Lets assume the most recently modified file is the correct one.
if not os.path.exists(filepath):
directory = os.path.dirname(filepath)
filename = os.path.basename(filepath)
# check if the filepath has a frame based filename
# example : capture.####.png
parts = filename.split(".")
if len(parts) == 3:
query = os.path.join(directory, "{}.*.{}".format(parts[0],
parts[-1]))
files = glob.glob(query)
else:
files = glob.glob("{}.*".format(filepath))
if not files:
raise RuntimeError("Couldn't find playblast from: "
"{0}".format(filepath))
filepath = max(files, key=os.path.getmtime)
return filepath
@contextlib.contextmanager
def maintained_time():

View file

@ -18,7 +18,8 @@ class ExtractAlembic(openpype.api.Extractor):
label = "Extract Pointcache (Alembic)"
hosts = ["maya"]
families = ["pointcache",
"model"]
"model",
"vrayproxy"]
def process(self, instance):

View file

@ -74,6 +74,8 @@ class ExtractRedshiftProxy(openpype.api.Extractor):
'files': repr_files,
"stagingDir": staging_dir,
}
if anim_on:
representation["frameStart"] = instance.data["proxyFrameStart"]
instance.data["representations"].append(representation)
self.log.info("Extracted instance '%s' to: %s"

View file

@ -26,15 +26,11 @@ class ExtractThumbnail(openpype.api.Extractor):
def process(self, instance):
self.log.info("Extracting capture..")
start = cmds.currentTime(query=True)
end = cmds.currentTime(query=True)
self.log.info("start: {}, end: {}".format(start, end))
camera = instance.data['review_camera']
capture_preset = ""
capture_preset = (
instance.context.data["project_settings"]['maya']['publish']['ExtractPlayblast']
instance.context.data["project_settings"]['maya']['publish']['ExtractPlayblast']['capture_preset']
)
try:
@ -50,8 +46,8 @@ class ExtractThumbnail(openpype.api.Extractor):
# preset['compression'] = "qt"
preset['quality'] = 50
preset['compression'] = "jpg"
preset['start_frame'] = start
preset['end_frame'] = end
preset['start_frame'] = instance.data["frameStart"]
preset['end_frame'] = instance.data["frameStart"]
preset['camera_options'] = {
"displayGateMask": False,
"displayResolution": False,

View file

@ -1,8 +1,9 @@
import os
# -*- coding: utf-8 -*-
"""Maya validator for render settings."""
import re
from collections import OrderedDict
from maya import cmds, mel
import pymel.core as pm
import pyblish.api
import openpype.api
@ -60,6 +61,8 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
'renderman': '<layer>_<aov>.<f4>.<ext>'
}
redshift_AOV_prefix = "<BeautyPath>/<BeautyFile>_<RenderPass>"
# WARNING: There seems to be a bug in renderman that translates the <scene>
# token to whatever is left behind by Maya's default image prefix, so instead
# of `SceneName_v01` it translates to:
@ -120,25 +123,59 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
"doesn't have: '<renderlayer>' or "
"'<layer>' token".format(prefix))
if len(cameras) > 1:
if not re.search(cls.R_CAMERA_TOKEN, prefix):
invalid = True
cls.log.error("Wrong image prefix [ {} ] - "
"doesn't have: '<camera>' token".format(prefix))
if len(cameras) > 1 and not re.search(cls.R_CAMERA_TOKEN, prefix):
invalid = True
cls.log.error("Wrong image prefix [ {} ] - "
"doesn't have: '<camera>' token".format(prefix))
# renderer specific checks
if renderer == "vray":
# no vray checks implemented yet
pass
elif renderer == "redshift":
vray_settings = cmds.ls(type="VRaySettingsNode")
if not vray_settings:
node = cmds.createNode("VRaySettingsNode")
else:
node = vray_settings[0]
if cmds.getAttr(
"{}.fileNameRenderElementSeparator".format(node)) != "_":
invalid = False
cls.log.error("AOV separator is not set correctly.")
if renderer == "redshift":
if re.search(cls.R_AOV_TOKEN, prefix):
invalid = True
cls.log.error("Do not use AOV token [ {} ] - "
"Redshift automatically append AOV name and "
"it doesn't make much sense with "
"Multipart EXR".format(prefix))
cls.log.error(("Do not use AOV token [ {} ] - "
"Redshift is using image prefixes per AOV so "
"it doesn't make much sense using it in global"
"image prefix").format(prefix))
# get redshift AOVs
rs_aovs = cmds.ls(type="RedshiftAOV", referencedNodes=False)
for aov in rs_aovs:
aov_prefix = cmds.getAttr("{}.filePrefix".format(aov))
# check their image prefix
if aov_prefix != cls.redshift_AOV_prefix:
cls.log.error(("AOV ({}) image prefix is not set "
"correctly {} != {}").format(
cmds.getAttr("{}.name".format(aov)),
cmds.getAttr("{}.filePrefix".format(aov)),
aov_prefix
))
invalid = True
# get aov format
aov_ext = cmds.getAttr(
"{}.fileFormat".format(aov), asString=True)
elif renderer == "renderman":
default_ext = cmds.getAttr(
"redshiftOptions.imageFormat", asString=True)
if default_ext != aov_ext:
cls.log.error(("AOV file format is not the same "
"as the one set globally "
"{} != {}").format(default_ext,
aov_ext))
invalid = True
if renderer == "renderman":
file_prefix = cmds.getAttr("rmanGlobals.imageFileFormat")
dir_prefix = cmds.getAttr("rmanGlobals.imageOutputDir")
@ -151,7 +188,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
cls.log.error("Wrong directory prefix [ {} ]".format(
dir_prefix))
else:
if renderer == "arnold":
multipart = cmds.getAttr("defaultArnoldDriver.mergeAOVs")
if multipart:
if re.search(cls.R_AOV_TOKEN, prefix):
@ -177,6 +214,43 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
cls.log.error("Expecting padding of {} ( {} )".format(
cls.DEFAULT_PADDING, "0" * cls.DEFAULT_PADDING))
# load validation definitions from settings
validation_settings = (
instance.context.data["project_settings"]["maya"]["publish"]["ValidateRenderSettings"].get( # noqa: E501
"{}_render_attributes".format(renderer))
)
# go through definitions and test if such node.attribute exists.
# if so, compare its value with the required one.
for attr, value in OrderedDict(validation_settings).items():
# first get node of that type
cls.log.debug("{}: {}".format(attr, value))
node_type = attr.split(".")[0]
attribute_name = ".".join(attr.split(".")[1:])
nodes = cmds.ls(type=node_type)
if not nodes:
cls.log.warning("No nodes of '{}' found.".format(node_type))
continue
for node in nodes:
try:
render_value = cmds.getAttr(
"{}.{}".format(node, attribute_name))
except RuntimeError:
invalid = True
cls.log.error(
"Cannot get value of {}.{}".format(
node, attribute_name))
else:
if value != render_value:
invalid = True
cls.log.error(
("Invalid value {} set on {}.{}. "
"Expecting {}").format(
render_value, node, attribute_name, value)
)
return invalid
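# Illustrative note: the "<renderer>_render_attributes" setting consumed by
# the loop above is assumed to map "<nodeType>.<attribute>" strings to the
# required values; the entries below are hypothetical examples, not the
# defaults shipped with OpenPype's project settings.
#
#     "arnold_render_attributes": {
#         "defaultArnoldRenderOptions.AASamples": 4,
#         "defaultArnoldDriver.mergeAOVs": True
#     }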
@classmethod
@ -210,3 +284,29 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
cmds.setAttr("rmanGlobals.imageOutputDir",
cls.RendermanDirPrefix,
type="string")
if renderer == "vray":
vray_settings = cmds.ls(type="VRaySettingsNode")
if not vray_settings:
node = cmds.createNode("VRaySettingsNode")
else:
node = vray_settings[0]
cmds.setAttr(
"{}.fileNameRenderElementSeparator".format(
node),
"_"
)
if renderer == "redshift":
# get redshift AOVs
rs_aovs = cmds.ls(type="RedshiftAOV", referencedNodes=False)
for aov in rs_aovs:
# fix AOV prefixes
cmds.setAttr(
"{}.filePrefix".format(aov), cls.redshift_AOV_prefix)
# fix AOV file format
default_ext = cmds.getAttr(
"redshiftOptions.imageFormat", asString=True)
cmds.setAttr(
"{}.fileFormat".format(aov), default_ext)

Some files were not shown because too many files have changed in this diff.