Merge branch 'feature/new_publisher_core' of github.com:pypeclub/pype into feature/new_publisher_core

iLLiCiTiT 2021-10-20 10:22:57 +02:00
commit eacee0e015
132 changed files with 2782 additions and 1262 deletions


@@ -87,7 +87,7 @@ ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
@@ -142,5 +142,6 @@ cython_debug/
.poetry/
.github/
vendor/bin/
vendor/python/
docs/
website/

.gitignore (vendored): 1 change

@@ -39,6 +39,7 @@ Temporary Items
/dist/
/vendor/bin/*
/vendor/python/*
/.venv
/venv/


@@ -1,18 +1,29 @@
# Changelog
## [3.5.0-nightly.4](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.5.0](https://github.com/pypeclub/OpenPype/tree/3.5.0) (2021-10-17)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.4.1...HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.4.1...3.5.0)
**Deprecated:**
- Maya: Change mayaAscii family to mayaScene [\#2106](https://github.com/pypeclub/OpenPype/pull/2106)
**🆕 New features**
- Added project and task into context change message in Maya [\#2131](https://github.com/pypeclub/OpenPype/pull/2131)
- Add ExtractBurnin to photoshop review [\#2124](https://github.com/pypeclub/OpenPype/pull/2124)
- PYPE-1218 - changed namespace to contain subset name in Maya [\#2114](https://github.com/pypeclub/OpenPype/pull/2114)
- Added running configurable disk mapping command before start of OP [\#2091](https://github.com/pypeclub/OpenPype/pull/2091)
- SFTP provider [\#2073](https://github.com/pypeclub/OpenPype/pull/2073)
- Maya: Validate setdress top group [\#2068](https://github.com/pypeclub/OpenPype/pull/2068)
**🚀 Enhancements**
- Maya: make rig validators configurable in settings [\#2137](https://github.com/pypeclub/OpenPype/pull/2137)
- Settings: Updated readme for entity types in settings [\#2132](https://github.com/pypeclub/OpenPype/pull/2132)
- Nuke: unified clip loader [\#2128](https://github.com/pypeclub/OpenPype/pull/2128)
- Settings UI: Project model refreshing and sorting [\#2104](https://github.com/pypeclub/OpenPype/pull/2104)
- Create Read From Rendered - Disable Relative paths by default [\#2093](https://github.com/pypeclub/OpenPype/pull/2093)
- Added choosing different dirmap mapping if workfile synched locally [\#2088](https://github.com/pypeclub/OpenPype/pull/2088)
- General: Remove IdleManager module [\#2084](https://github.com/pypeclub/OpenPype/pull/2084)
- Tray UI: Message box about missing settings defaults [\#2080](https://github.com/pypeclub/OpenPype/pull/2080)
@@ -23,26 +34,33 @@
- Nuke: Adding `still` image family workflow [\#2064](https://github.com/pypeclub/OpenPype/pull/2064)
- Maya: validate authorized loaded plugins [\#2062](https://github.com/pypeclub/OpenPype/pull/2062)
- Tools: add support for pyenv on windows [\#2051](https://github.com/pypeclub/OpenPype/pull/2051)
- SyncServer: Dropbox Provider [\#1979](https://github.com/pypeclub/OpenPype/pull/1979)
**🐛 Bug fixes**
- Maya: fix model publishing [\#2130](https://github.com/pypeclub/OpenPype/pull/2130)
- Fix - oiiotool wasn't recognized even if present [\#2129](https://github.com/pypeclub/OpenPype/pull/2129)
- General: Disk mapping group [\#2120](https://github.com/pypeclub/OpenPype/pull/2120)
- Hiero: publishing effect first time makes wrong resources path [\#2115](https://github.com/pypeclub/OpenPype/pull/2115)
- Add startup script for Houdini Core. [\#2110](https://github.com/pypeclub/OpenPype/pull/2110)
- TVPaint: Behavior name of loop also accept repeat [\#2109](https://github.com/pypeclub/OpenPype/pull/2109)
- Ftrack: Project settings save custom attributes skip unknown attributes [\#2103](https://github.com/pypeclub/OpenPype/pull/2103)
- Blender: Fix NoneType error when animation\_data is missing for a rig [\#2101](https://github.com/pypeclub/OpenPype/pull/2101)
- Fix broken import in sftp provider [\#2100](https://github.com/pypeclub/OpenPype/pull/2100)
- Global: Fix docstring on publish plugin extract review [\#2097](https://github.com/pypeclub/OpenPype/pull/2097)
- Delivery Action Files Sequence fix [\#2096](https://github.com/pypeclub/OpenPype/pull/2096)
- General: Cloud mongo ca certificate issue [\#2095](https://github.com/pypeclub/OpenPype/pull/2095)
- TVPaint: Creator use context from workfile [\#2087](https://github.com/pypeclub/OpenPype/pull/2087)
- Blender: fix texture missing when publishing blend files [\#2085](https://github.com/pypeclub/OpenPype/pull/2085)
- General: Startup validations oiio tool path fix on linux [\#2083](https://github.com/pypeclub/OpenPype/pull/2083)
- Deadline: Collect deadline server does not check existence of deadline key [\#2082](https://github.com/pypeclub/OpenPype/pull/2082)
- Blender: fixed Curves with modifiers in Rigs [\#2081](https://github.com/pypeclub/OpenPype/pull/2081)
- Nuke UI scaling [\#2077](https://github.com/pypeclub/OpenPype/pull/2077)
- Maya: Fix multi-camera renders [\#2065](https://github.com/pypeclub/OpenPype/pull/2065)
- Fix Sync Queue when project disabled [\#2063](https://github.com/pypeclub/OpenPype/pull/2063)
**Merged pull requests:**
- Delivery Action Files Sequence fix [\#2096](https://github.com/pypeclub/OpenPype/pull/2096)
- Bump pywin32 from 300 to 301 [\#2086](https://github.com/pypeclub/OpenPype/pull/2086)
- Nuke UI scaling [\#2077](https://github.com/pypeclub/OpenPype/pull/2077)
## [3.4.1](https://github.com/pypeclub/OpenPype/tree/3.4.1) (2021-09-23)
@@ -60,7 +78,6 @@
- Settings UI: Deffered set value on entity [\#2044](https://github.com/pypeclub/OpenPype/pull/2044)
- Loader: Families filtering [\#2043](https://github.com/pypeclub/OpenPype/pull/2043)
- Settings UI: Project view enhancements [\#2042](https://github.com/pypeclub/OpenPype/pull/2042)
- Added possibility to configure of synchronization of workfile version… [\#2041](https://github.com/pypeclub/OpenPype/pull/2041)
- Settings for Nuke IncrementScriptVersion [\#2039](https://github.com/pypeclub/OpenPype/pull/2039)
- Loader & Library loader: Use tools from OpenPype [\#2038](https://github.com/pypeclub/OpenPype/pull/2038)
- Adding predefined project folders creation in PM [\#2030](https://github.com/pypeclub/OpenPype/pull/2030)
@@ -75,7 +92,6 @@
- Differentiate jpg sequences from thumbnail [\#2056](https://github.com/pypeclub/OpenPype/pull/2056)
- FFmpeg: Split command to list does not work [\#2046](https://github.com/pypeclub/OpenPype/pull/2046)
- Removed shell flag in subprocess call [\#2045](https://github.com/pypeclub/OpenPype/pull/2045)
- Hiero: Fix "none" named tags [\#2033](https://github.com/pypeclub/OpenPype/pull/2033)
**Merged pull requests:**
@@ -85,16 +101,13 @@
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.4.0-nightly.6...3.4.0)
### 📖 Documentation
- Documentation: Ftrack launch arguments update [\#2014](https://github.com/pypeclub/OpenPype/pull/2014)
**🆕 New features**
- Nuke: Compatibility with Nuke 13 [\#2003](https://github.com/pypeclub/OpenPype/pull/2003)
**🚀 Enhancements**
- Added possibility to configure of synchronization of workfile version… [\#2041](https://github.com/pypeclub/OpenPype/pull/2041)
- General: Task types in profiles [\#2036](https://github.com/pypeclub/OpenPype/pull/2036)
- Console interpreter: Handle invalid sizes on initialization [\#2022](https://github.com/pypeclub/OpenPype/pull/2022)
- Ftrack: Show OpenPype versions in event server status [\#2019](https://github.com/pypeclub/OpenPype/pull/2019)
@@ -103,28 +116,20 @@
- Modules: Connect method is not required [\#2009](https://github.com/pypeclub/OpenPype/pull/2009)
- Settings UI: Number with configurable steps [\#2001](https://github.com/pypeclub/OpenPype/pull/2001)
- Moving project folder structure creation out of ftrack module \#1989 [\#1996](https://github.com/pypeclub/OpenPype/pull/1996)
- Configurable items for providers without Settings [\#1987](https://github.com/pypeclub/OpenPype/pull/1987)
- Global: Example addons [\#1986](https://github.com/pypeclub/OpenPype/pull/1986)
- Standalone Publisher: Extract harmony zip handle workfile template [\#1982](https://github.com/pypeclub/OpenPype/pull/1982)
- Settings UI: Number sliders [\#1978](https://github.com/pypeclub/OpenPype/pull/1978)
**🐛 Bug fixes**
- Workfiles tool: Task selection [\#2040](https://github.com/pypeclub/OpenPype/pull/2040)
- Ftrack: Delete old versions missing settings key [\#2037](https://github.com/pypeclub/OpenPype/pull/2037)
- Nuke: typo on a button [\#2034](https://github.com/pypeclub/OpenPype/pull/2034)
- Hiero: Fix "none" named tags [\#2033](https://github.com/pypeclub/OpenPype/pull/2033)
- FFmpeg: Subprocess arguments as list [\#2032](https://github.com/pypeclub/OpenPype/pull/2032)
- General: Fix Python 2 breaking line [\#2016](https://github.com/pypeclub/OpenPype/pull/2016)
- Bugfix/webpublisher task type [\#2006](https://github.com/pypeclub/OpenPype/pull/2006)
- Nuke thumbnails generated from middle of the sequence [\#1992](https://github.com/pypeclub/OpenPype/pull/1992)
- Nuke: last version from path gets correct version [\#1990](https://github.com/pypeclub/OpenPype/pull/1990)
- nuke, resolve, hiero: precollector order less than 0.5 [\#1984](https://github.com/pypeclub/OpenPype/pull/1984)
- Last workfile with multiple work templates [\#1981](https://github.com/pypeclub/OpenPype/pull/1981)
- Collectors order [\#1977](https://github.com/pypeclub/OpenPype/pull/1977)
- Stop timer was within validator order range. [\#1975](https://github.com/pypeclub/OpenPype/pull/1975)
- Ftrack: arrow submodule has https url source [\#1974](https://github.com/pypeclub/OpenPype/pull/1974)
- Ftrack: Fix hosts attribute in collect ftrack username [\#1972](https://github.com/pypeclub/OpenPype/pull/1972)
- Deadline: Houdini plugins in different hierarchy [\#1970](https://github.com/pypeclub/OpenPype/pull/1970)
### 📖 Documentation
- Documentation: Ftrack launch arguments update [\#2014](https://github.com/pypeclub/OpenPype/pull/2014)
## [3.3.1](https://github.com/pypeclub/OpenPype/tree/3.3.1) (2021-08-20)


@@ -1,7 +1,9 @@
# Build Pype docker image
FROM centos:7 AS builder
ARG OPENPYPE_PYTHON_VERSION=3.7.10
FROM debian:bookworm-slim AS builder
ARG OPENPYPE_PYTHON_VERSION=3.7.12
LABEL maintainer="info@openpype.io"
LABEL description="Docker Image to build and run OpenPype"
LABEL org.opencontainers.image.name="pypeclub/openpype"
LABEL org.opencontainers.image.title="OpenPype Docker Image"
LABEL org.opencontainers.image.url="https://openpype.io/"
@@ -9,56 +11,49 @@ LABEL org.opencontainers.image.source="https://github.com/pypeclub/pype"
USER root
# update base
RUN yum -y install deltarpm \
&& yum -y update \
&& yum clean all
ARG DEBIAN_FRONTEND=noninteractive
# add tools we need
RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm \
&& yum -y install centos-release-scl \
&& yum -y install \
# update base
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates \
bash \
which \
git \
devtoolset-7-gcc* \
make \
cmake \
make \
curl \
wget \
gcc \
zlib-devel \
bzip2 \
bzip2-devel \
readline-devel \
sqlite sqlite-devel \
openssl-devel \
tk-devel libffi-devel \
qt5-qtbase-devel \
patchelf \
&& yum clean all
build-essential \
checkinstall \
libssl-dev \
zlib1g-dev \
libbz2-dev \
libreadline-dev \
libsqlite3-dev \
llvm \
libncursesw5-dev \
xz-utils \
tk-dev \
libxml2-dev \
libxmlsec1-dev \
libffi-dev \
liblzma-dev \
patchelf
SHELL ["/bin/bash", "-c"]
RUN mkdir /opt/openpype
# RUN useradd -m pype
# RUN chown pype /opt/openpype
# USER pype
RUN curl https://pyenv.run | bash
ENV PYTHON_CONFIGURE_OPTS --enable-shared
RUN echo 'export PATH="$HOME/.pyenv/bin:$PATH"'>> $HOME/.bashrc \
RUN curl https://pyenv.run | bash \
&& echo 'export PATH="$HOME/.pyenv/bin:$PATH"'>> $HOME/.bashrc \
&& echo 'eval "$(pyenv init -)"' >> $HOME/.bashrc \
&& echo 'eval "$(pyenv virtualenv-init -)"' >> $HOME/.bashrc \
&& echo 'eval "$(pyenv init --path)"' >> $HOME/.bashrc
RUN source $HOME/.bashrc && pyenv install ${OPENPYPE_PYTHON_VERSION}
&& echo 'eval "$(pyenv init --path)"' >> $HOME/.bashrc \
&& source $HOME/.bashrc && pyenv install ${OPENPYPE_PYTHON_VERSION}
COPY . /opt/openpype/
RUN rm -rf /openpype/.poetry || echo "No Poetry installed yet."
# USER root
# RUN chown -R pype /opt/openpype
RUN chmod +x /opt/openpype/tools/create_env.sh && chmod +x /opt/openpype/tools/build.sh
# USER pype
RUN chmod +x /opt/openpype/tools/create_env.sh && chmod +x /opt/openpype/tools/build.sh
WORKDIR /opt/openpype
@@ -67,16 +62,8 @@ RUN cd /opt/openpype \
&& pyenv local ${OPENPYPE_PYTHON_VERSION}
RUN source $HOME/.bashrc \
&& ./tools/create_env.sh
RUN source $HOME/.bashrc \
&& ./tools/create_env.sh \
&& ./tools/fetch_thirdparty_libs.sh
RUN source $HOME/.bashrc \
&& bash ./tools/build.sh \
&& cp /usr/lib64/libffi* ./build/exe.linux-x86_64-3.7/lib \
&& cp /usr/lib64/libssl* ./build/exe.linux-x86_64-3.7/lib \
&& cp /usr/lib64/libcrypto* ./build/exe.linux-x86_64-3.7/lib
RUN cd /opt/openpype \
&& rm -rf ./vendor/bin
&& bash ./tools/build.sh

Dockerfile.centos7 (new file): 98 additions

@@ -0,0 +1,98 @@
# Build Pype docker image
FROM centos:7 AS builder
ARG OPENPYPE_PYTHON_VERSION=3.7.10
LABEL org.opencontainers.image.name="pypeclub/openpype"
LABEL org.opencontainers.image.title="OpenPype Docker Image"
LABEL org.opencontainers.image.url="https://openpype.io/"
LABEL org.opencontainers.image.source="https://github.com/pypeclub/pype"
USER root
# update base
RUN yum -y install deltarpm \
&& yum -y update \
&& yum clean all
# add tools we need
RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm \
&& yum -y install centos-release-scl \
&& yum -y install \
bash \
which \
git \
make \
devtoolset-7 \
cmake \
curl \
wget \
gcc \
zlib-devel \
bzip2 \
bzip2-devel \
readline-devel \
sqlite sqlite-devel \
openssl-devel \
openssl-libs \
tk-devel libffi-devel \
patchelf \
automake \
autoconf \
ncurses \
ncurses-devel \
qt5-qtbase-devel \
&& yum clean all
# we need to build our own patchelf
WORKDIR /temp-patchelf
RUN git clone https://github.com/NixOS/patchelf.git . \
&& source scl_source enable devtoolset-7 \
&& ./bootstrap.sh \
&& ./configure \
&& make \
&& make install
RUN mkdir /opt/openpype
# RUN useradd -m pype
# RUN chown pype /opt/openpype
# USER pype
RUN curl https://pyenv.run | bash
# ENV PYTHON_CONFIGURE_OPTS --enable-shared
RUN echo 'export PATH="$HOME/.pyenv/bin:$PATH"'>> $HOME/.bashrc \
&& echo 'eval "$(pyenv init -)"' >> $HOME/.bashrc \
&& echo 'eval "$(pyenv virtualenv-init -)"' >> $HOME/.bashrc \
&& echo 'eval "$(pyenv init --path)"' >> $HOME/.bashrc
RUN source $HOME/.bashrc && pyenv install ${OPENPYPE_PYTHON_VERSION}
COPY . /opt/openpype/
RUN rm -rf /openpype/.poetry || echo "No Poetry installed yet."
# USER root
# RUN chown -R pype /opt/openpype
RUN chmod +x /opt/openpype/tools/create_env.sh && chmod +x /opt/openpype/tools/build.sh
# USER pype
WORKDIR /opt/openpype
RUN cd /opt/openpype \
&& source $HOME/.bashrc \
&& pyenv local ${OPENPYPE_PYTHON_VERSION}
RUN source $HOME/.bashrc \
&& ./tools/create_env.sh
RUN source $HOME/.bashrc \
&& ./tools/fetch_thirdparty_libs.sh
RUN source $HOME/.bashrc \
&& bash ./tools/build.sh
RUN cp /usr/lib64/libffi* ./build/exe.linux-x86_64-3.7/lib \
&& cp /usr/lib64/libssl* ./build/exe.linux-x86_64-3.7/lib \
&& cp /usr/lib64/libcrypto* ./build/exe.linux-x86_64-3.7/lib \
&& cp /root/.pyenv/versions/${OPENPYPE_PYTHON_VERSION}/lib/libpython* ./build/exe.linux-x86_64-3.7/lib
RUN cd /opt/openpype \
&& rm -rf ./vendor/bin


@@ -133,6 +133,12 @@ Easiest way to build OpenPype on Linux is using [Docker](https://www.docker.com/
sudo ./tools/docker_build.sh
```
This will by default use Debian as the base image. If you need a CentOS 7 compatible build, please run:
```sh
sudo ./tools/docker_build.sh centos7
```
If all is successful, you'll find the built OpenPype in the `./build/` folder.
#### Manual build
@@ -158,6 +164,11 @@ you'll need also additional libraries for Qt5:
```sh
sudo apt install qt5-default
```
or if you are on Ubuntu > 20.04, there is no `qt5-default` package, so you need to install its contents individually:
```sh
sudo apt-get install qtbase5-dev qtchooser qt5-qmake qtbase5-dev-tools
```
</details>
<details>


@@ -69,6 +69,7 @@ def install():
"""Install Pype to Avalon."""
from pyblish.lib import MessageHandler
from openpype.modules import load_modules
from avalon import pipeline
# Make sure modules are loaded
load_modules()
@@ -117,7 +118,9 @@ def install():
# apply monkey patched discover to original one
log.info("Patching discovery")
avalon.discover = patched_discover
pipeline.discover = patched_discover
avalon.on("taskChanged", _on_task_change)


@@ -283,3 +283,18 @@ def run(script):
args_string = " ".join(args[1:])
print(f"... running: {script} {args_string}")
runpy.run_path(script, run_name="__main__", )
@main.command()
@click.argument("folder", nargs=-1)
@click.option("-m",
"--mark",
help="Run tests marked by",
default=None)
@click.option("-p",
"--pyargs",
help="Run tests from package",
default=None)
def runtests(folder, mark, pyargs):
"""Run all automatic tests after proper initialization via start.py"""
PypeCommands().run_tests(folder, mark, pyargs)
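# Hypothetical usage sketch (not part of this commit; the executable
# name below is an assumption about how start.py is wrapped):
#   openpype_console runtests /path/to/tests -m "slow" -p openpype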


@@ -43,6 +43,8 @@ class GlobalHostDataHook(PreLaunchHook):
"env": self.launch_context.env,
"last_workfile_path": self.data.get("last_workfile_path"),
"log": self.log
})


@@ -111,7 +111,8 @@ class BlendRigLoader(plugin.AssetLoader):
if action is not None:
local_obj.animation_data.action = action
elif local_obj.animation_data.action is not None:
elif (local_obj.animation_data and
local_obj.animation_data.action is not None):
plugin.prepare_data(
local_obj.animation_data.action, group_name)


@@ -0,0 +1,9 @@
from avalon import api, houdini
def main():
print("Installing OpenPype ...")
api.install(houdini)
main()


@@ -313,9 +313,15 @@ def on_task_changed(*args):
lib.set_context_settings()
lib.update_content_on_context_change()
msg = " project: {}\n asset: {}\n task:{}".format(
avalon.Session["AVALON_PROJECT"],
avalon.Session["AVALON_ASSET"],
avalon.Session["AVALON_TASK"]
)
lib.show_message(
"Context was changed",
("Context was changed to {}".format(avalon.Session["AVALON_ASSET"])),
("Context was changed to:\n{}".format(msg)),
)


@@ -114,6 +114,8 @@ class RenderProduct(object):
aov = attr.ib(default=None) # source aov
driver = attr.ib(default=None) # source driver
multipart = attr.ib(default=False) # multichannel file
camera = attr.ib(default=None) # used only when rendering
# from multiple cameras
def get(layer, render_instance=None):
@@ -183,6 +185,16 @@ class ARenderProducts:
self.layer_data = self._get_layer_data()
self.layer_data.products = self.get_render_products()
def has_camera_token(self):
# type: () -> bool
"""Check if camera token is in image prefix.
Returns:
bool: True/False if camera token is present.
"""
return "<camera>" in self.layer_data.filePrefix.lower()
@abstractmethod
def get_render_products(self):
"""To be implemented by renderer class.
@@ -307,7 +319,7 @@ class ARenderProducts:
# Deadline allows submitting renders with a custom frame list
# to support those cases we might want to allow 'custom frames'
# to be overridden to `ExpectFiles` class?
layer_data = LayerMetadata(
return LayerMetadata(
frameStart=int(self.get_render_attribute("startFrame")),
frameEnd=int(self.get_render_attribute("endFrame")),
frameStep=int(self.get_render_attribute("byFrameStep")),
@@ -321,7 +333,6 @@
defaultExt=self._get_attr("defaultRenderGlobals.imfPluginKey"),
filePrefix=file_prefix
)
return layer_data
def _generate_file_sequence(
self, layer_data,
@@ -330,7 +341,7 @@
force_cameras=None):
# type: (LayerMetadata, str, str, list) -> list
expected_files = []
cameras = force_cameras if force_cameras else layer_data.cameras
cameras = force_cameras or layer_data.cameras
ext = force_ext or layer_data.defaultExt
for cam in cameras:
file_prefix = layer_data.filePrefix
@@ -361,8 +372,8 @@
)
return expected_files
def get_files(self, product, camera):
# type: (RenderProduct, str) -> list
def get_files(self, product):
# type: (RenderProduct) -> list
"""Return list of expected files.
It will translate render token strings ('<RenderPass>', etc.) to
@@ -373,7 +384,6 @@
Args:
product (RenderProduct): Render product to be used for file
generation.
camera (str): Camera name.
Returns:
List of files
@@ -383,7 +393,7 @@
self.layer_data,
force_aov_name=product.productName,
force_ext=product.ext,
force_cameras=[camera]
force_cameras=[product.camera]
)
def get_renderable_cameras(self):
@@ -460,15 +470,21 @@ class RenderProductsArnold(ARenderProducts):
return prefix
def _get_aov_render_products(self, aov):
def _get_aov_render_products(self, aov, cameras=None):
"""Return all render products for the AOV"""
products = list()
products = []
aov_name = self._get_attr(aov, "name")
ai_drivers = cmds.listConnections("{}.outputs".format(aov),
source=True,
destination=False,
type="aiAOVDriver") or []
if not cameras:
cameras = [
self.sanitize_camera_name(
self.get_renderable_cameras()[0]
)
]
for ai_driver in ai_drivers:
# todo: check aiAOVDriver.prefix as it could have
@@ -497,30 +513,37 @@ class RenderProductsArnold(ARenderProducts):
name = "beauty"
# Support Arnold light groups for AOVs
# Global AOV: When disabled the main layer is not written: `{pass}`
# Global AOV: When disabled the main layer is
# not written: `{pass}`
# All Light Groups: When enabled, a `{pass}_lgroups` file is
# written and is always merged into a single file
# Light Groups List: When set, a product per light group is written
# written and is always merged into a
# single file
# Light Groups List: When set, a product per light
# group is written
# e.g. {pass}_front, {pass}_rim
global_aov = self._get_attr(aov, "globalAov")
if global_aov:
product = RenderProduct(productName=name,
ext=ext,
aov=aov_name,
driver=ai_driver)
products.append(product)
for camera in cameras:
product = RenderProduct(productName=name,
ext=ext,
aov=aov_name,
driver=ai_driver,
camera=camera)
products.append(product)
all_light_groups = self._get_attr(aov, "lightGroups")
if all_light_groups:
# All light groups is enabled. A single multipart
# Render Product
product = RenderProduct(productName=name + "_lgroups",
ext=ext,
aov=aov_name,
driver=ai_driver,
# Always multichannel output
multipart=True)
products.append(product)
for camera in cameras:
product = RenderProduct(productName=name + "_lgroups",
ext=ext,
aov=aov_name,
driver=ai_driver,
# Always multichannel output
multipart=True,
camera=camera)
products.append(product)
else:
value = self._get_attr(aov, "lightGroupsList")
if not value:
@@ -529,11 +552,15 @@ class RenderProductsArnold(ARenderProducts):
for light_group in selected_light_groups:
# Render Product per selected light group
aov_light_group_name = "{}_{}".format(name, light_group)
product = RenderProduct(productName=aov_light_group_name,
aov=aov_name,
driver=ai_driver,
ext=ext)
products.append(product)
for camera in cameras:
product = RenderProduct(
productName=aov_light_group_name,
aov=aov_name,
driver=ai_driver,
ext=ext,
camera=camera
)
products.append(product)
return products
@@ -556,17 +583,26 @@ class RenderProductsArnold(ARenderProducts):
# anyway.
return []
default_ext = self._get_attr("defaultRenderGlobals.imfPluginKey")
beauty_product = RenderProduct(productName="beauty",
ext=default_ext,
driver="defaultArnoldDriver")
# check if camera token is in prefix. If so, and we have list of
# renderable cameras, generate render product for each and every
# of them.
cameras = [
self.sanitize_camera_name(c)
for c in self.get_renderable_cameras()
]
default_ext = self._get_attr("defaultRenderGlobals.imfPluginKey")
beauty_products = [RenderProduct(
productName="beauty",
ext=default_ext,
driver="defaultArnoldDriver",
camera=camera) for camera in cameras]
# AOVs > Legacy > Maya Render View > Mode
aovs_enabled = bool(
self._get_attr("defaultArnoldRenderOptions.aovMode")
)
if not aovs_enabled:
return [beauty_product]
return beauty_products
# Common > File Output > Merge AOVs or <RenderPass>
# We don't need to check for Merge AOVs due to overridden
@@ -575,8 +611,9 @@
"<renderpass>" in self.layer_data.filePrefix.lower()
)
if not has_renderpass_token:
beauty_product.multipart = True
return [beauty_product]
for product in beauty_products:
product.multipart = True
return beauty_products
# AOVs are set to be rendered separately. We should expect
# <RenderPass> token in path.
@@ -598,14 +635,14 @@
continue
# For now stick to the legacy output format.
aov_products = self._get_aov_render_products(aov)
aov_products = self._get_aov_render_products(aov, cameras)
products.extend(aov_products)
if not any(product.aov == "RGBA" for product in products):
if all(product.aov != "RGBA" for product in products):
# Append default 'beauty' as this is arnolds default.
# However, it is excluded whenever a RGBA pass is enabled.
# For legibility add the beauty layer as first entry
products.insert(0, beauty_product)
products += beauty_products
# TODO: Output Denoising AOVs?
@@ -670,6 +707,11 @@ class RenderProductsVray(ARenderProducts):
# anyway.
return []
cameras = [
self.sanitize_camera_name(c)
for c in self.get_renderable_cameras()
]
image_format_str = self._get_attr("vraySettings.imageFormatStr")
default_ext = image_format_str
if default_ext in {"exr (multichannel)", "exr (deep)"}:
@@ -680,13 +722,21 @@
# add beauty as default when not disabled
dont_save_rgb = self._get_attr("vraySettings.dontSaveRgbChannel")
if not dont_save_rgb:
products.append(RenderProduct(productName="", ext=default_ext))
for camera in cameras:
products.append(
RenderProduct(productName="",
ext=default_ext,
camera=camera))
# separate alpha file
separate_alpha = self._get_attr("vraySettings.separateAlpha")
if separate_alpha:
products.append(RenderProduct(productName="Alpha",
ext=default_ext))
for camera in cameras:
products.append(
RenderProduct(productName="Alpha",
ext=default_ext,
camera=camera)
)
if image_format_str == "exr (multichannel)":
# AOVs are merged in m-channel file, only main layer is rendered
@@ -716,19 +766,23 @@
# instead seems to output multiple Render Products,
# specifically "Self_Illumination" and "Environment"
product_names = ["Self_Illumination", "Environment"]
for name in product_names:
product = RenderProduct(productName=name,
ext=default_ext,
aov=aov)
products.append(product)
for camera in cameras:
for name in product_names:
product = RenderProduct(productName=name,
ext=default_ext,
aov=aov,
camera=camera)
products.append(product)
# Continue as we've processed this special case AOV
continue
aov_name = self._get_vray_aov_name(aov)
product = RenderProduct(productName=aov_name,
ext=default_ext,
aov=aov)
products.append(product)
for camera in cameras:
product = RenderProduct(productName=aov_name,
ext=default_ext,
aov=aov,
camera=camera)
products.append(product)
return products
@@ -875,6 +929,11 @@ class RenderProductsRedshift(ARenderProducts):
# anyway.
return []
cameras = [
self.sanitize_camera_name(c)
for c in self.get_renderable_cameras()
]
# For Redshift we don't directly return upon forcing multilayer
# due to some AOVs still being written into separate files,
# like Cryptomatte.
@@ -933,11 +992,14 @@ class RenderProductsRedshift(ARenderProducts):
for light_group in light_groups:
aov_light_group_name = "{}_{}".format(aov_name,
light_group)
product = RenderProduct(productName=aov_light_group_name,
aov=aov_name,
ext=ext,
multipart=aov_multipart)
products.append(product)
for camera in cameras:
product = RenderProduct(
productName=aov_light_group_name,
aov=aov_name,
ext=ext,
multipart=aov_multipart,
camera=camera)
products.append(product)
if light_groups:
light_groups_enabled = True
@@ -945,11 +1007,13 @@ class RenderProductsRedshift(ARenderProducts):
# Redshift AOV Light Select always renders the global AOV
# even when light groups are present so we don't need to
# exclude it when light groups are active
product = RenderProduct(productName=aov_name,
aov=aov_name,
ext=ext,
multipart=aov_multipart)
products.append(product)
for camera in cameras:
product = RenderProduct(productName=aov_name,
aov=aov_name,
ext=ext,
multipart=aov_multipart,
camera=camera)
products.append(product)
# When a Beauty AOV is added manually, it will be rendered as
# 'Beauty_other' in file name and "standard" beauty will have
@@ -959,10 +1023,12 @@
return products
beauty_name = "Beauty_other" if has_beauty_aov else ""
products.insert(0,
RenderProduct(productName=beauty_name,
ext=ext,
multipart=multipart))
for camera in cameras:
products.insert(0,
RenderProduct(productName=beauty_name,
ext=ext,
multipart=multipart,
camera=camera))
return products
@@ -987,6 +1053,16 @@ class RenderProductsRenderman(ARenderProducts):
:func:`ARenderProducts.get_render_products()`
"""
cameras = [
self.sanitize_camera_name(c)
for c in self.get_renderable_cameras()
]
if not cameras:
cameras = [
self.sanitize_camera_name(
self.get_renderable_cameras()[0])
]
products = []
default_ext = "exr"
@@ -1000,9 +1076,11 @@
if aov_name == "rmanDefaultDisplay":
aov_name = "beauty"
product = RenderProduct(productName=aov_name,
ext=default_ext)
products.append(product)
for camera in cameras:
product = RenderProduct(productName=aov_name,
ext=default_ext,
camera=camera)
products.append(product)
return products


@@ -123,7 +123,7 @@ class ReferenceLoader(api.Loader):
count = options.get("count") or 1
for c in range(0, count):
namespace = namespace or lib.unique_namespace(
asset["name"] + "_",
"{}_{}_".format(asset["name"], context["subset"]["name"]),
prefix="_" if asset["name"][0].isdigit() else "",
suffix="_",
)


@@ -1,11 +1,11 @@
from openpype.hosts.maya.api import plugin
class CreateMayaAscii(plugin.Creator):
"""Raw Maya Ascii file export"""
class CreateMayaScene(plugin.Creator):
"""Raw Maya Scene file export"""
name = "mayaAscii"
label = "Maya Ascii"
family = "mayaAscii"
name = "mayaScene"
label = "Maya Scene"
family = "mayaScene"
icon = "file-archive-o"
defaults = ['Main']


@@ -13,6 +13,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
"pointcache",
"animation",
"mayaAscii",
"mayaScene",
"setdress",
"layout",
"camera",
@@ -40,14 +41,13 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
family = "model"
with maya.maintained_selection():
groupName = "{}:{}".format(namespace, name)
groupName = "{}:_GRP".format(namespace)
cmds.loadPlugin("AbcImport.mll", quiet=True)
nodes = cmds.file(self.fname,
namespace=namespace,
sharedReferenceFile=False,
groupReference=True,
groupName="{}:{}".format(namespace, name),
groupName=groupName,
reference=True,
returnNewNodes=True)
@@ -71,7 +71,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
except: # noqa: E722
pass
if family not in ["layout", "setdress", "mayaAscii"]:
if family not in ["layout", "setdress", "mayaAscii", "mayaScene"]:
for root in roots:
root.setParent(world=True)


@@ -3,14 +3,14 @@ from maya import cmds
import pyblish.api
class CollectMayaAscii(pyblish.api.InstancePlugin):
"""Collect May Ascii Data
class CollectMayaScene(pyblish.api.InstancePlugin):
"""Collect Maya Scene Data
"""
order = pyblish.api.CollectorOrder + 0.2
label = 'Collect Model Data'
families = ["mayaAscii"]
families = ["mayaScene"]
def process(self, instance):
# Extract only current frame (override)


@@ -174,10 +174,16 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
assert render_products, "no render products generated"
exp_files = []
for product in render_products:
for camera in layer_render_products.layer_data.cameras:
exp_files.append(
{product.productName: layer_render_products.get_files(
product, camera)})
product_name = product.productName
if product.camera and layer_render_products.has_camera_token():
product_name = "{}{}".format(
product.camera,
"_" + product_name if product_name else "")
exp_files.append(
{
product_name: layer_render_products.get_files(
product)
})
self.log.info("multipart: {}".format(
layer_render_products.multipart))
@@ -199,12 +205,14 @@
# replace relative paths with absolute. Render products are
# returned as list of dictionaries.
publish_meta_path = None
for aov in exp_files:
full_paths = []
for file in aov[aov.keys()[0]]:
full_path = os.path.join(workspace, "renders", file)
full_path = full_path.replace("\\", "/")
full_paths.append(full_path)
publish_meta_path = os.path.dirname(full_path)
aov_dict[aov.keys()[0]] = full_paths
frame_start_render = int(self.get_render_attribute(
@@ -230,6 +238,26 @@
frame_end_handle = frame_end_render
full_exp_files.append(aov_dict)
# find common path to store metadata
# so if image prefix is branching to many directories
# metadata file will be located in top-most common
# directory.
# TODO: use `os.path.commonpath()` after switch to Python 3
common_publish_meta_path = os.path.splitdrive(
publish_meta_path)[0]
if common_publish_meta_path:
common_publish_meta_path += os.path.sep
for part in publish_meta_path.split("/"):
common_publish_meta_path = os.path.join(
common_publish_meta_path, part)
if part == expected_layer_name:
break
common_publish_meta_path = common_publish_meta_path.replace(
"\\", "/")
self.log.info(
"Publish meta path: {}".format(common_publish_meta_path))
self.log.info(full_exp_files)
self.log.info("collecting layer: {}".format(layer_name))
# Get layer specific settings, might be overrides
@@ -262,6 +290,7 @@
# which was submitted originally
"source": filepath,
"expectedFiles": full_exp_files,
"publishRenderMetadataFolder": common_publish_meta_path,
"resolutionWidth": cmds.getAttr("defaultResolution.width"),
"resolutionHeight": cmds.getAttr("defaultResolution.height"),
"pixelAspect": cmds.getAttr("defaultResolution.pixelAspect"),


@@ -4,7 +4,7 @@ import os
from maya import cmds
class CollectMayaScene(pyblish.api.ContextPlugin):
class CollectWorkfile(pyblish.api.ContextPlugin):
"""Inject the current working file into context"""
order = pyblish.api.CollectorOrder - 0.01


@@ -205,6 +205,9 @@ class ExtractLook(openpype.api.Extractor):
lookdata = instance.data["lookData"]
relationships = lookdata["relationships"]
sets = relationships.keys()
if not sets:
self.log.info("No sets found")
return
results = self.process_resources(instance, staging_dir=dir_path)
transfers = results["fileTransfers"]


@@ -17,6 +17,7 @@ class ExtractMayaSceneRaw(openpype.api.Extractor):
label = "Maya Scene (Raw)"
hosts = ["maya"]
families = ["mayaAscii",
"mayaScene",
"setdress",
"layout",
"camerarig",


@@ -76,7 +76,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
r'%a|<aov>|<renderpass>', re.IGNORECASE)
R_LAYER_TOKEN = re.compile(
r'%l|<layer>|<renderlayer>', re.IGNORECASE)
R_CAMERA_TOKEN = re.compile(r'%c|<camera>', re.IGNORECASE)
R_CAMERA_TOKEN = re.compile(r'%c|Camera>')
R_SCENE_TOKEN = re.compile(r'%s|<scene>', re.IGNORECASE)
DEFAULT_PADDING = 4
@@ -126,7 +126,9 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
if len(cameras) > 1 and not re.search(cls.R_CAMERA_TOKEN, prefix):
invalid = True
cls.log.error("Wrong image prefix [ {} ] - "
"doesn't have: '<camera>' token".format(prefix))
"doesn't have: '<Camera>' token".format(prefix))
cls.log.error(
"Note that to needs to have capital 'C' at the beginning")
# renderer specific checks
if renderer == "vray":


@@ -288,7 +288,8 @@ def script_name():
def add_button_write_to_read(node):
name = "createReadNode"
label = "Create Read From Rendered"
value = "import write_to_read;write_to_read.write_to_read(nuke.thisNode())"
value = "import write_to_read;\
write_to_read.write_to_read(nuke.thisNode(), allow_relative=False)"
knob = nuke.PyScript_Knob(name, label, value)
knob.clearFlag(nuke.STARTLINE)
node.addKnob(knob)


@@ -1,4 +1,10 @@
import random
import string
import avalon.nuke
from avalon.nuke import lib as anlib
from avalon import api
from openpype.api import (
get_current_project_settings,
PypeCreatorMixin
@@ -23,3 +29,68 @@ class PypeCreator(PypeCreatorMixin, avalon.nuke.pipeline.Creator):
self.log.error(msg + '\n\nPlease use other subset name!')
raise NameError("`{0}: {1}".format(__name__, msg))
return
def get_review_presets_config():
settings = get_current_project_settings()
review_profiles = (
settings["global"]
["publish"]
["ExtractReview"]
["profiles"]
)
outputs = {}
for profile in review_profiles:
outputs.update(profile.get("outputs", {}))
return [str(name) for name, _prop in outputs.items()]
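# Illustrative only: this might return e.g. ["h264", "thumbnail"],
# depending on which ExtractReview output profiles the project defines.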
class NukeLoader(api.Loader):
container_id_knob = "containerId"
container_id = ''.join(random.choice(
string.ascii_uppercase + string.digits) for _ in range(10))
def get_container_id(self, node):
id_knob = node.knobs().get(self.container_id_knob)
return id_knob.value() if id_knob else None
def get_members(self, source):
"""Return nodes that has same 'containerId' as `source`"""
source_id = self.get_container_id(source)
return [node for node in nuke.allNodes(recurseGroups=True)
if self.get_container_id(node) == source_id
and node is not source] if source_id else []
def set_as_member(self, node):
source_id = self.get_container_id(node)
if source_id:
node[self.container_id_knob].setValue(self.container_id)
else:
HIDDEN_FLAG = 0x00040000
_knob = anlib.Knobby(
"String_Knob",
self.container_id,
flags=[nuke.READ_ONLY, HIDDEN_FLAG])
knob = _knob.create(self.container_id_knob)
node.addKnob(knob)
def clear_members(self, parent_node):
members = self.get_members(parent_node)
dependent_nodes = None
for node in members:
_depndc = [n for n in node.dependent() if n not in members]
if not _depndc:
continue
dependent_nodes = _depndc
break
for member in members:
self.log.info("removing node: `{}".format(member.name()))
nuke.delete(member)
return dependent_nodes


@@ -0,0 +1,37 @@
from avalon import api, style
from avalon.nuke import lib as anlib
from openpype.api import (
Logger)
class RepairOldLoaders(api.InventoryAction):
label = "Repair Old Loaders"
icon = "gears"
color = style.colors.alert
log = Logger().get_logger(__name__)
def process(self, containers):
import nuke
new_loader = "LoadClip"
for cdata in containers:
orig_loader = cdata["loader"]
orig_name = cdata["objectName"]
if orig_loader not in ["LoadSequence", "LoadMov"]:
self.log.warning(
"This repair action is only working on "
"`LoadSequence` and `LoadMov` Loaders")
continue
new_name = orig_name.replace(orig_loader, new_loader)
node = nuke.toNode(cdata["objectName"])
cdata.update({
"loader": new_loader,
"objectName": new_name
})
node["name"].setValue(new_name)
# get data from avalon knob
anlib.set_avalon_knob_data(node, cdata)


@@ -8,10 +8,10 @@ class SelectContainers(api.InventoryAction):
color = "#d8d8d8"
def process(self, containers):
import nuke
import avalon.nuke
nodes = [i["_node"] for i in containers]
nodes = [nuke.toNode(i["objectName"]) for i in containers]
with avalon.nuke.viewer_update_and_undo_stop():
# clear previous_selection


@@ -1,68 +0,0 @@
# from avalon import api, style
# from avalon.vendor.Qt import QtGui, QtWidgets
#
# import avalon.fusion
#
#
# class FusionSetToolColor(api.InventoryAction):
# """Update the color of the selected tools"""
#
# label = "Set Tool Color"
# icon = "plus"
# color = "#d8d8d8"
# _fallback_color = QtGui.QColor(1.0, 1.0, 1.0)
#
# def process(self, containers):
# """Color all selected tools the selected colors"""
#
# result = []
# comp = avalon.fusion.get_current_comp()
#
# # Get tool color
# first = containers[0]
# tool = first["_node"]
# color = tool.TileColor
#
# if color is not None:
# qcolor = QtGui.QColor().fromRgbF(color["R"], color["G"], color["B"])
# else:
# qcolor = self._fallback_color
#
# # Launch pick color
# picked_color = self.get_color_picker(qcolor)
# if not picked_color:
# return
#
# with avalon.fusion.comp_lock_and_undo_chunk(comp):
# for container in containers:
# # Convert color to RGB 0-1 floats
# rgb_f = picked_color.getRgbF()
# rgb_f_table = {"R": rgb_f[0], "G": rgb_f[1], "B": rgb_f[2]}
#
# # Update tool
# tool = container["_node"]
# tool.TileColor = rgb_f_table
#
# result.append(container)
#
# return result
#
# def get_color_picker(self, color):
# """Launch color picker and return chosen color
#
# Args:
# color(QtGui.QColor): Start color to display
#
# Returns:
# QtGui.QColor
#
# """
#
# color_dialog = QtWidgets.QColorDialog(color)
# color_dialog.setStyleSheet(style.load_stylesheet())
#
# accepted = color_dialog.exec_()
# if not accepted:
# return
#
# return color_dialog.selectedColor()


@@ -0,0 +1,371 @@
import nuke
from avalon.vendor import qargparse
from avalon import api, io
from openpype.hosts.nuke.api.lib import (
get_imageio_input_colorspace
)
from avalon.nuke import (
containerise,
update_container,
viewer_update_and_undo_stop,
maintained_selection
)
from openpype.hosts.nuke.api import plugin
class LoadClip(plugin.NukeLoader):
"""Load clip into Nuke
Either an image sequence or a video file.
"""
families = [
"source",
"plate",
"render",
"prerender",
"review"
]
representations = [
"exr",
"dpx",
"mov",
"review",
"mp4"
]
label = "Load Clip"
order = -20
icon = "file-video-o"
color = "white"
script_start = int(nuke.root()["first_frame"].value())
# option gui
defaults = {
"start_at_workfile": True
}
options = [
qargparse.Boolean(
"start_at_workfile",
help="Load at workfile start frame",
default=True
)
]
node_name_template = "{class_name}_{ext}"
@classmethod
def get_representations(cls):
return (
cls.representations
+ cls._representations
+ plugin.get_review_presets_config()
)
def load(self, context, name, namespace, options):
is_sequence = len(context["representation"]["files"]) > 1
file = self.fname.replace("\\", "/")
start_at_workfile = options.get(
"start_at_workfile", self.defaults["start_at_workfile"])
version = context['version']
version_data = version.get("data", {})
repr_id = context["representation"]["_id"]
colorspace = version_data.get("colorspace")
iio_colorspace = get_imageio_input_colorspace(file)
repr_cont = context["representation"]["context"]
self.log.info("version_data: {}\n".format(version_data))
self.log.debug(
"Representation id `{}` ".format(repr_id))
self.handle_start = version_data.get("handleStart", 0)
self.handle_end = version_data.get("handleEnd", 0)
first = version_data.get("frameStart", None)
last = version_data.get("frameEnd", None)
first -= self.handle_start
last += self.handle_end
if not is_sequence:
duration = last - first + 1
first = 1
last = first + duration
elif "#" not in file:
frame = repr_cont.get("frame")
assert frame, "Representation is not sequence"
padding = len(frame)
file = file.replace(frame, "#" * padding)
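# e.g. "shot_v001.1001.exr" with frame "1001" becomes
# "shot_v001.####.exr" (illustrative example)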
# Fallback to asset name when namespace is None
if namespace is None:
namespace = context['asset']['name']
if not file:
self.log.warning(
"Representation id `{}` is failing to load".format(repr_id))
return
name_data = {
"asset": repr_cont["asset"],
"subset": repr_cont["subset"],
"representation": context["representation"]["name"],
"ext": repr_cont["representation"],
"id": context["representation"]["_id"],
"class_name": self.__class__.__name__
}
read_name = self.node_name_template.format(**name_data)
# Create the Loader with the filename path set
read_node = nuke.createNode(
"Read",
"name {}".format(read_name))
# to avoid multiple undo steps for rest of process
# we will switch off undo-ing
with viewer_update_and_undo_stop():
read_node["file"].setValue(file)
# Set colorspace defined in version data
if colorspace:
read_node["colorspace"].setValue(str(colorspace))
elif iio_colorspace is not None:
read_node["colorspace"].setValue(iio_colorspace)
self.set_range_to_node(read_node, first, last, start_at_workfile)
# add additional metadata from the version to imprint Avalon knob
add_keys = ["frameStart", "frameEnd",
"source", "colorspace", "author", "fps", "version",
"handleStart", "handleEnd"]
data_imprint = {}
for k in add_keys:
if k == 'version':
data_imprint.update({k: context["version"]['name']})
else:
data_imprint.update(
{k: context["version"]['data'].get(k, str(None))})
data_imprint.update({"objectName": read_name})
read_node["tile_color"].setValue(int("0x4ecd25ff", 16))
container = containerise(
read_node,
name=name,
namespace=namespace,
context=context,
loader=self.__class__.__name__,
data=data_imprint)
if version_data.get("retime", None):
self.make_retimes(read_node, version_data)
self.set_as_member(read_node)
return container
def switch(self, container, representation):
self.update(container, representation)
def update(self, container, representation):
"""Update the Loader's path
Nuke automatically tries to reset some variables when changing
the loader's path to a new file. These automatic changes are to its
inputs:
"""
is_sequence = len(representation["files"]) > 1
read_node = nuke.toNode(container['objectName'])
file = api.get_representation_path(representation).replace("\\", "/")
start_at_workfile = bool("start at" in read_node['frame_mode'].value())
version = io.find_one({
"type": "version",
"_id": representation["parent"]
})
version_data = version.get("data", {})
repr_id = representation["_id"]
colorspace = version_data.get("colorspace")
iio_colorspace = get_imageio_input_colorspace(file)
repr_cont = representation["context"]
self.handle_start = version_data.get("handleStart", 0)
self.handle_end = version_data.get("handleEnd", 0)
first = version_data.get("frameStart", None)
last = version_data.get("frameEnd", None)
first -= self.handle_start
last += self.handle_end
if not is_sequence:
duration = last - first + 1
first = 1
last = first + duration
elif "#" not in file:
frame = repr_cont.get("frame")
assert frame, "Representation is not sequence"
padding = len(frame)
file = file.replace(frame, "#" * padding)
if not file:
self.log.warning(
"Representation id `{}` is failing to load".format(repr_id))
return
read_node["file"].setValue(file)
# to avoid multiple undo steps for rest of process
# we will switch off undo-ing
with viewer_update_and_undo_stop():
# Set colorspace defined in version data
if colorspace:
read_node["colorspace"].setValue(str(colorspace))
elif iio_colorspace is not None:
read_node["colorspace"].setValue(iio_colorspace)
self.set_range_to_node(read_node, first, last, start_at_workfile)
updated_dict = {
"representation": str(representation["_id"]),
"frameStart": str(first),
"frameEnd": str(last),
"version": str(version.get("name")),
"colorspace": colorspace,
"source": version_data.get("source"),
"handleStart": str(self.handle_start),
"handleEnd": str(self.handle_end),
"fps": str(version_data.get("fps")),
"author": version_data.get("author"),
"outputDir": version_data.get("outputDir"),
}
# change color of read_node
# get all versions in list
versions = io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
max_version = max(versions)
if version.get("name") not in [max_version]:
read_node["tile_color"].setValue(int("0xd84f20ff", 16))
else:
read_node["tile_color"].setValue(int("0x4ecd25ff", 16))
# Update the imprinted representation
update_container(
read_node,
updated_dict
)
self.log.info("udated to version: {}".format(version.get("name")))
if version_data.get("retime", None):
self.make_retimes(read_node, version_data)
else:
self.clear_members(read_node)
self.set_as_member(read_node)
def set_range_to_node(self, read_node, first, last, start_at_workfile):
read_node['origfirst'].setValue(int(first))
read_node['first'].setValue(int(first))
read_node['origlast'].setValue(int(last))
read_node['last'].setValue(int(last))
# set start frame depending on workfile or version
self.loader_shift(read_node, start_at_workfile)
def remove(self, container):
from avalon.nuke import viewer_update_and_undo_stop
read_node = nuke.toNode(container['objectName'])
assert read_node.Class() == "Read", "Must be Read"
with viewer_update_and_undo_stop():
members = self.get_members(read_node)
nuke.delete(read_node)
for member in members:
nuke.delete(member)
def make_retimes(self, parent_node, version_data):
''' Create all retime and timewarping nodes with copied animation '''
speed = version_data.get('speed', 1)
time_warp_nodes = version_data.get('timewarps', [])
last_node = None
source_id = self.get_container_id(parent_node)
self.log.info("__ source_id: {}".format(source_id))
self.log.info("__ members: {}".format(self.get_members(parent_node)))
dependent_nodes = self.clear_members(parent_node)
with maintained_selection():
parent_node['selected'].setValue(True)
if speed != 1:
rtn = nuke.createNode(
"Retime",
"speed {}".format(speed))
rtn["before"].setValue("continue")
rtn["after"].setValue("continue")
rtn["input.first_lock"].setValue(True)
rtn["input.first"].setValue(
self.script_start
)
self.set_as_member(rtn)
last_node = rtn
if time_warp_nodes != []:
start_anim = self.script_start + (self.handle_start / speed)
for timewarp in time_warp_nodes:
twn = nuke.createNode(
timewarp["Class"],
"name {}".format(timewarp["name"])
)
if isinstance(timewarp["lookup"], list):
# if array for animation
twn["lookup"].setAnimated()
for i, value in enumerate(timewarp["lookup"]):
twn["lookup"].setValueAt(
(start_anim + i) + value,
(start_anim + i))
else:
# if static value `int`
twn["lookup"].setValue(timewarp["lookup"])
self.set_as_member(twn)
last_node = twn
if dependent_nodes:
# connect to original inputs
for i, n in enumerate(dependent_nodes):
last_node.setInput(i, n)
def loader_shift(self, read_node, workfile_start=False):
""" Set start frame of read node to a workfile start
Args:
read_node (nuke.Node): The nuke's read node
workfile_start (bool): set workfile start frame if true
"""
if workfile_start:
read_node['frame_mode'].setValue("start at")
read_node['frame'].setValue(str(self.script_start))


@@ -12,7 +12,15 @@ from openpype.hosts.nuke.api.lib import (
class LoadImage(api.Loader):
"""Load still image into Nuke"""
families = ["render", "source", "plate", "review", "image"]
families = [
"render2d",
"source",
"plate",
"render",
"prerender",
"review",
"image"
]
representations = ["exr", "dpx", "jpg", "jpeg", "png", "psd", "tiff"]
label = "Load Image"
@@ -33,6 +41,10 @@
)
]
@classmethod
def get_representations(cls):
return cls.representations + cls._representations
def load(self, context, name, namespace, options):
from avalon.nuke import (
containerise,


@@ -1,347 +0,0 @@
import nuke
from avalon.vendor import qargparse
from avalon import api, io
from openpype.api import get_current_project_settings
from openpype.hosts.nuke.api.lib import (
get_imageio_input_colorspace
)
def add_review_presets_config():
returning = {
"families": list(),
"representations": list()
}
settings = get_current_project_settings()
review_profiles = (
settings["global"]
["publish"]
["ExtractReview"]
["profiles"]
)
outputs = {}
for profile in review_profiles:
outputs.update(profile.get("outputs", {}))
for output, properities in outputs.items():
returning["representations"].append(output)
returning["families"] += properities.get("families", [])
return returning
class LoadMov(api.Loader):
"""Load mov file into Nuke"""
families = ["render", "source", "plate", "review"]
representations = ["mov", "review", "mp4"]
label = "Load mov"
order = -10
icon = "code-fork"
color = "orange"
first_frame = nuke.root()["first_frame"].value()
# options gui
defaults = {
"start_at_workfile": True
}
options = [
qargparse.Boolean(
"start_at_workfile",
help="Load at workfile start frame",
default=True
)
]
node_name_template = "{class_name}_{ext}"
def load(self, context, name, namespace, options):
from avalon.nuke import (
containerise,
viewer_update_and_undo_stop
)
start_at_workfile = options.get(
"start_at_workfile", self.defaults["start_at_workfile"])
version = context['version']
version_data = version.get("data", {})
repr_id = context["representation"]["_id"]
self.handle_start = version_data.get("handleStart", 0)
self.handle_end = version_data.get("handleEnd", 0)
orig_first = version_data.get("frameStart")
orig_last = version_data.get("frameEnd")
diff = orig_first - 1
first = orig_first - diff
last = orig_last - diff
colorspace = version_data.get("colorspace")
repr_cont = context["representation"]["context"]
self.log.debug(
"Representation id `{}` ".format(repr_id))
context["representation"]["_id"]
# create handles offset (only to last, because of mov)
last += self.handle_start + self.handle_end
# Fallback to asset name when namespace is None
if namespace is None:
namespace = context['asset']['name']
file = self.fname
if not file:
self.log.warning(
"Representation id `{}` is failing to load".format(repr_id))
return
file = file.replace("\\", "/")
name_data = {
"asset": repr_cont["asset"],
"subset": repr_cont["subset"],
"representation": context["representation"]["name"],
"ext": repr_cont["representation"],
"id": context["representation"]["_id"],
"class_name": self.__class__.__name__
}
read_name = self.node_name_template.format(**name_data)
read_node = nuke.createNode(
"Read",
"name {}".format(read_name)
)
# to avoid multiple undo steps for rest of process
# we will switch off undo-ing
with viewer_update_and_undo_stop():
read_node["file"].setValue(file)
read_node["origfirst"].setValue(first)
read_node["first"].setValue(first)
read_node["origlast"].setValue(last)
read_node["last"].setValue(last)
read_node['frame_mode'].setValue("start at")
if start_at_workfile:
# start at workfile start
read_node['frame'].setValue(str(self.first_frame))
else:
# start at version frame start
read_node['frame'].setValue(
str(orig_first - self.handle_start))
if colorspace:
read_node["colorspace"].setValue(str(colorspace))
preset_clrsp = get_imageio_input_colorspace(file)
if preset_clrsp is not None:
read_node["colorspace"].setValue(preset_clrsp)
# add additional metadata from the version to imprint Avalon knob
add_keys = [
"frameStart", "frameEnd", "handles", "source", "author",
"fps", "version", "handleStart", "handleEnd"
]
data_imprint = {}
for key in add_keys:
if key == 'version':
data_imprint.update({
key: context["version"]['name']
})
else:
data_imprint.update({
key: context["version"]['data'].get(key, str(None))
})
data_imprint.update({"objectName": read_name})
read_node["tile_color"].setValue(int("0x4ecd25ff", 16))
if version_data.get("retime", None):
speed = version_data.get("speed", 1)
time_warp_nodes = version_data.get("timewarps", [])
self.make_retimes(speed, time_warp_nodes)
return containerise(
read_node,
name=name,
namespace=namespace,
context=context,
loader=self.__class__.__name__,
data=data_imprint
)
def switch(self, container, representation):
self.update(container, representation)
def update(self, container, representation):
"""Update the Loader's path
Nuke automatically tries to reset some variables when changing
the loader's path to a new file. These automatic changes are to its
inputs:
"""
from avalon.nuke import (
update_container
)
read_node = nuke.toNode(container['objectName'])
assert read_node.Class() == "Read", "Must be Read"
file = self.fname
if not file:
repr_id = representation["_id"]
self.log.warning(
"Representation id `{}` is failing to load".format(repr_id))
return
file = file.replace("\\", "/")
# Get start frame from version data
version = io.find_one({
"type": "version",
"_id": representation["parent"]
})
# get all versions in list
versions = io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
max_version = max(versions)
version_data = version.get("data", {})
orig_first = version_data.get("frameStart")
orig_last = version_data.get("frameEnd")
diff = orig_first - 1
# set first to 1
first = orig_first - diff
last = orig_last - diff
self.handle_start = version_data.get("handleStart", 0)
self.handle_end = version_data.get("handleEnd", 0)
colorspace = version_data.get("colorspace")
if first is None:
self.log.warning((
"Missing start frame for updated version"
"assuming starts at frame 0 for: "
"{} ({})").format(
read_node['name'].value(), representation))
first = 0
# create handles offset (only to last, because of mov)
last += self.handle_start + self.handle_end
read_node["file"].setValue(file)
# Set the global in to the start frame of the sequence
read_node["origfirst"].setValue(first)
read_node["first"].setValue(first)
read_node["origlast"].setValue(last)
read_node["last"].setValue(last)
read_node['frame_mode'].setValue("start at")
if int(float(self.first_frame)) == int(
float(read_node['frame'].value())):
# start at workfile start
read_node['frame'].setValue(str(self.first_frame))
else:
# start at version frame start
read_node['frame'].setValue(str(orig_first - self.handle_start))
if colorspace:
read_node["colorspace"].setValue(str(colorspace))
preset_clrsp = get_imageio_input_colorspace(file)
if preset_clrsp is not None:
read_node["colorspace"].setValue(preset_clrsp)
updated_dict = {}
updated_dict.update({
"representation": str(representation["_id"]),
"frameStart": str(first),
"frameEnd": str(last),
"version": str(version.get("name")),
"colorspace": version_data.get("colorspace"),
"source": version_data.get("source"),
"handleStart": str(self.handle_start),
"handleEnd": str(self.handle_end),
"fps": str(version_data.get("fps")),
"author": version_data.get("author"),
"outputDir": version_data.get("outputDir")
})
# change color of node
if version.get("name") not in [max_version]:
read_node["tile_color"].setValue(int("0xd84f20ff", 16))
else:
read_node["tile_color"].setValue(int("0x4ecd25ff", 16))
if version_data.get("retime", None):
speed = version_data.get("speed", 1)
time_warp_nodes = version_data.get("timewarps", [])
self.make_retimes(speed, time_warp_nodes)
# Update the imprinted representation
update_container(
read_node, updated_dict
)
self.log.info("udated to version: {}".format(version.get("name")))
def remove(self, container):
from avalon.nuke import viewer_update_and_undo_stop
read_node = nuke.toNode(container['objectName'])
assert read_node.Class() == "Read", "Must be Read"
with viewer_update_and_undo_stop():
nuke.delete(read_node)
def make_retimes(self, speed, time_warp_nodes):
''' Create all retime and timewarp nodes with copied animation '''
if speed != 1:
rtn = nuke.createNode(
"Retime",
"speed {}".format(speed))
rtn["before"].setValue("continue")
rtn["after"].setValue("continue")
rtn["input.first_lock"].setValue(True)
rtn["input.first"].setValue(
self.first_frame
)
if time_warp_nodes != []:
start_anim = self.first_frame + (self.handle_start / speed)
for timewarp in time_warp_nodes:
twn = nuke.createNode(timewarp["Class"],
"name {}".format(timewarp["name"]))
if isinstance(timewarp["lookup"], list):
# if array for animation
twn["lookup"].setAnimated()
for i, value in enumerate(timewarp["lookup"]):
twn["lookup"].setValueAt(
(start_anim + i) + value,
(start_anim + i))
else:
# if static value `int`
twn["lookup"].setValue(timewarp["lookup"])

View file

@ -1,320 +0,0 @@
import nuke
from avalon.vendor import qargparse
from avalon import api, io
from openpype.hosts.nuke.api.lib import (
get_imageio_input_colorspace
)
class LoadSequence(api.Loader):
"""Load image sequence into Nuke"""
families = ["render", "source", "plate", "review"]
representations = ["exr", "dpx"]
label = "Load Image Sequence"
order = -20
icon = "file-video-o"
color = "white"
script_start = nuke.root()["first_frame"].value()
# option gui
defaults = {
"start_at_workfile": True
}
options = [
qargparse.Boolean(
"start_at_workfile",
help="Load at workfile start frame",
default=True
)
]
node_name_template = "{class_name}_{ext}"
def load(self, context, name, namespace, options):
from avalon.nuke import (
containerise,
viewer_update_and_undo_stop
)
start_at_workfile = options.get(
"start_at_workfile", self.defaults["start_at_workfile"])
version = context['version']
version_data = version.get("data", {})
repr_id = context["representation"]["_id"]
self.log.info("version_data: {}\n".format(version_data))
self.log.debug(
"Representation id `{}` ".format(repr_id))
self.first_frame = int(nuke.root()["first_frame"].getValue())
self.handle_start = version_data.get("handleStart", 0)
self.handle_end = version_data.get("handleEnd", 0)
first = version_data.get("frameStart", None)
last = version_data.get("frameEnd", None)
# Fallback to asset name when namespace is None
if namespace is None:
namespace = context['asset']['name']
first -= self.handle_start
last += self.handle_end
file = self.fname
if not file:
repr_id = context["representation"]["_id"]
self.log.warning(
"Representation id `{}` is failing to load".format(repr_id))
return
file = file.replace("\\", "/")
repr_cont = context["representation"]["context"]
assert repr_cont.get("frame"), "Representation is not sequence"
if "#" not in file:
frame = repr_cont.get("frame")
if frame:
padding = len(frame)
file = file.replace(frame, "#" * padding)
name_data = {
"asset": repr_cont["asset"],
"subset": repr_cont["subset"],
"representation": context["representation"]["name"],
"ext": repr_cont["representation"],
"id": context["representation"]["_id"],
"class_name": self.__class__.__name__
}
read_name = self.node_name_template.format(**name_data)
# Create the Loader with the filename path set
read_node = nuke.createNode(
"Read",
"name {}".format(read_name))
# to avoid multiple undo steps for rest of process
# we will switch off undo-ing
with viewer_update_and_undo_stop():
read_node["file"].setValue(file)
# Set colorspace defined in version data
colorspace = context["version"]["data"].get("colorspace")
if colorspace:
read_node["colorspace"].setValue(str(colorspace))
preset_clrsp = get_imageio_input_colorspace(file)
if preset_clrsp is not None:
read_node["colorspace"].setValue(preset_clrsp)
# set start frame depending on workfile or version
self.loader_shift(read_node, start_at_workfile)
read_node["origfirst"].setValue(int(first))
read_node["first"].setValue(int(first))
read_node["origlast"].setValue(int(last))
read_node["last"].setValue(int(last))
# add additional metadata from the version to imprint Avalon knob
add_keys = ["frameStart", "frameEnd",
"source", "colorspace", "author", "fps", "version",
"handleStart", "handleEnd"]
data_imprint = {}
for k in add_keys:
if k == 'version':
data_imprint.update({k: context["version"]['name']})
else:
data_imprint.update(
{k: context["version"]['data'].get(k, str(None))})
data_imprint.update({"objectName": read_name})
read_node["tile_color"].setValue(int("0x4ecd25ff", 16))
if version_data.get("retime", None):
speed = version_data.get("speed", 1)
time_warp_nodes = version_data.get("timewarps", [])
self.make_retimes(speed, time_warp_nodes)
return containerise(read_node,
name=name,
namespace=namespace,
context=context,
loader=self.__class__.__name__,
data=data_imprint)
def switch(self, container, representation):
self.update(container, representation)
def update(self, container, representation):
"""Update the Loader's path
Nuke automatically tries to reset some variables when changing
the loader's path to a new file. These automatic changes are applied
to its inputs.
"""
from avalon.nuke import (
update_container
)
read_node = nuke.toNode(container['objectName'])
assert read_node.Class() == "Read", "Must be Read"
repr_cont = representation["context"]
assert repr_cont.get("frame"), "Representation is not sequence"
file = api.get_representation_path(representation)
if not file:
repr_id = representation["_id"]
self.log.warning(
"Representation id `{}` is failing to load".format(repr_id))
return
file = file.replace("\\", "/")
if "#" not in file:
frame = repr_cont.get("frame")
if frame:
padding = len(frame)
file = file.replace(frame, "#" * padding)
# Get start frame from version data
version = io.find_one({
"type": "version",
"_id": representation["parent"]
})
# get all versions in list
versions = io.find({
"type": "version",
"parent": version["parent"]
}).distinct('name')
max_version = max(versions)
version_data = version.get("data", {})
self.first_frame = int(nuke.root()["first_frame"].getValue())
self.handle_start = version_data.get("handleStart", 0)
self.handle_end = version_data.get("handleEnd", 0)
first = version_data.get("frameStart")
last = version_data.get("frameEnd")
if first is None:
self.log.warning(
"Missing start frame for updated version"
"assuming starts at frame 0 for: "
"{} ({})".format(read_node['name'].value(), representation))
first = 0
first -= self.handle_start
last += self.handle_end
read_node["file"].setValue(file)
# set start frame depending on workfile or version
self.loader_shift(
read_node,
bool("start at" in read_node['frame_mode'].value()))
read_node["origfirst"].setValue(int(first))
read_node["first"].setValue(int(first))
read_node["origlast"].setValue(int(last))
read_node["last"].setValue(int(last))
updated_dict = {}
updated_dict.update({
"representation": str(representation["_id"]),
"frameStart": str(first),
"frameEnd": str(last),
"version": str(version.get("name")),
"colorspace": version_data.get("colorspace"),
"source": version_data.get("source"),
"handleStart": str(self.handle_start),
"handleEnd": str(self.handle_end),
"fps": str(version_data.get("fps")),
"author": version_data.get("author"),
"outputDir": version_data.get("outputDir"),
})
# change color of read_node
if version.get("name") not in [max_version]:
read_node["tile_color"].setValue(int("0xd84f20ff", 16))
else:
read_node["tile_color"].setValue(int("0x4ecd25ff", 16))
if version_data.get("retime", None):
speed = version_data.get("speed", 1)
time_warp_nodes = version_data.get("timewarps", [])
self.make_retimes(speed, time_warp_nodes)
# Update the imprinted representation
update_container(
read_node,
updated_dict
)
self.log.info("udated to version: {}".format(version.get("name")))
def remove(self, container):
from avalon.nuke import viewer_update_and_undo_stop
read_node = nuke.toNode(container['objectName'])
assert read_node.Class() == "Read", "Must be Read"
with viewer_update_and_undo_stop():
nuke.delete(read_node)
def make_retimes(self, speed, time_warp_nodes):
''' Create all retime and timewarp nodes with copied animation '''
if speed != 1:
rtn = nuke.createNode(
"Retime",
"speed {}".format(speed))
rtn["before"].setValue("continue")
rtn["after"].setValue("continue")
rtn["input.first_lock"].setValue(True)
rtn["input.first"].setValue(
self.first_frame
)
if time_warp_nodes != []:
start_anim = self.first_frame + (self.handle_start / speed)
for timewarp in time_warp_nodes:
twn = nuke.createNode(timewarp["Class"],
"name {}".format(timewarp["name"]))
if isinstance(timewarp["lookup"], list):
# if array for animation
twn["lookup"].setAnimated()
for i, value in enumerate(timewarp["lookup"]):
twn["lookup"].setValueAt(
(start_anim + i) + value,
(start_anim + i))
else:
# if static value `int`
twn["lookup"].setValue(timewarp["lookup"])
def loader_shift(self, read_node, workfile_start=False):
""" Set start frame of read node to a workfile start
Args:
read_node (nuke.Node): The nuke's read node
workfile_start (bool): set workfile start frame if true
"""
if workfile_start:
read_node['frame_mode'].setValue("start at")
read_node['frame'].setValue(str(self.script_start))

View file

@ -9,7 +9,9 @@ SINGLE_FILE_FORMATS = ['avi', 'mp4', 'mxf', 'mov', 'mpg', 'mpeg', 'wmv', 'm4v',
'm2v']
def evaluate_filepath_new(k_value, k_eval, project_dir, first_frame):
def evaluate_filepath_new(
k_value, k_eval, project_dir, first_frame, allow_relative):
# get combined relative path
combined_relative_path = None
if k_eval is not None and project_dir is not None:
@ -26,8 +28,9 @@ def evaluate_filepath_new(k_value, k_eval, project_dir, first_frame):
combined_relative_path = None
try:
k_value = k_value % first_frame
if os.path.exists(k_value):
# k_value = k_value % first_frame
if os.path.isdir(os.path.dirname(k_value)):
# doesn't check for file, only parent dir
filepath = k_value
elif os.path.exists(k_eval):
filepath = k_eval
@ -37,10 +40,12 @@ def evaluate_filepath_new(k_value, k_eval, project_dir, first_frame):
filepath = os.path.abspath(filepath)
except Exception as E:
log.error("Cannot create Read node. Perhaps it needs to be rendered first :) Error: `{}`".format(E))
log.error("Cannot create Read node. Perhaps it needs to be \
rendered first :) Error: `{}`".format(E))
return None
filepath = filepath.replace('\\', '/')
# assumes last number is a sequence counter
current_frame = re.findall(r'\d+', filepath)[-1]
padding = len(current_frame)
basename = filepath[: filepath.rfind(current_frame)]
@ -51,11 +56,13 @@ def evaluate_filepath_new(k_value, k_eval, project_dir, first_frame):
pass
else:
# Image sequence needs hashes
# to do still with no number not handled
filepath = basename + '#' * padding + '.' + filetype
# relative path? make it relative again
if not isinstance(project_dir, type(None)):
filepath = filepath.replace(project_dir, '.')
if allow_relative:
if project_dir:
filepath = filepath.replace(project_dir, '.')
# get first and last frame from disk
frames = []
@ -95,41 +102,40 @@ def create_read_node(ndata, comp_start):
return
def write_to_read(gn):
def write_to_read(gn,
allow_relative=False):
comp_start = nuke.Root().knob('first_frame').value()
comp_end = nuke.Root().knob('last_frame').value()
project_dir = nuke.Root().knob('project_directory').getValue()
if not os.path.exists(project_dir):
project_dir = nuke.Root().knob('project_directory').evaluate()
group_read_nodes = []
with gn:
height = gn.screenHeight() # get group height and position
new_xpos = int(gn.knob('xpos').value())
new_ypos = int(gn.knob('ypos').value()) + height + 20
group_writes = [n for n in nuke.allNodes() if n.Class() == "Write"]
print("__ group_writes: {}".format(group_writes))
if group_writes != []:
# there can be only 1 write node, taking first
n = group_writes[0]
if n.knob('file') is not None:
file_path_new = evaluate_filepath_new(
myfile, firstFrame, lastFrame = evaluate_filepath_new(
n.knob('file').getValue(),
n.knob('file').evaluate(),
project_dir,
comp_start
comp_start,
allow_relative
)
if not file_path_new:
if not myfile:
return
myfiletranslated, firstFrame, lastFrame = file_path_new
# get node data
ndata = {
'filepath': myfiletranslated,
'firstframe': firstFrame,
'lastframe': lastFrame,
'filepath': myfile,
'firstframe': int(firstFrame),
'lastframe': int(lastFrame),
'new_xpos': new_xpos,
'new_ypos': new_ypos,
'colorspace': n.knob('colorspace').getValue(),
@ -139,7 +145,6 @@ def write_to_read(gn):
}
group_read_nodes.append(ndata)
# create reads in one go
for oneread in group_read_nodes:
# create read node
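A hedged usage sketch (assuming `write_to_read` is imported from this module and a publish group containing a Write node is selected):

```python
import nuke

# convert the Write node inside the selected publish group into a Read node;
# keep absolute paths by leaving relative-path translation off
gn = nuke.selectedNode()
write_to_read(gn, allow_relative=False)
```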

View file

@ -17,6 +17,10 @@ class ExtractReview(openpype.api.Extractor):
hosts = ["photoshop"]
families = ["review"]
# Extract Options
jpg_options = None
mov_options = None
def process(self, instance):
staging_dir = self.staging_dir(instance)
self.log.info("Outputting image to {}".format(staging_dir))
@ -53,7 +57,8 @@ class ExtractReview(openpype.api.Extractor):
"name": "jpg",
"ext": "jpg",
"files": output_image,
"stagingDir": staging_dir
"stagingDir": staging_dir,
"tags": self.jpg_options['tags']
})
instance.data["stagingDir"] = staging_dir
@ -97,7 +102,7 @@ class ExtractReview(openpype.api.Extractor):
"frameEnd": 1,
"fps": 25,
"preview": True,
"tags": ["review", "ftrackreview"]
"tags": self.mov_options['tags']
})
# Required for extract_review plugin (L222 onwards).

View file

@ -1,5 +1,4 @@
import pyblish.api
from avalon import io
from openpype.pipeline import (
OpenPypePyblishPluginMixin,

View file

@ -1,6 +1,6 @@
import os
import sys
openpype_dir = ""
mongo_url = ""
project_name = ""
asset_name = ""
@ -9,9 +9,6 @@ ftrack_url = ""
ftrack_username = ""
ftrack_api_key = ""
host_name = "testhost"
current_file = os.path.abspath(__file__)
def multi_dirname(path, times=1):
for _ in range(times):
@ -19,8 +16,12 @@ def multi_dirname(path, times=1):
return path
host_name = "testhost"
current_file = os.path.abspath(__file__)
openpype_dir = multi_dirname(current_file, 4)
os.environ["OPENPYPE_MONGO"] = mongo_url
os.environ["OPENPYPE_ROOT"] = multi_dirname(current_file, 4)
os.environ["OPENPYPE_ROOT"] = openpype_dir
os.environ["AVALON_MONGO"] = mongo_url
os.environ["AVALON_PROJECT"] = project_name
os.environ["AVALON_ASSET"] = asset_name
@ -42,7 +43,7 @@ for path in [
from Qt import QtWidgets, QtCore
from openpype.tools.new_publisher.window import PublisherWindow
from openpype.tools.publisher.window import PublisherWindow
def main():

View file

@ -1,6 +1,8 @@
import os
import logging
import requests
import avalon.api
import pyblish.api
from avalon.tvpaint import pipeline
@ -8,6 +10,7 @@ from avalon.tvpaint.communication_server import register_localization_file
from .lib import set_context_settings
from openpype.hosts import tvpaint
from openpype.api import get_current_project_settings
log = logging.getLogger(__name__)
@ -51,6 +54,19 @@ def initial_launch():
set_context_settings()
def application_exit():
data = get_current_project_settings()
stop_timer = data["tvpaint"]["stop_timer_on_application_exit"]
if not stop_timer:
return
# Stop application timer.
webserver_url = os.environ.get("OPENPYPE_WEBSERVER_URL")
rest_api_url = "{}/timers_manager/stop_timer".format(webserver_url)
requests.post(rest_api_url)
def install():
log.info("OpenPype - Installing TVPaint integration")
localization_file = os.path.join(HOST_DIR, "resources", "avalon.loc")
@ -67,6 +83,7 @@ def install():
pyblish.api.register_callback("instanceToggled", on_instance_toggle)
avalon.api.on("application.launched", initial_launch)
avalon.api.on("application.exit", application_exit)
def uninstall():

View file

@ -1353,23 +1353,23 @@ def _prepare_last_workfile(data, workdir, workfile_template_key):
)
# Last workfile path
last_workfile_path = ""
extensions = avalon.api.HOST_WORKFILE_EXTENSIONS.get(
app.host_name
)
if extensions:
anatomy = data["anatomy"]
# Find last workfile
file_template = anatomy.templates[workfile_template_key]["file"]
workdir_data.update({
"version": 1,
"user": get_openpype_username(),
"ext": extensions[0]
})
last_workfile_path = data.get("last_workfile_path") or ""
if not last_workfile_path:
extensions = avalon.api.HOST_WORKFILE_EXTENSIONS.get(app.host_name)
last_workfile_path = avalon.api.last_workfile(
workdir, file_template, workdir_data, extensions, True
)
if extensions:
anatomy = data["anatomy"]
# Find last workfile
file_template = anatomy.templates["work"]["file"]
workdir_data.update({
"version": 1,
"user": get_openpype_username(),
"ext": extensions[0]
})
last_workfile_path = avalon.api.last_workfile(
workdir, file_template, workdir_data, extensions, True
)
if os.path.exists(last_workfile_path):
log.debug((

View file

@ -6,6 +6,7 @@ import logging
import re
import json
import tempfile
import distutils.spawn
from .execute import run_subprocess
from .profiles_filtering import filter_profiles
@ -468,7 +469,7 @@ def oiio_supported():
"""
Checks if oiiotool is configured for this platform.
Expects full path to executable.
Triggers simple subprocess, handles exception if fails.
'should_decompress' will throw an exception if it is configured
but not present or not working.
@ -476,7 +477,10 @@ def oiio_supported():
(bool)
"""
oiio_path = get_oiio_tools_path()
if not oiio_path or not os.path.exists(oiio_path):
if oiio_path:
oiio_path = distutils.spawn.find_executable(oiio_path)
if not oiio_path:
log.debug("OIIOTool is not configured or not present at {}".
format(oiio_path))
return False

View file

@ -48,7 +48,7 @@ class _ModuleClass(object):
def __getattr__(self, attr_name):
if attr_name not in self.__attributes__:
if attr_name in ("__path__"):
if attr_name in ("__path__", "__file__"):
return None
raise ImportError("No module named {}.{}".format(
self.name, attr_name
@ -104,6 +104,9 @@ class _InterfacesClass(_ModuleClass):
"""
def __getattr__(self, attr_name):
if attr_name not in self.__attributes__:
if attr_name in ("__path__", "__file__"):
return None
# Fake Interface if is not missing
self.__attributes__[attr_name] = type(
attr_name,

View file

@ -11,7 +11,7 @@ import pyblish.api
class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin):
"""Collect Deadline Webservice URL from instance."""
order = pyblish.api.CollectorOrder
order = pyblish.api.CollectorOrder + 0.02
label = "Deadline Webservice from the Instance"
families = ["rendering"]
@ -46,24 +46,25 @@ class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin):
["deadline"]
)
try:
default_servers = deadline_settings["deadline_urls"]
project_servers = (
render_instance.context.data
["project_settings"]
["deadline"]
["deadline_servers"]
)
deadline_servers = {
k: default_servers[k]
for k in project_servers
if k in default_servers
}
except AttributeError:
# Handle situation were we had only one url for deadline.
return render_instance.context.data["defaultDeadline"]
default_server = render_instance.context.data["defaultDeadline"]
instance_server = render_instance.data.get("deadlineServers")
if not instance_server:
return default_server
default_servers = deadline_settings["deadline_urls"]
project_servers = (
render_instance.context.data
["project_settings"]
["deadline"]
["deadline_servers"]
)
deadline_servers = {
k: default_servers[k]
for k in project_servers
if k in default_servers
}
# This is Maya specific and may not reflect real selection of deadline
# url as dictionary keys in Python 2 are not ordered
return deadline_servers[
list(deadline_servers.keys())[
int(render_instance.data.get("deadlineServers"))

View file

@ -351,6 +351,11 @@ class MayaSubmitDeadline(pyblish.api.InstancePlugin):
f.replace(orig_scene, new_scene)
)
instance.data["expectedFiles"] = [new_exp]
if instance.data.get("publishRenderMetadataFolder"):
instance.data["publishRenderMetadataFolder"] = \
instance.data["publishRenderMetadataFolder"].replace(
orig_scene, new_scene)
self.log.info("Scene name was switched {} -> {}".format(
orig_scene, new_scene
))

View file

@ -385,6 +385,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
"""
task = os.environ["AVALON_TASK"]
subset = instance_data["subset"]
cameras = instance_data.get("cameras", [])
instances = []
# go through aovs in expected files
for aov, files in exp_files[0].items():
@ -410,7 +411,11 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
task[0].upper(), task[1:],
subset[0].upper(), subset[1:])
subset_name = '{}_{}'.format(group_name, aov)
cam = [c for c in cameras if c in col.head]
if cam:
subset_name = '{}_{}_{}'.format(group_name, cam[0], aov)
else:
subset_name = '{}_{}'.format(group_name, aov)
if isinstance(col, (list, tuple)):
staging = os.path.dirname(col[0])

View file

@ -1,7 +1,6 @@
import os
import json
import collections
import openpype
from openpype.modules import OpenPypeModule
from openpype_interfaces import (
@ -372,7 +371,7 @@ class FtrackModule(
return self.tray_module.validate()
def tray_exit(self):
return self.tray_module.stop_action_server()
self.tray_module.tray_exit()
def set_credentials_to_env(self, username, api_key):
os.environ["FTRACK_API_USER"] = username or ""
@ -397,3 +396,16 @@ class FtrackModule(
def timer_stopped(self):
if self._timers_manager_module is not None:
self._timers_manager_module.timer_stopped(self.id)
def get_task_time(self, project_name, asset_name, task_name):
session = self.create_ftrack_session()
query = (
'Task where name is "{}"'
' and parent.name is "{}"'
' and project.full_name is "{}"'
).format(task_name, asset_name, project_name)
task_entity = session.query(query).first()
if not task_entity:
return 0
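# 'time_logged' is tracked in seconds; convert to hours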
hours_logged = (task_entity["time_logged"] / 60) / 60
return hours_logged

View file

@ -289,6 +289,10 @@ class FtrackTrayWrapper:
parent_menu.addMenu(tray_menu)
def tray_exit(self):
self.stop_action_server()
self.stop_timer_thread()
# Definition of visibility of each menu actions
def set_menu_visibility(self):
self.tray_server_menu.menuAction().setVisible(self.bool_logged)

View file

@ -1,3 +1,5 @@
import json
from aiohttp.web_response import Response
from openpype.api import Logger
@ -28,6 +30,11 @@ class TimersManagerModuleRestApi:
self.prefix + "/stop_timer",
self.stop_timer
)
self.server_manager.add_route(
"GET",
self.prefix + "/get_task_time",
self.get_task_time
)
async def start_timer(self, request):
data = await request.json()
@ -48,3 +55,20 @@ class TimersManagerModuleRestApi:
async def stop_timer(self, request):
self.module.stop_timers()
return Response(status=200)
async def get_task_time(self, request):
data = await request.json()
try:
project_name = data['project_name']
asset_name = data['asset_name']
task_name = data['task_name']
except KeyError:
message = (
"Payload must contain fields 'project_name, 'asset_name',"
" 'task_name'"
)
log.warning(message)
return Response(text=message, status=400)
time = self.module.get_task_time(project_name, asset_name, task_name)
return Response(text=json.dumps(time))
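A minimal client-side sketch of calling this route (values are hypothetical; assumes the webserver URL is exposed through `OPENPYPE_WEBSERVER_URL`, as used by the TVPaint hook above):

```python
import os

import requests

# the handler reads a JSON body even though the route is registered as GET
webserver_url = os.environ["OPENPYPE_WEBSERVER_URL"]
response = requests.get(
    "{}/timers_manager/get_task_time".format(webserver_url),
    json={
        "project_name": "my_project",  # hypothetical context
        "asset_name": "sh010",
        "task_name": "compositing",
    },
)
# the body is a json-dumped dict of {module name: hours logged}
print(response.json())
```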

View file

@ -150,6 +150,7 @@ class TimersManager(OpenPypeModule, ITrayService):
def tray_exit(self):
if self._idle_manager:
self._idle_manager.stop()
self._idle_manager.wait()
def start_timer(self, project_name, asset_name, task_name, hierarchy):
"""
@ -191,6 +192,16 @@ class TimersManager(OpenPypeModule, ITrayService):
}
self.timer_started(None, data)
def get_task_time(self, project_name, asset_name, task_name):
times = {}
for module_id, connector in self._connectors_by_module_id.items():
if hasattr(connector, "get_task_time"):
module = self._modules_by_id[module_id]
times[module.name] = connector.get_task_time(
project_name, asset_name, task_name
)
return times
def timer_started(self, source_id, data):
for module_id, connector in self._connectors_by_module_id.items():
if module_id == source_id:

View file

@ -26,6 +26,7 @@ class CollectResourcesPath(pyblish.api.InstancePlugin):
"animation",
"model",
"mayaAscii",
"mayaScene",
"setdress",
"layout",
"ass",
@ -67,6 +68,12 @@ class CollectResourcesPath(pyblish.api.InstancePlugin):
"representation": "TEMP"
})
# For the first time publish
if instance.data.get("hierarchy"):
template_data.update({
"hierarchy": instance.data["hierarchy"]
})
anatomy_filled = anatomy.format(template_data)
if "folder" in anatomy.templates["publish"]:

View file

@ -1,6 +1,5 @@
import os
import re
import subprocess
import json
import copy
import tempfile
@ -46,7 +45,8 @@ class ExtractBurnin(openpype.api.Extractor):
"aftereffects",
"tvpaint",
"webpublisher",
"aftereffects"
"aftereffects",
"photoshop"
# "resolve"
]
optional = True
@ -158,6 +158,11 @@ class ExtractBurnin(openpype.api.Extractor):
filled_anatomy = anatomy.format_all(burnin_data)
burnin_data["anatomy"] = filled_anatomy.get_solved()
# Add custom burnin data from the instance to burnin_data.
burnin_data["custom"] = (
instance.data.get("custom_burnin_data") or {}
)
# Add source camera name to burnin data
camera_name = repre.get("camera_name")
if camera_name:

View file

@ -63,6 +63,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
"animation",
"model",
"mayaAscii",
"mayaScene",
"setdress",
"layout",
"ass",

View file

@ -257,3 +257,30 @@ class PypeCommands:
def validate_jsons(self):
pass
def run_tests(self, folder, mark, pyargs):
"""
Runs tests from 'folder'
Args:
folder (str): relative path to folder with tests
mark (str): label to run tests marked by it (slow etc)
pyargs (str): package path to test
"""
print("run_tests")
import subprocess
if folder:
folder = " ".join(list(folder))
else:
folder = "../tests"
mark_str = pyargs_str = ''
if mark:
mark_str = "-m {}".format(mark)
if pyargs:
pyargs_str = "--pyargs {}".format(pyargs)
cmd = "pytest {} {} {}".format(folder, mark_str, pyargs_str)
print("Running {}".format(cmd))
subprocess.run(cmd, shell=True)

View file

@ -109,9 +109,7 @@ def _prores_codec_args(ffprobe_data, source_ffmpeg_cmd):
def _h264_codec_args(ffprobe_data, source_ffmpeg_cmd):
output = []
output.extend(["-codec:v", "h264"])
output = ["-codec:v", "h264"]
# Use arguments from source if are available source arguments
if source_ffmpeg_cmd:
@ -137,6 +135,32 @@ def _h264_codec_args(ffprobe_data, source_ffmpeg_cmd):
return output
def _dnxhd_codec_args(ffprobe_data, source_ffmpeg_cmd):
output = ["-codec:v", "dnxhd"]
# Use source profile (profiles in metadata are not usable in args directly)
profile = ffprobe_data.get("profile") or ""
# Lower profile and replace space with underscore
cleaned_profile = profile.lower().replace(" ", "_")
dnx_profiles = {
"dnxhd",
"dnxhr_lb",
"dnxhr_sq",
"dnxhr_hq",
"dnxhr_hqx",
"dnxhr_444"
}
if cleaned_profile in dnx_profiles:
output.extend(["-profile:v", cleaned_profile])
pix_fmt = ffprobe_data.get("pix_fmt")
if pix_fmt:
output.extend(["-pix_fmt", pix_fmt])
output.extend(["-g", "1"])
return output
def get_codec_args(ffprobe_data, source_ffmpeg_cmd):
codec_name = ffprobe_data.get("codec_name")
# Codec "prores"
@ -147,6 +171,10 @@ def get_codec_args(ffprobe_data, source_ffmpeg_cmd):
if codec_name == "h264":
return _h264_codec_args(ffprobe_data, source_ffmpeg_cmd)
# Coded DNxHD
if codec_name == "dnxhd":
return _dnxhd_codec_args(ffprobe_data, source_ffmpeg_cmd)
output = []
if codec_name:
output.extend(["-codec:v", codec_name])

View file

@ -24,7 +24,7 @@
"animation",
"setdress",
"layout",
"mayaAscii"
"mayaScene"
]
},
"ExtractJpegEXR": {

View file

@ -315,6 +315,21 @@
"optional": true,
"active": true
},
"ValidateRigContents": {
"enabled": false,
"optional": true,
"active": true
},
"ValidateRigJointsHidden": {
"enabled": false,
"optional": true,
"active": true
},
"ValidateRigControllers": {
"enabled": false,
"optional": true,
"active": true
},
"ValidateCameraAttributes": {
"enabled": false,
"optional": true,
@ -489,6 +504,12 @@
255,
255
],
"mayaScene": [
67,
174,
255,
255
],
"setdress": [
255,
250,

View file

@ -117,16 +117,7 @@
"load": {
"LoadImage": {
"enabled": true,
"families": [
"render2d",
"source",
"plate",
"render",
"prerender",
"review",
"image"
],
"representations": [
"_representations": [
"exr",
"dpx",
"jpg",
@ -137,39 +128,9 @@
],
"node_name_template": "{class_name}_{ext}"
},
"LoadMov": {
"LoadClip": {
"enabled": true,
"families": [
"source",
"plate",
"render",
"prerender",
"review"
],
"representations": [
"mov",
"review",
"mp4",
"h264"
],
"node_name_template": "{class_name}_{ext}"
},
"LoadSequence": {
"enabled": true,
"families": [
"render2d",
"source",
"plate",
"render",
"prerender",
"review"
],
"representations": [
"exr",
"dpx",
"jpg",
"jpeg",
"png"
"_representations": [
],
"node_name_template": "{class_name}_{ext}"
}

View file

@ -17,6 +17,18 @@
"png",
"jpg"
]
},
"ExtractReview": {
"jpg_options": {
"tags": [
]
},
"mov_options": {
"tags": [
"review",
"ftrackreview"
]
}
}
},
"workfile_builder": {

View file

@ -1,4 +1,5 @@
{
"stop_timer_on_application_exit": false,
"publish": {
"ExtractSequence": {
"review_bg": [

View file

@ -2,7 +2,7 @@
## Basic rules
- configurations do not define the GUI, but the GUI defines configurations!
- output is always json (yaml is not needed for anatomy templates anymore)
- output is always json serializable
- GUI schema has multiple input types, all inputs are represented by a dictionary
- each input may have "input modifiers" (keys in the dictionary) that are required or optional
- the only modifier required for all input items is the key `"type"`, which says what type of item it is
@ -13,16 +13,16 @@
- `"is_group"` - define that all values under key in hierarchy will be overriden if any value is modified, this information is also stored to overrides
- this keys is not allowed for all inputs as they may have not reason for that
- key is validated, can be only once in hierarchy but is not required
- currently there are `system configurations` and `project configurations`
- currently there are `system settings` and `project settings`
## Inner schema
- GUI schemas are huge json files; to be able to split the whole configuration into multiple schemas there's the type `schema`
- system configuration schemas are stored in `~/tools/settings/settings/gui_schemas/system_schema/` and project configurations in `~/tools/settings/settings/gui_schemas/projects_schema/`
- system configuration schemas are stored in `~/openpype/settings/entities/schemas/system_schema/` and project configurations in `~/openpype/settings/entities/schemas/projects_schema/`
- each schema name is the filename of the json file without the extension (".json")
- if the content is a dictionary it will be used as `schema`, otherwise as `schema_template`
### schema
- can have only key `"children"` which is list of strings, each string should represent another schema (order matters) string represebts name of the schema
- can have only the key `"children"`, which is a list of strings; each string represents the name of another schema (order matters)
- will just paste the schemas from the other schema files in the order of the "children" list
```
@ -32,8 +32,9 @@
}
```
### schema_template
### template
- allows defining schema "templates" so the same content is not duplicated multiple times
- legacy name is `schema_template` (still usable)
```javascript
// EXAMPLE json file content (filename: example_template.json)
[
@ -59,11 +60,11 @@
// EXAMPLE usage of the template in schema
{
"type": "dict",
"key": "schema_template_examples",
"key": "template_examples",
"label": "Schema template examples",
"children": [
{
"type": "schema_template",
"type": "template",
// filename of template (example_template.json)
"name": "example_template",
"template_data": {
@ -72,7 +73,7 @@
"multipath_executables": false
}
}, {
"type": "schema_template",
"type": "template",
"name": "example_template",
"template_data": {
"host_label": "Maya 2020",
@ -98,8 +99,16 @@
...
}
```
- Unfilled fields can be also used for non string values, in that case value must contain only one key and value for fill must contain right type.
- Unfilled fields can also be used for non-string values (e.g. a dictionary); in that case the value must contain only one key and the fill value must be of the right type.
```javascript
// Passed data
{
"executable_multiplatform": {
"type": "schema",
"name": "my_multiplatform_schema"
}
}
// Template content
{
...
// Allowed
@ -121,32 +130,34 @@
"name": "project_settings/global"
}
```
- all valid `ModuleSettingsDef` classes where calling of `get_settings_schemas`
- all valid `BaseModuleSettingsDef` classes whose `get_settings_schemas` call
returns a dictionary with the key "project_settings/global" will have their
schemas extend and replace this item
- works almost the same way as templates
- dynamic schemas work almost the same way as templates
- one item can be replaced by multiple items (or by 0 items)
- the goal is to dynamically load settings of OpenPype addons without having
their schemas or default values in the main repository
- values of these schemas are saved using the `BaseModuleSettingsDef` methods
- the easiest way is to use `JsonFilesSettingsDef`, which has a full implementation of storing default values to json files; all you have to implement is the method `get_settings_root_path`, which should return the path to the root directory where the settings schemas can be found and where values will be saved (see the sketch below)
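- for illustration, a minimal subclass could look like this (the import path and method form are illustrative, not confirmed by this diff):

```
import os

# illustrative import path for JsonFilesSettingsDef
from openpype.modules import JsonFilesSettingsDef


class MyAddonSettingsDef(JsonFilesSettingsDef):
    def get_settings_root_path(self):
        # root directory where this addon's settings schemas live
        # and where default values will be saved
        return os.path.join(
            os.path.dirname(os.path.abspath(__file__)), "settings"
        )
```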
## Basic Dictionary inputs
- these inputs wrap other inputs into a {key: value} relation
## dict
- this is another dictionary input wrapping more inputs but visually makes them different
- item may be used as widget (in `list` or `dict-modifiable`)
- this is a dictionary type wrapping more inputs, with keys defined in the schema
- may be used as dynamic children (e.g. in `list` or `dict-modifiable`)
- in that case the only key modifier is `children`, which is a list of its keys
- USAGE: e.g. List of dictionaries where each dictionary have same structure.
- item may be with or without `"label"` if is not used as widget
- required keys are `"key"` under which will be stored
- without label it is just wrap item holding `"key"`
- can't have `"is_group"` key set to True as it breaks visual override showing
- if `"label"` is entetered there which will be shown in GUI
- item with label can be collapsible
- that can be set with key `"collapsible"` as `True`/`False` (Default: `True`)
- with key `"collapsed"` as `True`/`False` can be set that is collapsed when GUI is opened (Default: `False`)
- it is possible to add darker background with `"highlight_content"` (Default: `False`)
- the darker background has limits: after 3-4 nested highlighted items there is no difference in the color
- if not used as dynamic children it must have `"key"` defined, under which its values are stored
- may be with or without `"label"` (only for GUI)
- `"label"` must be set to be able to mark the item as a group with the `"is_group"` key set to True
- item with label can visually wrap its children
- this option is enabled by default; to turn it off set `"use_label_wrap"` to `False`
- label wrap is by default collapsible
- that can be set with key `"collapsible"` to `True`/`False`
- with key `"collapsed"` as `True`/`False` can be set that is collapsed when GUI is opened (Default: `False`)
- it is possible to add lighter background with `"highlight_content"` (Default: `False`)
- lighter background has limits of maximum applies after 3-4 nested highlighted items there is not much difference in the color
- output is dictionary `{the "key": children values}`
```
# Example
@ -198,8 +209,8 @@
```
## dict-conditional
- is similar to `dict` but has only one child entity that will be always available
- the one entity is enumerator of possible values and based on value of the entity are defined and used other children entities
- is similar to `dict` but always has one enum entity available
- the enum entity has single selection and its value defines the other children entities
- each value of the enumerator has defined children that will be used
- there is no way to have shared entities across multiple enum items
- value from enumerator is also stored next to other values
@ -207,22 +218,27 @@
- `enum_key` must match the key regex, and enum items can't have children with the same key
- `enum_label` is the label of the entity for UI purposes
- enum items are defined with `enum_children`
- it's a list where each item represents enum item
- it's a list where each item represents a single item for the enum
- all items in `enum_children` must have at least `key` key which represents value stored under `enum_key`
- items can define `label` for UI purposes
- enum items can define `label` for UI purposes
- most importantly, an item can define a `children` key holding the definitions of its children (the `children` value works the same way as in `dict`)
- to set the default value for `enum_key`, use `enum_default`
- entity must have defined `"label"` if is not used as widget
- is set as group if any parent is not group
- if `"label"` is entetered there which will be shown in GUI
- item with label can be collapsible
- that can be set with key `"collapsible"` as `True`/`False` (Default: `True`)
- with key `"collapsed"` as `True`/`False` can be set that is collapsed when GUI is opened (Default: `False`)
- it is possible to add darker background with `"highlight_content"` (Default: `False`)
- the darker background has limits: after 3-4 nested highlighted items there is no difference in the color
- output is dictionary `{the "key": children values}`
- is set as group if any parent is not group (can't have children as group)
- may be with or without `"label"` (only for GUI)
- `"label"` must be set to be able mark item as group with `"is_group"` key set to True
- item with label can visually wrap it's children
- this option is enabled by default to turn off set `"use_label_wrap"` to `False`
- label wrap is by default collapsible
- that can be set with key `"collapsible"` to `True`/`False`
- with key `"collapsed"` as `True`/`False` can be set that is collapsed when GUI is opened (Default: `False`)
- it is possible to add lighter background with `"highlight_content"` (Default: `False`)
- lighter background has limits of maximum applies after 3-4 nested highlighted items there is not much difference in the color
- for UI porposes was added `enum_is_horizontal` which will make combobox appear next to children inputs instead of on top of them (Default: `False`)
- this has extended ability of `enum_on_right` which will move combobox to right side next to children widgets (Default: `False`)
- output is dictionary `{the "key": children values}`
- using this type as the template item of a list type makes it possible to create infinite hierarchies
```
# Example
{
@ -298,8 +314,8 @@ How output of the schema could look like on save:
```
## Inputs for setting any kind of value (`Pure` inputs)
- all these input must have defined `"key"` under which will be stored and `"label"` which will be shown next to input
- unless they are used in different types of inputs (later) "as widgets" in that case `"key"` and `"label"` are not required as there is not place where to set them
- all inputs must have `"key"` defined if they are not used as a dynamic item
- they can also have defined `"label"`
### boolean
- simple checkbox, nothing more to set
@ -355,21 +371,15 @@ How output of the schema could look like on save:
```
### path-input
- enhanced text input
- does not allow entering a backslash; it is auto-converted to a forward slash
- more validations may be added, like not allowing a path to end with a slash
- this input is implemented to add additional features to text input
- this is meant to be used in proxy input `path-widget`
- this is meant to be used in proxy input `path`
- DO NOT USE this input in schema please
### raw-json
- a little bit enhanced text input for raw json
- can store dictionary (`{}`) or list (`[]`) but not both
- by default stores dictionary to change it to list set `is_list` to `True`
- has validations of json format
- an empty value is invalid; the value must always be json serializable
- valid value types are list `[]` and dictionary `{}`
- the schema also defines the valid value type
- by default it is a dictionary
- to be able to use a list it is required to set `is_list` to `true`
- the output can be stored as a string
- this is to allow any keys in the dictionary
- set the key `store_as_string` to `true` (see the example below)
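- for illustration (key and label are hypothetical), a raw-json input stored as a string could look like this:

```
{
    "type": "raw-json",
    "key": "custom_attributes",
    "label": "Custom attributes",
    "store_as_string": true
}
```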
@ -385,7 +395,7 @@ How output of the schema could look like on save:
```
### enum
- returns value of single on multiple items from predefined values
- enumeration of values that are predefined in schema
- multiselection can be allowed by setting the key `"multiselection"` to `True` (Default: `False`)
- values are defined under value of key `"enum_items"` as list
- each item in list is simple dictionary where value is label and key is value which will be stored
@ -415,6 +425,8 @@ How output of the schema could look like on save:
- has only single selection mode
- it is possible to define a default value with `default`
- `"work"` is used if default value is not specified
- enum values are not updated on the fly; it is required to save templates and
reset settings to re-cache values
```
{
"key": "host",
@ -449,6 +461,42 @@ How output of the schema could look like on save:
}
```
### apps-enum
- enumeration of available applications and their variants from system settings
- applications without host name are excluded
- can be used only in project settings
- has only `multiselection`
- used only in project anatomy
```
{
"type": "apps-enum",
"key": "applications",
"label": "Applications"
}
```
### tools-enum
- enumeration of available tools and their variants from system settings
- can be used only in project settings
- has only `multiselection`
- used only in project anatomy
```
{
"type": "tools-enum",
"key": "tools_env",
"label": "Tools"
}
```
### task-types-enum
- enumeration of task types from current project
- enum values are not updated on the fly; modifications of task types on a project require save and reset to be propagated to this enum
- has `multiselection` set to `True` by default, but it can be changed to `False` in the schema (see the example below)
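- a minimal entry could look like this (key and label are hypothetical):

```
{
    "type": "task-types-enum",
    "key": "task_types",
    "label": "Task types",
    "multiselection": true
}
```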
### deadline_url-enum
- deadline module specific enumerator using deadline system settings to fill its values
- TODO: move this type to deadline module
## Inputs for setting value using Pure inputs
- these inputs also have required `"key"`
- attribute `"label"` is required in few conditions
@ -594,7 +642,7 @@ How output of the schema could look like on save:
}
```
### path-widget
### path
- input for paths, use `path-input` internally
- has 2 input modifiers `"multiplatform"` and `"multipath"`
- `"multiplatform"` - adds `"windows"`, `"linux"` and `"darwin"` path inputs result is dictionary
@ -685,12 +733,13 @@ How output of the schema could look like on save:
}
```
### splitter
- visual splitter of items (more divider than splitter)
### separator
- legacy name is `splitter` (still usable)
- visual separator of items (more divider than separator)
```
{
"type": "splitter"
"type": "separator"
}
```

View file

@ -60,7 +60,39 @@
"object_type": "text"
}
]
}
},
{
"type": "dict",
"collapsible": true,
"key": "ExtractReview",
"label": "Extract Review",
"children": [
{
"type": "dict",
"collapsible": false,
"key": "jpg_options",
"label": "Extracted jpg Options",
"children": [
{
"type": "schema",
"name": "schema_representation_tags"
}
]
},
{
"type": "dict",
"collapsible": false,
"key": "mov_options",
"label": "Extracted mov Options",
"children": [
{
"type": "schema",
"name": "schema_representation_tags"
}
]
}
]
}
]
},
{

View file

@ -5,6 +5,11 @@
"label": "TVPaint",
"is_file": true,
"children": [
{
"type": "boolean",
"key": "stop_timer_on_application_exit",
"label": "Stop timer on application exit"
},
{
"type": "dict",
"collapsible": true,

View file

@ -47,9 +47,14 @@
},
{
"type": "color",
"label": "Maya Scene:",
"label": "Maya Ascii:",
"key": "mayaAscii"
},
{
"type": "color",
"label": "Maya Scene:",
"key": "mayaScene"
},
{
"type": "color",
"label": "Set Dress:",

View file

@ -166,7 +166,6 @@
}
]
},
{
"type": "collapsible-wrap",
"label": "Model",
@ -329,6 +328,30 @@
}
]
},
{
"type": "collapsible-wrap",
"label": "Rig",
"children": [
{
"type": "schema_template",
"name": "template_publish_plugin",
"template_data": [
{
"key": "ValidateRigContents",
"label": "Validate Rig Contents"
},
{
"key": "ValidateRigJointsHidden",
"label": "Validate Rig Joints Hidden"
},
{
"key": "ValidateRigControllers",
"label": "Validate Rig Controllers"
}
]
}
]
},
{
"type": "schema_template",
"name": "template_publish_plugin",

View file

@ -13,12 +13,8 @@
"label": "Image Loader"
},
{
"key": "LoadMov",
"label": "Movie Loader"
},
{
"key": "LoadSequence",
"label": "Image Sequence Loader"
"key": "LoadClip",
"label": "Clip Loader"
}
]
}

View file

@ -13,13 +13,7 @@
},
{
"type": "list",
"key": "families",
"label": "Families",
"object_type": "text"
},
{
"type": "list",
"key": "representations",
"key": "_representations",
"label": "Representations",
"object_type": "text"
},

View file

@ -95,11 +95,11 @@
},
{
"type": "dict",
"key": "schema_template_exaples",
"key": "template_exaples",
"label": "Schema template examples",
"children": [
{
"type": "schema_template",
"type": "template",
"name": "example_template",
"template_data": {
"host_label": "Application 1",
@ -108,7 +108,7 @@
}
},
{
"type": "schema_template",
"type": "template",
"name": "example_template",
"template_data": {
"host_label": "Application 2",

View file

@ -44,6 +44,7 @@
"type": "dict",
"key": "disk_mapping",
"label": "Disk mapping",
"is_group": true,
"use_label_wrap": false,
"collapsible": false,
"children": [

View file

@ -1,5 +0,0 @@
from .app import show
__all__ = (
"show",
)

View file

@ -0,0 +1,7 @@
from .app import show
from .window import PublisherWindow
__all__ = (
"show",
"PublisherWindow"
)

View file

[Eleven binary image files (sizes 5.7, 1.9, 1.8, 8.1, 12, 4.6, 7.1, 4, 5, 16 and 9.5 KiB); widths, heights and sizes are identical before and after.]
View file

@ -214,7 +214,8 @@ class BaseWidget(QtWidgets.QWidget):
def _paste_value_actions(self, menu):
output = []
# Allow paste of value only if were copied from this UI
mime_data = QtWidgets.QApplication.clipboard().mimeData()
clipboard = QtWidgets.QApplication.clipboard()
mime_data = clipboard.mimeData()
mime_value = mime_data.data("application/copy_settings_value")
# Skip if there is nothing to do
if not mime_value:

View file

@ -508,7 +508,7 @@ class SettingsCategoryWidget(QtWidgets.QWidget):
first_invalid_item = invalid_items[0]
self.scroll_widget.ensureWidgetVisible(first_invalid_item)
if first_invalid_item.isVisible():
first_invalid_item.setFocus(True)
first_invalid_item.setFocus()
return False
def on_saved(self, saved_tab_widget):

Some files were not shown because too many files have changed in this diff.