Merge branch 'develop' into bugfix/OP-3022-Look-publishing-and-srgb-colorspace-in-Maya-2022

Kayla Man 2023-02-01 15:07:47 +08:00
commit 56c66d9b6a
596 changed files with 22564 additions and 8896 deletions


@@ -17,7 +17,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.7
python-version: 3.9
- name: Install Python requirements
run: pip install gitpython semver PyGithub
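These workflow steps install `gitpython`, `semver`, and `PyGithub` for the release automation scripts. As a brief illustration, a minimal sketch of the `semver` API involved (example values are hypothetical, not taken from this commit):

```python
import semver

# parse a full version string with prerelease and build metadata
v = semver.VersionInfo.parse("3.15.0-nightly.1+test")
assert (v.major, v.minor, v.patch) == (3, 15, 0)
assert v.prerelease == "nightly.1" and v.build == "test"

# finalize_version() drops the prerelease/build parts
assert str(v.finalize_version()) == "3.15.0"
```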


@@ -19,7 +19,7 @@ jobs:
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.7
python-version: 3.9
- name: Install Python requirements
run: pip install gitpython semver PyGithub


@@ -18,7 +18,7 @@ jobs:
runs-on: windows-latest
strategy:
matrix:
python-version: [3.7]
python-version: [3.9]
steps:
- name: 🚛 Checkout Code
@@ -45,7 +45,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: [3.7]
python-version: [3.9]
steps:
- name: 🚛 Checkout Code
@@ -70,7 +70,7 @@ jobs:
# runs-on: macos-latest
# strategy:
# matrix:
# python-version: [3.7]
# python-version: [3.9]
# steps:
# - name: 🚛 Checkout Code
@@ -87,4 +87,4 @@ jobs:
# - name: 🔨 Build
# run: |
# ./tools/build.sh
# ./tools/build.sh

.pre-commit-config.yaml (new file, 12 additions)

@@ -0,0 +1,12 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
- id: check-added-large-files
- id: no-commit-to-branch
args: [ '--pattern', '^(?!((release|enhancement|feature|bugfix|documentation|tests|local|chore)\/[a-zA-Z0-9\-]+)$).*' ]
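The `--pattern` argument above inverts the usual `no-commit-to-branch` logic: the hook blocks any branch name matching the pattern, and the negative lookahead makes everything match except `prefix/name` branches with an allowed prefix. A small sketch of the behavior (branch names are made up):

```python
import re

# same pattern as the hook configuration above
pattern = re.compile(
    r"^(?!((release|enhancement|feature|bugfix|documentation"
    r"|tests|local|chore)\/[a-zA-Z0-9\-]+)$).*"
)

assert pattern.match("main")                     # matches -> commit blocked
assert pattern.match("bugfix/broken name!")      # invalid chars -> blocked
assert not pattern.match("bugfix/OP-3022-look")  # allowed prefix -> permitted
```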


@@ -1,5 +1,135 @@
# Changelog
## [3.15.0](https://github.com/ynput/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.14.10...HEAD)
**Deprecated:**
- General: Fill default values of new publish template profiles [\#4245](https://github.com/ynput/OpenPype/pull/4245)
### 📖 Documentation
- documentation: Split tools into separate entries [\#4342](https://github.com/ynput/OpenPype/pull/4342)
- Documentation: Fix harmony docs [\#4301](https://github.com/ynput/OpenPype/pull/4301)
- Remove staging logic set by OpenPype version [\#3979](https://github.com/ynput/OpenPype/pull/3979)
**🆕 New features**
- General: Push to studio library [\#4284](https://github.com/ynput/OpenPype/pull/4284)
- Colorspace Management and Distribution [\#4195](https://github.com/ynput/OpenPype/pull/4195)
- Nuke: refactor to latest publisher workflow [\#4006](https://github.com/ynput/OpenPype/pull/4006)
- Update to Python 3.9 [\#3546](https://github.com/ynput/OpenPype/pull/3546)
**🚀 Enhancements**
- Unreal: Don't use mongo queries in 'ExistingLayoutLoader' [\#4356](https://github.com/ynput/OpenPype/pull/4356)
- General: Loader and Creator plugins can be disabled [\#4310](https://github.com/ynput/OpenPype/pull/4310)
- General: Unbind poetry version [\#4306](https://github.com/ynput/OpenPype/pull/4306)
- General: Enhanced enum def items [\#4295](https://github.com/ynput/OpenPype/pull/4295)
- Git: add pre-commit hooks [\#4289](https://github.com/ynput/OpenPype/pull/4289)
- Tray Publisher: Improve Online family functionality [\#4263](https://github.com/ynput/OpenPype/pull/4263)
- General: Update macOS to PySide6 [\#4255](https://github.com/ynput/OpenPype/pull/4255)
- Build: update to Gazu in toml [\#4208](https://github.com/ynput/OpenPype/pull/4208)
- Global: adding imageio to settings [\#4158](https://github.com/ynput/OpenPype/pull/4158)
- Blender: added project settings for validator no colons in name [\#4149](https://github.com/ynput/OpenPype/pull/4149)
- Dockerfile for Debian Bullseye [\#4108](https://github.com/ynput/OpenPype/pull/4108)
- AfterEffects: publish multiple compositions [\#4092](https://github.com/ynput/OpenPype/pull/4092)
- AfterEffects: make new publisher default [\#4056](https://github.com/ynput/OpenPype/pull/4056)
- Photoshop: make new publisher default [\#4051](https://github.com/ynput/OpenPype/pull/4051)
- Feature/multiverse [\#4046](https://github.com/ynput/OpenPype/pull/4046)
- Tests: add support for deadline for automatic tests [\#3989](https://github.com/ynput/OpenPype/pull/3989)
- Add version to shortcut name [\#3906](https://github.com/ynput/OpenPype/pull/3906)
- TrayPublisher: Removed from experimental tools [\#3667](https://github.com/ynput/OpenPype/pull/3667)
**🐛 Bug fixes**
- change 3.7 to 3.9 in folder name [\#4354](https://github.com/ynput/OpenPype/pull/4354)
- PushToProject: Fix hierarchy of project change [\#4350](https://github.com/ynput/OpenPype/pull/4350)
- Fix photoshop workfile save-as [\#4347](https://github.com/ynput/OpenPype/pull/4347)
- Nuke Input process node sourcing improvements [\#4341](https://github.com/ynput/OpenPype/pull/4341)
- New publisher: Some validation plugin tweaks [\#4339](https://github.com/ynput/OpenPype/pull/4339)
- Harmony: fix unable to change workfile on Mac [\#4334](https://github.com/ynput/OpenPype/pull/4334)
- Global: fixing in-place source publishing for editorial [\#4333](https://github.com/ynput/OpenPype/pull/4333)
- General: Use class constants of QMessageBox [\#4332](https://github.com/ynput/OpenPype/pull/4332)
- TVPaint: Fix plugin for TVPaint 11.7 [\#4328](https://github.com/ynput/OpenPype/pull/4328)
- Extract OTIO review has improved quality [\#4325](https://github.com/ynput/OpenPype/pull/4325)
- Ftrack: fix typos causing bugs in sync [\#4322](https://github.com/ynput/OpenPype/pull/4322)
- General: Python 2 compatibility of instance collector [\#4320](https://github.com/ynput/OpenPype/pull/4320)
- Slack: user groups speedup [\#4318](https://github.com/ynput/OpenPype/pull/4318)
- Maya: Bug - Multiverse extractor executed on plain animation family [\#4315](https://github.com/ynput/OpenPype/pull/4315)
- Fix run\_documentation.ps1 [\#4312](https://github.com/ynput/OpenPype/pull/4312)
- Nuke: new creators fixes [\#4308](https://github.com/ynput/OpenPype/pull/4308)
- General: missing comment on standalone and tray publisher [\#4303](https://github.com/ynput/OpenPype/pull/4303)
- AfterEffects: Fix for audio from mp4 layer [\#4296](https://github.com/ynput/OpenPype/pull/4296)
- General: Update gazu in poetry lock [\#4247](https://github.com/ynput/OpenPype/pull/4247)
- Bug: Fixing version detection and filtering in Igniter [\#3914](https://github.com/ynput/OpenPype/pull/3914)
- Bug: Create missing version dir [\#3903](https://github.com/ynput/OpenPype/pull/3903)
**🔀 Refactored code**
- Remove redundant export\_alembic method. [\#4293](https://github.com/ynput/OpenPype/pull/4293)
- Igniter: Use qtpy modules instead of Qt [\#4237](https://github.com/ynput/OpenPype/pull/4237)
**Merged pull requests:**
- Sort families by alphabetical order in the Create plugin [\#4346](https://github.com/ynput/OpenPype/pull/4346)
- Global: Validate unique subsets [\#4336](https://github.com/ynput/OpenPype/pull/4336)
- Maya: Collect instances preserve handles even if frameStart + frameEnd matches context [\#3437](https://github.com/ynput/OpenPype/pull/3437)
## [3.14.10](https://github.com/ynput/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.14.9...3.14.10)
**🆕 New features**
- Global | Nuke: Creator placeholders in workfile template builder [\#4266](https://github.com/ynput/OpenPype/pull/4266)
- Slack: Added dynamic message [\#4265](https://github.com/ynput/OpenPype/pull/4265)
- Blender: Workfile Loader [\#4234](https://github.com/ynput/OpenPype/pull/4234)
- Unreal: Publishing and Loading for UAssets [\#4198](https://github.com/ynput/OpenPype/pull/4198)
- Publish: register publishes without copying them [\#4157](https://github.com/ynput/OpenPype/pull/4157)
**🚀 Enhancements**
- General: Added install method with docstring to HostBase [\#4298](https://github.com/ynput/OpenPype/pull/4298)
- Traypublisher: simple editorial multiple edl [\#4248](https://github.com/ynput/OpenPype/pull/4248)
- General: Extend 'IPluginPaths' to have more available methods [\#4214](https://github.com/ynput/OpenPype/pull/4214)
- Refactorization of folder coloring [\#4211](https://github.com/ynput/OpenPype/pull/4211)
- Flame - loading multilayer with controlled layer names [\#4204](https://github.com/ynput/OpenPype/pull/4204)
**🐛 Bug fixes**
- Unreal: fix missing `maintained_selection` call [\#4300](https://github.com/ynput/OpenPype/pull/4300)
- Ftrack: Fix receive of host IP on macOS [\#4288](https://github.com/ynput/OpenPype/pull/4288)
- SiteSync: SFTP connection failing when it shouldn't be tested [\#4278](https://github.com/ynput/OpenPype/pull/4278)
- Deadline: fix default value for passing mongo url [\#4275](https://github.com/ynput/OpenPype/pull/4275)
- Scene Manager: Fix variable name [\#4268](https://github.com/ynput/OpenPype/pull/4268)
- Slack: notification fails because of missing published path [\#4264](https://github.com/ynput/OpenPype/pull/4264)
- hiero: creator gui with min max [\#4257](https://github.com/ynput/OpenPype/pull/4257)
- NiceCheckbox: Fix checker positioning in Python 2 [\#4253](https://github.com/ynput/OpenPype/pull/4253)
- Publisher: Fix 'CreatorType' not equal for Python 2 DCCs [\#4249](https://github.com/ynput/OpenPype/pull/4249)
- Deadline: fix dependencies [\#4242](https://github.com/ynput/OpenPype/pull/4242)
- Houdini: hotfix instance data access [\#4236](https://github.com/ynput/OpenPype/pull/4236)
- bugfix/image plane load error [\#4222](https://github.com/ynput/OpenPype/pull/4222)
- Hiero: thumbnail from multilayer exr [\#4209](https://github.com/ynput/OpenPype/pull/4209)
**🔀 Refactored code**
- Resolve: Use qtpy in Resolve [\#4254](https://github.com/ynput/OpenPype/pull/4254)
- Houdini: Use qtpy in Houdini [\#4252](https://github.com/ynput/OpenPype/pull/4252)
- Max: Use qtpy in Max [\#4251](https://github.com/ynput/OpenPype/pull/4251)
- Maya: Use qtpy in Maya [\#4250](https://github.com/ynput/OpenPype/pull/4250)
- Hiero: Use qtpy in Hiero [\#4240](https://github.com/ynput/OpenPype/pull/4240)
- Nuke: Use qtpy in Nuke [\#4239](https://github.com/ynput/OpenPype/pull/4239)
- Flame: Use qtpy in flame [\#4238](https://github.com/ynput/OpenPype/pull/4238)
- General: Legacy io not used in global plugins [\#4134](https://github.com/ynput/OpenPype/pull/4134)
**Merged pull requests:**
- Bump json5 from 1.0.1 to 1.0.2 in /website [\#4292](https://github.com/ynput/OpenPype/pull/4292)
- Maya: Fix validate frame range repair + fix create render with deadline disabled [\#4279](https://github.com/ynput/OpenPype/pull/4279)
## [3.14.9](https://github.com/pypeclub/OpenPype/tree/3.14.9)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.8...3.14.9)


@@ -1,6 +1,6 @@
# Build Pype docker image
FROM ubuntu:focal AS builder
ARG OPENPYPE_PYTHON_VERSION=3.7.12
ARG OPENPYPE_PYTHON_VERSION=3.9.12
ARG BUILD_DATE
ARG VERSION


@@ -1,6 +1,6 @@
# Build Pype docker image
FROM centos:7 AS builder
ARG OPENPYPE_PYTHON_VERSION=3.7.12
ARG OPENPYPE_PYTHON_VERSION=3.9.12
LABEL org.opencontainers.image.name="pypeclub/openpype"
LABEL org.opencontainers.image.title="OpenPype Docker Image"
@@ -96,11 +96,11 @@ RUN source $HOME/.bashrc \
RUN source $HOME/.bashrc \
&& bash ./tools/build.sh
RUN cp /usr/lib64/libffi* ./build/exe.linux-x86_64-3.7/lib \
&& cp /usr/lib64/libssl* ./build/exe.linux-x86_64-3.7/lib \
&& cp /usr/lib64/libcrypto* ./build/exe.linux-x86_64-3.7/lib \
&& cp /root/.pyenv/versions/${OPENPYPE_PYTHON_VERSION}/lib/libpython* ./build/exe.linux-x86_64-3.7/lib \
&& cp /usr/lib64/libxcb* ./build/exe.linux-x86_64-3.7/vendor/python/PySide2/Qt/lib
RUN cp /usr/lib64/libffi* ./build/exe.linux-x86_64-3.9/lib \
&& cp /usr/lib64/libssl* ./build/exe.linux-x86_64-3.9/lib \
&& cp /usr/lib64/libcrypto* ./build/exe.linux-x86_64-3.9/lib \
&& cp /root/.pyenv/versions/${OPENPYPE_PYTHON_VERSION}/lib/libpython* ./build/exe.linux-x86_64-3.9/lib \
&& cp /usr/lib64/libxcb* ./build/exe.linux-x86_64-3.9/vendor/python/PySide2/Qt/lib
RUN cd /opt/openpype \
&& rm -rf ./vendor/bin

Dockerfile.debian (new file, 81 additions)

@@ -0,0 +1,81 @@
# Build Pype docker image
FROM debian:bullseye AS builder
ARG OPENPYPE_PYTHON_VERSION=3.9.12
ARG BUILD_DATE
ARG VERSION
LABEL maintainer="info@openpype.io"
LABEL description="Docker Image to build and run OpenPype under Ubuntu 20.04"
LABEL org.opencontainers.image.name="pypeclub/openpype"
LABEL org.opencontainers.image.title="OpenPype Docker Image"
LABEL org.opencontainers.image.url="https://openpype.io/"
LABEL org.opencontainers.image.source="https://github.com/pypeclub/OpenPype"
LABEL org.opencontainers.image.documentation="https://openpype.io/docs/system_introduction"
LABEL org.opencontainers.image.created=$BUILD_DATE
LABEL org.opencontainers.image.version=$VERSION
USER root
ARG DEBIAN_FRONTEND=noninteractive
# update base
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
ca-certificates \
bash \
git \
cmake \
make \
curl \
wget \
build-essential \
libssl-dev \
zlib1g-dev \
libbz2-dev \
libreadline-dev \
libsqlite3-dev \
llvm \
libncursesw5-dev \
xz-utils \
tk-dev \
libxml2-dev \
libxmlsec1-dev \
libffi-dev \
liblzma-dev \
patchelf
SHELL ["/bin/bash", "-c"]
RUN mkdir /opt/openpype
# download and install pyenv
RUN curl https://pyenv.run | bash \
&& echo 'export PATH="$HOME/.pyenv/bin:$PATH"'>> $HOME/init_pyenv.sh \
&& echo 'eval "$(pyenv init -)"' >> $HOME/init_pyenv.sh \
&& echo 'eval "$(pyenv virtualenv-init -)"' >> $HOME/init_pyenv.sh \
&& echo 'eval "$(pyenv init --path)"' >> $HOME/init_pyenv.sh
# install python with pyenv
RUN source $HOME/init_pyenv.sh \
&& pyenv install ${OPENPYPE_PYTHON_VERSION}
COPY . /opt/openpype/
RUN chmod +x /opt/openpype/tools/create_env.sh && chmod +x /opt/openpype/tools/build.sh
WORKDIR /opt/openpype
# set local python version
RUN cd /opt/openpype \
&& source $HOME/init_pyenv.sh \
&& pyenv local ${OPENPYPE_PYTHON_VERSION}
# fetch third party tools/libraries
RUN source $HOME/init_pyenv.sh \
&& ./tools/create_env.sh \
&& ./tools/fetch_thirdparty_libs.sh
# build openpype
RUN source $HOME/init_pyenv.sh \
&& bash ./tools/build.sh


@@ -1,5 +1,135 @@
# Changelog
## [3.15.0](https://github.com/ynput/OpenPype/tree/3.15.0)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.14.10...3.15.0)
**Deprecated:**
- General: Fill default values of new publish template profiles [\#4245](https://github.com/ynput/OpenPype/pull/4245)
### 📖 Documentation
- documentation: Split tools into separate entries [\#4342](https://github.com/ynput/OpenPype/pull/4342)
- Documentation: Fix harmony docs [\#4301](https://github.com/ynput/OpenPype/pull/4301)
- Remove staging logic set by OpenPype version [\#3979](https://github.com/ynput/OpenPype/pull/3979)
**🆕 New features**
- General: Push to studio library [\#4284](https://github.com/ynput/OpenPype/pull/4284)
- Colorspace Management and Distribution [\#4195](https://github.com/ynput/OpenPype/pull/4195)
- Nuke: refactor to latest publisher workflow [\#4006](https://github.com/ynput/OpenPype/pull/4006)
- Update to Python 3.9 [\#3546](https://github.com/ynput/OpenPype/pull/3546)
**🚀 Enhancements**
- Unreal: Don't use mongo queries in 'ExistingLayoutLoader' [\#4356](https://github.com/ynput/OpenPype/pull/4356)
- General: Loader and Creator plugins can be disabled [\#4310](https://github.com/ynput/OpenPype/pull/4310)
- General: Unbind poetry version [\#4306](https://github.com/ynput/OpenPype/pull/4306)
- General: Enhanced enum def items [\#4295](https://github.com/ynput/OpenPype/pull/4295)
- Git: add pre-commit hooks [\#4289](https://github.com/ynput/OpenPype/pull/4289)
- Tray Publisher: Improve Online family functionality [\#4263](https://github.com/ynput/OpenPype/pull/4263)
- General: Update macOS to PySide6 [\#4255](https://github.com/ynput/OpenPype/pull/4255)
- Build: update to Gazu in toml [\#4208](https://github.com/ynput/OpenPype/pull/4208)
- Global: adding imageio to settings [\#4158](https://github.com/ynput/OpenPype/pull/4158)
- Blender: added project settings for validator no colons in name [\#4149](https://github.com/ynput/OpenPype/pull/4149)
- Dockerfile for Debian Bullseye [\#4108](https://github.com/ynput/OpenPype/pull/4108)
- AfterEffects: publish multiple compositions [\#4092](https://github.com/ynput/OpenPype/pull/4092)
- AfterEffects: make new publisher default [\#4056](https://github.com/ynput/OpenPype/pull/4056)
- Photoshop: make new publisher default [\#4051](https://github.com/ynput/OpenPype/pull/4051)
- Feature/multiverse [\#4046](https://github.com/ynput/OpenPype/pull/4046)
- Tests: add support for deadline for automatic tests [\#3989](https://github.com/ynput/OpenPype/pull/3989)
- Add version to shortcut name [\#3906](https://github.com/ynput/OpenPype/pull/3906)
- TrayPublisher: Removed from experimental tools [\#3667](https://github.com/ynput/OpenPype/pull/3667)
**🐛 Bug fixes**
- change 3.7 to 3.9 in folder name [\#4354](https://github.com/ynput/OpenPype/pull/4354)
- PushToProject: Fix hierarchy of project change [\#4350](https://github.com/ynput/OpenPype/pull/4350)
- Fix photoshop workfile save-as [\#4347](https://github.com/ynput/OpenPype/pull/4347)
- Nuke Input process node sourcing improvements [\#4341](https://github.com/ynput/OpenPype/pull/4341)
- New publisher: Some validation plugin tweaks [\#4339](https://github.com/ynput/OpenPype/pull/4339)
- Harmony: fix unable to change workfile on Mac [\#4334](https://github.com/ynput/OpenPype/pull/4334)
- Global: fixing in-place source publishing for editorial [\#4333](https://github.com/ynput/OpenPype/pull/4333)
- General: Use class constants of QMessageBox [\#4332](https://github.com/ynput/OpenPype/pull/4332)
- TVPaint: Fix plugin for TVPaint 11.7 [\#4328](https://github.com/ynput/OpenPype/pull/4328)
- Extract OTIO review has improved quality [\#4325](https://github.com/ynput/OpenPype/pull/4325)
- Ftrack: fix typos causing bugs in sync [\#4322](https://github.com/ynput/OpenPype/pull/4322)
- General: Python 2 compatibility of instance collector [\#4320](https://github.com/ynput/OpenPype/pull/4320)
- Slack: user groups speedup [\#4318](https://github.com/ynput/OpenPype/pull/4318)
- Maya: Bug - Multiverse extractor executed on plain animation family [\#4315](https://github.com/ynput/OpenPype/pull/4315)
- Fix run\_documentation.ps1 [\#4312](https://github.com/ynput/OpenPype/pull/4312)
- Nuke: new creators fixes [\#4308](https://github.com/ynput/OpenPype/pull/4308)
- General: missing comment on standalone and tray publisher [\#4303](https://github.com/ynput/OpenPype/pull/4303)
- AfterEffects: Fix for audio from mp4 layer [\#4296](https://github.com/ynput/OpenPype/pull/4296)
- General: Update gazu in poetry lock [\#4247](https://github.com/ynput/OpenPype/pull/4247)
- Bug: Fixing version detection and filtering in Igniter [\#3914](https://github.com/ynput/OpenPype/pull/3914)
- Bug: Create missing version dir [\#3903](https://github.com/ynput/OpenPype/pull/3903)
**🔀 Refactored code**
- Remove redundant export\_alembic method. [\#4293](https://github.com/ynput/OpenPype/pull/4293)
- Igniter: Use qtpy modules instead of Qt [\#4237](https://github.com/ynput/OpenPype/pull/4237)
**Merged pull requests:**
- Sort families by alphabetical order in the Create plugin [\#4346](https://github.com/ynput/OpenPype/pull/4346)
- Global: Validate unique subsets [\#4336](https://github.com/ynput/OpenPype/pull/4336)
- Maya: Collect instances preserve handles even if frameStart + frameEnd matches context [\#3437](https://github.com/ynput/OpenPype/pull/3437)
## [3.14.10](https://github.com/ynput/OpenPype/tree/3.14.10)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.14.9...3.14.10)
**🆕 New features**
- Global | Nuke: Creator placeholders in workfile template builder [\#4266](https://github.com/ynput/OpenPype/pull/4266)
- Slack: Added dynamic message [\#4265](https://github.com/ynput/OpenPype/pull/4265)
- Blender: Workfile Loader [\#4234](https://github.com/ynput/OpenPype/pull/4234)
- Unreal: Publishing and Loading for UAssets [\#4198](https://github.com/ynput/OpenPype/pull/4198)
- Publish: register publishes without copying them [\#4157](https://github.com/ynput/OpenPype/pull/4157)
**🚀 Enhancements**
- General: Added install method with docstring to HostBase [\#4298](https://github.com/ynput/OpenPype/pull/4298)
- Traypublisher: simple editorial multiple edl [\#4248](https://github.com/ynput/OpenPype/pull/4248)
- General: Extend 'IPluginPaths' to have more available methods [\#4214](https://github.com/ynput/OpenPype/pull/4214)
- Refactorization of folder coloring [\#4211](https://github.com/ynput/OpenPype/pull/4211)
- Flame - loading multilayer with controlled layer names [\#4204](https://github.com/ynput/OpenPype/pull/4204)
**🐛 Bug fixes**
- Unreal: fix missing `maintained_selection` call [\#4300](https://github.com/ynput/OpenPype/pull/4300)
- Ftrack: Fix receive of host IP on macOS [\#4288](https://github.com/ynput/OpenPype/pull/4288)
- SiteSync: SFTP connection failing when it shouldn't be tested [\#4278](https://github.com/ynput/OpenPype/pull/4278)
- Deadline: fix default value for passing mongo url [\#4275](https://github.com/ynput/OpenPype/pull/4275)
- Scene Manager: Fix variable name [\#4268](https://github.com/ynput/OpenPype/pull/4268)
- Slack: notification fails because of missing published path [\#4264](https://github.com/ynput/OpenPype/pull/4264)
- hiero: creator gui with min max [\#4257](https://github.com/ynput/OpenPype/pull/4257)
- NiceCheckbox: Fix checker positioning in Python 2 [\#4253](https://github.com/ynput/OpenPype/pull/4253)
- Publisher: Fix 'CreatorType' not equal for Python 2 DCCs [\#4249](https://github.com/ynput/OpenPype/pull/4249)
- Deadline: fix dependencies [\#4242](https://github.com/ynput/OpenPype/pull/4242)
- Houdini: hotfix instance data access [\#4236](https://github.com/ynput/OpenPype/pull/4236)
- bugfix/image plane load error [\#4222](https://github.com/ynput/OpenPype/pull/4222)
- Hiero: thumbnail from multilayer exr [\#4209](https://github.com/ynput/OpenPype/pull/4209)
**🔀 Refactored code**
- Resolve: Use qtpy in Resolve [\#4254](https://github.com/ynput/OpenPype/pull/4254)
- Houdini: Use qtpy in Houdini [\#4252](https://github.com/ynput/OpenPype/pull/4252)
- Max: Use qtpy in Max [\#4251](https://github.com/ynput/OpenPype/pull/4251)
- Maya: Use qtpy in Maya [\#4250](https://github.com/ynput/OpenPype/pull/4250)
- Hiero: Use qtpy in Hiero [\#4240](https://github.com/ynput/OpenPype/pull/4240)
- Nuke: Use qtpy in Nuke [\#4239](https://github.com/ynput/OpenPype/pull/4239)
- Flame: Use qtpy in flame [\#4238](https://github.com/ynput/OpenPype/pull/4238)
- General: Legacy io not used in global plugins [\#4134](https://github.com/ynput/OpenPype/pull/4134)
**Merged pull requests:**
- Bump json5 from 1.0.1 to 1.0.2 in /website [\#4292](https://github.com/ynput/OpenPype/pull/4292)
- Maya: Fix validate frame range repair + fix create render with deadline disabled [\#4279](https://github.com/ynput/OpenPype/pull/4279)
## [3.14.9](https://github.com/pypeclub/OpenPype/tree/3.14.9)


@@ -5,10 +5,10 @@
OpenPype
====
[![documentation](https://github.com/pypeclub/pype/actions/workflows/documentation.yml/badge.svg)](https://github.com/pypeclub/pype/actions/workflows/documentation.yml) ![GitHub VFX Platform](https://img.shields.io/badge/vfx%20platform-2021-lightgrey?labelColor=303846)
[![documentation](https://github.com/pypeclub/pype/actions/workflows/documentation.yml/badge.svg)](https://github.com/pypeclub/pype/actions/workflows/documentation.yml) ![GitHub VFX Platform](https://img.shields.io/badge/vfx%20platform-2022-lightgrey?labelColor=303846)
Introduction
------------
@@ -31,7 +31,7 @@ The main things you will need to run and build OpenPype are:
- **Terminal** in your OS
- PowerShell 5.0+ (Windows)
- Bash (Linux)
- [**Python 3.7.8**](#python) or higher
- [**Python 3.9.6**](#python) or higher
- [**MongoDB**](#database) (needed only for local development)
@@ -50,13 +50,14 @@ For more details on requirements visit [requirements documentation](https://open
Building OpenPype
-------------
To build OpenPype you currently need [Python 3.7](https://www.python.org/downloads/) as we are following
To build OpenPype you currently need [Python 3.9](https://www.python.org/downloads/) as we are following
[vfx platform](https://vfxplatform.com). Because some Linux distros already come with a newer Python version,
you need to install the **3.7** version and make use of it. You can use [pyenv](https://github.com/pyenv/pyenv) for this on Linux.
you need to install the **3.9** version and make use of it. You can use [pyenv](https://github.com/pyenv/pyenv) for this on Linux.
**Note**: We do not support 3.9.0 because of [this bug](https://github.com/python/cpython/pull/22670). Please use a higher 3.9.x version.
### Windows
You will need [Python 3.7](https://www.python.org/downloads/) and [git](https://git-scm.com/downloads).
You will need [Python >= 3.9.1](https://www.python.org/downloads/) and [git](https://git-scm.com/downloads).
More tools might be needed for installing dependencies (for example for **OpenTimelineIO**) - mostly
development tools like [CMake](https://cmake.org/) and [Visual Studio](https://visualstudio.microsoft.com/cs/downloads/)
@@ -82,7 +83,7 @@ OpenPype is build using [CX_Freeze](https://cx-freeze.readthedocs.io/en/latest)
### macOS
You will need [Python 3.7](https://www.python.org/downloads/) and [git](https://git-scm.com/downloads). You'll also need other tools to build
You will need [Python >= 3.9](https://www.python.org/downloads/) and [git](https://git-scm.com/downloads). You'll also need other tools to build
some OpenPype dependencies like [CMake](https://cmake.org/) and **XCode Command Line Tools** (or some other build system).
Easy way of installing everything necessary is to use [Homebrew](https://brew.sh):
@@ -106,19 +107,19 @@ exec "$SHELL"
PATH=$(pyenv root)/shims:$PATH
```
4) Pull in required Python version 3.7.x
4) Pull in required Python version 3.9.x
```sh
# install Python build dependences
brew install openssl readline sqlite3 xz zlib
# replace with up-to-date 3.7.x version
pyenv install 3.7.9
# replace with up-to-date 3.9.x version
pyenv install 3.9.6
```
5) Set local Python version
```sh
# switch to OpenPype source directory
pyenv local 3.7.9
pyenv local 3.9.6
```
#### To build OpenPype:
@@ -145,7 +146,7 @@ sudo ./tools/docker_build.sh centos7
If all is successful, you'll find built OpenPype in `./build/` folder.
#### Manual build
You will need [Python 3.7](https://www.python.org/downloads/) and [git](https://git-scm.com/downloads). You'll also need [curl](https://curl.se) on systems that don't have it preinstalled.
You will need [Python >= 3.9](https://www.python.org/downloads/) and [git](https://git-scm.com/downloads). You'll also need [curl](https://curl.se) on systems that don't have it preinstalled.
To build Python related stuff, you need Python header files installed (`python3-dev` on Ubuntu for example).
@@ -222,14 +223,14 @@ eval "$(pyenv virtualenv-init -)"
# reload shell
exec $SHELL
# install Python 3.7.9
pyenv install -v 3.7.9
# install Python 3.9.x
pyenv install -v 3.9.6
# change path to OpenPype 3
cd /path/to/openpype-3
# set local python version
pyenv local 3.7.9
pyenv local 3.9.6
```
</details>
@@ -345,4 +346,4 @@ Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/d
<!-- ALL-CONTRIBUTORS-LIST:END -->
This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!
This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind welcome!


@@ -1,4 +1,3 @@
import enlighten
import os
import re
import urllib
@@ -252,6 +251,11 @@ class RemoteFileHandler:
if key.startswith('download_warning'):
return value
# handle antivirus warning for big zips
found = re.search("(confirm=)([^&.+])", response.text)
if found:
return found.groups()[1]
return None
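For context, the added branch searches the response body for Google Drive's antivirus-warning `confirm` token. A short sketch of how this exact pattern behaves (the response body below is hypothetical):

```python
import re

body = "href=/uc?export=download&confirm=t0ken&id=abc"  # made-up response text
found = re.search("(confirm=)([^&.+])", body)
if found:
    # note: the character class matches exactly one character, so only
    # the first character after "confirm=" ends up in the capture group
    assert found.groups()[1] == "t"
```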
@staticmethod
@@ -259,15 +263,9 @@
response_gen, destination,
):
with open(destination, "wb") as f:
pbar = enlighten.Counter(
total=None, desc="Save content", units="%", color="green")
progress = 0
for chunk in response_gen:
if chunk: # filter out keep-alive new chunks
f.write(chunk)
progress += len(chunk)
pbar.close()
@staticmethod
def _quota_exceeded(first_chunk):


@@ -24,7 +24,7 @@ def open_dialog():
if os.getenv("OPENPYPE_HEADLESS_MODE"):
print("!!! Can't open dialog in headless mode. Exiting.")
sys.exit(1)
from Qt import QtWidgets, QtCore
from qtpy import QtWidgets, QtCore
from .install_dialog import InstallDialog
scale_attr = getattr(QtCore.Qt, "AA_EnableHighDpiScaling", None)
@@ -47,7 +47,7 @@ def open_update_window(openpype_version):
if os.getenv("OPENPYPE_HEADLESS_MODE"):
print("!!! Can't open dialog in headless mode. Exiting.")
sys.exit(1)
from Qt import QtWidgets, QtCore
from qtpy import QtWidgets, QtCore
from .update_window import UpdateWindow
scale_attr = getattr(QtCore.Qt, "AA_EnableHighDpiScaling", None)
@@ -71,7 +71,7 @@ def show_message_dialog(title, message):
if os.getenv("OPENPYPE_HEADLESS_MODE"):
print("!!! Can't open dialog in headless mode. Exiting.")
sys.exit(1)
from Qt import QtWidgets, QtCore
from qtpy import QtWidgets, QtCore
from .message_dialog import MessageDialog
scale_attr = getattr(QtCore.Qt, "AA_EnableHighDpiScaling", None)


@@ -2,8 +2,7 @@
"""Open install dialog."""
import sys
from Qt import QtWidgets # noqa
from Qt.QtCore import Signal # noqa
from qtpy import QtWidgets
from .install_dialog import InstallDialog


@@ -57,11 +57,9 @@ class OpenPypeVersion(semver.VersionInfo):
"""Class for storing information about OpenPype version.
Attributes:
staging (bool): True if it is staging version
path (str): path to OpenPype
"""
staging = False
path = None
# this should match any string complying with https://semver.org/
_VERSION_REGEX = re.compile(r"(?P<major>0|[1-9]\d*)\.(?P<minor>0|[1-9]\d*)\.(?P<patch>0|[1-9]\d*)(?:-(?P<prerelease>[a-zA-Z\d\-.]*))?(?:\+(?P<buildmetadata>[a-zA-Z\d\-.]*))?") # noqa: E501
@@ -83,12 +81,10 @@ class OpenPypeVersion(semver.VersionInfo):
build (str): an optional build string
version (str): if set, it will be parsed and will override
parameters like `major`, `minor` and so on.
staging (bool): set to True if version is staging.
path (Path): path to version location.
"""
self.path = None
self.staging = False
if "version" in kwargs.keys():
if not kwargs.get("version"):
@@ -113,29 +109,8 @@ class OpenPypeVersion(semver.VersionInfo):
if "path" in kwargs.keys():
kwargs.pop("path")
if kwargs.get("staging"):
self.staging = kwargs.get("staging", False)
kwargs.pop("staging")
if "staging" in kwargs.keys():
kwargs.pop("staging")
if self.staging:
if kwargs.get("build"):
if "staging" not in kwargs.get("build"):
kwargs["build"] = f"{kwargs.get('build')}-staging"
else:
kwargs["build"] = "staging"
if kwargs.get("build") and "staging" in kwargs.get("build", ""):
self.staging = True
super().__init__(*args, **kwargs)
def __eq__(self, other):
result = super().__eq__(other)
return bool(result and self.staging == other.staging)
def __repr__(self):
return f"<{self.__class__.__name__}: {str(self)} - path={self.path}>"
@@ -150,43 +125,11 @@ class OpenPypeVersion(semver.VersionInfo):
return True
if self.finalize_version() == other.finalize_version() and \
self.prerelease == other.prerelease and \
self.is_staging() and not other.is_staging():
self.prerelease == other.prerelease:
return True
return result
def set_staging(self) -> OpenPypeVersion:
"""Set version as staging and return it.
This will preserve current one.
Returns:
OpenPypeVersion: Set as staging.
"""
if self.staging:
return self
return self.replace(parts={"build": f"{self.build}-staging"})
def set_production(self) -> OpenPypeVersion:
"""Set version as production and return it.
This will preserve current one.
Returns:
OpenPypeVersion: Set as production.
"""
if not self.staging:
return self
return self.replace(
parts={"build": self.build.replace("-staging", "")})
def is_staging(self) -> bool:
"""Test if current version is staging one."""
return self.staging
def get_main_version(self) -> str:
"""Return main version component.
@@ -218,21 +161,8 @@ class OpenPypeVersion(semver.VersionInfo):
if not m:
return None
version = OpenPypeVersion.parse(string[m.start():m.end()])
if "staging" in string[m.start():m.end()]:
version.staging = True
return version
@classmethod
def parse(cls, version):
"""Extends parse to handle ta handle staging variant."""
v = super().parse(version)
openpype_version = cls(major=v.major, minor=v.minor,
patch=v.patch, prerelease=v.prerelease,
build=v.build)
if v.build and "staging" in v.build:
openpype_version.staging = True
return openpype_version
def __hash__(self):
return hash(self.path) if self.path else hash(str(self))
@@ -382,80 +312,28 @@ class OpenPypeVersion(semver.VersionInfo):
return False
@classmethod
def get_local_versions(
cls, production: bool = None,
staging: bool = None
) -> List:
def get_local_versions(cls) -> List:
"""Get all versions available on this machine.
Arguments give ability to specify if filtering is needed. If both
arguments are set to None all found versions are returned.
Args:
production (bool): Return production versions.
staging (bool): Return staging versions.
Returns:
list: of compatible versions available on the machine.
"""
# Return all local versions if arguments are set to None
if production is None and staging is None:
production = True
staging = True
elif production is None and not staging:
production = True
elif staging is None and not production:
staging = True
# Just return empty output if both are disabled
if not production and not staging:
return []
# DEPRECATED: backwards compatible way to look for versions in root
dir_to_search = Path(user_data_dir("openpype", "pypeclub"))
versions = OpenPypeVersion.get_versions_from_directory(dir_to_search)
filtered_versions = []
for version in versions:
if version.is_staging():
if staging:
filtered_versions.append(version)
elif production:
filtered_versions.append(version)
return list(sorted(set(filtered_versions)))
return list(sorted(set(versions)))
@classmethod
def get_remote_versions(
cls, production: bool = None,
staging: bool = None
) -> List:
def get_remote_versions(cls) -> List:
"""Get all versions available in OpenPype Path.
Arguments give ability to specify if filtering is needed. If both
arguments are set to None all found versions are returned.
Args:
production (bool): Return production versions.
staging (bool): Return staging versions.
Returns:
list of OpenPypeVersions: Versions found in OpenPype path.
"""
# Return all local versions if arguments are set to None
if production is None and staging is None:
production = True
staging = True
elif production is None and not staging:
production = True
elif staging is None and not production:
staging = True
# Just return empty output if both are disabled
if not production and not staging:
return []
dir_to_search = None
if cls.openpype_path_is_accessible():
@@ -476,14 +354,7 @@ class OpenPypeVersion(semver.VersionInfo):
versions = cls.get_versions_from_directory(dir_to_search)
filtered_versions = []
for version in versions:
if version.is_staging():
if staging:
filtered_versions.append(version)
elif production:
filtered_versions.append(version)
return list(sorted(set(filtered_versions)))
return list(sorted(set(versions)))
@staticmethod
def get_versions_from_directory(
@@ -562,7 +433,6 @@ class OpenPypeVersion(semver.VersionInfo):
@staticmethod
def get_latest_version(
staging: bool = False,
local: bool = None,
remote: bool = None
) -> Union[OpenPypeVersion, None]:
@@ -571,7 +441,6 @@
The version does not contain information about path and source.
This is utility version to get the latest version from all found.
Build version is not listed if staging is enabled.
Arguments 'local' and 'remote' define if local and remote repository
versions are used. All versions are used if both are not set (or set
@@ -580,7 +449,6 @@
'False' in that case only build version can be used.
Args:
staging (bool, optional): List staging versions if True.
local (bool, optional): List local versions if True.
remote (bool, optional): List remote versions if True.
@@ -599,22 +467,9 @@
remote = True
installed_version = OpenPypeVersion.get_installed_version()
local_versions = []
remote_versions = []
if local:
local_versions = OpenPypeVersion.get_local_versions(
staging=staging
)
if remote:
remote_versions = OpenPypeVersion.get_remote_versions(
staging=staging
)
all_versions = local_versions + remote_versions
if not staging:
all_versions.append(installed_version)
if not all_versions:
return None
local_versions = OpenPypeVersion.get_local_versions() if local else []
remote_versions = OpenPypeVersion.get_remote_versions() if remote else [] # noqa: E501
all_versions = local_versions + remote_versions + [installed_version]
all_versions.sort()
return all_versions[-1]
@@ -705,7 +560,7 @@ class BootstrapRepos:
"""Get path for specific version in list of OpenPype versions.
Args:
version (str): Version string to look for (1.2.4+staging)
version (str): Version string to look for (1.2.4-nightly.1+test)
version_list (list of OpenPypeVersion): list of version to search.
Returns:
@@ -807,6 +662,8 @@
"""
version = OpenPypeVersion.version_in_str(zip_file.name)
destination_dir = self.data_dir / f"{version.major}.{version.minor}"
if not destination_dir.exists():
destination_dir.mkdir(parents=True)
destination = destination_dir / zip_file.name
if destination.exists():
@@ -1131,14 +988,12 @@
@staticmethod
def find_openpype_version(
version: Union[str, OpenPypeVersion],
staging: bool
version: Union[str, OpenPypeVersion]
) -> Union[OpenPypeVersion, None]:
"""Find location of specified OpenPype version.
Args:
version (Union[str, OpenPypeVersion): Version to find.
staging (bool): Filter staging versions.
Returns:
requested OpenPypeVersion.
@@ -1151,9 +1006,7 @@
if installed_version == version:
return installed_version
local_versions = OpenPypeVersion.get_local_versions(
staging=staging, production=not staging
)
local_versions = OpenPypeVersion.get_local_versions()
zip_version = None
for local_version in local_versions:
if local_version == version:
@@ -1165,37 +1018,25 @@
if zip_version is not None:
return zip_version
remote_versions = OpenPypeVersion.get_remote_versions(
staging=staging, production=not staging
)
for remote_version in remote_versions:
if remote_version == version:
return remote_version
return None
remote_versions = OpenPypeVersion.get_remote_versions()
return next(
(
remote_version for remote_version in remote_versions
if remote_version == version
), None)
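The replacement above trades an explicit loop for `next()` with a default, which returns the first matching item or the fallback once the generator is exhausted. The idiom in isolation (values hypothetical):

```python
versions = ["3.14.9", "3.14.10", "3.15.0"]

# first match wins, just like the removed for-loop with early return
assert next((v for v in versions if v == "3.14.10"), None) == "3.14.10"

# no match -> the default is returned instead of raising StopIteration
assert next((v for v in versions if v == "9.9.9"), None) is None
```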
@staticmethod
def find_latest_openpype_version(
staging: bool
) -> Union[OpenPypeVersion, None]:
def find_latest_openpype_version() -> Union[OpenPypeVersion, None]:
"""Find the latest available OpenPype version in all location.
Args:
staging (bool): True to look for staging versions.
Returns:
Latest OpenPype version on None if nothing was found.
"""
installed_version = OpenPypeVersion.get_installed_version()
local_versions = OpenPypeVersion.get_local_versions(
staging=staging
)
remote_versions = OpenPypeVersion.get_remote_versions(
staging=staging
)
all_versions = local_versions + remote_versions
if not staging:
all_versions.append(installed_version)
local_versions = OpenPypeVersion.get_local_versions()
remote_versions = OpenPypeVersion.get_remote_versions()
all_versions = local_versions + remote_versions + [installed_version]
if not all_versions:
return None
@@ -1215,7 +1056,6 @@
def find_openpype(
self,
openpype_path: Union[Path, str] = None,
staging: bool = False,
include_zips: bool = False
) -> Union[List[OpenPypeVersion], None]:
"""Get ordered dict of detected OpenPype version.
@@ -1229,8 +1069,6 @@
Args:
openpype_path (Path or str, optional): Try to find OpenPype on
the given path or url.
staging (bool, optional): Filter only staging version, skip them
otherwise.
include_zips (bool, optional): If set True it will try to find
OpenPype in zip files in given directory.
@@ -1278,7 +1116,7 @@
for dir_to_search in dirs_to_search:
try:
openpype_versions += self.get_openpype_versions(
dir_to_search, staging)
dir_to_search)
except ValueError:
# location is invalid, skip it
pass
@@ -1643,15 +1481,11 @@
return False
return True
def get_openpype_versions(
self,
openpype_dir: Path,
staging: bool = False) -> list:
def get_openpype_versions(self, openpype_dir: Path) -> list:
"""Get all detected OpenPype versions in directory.
Args:
openpype_dir (Path): Directory to scan.
staging (bool, optional): Find staging versions if True.
Returns:
list of OpenPypeVersion
@@ -1669,8 +1503,7 @@
for item in openpype_dir.iterdir():
# if the item is directory with major.minor version, dive deeper
if item.is_dir() and re.match(r"^\d+\.\d+$", item.name):
_versions = self.get_openpype_versions(
item, staging=staging)
_versions = self.get_openpype_versions(item)
if _versions:
openpype_versions += _versions
@@ -1693,11 +1526,7 @@
continue
detected_version.path = item
if staging and detected_version.is_staging():
openpype_versions.append(detected_version)
if not staging and not detected_version.is_staging():
openpype_versions.append(detected_version)
openpype_versions.append(detected_version)
return sorted(openpype_versions)


@@ -5,9 +5,7 @@ import sys
import re
import collections
from Qt import QtCore, QtGui, QtWidgets # noqa
from Qt.QtGui import QValidator # noqa
from Qt.QtCore import QTimer # noqa
from qtpy import QtCore, QtGui, QtWidgets
from .install_thread import InstallThread
from .tools import (


@@ -4,7 +4,7 @@ import os
import sys
from pathlib import Path
from Qt.QtCore import QThread, Signal, QObject # noqa
from qtpy import QtCore
from .bootstrap_repos import (
BootstrapRepos,
@@ -17,7 +17,7 @@ from .bootstrap_repos import (
from .tools import validate_mongo_connection
class InstallThread(QThread):
class InstallThread(QtCore.QThread):
"""Install Worker thread.
This class takes care of finding OpenPype version on user entered path
@@ -28,15 +28,14 @@ class InstallThread(QThread):
user data dir.
"""
progress = Signal(int)
message = Signal((str, bool))
progress = QtCore.Signal(int)
message = QtCore.Signal((str, bool))
def __init__(self, parent=None,):
self._mongo = None
self._path = None
self._result = None
QThread.__init__(self, parent)
super().__init__(parent)
def result(self):
"""Result of finished installation."""
@@ -62,143 +61,117 @@ class InstallThread(QThread):
progress_callback=self.set_progress, message=self.message)
local_version = OpenPypeVersion.get_installed_version_str()
# if user did enter nothing, we install OpenPype from local version.
# zip content of `repos`, copy it to user data dir and append
# version to it.
if not self._path:
# user did not entered url
if not self._mongo:
# it not set in environment
if not os.getenv("OPENPYPE_MONGO"):
# try to get it from settings registry
try:
self._mongo = bs.secure_registry.get_item(
"openPypeMongo")
except ValueError:
self.message.emit(
"!!! We need MongoDB URL to proceed.", True)
self._set_result(-1)
return
else:
self._mongo = os.getenv("OPENPYPE_MONGO")
else:
self.message.emit("Saving mongo connection string ...", False)
bs.secure_registry.set_item("openPypeMongo", self._mongo)
os.environ["OPENPYPE_MONGO"] = self._mongo
self.message.emit(
f"Detecting installed OpenPype versions in {bs.data_dir}",
False)
detected = bs.find_openpype(include_zips=True)
if detected:
if not OpenPypeVersion.get_installed_version().is_compatible(
detected[-1]):
self.message.emit((
f"Latest detected version {detected[-1]} "
"is not compatible with the currently running "
f"{local_version}"
), True)
self.message.emit((
"Filtering detected versions to compatible ones..."
), False)
detected = [
version for version in detected
if version.is_compatible(
OpenPypeVersion.get_installed_version())
]
if OpenPypeVersion(
version=local_version, path=Path()) < detected[-1]:
self.message.emit((
f"Latest installed version {detected[-1]} is newer "
f"then currently running {local_version}"
), False)
self.message.emit("Skipping OpenPype install ...", False)
if detected[-1].path.suffix.lower() == ".zip":
bs.extract_openpype(detected[-1])
self._set_result(0)
return
if OpenPypeVersion(version=local_version).get_main_version() == detected[-1].get_main_version(): # noqa
self.message.emit((
f"Latest installed version is the same as "
f"currently running {local_version}"
), False)
self.message.emit("Skipping OpenPype install ...", False)
self._set_result(0)
return
self.message.emit((
"All installed versions are older then "
f"currently running one {local_version}"
), False)
else:
if getattr(sys, 'frozen', False):
self.message.emit("None detected.", True)
self.message.emit(("We will use OpenPype coming with "
"installer."), False)
openpype_version = bs.create_version_from_frozen_code()
if not openpype_version:
self.message.emit(
f"!!! Install failed - {openpype_version}", True)
self._set_result(-1)
return
self.message.emit(f"Using: {openpype_version}", False)
bs.install_version(openpype_version)
self.message.emit(f"Installed as {openpype_version}", False)
self.progress.emit(100)
self._set_result(1)
return
else:
self.message.emit("None detected.", False)
self.message.emit(
f"We will use local OpenPype version {local_version}", False)
local_openpype = bs.create_version_from_live_code()
if not local_openpype:
self.message.emit(
f"!!! Install failed - {local_openpype}", True)
self._set_result(-1)
return
# user did not enter a url
if self._mongo:
self.message.emit("Saving mongo connection string ...", False)
bs.secure_registry.set_item("openPypeMongo", self._mongo)
elif os.getenv("OPENPYPE_MONGO"):
self._mongo = os.getenv("OPENPYPE_MONGO")
else:
# try to get it from settings registry
try:
bs.install_version(local_openpype)
except (OpenPypeVersionExists,
OpenPypeVersionInvalid,
OpenPypeVersionIOError) as e:
self.message.emit(f"Installed failed: ", True)
self.message.emit(str(e), True)
self._mongo = bs.secure_registry.get_item(
"openPypeMongo")
except ValueError:
self.message.emit(
"!!! We need MongoDB URL to proceed.", True)
self._set_result(-1)
return
os.environ["OPENPYPE_MONGO"] = self._mongo
self.message.emit(f"Installed as {local_openpype}", False)
self.message.emit(
f"Detecting installed OpenPype versions in {bs.data_dir}",
False)
detected = bs.find_openpype(include_zips=True)
if not detected and getattr(sys, 'frozen', False):
self.message.emit("None detected.", True)
self.message.emit(("We will use OpenPype coming with "
"installer."), False)
openpype_version = bs.create_version_from_frozen_code()
if not openpype_version:
self.message.emit(
f"!!! Install failed - {openpype_version}", True)
self._set_result(-1)
return
self.message.emit(f"Using: {openpype_version}", False)
bs.install_version(openpype_version)
self.message.emit(f"Installed as {openpype_version}", False)
self.progress.emit(100)
self._set_result(1)
return
else:
# if we have mongo connection string, validate it, set it to
# user settings and get OPENPYPE_PATH from there.
if self._mongo:
if not validate_mongo_connection(self._mongo):
self.message.emit(
f"!!! invalid mongo url {self._mongo}", True)
self._set_result(-1)
return
bs.secure_registry.set_item("openPypeMongo", self._mongo)
os.environ["OPENPYPE_MONGO"] = self._mongo
self.message.emit(f"processing {self._path}", True)
repo_file = bs.process_entered_location(self._path)
if detected and not OpenPypeVersion.get_installed_version().is_compatible(detected[-1]): # noqa: E501
self.message.emit((
f"Latest detected version {detected[-1]} "
"is not compatible with the currently running "
f"{local_version}"
), True)
self.message.emit((
"Filtering detected versions to compatible ones..."
), False)
if not repo_file:
self.message.emit("!!! Cannot install", True)
self._set_result(-1)
# filter results to get only compatible versions
detected = [
version for version in detected
if version.is_compatible(
OpenPypeVersion.get_installed_version())
]
if detected:
if OpenPypeVersion(
version=local_version, path=Path()) < detected[-1]:
self.message.emit((
f"Latest installed version {detected[-1]} is newer "
f"than currently running {local_version}"
), False)
self.message.emit("Skipping OpenPype install ...", False)
if detected[-1].path.suffix.lower() == ".zip":
bs.extract_openpype(detected[-1])
self._set_result(0)
return
if OpenPypeVersion(version=local_version).get_main_version() == detected[-1].get_main_version(): # noqa: E501
self.message.emit((
f"Latest installed version is the same as "
f"currently running {local_version}"
), False)
self.message.emit("Skipping OpenPype install ...", False)
self._set_result(0)
return
self.message.emit((
"All installed versions are older than "
f"currently running one {local_version}"
), False)
self.message.emit("None detected.", False)
self.message.emit(
f"We will use local OpenPype version {local_version}", False)
local_openpype = bs.create_version_from_live_code()
if not local_openpype:
self.message.emit(
f"!!! Install failed - {local_openpype}", True)
self._set_result(-1)
return
try:
bs.install_version(local_openpype)
except (OpenPypeVersionExists,
OpenPypeVersionInvalid,
OpenPypeVersionIOError) as e:
self.message.emit("Install failed: ", True)
self.message.emit(str(e), True)
self._set_result(-1)
return
self.message.emit(f"Installed as {local_openpype}", False)
self.progress.emit(100)
self._set_result(1)
return
self.progress.emit(100)
self._set_result(1)
return


@@ -1,4 +1,4 @@
from Qt import QtWidgets, QtGui
from qtpy import QtWidgets, QtGui
from .tools import (
load_stylesheet,


@@ -1,4 +1,4 @@
from Qt import QtCore, QtGui, QtWidgets # noqa
from qtpy import QtWidgets
class NiceProgressBar(QtWidgets.QProgressBar):


@@ -153,7 +153,8 @@ def get_openpype_global_settings(url: str) -> dict:
# Create mongo connection
client = MongoClient(url, **kwargs)
# Access settings collection
col = client["openpype"]["settings"]
openpype_db = os.environ.get("OPENPYPE_DATABASE_NAME") or "openpype"
col = client[openpype_db]["settings"]
# Query global settings
global_settings = col.find_one({"type": "global_settings"}) or {}
# Close Mongo connection
@@ -184,11 +185,7 @@ def get_openpype_path_from_settings(settings: dict) -> Union[str, None]:
if paths and isinstance(paths, str):
paths = [paths]
# Loop over paths and return only existing
for path in paths:
if os.path.exists(path):
return path
return None
return next((path for path in paths if os.path.exists(path)), None)
def get_expected_studio_version_str(
@@ -206,10 +203,7 @@ def get_expected_studio_version_str(
mongo_url = os.environ.get("OPENPYPE_MONGO")
if global_settings is None:
global_settings = get_openpype_global_settings(mongo_url)
if staging:
key = "staging_version"
else:
key = "production_version"
key = "staging_version" if staging else "production_version"
return global_settings.get(key) or ""
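The first hunk above also makes the settings database name overridable via `OPENPYPE_DATABASE_NAME`. A hedged usage sketch of these helpers (module path, connection string, and database name are placeholders):

```python
import os

from igniter.tools import (  # module path assumed
    get_openpype_global_settings,
    get_openpype_path_from_settings,
)

# opt into a non-default settings database before querying settings
os.environ["OPENPYPE_DATABASE_NAME"] = "openpype_test"

settings = get_openpype_global_settings("mongodb://localhost:27017")

# returns the first existing path from settings, or None
print(get_openpype_path_from_settings(settings))
```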


@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
"""Working thread for update."""
from Qt.QtCore import QThread, Signal, QObject # noqa
from qtpy import QtCore
from .bootstrap_repos import (
BootstrapRepos,
@@ -8,7 +8,7 @@ from .bootstrap_repos import (
)
class UpdateThread(QThread):
class UpdateThread(QtCore.QThread):
"""Install Worker thread.
This class takes care of finding OpenPype version on user entered path
@@ -19,13 +19,13 @@ class UpdateThread(QThread):
user data dir.
"""
progress = Signal(int)
message = Signal((str, bool))
progress = QtCore.Signal(int)
message = QtCore.Signal((str, bool))
def __init__(self, parent=None):
self._result = None
self._openpype_version = None
QThread.__init__(self, parent)
super().__init__(parent)
def set_version(self, openpype_version: OpenPypeVersion):
self._openpype_version = openpype_version
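The conversion in this file (and in install_thread.py above) is mechanical: `Qt` becomes `qtpy`, bare `QThread`/`Signal` names become `QtCore` attributes, and `QThread.__init__(self, parent)` becomes `super().__init__(parent)`. A minimal sketch of the resulting shape (the worker body is hypothetical):

```python
from qtpy import QtCore


class WorkerThread(QtCore.QThread):
    """Sketch of the qtpy thread/signal pattern used above."""

    progress = QtCore.Signal(int)
    message = QtCore.Signal(str, bool)

    def __init__(self, parent=None):
        self._result = None
        super().__init__(parent)

    def run(self):
        # emit progress and a final message, as the real workers do
        for step in (25, 50, 75, 100):
            self.progress.emit(step)
        self.message.emit("done", False)
```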


@@ -1,8 +1,10 @@
# -*- coding: utf-8 -*-
"""Progress window to show when OpenPype is updating/installing locally."""
import os
from qtpy import QtCore, QtGui, QtWidgets
from .update_thread import UpdateThread
from Qt import QtCore, QtGui, QtWidgets # noqa
from .bootstrap_repos import OpenPypeVersion
from .nice_progress_bar import NiceProgressBar
from .tools import load_stylesheet
@@ -47,7 +49,6 @@ class UpdateWindow(QtWidgets.QDialog):
self._update_thread = None
self.resize(QtCore.QSize(self._width, self._height))
self._init_ui()
# Set stylesheet
@@ -79,6 +80,16 @@
self._progress_bar = progress_bar
def showEvent(self, event):
super().showEvent(event)
current_size = self.size()
new_size = QtCore.QSize(
max(current_size.width(), self._width),
max(current_size.height(), self._height)
)
if current_size != new_size:
self.resize(new_size)
def _run_update(self):
"""Start install process.


@@ -48,8 +48,8 @@ Source: "build\{#build}\*"; DestDir: "{app}"; Flags: ignoreversion recursesubdir
; NOTE: Don't use "Flags: ignoreversion" on any shared system files
[Icons]
Name: "{autoprograms}\{#MyAppName}"; Filename: "{app}\openpype_gui.exe"
Name: "{autodesktop}\{#MyAppName}"; Filename: "{app}\openpype_gui.exe"; Tasks: desktopicon
Name: "{autoprograms}\{#MyAppName} {#AppVer}"; Filename: "{app}\openpype_gui.exe"
Name: "{autodesktop}\{#MyAppName} {#AppVer}"; Filename: "{app}\openpype_gui.exe"; Tasks: desktopicon
[Run]
Filename: "{app}\openpype_gui.exe"; Description: "{cm:LaunchProgram,OpenPype}"; Flags: nowait postinstall skipifsilent


@@ -16,14 +16,15 @@ from .pype_commands import PypeCommands
@click.option("--use-staging", is_flag=True,
expose_value=False, help="use staging variants")
@click.option("--list-versions", is_flag=True, expose_value=False,
help=("list all detected versions. Use With `--use-staging "
"to list staging versions."))
help="list all detected versions.")
@click.option("--validate-version", expose_value=False,
help="validate given version integrity")
@click.option("--debug", is_flag=True, expose_value=False,
help=("Enable debug"))
help="Enable debug")
@click.option("--verbose", expose_value=False,
help=("Change OpenPype log level (debug - critical or 0-50)"))
@click.option("--automatic-tests", is_flag=True, expose_value=False,
help=("Run in automatic tests mode"))
def main(ctx):
"""Pype is main command serving as entry point to pipeline system.
@@ -429,20 +430,18 @@ def unpack_project(zipfile, root):
@main.command()
def interactive():
"""Interative (Python like) console.
"""Interactive (Python like) console.
Helpfull command not only for development to directly work with python
Helpful command not only for development to directly work with python
interpreter.
Warning:
Executable 'openpype_gui' on windows won't work.
Executable 'openpype_gui' on Windows won't work.
"""
from openpype.version import __version__
banner = "OpenPype {}\nPython {} on {}".format(
__version__, sys.version, sys.platform
)
banner = f"OpenPype {__version__}\nPython {sys.version} on {sys.platform}"
code.interact(banner)


@@ -76,6 +76,18 @@ class HostBase(object):
pass
def install(self):
"""Install host specific functionality.
This is where the menu with tools should be added, callbacks
registered and other host integration initialized.
It is called automatically when 'openpype.pipeline.install_host' is
triggered.
"""
pass
@property
def log(self):
if self._log is None:
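A sketch of a host implementing the new `install` hook might look like this (class name and import path are assumptions, not taken from this diff):

```python
from openpype.host import HostBase  # import path assumed


class ExampleHost(HostBase):
    name = "example"

    def install(self):
        # add the tools menu, register callbacks and do any other
        # integration setup; called by openpype.pipeline.install_host
        print("Installing Example host integration")
```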


@@ -10,30 +10,15 @@ from .launch_logic import (
)
from .pipeline import (
AfterEffectsHost,
ls,
get_asset_settings,
install,
uninstall,
list_instances,
remove_instance,
containerise,
get_context_data,
update_context_data,
get_context_title
)
from .workio import (
file_extensions,
has_unsaved_changes,
save_file,
open_file,
current_file,
work_root,
containerise
)
from .lib import (
maintained_selection,
get_extension_manifest_path
get_extension_manifest_path,
get_asset_settings
)
from .plugin import (
@@ -48,26 +33,12 @@ __all__ = [
# pipeline
"ls",
"get_asset_settings",
"install",
"uninstall",
"list_instances",
"remove_instance",
"containerise",
"get_context_data",
"update_context_data",
"get_context_title",
"file_extensions",
"has_unsaved_changes",
"save_file",
"open_file",
"current_file",
"work_root",
# lib
"maintained_selection",
"get_extension_manifest_path",
"get_asset_settings",
# plugin
"AfterEffectsLoader"


@@ -1,5 +1,5 @@
<?xml version="1.0" encoding="UTF-8"?>
<ExtensionManifest Version="8.0" ExtensionBundleId="com.openpype.AE.panel" ExtensionBundleVersion="1.0.23"
<ExtensionManifest Version="8.0" ExtensionBundleId="com.openpype.AE.panel" ExtensionBundleVersion="1.0.24"
ExtensionBundleName="openpype" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<ExtensionList>
<Extension Id="com.openpype.AE.panel" Version="1.0" />

View file

@ -38,17 +38,6 @@
});
</script>
<script type=text/javascript>
$(function() {
$("a#creator-button").bind("click", function() {
RPC.call('AfterEffects.creator_route').then(function (data) {
}, function (error) {
alert(error);
});
});
});
</script>
<script type=text/javascript>
$(function() {
$("a#loader-button").bind("click", function() {
@ -82,17 +71,6 @@
});
</script>
<script type=text/javascript>
$(function() {
$("a#subsetmanager-button").bind("click", function() {
RPC.call('AfterEffects.subsetmanager_route').then(function (data) {
}, function (error) {
alert(error);
});
});
});
</script>
<script type=text/javascript>
$(function() {
$("a#experimental-button").bind("click", function() {
@ -113,11 +91,9 @@
<div>
<div></div><a href=# id=workfiles-button><button class="hostFontSize">Workfiles...</button></a></div>
<div> <a href=# id=creator-button><button class="hostFontSize">Create...</button></a></div>
<div><a href=# id=loader-button><button class="hostFontSize">Load...</button></a></div>
<div><a href=# id=publish-button><button class="hostFontSize">Publish...</button></a></div>
<div><a href=# id=sceneinventory-button><button class="hostFontSize">Manage...</button></a></div>
<div><a href=# id=subsetmanager-button><button class="hostFontSize">Subset Manager...</button></a></div>
<div><a href=# id=experimental-button><button class="hostFontSize">Experimental Tools...</button></a></div>
</div>

View file

@ -237,7 +237,7 @@ function main(websocket_url){
RPC.addRoute('AfterEffects.get_render_info', function (data) {
log.warn('Server called client route "get_render_info":', data);
return runEvalScript("getRenderInfo()")
return runEvalScript("getRenderInfo(" + data.comp_id +")")
.then(function(result){
log.warn("get_render_info: " + result);
return result;
@ -289,7 +289,7 @@ function main(websocket_url){
RPC.addRoute('AfterEffects.render', function (data) {
log.warn('Server called client route "render":', data);
var escapedPath = EscapeStringForJSX(data.folder_url);
return runEvalScript("render('" + escapedPath +"')")
return runEvalScript("render('" + escapedPath +"', " + data.comp_id + ")")
.then(function(result){
log.warn("render: " + result);
return result;

View file

@ -395,41 +395,84 @@ function saveAs(path){
app.project.save(fp = new File(path));
}
function getRenderInfo(){
function getRenderInfo(comp_id){
/***
Get info from render queue.
Currently pulls only file name to parse extension and
Currently pulls only file name to parse extension and
whether it is a sequence in Python
Args:
comp_id (int): id of composition
Return:
(list) [{file_name:"xx.png", width:00, height:00}]
**/
var item = app.project.itemByID(comp_id);
if (!item){
return _prepareError("Composition with '" + comp_id + "' wasn't found! Recreate publishable instance(s)")
}
var comp_name = item.name;
var output_metadata = []
try{
var render_item = app.project.renderQueue.item(1);
if (render_item.status == RQItemStatus.DONE){
render_item.duplicate(); // create new, cannot change status if DONE
render_item.remove(); // remove existing to limit duplications
render_item = app.project.renderQueue.item(1);
// render_item.duplicate() should create a new item on the renderQueue,
// BUT it works only sometimes - there is some weird synchronization issue.
// This method is always called before render, so prepare the items here
// to spare the hassle later.
for (i = 1; i <= app.project.renderQueue.numItems; ++i){
var render_item = app.project.renderQueue.item(i);
if (render_item.comp.id != comp_id){
continue;
}
if (render_item.status == RQItemStatus.DONE){
render_item.duplicate(); // create new, cannot change status if DONE
render_item.remove(); // remove existing to limit duplications
continue;
}
}
render_item.render = true; // always set render queue to render
var item = render_item.outputModule(1);
// properly validate as `numItems` won't change magically
var comp_id_count = 0;
for (i = 1; i <= app.project.renderQueue.numItems; ++i){
var render_item = app.project.renderQueue.item(i);
if (render_item.comp.id != comp_id){
continue;
}
comp_id_count += 1;
var item = render_item.outputModule(1);
for (j = 1; j<= render_item.numOutputModules; ++j){
var file_url = item.file.toString();
output_metadata.push(
JSON.stringify({
"file_name": file_url,
"width": render_item.comp.width,
"height": render_item.comp.height
})
);
}
}
} catch (error) {
return _prepareError("There is no render queue, create one");
}
var file_url = item.file.toString();
return JSON.stringify({
"file_name": file_url,
"width": render_item.comp.width,
"height": render_item.comp.height
})
if (comp_id_count > 1){
return _prepareError("There cannot be more items in Render Queue for '" + comp_name + "'!")
}
if (comp_id_count == 0){
return _prepareError("There is no item in Render Queue for '" + comp_name + "'! Add composition to Render Queue.")
}
return '[' + output_metadata.join() + ']';
}
function getAudioUrlForComp(comp_id){
/**
* Searches composition for audio layer
*
*
* Only single AVLayer is expected!
* Used for collecting Audio
*
*
* Args:
* comp_id (int): id of composition
* Return:
@ -457,7 +500,7 @@ function addItemAsLayerToComp(comp_id, item_id, found_comp){
/**
* Adds already imported FootageItem ('item_id') as a new
* layer to composition ('comp_id').
*
*
* Args:
* comp_id (int): id of target composition
* item_id (int): FootageItem.id
@ -480,17 +523,17 @@ function addItemAsLayerToComp(comp_id, item_id, found_comp){
function importBackground(comp_id, composition_name, files_to_import){
/**
* Imports backgrounds images to existing or new composition.
*
*
* If comp_id is not provided, new composition is created, basic
* values (width, heights, frameRatio) takes from first imported
* image.
*
*
* Args:
* comp_id (int): id of existing composition (null if new)
* composition_name (str): used when new composition
* composition_name (str): used when new composition
* files_to_import (list): list of absolute paths to import and
* add as layers
*
*
* Returns:
* (str): json representation (id, name, members)
*/
@ -512,7 +555,7 @@ function importBackground(comp_id, composition_name, files_to_import){
}
}
}
if (files_to_import){
for (i = 0; i < files_to_import.length; ++i){
item = _importItem(files_to_import[i]);
@ -524,8 +567,8 @@ function importBackground(comp_id, composition_name, files_to_import){
if (!comp){
folder = app.project.items.addFolder(composition_name);
imported_ids.push(folder.id);
comp = app.project.items.addComp(composition_name, item.width,
item.height, item.pixelAspect,
comp = app.project.items.addComp(composition_name, item.width,
item.height, item.pixelAspect,
1, 26.7); // hardcode defaults
imported_ids.push(comp.id);
comp.parentFolder = folder;
@ -534,7 +577,7 @@ function importBackground(comp_id, composition_name, files_to_import){
item.parentFolder = folder;
addItemAsLayerToComp(comp.id, item.id, comp);
}
}
}
var item = {"name": comp.name,
"id": folder.id,
@ -545,19 +588,19 @@ function importBackground(comp_id, composition_name, files_to_import){
function reloadBackground(comp_id, composition_name, files_to_import){
/**
* Reloads existing composition.
*
*
* It deletes complete composition with encompassing folder, recreates
* from scratch via 'importBackground' functionality.
*
*
* Args:
* comp_id (int): id of existing composition (null if new)
* composition_name (str): used when new composition
* composition_name (str): used when new composition
* files_to_import (list): list of absolute paths to import and
* add as layers
*
*
* Returns:
* (str): json representation (id, name, members)
*
*
*/
var imported_ids = []; // keep track of members of composition
comp = app.project.itemByID(comp_id);
@ -620,7 +663,7 @@ function reloadBackground(comp_id, composition_name, files_to_import){
function _get_file_name(file_url){
/**
* Returns file name without extension from 'file_url'
*
*
* Args:
* file_url (str): full absolute url
* Returns:
@ -635,7 +678,7 @@ function _delete_obsolete_items(folder, new_filenames){
/***
* Goes through 'folder' and removes layers not in new
* background
*
*
* Args:
* folder (FolderItem)
* new_filenames (array): list of layer names in new bg
@ -660,14 +703,14 @@ function _delete_obsolete_items(folder, new_filenames){
function _importItem(file_url){
/**
* Imports 'file_url' as new FootageItem
*
*
* Args:
* file_url (str): file url with content
* Returns:
* (FootageItem)
*/
file_name = _get_file_name(file_url);
//importFile prepared previously to return json
item_json = importFile(file_url, file_name, JSON.stringify({"ImportAsType":"FOOTAGE"}));
item_json = JSON.parse(item_json);
@ -689,30 +732,42 @@ function isFileSequence (item){
return false;
}
function render(target_folder){
function render(target_folder, comp_id){
var out_dir = new Folder(target_folder);
var out_dir = out_dir.fsName;
for (i = 1; i <= app.project.renderQueue.numItems; ++i){
var render_item = app.project.renderQueue.item(i);
var om1 = app.project.renderQueue.item(i).outputModule(1);
var file_name = File.decode( om1.file.name ).replace('℗', ''); // Name contains special character, space?
var composition = render_item.comp;
if (composition.id == comp_id){
if (render_item.status == RQItemStatus.DONE){
var new_item = render_item.duplicate();
render_item.remove();
render_item = new_item;
}
render_item.render = true;
var om1 = app.project.renderQueue.item(i).outputModule(1);
var file_name = File.decode( om1.file.name ).replace('℗', ''); // Name contains special character, space?
var omItem1_settable_str = app.project.renderQueue.item(i).outputModule(1).getSettings( GetSettingsFormat.STRING_SETTABLE );
var targetFolder = new Folder(target_folder);
if (!targetFolder.exists) {
targetFolder.create();
}
om1.file = new File(targetFolder.fsName + '/' + file_name);
}else{
if (render_item.status != RQItemStatus.DONE){
render_item.render = false;
}
}
var omItem1_settable_str = app.project.renderQueue.item(i).outputModule(1).getSettings( GetSettingsFormat.STRING_SETTABLE );
if (render_item.status == RQItemStatus.DONE){
render_item.duplicate();
render_item.remove();
continue;
}
var targetFolder = new Folder(target_folder);
if (!targetFolder.exists) {
targetFolder.create();
}
om1.file = new File(targetFolder.fsName + '/' + file_name);
}
app.beginSuppressDialogs();
app.project.renderQueue.render();
app.endSuppressDialogs(false);
}
function close(){

View file

@ -284,9 +284,6 @@ class AfterEffectsRoute(WebSocketRoute):
return await self.socket.call('aftereffects.read')
# panel routes for tools
async def creator_route(self):
self._tool_route("creator")
async def workfiles_route(self):
self._tool_route("workfiles")
@ -294,14 +291,11 @@ class AfterEffectsRoute(WebSocketRoute):
self._tool_route("loader")
async def publish_route(self):
self._tool_route("publish")
self._tool_route("publisher")
async def sceneinventory_route(self):
self._tool_route("sceneinventory")
async def subsetmanager_route(self):
self._tool_route("subsetmanager")
async def experimental_tools_route(self):
self._tool_route("experimental_tools")

View file

@ -13,6 +13,7 @@ from openpype.pipeline import install_host
from openpype.modules import ModulesManager
from openpype.tools.utils import host_tools
from openpype.tests.lib import is_in_tests
from .launch_logic import ProcessLauncher, get_stub
log = logging.getLogger(__name__)
@ -26,9 +27,10 @@ def safe_excepthook(*args):
def main(*subprocess_args):
sys.excepthook = safe_excepthook
from openpype.hosts.aftereffects import api
from openpype.hosts.aftereffects.api import AfterEffectsHost
install_host(api)
host = AfterEffectsHost()
install_host(host)
os.environ["OPENPYPE_LOG_NO_COLORS"] = "False"
app = QtWidgets.QApplication([])
@ -46,7 +48,7 @@ def main(*subprocess_args):
webpublisher_addon.headless_publish,
log,
"CloseAE",
os.environ.get("IS_TEST")
is_in_tests()
)
)
@ -133,3 +135,32 @@ def get_background_layers(file_url):
layer.get("filename")).
replace("\\", "/"))
return layers
def get_asset_settings(asset_doc):
"""Get settings on current asset from database.
Returns:
dict: Scene data.
"""
asset_data = asset_doc["data"]
fps = asset_data.get("fps")
frame_start = asset_data.get("frameStart")
frame_end = asset_data.get("frameEnd")
handle_start = asset_data.get("handleStart")
handle_end = asset_data.get("handleEnd")
resolution_width = asset_data.get("resolutionWidth")
resolution_height = asset_data.get("resolutionHeight")
duration = (frame_end - frame_start + 1) + handle_start + handle_end
return {
"fps": fps,
"frameStart": frame_start,
"frameEnd": frame_end,
"handleStart": handle_start,
"handleEnd": handle_end,
"resolutionWidth": resolution_width,
"resolutionHeight": resolution_height,
"duration": duration
}
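A worked example of the duration formula with hypothetical asset data: 100 frames plus 10-frame handles on each side yield 120.
settings = get_asset_settings({"data": {
    "fps": 25,
    "frameStart": 1001, "frameEnd": 1100,
    "handleStart": 10, "handleEnd": 10,
    "resolutionWidth": 1920, "resolutionHeight": 1080,
}})
assert settings["duration"] == (1100 - 1001 + 1) + 10 + 10  # 120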

View file

@ -16,6 +16,13 @@ from openpype.pipeline import (
from openpype.pipeline.load import any_outdated_containers
import openpype.hosts.aftereffects
from openpype.host import (
HostBase,
IWorkfileHost,
ILoadHost,
IPublishHost
)
from .launch_logic import get_stub, ConnectionNotEstablishedYet
log = Logger.get_logger(__name__)
@ -30,27 +37,142 @@ LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
def install():
print("Installing Pype config...")
class AfterEffectsHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
name = "aftereffects"
pyblish.api.register_host("aftereffects")
pyblish.api.register_plugin_path(PUBLISH_PATH)
def __init__(self):
self._stub = None
super(AfterEffectsHost, self).__init__()
register_loader_plugin_path(LOAD_PATH)
register_creator_plugin_path(CREATE_PATH)
log.info(PUBLISH_PATH)
@property
def stub(self):
"""
Handle pulling stub from AE to run operations on host
Returns:
(AEServerStub) or None
"""
if self._stub:
return self._stub
pyblish.api.register_callback(
"instanceToggled", on_pyblish_instance_toggled
)
try:
stub = get_stub() # only after After Effects is up
except ConnectionNotEstablishedYet:
print("Not connected yet, ignoring")
return
register_event_callback("application.launched", application_launch)
if not stub.get_active_document_name():
return
self._stub = stub
return self._stub
def uninstall():
pyblish.api.deregister_plugin_path(PUBLISH_PATH)
deregister_loader_plugin_path(LOAD_PATH)
deregister_creator_plugin_path(CREATE_PATH)
def install(self):
print("Installing Pype config...")
pyblish.api.register_host("aftereffects")
pyblish.api.register_plugin_path(PUBLISH_PATH)
register_loader_plugin_path(LOAD_PATH)
register_creator_plugin_path(CREATE_PATH)
log.info(PUBLISH_PATH)
pyblish.api.register_callback(
"instanceToggled", on_pyblish_instance_toggled
)
register_event_callback("application.launched", application_launch)
def get_workfile_extensions(self):
return [".aep"]
def save_workfile(self, dst_path=None):
self.stub.saveAs(dst_path, True)
def open_workfile(self, filepath):
self.stub.open(filepath)
return True
def get_current_workfile(self):
try:
full_name = get_stub().get_active_document_full_name()
if full_name and full_name != "null":
return os.path.normpath(full_name).replace("\\", "/")
except ValueError:
print("Nothing opened")
pass
return None
def get_containers(self):
return ls()
def get_context_data(self):
meta = self.stub.get_metadata()
for item in meta:
if item.get("id") == "publish_context":
item.pop("id")
return item
return {}
def update_context_data(self, data, changes):
item = data
item["id"] = "publish_context"
self.stub.imprint(item["id"], item)
# created instances section
def list_instances(self):
"""List all created instances from current workfile which
will be published.
Pulls from File > File Info
For SubsetManager
Returns:
(list) of dictionaries matching instances format
"""
stub = self.stub
if not stub:
return []
instances = []
layers_meta = stub.get_metadata()
for instance in layers_meta:
if instance.get("id") == "pyblish.avalon.instance":
instances.append(instance)
return instances
def remove_instance(self, instance):
"""Remove instance from current workfile metadata.
Updates metadata of current file in File > File Info and removes
icon highlight on group layer.
For SubsetManager
Args:
instance (dict): instance representation from subsetmanager model
"""
stub = self.stub
if not stub:
return
inst_id = instance.get("instance_id") or instance.get("uuid") # legacy
if not inst_id:
log.warning("No instance identifier for {}".format(instance))
return
stub.remove_instance(inst_id)
if instance.get("members"):
item = stub.get_item(instance["members"][0])
if item:
stub.rename_item(item.id,
item.name.replace(stub.PUBLISH_ICON, ''))
def application_launch():
@ -63,35 +185,6 @@ def on_pyblish_instance_toggled(instance, old_value, new_value):
instance[0].Visible = new_value
def get_asset_settings(asset_doc):
"""Get settings on current asset from database.
Returns:
dict: Scene data.
"""
asset_data = asset_doc["data"]
fps = asset_data.get("fps")
frame_start = asset_data.get("frameStart")
frame_end = asset_data.get("frameEnd")
handle_start = asset_data.get("handleStart")
handle_end = asset_data.get("handleEnd")
resolution_width = asset_data.get("resolutionWidth")
resolution_height = asset_data.get("resolutionHeight")
duration = (frame_end - frame_start + 1) + handle_start + handle_end
return {
"fps": fps,
"frameStart": frame_start,
"frameEnd": frame_end,
"handleStart": handle_start,
"handleEnd": handle_end,
"resolutionWidth": resolution_width,
"resolutionHeight": resolution_height,
"duration": duration
}
def ls():
"""Yields containers from active AfterEffects document.
@ -191,102 +284,17 @@ def containerise(name,
return comp
# created instances section
def list_instances():
"""
List all created instances from current workfile which
will be published.
def cache_and_get_instances(creator):
"""Cache instances in shared data.
Pulls from File > File Info
For SubsetManager
Returns:
(list) of dictionaries matching instances format
"""
stub = _get_stub()
if not stub:
return []
instances = []
layers_meta = stub.get_metadata()
for instance in layers_meta:
if instance.get("id") == "pyblish.avalon.instance":
instances.append(instance)
return instances
def remove_instance(instance):
"""
Remove instance from current workfile metadata.
Updates metadata of current file in File > File Info and removes
icon highlight on group layer.
For SubsetManager
Args:
instance (dict): instance representation from subsetmanager model
"""
stub = _get_stub()
if not stub:
return
inst_id = instance.get("instance_id") or instance.get("uuid") # legacy
if not inst_id:
log.warning("No instance identifier for {}".format(instance))
return
stub.remove_instance(inst_id)
if instance.get("members"):
item = stub.get_item(instance["members"][0])
if item:
stub.rename_item(item.id,
item.name.replace(stub.PUBLISH_ICON, ''))
# new publisher section
def get_context_data():
meta = _get_stub().get_metadata()
for item in meta:
if item.get("id") == "publish_context":
item.pop("id")
return item
return {}
def update_context_data(data, changes):
item = data
item["id"] = "publish_context"
_get_stub().imprint(item["id"], item)
def get_context_title():
"""Returns title for Creator window"""
project_name = legacy_io.Session["AVALON_PROJECT"]
asset_name = legacy_io.Session["AVALON_ASSET"]
task_name = legacy_io.Session["AVALON_TASK"]
return "{}/{}/{}".format(project_name, asset_name, task_name)
def _get_stub():
"""
Handle pulling stub from PS to run operations on host
Storing all instances as a list, as legacy instances might still be present.
Args:
creator (Creator): Plugin which would like to get instances from host.
Returns:
(AEServerStub) or None
List[]: list of all instances stored in metadata
"""
try:
stub = get_stub() # only after Photoshop is up
except ConnectionNotEstablishedYet:
print("Not connected yet, ignoring")
return
if not stub.get_active_document_name():
return
return stub
shared_key = "openpype.photoshop.instances"
if shared_key not in creator.collection_shared_data:
creator.collection_shared_data[shared_key] = \
creator.host.list_instances()
return creator.collection_shared_data[shared_key]
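A short sketch of the caching contract (hypothetical 'creator' object): the first call in a publish session queries the host, later calls are served from the shared data, so multiple creators do not each hit the AE stub.
instances = cache_and_get_instances(creator)          # queries list_instances()
assert cache_and_get_instances(creator) is instances  # served from the cache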

View file

@ -1,53 +0,0 @@
"""Host API required Work Files tool"""
import os
from .launch_logic import get_stub
def file_extensions():
return [".aep"]
def has_unsaved_changes():
if _active_document():
return not get_stub().is_saved()
return False
def save_file(filepath):
get_stub().saveAs(filepath, True)
def open_file(filepath):
get_stub().open(filepath)
return True
def current_file():
try:
full_name = get_stub().get_active_document_full_name()
if full_name and full_name != "null":
return os.path.normpath(full_name).replace("\\", "/")
except ValueError:
print("Nothing opened")
pass
return None
def work_root(session):
return os.path.normpath(session["AVALON_WORKDIR"]).replace("\\", "/")
def _active_document():
# TODO merge with current_file - even in extension
document_name = None
try:
document_name = get_stub().get_active_document_name()
except ValueError:
print("Nothing opened")
pass
return document_name

View file

@ -418,18 +418,18 @@ class AfterEffectsServerStub():
return self._handle_return(res)
def get_render_info(self):
def get_render_info(self, comp_id):
""" Get render queue info for render purposes
Returns:
(AEItem): with 'file_name' field
(list) of (AEItem): with 'file_name' field
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.get_render_info'))
('AfterEffects.get_render_info',
comp_id=comp_id))
records = self._to_records(self._handle_return(res))
if records:
return records.pop()
return records
def get_audio_url(self, item_id):
""" Get audio layer absolute url for comp
@ -522,7 +522,7 @@ class AfterEffectsServerStub():
if records:
return records.pop()
def render(self, folder_url):
def render(self, folder_url, comp_id):
"""
Render all render queue items of given composition to 'folder_url'
Args:
@ -531,7 +531,8 @@ class AfterEffectsServerStub():
"""
res = self.websocketserver.call(self.client.call
('AfterEffects.render',
folder_url=folder_url))
folder_url=folder_url,
comp_id=comp_id))
return self._handle_return(res)
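A hedged usage sketch of the updated stub API (assumes a connected stub and a hypothetical composition id 12); 'get_render_info' now returns a list of records instead of a single one:
stub = get_stub()
for item in stub.get_render_info(comp_id=12):
    print(item.file_name, item.width, item.height)
stub.render(folder_url="C:/staging", comp_id=12)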
def get_extension_version(self):

View file

@ -1,13 +0,0 @@
from openpype.hosts.aftereffects.plugins.create import create_legacy_render
class CreateLocalRender(create_legacy_render.CreateRender):
""" Creator to render locally.
Created only after default render on farm. So family 'render.local' is
used for backward compatibility.
"""
name = "renderDefault"
label = "Render Locally"
family = "renderLocal"

View file

@ -1,62 +0,0 @@
from openpype.pipeline import create
from openpype.pipeline import CreatorError
from openpype.hosts.aftereffects.api import (
get_stub,
list_instances
)
class CreateRender(create.LegacyCreator):
"""Render folder for publish.
Creates subsets in format 'familyTaskSubsetname',
eg 'renderCompositingMain'.
Create only single instance from composition at a time.
"""
name = "renderDefault"
label = "Render on Farm"
family = "render"
defaults = ["Main"]
def process(self):
stub = get_stub() # only after After Effects is up
items = []
if (self.options or {}).get("useSelection"):
items = stub.get_selected_items(
comps=True, folders=False, footages=False
)
if len(items) > 1:
raise CreatorError(
"Please select only single composition at time."
)
if not items:
raise CreatorError((
"Nothing to create. Select composition "
"if 'useSelection' or create at least "
"one composition."
))
existing_subsets = [
instance['subset'].lower()
for instance in list_instances()
]
item = items.pop()
if self.name.lower() in existing_subsets:
txt = "Instance with name \"{}\" already exists.".format(self.name)
raise CreatorError(txt)
self.data["members"] = [item.id]
self.data["uuid"] = item.id # for SubsetManager
self.data["subset"] = (
self.data["subset"]
.replace(stub.PUBLISH_ICON, '')
.replace(stub.LOADED_ICON, '')
)
stub.imprint(item, self.data)
stub.set_label_color(item.id, 14) # Cyan options 0 - 16
stub.rename_item(item.id, stub.PUBLISH_ICON + self.data["subset"])

View file

@ -1,3 +1,5 @@
import re
from openpype import resources
from openpype.lib import BoolDef, UISeparatorDef
from openpype.hosts.aftereffects import api
@ -7,6 +9,8 @@ from openpype.pipeline import (
CreatorError,
legacy_io,
)
from openpype.hosts.aftereffects.api.pipeline import cache_and_get_instances
from openpype.lib import prepare_template_data
class RenderCreator(Creator):
@ -28,7 +32,7 @@ class RenderCreator(Creator):
return resources.get_openpype_splash_filepath()
def collect_instances(self):
for instance_data in api.list_instances():
for instance_data in cache_and_get_instances(self):
# legacy instances have family=='render' or 'renderLocal', use them
creator_id = (instance_data.get("creator_identifier") or
instance_data.get("family", '').replace("Local", ''))
@ -43,46 +47,71 @@ class RenderCreator(Creator):
for created_inst, _changes in update_list:
api.get_stub().imprint(created_inst.get("instance_id"),
created_inst.data_to_store())
subset_change = _changes.get("subset")
if subset_change:
api.get_stub().rename_item(created_inst.data["members"][0],
subset_change[1])
def remove_instances(self, instances):
for instance in instances:
api.remove_instance(instance)
self._remove_instance_from_context(instance)
self.host.remove_instance(instance)
def create(self, subset_name, data, pre_create_data):
subset = instance.data["subset"]
comp_id = instance.data["members"][0]
comp = api.get_stub().get_item(comp_id)
if comp:
new_comp_name = comp.name.replace(subset, '')
if not new_comp_name:
new_comp_name = "dummyCompName"
api.get_stub().rename_item(comp_id,
new_comp_name)
def create(self, subset_name_from_ui, data, pre_create_data):
stub = api.get_stub() # only after After Effects is up
if pre_create_data.get("use_selection"):
items = stub.get_selected_items(
comps = stub.get_selected_items(
comps=True, folders=False, footages=False
)
else:
items = stub.get_items(comps=True, folders=False, footages=False)
comps = stub.get_items(comps=True, folders=False, footages=False)
if len(items) > 1:
if not comps:
raise CreatorError(
"Please select only single composition at time."
)
if not items:
raise CreatorError((
"Nothing to create. Select composition "
"if 'useSelection' or create at least "
"one composition."
))
)
for inst in self.create_context.instances:
if subset_name == inst.subset_name:
raise CreatorError("{} already exists".format(
inst.subset_name))
for comp in comps:
if pre_create_data.get("use_composition_name"):
composition_name = comp.name
dynamic_fill = prepare_template_data({"composition":
composition_name})
subset_name = subset_name_from_ui.format(**dynamic_fill)
data["composition_name"] = composition_name
else:
subset_name = subset_name_from_ui
subset_name = re.sub(r"\{composition\}", '', subset_name,
flags=re.IGNORECASE)
data["members"] = [items[0].id]
new_instance = CreatedInstance(self.family, subset_name, data, self)
if "farm" in pre_create_data:
use_farm = pre_create_data["farm"]
new_instance.creator_attributes["farm"] = use_farm
for inst in self.create_context.instances:
if subset_name == inst.subset_name:
raise CreatorError("{} already exists".format(
inst.subset_name))
api.get_stub().imprint(new_instance.id,
new_instance.data_to_store())
self._add_instance_to_context(new_instance)
data["members"] = [comp.id]
new_instance = CreatedInstance(self.family, subset_name, data,
self)
if "farm" in pre_create_data:
use_farm = pre_create_data["farm"]
new_instance.creator_attributes["farm"] = use_farm
api.get_stub().imprint(new_instance.id,
new_instance.data_to_store())
self._add_instance_to_context(new_instance)
stub.rename_item(comp.id, subset_name)
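A worked example of the subset-name templating above with hypothetical names; it assumes 'prepare_template_data' fills the usual case variants of each key:
from openpype.lib import prepare_template_data

fill = prepare_template_data({"composition": "sh010"})
print("render{Composition}Main".format(**fill))  # -> "renderSh010Main"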
def get_default_variants(self):
return self._default_variants
@ -93,6 +122,8 @@ class RenderCreator(Creator):
def get_pre_create_attr_defs(self):
output = [
BoolDef("use_selection", default=True, label="Use selection"),
BoolDef("use_composition_name",
label="Use composition name in subset"),
UISeparatorDef(),
BoolDef("farm", label="Render on farm")
]
@ -101,6 +132,18 @@ class RenderCreator(Creator):
def get_detail_description(self):
return """Creator for Render instances"""
def get_dynamic_data(self, variant, task_name, asset_doc,
project_name, host_name, instance):
dynamic_data = {}
if instance is not None:
composition_name = instance.get("composition_name")
if composition_name:
dynamic_data["composition"] = composition_name
else:
dynamic_data["composition"] = "{composition}"
return dynamic_data
def _handle_legacy(self, instance_data):
"""Converts old instances to new format."""
if not instance_data.get("members"):

View file

@ -5,6 +5,7 @@ from openpype.pipeline import (
CreatedInstance,
legacy_io,
)
from openpype.hosts.aftereffects.api.pipeline import cache_and_get_instances
class AEWorkfileCreator(AutoCreator):
@ -17,7 +18,7 @@ class AEWorkfileCreator(AutoCreator):
return []
def collect_instances(self):
for instance_data in api.list_instances():
for instance_data in cache_and_get_instances(self):
creator_id = instance_data.get("creator_identifier")
if creator_id == self.identifier:
subset_name = instance_data["subset"]
@ -55,7 +56,7 @@ class AEWorkfileCreator(AutoCreator):
}
data.update(self.get_dynamic_data(
self.default_variant, task_name, asset_doc,
project_name, host_name
project_name, host_name, None
))
new_instance = CreatedInstance(

View file

@ -22,7 +22,7 @@ class AERenderInstance(RenderInstance):
stagingDir = attr.ib(default=None)
app_version = attr.ib(default=None)
publish_attributes = attr.ib(default={})
file_name = attr.ib(default=None)
file_names = attr.ib(default=[])
class CollectAERender(publish.AbstractCollectRender):
@ -64,14 +64,13 @@ class CollectAERender(publish.AbstractCollectRender):
if family not in ["render", "renderLocal"]: # legacy
continue
item_id = inst.data["members"][0]
comp_id = int(inst.data["members"][0])
work_area_info = CollectAERender.get_stub().get_work_area(
int(item_id))
work_area_info = CollectAERender.get_stub().get_work_area(comp_id)
if not work_area_info:
self.log.warning("Orphaned instance, deleting metadata")
inst_id = inst.get("instance_id") or item_id
inst_id = inst.get("instance_id") or str(comp_id)
CollectAERender.get_stub().remove_instance(inst_id)
continue
@ -84,9 +83,10 @@ class CollectAERender(publish.AbstractCollectRender):
task_name = inst.data.get("task") # legacy
render_q = CollectAERender.get_stub().get_render_info()
render_q = CollectAERender.get_stub().get_render_info(comp_id)
if not render_q:
raise ValueError("No file extension set in Render Queue")
render_item = render_q[0]
subset_name = inst.data["subset"]
instance = AERenderInstance(
@ -103,8 +103,8 @@ class CollectAERender(publish.AbstractCollectRender):
setMembers='',
publish=True,
name=subset_name,
resolutionWidth=render_q.width,
resolutionHeight=render_q.height,
resolutionWidth=render_item.width,
resolutionHeight=render_item.height,
pixelAspect=1,
tileRendering=False,
tilesX=0,
@ -115,16 +115,16 @@ class CollectAERender(publish.AbstractCollectRender):
fps=fps,
app_version=app_version,
publish_attributes=inst.data.get("publish_attributes", {}),
file_name=render_q.file_name
file_names=[item.file_name for item in render_q]
)
comp = compositions_by_id.get(int(item_id))
comp = compositions_by_id.get(comp_id)
if not comp:
raise ValueError("There is no composition for item {}".
format(item_id))
format(comp_id))
instance.outputDir = self._get_output_dir(instance)
instance.comp_name = comp.name
instance.comp_id = item_id
instance.comp_id = comp_id
is_local = "renderLocal" in inst.data["family"] # legacy
if inst.data.get("creator_attributes"):
@ -163,28 +163,30 @@ class CollectAERender(publish.AbstractCollectRender):
start = render_instance.frameStart
end = render_instance.frameEnd
_, ext = os.path.splitext(os.path.basename(render_instance.file_name))
base_dir = self._get_output_dir(render_instance)
expected_files = []
if "#" not in render_instance.file_name: # single frame (mov)W
path = os.path.join(base_dir, "{}_{}_{}.{}".format(
render_instance.asset,
render_instance.subset,
"v{:03d}".format(render_instance.version),
ext.replace('.', '')
))
expected_files.append(path)
else:
for frame in range(start, end + 1):
path = os.path.join(base_dir, "{}_{}_{}.{}.{}".format(
for file_name in render_instance.file_names:
_, ext = os.path.splitext(os.path.basename(file_name))
ext = ext.replace('.', '')
version_str = "v{:03d}".format(render_instance.version)
if "#" not in file_name: # single frame (mov)W
path = os.path.join(base_dir, "{}_{}_{}.{}".format(
render_instance.asset,
render_instance.subset,
"v{:03d}".format(render_instance.version),
str(frame).zfill(self.padding_width),
ext.replace('.', '')
version_str,
ext
))
expected_files.append(path)
else:
for frame in range(start, end + 1):
path = os.path.join(base_dir, "{}_{}_{}.{}.{}".format(
render_instance.asset,
render_instance.subset,
version_str,
str(frame).zfill(self.padding_width),
ext
))
expected_files.append(path)
return expected_files
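A worked example of the expected-file naming with hypothetical values; a '#' in the collected file name marks a sequence:
import os

base_dir, asset, subset, version = "/tmp/out", "sh010", "renderMain", 5
ext, padding_width = "png", 4
version_str = "v{:03d}".format(version)
for frame in range(1, 3):
    print(os.path.join(base_dir, "{}_{}_{}.{}.{}".format(
        asset, subset, version_str, str(frame).zfill(padding_width), ext)))
# -> /tmp/out/sh010_renderMain_v005.0001.png and ..._v005.0002.png;
#    a name without "#" (e.g. mov) collapses to a single file instead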
def _get_output_dir(self, render_instance):

View file

@ -21,41 +21,55 @@ class ExtractLocalRender(publish.Extractor):
def process(self, instance):
stub = get_stub()
staging_dir = instance.data["stagingDir"]
self.log.info("staging_dir::{}".format(staging_dir))
self.log.debug("staging_dir::{}".format(staging_dir))
# pull file name from Render Queue Output module
render_q = stub.get_render_info()
stub.render(staging_dir)
if not render_q:
# pull file names collected from the Render Queue Output module
if not instance.data["file_names"]:
raise ValueError("No file extension set in Render Queue")
_, ext = os.path.splitext(os.path.basename(render_q.file_name))
ext = ext[1:]
first_file_path = None
files = []
self.log.info("files::{}".format(os.listdir(staging_dir)))
for file_name in os.listdir(staging_dir):
files.append(file_name)
if first_file_path is None:
first_file_path = os.path.join(staging_dir,
file_name)
comp_id = instance.data['comp_id']
stub.render(staging_dir, comp_id)
resulting_files = files
if len(files) == 1:
resulting_files = files[0]
representations = []
for file_name in instance.data["file_names"]:
_, ext = os.path.splitext(os.path.basename(file_name))
ext = ext[1:]
repre_data = {
"frameStart": instance.data["frameStart"],
"frameEnd": instance.data["frameEnd"],
"name": ext,
"ext": ext,
"files": resulting_files,
"stagingDir": staging_dir
}
if instance.data["review"]:
repre_data["tags"] = ["review"]
first_file_path = None
files = []
for found_file_name in os.listdir(staging_dir):
if not found_file_name.endswith(ext):
continue
instance.data["representations"] = [repre_data]
files.append(found_file_name)
if first_file_path is None:
first_file_path = os.path.join(staging_dir,
found_file_name)
if not files:
self.log.info("no files")
return
# single file cannot be wrapped in array
resulting_files = files
if len(files) == 1:
resulting_files = files[0]
repre_data = {
"frameStart": instance.data["frameStart"],
"frameEnd": instance.data["frameEnd"],
"name": ext,
"ext": ext,
"files": resulting_files,
"stagingDir": staging_dir
}
first_repre = not representations
if instance.data["review"] and first_repre:
repre_data["tags"] = ["review"]
representations.append(repre_data)
instance.data["representations"] = representations
ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")
# Generate thumbnail.

View file

@ -9,7 +9,7 @@ Context of the given subset doesn't match your current scene.
### How to repair?
You can fix this with "repair" button on the right.
You can fix this with "repair" button on the right and refresh Publish at the bottom right.
</description>
<detail>
### __Detailed Info__ (optional)

View file

@ -1,6 +1,6 @@
import json
import pyblish.api
from openpype.hosts.aftereffects.api import list_instances
from openpype.hosts.aftereffects.api import AfterEffectsHost
class PreCollectRender(pyblish.api.ContextPlugin):
@ -25,7 +25,7 @@ class PreCollectRender(pyblish.api.ContextPlugin):
self.log.debug("Not applicable for New Publisher, skip")
return
for inst in list_instances():
for inst in AfterEffectsHost().list_instances():
if inst.get("creator_attributes"):
raise ValueError("Instance created in New publisher, "
"cannot be published in Pyblish.\n"

View file

@ -1,5 +1,4 @@
import pyblish.api
import argparse
import sys
from pprint import pformat
@ -11,20 +10,40 @@ class CollectCelactionCliKwargs(pyblish.api.Collector):
order = pyblish.api.Collector.order - 0.1
def process(self, context):
parser = argparse.ArgumentParser(prog="celaction")
parser.add_argument("--currentFile",
help="Pass file to Context as `currentFile`")
parser.add_argument("--chunk",
help=("Render chanks on farm"))
parser.add_argument("--frameStart",
help=("Start of frame range"))
parser.add_argument("--frameEnd",
help=("End of frame range"))
parser.add_argument("--resolutionWidth",
help=("Width of resolution"))
parser.add_argument("--resolutionHeight",
help=("Height of resolution"))
passing_kwargs = parser.parse_args(sys.argv[1:]).__dict__
args = list(sys.argv[1:])
self.log.info(str(args))
missing_kwargs = []
passing_kwargs = {}
for key in (
"chunk",
"frameStart",
"frameEnd",
"resolutionWidth",
"resolutionHeight",
"currentFile",
):
arg_key = f"--{key}"
if arg_key not in args:
missing_kwargs.append(key)
continue
arg_idx = args.index(arg_key)
args.pop(arg_idx)
if key != "currentFile":
value = args.pop(arg_idx)
else:
path_parts = []
while arg_idx < len(args):
path_parts.append(args.pop(arg_idx))
value = " ".join(path_parts).strip('"')
passing_kwargs[key] = value
if missing_kwargs:
raise RuntimeError("Missing arguments {}".format(
", ".join(
[f'"{key}"' for key in missing_kwargs]
)
))
self.log.info("Storing kwargs ...")
self.log.debug("_ passing_kwargs: {}".format(pformat(passing_kwargs)))

View file

@ -1,29 +0,0 @@
import pyblish.api
class ValidateUniqueSubsets(pyblish.api.InstancePlugin):
"""Ensure all instances have a unique subset name"""
order = pyblish.api.ValidatorOrder
label = "Validate Unique Subsets"
families = ["render"]
hosts = ["fusion"]
@classmethod
def get_invalid(cls, instance):
context = instance.context
subset = instance.data["subset"]
for other_instance in context:
if other_instance == instance:
continue
if other_instance.data["subset"] == subset:
return [instance] # current instance is invalid
return []
def process(self, instance):
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("Animation content is invalid. See log.")

View file

@ -40,6 +40,7 @@ class Server(threading.Thread):
# Create a TCP/IP socket
self.socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
# Bind the socket to the port
server_address = ("127.0.0.1", port)
@ -91,7 +92,13 @@ class Server(threading.Thread):
self.log.info("wait ttt")
# Receive the data in small chunks and retransmit it
request = None
header = self.connection.recv(10)
try:
header = self.connection.recv(10)
except OSError:
# could happen on MacOS
self.log.info("")
break
if len(header) == 0:
# null data received, socket is closing.
self.log.info(f"[{self.timestamp()}] Connection closing.")

View file

@ -41,7 +41,7 @@ class ExtractThumnail(publish.Extractor):
track_item_name, thumb_frame, ".png")
thumb_path = os.path.join(staging_dir, thumb_file)
thumbnail = track_item.thumbnail(thumb_frame).save(
thumbnail = track_item.thumbnail(thumb_frame, "colour").save(
thumb_path,
format='png'
)

View file

@ -144,13 +144,20 @@ class HoudiniHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
"""
obj_network = hou.node("/obj")
op_ctx = obj_network.createNode(
"null", node_name="OpenPypeContext")
op_ctx = obj_network.createNode("null", node_name="OpenPypeContext")
# A null in houdini by default comes with content inside to visualize
# the null. However since we explicitly want to hide the node lets
# remove the content and disable the display flag of the node
for node in op_ctx.children():
node.destroy()
op_ctx.moveToGoodPosition()
op_ctx.setBuiltExplicitly(False)
op_ctx.setCreatorState("OpenPype")
op_ctx.setComment("OpenPype node to hold context metadata")
op_ctx.setColor(hou.Color((0.081, 0.798, 0.810)))
op_ctx.setDisplayFlag(False)
op_ctx.hide(True)
return op_ctx

View file

@ -103,9 +103,8 @@ class HoudiniCreatorBase(object):
fill it with all collected instances from the scene under its
respective creator identifiers.
If legacy instances are detected in the scene, create
`houdini_cached_legacy_subsets` there and fill it with
all legacy subsets under family as a key.
Create `houdini_cached_legacy_subsets` for any legacy instances
detected in the scene, mapping each family to its instances.
Args:
shared_data (Dict[str, Any]): Shared data.
@ -114,30 +113,31 @@ class HoudiniCreatorBase(object):
Dict[str, Any]: Shared data dictionary.
"""
if shared_data.get("houdini_cached_subsets") is None:
shared_data["houdini_cached_subsets"] = {}
if shared_data.get("houdini_cached_legacy_subsets") is None:
shared_data["houdini_cached_legacy_subsets"] = {}
cached_instances = lsattr("id", "pyblish.avalon.instance")
for i in cached_instances:
if not i.parm("creator_identifier"):
# we have legacy instance
family = i.parm("family").eval()
if family not in shared_data[
"houdini_cached_legacy_subsets"]:
shared_data["houdini_cached_legacy_subsets"][
family] = [i]
else:
shared_data[
"houdini_cached_legacy_subsets"][family].append(i)
continue
if shared_data.get("houdini_cached_subsets") is not None:
cache = dict()
cache_legacy = dict()
for node in lsattr("id", "pyblish.avalon.instance"):
creator_identifier_parm = node.parm("creator_identifier")
if creator_identifier_parm:
# creator instance
creator_id = creator_identifier_parm.eval()
cache.setdefault(creator_id, []).append(node)
creator_id = i.parm("creator_identifier").eval()
if creator_id not in shared_data["houdini_cached_subsets"]:
shared_data["houdini_cached_subsets"][creator_id] = [i]
else:
shared_data[
"houdini_cached_subsets"][creator_id].append(i) # noqa
# legacy instance
family_parm = node.parm("family")
if not family_parm:
# must be a broken instance
continue
family = family_parm.eval()
cache_legacy.setdefault(family, []).append(node)
shared_data["houdini_cached_subsets"] = cache
shared_data["houdini_cached_legacy_subsets"] = cache_legacy
return shared_data
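The resulting cache layout, sketched with hypothetical identifier and family names:
# shared_data["houdini_cached_subsets"] == {
#     "io.openpype.creators.houdini.pointcache": [node_a, node_b]}
# shared_data["houdini_cached_legacy_subsets"] == {"pointcache": [node_c]}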
@staticmethod

View file

@ -106,7 +106,7 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin):
# If no valid output node is set then ignore it as validation
# will be checking those cases.
self.log.debug(
"No output node found, skipping " "collecting of inputs.."
"No output node found, skipping collecting of inputs.."
)
return

View file

@ -22,7 +22,7 @@ class ValidateAlembicInputNode(pyblish.api.InstancePlugin):
invalid = self.get_invalid(instance)
if invalid:
raise PublishValidationError(
("Primitive types found that are not supported"
("Primitive types found that are not supported "
"for Alembic output."),
title=self.label
)

View file

@ -14,3 +14,10 @@ class MaxAddon(OpenPypeModule, IHostAddon):
def get_workfile_extensions(self):
return [".max"]
def get_launch_hook_paths(self, app):
if app.host_name != self.host_name:
return []
return [
os.path.join(MAX_HOST_DIR, "hooks")
]

View file

@ -119,7 +119,7 @@ class OpenPypeMenu(object):
def manage_callback(self):
"""Callback to show Scene Manager/Inventory tool."""
host_tools.show_subset_manager(parent=self.main_widget)
host_tools.show_scene_inventory(parent=self.main_widget)
def library_callback(self):
"""Callback to show Library Loader tool."""

View file

@ -2,6 +2,7 @@
"""Pipeline tools for OpenPype Houdini integration."""
import os
import logging
from operator import attrgetter
import json
@ -141,5 +142,25 @@ def ls() -> list:
if rt.getUserProp(obj, "id") == AVALON_CONTAINER_ID
]
for container in sorted(containers, key=lambda name: container.name):
for container in sorted(containers, key=attrgetter("name")):
yield lib.read(container)
def containerise(name: str, nodes: list, context, loader=None, suffix="_CON"):
data = {
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"name": name,
"namespace": "",
"loader": loader,
"representation": context["representation"]["_id"],
}
container_name = f"{name}{suffix}"
container = rt.container(name=container_name)
for node in nodes:
node.Parent = container
if not lib.imprint(container_name, data):
print(f"imprinting of {container_name} failed.")
return container
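A hedged usage sketch of the new 'containerise' helper (hypothetical nodes and a 'context' as passed to loaders):
container = containerise(
    "modelMain", nodes=[node_a, node_b], context=context, loader="AbcLoader")
# -> a "modelMain_CON" container node with imprinted OpenPype metadata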

View file

@ -0,0 +1,19 @@
# -*- coding: utf-8 -*-
"""Pre-launch hook to inject python environment."""
from openpype.lib import PreLaunchHook
import os
class InjectPythonPath(PreLaunchHook):
"""Inject OpenPype environment to 3dsmax.
Note that this works in combination with the 3dsmax startup script that
translates it back to PYTHONPATH, for cases when 3dsmax drops the
PYTHONPATH environment variable.
Hook `GlobalHostDataHook` must be executed before this hook.
"""
app_groups = ["3dsmax"]
def execute(self):
self.launch_context.env["MAX_PYTHONPATH"] = os.environ["PYTHONPATH"]

View file

@ -6,8 +6,10 @@ Because of limited api, alembics can be only loaded, but not easily updated.
"""
import os
from openpype.pipeline import (
load
load, get_representation_path
)
from openpype.hosts.max.api.pipeline import containerise
from openpype.hosts.max.api import lib
class AbcLoader(load.LoaderPlugin):
@ -52,14 +54,47 @@ importFile @"{file_path}" #noPrompt
abc_container = abc_containers.pop()
container_name = f"{name}_CON"
container = rt.container(name=container_name)
abc_container.Parent = container
return containerise(
name, [abc_container], context, loader=self.__class__.__name__)
return container
def update(self, container, representation):
from pymxs import runtime as rt
path = get_representation_path(representation)
node = rt.getNodeByName(container["instance_node"])
alembic_objects = self.get_container_children(node, "AlembicObject")
for alembic_object in alembic_objects:
alembic_object.source = path
lib.imprint(container["instance_node"], {
"representation": str(representation["_id"])
})
def switch(self, container, representation):
self.update(container, representation)
def remove(self, container):
from pymxs import runtime as rt
node = container["node"]
rt.delete(node)
@staticmethod
def get_container_children(parent, type_name):
from pymxs import runtime as rt
def list_children(node):
children = []
for c in node.Children:
children.append(c)
children += list_children(c)
return children
filtered = []
for child in list_children(parent):
class_type = str(rt.classOf(child.baseObject))
if class_type == type_name:
filtered.append(child)
return filtered

View file

@ -2,8 +2,11 @@
(
local sysPath = dotNetClass "System.IO.Path"
local sysDir = dotNetClass "System.IO.Directory"
local localScript = getThisScriptFilename()
local localScript = getThisScriptFilename()
local startup = sysPath.Combine (sysPath.GetDirectoryName localScript) "startup.py"
local pythonpath = systemTools.getEnvVariable "MAX_PYTHONPATH"
systemTools.setEnvVariable "PYTHONPATH" pythonpath
python.ExecuteFile startup
)

View file

@ -26,7 +26,6 @@ from .workio import (
)
from .lib import (
export_alembic,
lsattr,
lsattrs,
read,
@ -58,7 +57,6 @@ __all__ = [
"work_root",
# Utility functions
"export_alembic",
"lsattr",
"lsattrs",
"read",

View file

@ -289,73 +289,6 @@ def pairwise(iterable):
return zip(a, a)
def export_alembic(nodes,
file,
frame_range=None,
write_uv=True,
write_visibility=True,
attribute_prefix=None):
"""Wrap native MEL command with limited set of arguments
Arguments:
nodes (list): Long names of nodes to cache
file (str): Absolute path to output destination
frame_range (tuple, optional): Start- and end-frame of cache,
default to current animation range.
write_uv (bool, optional): Whether or not to include UVs,
default to True
write_visibility (bool, optional): Turn on to store the visibility
state of objects in the Alembic file. Otherwise, all objects are
considered visible, default to True
attribute_prefix (str, optional): Include all user-defined
attributes with this prefix.
"""
if frame_range is None:
frame_range = (
cmds.playbackOptions(query=True, ast=True),
cmds.playbackOptions(query=True, aet=True)
)
options = [
("file", file),
("frameRange", "%s %s" % frame_range),
] + [("root", mesh) for mesh in nodes]
if isinstance(attribute_prefix, string_types):
# Include all attributes prefixed with "mb"
# TODO(marcus): This would be a good candidate for
# external registration, so that the developer
# doesn't have to edit this function to modify
# the behavior of Alembic export.
options.append(("attrPrefix", str(attribute_prefix)))
if write_uv:
options.append(("uvWrite", ""))
if write_visibility:
options.append(("writeVisibility", ""))
# Generate MEL command
mel_args = list()
for key, value in options:
mel_args.append("-{0} {1}".format(key, value))
mel_args_string = " ".join(mel_args)
mel_cmd = "AbcExport -j \"{0}\"".format(mel_args_string)
# For debuggability, put the string passed to MEL in the Script editor.
print("mel.eval('%s')" % mel_cmd)
return mel.eval(mel_cmd)
def collect_animation_data(fps=False):
"""Get the basic animation data

View file

@ -1132,6 +1132,7 @@ class RenderProductsRenderman(ARenderProducts):
"""
renderer = "renderman"
unmerged_aovs = {"PxrCryptomatte"}
def get_render_products(self):
"""Get all AOVs.
@ -1181,6 +1182,17 @@ class RenderProductsRenderman(ARenderProducts):
if not display_types.get(display["driverNode"]["type"]):
continue
has_cryptomatte = cmds.ls(type=self.unmerged_aovs)
matte_enabled = False
if has_cryptomatte:
for cryptomatte in has_cryptomatte:
cryptomatte_aov = cryptomatte
matte_name = "cryptomatte"
rman_globals = cmds.listConnections(cryptomatte +
".message")
if rman_globals:
matte_enabled = True
aov_name = name
if aov_name == "rmanDefaultDisplay":
aov_name = "beauty"
@ -1199,6 +1211,15 @@ class RenderProductsRenderman(ARenderProducts):
camera=camera,
multipart=True
)
if has_cryptomatte and matte_enabled:
cryptomatte = RenderProduct(
productName=matte_name,
aov=cryptomatte_aov,
ext=extensions,
camera=camera,
multipart=True
)
else:
# this code should handle the case where no multipart
# capable format is selected. But since it involves
@ -1218,6 +1239,9 @@ class RenderProductsRenderman(ARenderProducts):
products.append(product)
if has_cryptomatte and matte_enabled:
products.append(cryptomatte)
return products
def get_files(self, product):

View file

@ -22,17 +22,25 @@ class RenderSettings(object):
_image_prefix_nodes = {
'vray': 'vraySettings.fileNamePrefix',
'arnold': 'defaultRenderGlobals.imageFilePrefix',
'renderman': 'defaultRenderGlobals.imageFilePrefix',
'renderman': 'rmanGlobals.imageFileFormat',
'redshift': 'defaultRenderGlobals.imageFilePrefix'
}
_image_prefixes = {
'vray': get_current_project_settings()["maya"]["RenderSettings"]["vray_renderer"]["image_prefix"], # noqa
'arnold': get_current_project_settings()["maya"]["RenderSettings"]["arnold_renderer"]["image_prefix"], # noqa
'renderman': '<Scene>/<layer>/<layer>{aov_separator}<aov>',
'renderman': get_current_project_settings()["maya"]["RenderSettings"]["renderman_renderer"]["image_prefix"], # noqa
'redshift': get_current_project_settings()["maya"]["RenderSettings"]["redshift_renderer"]["image_prefix"] # noqa
}
# Renderman only
_image_dir = {
'renderman': get_current_project_settings()["maya"]["RenderSettings"]["renderman_renderer"]["image_dir"], # noqa
'cryptomatte': get_current_project_settings()["maya"]["RenderSettings"]["renderman_renderer"]["cryptomatte_dir"], # noqa
'imageDisplay': get_current_project_settings()["maya"]["RenderSettings"]["renderman_renderer"]["imageDisplay_dir"], # noqa
"watermark": get_current_project_settings()["maya"]["RenderSettings"]["renderman_renderer"]["watermark_dir"] # noqa
}
_aov_chars = {
"dot": ".",
"dash": "-",
@ -81,7 +89,6 @@ class RenderSettings(object):
prefix, type="string") # noqa
else:
print("{0} isn't a supported renderer to autoset settings.".format(renderer)) # noqa
# TODO: handle not having res values in the doc
width = asset_doc["data"].get("resolutionWidth")
height = asset_doc["data"].get("resolutionHeight")
@ -97,6 +104,13 @@ class RenderSettings(object):
self._set_redshift_settings(width, height)
mel.eval("redshiftUpdateActiveAovList")
if renderer == "renderman":
image_dir = self._image_dir["renderman"]
cmds.setAttr("rmanGlobals.imageOutputDir",
image_dir, type="string")
self._set_renderman_settings(width, height,
aov_separator)
def _set_arnold_settings(self, width, height):
"""Sets settings for Arnold."""
from mtoa.core import createOptions # noqa
@ -202,6 +216,66 @@ class RenderSettings(object):
cmds.setAttr("defaultResolution.height", height)
self._additional_attribs_setter(additional_options)
def _set_renderman_settings(self, width, height, aov_separator):
"""Sets settings for Renderman"""
rman_render_presets = (
self._project_settings
["maya"]
["RenderSettings"]
["renderman_renderer"]
)
display_filters = rman_render_presets["display_filters"]
d_filters_number = len(display_filters)
for i in range(d_filters_number):
d_node = cmds.ls(typ=display_filters[i])
if len(d_node) > 0:
filter_nodes = d_node[0]
else:
filter_nodes = cmds.createNode(display_filters[i])
cmds.connectAttr(filter_nodes + ".message",
"rmanGlobals.displayFilters[%i]" % i,
force=True)
if filter_nodes.startswith("PxrImageDisplayFilter"):
imageDisplay_dir = self._image_dir["imageDisplay"]
imageDisplay_dir = imageDisplay_dir.replace("{aov_separator}",
aov_separator)
cmds.setAttr(filter_nodes + ".filename",
imageDisplay_dir, type="string")
sample_filters = rman_render_presets["sample_filters"]
s_filters_number = len(sample_filters)
for n in range(s_filters_number):
s_node = cmds.ls(typ=sample_filters[n])
if len(s_node) > 0:
filter_nodes = s_node[0]
else:
filter_nodes = cmds.createNode(sample_filters[n])
cmds.connectAttr(filter_nodes + ".message",
"rmanGlobals.sampleFilters[%i]" % n,
force=True)
if filter_nodes.startswith("PxrCryptomatte"):
matte_dir = self._image_dir["cryptomatte"]
matte_dir = matte_dir.replace("{aov_separator}",
aov_separator)
cmds.setAttr(filter_nodes + ".filename",
matte_dir, type="string")
elif filter_nodes.startswith("PxrWatermarkFilter"):
watermark_dir = self._image_dir["watermark"]
watermark_dir = watermark_dir.replace("{aov_separator}",
aov_separator)
cmds.setAttr(filter_nodes + ".filename",
watermark_dir, type="string")
additional_options = rman_render_presets["additional_options"]
self._set_global_output_settings()
cmds.setAttr("defaultResolution.width", width)
cmds.setAttr("defaultResolution.height", height)
self._additional_attribs_setter(additional_options)
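Sketch of the outcome with hypothetical settings: with display_filters == ["PxrImageDisplayFilter"] and aov_separator == "_", one filter node is created (or reused), connected to rmanGlobals.displayFilters[0], and its .filename receives the imageDisplay_dir template with "{aov_separator}" replaced:
# imageDisplay_dir == "<ws>/renders/<scene>{aov_separator}imageDisplay"  (hypothetical)
# -> PxrImageDisplayFilter1.filename == "<ws>/renders/<scene>_imageDisplay"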
def _set_vray_settings(self, aov_separator, width, height):
# type: (str, int, int) -> None
"""Sets important settings for Vray."""

View file

@ -28,7 +28,7 @@ class MayaTemplateBuilder(AbstractTemplateBuilder):
Args:
path (str): A path to current template (usually given by
get_template_path implementation)
get_template_preset implementation)
Returns:
bool: Whether the template was successfully imported or not
@ -240,7 +240,7 @@ class MayaPlaceholderLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin):
cmds.setAttr(node + ".hiddenInOutliner", True)
def load_succeed(self, placeholder, container):
self._parent_in_hierarhchy(placeholder, container)
self._parent_in_hierarchy(placeholder, container)
def _parent_in_hierarchy(self, placeholder, container):
"""Parent loaded container to placeholder's parent.

View file

@ -44,3 +44,6 @@ class CreateAnimation(plugin.Creator):
# Default to not send to farm.
self.data["farm"] = False
self.data["priority"] = 50
# Default to write normals.
self.data["writeNormals"] = True

View file

@ -6,7 +6,7 @@ class CreateMultiverseUsd(plugin.Creator):
name = "mvUsdMain"
label = "Multiverse USD Asset"
family = "mvUsd"
family = "usd"
icon = "cubes"
def __init__(self, *args, **kwargs):

View file

@ -72,15 +72,19 @@ class CreateRender(plugin.Creator):
def __init__(self, *args, **kwargs):
"""Constructor."""
super(CreateRender, self).__init__(*args, **kwargs)
deadline_settings = get_system_settings()["modules"]["deadline"]
if not deadline_settings["enabled"]:
self.deadline_servers = {}
return
# Defaults
self._project_settings = get_project_settings(
legacy_io.Session["AVALON_PROJECT"])
if self._project_settings["maya"]["RenderSettings"]["apply_render_settings"]: # noqa
lib_rendersettings.RenderSettings().set_default_renderer_settings()
# Deadline-only
manager = ModulesManager()
deadline_settings = get_system_settings()["modules"]["deadline"]
if not deadline_settings["enabled"]:
self.deadline_servers = {}
return
self.deadline_module = manager.modules_by_name["deadline"]
try:
default_servers = deadline_settings["deadline_urls"]
@ -193,8 +197,6 @@ class CreateRender(plugin.Creator):
pool_names = []
default_priority = 50
self.server_aliases = list(self.deadline_servers.keys())
self.data["deadlineServers"] = self.server_aliases
self.data["suspendPublishJob"] = False
self.data["review"] = True
self.data["extendFrames"] = False
@ -233,6 +235,9 @@ class CreateRender(plugin.Creator):
raise RuntimeError("Both Deadline and Muster are enabled")
if deadline_enabled:
self.server_aliases = list(self.deadline_servers.keys())
self.data["deadlineServers"] = self.server_aliases
try:
deadline_url = self.deadline_servers["default"]
except KeyError:
@@ -254,6 +259,19 @@ class CreateRender(plugin.Creator):
default_priority)
self.data["tile_priority"] = tile_priority
pool_setting = (self._project_settings["deadline"]
["publish"]
["CollectDeadlinePools"])
primary_pool = pool_setting["primary_pool"]
self.data["primaryPool"] = self._set_default_pool(pool_names,
primary_pool)
# We add a string "-" to allow the user to not
# set any secondary pools
pool_names = ["-"] + pool_names
secondary_pool = pool_setting["secondary_pool"]
self.data["secondaryPool"] = self._set_default_pool(pool_names,
secondary_pool)
if muster_enabled:
self.log.info(">>> Loading Muster credentials ...")
self._load_credentials()
@@ -273,18 +291,6 @@ class CreateRender(plugin.Creator):
self.log.info(" - pool: {}".format(pool["name"]))
pool_names.append(pool["name"])
pool_setting = (self._project_settings["deadline"]
["publish"]
["CollectDeadlinePools"])
primary_pool = pool_setting["primary_pool"]
self.data["primaryPool"] = self._set_default_pool(pool_names,
primary_pool)
# We add a string "-" to allow the user to not
# set any secondary pools
pool_names = ["-"] + pool_names
secondary_pool = pool_setting["secondary_pool"]
self.data["secondaryPool"] = self._set_default_pool(pool_names,
secondary_pool)
self.options = {"useSelection": False} # Force no content
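The `CollectDeadlinePools` lookup above expects a small settings block. A minimal sketch of its shape, with hypothetical values rather than this commit's defaults:

```python
# Hypothetical project-settings fragment read by the pool lookup above;
# actual pool names depend on the studio's Deadline configuration.
pool_setting = {
    "deadline": {
        "publish": {
            "CollectDeadlinePools": {
                "primary_pool": "maya",  # assumed example value
                "secondary_pool": ""     # empty -> user can pick "-"
            }
        }
    }
}
```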
def _set_default_pool(self, pool_names, pool_value):

View file

@@ -1,5 +1,7 @@
# -*- coding: utf-8 -*-
import maya.cmds as cmds
from maya import mel
import os
from openpype.pipeline import (
load,
@@ -11,12 +13,13 @@ from openpype.hosts.maya.api.lib import (
unique_namespace
)
from openpype.hosts.maya.api.pipeline import containerise
from openpype.client import get_representation_by_id
class MultiverseUsdLoader(load.LoaderPlugin):
"""Read USD data in a Multiverse Compound"""
families = ["model", "mvUsd", "mvUsdComposition", "mvUsdOverride",
families = ["model", "usd", "mvUsdComposition", "mvUsdOverride",
"pointcache", "animation"]
representations = ["usd", "usda", "usdc", "usdz", "abc"]
@@ -26,7 +29,6 @@ class MultiverseUsdLoader(load.LoaderPlugin):
color = "orange"
def load(self, context, name=None, namespace=None, options=None):
asset = context['asset']['name']
namespace = namespace or unique_namespace(
asset + "_",
@@ -34,22 +36,20 @@ class MultiverseUsdLoader(load.LoaderPlugin):
suffix="_",
)
# Create the shape
# Make sure we can load the plugin
cmds.loadPlugin("MultiverseForMaya", quiet=True)
import multiverse
# Create the shape
shape = None
transform = None
with maintained_selection():
cmds.namespace(addNamespace=namespace)
with namespaced(namespace, new=False):
import multiverse
shape = multiverse.CreateUsdCompound(self.fname)
transform = cmds.listRelatives(
shape, parent=True, fullPath=True)[0]
# Lock the shape node so the user cannot delete it.
cmds.lockNode(shape, lock=True)
nodes = [transform, shape]
self[:] = nodes
@@ -70,15 +70,34 @@ class MultiverseUsdLoader(load.LoaderPlugin):
shapes = cmds.ls(members, type="mvUsdCompoundShape")
assert shapes, "Cannot find mvUsdCompoundShape in container"
path = get_representation_path(representation)
project_name = representation["context"]["project"]["name"]
prev_representation_id = cmds.getAttr("{}.representation".format(node))
prev_representation = get_representation_by_id(project_name,
prev_representation_id)
prev_path = os.path.normpath(prev_representation["data"]["path"])
# Make sure we can load the plugin
cmds.loadPlugin("MultiverseForMaya", quiet=True)
import multiverse
for shape in shapes:
multiverse.SetUsdCompoundAssetPaths(shape, [path])
asset_paths = multiverse.GetUsdCompoundAssetPaths(shape)
asset_paths = [os.path.normpath(p) for p in asset_paths]
assert asset_paths.count(prev_path) == 1, \
"Couldn't find matching path (or too many)"
prev_path_idx = asset_paths.index(prev_path)
path = get_representation_path(representation)
asset_paths[prev_path_idx] = path
multiverse.SetUsdCompoundAssetPaths(shape, asset_paths)
cmds.setAttr("{}.representation".format(node),
str(representation["_id"]),
type="string")
mel.eval('refreshEditorTemplates;')
def switch(self, container, representation):
self.update(container, representation)
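The `update` above swaps exactly one layer in the compound's asset-path list. A pure-Python sketch of that replacement, with hypothetical POSIX paths:

```python
import os

# The previous representation's path is normalized and must occur exactly
# once among the compound's layers; only that entry is replaced.
asset_paths = [os.path.normpath(p) for p in
               ["/pub/asset_v001.usd", "/pub/look_v002.usd"]]
prev_path = os.path.normpath("/pub/asset_v001.usd")
new_path = "/pub/asset_v002.usd"

assert asset_paths.count(prev_path) == 1, \
    "Couldn't find matching path (or too many)"
asset_paths[asset_paths.index(prev_path)] = new_path
print(asset_paths)  # ['/pub/asset_v002.usd', '/pub/look_v002.usd']
```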

View file

@@ -0,0 +1,132 @@
# -*- coding: utf-8 -*-
import maya.cmds as cmds
from maya import mel
import os
import qargparse
from openpype.pipeline import (
load,
get_representation_path
)
from openpype.hosts.maya.api.lib import (
maintained_selection
)
from openpype.hosts.maya.api.pipeline import containerise
from openpype.client import get_representation_by_id
class MultiverseUsdOverLoader(load.LoaderPlugin):
"""Load a USD override into an existing Multiverse Compound"""
families = ["mvUsdOverride"]
representations = ["usda", "usd", "usdz"]
label = "Load Usd Override into Compound"
order = -10
icon = "code-fork"
color = "orange"
options = [
qargparse.String(
"Which Compound",
label="Compound",
help="Select which compound to add this as a layer to."
)
]
def load(self, context, name=None, namespace=None, options=None):
current_usd = cmds.ls(selection=True,
type="mvUsdCompoundShape",
dag=True,
long=True)
if len(current_usd) != 1:
self.log.error("Current selection invalid: '{}', "
"must contain exactly 1 mvUsdCompoundShape."
"".format(current_usd))
return
# Make sure we can load the plugin
cmds.loadPlugin("MultiverseForMaya", quiet=True)
import multiverse
nodes = current_usd
with maintained_selection():
multiverse.AddUsdCompoundAssetPath(current_usd[0], self.fname)
namespace = current_usd[0].split("|")[1].split(":")[0]
container = containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
cmds.addAttr(container, longName="mvUsdCompoundShape",
niceName="mvUsdCompoundShape", dataType="string")
cmds.setAttr(container + ".mvUsdCompoundShape",
current_usd[0], type="string")
return container
def update(self, container, representation):
# type: (dict, dict) -> None
"""Update container with specified representation."""
cmds.loadPlugin("MultiverseForMaya", quiet=True)
import multiverse
node = container['objectName']
assert cmds.objExists(node), "Missing container"
members = cmds.sets(node, query=True) or []
shapes = cmds.ls(members, type="mvUsdCompoundShape")
assert shapes, "Cannot find mvUsdCompoundShape in container"
mvShape = container['mvUsdCompoundShape']
assert mvShape, "Missing mv source"
project_name = representation["context"]["project"]["name"]
prev_representation_id = cmds.getAttr("{}.representation".format(node))
prev_representation = get_representation_by_id(project_name,
prev_representation_id)
prev_path = os.path.normpath(prev_representation["data"]["path"])
path = get_representation_path(representation)
for shape in shapes:
asset_paths = multiverse.GetUsdCompoundAssetPaths(shape)
asset_paths = [os.path.normpath(p) for p in asset_paths]
assert asset_paths.count(prev_path) == 1, \
"Couldn't find matching path (or too many)"
prev_path_idx = asset_paths.index(prev_path)
asset_paths[prev_path_idx] = path
multiverse.SetUsdCompoundAssetPaths(shape, asset_paths)
cmds.setAttr("{}.representation".format(node),
str(representation["_id"]),
type="string")
mel.eval('refreshEditorTemplates;')
def switch(self, container, representation):
self.update(container, representation)
def remove(self, container):
# type: (dict) -> None
"""Remove loaded container."""
# Delete container and its contents
if cmds.objExists(container['objectName']):
members = cmds.sets(container['objectName'], query=True) or []
cmds.delete([container['objectName']] + members)
# Remove the namespace, if empty
namespace = container['namespace']
if cmds.namespace(exists=namespace):
members = cmds.namespaceInfo(namespace, listNamespace=True)
if not members:
cmds.namespace(removeNamespace=namespace)
else:
self.log.warning("Namespace not deleted because it "
"still has members: %s", namespace)

View file

@@ -26,7 +26,8 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
"rig",
"camerarig",
"xgen",
"staticMesh"]
"staticMesh",
"mvLook"]
representations = ["ma", "abc", "fbx", "mb"]
label = "Reference"

View file

@@ -74,13 +74,6 @@ class CollectInstances(pyblish.api.ContextPlugin):
objectset = cmds.ls("*.id", long=True, type="objectSet",
recursive=True, objectsOnly=True)
ctx_frame_start = context.data['frameStart']
ctx_frame_end = context.data['frameEnd']
ctx_handle_start = context.data['handleStart']
ctx_handle_end = context.data['handleEnd']
ctx_frame_start_handle = context.data['frameStartHandle']
ctx_frame_end_handle = context.data['frameEndHandle']
context.data['objectsets'] = objectset
for objset in objectset:
@@ -156,34 +149,20 @@ class CollectInstances(pyblish.api.ContextPlugin):
# Append start frame and end frame to label if present
if "frameStart" in data and "frameEnd" in data:
# if frame range on maya set is the same as full shot range
# adjust the values to match the asset data
if (ctx_frame_start_handle == data["frameStart"]
and ctx_frame_end_handle == data["frameEnd"]): # noqa: W503, E501
data["frameStartHandle"] = ctx_frame_start_handle
data["frameEndHandle"] = ctx_frame_end_handle
data["frameStart"] = ctx_frame_start
data["frameEnd"] = ctx_frame_end
data["handleStart"] = ctx_handle_start
data["handleEnd"] = ctx_handle_end
# if there are user values on start and end frame not matching
# the asset, use them
else:
if "handles" in data:
data["handleStart"] = data["handles"]
data["handleEnd"] = data["handles"]
else:
data["handleStart"] = 0
data["handleEnd"] = 0
data["frameStartHandle"] = data["frameStart"] - data["handleStart"] # noqa: E501
data["frameEndHandle"] = data["frameEnd"] + data["handleEnd"] # noqa: E501
# Backwards compatibility for 'handles' data
if "handles" in data:
data["handleStart"] = data["handles"]
data["handleEnd"] = data["handles"]
data.pop('handles')
# Take handles from context if not set locally on the instance
for key in ["handleStart", "handleEnd"]:
if key not in data:
data[key] = context.data[key]
data["frameStartHandle"] = data["frameStart"] - data["handleStart"] # noqa: E501
data["frameEndHandle"] = data["frameEnd"] + data["handleEnd"] # noqa: E501
label += " [{0}-{1}]".format(int(data["frameStartHandle"]),
int(data["frameEndHandle"]))
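The handle fallback above is easier to see with concrete numbers; a pure-Python sketch with hypothetical frame values:

```python
# Legacy 'handles' splits into handleStart/handleEnd; anything still
# missing is inherited from the context before the handle-inclusive
# range is computed.
context = {"handleStart": 8, "handleEnd": 8}
data = {"frameStart": 1001, "frameEnd": 1100, "handles": 10}

if "handles" in data:
    data["handleStart"] = data["handleEnd"] = data.pop("handles")
for key in ("handleStart", "handleEnd"):
    if key not in data:
        data[key] = context[key]

data["frameStartHandle"] = data["frameStart"] - data["handleStart"]
data["frameEndHandle"] = data["frameEnd"] + data["handleEnd"]
print(data["frameStartHandle"], data["frameEndHandle"])  # 991 1110
```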

View file

@@ -440,7 +440,8 @@ class CollectLook(pyblish.api.InstancePlugin):
for res in self.collect_resources(n):
instance.data["resources"].append(res)
self.log.info("Collected resources: {}".format(instance.data["resources"]))
self.log.info("Collected resources: {}".format(
instance.data["resources"]))
# Log warning when no relevant sets were retrieved for the look.
if (
@@ -548,6 +549,11 @@ class CollectLook(pyblish.api.InstancePlugin):
if not cmds.attributeQuery(attr, node=node, exists=True):
continue
attribute = "{}.{}".format(node, attr)
# We don't support mixed-type attributes yet.
if cmds.attributeQuery(attr, node=node, multi=True):
self.log.warning("Attribute '{}' is mixed-type and is "
"not supported yet.".format(attribute))
continue
if cmds.getAttr(attribute, type=True) == "message":
continue
node_attributes[attr] = cmds.getAttr(attribute)

View file

@@ -21,37 +21,68 @@ COLOUR_SPACES = ['sRGB', 'linear', 'auto']
MIPMAP_EXTENSIONS = ['tdl']
def get_look_attrs(node):
"""Returns attributes of a node that are important for the look.
class _NodeTypeAttrib(object):
"""Describes the file-path and colour-space attributes of a node type"""
These are the "changed" attributes (those that have edits applied
in the current scene).
def __init__(self, name, fname, computed_fname=None, colour_space=None):
self.name = name
self.fname = fname
self.computed_fname = computed_fname or fname
self.colour_space = colour_space or "colorSpace"
Returns:
list: Attribute names to extract
def get_fname(self, node):
return "{}.{}".format(node, self.fname)
def get_computed_fname(self, node):
return "{}.{}".format(node, self.computed_fname)
def get_colour_space(self, node):
return "{}.{}".format(node, self.colour_space)
def __str__(self):
return ("_NodeTypeAttrib(name={}, fname={}, "
"computed_fname={}, colour_space={})".format(
self.name, self.fname, self.computed_fname, self.colour_space))
NODETYPES = {
"file": [_NodeTypeAttrib("file", "fileTextureName",
"computedFileTextureNamePattern")],
"aiImage": [_NodeTypeAttrib("aiImage", "filename")],
"RedshiftNormalMap": [_NodeTypeAttrib("RedshiftNormalMap", "tex0")],
"dlTexture": [_NodeTypeAttrib("dlTexture", "textureFile",
None, "textureFile_meta_colorspace")],
"dlTriplanar": [_NodeTypeAttrib("dlTriplanar", "colorTexture",
None, "colorTexture_meta_colorspace"),
_NodeTypeAttrib("dlTriplanar", "floatTexture",
None, "floatTexture_meta_colorspace"),
_NodeTypeAttrib("dlTriplanar", "heightTexture",
None, "heightTexture_meta_colorspace")]
}
def get_file_paths_for_node(node):
"""Gets all the file paths in this node.
Returns all filepaths that this node references. Some node types only
reference one, but others, like dlTriplanar, can reference 3.
Args:
node (str): Name of the Maya node
Returns:
list(str): All evaluated file paths found on the node's attributes.
"""
# When referenced get only attributes that are "changed since file open"
# which includes any reference edits, otherwise take *all* user defined
# attributes
is_referenced = cmds.referenceQuery(node, isNodeReferenced=True)
result = cmds.listAttr(node, userDefined=True,
changedSinceFileOpen=is_referenced) or []
# `cbId` is added when a scene is saved, ignore by default
if "cbId" in result:
result.remove("cbId")
node_type = cmds.nodeType(node)
if node_type not in NODETYPES:
return []
# For shapes allow render stat changes
if cmds.objectType(node, isAType="shape"):
attrs = cmds.listAttr(node, changedSinceFileOpen=True) or []
for attr in attrs:
if attr in SHAPE_ATTRS:
result.append(attr)
elif attr.startswith('ai'):
result.append(attr)
return result
paths = []
for node_type_attr in NODETYPES[node_type]:
fname = cmds.getAttr("{}.{}".format(node, node_type_attr.fname))
paths.append(fname)
return paths
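A minimal usage sketch of the lookup, assuming a running Maya session and that the helpers above are importable; the node and path are hypothetical:

```python
from maya import cmds

# Create a 'file' texture node and point it at a (hypothetical) UDIM set.
file_node = cmds.shadingNode("file", asTexture=True)
cmds.setAttr(file_node + ".fileTextureName",
             "/textures/diffuse.<UDIM>.exr", type="string")

# NODETYPES drives the lookup; node types missing from it return [].
print(get_file_paths_for_node(file_node))
# ['/textures/diffuse.<UDIM>.exr']
```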
def node_uses_image_sequence(node):
@@ -69,13 +100,29 @@ def node_uses_image_sequence(node):
"""
# useFrameExtension indicates an explicit image sequence
node_path = get_file_node_path(node).lower()
paths = get_file_node_paths(node)
paths = [path.lower() for path in paths]
# The following tokens imply a sequence
patterns = ["<udim>", "<tile>", "<uvtile>", "u<u>_v<v>", "<frame0"]
return (cmds.getAttr('%s.useFrameExtension' % node) or
any(pattern in node_path for pattern in patterns))
def pattern_in_paths(patterns, paths):
"""Helper returning whether any pattern occurs in any of the paths"""
for pattern in patterns:
for path in paths:
if pattern in path:
return True
return False
node_type = cmds.nodeType(node)
if node_type == 'dlTexture':
return (cmds.getAttr('{}.useImageSequence'.format(node)) or
pattern_in_paths(patterns, paths))
elif node_type == "file":
return (cmds.getAttr('{}.useFrameExtension'.format(node)) or
pattern_in_paths(patterns, paths))
return False
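The token check reduces to a simple membership test; a pure-Python sketch:

```python
# Any of these markers in a lower-cased path flags an image sequence.
patterns = ["<udim>", "<tile>", "<uvtile>", "u<u>_v<v>", "<frame0"]
path = "/textures/diffuse.<UDIM>.exr".lower()
print(any(pattern in path for pattern in patterns))  # True
```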
def seq_to_glob(path):
@@ -132,7 +179,7 @@ def seq_to_glob(path):
return path
def get_file_node_path(node):
def get_file_node_paths(node):
"""Get the file paths used by a Maya file node.
Args:
@@ -158,15 +205,9 @@ def get_file_node_path(node):
"<uvtile>"]
lower = texture_pattern.lower()
if any(pattern in lower for pattern in patterns):
return texture_pattern
return [texture_pattern]
if cmds.nodeType(node) == 'aiImage':
return cmds.getAttr('{0}.filename'.format(node))
if cmds.nodeType(node) == 'RedshiftNormalMap':
return cmds.getAttr('{}.tex0'.format(node))
# otherwise use fileTextureName
return cmds.getAttr('{0}.fileTextureName'.format(node))
return get_file_paths_for_node(node)
def get_file_node_files(node):
@@ -181,15 +222,15 @@ def get_file_node_files(node):
"""
path = get_file_node_path(node)
path = cmds.workspace(expandName=path)
paths = get_file_node_paths(node)
paths = [cmds.workspace(expandName=path) for path in paths]
if node_uses_image_sequence(node):
glob_pattern = seq_to_glob(path)
return glob.glob(glob_pattern)
elif os.path.exists(path):
return [path]
globs = []
for path in paths:
globs += glob.glob(seq_to_glob(path))
return globs
else:
return []
return list(filter(lambda x: os.path.exists(x), paths))
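Expansion now happens once per stored path. A sketch under the assumption that `seq_to_glob` rewrites sequence tokens to wildcards (paths hypothetical):

```python
import glob

paths = ["/textures/diffuse.<UDIM>.exr"]
globs = []
for path in paths:
    # stand-in for seq_to_glob(): '<UDIM>' becomes a '*' wildcard
    globs += glob.glob(path.replace("<UDIM>", "*"))
print(globs)  # whatever diffuse.*.exr files exist on disk
```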
def get_mipmap(fname):
@@ -211,6 +252,11 @@ def is_mipmap(fname):
class CollectMultiverseLookData(pyblish.api.InstancePlugin):
"""Collect Multiverse Look
Searches through the overrides finding all material overrides. From there
it extracts the shading group and then finds all texture files in the
shading group network. It also checks for mipmap versions of texture files
and adds them to the resources to be published.
"""
order = pyblish.api.CollectorOrder + 0.2
@@ -258,12 +304,20 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
shadingGroup), "members": list()}
# The SG may reference files, add those too!
history = cmds.listHistory(shadingGroup)
files = cmds.ls(history, type="file", long=True)
history = cmds.listHistory(
shadingGroup, allConnections=True)
# We need to iterate over node_types since `cmds.ls` may
# error out if we don't have the appropriate plugin loaded.
files = []
for node_type in NODETYPES.keys():
files += cmds.ls(history,
type=node_type,
long=True)
for f in files:
resources = self.collect_resource(f, publishMipMap)
instance.data["resources"].append(resources)
instance.data["resources"] += resources
elif isinstance(matOver, multiverse.MaterialSourceUsdPath):
# TODO: Handle this later.
@@ -284,69 +338,63 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
dict
"""
self.log.debug("processing: {}".format(node))
if cmds.nodeType(node) not in ["file", "aiImage", "RedshiftNormalMap"]:
self.log.error(
"Unsupported file node: {}".format(cmds.nodeType(node)))
node_type = cmds.nodeType(node)
self.log.debug("processing: {}/{}".format(node, node_type))
if node_type not in NODETYPES:
self.log.error("Unsupported file node: {}".format(node_type))
raise AssertionError("Unsupported file node")
if cmds.nodeType(node) == 'file':
self.log.debug(" - file node")
attribute = "{}.fileTextureName".format(node)
computed_attribute = "{}.computedFileTextureNamePattern".format(
node)
elif cmds.nodeType(node) == 'aiImage':
self.log.debug("aiImage node")
attribute = "{}.filename".format(node)
computed_attribute = attribute
elif cmds.nodeType(node) == 'RedshiftNormalMap':
self.log.debug("RedshiftNormalMap node")
attribute = "{}.tex0".format(node)
computed_attribute = attribute
resources = []
for node_type_attr in NODETYPES[node_type]:
fname_attrib = node_type_attr.get_fname(node)
computed_fname_attrib = node_type_attr.get_computed_fname(node)
colour_space_attrib = node_type_attr.get_colour_space(node)
source = cmds.getAttr(attribute)
self.log.info(" - file source: {}".format(source))
color_space_attr = "{}.colorSpace".format(node)
try:
color_space = cmds.getAttr(color_space_attr)
except ValueError:
# node doesn't have colorspace attribute
source = cmds.getAttr(fname_attrib)
color_space = "Raw"
# Compare with the computed file path, e.g. the one with the <UDIM>
# pattern in it, to generate some logging information about this
# difference
# computed_attribute = "{}.computedFileTextureNamePattern".format(node)
computed_source = cmds.getAttr(computed_attribute)
if source != computed_source:
self.log.debug("Detected computed file pattern difference "
"from original pattern: {0} "
"({1} -> {2})".format(node,
source,
computed_source))
try:
color_space = cmds.getAttr(colour_space_attrib)
except ValueError:
# node doesn't have colorspace attribute, use "Raw" from before
pass
# Compare with the computed file path, e.g. the one with the <UDIM>
# pattern in it, to generate some logging information about this
# difference
# computed_attribute = "{}.computedFileTextureNamePattern".format(node) # noqa
computed_source = cmds.getAttr(computed_fname_attrib)
if source != computed_source:
self.log.debug("Detected computed file pattern difference "
"from original pattern: {0} "
"({1} -> {2})".format(node,
source,
computed_source))
# We replace backslashes with forward slashes because V-Ray
# can't handle the UDIM files with the backslashes in the
# paths as the computed patterns
source = source.replace("\\", "/")
# We replace backslashes with forward slashes because V-Ray
# can't handle the UDIM files with the backslashes in the
# paths as the computed patterns
source = source.replace("\\", "/")
files = get_file_node_files(node)
files = self.handle_files(files, publishMipMap)
if len(files) == 0:
self.log.error("No valid files found from node `%s`" % node)
files = get_file_node_files(node)
files = self.handle_files(files, publishMipMap)
if len(files) == 0:
self.log.error("No valid files found from node `%s`" % node)
self.log.info("collection of resource done:")
self.log.info(" - node: {}".format(node))
self.log.info(" - attribute: {}".format(attribute))
self.log.info(" - source: {}".format(source))
self.log.info(" - file: {}".format(files))
self.log.info(" - color space: {}".format(color_space))
self.log.info("collection of resource done:")
self.log.info(" - node: {}".format(node))
self.log.info(" - attribute: {}".format(fname_attrib))
self.log.info(" - source: {}".format(source))
self.log.info(" - file: {}".format(files))
self.log.info(" - color space: {}".format(color_space))
# Define the resource
return {"node": node,
"attribute": attribute,
"source": source, # required for resources
"files": files,
"color_space": color_space} # required for resources
# Define the resource
resource = {"node": node,
"attribute": fname_attrib,
"source": source, # required for resources
"files": files,
"color_space": color_space} # required for resources
resources.append(resource)
return resources
def handle_files(self, files, publishMipMap):
"""This will go through all the files and make sure that they are

View file

@@ -0,0 +1,152 @@
import os
import sys
from maya import cmds
import pyblish.api
import tempfile
from openpype.lib import run_subprocess
from openpype.pipeline import publish
from openpype.hosts.maya.api import lib
class ExtractImportReference(publish.Extractor):
"""
Extract the scene with imported references.
The temp scene with imported references is
published for rendering if this extractor is activated.
"""
label = "Extract Import Reference"
order = pyblish.api.ExtractorOrder - 0.48
hosts = ["maya"]
families = ["renderlayer", "workfile"]
optional = True
tmp_format = "_tmp"
@classmethod
def apply_settings(cls, project_setting, system_settings):
cls.active = project_setting["deadline"]["publish"]["MayaSubmitDeadline"]["import_reference"] # noqa
def process(self, instance):
ext_mapping = (
instance.context.data["project_settings"]["maya"]["ext_mapping"]
)
if ext_mapping:
self.log.info("Looking in settings for scene type ...")
# use extension mapping for first family found
for family in self.families:
try:
self.scene_type = ext_mapping[family]
self.log.info(
"Using {} as scene type".format(self.scene_type))
break
except KeyError:
# set scene type to ma
self.scene_type = "ma"
_scene_type = ("mayaAscii"
if self.scene_type == "ma"
else "mayaBinary")
dir_path = self.staging_dir(instance)
# name the temp file that will hold the imported references
if instance.name == "Main":
return
tmp_name = instance.name + self.tmp_format
current_name = cmds.file(query=True, sceneName=True)
ref_scene_name = "{0}.{1}".format(tmp_name, self.scene_type)
reference_path = os.path.join(dir_path, ref_scene_name)
tmp_path = os.path.dirname(current_name) + "/" + ref_scene_name
self.log.info("Performing extraction..")
# This generates script for mayapy to take care of reference
# importing outside current session. It is passing current scene
# name and destination scene name.
script = ("""
# -*- coding: utf-8 -*-
'''Script to import references to given scene.'''
import maya.standalone
maya.standalone.initialize()
# cmds only becomes importable after standalone initialization
from maya import cmds
# scene names filled by caller
current_name = "{current_name}"
ref_scene_name = "{ref_scene_name}"
print(">>> Opening {{}} ...".format(current_name))
cmds.file(current_name, open=True, force=True)
print(">>> Processing references")
all_reference = cmds.file(q=True, reference=True) or []
for ref in all_reference:
if cmds.referenceQuery(ref, il=True):
cmds.file(ref, importReference=True)
nested_ref = cmds.file(q=True, reference=True)
if nested_ref:
for new_ref in nested_ref:
if new_ref not in all_reference:
all_reference.append(new_ref)
print(">>> Finish importing references")
print(">>> Saving scene as {{}}".format(ref_scene_name))
cmds.file(rename=ref_scene_name)
cmds.file(save=True, force=True)
print("*** Done")
""").format(current_name=current_name, ref_scene_name=tmp_path)
mayapy_exe = os.path.join(os.getenv("MAYA_LOCATION"), "bin", "mayapy")
if sys.platform == "win32":
mayapy_exe += ".exe"
mayapy_exe = os.path.normpath(mayapy_exe)
# can't use NamedTemporaryFile as that can't be opened in another
# process until handles are closed by the context manager.
with tempfile.TemporaryDirectory() as tmp_dir_name:
tmp_script_path = os.path.join(tmp_dir_name, "import_ref.py")
self.log.info("Using script file: {}".format(tmp_script_path))
with open(tmp_script_path, "wt") as tmp:
tmp.write(script)
try:
run_subprocess([mayapy_exe, tmp_script_path])
except Exception:
self.log.error("Import reference failed", exc_info=True)
raise
with lib.maintained_selection():
cmds.select(all=True, noExpand=True)
cmds.file(reference_path,
force=True,
typ=_scene_type,
exportSelected=True,
channels=True,
constraints=True,
shader=True,
expressions=True,
constructionHistory=True)
instance.context.data["currentFile"] = tmp_path
if "files" not in instance.data:
instance.data["files"] = []
instance.data["files"].append(ref_scene_name)
if instance.data.get("representations") is None:
instance.data["representations"] = []
ref_representation = {
"name": self.scene_type,
"ext": self.scene_type,
"files": ref_scene_name,
"stagingDir": os.path.dirname(current_name),
"outputName": "imported"
}
self.log.info("%s" % ref_representation)
instance.data["representations"].append(ref_representation)
self.log.info("Extracted instance '%s' to : '%s'" % (ref_scene_name,
reference_path))

View file

@@ -73,12 +73,12 @@ class ExtractMultiverseLook(publish.Extractor):
"writeAll": False,
"writeTransforms": False,
"writeVisibility": False,
"writeAttributes": False,
"writeAttributes": True,
"writeMaterials": True,
"writeVariants": False,
"writeVariantsDefinition": False,
"writeActiveState": False,
"writeNamespaces": False,
"writeNamespaces": True,
"numTimeSamples": 1,
"timeSamplesSpan": 0.0
}

View file

@@ -2,7 +2,9 @@ import os
import six
from maya import cmds
from maya import mel
import pyblish.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
@@ -26,7 +28,7 @@ class ExtractMultiverseUsd(publish.Extractor):
label = "Extract Multiverse USD Asset"
hosts = ["maya"]
families = ["mvUsd"]
families = ["usd"]
scene_type = "usd"
file_formats = ["usd", "usda", "usdz"]
@@ -87,7 +89,7 @@ class ExtractMultiverseUsd(publish.Extractor):
return {
"stripNamespaces": False,
"mergeTransformAndShape": False,
"writeAncestors": True,
"writeAncestors": False,
"flattenParentXforms": False,
"writeSparseOverrides": False,
"useMetaPrimPath": False,
@@ -147,7 +149,15 @@ class ExtractMultiverseUsd(publish.Extractor):
return options
def get_default_options(self):
self.log.info("ExtractMultiverseUsd get_default_options")
return self.default_options
def filter_members(self, members):
return members
def process(self, instance):
# Load plugin first
cmds.loadPlugin("MultiverseForMaya", quiet=True)
@@ -161,7 +171,7 @@ class ExtractMultiverseUsd(publish.Extractor):
file_path = file_path.replace('\\', '/')
# Parse export options
options = self.default_options
options = self.get_default_options()
options = self.parse_overrides(instance, options)
self.log.info("Export options: {0}".format(options))
@@ -170,27 +180,35 @@ class ExtractMultiverseUsd(publish.Extractor):
with maintained_selection():
members = instance.data("setMembers")
self.log.info('Collected object {}'.format(members))
self.log.info('Collected objects: {}'.format(members))
members = self.filter_members(members)
if not members:
self.log.error('No members!')
return
self.log.info(' - filtered: {}'.format(members))
import multiverse
time_opts = None
frame_start = instance.data['frameStart']
frame_end = instance.data['frameEnd']
handle_start = instance.data['handleStart']
handle_end = instance.data['handleEnd']
step = instance.data['step']
fps = instance.data['fps']
if frame_end != frame_start:
time_opts = multiverse.TimeOptions()
time_opts.writeTimeRange = True
handle_start = instance.data['handleStart']
handle_end = instance.data['handleEnd']
time_opts.frameRange = (
frame_start - handle_start, frame_end + handle_end)
time_opts.frameIncrement = step
time_opts.numTimeSamples = instance.data["numTimeSamples"]
time_opts.timeSamplesSpan = instance.data["timeSamplesSpan"]
time_opts.framePerSecond = fps
time_opts.frameIncrement = instance.data['step']
time_opts.numTimeSamples = instance.data.get(
'numTimeSamples', options['numTimeSamples'])
time_opts.timeSamplesSpan = instance.data.get(
'timeSamplesSpan', options['timeSamplesSpan'])
time_opts.framePerSecond = instance.data.get(
'fps', mel.eval('currentTimeUnitToFPS()'))
asset_write_opts = multiverse.AssetWriteOptions(time_opts)
options_discard_keys = {
@@ -203,11 +221,15 @@ class ExtractMultiverseUsd(publish.Extractor):
'step',
'fps'
}
self.log.debug("Write Options:")
for key, value in options.items():
if key in options_discard_keys:
continue
self.log.debug(" - {}={}".format(key, value))
setattr(asset_write_opts, key, value)
self.log.info('WriteAsset: {} / {}'.format(file_path, members))
multiverse.WriteAsset(file_path, members, asset_write_opts)
if "representations" not in instance.data:
@@ -223,3 +245,33 @@ class ExtractMultiverseUsd(publish.Extractor):
self.log.info("Extracted instance {} to {}".format(
instance.name, file_path))
class ExtractMultiverseUsdAnim(ExtractMultiverseUsd):
"""Extractor for Multiverse USD Animation Sparse Cache data.
This will extract the sparse cache data from the scene and generate a
USD file with all the animation data.
Upon publish a .usd sparse cache will be written.
"""
label = "Extract Multiverse USD Animation Sparse Cache"
families = ["animation", "usd"]
match = pyblish.api.Subset
def get_default_options(self):
anim_options = self.default_options
anim_options["writeSparseOverrides"] = True
anim_options["writeUsdAttributes"] = True
anim_options["stripNamespaces"] = True
return anim_options
def filter_members(self, members):
out_set = next((i for i in members if i.endswith("out_SET")), None)
if out_set is None:
self.log.warning("Expecting out_SET")
return None
members = cmds.ls(cmds.sets(out_set, query=True), long=True)
return members

View file

@@ -1,28 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
<error id="main">
<title>Review subsets not unique</title>
<description>
## Non unique subset name found
Non unique subset names: '{non_unique}'
<detail>
### __Detailed Info__ (optional)
This might happen if you already published for this asset
review subset with legacy name {task}Review.
This legacy name limits possibility of publishing of multiple
reviews from a single workfile. Proper review subset name should
now
contain variant also (as 'Main', 'Default' etc.). That would
result in completely new subset though, so this situation must
be handled manually.
</detail>
### How to repair?
Legacy subsets must be removed from Openpype DB, please ask admin
to do that. Please provide them asset and subset names.
</description>
</error>
</root>

View file

@@ -93,12 +93,12 @@ class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin):
from openpype.hosts.maya.api import lib
# Store namespace in variable, cosmetics thingy
messagebox = QtWidgets.QMessageBox
mode = messagebox.StandardButton.Ok | messagebox.StandardButton.Cancel
choice = messagebox.warning(None,
"Matrix reset",
cls.prompt_message,
mode)
choice = QtWidgets.QMessageBox.warning(
None,
"Matrix reset",
cls.prompt_message,
QtWidgets.QMessageBox.Ok | QtWidgets.QMessageBox.Cancel
)
invalid = cls.get_invalid(instance)
if not invalid:

View file

@@ -5,6 +5,11 @@ from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
)
from openpype.hosts.maya.api.lib_rendersetup import (
get_attr_overrides,
get_attr_in_layer,
)
from maya.app.renderSetup.model.override import AbsOverride
class ValidateFrameRange(pyblish.api.InstancePlugin):
@@ -92,10 +97,86 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
"""
Repair instance container to match asset data.
"""
cmds.setAttr(
"{}.frameStart".format(instance.data["name"]),
instance.context.data.get("frameStartHandle"))
cmds.setAttr(
"{}.frameEnd".format(instance.data["name"]),
instance.context.data.get("frameEndHandle"))
if "renderlayer" in instance.data.get("families"):
# Special behavior for renderlayers
cls.repair_renderlayer(instance)
return
node = instance.data["name"]
context = instance.context
frame_start_handle = int(context.data.get("frameStartHandle"))
frame_end_handle = int(context.data.get("frameEndHandle"))
handle_start = int(context.data.get("handleStart"))
handle_end = int(context.data.get("handleEnd"))
frame_start = int(context.data.get("frameStart"))
frame_end = int(context.data.get("frameEnd"))
# Start
if cmds.attributeQuery("handleStart", node=node, exists=True):
cmds.setAttr("{}.handleStart".format(node), handle_start)
cmds.setAttr("{}.frameStart".format(node), frame_start)
else:
# Include start handle in frame start if no separate handleStart
# attribute exists on the node
cmds.setAttr("{}.frameStart".format(node), frame_start_handle)
# End
if cmds.attributeQuery("handleEnd", node=node, exists=True):
cmds.setAttr("{}.handleEnd".format(node), handle_end)
cmds.setAttr("{}.frameEnd".format(node), frame_end)
else:
# Include end handle in frame end if no separate handleEnd
# attribute exists on the node
cmds.setAttr("{}.frameEnd".format(node), frame_end_handle)
@classmethod
def repair_renderlayer(cls, instance):
"""Apply frame range in render settings"""
layer = instance.data["setMembers"]
context = instance.context
start_attr = "defaultRenderGlobals.startFrame"
end_attr = "defaultRenderGlobals.endFrame"
frame_start_handle = int(context.data.get("frameStartHandle"))
frame_end_handle = int(context.data.get("frameEndHandle"))
cls._set_attr_in_layer(start_attr, layer, frame_start_handle)
cls._set_attr_in_layer(end_attr, layer, frame_end_handle)
@classmethod
def _set_attr_in_layer(cls, node_attr, layer, value):
if get_attr_in_layer(node_attr, layer=layer) == value:
# Already ok. This can happen if you have multiple renderlayers
# validated and there are no frame range overrides. The first
# layer's repair would have fixed the global value already
return
overrides = list(get_attr_overrides(node_attr, layer=layer))
if overrides:
# Reuse the last override if it is an absolute override,
# otherwise add a new absolute override
last_override = overrides[-1][1]
if not isinstance(last_override, AbsOverride):
collection = last_override.parent()
node, attr = node_attr.split(".", 1)
last_override = collection.createAbsoluteOverride(node, attr)
cls.log.debug("Setting {attr} absolute override in "
"layer '{layer}': {value}".format(layer=layer,
attr=node_attr,
value=value))
cmds.setAttr(last_override.name() + ".attrValue", value)
else:
# Set the attribute directly
# (Note that this will set the global attribute)
cls.log.debug("Setting global {attr}: {value}".format(
attr=node_attr,
value=value
))
cmds.setAttr(node_attr, value)

View file

@@ -80,13 +80,14 @@ class ValidateMvLookContents(pyblish.api.InstancePlugin):
def is_or_has_mipmap(self, fname, files):
ext = os.path.splitext(fname)[1][1:]
if ext in MIPMAP_EXTENSIONS:
self.log.debug("Is a mipmap '{}'".format(fname))
self.log.debug(" - Is a mipmap '{}'".format(fname))
return True
for colour_space in COLOUR_SPACES:
for mipmap_ext in MIPMAP_EXTENSIONS:
mipmap_fname = '.'.join([fname, colour_space, mipmap_ext])
if mipmap_fname in files:
self.log.debug("Has a mipmap '{}'".format(fname))
self.log.debug(
" - Has a mipmap '{}'".format(mipmap_fname))
return True
return False
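A worked example of the candidate names the loop above builds for a hypothetical texture:

```python
COLOUR_SPACES = ['sRGB', 'linear', 'auto']
MIPMAP_EXTENSIONS = ['tdl']

# Candidates follow "<file>.<colour space>.<mipmap ext>".
fname = "diffuse.exr"
print(['.'.join([fname, cs, ext])
       for cs in COLOUR_SPACES
       for ext in MIPMAP_EXTENSIONS])
# ['diffuse.exr.sRGB.tdl', 'diffuse.exr.linear.tdl', 'diffuse.exr.auto.tdl']
```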

View file

@@ -0,0 +1,52 @@
import os
from maya import cmds
import pyblish.api
from openpype.pipeline.publish import ValidateContentsOrder
class ValidatePluginPathAttributes(pyblish.api.InstancePlugin):
"""
Validate plug-in path attributes point to existing file paths.
"""
order = ValidateContentsOrder
hosts = ['maya']
families = ["workfile"]
label = "Plug-in Path Attributes"
def get_invalid(self, instance):
invalid = list()
# get the project setting
validate_path = (
instance.context.data["project_settings"]["maya"]["publish"]
)
file_attr = validate_path["ValidatePluginPathAttributes"]["attribute"]
if not file_attr:
return invalid
# get the nodes and file attributes
for node, attr in file_attr.items():
# check the related nodes
targets = cmds.ls(type=node)
for target in targets:
# get the filepath
file_attr = "{}.{}".format(target, attr)
filepath = cmds.getAttr(file_attr)
if filepath and not os.path.exists(filepath):
self.log.error("File {0} does not exist".format(filepath)) # noqa
invalid.append(target)
return invalid
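The `attribute` mapping read above pairs node types with their file-path attributes; a hypothetical example of its shape (entries are illustrative, not this commit's defaults):

```python
# Keys are Maya node types, values the file-path attribute to check.
{
    "maya": {
        "publish": {
            "ValidatePluginPathAttributes": {
                "attribute": {
                    "AlembicNode": "abc_File",      # assumed example
                    "RenderManArchive": "filename"  # assumed example
                }
            }
        }
    }
}
```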
def process(self, instance):
"""Validate all file path attributes set on plug-in nodes"""
invalid = self.get_invalid(instance)
if invalid:
raise RuntimeError("Non-existent Path "
"found: {0}".format(invalid))

View file

@@ -1,38 +0,0 @@
# -*- coding: utf-8 -*-
import collections
import pyblish.api
from openpype.pipeline.publish import (
ValidateContentsOrder,
PublishXmlValidationError,
)
class ValidateReviewSubsetUniqueness(pyblish.api.ContextPlugin):
"""Validates that review subset has unique name."""
order = ValidateContentsOrder
hosts = ["maya"]
families = ["review"]
label = "Validate Review Subset Unique"
def process(self, context):
subset_names = []
for instance in context:
self.log.debug("Instance: {}".format(instance.data))
if instance.data.get('publish'):
subset_names.append(instance.data.get('subset'))
non_unique = \
[item
for item, count in collections.Counter(subset_names).items()
if count > 1]
msg = ("Instance subset names {} are not unique. ".format(non_unique) +
"Ask admin to remove subset from DB for multiple reviews.")
formatting_data = {
"non_unique": ",".join(non_unique)
}
if non_unique:
raise PublishXmlValidationError(self, msg,
formatting_data=formatting_data)

View file

@@ -21,6 +21,7 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):
- nurbsSurface: _NRB
- locator: _LOC
- null/group: _GRP
Suffixes can also be overridden by project settings.
.. warning::
This grabs the first child shape as a reference and doesn't use the
@@ -44,6 +45,13 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):
ALLOW_IF_NOT_IN_SUFFIX_TABLE = True
@classmethod
def get_table_for_invalid(cls):
ss = []
for k, v in cls.SUFFIX_NAMING_TABLE.items():
ss.append(" - {}: {}".format(k, ", ".join(v)))
return "\n".join(ss)
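For illustration, a hypothetical `SUFFIX_NAMING_TABLE` matching the docstring above, and the rows the method would render from it:

```python
# Hypothetical suffix table; actual values come from project settings.
SUFFIX_NAMING_TABLE = {
    "mesh": ["_GEO"],
    "nurbsCurve": ["_CRV"],
    "nurbsSurface": ["_NRB"],
    "locator": ["_LOC"],
    "group": ["_GRP"],
}
# get_table_for_invalid() would then render:
#  - mesh: _GEO
#  - nurbsCurve: _CRV
#  - nurbsSurface: _NRB
#  - locator: _LOC
#  - group: _GRP
```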
@staticmethod
def is_valid_name(node_name, shape_type,
SUFFIX_NAMING_TABLE, ALLOW_IF_NOT_IN_SUFFIX_TABLE):
@@ -106,5 +114,7 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):
"""
invalid = self.get_invalid(instance)
if invalid:
valid = self.get_table_for_invalid()
raise ValueError("Incorrectly named geometry "
"transforms: {0}".format(invalid))
"transforms: {0}, accepted suffixes are: "
"\n{1}".format(invalid, valid))

View file

@@ -6,18 +6,26 @@ from .workio import (
current_file,
work_root,
)
from .command import (
viewer_update_and_undo_stop
)
from .plugin import OpenPypeCreator
from .plugin import (
NukeCreator,
NukeWriteCreator,
NukeCreatorError,
OpenPypeCreator,
get_instance_group_node_childs,
get_colorspace_from_node
)
from .pipeline import (
install,
uninstall,
NukeHost,
ls,
list_instances,
remove_instance,
select_instance,
containerise,
parse_container,
update_container,
@@ -25,13 +33,19 @@ from .pipeline import (
get_workfile_build_placeholder_plugins,
)
from .lib import (
INSTANCE_DATA_KNOB,
ROOT_DATA_KNOB,
maintained_selection,
reset_selection,
select_nodes,
get_view_process_node,
duplicate_node,
convert_knob_value_to_correct_type
convert_knob_value_to_correct_type,
get_node_data,
set_node_data,
update_node_data,
create_write_node
)
from .utils import (
colorspace_exists_on_node,
get_colorspace_list
@@ -47,23 +61,38 @@ __all__ = (
"viewer_update_and_undo_stop",
"NukeCreator",
"NukeWriteCreator",
"NukeCreatorError",
"OpenPypeCreator",
"install",
"uninstall",
"NukeHost",
"get_instance_group_node_childs",
"get_colorspace_from_node",
"ls",
"list_instances",
"remove_instance",
"select_instance",
"containerise",
"parse_container",
"update_container",
"get_workfile_build_placeholder_plugins",
"INSTANCE_DATA_KNOB",
"ROOT_DATA_KNOB",
"maintained_selection",
"reset_selection",
"select_nodes",
"get_view_process_node",
"duplicate_node",
"convert_knob_value_to_correct_type",
"get_node_data",
"set_node_data",
"update_node_data",
"create_write_node",
"colorspace_exists_on_node",
"get_colorspace_list"

View file

@@ -1,14 +1,15 @@
import os
from pprint import pformat
import re
import json
import six
import functools
import warnings
import platform
import tempfile
import contextlib
from collections import OrderedDict
import clique
import nuke
from qtpy import QtCore, QtWidgets
@@ -30,7 +31,6 @@ from openpype.lib import (
from openpype.settings import (
get_project_settings,
get_anatomy_settings,
get_current_project_settings,
)
from openpype.modules import ModulesManager
@@ -44,6 +44,9 @@ from openpype.pipeline.context_tools import (
get_current_project_asset,
get_custom_workfile_template_from_session
)
from openpype.pipeline.colorspace import (
get_imageio_config
)
from openpype.pipeline.workfile import BuildWorkfile
from . import gizmo_menu
@@ -64,6 +67,54 @@ EXCLUDED_KNOB_TYPE_ON_READ = (
26, # Text Knob (But for backward compatibility, still be read
# if value is not an empty string.)
)
JSON_PREFIX = "JSON:::"
ROOT_DATA_KNOB = "publish_context"
INSTANCE_DATA_KNOB = "publish_instance"
class DeprecatedWarning(DeprecationWarning):
pass
def deprecated(new_destination):
"""Mark functions as deprecated.
It will result in a warning being emitted when the function is used.
"""
func = None
if callable(new_destination):
func = new_destination
new_destination = None
def _decorator(decorated_func):
if new_destination is None:
warning_message = (
" Please check content of deprecated function to figure out"
" possible replacement."
)
else:
warning_message = " Please replace your usage with '{}'.".format(
new_destination
)
@functools.wraps(decorated_func)
def wrapper(*args, **kwargs):
warnings.simplefilter("always", DeprecatedWarning)
warnings.warn(
(
"Call to deprecated function '{}'"
"\nFunction was moved or removed.{}"
).format(decorated_func.__name__, warning_message),
category=DeprecatedWarning,
stacklevel=4
)
return decorated_func(*args, **kwargs)
return wrapper
if func is None:
return _decorator
return _decorator(func)
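A minimal usage sketch of the decorator; both call forms are supported (the decorated names here are hypothetical):

```python
@deprecated("openpype.hosts.nuke.api.lib.set_node_data")
def imprint_data(node, data):
    # emits a DeprecatedWarning pointing at the named replacement
    return set_node_data(node, INSTANCE_DATA_KNOB, data)

@deprecated  # bare form: warns without naming a replacement
def old_helper():
    pass
```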
class Context:
@@ -94,8 +145,78 @@ def get_main_window():
return Context.main_window
def set_node_data(node, knobname, data):
"""Write data to node invisible knob
Will create a new knob in case it doesn't exist,
or update the one already created.
Args:
node (nuke.Node): node object
knobname (str): knob name
data (dict): data to be stored in knob
"""
# if exists then update data
if knobname in node.knobs():
log.debug("Updating knobname `{}` on node `{}`".format(
knobname, node.name()
))
update_node_data(node, knobname, data)
return
log.debug("Creating knobname `{}` on node `{}`".format(
knobname, node.name()
))
# else create new
knob_value = JSON_PREFIX + json.dumps(data)
knob = nuke.String_Knob(knobname)
knob.setValue(knob_value)
knob.setFlag(nuke.INVISIBLE)
node.addKnob(knob)
def get_node_data(node, knobname):
"""Read data from node.
Args:
node (nuke.Node): node object
knobname (str): knob name
Returns:
dict: data stored in knob
"""
if knobname not in node.knobs():
return
rawdata = node[knobname].getValue()
if (
isinstance(rawdata, six.string_types)
and rawdata.startswith(JSON_PREFIX)
):
try:
return json.loads(rawdata[len(JSON_PREFIX):])
except json.JSONDecodeError:
return
def update_node_data(node, knobname, data):
"""Update already present data.
Args:
node (nuke.Node): node object
knobname (str): knob name
data (dict): data to update knob value
"""
knob = node[knobname]
node_data = get_node_data(node, knobname) or {}
node_data.update(data)
knob_value = JSON_PREFIX + json.dumps(node_data)
knob.setValue(knob_value)
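Taken together, the three helpers round-trip a dict through an invisible JSON knob; a usage sketch assuming a running Nuke session:

```python
import nuke

node = nuke.createNode("NoOp")
set_node_data(node, INSTANCE_DATA_KNOB, {"subset": "renderMain"})
set_node_data(node, INSTANCE_DATA_KNOB, {"version": 3})  # merges in place

print(get_node_data(node, INSTANCE_DATA_KNOB))
# {'subset': 'renderMain', 'version': 3}
```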
class Knobby(object):
"""For creating knob which it's type isn't mapped in `create_knobs`
"""[DEPRECATED] For creating a knob whose type isn't
mapped in `create_knobs`
Args:
type (string): Nuke knob type name
@@ -120,9 +241,15 @@ class Knobby(object):
knob.setFlag(flag)
return knob
@staticmethod
def nice_naming(key):
"""Convert camelCase name into UI Display Name"""
words = re.findall('[A-Z][^A-Z]*', key[0].upper() + key[1:])
return " ".join(words)
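For example (doctest-style):

```python
print(Knobby.nice_naming("frameRange"))  # "Frame Range"
print(Knobby.nice_naming("useLimit"))    # "Use Limit"
```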
def create_knobs(data, tab=None):
"""Create knobs by data
"""[DEPRECATED] Create knobs by data
Depending on the type of each dict value and creates the correct Knob.
@@ -216,7 +343,7 @@ def create_knobs(data, tab=None):
def imprint(node, data, tab=None):
"""Store attributes with value on node
"""[DEPRECATED] Store attributes with value on node
Parse user data into Node knobs.
Use `collections.OrderedDict` to ensure knob order.
@@ -272,7 +399,7 @@ def imprint(node, data, tab=None):
def add_publish_knob(node):
"""Add Publish knob to node
"""[DEPRECATED] Add Publish knob to node
Arguments:
node (nuke.Node): nuke node to be processed
@@ -290,7 +417,7 @@ def add_publish_knob(node):
def set_avalon_knob_data(node, data=None, prefix="avalon:"):
""" Sets data into nodes's avalon knob
"""[DEPRECATED] Sets data into a node's avalon knob
Arguments:
node (nuke.Node): Nuke node to imprint with data,
@@ -351,8 +478,8 @@ def set_avalon_knob_data(node, data=None, prefix="avalon:"):
return node
def get_avalon_knob_data(node, prefix="avalon:"):
""" Gets a data from nodes's avalon knob
def get_avalon_knob_data(node, prefix="avalon:", create=True):
"""[DEPRECATED] Gets data from a node's avalon knob
Arguments:
node (obj): Nuke node to search for data,
@@ -380,8 +507,11 @@ def get_avalon_knob_data(node, prefix="avalon:"):
except NameError as e:
# if it doesn't then create it
log.debug("Creating avalon knob: `{}`".format(e))
node = set_avalon_knob_data(node)
return get_avalon_knob_data(node)
if create:
node = set_avalon_knob_data(node)
return get_avalon_knob_data(node)
else:
return {}
# get data from filtered knobs
data.update({k.replace(p, ''): node[k].value()
@@ -392,7 +522,7 @@ def get_avalon_knob_data(node, prefix="avalon:"):
def fix_data_for_node_create(data):
"""Fixing data to be used for nuke knobs
"""[DEPRECATED] Fixing data to be used for nuke knobs
"""
for k, v in data.items():
if isinstance(v, six.text_type):
@@ -403,7 +533,7 @@ def fix_data_for_node_create(data):
def add_write_node_legacy(name, **kwarg):
"""Adding nuke write node
"""[DEPRECATED] Adding nuke write node
Arguments:
name (str): nuke node name
kwarg (attrs): data for nuke knobs
@@ -562,19 +692,12 @@ def get_node_path(path, padding=4):
def get_nuke_imageio_settings():
project_imageio = get_project_settings(
Context.project_name)["nuke"]["imageio"]
# backward compatibility for project started before 3.10
# those are still having `__legacy__` knob types
if not project_imageio["enabled"]:
return get_anatomy_settings(Context.project_name)["imageio"]["nuke"]
return get_project_settings(Context.project_name)["nuke"]["imageio"]
@deprecated("openpype.hosts.nuke.api.lib.get_nuke_imageio_settings")
def get_created_node_imageio_setting_legacy(nodeclass, creator, subset):
''' Get preset data for dataflow (fileType, compression, bitDepth)
'''[DEPRECATED] Get preset data for dataflow (fileType, compression, bitDepth)
'''
assert any([creator, nodeclass]), nuke.message(
@@ -764,15 +887,33 @@ def get_imageio_input_colorspace(filename):
def get_view_process_node():
reset_selection()
ipn_orig = None
for v in nuke.allNodes(filter="Viewer"):
ipn = v['input_process_node'].getValue()
if "VIEWER_INPUT" not in ipn:
ipn_orig = nuke.toNode(ipn)
ipn_orig.setSelected(True)
ipn_node = None
for v_ in nuke.allNodes(filter="Viewer"):
ipn = v_['input_process_node'].getValue()
ipn_node = nuke.toNode(ipn)
if ipn_orig:
return duplicate_node(ipn_orig)
# skip if no input node is set
if not ipn:
continue
if ipn == "VIEWER_INPUT" and not ipn_node:
# 'VIEWER_INPUT' is the default value; nobody usually
# uses it, so only honour it when a node of that name
# actually exists in the script
continue
if not ipn_node:
# in case a Viewer node was transferred from a
# different workfile with stale values
raise NameError((
"Input process node name '{}' set in "
"Viewer '{}' does not exist in the script's nodes"
).format(ipn, v_.name()))
ipn_node.setSelected(True)
if ipn_node:
return duplicate_node(ipn_node)
def on_script_load():
@@ -971,27 +1112,14 @@ def format_anatomy(data):
Return:
path (str)
'''
# TODO: perhaps should be nonPublic
anatomy = Anatomy()
log.debug("__ anatomy.templates: {}".format(anatomy.templates))
try:
# TODO: bck compatibility with old anatomy template
padding = int(
anatomy.templates["render"].get(
"frame_padding",
anatomy.templates["render"].get("padding")
)
padding = int(
anatomy.templates["render"].get(
"frame_padding"
)
except KeyError as e:
msg = ("`padding` key is not in `render` "
"or `frame_padding` on is not available in "
"Anatomy template. Please, add it there and restart "
"the pipeline (padding: \"4\"): `{}`").format(e)
log.error(msg)
nuke.message(msg)
)
version = data.get("version", None)
if not version:
@@ -999,16 +1127,16 @@ def format_anatomy(data):
data["version"] = get_version_from_path(file)
project_name = anatomy.project_name
asset_name = data["avalon"]["asset"]
task_name = os.environ["AVALON_TASK"]
asset_name = data["asset"]
task_name = data["task"]
host_name = os.environ["AVALON_APP"]
context_data = get_template_data_with_names(
project_name, asset_name, task_name, host_name
)
data.update(context_data)
data.update({
"subset": data["avalon"]["subset"],
"family": data["avalon"]["family"],
"subset": data["subset"],
"family": data["family"],
"frame": "#" * padding,
})
return anatomy.format(data)
@@ -1100,8 +1228,6 @@ def create_write_node(
data,
input=None,
prenodes=None,
review=True,
farm=True,
linked_knobs=None,
**kwargs
):
@@ -1143,35 +1269,26 @@ def create_write_node(
'''
prenodes = prenodes or {}
# group node knob overrides
knob_overrides = data.pop("knobs", [])
# filtering variables
plugin_name = data["creator"]
subset = data["subset"]
# get knob settings for write node
imageio_writes = get_imageio_node_setting(
node_class=data["nodeclass"],
node_class="Write",
plugin_name=plugin_name,
subset=subset
)
for knob in imageio_writes["knobs"]:
if knob["name"] == "file_type":
representation = knob["value"]
ext = knob["value"]
try:
data.update({
"imageio_writes": imageio_writes,
"representation": representation,
})
anatomy_filled = format_anatomy(data)
except Exception as e:
msg = "problem with resolving anatomy template: {}".format(e)
log.error(msg)
nuke.message(msg)
data.update({
"imageio_writes": imageio_writes,
"ext": ext
})
anatomy_filled = format_anatomy(data)
# build file path to workfiles
fdir = str(anatomy_filled["work"]["folder"]).replace("\\", "/")
@@ -1180,7 +1297,7 @@ def create_write_node(
version=data["version"],
subset=data["subset"],
frame=data["frame"],
ext=representation
ext=ext
)
# create directory
@@ -1234,14 +1351,6 @@ def create_write_node(
# connect to previous node
now_node.setInput(0, prev_node)
# imprinting group node
set_avalon_knob_data(GN, data["avalon"])
add_publish_knob(GN)
add_rendering_knobs(GN, farm)
if review:
add_review_knob(GN)
# add divider
GN.addKnob(nuke.Text_Knob('', 'Rendering'))
@@ -1287,11 +1396,7 @@ def create_write_node(
# adding write to read button
add_button_clear_rendered(GN, os.path.dirname(fpath))
# Deadline tab.
add_deadline_tab(GN)
# open the our Tab as default
GN[_NODE_TAB_NAME].setFlag(0)
GN.addKnob(nuke.Text_Knob('', ''))
# set tile color
tile_color = next(
@@ -1303,12 +1408,10 @@ def create_write_node(
GN["tile_color"].setValue(
color_gui_to_int(tile_color))
# finally add knob overrides
set_node_knobs_from_settings(GN, knob_overrides, **kwargs)
return GN
@deprecated("openpype.hosts.nuke.api.lib.create_write_node")
def create_write_node_legacy(
name, data, input=None, prenodes=None,
review=True, linked_knobs=None, farm=True
@@ -1599,6 +1702,13 @@ def set_node_knobs_from_settings(node, knob_settings, **kwargs):
if knob_name not in node.knobs():
continue
if knob_type == "expression":
knob_expression = knob["expression"]
node[knob_name].setExpression(
knob_expression
)
continue
# first deal with formatable knob settings
if knob_type == "formatable":
template = knob["template"]
@@ -1607,7 +1717,6 @@ def set_node_knobs_from_settings(node, knob_settings, **kwargs):
_knob_value = template.format(
**kwargs
)
log.debug("__ knob_value0: {}".format(_knob_value))
except KeyError as msg:
log.warning("__ msg: {}".format(msg))
raise KeyError(msg)
@@ -1661,6 +1770,7 @@ def color_gui_to_int(color_gui):
return int(hex_value, 16)
@deprecated
def add_rendering_knobs(node, farm=True):
''' Adds additional rendering knobs to given node
@@ -1681,6 +1791,7 @@ def add_rendering_knobs(node, farm=True):
return node
@deprecated
def add_review_knob(node):
''' Adds additional review knob to given node
@@ -1697,7 +1808,9 @@ def add_review_knob(node):
return node
@deprecated
def add_deadline_tab(node):
# TODO: remove this as it is only linked to legacy create
node.addKnob(nuke.Tab_Knob("Deadline"))
knob = nuke.Int_Knob("deadlinePriority", "Priority")
@@ -1723,7 +1836,10 @@ def add_deadline_tab(node):
node.addKnob(knob)
@deprecated
def get_deadline_knob_names():
# TODO: remove this as it is only linked to legacy
# validate_write_deadline_tab
return [
"Deadline",
"deadlineChunkSize",
@@ -1880,59 +1996,55 @@ class WorkfileSettings(object):
"Attention! Viewer nodes {} were erased."
"It had wrong color profile".format(erased_viewers))
def set_root_colorspace(self, root_dict):
def set_root_colorspace(self, nuke_colorspace):
''' Adds correct colorspace to root
Arguments:
root_dict (dict): adjustmensts from presets
nuke_colorspace (dict): adjustments from presets
'''
if not isinstance(root_dict, dict):
msg = "set_root_colorspace(): argument should be dictionary"
log.error(msg)
nuke.message(msg)
workfile_settings = nuke_colorspace["workfile"]
log.debug(">> root_dict: {}".format(root_dict))
# resolve config data if they are enabled in host
config_data = None
if nuke_colorspace.get("ocio_config", {}).get("enabled"):
# switch ocio config to custom config
workfile_settings["OCIO_config"] = "custom"
workfile_settings["colorManagement"] = "OCIO"
# get resolved ocio config path
config_data = get_imageio_config(
legacy_io.active_project(), "nuke"
)
# first set OCIO
if self._root_node["colorManagement"].value() \
not in str(root_dict["colorManagement"]):
not in str(workfile_settings["colorManagement"]):
self._root_node["colorManagement"].setValue(
str(root_dict["colorManagement"]))
log.debug("nuke.root()['{0}'] changed to: {1}".format(
"colorManagement", root_dict["colorManagement"]))
root_dict.pop("colorManagement")
str(workfile_settings["colorManagement"]))
# we don't need the key anymore
workfile_settings.pop("colorManagement")
# second set ocio version
if self._root_node["OCIO_config"].value() \
not in str(root_dict["OCIO_config"]):
not in str(workfile_settings["OCIO_config"]):
self._root_node["OCIO_config"].setValue(
str(root_dict["OCIO_config"]))
log.debug("nuke.root()['{0}'] changed to: {1}".format(
"OCIO_config", root_dict["OCIO_config"]))
root_dict.pop("OCIO_config")
str(workfile_settings["OCIO_config"]))
# we don't need the key anymore
workfile_settings.pop("OCIO_config")
# third set ocio custom path
if root_dict.get("customOCIOConfigPath"):
unresolved_path = root_dict["customOCIOConfigPath"]
ocio_paths = unresolved_path[platform.system().lower()]
resolved_path = None
for ocio_p in ocio_paths:
resolved_path = str(ocio_p).format(**os.environ)
if not os.path.exists(resolved_path):
continue
if resolved_path:
self._root_node["customOCIOConfigPath"].setValue(
str(resolved_path).replace("\\", "/")
)
log.debug("nuke.root()['{}'] changed to: {}".format(
"customOCIOConfigPath", resolved_path))
root_dict.pop("customOCIOConfigPath")
if config_data:
self._root_node["customOCIOConfigPath"].setValue(
str(config_data["path"]).replace("\\", "/")
)
# backward compatibility, remove in case it exists
workfile_settings.pop("customOCIOConfigPath")
# then set the rest
for knob, value in root_dict.items():
for knob, value in workfile_settings.items():
# skip unfilled ocio config path
# it will be dict in value
if isinstance(value, dict):
@@ -2062,7 +2174,7 @@ class WorkfileSettings(object):
log.info("Setting colorspace to workfile...")
try:
self.set_root_colorspace(nuke_colorspace["workfile"])
self.set_root_colorspace(nuke_colorspace)
except AttributeError:
msg = "set_colorspace(): missing `workfile` settings in template"
nuke.message(msg)
@ -2137,7 +2249,8 @@ class WorkfileSettings(object):
range = '{0}-{1}'.format(
int(data["frameStart"]),
int(data["frameEnd"]))
int(data["frameEnd"])
)
for node in nuke.allNodes(filter="Viewer"):
node['frame_range'].setValue(range)
@ -2145,12 +2258,14 @@ class WorkfileSettings(object):
node['frame_range'].setValue(range)
node['frame_range_lock'].setValue(True)
# adding handle_start/end to root avalon knob
if not set_avalon_knob_data(self._root_node, {
"handleStart": int(handle_start),
"handleEnd": int(handle_end)
}):
log.warning("Cannot set Avalon knob to Root node!")
set_node_data(
self._root_node,
INSTANCE_DATA_KNOB,
{
"handleStart": int(handle_start),
"handleEnd": int(handle_end)
}
)
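Handle values are now stored through the generic node-data API instead of the legacy avalon knob. Reading them back is symmetric; a minimal sketch, with illustrative values:

handles = get_node_data(nuke.root(), INSTANCE_DATA_KNOB)
# e.g. {"handleStart": 8, "handleEnd": 8}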
def reset_resolution(self):
"""Set resolution to project resolution."""
@ -2264,29 +2379,25 @@ def get_write_node_template_attr(node):
''' Gets all defined data from presets
'''
# TODO: add identifiers to settings and rename settings key
plugin_names_mapping = {
"create_write_image": "CreateWriteImage",
"create_write_prerender": "CreateWritePrerender",
"create_write_render": "CreateWriteRender"
}
# get avalon data from node
avalon_knob_data = read_avalon_data(node)
# get template data
nuke_imageio_writes = get_imageio_node_setting(
node_class=avalon_knob_data["families"],
plugin_name=avalon_knob_data["creator"],
subset=avalon_knob_data["subset"]
node_data = get_node_data(node, INSTANCE_DATA_KNOB)
identifier = node_data["creator_identifier"]
# return template data
return get_imageio_node_setting(
node_class="Write",
plugin_name=plugin_names_mapping[identifier],
subset=node_data["subset"]
)
# collecting correct data
correct_data = OrderedDict()
# adding imageio knob presets
for k, v in nuke_imageio_writes.items():
if k in ["_id", "_previous"]:
continue
correct_data[k] = v
# fix badly encoded data
return fix_data_for_node_create(correct_data)
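The lookup above now resolves write-node settings from the creator identifier stored in the instance data knob instead of parsing legacy avalon knob data. A minimal usage sketch, assuming a Write group created by the new publisher (the node name is hypothetical):

write_node = nuke.toNode("renderMain")  # hypothetical node name
settings = get_write_node_template_attr(write_node)
# settings holds the imageio knob presets for e.g. "CreateWriteRender"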
def get_dependent_nodes(nodes):
"""Get all dependent nodes connected to the list of nodes.
@ -2325,10 +2436,11 @@ def get_dependent_nodes(nodes):
def find_free_space_to_paste_nodes(
nodes,
group=nuke.root(),
direction="right",
offset=300):
nodes,
group=nuke.root(),
direction="right",
offset=300
):
"""
Get coordinates in the DAG (node graph) for placing new nodes.
@ -2554,6 +2666,7 @@ def process_workfile_builder():
open_file(last_workfile_path)
@deprecated
def recreate_instance(origin_node, avalon_data=None):
"""Recreate input instance to different data
@ -2619,6 +2732,32 @@ def recreate_instance(origin_node, avalon_data=None):
return new_node
def add_scripts_menu():
try:
from scriptsmenu import launchfornuke
except ImportError:
log.warning(
"Skipping studio.menu install, because "
"'scriptsmenu' module seems unavailable."
)
return
# load configuration of custom menu
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
config = project_settings["nuke"]["scriptsmenu"]["definition"]
_menu = project_settings["nuke"]["scriptsmenu"]["name"]
if not config:
log.warning("Skipping studio menu, no definition found.")
return
# run the launcher for Nuke menu
studio_menu = launchfornuke.main(title=_menu.title())
# apply configuration
studio_menu.build_from_configuration(studio_menu, config)
def add_scripts_gizmo():
# load configuration of custom menu
@ -2799,48 +2938,6 @@ def dirmap_file_name_filter(file_name):
return file_name
# ------------------------------------
# This function seems to be deprecated
# ------------------------------------
def ls_img_sequence(path):
"""Listing all available coherent image sequence from path
Arguments:
path (str): A nuke's node object
Returns:
data (dict): with nuke formated path and frameranges
"""
file = os.path.basename(path)
dirpath = os.path.dirname(path)
base, ext = os.path.splitext(file)
name, padding = os.path.splitext(base)
# populate list of files
files = [
f for f in os.listdir(dirpath)
if name in f
if ext in f
]
# create collection from list of files
collections, remainder = clique.assemble(files)
if len(collections) > 0:
head = collections[0].format("{head}")
padding = collections[0].format("{padding}") % 1
padding = "#" * len(padding)
tail = collections[0].format("{tail}")
file = head + padding + tail
return {
"path": os.path.join(dirpath, file).replace("\\", "/"),
"frames": collections[0].format("[{ranges}]")
}
return False
def get_group_io_nodes(nodes):
"""Get the input and the output of a group of nodes."""
@ -2865,10 +2962,11 @@ def get_group_io_nodes(nodes):
break
if input_node is None:
raise ValueError("No Input found")
log.warning("No Input found")
if output_node is None:
raise ValueError("No Output found")
log.warning("No Output found")
return input_node, output_node

View file

@ -1,21 +1,24 @@
import nuke
import os
import importlib
from collections import OrderedDict
import nuke
import pyblish.api
import openpype
from openpype.host import (
HostBase,
IWorkfileHost,
ILoadHost,
IPublishHost
)
from openpype.settings import get_current_project_settings
from openpype.lib import register_event_callback, Logger
from openpype.pipeline import (
register_loader_plugin_path,
register_creator_plugin_path,
register_inventory_action_path,
deregister_loader_plugin_path,
deregister_creator_plugin_path,
deregister_inventory_action_path,
AVALON_CONTAINER_ID,
)
from openpype.pipeline.workfile import BuildWorkfile
@ -24,6 +27,8 @@ from openpype.tools.utils import host_tools
from .command import viewer_update_and_undo_stop
from .lib import (
Context,
ROOT_DATA_KNOB,
INSTANCE_DATA_KNOB,
get_main_window,
add_publish_knob,
WorkfileSettings,
@ -32,14 +37,29 @@ from .lib import (
check_inventory_versions,
set_avalon_knob_data,
read_avalon_data,
on_script_load,
dirmap_file_name_filter,
add_scripts_menu,
add_scripts_gizmo,
get_node_data,
set_node_data
)
from .workfile_template_builder import (
NukePlaceholderLoadPlugin,
NukePlaceholderCreatePlugin,
build_workfile_template,
update_workfile_template,
create_placeholder,
update_placeholder,
)
from .workio import (
open_file,
save_file,
file_extensions,
has_unsaved_changes,
work_root,
current_file
)
log = Logger.get_logger(__name__)
@ -58,6 +78,95 @@ if os.getenv("PYBLISH_GUI", None):
pyblish.api.register_gui(os.getenv("PYBLISH_GUI", None))
class NukeHost(
HostBase, IWorkfileHost, ILoadHost, IPublishHost
):
name = "nuke"
def open_workfile(self, filepath):
return open_file(filepath)
def save_workfile(self, filepath=None):
return save_file(filepath)
def work_root(self, session):
return work_root(session)
def get_current_workfile(self):
return current_file()
def workfile_has_unsaved_changes(self):
return has_unsaved_changes()
def get_workfile_extensions(self):
return file_extensions()
def get_containers(self):
return ls()
def install(self):
''' Installing all requirements for Nuke host
'''
pyblish.api.register_host("nuke")
self.log.info("Registering Nuke plug-ins..")
pyblish.api.register_plugin_path(PUBLISH_PATH)
register_loader_plugin_path(LOAD_PATH)
register_creator_plugin_path(CREATE_PATH)
register_inventory_action_path(INVENTORY_PATH)
# Register Avalon event for workfiles loading.
register_event_callback("workio.open_file", check_inventory_versions)
register_event_callback("taskChanged", change_context_label)
pyblish.api.register_callback(
"instanceToggled", on_pyblish_instance_toggled)
_install_menu()
# add script menu
add_scripts_menu()
add_scripts_gizmo()
add_nuke_callbacks()
launch_workfiles_app()
def get_context_data(self):
root_node = nuke.root()
return get_node_data(root_node, ROOT_DATA_KNOB)
def update_context_data(self, data, changes):
root_node = nuke.root()
set_node_data(root_node, ROOT_DATA_KNOB, data)
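Workfile context data now round-trips through a serialized knob on the root node, which is what makes the new publisher's IPublishHost integration persistent across saves. A minimal sketch (the stored key is illustrative):

host = NukeHost()
data = host.get_context_data() or {}
data["comment"] = "wip"  # hypothetical key
host.update_context_data(data, changes=None)
# persisted in ROOT_DATA_KNOB on nuke.root(), so it survives save/load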
def add_nuke_callbacks():
""" Adding all available nuke callbacks
"""
workfile_settings = WorkfileSettings()
# Set context settings.
nuke.addOnCreate(
workfile_settings.set_context_settings, nodeClass="Root")
nuke.addOnCreate(workfile_settings.set_favorites, nodeClass="Root")
nuke.addOnCreate(process_workfile_builder, nodeClass="Root")
# fix ffmpeg settings on script
nuke.addOnScriptLoad(on_script_load)
# set checker for last versions on loaded containers
nuke.addOnScriptLoad(check_inventory_versions)
nuke.addOnScriptSave(check_inventory_versions)
# # set apply all workfile settings on script load and save
nuke.addOnScriptLoad(WorkfileSettings().set_context_settings)
nuke.addFilenameFilter(dirmap_file_name_filter)
log.info("Added Nuke callbacks ...")
def reload_config():
"""Attempt to reload pipeline at run-time.
@ -83,52 +192,6 @@ def reload_config():
reload(module)
def install():
''' Installing all requirements for Nuke host
'''
pyblish.api.register_host("nuke")
log.info("Registering Nuke plug-ins..")
pyblish.api.register_plugin_path(PUBLISH_PATH)
register_loader_plugin_path(LOAD_PATH)
register_creator_plugin_path(CREATE_PATH)
register_inventory_action_path(INVENTORY_PATH)
# Register Avalon event for workfiles loading.
register_event_callback("workio.open_file", check_inventory_versions)
register_event_callback("taskChanged", change_context_label)
pyblish.api.register_callback(
"instanceToggled", on_pyblish_instance_toggled)
workfile_settings = WorkfileSettings()
# Set context settings.
nuke.addOnCreate(workfile_settings.set_context_settings, nodeClass="Root")
nuke.addOnCreate(workfile_settings.set_favorites, nodeClass="Root")
nuke.addOnCreate(process_workfile_builder, nodeClass="Root")
_install_menu()
launch_workfiles_app()
def uninstall():
'''Uninstalling host's integration
'''
log.info("Deregistering Nuke plug-ins..")
pyblish.deregister_host("nuke")
pyblish.api.deregister_plugin_path(PUBLISH_PATH)
deregister_loader_plugin_path(LOAD_PATH)
deregister_creator_plugin_path(CREATE_PATH)
deregister_inventory_action_path(INVENTORY_PATH)
pyblish.api.deregister_callback(
"instanceToggled", on_pyblish_instance_toggled)
reload_config()
_uninstall_menu()
def _show_workfiles():
# Make sure parent is not set
# - this makes Workfiles tool as separated window which
@ -139,7 +202,8 @@ def _show_workfiles():
def get_workfile_build_placeholder_plugins():
return [
NukePlaceholderLoadPlugin
NukePlaceholderLoadPlugin,
NukePlaceholderCreatePlugin
]
@ -165,7 +229,15 @@ def _install_menu():
menu.addSeparator()
menu.addCommand(
"Create...",
lambda: host_tools.show_creator(parent=main_window)
lambda: host_tools.show_publisher(
tab="create"
)
)
menu.addCommand(
"Publish...",
lambda: host_tools.show_publisher(
tab="publish"
)
)
menu.addCommand(
"Load...",
@ -174,14 +246,11 @@ def _install_menu():
use_context=True
)
)
menu.addCommand(
"Publish...",
lambda: host_tools.show_publish(parent=main_window)
)
menu.addCommand(
"Manage...",
lambda: host_tools.show_scene_inventory(parent=main_window)
)
menu.addSeparator()
menu.addCommand(
"Library...",
lambda: host_tools.show_library_loader(
@ -217,10 +286,6 @@ def _install_menu():
"Build Workfile from template",
lambda: build_workfile_template()
)
menu_template.addCommand(
"Update Workfile",
lambda: update_workfile_template()
)
menu_template.addSeparator()
menu_template.addCommand(
"Create Place Holder",
@ -235,7 +300,7 @@ def _install_menu():
"Experimental tools...",
lambda: host_tools.show_experimental_tools_dialog(parent=main_window)
)
menu.addSeparator()
# add reload pipeline only in debug mode
if bool(os.getenv("NUKE_DEBUG")):
menu.addSeparator()
@ -245,15 +310,6 @@ def _install_menu():
add_shortcuts_from_presets()
def _uninstall_menu():
menubar = nuke.menu("Nuke")
menu = menubar.findItem(MENU_LABEL)
for item in menu.items():
log.info("Removing menu item: {}".format(item.name()))
menu.removeItem(item.name())
def change_context_label():
menubar = nuke.menu("Nuke")
menu = menubar.findItem(MENU_LABEL)
@ -285,8 +341,8 @@ def add_shortcuts_from_presets():
if nuke_presets.get("menu"):
menu_label_mapping = {
"manage": "Manage...",
"create": "Create...",
"manage": "Manage...",
"load": "Load...",
"build_workfile": "Build Workfile",
"publish": "Publish..."
@ -304,7 +360,7 @@ def add_shortcuts_from_presets():
item_label = menu_label_mapping[command_name]
menuitem = menu.findItem(item_label)
menuitem.setShortcut(shortcut_str)
except AttributeError as e:
except (AttributeError, KeyError) as e:
log.error(e)
@ -436,11 +492,72 @@ def ls():
"""
all_nodes = nuke.allNodes(recurseGroups=False)
# TODO: add readgeo, readcamera, readimage
nodes = [n for n in all_nodes]
for n in nodes:
log.debug("name: `{}`".format(n.name()))
container = parse_container(n)
if container:
yield container
def list_instances(creator_id=None):
"""List all created instances to publish from current workfile.
For SubsetManager

Returns:
list: tuples of (nuke.Node, instance data dict) per created instance
"""
listed_instances = []
for node in nuke.allNodes(recurseGroups=True):
if node.Class() in ["Viewer", "Dot"]:
continue
try:
if node["disable"].value():
continue
except NameError:
# pass if disable knob doesn't exist
pass
# get data from avalon knob
instance_data = get_node_data(
node, INSTANCE_DATA_KNOB)
if not instance_data:
continue
if instance_data["id"] != "pyblish.avalon.instance":
continue
if creator_id and instance_data["creator_identifier"] != creator_id:
continue
listed_instances.append((node, instance_data))
return listed_instances
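A short usage sketch of the collector above, filtering by one creator identifier (taken from the mapping used elsewhere in this changeset):

for node, instance_data in list_instances(creator_id="create_write_render"):
    print(node.fullName(), instance_data["subset"])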
def remove_instance(instance):
"""Remove instance from current workfile metadata.
For SubsetManager
Args:
instance (dict): instance representation from subsetmanager model
"""
instance_node = instance.transient_data["node"]
instance_knob = instance_node.knobs()[INSTANCE_DATA_KNOB]
instance_node.removeKnob(instance_knob)
def select_instance(instance):
"""
Select instance in Node View
Args:
instance (dict): instance representation from subsetmanager model
"""
instance_node = instance.transient_data["node"]
instance_node["selected"].setValue(True)

View file

@ -1,27 +1,383 @@
import nuke
import re
import os
import sys
import six
import random
import string
from collections import OrderedDict
from collections import OrderedDict, defaultdict
from abc import abstractmethod
import nuke
from openpype.settings import get_current_project_settings
from openpype.lib import (
BoolDef,
EnumDef
)
from openpype.pipeline import (
LegacyCreator,
LoaderPlugin,
CreatorError,
Creator as NewCreator,
CreatedInstance,
legacy_io
)
from .lib import (
INSTANCE_DATA_KNOB,
Knobby,
check_subsetname_exists,
maintained_selection,
get_avalon_knob_data,
set_avalon_knob_data,
add_publish_knob,
get_nuke_imageio_settings,
set_node_knobs_from_settings,
set_node_data,
get_node_data,
get_view_process_node,
get_viewer_config_from_string
get_viewer_config_from_string,
deprecated
)
from .pipeline import (
list_instances,
remove_instance
)
def _collect_and_cache_nodes(creator):
key = "openpype.nuke.nodes"
if key not in creator.collection_shared_data:
instances_by_identifier = defaultdict(list)
for item in list_instances():
_, instance_data = item
identifier = instance_data["creator_identifier"]
instances_by_identifier[identifier].append(item)
creator.collection_shared_data[key] = instances_by_identifier
return creator.collection_shared_data[key]
class NukeCreatorError(CreatorError):
pass
class NukeCreator(NewCreator):
selected_nodes = []
def pass_pre_attributes_to_instance(
self,
instance_data,
pre_create_data,
keys=None
):
if not keys:
keys = pre_create_data.keys()
creator_attrs = instance_data["creator_attributes"] = {}
for pass_key in keys:
creator_attrs[pass_key] = pre_create_data[pass_key]
def add_info_knob(self, node):
if "OP_info" in node.knobs().keys():
return
# add info text
info_knob = nuke.Text_Knob("OP_info", "")
info_knob.setValue("""
<span style=\"color:#fc0303\">
<p>This node is maintained by <b>OpenPype Publisher</b>.</p>
<p>To remove it use Publisher gui.</p>
</span>
""")
node.addKnob(info_knob)
def check_existing_subset(self, subset_name):
"""Make sure subset name is unique.
It searches all nodes recursively
and checks whether the subset name is found in
any node having an instance data knob.
Arguments:
subset_name (str): Subset name
"""
for node in nuke.allNodes(recurseGroups=True):
# make sure testing node is having instance knob
if INSTANCE_DATA_KNOB not in node.knobs().keys():
continue
node_data = get_node_data(node, INSTANCE_DATA_KNOB)
if not node_data:
# a node has no instance data
continue
# test if subset name is matching
if node_data.get("subset") == subset_name:
raise NukeCreatorError(
(
"A publish instance for '{}' already exists "
"in nodes! Please change the variant "
"name to ensure unique output."
).format(subset_name)
)
def create_instance_node(
self,
node_name,
knobs=None,
parent=None,
node_type=None
):
"""Create node representing instance.
Arguments:
node_name (str): Name of the new node.
knobs (OrderedDict): node knob names and values
parent (str): Name of the parent node.
node_type (str, optional): Nuke node Class.
Returns:
nuke.Node: Newly created instance node.
"""
node_type = node_type or "NoOp"
node_knobs = knobs or {}
# set parent node
parent_node = nuke.root()
if parent:
parent_node = nuke.toNode(parent)
try:
with parent_node:
created_node = nuke.createNode(node_type)
created_node["name"].setValue(node_name)
self.add_info_knob(created_node)
for key, values in node_knobs.items():
if key in created_node.knobs():
created_node["key"].setValue(values)
except Exception as _err:
raise NukeCreatorError("Creating have failed: {}".format(_err))
return created_node
def set_selected_nodes(self, pre_create_data):
if pre_create_data.get("use_selection"):
self.selected_nodes = nuke.selectedNodes()
if self.selected_nodes == []:
raise NukeCreatorError("Creator error: No active selection")
else:
self.selected_nodes = []
def create(self, subset_name, instance_data, pre_create_data):
# make sure selected nodes are added
self.set_selected_nodes(pre_create_data)
# make sure subset name is unique
self.check_existing_subset(subset_name)
try:
instance_node = self.create_instance_node(
subset_name,
node_type=instance_data.pop("node_type", None)
)
instance = CreatedInstance(
self.family,
subset_name,
instance_data,
self
)
instance.transient_data["node"] = instance_node
self._add_instance_to_context(instance)
set_node_data(
instance_node, INSTANCE_DATA_KNOB, instance.data_to_store())
return instance
except Exception as er:
six.reraise(
NukeCreatorError,
NukeCreatorError("Creator error: {}".format(er)),
sys.exc_info()[2])
def collect_instances(self):
cached_instances = _collect_and_cache_nodes(self)
for (node, data) in cached_instances[self.identifier]:
created_instance = CreatedInstance.from_existing(
data, self
)
created_instance.transient_data["node"] = node
self._add_instance_to_context(created_instance)
def update_instances(self, update_list):
for created_inst, _changes in update_list:
instance_node = created_inst.transient_data["node"]
# in case node is not existing anymore (user erased it manually)
try:
instance_node.fullName()
except ValueError:
self.remove_instances([created_inst])
continue
set_node_data(
instance_node,
INSTANCE_DATA_KNOB,
created_inst.data_to_store()
)
def remove_instances(self, instances):
for instance in instances:
remove_instance(instance)
self._remove_instance_from_context(instance)
def get_pre_create_attr_defs(self):
return [
BoolDef("use_selection", label="Use selection")
]
def get_creator_settings(self, project_settings, settings_key=None):
if not settings_key:
settings_key = self.__class__.__name__
return project_settings["nuke"]["create"][settings_key]
class NukeWriteCreator(NukeCreator):
"""Add Publishable Write node"""
identifier = "create_write"
label = "Create Write"
family = "write"
icon = "sign-out"
def integrate_links(self, node, outputs=True):
# skip if no selection
if not self.selected_node:
return
# collect dependencies
input_nodes = [self.selected_node]
dependent_nodes = self.selected_node.dependent() if outputs else []
# relinking to collected connections
for i, input in enumerate(input_nodes):
node.setInput(i, input)
# make it nicer in graph
node.autoplace()
# relink also dependent nodes
for dep_nodes in dependent_nodes:
dep_nodes.setInput(0, node)
def set_selected_nodes(self, pre_create_data):
if pre_create_data.get("use_selection"):
selected_nodes = nuke.selectedNodes()
if selected_nodes == []:
raise NukeCreatorError("Creator error: No active selection")
elif len(selected_nodes) > 1:
NukeCreatorError("Creator error: Select only one camera node")
self.selected_node = selected_nodes[0]
else:
self.selected_node = None
def get_pre_create_attr_defs(self):
attr_defs = [
BoolDef("use_selection", label="Use selection"),
self._get_render_target_enum()
]
return attr_defs
def get_instance_attr_defs(self):
attr_defs = [
self._get_render_target_enum(),
self._get_reviewable_bool()
]
return attr_defs
def _get_render_target_enum(self):
rendering_targets = {
"local": "Local machine rendering",
"frames": "Use existing frames"
}
if ("farm_rendering" in self.instance_attributes):
rendering_targets["farm"] = "Farm rendering"
return EnumDef(
"render_target",
items=rendering_targets,
label="Render target"
)
def _get_reviewable_bool(self):
return BoolDef(
"review",
default=("reviewable" in self.instance_attributes),
label="Review"
)
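Both defaults above are driven by the `instance_attributes` list that `apply_settings` reads from project settings, so studios can toggle farm rendering and default reviewability without code changes. A sketch of the effect, with illustrative settings values:

# With instance_attributes = ["farm_rendering", "reviewable"]:
# - the render_target enum gains the "farm" option
# - the "review" checkbox defaults to True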
def create(self, subset_name, instance_data, pre_create_data):
# make sure selected nodes are added
self.set_selected_nodes(pre_create_data)
# make sure subset name is unique
self.check_existing_subset(subset_name)
instance_node = self.create_instance_node(
subset_name,
instance_data
)
try:
instance = CreatedInstance(
self.family,
subset_name,
instance_data,
self
)
instance.transient_data["node"] = instance_node
self._add_instance_to_context(instance)
set_node_data(
instance_node, INSTANCE_DATA_KNOB, instance.data_to_store())
return instance
except Exception as er:
six.reraise(
NukeCreatorError,
NukeCreatorError("Creator error: {}".format(er)),
sys.exc_info()[2]
)
def apply_settings(
self,
project_settings,
system_settings
):
"""Method called on initialization of plugin to apply settings."""
# plugin settings
plugin_settings = self.get_creator_settings(project_settings)
# individual attributes
self.instance_attributes = plugin_settings.get(
"instance_attributes") or self.instance_attributes
self.prenodes = plugin_settings["prenodes"]
self.default_variants = plugin_settings.get(
"default_variants") or self.default_variants
self.temp_rendering_path_template = (
plugin_settings.get("temp_rendering_path_template")
or self.temp_rendering_path_template
)
class OpenPypeCreator(LegacyCreator):
@ -72,6 +428,41 @@ class OpenPypeCreator(LegacyCreator):
return instance
def get_instance_group_node_childs(instance):
"""Return list of instance group node children
Args:
instance (pyblish.Instance): pyblish instance
Returns:
list: [nuke.Node]
"""
node = instance.data["transientData"]["node"]
if node.Class() != "Group":
return
# collect child nodes
child_nodes = []
# iterate all nodes
for node in nuke.allNodes(group=node):
# add contained nodes to instance's node list
child_nodes.append(node)
return child_nodes
def get_colorspace_from_node(node):
# Add version data to instance
colorspace = node["colorspace"].value()
# remove default part of the string
if "default (" in colorspace:
colorspace = re.sub(r"default.\(|\)", "", colorspace)
return colorspace
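For example, the substitution above strips Nuke's "default" wrapper from the knob value (values illustrative):

# get_colorspace_from_node behaviour:
# "default (sRGB)" -> "sRGB"
# "ACES - ACEScg"  -> unchanged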
def get_review_presets_config():
settings = get_current_project_settings()
review_profiles = (
@ -173,7 +564,6 @@ class ExporterReview(object):
def get_file_info(self):
if self.collection:
self.log.debug("Collection: `{}`".format(self.collection))
# get path
self.fname = os.path.basename(self.collection.format(
"{head}{padding}{tail}"))
@ -308,7 +698,6 @@ class ExporterReviewLut(ExporterReview):
# connect
self._temp_nodes.append(cms_node)
self.previous_node = cms_node
self.log.debug("CMSTestPattern... `{}`".format(self._temp_nodes))
if bake_viewer_process:
# Node View Process
@ -341,8 +730,6 @@ class ExporterReviewLut(ExporterReview):
# connect
gen_lut_node.setInput(0, self.previous_node)
self._temp_nodes.append(gen_lut_node)
self.log.debug("GenerateLUT... `{}`".format(self._temp_nodes))
# ---------- end nodes creation
# Export lut file
@ -356,8 +743,6 @@ class ExporterReviewLut(ExporterReview):
# ---------- generate representation data
self.get_representation_data()
self.log.debug("Representation... `{}`".format(self.data))
# ---------- Clean up
self.clean_nodes()
@ -427,6 +812,8 @@ class ExporterReviewMov(ExporterReview):
# create nk path
path = os.path.splitext(self.path)[0] + ".nk"
# save file to the path
if not os.path.exists(os.path.dirname(path)):
os.makedirs(os.path.dirname(path))
shutil.copyfile(self.instance.context.data["currentFile"], path)
self.log.info("Nodes exported...")
@ -581,6 +968,7 @@ class ExporterReviewMov(ExporterReview):
return self.data
@deprecated("openpype.hosts.nuke.api.plugin.NukeWriteCreator")
class AbstractWriteRender(OpenPypeCreator):
"""Abstract creator to gather similar implementation for Write creators"""
name = ""
@ -607,7 +995,6 @@ class AbstractWriteRender(OpenPypeCreator):
self.data = data
self.nodes = nuke.selectedNodes()
self.log.debug("_ self.data: '{}'".format(self.data))
def process(self):
@ -732,3 +1119,149 @@ class AbstractWriteRender(OpenPypeCreator):
node (nuke.Node): group node with data as Knobs
"""
pass
def convert_to_valid_instaces():
""" Check and convert to latest publisher instances
Also save as new minor version of workfile.
"""
def family_to_identifier(family):
mapping = {
"render": "create_write_render",
"prerender": "create_write_prerender",
"still": "create_write_image",
"model": "create_model",
"camera": "create_camera",
"nukenodes": "create_backdrop",
"gizmo": "create_gizmo",
"source": "create_source"
}
return mapping[family]
from openpype.hosts.nuke.api import workio
task_name = legacy_io.Session["AVALON_TASK"]
# save into new workfile
current_file = workio.current_file()
# add file suffix if missing
if "_publisherConvert" not in current_file:
new_workfile = (
current_file[:-3]
+ "_publisherConvert"
+ current_file[-3:]
)
else:
new_workfile = current_file
path = new_workfile.replace("\\", "/")
nuke.scriptSaveAs(new_workfile, overwrite=1)
nuke.Root()["name"].setValue(path)
nuke.Root()["project_directory"].setValue(os.path.dirname(path))
nuke.Root().setModified(False)
_remove_old_knobs(nuke.Root())
# loop all nodes and convert
for node in nuke.allNodes(recurseGroups=True):
transfer_data = {
"creator_attributes": {}
}
creator_attr = transfer_data["creator_attributes"]
if node.Class() in ["Viewer", "Dot"]:
continue
if get_node_data(node, INSTANCE_DATA_KNOB):
continue
# get data from avalon knob
avalon_knob_data = get_avalon_knob_data(
node, ["avalon:", "ak:"])
if not avalon_knob_data:
continue
if avalon_knob_data["id"] != "pyblish.avalon.instance":
continue
transfer_data.update({
k: v for k, v in avalon_knob_data.items()
if k not in ["families", "creator"]
})
transfer_data["task"] = task_name
family = avalon_knob_data["family"]
# establish families
families_ak = avalon_knob_data.get("families", [])
if "suspend_publish" in node.knobs():
creator_attr["suspended_publish"] = (
node["suspend_publish"].value())
# get review knob value
if "review" in node.knobs():
creator_attr["review"] = (
node["review"].value())
if "publish" in node.knobs():
transfer_data["active"] = (
node["publish"].value())
# add identifier
transfer_data["creator_identifier"] = family_to_identifier(family)
# Add all nodes in group instances.
if node.Class() == "Group":
# only alter families for render family
if families_ak and "write" in families_ak.lower():
target = node["render"].value()
if target == "Use existing frames":
creator_attr["render_target"] = "frames"
elif target == "Local":
# Local rendering
creator_attr["render_target"] = "local"
elif target == "On farm":
# Farm rendering
creator_attr["render_target"] = "farm"
if "deadlinePriority" in node.knobs():
transfer_data["farm_priority"] = (
node["deadlinePriority"].value())
if "deadlineChunkSize" in node.knobs():
creator_attr["farm_chunk"] = (
node["deadlineChunkSize"].value())
if "deadlineConcurrentTasks" in node.knobs():
creator_attr["farm_concurency"] = (
node["deadlineConcurrentTasks"].value())
_remove_old_knobs(node)
# add new instance knob with transfer data
set_node_data(
node, INSTANCE_DATA_KNOB, transfer_data)
nuke.scriptSave()
def _remove_old_knobs(node):
remove_knobs = [
"review", "publish", "render", "suspend_publish", "warn", "divd",
"OpenpypeDataGroup", "OpenpypeDataGroup_End", "deadlinePriority",
"deadlineChunkSize", "deadlineConcurrentTasks", "Deadline"
]
print(node.name())
# remove all old knobs
for knob in node.allKnobs():
try:
if knob.name() in remove_knobs:
node.removeKnob(knob)
elif "avalon" in knob.name():
node.removeKnob(knob)
except ValueError:
pass

View file

@ -7,7 +7,9 @@ from openpype.pipeline.workfile.workfile_template_builder import (
AbstractTemplateBuilder,
PlaceholderPlugin,
LoadPlaceholderItem,
CreatePlaceholderItem,
PlaceholderLoadMixin,
PlaceholderCreateMixin
)
from openpype.tools.workfile_template_build import (
WorkfileBuildPlaceholderDialog,
@ -32,7 +34,7 @@ PLACEHOLDER_SET = "PLACEHOLDERS_SET"
class NukeTemplateBuilder(AbstractTemplateBuilder):
"""Concrete implementation of AbstractTemplateBuilder for maya"""
"""Concrete implementation of AbstractTemplateBuilder for nuke"""
def import_template(self, path):
"""Import template into current scene.
@ -40,7 +42,7 @@ class NukeTemplateBuilder(AbstractTemplateBuilder):
Args:
path (str): A path to current template (usually given by
get_template_path implementation)
get_template_preset implementation)
Returns:
bool: Whether the template was successfully imported or not
@ -74,8 +76,7 @@ class NukePlaceholderPlugin(PlaceholderPlugin):
node_knobs = node.knobs()
if (
"builder_type" not in node_knobs
or "is_placeholder" not in node_knobs
"is_placeholder" not in node_knobs
or not node.knob("is_placeholder").value()
):
continue
@ -273,6 +274,15 @@ class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin):
placeholder.data["nb_children"] += 1
reset_selection()
# remove placeholders marked as delete
if (
placeholder.data.get("delete")
and not placeholder.data.get("keep_placeholder")
):
self.log.debug("Deleting node: {}".format(placeholder_node.name()))
nuke.delete(placeholder_node)
# go back to root group
nuke.root().begin()
@ -454,12 +464,12 @@ class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin):
)
for node in placeholder_node.dependent():
for idx in range(node.inputs()):
if node.input(idx) == placeholder_node:
if node.input(idx) == placeholder_node and output_node:
node.setInput(idx, output_node)
for node in placeholder_node.dependencies():
for idx in range(placeholder_node.inputs()):
if placeholder_node.input(idx) == node:
if placeholder_node.input(idx) == node and input_node:
input_node.setInput(0, node)
def _create_sib_copies(self, placeholder):
@ -535,6 +545,408 @@ class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin):
siblings_input.setInput(0, copy_output)
class NukePlaceholderCreatePlugin(
NukePlaceholderPlugin, PlaceholderCreateMixin
):
identifier = "nuke.create"
label = "Nuke create"
def _parse_placeholder_node_data(self, node):
placeholder_data = super(
NukePlaceholderCreatePlugin, self
)._parse_placeholder_node_data(node)
node_knobs = node.knobs()
nb_children = 0
if "nb_children" in node_knobs:
nb_children = int(node_knobs["nb_children"].getValue())
placeholder_data["nb_children"] = nb_children
siblings = []
if "siblings" in node_knobs:
siblings = node_knobs["siblings"].values()
placeholder_data["siblings"] = siblings
node_full_name = node.fullName()
placeholder_data["group_name"] = node_full_name.rpartition(".")[0]
placeholder_data["last_loaded"] = []
placeholder_data["delete"] = False
return placeholder_data
def _before_instance_create(self, placeholder):
placeholder.data["nodes_init"] = nuke.allNodes()
def collect_placeholders(self):
output = []
scene_placeholders = self._collect_scene_placeholders()
for node_name, node in scene_placeholders.items():
plugin_identifier_knob = node.knob("plugin_identifier")
if (
plugin_identifier_knob is None
or plugin_identifier_knob.getValue() != self.identifier
):
continue
placeholder_data = self._parse_placeholder_node_data(node)
output.append(
CreatePlaceholderItem(node_name, placeholder_data, self)
)
return output
def populate_placeholder(self, placeholder):
self.populate_create_placeholder(placeholder)
def repopulate_placeholder(self, placeholder):
self.populate_create_placeholder(placeholder)
def get_placeholder_options(self, options=None):
return self.get_create_plugin_options(options)
def cleanup_placeholder(self, placeholder, failed):
# deselect all selected nodes
placeholder_node = nuke.toNode(placeholder.scene_identifier)
# getting the latest nodes added
nodes_init = placeholder.data["nodes_init"]
nodes_created = list(set(nuke.allNodes()) - set(nodes_init))
self.log.debug("Created nodes: {}".format(nodes_created))
if not nodes_created:
return
placeholder.data["delete"] = True
nodes_created = self._move_to_placeholder_group(
placeholder, nodes_created
)
placeholder.data["last_created"] = nodes_created
refresh_nodes(nodes_created)
# positioning of the created nodes
min_x, min_y, _, _ = get_extreme_positions(nodes_created)
for node in nodes_created:
xpos = (node.xpos() - min_x) + placeholder_node.xpos()
ypos = (node.ypos() - min_y) + placeholder_node.ypos()
node.setXYpos(xpos, ypos)
refresh_nodes(nodes_created)
# fix the problem of z_order for backdrops
self._fix_z_order(placeholder)
self._imprint_siblings(placeholder)
if placeholder.data["nb_children"] == 0:
# save initial node positions and dimensions, update them
# and set inputs and outputs of created nodes
self._imprint_inits()
self._update_nodes(placeholder, nuke.allNodes(), nodes_created)
self._set_created_connections(placeholder)
elif placeholder.data["siblings"]:
# create copies of placeholder siblings for the newly created nodes,
# set their inputs and outputs and update all node positions,
# dimensions and sibling names
siblings = get_nodes_by_names(placeholder.data["siblings"])
refresh_nodes(siblings)
copies = self._create_sib_copies(placeholder)
new_nodes = list(copies.values()) # copies nodes
self._update_nodes(placeholder, new_nodes, nodes_created)
placeholder_node.removeKnob(placeholder_node.knob("siblings"))
new_nodes_name = get_names_from_nodes(new_nodes)
imprint(placeholder_node, {"siblings": new_nodes_name})
self._set_copies_connections(placeholder, copies)
self._update_nodes(
placeholder,
nuke.allNodes(),
new_nodes + nodes_created,
20
)
new_siblings = get_names_from_nodes(new_nodes)
placeholder.data["siblings"] = new_siblings
else:
# if the placeholder doesn't have siblings, the created
# nodes will be placed in a free space
xpointer, ypointer = find_free_space_to_paste_nodes(
nodes_created, direction="bottom", offset=200
)
node = nuke.createNode("NoOp")
reset_selection()
nuke.delete(node)
for node in nodes_created:
xpos = (node.xpos() - min_x) + xpointer
ypos = (node.ypos() - min_y) + ypointer
node.setXYpos(xpos, ypos)
placeholder.data["nb_children"] += 1
reset_selection()
# remove placeholders marked as delete
if (
placeholder.data.get("delete")
and not placeholder.data.get("keep_placeholder")
):
self.log.debug("Deleting node: {}".format(placeholder_node.name()))
nuke.delete(placeholder_node)
# go back to root group
nuke.root().begin()
def _move_to_placeholder_group(self, placeholder, nodes_created):
"""
Open the placeholder's group and copy the created nodes into it.
Returns:
nodes_created (list): the new list of pasted nodes
"""
groups_name = placeholder.data["group_name"]
reset_selection()
select_nodes(nodes_created)
if groups_name:
with node_tempfile() as filepath:
nuke.nodeCopy(filepath)
for node in nuke.selectedNodes():
nuke.delete(node)
group = nuke.toNode(groups_name)
group.begin()
nuke.nodePaste(filepath)
nodes_created = nuke.selectedNodes()
return nodes_created
def _fix_z_order(self, placeholder):
"""Fix the problem of z_order when a backdrop is create."""
nodes_created = placeholder.data["last_created"]
created_backdrops = []
bd_orders = set()
for node in nodes_created:
if isinstance(node, nuke.BackdropNode):
created_backdrops.append(node)
bd_orders.add(node.knob("z_order").getValue())
if not bd_orders:
return
sib_orders = set()
for node_name in placeholder.data["siblings"]:
node = nuke.toNode(node_name)
if isinstance(node, nuke.BackdropNode):
sib_orders.add(node.knob("z_order").getValue())
if not sib_orders:
return
min_order = min(bd_orders)
max_order = max(sib_orders)
for backdrop_node in created_backdrops:
z_order = backdrop_node.knob("z_order").getValue()
backdrop_node.knob("z_order").setValue(
z_order + max_order - min_order + 1)
def _imprint_siblings(self, placeholder):
"""
- add sibling names to placeholder attributes (nodes created with it)
- add an ID to the attributes of all the other nodes
"""
created_nodes = placeholder.data["last_created"]
created_nodes_set = set(created_nodes)
for node in created_nodes:
node_knobs = node.knobs()
if (
"is_placeholder" not in node_knobs
or (
"is_placeholder" in node_knobs
and node.knob("is_placeholder").value()
)
):
siblings = list(created_nodes_set - {node})
siblings_name = get_names_from_nodes(siblings)
siblings = {"siblings": siblings_name}
imprint(node, siblings)
def _imprint_inits(self):
"""Add initial positions and dimensions to the attributes"""
for node in nuke.allNodes():
refresh_node(node)
imprint(node, {"x_init": node.xpos(), "y_init": node.ypos()})
node.knob("x_init").setVisible(False)
node.knob("y_init").setVisible(False)
width = node.screenWidth()
height = node.screenHeight()
if "bdwidth" in node.knobs():
imprint(node, {"w_init": width, "h_init": height})
node.knob("w_init").setVisible(False)
node.knob("h_init").setVisible(False)
refresh_node(node)
def _update_nodes(
self, placeholder, nodes, considered_nodes, offset_y=None
):
"""Adjust backdrop nodes dimensions and positions.
Considering some nodes sizes.
Args:
nodes (list): list of nodes to update
considered_nodes (list): list of nodes to consider while updating
positions and dimensions
offset_y (int): vertical distance between copies
"""
placeholder_node = nuke.toNode(placeholder.scene_identifier)
min_x, min_y, max_x, max_y = get_extreme_positions(considered_nodes)
diff_x = diff_y = 0
contained_nodes = [] # for backdrops
if offset_y is None:
width_ph = placeholder_node.screenWidth()
height_ph = placeholder_node.screenHeight()
diff_y = max_y - min_y - height_ph
diff_x = max_x - min_x - width_ph
contained_nodes = [placeholder_node]
min_x = placeholder_node.xpos()
min_y = placeholder_node.ypos()
else:
siblings = get_nodes_by_names(placeholder.data["siblings"])
minX, _, maxX, _ = get_extreme_positions(siblings)
diff_y = max_y - min_y + 20
diff_x = abs(max_x - min_x - maxX + minX)
contained_nodes = considered_nodes
if diff_y <= 0 and diff_x <= 0:
return
for node in nodes:
refresh_node(node)
if (
node == placeholder_node
or node in considered_nodes
):
continue
if (
not isinstance(node, nuke.BackdropNode)
or (
isinstance(node, nuke.BackdropNode)
and not set(contained_nodes) <= set(node.getNodes())
)
):
if offset_y is None and node.xpos() >= min_x:
node.setXpos(node.xpos() + diff_x)
if node.ypos() >= min_y:
node.setYpos(node.ypos() + diff_y)
else:
width = node.screenWidth()
height = node.screenHeight()
node.knob("bdwidth").setValue(width + diff_x)
node.knob("bdheight").setValue(height + diff_y)
refresh_node(node)
def _set_created_connections(self, placeholder):
"""
set inputs and outputs of created nodes"""
placeholder_node = nuke.toNode(placeholder.scene_identifier)
input_node, output_node = get_group_io_nodes(
placeholder.data["last_created"]
)
for node in placeholder_node.dependent():
for idx in range(node.inputs()):
if node.input(idx) == placeholder_node and output_node:
node.setInput(idx, output_node)
for node in placeholder_node.dependencies():
for idx in range(placeholder_node.inputs()):
if placeholder_node.input(idx) == node and input_node:
input_node.setInput(0, node)
def _create_sib_copies(self, placeholder):
""" creating copies of the palce_holder siblings (the ones who were
created with it) for the new nodes added
Returns :
copies (dict) : with copied nodes names and their copies
"""
copies = {}
siblings = get_nodes_by_names(placeholder.data["siblings"])
for node in siblings:
new_node = duplicate_node(node)
x_init = int(new_node.knob("x_init").getValue())
y_init = int(new_node.knob("y_init").getValue())
new_node.setXYpos(x_init, y_init)
if isinstance(new_node, nuke.BackdropNode):
w_init = new_node.knob("w_init").getValue()
h_init = new_node.knob("h_init").getValue()
new_node.knob("bdwidth").setValue(w_init)
new_node.knob("bdheight").setValue(h_init)
refresh_node(node)
if "repre_id" in node.knobs().keys():
node.removeKnob(node.knob("repre_id"))
copies[node.name()] = new_node
return copies
def _set_copies_connections(self, placeholder, copies):
"""Set inputs and outputs of the copies.
Args:
copies (dict): Copied nodes by their names.
"""
last_input, last_output = get_group_io_nodes(
placeholder.data["last_created"]
)
siblings = get_nodes_by_names(placeholder.data["siblings"])
siblings_input, siblings_output = get_group_io_nodes(siblings)
copy_input = copies[siblings_input.name()]
copy_output = copies[siblings_output.name()]
for node_init in siblings:
if node_init == siblings_output:
continue
node_copy = copies[node_init.name()]
for node in node_init.dependent():
for idx in range(node.inputs()):
if node.input(idx) != node_init:
continue
if node in siblings:
copies[node.name()].setInput(idx, node_copy)
else:
last_input.setInput(0, node_copy)
for node in node_init.dependencies():
for idx in range(node_init.inputs()):
if node_init.input(idx) != node:
continue
if node_init == siblings_input:
copy_input.setInput(idx, node)
elif node in siblings:
node_copy.setInput(idx, copies[node.name()])
else:
node_copy.setInput(idx, last_output)
siblings_input.setInput(0, copy_output)
def build_workfile_template(*args):
builder = NukeTemplateBuilder(registered_host())
builder.build_template()

View file

@ -0,0 +1,49 @@
from openpype.pipeline.create.creator_plugins import SubsetConvertorPlugin
from openpype.hosts.nuke.api.lib import (
INSTANCE_DATA_KNOB,
get_node_data,
get_avalon_knob_data
)
from openpype.hosts.nuke.api.plugin import convert_to_valid_instaces
import nuke
class LegacyConverted(SubsetConvertorPlugin):
identifier = "legacy.converter"
def find_instances(self):
legacy_found = False
# search for first available legacy item
for node in nuke.allNodes(recurseGroups=True):
if node.Class() in ["Viewer", "Dot"]:
continue
if get_node_data(node, INSTANCE_DATA_KNOB):
continue
# get data from avalon knob
avalon_knob_data = get_avalon_knob_data(
node, ["avalon:", "ak:"], create=False)
if not avalon_knob_data:
continue
if avalon_knob_data["id"] != "pyblish.avalon.instance":
continue
# catch and break
legacy_found = True
break
if legacy_found:
# add the convertor item only if a legacy instance was found
self.add_convertor_item("Convert legacy instances")
def convert(self):
# loop all instances and convert them
convert_to_valid_instaces()
# remove legacy item if all is fine
self.remove_convertor_item()
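The convertor follows the SubsetConvertorPlugin contract: find_instances runs during publisher collection and shows a single convertor item when legacy knob data exists; triggering it calls convert, which rewrites every legacy node and removes the item. A minimal sketch of the equivalent manual call from Nuke's script editor, assuming an active OpenPype session:

from openpype.hosts.nuke.api.plugin import convert_to_valid_instaces
convert_to_valid_instaces()  # saves a "_publisherConvert" copy of the workfile first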

View file

@ -1,56 +1,53 @@
import nuke
from openpype.hosts.nuke.api import plugin
from openpype.hosts.nuke.api.lib import (
select_nodes,
set_avalon_knob_data
from nukescripts import autoBackdrop
from openpype.hosts.nuke.api import (
NukeCreator,
maintained_selection,
select_nodes
)
class CreateBackdrop(plugin.OpenPypeCreator):
class CreateBackdrop(NukeCreator):
"""Add Publishable Backdrop"""
name = "nukenodes"
label = "Create Backdrop"
identifier = "create_backdrop"
label = "Nukenodes (backdrop)"
family = "nukenodes"
icon = "file-archive-o"
defaults = ["Main"]
maintain_selection = True
def __init__(self, *args, **kwargs):
super(CreateBackdrop, self).__init__(*args, **kwargs)
self.nodes = nuke.selectedNodes()
self.node_color = "0xdfea5dff"
return
# plugin attributes
node_color = "0xdfea5dff"
def process(self):
from nukescripts import autoBackdrop
nodes = list()
if (self.options or {}).get("useSelection"):
nodes = self.nodes
def create_instance_node(
self,
node_name,
knobs=None,
parent=None,
node_type=None
):
with maintained_selection():
if len(self.selected_nodes) >= 1:
select_nodes(self.selected_nodes)
if len(nodes) >= 1:
select_nodes(nodes)
bckd_node = autoBackdrop()
bckd_node["name"].setValue("{}_BDN".format(self.name))
bckd_node["tile_color"].setValue(int(self.node_color, 16))
bckd_node["note_font_size"].setValue(24)
bckd_node["label"].setValue("[{}]".format(self.name))
# add avalon knobs
instance = set_avalon_knob_data(bckd_node, self.data)
created_node = autoBackdrop()
created_node["name"].setValue(node_name)
created_node["tile_color"].setValue(int(self.node_color, 16))
created_node["note_font_size"].setValue(24)
created_node["label"].setValue("[{}]".format(node_name))
return instance
else:
msg = str("Please select nodes you "
"wish to add to a container")
self.log.error(msg)
nuke.message(msg)
return
else:
bckd_node = autoBackdrop()
bckd_node["name"].setValue("{}_BDN".format(self.name))
bckd_node["tile_color"].setValue(int(self.node_color, 16))
bckd_node["note_font_size"].setValue(24)
bckd_node["label"].setValue("[{}]".format(self.name))
# add avalon knobs
instance = set_avalon_knob_data(bckd_node, self.data)
self.add_info_knob(created_node)
return instance
return created_node
def create(self, subset_name, instance_data, pre_create_data):
# make sure subset name is unique
self.check_existing_subset(subset_name)
instance = super(CreateBackdrop, self).create(
subset_name,
instance_data,
pre_create_data
)
return instance

View file

@ -1,55 +1,68 @@
import nuke
from openpype.hosts.nuke.api import plugin
from openpype.hosts.nuke.api.lib import (
set_avalon_knob_data
from openpype.hosts.nuke.api import (
NukeCreator,
NukeCreatorError,
maintained_selection
)
class CreateCamera(plugin.OpenPypeCreator):
"""Add Publishable Backdrop"""
class CreateCamera(NukeCreator):
"""Add Publishable Camera"""
name = "camera"
label = "Create 3d Camera"
identifier = "create_camera"
label = "Camera (3d)"
family = "camera"
icon = "camera"
defaults = ["Main"]
def __init__(self, *args, **kwargs):
super(CreateCamera, self).__init__(*args, **kwargs)
self.nodes = nuke.selectedNodes()
self.node_color = "0xff9100ff"
return
# plugin attributes
node_color = "0xff9100ff"
def process(self):
nodes = list()
if (self.options or {}).get("useSelection"):
nodes = self.nodes
if len(nodes) >= 1:
# loop selected nodes
for n in nodes:
data = self.data.copy()
if len(nodes) > 1:
# rename subset name only if more
# then one node are selected
subset = self.family + n["name"].value().capitalize()
data["subset"] = subset
# change node color
n["tile_color"].setValue(int(self.node_color, 16))
# add avalon knobs
set_avalon_knob_data(n, data)
return True
def create_instance_node(
self,
node_name,
knobs=None,
parent=None,
node_type=None
):
with maintained_selection():
if self.selected_nodes:
node = self.selected_nodes[0]
if node.Class() != "Camera3":
raise NukeCreatorError(
"Creator error: Select only camera node type")
created_node = self.selected_nodes[0]
else:
msg = str("Please select nodes you "
"wish to add to a container")
self.log.error(msg)
nuke.message(msg)
return
created_node = nuke.createNode("Camera2")
created_node["tile_color"].setValue(
int(self.node_color, 16))
created_node["name"].setValue(node_name)
self.add_info_knob(created_node)
return created_node
def create(self, subset_name, instance_data, pre_create_data):
# make sure subset name is unique
self.check_existing_subset(subset_name)
instance = super(CreateCamera, self).create(
subset_name,
instance_data,
pre_create_data
)
return instance
def set_selected_nodes(self, pre_create_data):
if pre_create_data.get("use_selection"):
self.selected_nodes = nuke.selectedNodes()
if self.selected_nodes == []:
raise NukeCreatorError(
"Creator error: No active selection")
elif len(self.selected_nodes) > 1:
raise NukeCreatorError(
"Creator error: Select only one camera node")
else:
# if selected is off then create one node
camera_node = nuke.createNode("Camera2")
camera_node["tile_color"].setValue(int(self.node_color, 16))
# add avalon knobs
instance = set_avalon_knob_data(camera_node, self.data)
return instance
self.selected_nodes = []

View file

@ -1,87 +1,67 @@
import nuke
from openpype.hosts.nuke.api import plugin
from openpype.hosts.nuke.api.lib import (
maintained_selection,
select_nodes,
set_avalon_knob_data
from openpype.hosts.nuke.api import (
NukeCreator,
NukeCreatorError,
maintained_selection
)
class CreateGizmo(plugin.OpenPypeCreator):
"""Add Publishable "gizmo" group
class CreateGizmo(NukeCreator):
"""Add Publishable Group as gizmo"""
The name "gizmo" is used symbolically, as it is familiar
to Nuke users as a group of nodes distributed
downstream in the workflow.
"""
name = "gizmo"
label = "Gizmo"
identifier = "create_gizmo"
label = "Gizmo (group)"
family = "gizmo"
icon = "file-archive-o"
defaults = ["ViewerInput", "Lut", "Effect"]
default_variants = ["ViewerInput", "Lut", "Effect"]
def __init__(self, *args, **kwargs):
super(CreateGizmo, self).__init__(*args, **kwargs)
self.nodes = nuke.selectedNodes()
self.node_color = "0x7533c1ff"
return
def process(self):
if (self.options or {}).get("useSelection"):
nodes = self.nodes
self.log.info(len(nodes))
if len(nodes) == 1:
select_nodes(nodes)
node = nodes[-1]
# check if Group node
if node.Class() in "Group":
node["name"].setValue("{}_GZM".format(self.name))
node["tile_color"].setValue(int(self.node_color, 16))
return set_avalon_knob_data(node, self.data)
else:
msg = ("Please select a group node "
"you wish to publish as the gizmo")
self.log.error(msg)
nuke.message(msg)
if len(nodes) >= 2:
select_nodes(nodes)
nuke.makeGroup()
gizmo_node = nuke.selectedNode()
gizmo_node["name"].setValue("{}_GZM".format(self.name))
gizmo_node["tile_color"].setValue(int(self.node_color, 16))
# add sticky node with guide
with gizmo_node:
sticky = nuke.createNode("StickyNote")
sticky["label"].setValue(
"Add following:\n- set Input"
" nodes\n- set one Output1\n"
"- create User knobs on the group")
# add avalon knobs
return set_avalon_knob_data(gizmo_node, self.data)
# plugin attributes
node_color = "0x7533c1ff"
def create_instance_node(
self,
node_name,
knobs=None,
parent=None,
node_type=None
):
with maintained_selection():
if self.selected_nodes:
node = self.selected_nodes[0]
if node.Class() != "Group":
raise NukeCreatorError(
"Creator error: Select only 'Group' node type")
created_node = node
else:
msg = "Please select nodes you wish to add to the gizmo"
self.log.error(msg)
nuke.message(msg)
return
created_node = nuke.collapseToGroup()
created_node["tile_color"].setValue(
int(self.node_color, 16))
created_node["name"].setValue(node_name)
self.add_info_knob(created_node)
return created_node
def create(self, subset_name, instance_data, pre_create_data):
# make sure subset name is unique
self.check_existing_subset(subset_name)
instance = super(CreateGizmo, self).create(
subset_name,
instance_data,
pre_create_data
)
return instance
def set_selected_nodes(self, pre_create_data):
if pre_create_data.get("use_selection"):
self.selected_nodes = nuke.selectedNodes()
if self.selected_nodes == []:
raise NukeCreatorError("Creator error: No active selection")
elif len(self.selected_nodes) > 1:
NukeCreatorError("Creator error: Select only one 'Group' node")
else:
with maintained_selection():
gizmo_node = nuke.createNode("Group")
gizmo_node["name"].setValue("{}_GZM".format(self.name))
gizmo_node["tile_color"].setValue(int(self.node_color, 16))
# add sticky node with guide
with gizmo_node:
sticky = nuke.createNode("StickyNote")
sticky["label"].setValue(
"Add following:\n- add Input"
" nodes\n- add one Output1\n"
"- create User knobs on the group")
# add avalon knobs
return set_avalon_knob_data(gizmo_node, self.data)
self.selected_nodes = []

View file

@ -1,87 +1,67 @@
import nuke
from openpype.hosts.nuke.api import plugin
from openpype.hosts.nuke.api.lib import (
set_avalon_knob_data
from openpype.hosts.nuke.api import (
NukeCreator,
NukeCreatorError,
maintained_selection
)
class CreateModel(plugin.OpenPypeCreator):
"""Add Publishable Model Geometry"""
class CreateModel(NukeCreator):
"""Add Publishable Camera"""
name = "model"
label = "Create 3d Model"
identifier = "create_model"
label = "Model (3d)"
family = "model"
icon = "cube"
defaults = ["Main"]
default_variants = ["Main"]
def __init__(self, *args, **kwargs):
super(CreateModel, self).__init__(*args, **kwargs)
self.nodes = nuke.selectedNodes()
self.node_color = "0xff3200ff"
return
# plugin attributes
node_color = "0xff3200ff"
def process(self):
nodes = list()
if (self.options or {}).get("useSelection"):
nodes = self.nodes
for n in nodes:
n['selected'].setValue(0)
end_nodes = list()
# get the last nodes in the tree for selection
for n in nodes:
x = n
end = 0
while end == 0:
try:
x = x.dependent()[0]
except:
end_node = x
end = 1
end_nodes.append(end_node)
# set end_nodes
end_nodes = list(set(end_nodes))
# check if nodes is 3d nodes
for n in end_nodes:
n['selected'].setValue(1)
sn = nuke.createNode("Scene")
if not sn.input(0):
end_nodes.remove(n)
nuke.delete(sn)
# loop over end nodes
for n in end_nodes:
n['selected'].setValue(1)
self.nodes = nuke.selectedNodes()
nodes = self.nodes
if len(nodes) >= 1:
# loop selected nodes
for n in nodes:
data = self.data.copy()
if len(nodes) > 1:
# rename subset name only if more
# then one node are selected
subset = self.family + n["name"].value().capitalize()
data["subset"] = subset
# change node color
n["tile_color"].setValue(int(self.node_color, 16))
# add avalon knobs
set_avalon_knob_data(n, data)
return True
def create_instance_node(
self,
node_name,
knobs=None,
parent=None,
node_type=None
):
with maintained_selection():
if self.selected_nodes:
node = self.selected_nodes[0]
if node.Class() != "Scene":
raise NukeCreatorError(
"Creator error: Select only 'Scene' node type")
created_node = node
else:
msg = str("Please select nodes you "
"wish to add to a container")
self.log.error(msg)
nuke.message(msg)
return
created_node = nuke.createNode("Scene")
created_node["tile_color"].setValue(
int(self.node_color, 16))
created_node["name"].setValue(node_name)
self.add_info_knob(created_node)
return created_node
def create(self, subset_name, instance_data, pre_create_data):
# make sure subset name is unique
self.check_existing_subset(subset_name)
instance = super(CreateModel, self).create(
subset_name,
instance_data,
pre_create_data
)
return instance
def set_selected_nodes(self, pre_create_data):
if pre_create_data.get("use_selection"):
self.selected_nodes = nuke.selectedNodes()
if self.selected_nodes == []:
raise NukeCreatorError("Creator error: No active selection")
elif len(self.selected_nodes) > 1:
NukeCreatorError("Creator error: Select only one 'Scene' node")
else:
# if selected is off then create one node
model_node = nuke.createNode("WriteGeo")
model_node["tile_color"].setValue(int(self.node_color, 16))
# add avalon knobs
instance = set_avalon_knob_data(model_node, self.data)
return instance
self.selected_nodes = []

View file

@ -1,57 +0,0 @@
from collections import OrderedDict
import nuke
from openpype.hosts.nuke.api import plugin
from openpype.hosts.nuke.api.lib import (
set_avalon_knob_data
)
class CrateRead(plugin.OpenPypeCreator):
# change this to template preset
name = "ReadCopy"
label = "Create Read Copy"
hosts = ["nuke"]
family = "source"
families = family
icon = "film"
defaults = ["Effect", "Backplate", "Fire", "Smoke"]
def __init__(self, *args, **kwargs):
super(CrateRead, self).__init__(*args, **kwargs)
self.nodes = nuke.selectedNodes()
data = OrderedDict()
data['family'] = self.family
data['families'] = self.families
for k, v in self.data.items():
if k not in data.keys():
data.update({k: v})
self.data = data
def process(self):
self.name = self.data["subset"]
nodes = self.nodes
if not nodes or len(nodes) == 0:
msg = "Please select Read node"
self.log.error(msg)
nuke.message(msg)
else:
count_reads = 0
for node in nodes:
if node.Class() != 'Read':
continue
avalon_data = self.data
avalon_data['subset'] = "{}".format(self.name)
set_avalon_knob_data(node, avalon_data)
node['tile_color'].setValue(16744935)
count_reads += 1
if count_reads < 1:
msg = "Please select Read node"
self.log.error(msg)
nuke.message(msg)
return

View file

@ -0,0 +1,88 @@
import nuke
import six
import sys
from openpype.hosts.nuke.api import (
INSTANCE_DATA_KNOB,
NukeCreator,
NukeCreatorError,
set_node_data
)
from openpype.pipeline import (
CreatedInstance
)
class CreateSource(NukeCreator):
"""Add Publishable Read with source"""
identifier = "create_source"
label = "Source (read)"
family = "source"
icon = "film"
default_variants = ["Effect", "Backplate", "Fire", "Smoke"]
# plugin attributes
node_color = "0xff9100ff"
def create_instance_node(
self,
node_name,
read_node
):
read_node["tile_color"].setValue(
int(self.node_color, 16))
read_node["name"].setValue(node_name)
self.add_info_knob(read_node)
return read_node
def create(self, subset_name, instance_data, pre_create_data):
# make sure selected nodes are added
self.set_selected_nodes(pre_create_data)
try:
for read_node in self.selected_nodes:
if read_node.Class() != 'Read':
continue
node_name = read_node.name()
_subset_name = subset_name + node_name
# make sure subset name is unique
self.check_existing_subset(_subset_name)
instance_node = self.create_instance_node(
_subset_name,
read_node
)
instance = CreatedInstance(
self.family,
_subset_name,
instance_data,
self
)
instance.transient_data["node"] = instance_node
self._add_instance_to_context(instance)
set_node_data(
instance_node,
INSTANCE_DATA_KNOB,
instance.data_to_store()
)
except Exception as er:
six.reraise(
NukeCreatorError,
NukeCreatorError("Creator error: {}".format(er)),
sys.exc_info()[2])
def set_selected_nodes(self, pre_create_data):
    if pre_create_data.get("use_selection"):
        self.selected_nodes = nuke.selectedNodes()
        if not self.selected_nodes:
            raise NukeCreatorError("Creator error: No active selection")
    else:
        raise NukeCreatorError(
            "Creator error: only supported with active selection")

View file

@ -0,0 +1,173 @@
import nuke
import sys
import six
from openpype.pipeline import (
CreatedInstance
)
from openpype.lib import (
BoolDef,
NumberDef,
UISeparatorDef,
EnumDef
)
from openpype.hosts.nuke import api as napi
class CreateWriteImage(napi.NukeWriteCreator):
identifier = "create_write_image"
label = "Image (write)"
family = "image"
icon = "sign-out"
instance_attributes = [
"use_range_limit"
]
default_variants = [
"StillFrame",
"MPFrame",
"LayoutFrame"
]
temp_rendering_path_template = (
"{work}/renders/nuke/{subset}/{subset}.{frame}.{ext}")
def get_pre_create_attr_defs(self):
attr_defs = [
BoolDef(
"use_selection",
default=True,
label="Use selection"
),
self._get_render_target_enum(),
UISeparatorDef(),
self._get_frame_source_number()
]
return attr_defs
def _get_render_target_enum(self):
rendering_targets = {
"local": "Local machine rendering",
"frames": "Use existing frames"
}
return EnumDef(
"render_target",
items=rendering_targets,
label="Render target"
)
def _get_frame_source_number(self):
return NumberDef(
"active_frame",
label="Active frame",
default=nuke.frame()
)
def get_instance_attr_defs(self):
attr_defs = [
self._get_render_target_enum(),
self._get_reviewable_bool()
]
return attr_defs
def create_instance_node(self, subset_name, instance_data):
linked_knobs_ = []
if "use_range_limit" in self.instance_attributes:
linked_knobs_ = ["channels", "___", "first", "last", "use_limit"]
# add fpath_template
write_data = {
"creator": self.__class__.__name__,
"subset": subset_name,
"fpath_template": self.temp_rendering_path_template
}
write_data.update(instance_data)
created_node = napi.create_write_node(
subset_name,
write_data,
input=self.selected_node,
prenodes=self.prenodes,
linked_knobs=linked_knobs_,
**{
"frame": nuke.frame()
}
)
self.add_info_knob(created_node)
self._add_frame_range_limit(created_node, instance_data)
self.integrate_links(created_node, outputs=True)
return created_node
def create(self, subset_name, instance_data, pre_create_data):
subset_name = subset_name.format(**pre_create_data)
# pass values from precreate to instance
self.pass_pre_attributes_to_instance(
instance_data,
pre_create_data,
[
"active_frame",
"render_target"
]
)
# make sure selected nodes are added
self.set_selected_nodes(pre_create_data)
# make sure subset name is unique
self.check_existing_subset(subset_name)
instance_node = self.create_instance_node(
subset_name,
instance_data,
)
try:
instance = CreatedInstance(
self.family,
subset_name,
instance_data,
self
)
instance.transient_data["node"] = instance_node
self._add_instance_to_context(instance)
napi.set_node_data(
instance_node,
napi.INSTANCE_DATA_KNOB,
instance.data_to_store()
)
return instance
except Exception as er:
six.reraise(
napi.NukeCreatorError,
napi.NukeCreatorError("Creator error: {}".format(er)),
sys.exc_info()[2]
)
def _add_frame_range_limit(self, write_node, instance_data):
if "use_range_limit" not in self.instance_attributes:
return
active_frame = (
instance_data["creator_attributes"].get("active_frame"))
write_node.begin()
for n in nuke.allNodes():
# get write node
if n.Class() in "Write":
w_node = n
write_node.end()
w_node["use_limit"].setValue(True)
w_node["first"].setValue(active_frame or nuke.frame())
w_node["last"].setExpression("first")
return write_node
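`_add_frame_range_limit` relies on Nuke group scoping: `write_node.begin()` makes the grouped write container the current graph, so `nuke.allNodes()` lists only its children, and `end()` steps back out. A small sketch of that traversal, runnable inside a Nuke session (the Group and Write here are throwaway nodes created only for the demonstration):

```python
import nuke

# Build a throwaway Group holding a Write node.
group = nuke.createNode("Group")
group.begin()          # enter the group: node ops now act inside it
inner_write = nuke.createNode("Write")
group.end()            # step back out to the root graph

# Find the inner Write the same way the creator does.
group.begin()
w_node = None
for n in nuke.allNodes():
    if n.Class() == "Write":
        w_node = n
group.end()

assert w_node is inner_write
```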

View file

@ -1,56 +1,176 @@
import nuke
import sys
import six
from openpype.hosts.nuke.api import plugin
from openpype.hosts.nuke.api.lib import (
create_write_node, create_write_node_legacy)
from openpype.pipeline import (
CreatedInstance
)
from openpype.lib import (
BoolDef,
NumberDef,
UISeparatorDef,
UILabelDef
)
from openpype.hosts.nuke import api as napi
class CreateWritePrerender(plugin.AbstractWriteRender):
# change this to template preset
name = "WritePrerender"
label = "Create Write Prerender"
hosts = ["nuke"]
n_class = "Write"
class CreateWritePrerender(napi.NukeWriteCreator):
identifier = "create_write_prerender"
label = "Prerender (write)"
family = "prerender"
icon = "sign-out"
# settings
fpath_template = "{work}/render/nuke/{subset}/{subset}.{frame}.{ext}"
defaults = ["Key01", "Bg01", "Fg01", "Branch01", "Part01"]
reviewable = False
use_range_limit = True
instance_attributes = [
"use_range_limit"
]
default_variants = [
"Key01",
"Bg01",
"Fg01",
"Branch01",
"Part01"
]
temp_rendering_path_template = (
"{work}/renders/nuke/{subset}/{subset}.{frame}.{ext}")
def __init__(self, *args, **kwargs):
super(CreateWritePrerender, self).__init__(*args, **kwargs)
def get_pre_create_attr_defs(self):
attr_defs = [
BoolDef(
"use_selection",
default=True,
label="Use selection"
),
self._get_render_target_enum()
]
return attr_defs
def get_instance_attr_defs(self):
attr_defs = [
self._get_render_target_enum(),
self._get_reviewable_bool()
]
if "farm_rendering" in self.instance_attributes:
attr_defs.extend([
UISeparatorDef(),
UILabelDef("Farm rendering attributes"),
BoolDef("suspended_publish", label="Suspended publishing"),
NumberDef(
"farm_priority",
label="Priority",
minimum=1,
maximum=99,
default=50
),
NumberDef(
"farm_chunk",
label="Chunk size",
minimum=1,
maximum=99,
default=10
),
NumberDef(
"farm_concurency",
label="Concurent tasks",
minimum=1,
maximum=10,
default=1
)
])
return attr_defs
def create_instance_node(self, subset_name, instance_data):
linked_knobs_ = []
if "use_range_limit" in self.instance_attributes:
linked_knobs_ = ["channels", "___", "first", "last", "use_limit"]
def _create_write_node(self, selected_node, inputs, outputs, write_data):
# add fpath_template
write_data["fpath_template"] = self.fpath_template
write_data["use_range_limit"] = self.use_range_limit
write_data["frame_range"] = (
nuke.root()["first_frame"].value(),
nuke.root()["last_frame"].value()
write_data = {
"creator": self.__class__.__name__,
"subset": subset_name,
"fpath_template": self.temp_rendering_path_template
}
write_data.update(instance_data)
# get width and height
if self.selected_node:
width, height = (
self.selected_node.width(), self.selected_node.height())
else:
actual_format = nuke.root().knob('format').value()
width, height = (actual_format.width(), actual_format.height())
created_node = napi.create_write_node(
subset_name,
write_data,
input=self.selected_node,
prenodes=self.prenodes,
linked_knobs=linked_knobs_,
**{
"width": width,
"height": height
}
)
self.add_info_knob(created_node)
self._add_frame_range_limit(created_node)
self.integrate_links(created_node, outputs=True)
return created_node
def create(self, subset_name, instance_data, pre_create_data):
# pass values from precreate to instance
self.pass_pre_attributes_to_instance(
instance_data,
pre_create_data,
[
"render_target"
]
)
if not self.is_legacy():
return create_write_node(
self.data["subset"],
write_data,
input=selected_node,
review=self.reviewable,
linked_knobs=["channels", "___", "first", "last", "use_limit"]
)
else:
return create_write_node_legacy(
self.data["subset"],
write_data,
input=selected_node,
review=self.reviewable,
linked_knobs=["channels", "___", "first", "last", "use_limit"]
# make sure selected nodes are added
self.set_selected_nodes(pre_create_data)
# make sure subset name is unique
self.check_existing_subset(subset_name)
instance_node = self.create_instance_node(
subset_name,
instance_data
)
try:
instance = CreatedInstance(
self.family,
subset_name,
instance_data,
self
)
def _modify_write_node(self, write_node):
# open group node
instance.transient_data["node"] = instance_node
self._add_instance_to_context(instance)
napi.set_node_data(
instance_node,
napi.INSTANCE_DATA_KNOB,
instance.data_to_store()
)
return instance
except Exception as er:
six.reraise(
napi.NukeCreatorError,
napi.NukeCreatorError("Creator error: {}".format(er)),
sys.exc_info()[2]
)
def _add_frame_range_limit(self, write_node):
if "use_range_limit" not in self.instance_attributes:
return
write_node.begin()
for n in nuke.allNodes():
# get write node
@ -58,9 +178,8 @@ class CreateWritePrerender(plugin.AbstractWriteRender):
w_node = n
write_node.end()
if self.use_range_limit:
w_node["use_limit"].setValue(True)
w_node["first"].setValue(nuke.root()["first_frame"].value())
w_node["last"].setValue(nuke.root()["last_frame"].value())
w_node["use_limit"].setValue(True)
w_node["first"].setValue(nuke.root()["first_frame"].value())
w_node["last"].setValue(nuke.root()["last_frame"].value())
return write_node
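Both write creators finish by constraining the inner Write's frame range through its knobs: the prerender variant above copies the script's root range, while the image variant ties `last` to `first` with an expression so the limit always spans exactly one frame. A tiny Nuke-side sketch of both styles on a throwaway node:

```python
import nuke

w = nuke.createNode("Write")
w["use_limit"].setValue(True)

# Prerender style: pin the range to the script's root settings.
w["first"].setValue(nuke.root()["first_frame"].value())
w["last"].setValue(nuke.root()["last_frame"].value())

# Still-image style: tie "last" to "first" so the limited
# range follows the active frame as a single frame.
w["first"].setValue(nuke.frame())
w["last"].setExpression("first")
```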

View file

@ -1,86 +1,157 @@
import nuke
import sys
import six
from openpype.hosts.nuke.api import plugin
from openpype.hosts.nuke.api.lib import (
create_write_node, create_write_node_legacy)
from openpype.pipeline import (
CreatedInstance
)
from openpype.lib import (
BoolDef,
NumberDef,
UISeparatorDef,
UILabelDef
)
from openpype.hosts.nuke import api as napi
class CreateWriteRender(plugin.AbstractWriteRender):
# change this to template preset
name = "WriteRender"
label = "Create Write Render"
hosts = ["nuke"]
n_class = "Write"
class CreateWriteRender(napi.NukeWriteCreator):
identifier = "create_write_render"
label = "Render (write)"
family = "render"
icon = "sign-out"
# settings
fpath_template = "{work}/render/nuke/{subset}/{subset}.{frame}.{ext}"
defaults = ["Main", "Mask"]
prenodes = {
"Reformat01": {
"nodeclass": "Reformat",
"dependent": None,
"knobs": [
{
"type": "text",
"name": "resize",
"value": "none"
},
{
"type": "bool",
"name": "black_outside",
"value": True
}
]
}
}
instance_attributes = [
"reviewable"
]
default_variants = [
"Main",
"Mask"
]
temp_rendering_path_template = (
"{work}/renders/nuke/{subset}/{subset}.{frame}.{ext}")
def __init__(self, *args, **kwargs):
super(CreateWriteRender, self).__init__(*args, **kwargs)
def get_pre_create_attr_defs(self):
attr_defs = [
BoolDef(
"use_selection",
default=True,
label="Use selection"
),
self._get_render_target_enum()
]
return attr_defs
def _create_write_node(self, selected_node, inputs, outputs, write_data):
def get_instance_attr_defs(self):
attr_defs = [
self._get_render_target_enum(),
self._get_reviewable_bool()
]
if "farm_rendering" in self.instance_attributes:
attr_defs.extend([
UISeparatorDef(),
UILabelDef("Farm rendering attributes"),
BoolDef("suspended_publish", label="Suspended publishing"),
NumberDef(
"farm_priority",
label="Priority",
minimum=1,
maximum=99,
default=50
),
NumberDef(
"farm_chunk",
label="Chunk size",
minimum=1,
maximum=99,
default=10
),
NumberDef(
"farm_concurency",
label="Concurent tasks",
minimum=1,
maximum=10,
default=1
)
])
return attr_defs
def create_instance_node(self, subset_name, instance_data):
# add fpath_template
write_data["fpath_template"] = self.fpath_template
write_data = {
"creator": self.__class__.__name__,
"subset": subset_name,
"fpath_template": self.temp_rendering_path_template
}
write_data.update(instance_data)
# add reformat node to cut off all outside of format bounding box
# get width and height
try:
width, height = (selected_node.width(), selected_node.height())
except AttributeError:
if self.selected_node:
width, height = (
self.selected_node.width(), self.selected_node.height())
else:
actual_format = nuke.root().knob('format').value()
width, height = (actual_format.width(), actual_format.height())
if not self.is_legacy():
return create_write_node(
self.data["subset"],
write_data,
input=selected_node,
prenodes=self.prenodes,
**{
"width": width,
"height": height
}
)
else:
_prenodes = [
{
"name": "Reformat01",
"class": "Reformat",
"knobs": [
("resize", 0),
("black_outside", 1),
],
"dependent": None
}
created_node = napi.create_write_node(
subset_name,
write_data,
input=self.selected_node,
prenodes=self.prenodes,
**{
"width": width,
"height": height
}
)
self.add_info_knob(created_node)
self.integrate_links(created_node, outputs=False)
return created_node
def create(self, subset_name, instance_data, pre_create_data):
# pass values from precreate to instance
self.pass_pre_attributes_to_instance(
instance_data,
pre_create_data,
[
"render_target"
]
)
# make sure selected nodes are added
self.set_selected_nodes(pre_create_data)
return create_write_node_legacy(
self.data["subset"],
write_data,
input=selected_node,
prenodes=_prenodes
# make sure subset name is unique
self.check_existing_subset(subset_name)
instance_node = self.create_instance_node(
subset_name,
instance_data
)
try:
instance = CreatedInstance(
self.family,
subset_name,
instance_data,
self
)
def _modify_write_node(self, write_node):
return write_node
instance.transient_data["node"] = instance_node
self._add_instance_to_context(instance)
napi.set_node_data(
instance_node,
napi.INSTANCE_DATA_KNOB,
instance.data_to_store()
)
return instance
except Exception as er:
six.reraise(
napi.NukeCreatorError,
napi.NukeCreatorError("Creator error: {}".format(er)),
sys.exc_info()[2]
)
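The `prenodes` class attribute above is a settings-shaped preset: a node name mapped to its class, dependency, and knob values, which `create_write_node` expands in front of the Write to crop everything outside the format bounding box. A standalone sketch of walking such a preset (hypothetical consumer; the real expansion happens inside the Nuke API):

```python
prenodes = {
    "Reformat01": {
        "nodeclass": "Reformat",
        "dependent": None,
        "knobs": [
            {"type": "text", "name": "resize", "value": "none"},
            {"type": "bool", "name": "black_outside", "value": True},
        ],
    }
}

# Walk the preset the way a node builder might: create each class,
# then apply its knob values (printed here instead of created).
for name, config in prenodes.items():
    print("create", config["nodeclass"], "as", name)
    for knob in config["knobs"]:
        print("  set", knob["name"], "=", knob["value"])
```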

View file

@ -1,105 +0,0 @@
import nuke
from openpype.hosts.nuke.api import plugin
from openpype.hosts.nuke.api.lib import (
create_write_node,
create_write_node_legacy,
get_created_node_imageio_setting_legacy
)
# HACK: just to disable still image on projects which
# are not having anatomy imageio preset for CreateWriteStill
# TODO: remove this code as soon as it will be obsolete
imageio_writes = get_created_node_imageio_setting_legacy(
"Write",
"CreateWriteStill",
"stillMain"
)
print(imageio_writes["knobs"])
class CreateWriteStill(plugin.AbstractWriteRender):
# change this to template preset
name = "WriteStillFrame"
label = "Create Write Still Image"
hosts = ["nuke"]
n_class = "Write"
family = "still"
icon = "image"
# settings
fpath_template = "{work}/render/nuke/{subset}/{subset}.{ext}"
defaults = [
"ImageFrame",
"MPFrame",
"LayoutFrame"
]
prenodes = {
"FrameHold01": {
"nodeclass": "FrameHold",
"dependent": None,
"knobs": [
{
"type": "formatable",
"name": "first_frame",
"template": "{frame}",
"to_type": "number"
}
]
}
}
def __init__(self, *args, **kwargs):
super(CreateWriteStill, self).__init__(*args, **kwargs)
def _create_write_node(self, selected_node, inputs, outputs, write_data):
# add fpath_template
write_data["fpath_template"] = self.fpath_template
if not self.is_legacy():
return create_write_node(
self.name,
write_data,
input=selected_node,
review=False,
prenodes=self.prenodes,
farm=False,
linked_knobs=["channels", "___", "first", "last", "use_limit"],
**{
"frame": nuke.frame()
}
)
else:
_prenodes = [
{
"name": "FrameHold01",
"class": "FrameHold",
"knobs": [
("first_frame", nuke.frame())
],
"dependent": None
}
]
return create_write_node_legacy(
self.name,
write_data,
input=selected_node,
review=False,
prenodes=_prenodes,
farm=False,
linked_knobs=["channels", "___", "first", "last", "use_limit"]
)
def _modify_write_node(self, write_node):
write_node.begin()
for n in nuke.allNodes():
# get write node
if n.Class() in "Write":
w_node = n
write_node.end()
w_node["use_limit"].setValue(True)
w_node["first"].setValue(nuke.frame())
w_node["last"].setValue(nuke.frame())
return write_node

View file

@ -0,0 +1,69 @@
import openpype.hosts.nuke.api as api
from openpype.client import get_asset_by_name
from openpype.pipeline import (
AutoCreator,
CreatedInstance,
legacy_io,
)
from openpype.hosts.nuke.api import (
INSTANCE_DATA_KNOB,
set_node_data
)
import nuke
class WorkfileCreator(AutoCreator):
identifier = "workfile"
family = "workfile"
default_variant = "Main"
def get_instance_attr_defs(self):
return []
def collect_instances(self):
root_node = nuke.root()
instance_data = api.get_node_data(
root_node, api.INSTANCE_DATA_KNOB
)
project_name = legacy_io.Session["AVALON_PROJECT"]
asset_name = legacy_io.Session["AVALON_ASSET"]
task_name = legacy_io.Session["AVALON_TASK"]
host_name = legacy_io.Session["AVALON_APP"]
asset_doc = get_asset_by_name(project_name, asset_name)
subset_name = self.get_subset_name(
self.default_variant, task_name, asset_doc,
project_name, host_name
)
instance_data.update({
"asset": asset_name,
"task": task_name,
"variant": self.default_variant
})
instance_data.update(self.get_dynamic_data(
self.default_variant, task_name, asset_doc,
project_name, host_name, instance_data
))
instance = CreatedInstance(
self.family, subset_name, instance_data, self
)
instance.transient_data["node"] = root_node
self._add_instance_to_context(instance)
def update_instances(self, update_list):
for created_inst, _changes in update_list:
instance_node = created_inst.transient_data["node"]
set_node_data(
instance_node,
INSTANCE_DATA_KNOB,
created_inst.data_to_store()
)
def create(self, options=None):
# no need to create if it is created
# in `collect_instances`
pass
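The workfile creator persists its instance data on the root node through `set_node_data` / `get_node_data`. Assuming those helpers JSON-serialize into a string knob (an assumption; the real implementation lives in `openpype.hosts.nuke.api` and may differ), the round trip looks roughly like this:

```python
import json

# Stand-in for a Nuke node's knob store: knob name -> string value.
class FakeNode(dict):
    pass

def set_node_data(node, knob, data):
    # Serialize the instance data into a single string "knob".
    node[knob] = json.dumps(data)

def get_node_data(node, knob):
    raw = node.get(knob)
    return json.loads(raw) if raw else {}

root = FakeNode()
set_node_data(
    root, "publish_instance",
    {"family": "workfile", "variant": "Main"}
)
print(get_node_data(root, "publish_instance"))
```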

View file

@ -28,7 +28,7 @@ class LoadBackdropNodes(load.LoaderPlugin):
representations = ["nk"]
families = ["workfile", "nukenodes"]
label = "Iport Nuke Nodes"
label = "Import Nuke Nodes"
order = 0
icon = "eye"
color = "white"

View file

@ -1,9 +1,9 @@
from pprint import pformat
import pyblish.api
from openpype.hosts.nuke.api import lib as pnlib
import nuke
@pyblish.api.log
class CollectBackdrops(pyblish.api.InstancePlugin):
"""Collect Backdrop node instance and its content
"""
@ -14,8 +14,9 @@ class CollectBackdrops(pyblish.api.InstancePlugin):
families = ["nukenodes"]
def process(self, instance):
self.log.debug(pformat(instance.data))
bckn = instance[0]
bckn = instance.data["transientData"]["node"]
# define size of the backdrop
left = bckn.xpos()
@ -23,6 +24,7 @@ class CollectBackdrops(pyblish.api.InstancePlugin):
right = left + bckn['bdwidth'].value()
bottom = top + bckn['bdheight'].value()
instance.data["transientData"]["childNodes"] = []
# iterate all nodes
for node in nuke.allNodes():
@ -37,17 +39,17 @@ class CollectBackdrops(pyblish.api.InstancePlugin):
and (node.ypos() + node.screenHeight() < bottom):
# add contained nodes to instance's node list
instance.append(node)
instance.data["transientData"]["childNodes"].append(node)
# get all connections from outside of backdrop
nodes = instance[1:]
nodes = instance.data["transientData"]["childNodes"]
connections_in, connections_out = pnlib.get_dependent_nodes(nodes)
instance.data["nodeConnectionsIn"] = connections_in
instance.data["nodeConnectionsOut"] = connections_out
instance.data["transientData"]["nodeConnectionsIn"] = connections_in
instance.data["transientData"]["nodeConnectionsOut"] = connections_out
# make label nicer
instance.data["label"] = "{0} ({1} nodes)".format(
bckn.name(), len(instance) - 1)
bckn.name(), len(instance.data["transientData"]["childNodes"]))
instance.data["families"].append(instance.data["family"])
@ -83,5 +85,4 @@ class CollectBackdrops(pyblish.api.InstancePlugin):
"frameStart": first_frame,
"frameEnd": last_frame
})
self.log.info("Backdrop content collected: `{}`".format(instance[:]))
self.log.info("Backdrop instance collected: `{}`".format(instance))

Some files were not shown because too many files have changed in this diff.