mirror of https://github.com/ynput/ayon-core.git, synced 2026-01-02 08:54:53 +01:00

Merge branch 'develop' into feature/tvpaint_load_workfile
Commit dd30c4963d: 409 changed files with 29368 additions and 5408 deletions
@@ -87,7 +87,7 @@ ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.

@@ -142,5 +142,6 @@ cython_debug/
.poetry/
.github/
vendor/bin/
vendor/python/
docs/
website/
.gitignore (vendored, 1 line changed)

@@ -39,6 +39,7 @@ Temporary Items
/dist/

/vendor/bin/*
/vendor/python/*
/.venv
/venv/

CHANGELOG.md (214 lines changed)

@@ -1,151 +1,127 @@
# Changelog

## [3.5.0-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.6.0-nightly.5](https://github.com/pypeclub/OpenPype/tree/HEAD)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.4.1...HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.5.0...HEAD)

**🆕 New features**

- Maya: Validate setdress top group [\#2068](https://github.com/pypeclub/OpenPype/pull/2068)
- Maya: Enable publishing render attrib sets \(e.g. V-Ray Displacement\) with model [\#1955](https://github.com/pypeclub/OpenPype/pull/1955)
- Maya : Colorspace configuration [\#2170](https://github.com/pypeclub/OpenPype/pull/2170)
- Blender: Added support for audio [\#2168](https://github.com/pypeclub/OpenPype/pull/2168)
- Flame: a host basic integration [\#2165](https://github.com/pypeclub/OpenPype/pull/2165)
- Houdini: simple HDA workflow [\#2072](https://github.com/pypeclub/OpenPype/pull/2072)

**🚀 Enhancements**

- Project manager: Filter first item after selection of project [\#2069](https://github.com/pypeclub/OpenPype/pull/2069)
- Tools: add support for pyenv on windows [\#2051](https://github.com/pypeclub/OpenPype/pull/2051)
- Add both side availability on Site Sync sites to Loader [\#2220](https://github.com/pypeclub/OpenPype/pull/2220)
- Tools: Center loader and library loader on show [\#2219](https://github.com/pypeclub/OpenPype/pull/2219)
- Maya : Validate shape zero [\#2212](https://github.com/pypeclub/OpenPype/pull/2212)
- Maya : validate unique names [\#2211](https://github.com/pypeclub/OpenPype/pull/2211)
- Tools: OpenPype stylesheet in workfiles tool [\#2208](https://github.com/pypeclub/OpenPype/pull/2208)
- Ftrack: Replace Queue with deque in event handlers logic [\#2204](https://github.com/pypeclub/OpenPype/pull/2204)
- Tools: New select context dialog [\#2200](https://github.com/pypeclub/OpenPype/pull/2200)
- Maya : Validate mesh ngons [\#2199](https://github.com/pypeclub/OpenPype/pull/2199)
- Delivery: Check 'frame' key in template for sequence delivery [\#2196](https://github.com/pypeclub/OpenPype/pull/2196)
- Usage of tools code [\#2185](https://github.com/pypeclub/OpenPype/pull/2185)
- Settings: Dictionary based on project roots [\#2184](https://github.com/pypeclub/OpenPype/pull/2184)
- Subset name: Be able to pass asset document to get subset name [\#2179](https://github.com/pypeclub/OpenPype/pull/2179)
- Tools: Experimental tools [\#2167](https://github.com/pypeclub/OpenPype/pull/2167)
- Loader: Refactor and use OpenPype stylesheets [\#2166](https://github.com/pypeclub/OpenPype/pull/2166)
- Add loader for linked smart objects in photoshop [\#2149](https://github.com/pypeclub/OpenPype/pull/2149)

**🐛 Bug fixes**

- Fix Sync Queue when project disabled [\#2063](https://github.com/pypeclub/OpenPype/pull/2063)
- TVPaint: Fixed rendered frame indexes [\#1946](https://github.com/pypeclub/OpenPype/pull/1946)
- Maya : multiple subsets review broken [\#2210](https://github.com/pypeclub/OpenPype/pull/2210)
- Fix - different command used for Linux and Mac OS [\#2207](https://github.com/pypeclub/OpenPype/pull/2207)
- Tools: Workfiles tool don't use avalon widgets [\#2205](https://github.com/pypeclub/OpenPype/pull/2205)
- Ftrack: Fill missing ftrack id on mongo project [\#2203](https://github.com/pypeclub/OpenPype/pull/2203)
- Project Manager: Fix copying of tasks [\#2191](https://github.com/pypeclub/OpenPype/pull/2191)
- StandalonePublisher: Source validator don't expect representations [\#2190](https://github.com/pypeclub/OpenPype/pull/2190)
- Blender: Fix trying to pack an image when the shader node has no texture [\#2183](https://github.com/pypeclub/OpenPype/pull/2183)
- MacOS: Launching of applications may cause Permissions error [\#2175](https://github.com/pypeclub/OpenPype/pull/2175)
- Maya: Aspect ratio [\#2174](https://github.com/pypeclub/OpenPype/pull/2174)
- Blender: Fix 'Deselect All' with object not in 'Object Mode' [\#2163](https://github.com/pypeclub/OpenPype/pull/2163)
- Maya: Fix hotbox broken by scriptsmenu [\#2151](https://github.com/pypeclub/OpenPype/pull/2151)
- Added validator for source files for Standalone Publisher [\#2138](https://github.com/pypeclub/OpenPype/pull/2138)

**Merged pull requests:**

- Settings: Site sync project settings improvement [\#2193](https://github.com/pypeclub/OpenPype/pull/2193)
- Add validate active site button to sync queue on a project [\#2176](https://github.com/pypeclub/OpenPype/pull/2176)
- Bump pillow from 8.2.0 to 8.3.2 [\#2162](https://github.com/pypeclub/OpenPype/pull/2162)

## [3.5.0](https://github.com/pypeclub/OpenPype/tree/3.5.0) (2021-10-17)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.5.0-nightly.8...3.5.0)

**Deprecated:**

- Maya: Change mayaAscii family to mayaScene [\#2106](https://github.com/pypeclub/OpenPype/pull/2106)

**🆕 New features**

- Added project and task into context change message in Maya [\#2131](https://github.com/pypeclub/OpenPype/pull/2131)
- Add ExtractBurnin to photoshop review [\#2124](https://github.com/pypeclub/OpenPype/pull/2124)
- PYPE-1218 - changed namespace to contain subset name in Maya [\#2114](https://github.com/pypeclub/OpenPype/pull/2114)
- Added running configurable disk mapping command before start of OP [\#2091](https://github.com/pypeclub/OpenPype/pull/2091)
- SFTP provider [\#2073](https://github.com/pypeclub/OpenPype/pull/2073)

**🚀 Enhancements**

- Maya: make rig validators configurable in settings [\#2137](https://github.com/pypeclub/OpenPype/pull/2137)
- Settings: Updated readme for entity types in settings [\#2132](https://github.com/pypeclub/OpenPype/pull/2132)
- Nuke: unified clip loader [\#2128](https://github.com/pypeclub/OpenPype/pull/2128)
- Settings UI: Project model refreshing and sorting [\#2104](https://github.com/pypeclub/OpenPype/pull/2104)
- Create Read From Rendered - Disable Relative paths by default [\#2093](https://github.com/pypeclub/OpenPype/pull/2093)
- Added choosing different dirmap mapping if workfile synched locally [\#2088](https://github.com/pypeclub/OpenPype/pull/2088)
- General: Remove IdleManager module [\#2084](https://github.com/pypeclub/OpenPype/pull/2084)
- Tray UI: Message box about missing settings defaults [\#2080](https://github.com/pypeclub/OpenPype/pull/2080)
- Tray UI: Show menu where first click happened [\#2079](https://github.com/pypeclub/OpenPype/pull/2079)
- Global: add global validators to settings [\#2078](https://github.com/pypeclub/OpenPype/pull/2078)
- Use CRF for burnin when available [\#2070](https://github.com/pypeclub/OpenPype/pull/2070)
- Project manager: Filter first item after selection of project [\#2069](https://github.com/pypeclub/OpenPype/pull/2069)

**🐛 Bug fixes**

- Maya: fix model publishing [\#2130](https://github.com/pypeclub/OpenPype/pull/2130)
- Fix - oiiotool wasn't recognized even if present [\#2129](https://github.com/pypeclub/OpenPype/pull/2129)
- General: Disk mapping group [\#2120](https://github.com/pypeclub/OpenPype/pull/2120)
- Hiero: publishing effect first time makes wrong resources path [\#2115](https://github.com/pypeclub/OpenPype/pull/2115)
- Add startup script for Houdini Core. [\#2110](https://github.com/pypeclub/OpenPype/pull/2110)
- TVPaint: Behavior name of loop also accept repeat [\#2109](https://github.com/pypeclub/OpenPype/pull/2109)
- Ftrack: Project settings save custom attributes skip unknown attributes [\#2103](https://github.com/pypeclub/OpenPype/pull/2103)
- Blender: Fix NoneType error when animation\_data is missing for a rig [\#2101](https://github.com/pypeclub/OpenPype/pull/2101)
- Fix broken import in sftp provider [\#2100](https://github.com/pypeclub/OpenPype/pull/2100)
- Global: Fix docstring on publish plugin extract review [\#2097](https://github.com/pypeclub/OpenPype/pull/2097)
- Delivery Action Files Sequence fix [\#2096](https://github.com/pypeclub/OpenPype/pull/2096)
- General: Cloud mongo ca certificate issue [\#2095](https://github.com/pypeclub/OpenPype/pull/2095)
- TVPaint: Creator use context from workfile [\#2087](https://github.com/pypeclub/OpenPype/pull/2087)
- Blender: fix texture missing when publishing blend files [\#2085](https://github.com/pypeclub/OpenPype/pull/2085)
- General: Startup validations oiio tool path fix on linux [\#2083](https://github.com/pypeclub/OpenPype/pull/2083)
- Deadline: Collect deadline server does not check existence of deadline key [\#2082](https://github.com/pypeclub/OpenPype/pull/2082)
- Blender: fixed Curves with modifiers in Rigs [\#2081](https://github.com/pypeclub/OpenPype/pull/2081)
- Nuke UI scaling [\#2077](https://github.com/pypeclub/OpenPype/pull/2077)

**Merged pull requests:**

- Bump pywin32 from 300 to 301 [\#2086](https://github.com/pypeclub/OpenPype/pull/2086)

## [3.4.1](https://github.com/pypeclub/OpenPype/tree/3.4.1) (2021-09-23)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.4.1-nightly.1...3.4.1)

**🆕 New features**

- Settings: Flag project as deactivated and hide from tools' view [\#2008](https://github.com/pypeclub/OpenPype/pull/2008)

**🚀 Enhancements**

- General: Startup validations [\#2054](https://github.com/pypeclub/OpenPype/pull/2054)
- Nuke: proxy mode validator [\#2052](https://github.com/pypeclub/OpenPype/pull/2052)
- Ftrack: Removed ftrack interface [\#2049](https://github.com/pypeclub/OpenPype/pull/2049)
- Settings UI: Deffered set value on entity [\#2044](https://github.com/pypeclub/OpenPype/pull/2044)
- Loader: Families filtering [\#2043](https://github.com/pypeclub/OpenPype/pull/2043)
- Settings UI: Project view enhancements [\#2042](https://github.com/pypeclub/OpenPype/pull/2042)
- Settings for Nuke IncrementScriptVersion [\#2039](https://github.com/pypeclub/OpenPype/pull/2039)
- Loader & Library loader: Use tools from OpenPype [\#2038](https://github.com/pypeclub/OpenPype/pull/2038)
- Adding predefined project folders creation in PM [\#2030](https://github.com/pypeclub/OpenPype/pull/2030)
- WebserverModule: Removed interface of webserver module [\#2028](https://github.com/pypeclub/OpenPype/pull/2028)
- TimersManager: Removed interface of timers manager [\#2024](https://github.com/pypeclub/OpenPype/pull/2024)
- Feature Maya import asset from scene inventory [\#2018](https://github.com/pypeclub/OpenPype/pull/2018)

**🐛 Bug fixes**

- Timers manger: Typo fix [\#2058](https://github.com/pypeclub/OpenPype/pull/2058)
- Hiero: Editorial fixes [\#2057](https://github.com/pypeclub/OpenPype/pull/2057)
- Differentiate jpg sequences from thumbnail [\#2056](https://github.com/pypeclub/OpenPype/pull/2056)
- FFmpeg: Split command to list does not work [\#2046](https://github.com/pypeclub/OpenPype/pull/2046)
- Removed shell flag in subprocess call [\#2045](https://github.com/pypeclub/OpenPype/pull/2045)

**Merged pull requests:**

- Bump prismjs from 1.24.0 to 1.25.0 in /website [\#2050](https://github.com/pypeclub/OpenPype/pull/2050)

## [3.4.0](https://github.com/pypeclub/OpenPype/tree/3.4.0) (2021-09-17)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.4.0-nightly.6...3.4.0)

### 📖 Documentation

- Documentation: Ftrack launch argsuments update [\#2014](https://github.com/pypeclub/OpenPype/pull/2014)
- Nuke Quick Start / Tutorial [\#1952](https://github.com/pypeclub/OpenPype/pull/1952)

**🆕 New features**

- Nuke: Compatibility with Nuke 13 [\#2003](https://github.com/pypeclub/OpenPype/pull/2003)

**🚀 Enhancements**

- Added possibility to configure of synchronization of workfile version… [\#2041](https://github.com/pypeclub/OpenPype/pull/2041)
- General: Task types in profiles [\#2036](https://github.com/pypeclub/OpenPype/pull/2036)
- Console interpreter: Handle invalid sizes on initialization [\#2022](https://github.com/pypeclub/OpenPype/pull/2022)
- Ftrack: Show OpenPype versions in event server status [\#2019](https://github.com/pypeclub/OpenPype/pull/2019)
- General: Staging icon [\#2017](https://github.com/pypeclub/OpenPype/pull/2017)
- Ftrack: Sync to avalon actions have jobs [\#2015](https://github.com/pypeclub/OpenPype/pull/2015)
- Modules: Connect method is not required [\#2009](https://github.com/pypeclub/OpenPype/pull/2009)
- Settings UI: Number with configurable steps [\#2001](https://github.com/pypeclub/OpenPype/pull/2001)
- Moving project folder structure creation out of ftrack module \#1989 [\#1996](https://github.com/pypeclub/OpenPype/pull/1996)
- Configurable items for providers without Settings [\#1987](https://github.com/pypeclub/OpenPype/pull/1987)
- Global: Example addons [\#1986](https://github.com/pypeclub/OpenPype/pull/1986)
- Standalone Publisher: Extract harmony zip handle workfile template [\#1982](https://github.com/pypeclub/OpenPype/pull/1982)
- Settings UI: Number sliders [\#1978](https://github.com/pypeclub/OpenPype/pull/1978)
- Workfiles: Support more workfile templates [\#1966](https://github.com/pypeclub/OpenPype/pull/1966)
- Launcher: Fix crashes on action click [\#1964](https://github.com/pypeclub/OpenPype/pull/1964)
- Settings: Minor fixes in UI and missing default values [\#1963](https://github.com/pypeclub/OpenPype/pull/1963)
- Blender: Toggle system console works on windows [\#1962](https://github.com/pypeclub/OpenPype/pull/1962)
- Global: Settings defined by Addons/Modules [\#1959](https://github.com/pypeclub/OpenPype/pull/1959)
- CI: change release numbering triggers [\#1954](https://github.com/pypeclub/OpenPype/pull/1954)
- Global: Avalon Host name collector [\#1949](https://github.com/pypeclub/OpenPype/pull/1949)

**🐛 Bug fixes**

- Workfiles tool: Task selection [\#2040](https://github.com/pypeclub/OpenPype/pull/2040)
- Ftrack: Delete old versions missing settings key [\#2037](https://github.com/pypeclub/OpenPype/pull/2037)
- Nuke: typo on a button [\#2034](https://github.com/pypeclub/OpenPype/pull/2034)
- Hiero: Fix "none" named tags [\#2033](https://github.com/pypeclub/OpenPype/pull/2033)
- FFmpeg: Subprocess arguments as list [\#2032](https://github.com/pypeclub/OpenPype/pull/2032)
- General: Fix Python 2 breaking line [\#2016](https://github.com/pypeclub/OpenPype/pull/2016)
- Bugfix/webpublisher task type [\#2006](https://github.com/pypeclub/OpenPype/pull/2006)
- Nuke thumbnails generated from middle of the sequence [\#1992](https://github.com/pypeclub/OpenPype/pull/1992)
- Nuke: last version from path gets correct version [\#1990](https://github.com/pypeclub/OpenPype/pull/1990)
- nuke, resolve, hiero: precollector order lest then 0.5 [\#1984](https://github.com/pypeclub/OpenPype/pull/1984)
- Last workfile with multiple work templates [\#1981](https://github.com/pypeclub/OpenPype/pull/1981)
- Collectors order [\#1977](https://github.com/pypeclub/OpenPype/pull/1977)
- Stop timer was within validator order range. [\#1975](https://github.com/pypeclub/OpenPype/pull/1975)
- Ftrack: arrow submodule has https url source [\#1974](https://github.com/pypeclub/OpenPype/pull/1974)
- Ftrack: Fix hosts attribute in collect ftrack username [\#1972](https://github.com/pypeclub/OpenPype/pull/1972)
- Deadline: Houdini plugins in different hierarchy [\#1970](https://github.com/pypeclub/OpenPype/pull/1970)
- Removed deprecated submodules [\#1967](https://github.com/pypeclub/OpenPype/pull/1967)
- Global: ExtractJpeg can handle filepaths with spaces [\#1961](https://github.com/pypeclub/OpenPype/pull/1961)

## [3.3.1](https://github.com/pypeclub/OpenPype/tree/3.3.1) (2021-08-20)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.3.1-nightly.1...3.3.1)

**🚀 Enhancements**

- OpenPype: Add version validation and `--headless` mode and update progress 🔄 [\#1939](https://github.com/pypeclub/OpenPype/pull/1939)

**🐛 Bug fixes**

- Maya: Menu actions fix [\#1945](https://github.com/pypeclub/OpenPype/pull/1945)
- standalone: editorial shared object problem [\#1941](https://github.com/pypeclub/OpenPype/pull/1941)

## [3.3.0](https://github.com/pypeclub/OpenPype/tree/3.3.0) (2021-08-17)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.3.0-nightly.11...3.3.0)

**🆕 New features**

- Settings UI: Breadcrumbs in settings [\#1932](https://github.com/pypeclub/OpenPype/pull/1932)

**🚀 Enhancements**

- Python console interpreter [\#1940](https://github.com/pypeclub/OpenPype/pull/1940)

**🐛 Bug fixes**

- Fix - ftrack family was added incorrectly in some cases [\#1935](https://github.com/pypeclub/OpenPype/pull/1935)
- Fix - Deadline publish on Linux started Tray instead of headless publishing [\#1930](https://github.com/pypeclub/OpenPype/pull/1930)
- Maya: Validate Model Name - repair accident deletion in settings defaults [\#1929](https://github.com/pypeclub/OpenPype/pull/1929)

**Merged pull requests:**

- Fix - make AE workfile publish to Ftrack configurable [\#1937](https://github.com/pypeclub/OpenPype/pull/1937)

## [3.2.0](https://github.com/pypeclub/OpenPype/tree/3.2.0) (2021-07-13)

[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.2.0-nightly.7...3.2.0)
Dockerfile (83 lines changed)

@@ -1,7 +1,9 @@
# Build Pype docker image
FROM centos:7 AS builder
ARG OPENPYPE_PYTHON_VERSION=3.7.10
FROM debian:bookworm-slim AS builder
ARG OPENPYPE_PYTHON_VERSION=3.7.12

LABEL maintainer="info@openpype.io"
LABEL description="Docker Image to build and run OpenPype"
LABEL org.opencontainers.image.name="pypeclub/openpype"
LABEL org.opencontainers.image.title="OpenPype Docker Image"
LABEL org.opencontainers.image.url="https://openpype.io/"
@@ -9,56 +11,49 @@ LABEL org.opencontainers.image.source="https://github.com/pypeclub/pype"

USER root

# update base
RUN yum -y install deltarpm \
    && yum -y update \
    && yum clean all
ARG DEBIAN_FRONTEND=noninteractive

# add tools we need
RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm \
    && yum -y install centos-release-scl \
    && yum -y install \
# update base
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
    ca-certificates \
    bash \
    which \
    git \
    devtoolset-7-gcc* \
    make \
    cmake \
    make \
    curl \
    wget \
    gcc \
    zlib-devel \
    bzip2 \
    bzip2-devel \
    readline-devel \
    sqlite sqlite-devel \
    openssl-devel \
    tk-devel libffi-devel \
    qt5-qtbase-devel \
    patchelf \
    && yum clean all
    build-essential \
    checkinstall \
    libssl-dev \
    zlib1g-dev \
    libbz2-dev \
    libreadline-dev \
    libsqlite3-dev \
    llvm \
    libncursesw5-dev \
    xz-utils \
    tk-dev \
    libxml2-dev \
    libxmlsec1-dev \
    libffi-dev \
    liblzma-dev \
    patchelf

SHELL ["/bin/bash", "-c"]

RUN mkdir /opt/openpype
# RUN useradd -m pype
# RUN chown pype /opt/openpype
# USER pype

RUN curl https://pyenv.run | bash
ENV PYTHON_CONFIGURE_OPTS --enable-shared

RUN echo 'export PATH="$HOME/.pyenv/bin:$PATH"'>> $HOME/.bashrc \
RUN curl https://pyenv.run | bash \
    && echo 'export PATH="$HOME/.pyenv/bin:$PATH"'>> $HOME/.bashrc \
    && echo 'eval "$(pyenv init -)"' >> $HOME/.bashrc \
    && echo 'eval "$(pyenv virtualenv-init -)"' >> $HOME/.bashrc \
    && echo 'eval "$(pyenv init --path)"' >> $HOME/.bashrc
RUN source $HOME/.bashrc && pyenv install ${OPENPYPE_PYTHON_VERSION}
    && echo 'eval "$(pyenv init --path)"' >> $HOME/.bashrc \
    && source $HOME/.bashrc && pyenv install ${OPENPYPE_PYTHON_VERSION}

COPY . /opt/openpype/
RUN rm -rf /openpype/.poetry || echo "No Poetry installed yet."
# USER root
# RUN chown -R pype /opt/openpype
RUN chmod +x /opt/openpype/tools/create_env.sh && chmod +x /opt/openpype/tools/build.sh

# USER pype
RUN chmod +x /opt/openpype/tools/create_env.sh && chmod +x /opt/openpype/tools/build.sh

WORKDIR /opt/openpype

@@ -67,16 +62,8 @@ RUN cd /opt/openpype \
    && pyenv local ${OPENPYPE_PYTHON_VERSION}

RUN source $HOME/.bashrc \
    && ./tools/create_env.sh

RUN source $HOME/.bashrc \
    && ./tools/create_env.sh \
    && ./tools/fetch_thirdparty_libs.sh

RUN source $HOME/.bashrc \
    && bash ./tools/build.sh \
    && cp /usr/lib64/libffi* ./build/exe.linux-x86_64-3.7/lib \
    && cp /usr/lib64/libssl* ./build/exe.linux-x86_64-3.7/lib \
    && cp /usr/lib64/libcrypto* ./build/exe.linux-x86_64-3.7/lib

RUN cd /opt/openpype \
    rm -rf ./vendor/bin
    && bash ./tools/build.sh
Dockerfile.centos7 (new file, 98 lines)

@@ -0,0 +1,98 @@
# Build Pype docker image
FROM centos:7 AS builder
ARG OPENPYPE_PYTHON_VERSION=3.7.10

LABEL org.opencontainers.image.name="pypeclub/openpype"
LABEL org.opencontainers.image.title="OpenPype Docker Image"
LABEL org.opencontainers.image.url="https://openpype.io/"
LABEL org.opencontainers.image.source="https://github.com/pypeclub/pype"

USER root

# update base
RUN yum -y install deltarpm \
    && yum -y update \
    && yum clean all

# add tools we need
RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm \
    && yum -y install centos-release-scl \
    && yum -y install \
    bash \
    which \
    git \
    make \
    devtoolset-7 \
    cmake \
    curl \
    wget \
    gcc \
    zlib-devel \
    bzip2 \
    bzip2-devel \
    readline-devel \
    sqlite sqlite-devel \
    openssl-devel \
    openssl-libs \
    tk-devel libffi-devel \
    patchelf \
    automake \
    autoconf \
    ncurses \
    ncurses-devel \
    qt5-qtbase-devel \
    && yum clean all

# we need to build our own patchelf
WORKDIR /temp-patchelf
RUN git clone https://github.com/NixOS/patchelf.git . \
    && source scl_source enable devtoolset-7 \
    && ./bootstrap.sh \
    && ./configure \
    && make \
    && make install

RUN mkdir /opt/openpype
# RUN useradd -m pype
# RUN chown pype /opt/openpype
# USER pype

RUN curl https://pyenv.run | bash
# ENV PYTHON_CONFIGURE_OPTS --enable-shared

RUN echo 'export PATH="$HOME/.pyenv/bin:$PATH"'>> $HOME/.bashrc \
    && echo 'eval "$(pyenv init -)"' >> $HOME/.bashrc \
    && echo 'eval "$(pyenv virtualenv-init -)"' >> $HOME/.bashrc \
    && echo 'eval "$(pyenv init --path)"' >> $HOME/.bashrc
RUN source $HOME/.bashrc && pyenv install ${OPENPYPE_PYTHON_VERSION}

COPY . /opt/openpype/
RUN rm -rf /openpype/.poetry || echo "No Poetry installed yet."
# USER root
# RUN chown -R pype /opt/openpype
RUN chmod +x /opt/openpype/tools/create_env.sh && chmod +x /opt/openpype/tools/build.sh

# USER pype

WORKDIR /opt/openpype

RUN cd /opt/openpype \
    && source $HOME/.bashrc \
    && pyenv local ${OPENPYPE_PYTHON_VERSION}

RUN source $HOME/.bashrc \
    && ./tools/create_env.sh

RUN source $HOME/.bashrc \
    && ./tools/fetch_thirdparty_libs.sh

RUN source $HOME/.bashrc \
    && bash ./tools/build.sh

RUN cp /usr/lib64/libffi* ./build/exe.linux-x86_64-3.7/lib \
    && cp /usr/lib64/libssl* ./build/exe.linux-x86_64-3.7/lib \
    && cp /usr/lib64/libcrypto* ./build/exe.linux-x86_64-3.7/lib \
    && cp /root/.pyenv/versions/${OPENPYPE_PYTHON_VERSION}/lib/libpython* ./build/exe.linux-x86_64-3.7/lib

RUN cd /opt/openpype \
    rm -rf ./vendor/bin

README.md (11 lines changed)

@@ -133,6 +133,12 @@ Easiest way to build OpenPype on Linux is using [Docker](https://www.docker.com/)
sudo ./tools/docker_build.sh
```

This will use Debian as the base image by default. If you need a CentOS 7 compatible build, run:

```sh
sudo ./tools/docker_build.sh centos7
```

If all goes well, you'll find the built OpenPype in the `./build/` folder.

#### Manual build

@@ -158,6 +164,11 @@ you'll need also additional libraries for Qt5:

```sh
sudo apt install qt5-default
```

or, if you are on Ubuntu 20.04 or newer, there is no `qt5-default` package, so you need to install its contents individually:

```sh
sudo apt-get install qtbase5-dev qtchooser qt5-qmake qtbase5-dev-tools
```

</details>

<details>
igniter/tools.py (148 lines changed)

@@ -1,18 +1,12 @@
# -*- coding: utf-8 -*-
"""Tools used in **Igniter** GUI.

Functions ``compose_url()`` and ``decompose_url()`` are the same as in
``openpype.lib`` and they are here to avoid importing OpenPype module before its
version is decided.

"""
import sys
"""Tools used in **Igniter** GUI."""
import os
from typing import Dict, Union
from typing import Union
from urllib.parse import urlparse, parse_qs
from pathlib import Path
import platform

import certifi
from pymongo import MongoClient
from pymongo.errors import (
    ServerSelectionTimeoutError,
@@ -22,89 +16,32 @@ from pymongo.errors import (
)


def decompose_url(url: str) -> Dict:
    """Decompose mongodb url to its separate components.

    Args:
        url (str): Mongodb url.

    Returns:
        dict: Dictionary of components.
def should_add_certificate_path_to_mongo_url(mongo_url):
    """Check if a ca certificate should be added to the mongo url.

    Since 30.9.2021 cloud mongo requires newer certificates that are not
    available on most workstations. This adds the path to the certifi
    certificate which is valid for it. To add the certificate path, the url
    must have scheme 'mongodb+srv' or have 'ssl=true' or 'tls=true' in its
    query.
    """
    components = {
        "scheme": None,
        "host": None,
        "port": None,
        "username": None,
        "password": None,
        "auth_db": None
    }
    parsed = urlparse(mongo_url)
    query = parse_qs(parsed.query)
    lowered_query_keys = set(key.lower() for key in query.keys())
    add_certificate = False
    # Check if url 'ssl' or 'tls' are set to 'true'
    for key in ("ssl", "tls"):
        if key in query and "true" in query[key]:
            add_certificate = True
            break

    result = urlparse(url)
    if result.scheme is None:
        _url = "mongodb://{}".format(url)
        result = urlparse(_url)
    # Check if url contains 'mongodb+srv'
    if not add_certificate and parsed.scheme == "mongodb+srv":
        add_certificate = True

    components["scheme"] = result.scheme
    components["host"] = result.hostname
    try:
        components["port"] = result.port
    except ValueError:
        raise RuntimeError("invalid port specified")
    components["username"] = result.username
    components["password"] = result.password

    try:
        components["auth_db"] = parse_qs(result.query)['authSource'][0]
    except KeyError:
        # no auth db provided, mongo will use the one we are connecting to
        pass

    return components


def compose_url(scheme: str = None,
                host: str = None,
                username: str = None,
                password: str = None,
                port: int = None,
                auth_db: str = None) -> str:
    """Compose mongodb url from its individual components.

    Args:
        scheme (str, optional):
        host (str, optional):
        username (str, optional):
        password (str, optional):
        port (str, optional):
        auth_db (str, optional):

    Returns:
        str: mongodb url

    """

    url = "{scheme}://"

    if username and password:
        url += "{username}:{password}@"

    url += "{host}"
    if port:
        url += ":{port}"

    if auth_db:
        url += "?authSource={auth_db}"

    return url.format(**{
        "scheme": scheme,
        "host": host,
        "username": username,
        "password": password,
        "port": port,
        "auth_db": auth_db
    })
    # Check if url does already contain certificate path
    if add_certificate and "tlscafile" in lowered_query_keys:
        add_certificate = False
    return add_certificate
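
As a quick illustration of the rule above, a hedged sketch of the expected behaviour (the URLs are made-up examples, not values from the repository):

```python
# Hypothetical URLs, for illustration only.
assert should_add_certificate_path_to_mongo_url(
    "mongodb+srv://user:pw@cluster0.example.net/openpype")    # srv scheme
assert should_add_certificate_path_to_mongo_url(
    "mongodb://server:27017/?ssl=true")                       # ssl=true in query
assert not should_add_certificate_path_to_mongo_url(
    "mongodb://server:27017/")                                # plain mongodb
assert not should_add_certificate_path_to_mongo_url(
    "mongodb+srv://server.example.net/?tlsCAFile=/ca.pem")    # path already set
```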

def validate_mongo_connection(cnx: str) -> (bool, str):

@@ -121,12 +58,18 @@ def validate_mongo_connection(cnx: str) -> (bool, str):
    if parsed.scheme not in ["mongodb", "mongodb+srv"]:
        return False, "Not mongodb schema"

    kwargs = {
        "serverSelectionTimeoutMS": 2000
    }
    # Add certificate path if should be required
    if should_add_certificate_path_to_mongo_url(cnx):
        kwargs["ssl_ca_certs"] = certifi.where()

    try:
        client = MongoClient(
            cnx,
            serverSelectionTimeoutMS=2000
        )
        client = MongoClient(cnx, **kwargs)
        client.server_info()
        with client.start_session():
            pass
        client.close()
    except ServerSelectionTimeoutError as e:
        return False, f"Cannot connect to server {cnx} - {e}"
@@ -152,10 +95,7 @@ def validate_mongo_string(mongo: str) -> (bool, str):
    """
    if not mongo:
        return True, "empty string"
    parsed = urlparse(mongo)
    if parsed.scheme in ["mongodb", "mongodb+srv"]:
        return validate_mongo_connection(mongo)
    return False, "not valid mongodb schema"
    return validate_mongo_connection(mongo)


def validate_path_string(path: str) -> (bool, str):
@@ -195,21 +135,13 @@ def get_openpype_global_settings(url: str) -> dict:
    Returns:
        dict: With settings data. Empty dictionary is returned if not found.
    """
    try:
        components = decompose_url(url)
    except RuntimeError:
        return {}
    mongo_kwargs = {
        "host": compose_url(**components),
        "serverSelectionTimeoutMS": 2000
    }
    port = components.get("port")
    if port is not None:
        mongo_kwargs["port"] = int(port)
    kwargs = {}
    if should_add_certificate_path_to_mongo_url(url):
        kwargs["ssl_ca_certs"] = certifi.where()

    try:
        # Create mongo connection
        client = MongoClient(**mongo_kwargs)
        client = MongoClient(url, **kwargs)
        # Access settings collection
        col = client["openpype"]["settings"]
        # Query global settings
@@ -69,6 +69,7 @@ def install():
    """Install Pype to Avalon."""
    from pyblish.lib import MessageHandler
    from openpype.modules import load_modules
    from avalon import pipeline

    # Make sure modules are loaded
    load_modules()

@@ -117,7 +118,9 @@ def install():

    # apply monkey patched discover to original one
    log.info("Patching discovery")

    avalon.discover = patched_discover
    pipeline.discover = patched_discover

    avalon.on("taskChanged", _on_task_change)
@@ -57,6 +57,17 @@ def tray(debug=False):
    PypeCommands().launch_tray(debug)


@PypeCommands.add_modules
@main.group(help="Run command line arguments of OpenPype modules")
@click.pass_context
def module(ctx):
    """Module specific commands created dynamically.

    These commands are generated dynamically by currently loaded addon/modules.
    """
    pass


@main.command()
@click.option("-d", "--debug", is_flag=True, help="Print debug messages")
@click.option("--ftrack-url", envvar="FTRACK_SERVER",

@@ -166,7 +177,7 @@ def publish(debug, paths, targets):
@click.option("-p", "--project", help="Project")
@click.option("-t", "--targets", help="Targets", default=None,
              multiple=True)
def remotepublish(debug, project, path, host, targets=None, user=None):
def remotepublishfromapp(debug, project, path, host, user=None, targets=None):
    """Start CLI publishing.

    Publish collects json from paths provided as an argument.

@@ -174,7 +185,27 @@ def remotepublish(debug, project, path, host, targets=None, user=None):
    """
    if debug:
        os.environ['OPENPYPE_DEBUG'] = '3'
    PypeCommands.remotepublish(project, path, host, user, targets=targets)
    PypeCommands.remotepublishfromapp(
        project, path, host, user, targets=targets
    )


@main.command()
@click.argument("path")
@click.option("-d", "--debug", is_flag=True, help="Print debug messages")
@click.option("-u", "--user", help="User email address")
@click.option("-p", "--project", help="Project")
@click.option("-t", "--targets", help="Targets", default=None,
              multiple=True)
def remotepublish(debug, project, path, user=None, targets=None):
    """Start CLI publishing.

    Publish collects json from paths provided as an argument.
    More than one path is allowed.
    """
    if debug:
        os.environ['OPENPYPE_DEBUG'] = '3'
    PypeCommands.remotepublish(project, path, user, targets=targets)
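
For orientation, a hypothetical invocation of this command (the executable name, path and option values are assumptions, not taken from this changeset):

```sh
openpype remotepublish /path/to/publish_metadata.json \
    --project MyProject --user artist@example.com --targets deadline
```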

@main.command()

@@ -263,6 +294,34 @@ def projectmanager():
    PypeCommands().launch_project_manager()


@main.command()
@click.argument("output_path")
@click.option("--project", help="Define project context")
@click.option("--asset", help="Define asset in project (project must be set)")
@click.option(
    "--strict",
    is_flag=True,
    help="Full context must be set otherwise dialog can't be closed."
)
def contextselection(
    output_path,
    project,
    asset,
    strict
):
    """Show Qt dialog to select context.

    Context is project name, asset name and task name. The result is stored
    in a json file whose path is passed as the first argument.
    """
    PypeCommands.contextselection(
        output_path,
        project,
        asset,
        strict
    )
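
A hedged usage sketch (the output path and project name are made up): the command blocks until the dialog closes, after which the chosen context can be read back from the JSON file.

```sh
openpype contextselection /tmp/context.json --project MyProject --strict
# /tmp/context.json now holds the selected project/asset/task names.
```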

@main.command(
    context_settings=dict(
        ignore_unknown_options=True,

@@ -283,3 +342,18 @@ def run(script):
    args_string = " ".join(args[1:])
    print(f"... running: {script} {args_string}")
    runpy.run_path(script, run_name="__main__", )


@main.command()
@click.argument("folder", nargs=-1)
@click.option("-m",
              "--mark",
              help="Run tests marked by",
              default=None)
@click.option("-p",
              "--pyargs",
              help="Run tests from package",
              default=None)
def runtests(folder, mark, pyargs):
    """Run all automatic tests after proper initialization via start.py"""
    PypeCommands().run_tests(folder, mark, pyargs)
@@ -13,7 +13,7 @@ class LaunchFoundryAppsWindows(PreLaunchHook):

    # Should be the last hook because it must change launch arguments to string
    order = 1000
    app_groups = ["nuke", "nukex", "hiero", "nukestudio"]
    app_groups = ["nuke", "nukex", "hiero", "nukestudio", "photoshop"]
    platforms = ["windows"]

    def execute(self):

@@ -43,6 +43,8 @@ class GlobalHostDataHook(PreLaunchHook):
            "env": self.launch_context.env,

            "last_workfile_path": self.data.get("last_workfile_path"),

            "log": self.log
        })
@@ -95,6 +95,30 @@ def get_local_collection_with_name(name):
    return None


def deselect_all():
    """Deselect all objects in the scene.

    Blender raises a context error if you try to deselect an object that
    is not in Object Mode.
    """
    modes = []
    active = bpy.context.view_layer.objects.active

    for obj in bpy.data.objects:
        if obj.mode != 'OBJECT':
            modes.append((obj, obj.mode))
            bpy.context.view_layer.objects.active = obj
            bpy.ops.object.mode_set(mode='OBJECT')

    bpy.ops.object.select_all(action='DESELECT')

    for p in modes:
        bpy.context.view_layer.objects.active = p[0]
        bpy.ops.object.mode_set(mode=p[1])

    bpy.context.view_layer.objects.active = active
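
A minimal usage sketch (hypothetical object name, assuming this module is imported as `plugin` inside Blender's Python):

```python
import bpy
from openpype.hosts.blender.api import plugin

# A bare bpy.ops.object.select_all(action='DESELECT') raises a context
# error while any object is in Edit/Pose mode; deselect_all() flips such
# objects to Object Mode first and restores their modes afterwards.
plugin.deselect_all()
camera_obj = bpy.data.objects["cameraMain"]  # hypothetical name
camera_obj.select_set(True)
bpy.context.view_layer.objects.active = camera_obj
```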

class Creator(PypeCreatorMixin, blender.Creator):
    pass
@@ -3,11 +3,12 @@

import bpy

from avalon import api
from avalon.blender import lib
import openpype.hosts.blender.api.plugin
from avalon.blender import lib, ops
from avalon.blender.pipeline import AVALON_INSTANCES
from openpype.hosts.blender.api import plugin


class CreateCamera(openpype.hosts.blender.api.plugin.Creator):
class CreateCamera(plugin.Creator):
    """Polygonal static geometry"""

    name = "cameraMain"

@@ -16,17 +17,46 @@ class CreateCamera(openpype.hosts.blender.api.plugin.Creator):
    icon = "video-camera"

    def process(self):
        """Run the creator on Blender main thread."""
        mti = ops.MainThreadItem(self._process)
        ops.execute_in_main_thread(mti)

    def _process(self):
        # Get Instance Container or create it if it does not exist
        instances = bpy.data.collections.get(AVALON_INSTANCES)
        if not instances:
            instances = bpy.data.collections.new(name=AVALON_INSTANCES)
            bpy.context.scene.collection.children.link(instances)

        # Create instance object
        asset = self.data["asset"]
        subset = self.data["subset"]
        name = openpype.hosts.blender.api.plugin.asset_name(asset, subset)
        collection = bpy.data.collections.new(name=name)
        bpy.context.scene.collection.children.link(collection)
        name = plugin.asset_name(asset, subset)

        camera = bpy.data.cameras.new(subset)
        camera_obj = bpy.data.objects.new(subset, camera)

        instances.objects.link(camera_obj)

        asset_group = bpy.data.objects.new(name=name, object_data=None)
        asset_group.empty_display_type = 'SINGLE_ARROW'
        instances.objects.link(asset_group)
        self.data['task'] = api.Session.get('AVALON_TASK')
        lib.imprint(collection, self.data)
        print(f"self.data: {self.data}")
        lib.imprint(asset_group, self.data)

        if (self.options or {}).get("useSelection"):
            for obj in lib.get_selection():
                collection.objects.link(obj)
            bpy.context.view_layer.objects.active = asset_group
            selected = lib.get_selection()
            for obj in selected:
                obj.select_set(True)
            selected.append(asset_group)
            bpy.ops.object.parent_set(keep_transform=True)
        else:
            plugin.deselect_all()
            camera_obj.select_set(True)
            asset_group.select_set(True)
            bpy.context.view_layer.objects.active = asset_group
            bpy.ops.object.parent_set(keep_transform=True)

        return collection
        return asset_group
@@ -47,7 +47,7 @@ class CacheModelLoader(plugin.AssetLoader):
        bpy.data.objects.remove(empty)

    def _process(self, libpath, asset_group, group_name):
        bpy.ops.object.select_all(action='DESELECT')
        plugin.deselect_all()

        collection = bpy.context.view_layer.active_layer_collection.collection

@@ -109,7 +109,7 @@ class CacheModelLoader(plugin.AssetLoader):
        avalon_info = obj[AVALON_PROPERTY]
        avalon_info.update({"container_name": group_name})

        bpy.ops.object.select_all(action='DESELECT')
        plugin.deselect_all()

        return objects
openpype/hosts/blender/plugins/load/load_audio.py (new file, 217 lines)

@@ -0,0 +1,217 @@
"""Load audio in Blender."""

from pathlib import Path
from pprint import pformat
from typing import Dict, List, Optional

import bpy

from avalon import api
from avalon.blender.pipeline import AVALON_CONTAINERS
from avalon.blender.pipeline import AVALON_CONTAINER_ID
from avalon.blender.pipeline import AVALON_PROPERTY
from openpype.hosts.blender.api import plugin


class AudioLoader(plugin.AssetLoader):
    """Load audio in Blender."""

    families = ["audio"]
    representations = ["wav"]

    label = "Load Audio"
    icon = "volume-up"
    color = "orange"

    def process_asset(
        self, context: dict, name: str, namespace: Optional[str] = None,
        options: Optional[Dict] = None
    ) -> Optional[List]:
        """
        Arguments:
            name: Use pre-defined name
            namespace: Use pre-defined namespace
            context: Full parenthood of representation to load
            options: Additional settings dictionary
        """
        libpath = self.fname
        asset = context["asset"]["name"]
        subset = context["subset"]["name"]

        asset_name = plugin.asset_name(asset, subset)
        unique_number = plugin.get_unique_number(asset, subset)
        group_name = plugin.asset_name(asset, subset, unique_number)
        namespace = namespace or f"{asset}_{unique_number}"

        avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
        if not avalon_container:
            avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS)
            bpy.context.scene.collection.children.link(avalon_container)

        asset_group = bpy.data.objects.new(group_name, object_data=None)
        avalon_container.objects.link(asset_group)

        # Blender needs the Sequence Editor in the current window, to be able
        # to load the audio. We take one of the areas in the window, save its
        # type, and switch to the Sequence Editor. After loading the audio,
        # we switch back to the previous area.
        window_manager = bpy.context.window_manager
        old_type = window_manager.windows[-1].screen.areas[0].type
        window_manager.windows[-1].screen.areas[0].type = "SEQUENCE_EDITOR"

        # We override the context to load the audio in the sequence editor.
        oc = bpy.context.copy()
        oc["area"] = window_manager.windows[-1].screen.areas[0]

        bpy.ops.sequencer.sound_strip_add(oc, filepath=libpath, frame_start=1)

        window_manager.windows[-1].screen.areas[0].type = old_type

        p = Path(libpath)
        audio = p.name

        asset_group[AVALON_PROPERTY] = {
            "schema": "openpype:container-2.0",
            "id": AVALON_CONTAINER_ID,
            "name": name,
            "namespace": namespace or '',
            "loader": str(self.__class__.__name__),
            "representation": str(context["representation"]["_id"]),
            "libpath": libpath,
            "asset_name": asset_name,
            "parent": str(context["representation"]["parent"]),
            "family": context["representation"]["context"]["family"],
            "objectName": group_name,
            "audio": audio
        }

        objects = []
        self[:] = objects
        return [objects]

    def exec_update(self, container: Dict, representation: Dict):
        """Update an audio strip in the sequence editor.

        Arguments:
            container (openpype:container-1.0): Container to update,
                from `host.ls()`.
            representation (openpype:representation-1.0): Representation to
                update, from `host.ls()`.
        """
        object_name = container["objectName"]
        asset_group = bpy.data.objects.get(object_name)
        libpath = Path(api.get_representation_path(representation))

        self.log.info(
            "Container: %s\nRepresentation: %s",
            pformat(container, indent=2),
            pformat(representation, indent=2),
        )

        assert asset_group, (
            f"The asset is not loaded: {container['objectName']}"
        )
        assert libpath, (
            f"No existing library file found for {container['objectName']}"
        )
        assert libpath.is_file(), (
            f"The file doesn't exist: {libpath}"
        )

        metadata = asset_group.get(AVALON_PROPERTY)
        group_libpath = metadata["libpath"]

        normalized_group_libpath = (
            str(Path(bpy.path.abspath(group_libpath)).resolve())
        )
        normalized_libpath = (
            str(Path(bpy.path.abspath(str(libpath))).resolve())
        )
        self.log.debug(
            "normalized_group_libpath:\n  %s\nnormalized_libpath:\n  %s",
            normalized_group_libpath,
            normalized_libpath,
        )
        if normalized_group_libpath == normalized_libpath:
            self.log.info("Library already loaded, not updating...")
            return

        old_audio = container["audio"]
        p = Path(libpath)
        new_audio = p.name

        # Blender needs the Sequence Editor in the current window, to be able
        # to update the audio. We take one of the areas in the window, save its
        # type, and switch to the Sequence Editor. After updating the audio,
        # we switch back to the previous area.
        window_manager = bpy.context.window_manager
        old_type = window_manager.windows[-1].screen.areas[0].type
        window_manager.windows[-1].screen.areas[0].type = "SEQUENCE_EDITOR"

        # We override the context to load the audio in the sequence editor.
        oc = bpy.context.copy()
        oc["area"] = window_manager.windows[-1].screen.areas[0]

        # We deselect all sequencer strips, and then select the one we
        # need to remove.
        bpy.ops.sequencer.select_all(oc, action='DESELECT')
        scene = bpy.context.scene
        scene.sequence_editor.sequences_all[old_audio].select = True

        bpy.ops.sequencer.delete(oc)
        bpy.data.sounds.remove(bpy.data.sounds[old_audio])

        bpy.ops.sequencer.sound_strip_add(
            oc, filepath=str(libpath), frame_start=1)

        window_manager.windows[-1].screen.areas[0].type = old_type

        metadata["libpath"] = str(libpath)
        metadata["representation"] = str(representation["_id"])
        metadata["parent"] = str(representation["parent"])
        metadata["audio"] = new_audio

    def exec_remove(self, container: Dict) -> bool:
        """Remove an audio strip from the sequence editor and the container.

        Arguments:
            container (openpype:container-1.0): Container to remove,
                from `host.ls()`.

        Returns:
            bool: Whether the container was deleted.
        """
        object_name = container["objectName"]
        asset_group = bpy.data.objects.get(object_name)

        if not asset_group:
            return False

        audio = container["audio"]

        # Blender needs the Sequence Editor in the current window, to be able
        # to remove the audio. We take one of the areas in the window, save its
        # type, and switch to the Sequence Editor. After removing the audio,
        # we switch back to the previous area.
        window_manager = bpy.context.window_manager
        old_type = window_manager.windows[-1].screen.areas[0].type
        window_manager.windows[-1].screen.areas[0].type = "SEQUENCE_EDITOR"

        # We override the context to load the audio in the sequence editor.
        oc = bpy.context.copy()
        oc["area"] = window_manager.windows[-1].screen.areas[0]

        # We deselect all sequencer strips, and then select the one we
        # need to remove.
        bpy.ops.sequencer.select_all(oc, action='DESELECT')
        bpy.context.scene.sequence_editor.sequences_all[audio].select = True

        bpy.ops.sequencer.delete(oc)

        window_manager.windows[-1].screen.areas[0].type = old_type

        bpy.data.sounds.remove(bpy.data.sounds[audio])

        bpy.data.objects.remove(asset_group)

        return True
@ -1,247 +0,0 @@
|
|||
"""Load a camera asset in Blender."""
|
||||
|
||||
import logging
|
||||
from pathlib import Path
|
||||
from pprint import pformat
|
||||
from typing import Dict, List, Optional
|
||||
|
||||
from avalon import api, blender
|
||||
import bpy
|
||||
import openpype.hosts.blender.api.plugin
|
||||
|
||||
logger = logging.getLogger("openpype").getChild("blender").getChild("load_camera")
|
||||
|
||||
|
||||
class BlendCameraLoader(openpype.hosts.blender.api.plugin.AssetLoader):
|
||||
"""Load a camera from a .blend file.
|
||||
|
||||
Warning:
|
||||
Loading the same asset more then once is not properly supported at the
|
||||
moment.
|
||||
"""
|
||||
|
||||
families = ["camera"]
|
||||
representations = ["blend"]
|
||||
|
||||
label = "Link Camera"
|
||||
icon = "code-fork"
|
||||
color = "orange"
|
||||
|
||||
def _remove(self, objects, lib_container):
|
||||
for obj in list(objects):
|
||||
bpy.data.cameras.remove(obj.data)
|
||||
|
||||
bpy.data.collections.remove(bpy.data.collections[lib_container])
|
||||
|
||||
def _process(self, libpath, lib_container, container_name, actions):
|
||||
|
||||
relative = bpy.context.preferences.filepaths.use_relative_paths
|
||||
with bpy.data.libraries.load(
|
||||
libpath, link=True, relative=relative
|
||||
) as (_, data_to):
|
||||
data_to.collections = [lib_container]
|
||||
|
||||
scene = bpy.context.scene
|
||||
|
||||
scene.collection.children.link(bpy.data.collections[lib_container])
|
||||
|
||||
camera_container = scene.collection.children[lib_container].make_local()
|
||||
|
||||
objects_list = []
|
||||
|
||||
for obj in camera_container.objects:
|
||||
local_obj = obj.make_local()
|
||||
local_obj.data.make_local()
|
||||
|
||||
if not local_obj.get(blender.pipeline.AVALON_PROPERTY):
|
||||
local_obj[blender.pipeline.AVALON_PROPERTY] = dict()
|
||||
|
||||
avalon_info = local_obj[blender.pipeline.AVALON_PROPERTY]
|
||||
avalon_info.update({"container_name": container_name})
|
||||
|
||||
if actions[0] is not None:
|
||||
if local_obj.animation_data is None:
|
||||
local_obj.animation_data_create()
|
||||
local_obj.animation_data.action = actions[0]
|
||||
|
||||
if actions[1] is not None:
|
||||
if local_obj.data.animation_data is None:
|
||||
local_obj.data.animation_data_create()
|
||||
local_obj.data.animation_data.action = actions[1]
|
||||
|
||||
objects_list.append(local_obj)
|
||||
|
||||
camera_container.pop(blender.pipeline.AVALON_PROPERTY)
|
||||
|
||||
bpy.ops.object.select_all(action='DESELECT')
|
||||
|
||||
return objects_list
|
||||
|
||||
def process_asset(
|
||||
self, context: dict, name: str, namespace: Optional[str] = None,
|
||||
options: Optional[Dict] = None
|
||||
) -> Optional[List]:
|
||||
"""
|
||||
Arguments:
|
||||
name: Use pre-defined name
|
||||
namespace: Use pre-defined namespace
|
||||
context: Full parenthood of representation to load
|
||||
options: Additional settings dictionary
|
||||
"""
|
||||
|
||||
libpath = self.fname
|
||||
asset = context["asset"]["name"]
|
||||
subset = context["subset"]["name"]
|
||||
lib_container = openpype.hosts.blender.api.plugin.asset_name(asset, subset)
|
||||
container_name = openpype.hosts.blender.api.plugin.asset_name(
|
||||
asset, subset, namespace
|
||||
)
|
||||
|
||||
container = bpy.data.collections.new(lib_container)
|
||||
container.name = container_name
|
||||
blender.pipeline.containerise_existing(
|
||||
container,
|
||||
name,
|
||||
namespace,
|
||||
context,
|
||||
self.__class__.__name__,
|
||||
)
|
||||
|
||||
container_metadata = container.get(
|
||||
blender.pipeline.AVALON_PROPERTY)
|
||||
|
||||
container_metadata["libpath"] = libpath
|
||||
container_metadata["lib_container"] = lib_container
|
||||
|
||||
objects_list = self._process(
|
||||
libpath, lib_container, container_name, (None, None))
|
||||
|
||||
# Save the list of objects in the metadata container
|
||||
container_metadata["objects"] = objects_list
|
||||
|
||||
nodes = list(container.objects)
|
||||
nodes.append(container)
|
||||
self[:] = nodes
|
||||
return nodes
|
||||
|
||||
def update(self, container: Dict, representation: Dict):
|
||||
"""Update the loaded asset.
|
||||
|
||||
This will remove all objects of the current collection, load the new
|
||||
ones and add them to the collection.
|
||||
If the objects of the collection are used in another collection they
|
||||
will not be removed, only unlinked. Normally this should not be the
|
||||
case though.
|
||||
|
||||
Warning:
|
||||
No nested collections are supported at the moment!
|
||||
"""
|
||||
|
||||
collection = bpy.data.collections.get(
|
||||
container["objectName"]
|
||||
)
|
||||
|
||||
libpath = Path(api.get_representation_path(representation))
|
||||
extension = libpath.suffix.lower()
|
||||
|
||||
logger.info(
|
||||
"Container: %s\nRepresentation: %s",
|
||||
pformat(container, indent=2),
|
||||
pformat(representation, indent=2),
|
||||
)
|
||||
|
||||
assert collection, (
|
||||
f"The asset is not loaded: {container['objectName']}"
|
||||
)
|
||||
assert not (collection.children), (
|
||||
"Nested collections are not supported."
|
||||
)
|
||||
assert libpath, (
|
||||
"No existing library file found for {container['objectName']}"
|
||||
)
|
||||
assert libpath.is_file(), (
|
||||
f"The file doesn't exist: {libpath}"
|
||||
)
|
||||
assert extension in openpype.hosts.blender.api.plugin.VALID_EXTENSIONS, (
|
||||
f"Unsupported file: {libpath}"
|
||||
)
|
||||
|
||||
collection_metadata = collection.get(
|
||||
blender.pipeline.AVALON_PROPERTY)
|
||||
collection_libpath = collection_metadata["libpath"]
|
||||
objects = collection_metadata["objects"]
|
||||
lib_container = collection_metadata["lib_container"]
|
||||
|
||||
normalized_collection_libpath = (
|
||||
str(Path(bpy.path.abspath(collection_libpath)).resolve())
|
||||
)
|
||||
normalized_libpath = (
|
||||
str(Path(bpy.path.abspath(str(libpath))).resolve())
|
||||
)
|
||||
logger.debug(
|
||||
"normalized_collection_libpath:\n %s\nnormalized_libpath:\n %s",
|
||||
normalized_collection_libpath,
|
||||
normalized_libpath,
|
||||
)
|
||||
if normalized_collection_libpath == normalized_libpath:
|
||||
logger.info("Library already loaded, not updating...")
|
||||
return
|
||||
|
||||
camera = objects[0]
|
||||
|
||||
camera_action = None
|
||||
camera_data_action = None
|
||||
|
||||
if camera.animation_data and camera.animation_data.action:
|
||||
camera_action = camera.animation_data.action
|
||||
|
||||
if camera.data.animation_data and camera.data.animation_data.action:
|
||||
camera_data_action = camera.data.animation_data.action
|
||||
|
||||
actions = (camera_action, camera_data_action)
|
||||
|
||||
self._remove(objects, lib_container)
|
||||
|
||||
objects_list = self._process(
|
||||
str(libpath), lib_container, collection.name, actions)
|
||||
|
||||
# Save the list of objects in the metadata container
|
||||
collection_metadata["objects"] = objects_list
|
||||
collection_metadata["libpath"] = str(libpath)
|
||||
collection_metadata["representation"] = str(representation["_id"])
|
||||
|
||||
bpy.ops.object.select_all(action='DESELECT')
|
||||
|
||||
def remove(self, container: Dict) -> bool:
|
||||
"""Remove an existing container from a Blender scene.
|
||||
|
||||
Arguments:
|
||||
container (openpype:container-1.0): Container to remove,
|
||||
from `host.ls()`.
|
||||
|
||||
Returns:
|
||||
bool: Whether the container was deleted.
|
||||
|
||||
Warning:
|
||||
No nested collections are supported at the moment!
|
||||
"""
|
||||
|
||||
collection = bpy.data.collections.get(
|
||||
container["objectName"]
|
||||
)
|
||||
if not collection:
|
||||
return False
|
||||
assert not (collection.children), (
|
||||
"Nested collections are not supported."
|
||||
)
|
||||
|
||||
collection_metadata = collection.get(
|
||||
blender.pipeline.AVALON_PROPERTY)
|
||||
objects = collection_metadata["objects"]
|
||||
lib_container = collection_metadata["lib_container"]
|
||||
|
||||
self._remove(objects, lib_container)
|
||||
|
||||
bpy.data.collections.remove(collection)
|
||||
|
||||
return True
|
||||
openpype/hosts/blender/plugins/load/load_camera_blend.py (new file, 252 lines)
@@ -0,0 +1,252 @@
"""Load a camera asset in Blender."""
|
||||
|
||||
import logging
|
||||
from pathlib import Path
|
||||
from pprint import pformat
|
||||
from typing import Dict, List, Optional
|
||||
|
||||
import bpy
|
||||
|
||||
from avalon import api
|
||||
from avalon.blender.pipeline import AVALON_CONTAINERS
|
||||
from avalon.blender.pipeline import AVALON_CONTAINER_ID
|
||||
from avalon.blender.pipeline import AVALON_PROPERTY
|
||||
from openpype.hosts.blender.api import plugin
|
||||
|
||||
logger = logging.getLogger("openpype").getChild(
|
||||
"blender").getChild("load_camera")
|
||||
|
||||
|
||||
class BlendCameraLoader(plugin.AssetLoader):
|
||||
"""Load a camera from a .blend file.
|
||||
|
||||
Warning:
|
||||
Loading the same asset more then once is not properly supported at the
|
||||
moment.
|
||||
"""
|
||||
|
||||
families = ["camera"]
|
||||
representations = ["blend"]
|
||||
|
||||
label = "Link Camera (Blend)"
|
||||
icon = "code-fork"
|
||||
color = "orange"
|
||||
|
||||
def _remove(self, asset_group):
|
||||
objects = list(asset_group.children)
|
||||
|
||||
for obj in objects:
|
||||
if obj.type == 'CAMERA':
|
||||
bpy.data.cameras.remove(obj.data)
|
||||
|
||||
def _process(self, libpath, asset_group, group_name):
|
||||
with bpy.data.libraries.load(
|
||||
libpath, link=True, relative=False
|
||||
) as (data_from, data_to):
|
||||
data_to.objects = data_from.objects
|
||||
|
||||
parent = bpy.context.scene.collection
|
||||
|
||||
empties = [obj for obj in data_to.objects if obj.type == 'EMPTY']
|
||||
|
||||
container = None
|
||||
|
||||
for empty in empties:
|
||||
if empty.get(AVALON_PROPERTY):
|
||||
container = empty
|
||||
break
|
||||
|
||||
assert container, "No asset group found"
|
||||
|
||||
# Children must be linked before parents,
|
||||
# otherwise the hierarchy will break
|
||||
objects = []
|
||||
nodes = list(container.children)
|
||||
|
||||
for obj in nodes:
|
||||
obj.parent = asset_group
|
||||
|
||||
for obj in nodes:
|
||||
objects.append(obj)
|
||||
nodes.extend(list(obj.children))
|
||||
|
||||
objects.reverse()
|
||||
|
||||
for obj in objects:
|
||||
parent.objects.link(obj)
|
||||
|
||||
for obj in objects:
|
||||
local_obj = plugin.prepare_data(obj, group_name)
|
||||
|
||||
if local_obj.type != 'EMPTY':
|
||||
plugin.prepare_data(local_obj.data, group_name)
|
||||
|
||||
if not local_obj.get(AVALON_PROPERTY):
|
||||
local_obj[AVALON_PROPERTY] = dict()
|
||||
|
||||
avalon_info = local_obj[AVALON_PROPERTY]
|
||||
avalon_info.update({"container_name": group_name})
|
||||
|
||||
objects.reverse()
|
||||
|
||||
bpy.data.orphans_purge(do_local_ids=False)
|
||||
|
||||
plugin.deselect_all()
|
||||
|
||||
return objects
|
||||
|
||||
def process_asset(
|
||||
self, context: dict, name: str, namespace: Optional[str] = None,
|
||||
options: Optional[Dict] = None
|
||||
) -> Optional[List]:
|
||||
"""
|
||||
Arguments:
|
||||
name: Use pre-defined name
|
||||
namespace: Use pre-defined namespace
|
||||
context: Full parenthood of representation to load
|
||||
options: Additional settings dictionary
|
||||
"""
|
||||
libpath = self.fname
|
||||
asset = context["asset"]["name"]
|
||||
subset = context["subset"]["name"]
|
||||
|
||||
asset_name = plugin.asset_name(asset, subset)
|
||||
unique_number = plugin.get_unique_number(asset, subset)
|
||||
group_name = plugin.asset_name(asset, subset, unique_number)
|
||||
namespace = namespace or f"{asset}_{unique_number}"
|
||||
|
||||
avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
|
||||
if not avalon_container:
|
||||
avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS)
|
||||
bpy.context.scene.collection.children.link(avalon_container)
|
||||
|
||||
asset_group = bpy.data.objects.new(group_name, object_data=None)
|
||||
asset_group.empty_display_type = 'SINGLE_ARROW'
|
||||
avalon_container.objects.link(asset_group)
|
||||
|
||||
objects = self._process(libpath, asset_group, group_name)
|
||||
|
||||
bpy.context.scene.collection.objects.link(asset_group)
|
||||
|
||||
asset_group[AVALON_PROPERTY] = {
|
||||
"schema": "openpype:container-2.0",
|
||||
"id": AVALON_CONTAINER_ID,
|
||||
"name": name,
|
||||
"namespace": namespace or '',
|
||||
"loader": str(self.__class__.__name__),
|
||||
"representation": str(context["representation"]["_id"]),
|
||||
"libpath": libpath,
|
||||
"asset_name": asset_name,
|
||||
"parent": str(context["representation"]["parent"]),
|
||||
"family": context["representation"]["context"]["family"],
|
||||
"objectName": group_name
|
||||
}
|
||||
|
||||
self[:] = objects
|
||||
return objects
|
||||
|
||||
def exec_update(self, container: Dict, representation: Dict):
|
||||
"""Update the loaded asset.
|
||||
|
||||
This will remove all children of the asset group, load the new ones
|
||||
and add them as children of the group.
|
||||
"""
|
||||
object_name = container["objectName"]
|
||||
asset_group = bpy.data.objects.get(object_name)
|
||||
libpath = Path(api.get_representation_path(representation))
|
||||
extension = libpath.suffix.lower()
|
||||
|
||||
self.log.info(
|
||||
"Container: %s\nRepresentation: %s",
|
||||
pformat(container, indent=2),
|
||||
pformat(representation, indent=2),
|
||||
)
|
||||
|
||||
assert asset_group, (
|
||||
f"The asset is not loaded: {container['objectName']}"
|
||||
)
|
||||
assert libpath, (
|
||||
"No existing library file found for {container['objectName']}"
|
||||
)
|
||||
assert libpath.is_file(), (
|
||||
f"The file doesn't exist: {libpath}"
|
||||
)
|
||||
assert extension in plugin.VALID_EXTENSIONS, (
|
||||
f"Unsupported file: {libpath}"
|
||||
)
|
||||
|
||||
metadata = asset_group.get(AVALON_PROPERTY)
|
||||
group_libpath = metadata["libpath"]
|
||||
|
||||
normalized_group_libpath = (
|
||||
str(Path(bpy.path.abspath(group_libpath)).resolve())
|
||||
)
|
||||
normalized_libpath = (
|
||||
str(Path(bpy.path.abspath(str(libpath))).resolve())
|
||||
)
|
||||
self.log.debug(
|
||||
"normalized_group_libpath:\n %s\nnormalized_libpath:\n %s",
|
||||
normalized_group_libpath,
|
||||
normalized_libpath,
|
||||
)
|
||||
if normalized_group_libpath == normalized_libpath:
|
||||
self.log.info("Library already loaded, not updating...")
|
||||
return
|
||||
|
||||
# Check how many assets use the same library
|
||||
count = 0
|
||||
for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects:
|
||||
if obj.get(AVALON_PROPERTY).get('libpath') == group_libpath:
|
||||
count += 1
|
||||
|
||||
mat = asset_group.matrix_basis.copy()
|
||||
|
||||
self._remove(asset_group)
|
||||
|
||||
# If it is the last object to use that library, remove it
|
||||
if count == 1:
|
||||
library = bpy.data.libraries.get(bpy.path.basename(group_libpath))
|
||||
if library:
|
||||
bpy.data.libraries.remove(library)
|
||||
|
||||
self._process(str(libpath), asset_group, object_name)
|
||||
|
||||
asset_group.matrix_basis = mat
|
||||
|
||||
metadata["libpath"] = str(libpath)
|
||||
metadata["representation"] = str(representation["_id"])
|
||||
metadata["parent"] = str(representation["parent"])
|
||||
|
||||
def exec_remove(self, container: Dict) -> bool:
|
||||
"""Remove an existing container from a Blender scene.
|
||||
|
||||
Arguments:
|
||||
container (openpype:container-1.0): Container to remove,
|
||||
from `host.ls()`.
|
||||
|
||||
Returns:
|
||||
bool: Whether the container was deleted.
|
||||
"""
|
||||
object_name = container["objectName"]
|
||||
asset_group = bpy.data.objects.get(object_name)
|
||||
libpath = asset_group.get(AVALON_PROPERTY).get('libpath')
|
||||
|
||||
# Check how many assets use the same library
|
||||
count = 0
|
||||
for obj in bpy.data.collections.get(AVALON_CONTAINERS).objects:
|
||||
if obj.get(AVALON_PROPERTY).get('libpath') == libpath:
|
||||
count += 1
|
||||
|
||||
if not asset_group:
|
||||
return False
|
||||
|
||||
self._remove(asset_group)
|
||||
|
||||
bpy.data.objects.remove(asset_group)
|
||||
|
||||
# If it is the last object to use that library, remove it
|
||||
if count == 1:
|
||||
library = bpy.data.libraries.get(bpy.path.basename(libpath))
|
||||
bpy.data.libraries.remove(library)
|
||||
|
||||
return True
|
||||
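exec_update and exec_remove above share a reference-counting pattern: before dropping a library datablock they count how many containers still point at the same libpath. The same idea in isolation, as a minimal sketch; the scene path and library name below are hypothetical:

import bpy
from avalon.blender.pipeline import AVALON_CONTAINERS, AVALON_PROPERTY

def count_library_users(libpath):
    """Count loaded containers that still reference `libpath`."""
    containers = bpy.data.collections.get(AVALON_CONTAINERS)
    if not containers:
        return 0
    return sum(
        1 for obj in containers.objects
        if obj.get(AVALON_PROPERTY, {}).get("libpath") == libpath
    )

# Remove the library datablock only when the last user is gone.
if count_library_users("/projects/ep01/cameraMain.blend") <= 1:
    library = bpy.data.libraries.get("cameraMain.blend")
    if library:
        bpy.data.libraries.remove(library)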
openpype/hosts/blender/plugins/load/load_camera_fbx.py (new file, 218 lines)
@@ -0,0 +1,218 @@
"""Load an asset in Blender from an Alembic file."""
|
||||
|
||||
from pathlib import Path
|
||||
from pprint import pformat
|
||||
from typing import Dict, List, Optional
|
||||
|
||||
import bpy
|
||||
|
||||
from avalon import api
|
||||
from avalon.blender import lib
|
||||
from avalon.blender.pipeline import AVALON_CONTAINERS
|
||||
from avalon.blender.pipeline import AVALON_CONTAINER_ID
|
||||
from avalon.blender.pipeline import AVALON_PROPERTY
|
||||
from openpype.hosts.blender.api import plugin
|
||||
|
||||
|
||||
class FbxCameraLoader(plugin.AssetLoader):
|
||||
"""Load a camera from FBX.
|
||||
|
||||
Stores the imported asset in an empty named after the asset.
|
||||
"""
|
||||
|
||||
families = ["camera"]
|
||||
representations = ["fbx"]
|
||||
|
||||
label = "Load Camera (FBX)"
|
||||
icon = "code-fork"
|
||||
color = "orange"
|
||||
|
||||
def _remove(self, asset_group):
|
||||
objects = list(asset_group.children)
|
||||
|
||||
for obj in objects:
|
||||
if obj.type == 'CAMERA':
|
||||
bpy.data.cameras.remove(obj.data)
|
||||
elif obj.type == 'EMPTY':
|
||||
objects.extend(obj.children)
|
||||
bpy.data.objects.remove(obj)
|
||||
|
||||
def _process(self, libpath, asset_group, group_name):
|
||||
plugin.deselect_all()
|
||||
|
||||
collection = bpy.context.view_layer.active_layer_collection.collection
|
||||
|
||||
bpy.ops.import_scene.fbx(filepath=libpath)
|
||||
|
||||
parent = bpy.context.scene.collection
|
||||
|
||||
objects = lib.get_selection()
|
||||
|
||||
for obj in objects:
|
||||
obj.parent = asset_group
|
||||
|
||||
for obj in objects:
|
||||
parent.objects.link(obj)
|
||||
collection.objects.unlink(obj)
|
||||
|
||||
for obj in objects:
|
||||
name = obj.name
|
||||
obj.name = f"{group_name}:{name}"
|
||||
if obj.type != 'EMPTY':
|
||||
name_data = obj.data.name
|
||||
obj.data.name = f"{group_name}:{name_data}"
|
||||
|
||||
if not obj.get(AVALON_PROPERTY):
|
||||
obj[AVALON_PROPERTY] = dict()
|
||||
|
||||
avalon_info = obj[AVALON_PROPERTY]
|
||||
avalon_info.update({"container_name": group_name})
|
||||
|
||||
plugin.deselect_all()
|
||||
|
||||
return objects
|
||||
|
||||
def process_asset(
|
||||
self, context: dict, name: str, namespace: Optional[str] = None,
|
||||
options: Optional[Dict] = None
|
||||
) -> Optional[List]:
|
||||
"""
|
||||
Arguments:
|
||||
name: Use pre-defined name
|
||||
namespace: Use pre-defined namespace
|
||||
context: Full parenthood of representation to load
|
||||
options: Additional settings dictionary
|
||||
"""
|
||||
libpath = self.fname
|
||||
asset = context["asset"]["name"]
|
||||
subset = context["subset"]["name"]
|
||||
|
||||
asset_name = plugin.asset_name(asset, subset)
|
||||
unique_number = plugin.get_unique_number(asset, subset)
|
||||
group_name = plugin.asset_name(asset, subset, unique_number)
|
||||
namespace = namespace or f"{asset}_{unique_number}"
|
||||
|
||||
avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
|
||||
if not avalon_container:
|
||||
avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS)
|
||||
bpy.context.scene.collection.children.link(avalon_container)
|
||||
|
||||
asset_group = bpy.data.objects.new(group_name, object_data=None)
|
||||
avalon_container.objects.link(asset_group)
|
||||
|
||||
objects = self._process(libpath, asset_group, group_name)
|
||||
|
||||
objects = []
|
||||
nodes = list(asset_group.children)
|
||||
|
||||
for obj in nodes:
|
||||
objects.append(obj)
|
||||
nodes.extend(list(obj.children))
|
||||
|
||||
bpy.context.scene.collection.objects.link(asset_group)
|
||||
|
||||
asset_group[AVALON_PROPERTY] = {
|
||||
"schema": "openpype:container-2.0",
|
||||
"id": AVALON_CONTAINER_ID,
|
||||
"name": name,
|
||||
"namespace": namespace or '',
|
||||
"loader": str(self.__class__.__name__),
|
||||
"representation": str(context["representation"]["_id"]),
|
||||
"libpath": libpath,
|
||||
"asset_name": asset_name,
|
||||
"parent": str(context["representation"]["parent"]),
|
||||
"family": context["representation"]["context"]["family"],
|
||||
"objectName": group_name
|
||||
}
|
||||
|
||||
self[:] = objects
|
||||
return objects
|
||||
|
||||
def exec_update(self, container: Dict, representation: Dict):
|
||||
"""Update the loaded asset.
|
||||
|
||||
This will remove all objects of the current collection, load the new
|
||||
ones and add them to the collection.
|
||||
If the objects of the collection are used in another collection they
|
||||
will not be removed, only unlinked. Normally this should not be the
|
||||
case though.
|
||||
|
||||
Warning:
|
||||
No nested collections are supported at the moment!
|
||||
"""
|
||||
object_name = container["objectName"]
|
||||
asset_group = bpy.data.objects.get(object_name)
|
||||
libpath = Path(api.get_representation_path(representation))
|
||||
extension = libpath.suffix.lower()
|
||||
|
||||
self.log.info(
|
||||
"Container: %s\nRepresentation: %s",
|
||||
pformat(container, indent=2),
|
||||
pformat(representation, indent=2),
|
||||
)
|
||||
|
||||
assert asset_group, (
|
||||
f"The asset is not loaded: {container['objectName']}"
|
||||
)
|
||||
assert libpath, (
|
||||
"No existing library file found for {container['objectName']}"
|
||||
)
|
||||
assert libpath.is_file(), (
|
||||
f"The file doesn't exist: {libpath}"
|
||||
)
|
||||
assert extension in plugin.VALID_EXTENSIONS, (
|
||||
f"Unsupported file: {libpath}"
|
||||
)
|
||||
|
||||
metadata = asset_group.get(AVALON_PROPERTY)
|
||||
group_libpath = metadata["libpath"]
|
||||
|
||||
normalized_group_libpath = (
|
||||
str(Path(bpy.path.abspath(group_libpath)).resolve())
|
||||
)
|
||||
normalized_libpath = (
|
||||
str(Path(bpy.path.abspath(str(libpath))).resolve())
|
||||
)
|
||||
self.log.debug(
|
||||
"normalized_group_libpath:\n %s\nnormalized_libpath:\n %s",
|
||||
normalized_group_libpath,
|
||||
normalized_libpath,
|
||||
)
|
||||
if normalized_group_libpath == normalized_libpath:
|
||||
self.log.info("Library already loaded, not updating...")
|
||||
return
|
||||
|
||||
mat = asset_group.matrix_basis.copy()
|
||||
|
||||
self._remove(asset_group)
|
||||
self._process(str(libpath), asset_group, object_name)
|
||||
|
||||
asset_group.matrix_basis = mat
|
||||
|
||||
metadata["libpath"] = str(libpath)
|
||||
metadata["representation"] = str(representation["_id"])
|
||||
|
||||
def exec_remove(self, container: Dict) -> bool:
|
||||
"""Remove an existing container from a Blender scene.
|
||||
|
||||
Arguments:
|
||||
container (openpype:container-1.0): Container to remove,
|
||||
from `host.ls()`.
|
||||
|
||||
Returns:
|
||||
bool: Whether the container was deleted.
|
||||
|
||||
Warning:
|
||||
No nested collections are supported at the moment!
|
||||
"""
|
||||
object_name = container["objectName"]
|
||||
asset_group = bpy.data.objects.get(object_name)
|
||||
|
||||
if not asset_group:
|
||||
return False
|
||||
|
||||
self._remove(asset_group)
|
||||
|
||||
bpy.data.objects.remove(asset_group)
|
||||
|
||||
return True
|
||||
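_process above namespaces every imported datablock by prefixing its name with the group name, so repeated loads of the same camera cannot collide. The rename convention in isolation, as a minimal sketch; the group name is hypothetical:

import bpy

def namespace_datablocks(objects, group_name):
    """Prefix object and data names with `group_name`, as the loader does."""
    for obj in objects:
        obj.name = f"{group_name}:{obj.name}"
        if obj.type != 'EMPTY' and obj.data:
            obj.data.name = f"{group_name}:{obj.data.name}"

# e.g. "camera_01:CameraShape" after loading under group "camera_01"
namespace_datablocks(bpy.context.selected_objects, "camera_01")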
@@ -46,7 +46,7 @@ class FbxModelLoader(plugin.AssetLoader):
         bpy.data.objects.remove(obj)

     def _process(self, libpath, asset_group, group_name, action):
-        bpy.ops.object.select_all(action='DESELECT')
+        plugin.deselect_all()

         collection = bpy.context.view_layer.active_layer_collection.collection

@@ -112,7 +112,7 @@ class FbxModelLoader(plugin.AssetLoader):
         avalon_info = obj[AVALON_PROPERTY]
         avalon_info.update({"container_name": group_name})

-        bpy.ops.object.select_all(action='DESELECT')
+        plugin.deselect_all()

         return objects
@@ -150,7 +150,7 @@ class BlendLayoutLoader(plugin.AssetLoader):

         bpy.data.orphans_purge(do_local_ids=False)

-        bpy.ops.object.select_all(action='DESELECT')
+        plugin.deselect_all()

         return objects
@@ -12,6 +12,7 @@ from avalon.blender.pipeline import AVALON_CONTAINERS
 from avalon.blender.pipeline import AVALON_CONTAINER_ID
 from avalon.blender.pipeline import AVALON_PROPERTY
 from avalon.blender.pipeline import AVALON_INSTANCES
+from openpype import lib
 from openpype.hosts.blender.api import plugin

@@ -59,7 +60,7 @@ class JsonLayoutLoader(plugin.AssetLoader):
         return None

     def _process(self, libpath, asset, asset_group, actions):
-        bpy.ops.object.select_all(action='DESELECT')
+        plugin.deselect_all()

         with open(libpath, "r") as fp:
             data = json.load(fp)

@@ -103,6 +104,21 @@ class JsonLayoutLoader(plugin.AssetLoader):
             options=options
         )

+        # Create the camera asset and the camera instance
+        creator_plugin = lib.get_creator_by_name("CreateCamera")
+        if not creator_plugin:
+            raise ValueError("Creator plugin \"CreateCamera\" was "
+                             "not found.")
+
+        api.create(
+            creator_plugin,
+            name="camera",
+            # name=f"{unique_number}_{subset}_animation",
+            asset=asset,
+            options={"useSelection": False}
+            # data={"dependencies": str(context["representation"]["_id"])}
+        )
+
     def process_asset(self,
                       context: dict,
                       name: str,
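The new hunk guards on the creator plugin lookup before building a camera instance. As a rough sketch only (not OpenPype's actual implementation), `lib.get_creator_by_name` can be thought of as a scan over the Creator classes avalon has discovered:

from avalon import api

def get_creator_by_name_sketch(creator_name):
    """Scan discovered Creator plugins for a matching class name.

    A hedged stand-in; OpenPype ships its own `lib.get_creator_by_name`.
    """
    for plugin_cls in api.discover(api.Creator):
        if plugin_cls.__name__ == creator_name:
            return plugin_cls
    return None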
@@ -93,7 +93,7 @@ class BlendModelLoader(plugin.AssetLoader):

         bpy.data.orphans_purge(do_local_ids=False)

-        bpy.ops.object.select_all(action='DESELECT')
+        plugin.deselect_all()

         return objects

@@ -126,7 +126,7 @@ class BlendModelLoader(plugin.AssetLoader):
         asset_group.empty_display_type = 'SINGLE_ARROW'
         avalon_container.objects.link(asset_group)

-        bpy.ops.object.select_all(action='DESELECT')
+        plugin.deselect_all()

         if options is not None:
             parent = options.get('parent')

@@ -158,7 +158,7 @@ class BlendModelLoader(plugin.AssetLoader):

         bpy.ops.object.parent_set(keep_transform=True)

-        bpy.ops.object.select_all(action='DESELECT')
+        plugin.deselect_all()

         objects = self._process(libpath, asset_group, group_name)
@@ -66,12 +66,16 @@ class BlendRigLoader(plugin.AssetLoader):
         objects = []
         nodes = list(container.children)

+        allowed_types = ['ARMATURE', 'MESH']
+
         for obj in nodes:
-            obj.parent = asset_group
+            if obj.type in allowed_types:
+                obj.parent = asset_group

         for obj in nodes:
-            objects.append(obj)
-            nodes.extend(list(obj.children))
+            if obj.type in allowed_types:
+                objects.append(obj)
+                nodes.extend(list(obj.children))

         objects.reverse()

@@ -107,7 +111,8 @@ class BlendRigLoader(plugin.AssetLoader):

         if action is not None:
             local_obj.animation_data.action = action
-        elif local_obj.animation_data.action is not None:
+        elif (local_obj.animation_data and
+                local_obj.animation_data.action is not None):
             plugin.prepare_data(
                 local_obj.animation_data.action, group_name)

@@ -126,9 +131,32 @@ class BlendRigLoader(plugin.AssetLoader):

         objects.reverse()

-        bpy.data.orphans_purge(do_local_ids=False)
+        curves = [obj for obj in data_to.objects if obj.type == 'CURVE']

-        bpy.ops.object.select_all(action='DESELECT')
+        for curve in curves:
+            local_obj = plugin.prepare_data(curve, group_name)
+            plugin.prepare_data(local_obj.data, group_name)
+
+            local_obj.use_fake_user = True
+
+            for mod in local_obj.modifiers:
+                mod_target_name = mod.object.name
+                mod.object = bpy.data.objects.get(
+                    f"{group_name}:{mod_target_name}")
+
+            if not local_obj.get(AVALON_PROPERTY):
+                local_obj[AVALON_PROPERTY] = dict()
+
+            avalon_info = local_obj[AVALON_PROPERTY]
+            avalon_info.update({"container_name": group_name})
+
+            local_obj.parent = asset_group
+            objects.append(local_obj)
+
+        while bpy.data.orphans_purge(do_local_ids=False):
+            pass
+
+        plugin.deselect_all()

         return objects

@@ -163,7 +191,7 @@ class BlendRigLoader(plugin.AssetLoader):

         action = None

-        bpy.ops.object.select_all(action='DESELECT')
+        plugin.deselect_all()

         create_animation = False

@@ -199,7 +227,7 @@ class BlendRigLoader(plugin.AssetLoader):

         bpy.ops.object.parent_set(keep_transform=True)

-        bpy.ops.object.select_all(action='DESELECT')
+        plugin.deselect_all()

         objects = self._process(libpath, asset_group, group_name, action)

@@ -222,7 +250,7 @@ class BlendRigLoader(plugin.AssetLoader):
             data={"dependencies": str(context["representation"]["_id"])}
         )

-        bpy.ops.object.select_all(action='DESELECT')
+        plugin.deselect_all()

         bpy.context.scene.collection.objects.link(asset_group)
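One detail worth noting in this hunk: the single `bpy.data.orphans_purge()` call becomes a loop. Purging one level of orphans can orphan the datablocks they were the last user of (an action held only by a purged object, say), and `orphans_purge` returns the number of IDs it removed, so looping until it returns zero drains the whole chain:

import bpy

# Each pass can orphan the datablocks the previous pass depended on,
# so purge until nothing more is removed (orphans_purge returns the count).
while bpy.data.orphans_purge(do_local_ids=False):
    pass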
@@ -28,7 +28,7 @@ class ExtractABC(api.Extractor):
         # Perform extraction
         self.log.info("Performing extraction..")

-        bpy.ops.object.select_all(action='DESELECT')
+        plugin.deselect_all()

         selected = []
         asset_group = None

@@ -50,7 +50,7 @@ class ExtractABC(api.Extractor):
             flatten=False
         )

-        bpy.ops.object.select_all(action='DESELECT')
+        plugin.deselect_all()

         if "representations" not in instance.data:
             instance.data["representations"] = []
@@ -28,6 +28,17 @@ class ExtractBlend(openpype.api.Extractor):

         for obj in instance:
             data_blocks.add(obj)
+            # Pack used images in the blend files.
+            if obj.type == 'MESH':
+                for material_slot in obj.material_slots:
+                    mat = material_slot.material
+                    if mat and mat.use_nodes:
+                        tree = mat.node_tree
+                        if tree.type == 'SHADER':
+                            for node in tree.nodes:
+                                if node.bl_idname == 'ShaderNodeTexImage':
+                                    if node.image:
+                                        node.image.pack()

         bpy.data.libraries.write(filepath, data_blocks)
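`node.image.pack()` embeds the texture bytes into the .blend so the published file travels with its images. A quick hedged check that packing took effect (`packed_file` is set on packed images):

import bpy

for img in bpy.data.images:
    state = "packed" if img.packed_file else "external"
    print(f"{img.name}: {state} ({img.filepath})")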
openpype/hosts/blender/plugins/publish/extract_camera.py (new file, 73 lines)
@@ -0,0 +1,73 @@
import os

from openpype import api
from openpype.hosts.blender.api import plugin

import bpy


class ExtractCamera(api.Extractor):
    """Extract the camera as FBX."""

    label = "Extract Camera"
    hosts = ["blender"]
    families = ["camera"]
    optional = True

    def process(self, instance):
        # Define extract output file path
        stagingdir = self.staging_dir(instance)
        filename = f"{instance.name}.fbx"
        filepath = os.path.join(stagingdir, filename)

        # Perform extraction
        self.log.info("Performing extraction..")

        plugin.deselect_all()

        selected = []

        camera = None

        for obj in instance:
            if obj.type == "CAMERA":
                obj.select_set(True)
                selected.append(obj)
                camera = obj
                break

        assert camera, "No camera found"

        context = plugin.create_blender_context(
            active=camera, selected=selected)

        scale_length = bpy.context.scene.unit_settings.scale_length
        bpy.context.scene.unit_settings.scale_length = 0.01

        # We export the fbx
        bpy.ops.export_scene.fbx(
            context,
            filepath=filepath,
            use_active_collection=False,
            use_selection=True,
            object_types={'CAMERA'},
            bake_anim_simplify_factor=0.0
        )

        bpy.context.scene.unit_settings.scale_length = scale_length

        plugin.deselect_all()

        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            'name': 'fbx',
            'ext': 'fbx',
            'files': filename,
            "stagingDir": stagingdir,
        }
        instance.data["representations"].append(representation)

        self.log.info("Extracted instance '%s' to: %s",
                      instance.name, representation)
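ExtractCamera temporarily forces `unit_settings.scale_length` to 0.01 so the exported FBX lands in centimeters (Unreal's native unit) and restores the scene value afterwards. The same save/override/restore pattern, wrapped as a context manager for reuse; a sketch, not part of the plugin:

import contextlib

import bpy

@contextlib.contextmanager
def temporary_scale_length(value=0.01):
    """Override the scene unit scale for the duration of an export."""
    scene = bpy.context.scene
    original = scene.unit_settings.scale_length
    scene.unit_settings.scale_length = value
    try:
        yield
    finally:
        scene.unit_settings.scale_length = original

# Usage (hypothetical output path):
# with temporary_scale_length():
#     bpy.ops.export_scene.fbx(filepath="/tmp/camera.fbx", use_selection=True)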
@@ -24,7 +24,7 @@ class ExtractFBX(api.Extractor):
         # Perform extraction
         self.log.info("Performing extraction..")

-        bpy.ops.object.select_all(action='DESELECT')
+        plugin.deselect_all()

         selected = []
         asset_group = None

@@ -60,7 +60,7 @@ class ExtractFBX(api.Extractor):
             add_leaf_bones=False
         )

-        bpy.ops.object.select_all(action='DESELECT')
+        plugin.deselect_all()

         for mat in new_materials:
             bpy.data.materials.remove(mat)
@@ -0,0 +1,48 @@
from typing import List

import mathutils

import pyblish.api
import openpype.api
import openpype.hosts.blender.api.action


class ValidateCameraZeroKeyframe(pyblish.api.InstancePlugin):
    """Camera must have a keyframe at frame 0.

    Unreal shifts the first keyframe to frame 0. Forcing the camera to have
    a keyframe at frame 0 will ensure that the animation will be the same
    in Unreal and Blender.
    """

    order = openpype.api.ValidateContentsOrder
    hosts = ["blender"]
    families = ["camera"]
    category = "geometry"
    version = (0, 1, 0)
    label = "Zero Keyframe"
    actions = [openpype.hosts.blender.api.action.SelectInvalidAction]

    _identity = mathutils.Matrix()

    @classmethod
    def get_invalid(cls, instance) -> List:
        invalid = []
        for obj in [obj for obj in instance]:
            if obj.type == "CAMERA":
                if obj.animation_data and obj.animation_data.action:
                    action = obj.animation_data.action
                    frames_set = set()
                    for fcu in action.fcurves:
                        for kp in fcu.keyframe_points:
                            frames_set.add(kp.co[0])
                    frames = list(frames_set)
                    frames.sort()
                    if frames[0] != 0.0:
                        invalid.append(obj)
        return invalid

    def process(self, instance):
        invalid = self.get_invalid(instance)
        if invalid:
            raise RuntimeError(
                f"Camera in instance has no keyframe at frame 0: {invalid}")
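The validator exposes its check through `get_invalid`, which is also what the `SelectInvalidAction` uses to highlight offending objects. A hedged sketch of poking it by hand from Blender's Python console; the list here is a plain stand-in for what the collector would build into a real pyblish instance:

import bpy

instance = [obj for obj in bpy.context.scene.objects]  # stand-in instance
invalid = ValidateCameraZeroKeyframe.get_invalid(instance)
if invalid:
    print("Cameras missing a keyframe at frame 0:", invalid)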
@@ -4,7 +4,6 @@ import copy
 import argparse

 from avalon import io
-from avalon.tools import publish

 import pyblish.api
 import pyblish.util

@@ -13,6 +12,7 @@ from openpype.api import Logger
 import openpype
 import openpype.hosts.celaction
 from openpype.hosts.celaction import api as celaction
+from openpype.tools.utils import host_tools

 log = Logger().get_logger("Celaction_cli_publisher")

@@ -82,7 +82,7 @@ def main():

     pyblish.api.register_host(publish_host)

-    return publish.show()
+    return host_tools.show_publish()


 if __name__ == "__main__":
openpype/hosts/flame/__init__.py (new file, 105 lines)
@@ -0,0 +1,105 @@
from .api.utils import (
    setup
)

from .api.pipeline import (
    install,
    uninstall,
    ls,
    containerise,
    update_container,
    maintained_selection,
    remove_instance,
    list_instances,
    imprint
)

from .api.lib import (
    FlameAppFramework,
    maintain_current_timeline,
    get_project_manager,
    get_current_project,
    get_current_timeline,
    create_bin,
)

from .api.menu import (
    FlameMenuProjectConnect,
    FlameMenuTimeline
)

from .api.workio import (
    open_file,
    save_file,
    current_file,
    has_unsaved_changes,
    file_extensions,
    work_root
)

import os

HOST_DIR = os.path.dirname(
    os.path.abspath(__file__)
)
API_DIR = os.path.join(HOST_DIR, "api")
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")

app_framework = None
apps = []


__all__ = [
    "HOST_DIR",
    "API_DIR",
    "PLUGINS_DIR",
    "PUBLISH_PATH",
    "LOAD_PATH",
    "CREATE_PATH",
    "INVENTORY_PATH",

    "app_framework",
    "apps",

    # pipeline
    "install",
    "uninstall",
    "ls",
    "containerise",
    "update_container",
    "reload_pipeline",
    "maintained_selection",
    "remove_instance",
    "list_instances",
    "imprint",

    # utils
    "setup",

    # lib
    "FlameAppFramework",
    "maintain_current_timeline",
    "get_project_manager",
    "get_current_project",
    "get_current_timeline",
    "create_bin",

    # menu
    "FlameMenuProjectConnect",
    "FlameMenuTimeline",

    # plugin

    # workio
    "open_file",
    "save_file",
    "current_file",
    "has_unsaved_changes",
    "file_extensions",
    "work_root"
]
openpype/hosts/flame/api/__init__.py (new file, 3 lines)
@@ -0,0 +1,3 @@
"""
|
||||
OpenPype Autodesk Flame api
|
||||
"""
|
||||
openpype/hosts/flame/api/lib.py (new file, 276 lines)
@@ -0,0 +1,276 @@
import sys
import os
import pickle
import contextlib
from pprint import pformat

from openpype.api import Logger

log = Logger().get_logger(__name__)


@contextlib.contextmanager
def io_preferences_file(klass, filepath, write=False):
    try:
        flag = "w" if write else "r"
        yield open(filepath, flag)

    except IOError as _error:
        klass.log.info("Unable to work with preferences `{}`: {}".format(
            filepath, _error))


class FlameAppFramework(object):
    # flameAppFramework class takes care of preferences

    class prefs_dict(dict):

        def __init__(self, master, name, **kwargs):
            self.name = name
            self.master = master
            if not self.master.get(self.name):
                self.master[self.name] = {}
            self.master[self.name].__init__()

        def __getitem__(self, k):
            return self.master[self.name].__getitem__(k)

        def __setitem__(self, k, v):
            return self.master[self.name].__setitem__(k, v)

        def __delitem__(self, k):
            return self.master[self.name].__delitem__(k)

        def get(self, k, default=None):
            return self.master[self.name].get(k, default)

        def setdefault(self, k, default=None):
            return self.master[self.name].setdefault(k, default)

        def pop(self, k, v=object()):
            if v is object():
                return self.master[self.name].pop(k)
            return self.master[self.name].pop(k, v)

        def update(self, mapping=(), **kwargs):
            self.master[self.name].update(mapping, **kwargs)

        def __contains__(self, k):
            return self.master[self.name].__contains__(k)

        def copy(self):  # don't delegate w/ super - dict.copy() -> dict :(
            return type(self)(self)

        def keys(self):
            return self.master[self.name].keys()

        @classmethod
        def fromkeys(cls, keys, v=None):
            return cls.master[cls.name].fromkeys(keys, v)

        def __repr__(self):
            return "{0}({1})".format(
                type(self).__name__, self.master[self.name].__repr__())

        def master_keys(self):
            return self.master.keys()

    def __init__(self):
        self.name = self.__class__.__name__
        self.bundle_name = "OpenPypeFlame"
        # self.prefs scope is limited to flame project and user
        self.prefs = {}
        self.prefs_user = {}
        self.prefs_global = {}
        self.log = log

        try:
            import flame
            self.flame = flame
            self.flame_project_name = self.flame.project.current_project.name
            self.flame_user_name = flame.users.current_user.name
        except Exception:
            self.flame = None
            self.flame_project_name = None
            self.flame_user_name = None

        import socket
        self.hostname = socket.gethostname()

        if sys.platform == "darwin":
            self.prefs_folder = os.path.join(
                os.path.expanduser("~"),
                "Library",
                "Caches",
                "OpenPype",
                self.bundle_name
            )
        elif sys.platform.startswith("linux"):
            self.prefs_folder = os.path.join(
                os.path.expanduser("~"),
                ".OpenPype",
                self.bundle_name)

        self.prefs_folder = os.path.join(
            self.prefs_folder,
            self.hostname,
        )

        self.log.info("[{}] waking up".format(self.__class__.__name__))
        self.load_prefs()

        # menu auto-refresh defaults
        if not self.prefs_global.get("menu_auto_refresh"):
            self.prefs_global["menu_auto_refresh"] = {
                "media_panel": True,
                "batch": True,
                "main_menu": True,
                "timeline_menu": True
            }

        self.apps = []

    def get_pref_file_paths(self):

        prefix = self.prefs_folder + os.path.sep + self.bundle_name
        prefs_file_path = "_".join([
            prefix, self.flame_user_name,
            self.flame_project_name]) + ".prefs"
        prefs_user_file_path = "_".join([
            prefix, self.flame_user_name]) + ".prefs"
        prefs_global_file_path = prefix + ".prefs"

        return (prefs_file_path, prefs_user_file_path, prefs_global_file_path)

    def load_prefs(self):

        (proj_pref_path, user_pref_path,
         glob_pref_path) = self.get_pref_file_paths()

        with io_preferences_file(self, proj_pref_path) as prefs_file:
            self.prefs = pickle.load(prefs_file)
            self.log.info(
                "Project - preferences contents:\n{}".format(
                    pformat(self.prefs)
                ))

        with io_preferences_file(self, user_pref_path) as prefs_file:
            self.prefs_user = pickle.load(prefs_file)
            self.log.info(
                "User - preferences contents:\n{}".format(
                    pformat(self.prefs_user)
                ))

        with io_preferences_file(self, glob_pref_path) as prefs_file:
            self.prefs_global = pickle.load(prefs_file)
            self.log.info(
                "Global - preferences contents:\n{}".format(
                    pformat(self.prefs_global)
                ))

        return True

    def save_prefs(self):
        # make sure the preference folder is available
        if not os.path.isdir(self.prefs_folder):
            try:
                os.makedirs(self.prefs_folder)
            except Exception:
                self.log.info("Unable to create folder {}".format(
                    self.prefs_folder))
                return False

        # get all pref file paths
        (proj_pref_path, user_pref_path,
         glob_pref_path) = self.get_pref_file_paths()

        with io_preferences_file(self, proj_pref_path, True) as prefs_file:
            pickle.dump(self.prefs, prefs_file)
            self.log.info(
                "Project - preferences contents:\n{}".format(
                    pformat(self.prefs)
                ))

        with io_preferences_file(self, user_pref_path, True) as prefs_file:
            pickle.dump(self.prefs_user, prefs_file)
            self.log.info(
                "User - preferences contents:\n{}".format(
                    pformat(self.prefs_user)
                ))

        with io_preferences_file(self, glob_pref_path, True) as prefs_file:
            pickle.dump(self.prefs_global, prefs_file)
            self.log.info(
                "Global - preferences contents:\n{}".format(
                    pformat(self.prefs_global)
                ))

        return True


@contextlib.contextmanager
def maintain_current_timeline(to_timeline, from_timeline=None):
    """Maintain current timeline selection during context

    Attributes:
        from_timeline (resolve.Timeline)[optional]:
    Example:
        >>> print(from_timeline.GetName())
        timeline1
        >>> print(to_timeline.GetName())
        timeline2

        >>> with maintain_current_timeline(to_timeline):
        ...     print(get_current_timeline().GetName())
        timeline2

        >>> print(get_current_timeline().GetName())
        timeline1
    """
    # todo: this is still Resolve's implementation
    project = get_current_project()
    working_timeline = from_timeline or project.GetCurrentTimeline()

    # switch to the input timeline
    project.SetCurrentTimeline(to_timeline)

    try:
        # do the work
        yield
    finally:
        # put the original working timeline back into context
        project.SetCurrentTimeline(working_timeline)


def get_project_manager():
    # TODO: get_project_manager
    return


def get_media_storage():
    # TODO: get_media_storage
    return


def get_current_project():
    # TODO: get_current_project
    return


def get_current_timeline(new=False):
    # TODO: get_current_timeline
    return


def create_bin(name, root=None):
    # TODO: create_bin
    return


def rescan_hooks():
    import flame
    try:
        flame.execute_shortcut('Rescan Python Hooks')
    except Exception:
        pass
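`prefs_dict` is a thin view over one named slot of a shared master dict, which lets the project, user and global scopes live in a single pickled store. The delegation idea reduced to its core, as a sketch with hypothetical names:

master = {}

class ScopedPrefs(dict):
    """Minimal view over master[name], mirroring prefs_dict above."""

    def __init__(self, master, name):
        self.master = master
        self.name = name
        self.master.setdefault(name, {})

    def __getitem__(self, key):
        return self.master[self.name][key]

    def __setitem__(self, key, value):
        self.master[self.name][key] = value

project_prefs = ScopedPrefs(master, "FlameMenuProjectConnect")
project_prefs["last_bin"] = "edits"
print(master)  # {'FlameMenuProjectConnect': {'last_bin': 'edits'}}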
openpype/hosts/flame/api/menu.py (new file, 208 lines)
@@ -0,0 +1,208 @@
import os
from Qt import QtWidgets
from copy import deepcopy

from openpype.tools.utils.host_tools import HostToolsHelper


menu_group_name = 'OpenPype'

default_flame_export_presets = {
    'Publish': {
        'PresetVisibility': 2,
        'PresetType': 0,
        'PresetFile': 'OpenEXR/OpenEXR (16-bit fp PIZ).xml'
    },
    'Preview': {
        'PresetVisibility': 3,
        'PresetType': 2,
        'PresetFile': 'Generate Preview.xml'
    },
    'Thumbnail': {
        'PresetVisibility': 3,
        'PresetType': 0,
        'PresetFile': 'Generate Thumbnail.xml'
    }
}


class _FlameMenuApp(object):
    def __init__(self, framework):
        self.name = self.__class__.__name__
        self.framework = framework
        self.log = framework.log
        self.menu_group_name = menu_group_name
        self.dynamic_menu_data = {}

        # flame module is only available when a
        # flame project is loaded and initialized
        self.flame = None
        try:
            import flame
            self.flame = flame
        except ImportError:
            self.flame = None

        self.flame_project_name = None
        if self.flame:
            self.flame_project_name = self.flame.project.current_project.name
        self.prefs = self.framework.prefs_dict(self.framework.prefs, self.name)
        self.prefs_user = self.framework.prefs_dict(
            self.framework.prefs_user, self.name)
        self.prefs_global = self.framework.prefs_dict(
            self.framework.prefs_global, self.name)

        self.mbox = QtWidgets.QMessageBox()

        self.menu = {
            "actions": [{
                'name': os.getenv("AVALON_PROJECT", "project"),
                'isEnabled': False
            }],
            "name": self.menu_group_name
        }
        self.tools_helper = HostToolsHelper()

    def __getattr__(self, name):
        def method(*args, **kwargs):
            print('calling %s' % name)
        return method

    def rescan(self, *args, **kwargs):
        if not self.flame:
            try:
                import flame
                self.flame = flame
            except ImportError:
                self.flame = None

        if self.flame:
            self.flame.execute_shortcut('Rescan Python Hooks')
            self.log.info('Rescan Python Hooks')


class FlameMenuProjectConnect(_FlameMenuApp):

    # flameMenuProjectconnect app takes care of the preferences dialog as well

    def __init__(self, framework):
        _FlameMenuApp.__init__(self, framework)

    def __getattr__(self, name):
        def method(*args, **kwargs):
            project = self.dynamic_menu_data.get(name)
            if project:
                self.link_project(project)
        return method

    def build_menu(self):
        if not self.flame:
            return []

        flame_project_name = self.flame_project_name
        self.log.info("______ {} ______".format(flame_project_name))

        menu = deepcopy(self.menu)

        menu['actions'].append({
            "name": "Workfiles ...",
            "execute": lambda x: self.tools_helper.show_workfiles()
        })
        menu['actions'].append({
            "name": "Create ...",
            "execute": lambda x: self.tools_helper.show_creator()
        })
        menu['actions'].append({
            "name": "Publish ...",
            "execute": lambda x: self.tools_helper.show_publish()
        })
        menu['actions'].append({
            "name": "Load ...",
            "execute": lambda x: self.tools_helper.show_loader()
        })
        menu['actions'].append({
            "name": "Manage ...",
            "execute": lambda x: self.tools_helper.show_scene_inventory()
        })
        menu['actions'].append({
            "name": "Library ...",
            "execute": lambda x: self.tools_helper.show_library_loader()
        })
        return menu

    def get_projects(self, *args, **kwargs):
        pass

    def refresh(self, *args, **kwargs):
        self.rescan()

    def rescan(self, *args, **kwargs):
        if not self.flame:
            try:
                import flame
                self.flame = flame
            except ImportError:
                self.flame = None

        if self.flame:
            self.flame.execute_shortcut('Rescan Python Hooks')
            self.log.info('Rescan Python Hooks')


class FlameMenuTimeline(_FlameMenuApp):

    # flameMenuTimeline app takes care of the preferences dialog as well

    def __init__(self, framework):
        _FlameMenuApp.__init__(self, framework)

    def __getattr__(self, name):
        def method(*args, **kwargs):
            project = self.dynamic_menu_data.get(name)
            if project:
                self.link_project(project)
        return method

    def build_menu(self):
        if not self.flame:
            return []

        flame_project_name = self.flame_project_name
        self.log.info("______ {} ______".format(flame_project_name))

        menu = deepcopy(self.menu)

        menu['actions'].append({
            "name": "Create ...",
            "execute": lambda x: self.tools_helper.show_creator()
        })
        menu['actions'].append({
            "name": "Publish ...",
            "execute": lambda x: self.tools_helper.show_publish()
        })
        menu['actions'].append({
            "name": "Load ...",
            "execute": lambda x: self.tools_helper.show_loader()
        })
        menu['actions'].append({
            "name": "Manage ...",
            "execute": lambda x: self.tools_helper.show_scene_inventory()
        })

        return menu

    def get_projects(self, *args, **kwargs):
        pass

    def refresh(self, *args, **kwargs):
        self.rescan()

    def rescan(self, *args, **kwargs):
        if not self.flame:
            try:
                import flame
                self.flame = flame
            except ImportError:
                self.flame = None

        if self.flame:
            self.flame.execute_shortcut('Rescan Python Hooks')
            self.log.info('Rescan Python Hooks')
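Flame consumes menu dictionaries like the ones `build_menu` returns through its Python-hook entry points. A hedged sketch of a hook file handing the menu over; the `get_main_menu_custom_ui_actions` hook name follows Flame's Python hooks convention and should be treated as an assumption here:

# In a file under /opt/Autodesk/shared/python, Flame calls hook functions
# such as get_main_menu_custom_ui_actions() and renders the returned dicts.
import openpype.hosts.flame as opflame

def get_main_menu_custom_ui_actions():
    # hook name assumed from Flame's python hooks convention
    menu_app = opflame.FlameMenuProjectConnect(opflame.app_framework)
    menu = menu_app.build_menu()
    return [menu] if menu else []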
openpype/hosts/flame/api/pipeline.py (new file, 155 lines)
@@ -0,0 +1,155 @@
"""
|
||||
Basic avalon integration
|
||||
"""
|
||||
import contextlib
|
||||
from avalon import api as avalon
|
||||
from pyblish import api as pyblish
|
||||
from openpype.api import Logger
|
||||
|
||||
AVALON_CONTAINERS = "AVALON_CONTAINERS"
|
||||
|
||||
log = Logger().get_logger(__name__)
|
||||
|
||||
|
||||
def install():
|
||||
from .. import (
|
||||
PUBLISH_PATH,
|
||||
LOAD_PATH,
|
||||
CREATE_PATH,
|
||||
INVENTORY_PATH
|
||||
)
|
||||
# TODO: install
|
||||
|
||||
# Disable all families except for the ones we explicitly want to see
|
||||
family_states = [
|
||||
"imagesequence",
|
||||
"render2d",
|
||||
"plate",
|
||||
"render",
|
||||
"mov",
|
||||
"clip"
|
||||
]
|
||||
avalon.data["familiesStateDefault"] = False
|
||||
avalon.data["familiesStateToggled"] = family_states
|
||||
|
||||
log.info("openpype.hosts.flame installed")
|
||||
|
||||
pyblish.register_host("flame")
|
||||
pyblish.register_plugin_path(PUBLISH_PATH)
|
||||
log.info("Registering Flame plug-ins..")
|
||||
|
||||
avalon.register_plugin_path(avalon.Loader, LOAD_PATH)
|
||||
avalon.register_plugin_path(avalon.Creator, CREATE_PATH)
|
||||
avalon.register_plugin_path(avalon.InventoryAction, INVENTORY_PATH)
|
||||
|
||||
# register callback for switching publishable
|
||||
pyblish.register_callback("instanceToggled", on_pyblish_instance_toggled)
|
||||
|
||||
|
||||
def uninstall():
|
||||
from .. import (
|
||||
PUBLISH_PATH,
|
||||
LOAD_PATH,
|
||||
CREATE_PATH,
|
||||
INVENTORY_PATH
|
||||
)
|
||||
|
||||
# TODO: uninstall
|
||||
pyblish.deregister_host("flame")
|
||||
pyblish.deregister_plugin_path(PUBLISH_PATH)
|
||||
log.info("Deregistering DaVinci Resovle plug-ins..")
|
||||
|
||||
avalon.deregister_plugin_path(avalon.Loader, LOAD_PATH)
|
||||
avalon.deregister_plugin_path(avalon.Creator, CREATE_PATH)
|
||||
avalon.deregister_plugin_path(avalon.InventoryAction, INVENTORY_PATH)
|
||||
|
||||
# register callback for switching publishable
|
||||
pyblish.deregister_callback("instanceToggled", on_pyblish_instance_toggled)
|
||||
|
||||
|
||||
def containerise(tl_segment,
|
||||
name,
|
||||
namespace,
|
||||
context,
|
||||
loader=None,
|
||||
data=None):
|
||||
# TODO: containerise
|
||||
pass
|
||||
|
||||
|
||||
def ls():
|
||||
"""List available containers.
|
||||
"""
|
||||
# TODO: ls
|
||||
pass
|
||||
|
||||
|
||||
def parse_container(tl_segment, validate=True):
|
||||
"""Return container data from timeline_item's openpype tag.
|
||||
"""
|
||||
# TODO: parse_container
|
||||
pass
|
||||
|
||||
|
||||
def update_container(tl_segment, data=None):
|
||||
"""Update container data to input timeline_item's openpype tag.
|
||||
"""
|
||||
# TODO: update_container
|
||||
pass
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def maintained_selection():
|
||||
"""Maintain selection during context
|
||||
|
||||
Example:
|
||||
>>> with maintained_selection():
|
||||
... node['selected'].setValue(True)
|
||||
>>> print(node['selected'].value())
|
||||
False
|
||||
"""
|
||||
# TODO: maintained_selection + remove undo steps
|
||||
|
||||
try:
|
||||
# do the operation
|
||||
yield
|
||||
finally:
|
||||
pass
|
||||
|
||||
|
||||
def reset_selection():
|
||||
"""Deselect all selected nodes
|
||||
"""
|
||||
pass
|
||||
|
||||
|
||||
def on_pyblish_instance_toggled(instance, old_value, new_value):
|
||||
"""Toggle node passthrough states on instance toggles."""
|
||||
|
||||
log.info("instance toggle: {}, old_value: {}, new_value:{} ".format(
|
||||
instance, old_value, new_value))
|
||||
|
||||
# from openpype.hosts.resolve import (
|
||||
# set_publish_attribute
|
||||
# )
|
||||
|
||||
# # Whether instances should be passthrough based on new value
|
||||
# timeline_item = instance.data["item"]
|
||||
# set_publish_attribute(timeline_item, new_value)
|
||||
|
||||
|
||||
def remove_instance(instance):
|
||||
"""Remove instance marker from track item."""
|
||||
# TODO: remove_instance
|
||||
pass
|
||||
|
||||
|
||||
def list_instances():
|
||||
"""List all created instances from current workfile."""
|
||||
# TODO: list_instances
|
||||
pass
|
||||
|
||||
|
||||
def imprint(item, data=None):
|
||||
# TODO: imprint
|
||||
pass
|
||||
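`install`/`uninstall` are the callbacks avalon fires when this module is registered as the active host. A minimal sketch of wiring the Flame host up from a startup script; the call site is an assumption, since nothing in this diff registers it yet:

from avalon import api
import openpype.hosts.flame as opflame

# avalon calls opflame.install(), which registers the pyblish host,
# plugin paths and callbacks defined above.
api.install(opflame)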
openpype/hosts/flame/api/plugin.py (new file, 3 lines)
@@ -0,0 +1,3 @@
# Creator plugin functions
# Publishing plugin functions
# Loader plugin functions
openpype/hosts/flame/api/utils.py (new file, 108 lines)
@@ -0,0 +1,108 @@
"""
|
||||
Flame utils for syncing scripts
|
||||
"""
|
||||
|
||||
import os
|
||||
import shutil
|
||||
from openpype.api import Logger
|
||||
log = Logger().get_logger(__name__)
|
||||
|
||||
|
||||
def _sync_utility_scripts(env=None):
|
||||
""" Synchronizing basic utlility scripts for flame.
|
||||
|
||||
To be able to run start OpenPype within Flame we have to copy
|
||||
all utility_scripts and additional FLAME_SCRIPT_DIR into
|
||||
`/opt/Autodesk/shared/python`. This will be always synchronizing those
|
||||
folders.
|
||||
"""
|
||||
from .. import HOST_DIR
|
||||
|
||||
env = env or os.environ
|
||||
|
||||
# initiate inputs
|
||||
scripts = {}
|
||||
fsd_env = env.get("FLAME_SCRIPT_DIRS", "")
|
||||
flame_shared_dir = "/opt/Autodesk/shared/python"
|
||||
|
||||
fsd_paths = [os.path.join(
|
||||
HOST_DIR,
|
||||
"utility_scripts"
|
||||
)]
|
||||
|
||||
# collect script dirs
|
||||
log.info("FLAME_SCRIPT_DIRS: `{fsd_env}`".format(**locals()))
|
||||
log.info("fsd_paths: `{fsd_paths}`".format(**locals()))
|
||||
|
||||
# add application environment setting for FLAME_SCRIPT_DIR
|
||||
# to script path search
|
||||
for _dirpath in fsd_env.split(os.pathsep):
|
||||
if not os.path.isdir(_dirpath):
|
||||
log.warning("Path is not a valid dir: `{_dirpath}`".format(
|
||||
**locals()))
|
||||
continue
|
||||
fsd_paths.append(_dirpath)
|
||||
|
||||
# collect scripts from dirs
|
||||
for path in fsd_paths:
|
||||
scripts.update({path: os.listdir(path)})
|
||||
|
||||
remove_black_list = []
|
||||
for _k, s_list in scripts.items():
|
||||
remove_black_list += s_list
|
||||
|
||||
log.info("remove_black_list: `{remove_black_list}`".format(**locals()))
|
||||
log.info("Additional Flame script paths: `{fsd_paths}`".format(**locals()))
|
||||
log.info("Flame Scripts: `{scripts}`".format(**locals()))
|
||||
|
||||
# make sure no script file is in folder
|
||||
if next(iter(os.listdir(flame_shared_dir)), None):
|
||||
for _itm in os.listdir(flame_shared_dir):
|
||||
skip = False
|
||||
|
||||
# skip all scripts and folders which are not maintained
|
||||
if _itm not in remove_black_list:
|
||||
skip = True
|
||||
|
||||
# do not skyp if pyc in extension
|
||||
if not os.path.isdir(_itm) and "pyc" in os.path.splitext(_itm)[-1]:
|
||||
skip = False
|
||||
|
||||
# continue if skip in true
|
||||
if skip:
|
||||
continue
|
||||
|
||||
path = os.path.join(flame_shared_dir, _itm)
|
||||
log.info("Removing `{path}`...".format(**locals()))
|
||||
if os.path.isdir(path):
|
||||
shutil.rmtree(path, onerror=None)
|
||||
else:
|
||||
os.remove(path)
|
||||
|
||||
# copy scripts into Resolve's utility scripts dir
|
||||
for dirpath, scriptlist in scripts.items():
|
||||
# directory and scripts list
|
||||
for _script in scriptlist:
|
||||
# script in script list
|
||||
src = os.path.join(dirpath, _script)
|
||||
dst = os.path.join(flame_shared_dir, _script)
|
||||
log.info("Copying `{src}` to `{dst}`...".format(**locals()))
|
||||
if os.path.isdir(src):
|
||||
shutil.copytree(
|
||||
src, dst, symlinks=False,
|
||||
ignore=None, ignore_dangling_symlinks=False
|
||||
)
|
||||
else:
|
||||
shutil.copy2(src, dst)
|
||||
|
||||
|
||||
def setup(env=None):
|
||||
""" Wrapper installer started from
|
||||
`flame/hooks/pre_flame_setup.py`
|
||||
"""
|
||||
env = env or os.environ
|
||||
|
||||
# synchronize resolve utility scripts
|
||||
_sync_utility_scripts(env)
|
||||
|
||||
log.info("Flame OpenPype wrapper has been installed")
|
||||
openpype/hosts/flame/api/workio.py (new file, 37 lines)
@@ -0,0 +1,37 @@
"""Host API required Work Files tool"""
|
||||
|
||||
import os
|
||||
from openpype.api import Logger
|
||||
# from .. import (
|
||||
# get_project_manager,
|
||||
# get_current_project
|
||||
# )
|
||||
|
||||
|
||||
log = Logger().get_logger(__name__)
|
||||
|
||||
exported_projet_ext = ".otoc"
|
||||
|
||||
|
||||
def file_extensions():
|
||||
return [exported_projet_ext]
|
||||
|
||||
|
||||
def has_unsaved_changes():
|
||||
pass
|
||||
|
||||
|
||||
def save_file(filepath):
|
||||
pass
|
||||
|
||||
|
||||
def open_file(filepath):
|
||||
pass
|
||||
|
||||
|
||||
def current_file():
|
||||
pass
|
||||
|
||||
|
||||
def work_root(session):
|
||||
return os.path.normpath(session["AVALON_WORKDIR"]).replace("\\", "/")
|
||||
openpype/hosts/flame/hooks/pre_flame_setup.py (new file, 132 lines)
@@ -0,0 +1,132 @@
import os
import json
import tempfile
import contextlib
from openpype.lib import (
    PreLaunchHook, get_openpype_username)
from openpype.hosts import flame as opflame
import openpype
from pprint import pformat


class FlamePrelaunch(PreLaunchHook):
    """Flame prelaunch hook

    Will make sure flame_script_dirs are copied to the user's folder defined
    in the environment var FLAME_SCRIPT_DIR.
    """
    app_groups = ["flame"]

    # todo: replace version number with avalon launch app version
    flame_python_exe = "/opt/Autodesk/python/2021/bin/python2.7"

    wtc_script_path = os.path.join(
        opflame.HOST_DIR, "scripts", "wiretap_com.py")

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

        self.signature = "( {} )".format(self.__class__.__name__)

    def execute(self):
        """Hook entry method."""
        project_doc = self.data["project_doc"]
        user_name = get_openpype_username()

        self.log.debug("Collected user \"{}\"".format(user_name))
        self.log.info(pformat(project_doc))
        _db_p_data = project_doc["data"]
        width = _db_p_data["resolutionWidth"]
        height = _db_p_data["resolutionHeight"]
        fps = int(_db_p_data["fps"])

        project_data = {
            "Name": project_doc["name"],
            "Nickname": _db_p_data["code"],
            "Description": "Created by OpenPype",
            "SetupDir": project_doc["name"],
            "FrameWidth": int(width),
            "FrameHeight": int(height),
            "AspectRatio": float((width / height) * _db_p_data["pixelAspect"]),
            "FrameRate": "{} fps".format(fps),
            "FrameDepth": "16-bit fp",
            "FieldDominance": "PROGRESSIVE"
        }

        data_to_script = {
            # from settings
            "host_name": "localhost",
            "volume_name": "stonefs",
            "group_name": "staff",
            "color_policy": "ACES 1.1",

            # from project
            "project_name": project_doc["name"],
            "user_name": user_name,
            "project_data": project_data
        }
        app_arguments = self._get_launch_arguments(data_to_script)

        self.log.info(pformat(dict(self.launch_context.env)))

        opflame.setup(self.launch_context.env)

        self.launch_context.launch_args.extend(app_arguments)

    def _get_launch_arguments(self, script_data):
        # Dump data to string
        dumped_script_data = json.dumps(script_data)

        with make_temp_file(dumped_script_data) as tmp_json_path:
            # Prepare subprocess arguments
            args = [
                self.flame_python_exe,
                self.wtc_script_path,
                tmp_json_path
            ]
            self.log.info("Executing: {}".format(" ".join(args)))

            process_kwargs = {
                "logger": self.log,
                "env": {}
            }

            openpype.api.run_subprocess(args, **process_kwargs)

            # process returned json file to pass launch args
            return_json_data = open(tmp_json_path).read()
            returned_data = json.loads(return_json_data)
            app_args = returned_data.get("app_args")
            self.log.info("____ app_args: `{}`".format(app_args))

            if not app_args:
                raise RuntimeError("App arguments were not solved")

        return app_args


@contextlib.contextmanager
def make_temp_file(data):
    temporary_json_filepath = None
    try:
        # Store dumped json to temporary file
        temporary_json_file = tempfile.NamedTemporaryFile(
            mode="w", suffix=".json", delete=False
        )
        temporary_json_file.write(data)
        temporary_json_file.close()
        temporary_json_filepath = temporary_json_file.name.replace(
            "\\", "/"
        )

        yield temporary_json_filepath

    except IOError as _error:
        raise IOError(
            "Not able to create temp json file: {}".format(
                _error
            )
        )

    finally:
        # Remove the temporary json; guard against a failure happening
        # before the path was assigned
        if temporary_json_filepath:
            os.remove(temporary_json_filepath)
490
openpype/hosts/flame/scripts/wiretap_com.py
Normal file
@ -0,0 +1,490 @@
#!/usr/bin/env python2.7
# -*- coding: utf-8 -*-

from __future__ import absolute_import
import os
import sys
import subprocess
import json
import xml.dom.minidom as minidom
from copy import deepcopy
import datetime

try:
    from libwiretapPythonClientAPI import (
        WireTapClientInit)
except ImportError:
    flame_python_path = "/opt/Autodesk/flame_2021/python"
    flame_exe_path = (
        "/opt/Autodesk/flame_2021/bin/flame.app"
        "/Contents/MacOS/startApp")

    sys.path.append(flame_python_path)

from libwiretapPythonClientAPI import (
    WireTapClientInit,
    WireTapClientUninit,
    WireTapNodeHandle,
    WireTapServerHandle,
    WireTapInt,
    WireTapStr
)


class WireTapCom(object):
    """
    Communicator class wrapper for talking to the WireTap db.

    This way we are able to set a new project with settings and
    the correct colorspace policy. We are also able to create a new user
    or get an actual user with a similar name (users are usually cloning
    their profiles and adding a date stamp into the suffix).
    """
    def __init__(self, host_name=None, volume_name=None, group_name=None):
        """Initialisation of WireTap communication class

        Args:
            host_name (str, optional): Name of host server. Defaults to None.
            volume_name (str, optional): Name of volume. Defaults to None.
            group_name (str, optional): Name of user group. Defaults to None.
        """
        # set main attributes of server
        # if there are none set the default installation
        self.host_name = host_name or "localhost"
        self.volume_name = volume_name or "stonefs"
        self.group_name = group_name or "staff"

        # initialize WireTap client
        WireTapClientInit()

        # add the server to shared variable
        self._server = WireTapServerHandle("{}:IFFFS".format(self.host_name))
        print("WireTap connected at '{}'...".format(
            self.host_name))

    def close(self):
        self._server = None
        WireTapClientUninit()
        print("WireTap closed...")
    def get_launch_args(
            self, project_name, project_data, user_name, *args, **kwargs):
        """Forming launch arguments for OpenPype launcher.

        Args:
            project_name (str): name of project
            project_data (dict): Flame compatible project data
            user_name (str): name of user

        Returns:
            list: arguments
        """

        workspace_name = kwargs.get("workspace_name")
        color_policy = kwargs.get("color_policy")

        self._project_prep(project_name)
        self._set_project_settings(project_name, project_data)
        self._set_project_colorspace(project_name, color_policy)
        user_name = self._user_prep(user_name)

        if workspace_name is None:
            # default workspace
            print("Using a default workspace")
            return [
                "--start-project={}".format(project_name),
                "--start-user={}".format(user_name),
                "--create-workspace"
            ]

        else:
            print(
                "Using a custom workspace '{}'".format(workspace_name))

            self._workspace_prep(project_name, workspace_name)
            return [
                "--start-project={}".format(project_name),
                "--start-user={}".format(user_name),
                "--create-workspace",
                "--start-workspace={}".format(workspace_name)
            ]
    def _workspace_prep(self, project_name, workspace_name):
        """Preparing a workspace

        In case it does not exist it will create one

        Args:
            project_name (str): project name
            workspace_name (str): workspace name

        Raises:
            AttributeError: unable to create workspace
        """
        workspace_exists = self._child_is_in_parent_path(
            "/projects/{}".format(project_name), workspace_name, "WORKSPACE"
        )
        if not workspace_exists:
            project = WireTapNodeHandle(
                self._server, "/projects/{}".format(project_name))

            workspace_node = WireTapNodeHandle()
            created_workspace = project.createNode(
                workspace_name, "WORKSPACE", workspace_node)

            if not created_workspace:
                raise AttributeError(
                    "Cannot create workspace `{}` in "
                    "project `{}`: `{}`".format(
                        workspace_name, project_name, project.lastError())
                )

        print(
            "Workspace `{}` is successfully created".format(workspace_name))
    def _project_prep(self, project_name):
        """Preparing a project

        In case it does not exist it will create one

        Args:
            project_name (str): project name

        Raises:
            AttributeError: unable to create project
        """
        # test if project exists
        project_exists = self._child_is_in_parent_path(
            "/projects", project_name, "PROJECT")

        if not project_exists:
            volumes = self._get_all_volumes()

            if len(volumes) == 0:
                raise AttributeError(
                    "Not able to create new project. No Volumes existing"
                )

            # check if the volume exists
            if self.volume_name not in volumes:
                raise AttributeError(
                    ("Volume '{}' does not exist '{}'").format(
                        self.volume_name, volumes)
                )

            # form cmd arguments
            project_create_cmd = [
                os.path.join(
                    "/opt/Autodesk/",
                    "wiretap",
                    "tools",
                    "2021",
                    "wiretap_create_node",
                ),
                '-n',
                os.path.join("/volumes", self.volume_name),
                '-d',
                project_name,
                '-g',
            ]

            project_create_cmd.append(self.group_name)

            print(project_create_cmd)

            exit_code = subprocess.call(
                project_create_cmd,
                cwd=os.path.expanduser('~'))

            if exit_code != 0:
                raise RuntimeError("Cannot create project in flame db")

        print(
            "A new project '{}' is created.".format(project_name))
    def _get_all_volumes(self):
        """Request all available volumes from WireTap

        Returns:
            list: all available volumes on the server

        Raises:
            AttributeError: unable to get any volume children from server
        """
        root = WireTapNodeHandle(self._server, "/volumes")
        children_num = WireTapInt(0)

        get_children_num = root.getNumChildren(children_num)
        if not get_children_num:
            raise AttributeError(
                "Cannot get number of volumes: {}".format(root.lastError())
            )

        volumes = []

        # go through all children and get volume names
        child_obj = WireTapNodeHandle()
        for child_idx in range(children_num):

            # get a child
            if not root.getChild(child_idx, child_obj):
                raise AttributeError(
                    "Unable to get child: {}".format(root.lastError()))

            node_name = WireTapStr()
            get_children_name = child_obj.getDisplayName(node_name)

            if not get_children_name:
                raise AttributeError(
                    "Unable to get child name: {}".format(
                        child_obj.lastError())
                )

            volumes.append(node_name.c_str())

        return volumes
    def _user_prep(self, user_name):
        """Ensuring the user exists in the users stack

        Args:
            user_name (str): name of a user

        Raises:
            AttributeError: unable to create user
        """

        # get all used usernames in db
        used_names = self._get_usernames()
        print(">> used_names: {}".format(used_names))

        # filter only those which share the input user name
        filtered_users = [user for user in used_names if user_name in user]

        if filtered_users:
            # todo: need to find the lastly created one, following a regex
            # pattern for the date used in the name
            return filtered_users.pop()

        # create new user name with date as suffix
        now = datetime.datetime.now()  # current date and time
        date = now.strftime("%Y%m%d")
        new_user_name = "{}_{}".format(user_name, date)
        print(new_user_name)

        if not self._child_is_in_parent_path("/users", new_user_name, "USER"):
            # Create the new user
            users = WireTapNodeHandle(self._server, "/users")

            user_node = WireTapNodeHandle()
            created_user = users.createNode(new_user_name, "USER", user_node)
            if not created_user:
                raise AttributeError(
                    "User {} cannot be created: {}".format(
                        new_user_name, users.lastError())
                )

            print("User `{}` is created".format(new_user_name))
        return new_user_name
    def _get_usernames(self):
        """Requesting all available users from WireTap

        Returns:
            list: all available user names

        Raises:
            AttributeError: there are no users on the server
        """
        root = WireTapNodeHandle(self._server, "/users")
        children_num = WireTapInt(0)

        get_children_num = root.getNumChildren(children_num)
        if not get_children_num:
            raise AttributeError(
                "Cannot get number of users: {}".format(root.lastError())
            )

        usernames = []

        # go through all children and get user names
        child_obj = WireTapNodeHandle()
        for child_idx in range(children_num):

            # get a child
            if not root.getChild(child_idx, child_obj):
                raise AttributeError(
                    "Unable to get child: {}".format(root.lastError()))

            node_name = WireTapStr()
            get_children_name = child_obj.getDisplayName(node_name)

            if not get_children_name:
                raise AttributeError(
                    "Unable to get child name: {}".format(
                        child_obj.lastError())
                )

            usernames.append(node_name.c_str())

        return usernames
    def _child_is_in_parent_path(self, parent_path, child_name, child_type):
        """Checking if a given child is in parent path.

        Args:
            parent_path (str): db path to parent
            child_name (str): name of child
            child_type (str): type of child

        Raises:
            AttributeError: Not able to get number of children
            AttributeError: Not able to get children from parent
            AttributeError: Not able to get child name
            AttributeError: Not able to get child type

        Returns:
            bool: True if child is in parent path
        """
        parent = WireTapNodeHandle(self._server, parent_path)

        # get number of children
        children_num = WireTapInt(0)
        requested = parent.getNumChildren(children_num)
        if not requested:
            raise AttributeError((
                "Error: Cannot request number of "
                "children from the node {}. Make sure your "
                "wiretap service is running: {}").format(
                    parent_path, parent.lastError())
            )

        # iterate children
        child_obj = WireTapNodeHandle()
        for child_idx in range(children_num):
            if not parent.getChild(child_idx, child_obj):
                raise AttributeError(
                    "Cannot get child: {}".format(
                        parent.lastError()))

            node_name = WireTapStr()
            node_type = WireTapStr()

            if not child_obj.getDisplayName(node_name):
                raise AttributeError(
                    "Unable to get child name: %s" % child_obj.lastError()
                )
            if not child_obj.getNodeTypeStr(node_type):
                raise AttributeError(
                    "Unable to obtain child type: %s" % child_obj.lastError()
                )

            if (node_name.c_str() == child_name) and (
                    node_type.c_str() == child_type):
                return True

        return False
    def _set_project_settings(self, project_name, project_data):
        """Setting project attributes.

        Args:
            project_name (str): name of project
            project_data (dict): data with project attributes
                (flame compatible)

        Raises:
            AttributeError: Not able to set project attributes
        """
        # generate xml from the project_data dict
        _xml = "<Project>"
        for key, value in project_data.items():
            _xml += "<{}>{}</{}>".format(key, value, key)
        _xml += "</Project>"

        pretty_xml = minidom.parseString(_xml).toprettyxml()
        print("__ xml: {}".format(pretty_xml))

        # set project data to wiretap
        project_node = WireTapNodeHandle(
            self._server, "/projects/{}".format(project_name))

        if not project_node.setMetaData("XML", _xml):
            raise AttributeError(
                "Not able to set project attributes {}. Error: {}".format(
                    project_name, project_node.lastError())
            )

        print("Project settings successfully set.")
    def _set_project_colorspace(self, project_name, color_policy):
        """Set project's colorspace policy.

        Args:
            project_name (str): name of project
            color_policy (str): name of policy

        Raises:
            RuntimeError: Not able to set colorspace policy
        """
        color_policy = color_policy or "Legacy"
        project_colorspace_cmd = [
            os.path.join(
                "/opt/Autodesk/",
                "wiretap",
                "tools",
                "2021",
                "wiretap_duplicate_node",
            ),
            "-s",
            "/syncolor/policies/Autodesk/{}".format(color_policy),
            "-n",
            "/projects/{}/syncolor".format(project_name)
        ]

        print(project_colorspace_cmd)

        exit_code = subprocess.call(
            project_colorspace_cmd,
            cwd=os.path.expanduser('~'))

        if exit_code != 0:
            raise RuntimeError(
                "Cannot set colorspace {} on project {}".format(
                    color_policy, project_name
                ))

if __name__ == "__main__":
    # get json exchange data
    json_path = sys.argv[-1]
    json_data = open(json_path).read()
    in_data = json.loads(json_data)
    out_data = deepcopy(in_data)

    # get main server attributes
    host_name = in_data.pop("host_name")
    volume_name = in_data.pop("volume_name")
    group_name = in_data.pop("group_name")

    # initialize class
    wiretap_handler = WireTapCom(host_name, volume_name, group_name)

    try:
        app_args = wiretap_handler.get_launch_args(
            project_name=in_data.pop("project_name"),
            project_data=in_data.pop("project_data"),
            user_name=in_data.pop("user_name"),
            **in_data
        )
    finally:
        wiretap_handler.close()

    # set returned args back to out data
    out_data.update({
        "app_args": app_args
    })

    # write it back to the exchange json file
    with open(json_path, "w") as file_stream:
        json.dump(out_data, file_stream, indent=4)
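For reference, a sketch of the JSON exchange file this script shares with the prelaunch hook; the keys mirror the code above, while the concrete values are illustrative only:

# Input written by the FlamePrelaunch hook (values are illustrative):
# {
#     "host_name": "localhost",
#     "volume_name": "stonefs",
#     "group_name": "staff",
#     "color_policy": "ACES 1.1",
#     "project_name": "my_project",
#     "user_name": "artist",
#     "project_data": {"Name": "my_project", "FrameWidth": 1920, ...}
# }
# Output rewritten in place by this script: the same content plus the
# launch arguments consumed by the hook:
# {
#     ...,
#     "app_args": [
#         "--start-project=my_project",
#         "--start-user=artist_20211005",
#         "--create-workspace"
#     ]
# }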
191
openpype/hosts/flame/utility_scripts/openpype_in_flame.py
Normal file
@ -0,0 +1,191 @@
from __future__ import print_function
import sys
from Qt import QtWidgets
from pprint import pformat
import atexit
import openpype
import avalon
import openpype.hosts.flame as opflame

flh = sys.modules[__name__]
flh._project = None


def openpype_install():
    """Registering OpenPype in context
    """
    openpype.install()
    avalon.api.install(opflame)
    print("Avalon registered hosts: {}".format(
        avalon.api.registered_host()))


# Exception handler
def exception_handler(exctype, value, _traceback):
    """Exception handler for improving UX

    Args:
        exctype (str): type of exception
        value (str): exception value
        _traceback (str): traceback to show
    """
    import traceback
    msg = "OpenPype: Python exception {} in {}".format(value, exctype)
    mbox = QtWidgets.QMessageBox()
    mbox.setText(msg)
    mbox.setDetailedText(
        pformat(traceback.format_exception(exctype, value, _traceback)))
    mbox.setStyleSheet('QLabel{min-width: 800px;}')
    mbox.exec_()
    sys.__excepthook__(exctype, value, _traceback)


# add exception handler into sys module
sys.excepthook = exception_handler


# register clean up logic to be called at Flame exit
def cleanup():
    """Cleaning up Flame framework context
    """
    if opflame.apps:
        print('`{}` cleaning up apps:\n {}\n'.format(
            __file__, pformat(opflame.apps)))
        while len(opflame.apps):
            app = opflame.apps.pop()
            print('`{}` removing : {}'.format(__file__, app.name))
            del app
        opflame.apps = []

    if opflame.app_framework:
        print('PYTHON\t: %s cleaning up' % opflame.app_framework.bundle_name)
        opflame.app_framework.save_prefs()
        opflame.app_framework = None


atexit.register(cleanup)
def load_apps():
    """Load available apps into the Flame framework
    """
    opflame.apps.append(opflame.FlameMenuProjectConnect(opflame.app_framework))
    opflame.apps.append(opflame.FlameMenuTimeline(opflame.app_framework))
    opflame.app_framework.log.info("Apps are loaded")


def project_changed_dict(info):
    """Hook for the project change action

    Args:
        info (str): info text
    """
    cleanup()


def app_initialized(parent=None):
    """Initialization of the Framework

    Args:
        parent (obj, optional): Parent object. Defaults to None.
    """
    opflame.app_framework = opflame.FlameAppFramework()

    print("{} initializing".format(
        opflame.app_framework.bundle_name))

    load_apps()


"""
Initialisation of the hook starts from here.

First it needs to test if it can import the flame module.
This will happen only in case a project has been loaded.
Then `app_initialized` will load the main Framework which will load
all menu objects as apps.
"""

try:
    import flame  # noqa
    app_initialized(parent=None)
except ImportError:
    print("!!!! not able to import flame module !!!!")


def rescan_hooks():
    import flame  # noqa
    flame.execute_shortcut('Rescan Python Hooks')


def _build_app_menu(app_name):
    """Flame menu object generator

    Args:
        app_name (str): name of the menu object app

    Returns:
        list: menu object
    """
    menu = []

    # first find the matching app by name
    app = None
    for _app in opflame.apps:
        if _app.__class__.__name__ == app_name:
            app = _app

    if app:
        menu.append(app.build_menu())

    if opflame.app_framework:
        menu_auto_refresh = opflame.app_framework.prefs_global.get(
            'menu_auto_refresh', {})
        if menu_auto_refresh.get('timeline_menu', True):
            try:
                import flame  # noqa
                flame.schedule_idle_event(rescan_hooks)
            except ImportError:
                print("!!!! not able to import flame module !!!!")

    return menu


""" Flame hooks start here
"""


def project_saved(project_name, save_time, is_auto_save):
    """Hook activated when the project is saved

    Args:
        project_name (str): name of project
        save_time (str): time when it was saved
        is_auto_save (bool): autosave is on or off
    """
    if opflame.app_framework:
        opflame.app_framework.save_prefs()


def get_main_menu_custom_ui_actions():
    """Hook to create a submenu in the start menu

    Returns:
        list: menu object
    """
    # install openpype and the host
    openpype_install()

    return _build_app_menu("FlameMenuProjectConnect")


def get_timeline_custom_ui_actions():
    """Hook to create a submenu in the timeline

    Returns:
        list: menu object
    """
    # install openpype and the host
    openpype_install()

    return _build_app_menu("FlameMenuTimeline")
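For orientation, Flame's custom UI action hooks conventionally return a list of menu dictionaries; a sketch of the shape `build_menu()` is assumed to produce here, with hypothetical labels and callback:

# Sketch of the conventional Flame custom-ui menu structure; the label and
# callback below are hypothetical, the real entries come from build_menu().
def _example_build_menu():
    def _execute(selection):
        print("menu action clicked on:", selection)

    return {
        "name": "OpenPype",  # submenu label shown in Flame
        "actions": [
            {"name": "Load ...", "execute": _execute},
        ],
    }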
@ -1,8 +1,6 @@
from .pipeline import (
    install,
    uninstall,
    publish,
    launch_workfiles_app
    uninstall
)

from .utils import (

@ -22,12 +20,9 @@ __all__ = [
    # pipeline
    "install",
    "uninstall",
    "publish",
    "launch_workfiles_app",

    # utils
    "setup",
    "get_resolve_module",

    # lib
    "get_additional_data",
@ -3,19 +3,7 @@ import sys

from Qt import QtWidgets, QtCore

from .pipeline import (
    publish,
    launch_workfiles_app
)

from avalon.tools import (
    creator,
    sceneinventory,
)
from openpype.tools import (
    loader,
    libraryloader
)
from openpype.tools.utils import host_tools

from openpype.hosts.fusion.scripts import (
    set_rendermode,
@ -36,7 +24,7 @@ def load_stylesheet():

class Spacer(QtWidgets.QWidget):
    def __init__(self, height, *args, **kwargs):
        super(self.__class__, self).__init__(*args, **kwargs)
        super(Spacer, self).__init__(*args, **kwargs)

        self.setFixedHeight(height)
@ -53,7 +41,7 @@ class Spacer(QtWidgets.QWidget):

class OpenPypeMenu(QtWidgets.QWidget):
    def __init__(self, *args, **kwargs):
        super(self.__class__, self).__init__(*args, **kwargs)
        super(OpenPypeMenu, self).__init__(*args, **kwargs)

        self.setObjectName("OpenPypeMenu")
@ -117,27 +105,27 @@ class OpenPypeMenu(QtWidgets.QWidget):

    def on_workfile_clicked(self):
        print("Clicked Workfile")
        launch_workfiles_app()
        host_tools.show_workfiles()

    def on_create_clicked(self):
        print("Clicked Create")
        creator.show()
        host_tools.show_creator()

    def on_publish_clicked(self):
        print("Clicked Publish")
        publish(None)
        host_tools.show_publish()

    def on_load_clicked(self):
        print("Clicked Load")
        loader.show(use_context=True)
        host_tools.show_loader(use_context=True)

    def on_inventory_clicked(self):
        print("Clicked Inventory")
        sceneinventory.show()
        host_tools.show_scene_inventory()

    def on_libload_clicked(self):
        print("Clicked Library")
        libraryloader.show()
        host_tools.show_library_loader()

    def on_rendernode_clicked(self):
        from avalon import style
@ -3,7 +3,6 @@ Basic avalon integration
"""
import os

from openpype.tools import workfiles
from avalon import api as avalon
from pyblish import api as pyblish
from openpype.api import Logger

@ -98,14 +97,3 @@ def on_pyblish_instance_toggled(instance, new_value, old_value):
        current = attrs["TOOLB_PassThrough"]
        if current != passthrough:
            tool.SetAttrs({"TOOLB_PassThrough": passthrough})


def launch_workfiles_app(*args):
    workdir = os.environ["AVALON_WORKDIR"]
    workfiles.show(workdir)


def publish(parent):
    """Shorthand to publish from within host"""
    from avalon.tools import publish
    return publish.show(parent)
@ -3,17 +3,14 @@
import os
from pathlib import Path
import logging
import re

from openpype import lib
from openpype.api import (get_current_project_settings)
import openpype.hosts.harmony

import pyblish.api

from avalon import io, harmony
import avalon.api
import avalon.tools.sceneinventory


log = logging.getLogger("openpype.hosts.harmony")
@ -2,6 +2,7 @@ import os
import sys
import hiero.core
from openpype.api import Logger
from openpype.tools.utils import host_tools
from avalon.api import Session
from hiero.ui import findMenuAction
@ -41,8 +42,6 @@ def menu_install():
        apply_colorspace_project, apply_colorspace_clips
    )
    # here is the best place to add menu
    from avalon.tools import creator, sceneinventory
    from openpype.tools import loader
    from avalon.vendor.Qt import QtGui

    menu_name = os.environ['AVALON_LABEL']
@ -87,15 +86,15 @@ def menu_install():

    creator_action = menu.addAction("Create ...")
    creator_action.setIcon(QtGui.QIcon("icons:CopyRectangle.png"))
    creator_action.triggered.connect(creator.show)
    creator_action.triggered.connect(host_tools.show_creator)

    loader_action = menu.addAction("Load ...")
    loader_action.setIcon(QtGui.QIcon("icons:CopyRectangle.png"))
    loader_action.triggered.connect(loader.show)
    loader_action.triggered.connect(host_tools.show_loader)

    sceneinventory_action = menu.addAction("Manage ...")
    sceneinventory_action.setIcon(QtGui.QIcon("icons:CopyRectangle.png"))
    sceneinventory_action.triggered.connect(sceneinventory.show)
    sceneinventory_action.triggered.connect(host_tools.show_scene_inventory)
    menu.addSeparator()

    if os.getenv("OPENPYPE_DEVELOP"):
@ -4,13 +4,12 @@ Basic avalon integration
import os
import contextlib
from collections import OrderedDict
from avalon.tools import publish as _publish
from openpype.tools import workfiles
from avalon.pipeline import AVALON_CONTAINER_ID
from avalon import api as avalon
from avalon import schema
from pyblish import api as pyblish
from openpype.api import Logger
from openpype.tools.utils import host_tools
from . import lib, menu, events

log = Logger().get_logger(__name__)

@ -211,15 +210,13 @@ def update_container(track_item, data=None):
def launch_workfiles_app(*args):
    ''' Wrapping function for workfiles launcher '''

    workdir = os.environ["AVALON_WORKDIR"]

    # show workfile gui
    workfiles.show(workdir)
    host_tools.show_workfiles()


def publish(parent):
    """Shorthand to publish from within host"""
    return _publish.show(parent)
    return host_tools.show_publish(parent)


@contextlib.contextmanager
@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
"""Houdini specific Avalon/Pyblish plugin definitions."""
import sys
from avalon.api import CreatorError
from avalon import houdini
import six

@ -8,7 +9,7 @@ import hou
from openpype.api import PypeCreatorMixin


class OpenPypeCreatorError(Exception):
class OpenPypeCreatorError(CreatorError):
    pass
@ -4,8 +4,8 @@ import contextlib

import logging
from Qt import QtCore, QtGui
from avalon.tools.widgets import AssetWidget
from avalon import style
from openpype.tools.utils.widgets import AssetWidget
from avalon import style, io

from pxr import Sdf

@ -31,7 +31,7 @@ def pick_asset(node):
    # Construct the AssetWidget as a frameless popup so it automatically
    # closes when clicked outside of it.
    global tool
    tool = AssetWidget(silo_creatable=False)
    tool = AssetWidget(io)
    tool.setContentsMargins(5, 5, 5, 5)
    tool.setWindowTitle("Pick Asset")
    tool.setStyleSheet(style.load_stylesheet())

@ -41,8 +41,6 @@ def pick_asset(node):
    # Select the current asset if there is any
    name = parm.eval()
    if name:
        from avalon import io

        db_asset = io.find_one({"name": name, "type": "asset"})
        if db_asset:
            silo = db_asset.get("silo")
96
openpype/hosts/houdini/plugins/create/create_hda.py
Normal file
@ -0,0 +1,96 @@
# -*- coding: utf-8 -*-
from openpype.hosts.houdini.api import plugin
from avalon.houdini import lib
from avalon import io
import hou


class CreateHDA(plugin.Creator):
    """Publish Houdini Digital Asset file."""

    name = "hda"
    label = "Houdini Digital Asset (Hda)"
    family = "hda"
    icon = "gears"
    maintain_selection = False

    def __init__(self, *args, **kwargs):
        super(CreateHDA, self).__init__(*args, **kwargs)
        self.data.pop("active", None)

    def _check_existing(self, subset_name):
        # type: (str) -> bool
        """Check if versions of the subset name already exist."""
        # Get all subsets of the current asset
        asset_id = io.find_one({"name": self.data["asset"], "type": "asset"},
                               projection={"_id": True})['_id']
        subset_docs = io.find(
            {
                "type": "subset",
                "parent": asset_id
            }, {"name": 1}
        )
        existing_subset_names = set(subset_docs.distinct("name"))
        existing_subset_names_low = {
            _name.lower() for _name in existing_subset_names
        }
        return subset_name.lower() in existing_subset_names_low

    def _process(self, instance):
        subset_name = self.data["subset"]
        # get selected nodes
        out = hou.node("/obj")
        self.nodes = hou.selectedNodes()

        if (self.options or {}).get("useSelection") and self.nodes:
            # if we have `use selection` enabled and we have some
            # selected nodes ...
            to_hda = self.nodes[0]
            if len(self.nodes) > 1:
                # if there is more than one node, create a subnet first
                subnet = out.createNode(
                    "subnet", node_name="{}_subnet".format(self.name))
                to_hda = subnet
        else:
            # in case of no selection, just create a subnet node
            subnet = out.createNode(
                "subnet", node_name="{}_subnet".format(self.name))
            subnet.moveToGoodPosition()
            to_hda = subnet

        if not to_hda.type().definition():
            # if the node type has no definition, it is not a user
            # created hda. We test if an hda can be created from the node.
            if not to_hda.canCreateDigitalAsset():
                raise Exception(
                    "cannot create hda from node {}".format(to_hda))

            hda_node = to_hda.createDigitalAsset(
                name=subset_name,
                hda_file_name="$HIP/{}.hda".format(subset_name)
            )
            hou.moveNodesTo(self.nodes, hda_node)
            hda_node.layoutChildren()
        else:
            if self._check_existing(subset_name):
                raise plugin.OpenPypeCreatorError(
                    ("subset {} is already published with a different HDA "
                     "definition.").format(subset_name))
            hda_node = to_hda

        hda_node.setName(subset_name)

        # delete the node created by Avalon in /out;
        # this needs to be addressed in a future Houdini workflow refactor.
        hou.node("/out/{}".format(subset_name)).destroy()

        try:
            lib.imprint(hda_node, self.data)
        except hou.OperationFailed:
            raise plugin.OpenPypeCreatorError(
                ("Cannot set metadata on asset. Might be that it already is "
                 "an OpenPype asset.")
            )

        return hda_node
62
openpype/hosts/houdini/plugins/load/load_hda.py
Normal file
@ -0,0 +1,62 @@
# -*- coding: utf-8 -*-
from avalon import api

from avalon.houdini import pipeline


class HdaLoader(api.Loader):
    """Load Houdini Digital Asset file."""

    families = ["hda"]
    label = "Load Hda"
    representations = ["hda"]
    order = -10
    icon = "code-fork"
    color = "orange"

    def load(self, context, name=None, namespace=None, data=None):
        import os
        import hou

        # Format file name, Houdini only wants forward slashes
        file_path = os.path.normpath(self.fname)
        file_path = file_path.replace("\\", "/")

        # Get the root node
        obj = hou.node("/obj")

        # Create a unique name
        counter = 1
        namespace = namespace or context["asset"]["name"]
        formatted = "{}_{}".format(namespace, name) if namespace else name
        node_name = "{0}_{1:03d}".format(formatted, counter)

        hou.hda.installFile(file_path)
        hda_node = obj.createNode(name, node_name)

        self[:] = [hda_node]

        return pipeline.containerise(
            node_name,
            namespace,
            [hda_node],
            context,
            self.__class__.__name__,
            suffix="",
        )

    def update(self, container, representation):
        import hou

        hda_node = container["node"]
        file_path = api.get_representation_path(representation)
        file_path = file_path.replace("\\", "/")
        hou.hda.installFile(file_path)
        defs = hda_node.type().allInstalledDefinitions()
        def_paths = [d.libraryFilePath() for d in defs]
        new = def_paths.index(file_path)
        defs[new].setIsPreferred(True)

    def remove(self, container):
        node = container["node"]
        node.destroy()
@ -23,8 +23,10 @@ class CollectInstanceActiveState(pyblish.api.InstancePlugin):
            return

        # Check bypass state and reverse
        active = True
        node = instance[0]
        active = not node.isBypassed()
        if hasattr(node, "isBypassed"):
            active = not node.isBypassed()

        # Set instance active state
        instance.data.update(
@ -31,6 +31,7 @@ class CollectInstances(pyblish.api.ContextPlugin):
    def process(self, context):

        nodes = hou.node("/out").children()
        nodes += hou.node("/obj").children()

        # Include instances in USD stage only when it exists so it
        # remains backwards compatible with version before houdini 18

@ -49,9 +50,12 @@
            has_family = node.evalParm("family")
            assert has_family, "'%s' is missing 'family'" % node.name()

            self.log.info("processing {}".format(node))

            data = lib.read(node)
            # Check bypass state and reverse
            data.update({"active": not node.isBypassed()})
            if hasattr(node, "isBypassed"):
                data.update({"active": not node.isBypassed()})

            # temporary translation of `active` to `publish` till issue has
            # been resolved, https://github.com/pyblish/pyblish-base/issues/307
43
openpype/hosts/houdini/plugins/publish/extract_hda.py
Normal file
@ -0,0 +1,43 @@
# -*- coding: utf-8 -*-
import os

from pprint import pformat

import pyblish.api
import openpype.api


class ExtractHDA(openpype.api.Extractor):

    order = pyblish.api.ExtractorOrder
    label = "Extract HDA"
    hosts = ["houdini"]
    families = ["hda"]

    def process(self, instance):
        self.log.info(pformat(instance.data))
        hda_node = instance[0]
        hda_def = hda_node.type().definition()
        hda_options = hda_def.options()
        hda_options.setSaveInitialParmsAndContents(True)

        next_version = instance.data["anatomyData"]["version"]
        self.log.info("setting version: {}".format(next_version))
        hda_def.setVersion(str(next_version))
        hda_def.setOptions(hda_options)
        hda_def.save(hda_def.libraryFilePath(), hda_node, hda_options)

        if "representations" not in instance.data:
            instance.data["representations"] = []

        file = os.path.basename(hda_def.libraryFilePath())
        staging_dir = os.path.dirname(hda_def.libraryFilePath())
        self.log.info("Using HDA from {}".format(hda_def.libraryFilePath()))

        representation = {
            'name': 'hda',
            'ext': 'hda',
            'files': file,
            "stagingDir": staging_dir,
        }
        instance.data["representations"].append(representation)
@ -35,5 +35,5 @@ class ValidateBypassed(pyblish.api.InstancePlugin):
    def get_invalid(cls, instance):

        rop = instance[0]
        if rop.isBypassed():
        if hasattr(rop, "isBypassed") and rop.isBypassed():
            return [rop]
@ -7,24 +7,30 @@
  <scriptItem id="avalon_create">
    <label>Create ...</label>
    <scriptCode><![CDATA[
from avalon.tools import creator
creator.show()
import hou
from openpype.tools.utils import host_tools
parent = hou.qt.mainWindow()
host_tools.show_creator(parent)
]]></scriptCode>
  </scriptItem>

  <scriptItem id="avalon_load">
    <label>Load ...</label>
    <scriptCode><![CDATA[
from openpype.tools import loader
loader.show(use_context=True)
import hou
from openpype.tools.utils import host_tools
parent = hou.qt.mainWindow()
host_tools.show_loader(parent=parent, use_context=True)
]]></scriptCode>
  </scriptItem>

  <scriptItem id="avalon_manage">
    <label>Manage ...</label>
    <scriptCode><![CDATA[
from avalon.tools import cbsceneinventory
cbsceneinventory.show()
import hou
from openpype.tools.utils import host_tools
parent = hou.qt.mainWindow()
host_tools.show_scene_inventory(parent)
]]></scriptCode>
  </scriptItem>

@ -32,9 +38,9 @@ cbsceneinventory.show()
    <label>Publish ...</label>
    <scriptCode><![CDATA[
import hou
from avalon.tools import publish
from openpype.tools.utils import host_tools
parent = hou.qt.mainWindow()
publish.show(parent)
host_tools.show_publish(parent)
]]></scriptCode>
  </scriptItem>

@ -43,9 +49,10 @@ publish.show(parent)
  <scriptItem id="workfiles">
    <label>Work Files ...</label>
    <scriptCode><![CDATA[
import hou, os
from openpype.tools import workfiles
workfiles.show(os.environ["AVALON_WORKDIR"])
import hou
from openpype.tools.utils import host_tools
parent = hou.qt.mainWindow()
host_tools.show_workfiles(parent)
]]></scriptCode>
  </scriptItem>
9
openpype/hosts/houdini/startup/scripts/houdinicore.py
Normal file
@ -0,0 +1,9 @@
from avalon import api, houdini


def main():
    print("Installing OpenPype ...")
    api.install(houdini)


main()
@ -8,11 +8,12 @@ from avalon import api as avalon
from avalon import pipeline
from avalon.maya import suspended_refresh
from avalon.maya.pipeline import IS_HEADLESS
from openpype.tools import workfiles
from openpype.tools.utils import host_tools
from pyblish import api as pyblish
from openpype.lib import any_outdated
import openpype.hosts.maya
from openpype.hosts.maya.lib import copy_workspace_mel
from openpype.lib.path_tools import HostDirmap
from . import menu, lib

log = logging.getLogger("openpype.hosts.maya")

@ -30,7 +31,8 @@ def install():

    project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
    # process path mapping
    process_dirmap(project_settings)
    dirmap_processor = MayaDirmap("maya", project_settings)
    dirmap_processor.process_dirmap()

    pyblish.register_plugin_path(PUBLISH_PATH)
    avalon.register_plugin_path(avalon.Loader, LOAD_PATH)

@ -60,40 +62,6 @@ def install():
    avalon.data["familiesStateToggled"] = ["imagesequence"]


def process_dirmap(project_settings):
    # type: (dict) -> None
    """Go through all paths in Settings and set them using `dirmap`.

    Args:
        project_settings (dict): Settings for current project.

    """
    if not project_settings["maya"].get("maya-dirmap"):
        return
    mapping = project_settings["maya"]["maya-dirmap"]["paths"] or {}
    mapping_enabled = project_settings["maya"]["maya-dirmap"]["enabled"]
    if not mapping or not mapping_enabled:
        return
    if mapping.get("source-path") and mapping_enabled is True:
        log.info("Processing directory mapping ...")
        cmds.dirmap(en=True)
    for k, sp in enumerate(mapping["source-path"]):
        try:
            print("{} -> {}".format(sp, mapping["destination-path"][k]))
            cmds.dirmap(m=(sp, mapping["destination-path"][k]))
            cmds.dirmap(m=(mapping["destination-path"][k], sp))
        except IndexError:
            # missing corresponding destination path
            log.error(("invalid dirmap mapping, missing corresponding"
                       " destination directory."))
            break
        except RuntimeError:
            log.error("invalid path {} -> {}, mapping not registered".format(
                sp, mapping["destination-path"][k]
            ))
            continue


def uninstall():
    pyblish.deregister_plugin_path(PUBLISH_PATH)
    avalon.deregister_plugin_path(avalon.Loader, LOAD_PATH)

@ -138,16 +106,12 @@ def on_init(_):
    launch_workfiles = os.environ.get("WORKFILES_STARTUP")

    if launch_workfiles:
        safe_deferred(launch_workfiles_app)
        safe_deferred(host_tools.show_workfiles)

    if not IS_HEADLESS:
        safe_deferred(override_toolbox_ui)


def launch_workfiles_app():
    workfiles.show(os.environ["AVALON_WORKDIR"])


def on_before_save(return_code, _):
    """Run validation for scene's FPS prior to saving"""
    return lib.validate_fps()

@ -209,8 +173,7 @@ def on_open(_):

    # Show outdated pop-up
    def _on_show_inventory():
        import avalon.tools.sceneinventory as tool
        tool.show(parent=parent)
        host_tools.show_scene_inventory(parent=parent)

    dialog = popup.Popup(parent=parent)
    dialog.setWindowTitle("Maya scene has outdated content")

@ -243,9 +206,15 @@ def on_task_changed(*args):
    lib.set_context_settings()
    lib.update_content_on_context_change()

    msg = " project: {}\n asset: {}\n task: {}".format(
        avalon.Session["AVALON_PROJECT"],
        avalon.Session["AVALON_ASSET"],
        avalon.Session["AVALON_TASK"]
    )

    lib.show_message(
        "Context was changed",
        ("Context was changed to {}".format(avalon.Session["AVALON_ASSET"])),
        ("Context was changed to:\n{}".format(msg)),
    )


@ -255,3 +224,12 @@ def before_workfile_save(workfile_path):

    workdir = os.path.dirname(workfile_path)
    copy_workspace_mel(workdir)


class MayaDirmap(HostDirmap):
    def on_enable_dirmap(self):
        cmds.dirmap(en=True)

    def dirmap_routine(self, source_path, destination_path):
        cmds.dirmap(m=(source_path, destination_path))
        cmds.dirmap(m=(destination_path, source_path))
@ -1,10 +1,16 @@
"""A set of commands that install overrides to Maya's UI"""

import os
import logging

from functools import partial

import maya.cmds as mc
import maya.mel as mel
from functools import partial
import os
import logging

from avalon.maya import pipeline
from openpype.api import resources
from openpype.tools.utils import host_tools


log = logging.getLogger(__name__)

@ -69,39 +75,8 @@ def override_component_mask_commands():

def override_toolbox_ui():
    """Add custom buttons in Toolbox as replacement for Maya web help icon."""
    inventory = None
    loader = None
    launch_workfiles_app = None
    mayalookassigner = None
    try:
        import avalon.tools.sceneinventory as inventory
    except Exception:
        log.warning("Could not import SceneInventory tool")

    try:
        import openpype.tools.loader as loader
    except Exception:
        log.warning("Could not import Loader tool")

    try:
        from avalon.maya.pipeline import launch_workfiles_app
    except Exception:
        log.warning("Could not import Workfiles tool")

    try:
        from openpype.tools import mayalookassigner
    except Exception:
        log.warning("Could not import Maya Look assigner tool")

    from openpype.api import resources

    icons = resources.get_resource("icons")

    if not any((
        mayalookassigner, launch_workfiles_app, loader, inventory
    )):
        return

    # Ensure the maya web icon on toolbox exists
    web_button = "ToolBox|MainToolboxLayout|mayaWebButton"
    if not mc.iconTextButton(web_button, query=True, exists=True):

@ -120,14 +95,23 @@ def override_toolbox_ui():
    # Create our controls
    background_color = (0.267, 0.267, 0.267)
    controls = []
    if mayalookassigner:
    look_assigner = None
    try:
        look_assigner = host_tools.get_tool_by_name(
            "lookassigner",
            parent=pipeline._parent
        )
    except Exception:
        log.warning("Couldn't create Look assigner window.", exc_info=True)

    if look_assigner is not None:
        controls.append(
            mc.iconTextButton(
                "pype_toolbox_lookmanager",
                annotation="Look Manager",
                label="Look Manager",
                image=os.path.join(icons, "lookmanager.png"),
                command=lambda: mayalookassigner.show(),
                command=host_tools.show_look_assigner,
                bgc=background_color,
                width=icon_size,
                height=icon_size,

@ -135,50 +119,53 @@ def override_toolbox_ui():
            )
        )

    if launch_workfiles_app:
        controls.append(
            mc.iconTextButton(
                "pype_toolbox_workfiles",
                annotation="Work Files",
                label="Work Files",
                image=os.path.join(icons, "workfiles.png"),
                command=lambda: launch_workfiles_app(),
                bgc=background_color,
                width=icon_size,
                height=icon_size,
                parent=parent
            )
    controls.append(
        mc.iconTextButton(
            "pype_toolbox_workfiles",
            annotation="Work Files",
            label="Work Files",
            image=os.path.join(icons, "workfiles.png"),
            command=lambda: host_tools.show_workfiles(
                parent=pipeline._parent
            ),
            bgc=background_color,
            width=icon_size,
            height=icon_size,
            parent=parent
        )
    )

    if loader:
        controls.append(
            mc.iconTextButton(
                "pype_toolbox_loader",
                annotation="Loader",
                label="Loader",
                image=os.path.join(icons, "loader.png"),
                command=lambda: loader.show(use_context=True),
                bgc=background_color,
                width=icon_size,
                height=icon_size,
                parent=parent
            )
    controls.append(
        mc.iconTextButton(
            "pype_toolbox_loader",
            annotation="Loader",
            label="Loader",
            image=os.path.join(icons, "loader.png"),
            command=lambda: host_tools.show_loader(
                parent=pipeline._parent, use_context=True
            ),
            bgc=background_color,
            width=icon_size,
            height=icon_size,
            parent=parent
        )
    )

    if inventory:
        controls.append(
            mc.iconTextButton(
                "pype_toolbox_manager",
                annotation="Inventory",
                label="Inventory",
                image=os.path.join(icons, "inventory.png"),
                command=lambda: inventory.show(),
                bgc=background_color,
                width=icon_size,
                height=icon_size,
                parent=parent
            )
    controls.append(
        mc.iconTextButton(
            "pype_toolbox_manager",
            annotation="Inventory",
            label="Inventory",
            image=os.path.join(icons, "inventory.png"),
            command=lambda: host_tools.show_scene_inventory(
                parent=pipeline._parent
            ),
            bgc=background_color,
            width=icon_size,
            height=icon_size,
            parent=parent
        )
    )

    # Add the buttons on the bottom and stack
    # them above each other with side padding
@ -2,6 +2,7 @@

import re
import os
import platform
import uuid
import math

@ -22,6 +23,7 @@ import avalon.maya.lib
import avalon.maya.interactive

from openpype import lib
from openpype.api import get_anatomy_settings


log = logging.getLogger(__name__)

@ -437,7 +439,8 @@ def empty_sets(sets, force=False):
            cmds.connectAttr(src, dest)

    # Restore original members
    for origin_set, members in original.iteritems():
    _iteritems = getattr(original, "iteritems", original.items)
    for origin_set, members in _iteritems():
        cmds.sets(members, forceElement=origin_set)


@ -581,7 +584,7 @@ def get_shader_assignments_from_shapes(shapes, components=True):

    # Build a mapping from parent to shapes to include in lookup.
    transforms = {shape.rsplit("|", 1)[0]: shape for shape in shapes}
    lookup = set(shapes + transforms.keys())
    lookup = set(shapes) | set(transforms.keys())

    component_assignments = defaultdict(list)
    for shading_group in assignments.keys():

@ -669,7 +672,8 @@ def displaySmoothness(nodes,
        yield
    finally:
        # Revert state
        for node, state in originals.iteritems():
        _iteritems = getattr(originals, "iteritems", originals.items)
        for node, state in _iteritems():
            if state:
                cmds.displaySmoothness(node, **state)


@ -712,7 +716,8 @@ def no_display_layers(nodes):
        yield
    finally:
        # Restore original members
        for layer, members in original.iteritems():
        _iteritems = getattr(original, "iteritems", original.items)
        for layer, members in _iteritems():
            cmds.editDisplayLayerMembers(layer, members, noRecurse=True)
@ -1819,7 +1824,7 @@ def set_scene_fps(fps, update=True):
    cmds.file(modified=True)


def set_scene_resolution(width, height):
def set_scene_resolution(width, height, pixelAspect):
    """Set the render resolution

    Args:

@ -1847,6 +1852,36 @@ def set_scene_resolution(width, height):
    cmds.setAttr("%s.width" % control_node, width)
    cmds.setAttr("%s.height" % control_node, height)

    deviceAspectRatio = ((float(width) / float(height)) * float(pixelAspect))
    cmds.setAttr("%s.deviceAspectRatio" % control_node, deviceAspectRatio)
    cmds.setAttr("%s.pixelAspect" % control_node, pixelAspect)


def reset_scene_resolution():
    """Apply the scene resolution from the project definition

    Scene resolution can be overwritten by an asset if the asset.data
    contains any information regarding scene resolution.

    Returns:
        None
    """

    project_doc = io.find_one({"type": "project"})
    project_data = project_doc["data"]
    asset_data = lib.get_asset()["data"]

    # Set project resolution
    width_key = "resolutionWidth"
    height_key = "resolutionHeight"
    pixelAspect_key = "pixelAspect"

    width = asset_data.get(width_key, project_data.get(width_key, 1920))
    height = asset_data.get(height_key, project_data.get(height_key, 1080))
    pixelAspect = asset_data.get(pixelAspect_key,
                                 project_data.get(pixelAspect_key, 1))

    set_scene_resolution(width, height, pixelAspect)


def set_context_settings():
    """Apply the project settings from the project definition

@ -1873,18 +1908,14 @@ def set_context_settings():
    api.Session["AVALON_FPS"] = str(fps)
    set_scene_fps(fps)

    # Set project resolution
    width_key = "resolutionWidth"
    height_key = "resolutionHeight"

    width = asset_data.get(width_key, project_data.get(width_key, 1920))
    height = asset_data.get(height_key, project_data.get(height_key, 1080))

    set_scene_resolution(width, height)
    reset_scene_resolution()

    # Set frame range.
    avalon.maya.interactive.reset_frame_range()

    # Set colorspace
    set_colorspace()


# Valid FPS
def validate_fps():
@ -2152,10 +2183,11 @@ def load_capture_preset(data=None):
|
|||
for key in preset['Display Options']:
|
||||
if key.startswith('background'):
|
||||
disp_options[key] = preset['Display Options'][key]
|
||||
disp_options[key][0] = (float(disp_options[key][0])/255)
|
||||
disp_options[key][1] = (float(disp_options[key][1])/255)
|
||||
disp_options[key][2] = (float(disp_options[key][2])/255)
|
||||
disp_options[key].pop()
|
||||
if len(disp_options[key]) == 4:
|
||||
disp_options[key][0] = (float(disp_options[key][0])/255)
|
||||
disp_options[key][1] = (float(disp_options[key][1])/255)
|
||||
disp_options[key][2] = (float(disp_options[key][2])/255)
|
||||
disp_options[key].pop()
|
||||
else:
|
||||
disp_options['displayGradient'] = True
|
||||
|
||||
|
|
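The guard added above only normalizes background colors still stored as 8-bit RGBA; a sketch of what it does to a single value (the input list is illustrative):

    # RGBA stored as 0-255 floats plus alpha; scale RGB to 0-1 and
    # drop the trailing alpha component, but only for 4-item lists.
    color = [128.0, 64.0, 255.0, 1.0]
    if len(color) == 4:
        color = [float(c) / 255 for c in color[:3]]
    print(color)  # [0.50196..., 0.25098..., 1.0]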
@@ -2740,3 +2772,49 @@ def iter_shader_edits(relationships, shader_nodes, nodes_by_id, label=None):
                 "uuid": data["uuid"],
                 "nodes": nodes,
                 "attributes": attr_value}
+
+
+def set_colorspace():
+    """Set Colorspace from project configuration
+    """
+    project_name = os.getenv("AVALON_PROJECT")
+    imageio = get_anatomy_settings(project_name)["imageio"]["maya"]
+    root_dict = imageio["colorManagementPreference"]
+
+    if not isinstance(root_dict, dict):
+        msg = "set_colorspace(): argument should be dictionary"
+        log.error(msg)
+
+    log.debug(">> root_dict: {}".format(root_dict))
+
+    # first enable color management
+    cmds.colorManagementPrefs(e=True, cmEnabled=True)
+    cmds.colorManagementPrefs(e=True, ocioRulesEnabled=True)
+
+    # second set config path
+    if root_dict.get("configFilePath"):
+        unresolved_path = root_dict["configFilePath"]
+        ocio_paths = unresolved_path[platform.system().lower()]
+
+        resolved_path = None
+        for ocio_p in ocio_paths:
+            resolved_path = str(ocio_p).format(**os.environ)
+            if not os.path.exists(resolved_path):
+                continue
+
+        if resolved_path:
+            filepath = str(resolved_path).replace("\\", "/")
+            cmds.colorManagementPrefs(e=True, configFilePath=filepath)
+            cmds.colorManagementPrefs(e=True, cmConfigFileEnabled=True)
+            log.debug("maya '{}' changed to: {}".format(
+                "configFilePath", resolved_path))
+            root_dict.pop("configFilePath")
+        else:
+            cmds.colorManagementPrefs(e=True, cmConfigFileEnabled=False)
+            cmds.colorManagementPrefs(e=True, configFilePath="" )
+
+    # third set rendering space and view transform
+    renderSpace = root_dict["renderSpace"]
+    cmds.colorManagementPrefs(e=True, renderingSpaceName=renderSpace)
+    viewTransform = root_dict["viewTransform"]
+    cmds.colorManagementPrefs(e=True, viewTransformName=viewTransform)
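One detail of `set_colorspace()` deserves a closer look: the config path setting is a per-platform list of path templates expanded against the environment. A hedged sketch of that resolution step (the dictionary keys and paths here are assumptions, not shipped defaults); note that the committed loop `continue`s past missing paths but keeps the last formatted value either way, whereas this sketch stops at the first existing config:

    import os
    import platform

    os.environ.setdefault("OPENPYPE_ROOT", "/opt/openpype")  # assumption for the sketch
    unresolved = {  # illustrative per-platform template lists
        "windows": ["{OPENPYPE_ROOT}/configs/maya/config.ocio"],
        "linux": ["{OPENPYPE_ROOT}/configs/maya/config.ocio"],
        "darwin": ["{OPENPYPE_ROOT}/configs/maya/config.ocio"],
    }

    resolved_path = None
    for candidate in unresolved[platform.system().lower()]:
        candidate = str(candidate).format(**os.environ)  # expand env vars
        if os.path.exists(candidate):
            resolved_path = candidate
            break  # stop at the first existing config
    print(resolved_path)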
========================================================================

@@ -114,6 +114,8 @@ class RenderProduct(object):
     aov = attr.ib(default=None)  # source aov
     driver = attr.ib(default=None)  # source driver
     multipart = attr.ib(default=False)  # multichannel file
+    camera = attr.ib(default=None)  # used only when rendering
+                                    # from multiple cameras


 def get(layer, render_instance=None):

@@ -183,6 +185,16 @@ class ARenderProducts:
         self.layer_data = self._get_layer_data()
         self.layer_data.products = self.get_render_products()

+    def has_camera_token(self):
+        # type: () -> bool
+        """Check if camera token is in image prefix.
+
+        Returns:
+            bool: True/False if camera token is present.
+
+        """
+        return "<camera>" in self.layer_data.filePrefix.lower()
+
     @abstractmethod
     def get_render_products(self):
         """To be implemented by renderer class.
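The new `has_camera_token()` is a simple case-insensitive substring test; a standalone sketch (the prefix string is illustrative):

    class _LayerData(object):  # stand-in for LayerMetadata
        filePrefix = "maya/<Scene>/<RenderLayer>/<Camera>_<RenderPass>"

    def has_camera_token(layer_data):
        # lowercasing first makes "<Camera>" and "<camera>" both match
        return "<camera>" in layer_data.filePrefix.lower()

    print(has_camera_token(_LayerData()))  # True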
@@ -307,7 +319,7 @@ class ARenderProducts:
         # Deadline allows submitting renders with a custom frame list
         # to support those cases we might want to allow 'custom frames'
         # to be overridden to `ExpectFiles` class?
-        layer_data = LayerMetadata(
+        return LayerMetadata(
             frameStart=int(self.get_render_attribute("startFrame")),
             frameEnd=int(self.get_render_attribute("endFrame")),
             frameStep=int(self.get_render_attribute("byFrameStep")),

@@ -321,7 +333,6 @@ class ARenderProducts:
             defaultExt=self._get_attr("defaultRenderGlobals.imfPluginKey"),
             filePrefix=file_prefix
         )
-        return layer_data

     def _generate_file_sequence(
             self, layer_data,

@@ -330,7 +341,7 @@ class ARenderProducts:
             force_cameras=None):
         # type: (LayerMetadata, str, str, list) -> list
         expected_files = []
-        cameras = force_cameras if force_cameras else layer_data.cameras
+        cameras = force_cameras or layer_data.cameras
         ext = force_ext or layer_data.defaultExt
         for cam in cameras:
             file_prefix = layer_data.filePrefix

@@ -361,8 +372,8 @@ class ARenderProducts:
             )
         return expected_files

-    def get_files(self, product, camera):
-        # type: (RenderProduct, str) -> list
+    def get_files(self, product):
+        # type: (RenderProduct) -> list
         """Return list of expected files.

         It will translate render token strings ('<RenderPass>', etc.) to

@@ -373,7 +384,6 @@ class ARenderProducts:
         Args:
             product (RenderProduct): Render product to be used for file
                 generation.
-            camera (str): Camera name.

         Returns:
             List of files

@@ -383,7 +393,7 @@ class ARenderProducts:
             self.layer_data,
             force_aov_name=product.productName,
             force_ext=product.ext,
-            force_cameras=[camera]
+            force_cameras=[product.camera]
         )

     def get_renderable_cameras(self):

@@ -460,15 +470,21 @@ class RenderProductsArnold(ARenderProducts):
         return prefix

-    def _get_aov_render_products(self, aov):
+    def _get_aov_render_products(self, aov, cameras=None):
         """Return all render products for the AOV"""

-        products = list()
+        products = []
         aov_name = self._get_attr(aov, "name")
         ai_drivers = cmds.listConnections("{}.outputs".format(aov),
                                           source=True,
                                           destination=False,
                                           type="aiAOVDriver") or []
+        if not cameras:
+            cameras = [
+                self.sanitize_camera_name(
+                    self.get_renderable_cameras()[0]
+                )
+            ]

         for ai_driver in ai_drivers:
             # todo: check aiAOVDriver.prefix as it could have

@@ -497,30 +513,37 @@ class RenderProductsArnold(ARenderProducts):
                 name = "beauty"

             # Support Arnold light groups for AOVs
-            # Global AOV: When disabled the main layer is not written: `{pass}`
+            # Global AOV: When disabled the main layer is
+            #             not written: `{pass}`
             # All Light Groups: When enabled, a `{pass}_lgroups` file is
-            #                   written and is always merged into a single file
-            # Light Groups List: When set, a product per light group is written
+            #                   written and is always merged into a
+            #                   single file
+            # Light Groups List: When set, a product per light
+            #                    group is written
             #                    e.g. {pass}_front, {pass}_rim
             global_aov = self._get_attr(aov, "globalAov")
             if global_aov:
-                product = RenderProduct(productName=name,
-                                        ext=ext,
-                                        aov=aov_name,
-                                        driver=ai_driver)
-                products.append(product)
+                for camera in cameras:
+                    product = RenderProduct(productName=name,
+                                            ext=ext,
+                                            aov=aov_name,
+                                            driver=ai_driver,
+                                            camera=camera)
+                    products.append(product)

             all_light_groups = self._get_attr(aov, "lightGroups")
             if all_light_groups:
                 # All light groups is enabled. A single multipart
                 # Render Product
-                product = RenderProduct(productName=name + "_lgroups",
-                                        ext=ext,
-                                        aov=aov_name,
-                                        driver=ai_driver,
-                                        # Always multichannel output
-                                        multipart=True)
-                products.append(product)
+                for camera in cameras:
+                    product = RenderProduct(productName=name + "_lgroups",
+                                            ext=ext,
+                                            aov=aov_name,
+                                            driver=ai_driver,
+                                            # Always multichannel output
+                                            multipart=True,
+                                            camera=camera)
+                    products.append(product)
             else:
                 value = self._get_attr(aov, "lightGroupsList")
                 if not value:

@@ -529,11 +552,15 @@ class RenderProductsArnold(ARenderProducts):
                 for light_group in selected_light_groups:
                     # Render Product per selected light group
                     aov_light_group_name = "{}_{}".format(name, light_group)
-                    product = RenderProduct(productName=aov_light_group_name,
-                                            aov=aov_name,
-                                            driver=ai_driver,
-                                            ext=ext)
-                    products.append(product)
+                    for camera in cameras:
+                        product = RenderProduct(
+                            productName=aov_light_group_name,
+                            aov=aov_name,
+                            driver=ai_driver,
+                            ext=ext,
+                            camera=camera
+                        )
+                        products.append(product)

         return products

@@ -556,17 +583,26 @@ class RenderProductsArnold(ARenderProducts):
             # anyway.
             return []

-        default_ext = self._get_attr("defaultRenderGlobals.imfPluginKey")
-        beauty_product = RenderProduct(productName="beauty",
-                                       ext=default_ext,
-                                       driver="defaultArnoldDriver")
+        # check if camera token is in prefix. If so, and we have list of
+        # renderable cameras, generate render product for each and every
+        # of them.
+        cameras = [
+            self.sanitize_camera_name(c)
+            for c in self.get_renderable_cameras()
+        ]
+
+        default_ext = self._get_attr("defaultRenderGlobals.imfPluginKey")
+        beauty_products = [RenderProduct(
+            productName="beauty",
+            ext=default_ext,
+            driver="defaultArnoldDriver",
+            camera=camera) for camera in cameras]
         # AOVs > Legacy > Maya Render View > Mode
         aovs_enabled = bool(
             self._get_attr("defaultArnoldRenderOptions.aovMode")
         )
         if not aovs_enabled:
-            return [beauty_product]
+            return beauty_products

         # Common > File Output > Merge AOVs or <RenderPass>
         # We don't need to check for Merge AOVs due to overridden

@@ -575,8 +611,9 @@ class RenderProductsArnold(ARenderProducts):
             "<renderpass>" in self.layer_data.filePrefix.lower()
         )
         if not has_renderpass_token:
-            beauty_product.multipart = True
-            return [beauty_product]
+            for product in beauty_products:
+                product.multipart = True
+            return beauty_products

         # AOVs are set to be rendered separately. We should expect
         # <RenderPass> token in path.

@@ -598,14 +635,14 @@ class RenderProductsArnold(ARenderProducts):
                 continue

             # For now stick to the legacy output format.
-            aov_products = self._get_aov_render_products(aov)
+            aov_products = self._get_aov_render_products(aov, cameras)
             products.extend(aov_products)

-        if not any(product.aov == "RGBA" for product in products):
+        if all(product.aov != "RGBA" for product in products):
             # Append default 'beauty' as this is arnolds default.
             # However, it is excluded whenever a RGBA pass is enabled.
             # For legibility add the beauty layer as first entry
-            products.insert(0, beauty_product)
+            products += beauty_products

         # TODO: Output Denoising AOVs?
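All of the renderer-specific changes in this file follow the same shape: each place that used to append a single `RenderProduct` now loops over the sanitized renderable cameras and appends one product per camera. A reduced sketch of that expansion (the product is modeled as a plain dict here):

    import itertools

    cameras = ["persp", "renderCam"]   # sanitized renderable cameras
    aov_names = ["beauty", "RGBA"]     # enabled AOVs

    products = [
        {"productName": aov, "camera": camera}
        for aov, camera in itertools.product(aov_names, cameras)
    ]
    print(len(products))  # 4: one product per (AOV, camera) pair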
@@ -670,6 +707,11 @@ class RenderProductsVray(ARenderProducts):
             # anyway.
             return []

+        cameras = [
+            self.sanitize_camera_name(c)
+            for c in self.get_renderable_cameras()
+        ]
+
         image_format_str = self._get_attr("vraySettings.imageFormatStr")
         default_ext = image_format_str
         if default_ext in {"exr (multichannel)", "exr (deep)"}:

@@ -680,13 +722,21 @@ class RenderProductsVray(ARenderProducts):
         # add beauty as default when not disabled
         dont_save_rgb = self._get_attr("vraySettings.dontSaveRgbChannel")
         if not dont_save_rgb:
-            products.append(RenderProduct(productName="", ext=default_ext))
+            for camera in cameras:
+                products.append(
+                    RenderProduct(productName="",
+                                  ext=default_ext,
+                                  camera=camera))

         # separate alpha file
         separate_alpha = self._get_attr("vraySettings.separateAlpha")
         if separate_alpha:
-            products.append(RenderProduct(productName="Alpha",
-                                          ext=default_ext))
+            for camera in cameras:
+                products.append(
+                    RenderProduct(productName="Alpha",
+                                  ext=default_ext,
+                                  camera=camera)
+                )

         if image_format_str == "exr (multichannel)":
             # AOVs are merged in m-channel file, only main layer is rendered

@@ -716,19 +766,23 @@ class RenderProductsVray(ARenderProducts):
                 # instead seems to output multiple Render Products,
                 # specifically "Self_Illumination" and "Environment"
                 product_names = ["Self_Illumination", "Environment"]
-                for name in product_names:
-                    product = RenderProduct(productName=name,
-                                            ext=default_ext,
-                                            aov=aov)
-                    products.append(product)
+                for camera in cameras:
+                    for name in product_names:
+                        product = RenderProduct(productName=name,
+                                                ext=default_ext,
+                                                aov=aov,
+                                                camera=camera)
+                        products.append(product)
                 # Continue as we've processed this special case AOV
                 continue

             aov_name = self._get_vray_aov_name(aov)
-            product = RenderProduct(productName=aov_name,
-                                    ext=default_ext,
-                                    aov=aov)
-            products.append(product)
+            for camera in cameras:
+                product = RenderProduct(productName=aov_name,
+                                        ext=default_ext,
+                                        aov=aov,
+                                        camera=camera)
+                products.append(product)

         return products

@@ -875,6 +929,11 @@ class RenderProductsRedshift(ARenderProducts):
             # anyway.
             return []

+        cameras = [
+            self.sanitize_camera_name(c)
+            for c in self.get_renderable_cameras()
+        ]
+
         # For Redshift we don't directly return upon forcing multilayer
         # due to some AOVs still being written into separate files,
         # like Cryptomatte.

@@ -933,11 +992,14 @@ class RenderProductsRedshift(ARenderProducts):
                 for light_group in light_groups:
                     aov_light_group_name = "{}_{}".format(aov_name,
                                                           light_group)
-                    product = RenderProduct(productName=aov_light_group_name,
-                                            aov=aov_name,
-                                            ext=ext,
-                                            multipart=aov_multipart)
-                    products.append(product)
+                    for camera in cameras:
+                        product = RenderProduct(
+                            productName=aov_light_group_name,
+                            aov=aov_name,
+                            ext=ext,
+                            multipart=aov_multipart,
+                            camera=camera)
+                        products.append(product)

             if light_groups:
                 light_groups_enabled = True

@@ -945,11 +1007,13 @@ class RenderProductsRedshift(ARenderProducts):
                 # Redshift AOV Light Select always renders the global AOV
                 # even when light groups are present so we don't need to
                 # exclude it when light groups are active
-                product = RenderProduct(productName=aov_name,
-                                        aov=aov_name,
-                                        ext=ext,
-                                        multipart=aov_multipart)
-                products.append(product)
+                for camera in cameras:
+                    product = RenderProduct(productName=aov_name,
+                                            aov=aov_name,
+                                            ext=ext,
+                                            multipart=aov_multipart,
+                                            camera=camera)
+                    products.append(product)

         # When a Beauty AOV is added manually, it will be rendered as
         # 'Beauty_other' in file name and "standard" beauty will have

@@ -959,10 +1023,12 @@ class RenderProductsRedshift(ARenderProducts):
             return products

         beauty_name = "Beauty_other" if has_beauty_aov else ""
-        products.insert(0,
-                        RenderProduct(productName=beauty_name,
-                                      ext=ext,
-                                      multipart=multipart))
+        for camera in cameras:
+            products.insert(0,
+                            RenderProduct(productName=beauty_name,
+                                          ext=ext,
+                                          multipart=multipart,
+                                          camera=camera))

         return products

@@ -987,6 +1053,16 @@ class RenderProductsRenderman(ARenderProducts):
            :func:`ARenderProducts.get_render_products()`

        """
+        cameras = [
+            self.sanitize_camera_name(c)
+            for c in self.get_renderable_cameras()
+        ]
+
+        if not cameras:
+            cameras = [
+                self.sanitize_camera_name(
+                    self.get_renderable_cameras()[0])
+            ]
        products = []

        default_ext = "exr"

@@ -1000,9 +1076,11 @@ class RenderProductsRenderman(ARenderProducts):
             if aov_name == "rmanDefaultDisplay":
                 aov_name = "beauty"

-            product = RenderProduct(productName=aov_name,
-                                    ext=default_ext)
-            products.append(product)
+            for camera in cameras:
+                product = RenderProduct(productName=aov_name,
+                                        ext=default_ext,
+                                        camera=camera)
+                products.append(product)

         return products

========================================================================

@@ -2,13 +2,16 @@ import sys
 import os
 import logging

-from avalon.vendor.Qt import QtWidgets, QtGui
-from avalon.maya import pipeline
-from openpype.api import BuildWorkfile
-import maya.cmds as cmds
-from openpype.settings import get_project_settings
+from Qt import QtWidgets, QtGui

-self = sys.modules[__name__]
+import maya.cmds as cmds
+
+from avalon.maya import pipeline
+
+from openpype.api import BuildWorkfile
+from openpype.settings import get_project_settings
+from openpype.tools.utils import host_tools
+from openpype.hosts.maya.api import lib


 log = logging.getLogger(__name__)

@@ -19,10 +22,8 @@ def _get_menu(menu_name=None):
     if menu_name is None:
         menu_name = pipeline._menu

-    widgets = dict((
-        w.objectName(), w) for w in QtWidgets.QApplication.allWidgets())
-    menu = widgets.get(menu_name)
-    return menu
+    widgets = {w.objectName(): w for w in QtWidgets.QApplication.allWidgets()}
+    return widgets.get(menu_name)


 def deferred():

@@ -36,25 +37,52 @@ def deferred():
         )

     def add_look_assigner_item():
-        import mayalookassigner
         cmds.menuItem(
             "Look assigner",
             parent=pipeline._menu,
-            command=lambda *args: mayalookassigner.show()
+            command=lambda *args: host_tools.show_look_assigner(
+                pipeline._parent
+            )
         )

-    def modify_workfiles():
-        from openpype.tools import workfiles
-
-        def launch_workfiles_app(*_args, **_kwargs):
-            workfiles.show(
-                os.path.join(
-                    cmds.workspace(query=True, rootDirectory=True),
-                    cmds.workspace(fileRuleEntry="scene")
-                ),
-                parent=pipeline._parent
-            )
+    def add_experimental_item():
+        cmds.menuItem(
+            "Experimental tools...",
+            parent=pipeline._menu,
+            command=lambda *args: host_tools.show_experimental_tools_dialog(
+                pipeline._parent
+            )
+        )

+    def add_scripts_menu():
+        try:
+            import scriptsmenu.launchformaya as launchformaya
+        except ImportError:
+            log.warning(
+                "Skipping studio.menu install, because "
+                "'scriptsmenu' module seems unavailable."
+            )
+            return
+
+        # load configuration of custom menu
+        project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
+        config = project_settings["maya"]["scriptsmenu"]["definition"]
+        _menu = project_settings["maya"]["scriptsmenu"]["name"]
+
+        if not config:
+            log.warning("Skipping studio menu, no definition found.")
+            return
+
+        # run the launcher for Maya menu
+        studio_menu = launchformaya.main(
+            title=_menu.title(),
+            objectName=_menu.title().lower().replace(" ", "_")
+        )
+
+        # apply configuration
+        studio_menu.build_from_configuration(studio_menu, config)
+
+    def modify_workfiles():
+        # Find the pipeline menu
+        top_menu = _get_menu()

@@ -75,7 +103,7 @@ def deferred():
         cmds.menuItem(
             "Work Files",
             parent=pipeline._menu,
-            command=launch_workfiles_app,
+            command=lambda *args: host_tools.show_workfiles(pipeline._parent),
             insertAfter=after_action
         )

@@ -83,6 +111,35 @@ def deferred():
         if workfile_action:
             top_menu.removeAction(workfile_action)

+    def modify_resolution():
+        # Find the pipeline menu
+        top_menu = _get_menu()
+
+        # Try to find resolution tool action in the menu
+        resolution_action = None
+        for action in top_menu.actions():
+            if action.text() == "Reset Resolution":
+                resolution_action = action
+                break
+
+        # Add at the top of menu if "Work Files" action was not found
+        after_action = ""
+        if resolution_action:
+            # Use action's object name for `insertAfter` argument
+            after_action = resolution_action.objectName()
+
+        # Insert action to menu
+        cmds.menuItem(
+            "Reset Resolution",
+            parent=pipeline._menu,
+            command=lambda *args: lib.reset_scene_resolution(),
+            insertAfter=after_action
+        )
+
+        # Remove replaced action
+        if resolution_action:
+            top_menu.removeAction(resolution_action)
+
     def remove_project_manager():
         top_menu = _get_menu()

@@ -107,40 +164,42 @@ def deferred():
         if project_manager_action is not None:
             system_menu.menu().removeAction(project_manager_action)

+    def add_colorspace():
+        # Find the pipeline menu
+        top_menu = _get_menu()
+
+        # Try to find workfile tool action in the menu
+        workfile_action = None
+        for action in top_menu.actions():
+            if action.text() == "Reset Resolution":
+                workfile_action = action
+                break
+
+        # Add at the top of menu if "Work Files" action was not found
+        after_action = ""
+        if workfile_action:
+            # Use action's object name for `insertAfter` argument
+            after_action = workfile_action.objectName()
+
+        # Insert action to menu
+        cmds.menuItem(
+            "Set Colorspace",
+            parent=pipeline._menu,
+            command=lambda *args: lib.set_colorspace(),
+            insertAfter=after_action
+        )
+
     log.info("Attempting to install scripts menu ...")

-    # add_scripts_menu()
     add_build_workfiles_item()
     add_look_assigner_item()
+    add_experimental_item()
     modify_workfiles()
+    modify_resolution()
     remove_project_manager()
-
-    try:
-        import scriptsmenu.launchformaya as launchformaya
-        import scriptsmenu.scriptsmenu as scriptsmenu
-    except ImportError:
-        log.warning(
-            "Skipping studio.menu install, because "
-            "'scriptsmenu' module seems unavailable."
-        )
-        return
-
-    # load configuration of custom menu
-    project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
-    config = project_settings["maya"]["scriptsmenu"]["definition"]
-    _menu = project_settings["maya"]["scriptsmenu"]["name"]
-
-    if not config:
-        log.warning("Skipping studio menu, no definition found.")
-        return
-
-    # run the launcher for Maya menu
-    studio_menu = launchformaya.main(
-        title=_menu.title(),
-        objectName=_menu.title().lower().replace(" ", "_")
-    )
-
-    # apply configuration
-    studio_menu.build_from_configuration(studio_menu, config)
+    add_colorspace()
+    add_scripts_menu()


 def uninstall():

@@ -161,7 +220,7 @@ def install():
     return

     # Allow time for uninstallation to finish.
-    cmds.evalDeferred(deferred)
+    cmds.evalDeferred(deferred, lowestPriority=True)


 def popup():

========================================================================

@@ -123,7 +123,7 @@ class ReferenceLoader(api.Loader):
         count = options.get("count") or 1
         for c in range(0, count):
             namespace = namespace or lib.unique_namespace(
-                asset["name"] + "_",
+                "{}_{}_".format(asset["name"], context["subset"]["name"]),
                 prefix="_" if asset["name"][0].isdigit() else "",
                 suffix="_",
             )

========================================================================

@@ -5,6 +5,7 @@ import os
 import contextlib
 import copy

+import six
 from maya import cmds

 from avalon import api, io

@@ -69,7 +70,8 @@ def unlocked(nodes):
         yield
     finally:
         # Reapply original states
-        for uuid, state in states.iteritems():
+        _iteritems = getattr(states, "iteritems", states.items)
+        for uuid, state in _iteritems():
             nodes_from_id = cmds.ls(uuid, long=True)
             if nodes_from_id:
                 node = nodes_from_id[0]

@@ -94,7 +96,7 @@ def load_package(filepath, name, namespace=None):
         # Define a unique namespace for the package
         namespace = os.path.basename(filepath).split(".")[0]
         unique_namespace(namespace)
-    assert isinstance(namespace, basestring)
+    assert isinstance(namespace, six.string_types)

     # Load the setdress package data
     with open(filepath, "r") as fp:

========================================================================

@@ -1,11 +1,11 @@
 from openpype.hosts.maya.api import plugin


-class CreateMayaAscii(plugin.Creator):
-    """Raw Maya Ascii file export"""
+class CreateMayaScene(plugin.Creator):
+    """Raw Maya Scene file export"""

-    name = "mayaAscii"
-    label = "Maya Ascii"
-    family = "mayaAscii"
+    name = "mayaScene"
+    label = "Maya Scene"
+    family = "mayaScene"
     icon = "file-archive-o"
     defaults = ['Main']

========================================================================

@@ -13,6 +13,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
                 "pointcache",
                 "animation",
                 "mayaAscii",
+                "mayaScene",
                 "setdress",
                 "layout",
                 "camera",

@@ -40,14 +41,13 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
             family = "model"

         with maya.maintained_selection():
-
-            groupName = "{}:{}".format(namespace, name)
+            groupName = "{}:_GRP".format(namespace)
             cmds.loadPlugin("AbcImport.mll", quiet=True)
             nodes = cmds.file(self.fname,
                               namespace=namespace,
                               sharedReferenceFile=False,
                               groupReference=True,
-                              groupName="{}:{}".format(namespace, name),
+                              groupName=groupName,
                               reference=True,
                               returnNewNodes=True)

@@ -71,7 +71,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
             except:  # noqa: E722
                 pass

-            if family not in ["layout", "setdress", "mayaAscii"]:
+            if family not in ["layout", "setdress", "mayaAscii", "mayaScene"]:
                 for root in roots:
                     root.setParent(world=True)

========================================================================

@@ -3,14 +3,14 @@ from maya import cmds
 import pyblish.api


-class CollectMayaAscii(pyblish.api.InstancePlugin):
-    """Collect May Ascii Data
+class CollectMayaScene(pyblish.api.InstancePlugin):
+    """Collect Maya Scene Data

     """

     order = pyblish.api.CollectorOrder + 0.2
     label = 'Collect Model Data'
-    families = ["mayaAscii"]
+    families = ["mayaScene"]

     def process(self, instance):
         # Extract only current frame (override)

@@ -174,10 +174,16 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
             assert render_products, "no render products generated"
             exp_files = []
             for product in render_products:
-                for camera in layer_render_products.layer_data.cameras:
-                    exp_files.append(
-                        {product.productName: layer_render_products.get_files(
-                            product, camera)})
+                product_name = product.productName
+                if product.camera and layer_render_products.has_camera_token():
+                    product_name = "{}{}".format(
+                        product.camera,
+                        "_" + product_name if product_name else "")
+                exp_files.append(
+                    {
+                        product_name: layer_render_products.get_files(
+                            product)
+                    })

             self.log.info("multipart: {}".format(
                 layer_render_products.multipart))
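The key names built above combine camera and product name only when the image prefix actually contains the camera token; a sketch of that naming rule:

    def compose_key(product_name, camera, has_camera_token):
        # Prefix the camera so per-camera outputs do not collide,
        # and avoid a trailing underscore for unnamed (beauty) products.
        if camera and has_camera_token:
            return "{}{}".format(
                camera, "_" + product_name if product_name else "")
        return product_name

    print(compose_key("beauty", "persp", True))   # persp_beauty
    print(compose_key("", "persp", True))         # persp
    print(compose_key("beauty", "persp", False))  # beauty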
@@ -199,12 +205,14 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
             # replace relative paths with absolute. Render products are
             # returned as list of dictionaries.
+            publish_meta_path = None
             for aov in exp_files:
                 full_paths = []
                 for file in aov[aov.keys()[0]]:
                     full_path = os.path.join(workspace, "renders", file)
                     full_path = full_path.replace("\\", "/")
                     full_paths.append(full_path)
+                    publish_meta_path = os.path.dirname(full_path)
                 aov_dict[aov.keys()[0]] = full_paths

             frame_start_render = int(self.get_render_attribute(

@@ -230,6 +238,26 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
                 frame_end_handle = frame_end_render

             full_exp_files.append(aov_dict)
+
+            # find common path to store metadata
+            # so if image prefix is branching to many directories
+            # metadata file will be located in top-most common
+            # directory.
+            # TODO: use `os.path.commonpath()` after switch to Python 3
+            publish_meta_path = os.path.normpath(publish_meta_path)
+            common_publish_meta_path = os.path.splitdrive(
+                publish_meta_path)[0]
+            if common_publish_meta_path:
+                common_publish_meta_path += os.path.sep
+            for part in publish_meta_path.replace(
+                    common_publish_meta_path, "").split(os.path.sep):
+                common_publish_meta_path = os.path.join(
+                    common_publish_meta_path, part)
+                if part == expected_layer_name:
+                    break
+            self.log.info(
+                "Publish meta path: {}".format(common_publish_meta_path))
+
             self.log.info(full_exp_files)
             self.log.info("collecting layer: {}".format(layer_name))
             # Get layer specific settings, might be overrides
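The common-path walk above exists because `os.path.commonpath()` is Python 3 only, as the TODO notes. Under Python 3 much of it could likely be replaced by a single call; a sketch with illustrative paths:

    import os

    paths = [
        "/work/proj/renders/maya/layer1/beauty/file.0001.exr",
        "/work/proj/renders/maya/layer1/persp_AO/file.0001.exr",
    ]
    # commonpath returns the deepest directory shared by all inputs
    print(os.path.commonpath(paths))  # /work/proj/renders/maya/layer1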
@@ -262,6 +290,7 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
                 # which was submitted originally
                 "source": filepath,
                 "expectedFiles": full_exp_files,
+                "publishRenderMetadataFolder": common_publish_meta_path,
                 "resolutionWidth": cmds.getAttr("defaultResolution.width"),
                 "resolutionHeight": cmds.getAttr("defaultResolution.height"),
                 "pixelAspect": cmds.getAttr("defaultResolution.pixelAspect"),

========================================================================

@@ -4,7 +4,7 @@ import os
 from maya import cmds


-class CollectMayaScene(pyblish.api.ContextPlugin):
+class CollectWorkfile(pyblish.api.ContextPlugin):
     """Inject the current working file into context"""

     order = pyblish.api.CollectorOrder - 0.01

========================================================================

@@ -183,7 +183,8 @@ class ExtractFBX(openpype.api.Extractor):
         # Apply the FBX overrides through MEL since the commands
         # only work correctly in MEL according to online
         # available discussions on the topic
-        for option, value in options.iteritems():
+        _iteritems = getattr(options, "iteritems", options.items)
+        for option, value in _iteritems():
             key = option[0].upper() + option[1:]  # uppercase first letter

             # Boolean must be passed as lower-case strings

========================================================================

@@ -205,6 +205,9 @@ class ExtractLook(openpype.api.Extractor):
         lookdata = instance.data["lookData"]
         relationships = lookdata["relationships"]
         sets = relationships.keys()
+        if not sets:
+            self.log.info("No sets found")
+            return

         results = self.process_resources(instance, staging_dir=dir_path)
         transfers = results["fileTransfers"]

========================================================================

@@ -17,6 +17,7 @@ class ExtractMayaSceneRaw(openpype.api.Extractor):
     label = "Maya Scene (Raw)"
     hosts = ["maya"]
     families = ["mayaAscii",
+                "mayaScene",
                 "setdress",
                 "layout",
                 "camerarig",

========================================================================

@@ -45,9 +45,12 @@ class ExtractPlayblast(openpype.api.Extractor):
         # get cameras
         camera = instance.data['review_camera']

+        override_viewport_options = (
+            self.capture_preset['Viewport Options']
+            ['override_viewport_options']
+        )
         preset = lib.load_capture_preset(data=self.capture_preset)

-
         preset['camera'] = camera
         preset['start_frame'] = start
         preset['end_frame'] = end

@@ -92,6 +95,12 @@ class ExtractPlayblast(openpype.api.Extractor):
         self.log.info('using viewport preset: {}'.format(preset))

+        # Update preset with current panel setting
+        # if override_viewport_options is turned off
+        if not override_viewport_options:
+            panel_preset = capture.parse_active_view()
+            preset.update(panel_preset)
+
         path = capture.capture(**preset)

         self.log.debug("playblast path {}".format(path))

========================================================================

@@ -32,6 +32,9 @@ class ExtractThumbnail(openpype.api.Extractor):
         capture_preset = (
             instance.context.data["project_settings"]['maya']['publish']['ExtractPlayblast']['capture_preset']
         )
+        override_viewport_options = (
+            capture_preset['Viewport Options']['override_viewport_options']
+        )

         try:
             preset = lib.load_capture_preset(data=capture_preset)

@@ -86,6 +89,12 @@ class ExtractThumbnail(openpype.api.Extractor):
         # playblast and viewer
         preset['viewer'] = False

+        # Update preset with current panel setting
+        # if override_viewport_options is turned off
+        if not override_viewport_options:
+            panel_preset = capture.parse_active_view()
+            preset.update(panel_preset)
+
         path = capture.capture(**preset)
         playblast = self._fix_playblast_output_path(path)

========================================================================

@@ -383,7 +383,7 @@ class MayaSubmitMuster(pyblish.api.InstancePlugin):
             "attributes": {
                 "environmental_variables": {
                     "value": ", ".join("{!s}={!r}".format(k, v)
-                                       for (k, v) in env.iteritems()),
+                                       for (k, v) in env.items()),

                     "state": True,
                     "subst": False

========================================================================

@@ -5,6 +5,8 @@ from __future__ import absolute_import
 import pyblish.api
 import openpype.api

+from maya import cmds
+

 class SelectInvalidInstances(pyblish.api.Action):
     """Select invalid instances in Outliner."""

@@ -18,13 +20,12 @@ class SelectInvalidInstances(pyblish.api.Action):
         # Get the errored instances
         failed = []
         for result in context.data["results"]:
-            if result["error"] is None:
-                continue
-            if result["instance"] is None:
-                continue
-            if result["instance"] in failed:
-                continue
-            if result["plugin"] != plugin:
+            if (
+                result["error"] is None
+                or result["instance"] is None
+                or result["instance"] in failed
+                or result["plugin"] != plugin
+            ):
                 continue

             failed.append(result["instance"])

@@ -44,25 +45,10 @@ class SelectInvalidInstances(pyblish.api.Action):
             self.deselect()

     def select(self, instances):
-        if "nuke" in pyblish.api.registered_hosts():
-            import avalon.nuke.lib
-            import nuke
-            avalon.nuke.lib.select_nodes(
-                [nuke.toNode(str(x)) for x in instances]
-            )
-
-        if "maya" in pyblish.api.registered_hosts():
-            from maya import cmds
-            cmds.select(instances, replace=True, noExpand=True)
+        cmds.select(instances, replace=True, noExpand=True)

     def deselect(self):
-        if "nuke" in pyblish.api.registered_hosts():
-            import avalon.nuke.lib
-            avalon.nuke.lib.reset_selection()
-
-        if "maya" in pyblish.api.registered_hosts():
-            from maya import cmds
-            cmds.select(deselect=True)
+        cmds.select(deselect=True)


 class RepairSelectInvalidInstances(pyblish.api.Action):

@@ -92,23 +78,14 @@ class RepairSelectInvalidInstances(pyblish.api.Action):
         context_asset = context.data["assetEntity"]["name"]
         for instance in instances:
-            if "nuke" in pyblish.api.registered_hosts():
-                import openpype.hosts.nuke.api as nuke_api
-                origin_node = instance[0]
-                nuke_api.lib.recreate_instance(
-                    origin_node, avalon_data={"asset": context_asset}
-                )
-            else:
-                self.set_attribute(instance, context_asset)
+            self.set_attribute(instance, context_asset)

     def set_attribute(self, instance, context_asset):
-        if "maya" in pyblish.api.registered_hosts():
-            from maya import cmds
-            cmds.setAttr(
-                instance.data.get("name") + ".asset",
-                context_asset,
-                type="string"
-            )
+        cmds.setAttr(
+            instance.data.get("name") + ".asset",
+            context_asset,
+            type="string"
+        )


 class ValidateInstanceInContext(pyblish.api.InstancePlugin):

@@ -124,7 +101,7 @@ class ValidateInstanceInContext(pyblish.api.InstancePlugin):
     order = openpype.api.ValidateContentsOrder
     label = "Instance in same Context"
     optional = True
-    hosts = ["maya", "nuke"]
+    hosts = ["maya"]
     actions = [SelectInvalidInstances, RepairSelectInvalidInstances]

     def process(self, instance):

========================================================================

@@ -2,6 +2,8 @@ import pyblish.api
 import openpype.api
 import string

+import six
+
 # Allow only characters, numbers and underscore
 allowed = set(string.ascii_lowercase +
               string.ascii_uppercase +

@@ -29,7 +31,7 @@ class ValidateSubsetName(pyblish.api.InstancePlugin):
             raise RuntimeError("Instance is missing subset "
                                "name: {0}".format(subset))

-        if not isinstance(subset, basestring):
+        if not isinstance(subset, six.string_types):
             raise TypeError("Instance subset name must be string, "
                             "got: {0} ({1})".format(subset, type(subset)))

========================================================================

@@ -0,0 +1,47 @@
+import pyblish.api
+import maya.cmds as cmds
+import openpype.api
+import os
+
+
+class ValidateLoadedPlugin(pyblish.api.ContextPlugin):
+    """Ensure there are no unauthorized loaded plugins"""
+
+    label = "Loaded Plugin"
+    order = pyblish.api.ValidatorOrder
+    host = ["maya"]
+    actions = [openpype.api.RepairContextAction]
+
+    @classmethod
+    def get_invalid(cls, context):
+
+        invalid = []
+        loaded_plugin = cmds.pluginInfo(query=True, listPlugins=True)
+        # get variable from OpenPype settings
+        whitelist_native_plugins = cls.whitelist_native_plugins
+        authorized_plugins = cls.authorized_plugins or []
+
+        for plugin in loaded_plugin:
+            if not whitelist_native_plugins and os.getenv('MAYA_LOCATION') \
+                    in cmds.pluginInfo(plugin, query=True, path=True):
+                continue
+            if plugin not in authorized_plugins:
+                invalid.append(plugin)
+
+        return invalid
+
+    def process(self, context):
+
+        invalid = self.get_invalid(context)
+        if invalid:
+            raise RuntimeError(
+                "Found forbidden plugin name: {}".format(", ".join(invalid))
+            )
+
+    @classmethod
+    def repair(cls, context):
+        """Unload forbidden plugins"""
+
+        for plugin in cls.get_invalid(context):
+            cmds.pluginInfo(plugin, edit=True, autoload=False)
+            cmds.unloadPlugin(plugin, force=True)

openpype/hosts/maya/plugins/publish/validate_mesh_ngons.py (new file, 53 lines)

@@ -0,0 +1,53 @@
+from maya import cmds
+
+import pyblish.api
+import openpype.api
+import openpype.hosts.maya.api.action
+from avalon import maya
+from openpype.hosts.maya.api import lib
+
+
+def polyConstraint(objects, *args, **kwargs):
+    kwargs.pop('mode', None)
+
+    with lib.no_undo(flush=False):
+        with maya.maintained_selection():
+            with lib.reset_polySelectConstraint():
+                cmds.select(objects, r=1, noExpand=True)
+                # Acting as 'polyCleanupArgList' for n-sided polygon selection
+                cmds.polySelectConstraint(*args, mode=3, **kwargs)
+                result = cmds.ls(selection=True)
+                cmds.select(clear=True)
+
+    return result
+
+
+class ValidateMeshNgons(pyblish.api.Validator):
+    """Ensure that meshes don't have ngons
+
+    Ngon are faces with more than 4 sides.
+
+    To debug the problem on the meshes you can use Maya's modeling
+    tool: "Mesh > Cleanup..."
+
+    """
+
+    order = openpype.api.ValidateContentsOrder
+    hosts = ["maya"]
+    families = ["model"]
+    label = "Mesh ngons"
+    actions = [openpype.hosts.maya.api.action.SelectInvalidAction]
+
+    @staticmethod
+    def get_invalid(instance):
+
+        meshes = cmds.ls(instance, type='mesh')
+        return polyConstraint(meshes, type=8, size=3)
+
+    def process(self, instance):
+        """Process all the nodes in the instance "objectSet"""
+
+        invalid = self.get_invalid(instance)
+        if invalid:
+            raise ValueError("Meshes found with n-gons"
+                             "values: {0}".format(invalid))
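The `polyConstraint` helper above leans on Maya's selection constraints; a hedged sketch of the core trick outside of the validator (it runs only inside Maya, and the mesh name is illustrative):

    from maya import cmds

    cmds.select("pCube1.f[*]", replace=True, noExpand=True)
    # mode=3 applies the constraint to "all and next" selections;
    # type=8 constrains faces by size, and size=3 selects faces with
    # more than four sides (n-gons) - the same flags the validator uses.
    cmds.polySelectConstraint(mode=3, type=8, size=3)
    ngons = cmds.ls(selection=True)
    cmds.polySelectConstraint(disable=True)  # reset the constraint afterwards
    print(ngons)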
========================================================================

@@ -52,7 +52,8 @@ class ValidateNodeIdsUnique(pyblish.api.InstancePlugin):
         # Take only the ids with more than one member
         invalid = list()
-        for _ids, members in ids.iteritems():
+        _iteritems = getattr(ids, "iteritems", ids.items)
+        for _ids, members in _iteritems():
             if len(members) > 1:
                 cls.log.error("ID found on multiple nodes: '%s'" % members)
                 invalid.extend(members)

========================================================================

@@ -32,7 +32,10 @@ class ValidateNodeNoGhosting(pyblish.api.InstancePlugin):
         nodes = cmds.ls(instance, long=True, type=['transform', 'shape'])
         invalid = []
         for node in nodes:
-            for attr, required_value in cls._attributes.iteritems():
+            _iteritems = getattr(
+                cls._attributes, "iteritems", cls._attributes.items
+            )
+            for attr, required_value in _iteritems():
                 if cmds.attributeQuery(attr, node=node, exists=True):

                     value = cmds.getAttr('{0}.{1}'.format(node, attr))

========================================================================

@@ -76,7 +76,7 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
         r'%a|<aov>|<renderpass>', re.IGNORECASE)
     R_LAYER_TOKEN = re.compile(
         r'%l|<layer>|<renderlayer>', re.IGNORECASE)
-    R_CAMERA_TOKEN = re.compile(r'%c|<camera>', re.IGNORECASE)
+    R_CAMERA_TOKEN = re.compile(r'%c|Camera>')
     R_SCENE_TOKEN = re.compile(r'%s|<scene>', re.IGNORECASE)

     DEFAULT_PADDING = 4

@@ -126,7 +126,9 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
         if len(cameras) > 1 and not re.search(cls.R_CAMERA_TOKEN, prefix):
             invalid = True
             cls.log.error("Wrong image prefix [ {} ] - "
-                          "doesn't have: '<camera>' token".format(prefix))
+                          "doesn't have: '<Camera>' token".format(prefix))
+            cls.log.error(
+                "Note that to needs to have capital 'C' at the beginning")

         # renderer specific checks
         if renderer == "vray":

========================================================================

@@ -33,7 +33,8 @@ class ValidateShapeRenderStats(pyblish.api.Validator):
         shapes = cmds.ls(instance, long=True, type='surfaceShape')
         invalid = []
         for shape in shapes:
-            for attr, default_value in cls.defaults.iteritems():
+            _iteritems = getattr(cls.defaults, "iteritems", cls.defaults.items)
+            for attr, default_value in _iteritems():
                 if cmds.attributeQuery(attr, node=shape, exists=True):
                     value = cmds.getAttr('{}.{}'.format(shape, attr))
                     if value != default_value:

@@ -52,7 +53,8 @@ class ValidateShapeRenderStats(pyblish.api.Validator):
     @classmethod
     def repair(cls, instance):
         for shape in cls.get_invalid(instance):
-            for attr, default_value in cls.defaults.iteritems():
+            _iteritems = getattr(cls.defaults, "iteritems", cls.defaults.items)
+            for attr, default_value in _iteritems():

                 if cmds.attributeQuery(attr, node=shape, exists=True):
                     plug = '{0}.{1}'.format(shape, attr)

openpype/hosts/maya/plugins/publish/validate_shape_zero.py (new file, 59 lines)

@@ -0,0 +1,59 @@
+from maya import cmds
+
+import pyblish.api
+import openpype.api
+import openpype.hosts.maya.api.action
+
+
+class ValidateShapeZero(pyblish.api.Validator):
+    """shape can't have any values
+
+    To solve this issue, try freezing the shapes. So long
+    as the translation, rotation and scaling values are zero,
+    you're all good.
+
+    """
+
+    order = openpype.api.ValidateContentsOrder
+    hosts = ["maya"]
+    families = ["model"]
+    label = "Shape Zero (Freeze)"
+    actions = [
+        openpype.hosts.maya.api.action.SelectInvalidAction,
+        openpype.api.RepairAction
+    ]
+
+    @staticmethod
+    def get_invalid(instance):
+        """Returns the invalid shapes in the instance.
+
+        This is the same as checking:
+        - all(pnt == [0,0,0] for pnt in shape.pnts[:])
+
+        Returns:
+            list: Shape with non freezed vertex
+
+        """
+
+        shapes = cmds.ls(instance, type="shape")
+
+        invalid = []
+        for shape in shapes:
+            if cmds.polyCollapseTweaks(shape, q=True, hasVertexTweaks=True):
+                invalid.append(shape)
+
+        return invalid
+
+    @classmethod
+    def repair(cls, instance):
+        invalid_shapes = cls.get_invalid(instance)
+        for shape in invalid_shapes:
+            cmds.polyCollapseTweaks(shape)
+
+    def process(self, instance):
+        """Process all the nodes in the instance "objectSet"""
+
+        invalid = self.get_invalid(instance)
+        if invalid:
+            raise ValueError("Nodes found with shape or vertices not freezed"
+                             "values: {0}".format(invalid))

========================================================================

@@ -21,6 +21,7 @@ def add_implementation_envs(env, _app):
         new_nuke_paths.append(norm_path)

     env["NUKE_PATH"] = os.pathsep.join(new_nuke_paths)
+    env.pop("QT_AUTO_SCREEN_SCALE_FACTOR", None)

     # Try to add QuickTime to PATH
     quick_time_path = "C:/Program Files (x86)/QuickTime/QTSystem"

========================================================================

@@ -7,7 +7,6 @@ from collections import OrderedDict
 from avalon import api, io, lib
-from openpype.tools import workfiles
 import avalon.nuke
 from avalon.nuke import lib as anlib
 from avalon.nuke import (

@@ -24,6 +23,10 @@ from openpype.api import (
     get_current_project_settings,
     ApplicationManager
 )
+from openpype.tools.utils import host_tools
+from openpype.lib.path_tools import HostDirmap
+from openpype.settings import get_project_settings
+from openpype.modules import ModulesManager

 import nuke

@@ -288,14 +291,15 @@ def script_name():
 def add_button_write_to_read(node):
     name = "createReadNode"
     label = "Create Read From Rendered"
-    value = "import write_to_read;write_to_read.write_to_read(nuke.thisNode())"
+    value = "import write_to_read;\
+write_to_read.write_to_read(nuke.thisNode(), allow_relative=False)"
     knob = nuke.PyScript_Knob(name, label, value)
     knob.clearFlag(nuke.STARTLINE)
     node.addKnob(knob)


 def create_write_node(name, data, input=None, prenodes=None,
-                      review=True, linked_knobs=None):
+                      review=True, linked_knobs=None, farm=True):
     ''' Creating write node which is group node

     Arguments:

@@ -421,7 +425,15 @@ def create_write_node(name, data, input=None, prenodes=None,
                 ))
                 continue

-            if knob and value:
+            if not knob and not value:
+                continue
+
+            log.info((knob, value))
+
+            if isinstance(value, str):
                 if "[" in value:
                     now_node[knob].setExpression(value)
                 else:
                     now_node[knob].setValue(value)

         # connect to previous node

@@ -466,7 +478,7 @@ def create_write_node(name, data, input=None, prenodes=None,
     # imprinting group node
     anlib.set_avalon_knob_data(GN, data["avalon"])
     anlib.add_publish_knob(GN)
-    add_rendering_knobs(GN)
+    add_rendering_knobs(GN, farm)

     if review:
         add_review_knob(GN)

@@ -526,7 +538,7 @@ def create_write_node(name, data, input=None, prenodes=None,
     return GN


-def add_rendering_knobs(node):
+def add_rendering_knobs(node, farm=True):
     ''' Adds additional rendering knobs to given node

     Arguments:

@@ -535,9 +547,13 @@ def add_rendering_knobs(node):
     Return:
         node (obj): with added knobs
     '''
+    knob_options = [
+        "Use existing frames", "Local"]
+    if farm:
+        knob_options.append("On farm")
+
     if "render" not in node.knobs():
-        knob = nuke.Enumeration_Knob("render", "", [
-            "Use existing frames", "Local", "On farm"])
+        knob = nuke.Enumeration_Knob("render", "", knob_options)
         knob.clearFlag(nuke.STARTLINE)
         node.addKnob(knob)
     return node
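The new `farm` flag above only changes which options the render knob offers; a tiny sketch of the resulting option lists:

    def rendering_knob_options(farm=True):
        options = ["Use existing frames", "Local"]
        if farm:
            options.append("On farm")
        return options

    print(rendering_knob_options())       # includes "On farm"
    print(rendering_knob_options(False))  # for local-only workflows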
@@ -1019,27 +1035,6 @@ class WorkfileSettings(object):
             log.error(msg)
             nuke.message(msg)

-        bbox = self._asset_entity.get('data', {}).get('crop')
-
-        if bbox:
-            try:
-                x, y, r, t = bbox.split(".")
-                data.update(
-                    {
-                        "x": int(x),
-                        "y": int(y),
-                        "r": int(r),
-                        "t": int(t),
-                    }
-                )
-            except Exception as e:
-                bbox = None
-                msg = ("{}:{} \nFormat:Crop need to be set with dots, "
-                       "example: 0.0.1920.1080, "
-                       "/nSetting to default").format(__name__, e)
-                log.error(msg)
-                nuke.message(msg)
-
         existing_format = None
         for format in nuke.formats():
             if data["name"] == format.name():

@@ -1051,12 +1046,6 @@ class WorkfileSettings(object):
             existing_format.setWidth(data["width"])
             existing_format.setHeight(data["height"])
             existing_format.setPixelAspect(data["pixel_aspect"])
-
-            if bbox:
-                existing_format.setX(data["x"])
-                existing_format.setY(data["y"])
-                existing_format.setR(data["r"])
-                existing_format.setT(data["t"])
         else:
             format_string = self.make_format_string(**data)
             log.info("Creating new format: {}".format(format_string))

@@ -1676,7 +1665,7 @@ def launch_workfiles_app():
     if not opnl.workfiles_launched:
         opnl.workfiles_launched = True
-        workfiles.show(os.environ["AVALON_WORKDIR"])
+        host_tools.show_workfiles()


 def process_workfile_builder():

@@ -1810,3 +1799,69 @@ def recreate_instance(origin_node, avalon_data=None):
         dn.setInput(0, new_node)

     return new_node
+
+
+class NukeDirmap(HostDirmap):
+    def __init__(self, host_name, project_settings, sync_module, file_name):
+        """
+        Args:
+            host_name (str): Nuke
+            project_settings (dict): settings of current project
+            sync_module (SyncServerModule): to limit reinitialization
+            file_name (str): full path of referenced file from workfiles
+        """
+        self.host_name = host_name
+        self.project_settings = project_settings
+        self.file_name = file_name
+        self.sync_module = sync_module
+
+        self._mapping = None  # cache mapping
+
+    def on_enable_dirmap(self):
+        pass
+
+    def dirmap_routine(self, source_path, destination_path):
+        log.debug("{}: {}->{}".format(self.file_name,
+                                      source_path, destination_path))
+        source_path = source_path.lower().replace(os.sep, '/')
+        destination_path = destination_path.lower().replace(os.sep, '/')
+        if platform.system().lower() == "windows":
+            self.file_name = self.file_name.lower().replace(
+                source_path, destination_path)
+        else:
+            self.file_name = self.file_name.replace(
+                source_path, destination_path)
+
+
+class DirmapCache:
+    """Caching class to get settings and sync_module easily and only once."""
+    _project_settings = None
+    _sync_module = None
+
+    @classmethod
+    def project_settings(cls):
+        if cls._project_settings is None:
+            cls._project_settings = get_project_settings(
+                os.getenv("AVALON_PROJECT"))
+        return cls._project_settings
+
+    @classmethod
+    def sync_module(cls):
+        if cls._sync_module is None:
+            cls._sync_module = ModulesManager().modules_by_name["sync_server"]
+        return cls._sync_module
+
+
+def dirmap_file_name_filter(file_name):
+    """Nuke callback function with single full path argument.
+
+    Checks project settings for potential mapping from source to dest.
+    """
+    dirmap_processor = NukeDirmap("nuke",
+                                  DirmapCache.project_settings(),
+                                  DirmapCache.sync_module(),
+                                  file_name)
+    dirmap_processor.process_dirmap()
+    if os.path.exists(dirmap_processor.file_name):
+        return dirmap_processor.file_name
+    return file_name
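For context, `dirmap_file_name_filter()` above matches the shape Nuke expects from a filename filter callback: it receives one path and returns the (possibly remapped) path. A hedged usage sketch, assuming the function is registered from Nuke startup code:

    import nuke  # available only inside a running Nuke session

    # Nuke will then call the filter for every filename it resolves,
    # letting the dirmap rewrite paths on the fly.
    nuke.addFilenameFilter(dirmap_file_name_filter)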
========================================================================

@@ -4,7 +4,7 @@ from avalon.api import Session
 from .lib import WorkfileSettings
 from openpype.api import Logger, BuildWorkfile, get_current_project_settings
-from openpype.tools import workfiles
+from openpype.tools.utils import host_tools

 log = Logger().get_logger(__name__)

@@ -25,7 +25,7 @@ def install():
     menu.removeItem(rm_item[1].name())
     menu.addCommand(
         name,
-        workfiles.show,
+        host_tools.show_workfiles,
         index=2
     )
     menu.addSeparator(index=3)

@@ -84,6 +84,12 @@ def install():
         )
         log.debug("Adding menu item: {}".format(name))

+    # Add experimental tools action
+    menu.addSeparator()
+    menu.addCommand(
+        "Experimental tools...",
+        host_tools.show_experimental_tools_dialog
+    )
+
     # adding shortcuts
     add_shortcuts_from_presets()

========================================================================

@@ -1,4 +1,10 @@
+import random
+import string
+
 import avalon.nuke
+from avalon.nuke import lib as anlib
+from avalon import api
+
 from openpype.api import (
     get_current_project_settings,
     PypeCreatorMixin

@@ -23,3 +29,68 @@ class PypeCreator(PypeCreatorMixin, avalon.nuke.pipeline.Creator):
             self.log.error(msg + '\n\nPlease use other subset name!')
             raise NameError("`{0}: {1}".format(__name__, msg))
         return
+
+
+def get_review_presets_config():
+    settings = get_current_project_settings()
+    review_profiles = (
+        settings["global"]
+        ["publish"]
+        ["ExtractReview"]
+        ["profiles"]
+    )
+
+    outputs = {}
+    for profile in review_profiles:
+        outputs.update(profile.get("outputs", {}))
+
+    return [str(name) for name, _prop in outputs.items()]
+
+
+class NukeLoader(api.Loader):
+    container_id_knob = "containerId"
+    container_id = ''.join(random.choice(
+        string.ascii_uppercase + string.digits) for _ in range(10))
+
+    def get_container_id(self, node):
+        id_knob = node.knobs().get(self.container_id_knob)
+        return id_knob.value() if id_knob else None
+
+    def get_members(self, source):
+        """Return nodes that has same 'containerId' as `source`"""
+        source_id = self.get_container_id(source)
+        return [node for node in nuke.allNodes(recurseGroups=True)
+                if self.get_container_id(node) == source_id
+                and node is not source] if source_id else []
+
+    def set_as_member(self, node):
+        source_id = self.get_container_id(node)
+
+        if source_id:
+            node[self.container_id_knob].setValue(self.container_id)
+        else:
+            HIDEN_FLAG = 0x00040000
+            _knob = anlib.Knobby(
+                "String_Knob",
+                self.container_id,
+                flags=[nuke.READ_ONLY, HIDEN_FLAG])
+            knob = _knob.create(self.container_id_knob)
+            node.addKnob(knob)
+
+    def clear_members(self, parent_node):
+        members = self.get_members(parent_node)
+
+        dependent_nodes = None
+        for node in members:
+            _depndc = [n for n in node.dependent() if n not in members]
+            if not _depndc:
+                continue
+
+            dependent_nodes = _depndc
+            break
+
+        for member in members:
+            self.log.info("removing node: `{}".format(member.name()))
+            nuke.delete(member)
+
+        return dependent_nodes
|
|||
|
|
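The `NukeLoader` base above tags every node a loader creates with one shared, hidden `containerId` knob, so helper nodes created later (retimes, timewarps) can be found and cleaned up as a group. A minimal usage sketch, assuming a live Nuke session; `MyLoader` and `context` are hypothetical stand-ins for what the loader framework supplies:

    import nuke

    class MyLoader(NukeLoader):  # hypothetical subclass for illustration
        families = ["render"]
        representations = ["exr"]

    loader = MyLoader(context)  # `context` provided by the framework

    read_node = nuke.createNode("Read")
    retime_node = nuke.createNode("Retime")
    loader.set_as_member(read_node)    # both nodes now carry the same id
    loader.set_as_member(retime_node)

    print(loader.get_members(read_node))  # -> [retime_node]
    loader.clear_members(read_node)       # deletes retime_node only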
@@ -99,7 +99,7 @@ class CreateWriteRender(plugin.PypeCreator):
             "fpath_template": ("{work}/renders/nuke/{subset}"
                                "/{subset}.{frame}.{ext}")})

-        # add crop node to cut off all outside of format bounding box
+        # add reformat node to cut off all outside of format bounding box
         # get width and height
         try:
             width, height = (selected_node.width(), selected_node.height())

@@ -109,15 +109,11 @@ class CreateWriteRender(plugin.PypeCreator):

         _prenodes = [
             {
-                "name": "Crop01",
-                "class": "Crop",
+                "name": "Reformat01",
+                "class": "Reformat",
                 "knobs": [
-                    ("box", [
-                        0.0,
-                        0.0,
-                        width,
-                        height
-                    ])
+                    ("resize", 0),
+                    ("black_outside", 1),
                 ],
                 "dependent": None
             }
openpype/hosts/nuke/plugins/create/create_write_still.py (new file, 141 lines)
@@ -0,0 +1,141 @@
from collections import OrderedDict
from openpype.hosts.nuke.api import (
    plugin,
    lib)
import nuke


class CreateWriteStill(plugin.PypeCreator):
    # change this to template preset
    name = "WriteStillFrame"
    label = "Create Write Still Image"
    hosts = ["nuke"]
    n_class = "Write"
    family = "still"
    icon = "image"
    defaults = [
        "ImageFrame{:0>4}".format(nuke.frame()),
        "MPFrame{:0>4}".format(nuke.frame()),
        "LayoutFrame{:0>4}".format(nuke.frame())
    ]

    def __init__(self, *args, **kwargs):
        super(CreateWriteStill, self).__init__(*args, **kwargs)

        data = OrderedDict()

        data["family"] = self.family
        data["families"] = self.n_class

        for k, v in self.data.items():
            if k not in data.keys():
                data.update({k: v})

        self.data = data
        self.nodes = nuke.selectedNodes()
        self.log.debug("_ self.data: '{}'".format(self.data))

    def process(self):

        inputs = []
        outputs = []
        instance = nuke.toNode(self.data["subset"])
        selected_node = None

        # use selection
        if (self.options or {}).get("useSelection"):
            nodes = self.nodes

            if not (len(nodes) < 2):
                msg = ("Select only one node. "
                       "The node you want to connect to, "
                       "or tick off `Use selection`")
                self.log.error(msg)
                nuke.message(msg)
                return

            if len(nodes) == 0:
                msg = (
                    "No nodes selected. Please select a single node to connect"
                    " to or tick off `Use selection`"
                )
                self.log.error(msg)
                nuke.message(msg)
                return

            selected_node = nodes[0]
            inputs = [selected_node]
            outputs = selected_node.dependent()

            if instance:
                if (instance.name() in selected_node.name()):
                    selected_node = instance.dependencies()[0]

        # if the node already exists
        if instance:
            # collect inputs / outputs
            inputs = instance.dependencies()
            outputs = instance.dependent()
            selected_node = inputs[0]
            # remove the old one
            nuke.delete(instance)

        # recreate new
        write_data = {
            "nodeclass": self.n_class,
            "families": [self.family],
            "avalon": self.data
        }

        # add creator data
        creator_data = {"creator": self.__class__.__name__}
        self.data.update(creator_data)
        write_data.update(creator_data)

        self.log.info("Adding template path from plugin")
        write_data.update({
            "fpath_template": (
                "{work}/renders/nuke/{subset}/{subset}.{ext}")})

        _prenodes = [
            {
                "name": "FrameHold01",
                "class": "FrameHold",
                "knobs": [
                    ("first_frame", nuke.frame())
                ],
                "dependent": None
            }
        ]

        write_node = lib.create_write_node(
            self.name,
            write_data,
            input=selected_node,
            review=False,
            prenodes=_prenodes,
            farm=False,
            linked_knobs=["channels", "___", "first", "last", "use_limit"])

        # relinking to collected connections
        for i, input in enumerate(inputs):
            write_node.setInput(i, input)

        write_node.autoplace()

        for output in outputs:
            output.setInput(0, write_node)

        # link frame hold to group node
        write_node.begin()
        for n in nuke.allNodes():
            # get write node
            if n.Class() in "Write":
                w_node = n
        write_node.end()

        w_node["use_limit"].setValue(True)
        w_node["first"].setValue(nuke.frame())
        w_node["last"].setValue(nuke.frame())

        return write_node
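The FrameHold prenode is what pins the write to a single frame, while the group's inner Write node is clamped to that frame via `use_limit`/`first`/`last`. The manual equivalent of what this creator assembles, as a sketch:

    import nuke

    frame = nuke.frame()
    hold = nuke.createNode("FrameHold")
    hold["first_frame"].setValue(frame)

    write = nuke.createNode("Write")
    write.setInput(0, hold)
    write["use_limit"].setValue(True)   # render only the held frame
    write["first"].setValue(frame)
    write["last"].setValue(frame)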
openpype/hosts/nuke/plugins/inventory/repair_old_loaders.py (new file, 37 lines)
@@ -0,0 +1,37 @@
from avalon import api, style
from avalon.nuke import lib as anlib
from openpype.api import (
    Logger)


class RepairOldLoaders(api.InventoryAction):

    label = "Repair Old Loaders"
    icon = "gears"
    color = style.colors.alert

    log = Logger().get_logger(__name__)

    def process(self, containers):
        import nuke
        new_loader = "LoadClip"

        for cdata in containers:
            orig_loader = cdata["loader"]
            orig_name = cdata["objectName"]
            if orig_loader not in ["LoadSequence", "LoadMov"]:
                self.log.warning(
                    "This repair action only works on "
                    "`LoadSequence` and `LoadMov` loaders")
                continue

            new_name = orig_name.replace(orig_loader, new_loader)
            node = nuke.toNode(cdata["objectName"])

            cdata.update({
                "loader": new_loader,
                "objectName": new_name
            })
            node["name"].setValue(new_name)
            # write the updated data back to the avalon knob
            anlib.set_avalon_knob_data(node, cdata)
@@ -8,10 +8,10 @@ class SelectContainers(api.InventoryAction):
     color = "#d8d8d8"

     def process(self, containers):

         import nuke
         import avalon.nuke

-        nodes = [i["_node"] for i in containers]
+        nodes = [nuke.toNode(i["objectName"]) for i in containers]

         with avalon.nuke.viewer_update_and_undo_stop():
             # clear previous_selection
(deleted file, 68 lines: commented-out Fusion "Set Tool Color" inventory action)
@@ -1,68 +0,0 @@
# from avalon import api, style
# from avalon.vendor.Qt import QtGui, QtWidgets
#
# import avalon.fusion
#
#
# class FusionSetToolColor(api.InventoryAction):
#     """Update the color of the selected tools"""
#
#     label = "Set Tool Color"
#     icon = "plus"
#     color = "#d8d8d8"
#     _fallback_color = QtGui.QColor(1.0, 1.0, 1.0)
#
#     def process(self, containers):
#         """Color all selected tools the selected colors"""
#
#         result = []
#         comp = avalon.fusion.get_current_comp()
#
#         # Get tool color
#         first = containers[0]
#         tool = first["_node"]
#         color = tool.TileColor
#
#         if color is not None:
#             qcolor = QtGui.QColor().fromRgbF(color["R"], color["G"], color["B"])
#         else:
#             qcolor = self._fallback_color
#
#         # Launch pick color
#         picked_color = self.get_color_picker(qcolor)
#         if not picked_color:
#             return
#
#         with avalon.fusion.comp_lock_and_undo_chunk(comp):
#             for container in containers:
#                 # Convert color to RGB 0-1 floats
#                 rgb_f = picked_color.getRgbF()
#                 rgb_f_table = {"R": rgb_f[0], "G": rgb_f[1], "B": rgb_f[2]}
#
#                 # Update tool
#                 tool = container["_node"]
#                 tool.TileColor = rgb_f_table
#
#                 result.append(container)
#
#         return result
#
#     def get_color_picker(self, color):
#         """Launch color picker and return chosen color
#
#         Args:
#             color(QtGui.QColor): Start color to display
#
#         Returns:
#             QtGui.QColor
#
#         """
#
#         color_dialog = QtWidgets.QColorDialog(color)
#         color_dialog.setStyleSheet(style.load_stylesheet())
#
#         accepted = color_dialog.exec_()
#         if not accepted:
#             return
#
#         return color_dialog.selectedColor()
openpype/hosts/nuke/plugins/load/load_clip.py (new file, 371 lines)
@@ -0,0 +1,371 @@
import nuke
from avalon.vendor import qargparse
from avalon import api, io

from openpype.hosts.nuke.api.lib import (
    get_imageio_input_colorspace
)
from avalon.nuke import (
    containerise,
    update_container,
    viewer_update_and_undo_stop,
    maintained_selection
)
from openpype.hosts.nuke.api import plugin


class LoadClip(plugin.NukeLoader):
    """Load clip into Nuke.

    Either an image sequence or a video file.
    """

    families = [
        "source",
        "plate",
        "render",
        "prerender",
        "review"
    ]
    representations = [
        "exr",
        "dpx",
        "mov",
        "review",
        "mp4"
    ]

    label = "Load Clip"
    order = -20
    icon = "file-video-o"
    color = "white"

    script_start = int(nuke.root()["first_frame"].value())

    # option gui
    defaults = {
        "start_at_workfile": True
    }

    options = [
        qargparse.Boolean(
            "start_at_workfile",
            help="Load at workfile start frame",
            default=True
        )
    ]

    node_name_template = "{class_name}_{ext}"

    @classmethod
    def get_representations(cls):
        return (
            cls.representations
            + cls._representations
            + plugin.get_review_presets_config()
        )

    def load(self, context, name, namespace, options):

        is_sequence = len(context["representation"]["files"]) > 1

        file = self.fname.replace("\\", "/")

        start_at_workfile = options.get(
            "start_at_workfile", self.defaults["start_at_workfile"])

        version = context['version']
        version_data = version.get("data", {})
        repr_id = context["representation"]["_id"]
        colorspace = version_data.get("colorspace")
        iio_colorspace = get_imageio_input_colorspace(file)
        repr_cont = context["representation"]["context"]

        self.log.info("version_data: {}\n".format(version_data))
        self.log.debug(
            "Representation id `{}` ".format(repr_id))

        self.handle_start = version_data.get("handleStart", 0)
        self.handle_end = version_data.get("handleEnd", 0)

        first = version_data.get("frameStart", None)
        last = version_data.get("frameEnd", None)
        first -= self.handle_start
        last += self.handle_end

        if not is_sequence:
            duration = last - first + 1
            first = 1
            last = first + duration
        elif "#" not in file:
            frame = repr_cont.get("frame")
            assert frame, "Representation is not sequence"

            padding = len(frame)
            file = file.replace(frame, "#" * padding)

        # Fallback to asset name when namespace is None
        if namespace is None:
            namespace = context['asset']['name']

        if not file:
            self.log.warning(
                "Representation id `{}` is failing to load".format(repr_id))
            return

        name_data = {
            "asset": repr_cont["asset"],
            "subset": repr_cont["subset"],
            "representation": context["representation"]["name"],
            "ext": repr_cont["representation"],
            "id": context["representation"]["_id"],
            "class_name": self.__class__.__name__
        }

        read_name = self.node_name_template.format(**name_data)

        # Create the Loader with the filename path set
        read_node = nuke.createNode(
            "Read",
            "name {}".format(read_name))

        # to avoid multiple undo steps for rest of process
        # we will switch off undo-ing
        with viewer_update_and_undo_stop():
            read_node["file"].setValue(file)

            # Set colorspace defined in version data
            if colorspace:
                read_node["colorspace"].setValue(str(colorspace))
            elif iio_colorspace is not None:
                read_node["colorspace"].setValue(iio_colorspace)

            self.set_range_to_node(read_node, first, last, start_at_workfile)

            # add additional metadata from the version to imprint Avalon knob
            add_keys = ["frameStart", "frameEnd",
                        "source", "colorspace", "author", "fps", "version",
                        "handleStart", "handleEnd"]

            data_imprint = {}
            for k in add_keys:
                if k == 'version':
                    data_imprint.update({k: context["version"]['name']})
                else:
                    data_imprint.update(
                        {k: context["version"]['data'].get(k, str(None))})

            data_imprint.update({"objectName": read_name})

            read_node["tile_color"].setValue(int("0x4ecd25ff", 16))

            container = containerise(
                read_node,
                name=name,
                namespace=namespace,
                context=context,
                loader=self.__class__.__name__,
                data=data_imprint)

        if version_data.get("retime", None):
            self.make_retimes(read_node, version_data)

        self.set_as_member(read_node)

        return container

    def switch(self, container, representation):
        self.update(container, representation)

    def update(self, container, representation):
        """Update the Loader's path.

        Nuke automatically tries to reset some variables when changing
        the loader's path to a new file. These automatic changes are to its
        inputs:

        """

        is_sequence = len(representation["files"]) > 1

        read_node = nuke.toNode(container['objectName'])
        file = api.get_representation_path(representation).replace("\\", "/")

        start_at_workfile = bool("start at" in read_node['frame_mode'].value())

        version = io.find_one({
            "type": "version",
            "_id": representation["parent"]
        })
        version_data = version.get("data", {})
        repr_id = representation["_id"]
        colorspace = version_data.get("colorspace")
        iio_colorspace = get_imageio_input_colorspace(file)
        repr_cont = representation["context"]

        self.handle_start = version_data.get("handleStart", 0)
        self.handle_end = version_data.get("handleEnd", 0)

        first = version_data.get("frameStart", None)
        last = version_data.get("frameEnd", None)
        first -= self.handle_start
        last += self.handle_end

        if not is_sequence:
            duration = last - first + 1
            first = 1
            last = first + duration
        elif "#" not in file:
            frame = repr_cont.get("frame")
            assert frame, "Representation is not sequence"

            padding = len(frame)
            file = file.replace(frame, "#" * padding)

        if not file:
            self.log.warning(
                "Representation id `{}` is failing to load".format(repr_id))
            return

        read_node["file"].setValue(file)

        # to avoid multiple undo steps for rest of process
        # we will switch off undo-ing
        with viewer_update_and_undo_stop():

            # Set colorspace defined in version data
            if colorspace:
                read_node["colorspace"].setValue(str(colorspace))
            elif iio_colorspace is not None:
                read_node["colorspace"].setValue(iio_colorspace)

            self.set_range_to_node(read_node, first, last, start_at_workfile)

            updated_dict = {
                "representation": str(representation["_id"]),
                "frameStart": str(first),
                "frameEnd": str(last),
                "version": str(version.get("name")),
                "colorspace": colorspace,
                "source": version_data.get("source"),
                "handleStart": str(self.handle_start),
                "handleEnd": str(self.handle_end),
                "fps": str(version_data.get("fps")),
                "author": version_data.get("author"),
                "outputDir": version_data.get("outputDir"),
            }

            # change color of read_node
            # get all versions in list
            versions = io.find({
                "type": "version",
                "parent": version["parent"]
            }).distinct('name')

            max_version = max(versions)

            if version.get("name") not in [max_version]:
                read_node["tile_color"].setValue(int("0xd84f20ff", 16))
            else:
                read_node["tile_color"].setValue(int("0x4ecd25ff", 16))

            # Update the imprinted representation
            update_container(
                read_node,
                updated_dict
            )
            self.log.info(
                "updated to version: {}".format(version.get("name")))

            if version_data.get("retime", None):
                self.make_retimes(read_node, version_data)
            else:
                self.clear_members(read_node)

            self.set_as_member(read_node)

    def set_range_to_node(self, read_node, first, last, start_at_workfile):
        read_node['origfirst'].setValue(int(first))
        read_node['first'].setValue(int(first))
        read_node['origlast'].setValue(int(last))
        read_node['last'].setValue(int(last))

        # set start frame depending on workfile or version
        self.loader_shift(read_node, start_at_workfile)

    def remove(self, container):

        from avalon.nuke import viewer_update_and_undo_stop

        read_node = nuke.toNode(container['objectName'])
        assert read_node.Class() == "Read", "Must be Read"

        with viewer_update_and_undo_stop():
            members = self.get_members(read_node)
            nuke.delete(read_node)
            for member in members:
                nuke.delete(member)

    def make_retimes(self, parent_node, version_data):
        ''' Create all retime and timewarping nodes with copied animation '''
        speed = version_data.get('speed', 1)
        time_warp_nodes = version_data.get('timewarps', [])
        last_node = None
        source_id = self.get_container_id(parent_node)
        self.log.info("__ source_id: {}".format(source_id))
        self.log.info("__ members: {}".format(self.get_members(parent_node)))
        dependent_nodes = self.clear_members(parent_node)

        with maintained_selection():
            parent_node['selected'].setValue(True)

            if speed != 1:
                rtn = nuke.createNode(
                    "Retime",
                    "speed {}".format(speed))

                rtn["before"].setValue("continue")
                rtn["after"].setValue("continue")
                rtn["input.first_lock"].setValue(True)
                rtn["input.first"].setValue(
                    self.script_start
                )
                self.set_as_member(rtn)
                last_node = rtn

            if time_warp_nodes != []:
                start_anim = self.script_start + (self.handle_start / speed)
                for timewarp in time_warp_nodes:
                    twn = nuke.createNode(
                        timewarp["Class"],
                        "name {}".format(timewarp["name"])
                    )
                    if isinstance(timewarp["lookup"], list):
                        # if array for animation
                        twn["lookup"].setAnimated()
                        for i, value in enumerate(timewarp["lookup"]):
                            twn["lookup"].setValueAt(
                                (start_anim + i) + value,
                                (start_anim + i))
                    else:
                        # if static value `int`
                        twn["lookup"].setValue(timewarp["lookup"])

                    self.set_as_member(twn)
                    last_node = twn

            if dependent_nodes:
                # connect to original inputs
                for i, n in enumerate(dependent_nodes):
                    last_node.setInput(i, n)

    def loader_shift(self, read_node, workfile_start=False):
        """ Set start frame of read node to a workfile start

        Args:
            read_node (nuke.Node): The nuke's read node
            workfile_start (bool): set workfile start frame if true

        """
        if workfile_start:
            read_node['frame_mode'].setValue("start at")
            read_node['frame'].setValue(str(self.script_start))
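Both `load` and `update` above encode version state in the Read node's `tile_color`: green (`0x4ecd25ff`) when the loaded version is the latest, red-orange (`0xd84f20ff`) when it is outdated. Factored out, the convention looks roughly like this (a sketch with a hypothetical helper; the commit keeps the logic inline):

    GREEN = int("0x4ecd25ff", 16)   # latest version
    ORANGE = int("0xd84f20ff", 16)  # outdated version

    def apply_version_tile_color(read_node, version_name, max_version):
        # Hypothetical helper mirroring the inline logic above.
        is_latest = version_name == max_version
        read_node["tile_color"].setValue(GREEN if is_latest else ORANGE)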
@@ -12,8 +12,16 @@ from openpype.hosts.nuke.api.lib import (
 class LoadImage(api.Loader):
     """Load still image into Nuke"""

-    families = ["render", "source", "plate", "review", "image"]
-    representations = ["exr", "dpx", "jpg", "jpeg", "png", "psd"]
+    families = [
+        "render2d",
+        "source",
+        "plate",
+        "render",
+        "prerender",
+        "review",
+        "image"
+    ]
+    representations = ["exr", "dpx", "jpg", "jpeg", "png", "psd", "tiff"]

     label = "Load Image"
     order = -10

@@ -33,6 +41,10 @@ class LoadImage(api.Loader):
         )
     ]

+    @classmethod
+    def get_representations(cls):
+        return cls.representations + cls._representations
+
     def load(self, context, name, namespace, options):
         from avalon.nuke import (
             containerise,
(deleted file, 347 lines: the old `LoadMov` loader, replaced by `LoadClip` above)
@@ -1,347 +0,0 @@
import nuke
from avalon.vendor import qargparse
from avalon import api, io
from openpype.api import get_current_project_settings
from openpype.hosts.nuke.api.lib import (
    get_imageio_input_colorspace
)


def add_review_presets_config():
    returning = {
        "families": list(),
        "representations": list()
    }
    settings = get_current_project_settings()
    review_profiles = (
        settings["global"]
        ["publish"]
        ["ExtractReview"]
        ["profiles"]
    )

    outputs = {}
    for profile in review_profiles:
        outputs.update(profile.get("outputs", {}))

    for output, properties in outputs.items():
        returning["representations"].append(output)
        returning["families"] += properties.get("families", [])

    return returning


class LoadMov(api.Loader):
    """Load mov file into Nuke"""
    families = ["render", "source", "plate", "review"]
    representations = ["mov", "review", "mp4"]

    label = "Load mov"
    order = -10
    icon = "code-fork"
    color = "orange"

    first_frame = nuke.root()["first_frame"].value()

    # options gui
    defaults = {
        "start_at_workfile": True
    }

    options = [
        qargparse.Boolean(
            "start_at_workfile",
            help="Load at workfile start frame",
            default=True
        )
    ]

    node_name_template = "{class_name}_{ext}"

    def load(self, context, name, namespace, options):
        from avalon.nuke import (
            containerise,
            viewer_update_and_undo_stop
        )

        start_at_workfile = options.get(
            "start_at_workfile", self.defaults["start_at_workfile"])

        version = context['version']
        version_data = version.get("data", {})
        repr_id = context["representation"]["_id"]

        self.handle_start = version_data.get("handleStart", 0)
        self.handle_end = version_data.get("handleEnd", 0)

        orig_first = version_data.get("frameStart")
        orig_last = version_data.get("frameEnd")
        diff = orig_first - 1

        first = orig_first - diff
        last = orig_last - diff

        colorspace = version_data.get("colorspace")
        repr_cont = context["representation"]["context"]

        self.log.debug(
            "Representation id `{}` ".format(repr_id))

        context["representation"]["_id"]
        # create handles offset (only to last, because of mov)
        last += self.handle_start + self.handle_end

        # Fallback to asset name when namespace is None
        if namespace is None:
            namespace = context['asset']['name']

        file = self.fname

        if not file:
            self.log.warning(
                "Representation id `{}` is failing to load".format(repr_id))
            return

        file = file.replace("\\", "/")

        name_data = {
            "asset": repr_cont["asset"],
            "subset": repr_cont["subset"],
            "representation": context["representation"]["name"],
            "ext": repr_cont["representation"],
            "id": context["representation"]["_id"],
            "class_name": self.__class__.__name__
        }

        read_name = self.node_name_template.format(**name_data)

        read_node = nuke.createNode(
            "Read",
            "name {}".format(read_name)
        )

        # to avoid multiple undo steps for rest of process
        # we will switch off undo-ing
        with viewer_update_and_undo_stop():
            read_node["file"].setValue(file)

            read_node["origfirst"].setValue(first)
            read_node["first"].setValue(first)
            read_node["origlast"].setValue(last)
            read_node["last"].setValue(last)
            read_node['frame_mode'].setValue("start at")

            if start_at_workfile:
                # start at workfile start
                read_node['frame'].setValue(str(self.first_frame))
            else:
                # start at version frame start
                read_node['frame'].setValue(
                    str(orig_first - self.handle_start))

            if colorspace:
                read_node["colorspace"].setValue(str(colorspace))

            preset_clrsp = get_imageio_input_colorspace(file)

            if preset_clrsp is not None:
                read_node["colorspace"].setValue(preset_clrsp)

            # add additional metadata from the version to imprint Avalon knob
            add_keys = [
                "frameStart", "frameEnd", "handles", "source", "author",
                "fps", "version", "handleStart", "handleEnd"
            ]

            data_imprint = {}
            for key in add_keys:
                if key == 'version':
                    data_imprint.update({
                        key: context["version"]['name']
                    })
                else:
                    data_imprint.update({
                        key: context["version"]['data'].get(key, str(None))
                    })

            data_imprint.update({"objectName": read_name})

            read_node["tile_color"].setValue(int("0x4ecd25ff", 16))

            if version_data.get("retime", None):
                speed = version_data.get("speed", 1)
                time_warp_nodes = version_data.get("timewarps", [])
                self.make_retimes(speed, time_warp_nodes)

            return containerise(
                read_node,
                name=name,
                namespace=namespace,
                context=context,
                loader=self.__class__.__name__,
                data=data_imprint
            )

    def switch(self, container, representation):
        self.update(container, representation)

    def update(self, container, representation):
        """Update the Loader's path.

        Nuke automatically tries to reset some variables when changing
        the loader's path to a new file. These automatic changes are to its
        inputs:

        """

        from avalon.nuke import (
            update_container
        )

        read_node = nuke.toNode(container['objectName'])

        assert read_node.Class() == "Read", "Must be Read"

        file = self.fname

        if not file:
            repr_id = representation["_id"]
            self.log.warning(
                "Representation id `{}` is failing to load".format(repr_id))
            return

        file = file.replace("\\", "/")

        # Get start frame from version data
        version = io.find_one({
            "type": "version",
            "_id": representation["parent"]
        })

        # get all versions in list
        versions = io.find({
            "type": "version",
            "parent": version["parent"]
        }).distinct('name')

        max_version = max(versions)

        version_data = version.get("data", {})

        orig_first = version_data.get("frameStart")
        orig_last = version_data.get("frameEnd")
        diff = orig_first - 1

        # set first to 1
        first = orig_first - diff
        last = orig_last - diff
        self.handle_start = version_data.get("handleStart", 0)
        self.handle_end = version_data.get("handleEnd", 0)
        colorspace = version_data.get("colorspace")

        if first is None:
            self.log.warning((
                "Missing start frame for updated version, "
                "assuming starts at frame 0 for: "
                "{} ({})").format(
                    read_node['name'].value(), representation))
            first = 0

        # create handles offset (only to last, because of mov)
        last += self.handle_start + self.handle_end

        read_node["file"].setValue(file)

        # Set the global in to the start frame of the sequence
        read_node["origfirst"].setValue(first)
        read_node["first"].setValue(first)
        read_node["origlast"].setValue(last)
        read_node["last"].setValue(last)
        read_node['frame_mode'].setValue("start at")

        if int(float(self.first_frame)) == int(
                float(read_node['frame'].value())):
            # start at workfile start
            read_node['frame'].setValue(str(self.first_frame))
        else:
            # start at version frame start
            read_node['frame'].setValue(str(orig_first - self.handle_start))

        if colorspace:
            read_node["colorspace"].setValue(str(colorspace))

        preset_clrsp = get_imageio_input_colorspace(file)

        if preset_clrsp is not None:
            read_node["colorspace"].setValue(preset_clrsp)

        updated_dict = {}
        updated_dict.update({
            "representation": str(representation["_id"]),
            "frameStart": str(first),
            "frameEnd": str(last),
            "version": str(version.get("name")),
            "colorspace": version_data.get("colorspace"),
            "source": version_data.get("source"),
            "handleStart": str(self.handle_start),
            "handleEnd": str(self.handle_end),
            "fps": str(version_data.get("fps")),
            "author": version_data.get("author"),
            "outputDir": version_data.get("outputDir")
        })

        # change color of node
        if version.get("name") not in [max_version]:
            read_node["tile_color"].setValue(int("0xd84f20ff", 16))
        else:
            read_node["tile_color"].setValue(int("0x4ecd25ff", 16))

        if version_data.get("retime", None):
            speed = version_data.get("speed", 1)
            time_warp_nodes = version_data.get("timewarps", [])
            self.make_retimes(speed, time_warp_nodes)

        # Update the imprinted representation
        update_container(
            read_node, updated_dict
        )
        self.log.info("updated to version: {}".format(version.get("name")))

    def remove(self, container):

        from avalon.nuke import viewer_update_and_undo_stop

        read_node = nuke.toNode(container['objectName'])
        assert read_node.Class() == "Read", "Must be Read"

        with viewer_update_and_undo_stop():
            nuke.delete(read_node)

    def make_retimes(self, speed, time_warp_nodes):
        ''' Create all retime and timewarping nodes with copied animation '''
        if speed != 1:
            rtn = nuke.createNode(
                "Retime",
                "speed {}".format(speed))
            rtn["before"].setValue("continue")
            rtn["after"].setValue("continue")
            rtn["input.first_lock"].setValue(True)
            rtn["input.first"].setValue(
                self.first_frame
            )

        if time_warp_nodes != []:
            start_anim = self.first_frame + (self.handle_start / speed)
            for timewarp in time_warp_nodes:
                twn = nuke.createNode(timewarp["Class"],
                                      "name {}".format(timewarp["name"]))
                if isinstance(timewarp["lookup"], list):
                    # if array for animation
                    twn["lookup"].setAnimated()
                    for i, value in enumerate(timewarp["lookup"]):
                        twn["lookup"].setValueAt(
                            (start_anim + i) + value,
                            (start_anim + i))
                else:
                    # if static value `int`
                    twn["lookup"].setValue(timewarp["lookup"])
(deleted file, 320 lines: the old `LoadSequence` loader, replaced by `LoadClip` above)
@@ -1,320 +0,0 @@
import nuke
from avalon.vendor import qargparse
from avalon import api, io
from openpype.hosts.nuke.api.lib import (
    get_imageio_input_colorspace
)


class LoadSequence(api.Loader):
    """Load image sequence into Nuke"""

    families = ["render", "source", "plate", "review"]
    representations = ["exr", "dpx"]

    label = "Load Image Sequence"
    order = -20
    icon = "file-video-o"
    color = "white"

    script_start = nuke.root()["first_frame"].value()

    # option gui
    defaults = {
        "start_at_workfile": True
    }

    options = [
        qargparse.Boolean(
            "start_at_workfile",
            help="Load at workfile start frame",
            default=True
        )
    ]

    node_name_template = "{class_name}_{ext}"

    def load(self, context, name, namespace, options):
        from avalon.nuke import (
            containerise,
            viewer_update_and_undo_stop
        )

        start_at_workfile = options.get(
            "start_at_workfile", self.defaults["start_at_workfile"])

        version = context['version']
        version_data = version.get("data", {})
        repr_id = context["representation"]["_id"]

        self.log.info("version_data: {}\n".format(version_data))
        self.log.debug(
            "Representation id `{}` ".format(repr_id))

        self.first_frame = int(nuke.root()["first_frame"].getValue())
        self.handle_start = version_data.get("handleStart", 0)
        self.handle_end = version_data.get("handleEnd", 0)

        first = version_data.get("frameStart", None)
        last = version_data.get("frameEnd", None)

        # Fallback to asset name when namespace is None
        if namespace is None:
            namespace = context['asset']['name']

        first -= self.handle_start
        last += self.handle_end

        file = self.fname

        if not file:
            repr_id = context["representation"]["_id"]
            self.log.warning(
                "Representation id `{}` is failing to load".format(repr_id))
            return

        file = file.replace("\\", "/")

        repr_cont = context["representation"]["context"]
        assert repr_cont.get("frame"), "Representation is not sequence"

        if "#" not in file:
            frame = repr_cont.get("frame")
            if frame:
                padding = len(frame)
                file = file.replace(frame, "#" * padding)

        name_data = {
            "asset": repr_cont["asset"],
            "subset": repr_cont["subset"],
            "representation": context["representation"]["name"],
            "ext": repr_cont["representation"],
            "id": context["representation"]["_id"],
            "class_name": self.__class__.__name__
        }

        read_name = self.node_name_template.format(**name_data)

        # Create the Loader with the filename path set
        read_node = nuke.createNode(
            "Read",
            "name {}".format(read_name))

        # to avoid multiple undo steps for rest of process
        # we will switch off undo-ing
        with viewer_update_and_undo_stop():
            read_node["file"].setValue(file)

            # Set colorspace defined in version data
            colorspace = context["version"]["data"].get("colorspace")
            if colorspace:
                read_node["colorspace"].setValue(str(colorspace))

            preset_clrsp = get_imageio_input_colorspace(file)

            if preset_clrsp is not None:
                read_node["colorspace"].setValue(preset_clrsp)

            # set start frame depending on workfile or version
            self.loader_shift(read_node, start_at_workfile)
            read_node["origfirst"].setValue(int(first))
            read_node["first"].setValue(int(first))
            read_node["origlast"].setValue(int(last))
            read_node["last"].setValue(int(last))

            # add additional metadata from the version to imprint Avalon knob
            add_keys = ["frameStart", "frameEnd",
                        "source", "colorspace", "author", "fps", "version",
                        "handleStart", "handleEnd"]

            data_imprint = {}
            for k in add_keys:
                if k == 'version':
                    data_imprint.update({k: context["version"]['name']})
                else:
                    data_imprint.update(
                        {k: context["version"]['data'].get(k, str(None))})

            data_imprint.update({"objectName": read_name})

            read_node["tile_color"].setValue(int("0x4ecd25ff", 16))

            if version_data.get("retime", None):
                speed = version_data.get("speed", 1)
                time_warp_nodes = version_data.get("timewarps", [])
                self.make_retimes(speed, time_warp_nodes)

            return containerise(read_node,
                                name=name,
                                namespace=namespace,
                                context=context,
                                loader=self.__class__.__name__,
                                data=data_imprint)

    def switch(self, container, representation):
        self.update(container, representation)

    def update(self, container, representation):
        """Update the Loader's path.

        Nuke automatically tries to reset some variables when changing
        the loader's path to a new file. These automatic changes are to its
        inputs:

        """

        from avalon.nuke import (
            update_container
        )

        read_node = nuke.toNode(container['objectName'])

        assert read_node.Class() == "Read", "Must be Read"

        repr_cont = representation["context"]
        assert repr_cont.get("frame"), "Representation is not sequence"

        file = api.get_representation_path(representation)

        if not file:
            repr_id = representation["_id"]
            self.log.warning(
                "Representation id `{}` is failing to load".format(repr_id))
            return

        file = file.replace("\\", "/")

        if "#" not in file:
            frame = repr_cont.get("frame")
            if frame:
                padding = len(frame)
                file = file.replace(frame, "#" * padding)

        # Get start frame from version data
        version = io.find_one({
            "type": "version",
            "_id": representation["parent"]
        })

        # get all versions in list
        versions = io.find({
            "type": "version",
            "parent": version["parent"]
        }).distinct('name')

        max_version = max(versions)

        version_data = version.get("data", {})

        self.first_frame = int(nuke.root()["first_frame"].getValue())
        self.handle_start = version_data.get("handleStart", 0)
        self.handle_end = version_data.get("handleEnd", 0)

        first = version_data.get("frameStart")
        last = version_data.get("frameEnd")

        if first is None:
            self.log.warning(
                "Missing start frame for updated version, "
                "assuming starts at frame 0 for: "
                "{} ({})".format(read_node['name'].value(), representation))
            first = 0

        first -= self.handle_start
        last += self.handle_end

        read_node["file"].setValue(file)

        # set start frame depending on workfile or version
        self.loader_shift(
            read_node,
            bool("start at" in read_node['frame_mode'].value()))

        read_node["origfirst"].setValue(int(first))
        read_node["first"].setValue(int(first))
        read_node["origlast"].setValue(int(last))
        read_node["last"].setValue(int(last))

        updated_dict = {}
        updated_dict.update({
            "representation": str(representation["_id"]),
            "frameStart": str(first),
            "frameEnd": str(last),
            "version": str(version.get("name")),
            "colorspace": version_data.get("colorspace"),
            "source": version_data.get("source"),
            "handleStart": str(self.handle_start),
            "handleEnd": str(self.handle_end),
            "fps": str(version_data.get("fps")),
            "author": version_data.get("author"),
            "outputDir": version_data.get("outputDir"),
        })

        # change color of read_node
        if version.get("name") not in [max_version]:
            read_node["tile_color"].setValue(int("0xd84f20ff", 16))
        else:
            read_node["tile_color"].setValue(int("0x4ecd25ff", 16))

        if version_data.get("retime", None):
            speed = version_data.get("speed", 1)
            time_warp_nodes = version_data.get("timewarps", [])
            self.make_retimes(speed, time_warp_nodes)

        # Update the imprinted representation
        update_container(
            read_node,
            updated_dict
        )
        self.log.info("updated to version: {}".format(version.get("name")))

    def remove(self, container):

        from avalon.nuke import viewer_update_and_undo_stop

        read_node = nuke.toNode(container['objectName'])
        assert read_node.Class() == "Read", "Must be Read"

        with viewer_update_and_undo_stop():
            nuke.delete(read_node)

    def make_retimes(self, speed, time_warp_nodes):
        ''' Create all retime and timewarping nodes with copied animation '''
        if speed != 1:
            rtn = nuke.createNode(
                "Retime",
                "speed {}".format(speed))
            rtn["before"].setValue("continue")
            rtn["after"].setValue("continue")
            rtn["input.first_lock"].setValue(True)
            rtn["input.first"].setValue(
                self.first_frame
            )

        if time_warp_nodes != []:
            start_anim = self.first_frame + (self.handle_start / speed)
            for timewarp in time_warp_nodes:
                twn = nuke.createNode(timewarp["Class"],
                                      "name {}".format(timewarp["name"]))
                if isinstance(timewarp["lookup"], list):
                    # if array for animation
                    twn["lookup"].setAnimated()
                    for i, value in enumerate(timewarp["lookup"]):
                        twn["lookup"].setValueAt(
                            (start_anim + i) + value,
                            (start_anim + i))
                else:
                    # if static value `int`
                    twn["lookup"].setValue(timewarp["lookup"])

    def loader_shift(self, read_node, workfile_start=False):
        """ Set start frame of read node to a workfile start

        Args:
            read_node (nuke.Node): The nuke's read node
            workfile_start (bool): set workfile start frame if true

        """
        if workfile_start:
            read_node['frame_mode'].setValue("start at")
            read_node['frame'].setValue(str(self.script_start))
@@ -17,7 +17,7 @@ class NukeRenderLocal(openpype.api.Extractor):
     order = pyblish.api.ExtractorOrder
     label = "Render Local"
     hosts = ["nuke"]
-    families = ["render.local", "prerender.local"]
+    families = ["render.local", "prerender.local", "still.local"]

     def process(self, instance):
         families = instance.data["families"]
@@ -66,13 +66,23 @@ class NukeRenderLocal(openpype.api.Extractor):
         instance.data["representations"] = []

         collected_frames = os.listdir(out_dir)
-        repre = {
-            'name': ext,
-            'ext': ext,
-            'frameStart': "%0{}d".format(len(str(last_frame))) % first_frame,
-            'files': collected_frames,
-            "stagingDir": out_dir
-        }
+        if len(collected_frames) == 1:
+            repre = {
+                'name': ext,
+                'ext': ext,
+                'files': collected_frames.pop(),
+                "stagingDir": out_dir
+            }
+        else:
+            repre = {
+                'name': ext,
+                'ext': ext,
+                'frameStart': "%0{}d".format(
+                    len(str(last_frame))) % first_frame,
+                'files': collected_frames,
+                "stagingDir": out_dir
+            }
         instance.data["representations"].append(repre)

         self.log.info("Extracted instance '{0}' to: {1}".format(
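The branch above changes the representation payload's shape with the frame count: a single rendered frame publishes `files` as a plain string and drops `frameStart`, while a sequence keeps the full list plus a `frameStart` zero-padded to the width of the last frame number. Illustrative payloads (all values assumed):

    # Single frame: `files` is a string, no `frameStart` key.
    single = {
        "name": "exr",
        "ext": "exr",
        "files": "subsetMain.0001.exr",
        "stagingDir": "/path/to/staging",
    }

    # Sequence: `files` is a list, `frameStart` padded like "0997".
    sequence = {
        "name": "exr",
        "ext": "exr",
        "frameStart": "0997",
        "files": ["subsetMain.0997.exr", "subsetMain.0998.exr"],
        "stagingDir": "/path/to/staging",
    }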
@@ -89,6 +99,9 @@ class NukeRenderLocal(openpype.api.Extractor):
             instance.data['family'] = 'prerender'
             families.remove('prerender.local')
             families.insert(0, "prerender")
+        elif "still.local" in families:
+            instance.data['family'] = 'image'
+            families.remove('still.local')
         instance.data["families"] = families

         collections, remainder = clique.assemble(collected_frames)
|
|||
|
|
@ -64,7 +64,7 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
|
|||
)
|
||||
|
||||
if [fm for fm in _families_test
|
||||
if fm in ["render", "prerender"]]:
|
||||
if fm in ["render", "prerender", "still"]]:
|
||||
if "representations" not in instance.data:
|
||||
instance.data["representations"] = list()
|
||||
|
||||
|
|
@@ -100,7 +100,13 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
                         frame_start_str, frame_slate_str)
                     collected_frames.insert(0, slate_frame)

-                representation['files'] = collected_frames
+                if collected_frames_len == 1:
+                    representation['files'] = collected_frames.pop()
+                    if "still" in _families_test:
+                        instance.data['family'] = 'image'
+                        instance.data["families"].remove('still')
+                else:
+                    representation['files'] = collected_frames
                 instance.data["representations"].append(representation)
             except Exception:
                 instance.data["representations"].append(representation)
|
|||
|
|
@ -0,0 +1,110 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
"""Validate if instance asset is the same as context asset."""
|
||||
from __future__ import absolute_import
|
||||
|
||||
import nuke
|
||||
|
||||
import pyblish.api
|
||||
import openpype.api
|
||||
import avalon.nuke.lib
|
||||
import openpype.hosts.nuke.api as nuke_api
|
||||
|
||||
|
||||
class SelectInvalidInstances(pyblish.api.Action):
|
||||
"""Select invalid instances in Outliner."""
|
||||
|
||||
label = "Select Instances"
|
||||
icon = "briefcase"
|
||||
on = "failed"
|
||||
|
||||
def process(self, context, plugin):
|
||||
"""Process invalid validators and select invalid instances."""
|
||||
# Get the errored instances
|
||||
failed = []
|
||||
for result in context.data["results"]:
|
||||
if (
|
||||
result["error"] is None
|
||||
or result["instance"] is None
|
||||
or result["instance"] in failed
|
||||
or result["plugin"] != plugin
|
||||
):
|
||||
continue
|
||||
|
||||
failed.append(result["instance"])
|
||||
|
||||
# Apply pyblish.logic to get the instances for the plug-in
|
||||
instances = pyblish.api.instances_by_plugin(failed, plugin)
|
||||
|
||||
if instances:
|
||||
self.log.info(
|
||||
"Selecting invalid nodes: %s" % ", ".join(
|
||||
[str(x) for x in instances]
|
||||
)
|
||||
)
|
||||
self.select(instances)
|
||||
else:
|
||||
self.log.info("No invalid nodes found.")
|
||||
self.deselect()
|
||||
|
||||
def select(self, instances):
|
||||
avalon.nuke.lib.select_nodes(
|
||||
[nuke.toNode(str(x)) for x in instances]
|
||||
)
|
||||
|
||||
def deselect(self):
|
||||
avalon.nuke.lib.reset_selection()
|
||||
|
||||
|
||||
class RepairSelectInvalidInstances(pyblish.api.Action):
|
||||
"""Repair the instance asset."""
|
||||
|
||||
label = "Repair"
|
||||
icon = "wrench"
|
||||
on = "failed"
|
||||
|
||||
def process(self, context, plugin):
|
||||
# Get the errored instances
|
||||
failed = []
|
||||
for result in context.data["results"]:
|
||||
if (
|
||||
result["error"] is None
|
||||
or result["instance"] is None
|
||||
or result["instance"] in failed
|
||||
or result["plugin"] != plugin
|
||||
):
|
||||
continue
|
||||
|
||||
failed.append(result["instance"])
|
||||
|
||||
# Apply pyblish.logic to get the instances for the plug-in
|
||||
instances = pyblish.api.instances_by_plugin(failed, plugin)
|
||||
|
||||
context_asset = context.data["assetEntity"]["name"]
|
||||
for instance in instances:
|
||||
origin_node = instance[0]
|
||||
nuke_api.lib.recreate_instance(
|
||||
origin_node, avalon_data={"asset": context_asset}
|
||||
)
|
||||
|
||||
|
||||
class ValidateInstanceInContext(pyblish.api.InstancePlugin):
|
||||
"""Validator to check if instance asset match context asset.
|
||||
|
||||
When working in per-shot style you always publish data in context of
|
||||
current asset (shot). This validator checks if this is so. It is optional
|
||||
so it can be disabled when needed.
|
||||
|
||||
Action on this validator will select invalid instances in Outliner.
|
||||
"""
|
||||
|
||||
order = openpype.api.ValidateContentsOrder
|
||||
label = "Instance in same Context"
|
||||
hosts = ["nuke"]
|
||||
actions = [SelectInvalidInstances, RepairSelectInvalidInstances]
|
||||
optional = True
|
||||
|
||||
def process(self, instance):
|
||||
asset = instance.data.get("asset")
|
||||
context_asset = instance.context.data["assetEntity"]["name"]
|
||||
msg = "{} has asset {}".format(instance.name, asset)
|
||||
assert asset == context_asset, msg
|
||||
|
|
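Like any pyblish plugin, this validator is normally discovered from a registered plugin path; for a quick local test it can also be registered directly. A sketch, assuming a working publish context inside Nuke:

    import pyblish.api
    import pyblish.util

    pyblish.api.register_plugin(ValidateInstanceInContext)
    context = pyblish.util.publish()  # runs collection through integration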
@@ -56,8 +56,8 @@ class ValidateOutputResolution(pyblish.api.InstancePlugin):

     def process(self, instance):

-        # Skip bounding box check if a crop node exists.
-        if instance[0].dependencies()[0].Class() == "Crop":
+        # Skip bounding box check if a reformat node exists.
+        if instance[0].dependencies()[0].Class() == "Reformat":
             return

         msg = "Bounding box is outside the format."
Some files were not shown because too many files have changed in this diff.