Merge branch 'develop' into feature/OP-2637_Houdini-Farm-publishingrendering

This commit is contained in:
Ondřej Samohel 2023-06-05 10:27:42 +02:00 committed by GitHub
commit d2fff3d0b8
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
165 changed files with 5837 additions and 1867 deletions

View file

@@ -1,6 +1,6 @@
{
"projectName": "OpenPype",
"projectOwner": "pypeclub",
"projectOwner": "ynput",
"repoType": "github",
"repoHost": "https://github.com",
"files": [
@@ -319,8 +319,18 @@
"code",
"doc"
]
},
{
"login": "movalex",
"name": "Alexey Bogomolov",
"avatar_url": "https://avatars.githubusercontent.com/u/11698866?v=4",
"profile": "http://abogomolov.com",
"contributions": [
"code"
]
}
],
"contributorsPerLine": 7,
"skipCi": true
"skipCi": true,
"commitType": "docs"
}

View file

@@ -35,6 +35,12 @@ body:
label: Version
description: What version are you running? Look to OpenPype Tray
options:
- 3.15.10-nightly.1
- 3.15.9
- 3.15.9-nightly.2
- 3.15.9-nightly.1
- 3.15.8
- 3.15.8-nightly.3
- 3.15.8-nightly.2
- 3.15.8-nightly.1
- 3.15.7
@@ -129,12 +135,6 @@ body:
- 3.14.3-nightly.2
- 3.14.3-nightly.1
- 3.14.2
- 3.14.2-nightly.5
- 3.14.2-nightly.4
- 3.14.2-nightly.3
- 3.14.2-nightly.2
- 3.14.2-nightly.1
- 3.14.1
validations:
required: true
- type: dropdown

View file

@@ -1,6 +1,639 @@
# Changelog
## [3.15.9](https://github.com/ynput/OpenPype/tree/3.15.9)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.8...3.15.9)
### **🆕 New features**
<details>
<summary>Blender: Implemented Loading of Alembic Camera <a href="https://github.com/ynput/OpenPype/pull/4990">#4990</a></summary>
Implemented loading of Alembic cameras in Blender.
___
</details>
<details>
<summary>Unreal: Implemented Creator, Loader and Extractor for Levels <a href="https://github.com/ynput/OpenPype/pull/5008">#5008</a></summary>
Creator, Loader and Extractor for Unreal Levels have been implemented.
___
</details>
### **🚀 Enhancements**
<details>
<summary>Blender: Added setting for base unit scale <a href="https://github.com/ynput/OpenPype/pull/4987">#4987</a></summary>
A setting for the base unit scale has been added for Blender. The unit scale is automatically applied when opening a file or creating a new one.
___
</details>
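A minimal sketch of how the new setting is consumed, mirroring the handlers added to the Blender host integration later in this diff (the wrapper function name is illustrative only):

```python
import bpy

from openpype.settings import get_project_settings


def apply_project_unit_scale(project_name):
    """Apply the project's configured base unit scale to the current scene."""
    settings = get_project_settings(project_name)
    unit_scale_settings = settings["blender"]["unit_scale_settings"]
    if unit_scale_settings["enabled"]:
        # `base_file_unit_scale` is the value exposed by the new setting.
        scale = unit_scale_settings["base_file_unit_scale"]
        bpy.context.scene.unit_settings.scale_length = scale
```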
<details>
<summary>Unreal: Changed naming and path of Camera Levels <a href="https://github.com/ynput/OpenPype/pull/5010">#5010</a></summary>
The levels created for the camera in Unreal now include `_camera` in the name, to be better identifiable, and are placed in the camera folder.
___
</details>
<details>
<summary>Settings: Added option to nest settings templates <a href="https://github.com/ynput/OpenPype/pull/5022">#5022</a></summary>
It is now possible to nest settings templates inside other templates.
___
</details>
<details>
<summary>Enhancement/publisher: Remove "hit play to continue" label on continue <a href="https://github.com/ynput/OpenPype/pull/5029">#5029</a></summary>
Removes the "hit play to continue" message on continue so that it no longer shows once play has been clicked.
___
</details>
<details>
<summary>Ftrack: Limit number of ftrack events to query at once <a href="https://github.com/ynput/OpenPype/pull/5033">#5033</a></summary>
Limits the number of ftrack events received from MongoDB at once to 100.
___
</details>
<details>
<summary>General: Small code cleanups <a href="https://github.com/ynput/OpenPype/pull/5034">#5034</a></summary>
Small code cleanup and updates.
___
</details>
<details>
<summary>Global: collect frames to fix with settings <a href="https://github.com/ynput/OpenPype/pull/5036">#5036</a></summary>
Settings for `Collect Frames to Fix` allow disabling the plugin per project. The `Rewriting latest version` attribute can also be hidden via settings.
___
</details>
<details>
<summary>General: Publish plugin apply settings can expect only project settings <a href="https://github.com/ynput/OpenPype/pull/5037">#5037</a></summary>
Only project settings are passed to the optional `apply_settings` method if the method expects a single argument.
___
</details>
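A hedged sketch of what this means for plugin authors, assuming the conventional `apply_settings` classmethod on a publish plugin; the plugin class and settings keys below are hypothetical:

```python
import pyblish.api


class CollectExampleData(pyblish.api.InstancePlugin):
    """Hypothetical collector used only to illustrate the new behaviour."""

    label = "Collect Example Data"
    order = pyblish.api.CollectorOrder
    enabled = True

    @classmethod
    def apply_settings(cls, project_settings):
        # A single-argument `apply_settings` now receives only the project
        # settings; the two-argument form
        # `apply_settings(cls, project_settings, system_settings)` keeps working.
        plugin_settings = project_settings.get("example_addon", {}).get(
            "publish", {}).get(cls.__name__, {})
        cls.enabled = plugin_settings.get("enabled", cls.enabled)

    def process(self, instance):
        self.log.debug("Processing %s", instance)
```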
### **🐛 Bug fixes**
<details>
<summary>Maya: Load Assembly fix invalid imports <a href="https://github.com/ynput/OpenPype/pull/4859">#4859</a></summary>
Refactors imports so they are now correct.
___
</details>
<details>
<summary>Maya: Skipping rendersetup for members. <a href="https://github.com/ynput/OpenPype/pull/4973">#4973</a></summary>
When publishing a `rendersetup`, the object set is, and should be, empty.
___
</details>
<details>
<summary>Maya: Validate Rig Output IDs <a href="https://github.com/ynput/OpenPype/pull/5016">#5016</a></summary>
Absolute node names were not used, so the plugin did not fetch the nodes properly. A missing pymel command was also addressed.
___
</details>
<details>
<summary>Deadline: escape rootless path in publish job <a href="https://github.com/ynput/OpenPype/pull/4910">#4910</a></summary>
If the publish path of a Deadline job contains spaces or other special characters, the command was failing because the path wasn't properly escaped. This fixes it.
___
</details>
<details>
<summary>General: Company name and URL changed <a href="https://github.com/ynput/OpenPype/pull/4974">#4974</a></summary>
The existing records in inno_setup were obsolete; they have been updated.
___
</details>
<details>
<summary>Unreal: Fix usage of 'get_full_path' function <a href="https://github.com/ynput/OpenPype/pull/5014">#5014</a></summary>
This PR changes all occurrences of the `get_full_path` function to alternatives for getting the paths of the objects.
___
</details>
<details>
<summary>Unreal: Fix sequence frames validator to use correct data <a href="https://github.com/ynput/OpenPype/pull/5021">#5021</a></summary>
Fix sequence frames validator to use clipIn and clipOut data instead of frameStart and frameEnd.
___
</details>
<details>
<summary>Unreal: Fix render instances collection to use correct data <a href="https://github.com/ynput/OpenPype/pull/5023">#5023</a></summary>
Fix render instances collection to use `frameStart` and `frameEnd` from the Project Manager, instead of the sequence's ones.
___
</details>
<details>
<summary>Resolve: loader is opening even if no timeline in project <a href="https://github.com/ynput/OpenPype/pull/5025">#5025</a></summary>
The Loader now opens even when no timeline is available in the project.
___
</details>
<details>
<summary>nuke: callback for dirmapping is on demand <a href="https://github.com/ynput/OpenPype/pull/5030">#5030</a></summary>
Nuke processing was slowed down by this callback. Since it is disabled by default, it made sense to add it only on demand.
___
</details>
<details>
<summary>Publisher: UI works with instances without label <a href="https://github.com/ynput/OpenPype/pull/5032">#5032</a></summary>
The Publisher UI does not crash if an instance doesn't have the 'label' key filled in its instance data.
___
</details>
<details>
<summary>Publisher: Call explicitly prepared tab methods <a href="https://github.com/ynput/OpenPype/pull/5044">#5044</a></summary>
It is no longer possible to switch to the Create tab during publishing from the OpenPype menu.
___
</details>
<details>
<summary>Ftrack: Role names are not case sensitive in ftrack event server status action <a href="https://github.com/ynput/OpenPype/pull/5058">#5058</a></summary>
The event server status action is no longer case sensitive for user role names.
___
</details>
<details>
<summary>Publisher: Fix border widget <a href="https://github.com/ynput/OpenPype/pull/5063">#5063</a></summary>
Fixed border lines in Publisher UI to be painted correctly with correct indentation and size.
___
</details>
<details>
<summary>Unreal: Fix Commandlet Project and Permissions <a href="https://github.com/ynput/OpenPype/pull/5066">#5066</a></summary>
Fixes a problem when creating an Unreal project while the Commandlet Project is in a protected location.
___
</details>
<details>
<summary>Unreal: Added verification for Unreal app name format <a href="https://github.com/ynput/OpenPype/pull/5070">#5070</a></summary>
The Unreal app name is used to determine the Unreal version folder, so it is necessary that it follows the format `x-x`, where `x` is any integer. This PR adds a verification that the app name follows that format.
___
</details>
### **📃 Documentation**
<details>
<summary>Docs: Display wrong image in ExtractOIIOTranscode <a href="https://github.com/ynput/OpenPype/pull/5045">#5045</a></summary>
A wrong image was displayed at `https://openpype.io/docs/project_settings/settings_project_global#extract-oiio-transcode`.
___
</details>
### **Merged pull requests**
<details>
<summary>Drop-down menu to list all families in create placeholder <a href="https://github.com/ynput/OpenPype/pull/4928">#4928</a></summary>
Currently, in the create placeholder window, the family has to be written manually. This replaces the text field with an enum field listing all families for the current software.
___
</details>
<details>
<summary>add sync to specific projects or listen only <a href="https://github.com/ynput/OpenPype/pull/4919">#4919</a></summary>
Extend kitsu sync service with additional arguments to sync specific projects.
___
</details>
## [3.15.8](https://github.com/ynput/OpenPype/tree/3.15.8)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.7...3.15.8)
### **🆕 New features**
<details>
<summary>Publisher: Show instances in report page <a href="https://github.com/ynput/OpenPype/pull/4915">#4915</a></summary>
Shows publish instances in the report page. Also added a basic log view with logs grouped by instance. The validation error detail now has two columns, one with error details and a second with logs. The crashed state shows fast access to report action buttons. Success shows only logs. The publish frame is shrunk automatically on publish stop.
___
</details>
<details>
<summary>Fusion - Loader plugins updates <a href="https://github.com/ynput/OpenPype/pull/4920">#4920</a></summary>
Updates to some Fusion loader plugins: the sequence loader can now load footage from the image and online families; the FBX loader can now import all formats Fusion's FBX node can read; and the workfile loader can now import the content of another workfile into your current comp.
___
</details>
<details>
<summary>Fusion: deadline farm rendering <a href="https://github.com/ynput/OpenPype/pull/4955">#4955</a></summary>
Enabling Fusion for deadline farm rendering.
___
</details>
<details>
<summary>AfterEffects: set frame range and resolution <a href="https://github.com/ynput/OpenPype/pull/4983">#4983</a></summary>
Frame information (frame start, duration, fps) and resolution (width and height) are applied to the selected composition from the asset management system (Ftrack or DB) automatically when a published instance is created. It is also possible to explicitly propagate both values from the DB to the selected composition via newly added menu buttons.
___
</details>
<details>
<summary>Publish: Enhance automated publish plugin settings <a href="https://github.com/ynput/OpenPype/pull/4986">#4986</a></summary>
Added a plugin option to define the settings category where a plugin's settings are looked up, and added public helper functions for applying settings: `get_plugin_settings` and `apply_plugin_settings_automatically`.
___
</details>
### **🚀 Enhancements**
<details>
<summary>Load Rig References - Change Rig to Animation in Animation instance <a href="https://github.com/ynput/OpenPype/pull/4877">#4877</a></summary>
We are using the template builder to build an animation scene. All the rig placeholders are imported correctly, but the automatically created animation instances retain the rig family in their names and subsets. In our example, we need animationMain instead of rigMain, because this name will be used in the following steps, such as lighting. Here is the result we need. I checked, and it's not a template builder problem, because even if I load a rig as a reference, the result is the same. For me, since we are in the animation instance, it makes more sense to have animation instead of rig in the name. The naming is just fine if we use create from the OpenPype menu.
___
</details>
<details>
<summary>Enhancement: Resolve prelaunch code refactoring and update defaults <a href="https://github.com/ynput/OpenPype/pull/4916">#4916</a></summary>
The main reason for this PR is the wrong default settings in `openpype/settings/defaults/system_settings/applications.json` for the Resolve host: the `bin` folder should not be part of the macOS and Linux `RESOLVE_PYTHON3_PATH` variable. The rest of this PR is some code cleanup of the Resolve prelaunch hook to simplify further development. Also added a .gitignore for VS Code workspace files.
___
</details>
<details>
<summary>Unreal: 🚚 move Unreal plugin to separate repository <a href="https://github.com/ynput/OpenPype/pull/4980">#4980</a></summary>
To support the Epic Marketplace, the AYON Unreal integration plugins have to move to a separate repository. This replaces the current files with a git submodule, so the change should have no functional impact. The new repository lives here: https://github.com/ynput/ayon-unreal-plugin
___
</details>
<details>
<summary>General: Lib code cleanup <a href="https://github.com/ynput/OpenPype/pull/5003">#5003</a></summary>
Small cleanup in lib files in openpype.
___
</details>
<details>
<summary>Allow to open with djv by extension instead of representation name <a href="https://github.com/ynput/OpenPype/pull/5004">#5004</a></summary>
Filters the Open in DJV action by extension instead of representation name.
___
</details>
<details>
<summary>DJV open action `extensions` as `set` <a href="https://github.com/ynput/OpenPype/pull/5005">#5005</a></summary>
Change `extensions` attribute to `set`.
___
</details>
<details>
<summary>Nuke: extract thumbnail with multiple reposition nodes <a href="https://github.com/ynput/OpenPype/pull/5011">#5011</a></summary>
Added support for multiple reposition nodes.
___
</details>
<details>
<summary>Enhancement: Improve logging levels and messages for artist facing publish reports <a href="https://github.com/ynput/OpenPype/pull/5018">#5018</a></summary>
Tweak the logging levels and messages to try and only show those logs that an artist should see and could understand. Move anything that's slightly more involved into a "debug" message instead.
___
</details>
### **🐛 Bug fixes**
<details>
<summary>Bugfix/frame variable fix <a href="https://github.com/ynput/OpenPype/pull/4978">#4978</a></summary>
Renamed variables to match OpenPype terminology to reduce confusion and add consistency.
___
</details>
<details>
<summary>Global: plugins cleanup plugin will leave beauty rendered files <a href="https://github.com/ynput/OpenPype/pull/4790">#4790</a></summary>
Attempts to explicitly mark more files for cleanup in the intermediate `renders` folder in the work area for farm jobs.
___
</details>
<details>
<summary>Fix: Download last workfile doesn't work if not already downloaded <a href="https://github.com/ynput/OpenPype/pull/4942">#4942</a></summary>
An optimization condition is interfering with the feature: if the published workfile is not already downloaded, it won't be downloaded at all.
___
</details>
<details>
<summary>Unreal: Fix transform when loading layout to match existing assets <a href="https://github.com/ynput/OpenPype/pull/4972">#4972</a></summary>
Fixed transform when loading layout to match existing assets.
___
</details>
<details>
<summary>fix the bug of fbx loaders in Max <a href="https://github.com/ynput/OpenPype/pull/4977">#4977</a></summary>
Fixes the FBX loaders not being able to parent to the CON instances when importing cameras (and models) published from other DCCs such as Maya.
___
</details>
<details>
<summary>AfterEffects: allow returning stub with not saved workfile <a href="https://github.com/ynput/OpenPype/pull/4984">#4984</a></summary>
Allows using the Workfile app to save a first, empty workfile.
___
</details>
<details>
<summary>Blender: Fix Alembic loading <a href="https://github.com/ynput/OpenPype/pull/4985">#4985</a></summary>
Fixed problem occurring when trying to load an Alembic model in Blender.
___
</details>
<details>
<summary>Unreal: Addon Py2 compatibility <a href="https://github.com/ynput/OpenPype/pull/4994">#4994</a></summary>
Fixed Python 2 compatibility of unreal addon.
___
</details>
<details>
<summary>Nuke: fixed missing files key in representation <a href="https://github.com/ynput/OpenPype/pull/4999">#4999</a></summary>
Fixes an issue with missing keys when the rendering target is set to existing frames. The instance has to be evaluated in validation for missing files.
___
</details>
<details>
<summary>Unreal: Fix the frame range when loading camera <a href="https://github.com/ynput/OpenPype/pull/5002">#5002</a></summary>
The keyframes of the camera, when loaded, were not using the correct frame range.
___
</details>
<details>
<summary>Fusion: fixing frame range targeting <a href="https://github.com/ynput/OpenPype/pull/5013">#5013</a></summary>
Frame range targeting on rendering instances now follows the configured options.
___
</details>
<details>
<summary>Deadline: fix selection from multiple webservices <a href="https://github.com/ynput/OpenPype/pull/5015">#5015</a></summary>
Multiple different Deadline webservices can be configured. First they must be configured in System Settings, then they can be selected per project in `project_settings/deadline/deadline_servers`. Only a single webservice can be the target of a publish, though.
___
</details>
### **Merged pull requests**
<details>
<summary>3dsmax: Refactored publish plugins to use proper implementation of pymxs <a href="https://github.com/ynput/OpenPype/pull/4988">#4988</a></summary>
___
</details>
## [3.15.7](https://github.com/ynput/OpenPype/tree/3.15.7)

View file

@@ -1,6 +1,6 @@
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
[![All Contributors](https://img.shields.io/badge/all_contributors-27-orange.svg?style=flat-square)](#contributors-)
[![All Contributors](https://img.shields.io/badge/all_contributors-28-orange.svg?style=flat-square)](#contributors-)
<!-- ALL-CONTRIBUTORS-BADGE:END -->
OpenPype
====
@@ -303,41 +303,44 @@ Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/d
<!-- prettier-ignore-start -->
<!-- markdownlint-disable -->
<table>
<tr>
<td align="center"><a href="http://pype.club/"><img src="https://avatars.githubusercontent.com/u/3333008?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Milan Kolar</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=mkolar" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/commits?author=mkolar" title="Documentation">📖</a> <a href="#infra-mkolar" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a> <a href="#business-mkolar" title="Business development">💼</a> <a href="#content-mkolar" title="Content">🖋</a> <a href="#fundingFinding-mkolar" title="Funding Finding">🔍</a> <a href="#maintenance-mkolar" title="Maintenance">🚧</a> <a href="#projectManagement-mkolar" title="Project Management">📆</a> <a href="https://github.com/pypeclub/OpenPype/pulls?q=is%3Apr+reviewed-by%3Amkolar" title="Reviewed Pull Requests">👀</a> <a href="#mentoring-mkolar" title="Mentoring">🧑‍🏫</a> <a href="#question-mkolar" title="Answering Questions">💬</a></td>
<td align="center"><a href="https://www.linkedin.com/in/jakubjezek79"><img src="https://avatars.githubusercontent.com/u/40640033?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Jakub Ježek</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=jakubjezek001" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/commits?author=jakubjezek001" title="Documentation">📖</a> <a href="#infra-jakubjezek001" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a> <a href="#content-jakubjezek001" title="Content">🖋</a> <a href="https://github.com/pypeclub/OpenPype/pulls?q=is%3Apr+reviewed-by%3Ajakubjezek001" title="Reviewed Pull Requests">👀</a> <a href="#maintenance-jakubjezek001" title="Maintenance">🚧</a> <a href="#mentoring-jakubjezek001" title="Mentoring">🧑‍🏫</a> <a href="#projectManagement-jakubjezek001" title="Project Management">📆</a> <a href="#question-jakubjezek001" title="Answering Questions">💬</a></td>
<td align="center"><a href="https://github.com/antirotor"><img src="https://avatars.githubusercontent.com/u/33513211?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Ondřej Samohel</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=antirotor" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/commits?author=antirotor" title="Documentation">📖</a> <a href="#infra-antirotor" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a> <a href="#content-antirotor" title="Content">🖋</a> <a href="https://github.com/pypeclub/OpenPype/pulls?q=is%3Apr+reviewed-by%3Aantirotor" title="Reviewed Pull Requests">👀</a> <a href="#maintenance-antirotor" title="Maintenance">🚧</a> <a href="#mentoring-antirotor" title="Mentoring">🧑‍🏫</a> <a href="#projectManagement-antirotor" title="Project Management">📆</a> <a href="#question-antirotor" title="Answering Questions">💬</a></td>
<td align="center"><a href="https://github.com/iLLiCiTiT"><img src="https://avatars.githubusercontent.com/u/43494761?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Jakub Trllo</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=iLLiCiTiT" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/commits?author=iLLiCiTiT" title="Documentation">📖</a> <a href="#infra-iLLiCiTiT" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a> <a href="https://github.com/pypeclub/OpenPype/pulls?q=is%3Apr+reviewed-by%3AiLLiCiTiT" title="Reviewed Pull Requests">👀</a> <a href="#maintenance-iLLiCiTiT" title="Maintenance">🚧</a> <a href="#question-iLLiCiTiT" title="Answering Questions">💬</a></td>
<td align="center"><a href="https://github.com/kalisp"><img src="https://avatars.githubusercontent.com/u/4457962?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Petr Kalis</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=kalisp" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/commits?author=kalisp" title="Documentation">📖</a> <a href="#infra-kalisp" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a> <a href="https://github.com/pypeclub/OpenPype/pulls?q=is%3Apr+reviewed-by%3Akalisp" title="Reviewed Pull Requests">👀</a> <a href="#maintenance-kalisp" title="Maintenance">🚧</a> <a href="#question-kalisp" title="Answering Questions">💬</a></td>
<td align="center"><a href="https://github.com/64qam"><img src="https://avatars.githubusercontent.com/u/26925793?v=4?s=100" width="100px;" alt=""/><br /><sub><b>64qam</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=64qam" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/pulls?q=is%3Apr+reviewed-by%3A64qam" title="Reviewed Pull Requests">👀</a> <a href="https://github.com/pypeclub/OpenPype/commits?author=64qam" title="Documentation">📖</a> <a href="#infra-64qam" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a> <a href="#projectManagement-64qam" title="Project Management">📆</a> <a href="#maintenance-64qam" title="Maintenance">🚧</a> <a href="#content-64qam" title="Content">🖋</a> <a href="#userTesting-64qam" title="User Testing">📓</a></td>
<td align="center"><a href="http://www.colorbleed.nl/"><img src="https://avatars.githubusercontent.com/u/2439881?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Roy Nieterau</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=BigRoy" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/commits?author=BigRoy" title="Documentation">📖</a> <a href="https://github.com/pypeclub/OpenPype/pulls?q=is%3Apr+reviewed-by%3ABigRoy" title="Reviewed Pull Requests">👀</a> <a href="#mentoring-BigRoy" title="Mentoring">🧑‍🏫</a> <a href="#question-BigRoy" title="Answering Questions">💬</a></td>
</tr>
<tr>
<td align="center"><a href="https://github.com/tokejepsen"><img src="https://avatars.githubusercontent.com/u/1860085?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Toke Jepsen</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=tokejepsen" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/commits?author=tokejepsen" title="Documentation">📖</a> <a href="https://github.com/pypeclub/OpenPype/pulls?q=is%3Apr+reviewed-by%3Atokejepsen" title="Reviewed Pull Requests">👀</a> <a href="#mentoring-tokejepsen" title="Mentoring">🧑‍🏫</a> <a href="#question-tokejepsen" title="Answering Questions">💬</a></td>
<td align="center"><a href="https://github.com/jrsndl"><img src="https://avatars.githubusercontent.com/u/45896205?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Jiri Sindelar</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=jrsndl" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/pulls?q=is%3Apr+reviewed-by%3Ajrsndl" title="Reviewed Pull Requests">👀</a> <a href="https://github.com/pypeclub/OpenPype/commits?author=jrsndl" title="Documentation">📖</a> <a href="#content-jrsndl" title="Content">🖋</a> <a href="#tutorial-jrsndl" title="Tutorials"></a> <a href="#userTesting-jrsndl" title="User Testing">📓</a></td>
<td align="center"><a href="https://barbierisimone.com/"><img src="https://avatars.githubusercontent.com/u/1087869?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Simone Barbieri</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=simonebarbieri" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/commits?author=simonebarbieri" title="Documentation">📖</a></td>
<td align="center"><a href="http://karimmozilla.xyz/"><img src="https://avatars.githubusercontent.com/u/82811760?v=4?s=100" width="100px;" alt=""/><br /><sub><b>karimmozilla</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=karimmozilla" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/Allan-I"><img src="https://avatars.githubusercontent.com/u/76656700?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Allan I. A.</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=Allan-I" title="Code">💻</a></td>
<td align="center"><a href="https://www.linkedin.com/in/mmuurrpphhyy/"><img src="https://avatars.githubusercontent.com/u/352795?v=4?s=100" width="100px;" alt=""/><br /><sub><b>murphy</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=m-u-r-p-h-y" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/pulls?q=is%3Apr+reviewed-by%3Am-u-r-p-h-y" title="Reviewed Pull Requests">👀</a> <a href="#userTesting-m-u-r-p-h-y" title="User Testing">📓</a> <a href="https://github.com/pypeclub/OpenPype/commits?author=m-u-r-p-h-y" title="Documentation">📖</a> <a href="#projectManagement-m-u-r-p-h-y" title="Project Management">📆</a></td>
<td align="center"><a href="https://github.com/aardschok"><img src="https://avatars.githubusercontent.com/u/26920875?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Wijnand Koreman</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=aardschok" title="Code">💻</a></td>
</tr>
<tr>
<td align="center"><a href="http://jedimaster.cnblogs.com/"><img src="https://avatars.githubusercontent.com/u/1798206?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Bo Zhou</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=zhoub" title="Code">💻</a></td>
<td align="center"><a href="https://www.linkedin.com/in/clementhector/"><img src="https://avatars.githubusercontent.com/u/7068597?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Clément Hector</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=ClementHector" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/pulls?q=is%3Apr+reviewed-by%3AClementHector" title="Reviewed Pull Requests">👀</a></td>
<td align="center"><a href="https://twitter.com/davidlatwe"><img src="https://avatars.githubusercontent.com/u/3357009?v=4?s=100" width="100px;" alt=""/><br /><sub><b>David Lai</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=davidlatwe" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/pulls?q=is%3Apr+reviewed-by%3Adavidlatwe" title="Reviewed Pull Requests">👀</a></td>
<td align="center"><a href="https://github.com/2-REC"><img src="https://avatars.githubusercontent.com/u/42170307?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Derek </b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=2-REC" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/commits?author=2-REC" title="Documentation">📖</a></td>
<td align="center"><a href="https://github.com/gabormarinov"><img src="https://avatars.githubusercontent.com/u/8620515?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Gábor Marinov</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=gabormarinov" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/commits?author=gabormarinov" title="Documentation">📖</a></td>
<td align="center"><a href="https://github.com/icyvapor"><img src="https://avatars.githubusercontent.com/u/1195278?v=4?s=100" width="100px;" alt=""/><br /><sub><b>icyvapor</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=icyvapor" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/commits?author=icyvapor" title="Documentation">📖</a></td>
<td align="center"><a href="https://github.com/jlorrain"><img src="https://avatars.githubusercontent.com/u/7955673?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Jérôme LORRAIN</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=jlorrain" title="Code">💻</a></td>
</tr>
<tr>
<td align="center"><a href="https://github.com/dmo-j-cube"><img src="https://avatars.githubusercontent.com/u/89823400?v=4?s=100" width="100px;" alt=""/><br /><sub><b>David Morris-Oliveros</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=dmo-j-cube" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/BenoitConnan"><img src="https://avatars.githubusercontent.com/u/82808268?v=4?s=100" width="100px;" alt=""/><br /><sub><b>BenoitConnan</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=BenoitConnan" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/Malthaldar"><img src="https://avatars.githubusercontent.com/u/33671694?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Malthaldar</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=Malthaldar" title="Code">💻</a></td>
<td align="center"><a href="http://www.svenneve.com/"><img src="https://avatars.githubusercontent.com/u/2472863?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Sven Neve</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=svenneve" title="Code">💻</a></td>
<td align="center"><a href="https://github.com/zafrs"><img src="https://avatars.githubusercontent.com/u/26890002?v=4?s=100" width="100px;" alt=""/><br /><sub><b>zafrs</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=zafrs" title="Code">💻</a></td>
<td align="center"><a href="http://felixdavid.com/"><img src="https://avatars.githubusercontent.com/u/22875539?v=4?s=100" width="100px;" alt=""/><br /><sub><b>Félix David</b></sub></a><br /><a href="https://github.com/pypeclub/OpenPype/commits?author=Tilix4" title="Code">💻</a> <a href="https://github.com/pypeclub/OpenPype/commits?author=Tilix4" title="Documentation">📖</a></td>
</tr>
<tbody>
<tr>
<td align="center" valign="top" width="14.28%"><a href="http://pype.club/"><img src="https://avatars.githubusercontent.com/u/3333008?v=4?s=100" width="100px;" alt="Milan Kolar"/><br /><sub><b>Milan Kolar</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=mkolar" title="Code">💻</a> <a href="https://github.com/ynput/OpenPype/commits?author=mkolar" title="Documentation">📖</a> <a href="#infra-mkolar" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a> <a href="#business-mkolar" title="Business development">💼</a> <a href="#content-mkolar" title="Content">🖋</a> <a href="#fundingFinding-mkolar" title="Funding Finding">🔍</a> <a href="#maintenance-mkolar" title="Maintenance">🚧</a> <a href="#projectManagement-mkolar" title="Project Management">📆</a> <a href="https://github.com/ynput/OpenPype/pulls?q=is%3Apr+reviewed-by%3Amkolar" title="Reviewed Pull Requests">👀</a> <a href="#mentoring-mkolar" title="Mentoring">🧑‍🏫</a> <a href="#question-mkolar" title="Answering Questions">💬</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://www.linkedin.com/in/jakubjezek79"><img src="https://avatars.githubusercontent.com/u/40640033?v=4?s=100" width="100px;" alt="Jakub Ježek"/><br /><sub><b>Jakub Ježek</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=jakubjezek001" title="Code">💻</a> <a href="https://github.com/ynput/OpenPype/commits?author=jakubjezek001" title="Documentation">📖</a> <a href="#infra-jakubjezek001" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a> <a href="#content-jakubjezek001" title="Content">🖋</a> <a href="https://github.com/ynput/OpenPype/pulls?q=is%3Apr+reviewed-by%3Ajakubjezek001" title="Reviewed Pull Requests">👀</a> <a href="#maintenance-jakubjezek001" title="Maintenance">🚧</a> <a href="#mentoring-jakubjezek001" title="Mentoring">🧑‍🏫</a> <a href="#projectManagement-jakubjezek001" title="Project Management">📆</a> <a href="#question-jakubjezek001" title="Answering Questions">💬</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/antirotor"><img src="https://avatars.githubusercontent.com/u/33513211?v=4?s=100" width="100px;" alt="Ondřej Samohel"/><br /><sub><b>Ondřej Samohel</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=antirotor" title="Code">💻</a> <a href="https://github.com/ynput/OpenPype/commits?author=antirotor" title="Documentation">📖</a> <a href="#infra-antirotor" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a> <a href="#content-antirotor" title="Content">🖋</a> <a href="https://github.com/ynput/OpenPype/pulls?q=is%3Apr+reviewed-by%3Aantirotor" title="Reviewed Pull Requests">👀</a> <a href="#maintenance-antirotor" title="Maintenance">🚧</a> <a href="#mentoring-antirotor" title="Mentoring">🧑‍🏫</a> <a href="#projectManagement-antirotor" title="Project Management">📆</a> <a href="#question-antirotor" title="Answering Questions">💬</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/iLLiCiTiT"><img src="https://avatars.githubusercontent.com/u/43494761?v=4?s=100" width="100px;" alt="Jakub Trllo"/><br /><sub><b>Jakub Trllo</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=iLLiCiTiT" title="Code">💻</a> <a href="https://github.com/ynput/OpenPype/commits?author=iLLiCiTiT" title="Documentation">📖</a> <a href="#infra-iLLiCiTiT" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a> <a href="https://github.com/ynput/OpenPype/pulls?q=is%3Apr+reviewed-by%3AiLLiCiTiT" title="Reviewed Pull Requests">👀</a> <a href="#maintenance-iLLiCiTiT" title="Maintenance">🚧</a> <a href="#question-iLLiCiTiT" title="Answering Questions">💬</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/kalisp"><img src="https://avatars.githubusercontent.com/u/4457962?v=4?s=100" width="100px;" alt="Petr Kalis"/><br /><sub><b>Petr Kalis</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=kalisp" title="Code">💻</a> <a href="https://github.com/ynput/OpenPype/commits?author=kalisp" title="Documentation">📖</a> <a href="#infra-kalisp" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a> <a href="https://github.com/ynput/OpenPype/pulls?q=is%3Apr+reviewed-by%3Akalisp" title="Reviewed Pull Requests">👀</a> <a href="#maintenance-kalisp" title="Maintenance">🚧</a> <a href="#question-kalisp" title="Answering Questions">💬</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/64qam"><img src="https://avatars.githubusercontent.com/u/26925793?v=4?s=100" width="100px;" alt="64qam"/><br /><sub><b>64qam</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=64qam" title="Code">💻</a> <a href="https://github.com/ynput/OpenPype/pulls?q=is%3Apr+reviewed-by%3A64qam" title="Reviewed Pull Requests">👀</a> <a href="https://github.com/ynput/OpenPype/commits?author=64qam" title="Documentation">📖</a> <a href="#infra-64qam" title="Infrastructure (Hosting, Build-Tools, etc)">🚇</a> <a href="#projectManagement-64qam" title="Project Management">📆</a> <a href="#maintenance-64qam" title="Maintenance">🚧</a> <a href="#content-64qam" title="Content">🖋</a> <a href="#userTesting-64qam" title="User Testing">📓</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://www.colorbleed.nl/"><img src="https://avatars.githubusercontent.com/u/2439881?v=4?s=100" width="100px;" alt="Roy Nieterau"/><br /><sub><b>Roy Nieterau</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=BigRoy" title="Code">💻</a> <a href="https://github.com/ynput/OpenPype/commits?author=BigRoy" title="Documentation">📖</a> <a href="https://github.com/ynput/OpenPype/pulls?q=is%3Apr+reviewed-by%3ABigRoy" title="Reviewed Pull Requests">👀</a> <a href="#mentoring-BigRoy" title="Mentoring">🧑‍🏫</a> <a href="#question-BigRoy" title="Answering Questions">💬</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/tokejepsen"><img src="https://avatars.githubusercontent.com/u/1860085?v=4?s=100" width="100px;" alt="Toke Jepsen"/><br /><sub><b>Toke Jepsen</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=tokejepsen" title="Code">💻</a> <a href="https://github.com/ynput/OpenPype/commits?author=tokejepsen" title="Documentation">📖</a> <a href="https://github.com/ynput/OpenPype/pulls?q=is%3Apr+reviewed-by%3Atokejepsen" title="Reviewed Pull Requests">👀</a> <a href="#mentoring-tokejepsen" title="Mentoring">🧑‍🏫</a> <a href="#question-tokejepsen" title="Answering Questions">💬</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/jrsndl"><img src="https://avatars.githubusercontent.com/u/45896205?v=4?s=100" width="100px;" alt="Jiri Sindelar"/><br /><sub><b>Jiri Sindelar</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=jrsndl" title="Code">💻</a> <a href="https://github.com/ynput/OpenPype/pulls?q=is%3Apr+reviewed-by%3Ajrsndl" title="Reviewed Pull Requests">👀</a> <a href="https://github.com/ynput/OpenPype/commits?author=jrsndl" title="Documentation">📖</a> <a href="#content-jrsndl" title="Content">🖋</a> <a href="#tutorial-jrsndl" title="Tutorials"></a> <a href="#userTesting-jrsndl" title="User Testing">📓</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://barbierisimone.com/"><img src="https://avatars.githubusercontent.com/u/1087869?v=4?s=100" width="100px;" alt="Simone Barbieri"/><br /><sub><b>Simone Barbieri</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=simonebarbieri" title="Code">💻</a> <a href="https://github.com/ynput/OpenPype/commits?author=simonebarbieri" title="Documentation">📖</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://karimmozilla.xyz/"><img src="https://avatars.githubusercontent.com/u/82811760?v=4?s=100" width="100px;" alt="karimmozilla"/><br /><sub><b>karimmozilla</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=karimmozilla" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/Allan-I"><img src="https://avatars.githubusercontent.com/u/76656700?v=4?s=100" width="100px;" alt="Allan I. A."/><br /><sub><b>Allan I. A.</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=Allan-I" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://www.linkedin.com/in/mmuurrpphhyy/"><img src="https://avatars.githubusercontent.com/u/352795?v=4?s=100" width="100px;" alt="murphy"/><br /><sub><b>murphy</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=m-u-r-p-h-y" title="Code">💻</a> <a href="https://github.com/ynput/OpenPype/pulls?q=is%3Apr+reviewed-by%3Am-u-r-p-h-y" title="Reviewed Pull Requests">👀</a> <a href="#userTesting-m-u-r-p-h-y" title="User Testing">📓</a> <a href="https://github.com/ynput/OpenPype/commits?author=m-u-r-p-h-y" title="Documentation">📖</a> <a href="#projectManagement-m-u-r-p-h-y" title="Project Management">📆</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/aardschok"><img src="https://avatars.githubusercontent.com/u/26920875?v=4?s=100" width="100px;" alt="Wijnand Koreman"/><br /><sub><b>Wijnand Koreman</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=aardschok" title="Code">💻</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="http://jedimaster.cnblogs.com/"><img src="https://avatars.githubusercontent.com/u/1798206?v=4?s=100" width="100px;" alt="Bo Zhou"/><br /><sub><b>Bo Zhou</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=zhoub" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://www.linkedin.com/in/clementhector/"><img src="https://avatars.githubusercontent.com/u/7068597?v=4?s=100" width="100px;" alt="Clément Hector"/><br /><sub><b>Clément Hector</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=ClementHector" title="Code">💻</a> <a href="https://github.com/ynput/OpenPype/pulls?q=is%3Apr+reviewed-by%3AClementHector" title="Reviewed Pull Requests">👀</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://twitter.com/davidlatwe"><img src="https://avatars.githubusercontent.com/u/3357009?v=4?s=100" width="100px;" alt="David Lai"/><br /><sub><b>David Lai</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=davidlatwe" title="Code">💻</a> <a href="https://github.com/ynput/OpenPype/pulls?q=is%3Apr+reviewed-by%3Adavidlatwe" title="Reviewed Pull Requests">👀</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/2-REC"><img src="https://avatars.githubusercontent.com/u/42170307?v=4?s=100" width="100px;" alt="Derek "/><br /><sub><b>Derek </b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=2-REC" title="Code">💻</a> <a href="https://github.com/ynput/OpenPype/commits?author=2-REC" title="Documentation">📖</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/gabormarinov"><img src="https://avatars.githubusercontent.com/u/8620515?v=4?s=100" width="100px;" alt="Gábor Marinov"/><br /><sub><b>Gábor Marinov</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=gabormarinov" title="Code">💻</a> <a href="https://github.com/ynput/OpenPype/commits?author=gabormarinov" title="Documentation">📖</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/icyvapor"><img src="https://avatars.githubusercontent.com/u/1195278?v=4?s=100" width="100px;" alt="icyvapor"/><br /><sub><b>icyvapor</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=icyvapor" title="Code">💻</a> <a href="https://github.com/ynput/OpenPype/commits?author=icyvapor" title="Documentation">📖</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/jlorrain"><img src="https://avatars.githubusercontent.com/u/7955673?v=4?s=100" width="100px;" alt="Jérôme LORRAIN"/><br /><sub><b>Jérôme LORRAIN</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=jlorrain" title="Code">💻</a></td>
</tr>
<tr>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/dmo-j-cube"><img src="https://avatars.githubusercontent.com/u/89823400?v=4?s=100" width="100px;" alt="David Morris-Oliveros"/><br /><sub><b>David Morris-Oliveros</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=dmo-j-cube" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/BenoitConnan"><img src="https://avatars.githubusercontent.com/u/82808268?v=4?s=100" width="100px;" alt="BenoitConnan"/><br /><sub><b>BenoitConnan</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=BenoitConnan" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/Malthaldar"><img src="https://avatars.githubusercontent.com/u/33671694?v=4?s=100" width="100px;" alt="Malthaldar"/><br /><sub><b>Malthaldar</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=Malthaldar" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://www.svenneve.com/"><img src="https://avatars.githubusercontent.com/u/2472863?v=4?s=100" width="100px;" alt="Sven Neve"/><br /><sub><b>Sven Neve</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=svenneve" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="https://github.com/zafrs"><img src="https://avatars.githubusercontent.com/u/26890002?v=4?s=100" width="100px;" alt="zafrs"/><br /><sub><b>zafrs</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=zafrs" title="Code">💻</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://felixdavid.com/"><img src="https://avatars.githubusercontent.com/u/22875539?v=4?s=100" width="100px;" alt="Félix David"/><br /><sub><b>Félix David</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=Tilix4" title="Code">💻</a> <a href="https://github.com/ynput/OpenPype/commits?author=Tilix4" title="Documentation">📖</a></td>
<td align="center" valign="top" width="14.28%"><a href="http://abogomolov.com"><img src="https://avatars.githubusercontent.com/u/11698866?v=4?s=100" width="100px;" alt="Alexey Bogomolov"/><br /><sub><b>Alexey Bogomolov</b></sub></a><br /><a href="https://github.com/ynput/OpenPype/commits?author=movalex" title="Code">💻</a></td>
</tr>
</tbody>
</table>
<!-- markdownlint-restore -->

View file

@@ -14,10 +14,10 @@ AppId={{B9E9DF6A-5BDA-42DD-9F35-C09D564C4D93}
AppName={#MyAppName}
AppVersion={#AppVer}
AppVerName={#MyAppName} version {#AppVer}
AppPublisher=Orbi Tools s.r.o
AppPublisherURL=http://pype.club
AppSupportURL=http://pype.club
AppUpdatesURL=http://pype.club
AppPublisher=Ynput s.r.o
AppPublisherURL=https://ynput.io
AppSupportURL=https://ynput.io
AppUpdatesURL=https://ynput.io
DefaultDirName={autopf}\{#MyAppName}\{#AppVer}
UsePreviousAppDir=no
DisableProgramGroupPage=yes

View file

@@ -26,6 +26,8 @@ from openpype.lib import (
emit_event
)
import openpype.hosts.blender
from openpype.settings import get_project_settings
HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.blender.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
@@ -83,6 +85,31 @@ def uninstall():
ops.unregister()
def show_message(title, message):
from openpype.widgets.message_window import Window
from .ops import BlenderApplication
BlenderApplication.get_app()
Window(
parent=None,
title=title,
message=message,
level="warning")
def message_window(title, message):
from .ops import (
MainThreadItem,
execute_in_main_thread,
_process_app_events
)
mti = MainThreadItem(show_message, title, message)
execute_in_main_thread(mti)
_process_app_events()
def set_start_end_frames():
project_name = legacy_io.active_project()
asset_name = legacy_io.Session["AVALON_ASSET"]
@@ -125,10 +152,36 @@ def set_start_end_frames():
def on_new():
set_start_end_frames()
project = os.environ.get("AVALON_PROJECT")
settings = get_project_settings(project)
unit_scale_settings = settings.get("blender").get("unit_scale_settings")
unit_scale_enabled = unit_scale_settings.get("enabled")
if unit_scale_enabled:
unit_scale = unit_scale_settings.get("base_file_unit_scale")
bpy.context.scene.unit_settings.scale_length = unit_scale
def on_open():
set_start_end_frames()
project = os.environ.get("AVALON_PROJECT")
settings = get_project_settings(project)
unit_scale_settings = settings.get("blender").get("unit_scale_settings")
unit_scale_enabled = unit_scale_settings.get("enabled")
apply_on_opening = unit_scale_settings.get("apply_on_opening")
if unit_scale_enabled and apply_on_opening:
unit_scale = unit_scale_settings.get("base_file_unit_scale")
prev_unit_scale = bpy.context.scene.unit_settings.scale_length
if unit_scale != prev_unit_scale:
bpy.context.scene.unit_settings.scale_length = unit_scale
message_window(
"Base file unit scale changed",
"Base file unit scale changed to match the project settings.")
@bpy.app.handlers.persistent
def _on_save_pre(*args):

View file

@@ -0,0 +1,55 @@
from pathlib import Path
from openpype.lib import PreLaunchHook
class AddPythonScriptToLaunchArgs(PreLaunchHook):
"""Add python script to be executed before Blender launch."""
# Append after file argument
order = 15
app_groups = [
"blender",
]
def execute(self):
if not self.launch_context.data.get("python_scripts"):
return
# Add path to workfile to arguments
for python_script_path in self.launch_context.data["python_scripts"]:
self.log.info(
f"Adding python script {python_script_path} to launch"
)
# Test script path exists
python_script_path = Path(python_script_path)
if not python_script_path.exists():
self.log.warning(
f"Python script {python_script_path} doesn't exist. "
"Skipped..."
)
continue
if "--" in self.launch_context.launch_args:
# Insert before separator
separator_index = self.launch_context.launch_args.index("--")
self.launch_context.launch_args.insert(
separator_index,
"-P",
)
self.launch_context.launch_args.insert(
separator_index + 1,
python_script_path.as_posix(),
)
else:
self.launch_context.launch_args.extend(
["-P", python_script_path.as_posix()]
)
# Ensure separator
if "--" not in self.launch_context.launch_args:
self.launch_context.launch_args.append("--")
self.launch_context.launch_args.extend(
[*self.launch_context.data.get("script_args", [])]
)
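The hook above only consumes `launch_context.data["python_scripts"]` and `script_args`; a hypothetical companion hook that fills those keys could look like the sketch below (the class name, script path and arguments are placeholders, not part of this change):

```python
from openpype.lib import PreLaunchHook


class CollectStudioBlenderScripts(PreLaunchHook):
    """Hypothetical hook that queues python scripts for Blender to run."""

    # Run before AddPythonScriptToLaunchArgs (order = 15) so the data is ready.
    order = 10
    app_groups = ["blender"]

    def execute(self):
        self.launch_context.data.setdefault("python_scripts", []).append(
            "/studio/scripts/setup_scene.py"
        )
        self.launch_context.data.setdefault("script_args", []).extend(
            ["--studio-mode"]
        )
```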

View file

@@ -0,0 +1,209 @@
"""Load an asset in Blender from an Alembic file."""
from pathlib import Path
from pprint import pformat
from typing import Dict, List, Optional
import bpy
from openpype.pipeline import (
get_representation_path,
AVALON_CONTAINER_ID,
)
from openpype.hosts.blender.api import plugin, lib
from openpype.hosts.blender.api.pipeline import (
AVALON_CONTAINERS,
AVALON_PROPERTY,
)
class AbcCameraLoader(plugin.AssetLoader):
"""Load a camera from Alembic file.
Stores the imported asset in an empty named after the asset.
"""
families = ["camera"]
representations = ["abc"]
label = "Load Camera (ABC)"
icon = "code-fork"
color = "orange"
def _remove(self, asset_group):
objects = list(asset_group.children)
for obj in objects:
if obj.type == "CAMERA":
bpy.data.cameras.remove(obj.data)
elif obj.type == "EMPTY":
objects.extend(obj.children)
bpy.data.objects.remove(obj)
def _process(self, libpath, asset_group, group_name):
plugin.deselect_all()
bpy.ops.wm.alembic_import(filepath=libpath)
objects = lib.get_selection()
for obj in objects:
obj.parent = asset_group
for obj in objects:
name = obj.name
obj.name = f"{group_name}:{name}"
if obj.type != "EMPTY":
name_data = obj.data.name
obj.data.name = f"{group_name}:{name_data}"
if not obj.get(AVALON_PROPERTY):
obj[AVALON_PROPERTY] = dict()
avalon_info = obj[AVALON_PROPERTY]
avalon_info.update({"container_name": group_name})
plugin.deselect_all()
return objects
def process_asset(
self,
context: dict,
name: str,
namespace: Optional[str] = None,
options: Optional[Dict] = None,
) -> Optional[List]:
"""
Arguments:
name: Use pre-defined name
namespace: Use pre-defined namespace
context: Full parenthood of representation to load
options: Additional settings dictionary
"""
libpath = self.fname
asset = context["asset"]["name"]
subset = context["subset"]["name"]
asset_name = plugin.asset_name(asset, subset)
unique_number = plugin.get_unique_number(asset, subset)
group_name = plugin.asset_name(asset, subset, unique_number)
namespace = namespace or f"{asset}_{unique_number}"
avalon_container = bpy.data.collections.get(AVALON_CONTAINERS)
if not avalon_container:
avalon_container = bpy.data.collections.new(name=AVALON_CONTAINERS)
bpy.context.scene.collection.children.link(avalon_container)
asset_group = bpy.data.objects.new(group_name, object_data=None)
avalon_container.objects.link(asset_group)
objects = self._process(libpath, asset_group, group_name)
objects = []
nodes = list(asset_group.children)
for obj in nodes:
objects.append(obj)
nodes.extend(list(obj.children))
bpy.context.scene.collection.objects.link(asset_group)
asset_group[AVALON_PROPERTY] = {
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"name": name,
"namespace": namespace or "",
"loader": str(self.__class__.__name__),
"representation": str(context["representation"]["_id"]),
"libpath": libpath,
"asset_name": asset_name,
"parent": str(context["representation"]["parent"]),
"family": context["representation"]["context"]["family"],
"objectName": group_name,
}
self[:] = objects
return objects
def exec_update(self, container: Dict, representation: Dict):
"""Update the loaded asset.
This will remove all objects of the current collection, load the new
ones and add them to the collection.
If the objects of the collection are used in another collection they
will not be removed, only unlinked. Normally this should not be the
case though.
Warning:
No nested collections are supported at the moment!
"""
object_name = container["objectName"]
asset_group = bpy.data.objects.get(object_name)
libpath = Path(get_representation_path(representation))
extension = libpath.suffix.lower()
self.log.info(
"Container: %s\nRepresentation: %s",
pformat(container, indent=2),
pformat(representation, indent=2),
)
assert asset_group, (
f"The asset is not loaded: {container['objectName']}")
assert libpath, (
f"No existing library file found for {container['objectName']}")
assert libpath.is_file(), f"The file doesn't exist: {libpath}"
assert extension in plugin.VALID_EXTENSIONS, (
f"Unsupported file: {libpath}")
metadata = asset_group.get(AVALON_PROPERTY)
group_libpath = metadata["libpath"]
normalized_group_libpath = str(
Path(bpy.path.abspath(group_libpath)).resolve())
normalized_libpath = str(
Path(bpy.path.abspath(str(libpath))).resolve())
self.log.debug(
"normalized_group_libpath:\n %s\nnormalized_libpath:\n %s",
normalized_group_libpath,
normalized_libpath,
)
if normalized_group_libpath == normalized_libpath:
self.log.info("Library already loaded, not updating...")
return
mat = asset_group.matrix_basis.copy()
self._remove(asset_group)
self._process(str(libpath), asset_group, object_name)
asset_group.matrix_basis = mat
metadata["libpath"] = str(libpath)
metadata["representation"] = str(representation["_id"])
def exec_remove(self, container: Dict) -> bool:
"""Remove an existing container from a Blender scene.
Arguments:
container (openpype:container-1.0): Container to remove,
from `host.ls()`.
Returns:
bool: Whether the container was deleted.
Warning:
No nested collections are supported at the moment!
"""
object_name = container["objectName"]
asset_group = bpy.data.objects.get(object_name)
if not asset_group:
return False
self._remove(asset_group)
bpy.data.objects.remove(asset_group)
return True

View file

@@ -256,8 +256,11 @@ def switch_item(container,
@contextlib.contextmanager
def maintained_selection():
comp = get_current_comp()
def maintained_selection(comp=None):
"""Reset comp selection from before the context after the context"""
if comp is None:
comp = get_current_comp()
previous_selection = comp.GetToolList(True).values()
try:
yield
@@ -269,6 +272,33 @@ def maintained_selection():
flow.Select(tool, True)
@contextlib.contextmanager
def maintained_comp_range(comp=None,
global_start=True,
global_end=True,
render_start=True,
render_end=True):
"""Reset comp frame ranges from before the context after the context"""
if comp is None:
comp = get_current_comp()
comp_attrs = comp.GetAttrs()
preserve_attrs = {}
if global_start:
preserve_attrs["COMPN_GlobalStart"] = comp_attrs["COMPN_GlobalStart"]
if global_end:
preserve_attrs["COMPN_GlobalEnd"] = comp_attrs["COMPN_GlobalEnd"]
if render_start:
preserve_attrs["COMPN_RenderStart"] = comp_attrs["COMPN_RenderStart"]
if render_end:
preserve_attrs["COMPN_RenderEnd"] = comp_attrs["COMPN_RenderEnd"]
try:
yield
finally:
comp.SetAttrs(preserve_attrs)
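A short usage sketch of the new `maintained_comp_range` context manager above; the frame values and the final render call are illustrative only:

```python
from openpype.hosts.fusion.api.lib import (
    get_current_comp,
    maintained_comp_range,
)

comp = get_current_comp()
with maintained_comp_range(comp, render_start=True, render_end=True):
    # Temporarily narrow the render range; the preserved attributes are
    # restored by the context manager when the block exits.
    comp.SetAttrs({
        "COMPN_RenderStart": 1001,
        "COMPN_RenderEnd": 1050,
    })
    comp.Render()
```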
def get_frame_path(path):
"""Get filename for the Fusion Saver with padded number as '#'

View file

@@ -233,7 +233,7 @@ class CreateSaver(NewCreator):
def _get_frame_range_enum(self):
frame_range_options = {
"asset_db": "Current asset context",
"render_range": "From viewer render in/out",
"render_range": "From render in/out",
"comp_range": "From composition timeline"
}

View file

@ -113,4 +113,4 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin):
inputs = [c["representation"] for c in containers]
instance.data["inputRepresentations"] = inputs
self.log.info("Collected inputs: %s" % inputs)
self.log.debug("Collected inputs: %s" % inputs)

View file

@ -17,6 +17,8 @@ class FusionRenderInstance(RenderInstance):
tool = attr.ib(default=None)
workfileComp = attr.ib(default=None)
publish_attributes = attr.ib(default={})
frameStartHandle = attr.ib(default=None)
frameEndHandle = attr.ib(default=None)
class CollectFusionRender(
@ -83,8 +85,8 @@ class CollectFusionRender(
frameEnd=inst.data["frameEnd"],
handleStart=inst.data["handleStart"],
handleEnd=inst.data["handleEnd"],
ignoreFrameHandleCheck=(
inst.data["frame_range_source"] == "render_range"),
frameStartHandle=inst.data["frameStartHandle"],
frameEndHandle=inst.data["frameEndHandle"],
frameStep=1,
fps=comp_frame_format_prefs.get("Rate"),
app_version=comp.GetApp().Version,

View file

@ -1,11 +1,12 @@
import os
import logging
import contextlib
import collections
import pyblish.api
from openpype.pipeline import publish
from openpype.hosts.fusion.api import comp_lock_and_undo_chunk
from openpype.hosts.fusion.api.lib import get_frame_path
from openpype.hosts.fusion.api.lib import get_frame_path, maintained_comp_range
log = logging.getLogger(__name__)
@ -52,11 +53,14 @@ class FusionRenderLocal(
hosts = ["fusion"]
families = ["render.local"]
is_rendered_key = "_fusionrenderlocal_has_rendered"
def process(self, instance):
context = instance.context
# Start render
self.render_once(context)
result = self.render(instance)
if result is False:
raise RuntimeError(f"Comp render failed for {instance}")
self._add_representation(instance)
@ -69,39 +73,48 @@ class FusionRenderLocal(
)
)
def render_once(self, context):
"""Render context comp only once, even with more render instances"""
def render(self, instance):
"""Render instance.
# This plug-in assumes all render nodes get rendered at the same time
# to speed up the rendering. The check below makes sure that we only
# execute the rendering once and not for each instance.
key = f"__hasRun{self.__class__.__name__}"
We try to render as few times as possible by combining instances
that share a frame range into one Fusion render. Then, for that
batch of instances, we store whether the render succeeded or failed.
savers_to_render = [
# Get the saver tool from the instance
instance.data["tool"] for instance in context if
# Only active instances
instance.data.get("publish", True) and
# Only render.local instances
"render.local" in instance.data.get("families", [])
]
"""
if key not in context.data:
# We initialize as false to indicate it wasn't successful yet
# so we can keep track of whether Fusion succeeded
context.data[key] = False
if self.is_rendered_key in instance.data:
# This instance was already processed in batch with another
# instance, so we just return the render result directly
self.log.debug(f"Instance {instance} was already rendered")
return instance.data[self.is_rendered_key]
current_comp = context.data["currentComp"]
frame_start = context.data["frameStartHandle"]
frame_end = context.data["frameEndHandle"]
instances_by_frame_range = self.get_render_instances_by_frame_range(
instance.context
)
self.log.info("Starting Fusion render")
self.log.info(f"Start frame: {frame_start}")
self.log.info(f"End frame: {frame_end}")
saver_names = ", ".join(saver.Name for saver in savers_to_render)
self.log.info(f"Rendering tools: {saver_names}")
# Render matching batch of instances that share the same frame range
frame_range = self.get_instance_render_frame_range(instance)
render_instances = instances_by_frame_range[frame_range]
with comp_lock_and_undo_chunk(current_comp):
# We initialize render state false to indicate it wasn't successful
# yet to keep track of whether Fusion succeeded. This is for cases
# where an error below this might cause the comp render result not
# to be stored for the instances of this batch
for render_instance in render_instances:
render_instance.data[self.is_rendered_key] = False
savers_to_render = [inst.data["tool"] for inst in render_instances]
current_comp = instance.context.data["currentComp"]
frame_start, frame_end = frame_range
self.log.info(
f"Starting Fusion render frame range {frame_start}-{frame_end}"
)
saver_names = ", ".join(saver.Name for saver in savers_to_render)
self.log.info(f"Rendering tools: {saver_names}")
with comp_lock_and_undo_chunk(current_comp):
with maintained_comp_range(current_comp):
with enabled_savers(current_comp, savers_to_render):
result = current_comp.Render(
{
@ -111,10 +124,11 @@ class FusionRenderLocal(
}
)
context.data[key] = bool(result)
# Store the render state for all the rendered instances
for render_instance in render_instances:
render_instance.data[self.is_rendered_key] = bool(result)
if context.data[key] is False:
raise RuntimeError("Comp render failed")
return result
def _add_representation(self, instance):
"""Add representation to instance"""
@ -151,3 +165,35 @@ class FusionRenderLocal(
instance.data["representations"].append(repre)
return instance
def get_render_instances_by_frame_range(self, context):
"""Return enabled render.local instances grouped by their frame range.
Arguments:
context (pyblish.Context): The pyblish context
Returns:
dict: (start, end): instances mapping
"""
instances_to_render = [
instance for instance in context if
# Only active instances
instance.data.get("publish", True) and
# Only render.local instances
"render.local" in instance.data.get("families", [])
]
# Instances by frame ranges
instances_by_frame_range = collections.defaultdict(list)
for instance in instances_to_render:
start, end = self.get_instance_render_frame_range(instance)
instances_by_frame_range[(start, end)].append(instance)
return dict(instances_by_frame_range)
def get_instance_render_frame_range(self, instance):
start = instance.data["frameStartHandle"]
end = instance.data["frameEndHandle"]
return start, end

View file

@ -17,5 +17,5 @@ class FusionSaveComp(pyblish.api.ContextPlugin):
current = comp.GetAttrs().get("COMPS_FileName", "")
assert context.data['currentFile'] == current
self.log.info("Saving current file..")
self.log.info("Saving current file: {}".format(current))
comp.Save()

View file

@ -0,0 +1,41 @@
import pyblish.api
from openpype.pipeline import PublishValidationError
class ValidateInstanceFrameRange(pyblish.api.InstancePlugin):
"""Validate instance frame range is within comp's global render range."""
order = pyblish.api.ValidatorOrder
label = "Validate Filename Has Extension"
families = ["render"]
hosts = ["fusion"]
def process(self, instance):
context = instance.context
global_start = context.data["compFrameStart"]
global_end = context.data["compFrameEnd"]
render_start = instance.data["frameStartHandle"]
render_end = instance.data["frameEndHandle"]
if render_start < global_start or render_end > global_end:
message = (
f"Instance {instance} render frame range "
f"({render_start}-{render_end}) is outside of the comp's "
f"global render range ({global_start}-{global_end}) and thus "
f"can't be rendered. "
)
description = (
f"{message}\n\n"
f"Either update the comp's global range or the instance's "
f"frame range to ensure the comp's frame range includes the "
f"to render frame range for the instance."
)
raise PublishValidationError(
title="Frame range outside of comp range",
message=message,
description=description
)

View file

@ -41,8 +41,8 @@ class LoadClip(phiero.SequenceLoader):
clip_name_template = "{asset}_{subset}_{representation}"
@classmethod
def apply_settings(cls, project_settings, system_settings):
plugin_type_settings = (
project_settings
.get("hiero", {})

View file

@ -8,7 +8,6 @@ import pyblish.api
from openpype.hosts.houdini.api import lib
class CollectFrames(pyblish.api.InstancePlugin):
"""Collect all frames which would be saved from the ROP nodes"""
@ -34,8 +33,10 @@ class CollectFrames(pyblish.api.InstancePlugin):
self.log.warning("Using current frame: {}".format(hou.frame()))
output = output_parm.eval()
_, ext = lib.splitext(output,
allowed_multidot_extensions=[".ass.gz"])
_, ext = lib.splitext(
output,
allowed_multidot_extensions=[".ass.gz"]
)
file_name = os.path.basename(output)
result = file_name

View file

@ -117,4 +117,4 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin):
inputs = [c["representation"] for c in containers]
instance.data["inputRepresentations"] = inputs
self.log.info("Collected inputs: %s" % inputs)
self.log.debug("Collected inputs: %s" % inputs)

View file

@ -55,7 +55,9 @@ class CollectInstances(pyblish.api.ContextPlugin):
has_family = node.evalParm("family")
assert has_family, "'%s' is missing 'family'" % node.name()
self.log.info("processing {}".format(node))
self.log.info(
"Processing legacy instance node {}".format(node.path())
)
data = lib.read(node)
# Check bypass state and reverse

View file

@ -32,5 +32,4 @@ class CollectWorkfile(pyblish.api.InstancePlugin):
"stagingDir": folder,
}]
self.log.info('Collected instance: {}'.format(file))
self.log.info('staging Dir: {}'.format(folder))
self.log.debug('Collected workfile instance: {}'.format(file))

View file

@ -20,7 +20,7 @@ class SaveCurrentScene(pyblish.api.ContextPlugin):
)
if host.has_unsaved_changes():
self.log.info("Saving current file {}...".format(current_file))
self.log.info("Saving current file: {}".format(current_file))
host.save_workfile(current_file)
else:
self.log.debug("No unsaved changes, skipping file save..")

View file

@ -28,18 +28,37 @@ class ValidateWorkfilePaths(
if not self.is_active(instance.data):
return
invalid = self.get_invalid()
self.log.info(
"node types to check: {}".format(", ".join(self.node_types)))
self.log.info(
"prohibited vars: {}".format(", ".join(self.prohibited_vars))
self.log.debug(
"Checking node types: {}".format(", ".join(self.node_types)))
self.log.debug(
"Searching prohibited vars: {}".format(
", ".join(self.prohibited_vars)
)
)
if invalid:
for param in invalid:
self.log.error(
"{}: {}".format(param.path(), param.unexpandedString()))
raise PublishValidationError(
"Invalid paths found", title=self.label)
if invalid:
all_container_vars = set()
for param in invalid:
value = param.unexpandedString()
contained_vars = [
var for var in self.prohibited_vars
if var in value
]
all_container_vars.update(contained_vars)
self.log.error(
"Parm {} contains prohibited vars {}: {}".format(
param.path(),
", ".join(contained_vars),
value)
)
message = (
"Prohibited vars {} found in parameter values".format(
", ".join(all_container_vars)
)
)
raise PublishValidationError(message, title=self.label)
@classmethod
def get_invalid(cls):
@ -63,7 +82,7 @@ class ValidateWorkfilePaths(
def repair(cls, instance):
invalid = cls.get_invalid()
for param in invalid:
cls.log.info("processing: {}".format(param.path()))
cls.log.info("Processing: {}".format(param.path()))
cls.log.info("Replacing {} for {}".format(
param.unexpandedString(),
hou.text.expandString(param.unexpandedString())))

View file

@ -0,0 +1,50 @@
import attr
from pymxs import runtime as rt
@attr.s
class LayerMetadata(object):
"""Data class for Render Layer metadata."""
frameStart = attr.ib()
frameEnd = attr.ib()
@attr.s
class RenderProduct(object):
"""Getting Colorspace as
Specific Render Product Parameter for submitting
publish job.
"""
colorspace = attr.ib() # colorspace
view = attr.ib()
productName = attr.ib(default=None)
class ARenderProduct(object):
def __init__(self):
"""Constructor."""
# Initialize
self.layer_data = self._get_layer_data()
self.layer_data.products = self.get_colorspace_data()
def _get_layer_data(self):
return LayerMetadata(
frameStart=int(rt.rendStart),
frameEnd=int(rt.rendEnd),
)
def get_colorspace_data(self):
"""To be implemented by renderer class.
This should return a list of RenderProducts.
Returns:
list: List of RenderProduct
"""
colorspace_data = [
RenderProduct(
colorspace="sRGB",
view="ACES 1.0",
productName=""
)
]
return colorspace_data

View file

@ -128,7 +128,14 @@ def get_all_children(parent, node_type=None):
def get_current_renderer():
"""get current renderer"""
"""
Notes:
Get current renderer for Max
Returns:
"{Current Renderer}:{Current Renderer}"
e.g. "Redshift_Renderer:Redshift_Renderer"
"""
return rt.renderers.production

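As a side note, a minimal sketch (only meaningful inside 3ds Max, where pymxs is available) of how the "{Current Renderer}:{Current Renderer}" string documented above is reduced to a plain renderer name elsewhere in this commit:
from pymxs import runtime as rt

def current_renderer_name():
    # str() of the production renderer looks like
    # "Redshift_Renderer:Redshift_Renderer"; keep the part before ":".
    renderer_class = rt.renderers.production
    return str(renderer_class).split(":")[0]

print(current_renderer_name())  # e.g. "Redshift_Renderer"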
View file

@ -3,94 +3,126 @@
# arnold
# https://help.autodesk.com/view/ARNOL/ENU/?guid=arnold_for_3ds_max_ax_maxscript_commands_ax_renderview_commands_html
import os
from pymxs import runtime as rt
from openpype.hosts.max.api.lib import (
get_current_renderer,
get_default_render_folder
)
from openpype.pipeline.context_tools import get_current_project_asset
from openpype.settings import get_project_settings
from openpype.hosts.max.api.lib import get_current_renderer
from openpype.pipeline import legacy_io
from openpype.settings import get_project_settings
class RenderProducts(object):
def __init__(self, project_settings=None):
self._project_settings = project_settings
if not self._project_settings:
self._project_settings = get_project_settings(
legacy_io.Session["AVALON_PROJECT"]
)
self._project_settings = project_settings or get_project_settings(
legacy_io.Session["AVALON_PROJECT"])
def get_beauty(self, container):
render_dir = os.path.dirname(rt.rendOutputFilename)
output_file = os.path.join(render_dir, container)
def render_product(self, container):
folder = rt.maxFilePath
file = rt.maxFileName
folder = folder.replace("\\", "/")
setting = self._project_settings
render_folder = get_default_render_folder(setting)
filename, ext = os.path.splitext(file)
img_fmt = setting["max"]["RenderSettings"]["image_format"] # noqa
output_file = os.path.join(folder,
render_folder,
filename,
start_frame = int(rt.rendStart)
end_frame = int(rt.rendEnd) + 1
return {
"beauty": self.get_expected_beauty(
output_file, start_frame, end_frame, img_fmt
)
}
def get_aovs(self, container):
render_dir = os.path.dirname(rt.rendOutputFilename)
output_file = os.path.join(render_dir,
container)
context = get_current_project_asset()
# TODO: change the frame range follows the current render setting
startFrame = int(rt.rendStart)
endFrame = int(rt.rendEnd) + 1
img_fmt = self._project_settings["max"]["RenderSettings"]["image_format"] # noqa
full_render_list = self.beauty_render_product(output_file,
startFrame,
endFrame,
img_fmt)
setting = self._project_settings
img_fmt = setting["max"]["RenderSettings"]["image_format"] # noqa
start_frame = int(rt.rendStart)
end_frame = int(rt.rendEnd) + 1
renderer_class = get_current_renderer()
renderer = str(renderer_class).split(":")[0]
if renderer == "VUE_File_Renderer":
return full_render_list
render_dict = {}
if renderer in [
"ART_Renderer",
"Redshift_Renderer",
"V_Ray_6_Hotfix_3",
"V_Ray_GPU_6_Hotfix_3",
"Default_Scanline_Renderer",
"Quicksilver_Hardware_Renderer",
]:
render_elem_list = self.render_elements_product(output_file,
startFrame,
endFrame,
img_fmt)
if render_elem_list:
full_render_list.extend(iter(render_elem_list))
return full_render_list
render_name = self.get_render_elements_name()
if render_name:
for name in render_name:
render_dict.update({
name: self.get_expected_render_elements(
output_file, name, start_frame,
end_frame, img_fmt)
})
elif renderer == "Redshift_Renderer":
render_name = self.get_render_elements_name()
if render_name:
rs_aov_files = rt.Execute("renderers.current.separateAovFiles")
# this doesn't work, always returns False
# rs_AovFiles = rt.RedShift_Renderer().separateAovFiles
if img_fmt == "exr" and not rs_aov_files:
for name in render_name:
if name == "RsCryptomatte":
render_dict.update({
name: self.get_expected_render_elements(
output_file, name, start_frame,
end_frame, img_fmt)
})
else:
for name in render_name:
render_dict.update({
name: self.get_expected_render_elements(
output_file, name, start_frame,
end_frame, img_fmt)
})
if renderer == "Arnold":
aov_list = self.arnold_render_product(output_file,
startFrame,
endFrame,
img_fmt)
if aov_list:
full_render_list.extend(iter(aov_list))
return full_render_list
elif renderer == "Arnold":
render_name = self.get_arnold_product_name()
if render_name:
for name in render_name:
render_dict.update({
name: self.get_expected_arnold_product(
output_file, name, start_frame, end_frame, img_fmt)
})
elif renderer in [
"V_Ray_6_Hotfix_3",
"V_Ray_GPU_6_Hotfix_3"
]:
if img_fmt != "exr":
render_name = self.get_render_elements_name()
if render_name:
for name in render_name:
render_dict.update({
name: self.get_expected_render_elements(
output_file, name, start_frame,
end_frame, img_fmt) # noqa
})
def beauty_render_product(self, folder, startFrame, endFrame, fmt):
return render_dict
def get_expected_beauty(self, folder, start_frame, end_frame, fmt):
beauty_frame_range = []
for f in range(startFrame, endFrame):
beauty_output = f"{folder}.{f}.{fmt}"
for f in range(start_frame, end_frame):
frame = "%04d" % f
beauty_output = f"{folder}.{frame}.{fmt}"
beauty_output = beauty_output.replace("\\", "/")
beauty_frame_range.append(beauty_output)
return beauty_frame_range
# TODO: Get the arnold render product
def arnold_render_product(self, folder, startFrame, endFrame, fmt):
"""Get all the Arnold AOVs"""
aovs = []
def get_arnold_product_name(self):
"""Get all the Arnold AOVs name"""
aov_name = []
amw = rt.MaxtoAOps.AOVsManagerWindow()
aov_mgr = rt.renderers.current.AOVManager
@ -100,34 +132,51 @@ class RenderProducts(object):
return
for i in range(aov_group_num):
# get the specific AOV group
for aov in aov_mgr.drivers[i].aov_list:
for f in range(startFrame, endFrame):
render_element = f"{folder}_{aov.name}.{f}.{fmt}"
render_element = render_element.replace("\\", "/")
aovs.append(render_element)
aov_name.extend(aov.name for aov in aov_mgr.drivers[i].aov_list)
# close the AOVs manager window
amw.close()
return aovs
return aov_name
def render_elements_product(self, folder, startFrame, endFrame, fmt):
"""Get all the render element output files. """
render_dirname = []
def get_expected_arnold_product(self, folder, name,
start_frame, end_frame, fmt):
"""Get all the expected Arnold AOVs"""
aov_list = []
for f in range(start_frame, end_frame):
frame = "%04d" % f
render_element = f"{folder}_{name}.{frame}.{fmt}"
render_element = render_element.replace("\\", "/")
aov_list.append(render_element)
return aov_list
def get_render_elements_name(self):
"""Get all the render element names for general """
render_name = []
render_elem = rt.maxOps.GetCurRenderElementMgr()
render_elem_num = render_elem.NumRenderElements()
if render_elem_num < 1:
return
# get render elements from the renders
for i in range(render_elem_num):
renderlayer_name = render_elem.GetRenderElement(i)
target, renderpass = str(renderlayer_name).split(":")
if renderlayer_name.enabled:
for f in range(startFrame, endFrame):
render_element = f"{folder}_{renderpass}.{f}.{fmt}"
render_element = render_element.replace("\\", "/")
render_dirname.append(render_element)
target, renderpass = str(renderlayer_name).split(":")
render_name.append(renderpass)
return render_dirname
return render_name
def get_expected_render_elements(self, folder, name,
start_frame, end_frame, fmt):
"""Get all the expected render element output files. """
render_elements = []
for f in range(start_frame, end_frame):
frame = "%04d" % f
render_element = f"{folder}_{name}.{frame}.{fmt}"
render_element = render_element.replace("\\", "/")
render_elements.append(render_element)
return render_elements
def image_format(self):
return self._project_settings["max"]["RenderSettings"]["image_format"] # noqa

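To make the new 4-digit frame padding in the expected-file helpers easier to read, a standalone sketch of the naming they produce (folder and frame range are hypothetical):
def expected_beauty(folder, start_frame, end_frame, fmt):
    # Mirrors RenderProducts.get_expected_beauty above: one file per frame,
    # with the frame number zero-padded to four digits.
    return [
        "{}.{:04d}.{}".format(folder, frame, fmt).replace("\\", "/")
        for frame in range(start_frame, end_frame)
    ]

print(expected_beauty("renders/shot010/shot010", 1001, 1004, "exr"))
# ['renders/shot010/shot010.1001.exr',
#  'renders/shot010/shot010.1002.exr',
#  'renders/shot010/shot010.1003.exr']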
View file

@ -0,0 +1,18 @@
# -*- coding: utf-8 -*-
"""Creator plugin for creating camera."""
from openpype.hosts.max.api import plugin
from openpype.pipeline import CreatedInstance
class CreateRedshiftProxy(plugin.MaxCreator):
identifier = "io.openpype.creators.max.redshiftproxy"
label = "Redshift Proxy"
family = "redshiftproxy"
icon = "gear"
def create(self, subset_name, instance_data, pre_create_data):
_ = super(CreateRedshiftProxy, self).create(
subset_name,
instance_data,
pre_create_data) # type: CreatedInstance

View file

@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
"""Creator plugin for creating camera."""
import os
from openpype.hosts.max.api import plugin
from openpype.pipeline import CreatedInstance
from openpype.hosts.max.api.lib_rendersettings import RenderSettings
@ -14,6 +15,10 @@ class CreateRender(plugin.MaxCreator):
def create(self, subset_name, instance_data, pre_create_data):
from pymxs import runtime as rt
sel_obj = list(rt.selection)
file = rt.maxFileName
filename, _ = os.path.splitext(file)
instance_data["AssetName"] = filename
instance = super(CreateRender, self).create(
subset_name,
instance_data,

View file

@ -1,8 +1,5 @@
import os
from openpype.pipeline import (
load, get_representation_path
)
from openpype.pipeline import load, get_representation_path
from openpype.hosts.max.api.pipeline import containerise
from openpype.hosts.max.api import lib
from openpype.hosts.max.api.lib import maintained_selection
@ -24,24 +21,20 @@ class ModelAbcLoader(load.LoaderPlugin):
file_path = os.path.normpath(self.fname)
abc_before = {
c for c in rt.rootNode.Children
c
for c in rt.rootNode.Children
if rt.classOf(c) == rt.AlembicContainer
}
abc_import_cmd = (f"""
AlembicImport.ImportToRoot = false
AlembicImport.CustomAttributes = true
AlembicImport.UVs = true
AlembicImport.VertexColors = true
importFile @"{file_path}" #noPrompt
""")
self.log.debug(f"Executing command: {abc_import_cmd}")
rt.execute(abc_import_cmd)
rt.AlembicImport.ImportToRoot = False
rt.AlembicImport.CustomAttributes = True
rt.AlembicImport.UVs = True
rt.AlembicImport.VertexColors = True
rt.importFile(file_path, rt.name("noPrompt"))
abc_after = {
c for c in rt.rootNode.Children
c
for c in rt.rootNode.Children
if rt.classOf(c) == rt.AlembicContainer
}
@ -54,10 +47,12 @@ importFile @"{file_path}" #noPrompt
abc_container = abc_containers.pop()
return containerise(
name, [abc_container], context, loader=self.__class__.__name__)
name, [abc_container], context, loader=self.__class__.__name__
)
def update(self, container, representation):
from pymxs import runtime as rt
path = get_representation_path(representation)
node = rt.getNodeByName(container["instance_node"])
rt.select(node.Children)
@ -76,9 +71,10 @@ importFile @"{file_path}" #noPrompt
with maintained_selection():
rt.select(node)
lib.imprint(container["instance_node"], {
"representation": str(representation["_id"])
})
lib.imprint(
container["instance_node"],
{"representation": str(representation["_id"])},
)
def switch(self, container, representation):
self.update(container, representation)

View file

@ -1,8 +1,5 @@
import os
from openpype.pipeline import (
load,
get_representation_path
)
from openpype.pipeline import load, get_representation_path
from openpype.hosts.max.api.pipeline import containerise
from openpype.hosts.max.api import lib
from openpype.hosts.max.api.lib import maintained_selection
@ -24,10 +21,7 @@ class FbxModelLoader(load.LoaderPlugin):
rt.FBXImporterSetParam("Animation", False)
rt.FBXImporterSetParam("Cameras", False)
rt.FBXImporterSetParam("Preserveinstances", True)
rt.importFile(
filepath,
rt.name("noPrompt"),
using=rt.FBXIMP)
rt.importFile(filepath, rt.name("noPrompt"), using=rt.FBXIMP)
container = rt.getNodeByName(f"{name}")
if not container:
@ -38,7 +32,8 @@ class FbxModelLoader(load.LoaderPlugin):
selection.Parent = container
return containerise(
name, [container], context, loader=self.__class__.__name__)
name, [container], context, loader=self.__class__.__name__
)
def update(self, container, representation):
from pymxs import runtime as rt
@ -46,24 +41,21 @@ class FbxModelLoader(load.LoaderPlugin):
path = get_representation_path(representation)
node = rt.getNodeByName(container["instance_node"])
rt.select(node.Children)
fbx_reimport_cmd = (
f"""
FBXImporterSetParam "Animation" false
FBXImporterSetParam "Cameras" false
FBXImporterSetParam "AxisConversionMethod" true
FbxExporterSetParam "UpAxis" "Y"
FbxExporterSetParam "Preserveinstances" true
importFile @"{path}" #noPrompt using:FBXIMP
""")
rt.execute(fbx_reimport_cmd)
rt.FBXImporterSetParam("Animation", False)
rt.FBXImporterSetParam("Cameras", False)
rt.FBXImporterSetParam("AxisConversionMethod", True)
rt.FBXImporterSetParam("UpAxis", "Y")
rt.FBXImporterSetParam("Preserveinstances", True)
rt.importFile(path, rt.name("noPrompt"), using=rt.FBXIMP)
with maintained_selection():
rt.select(node)
lib.imprint(container["instance_node"], {
"representation": str(representation["_id"])
})
lib.imprint(
container["instance_node"],
{"representation": str(representation["_id"])},
)
def switch(self, container, representation):
self.update(container, representation)

View file

@ -5,9 +5,7 @@ Because of limited api, alembics can be only loaded, but not easily updated.
"""
import os
from openpype.pipeline import (
load, get_representation_path
)
from openpype.pipeline import load, get_representation_path
from openpype.hosts.max.api.pipeline import containerise
from openpype.hosts.max.api import lib
@ -15,9 +13,7 @@ from openpype.hosts.max.api import lib
class AbcLoader(load.LoaderPlugin):
"""Alembic loader."""
families = ["camera",
"animation",
"pointcache"]
families = ["camera", "animation", "pointcache"]
label = "Load Alembic"
representations = ["abc"]
order = -10
@ -30,21 +26,17 @@ class AbcLoader(load.LoaderPlugin):
file_path = os.path.normpath(self.fname)
abc_before = {
c for c in rt.rootNode.Children
c
for c in rt.rootNode.Children
if rt.classOf(c) == rt.AlembicContainer
}
abc_export_cmd = (f"""
AlembicImport.ImportToRoot = false
importFile @"{file_path}" #noPrompt
""")
self.log.debug(f"Executing command: {abc_export_cmd}")
rt.execute(abc_export_cmd)
rt.AlembicImport.ImportToRoot = False
rt.importFile(file_path, rt.name("noPrompt"))
abc_after = {
c for c in rt.rootNode.Children
c
for c in rt.rootNode.Children
if rt.classOf(c) == rt.AlembicContainer
}
@ -57,7 +49,8 @@ importFile @"{file_path}" #noPrompt
abc_container = abc_containers.pop()
return containerise(
name, [abc_container], context, loader=self.__class__.__name__)
name, [abc_container], context, loader=self.__class__.__name__
)
def update(self, container, representation):
from pymxs import runtime as rt
@ -69,9 +62,10 @@ importFile @"{file_path}" #noPrompt
for alembic_object in alembic_objects:
alembic_object.source = path
lib.imprint(container["instance_node"], {
"representation": str(representation["_id"])
})
lib.imprint(
container["instance_node"],
{"representation": str(representation["_id"])},
)
def switch(self, container, representation):
self.update(container, representation)

View file

@ -0,0 +1,63 @@
import os
import clique
from openpype.pipeline import (
load,
get_representation_path
)
from openpype.hosts.max.api.pipeline import containerise
from openpype.hosts.max.api import lib
class RedshiftProxyLoader(load.LoaderPlugin):
"""Load rs files with Redshift Proxy"""
label = "Load Redshift Proxy"
families = ["redshiftproxy"]
representations = ["rs"]
order = -9
icon = "code-fork"
color = "white"
def load(self, context, name=None, namespace=None, data=None):
from pymxs import runtime as rt
filepath = self.filepath_from_context(context)
rs_proxy = rt.RedshiftProxy()
rs_proxy.file = filepath
files_in_folder = os.listdir(os.path.dirname(filepath))
collections, remainder = clique.assemble(files_in_folder)
if collections:
rs_proxy.is_sequence = True
container = rt.container()
container.name = name
rs_proxy.Parent = container
asset = rt.getNodeByName(name)
return containerise(
name, [asset], context, loader=self.__class__.__name__)
def update(self, container, representation):
from pymxs import runtime as rt
path = get_representation_path(representation)
node = rt.getNodeByName(container["instance_node"])
for children in node.Children:
children_node = rt.getNodeByName(children.name)
for proxy in children_node.Children:
proxy.file = path
lib.imprint(container["instance_node"], {
"representation": str(representation["_id"])
})
def switch(self, container, representation):
self.update(container, representation)
def remove(self, container):
from pymxs import runtime as rt
node = rt.getNodeByName(container["instance_node"])
rt.delete(node)

View file

@ -5,7 +5,8 @@ import pyblish.api
from pymxs import runtime as rt
from openpype.pipeline import get_current_asset_name
from openpype.hosts.max.api.lib import get_max_version
from openpype.hosts.max.api import colorspace
from openpype.hosts.max.api.lib import get_max_version, get_current_renderer
from openpype.hosts.max.api.lib_renderproducts import RenderProducts
from openpype.client import get_last_version_by_subset_name
@ -28,8 +29,16 @@ class CollectRender(pyblish.api.InstancePlugin):
context.data['currentFile'] = current_file
asset = get_current_asset_name()
render_layer_files = RenderProducts().render_product(instance.name)
files_by_aov = RenderProducts().get_beauty(instance.name)
folder = folder.replace("\\", "/")
aovs = RenderProducts().get_aovs(instance.name)
files_by_aov.update(aovs)
if "expectedFiles" not in instance.data:
instance.data["expectedFiles"] = list()
instance.data["files"] = list()
instance.data["expectedFiles"].append(files_by_aov)
instance.data["files"].append(files_by_aov)
img_format = RenderProducts().image_format()
project_name = context.data["projectName"]
@ -38,7 +47,6 @@ class CollectRender(pyblish.api.InstancePlugin):
version_doc = get_last_version_by_subset_name(project_name,
instance.name,
asset_id)
self.log.debug("version_doc: {0}".format(version_doc))
version_int = 1
if version_doc:
@ -46,22 +54,42 @@ class CollectRender(pyblish.api.InstancePlugin):
self.log.debug(f"Setting {version_int} to context.")
context.data["version"] = version_int
# setup the plugin as 3dsmax for the internal renderer
# OCIO config not support in
# most of the 3dsmax renderers
# so this is currently hard coded
# TODO: add options for redshift/vray ocio config
instance.data["colorspaceConfig"] = ""
instance.data["colorspaceDisplay"] = "sRGB"
instance.data["colorspaceView"] = "ACES 1.0 SDR-video"
instance.data["renderProducts"] = colorspace.ARenderProduct()
instance.data["publishJobState"] = "Suspended"
instance.data["attachTo"] = []
renderer_class = get_current_renderer()
renderer = str(renderer_class).split(":")[0]
# also need to get the render dir for conversion
data = {
"subset": instance.name,
"asset": asset,
"subset": str(instance.name),
"publish": True,
"maxversion": str(get_max_version()),
"imageFormat": img_format,
"family": 'maxrender',
"families": ['maxrender'],
"renderer": renderer,
"source": filepath,
"expectedFiles": render_layer_files,
"plugin": "3dsmax",
"frameStart": int(rt.rendStart),
"frameEnd": int(rt.rendEnd),
"version": version_int,
"farm": True
}
self.log.info("data: {0}".format(data))
instance.data.update(data)
# TODO: this should be unified with maya and its "multipart" flag
# on instance.
if renderer == "Redshift_Renderer":
instance.data.update(
{"separateAovFiles": rt.Execute(
"renderers.current.separateAovFiles")})
self.log.info("data: {0}".format(data))

View file

@ -0,0 +1,62 @@
import os
import pyblish.api
from openpype.pipeline import publish
from pymxs import runtime as rt
from openpype.hosts.max.api import maintained_selection
class ExtractRedshiftProxy(publish.Extractor):
"""
Extract Redshift Proxy with rsProxy
"""
order = pyblish.api.ExtractorOrder - 0.1
label = "Extract RedShift Proxy"
hosts = ["max"]
families = ["redshiftproxy"]
def process(self, instance):
container = instance.data["instance_node"]
start = int(instance.context.data.get("frameStart"))
end = int(instance.context.data.get("frameEnd"))
self.log.info("Extracting Redshift Proxy...")
stagingdir = self.staging_dir(instance)
rs_filename = "{name}.rs".format(**instance.data)
rs_filepath = os.path.join(stagingdir, rs_filename)
rs_filepath = rs_filepath.replace("\\", "/")
rs_filenames = self.get_rsfiles(instance, start, end)
with maintained_selection():
# select and export
con = rt.getNodeByName(container)
rt.select(con.Children)
# Redshift rsProxy command
# rsProxy fp selected compress connectivity startFrame endFrame
# camera warnExisting transformPivotToOrigin
rt.rsProxy(rs_filepath, 1, 0, 0, start, end, 0, 1, 1)
self.log.info("Performing Extraction ...")
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
'name': 'rs',
'ext': 'rs',
'files': rs_filenames if len(rs_filenames) > 1 else rs_filenames[0], # noqa
"stagingDir": stagingdir,
}
instance.data["representations"].append(representation)
self.log.info("Extracted instance '%s' to: %s" % (instance.name,
stagingdir))
def get_rsfiles(self, instance, startFrame, endFrame):
rs_filenames = []
rs_name = instance.data["name"]
for frame in range(startFrame, endFrame + 1):
rs_filename = "%s.%04d.rs" % (rs_name, frame)
rs_filenames.append(rs_filename)
return rs_filenames

View file

@ -0,0 +1,21 @@
import pyblish.api
import os
class SaveCurrentScene(pyblish.api.ContextPlugin):
"""Save current scene
"""
label = "Save current file"
order = pyblish.api.ExtractorOrder - 0.49
hosts = ["max"]
families = ["maxrender", "workfile"]
def process(self, context):
from pymxs import runtime as rt
folder = rt.maxFilePath
file = rt.maxFileName
current = os.path.join(folder, file)
assert context.data["currentFile"] == current
rt.saveMaxFile(current)

View file

@ -0,0 +1,43 @@
import os
import pyblish.api
from pymxs import runtime as rt
from openpype.pipeline.publish import (
RepairAction,
ValidateContentsOrder,
PublishValidationError,
OptionalPyblishPluginMixin
)
from openpype.hosts.max.api.lib_rendersettings import RenderSettings
class ValidateDeadlinePublish(pyblish.api.InstancePlugin,
OptionalPyblishPluginMixin):
"""Validates Render File Directory is
not the same in every submission
"""
order = ValidateContentsOrder
families = ["maxrender"]
hosts = ["max"]
label = "Render Output for Deadline"
optional = True
actions = [RepairAction]
def process(self, instance):
if not self.is_active(instance.data):
return
file = rt.maxFileName
filename, ext = os.path.splitext(file)
if filename not in rt.rendOutputFilename:
raise PublishValidationError(
"Render output folder "
"doesn't match the max scene name! "
"Use Repair action to "
"fix the folder file path.."
)
@classmethod
def repair(cls, instance):
container = instance.data.get("instance_node")
RenderSettings().render_output(container)
cls.log.debug("Reset the render output folder...")

View file

@ -0,0 +1,54 @@
# -*- coding: utf-8 -*-
import pyblish.api
from openpype.pipeline import PublishValidationError
from pymxs import runtime as rt
from openpype.pipeline.publish import RepairAction
from openpype.hosts.max.api.lib import get_current_renderer
class ValidateRendererRedshiftProxy(pyblish.api.InstancePlugin):
"""
Validates Redshift as the current renderer for creating
Redshift Proxy
"""
order = pyblish.api.ValidatorOrder
families = ["redshiftproxy"]
hosts = ["max"]
label = "Redshift Renderer"
actions = [RepairAction]
def process(self, instance):
invalid = self.get_redshift_renderer(instance)
if invalid:
raise PublishValidationError("Please install Redshift for 3dsMax"
" before using the Redshift proxy instance") # noqa
invalid = self.get_current_renderer(instance)
if invalid:
raise PublishValidationError("The Redshift proxy extraction"
"discontinued since the current renderer is not Redshift") # noqa
def get_redshift_renderer(self, instance):
invalid = list()
max_renderers_list = str(rt.RendererClass.classes)
if "Redshift_Renderer" not in max_renderers_list:
invalid.append(max_renderers_list)
return invalid
def get_current_renderer(self, instance):
invalid = list()
renderer_class = get_current_renderer()
current_renderer = str(renderer_class).split(":")[0]
if current_renderer != "Redshift_Renderer":
invalid.append(current_renderer)
return invalid
@classmethod
def repair(cls, instance):
for Renderer in rt.RendererClass.classes:
renderer = Renderer()
if "Redshift_Renderer" in str(renderer):
rt.renderers.production = renderer
break

View file

@ -28,7 +28,9 @@ from openpype.pipeline import (
)
from openpype.hosts.maya.api.lib import (
matrix_equals,
unique_namespace
unique_namespace,
get_container_transforms,
DEFAULT_MATRIX
)
log = logging.getLogger("PackageLoader")
@ -183,8 +185,6 @@ def _add(instance, representation_id, loaders, namespace, root="|"):
"""
from openpype.hosts.maya.lib import get_container_transforms
# Process within the namespace
with namespaced(namespace, new=False) as namespace:
@ -379,8 +379,6 @@ def update_scene(set_container, containers, current_data, new_data, new_file):
"""
from openpype.hosts.maya.lib import DEFAULT_MATRIX, get_container_transforms
set_namespace = set_container['namespace']
project_name = legacy_io.active_project()

View file

@ -181,16 +181,34 @@ class CreateRender(plugin.Creator):
primary_pool = pool_setting["primary_pool"]
sorted_pools = self._set_default_pool(list(pools), primary_pool)
cmds.addAttr(self.instance, longName="primaryPool",
attributeType="enum",
enumName=":".join(sorted_pools))
cmds.addAttr(
self.instance,
longName="primaryPool",
attributeType="enum",
enumName=":".join(sorted_pools)
)
cmds.setAttr(
"{}.primaryPool".format(self.instance),
0,
keyable=False,
channelBox=True
)
pools = ["-"] + pools
secondary_pool = pool_setting["secondary_pool"]
sorted_pools = self._set_default_pool(list(pools), secondary_pool)
cmds.addAttr("{}.secondaryPool".format(self.instance),
attributeType="enum",
enumName=":".join(sorted_pools))
cmds.addAttr(
self.instance,
longName="secondaryPool",
attributeType="enum",
enumName=":".join(sorted_pools)
)
cmds.setAttr(
"{}.secondaryPool".format(self.instance),
0,
keyable=False,
channelBox=True
)
def _create_render_settings(self):
"""Create instance settings."""
@ -260,6 +278,12 @@ class CreateRender(plugin.Creator):
default_priority)
self.data["tile_priority"] = tile_priority
strict_error_checking = maya_submit_dl.get("strict_error_checking",
True)
self.data["strict_error_checking"] = strict_error_checking
# Pool attributes should be last since they will be recreated when
# the deadline server changes.
pool_setting = (self._project_settings["deadline"]
["publish"]
["CollectDeadlinePools"])
@ -272,9 +296,6 @@ class CreateRender(plugin.Creator):
secondary_pool = pool_setting["secondary_pool"]
self.data["secondaryPool"] = self._set_default_pool(pool_names,
secondary_pool)
strict_error_checking = maya_submit_dl.get("strict_error_checking",
True)
self.data["strict_error_checking"] = strict_error_checking
if muster_enabled:
self.log.info(">>> Loading Muster credentials ...")

View file

@ -1,8 +1,14 @@
import maya.cmds as cmds
from openpype.pipeline import (
load,
remove_container
)
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
from openpype.hosts.maya.api import setdress
class AssemblyLoader(load.LoaderPlugin):
@ -16,9 +22,6 @@ class AssemblyLoader(load.LoaderPlugin):
def load(self, context, name, namespace, data):
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
asset = context['asset']['name']
namespace = namespace or unique_namespace(
asset + "_",
@ -26,8 +29,6 @@ class AssemblyLoader(load.LoaderPlugin):
suffix="_",
)
from openpype.hosts.maya.api import setdress
containers = setdress.load_package(
filepath=self.fname,
name=name,
@ -50,15 +51,11 @@ class AssemblyLoader(load.LoaderPlugin):
def update(self, container, representation):
from openpype import setdress
return setdress.update_package(container, representation)
def remove(self, container):
"""Remove all sub containers"""
from openpype import setdress
import maya.cmds as cmds
# Remove all members
member_containers = setdress.get_contained_containers(container)
for member_container in member_containers:

View file

@ -33,7 +33,7 @@ def preserve_modelpanel_cameras(container, log=None):
panel_cameras = {}
for panel in cmds.getPanel(type="modelPanel"):
cam = cmds.ls(cmds.modelPanel(panel, query=True, camera=True),
long=True)
long=True)[0]
# Often but not always maya returns the transform from the
# modelPanel as opposed to the camera shape, so we convert it

View file

@ -166,7 +166,7 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin):
inputs = [c["representation"] for c in containers]
instance.data["inputRepresentations"] = inputs
self.log.info("Collected inputs: %s" % inputs)
self.log.debug("Collected inputs: %s" % inputs)
def _collect_renderlayer_inputs(self, scene_containers, instance):
"""Collects inputs from nodes in renderlayer, incl. shaders + camera"""

View file

@ -336,7 +336,7 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
context.data["system_settings"]["modules"]["deadline"]
)
if deadline_settings["enabled"]:
data["deadlineUrl"] = render_instance.data.get("deadlineUrl")
data["deadlineUrl"] = render_instance.data["deadlineUrl"]
if self.sync_workfile_version:
data["version"] = context.data["version"]

View file

@ -31,5 +31,5 @@ class SaveCurrentScene(pyblish.api.ContextPlugin):
# remove lockfile before saving
if is_workfile_lock_enabled("maya", project_name, project_settings):
remove_workfile_lock(current)
self.log.info("Saving current file..")
self.log.info("Saving current file: {}".format(current))
cmds.file(save=True, force=True)

View file

@ -13,7 +13,6 @@ class ValidateInstanceHasMembers(pyblish.api.InstancePlugin):
@classmethod
def get_invalid(cls, instance):
invalid = list()
if not instance.data["setMembers"]:
objectset_name = instance.data['name']
@ -22,6 +21,10 @@ class ValidateInstanceHasMembers(pyblish.api.InstancePlugin):
return invalid
def process(self, instance):
# Allow renderlayer, rendersetup and workfile to be empty
skip_families = ["workfile", "renderlayer", "rendersetup"]
if instance.data.get("family") in skip_families:
return
invalid = self.get_invalid(instance)
if invalid:

View file

@ -55,7 +55,8 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin):
if shapes:
instance_nodes.extend(shapes)
scene_nodes = cmds.ls(type="transform") + cmds.ls(type="mesh")
scene_nodes = cmds.ls(type="transform", long=True)
scene_nodes += cmds.ls(type="mesh", long=True)
scene_nodes = set(scene_nodes) - set(instance_nodes)
scene_nodes_by_basename = defaultdict(list)
@ -76,7 +77,7 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin):
if len(ids) > 1:
cls.log.error(
"\"{}\" id mismatch to: {}".format(
instance_node.longName(), matches
instance_node, matches
)
)
invalid[instance_node] = matches

View file

@ -151,6 +151,7 @@ class NukeHost(
def add_nuke_callbacks():
""" Adding all available nuke callbacks
"""
nuke_settings = get_current_project_settings()["nuke"]
workfile_settings = WorkfileSettings()
# Set context settings.
nuke.addOnCreate(
@ -169,7 +170,10 @@ def add_nuke_callbacks():
# # set apply all workfile settings on script load and save
nuke.addOnScriptLoad(WorkfileSettings().set_context_settings)
nuke.addFilenameFilter(dirmap_file_name_filter)
if nuke_settings["nuke-dirmap"]["enabled"]:
log.info("Added Nuke's dirmaping callback ...")
# Add dirmap for file paths.
nuke.addFilenameFilter(dirmap_file_name_filter)
log.info("Added Nuke callbacks ...")

View file

@ -5,6 +5,8 @@ import pyblish.api
from openpype.pipeline import publish
from openpype.hosts.nuke import api as napi
from openpype.hosts.nuke.api.lib import set_node_knobs_from_settings
if sys.version_info[0] >= 3:
unicode = str
@ -28,7 +30,7 @@ class ExtractThumbnail(publish.Extractor):
bake_viewer_process = True
bake_viewer_input_process = True
nodes = {}
reposition_nodes = None
def process(self, instance):
if instance.data.get("farm"):
@ -123,18 +125,32 @@ class ExtractThumbnail(publish.Extractor):
temporary_nodes.append(rnode)
previous_node = rnode
reformat_node = nuke.createNode("Reformat")
ref_node = self.nodes.get("Reformat", None)
if ref_node:
for k, v in ref_node:
self.log.debug("k, v: {0}:{1}".format(k, v))
if isinstance(v, unicode):
v = str(v)
reformat_node[k].setValue(v)
if self.reposition_nodes is None:
# [deprecated] create reformat node old way
reformat_node = nuke.createNode("Reformat")
ref_node = self.nodes.get("Reformat", None)
if ref_node:
for k, v in ref_node:
self.log.debug("k, v: {0}:{1}".format(k, v))
if isinstance(v, unicode):
v = str(v)
reformat_node[k].setValue(v)
reformat_node.setInput(0, previous_node)
previous_node = reformat_node
temporary_nodes.append(reformat_node)
reformat_node.setInput(0, previous_node)
previous_node = reformat_node
temporary_nodes.append(reformat_node)
else:
# create reformat node new way
for repo_node in self.reposition_nodes:
node_class = repo_node["node_class"]
knobs = repo_node["knobs"]
node = nuke.createNode(node_class)
set_node_knobs_from_settings(node, knobs)
# connect in order
node.setInput(0, previous_node)
previous_node = node
temporary_nodes.append(node)
# only create colorspace baking if toggled on
if bake_viewer_process:

View file
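The new reposition_nodes option replaces the single hard-coded Reformat node with a settings-driven list of node classes and knob values. A hypothetical example of such a settings value (the knob dictionary layout is an assumption, not taken from this commit):
# Hypothetical ExtractThumbnail.reposition_nodes value; every entry is
# created with nuke.createNode(node_class) and its knobs are applied
# through set_node_knobs_from_settings(node, knobs).
reposition_nodes = [
    {
        "node_class": "Reformat",
        "knobs": [
            # knob entries are illustrative only
            {"type": "text", "name": "type", "value": "to format"},
            {"type": "text", "name": "format", "value": "HD_1080"},
        ],
    },
    {
        "node_class": "Crop",
        "knobs": [],
    },
]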

View file

@ -0,0 +1,47 @@
""" OpenPype custom script for resetting read nodes start frame values """
import nuke
import nukescripts
class FrameSettingsPanel(nukescripts.PythonPanel):
""" Frame Settings Panel """
def __init__(self):
nukescripts.PythonPanel.__init__(self, "Set Frame Start (Read Node)")
# create knobs
self.frame = nuke.Int_Knob(
'frame', 'Frame Number')
self.selected = nuke.Boolean_Knob("selection")
# add knobs to panel
self.addKnob(self.selected)
self.addKnob(self.frame)
# set values
self.selected.setValue(False)
self.frame.setValue(nuke.root().firstFrame())
def process(self):
""" Process the panel values. """
# get values
frame = self.frame.value()
if self.selected.value():
# selected nodes processing
if not nuke.selectedNodes():
return
for rn_ in nuke.selectedNodes():
if rn_.Class() != "Read":
continue
rn_["frame_mode"].setValue("start_at")
rn_["frame"].setValue(str(frame))
else:
# all nodes processing
for rn_ in nuke.allNodes(filter="Read"):
rn_["frame_mode"].setValue("start_at")
rn_["frame"].setValue(str(frame))
def main():
p_ = FrameSettingsPanel()
if p_.showModalDialog():
print(p_.process())

View file

@ -24,6 +24,8 @@ from .lib import (
get_project_manager,
get_current_project,
get_current_timeline,
get_any_timeline,
get_new_timeline,
create_bin,
get_media_pool_item,
create_media_pool_item,
@ -95,6 +97,8 @@ __all__ = [
"get_project_manager",
"get_current_project",
"get_current_timeline",
"get_any_timeline",
"get_new_timeline",
"create_bin",
"get_media_pool_item",
"create_media_pool_item",

View file

@ -15,6 +15,7 @@ log = Logger.get_logger(__name__)
self = sys.modules[__name__]
self.project_manager = None
self.media_storage = None
self.current_project = None
# OpenPype sequential rename variables
self.rename_index = 0
@ -85,22 +86,60 @@ def get_media_storage():
def get_current_project():
# initialize project manager
get_project_manager()
"""Get current project object.
"""
if not self.current_project:
self.current_project = get_project_manager().GetCurrentProject()
return self.project_manager.GetCurrentProject()
return self.current_project
def get_current_timeline(new=False):
# get current project
"""Get current timeline object.
Args:
new (bool)[optional]: [DEPRECATED] if True it will create
new timeline if none exists
Returns:
TODO: will need to reflect future `None`
object: resolve.Timeline
"""
project = get_current_project()
timeline = project.GetCurrentTimeline()
# return current timeline if any
if timeline:
return timeline
# TODO: [deprecated] and will be removed in future
if new:
media_pool = project.GetMediaPool()
new_timeline = media_pool.CreateEmptyTimeline(self.pype_timeline_name)
project.SetCurrentTimeline(new_timeline)
return get_new_timeline()
return project.GetCurrentTimeline()
def get_any_timeline():
"""Get any timeline object.
Returns:
object | None: resolve.Timeline
"""
project = get_current_project()
timeline_count = project.GetTimelineCount()
if timeline_count > 0:
return project.GetTimelineByIndex(1)
def get_new_timeline():
"""Get new timeline object.
Returns:
object: resolve.Timeline
"""
project = get_current_project()
media_pool = project.GetMediaPool()
new_timeline = media_pool.CreateEmptyTimeline(self.pype_timeline_name)
project.SetCurrentTimeline(new_timeline)
return new_timeline
def create_bin(name: str, root: object = None) -> object:
@ -312,7 +351,13 @@ def get_current_timeline_items(
track_type = track_type or "video"
selecting_color = selecting_color or "Chocolate"
project = get_current_project()
timeline = get_current_timeline()
# get timeline anyhow
timeline = (
get_current_timeline() or
get_any_timeline() or
get_new_timeline()
)
selected_clips = []
# get all tracks count filtered by track type

View file

@ -327,7 +327,10 @@ class ClipLoader:
self.active_timeline = options["timeline"]
else:
# create new sequence
self.active_timeline = lib.get_current_timeline(new=True)
self.active_timeline = (
lib.get_current_timeline() or
lib.get_new_timeline()
)
else:
self.active_timeline = lib.get_current_timeline()

View file

@ -43,18 +43,22 @@ def open_file(filepath):
"""
Loading project
"""
from . import bmdvr
pm = get_project_manager()
page = bmdvr.GetCurrentPage()
if page is not None:
# Save current project only if Resolve has an active page, otherwise
# we consider Resolve being in a pre-launch state (no open UI yet)
project = pm.GetCurrentProject()
print(f"Saving current project: {project}")
pm.SaveProject()
file = os.path.basename(filepath)
fname, _ = os.path.splitext(file)
dname, _ = fname.split("_v")
# deal with current project
project = pm.GetCurrentProject()
log.info(f"Test `pm`: {pm}")
pm.SaveProject()
try:
log.info(f"Test `dname`: {dname}")
if not set_project_manager_to_folder_name(dname):
raise
# load project from input path
@ -72,6 +76,7 @@ def open_file(filepath):
return False
return True
def current_file():
pm = get_project_manager()
current_dir = os.getenv("AVALON_WORKDIR")

View file

@ -0,0 +1,45 @@
import os
from openpype.lib import PreLaunchHook
import openpype.hosts.resolve
class ResolveLaunchLastWorkfile(PreLaunchHook):
"""Special hook to open last workfile for Resolve.
Checks 'start_last_workfile', if set to False, it will not open last
workfile. This property is set explicitly in Launcher.
"""
# Execute after workfile template copy
order = 10
app_groups = ["resolve"]
def execute(self):
if not self.data.get("start_last_workfile"):
self.log.info("It is set to not start last workfile on start.")
return
last_workfile = self.data.get("last_workfile_path")
if not last_workfile:
self.log.warning("Last workfile was not collected.")
return
if not os.path.exists(last_workfile):
self.log.info("Current context does not have any workfile yet.")
return
# Add path to launch environment for the startup script to pick up
self.log.info(f"Setting OPENPYPE_RESOLVE_OPEN_ON_LAUNCH to launch "
f"last workfile: {last_workfile}")
key = "OPENPYPE_RESOLVE_OPEN_ON_LAUNCH"
self.launch_context.env[key] = last_workfile
# Set the openpype prelaunch startup script path for easy access
# in the LUA .scriptlib code
op_resolve_root = os.path.dirname(openpype.hosts.resolve.__file__)
script_path = os.path.join(op_resolve_root, "startup.py")
key = "OPENPYPE_RESOLVE_STARTUP_SCRIPT"
self.launch_context.env[key] = script_path
self.log.info("Setting OPENPYPE_RESOLVE_STARTUP_SCRIPT to: "
f"{script_path}")

View file

@ -19,6 +19,7 @@ from openpype.lib.transcoding import (
IMAGE_EXTENSIONS
)
class LoadClip(plugin.TimelineItemLoader):
"""Load a subset to timeline as clip

View file

@ -0,0 +1,62 @@
"""This script is used as a startup script in Resolve through a .scriptlib file
It triggers directly after the launch of Resolve and it's recommended to keep
it optimized for fast performance since the Resolve UI is actually interactive
while this is running. As such, there's nothing ensuring the user isn't
continuing manually before any of the logic here runs. As such we also try
to delay any imports as much as possible.
This code runs in a separate process to the main Resolve process.
"""
import os
import openpype.hosts.resolve.api
def ensure_installed_host():
"""Install resolve host with openpype and return the registered host.
This function can be called multiple times without triggering an
additional install.
"""
from openpype.pipeline import install_host, registered_host
host = registered_host()
if host:
return host
install_host(openpype.hosts.resolve.api)
return registered_host()
def launch_menu():
print("Launching Resolve OpenPype menu..")
ensure_installed_host()
openpype.hosts.resolve.api.launch_pype_menu()
def open_file(path):
# Avoid the need to "install" the host
host = ensure_installed_host()
host.open_file(path)
def main():
# Open last workfile
workfile_path = os.environ.get("OPENPYPE_RESOLVE_OPEN_ON_LAUNCH")
if workfile_path:
open_file(workfile_path)
else:
print("No last workfile set to open. Skipping..")
# Launch OpenPype menu
from openpype.settings import get_project_settings
from openpype.pipeline.context_tools import get_current_project_name
project_name = get_current_project_name()
settings = get_project_settings(project_name)
if settings.get("resolve", {}).get("launch_openpype_menu_on_start", True):
launch_menu()
if __name__ == "__main__":
main()

View file

@ -0,0 +1,21 @@
-- Run OpenPype's Python launch script for resolve
function file_exists(name)
local f = io.open(name, "r")
return f ~= nil and io.close(f)
end
openpype_startup_script = os.getenv("OPENPYPE_RESOLVE_STARTUP_SCRIPT")
if openpype_startup_script ~= nil then
script = fusion:MapPath(openpype_startup_script)
if file_exists(script) then
-- We must use RunScript to ensure it runs in a separate
-- process to Resolve itself to avoid a deadlock for
-- certain imports of OpenPype libraries or Qt
print("Running launch script: " .. script)
fusion:RunScript(script)
else
print("Launch script not found at: " .. script)
end
end

View file

@ -0,0 +1,13 @@
#! python3
from openpype.pipeline import install_host
from openpype.hosts.resolve import api as bmdvr
from openpype.hosts.resolve.api.lib import get_current_project
if __name__ == "__main__":
install_host(bmdvr)
project = get_current_project()
timeline_count = project.GetTimelineCount()
print(f"Timeline count: {timeline_count}")
timeline = project.GetTimelineByIndex(timeline_count)
print(f"Timeline name: {timeline.GetName()}")
print(timeline.GetTrackCount("video"))

View file

@ -1,6 +1,6 @@
import os
import shutil
from openpype.lib import Logger
from openpype.lib import Logger, is_running_from_build
RESOLVE_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
@ -29,6 +29,9 @@ def setup(env):
log.info("Utility Scripts Dir: `{}`".format(util_scripts_paths))
log.info("Utility Scripts: `{}`".format(scripts))
# Make sure scripts dir exists
os.makedirs(util_scripts_dir, exist_ok=True)
# make sure no script file is in folder
for script in os.listdir(util_scripts_dir):
path = os.path.join(util_scripts_dir, script)
@ -41,8 +44,23 @@ def setup(env):
# copy scripts into Resolve's utility scripts dir
for directory, scripts in scripts.items():
for script in scripts:
if (
is_running_from_build() and
script in ["tests", "develop"]
):
# skip these when running from a build; they are only copied
# when running from sources
continue
src = os.path.join(directory, script)
dst = os.path.join(util_scripts_dir, script)
# TODO: Make this a less hacky workaround
if script == "openpype_startup.scriptlib":
# Handle special case for scriptlib that needs to be a folder
# up from the Comp folder in the Fusion scripts
dst = os.path.join(os.path.dirname(util_scripts_dir),
script)
log.info("Copying `{}` to `{}`...".format(src, dst))
if os.path.isdir(src):
shutil.copytree(

View file

@ -16,11 +16,12 @@ class SaveCurrentWorkfile(pyblish.api.ContextPlugin):
def process(self, context):
host = registered_host()
if context.data["currentFile"] != host.get_current_workfile():
current = host.get_current_workfile()
if context.data["currentFile"] != current:
raise KnownPublishError("Workfile has changed during publishing!")
if host.has_unsaved_changes():
self.log.info("Saving current file..")
self.log.info("Saving current file: {}".format(current))
host.save_workfile()
else:
self.log.debug("Skipping workfile save because there are no "

View file

@ -1,5 +1,7 @@
import os
import re
from openpype.modules import IHostAddon, OpenPypeModule
from openpype.widgets.message_window import Window
UNREAL_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))
@ -19,6 +21,20 @@ class UnrealAddon(OpenPypeModule, IHostAddon):
from .lib import get_compatible_integration
pattern = re.compile(r'^\d+-\d+$')
if not pattern.match(app.name):
msg = (
"Unreal application key in the settings must be in format"
"'5-0' or '5-1'"
)
Window(
parent=None,
title="Unreal application name format",
message=msg,
level="critical")
raise ValueError(msg)
ue_version = app.name.replace("-", ".")
unreal_plugin_path = os.path.join(
UNREAL_ROOT_DIR, "integration", "UE_{}".format(ue_version), "Ayon"

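For clarity, a standalone sketch of the application-name check added above (the example names are made up):
import re

# The application key configured in the settings must look like
# "<major>-<minor>", e.g. "5-0" or "5-1".
pattern = re.compile(r'^\d+-\d+$')

for name in ("5-0", "5-1", "5.1"):
    print(name, bool(pattern.match(name)))
# "5.1" does not match and would raise ValueError in the addon above.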
View file

@ -22,6 +22,8 @@ from .pipeline import (
show_tools_popup,
instantiate,
UnrealHost,
set_sequence_hierarchy,
generate_sequence,
maintained_selection
)
@ -41,5 +43,7 @@ __all__ = [
"show_tools_popup",
"instantiate",
"UnrealHost",
"set_sequence_hierarchy",
"generate_sequence",
"maintained_selection"
]

View file

@ -9,12 +9,14 @@ import time
import pyblish.api
from openpype.client import get_asset_by_name, get_assets
from openpype.pipeline import (
register_loader_plugin_path,
register_creator_plugin_path,
deregister_loader_plugin_path,
deregister_creator_plugin_path,
AYON_CONTAINER_ID,
legacy_io,
)
from openpype.tools.utils import host_tools
import openpype.hosts.unreal
@ -512,6 +514,141 @@ def get_subsequences(sequence: unreal.LevelSequence):
return []
def set_sequence_hierarchy(
seq_i, seq_j, max_frame_i, min_frame_j, max_frame_j, map_paths
):
# Get existing sequencer tracks or create them if they don't exist
tracks = seq_i.get_master_tracks()
subscene_track = None
visibility_track = None
for t in tracks:
if t.get_class() == unreal.MovieSceneSubTrack.static_class():
subscene_track = t
if (t.get_class() ==
unreal.MovieSceneLevelVisibilityTrack.static_class()):
visibility_track = t
if not subscene_track:
subscene_track = seq_i.add_master_track(unreal.MovieSceneSubTrack)
if not visibility_track:
visibility_track = seq_i.add_master_track(
unreal.MovieSceneLevelVisibilityTrack)
# Create the sub-scene section
subscenes = subscene_track.get_sections()
subscene = None
for s in subscenes:
if s.get_editor_property('sub_sequence') == seq_j:
subscene = s
break
if not subscene:
subscene = subscene_track.add_section()
subscene.set_row_index(len(subscene_track.get_sections()))
subscene.set_editor_property('sub_sequence', seq_j)
subscene.set_range(
min_frame_j,
max_frame_j + 1)
# Create the visibility section
ar = unreal.AssetRegistryHelpers.get_asset_registry()
maps = []
for m in map_paths:
# Unreal requires the level to be loaded to get the map name
unreal.EditorLevelLibrary.save_all_dirty_levels()
unreal.EditorLevelLibrary.load_level(m)
maps.append(str(ar.get_asset_by_object_path(m).asset_name))
vis_section = visibility_track.add_section()
index = len(visibility_track.get_sections())
vis_section.set_range(
min_frame_j,
max_frame_j + 1)
vis_section.set_visibility(unreal.LevelVisibility.VISIBLE)
vis_section.set_row_index(index)
vis_section.set_level_names(maps)
if min_frame_j > 1:
hid_section = visibility_track.add_section()
hid_section.set_range(
1,
min_frame_j)
hid_section.set_visibility(unreal.LevelVisibility.HIDDEN)
hid_section.set_row_index(index)
hid_section.set_level_names(maps)
if max_frame_j < max_frame_i:
hid_section = visibility_track.add_section()
hid_section.set_range(
max_frame_j + 1,
max_frame_i + 1)
hid_section.set_visibility(unreal.LevelVisibility.HIDDEN)
hid_section.set_row_index(index)
hid_section.set_level_names(maps)
def generate_sequence(h, h_dir):
tools = unreal.AssetToolsHelpers().get_asset_tools()
sequence = tools.create_asset(
asset_name=h,
package_path=h_dir,
asset_class=unreal.LevelSequence,
factory=unreal.LevelSequenceFactoryNew()
)
project_name = legacy_io.active_project()
asset_data = get_asset_by_name(
project_name,
h_dir.split('/')[-1],
fields=["_id", "data.fps"]
)
start_frames = []
end_frames = []
elements = list(get_assets(
project_name,
parent_ids=[asset_data["_id"]],
fields=["_id", "data.clipIn", "data.clipOut"]
))
for e in elements:
start_frames.append(e.get('data').get('clipIn'))
end_frames.append(e.get('data').get('clipOut'))
elements.extend(get_assets(
project_name,
parent_ids=[e["_id"]],
fields=["_id", "data.clipIn", "data.clipOut"]
))
min_frame = min(start_frames)
max_frame = max(end_frames)
fps = asset_data.get('data').get("fps")
sequence.set_display_rate(
unreal.FrameRate(fps, 1.0))
sequence.set_playback_start(min_frame)
sequence.set_playback_end(max_frame)
sequence.set_work_range_start(min_frame / fps)
sequence.set_work_range_end(max_frame / fps)
sequence.set_view_range_start(min_frame / fps)
sequence.set_view_range_end(max_frame / fps)
tracks = sequence.get_master_tracks()
track = None
for t in tracks:
if (t.get_class() ==
unreal.MovieSceneCameraCutTrack.static_class()):
track = t
break
if not track:
track = sequence.add_master_track(
unreal.MovieSceneCameraCutTrack)
return sequence, (min_frame, max_frame)
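def _example_build_shot_hierarchy():
    # Illustrative sketch only, not part of the module API. It shows how the
    # two helpers above are meant to be chained, mirroring the camera and
    # layout loaders later in this changeset. Asset names and content paths
    # are hypothetical.
    parent_seq, (_, parent_end) = generate_sequence(
        "ep101", "/Game/Ayon/ep101")
    shot_seq, (shot_start, shot_end) = generate_sequence(
        "sh010", "/Game/Ayon/ep101/sh010")
    set_sequence_hierarchy(
        parent_seq, shot_seq,
        parent_end, shot_start, shot_end,
        ["/Game/Ayon/ep101/sh010/sh010_map.sh010_map"])
    return parent_seq, shot_seq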
@contextmanager
def maintained_selection():
"""Stub to be either implemented or replaced.

View file

@ -17,6 +17,8 @@ class CreateUAsset(UnrealAssetCreator):
family = "uasset"
icon = "cube"
extension = ".uasset"
def create(self, subset_name, instance_data, pre_create_data):
if pre_create_data.get("use_selection"):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
@ -37,10 +39,28 @@ class CreateUAsset(UnrealAssetCreator):
f"{Path(obj).name} is not on the disk. Likely it needs to"
"be saved first.")
if Path(sys_path).suffix != ".uasset":
raise CreatorError(f"{Path(sys_path).name} is not a UAsset.")
if Path(sys_path).suffix != self.extension:
raise CreatorError(
f"{Path(sys_path).name} is not a {self.label}.")
super(CreateUAsset, self).create(
subset_name,
instance_data,
pre_create_data)
class CreateUMap(CreateUAsset):
"""Create Level."""
identifier = "io.ayon.creators.unreal.umap"
label = "Level"
family = "uasset"
extension = ".umap"
def create(self, subset_name, instance_data, pre_create_data):
instance_data["families"] = ["umap"]
super(CreateUMap, self).create(
subset_name,
instance_data,
pre_create_data)

View file

@ -156,7 +156,7 @@ class AnimationFBXLoader(plugin.Loader):
package_paths=[f"{root}/{hierarchy[0]}"],
recursive_paths=False)
levels = ar.get_assets(_filter)
master_level = levels[0].get_full_name()
master_level = levels[0].get_asset().get_path_name()
hierarchy_dir = root
for h in hierarchy:
@ -168,7 +168,7 @@ class AnimationFBXLoader(plugin.Loader):
package_paths=[f"{hierarchy_dir}/"],
recursive_paths=True)
levels = ar.get_assets(_filter)
level = levels[0].get_full_name()
level = levels[0].get_asset().get_path_name()
unreal.EditorLevelLibrary.save_all_dirty_levels()
unreal.EditorLevelLibrary.load_level(level)

View file

@ -3,16 +3,24 @@
from pathlib import Path
import unreal
from unreal import EditorAssetLibrary
from unreal import EditorLevelLibrary
from unreal import EditorLevelUtils
from openpype.client import get_assets, get_asset_by_name
from unreal import (
EditorAssetLibrary,
EditorLevelLibrary,
EditorLevelUtils,
LevelSequenceEditorBlueprintLibrary as LevelSequenceLib,
)
from openpype.client import get_asset_by_name
from openpype.pipeline import (
AYON_CONTAINER_ID,
legacy_io,
)
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
from openpype.hosts.unreal.api.pipeline import (
generate_sequence,
set_sequence_hierarchy,
create_container,
imprint,
)
class CameraLoader(plugin.Loader):
@ -24,32 +32,6 @@ class CameraLoader(plugin.Loader):
icon = "cube"
color = "orange"
def _set_sequence_hierarchy(
self, seq_i, seq_j, min_frame_j, max_frame_j
):
tracks = seq_i.get_master_tracks()
track = None
for t in tracks:
if t.get_class() == unreal.MovieSceneSubTrack.static_class():
track = t
break
if not track:
track = seq_i.add_master_track(unreal.MovieSceneSubTrack)
subscenes = track.get_sections()
subscene = None
for s in subscenes:
if s.get_editor_property('sub_sequence') == seq_j:
subscene = s
break
if not subscene:
subscene = track.add_section()
subscene.set_row_index(len(track.get_sections()))
subscene.set_editor_property('sub_sequence', seq_j)
subscene.set_range(
min_frame_j,
max_frame_j + 1)
def _import_camera(
self, world, sequence, bindings, import_fbx_settings, import_filename
):
@ -110,10 +92,7 @@ class CameraLoader(plugin.Loader):
hierarchy_dir_list.append(hierarchy_dir)
asset = context.get('asset').get('name')
suffix = "_CON"
if asset:
asset_name = "{}_{}".format(asset, name)
else:
asset_name = "{}".format(name)
asset_name = f"{asset}_{name}" if asset else f"{name}"
tools = unreal.AssetToolsHelpers().get_asset_tools()
@ -127,23 +106,15 @@ class CameraLoader(plugin.Loader):
# Get highest number to make a unique name
folders = [a for a in asset_content
if a[-1] == "/" and f"{name}_" in a]
f_numbers = []
for f in folders:
# Get number from folder name. Splits the string by "_" and
# removes the last element (which is a "/").
f_numbers.append(int(f.split("_")[-1][:-1]))
# Get number from folder name. Splits the string by "_" and
# removes the last element (which is a "/").
f_numbers = [int(f.split("_")[-1][:-1]) for f in folders]
f_numbers.sort()
if not f_numbers:
unique_number = 1
else:
unique_number = f_numbers[-1] + 1
unique_number = f_numbers[-1] + 1 if f_numbers else 1
asset_dir, container_name = tools.create_unique_asset_name(
f"{hierarchy_dir}/{asset}/{name}_{unique_number:02d}", suffix="")
asset_path = Path(asset_dir)
asset_path_parent = str(asset_path.parent.as_posix())
container_name += suffix
EditorAssetLibrary.make_directory(asset_dir)
@ -156,9 +127,9 @@ class CameraLoader(plugin.Loader):
if not EditorAssetLibrary.does_asset_exist(master_level):
EditorLevelLibrary.new_level(f"{h_dir}/{h_asset}_map")
level = f"{asset_path_parent}/{asset}_map.{asset}_map"
level = f"{asset_dir}/{asset}_map_camera.{asset}_map_camera"
if not EditorAssetLibrary.does_asset_exist(level):
EditorLevelLibrary.new_level(f"{asset_path_parent}/{asset}_map")
EditorLevelLibrary.new_level(f"{asset_dir}/{asset}_map_camera")
EditorLevelLibrary.load_level(master_level)
EditorLevelUtils.add_level_to_world(
@ -169,27 +140,13 @@ class CameraLoader(plugin.Loader):
EditorLevelLibrary.save_all_dirty_levels()
EditorLevelLibrary.load_level(level)
project_name = legacy_io.active_project()
# TODO refactor
# - Creating of hierarchy should be a function in unreal integration
# - it's used in multiple loaders but must not be loader's logic
# - hard to say what is purpose of the loop
# - variables does not match their meaning
# - why scene is stored to sequences?
# - asset documents vs. elements
# - cleanup variable names in whole function
# - e.g. 'asset', 'asset_name', 'asset_data', 'asset_doc'
# - really inefficient queries of asset documents
# - existing asset in scene is considered as "with correct values"
# - variable 'elements' is modified during it's loop
# Get all the sequences in the hierarchy. It will create them, if
# they don't exist.
sequences = []
frame_ranges = []
i = 0
for h in hierarchy_dir_list:
sequences = []
for (h_dir, h) in zip(hierarchy_dir_list, hierarchy):
root_content = EditorAssetLibrary.list_assets(
h, recursive=False, include_folder=False)
h_dir, recursive=False, include_folder=False)
existing_sequences = [
EditorAssetLibrary.find_asset_data(asset)
@ -198,57 +155,17 @@ class CameraLoader(plugin.Loader):
asset).get_class().get_name() == 'LevelSequence'
]
if not existing_sequences:
scene = tools.create_asset(
asset_name=hierarchy[i],
package_path=h,
asset_class=unreal.LevelSequence,
factory=unreal.LevelSequenceFactoryNew()
)
asset_data = get_asset_by_name(
project_name,
h.split('/')[-1],
fields=["_id", "data.fps"]
)
start_frames = []
end_frames = []
elements = list(get_assets(
project_name,
parent_ids=[asset_data["_id"]],
fields=["_id", "data.clipIn", "data.clipOut"]
))
for e in elements:
start_frames.append(e.get('data').get('clipIn'))
end_frames.append(e.get('data').get('clipOut'))
elements.extend(get_assets(
project_name,
parent_ids=[e["_id"]],
fields=["_id", "data.clipIn", "data.clipOut"]
))
min_frame = min(start_frames)
max_frame = max(end_frames)
scene.set_display_rate(
unreal.FrameRate(asset_data.get('data').get("fps"), 1.0))
scene.set_playback_start(min_frame)
scene.set_playback_end(max_frame)
sequences.append(scene)
frame_ranges.append((min_frame, max_frame))
else:
for e in existing_sequences:
sequences.append(e.get_asset())
if existing_sequences:
for seq in existing_sequences:
sequences.append(seq.get_asset())
frame_ranges.append((
e.get_asset().get_playback_start(),
e.get_asset().get_playback_end()))
seq.get_asset().get_playback_start(),
seq.get_asset().get_playback_end()))
else:
sequence, frame_range = generate_sequence(h, h_dir)
i += 1
sequences.append(sequence)
frame_ranges.append(frame_range)
EditorAssetLibrary.make_directory(asset_dir)
@ -260,19 +177,24 @@ class CameraLoader(plugin.Loader):
)
# Add sequences data to hierarchy
for i in range(0, len(sequences) - 1):
self._set_sequence_hierarchy(
for i in range(len(sequences) - 1):
set_sequence_hierarchy(
sequences[i], sequences[i + 1],
frame_ranges[i + 1][0], frame_ranges[i + 1][1])
frame_ranges[i][1],
frame_ranges[i + 1][0], frame_ranges[i + 1][1],
[level])
project_name = legacy_io.active_project()
data = get_asset_by_name(project_name, asset)["data"]
cam_seq.set_display_rate(
unreal.FrameRate(data.get("fps"), 1.0))
cam_seq.set_playback_start(data.get('clipIn'))
cam_seq.set_playback_end(data.get('clipOut') + 1)
self._set_sequence_hierarchy(
set_sequence_hierarchy(
sequences[-1], cam_seq,
data.get('clipIn'), data.get('clipOut'))
frame_ranges[-1][1],
data.get('clipIn'), data.get('clipOut'),
[level])
settings = unreal.MovieSceneUserImportFBXSettings()
settings.set_editor_property('reduce_keys', False)
@ -307,7 +229,7 @@ class CameraLoader(plugin.Loader):
key.set_time(unreal.FrameNumber(value=new_time))
# Create Asset Container
unreal_pipeline.create_container(
create_container(
container=container_name, path=asset_dir)
data = {
@ -322,14 +244,14 @@ class CameraLoader(plugin.Loader):
"parent": context["representation"]["parent"],
"family": context["representation"]["context"]["family"]
}
unreal_pipeline.imprint(
"{}/{}".format(asset_dir, container_name), data)
imprint(f"{asset_dir}/{container_name}", data)
EditorLevelLibrary.save_all_dirty_levels()
EditorLevelLibrary.load_level(master_level)
# Save all assets in the hierarchy
asset_content = EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=True
hierarchy_dir_list[0], recursive=True, include_folder=False
)
for a in asset_content:
@ -340,32 +262,30 @@ class CameraLoader(plugin.Loader):
def update(self, container, representation):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
root = "/Game/ayon"
curr_level_sequence = LevelSequenceLib.get_current_level_sequence()
curr_time = LevelSequenceLib.get_current_time()
is_cam_lock = LevelSequenceLib.is_camera_cut_locked_to_viewport()
editor_subsystem = unreal.UnrealEditorSubsystem()
vp_loc, vp_rot = editor_subsystem.get_level_viewport_camera_info()
asset_dir = container.get('namespace')
context = representation.get("context")
hierarchy = context.get('hierarchy').split("/")
h_dir = f"{root}/{hierarchy[0]}"
h_asset = hierarchy[0]
master_level = f"{h_dir}/{h_asset}_map.{h_asset}_map"
EditorLevelLibrary.save_current_level()
filter = unreal.ARFilter(
_filter = unreal.ARFilter(
class_names=["LevelSequence"],
package_paths=[asset_dir],
recursive_paths=False)
sequences = ar.get_assets(filter)
filter = unreal.ARFilter(
sequences = ar.get_assets(_filter)
_filter = unreal.ARFilter(
class_names=["World"],
package_paths=[str(Path(asset_dir).parent.as_posix())],
package_paths=[asset_dir],
recursive_paths=True)
maps = ar.get_assets(filter)
maps = ar.get_assets(_filter)
# There should be only one map in the list
EditorLevelLibrary.load_level(maps[0].get_full_name())
EditorLevelLibrary.load_level(maps[0].get_asset().get_path_name())
level_sequence = sequences[0].get_asset()
@ -401,12 +321,18 @@ class CameraLoader(plugin.Loader):
root = "/Game/Ayon"
namespace = container.get('namespace').replace(f"{root}/", "")
ms_asset = namespace.split('/')[0]
filter = unreal.ARFilter(
_filter = unreal.ARFilter(
class_names=["LevelSequence"],
package_paths=[f"{root}/{ms_asset}"],
recursive_paths=False)
sequences = ar.get_assets(filter)
sequences = ar.get_assets(_filter)
master_sequence = sequences[0].get_asset()
_filter = unreal.ARFilter(
class_names=["World"],
package_paths=[f"{root}/{ms_asset}"],
recursive_paths=False)
levels = ar.get_assets(_filter)
master_level = levels[0].get_asset().get_path_name()
sequences = [master_sequence]
@ -418,26 +344,20 @@ class CameraLoader(plugin.Loader):
for t in tracks:
if t.get_class() == unreal.MovieSceneSubTrack.static_class():
subscene_track = t
break
if subscene_track:
sections = subscene_track.get_sections()
for ss in sections:
if ss.get_sequence().get_name() == sequence_name:
parent = s
sub_scene = ss
# subscene_track.remove_section(ss)
break
sequences.append(ss.get_sequence())
# Update subscenes indexes.
i = 0
for ss in sections:
for i, ss in enumerate(sections):
ss.set_row_index(i)
i += 1
if parent:
break
assert parent, "Could not find the parent sequence"
assert parent, "Could not find the parent sequence"
EditorAssetLibrary.delete_asset(level_sequence.get_path_name())
@ -466,33 +386,63 @@ class CameraLoader(plugin.Loader):
str(representation["data"]["path"])
)
# Set range of all sections
# Changing the range of the section is not enough. We need to change
# the frame of all the keys in the section.
project_name = legacy_io.active_project()
asset = container.get('asset')
data = get_asset_by_name(project_name, asset)["data"]
for possessable in new_sequence.get_possessables():
for tracks in possessable.get_tracks():
for section in tracks.get_sections():
section.set_range(
data.get('clipIn'),
data.get('clipOut') + 1)
for channel in section.get_all_channels():
for key in channel.get_keys():
old_time = key.get_time().get_editor_property(
'frame_number')
old_time_value = old_time.get_editor_property(
'value')
new_time = old_time_value + (
data.get('clipIn') - data.get('frameStart')
)
key.set_time(unreal.FrameNumber(value=new_time))
data = {
"representation": str(representation["_id"]),
"parent": str(representation["parent"])
}
unreal_pipeline.imprint(
"{}/{}".format(asset_dir, container.get('container_name')), data)
imprint(f"{asset_dir}/{container.get('container_name')}", data)
EditorLevelLibrary.save_current_level()
asset_content = EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=False)
f"{root}/{ms_asset}", recursive=True, include_folder=False)
for a in asset_content:
EditorAssetLibrary.save_asset(a)
EditorLevelLibrary.load_level(master_level)
if curr_level_sequence:
LevelSequenceLib.open_level_sequence(curr_level_sequence)
LevelSequenceLib.set_current_time(curr_time)
LevelSequenceLib.set_lock_camera_cut_to_viewport(is_cam_lock)
editor_subsystem.set_level_viewport_camera_info(vp_loc, vp_rot)
def remove(self, container):
path = Path(container.get("namespace"))
parent_path = str(path.parent.as_posix())
asset_dir = container.get('namespace')
path = Path(asset_dir)
ar = unreal.AssetRegistryHelpers.get_asset_registry()
filter = unreal.ARFilter(
_filter = unreal.ARFilter(
class_names=["LevelSequence"],
package_paths=[f"{str(path.as_posix())}"],
package_paths=[asset_dir],
recursive_paths=False)
sequences = ar.get_assets(filter)
sequences = ar.get_assets(_filter)
if not sequences:
raise Exception("Could not find sequence.")
@ -500,11 +450,11 @@ class CameraLoader(plugin.Loader):
world = ar.get_asset_by_object_path(
EditorLevelLibrary.get_editor_world().get_path_name())
filter = unreal.ARFilter(
_filter = unreal.ARFilter(
class_names=["World"],
package_paths=[f"{parent_path}"],
package_paths=[asset_dir],
recursive_paths=True)
maps = ar.get_assets(filter)
maps = ar.get_assets(_filter)
# There should be only one map in the list
if not maps:
@ -513,7 +463,7 @@ class CameraLoader(plugin.Loader):
map = maps[0]
EditorLevelLibrary.save_all_dirty_levels()
EditorLevelLibrary.load_level(map.get_full_name())
EditorLevelLibrary.load_level(map.get_asset().get_path_name())
# Remove the camera from the level.
actors = EditorLevelLibrary.get_all_level_actors()
@ -523,7 +473,7 @@ class CameraLoader(plugin.Loader):
EditorLevelLibrary.destroy_actor(a)
EditorLevelLibrary.save_all_dirty_levels()
EditorLevelLibrary.load_level(world.get_full_name())
EditorLevelLibrary.load_level(world.get_asset().get_path_name())
# There should be only one sequence in the path.
sequence_name = sequences[0].asset_name
@ -534,12 +484,18 @@ class CameraLoader(plugin.Loader):
root = "/Game/Ayon"
namespace = container.get('namespace').replace(f"{root}/", "")
ms_asset = namespace.split('/')[0]
filter = unreal.ARFilter(
_filter = unreal.ARFilter(
class_names=["LevelSequence"],
package_paths=[f"{root}/{ms_asset}"],
recursive_paths=False)
sequences = ar.get_assets(filter)
sequences = ar.get_assets(_filter)
master_sequence = sequences[0].get_asset()
_filter = unreal.ARFilter(
class_names=["World"],
package_paths=[f"{root}/{ms_asset}"],
recursive_paths=False)
levels = ar.get_assets(_filter)
master_level = levels[0].get_full_name()
sequences = [master_sequence]
@ -547,10 +503,13 @@ class CameraLoader(plugin.Loader):
for s in sequences:
tracks = s.get_master_tracks()
subscene_track = None
visibility_track = None
for t in tracks:
if t.get_class() == unreal.MovieSceneSubTrack.static_class():
subscene_track = t
break
if (t.get_class() ==
unreal.MovieSceneLevelVisibilityTrack.static_class()):
visibility_track = t
if subscene_track:
sections = subscene_track.get_sections()
for ss in sections:
@ -560,23 +519,48 @@ class CameraLoader(plugin.Loader):
break
sequences.append(ss.get_sequence())
# Update subscenes indexes.
i = 0
for ss in sections:
for i, ss in enumerate(sections):
ss.set_row_index(i)
i += 1
if visibility_track:
sections = visibility_track.get_sections()
for ss in sections:
if (unreal.Name(f"{container.get('asset')}_map_camera")
in ss.get_level_names()):
visibility_track.remove_section(ss)
# Update visibility sections indexes.
i = -1
prev_name = []
for ss in sections:
if prev_name != ss.get_level_names():
i += 1
ss.set_row_index(i)
prev_name = ss.get_level_names()
if parent:
break
assert parent, "Could not find the parent sequence"
EditorAssetLibrary.delete_directory(str(path.as_posix()))
# Create a temporary level to delete the layout level.
EditorLevelLibrary.save_all_dirty_levels()
EditorAssetLibrary.make_directory(f"{root}/tmp")
tmp_level = f"{root}/tmp/temp_map"
if not EditorAssetLibrary.does_asset_exist(f"{tmp_level}.temp_map"):
EditorLevelLibrary.new_level(tmp_level)
else:
EditorLevelLibrary.load_level(tmp_level)
# Delete the layout directory.
EditorAssetLibrary.delete_directory(asset_dir)
EditorLevelLibrary.load_level(master_level)
EditorAssetLibrary.delete_directory(f"{root}/tmp")
# Check if there are any more assets in the parent folder, and
# delete the folder if there are none.
asset_content = EditorAssetLibrary.list_assets(
parent_path, recursive=False, include_folder=True
path.parent.as_posix(), recursive=False, include_folder=True
)
if len(asset_content) == 0:
EditorAssetLibrary.delete_directory(parent_path)
EditorAssetLibrary.delete_directory(path.parent.as_posix())

View file

@ -5,15 +5,18 @@ import collections
from pathlib import Path
import unreal
from unreal import EditorAssetLibrary
from unreal import EditorLevelLibrary
from unreal import EditorLevelUtils
from unreal import AssetToolsHelpers
from unreal import FBXImportType
from unreal import MovieSceneLevelVisibilityTrack
from unreal import MovieSceneSubTrack
from unreal import (
EditorAssetLibrary,
EditorLevelLibrary,
EditorLevelUtils,
AssetToolsHelpers,
FBXImportType,
MovieSceneLevelVisibilityTrack,
MovieSceneSubTrack,
LevelSequenceEditorBlueprintLibrary as LevelSequenceLib,
)
from openpype.client import get_asset_by_name, get_assets, get_representations
from openpype.client import get_asset_by_name, get_representations
from openpype.pipeline import (
discover_loader_plugins,
loaders_from_representation,
@ -25,7 +28,13 @@ from openpype.pipeline import (
from openpype.pipeline.context_tools import get_current_project_asset
from openpype.settings import get_current_project_settings
from openpype.hosts.unreal.api import plugin
from openpype.hosts.unreal.api import pipeline as unreal_pipeline
from openpype.hosts.unreal.api.pipeline import (
generate_sequence,
set_sequence_hierarchy,
create_container,
imprint,
ls,
)
class LayoutLoader(plugin.Loader):
@ -91,77 +100,6 @@ class LayoutLoader(plugin.Loader):
return None
@staticmethod
def _set_sequence_hierarchy(
seq_i, seq_j, max_frame_i, min_frame_j, max_frame_j, map_paths
):
# Get existing sequencer tracks or create them if they don't exist
tracks = seq_i.get_master_tracks()
subscene_track = None
visibility_track = None
for t in tracks:
if t.get_class() == unreal.MovieSceneSubTrack.static_class():
subscene_track = t
if (t.get_class() ==
unreal.MovieSceneLevelVisibilityTrack.static_class()):
visibility_track = t
if not subscene_track:
subscene_track = seq_i.add_master_track(unreal.MovieSceneSubTrack)
if not visibility_track:
visibility_track = seq_i.add_master_track(
unreal.MovieSceneLevelVisibilityTrack)
# Create the sub-scene section
subscenes = subscene_track.get_sections()
subscene = None
for s in subscenes:
if s.get_editor_property('sub_sequence') == seq_j:
subscene = s
break
if not subscene:
subscene = subscene_track.add_section()
subscene.set_row_index(len(subscene_track.get_sections()))
subscene.set_editor_property('sub_sequence', seq_j)
subscene.set_range(
min_frame_j,
max_frame_j + 1)
# Create the visibility section
ar = unreal.AssetRegistryHelpers.get_asset_registry()
maps = []
for m in map_paths:
# Unreal requires to load the level to get the map name
EditorLevelLibrary.save_all_dirty_levels()
EditorLevelLibrary.load_level(m)
maps.append(str(ar.get_asset_by_object_path(m).asset_name))
vis_section = visibility_track.add_section()
index = len(visibility_track.get_sections())
vis_section.set_range(
min_frame_j,
max_frame_j + 1)
vis_section.set_visibility(unreal.LevelVisibility.VISIBLE)
vis_section.set_row_index(index)
vis_section.set_level_names(maps)
if min_frame_j > 1:
hid_section = visibility_track.add_section()
hid_section.set_range(
1,
min_frame_j)
hid_section.set_visibility(unreal.LevelVisibility.HIDDEN)
hid_section.set_row_index(index)
hid_section.set_level_names(maps)
if max_frame_j < max_frame_i:
hid_section = visibility_track.add_section()
hid_section.set_range(
max_frame_j + 1,
max_frame_i + 1)
hid_section.set_visibility(unreal.LevelVisibility.HIDDEN)
hid_section.set_row_index(index)
hid_section.set_level_names(maps)
def _transform_from_basis(self, transform, basis):
"""Transform a transform from a basis to a new basis."""
# Get the basis matrix
@ -352,63 +290,6 @@ class LayoutLoader(plugin.Loader):
sec_params = section.get_editor_property('params')
sec_params.set_editor_property('animation', animation)
@staticmethod
def _generate_sequence(h, h_dir):
tools = unreal.AssetToolsHelpers().get_asset_tools()
sequence = tools.create_asset(
asset_name=h,
package_path=h_dir,
asset_class=unreal.LevelSequence,
factory=unreal.LevelSequenceFactoryNew()
)
project_name = legacy_io.active_project()
asset_data = get_asset_by_name(
project_name,
h_dir.split('/')[-1],
fields=["_id", "data.fps"]
)
start_frames = []
end_frames = []
elements = list(get_assets(
project_name,
parent_ids=[asset_data["_id"]],
fields=["_id", "data.clipIn", "data.clipOut"]
))
for e in elements:
start_frames.append(e.get('data').get('clipIn'))
end_frames.append(e.get('data').get('clipOut'))
elements.extend(get_assets(
project_name,
parent_ids=[e["_id"]],
fields=["_id", "data.clipIn", "data.clipOut"]
))
min_frame = min(start_frames)
max_frame = max(end_frames)
sequence.set_display_rate(
unreal.FrameRate(asset_data.get('data').get("fps"), 1.0))
sequence.set_playback_start(min_frame)
sequence.set_playback_end(max_frame)
tracks = sequence.get_master_tracks()
track = None
for t in tracks:
if (t.get_class() ==
unreal.MovieSceneCameraCutTrack.static_class()):
track = t
break
if not track:
track = sequence.add_master_track(
unreal.MovieSceneCameraCutTrack)
return sequence, (min_frame, max_frame)
def _get_repre_docs_by_version_id(self, data):
version_ids = {
element.get("version")
@ -696,7 +577,7 @@ class LayoutLoader(plugin.Loader):
]
if not existing_sequences:
sequence, frame_range = self._generate_sequence(h, h_dir)
sequence, frame_range = generate_sequence(h, h_dir)
sequences.append(sequence)
frame_ranges.append(frame_range)
@ -716,7 +597,7 @@ class LayoutLoader(plugin.Loader):
# sequences and frame_ranges have the same length
for i in range(0, len(sequences) - 1):
self._set_sequence_hierarchy(
set_sequence_hierarchy(
sequences[i], sequences[i + 1],
frame_ranges[i][1],
frame_ranges[i + 1][0], frame_ranges[i + 1][1],
@ -729,7 +610,7 @@ class LayoutLoader(plugin.Loader):
shot.set_playback_start(0)
shot.set_playback_end(data.get('clipOut') - data.get('clipIn') + 1)
if sequences:
self._set_sequence_hierarchy(
set_sequence_hierarchy(
sequences[-1], shot,
frame_ranges[-1][1],
data.get('clipIn'), data.get('clipOut'),
@ -740,12 +621,12 @@ class LayoutLoader(plugin.Loader):
loaded_assets = self._process(self.fname, asset_dir, shot)
for s in sequences:
EditorAssetLibrary.save_asset(s.get_full_name())
EditorAssetLibrary.save_asset(s.get_path_name())
EditorLevelLibrary.save_current_level()
# Create Asset Container
unreal_pipeline.create_container(
create_container(
container=container_name, path=asset_dir)
data = {
@ -761,11 +642,13 @@ class LayoutLoader(plugin.Loader):
"family": context["representation"]["context"]["family"],
"loaded_assets": loaded_assets
}
unreal_pipeline.imprint(
imprint(
"{}/{}".format(asset_dir, container_name), data)
save_dir = hierarchy_dir_list[0] if create_sequences else asset_dir
asset_content = EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=False)
save_dir, recursive=True, include_folder=False)
for a in asset_content:
EditorAssetLibrary.save_asset(a)
@ -781,16 +664,24 @@ class LayoutLoader(plugin.Loader):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
curr_level_sequence = LevelSequenceLib.get_current_level_sequence()
curr_time = LevelSequenceLib.get_current_time()
is_cam_lock = LevelSequenceLib.is_camera_cut_locked_to_viewport()
editor_subsystem = unreal.UnrealEditorSubsystem()
vp_loc, vp_rot = editor_subsystem.get_level_viewport_camera_info()
root = "/Game/Ayon"
asset_dir = container.get('namespace')
context = representation.get("context")
hierarchy = context.get('hierarchy').split("/")
sequence = None
master_level = None
if create_sequences:
hierarchy = context.get('hierarchy').split("/")
h_dir = f"{root}/{hierarchy[0]}"
h_asset = hierarchy[0]
master_level = f"{h_dir}/{h_asset}_map.{h_asset}_map"
@ -819,7 +710,7 @@ class LayoutLoader(plugin.Loader):
recursive_paths=False)
levels = ar.get_assets(filter)
layout_level = levels[0].get_full_name()
layout_level = levels[0].get_asset().get_path_name()
EditorLevelLibrary.save_all_dirty_levels()
EditorLevelLibrary.load_level(layout_level)
@ -843,13 +734,15 @@ class LayoutLoader(plugin.Loader):
"parent": str(representation["parent"]),
"loaded_assets": loaded_assets
}
unreal_pipeline.imprint(
imprint(
"{}/{}".format(asset_dir, container.get('container_name')), data)
EditorLevelLibrary.save_current_level()
save_dir = f"{root}/{hierarchy[0]}" if create_sequences else asset_dir
asset_content = EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=False)
save_dir, recursive=True, include_folder=False)
for a in asset_content:
EditorAssetLibrary.save_asset(a)
@ -859,6 +752,13 @@ class LayoutLoader(plugin.Loader):
elif prev_level:
EditorLevelLibrary.load_level(prev_level)
if curr_level_sequence:
LevelSequenceLib.open_level_sequence(curr_level_sequence)
LevelSequenceLib.set_current_time(curr_time)
LevelSequenceLib.set_lock_camera_cut_to_viewport(is_cam_lock)
editor_subsystem.set_level_viewport_camera_info(vp_loc, vp_rot)
def remove(self, container):
"""
Delete the layout. First, check if the assets loaded with the layout
@ -870,7 +770,7 @@ class LayoutLoader(plugin.Loader):
root = "/Game/Ayon"
path = Path(container.get("namespace"))
containers = unreal_pipeline.ls()
containers = ls()
layout_containers = [
c for c in containers
if (c.get('asset_name') != container.get('asset_name') and
@ -919,7 +819,7 @@ class LayoutLoader(plugin.Loader):
package_paths=[f"{root}/{ms_asset}"],
recursive_paths=False)
levels = ar.get_assets(_filter)
master_level = levels[0].get_full_name()
master_level = levels[0].get_asset().get_path_name()
sequences = [master_sequence]

View file

@ -21,6 +21,8 @@ class UAssetLoader(plugin.Loader):
icon = "cube"
color = "orange"
extension = "uasset"
def load(self, context, name, namespace, options):
"""Load and containerise representation into Content Browser.
@ -42,26 +44,29 @@ class UAssetLoader(plugin.Loader):
root = "/Game/Ayon/Assets"
asset = context.get('asset').get('name')
suffix = "_CON"
if asset:
asset_name = "{}_{}".format(asset, name)
else:
asset_name = "{}".format(name)
asset_name = f"{asset}_{name}" if asset else f"{name}"
tools = unreal.AssetToolsHelpers().get_asset_tools()
asset_dir, container_name = tools.create_unique_asset_name(
f"{root}/{asset}/{name}", suffix=""
)
container_name += suffix
unique_number = 1
while unreal.EditorAssetLibrary.does_directory_exist(
f"{asset_dir}_{unique_number:02}"
):
unique_number += 1
asset_dir = f"{asset_dir}_{unique_number:02}"
container_name = f"{container_name}_{unique_number:02}{suffix}"
unreal.EditorAssetLibrary.make_directory(asset_dir)
destination_path = asset_dir.replace(
"/Game",
Path(unreal.Paths.project_content_dir()).as_posix(),
1)
"/Game", Path(unreal.Paths.project_content_dir()).as_posix(), 1)
shutil.copy(self.fname, f"{destination_path}/{name}.uasset")
shutil.copy(
self.fname,
f"{destination_path}/{name}_{unique_number:02}.{self.extension}")
# Create Asset Container
unreal_pipeline.create_container(
@ -77,7 +82,7 @@ class UAssetLoader(plugin.Loader):
"loader": str(self.__class__.__name__),
"representation": context["representation"]["_id"],
"parent": context["representation"]["parent"],
"family": context["representation"]["context"]["family"]
"family": context["representation"]["context"]["family"],
}
unreal_pipeline.imprint(f"{asset_dir}/{container_name}", data)
@ -96,10 +101,10 @@ class UAssetLoader(plugin.Loader):
asset_dir = container["namespace"]
name = representation["context"]["subset"]
unique_number = container["container_name"].split("_")[-2]
destination_path = asset_dir.replace(
"/Game",
Path(unreal.Paths.project_content_dir()).as_posix(),
1)
"/Game", Path(unreal.Paths.project_content_dir()).as_posix(), 1)
asset_content = unreal.EditorAssetLibrary.list_assets(
asset_dir, recursive=False, include_folder=True
@ -107,22 +112,24 @@ class UAssetLoader(plugin.Loader):
for asset in asset_content:
obj = ar.get_asset_by_object_path(asset).get_asset()
if not obj.get_class().get_name() == 'AyonAssetContainer':
if obj.get_class().get_name() != "AyonAssetContainer":
unreal.EditorAssetLibrary.delete_asset(asset)
update_filepath = get_representation_path(representation)
shutil.copy(update_filepath, f"{destination_path}/{name}.uasset")
shutil.copy(
update_filepath,
f"{destination_path}/{name}_{unique_number}.{self.extension}")
container_path = "{}/{}".format(container["namespace"],
container["objectName"])
container_path = f'{container["namespace"]}/{container["objectName"]}'
# update metadata
unreal_pipeline.imprint(
container_path,
{
"representation": str(representation["_id"]),
"parent": str(representation["parent"])
})
"parent": str(representation["parent"]),
}
)
asset_content = unreal.EditorAssetLibrary.list_assets(
asset_dir, recursive=True, include_folder=True
@ -143,3 +150,13 @@ class UAssetLoader(plugin.Loader):
if len(asset_content) == 0:
unreal.EditorAssetLibrary.delete_directory(parent_path)
class UMapLoader(UAssetLoader):
"""Load Level."""
families = ["uasset"]
label = "Load Level"
representations = ["umap"]
extension = "umap"

View file

@ -24,7 +24,7 @@ class CollectInstanceMembers(pyblish.api.InstancePlugin):
ar = unreal.AssetRegistryHelpers.get_asset_registry()
inst_path = instance.data.get('instance_path')
inst_name = instance.data.get('objectName')
inst_name = inst_path.split('/')[-1]
pub_instance = ar.get_asset_by_object_path(
f"{inst_path}.{inst_name}").get_asset()

View file

@ -103,8 +103,8 @@ class CollectRenderInstances(pyblish.api.InstancePlugin):
new_instance.data["representations"] = []
repr = {
'frameStart': s.get('frame_range')[0],
'frameEnd': s.get('frame_range')[1],
'frameStart': instance.data["frameStart"],
'frameEnd': instance.data["frameEnd"],
'name': 'png',
'ext': 'png',
'files': frames,

View file

@ -11,16 +11,17 @@ class ExtractUAsset(publish.Extractor):
label = "Extract UAsset"
hosts = ["unreal"]
families = ["uasset"]
families = ["uasset", "umap"]
optional = True
def process(self, instance):
extension = (
"umap" if "umap" in instance.data.get("families") else "uasset")
ar = unreal.AssetRegistryHelpers.get_asset_registry()
self.log.info("Performing extraction..")
staging_dir = self.staging_dir(instance)
filename = "{}.uasset".format(instance.name)
filename = f"{instance.name}.{extension}"
members = instance.data.get("members", [])
@ -36,13 +37,15 @@ class ExtractUAsset(publish.Extractor):
shutil.copy(sys_path, staging_dir)
self.log.info(f"instance.data: {instance.data}")
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
'name': 'uasset',
'ext': 'uasset',
'files': filename,
"name": extension,
"ext": extension,
"files": filename,
"stagingDir": staging_dir,
}
instance.data["representations"].append(representation)

View file

@ -31,8 +31,8 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin):
frames = list(collection.indexes)
current_range = (frames[0], frames[-1])
required_range = (data["frameStart"],
data["frameEnd"])
required_range = (data["clipIn"],
data["clipOut"])
if current_range != required_range:
raise ValueError(f"Invalid frame range: {current_range} - "

View file

@ -6,6 +6,8 @@ import subprocess
from distutils import dir_util
from pathlib import Path
from typing import List, Union
import tempfile
from distutils.dir_util import copy_tree
import openpype.hosts.unreal.lib as ue_lib
@ -90,9 +92,20 @@ class UEProjectGenerationWorker(QtCore.QObject):
("Generating a new UE project ... 1 out of "
f"{stage_count}"))
# Need to copy the commandlet project to a temporary folder where
# users don't need admin rights to write to.
cmdlet_tmp = tempfile.TemporaryDirectory()
cmdlet_filename = cmdlet_project.name
cmdlet_dir = cmdlet_project.parent.as_posix()
cmdlet_tmp_name = Path(cmdlet_tmp.name)
cmdlet_tmp_file = cmdlet_tmp_name.joinpath(cmdlet_filename)
copy_tree(
cmdlet_dir,
cmdlet_tmp_name.as_posix())
commandlet_cmd = [
f"{ue_editor_exe.as_posix()}",
f"{cmdlet_project.as_posix()}",
f"{cmdlet_tmp_file.as_posix()}",
"-run=AyonGenerateProject",
f"{project_file.resolve().as_posix()}",
]
@ -111,6 +124,8 @@ class UEProjectGenerationWorker(QtCore.QObject):
gen_process.stdout.close()
return_code = gen_process.wait()
cmdlet_tmp.cleanup()
if return_code and return_code != 0:
msg = (
f"Failed to generate {self.project_name} "

View file

@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
# flake8: noqa E402
"""Pype module API."""
"""OpenPype lib functions."""
# add vendor to sys path based on Python version
import sys
import os
@ -94,7 +94,8 @@ from .python_module_tools import (
modules_from_path,
recursive_bases_from_class,
classes_from_module,
import_module_from_dirpath
import_module_from_dirpath,
is_func_signature_supported,
)
from .profiles_filtering import (
@ -243,6 +244,7 @@ __all__ = [
"recursive_bases_from_class",
"classes_from_module",
"import_module_from_dirpath",
"is_func_signature_supported",
"get_transcode_temp_directory",
"should_convert_for_ffmpeg",

View file

@ -6,10 +6,9 @@ import inspect
import logging
import weakref
from uuid import uuid4
try:
from weakref import WeakMethod
except Exception:
from openpype.lib.python_2_comp import WeakMethod
from .python_2_comp import WeakMethod
from .python_module_tools import is_func_signature_supported
class MissingEventSystem(Exception):
@ -80,40 +79,8 @@ class EventCallback(object):
# Get expected arguments from function spec
# - positional arguments are always preferred
expect_args = False
expect_kwargs = False
fake_event = "fake"
if hasattr(inspect, "signature"):
# Python 3 using 'Signature' object where we try to bind arg
# or kwarg. Using signature is recommended approach based on
# documentation.
sig = inspect.signature(func)
try:
sig.bind(fake_event)
expect_args = True
except TypeError:
pass
try:
sig.bind(event=fake_event)
expect_kwargs = True
except TypeError:
pass
else:
# In Python 2 'signature' is not available so 'getcallargs' is used
# - 'getcallargs' is marked as deprecated since Python 3.0
try:
inspect.getcallargs(func, fake_event)
expect_args = True
except TypeError:
pass
try:
inspect.getcallargs(func, event=fake_event)
expect_kwargs = True
except TypeError:
pass
expect_args = is_func_signature_supported(func, "fake")
expect_kwargs = is_func_signature_supported(func, event="fake")
self._func_ref = func_ref
self._func_name = func_name

View file

@ -190,7 +190,7 @@ def run_openpype_process(*args, **kwargs):
Example:
```
run_openpype_process("run", "<path to .py script>")
run_detached_process("run", "<path to .py script>")
```
Args:

View file

@ -113,26 +113,29 @@ def pack_project(
project_name
))
roots = project_doc["config"]["roots"]
# Determine root directory of project
source_root = None
source_root_name = None
for root_name, root_value in roots.items():
if source_root is not None:
raise ValueError(
"Packaging is supported only for single root projects"
)
source_root = root_value
source_root_name = root_name
root_path = None
source_root = {}
project_source_path = None
if not only_documents:
roots = project_doc["config"]["roots"]
# Determine root directory of project
source_root_name = None
for root_name, root_value in roots.items():
if source_root is not None:
raise ValueError(
"Packaging is supported only for single root projects"
)
source_root = root_value
source_root_name = root_name
root_path = source_root[platform.system().lower()]
print("Using root \"{}\" with path \"{}\"".format(
source_root_name, root_path
))
root_path = source_root[platform.system().lower()]
print("Using root \"{}\" with path \"{}\"".format(
source_root_name, root_path
))
project_source_path = os.path.join(root_path, project_name)
if not os.path.exists(project_source_path):
raise ValueError("Didn't find source of project files")
project_source_path = os.path.join(root_path, project_name)
if not os.path.exists(project_source_path):
raise ValueError("Didn't find source of project files")
# Determine zip filepath where data will be stored
if not destination_dir:
@ -273,8 +276,7 @@ def unpack_project(
low_platform = platform.system().lower()
project_name = metadata["project_name"]
source_root = metadata["root"]
root_path = source_root[low_platform]
root_path = metadata["root"].get(low_platform)
# Drop existing collection
replace_project_documents(project_name, docs, database_name)

View file

@ -1,41 +1,44 @@
import weakref
class _weak_callable:
def __init__(self, obj, func):
self.im_self = obj
self.im_func = func
WeakMethod = getattr(weakref, "WeakMethod", None)
def __call__(self, *args, **kws):
if self.im_self is None:
return self.im_func(*args, **kws)
else:
return self.im_func(self.im_self, *args, **kws)
if WeakMethod is None:
class _WeakCallable:
def __init__(self, obj, func):
self.im_self = obj
self.im_func = func
def __call__(self, *args, **kws):
if self.im_self is None:
return self.im_func(*args, **kws)
else:
return self.im_func(self.im_self, *args, **kws)
class WeakMethod:
""" Wraps a function or, more importantly, a bound method in
a way that allows a bound method's object to be GCed, while
providing the same interface as a normal weak reference. """
class WeakMethod:
""" Wraps a function or, more importantly, a bound method in
a way that allows a bound method's object to be GCed, while
providing the same interface as a normal weak reference. """
def __init__(self, fn):
try:
self._obj = weakref.ref(fn.im_self)
self._meth = fn.im_func
except AttributeError:
# It's not a bound method
self._obj = None
self._meth = fn
def __init__(self, fn):
try:
self._obj = weakref.ref(fn.im_self)
self._meth = fn.im_func
except AttributeError:
# It's not a bound method
self._obj = None
self._meth = fn
def __call__(self):
if self._dead():
return None
return _weak_callable(self._getobj(), self._meth)
def __call__(self):
if self._dead():
return None
return _WeakCallable(self._getobj(), self._meth)
def _dead(self):
return self._obj is not None and self._obj() is None
def _dead(self):
return self._obj is not None and self._obj() is None
def _getobj(self):
if self._obj is None:
return None
return self._obj()
def _getobj(self):
if self._obj is None:
return None
return self._obj()
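def _example_weak_method():
    # Illustrative only: both the stdlib WeakMethod and the fallback above
    # expose the same interface - calling the wrapper returns a callable
    # while the owner object is alive and None once it has been collected
    # (CPython reference counting assumed).
    class Owner(object):
        def greet(self):
            return "hello"

    owner = Owner()
    weak_greet = WeakMethod(owner.greet)
    bound = weak_greet()
    assert bound is not None and bound() == "hello"
    del bound
    del owner
    assert weak_greet() is None
    return True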

View file

@ -230,3 +230,70 @@ def import_module_from_dirpath(dirpath, folder_name, dst_module_name=None):
dirpath, folder_name, dst_module_name
)
return module
def is_func_signature_supported(func, *args, **kwargs):
"""Check if a function signature supports passed args and kwargs.
This check does not actually call the function, it only checks whether the
function can be called with the given arguments.
Notes:
This does NOT check whether the function would work with the passed
arguments, only whether they can be passed in. If the function has *args
or **kwargs in its parameters, this will always return 'True'.
Example:
>>> def my_function(my_number):
... return my_number + 1
...
>>> is_func_signature_supported(my_function, 1)
True
>>> is_func_signature_supported(my_function, 1, 2)
False
>>> is_func_signature_supported(my_function, my_number=1)
True
>>> is_func_signature_supported(my_function, number=1)
False
>>> is_func_signature_supported(my_function, "string")
True
>>> def my_other_function(*args, **kwargs):
... my_function(*args, **kwargs)
...
>>> is_func_signature_supported(
... my_other_function,
... "string",
... 1,
... other=None
... )
True
Args:
func (function): A function where the signature should be tested.
*args (tuple[Any]): Positional arguments for function signature.
**kwargs (dict[str, Any]): Keyword arguments for function signature.
Returns:
bool: Function can pass in arguments.
"""
if hasattr(inspect, "signature"):
# Python 3 using 'Signature' object where we try to bind arg
# or kwarg. Using signature is recommended approach based on
# documentation.
sig = inspect.signature(func)
try:
sig.bind(*args, **kwargs)
return True
except TypeError:
pass
else:
# In Python 2 'signature' is not available so 'getcallargs' is used
# - 'getcallargs' is marked as deprecated since Python 3.0
try:
inspect.getcallargs(func, *args, **kwargs)
return True
except TypeError:
pass
return False
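def _example_callback_probe(callback):
    # Illustrative only: the probing pattern used by
    # openpype.lib.events.EventCallback elsewhere in this changeset to find
    # out which calling convention a registered callback supports.
    expects_positional = is_func_signature_supported(callback, "fake")
    expects_keyword = is_func_signature_supported(callback, event="fake")
    return expects_positional, expects_keyword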

View file

@ -582,7 +582,6 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
metadata_folder = metadata_folder.replace(orig_scene,
new_scene)
instance.data["publishRenderMetadataFolder"] = metadata_folder
self.log.info("Scene name was switched {} -> {}".format(
orig_scene, new_scene
))

View file

@ -5,23 +5,26 @@ This is resolving index of server lists stored in `deadlineServers` instance
attribute or using default server if that attribute doesn't exists.
"""
from maya import cmds
import pyblish.api
class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin):
"""Collect Deadline Webservice URL from instance."""
order = pyblish.api.CollectorOrder + 0.415
# Run before collect_render.
order = pyblish.api.CollectorOrder + 0.005
label = "Deadline Webservice from the Instance"
families = ["rendering", "renderlayer"]
hosts = ["maya"]
def process(self, instance):
instance.data["deadlineUrl"] = self._collect_deadline_url(instance)
self.log.info(
"Using {} for submission.".format(instance.data["deadlineUrl"]))
@staticmethod
def _collect_deadline_url(render_instance):
def _collect_deadline_url(self, render_instance):
# type: (pyblish.api.Instance) -> str
"""Get Deadline Webservice URL from render instance.
@ -49,8 +52,16 @@ class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin):
default_server = render_instance.context.data["defaultDeadline"]
instance_server = render_instance.data.get("deadlineServers")
if not instance_server:
self.log.debug("Using default server.")
return default_server
# Get instance server as string.
if isinstance(instance_server, int):
instance_server = cmds.getAttr(
"{}.deadlineServers".format(render_instance.data["objset"]),
asString=True
)
default_servers = deadline_settings["deadline_urls"]
project_servers = (
render_instance.context.data
@ -58,15 +69,23 @@ class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin):
["deadline"]
["deadline_servers"]
)
deadline_servers = {
if not project_servers:
self.log.debug("Not project servers found. Using default servers.")
return default_servers[instance_server]
project_enabled_servers = {
k: default_servers[k]
for k in project_servers
if k in default_servers
}
# This is Maya specific and may not reflect real selection of deadline
# url as dictionary keys in Python 2 are not ordered
return deadline_servers[
list(deadline_servers.keys())[
int(render_instance.data.get("deadlineServers"))
]
]
msg = (
"\"{}\" server on instance is not enabled in project settings."
" Enabled project servers:\n{}".format(
instance_server, project_enabled_servers
)
)
assert instance_server in project_enabled_servers, msg
self.log.debug("Using project approved server.")
return project_enabled_servers[instance_server]

View file

@ -4,9 +4,21 @@ import pyblish.api
class CollectDefaultDeadlineServer(pyblish.api.ContextPlugin):
"""Collect default Deadline Webservice URL."""
"""Collect default Deadline Webservice URL.
order = pyblish.api.CollectorOrder + 0.410
DL webservice addresses must be configured first in System Settings for
the project settings enum to work.
The default webservice can be overridden by
`project_settings/deadline/deadline_servers`. Currently only a single url
is expected.
This url can be overridden by some hosts directly on instances with
`CollectDeadlineServerFromInstance`.
"""
# Run before collect_deadline_server_instance.
order = pyblish.api.CollectorOrder + 0.0025
label = "Default Deadline Webservice"
pass_mongo_url = False
@ -23,3 +35,16 @@ class CollectDefaultDeadlineServer(pyblish.api.ContextPlugin):
context.data["defaultDeadline"] = deadline_module.deadline_urls["default"] # noqa: E501
context.data["deadlinePassMongoUrl"] = self.pass_mongo_url
deadline_servers = (context.data
["project_settings"]
["deadline"]
["deadline_servers"])
if deadline_servers:
deadline_server_name = deadline_servers[0]
deadline_webservice = deadline_module.deadline_urls.get(
deadline_server_name)
if deadline_webservice:
context.data["defaultDeadline"] = deadline_webservice
self.log.debug("Overriding from project settings with {}".format( # noqa: E501
deadline_webservice))
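# Illustrative settings shape this collector assumes (server names and URLs
# are hypothetical):
#
#   system settings  -> deadline_module.deadline_urls
#       {"default": "http://deadline-ws:8082",
#        "secondary": "http://deadline-ws-2:8082"}
#   project settings -> project_settings/deadline/deadline_servers
#       ["secondary"]   # first entry overrides the "default" URL above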

View file

@ -73,7 +73,7 @@ class FusionSubmitDeadline(
def process(self, instance):
if not instance.data.get("farm"):
self.log.info("Skipping local instance.")
self.log.debug("Skipping local instance.")
return
attribute_values = self.get_attr_values_from_data(

View file

@ -78,7 +78,7 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
job_info.BatchName = src_filename
job_info.Plugin = instance.data["plugin"]
job_info.UserName = context.data.get("deadlineUser", getpass.getuser())
job_info.EnableAutoTimeout = True
# Deadline requires integers in frame range
frames = "{start}-{end}".format(
start=int(instance.data["frameStart"]),
@ -133,7 +133,8 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
# Add list of expected files to job
# ---------------------------------
exp = instance.data.get("expectedFiles")
for filepath in exp:
for filepath in self._iter_expected_files(exp):
job_info.OutputDirectory += os.path.dirname(filepath)
job_info.OutputFilename += os.path.basename(filepath)
@ -162,10 +163,11 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
instance = self._instance
filepath = self.scene_path
expected_files = instance.data["expectedFiles"]
if not expected_files:
files = instance.data["expectedFiles"]
if not files:
raise RuntimeError("No Render Elements found!")
output_dir = os.path.dirname(expected_files[0])
first_file = next(self._iter_expected_files(files))
output_dir = os.path.dirname(first_file)
instance.data["outputDir"] = output_dir
instance.data["toBeRenderedOn"] = "deadline"
@ -196,25 +198,22 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
else:
plugin_data["DisableMultipass"] = 1
expected_files = instance.data.get("expectedFiles")
if not expected_files:
files = instance.data.get("expectedFiles")
if not files:
raise RuntimeError("No render elements found")
old_output_dir = os.path.dirname(expected_files[0])
first_file = next(self._iter_expected_files(files))
old_output_dir = os.path.dirname(first_file)
output_beauty = RenderSettings().get_render_output(instance.name,
old_output_dir)
filepath = self.from_published_scene()
def _clean_name(path):
return os.path.splitext(os.path.basename(path))[0]
new_scene = _clean_name(filepath)
orig_scene = _clean_name(instance.context.data["currentFile"])
output_beauty = output_beauty.replace(orig_scene, new_scene)
output_beauty = output_beauty.replace("\\", "/")
plugin_data["RenderOutput"] = output_beauty
rgb_bname = os.path.basename(output_beauty)
dir = os.path.dirname(first_file)
beauty_name = f"{dir}/{rgb_bname}"
beauty_name = beauty_name.replace("\\", "/")
plugin_data["RenderOutput"] = beauty_name
# as 3dsmax has version with different languages
plugin_data["Language"] = "ENU"
renderer_class = get_current_renderer()
renderer = str(renderer_class).split(":")[0]
if renderer in [
"ART_Renderer",
@ -226,14 +225,37 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
]:
render_elem_list = RenderSettings().get_render_element()
for i, element in enumerate(render_elem_list):
element = element.replace(orig_scene, new_scene)
plugin_data["RenderElementOutputFilename%d" % i] = element # noqa
elem_bname = os.path.basename(element)
new_elem = f"{dir}/{elem_bname}"
new_elem = new_elem.replace("/", "\\")
plugin_data["RenderElementOutputFilename%d" % i] = new_elem # noqa
if renderer == "Redshift_Renderer":
plugin_data["redshift_SeparateAovFiles"] = instance.data.get(
"separateAovFiles")
self.log.debug("plugin data:{}".format(plugin_data))
plugin_info.update(plugin_data)
return job_info, plugin_info
def from_published_scene(self, replace_in_path=True):
instance = self._instance
if instance.data["renderer"] == "Redshift_Renderer":
self.log.debug("Using Redshift...published scene wont be used..")
replace_in_path = False
return replace_in_path
@staticmethod
def _iter_expected_files(exp):
if isinstance(exp[0], dict):
for _aov, files in exp[0].items():
for file in files:
yield file
else:
for file in exp:
yield file
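# Illustrative only (hypothetical paths): the generator above yields the
# individual file paths from both 'expectedFiles' shapes produced by the
# render collectors.
#
#     >>> list(MaxSubmitDeadline._iter_expected_files(
#     ...     ["/out/beauty.0001.exr", "/out/beauty.0002.exr"]))
#     ['/out/beauty.0001.exr', '/out/beauty.0002.exr']
#     >>> list(MaxSubmitDeadline._iter_expected_files(
#     ...     [{"beauty": ["/out/beauty.0001.exr"],
#     ...       "diffuse": ["/out/diffuse.0001.exr"]}]))
#     ['/out/beauty.0001.exr', '/out/diffuse.0001.exr']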
@classmethod
def get_attribute_defs(cls):
defs = super(MaxSubmitDeadline, cls).get_attribute_defs()

View file

@ -86,7 +86,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
def process(self, instance):
if not instance.data.get("farm"):
self.log.info("Skipping local instance.")
self.log.debug("Skipping local instance.")
return
instance.data["attributeValues"] = self.get_attr_values_from_data(

View file

@ -280,7 +280,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
args = [
"--headless",
'publish',
rootless_metadata_path,
'"{}"'.format(rootless_metadata_path),
"--targets", "deadline",
"--targets", "farm"
]
@ -767,7 +767,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
"""
if not instance.data.get("farm"):
self.log.info("Skipping local instance.")
self.log.debug("Skipping local instance.")
return
data = instance.data.copy()
@ -1094,6 +1094,10 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
deadline_publish_job_id = \
self._submit_deadline_post_job(instance, render_job, instances)
# Inject deadline url into instances.
for inst in instances:
inst["deadlineUrl"] = self.deadline_url
# publish job file
publish_job = {
"asset": asset,

View file

@ -26,7 +26,7 @@ class ValidateDeadlinePools(OptionalPyblishPluginMixin,
def process(self, instance):
if not instance.data.get("farm"):
self.log.info("Skipping local instance.")
self.log.debug("Skipping local instance.")
return
# get default deadline webservice url from deadline module

View file

@ -196,7 +196,7 @@ class ProcessEventHub(SocketBaseEventHub):
{"pype_data.is_processed": False}
).sort(
[("pype_data.stored", pymongo.ASCENDING)]
)
).limit(100)
found = False
for event_data in not_processed_events:

View file

@ -234,6 +234,10 @@ class BaseAction(BaseHandler):
if not settings_roles:
return default
user_roles = {
role_name.lower()
for role_name in user_roles
}
for role_name in settings_roles:
if role_name.lower() in user_roles:
return True
@ -264,8 +268,15 @@ class BaseAction(BaseHandler):
return user_entity
@classmethod
def get_user_roles_from_event(cls, session, event):
"""Query user entity from event."""
def get_user_roles_from_event(cls, session, event, lower=True):
"""Get user roles based on data in event.
Args:
session (ftrack_api.Session): Prepared ftrack session.
event (ftrack_api.event.Event): Event which is processed.
lower (Optional[bool]): Lowercase the role names. Default is 'True'.
"""
not_set = object()
user_roles = event["data"].get("user_roles", not_set)
@ -273,7 +284,10 @@ class BaseAction(BaseHandler):
user_roles = []
user_entity = cls.get_user_entity_from_event(session, event)
for role in user_entity["user_security_roles"]:
user_roles.append(role["security_role"]["name"].lower())
role_name = role["security_role"]["name"]
if lower:
role_name = role_name.lower()
user_roles.append(role_name)
event["data"]["user_roles"] = user_roles
return user_roles
@ -322,7 +336,8 @@ class BaseAction(BaseHandler):
if not settings.get(self.settings_enabled_key, True):
return False
user_role_list = self.get_user_roles_from_event(session, event)
user_role_list = self.get_user_roles_from_event(
session, event, lower=False)
if not self.roles_check(settings.get("role_list"), user_role_list):
return False
return True
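# --- Hedged sketch (not part of the diff): the role check above, reduced to a
# standalone function. User roles keep their original case and both sides are
# lowercased only for the comparison; the role names are examples.
def roles_check(settings_roles, user_roles, default=True):
    if not settings_roles:
        return default
    lowered_user_roles = {role_name.lower() for role_name in user_roles}
    return any(
        role_name.lower() in lowered_user_roles
        for role_name in settings_roles
    )

print(roles_check(["Administrator"], ["administrator", "API user"]))  # True
print(roles_check([], ["Artist"]))           # True (empty settings -> default)
print(roles_check(["Pypeclub"], ["Artist"]))  # False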

View file

@ -296,9 +296,9 @@ def server_activity_validate_user(event):
if not user_ent:
return False
role_list = ["Pypeclub", "Administrator"]
role_list = {"pypeclub", "administrator"}
for role in user_ent["user_security_roles"]:
if role["security_role"]["name"] in role_list:
if role["security_role"]["name"].lower() in role_list:
return True
return False

View file

@ -1,5 +1,3 @@
import os
import requests
from qtpy import QtCore, QtGui, QtWidgets

View file

@ -94,7 +94,7 @@ class KitsuModule(OpenPypeModule, IPluginPaths, ITrayAction):
return {
"publish": [os.path.join(current_dir, "plugins", "publish")],
"actions": [os.path.join(current_dir, "actions")]
"actions": [os.path.join(current_dir, "actions")],
}
def cli(self, click_group):
@ -128,15 +128,35 @@ def push_to_zou(login, password):
@click.option(
"-p", "--password", envvar="KITSU_PWD", help="Password for kitsu username"
)
def sync_service(login, password):
@click.option(
"-prj",
"--project",
"projects",
multiple=True,
default=[],
help="Sync specific kitsu projects",
)
@click.option(
"-lo",
"--listen-only",
"listen_only",
is_flag=True,
default=False,
help="Listen to events only without any syncing",
)
def sync_service(login, password, projects, listen_only):
"""Synchronize openpype database from Zou sever database.
Args:
login (str): Kitsu user login
password (str): Kitsu user password
projects (tuple): Specific Kitsu project names to sync
listen_only (bool): Only listen to events, without any syncing
"""
from .utils.update_op_with_zou import sync_all_projects
from .utils.sync_service import start_listeners
sync_all_projects(login, password)
if not listen_only:
sync_all_projects(login, password, filter_projects=projects)
start_listeners(login, password)
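# --- Hedged sketch (not part of the diff): roughly what the new CLI options
# map to. The absolute module paths mirror the relative imports above and the
# credentials / project names are placeholders, not real values.
from openpype.modules.kitsu.utils.update_op_with_zou import sync_all_projects
from openpype.modules.kitsu.utils.sync_service import start_listeners

login = "user@studio.example"
password = "kitsu-password"
projects = ("CharacterShow", "PropLibrary")  # from repeated -prj/--project
listen_only = False                          # from -lo/--listen-only

if not listen_only:
    # Initial sync restricted to the selected Kitsu projects (all if empty).
    sync_all_projects(login, password, filter_projects=projects)
start_listeners(login, password)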

View file

@ -94,9 +94,7 @@ def update_op_assets(
if not item_doc: # Create asset
op_asset = create_op_asset(item)
insert_result = dbcon.insert_one(op_asset)
item_doc = get_asset_by_id(
project_name, insert_result.inserted_id
)
item_doc = get_asset_by_id(project_name, insert_result.inserted_id)
# Update asset
item_data = deepcopy(item_doc["data"])
@ -329,7 +327,7 @@ def write_project_to_op(project: dict, dbcon: AvalonMongoDB) -> UpdateOne:
"code": project_code,
"fps": float(project["fps"]),
"zou_id": project["id"],
"active": project['project_status_name'] != "Closed",
"active": project["project_status_name"] != "Closed",
}
)
@ -359,7 +357,10 @@ def write_project_to_op(project: dict, dbcon: AvalonMongoDB) -> UpdateOne:
def sync_all_projects(
login: str, password: str, ignore_projects: list = None
login: str,
password: str,
ignore_projects: list = None,
filter_projects: tuple = None,
):
"""Update all OP projects in DB with Zou data.
@ -367,6 +368,7 @@ def sync_all_projects(
login (str): Kitsu user login
password (str): Kitsu user password
ignore_projects (list): List of unsynced project names
filter_projects (tuple): Tuple of filter project names to sync with
Raises:
gazu.exception.AuthFailedException: Wrong user login and/or password
"""
@ -381,7 +383,24 @@ def sync_all_projects(
dbcon = AvalonMongoDB()
dbcon.install()
all_projects = gazu.project.all_projects()
for project in all_projects:
project_to_sync = []
if filter_projects:
all_kitsu_projects = {p["name"]: p for p in all_projects}
for proj_name in filter_projects:
if proj_name in all_kitsu_projects:
project_to_sync.append(all_kitsu_projects[proj_name])
else:
log.info(
f"`{proj_name}` project does not exist in Kitsu."
f" Please make sure the project is spelled correctly."
)
else:
# all projects
project_to_sync = all_projects
for project in project_to_sync:
if ignore_projects and project["name"] in ignore_projects:
continue
sync_project_from_kitsu(dbcon, project)
@ -408,14 +427,13 @@ def sync_project_from_kitsu(dbcon: AvalonMongoDB, project: dict):
# Get all statuses for projects from Kitsu
all_status = gazu.project.all_project_status()
for status in all_status:
if project['project_status_id'] == status['id']:
project['project_status_name'] = status['name']
if project["project_status_id"] == status["id"]:
project["project_status_name"] = status["name"]
break
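# --- Hedged sketch (not part of the diff): the status lookup above written as
# a plain dict mapping instead of a loop. The sample entries are fake; real
# ones come from gazu.project.all_project_status().
all_status = [
    {"id": "status-1", "name": "Open"},
    {"id": "status-2", "name": "Closed"},
]
status_name_by_id = {status["id"]: status["name"] for status in all_status}

project = {"name": "CharacterShow", "project_status_id": "status-2"}
project["project_status_name"] = status_name_by_id.get(
    project["project_status_id"], "Unknown")
print(project["project_status_name"])  # "Closed"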
# Do not sync closed kitsu project that is not found in openpype
if (
project['project_status_name'] == "Closed"
and not get_project(project['name'])
if project["project_status_name"] == "Closed" and not get_project(
project["name"]
):
return
@ -444,7 +462,7 @@ def sync_project_from_kitsu(dbcon: AvalonMongoDB, project: dict):
log.info("Project created: {}".format(project_name))
bulk_writes.append(write_project_to_op(project, dbcon))
if project['project_status_name'] == "Closed":
if project["project_status_name"] == "Closed":
return
# Try to find project document

View file

@ -1,7 +1,9 @@
import os
import json
import appdirs
import requests
from openpype.modules import OpenPypeModule, ITrayModule
@ -110,16 +112,10 @@ class MusterModule(OpenPypeModule, ITrayModule):
self.save_credentials(token)
def save_credentials(self, token):
"""
Save credentials to JSON file
"""
data = {
'token': token
}
"""Save credentials to JSON file."""
file = open(self.cred_path, 'w')
file.write(json.dumps(data))
file.close()
with open(self.cred_path, "w") as f:
json.dump({'token': token}, f)
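# --- Hedged sketch (not part of the diff): a hypothetical counterpart that
# reads the token back with the same json handling. `load_credentials` and its
# behaviour are illustrative, not the module's actual API.
import json
import os


def load_credentials(cred_path):
    if not os.path.exists(cred_path):
        return None
    with open(cred_path, "r") as f:
        return json.load(f).get("token")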
def show_login(self):
"""

View file

@ -39,6 +39,7 @@ from .lib import (
apply_plugin_settings_automatically,
get_plugin_settings,
get_publish_instance_label,
)
from .abstract_expected_files import ExpectedFiles
@ -85,6 +86,7 @@ __all__ = (
"apply_plugin_settings_automatically",
"get_plugin_settings",
"get_publish_instance_label",
"ExpectedFiles",

Some files were not shown because too many files have changed in this diff.