Merge branch 'develop' into feature/OP-3278_camera-focal-length

This commit is contained in:
Toke Jepsen 2023-03-29 11:12:09 +02:00 committed by GitHub
commit b7f6589886
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
125 changed files with 3784 additions and 2005 deletions

.github/workflows/project_actions.yml

@@ -0,0 +1,22 @@
name: project-actions
on:
pull_request:
types: [review_requested]
pull_request_review:
types: [submitted]
jobs:
pr_review_requested:
name: pr_review_requested
runs-on: ubuntu-latest
if: github.event_name == 'pull_request_review' && github.event.review.state == 'changes_requested'
steps:
- name: Move PR to 'Change Requested'
uses: leonsteinhaeuser/project-beta-automations@v2.1.0
with:
gh_token: ${{ secrets.YNPUT_BOT_TOKEN }}
organization: ynput
project_id: 11
resource_node_id: ${{ github.event.pull_request.node_id }}
status_value: Change Requested

ARCHITECTURE.md

@@ -0,0 +1,77 @@
# Architecture
OpenPype is a monolithic Python project that bundles several parts. This document tries to give a bird's-eye overview of the project and, to a certain degree, of each of its sub-projects.
The current file structure looks like this:
```
.
├── common - Code in this folder is the backend portion of the Addon distribution logic for the v4 server.
├── docs - Documentation of the source code.
├── igniter - The OpenPype bootstrapper, deals with running version resolution and setting up the connection to MongoDB.
├── openpype - The actual OpenPype core package.
├── schema - Collection of JSON files describing schematics of objects. This follows Avalon's convention.
├── tests - Integration and unit tests.
├── tools - Convenience scripts to perform common actions (in both bash and ps1).
├── vendor - When using the igniter, it deploys third party tools in here, such as ffmpeg.
└── website - Source files for https://openpype.io/, which is Docusaurus (https://docusaurus.io/).
```
The core functionality of the pipeline can be found in `igniter` and `openpype`, which in turn rely on the `schema` files. Whenever you build (or download a pre-built) version of OpenPype, these two are bundled in it, and `Igniter` is the entry point.
## Igniter
It's the setup and update tool for OpenPype. Unless you want to package `openpype` separately and deal with all the configuration manually, this will most likely be your entry point.
```
igniter/
├── bootstrap_repos.py - Module that will find or install OpenPype versions in the system.
├── __init__.py - Igniter entry point.
├── install_dialog.py - Show dialog for choosing central pype repository.
├── install_thread.py - Threading helpers for the install process.
├── __main__.py - Package entry point (allows running `python -m igniter`).
├── message_dialog.py - Qt Dialog with a message and "Ok" button.
├── nice_progress_bar.py - Fancy Qt progress bar.
├── splash.txt - ASCII art for the terminal installer.
├── stylesheet.css - Installer Qt styles.
├── terminal_splash.py - Terminal installer animation, relies on `splash.txt`.
├── tools.py - Collection of methods that don't fit in other modules.
├── update_thread.py - Threading helper to update existing OpenPype installs.
├── update_window.py - Qt UI to update OpenPype installs.
├── user_settings.py - Interface for the OpenPype user settings.
└── version.py - Igniter's version number.
```
## OpenPype
This is the main package of the OpenPype logic. It could be loosely described as a combination of [Avalon](https://getavalon.github.io), [Pyblish](https://pyblish.com/) and glue around those, with custom OpenPype-only elements. Things are in the process of being moved around to better prepare for V4, which will be released under a new name, AYON.
```
openpype/
├── client - Interface for the MongoDB.
├── hooks - Hooks to be executed on certain OpenPype Applications defined in `openpype.lib.applications`.
├── host - Base class for the different hosts.
├── hosts - Integration with the different DCCs (hosts) using the `host` base class.
├── lib - Libraries that stitch together the package, some have been moved into other parts.
├── modules - OpenPype modules contain separated logic for specific kinds of implementations, such as the Ftrack connection and its Python API.
├── pipeline - Core of the OpenPype pipeline, handles creation of data, publishing, etc.
├── plugins - Global/core plugins for loader and publisher tool.
├── resources - Icons, fonts, etc.
├── scripts - Loose scripts that get run by tools/publishers.
├── settings - OpenPype settings interface.
├── style - Qt styling.
├── tests - Unit tests.
├── tools - Core tools, check out https://openpype.io/docs/artist_tools.
├── vendor - Vendoring of required Python packages.
├── widgets - Common re-usable Qt Widgets.
├── action.py - LEGACY: Pyblish actions, now living in `openpype.pipeline.publish.action`.
├── cli.py - Command line interface, leverages `click`.
├── __init__.py - Sets two constants.
├── __main__.py - Entry point, calls `cli.py`.
├── plugin.py - Pyblish plugins.
├── pype_commands.py - Implementation of OpenPype commands.
└── version.py - Current version number.
```


@@ -1,5 +1,931 @@
# Changelog
## [3.15.3](https://github.com/ynput/OpenPype/tree/3.15.3)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.2...3.15.3)
### **🆕 New features**
<details>
<summary>Blender: Extract Review <a href="https://github.com/ynput/OpenPype/pull/3616">#3616</a></summary>
<strong>Added Review to Blender.
</strong>This implementation is based on #3508 but made compatible with the current implementation of OpenPype for Blender.
___
</details>
<details>
<summary>Data Exchanges: Point Cloud for 3dsMax <a href="https://github.com/ynput/OpenPype/pull/4532">#4532</a></summary>
<strong>Publish PRT format with tyFlow in 3dsmax
</strong>Publish PRT format with tyFlow in 3dsmax and possibly set up loader to load the format too.
- [x] creator
- [x] extractor
- [x] validator
- [x] loader
___
</details>
<details>
<summary>Global: persistent staging directory for renders <a href="https://github.com/ynput/OpenPype/pull/4583">#4583</a></summary>
<strong>Allows configuring whether the staging directory (`stagingDir`) should be persistent, with the use of profiles.
</strong>With this feature, users can specify a transient data folder path based on presets, which can be used during the creation and publishing stages. In some cases, DCCs automatically add a rendering path during the creation stage, which is then used in publishing. One of the key advantages of this feature is that it allows users to take advantage of faster storage for rendering, which can help improve workflow efficiency. Additionally, this feature allows users to keep their rendered data persistent and use their own infrastructure for regular cleaning. However, it should be noted that some productions may want to use this feature without persistency. Furthermore, there may be a need for retargeting the rendering folder to faster storage, which is also not supported at the moment. It is the studio's responsibility to clean up obsolete folders with data. The location of the folder is configured in `project_anatomy/templates/others` (a 'transient' key is expected, with a 'folder' key; there could be more templates). Which family/task type/subset is applicable is configured in `project_settings/global/tools/publish/transient_dir_profiles`.
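As a rough illustration of the profile idea described above (the helper function and profile keys here are hypothetical sketches, not OpenPype's actual settings code):

```python
# Hypothetical sketch of profile-based staging-dir resolution.
def find_staging_profile(profiles, family, task_type, subset):
    """Return the first profile whose filters match, or None."""
    for profile in profiles:
        # An empty filter list means "match anything".
        if profile.get("families") and family not in profile["families"]:
            continue
        if profile.get("task_types") and task_type not in profile["task_types"]:
            continue
        if profile.get("subsets") and subset not in profile["subsets"]:
            continue
        return profile
    return None

profiles = [
    {"families": ["render"], "task_types": [], "subsets": [],
     "template": "transient", "persistent": True},
]
match = find_staging_profile(profiles, "render", "Compositing", "renderMain")
print(match["template"] if match else "no persistent staging dir")
```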
___
</details>
<details>
<summary>Kitsu custom comment template <a href="https://github.com/ynput/OpenPype/pull/4599">#4599</a></summary>
Kitsu allows writing Markdown in its comment field. This can be very powerful for delivering dynamic comments with the help of data from the instance. This feature defaults to off, so the admin has to manually set up the comment field the way they want. I have added a basic example of how the comment can look as the comment field's default value. I also want to add some documentation, but that's on its way once the code itself looks good to the reviewers.
___
</details>
<details>
<summary>MaxScene Family <a href="https://github.com/ynput/OpenPype/pull/4615">#4615</a></summary>
Introduction of the Max Scene Family
___
</details>
### **🚀 Enhancements**
<details>
<summary>Maya: Multiple values on single render attribute - OP-4131 <a href="https://github.com/ynput/OpenPype/pull/4631">#4631</a></summary>
When validating render attributes, this adds support for multiple values. When repairing, the first value in the list is used.
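A minimal sketch of the idea, assuming illustrative helper names rather than the actual plug-in code:

```python
def validate_attr(current, accepted):
    """Accept either a single expected value or any value from a list."""
    if isinstance(accepted, (list, tuple)):
        return current in accepted
    return current == accepted

def repair_value(accepted):
    """On repair, use the first value when a list is configured."""
    if isinstance(accepted, (list, tuple)):
        return accepted[0]
    return accepted

print(validate_attr(16, [8, 16, 32]))  # -> True
print(repair_value([8, 16, 32]))       # -> 8
```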
___
</details>
<details>
<summary>Maya: enable 2D Pan/Zoom for playblasts - OP-5213 <a href="https://github.com/ynput/OpenPype/pull/4687">#4687</a></summary>
Setting for enabling 2D Pan/Zoom on reviews.
___
</details>
<details>
<summary>Copy existing or generate new Fusion profile on prelaunch <a href="https://github.com/ynput/OpenPype/pull/4572">#4572</a></summary>
<strong>Fusion preferences will be copied to the predefined `~/.openpype/hosts/fusion/prefs` folder (or any other folder set in system settings) on launch.
</strong>The idea is to create a copy of the existing Fusion profile, adding an OpenPype menu to the Fusion instance. By default the copy setting is turned off, so no file copying is performed. Instead, a clean Fusion profile is created by Fusion in the predefined folder. The default location is set to `~/.openpype/hosts/fusion/prefs` to better comply with the other OS platforms. After creating the default profile, some modifications are applied:
- forced Python 3
- forced English interface
- setup of OpenPype-specific path maps.

If the `copy_prefs` checkbox is toggled, a copy of the existing Fusion profile folder will be placed in the mentioned location and then altered the same way as described above. The operation is run only once, on the first launch, unless `force_sync [Resync profile on each launch]` is toggled. The English interface is forced because the `FUSION16_PROFILE_DIR` environment variable is not read otherwise (seems to be a Fusion bug).
___
</details>
<details>
<summary>Houdini: Create button open new publisher's "create" tab <a href="https://github.com/ynput/OpenPype/pull/4601">#4601</a></summary>
During a talk with @maxpareschi he mentioned that the new publisher in Houdini felt super confusing, due to "Create" going to the older creator (now completely empty) while the publish button went directly to the publish tab. This resolves that by fixing the Create button to open the new publisher on the Create tab. Also made the publish button enforce going to the "publish" tab for consistency in usage. @antirotor I think changing the Create button's callback was just missed in this commit, or was there a specific reason not to change that yet?
___
</details>
<details>
<summary>Clockify: refresh and fix the integration <a href="https://github.com/ynput/OpenPype/pull/4607">#4607</a></summary>
Due to recent API changes, Clockify requires `user_id` to operate with the timers. I updated this part; it is currently a WIP toward making the integration fully functional. Most functions, such as start and stop timer and project sync, are currently working. For the rate limiting task a new dependency was added: https://pypi.org/project/ratelimiter/
___
</details>
<details>
<summary>Fusion publish existing frames <a href="https://github.com/ynput/OpenPype/pull/4611">#4611</a></summary>
This PR adds the ability to publish existing frames instead of having to re-render all of them for each new publish. I have split the render_locally plugin so the review part is its own plugin now. I also changed the saver creator plugin's label from Saver to Render (saver), as I intend to add a Prerender creator like in Nuke.
___
</details>
<details>
<summary>Resolution settings referenced from DB record for 3dsMax <a href="https://github.com/ynput/OpenPype/pull/4652">#4652</a></summary>
- Add Callback for setting the resolution according to DB after the new scene is created.
- Add a new Action into openpype menu which allows the user to reset the resolution in 3dsMax
___
</details>
<details>
<summary>3dsmax: render instance settings in Publish tab <a href="https://github.com/ynput/OpenPype/pull/4658">#4658</a></summary>
Allows users to preset the pools, group and use_published settings in the Render Creator in the Max host. Users can set the settings before or after creating an instance in the new publisher.
___
</details>
<details>
<summary>scene length setting referenced from DB record for 3dsMax <a href="https://github.com/ynput/OpenPype/pull/4665">#4665</a></summary>
Setting the timeline length based on DB record in 3dsMax Hosts
___
</details>
<details>
<summary>Publisher: Windows reduce command window pop-ups during Publishing <a href="https://github.com/ynput/OpenPype/pull/4672">#4672</a></summary>
Reduce the command line pop-ups that show on Windows during publishing.
___
</details>
<details>
<summary>Publisher: Explicit save <a href="https://github.com/ynput/OpenPype/pull/4676">#4676</a></summary>
The Publisher has an explicit button to save changes, so a reset can happen without saving any changes. Saving still happens automatically when publishing is started or when the publisher window closes, but a popup is shown if the host's context has changed. The important context was enhanced by the workfile path (if the host integration supports it), so workfile changes are captured too. In that case a confirmation dialog is shown to the user. All callbacks that may require saving the context were moved to the main window, to handle showing the dialog in one place. Saving changes now returns a success value so the rest of the logic can be skipped; publishing won't start when saving instances fails. The save and reset buttons have shortcuts (Ctrl+S and Ctrl+R).
___
</details>
<details>
<summary>CelAction: conditional workfile parameters from settings <a href="https://github.com/ynput/OpenPype/pull/4677">#4677</a></summary>
Since some productions were requesting excluding some workfile parameters from publishing submission, we needed to move them to settings so those could be altered per project.
___
</details>
<details>
<summary>Improve logging of used app + tool envs on application launch <a href="https://github.com/ynput/OpenPype/pull/4682">#4682</a></summary>
Improve logging of what apps + tool environments got loaded for an application launch.
___
</details>
<details>
<summary>Fix name and docstring for Create Workdir Extra Folders prelaunch hook <a href="https://github.com/ynput/OpenPype/pull/4683">#4683</a></summary>
Fix the class name and docstring for the Create Workdir Extra Folders prelaunch hook. The class name and docstring were originally copied from another plug-in and didn't match the plug-in logic. This also fixes potentially seeing the `AddLastWorkfileToLaunchArgs` plugin twice in your logs, since both this prelaunch hook and the actual plugin were running.
___
</details>
<details>
<summary>Application launch context: Include app group name in logger <a href="https://github.com/ynput/OpenPype/pull/4684">#4684</a></summary>
Clarify in logs what app group the ApplicationLaunchContext belongs to and what application is being launched.
___
</details>
<details>
<summary>increment workfile version 3dsmax <a href="https://github.com/ynput/OpenPype/pull/4685">#4685</a></summary>
Increment the workfile version in the 3dsmax host, as in the Blender and Maya hosts.
___
</details>
### **🐛 Bug fixes**
<details>
<summary>Maya: Fix getting non-active model panel. <a href="https://github.com/ynput/OpenPype/pull/2968">#2968</a></summary>
<strong>When capturing multiple cameras with image planes that have file sequences playing, only the active (first) camera will play through the file sequence.
</strong>
___
</details>
<details>
<summary>Maya: Fix broken review publishing. <a href="https://github.com/ynput/OpenPype/pull/4549">#4549</a></summary>
<strong>Resolves #4547
</strong>
___
</details>
<details>
<summary>Maya: Avoid error on right click in Loader if `mtoa` is not loaded <a href="https://github.com/ynput/OpenPype/pull/4616">#4616</a></summary>
Fix an error on right-clicking in the Loader when `mtoa` is not a loaded plug-in. Additionally, if `mtoa` isn't loaded, the loader will now load the plug-in before trying to create the Arnold standin.
___
</details>
<details>
<summary>Maya: Fix extract look colorspace detection <a href="https://github.com/ynput/OpenPype/pull/4618">#4618</a></summary>
Fix the logic which guesses the colorspace using `arnold` python library.
- Previously it'd error if `mtoa` was not available on path so it still required `mtoa` to be available.
- The guessing colorspace logic doesn't actually require `mtoa` to be loaded, but just the `arnold` python library to be available. This changes the logic so it doesn't require the `mtoa` plugin to get loaded to guess the colorspace.
- The if/else branch was likely not doing what was intended: `cmds.loadPlugin("mtoa", quiet=True)` returns None if the plug-in was already loaded, so the condition would only ever be true the first time it loads the `mtoa` plugin.
```python
# Tested in Maya 2022.1
print(cmds.loadPlugin("mtoa", quiet=True))
# ['mtoa']
print(cmds.loadPlugin("mtoa", quiet=True))
# None
```
___
</details>
<details>
<summary>Maya: Maya Playblast Options overrides - OP-3847 <a href="https://github.com/ynput/OpenPype/pull/4634">#4634</a></summary>
When publishing a review in Maya, the extractor would fail due to wrong (long) panel name.
___
</details>
<details>
<summary>Bugfix/op 2834 fix extract playblast <a href="https://github.com/ynput/OpenPype/pull/4701">#4701</a></summary>
___
</details>
<details>
<summary>Bugfix/op 2834 fix extract playblast <a href="https://github.com/ynput/OpenPype/pull/4704">#4704</a></summary>
___
</details>
<details>
<summary>Maya: bug fix for passing zoom settings if review is attached to subset <a href="https://github.com/ynput/OpenPype/pull/4716">#4716</a></summary>
Fix for attaching review to subset with pan/zoom option.
___
</details>
<details>
<summary>Maya: tile assembly fail in draft - OP-4820 <a href="https://github.com/ynput/OpenPype/pull/4416">#4416</a></summary>
<strong>Tile assembly in Deadline was broken.
</strong>Initial bug report revealed other areas of the tile assembly that needed fixing.
___
</details>
<details>
<summary>Maya: Yeti Validate Rig Input - OP-3454 <a href="https://github.com/ynput/OpenPype/pull/4554">#4554</a></summary>
<strong>Fix Yeti Validate Rig Input
</strong>Existing workflow was broken due to this #3297.
___
</details>
<details>
<summary>Scene inventory: Fix code errors when "not found" entries are found <a href="https://github.com/ynput/OpenPype/pull/4594">#4594</a></summary>
Whenever a "NOT FOUND" entry is present, a lot of errors happened in the Scene Inventory:
- It started spamming a lot of errors for the VersionDelegate since it had no numeric version (no version at all). Error reported on Discord:
```python
Traceback (most recent call last):
File "C:\Users\videopro\Documents\github\OpenPype\openpype\tools\utils\delegates.py", line 65, in paint
text = self.displayText(
File "C:\Users\videopro\Documents\github\OpenPype\openpype\tools\utils\delegates.py", line 33, in displayText
assert isinstance(value, numbers.Integral), (
AssertionError: Version is not integer. "None" <class 'NoneType'>
```
- The right-click menu would error on NOT FOUND entries, and thus not show. With this PR it will now _disregard_ not-found items for "Set version" and "Remove" but still allow actions. This PR resolves those issues.
___
</details>
<details>
<summary>Kitsu: Sync OP with zou, make sure value-data is int or float <a href="https://github.com/ynput/OpenPype/pull/4596">#4596</a></summary>
Currently the data zou pulls is a string and not a number, causing some bugs in the pipe where a numeric value is expected (like `Set frame range` in Fusion).
This PR makes sure each value is set with int() or float() so these bugs can't happen later on.
_(A request has also been sent to CGWire to allow forcing values only for some metadata columns, but currently users can enter whatever they want there.)_
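The coercion described above could look roughly like this (a hedged sketch; `coerce_zou_value` is a hypothetical name, not the PR's actual function):

```python
def coerce_zou_value(raw):
    """Zou returns metadata as strings; coerce to int or float when possible."""
    try:
        return int(raw)
    except (TypeError, ValueError):
        try:
            return float(raw)
        except (TypeError, ValueError):
            return raw  # leave non-numeric values untouched

print(coerce_zou_value("1001"))  # -> 1001
print(coerce_zou_value("25.0"))  # -> 25.0
print(coerce_zou_value("text"))  # -> text
```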
___
</details>
<details>
<summary>Max: fix the bug of removing an instance <a href="https://github.com/ynput/OpenPype/pull/4617">#4617</a></summary>
Fix a bug when removing an instance in 3dsMax.
___
</details>
<details>
<summary>Global | Nuke: fixing farm publishing workflow <a href="https://github.com/ynput/OpenPype/pull/4623">#4623</a></summary>
After Nuke adopted the new publisher with new creators, new issues were introduced. Those issues are addressed with this PR, for example broken publishing of reviewable video files via the farm. Local publishing was fixed as well.
___
</details>
<details>
<summary>Ftrack: Ftrack additional families filtering <a href="https://github.com/ynput/OpenPype/pull/4633">#4633</a></summary>
Ftrack family collector makes sure the subset family is also in instance families for additional families filtering.
___
</details>
<details>
<summary>Ftrack: Hierarchical <> Non-Hierarchical attributes sync fix <a href="https://github.com/ynput/OpenPype/pull/4635">#4635</a></summary>
Sync between hierarchical and non-hierarchical attributes should be fixed and work as expected. Action should sync the values as expected and event handler should do it too and only on newly created entities.
___
</details>
<details>
<summary>bugfix for 3dsmax publishing error <a href="https://github.com/ynput/OpenPype/pull/4637">#4637</a></summary>
Fix a bug where publishing jobs failed in 3dsMax.
___
</details>
<details>
<summary>General: Use right validation for ffmpeg executable <a href="https://github.com/ynput/OpenPype/pull/4640">#4640</a></summary>
Use ffmpeg executable validation for ffmpeg executables instead of OIIO executable validation. The validation is used as the last possible source of ffmpeg from the `PATH` environment variable, which is an edge case but can cause issues.
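A hedged sketch of what ffmpeg-specific executable validation might look like (the helper name is illustrative, not OpenPype's actual function):

```python
import subprocess

def is_ffmpeg_executable(path):
    """Check that 'path' runs as ffmpeg by probing its version banner."""
    try:
        proc = subprocess.run(
            [path, "-version"], capture_output=True, text=True, timeout=10
        )
    except (OSError, subprocess.TimeoutExpired):
        return False
    return proc.returncode == 0 and "ffmpeg" in proc.stdout.lower()

print(is_ffmpeg_executable("/nonexistent/ffmpeg"))  # -> False
```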
___
</details>
<details>
<summary>3dsmax: opening last workfile <a href="https://github.com/ynput/OpenPype/pull/4644">#4644</a></summary>
Supports opening last saved workfile in 3dsmax host.
___
</details>
<details>
<summary>Fixed a bug where a QThread in the splash screen could be destroyed before finishing execution <a href="https://github.com/ynput/OpenPype/pull/4647">#4647</a></summary>
This should fix the occasional behavior of the QThread being destroyed before its worker returns from the `run()` function. After quitting, it now waits for the QThread object to properly close itself.
___
</details>
<details>
<summary>General: Use right plugin class for Collect Comment <a href="https://github.com/ynput/OpenPype/pull/4653">#4653</a></summary>
The Collect Comment plugin is an instance plugin, so it should inherit from `InstancePlugin` instead of `ContextPlugin`.
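The practical difference, illustrated here without the real `pyblish` base classes (these stand-in classes only mimic the dispatch behavior): a context plug-in's `process` runs once per publish, while an instance plug-in's runs once per instance.

```python
class ContextPlugin:
    def run(self, context):
        self.process(context)      # called once for the whole publish

class InstancePlugin:
    def run(self, context):
        for instance in context:   # called once per instance
            self.process(instance)

class CollectComment(InstancePlugin):
    def process(self, instance):
        instance.setdefault("comment", "")

context = [{"name": "renderMain"}, {"name": "modelMain"}]
CollectComment().run(context)
print(all("comment" in inst for inst in context))  # -> True
```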
___
</details>
<details>
<summary>Global: add tags field to thumbnail representation <a href="https://github.com/ynput/OpenPype/pull/4660">#4660</a></summary>
The thumbnail representation might be missing the tags field.
___
</details>
<details>
<summary>Integrator: Enforce unique destination transfers, disallow overwrites in queued transfers <a href="https://github.com/ynput/OpenPype/pull/4662">#4662</a></summary>
Fix #4656 by enforcing unique destination transfers in the Integrator. It's now disallowed to queue a destination in the file transaction queue with a new source path during the publish.
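A minimal sketch of the enforcement idea, with a hypothetical queue class rather than the Integrator's real code:

```python
class TransferQueue:
    """Disallow queuing the same destination with a different source."""

    def __init__(self):
        self._transfers = {}  # destination -> source

    def add(self, source, destination):
        existing = self._transfers.get(destination)
        if existing is not None and existing != source:
            raise ValueError(
                f"Destination already queued from a different source: {destination}"
            )
        self._transfers[destination] = source

queue = TransferQueue()
queue.add("/tmp/a.exr", "/publish/v001/a.exr")
queue.add("/tmp/a.exr", "/publish/v001/a.exr")  # same pair is allowed
try:
    queue.add("/tmp/b.exr", "/publish/v001/a.exr")  # conflicting source
except ValueError:
    print("conflicting transfer rejected")
```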
___
</details>
<details>
<summary>Hiero: Creator with correct workfile numeric padding input <a href="https://github.com/ynput/OpenPype/pull/4666">#4666</a></summary>
The Creator was showing 99 in the workfile input for a long time, even if users set the default value to 1001 in studio settings. This has been fixed now.
___
</details>
<details>
<summary>Nuke: Nukenodes family instance without frame range <a href="https://github.com/ynput/OpenPype/pull/4669">#4669</a></summary>
No need to add frame range data to `nukenodes` (backdrop) family publishes, since those are timeless.
___
</details>
<details>
<summary>TVPaint: Optional Validation plugins can be de/activated by user <a href="https://github.com/ynput/OpenPype/pull/4674">#4674</a></summary>
Added `OptionalPyblishPluginMixin` to TVpaint plugins that can be optional.
___
</details>
<details>
<summary>Kitsu: Slightly less strict with instance data <a href="https://github.com/ynput/OpenPype/pull/4678">#4678</a></summary>
- Allow taking the task name from context if the asset doesn't have any. Fixes an issue with Photoshop's review instance not having `task` in data.
- Allow matching "review" against both `instance.data["family"]` and `instance.data["families"]`, because some instances don't have the primary family in families, e.g. in Photoshop and TVPaint.
- Do not error in Integrate Kitsu Review whenever, for whatever reason, Integrate Kitsu Note did not create a comment; just log a message that it was unable to connect a review.
___
</details>
<details>
<summary>Publisher: Fix reset shortcut sequence <a href="https://github.com/ynput/OpenPype/pull/4694">#4694</a></summary>
Fix a bug created in https://github.com/ynput/OpenPype/pull/4676 where the key sequence was checked using an unsupported method. The check was changed to convert the event into a `QKeySequence` object, which can be compared to the prepared sequence.
___
</details>
<details>
<summary>Refactor _capture <a href="https://github.com/ynput/OpenPype/pull/4702">#4702</a></summary>
___
</details>
<details>
<summary>Hiero: correct container colors if UpToDate <a href="https://github.com/ynput/OpenPype/pull/4708">#4708</a></summary>
Colors on loaded containers now correctly identify the real state of the version: red for out of date and green for up to date.
___
</details>
### **🔀 Refactored code**
<details>
<summary>Look Assigner: Move Look Assigner tool since it's Maya only <a href="https://github.com/ynput/OpenPype/pull/4604">#4604</a></summary>
Fix #4357: Move Look Assigner tool to maya since it's Maya only
___
</details>
<details>
<summary>Maya: Remove unused functions from Extract Look <a href="https://github.com/ynput/OpenPype/pull/4671">#4671</a></summary>
Remove unused functions from Maya Extract Look plug-in
___
</details>
<details>
<summary>Extract Review code refactor <a href="https://github.com/ynput/OpenPype/pull/3930">#3930</a></summary>
<strong>Trying to reduce complexity of Extract Review plug-in
- Re-use profile filtering from lib
- Remove "combination families" additional filtering which supposedly was from OP v2
- Simplify 'formatting' for filling gaps
- Use `legacy_io.Session` over `os.environ`
</strong>
___
</details>
<details>
<summary>Maya: Replace last usages of Qt module <a href="https://github.com/ynput/OpenPype/pull/4610">#4610</a></summary>
Replace last usage of `Qt` module with `qtpy`. This change is needed for `PySide6` support. All changes happened in Maya loader plugins.
___
</details>
<details>
<summary>Update tests and documentation for `ColormanagedPyblishPluginMixin` <a href="https://github.com/ynput/OpenPype/pull/4612">#4612</a></summary>
Refactor `ExtractorColormanaged` to `ColormanagedPyblishPluginMixin` in tests and documentation.
___
</details>
<details>
<summary>Improve logging of used app + tool envs on application launch (minor tweak) <a href="https://github.com/ynput/OpenPype/pull/4686">#4686</a></summary>
Use `app.full_name` for change done in #4682
___
</details>
### **📃 Documentation**
<details>
<summary>Docs/add architecture document <a href="https://github.com/ynput/OpenPype/pull/4344">#4344</a></summary>
<strong>Add `ARCHITECTURE.md` document.
</strong>This document attempts to give a quick overview of the project to help onboarding. It's not extensive documentation, but more of an elevator pitch with one-line descriptions of files/directories and what they attempt to do.
___
</details>
<details>
<summary>Documentation: Tweak grammar and fix some typos <a href="https://github.com/ynput/OpenPype/pull/4613">#4613</a></summary>
This resolves some grammar issues and typos in the documentation. Also fixes the extension of some images in the After Effects docs, which used an uppercase extension even though the files had a lowercase extension.
___
</details>
<details>
<summary>Docs: Fix some minor grammar/typos <a href="https://github.com/ynput/OpenPype/pull/4680">#4680</a></summary>
Typo/grammar fixes in documentation.
___
</details>
### **Merged pull requests**
<details>
<summary>Maya: Implement image file node loader <a href="https://github.com/ynput/OpenPype/pull/4313">#4313</a></summary>
<strong>Implements a loader for loading texture image into a `file` node in Maya.
</strong>Similar to Maya's Hypershade creation of textures, on load you have the option to choose between three modes of creating:
- Texture
- Projection
- Stencil

These should match what Maya generates if you create those in Maya.
- [x] Load and manage file nodes
- [x] Apply color spaces after #4195
- [x] Support for _either_ UDIM or image sequence - currently it seems to always load sequences as UDIM automatically.
- [ ] Add support for animation sequences of UDIM textures using the `<f>.<udim>.exr` path format?
___
</details>
<details>
<summary>Maya Look Assigner: Don't rely on containers for get all assets <a href="https://github.com/ynput/OpenPype/pull/4600">#4600</a></summary>
This resolves #4044 by not relying on containers in the scene but instead on finding nodes with `cbId` attributes. As such, imported nodes would also be found and a shader can be assigned (similar to when using get from selection). **Please take into consideration the potential downsides below.** Potential downsides would be:
- If an already loaded look has any dagNodes, say a 3D Projection node, then that will also show up as a loaded asset, where previously nodes from loaded looks were ignored.
- If any dag nodes were created locally, they would have gotten `cbId` attributes on scene save, and thus the current asset would almost always show.
___
</details>
<details>
<summary>Maya: Unify menu labels for "Set Frame Range" and "Set Resolution" <a href="https://github.com/ynput/OpenPype/pull/4605">#4605</a></summary>
Fix #4109: Unify menu labels for "Set Frame Range" and "Set Resolution". This also tweaks the Houdini label from Reset Frame Range to Set Frame Range.
___
</details>
<details>
<summary>Resolve missing OPENPYPE_MONGO in deadline global job preload <a href="https://github.com/ynput/OpenPype/pull/4484">#4484</a></summary>
<strong>In the GlobalJobPreLoad plugin, we propose to replace the SpawnProcess with a subprocess and to pass the environment variables as parameters, since the SpawnProcess under CentOS Linux does not pass environment variables.
</strong>In the GlobalJobPreLoad plugin, the Deadline SpawnProcess is used to start the OpenPype process. The problem is that the SpawnProcess does not pass environment variables, including OPENPYPE_MONGO, to the process when running under CentOS 7 Linux, and the process gets stuck. We propose to replace it with a subprocess and to pass the variables as parameters.
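A minimal sketch of the subprocess approach with explicitly passed environment variables (the Mongo URL here is a placeholder, not a real endpoint):

```python
import os
import subprocess
import sys

# Build the child environment explicitly instead of relying on the parent
# process (Deadline's SpawnProcess) to propagate it.
env = dict(os.environ)
env["OPENPYPE_MONGO"] = "mongodb://example-server:27017"  # placeholder value

proc = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['OPENPYPE_MONGO'])"],
    env=env, capture_output=True, text=True,
)
print(proc.stdout.strip())  # -> mongodb://example-server:27017
```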
___
</details>
<details>
<summary>Tests: Added setup_only to tests <a href="https://github.com/ynput/OpenPype/pull/4591">#4591</a></summary>
Allows downloading the test zip, unzipping it and restoring the DB in preparation for a new test.
___
</details>
<details>
<summary>Maya: Arnold don't reset maya timeline frame range on render creation (or setting render settings) <a href="https://github.com/ynput/OpenPype/pull/4603">#4603</a></summary>
Fix #4429: Do not reset fps or playback timeline on applying or creating render settings
___
</details>
<details>
<summary>Bump @sideway/formula from 3.0.0 to 3.0.1 in /website <a href="https://github.com/ynput/OpenPype/pull/4609">#4609</a></summary>
Bumps [@sideway/formula](https://github.com/sideway/formula) from 3.0.0 to 3.0.1.
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/hapijs/formula/commit/5b44c1bffc38135616fb91d5ad46eaf64f03d23b"><code>5b44c1b</code></a> 3.0.1</li>
<li><a href="https://github.com/hapijs/formula/commit/9fbc20a02d75ae809c37a610a57802cd1b41b3fe"><code>9fbc20a</code></a> chore: better number regex</li>
<li><a href="https://github.com/hapijs/formula/commit/41ae98e0421913b100886adb0107a25d552d9e1a"><code>41ae98e</code></a> Cleanup</li>
<li><a href="https://github.com/hapijs/formula/commit/c59f35ec401e18cead10e0cedfb44291517610b1"><code>c59f35e</code></a> Move to Sideway</li>
<li>See full diff in <a href="https://github.com/sideway/formula/compare/v3.0.0...v3.0.1">compare view</a></li>
</ul>
</details>
<details>
<summary>Maintainer changes</summary>
<p>This version was pushed to npm by <a href="https://www.npmjs.com/~marsup">marsup</a>, a new releaser for <code>@sideway/formula</code> since your current version.</p>
</details>
<br />
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=@sideway/formula&package-manager=npm_and_yarn&previous-version=3.0.0&new-version=3.0.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/ynput/OpenPype/network/alerts).
</details>
___
</details>
<details>
<summary>Update artist_hosts_maya_arnold.md <a href="https://github.com/ynput/OpenPype/pull/4626">#4626</a></summary>
Correct Arnold docs.
___
</details>
<details>
<summary>Maya: Add "Include Parent Hierarchy" option in animation creator plugin <a href="https://github.com/ynput/OpenPype/pull/4645">#4645</a></summary>
Add an option in Project Settings > Maya > Creator Plugins > Create Animation to include (or not) the parent hierarchy. This saves artists from having to manually check the option for every animation instance.
___
</details>
<details>
<summary>General: Filter available applications <a href="https://github.com/ynput/OpenPype/pull/4667">#4667</a></summary>
Added option to filter applications that don't have valid executable available in settings in launcher and ftrack actions. This option can be disabled in new settings category `Applications`. The filtering is by default disabled.
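The filtering idea can be sketched as a small pure-Python function. This is an illustrative sketch only; the function name and the injectable `which` parameter are assumptions, not the PR's API (in practice something like `shutil.which` or the configured executable paths would be checked).

```python
import shutil


def filter_available_applications(applications, filtering_enabled=False,
                                  which=shutil.which):
    """Return only applications whose executable resolves on this machine.

    `applications` maps an app label to its executable name or path.
    When filtering is disabled (the default, matching the PR), every
    application is kept regardless of whether its executable exists.
    """
    if not filtering_enabled:
        return dict(applications)
    return {
        label: exe
        for label, exe in applications.items()
        if which(exe)  # truthy when the executable can be located
    }
```

Injecting the lookup function keeps the logic testable without depending on which DCCs are installed on the test machine.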
___
</details>
<details>
<summary>3dsmax: make sure that startup script executes <a href="https://github.com/ynput/OpenPype/pull/4695">#4695</a></summary>
Improve the reliability of OpenPype startup in 3ds Max by making sure the startup script executes.
___
</details>
<details>
<summary>Project Manager: Change minimum frame start/end to '0' <a href="https://github.com/ynput/OpenPype/pull/4719">#4719</a></summary>
The Project Manager now allows frame start/end to be set as low as `0`.
___
</details>
## [3.15.2](https://github.com/ynput/OpenPype/tree/3.15.2)
[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.1...3.15.2)

View file

@ -3,10 +3,13 @@ from openpype.lib import PreLaunchHook
from openpype.pipeline.workfile import create_workdir_extra_folders
class AddLastWorkfileToLaunchArgs(PreLaunchHook):
"""Add last workfile path to launch arguments.
class CreateWorkdirExtraFolders(PreLaunchHook):
"""Create extra folders for the work directory.
Based on setting `project_settings/global/tools/Workfiles/extra_folders`
profile filtering will decide whether extra folders need to be created in
the work directory.
This is not possible to do for all applications the same way.
"""
# Execute after workfile template copy

View file

@ -7,7 +7,7 @@ class LaunchFoundryAppsWindows(PreLaunchHook):
Nuke is executed "like" python process so it is required to pass
`CREATE_NEW_CONSOLE` flag on windows to trigger creation of new console.
At the same time the newly created console won't create it's own stdout
At the same time the newly created console won't create its own stdout
and stderr handlers so they should not be redirected to DEVNULL.
"""
@ -18,7 +18,7 @@ class LaunchFoundryAppsWindows(PreLaunchHook):
def execute(self):
# Change `creationflags` to CREATE_NEW_CONSOLE
# - on Windows will nuke create new window using it's console
# - on Windows nuke will create new window using its console
# Set `stdout` and `stderr` to None so new created console does not
# have redirected output to DEVNULL in build
self.launch_context.kwargs.update({

View file

@ -84,11 +84,11 @@ class MainThreadItem:
self.kwargs = kwargs
def execute(self):
"""Execute callback and store it's result.
"""Execute callback and store its result.
Method must be called from main thread. Item is marked as `done`
when callback execution finished. Store output of callback of exception
information when callback raise one.
information when callback raises one.
"""
print("Executing process in main thread")
if self.done:

View file

@ -38,8 +38,9 @@ class CelactionPrelaunchHook(PreLaunchHook):
)
path_to_cli = os.path.join(CELACTION_SCRIPTS_DIR, "publish_cli.py")
subproces_args = get_openpype_execute_args("run", path_to_cli)
openpype_executable = subproces_args.pop(0)
subprocess_args = get_openpype_execute_args("run", path_to_cli)
openpype_executable = subprocess_args.pop(0)
workfile_settings = self.get_workfile_settings()
winreg.SetValueEx(
hKey,
@ -49,20 +50,34 @@ class CelactionPrelaunchHook(PreLaunchHook):
openpype_executable
)
parameters = subproces_args + [
"--currentFile", "*SCENE*",
"--chunk", "*CHUNK*",
"--frameStart", "*START*",
"--frameEnd", "*END*",
"--resolutionWidth", "*X*",
"--resolutionHeight", "*Y*"
# add required arguments for workfile path
parameters = subprocess_args + [
"--currentFile", "*SCENE*"
]
# Add custom parameters from workfile settings
if "render_chunk" in workfile_settings["submission_overrides"]:
parameters += [
"--chunk", "*CHUNK*"
]
if "resolution" in workfile_settings["submission_overrides"]:
parameters += [
"--resolutionWidth", "*X*",
"--resolutionHeight", "*Y*"
]
if "frame_range" in workfile_settings["submission_overrides"]:
parameters += [
"--frameStart", "*START*",
"--frameEnd", "*END*"
]
winreg.SetValueEx(
hKey, "SubmitParametersTitle", 0, winreg.REG_SZ,
subprocess.list2cmdline(parameters)
)
self.log.debug(f"__ parameters: \"{parameters}\"")
# setting resolution parameters
path_submit = "\\".join([
path_user_settings, "Dialogs", "SubmitOutput"
@ -135,3 +150,6 @@ class CelactionPrelaunchHook(PreLaunchHook):
self.log.info(f"Workfile to open: \"{workfile_path}\"")
return workfile_path
def get_workfile_settings(self):
return self.data["project_settings"]["celaction"]["workfile"]

View file

@ -39,7 +39,7 @@ class CollectCelactionCliKwargs(pyblish.api.Collector):
passing_kwargs[key] = value
if missing_kwargs:
raise RuntimeError("Missing arguments {}".format(
self.log.debug("Missing arguments {}".format(
", ".join(
[f'"{key}"' for key in missing_kwargs]
)

View file

@ -6,12 +6,13 @@ from openpype.pipeline.publish import get_errored_instances_from_context
class SelectInvalidAction(pyblish.api.Action):
"""Select invalid nodes in Maya when plug-in failed.
"""Select invalid nodes in Fusion when plug-in failed.
To retrieve the invalid nodes this assumes a static `get_invalid()`
method is available on the plugin.
"""
label = "Select invalid"
on = "failed" # This action is only available on a failed plug-in
icon = "search" # Icon from Awesome Icon
@ -31,8 +32,10 @@ class SelectInvalidAction(pyblish.api.Action):
if isinstance(invalid_nodes, (list, tuple)):
invalid.extend(invalid_nodes)
else:
self.log.warning("Plug-in returned to be invalid, "
"but has no selectable nodes.")
self.log.warning(
"Plug-in returned to be invalid, "
"but has no selectable nodes."
)
if not invalid:
# Assume relevant comp is current comp and clear selection
@ -51,4 +54,6 @@ class SelectInvalidAction(pyblish.api.Action):
for tool in invalid:
flow.Select(tool, True)
names.add(tool.Name)
self.log.info("Selecting invalid tools: %s" % ", ".join(sorted(names)))
self.log.info(
"Selecting invalid tools: %s" % ", ".join(sorted(names))
)

View file

@ -6,7 +6,6 @@ from openpype.tools.utils import host_tools
from openpype.style import load_stylesheet
from openpype.lib import register_event_callback
from openpype.hosts.fusion.scripts import (
set_rendermode,
duplicate_with_inputs,
)
from openpype.hosts.fusion.api.lib import (
@ -60,7 +59,6 @@ class OpenPypeMenu(QtWidgets.QWidget):
publish_btn = QtWidgets.QPushButton("Publish...", self)
manager_btn = QtWidgets.QPushButton("Manage...", self)
libload_btn = QtWidgets.QPushButton("Library...", self)
rendermode_btn = QtWidgets.QPushButton("Set render mode...", self)
set_framerange_btn = QtWidgets.QPushButton("Set Frame Range", self)
set_resolution_btn = QtWidgets.QPushButton("Set Resolution", self)
duplicate_with_inputs_btn = QtWidgets.QPushButton(
@ -91,7 +89,6 @@ class OpenPypeMenu(QtWidgets.QWidget):
layout.addWidget(set_framerange_btn)
layout.addWidget(set_resolution_btn)
layout.addWidget(rendermode_btn)
layout.addSpacing(20)
@ -108,7 +105,6 @@ class OpenPypeMenu(QtWidgets.QWidget):
load_btn.clicked.connect(self.on_load_clicked)
manager_btn.clicked.connect(self.on_manager_clicked)
libload_btn.clicked.connect(self.on_libload_clicked)
rendermode_btn.clicked.connect(self.on_rendermode_clicked)
duplicate_with_inputs_btn.clicked.connect(
self.on_duplicate_with_inputs_clicked
)
@ -162,15 +158,6 @@ class OpenPypeMenu(QtWidgets.QWidget):
def on_libload_clicked(self):
host_tools.show_library_loader()
def on_rendermode_clicked(self):
if self.render_mode_widget is None:
window = set_rendermode.SetRenderMode()
window.setStyleSheet(load_stylesheet())
window.show()
self.render_mode_widget = window
else:
self.render_mode_widget.show()
def on_duplicate_with_inputs_clicked(self):
duplicate_with_inputs.duplicate_with_input_connections()

View file

@ -4,29 +4,34 @@ import qtawesome
from openpype.hosts.fusion.api import (
get_current_comp,
comp_lock_and_undo_chunk
comp_lock_and_undo_chunk,
)
from openpype.lib import BoolDef
from openpype.lib import (
BoolDef,
EnumDef,
)
from openpype.pipeline import (
legacy_io,
Creator,
CreatedInstance
CreatedInstance,
)
from openpype.client import (
get_asset_by_name,
)
from openpype.client import get_asset_by_name
class CreateSaver(Creator):
identifier = "io.openpype.creators.fusion.saver"
name = "saver"
label = "Saver"
label = "Render (saver)"
name = "render"
family = "render"
default_variants = ["Main"]
default_variants = ["Main", "Mask"]
description = "Fusion Saver to generate image sequence"
def create(self, subset_name, instance_data, pre_create_data):
instance_attributes = ["reviewable"]
def create(self, subset_name, instance_data, pre_create_data):
# TODO: Add pre_create attributes to choose file format?
file_format = "OpenEXRFormat"
@ -58,7 +63,8 @@ class CreateSaver(Creator):
family=self.family,
subset_name=subset_name,
data=instance_data,
creator=self)
creator=self,
)
# Insert the transient data
instance.transient_data["tool"] = saver
@ -68,11 +74,9 @@ class CreateSaver(Creator):
return instance
def collect_instances(self):
comp = get_current_comp()
tools = comp.GetToolList(False, "Saver").values()
for tool in tools:
data = self.get_managed_tool_data(tool)
if not data:
data = self._collect_unmanaged_saver(tool)
@ -90,7 +94,6 @@ class CreateSaver(Creator):
def update_instances(self, update_list):
for created_inst, _changes in update_list:
new_data = created_inst.data_to_store()
tool = created_inst.transient_data["tool"]
self._update_tool_with_data(tool, new_data)
@ -139,7 +142,6 @@ class CreateSaver(Creator):
tool.SetAttrs({"TOOLS_Name": subset})
def _collect_unmanaged_saver(self, tool):
# TODO: this should not be done this way - this should actually
# get the data as stored on the tool explicitly (however)
# that would disallow any 'regular saver' to be collected
@ -153,8 +155,7 @@ class CreateSaver(Creator):
asset = legacy_io.Session["AVALON_ASSET"]
task = legacy_io.Session["AVALON_TASK"]
asset_doc = get_asset_by_name(project_name=project,
asset_name=asset)
asset_doc = get_asset_by_name(project_name=project, asset_name=asset)
path = tool["Clip"][comp.TIME_UNDEFINED]
fname = os.path.basename(path)
@ -178,21 +179,20 @@ class CreateSaver(Creator):
"variant": variant,
"active": not passthrough,
"family": self.family,
# Unique identifier for instance and this creator
"id": "pyblish.avalon.instance",
"creator_identifier": self.identifier
"creator_identifier": self.identifier,
}
def get_managed_tool_data(self, tool):
"""Return data of the tool if it matches creator identifier"""
data = tool.GetData('openpype')
data = tool.GetData("openpype")
if not isinstance(data, dict):
return
required = {
"id": "pyblish.avalon.instance",
"creator_identifier": self.identifier
"creator_identifier": self.identifier,
}
for key, value in required.items():
if key not in data or data[key] != value:
@ -205,11 +205,40 @@ class CreateSaver(Creator):
return data
def get_instance_attr_defs(self):
return [
BoolDef(
"review",
default=True,
label="Review"
)
def get_pre_create_attr_defs(self):
"""Settings for create page"""
attr_defs = [
self._get_render_target_enum(),
self._get_reviewable_bool(),
]
return attr_defs
def get_instance_attr_defs(self):
"""Settings for publish page"""
attr_defs = [
self._get_render_target_enum(),
self._get_reviewable_bool(),
]
return attr_defs
# These functions below should be moved to another file
# so it can be used by other plugins. plugin.py ?
def _get_render_target_enum(self):
rendering_targets = {
"local": "Local machine rendering",
"frames": "Use existing frames",
}
if "farm_rendering" in self.instance_attributes:
rendering_targets["farm"] = "Farm rendering"
return EnumDef(
"render_target", items=rendering_targets, label="Render target"
)
def _get_reviewable_bool(self):
return BoolDef(
"review",
default=("reviewable" in self.instance_attributes),
label="Review",
)

View file

@ -0,0 +1,50 @@
import pyblish.api
from openpype.pipeline import publish
import os
class CollectFusionExpectedFrames(
pyblish.api.InstancePlugin, publish.ColormanagedPyblishPluginMixin
):
"""Collect all frames needed to publish expected frames"""
order = pyblish.api.CollectorOrder + 0.5
label = "Collect Expected Frames"
hosts = ["fusion"]
families = ["render"]
def process(self, instance):
context = instance.context
frame_start = context.data["frameStartHandle"]
frame_end = context.data["frameEndHandle"]
path = instance.data["path"]
output_dir = instance.data["outputDir"]
basename = os.path.basename(path)
head, ext = os.path.splitext(basename)
files = [
f"{head}{str(frame).zfill(4)}{ext}"
for frame in range(frame_start, frame_end + 1)
]
repre = {
"name": ext[1:],
"ext": ext[1:],
"frameStart": f"%0{len(str(frame_end))}d" % frame_start,
"files": files,
"stagingDir": output_dir,
}
self.set_representation_colorspace(
representation=repre,
context=context,
)
# review representation
if instance.data.get("review", False):
repre["tags"] = ["review"]
# add the repre to the instance
if "representations" not in instance.data:
instance.data["representations"] = []
instance.data["representations"].append(repre)

View file

@ -1,44 +0,0 @@
import pyblish.api
class CollectFusionRenderMode(pyblish.api.InstancePlugin):
"""Collect current comp's render Mode
Options:
local
farm
Note that this value is set for each comp separately. When you save the
comp this information will be stored in that file. If for some reason the
available tool does not visualize which render mode is set for the
current comp, please run the following line in the console (Py2)
comp.GetData("openpype.rendermode")
This will return the name of the current render mode as seen above under
Options.
"""
order = pyblish.api.CollectorOrder + 0.4
label = "Collect Render Mode"
hosts = ["fusion"]
families = ["render"]
def process(self, instance):
"""Collect all image sequence tools"""
options = ["local", "farm"]
comp = instance.context.data.get("currentComp")
if not comp:
raise RuntimeError("No comp previously collected, unable to "
"retrieve Fusion version.")
rendermode = comp.GetData("openpype.rendermode") or "local"
assert rendermode in options, "Must be supported render mode"
self.log.info("Render mode: {0}".format(rendermode))
# Append family
family = "render.{0}".format(rendermode)
instance.data["families"].append(family)

View file

@ -0,0 +1,25 @@
import pyblish.api
class CollectFusionRenders(pyblish.api.InstancePlugin):
"""Collect current saver node's render Mode
Options:
local (Render locally)
frames (Use existing frames)
"""
order = pyblish.api.CollectorOrder + 0.4
label = "Collect Renders"
hosts = ["fusion"]
families = ["render"]
def process(self, instance):
render_target = instance.data["render_target"]
family = instance.data["family"]
# add targeted family to families
instance.data["families"].append(
"{}.{}".format(family, render_target)
)

View file

@ -0,0 +1,109 @@
import logging
import contextlib
import pyblish.api
from openpype.hosts.fusion.api import comp_lock_and_undo_chunk
log = logging.getLogger(__name__)
@contextlib.contextmanager
def enabled_savers(comp, savers):
"""Enable only the `savers` in Comp during the context.
Any Saver tool in the passed composition that is not in the savers list
will be set to passthrough during the context.
Args:
comp (object): Fusion composition object.
savers (list): List of Saver tool objects.
"""
passthrough_key = "TOOLB_PassThrough"
original_states = {}
enabled_save_names = {saver.Name for saver in savers}
try:
all_savers = comp.GetToolList(False, "Saver").values()
for saver in all_savers:
original_state = saver.GetAttrs()[passthrough_key]
original_states[saver] = original_state
# The passthrough state we want to set (passthrough != enabled)
state = saver.Name not in enabled_save_names
if state != original_state:
saver.SetAttrs({passthrough_key: state})
yield
finally:
for saver, original_state in original_states.items():
saver.SetAttrs({"TOOLB_PassThrough": original_state})
class FusionRenderLocal(pyblish.api.InstancePlugin):
"""Render the current Fusion composition locally."""
order = pyblish.api.ExtractorOrder - 0.2
label = "Render Local"
hosts = ["fusion"]
families = ["render.local"]
def process(self, instance):
context = instance.context
# Start render
self.render_once(context)
# Log render status
self.log.info(
"Rendered '{nm}' for asset '{ast}' under the task '{tsk}'".format(
nm=instance.data["name"],
ast=instance.data["asset"],
tsk=instance.data["task"],
)
)
def render_once(self, context):
"""Render context comp only once, even with more render instances"""
# This plug-in assumes all render nodes get rendered at the same time
# to speed up the rendering. The check below makes sure that we only
# execute the rendering once and not for each instance.
key = f"__hasRun{self.__class__.__name__}"
savers_to_render = [
# Get the saver tool from the instance
instance[0] for instance in context if
# Only active instances
instance.data.get("publish", True) and
# Only render.local instances
"render.local" in instance.data["families"]
]
if key not in context.data:
# We initialize as false to indicate it wasn't successful yet
# so we can keep track of whether Fusion succeeded
context.data[key] = False
current_comp = context.data["currentComp"]
frame_start = context.data["frameStartHandle"]
frame_end = context.data["frameEndHandle"]
self.log.info("Starting Fusion render")
self.log.info(f"Start frame: {frame_start}")
self.log.info(f"End frame: {frame_end}")
saver_names = ", ".join(saver.Name for saver in savers_to_render)
self.log.info(f"Rendering tools: {saver_names}")
with comp_lock_and_undo_chunk(current_comp):
with enabled_savers(current_comp, savers_to_render):
result = current_comp.Render(
{
"Start": frame_start,
"End": frame_end,
"Wait": True,
}
)
context.data[key] = bool(result)
if context.data[key] is False:
raise RuntimeError("Comp render failed")

View file

@ -1,100 +0,0 @@
import os
import pyblish.api
from openpype.pipeline import publish
from openpype.hosts.fusion.api import comp_lock_and_undo_chunk
class Fusionlocal(pyblish.api.InstancePlugin,
publish.ColormanagedPyblishPluginMixin):
"""Render the current Fusion composition locally.
Extract the result of savers by starting a comp render
This will run the local render of Fusion.
"""
order = pyblish.api.ExtractorOrder - 0.1
label = "Render Local"
hosts = ["fusion"]
families = ["render.local"]
def process(self, instance):
context = instance.context
# Start render
self.render_once(context)
# Log render status
self.log.info(
"Rendered '{nm}' for asset '{ast}' under the task '{tsk}'".format(
nm=instance.data["name"],
ast=instance.data["asset"],
tsk=instance.data["task"],
)
)
frame_start = context.data["frameStartHandle"]
frame_end = context.data["frameEndHandle"]
path = instance.data["path"]
output_dir = instance.data["outputDir"]
basename = os.path.basename(path)
head, ext = os.path.splitext(basename)
files = [
f"{head}{str(frame).zfill(4)}{ext}"
for frame in range(frame_start, frame_end + 1)
]
repre = {
"name": ext[1:],
"ext": ext[1:],
"frameStart": f"%0{len(str(frame_end))}d" % frame_start,
"files": files,
"stagingDir": output_dir,
}
self.set_representation_colorspace(
representation=repre,
context=context,
)
if "representations" not in instance.data:
instance.data["representations"] = []
instance.data["representations"].append(repre)
# review representation
if instance.data.get("review", False):
repre["tags"] = ["review", "ftrackreview"]
def render_once(self, context):
"""Render context comp only once, even with more render instances"""
# This plug-in assumes all render nodes get rendered at the same time
# to speed up the rendering. The check below makes sure that we only
# execute the rendering once and not for each instance.
key = f"__hasRun{self.__class__.__name__}"
if key not in context.data:
# We initialize as false to indicate it wasn't successful yet
# so we can keep track of whether Fusion succeeded
context.data[key] = False
current_comp = context.data["currentComp"]
frame_start = context.data["frameStartHandle"]
frame_end = context.data["frameEndHandle"]
self.log.info("Starting Fusion render")
self.log.info(f"Start frame: {frame_start}")
self.log.info(f"End frame: {frame_end}")
with comp_lock_and_undo_chunk(current_comp):
result = current_comp.Render(
{
"Start": frame_start,
"End": frame_end,
"Wait": True,
}
)
context.data[key] = bool(result)
if context.data[key] is False:
raise RuntimeError("Comp render failed")

View file

@ -14,22 +14,19 @@ class ValidateCreateFolderChecked(pyblish.api.InstancePlugin):
"""
order = pyblish.api.ValidatorOrder
actions = [RepairAction]
label = "Validate Create Folder Checked"
families = ["render"]
hosts = ["fusion"]
actions = [SelectInvalidAction]
actions = [RepairAction, SelectInvalidAction]
@classmethod
def get_invalid(cls, instance):
active = instance.data.get("active", instance.data.get("publish"))
if not active:
return []
tool = instance[0]
create_dir = tool.GetInput("CreateDir")
if create_dir == 0.0:
cls.log.error("%s has Create Folder turned off" % instance[0].Name)
cls.log.error(
"%s has Create Folder turned off" % instance[0].Name
)
return [tool]
def process(self, instance):
@ -37,7 +34,8 @@ class ValidateCreateFolderChecked(pyblish.api.InstancePlugin):
if invalid:
raise PublishValidationError(
"Found Saver with Create Folder During Render checked off",
title=self.label)
title=self.label,
)
@classmethod
def repair(cls, instance):

View file

@ -0,0 +1,78 @@
import os
import pyblish.api
from openpype.pipeline.publish import RepairAction
from openpype.pipeline import PublishValidationError
from openpype.hosts.fusion.api.action import SelectInvalidAction
class ValidateLocalFramesExistence(pyblish.api.InstancePlugin):
"""Checks if files for savers that's set
to publish expected frames exists
"""
order = pyblish.api.ValidatorOrder
label = "Validate Expected Frames Exists"
families = ["render"]
hosts = ["fusion"]
actions = [RepairAction, SelectInvalidAction]
@classmethod
def get_invalid(cls, instance, non_existing_frames=None):
if non_existing_frames is None:
non_existing_frames = []
if instance.data.get("render_target") == "frames":
tool = instance[0]
frame_start = instance.data["frameStart"]
frame_end = instance.data["frameEnd"]
path = instance.data["path"]
output_dir = instance.data["outputDir"]
basename = os.path.basename(path)
head, ext = os.path.splitext(basename)
files = [
f"{head}{str(frame).zfill(4)}{ext}"
for frame in range(frame_start, frame_end + 1)
]
for file in files:
if not os.path.exists(os.path.join(output_dir, file)):
cls.log.error(
f"Missing file: {os.path.join(output_dir, file)}"
)
non_existing_frames.append(file)
if len(non_existing_frames) > 0:
cls.log.error(f"Some of {tool.Name}'s files does not exist")
return [tool]
def process(self, instance):
non_existing_frames = []
invalid = self.get_invalid(instance, non_existing_frames)
if invalid:
raise PublishValidationError(
"{} is set to publish existing frames but "
"some frames are missing. "
"The missing file(s) are:\n\n{}".format(
invalid[0].Name,
"\n\n".join(non_existing_frames),
),
title=self.label,
)
@classmethod
def repair(cls, instance):
invalid = cls.get_invalid(instance)
if invalid:
tool = invalid[0]
# Change render target to local to render locally
tool.SetData("openpype.creator_attributes.render_target", "local")
cls.log.info(
f"Reload the publisher and {tool.Name} "
"will be set to render locally"
)

View file

@ -1,112 +0,0 @@
from qtpy import QtWidgets
import qtawesome
from openpype.hosts.fusion.api import get_current_comp
_help = {"local": "Render the comp on your own machine and publish "
"it from that the destination folder",
"farm": "Submit a Fusion render job to a Render farm to use all other"
" computers and add a publish job"}
class SetRenderMode(QtWidgets.QWidget):
def __init__(self, parent=None):
QtWidgets.QWidget.__init__(self, parent)
self._comp = get_current_comp()
self._comp_name = self._get_comp_name()
self.setWindowTitle("Set Render Mode")
self.setFixedSize(300, 175)
layout = QtWidgets.QVBoxLayout()
# region comp info
comp_info_layout = QtWidgets.QHBoxLayout()
update_btn = QtWidgets.QPushButton(qtawesome.icon("fa.refresh",
color="white"), "")
update_btn.setFixedWidth(25)
update_btn.setFixedHeight(25)
comp_information = QtWidgets.QLineEdit()
comp_information.setEnabled(False)
comp_info_layout.addWidget(comp_information)
comp_info_layout.addWidget(update_btn)
# endregion comp info
# region modes
mode_options = QtWidgets.QComboBox()
mode_options.addItems(_help.keys())
mode_information = QtWidgets.QTextEdit()
mode_information.setReadOnly(True)
# endregion modes
accept_btn = QtWidgets.QPushButton("Accept")
layout.addLayout(comp_info_layout)
layout.addWidget(mode_options)
layout.addWidget(mode_information)
layout.addWidget(accept_btn)
self.setLayout(layout)
self.comp_information = comp_information
self.update_btn = update_btn
self.mode_options = mode_options
self.mode_information = mode_information
self.accept_btn = accept_btn
self.connections()
self.update()
# Force updated render mode help text
self._update_rendermode_info()
def connections(self):
"""Build connections between code and buttons"""
self.update_btn.clicked.connect(self.update)
self.accept_btn.clicked.connect(self._set_comp_rendermode)
self.mode_options.currentIndexChanged.connect(
self._update_rendermode_info)
def update(self):
"""Update all information in the UI"""
self._comp = get_current_comp()
self._comp_name = self._get_comp_name()
self.comp_information.setText(self._comp_name)
# Update current comp settings
mode = self._get_comp_rendermode()
index = self.mode_options.findText(mode)
self.mode_options.setCurrentIndex(index)
def _update_rendermode_info(self):
rendermode = self.mode_options.currentText()
self.mode_information.setText(_help[rendermode])
def _get_comp_name(self):
return self._comp.GetAttrs("COMPS_Name")
def _get_comp_rendermode(self):
return self._comp.GetData("openpype.rendermode") or "local"
def _set_comp_rendermode(self):
rendermode = self.mode_options.currentText()
self._comp.SetData("openpype.rendermode", rendermode)
self._comp.Print("Updated render mode to '%s'\n" % rendermode)
self.hide()
def _validation(self):
ui_mode = self.mode_options.currentText()
comp_mode = self._get_comp_rendermode()
return comp_mode == ui_mode

View file

@ -1221,7 +1221,7 @@ def set_track_color(track_item, color):
def check_inventory_versions(track_items=None):
"""
Actual version color idetifier of Loaded containers
Actual version color identifier of Loaded containers
Check all track items and filter only
Loader nodes for its version. It will get all versions from database
@ -1249,10 +1249,10 @@ def check_inventory_versions(track_items=None):
project_name = legacy_io.active_project()
filter_result = filter_containers(containers, project_name)
for container in filter_result.latest:
set_track_color(container["_item"], clip_color)
set_track_color(container["_item"], clip_color_last)
for container in filter_result.outdated:
set_track_color(container["_item"], clip_color_last)
set_track_color(container["_item"], clip_color)
def selection_changed_timeline(event):

View file

@ -146,6 +146,8 @@ class CreatorWidget(QtWidgets.QDialog):
return " ".join([str(m.group(0)).capitalize() for m in matches])
def create_row(self, layout, type, text, **kwargs):
value_keys = ["setText", "setCheckState", "setValue", "setChecked"]
# get type attribute from qwidgets
attr = getattr(QtWidgets, type)
@ -167,14 +169,27 @@ class CreatorWidget(QtWidgets.QDialog):
# assign the created attribute to variable
item = getattr(self, attr_name)
# set attributes to item which are not values
for func, val in kwargs.items():
if func in value_keys:
continue
if getattr(item, func):
log.debug("Setting {} to {}".format(func, val))
func_attr = getattr(item, func)
if isinstance(val, tuple):
func_attr(*val)
else:
func_attr(val)
# set values to item
for value_item in value_keys:
if value_item not in kwargs:
continue
if getattr(item, value_item):
getattr(item, value_item)(kwargs[value_item])
# add to layout
layout.addRow(label, item)
@ -276,8 +291,11 @@ class CreatorWidget(QtWidgets.QDialog):
elif v["type"] == "QSpinBox":
data[k]["value"] = self.create_row(
content_layout, "QSpinBox", v["label"],
setValue=v["value"], setMinimum=0,
setValue=v["value"],
setDisplayIntegerBase=10000,
setRange=(0, 99999), setMinimum=0,
setMaximum=100000, setToolTip=tool_tip)
return data

View file

@ -6,6 +6,11 @@ from pymxs import runtime as rt
from typing import Union
import contextlib
from openpype.pipeline.context_tools import (
get_current_project_asset,
get_current_project
)
JSON_PREFIX = "JSON::"
@ -157,6 +162,112 @@ def get_multipass_setting(project_setting=None):
["multipass"])
def set_scene_resolution(width: int, height: int):
"""Set the render resolution
Args:
width(int): value of the width
height(int): value of the height
Returns:
None
"""
rt.renderWidth = width
rt.renderHeight = height
def reset_scene_resolution():
"""Apply the scene resolution from the project definition
Scene resolution can be overwritten by an asset if the asset.data contains
any information regarding scene resolution.
Returns:
None
"""
data = ["data.resolutionWidth", "data.resolutionHeight"]
project_resolution = get_current_project(fields=data)
project_resolution_data = project_resolution["data"]
asset_resolution = get_current_project_asset(fields=data)
asset_resolution_data = asset_resolution["data"]
# Set project resolution
project_width = int(project_resolution_data.get("resolutionWidth", 1920))
project_height = int(project_resolution_data.get("resolutionHeight", 1080))
width = int(asset_resolution_data.get("resolutionWidth", project_width))
height = int(asset_resolution_data.get("resolutionHeight", project_height))
set_scene_resolution(width, height)
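The fallback order in `reset_scene_resolution` (asset values override project values, with hard 1920x1080 defaults last) can be exercised without 3ds Max. A minimal sketch; `resolve_resolution` and its plain-dict arguments are hypothetical names used only for illustration:

```python
def resolve_resolution(asset_data, project_data):
    """Asset values win over project values; hard defaults apply last."""
    project_width = int(project_data.get("resolutionWidth", 1920))
    project_height = int(project_data.get("resolutionHeight", 1080))
    width = int(asset_data.get("resolutionWidth", project_width))
    height = int(asset_data.get("resolutionHeight", project_height))
    return width, height
```

For example, `resolve_resolution({}, {})` yields the hard defaults, while an asset entry always shadows the project entry.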
def get_frame_range() -> dict:
"""Get the current assets frame range and handles.
Returns:
dict: with frame start, frame end, handle start, handle end.
"""
# Set frame start/end
asset = get_current_project_asset()
frame_start = asset["data"].get("frameStart")
frame_end = asset["data"].get("frameEnd")
# Backwards compatibility
if frame_start is None or frame_end is None:
frame_start = asset["data"].get("edit_in")
frame_end = asset["data"].get("edit_out")
if frame_start is None or frame_end is None:
return
handles = asset["data"].get("handles") or 0
handle_start = asset["data"].get("handleStart")
if handle_start is None:
handle_start = handles
handle_end = asset["data"].get("handleEnd")
if handle_end is None:
handle_end = handles
return {
"frameStart": frame_start,
"frameEnd": frame_end,
"handleStart": handle_start,
"handleEnd": handle_end
}
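The handle resolution above (per-side `handleStart`/`handleEnd` with a fallback to the legacy single `handles` key) can be checked in isolation. A sketch over a plain dict with hypothetical data, mirroring the same None checks:

```python
def resolve_handles(data):
    """Per-side handle values fall back to the legacy 'handles' key, then 0."""
    handles = data.get("handles") or 0
    handle_start = data.get("handleStart")
    if handle_start is None:
        handle_start = handles
    handle_end = data.get("handleEnd")
    if handle_end is None:
        handle_end = handles
    return handle_start, handle_end
```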
def reset_frame_range(fps: bool = True):
"""Set frame range to current asset.
This is part of 3dsmax documentation:
animationRange: A System Global variable which lets you get and
set an Interval value that defines the start and end frames
of the Active Time Segment.
frameRate: A System Global variable which lets you get
and set an Integer value that defines the current
scene frame rate in frames-per-second.
"""
if fps:
data_fps = get_current_project(fields=["data.fps"])
fps_number = float(data_fps["data"]["fps"])
rt.frameRate = fps_number
frame_range = get_frame_range()
frame_start = frame_range["frameStart"] - int(frame_range["handleStart"])
frame_end = frame_range["frameEnd"] + int(frame_range["handleEnd"])
frange_cmd = f"animationRange = interval {frame_start} {frame_end}"
rt.execute(frange_cmd)
def set_context_setting():
"""Apply the project settings from the project definition
Settings can be overwritten by an asset if the asset.data contains
any information regarding those settings.
Examples of settings:
frame range
resolution
Returns:
None
"""
reset_scene_resolution()
def get_max_version():
"""
Args:

View file

@ -4,6 +4,7 @@ from qtpy import QtWidgets, QtCore
from pymxs import runtime as rt
from openpype.tools.utils import host_tools
from openpype.hosts.max.api import lib
class OpenPypeMenu(object):
@ -107,6 +108,17 @@ class OpenPypeMenu(object):
workfiles_action = QtWidgets.QAction("Work Files...", openpype_menu)
workfiles_action.triggered.connect(self.workfiles_callback)
openpype_menu.addAction(workfiles_action)
openpype_menu.addSeparator()
res_action = QtWidgets.QAction("Set Resolution", openpype_menu)
res_action.triggered.connect(self.resolution_callback)
openpype_menu.addAction(res_action)
frame_action = QtWidgets.QAction("Set Frame Range", openpype_menu)
frame_action.triggered.connect(self.frame_range_callback)
openpype_menu.addAction(frame_action)
return openpype_menu
def load_callback(self):
@ -128,3 +140,11 @@ class OpenPypeMenu(object):
def workfiles_callback(self):
"""Callback to show Workfiles tool."""
host_tools.show_workfiles(parent=self.main_widget)
def resolution_callback(self):
"""Callback to reset scene resolution"""
return lib.reset_scene_resolution()
def frame_range_callback(self):
"""Callback to reset frame range"""
return lib.reset_frame_range()

View file

@ -50,6 +50,11 @@ class MaxHost(HostBase, IWorkfileHost, ILoadHost, INewPublisher):
self._has_been_setup = True
def context_setting():
return lib.set_context_setting()
rt.callbacks.addScript(rt.Name('systemPostNew'),
context_setting)
def has_unsaved_changes(self):
# TODO: how to get it from 3dsmax?
return True

View file

@ -61,7 +61,7 @@ class CollectRender(pyblish.api.InstancePlugin):
"plugin": "3dsmax",
"frameStart": context.data['frameStart'],
"frameEnd": context.data['frameEnd'],
"version": version_int
"version": version_int,
}
self.log.info("data: {0}".format(data))
instance.data.update(data)

View file

@ -0,0 +1,19 @@
import pyblish.api
from openpype.lib import version_up
from pymxs import runtime as rt
class IncrementWorkfileVersion(pyblish.api.ContextPlugin):
"""Increment current workfile version."""
order = pyblish.api.IntegratorOrder + 0.9
label = "Increment Workfile Version"
hosts = ["max"]
families = ["workfile"]
def process(self, context):
path = context.data["currentFile"]
filepath = version_up(path)
rt.saveMaxFile(filepath)
self.log.info("Incrementing file version")

View file

@ -26,6 +26,7 @@ class CreateReview(plugin.Creator):
"alpha cut"
]
useMayaTimeline = True
panZoom = False
def __init__(self, *args, **kwargs):
super(CreateReview, self).__init__(*args, **kwargs)
@ -45,5 +46,6 @@ class CreateReview(plugin.Creator):
data["keepImages"] = self.keepImages
data["imagePlane"] = self.imagePlane
data["transparency"] = self.transparency
data["panZoom"] = self.panZoom
self.data = data

View file

@ -80,6 +80,8 @@ class CollectReview(pyblish.api.InstancePlugin):
data['review_width'] = instance.data['review_width']
data['review_height'] = instance.data['review_height']
data["isolate"] = instance.data["isolate"]
data["panZoom"] = instance.data.get("panZoom", False)
data["panel"] = instance.data["panel"]
cmds.setAttr(str(instance) + '.active', 1)
self.log.debug('data {}'.format(instance.context[i].data))
instance.context[i].data.update(data)

View file

@ -30,36 +30,6 @@ def _has_arnold():
return False
def escape_space(path):
"""Ensure path is enclosed by quotes to allow paths with spaces"""
return '"{}"'.format(path) if " " in path else path
def get_ocio_config_path(profile_folder):
"""Path to OpenPype vendorized OCIO.
Vendorized OCIO config file path is grabbed from the specific path
hierarchy specified below.
"{OPENPYPE_ROOT}/vendor/OpenColorIO-Configs/{profile_folder}/config.ocio"
Args:
profile_folder (str): Name of folder to grab config file from.
Returns:
str: Path to vendorized config file.
"""
return os.path.join(
os.environ["OPENPYPE_ROOT"],
"vendor",
"bin",
"ocioconfig",
"OpenColorIOConfigs",
profile_folder,
"config.ocio"
)
def find_paths_by_hash(texture_hash):
"""Find the texture hash key in the dictionary.

View file

@ -1,5 +1,6 @@
import os
import json
import contextlib
import clique
import capture
@ -11,6 +12,16 @@ from maya import cmds
import pymel.core as pm
@contextlib.contextmanager
def panel_camera(panel, camera):
original_camera = cmds.modelPanel(panel, query=True, camera=True)
try:
cmds.modelPanel(panel, edit=True, camera=camera)
yield
finally:
cmds.modelPanel(panel, edit=True, camera=original_camera)
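`panel_camera` above follows the standard save/set/restore context-manager pattern; the same shape, stripped of Maya and applied to a plain dict (hypothetical names), restores the original value even if the body raises:

```python
import contextlib

@contextlib.contextmanager
def temporary_value(store, key, value):
    """Swap store[key] for value, restoring the original on exit."""
    original = store[key]
    store[key] = value
    try:
        yield
    finally:
        store[key] = original

settings = {"camera": "persp"}
with temporary_value(settings, "camera", "reviewCam"):
    inside = settings["camera"]
```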
class ExtractPlayblast(publish.Extractor):
"""Extract viewport playblast.
@ -25,6 +36,16 @@ class ExtractPlayblast(publish.Extractor):
optional = True
capture_preset = {}
def _capture(self, preset):
self.log.info(
"Using preset:\n{}".format(
json.dumps(preset, sort_keys=True, indent=4)
)
)
path = capture.capture(log=self.log, **preset)
self.log.debug("playblast path {}".format(path))
def process(self, instance):
self.log.info("Extracting capture..")
@ -43,7 +64,7 @@ class ExtractPlayblast(publish.Extractor):
self.log.info("start: {}, end: {}".format(start, end))
# get cameras
camera = instance.data['review_camera']
camera = instance.data["review_camera"]
preset = lib.load_capture_preset(data=self.capture_preset)
# Grab capture presets from the project settings
@ -57,23 +78,23 @@ class ExtractPlayblast(publish.Extractor):
asset_height = asset_data.get("resolutionHeight")
review_instance_width = instance.data.get("review_width")
review_instance_height = instance.data.get("review_height")
preset['camera'] = camera
preset["camera"] = camera
# Tests if project resolution is set,
# if it is a value other than zero, that value is
# used, if not then the asset resolution is
# used
if review_instance_width and review_instance_height:
preset['width'] = review_instance_width
preset['height'] = review_instance_height
preset["width"] = review_instance_width
preset["height"] = review_instance_height
elif width_preset and height_preset:
preset['width'] = width_preset
preset['height'] = height_preset
preset["width"] = width_preset
preset["height"] = height_preset
elif asset_width and asset_height:
preset['width'] = asset_width
preset['height'] = asset_height
preset['start_frame'] = start
preset['end_frame'] = end
preset["width"] = asset_width
preset["height"] = asset_height
preset["start_frame"] = start
preset["end_frame"] = end
# Enforce persisting camera depth of field
camera_options = preset.setdefault("camera_options", {})
@ -86,8 +107,8 @@ class ExtractPlayblast(publish.Extractor):
self.log.info("Outputting images to %s" % path)
preset['filename'] = path
preset['overwrite'] = True
preset["filename"] = path
preset["overwrite"] = True
pm.refresh(f=True)
@ -114,7 +135,8 @@ class ExtractPlayblast(publish.Extractor):
# Disable Pan/Zoom.
pan_zoom = cmds.getAttr("{}.panZoomEnabled".format(preset["camera"]))
cmds.setAttr("{}.panZoomEnabled".format(preset["camera"]), False)
preset.pop("pan_zoom", None)
preset["camera_options"]["panZoomEnabled"] = instance.data["panZoom"]
# Need to explicitly enable some viewport changes so the viewport is
# refreshed ahead of playblasting.
@ -136,30 +158,39 @@ class ExtractPlayblast(publish.Extractor):
)
override_viewport_options = (
capture_presets['Viewport Options']['override_viewport_options']
capture_presets["Viewport Options"]["override_viewport_options"]
)
with lib.maintained_time():
filename = preset.get("filename", "%TEMP%")
# Force viewer to False in call to capture because we have our own
# viewer opening call to allow a signal to trigger between
# playblast and viewer
preset['viewer'] = False
# Force viewer to False in call to capture because we have our own
# viewer opening call to allow a signal to trigger between
# playblast and viewer
preset["viewer"] = False
# Update preset with current panel setting
# if override_viewport_options is turned off
if not override_viewport_options:
panel_preset = capture.parse_view(instance.data["panel"])
panel_preset.pop("camera")
preset.update(panel_preset)
# Update preset with current panel setting
# if override_viewport_options is turned off
if not override_viewport_options:
panel_preset = capture.parse_view(instance.data["panel"])
panel_preset.pop("camera")
preset.update(panel_preset)
self.log.info(
"Using preset:\n{}".format(
json.dumps(preset, sort_keys=True, indent=4)
# Need to ensure Python 2 compatibility.
# TODO: Remove once dropping Python 2.
if getattr(contextlib, "nested", None):
# Python 2: contextlib.nested only exists on Python 2.
with contextlib.nested(
lib.maintained_time(),
panel_camera(instance.data["panel"], preset["camera"])
):
self._capture(preset)
else:
# Python 3: use contextlib.ExitStack.
with contextlib.ExitStack() as stack:
stack.enter_context(lib.maintained_time())
stack.enter_context(
panel_camera(instance.data["panel"], preset["camera"])
)
)
path = capture.capture(log=self.log, **preset)
self._capture(preset)
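The branch above enters two context managers together, via `contextlib.nested` on Python 2 and `ExitStack` on Python 3. The `ExitStack` path can be demonstrated standalone with a toy logging manager (hypothetical); note the managers exit in LIFO order:

```python
import contextlib

@contextlib.contextmanager
def tracked(log, name):
    """Record enter/exit events so nesting order is observable."""
    log.append("enter " + name)
    try:
        yield
    finally:
        log.append("exit " + name)

events = []
with contextlib.ExitStack() as stack:
    stack.enter_context(tracked(events, "time"))
    stack.enter_context(tracked(events, "camera"))
    events.append("capture")
```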
# Restoring viewport options.
if viewport_defaults:
@ -169,18 +200,17 @@ class ExtractPlayblast(publish.Extractor):
cmds.setAttr("{}.panZoomEnabled".format(preset["camera"]), pan_zoom)
self.log.debug("playblast path {}".format(path))
collected_files = os.listdir(stagingdir)
patterns = [clique.PATTERNS["frames"]]
collections, remainder = clique.assemble(collected_files,
minimum_items=1,
patterns=patterns)
filename = preset.get("filename", "%TEMP%")
self.log.debug("filename {}".format(filename))
frame_collection = None
for collection in collections:
filebase = collection.format('{head}').rstrip(".")
filebase = collection.format("{head}").rstrip(".")
self.log.debug("collection head {}".format(filebase))
if filebase in filename:
frame_collection = collection
@ -204,15 +234,15 @@ class ExtractPlayblast(publish.Extractor):
collected_files = collected_files[0]
representation = {
'name': 'png',
'ext': 'png',
'files': collected_files,
"name": self.capture_preset["Codec"]["compression"],
"ext": self.capture_preset["Codec"]["compression"],
"files": collected_files,
"stagingDir": stagingdir,
"frameStart": start,
"frameEnd": end,
'fps': fps,
'preview': True,
'tags': tags,
'camera_name': camera_node_name
"fps": fps,
"preview": True,
"tags": tags,
"camera_name": camera_node_name
}
instance.data["representations"].append(representation)

View file

@ -26,28 +26,28 @@ class ExtractThumbnail(publish.Extractor):
def process(self, instance):
self.log.info("Extracting capture..")
camera = instance.data['review_camera']
camera = instance.data["review_camera"]
capture_preset = (
instance.context.data["project_settings"]['maya']['publish']['ExtractPlayblast']['capture_preset']
)
maya_setting = instance.context.data["project_settings"]["maya"]
plugin_setting = maya_setting["publish"]["ExtractPlayblast"]
capture_preset = plugin_setting["capture_preset"]
override_viewport_options = (
capture_preset['Viewport Options']['override_viewport_options']
capture_preset["Viewport Options"]["override_viewport_options"]
)
try:
preset = lib.load_capture_preset(data=capture_preset)
except KeyError as ke:
self.log.error('Error loading capture presets: {}'.format(str(ke)))
self.log.error("Error loading capture presets: {}".format(str(ke)))
preset = {}
self.log.info('Using viewport preset: {}'.format(preset))
self.log.info("Using viewport preset: {}".format(preset))
# preset["off_screen"] = False
preset['camera'] = camera
preset['start_frame'] = instance.data["frameStart"]
preset['end_frame'] = instance.data["frameStart"]
preset['camera_options'] = {
preset["camera"] = camera
preset["start_frame"] = instance.data["frameStart"]
preset["end_frame"] = instance.data["frameStart"]
preset["camera_options"] = {
"displayGateMask": False,
"displayResolution": False,
"displayFilmGate": False,
@ -74,14 +74,14 @@ class ExtractThumbnail(publish.Extractor):
# used, if not then the asset resolution is
# used
if review_instance_width and review_instance_height:
preset['width'] = review_instance_width
preset['height'] = review_instance_height
preset["width"] = review_instance_width
preset["height"] = review_instance_height
elif width_preset and height_preset:
preset['width'] = width_preset
preset['height'] = height_preset
preset["width"] = width_preset
preset["height"] = height_preset
elif asset_width and asset_height:
preset['width'] = asset_width
preset['height'] = asset_height
preset["width"] = asset_width
preset["height"] = asset_height
# Create temp directory for thumbnail
# - this is to avoid "override" of source file
@ -96,8 +96,8 @@ class ExtractThumbnail(publish.Extractor):
self.log.info("Outputting images to %s" % path)
preset['filename'] = path
preset['overwrite'] = True
preset["filename"] = path
preset["overwrite"] = True
pm.refresh(f=True)
@ -123,14 +123,14 @@ class ExtractThumbnail(publish.Extractor):
preset["viewport_options"] = {"imagePlane": image_plane}
# Disable Pan/Zoom.
pan_zoom = cmds.getAttr("{}.panZoomEnabled".format(preset["camera"]))
cmds.setAttr("{}.panZoomEnabled".format(preset["camera"]), False)
preset.pop("pan_zoom", None)
preset["camera_options"]["panZoomEnabled"] = instance.data["panZoom"]
with lib.maintained_time():
# Force viewer to False in call to capture because we have our own
# viewer opening call to allow a signal to trigger between
# playblast and viewer
preset['viewer'] = False
preset["viewer"] = False
# Update preset with current panel setting
# if override_viewport_options is turned off
@ -145,17 +145,15 @@ class ExtractThumbnail(publish.Extractor):
_, thumbnail = os.path.split(playblast)
cmds.setAttr("{}.panZoomEnabled".format(preset["camera"]), pan_zoom)
self.log.info("file list {}".format(thumbnail))
if "representations" not in instance.data:
instance.data["representations"] = []
representation = {
'name': 'thumbnail',
'ext': 'jpg',
'files': thumbnail,
"name": "thumbnail",
"ext": "jpg",
"files": thumbnail,
"stagingDir": dst_staging,
"thumbnail": True
}

View file

@ -80,21 +80,7 @@ def get_all_asset_nodes():
Returns:
list: list of dictionaries
"""
host = registered_host()
nodes = []
for container in host.ls():
# We are not interested in looks but assets!
if container["loader"] == "LookLoader":
continue
# Gather all information
container_name = container["objectName"]
nodes += lib.get_container_members(container_name)
nodes = list(set(nodes))
return nodes
return cmds.ls(dag=True, noIntermediate=True, long=True)
def create_asset_id_hash(nodes):

View file

@ -54,22 +54,19 @@ class LoadBackdropNodes(load.LoaderPlugin):
version = context['version']
version_data = version.get("data", {})
vname = version.get("name", None)
first = version_data.get("frameStart", None)
last = version_data.get("frameEnd", None)
namespace = namespace or context['asset']['name']
colorspace = version_data.get("colorspace", None)
object_name = "{}_{}".format(name, namespace)
# prepare data for imprinting
# add additional metadata from the version to imprint to Avalon knob
add_keys = ["frameStart", "frameEnd", "handleStart", "handleEnd",
"source", "author", "fps"]
add_keys = ["source", "author", "fps"]
data_imprint = {"frameStart": first,
"frameEnd": last,
"version": vname,
"colorspaceInput": colorspace,
"objectName": object_name}
data_imprint = {
"version": vname,
"colorspaceInput": colorspace,
"objectName": object_name
}
for k in add_keys:
data_imprint.update({k: version_data[k]})
@ -204,18 +201,13 @@ class LoadBackdropNodes(load.LoaderPlugin):
name = container['name']
version_data = version_doc.get("data", {})
vname = version_doc.get("name", None)
first = version_data.get("frameStart", None)
last = version_data.get("frameEnd", None)
namespace = container['namespace']
colorspace = version_data.get("colorspace", None)
object_name = "{}_{}".format(name, namespace)
add_keys = ["frameStart", "frameEnd", "handleStart", "handleEnd",
"source", "author", "fps"]
add_keys = ["source", "author", "fps"]
data_imprint = {"representation": str(representation["_id"]),
"frameStart": first,
"frameEnd": last,
"version": vname,
"colorspaceInput": colorspace,
"objectName": object_name}

View file

@ -51,38 +51,10 @@ class CollectBackdrops(pyblish.api.InstancePlugin):
instance.data["label"] = "{0} ({1} nodes)".format(
bckn.name(), len(instance.data["transientData"]["childNodes"]))
instance.data["families"].append(instance.data["family"])
# Get frame range
handle_start = instance.context.data["handleStart"]
handle_end = instance.context.data["handleEnd"]
first_frame = int(nuke.root()["first_frame"].getValue())
last_frame = int(nuke.root()["last_frame"].getValue())
# get version
version = instance.context.data.get('version')
if not version:
raise RuntimeError("Script name has no version in the name.")
if version:
instance.data['version'] = version
instance.data['version'] = version
# Add version data to instance
version_data = {
"handles": handle_start,
"handleStart": handle_start,
"handleEnd": handle_end,
"frameStart": first_frame + handle_start,
"frameEnd": last_frame - handle_end,
"version": int(version),
"families": [instance.data["family"]] + instance.data["families"],
"subset": instance.data["subset"],
"fps": instance.context.data["fps"]
}
instance.data.update({
"versionData": version_data,
"frameStart": first_frame,
"frameEnd": last_frame
})
self.log.info("Backdrop instance collected: `{}`".format(instance))

View file

@ -25,7 +25,7 @@ class ExtractReviewData(publish.Extractor):
# review can be removed since `ProcessSubmittedJobOnFarm` will create
# reviewable representation if needed
if (
"render.farm" in instance.data["families"]
instance.data.get("farm")
and "review" in instance.data["families"]
):
instance.data["families"].remove("review")

View file

@ -49,7 +49,12 @@ class ExtractReviewDataLut(publish.Extractor):
exporter.stagingDir, exporter.file).replace("\\", "/")
instance.data["representations"] += data["representations"]
if "render.farm" in families:
# review can be removed since `ProcessSubmittedJobOnFarm` will create
# reviewable representation if needed
if (
instance.data.get("farm")
and "review" in instance.data["families"]
):
instance.data["families"].remove("review")
self.log.debug(

View file

@ -105,10 +105,7 @@ class ExtractReviewDataMov(publish.Extractor):
self, instance, o_name, o_data["extension"],
multiple_presets)
if (
"render.farm" in families or
"prerender.farm" in families
):
if instance.data.get("farm"):
if "review" in instance.data["families"]:
instance.data["families"].remove("review")

View file

@ -31,7 +31,7 @@ class ExtractThumbnail(publish.Extractor):
def process(self, instance):
if "render.farm" in instance.data["families"]:
if instance.data.get("farm"):
return
with napi.maintained_selection():

View file

@ -66,11 +66,11 @@ class MainThreadItem:
return self._result
def execute(self):
"""Execute callback and store it's result.
"""Execute callback and store its result.
Method must be called from main thread. Item is marked as `done`
when callback execution finished. Store output of callback of exception
information when callback raise one.
information when callback raises one.
"""
log.debug("Executing process in main thread")
if self.done:

View file

@ -389,11 +389,11 @@ class MainThreadItem:
self.kwargs = kwargs
def execute(self):
"""Execute callback and store it's result.
"""Execute callback and store its result.
Method must be called from main thread. Item is marked as `done`
when callback execution finished. Store output of callback of exception
information when callback raise one.
information when callback raises one.
"""
log.debug("Executing process in main thread")
if self.done:

View file

@ -55,7 +55,7 @@ class TVPaintLegacyConverted(SubsetConvertorPlugin):
self._convert_render_layers(
to_convert["renderLayer"], current_instances)
self._convert_render_passes(
to_convert["renderpass"], current_instances)
to_convert["renderPass"], current_instances)
self._convert_render_scenes(
to_convert["renderScene"], current_instances)
self._convert_workfiles(
@ -116,7 +116,7 @@ class TVPaintLegacyConverted(SubsetConvertorPlugin):
render_layers_by_group_id = {}
for instance in current_instances:
if instance.get("creator_identifier") == "render.layer":
group_id = instance["creator_identifier"]["group_id"]
group_id = instance["creator_attributes"]["group_id"]
render_layers_by_group_id[group_id] = instance
for render_pass in render_passes:

View file

@ -415,11 +415,11 @@ class CreateRenderPass(TVPaintCreator):
.get("creator_attributes", {})
.get("render_layer_instance_id")
)
render_layer_info = render_layers.get(render_layer_instance_id)
render_layer_info = render_layers.get(render_layer_instance_id, {})
self.update_instance_labels(
instance_data,
render_layer_info["variant"],
render_layer_info["template_data"]
render_layer_info.get("variant"),
render_layer_info.get("template_data")
)
instance = CreatedInstance.from_existing(instance_data, self)
self._add_instance_to_context(instance)
@ -607,11 +607,11 @@ class CreateRenderPass(TVPaintCreator):
current_instances = self.host.list_instances()
render_layers = [
{
"value": instance["instance_id"],
"label": instance["subset"]
"value": inst["instance_id"],
"label": inst["subset"]
}
for instance in current_instances
if instance["creator_identifier"] == CreateRenderlayer.identifier
for inst in current_instances
if inst.get("creator_identifier") == CreateRenderlayer.identifier
]
if not render_layers:
render_layers.append({"value": None, "label": "N/A"})
@ -697,6 +697,7 @@ class TVPaintAutoDetectRenderCreator(TVPaintCreator):
["create"]
["auto_detect_render"]
)
self.enabled = plugin_settings.get("enabled", False)
self.allow_group_rename = plugin_settings["allow_group_rename"]
self.group_name_template = plugin_settings["group_name_template"]
self.group_idx_offset = plugin_settings["group_idx_offset"]

View file

@ -22,9 +22,11 @@ class CollectOutputFrameRange(pyblish.api.InstancePlugin):
context = instance.context
frame_start = asset_doc["data"]["frameStart"]
fps = asset_doc["data"]["fps"]
frame_end = frame_start + (
context.data["sceneMarkOut"] - context.data["sceneMarkIn"]
)
instance.data["fps"] = fps
instance.data["frameStart"] = frame_start
instance.data["frameEnd"] = frame_end
self.log.info(

View file

@ -1,5 +1,8 @@
import pyblish.api
from openpype.pipeline import PublishXmlValidationError
from openpype.pipeline import (
PublishXmlValidationError,
OptionalPyblishPluginMixin,
)
from openpype.hosts.tvpaint.api.pipeline import (
list_instances,
write_instances,
@ -31,7 +34,10 @@ class FixAssetNames(pyblish.api.Action):
write_instances(new_instance_items)
class ValidateAssetNames(pyblish.api.ContextPlugin):
class ValidateAssetName(
OptionalPyblishPluginMixin,
pyblish.api.ContextPlugin
):
"""Validate assset name present on instance.
Asset name on instance should be the same as context's.
@ -43,6 +49,8 @@ class ValidateAssetNames(pyblish.api.ContextPlugin):
actions = [FixAssetNames]
def process(self, context):
if not self.is_active(context.data):
return
context_asset_name = context.data["asset"]
for instance in context:
asset_name = instance.data.get("asset")

View file

@ -11,7 +11,7 @@ class ValidateLayersVisiblity(pyblish.api.InstancePlugin):
families = ["review", "render"]
def process(self, instance):
layers = instance.data["layers"]
layers = instance.data.get("layers")
# Instance may have empty layers
# - it is not the job of this validator to check that
if not layers:

View file

@ -1,7 +1,10 @@
import json
import pyblish.api
from openpype.pipeline import PublishXmlValidationError
from openpype.pipeline import (
PublishXmlValidationError,
OptionalPyblishPluginMixin,
)
from openpype.hosts.tvpaint.api.lib import execute_george
@ -23,7 +26,10 @@ class ValidateMarksRepair(pyblish.api.Action):
)
class ValidateMarks(pyblish.api.ContextPlugin):
class ValidateMarks(
OptionalPyblishPluginMixin,
pyblish.api.ContextPlugin
):
"""Validate mark in and out are enabled and it's duration.
Mark In/Out does not have to match frameStart and frameEnd but duration is
@ -59,6 +65,9 @@ class ValidateMarks(pyblish.api.ContextPlugin):
}
def process(self, context):
if not self.is_active(context.data):
return
current_data = {
"markIn": context.data["sceneMarkIn"],
"markInState": context.data["sceneMarkInState"],

View file

@ -1,11 +1,17 @@
import json
import pyblish.api
from openpype.pipeline import PublishXmlValidationError
from openpype.pipeline import (
PublishXmlValidationError,
OptionalPyblishPluginMixin,
)
# TODO @iLliCiTiT add fix action for fps
class ValidateProjectSettings(pyblish.api.ContextPlugin):
class ValidateProjectSettings(
OptionalPyblishPluginMixin,
pyblish.api.ContextPlugin
):
"""Validate scene settings against database."""
label = "Validate Scene Settings"
@ -13,6 +19,9 @@ class ValidateProjectSettings(pyblish.api.ContextPlugin):
optional = True
def process(self, context):
if not self.is_active(context.data):
return
expected_data = context.data["assetEntity"]["data"]
scene_data = {
"fps": context.data.get("sceneFps"),

View file

@ -1,5 +1,8 @@
import pyblish.api
from openpype.pipeline import PublishXmlValidationError
from openpype.pipeline import (
PublishXmlValidationError,
OptionalPyblishPluginMixin,
)
from openpype.hosts.tvpaint.api.lib import execute_george
@ -14,7 +17,10 @@ class RepairStartFrame(pyblish.api.Action):
execute_george("tv_startframe 0")
class ValidateStartFrame(pyblish.api.ContextPlugin):
class ValidateStartFrame(
OptionalPyblishPluginMixin,
pyblish.api.ContextPlugin
):
"""Validate start frame being at frame 0."""
label = "Validate Start Frame"
@ -24,6 +30,9 @@ class ValidateStartFrame(pyblish.api.ContextPlugin):
optional = True
def process(self, context):
if not self.is_active(context.data):
return
start_frame = execute_george("tv_startframe")
if start_frame == 0:
return

View file

@ -5,29 +5,38 @@ import re
import subprocess
from distutils import dir_util
from pathlib import Path
from typing import List
from typing import List, Union
import openpype.hosts.unreal.lib as ue_lib
from qtpy import QtCore
def parse_comp_progress(line: str, progress_signal: QtCore.Signal(int)) -> int:
match = re.search('\[[1-9]+/[0-9]+\]', line)
def parse_comp_progress(line: str, progress_signal: QtCore.Signal(int)):
match = re.search(r"\[[1-9]+/[0-9]+]", line)
if match is not None:
split: list[str] = match.group().split('/')
split: list[str] = match.group().split("/")
curr: float = float(split[0][1:])
total: float = float(split[1][:-1])
progress_signal.emit(int((curr / total) * 100.0))
def parse_prj_progress(line: str, progress_signal: QtCore.Signal(int)) -> int:
match = re.search('@progress', line)
def parse_prj_progress(line: str, progress_signal: QtCore.Signal(int)):
match = re.search("@progress", line)
if match is not None:
percent_match = re.search('\d{1,3}', line)
percent_match = re.search(r"\d{1,3}", line)
progress_signal.emit(int(percent_match.group()))
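The `[current/total]` parsing above can be checked without Qt; a standalone sketch that returns the percentage instead of emitting a signal. Note the original character class `[1-9]` means a current count containing a zero (e.g. `[10/20]`) is skipped, which this sketch preserves:

```python
import re

def comp_progress_percent(line):
    """Return an integer percentage from a '[current/total]' token, else None."""
    match = re.search(r"\[[1-9]+/[0-9]+]", line)
    if match is None:
        return None
    current, total = match.group()[1:-1].split("/")
    return int(float(current) / float(total) * 100.0)
```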
def retrieve_exit_code(line: str):
match = re.search(r"ExitCode=\d+", line)
if match is not None:
split: list[str] = match.group().split("=")
return int(split[1])
return None
class UEProjectGenerationWorker(QtCore.QObject):
finished = QtCore.Signal(str)
failed = QtCore.Signal(str)
@ -77,16 +86,19 @@ class UEProjectGenerationWorker(QtCore.QObject):
if self.dev_mode:
stage_count = 4
self.stage_begin.emit(f'Generating a new UE project ... 1 out of '
f'{stage_count}')
self.stage_begin.emit(
("Generating a new UE project ... 1 out of "
f"{stage_count}"))
commandlet_cmd = [f'{ue_editor_exe.as_posix()}',
f'{cmdlet_project.as_posix()}',
f'-run=OPGenerateProject',
f'{project_file.resolve().as_posix()}']
commandlet_cmd = [
f"{ue_editor_exe.as_posix()}",
f"{cmdlet_project.as_posix()}",
"-run=OPGenerateProject",
f"{project_file.resolve().as_posix()}",
]
if self.dev_mode:
commandlet_cmd.append('-GenerateCode')
commandlet_cmd.append("-GenerateCode")
gen_process = subprocess.Popen(commandlet_cmd,
stdout=subprocess.PIPE,
@ -94,24 +106,27 @@ class UEProjectGenerationWorker(QtCore.QObject):
for line in gen_process.stdout:
decoded_line = line.decode(errors="replace")
print(decoded_line, end='')
print(decoded_line, end="")
self.log.emit(decoded_line)
gen_process.stdout.close()
return_code = gen_process.wait()
if return_code and return_code != 0:
msg = 'Failed to generate ' + self.project_name \
+ f' project! Exited with return code {return_code}'
msg = (
f"Failed to generate {self.project_name} "
f"project! Exited with return code {return_code}"
)
self.failed.emit(msg, return_code)
raise RuntimeError(msg)
print("--- Project has been generated successfully.")
self.stage_begin.emit(f'Writing the Engine ID of the build UE ... 1'
f' out of {stage_count}')
self.stage_begin.emit(
("Writing the Engine ID of the build UE ... 1"
f" out of {stage_count}"))
if not project_file.is_file():
msg = "Failed to write the Engine ID into .uproject file! Can " \
"not read!"
msg = ("Failed to write the Engine ID into .uproject file! Can "
"not read!")
self.failed.emit(msg)
raise RuntimeError(msg)
@ -125,13 +140,14 @@ class UEProjectGenerationWorker(QtCore.QObject):
pf.seek(0)
json.dump(pf_json, pf, indent=4)
pf.truncate()
print(f'--- Engine ID has been written into the project file')
print("--- Engine ID has been written into the project file")
self.progress.emit(90)
if self.dev_mode:
# 2nd stage
self.stage_begin.emit(f'Generating project files ... 2 out of '
f'{stage_count}')
self.stage_begin.emit(
("Generating project files ... 2 out of "
f"{stage_count}"))
self.progress.emit(0)
ubt_path = ue_lib.get_path_to_ubt(self.engine_path,
@ -154,8 +170,8 @@ class UEProjectGenerationWorker(QtCore.QObject):
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
for line in gen_proc.stdout:
decoded_line: str = line.decode(errors='replace')
print(decoded_line, end='')
decoded_line: str = line.decode(errors="replace")
print(decoded_line, end="")
self.log.emit(decoded_line)
parse_prj_progress(decoded_line, self.progress)
@ -163,13 +179,13 @@ class UEProjectGenerationWorker(QtCore.QObject):
return_code = gen_proc.wait()
if return_code and return_code != 0:
msg = 'Failed to generate project files! ' \
f'Exited with return code {return_code}'
msg = ("Failed to generate project files! "
f"Exited with return code {return_code}")
self.failed.emit(msg, return_code)
raise RuntimeError(msg)
self.stage_begin.emit(f'Building the project ... 3 out of '
f'{stage_count}')
self.stage_begin.emit(
f"Building the project ... 3 out of {stage_count}")
self.progress.emit(0)
# 3rd stage
build_prj_cmd = [ubt_path.as_posix(),
@ -177,16 +193,16 @@ class UEProjectGenerationWorker(QtCore.QObject):
arch,
"Development",
"-TargetType=Editor",
f'-Project={project_file}',
f'{project_file}',
f"-Project={project_file}",
f"{project_file}",
"-IgnoreJunk"]
build_prj_proc = subprocess.Popen(build_prj_cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
for line in build_prj_proc.stdout:
decoded_line: str = line.decode(errors='replace')
print(decoded_line, end='')
decoded_line: str = line.decode(errors="replace")
print(decoded_line, end="")
self.log.emit(decoded_line)
parse_comp_progress(decoded_line, self.progress)
@ -194,16 +210,17 @@ class UEProjectGenerationWorker(QtCore.QObject):
return_code = build_prj_proc.wait()
if return_code and return_code != 0:
msg = 'Failed to build project! ' \
f'Exited with return code {return_code}'
msg = ("Failed to build project! "
f"Exited with return code {return_code}")
self.failed.emit(msg, return_code)
raise RuntimeError(msg)
# ensure we have PySide2 installed in engine
self.progress.emit(0)
self.stage_begin.emit(f'Checking PySide2 installation... {stage_count}'
f' out of {stage_count}')
self.stage_begin.emit(
(f"Checking PySide2 installation... {stage_count}"
f" out of {stage_count}"))
python_path = None
if platform.system().lower() == "windows":
python_path = self.engine_path / ("Engine/Binaries/ThirdParty/"
@ -225,9 +242,30 @@ class UEProjectGenerationWorker(QtCore.QObject):
msg = f"Unreal Python not found at {python_path}"
self.failed.emit(msg, 1)
raise RuntimeError(msg)
subprocess.check_call(
[python_path.as_posix(), "-m", "pip", "install", "pyside2"]
)
pyside_cmd = [python_path.as_posix(),
"-m",
"pip",
"install",
"pyside2"]
pyside_install = subprocess.Popen(pyside_cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
for line in pyside_install.stdout:
decoded_line: str = line.decode(errors="replace")
print(decoded_line, end="")
self.log.emit(decoded_line)
pyside_install.stdout.close()
return_code = pyside_install.wait()
if return_code and return_code != 0:
msg = ("Failed to create the project! "
"The installation of PySide2 has failed!")
self.failed.emit(msg, return_code)
raise RuntimeError(msg)
self.progress.emit(100)
self.finished.emit("Project successfully built!")
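The generation, build and PySide2 stages all share one pattern: stream the child process's stdout line by line, forward each decoded line, then check the return code. A minimal sketch of that pattern, using a throwaway Python child process instead of Unreal's tools:

```python
import subprocess
import sys


def run_streamed(cmd, on_line):
    """Run cmd, forward each decoded stdout line to on_line, return exit code."""
    proc = subprocess.Popen(
        cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    for raw in proc.stdout:
        on_line(raw.decode(errors="replace"))
    proc.stdout.close()
    return proc.wait()


lines = []
code = run_streamed(
    [sys.executable, "-c", "print('hello'); print('world')"],
    lines.append)
if code != 0:
    raise RuntimeError("Command failed with return code {}".format(code))
```

Decoding with `errors="replace"` mirrors the worker above: build tools can emit bytes that are not valid UTF-8, and a decode crash mid-stream would be worse than a mangled log line.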
@ -266,26 +304,30 @@ class UEPluginInstallWorker(QtCore.QObject):
# in order to successfully build the plugin,
# it must be built outside the Engine directory and then moved
build_plugin_cmd: List[str] = [f'{uat_path.as_posix()}',
'BuildPlugin',
f'-Plugin={uplugin_path.as_posix()}',
f'-Package={temp_dir.as_posix()}']
build_plugin_cmd: List[str] = [f"{uat_path.as_posix()}",
"BuildPlugin",
f"-Plugin={uplugin_path.as_posix()}",
f"-Package={temp_dir.as_posix()}"]
build_proc = subprocess.Popen(build_plugin_cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
return_code: Union[None, int] = None
for line in build_proc.stdout:
decoded_line: str = line.decode(errors='replace')
print(decoded_line, end='')
decoded_line: str = line.decode(errors="replace")
print(decoded_line, end="")
self.log.emit(decoded_line)
if return_code is None:
return_code = retrieve_exit_code(decoded_line)
parse_comp_progress(decoded_line, self.progress)
build_proc.stdout.close()
return_code = build_proc.wait()
build_proc.wait()
if return_code and return_code != 0:
msg = 'Failed to build plugin' \
f' project! Exited with return code {return_code}'
msg = ("Failed to build plugin"
f" project! Exited with return code {return_code}")
dir_util.remove_tree(temp_dir.as_posix())
self.failed.emit(msg, return_code)
raise RuntimeError(msg)
@ -889,7 +889,8 @@ class ApplicationLaunchContext:
self.modules_manager = ModulesManager()
# Logger
logger_name = "{}-{}".format(self.__class__.__name__, self.app_name)
logger_name = "{}-{}".format(self.__class__.__name__,
self.application.full_name)
self.log = Logger.get_logger(logger_name)
self.executable = executable
@ -1246,7 +1247,7 @@ class ApplicationLaunchContext:
args_len_str = " ({})".format(len(args))
self.log.info(
"Launching \"{}\" with args{}: {}".format(
self.app_name, args_len_str, args
self.application.full_name, args_len_str, args
)
)
self.launch_args = args
@ -1271,7 +1272,9 @@ class ApplicationLaunchContext:
exc_info=True
)
self.log.debug("Launch of {} finished.".format(self.app_name))
self.log.debug("Launch of {} finished.".format(
self.application.full_name
))
return self.process
@ -1508,8 +1511,8 @@ def prepare_app_environments(
if key in source_env:
source_env[key] = value
# `added_env_keys` has debug purpose
added_env_keys = {app.group.name, app.name}
# `app_and_tool_labels` has debug purpose
app_and_tool_labels = [app.full_name]
# Environments for application
environments = [
app.group.environment,
@ -1532,15 +1535,14 @@ def prepare_app_environments(
for group_name in sorted(groups_by_name.keys()):
group = groups_by_name[group_name]
environments.append(group.environment)
added_env_keys.add(group_name)
for tool_name in sorted(tool_by_group_name[group_name].keys()):
tool = tool_by_group_name[group_name][tool_name]
environments.append(tool.environment)
added_env_keys.add(tool.name)
app_and_tool_labels.append(tool.full_name)
log.debug(
"Will add environments for apps and tools: {}".format(
", ".join(added_env_keys)
", ".join(app_and_tool_labels)
)
)
@ -104,6 +104,10 @@ def run_subprocess(*args, **kwargs):
if (
platform.system().lower() == "windows"
and "creationflags" not in kwargs
# shell=True already tries to hide the console window
# and passing these creationflags then shows the window again
# so we avoid it for shell=True cases
and kwargs.get("shell") is not True
):
kwargs["creationflags"] = (
subprocess.CREATE_NEW_PROCESS_GROUP
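The guard above matters because the window-hiding creation flags and `shell=True` interact badly on Windows: the shell already hides the console, and adding the flags shows it again. A sketch of the flag selection (the helper name is illustrative, not OpenPype API):

```python
import platform
import subprocess


def hidden_popen_kwargs(shell=False):
    """Extra Popen kwargs that suppress a console window on Windows.

    With shell=True the window is already hidden and the flags would
    show it again, so they are skipped in that case.
    """
    kwargs = {}
    if platform.system().lower() == "windows" and not shell:
        kwargs["creationflags"] = (
            subprocess.CREATE_NEW_PROCESS_GROUP
            # These attributes only exist on Windows builds of Python,
            # hence the defensive getattr with a 0 fallback.
            | getattr(subprocess, "DETACHED_PROCESS", 0)
            | getattr(subprocess, "CREATE_NO_WINDOW", 0)
        )
    return kwargs
```

On non-Windows platforms the helper returns an empty dict, so the call site can always splat the result into `Popen(**kwargs)`.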
@ -13,6 +13,16 @@ else:
from shutil import copyfile
class DuplicateDestinationError(ValueError):
"""Error raised when transfer destination already exists in queue.
The error is only raised if `allow_queue_replacements` is False on the
FileTransaction instance and the file added to the transfer queue has a
different src than the one already queued for the same destination.
"""
class FileTransaction(object):
"""File transaction with rollback options.
@ -44,7 +54,7 @@ class FileTransaction(object):
MODE_COPY = 0
MODE_HARDLINK = 1
def __init__(self, log=None):
def __init__(self, log=None, allow_queue_replacements=False):
if log is None:
log = logging.getLogger("FileTransaction")
@ -60,6 +70,8 @@ class FileTransaction(object):
# Backup file location mapping to original locations
self._backup_to_original = {}
self._allow_queue_replacements = allow_queue_replacements
def add(self, src, dst, mode=MODE_COPY):
"""Add a new file to transfer queue.
@ -82,6 +94,14 @@ class FileTransaction(object):
src, dst))
return
else:
if not self._allow_queue_replacements:
raise DuplicateDestinationError(
"Transfer to destination is already in queue: "
"{} -> {}. It's not allowed to be replaced by "
"a new transfer from {}".format(
queued_src, dst, src
))
self.log.warning("File transfer in queue replaced..")
self.log.debug(
"Removed from queue: {} -> {} replaced by {} -> {}".format(
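The new guard raises `DuplicateDestinationError` only when a second, different source is queued for a destination that is already taken. A toy model of that queue semantics (not the real `FileTransaction` API) could look like:

```python
class DuplicateDestinationError(ValueError):
    """Raised when a destination is already queued from a different src."""


class TransferQueue:
    """Toy model of the duplicate-destination guard in FileTransaction."""

    def __init__(self, allow_queue_replacements=False):
        self._transfers = {}  # dst -> src
        self._allow_queue_replacements = allow_queue_replacements

    def add(self, src, dst):
        queued_src = self._transfers.get(dst)
        if queued_src is not None:
            if queued_src == src:
                # Same transfer queued twice is harmless, nothing to do.
                return
            if not self._allow_queue_replacements:
                raise DuplicateDestinationError(
                    "Transfer to destination is already in queue: "
                    "{} -> {}".format(queued_src, dst))
        self._transfers[dst] = src
```

With replacements disallowed (the default), the second different-source `add` raises; with `allow_queue_replacements=True` the newer source silently wins, matching the warning path above.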
@ -224,18 +224,26 @@ def find_tool_in_custom_paths(paths, tool, validation_func=None):
def _check_args_returncode(args):
try:
# Python 2 compatibility where DEVNULL is not available
kwargs = {}
if platform.system().lower() == "windows":
kwargs["creationflags"] = (
subprocess.CREATE_NEW_PROCESS_GROUP
| getattr(subprocess, "DETACHED_PROCESS", 0)
| getattr(subprocess, "CREATE_NO_WINDOW", 0)
)
if hasattr(subprocess, "DEVNULL"):
proc = subprocess.Popen(
args,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
**kwargs
)
proc.wait()
else:
with open(os.devnull, "w") as devnull:
proc = subprocess.Popen(
args, stdout=devnull, stderr=devnull,
args, stdout=devnull, stderr=devnull, **kwargs
)
proc.wait()
@ -3,21 +3,60 @@
"""
import pyblish.api
from openpype.lib import TextDef
from openpype.pipeline.publish import OpenPypePyblishPluginMixin
class CollectDeadlinePools(pyblish.api.InstancePlugin):
class CollectDeadlinePools(pyblish.api.InstancePlugin,
OpenPypePyblishPluginMixin):
"""Collect pools from instance if present, from Setting otherwise."""
order = pyblish.api.CollectorOrder + 0.420
label = "Collect Deadline Pools"
families = ["rendering", "render.farm", "renderFarm", "renderlayer"]
families = ["rendering",
"render.farm",
"renderFarm",
"renderlayer",
"maxrender"]
primary_pool = None
secondary_pool = None
@classmethod
def apply_settings(cls, project_settings, system_settings):
# deadline.publish.CollectDeadlinePools
settings = project_settings["deadline"]["publish"]["CollectDeadlinePools"] # noqa
cls.primary_pool = settings.get("primary_pool", None)
cls.secondary_pool = settings.get("secondary_pool", None)
def process(self, instance):
attr_values = self.get_attr_values_from_data(instance.data)
if not instance.data.get("primaryPool"):
instance.data["primaryPool"] = self.primary_pool or "none"
instance.data["primaryPool"] = (
attr_values.get("primaryPool") or self.primary_pool or "none"
)
if not instance.data.get("secondaryPool"):
instance.data["secondaryPool"] = self.secondary_pool or "none"
instance.data["secondaryPool"] = (
attr_values.get("secondaryPool") or self.secondary_pool or "none" # noqa
)
@classmethod
def get_attribute_defs(cls):
# TODO: Preferably this would be an enum for the user
# but the Deadline server URL can be dynamic and
# can be set per render instance. Since get_attribute_defs
# can't be dynamic, an EnumDef unfortunately isn't possible (yet?)
# pool_names = self.deadline_module.get_deadline_pools(deadline_url,
# self.log)
# secondary_pool_names = ["-"] + pool_names
return [
TextDef("primaryPool",
label="Primary Pool",
default=cls.primary_pool),
TextDef("secondaryPool",
label="Secondary Pool",
default=cls.secondary_pool)
]
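Each pool is now resolved from, in order: a value already on the instance, the publisher attribute, the studio setting, and finally the literal `"none"` Deadline accepts. That precedence chain can be sketched as:

```python
def resolve_pool(existing, attr_value, setting, fallback="none"):
    """Pick the first non-empty pool name.

    Mirrors the precedence in the collector above: a pool already set on
    the instance wins, then the publisher attribute, then the studio
    setting, then the literal "none" Deadline expects.
    """
    if existing:
        return existing
    return attr_value or setting or fallback
```

Using truthiness rather than `is None` means empty strings also fall through to the next candidate, which is the behavior of the `or` chain in the plugin.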
@ -106,7 +106,7 @@ class CelactionSubmitDeadline(pyblish.api.InstancePlugin):
# define chunk and priority
chunk_size = instance.context.data.get("chunk")
if chunk_size == 0:
if not chunk_size:
chunk_size = self.deadline_chunk_size
# search for %02d pattern in name, and padding number
@ -3,7 +3,15 @@ import getpass
import copy
import attr
from openpype.pipeline import legacy_io
from openpype.lib import (
TextDef,
BoolDef,
NumberDef,
)
from openpype.pipeline import (
legacy_io,
OpenPypePyblishPluginMixin
)
from openpype.settings import get_project_settings
from openpype.hosts.max.api.lib import (
get_current_renderer,
@ -22,7 +30,8 @@ class MaxPluginInfo(object):
IgnoreInputs = attr.ib(default=True)
class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline,
OpenPypePyblishPluginMixin):
label = "Submit Render to Deadline"
hosts = ["max"]
@ -31,14 +40,22 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
use_published = True
priority = 50
tile_priority = 50
chunk_size = 1
jobInfo = {}
pluginInfo = {}
group = None
deadline_pool = None
deadline_pool_secondary = None
framePerTask = 1
@classmethod
def apply_settings(cls, project_settings, system_settings):
settings = project_settings["deadline"]["publish"]["MaxSubmitDeadline"] # noqa
# Take some defaults from settings
cls.use_published = settings.get("use_published",
cls.use_published)
cls.priority = settings.get("priority",
cls.priority)
cls.chunk_size = settings.get("chunk_size", cls.chunk_size)
cls.group = settings.get("group", cls.group)
def get_job_info(self):
job_info = DeadlineJobInfo(Plugin="3dsmax")
@ -49,11 +66,11 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
instance = self._instance
context = instance.context
# Always use the original work file name for the Job name even when
# rendering is done from the published Work File. The original work
# file name is clearer because it can also have subversion strings,
# etc. which are stripped for the published file.
src_filepath = context.data["currentFile"]
src_filename = os.path.basename(src_filepath)
@ -71,13 +88,13 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
job_info.Pool = instance.data.get("primaryPool")
job_info.SecondaryPool = instance.data.get("secondaryPool")
job_info.ChunkSize = instance.data.get("chunkSize", 1)
job_info.Comment = context.data.get("comment")
job_info.Priority = instance.data.get("priority", self.priority)
job_info.FramesPerTask = instance.data.get("framesPerTask", 1)
if self.group:
job_info.Group = self.group
attr_values = self.get_attr_values_from_data(instance.data)
job_info.ChunkSize = attr_values.get("chunkSize", 1)
job_info.Comment = context.data.get("comment")
job_info.Priority = attr_values.get("priority", self.priority)
job_info.Group = attr_values.get("group", self.group)
# Add options from RenderGlobals
render_globals = instance.data.get("renderGlobals", {})
@ -216,3 +233,32 @@ class MaxSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
plugin_info.update(plugin_data)
return job_info, plugin_info
@classmethod
def get_attribute_defs(cls):
defs = super(MaxSubmitDeadline, cls).get_attribute_defs()
defs.extend([
BoolDef("use_published",
default=cls.use_published,
label="Use Published Scene"),
NumberDef("priority",
minimum=1,
maximum=250,
decimals=0,
default=cls.priority,
label="Priority"),
NumberDef("chunkSize",
minimum=1,
maximum=50,
decimals=0,
default=cls.chunk_size,
label="Frames Per Task"),
TextDef("group",
default=cls.group,
label="Group Name"),
])
return defs
@ -32,7 +32,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
label = "Submit Nuke to Deadline"
order = pyblish.api.IntegratorOrder + 0.1
hosts = ["nuke"]
families = ["render", "prerender.farm"]
families = ["render", "prerender"]
optional = True
targets = ["local"]
@ -80,6 +80,10 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
]
def process(self, instance):
if not instance.data.get("farm"):
self.log.info("Skipping local instance.")
return
instance.data["attributeValues"] = self.get_attr_values_from_data(
instance.data)
@ -168,10 +172,10 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin,
resp.json()["_id"])
# redefinition of families
if "render.farm" in families:
if "render" in instance.data["family"]:
instance.data['family'] = 'write'
families.insert(0, "render2d")
elif "prerender.farm" in families:
elif "prerender" in instance.data["family"]:
instance.data['family'] = 'write'
families.insert(0, "prerender")
instance.data["families"] = families
@ -756,6 +756,10 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
instance (pyblish.api.Instance): Instance data.
"""
if not instance.data.get("farm"):
self.log.info("Skipping local instance.")
return
data = instance.data.copy()
context = instance.context
self.context = context
@ -17,10 +17,18 @@ class ValidateDeadlinePools(OptionalPyblishPluginMixin,
label = "Validate Deadline Pools"
order = pyblish.api.ValidatorOrder
families = ["rendering", "render.farm", "renderFarm", "renderlayer"]
families = ["rendering",
"render.farm",
"renderFarm",
"renderlayer",
"maxrender"]
optional = True
def process(self, instance):
if not instance.data.get("farm"):
self.log.info("Skipping local instance.")
return
# get default deadline webservice url from deadline module
deadline_url = instance.context.data["defaultDeadline"]
self.log.info("deadline_url::{}".format(deadline_url))
@ -44,7 +44,7 @@ class AddonSettingsDef(JsonFilesSettingsDef):
class ExampleAddon(OpenPypeAddOn, IPluginPaths, ITrayAction):
"""This Addon has defined it's settings and interface.
"""This Addon has defined its settings and interface.
This example has system settings with an enabled option, and uses a
few other interfaces:
@ -9,7 +9,7 @@ from openpype_modules.ftrack.lib import (
class PushHierValuesToNonHier(ServerAction):
"""Action push hierarchical custom attribute values to non hierarchical.
"""Action push hierarchical custom attribute values to non-hierarchical.
Hierarchical values are also pushed to their task entities.
@ -119,17 +119,109 @@ class PushHierValuesToNonHier(ServerAction):
self.join_query_keys(object_ids)
)).all()
output = {}
attrs_by_obj_id = collections.defaultdict(list)
hierarchical = []
for attr in attrs:
if attr["is_hierarchical"]:
hierarchical.append(attr)
continue
obj_id = attr["object_type_id"]
if obj_id not in output:
output[obj_id] = []
output[obj_id].append(attr)
return output, hierarchical
attrs_by_obj_id[obj_id].append(attr)
return attrs_by_obj_id, hierarchical
def query_attr_value(
self,
session,
hier_attrs,
attrs_by_obj_id,
dst_object_type_ids,
task_entity_ids,
non_task_entity_ids,
parent_id_by_entity_id
):
all_non_task_ids_with_parents = set()
for entity_id in non_task_entity_ids:
all_non_task_ids_with_parents.add(entity_id)
_entity_id = entity_id
while True:
parent_id = parent_id_by_entity_id.get(_entity_id)
if (
parent_id is None
or parent_id in all_non_task_ids_with_parents
):
break
all_non_task_ids_with_parents.add(parent_id)
_entity_id = parent_id
all_entity_ids = (
set(all_non_task_ids_with_parents)
| set(task_entity_ids)
)
attr_ids = {attr["id"] for attr in hier_attrs}
for obj_id in dst_object_type_ids:
attrs = attrs_by_obj_id.get(obj_id)
if attrs is not None:
for attr in attrs:
attr_ids.add(attr["id"])
real_values_by_entity_id = {
entity_id: {}
for entity_id in all_entity_ids
}
attr_values = query_custom_attributes(
session, attr_ids, all_entity_ids, True
)
for item in attr_values:
entity_id = item["entity_id"]
attr_id = item["configuration_id"]
real_values_by_entity_id[entity_id][attr_id] = item["value"]
# Fill hierarchical values
hier_attrs_key_by_id = {
hier_attr["id"]: hier_attr
for hier_attr in hier_attrs
}
hier_values_per_entity_id = {}
for entity_id in all_non_task_ids_with_parents:
real_values = real_values_by_entity_id[entity_id]
hier_values_per_entity_id[entity_id] = {}
for attr_id, attr in hier_attrs_key_by_id.items():
key = attr["key"]
hier_values_per_entity_id[entity_id][key] = (
real_values.get(attr_id)
)
output = {}
for entity_id in non_task_entity_ids:
output[entity_id] = {}
for attr in hier_attrs_key_by_id.values():
key = attr["key"]
value = hier_values_per_entity_id[entity_id][key]
tried_ids = set()
if value is None:
tried_ids.add(entity_id)
_entity_id = entity_id
while value is None:
parent_id = parent_id_by_entity_id.get(_entity_id)
if not parent_id:
break
value = hier_values_per_entity_id[parent_id][key]
if value is not None:
break
_entity_id = parent_id
tried_ids.add(parent_id)
if value is None:
value = attr["default"]
if value is not None:
for ent_id in tried_ids:
hier_values_per_entity_id[ent_id][key] = value
output[entity_id][key] = value
return real_values_by_entity_id, output
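The inheritance lookup above walks up the parent chain until it hits a non-None value, back-filling every entity it visited so later lookups short-circuit. A standalone sketch of that walk, simplified to a single attribute:

```python
def resolve_inherited(entity_id, values, parent_of, default=None):
    """Walk up the parent chain until a non-None value is found.

    `values` maps entity id -> value (None means "not set") and
    `parent_of` maps entity id -> parent id. Visited entities are
    back-filled with the resolved value so repeated lookups for
    siblings and children short-circuit, as in the action above.
    """
    value = values.get(entity_id)
    tried = {entity_id}
    current = entity_id
    while value is None:
        parent = parent_of.get(current)
        if parent is None:
            break
        value = values.get(parent)
        tried.add(parent)
        current = parent
    if value is None:
        value = default
    if value is not None:
        for ent in tried:
            values[ent] = value
    return value
```

The back-fill is the reason the action can afford to resolve many entities against a deep hierarchy: each branch of the tree is only walked once.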
def propagate_values(self, session, event, selected_entities):
ftrack_settings = self.get_ftrack_settings(
@ -156,29 +248,24 @@ class PushHierValuesToNonHier(ServerAction):
}
task_object_type = object_types_by_low_name["task"]
destination_object_types = [task_object_type]
dst_object_type_ids = {task_object_type["id"]}
for ent_type in interest_entity_types:
obj_type = object_types_by_low_name.get(ent_type)
if obj_type and obj_type not in destination_object_types:
destination_object_types.append(obj_type)
destination_object_type_ids = set(
obj_type["id"]
for obj_type in destination_object_types
)
if obj_type:
dst_object_type_ids.add(obj_type["id"])
interest_attributes = action_settings["interest_attributes"]
# Find custom attributes definitions
attrs_by_obj_id, hier_attrs = self.attrs_configurations(
session, destination_object_type_ids, interest_attributes
session, dst_object_type_ids, interest_attributes
)
# Filter destination object types if they have any object specific
# custom attribute
for obj_id in tuple(destination_object_type_ids):
for obj_id in tuple(dst_object_type_ids):
if obj_id not in attrs_by_obj_id:
destination_object_type_ids.remove(obj_id)
dst_object_type_ids.remove(obj_id)
if not destination_object_type_ids:
if not dst_object_type_ids:
# TODO report that there are not matching custom attributes
return {
"success": True,
@ -192,14 +279,14 @@ class PushHierValuesToNonHier(ServerAction):
session,
selected_ids,
project_entity,
destination_object_type_ids
dst_object_type_ids
)
self.log.debug("Preparing whole project hierarchy by ids.")
entities_by_obj_id = {
obj_id: []
for obj_id in destination_object_type_ids
for obj_id in dst_object_type_ids
}
self.log.debug("Filtering Task entities.")
@ -223,10 +310,16 @@ class PushHierValuesToNonHier(ServerAction):
"message": "Nothing to do in your selection."
}
self.log.debug("Getting Hierarchical custom attribute values parents.")
hier_values_by_entity_id = self.get_hier_values(
self.log.debug("Getting Custom attribute values.")
(
real_values_by_entity_id,
hier_values_by_entity_id
) = self.query_attr_value(
session,
hier_attrs,
attrs_by_obj_id,
dst_object_type_ids,
task_entity_ids,
non_task_entity_ids,
parent_id_by_entity_id
)
@ -237,7 +330,8 @@ class PushHierValuesToNonHier(ServerAction):
hier_attrs,
task_entity_ids,
hier_values_by_entity_id,
parent_id_by_entity_id
parent_id_by_entity_id,
real_values_by_entity_id
)
self.log.debug("Setting values to entities themselves.")
@ -245,7 +339,8 @@ class PushHierValuesToNonHier(ServerAction):
session,
entities_by_obj_id,
attrs_by_obj_id,
hier_values_by_entity_id
hier_values_by_entity_id,
real_values_by_entity_id
)
return True
@ -322,112 +417,64 @@ class PushHierValuesToNonHier(ServerAction):
return parent_id_by_entity_id, filtered_entities
def get_hier_values(
self,
session,
hier_attrs,
focus_entity_ids,
parent_id_by_entity_id
):
all_ids_with_parents = set()
for entity_id in focus_entity_ids:
all_ids_with_parents.add(entity_id)
_entity_id = entity_id
while True:
parent_id = parent_id_by_entity_id.get(_entity_id)
if (
not parent_id
or parent_id in all_ids_with_parents
):
break
all_ids_with_parents.add(parent_id)
_entity_id = parent_id
hier_attr_ids = tuple(hier_attr["id"] for hier_attr in hier_attrs)
hier_attrs_key_by_id = {
hier_attr["id"]: hier_attr["key"]
for hier_attr in hier_attrs
}
values_per_entity_id = {}
for entity_id in all_ids_with_parents:
values_per_entity_id[entity_id] = {}
for key in hier_attrs_key_by_id.values():
values_per_entity_id[entity_id][key] = None
values = query_custom_attributes(
session, hier_attr_ids, all_ids_with_parents, True
)
for item in values:
entity_id = item["entity_id"]
key = hier_attrs_key_by_id[item["configuration_id"]]
values_per_entity_id[entity_id][key] = item["value"]
output = {}
for entity_id in focus_entity_ids:
output[entity_id] = {}
for key in hier_attrs_key_by_id.values():
value = values_per_entity_id[entity_id][key]
tried_ids = set()
if value is None:
tried_ids.add(entity_id)
_entity_id = entity_id
while value is None:
parent_id = parent_id_by_entity_id.get(_entity_id)
if not parent_id:
break
value = values_per_entity_id[parent_id][key]
if value is not None:
break
_entity_id = parent_id
tried_ids.add(parent_id)
if value is not None:
for ent_id in tried_ids:
values_per_entity_id[ent_id][key] = value
output[entity_id][key] = value
return output
def set_task_attr_values(
self,
session,
hier_attrs,
task_entity_ids,
hier_values_by_entity_id,
parent_id_by_entity_id
parent_id_by_entity_id,
real_values_by_entity_id
):
hier_attr_id_by_key = {
attr["key"]: attr["id"]
for attr in hier_attrs
}
filtered_task_ids = set()
for task_id in task_entity_ids:
parent_id = parent_id_by_entity_id.get(task_id) or {}
parent_id = parent_id_by_entity_id.get(task_id)
parent_values = hier_values_by_entity_id.get(parent_id)
if not parent_values:
continue
if parent_values:
filtered_task_ids.add(task_id)
if not filtered_task_ids:
return
for task_id in filtered_task_ids:
parent_id = parent_id_by_entity_id[task_id]
parent_values = hier_values_by_entity_id[parent_id]
hier_values_by_entity_id[task_id] = {}
real_task_attr_values = real_values_by_entity_id[task_id]
for key, value in parent_values.items():
hier_values_by_entity_id[task_id][key] = value
if value is None:
continue
configuration_id = hier_attr_id_by_key[key]
_entity_key = collections.OrderedDict([
("configuration_id", configuration_id),
("entity_id", task_id)
])
session.recorded_operations.push(
ftrack_api.operation.UpdateEntityOperation(
"ContextCustomAttributeValue",
op = None
if configuration_id not in real_task_attr_values:
op = ftrack_api.operation.CreateEntityOperation(
"CustomAttributeValue",
_entity_key,
{"value": value}
)
elif real_task_attr_values[configuration_id] != value:
op = ftrack_api.operation.UpdateEntityOperation(
"CustomAttributeValue",
_entity_key,
"value",
ftrack_api.symbol.NOT_SET,
real_task_attr_values[configuration_id],
value
)
)
if len(session.recorded_operations) > 100:
session.commit()
if op is not None:
session.recorded_operations.push(op)
if len(session.recorded_operations) > 100:
session.commit()
session.commit()
@ -436,39 +483,68 @@ class PushHierValuesToNonHier(ServerAction):
session,
entities_by_obj_id,
attrs_by_obj_id,
hier_values_by_entity_id
hier_values_by_entity_id,
real_values_by_entity_id
):
"""Push values from hierarchical custom attributes to non-hierarchical.
Args:
session (ftrack_api.Session): Session which queried entities,
values and which is used for change propagation.
entities_by_obj_id (dict[str, list[str]]): TypedContext
ftrack entity ids where the attributes are propagated by their
object ids.
attrs_by_obj_id (dict[str, ftrack_api.Entity]): Objects of
'CustomAttributeConfiguration' by their ids.
hier_values_by_entity_id (dict[str, dict[str, Any]]): Attribute
values by entity id and by their keys.
real_values_by_entity_id (dict[str, dict[str, Any]]): Real attribute
values of entities.
"""
for object_id, entity_ids in entities_by_obj_id.items():
attrs = attrs_by_obj_id.get(object_id)
if not attrs or not entity_ids:
continue
for attr in attrs:
for entity_id in entity_ids:
value = (
hier_values_by_entity_id
.get(entity_id, {})
.get(attr["key"])
)
for entity_id in entity_ids:
real_values = real_values_by_entity_id.get(entity_id)
hier_values = hier_values_by_entity_id.get(entity_id)
if hier_values is None:
continue
for attr in attrs:
attr_id = attr["id"]
attr_key = attr["key"]
value = hier_values.get(attr_key)
if value is None:
continue
_entity_key = collections.OrderedDict([
("configuration_id", attr["id"]),
("configuration_id", attr_id),
("entity_id", entity_id)
])
session.recorded_operations.push(
ftrack_api.operation.UpdateEntityOperation(
"ContextCustomAttributeValue",
op = None
if attr_id not in real_values:
op = ftrack_api.operation.CreateEntityOperation(
"CustomAttributeValue",
_entity_key,
{"value": value}
)
elif real_values[attr_id] != value:
op = ftrack_api.operation.UpdateEntityOperation(
"CustomAttributeValue",
_entity_key,
"value",
ftrack_api.symbol.NOT_SET,
real_values[attr_id],
value
)
)
if len(session.recorded_operations) > 100:
session.commit()
if op is not None:
session.recorded_operations.push(op)
if len(session.recorded_operations) > 100:
session.commit()
session.commit()
@ -124,6 +124,11 @@ class AppplicationsAction(BaseAction):
if not avalon_project_apps:
return False
settings = self.get_project_settings_from_event(
event, avalon_project_doc["name"])
only_available = settings["applications"]["only_available"]
items = []
for app_name in avalon_project_apps:
app = self.application_manager.applications.get(app_name)
@ -133,6 +138,10 @@ class AppplicationsAction(BaseAction):
if app.group.name in CUSTOM_LAUNCH_APP_GROUPS:
continue
# Skip applications without valid executables
if only_available and not app.find_executable():
continue
app_icon = app.icon
if app_icon and self.icon_url:
try:
@ -29,7 +29,7 @@ class CollectKitsuEntities(pyblish.api.ContextPlugin):
if not zou_asset_data:
raise ValueError("Zou asset data not found in OpenPype!")
task_name = instance.data.get("task")
task_name = instance.data.get("task", context.data.get("task"))
if not task_name:
continue
@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
import gazu
import pyblish.api
import re
class IntegrateKitsuNote(pyblish.api.ContextPlugin):
@ -9,27 +10,98 @@ class IntegrateKitsuNote(pyblish.api.ContextPlugin):
order = pyblish.api.IntegratorOrder
label = "Kitsu Note and Status"
families = ["render", "kitsu"]
# status settings
set_status_note = False
note_status_shortname = "wfa"
status_change_conditions = {
"status_conditions": [],
"family_requirements": [],
}
# comment settings
custom_comment_template = {
"enabled": False,
"comment_template": "{comment}",
}
def format_publish_comment(self, instance):
"""Format the instance's publish comment
Formats `instance.data` against the custom template.
"""
def replace_missing_key(match):
"""If a key is not found in instance.data, insert an empty string"""
key = match.group(1)
if key not in instance.data:
self.log.warning(
"Key '{}' was not found in instance.data "
"and will be rendered as an empty string "
"in the comment".format(key)
)
return ""
else:
return str(instance.data[key])
template = self.custom_comment_template["comment_template"]
pattern = r"\{([^}]*)\}"
return re.sub(pattern, replace_missing_key, template)
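`re.sub` with a replacement callback is what lets unknown keys degrade to empty strings instead of raising `KeyError` as `str.format` would. A self-contained sketch of the same trick:

```python
import re


def format_template(template, data):
    """Fill {key} placeholders from data; unknown keys become ''."""
    def replace(match):
        key = match.group(1)
        # Missing keys are swallowed rather than raising KeyError.
        return str(data[key]) if key in data else ""
    return re.sub(r"\{([^}]*)\}", replace, template)


print(format_template("{asset} v{version} - {missing}",
                      {"asset": "shot010", "version": 3}))
# prints "shot010 v3 - "
```

The pattern `\{([^}]*)\}` captures everything between braces, so arbitrary instance-data keys work without pre-registering them.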
def process(self, context):
# Get comment text body
publish_comment = context.data.get("comment")
if not publish_comment:
self.log.info("Comment is not set.")
self.log.debug("Comment is `{}`".format(publish_comment))
for instance in context:
# Check if instance is a review by checking its family
# Allow a match to primary family or any of families
families = set([instance.data["family"]] +
instance.data.get("families", []))
if "review" not in families:
continue
kitsu_task = instance.data.get("kitsu_task")
if kitsu_task is None:
if not kitsu_task:
continue
# Get note status, by default uses the task status for the note
# if it is not specified in the configuration
note_status = kitsu_task["task_status"]["id"]
shortname = kitsu_task["task_status"]["short_name"].upper()
note_status = kitsu_task["task_status_id"]
if self.set_status_note:
# Check if any status condition is not met
allow_status_change = True
for status_cond in self.status_change_conditions[
"status_conditions"
]:
condition = status_cond["condition"] == "equal"
match = status_cond["short_name"].upper() == shortname
if match and not condition or condition and not match:
allow_status_change = False
break
if allow_status_change:
# Get families
families = {
instance.data.get("family")
for instance in context
if instance.data.get("publish")
}
# Check if any family requirement is met
for family_requirement in self.status_change_conditions[
"family_requirements"
]:
condition = family_requirement["condition"] == "equal"
for family in families:
match = family_requirement["family"].lower() == family
if match and not condition or condition and not match:
allow_status_change = False
break
if allow_status_change:
break
# Set note status
if self.set_status_note and allow_status_change:
kitsu_status = gazu.task.get_task_status_by_short_name(
self.note_status_shortname
)
@ -42,11 +114,22 @@ class IntegrateKitsuNote(pyblish.api.ContextPlugin):
"changed!".format(self.note_status_shortname)
)
# Get comment text body
publish_comment = instance.data.get("comment")
if self.custom_comment_template["enabled"]:
publish_comment = self.format_publish_comment(instance)
if not publish_comment:
self.log.info("Comment is not set.")
else:
self.log.debug("Comment is `{}`".format(publish_comment))
# Add comment to kitsu task
task_id = kitsu_task["id"]
self.log.debug("Add new note in taks id {}".format(task_id))
self.log.debug(
"Add new note in tasks id {}".format(kitsu_task["id"])
)
kitsu_comment = gazu.task.add_comment(
task_id, note_status, comment=publish_comment
kitsu_task, note_status, comment=publish_comment
)
instance.data["kitsu_comment"] = kitsu_comment

View file

@ -12,17 +12,17 @@ class IntegrateKitsuReview(pyblish.api.InstancePlugin):
optional = True
def process(self, instance):
task = instance.data["kitsu_task"]["id"]
comment = instance.data["kitsu_comment"]["id"]
# Check comment has been created
if not comment:
comment_id = instance.data.get("kitsu_comment", {}).get("id")
if not comment_id:
self.log.debug(
"Comment not created, review not pushed to preview."
)
return
# Add review representations as preview of comment
task_id = instance.data["kitsu_task"]["id"]
for representation in instance.data.get("representations", []):
# Skip if not tagged as review
if "kitsureview" not in representation.get("tags", []):
@ -31,6 +31,6 @@ class IntegrateKitsuReview(pyblish.api.InstancePlugin):
self.log.debug("Found review at: {}".format(review_path))
gazu.task.add_preview(
task, comment, review_path, normalize_movie=True
task_id, comment_id, review_path, normalize_movie=True
)
self.log.info("Review uploaded to comment")

View file

@ -129,7 +129,7 @@ def update_op_assets(
frame_out = frame_in + frames_duration - 1
else:
frame_out = project_doc["data"].get("frameEnd", frame_in)
item_data["frameEnd"] = frame_out
item_data["frameEnd"] = int(frame_out)
# Fps, fallback to project's value or default value (25.0)
try:
fps = float(item_data.get("fps"))
@ -147,33 +147,37 @@ def update_op_assets(
item_data["resolutionWidth"] = int(match_res.group(1))
item_data["resolutionHeight"] = int(match_res.group(2))
else:
item_data["resolutionWidth"] = project_doc["data"].get(
"resolutionWidth"
item_data["resolutionWidth"] = int(
project_doc["data"].get("resolutionWidth")
)
item_data["resolutionHeight"] = project_doc["data"].get(
"resolutionHeight"
item_data["resolutionHeight"] = int(
project_doc["data"].get("resolutionHeight")
)
# Properties that don't fully exist in Kitsu.
# Guessing those property names below:
# Pixel Aspect Ratio
item_data["pixelAspect"] = item_data.get(
"pixel_aspect", project_doc["data"].get("pixelAspect")
item_data["pixelAspect"] = float(
item_data.get(
"pixel_aspect", project_doc["data"].get("pixelAspect")
)
)
# Handle Start
item_data["handleStart"] = item_data.get(
"handle_start", project_doc["data"].get("handleStart")
item_data["handleStart"] = int(
item_data.get(
"handle_start", project_doc["data"].get("handleStart")
)
)
# Handle End
item_data["handleEnd"] = item_data.get(
"handle_end", project_doc["data"].get("handleEnd")
item_data["handleEnd"] = int(
item_data.get("handle_end", project_doc["data"].get("handleEnd"))
)
# Clip In
item_data["clipIn"] = item_data.get(
"clip_in", project_doc["data"].get("clipIn")
item_data["clipIn"] = int(
item_data.get("clip_in", project_doc["data"].get("clipIn"))
)
# Clip Out
item_data["clipOut"] = item_data.get(
"clip_out", project_doc["data"].get("clipOut")
item_data["clipOut"] = int(
item_data.get("clip_out", project_doc["data"].get("clipOut"))
)
# Tasks

View file

@ -22,7 +22,7 @@ from openpype.lib.attribute_definitions import (
deserialize_attr_defs,
get_default_values,
)
from openpype.host import IPublishHost
from openpype.host import IPublishHost, IWorkfileHost
from openpype.pipeline import legacy_io
from openpype.pipeline.plugin_discover import DiscoverResult
@ -1374,6 +1374,7 @@ class CreateContext:
self._current_project_name = None
self._current_asset_name = None
self._current_task_name = None
self._current_workfile_path = None
self._host_is_valid = host_is_valid
# Currently unused variable
@ -1503,14 +1504,62 @@ class CreateContext:
return os.environ["AVALON_APP"]
def get_current_project_name(self):
"""Project name which was used as current context on context reset.
Returns:
Union[str, None]: Project name.
"""
return self._current_project_name
def get_current_asset_name(self):
"""Asset name which was used as current context on context reset.
Returns:
Union[str, None]: Asset name.
"""
return self._current_asset_name
def get_current_task_name(self):
"""Task name which was used as current context on context reset.
Returns:
Union[str, None]: Task name.
"""
return self._current_task_name
def get_current_workfile_path(self):
"""Workfile path which was opened on context reset.
Returns:
Union[str, None]: Workfile path.
"""
return self._current_workfile_path
@property
def context_has_changed(self):
"""Host context has changed.
As context is used project, asset, task name and workfile path if
host does support workfiles.
Returns:
bool: Context changed.
"""
project_name, asset_name, task_name, workfile_path = (
self._get_current_host_context()
)
return (
self._current_project_name != project_name
or self._current_asset_name != asset_name
or self._current_task_name != task_name
or self._current_workfile_path != workfile_path
)
project_name = property(get_current_project_name)
@property
@ -1575,6 +1624,28 @@ class CreateContext:
self._collection_shared_data = None
self.refresh_thumbnails()
def _get_current_host_context(self):
project_name = asset_name = task_name = workfile_path = None
if hasattr(self.host, "get_current_context"):
host_context = self.host.get_current_context()
if host_context:
project_name = host_context.get("project_name")
asset_name = host_context.get("asset_name")
task_name = host_context.get("task_name")
if isinstance(self.host, IWorkfileHost):
workfile_path = self.host.get_current_workfile()
# --- TODO remove these conditions ---
if not project_name:
project_name = legacy_io.Session.get("AVALON_PROJECT")
if not asset_name:
asset_name = legacy_io.Session.get("AVALON_ASSET")
if not task_name:
task_name = legacy_io.Session.get("AVALON_TASK")
# ---
return project_name, asset_name, task_name, workfile_path
def reset_current_context(self):
"""Refresh current context.
@ -1593,24 +1664,14 @@ class CreateContext:
are stored. We should store the workfile (if is available) too.
"""
project_name = asset_name = task_name = None
if hasattr(self.host, "get_current_context"):
host_context = self.host.get_current_context()
if host_context:
project_name = host_context.get("project_name")
asset_name = host_context.get("asset_name")
task_name = host_context.get("task_name")
if not project_name:
project_name = legacy_io.Session.get("AVALON_PROJECT")
if not asset_name:
asset_name = legacy_io.Session.get("AVALON_ASSET")
if not task_name:
task_name = legacy_io.Session.get("AVALON_TASK")
project_name, asset_name, task_name, workfile_path = (
self._get_current_host_context()
)
self._current_project_name = project_name
self._current_asset_name = asset_name
self._current_task_name = task_name
self._current_workfile_path = workfile_path
def reset_plugins(self, discover_publish_plugins=True):
"""Reload plugins.

View file

@ -1,2 +1,3 @@
DEFAULT_PUBLISH_TEMPLATE = "publish"
DEFAULT_HERO_PUBLISH_TEMPLATE = "hero"
TRANSIENT_DIR_TEMPLATE = "transient"

View file

@ -20,13 +20,15 @@ from openpype.settings import (
get_system_settings,
)
from openpype.pipeline import (
tempdir
tempdir,
Anatomy
)
from openpype.pipeline.plugin_discover import DiscoverResult
from .contants import (
DEFAULT_PUBLISH_TEMPLATE,
DEFAULT_HERO_PUBLISH_TEMPLATE,
TRANSIENT_DIR_TEMPLATE
)
@ -690,3 +692,79 @@ def get_publish_repre_path(instance, repre, only_published=False):
if os.path.exists(src_path):
return src_path
return None
def get_custom_staging_dir_info(project_name, host_name, family, task_name,
task_type, subset_name,
project_settings=None,
anatomy=None, log=None):
"""Checks profiles to determine if context should use a special custom dir as staging.
Args:
project_name (str)
host_name (str)
family (str)
task_name (str)
task_type (str)
subset_name (str)
project_settings(Dict[str, Any]): Prepared project settings.
anatomy (Dict[str, Any])
log (Logger) (optional)
Returns:
(tuple)
Raises:
ValueError - if the template to be used is misconfigured
"""
settings = project_settings or get_project_settings(project_name)
custom_staging_dir_profiles = (settings["global"]
["tools"]
["publish"]
["custom_staging_dir_profiles"])
if not custom_staging_dir_profiles:
return None, None
if not log:
log = Logger.get_logger("get_custom_staging_dir_info")
filtering_criteria = {
"hosts": host_name,
"families": family,
"task_names": task_name,
"task_types": task_type,
"subsets": subset_name
}
profile = filter_profiles(custom_staging_dir_profiles,
filtering_criteria,
logger=log)
if not profile or not profile["active"]:
return None, None
if not anatomy:
anatomy = Anatomy(project_name)
template_name = profile["template_name"] or TRANSIENT_DIR_TEMPLATE
_validate_transient_template(project_name, template_name, anatomy)
custom_staging_dir = anatomy.templates[template_name]["folder"]
is_persistent = profile["custom_staging_dir_persistent"]
return custom_staging_dir, is_persistent
def _validate_transient_template(project_name, template_name, anatomy):
"""Check that transient template is correctly configured.
Raises:
ValueError - if misconfigured template
"""
if template_name not in anatomy.templates:
raise ValueError(("Anatomy of project \"{}\" does not have set"
" \"{}\" template key!"
).format(project_name, template_name))
if "folder" not in anatomy.templates[template_name]:
raise ValueError(("There is not set \"folder\" template in \"{}\" anatomy" # noqa
" for project \"{}\"."
).format(template_name, project_name))
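A toy end-to-end sketch of the lookup above; `resolve_custom_staging` and the profile shape are hypothetical stand-ins for `filter_profiles` plus the anatomy template lookup, not the real OpenPype API:

```python
def resolve_custom_staging(profiles, criteria, templates):
    """Return (staging_dir_template, is_persistent), or (None, None)."""
    for profile in profiles:
        # An empty filter matches anything; otherwise the value must be listed
        if not all(
            not profile.get(key) or value in profile[key]
            for key, value in criteria.items()
        ):
            continue
        if not profile.get("active"):
            return None, None
        template_name = profile.get("template_name") or "transient"
        template = templates.get(template_name, {})
        if "folder" not in template:
            raise ValueError(
                "Misconfigured template: {}".format(template_name))
        return template["folder"], profile.get(
            "custom_staging_dir_persistent", False)
    return None, None


# Hypothetical settings data mirroring the documented profile keys
profiles = [{
    "active": True,
    "hosts": ["maya"],
    "families": ["render"],
    "template_name": "transient",
    "custom_staging_dir_persistent": True,
}]
templates = {"transient": {"folder": "{root[work]}/{project[name]}/temp"}}
```

Calling `resolve_custom_staging(profiles, {"hosts": "maya", "families": "render"}, templates)` yields the transient folder template and `True`; a non-matching host falls through to `(None, None)`.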

View file

@ -93,6 +93,10 @@ class CleanUp(pyblish.api.InstancePlugin):
self.log.info("No staging directory found: %s" % staging_dir)
return
if instance.data.get("stagingDir_persistent"):
self.log.info("Staging dir: %s should be persistent" % staging_dir)
return
self.log.info("Removing staging directory {}".format(staging_dir))
shutil.rmtree(staging_dir)

View file

@ -37,7 +37,7 @@ class CleanUpFarm(pyblish.api.ContextPlugin):
dirpaths_to_remove = set()
for instance in context:
staging_dir = instance.data.get("stagingDir")
if staging_dir:
if staging_dir and not instance.data.get("stagingDir_persistent"):
dirpaths_to_remove.add(os.path.normpath(staging_dir))
if "representations" in instance.data:

View file

@ -0,0 +1,67 @@
"""
Requires:
anatomy
Provides:
instance.data -> stagingDir (folder path)
-> stagingDir_persistent (bool)
"""
import copy
import os.path
import pyblish.api
from openpype.pipeline.publish.lib import get_custom_staging_dir_info
class CollectCustomStagingDir(pyblish.api.InstancePlugin):
"""Looks through profiles to decide if stagingDir should be persistent and in
a special location.
A transient staging dir can be useful in specific use cases where it is
desirable to have temporary renders in specific, persistent folders, for
example on disks optimized for speed.
It is the studio's responsibility to clean up obsolete folders with data.
Location of the folder is configured in `project_anatomy/templates/others`.
('transient' key is expected, with 'folder' key)
Which family/task type/subset is applicable is configured in:
`project_settings/global/tools/publish/custom_staging_dir_profiles`
"""
label = "Collect Custom Staging Directory"
order = pyblish.api.CollectorOrder + 0.4990
template_key = "transient"
def process(self, instance):
family = instance.data["family"]
subset_name = instance.data["subset"]
host_name = instance.context.data["hostName"]
project_name = instance.context.data["projectName"]
anatomy = instance.context.data["anatomy"]
anatomy_data = copy.deepcopy(instance.data["anatomyData"])
task = anatomy_data.get("task", {})
transient_tml, is_persistent = get_custom_staging_dir_info(
project_name, host_name, family, task.get("name"),
task.get("type"), subset_name, anatomy=anatomy, log=self.log)
result_str = "Not adding"
if transient_tml:
anatomy_data["root"] = anatomy.roots
scene_name = instance.context.data.get("currentFile")
if scene_name:
anatomy_data["scene_name"] = os.path.basename(scene_name)
transient_dir = transient_tml.format(**anatomy_data)
instance.data["stagingDir"] = transient_dir
instance.data["stagingDir_persistent"] = is_persistent
result_str = "Adding '{}' as".format(transient_dir)
self.log.info("{} custom staging dir for instance with '{}'".format(
result_str, family
))
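The staging directory itself is produced by plain `str.format` over the anatomy data; a minimal sketch with hypothetical values (the template key mirrors `project_anatomy/templates/others` with its `transient`/`folder` keys):

```python
# Hypothetical anatomy data; real values come from instance.data["anatomyData"]
template = "{root[work]}/{project[name]}/{asset}/work/{family}/{subset}"
anatomy_data = {
    "root": {"work": "/mnt/work"},
    "project": {"name": "demo"},
    "asset": "sh010",
    "family": "render",
    "subset": "renderMain",
}

# str.format resolves {root[work]}-style item access on nested dicts
staging_dir = template.format(**anatomy_data)
# -> "/mnt/work/demo/sh010/work/render/renderMain"
```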

View file

@ -16,9 +16,7 @@ from openpype.lib import (
get_transcode_temp_directory,
convert_input_paths_for_ffmpeg,
should_convert_for_ffmpeg,
CREATE_NO_WINDOW
should_convert_for_ffmpeg
)
from openpype.lib.profiles_filtering import filter_profiles
@ -341,8 +339,6 @@ class ExtractBurnin(publish.Extractor):
"logger": self.log,
"env": {}
}
if platform.system().lower() == "windows":
process_kwargs["creationflags"] = CREATE_NO_WINDOW
run_openpype_process(*args, **process_kwargs)
# Remove the temporary json
@ -732,7 +728,6 @@ class ExtractBurnin(publish.Extractor):
return filtered_burnin_defs
families = self.families_from_instance(instance)
low_families = [family.lower() for family in families]
for filename_suffix, orig_burnin_def in burnin_defs.items():
burnin_def = copy.deepcopy(orig_burnin_def)
@ -743,7 +738,7 @@ class ExtractBurnin(publish.Extractor):
families_filters = def_filter["families"]
if not self.families_filter_validation(
low_families, families_filters
families, families_filters
):
self.log.debug((
"Skipped burnin definition \"{}\". Family"
@ -780,31 +775,19 @@ class ExtractBurnin(publish.Extractor):
return filtered_burnin_defs
def families_filter_validation(self, families, output_families_filter):
"""Determine if entered families intersect with families filters.
"""Determines if entered families intersect with families filters.
All family values are lowered to avoid unexpected results.
"""
if not output_families_filter:
families_filter_lower = set(family.lower() for family in
output_families_filter
# Exclude empty filter values
if family)
if not families_filter_lower:
return True
for family_filter in output_families_filter:
if not family_filter:
continue
if not isinstance(family_filter, (list, tuple)):
if family_filter.lower() not in families:
continue
return True
valid = True
for family in family_filter:
if family.lower() not in families:
valid = False
break
if valid:
return True
return False
return any(family.lower() in families_filter_lower
for family in families)
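The rewritten check reduces to a case-insensitive set-membership test; a minimal standalone sketch mirroring the method above (names assumed):

```python
def families_filter_validation(families, output_families_filter):
    """Return True if any family matches the filter (case-insensitive).

    An empty filter, or a filter containing only blank values,
    matches everything.
    """
    families_filter_lower = {
        family.lower()
        for family in output_families_filter
        # Exclude empty filter values
        if family
    }
    if not families_filter_lower:
        return True
    return any(family.lower() in families_filter_lower for family in families)
```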
def families_from_instance(self, instance):
"""Return all families of entered instance."""

View file

@ -12,7 +12,7 @@ import pyblish.api
from openpype.lib import (
get_ffmpeg_tool_path,
filter_profiles,
path_to_subprocess_arg,
run_subprocess,
)
@ -23,6 +23,7 @@ from openpype.lib.transcoding import (
convert_input_paths_for_ffmpeg,
get_transcode_temp_directory,
)
from openpype.pipeline.publish import KnownPublishError
class ExtractReview(pyblish.api.InstancePlugin):
@ -88,21 +89,23 @@ class ExtractReview(pyblish.api.InstancePlugin):
def _get_outputs_for_instance(self, instance):
host_name = instance.context.data["hostName"]
task_name = os.environ["AVALON_TASK"]
family = self.main_family_from_instance(instance)
self.log.info("Host: \"{}\"".format(host_name))
self.log.info("Task: \"{}\"".format(task_name))
self.log.info("Family: \"{}\"".format(family))
profile = self.find_matching_profile(
host_name, task_name, family
)
profile = filter_profiles(
self.profiles,
{
"hosts": host_name,
"families": family,
},
logger=self.log)
if not profile:
self.log.info((
"Skipped instance. None of profiles in presets are for"
" Host: \"{}\" | Family: \"{}\" | Task \"{}\""
).format(host_name, family, task_name))
" Host: \"{}\" | Family: \"{}\""
).format(host_name, family))
return
self.log.debug("Matching profile: \"{}\"".format(json.dumps(profile)))
@ -112,17 +115,19 @@ class ExtractReview(pyblish.api.InstancePlugin):
filtered_outputs = self.filter_output_defs(
profile, subset_name, instance_families
)
if not filtered_outputs:
self.log.info((
"Skipped instance. All output definitions from selected"
" profile do not match instance families \"{}\" or"
" subset name \"{}\"."
).format(str(instance_families), subset_name))
# Store `filename_suffix` to save arguments
profile_outputs = []
for filename_suffix, definition in filtered_outputs.items():
definition["filename_suffix"] = filename_suffix
profile_outputs.append(definition)
if not filtered_outputs:
self.log.info((
"Skipped instance. All output definitions from selected"
" profile does not match to instance families. \"{}\""
).format(str(instance_families)))
return profile_outputs
def _get_outputs_per_representations(self, instance, profile_outputs):
@ -216,6 +221,7 @@ class ExtractReview(pyblish.api.InstancePlugin):
outputs_per_repres = self._get_outputs_per_representations(
instance, profile_outputs
)
for repre, output_defs in outputs_per_repres:
# Check if input should be preconverted before processing
# Store original staging dir (its value may change)
@ -297,10 +303,10 @@ class ExtractReview(pyblish.api.InstancePlugin):
shutil.rmtree(new_staging_dir)
def _render_output_definitions(
self, instance, repre, src_repre_staging_dir, output_defs
self, instance, repre, src_repre_staging_dir, output_definitions
):
fill_data = copy.deepcopy(instance.data["anatomyData"])
for _output_def in output_defs:
for _output_def in output_definitions:
output_def = copy.deepcopy(_output_def)
# Make sure output definition has "tags" key
if "tags" not in output_def:
@ -346,10 +352,11 @@ class ExtractReview(pyblish.api.InstancePlugin):
if temp_data["input_is_sequence"]:
self.log.info("Filling gaps in sequence.")
files_to_clean = self.fill_sequence_gaps(
temp_data["origin_repre"]["files"],
new_repre["stagingDir"],
temp_data["frame_start"],
temp_data["frame_end"])
files=temp_data["origin_repre"]["files"],
staging_dir=new_repre["stagingDir"],
start_frame=temp_data["frame_start"],
end_frame=temp_data["frame_end"]
)
# create or update outputName
output_name = new_repre.get("outputName", "")
@ -421,10 +428,10 @@ class ExtractReview(pyblish.api.InstancePlugin):
def input_is_sequence(self, repre):
"""Deduce from representation data if input is sequence."""
# TODO GLOBAL ISSUE - Find better way how to find out if input
# is sequence. Issues( in theory):
# - there may be multiple files ant not be sequence
# - remainders are not checked at all
# - there can be more than one collection
# is sequence. Issues (in theory):
# - there may be multiple files and not be a sequence
# - remainders are not checked at all
# - there can be more than one collection
return isinstance(repre["files"], (list, tuple))
def prepare_temp_data(self, instance, repre, output_def):
@ -816,76 +823,41 @@ class ExtractReview(pyblish.api.InstancePlugin):
is done.
Raises:
AssertionError: if more then one collection is obtained.
KnownPublishError: if more than one collection is obtained.
"""
start_frame = int(start_frame)
end_frame = int(end_frame)
collections = clique.assemble(files)[0]
msg = "Multiple collections {} found.".format(collections)
assert len(collections) == 1, msg
if len(collections) != 1:
raise KnownPublishError(
"Multiple collections {} found.".format(collections))
col = collections[0]
# do nothing if no gap is found in input range
not_gap = True
for fr in range(start_frame, end_frame + 1):
if fr not in col.indexes:
not_gap = False
if not_gap:
return []
holes = col.holes()
# generate ideal sequence
complete_col = clique.assemble(
[("{}{:0" + str(col.padding) + "d}{}").format(
col.head, f, col.tail
) for f in range(start_frame, end_frame)]
)[0][0] # type: clique.Collection
new_files = {}
last_existing_file = None
for idx in holes.indexes:
# get previous existing file
test_file = os.path.normpath(os.path.join(
staging_dir,
("{}{:0" + str(complete_col.padding) + "d}{}").format(
complete_col.head, idx - 1, complete_col.tail)))
if os.path.isfile(test_file):
new_files[idx] = test_file
last_existing_file = test_file
# Prepare which hole is filled with what frame
# - the frame is filled only with already existing frames
prev_frame = next(iter(col.indexes))
hole_frame_to_nearest = {}
for frame in range(int(start_frame), int(end_frame) + 1):
if frame in col.indexes:
prev_frame = frame
else:
if not last_existing_file:
# previous file is not found (sequence has a hole
# at the beginning. Use first available frame
# there is.
try:
last_existing_file = list(col)[0]
except IndexError:
# empty collection?
raise AssertionError(
"Invalid sequence collected")
new_files[idx] = os.path.normpath(
os.path.join(staging_dir, last_existing_file))
# Use previous frame as source for hole
hole_frame_to_nearest[frame] = prev_frame
files_to_clean = []
if new_files:
# so now new files are dict with missing frame as a key and
# existing file as a value.
for frame, file in new_files.items():
self.log.info(
"Filling gap {} with {}".format(frame, file))
# Calculate paths
added_files = []
col_format = col.format("{head}{padding}{tail}")
for hole_frame, src_frame in hole_frame_to_nearest.items():
hole_fpath = os.path.join(staging_dir, col_format % hole_frame)
src_fpath = os.path.join(staging_dir, col_format % src_frame)
if not os.path.isfile(src_fpath):
raise KnownPublishError(
"Missing previously detected file: {}".format(src_fpath))
hole = os.path.join(
staging_dir,
("{}{:0" + str(col.padding) + "d}{}").format(
col.head, frame, col.tail))
speedcopy.copyfile(file, hole)
files_to_clean.append(hole)
speedcopy.copyfile(src_fpath, hole_fpath)
added_files.append(hole_fpath)
return files_to_clean
return added_files
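The gap-filling strategy above (each hole copies the nearest previous existing frame; leading holes copy the first available frame) can be sketched in isolation; the helper name is illustrative:

```python
def holes_to_nearest(existing_frames, start_frame, end_frame):
    """Map each missing frame in [start_frame, end_frame] to the existing
    frame whose file should be copied into the hole."""
    existing = set(existing_frames)
    if not existing:
        raise ValueError("Invalid sequence collected")
    # Leading holes fall back to the first existing frame
    prev_frame = min(existing)
    mapping = {}
    for frame in range(start_frame, end_frame + 1):
        if frame in existing:
            prev_frame = frame
        else:
            mapping[frame] = prev_frame
    return mapping
```

The plugin then copies `src_fpath` into `hole_fpath` for each pair, using the collection's `{head}{padding}{tail}` format to build the file names.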
def input_output_paths(self, new_repre, output_def, temp_data):
"""Deduce input and output file paths based on entered data.
@ -1281,7 +1253,7 @@ class ExtractReview(pyblish.api.InstancePlugin):
# 'use_input_res' is set to 'True'.
use_input_res = False
# Overscal color
# Overscan color
overscan_color_value = "black"
overscan_color = output_def.get("overscan_color")
if overscan_color:
@ -1468,240 +1440,20 @@ class ExtractReview(pyblish.api.InstancePlugin):
families.append(family)
return families
def compile_list_of_regexes(self, in_list):
"""Convert strings in entered list to compiled regex objects."""
regexes = []
if not in_list:
return regexes
for item in in_list:
if not item:
continue
try:
regexes.append(re.compile(item))
except TypeError:
self.log.warning((
"Invalid type \"{}\" value \"{}\"."
" Expected string based object. Skipping."
).format(str(type(item)), str(item)))
return regexes
def validate_value_by_regexes(self, value, in_list):
"""Validates if any regex from the list matches entered value.
Args:
in_list (list): List with regexes.
value (str): String where regexes is checked.
Returns:
int: Returns `0` when list is not set or is empty. Returns `1` when
any regex match value and returns `-1` when none of regexes
match value entered.
"""
if not in_list:
return 0
output = -1
regexes = self.compile_list_of_regexes(in_list)
for regex in regexes:
if not value:
continue
if re.match(regex, value):
output = 1
break
return output
def profile_exclusion(self, matching_profiles):
"""Find the most matching profile by host, task and family match.
Profiles are selectively filtered. Each profile should have
"__value__" key with list of booleans. Each boolean represents
existence of filter for specific key (host, tasks, family).
Profiles are looped in sequence. In each sequence are split into
true_list and false_list. For next sequence loop are used profiles in
true_list if there are any profiles else false_list is used.
Filtering ends when only one profile left in true_list. Or when all
existence booleans loops passed, in that case first profile from left
profiles is returned.
Args:
matching_profiles (list): Profiles with same values.
Returns:
dict: Most matching profile.
"""
self.log.info(
"Search for first most matching profile in match order:"
" Host name -> Task name -> Family."
)
# Filter all profiles with highest points value. First filter profiles
# with matching host if there are any then filter profiles by task
# name if there are any and lastly filter by family. Else use first in
# list.
idx = 0
final_profile = None
while True:
profiles_true = []
profiles_false = []
for profile in matching_profiles:
value = profile["__value__"]
# Just use first profile when idx is greater than values.
if not idx < len(value):
final_profile = profile
break
if value[idx]:
profiles_true.append(profile)
else:
profiles_false.append(profile)
if final_profile is not None:
break
if profiles_true:
matching_profiles = profiles_true
else:
matching_profiles = profiles_false
if len(matching_profiles) == 1:
final_profile = matching_profiles[0]
break
idx += 1
final_profile.pop("__value__")
return final_profile
def find_matching_profile(self, host_name, task_name, family):
""" Filter profiles by Host name, Task name and main Family.
Filtering keys are "hosts" (list), "tasks" (list), "families" (list).
If key is not found or is empty then it's expected to match.
Args:
profiles (list): Profiles definition from presets.
host_name (str): Current running host name.
task_name (str): Current context task name.
family (str): Main family of current Instance.
Returns:
dict/None: Return most matching profile or None if none of profiles
match at least one criteria.
"""
matching_profiles = None
if not self.profiles:
return matching_profiles
highest_profile_points = -1
# Each profile get 1 point for each matching filter. Profile with most
# points is returned. For cases when more than one profile will match
# are also stored ordered lists of matching values.
for profile in self.profiles:
profile_points = 0
profile_value = []
# Host filtering
host_names = profile.get("hosts")
match = self.validate_value_by_regexes(host_name, host_names)
if match == -1:
self.log.debug(
"\"{}\" not found in {}".format(host_name, host_names)
)
continue
profile_points += match
profile_value.append(bool(match))
# Task filtering
task_names = profile.get("tasks")
match = self.validate_value_by_regexes(task_name, task_names)
if match == -1:
self.log.debug(
"\"{}\" not found in {}".format(task_name, task_names)
)
continue
profile_points += match
profile_value.append(bool(match))
# Family filtering
families = profile.get("families")
match = self.validate_value_by_regexes(family, families)
if match == -1:
self.log.debug(
"\"{}\" not found in {}".format(family, families)
)
continue
profile_points += match
profile_value.append(bool(match))
if profile_points < highest_profile_points:
continue
if profile_points > highest_profile_points:
matching_profiles = []
highest_profile_points = profile_points
if profile_points == highest_profile_points:
profile["__value__"] = profile_value
matching_profiles.append(profile)
if not matching_profiles:
self.log.warning((
"None of profiles match your setup."
" Host \"{}\" | Task: \"{}\" | Family: \"{}\""
).format(host_name, task_name, family))
return
if len(matching_profiles) == 1:
# Pop temporary key `__value__`
matching_profiles[0].pop("__value__")
return matching_profiles[0]
self.log.warning((
"More than one profile match your setup."
" Host \"{}\" | Task: \"{}\" | Family: \"{}\""
).format(host_name, task_name, family))
return self.profile_exclusion(matching_profiles)
def families_filter_validation(self, families, output_families_filter):
"""Determines if entered families intersect with families filters.
All family values are lowered to avoid unexpected results.
"""
if not output_families_filter:
families_filter_lower = set(family.lower() for family in
output_families_filter
# Exclude empty filter values
if family)
if not families_filter_lower:
return True
single_families = []
combination_families = []
for family_filter in output_families_filter:
if not family_filter:
continue
if isinstance(family_filter, (list, tuple)):
_family_filter = []
for family in family_filter:
if family:
_family_filter.append(family.lower())
combination_families.append(_family_filter)
else:
single_families.append(family_filter.lower())
for family in single_families:
if family in families:
return True
for family_combination in combination_families:
valid = True
for family in family_combination:
if family not in families:
valid = False
break
if valid:
return True
return False
return any(family.lower() in families_filter_lower
for family in families)
def filter_output_defs(self, profile, subset_name, families):
"""Return outputs matching input instance families.
@ -1716,14 +1468,10 @@ class ExtractReview(pyblish.api.InstancePlugin):
Returns:
list: Containing all output definitions matching entered families.
"""
outputs = profile.get("outputs") or []
outputs = profile.get("outputs") or {}
if not outputs:
return outputs
# lower values
# QUESTION is this valid operation?
families = [family.lower() for family in families]
filtered_outputs = {}
for filename_suffix, output_def in outputs.items():
output_filters = output_def.get("filter")
@ -1995,14 +1743,14 @@ class OverscanCrop:
relative_source_regex = re.compile(r"%([\+\-])")
def __init__(
self, input_width, input_height, string_value, overscal_color=None
self, input_width, input_height, string_value, overscan_color=None
):
# Make sure that is not None
string_value = string_value or ""
self.input_width = input_width
self.input_height = input_height
self.overscal_color = overscal_color
self.overscan_color = overscan_color
width, height = self._convert_string_to_values(string_value)
self._width_value = width
@ -2058,20 +1806,20 @@ class OverscanCrop:
elif width >= self.input_width and height >= self.input_height:
output.append(
"pad={}:{}:(iw-ow)/2:(ih-oh)/2:{}".format(
width, height, self.overscal_color
width, height, self.overscan_color
)
)
elif width > self.input_width and height < self.input_height:
output.append("crop=iw:{}".format(height))
output.append("pad={}:ih:(iw-ow)/2:(ih-oh)/2:{}".format(
width, self.overscal_color
width, self.overscan_color
))
elif width < self.input_width and height > self.input_height:
output.append("crop={}:ih".format(width))
output.append("pad=iw:{}:(iw-ow)/2:(ih-oh)/2:{}".format(
height, self.overscal_color
height, self.overscan_color
))
return output
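The crop/pad branches above compose standard ffmpeg `crop`/`pad` filter strings; a standalone sketch follows. The both-axes-smaller branch is assumed from context, since only the other branches are visible in this hunk:

```python
def overscan_filters(in_w, in_h, out_w, out_h, color="black"):
    """Build ffmpeg video-filter strings that center the input on an
    out_w x out_h canvas, cropping or padding each axis as needed."""
    output = []
    if out_w <= in_w and out_h <= in_h:
        # Assumed branch: both axes shrink, plain centered crop
        output.append("crop={}:{}".format(out_w, out_h))
    elif out_w >= in_w and out_h >= in_h:
        # Both axes grow: pad the canvas, filling borders with `color`
        output.append(
            "pad={}:{}:(iw-ow)/2:(ih-oh)/2:{}".format(out_w, out_h, color))
    elif out_w > in_w and out_h < in_h:
        # Wider but shorter: crop height, then pad width
        output.append("crop=iw:{}".format(out_h))
        output.append(
            "pad={}:ih:(iw-ow)/2:(ih-oh)/2:{}".format(out_w, color))
    else:
        # Narrower but taller: crop width, then pad height
        output.append("crop={}:ih".format(out_w))
        output.append(
            "pad=iw:{}:(iw-ow)/2:(ih-oh)/2:{}".format(out_h, color))
    return output
```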

View file

@ -24,7 +24,10 @@ from openpype.client import (
get_version_by_name,
)
from openpype.lib import source_hash
from openpype.lib.file_transaction import FileTransaction
from openpype.lib.file_transaction import (
FileTransaction,
DuplicateDestinationError
)
from openpype.pipeline.publish import (
KnownPublishError,
get_publish_template_name,
@ -170,9 +173,18 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
).format(instance.data["family"]))
return
file_transactions = FileTransaction(log=self.log)
file_transactions = FileTransaction(log=self.log,
# Enforce unique transfers
allow_queue_replacements=False)
try:
self.register(instance, file_transactions, filtered_repres)
except DuplicateDestinationError as exc:
# Raise DuplicateDestinationError as KnownPublishError
# and rollback the transactions
file_transactions.rollback()
six.reraise(KnownPublishError,
KnownPublishError(exc),
sys.exc_info()[2])
except Exception:
# clean destination
# todo: preferably we'd also rollback *any* changes to the database
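The `six.reraise` call above wraps a `DuplicateDestinationError` in a `KnownPublishError` while keeping the original traceback. A self-contained sketch of the same pattern using the plain Python 3 idiom (the exception classes here are stand-ins for the openpype ones):

```python
import sys

class KnownPublishError(Exception):
    """Stand-in for openpype.pipeline.publish.KnownPublishError."""

class DuplicateDestinationError(Exception):
    """Stand-in for the file-transaction error in the diff."""

def register():
    raise DuplicateDestinationError("two representations target one file")

try:
    try:
        register()
    except DuplicateDestinationError as exc:
        # Python 3 equivalent of six.reraise(KnownPublishError, ...):
        # re-raise under a new type while keeping the original traceback.
        raise KnownPublishError(str(exc)).with_traceback(sys.exc_info()[2])
except KnownPublishError as exc:
    caught = exc

print(type(caught).__name__, "-", caught)
```

`six.reraise` is needed in the plugin because the codebase still supports Python 2; on Python 3 only, `with_traceback` does the same job.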


@@ -60,6 +60,8 @@ class PreIntegrateThumbnails(pyblish.api.InstancePlugin):
if not found_profile:
return
thumbnail_repre.setdefault("tags", [])
if not found_profile["integrate_thumbnail"]:
if "delete" not in thumbnail_repre["tags"]:
thumbnail_repre["tags"].append("delete")


@@ -398,12 +398,6 @@ class ModifiedBurnins(ffmpeg_burnins.Burnins):
"stderr": subprocess.PIPE,
"shell": True,
}
if platform.system().lower() == "windows":
kwargs["creationflags"] = (
subprocess.CREATE_NEW_PROCESS_GROUP
| getattr(subprocess, "DETACHED_PROCESS", 0)
| getattr(subprocess, "CREATE_NO_WINDOW", 0)
)
proc = subprocess.Popen(command, **kwargs)
_stdout, _stderr = proc.communicate()


@@ -58,12 +58,16 @@
"file": "{originalBasename}.{ext}",
"path": "{@folder}/{@file}"
},
"transient": {
"folder": "{root[work]}/{project[name]}/{hierarchy}/{asset}/work/{family}/{subset}"
},
"__dynamic_keys_labels__": {
"maya2unreal": "Maya to Unreal",
"simpleUnrealTextureHero": "Simple Unreal Texture - Hero",
"simpleUnrealTexture": "Simple Unreal Texture",
"online": "online",
"source": "source"
"source": "source",
"transient": "transient"
}
}
}


@@ -0,0 +1,3 @@
{
"only_available": false
}


@@ -9,6 +9,13 @@
"rules": {}
}
},
"workfile": {
"submission_overrides": [
"render_chunk",
"frame_range",
"resolution"
]
},
"publish": {
"CollectRenderPath": {
"output_extension": "png",


@@ -43,10 +43,7 @@
"use_published": true,
"priority": 50,
"chunk_size": 10,
"group": "none",
"deadline_pool": "",
"deadline_pool_secondary": "",
"framePerTask": 1
"group": "none"
},
"NukeSubmitDeadline": {
"enabled": true,


@@ -614,7 +614,8 @@
"task_names": [],
"template_name": "simpleUnrealTextureHero"
}
]
],
"custom_staging_dir_profiles": []
}
},
"project_folder_structure": "{\"__project_root__\": {\"prod\": {}, \"resources\": {\"footage\": {\"plates\": {}, \"offline\": {}}, \"audio\": {}, \"art_dept\": {}}, \"editorial\": {}, \"assets\": {\"characters\": {}, \"locations\": {}}, \"shots\": {}}}",


@@ -7,7 +7,15 @@
"publish": {
"IntegrateKitsuNote": {
"set_status_note": false,
"note_status_shortname": "wfa"
"note_status_shortname": "wfa",
"status_change_conditions": {
"status_conditions": [],
"family_requirements": []
},
"custom_comment_template": {
"enabled": false,
"comment_template": "{comment}\n\n| | |\n|--|--|\n| version| `{version}` |\n| family | `{family}` |\n| name | `{name}` |"
}
}
}
}
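The `custom_comment_template` added above is an ordinary format string filled from the publishing instance's data. A quick sketch of how such a template expands (the field values here are made up for illustration):

```python
# Same shape as the default template in the settings above.
template = (
    "{comment}\n\n| | |\n|--|--|\n| version| `{version}` |\n"
    "| family | `{family}` |\n| name | `{name}` |"
)
comment = template.format(
    comment="Ready for review", version=4, family="render", name="renderMain"
)
print(comment.splitlines()[0])
```

Kitsu renders the result as markdown, so the pipe characters become a small table under the free-text comment.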


@@ -790,7 +790,7 @@
"ExtractPlayblast": {
"capture_preset": {
"Codec": {
"compression": "jpg",
"compression": "png",
"format": "image",
"quality": 95
},
@@ -817,7 +817,8 @@
},
"Generic": {
"isolate_view": true,
"off_screen": true
"off_screen": true,
"pan_zoom": false
},
"Renderer": {
"rendererName": "vp2Renderer"


@@ -42,6 +42,7 @@
"default_variants": []
},
"auto_detect_render": {
"enabled": false,
"allow_group_rename": true,
"group_name_template": "L{group_index}",
"group_idx_offset": 10,


@@ -133,7 +133,7 @@
"linux": []
},
"arguments": {
"windows": [],
"windows": ["-U MAXScript {OPENPYPE_ROOT}\\openpype\\hosts\\max\\startup\\startup.ms"],
"darwin": [],
"linux": []
},


@@ -82,6 +82,10 @@
"type": "schema",
"name": "schema_project_slack"
},
{
"type": "schema",
"name": "schema_project_applications"
},
{
"type": "schema",
"name": "schema_project_max"


@@ -0,0 +1,14 @@
{
"type": "dict",
"key": "applications",
"label": "Applications",
"collapsible": true,
"is_file": true,
"children": [
{
"type": "boolean",
"key": "only_available",
"label": "Show only available applications"
}
]
}


@@ -22,6 +22,31 @@
]
},
{
"type": "dict",
"collapsible": true,
"key": "workfile",
"label": "Workfile",
"children": [
{
"key": "submission_overrides",
"label": "Submission workfile overrides",
"type": "enum",
"multiselection": true,
"enum_items": [
{
"render_chunk": "Pass chunk size"
},
{
"frame_range": "Pass frame range"
},
{
"resolution": "Pass resolution"
}
]
}
]
},
{
"type": "dict",
"collapsible": true,


@@ -239,27 +239,12 @@
{
"type": "number",
"key": "chunk_size",
"label": "Chunk Size"
"label": "Frame per Task"
},
{
"type": "text",
"key": "group",
"label": "Group Name"
},
{
"type": "text",
"key": "deadline_pool",
"label": "Deadline pool"
},
{
"type": "text",
"key": "deadline_pool_secondary",
"label": "Deadline pool (secondary)"
},
{
"type": "number",
"key": "framePerTask",
"label": "Frame Per Task"
}
]
},


@@ -46,12 +46,94 @@
{
"type": "boolean",
"key": "set_status_note",
"label": "Set status on note"
"label": "Set status with note"
},
{
"type": "text",
"key": "note_status_shortname",
"label": "Note shortname"
},
{
"type": "dict",
"collapsible": true,
"key": "status_change_conditions",
"label": "Status change conditions",
"children": [
{
"type": "list",
"key": "status_conditions",
"label": "Status conditions",
"object_type": {
"type": "dict",
"key": "condition_dict",
"children": [
{
"type": "enum",
"key": "condition",
"label": "Condition",
"enum_items": [
{"equal": "Equal"},
{"not_equal": "Not equal"}
]
},
{
"type": "text",
"key": "short_name",
"label": "Short name"
}
]
}
},
{
"type": "list",
"key": "family_requirements",
"label": "Family requirements",
"object_type": {
"type": "dict",
"key": "requirement_dict",
"children": [
{
"type": "enum",
"key": "condition",
"label": "Condition",
"enum_items": [
{"equal": "Equal"},
{"not_equal": "Not equal"}
]
},
{
"type": "text",
"key": "family",
"label": "Family"
}
]
}
}
]
},
{
"type": "dict",
"collapsible": true,
"checkbox_key": "enabled",
"key": "custom_comment_template",
"label": "Custom Comment Template",
"children": [
{
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "label",
"label": "Kitsu supports markdown and here you can create a custom comment template.<br/>You can use data from your publishing instance's data."
},
{
"key": "comment_template",
"type": "text",
"multiline": true,
"label": "Custom comment"
}
]
}
]
}
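The `status_conditions` schema above pairs an `equal`/`not_equal` condition with a status short name. A minimal sketch of how such a list could be evaluated (this assumes all conditions must hold; the actual plugin logic may differ):

```python
def status_conditions_met(status_conditions, current_short_name):
    # Each entry has the shape defined by the schema above:
    # {"condition": "equal" | "not_equal", "short_name": "..."}
    for item in status_conditions:
        matches = item["short_name"] == current_short_name
        if item["condition"] == "equal" and not matches:
            return False
        if item["condition"] == "not_equal" and matches:
            return False
    return True

conditions = [
    {"condition": "equal", "short_name": "wip"},
    {"condition": "not_equal", "short_name": "done"},
]
print(status_conditions_met(conditions, "wip"))
```

An empty condition list places no restriction, so the status change always applies in that case.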


@@ -202,7 +202,13 @@
"key": "auto_detect_render",
"label": "Auto-Detect Create Render",
"is_group": true,
"checkbox_key": "enabled",
"children": [
{
"type": "boolean",
"key": "enabled",
"label": "Enabled"
},
{
"type": "label",
"label": "The creator tries to auto-detect Render Layers and Render Passes in scene. For Render Layers is used group name as a variant and for Render Passes is used TVPaint layer name.<br/><br/>Group names can be renamed by their used order in scene. The renaming template where can be used <b>{group_index}</b> formatting key which is filled by \"used position index of group\".<br/>- Template: <b>L{group_index}</b><br/>- Group offset: <b>10</b><br/>- Group padding: <b>3</b><br/>Would create group names \"<b>L010</b>\", \"<b>L020</b>\", ..."


@@ -408,6 +408,71 @@
}
]
}
},
{
"type": "list",
"key": "custom_staging_dir_profiles",
"label": "Custom Staging Dir Profiles",
"use_label_wrap": true,
"docstring": "Profiles to specify special location and persistence for staging dir. Could be used in Creators and Publish phase!",
"object_type": {
"type": "dict",
"children": [
{
"type": "boolean",
"key": "active",
"label": "Is active",
"default": true
},
{
"type": "separator"
},
{
"key": "hosts",
"label": "Host names",
"type": "hosts-enum",
"multiselection": true
},
{
"key": "task_types",
"label": "Task types",
"type": "task-types-enum"
},
{
"key": "task_names",
"label": "Task names",
"type": "list",
"object_type": "text"
},
{
"key": "families",
"label": "Families",
"type": "list",
"object_type": "text"
},
{
"key": "subsets",
"label": "Subset names",
"type": "list",
"object_type": "text"
},
{
"type": "separator"
},
{
"key": "custom_staging_dir_persistent",
"label": "Custom Staging Folder Persistent",
"type": "boolean",
"default": false
},
{
"key": "template_name",
"label": "Template Name",
"type": "text",
"placeholder": "transient"
}
]
}
}
]
}
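The `custom_staging_dir_profiles` list above filters by host, task and family. A small sketch of first-match profile selection over such data (an empty filter list is treated as "match anything", mirroring common profile-filtering behaviour; this is not the exact openpype implementation):

```python
def find_profile(profiles, host=None, family=None):
    # Return the first active profile whose filters accept the context.
    for profile in profiles:
        if not profile.get("active", True):
            continue
        if profile.get("hosts") and host not in profile["hosts"]:
            continue
        if profile.get("families") and family not in profile["families"]:
            continue
        return profile
    return None

profiles = [
    {"active": True, "hosts": ["maya"], "families": ["render"],
     "custom_staging_dir_persistent": False, "template_name": "transient"},
]
print(find_profile(profiles, host="maya", family="render")["template_name"])
```

When no profile matches, the publish falls back to the default staging directory behaviour.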


@@ -91,6 +91,11 @@
"type": "boolean",
"key": "off_screen",
"label": " Off Screen"
},
{
"type": "boolean",
"key": "pan_zoom",
"label": " 2D Pan/Zoom"
}
]
},
@@ -156,7 +161,7 @@
{
"type": "boolean",
"key": "override_viewport_options",
"label": "override_viewport_options"
"label": "Override Viewport Options"
},
{
"type": "enum",


@@ -19,6 +19,7 @@ from openpype.lib.applications import (
CUSTOM_LAUNCH_APP_GROUPS,
ApplicationManager
)
from openpype.settings import get_project_settings
from openpype.pipeline import discover_launcher_actions
from openpype.tools.utils.lib import (
DynamicQThread,
@@ -94,6 +95,8 @@ class ActionModel(QtGui.QStandardItemModel):
if not project_doc:
return actions
project_settings = get_project_settings(project_name)
only_available = project_settings["applications"]["only_available"]
self.application_manager.refresh()
for app_def in project_doc["config"]["apps"]:
app_name = app_def["name"]
@@ -104,6 +107,9 @@
if app.group.name in CUSTOM_LAUNCH_APP_GROUPS:
continue
if only_available and not app.find_executable():
continue
# Get from app definition, if not there from app in project
action = type(
"app_{}".format(app_name),


@@ -72,8 +72,8 @@ class HierarchyView(QtWidgets.QTreeView):
column_delegate_defs = {
"name": NameDef(),
"type": TypeDef(),
"frameStart": NumberDef(1),
"frameEnd": NumberDef(1),
"frameStart": NumberDef(0),
"frameEnd": NumberDef(0),
"fps": NumberDef(1, decimals=3, step=1),
"resolutionWidth": NumberDef(0),
"resolutionHeight": NumberDef(0),


@@ -1,4 +1,4 @@
from qtpy import QtCore
from qtpy import QtCore, QtGui
# ID of context item in instance view
CONTEXT_ID = "context"
@@ -26,6 +26,9 @@ GROUP_ROLE = QtCore.Qt.UserRole + 7
CONVERTER_IDENTIFIER_ROLE = QtCore.Qt.UserRole + 8
CREATOR_SORT_ROLE = QtCore.Qt.UserRole + 9
ResetKeySequence = QtGui.QKeySequence(
QtCore.Qt.ControlModifier | QtCore.Qt.Key_R
)
__all__ = (
"CONTEXT_ID",


@@ -6,7 +6,7 @@ import collections
import uuid
import tempfile
import shutil
from abc import ABCMeta, abstractmethod, abstractproperty
from abc import ABCMeta, abstractmethod
import six
import pyblish.api
@@ -964,7 +964,8 @@ class AbstractPublisherController(object):
access objects directly but by using wrappers that can be serialized.
"""
@abstractproperty
@property
@abstractmethod
def log(self):
"""Controller's logger object.
@@ -974,13 +975,15 @@
pass
@abstractproperty
@property
@abstractmethod
def event_system(self):
"""Inner event system for publisher controller."""
pass
@abstractproperty
@property
@abstractmethod
def project_name(self):
"""Current context project name.
@@ -990,7 +993,8 @@
pass
@abstractproperty
@property
@abstractmethod
def current_asset_name(self):
"""Current context asset name.
@@ -1000,7 +1004,8 @@
pass
@abstractproperty
@property
@abstractmethod
def current_task_name(self):
"""Current context task name.
@@ -1010,7 +1015,21 @@
pass
@abstractproperty
@property
@abstractmethod
def host_context_has_changed(self):
"""Host context changed after last reset.
'CreateContext' has this option available using 'context_has_changed'.
Returns:
bool: Context has changed.
"""
pass
@property
@abstractmethod
def host_is_valid(self):
"""Host is valid for creation part.
@@ -1023,7 +1042,8 @@
pass
@abstractproperty
@property
@abstractmethod
def instances(self):
"""Collected/created instances.
@@ -1134,7 +1154,13 @@
@abstractmethod
def save_changes(self):
"""Save changes in create context."""
"""Save changes in create context.
Save can crash because of unexpected errors.
Returns:
bool: Save was successful.
"""
pass
@@ -1145,7 +1171,19 @@
pass
@abstractproperty
@property
@abstractmethod
def publish_has_started(self):
"""Has publishing finished.
Returns:
bool: If publishing finished and all plugins were iterated.
"""
pass
@property
@abstractmethod
def publish_has_finished(self):
"""Has publishing finished.
@@ -1155,7 +1193,8 @@
pass
@abstractproperty
@property
@abstractmethod
def publish_is_running(self):
"""Publishing is running right now.
@@ -1165,7 +1204,8 @@
pass
@abstractproperty
@property
@abstractmethod
def publish_has_validated(self):
"""Publish validation passed.
@@ -1175,7 +1215,8 @@
pass
@abstractproperty
@property
@abstractmethod
def publish_has_crashed(self):
"""Publishing crashed for any reason.
@@ -1185,7 +1226,8 @@
pass
@abstractproperty
@property
@abstractmethod
def publish_has_validation_errors(self):
"""During validation happened at least one validation error.
@@ -1195,7 +1237,8 @@
pass
@abstractproperty
@property
@abstractmethod
def publish_max_progress(self):
"""Get maximum possible progress number.
@@ -1205,7 +1248,8 @@
pass
@abstractproperty
@property
@abstractmethod
def publish_progress(self):
"""Current progress number.
@@ -1215,7 +1259,8 @@
pass
@abstractproperty
@property
@abstractmethod
def publish_error_msg(self):
"""Current error message which cause fail of publishing.
@@ -1267,7 +1312,8 @@
pass
@abstractproperty
@property
@abstractmethod
def convertor_items(self):
pass
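The repeated change in the hunks above swaps the deprecated `abc.abstractproperty` for `@property` stacked on `@abstractmethod`, the spelling recommended since Python 3.3. A minimal, self-contained illustration of the pattern (class names here are made up):

```python
from abc import ABC, abstractmethod

class Controller(ABC):
    # abc.abstractproperty is deprecated; the modern spelling stacks
    # @property on top of @abstractmethod, exactly as the diff does.
    @property
    @abstractmethod
    def project_name(self):
        """Current context project name."""

class DemoController(Controller):
    @property
    def project_name(self):
        return "demo_project"

print(DemoController().project_name)
```

Decorator order matters: `@abstractmethod` must be innermost so the abstractness is recorded on the function before `property` wraps it.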
@@ -1356,6 +1402,7 @@ class BasePublisherController(AbstractPublisherController):
self._publish_has_validation_errors = False
self._publish_has_crashed = False
# All publish plugins are processed
self._publish_has_started = False
self._publish_has_finished = False
self._publish_max_progress = 0
self._publish_progress = 0
@@ -1386,7 +1433,8 @@
"show.card.message" - Show card message request (UI related).
"instances.refresh.finished" - Instances are refreshed.
"plugins.refresh.finished" - Plugins refreshed.
"publish.reset.finished" - Publish context reset finished.
"publish.reset.finished" - Reset finished.
"controller.reset.started" - Controller reset started.
"controller.reset.finished" - Controller reset finished.
"publish.process.started" - Publishing started. Can be started from
paused state.
@@ -1425,7 +1473,16 @@
def _set_host_is_valid(self, value):
if self._host_is_valid != value:
self._host_is_valid = value
self._emit_event("publish.host_is_valid.changed", {"value": value})
self._emit_event(
"publish.host_is_valid.changed", {"value": value}
)
def _get_publish_has_started(self):
return self._publish_has_started
def _set_publish_has_started(self, value):
if value != self._publish_has_started:
self._publish_has_started = value
def _get_publish_has_finished(self):
return self._publish_has_finished
@@ -1449,7 +1506,9 @@
def _set_publish_has_validated(self, value):
if self._publish_has_validated != value:
self._publish_has_validated = value
self._emit_event("publish.has_validated.changed", {"value": value})
self._emit_event(
"publish.has_validated.changed", {"value": value}
)
def _get_publish_has_crashed(self):
return self._publish_has_crashed
@@ -1497,6 +1556,9 @@
host_is_valid = property(
_get_host_is_valid, _set_host_is_valid
)
publish_has_started = property(
_get_publish_has_started, _set_publish_has_started
)
publish_has_finished = property(
_get_publish_has_finished, _set_publish_has_finished
)
@@ -1526,6 +1588,7 @@
"""Reset most of attributes that can be reset."""
self.publish_is_running = False
self.publish_has_started = False
self.publish_has_validated = False
self.publish_has_crashed = False
self.publish_has_validation_errors = False
@@ -1645,10 +1708,7 @@
str: Project name.
"""
if not hasattr(self._host, "get_current_context"):
return legacy_io.active_project()
return self._host.get_current_context()["project_name"]
return self._create_context.get_current_project_name()
@property
def current_asset_name(self):
@@ -1658,10 +1718,7 @@
Union[str, None]: Asset name or None if asset is not set.
"""
if not hasattr(self._host, "get_current_context"):
return legacy_io.Session["AVALON_ASSET"]
return self._host.get_current_context()["asset_name"]
return self._create_context.get_current_asset_name()
@property
def current_task_name(self):
@@ -1671,10 +1728,11 @@
Union[str, None]: Task name or None if task is not set.
"""
if not hasattr(self._host, "get_current_context"):
return legacy_io.Session["AVALON_TASK"]
return self._create_context.get_current_task_name()
return self._host.get_current_context()["task_name"]
@property
def host_context_has_changed(self):
return self._create_context.context_has_changed
@property
def instances(self):
@@ -1751,6 +1809,8 @@
"""Reset everything related to creation and publishing."""
self.stop_publish()
self._emit_event("controller.reset.started")
self.host_is_valid = self._create_context.host_is_valid
self._create_context.reset_preparation()
@@ -1992,7 +2052,15 @@
)
def trigger_convertor_items(self, convertor_identifiers):
self.save_changes()
"""Trigger legacy item convertors.
This functionality requires to save and reset CreateContext. The reset
is needed so Creators can collect converted items.
Args:
convertor_identifiers (list[str]): Identifiers of convertor
plugins.
"""
success = True
try:
@@ -2039,13 +2107,33 @@
self._on_create_instance_change()
return success
def save_changes(self):
"""Save changes happened during creation."""
def save_changes(self, show_message=True):
"""Save changes happened during creation.
Trigger save of changes using host api. This functionality does not
validate anything. It is required to do checks before this method is
called to be able to give user actionable response e.g. check of
context using 'host_context_has_changed'.
Args:
show_message (bool): Show message that changes were
saved successfully.
Returns:
bool: Save of changes was successful.
"""
if not self._create_context.host_is_valid:
return
# TODO remove
# Fake success save when host is not valid for CreateContext
# this is for testing as experimental feature
return True
try:
self._create_context.save_changes()
if show_message:
self.emit_card_message("Saved changes..")
return True
except CreatorsOperationFailed as exc:
self._emit_event(
@@ -2056,16 +2144,17 @@
}
)
return False
def remove_instances(self, instance_ids):
"""Remove instances based on instance ids.
Args:
instance_ids (List[str]): List of instance ids to remove.
"""
# QUESTION Expect that instances are really removed? In that case save
# reset is not required and save changes too.
self.save_changes()
# QUESTION Expect that instances are really removed? In that case reset
# is not required.
self._remove_instances_from_context(instance_ids)
self._on_create_instance_change()
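The reworked `save_changes` above returns a bool instead of saving implicitly before publish. A sketch of the caller pattern its docstring asks for: check the save result before starting a publish (`DummyController` is a made-up stand-in, not an openpype class):

```python
class DummyController:
    # Minimal stand-in honouring the documented contract:
    # save_changes() returns True on success, False on failure.
    def __init__(self, save_ok=True):
        self._save_ok = save_ok
        self.published = False

    def save_changes(self, show_message=True):
        return self._save_ok

    def publish(self):
        self.published = True

def publish_with_save(controller):
    # Callers are expected to check save_changes before publishing,
    # as the save_changes docstring above describes.
    if not controller.save_changes(show_message=False):
        return False
    controller.publish()
    return True

print(publish_with_save(DummyController(save_ok=True)))
```

Moving the save out of `_start_publish` lets the UI surface a failed save to the user instead of publishing stale data.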
@@ -2136,12 +2225,22 @@
self._publish_comment_is_set = True
def publish(self):
"""Run publishing."""
"""Run publishing.
Make sure all changes are saved before method is called (Call
'save_changes' and check output).
"""
self._publish_up_validation = False
self._start_publish()
def validate(self):
"""Run publishing and stop after Validation."""
"""Run publishing and stop after Validation.
Make sure all changes are saved before method is called (Call
'save_changes' and check output).
"""
if self.publish_has_validated:
return
self._publish_up_validation = True
@@ -2152,10 +2251,8 @@
if self.publish_is_running:
return
# Make sure changes are saved
self.save_changes()
self.publish_is_running = True
self.publish_has_started = True
self._emit_event("publish.process.started")


@@ -4,8 +4,9 @@ from .icons import (
get_icon
)
from .widgets import (
StopBtn,
SaveBtn,
ResetBtn,
StopBtn,
ValidateBtn,
PublishBtn,
CreateNextPageOverlay,
@@ -25,8 +26,9 @@
"get_pixmap",
"get_icon",
"StopBtn",
"SaveBtn",
"ResetBtn",
"StopBtn",
"ValidateBtn",
"PublishBtn",
"CreateNextPageOverlay",
