Mirror of https://github.com/ynput/ayon-core.git (synced 2026-01-01 16:34:53 +01:00)
Merge branch 'develop' into enhancement/OP-5468_3dsMax-render-dialogue-needs-to-be-closed
This commit is contained in: commit 168d89013d

170 changed files with 5936 additions and 1497 deletions

.github/ISSUE_TEMPLATE/bug_report.md (vendored) | 33
@@ -1,33 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''

---
**Running version**
[ex. 3.14.1-nightly.2]

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Desktop (please complete the following information):**
 - OS: [e.g. windows]
 - Host: [e.g. Maya, Nuke, Houdini]

**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/bug_report.yml (new file, vendored) | 183
@@ -0,0 +1,183 @@
name: Bug Report
description: File a bug report
title: 'Bug: '
labels:
  - 'type: bug'
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to fill out this bug report!
  - type: checkboxes
    attributes:
      label: Is there an existing issue for this?
      description: >-
        Please search to see if an issue already exists for the bug you
        encountered.
      options:
        - label: I have searched the existing issues
          required: true
  - type: textarea
    attributes:
      label: 'Current Behavior:'
      description: A concise description of what you're experiencing.
    validations:
      required: true
  - type: textarea
    attributes:
      label: 'Expected Behavior:'
      description: A concise description of what you expected to happen.
    validations:
      required: false
  - type: dropdown
    id: _version
    attributes:
      label: Version
      description: What version are you running? Look to OpenPype Tray
      options:
        - 3.15.4-nightly.3
        - 3.15.4-nightly.2
        - 3.15.4-nightly.1
        - 3.15.3
        - 3.15.3-nightly.4
        - 3.15.3-nightly.3
        - 3.15.3-nightly.2
        - 3.15.3-nightly.1
        - 3.15.2
        - 3.15.2-nightly.6
        - 3.15.2-nightly.5
        - 3.15.2-nightly.4
        - 3.15.2-nightly.3
        - 3.15.2-nightly.2
        - 3.15.2-nightly.1
        - 3.15.1
        - 3.15.1-nightly.6
        - 3.15.1-nightly.5
        - 3.15.1-nightly.4
        - 3.15.1-nightly.3
        - 3.15.1-nightly.2
        - 3.15.1-nightly.1
        - 3.15.0
        - 3.15.0-nightly.1
        - 3.14.11-nightly.4
        - 3.14.11-nightly.3
        - 3.14.11-nightly.2
        - 3.14.11-nightly.1
        - 3.14.10
        - 3.14.10-nightly.9
        - 3.14.10-nightly.8
        - 3.14.10-nightly.7
        - 3.14.10-nightly.6
        - 3.14.10-nightly.5
        - 3.14.10-nightly.4
        - 3.14.10-nightly.3
        - 3.14.10-nightly.2
        - 3.14.10-nightly.1
        - 3.14.9
        - 3.14.9-nightly.5
        - 3.14.9-nightly.4
        - 3.14.9-nightly.3
        - 3.14.9-nightly.2
        - 3.14.9-nightly.1
        - 3.14.8
        - 3.14.8-nightly.4
        - 3.14.8-nightly.3
        - 3.14.8-nightly.2
        - 3.14.8-nightly.1
        - 3.14.7
        - 3.14.7-nightly.8
        - 3.14.7-nightly.7
        - 3.14.7-nightly.6
        - 3.14.7-nightly.5
        - 3.14.7-nightly.4
        - 3.14.7-nightly.3
        - 3.14.7-nightly.2
        - 3.14.7-nightly.1
        - 3.14.6
        - 3.14.6-nightly.3
        - 3.14.6-nightly.2
        - 3.14.6-nightly.1
        - 3.14.5
        - 3.14.5-nightly.3
        - 3.14.5-nightly.2
        - 3.14.5-nightly.1
        - 3.14.4
        - 3.14.4-nightly.4
        - 3.14.4-nightly.3
        - 3.14.4-nightly.2
        - 3.14.4-nightly.1
        - 3.14.3
        - 3.14.3-nightly.7
        - 3.14.3-nightly.6
        - 3.14.3-nightly.5
        - 3.14.3-nightly.4
        - 3.14.3-nightly.3
        - 3.14.3-nightly.2
        - 3.14.3-nightly.1
        - 3.14.2
        - 3.14.2-nightly.5
        - 3.14.2-nightly.4
        - 3.14.2-nightly.3
        - 3.14.2-nightly.2
        - 3.14.2-nightly.1
        - 3.14.1
        - 3.14.1-nightly.4
        - 3.14.1-nightly.3
        - 3.14.1-nightly.2
        - 3.14.1-nightly.1
        - 3.14.0
        - 3.14.0-nightly.1
        - 3.13.1-nightly.3
        - 3.13.1-nightly.2
        - 3.13.1-nightly.1
        - 3.13.0
        - 3.13.0-nightly.1
        - 3.12.3-nightly.3
        - 3.12.3-nightly.2
        - 3.12.3-nightly.1
    validations:
      required: true
  - type: dropdown
    validations:
      required: true
    attributes:
      label: What platform you are running OpenPype on?
      description: |
        Please specify the operating systems you are running OpenPype with.
      multiple: true
      options:
        - Windows
        - Linux / Centos
        - Linux / Ubuntu
        - Linux / RedHat
        - MacOS
  - type: textarea
    id: to-reproduce
    attributes:
      label: 'Steps To Reproduce:'
      description: Steps to reproduce the behavior.
      placeholder: |
        1. How did the configuration look like
        2. What type of action was made
    validations:
      required: true
  - type: checkboxes
    attributes:
      label: Are there any labels you wish to add?
      description: Please search labels and identify those related to your bug.
      options:
        - label: I have added the relevant labels to the bug report.
          required: true
  - type: textarea
    id: logs
    attributes:
      label: 'Relevant log output:'
      description: >-
        Please copy and paste any relevant log output. This will be
        automatically formatted into code, so no need for backticks.
      render: shell
  - type: textarea
    id: additional-context
    attributes:
      label: 'Additional context:'
      description: Add any other context about the problem here.

.github/ISSUE_TEMPLATE/config.yml (new file, vendored) | 8
@@ -0,0 +1,8 @@
blank_issues_enabled: false
contact_links:
  - name: Ynput Community Discussions
    url: https://community.ynput.io
    about: Please ask and answer questions here.
  - name: Ynput Discord Server
    url: https://discord.gg/ynput
    about: For community quick chats.

.github/ISSUE_TEMPLATE/enhancement_request.yml (new file, vendored) | 52
@@ -0,0 +1,52 @@
name: Enhancement Request
description: Create a report to help us enhance a particular feature
title: "Enhancement: "
labels:
  - "type: enhancement"
body:
  - type: markdown
    attributes:
      value: |
        Thanks for taking the time to fill out this enhancement request report!
  - type: checkboxes
    attributes:
      label: Is there an existing issue for this?
      description: Please search to see if an issue already exists for the bug you encountered.
      options:
        - label: I have searched the existing issues.
          required: true
  - type: textarea
    id: related-feature
    attributes:
      label: Please describe the feature you have in mind and explain what the current shortcomings are?
      description: A clear and concise description of what the problem is.
    validations:
      required: true
  - type: textarea
    id: enhancement-proposal
    attributes:
      label: How would you imagine the implementation of the feature?
      description: A clear and concise description of what you want to happen.
    validations:
      required: true
  - type: checkboxes
    attributes:
      label: Are there any labels you wish to add?
      description: Please search labels and identify those related to your enhancement.
      options:
        - label: I have added the relevant labels to the enhancement request.
          required: true
  - type: textarea
    id: alternatives
    attributes:
      label: "Describe alternatives you've considered:"
      description: A clear and concise description of any alternative solutions or features you've considered.
    validations:
      required: false
  - type: textarea
    id: additional-context
    attributes:
      label: "Additional context:"
      description: Add any other context or screenshots about the enhancement request here.
    validations:
      required: false

.github/ISSUE_TEMPLATE/feature_request.md (vendored) | 20
@@ -1,20 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.

.github/pr-branch-labeler.yml (vendored) | 2
@@ -12,4 +12,4 @@

  # Apply label "release" if base matches "release/*"
  'Bump Minor':
-   base: "release/next-minor"
+   base: "release/next-minor"

.github/workflows/documentation.yml (vendored) | 2
@@ -1,4 +1,4 @@
-name: documentation
+name: 📜 Documentation

on:
  pull_request:

.github/workflows/milestone_assign.yml (vendored) | 2
@@ -1,4 +1,4 @@
-name: Milestone - assign to PRs
+name: 👉🏻 Milestone - assign to PRs

on:
  pull_request_target:

.github/workflows/milestone_create.yml (vendored) | 2
@@ -1,4 +1,4 @@
-name: Milestone - create default
+name: ➕ Milestone - create default

on:
  milestone:

@@ -1,4 +1,4 @@
-name: Milestone Release [trigger]
+name: 🚩 Milestone Release [trigger]

on:
  workflow_dispatch:

@@ -45,3 +45,6 @@ jobs:
          token: ${{ secrets.YNPUT_BOT_TOKEN }}
          user_email: ${{ secrets.CI_EMAIL }}
          user_name: ${{ secrets.CI_USER }}
+         cu_api_key: ${{ secrets.CLICKUP_API_KEY }}
+         cu_team_id: ${{ secrets.CLICKUP_TEAM_ID }}
+         cu_field_id: ${{ secrets.CLICKUP_RELEASE_FIELD_ID }}

.github/workflows/nightly_merge.yml (vendored) | 2
@@ -1,4 +1,4 @@
-name: Dev -> Main
+name: 🔀 Dev -> Main

on:
  schedule:

.github/workflows/pr_labels.yml (new file, vendored) | 49
@@ -0,0 +1,49 @@
name: 🔖 PR labels

on:
  pull_request_target:
    types: [opened, assigned]

jobs:
  size-label:
    name: pr_size_label
    runs-on: ubuntu-latest
    if: github.event.action == 'assigned' || github.event.action == 'opened'
    steps:
      - name: Add size label
        uses: "pascalgn/size-label-action@v0.4.3"
        env:
          GITHUB_TOKEN: "${{ secrets.YNPUT_BOT_TOKEN }}"
          IGNORED: ".gitignore\n*.md\n*.json"
        with:
          sizes: >
            {
              "0": "XS",
              "100": "S",
              "500": "M",
              "1000": "L",
              "1500": "XL",
              "2500": "XXL"
            }

  label_prs_branch:
    name: pr_branch_label
    runs-on: ubuntu-latest
    if: github.event.action == 'assigned' || github.event.action == 'opened'
    steps:
      - name: Label PRs - Branch name detection
        uses: ffittschen/pr-branch-labeler@v1
        with:
          repo-token: ${{ secrets.YNPUT_BOT_TOKEN }}

  label_prs_globe:
    name: pr_globe_label
    runs-on: ubuntu-latest
    if: github.event.action == 'assigned' || github.event.action == 'opened'
    steps:
      - name: Label PRs - Globe detection
        uses: actions/labeler@v4.0.3
        with:
          repo-token: ${{ secrets.YNPUT_BOT_TOKEN }}
          configuration-path: ".github/pr-glob-labeler.yml"
          sync-labels: false

.github/workflows/prerelease.yml (vendored) | 2
@@ -1,4 +1,4 @@
-name: Nightly Prerelease
+name: ⏳ Nightly Prerelease

on:
  workflow_dispatch:

@@ -1,8 +1,6 @@
-name: project-actions
+name: 📊 Project task statuses

on:
-  pull_request_target:
-    types: [opened, assigned]
  pull_request_review:
    types: [submitted]
  issue_comment:

@@ -20,11 +18,16 @@ jobs:
      # - PR issue comment which is not form Ynbot
      # - PR review comment which is not Hound (or any other bot)
      # - PR review submitted which is not from Hound (or any other bot) and is not 'Changes requested'
+     # - make sure it only runs if not forked repo
      # -----------------------------
      if: |
-       (github.event_name == 'issue_comment' && github.event.comment.user.id != 82967070) ||
-       (github.event_name == 'pull_request_review_comment' && github.event.comment.user.type != 'Bot') ||
-       (github.event_name == 'pull_request_review' && github.event.review.state != 'changes_requested' && github.event.review.user.type != 'Bot')
+       (github.event_name == 'issue_comment' && github.event.pull_request.head.repo.owner.login == 'ynput' && github.event.comment.user.id != 82967070) ||
+       (github.event_name == 'pull_request_review_comment' && github.event.pull_request.head.repo.owner.login == 'ynput' && github.event.comment.user.type != 'Bot') ||
+       (github.event_name == 'pull_request_review' &&
+         github.event.pull_request.head.repo.owner.login == 'ynput' &&
+         github.event.review.state != 'changes_requested' &&
+         github.event.review.state != 'approved' &&
+         github.event.review.user.type != 'Bot')
      steps:
        - name: Move PR to 'Review In Progress'
          uses: leonsteinhaeuser/project-beta-automations@v2.1.0

@@ -42,7 +45,7 @@ jobs:
      # -----------------------------
      name: pr_review_requested
      runs-on: ubuntu-latest
-     if: github.event_name == 'pull_request_review' && github.event.review.state == 'changes_requested'
+     if: github.event_name == 'pull_request_review' && github.event.pull_request.head.repo.owner.login == 'ynput' && github.event.review.state == 'changes_requested'
      steps:
        - name: Set branch env
          run: echo "BRANCH_NAME=${{ github.event.pull_request.head.ref}}" >> $GITHUB_ENV

@@ -65,53 +68,3 @@ jobs:
            -d '{
              "status": "in progress"
            }'
-
-  size-label:
-    name: pr_size_label
-    runs-on: ubuntu-latest
-    if: |
-      (github.event_name == 'pull_request' && github.event.action == 'assigned') ||
-      (github.event_name == 'pull_request' && github.event.action == 'opened')
-
-    steps:
-      - name: Add size label
-        uses: "pascalgn/size-label-action@v0.4.3"
-        env:
-          GITHUB_TOKEN: "${{ secrets.YNPUT_BOT_TOKEN }}"
-          IGNORED: ".gitignore\n*.md\n*.json"
-        with:
-          sizes: >
-            {
-              "0": "XS",
-              "100": "S",
-              "500": "M",
-              "1000": "L",
-              "1500": "XL",
-              "2500": "XXL"
-            }
-
-  label_prs_branch:
-    name: pr_branch_label
-    runs-on: ubuntu-latest
-    if: |
-      (github.event_name == 'pull_request' && github.event.action == 'assigned') ||
-      (github.event_name == 'pull_request' && github.event.action == 'opened')
-    steps:
-      - name: Label PRs - Branch name detection
-        uses: ffittschen/pr-branch-labeler@v1
-        with:
-          repo-token: ${{ secrets.YNPUT_BOT_TOKEN }}
-
-  label_prs_globe:
-    name: pr_globe_label
-    runs-on: ubuntu-latest
-    if: |
-      (github.event_name == 'pull_request' && github.event.action == 'assigned') ||
-      (github.event_name == 'pull_request' && github.event.action == 'opened')
-    steps:
-      - name: Label PRs - Globe detection
-        uses: actions/labeler@v4.0.3
-        with:
-          repo-token: ${{ secrets.YNPUT_BOT_TOKEN }}
-          configuration-path: ".github/pr-glob-labeler.yml"
-          sync-labels: false

.github/workflows/test_build.yml (vendored) | 2
@@ -1,7 +1,7 @@
# This workflow will upload a Python Package using Twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries

-name: Test Build
+name: 🏗️ Test Build

on:
  pull_request:

.github/workflows/update_bug_report.yml (new file, vendored) | 25
@@ -0,0 +1,25 @@
name: 🐞 Update Bug Report

on:
  workflow_dispatch:
  release:
    # https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#release
    types: [published]

jobs:
  update-bug-report:
    runs-on: ubuntu-latest
    name: Update bug report
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ github.event.release.target_commitish }}
      - name: Update version
        uses: ynput/gha-populate-form-version@main
        with:
          github_token: ${{ secrets.YNPUT_BOT_TOKEN }}
          registry: github
          dropdown: _version
          limit_to: 100
          form: .github/ISSUE_TEMPLATE/bug_report.yml
          commit_message: 'chore(): update bug report / version'

CHANGELOG.md | 943
@@ -1,5 +1,948 @@
# Changelog


## [3.15.4](https://github.com/ynput/OpenPype/tree/3.15.4)


[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.3...3.15.4)

### **🆕 New features**


<details>
<summary>Maya: Cant assign shaders to the ass file - OP-4859 <a href="https://github.com/ynput/OpenPype/pull/4460">#4460</a></summary>

<strong>Support AiStandIn nodes for look assignment.</strong>

Using operators we assign shaders and attributes/parameters to nodes within standins. Initially there is only support for a limited amount of attributes, but we can add support as needed:
```
primaryVisibility
castsShadows
receiveShadows
aiSelfShadows
aiOpaque
aiMatte
aiVisibleInDiffuseTransmission
aiVisibleInSpecularTransmission
aiVisibleInVolume
aiVisibleInDiffuseReflection
aiVisibleInSpecularReflection
aiSubdivUvSmoothing
aiDispHeight
aiDispPadding
aiDispZeroValue
aiStepSize
aiVolumePadding
aiSubdivType
aiSubdivIterations
```

___

</details>
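
To make the operator mechanism above concrete, here is a minimal, hedged Maya sketch. It is not the plugin's actual implementation: the node names (`standinShape`, `aiStandardSurface1`), the selection expression, and the Arnold-side parameter names in the assignment strings are illustrative assumptions.

```python
from maya import cmds

# Hedged sketch: an aiSetParameter operator can override parameters on nodes
# inside an aiStandIn without expanding it. Names below are placeholders.
op = cmds.createNode("aiSetParameter")
cmds.setAttr(op + ".selection", "*", type="string")
# Assignment strings use Arnold-side parameter names; "opaque" corresponds
# to the Maya-side aiOpaque attribute from the list above.
cmds.setAttr(op + ".assignment[0]", "shader = 'aiStandardSurface1'", type="string")
cmds.setAttr(op + ".assignment[1]", "opaque = false", type="string")
# Connect the operator to the standin so it is applied at render time.
cmds.connectAttr(op + ".out", "standinShape.operators[0]", force=True)
```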


<details>
<summary>Maya: GPU cache representation <a href="https://github.com/ynput/OpenPype/pull/4649">#4649</a></summary>

Implement GPU cache for model, animation and pointcache.

___

</details>
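
For context, loading a GPU cache in Maya amounts to a `gpuCache` shape node pointing at an Alembic file. A hedged sketch follows; the file path and node name are placeholders, not the loader's actual code.

```python
from maya import cmds

# Hedged sketch of what a gpuCache representation load boils down to.
cmds.loadPlugin("gpuCache", quiet=True)
shape = cmds.createNode("gpuCache", name="modelMainGpuCacheShape")
cmds.setAttr(shape + ".cacheFileName", "/publish/model/modelMain.abc", type="string")
cmds.setAttr(shape + ".cacheGeomPath", "|", type="string")  # "|" = whole hierarchy
```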


<details>
<summary>Houdini: Implement review family with opengl node <a href="https://github.com/ynput/OpenPype/pull/3839">#3839</a></summary>

<strong>Implements a first pass for Reviews publishing in Houdini. Resolves #2720.</strong>

Uses the `opengl` ROP node to produce PNG images.

___

</details>


<details>
<summary>Maya: Camera focal length visible in review - OP-3278 <a href="https://github.com/ynput/OpenPype/pull/4531">#4531</a></summary>

<strong>Camera focal length visible in review.</strong>

Support camera focal length in review, both static and dynamic. Resolves #3220.

___

</details>


<details>
<summary>Maya: Defining plugins to load on Maya start - OP-4994 <a href="https://github.com/ynput/OpenPype/pull/4714">#4714</a></summary>

Feature to define plugins to load on Maya launch.

___

</details>
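
A minimal sketch of what loading a configured plug-in list on launch involves; the plug-in names and the idea of reading them from settings are illustrative assumptions, not the feature's actual code.

```python
from maya import cmds

# Hedged sketch: ensure each configured plug-in is loaded exactly once.
plugins_to_load = ["mtoa", "AbcExport", "AbcImport"]  # would come from settings
for plugin in plugins_to_load:
    if not cmds.pluginInfo(plugin, query=True, loaded=True):
        cmds.loadPlugin(plugin, quiet=True)
```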


<details>
<summary>Nuke, DL: Returning Suspended Publishing attribute <a href="https://github.com/ynput/OpenPype/pull/4715">#4715</a></summary>

The old Nuke Publisher's feature for suspending the publishing job on the render farm was added back to the current Publisher.

___

</details>


<details>
<summary>Settings UI: Allow setting a size hint for text fields <a href="https://github.com/ynput/OpenPype/pull/4821">#4821</a></summary>

The text entity has `minimum_lines_count`, which allows changing the minimum size hint of the UI input.

___

</details>
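
A hypothetical schema fragment showing the new key, written as a Python dict for brevity; every key except `minimum_lines_count` is an illustrative assumption.

```python
# Hypothetical text-entity schema fragment; only "minimum_lines_count" is the
# documented new key, the surrounding keys are illustrative.
text_entity = {
    "type": "text",
    "key": "extra_arguments",
    "label": "Extra arguments",
    "multiline": True,
    "minimum_lines_count": 8,  # the input reserves room for 8 lines of text
}
```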


<details>
<summary>TrayPublisher: Move 'BatchMovieCreator' settings to 'create' subcategory <a href="https://github.com/ynput/OpenPype/pull/4827">#4827</a></summary>

Moved settings for `BatchMovieCreator` into the `create` subcategory in settings. The changes match the settings schema and structure of other hosts.

___

</details>

### **🚀 Enhancements**


<details>
<summary>Maya looks: support for native Redshift texture format <a href="https://github.com/ynput/OpenPype/pull/2971">#2971</a></summary>

<strong>Add support for native Redshift textures handling. Closes #2599.</strong>

Uses Redshift's Texture Processor executable to convert textures being used in renders to the Redshift ".rstexbin" format.

___

</details>


<details>
<summary>Maya: custom namespace for references <a href="https://github.com/ynput/OpenPype/pull/4511">#4511</a></summary>

<strong>Adding an option in Project Settings > Maya > Loader plugins to set a custom namespace. If no namespace is set, the default one is used.</strong>

___

</details>


<details>
<summary>Maya: Set correct framerange with handles on file opening <a href="https://github.com/ynput/OpenPype/pull/4664">#4664</a></summary>

Set the playback range from the asset data, counting handles, to get the correct data when calling the "collect_animation_data" function.

___

</details>


<details>
<summary>Maya: Fix camera update <a href="https://github.com/ynput/OpenPype/pull/4751">#4751</a></summary>

Fix resetting any modelPanel to a different camera when loading a camera and updating.

___

</details>


<details>
<summary>Maya: Remove single assembly validation for animation instances <a href="https://github.com/ynput/OpenPype/pull/4840">#4840</a></summary>

Rig groups may now be parented to other groups when the `includeParentHierarchy` attribute on the instance is "off".

___

</details>


<details>
<summary>Maya: Optional control of display lights on playblast. <a href="https://github.com/ynput/OpenPype/pull/4145">#4145</a></summary>

<strong>Optional control of display lights on playblast.</strong>

Giving control over which display lights are on the playblasts.

___

</details>


<details>
<summary>Kitsu: note family requirements <a href="https://github.com/ynput/OpenPype/pull/4551">#4551</a></summary>

<strong>Allows adding family requirements to the `IntegrateKitsuNote` task status change.</strong>

Adds a `Family requirements` setting to `Integrate Kitsu Note`, so you can add requirements that determine whether the Kitsu task status should be changed, based on which families are published. For instance, you could have the status change only if a subset other than workfile is published (while workfile can still be included) by adding an item set to `Not equal` and `workfile`.

___

</details>
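
A minimal sketch of how such a requirement could be evaluated, assuming a simple list-of-conditions shape for the setting; the condition names and data shapes are illustrative, not the plugin's actual schema.

```python
# Illustrative evaluation of `Family requirements`: the status changes only
# when every condition holds for the set of published families.
def status_should_change(published_families, requirements):
    families = set(published_families)
    for condition, family in requirements:
        if condition == "equal" and family not in families:
            return False
        # "not_equal": publishing must not consist of this family alone.
        if condition == "not_equal" and families == {family}:
            return False
    return True

# Status changes because something besides the workfile was published.
print(status_should_change(["workfile", "render"], [("not_equal", "workfile")]))  # True
print(status_should_change(["workfile"], [("not_equal", "workfile")]))            # False
```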


<details>
<summary>Deactivate closed Kitsu projects on OP <a href="https://github.com/ynput/OpenPype/pull/4619">#4619</a></summary>

Deactivate a project on OP when the project is closed on Kitsu.

___

</details>


<details>
<summary>Maya: Suggestion to change capture labels. <a href="https://github.com/ynput/OpenPype/pull/4691">#4691</a></summary>

Change capture labels.

___

</details>


<details>
<summary>Houdini: Change node type for OpenPypeContext `null` -> `subnet` <a href="https://github.com/ynput/OpenPype/pull/4745">#4745</a></summary>

Change the node type for OpenPype's hidden context node in Houdini from `null` to `subnet`. This fixes #4734.

___

</details>


<details>
<summary>General: Extract burnin hosts filters <a href="https://github.com/ynput/OpenPype/pull/4749">#4749</a></summary>

Removed the hosts filter from the ExtractBurnin plugin. An instance without representations no longer causes a crash; the instance is simply skipped. We discovered this because Blender already has review but did not create burnins.

___

</details>


<details>
<summary>Global: Improve speed of Collect Custom Staging Directory <a href="https://github.com/ynput/OpenPype/pull/4768">#4768</a></summary>

Improve speed of Collect Custom Staging Directory.

___

</details>


<details>
<summary>General: Anatomy templates formatting <a href="https://github.com/ynput/OpenPype/pull/4773">#4773</a></summary>

Added an option to format only a single template from anatomy instead of formatting all of them every time. Formatting all templates causes slowdowns, e.g. during publishing of hundreds of instances.

___

</details>
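
A minimal sketch of the optimization under illustrative names (not the real Anatomy API): resolve one template on demand instead of formatting the whole template tree for every instance.

```python
# Illustrative only; the actual Anatomy class and its method names differ.
def format_single_template(templates, data, category, name):
    template = templates[category][name]  # e.g. templates["publish"]["path"]
    return template.format(**data)        # only this one string is resolved

templates = {"publish": {"path": "{root}/{project}/publish/{subset}/v{version:0>3}"}}
data = {"root": "/mnt/projects", "project": "demo", "subset": "modelMain", "version": 5}
print(format_single_template(templates, data, "publish", "path"))
# /mnt/projects/demo/publish/modelMain/v005
```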


<details>
<summary>Harmony: Handle zip files with deeper structure <a href="https://github.com/ynput/OpenPype/pull/4782">#4782</a></summary>

External Harmony zip files might contain one additional level with the scene name.

___

</details>


<details>
<summary>Unreal: Use common logic to configure executable <a href="https://github.com/ynput/OpenPype/pull/4788">#4788</a></summary>

The Unreal Editor location and version were autodetected. This eased configuration in some cases but was not flexible enough. This PR changes the way the Unreal Editor location is set, unifying it with the logic other hosts use.

___

</details>


<details>
<summary>Github: Grammar tweaks + uppercase issue title <a href="https://github.com/ynput/OpenPype/pull/4813">#4813</a></summary>

Tweak some of the grammar in the issue form templates.

___

</details>


<details>
<summary>Houdini: Allow creation of publish instances via Houdini TAB menu <a href="https://github.com/ynput/OpenPype/pull/4831">#4831</a></summary>

Register the available Creators as Houdini tools so an artist can add publish instances via the Houdini TAB node search menu from within the network editor.

___

</details>

### **🐛 Bug fixes**


<details>
<summary>Maya: Fix Collect Render for V-Ray, Redshift and Renderman for missing colorspace <a href="https://github.com/ynput/OpenPype/pull/4650">#4650</a></summary>

Fix Collect Render not working for Redshift, V-Ray and Renderman due to a missing `colorspace` argument to the `RenderProduct` dataclass.

___

</details>
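
The fix reduces to the render-product description carrying the colorspace each renderer's collector passes in; a hedged reduction (all fields other than `colorspace` are assumptions about the dataclass):

```python
from dataclasses import dataclass

# Hedged reduction of the fix: collectors construct RenderProduct with a
# colorspace argument, so the dataclass must declare that field.
@dataclass
class RenderProduct:
    productName: str
    colorspace: str      # the previously missing argument
    ext: str = "exr"
    camera: str = ""
```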


<details>
<summary>Maya: Xgen fixes <a href="https://github.com/ynput/OpenPype/pull/4707">#4707</a></summary>

Fix for Xgen extraction of world-parented nodes and validation for the required namespace.

___

</details>


<details>
<summary>Maya: Fix extract review and thumbnail for Maya 2020 <a href="https://github.com/ynput/OpenPype/pull/4744">#4744</a></summary>

Fix playblasting in Maya 2020 with override viewport options enabled. Fixes #4730.

___

</details>


<details>
<summary>Maya: local variable 'arnold_standins' referenced before assignment - OP-5542 <a href="https://github.com/ynput/OpenPype/pull/4778">#4778</a></summary>

MayaLookAssigner erroring when MTOA is not loaded:
```
# Traceback (most recent call last):
#   File "\openpype\hosts\maya\tools\mayalookassigner\app.py", line 272, in on_process_selected
#     nodes = list(set(item["nodes"]).difference(arnold_standins))
# UnboundLocalError: local variable 'arnold_standins' referenced before assignment
```

___

</details>


<details>
<summary>Maya: Fix getting view and display in Maya 2020 - OP-5035 <a href="https://github.com/ynput/OpenPype/pull/4795">#4795</a></summary>

The `view_transform` returns a different format in Maya 2020. Fixes #4540 (hopefully).

___

</details>


<details>
<summary>Maya: Fix Look Maya 2020 Py2 support for Extract Look <a href="https://github.com/ynput/OpenPype/pull/4808">#4808</a></summary>

Fix Extract Look supporting Python 2.7 for Maya 2020.

___

</details>


<details>
<summary>Maya: Fix Validate Mesh Overlapping UVs plugin <a href="https://github.com/ynput/OpenPype/pull/4816">#4816</a></summary>

Fix a typo in the code where a Maya command returns a `list` instead of a `str`.

___

</details>


<details>
<summary>Maya: Fix tile rendering with Vray - OP-5566 <a href="https://github.com/ynput/OpenPype/pull/4832">#4832</a></summary>

Fixes tile rendering with Vray.

___

</details>


<details>
<summary>Deadline: checking existing frames fails when there is number in file name <a href="https://github.com/ynput/OpenPype/pull/4698">#4698</a></summary>

The previous implementation of the validator failed on files with any other number in the rendered file names. The regular expression pattern used now handles numbers in the file names (e.g. "Main_beauty.v001.1001.exr", "Main_beauty_v001.1001.exr", "Main_beauty.1001.1001.exr") but not numbers behind the frame (e.g. "Main_beauty.1001.v001.exr").

___

</details>
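
A hedged sketch of the matching idea (the exact pattern in the validator is an assumption): anchor the frame number to the position just before the extension, so other numbers in the name do not match.

```python
import re

# The frame must be the last dot-separated numeric token before the extension.
FRAME_RE = re.compile(r"\.(?P<frame>\d+)\.[a-zA-Z]+$")

for name in ("Main_beauty.v001.1001.exr",
             "Main_beauty_v001.1001.exr",
             "Main_beauty.1001.1001.exr",
             "Main_beauty.1001.v001.exr"):
    match = FRAME_RE.search(name)
    print(name, "->", match.group("frame") if match else "no frame")
# The first three yield frame 1001; the last one (a non-numeric token behind
# the frame position) is intentionally not matched.
```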


<details>
<summary>Maya: Validate Render Settings. <a href="https://github.com/ynput/OpenPype/pull/4735">#4735</a></summary>

Fixes the error message when using attribute validation.

___

</details>


<details>
<summary>General: Hero version sites recalculation <a href="https://github.com/ynput/OpenPype/pull/4737">#4737</a></summary>

Sites recalculation in integrate hero version expected that exactly the same number of files is integrated as in the previous integration. That is often not the case, so the sites recalculation now happens differently: first, all sites from the previous representation files are collected, and all of them are added to each file in the new representation.

___

</details>
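
A simplified sketch of the recalculation described above; the dictionary shapes are assumptions, not the integrator's actual data model.

```python
# Collect every site seen on the previous hero representation's files, then
# stamp that combined site list onto each file of the new representation.
def recalculate_sites(old_files, new_files):
    combined_sites = []
    seen_names = set()
    for old_file in old_files:
        for site in old_file.get("sites", []):
            if site["name"] not in seen_names:
                seen_names.add(site["name"])
                combined_sites.append(site)

    for new_file in new_files:
        new_file["sites"] = [dict(site) for site in combined_sites]
    return new_files
```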


<details>
<summary>Houdini: Fix collect current file <a href="https://github.com/ynput/OpenPype/pull/4739">#4739</a></summary>

Fixes the workfile publishing getting added into every instance being published from Houdini.

___

</details>


<details>
<summary>Global: Fix Extract Burnin + Colorspace functions for conflicting python environments with PYTHONHOME <a href="https://github.com/ynput/OpenPype/pull/4740">#4740</a></summary>

This fixes running OpenPype processes from e.g. a host with conflicting Python versions that had `PYTHONHOME` set in addition to `PYTHONPATH`, like Houdini with Python 3.7 together with OpenPype with Python 3.9, when using Extract Burnin for a review in #3839. The fix applies to Extract Burnin and some of the colorspace functions that use `run_openpype_process`.

___

</details>
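
The underlying idea, sketched under assumptions (the real logic lives in `run_openpype_process`; this standalone helper is illustrative): strip the host's Python environment variables before spawning the OpenPype process.

```python
import os
import subprocess

# Drop the host's PYTHONHOME/PYTHONPATH so the spawned OpenPype interpreter
# (Python 3.9) does not inherit settings from the host's Python (e.g. 3.7).
def run_with_clean_python_env(args):
    env = os.environ.copy()
    for key in ("PYTHONHOME", "PYTHONPATH"):
        env.pop(key, None)
    return subprocess.run(args, env=env, check=True)
```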


<details>
<summary>Harmony: render what is in timeline in Harmony locally <a href="https://github.com/ynput/OpenPype/pull/4741">#4741</a></summary>

Previously it wasn't possible to render according to the scene start/end set in the timeline, only according to the whole timeline. This allows artists to override what is in the DB with what they require (with `Validate Scene Settings` disabled). Now an artist can extend the scene by additional frames that shouldn't be rendered but might be desired. Removed the explicit application of scene settings (e.g. applying frames and resolution directly to the scene after launch) and added a separate menu item to allow artists to do it themselves.

___

</details>


<details>
<summary>Maya: Extract Review settings add Use Background Gradient <a href="https://github.com/ynput/OpenPype/pull/4747">#4747</a></summary>

Add a Display Gradient Background toggle in settings to fix support for setting a flat background color for reviews.

___

</details>


<details>
<summary>Nuke: publisher is offering review on write families on demand <a href="https://github.com/ynput/OpenPype/pull/4755">#4755</a></summary>

The original idea, where the reviewable toggle is offered in the publisher on demand, is fixed, and the `review` attribute can now be disabled in settings.

___

</details>


<details>
<summary>Workfiles: keep Browse always enabled <a href="https://github.com/ynput/OpenPype/pull/4766">#4766</a></summary>

Browse might make sense even if there are no workfiles present; actually, that is when it makes the most sense (e.g. I want to locate a workfile from outside, from the Desktop for example).

___

</details>


<details>
<summary>Global: label key in instance data is optional <a href="https://github.com/ynput/OpenPype/pull/4779">#4779</a></summary>

The Collect OTIO review plugin no longer crashes if the `label` key is missing in instance data.

___

</details>


<details>
<summary>Loader: Fix missing variable <a href="https://github.com/ynput/OpenPype/pull/4781">#4781</a></summary>

The `handles` variable was missing in the loader tool after https://github.com/ynput/OpenPype/pull/4746. The variable was renamed to `handles_label` and is initialized to `None` if handles are not available.

___

</details>


<details>
<summary>Nuke: Workfile Template builder fixes <a href="https://github.com/ynput/OpenPype/pull/4783">#4783</a></summary>

The popup window after Nuke start no longer shows. Knobs with X/Y coordinates on nodes that were converted from placeholders are not added if `keepPlaceholders` is switched off.

___

</details>


<details>
<summary>Maya: Add family filter 'review' to burnin profile with focal length <a href="https://github.com/ynput/OpenPype/pull/4791">#4791</a></summary>

Avoid the burnin profile with the `focalLength` key for renders; use it only for playblast reviews.

___

</details>


<details>
<summary>add farm instance to the render collector in 3dsMax <a href="https://github.com/ynput/OpenPype/pull/4794">#4794</a></summary>

Bug fix for the failure of submitting the publish job in 3dsMax.

___

</details>


<details>
<summary>Publisher: Plugin active attribute is respected <a href="https://github.com/ynput/OpenPype/pull/4798">#4798</a></summary>

The Publisher considers the plugin's `active` attribute, so a plugin is not processed when `active` is set to `False`. But we use the attribute in `OptionalPyblishPluginMixin` for different purposes, so a bypass of the active-state validation was added when a plugin inherits from the mixin. This is a temporary solution which cannot be changed until all hosts use the Publisher, otherwise global plugins would be broken. Also, plugins which have `enabled` set to `False` are filtered out; this happened only when automated settings were applied and the settings contained an `"enabled"` key set to `False`.

___

</details>


<details>
<summary>Nuke: settings and optional attribute in publisher for some validators <a href="https://github.com/ynput/OpenPype/pull/4811">#4811</a></summary>

The new publisher supports an optional switch for plugins, offered in the Publisher's right panel. Some plugins were missing this switch and also the settings which would offer the optionality.

___

</details>


<details>
<summary>Settings: Version settings popup fix <a href="https://github.com/ynput/OpenPype/pull/4822">#4822</a></summary>

The version completer popup has issues on some platforms; this should fix those edge cases. Also fixed an issue where the completer stayed shown after reset (save).

___

</details>


<details>
<summary>Hiero/Nuke: adding monitorOut key to settings <a href="https://github.com/ynput/OpenPype/pull/4826">#4826</a></summary>

New versions of Hiero introduced a new colorspace property for Monitor Out. It has been added to project settings, along with new config names in the settings enumerator option.

___

</details>


<details>
<summary>Nuke: removed default workfile template builder preset <a href="https://github.com/ynput/OpenPype/pull/4835">#4835</a></summary>

The default for the workfile template builder should have been empty.

___

</details>


<details>
<summary>TVPaint: Review can be made from any instance <a href="https://github.com/ynput/OpenPype/pull/4843">#4843</a></summary>

Add a `"review"` tag to the output of extract sequence if the instance is marked for review. Until now, only instances with the family `"review"` were able to define input for the `ExtractReview` plugin, which is not right.

___

</details>
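
In representation terms the change is tiny; a hedged sketch with stand-in data (the representation keys follow common pyblish/OpenPype conventions, and the instance/file stand-ins are assumptions):

```python
# Stand-ins for the extractor context (assumptions for illustration).
instance_data = {"review": True, "representations": []}
output_files = ["frame.0001.png", "frame.0002.png"]

representation = {
    "name": "png",
    "ext": "png",
    "files": output_files,
    "tags": [],
}
# Mark the sequence as reviewable whenever the instance asks for review,
# regardless of the instance family.
if instance_data.get("review"):
    representation["tags"].append("review")  # picked up by ExtractReview
instance_data["representations"].append(representation)
```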


### **🔀 Refactored code**


<details>
<summary>Deadline: Remove unused FramesPerTask job info submission <a href="https://github.com/ynput/OpenPype/pull/4657">#4657</a></summary>

Remove unused `FramesPerTask` job info submission to Deadline.

___

</details>


<details>
<summary>Maya: Remove pymel dependency <a href="https://github.com/ynput/OpenPype/pull/4724">#4724</a></summary>

Refactors code written using `pymel` to use standard Maya Python libraries instead, like `maya.cmds` or `maya.api.OpenMaya`.

___

</details>


<details>
<summary>Remove "preview" data from representation <a href="https://github.com/ynput/OpenPype/pull/4759">#4759</a></summary>

Remove "preview" data from representation.

___

</details>


<details>
<summary>Maya: Collect Review cleanup code for attached subsets <a href="https://github.com/ynput/OpenPype/pull/4720">#4720</a></summary>

Refactor some code for Maya: Collect Review for attached subsets.

___

</details>


<details>
<summary>Refactor: Remove `handles`, `edit_in` and `edit_out` backwards compatibility <a href="https://github.com/ynput/OpenPype/pull/4746">#4746</a></summary>

Removes the backward compatibility fallback for data called `handles`, `edit_in` and `edit_out`.

___

</details>

### **📃 Documentation**


<details>
<summary>Bump webpack from 5.69.1 to 5.76.1 in /website <a href="https://github.com/ynput/OpenPype/pull/4624">#4624</a></summary>

Bumps [webpack](https://github.com/webpack/webpack) from 5.69.1 to 5.76.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/webpack/webpack/releases">webpack's releases</a>.</em></p>
<blockquote>
<h2>v5.76.1</h2>
<h2>Fixed</h2>
<ul>
<li>Added <code>assert/strict</code> built-in to <code>NodeTargetPlugin</code></li>
</ul>
<h2>Revert</h2>
<ul>
<li>Improve performance of <code>hashRegExp</code> lookup by <a href="https://github.com/ryanwilsonperkin"><code>@ryanwilsonperkin</code></a> in <a href="https://redirect.github.com/webpack/webpack/pull/16759">webpack/webpack#16759</a></li>
</ul>
<h2>v5.76.0</h2>
<h2>Bugfixes</h2>
<ul>
<li>Avoid cross-realm object access by <a href="https://github.com/Jack-Works"><code>@Jack-Works</code></a> in <a href="https://redirect.github.com/webpack/webpack/pull/16500">webpack/webpack#16500</a></li>
<li>Improve hash performance via conditional initialization by <a href="https://github.com/lvivski"><code>@lvivski</code></a> in <a href="https://redirect.github.com/webpack/webpack/pull/16491">webpack/webpack#16491</a></li>
<li>Serialize <code>generatedCode</code> info to fix bug in asset module cache restoration by <a href="https://github.com/ryanwilsonperkin"><code>@ryanwilsonperkin</code></a> in <a href="https://redirect.github.com/webpack/webpack/pull/16703">webpack/webpack#16703</a></li>
<li>Improve performance of <code>hashRegExp</code> lookup by <a href="https://github.com/ryanwilsonperkin"><code>@ryanwilsonperkin</code></a> in <a href="https://redirect.github.com/webpack/webpack/pull/16759">webpack/webpack#16759</a></li>
</ul>
<h2>Features</h2>
<ul>
<li>add <code>target</code> to <code>LoaderContext</code> type by <a href="https://github.com/askoufis"><code>@askoufis</code></a> in <a href="https://redirect.github.com/webpack/webpack/pull/16781">webpack/webpack#16781</a></li>
</ul>
<h2>Security</h2>
<ul>
<li><a href="https://github.com/advisories/GHSA-3rfm-jhwj-7488">CVE-2022-37603</a> fixed by <a href="https://github.com/akhilgkrishnan"><code>@akhilgkrishnan</code></a> in <a href="https://redirect.github.com/webpack/webpack/pull/16446">webpack/webpack#16446</a></li>
</ul>
<h2>Repo Changes</h2>
<ul>
<li>Fix HTML5 logo in README by <a href="https://github.com/jakebailey"><code>@jakebailey</code></a> in <a href="https://redirect.github.com/webpack/webpack/pull/16614">webpack/webpack#16614</a></li>
<li>Replace TypeScript logo in README by <a href="https://github.com/jakebailey"><code>@jakebailey</code></a> in <a href="https://redirect.github.com/webpack/webpack/pull/16613">webpack/webpack#16613</a></li>
<li>Update actions/cache dependencies by <a href="https://github.com/piwysocki"><code>@piwysocki</code></a> in <a href="https://redirect.github.com/webpack/webpack/pull/16493">webpack/webpack#16493</a></li>
</ul>
<h2>New Contributors</h2>
<ul>
<li><a href="https://github.com/Jack-Works"><code>@Jack-Works</code></a> made their first contribution in <a href="https://redirect.github.com/webpack/webpack/pull/16500">webpack/webpack#16500</a></li>
<li><a href="https://github.com/lvivski"><code>@lvivski</code></a> made their first contribution in <a href="https://redirect.github.com/webpack/webpack/pull/16491">webpack/webpack#16491</a></li>
<li><a href="https://github.com/jakebailey"><code>@jakebailey</code></a> made their first contribution in <a href="https://redirect.github.com/webpack/webpack/pull/16614">webpack/webpack#16614</a></li>
<li><a href="https://github.com/akhilgkrishnan"><code>@akhilgkrishnan</code></a> made their first contribution in <a href="https://redirect.github.com/webpack/webpack/pull/16446">webpack/webpack#16446</a></li>
<li><a href="https://github.com/ryanwilsonperkin"><code>@ryanwilsonperkin</code></a> made their first contribution in <a href="https://redirect.github.com/webpack/webpack/pull/16703">webpack/webpack#16703</a></li>
<li><a href="https://github.com/piwysocki"><code>@piwysocki</code></a> made their first contribution in <a href="https://redirect.github.com/webpack/webpack/pull/16493">webpack/webpack#16493</a></li>
<li><a href="https://github.com/askoufis"><code>@askoufis</code></a> made their first contribution in <a href="https://redirect.github.com/webpack/webpack/pull/16781">webpack/webpack#16781</a></li>
</ul>
<p><strong>Full Changelog</strong>: <a href="https://github.com/webpack/webpack/compare/v5.75.0...v5.76.0">https://github.com/webpack/webpack/compare/v5.75.0...v5.76.0</a></p>
<h2>v5.75.0</h2>
<h1>Bugfixes</h1>
<ul>
<li><code>experiments.*</code> normalize to <code>false</code> when opt-out</li>
<li>avoid <code>NaN%</code></li>
<li>show the correct error when using a conflicting chunk name in code</li>
<li>HMR code tests existance of <code>window</code> before trying to access it</li>
<li>fix <code>eval-nosources-*</code> actually exclude sources</li>
<li>fix race condition where no module is returned from processing module</li>
<li>fix position of standalong semicolon in runtime code</li>
</ul>
<h1>Features</h1>
<ul>
<li>add support for <code>@import</code> to extenal CSS when using experimental CSS in node</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/webpack/webpack/commit/21be52b681c477f8ebc41c1b0e7a7a8ac4fa7008"><code>21be52b</code></a> Merge pull request <a href="https://redirect.github.com/webpack/webpack/issues/16804">#16804</a> from webpack/chore-patch-release</li>
<li><a href="https://github.com/webpack/webpack/commit/1cce945dd6c3576d37d3940a0233fd087ce3f6ff"><code>1cce945</code></a> chore(release): 5.76.1</li>
<li><a href="https://github.com/webpack/webpack/commit/e76ad9e724410f10209caa2ba86875ca8cf5ed61"><code>e76ad9e</code></a> Merge pull request <a href="https://redirect.github.com/webpack/webpack/issues/16803">#16803</a> from ryanwilsonperkin/revert-16759-real-content-has...</li>
<li><a href="https://github.com/webpack/webpack/commit/52b1b0e4ada7c11e7f1b4f3d69b50684938c684e"><code>52b1b0e</code></a> Revert "Improve performance of hashRegExp lookup"</li>
<li><a href="https://github.com/webpack/webpack/commit/c989143379d344543e4161fec60f3a21beb9e3ce"><code>c989143</code></a> Merge pull request <a href="https://redirect.github.com/webpack/webpack/issues/16766">#16766</a> from piranna/patch-1</li>
<li><a href="https://github.com/webpack/webpack/commit/710eaf4ddaea505e040a24beeb45a769f9e3761b"><code>710eaf4</code></a> Merge pull request <a href="https://redirect.github.com/webpack/webpack/issues/16789">#16789</a> from dmichon-msft/contenthash-hashsalt</li>
<li><a href="https://github.com/webpack/webpack/commit/5d6446822aff579a5d3d9503ec2a16437d2f71d1"><code>5d64468</code></a> Merge pull request <a href="https://redirect.github.com/webpack/webpack/issues/16792">#16792</a> from webpack/update-version</li>
<li><a href="https://github.com/webpack/webpack/commit/67af5ec1f05fb7cf06be6acf27353aef105ddcbc"><code>67af5ec</code></a> chore(release): 5.76.0</li>
<li><a href="https://github.com/webpack/webpack/commit/97b1718720c33f1b17302a74c5284b01e02ec001"><code>97b1718</code></a> Merge pull request <a href="https://redirect.github.com/webpack/webpack/issues/16781">#16781</a> from askoufis/loader-context-target-type</li>
<li><a href="https://github.com/webpack/webpack/commit/b84efe6224b276bf72e4c5e2f4e76acddfaeef07"><code>b84efe6</code></a> Merge pull request <a href="https://redirect.github.com/webpack/webpack/issues/16759">#16759</a> from ryanwilsonperkin/real-content-hash-regex-perf</li>
<li>Additional commits viewable in <a href="https://github.com/webpack/webpack/compare/v5.69.1...v5.76.1">compare view</a></li>
</ul>
</details>
<details>
<summary>Maintainer changes</summary>
<p>This version was pushed to npm by <a href="https://www.npmjs.com/~evilebottnawi">evilebottnawi</a>, a new releaser for webpack since your current version.</p>
</details>
<br />

[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.

[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)

---

<details>
<summary>Dependabot commands and options</summary>
<br />

You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/ynput/OpenPype/network/alerts).

</details>
___

</details>


<details>
<summary>Documentation: Add Extract Burnin documentation <a href="https://github.com/ynput/OpenPype/pull/4765">#4765</a></summary>

Add documentation for Extract Burnin global plugin settings.

___

</details>


<details>
<summary>Documentation: Move publisher related tips to publisher area <a href="https://github.com/ynput/OpenPype/pull/4772">#4772</a></summary>

Move publisher-related tips in the After Effects artist documentation to the correct position.

___

</details>


<details>
<summary>Documentation: Add extra terminology to the key concepts glossary <a href="https://github.com/ynput/OpenPype/pull/4838">#4838</a></summary>

Tweak some of the key concepts in the documentation.

___

</details>
### **Merged pull requests**
|
||||
|
||||
|
||||
<details>
|
||||
<summary>Maya: Refactor Extract Look with dedicated processors for maketx <a href="https://github.com/ynput/OpenPype/pull/4711">#4711</a></summary>
|
||||
|
||||
Refactor Maya extract look to fix some issues:
|
||||
- [x] Allow Extraction with maketx with OCIO Color Management enabled in Maya.
|
||||
- [x] Fix file hashing so it includes arguments to maketx, so that when arguments change it correctly generates a new hash
|
||||
- [x] Fix maketx destination colorspace when OCIO is enabled
|
||||
- [x] Use pre-collected colorspaces of the resources instead of trying to retrieve again in Extract Look
|
||||
- [x] Fix colorspace attributes being reinterpreted by maya on export (fix remapping) - goal is to resolve #2337
|
||||
- [x] Fix support for checking config path of maya default OCIO config (due to using `lib.get_color_management_preferences` which remaps that path)
|
||||
- [x] Merged in #2971 to refactor MakeTX into TextureProcessor and also support generating Redshift `.rstexbin` files. - goal is to resolve #2599
|
||||
- [x] Allow custom arguments to `maketx` from OpenPype Settings like mentioned here by @fabiaserra for arguments like: `--monochrome-detect`, `--opaque-detect`, `--checknan`.
|
||||
- [x] Actually fix the code and make it work. :) (I'll try to keep below checkboxes in sync with my code changes)
|
||||
- [x] Publishing without texture processor should work (no maketx + no rstexbin)
|
||||
- [x] Publishing with maketx should work
|
||||
- [x] Publishing with rstexbin should work
|
||||
- [x] Test it. (This is just me doing some test-runs, please still test the PR!)
|
||||
|
||||
|
||||
___
|
||||
|
||||
</details>
|
||||
|
||||
|
||||
<details>
|
||||
<summary>Maya template builder load all assets linked to the shot <a href="https://github.com/ynput/OpenPype/pull/4761">#4761</a></summary>
|
||||
|
||||
Problem
|
||||
All the assets of the ftrack project are loaded and not those linked to the shot
|
||||
|
||||
How get error
|
||||
Open maya in the context of shot, then build a new scene with the "Build Workfile from template" button in "OpenPype" menu.
|
||||

|
||||
|
||||
|
||||
___
|
||||
|
||||
</details>
|
||||
|
||||
|
||||
<details>
<summary>Global: Do not force instance data with frame ranges of the asset <a href="https://github.com/ynput/OpenPype/pull/4383">#4383</a></summary>

<strong>This aims to resolve #4317.</strong>

___

</details>

<details>
<summary>Cosmetics: Fix some grammar in docstrings and messages (and some code) <a href="https://github.com/ynput/OpenPype/pull/4752">#4752</a></summary>

Tweak some grammar in the codebase.

___

</details>

<details>
<summary>Deadline: Submit publish job fails due to root work hardcode - OP-5528 <a href="https://github.com/ynput/OpenPype/pull/4775">#4775</a></summary>

Generating config templates was hardcoded to `root[work]`; this PR makes the root configurable. A small sketch of why that matters follows.
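
As a minimal illustration (the names and paths below are made up and this is not the PR's actual code), a path template that references a specific root breaks whenever it is always formatted against a hardcoded root key:

```python
# Anatomy-style roots mapping; values are illustrative.
roots = {"work": "/mnt/work", "publish": "/mnt/publish"}

template = "{root[publish]}/{project}/{asset}"

# Formatting with the full roots mapping resolves whichever root the
# template actually asks for, instead of assuming root[work].
path = template.format(root=roots, project="demo", asset="sh010")
print(path)  # /mnt/publish/demo/sh010
```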

___

</details>

<details>
<summary>CreateContext: Added option to remove Unknown attributes <a href="https://github.com/ynput/OpenPype/pull/4776">#4776</a></summary>

Added an option to remove attributes with `UnkownAttrDef` on instances. Popping the key also removes the attribute definition from the attribute values, so they're not recreated again. The sketch below shows the general idea.
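
A minimal sketch of the idea in plain Python (this is not OpenPype's actual API and the key names are hypothetical): removing the stored value together with its definition prevents the unknown attribute from reappearing on the next refresh.

```python
# Instance attribute values and their definitions, keyed by name.
attribute_values = {"legacy_setting": True, "fps": 25}
attribute_defs = {"legacy_setting": "UnknownDef", "fps": "NumberDef"}

# Pop both the value and its definition so the attribute
# is not recreated when definitions are rebuilt.
attribute_values.pop("legacy_setting", None)
attribute_defs.pop("legacy_setting", None)
```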

___

</details>


## [3.15.3](https://github.com/ynput/OpenPype/tree/3.15.3)

@@ -1216,7 +1216,7 @@ def get_representations(
        version_ids=version_ids,
        context_filters=context_filters,
        names_by_version_ids=names_by_version_ids,
-       standard=True,
+       standard=standard,
        archived=archived,
        fields=fields
    )

@@ -42,13 +42,5 @@ class AddLastWorkfileToLaunchArgs(PreLaunchHook):
            self.log.info("Current context does not have any workfile yet.")
            return

-       # Determine whether to open workfile post initialization.
-       if self.host_name == "maya":
-           key = "open_workfile_post_initialization"
-           if self.data["project_settings"]["maya"][key]:
-               self.log.debug("Opening workfile post initialization.")
-               self.data["env"]["OPENPYPE_" + key.upper()] = "1"
-               return
-
        # Add path to workfile to arguments
        self.launch_context.launch_args.append(last_workfile)

@@ -53,10 +53,10 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
            "active": True,
            "asset": asset_entity["name"],
            "task": task,
-           "frameStart": asset_entity["data"]["frameStart"],
-           "frameEnd": asset_entity["data"]["frameEnd"],
-           "handleStart": asset_entity["data"]["handleStart"],
-           "handleEnd": asset_entity["data"]["handleEnd"],
+           "frameStart": context.data['frameStart'],
+           "frameEnd": context.data['frameEnd'],
+           "handleStart": context.data['handleStart'],
+           "handleEnd": context.data['handleEnd'],
            "fps": asset_entity["data"]["fps"],
            "resolutionWidth": asset_entity["data"].get(
                "resolutionWidth",

@@ -72,8 +72,7 @@ class FusionSetFrameRangeWithHandlesLoader(load.LoaderPlugin):
            return

        # Include handles
-       handles = version_data.get("handles", 0)
-       start -= handles
-       end += handles
+       start -= version_data.get("handleStart", 0)
+       end += version_data.get("handleEnd", 0)

        lib.update_frame_range(start, end)

@@ -242,9 +242,15 @@ def launch_zip_file(filepath):
    print(f"Localizing {filepath}")

    temp_path = get_local_harmony_path(filepath)
+   scene_name = os.path.basename(temp_path)
+   if os.path.exists(os.path.join(temp_path, scene_name)):
+       # unzipped with duplicated scene_name
+       temp_path = os.path.join(temp_path, scene_name)
+
    scene_path = os.path.join(
-       temp_path, os.path.basename(temp_path) + ".xstage"
+       temp_path, scene_name + ".xstage"
    )

    unzip = False
    if os.path.exists(scene_path):
        # Check remote scene is newer than local.

@@ -262,6 +268,10 @@ def launch_zip_file(filepath):
        with _ZipFile(filepath, "r") as zip_ref:
            zip_ref.extractall(temp_path)

+       if os.path.exists(os.path.join(temp_path, scene_name)):
+           # unzipped with duplicated scene_name
+           temp_path = os.path.join(temp_path, scene_name)
+
    # Close existing scene.
    if ProcessContext.pid:
        os.kill(ProcessContext.pid, signal.SIGTERM)

@@ -309,7 +319,7 @@ def launch_zip_file(filepath):
    )

    if not os.path.exists(scene_path):
-       print("error: cannot determine scene file")
+       print("error: cannot determine scene file {}".format(scene_path))
        ProcessContext.server.stop()
        return

openpype/hosts/houdini/api/creator_node_shelves.py (new file, 185 lines)
@@ -0,0 +1,185 @@
"""Library to register OpenPype Creators for Houdini TAB node search menu.
|
||||
|
||||
This can be used to install custom houdini tools for the TAB search
|
||||
menu which will trigger a publish instance to be created interactively.
|
||||
|
||||
The Creators are automatically registered on launch of Houdini through the
|
||||
Houdini integration's `host.install()` method.
|
||||
|
||||
"""
|
||||
import contextlib
|
||||
import tempfile
|
||||
import logging
|
||||
import os
|
||||
|
||||
from openpype.pipeline import registered_host
|
||||
from openpype.pipeline.create import CreateContext
|
||||
from openpype.resources import get_openpype_icon_filepath
|
||||
|
||||
import hou
|
||||
|
||||
log = logging.getLogger(__name__)
|
||||
|
||||
CREATE_SCRIPT = """
|
||||
from openpype.hosts.houdini.api.creator_node_shelves import create_interactive
|
||||
create_interactive("{identifier}")
|
||||
"""
|
||||
|
||||
|
||||
def create_interactive(creator_identifier):
|
||||
"""Create a Creator using its identifier interactively.
|
||||
|
||||
This is used by the generated shelf tools as callback when a user selects
|
||||
the creator from the node tab search menu.
|
||||
|
||||
Args:
|
||||
creator_identifier (str): The creator identifier of the Creator plugin
|
||||
to create.
|
||||
|
||||
Return:
|
||||
list: The created instances.
|
||||
|
||||
"""
|
||||
|
||||
# TODO Use Qt instead
|
||||
result, variant = hou.ui.readInput('Define variant name',
|
||||
buttons=("Ok", "Cancel"),
|
||||
initial_contents='Main',
|
||||
title="Define variant",
|
||||
help="Set the variant for the "
|
||||
"publish instance",
|
||||
close_choice=1)
|
||||
if result == 1:
|
||||
# User interrupted
|
||||
return
|
||||
variant = variant.strip()
|
||||
if not variant:
|
||||
raise RuntimeError("Empty variant value entered.")
|
||||
|
||||
host = registered_host()
|
||||
context = CreateContext(host)
|
||||
|
||||
before = context.instances_by_id.copy()
|
||||
|
||||
# Create the instance
|
||||
context.create(
|
||||
creator_identifier=creator_identifier,
|
||||
variant=variant,
|
||||
pre_create_data={"use_selection": True}
|
||||
)
|
||||
|
||||
# For convenience we set the new node as current since that's much more
|
||||
# familiar to the artist when creating a node interactively
|
||||
# TODO Allow to disable auto-select in studio settings or user preferences
|
||||
after = context.instances_by_id
|
||||
new = set(after) - set(before)
|
||||
if new:
|
||||
# Select the new instance
|
||||
for instance_id in new:
|
||||
instance = after[instance_id]
|
||||
node = hou.node(instance.get("instance_node"))
|
||||
node.setCurrent(True)
|
||||
|
||||
return list(new)
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def shelves_change_block():
|
||||
"""Write shelf changes at the end of the context."""
|
||||
hou.shelves.beginChangeBlock()
|
||||
try:
|
||||
yield
|
||||
finally:
|
||||
hou.shelves.endChangeBlock()
|
||||
|
||||
|
||||
def install():
|
||||
"""Install the Creator plug-ins to show in Houdini's TAB node search menu.
|
||||
|
||||
This function is re-entrant and can be called again to reinstall and
|
||||
update the node definitions. For example during development it can be
|
||||
useful to call it manually:
|
||||
>>> from openpype.hosts.houdini.api.creator_node_shelves import install
|
||||
>>> install()
|
||||
|
||||
Returns:
|
||||
list: List of `hou.Tool` instances
|
||||
|
||||
"""
|
||||
|
||||
host = registered_host()
|
||||
|
||||
# Store the filepath on the host
|
||||
# TODO: Define a less hacky static shelf path for current houdini session
|
||||
filepath_attr = "_creator_node_shelf_filepath"
|
||||
filepath = getattr(host, filepath_attr, None)
|
||||
if filepath is None:
|
||||
f = tempfile.NamedTemporaryFile(prefix="houdini_creator_nodes_",
|
||||
suffix=".shelf",
|
||||
delete=False)
|
||||
f.close()
|
||||
filepath = f.name
|
||||
setattr(host, filepath_attr, filepath)
|
||||
elif os.path.exists(filepath):
|
||||
# Remove any existing shelf file so that we can completey regenerate
|
||||
# and update the tools file if creator identifiers change
|
||||
os.remove(filepath)
|
||||
|
||||
icon = get_openpype_icon_filepath()
|
||||
|
||||
# Create context only to get creator plugins, so we don't reset and only
|
||||
# populate what we need to retrieve the list of creator plugins
|
||||
create_context = CreateContext(host, reset=False)
|
||||
create_context.reset_current_context()
|
||||
create_context._reset_creator_plugins()
|
||||
|
||||
log.debug("Writing OpenPype Creator nodes to shelf: {}".format(filepath))
|
||||
tools = []
|
||||
with shelves_change_block():
|
||||
for identifier, creator in create_context.manual_creators.items():
|
||||
|
||||
# TODO: Allow the creator plug-in itself to override the categories
|
||||
# for where they are shown, by e.g. defining
|
||||
# `Creator.get_network_categories()`
|
||||
|
||||
key = "openpype_create.{}".format(identifier)
|
||||
log.debug(f"Registering {key}")
|
||||
script = CREATE_SCRIPT.format(identifier=identifier)
|
||||
data = {
|
||||
"script": script,
|
||||
"language": hou.scriptLanguage.Python,
|
||||
"icon": icon,
|
||||
"help": "Create OpenPype publish instance for {}".format(
|
||||
creator.label
|
||||
),
|
||||
"help_url": None,
|
||||
"network_categories": [
|
||||
hou.ropNodeTypeCategory(),
|
||||
hou.sopNodeTypeCategory()
|
||||
],
|
||||
"viewer_categories": [],
|
||||
"cop_viewer_categories": [],
|
||||
"network_op_type": None,
|
||||
"viewer_op_type": None,
|
||||
"locations": ["OpenPype"]
|
||||
}
|
||||
|
||||
label = "Create {}".format(creator.label)
|
||||
tool = hou.shelves.tool(key)
|
||||
if tool:
|
||||
tool.setData(**data)
|
||||
tool.setLabel(label)
|
||||
else:
|
||||
tool = hou.shelves.newTool(
|
||||
file_path=filepath,
|
||||
name=key,
|
||||
label=label,
|
||||
**data
|
||||
)
|
||||
|
||||
tools.append(tool)
|
||||
|
||||
# Ensure the shelf is reloaded
|
||||
hou.shelves.loadFile(filepath)
|
||||
|
||||
return tools
|
||||
|
|
@@ -127,6 +127,8 @@ def get_output_parameter(node):
        return node.parm("filename")
    elif node_type == "comp":
        return node.parm("copoutput")
+   elif node_type == "opengl":
+       return node.parm("picture")
    elif node_type == "arnold":
        if node.evalParm("ar_ass_export_enable"):
            return node.parm("ar_ass_file")

@@ -479,23 +481,13 @@ def reset_framerange():

    frame_start = asset_data.get("frameStart")
    frame_end = asset_data.get("frameEnd")
-   # Backwards compatibility
-   if frame_start is None or frame_end is None:
-       frame_start = asset_data.get("edit_in")
-       frame_end = asset_data.get("edit_out")
-
    if frame_start is None or frame_end is None:
        log.warning("No edit information found for %s" % asset_name)
        return

-   handles = asset_data.get("handles") or 0
-   handle_start = asset_data.get("handleStart")
-   if handle_start is None:
-       handle_start = handles
-
-   handle_end = asset_data.get("handleEnd")
-   if handle_end is None:
-       handle_end = handles
+   handle_start = asset_data.get("handleStart", 0)
+   handle_end = asset_data.get("handleEnd", 0)

    frame_start -= int(handle_start)
    frame_end += int(handle_end)

@@ -18,7 +18,7 @@ from openpype.pipeline import (
)
from openpype.pipeline.load import any_outdated_containers
from openpype.hosts.houdini import HOUDINI_HOST_DIR
-from openpype.hosts.houdini.api import lib, shelves
+from openpype.hosts.houdini.api import lib, shelves, creator_node_shelves

from openpype.lib import (
    register_event_callback,

@@ -83,6 +83,10 @@ class HoudiniHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
        _set_context_settings()
        shelves.generate_shelves()

+       if not IS_HEADLESS:
+           import hdefereval  # noqa, hdefereval is only available in ui mode
+           hdefereval.executeDeferred(creator_node_shelves.install)
+
    def has_unsaved_changes(self):
        return hou.hipFile.hasUnsavedChanges()

@@ -144,13 +148,10 @@ class HoudiniHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):

        """
        obj_network = hou.node("/obj")
-       op_ctx = obj_network.createNode("null", node_name="OpenPypeContext")
-
-       # A null in houdini by default comes with content inside to visualize
-       # the null. However since we explicitly want to hide the node lets
-       # remove the content and disable the display flag of the node
-       for node in op_ctx.children():
-           node.destroy()
+       op_ctx = obj_network.createNode("subnet",
+                                       node_name="OpenPypeContext",
+                                       run_init_scripts=False,
+                                       load_contents=False)

        op_ctx.moveToGoodPosition()
        op_ctx.setBuiltExplicitly(False)

openpype/hosts/houdini/plugins/create/create_review.py (new file, 125 lines)
@@ -0,0 +1,125 @@
# -*- coding: utf-8 -*-
"""Creator plugin for creating openGL reviews."""
from openpype.hosts.houdini.api import plugin
from openpype.lib import EnumDef, BoolDef, NumberDef


class CreateReview(plugin.HoudiniCreator):
    """Review with OpenGL ROP"""

    identifier = "io.openpype.creators.houdini.review"
    label = "Review"
    family = "review"
    icon = "video-camera"

    def create(self, subset_name, instance_data, pre_create_data):
        import hou

        instance_data.pop("active", None)
        instance_data.update({"node_type": "opengl"})
        instance_data["imageFormat"] = pre_create_data.get("imageFormat")
        instance_data["keepImages"] = pre_create_data.get("keepImages")

        instance = super(CreateReview, self).create(
            subset_name,
            instance_data,
            pre_create_data)

        instance_node = hou.node(instance.get("instance_node"))

        frame_range = hou.playbar.frameRange()

        filepath = "{root}/{subset}/{subset}.$F4.{ext}".format(
            root=hou.text.expandString("$HIP/pyblish"),
            subset="`chs(\"subset\")`",  # keep dynamic link to subset
            ext=pre_create_data.get("image_format") or "png"
        )

        parms = {
            "picture": filepath,

            "trange": 1,

            # Unlike many other ROP nodes the opengl node does not default
            # to expression of $FSTART and $FEND so we preserve that behavior
            # but do set the range to the frame range of the playbar
            "f1": frame_range[0],
            "f2": frame_range[1],
        }

        override_resolution = pre_create_data.get("override_resolution")
        if override_resolution:
            parms.update({
                "tres": override_resolution,
                "res1": pre_create_data.get("resx"),
                "res2": pre_create_data.get("resy"),
                "aspect": pre_create_data.get("aspect"),
            })

        if self.selected_nodes:
            # Use the first camera found in the selection as the camera;
            # all other node types go into the force objects
            camera = None
            force_objects = []
            for node in self.selected_nodes:
                path = node.path()
                if node.type().name() == "cam":
                    if camera:
                        continue
                    camera = path
                else:
                    force_objects.append(path)

            if not camera:
                self.log.warning("No camera found in selection.")

            parms.update({
                "camera": camera or "",
                "scenepath": "/obj",
                "forceobjects": " ".join(force_objects),
                "vobjects": ""  # clear candidate objects from '*' value
            })

        instance_node.setParms(parms)

        to_lock = ["id", "family"]

        self.lock_parameters(instance_node, to_lock)

    def get_pre_create_attr_defs(self):
        attrs = super(CreateReview, self).get_pre_create_attr_defs()

        image_format_enum = [
            "bmp", "cin", "exr", "jpg", "pic", "pic.gz", "png",
            "rad", "rat", "rta", "sgi", "tga", "tif",
        ]

        return attrs + [
            BoolDef("keepImages",
                    label="Keep Image Sequences",
                    default=False),
            EnumDef("imageFormat",
                    image_format_enum,
                    default="png",
                    label="Image Format Options"),
            BoolDef("override_resolution",
                    label="Override resolution",
                    tooltip="When disabled the resolution set on the camera "
                            "is used instead.",
                    default=True),
            NumberDef("resx",
                      label="Resolution Width",
                      default=1280,
                      minimum=2,
                      decimals=0),
            NumberDef("resy",
                      label="Resolution Height",
                      default=720,
                      minimum=2,
                      decimals=0),
            NumberDef("aspect",
                      label="Aspect Ratio",
                      default=1.0,
                      minimum=0.0001,
                      decimals=3)
        ]
@@ -14,7 +14,7 @@ class CollectFrames(pyblish.api.InstancePlugin):

    order = pyblish.api.CollectorOrder
    label = "Collect Frames"
-   families = ["vdbcache", "imagesequence", "ass", "redshiftproxy"]
+   families = ["vdbcache", "imagesequence", "ass", "redshiftproxy", "review"]

    def process(self, instance):

@@ -0,0 +1,52 @@
import hou
import pyblish.api


class CollectHoudiniReviewData(pyblish.api.InstancePlugin):
    """Collect Review Data."""

    label = "Collect Review Data"
    order = pyblish.api.CollectorOrder + 0.1
    hosts = ["houdini"]
    families = ["review"]

    def process(self, instance):

        # This fixes the burnin having the incorrect start/end timestamps
        # because without this it would take it from the context instead
        # which isn't the actual frame range that this instance renders.
        instance.data["handleStart"] = 0
        instance.data["handleEnd"] = 0

        # Get the camera from the rop node to collect the focal length
        ropnode_path = instance.data["instance_node"]
        ropnode = hou.node(ropnode_path)

        camera_path = ropnode.parm("camera").eval()
        camera_node = hou.node(camera_path)
        if not camera_node:
            raise RuntimeError("No valid camera node found on review node: "
                               "{}".format(camera_path))

        # Collect focal length.
        focal_length_parm = camera_node.parm("focal")
        if not focal_length_parm:
            self.log.warning("No 'focal' (focal length) parameter found on "
                             "camera: {}".format(camera_path))
            return

        if focal_length_parm.isTimeDependent():
            start = instance.data["frameStart"]
            end = instance.data["frameEnd"] + 1
            focal_length = [
                focal_length_parm.evalAsFloatAtFrame(t)
                for t in range(int(start), int(end))
            ]
        else:
            focal_length = focal_length_parm.evalAsFloat()

        # Store focal length in `burninDataMembers`
        burnin_members = instance.data.setdefault("burninDataMembers", {})
        burnin_members["focalLength"] = focal_length

        instance.data.setdefault("families", []).append('ftrack')

openpype/hosts/houdini/plugins/publish/extract_opengl.py (new file, 58 lines)
@@ -0,0 +1,58 @@
import os

import pyblish.api

from openpype.pipeline import (
    publish,
    OptionalPyblishPluginMixin
)
from openpype.hosts.houdini.api.lib import render_rop

import hou


class ExtractOpenGL(publish.Extractor,
                    OptionalPyblishPluginMixin):

    order = pyblish.api.ExtractorOrder - 0.01
    label = "Extract OpenGL"
    families = ["review"]
    hosts = ["houdini"]
    optional = True

    def process(self, instance):
        if not self.is_active(instance.data):
            return
        ropnode = hou.node(instance.data.get("instance_node"))

        output = ropnode.evalParm("picture")
        staging_dir = os.path.normpath(os.path.dirname(output))
        instance.data["stagingDir"] = staging_dir
        file_name = os.path.basename(output)

        self.log.info("Extracting '%s' to '%s'" % (file_name,
                                                   staging_dir))

        render_rop(ropnode)

        output = instance.data["frames"]

        tags = ["review"]
        if not instance.data.get("keepImages"):
            tags.append("delete")

        representation = {
            "name": instance.data["imageFormat"],
            "ext": instance.data["imageFormat"],
            "files": output,
            "stagingDir": staging_dir,
            "frameStart": instance.data["frameStart"],
            "frameEnd": instance.data["frameEnd"],
            "tags": tags,
            "preview": True,
            "camera_name": instance.data.get("review_camera")
        }

        if "representations" not in instance.data:
            instance.data["representations"] = []
        instance.data["representations"].append(representation)
@@ -0,0 +1,61 @@
# -*- coding: utf-8 -*-
import pyblish.api
from openpype.pipeline import PublishValidationError
import hou


class ValidateSceneReview(pyblish.api.InstancePlugin):
    """Validate some scene settings before publishing the review:

    1. Scene Path
    2. Resolution
    """

    order = pyblish.api.ValidatorOrder
    families = ["review"]
    hosts = ["houdini"]
    label = "Scene Setting for review"

    def process(self, instance):
        invalid = self.get_invalid_scene_path(instance)

        report = []
        if invalid:
            report.append(
                "Scene path does not exist: '%s'" % invalid[0],
            )

        invalid = self.get_invalid_resolution(instance)
        if invalid:
            report.extend(invalid)

        if report:
            raise PublishValidationError(
                "\n\n".join(report),
                title=self.label)

    def get_invalid_scene_path(self, instance):
        node = hou.node(instance.data.get("instance_node"))
        scene_path_parm = node.parm("scenepath")
        scene_path_node = scene_path_parm.evalAsNode()
        if not scene_path_node:
            return [scene_path_parm.evalAsString()]

    def get_invalid_resolution(self, instance):
        node = hou.node(instance.data.get("instance_node"))

        # The resolution setting is only used when Override Camera Resolution
        # is enabled. So we skip validation if it is disabled.
        override = node.parm("tres").eval()
        if not override:
            return

        invalid = []
        res_width = node.parm("res1").eval()
        res_height = node.parm("res2").eval()
        if res_width == 0:
            invalid.append("Override Resolution width is set to zero.")
        if res_height == 0:
            invalid.append("Override Resolution height is set to zero.")

        return invalid

@@ -128,14 +128,14 @@ class AvalonURIOutputProcessor(base.OutputProcessorBase):
        if not asset_doc:
            raise RuntimeError("Invalid asset name: '%s'" % asset)

-       formatted_anatomy = anatomy.format({
+       template_obj = anatomy.templates_obj["publish"]["path"]
+       path = template_obj.format_strict({
            "project": PROJECT,
            "asset": asset_doc["name"],
            "subset": subset,
            "representation": ext,
            "version": 0  # stub version zero
        })
-       path = formatted_anatomy["publish"]["path"]

        # Remove the version folder
        subset_folder = os.path.dirname(os.path.dirname(path))

@@ -215,19 +215,12 @@ def get_frame_range() -> dict:
    asset = get_current_project_asset()
    frame_start = asset["data"].get("frameStart")
    frame_end = asset["data"].get("frameEnd")
-   # Backwards compatibility
-   if frame_start is None or frame_end is None:
-       frame_start = asset["data"].get("edit_in")
-       frame_end = asset["data"].get("edit_out")
-
    if frame_start is None or frame_end is None:
        return
-   handles = asset["data"].get("handles") or 0
-   handle_start = asset["data"].get("handleStart")
-   if handle_start is None:
-       handle_start = handles
-   handle_end = asset["data"].get("handleEnd")
-   if handle_end is None:
-       handle_end = handles

+   handle_start = asset["data"].get("handleStart", 0)
+   handle_end = asset["data"].get("handleEnd", 0)
    return {
        "frameStart": frame_start,
        "frameEnd": frame_end,

@@ -62,6 +62,7 @@ class CollectRender(pyblish.api.InstancePlugin):
            "frameStart": context.data['frameStart'],
            "frameEnd": context.data['frameEnd'],
            "version": version_int,
+           "farm": True
        }
        self.log.info("data: {0}".format(data))
        instance.data.update(data)

@@ -32,7 +32,13 @@ from openpype.pipeline import (
    load_container,
    registered_host,
)
-from openpype.pipeline.context_tools import get_current_project_asset
+from openpype.pipeline.context_tools import (
+    get_current_asset_name,
+    get_current_project_asset,
+    get_current_project_name,
+    get_current_task_name
+)
from openpype.lib.profiles_filtering import filter_profiles


self = sys.modules[__name__]

@@ -112,6 +118,18 @@ FLOAT_FPS = {23.98, 23.976, 29.97, 47.952, 59.94}

RENDERLIKE_INSTANCE_FAMILIES = ["rendering", "vrayscene"]

+DISPLAY_LIGHTS_VALUES = [
+    "project_settings", "default", "all", "selected", "flat", "none"
+]
+DISPLAY_LIGHTS_LABELS = [
+    "Use Project Settings",
+    "Default Lighting",
+    "All Lights",
+    "Selected Lights",
+    "Flat Lighting",
+    "No Lights"
+]
+

def get_main_window():
    """Acquire Maya's main window"""

@@ -292,15 +310,20 @@ def collect_animation_data(fps=False):
    """

    # get scene values as defaults
-   start = cmds.playbackOptions(query=True, animationStartTime=True)
-   end = cmds.playbackOptions(query=True, animationEndTime=True)
+   frame_start = cmds.playbackOptions(query=True, minTime=True)
+   frame_end = cmds.playbackOptions(query=True, maxTime=True)
+   handle_start = cmds.playbackOptions(query=True, animationStartTime=True)
+   handle_end = cmds.playbackOptions(query=True, animationEndTime=True)
+
+   handle_start = frame_start - handle_start
+   handle_end = handle_end - frame_end

    # build attributes
    data = OrderedDict()
-   data["frameStart"] = start
-   data["frameEnd"] = end
-   data["handleStart"] = 0
-   data["handleEnd"] = 0
+   data["frameStart"] = frame_start
+   data["frameEnd"] = frame_end
+   data["handleStart"] = handle_start
+   data["handleEnd"] = handle_end
    data["step"] = 1.0

    if fps:

@@ -1367,6 +1390,71 @@ def set_id(node, unique_id, overwrite=False):
    cmds.setAttr(attr, unique_id, type="string")


+def get_attribute(plug,
+                  asString=False,
+                  expandEnvironmentVariables=False,
+                  **kwargs):
+    """Maya getAttr with some fixes based on `pymel.core.general.getAttr()`.
+
+    Like Pymel getAttr this applies some changes to `maya.cmds.getAttr`:
+      - maya pointlessly returned vector results as a tuple wrapped in a list
+        (ex. `[(1, 2, 3)]`). This command unpacks the vector for you.
+      - when getting a multi-attr, maya would raise an error, but this will
+        return a list of values for the multi-attr
+      - added support for getting message attributes by returning the
+        connections instead
+
+    Note that the asString + expandEnvironmentVariables argument naming
+    convention matches the `maya.cmds.getAttr` arguments so that it can
+    act as a direct replacement for it.
+
+    Args:
+        plug (str): Node's attribute plug as `node.attribute`
+        asString (bool): Return string value for enum attributes instead
+            of the index. Note that the return value can be dependent on the
+            UI language Maya is running in.
+        expandEnvironmentVariables (bool): Expand any environment variable
+            (and tilde characters on UNIX) found in string attributes which
+            are returned.
+
+    Kwargs:
+        Supports the keyword arguments of `maya.cmds.getAttr`
+
+    Returns:
+        object: The value of the maya attribute.
+
+    """
+    attr_type = cmds.getAttr(plug, type=True)
+    if asString:
+        kwargs["asString"] = True
+    if expandEnvironmentVariables:
+        kwargs["expandEnvironmentVariables"] = True
+    try:
+        res = cmds.getAttr(plug, **kwargs)
+    except RuntimeError:
+        if attr_type == "message":
+            return cmds.listConnections(plug)
+
+        node, attr = plug.split(".", 1)
+        children = cmds.attributeQuery(attr, node=node, listChildren=True)
+        if children:
+            return [
+                get_attribute("{}.{}".format(node, child))
+                for child in children
+            ]
+
+        raise
+
+    # Convert vector result wrapped in tuple
+    if isinstance(res, list) and len(res):
+        if isinstance(res[0], tuple) and len(res):
+            if attr_type in {'pointArray', 'vectorArray'}:
+                return res
+            return res[0]
+
+    return res
+
+
def set_attribute(attribute, value, node):
    """Adjust attributes based on the value from the attribute data
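
For orientation, a hedged usage sketch for the `get_attribute` helper above; the node names are hypothetical, the import path follows the module this hunk belongs to:

```python
from openpype.hosts.maya.api.lib import get_attribute

# Vector plugs come back unpacked, e.g. (0.0, 0.0, 0.0)
# instead of [(0.0, 0.0, 0.0)].
translate = get_attribute("pSphere1.translate")

# Enum plugs can be read as their label instead of the index.
rotate_order = get_attribute("pSphere1.rotateOrder", asString=True)

# Message plugs return their connections instead of raising.
connected = get_attribute("defaultRenderGlobals.message")
```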
@@ -1881,6 +1969,12 @@ def remove_other_uv_sets(mesh):
        cmds.removeMultiInstance(attr, b=True)


+def get_node_parent(node):
+    """Return full path name for parent of node"""
+    parents = cmds.listRelatives(node, parent=True, fullPath=True)
+    return parents[0] if parents else None
+
+
def get_id_from_sibling(node, history_only=True):
    """Return first node id in the history chain that matches this node.

@@ -1904,10 +1998,6 @@ def get_id_from_sibling(node, history_only=True):

    """

-   def _get_parent(node):
-       """Return full path name for parent of node"""
-       return cmds.listRelatives(node, parent=True, fullPath=True)
-
    node = cmds.ls(node, long=True)[0]

    # Find all similar nodes in history

@@ -1919,8 +2009,8 @@ def get_id_from_sibling(node, history_only=True):
    similar_nodes = [x for x in similar_nodes if x != node]

    # The node *must be* under the same parent
-   parent = _get_parent(node)
-   similar_nodes = [i for i in similar_nodes if _get_parent(i) == parent]
+   parent = get_node_parent(node)
+   similar_nodes = [i for i in similar_nodes if get_node_parent(i) == parent]

    # Check all of the remaining similar nodes and take the first one
    # with an id and assume it's the original.

@@ -2067,29 +2157,43 @@ def get_frame_range():
    """Get the current assets frame range and handles."""

    # Set frame start/end
-   project_name = legacy_io.active_project()
-   asset_name = legacy_io.Session["AVALON_ASSET"]
+   project_name = get_current_project_name()
+   task_name = get_current_task_name()
+   asset_name = get_current_asset_name()
    asset = get_asset_by_name(project_name, asset_name)
+   settings = get_project_settings(project_name)
+   include_handles_settings = settings["maya"]["include_handles"]
+   current_task = asset.get("data").get("tasks").get(task_name)

    frame_start = asset["data"].get("frameStart")
    frame_end = asset["data"].get("frameEnd")
-   # Backwards compatibility
-   if frame_start is None or frame_end is None:
-       frame_start = asset["data"].get("edit_in")
-       frame_end = asset["data"].get("edit_out")

    if frame_start is None or frame_end is None:
        cmds.warning("No edit information found for %s" % asset_name)
        return

-   handles = asset["data"].get("handles") or 0
-   handle_start = asset["data"].get("handleStart")
-   if handle_start is None:
-       handle_start = handles
-
-   handle_end = asset["data"].get("handleEnd")
-   if handle_end is None:
-       handle_end = handles
+   handle_start = asset["data"].get("handleStart") or 0
+   handle_end = asset["data"].get("handleEnd") or 0

+   animation_start = frame_start
+   animation_end = frame_end

+   include_handles = include_handles_settings["include_handles_default"]
+   for item in include_handles_settings["per_task_type"]:
+       if current_task["type"] in item["task_type"]:
+           include_handles = item["include_handles"]
+           break
+   if include_handles:
+       animation_start -= int(handle_start)
+       animation_end += int(handle_end)

+   cmds.playbackOptions(
+       minTime=frame_start,
+       maxTime=frame_end,
+       animationStartTime=animation_start,
+       animationEndTime=animation_end
+   )
+   cmds.currentTime(frame_start)

    return {
        "frameStart": frame_start,

@@ -2109,7 +2213,6 @@ def reset_frame_range(playback=True, render=True, fps=True):
            Defaults to True.
        fps (bool, Optional): Whether to set scene FPS. Defaults to True.
    """
-
    if fps:
        fps = convert_to_maya_fps(
            float(legacy_io.Session.get("AVALON_FPS", 25))

@@ -3176,38 +3279,78 @@ def set_colorspace():
def parent_nodes(nodes, parent=None):
    # type: (list, str) -> list
    """Context manager to un-parent provided nodes and return them back."""
-   import pymel.core as pm  # noqa
-
-   parent_node = None
+   def _as_mdagpath(node):
+       """Return MDagPath for node path."""
+       if not node:
+           return
+       sel = OpenMaya.MSelectionList()
+       sel.add(node)
+       return sel.getDagPath(0)
+
+   # We can only parent dag nodes so we ensure input contains only dag nodes
+   nodes = cmds.ls(nodes, type="dagNode", long=True)
+   if not nodes:
+       # opt-out early
+       yield
+       return
+
+   parent_node_path = None
    delete_parent = False

    if parent:
        if not cmds.objExists(parent):
-           parent_node = pm.createNode("transform", n=parent, ss=False)
+           parent_node = cmds.createNode("transform",
+                                         name=parent,
+                                         skipSelect=False)
            delete_parent = True
        else:
-           parent_node = pm.PyNode(parent)
+           parent_node = parent
+       parent_node_path = cmds.ls(parent_node, long=True)[0]

    # Store original parents
    node_parents = []
    for node in nodes:
-       n = pm.PyNode(node)
-       try:
-           root = pm.listRelatives(n, parent=1)[0]
-       except IndexError:
-           root = None
-       node_parents.append((n, root))
+       node_parent = get_node_parent(node)
+       node_parents.append((_as_mdagpath(node), _as_mdagpath(node_parent)))

    try:
-       for node in node_parents:
-           if not parent:
-               node[0].setParent(world=True)
+       for node, node_parent in node_parents:
+           node_parent_path = node_parent.fullPathName() if node_parent else None  # noqa
+           if node_parent_path == parent_node_path:
+               # Already a child
+               continue
+
+           if parent_node_path:
+               cmds.parent(node.fullPathName(), parent_node_path)
            else:
-               node[0].setParent(parent_node)
+               cmds.parent(node.fullPathName(), world=True)

        yield
    finally:
-       for node in node_parents:
-           if node[1]:
-               node[0].setParent(node[1])
+       # Reparent to original parents
+       for node, original_parent in node_parents:
+           node_path = node.fullPathName()
+           if not node_path:
+               # Node must have been deleted
+               continue
+
+           node_parent_path = get_node_parent(node_path)
+
+           original_parent_path = None
+           if original_parent:
+               original_parent_path = original_parent.fullPathName()
+               if not original_parent_path:
+                   # Original parent node must have been deleted
+                   continue
+
+           if node_parent_path != original_parent_path:
+               if not original_parent_path:
+                   cmds.parent(node_path, world=True)
+               else:
+                   cmds.parent(node_path, original_parent_path)

        if delete_parent:
-           pm.delete(parent_node)
+           cmds.delete(parent_node_path)


@contextlib.contextmanager
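
A hedged usage sketch for the reworked `parent_nodes` context manager above; the node and group names are hypothetical, and the import path assumes the function is exposed from the same lib module:

```python
from maya import cmds
from openpype.hosts.maya.api.lib import parent_nodes

# Temporarily re-parent two meshes under a group, e.g. for an export;
# original parents are restored when the block exits.
with parent_nodes(["|pSphere1", "|pCube1"], parent="EXPORT_GRP"):
    cmds.select("EXPORT_GRP", replace=True)
    cmds.file("/tmp/export.ma", exportSelected=True,
              type="mayaAscii", force=True)
```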
@@ -3558,7 +3701,17 @@ def get_color_management_preferences():
    # Split view and display from view_transform. view_transform comes in
    # format of "{view} ({display})".
    regex = re.compile(r"^(?P<view>.+) \((?P<display>.+)\)$")
+   if int(cmds.about(version=True)) <= 2020:
+       # view_transform comes in format of "{view} {display}" in 2020.
+       regex = re.compile(r"^(?P<view>.+) (?P<display>.+)$")
+
+   match = regex.match(data["view_transform"])
+   if not match:
+       raise ValueError(
+           "Unable to parse view and display from Maya view transform: '{}' "
+           "using regex '{}'".format(data["view_transform"], regex.pattern)
+       )

    data.update({
        "display": match.group("display"),
        "view": match.group("view")
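
For illustration (the transform string is a made-up example), the parsing above splits a Maya view transform string into its view and display components:

```python
import re

regex = re.compile(r"^(?P<view>.+) \((?P<display>.+)\)$")
match = regex.match("ACES 1.0 SDR-video (sRGB)")
print(match.group("view"))     # ACES 1.0 SDR-video
print(match.group("display"))  # sRGB
```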
@@ -3675,3 +3828,88 @@ def len_flattened(components):
    else:
        n += 1
    return n
+
+
+def get_all_children(nodes):
+    """Return all children of `nodes` including each instanced child.
+
+    Using maya.cmds.listRelatives(allDescendents=True) includes only the
+    first instance. As such, this function acts as an optimal replacement
+    with a focus on a fast query.
+
+    """
+    sel = OpenMaya.MSelectionList()
+    traversed = set()
+    iterator = OpenMaya.MItDag(OpenMaya.MItDag.kDepthFirst)
+    for node in nodes:
+
+        if node in traversed:
+            # Ignore if already processed as a child before
+            continue
+
+        sel.clear()
+        sel.add(node)
+        dag = sel.getDagPath(0)
+
+        iterator.reset(dag)
+        # ignore self
+        iterator.next()  # noqa: B305
+        while not iterator.isDone():
+
+            path = iterator.fullPathName()
+
+            if path in traversed:
+                iterator.prune()
+                iterator.next()  # noqa: B305
+                continue
+
+            traversed.add(path)
+            iterator.next()  # noqa: B305
+
+    return list(traversed)
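
A hedged usage sketch for `get_all_children` (the selection contents are hypothetical):

```python
from maya import cmds
from openpype.hosts.maya.api.lib import get_all_children

# Full DAG paths of every descendant, including instanced copies that
# cmds.listRelatives(allDescendents=True) would report only once.
children = get_all_children(cmds.ls(selection=True, long=True))
print(len(children))
```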
+
+
+def get_capture_preset(task_name, task_type, subset, project_settings, log):
+    """Get capture preset for playblasting.
+
+    Logic for transitioning from old style capture preset to new capture
+    preset profiles.
+
+    Args:
+        task_name (str): Task name.
+        task_type (str): Task type.
+        subset (str): Subset name.
+        project_settings (dict): Project settings.
+        log (object): Logging object.
+    """
+    capture_preset = None
+    filtering_criteria = {
+        "hosts": "maya",
+        "families": "review",
+        "task_names": task_name,
+        "task_types": task_type,
+        "subset": subset
+    }
+
+    plugin_settings = project_settings["maya"]["publish"]["ExtractPlayblast"]
+    if plugin_settings["profiles"]:
+        profile = filter_profiles(
+            plugin_settings["profiles"],
+            filtering_criteria,
+            logger=log
+        )
+        capture_preset = profile.get("capture_preset")
+    else:
+        log.warning("No profiles present for Extract Playblast")
+
+    # Backward compatibility for deprecated Extract Playblast settings
+    # without profiles.
+    if capture_preset is None:
+        log.debug(
+            "Falling back to deprecated Extract Playblast capture preset "
+            "because no new style playblast profiles are defined."
+        )
+        capture_preset = plugin_settings["capture_preset"]
+
+    return capture_preset or {}

@@ -857,6 +857,7 @@ class RenderProductsVray(ARenderProducts):
        if default_ext in {"exr (multichannel)", "exr (deep)"}:
            default_ext = "exr"

+       colorspace = lib.get_color_management_output_transform()
        products = []

        # add beauty as default when not disabled

@@ -868,7 +869,7 @@ class RenderProductsVray(ARenderProducts):
                    productName="",
                    ext=default_ext,
                    camera=camera,
-                   colorspace=lib.get_color_management_output_transform(),
+                   colorspace=colorspace,
                    multipart=self.multipart
                )
            )

@@ -882,6 +883,7 @@ class RenderProductsVray(ARenderProducts):
                        productName="Alpha",
                        ext=default_ext,
                        camera=camera,
+                       colorspace=colorspace,
                        multipart=self.multipart
                    )
                )

@@ -917,7 +919,8 @@ class RenderProductsVray(ARenderProducts):
                product = RenderProduct(productName=name,
                                        ext=default_ext,
                                        aov=aov,
-                                       camera=camera)
+                                       camera=camera,
+                                       colorspace=colorspace)
                products.append(product)
                # Continue as we've processed this special case AOV
                continue

@@ -929,7 +932,7 @@ class RenderProductsVray(ARenderProducts):
                ext=default_ext,
                aov=aov,
                camera=camera,
-               colorspace=lib.get_color_management_output_transform()
+               colorspace=colorspace
            )
            products.append(product)

@@ -1130,6 +1133,7 @@ class RenderProductsRedshift(ARenderProducts):
        products = []
        light_groups_enabled = False
        has_beauty_aov = False
+       colorspace = lib.get_color_management_output_transform()
        for aov in aovs:
            enabled = self._get_attr(aov, "enabled")
            if not enabled:

@@ -1173,7 +1177,8 @@ class RenderProductsRedshift(ARenderProducts):
                        ext=ext,
                        multipart=False,
                        camera=camera,
-                       driver=aov)
+                       driver=aov,
+                       colorspace=colorspace)
                    products.append(product)

            if light_groups:

@@ -1188,7 +1193,8 @@ class RenderProductsRedshift(ARenderProducts):
                    ext=ext,
                    multipart=False,
                    camera=camera,
-                   driver=aov)
+                   driver=aov,
+                   colorspace=colorspace)
                products.append(product)

            # When a Beauty AOV is added manually, it will be rendered as

@@ -1204,7 +1210,8 @@ class RenderProductsRedshift(ARenderProducts):
                RenderProduct(productName=beauty_name,
                              ext=ext,
                              multipart=self.multipart,
-                             camera=camera))
+                             camera=camera,
+                             colorspace=colorspace))

        return products

@@ -1236,6 +1243,8 @@ class RenderProductsRenderman(ARenderProducts):
        """
        from rfm2.api.displays import get_displays  # noqa

+       colorspace = lib.get_color_management_output_transform()
+
        cameras = [
            self.sanitize_camera_name(c)
            for c in self.get_renderable_cameras()

@@ -1302,7 +1311,8 @@ class RenderProductsRenderman(ARenderProducts):
                productName=aov_name,
                ext=extensions,
                camera=camera,
-               multipart=True
+               multipart=True,
+               colorspace=colorspace
            )

            if has_cryptomatte and matte_enabled:

@@ -1311,7 +1321,8 @@ class RenderProductsRenderman(ARenderProducts):
                    aov=cryptomatte_aov,
                    ext=extensions,
                    camera=camera,
-                   multipart=True
+                   multipart=True,
+                   colorspace=colorspace
                )
            else:
                # this code should handle the case where no multipart
@@ -19,6 +19,8 @@ from maya.app.renderSetup.model.override import (
    UniqueOverride
)

+from openpype.hosts.maya.api.lib import get_attribute
+
EXACT_MATCH = 0
PARENT_MATCH = 1
CLIENT_MATCH = 2

@@ -96,9 +98,6 @@ def get_attr_in_layer(node_attr, layer):

    """

-   # Delay pymel import to here because it's slow to load
-   import pymel.core as pm
-
    def _layer_needs_update(layer):
        """Return whether layer needs updating."""
        # Use `getattr` as e.g. DEFAULT_RENDER_LAYER does not have

@@ -125,7 +124,7 @@ def get_attr_in_layer(node_attr, layer):
            node = history_overrides[-1] if history_overrides else override
            node_attr_ = node + ".original"

-           return pm.getAttr(node_attr_, asString=True)
+           return get_attribute(node_attr_, asString=True)

    layer = get_rendersetup_layer(layer)
    rs = renderSetup.instance()

@@ -145,7 +144,7 @@ def get_attr_in_layer(node_attr, layer):
        # we will let it error out.
        rs.switchToLayer(current_layer)

-       return pm.getAttr(node_attr, asString=True)
+       return get_attribute(node_attr, asString=True)

    overrides = get_attr_overrides(node_attr, layer)
    default_layer_value = get_default_layer_value(node_attr)

@@ -156,7 +155,7 @@ def get_attr_in_layer(node_attr, layer):
    for match, layer_override, index in overrides:
        if isinstance(layer_override, AbsOverride):
            # Absolute override
-           value = pm.getAttr(layer_override.name() + ".attrValue")
+           value = get_attribute(layer_override.name() + ".attrValue")
            if match == EXACT_MATCH:
                # value = value
                pass

@@ -168,8 +167,8 @@ def get_attr_in_layer(node_attr, layer):
        elif isinstance(layer_override, RelOverride):
            # Relative override
            # Value = Original * Multiply + Offset
-           multiply = pm.getAttr(layer_override.name() + ".multiply")
-           offset = pm.getAttr(layer_override.name() + ".offset")
+           multiply = get_attribute(layer_override.name() + ".multiply")
+           offset = get_attribute(layer_override.name() + ".offset")

            if match == EXACT_MATCH:
                value = value * multiply + offset
@@ -1,4 +1,5 @@
import os
+import re

from maya import cmds

@@ -12,6 +13,7 @@ from openpype.pipeline import (
    AVALON_CONTAINER_ID,
    Anatomy,
)
+from openpype.pipeline.load import LoadError
from openpype.settings import get_project_settings
from .pipeline import containerise
from . import lib

@@ -82,6 +84,44 @@ def get_reference_node_parents(ref):
    return parents


+def get_custom_namespace(custom_namespace):
+    """Return unique namespace.
+
+    The input namespace can contain a single group
+    of '#' number tokens to indicate where the namespace's
+    unique index should go. The amount of tokens defines
+    the zero padding of the number, e.g ### turns into 001.
+
+    Warning: Note that a namespace will always be
+    prefixed with a _ if it starts with a digit
+
+    Example:
+        >>> get_custom_namespace("myspace_##_")
+        # myspace_01_
+        >>> get_custom_namespace("##_myspace")
+        # _01_myspace
+        >>> get_custom_namespace("myspace##")
+        # myspace01
+
+    """
+    split = re.split("([#]+)", custom_namespace, 1)
+
+    if len(split) == 3:
+        base, padding, suffix = split
+        padding = "%0{}d".format(len(padding))
+    else:
+        base = split[0]
+        padding = "%02d"  # default padding
+        suffix = ""
+
+    return lib.unique_namespace(
+        base,
+        format=padding,
+        prefix="_" if not base or base[0].isdigit() else "",
+        suffix=suffix
+    )
+
+
class Creator(LegacyCreator):
    defaults = ['Main']
@@ -143,15 +183,46 @@ class ReferenceLoader(Loader):
        assert os.path.exists(self.fname), "%s does not exist." % self.fname

        asset = context['asset']
+       subset = context['subset']
+       settings = get_project_settings(context['project']['name'])
+       custom_naming = settings['maya']['load']['reference_loader']
        loaded_containers = []

-       count = options.get("count") or 1
-       for c in range(0, count):
-           namespace = namespace or lib.unique_namespace(
-               "{}_{}_".format(asset["name"], context["subset"]["name"]),
-               prefix="_" if asset["name"][0].isdigit() else "",
-               suffix="_",
-           )
+       if not custom_naming['namespace']:
+           raise LoadError("No namespace specified in "
+                           "Maya ReferenceLoader settings")
+       elif not custom_naming['group_name']:
+           raise LoadError("No group name specified in "
+                           "Maya ReferenceLoader settings")
+
+       formatting_data = {
+           "asset_name": asset['name'],
+           "asset_type": asset['type'],
+           "subset": subset['name'],
+           "family": (
+               subset['data'].get('family') or
+               subset['data']['families'][0]
+           )
+       }
+
+       custom_namespace = custom_naming['namespace'].format(
+           **formatting_data
+       )
+
+       custom_group_name = custom_naming['group_name'].format(
+           **formatting_data
+       )
+
+       count = options.get("count") or 1
+
+       for c in range(0, count):
+           namespace = get_custom_namespace(custom_namespace)
+           group_name = "{}:{}".format(
+               namespace,
+               custom_group_name
+           )
+
+           options['group_name'] = group_name

            # Offset loaded subset
            if "offset" in options:

@@ -187,7 +258,7 @@ class ReferenceLoader(Loader):

        return loaded_containers

-   def process_reference(self, context, name, namespace, data):
+   def process_reference(self, context, name, namespace, options):
        """To be implemented by subclass"""
        raise NotImplementedError("Must be implemented by subclass")

@@ -234,26 +234,10 @@ class MayaPlaceholderLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin):
        return self.get_load_plugin_options(options)

    def cleanup_placeholder(self, placeholder, failed):
-       """Hide placeholder, parent them to root
-       add them to placeholder set and register placeholder's parent
-       to keep placeholder info available for future use
+       """Hide placeholder, add them to placeholder set
        """
        node = placeholder._scene_identifier
-       node_parent = placeholder.data["parent"]
-       if node_parent:
-           cmds.setAttr(node + ".parent", node_parent, type="string")
-
-       if cmds.getAttr(node + ".index") < 0:
-           cmds.setAttr(node + ".index", placeholder.data["index"])
-
-       holding_sets = cmds.listSets(object=node)
-       if holding_sets:
-           for set in holding_sets:
-               cmds.sets(node, remove=set)
-
-       if cmds.listRelatives(node, p=True):
-           node = cmds.parent(node, world=True)[0]
        cmds.sets(node, addElement=PLACEHOLDER_SET)
        cmds.hide(node)
        cmds.setAttr(node + ".hiddenInOutliner", True)

@@ -286,8 +270,6 @@ class MayaPlaceholderLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin):
        elif not cmds.sets(root, q=True):
            return

-       if placeholder.data["parent"]:
-           cmds.parent(nodes_to_parent, placeholder.data["parent"])
        # Move loaded nodes to correct index in outliner hierarchy
        placeholder_form = cmds.xform(
            placeholder.scene_identifier,

openpype/hosts/maya/hooks/pre_auto_load_plugins.py (new file, 29 lines)
@@ -0,0 +1,29 @@
from openpype.lib import PreLaunchHook
|
||||
|
||||
|
||||
class MayaPreAutoLoadPlugins(PreLaunchHook):
|
||||
"""Define -noAutoloadPlugins command flag."""
|
||||
|
||||
# Before AddLastWorkfileToLaunchArgs
|
||||
order = 9
|
||||
app_groups = ["maya"]
|
||||
|
||||
def execute(self):
|
||||
|
||||
# Ignore if there's no last workfile to start.
|
||||
if not self.data.get("start_last_workfile"):
|
||||
return
|
||||
|
||||
maya_settings = self.data["project_settings"]["maya"]
|
||||
enabled = maya_settings["explicit_plugins_loading"]["enabled"]
|
||||
if enabled:
|
||||
# Force disable the `AddLastWorkfileToLaunchArgs`.
|
||||
self.data.pop("start_last_workfile")
|
||||
|
||||
# Force post initialization so our dedicated plug-in load can run
|
||||
# prior to Maya opening a scene file.
|
||||
key = "OPENPYPE_OPEN_WORKFILE_POST_INITIALIZATION"
|
||||
self.launch_context.env[key] = "1"
|
||||
|
||||
self.log.debug("Explicit plugins loading.")
|
||||
self.launch_context.launch_args.append("-noAutoloadPlugins")
|
||||
|
|
@ -0,0 +1,25 @@
|
|||
from openpype.lib import PreLaunchHook
|
||||
|
||||
|
||||
class MayaPreOpenWorkfilePostInitialization(PreLaunchHook):
|
||||
"""Define whether open last workfile should run post initialize."""
|
||||
|
||||
# Before AddLastWorkfileToLaunchArgs.
|
||||
order = 9
|
||||
app_groups = ["maya"]
|
||||
|
||||
def execute(self):
|
||||
|
||||
# Ignore if there's no last workfile to start.
|
||||
if not self.data.get("start_last_workfile"):
|
||||
return
|
||||
|
||||
maya_settings = self.data["project_settings"]["maya"]
|
||||
enabled = maya_settings["open_workfile_post_initialization"]
|
||||
if enabled:
|
||||
# Force disable the `AddLastWorkfileToLaunchArgs`.
|
||||
self.data.pop("start_last_workfile")
|
||||
|
||||
self.log.debug("Opening workfile post initialization.")
|
||||
key = "OPENPYPE_OPEN_WORKFILE_POST_INITIALIZATION"
|
||||
self.launch_context.env[key] = "1"
|
||||
|
|
@ -1,8 +1,14 @@
|
|||
import os
|
||||
from collections import OrderedDict
|
||||
import json
|
||||
|
||||
from openpype.hosts.maya.api import (
|
||||
lib,
|
||||
plugin
|
||||
)
|
||||
from openpype.settings import get_project_settings
|
||||
from openpype.pipeline import get_current_project_name, get_current_task_name
|
||||
from openpype.client import get_asset_by_name
|
||||
|
||||
|
||||
class CreateReview(plugin.Creator):
|
||||
|
|
@ -32,6 +38,23 @@ class CreateReview(plugin.Creator):
|
|||
super(CreateReview, self).__init__(*args, **kwargs)
|
||||
data = OrderedDict(**self.data)
|
||||
|
||||
project_name = get_current_project_name()
|
||||
asset_doc = get_asset_by_name(project_name, data["asset"])
|
||||
task_name = get_current_task_name()
|
||||
preset = lib.get_capture_preset(
|
||||
task_name,
|
||||
asset_doc["data"]["tasks"][task_name]["type"],
|
||||
data["subset"],
|
||||
get_project_settings(project_name),
|
||||
self.log
|
||||
)
|
||||
if os.environ.get("OPENPYPE_DEBUG") == "1":
|
||||
self.log.debug(
|
||||
"Using preset: {}".format(
|
||||
json.dumps(preset, indent=4, sort_keys=True)
|
||||
)
|
||||
)
|
||||
|
||||
# Option for using Maya or asset frame range in settings.
|
||||
frame_range = lib.get_frame_range()
|
||||
if self.useMayaTimeline:
|
||||
|
|
@ -40,12 +63,14 @@ class CreateReview(plugin.Creator):
|
|||
data[key] = value
|
||||
|
||||
data["fps"] = lib.collect_animation_data(fps=True)["fps"]
|
||||
data["review_width"] = self.Width
|
||||
data["review_height"] = self.Height
|
||||
data["isolate"] = self.isolate
|
||||
|
||||
data["keepImages"] = self.keepImages
|
||||
data["imagePlane"] = self.imagePlane
|
||||
data["transparency"] = self.transparency
|
||||
data["panZoom"] = self.panZoom
|
||||
data["review_width"] = preset["Resolution"]["width"]
|
||||
data["review_height"] = preset["Resolution"]["height"]
|
||||
data["isolate"] = preset["Generic"]["isolate_view"]
|
||||
data["imagePlane"] = preset["Viewport Options"]["imagePlane"]
|
||||
data["panZoom"] = preset["Generic"]["pan_zoom"]
|
||||
data["displayLights"] = lib.DISPLAY_LIGHTS_LABELS
|
||||
|
||||
self.data = data
|
||||
|
|
|
|||
|
|
@@ -14,7 +14,7 @@ class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
    icon = "code-fork"
    color = "orange"

    def process_reference(self, context, name, namespace, data):
    def process_reference(self, context, name, namespace, options):

        import maya.cmds as cmds
        from openpype.hosts.maya.api.lib import unique_namespace

@@ -41,7 +41,7 @@ class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
                          namespace=namespace,
                          sharedReferenceFile=False,
                          groupReference=True,
                          groupName="{}:{}".format(namespace, name),
                          groupName=options['group_name'],
                          reference=True,
                          returnNewNodes=True)
@@ -84,7 +84,7 @@ class ArnoldStandinLoader(load.LoaderPlugin):
        sequence = is_sequence(os.listdir(os.path.dirname(self.fname)))
        cmds.setAttr(standin_shape + ".useFrameExtension", sequence)

        nodes = [root, standin]
        nodes = [root, standin, standin_shape]
        if operator is not None:
            nodes.append(operator)
        self[:] = nodes

@@ -183,7 +183,7 @@ class ArnoldStandinLoader(load.LoaderPlugin):
        # If no proxy exists, the string operator won't replace anything.
        cmds.setAttr(
            string_replace_operator + ".match",
            "resources/" + proxy_basename,
            proxy_basename,
            type="string"
        )
        cmds.setAttr(
@@ -11,7 +11,7 @@ from openpype.pipeline import (
    get_representation_path,
)
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
from openpype.hosts.maya.api.lib import unique_namespace, get_container_members


class AudioLoader(load.LoaderPlugin):

@@ -52,17 +52,15 @@ class AudioLoader(load.LoaderPlugin):
        )

    def update(self, container, representation):
        import pymel.core as pm

        audio_node = None
        for node in pm.PyNode(container["objectName"]).members():
            if node.nodeType() == "audio":
                audio_node = node
        members = get_container_members(container)
        audio_nodes = cmds.ls(members, type="audio")

        assert audio_node is not None, "Audio node not found."
        assert audio_nodes is not None, "Audio node not found."
        audio_node = audio_nodes[0]

        path = get_representation_path(representation)
        audio_node.filename.set(path)
        cmds.setAttr("{}.filename".format(audio_node), path, type="string")
        cmds.setAttr(
            container["objectName"] + ".representation",
            str(representation["_id"]),

@@ -80,8 +78,12 @@ class AudioLoader(load.LoaderPlugin):
        asset = get_asset_by_id(
            project_name, subset["parent"], fields=["parent"]
        )
        audio_node.sourceStart.set(1 - asset["data"]["frameStart"])
        audio_node.sourceEnd.set(asset["data"]["frameEnd"])

        source_start = 1 - asset["data"]["frameStart"]
        source_end = asset["data"]["frameEnd"]

        cmds.setAttr("{}.sourceStart".format(audio_node), source_start)
        cmds.setAttr("{}.sourceEnd".format(audio_node), source_end)

    def switch(self, container, representation):
        self.update(container, representation)
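The AudioLoader change above is one instance of the pymel-to-maya.cmds migration running through this commit: pymel attribute objects become "node.attr" plug strings. A small standalone illustration of the equivalent calls (generic sketch, not code from this commit):

from maya import cmds

# pymel style (removed):  audio_node.filename.set(path)
# cmds style (added): address the attribute as a "node.attr" plug string.
node = cmds.createNode("audio")
cmds.setAttr("{}.filename".format(node), "/tmp/track.wav", type="string")
cmds.setAttr("{}.sourceStart".format(node), 1)
print(cmds.getAttr("{}.filename".format(node)))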
@@ -1,5 +1,9 @@
import os

import maya.cmds as cmds

from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
from openpype.pipeline import (
    load,
    get_representation_path

@@ -11,19 +15,15 @@ class GpuCacheLoader(load.LoaderPlugin):
    """Load Alembic as gpuCache"""

    families = ["model", "animation", "proxyAbc", "pointcache"]
    representations = ["abc"]
    representations = ["abc", "gpu_cache"]

    label = "Import Gpu Cache"
    label = "Load Gpu Cache"
    order = -5
    icon = "code-fork"
    color = "orange"

    def load(self, context, name, namespace, data):

        import maya.cmds as cmds
        from openpype.hosts.maya.api.pipeline import containerise
        from openpype.hosts.maya.api.lib import unique_namespace

        asset = context['asset']['name']
        namespace = namespace or unique_namespace(
            asset + "_",

@@ -42,10 +42,9 @@ class GpuCacheLoader(load.LoaderPlugin):
        c = colors.get('model')
        if c is not None:
            cmds.setAttr(root + ".useOutlinerColor", 1)
            cmds.setAttr(root + ".outlinerColor",
                         (float(c[0])/255),
                         (float(c[1])/255),
                         (float(c[2])/255)
            cmds.setAttr(
                root + ".outlinerColor",
                (float(c[0]) / 255), (float(c[1]) / 255), (float(c[2]) / 255)
            )

        # Create transform with shape

@@ -74,9 +73,6 @@ class GpuCacheLoader(load.LoaderPlugin):
            loader=self.__class__.__name__)

    def update(self, container, representation):

        import maya.cmds as cmds

        path = get_representation_path(representation)

        # Update the cache

@@ -96,7 +92,6 @@ class GpuCacheLoader(load.LoaderPlugin):
        self.update(container, representation)

    def remove(self, container):
        import maya.cmds as cmds
        members = cmds.sets(container['objectName'], query=True)
        cmds.lockNode(members, lock=False)
        cmds.delete([container['objectName']] + members)
@@ -11,11 +11,26 @@ from openpype.pipeline import (
    get_representation_path
)
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
from openpype.hosts.maya.api.lib import (
    unique_namespace,
    namespaced,
    pairwise,
    get_container_members
)

from maya import cmds


def disconnect_inputs(plug):
    overrides = cmds.listConnections(plug,
                                     source=True,
                                     destination=False,
                                     plugs=True,
                                     connections=True) or []
    for dest, src in pairwise(overrides):
        cmds.disconnectAttr(src, dest)


class CameraWindow(QtWidgets.QDialog):

    def __init__(self, cameras):

@@ -74,6 +89,7 @@ class CameraWindow(QtWidgets.QDialog):
        self.camera = None
        self.close()


class ImagePlaneLoader(load.LoaderPlugin):
    """Specific loader of plate for image planes on selected camera."""


@@ -84,9 +100,7 @@ class ImagePlaneLoader(load.LoaderPlugin):
    color = "orange"

    def load(self, context, name, namespace, data, options=None):
        import pymel.core as pm

        new_nodes = []
        image_plane_depth = 1000
        asset = context['asset']['name']
        namespace = namespace or unique_namespace(

@@ -96,16 +110,20 @@ class ImagePlaneLoader(load.LoaderPlugin):
        )

        # Get camera from user selection.
        camera = None
        # is_static_image_plane = None
        # is_in_all_views = None
        if data:
            camera = pm.PyNode(data.get("camera"))
        camera = data.get("camera") if data else None

        if not camera:
            cameras = pm.ls(type="camera")
            camera_names = {x.getParent().name(): x for x in cameras}
            camera_names["Create new camera."] = "create_camera"
            cameras = cmds.ls(type="camera")

            # Cameras by names
            camera_names = {}
            for camera in cameras:
                parent = cmds.listRelatives(camera, parent=True, path=True)[0]
                camera_names[parent] = camera

            camera_names["Create new camera."] = "create-camera"
            window = CameraWindow(camera_names.keys())
            window.exec_()
            # Skip if no camera was selected (Dialog was closed)

@@ -113,43 +131,48 @@ class ImagePlaneLoader(load.LoaderPlugin):
                return
            camera = camera_names[window.camera]

            if camera == "create_camera":
                camera = pm.createNode("camera")
            if camera == "create-camera":
                camera = cmds.createNode("camera")

        if camera is None:
            return

        try:
            camera.displayResolution.set(1)
            camera.farClipPlane.set(image_plane_depth * 10)
            cmds.setAttr("{}.displayResolution".format(camera), True)
            cmds.setAttr("{}.farClipPlane".format(camera),
                         image_plane_depth * 10)
        except RuntimeError:
            pass

        # Create image plane
        image_plane_transform, image_plane_shape = pm.imagePlane(
            fileName=context["representation"]["data"]["path"],
            camera=camera)
        image_plane_shape.depth.set(image_plane_depth)
        with namespaced(namespace):
            # Create inside the namespace
            image_plane_transform, image_plane_shape = cmds.imagePlane(
                fileName=context["representation"]["data"]["path"],
                camera=camera
            )
        start_frame = cmds.playbackOptions(query=True, min=True)
        end_frame = cmds.playbackOptions(query=True, max=True)

        start_frame = pm.playbackOptions(q=True, min=True)
        end_frame = pm.playbackOptions(q=True, max=True)

        image_plane_shape.frameOffset.set(0)
        image_plane_shape.frameIn.set(start_frame)
        image_plane_shape.frameOut.set(end_frame)
        image_plane_shape.frameCache.set(end_frame)
        image_plane_shape.useFrameExtension.set(1)
        for attr, value in {
            "depth": image_plane_depth,
            "frameOffset": 0,
            "frameIn": start_frame,
            "frameOut": end_frame,
            "frameCache": end_frame,
            "useFrameExtension": True
        }.items():
            plug = "{}.{}".format(image_plane_shape, attr)
            cmds.setAttr(plug, value)

        movie_representations = ["mov", "preview"]
        if context["representation"]["name"] in movie_representations:
            # Need to get "type" by string, because its a method as well.
            pm.Attribute(image_plane_shape + ".type").set(2)
            cmds.setAttr(image_plane_shape + ".type", 2)

        # Ask user whether to use sequence or still image.
        if context["representation"]["name"] == "exr":
            # Ensure OpenEXRLoader plugin is loaded.
            pm.loadPlugin("OpenEXRLoader.mll", quiet=True)
            cmds.loadPlugin("OpenEXRLoader", quiet=True)

            message = (
                "Hold image sequence on first frame?"

@@ -161,32 +184,18 @@ class ImagePlaneLoader(load.LoaderPlugin):
                None,
                "Frame Hold.",
                message,
                QtWidgets.QMessageBox.Ok,
                QtWidgets.QMessageBox.Cancel
                QtWidgets.QMessageBox.Yes,
                QtWidgets.QMessageBox.No
            )
            if reply == QtWidgets.QMessageBox.Ok:
                # find the input and output of frame extension
                expressions = image_plane_shape.frameExtension.inputs()
                frame_ext_output = image_plane_shape.frameExtension.outputs()
                if expressions:
                    # the "time1" node is non-deletable attr
                    # in Maya, use disconnectAttr instead
                    pm.disconnectAttr(expressions, frame_ext_output)
            if reply == QtWidgets.QMessageBox.Yes:
                frame_extension_plug = "{}.frameExtension".format(image_plane_shape)  # noqa

                if not image_plane_shape.frameExtension.isFreeToChange():
                    raise RuntimeError("Can't set frame extension for {}".format(image_plane_shape))  # noqa
                # get the node of time instead and set the time for it.
                image_plane_shape.frameExtension.set(start_frame)
                # Remove current frame expression
                disconnect_inputs(frame_extension_plug)

        new_nodes.extend(
            [
                image_plane_transform.longName().split("|")[-1],
                image_plane_shape.longName().split("|")[-1]
            ]
        )
                cmds.setAttr(frame_extension_plug, start_frame)

        for node in new_nodes:
            pm.rename(node, "{}:{}".format(namespace, node))
        new_nodes = [image_plane_transform, image_plane_shape]

        return containerise(
            name=name,

@@ -197,21 +206,19 @@ class ImagePlaneLoader(load.LoaderPlugin):
        )

    def update(self, container, representation):
        import pymel.core as pm
        image_plane_shape = None
        for node in pm.PyNode(container["objectName"]).members():
            if node.nodeType() == "imagePlane":
                image_plane_shape = node

        assert image_plane_shape is not None, "Image plane not found."
        members = get_container_members(container)
        image_planes = cmds.ls(members, type="imagePlane")
        assert image_planes, "Image plane not found."
        image_plane_shape = image_planes[0]

        path = get_representation_path(representation)
        image_plane_shape.imageName.set(path)
        cmds.setAttr(
            container["objectName"] + ".representation",
            str(representation["_id"]),
            type="string"
        )
        cmds.setAttr("{}.imageName".format(image_plane_shape),
                     path,
                     type="string")
        cmds.setAttr("{}.representation".format(container["objectName"]),
                     str(representation["_id"]),
                     type="string")

        # Set frame range.
        project_name = legacy_io.active_project()

@@ -227,10 +234,14 @@ class ImagePlaneLoader(load.LoaderPlugin):
        start_frame = asset["data"]["frameStart"]
        end_frame = asset["data"]["frameEnd"]

        image_plane_shape.frameOffset.set(0)
        image_plane_shape.frameIn.set(start_frame)
        image_plane_shape.frameOut.set(end_frame)
        image_plane_shape.frameCache.set(end_frame)
        for attr, value in {
            "frameOffset": 0,
            "frameIn": start_frame,
            "frameOut": end_frame,
            "frameCache": end_frame
        }:
            plug = "{}.{}".format(image_plane_shape, attr)
            cmds.setAttr(plug, value)

    def switch(self, container, representation):
        self.update(container, representation)
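The new disconnect_inputs helper above leans on lib.pairwise to walk the flat list that cmds.listConnections(..., connections=True) returns, in which destination and source plugs alternate. A sketch of what such a helper presumably looks like (the actual implementation lives in openpype.hosts.maya.api.lib):

def pairwise(iterable):
    """Yield successive non-overlapping pairs: (s0, s1), (s2, s3), ..."""
    it = iter(iterable)
    return zip(it, it)

# cmds.listConnections(plug, connections=True, plugs=True) returns
# [dest1, src1, dest2, src2, ...], so each yielded pair is one connection.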
@@ -12,7 +12,8 @@ from openpype.pipeline.create import (
import openpype.hosts.maya.api.plugin
from openpype.hosts.maya.api.lib import (
    maintained_selection,
    get_container_members
    get_container_members,
    parent_nodes
)

@@ -118,21 +119,21 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):

    def process_reference(self, context, name, namespace, options):
        import maya.cmds as cmds
        import pymel.core as pm

        try:
            family = context["representation"]["context"]["family"]
        except ValueError:
            family = "model"

        group_name = "{}:_GRP".format(namespace)
        # True by default to keep legacy behaviours
        attach_to_root = options.get("attach_to_root", True)
        group_name = options["group_name"]

        with maintained_selection():
            cmds.loadPlugin("AbcImport.mll", quiet=True)
            file_url = self.prepare_root_value(self.fname,
                                               context["project"]["name"])

            nodes = cmds.file(file_url,
                              namespace=namespace,
                              sharedReferenceFile=False,

@@ -148,7 +149,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
        # if there are cameras, try to lock their transforms
        self._lock_camera_transforms(new_nodes)

        current_namespace = pm.namespaceInfo(currentNamespace=True)
        current_namespace = cmds.namespaceInfo(currentNamespace=True)

        if current_namespace != ":":
            group_name = current_namespace + ":" + group_name

@@ -158,37 +159,29 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
        self[:] = new_nodes

        if attach_to_root:
            group_node = pm.PyNode(group_name)
            roots = set()
            roots = cmds.listRelatives(group_name,
                                       children=True,
                                       fullPath=True) or []

            for node in new_nodes:
                try:
                    roots.add(pm.PyNode(node).getAllParents()[-2])
                except:  # noqa: E722
                    pass
            if family not in {"layout", "setdress",
                              "mayaAscii", "mayaScene"}:
                # QUESTION Why do we need to exclude these families?
                with parent_nodes(roots, parent=None):
                    cmds.xform(group_name, zeroTransformPivots=True)

            if family not in ["layout", "setdress",
                              "mayaAscii", "mayaScene"]:
                for root in roots:
                    root.setParent(world=True)

            group_node.zeroTransformPivots()
            for root in roots:
                root.setParent(group_node)

            cmds.setAttr(group_name + ".displayHandle", 1)
            cmds.setAttr("{}.displayHandle".format(group_name), 1)

            settings = get_project_settings(os.environ['AVALON_PROJECT'])
            colors = settings['maya']['load']['colors']
            c = colors.get(family)
            if c is not None:
                group_node.useOutlinerColor.set(1)
                group_node.outlinerColor.set(
                    (float(c[0]) / 255),
                    (float(c[1]) / 255),
                    (float(c[2]) / 255))
                cmds.setAttr("{}.useOutlinerColor".format(group_name), 1)
                cmds.setAttr("{}.outlinerColor".format(group_name),
                             (float(c[0]) / 255),
                             (float(c[1]) / 255),
                             (float(c[2]) / 255))

            cmds.setAttr(group_name + ".displayHandle", 1)
            cmds.setAttr("{}.displayHandle".format(group_name), 1)
            # get bounding box
            bbox = cmds.exactWorldBoundingBox(group_name)
            # get pivot position on world space

@@ -202,15 +195,16 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
            cy = cy + pivot[1]
            cz = cz + pivot[2]
            # set selection handle offset to center of bounding box
            cmds.setAttr(group_name + ".selectHandleX", cx)
            cmds.setAttr(group_name + ".selectHandleY", cy)
            cmds.setAttr(group_name + ".selectHandleZ", cz)
            cmds.setAttr("{}.selectHandleX".format(group_name), cx)
            cmds.setAttr("{}.selectHandleY".format(group_name), cy)
            cmds.setAttr("{}.selectHandleZ".format(group_name), cz)

        if family == "rig":
            self._post_process_rig(name, namespace, context, options)
        else:
            if "translate" in options:
                cmds.setAttr(group_name + ".t", *options["translate"])
                cmds.setAttr("{}.translate".format(group_name),
                             *options["translate"])
        return new_nodes

    def switch(self, container, representation):
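The parent_nodes(roots, parent=None) context manager used above temporarily re-parents nodes (here: to world) and restores the original parents on exit. A rough sketch of the idea, assuming unique short node names; the real helper in openpype.hosts.maya.api.lib must additionally track full DAG paths, since re-parenting changes them:

import contextlib

from maya import cmds


@contextlib.contextmanager
def parent_nodes_sketch(nodes, parent=None):
    """Temporarily move `nodes` under `parent` (or world), restore after."""
    original_parents = {}
    for node in nodes:
        parents = cmds.listRelatives(node, parent=True)
        original_parents[node] = parents[0] if parents else None
    try:
        for node in nodes:
            if parent is None:
                cmds.parent(node, world=True)
            else:
                cmds.parent(node, parent)
        yield
    finally:
        for node, old_parent in original_parents.items():
            # Nodes that were already at world level stay there.
            if old_parent is not None:
                cmds.parent(node, old_parent)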
@@ -19,8 +19,7 @@ class YetiRigLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
    def process_reference(
        self, context, name=None, namespace=None, options=None
    ):

        group_name = "{}:{}".format(namespace, name)
        group_name = options['group_name']
        with lib.maintained_selection():
            file_url = self.prepare_root_value(
                self.fname, context["project"]["name"]
@@ -1,6 +1,7 @@
from maya import cmds

import pyblish.api
from openpype.hosts.maya.api.lib import get_all_children


class CollectArnoldSceneSource(pyblish.api.InstancePlugin):

@@ -21,18 +22,21 @@ class CollectArnoldSceneSource(pyblish.api.InstancePlugin):
                self.log.warning("Skipped empty instance: \"%s\" " % objset)
                continue
            if objset.endswith("content_SET"):
                instance.data["setMembers"] = cmds.ls(members, long=True)
                self.log.debug("content members: {}".format(members))
                members = cmds.ls(members, long=True)
                children = get_all_children(members)
                instance.data["contentMembers"] = children
                self.log.debug("content members: {}".format(children))
            elif objset.endswith("proxy_SET"):
                instance.data["proxy"] = cmds.ls(members, long=True)
                self.log.debug("proxy members: {}".format(members))
                set_members = get_all_children(cmds.ls(members, long=True))
                instance.data["proxy"] = set_members
                self.log.debug("proxy members: {}".format(set_members))

        # Use camera in object set if present else default to render globals
        # camera.
        cameras = cmds.ls(type="camera", long=True)
        renderable = [c for c in cameras if cmds.getAttr("%s.renderable" % c)]
        camera = renderable[0]
        for node in instance.data["setMembers"]:
        for node in instance.data["contentMembers"]:
            camera_shapes = cmds.listRelatives(
                node, shapes=True, type="camera"
            )
@@ -1,48 +1,8 @@
from maya import cmds
import maya.api.OpenMaya as om

import pyblish.api
import json


def get_all_children(nodes):
    """Return all children of `nodes` including each instanced child.
    Using maya.cmds.listRelatives(allDescendents=True) includes only the first
    instance. As such, this function acts as an optimal replacement with a
    focus on a fast query.

    """

    sel = om.MSelectionList()
    traversed = set()
    iterator = om.MItDag(om.MItDag.kDepthFirst)
    for node in nodes:

        if node in traversed:
            # Ignore if already processed as a child
            # before
            continue

        sel.clear()
        sel.add(node)
        dag = sel.getDagPath(0)

        iterator.reset(dag)
        # ignore self
        iterator.next()  # noqa: B305
        while not iterator.isDone():

            path = iterator.fullPathName()

            if path in traversed:
                iterator.prune()
                iterator.next()  # noqa: B305
                continue

            traversed.add(path)
            iterator.next()  # noqa: B305

    return list(traversed)
from openpype.hosts.maya.api.lib import get_all_children


class CollectInstances(pyblish.api.ContextPlugin):

@@ -149,13 +109,6 @@ class CollectInstances(pyblish.api.ContextPlugin):

        # Append start frame and end frame to label if present
        if "frameStart" and "frameEnd" in data:

            # Backwards compatibility for 'handles' data
            if "handles" in data:
                data["handleStart"] = data["handles"]
                data["handleEnd"] = data["handles"]
                data.pop('handles')

            # Take handles from context if not set locally on the instance
            for key in ["handleStart", "handleEnd"]:
                if key not in data:
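The get_all_children helper shown above (now centralized in openpype.hosts.maya.api.lib) matters for instanced geometry, where listRelatives(allDescendents=True) reports only the first instance path while the MItDag traversal yields every path. A hypothetical usage sketch, assuming a running Maya session:

from maya import cmds

from openpype.hosts.maya.api.lib import get_all_children

# Build a small instanced hierarchy: grp1|child and grp2|child share a shape.
grp = cmds.group(cmds.polyCube()[0], name="grp1")
grp_instance = cmds.instance(grp, name="grp2")[0]

# Both instance paths are returned, not just the first one.
print(get_all_children(cmds.ls([grp, grp_instance], long=True)))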
@@ -556,7 +556,7 @@ class CollectLook(pyblish.api.InstancePlugin):
                continue
            if cmds.getAttr(attribute, type=True) == "message":
                continue
            node_attributes[attr] = cmds.getAttr(attribute)
            node_attributes[attr] = cmds.getAttr(attribute, asString=True)
        # Only include if there are any properties we care about
        if not node_attributes:
            continue
@@ -1,11 +1,10 @@
from maya import cmds, mel
import pymel.core as pm

import pyblish.api

from openpype.client import get_subset_by_name
from openpype.pipeline import legacy_io
from openpype.hosts.maya.api.lib import get_attribute_input
from openpype.pipeline import legacy_io, KnownPublishError
from openpype.hosts.maya.api import lib


class CollectReview(pyblish.api.InstancePlugin):

@@ -16,7 +15,6 @@ class CollectReview(pyblish.api.InstancePlugin):
    order = pyblish.api.CollectorOrder + 0.3
    label = 'Collect Review Data'
    families = ["review"]
    legacy = True

    def process(self, instance):


@@ -31,62 +29,60 @@ class CollectReview(pyblish.api.InstancePlugin):

        # get cameras
        members = instance.data['setMembers']
        cameras = cmds.ls(members, long=True,
                          dag=True, cameras=True)
        self.log.debug('members: {}'.format(members))
        cameras = cmds.ls(members, long=True, dag=True, cameras=True)
        camera = cameras[0] if cameras else None

        # validate required settings
        assert len(cameras) == 1, "Not a single camera found in extraction"
        camera = cameras[0]
        self.log.debug('camera: {}'.format(camera))
        context = instance.context
        objectset = context.data['objectsets']

        objectset = instance.context.data['objectsets']
        reviewable_subsets = list(set(members) & set(objectset))
        if reviewable_subsets:
            if len(reviewable_subsets) > 1:
                raise KnownPublishError(
                    "Multiple attached subsets for review are not supported. "
                    "Attached: {}".format(", ".join(reviewable_subsets))
                )

        reviewable_subset = None
        reviewable_subset = list(set(members) & set(objectset))
        if reviewable_subset:
            assert len(reviewable_subset) <= 1, "Multiple subsets for review"
            self.log.debug('subset for review: {}'.format(reviewable_subset))
            reviewable_subset = reviewable_subsets[0]
            self.log.debug(
                "Subset attached to review: {}".format(reviewable_subset)
            )

            i = 0
            for inst in instance.context:
            # Find the relevant publishing instance in the current context
            reviewable_inst = next(inst for inst in context
                                   if inst.name == reviewable_subset)
            data = reviewable_inst.data

                self.log.debug('filtering {}'.format(inst))
                data = instance.context[i].data
            self.log.debug(
                'Adding review family to {}'.format(reviewable_subset)
            )
            if data.get('families'):
                data['families'].append('review')
            else:
                data['families'] = ['review']

                if inst.name != reviewable_subset[0]:
                    self.log.debug('subset name does not match {}'.format(
                        reviewable_subset[0]))
                    i += 1
                    continue
            data["cameras"] = cameras
            data['review_camera'] = camera
            data['frameStartFtrack'] = instance.data["frameStartHandle"]
            data['frameEndFtrack'] = instance.data["frameEndHandle"]
            data['frameStartHandle'] = instance.data["frameStartHandle"]
            data['frameEndHandle'] = instance.data["frameEndHandle"]
            data["frameStart"] = instance.data["frameStart"]
            data["frameEnd"] = instance.data["frameEnd"]
            data['step'] = instance.data['step']
            data['fps'] = instance.data['fps']
            data['review_width'] = instance.data['review_width']
            data['review_height'] = instance.data['review_height']
            data["isolate"] = instance.data["isolate"]
            data["panZoom"] = instance.data.get("panZoom", False)
            data["panel"] = instance.data["panel"]

            # The review instance must be active
            cmds.setAttr(str(instance) + '.active', 1)

            instance.data['remove'] = True

                if data.get('families'):
                    data['families'].append('review')
                else:
                    data['families'] = ['review']
                self.log.debug('adding review family to {}'.format(
                    reviewable_subset))
                data['review_camera'] = camera
                # data["publish"] = False
                data['frameStartFtrack'] = instance.data["frameStartHandle"]
                data['frameEndFtrack'] = instance.data["frameEndHandle"]
                data['frameStartHandle'] = instance.data["frameStartHandle"]
                data['frameEndHandle'] = instance.data["frameEndHandle"]
                data["frameStart"] = instance.data["frameStart"]
                data["frameEnd"] = instance.data["frameEnd"]
                data['handles'] = instance.data.get('handles', None)
                data['step'] = instance.data['step']
                data['fps'] = instance.data['fps']
                data['review_width'] = instance.data['review_width']
                data['review_height'] = instance.data['review_height']
                data["isolate"] = instance.data["isolate"]
                data["panZoom"] = instance.data.get("panZoom", False)
                data["panel"] = instance.data["panel"]
                cmds.setAttr(str(instance) + '.active', 1)
                self.log.debug('data {}'.format(instance.context[i].data))
                instance.context[i].data.update(data)
                instance.data['remove'] = True
                self.log.debug('isntance data {}'.format(instance.data))
        else:
            legacy_subset_name = task + 'Review'
            asset_doc = instance.context.data['assetEntity']

@@ -101,6 +97,7 @@ class CollectReview(pyblish.api.InstancePlugin):
                self.log.debug("Existing subsets found, keep legacy name.")
                instance.data['subset'] = legacy_subset_name

            instance.data["cameras"] = cameras
            instance.data['review_camera'] = camera
            instance.data['frameStartFtrack'] = \
                instance.data["frameStartHandle"]

@@ -108,50 +105,62 @@ class CollectReview(pyblish.api.InstancePlugin):
                instance.data["frameEndHandle"]

            # make ftrack publishable
            instance.data["families"] = ['ftrack']
            instance.data.setdefault("families", []).append('ftrack')

            cmds.setAttr(str(instance) + '.active', 1)

        # Collect audio
        playback_slider = mel.eval('$tmpVar=$gPlayBackSlider')
        audio_name = cmds.timeControl(playback_slider, q=True, s=True)
        audio_name = cmds.timeControl(playback_slider,
                                      query=True,
                                      sound=True)
        display_sounds = cmds.timeControl(
            playback_slider, q=True, displaySound=True
            playback_slider, query=True, displaySound=True
        )

        audio_nodes = []
        def get_audio_node_data(node):
            return {
                "offset": cmds.getAttr("{}.offset".format(node)),
                "filename": cmds.getAttr("{}.filename".format(node))
            }

        audio_data = []

        if audio_name:
            audio_nodes.append(pm.PyNode(audio_name))
            audio_data.append(get_audio_node_data(audio_name))

        if not audio_name and display_sounds:
            start_frame = int(pm.playbackOptions(q=True, min=True))
            end_frame = float(pm.playbackOptions(q=True, max=True))
            frame_range = range(int(start_frame), int(end_frame))
        elif display_sounds:
            start_frame = int(cmds.playbackOptions(query=True, min=True))
            end_frame = int(cmds.playbackOptions(query=True, max=True))

            for node in pm.ls(type="audio"):
            for node in cmds.ls(type="audio"):
                # Check if frame range and audio range intersections,
                # for whether to include this audio node or not.
                start_audio = node.offset.get()
                end_audio = node.offset.get() + node.duration.get()
                audio_range = range(int(start_audio), int(end_audio))
                duration = cmds.getAttr("{}.duration".format(node))
                start_audio = cmds.getAttr("{}.offset".format(node))
                end_audio = start_audio + duration

                if bool(set(frame_range).intersection(audio_range)):
                    audio_nodes.append(node)
                if start_audio <= end_frame and end_audio > start_frame:
                    audio_data.append(get_audio_node_data(node))

        instance.data["audio"] = []
        for node in audio_nodes:
            instance.data["audio"].append(
                {
                    "offset": node.offset.get(),
                    "filename": node.filename.get()
                }
            )
        instance.data["audio"] = audio_data

        # Convert enum attribute index to string.
        index = instance.data.get("displayLights", 0)
        display_lights = lib.DISPLAY_LIGHTS_VALUES[index]
        if display_lights == "project_settings":
            settings = instance.context.data["project_settings"]
            settings = settings["maya"]["publish"]["ExtractPlayblast"]
            settings = settings["capture_preset"]["Viewport Options"]
            display_lights = settings["displayLights"]
        instance.data["displayLights"] = display_lights

        # Collect focal length.
        if camera is None:
            return

        attr = camera + ".focalLength"
        focal_length = None
        if get_attribute_input(attr):
        if lib.get_attribute_input(attr):
            start = instance.data["frameStart"]
            end = instance.data["frameEnd"] + 1
            focal_length = [
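The audio gathering above swaps a set(range(...)).intersection(...) membership test for a direct interval comparison, which is constant-time and tolerates fractional offsets. The general pattern, shown standalone for illustration (not code from this commit):

def overlaps(start_a, end_a, start_b, end_b):
    # Mirrors the collector's boundary choice: the first interval may
    # start exactly on the other's end, but must end after its start.
    return start_a <= end_b and end_a > start_b

# Audio clip 95..120 against playback range 100..200 -> included.
assert overlaps(95, 120, 100, 200)
# Audio clip 10..90 ends before playback starts -> excluded.
assert not overlaps(10, 90, 100, 200)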
@@ -1,12 +1,12 @@
import os
from collections import defaultdict
import json

from maya import cmds
import arnold

from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import (
    maintained_selection, attribute_values, delete_after
)
from openpype.hosts.maya.api import lib


class ExtractArnoldSceneSource(publish.Extractor):

@@ -19,8 +19,7 @@ class ExtractArnoldSceneSource(publish.Extractor):

    def process(self, instance):
        staging_dir = self.staging_dir(instance)
        filename = "{}.ass".format(instance.name)
        file_path = os.path.join(staging_dir, filename)
        file_path = os.path.join(staging_dir, "{}.ass".format(instance.name))

        # Mask
        mask = arnold.AI_NODE_ALL

@@ -71,8 +70,8 @@ class ExtractArnoldSceneSource(publish.Extractor):
            "mask": mask
        }

        filenames = self._extract(
            instance.data["setMembers"], attribute_data, kwargs
        filenames, nodes_by_id = self._extract(
            instance.data["contentMembers"], attribute_data, kwargs
        )

        if "representations" not in instance.data:

@@ -88,6 +87,19 @@ class ExtractArnoldSceneSource(publish.Extractor):

        instance.data["representations"].append(representation)

        json_path = os.path.join(staging_dir, "{}.json".format(instance.name))
        with open(json_path, "w") as f:
            json.dump(nodes_by_id, f)

        representation = {
            "name": "json",
            "ext": "json",
            "files": os.path.basename(json_path),
            "stagingDir": staging_dir
        }

        instance.data["representations"].append(representation)

        self.log.info(
            "Extracted instance {} to: {}".format(instance.name, staging_dir)
        )

@@ -97,7 +109,7 @@ class ExtractArnoldSceneSource(publish.Extractor):
            return

        kwargs["filename"] = file_path.replace(".ass", "_proxy.ass")
        filenames = self._extract(
        filenames, _ = self._extract(
            instance.data["proxy"], attribute_data, kwargs
        )

@@ -113,34 +125,60 @@ class ExtractArnoldSceneSource(publish.Extractor):
        instance.data["representations"].append(representation)

    def _extract(self, nodes, attribute_data, kwargs):
        self.log.info("Writing: " + kwargs["filename"])
        self.log.info(
            "Writing {} with:\n{}".format(kwargs["filename"], kwargs)
        )
        filenames = []
        nodes_by_id = defaultdict(list)
        # Duplicating nodes so they are direct children of the world. This
        # makes the hierarchy of any exported ass file the same.
        with delete_after() as delete_bin:
        with lib.delete_after() as delete_bin:
            duplicate_nodes = []
            for node in nodes:
                # Only interested in transforms:
                if cmds.nodeType(node) != "transform":
                    continue

                # Only interested in transforms with shapes.
                shapes = cmds.listRelatives(
                    node, shapes=True, noIntermediate=True
                )
                if not shapes:
                    continue

                duplicate_transform = cmds.duplicate(node)[0]

                # Discard the children.
                shapes = cmds.listRelatives(duplicate_transform, shapes=True)
                if cmds.listRelatives(duplicate_transform, parent=True):
                    duplicate_transform = cmds.parent(
                        duplicate_transform, world=True
                    )[0]

                basename = node.rsplit("|", 1)[-1].rsplit(":", 1)[-1]
                duplicate_transform = cmds.rename(
                    duplicate_transform, basename
                )

                # Discard children nodes that are not shapes
                shapes = cmds.listRelatives(
                    duplicate_transform, shapes=True, fullPath=True
                )
                children = cmds.listRelatives(
                    duplicate_transform, children=True
                    duplicate_transform, children=True, fullPath=True
                )
                cmds.delete(set(children) - set(shapes))

                duplicate_transform = cmds.parent(
                    duplicate_transform, world=True
                )[0]

                cmds.rename(duplicate_transform, node.split("|")[-1])
                duplicate_transform = "|" + node.split("|")[-1]

                duplicate_nodes.append(duplicate_transform)
                duplicate_nodes.extend(shapes)
                delete_bin.append(duplicate_transform)

        with attribute_values(attribute_data):
            with maintained_selection():
            # Copy cbId to mtoa_constant.
            for node in duplicate_nodes:
                # Converting Maya hierarchy separator "|" to Arnold
                # separator "/".
                nodes_by_id[lib.get_id(node)].append(node.replace("|", "/"))

        with lib.attribute_values(attribute_data):
            with lib.maintained_selection():
                self.log.info(
                    "Writing: {}".format(duplicate_nodes)
                )

@@ -157,4 +195,4 @@ class ExtractArnoldSceneSource(publish.Extractor):

        self.log.info("Exported: {}".format(filenames))

        return filenames
        return filenames, nodes_by_id
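The new json representation written above maps each cbId to the Arnold-style ("/"-separated) paths of the duplicated nodes. A hedged sketch of how a downstream consumer might read it back; the file path and the loader-side logic are assumptions, only the data shape follows from the extractor:

import json

# Path as staged by the extractor: <staging_dir>/<instance_name>.json
with open("/path/to/staging/assMain.json") as f:
    nodes_by_id = json.load(f)

for cb_id, arnold_paths in nodes_by_id.items():
    # e.g. "5f3c...:1a2b": ["/body_GRP/body", "/body_GRP/bodyShape"]
    print(cb_id, arnold_paths)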
65 openpype/hosts/maya/plugins/publish/extract_gpu_cache.py Normal file

@@ -0,0 +1,65 @@
import json

from maya import cmds

from openpype.pipeline import publish


class ExtractGPUCache(publish.Extractor):
    """Extract the content of the instance to a GPU cache file."""

    label = "GPU Cache"
    hosts = ["maya"]
    families = ["model", "animation", "pointcache"]
    step = 1.0
    stepSave = 1
    optimize = True
    optimizationThreshold = 40000
    optimizeAnimationsForMotionBlur = True
    writeMaterials = True
    useBaseTessellation = True

    def process(self, instance):
        cmds.loadPlugin("gpuCache", quiet=True)

        staging_dir = self.staging_dir(instance)
        filename = "{}_gpu_cache".format(instance.name)

        # Write out GPU cache file.
        kwargs = {
            "directory": staging_dir,
            "fileName": filename,
            "saveMultipleFiles": False,
            "simulationRate": self.step,
            "sampleMultiplier": self.stepSave,
            "optimize": self.optimize,
            "optimizationThreshold": self.optimizationThreshold,
            "optimizeAnimationsForMotionBlur": (
                self.optimizeAnimationsForMotionBlur
            ),
            "writeMaterials": self.writeMaterials,
            "useBaseTessellation": self.useBaseTessellation
        }
        self.log.debug(
            "Extract {} with:\n{}".format(
                instance[:], json.dumps(kwargs, indent=4, sort_keys=True)
            )
        )
        cmds.gpuCache(instance[:], **kwargs)

        if "representations" not in instance.data:
            instance.data["representations"] = []

        representation = {
            "name": "gpu_cache",
            "ext": "abc",
            "files": filename + ".abc",
            "stagingDir": staging_dir,
            "outputName": "gpu_cache"
        }

        instance.data["representations"].append(representation)

        self.log.info(
            "Extracted instance {} to: {}".format(instance.name, staging_dir)
        )
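The extractor above writes an Alembic-backed gpuCache (.abc) per instance. On the load side such a file would typically be attached to a gpuCache node; a sketch under the assumption of the standard gpuCache plug-in attributes, with a made-up file path:

from maya import cmds

cmds.loadPlugin("gpuCache", quiet=True)

# Create a gpuCache shape under a fresh transform and point it at the file.
transform = cmds.createNode("transform", name="myAsset_GPU")
cache = cmds.createNode("gpuCache", parent=transform)
cmds.setAttr(cache + ".cacheFileName",
             "/path/to/staging/model_gpu_cache.abc", type="string")
cmds.setAttr(cache + ".cacheGeomPath", "|", type="string")  # cache root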
@@ -26,7 +26,7 @@ HARDLINK = 2


@attr.s
class TextureResult:
class TextureResult(object):
    """The resulting texture of a processed file for a resource"""
    # Path to the file
    path = attr.ib()
@@ -9,7 +9,6 @@ from openpype.pipeline import publish
from openpype.hosts.maya.api import lib

from maya import cmds
import pymel.core as pm


@contextlib.contextmanager

@@ -35,13 +34,15 @@ class ExtractPlayblast(publish.Extractor):
    families = ["review"]
    optional = True
    capture_preset = {}
    profiles = None

    def _capture(self, preset):
        self.log.info(
            "Using preset:\n{}".format(
                json.dumps(preset, sort_keys=True, indent=4)
        if os.environ.get("OPENPYPE_DEBUG") == "1":
            self.log.debug(
                "Using preset: {}".format(
                    json.dumps(preset, indent=4, sort_keys=True)
                )
            )
        )

        path = capture.capture(log=self.log, **preset)
        self.log.debug("playblast path {}".format(path))

@@ -66,12 +67,25 @@ class ExtractPlayblast(publish.Extractor):
        # get cameras
        camera = instance.data["review_camera"]

        preset = lib.load_capture_preset(data=self.capture_preset)
        # Grab capture presets from the project settings
        capture_presets = self.capture_preset
        task_data = instance.data["anatomyData"].get("task", {})
        capture_preset = lib.get_capture_preset(
            task_data.get("name"),
            task_data.get("type"),
            instance.data["subset"],
            instance.context.data["project_settings"],
            self.log
        )

        preset = lib.load_capture_preset(data=capture_preset)

        # "isolate_view" will already have been applied at creation, so we'll
        # ignore it here.
        preset.pop("isolate_view")

        # Set resolution variables from capture presets
        width_preset = capture_presets["Resolution"]["width"]
        height_preset = capture_presets["Resolution"]["height"]
        width_preset = capture_preset["Resolution"]["width"]
        height_preset = capture_preset["Resolution"]["height"]

        # Set resolution variables from asset values
        asset_data = instance.data["assetEntity"]["data"]
        asset_width = asset_data.get("resolutionWidth")

@@ -110,11 +124,15 @@ class ExtractPlayblast(publish.Extractor):
        preset["filename"] = path
        preset["overwrite"] = True

        pm.refresh(f=True)
        cmds.refresh(force=True)

        refreshFrameInt = int(pm.playbackOptions(q=True, minTime=True))
        pm.currentTime(refreshFrameInt - 1, edit=True)
        pm.currentTime(refreshFrameInt, edit=True)
        refreshFrameInt = int(cmds.playbackOptions(q=True, minTime=True))
        cmds.currentTime(refreshFrameInt - 1, edit=True)
        cmds.currentTime(refreshFrameInt, edit=True)

        # Use displayLights setting from instance
        key = "displayLights"
        preset["viewport_options"][key] = instance.data[key]

        # Override transparency if requested.
        transparency = instance.data.get("transparency", 0)

@@ -122,8 +140,9 @@ class ExtractPlayblast(publish.Extractor):
            preset["viewport2_options"]["transparencyAlgorithm"] = transparency

        # Isolate view is requested by having objects in the set besides a
        # camera.
        if preset.pop("isolate_view", False) and instance.data.get("isolate"):
        # camera. If there is only 1 member it'll be the camera because we
        # validate to have 1 camera only.
        if instance.data["isolate"] and len(instance.data["setMembers"]) > 1:
            preset["isolate"] = instance.data["setMembers"]

        # Show/Hide image planes on request.

@@ -158,7 +177,7 @@ class ExtractPlayblast(publish.Extractor):
        )

        override_viewport_options = (
            capture_presets["Viewport Options"]["override_viewport_options"]
            capture_preset["Viewport Options"]["override_viewport_options"]
        )

        # Force viewer to False in call to capture because we have our own

@@ -226,7 +245,7 @@ class ExtractPlayblast(publish.Extractor):
            tags.append("delete")

        # Add camera node name to representation data
        camera_node_name = pm.ls(camera)[0].getTransform().name()
        camera_node_name = cmds.listRelatives(camera, parent=True)[0]

        collected_files = list(frame_collection)
        # single frame file shouldn't be in list, only as a string

@@ -234,8 +253,8 @@ class ExtractPlayblast(publish.Extractor):
            collected_files = collected_files[0]

        representation = {
            "name": self.capture_preset["Codec"]["compression"],
            "ext": self.capture_preset["Codec"]["compression"],
            "name": capture_preset["Codec"]["compression"],
            "ext": capture_preset["Codec"]["compression"],
            "files": collected_files,
            "stagingDir": stagingdir,
            "frameStart": start,
@@ -1,6 +1,7 @@
import os
import glob
import tempfile
import json

import capture


@@ -8,7 +9,6 @@ from openpype.pipeline import publish
from openpype.hosts.maya.api import lib

from maya import cmds
import pymel.core as pm


class ExtractThumbnail(publish.Extractor):

@@ -28,22 +28,25 @@ class ExtractThumbnail(publish.Extractor):

        camera = instance.data["review_camera"]

        maya_setting = instance.context.data["project_settings"]["maya"]
        plugin_setting = maya_setting["publish"]["ExtractPlayblast"]
        capture_preset = plugin_setting["capture_preset"]
        task_data = instance.data["anatomyData"].get("task", {})
        capture_preset = lib.get_capture_preset(
            task_data.get("name"),
            task_data.get("type"),
            instance.data["subset"],
            instance.context.data["project_settings"],
            self.log
        )

        preset = lib.load_capture_preset(data=capture_preset)

        # "isolate_view" will already have been applied at creation, so we'll
        # ignore it here.
        preset.pop("isolate_view")

        override_viewport_options = (
            capture_preset["Viewport Options"]["override_viewport_options"]
        )

        try:
            preset = lib.load_capture_preset(data=capture_preset)
        except KeyError as ke:
            self.log.error("Error loading capture presets: {}".format(str(ke)))
            preset = {}
        self.log.info("Using viewport preset: {}".format(preset))

        # preset["off_screen"] = False

        preset["camera"] = camera
        preset["start_frame"] = instance.data["frameStart"]
        preset["end_frame"] = instance.data["frameStart"]

@@ -59,10 +62,9 @@ class ExtractThumbnail(publish.Extractor):
            "overscan": 1.0,
            "depthOfField": cmds.getAttr("{0}.depthOfField".format(camera)),
        }
        capture_presets = capture_preset
        # Set resolution variables from capture presets
        width_preset = capture_presets["Resolution"]["width"]
        height_preset = capture_presets["Resolution"]["height"]
        width_preset = capture_preset["Resolution"]["width"]
        height_preset = capture_preset["Resolution"]["height"]
        # Set resolution variables from asset values
        asset_data = instance.data["assetEntity"]["data"]
        asset_width = asset_data.get("resolutionWidth")

@@ -99,11 +101,15 @@ class ExtractThumbnail(publish.Extractor):
        preset["filename"] = path
        preset["overwrite"] = True

        pm.refresh(f=True)
        cmds.refresh(force=True)

        refreshFrameInt = int(pm.playbackOptions(q=True, minTime=True))
        pm.currentTime(refreshFrameInt - 1, edit=True)
        pm.currentTime(refreshFrameInt, edit=True)
        refreshFrameInt = int(cmds.playbackOptions(q=True, minTime=True))
        cmds.currentTime(refreshFrameInt - 1, edit=True)
        cmds.currentTime(refreshFrameInt, edit=True)

        # Use displayLights setting from instance
        key = "displayLights"
        preset["viewport_options"][key] = instance.data[key]

        # Override transparency if requested.
        transparency = instance.data.get("transparency", 0)

@@ -111,8 +117,9 @@ class ExtractThumbnail(publish.Extractor):
            preset["viewport2_options"]["transparencyAlgorithm"] = transparency

        # Isolate view is requested by having objects in the set besides a
        # camera.
        if preset.pop("isolate_view", False) and instance.data.get("isolate"):
        # camera. If there is only 1 member it'll be the camera because we
        # validate to have 1 camera only.
        if instance.data["isolate"] and len(instance.data["setMembers"]) > 1:
            preset["isolate"] = instance.data["setMembers"]

        # Show or Hide Image Plane

@@ -140,6 +147,13 @@ class ExtractThumbnail(publish.Extractor):
            preset.update(panel_preset)
            cmds.setFocus(panel)

        if os.environ.get("OPENPYPE_DEBUG") == "1":
            self.log.debug(
                "Using preset: {}".format(
                    json.dumps(preset, indent=4, sort_keys=True)
                )
            )

        path = capture.capture(**preset)
        playblast = self._fix_playblast_output_path(path)
@@ -30,9 +30,7 @@ class ExtractVRayProxy(publish.Extractor):
            # non-animated subsets
            keys = ["frameStart", "frameEnd",
                    "handleStart", "handleEnd",
                    "frameStartHandle", "frameEndHandle",
                    # Backwards compatibility
                    "handles"]
                    "frameStartHandle", "frameEndHandle"]
            for key in keys:
                instance.data.pop(key, None)
@@ -65,9 +65,10 @@ class ExtractXgen(publish.Extractor):
        )
        cmds.delete(set(children) - set(shapes))

        duplicate_transform = cmds.parent(
            duplicate_transform, world=True
        )[0]
        if cmds.listRelatives(duplicate_transform, parent=True):
            duplicate_transform = cmds.parent(
                duplicate_transform, world=True
            )[0]

        duplicate_nodes.append(duplicate_transform)
@@ -1,5 +1,3 @@
import maya.cmds as cmds

import pyblish.api
from openpype.pipeline.publish import (
    ValidateContentsOrder, PublishValidationError

@@ -22,10 +20,11 @@ class ValidateArnoldSceneSource(pyblish.api.InstancePlugin):
    families = ["ass"]
    label = "Validate Arnold Scene Source"

    def _get_nodes_data(self, nodes):
    def _get_nodes_by_name(self, nodes):
        ungrouped_nodes = []
        nodes_by_name = {}
        parents = []
        same_named_nodes = {}
        for node in nodes:
            node_split = node.split("|")
            if len(node_split) == 2:

@@ -35,21 +34,38 @@ class ValidateArnoldSceneSource(pyblish.api.InstancePlugin):
            if parent:
                parents.append(parent)

            nodes_by_name[node_split[-1]] = node
            for shape in cmds.listRelatives(node, shapes=True):
                nodes_by_name[shape.split("|")[-1]] = shape
            node_name = node.rsplit("|", 1)[-1].rsplit(":", 1)[-1]

            # Check for same named nodes, which can happen in different
            # hierarchies.
            if node_name in nodes_by_name:
                try:
                    same_named_nodes[node_name].append(node)
                except KeyError:
                    same_named_nodes[node_name] = [
                        nodes_by_name[node_name], node
                    ]

            nodes_by_name[node_name] = node

        if same_named_nodes:
            message = "Found nodes with the same name:"
            for name, nodes in same_named_nodes.items():
                message += "\n\n\"{}\":\n{}".format(name, "\n".join(nodes))

            raise PublishValidationError(message)

        return ungrouped_nodes, nodes_by_name, parents

    def process(self, instance):
        ungrouped_nodes = []

        nodes, content_nodes_by_name, content_parents = self._get_nodes_data(
            instance.data["setMembers"]
        nodes, content_nodes_by_name, content_parents = (
            self._get_nodes_by_name(instance.data["contentMembers"])
        )
        ungrouped_nodes.extend(nodes)

        nodes, proxy_nodes_by_name, proxy_parents = self._get_nodes_data(
        nodes, proxy_nodes_by_name, proxy_parents = self._get_nodes_by_name(
            instance.data.get("proxy", [])
        )
        ungrouped_nodes.extend(nodes)

@@ -66,11 +82,11 @@ class ValidateArnoldSceneSource(pyblish.api.InstancePlugin):
            return

        # Validate for content and proxy nodes amount being the same.
        if len(instance.data["setMembers"]) != len(instance.data["proxy"]):
        if len(instance.data["contentMembers"]) != len(instance.data["proxy"]):
            raise PublishValidationError(
                "Amount of content nodes ({}) and proxy nodes ({}) needs to "
                "be the same.".format(
                    len(instance.data["setMembers"]),
                    len(instance.data["contentMembers"]),
                    len(instance.data["proxy"])
                )
            )
@@ -0,0 +1,74 @@
import pyblish.api
from openpype.hosts.maya.api import lib
from openpype.pipeline.publish import (
    ValidateContentsOrder, PublishValidationError, RepairAction
)


class ValidateArnoldSceneSourceCbid(pyblish.api.InstancePlugin):
    """Validate Arnold Scene Source Cbid.

    It is required for the proxy and content nodes to share the same cbid.
    """

    order = ValidateContentsOrder
    hosts = ["maya"]
    families = ["ass"]
    label = "Validate Arnold Scene Source CBID"
    actions = [RepairAction]

    @staticmethod
    def _get_nodes_by_name(nodes):
        nodes_by_name = {}
        for node in nodes:
            node_name = node.rsplit("|", 1)[-1].rsplit(":", 1)[-1]
            nodes_by_name[node_name] = node

        return nodes_by_name

    @classmethod
    def get_invalid_couples(cls, instance):
        content_nodes_by_name = cls._get_nodes_by_name(
            instance.data["contentMembers"]
        )
        proxy_nodes_by_name = cls._get_nodes_by_name(
            instance.data.get("proxy", [])
        )

        invalid_couples = []
        for content_name, content_node in content_nodes_by_name.items():
            proxy_node = proxy_nodes_by_name.get(content_name, None)

            if not proxy_node:
                cls.log.debug(
                    "Content node '{}' has no matching proxy node.".format(
                        content_node
                    )
                )
                continue

            content_id = lib.get_id(content_node)
            proxy_id = lib.get_id(proxy_node)
            if content_id != proxy_id:
                invalid_couples.append((content_node, proxy_node))

        return invalid_couples

    def process(self, instance):
        # Proxy validation.
        if not instance.data.get("proxy", []):
            return

        # Validate for proxy nodes sharing the same cbId as content nodes.
        invalid_couples = self.get_invalid_couples(instance)
        if invalid_couples:
            raise PublishValidationError(
                "Found proxy nodes with mismatching cbid:\n{}".format(
                    invalid_couples
                )
            )

    @classmethod
    def repair(cls, instance):
        for content_node, proxy_node in cls.get_invalid_couples(cls, instance):
            lib.set_id(proxy_node, lib.get_id(content_node), overwrite=False)
@@ -1,13 +1,17 @@
import pymel.core as pm
from collections import defaultdict

from maya import cmds

import pyblish.api

from openpype.hosts.maya.api.lib import set_attribute
from openpype.pipeline.publish import (
    RepairContextAction,
    ValidateContentsOrder,
)


class ValidateAttributes(pyblish.api.ContextPlugin):
class ValidateAttributes(pyblish.api.InstancePlugin):
    """Ensure attributes are consistent.

    Attributes to validate and their values comes from the
@@ -27,86 +31,80 @@ class ValidateAttributes(pyblish.api.ContextPlugin):
    attributes = None

    def process(self, context):
    def process(self, instance):
        # Check for preset existence.
        if not self.attributes:
            return

        invalid = self.get_invalid(context, compute=True)
        invalid = self.get_invalid(instance, compute=True)
        if invalid:
            raise RuntimeError(
                "Found attributes with invalid values: {}".format(invalid)
            )

    @classmethod
    def get_invalid(cls, context, compute=False):
        invalid = context.data.get("invalid_attributes", [])
    def get_invalid(cls, instance, compute=False):
        if compute:
            invalid = cls.get_invalid_attributes(context)

        return invalid
            return cls.get_invalid_attributes(instance)
        else:
            return instance.data.get("invalid_attributes", [])

    @classmethod
    def get_invalid_attributes(cls, context):
    def get_invalid_attributes(cls, instance):
        invalid_attributes = []
        for instance in context:
            # Filter publisable instances.
            if not instance.data["publish"]:

        # Filter families.
        families = [instance.data["family"]]
        families += instance.data.get("families", [])
        families = set(families) & set(cls.attributes.keys())
        if not families:
            return []

        # Get all attributes to validate.
        attributes = defaultdict(dict)
        for family in families:
            if family not in cls.attributes:
                # No attributes to validate for family
                continue

            # Filter families.
            families = [instance.data["family"]]
            families += instance.data.get("families", [])
            families = list(set(families) & set(cls.attributes.keys()))
            if not families:
            for preset_attr, preset_value in cls.attributes[family].items():
                node_name, attribute_name = preset_attr.split(".", 1)
                attributes[node_name][attribute_name] = preset_value

        if not attributes:
            return []

        # Get invalid attributes.
        nodes = cmds.ls(long=True)
        for node in nodes:
            node_name = node.rsplit("|", 1)[-1].rsplit(":", 1)[-1]
            if node_name not in attributes:
                continue

            # Get all attributes to validate.
            attributes = {}
            for family in families:
                for preset in cls.attributes[family]:
                    [node_name, attribute_name] = preset.split(".")
                    try:
                        attributes[node_name].update(
                            {attribute_name: cls.attributes[family][preset]}
                        )
                    except KeyError:
                        attributes.update({
                            node_name: {
                                attribute_name: cls.attributes[family][preset]
                            }
                        })
            for attr_name, expected in attributes.items():

            # Get invalid attributes.
            nodes = pm.ls()
            for node in nodes:
                name = node.name(stripNamespace=True)
                if name not in attributes.keys():
                # Skip if attribute does not exist
                if not cmds.attributeQuery(attr_name, node=node, exists=True):
                    continue

                presets_to_validate = attributes[name]
                for attribute in node.listAttr():
                    names = [attribute.shortName(), attribute.longName()]
                    attribute_name = list(
                        set(names) & set(presets_to_validate.keys())
                plug = "{}.{}".format(node, attr_name)
                value = cmds.getAttr(plug)
                if value != expected:
                    invalid_attributes.append(
                        {
                            "attribute": plug,
                            "expected": expected,
                            "current": value
                        }
                    )
                    if attribute_name:
                        expected = presets_to_validate[attribute_name[0]]
                        if attribute.get() != expected:
                            invalid_attributes.append(
                                {
                                    "attribute": attribute,
                                    "expected": expected,
                                    "current": attribute.get()
                                }
                            )

        context.data["invalid_attributes"] = invalid_attributes
        instance.data["invalid_attributes"] = invalid_attributes
        return invalid_attributes

    @classmethod
    def repair(cls, instance):
        invalid = cls.get_invalid(instance)
        for data in invalid:
            data["attribute"].set(data["expected"])
            node, attr = data["attribute"].split(".", 1)
            value = data["expected"]
            set_attribute(node=node, attribute=attr, value=value)
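For context, the `attributes` preset that drives this validator maps family names to `node.attr` keys; a minimal sketch of the expected shape and the split that feeds it (the family and attribute names here are hypothetical, not taken from actual project settings):

# Hypothetical preset shape consumed by ValidateAttributes.attributes:
attributes = {
    "render": {
        "defaultRenderGlobals.animation": 1,
        "defaultResolution.width": 1920,
    }
}

# Each "node.attr" key is split once from the left, so attribute names
# may themselves contain dots:
preset_attr = "defaultRenderGlobals.animation"
node_name, attribute_name = preset_attr.split(".", 1)
assert (node_name, attribute_name) == ("defaultRenderGlobals", "animation")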
@@ -4,6 +4,7 @@ from maya import cmds
from openpype.pipeline.publish import (
    RepairAction,
    ValidateContentsOrder,
    PublishValidationError
)
from openpype.hosts.maya.api.lib_rendersetup import (
    get_attr_overrides,
@@ -49,7 +50,6 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
        frame_start_handle = int(context.data.get("frameStartHandle"))
        frame_end_handle = int(context.data.get("frameEndHandle"))
        handles = int(context.data.get("handles"))
        handle_start = int(context.data.get("handleStart"))
        handle_end = int(context.data.get("handleEnd"))
        frame_start = int(context.data.get("frameStart"))
@@ -66,8 +66,6 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
        assert frame_start_handle <= frame_end_handle, (
            "start frame is lower then end frame")

        assert handles >= 0, ("handles cannot have negative values")

        # compare with data on instance
        errors = []
        if [ef for ef in self.exclude_families
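The legacy single `handles` key is dropped here in favour of the explicit `handleStart`/`handleEnd` pair; a worked example of how the handle-inclusive range relates to the context values above (the numbers are illustrative):

frame_start, frame_end = 1001, 1100
handle_start, handle_end = 5, 5

# frameStartHandle / frameEndHandle extend the range by the handles:
frame_start_handle = frame_start - handle_start  # 996
frame_end_handle = frame_end + handle_end        # 1105
assert frame_start_handle <= frame_end_handle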
@@ -1,8 +1,14 @@
import pymel.core as pc
from maya import cmds
import pyblish.api

import openpype.hosts.maya.api.action
from openpype.hosts.maya.api.lib import maintained_selection
from openpype.hosts.maya.api.lib import (
    maintained_selection,
    delete_after,
    undo_chunk,
    get_attribute,
    set_attribute
)
from openpype.pipeline.publish import (
    RepairAction,
    ValidateMeshOrder,
@@ -31,60 +37,68 @@ class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin):
        else:
            active = False

    @classmethod
    def get_default_attributes(cls):
        # Get default arnold attribute values for mesh type.
        defaults = {}
        with delete_after() as tmp:
            transform = cmds.createNode("transform")
            tmp.append(transform)

            mesh = cmds.createNode("mesh", parent=transform)
            for attr in cmds.listAttr(mesh, string="ai*"):
                plug = "{}.{}".format(mesh, attr)
                try:
                    defaults[attr] = get_attribute(plug)
                except RuntimeError:
                    cls.log.debug("Ignoring arnold attribute: {}".format(attr))

        return defaults

    @classmethod
    def get_invalid_attributes(cls, instance, compute=False):
        invalid = []

        if compute:
            # Get default arnold attributes.
            temp_transform = pc.polyCube()[0]

            for shape in pc.ls(instance, type="mesh"):
                for attr in temp_transform.getShape().listAttr():
                    if not attr.attrName().startswith("ai"):
                        continue
            meshes = cmds.ls(instance, type="mesh", long=True)
            if not meshes:
                return []

                    target_attr = pc.PyNode(
                        "{}.{}".format(shape.name(), attr.attrName())
                    )
                    if attr.get() != target_attr.get():
                        invalid.append(target_attr)

            pc.delete(temp_transform)
            # Compare the values against the defaults
            defaults = cls.get_default_attributes()
            for mesh in meshes:
                for attr_name, default_value in defaults.items():
                    plug = "{}.{}".format(mesh, attr_name)
                    if get_attribute(plug) != default_value:
                        invalid.append(plug)

            instance.data["nondefault_arnold_attributes"] = invalid
        else:
            invalid.extend(instance.data["nondefault_arnold_attributes"])

        return invalid
        return instance.data.get("nondefault_arnold_attributes", [])

    @classmethod
    def get_invalid(cls, instance):
        invalid = []

        for attr in cls.get_invalid_attributes(instance, compute=False):
            invalid.append(attr.node().name())

        return invalid
        invalid_attrs = cls.get_invalid_attributes(instance, compute=False)
        invalid_nodes = set(attr.split(".", 1)[0] for attr in invalid_attrs)
        return sorted(invalid_nodes)

    @classmethod
    def repair(cls, instance):
        with maintained_selection():
            with pc.UndoChunk():
                temp_transform = pc.polyCube()[0]

            with undo_chunk():
                defaults = cls.get_default_attributes()
                attributes = cls.get_invalid_attributes(
                    instance, compute=False
                )
                for attr in attributes:
                    source = pc.PyNode(
                        "{}.{}".format(
                            temp_transform.getShape(), attr.attrName()
                        )
                    node, attr_name = attr.split(".", 1)
                    value = defaults[attr_name]
                    set_attribute(
                        node=node,
                        attribute=attr_name,
                        value=value
                    )
                    attr.set(source.get())

                pc.delete(temp_transform)

    def process(self, instance):
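The new `get_default_attributes` harvests Arnold defaults from a throwaway mesh inside a `delete_after()` block. This is a sketch of that helper's contract as used above, assuming its semantics; it is not the actual `openpype.hosts.maya.api.lib` implementation:

from contextlib import contextmanager
from maya import cmds

@contextmanager
def delete_after():
    """Yield a list; delete any nodes appended to it on exit (assumed)."""
    to_delete = []
    try:
        yield to_delete
    finally:
        if to_delete:
            cmds.delete(to_delete)

Deleting the temporary transform also removes the child mesh, so only one node needs to be registered for cleanup.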
@@ -1,10 +1,11 @@
import pyblish.api
import openpype.hosts.maya.api.action
import math
import maya.api.OpenMaya as om
import pymel.core as pm

from six.moves import xrange

from maya import cmds
import maya.api.OpenMaya as om
import pyblish.api

import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateMeshOrder


@@ -185,8 +186,7 @@ class GetOverlappingUVs(object):

        center, radius = self._createBoundingCircle(meshfn)
        for i in xrange(meshfn.numPolygons):  # noqa: F821
            rayb1, face1Orig, face1Vec = self._createRayGivenFace(
                meshfn, i)
            rayb1, face1Orig, face1Vec = self._createRayGivenFace(meshfn, i)
            if not rayb1:
                continue
            cui = center[2*i]
@@ -206,8 +206,8 @@ class GetOverlappingUVs(object):
            if (dsqr >= (ri + rj) * (ri + rj)):
                continue

            rayb2, face2Orig, face2Vec = self._createRayGivenFace(
                meshfn, j)
            rayb2, face2Orig, face2Vec = self._createRayGivenFace(meshfn,
                                                                  j)
            if not rayb2:
                continue
            # Exclude the degenerate face
@@ -240,37 +240,45 @@ class ValidateMeshHasOverlappingUVs(pyblish.api.InstancePlugin):
    optional = True

    @classmethod
    def _get_overlapping_uvs(cls, node):
        """ Check if mesh has overlapping UVs.
    def _get_overlapping_uvs(cls, mesh):
        """Return overlapping UVs of mesh.

        Args:
            mesh (str): Mesh node name

        Returns:
            list: Overlapping uvs for the input mesh in all uv sets.

        :param node: node to check
        :type node: str
        :returns: True is has overlapping UVs, False otherwise
        :rtype: bool
        """
        ovl = GetOverlappingUVs()

        # Store original uv set
        original_current_uv_set = cmds.polyUVSet(mesh,
                                                 query=True,
                                                 currentUVSet=True)[0]

        overlapping_faces = []
        for i, uv in enumerate(pm.polyUVSet(node, q=1, auv=1)):
            pm.polyUVSet(node, cuv=1, uvSet=uv)
            overlapping_faces.extend(ovl._getOverlapUVFaces(str(node)))
        for uv_set in cmds.polyUVSet(mesh, query=True, allUVSets=True):
            cmds.polyUVSet(mesh, currentUVSet=True, uvSet=uv_set)
            overlapping_faces.extend(ovl._getOverlapUVFaces(mesh))

        # Restore original uv set
        cmds.polyUVSet(mesh, currentUVSet=True, uvSet=original_current_uv_set)

        return overlapping_faces

    @classmethod
    def get_invalid(cls, instance, compute=False):
        invalid = []

        if compute:
            instance.data["overlapping_faces"] = []
            for node in pm.ls(instance, type="mesh"):
            invalid = []
            for node in cmds.ls(instance, type="mesh"):
                faces = cls._get_overlapping_uvs(node)
                invalid.extend(faces)
                # Store values for later.
                instance.data["overlapping_faces"].extend(faces)
        else:
            invalid.extend(instance.data["overlapping_faces"])

        return invalid
            instance.data["overlapping_faces"] = invalid

        return instance.data.get("overlapping_faces", [])

    def process(self, instance):
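The rewritten `_get_overlapping_uvs` switches the current UV set per iteration and restores the original one manually afterwards. The same save/restore guarantee could be wrapped in a context manager, sketched here as a hypothetical helper rather than anything present in the codebase:

from contextlib import contextmanager
from maya import cmds

@contextmanager
def restored_current_uv_set(mesh):
    """Restore the mesh's current UV set on exit (hypothetical helper)."""
    original = cmds.polyUVSet(mesh, query=True, currentUVSet=True)[0]
    try:
        yield
    finally:
        cmds.polyUVSet(mesh, currentUVSet=True, uvSet=original)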
@@ -1,4 +1,3 @@
import pymel.core as pm
import maya.cmds as cmds

import pyblish.api
@@ -12,7 +11,7 @@ import openpype.hosts.maya.api.action

def get_namespace(node_name):
    # ensure only node's name (not parent path)
    node_name = node_name.rsplit("|")[-1]
    node_name = node_name.rsplit("|", 1)[-1]
    # ensure only namespace
    return node_name.rpartition(":")[0]


@@ -45,13 +44,11 @@ class ValidateNoNamespace(pyblish.api.InstancePlugin):

        invalid = cls.get_invalid(instance)

        # Get nodes with pymel since we'll be renaming them
        # Since we don't want to keep checking the hierarchy
        # or full paths
        nodes = pm.ls(invalid)
        # Iterate over the nodes by long to short names to iterate the lowest
        # in hierarchy nodes first. This way we avoid having renamed parents
        # before renaming children nodes
        for node in sorted(invalid, key=len, reverse=True):

        for node in nodes:
            namespace = node.namespace()
            if namespace:
                name = node.nodeName()
                node.rename(name[len(namespace):])
            node_name = node.rsplit("|", 1)[-1]
            node_name_without_namespace = node_name.rsplit(":")[-1]
            cmds.rename(node, node_name_without_namespace)
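Sorting the long node paths by length, longest first, renames the deepest children before their parents, so the stored parent paths stay valid throughout the loop; for example (paths illustrative):

invalid = ["|ns:grp", "|ns:grp|ns:child", "|ns:grp|ns:child|ns:leaf"]
for node in sorted(invalid, key=len, reverse=True):
    print(node)
# |ns:grp|ns:child|ns:leaf  (renamed first, parents untouched so far)
# |ns:grp|ns:child
# |ns:grp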
30
openpype/hosts/maya/plugins/publish/validate_review.py
Normal file
@@ -0,0 +1,30 @@
import pyblish.api

from openpype.pipeline.publish import (
    ValidateContentsOrder, PublishValidationError
)


class ValidateReview(pyblish.api.InstancePlugin):
    """Validate review."""

    order = ValidateContentsOrder
    label = "Validate Review"
    families = ["review"]

    def process(self, instance):
        cameras = instance.data["cameras"]

        # validate required settings
        if len(cameras) == 0:
            raise PublishValidationError(
                "No camera found in review instance: {}".format(instance)
            )
        elif len(cameras) > 2:
            raise PublishValidationError(
                "Only a single camera is allowed for a review instance but "
                "more than one camera found in review instance: {}. "
                "Cameras found: {}".format(instance, ", ".join(cameras))
            )

        self.log.debug('camera: {}'.format(instance.data["review_camera"]))
@@ -1,14 +1,22 @@
import pymel.core as pc
from collections import defaultdict

from maya import cmds

import pyblish.api

import openpype.hosts.maya.api.action
from openpype.hosts.maya.api.lib import get_id, set_id
from openpype.pipeline.publish import (
    RepairAction,
    ValidateContentsOrder,
)


def get_basename(node):
    """Return node short name without namespace"""
    return node.rsplit("|", 1)[-1].rsplit(":", 1)[-1]


class ValidateRigOutputIds(pyblish.api.InstancePlugin):
    """Validate rig output ids.

@@ -30,43 +38,48 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin):

    @classmethod
    def get_invalid(cls, instance, compute=False):
        invalid = cls.get_invalid_matches(instance, compute=compute)
        return [x["node"].longName() for x in invalid]
        invalid_matches = cls.get_invalid_matches(instance, compute=compute)
        return list(invalid_matches.keys())

    @classmethod
    def get_invalid_matches(cls, instance, compute=False):
        invalid = []
        invalid = {}

        if compute:
            out_set = next(x for x in instance if x.endswith("out_SET"))
            instance_nodes = pc.sets(out_set, query=True)
            instance_nodes.extend(
                [x.getShape() for x in instance_nodes if x.getShape()])

            scene_nodes = pc.ls(type="transform") + pc.ls(type="mesh")
            instance_nodes = cmds.sets(out_set, query=True, nodesOnly=True)
            instance_nodes = cmds.ls(instance_nodes, long=True)
            for node in instance_nodes:
                shapes = cmds.listRelatives(node, shapes=True, fullPath=True)
                if shapes:
                    instance_nodes.extend(shapes)

            scene_nodes = cmds.ls(type="transform") + cmds.ls(type="mesh")
            scene_nodes = set(scene_nodes) - set(instance_nodes)

            scene_nodes_by_basename = defaultdict(list)
            for node in scene_nodes:
                basename = get_basename(node)
                scene_nodes_by_basename[basename].append(node)

            for instance_node in instance_nodes:
                matches = []
                basename = instance_node.name(stripNamespace=True)
                for scene_node in scene_nodes:
                    if scene_node.name(stripNamespace=True) == basename:
                        matches.append(scene_node)
                basename = get_basename(instance_node)
                if basename not in scene_nodes_by_basename:
                    continue

                if matches:
                    ids = [instance_node.cbId.get()]
                    ids.extend([x.cbId.get() for x in matches])
                    ids = set(ids)
                matches = scene_nodes_by_basename[basename]

                    if len(ids) > 1:
                        cls.log.error(
                            "\"{}\" id mismatch to: {}".format(
                                instance_node.longName(), matches
                            )
                        )
                        invalid.append(
                            {"node": instance_node, "matches": matches}
                ids = set(get_id(node) for node in matches)
                ids.add(get_id(instance_node))

                if len(ids) > 1:
                    cls.log.error(
                        "\"{}\" id mismatch to: {}".format(
                            instance_node.longName(), matches
                        )
                    )
                    invalid[instance_node] = matches

            instance.data["mismatched_output_ids"] = invalid
        else:
@@ -76,19 +89,21 @@ class ValidateRigOutputIds(pyblish.api.InstancePlugin):

    @classmethod
    def repair(cls, instance):
        invalid = cls.get_invalid_matches(instance)
        invalid_matches = cls.get_invalid_matches(instance)

        multiple_ids_match = []
        for data in invalid:
            ids = [x.cbId.get() for x in data["matches"]]
        for instance_node, matches in invalid_matches.items():
            ids = set(get_id(node) for node in matches)

            # If there are multiple scene ids matched, and error needs to be
            # raised for manual correction.
            if len(ids) > 1:
                multiple_ids_match.append(data)
                multiple_ids_match.append({"node": instance_node,
                                           "matches": matches})
                continue

            data["node"].cbId.set(ids[0])
            id_to_set = next(iter(ids))
            set_id(instance_node, id_to_set, overwrite=True)

        if multiple_ids_match:
            raise RuntimeError(
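Grouping the scene nodes by basename once up front turns the old per-node scan of the whole scene into a single dictionary lookup per instance node; a standalone illustration of the pattern (node paths illustrative):

from collections import defaultdict

scene_nodes = ["|a|body_GEO", "|b|body_GEO", "|a|arm_GEO"]

def get_basename(node):
    return node.rsplit("|", 1)[-1].rsplit(":", 1)[-1]

scene_nodes_by_basename = defaultdict(list)
for node in scene_nodes:
    scene_nodes_by_basename[get_basename(node)].append(node)

assert scene_nodes_by_basename["body_GEO"] == ["|a|body_GEO", "|b|body_GEO"]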
@@ -19,7 +19,7 @@ class ValidateSingleAssembly(pyblish.api.InstancePlugin):

    order = ValidateContentsOrder
    hosts = ['maya']
    families = ['rig', 'animation']
    families = ['rig']
    label = 'Single Assembly'

    def process(self, instance):
@@ -57,3 +57,16 @@ class ValidateXgen(pyblish.api.InstancePlugin):
                json.dumps(inactive_modifiers, indent=4, sort_keys=True)
            )
        )

        # We need a namespace else there will be a naming conflict when
        # extracting because of stripping namespaces and parenting to world.
        node_names = [instance.data["xgmPalette"]]
        for _, connections in instance.data["xgenConnections"].items():
            node_names.append(connections["transform"].split(".")[0])

        non_namespaced_nodes = [n for n in node_names if ":" not in n]
        if non_namespaced_nodes:
            raise PublishValidationError(
                "Could not find namespace on {}. Namespace is required for"
                " xgen publishing.".format(non_namespaced_nodes)
            )
@@ -1,5 +1,4 @@
import os
from functools import partial

from openpype.settings import get_project_settings
from openpype.pipeline import install_host
@@ -13,24 +12,41 @@ install_host(host)

print("Starting OpenPype usersetup...")

project_settings = get_project_settings(os.environ['AVALON_PROJECT'])

# Loading plugins explicitly.
explicit_plugins_loading = project_settings["maya"]["explicit_plugins_loading"]
if explicit_plugins_loading["enabled"]:
    def _explicit_load_plugins():
        for plugin in explicit_plugins_loading["plugins_to_load"]:
            if plugin["enabled"]:
                print("Loading plug-in: " + plugin["name"])
                try:
                    cmds.loadPlugin(plugin["name"], quiet=True)
                except RuntimeError as e:
                    print(e)

    # We need to load plugins deferred as loading them directly does not work
    # correctly due to Maya's initialization.
    cmds.evalDeferred(
        _explicit_load_plugins,
        lowestPriority=True
    )

# Open Workfile Post Initialization.
key = "OPENPYPE_OPEN_WORKFILE_POST_INITIALIZATION"
if bool(int(os.environ.get(key, "0"))):
    def _log_and_open():
        path = os.environ["AVALON_LAST_WORKFILE"]
        print("Opening \"{}\"".format(path))
        cmds.file(path, open=True, force=True)
    cmds.evalDeferred(
        partial(
            cmds.file,
            os.environ["AVALON_LAST_WORKFILE"],
            open=True,
            force=True
        ),
        _log_and_open,
        lowestPriority=True
    )


# Build a shelf.
settings = get_project_settings(os.environ['AVALON_PROJECT'])
shelf_preset = settings['maya'].get('project_shelf')
shelf_preset = project_settings['maya'].get('project_shelf')

if shelf_preset:
    project = os.environ["AVALON_PROJECT"]
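Both new blocks rely on `cmds.evalDeferred` so the work runs only after Maya finishes initializing, which is why the plug-in loads and the workfile open cannot happen inline in userSetup. A minimal standalone illustration of the same pattern (the printed message is illustrative):

from maya import cmds

def _deferred_work():
    print("Runs once Maya is idle after startup.")

# lowestPriority pushes execution behind other deferred callbacks.
cmds.evalDeferred(_deferred_work, lowestPriority=True)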
@@ -24,6 +24,7 @@ from .commands import (
    remove_unused_looks
)
from .vray_proxies import vrayproxy_assign_look
from . import arnold_standin

module = sys.modules[__name__]
module.window = None
@@ -43,7 +44,7 @@ class MayaLookAssignerWindow(QtWidgets.QWidget):
        filename = get_workfile()

        self.setObjectName("lookManager")
        self.setWindowTitle("Look Manager 1.3.0 - [{}]".format(filename))
        self.setWindowTitle("Look Manager 1.4.0 - [{}]".format(filename))
        self.setWindowFlags(QtCore.Qt.Window)
        self.setParent(parent)

@@ -240,18 +241,38 @@ class MayaLookAssignerWindow(QtWidgets.QWidget):
            ))
            nodes = item["nodes"]

            # Assign Vray Proxy look.
            if cmds.pluginInfo('vrayformaya', query=True, loaded=True):
                self.echo("Getting vray proxy nodes ...")
                vray_proxies = set(cmds.ls(type="VRayProxy", long=True))

            if vray_proxies:
                for vp in vray_proxies:
                    if vp in nodes:
                        vrayproxy_assign_look(vp, subset_name)
                for vp in vray_proxies:
                    if vp in nodes:
                        vrayproxy_assign_look(vp, subset_name)

                nodes = list(set(item["nodes"]).difference(vray_proxies))
                nodes = list(set(nodes).difference(vray_proxies))
            else:
                self.echo(
                    "Could not assign to VRayProxy because vrayformaya plugin "
                    "is not loaded."
                )

            # Assign look
            # Assign Arnold Standin look.
            if cmds.pluginInfo("mtoa", query=True, loaded=True):
                arnold_standins = set(cmds.ls(type="aiStandIn", long=True))

                for standin in arnold_standins:
                    if standin in nodes:
                        arnold_standin.assign_look(standin, subset_name)

                nodes = list(set(nodes).difference(arnold_standins))
            else:
                self.echo(
                    "Could not assign to aiStandIn because mtoa plugin is not "
                    "loaded."
                )

            # Assign look
            if nodes:
                assign_look_by_version(nodes, version_id=version["_id"])
247
openpype/hosts/maya/tools/mayalookassigner/arnold_standin.py
Normal file
@@ -0,0 +1,247 @@
import os
import json
from collections import defaultdict
import logging

from maya import cmds

from openpype.pipeline import legacy_io
from openpype.client import get_last_version_by_subset_name
from openpype.hosts.maya import api
from . import lib


log = logging.getLogger(__name__)


ATTRIBUTE_MAPPING = {
    "primaryVisibility": "visibility",  # Camera
    "castsShadows": "visibility",  # Shadow
    "receiveShadows": "receive_shadows",
    "aiSelfShadows": "self_shadows",
    "aiOpaque": "opaque",
    "aiMatte": "matte",
    "aiVisibleInDiffuseTransmission": "visibility",
    "aiVisibleInSpecularTransmission": "visibility",
    "aiVisibleInVolume": "visibility",
    "aiVisibleInDiffuseReflection": "visibility",
    "aiVisibleInSpecularReflection": "visibility",
    "aiSubdivUvSmoothing": "subdiv_uv_smoothing",
    "aiDispHeight": "disp_height",
    "aiDispPadding": "disp_padding",
    "aiDispZeroValue": "disp_zero_value",
    "aiStepSize": "step_size",
    "aiVolumePadding": "volume_padding",
    "aiSubdivType": "subdiv_type",
    "aiSubdivIterations": "subdiv_iterations"
}


def calculate_visibility_mask(attributes):
    # https://arnoldsupport.com/2018/11/21/backdoor-setting-visibility/
    mapping = {
        "primaryVisibility": 1,  # Camera
        "castsShadows": 2,  # Shadow
        "aiVisibleInDiffuseTransmission": 4,
        "aiVisibleInSpecularTransmission": 8,
        "aiVisibleInVolume": 16,
        "aiVisibleInDiffuseReflection": 32,
        "aiVisibleInSpecularReflection": 64
    }
    mask = 255
    for attr, value in mapping.items():
        if attributes.get(attr, True):
            continue

        mask -= value

    return mask


def get_nodes_by_id(standin):
    """Get node id from aiStandIn via json sidecar.

    Args:
        standin (string): aiStandIn node.

    Returns:
        (dict): Dictionary with node full name/path and id.
    """
    path = cmds.getAttr(standin + ".dso")
    json_path = None
    for f in os.listdir(os.path.dirname(path)):
        if f.endswith(".json"):
            json_path = os.path.join(os.path.dirname(path), f)
            break

    if not json_path:
        log.warning("Could not find json file for {}.".format(standin))
        return {}

    with open(json_path, "r") as f:
        return json.load(f)


def shading_engine_assignments(shading_engine, attribute, nodes, assignments):
    """Full assignments with shader or disp_map.

    Args:
        shading_engine (string): Shading engine for material.
        attribute (string): "surfaceShader" or "displacementShader"
        nodes: (list): Nodes paths relative to aiStandIn.
        assignments (dict): Assignments by nodes.
    """
    shader_inputs = cmds.listConnections(
        shading_engine + "." + attribute, source=True
    )
    if not shader_inputs:
        log.info(
            "Shading engine \"{}\" missing input \"{}\"".format(
                shading_engine, attribute
            )
        )
        return

    # Strip off component assignments
    for i, node in enumerate(nodes):
        if "." in node:
            log.warning(
                "Converting face assignment to full object assignment. This "
                "conversion can be lossy: {}".format(node)
            )
            nodes[i] = node.split(".")[0]

    shader_type = "shader" if attribute == "surfaceShader" else "disp_map"
    assignment = "{}='{}'".format(shader_type, shader_inputs[0])
    for node in nodes:
        assignments[node].append(assignment)


def assign_look(standin, subset):
    log.info("Assigning {} to {}.".format(subset, standin))

    nodes_by_id = get_nodes_by_id(standin)

    # Group by asset id so we run over the look per asset
    node_ids_by_asset_id = defaultdict(set)
    for node_id in nodes_by_id:
        asset_id = node_id.split(":", 1)[0]
        node_ids_by_asset_id[asset_id].add(node_id)

    project_name = legacy_io.active_project()
    for asset_id, node_ids in node_ids_by_asset_id.items():

        # Get latest look version
        version = get_last_version_by_subset_name(
            project_name,
            subset_name=subset,
            asset_id=asset_id,
            fields=["_id"]
        )
        if not version:
            log.info("Didn't find last version for subset name {}".format(
                subset
            ))
            continue

        relationships = lib.get_look_relationships(version["_id"])
        shader_nodes, container_node = lib.load_look(version["_id"])
        namespace = shader_nodes[0].split(":")[0]

        # Get only the node ids and paths related to this asset
        # And get the shader edits the look supplies
        asset_nodes_by_id = {
            node_id: nodes_by_id[node_id] for node_id in node_ids
        }
        edits = list(
            api.lib.iter_shader_edits(
                relationships, shader_nodes, asset_nodes_by_id
            )
        )

        # Create assignments
        node_assignments = {}
        for edit in edits:
            for node in edit["nodes"]:
                if node not in node_assignments:
                    node_assignments[node] = []

            if edit["action"] == "assign":
                if not cmds.ls(edit["shader"], type="shadingEngine"):
                    log.info("Skipping non-shader: %s" % edit["shader"])
                    continue

                shading_engine_assignments(
                    shading_engine=edit["shader"],
                    attribute="surfaceShader",
                    nodes=edit["nodes"],
                    assignments=node_assignments
                )
                shading_engine_assignments(
                    shading_engine=edit["shader"],
                    attribute="displacementShader",
                    nodes=edit["nodes"],
                    assignments=node_assignments
                )

            if edit["action"] == "setattr":
                visibility = False
                for attr, value in edit["attributes"].items():
                    if attr not in ATTRIBUTE_MAPPING:
                        log.warning(
                            "Skipping setting attribute {} on {} because it is"
                            " not recognized.".format(attr, edit["nodes"])
                        )
                        continue

                    if isinstance(value, str):
                        value = "'{}'".format(value)

                    if ATTRIBUTE_MAPPING[attr] == "visibility":
                        visibility = True
                        continue

                    assignment = "{}={}".format(ATTRIBUTE_MAPPING[attr], value)

                    for node in edit["nodes"]:
                        node_assignments[node].append(assignment)

                if visibility:
                    mask = calculate_visibility_mask(edit["attributes"])
                    assignment = "visibility={}".format(mask)

                    for node in edit["nodes"]:
                        node_assignments[node].append(assignment)

        # Assign shader
        # Clear all current shader assignments
        plug = standin + ".operators"
        num = cmds.getAttr(plug, size=True)
        for i in reversed(range(num)):
            cmds.removeMultiInstance("{}[{}]".format(plug, i), b=True)

        # Create new assignment overrides
        index = 0
        for node, assignments in node_assignments.items():
            if not assignments:
                continue

            with api.lib.maintained_selection():
                operator = cmds.createNode("aiSetParameter")
                operator = cmds.rename(operator, namespace + ":" + operator)

            cmds.setAttr(operator + ".selection", node, type="string")
            for i, assignment in enumerate(assignments):
                cmds.setAttr(
                    "{}.assignment[{}]".format(operator, i),
                    assignment,
                    type="string"
                )

            cmds.connectAttr(
                operator + ".out", "{}[{}]".format(plug, index)
            )

            index += 1

        cmds.sets(operator, edit=True, addElement=container_node)
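A worked example of the visibility bitmask above: every ray type starts enabled (mask 255), and each attribute explicitly set to False subtracts its bit.

attributes = {"castsShadows": False, "aiVisibleInVolume": False}
# 255 - 2 (shadow) - 16 (volume) = 237
assert calculate_visibility_mask(attributes) == 237

# Attributes missing from the dict default to True and keep their bit:
assert calculate_visibility_mask({}) == 255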
@@ -13,6 +13,7 @@ from openpype.pipeline import (
from openpype.hosts.maya.api import lib

from .vray_proxies import get_alembic_ids_cache
from . import arnold_standin

log = logging.getLogger(__name__)

@@ -44,33 +45,11 @@ def get_namespace_from_node(node):
    return parts[0] if len(parts) > 1 else u":"


def list_descendents(nodes):
    """Include full descendant hierarchy of given nodes.

    This is a workaround to cmds.listRelatives(allDescendents=True) because
    this way correctly keeps children instance paths (see Maya documentation)

    This fixes LKD-26: assignments not working as expected on instanced shapes.

    Return:
        list: List of children descendents of nodes

    """
    result = []
    while True:
        nodes = cmds.listRelatives(nodes,
                                   fullPath=True)
        if nodes:
            result.extend(nodes)
        else:
            return result


def get_selected_nodes():
    """Get information from current selection"""

    selection = cmds.ls(selection=True, long=True)
    hierarchy = list_descendents(selection)
    hierarchy = lib.get_all_children(selection)
    return list(set(selection + hierarchy))


@@ -105,10 +84,12 @@ def create_asset_id_hash(nodes):
            path = cmds.getAttr("{}.fileName".format(node))
            ids = get_alembic_ids_cache(path)
            for k, _ in ids.items():
                pid = k.split(":")[0]
                if node not in node_id_hash[pid]:
                    node_id_hash[pid].append(node)

                id = k.split(":")[0]
                node_id_hash[id].append(node)
        elif cmds.nodeType(node) == "aiStandIn":
            for id, _ in arnold_standin.get_nodes_by_id(node).items():
                id = id.split(":")[0]
                node_id_hash[id].append(node)
        else:
            value = lib.get_id(node)
            if value is None:
87
openpype/hosts/maya/tools/mayalookassigner/lib.py
Normal file
@@ -0,0 +1,87 @@
import json
import logging

from openpype.pipeline import (
    legacy_io,
    get_representation_path,
    registered_host,
    discover_loader_plugins,
    loaders_from_representation,
    load_container
)
from openpype.client import get_representation_by_name
from openpype.hosts.maya.api import lib


log = logging.getLogger(__name__)


def get_look_relationships(version_id):
    # type: (str) -> dict
    """Get relations for the look.

    Args:
        version_id (str): Parent version Id.

    Returns:
        dict: Dictionary of relations.
    """

    project_name = legacy_io.active_project()
    json_representation = get_representation_by_name(
        project_name, representation_name="json", version_id=version_id
    )

    # Load relationships
    shader_relation = get_representation_path(json_representation)
    with open(shader_relation, "r") as f:
        relationships = json.load(f)

    return relationships


def load_look(version_id):
    # type: (str) -> list
    """Load look from version.

    Get look from version and invoke Loader for it.

    Args:
        version_id (str): Version ID

    Returns:
        list of shader nodes.

    """

    project_name = legacy_io.active_project()
    # Get representations of shader file and relationships
    look_representation = get_representation_by_name(
        project_name, representation_name="ma", version_id=version_id
    )

    # See if representation is already loaded, if so reuse it.
    host = registered_host()
    representation_id = str(look_representation['_id'])
    for container in host.ls():
        if (container['loader'] == "LookLoader" and
                container['representation'] == representation_id):
            log.info("Reusing loaded look ...")
            container_node = container['objectName']
            break
    else:
        log.info("Using look for the first time ...")

        # Load file
        all_loaders = discover_loader_plugins()
        loaders = loaders_from_representation(all_loaders, representation_id)
        loader = next(
            (i for i in loaders if i.__name__ == "LookLoader"), None)
        if loader is None:
            raise RuntimeError("Could not find LookLoader, this is a bug")

        # Reference the look file
        with lib.maintained_selection():
            container_node = load_container(loader, look_representation)[0]

    return lib.get_container_members(container_node), container_node
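`load_look` leans on Python's `for ... else`: the `else` branch runs only when the loop finishes without hitting `break`, i.e. when no already-loaded container matches and the look must be loaded fresh. The pattern in isolation (data illustrative):

containers = [{"loader": "OtherLoader"}, {"loader": "LookLoader"}]
for container in containers:
    if container["loader"] == "LookLoader":
        print("reuse existing container")
        break
else:
    print("nothing found, load fresh")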
@@ -3,26 +3,16 @@
import os
from collections import defaultdict
import logging
import json

import six

import alembic.Abc
from maya import cmds

from openpype.client import (
    get_representation_by_name,
    get_last_version_by_subset_name,
)
from openpype.pipeline import (
    legacy_io,
    load_container,
    loaders_from_representation,
    discover_loader_plugins,
    get_representation_path,
    registered_host,
)
from openpype.hosts.maya.api import lib
from openpype.client import get_last_version_by_subset_name
from openpype.pipeline import legacy_io
import openpype.hosts.maya.lib as maya_lib
from . import lib


log = logging.getLogger(__name__)

@@ -149,79 +139,6 @@ def assign_vrayproxy_shaders(vrayproxy, assignments):
        index += 1


def get_look_relationships(version_id):
    # type: (str) -> dict
    """Get relations for the look.

    Args:
        version_id (str): Parent version Id.

    Returns:
        dict: Dictionary of relations.
    """

    project_name = legacy_io.active_project()
    json_representation = get_representation_by_name(
        project_name, representation_name="json", version_id=version_id
    )

    # Load relationships
    shader_relation = get_representation_path(json_representation)
    with open(shader_relation, "r") as f:
        relationships = json.load(f)

    return relationships


def load_look(version_id):
    # type: (str) -> list
    """Load look from version.

    Get look from version and invoke Loader for it.

    Args:
        version_id (str): Version ID

    Returns:
        list of shader nodes.

    """

    project_name = legacy_io.active_project()
    # Get representations of shader file and relationships
    look_representation = get_representation_by_name(
        project_name, representation_name="ma", version_id=version_id
    )

    # See if representation is already loaded, if so reuse it.
    host = registered_host()
    representation_id = str(look_representation['_id'])
    for container in host.ls():
        if (container['loader'] == "LookLoader" and
                container['representation'] == representation_id):
            log.info("Reusing loaded look ...")
            container_node = container['objectName']
            break
    else:
        log.info("Using look for the first time ...")

        # Load file
        all_loaders = discover_loader_plugins()
        loaders = loaders_from_representation(all_loaders, representation_id)
        loader = next(
            (i for i in loaders if i.__name__ == "LookLoader"), None)
        if loader is None:
            raise RuntimeError("Could not find LookLoader, this is a bug")

        # Reference the look file
        with lib.maintained_selection():
            container_node = load_container(loader, look_representation)

        # Get container members
        shader_nodes = lib.get_container_members(container_node)
    return shader_nodes


def vrayproxy_assign_look(vrayproxy, subset="lookDefault"):
    # type: (str, str) -> None
    """Assign look to vray proxy.

@@ -263,8 +180,8 @@ def vrayproxy_assign_look(vrayproxy, subset="lookDefault"):
            ))
            continue

        relationships = get_look_relationships(version["_id"])
        shadernodes = load_look(version["_id"])
        relationships = lib.get_look_relationships(version["_id"])
        shadernodes, _ = lib.load_look(version["_id"])

        # Get only the node ids and paths related to this asset
        # And get the shader edits the look supplies
@@ -272,8 +189,10 @@ def vrayproxy_assign_look(vrayproxy, subset="lookDefault"):
            node_id: nodes_by_id[node_id] for node_id in node_ids
        }
        edits = list(
            lib.iter_shader_edits(
                relationships, shadernodes, asset_nodes_by_id))
            maya_lib.iter_shader_edits(
                relationships, shadernodes, asset_nodes_by_id
            )
        )

        # Create assignments
        assignments = {}
@@ -23,6 +23,9 @@ from openpype.client import (

from openpype.host import HostDirmap
from openpype.tools.utils import host_tools
from openpype.pipeline.workfile.workfile_template_builder import (
    TemplateProfileNotFound
)
from openpype.lib import (
    env_value_to_bool,
    Logger,
@@ -2684,7 +2687,10 @@ def start_workfile_template_builder():

    # to avoid looping of the callback, remove it!
    log.info("Starting workfile template builder...")
    build_workfile_template(workfile_creation_enabled=True)
    try:
        build_workfile_template(workfile_creation_enabled=True)
    except TemplateProfileNotFound:
        log.warning("Template profile not found. Skipping...")

    # remove callback since it would be duplicating the workfile
    nuke.removeOnCreate(start_workfile_template_builder, nodeClass="Root")
@@ -208,6 +208,12 @@ class NukeCreator(NewCreator):

    def collect_instances(self):
        cached_instances = _collect_and_cache_nodes(self)
        attr_def_keys = {
            attr_def.key
            for attr_def in self.get_instance_attr_defs()
        }
        attr_def_keys.discard(None)

        for (node, data) in cached_instances[self.identifier]:
            created_instance = CreatedInstance.from_existing(
                data, self
@@ -215,6 +221,12 @@ class NukeCreator(NewCreator):
            created_instance.transient_data["node"] = node
            self._add_instance_to_context(created_instance)

            for key in (
                set(created_instance["creator_attributes"].keys())
                - attr_def_keys
            ):
                created_instance["creator_attributes"].pop(key)

    def update_instances(self, update_list):
        for created_inst, _changes in update_list:
            instance_node = created_inst.transient_data["node"]
@@ -301,8 +313,11 @@ class NukeWriteCreator(NukeCreator):
    def get_instance_attr_defs(self):
        attr_defs = [
            self._get_render_target_enum(),
            self._get_reviewable_bool()
        ]
        # add reviewable attribute
        if "reviewable" in self.instance_attributes:
            attr_defs.append(self._get_reviewable_bool())

        return attr_defs

    def _get_render_target_enum(self):
@@ -322,7 +337,7 @@ class NukeWriteCreator(NukeCreator):
    def _get_reviewable_bool(self):
        return BoolDef(
            "review",
            default=("reviewable" in self.instance_attributes),
            default=True,
            label="Review"
        )

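The set difference in `collect_instances` drops stored creator attributes that no longer have a matching attribute definition; the core of it, separated out (keys illustrative):

attr_def_keys = {"render_target", "review"}
creator_attributes = {"render_target": "local", "review": True, "old_key": 1}

for key in set(creator_attributes.keys()) - attr_def_keys:
    creator_attributes.pop(key)

assert "old_key" not in creator_attributes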
@@ -219,14 +219,17 @@ class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin):

        # fix the problem of z_order for backdrops
        self._fix_z_order(placeholder)
        self._imprint_siblings(placeholder)

        if placeholder.data.get("keep_placeholder"):
            self._imprint_siblings(placeholder)

        if placeholder.data["nb_children"] == 0:
            # save initial nodes positions and dimensions, update them
            # and set inputs and outputs of loaded nodes
            if placeholder.data.get("keep_placeholder"):
                self._imprint_inits()
                self._update_nodes(placeholder, nuke.allNodes(), nodes_loaded)

            self._imprint_inits()
            self._update_nodes(placeholder, nuke.allNodes(), nodes_loaded)
            self._set_loaded_connections(placeholder)

        elif placeholder.data["siblings"]:
@@ -629,14 +632,18 @@ class NukePlaceholderCreatePlugin(

        # fix the problem of z_order for backdrops
        self._fix_z_order(placeholder)
        self._imprint_siblings(placeholder)

        if placeholder.data.get("keep_placeholder"):
            self._imprint_siblings(placeholder)

        if placeholder.data["nb_children"] == 0:
            # save initial nodes positions and dimensions, update them
            # and set inputs and outputs of created nodes

            self._imprint_inits()
            self._update_nodes(placeholder, nuke.allNodes(), nodes_created)
            if placeholder.data.get("keep_placeholder"):
                self._imprint_inits()
                self._update_nodes(placeholder, nuke.allNodes(), nodes_created)

            self._set_created_connections(placeholder)

        elif placeholder.data["siblings"]:
@@ -63,13 +63,6 @@ class CreateWriteImage(napi.NukeWriteCreator):
            default=nuke.frame()
        )

    def get_instance_attr_defs(self):
        attr_defs = [
            self._get_render_target_enum(),
            self._get_reviewable_bool()
        ]
        return attr_defs

    def create_instance_node(self, subset_name, instance_data):
        linked_knobs_ = []
        if "use_range_limit" in self.instance_attributes:
@@ -41,13 +41,6 @@ class CreateWritePrerender(napi.NukeWriteCreator):
        ]
        return attr_defs

    def get_instance_attr_defs(self):
        attr_defs = [
            self._get_render_target_enum(),
            self._get_reviewable_bool()
        ]
        return attr_defs

    def create_instance_node(self, subset_name, instance_data):
        linked_knobs_ = []
        if "use_range_limit" in self.instance_attributes:
@@ -38,13 +38,6 @@ class CreateWriteRender(napi.NukeWriteCreator):
        ]
        return attr_defs

    def get_instance_attr_defs(self):
        attr_defs = [
            self._get_render_target_enum(),
            self._get_reviewable_bool()
        ]
        return attr_defs

    def create_instance_node(self, subset_name, instance_data):
        # add fpath_template
        write_data = {
@@ -74,8 +74,7 @@ class SetFrameRangeWithHandlesLoader(load.LoaderPlugin):
            return

        # Include handles
        handles = version_data.get("handles", 0)
        start -= handles
        end += handles
        start -= version_data.get("handleStart", 0)
        end += version_data.get("handleEnd", 0)

        lib.update_frame_range(start, end)

@@ -138,7 +138,6 @@ class LinkAsGroup(load.LoaderPlugin):
            "version": version_doc.get("name"),
            "colorspace": version_data.get("colorspace"),
            "source": version_data.get("source"),
            "handles": version_data.get("handles"),
            "fps": version_data.get("fps"),
            "author": version_data.get("author")
        })
@@ -49,8 +49,6 @@ class CollectContextData(pyblish.api.ContextPlugin):
            "resolutionHeight": resolution_height,
            "pixelAspect": pixel_aspect,

            # backward compatibility handles
            "handles": handle_start,
            "handleStart": handle_start,
            "handleEnd": handle_end,
            "step": 1,
@@ -28,7 +28,6 @@ class CollectGizmo(pyblish.api.InstancePlugin):

        # Add version data to instance
        version_data = {
            "handles": handle_start,
            "handleStart": handle_start,
            "handleEnd": handle_end,
            "frameStart": first_frame + handle_start,
@@ -28,7 +28,6 @@ class CollectModel(pyblish.api.InstancePlugin):

        # Add version data to instance
        version_data = {
            "handles": handle_start,
            "handleStart": handle_start,
            "handleEnd": handle_end,
            "frameStart": first_frame + handle_start,
@@ -103,7 +103,6 @@ class CollectNukeReads(pyblish.api.InstancePlugin):

        # Add version data to instance
        version_data = {
            "handles": handle_start,
            "handleStart": handle_start,
            "handleEnd": handle_end,
            "frameStart": first_frame + handle_start,
@@ -123,7 +122,8 @@ class CollectNukeReads(pyblish.api.InstancePlugin):
            "frameStart": first_frame,
            "frameEnd": last_frame,
            "colorspace": colorspace,
            "handles": int(asset_doc["data"].get("handles", 0)),
            "handleStart": handle_start,
            "handleEnd": handle_end,
            "step": 1,
            "fps": int(nuke.root()['fps'].value())
        })
@@ -9,9 +9,9 @@ import openpype.hosts.nuke.api.lib as nlib
from openpype.pipeline.publish import (
    ValidateContentsOrder,
    PublishXmlValidationError,
    OptionalPyblishPluginMixin
)


class SelectInvalidInstances(pyblish.api.Action):
    """Select invalid instances in Outliner."""

@@ -92,7 +92,10 @@ class RepairSelectInvalidInstances(pyblish.api.Action):
        nlib.set_node_data(node, nlib.INSTANCE_DATA_KNOB, node_data)


class ValidateCorrectAssetName(pyblish.api.InstancePlugin):
class ValidateCorrectAssetName(
    pyblish.api.InstancePlugin,
    OptionalPyblishPluginMixin
):
    """Validator to check if instance asset match context asset.

    When working in per-shot style you always publish data in context of
@@ -111,6 +114,9 @@ class ValidateCorrectAssetName(pyblish.api.InstancePlugin):
    optional = True

    def process(self, instance):
        if not self.is_active(instance.data):
            return

        asset = instance.data.get("asset")
        context_asset = instance.context.data["assetEntity"]["name"]
        node = instance.data["transientData"]["node"]
@@ -1,8 +1,12 @@
import nuke
import pyblish
from openpype.hosts.nuke import api as napi
from openpype.pipeline import PublishXmlValidationError

from openpype.pipeline.publish import (
    ValidateContentsOrder,
    PublishXmlValidationError,
    OptionalPyblishPluginMixin
)

class SelectCenterInNodeGraph(pyblish.api.Action):
    """
@@ -46,12 +50,15 @@ class SelectCenterInNodeGraph(pyblish.api.Action):
        nuke.zoom(2, [min(all_xC), min(all_yC)])


class ValidateBackdrop(pyblish.api.InstancePlugin):
class ValidateBackdrop(
    pyblish.api.InstancePlugin,
    OptionalPyblishPluginMixin
):
    """ Validate amount of nodes on backdrop node in case user
    forgotten to add nodes above the publishing backdrop node.
    """

    order = pyblish.api.ValidatorOrder
    order = ValidateContentsOrder
    optional = True
    families = ["nukenodes"]
    label = "Validate Backdrop"
@@ -59,6 +66,9 @@ class ValidateBackdrop(pyblish.api.InstancePlugin):
    actions = [SelectCenterInNodeGraph]

    def process(self, instance):
        if not self.is_active(instance.data):
            return

        child_nodes = instance.data["transientData"]["childNodes"]
        connections_out = instance.data["transientData"]["nodeConnectionsOut"]

@@ -18,7 +18,7 @@ class ValidateScriptAttributes(

    order = pyblish.api.ValidatorOrder + 0.1
    families = ["workfile"]
    label = "Validatte script attributes"
    label = "Validate script attributes"
    hosts = ["nuke"]
    optional = True
    actions = [RepairAction]
@@ -27,11 +27,12 @@ class ExtractWorkfileUrl(pyblish.api.ContextPlugin):
            rep_name = instance.data.get("representations")[0].get("name")
            template_data["representation"] = rep_name
            template_data["ext"] = rep_name
            anatomy_filled = anatomy.format(template_data)
            template_filled = anatomy_filled["publish"]["path"]
            template_obj = anatomy.templates_obj["publish"]["path"]
            template_filled = template_obj.format_strict(template_data)
            filepath = os.path.normpath(template_filled)
            self.log.info("Using published scene for render {}".format(
                filepath))
            break

        if not filepath:
            self.log.info("Texture batch doesn't contain workfile.")
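The switch to `templates_obj[...].format_strict(...)` makes a missing template key a hard error instead of silently producing a partially filled path. Conceptually, using `str.format` as a stand-in for the Anatomy template object (keys illustrative):

template = "{project}/{asset}/publish/{representation}"
data = {"project": "demo", "asset": "chair"}  # "representation" missing

# A strict fill fails loudly instead of writing a path with a literal
# "{representation}" placeholder left inside it:
try:
    path = template.format(**data)
except KeyError as exc:
    raise ValueError("template key not filled: {}".format(exc))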
@@ -36,11 +36,9 @@ class BatchMovieCreator(TrayPublishCreator):
    # Position batch creator after simple creators
    order = 110

    def __init__(self, project_settings, *args, **kwargs):
        super(BatchMovieCreator, self).__init__(project_settings,
                                                *args, **kwargs)
    def apply_settings(self, project_settings, system_settings):
        creator_settings = (
            project_settings["traypublisher"]["BatchMovieCreator"]
            project_settings["traypublisher"]["create"]["BatchMovieCreator"]
        )
        self.default_variants = creator_settings["default_variants"]
        self.default_tasks = creator_settings["default_tasks"]
@@ -151,4 +149,3 @@ class BatchMovieCreator(TrayPublishCreator):
        File names must then contain only asset name, or asset name + version.
        (eg. 'chair.mov', 'chair_v001.mov', not really safe `my_chair_v001.mov`
        """
@@ -504,14 +504,9 @@ def set_context_settings(project_name, asset_doc):
        print("Frame range was not found!")
        return

    handles = asset_doc["data"].get("handles") or 0
    handle_start = asset_doc["data"].get("handleStart")
    handle_end = asset_doc["data"].get("handleEnd")

    if handle_start is None or handle_end is None:
        handle_start = handles
        handle_end = handles

    # Always start from 0 Mark In and set only Mark Out
    mark_in = 0
    mark_out = mark_in + (frame_end - frame_start) + handle_start + handle_end
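Worked numbers for the Mark In/Out computation above (values illustrative):

frame_start, frame_end = 1001, 1100
handle_start, handle_end = 5, 5

mark_in = 0
mark_out = mark_in + (frame_end - frame_start) + handle_start + handle_end
assert mark_out == 109  # 99 frames of range + 10 frames of handles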
@@ -144,7 +144,7 @@ class ExtractSequence(pyblish.api.Extractor):

        # Fill tags and new families from project settings
        tags = []
        if family_lowered == "review":
        if "review" in instance.data["families"]:
            tags.append("review")

        # Sequence of one frame
@@ -24,7 +24,7 @@ class UnrealPrelaunchHook(PreLaunchHook):
    """Hook to handle launching Unreal.

    This hook will check if current workfile path has Unreal
    project inside. IF not, it initialize it and finally it pass
    project inside. IF not, it initializes it, and finally it pass
    path to the project by environment variable to Unreal launcher
    shell script.

@@ -61,10 +61,10 @@ class UnrealPrelaunchHook(PreLaunchHook):
            project_name=project_doc["name"]
        )
        # Fill templates
        filled_anatomy = anatomy.format(workdir_data)
        template_obj = anatomy.templates_obj[workfile_template_key]["file"]

        # Return filename
        return filled_anatomy[workfile_template_key]["file"]
        return template_obj.format_strict(workdir_data)

    def exec_plugin_install(self, engine_path: Path, env: dict = None):
        # set up the QThread and worker with necessary signals
@@ -141,6 +141,7 @@ class UnrealPrelaunchHook(PreLaunchHook):
    def execute(self):
        """Hook entry method."""
        workdir = self.launch_context.env["AVALON_WORKDIR"]
        executable = str(self.launch_context.executable)
        engine_version = self.app_name.split("/")[-1].replace("-", ".")
        try:
            if int(engine_version.split(".")[0]) < 4 and \
@@ -152,7 +153,7 @@ class UnrealPrelaunchHook(PreLaunchHook):
                # there can be string in minor version and in that case
                # int cast is failing. This probably happens only with
                # early access versions and is of no concert for this check
                # so lets keep it quite.
                # so let's keep it quiet.
                ...

        unreal_project_filename = self._get_work_filename()
@@ -183,26 +184,6 @@ class UnrealPrelaunchHook(PreLaunchHook):
            f"[ {engine_version} ]"
        ))

        detected = unreal_lib.get_engine_versions(self.launch_context.env)
        detected_str = ', '.join(detected.keys()) or 'none'
        self.log.info((
            f"{self.signature} detected UE versions: "
            f"[ {detected_str} ]"
        ))
        if not detected:
            raise ApplicationNotFound("No Unreal Engines are found.")

        engine_version = ".".join(engine_version.split(".")[:2])
        if engine_version not in detected.keys():
            raise ApplicationLaunchFailed((
                f"{self.signature} requested version not "
                f"detected [ {engine_version} ]"
            ))

        ue_path = unreal_lib.get_editor_exe_path(
            Path(detected[engine_version]), engine_version)

        self.launch_context.launch_args = [ue_path.as_posix()]
        project_path.mkdir(parents=True, exist_ok=True)

        # Set "OPENPYPE_UNREAL_PLUGIN" to current process environment for
@@ -217,7 +198,9 @@ class UnrealPrelaunchHook(PreLaunchHook):
        if self.launch_context.env.get(env_key):
            os.environ[env_key] = self.launch_context.env[env_key]

        engine_path: Path = Path(detected[engine_version])
        # engine_path points to the specific Unreal Engine root
        # so, we are going up from the executable itself 3 levels.
        engine_path: Path = Path(executable).parents[3]

        if not unreal_lib.check_plugin_existence(engine_path):
            self.exec_plugin_install(engine_path)
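`Path(executable).parents[3]` climbs from the editor binary to the engine root; for a typical install layout (the path is illustrative):

from pathlib import Path

executable = Path("C:/Epic/UE_5.0/Engine/Binaries/Win64/UnrealEditor.exe")
# parents[0] = .../Win64, [1] = .../Binaries, [2] = .../Engine, [3] = UE_5.0
engine_path = executable.parents[3]
assert engine_path == Path("C:/Epic/UE_5.0")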
@ -23,6 +23,8 @@ def get_engine_versions(env=None):
|
|||
Location can be overridden by `UNREAL_ENGINE_LOCATION` environment
|
||||
variable.
|
||||
|
||||
.. deprecated:: 3.15.4
|
||||
|
||||
Args:
|
||||
env (dict, optional): Environment to use.
|
||||
|
||||
|
|
@@ -103,6 +105,8 @@ def _win_get_engine_versions():
     This file is a JSON file listing installed apps; Unreal Engines
     are marked with `"AppName" = "UE_X.XX"`, like `UE_4.24`

+    .. deprecated:: 3.15.4
+
     Returns:
         dict: version as a key and path as a value.
@@ -122,6 +126,8 @@ def _darwin_get_engine_version() -> dict:

     It works the same as on Windows, only the JSON file location differs.

+    .. deprecated:: 3.15.4
+
     Returns:
         dict: version as a key and path as a value.
@@ -144,6 +150,8 @@ def _darwin_get_engine_version() -> dict:
 def _parse_launcher_locations(install_json_path: str) -> dict:
     """This will parse locations from json file.

+    .. deprecated:: 3.15.4
+
     Args:
         install_json_path (str): Path to `LauncherInstalled.dat`.
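All of these now-deprecated helpers read Epic's `LauncherInstalled.dat`. A hedged sketch of the parsing they describe, assuming the documented file layout (a JSON object with an `InstallationList` array whose entries carry `AppName` and `InstallLocation`); this is not the module's actual implementation:

```python
import json

def parse_launcher_locations(install_json_path):
    # Engines are listed as {"AppName": "UE_4.24", "InstallLocation": ...}
    engines = {}
    with open(install_json_path, "r") as f:
        data = json.load(f)
    for item in data.get("InstallationList", []):
        app_name = item.get("AppName", "")
        if app_name.startswith("UE_"):
            # "UE_4.24" -> key "4.24", matching the documented
            # "version as a key and path as a value" return shape.
            engines[app_name[3:]] = item.get("InstallLocation")
    return engines
```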
@@ -1,41 +0,0 @@
-import clique
-
-import pyblish.api
-
-
-class ValidateSequenceFrames(pyblish.api.InstancePlugin):
-    """Ensure the sequence of frames is complete
-
-    The files found in the folder are checked against the frameStart and
-    frameEnd of the instance. If the first or last file is not
-    corresponding with the first or last frame it is flagged as invalid.
-    """
-
-    order = pyblish.api.ValidatorOrder
-    label = "Validate Sequence Frames"
-    families = ["render"]
-    hosts = ["unreal"]
-    optional = True
-
-    def process(self, instance):
-        representations = instance.data.get("representations")
-        for repr in representations:
-            patterns = [clique.PATTERNS["frames"]]
-            collections, remainder = clique.assemble(
-                repr["files"], minimum_items=1, patterns=patterns)
-
-            assert not remainder, "Must not have remainder"
-            assert len(collections) == 1, "Must detect single collection"
-            collection = collections[0]
-            frames = list(collection.indexes)
-
-            current_range = (frames[0], frames[-1])
-            required_range = (instance.data["frameStart"],
-                              instance.data["frameEnd"])
-
-            if current_range != required_range:
-                raise ValueError(f"Invalid frame range: {current_range} - "
-                                 f"expected: {required_range}")
-
-            missing = collection.holes().indexes
-            assert not missing, "Missing frames: %s" % (missing,)
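The deleted validator leaned on `clique` to group files into frame collections and to find holes. A small standalone illustration of those calls (file names invented):

```python
import clique

files = ["render.1001.exr", "render.1002.exr", "render.1004.exr"]
collections, remainder = clique.assemble(
    files, minimum_items=1, patterns=[clique.PATTERNS["frames"]])

collection = collections[0]
print(list(collection.indexes))          # [1001, 1002, 1004]
print(list(collection.holes().indexes))  # [1003] -> the missing frame
```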
@@ -170,11 +170,13 @@ def clean_envs_for_openpype_process(env=None):
     """
     if env is None:
         env = os.environ
-    return {
-        key: value
-        for key, value in env.items()
-        if key not in ("PYTHONPATH",)
-    }
+
+    # Exclude some environment variables from a copy of the environment
+    env = env.copy()
+    for key in ["PYTHONPATH", "PYTHONHOME"]:
+        env.pop(key, None)
+
+    return env


 def run_openpype_process(*args, **kwargs):
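The rewrite now drops `PYTHONHOME` in addition to `PYTHONPATH`, and works on a copy so the caller's mapping is never mutated. A standalone sketch of the resulting behavior, not the module's actual function:

```python
import os

def clean_envs(env=None):
    if env is None:
        env = os.environ
    env = dict(env)  # work on a copy; os.environ stays untouched
    for key in ["PYTHONPATH", "PYTHONHOME"]:
        env.pop(key, None)
    return env

print(sorted(clean_envs({"PATH": "/usr/bin", "PYTHONHOME": "/py"})))
# ['PATH']
```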
@@ -256,17 +256,18 @@ class TemplatesDict(object):
         elif isinstance(templates, dict):
             self._raw_templates = copy.deepcopy(templates)
             self._templates = templates
-            self._objected_templates = self.create_ojected_templates(templates)
+            self._objected_templates = self.create_objected_templates(
+                templates)
         else:
             raise TypeError("<{}> argument must be a dict, not {}.".format(
                 self.__class__.__name__, str(type(templates))
             ))

     def __getitem__(self, key):
-        return self.templates[key]
+        return self.objected_templates[key]

     def get(self, key, *args, **kwargs):
-        return self.templates.get(key, *args, **kwargs)
+        return self.objected_templates.get(key, *args, **kwargs)

     @property
     def raw_templates(self):
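With `__getitem__` and `get` now delegating to `objected_templates`, indexing yields template objects rather than raw strings. An illustrative expectation (the constructor shape is inferred from the dict branch above and is an assumption; exact usage may differ):

```python
# Hypothetical construction; the dict branch above accepts plain dicts.
templates = TemplatesDict({"work": {"file": "{project[name]}_v{version}"}})

template_obj = templates["work"]["file"]  # a StringTemplate, not a str
print(template_obj.format_strict(
    {"project": {"name": "Proj"}, "version": 3}))  # Proj_v3
```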
@@ -280,8 +281,21 @@ class TemplatesDict(object):
     def objected_templates(self):
         return self._objected_templates

-    @classmethod
-    def create_ojected_templates(cls, templates):
+    def _create_template_object(self, template):
+        """Create template object from a template string.
+
+        Separated into a method to allow changing the template class.
+
+        Args:
+            template (str): Template string.
+
+        Returns:
+            StringTemplate: Object of template.
+        """
+
+        return StringTemplate(template)
+
+    def create_objected_templates(self, templates):
         if not isinstance(templates, dict):
             raise TypeError("Expected dict object, got {}".format(
                 str(type(templates))
@@ -297,7 +311,7 @@ class TemplatesDict(object):
             for key in tuple(item.keys()):
                 value = item[key]
                 if isinstance(value, six.string_types):
-                    item[key] = StringTemplate(value)
+                    item[key] = self._create_template_object(value)
                 elif isinstance(value, dict):
                     inner_queue.append(value)
         return objected_templates
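The new `_create_template_object` hook is the single override point: every template string in the nested dict now passes through it. A sketch of a subclass swapping in a custom template class (the import path is assumed and `LoggingTemplate` is invented):

```python
# Assumed import location for both classes.
from openpype.lib.path_templates import StringTemplate, TemplatesDict

class LoggingTemplate(StringTemplate):
    """Hypothetical template that logs each strict format call."""

    def format_strict(self, data):
        print("formatting with:", data)
        return super(LoggingTemplate, self).format_strict(data)

class LoggingTemplatesDict(TemplatesDict):
    def _create_template_object(self, template):
        # Every template string in the nested dict becomes a
        # LoggingTemplate instead of a plain StringTemplate.
        return LoggingTemplate(template)
```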
Some files were not shown because too many files have changed in this diff.