Mirror of https://github.com/ynput/ayon-core.git (synced 2025-12-24 12:54:40 +01:00)

Merge branch 'develop' into kitsu-sync-specific-projects

Commit ae7137a055: 263 changed files with 11853 additions and 1900 deletions

33  .github/ISSUE_TEMPLATE/bug_report.md (vendored, deleted)

@@ -1,33 +0,0 @@
----
-name: Bug report
-about: Create a report to help us improve
-title: ''
-labels: bug
-assignees: ''
-
----
-
-**Running version**
-[ex. 3.14.1-nightly.2]
-
-**Describe the bug**
-A clear and concise description of what the bug is.
-
-**To Reproduce**
-Steps to reproduce the behavior:
-1. Go to '...'
-2. Click on '....'
-3. Scroll down to '....'
-4. See error
-
-**Expected behavior**
-A clear and concise description of what you expected to happen.
-
-**Screenshots**
-If applicable, add screenshots to help explain your problem.
-
-**Desktop (please complete the following information):**
- - OS: [e.g. windows]
- - Host: [e.g. Maya, Nuke, Houdini]
-
-**Additional context**
-Add any other context about the problem here.

183  .github/ISSUE_TEMPLATE/bug_report.yml (vendored, new file)

@@ -0,0 +1,183 @@
+name: Bug Report
+description: File a bug report
+title: 'Bug: '
+labels:
+  - 'type: bug'
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Thanks for taking the time to fill out this bug report!
+  - type: checkboxes
+    attributes:
+      label: Is there an existing issue for this?
+      description: >-
+        Please search to see if an issue already exists for the bug you
+        encountered.
+      options:
+        - label: I have searched the existing issues
+          required: true
+  - type: textarea
+    attributes:
+      label: 'Current Behavior:'
+      description: A concise description of what you're experiencing.
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: 'Expected Behavior:'
+      description: A concise description of what you expected to happen.
+    validations:
+      required: false
+  - type: dropdown
+    id: _version
+    attributes:
+      label: Version
+      description: What version are you running? Look to OpenPype Tray
+      options:
+        - 3.15.7-nightly.1
+        - 3.15.6
+        - 3.15.6-nightly.3
+        - 3.15.6-nightly.2
+        - 3.15.6-nightly.1
+        - 3.15.5
+        - 3.15.5-nightly.2
+        - 3.15.5-nightly.1
+        - 3.15.4
+        - 3.15.4-nightly.3
+        - 3.15.4-nightly.2
+        - 3.15.4-nightly.1
+        - 3.15.3
+        - 3.15.3-nightly.4
+        - 3.15.3-nightly.3
+        - 3.15.3-nightly.2
+        - 3.15.3-nightly.1
+        - 3.15.2
+        - 3.15.2-nightly.6
+        - 3.15.2-nightly.5
+        - 3.15.2-nightly.4
+        - 3.15.2-nightly.3
+        - 3.15.2-nightly.2
+        - 3.15.2-nightly.1
+        - 3.15.1
+        - 3.15.1-nightly.6
+        - 3.15.1-nightly.5
+        - 3.15.1-nightly.4
+        - 3.15.1-nightly.3
+        - 3.15.1-nightly.2
+        - 3.15.1-nightly.1
+        - 3.15.0
+        - 3.15.0-nightly.1
+        - 3.14.11-nightly.4
+        - 3.14.11-nightly.3
+        - 3.14.11-nightly.2
+        - 3.14.11-nightly.1
+        - 3.14.10
+        - 3.14.10-nightly.9
+        - 3.14.10-nightly.8
+        - 3.14.10-nightly.7
+        - 3.14.10-nightly.6
+        - 3.14.10-nightly.5
+        - 3.14.10-nightly.4
+        - 3.14.10-nightly.3
+        - 3.14.10-nightly.2
+        - 3.14.10-nightly.1
+        - 3.14.9
+        - 3.14.9-nightly.5
+        - 3.14.9-nightly.4
+        - 3.14.9-nightly.3
+        - 3.14.9-nightly.2
+        - 3.14.9-nightly.1
+        - 3.14.8
+        - 3.14.8-nightly.4
+        - 3.14.8-nightly.3
+        - 3.14.8-nightly.2
+        - 3.14.8-nightly.1
+        - 3.14.7
+        - 3.14.7-nightly.8
+        - 3.14.7-nightly.7
+        - 3.14.7-nightly.6
+        - 3.14.7-nightly.5
+        - 3.14.7-nightly.4
+        - 3.14.7-nightly.3
+        - 3.14.7-nightly.2
+        - 3.14.7-nightly.1
+        - 3.14.6
+        - 3.14.6-nightly.3
+        - 3.14.6-nightly.2
+        - 3.14.6-nightly.1
+        - 3.14.5
+        - 3.14.5-nightly.3
+        - 3.14.5-nightly.2
+        - 3.14.5-nightly.1
+        - 3.14.4
+        - 3.14.4-nightly.4
+        - 3.14.4-nightly.3
+        - 3.14.4-nightly.2
+        - 3.14.4-nightly.1
+        - 3.14.3
+        - 3.14.3-nightly.7
+        - 3.14.3-nightly.6
+        - 3.14.3-nightly.5
+        - 3.14.3-nightly.4
+        - 3.14.3-nightly.3
+        - 3.14.3-nightly.2
+        - 3.14.3-nightly.1
+        - 3.14.2
+        - 3.14.2-nightly.5
+        - 3.14.2-nightly.4
+        - 3.14.2-nightly.3
+        - 3.14.2-nightly.2
+        - 3.14.2-nightly.1
+        - 3.14.1
+        - 3.14.1-nightly.4
+        - 3.14.1-nightly.3
+        - 3.14.1-nightly.2
+        - 3.14.1-nightly.1
+        - 3.14.0
+    validations:
+      required: true
+  - type: dropdown
+    validations:
+      required: true
+    attributes:
+      label: What platform you are running OpenPype on?
+      description: |
+        Please specify the operating systems you are running OpenPype with.
+      multiple: true
+      options:
+        - Windows
+        - Linux / Centos
+        - Linux / Ubuntu
+        - Linux / RedHat
+        - MacOS
+  - type: textarea
+    id: to-reproduce
+    attributes:
+      label: 'Steps To Reproduce:'
+      description: Steps to reproduce the behavior.
+      placeholder: |
+        1. How did the configuration look like
+        2. What type of action was made
+    validations:
+      required: true
+  - type: checkboxes
+    attributes:
+      label: Are there any labels you wish to add?
+      description: Please search labels and identify those related to your bug.
+      options:
+        - label: I have added the relevant labels to the bug report.
+          required: true
+  - type: textarea
+    id: logs
+    attributes:
+      label: 'Relevant log output:'
+      description: >-
+        Please copy and paste any relevant log output. This will be
+        automatically formatted into code, so no need for backticks.
+      render: shell
+  - type: textarea
+    id: additional-context
+    attributes:
+      label: 'Additional context:'
+      description: Add any other context about the problem here.

8  .github/ISSUE_TEMPLATE/config.yml (vendored, new file)

@@ -0,0 +1,8 @@
+blank_issues_enabled: false
+contact_links:
+  - name: Ynput Community Discussions
+    url: https://community.ynput.io
+    about: Please ask and answer questions here.
+  - name: Ynput Discord Server
+    url: https://discord.gg/ynput
+    about: For community quick chats.

52  .github/ISSUE_TEMPLATE/enhancement_request.yml (vendored, new file)

@@ -0,0 +1,52 @@
+name: Enhancement Request
+description: Create a report to help us enhance a particular feature
+title: "Enhancement: "
+labels:
+  - "type: enhancement"
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Thanks for taking the time to fill out this enhancement request report!
+  - type: checkboxes
+    attributes:
+      label: Is there an existing issue for this?
+      description: Please search to see if an issue already exists for the bug you encountered.
+      options:
+        - label: I have searched the existing issues.
+          required: true
+  - type: textarea
+    id: related-feature
+    attributes:
+      label: Please describe the feature you have in mind and explain what the current shortcomings are?
+      description: A clear and concise description of what the problem is.
+    validations:
+      required: true
+  - type: textarea
+    id: enhancement-proposal
+    attributes:
+      label: How would you imagine the implementation of the feature?
+      description: A clear and concise description of what you want to happen.
+    validations:
+      required: true
+  - type: checkboxes
+    attributes:
+      label: Are there any labels you wish to add?
+      description: Please search labels and identify those related to your enhancement.
+      options:
+        - label: I have added the relevant labels to the enhancement request.
+          required: true
+  - type: textarea
+    id: alternatives
+    attributes:
+      label: "Describe alternatives you've considered:"
+      description: A clear and concise description of any alternative solutions or features you've considered.
+    validations:
+      required: false
+  - type: textarea
+    id: additional-context
+    attributes:
+      label: "Additional context:"
+      description: Add any other context or screenshots about the enhancement request here.
+    validations:
+      required: false

20  .github/ISSUE_TEMPLATE/feature_request.md (vendored, deleted)

@@ -1,20 +0,0 @@
----
-name: Feature request
-about: Suggest an idea for this project
-title: ''
-labels: enhancement
-assignees: ''
-
----
-
-**Is your feature request related to a problem? Please describe.**
-A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
-
-**Describe the solution you'd like**
-A clear and concise description of what you want to happen.
-
-**Describe alternatives you've considered**
-A clear and concise description of any alternative solutions or features you've considered.
-
-**Additional context**
-Add any other context or screenshots about the feature request here.

2  .github/workflows/documentation.yml (vendored)

@@ -1,4 +1,4 @@
-name: documentation
+name: 📜 Documentation
 
 on:
   pull_request:

2  .github/workflows/milestone_assign.yml (vendored)

@@ -1,4 +1,4 @@
-name: Milestone - assign to PRs
+name: 👉🏻 Milestone - assign to PRs
 
 on:
   pull_request_target:

2  .github/workflows/milestone_create.yml (vendored)

@@ -1,4 +1,4 @@
-name: Milestone - create default
+name: ➕ Milestone - create default
 
 on:
   milestone:

@@ -1,4 +1,4 @@
-name: Milestone Release [trigger]
+name: 🚩 Milestone Release [trigger]
 
 on:
   workflow_dispatch:

@@ -45,3 +45,6 @@ jobs:
           token: ${{ secrets.YNPUT_BOT_TOKEN }}
           user_email: ${{ secrets.CI_EMAIL }}
           user_name: ${{ secrets.CI_USER }}
+          cu_api_key: ${{ secrets.CLICKUP_API_KEY }}
+          cu_team_id: ${{ secrets.CLICKUP_TEAM_ID }}
+          cu_field_id: ${{ secrets.CLICKUP_RELEASE_FIELD_ID }}

4  .github/workflows/nightly_merge.yml (vendored)

@@ -1,4 +1,4 @@
-name: Dev -> Main
+name: 🔀 Dev -> Main
 
 on:
   schedule:

@@ -25,5 +25,5 @@ jobs:
       - name: Invoke pre-release workflow
        uses: benc-uk/workflow-dispatch@v1
        with:
-          workflow: Nightly Prerelease
+          workflow: prerelease.yml
          token: ${{ secrets.YNPUT_BOT_TOKEN }}

49  .github/workflows/pr_labels.yml (vendored, new file)

@@ -0,0 +1,49 @@
+name: 🔖 PR labels
+
+on:
+  pull_request_target:
+    types: [opened, assigned]
+
+jobs:
+  size-label:
+    name: pr_size_label
+    runs-on: ubuntu-latest
+    if: github.event.action == 'assigned' || github.event.action == 'opened'
+    steps:
+      - name: Add size label
+        uses: "pascalgn/size-label-action@v0.4.3"
+        env:
+          GITHUB_TOKEN: "${{ secrets.YNPUT_BOT_TOKEN }}"
+          IGNORED: ".gitignore\n*.md\n*.json"
+        with:
+          sizes: >
+            {
+              "0": "XS",
+              "100": "S",
+              "500": "M",
+              "1000": "L",
+              "1500": "XL",
+              "2500": "XXL"
+            }
+
+  label_prs_branch:
+    name: pr_branch_label
+    runs-on: ubuntu-latest
+    if: github.event.action == 'assigned' || github.event.action == 'opened'
+    steps:
+      - name: Label PRs - Branch name detection
+        uses: ffittschen/pr-branch-labeler@v1
+        with:
+          repo-token: ${{ secrets.YNPUT_BOT_TOKEN }}
+
+  label_prs_globe:
+    name: pr_globe_label
+    runs-on: ubuntu-latest
+    if: github.event.action == 'assigned' || github.event.action == 'opened'
+    steps:
+      - name: Label PRs - Globe detection
+        uses: actions/labeler@v4.0.3
+        with:
+          repo-token: ${{ secrets.YNPUT_BOT_TOKEN }}
+          configuration-path: ".github/pr-glob-labeler.yml"
+          sync-labels: false

8  .github/workflows/prerelease.yml (vendored)

@@ -1,4 +1,4 @@
-name: Nightly Prerelease
+name: ⏳ Nightly Prerelease
 
 on:
   workflow_dispatch:

@@ -65,3 +65,9 @@ jobs:
           source_ref: 'main'
           target_branch: 'develop'
           commit_message_template: '[Automated] Merged {source_ref} into {target_branch}'
+
+      - name: Invoke Update bug report workflow
+        uses: benc-uk/workflow-dispatch@v1
+        with:
+          workflow: update_bug_report.yml
+          token: ${{ secrets.YNPUT_BOT_TOKEN }}

@@ -1,8 +1,6 @@
-name: project-actions
+name: 📊 Project task statuses
 
 on:
-  pull_request_target:
-    types: [opened, assigned]
   pull_request_review:
     types: [submitted]
   issue_comment:

@@ -25,7 +23,11 @@ jobs:
    if: |
      (github.event_name == 'issue_comment' && github.event.pull_request.head.repo.owner.login == 'ynput' && github.event.comment.user.id != 82967070) ||
      (github.event_name == 'pull_request_review_comment' && github.event.pull_request.head.repo.owner.login == 'ynput' && github.event.comment.user.type != 'Bot') ||
-      (github.event_name == 'pull_request_review' && github.event.pull_request.head.repo.owner.login == 'ynput' && github.event.review.state != 'changes_requested' && github.event.review.user.type != 'Bot')
+      (github.event_name == 'pull_request_review' &&
+      github.event.pull_request.head.repo.owner.login == 'ynput' &&
+      github.event.review.state != 'changes_requested' &&
+      github.event.review.state != 'approved' &&
+      github.event.review.user.type != 'Bot')
    steps:
      - name: Move PR to 'Review In Progress'
        uses: leonsteinhaeuser/project-beta-automations@v2.1.0

@@ -66,53 +68,3 @@ jobs:
          -d '{
            "status": "in progress"
          }'
-
-  size-label:
-    name: pr_size_label
-    runs-on: ubuntu-latest
-    if: |
-      (github.event_name == 'pull_request' && github.event.action == 'assigned') ||
-      (github.event_name == 'pull_request' && github.event.action == 'opened')
-
-    steps:
-      - name: Add size label
-        uses: "pascalgn/size-label-action@v0.4.3"
-        env:
-          GITHUB_TOKEN: "${{ secrets.YNPUT_BOT_TOKEN }}"
-          IGNORED: ".gitignore\n*.md\n*.json"
-        with:
-          sizes: >
-            {
-              "0": "XS",
-              "100": "S",
-              "500": "M",
-              "1000": "L",
-              "1500": "XL",
-              "2500": "XXL"
-            }
-
-  label_prs_branch:
-    name: pr_branch_label
-    runs-on: ubuntu-latest
-    if: |
-      (github.event_name == 'pull_request' && github.event.action == 'assigned') ||
-      (github.event_name == 'pull_request' && github.event.action == 'opened')
-    steps:
-      - name: Label PRs - Branch name detection
-        uses: ffittschen/pr-branch-labeler@v1
-        with:
-          repo-token: ${{ secrets.YNPUT_BOT_TOKEN }}
-
-  label_prs_globe:
-    name: pr_globe_label
-    runs-on: ubuntu-latest
-    if: |
-      (github.event_name == 'pull_request' && github.event.action == 'assigned') ||
-      (github.event_name == 'pull_request' && github.event.action == 'opened')
-    steps:
-      - name: Label PRs - Globe detection
-        uses: actions/labeler@v4.0.3
-        with:
-          repo-token: ${{ secrets.YNPUT_BOT_TOKEN }}
-          configuration-path: ".github/pr-glob-labeler.yml"
-          sync-labels: false

2  .github/workflows/test_build.yml (vendored)

@@ -1,7 +1,7 @@
 # This workflow will upload a Python Package using Twine when a release is created
 # For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries
 
-name: Test Build
+name: 🏗️ Test Build
 
 on:
   pull_request:

33  .github/workflows/update_bug_report.yml (vendored, new file)

@@ -0,0 +1,33 @@
+name: 🐞 Update Bug Report
+
+on:
+  workflow_dispatch:
+  release:
+    # https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#release
+    types: [published]
+
+jobs:
+  update-bug-report:
+    runs-on: ubuntu-latest
+    name: Update bug report
+    steps:
+      - uses: actions/checkout@v3
+        with:
+          ref: ${{ github.event.release.target_commitish }}
+      - name: Update version
+        uses: ynput/gha-populate-form-version@main
+        with:
+          github_token: ${{ secrets.YNPUT_BOT_TOKEN }}
+          registry: github
+          dropdown: _version
+          limit_to: 100
+          form: .github/ISSUE_TEMPLATE/bug_report.yml
+          commit_message: 'chore(): update bug report / version'
+          dry_run: no-push
+
+      - name: Push to protected develop branch
+        uses: CasperWA/push-protected@v2.10.0
+        with:
+          token: ${{ secrets.YNPUT_BOT_TOKEN }}
+          branch: develop
+          unprotect_reviews: true

1494  CHANGELOG.md
(File diff suppressed because it is too large)

@@ -52,7 +52,7 @@ RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.n
 
 # we need to build our own patchelf
 WORKDIR /temp-patchelf
-RUN git clone https://github.com/NixOS/patchelf.git . \
+RUN git clone -b 0.17.0 --single-branch https://github.com/NixOS/patchelf.git . \
     && source scl_source enable devtoolset-7 \
     && ./bootstrap.sh \
     && ./configure \

@@ -415,11 +415,12 @@ def repack_version(directory):
 @main.command()
 @click.option("--project", help="Project name")
 @click.option(
-    "--dirpath", help="Directory where package is stored", default=None
-)
-def pack_project(project, dirpath):
+    "--dirpath", help="Directory where package is stored", default=None)
+@click.option(
+    "--dbonly", help="Store only Database data", default=False, is_flag=True)
+def pack_project(project, dirpath, dbonly):
     """Create a package of project with all files and database dump."""
-    PypeCommands().pack_project(project, dirpath)
+    PypeCommands().pack_project(project, dirpath, dbonly)
 
 
 @main.command()

@@ -427,9 +428,11 @@ def pack_project(project, dirpath):
 @click.option(
-    "--root", help="Replace root which was stored in project", default=None
-)
-def unpack_project(zipfile, root):
+    "--root", help="Replace root which was stored in project", default=None)
+@click.option(
+    "--dbonly", help="Store only Database data", default=False, is_flag=True)
+def unpack_project(zipfile, root, dbonly):
     """Create a package of project with all files and database dump."""
-    PypeCommands().unpack_project(zipfile, root)
+    PypeCommands().unpack_project(zipfile, root, dbonly)
 
 
 @main.command()

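Both commands above gain a --dbonly flag that restricts packing/unpacking to the database dump. A rough sketch of the equivalent direct calls (the PypeCommands import path is an assumption, not shown in this diff; the argument order follows the CLI wiring above):

# Hypothetical direct use of the commands behind the new --dbonly flag.
from openpype.pype_commands import PypeCommands

# Same as `pack_project --project my_project --dirpath /backups --dbonly`:
# stores only the database dump, skipping project files.
PypeCommands().pack_project("my_project", "/backups", True)

# Same as `unpack_project --zipfile /backups/my_project.zip --dbonly`.
PypeCommands().unpack_project("/backups/my_project.zip", None, True)
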
@@ -69,6 +69,19 @@ def convert_ids(in_ids):
 
 
 def get_projects(active=True, inactive=False, fields=None):
+    """Yield all project entity documents.
+
+    Args:
+        active (Optional[bool]): Include active projects. Defaults to True.
+        inactive (Optional[bool]): Include inactive projects.
+            Defaults to False.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
+
+    Yields:
+        dict: Project entity data which can be reduced to specified 'fields'.
+            None is returned if project with specified filters was not found.
+    """
     mongodb = get_project_database()
     for project_name in mongodb.collection_names():
         if project_name in ("system.indexes",):

@@ -81,6 +94,20 @@ def get_projects(active=True, inactive=False, fields=None):
 
 
 def get_project(project_name, active=True, inactive=True, fields=None):
+    """Return project entity document by project name.
+
+    Args:
+        project_name (str): Name of project.
+        active (Optional[bool]): Allow active project. Defaults to True.
+        inactive (Optional[bool]): Allow inactive project. Defaults to True.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
+
+    Returns:
+        Union[Dict, None]: Project entity data which can be reduced to
+            specified 'fields'. None is returned if project with specified
+            filters was not found.
+    """
     # Skip if both are disabled
     if not active and not inactive:
         return None

@@ -124,17 +151,18 @@ def get_whole_project(project_name):
 
 
 def get_asset_by_id(project_name, asset_id, fields=None):
-    """Receive asset data by it's id.
+    """Receive asset data by its id.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
         asset_id (Union[str, ObjectId]): Asset's id.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        dict: Asset entity data.
-        None: Asset was not found by id.
+        Union[Dict, None]: Asset entity data which can be reduced to
+            specified 'fields'. None is returned if asset with specified
+            filters was not found.
     """
 
     asset_id = convert_id(asset_id)

@@ -147,17 +175,18 @@ def get_asset_by_id(project_name, asset_id, fields=None):
 
 
 def get_asset_by_name(project_name, asset_name, fields=None):
-    """Receive asset data by it's name.
+    """Receive asset data by its name.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
         asset_name (str): Asset's name.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        dict: Asset entity data.
-        None: Asset was not found by name.
+        Union[Dict, None]: Asset entity data which can be reduced to
+            specified 'fields'. None is returned if asset with specified
+            filters was not found.
     """
 
     if not asset_name:

@@ -195,8 +224,8 @@ def _get_assets(
         parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
         standard (bool): Query standard assets (type 'asset').
         archived (bool): Query archived assets (type 'archived_asset').
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         Cursor: Query cursor as iterable which returns asset documents matching

@@ -261,8 +290,8 @@ def get_assets(
         asset_names (Iterable[str]): Name assets that should be found.
         parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
         archived (bool): Add also archived assets.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         Cursor: Query cursor as iterable which returns asset documents matching

@@ -300,8 +329,8 @@ def get_archived_assets(
            be found.
         asset_names (Iterable[str]): Name assets that should be found.
         parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         Cursor: Query cursor as iterable which returns asset documents matching

@@ -356,17 +385,18 @@ def get_asset_ids_with_subsets(project_name, asset_ids=None):
 
 
 def get_subset_by_id(project_name, subset_id, fields=None):
-    """Single subset entity data by it's id.
+    """Single subset entity data by its id.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
         subset_id (Union[str, ObjectId]): Id of subset which should be found.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If subset with specified filters was not found.
-        Dict: Subset document which can be reduced to specified 'fields'.
+        Union[Dict, None]: Subset entity data which can be reduced to
+            specified 'fields'. None is returned if subset with specified
+            filters was not found.
     """
 
     subset_id = convert_id(subset_id)

@@ -379,20 +409,19 @@ def get_subset_by_id(project_name, subset_id, fields=None):
 
 
 def get_subset_by_name(project_name, subset_name, asset_id, fields=None):
-    """Single subset entity data by it's name and it's version id.
+    """Single subset entity data by its name and its version id.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
         subset_name (str): Name of subset.
         asset_id (Union[str, ObjectId]): Id of parent asset.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        Union[None, Dict[str, Any]]: None if subset with specified filters was
-            not found or dict subset document which can be reduced to
-            specified 'fields'.
-
+        Union[Dict, None]: Subset entity data which can be reduced to
+            specified 'fields'. None is returned if subset with specified
+            filters was not found.
     """
     if not subset_name:
         return None

@@ -434,8 +463,8 @@ def get_subsets(
         names_by_asset_ids (dict[ObjectId, List[str]]): Complex filtering
             using asset ids and list of subset names under the asset.
         archived (bool): Look for archived subsets too.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         Cursor: Iterable cursor yielding all matching subsets.

@@ -520,17 +549,18 @@ def get_subset_families(project_name, subset_ids=None):
 
 
 def get_version_by_id(project_name, version_id, fields=None):
-    """Single version entity data by it's id.
+    """Single version entity data by its id.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
         version_id (Union[str, ObjectId]): Id of version which should be found.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If version with specified filters was not found.
-        Dict: Version document which can be reduced to specified 'fields'.
+        Union[Dict, None]: Version entity data which can be reduced to
+            specified 'fields'. None is returned if version with specified
+            filters was not found.
     """
 
     version_id = convert_id(version_id)

@@ -546,18 +576,19 @@ def get_version_by_id(project_name, version_id, fields=None):
 
 
 def get_version_by_name(project_name, version, subset_id, fields=None):
-    """Single version entity data by it's name and subset id.
+    """Single version entity data by its name and subset id.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
-        version (int): name of version entity (it's version).
+        version (int): name of version entity (its version).
         subset_id (Union[str, ObjectId]): Id of version which should be found.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If version with specified filters was not found.
-        Dict: Version document which can be reduced to specified 'fields'.
+        Union[Dict, None]: Version entity data which can be reduced to
+            specified 'fields'. None is returned if version with specified
+            filters was not found.
     """
 
     subset_id = convert_id(subset_id)

@@ -574,7 +605,7 @@ def get_version_by_name(project_name, version, subset_id, fields=None):
 
 
 def version_is_latest(project_name, version_id):
-    """Is version the latest from it's subset.
+    """Is version the latest from its subset.
 
     Note:
         Hero versions are considered as latest.

@@ -680,8 +711,8 @@ def get_versions(
         versions (Iterable[int]): Version names (as integers).
            Filter ignored if 'None' is passed.
         hero (bool): Look also for hero versions.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         Cursor: Iterable cursor yielding all matching versions.

@@ -705,12 +736,13 @@ def get_hero_version_by_subset_id(project_name, subset_id, fields=None):
         project_name (str): Name of project where to look for queried entities.
         subset_id (Union[str, ObjectId]): Subset id under which
             is hero version.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If hero version for passed subset id does not exists.
-        Dict: Hero version entity data.
+        Union[Dict, None]: Hero version entity data which can be reduced to
+            specified 'fields'. None is returned if hero version with specified
+            filters was not found.
     """
 
     subset_id = convert_id(subset_id)

@@ -730,17 +762,18 @@ def get_hero_version_by_subset_id(project_name, subset_id, fields=None):
 
 
 def get_hero_version_by_id(project_name, version_id, fields=None):
-    """Hero version by it's id.
+    """Hero version by its id.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
         version_id (Union[str, ObjectId]): Hero version id.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If hero version with passed id was not found.
-        Dict: Hero version entity data.
+        Union[Dict, None]: Hero version entity data which can be reduced to
+            specified 'fields'. None is returned if hero version with specified
+            filters was not found.
     """
 
     version_id = convert_id(version_id)

@@ -773,8 +806,8 @@ def get_hero_versions(
            should look for hero versions. Filter ignored if 'None' is passed.
         version_ids (Iterable[Union[str, ObjectId]]): Hero version ids. Filter
             ignored if 'None' is passed.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         Cursor|list: Iterable yielding hero versions matching passed filters.

@@ -801,8 +834,8 @@ def get_output_link_versions(project_name, version_id, fields=None):
         project_name (str): Name of project where to look for queried entities.
         version_id (Union[str, ObjectId]): Version id which can be used
             as input link for other versions.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         Iterable: Iterable cursor yielding versions that are used as input

@@ -828,8 +861,8 @@ def get_last_versions(project_name, subset_ids, fields=None):
     Args:
         project_name (str): Name of project where to look for queried entities.
         subset_ids (Iterable[Union[str, ObjectId]]): List of subset ids.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         dict[ObjectId, int]: Key is subset id and value is last version name.

@@ -913,12 +946,13 @@ def get_last_version_by_subset_id(project_name, subset_id, fields=None):
     Args:
         project_name (str): Name of project where to look for queried entities.
         subset_id (Union[str, ObjectId]): Id of version which should be found.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If version with specified filters was not found.
-        Dict: Version document which can be reduced to specified 'fields'.
+        Union[Dict, None]: Version entity data which can be reduced to
+            specified 'fields'. None is returned if version with specified
+            filters was not found.
     """
 
     subset_id = convert_id(subset_id)

@@ -945,12 +979,13 @@ def get_last_version_by_subset_name(
         asset_id (Union[str, ObjectId]): Asset id which is parent of passed
             subset name.
         asset_name (str): Asset name which is parent of passed subset name.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If version with specified filters was not found.
-        Dict: Version document which can be reduced to specified 'fields'.
+        Union[Dict, None]: Version entity data which can be reduced to
+            specified 'fields'. None is returned if version with specified
+            filters was not found.
     """
 
     if not asset_id and not asset_name:

@@ -972,18 +1007,18 @@ def get_last_version_by_subset_name(
 
 
 def get_representation_by_id(project_name, representation_id, fields=None):
-    """Representation entity data by it's id.
+    """Representation entity data by its id.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
         representation_id (Union[str, ObjectId]): Representation id.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If representation with specified filters was not found.
-        Dict: Representation entity data which can be reduced
-            to specified 'fields'.
+        Union[Dict, None]: Representation entity data which can be reduced to
+            specified 'fields'. None is returned if representation with
+            specified filters was not found.
     """
 
     if not representation_id:

@@ -1004,19 +1039,19 @@ def get_representation_by_id(project_name, representation_id, fields=None):
 def get_representation_by_name(
     project_name, representation_name, version_id, fields=None
 ):
-    """Representation entity data by it's name and it's version id.
+    """Representation entity data by its name and its version id.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
         representation_name (str): Representation name.
         version_id (Union[str, ObjectId]): Id of parent version entity.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If representation with specified filters was not found.
-        Dict: Representation entity data which can be reduced
-            to specified 'fields'.
+        Union[dict[str, Any], None]: Representation entity data which can be
+            reduced to specified 'fields'. None is returned if representation
+            with specified filters was not found.
     """
 
     version_id = convert_id(version_id)

@@ -1202,8 +1237,8 @@ def get_representations(
         names_by_version_ids (dict[ObjectId, list[str]]): Complex filtering
             using version ids and list of names under the version.
         archived (bool): Output will also contain archived representations.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         Cursor: Iterable cursor yielding all matching representations.

@@ -1216,7 +1251,7 @@ def get_representations(
         version_ids=version_ids,
         context_filters=context_filters,
         names_by_version_ids=names_by_version_ids,
-        standard=True,
+        standard=standard,
         archived=archived,
         fields=fields
     )

@@ -1247,8 +1282,8 @@ def get_archived_representations(
            representation context fields.
         names_by_version_ids (dict[ObjectId, List[str]]): Complex filtering
             using version ids and list of names under the version.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         Cursor: Iterable cursor yielding all matching representations.

@@ -1377,8 +1412,8 @@ def get_thumbnail_id_from_source(project_name, src_type, src_id):
         src_id (Union[str, ObjectId]): Id of source entity.
 
     Returns:
-        ObjectId: Thumbnail id assigned to entity.
-        None: If Source entity does not have any thumbnail id assigned.
+        Union[ObjectId, None]: Thumbnail id assigned to entity. If Source
+            entity does not have any thumbnail id assigned.
     """
 
     if not src_type or not src_id:

@@ -1397,14 +1432,14 @@ def get_thumbnails(project_name, thumbnail_ids, fields=None):
     """Receive thumbnails entity data.
 
     Thumbnail entity can be used to receive binary content of thumbnail based
-    on it's content and ThumbnailResolvers.
+    on its content and ThumbnailResolvers.
 
     Args:
         project_name (str): Name of project where to look for queried entities.
         thumbnail_ids (Iterable[Union[str, ObjectId]]): Ids of thumbnail
            entities.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
         cursor: Cursor of queried documents.

@@ -1429,12 +1464,13 @@ def get_thumbnail(project_name, thumbnail_id, fields=None):
     Args:
         project_name (str): Name of project where to look for queried entities.
         thumbnail_id (Union[str, ObjectId]): Id of thumbnail entity.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
 
     Returns:
-        None: If thumbnail with specified id was not found.
-        Dict: Thumbnail entity data which can be reduced to specified 'fields'.
+        Union[Dict, None]: Thumbnail entity data which can be reduced to
+            specified 'fields'. None is returned if thumbnail with specified
+            filters was not found.
     """
 
     if not thumbnail_id:

@@ -1458,8 +1494,13 @@ def get_workfile_info(
         project_name (str): Name of project where to look for queried entities.
         asset_id (Union[str, ObjectId]): Id of asset entity.
         task_name (str): Task name on asset.
-        fields (Iterable[str]): Fields that should be returned. All fields are
-            returned if 'None' is passed.
+        fields (Optional[Iterable[str]]): Fields that should be returned. All
+            fields are returned if 'None' is passed.
+
+    Returns:
+        Union[Dict, None]: Workfile entity data which can be reduced to
+            specified 'fields'. None is returned if workfile with specified
+            filters was not found.
     """
 
     if not asset_id or not task_name or not filename:

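The docstring changes above standardize the 'fields' parameter and the Union[Dict, None] return convention across the query helpers. A quick illustration of what that contract means for a caller (project and asset names are made up; the imports assume these helpers are exposed via openpype.client):

# Sketch of the documented 'fields' / None-return contract.
from openpype.client import get_project, get_asset_by_name

# Full project document when 'fields' is None.
project = get_project("my_project")

# Document reduced to only the requested fields.
project_slim = get_project("my_project", fields=["name"])

# None when nothing matches the filters.
assert get_asset_by_name("my_project", "no_such_asset") is None
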
@@ -5,6 +5,12 @@ import logging
 import pymongo
 import certifi
 
+from bson.json_util import (
+    loads,
+    dumps,
+    CANONICAL_JSON_OPTIONS
+)
+
 if sys.version_info[0] == 2:
     from urlparse import urlparse, parse_qs
 else:

@@ -15,6 +21,49 @@ class MongoEnvNotSet(Exception):
     pass
 
 
+def documents_to_json(docs):
+    """Convert documents to json string.
+
+    Args:
+        Union[list[dict[str, Any]], dict[str, Any]]: Document/s to convert to
+            json string.
+
+    Returns:
+        str: Json string with mongo documents.
+    """
+
+    return dumps(docs, json_options=CANONICAL_JSON_OPTIONS)
+
+
+def load_json_file(filepath):
+    """Load mongo documents from a json file.
+
+    Args:
+        filepath (str): Path to a json file.
+
+    Returns:
+        Union[dict[str, Any], list[dict[str, Any]]]: Loaded content from a
+            json file.
+    """
+
+    if not os.path.exists(filepath):
+        raise ValueError("Path {} was not found".format(filepath))
+
+    with open(filepath, "r") as stream:
+        content = stream.read()
+    return loads("".join(content))
+
+
+def get_project_database_name():
+    """Name of database name where projects are available.
+
+    Returns:
+        str: Name of database name where projects are.
+    """
+
+    return os.environ.get("AVALON_DB") or "avalon"
+
+
 def _decompose_url(url):
     """Decompose mongo url to basic components.
 

@@ -210,12 +259,102 @@ class OpenPypeMongoConnection:
         return mongo_client
 
 
-def get_project_database():
-    db_name = os.environ.get("AVALON_DB") or "avalon"
-    return OpenPypeMongoConnection.get_mongo_client()[db_name]
+# ------ Helper Mongo functions ------
+# Functions can be helpful with custom tools to backup/restore mongo state.
+# Not meant as API functionality that should be used in production codebase!
+def get_collection_documents(database_name, collection_name, as_json=False):
+    """Query all documents from a collection.
+
+    Args:
+        database_name (str): Name of database where to look for collection.
+        collection_name (str): Name of collection where to look for collection.
+        as_json (Optional[bool]): Output should be a json string.
+            Default: 'False'
+
+    Returns:
+        Union[list[dict[str, Any]], str]: Queried documents.
+    """
+
+    client = OpenPypeMongoConnection.get_mongo_client()
+    output = list(client[database_name][collection_name].find({}))
+    if as_json:
+        output = documents_to_json(output)
+    return output
+
+
+def store_collection(filepath, database_name, collection_name):
+    """Store collection documents to a json file.
+
+    Args:
+        filepath (str): Path to a json file where documents will be stored.
+        database_name (str): Name of database where to look for collection.
+        collection_name (str): Name of collection to store.
+    """
+
+    # Make sure directory for output file exists
+    dirpath = os.path.dirname(filepath)
+    if not os.path.isdir(dirpath):
+        os.makedirs(dirpath)
+
+    content = get_collection_documents(database_name, collection_name, True)
+    with open(filepath, "w") as stream:
+        stream.write(content)
+
+
+def replace_collection_documents(docs, database_name, collection_name):
+    """Replace all documents in a collection with passed documents.
+
+    Warnings:
+        All existing documents in collection will be removed if there are any.
+
+    Args:
+        docs (list[dict[str, Any]]): New documents.
+        database_name (str): Name of database where to look for collection.
+        collection_name (str): Name of collection where new documents are
+            uploaded.
+    """
+
+    client = OpenPypeMongoConnection.get_mongo_client()
+    database = client[database_name]
+    if collection_name in database.list_collection_names():
+        database.drop_collection(collection_name)
+    col = database[collection_name]
+    col.insert_many(docs)
+
+
+def restore_collection(filepath, database_name, collection_name):
+    """Restore/replace collection from a json filepath.
+
+    Warnings:
+        All existing documents in collection will be removed if there are any.
+
+    Args:
+        filepath (str): Path to a json with documents.
+        database_name (str): Name of database where to look for collection.
+        collection_name (str): Name of collection where new documents are
+            uploaded.
+    """
+
+    docs = load_json_file(filepath)
+    replace_collection_documents(docs, database_name, collection_name)
+
+
+def get_project_database(database_name=None):
+    """Database object where project collections are.
+
+    Args:
+        database_name (Optional[str]): Custom name of database.
+
+    Returns:
+        pymongo.database.Database: Collection related to passed project.
+    """
+
+    if not database_name:
+        database_name = get_project_database_name()
+    return OpenPypeMongoConnection.get_mongo_client()[database_name]
 
 
-def get_project_connection(project_name):
+def get_project_connection(project_name, database_name=None):
     """Direct access to mongo collection.
 
     We're trying to avoid using direct access to mongo. This should be used

@@ -223,13 +362,83 @@ def get_project_connection(project_name):
     api calls for that.
 
     Args:
-        project_name(str): Project name for which collection should be
+        project_name (str): Project name for which collection should be
             returned.
+        database_name (Optional[str]): Custom name of database.
 
     Returns:
-        pymongo.Collection: Collection realated to passed project.
+        pymongo.collection.Collection: Collection related to passed project.
     """
 
     if not project_name:
         raise ValueError("Invalid project name {}".format(str(project_name)))
-    return get_project_database()[project_name]
+    return get_project_database(database_name)[project_name]
+
+
+def get_project_documents(project_name, database_name=None):
+    """Query all documents from project collection.
+
+    Args:
+        project_name (str): Name of project.
+        database_name (Optional[str]): Name of mongo database where to look for
+            project.
+
+    Returns:
+        list[dict[str, Any]]: Documents in project collection.
+    """
+
+    if not database_name:
+        database_name = get_project_database_name()
+    return get_collection_documents(database_name, project_name)
+
+
+def store_project_documents(project_name, filepath, database_name=None):
+    """Store project documents to a file as json string.
+
+    Args:
+        project_name (str): Name of project to store.
+        filepath (str): Path to a json file where output will be stored.
+        database_name (Optional[str]): Name of mongo database where to look for
+            project.
+    """
+
+    if not database_name:
+        database_name = get_project_database_name()
+
+    store_collection(filepath, database_name, project_name)
+
+
+def replace_project_documents(project_name, docs, database_name=None):
+    """Replace documents in mongo with passed documents.
+
+    Warnings:
+        Existing project collection is removed if exists in mongo.
+
+    Args:
+        project_name (str): Name of project.
+        docs (list[dict[str, Any]]): Documents to restore.
+        database_name (Optional[str]): Name of mongo database where project
+            collection will be created.
+    """
+
+    if not database_name:
+        database_name = get_project_database_name()
+    replace_collection_documents(docs, database_name, project_name)
+
+
+def restore_project_documents(project_name, filepath, database_name=None):
+    """Replace documents in mongo with passed documents.
+
+    Warnings:
+        Existing project collection is removed if exists in mongo.
+
+    Args:
+        project_name (str): Name of project.
+        filepath (str): File to json file with project documents.
+        database_name (Optional[str]): Name of mongo database where project
+            collection will be created.
+    """
+
+    if not database_name:
+        database_name = get_project_database_name()
+    restore_collection(filepath, database_name, project_name)

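Taken together, the new helpers form a backup/restore round trip for a project collection. A minimal sketch (the module path and file location are assumptions, not part of this diff):

# Hypothetical backup/restore round trip using the new helpers.
from openpype.client.mongo import (
    store_project_documents,
    restore_project_documents,
)

# Dump every document of the project's collection to a JSON file
# (canonical extended JSON, via documents_to_json).
store_project_documents("my_project", "/backups/my_project.json")

# Later: drop the collection and re-create it from the dump.
restore_project_documents("my_project", "/backups/my_project.json")
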
@@ -25,6 +25,7 @@ class AddLastWorkfileToLaunchArgs(PreLaunchHook):
         "blender",
         "photoshop",
         "tvpaint",
+        "substancepainter",
         "aftereffects"
     ]
 

@@ -42,13 +43,5 @@ class AddLastWorkfileToLaunchArgs(PreLaunchHook):
             self.log.info("Current context does not have any workfile yet.")
             return
 
-        # Determine whether to open workfile post initialization.
-        if self.host_name == "maya":
-            key = "open_workfile_post_initialization"
-            if self.data["project_settings"]["maya"][key]:
-                self.log.debug("Opening workfile post initialization.")
-                self.data["env"]["OPENPYPE_" + key.upper()] = "1"
-                return
-
         # Add path to workfile to arguments
         self.launch_context.launch_args.append(last_workfile)

37  openpype/hooks/pre_host_set_ocio.py (new file)

@@ -0,0 +1,37 @@
+from openpype.lib import PreLaunchHook
+
+from openpype.pipeline.colorspace import get_imageio_config
+from openpype.pipeline.template_data import get_template_data
+
+
+class PreLaunchHostSetOCIO(PreLaunchHook):
+    """Set OCIO environment for the host"""
+
+    order = 0
+    app_groups = ["substancepainter"]
+
+    def execute(self):
+        """Hook entry method."""
+
+        anatomy_data = get_template_data(
+            project_doc=self.data["project_doc"],
+            asset_doc=self.data["asset_doc"],
+            task_name=self.data["task_name"],
+            host_name=self.host_name,
+            system_settings=self.data["system_settings"]
+        )
+
+        ocio_config = get_imageio_config(
+            project_name=self.data["project_doc"]["name"],
+            host_name=self.host_name,
+            project_settings=self.data["project_settings"],
+            anatomy_data=anatomy_data,
+            anatomy=self.data["anatomy"]
+        )
+
+        if ocio_config:
+            ocio_path = ocio_config["path"]
+            self.log.info(f"Setting OCIO config path: {ocio_path}")
+            self.launch_context.env["OCIO"] = ocio_path
+        else:
+            self.log.debug("OCIO not set or enabled")

|
|
@ -26,12 +26,9 @@ class RenderCreator(Creator):
|
|||
|
||||
create_allow_context_change = True
|
||||
|
||||
def __init__(self, project_settings, *args, **kwargs):
|
||||
super(RenderCreator, self).__init__(project_settings, *args, **kwargs)
|
||||
self._default_variants = (project_settings["aftereffects"]
|
||||
["create"]
|
||||
["RenderCreator"]
|
||||
["defaults"])
|
||||
# Settings
|
||||
default_variants = []
|
||||
mark_for_review = True
|
||||
|
||||
def create(self, subset_name_from_ui, data, pre_create_data):
|
||||
stub = api.get_stub() # only after After Effects is up
|
||||
|
|
@ -82,28 +79,40 @@ class RenderCreator(Creator):
|
|||
use_farm = pre_create_data["farm"]
|
||||
new_instance.creator_attributes["farm"] = use_farm
|
||||
|
||||
review = pre_create_data["mark_for_review"]
|
||||
new_instance.creator_attributes["mark_for_review"] = review
|
||||
|
||||
api.get_stub().imprint(new_instance.id,
|
||||
new_instance.data_to_store())
|
||||
self._add_instance_to_context(new_instance)
|
||||
|
||||
stub.rename_item(comp.id, subset_name)
|
||||
|
||||
def get_default_variants(self):
|
||||
return self._default_variants
|
||||
|
||||
def get_instance_attr_defs(self):
|
||||
return [BoolDef("farm", label="Render on farm")]
|
||||
|
||||
def get_pre_create_attr_defs(self):
|
||||
output = [
|
||||
BoolDef("use_selection", default=True, label="Use selection"),
|
||||
BoolDef("use_composition_name",
|
||||
label="Use composition name in subset"),
|
||||
UISeparatorDef(),
|
||||
BoolDef("farm", label="Render on farm")
|
||||
BoolDef("farm", label="Render on farm"),
|
||||
BoolDef(
|
||||
"mark_for_review",
|
||||
label="Review",
|
||||
default=self.mark_for_review
|
||||
)
|
||||
]
|
||||
return output
|
||||
|
||||
def get_instance_attr_defs(self):
|
||||
return [
|
||||
BoolDef("farm", label="Render on farm"),
|
||||
BoolDef(
|
||||
"mark_for_review",
|
||||
label="Review",
|
||||
default=False
|
||||
)
|
||||
]
|
||||
|
||||
def get_icon(self):
|
||||
return resources.get_openpype_splash_filepath()
|
||||
|
||||
|
|
@@ -143,6 +152,13 @@ class RenderCreator(Creator):
        api.get_stub().rename_item(comp_id,
                                   new_comp_name)

    def apply_settings(self, project_settings, system_settings):
        plugin_settings = (
            project_settings["aftereffects"]["create"]["RenderCreator"]
        )

        self.mark_for_review = plugin_settings["mark_for_review"]

    def get_detail_description(self):
        return """Creator for Render instances
@@ -201,4 +217,7 @@ class RenderCreator(Creator):
        instance_data["creator_attributes"] = {"farm": is_old_farm}
        instance_data["family"] = self.family

        if instance_data["creator_attributes"].get("mark_for_review") is None:
            instance_data["creator_attributes"]["mark_for_review"] = True

        return instance_data
@@ -88,10 +88,11 @@ class CollectAERender(publish.AbstractCollectRender):
            raise ValueError("No file extension set in Render Queue")
        render_item = render_q[0]

        instance_families = inst.data.get("families", [])
        subset_name = inst.data["subset"]
        instance = AERenderInstance(
            family="render",
            families=inst.data.get("families", []),
            families=instance_families,
            version=version,
            time="",
            source=current_file,
@@ -109,6 +110,7 @@ class CollectAERender(publish.AbstractCollectRender):
            tileRendering=False,
            tilesX=0,
            tilesY=0,
            review="review" in instance_families,
            frameStart=frame_start,
            frameEnd=frame_end,
            frameStep=1,
@@ -139,6 +141,9 @@ class CollectAERender(publish.AbstractCollectRender):
            instance.toBeRenderedOn = "deadline"
            instance.renderer = "aerender"
            instance.farm = True  # to skip integrate
            if "review" in instance.families:
                # to skip ExtractReview locally
                instance.families.remove("review")

            instances.append(instance)
            instances_to_remove.append(inst)
@@ -218,15 +223,4 @@ class CollectAERender(publish.AbstractCollectRender):
            if fam not in instance.families:
                instance.families.append(fam)

        settings = get_project_settings(os.getenv("AVALON_PROJECT"))
        reviewable_subset_filter = (settings["deadline"]
                                    ["publish"]
                                    ["ProcessSubmittedJobOnFarm"]
                                    ["aov_filter"].get(self.hosts[0]))
        for aov_pattern in reviewable_subset_filter:
            if re.match(aov_pattern, instance.subset):
                instance.families.append("review")
                instance.review = True
                break

        return instance
@@ -0,0 +1,25 @@
"""
Requires:
    None

Provides:
    instance     -> family ("review")
"""
import pyblish.api


class CollectReview(pyblish.api.ContextPlugin):
    """Add review to families if instance created with 'mark_for_review' flag
    """
    label = "Collect Review"
    hosts = ["aftereffects"]
    order = pyblish.api.CollectorOrder + 0.1

    def process(self, context):
        for instance in context:
            creator_attributes = instance.data.get("creator_attributes") or {}
            if (
                creator_attributes.get("mark_for_review")
                and "review" not in instance.data["families"]
            ):
                instance.data["families"].append("review")
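A hedged sketch of the instance data shape this collector acts on; the values are illustrative, not taken from the commit:

# Hypothetical instance data produced by RenderCreator with review enabled.
instance_data = {
    "families": ["render"],
    "creator_attributes": {"mark_for_review": True},
}
# The collector's condition, replayed on plain data:
if (instance_data["creator_attributes"].get("mark_for_review")
        and "review" not in instance_data["families"]):
    instance_data["families"].append("review")
assert instance_data["families"] == ["render", "review"]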
@@ -53,10 +53,10 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
            "active": True,
            "asset": asset_entity["name"],
            "task": task,
            "frameStart": asset_entity["data"]["frameStart"],
            "frameEnd": asset_entity["data"]["frameEnd"],
            "handleStart": asset_entity["data"]["handleStart"],
            "handleEnd": asset_entity["data"]["handleEnd"],
            "frameStart": context.data['frameStart'],
            "frameEnd": context.data['frameEnd'],
            "handleStart": context.data['handleStart'],
            "handleEnd": context.data['handleEnd'],
            "fps": asset_entity["data"]["fps"],
            "resolutionWidth": asset_entity["data"].get(
                "resolutionWidth",
@@ -66,33 +66,9 @@ class ExtractLocalRender(publish.Extractor):
            first_repre = not representations
            if instance.data["review"] and first_repre:
                repre_data["tags"] = ["review"]
                thumbnail_path = os.path.join(staging_dir, files[0])
                instance.data["thumbnailSource"] = thumbnail_path

            representations.append(repre_data)

        instance.data["representations"] = representations

        ffmpeg_path = get_ffmpeg_tool_path("ffmpeg")
        # Generate thumbnail.
        thumbnail_path = os.path.join(staging_dir, "thumbnail.jpg")

        args = [
            ffmpeg_path, "-y",
            "-i", first_file_path,
            "-vf", "scale=300:-1",
            "-vframes", "1",
            thumbnail_path
        ]
        self.log.debug("Thumbnail args:: {}".format(args))
        try:
            output = run_subprocess(args)
        except TypeError:
            self.log.warning("Error in creating thumbnail")
            six.reraise(*sys.exc_info())

        instance.data["representations"].append({
            "name": "thumbnail",
            "ext": "jpg",
            "files": os.path.basename(thumbnail_path),
            "stagingDir": staging_dir,
            "tags": ["thumbnail"]
        })
@@ -1,7 +1,5 @@
import os

import qtawesome

from openpype.hosts.fusion.api import (
    get_current_comp,
    comp_lock_and_undo_chunk,
@@ -28,6 +26,7 @@ class CreateSaver(Creator):
    family = "render"
    default_variants = ["Main", "Mask"]
    description = "Fusion Saver to generate image sequence"
    icon = "fa5.eye"

    instance_attributes = ["reviewable"]
@@ -89,9 +88,6 @@ class CreateSaver(Creator):

        self._add_instance_to_context(created_instance)

    def get_icon(self):
        return qtawesome.icon("fa.eye", color="white")

    def update_instances(self, update_list):
        for created_inst, _changes in update_list:
            new_data = created_inst.data_to_store()
@@ -1,5 +1,3 @@
import qtawesome

from openpype.hosts.fusion.api import (
    get_current_comp
)
@@ -15,6 +13,7 @@ class FusionWorkfileCreator(AutoCreator):
    identifier = "workfile"
    family = "workfile"
    label = "Workfile"
    icon = "fa5.file"

    default_variant = "Main"
@@ -104,6 +103,3 @@ class FusionWorkfileCreator(AutoCreator):
        existing_instance["asset"] = asset_name
        existing_instance["task"] = task_name
        existing_instance["subset"] = subset_name

    def get_icon(self):
        return qtawesome.icon("fa.file-o", color="white")
@@ -242,9 +242,15 @@ def launch_zip_file(filepath):
    print(f"Localizing {filepath}")

    temp_path = get_local_harmony_path(filepath)
    scene_name = os.path.basename(temp_path)
    if os.path.exists(os.path.join(temp_path, scene_name)):
        # unzipped with duplicated scene_name
        temp_path = os.path.join(temp_path, scene_name)

    scene_path = os.path.join(
        temp_path, os.path.basename(temp_path) + ".xstage"
        temp_path, scene_name + ".xstage"
    )

    unzip = False
    if os.path.exists(scene_path):
        # Check remote scene is newer than local.
@@ -262,6 +268,10 @@ def launch_zip_file(filepath):
        with _ZipFile(filepath, "r") as zip_ref:
            zip_ref.extractall(temp_path)

        if os.path.exists(os.path.join(temp_path, scene_name)):
            # unzipped with duplicated scene_name
            temp_path = os.path.join(temp_path, scene_name)

    # Close existing scene.
    if ProcessContext.pid:
        os.kill(ProcessContext.pid, signal.SIGTERM)
@@ -309,7 +319,7 @@ def launch_zip_file(filepath):
    )

    if not os.path.exists(scene_path):
        print("error: cannot determine scene file")
        print("error: cannot determine scene file {}".format(scene_path))
        ProcessContext.server.stop()
        return
46
openpype/hosts/houdini/api/action.py
Normal file

@@ -0,0 +1,46 @@
import pyblish.api
import hou

from openpype.pipeline.publish import get_errored_instances_from_context


class SelectInvalidAction(pyblish.api.Action):
    """Select invalid nodes in Houdini when plug-in failed.

    To retrieve the invalid nodes this assumes a static `get_invalid()`
    method is available on the plugin.

    """
    label = "Select invalid"
    on = "failed"  # This action is only available on a failed plug-in
    icon = "search"  # Icon from Awesome Icon

    def process(self, context, plugin):

        errored_instances = get_errored_instances_from_context(context)

        # Apply pyblish.logic to get the instances for the plug-in
        instances = pyblish.api.instances_by_plugin(errored_instances, plugin)

        # Get the invalid nodes for the plug-ins
        self.log.info("Finding invalid nodes..")
        invalid = list()
        for instance in instances:
            invalid_nodes = plugin.get_invalid(instance)
            if invalid_nodes:
                if isinstance(invalid_nodes, (list, tuple)):
                    invalid.extend(invalid_nodes)
                else:
                    self.log.warning("Plug-in returned to be invalid, "
                                     "but has no selectable nodes.")

        hou.clearAllSelected()
        if invalid:
            self.log.info("Selecting invalid nodes: {}".format(
                ", ".join(node.path() for node in invalid)
            ))
            for node in invalid:
                node.setSelected(True)
                node.setCurrent(True)
        else:
            self.log.info("No invalid nodes found.")
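For context, a hedged sketch of how a plug-in opts into this action; the validator name and check below are hypothetical, only the `actions` list and the static `get_invalid()` contract come from the code above:

import pyblish.api

class ValidateSomething(pyblish.api.InstancePlugin):
    """Hypothetical validator wiring in SelectInvalidAction."""
    label = "Validate Something"
    hosts = ["houdini"]
    actions = [SelectInvalidAction]

    @classmethod
    def get_invalid(cls, instance):
        # Return the hou.Node objects that fail the check (illustrative).
        return []

    def process(self, instance):
        if self.get_invalid(instance):
            raise RuntimeError("Found invalid nodes.")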
233
openpype/hosts/houdini/api/creator_node_shelves.py
Normal file

@@ -0,0 +1,233 @@
"""Library to register OpenPype Creators for Houdini TAB node search menu.

This can be used to install custom houdini tools for the TAB search
menu which will trigger a publish instance to be created interactively.

The Creators are automatically registered on launch of Houdini through the
Houdini integration's `host.install()` method.

"""
import contextlib
import tempfile
import logging
import os

from openpype.client import get_asset_by_name
from openpype.pipeline import registered_host
from openpype.pipeline.create import CreateContext
from openpype.resources import get_openpype_icon_filepath

import hou
import stateutils
import soptoolutils
import loptoolutils
import cop2toolutils


log = logging.getLogger(__name__)

CATEGORY_GENERIC_TOOL = {
    hou.sopNodeTypeCategory(): soptoolutils.genericTool,
    hou.cop2NodeTypeCategory(): cop2toolutils.genericTool,
    hou.lopNodeTypeCategory(): loptoolutils.genericTool
}


CREATE_SCRIPT = """
from openpype.hosts.houdini.api.creator_node_shelves import create_interactive
create_interactive("{identifier}", **kwargs)
"""


def create_interactive(creator_identifier, **kwargs):
    """Create a Creator using its identifier interactively.

    This is used by the generated shelf tools as callback when a user selects
    the creator from the node tab search menu.

    The `kwargs` should be what Houdini passes to the tool create scripts
    context. For more information see:
    https://www.sidefx.com/docs/houdini/hom/tool_script.html#arguments

    Args:
        creator_identifier (str): The creator identifier of the Creator plugin
            to create.

    Return:
        list: The created instances.

    """

    # TODO Use Qt instead
    result, variant = hou.ui.readInput('Define variant name',
                                       buttons=("Ok", "Cancel"),
                                       initial_contents='Main',
                                       title="Define variant",
                                       help="Set the variant for the "
                                            "publish instance",
                                       close_choice=1)
    if result == 1:
        # User interrupted
        return
    variant = variant.strip()
    if not variant:
        raise RuntimeError("Empty variant value entered.")

    host = registered_host()
    context = CreateContext(host)
    creator = context.manual_creators.get(creator_identifier)
    if not creator:
        raise RuntimeError("Invalid creator identifier: "
                           "{}".format(creator_identifier))

    # TODO: Once more elaborate unique create behavior should exist per Creator
    #   instead of per network editor area then we should move this from here
    #   to a method on the Creators for which this could be the default
    #   implementation.
    pane = stateutils.activePane(kwargs)
    if isinstance(pane, hou.NetworkEditor):
        pwd = pane.pwd()
        subset_name = creator.get_subset_name(
            variant=variant,
            task_name=context.get_current_task_name(),
            asset_doc=get_asset_by_name(
                project_name=context.get_current_project_name(),
                asset_name=context.get_current_asset_name()
            ),
            project_name=context.get_current_project_name(),
            host_name=context.host_name
        )

        tool_fn = CATEGORY_GENERIC_TOOL.get(pwd.childTypeCategory())
        if tool_fn is not None:
            out_null = tool_fn(kwargs, "null")
            out_null.setName("OUT_{}".format(subset_name), unique_name=True)

    before = context.instances_by_id.copy()

    # Create the instance
    context.create(
        creator_identifier=creator_identifier,
        variant=variant,
        pre_create_data={"use_selection": True}
    )

    # For convenience we set the new node as current since that's much more
    # familiar to the artist when creating a node interactively
    # TODO Allow to disable auto-select in studio settings or user preferences
    after = context.instances_by_id
    new = set(after) - set(before)
    if new:
        # Select the new instance
        for instance_id in new:
            instance = after[instance_id]
            node = hou.node(instance.get("instance_node"))
            node.setCurrent(True)

    return list(new)


@contextlib.contextmanager
def shelves_change_block():
    """Write shelf changes at the end of the context."""
    hou.shelves.beginChangeBlock()
    try:
        yield
    finally:
        hou.shelves.endChangeBlock()


def install():
    """Install the Creator plug-ins to show in Houdini's TAB node search menu.

    This function is re-entrant and can be called again to reinstall and
    update the node definitions. For example during development it can be
    useful to call it manually:
        >>> from openpype.hosts.houdini.api.creator_node_shelves import install
        >>> install()

    Returns:
        list: List of `hou.Tool` instances

    """

    host = registered_host()

    # Store the filepath on the host
    # TODO: Define a less hacky static shelf path for current houdini session
    filepath_attr = "_creator_node_shelf_filepath"
    filepath = getattr(host, filepath_attr, None)
    if filepath is None:
        f = tempfile.NamedTemporaryFile(prefix="houdini_creator_nodes_",
                                        suffix=".shelf",
                                        delete=False)
        f.close()
        filepath = f.name
        setattr(host, filepath_attr, filepath)
    elif os.path.exists(filepath):
        # Remove any existing shelf file so that we can completely regenerate
        # and update the tools file if creator identifiers change
        os.remove(filepath)

    icon = get_openpype_icon_filepath()

    # Create context only to get creator plugins, so we don't reset and only
    # populate what we need to retrieve the list of creator plugins
    create_context = CreateContext(host, reset=False)
    create_context.reset_current_context()
    create_context._reset_creator_plugins()

    log.debug("Writing OpenPype Creator nodes to shelf: {}".format(filepath))
    tools = []

    with shelves_change_block():
        for identifier, creator in create_context.manual_creators.items():

            # Allow the creator plug-in itself to override the categories
            # for where they are shown with `Creator.get_network_categories()`
            if not hasattr(creator, "get_network_categories"):
                log.debug("Creator {} has no `get_network_categories` method "
                          "and will not be added to TAB search."
                          .format(identifier))
                continue

            network_categories = creator.get_network_categories()
            if not network_categories:
                continue

            key = "openpype_create.{}".format(identifier)
            log.debug(f"Registering {key}")
            script = CREATE_SCRIPT.format(identifier=identifier)
            data = {
                "script": script,
                "language": hou.scriptLanguage.Python,
                "icon": icon,
                "help": "Create OpenPype publish instance for {}".format(
                    creator.label
                ),
                "help_url": None,
                "network_categories": network_categories,
                "viewer_categories": [],
                "cop_viewer_categories": [],
                "network_op_type": None,
                "viewer_op_type": None,
                "locations": ["OpenPype"]
            }
            label = "Create {}".format(creator.label)
            tool = hou.shelves.tool(key)
            if tool:
                tool.setData(**data)
                tool.setLabel(label)
            else:
                tool = hou.shelves.newTool(
                    file_path=filepath,
                    name=key,
                    label=label,
                    **data
                )

            tools.append(tool)

    # Ensure the shelf is reloaded
    hou.shelves.loadFile(filepath)

    return tools
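For each creator the registered tool script is just CREATE_SCRIPT with the identifier substituted; for example, with the Review creator identifier that appears later in this commit (the `kwargs` come from Houdini's tool script context):

from openpype.hosts.houdini.api.creator_node_shelves import create_interactive
create_interactive("io.openpype.creators.houdini.review", **kwargs)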
@@ -127,6 +127,8 @@ def get_output_parameter(node):
        return node.parm("filename")
    elif node_type == "comp":
        return node.parm("copoutput")
    elif node_type == "opengl":
        return node.parm("picture")
    elif node_type == "arnold":
        if node.evalParm("ar_ass_export_enable"):
            return node.parm("ar_ass_file")
@@ -18,7 +18,7 @@ from openpype.pipeline import (
)
from openpype.pipeline.load import any_outdated_containers
from openpype.hosts.houdini import HOUDINI_HOST_DIR
from openpype.hosts.houdini.api import lib, shelves
from openpype.hosts.houdini.api import lib, shelves, creator_node_shelves

from openpype.lib import (
    register_event_callback,
@@ -83,6 +83,10 @@ class HoudiniHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
        _set_context_settings()
        shelves.generate_shelves()

        if not IS_HEADLESS:
            import hdefereval  # noqa, hdefereval is only available in ui mode
            hdefereval.executeDeferred(creator_node_shelves.install)

    def has_unsaved_changes(self):
        return hou.hipFile.hasUnsavedChanges()
@@ -276,3 +276,19 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase):
        color = hou.Color((0.616, 0.871, 0.769))
        node.setUserData('nodeshape', shape)
        node.setColor(color)

    def get_network_categories(self):
        """Return in which network view type this creator should show.

        The node type categories returned here will be used to define where
        the creator will show up in the TAB search for nodes in Houdini's
        Network View.

        This can be overridden in inherited classes to define where that
        particular Creator should be visible in the TAB search.

        Returns:
            list: List of houdini node type categories

        """
        return [hou.ropNodeTypeCategory()]
@@ -3,6 +3,8 @@
from openpype.hosts.houdini.api import plugin
from openpype.pipeline import CreatedInstance, CreatorError

import hou


class CreateAlembicCamera(plugin.HoudiniCreator):
    """Single baked camera from Alembic ROP."""

@@ -47,3 +49,9 @@ class CreateAlembicCamera(plugin.HoudiniCreator):
        self.lock_parameters(instance_node, to_lock)

        instance_node.parm("trange").set(1)

    def get_network_categories(self):
        return [
            hou.ropNodeTypeCategory(),
            hou.objNodeTypeCategory()
        ]
@@ -1,7 +1,9 @@
# -*- coding: utf-8 -*-
"""Creator plugin for creating composite sequences."""
from openpype.hosts.houdini.api import plugin
from openpype.pipeline import CreatedInstance
from openpype.pipeline import CreatedInstance, CreatorError

import hou


class CreateCompositeSequence(plugin.HoudiniCreator):

@@ -35,8 +37,20 @@ class CreateCompositeSequence(plugin.HoudiniCreator):
            "copoutput": filepath
        }

        if self.selected_nodes:
            if len(self.selected_nodes) > 1:
                raise CreatorError("More than one item selected.")
            path = self.selected_nodes[0].path()
            parms["coppath"] = path

        instance_node.setParms(parms)

        # Lock any parameters in this list
        to_lock = ["prim_to_detail_pattern"]
        self.lock_parameters(instance_node, to_lock)

    def get_network_categories(self):
        return [
            hou.ropNodeTypeCategory(),
            hou.cop2NodeTypeCategory()
        ]
@@ -3,6 +3,8 @@
from openpype.hosts.houdini.api import plugin
from openpype.pipeline import CreatedInstance

import hou


class CreatePointCache(plugin.HoudiniCreator):
    """Alembic ROP to pointcache"""

@@ -49,3 +51,9 @@ class CreatePointCache(plugin.HoudiniCreator):
        # Lock any parameters in this list
        to_lock = ["prim_to_detail_pattern"]
        self.lock_parameters(instance_node, to_lock)

    def get_network_categories(self):
        return [
            hou.ropNodeTypeCategory(),
            hou.sopNodeTypeCategory()
        ]
125
openpype/hosts/houdini/plugins/create/create_review.py
Normal file

@@ -0,0 +1,125 @@
# -*- coding: utf-8 -*-
"""Creator plugin for creating openGL reviews."""
from openpype.hosts.houdini.api import plugin
from openpype.lib import EnumDef, BoolDef, NumberDef


class CreateReview(plugin.HoudiniCreator):
    """Review with OpenGL ROP"""

    identifier = "io.openpype.creators.houdini.review"
    label = "Review"
    family = "review"
    icon = "video-camera"

    def create(self, subset_name, instance_data, pre_create_data):
        import hou

        instance_data.pop("active", None)
        instance_data.update({"node_type": "opengl"})
        instance_data["imageFormat"] = pre_create_data.get("imageFormat")
        instance_data["keepImages"] = pre_create_data.get("keepImages")

        instance = super(CreateReview, self).create(
            subset_name,
            instance_data,
            pre_create_data)

        instance_node = hou.node(instance.get("instance_node"))

        frame_range = hou.playbar.frameRange()

        filepath = "{root}/{subset}/{subset}.$F4.{ext}".format(
            root=hou.text.expandString("$HIP/pyblish"),
            subset="`chs(\"subset\")`",  # keep dynamic link to subset
            ext=pre_create_data.get("image_format") or "png"
        )

        parms = {
            "picture": filepath,

            "trange": 1,

            # Unlike many other ROP nodes the opengl node does not default
            # to expression of $FSTART and $FEND so we preserve that behavior
            # but do set the range to the frame range of the playbar
            "f1": frame_range[0],
            "f2": frame_range[1],
        }

        override_resolution = pre_create_data.get("override_resolution")
        if override_resolution:
            parms.update({
                "tres": override_resolution,
                "res1": pre_create_data.get("resx"),
                "res2": pre_create_data.get("resy"),
                "aspect": pre_create_data.get("aspect"),
            })

        if self.selected_nodes:
            # The first camera found in selection we will use as camera
            # Other node types we set in force objects
            camera = None
            force_objects = []
            for node in self.selected_nodes:
                path = node.path()
                if node.type().name() == "cam":
                    if camera:
                        continue
                    camera = path
                else:
                    force_objects.append(path)

            if not camera:
                self.log.warning("No camera found in selection.")

            parms.update({
                "camera": camera or "",
                "scenepath": "/obj",
                "forceobjects": " ".join(force_objects),
                "vobjects": ""  # clear candidate objects from '*' value
            })

        instance_node.setParms(parms)

        to_lock = ["id", "family"]

        self.lock_parameters(instance_node, to_lock)

    def get_pre_create_attr_defs(self):
        attrs = super(CreateReview, self).get_pre_create_attr_defs()

        image_format_enum = [
            "bmp", "cin", "exr", "jpg", "pic", "pic.gz", "png",
            "rad", "rat", "rta", "sgi", "tga", "tif",
        ]

        return attrs + [
            BoolDef("keepImages",
                    label="Keep Image Sequences",
                    default=False),
            EnumDef("imageFormat",
                    image_format_enum,
                    default="png",
                    label="Image Format Options"),
            BoolDef("override_resolution",
                    label="Override resolution",
                    tooltip="When disabled the resolution set on the camera "
                            "is used instead.",
                    default=True),
            NumberDef("resx",
                      label="Resolution Width",
                      default=1280,
                      minimum=2,
                      decimals=0),
            NumberDef("resy",
                      label="Resolution Height",
                      default=720,
                      minimum=2,
                      decimals=0),
            NumberDef("aspect",
                      label="Aspect Ratio",
                      default=1.0,
                      minimum=0.0001,
                      decimals=3)
        ]
@@ -3,6 +3,8 @@
from openpype.hosts.houdini.api import plugin
from openpype.pipeline import CreatedInstance

import hou


class CreateUSD(plugin.HoudiniCreator):
    """Universal Scene Description"""

@@ -13,7 +15,6 @@ class CreateUSD(plugin.HoudiniCreator):
    enabled = False

    def create(self, subset_name, instance_data, pre_create_data):
        import hou  # noqa

        instance_data.pop("active", None)
        instance_data.update({"node_type": "usd"})

@@ -43,3 +44,9 @@ class CreateUSD(plugin.HoudiniCreator):
            "id",
        ]
        self.lock_parameters(instance_node, to_lock)

    def get_network_categories(self):
        return [
            hou.ropNodeTypeCategory(),
            hou.lopNodeTypeCategory()
        ]
@@ -3,6 +3,8 @@
from openpype.hosts.houdini.api import plugin
from openpype.pipeline import CreatedInstance

import hou


class CreateVDBCache(plugin.HoudiniCreator):
    """OpenVDB from Geometry ROP"""

@@ -34,3 +36,9 @@ class CreateVDBCache(plugin.HoudiniCreator):
            parms["soppath"] = self.selected_nodes[0].path()

        instance_node.setParms(parms)

    def get_network_categories(self):
        return [
            hou.ropNodeTypeCategory(),
            hou.sopNodeTypeCategory()
        ]
@@ -14,7 +14,7 @@ class CreateWorkfile(plugin.HoudiniCreatorBase, AutoCreator):
    identifier = "io.openpype.creators.houdini.workfile"
    label = "Workfile"
    family = "workfile"
    icon = "document"
    icon = "fa5.file"

    default_variant = "Main"
@@ -104,3 +104,6 @@ class AbcLoader(load.LoaderPlugin):

        node = container["node"]
        node.destroy()

    def switch(self, container, representation):
        self.update(container, representation)

@@ -73,3 +73,6 @@ class AbcArchiveLoader(load.LoaderPlugin):

        node = container["node"]
        node.destroy()

    def switch(self, container, representation):
        self.update(container, representation)

@@ -106,3 +106,6 @@ class BgeoLoader(load.LoaderPlugin):

        node = container["node"]
        node.destroy()

    def switch(self, container, representation):
        self.update(container, representation)

@@ -192,3 +192,6 @@ class CameraLoader(load.LoaderPlugin):

        new_node.moveToGoodPosition()
        return new_node

    def switch(self, container, representation):
        self.update(container, representation)

@@ -125,3 +125,6 @@ class ImageLoader(load.LoaderPlugin):
        prefix, padding, suffix = first_fname.rsplit(".", 2)
        fname = ".".join([prefix, "$F{}".format(len(padding)), suffix])
        return os.path.join(root, fname).replace("\\", "/")

    def switch(self, container, representation):
        self.update(container, representation)

@@ -79,3 +79,6 @@ class USDSublayerLoader(load.LoaderPlugin):

        node = container["node"]
        node.destroy()

    def switch(self, container, representation):
        self.update(container, representation)

@@ -79,3 +79,6 @@ class USDReferenceLoader(load.LoaderPlugin):

        node = container["node"]
        node.destroy()

    def switch(self, container, representation):
        self.update(container, representation)

@@ -102,3 +102,6 @@ class VdbLoader(load.LoaderPlugin):

        node = container["node"]
        node.destroy()

    def switch(self, container, representation):
        self.update(container, representation)
@@ -4,15 +4,14 @@ import hou
import pyblish.api


class CollectHoudiniCurrentFile(pyblish.api.InstancePlugin):
class CollectHoudiniCurrentFile(pyblish.api.ContextPlugin):
    """Inject the current working file into context"""

    order = pyblish.api.CollectorOrder - 0.01
    order = pyblish.api.CollectorOrder - 0.1
    label = "Houdini Current File"
    hosts = ["houdini"]
    families = ["workfile"]

    def process(self, instance):
    def process(self, context):
        """Inject the current working file"""

        current_file = hou.hipFile.path()

@@ -34,26 +33,5 @@ class CollectHoudiniCurrentFile(pyblish.api.InstancePlugin):
                "saved correctly."
            )

        instance.context.data["currentFile"] = current_file

        folder, file = os.path.split(current_file)
        filename, ext = os.path.splitext(file)

        instance.data.update({
            "setMembers": [current_file],
            "frameStart": instance.context.data['frameStart'],
            "frameEnd": instance.context.data['frameEnd'],
            "handleStart": instance.context.data['handleStart'],
            "handleEnd": instance.context.data['handleEnd']
        })

        instance.data['representations'] = [{
            'name': ext.lstrip("."),
            'ext': ext.lstrip("."),
            'files': file,
            "stagingDir": folder,
        }]

        self.log.info('Collected instance: {}'.format(file))
        self.log.info('Scene path: {}'.format(current_file))
        self.log.info('staging Dir: {}'.format(folder))
        context.data["currentFile"] = current_file
        self.log.info('Current workfile path: {}'.format(current_file))
@@ -14,7 +14,7 @@ class CollectFrames(pyblish.api.InstancePlugin):

    order = pyblish.api.CollectorOrder
    label = "Collect Frames"
    families = ["vdbcache", "imagesequence", "ass", "redshiftproxy"]
    families = ["vdbcache", "imagesequence", "ass", "redshiftproxy", "review"]

    def process(self, instance):
@@ -0,0 +1,55 @@
import hou
import pyblish.api


class CollectHoudiniReviewData(pyblish.api.InstancePlugin):
    """Collect Review Data."""

    label = "Collect Review Data"
    order = pyblish.api.CollectorOrder + 0.1
    hosts = ["houdini"]
    families = ["review"]

    def process(self, instance):

        # This fixes the burnin having the incorrect start/end timestamps
        # because without this it would take it from the context instead
        # which isn't the actual frame range that this instance renders.
        instance.data["handleStart"] = 0
        instance.data["handleEnd"] = 0
        instance.data["fps"] = instance.context.data["fps"]

        # Enable ftrack functionality
        instance.data.setdefault("families", []).append('ftrack')

        # Get the camera from the rop node to collect the focal length
        ropnode_path = instance.data["instance_node"]
        ropnode = hou.node(ropnode_path)

        camera_path = ropnode.parm("camera").eval()
        camera_node = hou.node(camera_path)
        if not camera_node:
            self.log.warning("No valid camera node found on review node: "
                             "{}".format(camera_path))
            return

        # Collect focal length.
        focal_length_parm = camera_node.parm("focal")
        if not focal_length_parm:
            self.log.warning("No 'focal' (focal length) parameter found on "
                             "camera: {}".format(camera_path))
            return

        if focal_length_parm.isTimeDependent():
            start = instance.data["frameStart"]
            end = instance.data["frameEnd"] + 1
            focal_length = [
                focal_length_parm.evalAsFloatAtFrame(t)
                for t in range(int(start), int(end))
            ]
        else:
            focal_length = focal_length_parm.evalAsFloat()

        # Store focal length in `burninDataMembers`
        burnin_members = instance.data.setdefault("burninDataMembers", {})
        burnin_members["focalLength"] = focal_length

36
openpype/hosts/houdini/plugins/publish/collect_workfile.py
Normal file
@@ -0,0 +1,36 @@
import os

import pyblish.api


class CollectWorkfile(pyblish.api.InstancePlugin):
    """Inject workfile representation into instance"""

    order = pyblish.api.CollectorOrder - 0.01
    label = "Houdini Workfile Data"
    hosts = ["houdini"]
    families = ["workfile"]

    def process(self, instance):

        current_file = instance.context.data["currentFile"]
        folder, file = os.path.split(current_file)
        filename, ext = os.path.splitext(file)

        instance.data.update({
            "setMembers": [current_file],
            "frameStart": instance.context.data['frameStart'],
            "frameEnd": instance.context.data['frameEnd'],
            "handleStart": instance.context.data['handleStart'],
            "handleEnd": instance.context.data['handleEnd']
        })

        instance.data['representations'] = [{
            'name': ext.lstrip("."),
            'ext': ext.lstrip("."),
            'files': file,
            "stagingDir": folder,
        }]

        self.log.info('Collected instance: {}'.format(file))
        self.log.info('staging Dir: {}'.format(folder))

51
openpype/hosts/houdini/plugins/publish/extract_opengl.py
Normal file
@@ -0,0 +1,51 @@
import os

import pyblish.api

from openpype.pipeline import publish
from openpype.hosts.houdini.api.lib import render_rop

import hou


class ExtractOpenGL(publish.Extractor):

    order = pyblish.api.ExtractorOrder - 0.01
    label = "Extract OpenGL"
    families = ["review"]
    hosts = ["houdini"]

    def process(self, instance):
        ropnode = hou.node(instance.data.get("instance_node"))

        output = ropnode.evalParm("picture")
        staging_dir = os.path.normpath(os.path.dirname(output))
        instance.data["stagingDir"] = staging_dir
        file_name = os.path.basename(output)

        self.log.info("Extracting '%s' to '%s'" % (file_name,
                                                   staging_dir))

        render_rop(ropnode)

        output = instance.data["frames"]

        tags = ["review"]
        if not instance.data.get("keepImages"):
            tags.append("delete")

        representation = {
            "name": instance.data["imageFormat"],
            "ext": instance.data["imageFormat"],
            "files": output,
            "stagingDir": staging_dir,
            "frameStart": instance.data["frameStart"],
            "frameEnd": instance.data["frameEnd"],
            "tags": tags,
            "preview": True,
            "camera_name": instance.data.get("review_camera")
        }

        if "representations" not in instance.data:
            instance.data["representations"] = []
        instance.data["representations"].append(representation)
@@ -1,21 +0,0 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
    <error id="main">
        <title>Scene setting</title>
        <description>
            ## Invalid input node

            VDB input must have the same number of VDBs, points, primitives and vertices as output.

        </description>
        <detail>
            ### __Detailed Info__ (optional)

            A VDB is an inherited type of Prim, holds the following data:
            - Primitives: 1
            - Points: 1
            - Vertices: 1
            - VDBs: 1
        </detail>
    </error>
</root>
@@ -0,0 +1,28 @@
<?xml version="1.0" encoding="UTF-8"?>
<root>
    <error id="main">
        <title>Invalid VDB</title>
        <description>
            ## Invalid VDB output

            All primitives of the output geometry must be VDBs, no other primitive
            types are allowed. That means that regardless of the amount of VDBs in the
            geometry it will have an equal amount of VDBs, points, primitives and
            vertices since each VDB primitive is one point, one vertex and one VDB.

            This validation only checks the geometry on the first frame of the export
            frame range.

        </description>
        <detail>
            ### Detailed Info

            ROP node `{rop_path}` is set to export SOP path `{sop_path}`.

            {message}

        </detail>
    </error>
</root>
@@ -0,0 +1,75 @@
# -*- coding: utf-8 -*-
import pyblish.api
from openpype.pipeline import PublishValidationError
import hou


class ValidateSceneReview(pyblish.api.InstancePlugin):
    """Validate some scene settings before publishing the review:
    1. Scene Path
    2. Resolution
    """

    order = pyblish.api.ValidatorOrder
    families = ["review"]
    hosts = ["houdini"]
    label = "Scene Setting for review"

    def process(self, instance):

        report = []
        instance_node = hou.node(instance.data.get("instance_node"))

        invalid = self.get_invalid_scene_path(instance_node)
        if invalid:
            report.append(invalid)

        invalid = self.get_invalid_camera_path(instance_node)
        if invalid:
            report.append(invalid)

        invalid = self.get_invalid_resolution(instance_node)
        if invalid:
            report.extend(invalid)

        if report:
            raise PublishValidationError(
                "\n\n".join(report),
                title=self.label)

    def get_invalid_scene_path(self, rop_node):
        scene_path_parm = rop_node.parm("scenepath")
        scene_path_node = scene_path_parm.evalAsNode()
        if not scene_path_node:
            path = scene_path_parm.evalAsString()
            return "Scene path does not exist: '{}'".format(path)

    def get_invalid_camera_path(self, rop_node):
        camera_path_parm = rop_node.parm("camera")
        camera_node = camera_path_parm.evalAsNode()
        path = camera_path_parm.evalAsString()
        if not camera_node:
            return "Camera path does not exist: '{}'".format(path)
        type_name = camera_node.type().name()
        if type_name != "cam":
            return "Camera path is not a camera: '{}' (type: {})".format(
                path, type_name
            )

    def get_invalid_resolution(self, rop_node):

        # The resolution setting is only used when Override Camera Resolution
        # is enabled. So we skip validation if it is disabled.
        override = rop_node.parm("tres").eval()
        if not override:
            return

        invalid = []
        res_width = rop_node.parm("res1").eval()
        res_height = rop_node.parm("res2").eval()
        if res_width == 0:
            invalid.append("Override Resolution width is set to zero.")
        if res_height == 0:
            invalid.append("Override Resolution height is set to zero.")

        return invalid
@@ -1,52 +0,0 @@
# -*- coding: utf-8 -*-
import pyblish.api
from openpype.pipeline import (
    PublishValidationError
)


class ValidateVDBInputNode(pyblish.api.InstancePlugin):
    """Validate that the node connected to the output node is of type VDB.

    Regardless of the amount of VDBs create the output will need to have an
    equal amount of VDBs, points, primitives and vertices

    A VDB is an inherited type of Prim, holds the following data:
        - Primitives: 1
        - Points: 1
        - Vertices: 1
        - VDBs: 1

    """

    order = pyblish.api.ValidatorOrder + 0.1
    families = ["vdbcache"]
    hosts = ["houdini"]
    label = "Validate Input Node (VDB)"

    def process(self, instance):
        invalid = self.get_invalid(instance)
        if invalid:
            raise PublishValidationError(
                self,
                "Node connected to the output node is not of type VDB",
                title=self.label
            )

    @classmethod
    def get_invalid(cls, instance):

        node = instance.data["output_node"]

        prims = node.geometry().prims()
        nr_of_prims = len(prims)

        nr_of_points = len(node.geometry().points())
        if nr_of_points != nr_of_prims:
            cls.log.error("The number of primitives and points do not match")
            return [instance]

        for prim in prims:
            if prim.numVertices() != 1:
                cls.log.error("Found primitive with more than 1 vertex!")
                return [instance]
@@ -1,14 +1,73 @@
# -*- coding: utf-8 -*-
import contextlib

import pyblish.api
import hou
from openpype.pipeline import PublishValidationError

from openpype.pipeline import PublishXmlValidationError
from openpype.hosts.houdini.api.action import SelectInvalidAction


def group_consecutive_numbers(nums):
    """
    Args:
        nums (list): List of sorted integer numbers.

    Yields:
        str: Group ranges as {start}-{end} if more than one number in the range
            else it yields {end}

    """
    start = None
    end = None

    def _result(a, b):
        if a == b:
            return "{}".format(a)
        else:
            return "{}-{}".format(a, b)

    for num in nums:
        if start is None:
            start = num
            end = num
        elif num == end + 1:
            end = num
        else:
            yield _result(start, end)
            start = num
            end = num
    if start is not None:
        yield _result(start, end)


@contextlib.contextmanager
def update_mode_context(mode):
    original = hou.updateModeSetting()
    try:
        hou.setUpdateMode(mode)
        yield
    finally:
        hou.setUpdateMode(original)


def get_geometry_at_frame(sop_node, frame, force=True):
    """Return geometry at frame but force a cooked value."""
    with update_mode_context(hou.updateMode.AutoUpdate):
        sop_node.cook(force=force, frame_range=(frame, frame))
    return sop_node.geometryAtFrame(frame)


class ValidateVDBOutputNode(pyblish.api.InstancePlugin):
    """Validate that the node connected to the output node is of type VDB.

    Regardless of the amount of VDBs create the output will need to have an
    equal amount of VDBs, points, primitives and vertices
    All primitives of the output geometry must be VDBs, no other primitive
    types are allowed. That means that regardless of the amount of VDBs in the
    geometry it will have an equal amount of VDBs, points, primitives and
    vertices since each VDB primitive is one point, one vertex and one VDB.

    This validation only checks the geometry on the first frame of the export
    frame range for optimization purposes.

    A VDB is an inherited type of Prim, holds the following data:
        - Primitives: 1

@@ -22,54 +81,95 @@ class ValidateVDBOutputNode(pyblish.api.InstancePlugin):
    families = ["vdbcache"]
    hosts = ["houdini"]
    label = "Validate Output Node (VDB)"
    actions = [SelectInvalidAction]

    def process(self, instance):
        invalid = self.get_invalid(instance)
        if invalid:
            raise PublishValidationError(
                "Node connected to the output node is not" " of type VDB!",
                title=self.label
        invalid_nodes, message = self.get_invalid_with_message(instance)
        if invalid_nodes:

            # instance_node is str, but output_node is hou.Node so we convert
            output = instance.data.get("output_node")
            output_path = output.path() if output else None

            raise PublishXmlValidationError(
                self,
                "Invalid VDB content: {}".format(message),
                formatting_data={
                    "message": message,
                    "rop_path": instance.data.get("instance_node"),
                    "sop_path": output_path
                }
            )

    @classmethod
    def get_invalid(cls, instance):
    def get_invalid_with_message(cls, instance):

        node = instance.data["output_node"]
        node = instance.data.get("output_node")
        if node is None:
            cls.log.error(
            instance_node = instance.data.get("instance_node")
            error = (
                "SOP path is not correctly set on "
                "ROP node '%s'." % instance.data.get("instance_node")
                "ROP node `{}`.".format(instance_node)
            )
            return [instance]
            return [hou.node(instance_node), error]

        frame = instance.data.get("frameStart", 0)
        geometry = node.geometryAtFrame(frame)
        geometry = get_geometry_at_frame(node, frame)
        if geometry is None:
            # No geometry data on this node, maybe the node hasn't cooked?
            cls.log.error(
                "SOP node has no geometry data. "
                "Is it cooked? %s" % node.path()
            error = (
                "SOP node `{}` has no geometry data. "
                "Was it unable to cook?".format(node.path())
            )
            return [node]
            return [node, error]

        prims = geometry.prims()
        nr_of_prims = len(prims)
        num_prims = geometry.intrinsicValue("primitivecount")
        num_points = geometry.intrinsicValue("pointcount")
        if num_prims == 0 and num_points == 0:
            # Since we are only checking the first frame it doesn't mean there
            # won't be VDB prims in a few frames. As such we'll assume for now
            # the user knows what he or she is doing
            cls.log.warning(
                "SOP node `{}` has no primitives on start frame {}. "
                "Validation is skipped and it is assumed elsewhere in the "
                "frame range VDB prims and only VDB prims will exist."
                "".format(node.path(), int(frame))
            )
            return [None, None]

        # All primitives must be hou.VDB
        invalid_prim = False
        for prim in prims:
            if not isinstance(prim, hou.VDB):
                cls.log.error("Found non-VDB primitive: %s" % prim)
                invalid_prim = True
        if invalid_prim:
            return [instance]
        num_vdb_prims = geometry.countPrimType(hou.primType.VDB)
        cls.log.debug("Detected {} VDB primitives".format(num_vdb_prims))
        if num_prims != num_vdb_prims:
            # There's at least one primitive that is not a VDB.
            # Search them and report them to the artist.
            prims = geometry.prims()
            invalid_prims = [prim for prim in prims
                             if not isinstance(prim, hou.VDB)]
            if invalid_prims:
                # Log prim numbers as consecutive ranges so logging isn't very
                # slow for large number of primitives
                error = (
                    "Found non-VDB primitives for `{}`. "
                    "Primitive indices {} are not VDB primitives.".format(
                        node.path(),
                        ", ".join(group_consecutive_numbers(
                            prim.number() for prim in invalid_prims
                        ))
                    )
                )
                return [node, error]

        nr_of_points = len(geometry.points())
        if nr_of_points != nr_of_prims:
            cls.log.error("The number of primitives and points do not match")
            return [instance]
        if num_points != num_vdb_prims:
            # We have points unrelated to the VDB primitives.
            error = (
                "The number of primitives and points do not match in '{}'. "
                "This likely means you have unconnected points, which we do "
                "not allow in the VDB output.".format(node.path()))
            return [node, error]

        for prim in prims:
            if prim.numVertices() != 1:
                cls.log.error("Found primitive with more than 1 vertex!")
                return [instance]
        return [None, None]

    @classmethod
    def get_invalid(cls, instance):
        nodes, _ = cls.get_invalid_with_message(instance)
        return nodes
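A quick standalone check of group_consecutive_numbers as defined above:

print(list(group_consecutive_numbers([1, 2, 3, 7, 10, 11])))
# -> ['1-3', '7', '10-11']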
@@ -128,14 +128,14 @@ class AvalonURIOutputProcessor(base.OutputProcessorBase):
        if not asset_doc:
            raise RuntimeError("Invalid asset name: '%s'" % asset)

        formatted_anatomy = anatomy.format({
        template_obj = anatomy.templates_obj["publish"]["path"]
        path = template_obj.format_strict({
            "project": PROJECT,
            "asset": asset_doc["name"],
            "subset": subset,
            "representation": ext,
            "version": 0  # stub version zero
        })
        path = formatted_anatomy["publish"]["path"]

        # Remove the version folder
        subset_folder = os.path.dirname(os.path.dirname(path))
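The point of this hunk is the switch to `format_strict`; assuming OpenPype's template API, the strict variant raises as soon as a template key is missing instead of returning a partially solved path, so bad fill data fails here rather than producing a broken publish path:

# Hedged illustration; the data keys mirror the call above, values are made up.
data = {"project": "demo", "asset": "sh010", "subset": "usdAsset",
        "representation": "usd", "version": 0}
# template_obj.format(data)         -> may return a partially solved result
# template_obj.format_strict(data)  -> raises when any key is missing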
24
openpype/hosts/max/hooks/force_startup_script.py
Normal file

@@ -0,0 +1,24 @@
# -*- coding: utf-8 -*-
"""Pre-launch to force 3ds max startup script."""
from openpype.lib import PreLaunchHook
import os


class ForceStartupScript(PreLaunchHook):
    """Inject OpenPype environment to 3ds max.

    Note that this works in combination with the 3dsmax startup script that
    is translating it back to PYTHONPATH for cases when 3dsmax drops PYTHONPATH
    environment.

    Hook `GlobalHostDataHook` must be executed before this hook.
    """
    app_groups = ["3dsmax"]
    order = 11

    def execute(self):
        startup_args = [
            "-U",
            "MAXScript",
            f"{os.getenv('OPENPYPE_ROOT')}\\openpype\\hosts\\max\\startup\\startup.ms"]  # noqa
        self.launch_context.launch_args.append(startup_args)
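For reference, the appended arguments amount to a startup command like the following (install path illustrative); `-U MAXScript` asks 3ds Max to run the given script on launch:

3dsmax.exe -U MAXScript C:\OpenPype\openpype\hosts\max\startup\startup.ms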
28
openpype/hosts/max/plugins/create/create_model.py
Normal file

@@ -0,0 +1,28 @@
# -*- coding: utf-8 -*-
"""Creator plugin for model."""
from openpype.hosts.max.api import plugin
from openpype.pipeline import CreatedInstance


class CreateModel(plugin.MaxCreator):
    identifier = "io.openpype.creators.max.model"
    label = "Model"
    family = "model"
    icon = "gear"

    def create(self, subset_name, instance_data, pre_create_data):
        from pymxs import runtime as rt
        instance = super(CreateModel, self).create(
            subset_name,
            instance_data,
            pre_create_data)  # type: CreatedInstance
        container = rt.getNodeByName(instance.data.get("instance_node"))
        # TODO: Disable "Add to Containers?" Panel
        # parent the selected objects into the container
        sel_obj = None
        if self.selected_nodes:
            sel_obj = list(self.selected_nodes)
            for obj in sel_obj:
                obj.parent = container
        # for additional work on the node:
        # instance_node = rt.getNodeByName(instance.get("instance_node"))
@@ -10,7 +10,9 @@ class MaxSceneLoader(load.LoaderPlugin):
    """Max Scene Loader"""

    families = ["camera",
                "maxScene"]
                "maxScene",
                "model"]

    representations = ["max"]
    order = -8
    icon = "code-fork"
109
openpype/hosts/max/plugins/load/load_model.py
Normal file

@@ -0,0 +1,109 @@

import os
from openpype.pipeline import (
    load, get_representation_path
)
from openpype.hosts.max.api.pipeline import containerise
from openpype.hosts.max.api import lib
from openpype.hosts.max.api.lib import maintained_selection


class ModelAbcLoader(load.LoaderPlugin):
    """Loading model with the Alembic loader."""

    families = ["model"]
    label = "Load Model(Alembic)"
    representations = ["abc"]
    order = -10
    icon = "code-fork"
    color = "orange"

    def load(self, context, name=None, namespace=None, data=None):
        from pymxs import runtime as rt

        file_path = os.path.normpath(self.fname)

        abc_before = {
            c for c in rt.rootNode.Children
            if rt.classOf(c) == rt.AlembicContainer
        }

        abc_import_cmd = (f"""
AlembicImport.ImportToRoot = false
AlembicImport.CustomAttributes = true
AlembicImport.UVs = true
AlembicImport.VertexColors = true

importFile @"{file_path}" #noPrompt
        """)

        self.log.debug(f"Executing command: {abc_import_cmd}")
        rt.execute(abc_import_cmd)

        abc_after = {
            c for c in rt.rootNode.Children
            if rt.classOf(c) == rt.AlembicContainer
        }

        # This should yield new AlembicContainer node
        abc_containers = abc_after.difference(abc_before)

        if len(abc_containers) != 1:
            self.log.error("Something failed when loading.")

        abc_container = abc_containers.pop()

        return containerise(
            name, [abc_container], context, loader=self.__class__.__name__)

    def update(self, container, representation):
        from pymxs import runtime as rt
        path = get_representation_path(representation)
        node = rt.getNodeByName(container["instance_node"])
        rt.select(node.Children)

        for alembic in rt.selection:
            abc = rt.getNodeByName(alembic.name)
            rt.select(abc.Children)
            for abc_con in rt.selection:
                container = rt.getNodeByName(abc_con.name)
                container.source = path
                rt.select(container.Children)
                for abc_obj in rt.selection:
                    alembic_obj = rt.getNodeByName(abc_obj.name)
                    alembic_obj.source = path

        with maintained_selection():
            rt.select(node)

        lib.imprint(container["instance_node"], {
            "representation": str(representation["_id"])
        })

    def switch(self, container, representation):
        self.update(container, representation)

    def remove(self, container):
        from pymxs import runtime as rt

        node = rt.getNodeByName(container["instance_node"])
        rt.delete(node)

    @staticmethod
    def get_container_children(parent, type_name):
        from pymxs import runtime as rt

        def list_children(node):
            children = []
            for c in node.Children:
                children.append(c)
                children += list_children(c)
            return children

        filtered = []
        for child in list_children(parent):
            class_type = str(rt.classOf(child.baseObject))
            if class_type == type_name:
                filtered.append(child)

        return filtered
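The loader above finds the imported container by snapshotting root children before and after the import; a minimal standalone sketch of that set-difference pattern:

def find_new_nodes(before, after):
    """Return items present in `after` but not in `before` (illustrative)."""
    return set(after) - set(before)

assert find_new_nodes({"A", "B"}, {"A", "B", "C"}) == {"C"}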
77
openpype/hosts/max/plugins/load/load_model_fbx.py
Normal file
77
openpype/hosts/max/plugins/load/load_model_fbx.py
Normal file
|
|
@@ -0,0 +1,77 @@
+import os
+from openpype.pipeline import (
+    load,
+    get_representation_path
+)
+from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api import lib
+from openpype.hosts.max.api.lib import maintained_selection
+
+
+class FbxModelLoader(load.LoaderPlugin):
+    """Fbx Model Loader"""
+
+    families = ["model"]
+    representations = ["fbx"]
+    order = -9
+    icon = "code-fork"
+    color = "white"
+
+    def load(self, context, name=None, namespace=None, data=None):
+        from pymxs import runtime as rt
+
+        filepath = os.path.normpath(self.fname)
+
+        fbx_import_cmd = (
+            f"""
+
+FBXImporterSetParam "Animation" false
+FBXImporterSetParam "Cameras" false
+FBXImporterSetParam "AxisConversionMethod" true
+FbxExporterSetParam "UpAxis" "Y"
+FbxExporterSetParam "Preserveinstances" true
+
+importFile @"{filepath}" #noPrompt using:FBXIMP
+""")
+
+        self.log.debug(f"Executing command: {fbx_import_cmd}")
+        rt.execute(fbx_import_cmd)
+
+        asset = rt.getNodeByName(f"{name}")
+
+        return containerise(
+            name, [asset], context, loader=self.__class__.__name__)
+
+    def update(self, container, representation):
+        from pymxs import runtime as rt
+
+        path = get_representation_path(representation)
+        node = rt.getNodeByName(container["instance_node"])
+        rt.select(node.Children)
+        fbx_reimport_cmd = (
+            f"""
+FBXImporterSetParam "Animation" false
+FBXImporterSetParam "Cameras" false
+FBXImporterSetParam "AxisConversionMethod" true
+FbxExporterSetParam "UpAxis" "Y"
+FbxExporterSetParam "Preserveinstances" true
+
+importFile @"{path}" #noPrompt using:FBXIMP
+""")
+        rt.execute(fbx_reimport_cmd)
+
+        with maintained_selection():
+            rt.select(node)
+
+        lib.imprint(container["instance_node"], {
+            "representation": str(representation["_id"])
+        })
+
+    def switch(self, container, representation):
+        self.update(container, representation)
+
+    def remove(self, container):
+        from pymxs import runtime as rt
+
+        node = rt.getNodeByName(container["instance_node"])
+        rt.delete(node)
68  openpype/hosts/max/plugins/load/load_model_obj.py  Normal file
@@ -0,0 +1,68 @@
+import os
+from openpype.pipeline import (
+    load,
+    get_representation_path
+)
+from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api import lib
+from openpype.hosts.max.api.lib import maintained_selection
+
+
+class ObjLoader(load.LoaderPlugin):
+    """Obj Loader"""
+
+    families = ["model"]
+    representations = ["obj"]
+    order = -9
+    icon = "code-fork"
+    color = "white"
+
+    def load(self, context, name=None, namespace=None, data=None):
+        from pymxs import runtime as rt
+
+        filepath = os.path.normpath(self.fname)
+        self.log.debug(f"Executing command to import..")
+
+        rt.execute(f'importFile @"{filepath}" #noPrompt using:ObjImp')
+        # create "missing" container for obj import
+        container = rt.container()
+        container.name = f"{name}"
+
+        # get current selection
+        for selection in rt.getCurrentSelection():
+            selection.Parent = container
+
+        asset = rt.getNodeByName(f"{name}")
+
+        return containerise(
+            name, [asset], context, loader=self.__class__.__name__)
+
+    def update(self, container, representation):
+        from pymxs import runtime as rt
+
+        path = get_representation_path(representation)
+        node_name = container["instance_node"]
+        node = rt.getNodeByName(node_name)
+
+        instance_name, _ = node_name.split("_")
+        container = rt.getNodeByName(instance_name)
+        for n in container.Children:
+            rt.delete(n)
+
+        rt.execute(f'importFile @"{path}" #noPrompt using:ObjImp')
+        # get current selection
+        for selection in rt.getCurrentSelection():
+            selection.Parent = container
+
+        with maintained_selection():
+            rt.select(node)
+
+        lib.imprint(node_name, {
+            "representation": str(representation["_id"])
+        })
+
+    def remove(self, container):
+        from pymxs import runtime as rt
+
+        node = rt.getNodeByName(container["instance_node"])
+        rt.delete(node)
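OBJ import has no native container node, so the loader synthesizes one and reparents whatever the importer left selected. A rough stand-alone sketch of that grouping step, assuming a trivial hypothetical Node class in place of pymxs nodes:

# Sketch: group freshly imported nodes under a synthetic container node.
class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

def group_under_container(imported_nodes, container_name):
    container = Node(container_name)          # the "missing" container
    for node in imported_nodes:
        node.parent = container               # mirrors `selection.Parent = container`
    return container

meshes = [Node("plane001"), Node("plane002")]
grp = group_under_container(meshes, "modelMain")
assert all(m.parent is grp for m in meshes)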
78  openpype/hosts/max/plugins/load/load_model_usd.py  Normal file
@@ -0,0 +1,78 @@
+import os
+from openpype.pipeline import (
+    load, get_representation_path
+)
+from openpype.hosts.max.api.pipeline import containerise
+from openpype.hosts.max.api import lib
+from openpype.hosts.max.api.lib import maintained_selection
+
+
+class ModelUSDLoader(load.LoaderPlugin):
+    """Loading model with the USD loader."""
+
+    families = ["model"]
+    label = "Load Model(USD)"
+    representations = ["usda"]
+    order = -10
+    icon = "code-fork"
+    color = "orange"
+
+    def load(self, context, name=None, namespace=None, data=None):
+        from pymxs import runtime as rt
+        # asset_filepath
+        filepath = os.path.normpath(self.fname)
+        import_options = rt.USDImporter.CreateOptions()
+        base_filename = os.path.basename(filepath)
+        filename, ext = os.path.splitext(base_filename)
+        log_filepath = filepath.replace(ext, "txt")
+
+        rt.LogPath = log_filepath
+        rt.LogLevel = rt.name('info')
+        rt.USDImporter.importFile(filepath,
+                                  importOptions=import_options)
+
+        asset = rt.getNodeByName(f"{name}")
+
+        return containerise(
+            name, [asset], context, loader=self.__class__.__name__)
+
+    def update(self, container, representation):
+        from pymxs import runtime as rt
+
+        path = get_representation_path(representation)
+        node_name = container["instance_node"]
+        node = rt.getNodeByName(node_name)
+        for n in node.Children:
+            for r in n.Children:
+                rt.delete(r)
+            rt.delete(n)
+        instance_name, _ = node_name.split("_")
+
+        import_options = rt.USDImporter.CreateOptions()
+        base_filename = os.path.basename(path)
+        _, ext = os.path.splitext(base_filename)
+        log_filepath = path.replace(ext, "txt")
+
+        rt.LogPath = log_filepath
+        rt.LogLevel = rt.name('info')
+        rt.USDImporter.importFile(path,
+                                  importOptions=import_options)
+
+        asset = rt.getNodeByName(f"{instance_name}")
+        asset.Parent = node
+
+        with maintained_selection():
+            rt.select(node)
+
+        lib.imprint(node_name, {
+            "representation": str(representation["_id"])
+        })
+
+    def switch(self, container, representation):
+        self.update(container, representation)
+
+    def remove(self, container):
+        from pymxs import runtime as rt
+
+        node = rt.getNodeByName(container["instance_node"])
+        rt.delete(node)
@@ -15,8 +15,7 @@ from openpype.hosts.max.api import lib
 class AbcLoader(load.LoaderPlugin):
     """Alembic loader."""
 
-    families = ["model",
-                "camera",
+    families = ["camera",
                 "animation",
                 "pointcache"]
     label = "Load Alembic"
@@ -62,6 +62,7 @@ class CollectRender(pyblish.api.InstancePlugin):
             "frameStart": context.data['frameStart'],
             "frameEnd": context.data['frameEnd'],
             "version": version_int,
+            "farm": True
         }
         self.log.info("data: {0}".format(data))
         instance.data.update(data)
@@ -21,7 +21,8 @@ class ExtractMaxSceneRaw(publish.Extractor,
     label = "Extract Max Scene (Raw)"
     hosts = ["max"]
     families = ["camera",
-                "maxScene"]
+                "maxScene",
+                "model"]
     optional = True
 
     def process(self, instance):
74  openpype/hosts/max/plugins/publish/extract_model.py  Normal file
@@ -0,0 +1,74 @@
+import os
+import pyblish.api
+from openpype.pipeline import (
+    publish,
+    OptionalPyblishPluginMixin
+)
+from pymxs import runtime as rt
+from openpype.hosts.max.api import (
+    maintained_selection,
+    get_all_children
+)
+
+
+class ExtractModel(publish.Extractor,
+                   OptionalPyblishPluginMixin):
+    """
+    Extract Geometry in Alembic Format
+    """
+
+    order = pyblish.api.ExtractorOrder - 0.1
+    label = "Extract Geometry (Alembic)"
+    hosts = ["max"]
+    families = ["model"]
+    optional = True
+
+    def process(self, instance):
+        if not self.is_active(instance.data):
+            return
+
+        container = instance.data["instance_node"]
+
+        self.log.info("Extracting Geometry ...")
+
+        stagingdir = self.staging_dir(instance)
+        filename = "{name}.abc".format(**instance.data)
+        filepath = os.path.join(stagingdir, filename)
+
+        # We run the render
+        self.log.info("Writing alembic '%s' to '%s'" % (filename,
+                                                        stagingdir))
+
+        export_cmd = (
+            f"""
+AlembicExport.ArchiveType = #ogawa
+AlembicExport.CoordinateSystem = #maya
+AlembicExport.CustomAttributes = true
+AlembicExport.UVs = true
+AlembicExport.VertexColors = true
+AlembicExport.PreserveInstances = true
+
+exportFile @"{filepath}" #noPrompt selectedOnly:on using:AlembicExport
+
+""")
+
+        self.log.debug(f"Executing command: {export_cmd}")
+
+        with maintained_selection():
+            # select and export
+            rt.select(get_all_children(rt.getNodeByName(container)))
+            rt.execute(export_cmd)
+
+        self.log.info("Performing Extraction ...")
+        if "representations" not in instance.data:
+            instance.data["representations"] = []
+
+        representation = {
+            'name': 'abc',
+            'ext': 'abc',
+            'files': filename,
+            "stagingDir": stagingdir,
+        }
+        instance.data["representations"].append(representation)
+        self.log.info("Extracted instance '%s' to: %s" % (instance.name,
+                                                          filepath))
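Each extractor in this commit ends the same way: it appends a representation dict to instance.data["representations"], which the integrator later picks up. A minimal sketch of that contract in isolation, assuming plain dicts with no pyblish dependency:

# Sketch: the representation contract used by the extractors above.
def register_representation(instance_data, name, ext, files, staging_dir):
    # Lazily create the list, exactly as the plug-ins do.
    representations = instance_data.setdefault("representations", [])
    representations.append({
        "name": name,
        "ext": ext,
        "files": files,           # a single filename or a list of frames
        "stagingDir": staging_dir,
    })

data = {}
register_representation(data, "abc", "abc", "modelMain.abc", "/tmp/staging")
assert data["representations"][0]["ext"] == "abc"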
74  openpype/hosts/max/plugins/publish/extract_model_fbx.py  Normal file
@@ -0,0 +1,74 @@
+import os
+import pyblish.api
+from openpype.pipeline import (
+    publish,
+    OptionalPyblishPluginMixin
+)
+from pymxs import runtime as rt
+from openpype.hosts.max.api import (
+    maintained_selection,
+    get_all_children
+)
+
+
+class ExtractModelFbx(publish.Extractor,
+                      OptionalPyblishPluginMixin):
+    """
+    Extract Geometry in FBX Format
+    """
+
+    order = pyblish.api.ExtractorOrder - 0.05
+    label = "Extract FBX"
+    hosts = ["max"]
+    families = ["model"]
+    optional = True
+
+    def process(self, instance):
+        if not self.is_active(instance.data):
+            return
+
+        container = instance.data["instance_node"]
+
+        self.log.info("Extracting Geometry ...")
+
+        stagingdir = self.staging_dir(instance)
+        filename = "{name}.fbx".format(**instance.data)
+        filepath = os.path.join(stagingdir,
+                                filename)
+        self.log.info("Writing FBX '%s' to '%s'" % (filepath,
+                                                    stagingdir))
+
+        export_fbx_cmd = (
+            f"""
+FBXExporterSetParam "Animation" false
+FBXExporterSetParam "Cameras" false
+FBXExporterSetParam "Lights" false
+FBXExporterSetParam "PointCache" false
+FBXExporterSetParam "AxisConversionMethod" "Animation"
+FbxExporterSetParam "UpAxis" "Y"
+FbxExporterSetParam "Preserveinstances" true
+
+exportFile @"{filepath}" #noPrompt selectedOnly:true using:FBXEXP
+
+""")
+
+        self.log.debug(f"Executing command: {export_fbx_cmd}")
+
+        with maintained_selection():
+            # select and export
+            rt.select(get_all_children(rt.getNodeByName(container)))
+            rt.execute(export_fbx_cmd)
+
+        self.log.info("Performing Extraction ...")
+        if "representations" not in instance.data:
+            instance.data["representations"] = []
+
+        representation = {
+            'name': 'fbx',
+            'ext': 'fbx',
+            'files': filename,
+            "stagingDir": stagingdir,
+        }
+        instance.data["representations"].append(representation)
+        self.log.info("Extracted instance '%s' to: %s" % (instance.name,
+                                                          filepath))
59  openpype/hosts/max/plugins/publish/extract_model_obj.py  Normal file
@@ -0,0 +1,59 @@
+import os
+import pyblish.api
+from openpype.pipeline import (
+    publish,
+    OptionalPyblishPluginMixin
+)
+from pymxs import runtime as rt
+from openpype.hosts.max.api import (
+    maintained_selection,
+    get_all_children
+)
+
+
+class ExtractModelObj(publish.Extractor,
+                      OptionalPyblishPluginMixin):
+    """
+    Extract Geometry in OBJ Format
+    """
+
+    order = pyblish.api.ExtractorOrder - 0.05
+    label = "Extract OBJ"
+    hosts = ["max"]
+    families = ["model"]
+    optional = True
+
+    def process(self, instance):
+        if not self.is_active(instance.data):
+            return
+
+        container = instance.data["instance_node"]
+
+        self.log.info("Extracting Geometry ...")
+
+        stagingdir = self.staging_dir(instance)
+        filename = "{name}.obj".format(**instance.data)
+        filepath = os.path.join(stagingdir,
+                                filename)
+        self.log.info("Writing OBJ '%s' to '%s'" % (filepath,
+                                                    stagingdir))
+
+        with maintained_selection():
+            # select and export
+            rt.select(get_all_children(rt.getNodeByName(container)))
+            rt.execute(f'exportFile @"{filepath}" #noPrompt selectedOnly:true using:ObjExp')  # noqa
+
+        self.log.info("Performing Extraction ...")
+        if "representations" not in instance.data:
+            instance.data["representations"] = []
+
+        representation = {
+            'name': 'obj',
+            'ext': 'obj',
+            'files': filename,
+            "stagingDir": stagingdir,
+        }
+
+        instance.data["representations"].append(representation)
+        self.log.info("Extracted instance '%s' to: %s" % (instance.name,
+                                                          filepath))
114  openpype/hosts/max/plugins/publish/extract_model_usd.py  Normal file
@@ -0,0 +1,114 @@
+import os
+import pyblish.api
+from openpype.pipeline import (
+    publish,
+    OptionalPyblishPluginMixin
+)
+from pymxs import runtime as rt
+from openpype.hosts.max.api import (
+    maintained_selection
+)
+
+
+class ExtractModelUSD(publish.Extractor,
+                      OptionalPyblishPluginMixin):
+    """
+    Extract Geometry in USDA Format
+    """
+
+    order = pyblish.api.ExtractorOrder - 0.05
+    label = "Extract Geometry (USD)"
+    hosts = ["max"]
+    families = ["model"]
+    optional = True
+
+    def process(self, instance):
+        if not self.is_active(instance.data):
+            return
+
+        container = instance.data["instance_node"]
+
+        self.log.info("Extracting Geometry ...")
+
+        stagingdir = self.staging_dir(instance)
+        asset_filename = "{name}.usda".format(**instance.data)
+        asset_filepath = os.path.join(stagingdir,
+                                      asset_filename)
+        self.log.info("Writing USD '%s' to '%s'" % (asset_filepath,
+                                                    stagingdir))
+
+        log_filename = "{name}.txt".format(**instance.data)
+        log_filepath = os.path.join(stagingdir,
+                                    log_filename)
+        self.log.info("Writing log '%s' to '%s'" % (log_filepath,
+                                                    stagingdir))
+
+        # get the nodes which need to be exported
+        export_options = self.get_export_options(log_filepath)
+        with maintained_selection():
+            # select and export
+            node_list = self.get_node_list(container)
+            rt.USDExporter.ExportFile(asset_filepath,
+                                      exportOptions=export_options,
+                                      contentSource=rt.name("selected"),
+                                      nodeList=node_list)
+
+        self.log.info("Performing Extraction ...")
+        if "representations" not in instance.data:
+            instance.data["representations"] = []
+
+        representation = {
+            'name': 'usda',
+            'ext': 'usda',
+            'files': asset_filename,
+            "stagingDir": stagingdir,
+        }
+        instance.data["representations"].append(representation)
+
+        log_representation = {
+            'name': 'txt',
+            'ext': 'txt',
+            'files': log_filename,
+            "stagingDir": stagingdir,
+        }
+        instance.data["representations"].append(log_representation)
+
+        self.log.info("Extracted instance '%s' to: %s" % (instance.name,
+                                                          asset_filepath))
+
+    def get_node_list(self, container):
+        """
+        Get the target nodes which are
+        the children of the container
+        """
+        node_list = []
+
+        container_node = rt.getNodeByName(container)
+        target_node = container_node.Children
+        rt.select(target_node)
+        for sel in rt.selection:
+            node_list.append(sel)
+
+        return node_list
+
+    def get_export_options(self, log_path):
+        """Set Export Options for USD Exporter"""
+
+        export_options = rt.USDExporter.createOptions()
+
+        export_options.Meshes = True
+        export_options.Shapes = False
+        export_options.Lights = False
+        export_options.Cameras = False
+        export_options.Materials = False
+        export_options.MeshFormat = rt.name('fromScene')
+        export_options.FileFormat = rt.name('ascii')
+        export_options.UpAxis = rt.name('y')
+        export_options.LogLevel = rt.name('info')
+        export_options.LogPath = log_path
+        export_options.PreserveEdgeOrientation = True
+        export_options.TimeMode = rt.name('current')
+
+        rt.USDexporter.UIOptions = export_options
+
+        return export_options
@@ -0,0 +1,44 @@
+# -*- coding: utf-8 -*-
+import pyblish.api
+from openpype.pipeline import PublishValidationError
+from pymxs import runtime as rt
+
+
+class ValidateModelContent(pyblish.api.InstancePlugin):
+    """Validates Model instance contents.
+
+    A model instance may only hold either geometry-related
+    object(excluding Shapes) or editable meshes.
+    """
+
+    order = pyblish.api.ValidatorOrder
+    families = ["model"]
+    hosts = ["max"]
+    label = "Model Contents"
+
+    def process(self, instance):
+        invalid = self.get_invalid(instance)
+        if invalid:
+            raise PublishValidationError("Model instance must only include"
+                                         "Geometry and Editable Mesh")
+
+    def get_invalid(self, instance):
+        """
+        Get invalid nodes if the instance is not camera
+        """
+        invalid = list()
+        container = instance.data["instance_node"]
+        self.log.info("Validating look content for "
+                      "{}".format(container))
+
+        con = rt.getNodeByName(container)
+        selection_list = list(con.Children) or rt.getCurrentSelection()
+        for sel in selection_list:
+            if rt.classOf(sel) in rt.Camera.classes:
+                invalid.append(sel)
+            if rt.classOf(sel) in rt.Light.classes:
+                invalid.append(sel)
+            if rt.classOf(sel) in rt.Shape.classes:
+                invalid.append(sel)
+
+        return invalid
36  openpype/hosts/max/plugins/publish/validate_usd_plugin.py  Normal file
@@ -0,0 +1,36 @@
+# -*- coding: utf-8 -*-
+import pyblish.api
+from openpype.pipeline import PublishValidationError
+from pymxs import runtime as rt
+
+
+class ValidateUSDPlugin(pyblish.api.InstancePlugin):
+    """Validates if USD plugin is installed or loaded in Max
+    """
+
+    order = pyblish.api.ValidatorOrder - 0.01
+    families = ["model"]
+    hosts = ["max"]
+    label = "USD Plugin"
+
+    def process(self, instance):
+        plugin_mgr = rt.pluginManager
+        plugin_count = plugin_mgr.pluginDllCount
+        plugin_info = self.get_plugins(plugin_mgr,
+                                       plugin_count)
+        usd_import = "usdimport.dli"
+        if usd_import not in plugin_info:
+            raise PublishValidationError("USD Plugin {}"
+                                         " not found".format(usd_import))
+        usd_export = "usdexport.dle"
+        if usd_export not in plugin_info:
+            raise PublishValidationError("USD Plugin {}"
+                                         " not found".format(usd_export))
+
+    def get_plugins(self, manager, count):
+        plugin_info_list = list()
+        for p in range(1, count + 1):
+            plugin_info = manager.pluginDllName(p)
+            plugin_info_list.append(plugin_info)
+
+        return plugin_info_list
@@ -32,7 +32,17 @@ from openpype.pipeline import (
     load_container,
     registered_host,
 )
-from openpype.pipeline.context_tools import get_current_project_asset
+from openpype.pipeline.create import (
+    legacy_create,
+    get_legacy_creator_by_name,
+)
+from openpype.pipeline.context_tools import (
+    get_current_asset_name,
+    get_current_project_asset,
+    get_current_project_name,
+    get_current_task_name
+)
+from openpype.lib.profiles_filtering import filter_profiles
 
 
 self = sys.modules[__name__]
@@ -112,6 +122,18 @@ FLOAT_FPS = {23.98, 23.976, 29.97, 47.952, 59.94}
 
 RENDERLIKE_INSTANCE_FAMILIES = ["rendering", "vrayscene"]
 
+DISPLAY_LIGHTS_VALUES = [
+    "project_settings", "default", "all", "selected", "flat", "none"
+]
+DISPLAY_LIGHTS_LABELS = [
+    "Use Project Settings",
+    "Default Lighting",
+    "All Lights",
+    "Selected Lights",
+    "Flat Lighting",
+    "No Lights"
+]
+
 
 def get_main_window():
     """Acquire Maya's main window"""
@@ -292,15 +314,20 @@ def collect_animation_data(fps=False):
     """
 
     # get scene values as defaults
-    start = cmds.playbackOptions(query=True, animationStartTime=True)
-    end = cmds.playbackOptions(query=True, animationEndTime=True)
+    frame_start = cmds.playbackOptions(query=True, minTime=True)
+    frame_end = cmds.playbackOptions(query=True, maxTime=True)
+    handle_start = cmds.playbackOptions(query=True, animationStartTime=True)
+    handle_end = cmds.playbackOptions(query=True, animationEndTime=True)
+
+    handle_start = frame_start - handle_start
+    handle_end = handle_end - frame_end
 
     # build attributes
     data = OrderedDict()
-    data["frameStart"] = start
-    data["frameEnd"] = end
-    data["handleStart"] = 0
-    data["handleEnd"] = 0
+    data["frameStart"] = frame_start
+    data["frameEnd"] = frame_end
+    data["handleStart"] = handle_start
+    data["handleEnd"] = handle_end
     data["step"] = 1.0
 
     if fps:
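The new collect_animation_data derives handle sizes from the gap between the outer timeline range (animationStartTime/animationEndTime) and the inner min/max range. A worked example of just that arithmetic, with assumed playback values:

# Sketch of the handle math, values assumed for illustration.
animation_start, animation_end = 990.0, 1110.0   # outer timeline range
frame_start, frame_end = 1001.0, 1100.0          # inner (min/max) range

handle_start = frame_start - animation_start     # 11.0
handle_end = animation_end - frame_end           # 10.0
assert (handle_start, handle_end) == (11.0, 10.0)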
@@ -2130,12 +2157,22 @@ def set_scene_resolution(width, height, pixelAspect):
     cmds.setAttr("%s.pixelAspect" % control_node, pixelAspect)
 
 
-def get_frame_range():
-    """Get the current assets frame range and handles."""
+def get_frame_range(include_animation_range=False):
+    """Get the current assets frame range and handles.
+
+    Args:
+        include_animation_range (bool, optional): Whether to include
+            `animationStart` and `animationEnd` keys to define the outer
+            range of the timeline. It is excluded by default.
+
+    Returns:
+        dict: Asset's expected frame range values.
+
+    """
 
     # Set frame start/end
-    project_name = legacy_io.active_project()
-    asset_name = legacy_io.Session["AVALON_ASSET"]
+    project_name = get_current_project_name()
+    asset_name = get_current_asset_name()
     asset = get_asset_by_name(project_name, asset_name)
 
     frame_start = asset["data"].get("frameStart")
@@ -2148,12 +2185,39 @@ def get_frame_range():
     handle_start = asset["data"].get("handleStart") or 0
     handle_end = asset["data"].get("handleEnd") or 0
 
-    return {
+    frame_range = {
         "frameStart": frame_start,
         "frameEnd": frame_end,
         "handleStart": handle_start,
         "handleEnd": handle_end
     }
+    if include_animation_range:
+        # The animation range values are only included to define whether
+        # the Maya time slider should include the handles or not.
+        # Some usages of this function use the full dictionary to define
+        # instance attributes for which we want to exclude the animation
+        # keys. That is why these are excluded by default.
+        task_name = get_current_task_name()
+        settings = get_project_settings(project_name)
+        include_handles_settings = settings["maya"]["include_handles"]
+        current_task = asset.get("data").get("tasks").get(task_name)
+
+        animation_start = frame_start
+        animation_end = frame_end
+
+        include_handles = include_handles_settings["include_handles_default"]
+        for item in include_handles_settings["per_task_type"]:
+            if current_task["type"] in item["task_type"]:
+                include_handles = item["include_handles"]
+                break
+        if include_handles:
+            animation_start -= int(handle_start)
+            animation_end += int(handle_end)
+
+        frame_range["animationStart"] = animation_start
+        frame_range["animationEnd"] = animation_end
+
+    return frame_range
 
 
 def reset_frame_range(playback=True, render=True, fps=True):
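A hedged usage sketch of the extended get_frame_range(): the dict keys are the ones set above, while the numeric values are assumed asset data. The inner range drives rendering; the optional outer range drives the time slider.

# Sketch: consuming get_frame_range(include_animation_range=True).
frame_range = {
    "frameStart": 1001, "frameEnd": 1100,
    "handleStart": 10, "handleEnd": 10,
    "animationStart": 991, "animationEnd": 1110,
}
render_span = (frame_range["frameStart"], frame_range["frameEnd"])
slider_span = (frame_range["animationStart"], frame_range["animationEnd"])
assert slider_span[0] <= render_span[0] and render_span[1] <= slider_span[1]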
@@ -2166,25 +2230,29 @@ def reset_frame_range(playback=True, render=True, fps=True):
             Defaults to True.
         fps (bool, Optional): Whether to set scene FPS. Defaults to True.
     """
 
     if fps:
         fps = convert_to_maya_fps(
             float(legacy_io.Session.get("AVALON_FPS", 25))
         )
         set_scene_fps(fps)
 
-    frame_range = get_frame_range()
+    frame_range = get_frame_range(include_animation_range=True)
     if not frame_range:
         # No frame range data found for asset
         return
 
-    frame_start = frame_range["frameStart"] - int(frame_range["handleStart"])
-    frame_end = frame_range["frameEnd"] + int(frame_range["handleEnd"])
+    frame_start = frame_range["frameStart"]
+    frame_end = frame_range["frameEnd"]
+    animation_start = frame_range["animationStart"]
+    animation_end = frame_range["animationEnd"]
 
     if playback:
-        cmds.playbackOptions(minTime=frame_start)
-        cmds.playbackOptions(maxTime=frame_end)
-        cmds.playbackOptions(animationStartTime=frame_start)
-        cmds.playbackOptions(animationEndTime=frame_end)
-        cmds.playbackOptions(minTime=frame_start)
-        cmds.playbackOptions(maxTime=frame_end)
+        cmds.playbackOptions(
+            minTime=frame_start,
+            maxTime=frame_end,
+            animationStartTime=animation_start,
+            animationEndTime=animation_end
+        )
         cmds.currentTime(frame_start)
 
     if render:
@@ -3655,7 +3723,17 @@ def get_color_management_preferences():
     # Split view and display from view_transform. view_transform comes in
     # format of "{view} ({display})".
     regex = re.compile(r"^(?P<view>.+) \((?P<display>.+)\)$")
+    if int(cmds.about(version=True)) <= 2020:
+        # view_transform comes in format of "{view} {display}" in 2020.
+        regex = re.compile(r"^(?P<view>.+) (?P<display>.+)$")
+
     match = regex.match(data["view_transform"])
+    if not match:
+        raise ValueError(
+            "Unable to parse view and display from Maya view transform: '{}' "
+            "using regex '{}'".format(data["view_transform"], regex.pattern)
+        )
+
     data.update({
         "display": match.group("display"),
         "view": match.group("view")
@@ -3812,3 +3890,98 @@ def get_all_children(nodes):
         iterator.next()  # noqa: B305
 
     return list(traversed)
+
+
+def get_capture_preset(task_name, task_type, subset, project_settings, log):
+    """Get capture preset for playblasting.
+
+    Logic for transitioning from old style capture preset to new capture preset
+    profiles.
+
+    Args:
+        task_name (str): Task name.
+        take_type (str): Task type.
+        subset (str): Subset name.
+        project_settings (dict): Project settings.
+        log (object): Logging object.
+    """
+    capture_preset = None
+    filtering_criteria = {
+        "hosts": "maya",
+        "families": "review",
+        "task_names": task_name,
+        "task_types": task_type,
+        "subset": subset
+    }
+
+    plugin_settings = project_settings["maya"]["publish"]["ExtractPlayblast"]
+    if plugin_settings["profiles"]:
+        profile = filter_profiles(
+            plugin_settings["profiles"],
+            filtering_criteria,
+            logger=log
+        )
+        capture_preset = profile.get("capture_preset")
+    else:
+        log.warning("No profiles present for Extract Playblast")
+
+    # Backward compatibility for deprecated Extract Playblast settings
+    # without profiles.
+    if capture_preset is None:
+        log.debug(
+            "Falling back to deprecated Extract Playblast capture preset "
+            "because no new style playblast profiles are defined."
+        )
+        capture_preset = plugin_settings["capture_preset"]
+
+    return capture_preset or {}
+
+
+def create_rig_animation_instance(nodes, context, namespace, log=None):
+    """Create an animation publish instance for loaded rigs.
+
+    See the RecreateRigAnimationInstance inventory action on how to use this
+    for loaded rig containers.
+
+    Arguments:
+        nodes (list): Member nodes of the rig instance.
+        context (dict): Representation context of the rig container
+        namespace (str): Namespace of the rig container
+        log (logging.Logger, optional): Logger to log to if provided
+
+    Returns:
+        None
+
+    """
+    output = next((node for node in nodes if
+                   node.endswith("out_SET")), None)
+    controls = next((node for node in nodes if
+                     node.endswith("controls_SET")), None)
+
+    assert output, "No out_SET in rig, this is a bug."
+    assert controls, "No controls_SET in rig, this is a bug."
+
+    # Find the roots amongst the loaded nodes
+    roots = (
+        cmds.ls(nodes, assemblies=True, long=True) or
+        get_highest_in_hierarchy(nodes)
+    )
+    assert roots, "No root nodes in rig, this is a bug."
+
+    asset = legacy_io.Session["AVALON_ASSET"]
+    dependency = str(context["representation"]["_id"])
+
+    if log:
+        log.info("Creating subset: {}".format(namespace))
+
+    # Create the animation instance
+    creator_plugin = get_legacy_creator_by_name("CreateAnimation")
+    with maintained_selection():
+        cmds.select([output, controls] + roots, noExpand=True)
+        legacy_create(
+            creator_plugin,
+            name=namespace,
+            asset=asset,
+            options={"useSelection": True},
+            data={"dependencies": dependency}
+        )
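get_capture_preset is a transition shim: a profile lookup first, the deprecated flat setting second. The fallback shape in isolation, assuming a simplified settings dict and a trivial stand-in for filter_profiles:

# Sketch of the profile-then-legacy fallback, settings layout assumed.
def pick_capture_preset(plugin_settings, match_profile):
    capture_preset = None
    if plugin_settings.get("profiles"):
        profile = match_profile(plugin_settings["profiles"]) or {}
        capture_preset = profile.get("capture_preset")
    if capture_preset is None:
        # Deprecated pre-profiles setting keeps old projects working.
        capture_preset = plugin_settings.get("capture_preset")
    return capture_preset or {}

legacy_only = {"profiles": [], "capture_preset": {"Codec": {}}}
assert pick_capture_preset(legacy_only, lambda p: None) == {"Codec": {}}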
@@ -1,4 +1,5 @@
 import os
+import re
 
 from maya import cmds
 
@@ -12,6 +13,7 @@ from openpype.pipeline import (
     AVALON_CONTAINER_ID,
     Anatomy,
 )
+from openpype.pipeline.load import LoadError
 from openpype.settings import get_project_settings
 from .pipeline import containerise
 from . import lib
@@ -82,6 +84,44 @@ def get_reference_node_parents(ref):
     return parents
 
 
+def get_custom_namespace(custom_namespace):
+    """Return unique namespace.
+
+    The input namespace can contain a single group
+    of '#' number tokens to indicate where the namespace's
+    unique index should go. The amount of tokens defines
+    the zero padding of the number, e.g ### turns into 001.
+
+    Warning: Note that a namespace will always be
+    prefixed with a _ if it starts with a digit
+
+    Example:
+        >>> get_custom_namespace("myspace_##_")
+        # myspace_01_
+        >>> get_custom_namespace("##_myspace")
+        # _01_myspace
+        >>> get_custom_namespace("myspace##")
+        # myspace01
+
+    """
+    split = re.split("([#]+)", custom_namespace, 1)
+
+    if len(split) == 3:
+        base, padding, suffix = split
+        padding = "%0{}d".format(len(padding))
+    else:
+        base = split[0]
+        padding = "%02d"  # default padding
+        suffix = ""
+
+    return lib.unique_namespace(
+        base,
+        format=padding,
+        prefix="_" if not base or base[0].isdigit() else "",
+        suffix=suffix
+    )
+
+
 class Creator(LegacyCreator):
     defaults = ['Main']
 
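The '#' token handling above boils down to re.split with a capturing group plus printf-style zero padding. The formatting step in isolation, with a fixed index in place of the unique-namespace lookup:

import re

# Sketch of the '#' padding resolution used by get_custom_namespace.
def resolve_padding(custom_namespace, index=1):
    split = re.split("([#]+)", custom_namespace, 1)
    if len(split) == 3:
        base, padding, suffix = split
        fmt = "%0{}d".format(len(padding))
    else:
        base, fmt, suffix = split[0], "%02d", ""
    return base + (fmt % index) + suffix

assert resolve_padding("myspace_##_") == "myspace_01_"
assert resolve_padding("myspace###", index=12) == "myspace012"
assert resolve_padding("myspace") == "myspace01"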
@@ -143,15 +183,46 @@ class ReferenceLoader(Loader):
         assert os.path.exists(self.fname), "%s does not exist." % self.fname
 
         asset = context['asset']
+        subset = context['subset']
+        settings = get_project_settings(context['project']['name'])
+        custom_naming = settings['maya']['load']['reference_loader']
         loaded_containers = []
 
-        count = options.get("count") or 1
-        for c in range(0, count):
-            namespace = namespace or lib.unique_namespace(
-                "{}_{}_".format(asset["name"], context["subset"]["name"]),
-                prefix="_" if asset["name"][0].isdigit() else "",
-                suffix="_",
+        if not custom_naming['namespace']:
+            raise LoadError("No namespace specified in "
+                            "Maya ReferenceLoader settings")
+        elif not custom_naming['group_name']:
+            raise LoadError("No group name specified in "
+                            "Maya ReferenceLoader settings")
+
+        formatting_data = {
+            "asset_name": asset['name'],
+            "asset_type": asset['type'],
+            "subset": subset['name'],
+            "family": (
+                subset['data'].get('family') or
+                subset['data']['families'][0]
             )
+        }
+
+        custom_namespace = custom_naming['namespace'].format(
+            **formatting_data
+        )
+
+        custom_group_name = custom_naming['group_name'].format(
+            **formatting_data
+        )
+
+        count = options.get("count") or 1
+
+        for c in range(0, count):
+            namespace = get_custom_namespace(custom_namespace)
+            group_name = "{}:{}".format(
+                namespace,
+                custom_group_name
+            )
+
+            options['group_name'] = group_name
 
             # Offset loaded subset
             if "offset" in options:
@@ -187,7 +258,7 @@ class ReferenceLoader(Loader):
 
         return loaded_containers
 
-    def process_reference(self, context, name, namespace, data):
+    def process_reference(self, context, name, namespace, options):
         """To be implemented by subclass"""
         raise NotImplementedError("Must be implemented by subclass")
 
@@ -234,26 +234,10 @@ class MayaPlaceholderLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin):
         return self.get_load_plugin_options(options)
 
     def cleanup_placeholder(self, placeholder, failed):
-        """Hide placeholder, parent them to root
-        add them to placeholder set and register placeholder's parent
-        to keep placeholder info available for future use
+        """Hide placeholder, add them to placeholder set
         """
 
         node = placeholder._scene_identifier
-        node_parent = placeholder.data["parent"]
-        if node_parent:
-            cmds.setAttr(node + ".parent", node_parent, type="string")
-
-        if cmds.getAttr(node + ".index") < 0:
-            cmds.setAttr(node + ".index", placeholder.data["index"])
-
-        holding_sets = cmds.listSets(object=node)
-        if holding_sets:
-            for set in holding_sets:
-                cmds.sets(node, remove=set)
-
-        if cmds.listRelatives(node, p=True):
-            node = cmds.parent(node, world=True)[0]
         cmds.sets(node, addElement=PLACEHOLDER_SET)
         cmds.hide(node)
         cmds.setAttr(node + ".hiddenInOutliner", True)
@@ -286,8 +270,6 @@ class MayaPlaceholderLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin):
         elif not cmds.sets(root, q=True):
             return
 
-        if placeholder.data["parent"]:
-            cmds.parent(nodes_to_parent, placeholder.data["parent"])
         # Move loaded nodes to correct index in outliner hierarchy
         placeholder_form = cmds.xform(
             placeholder.scene_identifier,
29  openpype/hosts/maya/hooks/pre_auto_load_plugins.py  Normal file
@@ -0,0 +1,29 @@
+from openpype.lib import PreLaunchHook
+
+
+class MayaPreAutoLoadPlugins(PreLaunchHook):
+    """Define -noAutoloadPlugins command flag."""
+
+    # Before AddLastWorkfileToLaunchArgs
+    order = 9
+    app_groups = ["maya"]
+
+    def execute(self):
+
+        # Ignore if there's no last workfile to start.
+        if not self.data.get("start_last_workfile"):
+            return
+
+        maya_settings = self.data["project_settings"]["maya"]
+        enabled = maya_settings["explicit_plugins_loading"]["enabled"]
+        if enabled:
+            # Force disable the `AddLastWorkfileToLaunchArgs`.
+            self.data.pop("start_last_workfile")
+
+            # Force post initialization so our dedicated plug-in load can run
+            # prior to Maya opening a scene file.
+            key = "OPENPYPE_OPEN_WORKFILE_POST_INITIALIZATION"
+            self.launch_context.env[key] = "1"
+
+            self.log.debug("Explicit plugins loading.")
+            self.launch_context.launch_args.append("-noAutoloadPlugins")
@@ -0,0 +1,25 @@
+from openpype.lib import PreLaunchHook
+
+
+class MayaPreOpenWorkfilePostInitialization(PreLaunchHook):
+    """Define whether open last workfile should run post initialize."""
+
+    # Before AddLastWorkfileToLaunchArgs.
+    order = 9
+    app_groups = ["maya"]
+
+    def execute(self):
+
+        # Ignore if there's no last workfile to start.
+        if not self.data.get("start_last_workfile"):
+            return
+
+        maya_settings = self.data["project_settings"]["maya"]
+        enabled = maya_settings["open_workfile_post_initialization"]
+        if enabled:
+            # Force disable the `AddLastWorkfileToLaunchArgs`.
+            self.data.pop("start_last_workfile")
+
+            self.log.debug("Opening workfile post initialization.")
+            key = "OPENPYPE_OPEN_WORKFILE_POST_INITIALIZATION"
+            self.launch_context.env[key] = "1"
@@ -7,6 +7,12 @@ from openpype.hosts.maya.api import (
 class CreateAnimation(plugin.Creator):
     """Animation output for character rigs"""
 
+    # We hide the animation creator from the UI since the creation of it
+    # is automated upon loading a rig. There's an inventory action to recreate
+    # it for loaded rigs if by chance someone deleted the animation instance.
+    # Note: This setting is actually applied from project settings
+    enabled = False
+
     name = "animationDefault"
     label = "Animation"
     family = "animation"
@@ -1,8 +1,14 @@
 import os
 from collections import OrderedDict
+import json
 
 from openpype.hosts.maya.api import (
     lib,
     plugin
 )
+from openpype.settings import get_project_settings
+from openpype.pipeline import get_current_project_name, get_current_task_name
+from openpype.client import get_asset_by_name
 
 
 class CreateReview(plugin.Creator):
@@ -32,6 +38,23 @@ class CreateReview(plugin.Creator):
         super(CreateReview, self).__init__(*args, **kwargs)
         data = OrderedDict(**self.data)
 
+        project_name = get_current_project_name()
+        asset_doc = get_asset_by_name(project_name, data["asset"])
+        task_name = get_current_task_name()
+        preset = lib.get_capture_preset(
+            task_name,
+            asset_doc["data"]["tasks"][task_name]["type"],
+            data["subset"],
+            get_project_settings(project_name),
+            self.log
+        )
+        if os.environ.get("OPENPYPE_DEBUG") == "1":
+            self.log.debug(
+                "Using preset: {}".format(
+                    json.dumps(preset, indent=4, sort_keys=True)
+                )
+            )
+
         # Option for using Maya or asset frame range in settings.
         frame_range = lib.get_frame_range()
         if self.useMayaTimeline:
@@ -40,12 +63,14 @@ class CreateReview(plugin.Creator):
             data[key] = value
 
         data["fps"] = lib.collect_animation_data(fps=True)["fps"]
-        data["review_width"] = self.Width
-        data["review_height"] = self.Height
-        data["isolate"] = self.isolate
-
         data["keepImages"] = self.keepImages
-        data["imagePlane"] = self.imagePlane
         data["transparency"] = self.transparency
-        data["panZoom"] = self.panZoom
+        data["review_width"] = preset["Resolution"]["width"]
+        data["review_height"] = preset["Resolution"]["height"]
+        data["isolate"] = preset["Generic"]["isolate_view"]
+        data["imagePlane"] = preset["Viewport Options"]["imagePlane"]
+        data["panZoom"] = preset["Generic"]["pan_zoom"]
+        data["displayLights"] = lib.DISPLAY_LIGHTS_LABELS
 
         self.data = data
@@ -0,0 +1,35 @@
+from openpype.pipeline import (
+    InventoryAction,
+    get_representation_context
+)
+from openpype.hosts.maya.api.lib import (
+    create_rig_animation_instance,
+    get_container_members,
+)
+
+
+class RecreateRigAnimationInstance(InventoryAction):
+    """Recreate animation publish instance for loaded rigs"""
+
+    label = "Recreate rig animation instance"
+    icon = "wrench"
+    color = "#888888"
+
+    @staticmethod
+    def is_compatible(container):
+        return (
+            container.get("loader") == "ReferenceLoader"
+            and container.get("name", "").startswith("rig")
+        )
+
+    def process(self, containers):
+
+        for container in containers:
+            # todo: delete an existing entry if it exist or skip creation
+
+            namespace = container["namespace"]
+            representation_id = container["representation"]
+            context = get_representation_context(representation_id)
+            nodes = get_container_members(container)
+
+            create_rig_animation_instance(nodes, context, namespace)
@@ -14,7 +14,7 @@ class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
     icon = "code-fork"
     color = "orange"
 
-    def process_reference(self, context, name, namespace, data):
+    def process_reference(self, context, name, namespace, options):
 
         import maya.cmds as cmds
         from openpype.hosts.maya.api.lib import unique_namespace
@@ -41,7 +41,7 @@ class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
                           namespace=namespace,
                           sharedReferenceFile=False,
                           groupReference=True,
-                          groupName="{}:{}".format(namespace, name),
+                          groupName=options['group_name'],
                           reference=True,
                           returnNewNodes=True)
 
@@ -4,16 +4,12 @@ import contextlib
 from maya import cmds
 
 from openpype.settings import get_project_settings
-from openpype.pipeline import legacy_io
-from openpype.pipeline.create import (
-    legacy_create,
-    get_legacy_creator_by_name,
-)
 import openpype.hosts.maya.api.plugin
 from openpype.hosts.maya.api.lib import (
     maintained_selection,
     get_container_members,
-    parent_nodes
+    parent_nodes,
+    create_rig_animation_instance
 )
 
 
@@ -114,9 +110,6 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
     icon = "code-fork"
     color = "orange"
 
-    # Name of creator class that will be used to create animation instance
-    animation_creator_name = "CreateAnimation"
-
     def process_reference(self, context, name, namespace, options):
         import maya.cmds as cmds
 
@@ -125,14 +118,15 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
         except ValueError:
             family = "model"
 
-        group_name = "{}:_GRP".format(namespace)
         # True by default to keep legacy behaviours
         attach_to_root = options.get("attach_to_root", True)
+        group_name = options["group_name"]
 
         with maintained_selection():
             cmds.loadPlugin("AbcImport.mll", quiet=True)
             file_url = self.prepare_root_value(self.fname,
                                                context["project"]["name"])
+
             nodes = cmds.file(file_url,
                               namespace=namespace,
                               sharedReferenceFile=False,
@@ -168,9 +162,15 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
             with parent_nodes(roots, parent=None):
                 cmds.xform(group_name, zeroTransformPivots=True)
 
-            cmds.setAttr("{}.displayHandle".format(group_name), 1)
-
             settings = get_project_settings(os.environ['AVALON_PROJECT'])
+
+            display_handle = settings['maya']['load'].get(
+                'reference_loader', {}
+            ).get('display_handle', True)
+            cmds.setAttr(
+                "{}.displayHandle".format(group_name), display_handle
+            )
+
             colors = settings['maya']['load']['colors']
             c = colors.get(family)
             if c is not None:
@@ -180,7 +180,9 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
                     (float(c[1]) / 255),
                     (float(c[2]) / 255))
 
-            cmds.setAttr("{}.displayHandle".format(group_name), 1)
+            cmds.setAttr(
+                "{}.displayHandle".format(group_name), display_handle
+            )
             # get bounding box
             bbox = cmds.exactWorldBoundingBox(group_name)
             # get pivot position on world space
@@ -219,37 +221,10 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
             self._lock_camera_transforms(members)
 
     def _post_process_rig(self, name, namespace, context, options):
 
-        output = next((node for node in self if
-                       node.endswith("out_SET")), None)
-        controls = next((node for node in self if
-                         node.endswith("controls_SET")), None)
-
-        assert output, "No out_SET in rig, this is a bug."
-        assert controls, "No controls_SET in rig, this is a bug."
-
-        # Find the roots amongst the loaded nodes
-        roots = cmds.ls(self[:], assemblies=True, long=True)
-        assert roots, "No root nodes in rig, this is a bug."
-
-        asset = legacy_io.Session["AVALON_ASSET"]
-        dependency = str(context["representation"]["_id"])
-
-        self.log.info("Creating subset: {}".format(namespace))
-
-        # Create the animation instance
-        creator_plugin = get_legacy_creator_by_name(
-            self.animation_creator_name
+        nodes = self[:]
+        create_rig_animation_instance(
+            nodes, context, namespace, log=self.log
         )
-        with maintained_selection():
-            cmds.select([output, controls] + roots, noExpand=True)
-            legacy_create(
-                creator_plugin,
-                name=namespace,
-                asset=asset,
-                options={"useSelection": True},
-                data={"dependencies": dependency}
-            )
 
     def _lock_camera_transforms(self, nodes):
         cameras = cmds.ls(nodes, type="camera")
@@ -19,8 +19,7 @@ class YetiRigLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
     def process_reference(
         self, context, name=None, namespace=None, options=None
     ):
-
-        group_name = "{}:{}".format(namespace, name)
+        group_name = options['group_name']
         with lib.maintained_selection():
             file_url = self.prepare_root_value(
                 self.fname, context["project"]["name"]
@@ -4,7 +4,7 @@ import pyblish.api
 
 from openpype.client import get_subset_by_name
 from openpype.pipeline import legacy_io, KnownPublishError
-from openpype.hosts.maya.api.lib import get_attribute_input
+from openpype.hosts.maya.api import lib
 
 
 class CollectReview(pyblish.api.InstancePlugin):
@@ -29,26 +29,37 @@ class CollectReview(pyblish.api.InstancePlugin):
 
         # get cameras
         members = instance.data['setMembers']
-        cameras = cmds.ls(members, long=True,
-                          dag=True, cameras=True)
         self.log.debug('members: {}'.format(members))
-
-        # validate required settings
-        if len(cameras) == 0:
-            raise KnownPublishError("No camera found in review "
-                                    "instance: {}".format(instance))
-        elif len(cameras) > 2:
-            raise KnownPublishError(
-                "Only a single camera is allowed for a review instance but "
-                "more than one camera found in review instance: {}. "
-                "Cameras found: {}".format(instance, ", ".join(cameras)))
-
-        camera = cameras[0]
-        self.log.debug('camera: {}'.format(camera))
+        cameras = cmds.ls(members, long=True, dag=True, cameras=True)
+        camera = cameras[0] if cameras else None
 
         context = instance.context
         objectset = context.data['objectsets']
 
+        # Convert enum attribute index to string for Display Lights.
+        index = instance.data.get("displayLights", 0)
+        display_lights = lib.DISPLAY_LIGHTS_VALUES[index]
+        if display_lights == "project_settings":
+            settings = instance.context.data["project_settings"]
+            settings = settings["maya"]["publish"]["ExtractPlayblast"]
+            settings = settings["capture_preset"]["Viewport Options"]
+            display_lights = settings["displayLights"]
+
+        # Collect camera focal length.
+        burninDataMembers = instance.data.get("burninDataMembers", {})
+        if camera is not None:
+            attr = camera + ".focalLength"
+            if lib.get_attribute_input(attr):
+                start = instance.data["frameStart"]
+                end = instance.data["frameEnd"] + 1
+                time_range = range(int(start), int(end))
+                focal_length = [cmds.getAttr(attr, time=t) for t in time_range]
+            else:
+                focal_length = cmds.getAttr(attr)
+
+            burninDataMembers["focalLength"] = focal_length
+
         # Account for nested instances like model.
         reviewable_subsets = list(set(members) & set(objectset))
         if reviewable_subsets:
             if len(reviewable_subsets) > 1:
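The focal-length collection above samples the attribute once per frame only when it has an incoming connection (i.e. is animated); otherwise a single static read suffices. The branch in isolation, where is_animated and get_attr are hypothetical stand-ins for lib.get_attribute_input and cmds.getAttr:

# Sketch: per-frame sampling vs. static read of a camera attribute.
def collect_focal_length(is_animated, get_attr, frame_start, frame_end):
    if is_animated:
        # One sample per frame, end frame inclusive -- hence the +1.
        return [get_attr(time=t) for t in range(int(frame_start),
                                                int(frame_end) + 1)]
    return get_attr()

static = collect_focal_length(False, lambda **kw: 35.0, 1001, 1003)
animated = collect_focal_length(True, lambda **kw: 35.0, 1001, 1003)
assert static == 35.0 and animated == [35.0, 35.0, 35.0]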
@@ -75,11 +86,14 @@ class CollectReview(pyblish.api.InstancePlugin):
             else:
                 data['families'] = ['review']
 
+            data["cameras"] = cameras
             data['review_camera'] = camera
             data['frameStartFtrack'] = instance.data["frameStartHandle"]
             data['frameEndFtrack'] = instance.data["frameEndHandle"]
             data['frameStartHandle'] = instance.data["frameStartHandle"]
             data['frameEndHandle'] = instance.data["frameEndHandle"]
+            data['handleStart'] = instance.data["handleStart"]
+            data['handleEnd'] = instance.data["handleEnd"]
             data["frameStart"] = instance.data["frameStart"]
             data["frameEnd"] = instance.data["frameEnd"]
             data['step'] = instance.data['step']
@@ -89,6 +103,8 @@ class CollectReview(pyblish.api.InstancePlugin):
             data["isolate"] = instance.data["isolate"]
             data["panZoom"] = instance.data.get("panZoom", False)
             data["panel"] = instance.data["panel"]
+            data["displayLights"] = display_lights
+            data["burninDataMembers"] = burninDataMembers
 
             # The review instance must be active
             cmds.setAttr(str(instance) + '.active', 1)
@@ -109,11 +125,14 @@ class CollectReview(pyblish.api.InstancePlugin):
             self.log.debug("Existing subsets found, keep legacy name.")
             instance.data['subset'] = legacy_subset_name
 
+        instance.data["cameras"] = cameras
         instance.data['review_camera'] = camera
         instance.data['frameStartFtrack'] = \
             instance.data["frameStartHandle"]
         instance.data['frameEndFtrack'] = \
             instance.data["frameEndHandle"]
+        instance.data["displayLights"] = display_lights
+        instance.data["burninDataMembers"] = burninDataMembers
 
         # make ftrack publishable
         instance.data.setdefault("families", []).append('ftrack')
@@ -155,20 +174,3 @@ class CollectReview(pyblish.api.InstancePlugin):
             audio_data.append(get_audio_node_data(node))
 
         instance.data["audio"] = audio_data
-
-        # Collect focal length.
-        attr = camera + ".focalLength"
-        if get_attribute_input(attr):
-            start = instance.data["frameStart"]
-            end = instance.data["frameEnd"] + 1
-            focal_length = [
-                cmds.getAttr(attr, time=t) for t in range(int(start), int(end))
-            ]
-        else:
-            focal_length = cmds.getAttr(attr)
-
-        key = "focalLength"
-        try:
-            instance.data["burninDataMembers"][key] = focal_length
-        except KeyError:
-            instance.data["burninDataMembers"] = {key: focal_length}
@@ -26,7 +26,7 @@ HARDLINK = 2
 
 
 @attr.s
-class TextureResult:
+class TextureResult(object):
     """The resulting texture of a processed file for a resource"""
     # Path to the file
     path = attr.ib()
@@ -280,7 +280,7 @@ class MakeTX(TextureProcessor):
             # Do nothing if the source file is already a .tx file.
             return TextureResult(
                 path=source,
-                file_hash=None,  # todo: unknown texture hash?
+                file_hash=source_hash(source),
                 colorspace=colorspace,
                 transfer_mode=COPY
             )
@@ -34,13 +34,15 @@ class ExtractPlayblast(publish.Extractor):
     families = ["review"]
     optional = True
     capture_preset = {}
+    profiles = None
 
     def _capture(self, preset):
-        self.log.info(
-            "Using preset:\n{}".format(
-                json.dumps(preset, sort_keys=True, indent=4)
+        if os.environ.get("OPENPYPE_DEBUG") == "1":
+            self.log.debug(
+                "Using preset: {}".format(
+                    json.dumps(preset, indent=4, sort_keys=True)
+                )
             )
-        )
 
         path = capture.capture(log=self.log, **preset)
         self.log.debug("playblast path {}".format(path))
@@ -65,12 +67,25 @@ class ExtractPlayblast(publish.Extractor):
         # get cameras
         camera = instance.data["review_camera"]
 
-        preset = lib.load_capture_preset(data=self.capture_preset)
         # Grab capture presets from the project settings
-        capture_presets = self.capture_preset
+        task_data = instance.data["anatomyData"].get("task", {})
+        capture_preset = lib.get_capture_preset(
+            task_data.get("name"),
+            task_data.get("type"),
+            instance.data["subset"],
+            instance.context.data["project_settings"],
+            self.log
+        )
+
+        preset = lib.load_capture_preset(data=capture_preset)
+
+        # "isolate_view" will already have been applied at creation, so we'll
+        # ignore it here.
+        preset.pop("isolate_view")
 
         # Set resolution variables from capture presets
-        width_preset = capture_presets["Resolution"]["width"]
-        height_preset = capture_presets["Resolution"]["height"]
+        width_preset = capture_preset["Resolution"]["width"]
+        height_preset = capture_preset["Resolution"]["height"]
 
         # Set resolution variables from asset values
         asset_data = instance.data["assetEntity"]["data"]
         asset_width = asset_data.get("resolutionWidth")
@@ -115,14 +130,19 @@ class ExtractPlayblast(publish.Extractor):
         cmds.currentTime(refreshFrameInt - 1, edit=True)
         cmds.currentTime(refreshFrameInt, edit=True)

+        # Use displayLights setting from instance
+        key = "displayLights"
+        preset["viewport_options"][key] = instance.data[key]
+
         # Override transparency if requested.
         transparency = instance.data.get("transparency", 0)
         if transparency != 0:
             preset["viewport2_options"]["transparencyAlgorithm"] = transparency

         # Isolate view is requested by having objects in the set besides a
-        # camera.
-        if preset.pop("isolate_view", False) and instance.data.get("isolate"):
+        # camera. If there is only 1 member it'll be the camera because we
+        # validate to have 1 camera only.
+        if instance.data["isolate"] and len(instance.data["setMembers"]) > 1:
             preset["isolate"] = instance.data["setMembers"]

         # Show/Hide image planes on request.
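The new isolate check relies on the instance's set membership: with exactly one member that member is the review camera (enforced by validation), so isolation only kicks in when extra objects were added. A small sketch of that decision with hypothetical data:

def resolve_isolate(instance_data):
    # One member means "camera only": nothing to isolate.
    members = instance_data["setMembers"]
    if instance_data["isolate"] and len(members) > 1:
        return members  # isolate exactly what the artist put in the set
    return None

# Example: resolve_isolate({"isolate": True, "setMembers": ["cam", "meshA"]})
# -> ["cam", "meshA"]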
@@ -157,7 +177,7 @@ class ExtractPlayblast(publish.Extractor):
         )

         override_viewport_options = (
-            capture_presets["Viewport Options"]["override_viewport_options"]
+            capture_preset["Viewport Options"]["override_viewport_options"]
         )

         # Force viewer to False in call to capture because we have our own

@@ -197,7 +217,11 @@ class ExtractPlayblast(publish.Extractor):
             instance.data["panel"], edit=True, **viewport_defaults
         )

-        cmds.setAttr("{}.panZoomEnabled".format(preset["camera"]), pan_zoom)
+        try:
+            cmds.setAttr(
+                "{}.panZoomEnabled".format(preset["camera"]), pan_zoom)
+        except RuntimeError:
+            self.log.warning("Cannot restore Pan/Zoom settings.")

         collected_files = os.listdir(stagingdir)
         patterns = [clique.PATTERNS["frames"]]
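After capture, the extractor gathers the rendered frames from the staging directory with `clique`, which groups numbered files into sequences. A minimal sketch of that collection step, assuming hypothetical file names:

import clique

files = ["playblast.1001.png", "playblast.1002.png", "playblast.1003.png"]

# assemble() splits the file list into frame collections and a remainder
# of files that matched no pattern.
collections, remainder = clique.assemble(
    files, patterns=[clique.PATTERNS["frames"]]
)
for collection in collections:
    print(collection.format())  # e.g. playblast.%d.png [1001-1003]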
@@ -233,8 +257,8 @@ class ExtractPlayblast(publish.Extractor):
             collected_files = collected_files[0]

         representation = {
-            "name": self.capture_preset["Codec"]["compression"],
-            "ext": self.capture_preset["Codec"]["compression"],
+            "name": capture_preset["Codec"]["compression"],
+            "ext": capture_preset["Codec"]["compression"],
             "files": collected_files,
             "stagingDir": stagingdir,
             "frameStart": start,
@@ -1,6 +1,7 @@
 import os
 import glob
 import tempfile
+import json

 import capture

@@ -27,22 +28,25 @@ class ExtractThumbnail(publish.Extractor):

         camera = instance.data["review_camera"]

-        maya_setting = instance.context.data["project_settings"]["maya"]
-        plugin_setting = maya_setting["publish"]["ExtractPlayblast"]
-        capture_preset = plugin_setting["capture_preset"]
+        task_data = instance.data["anatomyData"].get("task", {})
+        capture_preset = lib.get_capture_preset(
+            task_data.get("name"),
+            task_data.get("type"),
+            instance.data["subset"],
+            instance.context.data["project_settings"],
+            self.log
+        )
+
+        preset = lib.load_capture_preset(data=capture_preset)
+
+        # "isolate_view" will already have been applied at creation, so we'll
+        # ignore it here.
+        preset.pop("isolate_view")

         override_viewport_options = (
             capture_preset["Viewport Options"]["override_viewport_options"]
         )

-        try:
-            preset = lib.load_capture_preset(data=capture_preset)
-        except KeyError as ke:
-            self.log.error("Error loading capture presets: {}".format(str(ke)))
-            preset = {}
-        self.log.info("Using viewport preset: {}".format(preset))

         # preset["off_screen"] = False

         preset["camera"] = camera
         preset["start_frame"] = instance.data["frameStart"]
         preset["end_frame"] = instance.data["frameStart"]

@@ -58,10 +62,9 @@ class ExtractThumbnail(publish.Extractor):
             "overscan": 1.0,
             "depthOfField": cmds.getAttr("{0}.depthOfField".format(camera)),
         }
-        capture_presets = capture_preset
         # Set resolution variables from capture presets
-        width_preset = capture_presets["Resolution"]["width"]
-        height_preset = capture_presets["Resolution"]["height"]
+        width_preset = capture_preset["Resolution"]["width"]
+        height_preset = capture_preset["Resolution"]["height"]
         # Set resolution variables from asset values
         asset_data = instance.data["assetEntity"]["data"]
         asset_width = asset_data.get("resolutionWidth")

@@ -104,14 +107,19 @@ class ExtractThumbnail(publish.Extractor):
         cmds.currentTime(refreshFrameInt - 1, edit=True)
         cmds.currentTime(refreshFrameInt, edit=True)

+        # Use displayLights setting from instance
+        key = "displayLights"
+        preset["viewport_options"][key] = instance.data[key]
+
         # Override transparency if requested.
         transparency = instance.data.get("transparency", 0)
         if transparency != 0:
             preset["viewport2_options"]["transparencyAlgorithm"] = transparency

         # Isolate view is requested by having objects in the set besides a
-        # camera.
-        if preset.pop("isolate_view", False) and instance.data.get("isolate"):
+        # camera. If there is only 1 member it'll be the camera because we
+        # validate to have 1 camera only.
+        if instance.data["isolate"] and len(instance.data["setMembers"]) > 1:
             preset["isolate"] = instance.data["setMembers"]

         # Show or Hide Image Plane

@@ -139,6 +147,13 @@ class ExtractThumbnail(publish.Extractor):
         preset.update(panel_preset)
         cmds.setFocus(panel)

+        if os.environ.get("OPENPYPE_DEBUG") == "1":
+            self.log.debug(
+                "Using preset: {}".format(
+                    json.dumps(preset, indent=4, sort_keys=True)
+                )
+            )
+
         path = capture.capture(**preset)
         playblast = self._fix_playblast_output_path(path)
@@ -65,9 +65,10 @@ class ExtractXgen(publish.Extractor):
         )
         cmds.delete(set(children) - set(shapes))

-        duplicate_transform = cmds.parent(
-            duplicate_transform, world=True
-        )[0]
+        if cmds.listRelatives(duplicate_transform, parent=True):
+            duplicate_transform = cmds.parent(
+                duplicate_transform, world=True
+            )[0]

         duplicate_nodes.append(duplicate_transform)
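The added guard avoids a RuntimeError from `cmds.parent(..., world=True)` when the duplicate is already a top-level node, since `cmds.listRelatives(node, parent=True)` returns None in that case. A tiny sketch of the same pattern as a standalone helper:

from maya import cmds

def ensure_world_parent(node):
    # listRelatives returns None for nodes already parented to world.
    if cmds.listRelatives(node, parent=True):
        node = cmds.parent(node, world=True)[0]
    return node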
@@ -6,7 +6,7 @@ import pyblish.api

 from openpype.hosts.maya.api.lib import set_attribute
 from openpype.pipeline.publish import (
-    RepairContextAction,
+    RepairAction,
     ValidateContentsOrder,
 )

@@ -26,7 +26,7 @@ class ValidateAttributes(pyblish.api.InstancePlugin):
     order = ValidateContentsOrder
     label = "Attributes"
     hosts = ["maya"]
-    actions = [RepairContextAction]
+    actions = [RepairAction]
     optional = True

     attributes = None

@@ -81,7 +81,7 @@ class ValidateAttributes(pyblish.api.InstancePlugin):
             if node_name not in attributes:
                 continue

-            for attr_name, expected in attributes.items():
+            for attr_name, expected in attributes[node_name].items():

                 # Skip if attribute does not exist
                 if not cmds.attributeQuery(attr_name, node=node, exists=True):
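The loop fix implies `attributes` is a nested mapping of node name to its expected attribute values; iterating the outer dict handed `(node_name, inner_dict)` pairs to the comparison instead of attribute/value pairs. A short sketch of the assumed data shape (node and attribute names are hypothetical):

# Hypothetical settings payload for the validator.
attributes = {
    "pCubeShape1": {"castsShadows": False, "aiSubdivType": 1},
    "pSphereShape1": {"visibility": True},
}

node_name = "pCubeShape1"
# Correct: iterate only this node's expected attributes.
for attr_name, expected in attributes[node_name].items():
    print(attr_name, expected)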
@@ -255,7 +255,7 @@ class ValidateMeshHasOverlappingUVs(pyblish.api.InstancePlugin):
         # Store original uv set
         original_current_uv_set = cmds.polyUVSet(mesh,
                                                  query=True,
-                                                 currentUVSet=True)
+                                                 currentUVSet=True)[0]

         overlapping_faces = []
         for uv_set in cmds.polyUVSet(mesh, query=True, allUVSets=True):
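`cmds.polyUVSet(query=True, currentUVSet=True)` returns a single-element list, not a string, so the stored value needs indexing before it can be passed back when the original set is restored. A small sketch, assuming a hypothetical shape name:

from maya import cmds

mesh = "pSphereShape1"  # hypothetical shape

# The query returns a list such as [u"map1"]; keep the string itself.
original_uv_set = cmds.polyUVSet(mesh, query=True, currentUVSet=True)[0]

# ... switch sets and run the overlap checks ...

# Restoring expects the set name, not a list.
cmds.polyUVSet(mesh, currentUVSet=True, uvSet=original_uv_set)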
30
openpype/hosts/maya/plugins/publish/validate_review.py
Normal file

@@ -0,0 +1,30 @@
+import pyblish.api
+
+from openpype.pipeline.publish import (
+    ValidateContentsOrder, PublishValidationError
+)
+
+
+class ValidateReview(pyblish.api.InstancePlugin):
+    """Validate review."""
+
+    order = ValidateContentsOrder
+    label = "Validate Review"
+    families = ["review"]
+
+    def process(self, instance):
+        cameras = instance.data["cameras"]
+
+        # validate required settings
+        if len(cameras) == 0:
+            raise PublishValidationError(
+                "No camera found in review instance: {}".format(instance)
+            )
+        elif len(cameras) > 1:
+            raise PublishValidationError(
+                "Only a single camera is allowed for a review instance but "
+                "more than one camera found in review instance: {}. "
+                "Cameras found: {}".format(instance, ", ".join(cameras))
+            )
+
+        self.log.debug('camera: {}'.format(instance.data["review_camera"]))
@@ -19,7 +19,7 @@ class ValidateSingleAssembly(pyblish.api.InstancePlugin):

     order = ValidateContentsOrder
     hosts = ['maya']
-    families = ['rig', 'animation']
+    families = ['rig']
     label = 'Single Assembly'

     def process(self, instance):
@@ -57,3 +57,16 @@ class ValidateXgen(pyblish.api.InstancePlugin):
                 json.dumps(inactive_modifiers, indent=4, sort_keys=True)
             )
         )
+
+        # We need a namespace else there will be a naming conflict when
+        # extracting because of stripping namespaces and parenting to world.
+        node_names = [instance.data["xgmPalette"]]
+        for _, connections in instance.data["xgenConnections"].items():
+            node_names.append(connections["transform"].split(".")[0])
+
+        non_namespaced_nodes = [n for n in node_names if ":" not in n]
+        if non_namespaced_nodes:
+            raise PublishValidationError(
+                "Could not find namespace on {}. Namespace is required for"
+                " xgen publishing.".format(non_namespaced_nodes)
+            )
@@ -1,5 +1,4 @@
 import os
-from functools import partial

 from openpype.settings import get_project_settings
 from openpype.pipeline import install_host

@@ -13,24 +12,41 @@ install_host(host)

 print("Starting OpenPype usersetup...")

+project_settings = get_project_settings(os.environ['AVALON_PROJECT'])
+
+# Loading plugins explicitly.
+explicit_plugins_loading = project_settings["maya"]["explicit_plugins_loading"]
+if explicit_plugins_loading["enabled"]:
+    def _explicit_load_plugins():
+        for plugin in explicit_plugins_loading["plugins_to_load"]:
+            if plugin["enabled"]:
+                print("Loading plug-in: " + plugin["name"])
+                try:
+                    cmds.loadPlugin(plugin["name"], quiet=True)
+                except RuntimeError as e:
+                    print(e)
+
+    # We need to load plugins deferred as loading them directly does not work
+    # correctly due to Maya's initialization.
+    cmds.evalDeferred(
+        _explicit_load_plugins,
+        lowestPriority=True
+    )
+
 # Open Workfile Post Initialization.
 key = "OPENPYPE_OPEN_WORKFILE_POST_INITIALIZATION"
 if bool(int(os.environ.get(key, "0"))):
+    def _log_and_open():
+        path = os.environ["AVALON_LAST_WORKFILE"]
+        print("Opening \"{}\"".format(path))
+        cmds.file(path, open=True, force=True)
     cmds.evalDeferred(
-        partial(
-            cmds.file,
-            os.environ["AVALON_LAST_WORKFILE"],
-            open=True,
-            force=True
-        ),
+        _log_and_open,
         lowestPriority=True
     )


 # Build a shelf.
-settings = get_project_settings(os.environ['AVALON_PROJECT'])
-shelf_preset = settings['maya'].get('project_shelf')
+shelf_preset = project_settings['maya'].get('project_shelf')

 if shelf_preset:
     project = os.environ["AVALON_PROJECT"]
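Both new blocks lean on `cmds.evalDeferred` because Maya cannot safely load plug-ins or open files while `userSetup.py` is still executing during startup; deferring with `lowestPriority=True` runs the callback once the application is idle. A minimal standalone sketch of the pattern:

from maya import cmds

def _deferred_startup():
    # Runs after Maya finishes initializing, when scene and plug-in
    # management commands are safe to call.
    print("Maya is idle; running deferred startup work.")

# lowestPriority pushes execution to the back of the idle queue.
cmds.evalDeferred(_deferred_startup, lowestPriority=True)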
97
openpype/hosts/maya/tools/mayalookassigner/alembic.py
Normal file

@@ -0,0 +1,97 @@
+# -*- coding: utf-8 -*-
+"""Tools for loading looks to vray proxies."""
+import os
+from collections import defaultdict
+import logging
+
+import six
+
+import alembic.Abc
+
+
+log = logging.getLogger(__name__)
+
+
+def get_alembic_paths_by_property(filename, attr, verbose=False):
+    # type: (str, str, bool) -> dict
+    """Return attribute value per object in the Alembic file.
+
+    Reads an Alembic archive hierarchy and retrieves the
+    value from the `attr` properties on the objects.
+
+    Args:
+        filename (str): Full path to Alembic archive to read.
+        attr (str): Id attribute.
+        verbose (bool): Whether to verbosely log missing attributes.
+
+    Returns:
+        dict: Mapping of node full path with its id
+
+    """
+    # Normalize alembic path
+    filename = os.path.normpath(filename)
+    filename = filename.replace("\\", "/")
+    filename = str(filename)  # path must be string
+
+    try:
+        archive = alembic.Abc.IArchive(filename)
+    except RuntimeError:
+        # invalid alembic file - probably vrmesh
+        log.warning("{} is not an alembic file".format(filename))
+        return {}
+    root = archive.getTop()
+
+    iterator = list(root.children)
+    obj_ids = {}
+
+    for obj in iterator:
+        name = obj.getFullName()
+
+        # include children for coming iterations
+        iterator.extend(obj.children)
+
+        props = obj.getProperties()
+        if props.getNumProperties() == 0:
+            # Skip those without properties, e.g. '/materials' in a gpuCache
+            continue
+
+        # The custom attribute is under the properties' first container under
+        # the ".arbGeomParams"
+        prop = props.getProperty(0)  # get base property
+
+        _property = None
+        try:
+            geo_params = prop.getProperty('.arbGeomParams')
+            _property = geo_params.getProperty(attr)
+        except KeyError:
+            if verbose:
+                log.debug("Missing attr on: {0}".format(name))
+            continue
+
+        if not _property.isConstant():
+            log.warning("Id not constant on: {0}".format(name))
+
+        # Get first value sample
+        value = _property.getValue()[0]
+
+        obj_ids[name] = value
+
+    return obj_ids
+
+
+def get_alembic_ids_cache(path):
+    # type: (str) -> dict
+    """Build an id to node mapping in Alembic file.
+
+    Nodes without IDs are ignored.
+
+    Returns:
+        dict: Mapping of id to nodes in the Alembic.
+
+    """
+    node_ids = get_alembic_paths_by_property(path, attr="cbId")
+    id_nodes = defaultdict(list)
+    for node, _id in six.iteritems(node_ids):
+        id_nodes[_id].append(node)
+
+    return dict(six.iteritems(id_nodes))
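A usage example of the new helper, with a hypothetical archive path; `cbId` is the id attribute the look assigner matches nodes against, and the printed values are illustrative:

# Hypothetical archive path for illustration.
path = "/projects/show/publish/model/char_hero.abc"

ids_to_nodes = get_alembic_ids_cache(path)
for _id, nodes in ids_to_nodes.items():
    print("{}: {}".format(_id, nodes))
# e.g. "5f2c...:0001": ["/char_hero/body_GEO"]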
Some files were not shown because too many files have changed in this diff.