Merge remote-tracking branch 'origin/develop' into feature/OP-3933_RR-support

Ondrej Samohel 2023-04-13 17:10:30 +02:00
commit 79a9deba52
292 changed files with 9100 additions and 7543 deletions

View file

@ -1,33 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''
---
**Running version**
[ex. 3.14.1-nightly.2]
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. windows]
- Host: [e.g. Maya, Nuke, Houdini]
**Additional context**
Add any other context about the problem here.

183
.github/ISSUE_TEMPLATE/bug_report.yml vendored Normal file
View file

@ -0,0 +1,183 @@
name: Bug Report
description: File a bug report
title: 'Bug: '
labels:
- 'type: bug'
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this bug report!
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: >-
Please search to see if an issue already exists for the bug you
encountered.
options:
- label: I have searched the existing issues
required: true
- type: textarea
attributes:
label: 'Current Behavior:'
description: A concise description of what you're experiencing.
validations:
required: true
- type: textarea
attributes:
label: 'Expected Behavior:'
description: A concise description of what you expected to happen.
validations:
required: false
- type: dropdown
id: _version
attributes:
label: Version
description: What version are you running? Look to OpenPype Tray
options:
- 3.15.4-nightly.3
- 3.15.4-nightly.2
- 3.15.4-nightly.1
- 3.15.3
- 3.15.3-nightly.4
- 3.15.3-nightly.3
- 3.15.3-nightly.2
- 3.15.3-nightly.1
- 3.15.2
- 3.15.2-nightly.6
- 3.15.2-nightly.5
- 3.15.2-nightly.4
- 3.15.2-nightly.3
- 3.15.2-nightly.2
- 3.15.2-nightly.1
- 3.15.1
- 3.15.1-nightly.6
- 3.15.1-nightly.5
- 3.15.1-nightly.4
- 3.15.1-nightly.3
- 3.15.1-nightly.2
- 3.15.1-nightly.1
- 3.15.0
- 3.15.0-nightly.1
- 3.14.11-nightly.4
- 3.14.11-nightly.3
- 3.14.11-nightly.2
- 3.14.11-nightly.1
- 3.14.10
- 3.14.10-nightly.9
- 3.14.10-nightly.8
- 3.14.10-nightly.7
- 3.14.10-nightly.6
- 3.14.10-nightly.5
- 3.14.10-nightly.4
- 3.14.10-nightly.3
- 3.14.10-nightly.2
- 3.14.10-nightly.1
- 3.14.9
- 3.14.9-nightly.5
- 3.14.9-nightly.4
- 3.14.9-nightly.3
- 3.14.9-nightly.2
- 3.14.9-nightly.1
- 3.14.8
- 3.14.8-nightly.4
- 3.14.8-nightly.3
- 3.14.8-nightly.2
- 3.14.8-nightly.1
- 3.14.7
- 3.14.7-nightly.8
- 3.14.7-nightly.7
- 3.14.7-nightly.6
- 3.14.7-nightly.5
- 3.14.7-nightly.4
- 3.14.7-nightly.3
- 3.14.7-nightly.2
- 3.14.7-nightly.1
- 3.14.6
- 3.14.6-nightly.3
- 3.14.6-nightly.2
- 3.14.6-nightly.1
- 3.14.5
- 3.14.5-nightly.3
- 3.14.5-nightly.2
- 3.14.5-nightly.1
- 3.14.4
- 3.14.4-nightly.4
- 3.14.4-nightly.3
- 3.14.4-nightly.2
- 3.14.4-nightly.1
- 3.14.3
- 3.14.3-nightly.7
- 3.14.3-nightly.6
- 3.14.3-nightly.5
- 3.14.3-nightly.4
- 3.14.3-nightly.3
- 3.14.3-nightly.2
- 3.14.3-nightly.1
- 3.14.2
- 3.14.2-nightly.5
- 3.14.2-nightly.4
- 3.14.2-nightly.3
- 3.14.2-nightly.2
- 3.14.2-nightly.1
- 3.14.1
- 3.14.1-nightly.4
- 3.14.1-nightly.3
- 3.14.1-nightly.2
- 3.14.1-nightly.1
- 3.14.0
- 3.14.0-nightly.1
- 3.13.1-nightly.3
- 3.13.1-nightly.2
- 3.13.1-nightly.1
- 3.13.0
- 3.13.0-nightly.1
- 3.12.3-nightly.3
- 3.12.3-nightly.2
- 3.12.3-nightly.1
validations:
required: true
- type: dropdown
validations:
required: true
attributes:
label: What platform are you running OpenPype on?
description: |
Please specify the operating systems you are running OpenPype with.
multiple: true
options:
- Windows
- Linux / Centos
- Linux / Ubuntu
- Linux / RedHat
- MacOS
- type: textarea
id: to-reproduce
attributes:
label: 'Steps To Reproduce:'
description: Steps to reproduce the behavior.
placeholder: |
1. How did the configuration look like
2. What type of action was made
validations:
required: true
- type: checkboxes
attributes:
label: Are there any labels you wish to add?
description: Please search labels and identify those related to your bug.
options:
- label: I have added the relevant labels to the bug report.
required: true
- type: textarea
id: logs
attributes:
label: 'Relevant log output:'
description: >-
Please copy and paste any relevant log output. This will be
automatically formatted into code, so no need for backticks.
render: shell
- type: textarea
id: additional-context
attributes:
label: 'Additional context:'
description: Add any other context about the problem here.

8
.github/ISSUE_TEMPLATE/config.yml vendored Normal file
View file

@ -0,0 +1,8 @@
blank_issues_enabled: false
contact_links:
- name: Ynput Community Discussions
url: https://community.ynput.io
about: Please ask and answer questions here.
- name: Ynput Discord Server
url: https://discord.gg/ynput
about: For community quick chats.

View file

@ -0,0 +1,52 @@
name: Enhancement Request
description: Create a report to help us enhance a particular feature
title: "Enhancement: "
labels:
- "type: enhancement"
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this enhancement request report!
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: Please search to see if an issue already exists for the bug you encountered.
options:
- label: I have searched the existing issues.
required: true
- type: textarea
id: related-feature
attributes:
label: Please describe the feature you have in mind and explain what the current shortcomings are?
description: A clear and concise description of what the problem is.
validations:
required: true
- type: textarea
id: enhancement-proposal
attributes:
label: How would you imagine the implementation of the feature?
description: A clear and concise description of what you want to happen.
validations:
required: true
- type: checkboxes
attributes:
label: Are there any labels you wish to add?
description: Please search labels and identify those related to your enhancement.
options:
- label: I have added the relevant labels to the enhancement request.
required: true
- type: textarea
id: alternatives
attributes:
label: "Describe alternatives you've considered:"
description: A clear and concise description of any alternative solutions or features you've considered.
validations:
required: false
- type: textarea
id: additional-context
attributes:
label: "Additional context:"
description: Add any other context or screenshots about the enhancement request here.
validations:
required: false

View file

@ -1,20 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

View file

@ -7,9 +7,9 @@
head: "enhancement/*"
# Apply label "bugfix" if head matches one of "bugfix/*" or "hotfix/*"
'type: bugfix':
'type: bug':
head: ["bugfix/*", "hotfix/*"]
# Apply label "release" if base matches "release/*"
'Bump Minor':
base: "release/next-minor"
base: "release/next-minor"

View file

@ -1,102 +1,102 @@
# Add type: unittest label if any changes in tests folders
'type: unittest':
- **/*tests*/**/*
- '*/*tests*/**/*'
# any changes in documentation structure
'type: documentation':
- **/*website*/**/*
- **/*docs*/**/*
- '*/**/*website*/**/*'
- '*/**/*docs*/**/*'
# hosts triage
'host: Nuke':
- **/*nuke*
- **/*nuke*/**/*
- '*/**/*nuke*'
- '*/**/*nuke*/**/*'
'host: Photoshop':
- **/*photoshop*
- **/*photoshop*/**/*
- '*/**/*photoshop*'
- '*/**/*photoshop*/**/*'
'host: Harmony':
- **/*harmony*
- **/*harmony*/**/*
- '*/**/*harmony*'
- '*/**/*harmony*/**/*'
'host: UE':
- **/*unreal*
- **/*unreal*/**/*
- '*/**/*unreal*'
- '*/**/*unreal*/**/*'
'host: Houdini':
- **/*houdini*
- **/*houdini*/**/*
- '*/**/*houdini*'
- '*/**/*houdini*/**/*'
'host: Maya':
- **/*maya*
- **/*maya*/**/*
- '*/**/*maya*'
- '*/**/*maya*/**/*'
'host: Resolve':
- **/*resolve*
- **/*resolve*/**/*
- '*/**/*resolve*'
- '*/**/*resolve*/**/*'
'host: Blender':
- **/*blender*
- **/*blender*/**/*
- '*/**/*blender*'
- '*/**/*blender*/**/*'
'host: Hiero':
- **/*hiero*
- **/*hiero*/**/*
- '*/**/*hiero*'
- '*/**/*hiero*/**/*'
'host: Fusion':
- **/*fusion*
- **/*fusion*/**/*
- '*/**/*fusion*'
- '*/**/*fusion*/**/*'
'host: Flame':
- **/*flame*
- **/*flame*/**/*
- '*/**/*flame*'
- '*/**/*flame*/**/*'
'host: TrayPublisher':
- **/*traypublisher*
- **/*traypublisher*/**/*
- '*/**/*traypublisher*'
- '*/**/*traypublisher*/**/*'
'host: 3dsmax':
- **/*max*
- **/*max*/**/*
- '*/**/*max*'
- '*/**/*max*/**/*'
'host: TV Paint':
- **/*tvpaint*
- **/*tvpaint*/**/*
- '*/**/*tvpaint*'
- '*/**/*tvpaint*/**/*'
'host: CelAction':
- **/*celaction*
- **/*celaction*/**/*
- '*/**/*celaction*'
- '*/**/*celaction*/**/*'
'host: After Effects':
- **/*aftereffects*
- **/*aftereffects*/**/*
- '*/**/*aftereffects*'
- '*/**/*aftereffects*/**/*'
'host: Substance Painter':
- **/*substancepainter*
- **/*substancepainter*/**/*
- '*/**/*substancepainter*'
- '*/**/*substancepainter*/**/*'
# modules triage
'module: Deadline':
- **/*deadline*
- **/*deadline*/**/*
- '*/**/*deadline*'
- '*/**/*deadline*/**/*'
'module: RoyalRender':
- **/*royalrender*
- **/*royalrender*/**/*
- '*/**/*royalrender*'
- '*/**/*royalrender*/**/*'
'module: Sitesync':
- **/*sync_server*
- **/*sync_server*/**/*
- '*/**/*sync_server*'
- '*/**/*sync_server*/**/*'
'module: Ftrack':
- **/*ftrack*
- **/*ftrack*/**/*
- '*/**/*ftrack*'
- '*/**/*ftrack*/**/*'
'module: Shotgrid':
- **/*shotgrid*
- **/*shotgrid*/**/*
- '*/**/*shotgrid*'
- '*/**/*shotgrid*/**/*'
'module: Kitsu':
- **/*kitsu*
- **/*kitsu*/**/*
- '*/**/*kitsu*'
- '*/**/*kitsu*/**/*'

View file

@ -1,4 +1,4 @@
name: documentation
name: 📜 Documentation
on:
pull_request:

View file

@ -1,4 +1,4 @@
name: Milestone - assign to PRs
name: 👉🏻 Milestone - assign to PRs
on:
pull_request_target:

View file

@ -1,4 +1,4 @@
name: Milestone - create default
name: Milestone - create default
on:
milestone:

View file

@ -1,4 +1,4 @@
name: Milestone Release [trigger]
name: 🚩 Milestone Release [trigger]
on:
workflow_dispatch:

View file

@ -1,4 +1,4 @@
name: Dev -> Main
name: 🔀 Dev -> Main
on:
schedule:

49
.github/workflows/pr_labels.yml vendored Normal file
View file

@ -0,0 +1,49 @@
name: 🔖 PR labels
on:
pull_request_target:
types: [opened, assigned]
jobs:
size-label:
name: pr_size_label
runs-on: ubuntu-latest
if: github.event.action == 'assigned' || github.event.action == 'opened'
steps:
- name: Add size label
uses: "pascalgn/size-label-action@v0.4.3"
env:
GITHUB_TOKEN: "${{ secrets.YNPUT_BOT_TOKEN }}"
IGNORED: ".gitignore\n*.md\n*.json"
with:
sizes: >
{
"0": "XS",
"100": "S",
"500": "M",
"1000": "L",
"1500": "XL",
"2500": "XXL"
}
label_prs_branch:
name: pr_branch_label
runs-on: ubuntu-latest
if: github.event.action == 'assigned' || github.event.action == 'opened'
steps:
- name: Label PRs - Branch name detection
uses: ffittschen/pr-branch-labeler@v1
with:
repo-token: ${{ secrets.YNPUT_BOT_TOKEN }}
label_prs_globe:
name: pr_globe_label
runs-on: ubuntu-latest
if: github.event.action == 'assigned' || github.event.action == 'opened'
steps:
- name: Label PRs - Globe detection
uses: actions/labeler@v4.0.3
with:
repo-token: ${{ secrets.YNPUT_BOT_TOKEN }}
configuration-path: ".github/pr-glob-labeler.yml"
sync-labels: false

View file

@ -1,4 +1,4 @@
name: Nightly Prerelease
name: Nightly Prerelease
on:
workflow_dispatch:

View file

@ -1,71 +0,0 @@
name: project-actions
on:
pull_request:
types: [opened, synchronize, assigned, review_requested]
pull_request_review:
types: [submitted]
jobs:
pr_review_requested:
name: pr_review_requested
runs-on: ubuntu-latest
if: github.event_name == 'pull_request_review' && github.event.review.state == 'changes_requested'
steps:
- name: Move PR to 'Change Requested'
uses: leonsteinhaeuser/project-beta-automations@v2.1.0
with:
gh_token: ${{ secrets.YNPUT_BOT_TOKEN }}
organization: ynput
project_id: 11
resource_node_id: ${{ github.event.pull_request.node_id }}
status_value: Change Requested
size-label:
name: pr_size_label
runs-on: ubuntu-latest
if: |
${{(github.event_name == 'pull_request' && github.event.action == 'synchronize')
|| (github.event_name == 'pull_request' && github.event.action == 'assigned')}}
steps:
- name: Add size label
uses: "pascalgn/size-label-action@v0.4.3"
env:
GITHUB_TOKEN: "${{ secrets.YNPUT_BOT_TOKEN }}"
IGNORED: ".gitignore\n*.md\n*.json"
with:
sizes: >
{
"0": "XS",
"100": "S",
"500": "M",
"1000": "L",
"1500": "XL",
"2500": "XXL"
}
label_prs_branch:
name: pr_branch_label
runs-on: ubuntu-latest
if: |
${{(github.event_name == 'pull_request' && github.event.action == 'synchronize')
|| (github.event_name == 'pull_request' && github.event.action == 'opened')}}
steps:
- name: Label PRs - Branch name detection
uses: ffittschen/pr-branch-labeler@v1
with:
repo-token: ${{ secrets.YNPUT_BOT_TOKEN }}
label_prs_globe:
name: pr_globe_label
runs-on: ubuntu-latest
if: |
${{(github.event_name == 'pull_request' && github.event.action == 'synchronize')
|| (github.event_name == 'pull_request' && github.event.action == 'opened')}}
steps:
- name: Label PRs - Globe detection
uses: actions/labeler@v4
with:
repo-token: ${{ secrets.YNPUT_BOT_TOKEN }}
configuration-path: ".github/pr-glob-labeler.yml"

View file

@ -0,0 +1,70 @@
name: 📊 Project task statuses
on:
pull_request_review:
types: [submitted]
issue_comment:
types: [created]
pull_request_review_comment:
types: [created]
jobs:
pr_review_started:
name: pr_review_started
runs-on: ubuntu-latest
# -----------------------------
# conditions are:
# - PR issue comment which is not form Ynbot
# - PR review comment which is not Hound (or any other bot)
# - PR review submitted which is not from Hound (or any other bot) and is not 'Changes requested'
# - make sure it only runs if not forked repo
# -----------------------------
if: |
(github.event_name == 'issue_comment' && github.event.pull_request.head.repo.owner.login == 'ynput' && github.event.comment.user.id != 82967070) ||
(github.event_name == 'pull_request_review_comment' && github.event.pull_request.head.repo.owner.login == 'ynput' && github.event.comment.user.type != 'Bot') ||
(github.event_name == 'pull_request_review' &&
github.event.pull_request.head.repo.owner.login == 'ynput' &&
github.event.review.state != 'changes_requested' &&
github.event.review.state != 'approved' &&
github.event.review.user.type != 'Bot')
steps:
- name: Move PR to 'Review In Progress'
uses: leonsteinhaeuser/project-beta-automations@v2.1.0
with:
gh_token: ${{ secrets.YNPUT_BOT_TOKEN }}
organization: ynput
project_id: 11
resource_node_id: ${{ github.event.pull_request.node_id || github.event.issue.node_id }}
status_value: Review In Progress
pr_review_requested:
# -----------------------------
# Resets Clickup Task status to 'In Progress' after 'Changes Requested' were submitted to PR
# It only runs if custom clickup task id was found in ref branch of PR
# -----------------------------
name: pr_review_requested
runs-on: ubuntu-latest
if: github.event_name == 'pull_request_review' && github.event.pull_request.head.repo.owner.login == 'ynput' && github.event.review.state == 'changes_requested'
steps:
- name: Set branch env
run: echo "BRANCH_NAME=${{ github.event.pull_request.head.ref}}" >> $GITHUB_ENV
- name: Get ClickUp ID from ref head name
id: get_cuID
run: |
echo ${{ env.BRANCH_NAME }}
echo "cuID=$(echo $BRANCH_NAME | sed 's/.*\/\(OP\-[0-9]\{4\}\).*/\1/')" >> $GITHUB_OUTPUT
- name: Print ClickUp ID
run: echo ${{ steps.get_cuID.outputs.cuID }}
- name: Move found Clickup task to 'Review in Progress'
if: steps.get_cuID.outputs.cuID
run: |
curl -i -X PUT \
'https://api.clickup.com/api/v2/task/${{ steps.get_cuID.outputs.cuID }}?custom_task_ids=true&team_id=${{secrets.CLICKUP_TEAM_ID}}' \
-H 'Authorization: ${{secrets.CLICKUP_API_KEY}}' \
-H 'Content-Type: application/json' \
-d '{
"status": "in progress"
}'

View file

@ -1,7 +1,7 @@
# This workflow will upload a Python Package using Twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries
name: Test Build
name: 🏗️ Test Build
on:
pull_request:

25
.github/workflows/update_bug_report.yml vendored Normal file
View file

@ -0,0 +1,25 @@
name: 🐞 Update Bug Report
on:
workflow_dispatch:
release:
# https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#release
types: [published]
jobs:
update-bug-report:
runs-on: ubuntu-latest
name: Update bug report
steps:
- uses: actions/checkout@v3
with:
ref: ${{ github.event.release.target_commitish }}
- name: Update version
uses: ynput/gha-populate-form-version@main
with:
github_token: ${{ secrets.YNPUT_BOT_TOKEN }}
registry: github
dropdown: _version
limit_to: 100
form: .github/ISSUE_TEMPLATE/bug_report.yml
commit_message: 'chore(): update bug report / version'

View file

@ -3,7 +3,7 @@
Goal is that most of functions here are called on (or with) an object
that has project name as a context (e.g. on 'ProjectEntity'?).
+ We will need more specific functions doing wery specific queires really fast.
+ We will need more specific functions doing very specific queries really fast.
"""
import re
@ -193,7 +193,7 @@ def _get_assets(
be found.
asset_names (Iterable[str]): Name assets that should be found.
parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
standard (bool): Query standart assets (type 'asset').
standard (bool): Query standard assets (type 'asset').
archived (bool): Query archived assets (type 'archived_asset').
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
@ -1185,7 +1185,7 @@ def get_representations(
standard=True,
fields=None
):
"""Representaion entities data from one project filtered by filters.
"""Representation entities data from one project filtered by filters.
Filters are additive (all conditions must pass to return subset).
@ -1231,7 +1231,7 @@ def get_archived_representations(
names_by_version_ids=None,
fields=None
):
"""Archived representaion entities data from project with applied filters.
"""Archived representation entities data from project with applied filters.
Filters are additive (all conditions must pass to return subset).

View file

@ -2,7 +2,7 @@
## Reason
Preparation for OpenPype v4 server. Goal is to remove direct mongo calls in code to prepare a little bit for different source of data for code before. To start think about database calls less as mongo calls but more universally. To do so was implemented simple wrapper around database calls to not use pymongo specific code.
Current goal is not to make universal database model which can be easily replaced with any different source of data but to make it close as possible. Current implementation of OpenPype is too tighly connected to pymongo and it's abilities so we're trying to get closer with long term changes that can be used even in current state.
Current goal is not to make universal database model which can be easily replaced with any different source of data but to make it close as possible. Current implementation of OpenPype is too tightly connected to pymongo and it's abilities so we're trying to get closer with long term changes that can be used even in current state.
## Queries
Query functions don't use full potential of mongo queries like very specific queries based on subdictionaries or unknown structures. We try to avoid these calls as much as possible because they'll probably won't be available in future. If it's really necessary a new function can be added but only if it's reasonable for overall logic. All query functions were moved to `~/client/entities.py`. Each function has arguments with available filters and possible reduce of returned keys for each entity.
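A minimal usage sketch of such a query call (the argument and field names below are assumptions inferred from the signatures shown in this diff, not a definitive API reference):

```python
# Hedged example: filters are additive, 'fields' reduces the returned keys.
from openpype.client import get_representations

project_name = "demo_project"               # assumed project name
version_ids = ["633a1c7e9f1a2b3c4d5e6f70"]  # assumed version ids

repre_docs = get_representations(
    project_name,
    version_ids=version_ids,          # only representations of these versions
    fields=["_id", "name", "files"],  # return only these keys
)
```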
@ -14,7 +14,7 @@ Changes are a little bit complicated. Mongo has many options how update can happ
Create operations expect already prepared document data, for that are prepared functions creating skeletal structures of documents (do not fill all required data), except `_id` all data should be right. Existence of entity is not validated so if the same creation operation is send n times it will create the entity n times which can cause issues.
### Update
Update operation require entity id and keys that should be changed, update dictionary must have {"key": value}. If value should be set in nested dictionary the key must have also all subkeys joined with dot `.` (e.g. `{"data": {"fps": 25}}` -> `{"data.fps": 25}`). To simplify update dictionaries were prepared functions which does that for you, their name has template `prepare_<entity type>_update_data` - they work on comparison of previous document and new document. If there is missing function for requested entity type it is because we didn't need it yet and require implementaion.
Update operation require entity id and keys that should be changed, update dictionary must have {"key": value}. If value should be set in nested dictionary the key must have also all subkeys joined with dot `.` (e.g. `{"data": {"fps": 25}}` -> `{"data.fps": 25}`). To simplify update dictionaries were prepared functions which does that for you, their name has template `prepare_<entity type>_update_data` - they work on comparison of previous document and new document. If there is missing function for requested entity type it is because we didn't need it yet and require implementation.
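A minimal sketch of the dot-notation flattening described above (a standalone, hypothetical helper; the shipped code exposes this through the `prepare_<entity type>_update_data` functions):

```python
# Compare two documents and build {'dotted.key': new_value} update data.
asset_id = "633a1c7e9f1a2b3c4d5e6f70"  # placeholder id
old_doc = {"_id": asset_id, "data": {"fps": 24, "resolutionWidth": 1920}}
new_doc = {"_id": asset_id, "data": {"fps": 25, "resolutionWidth": 1920}}


def flatten_changes(old_data, new_data, parent=""):
    changes = {}
    for key, new_value in new_data.items():
        full_key = "{}.{}".format(parent, key) if parent else key
        old_value = old_data.get(key)
        if isinstance(new_value, dict) and isinstance(old_value, dict):
            changes.update(flatten_changes(old_value, new_value, full_key))
        elif new_value != old_value:
            changes[full_key] = new_value
    return changes


print(flatten_changes(old_doc, new_doc))  # {'data.fps': 25}
```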
### Delete
Delete operation need entity id. Entity will be deleted from mongo.

View file

@ -368,7 +368,7 @@ def prepare_workfile_info_update_data(old_doc, new_doc, replace=True):
class AbstractOperation(object):
"""Base operation class.
Opration represent a call into database. The call can create, change or
Operation represent a call into database. The call can create, change or
remove data.
Args:
@ -409,7 +409,7 @@ class AbstractOperation(object):
pass
def to_data(self):
"""Convert opration to data that can be converted to json or others.
"""Convert operation to data that can be converted to json or others.
Warning:
Current state returns ObjectId objects which cannot be parsed by
@ -428,7 +428,7 @@ class AbstractOperation(object):
class CreateOperation(AbstractOperation):
"""Opeartion to create an entity.
"""Operation to create an entity.
Args:
project_name (str): On which project operation will happen.
@ -485,7 +485,7 @@ class CreateOperation(AbstractOperation):
class UpdateOperation(AbstractOperation):
"""Opeartion to update an entity.
"""Operation to update an entity.
Args:
project_name (str): On which project operation will happen.
@ -552,7 +552,7 @@ class UpdateOperation(AbstractOperation):
class DeleteOperation(AbstractOperation):
"""Opeartion to delete an entity.
"""Operation to delete an entity.
Args:
project_name (str): On which project operation will happen.

View file

@ -2,7 +2,7 @@
Idea for current dirmap implementation was used from Maya where is possible to
enter source and destination roots and maya will try each found source
in referenced file replace with each destionation paths. First path which
in referenced file replace with each destination paths. First path which
exists is used.
"""
@ -183,7 +183,7 @@ class HostDirmap(object):
project_name, remote_site
)
# dirmap has sense only with regular disk provider, in the workfile
# wont be root on cloud or sftp provider
# won't be root on cloud or sftp provider
if remote_provider != "local_drive":
remote_site = "studio"
for root_name, active_site_dir in active_overrides.items():

View file

@ -18,7 +18,7 @@ class HostBase(object):
Compared to 'avalon' concept:
What was before considered as functions in host implementation folder. The
host implementation should primarily care about adding ability of creation
(mark subsets to be published) and optionaly about referencing published
(mark subsets to be published) and optionally about referencing published
representations as containers.
Host may need extend some functionality like working with workfiles
@ -129,9 +129,9 @@ class HostBase(object):
"""Get current context information.
This method should be used to get current context of host. Usage of
this method can be crutial for host implementations in DCCs where
this method can be crucial for host implementations in DCCs where
can be opened multiple workfiles at one moment and change of context
can't be catched properly.
can't be caught properly.
Default implementation returns values from 'legacy_io.Session'.
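A hedged sketch of what that default could look like (the exact keys are an assumption based on the 'legacy_io.Session' values mentioned above):

```python
from openpype.pipeline import legacy_io


def get_current_context():
    # Values mirror the AVALON_* session keys; treat this as illustrative only.
    return {
        "project_name": legacy_io.Session.get("AVALON_PROJECT"),
        "asset_name": legacy_io.Session.get("AVALON_ASSET"),
        "task_name": legacy_io.Session.get("AVALON_TASK"),
    }
```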

View file

@ -81,7 +81,7 @@ class ILoadHost:
@abstractmethod
def get_containers(self):
"""Retreive referenced containers from scene.
"""Retrieve referenced containers from scene.
This can be implemented in hosts where referencing can be used.
@ -191,7 +191,7 @@ class IWorkfileHost:
@abstractmethod
def get_current_workfile(self):
"""Retreive path to current opened file.
"""Retrieve path to current opened file.
Returns:
str: Path to file which is currently opened.
@ -220,8 +220,8 @@ class IWorkfileHost:
Default implementation keeps workdir untouched.
Warnings:
We must handle this modification with more sofisticated way because
this can't be called out of DCC so opening of last workfile
We must handle this modification with more sophisticated way
because this can't be called out of DCC so opening of last workfile
(calculated before DCC is launched) is complicated. Also breaking
defined work template is not a good idea.
Only place where it's really used and can make sense is Maya. There
@ -302,7 +302,7 @@ class IPublishHost:
required methods.
Returns:
list[str]: Missing method implementations for new publsher
list[str]: Missing method implementations for new publisher
workflow.
"""

View file

@ -504,7 +504,7 @@ function addItemAsLayerToComp(comp_id, item_id, found_comp){
* Args:
* comp_id (int): id of target composition
* item_id (int): FootageItem.id
* found_comp (CompItem, optional): to limit quering if
* found_comp (CompItem, optional): to limit querying if
* comp already found previously
*/
var comp = found_comp || app.project.itemByID(comp_id);

View file

@ -80,7 +80,7 @@ class AfterEffectsServerStub():
Get complete stored JSON with metadata from AE.Metadata.Label
field.
It contains containers loaded by any Loader OR instances creted
It contains containers loaded by any Loader OR instances created
by Creator.
Returns:

View file

@ -24,7 +24,7 @@ from .workio import OpenFileCacher
PREVIEW_COLLECTIONS: Dict = dict()
# This seems like a good value to keep the Qt app responsive and doesn't slow
# down Blender. At least on macOS I the interace of Blender gets very laggy if
# down Blender. At least on macOS I the interface of Blender gets very laggy if
# you make it smaller.
TIMER_INTERVAL: float = 0.01 if platform.system() == "Windows" else 0.1

View file

@ -50,7 +50,7 @@ class ExtractPlayblast(publish.Extractor):
# get isolate objects list
isolate = instance.data("isolate", None)
# get ouput path
# get output path
stagingdir = self.staging_dir(instance)
filename = instance.name
path = os.path.join(stagingdir, filename)
@ -116,7 +116,6 @@ class ExtractPlayblast(publish.Extractor):
"frameStart": start,
"frameEnd": end,
"fps": fps,
"preview": True,
"tags": tags,
"camera_name": camera
}

View file

@ -773,7 +773,7 @@ class MediaInfoFile(object):
if logger:
self.log = logger
# test if `dl_get_media_info` paht exists
# test if `dl_get_media_info` path exists
self._validate_media_script_path()
# derivate other feed variables
@ -993,7 +993,7 @@ class MediaInfoFile(object):
def _validate_media_script_path(self):
if not os.path.isfile(self.MEDIA_SCRIPT_PATH):
raise IOError("Media Scirpt does not exist: `{}`".format(
raise IOError("Media Script does not exist: `{}`".format(
self.MEDIA_SCRIPT_PATH))
def _generate_media_info_file(self, fpath, feed_ext, feed_dir):

View file

@ -38,7 +38,7 @@ def install():
pyblish.register_plugin_path(PUBLISH_PATH)
register_loader_plugin_path(LOAD_PATH)
register_creator_plugin_path(CREATE_PATH)
log.info("OpenPype Flame plug-ins registred ...")
log.info("OpenPype Flame plug-ins registered ...")
# register callback for switching publishable
pyblish.register_callback("instanceToggled", on_pyblish_instance_toggled)

View file

@ -157,7 +157,7 @@ class CreatorWidget(QtWidgets.QDialog):
# convert label text to normal capitalized text with spaces
label_text = self.camel_case_split(text)
# assign the new text to lable widget
# assign the new text to label widget
label = QtWidgets.QLabel(label_text)
label.setObjectName("LineLabel")
@ -345,8 +345,8 @@ class PublishableClip:
"track": "sequence",
}
# parents search patern
parents_search_patern = r"\{([a-z]*?)\}"
# parents search pattern
parents_search_pattern = r"\{([a-z]*?)\}"
# default templates for non-ui use
rename_default = False
@ -445,7 +445,7 @@ class PublishableClip:
return self.current_segment
def _populate_segment_default_data(self):
""" Populate default formating data from segment. """
""" Populate default formatting data from segment. """
self.current_segment_default_data = {
"_folder_": "shots",
@ -538,7 +538,7 @@ class PublishableClip:
if not self.index_from_segment:
self.count_steps *= self.rename_index
hierarchy_formating_data = {}
hierarchy_formatting_data = {}
hierarchy_data = deepcopy(self.hierarchy_data)
_data = self.current_segment_default_data.copy()
if self.ui_inputs:
@ -552,7 +552,7 @@ class PublishableClip:
# mark review layer
if self.review_track and (
self.review_track not in self.review_track_default):
# if review layer is defined and not the same as defalut
# if review layer is defined and not the same as default
self.review_layer = self.review_track
# shot num calculate
@ -578,13 +578,13 @@ class PublishableClip:
# fill up pythonic expresisons in hierarchy data
for k, _v in hierarchy_data.items():
hierarchy_formating_data[k] = _v["value"].format(**_data)
hierarchy_formatting_data[k] = _v["value"].format(**_data)
else:
# if no gui mode then just pass default data
hierarchy_formating_data = hierarchy_data
hierarchy_formatting_data = hierarchy_data
tag_hierarchy_data = self._solve_tag_hierarchy_data(
hierarchy_formating_data
hierarchy_formatting_data
)
tag_hierarchy_data.update({"heroTrack": True})
@ -615,27 +615,27 @@ class PublishableClip:
# in case track name and subset name is the same then add
if self.subset_name == self.track_name:
_hero_data["subset"] = self.subset
# assing data to return hierarchy data to tag
# assign data to return hierarchy data to tag
tag_hierarchy_data = _hero_data
break
# add data to return data dict
self.marker_data.update(tag_hierarchy_data)
def _solve_tag_hierarchy_data(self, hierarchy_formating_data):
def _solve_tag_hierarchy_data(self, hierarchy_formatting_data):
""" Solve marker data from hierarchy data and templates. """
# fill up clip name and hierarchy keys
hierarchy_filled = self.hierarchy.format(**hierarchy_formating_data)
clip_name_filled = self.clip_name.format(**hierarchy_formating_data)
hierarchy_filled = self.hierarchy.format(**hierarchy_formatting_data)
clip_name_filled = self.clip_name.format(**hierarchy_formatting_data)
# remove shot from hierarchy data: is not needed anymore
hierarchy_formating_data.pop("shot")
hierarchy_formatting_data.pop("shot")
return {
"newClipName": clip_name_filled,
"hierarchy": hierarchy_filled,
"parents": self.parents,
"hierarchyData": hierarchy_formating_data,
"hierarchyData": hierarchy_formatting_data,
"subset": self.subset,
"family": self.subset_family,
"families": [self.family]
@ -650,17 +650,17 @@ class PublishableClip:
type
)
# first collect formating data to use for formating template
formating_data = {}
# first collect formatting data to use for formatting template
formatting_data = {}
for _k, _v in self.hierarchy_data.items():
value = _v["value"].format(
**self.current_segment_default_data)
formating_data[_k] = value
formatting_data[_k] = value
return {
"entity_type": entity_type,
"entity_name": template.format(
**formating_data
**formatting_data
)
}
@ -668,9 +668,9 @@ class PublishableClip:
""" Create parents and return it in list. """
self.parents = []
patern = re.compile(self.parents_search_patern)
pattern = re.compile(self.parents_search_pattern)
par_split = [(patern.findall(t).pop(), t)
par_split = [(pattern.findall(t).pop(), t)
for t in self.hierarchy.split("/")]
for type, template in par_split:
@ -902,22 +902,22 @@ class OpenClipSolver(flib.MediaInfoFile):
):
return
formating_data = self._update_formating_data(
formatting_data = self._update_formatting_data(
layerName=layer_name,
layerUID=layer_uid
)
name_obj.text = StringTemplate(
self.layer_rename_template
).format(formating_data)
).format(formatting_data)
def _update_formating_data(self, **kwargs):
""" Updating formating data for layer rename
def _update_formatting_data(self, **kwargs):
""" Updating formatting data for layer rename
Attributes:
key=value (optional): will be included to formating data
key=value (optional): will be included to formatting data
as {key: value}
Returns:
dict: anatomy context data for formating
dict: anatomy context data for formatting
"""
self.log.debug(">> self.clip_data: {}".format(self.clip_data))
clip_name_obj = self.clip_data.find("name")

View file

@ -203,7 +203,7 @@ class WireTapCom(object):
list: all available volumes in server
Raises:
AttributeError: unable to get any volumes childs from server
AttributeError: unable to get any volumes children from server
"""
root = WireTapNodeHandle(self._server, "/volumes")
children_num = WireTapInt(0)

View file

@ -108,7 +108,7 @@ def _sync_utility_scripts(env=None):
shutil.copy2(src, dst)
except (PermissionError, FileExistsError) as msg:
log.warning(
"Not able to coppy to: `{}`, Problem with: `{}`".format(
"Not able to copy to: `{}`, Problem with: `{}`".format(
dst,
msg
)

View file

@ -153,7 +153,7 @@ class FlamePrelaunch(PreLaunchHook):
def _add_pythonpath(self):
pythonpath = self.launch_context.env.get("PYTHONPATH")
# separate it explicity by `;` that is what we use in settings
# separate it explicitly by `;` that is what we use in settings
new_pythonpath = self.flame_pythonpath.split(os.pathsep)
new_pythonpath += pythonpath.split(os.pathsep)

View file

@ -209,7 +209,7 @@ class CreateShotClip(opfapi.Creator):
"type": "QComboBox",
"label": "Subset Name",
"target": "ui",
"toolTip": "chose subset name patern, if [ track name ] is selected, name of track layer will be used", # noqa
"toolTip": "chose subset name pattern, if [ track name ] is selected, name of track layer will be used", # noqa
"order": 0},
"subsetFamily": {
"value": ["plate", "take"],

View file

@ -61,9 +61,9 @@ class LoadClip(opfapi.ClipLoader):
self.layer_rename_template = self.layer_rename_template.replace(
"output", "representation")
formating_data = deepcopy(context["representation"]["context"])
formatting_data = deepcopy(context["representation"]["context"])
clip_name = StringTemplate(self.clip_name_template).format(
formating_data)
formatting_data)
# convert colorspace with ocio to flame mapping
# in imageio flame section
@ -88,7 +88,7 @@ class LoadClip(opfapi.ClipLoader):
"version": "v{:0>3}".format(version_name),
"layer_rename_template": self.layer_rename_template,
"layer_rename_patterns": self.layer_rename_patterns,
"context_data": formating_data
"context_data": formatting_data
}
self.log.debug(pformat(
loading_context

View file

@ -58,11 +58,11 @@ class LoadClipBatch(opfapi.ClipLoader):
self.layer_rename_template = self.layer_rename_template.replace(
"output", "representation")
formating_data = deepcopy(context["representation"]["context"])
formating_data["batch"] = self.batch.name.get_value()
formatting_data = deepcopy(context["representation"]["context"])
formatting_data["batch"] = self.batch.name.get_value()
clip_name = StringTemplate(self.clip_name_template).format(
formating_data)
formatting_data)
# convert colorspace with ocio to flame mapping
# in imageio flame section
@ -88,7 +88,7 @@ class LoadClipBatch(opfapi.ClipLoader):
"version": "v{:0>3}".format(version_name),
"layer_rename_template": self.layer_rename_template,
"layer_rename_patterns": self.layer_rename_patterns,
"context_data": formating_data
"context_data": formatting_data
}
self.log.debug(pformat(
loading_context

View file

@ -203,7 +203,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
self._get_xml_preset_attrs(
attributes, split)
# add xml overides resolution to instance data
# add xml overrides resolution to instance data
xml_overrides = attributes["xml_overrides"]
if xml_overrides.get("width"):
attributes.update({
@ -284,7 +284,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
self.log.debug("__ head: `{}`".format(head))
self.log.debug("__ tail: `{}`".format(tail))
# HACK: it is here to serve for versions bellow 2021.1
# HACK: it is here to serve for versions below 2021.1
if not any([head, tail]):
retimed_attributes = get_media_range_with_retimes(
otio_clip, handle_start, handle_end)

View file

@ -227,7 +227,7 @@ class ExtractSubsetResources(publish.Extractor):
self.hide_others(
exporting_clip, segment_name, s_track_name)
# change name patern
# change name pattern
name_patern_xml = (
"<segment name>_<shot name>_{}.").format(
unique_name)
@ -358,7 +358,7 @@ class ExtractSubsetResources(publish.Extractor):
representation_data["stagingDir"] = n_stage_dir
files = n_files
# add files to represetation but add
# add files to representation but add
# imagesequence as list
if (
# first check if path in files is not mov extension

View file

@ -50,7 +50,7 @@ class IntegrateBatchGroup(pyblish.api.InstancePlugin):
self._load_clip_to_context(instance, bgroup)
def _add_nodes_to_batch_with_links(self, instance, task_data, batch_group):
# get write file node properties > OrederDict because order does mater
# get write file node properties > OrederDict because order does matter
write_pref_data = self._get_write_prefs(instance, task_data)
batch_nodes = [

View file

@ -72,8 +72,7 @@ class FusionSetFrameRangeWithHandlesLoader(load.LoaderPlugin):
return
# Include handles
handles = version_data.get("handles", 0)
start -= handles
end += handles
start -= version_data.get("handleStart", 0)
end += version_data.get("handleEnd", 0)
lib.update_frame_range(start, end)

View file

@ -432,11 +432,11 @@ copy_files = """function copyFile(srcFilename, dstFilename)
import_files = """function %s_import_files()
{
var PNGTransparencyMode = 0; // Premultiplied wih Black
var TGATransparencyMode = 0; // Premultiplied wih Black
var SGITransparencyMode = 0; // Premultiplied wih Black
var PNGTransparencyMode = 0; // Premultiplied with Black
var TGATransparencyMode = 0; // Premultiplied with Black
var SGITransparencyMode = 0; // Premultiplied with Black
var LayeredPSDTransparencyMode = 1; // Straight
var FlatPSDTransparencyMode = 2; // Premultiplied wih White
var FlatPSDTransparencyMode = 2; // Premultiplied with White
function getUniqueColumnName( column_prefix )
{

View file

@ -142,10 +142,10 @@ function Client() {
};
/**
* Process recieved request. This will eval recieved function and produce
* Process received request. This will eval received function and produce
* results.
* @function
* @param {object} request - recieved request JSON
* @param {object} request - received request JSON
* @return {object} result of evaled function.
*/
self.processRequest = function(request) {
@ -245,7 +245,7 @@ function Client() {
var request = JSON.parse(to_parse);
var mid = request.message_id;
// self.logDebug('[' + mid + '] - Request: ' + '\n' + JSON.stringify(request));
self.logDebug('[' + mid + '] Recieved.');
self.logDebug('[' + mid + '] Received.');
request.result = self.processRequest(request);
self.logDebug('[' + mid + '] Processing done.');
@ -286,8 +286,8 @@ function Client() {
/** Harmony 21.1 doesn't have QDataStream anymore.
This means we aren't able to write bytes into QByteArray so we had
modify how content lenght is sent do the server.
Content lenght is sent as string of 8 char convertible into integer
modify how content length is sent do the server.
Content length is sent as string of 8 char convertible into integer
(instead of 0x00000001[4 bytes] > "000000001"[8 bytes]) */
var codec_name = new QByteArray().append("UTF-8");
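A minimal sketch of how a receiving end might parse this framing (assuming plain TCP sockets on the Python side; the helper names are illustrative, not the actual server implementation):

```python
import json
import socket


def _recv_exact(sock: socket.socket, count: int) -> bytes:
    """Read exactly 'count' bytes or raise if the socket closes early."""
    data = b""
    while len(data) < count:
        chunk = sock.recv(count - len(data))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        data += chunk
    return data


def read_message(sock: socket.socket) -> dict:
    # 8 ASCII characters carry the payload length, e.g. b"00000042".
    length = int(_recv_exact(sock, 8).decode("ascii"))
    return json.loads(_recv_exact(sock, length).decode("utf-8"))
```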
@ -476,6 +476,25 @@ function start() {
action.triggered.connect(self.onSubsetManage);
}
/**
* Set scene settings from DB to the scene
*/
self.onSetSceneSettings = function() {
app.avalonClient.send(
{
"module": "openpype.hosts.harmony.api",
"method": "ensure_scene_settings",
"args": []
},
false
);
};
// add Set Scene Settings
if (app.avalonMenu == null) {
action = menu.addAction('Set Scene Settings...');
action.triggered.connect(self.onSetSceneSettings);
}
/**
* Show Experimental dialog
*/

View file

@ -242,9 +242,15 @@ def launch_zip_file(filepath):
print(f"Localizing {filepath}")
temp_path = get_local_harmony_path(filepath)
scene_name = os.path.basename(temp_path)
if os.path.exists(os.path.join(temp_path, scene_name)):
# unzipped with duplicated scene_name
temp_path = os.path.join(temp_path, scene_name)
scene_path = os.path.join(
temp_path, os.path.basename(temp_path) + ".xstage"
temp_path, scene_name + ".xstage"
)
unzip = False
if os.path.exists(scene_path):
# Check remote scene is newer than local.
@ -262,6 +268,10 @@ def launch_zip_file(filepath):
with _ZipFile(filepath, "r") as zip_ref:
zip_ref.extractall(temp_path)
if os.path.exists(os.path.join(temp_path, scene_name)):
# unzipped with duplicated scene_name
temp_path = os.path.join(temp_path, scene_name)
# Close existing scene.
if ProcessContext.pid:
os.kill(ProcessContext.pid, signal.SIGTERM)
@ -309,7 +319,7 @@ def launch_zip_file(filepath):
)
if not os.path.exists(scene_path):
print("error: cannot determine scene file")
print("error: cannot determine scene file {}".format(scene_path))
ProcessContext.server.stop()
return
@ -394,7 +404,7 @@ def get_scene_data():
"function": "AvalonHarmony.getSceneData"
})["result"]
except json.decoder.JSONDecodeError:
# Means no sceen metadata has been made before.
# Means no scene metadata has been made before.
return {}
except KeyError:
# Means no existing scene metadata has been made.
@ -465,7 +475,7 @@ def imprint(node_id, data, remove=False):
Example:
>>> from openpype.hosts.harmony.api import lib
>>> node = "Top/Display"
>>> data = {"str": "someting", "int": 1, "float": 0.32, "bool": True}
>>> data = {"str": "something", "int": 1, "float": 0.32, "bool": True}
>>> lib.imprint(layer, data)
"""
scene_data = get_scene_data()
@ -550,7 +560,7 @@ def save_scene():
method prevents this double request and safely saves the scene.
"""
# Need to turn off the backgound watcher else the communication with
# Need to turn off the background watcher else the communication with
# the server gets spammed with two requests at the same time.
scene_path = send(
{"function": "AvalonHarmony.saveScene"})["result"]

View file

@ -142,7 +142,7 @@ def application_launch(event):
harmony.send({"script": script})
inject_avalon_js()
ensure_scene_settings()
# ensure_scene_settings()
check_inventory()

View file

@ -61,7 +61,7 @@ class Server(threading.Thread):
"module": (str), # Module of method.
"method" (str), # Name of method in module.
"args" (list), # Arguments to pass to method.
"kwargs" (dict), # Keywork arguments to pass to method.
"kwargs" (dict), # Keyword arguments to pass to method.
"reply" (bool), # Optional wait for method completion.
}
"""

View file

@ -25,8 +25,9 @@ class ExtractRender(pyblish.api.InstancePlugin):
application_path = instance.context.data.get("applicationPath")
scene_path = instance.context.data.get("scenePath")
frame_rate = instance.context.data.get("frameRate")
frame_start = instance.context.data.get("frameStart")
frame_end = instance.context.data.get("frameEnd")
# real value from timeline
frame_start = instance.context.data.get("frameStartHandle")
frame_end = instance.context.data.get("frameEndHandle")
audio_path = instance.context.data.get("audioPath")
if audio_path and os.path.exists(audio_path):
@ -55,9 +56,13 @@ class ExtractRender(pyblish.api.InstancePlugin):
# Execute rendering. Ignoring error cause Harmony returns error code
# always.
self.log.info(f"running [ {application_path} -batch {scene_path}")
args = [application_path, "-batch",
"-frames", str(frame_start), str(frame_end),
"-scene", scene_path]
self.log.info(f"running [ {application_path} {' '.join(args)}")
proc = subprocess.Popen(
[application_path, "-batch", scene_path],
args,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
stdin=subprocess.PIPE

View file

@ -60,7 +60,8 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
# which is available on 'context.data["assetEntity"]'
# - the same approach can be used in 'ValidateSceneSettingsRepair'
expected_settings = harmony.get_asset_settings()
self.log.info("scene settings from DB:".format(expected_settings))
self.log.info("scene settings from DB:{}".format(expected_settings))
expected_settings.pop("entityType") # not useful for the validation
expected_settings = _update_frames(dict.copy(expected_settings))
expected_settings["frameEndHandle"] = expected_settings["frameEnd"] +\
@ -68,21 +69,32 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
if (any(re.search(pattern, os.getenv('AVALON_TASK'))
for pattern in self.skip_resolution_check)):
self.log.info("Skipping resolution check because of "
"task name and pattern {}".format(
self.skip_resolution_check))
expected_settings.pop("resolutionWidth")
expected_settings.pop("resolutionHeight")
entity_type = expected_settings.get("entityType")
if (any(re.search(pattern, entity_type)
if (any(re.search(pattern, os.getenv('AVALON_TASK'))
for pattern in self.skip_timelines_check)):
self.log.info("Skipping frames check because of "
"task name and pattern {}".format(
self.skip_timelines_check))
expected_settings.pop('frameStart', None)
expected_settings.pop('frameEnd', None)
expected_settings.pop("entityType") # not useful after the check
expected_settings.pop('frameStartHandle', None)
expected_settings.pop('frameEndHandle', None)
asset_name = instance.context.data['anatomyData']['asset']
if any(re.search(pattern, asset_name)
for pattern in self.frame_check_filter):
expected_settings.pop("frameEnd")
self.log.info("Skipping frames check because of "
"task name and pattern {}".format(
self.frame_check_filter))
expected_settings.pop('frameStart', None)
expected_settings.pop('frameEnd', None)
expected_settings.pop('frameStartHandle', None)
expected_settings.pop('frameEndHandle', None)
# handle case where ftrack uses only two decimal places
# 23.976023976023978 vs. 23.98
@ -99,6 +111,7 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
"frameEnd": instance.context.data["frameEnd"],
"handleStart": instance.context.data.get("handleStart"),
"handleEnd": instance.context.data.get("handleEnd"),
"frameStartHandle": instance.context.data.get("frameStartHandle"),
"frameEndHandle": instance.context.data.get("frameEndHandle"),
"resolutionWidth": instance.context.data.get("resolutionWidth"),
"resolutionHeight": instance.context.data.get("resolutionHeight"),

View file

@ -6,7 +6,7 @@ Ever tried to make a simple script for toonboom Harmony, then got stumped by the
Toonboom Harmony is a very powerful software, with hundreds of functions and tools, and it unlocks a great amount of possibilities for animation studios around the globe. And... being the produce of the hard work of a small team forced to prioritise, it can also be a bit rustic at times!
We are users at heart, animators and riggers, who just want to interact with the software as simply as possible. Simplicity is at the heart of the design of openHarmony. But we also are developpers, and we made the library for people like us who can't resist tweaking the software and bend it in all possible ways, and are looking for powerful functions to help them do it.
We are users at heart, animators and riggers, who just want to interact with the software as simply as possible. Simplicity is at the heart of the design of openHarmony. But we also are developers, and we made the library for people like us who can't resist tweaking the software and bend it in all possible ways, and are looking for powerful functions to help them do it.
This library's aim is to create a more direct way to interact with Toonboom through scripts, by providing a more intuitive way to access its elements, and help with the cumbersome and repetitive tasks as well as help unlock untapped potential in its many available systems. So we can go from having to do things like this:
@ -78,7 +78,7 @@ All you have to do is call :
```javascript
include("openHarmony.js");
```
at the beggining of your script.
at the beginning of your script.
You can ask your users to download their copy of the library and store it alongside, or bundle it as you wish as long as you include the license file provided on this repository.
@ -129,7 +129,7 @@ Check that the environment variable `LIB_OPENHARMONY_PATH` is set correctly to t
## How to add openHarmony to vscode intellisense for autocompletion
Although not fully supported, you can get most of the autocompletion features to work by adding the following lines to a `jsconfig.json` file placed at the root of your working folder.
The paths need to be relative which means the openHarmony source code must be placed directly in your developping environnement.
The paths need to be relative which means the openHarmony source code must be placed directly in your developping environment.
For example, if your working folder contains the openHarmony source in a folder called `OpenHarmony` and your working scripts in a folder called `myScripts`, place the `jsconfig.json` file at the root of the folder and add these lines to the file:

View file

@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -78,7 +78,7 @@
* $.log("hello"); // prints out a message to the MessageLog.
* var myPoint = new $.oPoint(0,0,0); // create a new class instance from an openHarmony class.
*
* // function members of the $ objects get published to the global scope, which means $ can be ommited
* // function members of the $ objects get published to the global scope, which means $ can be omitted
*
* log("hello");
* var myPoint = new oPoint(0,0,0); // This is all valid
@ -118,7 +118,7 @@ Object.defineProperty( $, "directory", {
/**
* Wether Harmony is run with the interface or simply from command line
* Whether Harmony is run with the interface or simply from command line
*/
Object.defineProperty( $, "batchMode", {
get: function(){

View file

@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -67,7 +67,7 @@
* @hideconstructor
* @namespace
* @example
* // To check wether an action is available, call the synthax:
* // To check whether an action is available, call the synthax:
* Action.validate (<actionName>, <responder>);
*
* // To launch an action, call the synthax:

View file

@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -409,7 +409,7 @@ $.oApp.prototype.getToolByName = function(toolName){
/**
* returns the list of stencils useable by the specified tool
* returns the list of stencils usable by the specified tool
* @param {$.oTool} tool the tool object we want valid stencils for
* @return {$.oStencil[]} the list of stencils compatible with the specified tool
*/

View file

@ -4,7 +4,7 @@
// openHarmony Library v0.01
//
//
// Developped by Mathieu Chaptel, Chris Fourney...
// Developed by Mathieu Chaptel, Chris Fourney...
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -338,7 +338,7 @@ Object.defineProperty($.oAttribute.prototype, "useSeparate", {
* Returns the default value of the attribute for most keywords
* @name $.oAttribute#defaultValue
* @type {bool}
* @todo switch the implentation to types?
* @todo switch the implementation to types?
* @example
* // to reset an attribute to its default value:
* // (mostly used for position/angle/skew parameters of pegs and drawing nodes)
@ -449,7 +449,7 @@ $.oAttribute.prototype.getLinkedColumns = function(){
/**
* Recursively sets an attribute to the same value as another. Both must have the same keyword.
* @param {bool} [duplicateColumns=false] In the case that the attribute has a column, wether to duplicate the column before linking
* @param {bool} [duplicateColumns=false] In the case that the attribute has a column, whether to duplicate the column before linking
* @private
*/
$.oAttribute.prototype.setToAttributeValue = function(attributeToCopy, duplicateColumns){

View file

@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//

View file

@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -158,7 +158,7 @@ $.oColorValue.prototype.fromColorString = function (hexString){
/**
* Uses a color integer (used in backdrops) and parses the INT; applies the RGBA components of the INT to thos oColorValue
* Uses a color integer (used in backdrops) and parses the INT; applies the RGBA components of the INT to the oColorValue
* @param { int } colorInt 24 bit-shifted integer containing RGBA values
*/
$.oColorValue.prototype.parseColorFromInt = function(colorInt){

View file

@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//

View file

@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//

View file

@ -5,7 +5,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -17,7 +17,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -250,7 +250,7 @@ $.oDialog.prototype.prompt = function( labelText, title, prefilledText){
/**
* Prompts with a file selector window
* @param {string} [text="Select a file:"] The title of the confirmation dialog.
* @param {string} [filter="*"] The filter for the file type and/or file name that can be selected. Accepts wildcard charater "*".
* @param {string} [filter="*"] The filter for the file type and/or file name that can be selected. Accepts wildcard character "*".
* @param {string} [getExisting=true] Whether to select an existing file or a save location
* @param {string} [acceptMultiple=false] Whether or not selecting more than one file is ok. Is ignored if getExisting is falses.
* @param {string} [startDirectory] The directory showed at the opening of the dialog.
@ -327,14 +327,14 @@ $.oDialog.prototype.browseForFolder = function(text, startDirectory){
* @constructor
* @classdesc An simple progress dialog to display the progress of a task.
* To react to the user clicking the cancel button, connect a function to $.oProgressDialog.canceled() signal.
* When $.batchmode is true, the progress will be outputed as a "Progress : value/range" string to the Harmony stdout.
* When $.batchmode is true, the progress will be outputted as a "Progress : value/range" string to the Harmony stdout.
* @param {string} [labelText] The text displayed above the progress bar.
* @param {string} [range=100] The maximum value that represents a full progress bar.
* @param {string} [title] The title of the dialog
* @param {bool} [show=false] Whether to immediately show the dialog.
*
* @property {bool} wasCanceled Whether the progress bar was cancelled.
* @property {$.oSignal} canceled A Signal emited when the dialog is canceled. Can be connected to a callback.
* @property {$.oSignal} canceled A Signal emitted when the dialog is canceled. Can be connected to a callback.
*/
$.oProgressDialog = function( labelText, range, title, show ){
if (typeof title === 'undefined') var title = "Progress";
@ -608,7 +608,7 @@ $.oPieMenu = function( name, widgets, show, minAngle, maxAngle, radius, position
this.maxAngle = maxAngle;
this.globalCenter = position;
// how wide outisde the icons is the slice drawn
// how wide outside the icons is the slice drawn
this._circleMargin = 30;
// set these values before calling show() to customize the menu appearance
@ -974,7 +974,7 @@ $.oPieMenu.prototype.getMenuRadius = function(){
var _minRadius = UiLoader.dpiScale(30);
var _speed = 10; // the higher the value, the slower the progression
// hyperbolic tangent function to determin the radius
// hyperbolic tangent function to determine the radius
var exp = Math.exp(2*itemsNumber/_speed);
var _radius = ((exp-1)/(exp+1))*_maxRadius+_minRadius;
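The expression above is a scaled hyperbolic tangent; as a sketch (with n the item count, s the _speed constant, and the min/max radii from the surrounding code) it evaluates to

\[ r(n) = \frac{e^{2n/s} - 1}{e^{2n/s} + 1}\, r_{\max} + r_{\min} = \tanh\!\left(\tfrac{n}{s}\right) r_{\max} + r_{\min}, \]

so the menu radius grows with the number of items but saturates just below _maxRadius + _minRadius.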
@ -1383,7 +1383,7 @@ $.oActionButton.prototype.activate = function(){
* This class is a subclass of QPushButton and all the methods from that class are available to modify this button.
* @param {string} paletteName The name of the palette that contains the color
* @param {string} colorName The name of the color (if more than one is present, will pick the first match)
* @param {bool} showName Wether to display the name of the color on the button
* @param {bool} showName Whether to display the name of the color on the button
* @param {QWidget} parent The parent QWidget for the button. Automatically set during initialisation of the menu.
*
*/
@ -1437,7 +1437,7 @@ $.oColorButton.prototype.activate = function(){
* @name $.oScriptButton
* @constructor
* @classdescription This subclass of QPushButton provides an easy way to create a button for a widget that will launch a function from another script file.<br>
* The buttons created this way automatically load the icon named after the script if it finds one named like the funtion in a script-icons folder next to the script file.<br>
* The buttons created this way automatically load the icon named after the script if it finds one named like the function in a script-icons folder next to the script file.<br>
* It will also automatically set the callback to lanch the function from the script.<br>
* This class is a subclass of QPushButton and all the methods from that class are available to modify this button.
* @param {string} scriptFile The path to the script file that will be launched

View file

@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -426,7 +426,7 @@ Object.defineProperty($.oDrawing.prototype, 'drawingData', {
/**
* Import a given file into an existing drawing.
* @param {$.oFile} file The path to the file
* @param {bool} [convertToTvg=false] Wether to convert the bitmap to the tvg format (this doesn't vectorise the drawing)
* @param {bool} [convertToTvg=false] Whether to convert the bitmap to the tvg format (this doesn't vectorise the drawing)
*
* @return { $.oFile } the oFile object pointing to the drawing file after being it has been imported into the element folder.
*/
@ -878,8 +878,8 @@ $.oArtLayer.prototype.drawCircle = function(center, radius, lineStyle, fillStyle
* @param {$.oVertex[]} path an array of $.oVertex objects that describe a path.
* @param {$.oLineStyle} [lineStyle] the line style to draw with. (By default, will use the current stencil selection)
* @param {$.oFillStyle} [fillStyle] the fill information for the path. (By default, will use the current palette selection)
* @param {bool} [polygon] Wether bezier handles should be created for the points in the path (ignores "onCurve" properties of oVertex from path)
* @param {bool} [createUnderneath] Wether the new shape will appear on top or underneath the contents of the layer. (not working yet)
* @param {bool} [polygon] Whether bezier handles should be created for the points in the path (ignores "onCurve" properties of oVertex from path)
* @param {bool} [createUnderneath] Whether the new shape will appear on top or underneath the contents of the layer. (not working yet)
*/
$.oArtLayer.prototype.drawShape = function(path, lineStyle, fillStyle, polygon, createUnderneath){
if (typeof fillStyle === 'undefined') var fillStyle = new this.$.oFillStyle();
@ -959,7 +959,7 @@ $.oArtLayer.prototype.drawContour = function(path, fillStyle){
* @param {float} width the width of the rectangle.
* @param {float} height the height of the rectangle.
* @param {$.oLineStyle} lineStyle a line style to use for the rectangle stroke.
* @param {$.oFillStyle} fillStyle a fill style to use for the rectange fill.
* @param {$.oFillStyle} fillStyle a fill style to use for the rectangle fill.
* @returns {$.oShape} the shape containing the added stroke.
*/
$.oArtLayer.prototype.drawRectangle = function(x, y, width, height, lineStyle, fillStyle){
@ -1514,7 +1514,7 @@ Object.defineProperty($.oStroke.prototype, "path", {
/**
* The oVertex that are on the stroke (Bezier handles exluded.)
* The oVertex that are on the stroke (Bezier handles excluded.)
* The first is repeated at the last position when the stroke is closed.
* @name $.oStroke#points
* @type {$.oVertex[]}
@ -1583,7 +1583,7 @@ Object.defineProperty($.oStroke.prototype, "style", {
/**
* wether the stroke is a closed shape.
* whether the stroke is a closed shape.
* @name $.oStroke#closed
* @type {bool}
*/
@ -1919,7 +1919,7 @@ $.oContour.prototype.toString = function(){
* @constructor
* @classdesc
* The $.oVertex class represents a single control point on a stroke. This class is used to get the index of the point in the stroke path sequence, as well as its position as a float along the stroke's length.
* The onCurve property describes wether this control point is a bezier handle or a point on the curve.
* The onCurve property describes whether this control point is a bezier handle or a point on the curve.
*
* @param {$.oStroke} stroke the stroke that this vertex belongs to
* @param {float} x the x coordinate of the vertex, in drawing space

View file

@ -4,7 +4,7 @@
// openHarmony Library v0.01
//
//
// Developped by Mathieu Chaptel, Chris Fourney...
// Developed by Mathieu Chaptel, Chris Fourney...
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//

View file

@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -509,7 +509,7 @@ Object.defineProperty($.oFile.prototype, 'fullName', {
/**
* The name of the file without extenstion.
* The name of the file without extension.
* @name $.oFile#name
* @type {string}
*/

View file

@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -263,7 +263,7 @@ Object.defineProperty($.oFrame.prototype, 'duration', {
return _sceneLength;
}
// walk up the frames of the scene to the next keyFrame to determin duration
// walk up the frames of the scene to the next keyFrame to determine duration
var _frames = this.column.frames
for (var i=this.frameNumber+1; i<_sceneLength; i++){
if (_frames[i].isKeyframe) return _frames[i].frameNumber - _startFrame;
@ -426,7 +426,7 @@ Object.defineProperty($.oFrame.prototype, 'velocity', {
* easeIn : a $.oPoint object representing the left handle for bezier columns, or a {point, ease} object for ease columns.
* easeOut : a $.oPoint object representing the left handle for bezier columns, or a {point, ease} object for ease columns.
* continuity : the type of bezier used by the point.
* constant : wether the frame is interpolated or a held value.
* constant : whether the frame is interpolated or a held value.
* @name $.oFrame#ease
* @type {oPoint/object}
*/
@ -520,7 +520,7 @@ Object.defineProperty($.oFrame.prototype, 'easeOut', {
/**
* Determines the frame's continuity setting. Can take the values "CORNER", (two independant bezier handles on each side), "SMOOTH"(handles are aligned) or "STRAIGHT" (no handles and in straight lines).
* Determines the frame's continuity setting. Can take the values "CORNER", (two independent bezier handles on each side), "SMOOTH"(handles are aligned) or "STRAIGHT" (no handles and in straight lines).
* @name $.oFrame#continuity
* @type {string}
*/

View file

@ -4,7 +4,7 @@
// openHarmony Library v0.01
//
//
// Developped by Mathieu Chaptel, Chris Fourney...
// Developed by Mathieu Chaptel, Chris Fourney...
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -516,5 +516,5 @@ Object.defineProperty($.oList.prototype, 'toString', {
//Needs all filtering, limiting. mapping, pop, concat, join, ect
//Needs all filtering, limiting. mapping, pop, concat, join, etc
//Speed up by finessing the way it extends and tracks the enumerable properties.

View file

@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -193,7 +193,7 @@ $.oPoint.prototype.pointSubtract = function( sub_pt ){
/**
* Subtracts the point to the coordinates of the current oPoint and returns a new oPoint with the result.
* @param {$.oPoint} point The point to subtract to this point.
* @returns {$.oPoint} a new independant oPoint.
* @returns {$.oPoint} a new independent oPoint.
*/
$.oPoint.prototype.subtractPoint = function( point ){
var x = this.x - point.x;
@ -298,9 +298,9 @@ $.oPoint.prototype.convertToWorldspace = function(){
/**
* Linearily Interpolate between this (0.0) and the provided point (1.0)
* Linearly Interpolate between this (0.0) and the provided point (1.0)
* @param {$.oPoint} point The target point at 100%
* @param {double} perc 0-1.0 value to linearily interp
* @param {double} perc 0-1.0 value to linearly interp
*
* @return: { $.oPoint } The interpolated value.
*/
@ -410,9 +410,9 @@ $.oBox.prototype.include = function(box){
/**
* Checks wether the box contains another $.oBox.
* Checks whether the box contains another $.oBox.
* @param {$.oBox} box The $.oBox to check for.
* @param {bool} [partial=false] wether to accept partially contained boxes.
* @param {bool} [partial=false] whether to accept partially contained boxes.
*/
$.oBox.prototype.contains = function(box, partial){
if (typeof partial === 'undefined') var partial = false;
@ -537,7 +537,7 @@ $.oMatrix.prototype.toString = function(){
* @classdesc The $.oVector is a replacement for the Vector3d objects of Harmony.
* @param {float} x a x coordinate for this vector.
* @param {float} y a y coordinate for this vector.
* @param {float} [z=0] a z coordinate for this vector. If ommited, will be set to 0 and vector will be 2D.
* @param {float} [z=0] a z coordinate for this vector. If omitted, will be set to 0 and vector will be 2D.
*/
$.oVector = function(x, y, z){
if (typeof z === "undefined" || isNaN(z)) var z = 0;

View file

@ -4,7 +4,7 @@
// openHarmony Library v0.01
//
//
// Developped by Mathieu Chaptel, Chris Fourney...
// Developed by Mathieu Chaptel, Chris Fourney...
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//

View file

@ -4,7 +4,7 @@
// openHarmony Library v0.01
//
//
// Developped by Mathieu Chaptel, Chris Fourney...
// Developed by Mathieu Chaptel, Chris Fourney...
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -54,7 +54,7 @@
/**
* The $.oUtils helper class -- providing generic utilities. Doesn't need instanciation.
* The $.oUtils helper class -- providing generic utilities. Doesn't need instantiation.
* @classdesc $.oUtils utility Class
*/
$.oUtils = function(){

View file

@ -4,7 +4,7 @@
// openHarmony Library v0.01
//
//
// Developped by Mathieu Chaptel, Chris Fourney...
// Developed by Mathieu Chaptel, Chris Fourney...
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -87,7 +87,7 @@ $.oNetwork = function( ){
* @param {function} callback_func Providing a callback function prevents blocking, and will respond on this function. The callback function is in form func( results ){}
* @param {bool} use_json In the event of a JSON api, this will return an object converted from the returned JSON.
*
* @return: {string/object} The resulting object/string from the query -- otherwise a bool as false when an error occured..
* @return: {string/object} The resulting object/string from the query -- otherwise a bool as false when an error occurred..
*/
$.oNetwork.prototype.webQuery = function ( address, callback_func, use_json ){
if (typeof callback_func === 'undefined') var callback_func = false;
@ -272,7 +272,7 @@ $.oNetwork.prototype.webQuery = function ( address, callback_func, use_json ){
* @param {function} path The local file path to save the download.
* @param {bool} replace Replace the file if it exists.
*
* @return: {string/object} The resulting object/string from the query -- otherwise a bool as false when an error occured..
* @return: {string/object} The resulting object/string from the query -- otherwise a bool as false when an error occurred..
*/
$.oNetwork.prototype.downloadSingle = function ( address, path, replace ){
if (typeof replace === 'undefined') var replace = false;

View file

@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -562,7 +562,7 @@ Object.defineProperty($.oNode.prototype, 'height', {
/**
* The list of oNodeLinks objects descibing the connections to the inport of this node, in order of inport.
* The list of oNodeLinks objects describing the connections to the inport of this node, in order of inport.
* @name $.oNode#inLinks
* @readonly
* @deprecated returns $.oNodeLink instances but $.oLink is preferred. Use oNode.getInLinks() instead.
@ -658,7 +658,7 @@ Object.defineProperty($.oNode.prototype, 'outPorts', {
/**
* The list of oNodeLinks objects descibing the connections to the outports of this node, in order of outport.
* The list of oNodeLinks objects describing the connections to the outports of this node, in order of outport.
* @name $.oNode#outLinks
* @readonly
* @type {$.oNodeLink[]}
@ -1666,7 +1666,7 @@ $.oNode.prototype.refreshAttributes = function( ){
* It represents peg nodes in the scene.
* @constructor
* @augments $.oNode
* @classdesc Peg Moudle Class
* @classdesc Peg Module Class
* @param {string} path Path to the node in the network.
* @param {oScene} oSceneObject Access to the oScene object of the DOM.
*/
@ -1886,7 +1886,7 @@ $.oDrawingNode.prototype.getDrawingAtFrame = function(frameNumber){
/**
* Gets the list of palettes containing colors used by a drawing node. This only gets palettes with the first occurence of the colors.
* Gets the list of palettes containing colors used by a drawing node. This only gets palettes with the first occurrence of the colors.
* @return {$.oPalette[]} The palettes that contain the color IDs used by the drawings of the node.
*/
$.oDrawingNode.prototype.getUsedPalettes = function(){
@ -1968,7 +1968,7 @@ $.oDrawingNode.prototype.unlinkPalette = function(oPaletteObject){
* Duplicates a node by creating an independent copy.
* @param {string} [newName] The new name for the duplicated node.
* @param {oPoint} [newPosition] The new position for the duplicated node.
* @param {bool} [duplicateElement] Wether to also duplicate the element.
* @param {bool} [duplicateElement] Whether to also duplicate the element.
*/
$.oDrawingNode.prototype.duplicate = function(newName, newPosition, duplicateElement){
if (typeof newPosition === 'undefined') var newPosition = this.nodePosition;
@ -2464,7 +2464,7 @@ $.oGroupNode.prototype.getNodeByName = function(name){
* Returns all the nodes of a certain type in the group.
* Pass a value to recurse to look into the groups as well.
* @param {string} typeName The type of the nodes.
* @param {bool} recurse Wether to look inside the groups.
* @param {bool} recurse Whether to look inside the groups.
*
* @return {$.oNode[]} The nodes found.
*/
@ -2626,7 +2626,7 @@ $.oGroupNode.prototype.orderNodeView = function(recurse){
*
* peg.linkOutNode(drawingNode);
*
* //through all this we didn't specify nodePosition parameters so we'll sort evertything at once
* //through all this we didn't specify nodePosition parameters so we'll sort everything at once
*
* sceneRoot.orderNodeView();
*
@ -3333,7 +3333,7 @@ $.oGroupNode.prototype.importImageAsTVG = function(path, alignment, nodePosition
* imports an image sequence as a node into the current group.
* @param {$.oFile[]} imagePaths a list of paths to the images to import (can pass a list of strings or $.oFile)
* @param {number} [exposureLength=1] the number of frames each drawing should be exposed at. If set to 0/false, each drawing will use the numbering suffix of the file to set its frame.
* @param {boolean} [convertToTvg=false] wether to convert the files to tvg during import
* @param {boolean} [convertToTvg=false] whether to convert the files to tvg during import
* @param {string} [alignment="ASIS"] the alignment to apply to the node
* @param {$.oPoint} [nodePosition] the position of the node in the nodeview
*
@ -3346,7 +3346,7 @@ $.oGroupNode.prototype.importImageSequence = function(imagePaths, exposureLength
if (typeof extendScene === 'undefined') var extendScene = false;
// match anything but capture trailing numbers and separates punctuation preceeding it
// match anything but capture trailing numbers and separates punctuation preceding it
var numberingRe = /(.*?)([\W_]+)?(\d*)$/i;
// sanitize imagePaths

View file

@ -4,7 +4,7 @@
// openHarmony Library v0.01
//
//
// Developped by Mathieu Chaptel, Chris Fourney...
// Developed by Mathieu Chaptel, Chris Fourney...
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -174,7 +174,7 @@ Object.defineProperty($.oNodeLink.prototype, 'outNode', {
return;
}
this.apply(); // do we really want to apply everytime we set?
this.apply(); // do we really want to apply every time we set?
}
});
@ -198,7 +198,7 @@ Object.defineProperty($.oNodeLink.prototype, 'inNode', {
return;
}
this.apply(); // do we really want to apply everytime we set?
this.apply(); // do we really want to apply every time we set?
}
});
@ -222,7 +222,7 @@ Object.defineProperty($.oNodeLink.prototype, 'outPort', {
return;
}
this.apply(); // do we really want to apply everytime we set?
this.apply(); // do we really want to apply every time we set?
}
});
@ -256,7 +256,7 @@ Object.defineProperty($.oNodeLink.prototype, 'inPort', {
return;
}
this.apply(); // do we really want to apply everytime we set?
this.apply(); // do we really want to apply every time we set?
}
});
@ -983,7 +983,7 @@ $.oNodeLink.prototype.validate = function ( ) {
* @return {bool} Whether the connection is a valid connection that exists currently in the node system.
*/
$.oNodeLink.prototype.validateUpwards = function( inport, outportProvided ) {
//IN THE EVENT OUTNODE WASNT PROVIDED.
//IN THE EVENT OUTNODE WASN'T PROVIDED.
this.path = this.findInputPath( this._inNode, inport, [] );
if( !this.path || this.path.length == 0 ){
return false;
@ -1173,7 +1173,7 @@ Object.defineProperty($.oLink.prototype, 'outPort', {
/**
* The index of the link comming out of the out-port.
* The index of the link coming out of the out-port.
* <br>In the event this value wasn't known by the link object but the link is actually connected, the correct value will be found.
* @name $.oLink#outLink
* @readonly
@ -1323,7 +1323,7 @@ $.oLink.prototype.getValidLink = function(createOutPorts, createInPorts){
/**
* Attemps to connect a link. Will guess the ports if not provided.
* Attempts to connect a link. Will guess the ports if not provided.
* @return {bool}
*/
$.oLink.prototype.connect = function(){
@ -1623,11 +1623,11 @@ $.oLinkPath.prototype.findExistingPath = function(){
/**
* Gets a link object from two nodes that can be succesfully connected. Provide port numbers if there are specific requirements to match. If a link already exists, it will be returned.
* Gets a link object from two nodes that can be successfully connected. Provide port numbers if there are specific requirements to match. If a link already exists, it will be returned.
* @param {$.oNode} start The node from which the link originates.
* @param {$.oNode} end The node at which the link ends.
* @param {int} [outPort] A prefered out-port for the link to use.
* @param {int} [inPort] A prefered in-port for the link to use.
* @param {int} [outPort] A preferred out-port for the link to use.
* @param {int} [inPort] A preferred in-port for the link to use.
*
* @return {$.oLink} the valid $.oLink object. Returns null if no such link could be created (for example if the node's in-port is already linked)
*/

View file

@ -4,7 +4,7 @@
// openHarmony Library v0.01
//
//
// Developped by Mathieu Chaptel, ...
// Developed by Mathieu Chaptel, ...
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -212,7 +212,7 @@ function openHarmony_toolInstaller(){
//----------------------------------------------
//-- GET THE FILE CONTENTS IN A DIRCTORY ON GIT
//-- GET THE FILE CONTENTS IN A DIRECTORY ON GIT
this.recurse_files = function( contents, arr_files ){
with( context.$.global ){
try{
@ -501,7 +501,7 @@ function openHarmony_toolInstaller(){
var download_item = item["download_url"];
var query = $.network.webQuery( download_item, false, false );
if( query ){
//INSTALL TYPES ARE script, package, ect.
//INSTALL TYPES ARE script, package, etc.
if( install_types[ m.install_cache[ item["url"] ] ] ){
m.installLabel.text = install_types[ m.install_cache[ item["url"] ] ];

View file

@ -1,7 +1,7 @@
{
"name": "openharmony",
"version": "0.0.1",
"description": "An Open Source Imlementation of a Document Object Model for the Toonboom Harmony scripting interface",
"description": "An Open Source Implementation of a Document Object Model for the Toonboom Harmony scripting interface",
"main": "openHarmony.js",
"scripts": {
"test": "$",

View file

@ -108,7 +108,7 @@ __all__ = [
"apply_colorspace_project",
"apply_colorspace_clips",
"get_sequence_pattern_and_padding",
# depricated
# deprecated
"get_track_item_pype_tag",
"set_track_item_pype_tag",
"get_track_item_pype_data",

View file

@ -193,8 +193,8 @@ def parse_container(item, validate=True):
return
# convert the data to list and validate them
for _, obj_data in _data.items():
cotnainer = data_to_container(item, obj_data)
return_list.append(cotnainer)
container = data_to_container(item, obj_data)
return_list.append(container)
return return_list
else:
_data = lib.get_trackitem_openpype_data(item)

View file

@ -411,7 +411,7 @@ class ClipLoader:
self.with_handles = options.get("handles") or bool(
options.get("handles") is True)
# try to get value from options or evaluate key value for `load_how`
self.sequencial_load = options.get("sequencially") or bool(
self.sequencial_load = options.get("sequentially") or bool(
"Sequentially in order" in options.get("load_how", ""))
# try to get value from options or evaluate key value for `load_to`
self.new_sequence = options.get("newSequence") or bool(
@ -836,7 +836,7 @@ class PublishClip:
# increasing steps by index of rename iteration
self.count_steps *= self.rename_index
hierarchy_formating_data = {}
hierarchy_formatting_data = {}
hierarchy_data = deepcopy(self.hierarchy_data)
_data = self.track_item_default_data.copy()
if self.ui_inputs:
@ -871,13 +871,13 @@ class PublishClip:
# fill up pythonic expresisons in hierarchy data
for k, _v in hierarchy_data.items():
hierarchy_formating_data[k] = _v["value"].format(**_data)
hierarchy_formatting_data[k] = _v["value"].format(**_data)
else:
# if no gui mode then just pass default data
hierarchy_formating_data = hierarchy_data
hierarchy_formatting_data = hierarchy_data
tag_hierarchy_data = self._solve_tag_hierarchy_data(
hierarchy_formating_data
hierarchy_formatting_data
)
tag_hierarchy_data.update({"heroTrack": True})
@ -905,20 +905,20 @@ class PublishClip:
# add data to return data dict
self.tag_data.update(tag_hierarchy_data)
def _solve_tag_hierarchy_data(self, hierarchy_formating_data):
def _solve_tag_hierarchy_data(self, hierarchy_formatting_data):
""" Solve tag data from hierarchy data and templates. """
# fill up clip name and hierarchy keys
hierarchy_filled = self.hierarchy.format(**hierarchy_formating_data)
clip_name_filled = self.clip_name.format(**hierarchy_formating_data)
hierarchy_filled = self.hierarchy.format(**hierarchy_formatting_data)
clip_name_filled = self.clip_name.format(**hierarchy_formatting_data)
# remove shot from hierarchy data: is not needed anymore
hierarchy_formating_data.pop("shot")
hierarchy_formatting_data.pop("shot")
return {
"newClipName": clip_name_filled,
"hierarchy": hierarchy_filled,
"parents": self.parents,
"hierarchyData": hierarchy_formating_data,
"hierarchyData": hierarchy_formatting_data,
"subset": self.subset,
"family": self.subset_family,
"families": [self.data["family"]]
@ -934,16 +934,16 @@ class PublishClip:
)
# first collect formatting data to use for formatting template
formating_data = {}
formatting_data = {}
for _k, _v in self.hierarchy_data.items():
value = _v["value"].format(
**self.track_item_default_data)
formating_data[_k] = value
formatting_data[_k] = value
return {
"entity_type": entity_type,
"entity_name": template.format(
**formating_data
**formatting_data
)
}
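The renamed hierarchy_formatting_data above is built by plain str.format over the per-key hierarchy templates and is then used to fill the hierarchy and clip-name templates. A minimal sketch of that flow, with every token name and value hypothetical:

hierarchy_data = {
    "folder": {"value": "shots"},
    "sequence": {"value": "{_sequence_}"},
    "shot": {"value": "{_clip_}"},
}
track_item_default_data = {"_sequence_": "sq01", "_clip_": "sh010"}

hierarchy_formatting_data = {
    key: item["value"].format(**track_item_default_data)
    for key, item in hierarchy_data.items()
}
# -> {"folder": "shots", "sequence": "sq01", "shot": "sh010"}

# _solve_tag_hierarchy_data then fills the hierarchy and clip name templates
hierarchy_filled = "{folder}/{sequence}".format(**hierarchy_formatting_data)  # "shots/sq01"
clip_name_filled = "{sequence}_{shot}".format(**hierarchy_formatting_data)    # "sq01_sh010"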

View file

@ -0,0 +1,185 @@
"""Library to register OpenPype Creators for Houdini TAB node search menu.
This can be used to install custom houdini tools for the TAB search
menu which will trigger a publish instance to be created interactively.
The Creators are automatically registered on launch of Houdini through the
Houdini integration's `host.install()` method.
"""
import contextlib
import tempfile
import logging
import os
from openpype.pipeline import registered_host
from openpype.pipeline.create import CreateContext
from openpype.resources import get_openpype_icon_filepath
import hou
log = logging.getLogger(__name__)
CREATE_SCRIPT = """
from openpype.hosts.houdini.api.creator_node_shelves import create_interactive
create_interactive("{identifier}")
"""
def create_interactive(creator_identifier):
"""Create a Creator using its identifier interactively.
This is used by the generated shelf tools as callback when a user selects
the creator from the node tab search menu.
Args:
creator_identifier (str): The creator identifier of the Creator plugin
to create.
Return:
list: The created instances.
"""
# TODO Use Qt instead
result, variant = hou.ui.readInput('Define variant name',
buttons=("Ok", "Cancel"),
initial_contents='Main',
title="Define variant",
help="Set the variant for the "
"publish instance",
close_choice=1)
if result == 1:
# User interrupted
return
variant = variant.strip()
if not variant:
raise RuntimeError("Empty variant value entered.")
host = registered_host()
context = CreateContext(host)
before = context.instances_by_id.copy()
# Create the instance
context.create(
creator_identifier=creator_identifier,
variant=variant,
pre_create_data={"use_selection": True}
)
# For convenience we set the new node as current since that's much more
# familiar to the artist when creating a node interactively
# TODO Allow to disable auto-select in studio settings or user preferences
after = context.instances_by_id
new = set(after) - set(before)
if new:
# Select the new instance
for instance_id in new:
instance = after[instance_id]
node = hou.node(instance.get("instance_node"))
node.setCurrent(True)
return list(new)
@contextlib.contextmanager
def shelves_change_block():
"""Write shelf changes at the end of the context."""
hou.shelves.beginChangeBlock()
try:
yield
finally:
hou.shelves.endChangeBlock()
def install():
"""Install the Creator plug-ins to show in Houdini's TAB node search menu.
This function is re-entrant and can be called again to reinstall and
update the node definitions. For example during development it can be
useful to call it manually:
>>> from openpype.hosts.houdini.api.creator_node_shelves import install
>>> install()
Returns:
list: List of `hou.Tool` instances
"""
host = registered_host()
# Store the filepath on the host
# TODO: Define a less hacky static shelf path for current houdini session
filepath_attr = "_creator_node_shelf_filepath"
filepath = getattr(host, filepath_attr, None)
if filepath is None:
f = tempfile.NamedTemporaryFile(prefix="houdini_creator_nodes_",
suffix=".shelf",
delete=False)
f.close()
filepath = f.name
setattr(host, filepath_attr, filepath)
elif os.path.exists(filepath):
# Remove any existing shelf file so that we can completely regenerate
# and update the tools file if creator identifiers change
os.remove(filepath)
icon = get_openpype_icon_filepath()
# Create context only to get creator plugins, so we don't reset and only
# populate what we need to retrieve the list of creator plugins
create_context = CreateContext(host, reset=False)
create_context.reset_current_context()
create_context._reset_creator_plugins()
log.debug("Writing OpenPype Creator nodes to shelf: {}".format(filepath))
tools = []
with shelves_change_block():
for identifier, creator in create_context.manual_creators.items():
# TODO: Allow the creator plug-in itself to override the categories
# for where they are shown, by e.g. defining
# `Creator.get_network_categories()`
key = "openpype_create.{}".format(identifier)
log.debug(f"Registering {key}")
script = CREATE_SCRIPT.format(identifier=identifier)
data = {
"script": script,
"language": hou.scriptLanguage.Python,
"icon": icon,
"help": "Create OpenPype publish instance for {}".format(
creator.label
),
"help_url": None,
"network_categories": [
hou.ropNodeTypeCategory(),
hou.sopNodeTypeCategory()
],
"viewer_categories": [],
"cop_viewer_categories": [],
"network_op_type": None,
"viewer_op_type": None,
"locations": ["OpenPype"]
}
label = "Create {}".format(creator.label)
tool = hou.shelves.tool(key)
if tool:
tool.setData(**data)
tool.setLabel(label)
else:
tool = hou.shelves.newTool(
file_path=filepath,
name=key,
label=label,
**data
)
tools.append(tool)
# Ensure the shelf is reloaded
hou.shelves.loadFile(filepath)
return tools
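A minimal usage sketch for this module from a Houdini Python shell; the identifier used below is the review creator defined later in this diff, and any other registered creator identifier works the same way:

from openpype.hosts.houdini.api.creator_node_shelves import (
    create_interactive,
    install,
)

# Regenerate the shelf tools, e.g. after adding or renaming a Creator plug-in
tools = install()

# Create a publish instance directly, bypassing the TAB node search menu
instances = create_interactive("io.openpype.creators.houdini.review")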

View file

@ -127,6 +127,8 @@ def get_output_parameter(node):
return node.parm("filename")
elif node_type == "comp":
return node.parm("copoutput")
elif node_type == "opengl":
return node.parm("picture")
elif node_type == "arnold":
if node.evalParm("ar_ass_export_enable"):
return node.parm("ar_ass_file")
@ -479,23 +481,13 @@ def reset_framerange():
frame_start = asset_data.get("frameStart")
frame_end = asset_data.get("frameEnd")
# Backwards compatibility
if frame_start is None or frame_end is None:
frame_start = asset_data.get("edit_in")
frame_end = asset_data.get("edit_out")
if frame_start is None or frame_end is None:
log.warning("No edit information found for %s" % asset_name)
return
handles = asset_data.get("handles") or 0
handle_start = asset_data.get("handleStart")
if handle_start is None:
handle_start = handles
handle_end = asset_data.get("handleEnd")
if handle_end is None:
handle_end = handles
handle_start = asset_data.get("handleStart", 0)
handle_end = asset_data.get("handleEnd", 0)
frame_start -= int(handle_start)
frame_end += int(handle_end)
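A short sketch of the new opengl branch of get_output_parameter, assuming the function lives in openpype.hosts.houdini.api.lib alongside render_rop; the node path is hypothetical:

import hou

from openpype.hosts.houdini.api.lib import get_output_parameter

rop = hou.node("/out/opengl1")     # hypothetical OpenGL ROP
parm = get_output_parameter(rop)   # resolves to rop.parm("picture") per the branch above
print(parm.eval())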

View file

@ -18,7 +18,7 @@ from openpype.pipeline import (
)
from openpype.pipeline.load import any_outdated_containers
from openpype.hosts.houdini import HOUDINI_HOST_DIR
from openpype.hosts.houdini.api import lib, shelves
from openpype.hosts.houdini.api import lib, shelves, creator_node_shelves
from openpype.lib import (
register_event_callback,
@ -83,6 +83,10 @@ class HoudiniHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
_set_context_settings()
shelves.generate_shelves()
if not IS_HEADLESS:
import hdefereval # noqa, hdefereval is only available in ui mode
hdefereval.executeDeferred(creator_node_shelves.install)
def has_unsaved_changes(self):
return hou.hipFile.hasUnsavedChanges()
@ -144,13 +148,10 @@ class HoudiniHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
"""
obj_network = hou.node("/obj")
op_ctx = obj_network.createNode("null", node_name="OpenPypeContext")
# A null in houdini by default comes with content inside to visualize
# the null. However since we explicitly want to hide the node lets
# remove the content and disable the display flag of the node
for node in op_ctx.children():
node.destroy()
op_ctx = obj_network.createNode("subnet",
node_name="OpenPypeContext",
run_init_scripts=False,
load_contents=False)
op_ctx.moveToGoodPosition()
op_ctx.setBuiltExplicitly(False)

View file

@ -60,7 +60,7 @@ class Creator(LegacyCreator):
def process(self):
instance = super(CreateEpicNode, self, process()
# Set paramaters for Alembic node
# Set parameters for Alembic node
instance.setParms(
{"sop_path": "$HIP/%s.abc" % self.nodes[0]}
)

View file

@ -69,7 +69,7 @@ def generate_shelves():
mandatory_attributes = {'label', 'script'}
for tool_definition in shelf_definition.get('tools_list'):
# We verify that the name and script attibutes of the tool
# We verify that the name and script attributes of the tool
# are set
if not all(
tool_definition[key] for key in mandatory_attributes

View file

@ -1,5 +1,5 @@
# -*- coding: utf-8 -*-
"""Convertor for legacy Houdini subsets."""
"""Converter for legacy Houdini subsets."""
from openpype.pipeline.create.creator_plugins import SubsetConvertorPlugin
from openpype.hosts.houdini.api.lib import imprint
@ -7,7 +7,7 @@ from openpype.hosts.houdini.api.lib import imprint
class HoudiniLegacyConvertor(SubsetConvertorPlugin):
"""Find and convert any legacy subsets in the scene.
This Convertor will find all legacy subsets in the scene and will
This Converter will find all legacy subsets in the scene and will
transform them to the current system. Since the old subsets doesn't
retain any information about their original creators, the only mapping
we can do is based on their families.

View file

@ -0,0 +1,125 @@
# -*- coding: utf-8 -*-
"""Creator plugin for creating openGL reviews."""
from openpype.hosts.houdini.api import plugin
from openpype.lib import EnumDef, BoolDef, NumberDef
class CreateReview(plugin.HoudiniCreator):
"""Review with OpenGL ROP"""
identifier = "io.openpype.creators.houdini.review"
label = "Review"
family = "review"
icon = "video-camera"
def create(self, subset_name, instance_data, pre_create_data):
import hou
instance_data.pop("active", None)
instance_data.update({"node_type": "opengl"})
instance_data["imageFormat"] = pre_create_data.get("imageFormat")
instance_data["keepImages"] = pre_create_data.get("keepImages")
instance = super(CreateReview, self).create(
subset_name,
instance_data,
pre_create_data)
instance_node = hou.node(instance.get("instance_node"))
frame_range = hou.playbar.frameRange()
filepath = "{root}/{subset}/{subset}.$F4.{ext}".format(
root=hou.text.expandString("$HIP/pyblish"),
subset="`chs(\"subset\")`", # keep dynamic link to subset
ext=pre_create_data.get("image_format") or "png"
)
parms = {
"picture": filepath,
"trange": 1,
# Unlike many other ROP nodes the opengl node does not default
# to expression of $FSTART and $FEND so we preserve that behavior
# but do set the range to the frame range of the playbar
"f1": frame_range[0],
"f2": frame_range[1],
}
override_resolution = pre_create_data.get("override_resolution")
if override_resolution:
parms.update({
"tres": override_resolution,
"res1": pre_create_data.get("resx"),
"res2": pre_create_data.get("resy"),
"aspect": pre_create_data.get("aspect"),
})
if self.selected_nodes:
# The first camera found in selection we will use as camera
# Other node types we set in force objects
camera = None
force_objects = []
for node in self.selected_nodes:
path = node.path()
if node.type().name() == "cam":
if camera:
continue
camera = path
else:
force_objects.append(path)
if not camera:
self.log.warning("No camera found in selection.")
parms.update({
"camera": camera or "",
"scenepath": "/obj",
"forceobjects": " ".join(force_objects),
"vobjects": "" # clear candidate objects from '*' value
})
instance_node.setParms(parms)
to_lock = ["id", "family"]
self.lock_parameters(instance_node, to_lock)
def get_pre_create_attr_defs(self):
attrs = super(CreateReview, self).get_pre_create_attr_defs()
image_format_enum = [
"bmp", "cin", "exr", "jpg", "pic", "pic.gz", "png",
"rad", "rat", "rta", "sgi", "tga", "tif",
]
return attrs + [
BoolDef("keepImages",
label="Keep Image Sequences",
default=False),
EnumDef("imageFormat",
image_format_enum,
default="png",
label="Image Format Options"),
BoolDef("override_resolution",
label="Override resolution",
tooltip="When disabled the resolution set on the camera "
"is used instead.",
default=True),
NumberDef("resx",
label="Resolution Width",
default=1280,
minimum=2,
decimals=0),
NumberDef("resy",
label="Resolution Height",
default=720,
minimum=2,
decimals=0),
NumberDef("aspect",
label="Aspect Ratio",
default=1.0,
minimum=0.0001,
decimals=3)
]
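A hedged sketch of creating this review instance through the same CreateContext API used by creator_node_shelves.create_interactive earlier in this diff; the pre-create values are hypothetical but use the attribute names this creator defines:

from openpype.pipeline import registered_host
from openpype.pipeline.create import CreateContext

host = registered_host()
context = CreateContext(host)
context.create(
    creator_identifier="io.openpype.creators.houdini.review",
    variant="Main",
    pre_create_data={
        "use_selection": True,
        "imageFormat": "png",
        "keepImages": False,
        "override_resolution": True,
        "resx": 1920,
        "resy": 1080,
        "aspect": 1.0,
    },
)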

View file

@ -1,7 +1,6 @@
import os
import hou
from openpype.pipeline import legacy_io
import pyblish.api
@ -11,7 +10,7 @@ class CollectHoudiniCurrentFile(pyblish.api.InstancePlugin):
order = pyblish.api.CollectorOrder - 0.01
label = "Houdini Current File"
hosts = ["houdini"]
family = ["workfile"]
families = ["workfile"]
def process(self, instance):
"""Inject the current working file"""
@ -21,7 +20,7 @@ class CollectHoudiniCurrentFile(pyblish.api.InstancePlugin):
# By default, Houdini will even point a new scene to a path.
# However if the file is not saved at all and does not exist,
# we assume the user never set it.
filepath = ""
current_file = ""
elif os.path.basename(current_file) == "untitled.hip":
# Due to even a new file being called 'untitled.hip' we are unable

View file

@ -14,7 +14,7 @@ class CollectFrames(pyblish.api.InstancePlugin):
order = pyblish.api.CollectorOrder
label = "Collect Frames"
families = ["vdbcache", "imagesequence", "ass", "redshiftproxy"]
families = ["vdbcache", "imagesequence", "ass", "redshiftproxy", "review"]
def process(self, instance):

View file

@ -0,0 +1,52 @@
import hou
import pyblish.api
class CollectHoudiniReviewData(pyblish.api.InstancePlugin):
"""Collect Review Data."""
label = "Collect Review Data"
order = pyblish.api.CollectorOrder + 0.1
hosts = ["houdini"]
families = ["review"]
def process(self, instance):
# This fixes the burnin having the incorrect start/end timestamps
# because without this it would take it from the context instead
# which isn't the actual frame range that this instance renders.
instance.data["handleStart"] = 0
instance.data["handleEnd"] = 0
# Get the camera from the rop node to collect the focal length
ropnode_path = instance.data["instance_node"]
ropnode = hou.node(ropnode_path)
camera_path = ropnode.parm("camera").eval()
camera_node = hou.node(camera_path)
if not camera_node:
raise RuntimeError("No valid camera node found on review node: "
"{}".format(camera_path))
# Collect focal length.
focal_length_parm = camera_node.parm("focal")
if not focal_length_parm:
self.log.warning("No 'focal' (focal length) parameter found on "
"camera: {}".format(camera_path))
return
if focal_length_parm.isTimeDependent():
start = instance.data["frameStart"]
end = instance.data["frameEnd"] + 1
focal_length = [
focal_length_parm.evalAsFloatAtFrame(t)
for t in range(int(start), int(end))
]
else:
focal_length = focal_length_parm.evalAsFloat()
# Store focal length in `burninDataMembers`
burnin_members = instance.data.setdefault("burninDataMembers", {})
burnin_members["focalLength"] = focal_length
instance.data.setdefault("families", []).append('ftrack')
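The time-dependent branch above samples the parm once per frame, end frame inclusive; the same pattern standalone, with a hypothetical camera path and frame range:

import hou

camera = hou.node("/obj/cam1")      # hypothetical camera node
focal_parm = camera.parm("focal")
if focal_parm.isTimeDependent():
    focal_length = [
        focal_parm.evalAsFloatAtFrame(frame)
        for frame in range(1001, 1051)
    ]
else:
    focal_length = focal_parm.evalAsFloat()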

View file

@ -0,0 +1,58 @@
import os
import pyblish.api
from openpype.pipeline import (
publish,
OptionalPyblishPluginMixin
)
from openpype.hosts.houdini.api.lib import render_rop
import hou
class ExtractOpenGL(publish.Extractor,
OptionalPyblishPluginMixin):
order = pyblish.api.ExtractorOrder - 0.01
label = "Extract OpenGL"
families = ["review"]
hosts = ["houdini"]
optional = True
def process(self, instance):
if not self.is_active(instance.data):
return
ropnode = hou.node(instance.data.get("instance_node"))
output = ropnode.evalParm("picture")
staging_dir = os.path.normpath(os.path.dirname(output))
instance.data["stagingDir"] = staging_dir
file_name = os.path.basename(output)
self.log.info("Extracting '%s' to '%s'" % (file_name,
staging_dir))
render_rop(ropnode)
output = instance.data["frames"]
tags = ["review"]
if not instance.data.get("keepImages"):
tags.append("delete")
representation = {
"name": instance.data["imageFormat"],
"ext": instance.data["imageFormat"],
"files": output,
"stagingDir": staging_dir,
"frameStart": instance.data["frameStart"],
"frameEnd": instance.data["frameEnd"],
"tags": tags,
"preview": True,
"camera_name": instance.data.get("review_camera")
}
if "representations" not in instance.data:
instance.data["representations"] = []
instance.data["representations"].append(representation)

View file

@ -0,0 +1,61 @@
# -*- coding: utf-8 -*-
import pyblish.api
from openpype.pipeline import PublishValidationError
import hou
class ValidateSceneReview(pyblish.api.InstancePlugin):
"""Validator Some Scene Settings before publishing the review
1. Scene Path
2. Resolution
"""
order = pyblish.api.ValidatorOrder
families = ["review"]
hosts = ["houdini"]
label = "Scene Setting for review"
def process(self, instance):
invalid = self.get_invalid_scene_path(instance)
report = []
if invalid:
report.append(
"Scene path does not exist: '%s'" % invalid[0],
)
invalid = self.get_invalid_resolution(instance)
if invalid:
report.extend(invalid)
if report:
raise PublishValidationError(
"\n\n".join(report),
title=self.label)
def get_invalid_scene_path(self, instance):
node = hou.node(instance.data.get("instance_node"))
scene_path_parm = node.parm("scenepath")
scene_path_node = scene_path_parm.evalAsNode()
if not scene_path_node:
return [scene_path_parm.evalAsString()]
def get_invalid_resolution(self, instance):
node = hou.node(instance.data.get("instance_node"))
# The resolution setting is only used when Override Camera Resolution
# is enabled. So we skip validation if it is disabled.
override = node.parm("tres").eval()
if not override:
return
invalid = []
res_width = node.parm("res1").eval()
res_height = node.parm("res2").eval()
if res_width == 0:
invalid.append("Override Resolution width is set to zero.")
if res_height == 0:
invalid.append("Override Resolution height is set to zero")
return invalid

View file

@ -209,19 +209,12 @@ def get_frame_range() -> dict:
asset = get_current_project_asset()
frame_start = asset["data"].get("frameStart")
frame_end = asset["data"].get("frameEnd")
# Backwards compatibility
if frame_start is None or frame_end is None:
frame_start = asset["data"].get("edit_in")
frame_end = asset["data"].get("edit_out")
if frame_start is None or frame_end is None:
return
handles = asset["data"].get("handles") or 0
handle_start = asset["data"].get("handleStart")
if handle_start is None:
handle_start = handles
handle_end = asset["data"].get("handleEnd")
if handle_end is None:
handle_end = handles
handle_start = asset["data"].get("handleStart", 0)
handle_end = asset["data"].get("handleEnd", 0)
return {
"frameStart": frame_start,
"frameEnd": frame_end,

View file

@ -62,6 +62,7 @@ class CollectRender(pyblish.api.InstancePlugin):
"frameStart": context.data['frameStart'],
"frameEnd": context.data['frameEnd'],
"version": version_int,
"farm": True
}
self.log.info("data: {0}".format(data))
instance.data.update(data)

View file

@ -69,7 +69,7 @@ def _resolution_from_document(doc):
resolution_width = doc["data"].get("resolution_width")
resolution_height = doc["data"].get("resolution_height")
# Make sure both width and heigh are set
# Make sure both width and height are set
if resolution_width is None or resolution_height is None:
cmds.warning(
"No resolution information found for \"{}\"".format(doc["name"])

View file

@ -32,7 +32,12 @@ from openpype.pipeline import (
load_container,
registered_host,
)
from openpype.pipeline.context_tools import get_current_project_asset
from openpype.pipeline.context_tools import (
get_current_asset_name,
get_current_project_asset,
get_current_project_name,
get_current_task_name
)
self = sys.modules[__name__]
@ -292,15 +297,20 @@ def collect_animation_data(fps=False):
"""
# get scene values as defaults
start = cmds.playbackOptions(query=True, animationStartTime=True)
end = cmds.playbackOptions(query=True, animationEndTime=True)
frame_start = cmds.playbackOptions(query=True, minTime=True)
frame_end = cmds.playbackOptions(query=True, maxTime=True)
handle_start = cmds.playbackOptions(query=True, animationStartTime=True)
handle_end = cmds.playbackOptions(query=True, animationEndTime=True)
handle_start = frame_start - handle_start
handle_end = handle_end - frame_end
# build attributes
data = OrderedDict()
data["frameStart"] = start
data["frameEnd"] = end
data["handleStart"] = 0
data["handleEnd"] = 0
data["frameStart"] = frame_start
data["frameEnd"] = frame_end
data["handleStart"] = handle_start
data["handleEnd"] = handle_end
data["step"] = 1.0
if fps:
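The reworked collect_animation_data derives handles from the gap between the playback range (minTime/maxTime) and the full animation range (animationStartTime/animationEndTime); a small arithmetic sketch with hypothetical values:

frame_start, frame_end = 1001.0, 1100.0   # playbackOptions minTime / maxTime
anim_start, anim_end = 993.0, 1108.0      # playbackOptions animationStartTime / animationEndTime

handle_start = frame_start - anim_start   # 8.0
handle_end = anim_end - frame_end         # 8.0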
@ -1367,6 +1377,71 @@ def set_id(node, unique_id, overwrite=False):
cmds.setAttr(attr, unique_id, type="string")
def get_attribute(plug,
asString=False,
expandEnvironmentVariables=False,
**kwargs):
"""Maya getAttr with some fixes based on `pymel.core.general.getAttr()`.
Like Pymel getAttr this applies some changes to `maya.cmds.getAttr`
- maya pointlessly returned vector results as a tuple wrapped in a list
(ex. '[(1,2,3)]'). This command unpacks the vector for you.
- when getting a multi-attr, maya would raise an error, but this will
return a list of values for the multi-attr
- added support for getting message attributes by returning the
connections instead
Note that the asString + expandEnvironmentVariables argument naming
convention matches the `maya.cmds.getAttr` arguments so that it can
act as a direct replacement for it.
Args:
plug (str): Node's attribute plug as `node.attribute`
asString (bool): Return string value for enum attributes instead
of the index. Note that the return value can be dependent on the
UI language Maya is running in.
expandEnvironmentVariables (bool): Expand any environment variables
and tilde characters (on UNIX) found in string attributes which
are returned.
Kwargs:
Supports the keyword arguments of `maya.cmds.getAttr`
Returns:
object: The value of the maya attribute.
"""
attr_type = cmds.getAttr(plug, type=True)
if asString:
kwargs["asString"] = True
if expandEnvironmentVariables:
kwargs["expandEnvironmentVariables"] = True
try:
res = cmds.getAttr(plug, **kwargs)
except RuntimeError:
if attr_type == "message":
return cmds.listConnections(plug)
node, attr = plug.split(".", 1)
children = cmds.attributeQuery(attr, node=node, listChildren=True)
if children:
return [
get_attribute("{}.{}".format(node, child))
for child in children
]
raise
# Convert vector result wrapped in tuple
if isinstance(res, list) and len(res):
if isinstance(res[0], tuple) and len(res):
if attr_type in {'pointArray', 'vectorArray'}:
return res
return res[0]
return res
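# A minimal usage sketch for get_attribute(), assuming a running Maya session
# and that this module is importable as openpype.hosts.maya.api.lib; the node
# created below is illustrative only.
from maya import cmds
from openpype.hosts.maya.api.lib import get_attribute

sphere = cmds.polySphere(name="get_attribute_demo")[0]
# Plain attributes behave like cmds.getAttr
print(get_attribute(sphere + ".translateX"))                  # 0.0
# Vector plugs are unpacked instead of returned as [(x, y, z)]
print(get_attribute(sphere + ".translate"))                   # (0.0, 0.0, 0.0)
# Enum plugs can be queried by label
print(get_attribute(sphere + ".rotateOrder", asString=True))  # 'xyz'
# Message plugs return their connections (or None) instead of erroring
print(get_attribute(sphere + ".message"))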
def set_attribute(attribute, value, node):
"""Adjust attributes based on the value from the attribute data
@ -1881,6 +1956,12 @@ def remove_other_uv_sets(mesh):
cmds.removeMultiInstance(attr, b=True)
def get_node_parent(node):
"""Return full path name for parent of node"""
parents = cmds.listRelatives(node, parent=True, fullPath=True)
return parents[0] if parents else None
def get_id_from_sibling(node, history_only=True):
"""Return first node id in the history chain that matches this node.
@ -1904,10 +1985,6 @@ def get_id_from_sibling(node, history_only=True):
"""
def _get_parent(node):
"""Return full path name for parent of node"""
return cmds.listRelatives(node, parent=True, fullPath=True)
node = cmds.ls(node, long=True)[0]
# Find all similar nodes in history
@ -1919,8 +1996,8 @@ def get_id_from_sibling(node, history_only=True):
similar_nodes = [x for x in similar_nodes if x != node]
# The node *must be* under the same parent
parent = _get_parent(node)
similar_nodes = [i for i in similar_nodes if _get_parent(i) == parent]
parent = get_node_parent(node)
similar_nodes = [i for i in similar_nodes if get_node_parent(i) == parent]
# Check all of the remaining similar nodes and take the first one
# with an id and assume it's the original.
@ -2067,29 +2144,43 @@ def get_frame_range():
"""Get the current assets frame range and handles."""
# Set frame start/end
project_name = legacy_io.active_project()
asset_name = legacy_io.Session["AVALON_ASSET"]
project_name = get_current_project_name()
task_name = get_current_task_name()
asset_name = get_current_asset_name()
asset = get_asset_by_name(project_name, asset_name)
settings = get_project_settings(project_name)
include_handles_settings = settings["maya"]["include_handles"]
current_task = asset.get("data").get("tasks").get(task_name)
frame_start = asset["data"].get("frameStart")
frame_end = asset["data"].get("frameEnd")
# Backwards compatibility
if frame_start is None or frame_end is None:
frame_start = asset["data"].get("edit_in")
frame_end = asset["data"].get("edit_out")
if frame_start is None or frame_end is None:
cmds.warning("No edit information found for %s" % asset_name)
return
handles = asset["data"].get("handles") or 0
handle_start = asset["data"].get("handleStart")
if handle_start is None:
handle_start = handles
handle_start = asset["data"].get("handleStart") or 0
handle_end = asset["data"].get("handleEnd") or 0
handle_end = asset["data"].get("handleEnd")
if handle_end is None:
handle_end = handles
animation_start = frame_start
animation_end = frame_end
include_handles = include_handles_settings["include_handles_default"]
for item in include_handles_settings["per_task_type"]:
if current_task["type"] in item["task_type"]:
include_handles = item["include_handles"]
break
if include_handles:
animation_start -= int(handle_start)
animation_end += int(handle_end)
cmds.playbackOptions(
minTime=frame_start,
maxTime=frame_end,
animationStartTime=animation_start,
animationEndTime=animation_end
)
cmds.currentTime(frame_start)
return {
"frameStart": frame_start,
@ -2109,7 +2200,6 @@ def reset_frame_range(playback=True, render=True, fps=True):
Defaults to True.
fps (bool, Optional): Whether to set scene FPS. Defaults to True.
"""
if fps:
fps = convert_to_maya_fps(
float(legacy_io.Session.get("AVALON_FPS", 25))
@ -2478,8 +2568,8 @@ def load_capture_preset(data=None):
float(value[2]) / 255
]
disp_options[key] = value
else:
disp_options['displayGradient'] = True
elif key == "displayGradient":
disp_options[key] = value
options['display_options'] = disp_options
@ -3176,38 +3266,78 @@ def set_colorspace():
def parent_nodes(nodes, parent=None):
# type: (list, str) -> list
"""Context manager to un-parent provided nodes and return them back."""
import pymel.core as pm # noqa
parent_node = None
def _as_mdagpath(node):
"""Return MDagPath for node path."""
if not node:
return
sel = OpenMaya.MSelectionList()
sel.add(node)
return sel.getDagPath(0)
# We can only parent dag nodes so we ensure input contains only dag nodes
nodes = cmds.ls(nodes, type="dagNode", long=True)
if not nodes:
# opt-out early
yield
return
parent_node_path = None
delete_parent = False
if parent:
if not cmds.objExists(parent):
parent_node = pm.createNode("transform", n=parent, ss=False)
parent_node = cmds.createNode("transform",
name=parent,
skipSelect=False)
delete_parent = True
else:
parent_node = pm.PyNode(parent)
parent_node = parent
parent_node_path = cmds.ls(parent_node, long=True)[0]
# Store original parents
node_parents = []
for node in nodes:
n = pm.PyNode(node)
try:
root = pm.listRelatives(n, parent=1)[0]
except IndexError:
root = None
node_parents.append((n, root))
node_parent = get_node_parent(node)
node_parents.append((_as_mdagpath(node), _as_mdagpath(node_parent)))
try:
for node in node_parents:
if not parent:
node[0].setParent(world=True)
for node, node_parent in node_parents:
node_parent_path = node_parent.fullPathName() if node_parent else None # noqa
if node_parent_path == parent_node_path:
# Already a child
continue
if parent_node_path:
cmds.parent(node.fullPathName(), parent_node_path)
else:
node[0].setParent(parent_node)
cmds.parent(node.fullPathName(), world=True)
yield
finally:
for node in node_parents:
if node[1]:
node[0].setParent(node[1])
# Reparent to original parents
for node, original_parent in node_parents:
node_path = node.fullPathName()
if not node_path:
# Node must have been deleted
continue
node_parent_path = get_node_parent(node_path)
original_parent_path = None
if original_parent:
original_parent_path = original_parent.fullPathName()
if not original_parent_path:
# Original parent node must have been deleted
continue
if node_parent_path != original_parent_path:
if not original_parent_path:
cmds.parent(node_path, world=True)
else:
cmds.parent(node_path, original_parent_path)
if delete_parent:
pm.delete(parent_node)
cmds.delete(parent_node_path)
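# A minimal usage sketch for the parent_nodes() context manager, assuming a
# running Maya session and that this module is importable as
# openpype.hosts.maya.api.lib; node names are illustrative only.
from maya import cmds
from openpype.hosts.maya.api.lib import parent_nodes

cube = cmds.polyCube(name="parent_nodes_demo")[0]
with parent_nodes([cube], parent="temporary_export_grp"):
    # Inside the block the cube is parented under |temporary_export_grp,
    # e.g. to export a clean temporary hierarchy.
    print(cmds.listRelatives(cube, parent=True))  # ['temporary_export_grp']
# On exit the cube returns to its original parent (the world in this case) and
# the temporary group is deleted because it did not exist beforehand.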
@contextlib.contextmanager
@ -3558,7 +3688,17 @@ def get_color_management_preferences():
# Split view and display from view_transform. view_transform comes in
# format of "{view} ({display})".
regex = re.compile(r"^(?P<view>.+) \((?P<display>.+)\)$")
if int(cmds.about(version=True)) <= 2020:
# view_transform comes in format of "{view} {display}" in 2020.
regex = re.compile(r"^(?P<view>.+) (?P<display>.+)$")
match = regex.match(data["view_transform"])
if not match:
raise ValueError(
"Unable to parse view and display from Maya view transform: '{}' "
"using regex '{}'".format(data["view_transform"], regex.pattern)
)
data.update({
"display": match.group("display"),
"view": match.group("view")
@ -3675,3 +3815,43 @@ def len_flattened(components):
else:
n += 1
return n
def get_all_children(nodes):
"""Return all children of `nodes` including each instanced child.
Using maya.cmds.listRelatives(allDescendents=True) includes only the first
instance. As such, this function acts as an optimal replacement with a
focus on a fast query.
"""
sel = OpenMaya.MSelectionList()
traversed = set()
iterator = OpenMaya.MItDag(OpenMaya.MItDag.kDepthFirst)
for node in nodes:
if node in traversed:
# Ignore if already processed as a child
# before
continue
sel.clear()
sel.add(node)
dag = sel.getDagPath(0)
iterator.reset(dag)
# ignore self
iterator.next() # noqa: B305
while not iterator.isDone():
path = iterator.fullPathName()
if path in traversed:
iterator.prune()
iterator.next() # noqa: B305
continue
traversed.add(path)
iterator.next() # noqa: B305
return list(traversed)
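# A minimal usage sketch, assuming a running Maya session and that this module
# is importable as openpype.hosts.maya.api.lib; node names are illustrative.
from maya import cmds
from openpype.hosts.maya.api.lib import get_all_children

group = cmds.group(cmds.polyCube(name="child")[0], name="source_grp")
instance = cmds.instance(group, name="instanced_grp")[0]
children = get_all_children([group, instance])
# Both |source_grp|child and |instanced_grp|child (plus their shapes) are
# returned, whereas listRelatives(allDescendents=True) reports only one
# instance path.
print(sorted(children))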

@ -339,7 +339,7 @@ class ARenderProducts:
aov_tokens = ["<aov>", "<renderpass>"]
def match_last(tokens, text):
"""regex match the last occurence from a list of tokens"""
"""regex match the last occurrence from a list of tokens"""
pattern = "(?:.*)({})".format("|".join(tokens))
return re.search(pattern, text, re.IGNORECASE)
@ -857,6 +857,7 @@ class RenderProductsVray(ARenderProducts):
if default_ext in {"exr (multichannel)", "exr (deep)"}:
default_ext = "exr"
colorspace = lib.get_color_management_output_transform()
products = []
# add beauty as default when not disabled
@ -868,7 +869,7 @@ class RenderProductsVray(ARenderProducts):
productName="",
ext=default_ext,
camera=camera,
colorspace=lib.get_color_management_output_transform(),
colorspace=colorspace,
multipart=self.multipart
)
)
@ -882,6 +883,7 @@ class RenderProductsVray(ARenderProducts):
productName="Alpha",
ext=default_ext,
camera=camera,
colorspace=colorspace,
multipart=self.multipart
)
)
@ -917,7 +919,8 @@ class RenderProductsVray(ARenderProducts):
product = RenderProduct(productName=name,
ext=default_ext,
aov=aov,
camera=camera)
camera=camera,
colorspace=colorspace)
products.append(product)
# Continue as we've processed this special case AOV
continue
@ -929,7 +932,7 @@ class RenderProductsVray(ARenderProducts):
ext=default_ext,
aov=aov,
camera=camera,
colorspace=lib.get_color_management_output_transform()
colorspace=colorspace
)
products.append(product)
@ -1051,7 +1054,7 @@ class RenderProductsRedshift(ARenderProducts):
def get_files(self, product):
# When outputting AOVs we need to replace Redshift specific AOV tokens
# with Maya render tokens for generating file sequences. We validate to
# a specific AOV fileprefix so we only need to accout for one
# a specific AOV fileprefix so we only need to account for one
# replacement.
if not product.multipart and product.driver:
file_prefix = self._get_attr(product.driver + ".filePrefix")
@ -1130,6 +1133,7 @@ class RenderProductsRedshift(ARenderProducts):
products = []
light_groups_enabled = False
has_beauty_aov = False
colorspace = lib.get_color_management_output_transform()
for aov in aovs:
enabled = self._get_attr(aov, "enabled")
if not enabled:
@ -1173,7 +1177,8 @@ class RenderProductsRedshift(ARenderProducts):
ext=ext,
multipart=False,
camera=camera,
driver=aov)
driver=aov,
colorspace=colorspace)
products.append(product)
if light_groups:
@ -1188,7 +1193,8 @@ class RenderProductsRedshift(ARenderProducts):
ext=ext,
multipart=False,
camera=camera,
driver=aov)
driver=aov,
colorspace=colorspace)
products.append(product)
# When a Beauty AOV is added manually, it will be rendered as
@ -1204,7 +1210,8 @@ class RenderProductsRedshift(ARenderProducts):
RenderProduct(productName=beauty_name,
ext=ext,
multipart=self.multipart,
camera=camera))
camera=camera,
colorspace=colorspace))
return products
@ -1236,6 +1243,8 @@ class RenderProductsRenderman(ARenderProducts):
"""
from rfm2.api.displays import get_displays # noqa
colorspace = lib.get_color_management_output_transform()
cameras = [
self.sanitize_camera_name(c)
for c in self.get_renderable_cameras()
@ -1302,7 +1311,8 @@ class RenderProductsRenderman(ARenderProducts):
productName=aov_name,
ext=extensions,
camera=camera,
multipart=True
multipart=True,
colorspace=colorspace
)
if has_cryptomatte and matte_enabled:
@ -1311,7 +1321,8 @@ class RenderProductsRenderman(ARenderProducts):
aov=cryptomatte_aov,
ext=extensions,
camera=camera,
multipart=True
multipart=True,
colorspace=colorspace
)
else:
# this code should handle the case where no multipart

@ -19,6 +19,8 @@ from maya.app.renderSetup.model.override import (
UniqueOverride
)
from openpype.hosts.maya.api.lib import get_attribute
EXACT_MATCH = 0
PARENT_MATCH = 1
CLIENT_MATCH = 2
@ -96,9 +98,6 @@ def get_attr_in_layer(node_attr, layer):
"""
# Delay pymel import to here because it's slow to load
import pymel.core as pm
def _layer_needs_update(layer):
"""Return whether layer needs updating."""
# Use `getattr` as e.g. DEFAULT_RENDER_LAYER does not have
@ -125,7 +124,7 @@ def get_attr_in_layer(node_attr, layer):
node = history_overrides[-1] if history_overrides else override
node_attr_ = node + ".original"
return pm.getAttr(node_attr_, asString=True)
return get_attribute(node_attr_, asString=True)
layer = get_rendersetup_layer(layer)
rs = renderSetup.instance()
@ -145,7 +144,7 @@ def get_attr_in_layer(node_attr, layer):
# we will let it error out.
rs.switchToLayer(current_layer)
return pm.getAttr(node_attr, asString=True)
return get_attribute(node_attr, asString=True)
overrides = get_attr_overrides(node_attr, layer)
default_layer_value = get_default_layer_value(node_attr)
@ -156,7 +155,7 @@ def get_attr_in_layer(node_attr, layer):
for match, layer_override, index in overrides:
if isinstance(layer_override, AbsOverride):
# Absolute override
value = pm.getAttr(layer_override.name() + ".attrValue")
value = get_attribute(layer_override.name() + ".attrValue")
if match == EXACT_MATCH:
# value = value
pass
@ -168,8 +167,8 @@ def get_attr_in_layer(node_attr, layer):
elif isinstance(layer_override, RelOverride):
# Relative override
# Value = Original * Multiply + Offset
multiply = pm.getAttr(layer_override.name() + ".multiply")
offset = pm.getAttr(layer_override.name() + ".offset")
multiply = get_attribute(layer_override.name() + ".multiply")
offset = get_attribute(layer_override.name() + ".offset")
if match == EXACT_MATCH:
value = value * multiply + offset
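# A tiny worked example of the relative-override formula above
# (Value = Original * Multiply + Offset), with illustrative numbers:
# an original value of 2.0 with multiply 3.0 and offset 0.5 becomes 6.5.
original, multiply, offset = 2.0, 3.0, 0.5
assert original * multiply + offset == 6.5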

@ -1,4 +1,5 @@
import os
import re
from maya import cmds
@ -12,6 +13,7 @@ from openpype.pipeline import (
AVALON_CONTAINER_ID,
Anatomy,
)
from openpype.pipeline.load import LoadError
from openpype.settings import get_project_settings
from .pipeline import containerise
from . import lib
@ -82,6 +84,44 @@ def get_reference_node_parents(ref):
return parents
def get_custom_namespace(custom_namespace):
"""Return unique namespace.
The input namespace can contain a single group
of '#' number tokens to indicate where the namespace's
unique index should go. The number of tokens defines
the zero padding of the number, e.g. ### turns into 001.
Warning: Note that a namespace will always be
prefixed with a _ if it starts with a digit.
Example:
>>> get_custom_namespace("myspace_##_")
# myspace_01_
>>> get_custom_namespace("##_myspace")
# _01_myspace
>>> get_custom_namespace("myspace##")
# myspace01
"""
split = re.split("([#]+)", custom_namespace, 1)
if len(split) == 3:
base, padding, suffix = split
padding = "%0{}d".format(len(padding))
else:
base = split[0]
padding = "%02d" # default padding
suffix = ""
return lib.unique_namespace(
base,
format=padding,
prefix="_" if not base or base[0].isdigit() else "",
suffix=suffix
)
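# A standalone sketch (plain Python, no Maya needed) of the token handling
# above: the '#' group becomes a zero-padded printf-style format string.
import re

base, padding, suffix = re.split("([#]+)", "myspace_###_", 1)
# -> base 'myspace_', padding '###', suffix '_'
fmt = "%0{}d".format(len(padding))   # '%03d'
assert fmt % 1 == "001"
assert base + fmt % 1 + suffix == "myspace_001_"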
class Creator(LegacyCreator):
defaults = ['Main']
@ -143,15 +183,46 @@ class ReferenceLoader(Loader):
assert os.path.exists(self.fname), "%s does not exist." % self.fname
asset = context['asset']
subset = context['subset']
settings = get_project_settings(context['project']['name'])
custom_naming = settings['maya']['load']['reference_loader']
loaded_containers = []
count = options.get("count") or 1
for c in range(0, count):
namespace = namespace or lib.unique_namespace(
"{}_{}_".format(asset["name"], context["subset"]["name"]),
prefix="_" if asset["name"][0].isdigit() else "",
suffix="_",
if not custom_naming['namespace']:
raise LoadError("No namespace specified in "
"Maya ReferenceLoader settings")
elif not custom_naming['group_name']:
raise LoadError("No group name specified in "
"Maya ReferenceLoader settings")
formatting_data = {
"asset_name": asset['name'],
"asset_type": asset['type'],
"subset": subset['name'],
"family": (
subset['data'].get('family') or
subset['data']['families'][0]
)
}
custom_namespace = custom_naming['namespace'].format(
**formatting_data
)
custom_group_name = custom_naming['group_name'].format(
**formatting_data
)
count = options.get("count") or 1
for c in range(0, count):
namespace = get_custom_namespace(custom_namespace)
group_name = "{}:{}".format(
namespace,
custom_group_name
)
options['group_name'] = group_name
# Offset loaded subset
if "offset" in options:
@ -187,7 +258,7 @@ class ReferenceLoader(Loader):
return loaded_containers
def process_reference(self, context, name, namespace, data):
def process_reference(self, context, name, namespace, options):
"""To be implemented by subclass"""
raise NotImplementedError("Must be implemented by subclass")

@ -33,7 +33,7 @@ class MayaTemplateBuilder(AbstractTemplateBuilder):
get_template_preset implementation)
Returns:
bool: Wether the template was succesfully imported or not
bool: Whether the template was successfully imported or not
"""
if cmds.objExists(PLACEHOLDER_SET):
@ -116,7 +116,7 @@ class MayaPlaceholderLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin):
placeholder_name_parts = placeholder_data["builder_type"].split("_")
pos = 1
# add famlily in any
# add family if any
placeholder_family = placeholder_data["family"]
if placeholder_family:
placeholder_name_parts.insert(pos, placeholder_family)

@ -12,6 +12,7 @@ class CreateLook(plugin.Creator):
family = "look"
icon = "paint-brush"
make_tx = True
rs_tex = False
def __init__(self, *args, **kwargs):
super(CreateLook, self).__init__(*args, **kwargs)
@ -20,7 +21,8 @@ class CreateLook(plugin.Creator):
# Whether to automatically convert the textures to .tx upon publish.
self.data["maketx"] = self.make_tx
# Whether to automatically convert the textures to .rstex upon publish.
self.data["rstex"] = self.rs_tex
# Enable users to force a copy.
# - on Windows is "forceCopy" always changed to `True` because of
# windows implementation of hardlinks

@ -14,7 +14,7 @@ class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
icon = "code-fork"
color = "orange"
def process_reference(self, context, name, namespace, data):
def process_reference(self, context, name, namespace, options):
import maya.cmds as cmds
from openpype.hosts.maya.api.lib import unique_namespace
@ -41,7 +41,7 @@ class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
namespace=namespace,
sharedReferenceFile=False,
groupReference=True,
groupName="{}:{}".format(namespace, name),
groupName=options['group_name'],
reference=True,
returnNewNodes=True)

@ -118,7 +118,7 @@ class ImportMayaLoader(load.LoaderPlugin):
"clean_import",
label="Clean import",
default=False,
help="Should all occurences of cbId be purged?"
help="Should all occurrences of cbId be purged?"
)
]

@ -84,7 +84,7 @@ class ArnoldStandinLoader(load.LoaderPlugin):
sequence = is_sequence(os.listdir(os.path.dirname(self.fname)))
cmds.setAttr(standin_shape + ".useFrameExtension", sequence)
nodes = [root, standin]
nodes = [root, standin, standin_shape]
if operator is not None:
nodes.append(operator)
self[:] = nodes
@ -180,10 +180,10 @@ class ArnoldStandinLoader(load.LoaderPlugin):
proxy_basename, proxy_path = self._get_proxy_path(path)
# Whether there is proxy or so, we still update the string operator.
# If no proxy exists, the string operator wont replace anything.
# If no proxy exists, the string operator won't replace anything.
cmds.setAttr(
string_replace_operator + ".match",
"resources/" + proxy_basename,
proxy_basename,
type="string"
)
cmds.setAttr(

@ -11,7 +11,7 @@ from openpype.pipeline import (
get_representation_path,
)
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
from openpype.hosts.maya.api.lib import unique_namespace, get_container_members
class AudioLoader(load.LoaderPlugin):
@ -52,17 +52,15 @@ class AudioLoader(load.LoaderPlugin):
)
def update(self, container, representation):
import pymel.core as pm
audio_node = None
for node in pm.PyNode(container["objectName"]).members():
if node.nodeType() == "audio":
audio_node = node
members = get_container_members(container)
audio_nodes = cmds.ls(members, type="audio")
assert audio_node is not None, "Audio node not found."
assert audio_nodes, "Audio node not found."
audio_node = audio_nodes[0]
path = get_representation_path(representation)
audio_node.filename.set(path)
cmds.setAttr("{}.filename".format(audio_node), path, type="string")
cmds.setAttr(
container["objectName"] + ".representation",
str(representation["_id"]),
@ -80,8 +78,12 @@ class AudioLoader(load.LoaderPlugin):
asset = get_asset_by_id(
project_name, subset["parent"], fields=["parent"]
)
audio_node.sourceStart.set(1 - asset["data"]["frameStart"])
audio_node.sourceEnd.set(asset["data"]["frameEnd"])
source_start = 1 - asset["data"]["frameStart"]
source_end = asset["data"]["frameEnd"]
cmds.setAttr("{}.sourceStart".format(audio_node), source_start)
cmds.setAttr("{}.sourceEnd".format(audio_node), source_end)
def switch(self, container, representation):
self.update(container, representation)

@ -1,5 +1,9 @@
import os
import maya.cmds as cmds
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
from openpype.pipeline import (
load,
get_representation_path
@ -11,19 +15,15 @@ class GpuCacheLoader(load.LoaderPlugin):
"""Load Alembic as gpuCache"""
families = ["model", "animation", "proxyAbc", "pointcache"]
representations = ["abc"]
representations = ["abc", "gpu_cache"]
label = "Import Gpu Cache"
label = "Load Gpu Cache"
order = -5
icon = "code-fork"
color = "orange"
def load(self, context, name, namespace, data):
import maya.cmds as cmds
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
asset = context['asset']['name']
namespace = namespace or unique_namespace(
asset + "_",
@ -42,10 +42,9 @@ class GpuCacheLoader(load.LoaderPlugin):
c = colors.get('model')
if c is not None:
cmds.setAttr(root + ".useOutlinerColor", 1)
cmds.setAttr(root + ".outlinerColor",
(float(c[0])/255),
(float(c[1])/255),
(float(c[2])/255)
cmds.setAttr(
root + ".outlinerColor",
(float(c[0]) / 255), (float(c[1]) / 255), (float(c[2]) / 255)
)
# Create transform with shape
@ -74,9 +73,6 @@ class GpuCacheLoader(load.LoaderPlugin):
loader=self.__class__.__name__)
def update(self, container, representation):
import maya.cmds as cmds
path = get_representation_path(representation)
# Update the cache
@ -96,7 +92,6 @@ class GpuCacheLoader(load.LoaderPlugin):
self.update(container, representation)
def remove(self, container):
import maya.cmds as cmds
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
