Merge branch 'develop' into enhancement/unreal-render_creator_improvements

This commit is contained in:
Ondrej Samohel 2023-04-24 12:38:07 +02:00
commit 7d7206953c
No known key found for this signature in database
GPG key ID: 02376E18990A97C6
412 changed files with 15999 additions and 9826 deletions


@@ -1,33 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''
---
**Running version**
[ex. 3.14.1-nightly.2]
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. windows]
- Host: [e.g. Maya, Nuke, Houdini]
**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/bug_report.yml vendored Normal file

@@ -0,0 +1,183 @@
name: Bug Report
description: File a bug report
title: 'Bug: '
labels:
- 'type: bug'
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this bug report!
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: >-
Please search to see if an issue already exists for the bug you
encountered.
options:
- label: I have searched the existing issues
required: true
- type: textarea
attributes:
label: 'Current Behavior:'
description: A concise description of what you're experiencing.
validations:
required: true
- type: textarea
attributes:
label: 'Expected Behavior:'
description: A concise description of what you expected to happen.
validations:
required: false
- type: dropdown
id: _version
attributes:
label: Version
description: What version are you running? Look in the OpenPype Tray.
options:
- 3.15.4-nightly.3
- 3.15.4-nightly.2
- 3.15.4-nightly.1
- 3.15.3
- 3.15.3-nightly.4
- 3.15.3-nightly.3
- 3.15.3-nightly.2
- 3.15.3-nightly.1
- 3.15.2
- 3.15.2-nightly.6
- 3.15.2-nightly.5
- 3.15.2-nightly.4
- 3.15.2-nightly.3
- 3.15.2-nightly.2
- 3.15.2-nightly.1
- 3.15.1
- 3.15.1-nightly.6
- 3.15.1-nightly.5
- 3.15.1-nightly.4
- 3.15.1-nightly.3
- 3.15.1-nightly.2
- 3.15.1-nightly.1
- 3.15.0
- 3.15.0-nightly.1
- 3.14.11-nightly.4
- 3.14.11-nightly.3
- 3.14.11-nightly.2
- 3.14.11-nightly.1
- 3.14.10
- 3.14.10-nightly.9
- 3.14.10-nightly.8
- 3.14.10-nightly.7
- 3.14.10-nightly.6
- 3.14.10-nightly.5
- 3.14.10-nightly.4
- 3.14.10-nightly.3
- 3.14.10-nightly.2
- 3.14.10-nightly.1
- 3.14.9
- 3.14.9-nightly.5
- 3.14.9-nightly.4
- 3.14.9-nightly.3
- 3.14.9-nightly.2
- 3.14.9-nightly.1
- 3.14.8
- 3.14.8-nightly.4
- 3.14.8-nightly.3
- 3.14.8-nightly.2
- 3.14.8-nightly.1
- 3.14.7
- 3.14.7-nightly.8
- 3.14.7-nightly.7
- 3.14.7-nightly.6
- 3.14.7-nightly.5
- 3.14.7-nightly.4
- 3.14.7-nightly.3
- 3.14.7-nightly.2
- 3.14.7-nightly.1
- 3.14.6
- 3.14.6-nightly.3
- 3.14.6-nightly.2
- 3.14.6-nightly.1
- 3.14.5
- 3.14.5-nightly.3
- 3.14.5-nightly.2
- 3.14.5-nightly.1
- 3.14.4
- 3.14.4-nightly.4
- 3.14.4-nightly.3
- 3.14.4-nightly.2
- 3.14.4-nightly.1
- 3.14.3
- 3.14.3-nightly.7
- 3.14.3-nightly.6
- 3.14.3-nightly.5
- 3.14.3-nightly.4
- 3.14.3-nightly.3
- 3.14.3-nightly.2
- 3.14.3-nightly.1
- 3.14.2
- 3.14.2-nightly.5
- 3.14.2-nightly.4
- 3.14.2-nightly.3
- 3.14.2-nightly.2
- 3.14.2-nightly.1
- 3.14.1
- 3.14.1-nightly.4
- 3.14.1-nightly.3
- 3.14.1-nightly.2
- 3.14.1-nightly.1
- 3.14.0
- 3.14.0-nightly.1
- 3.13.1-nightly.3
- 3.13.1-nightly.2
- 3.13.1-nightly.1
- 3.13.0
- 3.13.0-nightly.1
- 3.12.3-nightly.3
- 3.12.3-nightly.2
- 3.12.3-nightly.1
validations:
required: true
- type: dropdown
validations:
required: true
attributes:
label: What platform are you running OpenPype on?
description: |
Please specify the operating systems you are running OpenPype with.
multiple: true
options:
- Windows
- Linux / CentOS
- Linux / Ubuntu
- Linux / RedHat
- MacOS
- type: textarea
id: to-reproduce
attributes:
label: 'Steps To Reproduce:'
description: Steps to reproduce the behavior.
placeholder: |
1. What did the configuration look like
2. What action was taken
validations:
required: true
- type: checkboxes
attributes:
label: Are there any labels you wish to add?
description: Please search labels and identify those related to your bug.
options:
- label: I have added the relevant labels to the bug report.
required: true
- type: textarea
id: logs
attributes:
label: 'Relevant log output:'
description: >-
Please copy and paste any relevant log output. This will be
automatically formatted into code, so no need for backticks.
render: shell
- type: textarea
id: additional-context
attributes:
label: 'Additional context:'
description: Add any other context about the problem here.

.github/ISSUE_TEMPLATE/config.yml vendored Normal file

@@ -0,0 +1,8 @@
blank_issues_enabled: false
contact_links:
- name: Ynput Community Discussions
url: https://community.ynput.io
about: Please ask and answer questions here.
- name: Ynput Discord Server
url: https://discord.gg/ynput
about: For community quick chats.


@@ -0,0 +1,52 @@
name: Enhancement Request
description: Create a report to help us enhance a particular feature
title: "Enhancement: "
labels:
- "type: enhancement"
body:
- type: markdown
attributes:
value: |
Thanks for taking the time to fill out this enhancement request report!
- type: checkboxes
attributes:
label: Is there an existing issue for this?
description: Please search to see if an issue already exists for the enhancement you are requesting.
options:
- label: I have searched the existing issues.
required: true
- type: textarea
id: related-feature
attributes:
label: Please describe the feature you have in mind and explain what the current shortcomings are.
description: A clear and concise description of what the problem is.
validations:
required: true
- type: textarea
id: enhancement-proposal
attributes:
label: How would you imagine the implementation of the feature?
description: A clear and concise description of what you want to happen.
validations:
required: true
- type: checkboxes
attributes:
label: Are there any labels you wish to add?
description: Please search labels and identify those related to your enhancement.
options:
- label: I have added the relevant labels to the enhancement request.
required: true
- type: textarea
id: alternatives
attributes:
label: "Describe alternatives you've considered:"
description: A clear and concise description of any alternative solutions or features you've considered.
validations:
required: false
- type: textarea
id: additional-context
attributes:
label: "Additional context:"
description: Add any other context or screenshots about the enhancement request here.
validations:
required: false


@@ -1,20 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: enhancement
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

.github/pr-branch-labeler.yml vendored Normal file

@@ -0,0 +1,15 @@
# Apply label "feature" if head matches "feature/*"
'type: feature':
head: "feature/*"
# Apply label "enhancement" if head matches "enhancement/*"
'type: enhancement':
head: "enhancement/*"
# Apply label "bugfix" if head matches one of "bugfix/*" or "hotfix/*"
'type: bug':
head: ["bugfix/*", "hotfix/*"]
# Apply label "Bump Minor" if base matches "release/next-minor"
'Bump Minor':
base: "release/next-minor"
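The branch-name rules above can be sketched in Python. This is a rough approximation assuming shell-style glob semantics (the real pr-branch-labeler action has its own matcher); `HEAD_RULES` and `labels_for_head` are illustrative names, not part of the action.

```python
# Illustrative sketch of the head-branch labeling rules above.
# fnmatch is only an approximation of the action's glob matcher.
from fnmatch import fnmatch

HEAD_RULES = {
    "type: feature": ["feature/*"],
    "type: enhancement": ["enhancement/*"],
    "type: bug": ["bugfix/*", "hotfix/*"],
}

def labels_for_head(head_ref):
    """Return every label whose globs match the PR head branch."""
    return [label for label, globs in HEAD_RULES.items()
            if any(fnmatch(head_ref, g) for g in globs)]

print(labels_for_head("enhancement/unreal-render_creator_improvements"))
```

A PR opened from a `hotfix/...` branch would pick up `type: bug` the same way.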

.github/pr-glob-labeler.yml vendored Normal file

@@ -0,0 +1,102 @@
# Add type: unittest label if any changes in tests folders
'type: unittest':
- '*/*tests*/**/*'
# any changes in documentation structure
'type: documentation':
- '*/**/*website*/**/*'
- '*/**/*docs*/**/*'
# hosts triage
'host: Nuke':
- '*/**/*nuke*'
- '*/**/*nuke*/**/*'
'host: Photoshop':
- '*/**/*photoshop*'
- '*/**/*photoshop*/**/*'
'host: Harmony':
- '*/**/*harmony*'
- '*/**/*harmony*/**/*'
'host: UE':
- '*/**/*unreal*'
- '*/**/*unreal*/**/*'
'host: Houdini':
- '*/**/*houdini*'
- '*/**/*houdini*/**/*'
'host: Maya':
- '*/**/*maya*'
- '*/**/*maya*/**/*'
'host: Resolve':
- '*/**/*resolve*'
- '*/**/*resolve*/**/*'
'host: Blender':
- '*/**/*blender*'
- '*/**/*blender*/**/*'
'host: Hiero':
- '*/**/*hiero*'
- '*/**/*hiero*/**/*'
'host: Fusion':
- '*/**/*fusion*'
- '*/**/*fusion*/**/*'
'host: Flame':
- '*/**/*flame*'
- '*/**/*flame*/**/*'
'host: TrayPublisher':
- '*/**/*traypublisher*'
- '*/**/*traypublisher*/**/*'
'host: 3dsmax':
- '*/**/*max*'
- '*/**/*max*/**/*'
'host: TV Paint':
- '*/**/*tvpaint*'
- '*/**/*tvpaint*/**/*'
'host: CelAction':
- '*/**/*celaction*'
- '*/**/*celaction*/**/*'
'host: After Effects':
- '*/**/*aftereffects*'
- '*/**/*aftereffects*/**/*'
'host: Substance Painter':
- '*/**/*substancepainter*'
- '*/**/*substancepainter*/**/*'
# modules triage
'module: Deadline':
- '*/**/*deadline*'
- '*/**/*deadline*/**/*'
'module: RoyalRender':
- '*/**/*royalrender*'
- '*/**/*royalrender*/**/*'
'module: Sitesync':
- '*/**/*sync_server*'
- '*/**/*sync_server*/**/*'
'module: Ftrack':
- '*/**/*ftrack*'
- '*/**/*ftrack*/**/*'
'module: Shotgrid':
- '*/**/*shotgrid*'
- '*/**/*shotgrid*/**/*'
'module: Kitsu':
- '*/**/*kitsu*'
- '*/**/*kitsu*/**/*'


@@ -1,4 +1,4 @@
name: documentation
name: 📜 Documentation
on:
pull_request:


@@ -1,4 +1,4 @@
name: Milestone - assign to PRs
name: 👉🏻 Milestone - assign to PRs
on:
pull_request_target:


@@ -1,4 +1,4 @@
name: Milestone - create default
name: Milestone - create default
on:
milestone:


@ -1,4 +1,4 @@
name: Milestone Release [trigger]
name: 🚩 Milestone Release [trigger]
on:
workflow_dispatch:
@@ -45,3 +45,6 @@ jobs:
token: ${{ secrets.YNPUT_BOT_TOKEN }}
user_email: ${{ secrets.CI_EMAIL }}
user_name: ${{ secrets.CI_USER }}
cu_api_key: ${{ secrets.CLICKUP_API_KEY }}
cu_team_id: ${{ secrets.CLICKUP_TEAM_ID }}
cu_field_id: ${{ secrets.CLICKUP_RELEASE_FIELD_ID }}


@@ -1,4 +1,4 @@
name: Dev -> Main
name: 🔀 Dev -> Main
on:
schedule:
@@ -25,5 +25,5 @@ jobs:
- name: Invoke pre-release workflow
uses: benc-uk/workflow-dispatch@v1
with:
workflow: Nightly Prerelease
workflow: prerelease.yml
token: ${{ secrets.YNPUT_BOT_TOKEN }}

.github/workflows/pr_labels.yml vendored Normal file

@@ -0,0 +1,49 @@
name: 🔖 PR labels
on:
pull_request_target:
types: [opened, assigned]
jobs:
size-label:
name: pr_size_label
runs-on: ubuntu-latest
if: github.event.action == 'assigned' || github.event.action == 'opened'
steps:
- name: Add size label
uses: "pascalgn/size-label-action@v0.4.3"
env:
GITHUB_TOKEN: "${{ secrets.YNPUT_BOT_TOKEN }}"
IGNORED: ".gitignore\n*.md\n*.json"
with:
sizes: >
{
"0": "XS",
"100": "S",
"500": "M",
"1000": "L",
"1500": "XL",
"2500": "XXL"
}
label_prs_branch:
name: pr_branch_label
runs-on: ubuntu-latest
if: github.event.action == 'assigned' || github.event.action == 'opened'
steps:
- name: Label PRs - Branch name detection
uses: ffittschen/pr-branch-labeler@v1
with:
repo-token: ${{ secrets.YNPUT_BOT_TOKEN }}
label_prs_globe:
name: pr_globe_label
runs-on: ubuntu-latest
if: github.event.action == 'assigned' || github.event.action == 'opened'
steps:
- name: Label PRs - Globe detection
uses: actions/labeler@v4.0.3
with:
repo-token: ${{ secrets.YNPUT_BOT_TOKEN }}
configuration-path: ".github/pr-glob-labeler.yml"
sync-labels: false
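The `sizes` map in the size-label step resolves to the label of the largest threshold not exceeding the PR's changed-line count. A sketch of that lookup (illustrative function name, not the action's API):

```python
# Sketch of the size-label thresholds above: pick the label of the
# largest threshold that the changed-line count still reaches.
SIZES = {0: "XS", 100: "S", 500: "M", 1000: "L", 1500: "XL", 2500: "XXL"}

def size_label(changed_lines):
    best = max(t for t in SIZES if t <= changed_lines)
    return SIZES[best]

print(size_label(750))
```

A 750-line PR lands in the 500 bucket, so it gets `M`.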


@@ -1,4 +1,4 @@
name: Nightly Prerelease
name: Nightly Prerelease
on:
workflow_dispatch:
@@ -65,3 +65,9 @@ jobs:
source_ref: 'main'
target_branch: 'develop'
commit_message_template: '[Automated] Merged {source_ref} into {target_branch}'
- name: Invoke Update bug report workflow
uses: benc-uk/workflow-dispatch@v1
with:
workflow: update_bug_report.yml
token: ${{ secrets.YNPUT_BOT_TOKEN }}


@@ -0,0 +1,70 @@
name: 📊 Project task statuses
on:
pull_request_review:
types: [submitted]
issue_comment:
types: [created]
pull_request_review_comment:
types: [created]
jobs:
pr_review_started:
name: pr_review_started
runs-on: ubuntu-latest
# -----------------------------
# conditions are:
# - PR issue comment which is not from Ynbot
# - PR review comment which is not Hound (or any other bot)
# - PR review submitted which is not from Hound (or any other bot) and is not 'Changes requested'
# - make sure it only runs on non-forked repos
# -----------------------------
if: |
(github.event_name == 'issue_comment' && github.event.pull_request.head.repo.owner.login == 'ynput' && github.event.comment.user.id != 82967070) ||
(github.event_name == 'pull_request_review_comment' && github.event.pull_request.head.repo.owner.login == 'ynput' && github.event.comment.user.type != 'Bot') ||
(github.event_name == 'pull_request_review' &&
github.event.pull_request.head.repo.owner.login == 'ynput' &&
github.event.review.state != 'changes_requested' &&
github.event.review.state != 'approved' &&
github.event.review.user.type != 'Bot')
steps:
- name: Move PR to 'Review In Progress'
uses: leonsteinhaeuser/project-beta-automations@v2.1.0
with:
gh_token: ${{ secrets.YNPUT_BOT_TOKEN }}
organization: ynput
project_id: 11
resource_node_id: ${{ github.event.pull_request.node_id || github.event.issue.node_id }}
status_value: Review In Progress
pr_review_requested:
# -----------------------------
# Resets the ClickUp task status to 'In Progress' after 'Changes Requested' was submitted on the PR
# It only runs if a custom ClickUp task id was found in the ref branch of the PR
# -----------------------------
name: pr_review_requested
runs-on: ubuntu-latest
if: github.event_name == 'pull_request_review' && github.event.pull_request.head.repo.owner.login == 'ynput' && github.event.review.state == 'changes_requested'
steps:
- name: Set branch env
run: echo "BRANCH_NAME=${{ github.event.pull_request.head.ref}}" >> $GITHUB_ENV
- name: Get ClickUp ID from ref head name
id: get_cuID
run: |
echo ${{ env.BRANCH_NAME }}
echo "cuID=$(echo $BRANCH_NAME | sed 's/.*\/\(OP\-[0-9]\{4\}\).*/\1/')" >> $GITHUB_OUTPUT
- name: Print ClickUp ID
run: echo ${{ steps.get_cuID.outputs.cuID }}
- name: Move found ClickUp task to 'In Progress'
if: steps.get_cuID.outputs.cuID
run: |
curl -i -X PUT \
'https://api.clickup.com/api/v2/task/${{ steps.get_cuID.outputs.cuID }}?custom_task_ids=true&team_id=${{secrets.CLICKUP_TEAM_ID}}' \
-H 'Authorization: ${{secrets.CLICKUP_API_KEY}}' \
-H 'Content-Type: application/json' \
-d '{
"status": "in progress"
}'
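The `sed` expression in the `get_cuID` step pulls a ClickUp custom task id (`OP-` plus four digits, after a slash) out of the branch name. An equivalent sketch in Python (hypothetical helper name):

```python
import re

def clickup_id(branch_name):
    """Extract an 'OP-1234'-style ClickUp id from a branch name, or None."""
    match = re.search(r"/(OP-\d{4})", branch_name)
    return match.group(1) if match else None

print(clickup_id("bugfix/OP-1234_fix-render"))
```

Unlike the `sed` version, which passes the branch name through unchanged when nothing matches, this returns `None` on no match.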


@@ -1,7 +1,7 @@
# This workflow will upload a Python Package using Twine when a release is created
# For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries
name: Test Build
name: 🏗️ Test Build
on:
pull_request:

.github/workflows/update_bug_report.yml vendored Normal file

@@ -0,0 +1,25 @@
name: 🐞 Update Bug Report
on:
workflow_dispatch:
release:
# https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#release
types: [published]
jobs:
update-bug-report:
runs-on: ubuntu-latest
name: Update bug report
steps:
- uses: actions/checkout@v3
with:
ref: ${{ github.event.release.target_commitish }}
- name: Update version
uses: ynput/gha-populate-form-version@main
with:
github_token: ${{ secrets.YNPUT_BOT_TOKEN }}
registry: github
dropdown: _version
limit_to: 100
form: .github/ISSUE_TEMPLATE/bug_report.yml
commit_message: 'chore(): update bug report / version'
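Conceptually, the `gha-populate-form-version` step rewrites the `_version` dropdown of the issue form with the latest release tags, capped at `limit_to`. A toy sketch of that transformation (the real action works on the YAML file; names here are illustrative):

```python
# Toy sketch: splice the newest release tags (capped at 'limit_to')
# into the 'options' of the form's '_version' dropdown.
def populate_versions(form, versions, limit_to=100):
    for block in form["body"]:
        if block.get("id") == "_version":
            block["attributes"]["options"] = versions[:limit_to]
    return form

form = {"body": [{"type": "dropdown", "id": "_version",
                  "attributes": {"label": "Version", "options": []}}]}
populate_versions(form, ["3.15.4-nightly.3", "3.15.3"], limit_to=1)
print(form["body"][0]["attributes"]["options"])
```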

ARCHITECTURE.md Normal file

@@ -0,0 +1,77 @@
# Architecture
OpenPype is a monolithic Python project that bundles several parts. This document gives a bird's-eye overview of the project and, to a certain degree, of each of the sub-projects.
The current file structure looks like this:
```
.
├── common - Backend portion of the Addon distribution logic for the v4 server.
├── docs - Documentation of the source code.
├── igniter - The OpenPype bootstrapper, deals with running version resolution and setting up the connection to the mongodb.
├── openpype - The actual OpenPype core package.
├── schema - Collection of JSON files describing schematics of objects. This follows Avalon's convention.
├── tests - Integration and unit tests.
├── tools - Convenience scripts to perform common actions (in both bash and ps1).
├── vendor - When using the igniter, it deploys third party tools in here, such as ffmpeg.
└── website - Source files for https://openpype.io/, built with Docusaurus (https://docusaurus.io/).
```
The core functionality of the pipeline lives in `igniter` and `openpype`, which in turn rely on the `schema` files. Whenever you build (or download a pre-built) version of OpenPype, these two are bundled in, and `igniter` is the entry point.
## Igniter
Igniter is the setup and update tool for OpenPype. Unless you want to package `openpype` separately and deal with all the configuration manually, this will most likely be your entry point.
```
igniter/
├── bootstrap_repos.py - Module that will find or install OpenPype versions in the system.
├── __init__.py - Igniter entry point.
├── install_dialog.py - Show dialog for choosing central pype repository.
├── install_thread.py - Threading helpers for the install process.
├── __main__.py - Like `__init__.py` ?
├── message_dialog.py - Qt Dialog with a message and "Ok" button.
├── nice_progress_bar.py - Fancy Qt progress bar.
├── splash.txt - ASCII art for the terminal installer.
├── stylesheet.css - Installer Qt styles.
├── terminal_splash.py - Terminal installer animation, relies on `splash.txt`.
├── tools.py - Collection of methods that don't fit in other modules.
├── update_thread.py - Threading helper to update existing OpenPype installs.
├── update_window.py - Qt UI to update OpenPype installs.
├── user_settings.py - Interface for the OpenPype user settings.
└── version.py - Igniter's version number.
```
## OpenPype
This is the main package of the OpenPype logic. It can be loosely described as a combination of [Avalon](https://getavalon.github.io), [Pyblish](https://pyblish.com/) and glue around those, with custom OpenPype-only elements. Things are being moved around to better prepare for V4, which will be released under a new name, AYON.
```
openpype/
├── client - Interface for the MongoDB.
├── hooks - Hooks to be executed on certain OpenPype Applications defined in `openpype.lib.applications`.
├── host - Base class for the different hosts.
├── hosts - Integration with the different DCCs (hosts) using the `host` base class.
├── lib - Libraries that stitch together the package; some have been moved into other parts.
├── modules - OpenPype modules should contain separated logic of specific kind of implementation, such as Ftrack connection and its python API.
├── pipeline - Core of the OpenPype pipeline, handles creation of data, publishing, etc.
├── plugins - Global/core plugins for loader and publisher tool.
├── resources - Icons, fonts, etc.
├── scripts - Loose scripts that get run by tools/publishers.
├── settings - OpenPype settings interface.
├── style - Qt styling.
├── tests - Unit tests.
├── tools - Core tools, check out https://openpype.io/docs/artist_tools.
├── vendor - Vendoring of required Python packages.
├── widgets - Common re-usable Qt Widgets.
├── action.py - LEGACY: Lives now in `openpype.pipeline.publish.action` Pyblish actions.
├── cli.py - Command line interface, leverages `click`.
├── __init__.py - Sets two constants.
├── __main__.py - Entry point, calls the `cli.py`
├── plugin.py - Pyblish plugins.
├── pype_commands.py - Implementation of OpenPype commands.
└── version.py - Current version number.
```

File diff suppressed because it is too large


@@ -52,7 +52,7 @@ RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.n
# we need to build our own patchelf
WORKDIR /temp-patchelf
RUN git clone https://github.com/NixOS/patchelf.git . \
RUN git clone -b 0.17.0 --single-branch https://github.com/NixOS/patchelf.git . \
&& source scl_source enable devtoolset-7 \
&& ./bootstrap.sh \
&& ./configure \


@@ -3,7 +3,7 @@
Goal is that most of functions here are called on (or with) an object
that has project name as a context (e.g. on 'ProjectEntity'?).
+ We will need more specific functions doing wery specific queires really fast.
+ We will need more specific functions doing very specific queries really fast.
"""
import re
@@ -69,6 +69,19 @@ def convert_ids(in_ids):
def get_projects(active=True, inactive=False, fields=None):
"""Yield all project entity documents.
Args:
active (Optional[bool]): Include active projects. Defaults to True.
inactive (Optional[bool]): Include inactive projects.
Defaults to False.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Yields:
dict: Project entity data which can be reduced to specified 'fields'.
None is returned if project with specified filters was not found.
"""
mongodb = get_project_database()
for project_name in mongodb.collection_names():
if project_name in ("system.indexes",):
@@ -81,6 +94,20 @@ def get_projects(active=True, inactive=False, fields=None):
def get_project(project_name, active=True, inactive=True, fields=None):
"""Return project entity document by project name.
Args:
project_name (str): Name of project.
active (Optional[bool]): Allow active project. Defaults to True.
inactive (Optional[bool]): Allow inactive project. Defaults to True.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Union[Dict, None]: Project entity data which can be reduced to
specified 'fields'. None is returned if project with specified
filters was not found.
"""
# Skip if both are disabled
if not active and not inactive:
return None
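The recurring `fields (Optional[Iterable[str]])` argument in these docstrings follows one pattern: `None` means return whole documents, otherwise only the listed keys come back. A sketch of how such an argument typically becomes a pymongo projection (illustrative helper, not the actual OpenPype code):

```python
# Sketch: turn an Optional[Iterable[str]] 'fields' argument into a
# pymongo projection. None -> no projection -> full documents.
def fields_to_projection(fields=None):
    if fields is None:
        return None
    return {field: True for field in fields}

print(fields_to_projection(["name", "data.code"]))
```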
@@ -124,17 +151,18 @@ def get_whole_project(project_name):
def get_asset_by_id(project_name, asset_id, fields=None):
"""Receive asset data by it's id.
"""Receive asset data by its id.
Args:
project_name (str): Name of project where to look for queried entities.
asset_id (Union[str, ObjectId]): Asset's id.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
dict: Asset entity data.
None: Asset was not found by id.
Union[Dict, None]: Asset entity data which can be reduced to
specified 'fields'. None is returned if asset with specified
filters was not found.
"""
asset_id = convert_id(asset_id)
@@ -147,17 +175,18 @@ def get_asset_by_id(project_name, asset_id, fields=None):
def get_asset_by_name(project_name, asset_name, fields=None):
"""Receive asset data by it's name.
"""Receive asset data by its name.
Args:
project_name (str): Name of project where to look for queried entities.
asset_name (str): Asset's name.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
dict: Asset entity data.
None: Asset was not found by name.
Union[Dict, None]: Asset entity data which can be reduced to
specified 'fields'. None is returned if asset with specified
filters was not found.
"""
if not asset_name:
@@ -193,10 +222,10 @@ def _get_assets(
be found.
asset_names (Iterable[str]): Name assets that should be found.
parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
standard (bool): Query standart assets (type 'asset').
standard (bool): Query standard assets (type 'asset').
archived (bool): Query archived assets (type 'archived_asset').
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Query cursor as iterable which returns asset documents matching
@@ -261,8 +290,8 @@ def get_assets(
asset_names (Iterable[str]): Name assets that should be found.
parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
archived (bool): Add also archived assets.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Query cursor as iterable which returns asset documents matching
@@ -300,8 +329,8 @@ def get_archived_assets(
be found.
asset_names (Iterable[str]): Name assets that should be found.
parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Query cursor as iterable which returns asset documents matching
@@ -356,17 +385,18 @@ def get_asset_ids_with_subsets(project_name, asset_ids=None):
def get_subset_by_id(project_name, subset_id, fields=None):
"""Single subset entity data by it's id.
"""Single subset entity data by its id.
Args:
project_name (str): Name of project where to look for queried entities.
subset_id (Union[str, ObjectId]): Id of subset which should be found.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If subset with specified filters was not found.
Dict: Subset document which can be reduced to specified 'fields'.
Union[Dict, None]: Subset entity data which can be reduced to
specified 'fields'. None is returned if subset with specified
filters was not found.
"""
subset_id = convert_id(subset_id)
@@ -379,20 +409,19 @@ def get_subset_by_id(project_name, subset_id, fields=None):
def get_subset_by_name(project_name, subset_name, asset_id, fields=None):
"""Single subset entity data by it's name and it's version id.
"""Single subset entity data by its name and its version id.
Args:
project_name (str): Name of project where to look for queried entities.
subset_name (str): Name of subset.
asset_id (Union[str, ObjectId]): Id of parent asset.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Union[None, Dict[str, Any]]: None if subset with specified filters was
not found or dict subset document which can be reduced to
specified 'fields'.
Union[Dict, None]: Subset entity data which can be reduced to
specified 'fields'. None is returned if subset with specified
filters was not found.
"""
if not subset_name:
return None
@@ -434,8 +463,8 @@ def get_subsets(
names_by_asset_ids (dict[ObjectId, List[str]]): Complex filtering
using asset ids and list of subset names under the asset.
archived (bool): Look for archived subsets too.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Iterable cursor yielding all matching subsets.
@@ -520,17 +549,18 @@ def get_subset_families(project_name, subset_ids=None):
def get_version_by_id(project_name, version_id, fields=None):
"""Single version entity data by it's id.
"""Single version entity data by its id.
Args:
project_name (str): Name of project where to look for queried entities.
version_id (Union[str, ObjectId]): Id of version which should be found.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If version with specified filters was not found.
Dict: Version document which can be reduced to specified 'fields'.
Union[Dict, None]: Version entity data which can be reduced to
specified 'fields'. None is returned if version with specified
filters was not found.
"""
version_id = convert_id(version_id)
@@ -546,18 +576,19 @@ def get_version_by_id(project_name, version_id, fields=None):
def get_version_by_name(project_name, version, subset_id, fields=None):
"""Single version entity data by it's name and subset id.
"""Single version entity data by its name and subset id.
Args:
project_name (str): Name of project where to look for queried entities.
version (int): name of version entity (it's version).
version (int): name of version entity (its version).
subset_id (Union[str, ObjectId]): Id of version which should be found.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If version with specified filters was not found.
Dict: Version document which can be reduced to specified 'fields'.
Union[Dict, None]: Version entity data which can be reduced to
specified 'fields'. None is returned if version with specified
filters was not found.
"""
subset_id = convert_id(subset_id)
@@ -574,7 +605,7 @@ def get_version_by_name(project_name, version, subset_id, fields=None):
def version_is_latest(project_name, version_id):
"""Is version the latest from it's subset.
"""Is version the latest from its subset.
Note:
Hero versions are considered as latest.
@@ -680,8 +711,8 @@ def get_versions(
versions (Iterable[int]): Version names (as integers).
Filter ignored if 'None' is passed.
hero (bool): Look also for hero versions.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Iterable cursor yielding all matching versions.
@@ -705,12 +736,13 @@ def get_hero_version_by_subset_id(project_name, subset_id, fields=None):
project_name (str): Name of project where to look for queried entities.
subset_id (Union[str, ObjectId]): Subset id under which
is hero version.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If hero version for passed subset id does not exists.
Dict: Hero version entity data.
Union[Dict, None]: Hero version entity data which can be reduced to
specified 'fields'. None is returned if hero version with specified
filters was not found.
"""
subset_id = convert_id(subset_id)
@@ -730,17 +762,18 @@ def get_hero_version_by_id(project_name, version_id, fields=None):
def get_hero_version_by_id(project_name, version_id, fields=None):
"""Hero version by it's id.
"""Hero version by its id.
Args:
project_name (str): Name of project where to look for queried entities.
version_id (Union[str, ObjectId]): Hero version id.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If hero version with passed id was not found.
Dict: Hero version entity data.
Union[Dict, None]: Hero version entity data which can be reduced to
specified 'fields'. None is returned if hero version with specified
filters was not found.
"""
version_id = convert_id(version_id)
@ -773,8 +806,8 @@ def get_hero_versions(
should look for hero versions. Filter ignored if 'None' is passed.
version_ids (Iterable[Union[str, ObjectId]]): Hero version ids. Filter
ignored if 'None' is passed.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor|list: Iterable yielding hero versions matching passed filters.
@ -801,8 +834,8 @@ def get_output_link_versions(project_name, version_id, fields=None):
project_name (str): Name of project where to look for queried entities.
version_id (Union[str, ObjectId]): Version id which can be used
as input link for other versions.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Iterable: Iterable cursor yielding versions that are used as input
@ -828,8 +861,8 @@ def get_last_versions(project_name, subset_ids, fields=None):
Args:
project_name (str): Name of project where to look for queried entities.
subset_ids (Iterable[Union[str, ObjectId]]): List of subset ids.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
dict[ObjectId, int]: Key is subset id and value is last version name.
@ -913,12 +946,13 @@ def get_last_version_by_subset_id(project_name, subset_id, fields=None):
Args:
project_name (str): Name of project where to look for queried entities.
subset_id (Union[str, ObjectId]): Id of version which should be found.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If version with specified filters was not found.
Dict: Version document which can be reduced to specified 'fields'.
Union[Dict, None]: Version entity data which can be reduced to
specified 'fields'. None is returned if version with specified
filters was not found.
"""
subset_id = convert_id(subset_id)
@ -945,12 +979,13 @@ def get_last_version_by_subset_name(
asset_id (Union[str, ObjectId]): Asset id which is parent of passed
subset name.
asset_name (str): Asset name which is parent of passed subset name.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If version with specified filters was not found.
Dict: Version document which can be reduced to specified 'fields'.
Union[Dict, None]: Version entity data which can be reduced to
specified 'fields'. None is returned if version with specified
filters was not found.
"""
if not asset_id and not asset_name:
@ -972,18 +1007,18 @@ def get_last_version_by_subset_name(
def get_representation_by_id(project_name, representation_id, fields=None):
"""Representation entity data by it's id.
"""Representation entity data by its id.
Args:
project_name (str): Name of project where to look for queried entities.
representation_id (Union[str, ObjectId]): Representation id.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If representation with specified filters was not found.
Dict: Representation entity data which can be reduced
to specified 'fields'.
Union[Dict, None]: Representation entity data which can be reduced to
specified 'fields'. None is returned if representation with
specified filters was not found.
"""
if not representation_id:
@ -1004,19 +1039,19 @@ def get_representation_by_id(project_name, representation_id, fields=None):
def get_representation_by_name(
project_name, representation_name, version_id, fields=None
):
"""Representation entity data by it's name and it's version id.
"""Representation entity data by its name and its version id.
Args:
project_name (str): Name of project where to look for queried entities.
representation_name (str): Representation name.
version_id (Union[str, ObjectId]): Id of parent version entity.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If representation with specified filters was not found.
Dict: Representation entity data which can be reduced
to specified 'fields'.
Union[dict[str, Any], None]: Representation entity data which can be
reduced to specified 'fields'. None is returned if representation
with specified filters was not found.
"""
version_id = convert_id(version_id)
@ -1185,7 +1220,7 @@ def get_representations(
standard=True,
fields=None
):
"""Representaion entities data from one project filtered by filters.
"""Representation entities data from one project filtered by filters.
Filters are additive (all conditions must pass to return subset).
@ -1202,8 +1237,8 @@ def get_representations(
names_by_version_ids (dict[ObjectId, list[str]]): Complex filtering
using version ids and list of names under the version.
archived (bool): Output will also contain archived representations.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Iterable cursor yielding all matching representations.
@ -1216,7 +1251,7 @@ def get_representations(
version_ids=version_ids,
context_filters=context_filters,
names_by_version_ids=names_by_version_ids,
standard=True,
standard=standard,
archived=archived,
fields=fields
)
@ -1231,7 +1266,7 @@ def get_archived_representations(
names_by_version_ids=None,
fields=None
):
"""Archived representaion entities data from project with applied filters.
"""Archived representation entities data from project with applied filters.
Filters are additive (all conditions must pass to return subset).
@ -1247,8 +1282,8 @@ def get_archived_representations(
representation context fields.
names_by_version_ids (dict[ObjectId, List[str]]): Complex filtering
using version ids and list of names under the version.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Cursor: Iterable cursor yielding all matching representations.
@ -1377,8 +1412,8 @@ def get_thumbnail_id_from_source(project_name, src_type, src_id):
src_id (Union[str, ObjectId]): Id of source entity.
Returns:
ObjectId: Thumbnail id assigned to entity.
None: If Source entity does not have any thumbnail id assigned.
Union[ObjectId, None]: Thumbnail id assigned to entity, or None if the
source entity does not have any thumbnail id assigned.
"""
if not src_type or not src_id:
@ -1397,14 +1432,14 @@ def get_thumbnails(project_name, thumbnail_ids, fields=None):
"""Receive thumbnails entity data.
Thumbnail entity can be used to receive binary content of thumbnail based
on it's content and ThumbnailResolvers.
on its content and ThumbnailResolvers.
Args:
project_name (str): Name of project where to look for queried entities.
thumbnail_ids (Iterable[Union[str, ObjectId]]): Ids of thumbnail
entities.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
cursor: Cursor of queried documents.
@ -1429,12 +1464,13 @@ def get_thumbnail(project_name, thumbnail_id, fields=None):
Args:
project_name (str): Name of project where to look for queried entities.
thumbnail_id (Union[str, ObjectId]): Id of thumbnail entity.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
None: If thumbnail with specified id was not found.
Dict: Thumbnail entity data which can be reduced to specified 'fields'.
Union[Dict, None]: Thumbnail entity data which can be reduced to
specified 'fields'. None is returned if thumbnail with specified
filters was not found.
"""
if not thumbnail_id:
@ -1458,8 +1494,13 @@ def get_workfile_info(
project_name (str): Name of project where to look for queried entities.
asset_id (Union[str, ObjectId]): Id of asset entity.
task_name (str): Task name on asset.
fields (Iterable[str]): Fields that should be returned. All fields are
returned if 'None' is passed.
fields (Optional[Iterable[str]]): Fields that should be returned. All
fields are returned if 'None' is passed.
Returns:
Union[Dict, None]: Workfile entity data which can be reduced to
specified 'fields'. None is returned if workfile with specified
filters was not found.
"""
if not asset_id or not task_name or not filename:

View file

@ -2,7 +2,7 @@
## Reason
Preparation for the OpenPype v4 server. The goal is to remove direct mongo calls from the code and prepare for a different source of data. We want to start thinking about database calls less as mongo calls and more universally. To do that, a simple wrapper around database calls was implemented so that pymongo-specific code is not used directly.
Current goal is not to make universal database model which can be easily replaced with any different source of data but to make it close as possible. Current implementation of OpenPype is too tighly connected to pymongo and it's abilities so we're trying to get closer with long term changes that can be used even in current state.
Current goal is not to make a universal database model which could easily be replaced with any different source of data, but to get as close to that as possible. The current implementation of OpenPype is too tightly connected to pymongo and its abilities, so we're trying to get closer with long term changes that can be used even in the current state.
## Queries
Query functions don't use the full potential of mongo queries, like very specific queries based on subdictionaries or unknown structures. We try to avoid these calls as much as possible because they probably won't be available in the future. If one is really necessary, a new function can be added, but only if it's reasonable for the overall logic. All query functions were moved to `~/client/entities.py`. Each function has arguments with available filters and a possible reduction of returned keys for each entity.
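The effect of the key-reduction argument can be sketched in plain Python. This is illustration only: the real query helpers in `~/client/entities.py` pass `fields` as a mongo projection rather than filtering an already-fetched document, and the document contents below are made up.

```python
def reduce_fields(doc, fields=None):
    """Return the whole document when `fields` is None,
    otherwise only the requested keys."""
    if fields is None:
        return doc
    wanted = set(fields)
    return {key: value for key, value in doc.items() if key in wanted}

version_doc = {"_id": "...", "name": 3, "type": "version", "data": {"fps": 25}}
print(reduce_fields(version_doc, fields=["name", "type"]))
# {'name': 3, 'type': 'version'}
```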
@ -14,7 +14,7 @@ Changes are a little bit complicated. Mongo has many options how update can happ
Create operations expect already prepared document data; helper functions are available that create skeletal document structures (they do not fill all required data), and except for `_id` all data should be right. Existence of the entity is not validated, so if the same creation operation is sent n times it will create the entity n times, which can cause issues.
### Update
Update operation require entity id and keys that should be changed, update dictionary must have {"key": value}. If value should be set in nested dictionary the key must have also all subkeys joined with dot `.` (e.g. `{"data": {"fps": 25}}` -> `{"data.fps": 25}`). To simplify update dictionaries were prepared functions which does that for you, their name has template `prepare_<entity type>_update_data` - they work on comparison of previous document and new document. If there is missing function for requested entity type it is because we didn't need it yet and require implementaion.
Update operations require an entity id and the keys that should be changed; the update dictionary must have the form `{"key": value}`. If a value should be set in a nested dictionary, the key must contain all subkeys joined with dot `.` (e.g. `{"data": {"fps": 25}}` -> `{"data.fps": 25}`). To simplify building update dictionaries, helper functions were prepared that do this for you; their names follow the template `prepare_<entity type>_update_data` and they work by comparing the previous document with the new document. If a function for a requested entity type is missing, it is because we didn't need it yet and it requires implementation.
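The dot-key joining can be sketched as a small recursive flatten, assuming plain nested dicts. The real `prepare_<entity type>_update_data` helpers additionally compare the previous and new documents; this only shows the key joining step.

```python
def flatten_update_data(data, _prefix=""):
    """Flatten nested dicts into mongo-style dot keys,
    e.g. {"data": {"fps": 25}} -> {"data.fps": 25}."""
    flat = {}
    for key, value in data.items():
        full_key = _prefix + key
        if isinstance(value, dict):
            # Recurse into nested dicts, joining subkeys with a dot
            flat.update(flatten_update_data(value, full_key + "."))
        else:
            flat[full_key] = value
    return flat

print(flatten_update_data({"data": {"fps": 25}}))
# {'data.fps': 25}
```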
### Delete
Delete operations need an entity id. The entity will be deleted from mongo.
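As a rough illustration of the operation objects these calls are wrapped in (cf. the `CreateOperation`/`UpdateOperation`/`DeleteOperation` classes elsewhere in this changeset), a delete only carries the project, entity type and entity id. The class body below is a sketch, not the OpenPype implementation; field names are assumptions based on the docstrings.

```python
import uuid

class DeleteOperationSketch:
    """Illustrative stand-in for a delete operation: it only
    needs project name, entity type and entity id."""
    operation_name = "delete"

    def __init__(self, project_name, entity_type, entity_id):
        self.id = uuid.uuid4().hex  # each operation gets its own id
        self.project_name = project_name
        self.entity_type = entity_type
        self.entity_id = entity_id

    def to_data(self):
        # Convert the operation to json-friendly data
        return {
            "id": self.id,
            "type": self.operation_name,
            "project_name": self.project_name,
            "entity_type": self.entity_type,
            "entity_id": self.entity_id,
        }

op = DeleteOperationSketch("my_project", "representation", "0123456789abcdef")
print(op.to_data()["type"])
# delete
```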

View file

@ -368,7 +368,7 @@ def prepare_workfile_info_update_data(old_doc, new_doc, replace=True):
class AbstractOperation(object):
"""Base operation class.
Opration represent a call into database. The call can create, change or
Operation represent a call into database. The call can create, change or
remove data.
Args:
@ -409,7 +409,7 @@ class AbstractOperation(object):
pass
def to_data(self):
"""Convert opration to data that can be converted to json or others.
"""Convert operation to data that can be converted to json or others.
Warning:
Current state returns ObjectId objects which cannot be parsed by
@ -428,7 +428,7 @@ class AbstractOperation(object):
class CreateOperation(AbstractOperation):
"""Opeartion to create an entity.
"""Operation to create an entity.
Args:
project_name (str): On which project operation will happen.
@ -485,7 +485,7 @@ class CreateOperation(AbstractOperation):
class UpdateOperation(AbstractOperation):
"""Opeartion to update an entity.
"""Operation to update an entity.
Args:
project_name (str): On which project operation will happen.
@ -552,7 +552,7 @@ class UpdateOperation(AbstractOperation):
class DeleteOperation(AbstractOperation):
"""Opeartion to delete an entity.
"""Operation to delete an entity.
Args:
project_name (str): On which project operation will happen.

View file

@ -42,13 +42,5 @@ class AddLastWorkfileToLaunchArgs(PreLaunchHook):
self.log.info("Current context does not have any workfile yet.")
return
# Determine whether to open workfile post initialization.
if self.host_name == "maya":
key = "open_workfile_post_initialization"
if self.data["project_settings"]["maya"][key]:
self.log.debug("Opening workfile post initialization.")
self.data["env"]["OPENPYPE_" + key.upper()] = "1"
return
# Add path to workfile to arguments
self.launch_context.launch_args.append(last_workfile)

View file

@ -3,10 +3,13 @@ from openpype.lib import PreLaunchHook
from openpype.pipeline.workfile import create_workdir_extra_folders
class AddLastWorkfileToLaunchArgs(PreLaunchHook):
"""Add last workfile path to launch arguments.
class CreateWorkdirExtraFolders(PreLaunchHook):
"""Create extra folders for the work directory.
Based on setting `project_settings/global/tools/Workfiles/extra_folders`
profile filtering will decide whether extra folders need to be created in
the work directory.
This is not possible to do for all applications the same way.
"""
# Execute after workfile template copy

View file

@ -7,7 +7,7 @@ class LaunchFoundryAppsWindows(PreLaunchHook):
Nuke is executed "like" python process so it is required to pass
`CREATE_NEW_CONSOLE` flag on windows to trigger creation of new console.
At the same time the newly created console won't create it's own stdout
At the same time the newly created console won't create its own stdout
and stderr handlers so they should not be redirected to DEVNULL.
"""
@ -18,7 +18,7 @@ class LaunchFoundryAppsWindows(PreLaunchHook):
def execute(self):
# Change `creationflags` to CREATE_NEW_CONSOLE
# - on Windows will nuke create new window using it's console
# - on Windows nuke will create new window using its console
# Set `stdout` and `stderr` to None so new created console does not
# have redirected output to DEVNULL in build
self.launch_context.kwargs.update({

View file

@ -2,7 +2,7 @@
Idea for the current dirmap implementation was taken from Maya, where it is possible to
enter source and destination roots and Maya will try to replace each found source
in referenced file replace with each destionation paths. First path which
in the referenced file with each destination path. The first path which
exists is used.
"""
@ -183,7 +183,7 @@ class HostDirmap(object):
project_name, remote_site
)
# dirmap has sense only with regular disk provider, in the workfile
# wont be root on cloud or sftp provider
# won't be root on cloud or sftp provider
if remote_provider != "local_drive":
remote_site = "studio"
for root_name, active_site_dir in active_overrides.items():

View file

@ -18,7 +18,7 @@ class HostBase(object):
Compared to 'avalon' concept:
What was before considered as functions in host implementation folder. The
host implementation should primarily care about adding ability of creation
(mark subsets to be published) and optionaly about referencing published
(mark subsets to be published) and optionally about referencing published
representations as containers.
Host may need to extend some functionality, like working with workfiles
@ -129,9 +129,9 @@ class HostBase(object):
"""Get current context information.
This method should be used to get current context of host. Usage of
this method can be crutial for host implementations in DCCs where
this method can be crucial for host implementations in DCCs where
can be opened multiple workfiles at one moment and change of context
can't be catched properly.
can't be caught properly.
Default implementation returns values from 'legacy_io.Session'.

View file

@ -81,7 +81,7 @@ class ILoadHost:
@abstractmethod
def get_containers(self):
"""Retreive referenced containers from scene.
"""Retrieve referenced containers from scene.
This can be implemented in hosts where referencing can be used.
@ -191,7 +191,7 @@ class IWorkfileHost:
@abstractmethod
def get_current_workfile(self):
"""Retreive path to current opened file.
"""Retrieve path to current opened file.
Returns:
str: Path to file which is currently opened.
@ -220,8 +220,8 @@ class IWorkfileHost:
Default implementation keeps workdir untouched.
Warnings:
We must handle this modification with more sofisticated way because
this can't be called out of DCC so opening of last workfile
We must handle this modification in a more sophisticated way
because this can't be called out of DCC so opening of last workfile
(calculated before DCC is launched) is complicated. Also breaking
defined work template is not a good idea.
Only place where it's really used and can make sense is Maya. There
@ -302,7 +302,7 @@ class IPublishHost:
required methods.
Returns:
list[str]: Missing method implementations for new publsher
list[str]: Missing method implementations for new publisher
workflow.
"""

View file

@ -504,7 +504,7 @@ function addItemAsLayerToComp(comp_id, item_id, found_comp){
* Args:
* comp_id (int): id of target composition
* item_id (int): FootageItem.id
* found_comp (CompItem, optional): to limit quering if
* found_comp (CompItem, optional): to limit querying if
* comp already found previously
*/
var comp = found_comp || app.project.itemByID(comp_id);

View file

@ -80,7 +80,7 @@ class AfterEffectsServerStub():
Get complete stored JSON with metadata from AE.Metadata.Label
field.
It contains containers loaded by any Loader OR instances creted
It contains containers loaded by any Loader OR instances created
by Creator.
Returns:

View file

@ -53,10 +53,10 @@ class CollectWorkfile(pyblish.api.ContextPlugin):
"active": True,
"asset": asset_entity["name"],
"task": task,
"frameStart": asset_entity["data"]["frameStart"],
"frameEnd": asset_entity["data"]["frameEnd"],
"handleStart": asset_entity["data"]["handleStart"],
"handleEnd": asset_entity["data"]["handleEnd"],
"frameStart": context.data['frameStart'],
"frameEnd": context.data['frameEnd'],
"handleStart": context.data['handleStart'],
"handleEnd": context.data['handleEnd'],
"fps": asset_entity["data"]["fps"],
"resolutionWidth": asset_entity["data"].get(
"resolutionWidth",

View file

@ -24,7 +24,7 @@ from .workio import OpenFileCacher
PREVIEW_COLLECTIONS: Dict = dict()
# This seems like a good value to keep the Qt app responsive and doesn't slow
# down Blender. At least on macOS I the interace of Blender gets very laggy if
# down Blender. At least on macOS the interface of Blender gets very laggy if
# you make it smaller.
TIMER_INTERVAL: float = 0.01 if platform.system() == "Windows" else 0.1
@ -84,11 +84,11 @@ class MainThreadItem:
self.kwargs = kwargs
def execute(self):
"""Execute callback and store it's result.
"""Execute callback and store its result.
Method must be called from main thread. Item is marked as `done`
when callback execution finished. Store output of callback of exception
information when callback raise one.
information when callback raises one.
"""
print("Executing process in main thread")
if self.done:

View file

@ -50,7 +50,7 @@ class ExtractPlayblast(publish.Extractor):
# get isolate objects list
isolate = instance.data("isolate", None)
# get ouput path
# get output path
stagingdir = self.staging_dir(instance)
filename = instance.name
path = os.path.join(stagingdir, filename)
@ -116,7 +116,6 @@ class ExtractPlayblast(publish.Extractor):
"frameStart": start,
"frameEnd": end,
"fps": fps,
"preview": True,
"tags": tags,
"camera_name": camera
}

View file

@ -38,8 +38,9 @@ class CelactionPrelaunchHook(PreLaunchHook):
)
path_to_cli = os.path.join(CELACTION_SCRIPTS_DIR, "publish_cli.py")
subproces_args = get_openpype_execute_args("run", path_to_cli)
openpype_executable = subproces_args.pop(0)
subprocess_args = get_openpype_execute_args("run", path_to_cli)
openpype_executable = subprocess_args.pop(0)
workfile_settings = self.get_workfile_settings()
winreg.SetValueEx(
hKey,
@ -49,20 +50,34 @@ class CelactionPrelaunchHook(PreLaunchHook):
openpype_executable
)
parameters = subproces_args + [
"--currentFile", "*SCENE*",
"--chunk", "*CHUNK*",
"--frameStart", "*START*",
"--frameEnd", "*END*",
"--resolutionWidth", "*X*",
"--resolutionHeight", "*Y*"
# add required arguments for workfile path
parameters = subprocess_args + [
"--currentFile", "*SCENE*"
]
# Add custom parameters from workfile settings
if "render_chunk" in workfile_settings["submission_overrides"]:
parameters += [
"--chunk", "*CHUNK*"
]
if "resolution" in workfile_settings["submission_overrides"]:
parameters += [
"--resolutionWidth", "*X*",
"--resolutionHeight", "*Y*"
]
if "frame_range" in workfile_settings["submission_overrides"]:
parameters += [
"--frameStart", "*START*",
"--frameEnd", "*END*"
]
winreg.SetValueEx(
hKey, "SubmitParametersTitle", 0, winreg.REG_SZ,
subprocess.list2cmdline(parameters)
)
self.log.debug(f"__ parameters: \"{parameters}\"")
# setting resolution parameters
path_submit = "\\".join([
path_user_settings, "Dialogs", "SubmitOutput"
@ -135,3 +150,6 @@ class CelactionPrelaunchHook(PreLaunchHook):
self.log.info(f"Workfile to open: \"{workfile_path}\"")
return workfile_path
def get_workfile_settings(self):
return self.data["project_settings"]["celaction"]["workfile"]

View file

@ -39,7 +39,7 @@ class CollectCelactionCliKwargs(pyblish.api.Collector):
passing_kwargs[key] = value
if missing_kwargs:
raise RuntimeError("Missing arguments {}".format(
self.log.debug("Missing arguments {}".format(
", ".join(
[f'"{key}"' for key in missing_kwargs]
)

View file

@ -773,7 +773,7 @@ class MediaInfoFile(object):
if logger:
self.log = logger
# test if `dl_get_media_info` paht exists
# test if `dl_get_media_info` path exists
self._validate_media_script_path()
# derivate other feed variables
@ -993,7 +993,7 @@ class MediaInfoFile(object):
def _validate_media_script_path(self):
if not os.path.isfile(self.MEDIA_SCRIPT_PATH):
raise IOError("Media Scirpt does not exist: `{}`".format(
raise IOError("Media Script does not exist: `{}`".format(
self.MEDIA_SCRIPT_PATH))
def _generate_media_info_file(self, fpath, feed_ext, feed_dir):

View file

@ -38,7 +38,7 @@ def install():
pyblish.register_plugin_path(PUBLISH_PATH)
register_loader_plugin_path(LOAD_PATH)
register_creator_plugin_path(CREATE_PATH)
log.info("OpenPype Flame plug-ins registred ...")
log.info("OpenPype Flame plug-ins registered ...")
# register callback for switching publishable
pyblish.register_callback("instanceToggled", on_pyblish_instance_toggled)

View file

@ -157,7 +157,7 @@ class CreatorWidget(QtWidgets.QDialog):
# convert label text to normal capitalized text with spaces
label_text = self.camel_case_split(text)
# assign the new text to lable widget
# assign the new text to label widget
label = QtWidgets.QLabel(label_text)
label.setObjectName("LineLabel")
@ -345,8 +345,8 @@ class PublishableClip:
"track": "sequence",
}
# parents search patern
parents_search_patern = r"\{([a-z]*?)\}"
# parents search pattern
parents_search_pattern = r"\{([a-z]*?)\}"
# default templates for non-ui use
rename_default = False
@ -445,7 +445,7 @@ class PublishableClip:
return self.current_segment
def _populate_segment_default_data(self):
""" Populate default formating data from segment. """
""" Populate default formatting data from segment. """
self.current_segment_default_data = {
"_folder_": "shots",
@ -538,7 +538,7 @@ class PublishableClip:
if not self.index_from_segment:
self.count_steps *= self.rename_index
hierarchy_formating_data = {}
hierarchy_formatting_data = {}
hierarchy_data = deepcopy(self.hierarchy_data)
_data = self.current_segment_default_data.copy()
if self.ui_inputs:
@ -552,7 +552,7 @@ class PublishableClip:
# mark review layer
if self.review_track and (
self.review_track not in self.review_track_default):
# if review layer is defined and not the same as defalut
# if review layer is defined and not the same as default
self.review_layer = self.review_track
# shot num calculate
@ -578,13 +578,13 @@ class PublishableClip:
# fill up pythonic expressions in hierarchy data
for k, _v in hierarchy_data.items():
hierarchy_formating_data[k] = _v["value"].format(**_data)
hierarchy_formatting_data[k] = _v["value"].format(**_data)
else:
# if no gui mode then just pass default data
hierarchy_formating_data = hierarchy_data
hierarchy_formatting_data = hierarchy_data
tag_hierarchy_data = self._solve_tag_hierarchy_data(
hierarchy_formating_data
hierarchy_formatting_data
)
tag_hierarchy_data.update({"heroTrack": True})
@ -615,27 +615,27 @@ class PublishableClip:
# in case track name and subset name is the same then add
if self.subset_name == self.track_name:
_hero_data["subset"] = self.subset
# assing data to return hierarchy data to tag
# assign data to return hierarchy data to tag
tag_hierarchy_data = _hero_data
break
# add data to return data dict
self.marker_data.update(tag_hierarchy_data)
def _solve_tag_hierarchy_data(self, hierarchy_formating_data):
def _solve_tag_hierarchy_data(self, hierarchy_formatting_data):
""" Solve marker data from hierarchy data and templates. """
# fill up clip name and hierarchy keys
hierarchy_filled = self.hierarchy.format(**hierarchy_formating_data)
clip_name_filled = self.clip_name.format(**hierarchy_formating_data)
hierarchy_filled = self.hierarchy.format(**hierarchy_formatting_data)
clip_name_filled = self.clip_name.format(**hierarchy_formatting_data)
# remove shot from hierarchy data: is not needed anymore
hierarchy_formating_data.pop("shot")
hierarchy_formatting_data.pop("shot")
return {
"newClipName": clip_name_filled,
"hierarchy": hierarchy_filled,
"parents": self.parents,
"hierarchyData": hierarchy_formating_data,
"hierarchyData": hierarchy_formatting_data,
"subset": self.subset,
"family": self.subset_family,
"families": [self.family]
@ -650,17 +650,17 @@ class PublishableClip:
type
)
# first collect formating data to use for formating template
formating_data = {}
# first collect formatting data to use for formatting template
formatting_data = {}
for _k, _v in self.hierarchy_data.items():
value = _v["value"].format(
**self.current_segment_default_data)
formating_data[_k] = value
formatting_data[_k] = value
return {
"entity_type": entity_type,
"entity_name": template.format(
**formating_data
**formatting_data
)
}
@ -668,9 +668,9 @@ class PublishableClip:
""" Create parents and return it in list. """
self.parents = []
patern = re.compile(self.parents_search_patern)
pattern = re.compile(self.parents_search_pattern)
par_split = [(patern.findall(t).pop(), t)
par_split = [(pattern.findall(t).pop(), t)
for t in self.hierarchy.split("/")]
for type, template in par_split:
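The renamed pattern above extracts lowercase template keys from a hierarchy template. A standalone sketch of what `par_split` ends up containing (the hierarchy template string here is a made-up example):

```python
import re

parents_search_pattern = r"\{([a-z]*?)\}"  # the pattern renamed above
pattern = re.compile(parents_search_pattern)

# hypothetical hierarchy template; real values come from UI/settings
hierarchy = "{folder}/{sequence}/{shot}"
par_split = [(pattern.findall(part).pop(), part)
             for part in hierarchy.split("/")]
print(par_split)
# [('folder', '{folder}'), ('sequence', '{sequence}'), ('shot', '{shot}')]
```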
@ -902,22 +902,22 @@ class OpenClipSolver(flib.MediaInfoFile):
):
return
formating_data = self._update_formating_data(
formatting_data = self._update_formatting_data(
layerName=layer_name,
layerUID=layer_uid
)
name_obj.text = StringTemplate(
self.layer_rename_template
).format(formating_data)
).format(formatting_data)
def _update_formating_data(self, **kwargs):
""" Updating formating data for layer rename
def _update_formatting_data(self, **kwargs):
""" Updating formatting data for layer rename
Attributes:
key=value (optional): will be included to formating data
key=value (optional): will be included to formatting data
as {key: value}
Returns:
dict: anatomy context data for formating
dict: anatomy context data for formatting
"""
self.log.debug(">> self.clip_data: {}".format(self.clip_data))
clip_name_obj = self.clip_data.find("name")

View file

@ -203,7 +203,7 @@ class WireTapCom(object):
list: all available volumes in server
Raises:
AttributeError: unable to get any volumes childs from server
AttributeError: unable to get any volumes children from server
"""
root = WireTapNodeHandle(self._server, "/volumes")
children_num = WireTapInt(0)

View file

@ -108,7 +108,7 @@ def _sync_utility_scripts(env=None):
shutil.copy2(src, dst)
except (PermissionError, FileExistsError) as msg:
log.warning(
"Not able to coppy to: `{}`, Problem with: `{}`".format(
"Not able to copy to: `{}`, Problem with: `{}`".format(
dst,
msg
)

View file

@ -153,7 +153,7 @@ class FlamePrelaunch(PreLaunchHook):
def _add_pythonpath(self):
pythonpath = self.launch_context.env.get("PYTHONPATH")
# separate it explicity by `;` that is what we use in settings
# separate it explicitly by `;` that is what we use in settings
new_pythonpath = self.flame_pythonpath.split(os.pathsep)
new_pythonpath += pythonpath.split(os.pathsep)

View file

@ -209,7 +209,7 @@ class CreateShotClip(opfapi.Creator):
"type": "QComboBox",
"label": "Subset Name",
"target": "ui",
"toolTip": "chose subset name patern, if [ track name ] is selected, name of track layer will be used", # noqa
"toolTip": "chose subset name pattern, if [ track name ] is selected, name of track layer will be used", # noqa
"order": 0},
"subsetFamily": {
"value": ["plate", "take"],

View file

@ -61,9 +61,9 @@ class LoadClip(opfapi.ClipLoader):
self.layer_rename_template = self.layer_rename_template.replace(
"output", "representation")
formating_data = deepcopy(context["representation"]["context"])
formatting_data = deepcopy(context["representation"]["context"])
clip_name = StringTemplate(self.clip_name_template).format(
formating_data)
formatting_data)
# convert colorspace with ocio to flame mapping
# in imageio flame section
@ -88,7 +88,7 @@ class LoadClip(opfapi.ClipLoader):
"version": "v{:0>3}".format(version_name),
"layer_rename_template": self.layer_rename_template,
"layer_rename_patterns": self.layer_rename_patterns,
"context_data": formating_data
"context_data": formatting_data
}
self.log.debug(pformat(
loading_context

View file

@ -58,11 +58,11 @@ class LoadClipBatch(opfapi.ClipLoader):
self.layer_rename_template = self.layer_rename_template.replace(
"output", "representation")
formating_data = deepcopy(context["representation"]["context"])
formating_data["batch"] = self.batch.name.get_value()
formatting_data = deepcopy(context["representation"]["context"])
formatting_data["batch"] = self.batch.name.get_value()
clip_name = StringTemplate(self.clip_name_template).format(
formating_data)
formatting_data)
# convert colorspace with ocio to flame mapping
# in imageio flame section
@ -88,7 +88,7 @@ class LoadClipBatch(opfapi.ClipLoader):
"version": "v{:0>3}".format(version_name),
"layer_rename_template": self.layer_rename_template,
"layer_rename_patterns": self.layer_rename_patterns,
"context_data": formating_data
"context_data": formatting_data
}
self.log.debug(pformat(
loading_context

View file

@ -203,7 +203,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
self._get_xml_preset_attrs(
attributes, split)
# add xml overides resolution to instance data
# add xml overrides resolution to instance data
xml_overrides = attributes["xml_overrides"]
if xml_overrides.get("width"):
attributes.update({
@ -284,7 +284,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
self.log.debug("__ head: `{}`".format(head))
self.log.debug("__ tail: `{}`".format(tail))
# HACK: it is here to serve for versions bellow 2021.1
# HACK: it is here to serve for versions below 2021.1
if not any([head, tail]):
retimed_attributes = get_media_range_with_retimes(
otio_clip, handle_start, handle_end)

View file

@ -227,7 +227,7 @@ class ExtractSubsetResources(publish.Extractor):
self.hide_others(
exporting_clip, segment_name, s_track_name)
# change name patern
# change name pattern
name_patern_xml = (
"<segment name>_<shot name>_{}.").format(
unique_name)
@ -358,7 +358,7 @@ class ExtractSubsetResources(publish.Extractor):
representation_data["stagingDir"] = n_stage_dir
files = n_files
# add files to represetation but add
# add files to representation but add
# imagesequence as list
if (
# first check if path in files is not mov extension

View file

@ -50,7 +50,7 @@ class IntegrateBatchGroup(pyblish.api.InstancePlugin):
self._load_clip_to_context(instance, bgroup)
def _add_nodes_to_batch_with_links(self, instance, task_data, batch_group):
# get write file node properties > OrederDict because order does mater
# get write file node properties > OrderedDict because order does matter
write_pref_data = self._get_write_prefs(instance, task_data)
batch_nodes = [

View file

@ -6,12 +6,13 @@ from openpype.pipeline.publish import get_errored_instances_from_context
class SelectInvalidAction(pyblish.api.Action):
"""Select invalid nodes in Maya when plug-in failed.
"""Select invalid nodes in Fusion when plug-in failed.
To retrieve the invalid nodes this assumes a static `get_invalid()`
method is available on the plugin.
"""
label = "Select invalid"
on = "failed" # This action is only available on a failed plug-in
icon = "search" # Icon from Awesome Icon
@ -31,8 +32,10 @@ class SelectInvalidAction(pyblish.api.Action):
if isinstance(invalid_nodes, (list, tuple)):
invalid.extend(invalid_nodes)
else:
self.log.warning("Plug-in returned to be invalid, "
"but has no selectable nodes.")
self.log.warning(
"Plug-in returned to be invalid, "
"but has no selectable nodes."
)
if not invalid:
# Assume relevant comp is current comp and clear selection
@ -51,4 +54,6 @@ class SelectInvalidAction(pyblish.api.Action):
for tool in invalid:
flow.Select(tool, True)
names.add(tool.Name)
self.log.info("Selecting invalid tools: %s" % ", ".join(sorted(names)))
self.log.info(
"Selecting invalid tools: %s" % ", ".join(sorted(names))
)

View file

@ -6,7 +6,6 @@ from openpype.tools.utils import host_tools
from openpype.style import load_stylesheet
from openpype.lib import register_event_callback
from openpype.hosts.fusion.scripts import (
set_rendermode,
duplicate_with_inputs,
)
from openpype.hosts.fusion.api.lib import (
@ -60,7 +59,6 @@ class OpenPypeMenu(QtWidgets.QWidget):
publish_btn = QtWidgets.QPushButton("Publish...", self)
manager_btn = QtWidgets.QPushButton("Manage...", self)
libload_btn = QtWidgets.QPushButton("Library...", self)
rendermode_btn = QtWidgets.QPushButton("Set render mode...", self)
set_framerange_btn = QtWidgets.QPushButton("Set Frame Range", self)
set_resolution_btn = QtWidgets.QPushButton("Set Resolution", self)
duplicate_with_inputs_btn = QtWidgets.QPushButton(
@ -91,7 +89,6 @@ class OpenPypeMenu(QtWidgets.QWidget):
layout.addWidget(set_framerange_btn)
layout.addWidget(set_resolution_btn)
layout.addWidget(rendermode_btn)
layout.addSpacing(20)
@ -108,7 +105,6 @@ class OpenPypeMenu(QtWidgets.QWidget):
load_btn.clicked.connect(self.on_load_clicked)
manager_btn.clicked.connect(self.on_manager_clicked)
libload_btn.clicked.connect(self.on_libload_clicked)
rendermode_btn.clicked.connect(self.on_rendermode_clicked)
duplicate_with_inputs_btn.clicked.connect(
self.on_duplicate_with_inputs_clicked
)
@ -162,15 +158,6 @@ class OpenPypeMenu(QtWidgets.QWidget):
def on_libload_clicked(self):
host_tools.show_library_loader()
def on_rendermode_clicked(self):
if self.render_mode_widget is None:
window = set_rendermode.SetRenderMode()
window.setStyleSheet(load_stylesheet())
window.show()
self.render_mode_widget = window
else:
self.render_mode_widget.show()
def on_duplicate_with_inputs_clicked(self):
duplicate_with_inputs.duplicate_with_input_connections()

View file

@ -4,29 +4,34 @@ import qtawesome
from openpype.hosts.fusion.api import (
get_current_comp,
comp_lock_and_undo_chunk
comp_lock_and_undo_chunk,
)
from openpype.lib import BoolDef
from openpype.lib import (
BoolDef,
EnumDef,
)
from openpype.pipeline import (
legacy_io,
Creator,
CreatedInstance
CreatedInstance,
)
from openpype.client import (
get_asset_by_name,
)
from openpype.client import get_asset_by_name
class CreateSaver(Creator):
identifier = "io.openpype.creators.fusion.saver"
name = "saver"
label = "Saver"
label = "Render (saver)"
name = "render"
family = "render"
default_variants = ["Main"]
default_variants = ["Main", "Mask"]
description = "Fusion Saver to generate image sequence"
def create(self, subset_name, instance_data, pre_create_data):
instance_attributes = ["reviewable"]
def create(self, subset_name, instance_data, pre_create_data):
# TODO: Add pre_create attributes to choose file format?
file_format = "OpenEXRFormat"
@ -58,7 +63,8 @@ class CreateSaver(Creator):
family=self.family,
subset_name=subset_name,
data=instance_data,
creator=self)
creator=self,
)
# Insert the transient data
instance.transient_data["tool"] = saver
@ -68,11 +74,9 @@ class CreateSaver(Creator):
return instance
def collect_instances(self):
comp = get_current_comp()
tools = comp.GetToolList(False, "Saver").values()
for tool in tools:
data = self.get_managed_tool_data(tool)
if not data:
data = self._collect_unmanaged_saver(tool)
@ -90,7 +94,6 @@ class CreateSaver(Creator):
def update_instances(self, update_list):
for created_inst, _changes in update_list:
new_data = created_inst.data_to_store()
tool = created_inst.transient_data["tool"]
self._update_tool_with_data(tool, new_data)
@ -139,7 +142,6 @@ class CreateSaver(Creator):
tool.SetAttrs({"TOOLS_Name": subset})
def _collect_unmanaged_saver(self, tool):
# TODO: this should not be done this way - this should actually
# get the data as stored on the tool explicitly (however)
# that would disallow any 'regular saver' to be collected
@ -153,8 +155,7 @@ class CreateSaver(Creator):
asset = legacy_io.Session["AVALON_ASSET"]
task = legacy_io.Session["AVALON_TASK"]
asset_doc = get_asset_by_name(project_name=project,
asset_name=asset)
asset_doc = get_asset_by_name(project_name=project, asset_name=asset)
path = tool["Clip"][comp.TIME_UNDEFINED]
fname = os.path.basename(path)
@ -178,21 +179,20 @@ class CreateSaver(Creator):
"variant": variant,
"active": not passthrough,
"family": self.family,
# Unique identifier for instance and this creator
"id": "pyblish.avalon.instance",
"creator_identifier": self.identifier
"creator_identifier": self.identifier,
}
def get_managed_tool_data(self, tool):
"""Return data of the tool if it matches creator identifier"""
data = tool.GetData('openpype')
data = tool.GetData("openpype")
if not isinstance(data, dict):
return
required = {
"id": "pyblish.avalon.instance",
"creator_identifier": self.identifier
"creator_identifier": self.identifier,
}
for key, value in required.items():
if key not in data or data[key] != value:
@ -205,11 +205,40 @@ class CreateSaver(Creator):
return data
def get_instance_attr_defs(self):
return [
BoolDef(
"review",
default=True,
label="Review"
)
def get_pre_create_attr_defs(self):
"""Settings for create page"""
attr_defs = [
self._get_render_target_enum(),
self._get_reviewable_bool(),
]
return attr_defs
def get_instance_attr_defs(self):
"""Settings for publish page"""
attr_defs = [
self._get_render_target_enum(),
self._get_reviewable_bool(),
]
return attr_defs
# These functions below should be moved to another file
# so they can be used by other plugins. plugin.py ?
def _get_render_target_enum(self):
rendering_targets = {
"local": "Local machine rendering",
"frames": "Use existing frames",
}
if "farm_rendering" in self.instance_attributes:
rendering_targets["farm"] = "Farm rendering"
return EnumDef(
"render_target", items=rendering_targets, label="Render target"
)
def _get_reviewable_bool(self):
return BoolDef(
"review",
default=("reviewable" in self.instance_attributes),
label="Review",
)

View file

@ -72,8 +72,7 @@ class FusionSetFrameRangeWithHandlesLoader(load.LoaderPlugin):
return
# Include handles
handles = version_data.get("handles", 0)
start -= handles
end += handles
start -= version_data.get("handleStart", 0)
end += version_data.get("handleEnd", 0)
lib.update_frame_range(start, end)

View file

@ -0,0 +1,50 @@
import pyblish.api
from openpype.pipeline import publish
import os
class CollectFusionExpectedFrames(
pyblish.api.InstancePlugin, publish.ColormanagedPyblishPluginMixin
):
"""Collect all frames needed to publish expected frames"""
order = pyblish.api.CollectorOrder + 0.5
label = "Collect Expected Frames"
hosts = ["fusion"]
families = ["render"]
def process(self, instance):
context = instance.context
frame_start = context.data["frameStartHandle"]
frame_end = context.data["frameEndHandle"]
path = instance.data["path"]
output_dir = instance.data["outputDir"]
basename = os.path.basename(path)
head, ext = os.path.splitext(basename)
files = [
f"{head}{str(frame).zfill(4)}{ext}"
for frame in range(frame_start, frame_end + 1)
]
repre = {
"name": ext[1:],
"ext": ext[1:],
"frameStart": f"%0{len(str(frame_end))}d" % frame_start,
"files": files,
"stagingDir": output_dir,
}
self.set_representation_colorspace(
representation=repre,
context=context,
)
# review representation
if instance.data.get("review", False):
repre["tags"] = ["review"]
# add the repre to the instance
if "representations" not in instance.data:
instance.data["representations"] = []
instance.data["representations"].append(repre)

View file

@ -1,44 +0,0 @@
import pyblish.api
class CollectFusionRenderMode(pyblish.api.InstancePlugin):
"""Collect current comp's render Mode
Options:
local
farm
Note that this value is set for each comp separately. When you save the
comp this information will be stored in that file. If for some reason the
available tool does not visualize which render mode is set for the
current comp, please run the following line in the console (Py2)
comp.GetData("openpype.rendermode")
This will return the name of the current render mode as seen above under
Options.
"""
order = pyblish.api.CollectorOrder + 0.4
label = "Collect Render Mode"
hosts = ["fusion"]
families = ["render"]
def process(self, instance):
"""Collect all image sequence tools"""
options = ["local", "farm"]
comp = instance.context.data.get("currentComp")
if not comp:
raise RuntimeError("No comp previously collected, unable to "
"retrieve Fusion version.")
rendermode = comp.GetData("openpype.rendermode") or "local"
assert rendermode in options, "Must be supported render mode"
self.log.info("Render mode: {0}".format(rendermode))
# Append family
family = "render.{0}".format(rendermode)
instance.data["families"].append(family)

View file

@ -0,0 +1,25 @@
import pyblish.api
class CollectFusionRenders(pyblish.api.InstancePlugin):
"""Collect current saver node's render Mode
Options:
local (Render locally)
frames (Use existing frames)
"""
order = pyblish.api.CollectorOrder + 0.4
label = "Collect Renders"
hosts = ["fusion"]
families = ["render"]
def process(self, instance):
render_target = instance.data["render_target"]
family = instance.data["family"]
# add targeted family to families
instance.data["families"].append(
"{}.{}".format(family, render_target)
)

View file

@ -0,0 +1,109 @@
import logging
import contextlib
import pyblish.api
from openpype.hosts.fusion.api import comp_lock_and_undo_chunk
log = logging.getLogger(__name__)
@contextlib.contextmanager
def enabled_savers(comp, savers):
"""Enable only the `savers` in Comp during the context.
Any Saver tool in the passed composition that is not in the savers list
will be set to passthrough during the context.
Args:
comp (object): Fusion composition object.
savers (list): List of Saver tool objects.
"""
passthrough_key = "TOOLB_PassThrough"
original_states = {}
enabled_save_names = {saver.Name for saver in savers}
try:
all_savers = comp.GetToolList(False, "Saver").values()
for saver in all_savers:
original_state = saver.GetAttrs()[passthrough_key]
original_states[saver] = original_state
# The passthrough state we want to set (passthrough != enabled)
state = saver.Name not in enabled_save_names
if state != original_state:
saver.SetAttrs({passthrough_key: state})
yield
finally:
for saver, original_state in original_states.items():
saver.SetAttrs({"TOOLB_PassThrough": original_state})
class FusionRenderLocal(pyblish.api.InstancePlugin):
"""Render the current Fusion composition locally."""
order = pyblish.api.ExtractorOrder - 0.2
label = "Render Local"
hosts = ["fusion"]
families = ["render.local"]
def process(self, instance):
context = instance.context
# Start render
self.render_once(context)
# Log render status
self.log.info(
"Rendered '{nm}' for asset '{ast}' under the task '{tsk}'".format(
nm=instance.data["name"],
ast=instance.data["asset"],
tsk=instance.data["task"],
)
)
def render_once(self, context):
"""Render context comp only once, even with more render instances"""
# This plug-in assumes all render nodes get rendered at the same time
# to speed up the rendering. The check below makes sure that we only
# execute the rendering once and not for each instance.
key = f"__hasRun{self.__class__.__name__}"
savers_to_render = [
# Get the saver tool from the instance
instance[0] for instance in context if
# Only active instances
instance.data.get("publish", True) and
# Only render.local instances
"render.local" in instance.data["families"]
]
if key not in context.data:
# We initialize as false to indicate it wasn't successful yet
# so we can keep track of whether Fusion succeeded
context.data[key] = False
current_comp = context.data["currentComp"]
frame_start = context.data["frameStartHandle"]
frame_end = context.data["frameEndHandle"]
self.log.info("Starting Fusion render")
self.log.info(f"Start frame: {frame_start}")
self.log.info(f"End frame: {frame_end}")
saver_names = ", ".join(saver.Name for saver in savers_to_render)
self.log.info(f"Rendering tools: {saver_names}")
with comp_lock_and_undo_chunk(current_comp):
with enabled_savers(current_comp, savers_to_render):
result = current_comp.Render(
{
"Start": frame_start,
"End": frame_end,
"Wait": True,
}
)
context.data[key] = bool(result)
if context.data[key] is False:
raise RuntimeError("Comp render failed")

View file

@ -1,100 +0,0 @@
import os
import pyblish.api
from openpype.pipeline import publish
from openpype.hosts.fusion.api import comp_lock_and_undo_chunk
class Fusionlocal(pyblish.api.InstancePlugin,
publish.ColormanagedPyblishPluginMixin):
"""Render the current Fusion composition locally.
Extract the result of savers by starting a comp render
This will run the local render of Fusion.
"""
order = pyblish.api.ExtractorOrder - 0.1
label = "Render Local"
hosts = ["fusion"]
families = ["render.local"]
def process(self, instance):
context = instance.context
# Start render
self.render_once(context)
# Log render status
self.log.info(
"Rendered '{nm}' for asset '{ast}' under the task '{tsk}'".format(
nm=instance.data["name"],
ast=instance.data["asset"],
tsk=instance.data["task"],
)
)
frame_start = context.data["frameStartHandle"]
frame_end = context.data["frameEndHandle"]
path = instance.data["path"]
output_dir = instance.data["outputDir"]
basename = os.path.basename(path)
head, ext = os.path.splitext(basename)
files = [
f"{head}{str(frame).zfill(4)}{ext}"
for frame in range(frame_start, frame_end + 1)
]
repre = {
"name": ext[1:],
"ext": ext[1:],
"frameStart": f"%0{len(str(frame_end))}d" % frame_start,
"files": files,
"stagingDir": output_dir,
}
self.set_representation_colorspace(
representation=repre,
context=context,
)
if "representations" not in instance.data:
instance.data["representations"] = []
instance.data["representations"].append(repre)
# review representation
if instance.data.get("review", False):
repre["tags"] = ["review", "ftrackreview"]
def render_once(self, context):
"""Render context comp only once, even with more render instances"""
# This plug-in assumes all render nodes get rendered at the same time
# to speed up the rendering. The check below makes sure that we only
# execute the rendering once and not for each instance.
key = f"__hasRun{self.__class__.__name__}"
if key not in context.data:
# We initialize as false to indicate it wasn't successful yet
# so we can keep track of whether Fusion succeeded
context.data[key] = False
current_comp = context.data["currentComp"]
frame_start = context.data["frameStartHandle"]
frame_end = context.data["frameEndHandle"]
self.log.info("Starting Fusion render")
self.log.info(f"Start frame: {frame_start}")
self.log.info(f"End frame: {frame_end}")
with comp_lock_and_undo_chunk(current_comp):
result = current_comp.Render(
{
"Start": frame_start,
"End": frame_end,
"Wait": True,
}
)
context.data[key] = bool(result)
if context.data[key] is False:
raise RuntimeError("Comp render failed")

View file

@ -14,22 +14,19 @@ class ValidateCreateFolderChecked(pyblish.api.InstancePlugin):
"""
order = pyblish.api.ValidatorOrder
actions = [RepairAction]
label = "Validate Create Folder Checked"
families = ["render"]
hosts = ["fusion"]
actions = [SelectInvalidAction]
actions = [RepairAction, SelectInvalidAction]
@classmethod
def get_invalid(cls, instance):
active = instance.data.get("active", instance.data.get("publish"))
if not active:
return []
tool = instance[0]
create_dir = tool.GetInput("CreateDir")
if create_dir == 0.0:
cls.log.error("%s has Create Folder turned off" % instance[0].Name)
cls.log.error(
"%s has Create Folder turned off" % instance[0].Name
)
return [tool]
def process(self, instance):
@ -37,7 +34,8 @@ class ValidateCreateFolderChecked(pyblish.api.InstancePlugin):
if invalid:
raise PublishValidationError(
"Found Saver with Create Folder During Render checked off",
title=self.label)
title=self.label,
)
@classmethod
def repair(cls, instance):

View file

@ -0,0 +1,78 @@
import os
import pyblish.api
from openpype.pipeline.publish import RepairAction
from openpype.pipeline import PublishValidationError
from openpype.hosts.fusion.api.action import SelectInvalidAction
class ValidateLocalFramesExistence(pyblish.api.InstancePlugin):
"""Checks if files for savers that's set
to publish expected frames exists
"""
order = pyblish.api.ValidatorOrder
label = "Validate Expected Frames Exists"
families = ["render"]
hosts = ["fusion"]
actions = [RepairAction, SelectInvalidAction]
@classmethod
def get_invalid(cls, instance, non_existing_frames=None):
if non_existing_frames is None:
non_existing_frames = []
if instance.data.get("render_target") == "frames":
tool = instance[0]
frame_start = instance.data["frameStart"]
frame_end = instance.data["frameEnd"]
path = instance.data["path"]
output_dir = instance.data["outputDir"]
basename = os.path.basename(path)
head, ext = os.path.splitext(basename)
files = [
f"{head}{str(frame).zfill(4)}{ext}"
for frame in range(frame_start, frame_end + 1)
]
for file in files:
if not os.path.exists(os.path.join(output_dir, file)):
cls.log.error(
f"Missing file: {os.path.join(output_dir, file)}"
)
non_existing_frames.append(file)
if len(non_existing_frames) > 0:
cls.log.error(f"Some of {tool.Name}'s files does not exist")
return [tool]
def process(self, instance):
non_existing_frames = []
invalid = self.get_invalid(instance, non_existing_frames)
if invalid:
raise PublishValidationError(
"{} is set to publish existing frames but "
"some frames are missing. "
"The missing file(s) are:\n\n{}".format(
invalid[0].Name,
"\n\n".join(non_existing_frames),
),
title=self.label,
)
@classmethod
def repair(cls, instance):
invalid = cls.get_invalid(instance)
if invalid:
tool = invalid[0]
# Change render target to local to render locally
tool.SetData("openpype.creator_attributes.render_target", "local")
cls.log.info(
f"Reload the publisher and {tool.Name} "
"will be set to render locally"
)

View file

@ -1,112 +0,0 @@
from qtpy import QtWidgets
import qtawesome
from openpype.hosts.fusion.api import get_current_comp
_help = {"local": "Render the comp on your own machine and publish "
"it from that the destination folder",
"farm": "Submit a Fusion render job to a Render farm to use all other"
" computers and add a publish job"}
class SetRenderMode(QtWidgets.QWidget):
def __init__(self, parent=None):
QtWidgets.QWidget.__init__(self, parent)
self._comp = get_current_comp()
self._comp_name = self._get_comp_name()
self.setWindowTitle("Set Render Mode")
self.setFixedSize(300, 175)
layout = QtWidgets.QVBoxLayout()
# region comp info
comp_info_layout = QtWidgets.QHBoxLayout()
update_btn = QtWidgets.QPushButton(qtawesome.icon("fa.refresh",
color="white"), "")
update_btn.setFixedWidth(25)
update_btn.setFixedHeight(25)
comp_information = QtWidgets.QLineEdit()
comp_information.setEnabled(False)
comp_info_layout.addWidget(comp_information)
comp_info_layout.addWidget(update_btn)
# endregion comp info
# region modes
mode_options = QtWidgets.QComboBox()
mode_options.addItems(_help.keys())
mode_information = QtWidgets.QTextEdit()
mode_information.setReadOnly(True)
# endregion modes
accept_btn = QtWidgets.QPushButton("Accept")
layout.addLayout(comp_info_layout)
layout.addWidget(mode_options)
layout.addWidget(mode_information)
layout.addWidget(accept_btn)
self.setLayout(layout)
self.comp_information = comp_information
self.update_btn = update_btn
self.mode_options = mode_options
self.mode_information = mode_information
self.accept_btn = accept_btn
self.connections()
self.update()
# Force updated render mode help text
self._update_rendermode_info()
def connections(self):
"""Build connections between code and buttons"""
self.update_btn.clicked.connect(self.update)
self.accept_btn.clicked.connect(self._set_comp_rendermode)
self.mode_options.currentIndexChanged.connect(
self._update_rendermode_info)
def update(self):
"""Update all information in the UI"""
self._comp = get_current_comp()
self._comp_name = self._get_comp_name()
self.comp_information.setText(self._comp_name)
# Update current comp settings
mode = self._get_comp_rendermode()
index = self.mode_options.findText(mode)
self.mode_options.setCurrentIndex(index)
def _update_rendermode_info(self):
rendermode = self.mode_options.currentText()
self.mode_information.setText(_help[rendermode])
def _get_comp_name(self):
return self._comp.GetAttrs("COMPS_Name")
def _get_comp_rendermode(self):
return self._comp.GetData("openpype.rendermode") or "local"
def _set_comp_rendermode(self):
rendermode = self.mode_options.currentText()
self._comp.SetData("openpype.rendermode", rendermode)
self._comp.Print("Updated render mode to '%s'\n" % rendermode)
self.hide()
def _validation(self):
ui_mode = self.mode_options.currentText()
comp_mode = self._get_comp_rendermode()
return comp_mode == ui_mode

View file

@ -432,11 +432,11 @@ copy_files = """function copyFile(srcFilename, dstFilename)
import_files = """function %s_import_files()
{
var PNGTransparencyMode = 0; // Premultiplied wih Black
var TGATransparencyMode = 0; // Premultiplied wih Black
var SGITransparencyMode = 0; // Premultiplied wih Black
var PNGTransparencyMode = 0; // Premultiplied with Black
var TGATransparencyMode = 0; // Premultiplied with Black
var SGITransparencyMode = 0; // Premultiplied with Black
var LayeredPSDTransparencyMode = 1; // Straight
var FlatPSDTransparencyMode = 2; // Premultiplied wih White
var FlatPSDTransparencyMode = 2; // Premultiplied with White
function getUniqueColumnName( column_prefix )
{

View file

@ -142,10 +142,10 @@ function Client() {
};
/**
* Process recieved request. This will eval recieved function and produce
* Process received request. This will eval received function and produce
* results.
* @function
* @param {object} request - recieved request JSON
* @param {object} request - received request JSON
* @return {object} result of evaled function.
*/
self.processRequest = function(request) {
@ -245,7 +245,7 @@ function Client() {
var request = JSON.parse(to_parse);
var mid = request.message_id;
// self.logDebug('[' + mid + '] - Request: ' + '\n' + JSON.stringify(request));
self.logDebug('[' + mid + '] Recieved.');
self.logDebug('[' + mid + '] Received.');
request.result = self.processRequest(request);
self.logDebug('[' + mid + '] Processing done.');
@ -286,8 +286,8 @@ function Client() {
/** Harmony 21.1 doesn't have QDataStream anymore.
This means we aren't able to write bytes into QByteArray so we had
modify how content lenght is sent do the server.
Content lenght is sent as string of 8 char convertible into integer
to modify how content length is sent to the server.
Content length is sent as string of 8 char convertible into integer
(instead of 0x00000001[4 bytes] > "000000001"[8 bytes]) */
var codec_name = new QByteArray().append("UTF-8");
@ -476,6 +476,25 @@ function start() {
action.triggered.connect(self.onSubsetManage);
}
/**
* Set scene settings from DB to the scene
*/
self.onSetSceneSettings = function() {
app.avalonClient.send(
{
"module": "openpype.hosts.harmony.api",
"method": "ensure_scene_settings",
"args": []
},
false
);
};
// add Set Scene Settings
if (app.avalonMenu == null) {
action = menu.addAction('Set Scene Settings...');
action.triggered.connect(self.onSetSceneSettings);
}
/**
* Show Experimental dialog
*/

View file

@ -242,9 +242,15 @@ def launch_zip_file(filepath):
print(f"Localizing {filepath}")
temp_path = get_local_harmony_path(filepath)
scene_name = os.path.basename(temp_path)
if os.path.exists(os.path.join(temp_path, scene_name)):
# unzipped with duplicated scene_name
temp_path = os.path.join(temp_path, scene_name)
scene_path = os.path.join(
temp_path, os.path.basename(temp_path) + ".xstage"
temp_path, scene_name + ".xstage"
)
unzip = False
if os.path.exists(scene_path):
# Check remote scene is newer than local.
@ -262,6 +268,10 @@ def launch_zip_file(filepath):
with _ZipFile(filepath, "r") as zip_ref:
zip_ref.extractall(temp_path)
if os.path.exists(os.path.join(temp_path, scene_name)):
# unzipped with duplicated scene_name
temp_path = os.path.join(temp_path, scene_name)
# Close existing scene.
if ProcessContext.pid:
os.kill(ProcessContext.pid, signal.SIGTERM)
@ -309,7 +319,7 @@ def launch_zip_file(filepath):
)
if not os.path.exists(scene_path):
print("error: cannot determine scene file")
print("error: cannot determine scene file {}".format(scene_path))
ProcessContext.server.stop()
return
@ -394,7 +404,7 @@ def get_scene_data():
"function": "AvalonHarmony.getSceneData"
})["result"]
except json.decoder.JSONDecodeError:
# Means no sceen metadata has been made before.
# Means no scene metadata has been made before.
return {}
except KeyError:
# Means no existing scene metadata has been made.
@ -465,7 +475,7 @@ def imprint(node_id, data, remove=False):
Example:
>>> from openpype.hosts.harmony.api import lib
>>> node = "Top/Display"
>>> data = {"str": "someting", "int": 1, "float": 0.32, "bool": True}
>>> data = {"str": "something", "int": 1, "float": 0.32, "bool": True}
>>> lib.imprint(layer, data)
"""
scene_data = get_scene_data()
@ -550,7 +560,7 @@ def save_scene():
method prevents this double request and safely saves the scene.
"""
# Need to turn off the backgound watcher else the communication with
# Need to turn off the background watcher else the communication with
# the server gets spammed with two requests at the same time.
scene_path = send(
{"function": "AvalonHarmony.saveScene"})["result"]

View file

@ -142,7 +142,7 @@ def application_launch(event):
harmony.send({"script": script})
inject_avalon_js()
ensure_scene_settings()
# ensure_scene_settings()
check_inventory()

View file

@ -61,7 +61,7 @@ class Server(threading.Thread):
"module": (str), # Module of method.
"method" (str), # Name of method in module.
"args" (list), # Arguments to pass to method.
"kwargs" (dict), # Keywork arguments to pass to method.
"kwargs" (dict), # Keyword arguments to pass to method.
"reply" (bool), # Optional wait for method completion.
}
"""

View file

@ -25,8 +25,9 @@ class ExtractRender(pyblish.api.InstancePlugin):
application_path = instance.context.data.get("applicationPath")
scene_path = instance.context.data.get("scenePath")
frame_rate = instance.context.data.get("frameRate")
frame_start = instance.context.data.get("frameStart")
frame_end = instance.context.data.get("frameEnd")
# real value from timeline
frame_start = instance.context.data.get("frameStartHandle")
frame_end = instance.context.data.get("frameEndHandle")
audio_path = instance.context.data.get("audioPath")
if audio_path and os.path.exists(audio_path):
@ -55,9 +56,13 @@ class ExtractRender(pyblish.api.InstancePlugin):
# Execute rendering. Ignoring error cause Harmony returns error code
# always.
self.log.info(f"running [ {application_path} -batch {scene_path}")
args = [application_path, "-batch",
"-frames", str(frame_start), str(frame_end),
"-scene", scene_path]
self.log.info(f"running [ {application_path} {' '.join(args)}")
proc = subprocess.Popen(
[application_path, "-batch", scene_path],
args,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
stdin=subprocess.PIPE

View file

@ -60,7 +60,8 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
# which is available on 'context.data["assetEntity"]'
# - the same approach can be used in 'ValidateSceneSettingsRepair'
expected_settings = harmony.get_asset_settings()
self.log.info("scene settings from DB:".format(expected_settings))
self.log.info("scene settings from DB:{}".format(expected_settings))
expected_settings.pop("entityType") # not useful for the validation
expected_settings = _update_frames(dict.copy(expected_settings))
expected_settings["frameEndHandle"] = expected_settings["frameEnd"] +\
@ -68,21 +69,32 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
if (any(re.search(pattern, os.getenv('AVALON_TASK'))
for pattern in self.skip_resolution_check)):
self.log.info("Skipping resolution check because of "
"task name and pattern {}".format(
self.skip_resolution_check))
expected_settings.pop("resolutionWidth")
expected_settings.pop("resolutionHeight")
entity_type = expected_settings.get("entityType")
if (any(re.search(pattern, entity_type)
if (any(re.search(pattern, os.getenv('AVALON_TASK'))
for pattern in self.skip_timelines_check)):
self.log.info("Skipping frames check because of "
"task name and pattern {}".format(
self.skip_timelines_check))
expected_settings.pop('frameStart', None)
expected_settings.pop('frameEnd', None)
expected_settings.pop("entityType") # not useful after the check
expected_settings.pop('frameStartHandle', None)
expected_settings.pop('frameEndHandle', None)
asset_name = instance.context.data['anatomyData']['asset']
if any(re.search(pattern, asset_name)
for pattern in self.frame_check_filter):
expected_settings.pop("frameEnd")
self.log.info("Skipping frames check because of "
"task name and pattern {}".format(
self.frame_check_filter))
expected_settings.pop('frameStart', None)
expected_settings.pop('frameEnd', None)
expected_settings.pop('frameStartHandle', None)
expected_settings.pop('frameEndHandle', None)
# handle case where ftrack uses only two decimal places
# 23.976023976023978 vs. 23.98
@ -99,6 +111,7 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
"frameEnd": instance.context.data["frameEnd"],
"handleStart": instance.context.data.get("handleStart"),
"handleEnd": instance.context.data.get("handleEnd"),
"frameStartHandle": instance.context.data.get("frameStartHandle"),
"frameEndHandle": instance.context.data.get("frameEndHandle"),
"resolutionWidth": instance.context.data.get("resolutionWidth"),
"resolutionHeight": instance.context.data.get("resolutionHeight"),
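The skip logic in the validator above boils down to: if the current task name matches any configured pattern, drop the corresponding keys from the expected settings before comparing. A minimal Python sketch of that pattern (function and argument names are illustrative, not the plugin's actual API):

```python
import re

def drop_skipped_checks(expected_settings, task_name, skip_patterns, keys):
    """Remove `keys` from the expected settings when the task name
    matches any of the configured skip patterns."""
    if any(re.search(pattern, task_name) for pattern in skip_patterns):
        for key in keys:
            # pop with a default so missing keys don't raise
            expected_settings.pop(key, None)
    return expected_settings
```

With the `skip_timelines_check` patterns, the frame-range keys would be removed for matching tasks while the resolution keys stay untouched.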


@ -6,7 +6,7 @@ Ever tried to make a simple script for toonboom Harmony, then got stumped by the
Toonboom Harmony is a very powerful software, with hundreds of functions and tools, and it unlocks a great amount of possibilities for animation studios around the globe. And... being the product of the hard work of a small team forced to prioritise, it can also be a bit rustic at times!
We are users at heart, animators and riggers, who just want to interact with the software as simply as possible. Simplicity is at the heart of the design of openHarmony. But we also are developpers, and we made the library for people like us who can't resist tweaking the software and bend it in all possible ways, and are looking for powerful functions to help them do it.
We are users at heart, animators and riggers, who just want to interact with the software as simply as possible. Simplicity is at the heart of the design of openHarmony. But we also are developers, and we made the library for people like us who can't resist tweaking the software and bend it in all possible ways, and are looking for powerful functions to help them do it.
This library's aim is to create a more direct way to interact with Toonboom through scripts, by providing a more intuitive way to access its elements, and help with the cumbersome and repetitive tasks as well as help unlock untapped potential in its many available systems. So we can go from having to do things like this:
@ -78,7 +78,7 @@ All you have to do is call :
```javascript
include("openHarmony.js");
```
at the beggining of your script.
at the beginning of your script.
You can ask your users to download their copy of the library and store it alongside, or bundle it as you wish as long as you include the license file provided on this repository.
@ -129,7 +129,7 @@ Check that the environment variable `LIB_OPENHARMONY_PATH` is set correctly to t
## How to add openHarmony to vscode intellisense for autocompletion
Although not fully supported, you can get most of the autocompletion features to work by adding the following lines to a `jsconfig.json` file placed at the root of your working folder.
The paths need to be relative which means the openHarmony source code must be placed directly in your developping environnement.
The paths need to be relative, which means the openHarmony source code must be placed directly in your developing environment.
For example, if your working folder contains the openHarmony source in a folder called `OpenHarmony` and your working scripts in a folder called `myScripts`, place the `jsconfig.json` file at the root of the folder and add these lines to the file:


@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -78,7 +78,7 @@
* $.log("hello"); // prints out a message to the MessageLog.
* var myPoint = new $.oPoint(0,0,0); // create a new class instance from an openHarmony class.
*
* // function members of the $ objects get published to the global scope, which means $ can be ommited
* // function members of the $ objects get published to the global scope, which means $ can be omitted
*
* log("hello");
* var myPoint = new oPoint(0,0,0); // This is all valid
@ -118,7 +118,7 @@ Object.defineProperty( $, "directory", {
/**
* Wether Harmony is run with the interface or simply from command line
* Whether Harmony is run with the interface or simply from command line
*/
Object.defineProperty( $, "batchMode", {
get: function(){


@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -67,7 +67,7 @@
* @hideconstructor
* @namespace
* @example
* // To check wether an action is available, call the synthax:
* // To check whether an action is available, use the syntax:
* Action.validate (<actionName>, <responder>);
*
* // To launch an action, use the syntax:


@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -409,7 +409,7 @@ $.oApp.prototype.getToolByName = function(toolName){
/**
* returns the list of stencils useable by the specified tool
* returns the list of stencils usable by the specified tool
* @param {$.oTool} tool the tool object we want valid stencils for
* @return {$.oStencil[]} the list of stencils compatible with the specified tool
*/


@ -4,7 +4,7 @@
// openHarmony Library v0.01
//
//
// Developped by Mathieu Chaptel, Chris Fourney...
// Developed by Mathieu Chaptel, Chris Fourney...
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -338,7 +338,7 @@ Object.defineProperty($.oAttribute.prototype, "useSeparate", {
* Returns the default value of the attribute for most keywords
* @name $.oAttribute#defaultValue
* @type {bool}
* @todo switch the implentation to types?
* @todo switch the implementation to types?
* @example
* // to reset an attribute to its default value:
* // (mostly used for position/angle/skew parameters of pegs and drawing nodes)
@ -449,7 +449,7 @@ $.oAttribute.prototype.getLinkedColumns = function(){
/**
* Recursively sets an attribute to the same value as another. Both must have the same keyword.
* @param {bool} [duplicateColumns=false] In the case that the attribute has a column, wether to duplicate the column before linking
* @param {bool} [duplicateColumns=false] In the case that the attribute has a column, whether to duplicate the column before linking
* @private
*/
$.oAttribute.prototype.setToAttributeValue = function(attributeToCopy, duplicateColumns){


@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//


@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -158,7 +158,7 @@ $.oColorValue.prototype.fromColorString = function (hexString){
/**
* Uses a color integer (used in backdrops) and parses the INT; applies the RGBA components of the INT to thos oColorValue
* Uses a color integer (used in backdrops) and parses the INT; applies the RGBA components of the INT to the oColorValue
* @param { int } colorInt 24 bit-shifted integer containing RGBA values
*/
$.oColorValue.prototype.parseColorFromInt = function(colorInt){
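The integer unpacking that `parseColorFromInt` performs can be sketched in Python; the ARGB byte order shown here is an assumption for illustration, and the library's actual ordering may differ:

```python
def parse_color_int(color_int):
    """Unpack a bit-shifted 32-bit color integer into its components.

    Byte order (ARGB, high byte first) is assumed here for illustration.
    """
    return {
        "a": (color_int >> 24) & 0xFF,
        "r": (color_int >> 16) & 0xFF,
        "g": (color_int >> 8) & 0xFF,
        "b": color_int & 0xFF,
    }
```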


@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//


@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//


@ -5,7 +5,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -17,7 +17,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -250,7 +250,7 @@ $.oDialog.prototype.prompt = function( labelText, title, prefilledText){
/**
* Prompts with a file selector window
* @param {string} [text="Select a file:"] The title of the confirmation dialog.
* @param {string} [filter="*"] The filter for the file type and/or file name that can be selected. Accepts wildcard charater "*".
* @param {string} [filter="*"] The filter for the file type and/or file name that can be selected. Accepts wildcard character "*".
* @param {string} [getExisting=true] Whether to select an existing file or a save location
* @param {string} [acceptMultiple=false] Whether or not selecting more than one file is ok. Ignored if getExisting is false.
* @param {string} [startDirectory] The directory showed at the opening of the dialog.
@ -327,14 +327,14 @@ $.oDialog.prototype.browseForFolder = function(text, startDirectory){
* @constructor
* @classdesc A simple progress dialog to display the progress of a task.
* To react to the user clicking the cancel button, connect a function to the $.oProgressDialog.canceled() signal.
* When $.batchmode is true, the progress will be outputed as a "Progress : value/range" string to the Harmony stdout.
* When $.batchmode is true, the progress will be outputted as a "Progress : value/range" string to the Harmony stdout.
* @param {string} [labelText] The text displayed above the progress bar.
* @param {string} [range=100] The maximum value that represents a full progress bar.
* @param {string} [title] The title of the dialog
* @param {bool} [show=false] Whether to immediately show the dialog.
*
* @property {bool} wasCanceled Whether the progress bar was cancelled.
* @property {$.oSignal} canceled A Signal emited when the dialog is canceled. Can be connected to a callback.
* @property {$.oSignal} canceled A Signal emitted when the dialog is canceled. Can be connected to a callback.
*/
$.oProgressDialog = function( labelText, range, title, show ){
if (typeof title === 'undefined') var title = "Progress";
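The batch-mode behavior described above (progress written to stdout instead of a dialog) can be sketched as a plain formatter; the string follows the doc comment, everything else is illustrative:

```python
def format_batch_progress(value, range_max):
    """Return the progress line emitted in batch mode,
    following the "Progress : value/range" form documented above."""
    return "Progress : {}/{}".format(value, range_max)
```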
@ -608,7 +608,7 @@ $.oPieMenu = function( name, widgets, show, minAngle, maxAngle, radius, position
this.maxAngle = maxAngle;
this.globalCenter = position;
// how wide outisde the icons is the slice drawn
// how wide outside the icons is the slice drawn
this._circleMargin = 30;
// set these values before calling show() to customize the menu appearance
@ -974,7 +974,7 @@ $.oPieMenu.prototype.getMenuRadius = function(){
var _minRadius = UiLoader.dpiScale(30);
var _speed = 10; // the higher the value, the slower the progression
// hyperbolic tangent function to determin the radius
// hyperbolic tangent function to determine the radius
var exp = Math.exp(2*itemsNumber/_speed);
var _radius = ((exp-1)/(exp+1))*_maxRadius+_minRadius;
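The radius curve above uses a hyperbolic tangent so the menu grows quickly for small item counts and saturates near a maximum. A Python transcription of the same formula (parameter names illustrative):

```python
import math

def menu_radius(items_number, min_radius, max_radius, speed=10):
    """tanh-shaped growth: min_radius for zero items, approaching
    min_radius + max_radius as items_number grows."""
    exp = math.exp(2.0 * items_number / speed)
    return ((exp - 1) / (exp + 1)) * max_radius + min_radius
    # equivalent: math.tanh(items_number / speed) * max_radius + min_radius
```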
@ -1383,7 +1383,7 @@ $.oActionButton.prototype.activate = function(){
* This class is a subclass of QPushButton and all the methods from that class are available to modify this button.
* @param {string} paletteName The name of the palette that contains the color
* @param {string} colorName The name of the color (if more than one is present, will pick the first match)
* @param {bool} showName Wether to display the name of the color on the button
* @param {bool} showName Whether to display the name of the color on the button
* @param {QWidget} parent The parent QWidget for the button. Automatically set during initialisation of the menu.
*
*/
@ -1437,7 +1437,7 @@ $.oColorButton.prototype.activate = function(){
* @name $.oScriptButton
* @constructor
* @classdescription This subclass of QPushButton provides an easy way to create a button for a widget that will launch a function from another script file.<br>
* The buttons created this way automatically load the icon named after the script if it finds one named like the funtion in a script-icons folder next to the script file.<br>
* The buttons created this way automatically load the icon named after the script if it finds one named like the function in a script-icons folder next to the script file.<br>
* It will also automatically set the callback to launch the function from the script.<br>
* This class is a subclass of QPushButton and all the methods from that class are available to modify this button.
* @param {string} scriptFile The path to the script file that will be launched
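The icon lookup described above (a script-icons folder next to the script file, icon named after the function) could be resolved like this; the .png extension and exact naming are assumptions, not the library's actual behavior:

```python
import os

def script_button_icon(script_file, function_name, ext=".png"):
    """Return the expected icon path for a script button:
    "<script dir>/script-icons/<function name><ext>" (layout assumed)."""
    icon_dir = os.path.join(os.path.dirname(script_file), "script-icons")
    return os.path.join(icon_dir, function_name + ext)
```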


@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -426,7 +426,7 @@ Object.defineProperty($.oDrawing.prototype, 'drawingData', {
/**
* Import a given file into an existing drawing.
* @param {$.oFile} file The path to the file
* @param {bool} [convertToTvg=false] Wether to convert the bitmap to the tvg format (this doesn't vectorise the drawing)
* @param {bool} [convertToTvg=false] Whether to convert the bitmap to the tvg format (this doesn't vectorise the drawing)
*
* @return { $.oFile } the oFile object pointing to the drawing file after being it has been imported into the element folder.
*/
@ -878,8 +878,8 @@ $.oArtLayer.prototype.drawCircle = function(center, radius, lineStyle, fillStyle
* @param {$.oVertex[]} path an array of $.oVertex objects that describe a path.
* @param {$.oLineStyle} [lineStyle] the line style to draw with. (By default, will use the current stencil selection)
* @param {$.oFillStyle} [fillStyle] the fill information for the path. (By default, will use the current palette selection)
* @param {bool} [polygon] Wether bezier handles should be created for the points in the path (ignores "onCurve" properties of oVertex from path)
* @param {bool} [createUnderneath] Wether the new shape will appear on top or underneath the contents of the layer. (not working yet)
* @param {bool} [polygon] Whether bezier handles should be created for the points in the path (ignores "onCurve" properties of oVertex from path)
* @param {bool} [createUnderneath] Whether the new shape will appear on top or underneath the contents of the layer. (not working yet)
*/
$.oArtLayer.prototype.drawShape = function(path, lineStyle, fillStyle, polygon, createUnderneath){
if (typeof fillStyle === 'undefined') var fillStyle = new this.$.oFillStyle();
@ -959,7 +959,7 @@ $.oArtLayer.prototype.drawContour = function(path, fillStyle){
* @param {float} width the width of the rectangle.
* @param {float} height the height of the rectangle.
* @param {$.oLineStyle} lineStyle a line style to use for the rectangle stroke.
* @param {$.oFillStyle} fillStyle a fill style to use for the rectange fill.
* @param {$.oFillStyle} fillStyle a fill style to use for the rectangle fill.
* @returns {$.oShape} the shape containing the added stroke.
*/
$.oArtLayer.prototype.drawRectangle = function(x, y, width, height, lineStyle, fillStyle){
@ -1514,7 +1514,7 @@ Object.defineProperty($.oStroke.prototype, "path", {
/**
* The oVertex that are on the stroke (Bezier handles exluded.)
* The oVertex that are on the stroke (Bezier handles excluded.)
* The first is repeated at the last position when the stroke is closed.
* @name $.oStroke#points
* @type {$.oVertex[]}
@ -1583,7 +1583,7 @@ Object.defineProperty($.oStroke.prototype, "style", {
/**
* wether the stroke is a closed shape.
* whether the stroke is a closed shape.
* @name $.oStroke#closed
* @type {bool}
*/
@ -1919,7 +1919,7 @@ $.oContour.prototype.toString = function(){
* @constructor
* @classdesc
* The $.oVertex class represents a single control point on a stroke. This class is used to get the index of the point in the stroke path sequence, as well as its position as a float along the stroke's length.
* The onCurve property describes wether this control point is a bezier handle or a point on the curve.
* The onCurve property describes whether this control point is a bezier handle or a point on the curve.
*
* @param {$.oStroke} stroke the stroke that this vertex belongs to
* @param {float} x the x coordinate of the vertex, in drawing space


@ -4,7 +4,7 @@
// openHarmony Library v0.01
//
//
// Developped by Mathieu Chaptel, Chris Fourney...
// Developed by Mathieu Chaptel, Chris Fourney...
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//


@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -509,7 +509,7 @@ Object.defineProperty($.oFile.prototype, 'fullName', {
/**
* The name of the file without extenstion.
* The name of the file without extension.
* @name $.oFile#name
* @type {string}
*/


@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -263,7 +263,7 @@ Object.defineProperty($.oFrame.prototype, 'duration', {
return _sceneLength;
}
// walk up the frames of the scene to the next keyFrame to determin duration
// walk up the frames of the scene to the next keyFrame to determine duration
var _frames = this.column.frames
for (var i=this.frameNumber+1; i<_sceneLength; i++){
if (_frames[i].isKeyframe) return _frames[i].frameNumber - _startFrame;
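The duration walk above scans forward from the current frame until the next keyframe, or to the end of the scene if none is found. Sketched in Python with a plain list of keyframe flags (names illustrative):

```python
def frame_duration(is_keyframe, start_frame, scene_length):
    """Number of frames a value is held starting at `start_frame`:
    the distance to the next keyframe, else to the end of the scene."""
    for i in range(start_frame + 1, scene_length):
        if is_keyframe[i]:
            return i - start_frame
    return scene_length - start_frame
```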
@ -426,7 +426,7 @@ Object.defineProperty($.oFrame.prototype, 'velocity', {
* easeIn : a $.oPoint object representing the left handle for bezier columns, or a {point, ease} object for ease columns.
* easeOut : a $.oPoint object representing the right handle for bezier columns, or a {point, ease} object for ease columns.
* continuity : the type of bezier used by the point.
* constant : wether the frame is interpolated or a held value.
* constant : whether the frame is interpolated or a held value.
* @name $.oFrame#ease
* @type {oPoint/object}
*/
@ -520,7 +520,7 @@ Object.defineProperty($.oFrame.prototype, 'easeOut', {
/**
* Determines the frame's continuity setting. Can take the values "CORNER", (two independant bezier handles on each side), "SMOOTH"(handles are aligned) or "STRAIGHT" (no handles and in straight lines).
* Determines the frame's continuity setting. Can take the values "CORNER", (two independent bezier handles on each side), "SMOOTH"(handles are aligned) or "STRAIGHT" (no handles and in straight lines).
* @name $.oFrame#continuity
* @type {string}
*/


@ -4,7 +4,7 @@
// openHarmony Library v0.01
//
//
// Developped by Mathieu Chaptel, Chris Fourney...
// Developed by Mathieu Chaptel, Chris Fourney...
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -516,5 +516,5 @@ Object.defineProperty($.oList.prototype, 'toString', {
//Needs all filtering, limiting. mapping, pop, concat, join, ect
//Needs all filtering, limiting, mapping, pop, concat, join, etc
//Speed up by finessing the way it extends and tracks the enumerable properties.


@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -193,7 +193,7 @@ $.oPoint.prototype.pointSubtract = function( sub_pt ){
/**
* Subtracts the point to the coordinates of the current oPoint and returns a new oPoint with the result.
* @param {$.oPoint} point The point to subtract to this point.
* @returns {$.oPoint} a new independant oPoint.
* @returns {$.oPoint} a new independent oPoint.
*/
$.oPoint.prototype.subtractPoint = function( point ){
var x = this.x - point.x;
@ -298,9 +298,9 @@ $.oPoint.prototype.convertToWorldspace = function(){
/**
* Linearily Interpolate between this (0.0) and the provided point (1.0)
* Linearly Interpolate between this (0.0) and the provided point (1.0)
* @param {$.oPoint} point The target point at 100%
* @param {double} perc 0-1.0 value to linearily interp
* @param {double} perc 0-1.0 value to linearly interp
*
* @return: { $.oPoint } The interpolated value.
*/
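The linear interpolation documented above can be sketched independently of Harmony: at perc=0 the result equals this point, at perc=1 the target. A Python version over (x, y, z) tuples:

```python
def lerp_point(a, b, perc):
    """Linearly interpolate between points a (perc=0) and b (perc=1),
    given as (x, y, z) tuples."""
    return tuple(av + (bv - av) * perc for av, bv in zip(a, b))
```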
@ -410,9 +410,9 @@ $.oBox.prototype.include = function(box){
/**
* Checks wether the box contains another $.oBox.
* Checks whether the box contains another $.oBox.
* @param {$.oBox} box The $.oBox to check for.
* @param {bool} [partial=false] wether to accept partially contained boxes.
* @param {bool} [partial=false] whether to accept partially contained boxes.
*/
$.oBox.prototype.contains = function(box, partial){
if (typeof partial === 'undefined') var partial = false;
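The containment test above distinguishes full containment from partial overlap. A standalone sketch using (left, top, right, bottom) tuples (the field layout is an assumption for illustration):

```python
def box_contains(outer, inner, partial=False):
    """Whether `outer` contains `inner`; with partial=True any overlap
    counts. Boxes are (left, top, right, bottom) tuples."""
    ol, ot, o_right, ob = outer
    il, it, ir, ib = inner
    if partial:
        # any intersection at all counts as containment
        return il < o_right and ir > ol and it < ob and ib > ot
    # full containment: every edge of inner within outer
    return il >= ol and ir <= o_right and it >= ot and ib <= ob
```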
@ -537,7 +537,7 @@ $.oMatrix.prototype.toString = function(){
* @classdesc The $.oVector is a replacement for the Vector3d objects of Harmony.
* @param {float} x a x coordinate for this vector.
* @param {float} y a y coordinate for this vector.
* @param {float} [z=0] a z coordinate for this vector. If ommited, will be set to 0 and vector will be 2D.
* @param {float} [z=0] a z coordinate for this vector. If omitted, will be set to 0 and vector will be 2D.
*/
$.oVector = function(x, y, z){
if (typeof z === "undefined" || isNaN(z)) var z = 0;


@ -4,7 +4,7 @@
// openHarmony Library v0.01
//
//
// Developped by Mathieu Chaptel, Chris Fourney...
// Developed by Mathieu Chaptel, Chris Fourney...
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//


@ -4,7 +4,7 @@
// openHarmony Library v0.01
//
//
// Developped by Mathieu Chaptel, Chris Fourney...
// Developed by Mathieu Chaptel, Chris Fourney...
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -54,7 +54,7 @@
/**
* The $.oUtils helper class -- providing generic utilities. Doesn't need instanciation.
* The $.oUtils helper class -- providing generic utilities. Doesn't need instantiation.
* @classdesc $.oUtils utility Class
*/
$.oUtils = function(){


@ -4,7 +4,7 @@
// openHarmony Library v0.01
//
//
// Developped by Mathieu Chaptel, Chris Fourney...
// Developed by Mathieu Chaptel, Chris Fourney...
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -87,7 +87,7 @@ $.oNetwork = function( ){
* @param {function} callback_func Providing a callback function prevents blocking, and will respond on this function. The callback function is in form func( results ){}
* @param {bool} use_json In the event of a JSON api, this will return an object converted from the returned JSON.
*
* @return: {string/object} The resulting object/string from the query -- otherwise a bool as false when an error occured..
* @return: {string/object} The resulting object/string from the query -- otherwise a bool as false when an error occurred.
*/
$.oNetwork.prototype.webQuery = function ( address, callback_func, use_json ){
if (typeof callback_func === 'undefined') var callback_func = false;
@ -272,7 +272,7 @@ $.oNetwork.prototype.webQuery = function ( address, callback_func, use_json ){
* @param {function} path The local file path to save the download.
* @param {bool} replace Replace the file if it exists.
*
* @return: {string/object} The resulting object/string from the query -- otherwise a bool as false when an error occured..
* @return: {string/object} The resulting object/string from the query -- otherwise a bool as false when an error occurred.
*/
$.oNetwork.prototype.downloadSingle = function ( address, path, replace ){
if (typeof replace === 'undefined') var replace = false;


@ -4,7 +4,7 @@
// openHarmony Library
//
//
// Developped by Mathieu Chaptel, Chris Fourney
// Developed by Mathieu Chaptel, Chris Fourney
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -562,7 +562,7 @@ Object.defineProperty($.oNode.prototype, 'height', {
/**
* The list of oNodeLinks objects descibing the connections to the inport of this node, in order of inport.
* The list of oNodeLinks objects describing the connections to the inport of this node, in order of inport.
* @name $.oNode#inLinks
* @readonly
* @deprecated returns $.oNodeLink instances but $.oLink is preferred. Use oNode.getInLinks() instead.
@ -658,7 +658,7 @@ Object.defineProperty($.oNode.prototype, 'outPorts', {
/**
* The list of oNodeLinks objects descibing the connections to the outports of this node, in order of outport.
* The list of oNodeLinks objects describing the connections to the outports of this node, in order of outport.
* @name $.oNode#outLinks
* @readonly
* @type {$.oNodeLink[]}
@ -1666,7 +1666,7 @@ $.oNode.prototype.refreshAttributes = function( ){
* It represents peg nodes in the scene.
* @constructor
* @augments $.oNode
* @classdesc Peg Moudle Class
* @classdesc Peg Module Class
* @param {string} path Path to the node in the network.
* @param {oScene} oSceneObject Access to the oScene object of the DOM.
*/
@ -1886,7 +1886,7 @@ $.oDrawingNode.prototype.getDrawingAtFrame = function(frameNumber){
/**
* Gets the list of palettes containing colors used by a drawing node. This only gets palettes with the first occurence of the colors.
* Gets the list of palettes containing colors used by a drawing node. This only gets palettes with the first occurrence of the colors.
* @return {$.oPalette[]} The palettes that contain the color IDs used by the drawings of the node.
*/
$.oDrawingNode.prototype.getUsedPalettes = function(){
@ -1968,7 +1968,7 @@ $.oDrawingNode.prototype.unlinkPalette = function(oPaletteObject){
* Duplicates a node by creating an independent copy.
* @param {string} [newName] The new name for the duplicated node.
* @param {oPoint} [newPosition] The new position for the duplicated node.
* @param {bool} [duplicateElement] Wether to also duplicate the element.
* @param {bool} [duplicateElement] Whether to also duplicate the element.
*/
$.oDrawingNode.prototype.duplicate = function(newName, newPosition, duplicateElement){
if (typeof newPosition === 'undefined') var newPosition = this.nodePosition;
@ -2464,7 +2464,7 @@ $.oGroupNode.prototype.getNodeByName = function(name){
* Returns all the nodes of a certain type in the group.
* Pass a value to recurse to look into the groups as well.
* @param {string} typeName The type of the nodes.
* @param {bool} recurse Wether to look inside the groups.
* @param {bool} recurse Whether to look inside the groups.
*
* @return {$.oNode[]} The nodes found.
*/
@ -2626,7 +2626,7 @@ $.oGroupNode.prototype.orderNodeView = function(recurse){
*
* peg.linkOutNode(drawingNode);
*
* //through all this we didn't specify nodePosition parameters so we'll sort evertything at once
* //through all this we didn't specify nodePosition parameters so we'll sort everything at once
*
* sceneRoot.orderNodeView();
*
@ -3333,7 +3333,7 @@ $.oGroupNode.prototype.importImageAsTVG = function(path, alignment, nodePosition
* imports an image sequence as a node into the current group.
* @param {$.oFile[]} imagePaths a list of paths to the images to import (can pass a list of strings or $.oFile)
* @param {number} [exposureLength=1] the number of frames each drawing should be exposed at. If set to 0/false, each drawing will use the numbering suffix of the file to set its frame.
* @param {boolean} [convertToTvg=false] wether to convert the files to tvg during import
* @param {boolean} [convertToTvg=false] whether to convert the files to tvg during import
* @param {string} [alignment="ASIS"] the alignment to apply to the node
* @param {$.oPoint} [nodePosition] the position of the node in the nodeview
*
@ -3346,7 +3346,7 @@ $.oGroupNode.prototype.importImageSequence = function(imagePaths, exposureLength
if (typeof extendScene === 'undefined') var extendScene = false;
// match anything but capture trailing numbers and separates punctuation preceeding it
// match anything but capture trailing numbers and separates punctuation preceding it
var numberingRe = /(.*?)([\W_]+)?(\d*)$/i;
// sanitize imagePaths
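The corrected comment describes the regex used to split image-sequence filenames: a lazy base name, optional separator punctuation, and trailing frame digits. A rough Python equivalent of that pattern (the helper name is hypothetical, not part of the library) behaves like this:

```python
import re

# Mirrors the JS pattern /(.*?)([\W_]+)?(\d*)$/ from the hunk above:
# lazy base, optional non-word/underscore separator, trailing digits.
NUMBERING_RE = re.compile(r"(.*?)([\W_]+)?(\d*)$")

def split_numbering(name):
    """Split a file stem into (base, separator, frame_number_string)."""
    base, sep, digits = NUMBERING_RE.match(name).groups()
    return base, sep or "", digits

print(split_numbering("render_v001.0042"))  # ('render_v001', '.', '0042')
print(split_numbering("image"))             # ('image', '', '')
```

The trailing-digit capture is what lets `exposureLength=0` derive each drawing's frame from the file's numbering suffix.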


@ -4,7 +4,7 @@
// openHarmony Library v0.01
//
//
// Developped by Mathieu Chaptel, Chris Fourney...
// Developed by Mathieu Chaptel, Chris Fourney...
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -174,7 +174,7 @@ Object.defineProperty($.oNodeLink.prototype, 'outNode', {
return;
}
this.apply(); // do we really want to apply everytime we set?
this.apply(); // do we really want to apply every time we set?
}
});
@ -198,7 +198,7 @@ Object.defineProperty($.oNodeLink.prototype, 'inNode', {
return;
}
this.apply(); // do we really want to apply everytime we set?
this.apply(); // do we really want to apply every time we set?
}
});
@ -222,7 +222,7 @@ Object.defineProperty($.oNodeLink.prototype, 'outPort', {
return;
}
this.apply(); // do we really want to apply everytime we set?
this.apply(); // do we really want to apply every time we set?
}
});
@ -256,7 +256,7 @@ Object.defineProperty($.oNodeLink.prototype, 'inPort', {
return;
}
this.apply(); // do we really want to apply everytime we set?
this.apply(); // do we really want to apply every time we set?
}
});
@ -983,7 +983,7 @@ $.oNodeLink.prototype.validate = function ( ) {
* @return {bool} Whether the connection is a valid connection that exists currently in the node system.
*/
$.oNodeLink.prototype.validateUpwards = function( inport, outportProvided ) {
//IN THE EVENT OUTNODE WASNT PROVIDED.
//IN THE EVENT OUTNODE WASN'T PROVIDED.
this.path = this.findInputPath( this._inNode, inport, [] );
if( !this.path || this.path.length == 0 ){
return false;
@ -1173,7 +1173,7 @@ Object.defineProperty($.oLink.prototype, 'outPort', {
/**
* The index of the link comming out of the out-port.
* The index of the link coming out of the out-port.
* <br>In the event this value wasn't known by the link object but the link is actually connected, the correct value will be found.
* @name $.oLink#outLink
* @readonly
@ -1323,7 +1323,7 @@ $.oLink.prototype.getValidLink = function(createOutPorts, createInPorts){
/**
* Attemps to connect a link. Will guess the ports if not provided.
* Attempts to connect a link. Will guess the ports if not provided.
* @return {bool}
*/
$.oLink.prototype.connect = function(){
@ -1623,11 +1623,11 @@ $.oLinkPath.prototype.findExistingPath = function(){
/**
* Gets a link object from two nodes that can be succesfully connected. Provide port numbers if there are specific requirements to match. If a link already exists, it will be returned.
* Gets a link object from two nodes that can be successfully connected. Provide port numbers if there are specific requirements to match. If a link already exists, it will be returned.
* @param {$.oNode} start The node from which the link originates.
* @param {$.oNode} end The node at which the link ends.
* @param {int} [outPort] A prefered out-port for the link to use.
* @param {int} [inPort] A prefered in-port for the link to use.
* @param {int} [outPort] A preferred out-port for the link to use.
* @param {int} [inPort] A preferred in-port for the link to use.
*
* @return {$.oLink} the valid $.oLink object. Returns null if no such link could be created (for example if the node's in-port is already linked)
*/


@ -4,7 +4,7 @@
// openHarmony Library v0.01
//
//
// Developped by Mathieu Chaptel, ...
// Developed by Mathieu Chaptel, ...
//
//
// This library is an open source implementation of a Document Object Model
@ -16,7 +16,7 @@
// and by hiding the heavy lifting required by the official API.
//
// This library is provided as is and is a work in progress. As such, not every
// function has been implemented or is garanteed to work. Feel free to contribute
// function has been implemented or is guaranteed to work. Feel free to contribute
// improvements to its official github. If you do make sure you follow the provided
// template and naming conventions and document your new methods properly.
//
@ -212,7 +212,7 @@ function openHarmony_toolInstaller(){
//----------------------------------------------
//-- GET THE FILE CONTENTS IN A DIRCTORY ON GIT
//-- GET THE FILE CONTENTS IN A DIRECTORY ON GIT
this.recurse_files = function( contents, arr_files ){
with( context.$.global ){
try{
@ -501,7 +501,7 @@ function openHarmony_toolInstaller(){
var download_item = item["download_url"];
var query = $.network.webQuery( download_item, false, false );
if( query ){
//INSTALL TYPES ARE script, package, ect.
//INSTALL TYPES ARE script, package, etc.
if( install_types[ m.install_cache[ item["url"] ] ] ){
m.installLabel.text = install_types[ m.install_cache[ item["url"] ] ];


@ -1,7 +1,7 @@
{
"name": "openharmony",
"version": "0.0.1",
"description": "An Open Source Imlementation of a Document Object Model for the Toonboom Harmony scripting interface",
"description": "An Open Source Implementation of a Document Object Model for the Toonboom Harmony scripting interface",
"main": "openHarmony.js",
"scripts": {
"test": "$",


@ -108,7 +108,7 @@ __all__ = [
"apply_colorspace_project",
"apply_colorspace_clips",
"get_sequence_pattern_and_padding",
# depricated
# deprecated
"get_track_item_pype_tag",
"set_track_item_pype_tag",
"get_track_item_pype_data",


@ -1221,7 +1221,7 @@ def set_track_color(track_item, color):
def check_inventory_versions(track_items=None):
"""
Actual version color idetifier of Loaded containers
Actual version color identifier of Loaded containers
Check all track items and filter only
Loader nodes for its version. It will get all versions from database
@ -1249,10 +1249,10 @@ def check_inventory_versions(track_items=None):
project_name = legacy_io.active_project()
filter_result = filter_containers(containers, project_name)
for container in filter_result.latest:
set_track_color(container["_item"], clip_color)
set_track_color(container["_item"], clip_color_last)
for container in filter_result.outdated:
set_track_color(container["_item"], clip_color_last)
set_track_color(container["_item"], clip_color)
def selection_changed_timeline(event):
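The hunk above swaps the two color assignments so each filter group gets its intended track color: the fix routes `filter_result.latest` and `filter_result.outdated` to the opposite colors from before. A minimal sketch of the corrected grouping (the color values are hypothetical stand-ins, not Hiero API values):

```python
# Hypothetical stand-ins for the two track-item colors in the hunk above.
LATEST_COLOR = "cyan"
OUTDATED_COLOR = "red"

def color_by_version(latest_items, outdated_items):
    """Return an item -> color mapping matching the corrected logic."""
    colors = {}
    for item in latest_items:
        colors[item] = LATEST_COLOR    # up-to-date containers
    for item in outdated_items:
        colors[item] = OUTDATED_COLOR  # outdated containers
    return colors

print(color_by_version(["clipA"], ["clipB"]))
```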


@ -193,8 +193,8 @@ def parse_container(item, validate=True):
return
# convert the data to list and validate them
for _, obj_data in _data.items():
cotnainer = data_to_container(item, obj_data)
return_list.append(cotnainer)
container = data_to_container(item, obj_data)
return_list.append(container)
return return_list
else:
_data = lib.get_trackitem_openpype_data(item)


@ -146,6 +146,8 @@ class CreatorWidget(QtWidgets.QDialog):
return " ".join([str(m.group(0)).capitalize() for m in matches])
def create_row(self, layout, type, text, **kwargs):
value_keys = ["setText", "setCheckState", "setValue", "setChecked"]
# get type attribute from qwidgets
attr = getattr(QtWidgets, type)
@ -167,14 +169,27 @@ class CreatorWidget(QtWidgets.QDialog):
# assign the created attribute to variable
item = getattr(self, attr_name)
# set attributes to item which are not values
for func, val in kwargs.items():
if func in value_keys:
continue
if getattr(item, func):
log.debug("Setting {} to {}".format(func, val))
func_attr = getattr(item, func)
if isinstance(val, tuple):
func_attr(*val)
else:
func_attr(val)
# set values to item
for value_item in value_keys:
if value_item not in kwargs:
continue
if getattr(item, value_item):
getattr(item, value_item)(kwargs[value_item])
# add to layout
layout.addRow(label, item)
@ -276,8 +291,11 @@ class CreatorWidget(QtWidgets.QDialog):
elif v["type"] == "QSpinBox":
data[k]["value"] = self.create_row(
content_layout, "QSpinBox", v["label"],
setValue=v["value"], setMinimum=0,
setValue=v["value"],
setDisplayIntegerBase=10000,
setRange=(0, 99999), setMinimum=0,
setMaximum=100000, setToolTip=tool_tip)
return data
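The reworked loop applies non-value kwargs as setter calls on the widget, unpacking tuple values so multi-argument setters such as `setRange(min, max)` work alongside single-argument ones. The dispatch can be sketched without Qt (the widget class here is a hypothetical stand-in):

```python
class FakeWidget:
    """Minimal stand-in for a QWidget exposing Qt-style setters."""
    def __init__(self):
        self.state = {}

    def setRange(self, lo, hi):
        self.state["range"] = (lo, hi)

    def setToolTip(self, text):
        self.state["tooltip"] = text

def apply_setters(item, **kwargs):
    # Mirrors the hunk: tuple values are unpacked into the setter call,
    # scalar values are passed as a single argument.
    for func, val in kwargs.items():
        func_attr = getattr(item, func)
        if isinstance(val, tuple):
            func_attr(*val)
        else:
            func_attr(val)

w = FakeWidget()
apply_setters(w, setRange=(0, 99999), setToolTip="frames")
print(w.state)  # {'range': (0, 99999), 'tooltip': 'frames'}
```

This is why the QSpinBox row above can now pass `setRange=(0, 99999)` as a single kwarg.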
@ -393,7 +411,7 @@ class ClipLoader:
self.with_handles = options.get("handles") or bool(
options.get("handles") is True)
# try to get value from options or evaluate key value for `load_how`
self.sequencial_load = options.get("sequencially") or bool(
self.sequencial_load = options.get("sequentially") or bool(
"Sequentially in order" in options.get("load_how", ""))
# try to get value from options or evaluate key value for `load_to`
self.new_sequence = options.get("newSequence") or bool(
@ -818,7 +836,7 @@ class PublishClip:
# increasing steps by index of rename iteration
self.count_steps *= self.rename_index
hierarchy_formating_data = {}
hierarchy_formatting_data = {}
hierarchy_data = deepcopy(self.hierarchy_data)
_data = self.track_item_default_data.copy()
if self.ui_inputs:
@ -853,13 +871,13 @@ class PublishClip:
# fill up pythonic expresisons in hierarchy data
for k, _v in hierarchy_data.items():
hierarchy_formating_data[k] = _v["value"].format(**_data)
hierarchy_formatting_data[k] = _v["value"].format(**_data)
else:
# if no gui mode then just pass default data
hierarchy_formating_data = hierarchy_data
hierarchy_formatting_data = hierarchy_data
tag_hierarchy_data = self._solve_tag_hierarchy_data(
hierarchy_formating_data
hierarchy_formatting_data
)
tag_hierarchy_data.update({"heroTrack": True})
@ -887,20 +905,20 @@ class PublishClip:
# add data to return data dict
self.tag_data.update(tag_hierarchy_data)
def _solve_tag_hierarchy_data(self, hierarchy_formating_data):
def _solve_tag_hierarchy_data(self, hierarchy_formatting_data):
""" Solve tag data from hierarchy data and templates. """
# fill up clip name and hierarchy keys
hierarchy_filled = self.hierarchy.format(**hierarchy_formating_data)
clip_name_filled = self.clip_name.format(**hierarchy_formating_data)
hierarchy_filled = self.hierarchy.format(**hierarchy_formatting_data)
clip_name_filled = self.clip_name.format(**hierarchy_formatting_data)
# remove shot from hierarchy data: is not needed anymore
hierarchy_formating_data.pop("shot")
hierarchy_formatting_data.pop("shot")
return {
"newClipName": clip_name_filled,
"hierarchy": hierarchy_filled,
"parents": self.parents,
"hierarchyData": hierarchy_formating_data,
"hierarchyData": hierarchy_formatting_data,
"subset": self.subset,
"family": self.subset_family,
"families": [self.data["family"]]
@ -916,16 +934,16 @@ class PublishClip:
)
# first collect formatting data to use for formatting template
formating_data = {}
formatting_data = {}
for _k, _v in self.hierarchy_data.items():
value = _v["value"].format(
**self.track_item_default_data)
formating_data[_k] = value
formatting_data[_k] = value
return {
"entity_type": entity_type,
"entity_name": template.format(
**formating_data
**formatting_data
)
}
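The renamed `formatting_data` dict is built by resolving each templated hierarchy value against the track item's default data, then used to fill the entity-name template. A compact sketch of that two-stage formatting (keys and values here are hypothetical examples):

```python
# Stage 1: resolve each hierarchy value template against default data.
track_item_default_data = {"track": "sh", "sequence": "sq01"}
hierarchy_data = {
    "shot": {"value": "{track}010"},
    "sequence": {"value": "{sequence}"},
}

formatting_data = {
    key: item["value"].format(**track_item_default_data)
    for key, item in hierarchy_data.items()
}

# Stage 2: fill the entity-name template with the resolved values.
template = "{sequence}_{shot}"
entity_name = template.format(**formatting_data)
print(entity_name)  # sq01_sh010
```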


@ -0,0 +1,185 @@
"""Library to register OpenPype Creators for Houdini TAB node search menu.
This can be used to install custom houdini tools for the TAB search
menu which will trigger a publish instance to be created interactively.
The Creators are automatically registered on launch of Houdini through the
Houdini integration's `host.install()` method.
"""
import contextlib
import tempfile
import logging
import os
from openpype.pipeline import registered_host
from openpype.pipeline.create import CreateContext
from openpype.resources import get_openpype_icon_filepath
import hou
log = logging.getLogger(__name__)
CREATE_SCRIPT = """
from openpype.hosts.houdini.api.creator_node_shelves import create_interactive
create_interactive("{identifier}")
"""
def create_interactive(creator_identifier):
"""Create a Creator using its identifier interactively.
This is used by the generated shelf tools as callback when a user selects
the creator from the node tab search menu.
Args:
creator_identifier (str): The creator identifier of the Creator plugin
to create.
Return:
list: The created instances.
"""
# TODO Use Qt instead
result, variant = hou.ui.readInput('Define variant name',
buttons=("Ok", "Cancel"),
initial_contents='Main',
title="Define variant",
help="Set the variant for the "
"publish instance",
close_choice=1)
if result == 1:
# User interrupted
return
variant = variant.strip()
if not variant:
raise RuntimeError("Empty variant value entered.")
host = registered_host()
context = CreateContext(host)
before = context.instances_by_id.copy()
# Create the instance
context.create(
creator_identifier=creator_identifier,
variant=variant,
pre_create_data={"use_selection": True}
)
# For convenience we set the new node as current since that's much more
# familiar to the artist when creating a node interactively
# TODO Allow to disable auto-select in studio settings or user preferences
after = context.instances_by_id
new = set(after) - set(before)
if new:
# Select the new instance
for instance_id in new:
instance = after[instance_id]
node = hou.node(instance.get("instance_node"))
node.setCurrent(True)
return list(new)
@contextlib.contextmanager
def shelves_change_block():
"""Write shelf changes at the end of the context."""
hou.shelves.beginChangeBlock()
try:
yield
finally:
hou.shelves.endChangeBlock()
def install():
"""Install the Creator plug-ins to show in Houdini's TAB node search menu.
This function is re-entrant and can be called again to reinstall and
update the node definitions. For example during development it can be
useful to call it manually:
>>> from openpype.hosts.houdini.api.creator_node_shelves import install
>>> install()
Returns:
list: List of `hou.Tool` instances
"""
host = registered_host()
# Store the filepath on the host
# TODO: Define a less hacky static shelf path for current houdini session
filepath_attr = "_creator_node_shelf_filepath"
filepath = getattr(host, filepath_attr, None)
if filepath is None:
f = tempfile.NamedTemporaryFile(prefix="houdini_creator_nodes_",
suffix=".shelf",
delete=False)
f.close()
filepath = f.name
setattr(host, filepath_attr, filepath)
elif os.path.exists(filepath):
# Remove any existing shelf file so that we can completey regenerate
# and update the tools file if creator identifiers change
os.remove(filepath)
icon = get_openpype_icon_filepath()
# Create context only to get creator plugins, so we don't reset and only
# populate what we need to retrieve the list of creator plugins
create_context = CreateContext(host, reset=False)
create_context.reset_current_context()
create_context._reset_creator_plugins()
log.debug("Writing OpenPype Creator nodes to shelf: {}".format(filepath))
tools = []
with shelves_change_block():
for identifier, creator in create_context.manual_creators.items():
# TODO: Allow the creator plug-in itself to override the categories
# for where they are shown, by e.g. defining
# `Creator.get_network_categories()`
key = "openpype_create.{}".format(identifier)
log.debug(f"Registering {key}")
script = CREATE_SCRIPT.format(identifier=identifier)
data = {
"script": script,
"language": hou.scriptLanguage.Python,
"icon": icon,
"help": "Create OpenPype publish instance for {}".format(
creator.label
),
"help_url": None,
"network_categories": [
hou.ropNodeTypeCategory(),
hou.sopNodeTypeCategory()
],
"viewer_categories": [],
"cop_viewer_categories": [],
"network_op_type": None,
"viewer_op_type": None,
"locations": ["OpenPype"]
}
label = "Create {}".format(creator.label)
tool = hou.shelves.tool(key)
if tool:
tool.setData(**data)
tool.setLabel(label)
else:
tool = hou.shelves.newTool(
file_path=filepath,
name=key,
label=label,
**data
)
tools.append(tool)
# Ensure the shelf is reloaded
hou.shelves.loadFile(filepath)
return tools
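The registration loop above generates one shelf tool per manual creator, each carrying a small callback script produced by formatting `CREATE_SCRIPT` with the creator identifier. The `hou`-free core of that loop can be sketched as (helper name and sample identifier are hypothetical):

```python
CREATE_SCRIPT = """
from openpype.hosts.houdini.api.creator_node_shelves import create_interactive
create_interactive("{identifier}")
"""

def build_tool_definitions(creators):
    """Build one tool definition per identifier -> label mapping."""
    tools = []
    for identifier, label in creators.items():
        tools.append({
            "name": "openpype_create.{}".format(identifier),
            "label": "Create {}".format(label),
            "script": CREATE_SCRIPT.format(identifier=identifier),
        })
    return tools

defs = build_tool_definitions({"example.creator.alembic": "Alembic"})
print(defs[0]["name"])   # openpype_create.example.creator.alembic
print(defs[0]["label"])  # Create Alembic
```

In the real `install()`, each definition is then passed to `hou.shelves.tool`/`hou.shelves.newTool` so the creators appear in the TAB node search menu.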


@ -127,6 +127,8 @@ def get_output_parameter(node):
return node.parm("filename")
elif node_type == "comp":
return node.parm("copoutput")
elif node_type == "opengl":
return node.parm("picture")
elif node_type == "arnold":
if node.evalParm("ar_ass_export_enable"):
return node.parm("ar_ass_file")
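The `elif` chain above extends a node-type to output-parameter lookup with the new `opengl` branch. The same dispatch can be written as a dict (mapping abridged to the parameter names visible in the hunk; the helper name is hypothetical):

```python
# Node type -> name of its output file parameter (subset from the hunk above).
OUTPUT_PARM_BY_TYPE = {
    "comp": "copoutput",
    "opengl": "picture",  # newly added branch
}

def output_parm_name(node_type):
    """Return the output parameter name for a known node type."""
    try:
        return OUTPUT_PARM_BY_TYPE[node_type]
    except KeyError:
        raise TypeError("No output parameter mapped for %r" % node_type)

print(output_parm_name("opengl"))  # picture
```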
@ -479,23 +481,13 @@ def reset_framerange():
frame_start = asset_data.get("frameStart")
frame_end = asset_data.get("frameEnd")
# Backwards compatibility
if frame_start is None or frame_end is None:
frame_start = asset_data.get("edit_in")
frame_end = asset_data.get("edit_out")
if frame_start is None or frame_end is None:
log.warning("No edit information found for %s" % asset_name)
return
handles = asset_data.get("handles") or 0
handle_start = asset_data.get("handleStart")
if handle_start is None:
handle_start = handles
handle_end = asset_data.get("handleEnd")
if handle_end is None:
handle_end = handles
handle_start = asset_data.get("handleStart", 0)
handle_end = asset_data.get("handleEnd", 0)
frame_start -= int(handle_start)
frame_end += int(handle_end)
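The simplified lookup defaults each handle to 0 via `dict.get` instead of falling back through the legacy `handles`/`edit_in`/`edit_out` keys. The resulting frame-range arithmetic, as a standalone sketch of the new code path:

```python
def compute_frame_range(asset_data):
    """Extend the asset frame range by its handles (default 0 each side)."""
    frame_start = asset_data["frameStart"]
    frame_end = asset_data["frameEnd"]
    handle_start = asset_data.get("handleStart", 0)
    handle_end = asset_data.get("handleEnd", 0)
    return frame_start - int(handle_start), frame_end + int(handle_end)

print(compute_frame_range({"frameStart": 1001, "frameEnd": 1050,
                           "handleStart": 10}))  # (991, 1050)
```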


@ -18,7 +18,7 @@ from openpype.pipeline import (
)
from openpype.pipeline.load import any_outdated_containers
from openpype.hosts.houdini import HOUDINI_HOST_DIR
from openpype.hosts.houdini.api import lib, shelves
from openpype.hosts.houdini.api import lib, shelves, creator_node_shelves
from openpype.lib import (
register_event_callback,
@ -83,6 +83,10 @@ class HoudiniHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
_set_context_settings()
shelves.generate_shelves()
if not IS_HEADLESS:
import hdefereval # noqa, hdefereval is only available in ui mode
hdefereval.executeDeferred(creator_node_shelves.install)
def has_unsaved_changes(self):
return hou.hipFile.hasUnsavedChanges()
@ -144,13 +148,10 @@ class HoudiniHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
"""
obj_network = hou.node("/obj")
op_ctx = obj_network.createNode("null", node_name="OpenPypeContext")
# A null in houdini by default comes with content inside to visualize
# the null. However since we explicitly want to hide the node lets
# remove the content and disable the display flag of the node
for node in op_ctx.children():
node.destroy()
op_ctx = obj_network.createNode("subnet",
node_name="OpenPypeContext",
run_init_scripts=False,
load_contents=False)
op_ctx.moveToGoodPosition()
op_ctx.setBuiltExplicitly(False)


@ -60,7 +60,7 @@ class Creator(LegacyCreator):
def process(self):
instance = super(CreateEpicNode, self, process()
# Set paramaters for Alembic node
# Set parameters for Alembic node
instance.setParms(
{"sop_path": "$HIP/%s.abc" % self.nodes[0]}
)


@ -69,7 +69,7 @@ def generate_shelves():
mandatory_attributes = {'label', 'script'}
for tool_definition in shelf_definition.get('tools_list'):
# We verify that the name and script attibutes of the tool
# We verify that the name and script attributes of the tool
# are set
if not all(
tool_definition[key] for key in mandatory_attributes


@ -1,5 +1,5 @@
# -*- coding: utf-8 -*-
"""Convertor for legacy Houdini subsets."""
"""Converter for legacy Houdini subsets."""
from openpype.pipeline.create.creator_plugins import SubsetConvertorPlugin
from openpype.hosts.houdini.api.lib import imprint
@ -7,7 +7,7 @@ from openpype.hosts.houdini.api.lib import imprint
class HoudiniLegacyConvertor(SubsetConvertorPlugin):
"""Find and convert any legacy subsets in the scene.
This Convertor will find all legacy subsets in the scene and will
This Converter will find all legacy subsets in the scene and will
transform them to the current system. Since the old subsets doesn't
retain any information about their original creators, the only mapping
we can do is based on their families.
