diff --git a/.github/ISSUE_TEMPLATE/bug_report.md b/.github/ISSUE_TEMPLATE/bug_report.md deleted file mode 100644 index 96e768e420..0000000000 --- a/.github/ISSUE_TEMPLATE/bug_report.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -name: Bug report -about: Create a report to help us improve -title: '' -labels: bug -assignees: '' - ---- -**Running version** -[ex. 3.14.1-nightly.2] - -**Describe the bug** -A clear and concise description of what the bug is. - -**To Reproduce** -Steps to reproduce the behavior: -1. Go to '...' -2. Click on '....' -3. Scroll down to '....' -4. See error - -**Expected behavior** -A clear and concise description of what you expected to happen. - -**Screenshots** -If applicable, add screenshots to help explain your problem. - -**Desktop (please complete the following information):** - - OS: [e.g. windows] - - Host: [e.g. Maya, Nuke, Houdini] - -**Additional context** -Add any other context about the problem here. diff --git a/.github/ISSUE_TEMPLATE/bug_report.yml b/.github/ISSUE_TEMPLATE/bug_report.yml new file mode 100644 index 0000000000..cae6a6486b --- /dev/null +++ b/.github/ISSUE_TEMPLATE/bug_report.yml @@ -0,0 +1,183 @@ +name: Bug Report +description: File a bug report +title: 'Bug: ' +labels: + - 'type: bug' +body: + - type: markdown + attributes: + value: | + Thanks for taking the time to fill out this bug report! + - type: checkboxes + attributes: + label: Is there an existing issue for this? + description: >- + Please search to see if an issue already exists for the bug you + encountered. + options: + - label: I have searched the existing issues + required: true + - type: textarea + attributes: + label: 'Current Behavior:' + description: A concise description of what you're experiencing. + validations: + required: true + - type: textarea + attributes: + label: 'Expected Behavior:' + description: A concise description of what you expected to happen. + validations: + required: false + - type: dropdown + id: _version + attributes: + label: Version + description: What version are you running? 
Check the OpenPype Tray.
+      options:
+        - 3.15.7-nightly.1
+        - 3.15.6
+        - 3.15.6-nightly.3
+        - 3.15.6-nightly.2
+        - 3.15.6-nightly.1
+        - 3.15.5
+        - 3.15.5-nightly.2
+        - 3.15.5-nightly.1
+        - 3.15.4
+        - 3.15.4-nightly.3
+        - 3.15.4-nightly.2
+        - 3.15.4-nightly.1
+        - 3.15.3
+        - 3.15.3-nightly.4
+        - 3.15.3-nightly.3
+        - 3.15.3-nightly.2
+        - 3.15.3-nightly.1
+        - 3.15.2
+        - 3.15.2-nightly.6
+        - 3.15.2-nightly.5
+        - 3.15.2-nightly.4
+        - 3.15.2-nightly.3
+        - 3.15.2-nightly.2
+        - 3.15.2-nightly.1
+        - 3.15.1
+        - 3.15.1-nightly.6
+        - 3.15.1-nightly.5
+        - 3.15.1-nightly.4
+        - 3.15.1-nightly.3
+        - 3.15.1-nightly.2
+        - 3.15.1-nightly.1
+        - 3.15.0
+        - 3.15.0-nightly.1
+        - 3.14.11-nightly.4
+        - 3.14.11-nightly.3
+        - 3.14.11-nightly.2
+        - 3.14.11-nightly.1
+        - 3.14.10
+        - 3.14.10-nightly.9
+        - 3.14.10-nightly.8
+        - 3.14.10-nightly.7
+        - 3.14.10-nightly.6
+        - 3.14.10-nightly.5
+        - 3.14.10-nightly.4
+        - 3.14.10-nightly.3
+        - 3.14.10-nightly.2
+        - 3.14.10-nightly.1
+        - 3.14.9
+        - 3.14.9-nightly.5
+        - 3.14.9-nightly.4
+        - 3.14.9-nightly.3
+        - 3.14.9-nightly.2
+        - 3.14.9-nightly.1
+        - 3.14.8
+        - 3.14.8-nightly.4
+        - 3.14.8-nightly.3
+        - 3.14.8-nightly.2
+        - 3.14.8-nightly.1
+        - 3.14.7
+        - 3.14.7-nightly.8
+        - 3.14.7-nightly.7
+        - 3.14.7-nightly.6
+        - 3.14.7-nightly.5
+        - 3.14.7-nightly.4
+        - 3.14.7-nightly.3
+        - 3.14.7-nightly.2
+        - 3.14.7-nightly.1
+        - 3.14.6
+        - 3.14.6-nightly.3
+        - 3.14.6-nightly.2
+        - 3.14.6-nightly.1
+        - 3.14.5
+        - 3.14.5-nightly.3
+        - 3.14.5-nightly.2
+        - 3.14.5-nightly.1
+        - 3.14.4
+        - 3.14.4-nightly.4
+        - 3.14.4-nightly.3
+        - 3.14.4-nightly.2
+        - 3.14.4-nightly.1
+        - 3.14.3
+        - 3.14.3-nightly.7
+        - 3.14.3-nightly.6
+        - 3.14.3-nightly.5
+        - 3.14.3-nightly.4
+        - 3.14.3-nightly.3
+        - 3.14.3-nightly.2
+        - 3.14.3-nightly.1
+        - 3.14.2
+        - 3.14.2-nightly.5
+        - 3.14.2-nightly.4
+        - 3.14.2-nightly.3
+        - 3.14.2-nightly.2
+        - 3.14.2-nightly.1
+        - 3.14.1
+        - 3.14.1-nightly.4
+        - 3.14.1-nightly.3
+        - 3.14.1-nightly.2
+        - 3.14.1-nightly.1
+        - 3.14.0
+    validations:
+      required: true
+  - type: dropdown
+    validations:
+      required: true
+    attributes:
+      label: What platform are you running OpenPype on?
+      description: |
+        Please specify the operating systems you are running OpenPype with.
+      multiple: true
+      options:
+        - Windows
+        - Linux / CentOS
+        - Linux / Ubuntu
+        - Linux / RedHat
+        - macOS
+  - type: textarea
+    id: to-reproduce
+    attributes:
+      label: 'Steps To Reproduce:'
+      description: Steps to reproduce the behavior.
+      placeholder: |
+        1. What did the configuration look like
+        2. What type of action was taken
+    validations:
+      required: true
+  - type: checkboxes
+    attributes:
+      label: Are there any labels you wish to add?
+      description: Please search labels and identify those related to your bug.
+      options:
+        - label: I have added the relevant labels to the bug report.
+          required: true
+  - type: textarea
+    id: logs
+    attributes:
+      label: 'Relevant log output:'
+      description: >-
+        Please copy and paste any relevant log output. This will be
+        automatically formatted into code, so no need for backticks.
+      render: shell
+  - type: textarea
+    id: additional-context
+    attributes:
+      label: 'Additional context:'
+      description: Add any other context about the problem here.
diff --git a/.github/ISSUE_TEMPLATE/config.yml b/.github/ISSUE_TEMPLATE/config.yml
new file mode 100644
index 0000000000..cc61bfd04a
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/config.yml
@@ -0,0 +1,8 @@
+blank_issues_enabled: false
+contact_links:
+  - name: Ynput Community Discussions
+    url: https://community.ynput.io
+    about: Please ask and answer questions here.
+  - name: Ynput Discord Server
+    url: https://discord.gg/ynput
+    about: For community quick chats.
\ No newline at end of file
diff --git a/.github/ISSUE_TEMPLATE/enhancement_request.yml b/.github/ISSUE_TEMPLATE/enhancement_request.yml
new file mode 100644
index 0000000000..52b49e0481
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/enhancement_request.yml
@@ -0,0 +1,52 @@
+name: Enhancement Request
+description: Create a report to help us enhance a particular feature
+title: "Enhancement: "
+labels:
+  - "type: enhancement"
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Thanks for taking the time to fill out this enhancement request!
+  - type: checkboxes
+    attributes:
+      label: Is there an existing issue for this?
+      description: Please search to see if an issue already exists for the enhancement you are requesting.
+      options:
+        - label: I have searched the existing issues.
+          required: true
+  - type: textarea
+    id: related-feature
+    attributes:
+      label: Please describe the feature you have in mind and explain what the current shortcomings are.
+      description: A clear and concise description of what the problem is.
+    validations:
+      required: true
+  - type: textarea
+    id: enhancement-proposal
+    attributes:
+      label: How would you imagine the implementation of the feature?
+      description: A clear and concise description of what you want to happen.
+    validations:
+      required: true
+  - type: checkboxes
+    attributes:
+      label: Are there any labels you wish to add?
+      description: Please search labels and identify those related to your enhancement.
+      options:
+        - label: I have added the relevant labels to the enhancement request.
+          required: true
+  - type: textarea
+    id: alternatives
+    attributes:
+      label: "Describe alternatives you've considered:"
+      description: A clear and concise description of any alternative solutions or features you've considered.
+    validations:
+      required: false
+  - type: textarea
+    id: additional-context
+    attributes:
+      label: "Additional context:"
+      description: Add any other context or screenshots about the enhancement request here.
+    validations:
+      required: false
\ No newline at end of file
diff --git a/.github/ISSUE_TEMPLATE/feature_request.md b/.github/ISSUE_TEMPLATE/feature_request.md
deleted file mode 100644
index 11fc491ef1..0000000000
--- a/.github/ISSUE_TEMPLATE/feature_request.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-name: Feature request
-about: Suggest an idea for this project
-title: ''
-labels: enhancement
-assignees: ''
-
----
-
-**Is your feature request related to a problem? Please describe.**
-A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
-
-**Describe the solution you'd like**
-A clear and concise description of what you want to happen.
-
-**Describe alternatives you've considered**
-A clear and concise description of any alternative solutions or features you've considered.
-
-**Additional context**
-Add any other context or screenshots about the feature request here.
diff --git a/.github/workflows/documentation.yml b/.github/workflows/documentation.yml index f78e95528f..f2e7d1058f 100644 --- a/.github/workflows/documentation.yml +++ b/.github/workflows/documentation.yml @@ -1,4 +1,4 @@ -name: documentation +name: 📜 Documentation on: pull_request: diff --git a/.github/workflows/milestone_assign.yml b/.github/workflows/milestone_assign.yml index 3cbee51472..df4625c225 100644 --- a/.github/workflows/milestone_assign.yml +++ b/.github/workflows/milestone_assign.yml @@ -1,4 +1,4 @@ -name: Milestone - assign to PRs +name: 👉🏻 Milestone - assign to PRs on: pull_request_target: diff --git a/.github/workflows/milestone_create.yml b/.github/workflows/milestone_create.yml index 632704e64a..437c9e31b4 100644 --- a/.github/workflows/milestone_create.yml +++ b/.github/workflows/milestone_create.yml @@ -1,4 +1,4 @@ -name: Milestone - create default +name: ➕ Milestone - create default on: milestone: diff --git a/.github/workflows/miletone_release_trigger.yml b/.github/workflows/miletone_release_trigger.yml index b5b8aab1dc..4a031be7f9 100644 --- a/.github/workflows/miletone_release_trigger.yml +++ b/.github/workflows/miletone_release_trigger.yml @@ -1,4 +1,4 @@ -name: Milestone Release [trigger] +name: 🚩 Milestone Release [trigger] on: workflow_dispatch: @@ -45,3 +45,6 @@ jobs: token: ${{ secrets.YNPUT_BOT_TOKEN }} user_email: ${{ secrets.CI_EMAIL }} user_name: ${{ secrets.CI_USER }} + cu_api_key: ${{ secrets.CLICKUP_API_KEY }} + cu_team_id: ${{ secrets.CLICKUP_TEAM_ID }} + cu_field_id: ${{ secrets.CLICKUP_RELEASE_FIELD_ID }} diff --git a/.github/workflows/nightly_merge.yml b/.github/workflows/nightly_merge.yml index 1776d7a464..3f8c75dce3 100644 --- a/.github/workflows/nightly_merge.yml +++ b/.github/workflows/nightly_merge.yml @@ -1,4 +1,4 @@ -name: Dev -> Main +name: 🔀 Dev -> Main on: schedule: @@ -25,5 +25,5 @@ jobs: - name: Invoke pre-release workflow uses: benc-uk/workflow-dispatch@v1 with: - workflow: Nightly Prerelease + workflow: prerelease.yml token: ${{ secrets.YNPUT_BOT_TOKEN }} diff --git a/.github/workflows/pr_labels.yml b/.github/workflows/pr_labels.yml new file mode 100644 index 0000000000..ecc95051aa --- /dev/null +++ b/.github/workflows/pr_labels.yml @@ -0,0 +1,49 @@ +name: 🔖 PR labels + +on: + pull_request_target: + types: [opened, assigned] + +jobs: + size-label: + name: pr_size_label + runs-on: ubuntu-latest + if: github.event.action == 'assigned' || github.event.action == 'opened' + steps: + - name: Add size label + uses: "pascalgn/size-label-action@v0.4.3" + env: + GITHUB_TOKEN: "${{ secrets.YNPUT_BOT_TOKEN }}" + IGNORED: ".gitignore\n*.md\n*.json" + with: + sizes: > + { + "0": "XS", + "100": "S", + "500": "M", + "1000": "L", + "1500": "XL", + "2500": "XXL" + } + + label_prs_branch: + name: pr_branch_label + runs-on: ubuntu-latest + if: github.event.action == 'assigned' || github.event.action == 'opened' + steps: + - name: Label PRs - Branch name detection + uses: ffittschen/pr-branch-labeler@v1 + with: + repo-token: ${{ secrets.YNPUT_BOT_TOKEN }} + + label_prs_globe: + name: pr_globe_label + runs-on: ubuntu-latest + if: github.event.action == 'assigned' || github.event.action == 'opened' + steps: + - name: Label PRs - Globe detection + uses: actions/labeler@v4.0.3 + with: + repo-token: ${{ secrets.YNPUT_BOT_TOKEN }} + configuration-path: ".github/pr-glob-labeler.yml" + sync-labels: false diff --git a/.github/workflows/prerelease.yml b/.github/workflows/prerelease.yml index 571b0339e1..8c5c733c08 100644 --- a/.github/workflows/prerelease.yml +++ 
b/.github/workflows/prerelease.yml @@ -1,4 +1,4 @@ -name: Nightly Prerelease +name: ⏳ Nightly Prerelease on: workflow_dispatch: @@ -65,3 +65,9 @@ jobs: source_ref: 'main' target_branch: 'develop' commit_message_template: '[Automated] Merged {source_ref} into {target_branch}' + + - name: Invoke Update bug report workflow + uses: benc-uk/workflow-dispatch@v1 + with: + workflow: update_bug_report.yml + token: ${{ secrets.YNPUT_BOT_TOKEN }} \ No newline at end of file diff --git a/.github/workflows/project_actions.yml b/.github/workflows/project_task_statuses.yml similarity index 59% rename from .github/workflows/project_actions.yml rename to .github/workflows/project_task_statuses.yml index b21946f0ee..d078c08b70 100644 --- a/.github/workflows/project_actions.yml +++ b/.github/workflows/project_task_statuses.yml @@ -1,8 +1,6 @@ -name: project-actions +name: 📊 Project task statuses on: - pull_request_target: - types: [opened, assigned] pull_request_review: types: [submitted] issue_comment: @@ -25,7 +23,11 @@ jobs: if: | (github.event_name == 'issue_comment' && github.event.pull_request.head.repo.owner.login == 'ynput' && github.event.comment.user.id != 82967070) || (github.event_name == 'pull_request_review_comment' && github.event.pull_request.head.repo.owner.login == 'ynput' && github.event.comment.user.type != 'Bot') || - (github.event_name == 'pull_request_review' && github.event.pull_request.head.repo.owner.login == 'ynput' && github.event.review.state != 'changes_requested' && github.event.review.user.type != 'Bot') + (github.event_name == 'pull_request_review' && + github.event.pull_request.head.repo.owner.login == 'ynput' && + github.event.review.state != 'changes_requested' && + github.event.review.state != 'approved' && + github.event.review.user.type != 'Bot') steps: - name: Move PR to 'Review In Progress' uses: leonsteinhaeuser/project-beta-automations@v2.1.0 @@ -66,53 +68,3 @@ jobs: -d '{ "status": "in progress" }' - - size-label: - name: pr_size_label - runs-on: ubuntu-latest - if: | - (github.event_name == 'pull_request' && github.event.action == 'assigned') || - (github.event_name == 'pull_request' && github.event.action == 'opened') - - steps: - - name: Add size label - uses: "pascalgn/size-label-action@v0.4.3" - env: - GITHUB_TOKEN: "${{ secrets.YNPUT_BOT_TOKEN }}" - IGNORED: ".gitignore\n*.md\n*.json" - with: - sizes: > - { - "0": "XS", - "100": "S", - "500": "M", - "1000": "L", - "1500": "XL", - "2500": "XXL" - } - - label_prs_branch: - name: pr_branch_label - runs-on: ubuntu-latest - if: | - (github.event_name == 'pull_request' && github.event.action == 'assigned') || - (github.event_name == 'pull_request' && github.event.action == 'opened') - steps: - - name: Label PRs - Branch name detection - uses: ffittschen/pr-branch-labeler@v1 - with: - repo-token: ${{ secrets.YNPUT_BOT_TOKEN }} - - label_prs_globe: - name: pr_globe_label - runs-on: ubuntu-latest - if: | - (github.event_name == 'pull_request' && github.event.action == 'assigned') || - (github.event_name == 'pull_request' && github.event.action == 'opened') - steps: - - name: Label PRs - Globe detection - uses: actions/labeler@v4.0.3 - with: - repo-token: ${{ secrets.YNPUT_BOT_TOKEN }} - configuration-path: ".github/pr-glob-labeler.yml" - sync-labels: false diff --git a/.github/workflows/test_build.yml b/.github/workflows/test_build.yml index 064a4d47e0..fd8e0e642d 100644 --- a/.github/workflows/test_build.yml +++ b/.github/workflows/test_build.yml @@ -1,7 +1,7 @@ # This workflow will upload a Python Package using 
Twine when a release is created # For more information see: https://help.github.com/en/actions/language-and-framework-guides/using-python-with-github-actions#publishing-to-package-registries -name: Test Build +name: 🏗️ Test Build on: pull_request: diff --git a/.github/workflows/update_bug_report.yml b/.github/workflows/update_bug_report.yml new file mode 100644 index 0000000000..1e5da414bb --- /dev/null +++ b/.github/workflows/update_bug_report.yml @@ -0,0 +1,33 @@ +name: 🐞 Update Bug Report + +on: + workflow_dispatch: + release: + # https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#release + types: [published] + +jobs: + update-bug-report: + runs-on: ubuntu-latest + name: Update bug report + steps: + - uses: actions/checkout@v3 + with: + ref: ${{ github.event.release.target_commitish }} + - name: Update version + uses: ynput/gha-populate-form-version@main + with: + github_token: ${{ secrets.YNPUT_BOT_TOKEN }} + registry: github + dropdown: _version + limit_to: 100 + form: .github/ISSUE_TEMPLATE/bug_report.yml + commit_message: 'chore(): update bug report / version' + dry_run: no-push + + - name: Push to protected develop branch + uses: CasperWA/push-protected@v2.10.0 + with: + token: ${{ secrets.YNPUT_BOT_TOKEN }} + branch: develop + unprotect_reviews: true \ No newline at end of file diff --git a/CHANGELOG.md b/CHANGELOG.md index 4e22b783c4..07c1e7d5fd 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,5 +1,1499 @@ # Changelog + +## [3.15.6](https://github.com/ynput/OpenPype/tree/3.15.6) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.5...3.15.6) + +### **🆕 New features** + + +
+Substance Painter Integration #4283
+
+This implements a part of #4205 by implementing a Substance Painter integration.
+
+Status:
+- [x] Implement Host
+- [x] Start Substance with the last workfile using the `AddLastWorkfileToLaunchArgs` prelaunch hook
+- [x] Implement Qt tools
+- [x] Implement loaders
+- [x] Implemented a Set project mesh loader (this is a relatively special case because a project will always have exactly one mesh - a Substance Painter project cannot exist without a mesh).
+- [x] Implement project open callback
+- [x] On project open, notify the user if the loaded model is outdated
+- [x] Implement publishing logic
+- [x] Workfile publishing
+- [x] Export Texture Sets
+- [x] Support OCIO using #4195 (a draft branch is set up - see comment)
+- [ ] Likely needs more testing on the OCIO front
+- [x] Validate all outputs of the Export template are exported/generated
+- [x] Allow validation to be optional **(issue: there's no API method to detect what maps will be exported without doing an actual export to disk)**
+- [x] Support extracting/integration if not all outputs are generated
+- [x] Support multiple materials/texture sets per instance
+- [ ] Add validator that can enforce only a single texture set output if the studio prefers that.
+- [ ] Implement Export File Format (extensions) override in Creator
+- [ ] Add settings so an admin can choose which extensions are available.
+
+
+___
+
+ + +
+Data Exchange: Geometry in 3dsMax #4555
+
+Introduces and updates creators, extractors and loaders for the model family.
+
+Introduces new creators, extractors and loaders for the model family while adding model families into the existing max scene loader and extractor.
+- [x] creators
+- [x] adding model family into max scene loader and extractor
+- [x] fbx loader
+- [x] fbx extractor
+- [x] usd loader
+- [x] usd extractor
+- [x] validator for model family
+- [x] obj loader (update function)
+- [x] fix the update function of the loader as in #4675
+- [x] Add documentation
+
+
+___
+
+ + +
+AfterEffects: add review flag to each instance #4884
+
+Adds a `mark_for_review` flag to the Creator to allow artists to disable review if necessary. The flag is exposed in Settings and set to True by default (i.e. the same behavior as previously).
+
+
+___
+
+ +### **🚀 Enhancements** + + +
+Houdini: Fix Validate Output Node (VDB) #4819
+
+- Removes a plug-in that was a duplicate of this plug-in.
+- Slightly optimizes logging when there are many prims.
+- Fixes error reporting like https://github.com/ynput/OpenPype/pull/4818 did.
+
+
+___
+
+ + +
+Houdini: Add null node as output indicator when using TAB search #4834 + + +___ + +
+ + +
+Houdini: Don't error in collect review if camera is not set correctly #4874
+
+Do not raise an error in the collector when an invalid path is set as the camera path. Allow the camera path to be set incorrectly in the review instance until validation, so the problem is nicely shown in a validation report.
+
+
+___
+
+ + +
+Project packager: Backup and restore can store only database #4879
+
+The pack project functionality has an option to zip only the project database without project files. Unpack project can skip the project file copy if the folder is not found. Added helper functions to `openpype.client.mongo` that can also be used in tests as a replacement for a mongo dump.
+
+
+___
+
+ + +
+Houdini: ExtractOpenGL for Review instance not optional #4881
+
+ExtractOpenGL is no longer optional for the review instance.
+
+
+___
+
+ + +
+Publisher: Small style changes #4894 + +Small changes in styles and form of publisher UI. + + +___ + +
+ + +
+Houdini: Workfile icon in new publisher #4898 + +Fix icon for the workfile instance in new publisher + + +___ + +
+ + +
+Fusion: Simplify creator icons code #4899 + +Simplify code for setting the icons for the Fusion creators + + +___ + +
+ + +
+Enhancement: Fix PySide 6.5 support for loader #4900 + +Fixes PySide 6.5 support in Loader. + + +___ + +
+ +### **🐛 Bug fixes** + + +
+Maya: Validate Attributes #4917 + +This plugin was broken due to bad fetching of data and wrong repair action. + + +___ + +
+ + +
+Fix: Locally copied version of last published workfile is not incremented #4722
+
+### Fix 1
+When copied, the local workfile version kept the published version number, when it must be +1 to follow OP's naming convention.
+
+### Fix 2
+The local workfile version's name is built from anatomy. This avoids getting workfiles named with their publish template.
+
+### Fix 3
+When a subset has published workfiles for at least two tasks, for example `Modeling` and `Rigging`, launching `Rigging` picked the first subset returned by `next` (e.g. `workfileModeling`) and tried to match the current `task_name` (`Rigging`) against the `representation["context"]["task"]["name"]` of a Modeling representation, which left `workfile_representation` as `None` and exited the process.
+
+Looking for the `task_name` in the `subset['name']` fixes it; see the sketch after this entry.
+
+### Fix 4
+Fetch input dependencies of the workfile.
+
+Replacing https://github.com/ynput/OpenPype/pull/4102 for changes to bring this home.
+___
+
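+A minimal sketch of the Fix 3 matching described above, assuming illustrative names (`subsets` as the queried subset documents, `task_name` as the current task):
+```python
+def find_task_workfile_subset(subsets, task_name):
+    """Illustrative: pick the workfile subset matching the task.
+
+    Instead of taking the first subset returned by the query, look for
+    the one whose name contains the current task name, e.g. the task
+    "Rigging" matches the subset "workfileRigging".
+    """
+    for subset in subsets:
+        if task_name.lower() in subset["name"].lower():
+            return subset
+    return None
+```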
+ + +
+Maya: soft-fail when pan/zoom locked on camera when playblasting #4929
+
+When the pan/zoom enabled attribute on a camera is locked, playblasting with pan/zoom fails because the plug-in tries to restore it. This is fixed by skipping the restore with a warning.
+
+
+___
+
+ +### **Merged pull requests** + + +
+Maya Load References - Add Display Handle Setting #4904
+
+When we load a reference in Maya using the OpenPype loader, display handle is checked by default and prevents us from easily selecting the object in the viewport. I understand that some productions like to keep this option, so I propose adding display handle to the reference loader settings.
+
+
+___
+
+ + +
+Photoshop: add autocreators for review and flat image #4871
+
+Review and flattened image (produced when no instance of the `image` family was created) were created somewhat magically. This PR introduces two new auto creators which allow artists to disable the review or flattened image. For all `image` instances a `Review` flag was added to provide functionality to create a separate review per `image` instance. Previously it was only possible to have a separate instance of the `review` family. Review is not enabled on the `image` family by default (i.e. it follows the original behavior). The review auto creator is enabled by default as it was before. The flatten image creator must be set in Settings in `project_settings/photoshop/create/AutoImageCreator`.
+
+
+___
+
+ + + + +## [3.15.5](https://github.com/ynput/OpenPype/tree/3.15.5) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.4...3.15.5) + +### **🚀 Enhancements** + + +
+Maya: Playblast profiles #4777
+
+Support playblast profiles. This enables studios to customize what playblast settings should be used on a per task and/or subset basis. For example `modeling` should have `Wireframe On Shaded` enabled, while all other tasks should have it disabled.
+
+
+___
+
+ + +
+Maya: Support .abc files directly for Arnold standin look assignment #4856
+
+If an `.abc` file is loaded into an Arnold standin, look assignment is supported through the `cbId` attributes in the alembic file.
+
+
+___
+
+ + +
+Maya: Hide animation instance in creator #4872 + +- Hide animation instance in creator +- Add inventory action to recreate animation publish instance for loaded rigs + + +___ + +
+ + +
+Unreal: Render Creator enhancements #4477
+
+Improvements to the creator for the render family
+
+This PR introduces some enhancements to the creator for the render family in Unreal Engine:
+- Added the option to create a new, empty sequence for the render.
+- Added the option to not include the whole hierarchy for the selected sequence.
+- Improvements to the error messages.
+
+
+___
+
+ + +
+Unreal: Added settings for rendering #4575
+
+Added settings for rendering in Unreal Engine.
+
+Two settings have been added:
+- Pre roll frames, to set how many frames are used to load the scene before starting the actual rendering.
+- Configuration path, to allow saving a preset of settings from Unreal and using it for rendering.
+
+
+___
+
+ + +
+Global: Optimize anatomy formatting by only formatting used templates instead #4784
+
+Optimization to avoid formatting the full anatomy when only a single template is used; only that single template is formatted instead.
+
+
+___
+
+ + +
+Patchelf version locked #4853
+
+For the CentOS dockerfile it is necessary to lock the patchelf version to an older one, otherwise the build process fails.
+
+___
+
+ + +
+Houdini: Implement `switch` method on loaders #4866 + +Implement `switch` method on loaders + + +___ + +
+ + +
+Code: Tweak docstrings and return type hints #4875 + +Tweak docstrings and return type hints for functions in `openpype.client.entities`. + + +___ + +
+ + +
+Publisher: Clear comment on successful publish and on window close #4885 + +Clear comment text field on successful publish and on window close. + + +___ + +
+ + +
+Publisher: Make sure to reset asset widget when hidden and reshown #4886
+
+Make sure to reset the asset widget when hidden and reshown. Without this, the asset list in the set asset widget would never refresh when changing context on an existing instance, and thus would not show assets created after the first time that widget was launched.
+
+
+___
+
+ +### **🐛 Bug fixes** + + +
+Maya: Fix nested model instances. #4852 + +Fix nested model instance under review instance, where data collection was not including "Display Lights" and "Focal Length". + + +___ + +
+ + +
+Maya: Make default namespace naming backwards compatible #4873 + +Namespaces of loaded references are now _by default_ back to what they were before #4511 + + +___ + +
+ + +
+Nuke: Legacy convertor skips deprecation warnings #4846
+
+The Nuke legacy convertor was triggering a deprecated function, producing a lot of logs and slowing down the whole process. The convertor now skips all nodes without `AVALON_TAB` to avoid the warnings; see the sketch after this entry.
+
+
+___
+
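+A rough sketch of the skip, assuming the standard Nuke Python API (the convertor's real code differs):
+```python
+import nuke
+
+
+def iter_legacy_nodes():
+    # Only yield nodes that carry the legacy tab; everything else is
+    # skipped up front so the deprecated conversion code (and its
+    # warning spam) never runs for them.
+    for node in nuke.allNodes():
+        if "AVALON_TAB" in node.knobs():
+            yield node
+```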
+ + +
+3dsmax: move startup script logic to hook #4849
+
+The startup script for OpenPype was interfering with the Open Last Workfile feature. Moving this logic from a simple command line argument in the Settings to a pre-launch hook solves the order of command line arguments and makes both features work.
+
+
+___
+
+ + +
+Maya: Don't change time slider ranges in `get_frame_range` #4858 + +Don't change time slider ranges in `get_frame_range` + + +___ + +
+ + +
+Maya: Looks - calculate hash for tx texture #4878
+
+A texture hash is calculated for textures used in a published look and is used as a key in a dictionary. After recent changes, this hash was not calculated for TX files, resulting in a `None` value as the key in the dictionary and crashing publishing. This PR adds the texture hash for TX files to solve that issue.
+
+
+___
+
+ + +
+Houdini: Collect `currentFile` context data separate from workfile instance #4883
+
+Fix publishing failing without an active workfile instance due to missing `currentFile` data. The `currentFile` is now collected into the context in Houdini through a context plugin, no matter the active instances.
+
+
+___
+
+ + +
+Nuke: fixed broken slate workflow once published on deadline #4887
+
+The slate workflow is now working as expected, and Validate Sequence Frames no longer raises an error once the slate frame is included.
+
+
+___
+
+ + +
+Add fps as instance.data in collect review in Houdini. #4888
+
+Fix the bug of failing to publish Extract Review in Houdini. Original error:
+```python
+  File "OpenPype\build\exe.win-amd64-3.9\openpype\plugins\publish\extract_review.py", line 516, in prepare_temp_data
+    "fps": float(instance.data["fps"]),
+KeyError: 'fps'
+```
+
+
+___
+
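+A minimal sketch of such a collector, assuming the standard pyblish plug-in API and Houdini's `hou` module (the plug-in name and ordering are illustrative):
+```python
+import hou
+import pyblish.api
+
+
+class CollectReviewFPS(pyblish.api.InstancePlugin):
+    """Illustrative: store scene FPS so ExtractReview can read it."""
+
+    order = pyblish.api.CollectorOrder
+    label = "Collect Review FPS"
+    hosts = ["houdini"]
+    families = ["review"]
+
+    def process(self, instance):
+        # ExtractReview later reads instance.data["fps"], so collect
+        # it from the current Houdini session.
+        instance.data["fps"] = hou.fps()
+```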
+ + +
+TrayPublisher: Fill missing data for instances with review #4891
+
+Fill required data on instances in TrayPublisher if the instance has the review family. The data are required by ExtractReview and it would be complicated to do a proper fix at this moment. The collector does for review instances what https://github.com/ynput/OpenPype/pull/4383 did.
+
+
+___
+
+ + +
+Publisher: Keep track about current context and fix context selection widget #4892 + +Change selected context to current context on reset. Fix bug when context widget is re-enabled. + + +___ + +
+ + +
+Scene inventory: Model refresh fix with cherry picking #4895 + +Fix cherry pick issue in scene inventory. + + +___ + +
+ + +
+Nuke: Pre-render and missing review flag on instance causing crash #4897
+
+If an instance created in Nuke was missing the `review` flag, the collector crashed.
+
+
+___
+
+ +### **Merged pull requests** + + +
+After Effects: fix handles KeyError #4727
+
+Sometimes when publishing with AE (we only saw this error on AE 2023), we got a KeyError for the handles in the "Collect Workfile" step. So the handles are now taken from the context if there are no handles in the asset entity.
+
+
+___
+
+ + + + +## [3.15.4](https://github.com/ynput/OpenPype/tree/3.15.4) + + +[Full Changelog](https://github.com/ynput/OpenPype/compare/3.15.3...3.15.4) + +### **🆕 New features** + + +
+Maya: Cant assign shaders to the ass file - OP-4859 #4460
+
+Support AiStandIn nodes for look assignment.
+
+Using operators we assign shaders and attributes/parameters to nodes within standins. Initially there is only support for a limited amount of attributes, but we can add support as needed (see the sketch after this entry):
+```
+primaryVisibility
+castsShadows
+receiveShadows
+aiSelfShadows
+aiOpaque
+aiMatte
+aiVisibleInDiffuseTransmission
+aiVisibleInSpecularTransmission
+aiVisibleInVolume
+aiVisibleInDiffuseReflection
+aiVisibleInSpecularReflection
+aiSubdivUvSmoothing
+aiDispHeight
+aiDispPadding
+aiDispZeroValue
+aiStepSize
+aiVolumePadding
+aiSubdivType
+aiSubdivIterations
+```
+
+
+___
+
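+A rough sketch of the operator idea using MtoA's `aiSetParameter` node (node and attribute names as commonly exposed by MtoA; the plug-in's actual assignment logic is more involved):
+```python
+from maya import cmds
+
+# Illustrative: override a parameter on nodes inside a standin via an
+# Arnold operator instead of editing the standin contents directly.
+standin = "aiStandInShape1"  # assumed node name
+op = cmds.createNode("aiSetParameter")
+cmds.setAttr(op + ".selection", "*", type="string")
+cmds.setAttr(op + ".assignment[0]", "opaque=false", type="string")
+cmds.connectAttr(op + ".out", standin + ".operators[0]", force=True)
+```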
+ + +
+Maya: GPU cache representation #4649 + +Implement GPU cache for model, animation and pointcache. + + +___ + +
+ + +
+Houdini: Implement review family with opengl node #3839 + +Implements a first pass for Reviews publishing in Houdini. Resolves #2720 + +Uses the `opengl` ROP node to produce PNG images. + + +___ + +
+ + +
+Maya: Camera focal length visible in review - OP-3278 #4531
+
+Camera focal length visible in review.
+
+Support camera focal length in review, both static and dynamic. Resolves #3220. A small sketch follows this entry.
+
+
+___
+
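+A small sketch of gathering static vs. dynamic focal length data in Maya (illustrative, not the collector's exact code):
+```python
+from maya import cmds
+
+
+def collect_focal_length(camera_shape, start, end):
+    """Return a single value if static, else one value per frame."""
+    values = [
+        cmds.getAttr("{}.focalLength".format(camera_shape), time=frame)
+        for frame in range(int(start), int(end) + 1)
+    ]
+    if len(set(values)) == 1:
+        return values[0]  # static focal length
+    return values  # dynamic focal length, one value per frame
+```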
+ + +
+Maya: Defining plugins to load on Maya start - OP-4994 #4714 + +Feature to define plugins to load on Maya launch. + + +___ + +
+ + +
+Nuke, DL: Returning Suspended Publishing attribute #4715
+
+The old Nuke Publisher feature for suspending the publishing job on the render farm was added back to the current Publisher.
+
+
+___
+
+ + +
+Settings UI: Allow setting a size hint for text fields #4821
+
+The text entity has `minimum_lines_count`, which allows changing the minimum size hint of the UI input.
+
+
+___
+
+ + +
+TrayPublisher: Move 'BatchMovieCreator' settings to 'create' subcategory #4827
+
+Moved settings for `BatchMovieCreator` into the subcategory `create` in settings. The changes are made to match the settings schema and structure of other hosts.
+
+
+___
+
+ +### **🚀 Enhancements** + + +
+Maya looks: support for native Redshift texture format #2971
+
+Add support for native Redshift texture handling. Closes #2599
+
+Uses Redshift's Texture Processor executable to convert textures used in renders to the Redshift ".rstexbin" format.
+
+
+___
+
+ + +
+Maya: custom namespace for references #4511
+
+Adds an option in Project Settings > Maya > Loader plugins to set a custom namespace. If no namespace is set, the default one is used.
+
+
+___
+
+ + +
+Maya: Set correct framerange with handles on file opening #4664 + +Set the range of playback from the asset data, counting handles, to get the correct data when calling the "collect_animation_data" function. + + +___ + +
+ + +
+Maya: Fix camera update #4751 + +Fix resetting any modelPanel to a different camera when loading a camera and updating. + + +___ + +
+ + +
+Maya: Remove single assembly validation for animation instances #4840
+
+Rig groups may now be parented to other groups when the `includeParentHierarchy` attribute on the instance is "off".
+
+
+___
+
+ + +
+Maya: Optional control of display lights on playblast. #4145
+
+Optional control of display lights on playblast.
+
+Gives control over which display lights are used in the playblasts.
+
+
+___
+
+ + +
+Kitsu: note family requirements #4551
+
+Allows adding family requirements to the `IntegrateKitsuNote` task status change.
+
+Adds a `Family requirements` setting to `Integrate Kitsu Note`, so you can add requirements to determine whether the kitsu task status should be changed based on which families are published. For instance, you could have the status change only if a subset other than workfile is published (while workfile can still be included) by adding an item set to `Not equal` and `workfile`.
+
+
+___
+
+ + +
+Deactivate closed Kitsu projects on OP #4619 + +Deactivate project on OP when the project is closed on Kitsu. + + +___ + +
+ + +
+Maya: Suggestion to change capture labels. #4691 + +Change capture labels. + + +___ + +
+ + +
+Houdini: Change node type for OpenPypeContext `null` -> `subnet` #4745 + +Change the node type for OpenPype's hidden context node in Houdini from `null` to `subnet`. This fixes #4734 + + +___ + +
+ + +
+General: Extract burnin hosts filters #4749
+
+Removed the hosts filter from the ExtractBurnin plugin. An instance without representations won't cause a crash; the instance is just skipped. We discovered this because Blender already has review but did not create burnins.
+
+
+___
+
+ + +
+Global: Improve speed of Collect Custom Staging Directory #4768 + +Improve speed of Collect Custom Staging Directory. + + +___ + +
+ + +
+General: Anatomy templates formatting #4773
+
+Added an option to format only a single template from anatomy instead of formatting all of them every time. Formatting all templates causes slowdowns, e.g. during publishing of hundreds of instances.
+
+
+___
+
+ + +
+Harmony: Handle zip files with deeper structure #4782 + +External Harmony zip files might contain one additional level with scene name. + + +___ + +
+ + +
+Unreal: Use common logic to configure executable #4788
+
+The Unreal Editor location and version were autodetected. This eased configuration in some cases but was not flexible enough. This PR changes the way the Unreal Editor location is set, unifying it with the logic other hosts use.
+
+
+___
+
+ + +
+Github: Grammar tweaks + uppercase issue title #4813 + +Tweak some of the grammar in the issue form templates. + + +___ + +
+ + +
+Houdini: Allow creation of publish instances via Houdini TAB menu #4831
+
+Register the available Creators as Houdini tools so an artist can add publish instances via the Houdini TAB node search menu from within the network editor.
+
+
+___
+
+ +### **🐛 Bug fixes** + + +
+Maya: Fix Collect Render for V-Ray, Redshift and Renderman for missing colorspace #4650 + +Fix Collect Render not working for Redshift, V-Ray and Renderman due to missing `colorspace` argument to `RenderProduct` dataclass. + + +___ + +
+ + +
+Maya: Xgen fixes #4707 + +Fix for Xgen extraction of world parented nodes and validation for required namespace. + + +___ + +
+ + +
+Maya: Fix extract review and thumbnail for Maya 2020 #4744 + +Fix playblasting in Maya 2020 with override viewport options enabled. Fixes #4730. + + +___ + +
+ + +
+Maya: local variable 'arnold_standins' referenced before assignment - OP-5542 #4778 + +MayaLookAssigner erroring when MTOA is not loaded: +``` +# Traceback (most recent call last): +# File "\openpype\hosts\maya\tools\mayalookassigner\app.py", line 272, in on_process_selected +# nodes = list(set(item["nodes"]).difference(arnold_standins)) +# UnboundLocalError: local variable 'arnold_standins' referenced before assignment +``` + + +___ + +
+ + +
+Maya: Fix getting view and display in Maya 2020 - OP-5035 #4795 + +The `view_transform` returns a different format in Maya 2020. Fixes #4540 (hopefully). + + +___ + +
+ + +
+Maya: Fix Look Maya 2020 Py2 support for Extract Look #4808
+
+Fix Extract Look to support Python 2.7 for Maya 2020.
+
+
+___
+
+ + +
+Maya: Fix Validate Mesh Overlapping UVs plugin #4816 + +Fix typo in the code where a maya command returns a `list` instead of `str`. + + +___ + +
+ + +
+Maya: Fix tile rendering with Vray - OP-5566 #4832 + +Fixes tile rendering with Vray. + + +___ + +
+ + +
+Deadline: checking existing frames fails when there is number in file name #4698
+
+The previous implementation of the validator failed on files with any other number in the rendered file names. The regular expression pattern now handles numbers in the file names (e.g. "Main_beauty.v001.1001.exr", "Main_beauty_v001.1001.exr", "Main_beauty.1001.1001.exr") but not numbers behind frames (e.g. "Main_beauty.1001.v001.exr"); see the illustration after this entry.
+
+
+___
+
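+An illustration of the idea with a simplified pattern (not the validator's exact expression): the frame number is anchored to the end of the file name, so earlier digits such as version tokens are ignored.
+```python
+import re
+
+# Simplified: only digits directly before the extension count as a frame.
+FRAME_RE = re.compile(r"\.(\d+)\.exr$")
+
+for name in ("Main_beauty.v001.1001.exr",
+             "Main_beauty_v001.1001.exr",
+             "Main_beauty.1001.1001.exr"):
+    assert FRAME_RE.search(name).group(1) == "1001"
+
+# Numbers behind the frame are intentionally not treated as frames.
+assert FRAME_RE.search("Main_beauty.1001.v001.exr") is None
+```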
+ + +
+Maya: Validate Render Settings. #4735 + +Fixes error message when using attribute validation. + + +___ + +
+ + +
+General: Hero version sites recalculation #4737
+
+Site recalculation in integrate hero version expected that exactly the same number of files is integrated as in the previous integration. That is often not the case, so the recalculation now happens in a different way: first, all sites from the previous representation's files are gathered, and all of them are added to each file in the new representation. A sketch follows this entry.
+
+
+___
+
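+A minimal sketch of the described recalculation (the data shapes are illustrative, based on the description above):
+```python
+def recalculate_sites(old_files, new_files):
+    """Attach the union of all previous sites to every new file."""
+    union = []
+    seen = set()
+    for old_file in old_files:
+        for site in old_file.get("sites", []):
+            if site["name"] not in seen:
+                seen.add(site["name"])
+                union.append(site)
+    for new_file in new_files:
+        # Works even when the new representation has a different
+        # number of files than the previous integration.
+        new_file["sites"] = [dict(site) for site in union]
+    return new_files
+```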
+ + +
+Houdini: Fix collect current file #4739 + +Fixes the Workfile publishing getting added into every instance being published from Houdini + + +___ + +
+ + +
+Global: Fix Extract Burnin + Colorspace functions for conflicting python environments with PYTHONHOME #4740
+
+This fixes running OpenPype processes from e.g. a host with conflicting Python versions that has `PYTHONHOME` set in addition to `PYTHONPATH`, such as Houdini Py3.7 together with OpenPype Py3.9 when using Extract Burnin for a review in #3839. The fix applies to Extract Burnin and some of the colorspace functions that use `run_openpype_process`; see the sketch after this entry.
+
+
+___
+
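+A minimal sketch of the underlying idea, assuming a plain `subprocess` launch (the executable and arguments are hypothetical):
+```python
+import os
+import subprocess
+
+# Drop the host's conflicting interpreter variables so the spawned
+# OpenPype process resolves its own Python runtime.
+env = os.environ.copy()
+for key in ("PYTHONHOME", "PYTHONPATH"):
+    env.pop(key, None)
+
+# Hypothetical executable and arguments, for illustration only.
+subprocess.check_call(["openpype_console", "--help"], env=env)
+```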
+ + +
+Harmony: render what is in timeline in Harmony locally #4741
+
+Previously it wasn't possible to render according to the scene start/end set in the Timeline, only according to the whole timeline. This allows artists to override what is in the DB with what they require (with `Validate Scene Settings` disabled). An artist can now extend the scene by additional frames that shouldn't be rendered but might be desired. Removed explicitly setting scene settings (e.g. applying frames and resolution directly to the scene after launch) and added a separate menu item to allow artists to do it themselves.
+
+
+___
+
+ + +
+Maya: Extract Review settings add Use Background Gradient #4747 + +Add Display Gradient Background toggle in settings to fix support for setting flat background color for reviews. + + +___ + +
+ + +
+Nuke: publisher is offering review on write families on demand #4755
+
+The original idea of offering the reviewable toggle in the publisher on demand is fixed, and the `review` attribute can now be disabled in settings.
+
+
+___
+
+ + +
+Workfiles: keep Browse always enabled #4766
+
+Browse might make sense even if there are no workfiles present; actually, in that case it makes the most sense (e.g. I want to locate a workfile from outside - from the Desktop, for example).
+
+
+___
+
+ + +
+Global: label key in instance data is optional #4779
+
+The Collect OTIO review plugin no longer crashes if the `label` key is missing in instance data.
+
+
+___
+
+ + +
+Loader: Fix missing variable #4781
+
+The `handles` variable was missing in the loader tool after https://github.com/ynput/OpenPype/pull/4746. The variable was renamed to `handles_label` and is initialized to `None` if handles are not available.
+
+
+___
+
+ + +
+Nuke: Workfile Template builder fixes #4783
+
+The popup window after Nuke start is not showing. Knobs with X/Y coordinates on nodes which were converted from placeholders are not added if `keepPlaceholders` is switched off.
+
+
+___
+
+ + +
+Maya: Add family filter 'review' to burnin profile with focal length #4791
+
+Avoid the burnin profile with the `focalLength` key for renders; use it only for playblast reviews.
+
+
+___
+
+ + +
+add farm instance to the render collector in 3dsMax #4794
+
+Bug fix for the failure of submitting a publish job in 3dsMax.
+
+
+___
+
+ + +
+Publisher: Plugin active attribute is respected #4798
+
+The Publisher considers a plugin's `active` attribute, so the plugin is not processed when `active` is set to `False`. But we use the attribute in `OptionalPyblishPluginMixin` for different purposes, so a hack bypass of the active state validation was added when a plugin inherits from the mixin. This is a temporary solution which cannot be changed until all hosts use the Publisher, otherwise global plugins would be broken. Also, plugins which have `enabled` set to `False` are filtered out; this happened only when automated settings were applied and the settings contained the `"enabled"` key set to `False`. A sketch of the filtering follows this entry.
+
+
+___
+
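+An illustrative sketch of the described filtering (the mixin import path is assumed; the Publisher's real code differs):
+```python
+from openpype.pipeline.publish import OptionalPyblishPluginMixin
+
+
+def filter_publish_plugins(plugins):
+    filtered = []
+    for plugin in plugins:
+        # Plugins disabled via automated settings are dropped entirely.
+        if getattr(plugin, "enabled", True) is False:
+            continue
+        # 'active' is respected, except for plugins using the mixin,
+        # which repurpose the attribute for per-instance optionality.
+        optional = issubclass(plugin, OptionalPyblishPluginMixin)
+        if getattr(plugin, "active", True) is False and not optional:
+            continue
+        filtered.append(plugin)
+    return filtered
+```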
+ + +
+Nuke: settings and optional attribute in publisher for some validators #4811
+
+The new publisher supports an optional switch for plugins, which is offered in the Publisher's right panel. Some plugins were missing this switch, and also the settings which would offer the optionality.
+
+
+___
+
+ + +
+Settings: Version settings popup fix #4822
+
+The version completer popup has issues on some platforms; this should fix those edge cases. Also fixed an issue where the completer stayed shown after reset (save).
+
+
+___
+
+ + +
+Hiero/Nuke: adding monitorOut key to settings #4826
+
+New versions of Hiero introduced a new colorspace property for Monitor Out. It has been added to project settings. Also added new config names to the settings enumerator option.
+
+
+___
+
+ + +
+Nuke: removed default workfile template builder preset #4835 + +Default for workfile template builder should have been empty. + + +___ + +
+ + +
+TVPaint: Review can be made from any instance #4843
+
+Add a `"review"` tag to the output of extract sequence if the instance is marked for review. Until now, only instances with family `"review"` were able to define input for the `ExtractReview` plugin, which is not right.
+
+
+___
+
+ +### **🔀 Refactored code** + + +
+Deadline: Remove unused FramesPerTask job info submission #4657 + +Remove unused `FramesPerTask` job info submission to Deadline. + + +___ + +
+ + +
+Maya: Remove pymel dependency #4724 + +Refactors code written using `pymel` to use standard maya python libraries instead like `maya.cmds` or `maya.api.OpenMaya` + + +___ + +
+ + +
+Remove "preview" data from representation #4759 + +Remove "preview" data from representation + + +___ + +
+ + +
+Maya: Collect Review cleanup code for attached subsets #4720 + +Refactor some code for Maya: Collect Review for attached subsets. + + +___ + +
+ + +
+Refactor: Remove `handles`, `edit_in` and `edit_out` backwards compatibility #4746
+
+Removes the backward compatibility fallback for data called `handles`, `edit_in` and `edit_out`.
+
+
+___
+
+ +### **📃 Documentation** + + +
+Bump webpack from 5.69.1 to 5.76.1 in /website #4624
+
+Bumps [webpack](https://github.com/webpack/webpack) from 5.69.1 to 5.76.1.
+
+Release notes (sourced from webpack's releases, truncated):
+
+- v5.76.1 - Fixed: added `assert/strict` built-in to `NodeTargetPlugin`; includes a revert.
+- v5.76.0 - Bugfixes, features, security fixes, repo changes and new contributors; full changelog: https://github.com/webpack/webpack/compare/v5.75.0...v5.76.0
+- v5.75.0 - Bugfixes: `experiments.*` normalize to `false` when opted out; avoid `NaN%`; show the correct error when using a conflicting chunk name in code; HMR code tests existence of `window` before trying to access it; fix `eval-nosources-*` to actually exclude sources; fix a race condition where no module is returned from processing a module; fix the position of a standalone semicolon in runtime code. Features: add support for `@import` to external CSS when using experimental CSS in node.
+
+Commits and maintainer changes: this version was pushed to npm by evilebottnawi, a new releaser for webpack since your current version.
+
+[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=webpack&package-manager=npm_and_yarn&previous-version=5.69.1&new-version=5.76.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
+
+Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
+___ + +
+ + +
+Documentation: Add Extract Burnin documentation #4765 + +Add documentation for Extract Burnin global plugin settings. + + +___ + +
+ + +
+Documentation: Move publisher related tips to publisher area #4772 + +Move publisher related tips for After Effects artist documentation to the correct position. + + +___ + +
+ + +
+Documentation: Add extra terminology to the key concepts glossary #4838 + +Tweak some of the key concepts in the documentation. + + +___ + +
+ +### **Merged pull requests** + + +
+Maya: Refactor Extract Look with dedicated processors for maketx #4711 + +Refactor Maya extract look to fix some issues: +- [x] Allow Extraction with maketx with OCIO Color Management enabled in Maya. +- [x] Fix file hashing so it includes arguments to maketx, so that when arguments change it correctly generates a new hash +- [x] Fix maketx destination colorspace when OCIO is enabled +- [x] Use pre-collected colorspaces of the resources instead of trying to retrieve again in Extract Look +- [x] Fix colorspace attributes being reinterpreted by maya on export (fix remapping) - goal is to resolve #2337 +- [x] Fix support for checking config path of maya default OCIO config (due to using `lib.get_color_management_preferences` which remaps that path) +- [x] Merged in #2971 to refactor MakeTX into TextureProcessor and also support generating Redshift `.rstexbin` files. - goal is to resolve #2599 +- [x] Allow custom arguments to `maketx` from OpenPype Settings like mentioned here by @fabiaserra for arguments like: `--monochrome-detect`, `--opaque-detect`, `--checknan`. +- [x] Actually fix the code and make it work. :) (I'll try to keep below checkboxes in sync with my code changes) +- [x] Publishing without texture processor should work (no maketx + no rstexbin) +- [x] Publishing with maketx should work +- [x] Publishing with rstexbin should work +- [x] Test it. (This is just me doing some test-runs, please still test the PR!) + + +___ + +
+ + +
+Maya template builder load all assets linked to the shot #4761
+
+Problem:
+All the assets of the ftrack project are loaded, not only those linked to the shot.
+
+How to reproduce the error:
+Open Maya in the context of a shot, then build a new scene with the "Build Workfile from template" button in the "OpenPype" menu.
+![image](https://user-images.githubusercontent.com/7068597/229124652-573a23d7-a2b2-4d50-81bf-7592c00d24dc.png)
+
+
+___
+
+ + +
+Global: Do not force instance data with frame ranges of the asset #4383 + +This aims to resolve #4317 + + +___ + +
+ + +
+Cosmetics: Fix some grammar in docstrings and messages (and some code) #4752 + +Tweak some grammar in codebase + + +___ + +
+ + +
+Deadline: Submit publish job fails due root work hardcode - OP-5528 #4775 + +Generating config templates was hardcoded to `root[work]`. This PR fixes that. + + +___ + +
+ + +
+CreateContext: Added option to remove Unknown attributes #4776
+
+Added an option to remove attributes with UnkownAttrDef on instances. Popping a key will also remove the attribute definition from attribute values, so they're not recreated again.
+
+
+___
+
+ + + ## [3.15.3](https://github.com/ynput/OpenPype/tree/3.15.3) diff --git a/Dockerfile.centos7 b/Dockerfile.centos7 index 5eb2f478ea..ce1a624a4f 100644 --- a/Dockerfile.centos7 +++ b/Dockerfile.centos7 @@ -52,7 +52,7 @@ RUN yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.n # we need to build our own patchelf WORKDIR /temp-patchelf -RUN git clone https://github.com/NixOS/patchelf.git . \ +RUN git clone -b 0.17.0 --single-branch https://github.com/NixOS/patchelf.git . \ && source scl_source enable devtoolset-7 \ && ./bootstrap.sh \ && ./configure \ diff --git a/openpype/cli.py b/openpype/cli.py index a650a9fdcc..54af42920d 100644 --- a/openpype/cli.py +++ b/openpype/cli.py @@ -415,11 +415,12 @@ def repack_version(directory): @main.command() @click.option("--project", help="Project name") @click.option( - "--dirpath", help="Directory where package is stored", default=None -) -def pack_project(project, dirpath): + "--dirpath", help="Directory where package is stored", default=None) +@click.option( + "--dbonly", help="Store only Database data", default=False, is_flag=True) +def pack_project(project, dirpath, dbonly): """Create a package of project with all files and database dump.""" - PypeCommands().pack_project(project, dirpath) + PypeCommands().pack_project(project, dirpath, dbonly) @main.command() @@ -427,9 +428,11 @@ def pack_project(project, dirpath): @click.option( "--root", help="Replace root which was stored in project", default=None ) -def unpack_project(zipfile, root): +@click.option( + "--dbonly", help="Store only Database data", default=False, is_flag=True) +def unpack_project(zipfile, root, dbonly): """Create a package of project with all files and database dump.""" - PypeCommands().unpack_project(zipfile, root) + PypeCommands().unpack_project(zipfile, root, dbonly) @main.command() diff --git a/openpype/client/entities.py b/openpype/client/entities.py index 7054658c64..8004dc3019 100644 --- a/openpype/client/entities.py +++ b/openpype/client/entities.py @@ -69,6 +69,19 @@ def convert_ids(in_ids): def get_projects(active=True, inactive=False, fields=None): + """Yield all project entity documents. + + Args: + active (Optional[bool]): Include active projects. Defaults to True. + inactive (Optional[bool]): Include inactive projects. + Defaults to False. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Yields: + dict: Project entity data which can be reduced to specified 'fields'. + None is returned if project with specified filters was not found. + """ mongodb = get_project_database() for project_name in mongodb.collection_names(): if project_name in ("system.indexes",): @@ -81,6 +94,20 @@ def get_projects(active=True, inactive=False, fields=None): def get_project(project_name, active=True, inactive=True, fields=None): + """Return project entity document by project name. + + Args: + project_name (str): Name of project. + active (Optional[bool]): Allow active project. Defaults to True. + inactive (Optional[bool]): Allow inactive project. Defaults to True. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Project entity data which can be reduced to + specified 'fields'. None is returned if project with specified + filters was not found. 
+ """ # Skip if both are disabled if not active and not inactive: return None @@ -124,17 +151,18 @@ def get_whole_project(project_name): def get_asset_by_id(project_name, asset_id, fields=None): - """Receive asset data by it's id. + """Receive asset data by its id. Args: project_name (str): Name of project where to look for queried entities. asset_id (Union[str, ObjectId]): Asset's id. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: - dict: Asset entity data. - None: Asset was not found by id. + Union[Dict, None]: Asset entity data which can be reduced to + specified 'fields'. None is returned if asset with specified + filters was not found. """ asset_id = convert_id(asset_id) @@ -147,17 +175,18 @@ def get_asset_by_id(project_name, asset_id, fields=None): def get_asset_by_name(project_name, asset_name, fields=None): - """Receive asset data by it's name. + """Receive asset data by its name. Args: project_name (str): Name of project where to look for queried entities. asset_name (str): Asset's name. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: - dict: Asset entity data. - None: Asset was not found by name. + Union[Dict, None]: Asset entity data which can be reduced to + specified 'fields'. None is returned if asset with specified + filters was not found. """ if not asset_name: @@ -195,8 +224,8 @@ def _get_assets( parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids. standard (bool): Query standard assets (type 'asset'). archived (bool): Query archived assets (type 'archived_asset'). - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: Cursor: Query cursor as iterable which returns asset documents matching @@ -261,8 +290,8 @@ def get_assets( asset_names (Iterable[str]): Name assets that should be found. parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids. archived (bool): Add also archived assets. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: Cursor: Query cursor as iterable which returns asset documents matching @@ -300,8 +329,8 @@ def get_archived_assets( be found. asset_names (Iterable[str]): Name assets that should be found. parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: Cursor: Query cursor as iterable which returns asset documents matching @@ -356,17 +385,18 @@ def get_asset_ids_with_subsets(project_name, asset_ids=None): def get_subset_by_id(project_name, subset_id, fields=None): - """Single subset entity data by it's id. + """Single subset entity data by its id. Args: project_name (str): Name of project where to look for queried entities. subset_id (Union[str, ObjectId]): Id of subset which should be found. 
- fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: - None: If subset with specified filters was not found. - Dict: Subset document which can be reduced to specified 'fields'. + Union[Dict, None]: Subset entity data which can be reduced to + specified 'fields'. None is returned if subset with specified + filters was not found. """ subset_id = convert_id(subset_id) @@ -379,20 +409,19 @@ def get_subset_by_id(project_name, subset_id, fields=None): def get_subset_by_name(project_name, subset_name, asset_id, fields=None): - """Single subset entity data by it's name and it's version id. + """Single subset entity data by its name and its version id. Args: project_name (str): Name of project where to look for queried entities. subset_name (str): Name of subset. asset_id (Union[str, ObjectId]): Id of parent asset. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: - Union[None, Dict[str, Any]]: None if subset with specified filters was - not found or dict subset document which can be reduced to - specified 'fields'. - + Union[Dict, None]: Subset entity data which can be reduced to + specified 'fields'. None is returned if subset with specified + filters was not found. """ if not subset_name: return None @@ -434,8 +463,8 @@ def get_subsets( names_by_asset_ids (dict[ObjectId, List[str]]): Complex filtering using asset ids and list of subset names under the asset. archived (bool): Look for archived subsets too. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: Cursor: Iterable cursor yielding all matching subsets. @@ -520,17 +549,18 @@ def get_subset_families(project_name, subset_ids=None): def get_version_by_id(project_name, version_id, fields=None): - """Single version entity data by it's id. + """Single version entity data by its id. Args: project_name (str): Name of project where to look for queried entities. version_id (Union[str, ObjectId]): Id of version which should be found. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: - None: If version with specified filters was not found. - Dict: Version document which can be reduced to specified 'fields'. + Union[Dict, None]: Version entity data which can be reduced to + specified 'fields'. None is returned if version with specified + filters was not found. """ version_id = convert_id(version_id) @@ -546,18 +576,19 @@ def get_version_by_id(project_name, version_id, fields=None): def get_version_by_name(project_name, version, subset_id, fields=None): - """Single version entity data by it's name and subset id. + """Single version entity data by its name and subset id. Args: project_name (str): Name of project where to look for queried entities. - version (int): name of version entity (it's version). + version (int): name of version entity (its version). subset_id (Union[str, ObjectId]): Id of version which should be found. 
- fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: - None: If version with specified filters was not found. - Dict: Version document which can be reduced to specified 'fields'. + Union[Dict, None]: Version entity data which can be reduced to + specified 'fields'. None is returned if version with specified + filters was not found. """ subset_id = convert_id(subset_id) @@ -574,7 +605,7 @@ def get_version_by_name(project_name, version, subset_id, fields=None): def version_is_latest(project_name, version_id): - """Is version the latest from it's subset. + """Is version the latest from its subset. Note: Hero versions are considered as latest. @@ -680,8 +711,8 @@ def get_versions( versions (Iterable[int]): Version names (as integers). Filter ignored if 'None' is passed. hero (bool): Look also for hero versions. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: Cursor: Iterable cursor yielding all matching versions. @@ -705,12 +736,13 @@ def get_hero_version_by_subset_id(project_name, subset_id, fields=None): project_name (str): Name of project where to look for queried entities. subset_id (Union[str, ObjectId]): Subset id under which is hero version. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: - None: If hero version for passed subset id does not exists. - Dict: Hero version entity data. + Union[Dict, None]: Hero version entity data which can be reduced to + specified 'fields'. None is returned if hero version with specified + filters was not found. """ subset_id = convert_id(subset_id) @@ -730,17 +762,18 @@ def get_hero_version_by_subset_id(project_name, subset_id, fields=None): def get_hero_version_by_id(project_name, version_id, fields=None): - """Hero version by it's id. + """Hero version by its id. Args: project_name (str): Name of project where to look for queried entities. version_id (Union[str, ObjectId]): Hero version id. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: - None: If hero version with passed id was not found. - Dict: Hero version entity data. + Union[Dict, None]: Hero version entity data which can be reduced to + specified 'fields'. None is returned if hero version with specified + filters was not found. """ version_id = convert_id(version_id) @@ -773,8 +806,8 @@ def get_hero_versions( should look for hero versions. Filter ignored if 'None' is passed. version_ids (Iterable[Union[str, ObjectId]]): Hero version ids. Filter ignored if 'None' is passed. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: Cursor|list: Iterable yielding hero versions matching passed filters. 
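The entity helpers documented in the hunks above and below share one calling convention: a project name, entity filters, and an optional `fields` iterable, with each `get_*_by_*` helper returning either a document dict reduced to `fields` or None. A minimal usage sketch, assuming these helpers are importable from `openpype.client` (as done elsewhere in this changeset); the project, asset, subset and representation names are purely illustrative:

    from openpype.client import (
        get_asset_by_name,
        get_subset_by_name,
        get_last_version_by_subset_id,
        get_representation_by_name,
    )

    project_name = "demo"  # hypothetical project name
    asset = get_asset_by_name(project_name, "sh010", fields=["_id"])
    if asset is not None:
        subset = get_subset_by_name(project_name, "renderMain", asset["_id"])
        if subset is not None:
            version = get_last_version_by_subset_id(project_name, subset["_id"])
            if version is not None:
                # Every helper returns None when nothing matches the
                # filters, so each step is guarded before the next query.
                representation = get_representation_by_name(
                    project_name, "exr", version["_id"]
                )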
@@ -801,8 +834,8 @@ def get_output_link_versions(project_name, version_id, fields=None): project_name (str): Name of project where to look for queried entities. version_id (Union[str, ObjectId]): Version id which can be used as input link for other versions. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: Iterable: Iterable cursor yielding versions that are used as input @@ -828,8 +861,8 @@ def get_last_versions(project_name, subset_ids, fields=None): Args: project_name (str): Name of project where to look for queried entities. subset_ids (Iterable[Union[str, ObjectId]]): List of subset ids. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: dict[ObjectId, int]: Key is subset id and value is last version name. @@ -913,12 +946,13 @@ def get_last_version_by_subset_id(project_name, subset_id, fields=None): Args: project_name (str): Name of project where to look for queried entities. subset_id (Union[str, ObjectId]): Id of version which should be found. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: - None: If version with specified filters was not found. - Dict: Version document which can be reduced to specified 'fields'. + Union[Dict, None]: Version entity data which can be reduced to + specified 'fields'. None is returned if version with specified + filters was not found. """ subset_id = convert_id(subset_id) @@ -945,12 +979,13 @@ def get_last_version_by_subset_name( asset_id (Union[str, ObjectId]): Asset id which is parent of passed subset name. asset_name (str): Asset name which is parent of passed subset name. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: - None: If version with specified filters was not found. - Dict: Version document which can be reduced to specified 'fields'. + Union[Dict, None]: Version entity data which can be reduced to + specified 'fields'. None is returned if version with specified + filters was not found. """ if not asset_id and not asset_name: @@ -972,18 +1007,18 @@ def get_last_version_by_subset_name( def get_representation_by_id(project_name, representation_id, fields=None): - """Representation entity data by it's id. + """Representation entity data by its id. Args: project_name (str): Name of project where to look for queried entities. representation_id (Union[str, ObjectId]): Representation id. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: - None: If representation with specified filters was not found. - Dict: Representation entity data which can be reduced - to specified 'fields'. + Union[Dict, None]: Representation entity data which can be reduced to + specified 'fields'. None is returned if representation with + specified filters was not found. 
""" if not representation_id: @@ -1004,19 +1039,19 @@ def get_representation_by_id(project_name, representation_id, fields=None): def get_representation_by_name( project_name, representation_name, version_id, fields=None ): - """Representation entity data by it's name and it's version id. + """Representation entity data by its name and its version id. Args: project_name (str): Name of project where to look for queried entities. representation_name (str): Representation name. version_id (Union[str, ObjectId]): Id of parent version entity. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: - None: If representation with specified filters was not found. - Dict: Representation entity data which can be reduced - to specified 'fields'. + Union[dict[str, Any], None]: Representation entity data which can be + reduced to specified 'fields'. None is returned if representation + with specified filters was not found. """ version_id = convert_id(version_id) @@ -1202,8 +1237,8 @@ def get_representations( names_by_version_ids (dict[ObjectId, list[str]]): Complex filtering using version ids and list of names under the version. archived (bool): Output will also contain archived representations. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: Cursor: Iterable cursor yielding all matching representations. @@ -1216,7 +1251,7 @@ def get_representations( version_ids=version_ids, context_filters=context_filters, names_by_version_ids=names_by_version_ids, - standard=True, + standard=standard, archived=archived, fields=fields ) @@ -1247,8 +1282,8 @@ def get_archived_representations( representation context fields. names_by_version_ids (dict[ObjectId, List[str]]): Complex filtering using version ids and list of names under the version. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: Cursor: Iterable cursor yielding all matching representations. @@ -1377,8 +1412,8 @@ def get_thumbnail_id_from_source(project_name, src_type, src_id): src_id (Union[str, ObjectId]): Id of source entity. Returns: - ObjectId: Thumbnail id assigned to entity. - None: If Source entity does not have any thumbnail id assigned. + Union[ObjectId, None]: Thumbnail id assigned to entity. If Source + entity does not have any thumbnail id assigned. """ if not src_type or not src_id: @@ -1397,14 +1432,14 @@ def get_thumbnails(project_name, thumbnail_ids, fields=None): """Receive thumbnails entity data. Thumbnail entity can be used to receive binary content of thumbnail based - on it's content and ThumbnailResolvers. + on its content and ThumbnailResolvers. Args: project_name (str): Name of project where to look for queried entities. thumbnail_ids (Iterable[Union[str, ObjectId]]): Ids of thumbnail entities. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: cursor: Cursor of queried documents. 
@@ -1429,12 +1464,13 @@ def get_thumbnail(project_name, thumbnail_id, fields=None): Args: project_name (str): Name of project where to look for queried entities. thumbnail_id (Union[str, ObjectId]): Id of thumbnail entity. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. Returns: - None: If thumbnail with specified id was not found. - Dict: Thumbnail entity data which can be reduced to specified 'fields'. + Union[Dict, None]: Thumbnail entity data which can be reduced to + specified 'fields'. None is returned if thumbnail with specified + filters was not found. """ if not thumbnail_id: @@ -1458,8 +1494,13 @@ def get_workfile_info( project_name (str): Name of project where to look for queried entities. asset_id (Union[str, ObjectId]): Id of asset entity. task_name (str): Task name on asset. - fields (Iterable[str]): Fields that should be returned. All fields are - returned if 'None' is passed. + fields (Optional[Iterable[str]]): Fields that should be returned. All + fields are returned if 'None' is passed. + + Returns: + Union[Dict, None]: Workfile entity data which can be reduced to + specified 'fields'. None is returned if workfile with specified + filters was not found. """ if not asset_id or not task_name or not filename: diff --git a/openpype/client/mongo.py b/openpype/client/mongo.py index 72acbc5476..251041c028 100644 --- a/openpype/client/mongo.py +++ b/openpype/client/mongo.py @@ -5,6 +5,12 @@ import logging import pymongo import certifi +from bson.json_util import ( + loads, + dumps, + CANONICAL_JSON_OPTIONS +) + if sys.version_info[0] == 2: from urlparse import urlparse, parse_qs else: @@ -15,6 +21,49 @@ class MongoEnvNotSet(Exception): pass +def documents_to_json(docs): + """Convert documents to json string. + + Args: + docs (Union[list[dict[str, Any]], dict[str, Any]]): Document/s to + convert to json string. + + Returns: + str: Json string with mongo documents. + """ + + return dumps(docs, json_options=CANONICAL_JSON_OPTIONS) + + +def load_json_file(filepath): + """Load mongo documents from a json file. + + Args: + filepath (str): Path to a json file. + + Returns: + Union[dict[str, Any], list[dict[str, Any]]]: Loaded content from a + json file. + """ + + if not os.path.exists(filepath): + raise ValueError("Path {} was not found".format(filepath)) + + with open(filepath, "r") as stream: + content = stream.read() + return loads("".join(content)) + + +def get_project_database_name(): + """Name of database where project collections are available. + + Returns: + str: Name of database where projects are. + """ + + return os.environ.get("AVALON_DB") or "avalon" + + def _decompose_url(url): """Decompose mongo url to basic components. @@ -210,12 +259,102 @@ class OpenPypeMongoConnection: return mongo_client -def get_project_database(): - db_name = os.environ.get("AVALON_DB") or "avalon" - return OpenPypeMongoConnection.get_mongo_client()[db_name] +# ------ Helper Mongo functions ------ +# Functions can be helpful with custom tools to backup/restore mongo state. +# Not meant as API functionality that should be used in production codebase! +def get_collection_documents(database_name, collection_name, as_json=False): + """Query all documents from a collection. + + Args: + database_name (str): Name of database where to look for collection. + collection_name (str): Name of collection to query. + as_json (Optional[bool]): Output should be a json string. + Default: 'False' + + Returns: + Union[list[dict[str, Any]], str]: Queried documents. + """ + + client = OpenPypeMongoConnection.get_mongo_client() + output = list(client[database_name][collection_name].find({})) + if as_json: + output = documents_to_json(output) + return output -def get_project_connection(project_name): +def store_collection(filepath, database_name, collection_name): + """Store collection documents to a json file. + + Args: + filepath (str): Path to a json file where documents will be stored. + database_name (str): Name of database where to look for collection. + collection_name (str): Name of collection to store. + """ + + # Make sure directory for output file exists + dirpath = os.path.dirname(filepath) + if not os.path.isdir(dirpath): + os.makedirs(dirpath) + + content = get_collection_documents(database_name, collection_name, True) + with open(filepath, "w") as stream: + stream.write(content) + + +def replace_collection_documents(docs, database_name, collection_name): + """Replace all documents in a collection with passed documents. + + Warnings: + All existing documents in collection will be removed if there are any. + + Args: + docs (list[dict[str, Any]]): New documents. + database_name (str): Name of database where to look for collection. + collection_name (str): Name of collection where new documents are + uploaded. + """ + + client = OpenPypeMongoConnection.get_mongo_client() + database = client[database_name] + if collection_name in database.list_collection_names(): + database.drop_collection(collection_name) + col = database[collection_name] + col.insert_many(docs) + + +def restore_collection(filepath, database_name, collection_name): + """Restore/replace collection from a json filepath. + + Warnings: + All existing documents in collection will be removed if there are any. + + Args: + filepath (str): Path to a json with documents. + database_name (str): Name of database where to look for collection. + collection_name (str): Name of collection where new documents are + uploaded. + """ + + docs = load_json_file(filepath) + replace_collection_documents(docs, database_name, collection_name) + + +def get_project_database(database_name=None): + """Database object where project collections are. + + Args: + database_name (Optional[str]): Custom name of database. + + Returns: + pymongo.database.Database: Database where project collections are. + """ + + if not database_name: + database_name = get_project_database_name() + return OpenPypeMongoConnection.get_mongo_client()[database_name] + + +def get_project_connection(project_name, database_name=None): """Direct access to mongo collection. We're trying to avoid using direct access to mongo. This should be used @@ -223,13 +362,83 @@ api calls for that. Args: - project_name(str): Project name for which collection should be + project_name (str): Project name for which collection should be returned. + database_name (Optional[str]): Custom name of database. Returns: - pymongo.Collection: Collection realated to passed project. + pymongo.collection.Collection: Collection related to passed project. """ if not project_name: raise ValueError("Invalid project name {}".format(str(project_name))) - return get_project_database()[project_name] + return get_project_database(database_name)[project_name] + + +def get_project_documents(project_name, database_name=None): + """Query all documents from project collection. + + Args: + project_name (str): Name of project. + database_name (Optional[str]): Name of mongo database where to look for + project. + + Returns: + list[dict[str, Any]]: Documents in project collection. + """ + + if not database_name: + database_name = get_project_database_name() + return get_collection_documents(database_name, project_name) + + +def store_project_documents(project_name, filepath, database_name=None): + """Store project documents to a file as json string. + + Args: + project_name (str): Name of project to store. + filepath (str): Path to a json file where output will be stored. + database_name (Optional[str]): Name of mongo database where to look for + project. + """ + + if not database_name: + database_name = get_project_database_name() + + store_collection(filepath, database_name, project_name) + + +def replace_project_documents(project_name, docs, database_name=None): + """Replace documents in mongo with passed documents. + + Warnings: + Existing project collection is removed if it exists in mongo. + + Args: + project_name (str): Name of project. + docs (list[dict[str, Any]]): Documents to restore. + database_name (Optional[str]): Name of mongo database where project + collection will be created. + """ + + if not database_name: + database_name = get_project_database_name() + replace_collection_documents(docs, database_name, project_name) + + +def restore_project_documents(project_name, filepath, database_name=None): + """Replace documents in mongo with documents loaded from a json file. + + Warnings: + Existing project collection is removed if it exists in mongo. + + Args: + project_name (str): Name of project. + filepath (str): Path to json file with project documents. + database_name (Optional[str]): Name of mongo database where project + collection will be created. + """ + + if not database_name: + database_name = get_project_database_name() + restore_collection(filepath, database_name, project_name) diff --git a/openpype/hooks/pre_add_last_workfile_arg.py b/openpype/hooks/pre_add_last_workfile_arg.py index 2558daef30..c54acbc203 100644 --- a/openpype/hooks/pre_add_last_workfile_arg.py +++ b/openpype/hooks/pre_add_last_workfile_arg.py @@ -25,6 +25,7 @@ class AddLastWorkfileToLaunchArgs(PreLaunchHook): "blender", "photoshop", "tvpaint", + "substancepainter", "aftereffects" ] @@ -42,13 +43,5 @@ class AddLastWorkfileToLaunchArgs(PreLaunchHook): self.log.info("Current context does not have any workfile yet.") return - # Determine whether to open workfile post initialization.
- if self.host_name == "maya": - key = "open_workfile_post_initialization" - if self.data["project_settings"]["maya"][key]: - self.log.debug("Opening workfile post initialization.") - self.data["env"]["OPENPYPE_" + key.upper()] = "1" - return - # Add path to workfile to arguments self.launch_context.launch_args.append(last_workfile) diff --git a/openpype/hooks/pre_host_set_ocio.py b/openpype/hooks/pre_host_set_ocio.py new file mode 100644 index 0000000000..3620d88db6 --- /dev/null +++ b/openpype/hooks/pre_host_set_ocio.py @@ -0,0 +1,37 @@ +from openpype.lib import PreLaunchHook + +from openpype.pipeline.colorspace import get_imageio_config +from openpype.pipeline.template_data import get_template_data + + +class PreLaunchHostSetOCIO(PreLaunchHook): + """Set OCIO environment for the host""" + + order = 0 + app_groups = ["substancepainter"] + + def execute(self): + """Hook entry method.""" + + anatomy_data = get_template_data( + project_doc=self.data["project_doc"], + asset_doc=self.data["asset_doc"], + task_name=self.data["task_name"], + host_name=self.host_name, + system_settings=self.data["system_settings"] + ) + + ocio_config = get_imageio_config( + project_name=self.data["project_doc"]["name"], + host_name=self.host_name, + project_settings=self.data["project_settings"], + anatomy_data=anatomy_data, + anatomy=self.data["anatomy"] + ) + + if ocio_config: + ocio_path = ocio_config["path"] + self.log.info(f"Setting OCIO config path: {ocio_path}") + self.launch_context.env["OCIO"] = ocio_path + else: + self.log.debug("OCIO not set or enabled") diff --git a/openpype/hosts/aftereffects/plugins/create/create_render.py b/openpype/hosts/aftereffects/plugins/create/create_render.py index c20b0ec51b..171d7053ce 100644 --- a/openpype/hosts/aftereffects/plugins/create/create_render.py +++ b/openpype/hosts/aftereffects/plugins/create/create_render.py @@ -26,12 +26,9 @@ class RenderCreator(Creator): create_allow_context_change = True - def __init__(self, project_settings, *args, **kwargs): - super(RenderCreator, self).__init__(project_settings, *args, **kwargs) - self._default_variants = (project_settings["aftereffects"] - ["create"] - ["RenderCreator"] - ["defaults"]) + # Settings + default_variants = [] + mark_for_review = True def create(self, subset_name_from_ui, data, pre_create_data): stub = api.get_stub() # only after After Effects is up @@ -82,28 +79,40 @@ class RenderCreator(Creator): use_farm = pre_create_data["farm"] new_instance.creator_attributes["farm"] = use_farm + review = pre_create_data["mark_for_review"] + new_instance.creator_attributes["mark_for_review"] = review + api.get_stub().imprint(new_instance.id, new_instance.data_to_store()) self._add_instance_to_context(new_instance) stub.rename_item(comp.id, subset_name) - def get_default_variants(self): - return self._default_variants - - def get_instance_attr_defs(self): - return [BoolDef("farm", label="Render on farm")] - def get_pre_create_attr_defs(self): output = [ BoolDef("use_selection", default=True, label="Use selection"), BoolDef("use_composition_name", label="Use composition name in subset"), UISeparatorDef(), - BoolDef("farm", label="Render on farm") + BoolDef("farm", label="Render on farm"), + BoolDef( + "mark_for_review", + label="Review", + default=self.mark_for_review + ) ] return output + def get_instance_attr_defs(self): + return [ + BoolDef("farm", label="Render on farm"), + BoolDef( + "mark_for_review", + label="Review", + default=False + ) + ] + def get_icon(self): return resources.get_openpype_splash_filepath() @@ 
-143,6 +152,13 @@ class RenderCreator(Creator): api.get_stub().rename_item(comp_id, new_comp_name) + def apply_settings(self, project_settings, system_settings): + plugin_settings = ( + project_settings["aftereffects"]["create"]["RenderCreator"] + ) + + self.mark_for_review = plugin_settings["mark_for_review"] + def get_detail_description(self): return """Creator for Render instances @@ -201,4 +217,7 @@ class RenderCreator(Creator): instance_data["creator_attributes"] = {"farm": is_old_farm} instance_data["family"] = self.family + if instance_data["creator_attributes"].get("mark_for_review") is None: + instance_data["creator_attributes"]["mark_for_review"] = True + return instance_data diff --git a/openpype/hosts/aftereffects/plugins/publish/collect_render.py b/openpype/hosts/aftereffects/plugins/publish/collect_render.py index 6153a426cf..b01b707246 100644 --- a/openpype/hosts/aftereffects/plugins/publish/collect_render.py +++ b/openpype/hosts/aftereffects/plugins/publish/collect_render.py @@ -88,10 +88,11 @@ class CollectAERender(publish.AbstractCollectRender): raise ValueError("No file extension set in Render Queue") render_item = render_q[0] + instance_families = inst.data.get("families", []) subset_name = inst.data["subset"] instance = AERenderInstance( family="render", - families=inst.data.get("families", []), + families=instance_families, version=version, time="", source=current_file, @@ -109,6 +110,7 @@ class CollectAERender(publish.AbstractCollectRender): tileRendering=False, tilesX=0, tilesY=0, + review="review" in instance_families, frameStart=frame_start, frameEnd=frame_end, frameStep=1, @@ -139,6 +141,9 @@ class CollectAERender(publish.AbstractCollectRender): instance.toBeRenderedOn = "deadline" instance.renderer = "aerender" instance.farm = True # to skip integrate + if "review" in instance.families: + # to skip ExtractReview locally + instance.families.remove("review") instances.append(instance) instances_to_remove.append(inst) @@ -218,15 +223,4 @@ class CollectAERender(publish.AbstractCollectRender): if fam not in instance.families: instance.families.append(fam) - settings = get_project_settings(os.getenv("AVALON_PROJECT")) - reviewable_subset_filter = (settings["deadline"] - ["publish"] - ["ProcessSubmittedJobOnFarm"] - ["aov_filter"].get(self.hosts[0])) - for aov_pattern in reviewable_subset_filter: - if re.match(aov_pattern, instance.subset): - instance.families.append("review") - instance.review = True - break - return instance diff --git a/openpype/hosts/aftereffects/plugins/publish/collect_review.py b/openpype/hosts/aftereffects/plugins/publish/collect_review.py new file mode 100644 index 0000000000..a933b9fed2 --- /dev/null +++ b/openpype/hosts/aftereffects/plugins/publish/collect_review.py @@ -0,0 +1,25 @@ +""" +Requires: + None + +Provides: + instance -> family ("review") +""" +import pyblish.api + + +class CollectReview(pyblish.api.ContextPlugin): + """Add review to families if instance created with 'mark_for_review' flag + """ + label = "Collect Review" + hosts = ["aftereffects"] + order = pyblish.api.CollectorOrder + 0.1 + + def process(self, context): + for instance in context: + creator_attributes = instance.data.get("creator_attributes") or {} + if ( + creator_attributes.get("mark_for_review") + and "review" not in instance.data["families"] + ): + instance.data["families"].append("review") diff --git a/openpype/hosts/aftereffects/plugins/publish/collect_workfile.py b/openpype/hosts/aftereffects/plugins/publish/collect_workfile.py index 3c5013b3bd..c21c3623c3 
100644 --- a/openpype/hosts/aftereffects/plugins/publish/collect_workfile.py +++ b/openpype/hosts/aftereffects/plugins/publish/collect_workfile.py @@ -53,10 +53,10 @@ class CollectWorkfile(pyblish.api.ContextPlugin): "active": True, "asset": asset_entity["name"], "task": task, - "frameStart": asset_entity["data"]["frameStart"], - "frameEnd": asset_entity["data"]["frameEnd"], - "handleStart": asset_entity["data"]["handleStart"], - "handleEnd": asset_entity["data"]["handleEnd"], + "frameStart": context.data['frameStart'], + "frameEnd": context.data['frameEnd'], + "handleStart": context.data['handleStart'], + "handleEnd": context.data['handleEnd'], "fps": asset_entity["data"]["fps"], "resolutionWidth": asset_entity["data"].get( "resolutionWidth", diff --git a/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py b/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py index d535329eb4..c70aa41dbe 100644 --- a/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py +++ b/openpype/hosts/aftereffects/plugins/publish/extract_local_render.py @@ -66,33 +66,9 @@ class ExtractLocalRender(publish.Extractor): first_repre = not representations if instance.data["review"] and first_repre: repre_data["tags"] = ["review"] + thumbnail_path = os.path.join(staging_dir, files[0]) + instance.data["thumbnailSource"] = thumbnail_path representations.append(repre_data) instance.data["representations"] = representations - - ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") - # Generate thumbnail. - thumbnail_path = os.path.join(staging_dir, "thumbnail.jpg") - - args = [ - ffmpeg_path, "-y", - "-i", first_file_path, - "-vf", "scale=300:-1", - "-vframes", "1", - thumbnail_path - ] - self.log.debug("Thumbnail args:: {}".format(args)) - try: - output = run_subprocess(args) - except TypeError: - self.log.warning("Error in creating thumbnail") - six.reraise(*sys.exc_info()) - - instance.data["representations"].append({ - "name": "thumbnail", - "ext": "jpg", - "files": os.path.basename(thumbnail_path), - "stagingDir": staging_dir, - "tags": ["thumbnail"] - }) diff --git a/openpype/hosts/fusion/plugins/create/create_saver.py b/openpype/hosts/fusion/plugins/create/create_saver.py index 56085b0a06..cedc4029fa 100644 --- a/openpype/hosts/fusion/plugins/create/create_saver.py +++ b/openpype/hosts/fusion/plugins/create/create_saver.py @@ -1,7 +1,5 @@ import os -import qtawesome - from openpype.hosts.fusion.api import ( get_current_comp, comp_lock_and_undo_chunk, @@ -28,6 +26,7 @@ class CreateSaver(Creator): family = "render" default_variants = ["Main", "Mask"] description = "Fusion Saver to generate image sequence" + icon = "fa5.eye" instance_attributes = ["reviewable"] @@ -89,9 +88,6 @@ class CreateSaver(Creator): self._add_instance_to_context(created_instance) - def get_icon(self): - return qtawesome.icon("fa.eye", color="white") - def update_instances(self, update_list): for created_inst, _changes in update_list: new_data = created_inst.data_to_store() diff --git a/openpype/hosts/fusion/plugins/create/create_workfile.py b/openpype/hosts/fusion/plugins/create/create_workfile.py index 0bb3a0d3d4..40721ea88a 100644 --- a/openpype/hosts/fusion/plugins/create/create_workfile.py +++ b/openpype/hosts/fusion/plugins/create/create_workfile.py @@ -1,5 +1,3 @@ -import qtawesome - from openpype.hosts.fusion.api import ( get_current_comp ) @@ -15,6 +13,7 @@ class FusionWorkfileCreator(AutoCreator): identifier = "workfile" family = "workfile" label = "Workfile" + icon = "fa5.file" default_variant = "Main" 
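The Fusion creator changes above and below follow one pattern: instead of overriding `get_icon()` and building a QtAwesome icon inside the plug-in, a creator now only declares an icon name string as a class attribute and leaves resolving it to the publisher UI. A minimal sketch of the convention; the class name is illustrative and `AutoCreator` is assumed to be importable from `openpype.pipeline.create` as in the host integrations:

    from openpype.pipeline.create import AutoCreator

    class ExampleWorkfileCreator(AutoCreator):
        identifier = "workfile"
        family = "workfile"
        label = "Workfile"
        # Icon declared as a name string (here a Font Awesome 5 name);
        # the publisher UI resolves it, so host plug-ins no longer need
        # to import Qt libraries just for icons.
        icon = "fa5.file"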
@@ -104,6 +103,3 @@ class FusionWorkfileCreator(AutoCreator): existing_instance["asset"] = asset_name existing_instance["task"] = task_name existing_instance["subset"] = subset_name - - def get_icon(self): - return qtawesome.icon("fa.file-o", color="white") diff --git a/openpype/hosts/harmony/api/lib.py b/openpype/hosts/harmony/api/lib.py index 8048705dc8..b009dabb44 100644 --- a/openpype/hosts/harmony/api/lib.py +++ b/openpype/hosts/harmony/api/lib.py @@ -242,9 +242,15 @@ def launch_zip_file(filepath): print(f"Localizing {filepath}") temp_path = get_local_harmony_path(filepath) + scene_name = os.path.basename(temp_path) + if os.path.exists(os.path.join(temp_path, scene_name)): + # unzipped with duplicated scene_name + temp_path = os.path.join(temp_path, scene_name) + scene_path = os.path.join( - temp_path, os.path.basename(temp_path) + ".xstage" + temp_path, scene_name + ".xstage" ) + unzip = False if os.path.exists(scene_path): # Check remote scene is newer than local. @@ -262,6 +268,10 @@ def launch_zip_file(filepath): with _ZipFile(filepath, "r") as zip_ref: zip_ref.extractall(temp_path) + if os.path.exists(os.path.join(temp_path, scene_name)): + # unzipped with duplicated scene_name + temp_path = os.path.join(temp_path, scene_name) + # Close existing scene. if ProcessContext.pid: os.kill(ProcessContext.pid, signal.SIGTERM) @@ -309,7 +319,7 @@ def launch_zip_file(filepath): ) if not os.path.exists(scene_path): - print("error: cannot determine scene file") + print("error: cannot determine scene file {}".format(scene_path)) ProcessContext.server.stop() return diff --git a/openpype/hosts/houdini/api/action.py b/openpype/hosts/houdini/api/action.py new file mode 100644 index 0000000000..27e8ce55bb --- /dev/null +++ b/openpype/hosts/houdini/api/action.py @@ -0,0 +1,46 @@ +import pyblish.api +import hou + +from openpype.pipeline.publish import get_errored_instances_from_context + + +class SelectInvalidAction(pyblish.api.Action): + """Select invalid nodes in Houdini when plug-in failed. + + To retrieve the invalid nodes this assumes a static `get_invalid()` + method is available on the plugin. + + """ + label = "Select invalid" + on = "failed" # This action is only available on a failed plug-in + icon = "search" # Icon from Font Awesome + + def process(self, context, plugin): + + errored_instances = get_errored_instances_from_context(context) + + # Apply pyblish.logic to get the instances for the plug-in + instances = pyblish.api.instances_by_plugin(errored_instances, plugin) + + # Get the invalid nodes for the plug-ins + self.log.info("Finding invalid nodes..") + invalid = list() + for instance in instances: + invalid_nodes = plugin.get_invalid(instance) + if invalid_nodes: + if isinstance(invalid_nodes, (list, tuple)): + invalid.extend(invalid_nodes) + else: + self.log.warning("Plug-in returned invalid results, " + "but they are not selectable nodes.") + + hou.clearAllSelected() + if invalid: + self.log.info("Selecting invalid nodes: {}".format( + ", ".join(node.path() for node in invalid) )) + for node in invalid: + node.setSelected(True) + node.setCurrent(True) + else: + self.log.info("No invalid nodes found.") diff --git a/openpype/hosts/houdini/api/creator_node_shelves.py b/openpype/hosts/houdini/api/creator_node_shelves.py new file mode 100644 index 0000000000..7c6122cffe --- /dev/null +++ b/openpype/hosts/houdini/api/creator_node_shelves.py @@ -0,0 +1,233 @@ +"""Library to register OpenPype Creators for Houdini TAB node search menu.
+ +This can be used to install custom houdini tools for the TAB search +menu which will trigger a publish instance to be created interactively. + +The Creators are automatically registered on launch of Houdini through the +Houdini integration's `host.install()` method. + +""" +import contextlib +import tempfile +import logging +import os + +from openpype.client import get_asset_by_name +from openpype.pipeline import registered_host +from openpype.pipeline.create import CreateContext +from openpype.resources import get_openpype_icon_filepath + +import hou +import stateutils +import soptoolutils +import loptoolutils +import cop2toolutils + + +log = logging.getLogger(__name__) + +CATEGORY_GENERIC_TOOL = { + hou.sopNodeTypeCategory(): soptoolutils.genericTool, + hou.cop2NodeTypeCategory(): cop2toolutils.genericTool, + hou.lopNodeTypeCategory(): loptoolutils.genericTool +} + + +CREATE_SCRIPT = """ +from openpype.hosts.houdini.api.creator_node_shelves import create_interactive +create_interactive("{identifier}", **kwargs) +""" + + +def create_interactive(creator_identifier, **kwargs): + """Create a Creator using its identifier interactively. + + This is used by the generated shelf tools as callback when a user selects + the creator from the node tab search menu. + + The `kwargs` should be what Houdini passes to the tool create scripts + context. For more information see: + https://www.sidefx.com/docs/houdini/hom/tool_script.html#arguments + + Args: + creator_identifier (str): The creator identifier of the Creator plugin + to create. + + Return: + list: The created instances. + + """ + + # TODO Use Qt instead + result, variant = hou.ui.readInput('Define variant name', + buttons=("Ok", "Cancel"), + initial_contents='Main', + title="Define variant", + help="Set the variant for the " + "publish instance", + close_choice=1) + if result == 1: + # User interrupted + return + variant = variant.strip() + if not variant: + raise RuntimeError("Empty variant value entered.") + + host = registered_host() + context = CreateContext(host) + creator = context.manual_creators.get(creator_identifier) + if not creator: + raise RuntimeError("Invalid creator identifier: " + "{}".format(creator_identifier)) + + # TODO: Once more elaborate unique create behavior should exist per Creator + # instead of per network editor area then we should move this from here + # to a method on the Creators for which this could be the default + # implementation. 
+ pane = stateutils.activePane(kwargs) + if isinstance(pane, hou.NetworkEditor): + pwd = pane.pwd() + subset_name = creator.get_subset_name( + variant=variant, + task_name=context.get_current_task_name(), + asset_doc=get_asset_by_name( + project_name=context.get_current_project_name(), + asset_name=context.get_current_asset_name() + ), + project_name=context.get_current_project_name(), + host_name=context.host_name + ) + + tool_fn = CATEGORY_GENERIC_TOOL.get(pwd.childTypeCategory()) + if tool_fn is not None: + out_null = tool_fn(kwargs, "null") + out_null.setName("OUT_{}".format(subset_name), unique_name=True) + + before = context.instances_by_id.copy() + + # Create the instance + context.create( + creator_identifier=creator_identifier, + variant=variant, + pre_create_data={"use_selection": True} + ) + + # For convenience we set the new node as current since that's much more + # familiar to the artist when creating a node interactively + # TODO Allow to disable auto-select in studio settings or user preferences + after = context.instances_by_id + new = set(after) - set(before) + if new: + # Select the new instance + for instance_id in new: + instance = after[instance_id] + node = hou.node(instance.get("instance_node")) + node.setCurrent(True) + + return list(new) + + +@contextlib.contextmanager +def shelves_change_block(): + """Write shelf changes at the end of the context.""" + hou.shelves.beginChangeBlock() + try: + yield + finally: + hou.shelves.endChangeBlock() + + +def install(): + """Install the Creator plug-ins to show in Houdini's TAB node search menu. + + This function is re-entrant and can be called again to reinstall and + update the node definitions. For example during development it can be + useful to call it manually: + >>> from openpype.hosts.houdini.api.creator_node_shelves import install + >>> install() + + Returns: + list: List of `hou.Tool` instances + + """ + + host = registered_host() + + # Store the filepath on the host + # TODO: Define a less hacky static shelf path for current houdini session + filepath_attr = "_creator_node_shelf_filepath" + filepath = getattr(host, filepath_attr, None) + if filepath is None: + f = tempfile.NamedTemporaryFile(prefix="houdini_creator_nodes_", + suffix=".shelf", + delete=False) + f.close() + filepath = f.name + setattr(host, filepath_attr, filepath) + elif os.path.exists(filepath): + # Remove any existing shelf file so that we can completely regenerate + # and update the tools file if creator identifiers change + os.remove(filepath) + + icon = get_openpype_icon_filepath() + + # Create context only to get creator plugins, so we don't reset and only + # populate what we need to retrieve the list of creator plugins + create_context = CreateContext(host, reset=False) + create_context.reset_current_context() + create_context._reset_creator_plugins() + + log.debug("Writing OpenPype Creator nodes to shelf: {}".format(filepath)) + tools = [] + + with shelves_change_block(): + for identifier, creator in create_context.manual_creators.items(): + + # Allow the creator plug-in itself to override the categories + # for where they are shown with `Creator.get_network_categories()` + if not hasattr(creator, "get_network_categories"): + log.debug(("Creator {} has no `get_network_categories` method " + "and will not be added to TAB search.").format(identifier)) + continue + + network_categories = creator.get_network_categories() + if not network_categories: + continue + + key = "openpype_create.{}".format(identifier) + log.debug(f"Registering {key}") + script =
CREATE_SCRIPT.format(identifier=identifier) + data = { + "script": script, + "language": hou.scriptLanguage.Python, + "icon": icon, + "help": "Create OpenPype publish instance for {}".format( + creator.label + ), + "help_url": None, + "network_categories": network_categories, + "viewer_categories": [], + "cop_viewer_categories": [], + "network_op_type": None, + "viewer_op_type": None, + "locations": ["OpenPype"] + } + label = "Create {}".format(creator.label) + tool = hou.shelves.tool(key) + if tool: + tool.setData(**data) + tool.setLabel(label) + else: + tool = hou.shelves.newTool( + file_path=filepath, + name=key, + label=label, + **data + ) + + tools.append(tool) + + # Ensure the shelf is reloaded + hou.shelves.loadFile(filepath) + + return tools diff --git a/openpype/hosts/houdini/api/lib.py b/openpype/hosts/houdini/api/lib.py index f19dc64992..2e58f3dd98 100644 --- a/openpype/hosts/houdini/api/lib.py +++ b/openpype/hosts/houdini/api/lib.py @@ -127,6 +127,8 @@ def get_output_parameter(node): return node.parm("filename") elif node_type == "comp": return node.parm("copoutput") + elif node_type == "opengl": + return node.parm("picture") elif node_type == "arnold": if node.evalParm("ar_ass_export_enable"): return node.parm("ar_ass_file") diff --git a/openpype/hosts/houdini/api/pipeline.py b/openpype/hosts/houdini/api/pipeline.py index 45e2f8f87f..61274e6028 100644 --- a/openpype/hosts/houdini/api/pipeline.py +++ b/openpype/hosts/houdini/api/pipeline.py @@ -18,7 +18,7 @@ from openpype.pipeline import ( ) from openpype.pipeline.load import any_outdated_containers from openpype.hosts.houdini import HOUDINI_HOST_DIR -from openpype.hosts.houdini.api import lib, shelves +from openpype.hosts.houdini.api import lib, shelves, creator_node_shelves from openpype.lib import ( register_event_callback, @@ -83,6 +83,10 @@ class HoudiniHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost): _set_context_settings() shelves.generate_shelves() + if not IS_HEADLESS: + import hdefereval # noqa, hdefereval is only available in ui mode + hdefereval.executeDeferred(creator_node_shelves.install) + def has_unsaved_changes(self): return hou.hipFile.hasUnsavedChanges() diff --git a/openpype/hosts/houdini/api/plugin.py b/openpype/hosts/houdini/api/plugin.py index 340a7f0770..1e7eaa7e22 100644 --- a/openpype/hosts/houdini/api/plugin.py +++ b/openpype/hosts/houdini/api/plugin.py @@ -276,3 +276,19 @@ class HoudiniCreator(NewCreator, HoudiniCreatorBase): color = hou.Color((0.616, 0.871, 0.769)) node.setUserData('nodeshape', shape) node.setColor(color) + + def get_network_categories(self): + """Return in which network view type this creator should show. + + The node type categories returned here will be used to define where + the creator will show up in the TAB search for nodes in Houdini's + Network View. + + This can be overridden in inherited classes to define where that + particular Creator should be visible in the TAB search. 
+ + Returns: + list: List of houdini node type categories + + """ + return [hou.ropNodeTypeCategory()] diff --git a/openpype/hosts/houdini/plugins/create/create_alembic_camera.py b/openpype/hosts/houdini/plugins/create/create_alembic_camera.py index fec64eb4a1..8c8a5e9eed 100644 --- a/openpype/hosts/houdini/plugins/create/create_alembic_camera.py +++ b/openpype/hosts/houdini/plugins/create/create_alembic_camera.py @@ -3,6 +3,8 @@ from openpype.hosts.houdini.api import plugin from openpype.pipeline import CreatedInstance, CreatorError +import hou + class CreateAlembicCamera(plugin.HoudiniCreator): """Single baked camera from Alembic ROP.""" @@ -47,3 +49,9 @@ class CreateAlembicCamera(plugin.HoudiniCreator): self.lock_parameters(instance_node, to_lock) instance_node.parm("trange").set(1) + + def get_network_categories(self): + return [ + hou.ropNodeTypeCategory(), + hou.objNodeTypeCategory() + ] diff --git a/openpype/hosts/houdini/plugins/create/create_composite.py b/openpype/hosts/houdini/plugins/create/create_composite.py index 45af2b0630..9d4f7969bb 100644 --- a/openpype/hosts/houdini/plugins/create/create_composite.py +++ b/openpype/hosts/houdini/plugins/create/create_composite.py @@ -1,7 +1,9 @@ # -*- coding: utf-8 -*- """Creator plugin for creating composite sequences.""" from openpype.hosts.houdini.api import plugin -from openpype.pipeline import CreatedInstance +from openpype.pipeline import CreatedInstance, CreatorError + +import hou class CreateCompositeSequence(plugin.HoudiniCreator): @@ -35,8 +37,20 @@ class CreateCompositeSequence(plugin.HoudiniCreator): "copoutput": filepath } + if self.selected_nodes: + if len(self.selected_nodes) > 1: + raise CreatorError("More than one item selected.") + path = self.selected_nodes[0].path() + parms["coppath"] = path + instance_node.setParms(parms) # Lock any parameters in this list to_lock = ["prim_to_detail_pattern"] self.lock_parameters(instance_node, to_lock) + + def get_network_categories(self): + return [ + hou.ropNodeTypeCategory(), + hou.cop2NodeTypeCategory() + ] diff --git a/openpype/hosts/houdini/plugins/create/create_pointcache.py b/openpype/hosts/houdini/plugins/create/create_pointcache.py index 6b6b277422..df74070fee 100644 --- a/openpype/hosts/houdini/plugins/create/create_pointcache.py +++ b/openpype/hosts/houdini/plugins/create/create_pointcache.py @@ -3,6 +3,8 @@ from openpype.hosts.houdini.api import plugin from openpype.pipeline import CreatedInstance +import hou + class CreatePointCache(plugin.HoudiniCreator): """Alembic ROP to pointcache""" @@ -49,3 +51,9 @@ class CreatePointCache(plugin.HoudiniCreator): # Lock any parameters in this list to_lock = ["prim_to_detail_pattern"] self.lock_parameters(instance_node, to_lock) + + def get_network_categories(self): + return [ + hou.ropNodeTypeCategory(), + hou.sopNodeTypeCategory() + ] diff --git a/openpype/hosts/houdini/plugins/create/create_review.py b/openpype/hosts/houdini/plugins/create/create_review.py new file mode 100644 index 0000000000..ab06b30c35 --- /dev/null +++ b/openpype/hosts/houdini/plugins/create/create_review.py @@ -0,0 +1,125 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for creating openGL reviews.""" +from openpype.hosts.houdini.api import plugin +from openpype.lib import EnumDef, BoolDef, NumberDef + + +class CreateReview(plugin.HoudiniCreator): + """Review with OpenGL ROP""" + + identifier = "io.openpype.creators.houdini.review" + label = "Review" + family = "review" + icon = "video-camera" + + def create(self, subset_name, instance_data, 
pre_create_data): + import hou + + instance_data.pop("active", None) + instance_data.update({"node_type": "opengl"}) + instance_data["imageFormat"] = pre_create_data.get("imageFormat") + instance_data["keepImages"] = pre_create_data.get("keepImages") + + instance = super(CreateReview, self).create( + subset_name, + instance_data, + pre_create_data) + + instance_node = hou.node(instance.get("instance_node")) + + frame_range = hou.playbar.frameRange() + + filepath = "{root}/{subset}/{subset}.$F4.{ext}".format( + root=hou.text.expandString("$HIP/pyblish"), + subset="`chs(\"subset\")`", # keep dynamic link to subset + ext=pre_create_data.get("imageFormat") or "png" + ) + + parms = { + "picture": filepath, + + "trange": 1, + + # Unlike many other ROP nodes the opengl node does not default + # to expression of $FSTART and $FEND so we preserve that behavior + # but do set the range to the frame range of the playbar + "f1": frame_range[0], + "f2": frame_range[1], + } + + override_resolution = pre_create_data.get("override_resolution") + if override_resolution: + parms.update({ + "tres": override_resolution, + "res1": pre_create_data.get("resx"), + "res2": pre_create_data.get("resy"), + "aspect": pre_create_data.get("aspect"), + }) + + if self.selected_nodes: + # The first camera found in selection we will use as camera + # Other node types we set in force objects + camera = None + force_objects = [] + for node in self.selected_nodes: + path = node.path() + if node.type().name() == "cam": + if camera: + continue + camera = path + else: + force_objects.append(path) + + if not camera: + self.log.warning("No camera found in selection.") + + parms.update({ + "camera": camera or "", + "scenepath": "/obj", + "forceobjects": " ".join(force_objects), + "vobjects": "" # clear candidate objects from '*' value + }) + + instance_node.setParms(parms) + + to_lock = ["id", "family"] + + self.lock_parameters(instance_node, to_lock) + + def get_pre_create_attr_defs(self): + attrs = super(CreateReview, self).get_pre_create_attr_defs() + + image_format_enum = [ + "bmp", "cin", "exr", "jpg", "pic", "pic.gz", "png", + "rad", "rat", "rta", "sgi", "tga", "tif", + ] + + return attrs + [ + BoolDef("keepImages", + label="Keep Image Sequences", + default=False), + EnumDef("imageFormat", + image_format_enum, + default="png", + label="Image Format Options"), + BoolDef("override_resolution", + label="Override resolution", + tooltip="When disabled the resolution set on the camera " + "is used instead.", + default=True), + NumberDef("resx", + label="Resolution Width", + default=1280, + minimum=2, + decimals=0), + NumberDef("resy", + label="Resolution Height", + default=720, + minimum=2, + decimals=0), + NumberDef("aspect", + label="Aspect Ratio", + default=1.0, + minimum=0.0001, + decimals=3) + ] diff --git a/openpype/hosts/houdini/plugins/create/create_usd.py b/openpype/hosts/houdini/plugins/create/create_usd.py index 51ed8237c5..e05d254863 100644 --- a/openpype/hosts/houdini/plugins/create/create_usd.py +++ b/openpype/hosts/houdini/plugins/create/create_usd.py @@ -3,6 +3,8 @@ from openpype.hosts.houdini.api import plugin from openpype.pipeline import CreatedInstance +import hou + class CreateUSD(plugin.HoudiniCreator): """Universal Scene Description""" @@ -13,7 +15,6 @@ class CreateUSD(plugin.HoudiniCreator): enabled = False def create(self, subset_name, instance_data, pre_create_data): - import hou # noqa instance_data.pop("active", None) instance_data.update({"node_type": "usd"}) @@ -43,3 +44,9 @@ class
CreateUSD(plugin.HoudiniCreator): "id", ] self.lock_parameters(instance_node, to_lock) + + def get_network_categories(self): + return [ + hou.ropNodeTypeCategory(), + hou.lopNodeTypeCategory() + ] diff --git a/openpype/hosts/houdini/plugins/create/create_vbd_cache.py b/openpype/hosts/houdini/plugins/create/create_vbd_cache.py index 1a5011745f..c015cebd49 100644 --- a/openpype/hosts/houdini/plugins/create/create_vbd_cache.py +++ b/openpype/hosts/houdini/plugins/create/create_vbd_cache.py @@ -3,6 +3,8 @@ from openpype.hosts.houdini.api import plugin from openpype.pipeline import CreatedInstance +import hou + class CreateVDBCache(plugin.HoudiniCreator): """OpenVDB from Geometry ROP""" @@ -34,3 +36,9 @@ class CreateVDBCache(plugin.HoudiniCreator): parms["soppath"] = self.selected_nodes[0].path() instance_node.setParms(parms) + + def get_network_categories(self): + return [ + hou.ropNodeTypeCategory(), + hou.sopNodeTypeCategory() + ] diff --git a/openpype/hosts/houdini/plugins/create/create_workfile.py b/openpype/hosts/houdini/plugins/create/create_workfile.py index 0c6d840810..1a8537adcd 100644 --- a/openpype/hosts/houdini/plugins/create/create_workfile.py +++ b/openpype/hosts/houdini/plugins/create/create_workfile.py @@ -14,7 +14,7 @@ class CreateWorkfile(plugin.HoudiniCreatorBase, AutoCreator): identifier = "io.openpype.creators.houdini.workfile" label = "Workfile" family = "workfile" - icon = "document" + icon = "fa5.file" default_variant = "Main" diff --git a/openpype/hosts/houdini/plugins/load/load_alembic.py b/openpype/hosts/houdini/plugins/load/load_alembic.py index 96e666b255..c6f0ebf2f9 100644 --- a/openpype/hosts/houdini/plugins/load/load_alembic.py +++ b/openpype/hosts/houdini/plugins/load/load_alembic.py @@ -104,3 +104,6 @@ class AbcLoader(load.LoaderPlugin): node = container["node"] node.destroy() + + def switch(self, container, representation): + self.update(container, representation) diff --git a/openpype/hosts/houdini/plugins/load/load_alembic_archive.py b/openpype/hosts/houdini/plugins/load/load_alembic_archive.py index b960073e12..47d2e1b896 100644 --- a/openpype/hosts/houdini/plugins/load/load_alembic_archive.py +++ b/openpype/hosts/houdini/plugins/load/load_alembic_archive.py @@ -73,3 +73,6 @@ class AbcArchiveLoader(load.LoaderPlugin): node = container["node"] node.destroy() + + def switch(self, container, representation): + self.update(container, representation) diff --git a/openpype/hosts/houdini/plugins/load/load_bgeo.py b/openpype/hosts/houdini/plugins/load/load_bgeo.py index b298d423bc..86e8675c02 100644 --- a/openpype/hosts/houdini/plugins/load/load_bgeo.py +++ b/openpype/hosts/houdini/plugins/load/load_bgeo.py @@ -106,3 +106,6 @@ class BgeoLoader(load.LoaderPlugin): node = container["node"] node.destroy() + + def switch(self, container, representation): + self.update(container, representation) diff --git a/openpype/hosts/houdini/plugins/load/load_camera.py b/openpype/hosts/houdini/plugins/load/load_camera.py index 059ad11a76..6365508f4e 100644 --- a/openpype/hosts/houdini/plugins/load/load_camera.py +++ b/openpype/hosts/houdini/plugins/load/load_camera.py @@ -192,3 +192,6 @@ class CameraLoader(load.LoaderPlugin): new_node.moveToGoodPosition() return new_node + + def switch(self, container, representation): + self.update(container, representation) diff --git a/openpype/hosts/houdini/plugins/load/load_image.py b/openpype/hosts/houdini/plugins/load/load_image.py index c78798e58a..26bc569c53 100644 --- a/openpype/hosts/houdini/plugins/load/load_image.py +++ 
b/openpype/hosts/houdini/plugins/load/load_image.py @@ -125,3 +125,6 @@ class ImageLoader(load.LoaderPlugin): prefix, padding, suffix = first_fname.rsplit(".", 2) fname = ".".join([prefix, "$F{}".format(len(padding)), suffix]) return os.path.join(root, fname).replace("\\", "/") + + def switch(self, container, representation): + self.update(container, representation) diff --git a/openpype/hosts/houdini/plugins/load/load_usd_layer.py b/openpype/hosts/houdini/plugins/load/load_usd_layer.py index 2e5079925b..1f0ec25128 100644 --- a/openpype/hosts/houdini/plugins/load/load_usd_layer.py +++ b/openpype/hosts/houdini/plugins/load/load_usd_layer.py @@ -79,3 +79,6 @@ class USDSublayerLoader(load.LoaderPlugin): node = container["node"] node.destroy() + + def switch(self, container, representation): + self.update(container, representation) diff --git a/openpype/hosts/houdini/plugins/load/load_usd_reference.py b/openpype/hosts/houdini/plugins/load/load_usd_reference.py index c4371db39b..f66d05395e 100644 --- a/openpype/hosts/houdini/plugins/load/load_usd_reference.py +++ b/openpype/hosts/houdini/plugins/load/load_usd_reference.py @@ -79,3 +79,6 @@ class USDReferenceLoader(load.LoaderPlugin): node = container["node"] node.destroy() + + def switch(self, container, representation): + self.update(container, representation) diff --git a/openpype/hosts/houdini/plugins/load/load_vdb.py b/openpype/hosts/houdini/plugins/load/load_vdb.py index c558a7a0e7..87900502c5 100644 --- a/openpype/hosts/houdini/plugins/load/load_vdb.py +++ b/openpype/hosts/houdini/plugins/load/load_vdb.py @@ -102,3 +102,6 @@ class VdbLoader(load.LoaderPlugin): node = container["node"] node.destroy() + + def switch(self, container, representation): + self.update(container, representation) diff --git a/openpype/hosts/houdini/plugins/publish/collect_current_file.py b/openpype/hosts/houdini/plugins/publish/collect_current_file.py index caf679f98b..7b55778803 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_current_file.py +++ b/openpype/hosts/houdini/plugins/publish/collect_current_file.py @@ -4,15 +4,14 @@ import hou import pyblish.api -class CollectHoudiniCurrentFile(pyblish.api.InstancePlugin): +class CollectHoudiniCurrentFile(pyblish.api.ContextPlugin): """Inject the current working file into context""" - order = pyblish.api.CollectorOrder - 0.01 + order = pyblish.api.CollectorOrder - 0.1 label = "Houdini Current File" hosts = ["houdini"] - families = ["workfile"] - def process(self, instance): + def process(self, context): """Inject the current working file""" current_file = hou.hipFile.path() @@ -34,26 +33,5 @@ class CollectHoudiniCurrentFile(pyblish.api.InstancePlugin): "saved correctly." 
) - instance.context.data["currentFile"] = current_file - - folder, file = os.path.split(current_file) - filename, ext = os.path.splitext(file) - - instance.data.update({ - "setMembers": [current_file], - "frameStart": instance.context.data['frameStart'], - "frameEnd": instance.context.data['frameEnd'], - "handleStart": instance.context.data['handleStart'], - "handleEnd": instance.context.data['handleEnd'] - }) - - instance.data['representations'] = [{ - 'name': ext.lstrip("."), - 'ext': ext.lstrip("."), - 'files': file, - "stagingDir": folder, - }] - - self.log.info('Collected instance: {}'.format(file)) - self.log.info('Scene path: {}'.format(current_file)) - self.log.info('staging Dir: {}'.format(folder)) + context.data["currentFile"] = current_file + self.log.info('Current workfile path: {}'.format(current_file)) diff --git a/openpype/hosts/houdini/plugins/publish/collect_frames.py b/openpype/hosts/houdini/plugins/publish/collect_frames.py index 531cdf1249..6c695f64e9 100644 --- a/openpype/hosts/houdini/plugins/publish/collect_frames.py +++ b/openpype/hosts/houdini/plugins/publish/collect_frames.py @@ -14,7 +14,7 @@ class CollectFrames(pyblish.api.InstancePlugin): order = pyblish.api.CollectorOrder label = "Collect Frames" - families = ["vdbcache", "imagesequence", "ass", "redshiftproxy"] + families = ["vdbcache", "imagesequence", "ass", "redshiftproxy", "review"] def process(self, instance): diff --git a/openpype/hosts/houdini/plugins/publish/collect_review_data.py b/openpype/hosts/houdini/plugins/publish/collect_review_data.py new file mode 100644 index 0000000000..3efb75e66c --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/collect_review_data.py @@ -0,0 +1,55 @@ +import hou +import pyblish.api + + +class CollectHoudiniReviewData(pyblish.api.InstancePlugin): + """Collect Review Data.""" + + label = "Collect Review Data" + order = pyblish.api.CollectorOrder + 0.1 + hosts = ["houdini"] + families = ["review"] + + def process(self, instance): + + # This fixes the burnin having the incorrect start/end timestamps + # because without this it would take it from the context instead + # which isn't the actual frame range that this instance renders. + instance.data["handleStart"] = 0 + instance.data["handleEnd"] = 0 + instance.data["fps"] = instance.context.data["fps"] + + # Enable ftrack functionality + instance.data.setdefault("families", []).append('ftrack') + + # Get the camera from the rop node to collect the focal length + ropnode_path = instance.data["instance_node"] + ropnode = hou.node(ropnode_path) + + camera_path = ropnode.parm("camera").eval() + camera_node = hou.node(camera_path) + if not camera_node: + self.log.warning("No valid camera node found on review node: " + "{}".format(camera_path)) + return + + # Collect focal length. 
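+ # If the focal length is animated we sample it once per frame below + # so the burnin can show the value changing over time.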
+ focal_length_parm = camera_node.parm("focal") + if not focal_length_parm: + self.log.warning("No 'focal' (focal length) parameter found on " + "camera: {}".format(camera_path)) + return + + if focal_length_parm.isTimeDependent(): + start = instance.data["frameStart"] + end = instance.data["frameEnd"] + 1 + focal_length = [ + focal_length_parm.evalAsFloatAtFrame(t) + for t in range(int(start), int(end)) + ] + else: + focal_length = focal_length_parm.evalAsFloat() + + # Store focal length in `burninDataMembers` + burnin_members = instance.data.setdefault("burninDataMembers", {}) + burnin_members["focalLength"] = focal_length diff --git a/openpype/hosts/houdini/plugins/publish/collect_workfile.py b/openpype/hosts/houdini/plugins/publish/collect_workfile.py new file mode 100644 index 0000000000..a6e94ec29e --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/collect_workfile.py @@ -0,0 +1,36 @@ +import os + +import pyblish.api + + +class CollectWorkfile(pyblish.api.InstancePlugin): + """Inject workfile representation into instance""" + + order = pyblish.api.CollectorOrder - 0.01 + label = "Houdini Workfile Data" + hosts = ["houdini"] + families = ["workfile"] + + def process(self, instance): + + current_file = instance.context.data["currentFile"] + folder, file = os.path.split(current_file) + filename, ext = os.path.splitext(file) + + instance.data.update({ + "setMembers": [current_file], + "frameStart": instance.context.data['frameStart'], + "frameEnd": instance.context.data['frameEnd'], + "handleStart": instance.context.data['handleStart'], + "handleEnd": instance.context.data['handleEnd'] + }) + + instance.data['representations'] = [{ + 'name': ext.lstrip("."), + 'ext': ext.lstrip("."), + 'files': file, + "stagingDir": folder, + }] + + self.log.info('Collected instance: {}'.format(file)) + self.log.info('staging Dir: {}'.format(folder)) diff --git a/openpype/hosts/houdini/plugins/publish/extract_opengl.py b/openpype/hosts/houdini/plugins/publish/extract_opengl.py new file mode 100644 index 0000000000..6c36dec5f5 --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/extract_opengl.py @@ -0,0 +1,51 @@ +import os + +import pyblish.api + +from openpype.pipeline import publish +from openpype.hosts.houdini.api.lib import render_rop + +import hou + + +class ExtractOpenGL(publish.Extractor): + + order = pyblish.api.ExtractorOrder - 0.01 + label = "Extract OpenGL" + families = ["review"] + hosts = ["houdini"] + + def process(self, instance): + ropnode = hou.node(instance.data.get("instance_node")) + + output = ropnode.evalParm("picture") + staging_dir = os.path.normpath(os.path.dirname(output)) + instance.data["stagingDir"] = staging_dir + file_name = os.path.basename(output) + + self.log.info("Extracting '%s' to '%s'" % (file_name, + staging_dir)) + + render_rop(ropnode) + + output = instance.data["frames"] + + tags = ["review"] + if not instance.data.get("keepImages"): + tags.append("delete") + + representation = { + "name": instance.data["imageFormat"], + "ext": instance.data["imageFormat"], + "files": output, + "stagingDir": staging_dir, + "frameStart": instance.data["frameStart"], + "frameEnd": instance.data["frameEnd"], + "tags": tags, + "preview": True, + "camera_name": instance.data.get("review_camera") + } + + if "representations" not in instance.data: + instance.data["representations"] = [] + instance.data["representations"].append(representation) diff --git a/openpype/hosts/houdini/plugins/publish/help/validate_vdb_input_node.xml 
b/openpype/hosts/houdini/plugins/publish/help/validate_vdb_input_node.xml deleted file mode 100644 index 0f92560bf7..0000000000 --- a/openpype/hosts/houdini/plugins/publish/help/validate_vdb_input_node.xml +++ /dev/null @@ -1,21 +0,0 @@ - - - -Scene setting - -## Invalid input node - -VDB input must have the same number of VDBs, points, primitives and vertices as output. - - - -### __Detailed Info__ (optional) - -A VDB is an inherited type of Prim, holds the following data: - - Primitives: 1 - - Points: 1 - - Vertices: 1 - - VDBs: 1 - - - \ No newline at end of file diff --git a/openpype/hosts/houdini/plugins/publish/help/validate_vdb_output_node.xml b/openpype/hosts/houdini/plugins/publish/help/validate_vdb_output_node.xml new file mode 100644 index 0000000000..eb83bfffe3 --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/help/validate_vdb_output_node.xml @@ -0,0 +1,28 @@ + + + +Invalid VDB + +## Invalid VDB output + +All primitives of the output geometry must be VDBs, no other primitive +types are allowed. That means that regardless of the amount of VDBs in the +geometry it will have an equal amount of VDBs, points, primitives and +vertices since each VDB primitive is one point, one vertex and one VDB. + +This validation only checks the geometry on the first frame of the export +frame range. + + + + + +### Detailed Info + +ROP node `{rop_path}` is set to export SOP path `{sop_path}`. + +{message} + + + + \ No newline at end of file diff --git a/openpype/hosts/houdini/plugins/publish/validate_scene_review.py b/openpype/hosts/houdini/plugins/publish/validate_scene_review.py new file mode 100644 index 0000000000..a44b7e1597 --- /dev/null +++ b/openpype/hosts/houdini/plugins/publish/validate_scene_review.py @@ -0,0 +1,75 @@ +# -*- coding: utf-8 -*- +import pyblish.api +from openpype.pipeline import PublishValidationError +import hou + + +class ValidateSceneReview(pyblish.api.InstancePlugin): + """Validate some scene settings before publishing the review: + 1. Scene Path + 2. Resolution + """ + + order = pyblish.api.ValidatorOrder + families = ["review"] + hosts = ["houdini"] + label = "Scene Settings for Review" + + def process(self, instance): + + report = [] + instance_node = hou.node(instance.data.get("instance_node")) + + invalid = self.get_invalid_scene_path(instance_node) + if invalid: + report.append(invalid) + + invalid = self.get_invalid_camera_path(instance_node) + if invalid: + report.append(invalid) + + invalid = self.get_invalid_resolution(instance_node) + if invalid: + report.extend(invalid) + + if report: + raise PublishValidationError( + "\n\n".join(report), + title=self.label) + + def get_invalid_scene_path(self, rop_node): + scene_path_parm = rop_node.parm("scenepath") + scene_path_node = scene_path_parm.evalAsNode() + if not scene_path_node: + path = scene_path_parm.evalAsString() + return "Scene path does not exist: '{}'".format(path) + + def get_invalid_camera_path(self, rop_node): + camera_path_parm = rop_node.parm("camera") + camera_node = camera_path_parm.evalAsNode() + path = camera_path_parm.evalAsString() + if not camera_node: + return "Camera path does not exist: '{}'".format(path) + type_name = camera_node.type().name() + if type_name != "cam": + return "Camera path is not a camera: '{}' (type: {})".format( + path, type_name + ) + + def get_invalid_resolution(self, rop_node): + + # The resolution setting is only used when Override Camera Resolution + # is enabled. So we skip validation if it is disabled. 
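+ # 'tres' is the opengl ROP's "Override Camera Resolution" toggle; + # 'res1' and 'res2' hold the override width and height checked below.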
+ override = rop_node.parm("tres").eval() + if not override: + return + + invalid = [] + res_width = rop_node.parm("res1").eval() + res_height = rop_node.parm("res2").eval() + if res_width == 0: + invalid.append("Override Resolution width is set to zero.") + if res_height == 0: + invalid.append("Override Resolution height is set to zero") + + return invalid diff --git a/openpype/hosts/houdini/plugins/publish/validate_vdb_input_node.py b/openpype/hosts/houdini/plugins/publish/validate_vdb_input_node.py deleted file mode 100644 index 1f9ccc9c42..0000000000 --- a/openpype/hosts/houdini/plugins/publish/validate_vdb_input_node.py +++ /dev/null @@ -1,52 +0,0 @@ -# -*- coding: utf-8 -*- -import pyblish.api -from openpype.pipeline import ( - PublishValidationError -) - - -class ValidateVDBInputNode(pyblish.api.InstancePlugin): - """Validate that the node connected to the output node is of type VDB. - - Regardless of the amount of VDBs create the output will need to have an - equal amount of VDBs, points, primitives and vertices - - A VDB is an inherited type of Prim, holds the following data: - - Primitives: 1 - - Points: 1 - - Vertices: 1 - - VDBs: 1 - - """ - - order = pyblish.api.ValidatorOrder + 0.1 - families = ["vdbcache"] - hosts = ["houdini"] - label = "Validate Input Node (VDB)" - - def process(self, instance): - invalid = self.get_invalid(instance) - if invalid: - raise PublishValidationError( - self, - "Node connected to the output node is not of type VDB", - title=self.label - ) - - @classmethod - def get_invalid(cls, instance): - - node = instance.data["output_node"] - - prims = node.geometry().prims() - nr_of_prims = len(prims) - - nr_of_points = len(node.geometry().points()) - if nr_of_points != nr_of_prims: - cls.log.error("The number of primitives and points do not match") - return [instance] - - for prim in prims: - if prim.numVertices() != 1: - cls.log.error("Found primitive with more than 1 vertex!") - return [instance] diff --git a/openpype/hosts/houdini/plugins/publish/validate_vdb_output_node.py b/openpype/hosts/houdini/plugins/publish/validate_vdb_output_node.py index f9f88b3bf9..674782179c 100644 --- a/openpype/hosts/houdini/plugins/publish/validate_vdb_output_node.py +++ b/openpype/hosts/houdini/plugins/publish/validate_vdb_output_node.py @@ -1,14 +1,73 @@ # -*- coding: utf-8 -*- +import contextlib + import pyblish.api import hou -from openpype.pipeline import PublishValidationError + +from openpype.pipeline import PublishXmlValidationError +from openpype.hosts.houdini.api.action import SelectInvalidAction + + +def group_consecutive_numbers(nums): + """ + Args: + nums (list): List of sorted integer numbers. 
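+ + Example (illustrative): + >>> list(group_consecutive_numbers([1, 2, 3, 5, 7, 8, 9])) + ['1-3', '5', '7-9']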
+ + Yields: + str: Group ranges as {start}-{end} if more than one number in the range + else it yields {end} + + """ + start = None + end = None + + def _result(a, b): + if a == b: + return "{}".format(a) + else: + return "{}-{}".format(a, b) + + for num in nums: + if start is None: + start = num + end = num + elif num == end + 1: + end = num + else: + yield _result(start, end) + start = num + end = num + if start is not None: + yield _result(start, end) + + +@contextlib.contextmanager +def update_mode_context(mode): + original = hou.updateModeSetting() + try: + hou.setUpdateMode(mode) + yield + finally: + hou.setUpdateMode(original) + + +def get_geometry_at_frame(sop_node, frame, force=True): + """Return geometry at frame but force a cooked value.""" + with update_mode_context(hou.updateMode.AutoUpdate): + sop_node.cook(force=force, frame_range=(frame, frame)) + return sop_node.geometryAtFrame(frame) class ValidateVDBOutputNode(pyblish.api.InstancePlugin): """Validate that the node connected to the output node is of type VDB. - Regardless of the amount of VDBs create the output will need to have an - equal amount of VDBs, points, primitives and vertices + All primitives of the output geometry must be VDBs, no other primitive + types are allowed. That means that regardless of the amount of VDBs in the + geometry it will have an equal amount of VDBs, points, primitives and + vertices since each VDB primitive is one point, one vertex and one VDB. + + This validation only checks the geometry on the first frame of the export + frame range for optimization purposes. A VDB is an inherited type of Prim, holds the following data: - Primitives: 1 @@ -22,54 +81,95 @@ class ValidateVDBOutputNode(pyblish.api.InstancePlugin): families = ["vdbcache"] hosts = ["houdini"] label = "Validate Output Node (VDB)" + actions = [SelectInvalidAction] def process(self, instance): - invalid = self.get_invalid(instance) - if invalid: - raise PublishValidationError( - "Node connected to the output node is not" " of type VDB!", - title=self.label + invalid_nodes, message = self.get_invalid_with_message(instance) + if invalid_nodes: + + # instance_node is str, but output_node is hou.Node so we convert + output = instance.data.get("output_node") + output_path = output.path() if output else None + + raise PublishXmlValidationError( + self, + "Invalid VDB content: {}".format(message), + formatting_data={ + "message": message, + "rop_path": instance.data.get("instance_node"), + "sop_path": output_path + } ) @classmethod - def get_invalid(cls, instance): + def get_invalid_with_message(cls, instance): - node = instance.data["output_node"] + node = instance.data.get("output_node") if node is None: - cls.log.error( + instance_node = instance.data.get("instance_node") + error = ( "SOP path is not correctly set on " - "ROP node '%s'." % instance.data.get("instance_node") + "ROP node `{}`.".format(instance_node) ) - return [instance] + return [hou.node(instance_node), error] frame = instance.data.get("frameStart", 0) - geometry = node.geometryAtFrame(frame) + geometry = get_geometry_at_frame(node, frame) if geometry is None: # No geometry data on this node, maybe the node hasn't cooked? - cls.log.error( - "SOP node has no geometry data. " - "Is it cooked? %s" % node.path() + error = ( + "SOP node `{}` has no geometry data. 
" + "Was it unable to cook?".format(node.path()) ) - return [node] + return [node, error] - prims = geometry.prims() - nr_of_prims = len(prims) + num_prims = geometry.intrinsicValue("primitivecount") + num_points = geometry.intrinsicValue("pointcount") + if num_prims == 0 and num_points == 0: + # Since we are only checking the first frame it doesn't mean there + # won't be VDB prims in a few frames. As such we'll assume for now + # the user knows what he or she is doing + cls.log.warning( + "SOP node `{}` has no primitives on start frame {}. " + "Validation is skipped and it is assumed elsewhere in the " + "frame range VDB prims and only VDB prims will exist." + "".format(node.path(), int(frame)) + ) + return [None, None] - # All primitives must be hou.VDB - invalid_prim = False - for prim in prims: - if not isinstance(prim, hou.VDB): - cls.log.error("Found non-VDB primitive: %s" % prim) - invalid_prim = True - if invalid_prim: - return [instance] + num_vdb_prims = geometry.countPrimType(hou.primType.VDB) + cls.log.debug("Detected {} VDB primitives".format(num_vdb_prims)) + if num_prims != num_vdb_prims: + # There's at least one primitive that is not a VDB. + # Search them and report them to the artist. + prims = geometry.prims() + invalid_prims = [prim for prim in prims + if not isinstance(prim, hou.VDB)] + if invalid_prims: + # Log prim numbers as consecutive ranges so logging isn't very + # slow for large number of primitives + error = ( + "Found non-VDB primitives for `{}`. " + "Primitive indices {} are not VDB primitives.".format( + node.path(), + ", ".join(group_consecutive_numbers( + prim.number() for prim in invalid_prims + )) + ) + ) + return [node, error] - nr_of_points = len(geometry.points()) - if nr_of_points != nr_of_prims: - cls.log.error("The number of primitives and points do not match") - return [instance] + if num_points != num_vdb_prims: + # We have points unrelated to the VDB primitives. + error = ( + "The number of primitives and points do not match in '{}'. 
" + "This likely means you have unconnected points, which we do " + "not allow in the VDB output.".format(node.path())) + return [node, error] - for prim in prims: - if prim.numVertices() != 1: - cls.log.error("Found primitive with more than 1 vertex!") - return [instance] + return [None, None] + + @classmethod + def get_invalid(cls, instance): + nodes, _ = cls.get_invalid_with_message(instance) + return nodes diff --git a/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py b/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py index d7d1c79d73..48019e0a82 100644 --- a/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py +++ b/openpype/hosts/houdini/vendor/husdoutputprocessors/avalon_uri_processor.py @@ -128,14 +128,14 @@ class AvalonURIOutputProcessor(base.OutputProcessorBase): if not asset_doc: raise RuntimeError("Invalid asset name: '%s'" % asset) - formatted_anatomy = anatomy.format({ + template_obj = anatomy.templates_obj["publish"]["path"] + path = template_obj.format_strict({ "project": PROJECT, "asset": asset_doc["name"], "subset": subset, "representation": ext, "version": 0 # stub version zero }) - path = formatted_anatomy["publish"]["path"] # Remove the version folder subset_folder = os.path.dirname(os.path.dirname(path)) diff --git a/openpype/hosts/max/hooks/force_startup_script.py b/openpype/hosts/max/hooks/force_startup_script.py new file mode 100644 index 0000000000..4fcf4fef21 --- /dev/null +++ b/openpype/hosts/max/hooks/force_startup_script.py @@ -0,0 +1,24 @@ +# -*- coding: utf-8 -*- +"""Pre-launch to force 3ds max startup script.""" +from openpype.lib import PreLaunchHook +import os + + +class ForceStartupScript(PreLaunchHook): + """Inject OpenPype environment to 3ds max. + + Note that this works in combination whit 3dsmax startup script that + is translating it back to PYTHONPATH for cases when 3dsmax drops PYTHONPATH + environment. + + Hook `GlobalHostDataHook` must be executed before this hook. + """ + app_groups = ["3dsmax"] + order = 11 + + def execute(self): + startup_args = [ + "-U", + "MAXScript", + f"{os.getenv('OPENPYPE_ROOT')}\\openpype\\hosts\\max\\startup\\startup.ms"] # noqa + self.launch_context.launch_args.append(startup_args) diff --git a/openpype/hosts/max/plugins/create/create_model.py b/openpype/hosts/max/plugins/create/create_model.py new file mode 100644 index 0000000000..e7ae3af9db --- /dev/null +++ b/openpype/hosts/max/plugins/create/create_model.py @@ -0,0 +1,28 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for model.""" +from openpype.hosts.max.api import plugin +from openpype.pipeline import CreatedInstance + + +class CreateModel(plugin.MaxCreator): + identifier = "io.openpype.creators.max.model" + label = "Model" + family = "model" + icon = "gear" + + def create(self, subset_name, instance_data, pre_create_data): + from pymxs import runtime as rt + instance = super(CreateModel, self).create( + subset_name, + instance_data, + pre_create_data) # type: CreatedInstance + container = rt.getNodeByName(instance.data.get("instance_node")) + # TODO: Disable "Add to Containers?" 
Panel + # parent the selected nodes into the container + sel_obj = None + if self.selected_nodes: + sel_obj = list(self.selected_nodes) + for obj in sel_obj: + obj.parent = container + # for additional work on the node: + # instance_node = rt.getNodeByName(instance.get("instance_node")) diff --git a/openpype/hosts/max/plugins/load/load_max_scene.py b/openpype/hosts/max/plugins/load/load_max_scene.py index 460f4822a6..4b19cd671f 100644 --- a/openpype/hosts/max/plugins/load/load_max_scene.py +++ b/openpype/hosts/max/plugins/load/load_max_scene.py @@ -10,7 +10,9 @@ class MaxSceneLoader(load.LoaderPlugin): """Max Scene Loader""" families = ["camera", - "maxScene"] + "maxScene", + "model"] + representations = ["max"] order = -8 icon = "code-fork" diff --git a/openpype/hosts/max/plugins/load/load_model.py b/openpype/hosts/max/plugins/load/load_model.py new file mode 100644 index 0000000000..95ee014e07 --- /dev/null +++ b/openpype/hosts/max/plugins/load/load_model.py @@ -0,0 +1,109 @@ + +import os +from openpype.pipeline import ( + load, get_representation_path +) +from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api import lib +from openpype.hosts.max.api.lib import maintained_selection + + +class ModelAbcLoader(load.LoaderPlugin): + """Loading model with the Alembic loader.""" + + families = ["model"] + label = "Load Model(Alembic)" + representations = ["abc"] + order = -10 + icon = "code-fork" + color = "orange" + + def load(self, context, name=None, namespace=None, data=None): + from pymxs import runtime as rt + + file_path = os.path.normpath(self.fname) + + abc_before = { + c for c in rt.rootNode.Children + if rt.classOf(c) == rt.AlembicContainer + } + + abc_import_cmd = (f""" +AlembicImport.ImportToRoot = false +AlembicImport.CustomAttributes = true +AlembicImport.UVs = true +AlembicImport.VertexColors = true + +importFile @"{file_path}" #noPrompt + """) + + self.log.debug(f"Executing command: {abc_import_cmd}") + rt.execute(abc_import_cmd) + + abc_after = { + c for c in rt.rootNode.Children + if rt.classOf(c) == rt.AlembicContainer + } + + # This should yield the newly created AlembicContainer node + abc_containers = abc_after.difference(abc_before) + + if len(abc_containers) != 1: + self.log.error("Expected exactly one new AlembicContainer " + "node after import.") + + abc_container = abc_containers.pop() + + return containerise( + name, [abc_container], context, loader=self.__class__.__name__) + + def update(self, container, representation): + from pymxs import runtime as rt + path = get_representation_path(representation) + node = rt.getNodeByName(container["instance_node"]) + rt.select(node.Children) + + for alembic in rt.selection: + abc = rt.getNodeByName(alembic.name) + rt.select(abc.Children) + for abc_con in rt.selection: + container = rt.getNodeByName(abc_con.name) + container.source = path + rt.select(container.Children) + for abc_obj in rt.selection: + alembic_obj = rt.getNodeByName(abc_obj.name) + alembic_obj.source = path + + with maintained_selection(): + rt.select(node) + + lib.imprint(container["instance_node"], { + "representation": str(representation["_id"]) + }) + + def switch(self, container, representation): + self.update(container, representation) + + def remove(self, container): + from pymxs import runtime as rt + + node = rt.getNodeByName(container["instance_node"]) + rt.delete(node) + + @staticmethod + def get_container_children(parent, type_name): + from pymxs import runtime as rt + + def list_children(node): + children = [] + for c in node.Children: + children.append(c) + 
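# recurse so grandchildren and deeper descendants are collected too +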
children += list_children(c) + return children + + filtered = [] + for child in list_children(parent): + class_type = str(rt.classOf(child.baseObject)) + if class_type == type_name: + filtered.append(child) + + return filtered diff --git a/openpype/hosts/max/plugins/load/load_model_fbx.py b/openpype/hosts/max/plugins/load/load_model_fbx.py new file mode 100644 index 0000000000..88b8f1ed89 --- /dev/null +++ b/openpype/hosts/max/plugins/load/load_model_fbx.py @@ -0,0 +1,77 @@ +import os +from openpype.pipeline import ( + load, + get_representation_path +) +from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api import lib +from openpype.hosts.max.api.lib import maintained_selection + + +class FbxModelLoader(load.LoaderPlugin): + """Fbx Model Loader""" + + families = ["model"] + representations = ["fbx"] + order = -9 + icon = "code-fork" + color = "white" + + def load(self, context, name=None, namespace=None, data=None): + from pymxs import runtime as rt + + filepath = os.path.normpath(self.fname) + + fbx_import_cmd = ( + f""" + +FBXImporterSetParam "Animation" false +FBXImporterSetParam "Cameras" false +FBXImporterSetParam "AxisConversionMethod" true +FbxExporterSetParam "UpAxis" "Y" +FbxExporterSetParam "Preserveinstances" true + +importFile @"{filepath}" #noPrompt using:FBXIMP + """) + + self.log.debug(f"Executing command: {fbx_import_cmd}") + rt.execute(fbx_import_cmd) + + asset = rt.getNodeByName(f"{name}") + + return containerise( + name, [asset], context, loader=self.__class__.__name__) + + def update(self, container, representation): + from pymxs import runtime as rt + + path = get_representation_path(representation) + node = rt.getNodeByName(container["instance_node"]) + rt.select(node.Children) + fbx_reimport_cmd = ( + f""" +FBXImporterSetParam "Animation" false +FBXImporterSetParam "Cameras" false +FBXImporterSetParam "AxisConversionMethod" true +FbxExporterSetParam "UpAxis" "Y" +FbxExporterSetParam "Preserveinstances" true + +importFile @"{path}" #noPrompt using:FBXIMP + """) + rt.execute(fbx_reimport_cmd) + + with maintained_selection(): + rt.select(node) + + lib.imprint(container["instance_node"], { + "representation": str(representation["_id"]) + }) + + def switch(self, container, representation): + self.update(container, representation) + + def remove(self, container): + from pymxs import runtime as rt + + node = rt.getNodeByName(container["instance_node"]) + rt.delete(node) diff --git a/openpype/hosts/max/plugins/load/load_model_obj.py b/openpype/hosts/max/plugins/load/load_model_obj.py new file mode 100644 index 0000000000..c55e462111 --- /dev/null +++ b/openpype/hosts/max/plugins/load/load_model_obj.py @@ -0,0 +1,68 @@ +import os +from openpype.pipeline import ( + load, + get_representation_path +) +from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api import lib +from openpype.hosts.max.api.lib import maintained_selection + + +class ObjLoader(load.LoaderPlugin): + """Obj Loader""" + + families = ["model"] + representations = ["obj"] + order = -9 + icon = "code-fork" + color = "white" + + def load(self, context, name=None, namespace=None, data=None): + from pymxs import runtime as rt + + filepath = os.path.normpath(self.fname) + self.log.debug(f"Executing command to import..") + + rt.execute(f'importFile @"{filepath}" #noPrompt using:ObjImp') + # create "missing" container for obj import + container = rt.container() + container.name = f"{name}" + + # get current selection + for selection in 
rt.getCurrentSelection(): + selection.Parent = container + + asset = rt.getNodeByName(f"{name}") + + return containerise( + name, [asset], context, loader=self.__class__.__name__) + + def update(self, container, representation): + from pymxs import runtime as rt + + path = get_representation_path(representation) + node_name = container["instance_node"] + node = rt.getNodeByName(node_name) + + instance_name, _ = node_name.split("_") + container = rt.getNodeByName(instance_name) + for n in container.Children: + rt.delete(n) + + rt.execute(f'importFile @"{path}" #noPrompt using:ObjImp') + # get current selection + for selection in rt.getCurrentSelection(): + selection.Parent = container + + with maintained_selection(): + rt.select(node) + + lib.imprint(node_name, { + "representation": str(representation["_id"]) + }) + + def remove(self, container): + from pymxs import runtime as rt + + node = rt.getNodeByName(container["instance_node"]) + rt.delete(node) diff --git a/openpype/hosts/max/plugins/load/load_model_usd.py b/openpype/hosts/max/plugins/load/load_model_usd.py new file mode 100644 index 0000000000..143f91f40b --- /dev/null +++ b/openpype/hosts/max/plugins/load/load_model_usd.py @@ -0,0 +1,78 @@ +import os +from openpype.pipeline import ( + load, get_representation_path +) +from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api import lib +from openpype.hosts.max.api.lib import maintained_selection + + +class ModelUSDLoader(load.LoaderPlugin): + """Loading model with the USD loader.""" + + families = ["model"] + label = "Load Model(USD)" + representations = ["usda"] + order = -10 + icon = "code-fork" + color = "orange" + + def load(self, context, name=None, namespace=None, data=None): + from pymxs import runtime as rt + # asset_filepath + filepath = os.path.normpath(self.fname) + import_options = rt.USDImporter.CreateOptions() + base_filename = os.path.basename(filepath) + filename, ext = os.path.splitext(base_filename) + log_filepath = filepath.replace(ext, "txt") + + rt.LogPath = log_filepath + rt.LogLevel = rt.name('info') + rt.USDImporter.importFile(filepath, + importOptions=import_options) + + asset = rt.getNodeByName(f"{name}") + + return containerise( + name, [asset], context, loader=self.__class__.__name__) + + def update(self, container, representation): + from pymxs import runtime as rt + + path = get_representation_path(representation) + node_name = container["instance_node"] + node = rt.getNodeByName(node_name) + for n in node.Children: + for r in n.Children: + rt.delete(r) + rt.delete(n) + instance_name, _ = node_name.split("_") + + import_options = rt.USDImporter.CreateOptions() + base_filename = os.path.basename(path) + _, ext = os.path.splitext(base_filename) + log_filepath = path.replace(ext, "txt") + + rt.LogPath = log_filepath + rt.LogLevel = rt.name('info') + rt.USDImporter.importFile(path, + importOptions=import_options) + + asset = rt.getNodeByName(f"{instance_name}") + asset.Parent = node + + with maintained_selection(): + rt.select(node) + + lib.imprint(node_name, { + "representation": str(representation["_id"]) + }) + + def switch(self, container, representation): + self.update(container, representation) + + def remove(self, container): + from pymxs import runtime as rt + + node = rt.getNodeByName(container["instance_node"]) + rt.delete(node) diff --git a/openpype/hosts/max/plugins/load/load_pointcache.py b/openpype/hosts/max/plugins/load/load_pointcache.py index f7a72ece25..b3e12adc7b 100644 --- 
a/openpype/hosts/max/plugins/load/load_pointcache.py +++ b/openpype/hosts/max/plugins/load/load_pointcache.py @@ -15,8 +15,7 @@ from openpype.hosts.max.api import lib class AbcLoader(load.LoaderPlugin): """Alembic loader.""" - families = ["model", - "camera", + families = ["camera", "animation", "pointcache"] label = "Load Alembic" diff --git a/openpype/hosts/max/plugins/publish/collect_render.py b/openpype/hosts/max/plugins/publish/collect_render.py index 63e4108c84..b040467522 100644 --- a/openpype/hosts/max/plugins/publish/collect_render.py +++ b/openpype/hosts/max/plugins/publish/collect_render.py @@ -62,6 +62,7 @@ class CollectRender(pyblish.api.InstancePlugin): "frameStart": context.data['frameStart'], "frameEnd": context.data['frameEnd'], "version": version_int, + "farm": True } self.log.info("data: {0}".format(data)) instance.data.update(data) diff --git a/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py b/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py index 969f87be48..c14fcdbd0b 100644 --- a/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py +++ b/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py @@ -21,7 +21,8 @@ class ExtractMaxSceneRaw(publish.Extractor, label = "Extract Max Scene (Raw)" hosts = ["max"] families = ["camera", - "maxScene"] + "maxScene", + "model"] optional = True def process(self, instance): diff --git a/openpype/hosts/max/plugins/publish/extract_model.py b/openpype/hosts/max/plugins/publish/extract_model.py new file mode 100644 index 0000000000..710ad5f97d --- /dev/null +++ b/openpype/hosts/max/plugins/publish/extract_model.py @@ -0,0 +1,74 @@ +import os +import pyblish.api +from openpype.pipeline import ( + publish, + OptionalPyblishPluginMixin +) +from pymxs import runtime as rt +from openpype.hosts.max.api import ( + maintained_selection, + get_all_children +) + + +class ExtractModel(publish.Extractor, + OptionalPyblishPluginMixin): + """ + Extract Geometry in Alembic Format + """ + + order = pyblish.api.ExtractorOrder - 0.1 + label = "Extract Geometry (Alembic)" + hosts = ["max"] + families = ["model"] + optional = True + + def process(self, instance): + if not self.is_active(instance.data): + return + + container = instance.data["instance_node"] + + self.log.info("Extracting Geometry ...") + + stagingdir = self.staging_dir(instance) + filename = "{name}.abc".format(**instance.data) + filepath = os.path.join(stagingdir, filename) + + # We run the render + self.log.info("Writing alembic '%s' to '%s'" % (filename, + stagingdir)) + + export_cmd = ( + f""" +AlembicExport.ArchiveType = #ogawa +AlembicExport.CoordinateSystem = #maya +AlembicExport.CustomAttributes = true +AlembicExport.UVs = true +AlembicExport.VertexColors = true +AlembicExport.PreserveInstances = true + +exportFile @"{filepath}" #noPrompt selectedOnly:on using:AlembicExport + + """) + + self.log.debug(f"Executing command: {export_cmd}") + + with maintained_selection(): + # select and export + rt.select(get_all_children(rt.getNodeByName(container))) + rt.execute(export_cmd) + + self.log.info("Performing Extraction ...") + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + 'name': 'abc', + 'ext': 'abc', + 'files': filename, + "stagingDir": stagingdir, + } + instance.data["representations"].append(representation) + self.log.info("Extracted instance '%s' to: %s" % (instance.name, + filepath)) diff --git a/openpype/hosts/max/plugins/publish/extract_model_fbx.py 
b/openpype/hosts/max/plugins/publish/extract_model_fbx.py new file mode 100644 index 0000000000..ce58e8cc17 --- /dev/null +++ b/openpype/hosts/max/plugins/publish/extract_model_fbx.py @@ -0,0 +1,74 @@ +import os +import pyblish.api +from openpype.pipeline import ( + publish, + OptionalPyblishPluginMixin +) +from pymxs import runtime as rt +from openpype.hosts.max.api import ( + maintained_selection, + get_all_children +) + + +class ExtractModelFbx(publish.Extractor, + OptionalPyblishPluginMixin): + """ + Extract Geometry in FBX Format + """ + + order = pyblish.api.ExtractorOrder - 0.05 + label = "Extract FBX" + hosts = ["max"] + families = ["model"] + optional = True + + def process(self, instance): + if not self.is_active(instance.data): + return + + container = instance.data["instance_node"] + + self.log.info("Extracting Geometry ...") + + stagingdir = self.staging_dir(instance) + filename = "{name}.fbx".format(**instance.data) + filepath = os.path.join(stagingdir, + filename) + self.log.info("Writing FBX '%s' to '%s'" % (filepath, + stagingdir)) + + export_fbx_cmd = ( + f""" +FBXExporterSetParam "Animation" false +FBXExporterSetParam "Cameras" false +FBXExporterSetParam "Lights" false +FBXExporterSetParam "PointCache" false +FBXExporterSetParam "AxisConversionMethod" "Animation" +FbxExporterSetParam "UpAxis" "Y" +FbxExporterSetParam "Preserveinstances" true + +exportFile @"{filepath}" #noPrompt selectedOnly:true using:FBXEXP + + """) + + self.log.debug(f"Executing command: {export_fbx_cmd}") + + with maintained_selection(): + # select and export + rt.select(get_all_children(rt.getNodeByName(container))) + rt.execute(export_fbx_cmd) + + self.log.info("Performing Extraction ...") + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + 'name': 'fbx', + 'ext': 'fbx', + 'files': filename, + "stagingDir": stagingdir, + } + instance.data["representations"].append(representation) + self.log.info("Extracted instance '%s' to: %s" % (instance.name, + filepath)) diff --git a/openpype/hosts/max/plugins/publish/extract_model_obj.py b/openpype/hosts/max/plugins/publish/extract_model_obj.py new file mode 100644 index 0000000000..7bda237880 --- /dev/null +++ b/openpype/hosts/max/plugins/publish/extract_model_obj.py @@ -0,0 +1,59 @@ +import os +import pyblish.api +from openpype.pipeline import ( + publish, + OptionalPyblishPluginMixin +) +from pymxs import runtime as rt +from openpype.hosts.max.api import ( + maintained_selection, + get_all_children +) + + +class ExtractModelObj(publish.Extractor, + OptionalPyblishPluginMixin): + """ + Extract Geometry in OBJ Format + """ + + order = pyblish.api.ExtractorOrder - 0.05 + label = "Extract OBJ" + hosts = ["max"] + families = ["model"] + optional = True + + def process(self, instance): + if not self.is_active(instance.data): + return + + container = instance.data["instance_node"] + + self.log.info("Extracting Geometry ...") + + stagingdir = self.staging_dir(instance) + filename = "{name}.obj".format(**instance.data) + filepath = os.path.join(stagingdir, + filename) + self.log.info("Writing OBJ '%s' to '%s'" % (filepath, + stagingdir)) + + with maintained_selection(): + # select and export + rt.select(get_all_children(rt.getNodeByName(container))) + rt.execute(f'exportFile @"{filepath}" #noPrompt selectedOnly:true using:ObjExp') # noqa + + self.log.info("Performing Extraction ...") + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + 'name': 'obj', + 
'ext': 'obj', + 'files': filename, + "stagingDir": stagingdir, + } + + instance.data["representations"].append(representation) + self.log.info("Extracted instance '%s' to: %s" % (instance.name, + filepath)) diff --git a/openpype/hosts/max/plugins/publish/extract_model_usd.py b/openpype/hosts/max/plugins/publish/extract_model_usd.py new file mode 100644 index 0000000000..0bed2d855e --- /dev/null +++ b/openpype/hosts/max/plugins/publish/extract_model_usd.py @@ -0,0 +1,114 @@ +import os +import pyblish.api +from openpype.pipeline import ( + publish, + OptionalPyblishPluginMixin +) +from pymxs import runtime as rt +from openpype.hosts.max.api import ( + maintained_selection +) + + +class ExtractModelUSD(publish.Extractor, + OptionalPyblishPluginMixin): + """ + Extract Geometry in USDA Format + """ + + order = pyblish.api.ExtractorOrder - 0.05 + label = "Extract Geometry (USD)" + hosts = ["max"] + families = ["model"] + optional = True + + def process(self, instance): + if not self.is_active(instance.data): + return + + container = instance.data["instance_node"] + + self.log.info("Extracting Geometry ...") + + stagingdir = self.staging_dir(instance) + asset_filename = "{name}.usda".format(**instance.data) + asset_filepath = os.path.join(stagingdir, + asset_filename) + self.log.info("Writing USD '%s' to '%s'" % (asset_filepath, + stagingdir)) + + log_filename = "{name}.txt".format(**instance.data) + log_filepath = os.path.join(stagingdir, + log_filename) + self.log.info("Writing log '%s' to '%s'" % (log_filepath, + stagingdir)) + + # get the nodes which need to be exported + export_options = self.get_export_options(log_filepath) + with maintained_selection(): + # select and export + node_list = self.get_node_list(container) + rt.USDExporter.ExportFile(asset_filepath, + exportOptions=export_options, + contentSource=rt.name("selected"), + nodeList=node_list) + + self.log.info("Performing Extraction ...") + if "representations" not in instance.data: + instance.data["representations"] = [] + + representation = { + 'name': 'usda', + 'ext': 'usda', + 'files': asset_filename, + "stagingDir": stagingdir, + } + instance.data["representations"].append(representation) + + log_representation = { + 'name': 'txt', + 'ext': 'txt', + 'files': log_filename, + "stagingDir": stagingdir, + } + instance.data["representations"].append(log_representation) + + self.log.info("Extracted instance '%s' to: %s" % (instance.name, + asset_filepath)) + + def get_node_list(self, container): + """ + Get the target nodes which are + the children of the container + """ + node_list = [] + + container_node = rt.getNodeByName(container) + target_node = container_node.Children + rt.select(target_node) + for sel in rt.selection: + node_list.append(sel) + + return node_list + + def get_export_options(self, log_path): + """Set Export Options for USD Exporter""" + + export_options = rt.USDExporter.createOptions() + + export_options.Meshes = True + export_options.Shapes = False + export_options.Lights = False + export_options.Cameras = False + export_options.Materials = False + export_options.MeshFormat = rt.name('fromScene') + export_options.FileFormat = rt.name('ascii') + export_options.UpAxis = rt.name('y') + export_options.LogLevel = rt.name('info') + export_options.LogPath = log_path + export_options.PreserveEdgeOrientation = True + export_options.TimeMode = rt.name('current') + + rt.USDexporter.UIOptions = export_options + + return export_options diff --git a/openpype/hosts/max/plugins/publish/validate_model_contents.py 
b/openpype/hosts/max/plugins/publish/validate_model_contents.py new file mode 100644 index 0000000000..dd782674ff --- /dev/null +++ b/openpype/hosts/max/plugins/publish/validate_model_contents.py @@ -0,0 +1,44 @@ +# -*- coding: utf-8 -*- +import pyblish.api +from openpype.pipeline import PublishValidationError +from pymxs import runtime as rt + + +class ValidateModelContent(pyblish.api.InstancePlugin): + """Validates Model instance contents. + + A model instance may only contain geometry-related + objects (excluding shapes) or editable meshes. + """ + + order = pyblish.api.ValidatorOrder + families = ["model"] + hosts = ["max"] + label = "Model Contents" + + def process(self, instance): + invalid = self.get_invalid(instance) + if invalid: + raise PublishValidationError("Model instance must only include " + "geometry and editable meshes") + + def get_invalid(self, instance): + """ + Get nodes that are not valid model content + (cameras, lights and shapes) + """ + invalid = list() + container = instance.data["instance_node"] + self.log.info("Validating model content for " + "{}".format(container)) + + con = rt.getNodeByName(container) + selection_list = list(con.Children) or rt.getCurrentSelection() + for sel in selection_list: + if rt.classOf(sel) in rt.Camera.classes: + invalid.append(sel) + if rt.classOf(sel) in rt.Light.classes: + invalid.append(sel) + if rt.classOf(sel) in rt.Shape.classes: + invalid.append(sel) + + return invalid diff --git a/openpype/hosts/max/plugins/publish/validate_usd_plugin.py b/openpype/hosts/max/plugins/publish/validate_usd_plugin.py new file mode 100644 index 0000000000..747147020a --- /dev/null +++ b/openpype/hosts/max/plugins/publish/validate_usd_plugin.py @@ -0,0 +1,36 @@ +# -*- coding: utf-8 -*- +import pyblish.api +from openpype.pipeline import PublishValidationError +from pymxs import runtime as rt + + +class ValidateUSDPlugin(pyblish.api.InstancePlugin): + """Validates that the USD plugin is installed or loaded in Max + """ + + order = pyblish.api.ValidatorOrder - 0.01 + families = ["model"] + hosts = ["max"] + label = "USD Plugin" + + def process(self, instance): + plugin_mgr = rt.pluginManager + plugin_count = plugin_mgr.pluginDllCount + plugin_info = self.get_plugins(plugin_mgr, + plugin_count) + usd_import = "usdimport.dli" + if usd_import not in plugin_info: + raise PublishValidationError("USD Plugin {}" + " not found".format(usd_import)) + usd_export = "usdexport.dle" + if usd_export not in plugin_info: + raise PublishValidationError("USD Plugin {}" + " not found".format(usd_export)) + + def get_plugins(self, manager, count): + plugin_info_list = list() + for p in range(1, count + 1): + plugin_info = manager.pluginDllName(p) + plugin_info_list.append(plugin_info) + + return plugin_info_list diff --git a/openpype/hosts/maya/api/lib.py b/openpype/hosts/maya/api/lib.py index 22803a2e3a..f814187cc1 100644 --- a/openpype/hosts/maya/api/lib.py +++ b/openpype/hosts/maya/api/lib.py @@ -32,7 +32,17 @@ from openpype.pipeline import ( load_container, registered_host, ) -from openpype.pipeline.context_tools import get_current_project_asset +from openpype.pipeline.create import ( + legacy_create, + get_legacy_creator_by_name, +) +from openpype.pipeline.context_tools import ( + get_current_asset_name, + get_current_project_asset, + get_current_project_name, + get_current_task_name +) +from openpype.lib.profiles_filtering import filter_profiles self = sys.modules[__name__] @@ -112,6 +122,18 @@ FLOAT_FPS = {23.98, 23.976, 29.97, 47.952, 59.94} RENDERLIKE_INSTANCE_FAMILIES = ["rendering", 
"vrayscene"] +DISPLAY_LIGHTS_VALUES = [ + "project_settings", "default", "all", "selected", "flat", "none" +] +DISPLAY_LIGHTS_LABELS = [ + "Use Project Settings", + "Default Lighting", + "All Lights", + "Selected Lights", + "Flat Lighting", + "No Lights" +] + def get_main_window(): """Acquire Maya's main window""" @@ -292,15 +314,20 @@ def collect_animation_data(fps=False): """ # get scene values as defaults - start = cmds.playbackOptions(query=True, animationStartTime=True) - end = cmds.playbackOptions(query=True, animationEndTime=True) + frame_start = cmds.playbackOptions(query=True, minTime=True) + frame_end = cmds.playbackOptions(query=True, maxTime=True) + handle_start = cmds.playbackOptions(query=True, animationStartTime=True) + handle_end = cmds.playbackOptions(query=True, animationEndTime=True) + + handle_start = frame_start - handle_start + handle_end = handle_end - frame_end # build attributes data = OrderedDict() - data["frameStart"] = start - data["frameEnd"] = end - data["handleStart"] = 0 - data["handleEnd"] = 0 + data["frameStart"] = frame_start + data["frameEnd"] = frame_end + data["handleStart"] = handle_start + data["handleEnd"] = handle_end data["step"] = 1.0 if fps: @@ -2130,12 +2157,22 @@ def set_scene_resolution(width, height, pixelAspect): cmds.setAttr("%s.pixelAspect" % control_node, pixelAspect) -def get_frame_range(): - """Get the current assets frame range and handles.""" +def get_frame_range(include_animation_range=False): + """Get the current assets frame range and handles. + + Args: + include_animation_range (bool, optional): Whether to include + `animationStart` and `animationEnd` keys to define the outer + range of the timeline. It is excluded by default. + + Returns: + dict: Asset's expected frame range values. + + """ # Set frame start/end - project_name = legacy_io.active_project() - asset_name = legacy_io.Session["AVALON_ASSET"] + project_name = get_current_project_name() + asset_name = get_current_asset_name() asset = get_asset_by_name(project_name, asset_name) frame_start = asset["data"].get("frameStart") @@ -2148,12 +2185,39 @@ def get_frame_range(): handle_start = asset["data"].get("handleStart") or 0 handle_end = asset["data"].get("handleEnd") or 0 - return { + frame_range = { "frameStart": frame_start, "frameEnd": frame_end, "handleStart": handle_start, "handleEnd": handle_end } + if include_animation_range: + # The animation range values are only included to define whether + # the Maya time slider should include the handles or not. + # Some usages of this function use the full dictionary to define + # instance attributes for which we want to exclude the animation + # keys. That is why these are excluded by default. 
+ task_name = get_current_task_name() + settings = get_project_settings(project_name) + include_handles_settings = settings["maya"]["include_handles"] + current_task = asset.get("data").get("tasks").get(task_name) + + animation_start = frame_start + animation_end = frame_end + + include_handles = include_handles_settings["include_handles_default"] + for item in include_handles_settings["per_task_type"]: + if current_task["type"] in item["task_type"]: + include_handles = item["include_handles"] + break + if include_handles: + animation_start -= int(handle_start) + animation_end += int(handle_end) + + frame_range["animationStart"] = animation_start + frame_range["animationEnd"] = animation_end + + return frame_range def reset_frame_range(playback=True, render=True, fps=True): @@ -2166,25 +2230,29 @@ Defaults to True. fps (bool, Optional): Whether to set scene FPS. Defaults to True. """ - if fps: fps = convert_to_maya_fps( float(legacy_io.Session.get("AVALON_FPS", 25)) ) set_scene_fps(fps) - frame_range = get_frame_range() + frame_range = get_frame_range(include_animation_range=True) + if not frame_range: + # No frame range data found for asset + return - frame_start = frame_range["frameStart"] - int(frame_range["handleStart"]) - frame_end = frame_range["frameEnd"] + int(frame_range["handleEnd"]) + frame_start = frame_range["frameStart"] + frame_end = frame_range["frameEnd"] + animation_start = frame_range["animationStart"] + animation_end = frame_range["animationEnd"] if playback: - cmds.playbackOptions(minTime=frame_start) - cmds.playbackOptions(maxTime=frame_end) - cmds.playbackOptions(animationStartTime=frame_start) - cmds.playbackOptions(animationEndTime=frame_end) - cmds.playbackOptions(minTime=frame_start) - cmds.playbackOptions(maxTime=frame_end) + cmds.playbackOptions( + minTime=frame_start, + maxTime=frame_end, + animationStartTime=animation_start, + animationEndTime=animation_end + ) cmds.currentTime(frame_start) if render: @@ -3655,7 +3723,17 @@ def get_color_management_preferences(): # Split view and display from view_transform. view_transform comes in # format of "{view} ({display})". regex = re.compile(r"^(?P<view>.+) \((?P<display>.+)\)$") + if int(cmds.about(version=True)) <= 2020: + # view_transform comes in format of "{view} {display}" in 2020. + regex = re.compile(r"^(?P<view>.+) (?P<display>.+)$") + match = regex.match(data["view_transform"]) + if not match: + raise ValueError( + "Unable to parse view and display from Maya view transform: '{}' " + "using regex '{}'".format(data["view_transform"], regex.pattern) + ) + data.update({ "display": match.group("display"), "view": match.group("view") @@ -3812,3 +3890,98 @@ def get_all_children(nodes): iterator.next() # noqa: B305 return list(traversed) + + +def get_capture_preset(task_name, task_type, subset, project_settings, log): + """Get capture preset for playblasting. + + Logic for transitioning from old style capture preset to new capture preset + profiles. + + Args: + task_name (str): Task name. + task_type (str): Task type. + subset (str): Subset name. + project_settings (dict): Project settings. + log (object): Logging object. 
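+ + Returns: + dict: The resolved capture preset settings, or an empty dict + when none apply.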
+ """ + capture_preset = None + filtering_criteria = { + "hosts": "maya", + "families": "review", + "task_names": task_name, + "task_types": task_type, + "subset": subset + } + + plugin_settings = project_settings["maya"]["publish"]["ExtractPlayblast"] + if plugin_settings["profiles"]: + profile = filter_profiles( + plugin_settings["profiles"], + filtering_criteria, + logger=log + ) + capture_preset = profile.get("capture_preset") + else: + log.warning("No profiles present for Extract Playblast") + + # Backward compatibility for deprecated Extract Playblast settings + # without profiles. + if capture_preset is None: + log.debug( + "Falling back to deprecated Extract Playblast capture preset " + "because no new style playblast profiles are defined." + ) + capture_preset = plugin_settings["capture_preset"] + + return capture_preset or {} + + +def create_rig_animation_instance(nodes, context, namespace, log=None): + """Create an animation publish instance for loaded rigs. + + See the RecreateRigAnimationInstance inventory action on how to use this + for loaded rig containers. + + Arguments: + nodes (list): Member nodes of the rig instance. + context (dict): Representation context of the rig container + namespace (str): Namespace of the rig container + log (logging.Logger, optional): Logger to log to if provided + + Returns: + None + + """ + output = next((node for node in nodes if + node.endswith("out_SET")), None) + controls = next((node for node in nodes if + node.endswith("controls_SET")), None) + + assert output, "No out_SET in rig, this is a bug." + assert controls, "No controls_SET in rig, this is a bug." + + # Find the roots amongst the loaded nodes + roots = ( + cmds.ls(nodes, assemblies=True, long=True) or + get_highest_in_hierarchy(nodes) + ) + assert roots, "No root nodes in rig, this is a bug." + + asset = legacy_io.Session["AVALON_ASSET"] + dependency = str(context["representation"]["_id"]) + + if log: + log.info("Creating subset: {}".format(namespace)) + + # Create the animation instance + creator_plugin = get_legacy_creator_by_name("CreateAnimation") + with maintained_selection(): + cmds.select([output, controls] + roots, noExpand=True) + legacy_create( + creator_plugin, + name=namespace, + asset=asset, + options={"useSelection": True}, + data={"dependencies": dependency} + ) diff --git a/openpype/hosts/maya/api/plugin.py b/openpype/hosts/maya/api/plugin.py index 916fddd923..714278ba6c 100644 --- a/openpype/hosts/maya/api/plugin.py +++ b/openpype/hosts/maya/api/plugin.py @@ -1,4 +1,5 @@ import os +import re from maya import cmds @@ -12,6 +13,7 @@ from openpype.pipeline import ( AVALON_CONTAINER_ID, Anatomy, ) +from openpype.pipeline.load import LoadError from openpype.settings import get_project_settings from .pipeline import containerise from . import lib @@ -82,6 +84,44 @@ def get_reference_node_parents(ref): return parents +def get_custom_namespace(custom_namespace): + """Return unique namespace. + + The input namespace can contain a single group + of '#' number tokens to indicate where the namespace's + unique index should go. The amount of tokens defines + the zero padding of the number, e.g ### turns into 001. 
+ + Warning: Note that a namespace will always be + prefixed with a _ if it starts with a digit + + Example: + >>> get_custom_namespace("myspace_##_") + # myspace_01_ + >>> get_custom_namespace("##_myspace") + # _01_myspace + >>> get_custom_namespace("myspace##") + # myspace01 + + """ + split = re.split("([#]+)", custom_namespace, 1) + + if len(split) == 3: + base, padding, suffix = split + padding = "%0{}d".format(len(padding)) + else: + base = split[0] + padding = "%02d" # default padding + suffix = "" + + return lib.unique_namespace( + base, + format=padding, + prefix="_" if not base or base[0].isdigit() else "", + suffix=suffix + ) + + class Creator(LegacyCreator): defaults = ['Main'] @@ -143,15 +183,46 @@ class ReferenceLoader(Loader): assert os.path.exists(self.fname), "%s does not exist." % self.fname asset = context['asset'] + subset = context['subset'] + settings = get_project_settings(context['project']['name']) + custom_naming = settings['maya']['load']['reference_loader'] loaded_containers = [] - count = options.get("count") or 1 - for c in range(0, count): - namespace = namespace or lib.unique_namespace( - "{}_{}_".format(asset["name"], context["subset"]["name"]), - prefix="_" if asset["name"][0].isdigit() else "", - suffix="_", + if not custom_naming['namespace']: + raise LoadError("No namespace specified in " + "Maya ReferenceLoader settings") + elif not custom_naming['group_name']: + raise LoadError("No group name specified in " + "Maya ReferenceLoader settings") + + formatting_data = { + "asset_name": asset['name'], + "asset_type": asset['type'], + "subset": subset['name'], + "family": ( + subset['data'].get('family') or + subset['data']['families'][0] ) + } + + custom_namespace = custom_naming['namespace'].format( + **formatting_data + ) + + custom_group_name = custom_naming['group_name'].format( + **formatting_data + ) + + count = options.get("count") or 1 + + for c in range(0, count): + namespace = get_custom_namespace(custom_namespace) + group_name = "{}:{}".format( + namespace, + custom_group_name + ) + + options['group_name'] = group_name # Offset loaded subset if "offset" in options: @@ -187,7 +258,7 @@ class ReferenceLoader(Loader): return loaded_containers - def process_reference(self, context, name, namespace, data): + def process_reference(self, context, name, namespace, options): """To be implemented by subclass""" raise NotImplementedError("Must be implemented by subclass") diff --git a/openpype/hosts/maya/api/workfile_template_builder.py b/openpype/hosts/maya/api/workfile_template_builder.py index 4bee0664ef..d65e4c74d2 100644 --- a/openpype/hosts/maya/api/workfile_template_builder.py +++ b/openpype/hosts/maya/api/workfile_template_builder.py @@ -234,26 +234,10 @@ class MayaPlaceholderLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin): return self.get_load_plugin_options(options) def cleanup_placeholder(self, placeholder, failed): - """Hide placeholder, parent them to root - add them to placeholder set and register placeholder's parent - to keep placeholder info available for future use + """Hide placeholder, add them to placeholder set """ - node = placeholder._scene_identifier - node_parent = placeholder.data["parent"] - if node_parent: - cmds.setAttr(node + ".parent", node_parent, type="string") - if cmds.getAttr(node + ".index") < 0: - cmds.setAttr(node + ".index", placeholder.data["index"]) - - holding_sets = cmds.listSets(object=node) - if holding_sets: - for set in holding_sets: - cmds.sets(node, remove=set) - - if cmds.listRelatives(node, p=True): - 
node = cmds.parent(node, world=True)[0] cmds.sets(node, addElement=PLACEHOLDER_SET) cmds.hide(node) cmds.setAttr(node + ".hiddenInOutliner", True) @@ -286,8 +270,6 @@ class MayaPlaceholderLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin): elif not cmds.sets(root, q=True): return - if placeholder.data["parent"]: - cmds.parent(nodes_to_parent, placeholder.data["parent"]) # Move loaded nodes to correct index in outliner hierarchy placeholder_form = cmds.xform( placeholder.scene_identifier, diff --git a/openpype/hosts/maya/hooks/pre_auto_load_plugins.py b/openpype/hosts/maya/hooks/pre_auto_load_plugins.py new file mode 100644 index 0000000000..689d7adb4f --- /dev/null +++ b/openpype/hosts/maya/hooks/pre_auto_load_plugins.py @@ -0,0 +1,29 @@ +from openpype.lib import PreLaunchHook + + +class MayaPreAutoLoadPlugins(PreLaunchHook): + """Define -noAutoloadPlugins command flag.""" + + # Before AddLastWorkfileToLaunchArgs + order = 9 + app_groups = ["maya"] + + def execute(self): + + # Ignore if there's no last workfile to start. + if not self.data.get("start_last_workfile"): + return + + maya_settings = self.data["project_settings"]["maya"] + enabled = maya_settings["explicit_plugins_loading"]["enabled"] + if enabled: + # Force disable the `AddLastWorkfileToLaunchArgs`. + self.data.pop("start_last_workfile") + + # Force post initialization so our dedicated plug-in load can run + # prior to Maya opening a scene file. + key = "OPENPYPE_OPEN_WORKFILE_POST_INITIALIZATION" + self.launch_context.env[key] = "1" + + self.log.debug("Explicit plugins loading.") + self.launch_context.launch_args.append("-noAutoloadPlugins") diff --git a/openpype/hosts/maya/hooks/pre_open_workfile_post_initialization.py b/openpype/hosts/maya/hooks/pre_open_workfile_post_initialization.py new file mode 100644 index 0000000000..7582ce0591 --- /dev/null +++ b/openpype/hosts/maya/hooks/pre_open_workfile_post_initialization.py @@ -0,0 +1,25 @@ +from openpype.lib import PreLaunchHook + + +class MayaPreOpenWorkfilePostInitialization(PreLaunchHook): + """Define whether open last workfile should run post initialize.""" + + # Before AddLastWorkfileToLaunchArgs. + order = 9 + app_groups = ["maya"] + + def execute(self): + + # Ignore if there's no last workfile to start. + if not self.data.get("start_last_workfile"): + return + + maya_settings = self.data["project_settings"]["maya"] + enabled = maya_settings["open_workfile_post_initialization"] + if enabled: + # Force disable the `AddLastWorkfileToLaunchArgs`. + self.data.pop("start_last_workfile") + + self.log.debug("Opening workfile post initialization.") + key = "OPENPYPE_OPEN_WORKFILE_POST_INITIALIZATION" + self.launch_context.env[key] = "1" diff --git a/openpype/hosts/maya/plugins/create/create_animation.py b/openpype/hosts/maya/plugins/create/create_animation.py index f992ff2c1a..095cbcdd64 100644 --- a/openpype/hosts/maya/plugins/create/create_animation.py +++ b/openpype/hosts/maya/plugins/create/create_animation.py @@ -7,6 +7,12 @@ from openpype.hosts.maya.api import ( class CreateAnimation(plugin.Creator): """Animation output for character rigs""" + # We hide the animation creator from the UI since the creation of it + # is automated upon loading a rig. There's an inventory action to recreate + # it for loaded rigs if by chance someone deleted the animation instance. 
+ # Note: This setting is actually applied from project settings + enabled = False + + name = "animationDefault" label = "Animation" family = "animation" diff --git a/openpype/hosts/maya/plugins/create/create_review.py b/openpype/hosts/maya/plugins/create/create_review.py index e709239ae7..40ae99b57c 100644 --- a/openpype/hosts/maya/plugins/create/create_review.py +++ b/openpype/hosts/maya/plugins/create/create_review.py @@ -1,8 +1,14 @@ +import os from collections import OrderedDict +import json + from openpype.hosts.maya.api import ( lib, plugin ) +from openpype.settings import get_project_settings +from openpype.pipeline import get_current_project_name, get_current_task_name +from openpype.client import get_asset_by_name class CreateReview(plugin.Creator): @@ -32,6 +38,23 @@ class CreateReview(plugin.Creator): super(CreateReview, self).__init__(*args, **kwargs) data = OrderedDict(**self.data) + project_name = get_current_project_name() + asset_doc = get_asset_by_name(project_name, data["asset"]) + task_name = get_current_task_name() + preset = lib.get_capture_preset( + task_name, + asset_doc["data"]["tasks"][task_name]["type"], + data["subset"], + get_project_settings(project_name), + self.log + ) + if os.environ.get("OPENPYPE_DEBUG") == "1": + self.log.debug( + "Using preset: {}".format( + json.dumps(preset, indent=4, sort_keys=True) + ) + ) + # Option for using Maya or asset frame range in settings. frame_range = lib.get_frame_range() if self.useMayaTimeline: @@ -40,12 +63,14 @@ class CreateReview(plugin.Creator): data[key] = value data["fps"] = lib.collect_animation_data(fps=True)["fps"] - data["review_width"] = self.Width - data["review_height"] = self.Height - data["isolate"] = self.isolate + data["keepImages"] = self.keepImages - data["imagePlane"] = self.imagePlane data["transparency"] = self.transparency - data["panZoom"] = self.panZoom + data["review_width"] = preset["Resolution"]["width"] + data["review_height"] = preset["Resolution"]["height"] + data["isolate"] = preset["Generic"]["isolate_view"] + data["imagePlane"] = preset["Viewport Options"]["imagePlane"] + data["panZoom"] = preset["Generic"]["pan_zoom"] + data["displayLights"] = lib.DISPLAY_LIGHTS_LABELS self.data = data diff --git a/openpype/hosts/maya/plugins/inventory/rig_recreate_animation_instance.py b/openpype/hosts/maya/plugins/inventory/rig_recreate_animation_instance.py new file mode 100644 index 0000000000..39bc59fbbf --- /dev/null +++ b/openpype/hosts/maya/plugins/inventory/rig_recreate_animation_instance.py @@ -0,0 +1,35 @@ +from openpype.pipeline import ( + InventoryAction, + get_representation_context +) +from openpype.hosts.maya.api.lib import ( + create_rig_animation_instance, + get_container_members, +) + + +class RecreateRigAnimationInstance(InventoryAction): + """Recreate animation publish instance for loaded rigs""" + + label = "Recreate rig animation instance" + icon = "wrench" + color = "#888888" + + @staticmethod + def is_compatible(container): + return ( + container.get("loader") == "ReferenceLoader" + and container.get("name", "").startswith("rig") + ) + + def process(self, containers): + + for container in containers: + # todo: delete an existing entry if it exists or skip creation + + namespace = container["namespace"] + representation_id = container["representation"] + context = get_representation_context(representation_id) + nodes = get_container_members(container) + + create_rig_animation_instance(nodes, context, namespace) diff --git a/openpype/hosts/maya/plugins/load/_load_animation.py
b/openpype/hosts/maya/plugins/load/_load_animation.py index b419a730b5..2ba5fe6b64 100644 --- a/openpype/hosts/maya/plugins/load/_load_animation.py +++ b/openpype/hosts/maya/plugins/load/_load_animation.py @@ -14,7 +14,7 @@ class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): icon = "code-fork" color = "orange" - def process_reference(self, context, name, namespace, data): + def process_reference(self, context, name, namespace, options): import maya.cmds as cmds from openpype.hosts.maya.api.lib import unique_namespace @@ -41,7 +41,7 @@ class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): namespace=namespace, sharedReferenceFile=False, groupReference=True, - groupName="{}:{}".format(namespace, name), + groupName=options['group_name'], reference=True, returnNewNodes=True) diff --git a/openpype/hosts/maya/plugins/load/load_reference.py b/openpype/hosts/maya/plugins/load/load_reference.py index 461f4258aa..7d717dcd44 100644 --- a/openpype/hosts/maya/plugins/load/load_reference.py +++ b/openpype/hosts/maya/plugins/load/load_reference.py @@ -4,16 +4,12 @@ import contextlib from maya import cmds from openpype.settings import get_project_settings -from openpype.pipeline import legacy_io -from openpype.pipeline.create import ( - legacy_create, - get_legacy_creator_by_name, -) import openpype.hosts.maya.api.plugin from openpype.hosts.maya.api.lib import ( maintained_selection, get_container_members, - parent_nodes + parent_nodes, + create_rig_animation_instance ) @@ -114,9 +110,6 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): icon = "code-fork" color = "orange" - # Name of creator class that will be used to create animation instance - animation_creator_name = "CreateAnimation" - def process_reference(self, context, name, namespace, options): import maya.cmds as cmds @@ -125,14 +118,15 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): except ValueError: family = "model" - group_name = "{}:_GRP".format(namespace) # True by default to keep legacy behaviours attach_to_root = options.get("attach_to_root", True) + group_name = options["group_name"] with maintained_selection(): cmds.loadPlugin("AbcImport.mll", quiet=True) file_url = self.prepare_root_value(self.fname, context["project"]["name"]) + nodes = cmds.file(file_url, namespace=namespace, sharedReferenceFile=False, @@ -168,9 +162,15 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): with parent_nodes(roots, parent=None): cmds.xform(group_name, zeroTransformPivots=True) - cmds.setAttr("{}.displayHandle".format(group_name), 1) - settings = get_project_settings(os.environ['AVALON_PROJECT']) + + display_handle = settings['maya']['load'].get( + 'reference_loader', {} + ).get('display_handle', True) + cmds.setAttr( + "{}.displayHandle".format(group_name), display_handle + ) + colors = settings['maya']['load']['colors'] c = colors.get(family) if c is not None: @@ -180,7 +180,9 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): (float(c[1]) / 255), (float(c[2]) / 255)) - cmds.setAttr("{}.displayHandle".format(group_name), 1) + cmds.setAttr( + "{}.displayHandle".format(group_name), display_handle + ) # get bounding box bbox = cmds.exactWorldBoundingBox(group_name) # get pivot position on world space @@ -219,37 +221,10 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): self._lock_camera_transforms(members) def _post_process_rig(self, name, namespace, context, options): - - output = next((node for node in self if - 
node.endswith("out_SET")), None) - controls = next((node for node in self if - node.endswith("controls_SET")), None) - - assert output, "No out_SET in rig, this is a bug." - assert controls, "No controls_SET in rig, this is a bug." - - # Find the roots amongst the loaded nodes - roots = cmds.ls(self[:], assemblies=True, long=True) - assert roots, "No root nodes in rig, this is a bug." - - asset = legacy_io.Session["AVALON_ASSET"] - dependency = str(context["representation"]["_id"]) - - self.log.info("Creating subset: {}".format(namespace)) - - # Create the animation instance - creator_plugin = get_legacy_creator_by_name( - self.animation_creator_name + nodes = self[:] + create_rig_animation_instance( + nodes, context, namespace, log=self.log ) - with maintained_selection(): - cmds.select([output, controls] + roots, noExpand=True) - legacy_create( - creator_plugin, - name=namespace, - asset=asset, - options={"useSelection": True}, - data={"dependencies": dependency} - ) def _lock_camera_transforms(self, nodes): cameras = cmds.ls(nodes, type="camera") diff --git a/openpype/hosts/maya/plugins/load/load_yeti_rig.py b/openpype/hosts/maya/plugins/load/load_yeti_rig.py index 6a13d2e145..b8066871b0 100644 --- a/openpype/hosts/maya/plugins/load/load_yeti_rig.py +++ b/openpype/hosts/maya/plugins/load/load_yeti_rig.py @@ -19,8 +19,7 @@ class YetiRigLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): def process_reference( self, context, name=None, namespace=None, options=None ): - - group_name = "{}:{}".format(namespace, name) + group_name = options['group_name'] with lib.maintained_selection(): file_url = self.prepare_root_value( self.fname, context["project"]["name"] diff --git a/openpype/hosts/maya/plugins/publish/collect_review.py b/openpype/hosts/maya/plugins/publish/collect_review.py index 0b03988002..5c190a4a7b 100644 --- a/openpype/hosts/maya/plugins/publish/collect_review.py +++ b/openpype/hosts/maya/plugins/publish/collect_review.py @@ -4,7 +4,7 @@ import pyblish.api from openpype.client import get_subset_by_name from openpype.pipeline import legacy_io, KnownPublishError -from openpype.hosts.maya.api.lib import get_attribute_input +from openpype.hosts.maya.api import lib class CollectReview(pyblish.api.InstancePlugin): @@ -29,26 +29,37 @@ class CollectReview(pyblish.api.InstancePlugin): # get cameras members = instance.data['setMembers'] - cameras = cmds.ls(members, long=True, - dag=True, cameras=True) self.log.debug('members: {}'.format(members)) - - # validate required settings - if len(cameras) == 0: - raise KnownPublishError("No camera found in review " - "instance: {}".format(instance)) - elif len(cameras) > 2: - raise KnownPublishError( - "Only a single camera is allowed for a review instance but " - "more than one camera found in review instance: {}. " - "Cameras found: {}".format(instance, ", ".join(cameras))) - - camera = cameras[0] - self.log.debug('camera: {}'.format(camera)) + cameras = cmds.ls(members, long=True, dag=True, cameras=True) + camera = cameras[0] if cameras else None context = instance.context objectset = context.data['objectsets'] + # Convert enum attribute index to string for Display Lights. 
+ index = instance.data.get("displayLights", 0) + display_lights = lib.DISPLAY_LIGHTS_VALUES[index] + if display_lights == "project_settings": + settings = instance.context.data["project_settings"] + settings = settings["maya"]["publish"]["ExtractPlayblast"] + settings = settings["capture_preset"]["Viewport Options"] + display_lights = settings["displayLights"] + + # Collect camera focal length. + burninDataMembers = instance.data.get("burninDataMembers", {}) + if camera is not None: + attr = camera + ".focalLength" + if lib.get_attribute_input(attr): + start = instance.data["frameStart"] + end = instance.data["frameEnd"] + 1 + time_range = range(int(start), int(end)) + focal_length = [cmds.getAttr(attr, time=t) for t in time_range] + else: + focal_length = cmds.getAttr(attr) + + burninDataMembers["focalLength"] = focal_length + + # Account for nested instances like model. reviewable_subsets = list(set(members) & set(objectset)) if reviewable_subsets: if len(reviewable_subsets) > 1: @@ -75,11 +86,14 @@ class CollectReview(pyblish.api.InstancePlugin): else: data['families'] = ['review'] + data["cameras"] = cameras data['review_camera'] = camera data['frameStartFtrack'] = instance.data["frameStartHandle"] data['frameEndFtrack'] = instance.data["frameEndHandle"] data['frameStartHandle'] = instance.data["frameStartHandle"] data['frameEndHandle'] = instance.data["frameEndHandle"] + data['handleStart'] = instance.data["handleStart"] + data['handleEnd'] = instance.data["handleEnd"] data["frameStart"] = instance.data["frameStart"] data["frameEnd"] = instance.data["frameEnd"] data['step'] = instance.data['step'] @@ -89,6 +103,8 @@ class CollectReview(pyblish.api.InstancePlugin): data["isolate"] = instance.data["isolate"] data["panZoom"] = instance.data.get("panZoom", False) data["panel"] = instance.data["panel"] + data["displayLights"] = display_lights + data["burninDataMembers"] = burninDataMembers # The review instance must be active cmds.setAttr(str(instance) + '.active', 1) @@ -109,11 +125,14 @@ class CollectReview(pyblish.api.InstancePlugin): self.log.debug("Existing subsets found, keep legacy name.") instance.data['subset'] = legacy_subset_name + instance.data["cameras"] = cameras instance.data['review_camera'] = camera instance.data['frameStartFtrack'] = \ instance.data["frameStartHandle"] instance.data['frameEndFtrack'] = \ instance.data["frameEndHandle"] + instance.data["displayLights"] = display_lights + instance.data["burninDataMembers"] = burninDataMembers # make ftrack publishable instance.data.setdefault("families", []).append('ftrack') @@ -155,20 +174,3 @@ class CollectReview(pyblish.api.InstancePlugin): audio_data.append(get_audio_node_data(node)) instance.data["audio"] = audio_data - - # Collect focal length. 
- attr = camera + ".focalLength" - if get_attribute_input(attr): - start = instance.data["frameStart"] - end = instance.data["frameEnd"] + 1 - focal_length = [ - cmds.getAttr(attr, time=t) for t in range(int(start), int(end)) - ] - else: - focal_length = cmds.getAttr(attr) - - key = "focalLength" - try: - instance.data["burninDataMembers"][key] = focal_length - except KeyError: - instance.data["burninDataMembers"] = {key: focal_length} diff --git a/openpype/hosts/maya/plugins/publish/extract_look.py b/openpype/hosts/maya/plugins/publish/extract_look.py index 93054e5fbb..3cc95a0b2e 100644 --- a/openpype/hosts/maya/plugins/publish/extract_look.py +++ b/openpype/hosts/maya/plugins/publish/extract_look.py @@ -26,7 +26,7 @@ HARDLINK = 2 @attr.s -class TextureResult: +class TextureResult(object): """The resulting texture of a processed file for a resource""" # Path to the file path = attr.ib() @@ -280,7 +280,7 @@ class MakeTX(TextureProcessor): # Do nothing if the source file is already a .tx file. return TextureResult( path=source, - file_hash=None, # todo: unknown texture hash? + file_hash=source_hash(source), colorspace=colorspace, transfer_mode=COPY ) diff --git a/openpype/hosts/maya/plugins/publish/extract_playblast.py b/openpype/hosts/maya/plugins/publish/extract_playblast.py index 0f3425a1de..3ceef6f3d3 100644 --- a/openpype/hosts/maya/plugins/publish/extract_playblast.py +++ b/openpype/hosts/maya/plugins/publish/extract_playblast.py @@ -34,13 +34,15 @@ class ExtractPlayblast(publish.Extractor): families = ["review"] optional = True capture_preset = {} + profiles = None def _capture(self, preset): - self.log.info( - "Using preset:\n{}".format( - json.dumps(preset, sort_keys=True, indent=4) + if os.environ.get("OPENPYPE_DEBUG") == "1": + self.log.debug( + "Using preset: {}".format( + json.dumps(preset, indent=4, sort_keys=True) + ) ) - ) path = capture.capture(log=self.log, **preset) self.log.debug("playblast path {}".format(path)) @@ -65,12 +67,25 @@ class ExtractPlayblast(publish.Extractor): # get cameras camera = instance.data["review_camera"] - preset = lib.load_capture_preset(data=self.capture_preset) - # Grab capture presets from the project settings - capture_presets = self.capture_preset + task_data = instance.data["anatomyData"].get("task", {}) + capture_preset = lib.get_capture_preset( + task_data.get("name"), + task_data.get("type"), + instance.data["subset"], + instance.context.data["project_settings"], + self.log + ) + + preset = lib.load_capture_preset(data=capture_preset) + + # "isolate_view" will already have been applied at creation, so we'll + # ignore it here. + preset.pop("isolate_view") + # Set resolution variables from capture presets - width_preset = capture_presets["Resolution"]["width"] - height_preset = capture_presets["Resolution"]["height"] + width_preset = capture_preset["Resolution"]["width"] + height_preset = capture_preset["Resolution"]["height"] + # Set resolution variables from asset values asset_data = instance.data["assetEntity"]["data"] asset_width = asset_data.get("resolutionWidth") @@ -115,14 +130,19 @@ class ExtractPlayblast(publish.Extractor): cmds.currentTime(refreshFrameInt - 1, edit=True) cmds.currentTime(refreshFrameInt, edit=True) + # Use displayLights setting from instance + key = "displayLights" + preset["viewport_options"][key] = instance.data[key] + # Override transparency if requested. 
transparency = instance.data.get("transparency", 0) if transparency != 0: preset["viewport2_options"]["transparencyAlgorithm"] = transparency # Isolate view is requested by having objects in the set besides a - # camera. - if preset.pop("isolate_view", False) and instance.data.get("isolate"): + # camera. If there is only 1 member it'll be the camera because we + # validate to have 1 camera only. + if instance.data["isolate"] and len(instance.data["setMembers"]) > 1: preset["isolate"] = instance.data["setMembers"] # Show/Hide image planes on request. @@ -157,7 +177,7 @@ class ExtractPlayblast(publish.Extractor): ) override_viewport_options = ( - capture_presets["Viewport Options"]["override_viewport_options"] + capture_preset["Viewport Options"]["override_viewport_options"] ) # Force viewer to False in call to capture because we have our own @@ -197,7 +217,11 @@ class ExtractPlayblast(publish.Extractor): instance.data["panel"], edit=True, **viewport_defaults ) - cmds.setAttr("{}.panZoomEnabled".format(preset["camera"]), pan_zoom) + try: + cmds.setAttr( + "{}.panZoomEnabled".format(preset["camera"]), pan_zoom) + except RuntimeError: + self.log.warning("Cannot restore Pan/Zoom settings.") collected_files = os.listdir(stagingdir) patterns = [clique.PATTERNS["frames"]] @@ -233,8 +257,8 @@ class ExtractPlayblast(publish.Extractor): collected_files = collected_files[0] representation = { - "name": self.capture_preset["Codec"]["compression"], - "ext": self.capture_preset["Codec"]["compression"], + "name": capture_preset["Codec"]["compression"], + "ext": capture_preset["Codec"]["compression"], "files": collected_files, "stagingDir": stagingdir, "frameStart": start, diff --git a/openpype/hosts/maya/plugins/publish/extract_thumbnail.py b/openpype/hosts/maya/plugins/publish/extract_thumbnail.py index b4ed8dce4c..4160ac4cb2 100644 --- a/openpype/hosts/maya/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/maya/plugins/publish/extract_thumbnail.py @@ -1,6 +1,7 @@ import os import glob import tempfile +import json import capture @@ -27,22 +28,25 @@ class ExtractThumbnail(publish.Extractor): camera = instance.data["review_camera"] - maya_setting = instance.context.data["project_settings"]["maya"] - plugin_setting = maya_setting["publish"]["ExtractPlayblast"] - capture_preset = plugin_setting["capture_preset"] + task_data = instance.data["anatomyData"].get("task", {}) + capture_preset = lib.get_capture_preset( + task_data.get("name"), + task_data.get("type"), + instance.data["subset"], + instance.context.data["project_settings"], + self.log + ) + + preset = lib.load_capture_preset(data=capture_preset) + + # "isolate_view" will already have been applied at creation, so we'll + # ignore it here. 
+ preset.pop("isolate_view") + override_viewport_options = ( capture_preset["Viewport Options"]["override_viewport_options"] ) - try: - preset = lib.load_capture_preset(data=capture_preset) - except KeyError as ke: - self.log.error("Error loading capture presets: {}".format(str(ke))) - preset = {} - self.log.info("Using viewport preset: {}".format(preset)) - - # preset["off_screen"] = False - preset["camera"] = camera preset["start_frame"] = instance.data["frameStart"] preset["end_frame"] = instance.data["frameStart"] @@ -58,10 +62,9 @@ class ExtractThumbnail(publish.Extractor): "overscan": 1.0, "depthOfField": cmds.getAttr("{0}.depthOfField".format(camera)), } - capture_presets = capture_preset # Set resolution variables from capture presets - width_preset = capture_presets["Resolution"]["width"] - height_preset = capture_presets["Resolution"]["height"] + width_preset = capture_preset["Resolution"]["width"] + height_preset = capture_preset["Resolution"]["height"] # Set resolution variables from asset values asset_data = instance.data["assetEntity"]["data"] asset_width = asset_data.get("resolutionWidth") @@ -104,14 +107,19 @@ class ExtractThumbnail(publish.Extractor): cmds.currentTime(refreshFrameInt - 1, edit=True) cmds.currentTime(refreshFrameInt, edit=True) + # Use displayLights setting from instance + key = "displayLights" + preset["viewport_options"][key] = instance.data[key] + # Override transparency if requested. transparency = instance.data.get("transparency", 0) if transparency != 0: preset["viewport2_options"]["transparencyAlgorithm"] = transparency # Isolate view is requested by having objects in the set besides a - # camera. - if preset.pop("isolate_view", False) and instance.data.get("isolate"): + # camera. If there is only 1 member it'll be the camera because we + # validate to have 1 camera only. 
+ if instance.data["isolate"] and len(instance.data["setMembers"]) > 1: preset["isolate"] = instance.data["setMembers"] # Show or Hide Image Plane @@ -139,6 +147,13 @@ class ExtractThumbnail(publish.Extractor): preset.update(panel_preset) cmds.setFocus(panel) + if os.environ.get("OPENPYPE_DEBUG") == "1": + self.log.debug( + "Using preset: {}".format( + json.dumps(preset, indent=4, sort_keys=True) + ) + ) + path = capture.capture(**preset) playblast = self._fix_playblast_output_path(path) diff --git a/openpype/hosts/maya/plugins/publish/extract_xgen.py b/openpype/hosts/maya/plugins/publish/extract_xgen.py index 0cc842b4ec..fb097ca84a 100644 --- a/openpype/hosts/maya/plugins/publish/extract_xgen.py +++ b/openpype/hosts/maya/plugins/publish/extract_xgen.py @@ -65,9 +65,10 @@ class ExtractXgen(publish.Extractor): ) cmds.delete(set(children) - set(shapes)) - duplicate_transform = cmds.parent( - duplicate_transform, world=True - )[0] + if cmds.listRelatives(duplicate_transform, parent=True): + duplicate_transform = cmds.parent( + duplicate_transform, world=True + )[0] duplicate_nodes.append(duplicate_transform) diff --git a/openpype/hosts/maya/plugins/publish/validate_attributes.py b/openpype/hosts/maya/plugins/publish/validate_attributes.py index 6ca9afb9a4..7ebd9d7d03 100644 --- a/openpype/hosts/maya/plugins/publish/validate_attributes.py +++ b/openpype/hosts/maya/plugins/publish/validate_attributes.py @@ -6,7 +6,7 @@ import pyblish.api from openpype.hosts.maya.api.lib import set_attribute from openpype.pipeline.publish import ( - RepairContextAction, + RepairAction, ValidateContentsOrder, ) @@ -26,7 +26,7 @@ class ValidateAttributes(pyblish.api.InstancePlugin): order = ValidateContentsOrder label = "Attributes" hosts = ["maya"] - actions = [RepairContextAction] + actions = [RepairAction] optional = True attributes = None @@ -81,7 +81,7 @@ class ValidateAttributes(pyblish.api.InstancePlugin): if node_name not in attributes: continue - for attr_name, expected in attributes.items(): + for attr_name, expected in attributes[node_name].items(): # Skip if attribute does not exist if not cmds.attributeQuery(attr_name, node=node, exists=True): diff --git a/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py b/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py index 74269cc506..7dd66eed6c 100644 --- a/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py +++ b/openpype/hosts/maya/plugins/publish/validate_mesh_overlapping_uvs.py @@ -255,7 +255,7 @@ class ValidateMeshHasOverlappingUVs(pyblish.api.InstancePlugin): # Store original uv set original_current_uv_set = cmds.polyUVSet(mesh, query=True, - currentUVSet=True) + currentUVSet=True)[0] overlapping_faces = [] for uv_set in cmds.polyUVSet(mesh, query=True, allUVSets=True): diff --git a/openpype/hosts/maya/plugins/publish/validate_review.py b/openpype/hosts/maya/plugins/publish/validate_review.py new file mode 100644 index 0000000000..12a2e7f86f --- /dev/null +++ b/openpype/hosts/maya/plugins/publish/validate_review.py @@ -0,0 +1,30 @@ +import pyblish.api + +from openpype.pipeline.publish import ( + ValidateContentsOrder, PublishValidationError +) + + +class ValidateReview(pyblish.api.InstancePlugin): + """Validate review.""" + + order = ValidateContentsOrder + label = "Validate Review" + families = ["review"] + + def process(self, instance): + cameras = instance.data["cameras"] + + # validate required settings + if len(cameras) == 0: + raise PublishValidationError( + "No camera found in review instance: 
{}".format(instance) + ) + elif len(cameras) > 2: + raise PublishValidationError( + "Only a single camera is allowed for a review instance but " + "more than one camera found in review instance: {}. " + "Cameras found: {}".format(instance, ", ".join(cameras)) + ) + + self.log.debug('camera: {}'.format(instance.data["review_camera"])) diff --git a/openpype/hosts/maya/plugins/publish/validate_single_assembly.py b/openpype/hosts/maya/plugins/publish/validate_single_assembly.py index 8771ca58d1..b768c9c4e8 100644 --- a/openpype/hosts/maya/plugins/publish/validate_single_assembly.py +++ b/openpype/hosts/maya/plugins/publish/validate_single_assembly.py @@ -19,7 +19,7 @@ class ValidateSingleAssembly(pyblish.api.InstancePlugin): order = ValidateContentsOrder hosts = ['maya'] - families = ['rig', 'animation'] + families = ['rig'] label = 'Single Assembly' def process(self, instance): diff --git a/openpype/hosts/maya/plugins/publish/validate_xgen.py b/openpype/hosts/maya/plugins/publish/validate_xgen.py index 2870909974..47b24e218c 100644 --- a/openpype/hosts/maya/plugins/publish/validate_xgen.py +++ b/openpype/hosts/maya/plugins/publish/validate_xgen.py @@ -57,3 +57,16 @@ class ValidateXgen(pyblish.api.InstancePlugin): json.dumps(inactive_modifiers, indent=4, sort_keys=True) ) ) + + # We need a namespace else there will be a naming conflict when + # extracting because of stripping namespaces and parenting to world. + node_names = [instance.data["xgmPalette"]] + for _, connections in instance.data["xgenConnections"].items(): + node_names.append(connections["transform"].split(".")[0]) + + non_namespaced_nodes = [n for n in node_names if ":" not in n] + if non_namespaced_nodes: + raise PublishValidationError( + "Could not find namespace on {}. Namespace is required for" + " xgen publishing.".format(non_namespaced_nodes) + ) diff --git a/openpype/hosts/maya/startup/userSetup.py b/openpype/hosts/maya/startup/userSetup.py index c77ecb829e..ae6a999d98 100644 --- a/openpype/hosts/maya/startup/userSetup.py +++ b/openpype/hosts/maya/startup/userSetup.py @@ -1,5 +1,4 @@ import os -from functools import partial from openpype.settings import get_project_settings from openpype.pipeline import install_host @@ -13,24 +12,41 @@ install_host(host) print("Starting OpenPype usersetup...") +project_settings = get_project_settings(os.environ['AVALON_PROJECT']) + +# Loading plugins explicitly. +explicit_plugins_loading = project_settings["maya"]["explicit_plugins_loading"] +if explicit_plugins_loading["enabled"]: + def _explicit_load_plugins(): + for plugin in explicit_plugins_loading["plugins_to_load"]: + if plugin["enabled"]: + print("Loading plug-in: " + plugin["name"]) + try: + cmds.loadPlugin(plugin["name"], quiet=True) + except RuntimeError as e: + print(e) + + # We need to load plugins deferred as loading them directly does not work + # correctly due to Maya's initialization. + cmds.evalDeferred( + _explicit_load_plugins, + lowestPriority=True + ) # Open Workfile Post Initialization. key = "OPENPYPE_OPEN_WORKFILE_POST_INITIALIZATION" if bool(int(os.environ.get(key, "0"))): + def _log_and_open(): + path = os.environ["AVALON_LAST_WORKFILE"] + print("Opening \"{}\"".format(path)) + cmds.file(path, open=True, force=True) cmds.evalDeferred( - partial( - cmds.file, - os.environ["AVALON_LAST_WORKFILE"], - open=True, - force=True - ), + _log_and_open, lowestPriority=True ) - # Build a shelf. 
-settings = get_project_settings(os.environ['AVALON_PROJECT']) -shelf_preset = settings['maya'].get('project_shelf') +shelf_preset = project_settings['maya'].get('project_shelf') if shelf_preset: project = os.environ["AVALON_PROJECT"] diff --git a/openpype/hosts/maya/tools/mayalookassigner/alembic.py b/openpype/hosts/maya/tools/mayalookassigner/alembic.py new file mode 100644 index 0000000000..6885e923d3 --- /dev/null +++ b/openpype/hosts/maya/tools/mayalookassigner/alembic.py @@ -0,0 +1,97 @@ +# -*- coding: utf-8 -*- +"""Shared Alembic helpers for the Maya look assigner.""" +import os +from collections import defaultdict +import logging + +import six + +import alembic.Abc + + +log = logging.getLogger(__name__) + + +def get_alembic_paths_by_property(filename, attr, verbose=False): + # type: (str, str, bool) -> dict + """Return attribute value per object in the Alembic file. + + Reads an Alembic archive hierarchy and retrieves the + value from the `attr` properties on the objects. + + Args: + filename (str): Full path to Alembic archive to read. + attr (str): Id attribute. + verbose (bool): Whether to verbosely log missing attributes. + + Returns: + dict: Mapping of node full path to its id + + """ + # Normalize alembic path + filename = os.path.normpath(filename) + filename = filename.replace("\\", "/") + filename = str(filename) # path must be string + + try: + archive = alembic.Abc.IArchive(filename) + except RuntimeError: + # invalid alembic file - probably vrmesh + log.warning("{} is not an alembic file".format(filename)) + return {} + root = archive.getTop() + + iterator = list(root.children) + obj_ids = {} + + for obj in iterator: + name = obj.getFullName() + + # include children for coming iterations + iterator.extend(obj.children) + + props = obj.getProperties() + if props.getNumProperties() == 0: + # Skip those without properties, e.g. '/materials' in a gpuCache + continue + + # The custom attribute is under the properties' first container under + # the ".arbGeomParams" + prop = props.getProperty(0) # get base property + + _property = None + try: + geo_params = prop.getProperty('.arbGeomParams') + _property = geo_params.getProperty(attr) + except KeyError: + if verbose: + log.debug("Missing attr on: {0}".format(name)) + continue + + if not _property.isConstant(): + log.warning("Id not constant on: {0}".format(name)) + + # Get first value sample + value = _property.getValue()[0] + + obj_ids[name] = value + + return obj_ids + + +def get_alembic_ids_cache(path): + # type: (str) -> dict + """Build an id to node mapping from an Alembic file. + + Nodes without IDs are ignored. + + Returns: + dict: Mapping of id to nodes in the Alembic.
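A small pure-Python sketch of the inversion get_alembic_ids_cache performs on top of get_alembic_paths_by_property; the node paths and id values below are made up.

from collections import defaultdict

# Hypothetical result of get_alembic_paths_by_property(path, attr="cbId").
node_ids = {
    "/root/geo_grp/body_GEO": "id-a",
    "/root/geo_grp/head_GEO": "id-a",
    "/root/geo_grp/prop_GEO": "id-b",
}

id_nodes = defaultdict(list)
for node, _id in node_ids.items():
    id_nodes[_id].append(node)

print(dict(id_nodes))
# {'id-a': ['/root/geo_grp/body_GEO', '/root/geo_grp/head_GEO'],
#  'id-b': ['/root/geo_grp/prop_GEO']}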
+ + """ + node_ids = get_alembic_paths_by_property(path, attr="cbId") + id_nodes = defaultdict(list) + for node, _id in six.iteritems(node_ids): + id_nodes[_id].append(node) + + return dict(six.iteritems(id_nodes)) diff --git a/openpype/hosts/maya/tools/mayalookassigner/app.py b/openpype/hosts/maya/tools/mayalookassigner/app.py index 2a8775fff6..13da999c2d 100644 --- a/openpype/hosts/maya/tools/mayalookassigner/app.py +++ b/openpype/hosts/maya/tools/mayalookassigner/app.py @@ -250,7 +250,7 @@ class MayaLookAssignerWindow(QtWidgets.QWidget): if vp in nodes: vrayproxy_assign_look(vp, subset_name) - nodes = list(set(item["nodes"]).difference(vray_proxies)) + nodes = list(set(nodes).difference(vray_proxies)) else: self.echo( "Could not assign to VRayProxy because vrayformaya plugin " @@ -260,17 +260,18 @@ class MayaLookAssignerWindow(QtWidgets.QWidget): # Assign Arnold Standin look. if cmds.pluginInfo("mtoa", query=True, loaded=True): arnold_standins = set(cmds.ls(type="aiStandIn", long=True)) + for standin in arnold_standins: if standin in nodes: arnold_standin.assign_look(standin, subset_name) + + nodes = list(set(nodes).difference(arnold_standins)) else: self.echo( "Could not assign to aiStandIn because mtoa plugin is not " "loaded." ) - nodes = list(set(item["nodes"]).difference(arnold_standins)) - # Assign look if nodes: assign_look_by_version(nodes, version_id=version["_id"]) diff --git a/openpype/hosts/maya/tools/mayalookassigner/arnold_standin.py b/openpype/hosts/maya/tools/mayalookassigner/arnold_standin.py index 7eeeb72553..0ce2b21dcd 100644 --- a/openpype/hosts/maya/tools/mayalookassigner/arnold_standin.py +++ b/openpype/hosts/maya/tools/mayalookassigner/arnold_standin.py @@ -9,6 +9,7 @@ from openpype.pipeline import legacy_io from openpype.client import get_last_version_by_subset_name from openpype.hosts.maya import api from . import lib +from .alembic import get_alembic_ids_cache log = logging.getLogger(__name__) @@ -68,6 +69,11 @@ def get_nodes_by_id(standin): (dict): Dictionary with node full name/path and id. """ path = cmds.getAttr(standin + ".dso") + + if path.endswith(".abc"): + # Support alembic files directly + return get_alembic_ids_cache(path) + json_path = None for f in os.listdir(os.path.dirname(path)): if f.endswith(".json"): diff --git a/openpype/hosts/maya/tools/mayalookassigner/vray_proxies.py b/openpype/hosts/maya/tools/mayalookassigner/vray_proxies.py index 1d2ec5fd87..c875fec7f0 100644 --- a/openpype/hosts/maya/tools/mayalookassigner/vray_proxies.py +++ b/openpype/hosts/maya/tools/mayalookassigner/vray_proxies.py @@ -1,108 +1,20 @@ # -*- coding: utf-8 -*- """Tools for loading looks to vray proxies.""" -import os from collections import defaultdict import logging -import six - -import alembic.Abc from maya import cmds from openpype.client import get_last_version_by_subset_name from openpype.pipeline import legacy_io import openpype.hosts.maya.lib as maya_lib from . import lib +from .alembic import get_alembic_ids_cache log = logging.getLogger(__name__) -def get_alembic_paths_by_property(filename, attr, verbose=False): - # type: (str, str, bool) -> dict - """Return attribute value per objects in the Alembic file. - - Reads an Alembic archive hierarchy and retrieves the - value from the `attr` properties on the objects. - - Args: - filename (str): Full path to Alembic archive to read. - attr (str): Id attribute. - verbose (bool): Whether to verbosely log missing attributes. 
- - Returns: - dict: Mapping of node full path with its id - - """ - # Normalize alembic path - filename = os.path.normpath(filename) - filename = filename.replace("\\", "/") - filename = str(filename) # path must be string - - try: - archive = alembic.Abc.IArchive(filename) - except RuntimeError: - # invalid alembic file - probably vrmesh - log.warning("{} is not an alembic file".format(filename)) - return {} - root = archive.getTop() - - iterator = list(root.children) - obj_ids = {} - - for obj in iterator: - name = obj.getFullName() - - # include children for coming iterations - iterator.extend(obj.children) - - props = obj.getProperties() - if props.getNumProperties() == 0: - # Skip those without properties, e.g. '/materials' in a gpuCache - continue - - # THe custom attribute is under the properties' first container under - # the ".arbGeomParams" - prop = props.getProperty(0) # get base property - - _property = None - try: - geo_params = prop.getProperty('.arbGeomParams') - _property = geo_params.getProperty(attr) - except KeyError: - if verbose: - log.debug("Missing attr on: {0}".format(name)) - continue - - if not _property.isConstant(): - log.warning("Id not constant on: {0}".format(name)) - - # Get first value sample - value = _property.getValue()[0] - - obj_ids[name] = value - - return obj_ids - - -def get_alembic_ids_cache(path): - # type: (str) -> dict - """Build a id to node mapping in Alembic file. - - Nodes without IDs are ignored. - - Returns: - dict: Mapping of id to nodes in the Alembic. - - """ - node_ids = get_alembic_paths_by_property(path, attr="cbId") - id_nodes = defaultdict(list) - for node, _id in six.iteritems(node_ids): - id_nodes[_id].append(node) - - return dict(six.iteritems(id_nodes)) - - def assign_vrayproxy_shaders(vrayproxy, assignments): # type: (str, dict) -> None """Assign shaders to content of Vray Proxy. diff --git a/openpype/hosts/nuke/api/lib.py b/openpype/hosts/nuke/api/lib.py index 157a02b9aa..64fa32a383 100644 --- a/openpype/hosts/nuke/api/lib.py +++ b/openpype/hosts/nuke/api/lib.py @@ -23,6 +23,9 @@ from openpype.client import ( from openpype.host import HostDirmap from openpype.tools.utils import host_tools +from openpype.pipeline.workfile.workfile_template_builder import ( + TemplateProfileNotFound +) from openpype.lib import ( env_value_to_bool, Logger, @@ -492,17 +495,17 @@ def get_avalon_knob_data(node, prefix="avalon:", create=True): data (dict) """ + data = {} + if AVALON_TAB not in node.knobs(): + return data + # check if lists if not isinstance(prefix, list): - prefix = list([prefix]) - - data = dict() + prefix = [prefix] # loop prefix for p in prefix: # check if the node is avalon tracked - if AVALON_TAB not in node.knobs(): - continue try: # check if data available on the node test = node[AVALON_DATA_GROUP].value() @@ -513,8 +516,7 @@ def get_avalon_knob_data(node, prefix="avalon:", create=True): if create: node = set_avalon_knob_data(node) return get_avalon_knob_data(node) - else: - return {} + return {} # get data from filtered knobs data.update({k.replace(p, ''): node[k].value() @@ -2684,7 +2686,10 @@ def start_workfile_template_builder(): # to avoid looping of the callback, remove it! log.info("Starting workfile template builder...") - build_workfile_template(workfile_creation_enabled=True) + try: + build_workfile_template(workfile_creation_enabled=True) + except TemplateProfileNotFound: + log.warning("Template profile not found. 
Skipping...") # remove callback since it would be duplicating the workfile nuke.removeOnCreate(start_workfile_template_builder, nodeClass="Root") diff --git a/openpype/hosts/nuke/api/plugin.py b/openpype/hosts/nuke/api/plugin.py index cc3af2a38f..3566cb64c1 100644 --- a/openpype/hosts/nuke/api/plugin.py +++ b/openpype/hosts/nuke/api/plugin.py @@ -208,6 +208,12 @@ class NukeCreator(NewCreator): def collect_instances(self): cached_instances = _collect_and_cache_nodes(self) + attr_def_keys = { + attr_def.key + for attr_def in self.get_instance_attr_defs() + } + attr_def_keys.discard(None) + for (node, data) in cached_instances[self.identifier]: created_instance = CreatedInstance.from_existing( data, self @@ -215,6 +221,12 @@ class NukeCreator(NewCreator): created_instance.transient_data["node"] = node self._add_instance_to_context(created_instance) + for key in ( + set(created_instance["creator_attributes"].keys()) + - attr_def_keys + ): + created_instance["creator_attributes"].pop(key) + def update_instances(self, update_list): for created_inst, _changes in update_list: instance_node = created_inst.transient_data["node"] @@ -301,8 +313,11 @@ class NukeWriteCreator(NukeCreator): def get_instance_attr_defs(self): attr_defs = [ self._get_render_target_enum(), - self._get_reviewable_bool() ] + # add reviewable attribute + if "reviewable" in self.instance_attributes: + attr_defs.append(self._get_reviewable_bool()) + return attr_defs def _get_render_target_enum(self): @@ -322,7 +337,7 @@ class NukeWriteCreator(NukeCreator): def _get_reviewable_bool(self): return BoolDef( "review", - default=("reviewable" in self.instance_attributes), + default=True, label="Review" ) diff --git a/openpype/hosts/nuke/api/workfile_template_builder.py b/openpype/hosts/nuke/api/workfile_template_builder.py index cf85a5ea05..72d4ffb476 100644 --- a/openpype/hosts/nuke/api/workfile_template_builder.py +++ b/openpype/hosts/nuke/api/workfile_template_builder.py @@ -219,14 +219,17 @@ class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin): # fix the problem of z_order for backdrops self._fix_z_order(placeholder) - self._imprint_siblings(placeholder) + + if placeholder.data.get("keep_placeholder"): + self._imprint_siblings(placeholder) if placeholder.data["nb_children"] == 0: # save initial nodes positions and dimensions, update them # and set inputs and outputs of loaded nodes + if placeholder.data.get("keep_placeholder"): + self._imprint_inits() + self._update_nodes(placeholder, nuke.allNodes(), nodes_loaded) - self._imprint_inits() - self._update_nodes(placeholder, nuke.allNodes(), nodes_loaded) self._set_loaded_connections(placeholder) elif placeholder.data["siblings"]: @@ -629,14 +632,18 @@ class NukePlaceholderCreatePlugin( # fix the problem of z_order for backdrops self._fix_z_order(placeholder) - self._imprint_siblings(placeholder) + + if placeholder.data.get("keep_placeholder"): + self._imprint_siblings(placeholder) if placeholder.data["nb_children"] == 0: # save initial nodes positions and dimensions, update them # and set inputs and outputs of created nodes - self._imprint_inits() - self._update_nodes(placeholder, nuke.allNodes(), nodes_created) + if placeholder.data.get("keep_placeholder"): + self._imprint_inits() + self._update_nodes(placeholder, nuke.allNodes(), nodes_created) + self._set_created_connections(placeholder) elif placeholder.data["siblings"]: diff --git a/openpype/hosts/nuke/plugins/create/convert_legacy.py b/openpype/hosts/nuke/plugins/create/convert_legacy.py index 
c143e4cb27..377e9f78f6 100644 --- a/openpype/hosts/nuke/plugins/create/convert_legacy.py +++ b/openpype/hosts/nuke/plugins/create/convert_legacy.py @@ -2,7 +2,8 @@ from openpype.pipeline.create.creator_plugins import SubsetConvertorPlugin from openpype.hosts.nuke.api.lib import ( INSTANCE_DATA_KNOB, get_node_data, - get_avalon_knob_data + get_avalon_knob_data, + AVALON_TAB, ) from openpype.hosts.nuke.api.plugin import convert_to_valid_instaces @@ -17,13 +18,15 @@ class LegacyConverted(SubsetConvertorPlugin): legacy_found = False # search for first available legacy item for node in nuke.allNodes(recurseGroups=True): - if node.Class() in ["Viewer", "Dot"]: continue if get_node_data(node, INSTANCE_DATA_KNOB): continue + if AVALON_TAB not in node.knobs(): + continue + # get data from avalon knob avalon_knob_data = get_avalon_knob_data( node, ["avalon:", "ak:"], create=False) diff --git a/openpype/hosts/nuke/plugins/create/create_write_image.py b/openpype/hosts/nuke/plugins/create/create_write_image.py index d38253ab2f..b74cea5dae 100644 --- a/openpype/hosts/nuke/plugins/create/create_write_image.py +++ b/openpype/hosts/nuke/plugins/create/create_write_image.py @@ -63,13 +63,6 @@ class CreateWriteImage(napi.NukeWriteCreator): default=nuke.frame() ) - def get_instance_attr_defs(self): - attr_defs = [ - self._get_render_target_enum(), - self._get_reviewable_bool() - ] - return attr_defs - def create_instance_node(self, subset_name, instance_data): linked_knobs_ = [] if "use_range_limit" in self.instance_attributes: diff --git a/openpype/hosts/nuke/plugins/create/create_write_prerender.py b/openpype/hosts/nuke/plugins/create/create_write_prerender.py index 8103cb7c4d..387768b1dd 100644 --- a/openpype/hosts/nuke/plugins/create/create_write_prerender.py +++ b/openpype/hosts/nuke/plugins/create/create_write_prerender.py @@ -41,13 +41,6 @@ class CreateWritePrerender(napi.NukeWriteCreator): ] return attr_defs - def get_instance_attr_defs(self): - attr_defs = [ - self._get_render_target_enum(), - self._get_reviewable_bool() - ] - return attr_defs - def create_instance_node(self, subset_name, instance_data): linked_knobs_ = [] if "use_range_limit" in self.instance_attributes: diff --git a/openpype/hosts/nuke/plugins/create/create_write_render.py b/openpype/hosts/nuke/plugins/create/create_write_render.py index 23efa62e36..09257f662e 100644 --- a/openpype/hosts/nuke/plugins/create/create_write_render.py +++ b/openpype/hosts/nuke/plugins/create/create_write_render.py @@ -38,13 +38,6 @@ class CreateWriteRender(napi.NukeWriteCreator): ] return attr_defs - def get_instance_attr_defs(self): - attr_defs = [ - self._get_render_target_enum(), - self._get_reviewable_bool() - ] - return attr_defs - def create_instance_node(self, subset_name, instance_data): # add fpath_template write_data = { diff --git a/openpype/hosts/nuke/plugins/publish/collect_writes.py b/openpype/hosts/nuke/plugins/publish/collect_writes.py index 536a0698f3..6697a1e59a 100644 --- a/openpype/hosts/nuke/plugins/publish/collect_writes.py +++ b/openpype/hosts/nuke/plugins/publish/collect_writes.py @@ -190,7 +190,7 @@ class CollectNukeWrites(pyblish.api.InstancePlugin, # make sure rendered sequence on farm will # be used for extract review - if not instance.data["review"]: + if not instance.data.get("review"): instance.data["useSequenceForReview"] = False self.log.debug("instance.data: {}".format(pformat(instance.data))) diff --git a/openpype/hosts/nuke/plugins/publish/validate_asset_name.py 
b/openpype/hosts/nuke/plugins/publish/validate_asset_name.py index f6822bee45..df05f76a5b 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_asset_name.py +++ b/openpype/hosts/nuke/plugins/publish/validate_asset_name.py @@ -9,9 +9,9 @@ import openpype.hosts.nuke.api.lib as nlib from openpype.pipeline.publish import ( ValidateContentsOrder, PublishXmlValidationError, + OptionalPyblishPluginMixin ) - class SelectInvalidInstances(pyblish.api.Action): """Select invalid instances in Outliner.""" @@ -92,7 +92,10 @@ class RepairSelectInvalidInstances(pyblish.api.Action): nlib.set_node_data(node, nlib.INSTANCE_DATA_KNOB, node_data) -class ValidateCorrectAssetName(pyblish.api.InstancePlugin): +class ValidateCorrectAssetName( + pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin +): """Validator to check if instance asset match context asset. When working in per-shot style you always publish data in context of @@ -111,6 +114,9 @@ class ValidateCorrectAssetName(pyblish.api.InstancePlugin): optional = True def process(self, instance): + if not self.is_active(instance.data): + return + asset = instance.data.get("asset") context_asset = instance.context.data["assetEntity"]["name"] node = instance.data["transientData"]["node"] diff --git a/openpype/hosts/nuke/plugins/publish/validate_backdrop.py b/openpype/hosts/nuke/plugins/publish/validate_backdrop.py index 5f4a5c3ab0..ad60089952 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_backdrop.py +++ b/openpype/hosts/nuke/plugins/publish/validate_backdrop.py @@ -1,8 +1,12 @@ import nuke import pyblish from openpype.hosts.nuke import api as napi -from openpype.pipeline import PublishXmlValidationError +from openpype.pipeline.publish import ( + ValidateContentsOrder, + PublishXmlValidationError, + OptionalPyblishPluginMixin +) class SelectCenterInNodeGraph(pyblish.api.Action): """ @@ -46,12 +50,15 @@ class SelectCenterInNodeGraph(pyblish.api.Action): nuke.zoom(2, [min(all_xC), min(all_yC)]) -class ValidateBackdrop(pyblish.api.InstancePlugin): +class ValidateBackdrop( + pyblish.api.InstancePlugin, + OptionalPyblishPluginMixin +): """ Validate amount of nodes on backdrop node in case user forgotten to add nodes above the publishing backdrop node. 
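The validators in this area all adopt the same opt-in pattern; a minimal sketch, assuming only that OptionalPyblishPluginMixin and its is_active() helper behave as used in the hunks above (the plugin below is a made-up illustration, not part of the change set).

import pyblish.api

from openpype.pipeline.publish import OptionalPyblishPluginMixin


class ValidateDemoOptional(
    pyblish.api.InstancePlugin,
    OptionalPyblishPluginMixin
):
    """Hypothetical validator showing the optional toggle."""

    order = pyblish.api.ValidatorOrder
    label = "Demo Optional Validator"
    optional = True

    def process(self, instance):
        # Skip entirely when the artist disabled the plugin in the
        # publisher UI.
        if not self.is_active(instance.data):
            return
        # Actual checks would go here.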
""" - order = pyblish.api.ValidatorOrder + order = ValidateContentsOrder optional = True families = ["nukenodes"] label = "Validate Backdrop" @@ -59,6 +66,9 @@ class ValidateBackdrop(pyblish.api.InstancePlugin): actions = [SelectCenterInNodeGraph] def process(self, instance): + if not self.is_active(instance.data): + return + child_nodes = instance.data["transientData"]["childNodes"] connections_out = instance.data["transientData"]["nodeConnectionsOut"] diff --git a/openpype/hosts/nuke/plugins/publish/validate_script_attributes.py b/openpype/hosts/nuke/plugins/publish/validate_script_attributes.py index bd0bbf8044..57bfce7993 100644 --- a/openpype/hosts/nuke/plugins/publish/validate_script_attributes.py +++ b/openpype/hosts/nuke/plugins/publish/validate_script_attributes.py @@ -18,7 +18,7 @@ class ValidateScriptAttributes( order = pyblish.api.ValidatorOrder + 0.1 families = ["workfile"] - label = "Validatte script attributes" + label = "Validate script attributes" hosts = ["nuke"] optional = True actions = [RepairAction] diff --git a/openpype/hosts/photoshop/plugins/create/workfile_creator.py b/openpype/hosts/photoshop/lib.py similarity index 83% rename from openpype/hosts/photoshop/plugins/create/workfile_creator.py rename to openpype/hosts/photoshop/lib.py index f5d56adcbc..ae7a33b7b6 100644 --- a/openpype/hosts/photoshop/plugins/create/workfile_creator.py +++ b/openpype/hosts/photoshop/lib.py @@ -7,28 +7,26 @@ from openpype.pipeline import ( from openpype.hosts.photoshop.api.pipeline import cache_and_get_instances -class PSWorkfileCreator(AutoCreator): - identifier = "workfile" - family = "workfile" - - default_variant = "Main" - +class PSAutoCreator(AutoCreator): + """Generic autocreator to extend.""" def get_instance_attr_defs(self): return [] def collect_instances(self): for instance_data in cache_and_get_instances(self): creator_id = instance_data.get("creator_identifier") + if creator_id == self.identifier: - subset_name = instance_data["subset"] - instance = CreatedInstance( - self.family, subset_name, instance_data, self + instance = CreatedInstance.from_existing( + instance_data, self ) self._add_instance_to_context(instance) def update_instances(self, update_list): - # nothing to change on workfiles - pass + self.log.debug("update_list:: {}".format(update_list)) + for created_inst, _changes in update_list: + api.stub().imprint(created_inst.get("instance_id"), + created_inst.data_to_store()) def create(self, options=None): existing_instance = None @@ -58,6 +56,9 @@ class PSWorkfileCreator(AutoCreator): project_name, host_name, None )) + if not self.active_on_create: + data["active"] = False + new_instance = CreatedInstance( self.family, subset_name, data, self ) diff --git a/openpype/hosts/photoshop/plugins/create/create_flatten_image.py b/openpype/hosts/photoshop/plugins/create/create_flatten_image.py new file mode 100644 index 0000000000..3bc61c8184 --- /dev/null +++ b/openpype/hosts/photoshop/plugins/create/create_flatten_image.py @@ -0,0 +1,120 @@ +from openpype.pipeline import CreatedInstance + +from openpype.lib import BoolDef +import openpype.hosts.photoshop.api as api +from openpype.hosts.photoshop.lib import PSAutoCreator +from openpype.pipeline.create import get_subset_name +from openpype.client import get_asset_by_name + + +class AutoImageCreator(PSAutoCreator): + """Creates flatten image from all visible layers. + + Used in simplified publishing as auto created instance. 
+    Must be enabled in Settings and a template for the subset name must be provided.
+    """
+    identifier = "auto_image"
+    family = "image"
+
+    # Settings
+    default_variant = ""
+    # - Mark instance for review by default
+    mark_for_review = True
+    active_on_create = True
+
+    def create(self, options=None):
+        existing_instance = None
+        for instance in self.create_context.instances:
+            if instance.creator_identifier == self.identifier:
+                existing_instance = instance
+                break
+
+        context = self.create_context
+        project_name = context.get_current_project_name()
+        asset_name = context.get_current_asset_name()
+        task_name = context.get_current_task_name()
+        host_name = context.host_name
+        asset_doc = get_asset_by_name(project_name, asset_name)
+
+        if existing_instance is None:
+            subset_name = get_subset_name(
+                self.family, self.default_variant, task_name, asset_doc,
+                project_name, host_name
+            )
+
+            publishable_ids = [layer.id for layer in api.stub().get_layers()
+                               if layer.visible]
+            data = {
+                "asset": asset_name,
+                "task": task_name,
+                # 'ids' reference layers only "virtually", they won't get
+                # grouped as 'members' do (same as color coded layers in WP)
+                "ids": publishable_ids
+            }
+
+            if not self.active_on_create:
+                data["active"] = False
+
+            creator_attributes = {"mark_for_review": self.mark_for_review}
+            data.update({"creator_attributes": creator_attributes})
+
+            new_instance = CreatedInstance(
+                self.family, subset_name, data, self
+            )
+            self._add_instance_to_context(new_instance)
+            api.stub().imprint(new_instance.get("instance_id"),
+                               new_instance.data_to_store())
+
+        elif (  # existing instance from different context
+            existing_instance["asset"] != asset_name
+            or existing_instance["task"] != task_name
+        ):
+            subset_name = get_subset_name(
+                self.family, self.default_variant, task_name, asset_doc,
+                project_name, host_name
+            )
+
+            existing_instance["asset"] = asset_name
+            existing_instance["task"] = task_name
+            existing_instance["subset"] = subset_name
+
+            api.stub().imprint(existing_instance.get("instance_id"),
+                               existing_instance.data_to_store())
+
+    def get_pre_create_attr_defs(self):
+        return [
+            BoolDef(
+                "mark_for_review",
+                label="Review",
+                default=self.mark_for_review
+            )
+        ]
+
+    def get_instance_attr_defs(self):
+        return [
+            BoolDef(
+                "mark_for_review",
+                label="Review"
+            )
+        ]
+
+    def apply_settings(self, project_settings, system_settings):
+        plugin_settings = (
+            project_settings["photoshop"]["create"]["AutoImageCreator"]
+        )
+
+        self.active_on_create = plugin_settings["active_on_create"]
+        self.default_variant = plugin_settings["default_variant"]
+        self.mark_for_review = plugin_settings["mark_for_review"]
+        self.enabled = plugin_settings["enabled"]
+
+    def get_detail_description(self):
+        return """Creator for flatten image.
+
+        A studio might configure a simplified publishing workflow. In that
+        case an `image` instance is created automatically and publishes a
+        flattened image from all visible layers.
+
+        An artist might still disable this instance from publishing or from
+        creating a review for it.
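Both branches of `create()` above resolve the subset name through `get_subset_name`. A hedged usage sketch with made-up values (the resolved name depends on the project's subset name template; "{family}{Variant}" is the usual default):

    from openpype.client import get_asset_by_name
    from openpype.pipeline.create import get_subset_name

    project_name = "demo_project"                         # hypothetical
    asset_doc = get_asset_by_name(project_name, "sh010")  # hypothetical asset

    subset_name = get_subset_name(
        "image",         # family
        "Main",          # variant
        "compositing",   # task name
        asset_doc,
        project_name,
        "photoshop",     # host name
    )
    # With the default template this resolves to e.g. "imageMain".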
+ """ diff --git a/openpype/hosts/photoshop/plugins/create/create_image.py b/openpype/hosts/photoshop/plugins/create/create_image.py index 3d82d6b6f0..f3165fca57 100644 --- a/openpype/hosts/photoshop/plugins/create/create_image.py +++ b/openpype/hosts/photoshop/plugins/create/create_image.py @@ -23,6 +23,11 @@ class ImageCreator(Creator): family = "image" description = "Image creator" + # Settings + default_variants = "" + mark_for_review = False + active_on_create = True + def create(self, subset_name_from_ui, data, pre_create_data): groups_to_create = [] top_layers_to_wrap = [] @@ -94,6 +99,12 @@ class ImageCreator(Creator): data.update({"layer_name": layer_name}) data.update({"long_name": "_".join(layer_names_in_hierarchy)}) + creator_attributes = {"mark_for_review": self.mark_for_review} + data.update({"creator_attributes": creator_attributes}) + + if not self.active_on_create: + data["active"] = False + new_instance = CreatedInstance(self.family, subset_name, data, self) @@ -134,11 +145,6 @@ class ImageCreator(Creator): self.host.remove_instance(instance) self._remove_instance_from_context(instance) - def get_default_variants(self): - return [ - "Main" - ] - def get_pre_create_attr_defs(self): output = [ BoolDef("use_selection", default=True, @@ -148,10 +154,34 @@ class ImageCreator(Creator): label="Create separate instance for each selected"), BoolDef("use_layer_name", default=False, - label="Use layer name in subset") + label="Use layer name in subset"), + BoolDef( + "mark_for_review", + label="Create separate review", + default=False + ) ] return output + def get_instance_attr_defs(self): + return [ + BoolDef( + "mark_for_review", + label="Review" + ) + ] + + def apply_settings(self, project_settings, system_settings): + plugin_settings = ( + project_settings["photoshop"]["create"]["ImageCreator"] + ) + + self.active_on_create = plugin_settings["active_on_create"] + self.default_variants = plugin_settings["default_variants"] + self.mark_for_review = plugin_settings["mark_for_review"] + self.enabled = plugin_settings["enabled"] + + def get_detail_description(self): return """Creator for Image instances @@ -180,6 +210,11 @@ class ImageCreator(Creator): but layer name should be used (set explicitly in UI or implicitly if multiple images should be created), it is added in capitalized form as a suffix to subset name. + + Each image could have its separate review created if necessary via + `Create separate review` toggle. + But more use case is to use separate `review` instance to create review + from all published items. """ def _handle_legacy(self, instance_data): diff --git a/openpype/hosts/photoshop/plugins/create/create_review.py b/openpype/hosts/photoshop/plugins/create/create_review.py new file mode 100644 index 0000000000..064485d465 --- /dev/null +++ b/openpype/hosts/photoshop/plugins/create/create_review.py @@ -0,0 +1,28 @@ +from openpype.hosts.photoshop.lib import PSAutoCreator + + +class ReviewCreator(PSAutoCreator): + """Creates review instance which might be disabled from publishing.""" + identifier = "review" + family = "review" + + default_variant = "Main" + + def get_detail_description(self): + return """Auto creator for review. + + Photoshop review is created from all published images or from all + visible layers if no `image` instances got created. + + Review might be disabled by an artist (instance shouldn't be deleted as + it will get recreated in next publish either way). 
+ """ + + def apply_settings(self, project_settings, system_settings): + plugin_settings = ( + project_settings["photoshop"]["create"]["ReviewCreator"] + ) + + self.default_variant = plugin_settings["default_variant"] + self.active_on_create = plugin_settings["active_on_create"] + self.enabled = plugin_settings["enabled"] diff --git a/openpype/hosts/photoshop/plugins/create/create_workfile.py b/openpype/hosts/photoshop/plugins/create/create_workfile.py new file mode 100644 index 0000000000..d498f0549c --- /dev/null +++ b/openpype/hosts/photoshop/plugins/create/create_workfile.py @@ -0,0 +1,28 @@ +from openpype.hosts.photoshop.lib import PSAutoCreator + + +class WorkfileCreator(PSAutoCreator): + identifier = "workfile" + family = "workfile" + + default_variant = "Main" + + def get_detail_description(self): + return """Auto creator for workfile. + + It is expected that each publish will also publish its source workfile + for safekeeping. This creator triggers automatically without need for + an artist to remember and trigger it explicitly. + + Workfile instance could be disabled if it is not required to publish + workfile. (Instance shouldn't be deleted though as it will be recreated + in next publish automatically). + """ + + def apply_settings(self, project_settings, system_settings): + plugin_settings = ( + project_settings["photoshop"]["create"]["WorkfileCreator"] + ) + + self.active_on_create = plugin_settings["active_on_create"] + self.enabled = plugin_settings["enabled"] diff --git a/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py b/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py new file mode 100644 index 0000000000..ce408f8d01 --- /dev/null +++ b/openpype/hosts/photoshop/plugins/publish/collect_auto_image.py @@ -0,0 +1,101 @@ +import pyblish.api + +from openpype.hosts.photoshop import api as photoshop +from openpype.pipeline.create import get_subset_name + + +class CollectAutoImage(pyblish.api.ContextPlugin): + """Creates auto image in non artist based publishes (Webpublisher). 
+
+    'remotepublish' should be renamed to 'autopublish' or similar in the future
+    """
+
+    label = "Collect Auto Image"
+    hosts = ["photoshop"]
+    order = pyblish.api.CollectorOrder + 0.2
+
+    targets = ["remotepublish"]
+
+    def process(self, context):
+        family = "image"
+        for instance in context:
+            creator_identifier = instance.data.get("creator_identifier")
+            if creator_identifier == "auto_image":
+                self.log.debug("Auto image instance found, won't create new")
+                return
+
+        project_name = context.data["anatomyData"]["project"]["name"]
+        proj_settings = context.data["project_settings"]
+        task_name = context.data["anatomyData"]["task"]["name"]
+        host_name = context.data["hostName"]
+        asset_doc = context.data["assetEntity"]
+        asset_name = asset_doc["name"]
+
+        auto_creator = proj_settings.get(
+            "photoshop", {}).get(
+            "create", {}).get(
+            "AutoImageCreator", {})
+
+        if not auto_creator or not auto_creator["enabled"]:
+            self.log.debug("Auto image creator disabled, won't create new")
+            return
+
+        stub = photoshop.stub()
+        stored_items = stub.get_layers_metadata()
+        for item in stored_items:
+            if item.get("creator_identifier") == "auto_image":
+                if not item.get("active"):
+                    self.log.debug("Auto image instance disabled")
+                    return
+
+        layer_items = stub.get_layers()
+
+        publishable_ids = [layer.id for layer in layer_items
+                           if layer.visible]
+
+        # collect stored image instances
+        instance_names = []
+        for layer_item in layer_items:
+            layer_meta_data = stub.read(layer_item, stored_items)
+
+            # Skip layers without metadata.
+            if layer_meta_data is None:
+                continue
+
+            # Skip containers.
+            if "container" in layer_meta_data["id"]:
+                continue
+
+            # active might not be in legacy meta
+            if layer_meta_data.get("active", True) and layer_item.visible:
+                instance_names.append(layer_meta_data["subset"])
+
+        if len(instance_names) == 0:
+            variants = proj_settings.get(
+                "photoshop", {}).get(
+                "create", {}).get(
+                "CreateImage", {}).get(
+                "default_variants", [''])
+
+            variant = context.data.get("variant") or variants[0]
+
+            subset_name = get_subset_name(
+                family, variant, task_name, asset_doc,
+                project_name, host_name
+            )
+
+            instance = context.create_instance(subset_name)
+            instance.data["family"] = family
+            instance.data["asset"] = asset_name
+            instance.data["subset"] = subset_name
+            instance.data["ids"] = publishable_ids
+            instance.data["publish"] = True
+            instance.data["creator_identifier"] = "auto_image"
+
+            if auto_creator["mark_for_review"]:
+                instance.data["creator_attributes"] = {"mark_for_review": True}
+                instance.data["families"] = ["review"]
+
+            self.log.info("auto image instance: {} ".format(instance.data))
diff --git a/openpype/hosts/photoshop/plugins/publish/collect_auto_review.py b/openpype/hosts/photoshop/plugins/publish/collect_auto_review.py
new file mode 100644
index 0000000000..7de4adcaf4
--- /dev/null
+++ b/openpype/hosts/photoshop/plugins/publish/collect_auto_review.py
@@ -0,0 +1,92 @@
+"""
+Requires:
+    None
+
+Provides:
+    instance -> family ("review")
+"""
+import pyblish.api
+
+from openpype.hosts.photoshop import api as photoshop
+from openpype.pipeline.create import get_subset_name
+
+
+class CollectAutoReview(pyblish.api.ContextPlugin):
+    """Create review instance in a non-artist-based workflow.
+
+    Called only if PS is triggered in Webpublisher or in tests.
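The collector above (and the ones that follow) declare `targets = ["remotepublish"]`, so they only run when that pyblish target is active. A sketch of how a headless Webpublisher-style run could activate it, assuming the standard pyblish targets mechanism (the actual Webpublisher bootstrap may differ):

    import pyblish.api
    import pyblish.util

    # Plugins limited to targets = ["remotepublish"] are only picked up
    # when the target is registered; interactive artist publishes never
    # register it.
    pyblish.api.register_target("remotepublish")
    context = pyblish.util.publish()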
+ """ + + label = "Collect Auto Review" + hosts = ["photoshop"] + order = pyblish.api.CollectorOrder + 0.2 + targets = ["remotepublish"] + + publish = True + + def process(self, context): + family = "review" + has_review = False + for instance in context: + if instance.data["family"] == family: + self.log.debug("Review instance found, won't create new") + has_review = True + + creator_attributes = instance.data.get("creator_attributes", {}) + if (creator_attributes.get("mark_for_review") and + "review" not in instance.data["families"]): + instance.data["families"].append("review") + + if has_review: + return + + stub = photoshop.stub() + stored_items = stub.get_layers_metadata() + for item in stored_items: + if item.get("creator_identifier") == family: + if not item.get("active"): + self.log.debug("Review instance disabled") + return + + auto_creator = context.data["project_settings"].get( + "photoshop", {}).get( + "create", {}).get( + "ReviewCreator", {}) + + if not auto_creator or not auto_creator["enabled"]: + self.log.debug("Review creator disabled, won't create new") + return + + variant = (context.data.get("variant") or + auto_creator["default_variant"]) + + project_name = context.data["anatomyData"]["project"]["name"] + proj_settings = context.data["project_settings"] + task_name = context.data["anatomyData"]["task"]["name"] + host_name = context.data["hostName"] + asset_doc = context.data["assetEntity"] + asset_name = asset_doc["name"] + + subset_name = get_subset_name( + family, + variant, + task_name, + asset_doc, + project_name, + host_name=host_name, + project_settings=proj_settings + ) + + instance = context.create_instance(subset_name) + instance.data.update({ + "subset": subset_name, + "label": subset_name, + "name": subset_name, + "family": family, + "families": [], + "representations": [], + "asset": asset_name, + "publish": self.publish + }) + + self.log.debug("auto review created::{}".format(instance.data)) diff --git a/openpype/hosts/photoshop/plugins/publish/collect_auto_workfile.py b/openpype/hosts/photoshop/plugins/publish/collect_auto_workfile.py new file mode 100644 index 0000000000..d10cf62c67 --- /dev/null +++ b/openpype/hosts/photoshop/plugins/publish/collect_auto_workfile.py @@ -0,0 +1,99 @@ +import os +import pyblish.api + +from openpype.hosts.photoshop import api as photoshop +from openpype.pipeline.create import get_subset_name + + +class CollectAutoWorkfile(pyblish.api.ContextPlugin): + """Collect current script for publish.""" + + order = pyblish.api.CollectorOrder + 0.2 + label = "Collect Workfile" + hosts = ["photoshop"] + + targets = ["remotepublish"] + + def process(self, context): + family = "workfile" + file_path = context.data["currentFile"] + _, ext = os.path.splitext(file_path) + staging_dir = os.path.dirname(file_path) + base_name = os.path.basename(file_path) + workfile_representation = { + "name": ext[1:], + "ext": ext[1:], + "files": base_name, + "stagingDir": staging_dir, + } + + for instance in context: + if instance.data["family"] == family: + self.log.debug("Workfile instance found, won't create new") + instance.data.update({ + "label": base_name, + "name": base_name, + "representations": [], + }) + + # creating representation + _, ext = os.path.splitext(file_path) + instance.data["representations"].append( + workfile_representation) + + return + + stub = photoshop.stub() + stored_items = stub.get_layers_metadata() + for item in stored_items: + if item.get("creator_identifier") == family: + if not item.get("active"): + 
self.log.debug("Workfile instance disabled") + return + + project_name = context.data["anatomyData"]["project"]["name"] + proj_settings = context.data["project_settings"] + auto_creator = proj_settings.get( + "photoshop", {}).get( + "create", {}).get( + "WorkfileCreator", {}) + + if not auto_creator or not auto_creator["enabled"]: + self.log.debug("Workfile creator disabled, won't create new") + return + + # context.data["variant"] might come only from collect_batch_data + variant = (context.data.get("variant") or + auto_creator["default_variant"]) + + task_name = context.data["anatomyData"]["task"]["name"] + host_name = context.data["hostName"] + asset_doc = context.data["assetEntity"] + asset_name = asset_doc["name"] + + subset_name = get_subset_name( + family, + variant, + task_name, + asset_doc, + project_name, + host_name=host_name, + project_settings=proj_settings + ) + + # Create instance + instance = context.create_instance(subset_name) + instance.data.update({ + "subset": subset_name, + "label": base_name, + "name": base_name, + "family": family, + "families": [], + "representations": [], + "asset": asset_name + }) + + # creating representation + instance.data["representations"].append(workfile_representation) + + self.log.debug("auto workfile review created:{}".format(instance.data)) diff --git a/openpype/hosts/photoshop/plugins/publish/collect_instances.py b/openpype/hosts/photoshop/plugins/publish/collect_instances.py deleted file mode 100644 index 5bf12379b1..0000000000 --- a/openpype/hosts/photoshop/plugins/publish/collect_instances.py +++ /dev/null @@ -1,116 +0,0 @@ -import pprint - -import pyblish.api - -from openpype.settings import get_project_settings -from openpype.hosts.photoshop import api as photoshop -from openpype.lib import prepare_template_data -from openpype.pipeline import legacy_io - - -class CollectInstances(pyblish.api.ContextPlugin): - """Gather instances by LayerSet and file metadata - - Collects publishable instances from file metadata or enhance - already collected by creator (family == "image"). - - If no image instances are explicitly created, it looks if there is value - in `flatten_subset_template` (configurable in Settings), in that case it - produces flatten image with all visible layers. - - Identifier: - id (str): "pyblish.avalon.instance" - """ - - label = "Collect Instances" - order = pyblish.api.CollectorOrder - hosts = ["photoshop"] - families_mapping = { - "image": [] - } - # configurable in Settings - flatten_subset_template = "" - - def process(self, context): - instance_by_layer_id = {} - for instance in context: - if ( - instance.data["family"] == "image" and - instance.data.get("members")): - layer_id = str(instance.data["members"][0]) - instance_by_layer_id[layer_id] = instance - - stub = photoshop.stub() - layer_items = stub.get_layers() - layers_meta = stub.get_layers_metadata() - instance_names = [] - - all_layer_ids = [] - for layer_item in layer_items: - layer_meta_data = stub.read(layer_item, layers_meta) - all_layer_ids.append(layer_item.id) - - # Skip layers without metadata. - if layer_meta_data is None: - continue - - # Skip containers. 
- if "container" in layer_meta_data["id"]: - continue - - # active might not be in legacy meta - if not layer_meta_data.get("active", True): - continue - - instance = instance_by_layer_id.get(str(layer_item.id)) - if instance is None: - instance = context.create_instance(layer_meta_data["subset"]) - - instance.data["layer"] = layer_item - instance.data.update(layer_meta_data) - instance.data["families"] = self.families_mapping[ - layer_meta_data["family"] - ] - instance.data["publish"] = layer_item.visible - instance_names.append(layer_meta_data["subset"]) - - # Produce diagnostic message for any graphical - # user interface interested in visualising it. - self.log.info("Found: \"%s\" " % instance.data["name"]) - self.log.info("instance: {} ".format( - pprint.pformat(instance.data, indent=4))) - - if len(instance_names) != len(set(instance_names)): - self.log.warning("Duplicate instances found. " + - "Remove unwanted via Publisher") - - if len(instance_names) == 0 and self.flatten_subset_template: - project_name = context.data["projectEntity"]["name"] - variants = get_project_settings(project_name).get( - "photoshop", {}).get( - "create", {}).get( - "CreateImage", {}).get( - "defaults", ['']) - family = "image" - task_name = legacy_io.Session["AVALON_TASK"] - asset_name = context.data["assetEntity"]["name"] - - variant = context.data.get("variant") or variants[0] - fill_pairs = { - "variant": variant, - "family": family, - "task": task_name - } - - subset = self.flatten_subset_template.format( - **prepare_template_data(fill_pairs)) - - instance = context.create_instance(subset) - instance.data["family"] = family - instance.data["asset"] = asset_name - instance.data["subset"] = subset - instance.data["ids"] = all_layer_ids - instance.data["families"] = self.families_mapping[family] - instance.data["publish"] = True - - self.log.info("flatten instance: {} ".format(instance.data)) diff --git a/openpype/hosts/photoshop/plugins/publish/collect_review.py b/openpype/hosts/photoshop/plugins/publish/collect_review.py index 7e598a8250..87ec4ee3f1 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_review.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_review.py @@ -14,10 +14,7 @@ from openpype.pipeline.create import get_subset_name class CollectReview(pyblish.api.ContextPlugin): - """Gather the active document as review instance. - - Triggers once even if no 'image' is published as by defaults it creates - flatten image from a workfile. + """Adds review to families for instances marked to be reviewable. 
""" label = "Collect Review" @@ -28,25 +25,8 @@ class CollectReview(pyblish.api.ContextPlugin): publish = True def process(self, context): - family = "review" - subset = get_subset_name( - family, - context.data.get("variant", ''), - context.data["anatomyData"]["task"]["name"], - context.data["assetEntity"], - context.data["anatomyData"]["project"]["name"], - host_name=context.data["hostName"], - project_settings=context.data["project_settings"] - ) - - instance = context.create_instance(subset) - instance.data.update({ - "subset": subset, - "label": subset, - "name": subset, - "family": family, - "families": [], - "representations": [], - "asset": os.environ["AVALON_ASSET"], - "publish": self.publish - }) + for instance in context: + creator_attributes = instance.data["creator_attributes"] + if (creator_attributes.get("mark_for_review") and + "review" not in instance.data["families"]): + instance.data["families"].append("review") diff --git a/openpype/hosts/photoshop/plugins/publish/collect_workfile.py b/openpype/hosts/photoshop/plugins/publish/collect_workfile.py index 9a5aad5569..9625464499 100644 --- a/openpype/hosts/photoshop/plugins/publish/collect_workfile.py +++ b/openpype/hosts/photoshop/plugins/publish/collect_workfile.py @@ -14,50 +14,19 @@ class CollectWorkfile(pyblish.api.ContextPlugin): default_variant = "Main" def process(self, context): - existing_instance = None for instance in context: if instance.data["family"] == "workfile": - self.log.debug("Workfile instance found, won't create new") - existing_instance = instance - break + file_path = context.data["currentFile"] + _, ext = os.path.splitext(file_path) + staging_dir = os.path.dirname(file_path) + base_name = os.path.basename(file_path) - family = "workfile" - # context.data["variant"] might come only from collect_batch_data - variant = context.data.get("variant") or self.default_variant - subset = get_subset_name( - family, - variant, - context.data["anatomyData"]["task"]["name"], - context.data["assetEntity"], - context.data["anatomyData"]["project"]["name"], - host_name=context.data["hostName"], - project_settings=context.data["project_settings"] - ) - - file_path = context.data["currentFile"] - staging_dir = os.path.dirname(file_path) - base_name = os.path.basename(file_path) - - # Create instance - if existing_instance is None: - instance = context.create_instance(subset) - instance.data.update({ - "subset": subset, - "label": base_name, - "name": base_name, - "family": family, - "families": [], - "representations": [], - "asset": os.environ["AVALON_ASSET"] - }) - else: - instance = existing_instance - - # creating representation - _, ext = os.path.splitext(file_path) - instance.data["representations"].append({ - "name": ext[1:], - "ext": ext[1:], - "files": base_name, - "stagingDir": staging_dir, - }) + # creating representation + _, ext = os.path.splitext(file_path) + instance.data["representations"].append({ + "name": ext[1:], + "ext": ext[1:], + "files": base_name, + "stagingDir": staging_dir, + }) + return diff --git a/openpype/hosts/photoshop/plugins/publish/extract_review.py b/openpype/hosts/photoshop/plugins/publish/extract_review.py index 9d7eff0211..d5416a389d 100644 --- a/openpype/hosts/photoshop/plugins/publish/extract_review.py +++ b/openpype/hosts/photoshop/plugins/publish/extract_review.py @@ -47,32 +47,42 @@ class ExtractReview(publish.Extractor): layers = self._get_layers_from_image_instances(instance) self.log.info("Layers image instance found: {}".format(layers)) + repre_name = "jpg" + 
repre_skeleton = { + "name": repre_name, + "ext": "jpg", + "stagingDir": staging_dir, + "tags": self.jpg_options['tags'], + } + + if instance.data["family"] != "review": + # enable creation of review, without this jpg review would clash + # with jpg of the image family + output_name = repre_name + repre_name = "{}_{}".format(repre_name, output_name) + repre_skeleton.update({"name": repre_name, + "outputName": output_name}) + if self.make_image_sequence and len(layers) > 1: self.log.info("Extract layers to image sequence.") img_list = self._save_sequence_images(staging_dir, layers) - instance.data["representations"].append({ - "name": "jpg", - "ext": "jpg", - "files": img_list, + repre_skeleton.update({ "frameStart": 0, "frameEnd": len(img_list), "fps": fps, - "stagingDir": staging_dir, - "tags": self.jpg_options['tags'], + "files": img_list, }) + instance.data["representations"].append(repre_skeleton) processed_img_names = img_list else: self.log.info("Extract layers to flatten image.") img_list = self._save_flatten_image(staging_dir, layers) - instance.data["representations"].append({ - "name": "jpg", - "ext": "jpg", - "files": img_list, # cannot be [] for single frame - "stagingDir": staging_dir, - "tags": self.jpg_options['tags'] + repre_skeleton.update({ + "files": img_list, }) + instance.data["representations"].append(repre_skeleton) processed_img_names = [img_list] ffmpeg_path = get_ffmpeg_tool_path("ffmpeg") diff --git a/openpype/hosts/standalonepublisher/plugins/publish/extract_workfile_location.py b/openpype/hosts/standalonepublisher/plugins/publish/extract_workfile_location.py index 18bf0394ae..9ff84e32fb 100644 --- a/openpype/hosts/standalonepublisher/plugins/publish/extract_workfile_location.py +++ b/openpype/hosts/standalonepublisher/plugins/publish/extract_workfile_location.py @@ -27,11 +27,12 @@ class ExtractWorkfileUrl(pyblish.api.ContextPlugin): rep_name = instance.data.get("representations")[0].get("name") template_data["representation"] = rep_name template_data["ext"] = rep_name - anatomy_filled = anatomy.format(template_data) - template_filled = anatomy_filled["publish"]["path"] + template_obj = anatomy.templates_obj["publish"]["path"] + template_filled = template_obj.format_strict(template_data) filepath = os.path.normpath(template_filled) self.log.info("Using published scene for render {}".format( filepath)) + break if not filepath: self.log.info("Texture batch doesn't contain workfile.") diff --git a/openpype/hosts/substancepainter/__init__.py b/openpype/hosts/substancepainter/__init__.py new file mode 100644 index 0000000000..4c33b9f507 --- /dev/null +++ b/openpype/hosts/substancepainter/__init__.py @@ -0,0 +1,10 @@ +from .addon import ( + SubstanceAddon, + SUBSTANCE_HOST_DIR, +) + + +__all__ = ( + "SubstanceAddon", + "SUBSTANCE_HOST_DIR" +) diff --git a/openpype/hosts/substancepainter/addon.py b/openpype/hosts/substancepainter/addon.py new file mode 100644 index 0000000000..2fbea139c5 --- /dev/null +++ b/openpype/hosts/substancepainter/addon.py @@ -0,0 +1,34 @@ +import os +from openpype.modules import OpenPypeModule, IHostAddon + +SUBSTANCE_HOST_DIR = os.path.dirname(os.path.abspath(__file__)) + + +class SubstanceAddon(OpenPypeModule, IHostAddon): + name = "substancepainter" + host_name = "substancepainter" + + def initialize(self, module_settings): + self.enabled = True + + def add_implementation_envs(self, env, _app): + # Add requirements to SUBSTANCE_PAINTER_PLUGINS_PATH + plugin_path = os.path.join(SUBSTANCE_HOST_DIR, "deploy") + plugin_path = 
plugin_path.replace("\\", "/")
+        if env.get("SUBSTANCE_PAINTER_PLUGINS_PATH"):
+            plugin_path += os.pathsep + env["SUBSTANCE_PAINTER_PLUGINS_PATH"]
+
+        env["SUBSTANCE_PAINTER_PLUGINS_PATH"] = plugin_path
+
+        # Logging in Substance Painter doesn't support custom terminal colors
+        env["OPENPYPE_LOG_NO_COLORS"] = "Yes"
+
+    def get_launch_hook_paths(self, app):
+        if app.host_name != self.host_name:
+            return []
+        return [
+            os.path.join(SUBSTANCE_HOST_DIR, "hooks")
+        ]
+
+    def get_workfile_extensions(self):
+        return [".spp", ".toc"]
diff --git a/openpype/hosts/substancepainter/api/__init__.py b/openpype/hosts/substancepainter/api/__init__.py
new file mode 100644
index 0000000000..937d0c429e
--- /dev/null
+++ b/openpype/hosts/substancepainter/api/__init__.py
@@ -0,0 +1,8 @@
+from .pipeline import (
+    SubstanceHost,
+
+)
+
+__all__ = [
+    "SubstanceHost",
+]
diff --git a/openpype/hosts/substancepainter/api/colorspace.py b/openpype/hosts/substancepainter/api/colorspace.py
new file mode 100644
index 0000000000..375b61b39b
--- /dev/null
+++ b/openpype/hosts/substancepainter/api/colorspace.py
@@ -0,0 +1,157 @@
+"""Substance Painter OCIO management
+
+Adobe Substance 3D Painter supports OCIO color management using a per-project
+configuration. Output color spaces are defined at the project level.
+
+For more information see:
+  - https://substance3d.adobe.com/documentation/spdoc/color-management-223053233.html  # noqa
+  - https://substance3d.adobe.com/documentation/spdoc/color-management-with-opencolorio-225969419.html  # noqa
+
+"""
+import substance_painter.export
+import substance_painter.js
+import json
+
+from .lib import (
+    get_document_structure,
+    get_channel_format
+)
+
+
+def _iter_document_stack_channels():
+    """Yield all stack paths and channels in the project."""
+
+    for material in get_document_structure()["materials"]:
+        material_name = material["name"]
+        for stack in material["stacks"]:
+            stack_name = stack["name"]
+            if stack_name:
+                stack_path = [material_name, stack_name]
+            else:
+                stack_path = material_name
+            for channel in stack["channels"]:
+                yield stack_path, channel
+
+
+def _get_first_color_and_data_stack_and_channel():
+    """Return first found color channel and data channel."""
+    color_channel = None
+    data_channel = None
+    for stack_path, channel in _iter_document_stack_channels():
+        channel_format = get_channel_format(stack_path, channel)
+        if channel_format["color"]:
+            color_channel = (stack_path, channel)
+        else:
+            data_channel = (stack_path, channel)
+
+        if color_channel and data_channel:
+            return color_channel, data_channel
+
+    return color_channel, data_channel
+
+
+def get_project_channel_data():
+    """Return colorSpace settings for the current Substance Painter project.
+
+    In Substance Painter only color channels have Color Management enabled
+    whereas data channels have no color management applied. This can't be
+    changed. The artist can only customize the export color space for color
+    channels per bit-depth for 8 bpc, 16 bpc and 32 bpc.
+
+    As such this returns the color space for 'data' channels and, per
+    bit-depth, for color channels.
+
+    Example output (keys follow the code below: "data" plus "color"
+    joined with the bit depth):
+        {
+            "data": {"colorSpace": "Utility - Raw"},
+            "color8": {"colorSpace": "ACES - ACEScg"},
+            "color16": {"colorSpace": "ACES - ACEScg"},
+            "color16f": {"colorSpace": "ACES - ACEScg"},
+            "color32f": {"colorSpace": "ACES - ACEScg"}
+        }
+
+    """
+
+    keys = ["colorSpace"]
+    query = {key: f"${key}" for key in keys}
+
+    config = {
+        "exportPath": "/",
+        "exportShaderParams": False,
+        "defaultExportPreset": "query_preset",
+
+        "exportPresets": [{
+            "name": "query_preset",
+
+            # List of maps making up this export preset.
+            "maps": [{
+                "fileName": json.dumps(query),
+                # List of source/destination defining which channels will
+                # make up the texture file.
+                "channels": [],
+                "parameters": {
+                    "fileFormat": "exr",
+                    "bitDepth": "32f",
+                    "dithering": False,
+                    "sizeLog2": 4,
+                    "paddingAlgorithm": "passthrough",
+                    "dilationDistance": 16
+                }
+            }]
+        }],
+    }
+
+    def _get_query_output(config):
+        # Return the basename of the single output path we defined
+        result = substance_painter.export.list_project_textures(config)
+        path = next(iter(result.values()))[0]
+        # strip extension and slash since we know relevant json data starts
+        # and ends with { and } characters
+        path = path.strip("/\\.exr")
+        return json.loads(path)
+
+    # Query for each type of channel (color and data)
+    color_channel, data_channel = _get_first_color_and_data_stack_and_channel()
+    colorspaces = {}
+    for key, channel_data in {
+        "data": data_channel,
+        "color": color_channel
+    }.items():
+        if channel_data is None:
+            # No channel of that datatype anywhere in the Stack. We're
+            # unable to identify the output color space of the project
+            colorspaces[key] = None
+            continue
+
+        stack, channel = channel_data
+
+        # Stack must be a string
+        if not isinstance(stack, str):
+            # Assume iterable
+            stack = "/".join(stack)
+
+        # Define the temp output config
+        config["exportList"] = [{"rootPath": stack}]
+        config_map = config["exportPresets"][0]["maps"][0]
+        config_map["channels"] = [
+            {
+                "destChannel": x,
+                "srcChannel": x,
+                "srcMapType": "documentMap",
+                "srcMapName": channel
+            } for x in "RGB"
+        ]
+
+        if key == "color":
+            # Query for each bit depth
+            # Color space definition can have a different OCIO config set
+            # for 8-bit, 16-bit and 32-bit outputs so we need to check each
+            # bit depth
+            for depth in ["8", "16", "16f", "32f"]:
+                config_map["parameters"]["bitDepth"] = depth  # noqa
+                colorspaces[key + depth] = _get_query_output(config)
+        else:
+            # Data channel (not color managed)
+            colorspaces[key] = _get_query_output(config)
+
+    return colorspaces
diff --git a/openpype/hosts/substancepainter/api/lib.py b/openpype/hosts/substancepainter/api/lib.py
new file mode 100644
index 0000000000..2cd08f862e
--- /dev/null
+++ b/openpype/hosts/substancepainter/api/lib.py
@@ -0,0 +1,649 @@
+import os
+import re
+import json
+from collections import defaultdict
+
+import substance_painter.project
+import substance_painter.resource
+import substance_painter.js
+import substance_painter.export
+
+from qtpy import QtGui, QtWidgets, QtCore
+
+
+def get_export_presets():
+    """Return Export Preset resource URLs for all available Export Presets.
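A hedged consumption sketch for `get_project_channel_data()` above; per the code, the returned keys are "data" plus "color" joined with the bit depth:

    from openpype.hosts.substancepainter.api.colorspace import (
        get_project_channel_data,
    )

    spaces = get_project_channel_data()
    # e.g. {"data": {"colorSpace": "Utility - Raw"},
    #       "color8": {...}, "color16": {...},
    #       "color16f": {...}, "color32f": {...}}

    bit_depth = "16"  # hypothetical bit depth of an exported color map
    channel_data = spaces.get("color" + bit_depth) or {}
    colorspace = channel_data.get("colorSpace")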
+ + Returns: + dict: {Resource url: GUI Label} + + """ + # TODO: Find more optimal way to find all export templates + + preset_resources = {} + for shelf in substance_painter.resource.Shelves.all(): + shelf_path = os.path.normpath(shelf.path()) + + presets_path = os.path.join(shelf_path, "export-presets") + if not os.path.exists(presets_path): + continue + + for filename in os.listdir(presets_path): + if filename.endswith(".spexp"): + template_name = os.path.splitext(filename)[0] + + resource = substance_painter.resource.ResourceID( + context=shelf.name(), + name=template_name + ) + resource_url = resource.url() + + preset_resources[resource_url] = template_name + + # Sort by template name + export_templates = dict(sorted(preset_resources.items(), + key=lambda x: x[1])) + + # Add default built-ins at the start + # TODO: find the built-ins automatically; scraped with https://gist.github.com/BigRoy/97150c7c6f0a0c916418207b9a2bc8f1 # noqa + result = { + "export-preset-generator://viewport2d": "2D View", # noqa + "export-preset-generator://doc-channel-normal-no-alpha": "Document channels + Normal + AO (No Alpha)", # noqa + "export-preset-generator://doc-channel-normal-with-alpha": "Document channels + Normal + AO (With Alpha)", # noqa + "export-preset-generator://sketchfab": "Sketchfab", # noqa + "export-preset-generator://adobe-standard-material": "Substance 3D Stager", # noqa + "export-preset-generator://usd": "USD PBR Metal Roughness", # noqa + "export-preset-generator://gltf": "glTF PBR Metal Roughness", # noqa + "export-preset-generator://gltf-displacement": "glTF PBR Metal Roughness + Displacement texture (experimental)" # noqa + } + result.update(export_templates) + return result + + +def _convert_stack_path_to_cmd_str(stack_path): + """Convert stack path `str` or `[str, str]` for javascript query + + Example usage: + >>> stack_path = _convert_stack_path_to_cmd_str(stack_path) + >>> cmd = f"alg.mapexport.channelIdentifiers({stack_path})" + >>> substance_painter.js.evaluate(cmd) + + Args: + stack_path (list or str): Path to the stack, could be + "Texture set name" or ["Texture set name", "Stack name"] + + Returns: + str: Stack path usable as argument in javascript query. + + """ + return json.dumps(stack_path) + + +def get_channel_identifiers(stack_path=None): + """Return the list of channel identifiers. + + If a context is passed (texture set/stack), + return only used channels with resolved user channels. + + Channel identifiers are: + basecolor, height, specular, opacity, emissive, displacement, + glossiness, roughness, anisotropylevel, anisotropyangle, transmissive, + scattering, reflection, ior, metallic, normal, ambientOcclusion, + diffuse, specularlevel, blendingmask, [custom user names]. + + Args: + stack_path (list or str, Optional): Path to the stack, could be + "Texture set name" or ["Texture set name", "Stack name"] + + Returns: + list: List of channel identifiers. + + """ + if stack_path is None: + stack_path = "" + else: + stack_path = _convert_stack_path_to_cmd_str(stack_path) + cmd = f"alg.mapexport.channelIdentifiers({stack_path})" + return substance_painter.js.evaluate(cmd) + + +def get_channel_format(stack_path, channel): + """Retrieve the channel format of a specific stack channel. + + See `alg.mapexport.channelFormat` (javascript API) for more details. 
+
+    The channel format data is:
+        "label" (str): The channel format label: could be one of
+            [sRGB8, L8, RGB8, L16, RGB16, L16F, RGB16F, L32F, RGB32F]
+        "color" (bool): True if the format is in color, False if grayscale
+        "floating" (bool): True if the format uses floating point
+            representation, false otherwise
+        "bitDepth" (int): Bits per color channel (could be 8, 16 or 32 bpc)
+
+    Arguments:
+        stack_path (list or str): Path to the stack, could be
+            "Texture set name" or ["Texture set name", "Stack name"]
+        channel (str): Identifier of the channel to export
+            (see `get_channel_identifiers`)
+
+    Returns:
+        dict: The channel format data.
+
+    """
+    stack_path = _convert_stack_path_to_cmd_str(stack_path)
+    cmd = f"alg.mapexport.channelFormat({stack_path}, '{channel}')"
+    return substance_painter.js.evaluate(cmd)
+
+
+def get_document_structure():
+    """Dump the document structure.
+
+    See `alg.mapexport.documentStructure` (javascript API) for more details.
+
+    Returns:
+        dict: Document structure or None when no project is open
+
+    """
+    return substance_painter.js.evaluate("alg.mapexport.documentStructure()")
+
+
+def get_export_templates(config, format="png", strip_folder=True):
+    """Return export config outputs.
+
+    This uses the Javascript API `alg.mapexport.getPathsExportDocumentMaps`
+    which returns a different output than using the Python equivalent
+    `substance_painter.export.list_project_textures(config)`.
+
+    The nice thing about the Javascript API version is that it returns the
+    output textures grouped by filename template.
+
+    A downside is that it doesn't return all the UDIM tiles but per template
+    always returns a single file.
+
+    Note:
+        The file format needs to be explicitly passed to the Javascript API
+        but upon exporting through the Python API the file format can be based
+        on the output preset. So it's likely the file extension will mismatch.
+
+    Warning:
+        Even though the function appears to solely get the expected outputs
+        the Javascript API will actually create the config's texture output
+        folder if it does not exist yet. As such, a valid path must be set.
+
+    Example output:
+        {
+            "DefaultMaterial": {
+                "$textureSet_BaseColor(_$colorSpace)(.$udim)": "DefaultMaterial_BaseColor_ACES - ACEScg.1002.png",  # noqa
+                "$textureSet_Emissive(_$colorSpace)(.$udim)": "DefaultMaterial_Emissive_ACES - ACEScg.1002.png",  # noqa
+                "$textureSet_Height(_$colorSpace)(.$udim)": "DefaultMaterial_Height_Utility - Raw.1002.png",  # noqa
+                "$textureSet_Metallic(_$colorSpace)(.$udim)": "DefaultMaterial_Metallic_Utility - Raw.1002.png",  # noqa
+                "$textureSet_Normal(_$colorSpace)(.$udim)": "DefaultMaterial_Normal_Utility - Raw.1002.png",  # noqa
+                "$textureSet_Roughness(_$colorSpace)(.$udim)": "DefaultMaterial_Roughness_Utility - Raw.1002.png"  # noqa
+            }
+        }
+
+    Arguments:
+        config (dict): Export config
+        format (str, Optional): Output format to write to, defaults to 'png'
+        strip_folder (bool, Optional): Whether to strip the output folder
+            from the output filenames.
+
+    Returns:
+        dict: The expected output maps.
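A hedged call sketch for `get_export_templates` as documented above; only `exportPath` and `defaultExportPreset` are read by it, the preset name and path below are made up, and per the Warning the path must be valid because the Javascript API may create the folder:

    config = {
        "exportPath": "C:/exports",          # must be a valid location
        "defaultExportPreset": "my_preset",  # hypothetical preset name
    }

    templates = get_export_templates(config, format="png")
    for stack, maps in templates.items():
        for template, filename in maps.items():
            print(stack, template, "->", filename)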
+ + """ + folder = config["exportPath"].replace("\\", "/") + preset = config["defaultExportPreset"] + cmd = f'alg.mapexport.getPathsExportDocumentMaps("{preset}", "{folder}", "{format}")' # noqa + result = substance_painter.js.evaluate(cmd) + + if strip_folder: + for _stack, maps in result.items(): + for map_template, map_filepath in maps.items(): + map_filepath = map_filepath.replace("\\", "/") + assert map_filepath.startswith(folder) + map_filename = map_filepath[len(folder):].lstrip("/") + maps[map_template] = map_filename + + return result + + +def _templates_to_regex(templates, + texture_set, + colorspaces, + project, + mesh): + """Return regex based on a Substance Painter expot filename template. + + This converts Substance Painter export filename templates like + `$mesh_$textureSet_BaseColor(_$colorSpace)(.$udim)` into a regex + which can be used to query an output filename to help retrieve: + + - Which template filename the file belongs to. + - Which color space the file is written with. + - Which udim tile it is exactly. + + This is used by `get_parsed_export_maps` which tries to as explicitly + as possible match the filename pattern against the known possible outputs. + That's why Texture Set name, Color spaces, Project path and mesh path must + be provided. By doing so we get the best shot at correctly matching the + right template because otherwise $texture_set could basically be any string + and thus match even that of a color space or mesh. + + Arguments: + templates (list): List of templates to convert to regex. + texture_set (str): The texture set to match against. + colorspaces (list): The colorspaces defined in the current project. + project (str): Filepath of current substance project. + mesh (str): Path to mesh file used in current project. + + Returns: + dict: Template: Template regex pattern + + """ + def _filename_no_ext(path): + return os.path.splitext(os.path.basename(path))[0] + + if colorspaces and any(colorspaces): + colorspace_match = "|".join(re.escape(c) for c in set(colorspaces)) + colorspace_match = f"({colorspace_match})" + else: + # No colorspace support enabled + colorspace_match = "" + + # Key to regex valid search values + key_matches = { + "$project": re.escape(_filename_no_ext(project)), + "$mesh": re.escape(_filename_no_ext(mesh)), + "$textureSet": re.escape(texture_set), + "$colorSpace": colorspace_match, + "$udim": "([0-9]{4})" + } + + # Turn the templates into regexes + regexes = {} + for template in templates: + + # We need to tweak a temp + search_regex = re.escape(template) + + # Let's assume that any ( and ) character in the file template was + # intended as an optional template key and do a simple `str.replace` + # Note: we are matching against re.escape(template) so will need to + # search for the escaped brackets. + search_regex = search_regex.replace(re.escape("("), "(") + search_regex = search_regex.replace(re.escape(")"), ")?") + + # Substitute each key into a named group + for key, key_expected_regex in key_matches.items(): + + # We want to use the template as a regex basis in the end so will + # escape the whole thing first. Note that thus we'll need to + # search for the escaped versions of the keys too. 
+            escaped_key = re.escape(key)
+            key_label = key[1:]  # key without $ prefix
+
+            key_expected_grp_regex = f"(?P<{key_label}>{key_expected_regex})"
+            search_regex = search_regex.replace(escaped_key,
+                                                key_expected_grp_regex)
+
+        # The filename templates don't include the extension so we add it
+        # to be able to match the output filename beginning to end
+        ext_regex = r"(?P<ext>\.[A-Za-z][A-Za-z0-9-]*)"
+        search_regex = rf"^{search_regex}{ext_regex}$"
+
+        regexes[template] = search_regex
+
+    return regexes
+
+
+def strip_template(template, strip="._ "):
+    """Return static characters in a Substance Painter filename template.
+
+    >>> strip_template("$textureSet_HELLO(.$udim)")
+    # HELLO
+    >>> strip_template("$mesh_$textureSet_HELLO_WORLD_$colorSpace(.$udim)")
+    # HELLO_WORLD
+    >>> strip_template("$textureSet_HELLO(.$udim)", strip=None)
+    # _HELLO
+    >>> strip_template("$mesh_$textureSet_HELLO_$colorSpace(.$udim)", strip=None)
+    # _HELLO_
+
+    Arguments:
+        template (str): Filename template to strip.
+        strip (str, optional): Characters to strip from beginning and end
+            of the static string in template. Defaults to: `._ `.
+
+    Returns:
+        str: The static string in filename template.
+
+    """
+    # Return only characters that were part of the template that were static.
+    # Remove all keys
+    keys = ["$project", "$mesh", "$textureSet", "$udim", "$colorSpace"]
+    stripped_template = template
+    for key in keys:
+        stripped_template = stripped_template.replace(key, "")
+
+    # Everything inside an optional bracket space is excluded since it's not
+    # static. We keep a counter to track whether we are currently iterating
+    # over parts of the template that are inside an 'optional' group or not.
+    counter = 0
+    result = ""
+    for char in stripped_template:
+        if char == "(":
+            counter += 1
+        elif char == ")":
+            counter -= 1
+            if counter < 0:
+                counter = 0
+        else:
+            if counter == 0:
+                result += char
+
+    if strip:
+        # Strip off any leading/trailing characters. Technically these are
+        # static but usually start and end separators like space or underscore
+        # aren't wanted.
+        result = result.strip(strip)
+
+    return result
+
+
+def get_parsed_export_maps(config):
+    """Return Export Config's expected output textures with parsed data.
+
+    This tries to parse the texture outputs using a Python API export config.
+
+    Parses template keys: $project, $mesh, $textureSet, $colorSpace, $udim
+
+    Example:
+        {("DefaultMaterial", ""): {
+            "$mesh_$textureSet_BaseColor(_$colorSpace)(.$udim)": [
+                {
+                    // OUTPUT DATA FOR FILE #1 OF THE TEMPLATE
+                },
+                {
+                    // OUTPUT DATA FOR FILE #2 OF THE TEMPLATE
+                },
+            ]
+        }}
+
+    File output data (all outputs are `str`).
+    1) Parsed tokens: tokens parsed from the template; they will only
+       exist if found in both the filename template and output filename.
+
+        project: Workfile filename without extension
+        mesh: Filename of the loaded mesh without extension
+        textureSet: The texture set, e.g. "DefaultMaterial",
+        colorSpace: The color space, e.g. "ACES - ACEScg",
+        udim: The udim tile, e.g. "1001"
+
+    2) Template output and filepath
+
+        filepath: Full path to the resulting texture map, e.g.
+            "/path/to/mesh_DefaultMaterial_BaseColor_ACES - ACEScg.1002.png",
+        output: "mesh_DefaultMaterial_BaseColor_ACES - ACEScg.1002.png"
+            Note: if template had slashes (folders) then `output` will too.
+            So `output` might include a folder.
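Tying the regex construction above to the parsing described here, one parsed file entry would look roughly like this (filenames are illustrative):

    # template: "$textureSet_BaseColor(_$colorSpace)(.$udim)"
    # filename: "DefaultMaterial_BaseColor_ACES - ACEScg.1002.png"
    parsed = {
        "textureSet": "DefaultMaterial",
        "colorSpace": "ACES - ACEScg",
        "udim": "1002",
        "ext": ".png",
        # convenience keys added on top of the parsed tokens:
        "filepath": "C:/exports/DefaultMaterial_BaseColor_ACES - ACEScg.1002.png",
        "output": "DefaultMaterial_BaseColor_ACES - ACEScg.1002.png",
    }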
+ + Returns: + dict: [texture_set, stack]: {template: [file1_data, file2_data]} + + """ + # Import is here to avoid recursive lib <-> colorspace imports + from .colorspace import get_project_channel_data + + outputs = substance_painter.export.list_project_textures(config) + templates = get_export_templates(config, strip_folder=False) + + # Get all color spaces set for the current project + project_colorspaces = set( + data["colorSpace"] for data in get_project_channel_data().values() + ) + + # Get current project mesh path and project path to explicitly match + # the $mesh and $project tokens + project_mesh_path = substance_painter.project.last_imported_mesh_path() + project_path = substance_painter.project.file_path() + + # Get the current export path to strip this of the beginning of filepath + # results, since filename templates don't have these we'll match without + # that part of the filename. + export_path = config["exportPath"] + export_path = export_path.replace("\\", "/") + if not export_path.endswith("/"): + export_path += "/" + + # Parse the outputs + result = {} + for key, filepaths in outputs.items(): + texture_set, stack = key + + if stack: + stack_path = f"{texture_set}/{stack}" + else: + stack_path = texture_set + + stack_templates = list(templates[stack_path].keys()) + + template_regex = _templates_to_regex(stack_templates, + texture_set=texture_set, + colorspaces=project_colorspaces, + mesh=project_mesh_path, + project=project_path) + + # Let's precompile the regexes + for template, regex in template_regex.items(): + template_regex[template] = re.compile(regex) + + stack_results = defaultdict(list) + for filepath in sorted(filepaths): + # We strip explicitly using the full parent export path instead of + # using `os.path.basename` because export template is allowed to + # have subfolders in its template which we want to match against + filepath = filepath.replace("\\", "/") + assert filepath.startswith(export_path), ( + f"Filepath {filepath} must start with folder {export_path}" + ) + filename = filepath[len(export_path):] + + for template, regex in template_regex.items(): + match = regex.match(filename) + if match: + parsed = match.groupdict(default={}) + + # Include some special outputs for convenience + parsed["filepath"] = filepath + parsed["output"] = filename + + stack_results[template].append(parsed) + break + else: + raise ValueError(f"Unable to match {filename} against any " + f"template in: {list(template_regex.keys())}") + + result[key] = dict(stack_results) + + return result + + +def load_shelf(path, name=None): + """Add shelf to substance painter (for current application session) + + This will dynamically add a Shelf for the current session. It's good + to note however that these will *not* persist on restart of the host. + + Note: + Consider the loaded shelf a static library of resources. + + The shelf will *not* be visible in application preferences in + Edit > Settings > Libraries. + + The shelf will *not* show in the Assets browser if it has no existing + assets + + The shelf will *not* be a selectable option for selecting it as a + destination to import resources too. + + """ + + # Ensure expanded path with forward slashes + path = os.path.expandvars(path) + path = os.path.abspath(path) + path = path.replace("\\", "/") + + # Path must exist + if not os.path.isdir(path): + raise ValueError(f"Path is not an existing folder: {path}") + + # This name must be unique and must only contain lowercase letters, + # numbers, underscores or hyphens. 
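A short consumption sketch for the return structure documented above, keyed by (texture_set, stack) tuples mapping template to a list of parsed file entries:

    result = get_parsed_export_maps(config)
    for (texture_set, stack), per_template in result.items():
        for template, files in per_template.items():
            for data in files:
                print(texture_set, template, "->", data["output"])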
+ if name is None: + name = os.path.basename(path) + + name = name.lower() + name = re.sub(r"[^a-z0-9_\-]", "_", name) # sanitize to underscores + + if substance_painter.resource.Shelves.exists(name): + shelf = next( + shelf for shelf in substance_painter.resource.Shelves.all() + if shelf.name() == name + ) + if os.path.normpath(shelf.path()) != os.path.normpath(path): + raise ValueError(f"Shelf with name '{name}' already exists " + f"for a different path: '{shelf.path()}") + + return + + print(f"Adding Shelf '{name}' to path: {path}") + substance_painter.resource.Shelves.add(name, path) + + return name + + +def _get_new_project_action(): + """Return QAction which triggers Substance Painter's new project dialog""" + + main_window = substance_painter.ui.get_main_window() + + # Find the file menu's New file action + menubar = main_window.menuBar() + new_action = None + for action in menubar.actions(): + menu = action.menu() + if not menu: + continue + + if menu.objectName() != "file": + continue + + # Find the action with the CTRL+N key sequence + new_action = next(action for action in menu.actions() + if action.shortcut() == QtGui.QKeySequence.New) + break + + return new_action + + +def prompt_new_file_with_mesh(mesh_filepath): + """Prompts the user for a new file using Substance Painter's own dialog. + + This will set the mesh path to load to the given mesh and disables the + dialog box to disallow the user to change the path. This way we can allow + user configuration of a project but set the mesh path ourselves. + + Warning: + This is very hacky and experimental. + + Note: + If a project is currently open using the same mesh filepath it can't + accurately detect whether the user had actually accepted the new project + dialog or whether the project afterwards is still the original project, + for example when the user might have cancelled the operation. 
+ + """ + + app = QtWidgets.QApplication.instance() + assert os.path.isfile(mesh_filepath), \ + f"Mesh filepath does not exist: {mesh_filepath}" + + def _setup_file_dialog(): + """Set filepath in QFileDialog and trigger accept result""" + file_dialog = app.activeModalWidget() + assert isinstance(file_dialog, QtWidgets.QFileDialog) + + # Quickly hide the dialog + file_dialog.hide() + app.processEvents(QtCore.QEventLoop.ExcludeUserInputEvents, 1000) + + file_dialog.setDirectory(os.path.dirname(mesh_filepath)) + url = QtCore.QUrl.fromLocalFile(os.path.basename(mesh_filepath)) + file_dialog.selectUrl(url) + + # Give the explorer window time to refresh to the folder and select + # the file + while not file_dialog.selectedFiles(): + app.processEvents(QtCore.QEventLoop.ExcludeUserInputEvents, 1000) + print(f"Selected: {file_dialog.selectedFiles()}") + + # Set it again now we know the path is refreshed - without this + # accepting the dialog will often not trigger the correct filepath + file_dialog.setDirectory(os.path.dirname(mesh_filepath)) + url = QtCore.QUrl.fromLocalFile(os.path.basename(mesh_filepath)) + file_dialog.selectUrl(url) + + file_dialog.done(file_dialog.Accepted) + app.processEvents(QtCore.QEventLoop.AllEvents) + + def _setup_prompt(): + app.processEvents(QtCore.QEventLoop.ExcludeUserInputEvents) + dialog = app.activeModalWidget() + assert dialog.objectName() == "NewProjectDialog" + + # Set the window title + mesh = os.path.basename(mesh_filepath) + dialog.setWindowTitle(f"New Project with mesh: {mesh}") + + # Get the select mesh file button + mesh_select = dialog.findChild(QtWidgets.QPushButton, "meshSelect") + + # Hide the select mesh button to the user to block changing of mesh + mesh_select.setVisible(False) + + # Ensure UI is visually up-to-date + app.processEvents(QtCore.QEventLoop.ExcludeUserInputEvents) + + # Trigger the 'select file' dialog to set the path and have the + # new file dialog to use the path. 
+ QtCore.QTimer.singleShot(10, _setup_file_dialog) + mesh_select.click() + + app.processEvents(QtCore.QEventLoop.AllEvents, 5000) + + mesh_filename = dialog.findChild(QtWidgets.QFrame, "meshFileName") + mesh_filename_label = mesh_filename.findChild(QtWidgets.QLabel) + if not mesh_filename_label.text(): + dialog.close() + raise RuntimeError(f"Failed to set mesh path: {mesh_filepath}") + + new_action = _get_new_project_action() + if not new_action: + raise RuntimeError("Unable to detect new file action..") + + QtCore.QTimer.singleShot(0, _setup_prompt) + new_action.trigger() + app.processEvents(QtCore.QEventLoop.AllEvents, 5000) + + if not substance_painter.project.is_open(): + return + + # Confirm mesh was set as expected + project_mesh = substance_painter.project.last_imported_mesh_path() + if os.path.normpath(project_mesh) != os.path.normpath(mesh_filepath): + return + + return project_mesh diff --git a/openpype/hosts/substancepainter/api/pipeline.py b/openpype/hosts/substancepainter/api/pipeline.py new file mode 100644 index 0000000000..9406fb8edb --- /dev/null +++ b/openpype/hosts/substancepainter/api/pipeline.py @@ -0,0 +1,427 @@ +# -*- coding: utf-8 -*- +"""Pipeline tools for OpenPype Substance Painter integration.""" +import os +import logging +from functools import partial + +# Substance 3D Painter modules +import substance_painter.ui +import substance_painter.event +import substance_painter.project + +import pyblish.api + +from openpype.host import HostBase, IWorkfileHost, ILoadHost, IPublishHost +from openpype.settings import ( + get_current_project_settings, + get_system_settings +) + +from openpype.pipeline.template_data import get_template_data_with_names +from openpype.pipeline import ( + register_creator_plugin_path, + register_loader_plugin_path, + AVALON_CONTAINER_ID, + Anatomy +) +from openpype.lib import ( + StringTemplate, + register_event_callback, + emit_event, +) +from openpype.pipeline.load import any_outdated_containers +from openpype.hosts.substancepainter import SUBSTANCE_HOST_DIR + +from . import lib + +log = logging.getLogger("openpype.hosts.substance") + +PLUGINS_DIR = os.path.join(SUBSTANCE_HOST_DIR, "plugins") +PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish") +LOAD_PATH = os.path.join(PLUGINS_DIR, "load") +CREATE_PATH = os.path.join(PLUGINS_DIR, "create") +INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory") + +OPENPYPE_METADATA_KEY = "OpenPype" +OPENPYPE_METADATA_CONTAINERS_KEY = "containers" # child key +OPENPYPE_METADATA_CONTEXT_KEY = "context" # child key +OPENPYPE_METADATA_INSTANCES_KEY = "instances" # child key + + +class SubstanceHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost): + name = "substancepainter" + + def __init__(self): + super(SubstanceHost, self).__init__() + self._has_been_setup = False + self.menu = None + self.callbacks = [] + self.shelves = [] + + def install(self): + pyblish.api.register_host("substancepainter") + + pyblish.api.register_plugin_path(PUBLISH_PATH) + register_loader_plugin_path(LOAD_PATH) + register_creator_plugin_path(CREATE_PATH) + + log.info("Installing callbacks ... ") + # register_event_callback("init", on_init) + self._register_callbacks() + # register_event_callback("before.save", before_save) + # register_event_callback("save", on_save) + register_event_callback("open", on_open) + # register_event_callback("new", on_new) + + log.info("Installing menu ... 
") + self._install_menu() + + project_settings = get_current_project_settings() + self._install_shelves(project_settings) + + self._has_been_setup = True + + def uninstall(self): + self._uninstall_shelves() + self._uninstall_menu() + self._deregister_callbacks() + + def has_unsaved_changes(self): + + if not substance_painter.project.is_open(): + return False + + return substance_painter.project.needs_saving() + + def get_workfile_extensions(self): + return [".spp", ".toc"] + + def save_workfile(self, dst_path=None): + + if not substance_painter.project.is_open(): + return False + + if not dst_path: + dst_path = self.get_current_workfile() + + full_save_mode = substance_painter.project.ProjectSaveMode.Full + substance_painter.project.save_as(dst_path, full_save_mode) + + return dst_path + + def open_workfile(self, filepath): + + if not os.path.exists(filepath): + raise RuntimeError("File does not exist: {}".format(filepath)) + + # We must first explicitly close current project before opening another + if substance_painter.project.is_open(): + substance_painter.project.close() + + substance_painter.project.open(filepath) + return filepath + + def get_current_workfile(self): + if not substance_painter.project.is_open(): + return None + + filepath = substance_painter.project.file_path() + if filepath and filepath.endswith(".spt"): + # When currently in a Substance Painter template assume our + # scene isn't saved. This can be the case directly after doing + # "New project", the path will then be the template used. This + # avoids Workfiles tool trying to save as .spt extension if the + # file hasn't been saved before. + return + + return filepath + + def get_containers(self): + + if not substance_painter.project.is_open(): + return + + metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY) + containers = metadata.get(OPENPYPE_METADATA_CONTAINERS_KEY) + if containers: + for key, container in containers.items(): + container["objectName"] = key + yield container + + def update_context_data(self, data, changes): + + if not substance_painter.project.is_open(): + return + + metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY) + metadata.set(OPENPYPE_METADATA_CONTEXT_KEY, data) + + def get_context_data(self): + + if not substance_painter.project.is_open(): + return + + metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY) + return metadata.get(OPENPYPE_METADATA_CONTEXT_KEY) or {} + + def _install_menu(self): + from PySide2 import QtWidgets + from openpype.tools.utils import host_tools + + parent = substance_painter.ui.get_main_window() + + menu = QtWidgets.QMenu("OpenPype") + + action = menu.addAction("Create...") + action.triggered.connect( + lambda: host_tools.show_publisher(parent=parent, + tab="create") + ) + + action = menu.addAction("Load...") + action.triggered.connect( + lambda: host_tools.show_loader(parent=parent, use_context=True) + ) + + action = menu.addAction("Publish...") + action.triggered.connect( + lambda: host_tools.show_publisher(parent=parent, + tab="publish") + ) + + action = menu.addAction("Manage...") + action.triggered.connect( + lambda: host_tools.show_scene_inventory(parent=parent) + ) + + action = menu.addAction("Library...") + action.triggered.connect( + lambda: host_tools.show_library_loader(parent=parent) + ) + + menu.addSeparator() + action = menu.addAction("Work Files...") + action.triggered.connect( + lambda: host_tools.show_workfiles(parent=parent) + ) + + substance_painter.ui.add_menu(menu) + + def on_menu_destroyed(): + 
self.menu = None
+
+        menu.destroyed.connect(on_menu_destroyed)
+
+        self.menu = menu
+
+    def _uninstall_menu(self):
+        if self.menu:
+            self.menu.destroy()
+            self.menu = None
+
+    def _register_callbacks(self):
+        # Prepare emit event callbacks
+        open_callback = partial(emit_event, "open")
+
+        # Connect to the Substance Painter events
+        dispatcher = substance_painter.event.DISPATCHER
+        for event, callback in [
+            (substance_painter.event.ProjectOpened, open_callback)
+        ]:
+            dispatcher.connect(event, callback)
+            # Keep a reference so we can deregister if needed
+            self.callbacks.append((event, callback))
+
+    def _deregister_callbacks(self):
+        for event, callback in self.callbacks:
+            substance_painter.event.DISPATCHER.disconnect(event, callback)
+        self.callbacks.clear()
+
+    def _install_shelves(self, project_settings):
+
+        shelves = project_settings["substancepainter"].get("shelves", {})
+        if not shelves:
+            return
+
+        # Prepare formatting data if we detect any path which might have
+        # template tokens like {asset} in there.
+        formatting_data = {}
+        has_formatting_entries = any("{" in path for path in shelves.values())
+        if has_formatting_entries:
+            project_name = self.get_current_project_name()
+            asset_name = self.get_current_asset_name()
+            task_name = self.get_current_task_name()
+            system_settings = get_system_settings()
+            formatting_data = get_template_data_with_names(project_name,
+                                                           asset_name,
+                                                           task_name,
+                                                           system_settings)
+            anatomy = Anatomy(project_name)
+            formatting_data["root"] = anatomy.roots
+
+        for name, path in shelves.items():
+            shelf_name = None
+
+            # Allow formatting with anatomy for the paths
+            if "{" in path:
+                path = StringTemplate.format_template(path, formatting_data)
+
+            try:
+                shelf_name = lib.load_shelf(path, name=name)
+            except ValueError as exc:
+                print(f"Failed to load shelf -> {exc}")
+
+            if shelf_name:
+                self.shelves.append(shelf_name)
+
+    def _uninstall_shelves(self):
+        for shelf_name in self.shelves:
+            substance_painter.resource.Shelves.remove(shelf_name)
+        self.shelves.clear()
+
+
+def on_open():
+    log.info("Running callback on open..")
+
+    if any_outdated_containers():
+        from openpype.widgets import popup
+
+        log.warning("Scene has outdated content.")
+
+        # Get main window
+        parent = substance_painter.ui.get_main_window()
+        if parent is None:
+            log.info("Skipping outdated content pop-up "
+                     "because Substance window can't be found.")
+        else:
+
+            # Show outdated pop-up
+            def _on_show_inventory():
+                from openpype.tools.utils import host_tools
+                host_tools.show_scene_inventory(parent=parent)
+
+            dialog = popup.Popup(parent=parent)
+            dialog.setWindowTitle("Substance scene has outdated content")
+            dialog.setMessage("There are outdated containers in "
+                              "your Substance scene.")
+            dialog.on_clicked.connect(_on_show_inventory)
+            dialog.show()
+
+
+def imprint_container(container,
+                      name,
+                      namespace,
+                      context,
+                      loader):
+    """Imprint a loaded container with metadata.
+
+    Containerisation enables tracking of version, author and origin
+    for loaded assets.
+
+    Arguments:
+        container (dict): The (substance metadata) dictionary to imprint into.
+        name (str): Name of resulting assembly
+        namespace (str): Namespace under which to host container
+        context (dict): Asset information
+        loader (load.LoaderPlugin): Loader instance used to produce container.
+
+    Returns:
+        None
+
+    """
+
+    data = [
+        ("schema", "openpype:container-2.0"),
+        ("id", AVALON_CONTAINER_ID),
+        ("name", str(name)),
+        ("namespace", str(namespace) if namespace else None),
+        ("loader", str(loader.__class__.__name__)),
+        ("representation", str(context["representation"]["_id"])),
+    ]
+    for key, value in data:
+        container[key] = value
+
+
+def set_container_metadata(object_name, container_data, update=False):
+    """Helper method to directly set the data for a specific container
+
+    Args:
+        object_name (str): The unique object name identifier for the container
+        container_data (dict): The data for the container.
+            Note: 'objectName' is derived from `object_name`, so any
+            'objectName' key in `container_data` will be ignored.
+        update (bool): Whether to only update the dict data.
+
+    """
+    # The objectName is derived from the metadata key, so it is not stored
+    # in the container's data itself.
+    container_data.pop("objectName", None)
+
+    metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY)
+    containers = metadata.get(OPENPYPE_METADATA_CONTAINERS_KEY) or {}
+    if update:
+        existing_data = containers.setdefault(object_name, {})
+        existing_data.update(container_data)  # mutable dict, in-place update
+    else:
+        containers[object_name] = container_data
+    metadata.set("containers", containers)
+
+
+def remove_container_metadata(object_name):
+    """Helper method to remove the data for a specific container"""
+    metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY)
+    containers = metadata.get(OPENPYPE_METADATA_CONTAINERS_KEY)
+    if containers:
+        containers.pop(object_name, None)
+        metadata.set("containers", containers)
+
+
+def set_instance(instance_id, instance_data, update=False):
+    """Helper method to directly set the data for a specific instance
+
+    Args:
+        instance_id (str): Unique identifier for the instance
+        instance_data (dict): The instance data to store in the metadata.
+        update (bool): Whether to only update the dict data.
+    """
+    set_instances({instance_id: instance_data}, update=update)
+
+
+def set_instances(instance_data_by_id, update=False):
+    """Store data for multiple instances at the same time.
+
+    This is more efficient than querying and setting them in the metadata
+    one by one.
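+
+    A minimal usage sketch (the instance ids and data below are
+    hypothetical, only illustrating the expected input structure):
+
+        >>> set_instances({
+        ...     "0a1b2c3d": {"family": "textureSet", "variant": "Main"},
+        ...     "4e5f6a7b": {"family": "workfile", "variant": "Main"},
+        ... }, update=True)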
+    """
+    metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY)
+    instances = metadata.get(OPENPYPE_METADATA_INSTANCES_KEY) or {}
+
+    for instance_id, instance_data in instance_data_by_id.items():
+        if update:
+            existing_data = instances.setdefault(instance_id, {})
+            existing_data.update(instance_data)
+        else:
+            instances[instance_id] = instance_data
+
+    metadata.set("instances", instances)
+
+
+def remove_instance(instance_id):
+    """Helper method to remove the data for a specific instance"""
+    metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY)
+    instances = metadata.get(OPENPYPE_METADATA_INSTANCES_KEY) or {}
+    instances.pop(instance_id, None)
+    metadata.set("instances", instances)
+
+
+def get_instances_by_id():
+    """Return all instances stored in the project instances metadata"""
+    if not substance_painter.project.is_open():
+        return {}
+
+    metadata = substance_painter.project.Metadata(OPENPYPE_METADATA_KEY)
+    return metadata.get(OPENPYPE_METADATA_INSTANCES_KEY) or {}
+
+
+def get_instances():
+    """Return all instances stored in the project instances as a list"""
+    return list(get_instances_by_id().values())
diff --git a/openpype/hosts/substancepainter/deploy/plugins/openpype_plugin.py b/openpype/hosts/substancepainter/deploy/plugins/openpype_plugin.py
new file mode 100644
index 0000000000..e7e1849546
--- /dev/null
+++ b/openpype/hosts/substancepainter/deploy/plugins/openpype_plugin.py
@@ -0,0 +1,36 @@
+
+
+def cleanup_openpype_qt_widgets():
+    """
+    Workaround for Substance failing to shut down correctly
+    when a Qt window was still open at the time of shutdown.
+
+    This seems to work sometimes, but not all the time.
+
+    """
+    # TODO: Create a more reliable method to close down all OpenPype Qt widgets
+    from PySide2 import QtWidgets
+    import substance_painter.ui
+
+    # Kill OpenPype Qt widgets
+    print("Killing OpenPype Qt widgets..")
+    for widget in QtWidgets.QApplication.topLevelWidgets():
+        if widget.__module__.startswith("openpype."):
+            print(f"Deleting widget: {widget.__class__.__name__}")
+            substance_painter.ui.delete_ui_element(widget)
+
+
+def start_plugin():
+    from openpype.pipeline import install_host
+    from openpype.hosts.substancepainter.api import SubstanceHost
+    install_host(SubstanceHost())
+
+
+def close_plugin():
+    from openpype.pipeline import uninstall_host
+    cleanup_openpype_qt_widgets()
+    uninstall_host()
+
+
+if __name__ == "__main__":
+    start_plugin()
diff --git a/openpype/hosts/substancepainter/deploy/startup/openpype_load_on_first_run.py b/openpype/hosts/substancepainter/deploy/startup/openpype_load_on_first_run.py
new file mode 100644
index 0000000000..04b610b4df
--- /dev/null
+++ b/openpype/hosts/substancepainter/deploy/startup/openpype_load_on_first_run.py
@@ -0,0 +1,43 @@
+"""Ease the OpenPype on-boarding process by loading the plug-in on first run"""
+
+OPENPYPE_PLUGIN_NAME = "openpype_plugin"
+
+
+def start_plugin():
+    try:
+        # This isn't exposed in the official API so we keep it in a try-except
+        from painter_plugins_ui import (
+            get_settings,
+            LAUNCH_AT_START_KEY,
+            ON_STATE,
+            PLUGINS_MENU,
+            plugin_manager
+        )
+
+        # The `painter_plugins_ui` plug-in itself is also a startup plug-in,
+        # so it could run either earlier or later than this startup script;
+        # we check whether its menu has been initialized to tell which.
+        is_before_plugins_menu = PLUGINS_MENU is None
+
+        settings = get_settings(OPENPYPE_PLUGIN_NAME)
+        if settings.value(LAUNCH_AT_START_KEY, None) is None:
+            print("Initializing OpenPype plug-in on first run...")
+            if is_before_plugins_menu:
+                print("- running before 'painter_plugins_ui'")
+                # Delay the launch to the painter_plugins_ui initialization
+                settings.setValue(LAUNCH_AT_START_KEY, ON_STATE)
+            else:
+                # Launch now
+                print("- running after 'painter_plugins_ui'")
+                plugin_manager(OPENPYPE_PLUGIN_NAME)(True)
+
+                # Set the checked state in the menu to avoid confusion
+                action = next((action for action
+                               in PLUGINS_MENU._menu.actions()
+                               if action.text() == OPENPYPE_PLUGIN_NAME),
+                              None)
+                if action is not None:
+                    action.blockSignals(True)
+                    action.setChecked(True)
+                    action.blockSignals(False)
+
+    except Exception as exc:
+        print(exc)
diff --git a/openpype/hosts/substancepainter/plugins/create/create_textures.py b/openpype/hosts/substancepainter/plugins/create/create_textures.py
new file mode 100644
index 0000000000..dece4b2cc1
--- /dev/null
+++ b/openpype/hosts/substancepainter/plugins/create/create_textures.py
@@ -0,0 +1,162 @@
+# -*- coding: utf-8 -*-
+"""Creator plugin for creating textures."""
+
+from openpype.pipeline import CreatedInstance, Creator, CreatorError
+from openpype.lib import (
+    EnumDef,
+    UILabelDef,
+    NumberDef,
+    BoolDef
+)
+
+from openpype.hosts.substancepainter.api.pipeline import (
+    get_instances,
+    set_instance,
+    set_instances,
+    remove_instance
+)
+from openpype.hosts.substancepainter.api.lib import get_export_presets
+
+import substance_painter.project
+
+
+class CreateTextures(Creator):
+    """Create a texture set."""
+    identifier = "io.openpype.creators.substancepainter.textureset"
+    label = "Textures"
+    family = "textureSet"
+    icon = "picture-o"
+
+    default_variant = "Main"
+
+    def create(self, subset_name, instance_data, pre_create_data):
+
+        if not substance_painter.project.is_open():
+            raise CreatorError("Can't create a Texture Set instance without "
+                               "an open project.")
+
+        instance = self.create_instance_in_context(subset_name,
+                                                   instance_data)
+        set_instance(
+            instance_id=instance["instance_id"],
+            instance_data=instance.data_to_store()
+        )
+
+    def collect_instances(self):
+        for instance in get_instances():
+            if (instance.get("creator_identifier") == self.identifier or
+                    instance.get("family") == self.family):
+                self.create_instance_in_context_from_existing(instance)
+
+    def update_instances(self, update_list):
+        instance_data_by_id = {}
+        for instance, _changes in update_list:
+            # Persist the data
+            instance_id = instance.get("instance_id")
+            instance_data = instance.data_to_store()
+            instance_data_by_id[instance_id] = instance_data
+        set_instances(instance_data_by_id, update=True)
+
+    def remove_instances(self, instances):
+        for instance in instances:
+            remove_instance(instance["instance_id"])
+            self._remove_instance_from_context(instance)
+
+    # Helper methods (this might get moved into Creator class)
+    def create_instance_in_context(self, subset_name, data):
+        instance = CreatedInstance(
+            self.family, subset_name, data, self
+        )
+        self.create_context.creator_adds_instance(instance)
+        return instance
+
+    def create_instance_in_context_from_existing(self, data):
+        instance = CreatedInstance.from_existing(data, self)
+        self.create_context.creator_adds_instance(instance)
+        return instance
+
+    def get_instance_attr_defs(self):
+
+        return [
+            EnumDef("exportPresetUrl",
+                    items=get_export_presets(),
+                    label="Output Template"),
+            BoolDef("allowSkippedMaps",
+                    label="Allow Skipped Output Maps",
+                    tooltip="When enabled, the publish is allowed to ignore "
+                            "output maps of the used output template if one "
+                            "or more maps are skipped due to the required "
"channels not being present in the current file.", + default=True), + EnumDef("exportFileFormat", + items={ + None: "Based on output template", + # TODO: Get available extensions from substance API + "bmp": "bmp", + "ico": "ico", + "jpeg": "jpeg", + "jng": "jng", + "pbm": "pbm", + "pgm": "pgm", + "png": "png", + "ppm": "ppm", + "tga": "targa", + "tif": "tiff", + "wap": "wap", + "wbmp": "wbmp", + "xpm": "xpm", + "gif": "gif", + "hdr": "hdr", + "exr": "exr", + "j2k": "j2k", + "jp2": "jp2", + "pfm": "pfm", + "webp": "webp", + # TODO: Unsure why jxr format fails to export + # "jxr": "jpeg-xr", + # TODO: File formats that combine the exported textures + # like psd are not correctly supported due to + # publishing only a single file + # "psd": "psd", + # "sbsar": "sbsar", + }, + default=None, + label="File type"), + EnumDef("exportSize", + items={ + None: "Based on each Texture Set's size", + # The key is size of the texture file in log2. + # (i.e. 10 means 2^10 = 1024) + 7: "128", + 8: "256", + 9: "512", + 10: "1024", + 11: "2048", + 12: "4096" + }, + default=None, + label="Size"), + + EnumDef("exportPadding", + items={ + "passthrough": "No padding (passthrough)", + "infinite": "Dilation infinite", + "transparent": "Dilation + transparent", + "color": "Dilation + default background color", + "diffusion": "Dilation + diffusion" + }, + default="infinite", + label="Padding"), + NumberDef("exportDilationDistance", + minimum=0, + maximum=256, + decimals=0, + default=16, + label="Dilation Distance"), + UILabelDef("*only used with " + "'Dilation + ' padding"), + ] + + def get_pre_create_attr_defs(self): + # Use same attributes as for instance attributes + return self.get_instance_attr_defs() diff --git a/openpype/hosts/substancepainter/plugins/create/create_workfile.py b/openpype/hosts/substancepainter/plugins/create/create_workfile.py new file mode 100644 index 0000000000..d7f31f9dcf --- /dev/null +++ b/openpype/hosts/substancepainter/plugins/create/create_workfile.py @@ -0,0 +1,101 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for creating workfiles.""" + +from openpype.pipeline import CreatedInstance, AutoCreator +from openpype.client import get_asset_by_name + +from openpype.hosts.substancepainter.api.pipeline import ( + set_instances, + set_instance, + get_instances +) + +import substance_painter.project + + +class CreateWorkfile(AutoCreator): + """Workfile auto-creator.""" + identifier = "io.openpype.creators.substancepainter.workfile" + label = "Workfile" + family = "workfile" + icon = "document" + + default_variant = "Main" + + def create(self): + + if not substance_painter.project.is_open(): + return + + variant = self.default_variant + project_name = self.project_name + asset_name = self.create_context.get_current_asset_name() + task_name = self.create_context.get_current_task_name() + host_name = self.create_context.host_name + + # Workfile instance should always exist and must only exist once. + # As such we'll first check if it already exists and is collected. 
+        current_instance = next(
+            (
+                instance for instance in self.create_context.instances
+                if instance.creator_identifier == self.identifier
+            ), None)
+
+        if current_instance is None:
+            self.log.info("Auto-creating workfile instance...")
+            asset_doc = get_asset_by_name(project_name, asset_name)
+            subset_name = self.get_subset_name(
+                variant, task_name, asset_doc, project_name, host_name
+            )
+            data = {
+                "asset": asset_name,
+                "task": task_name,
+                "variant": variant
+            }
+            current_instance = self.create_instance_in_context(subset_name,
+                                                               data)
+        elif (
+            current_instance["asset"] != asset_name
+            or current_instance["task"] != task_name
+        ):
+            # Update the instance context if it is not the same
+            asset_doc = get_asset_by_name(project_name, asset_name)
+            subset_name = self.get_subset_name(
+                variant, task_name, asset_doc, project_name, host_name
+            )
+            current_instance["asset"] = asset_name
+            current_instance["task"] = task_name
+            current_instance["subset"] = subset_name
+
+        set_instance(
+            instance_id=current_instance.get("instance_id"),
+            instance_data=current_instance.data_to_store()
+        )
+
+    def collect_instances(self):
+        for instance in get_instances():
+            if (instance.get("creator_identifier") == self.identifier or
+                    instance.get("family") == self.family):
+                self.create_instance_in_context_from_existing(instance)
+
+    def update_instances(self, update_list):
+        instance_data_by_id = {}
+        for instance, _changes in update_list:
+            # Persist the data
+            instance_id = instance.get("instance_id")
+            instance_data = instance.data_to_store()
+            instance_data_by_id[instance_id] = instance_data
+        set_instances(instance_data_by_id, update=True)
+
+    # Helper methods (this might get moved into Creator class)
+    def create_instance_in_context(self, subset_name, data):
+        instance = CreatedInstance(
+            self.family, subset_name, data, self
+        )
+        self.create_context.creator_adds_instance(instance)
+        return instance
+
+    def create_instance_in_context_from_existing(self, data):
+        instance = CreatedInstance.from_existing(data, self)
+        self.create_context.creator_adds_instance(instance)
+        return instance
diff --git a/openpype/hosts/substancepainter/plugins/load/load_mesh.py b/openpype/hosts/substancepainter/plugins/load/load_mesh.py
new file mode 100644
index 0000000000..822095641d
--- /dev/null
+++ b/openpype/hosts/substancepainter/plugins/load/load_mesh.py
@@ -0,0 +1,124 @@
+from openpype.pipeline import (
+    load,
+    get_representation_path,
+)
+from openpype.pipeline.load import LoadError
+from openpype.hosts.substancepainter.api.pipeline import (
+    imprint_container,
+    set_container_metadata,
+    remove_container_metadata
+)
+from openpype.hosts.substancepainter.api.lib import prompt_new_file_with_mesh
+
+import substance_painter.project
+import qargparse
+
+
+class SubstanceLoadProjectMesh(load.LoaderPlugin):
+    """Load mesh for project"""
+
+    families = ["*"]
+    representations = ["abc", "fbx", "obj", "gltf"]
+
+    label = "Load mesh"
+    order = -10
+    icon = "code-fork"
+    color = "orange"
+
+    options = [
+        qargparse.Boolean(
+            "preserve_strokes",
+            default=True,
+            help="Preserve stroke positions on the mesh.\n"
+                 "(only relevant when loading into existing project)"
+        ),
+        qargparse.Boolean(
+            "import_cameras",
+            default=True,
+            help="Import cameras from the mesh file."
+        )
+    ]
+
+    def load(self, context, name, namespace, data):
+
+        # Get user inputs
+        import_cameras = data.get("import_cameras", True)
+        preserve_strokes = data.get("preserve_strokes", True)
+
+        if not substance_painter.project.is_open():
+            # Allow 'initializing' a new project
+            result = prompt_new_file_with_mesh(mesh_filepath=self.fname)
+            if not result:
+                self.log.info("User cancelled new project prompt.")
+                return
+
+        else:
+            # Reload the mesh
+            settings = substance_painter.project.MeshReloadingSettings(
+                import_cameras=import_cameras,
+                preserve_strokes=preserve_strokes
+            )
+
+            def on_mesh_reload(status: substance_painter.project.ReloadMeshStatus):  # noqa
+                if status == substance_painter.project.ReloadMeshStatus.SUCCESS:  # noqa
+                    self.log.info("Reload succeeded")
+                else:
+                    raise LoadError("Reload of mesh failed")
+
+            path = self.fname
+            substance_painter.project.reload_mesh(path,
+                                                  settings,
+                                                  on_mesh_reload)
+
+        # Store container
+        container = {}
+        project_mesh_object_name = "_ProjectMesh_"
+        imprint_container(container,
+                          name=project_mesh_object_name,
+                          namespace=project_mesh_object_name,
+                          context=context,
+                          loader=self)
+
+        # We want to store some options for updating to keep consistent
+        # behavior from the user's original choice. We don't store
+        # 'preserve_strokes' as we always preserve strokes on updates.
+        container["options"] = {
+            "import_cameras": import_cameras,
+        }
+
+        set_container_metadata(project_mesh_object_name, container)
+
+    def switch(self, container, representation):
+        self.update(container, representation)
+
+    def update(self, container, representation):
+
+        path = get_representation_path(representation)
+
+        # Reload the mesh
+        container_options = container.get("options", {})
+        settings = substance_painter.project.MeshReloadingSettings(
+            import_cameras=container_options.get("import_cameras", True),
+            preserve_strokes=True
+        )
+
+        def on_mesh_reload(status: substance_painter.project.ReloadMeshStatus):
+            if status == substance_painter.project.ReloadMeshStatus.SUCCESS:
+                self.log.info("Reload succeeded")
+            else:
+                raise LoadError("Reload of mesh failed")
+
+        substance_painter.project.reload_mesh(path, settings, on_mesh_reload)
+
+        # Update container representation
+        object_name = container["objectName"]
+        update_data = {"representation": str(representation["_id"])}
+        set_container_metadata(object_name, update_data, update=True)
+
+    def remove(self, container):
+
+        # Remove OpenPype related settings about what model was loaded
+        # or close the project?
+        # TODO: This is likely best 'hidden' away from the user because
+        # this will leave the project's mesh unmanaged.
+        remove_container_metadata(container["objectName"])
diff --git a/openpype/hosts/substancepainter/plugins/publish/collect_current_file.py b/openpype/hosts/substancepainter/plugins/publish/collect_current_file.py
new file mode 100644
index 0000000000..9a37eb0d1c
--- /dev/null
+++ b/openpype/hosts/substancepainter/plugins/publish/collect_current_file.py
@@ -0,0 +1,17 @@
+import pyblish.api
+
+from openpype.pipeline import registered_host
+
+
+class CollectCurrentFile(pyblish.api.ContextPlugin):
+    """Inject the current working file into context"""
+
+    order = pyblish.api.CollectorOrder - 0.49
+    label = "Current Workfile"
+    hosts = ["substancepainter"]
+
+    def process(self, context):
+        host = registered_host()
+        path = host.get_current_workfile()
+        context.data["currentFile"] = path
+        self.log.debug(f"Current workfile: {path}")
diff --git a/openpype/hosts/substancepainter/plugins/publish/collect_textureset_images.py b/openpype/hosts/substancepainter/plugins/publish/collect_textureset_images.py
new file mode 100644
index 0000000000..d11abd1019
--- /dev/null
+++ b/openpype/hosts/substancepainter/plugins/publish/collect_textureset_images.py
@@ -0,0 +1,196 @@
+import os
+import copy
+import pyblish.api
+
+from openpype.pipeline import publish
+
+import substance_painter.textureset
+from openpype.hosts.substancepainter.api.lib import (
+    get_parsed_export_maps,
+    strip_template
+)
+from openpype.pipeline.create import get_subset_name
+from openpype.client import get_asset_by_name
+
+
+class CollectTextureSet(pyblish.api.InstancePlugin):
+    """Collect Texture Set images using an output template config"""
+    # TODO: Production-test usage of color spaces
+    # TODO: Detect what source data channels end up in each file
+
+    label = "Collect Texture Set images"
+    hosts = ["substancepainter"]
+    families = ["textureSet"]
+    order = pyblish.api.CollectorOrder
+
+    def process(self, instance):
+
+        config = self.get_export_config(instance)
+        asset_doc = get_asset_by_name(
+            project_name=instance.context.data["projectName"],
+            asset_name=instance.data["asset"]
+        )
+
+        instance.data["exportConfig"] = config
+        maps = get_parsed_export_maps(config)
+
+        # Let's break the instance into multiple instances to integrate
+        # a subset per generated texture or texture UDIM sequence
+        for (texture_set_name, stack_name), template_maps in maps.items():
+            self.log.info(f"Processing {texture_set_name}/{stack_name}")
+            for template, outputs in template_maps.items():
+                self.log.info(f"Processing {template}")
+                self.create_image_instance(instance, template, outputs,
+                                           asset_doc=asset_doc,
+                                           texture_set_name=texture_set_name,
+                                           stack_name=stack_name)
+
+    def create_image_instance(self, instance, template, outputs,
+                              asset_doc, texture_set_name, stack_name):
+        """Create a new instance per image or UDIM sequence.
+
+        The new instances will be of family `image`.
+
+        """
+
+        context = instance.context
+        first_filepath = outputs[0]["filepath"]
+        fnames = [os.path.basename(output["filepath"]) for output in outputs]
+        ext = os.path.splitext(first_filepath)[1]
+        assert ext.lstrip("."), f"No extension: {ext}"
+
+        always_include_texture_set_name = False  # todo: make this configurable
+        all_texture_sets = substance_painter.textureset.all_texture_sets()
+        texture_set = substance_painter.textureset.TextureSet.from_name(
+            texture_set_name
+        )
+
+        # Define the suffix we want to give this particular texture
+        # set and set up a remapped subset naming for it.
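+        # For example (hypothetical names): variant "Main" with texture
+        # set "body", stack "base" and map "BaseColor" ends up with a
+        # variant of "Main.body.base.BaseColor" for the subset name.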
+ suffix = "" + if always_include_texture_set_name or len(all_texture_sets) > 1: + # More than one texture set, include texture set name + suffix += f".{texture_set_name}" + if texture_set.is_layered_material() and stack_name: + # More than one stack, include stack name + suffix += f".{stack_name}" + + # Always include the map identifier + map_identifier = strip_template(template) + suffix += f".{map_identifier}" + + image_subset = get_subset_name( + # TODO: The family actually isn't 'texture' currently but for now + # this is only done so the subset name starts with 'texture' + family="texture", + variant=instance.data["variant"] + suffix, + task_name=instance.data.get("task"), + asset_doc=asset_doc, + project_name=context.data["projectName"], + host_name=context.data["hostName"], + project_settings=context.data["project_settings"] + ) + + # Prepare representation + representation = { + "name": ext.lstrip("."), + "ext": ext.lstrip("."), + "files": fnames if len(fnames) > 1 else fnames[0], + } + + # Mark as UDIM explicitly if it has UDIM tiles. + if bool(outputs[0].get("udim")): + # The representation for a UDIM sequence should have a `udim` key + # that is a list of all udim tiles (str) like: ["1001", "1002"] + # strings. See CollectTextures plug-in and Integrators. + representation["udim"] = [output["udim"] for output in outputs] + + # Set up the representation for thumbnail generation + # TODO: Simplify this once thumbnail extraction is refactored + staging_dir = os.path.dirname(first_filepath) + representation["tags"] = ["review"] + representation["stagingDir"] = staging_dir + + # Clone the instance + image_instance = context.create_instance(image_subset) + image_instance[:] = instance[:] + image_instance.data.update(copy.deepcopy(instance.data)) + image_instance.data["name"] = image_subset + image_instance.data["label"] = image_subset + image_instance.data["subset"] = image_subset + image_instance.data["family"] = "image" + image_instance.data["families"] = ["image", "textures"] + image_instance.data["representations"] = [representation] + + # Group the textures together in the loader + image_instance.data["subsetGroup"] = instance.data["subset"] + + # Store the texture set name and stack name on the instance + image_instance.data["textureSetName"] = texture_set_name + image_instance.data["textureStackName"] = stack_name + + # Store color space with the instance + # Note: The extractor will assign it to the representation + colorspace = outputs[0].get("colorSpace") + if colorspace: + self.log.debug(f"{image_subset} colorspace: {colorspace}") + image_instance.data["colorspace"] = colorspace + + # Store the instance in the original instance as a member + instance.append(image_instance) + + def get_export_config(self, instance): + """Return an export configuration dict for texture exports. + + This config can be supplied to: + - `substance_painter.export.export_project_textures` + - `substance_painter.export.list_project_textures` + + See documentation on substance_painter.export module about the + formatting of the configuration dictionary. + + Args: + instance (pyblish.api.Instance): Texture Set instance to be + published. 
+
+        Returns:
+            dict: Export config
+
+        """
+
+        creator_attrs = instance.data["creator_attributes"]
+        preset_url = creator_attrs["exportPresetUrl"]
+        self.log.debug(f"Exporting using preset: {preset_url}")
+
+        # See: https://substance3d.adobe.com/documentation/ptpy/api/substance_painter/export  # noqa
+        config = {  # noqa
+            "exportShaderParams": True,
+            "exportPath": publish.get_instance_staging_dir(instance),
+            "defaultExportPreset": preset_url,
+
+            # Custom overrides to the exporter
+            "exportParameters": [
+                {
+                    "parameters": {
+                        "fileFormat": creator_attrs["exportFileFormat"],
+                        "sizeLog2": creator_attrs["exportSize"],
+                        "paddingAlgorithm": creator_attrs["exportPadding"],
+                        "dilationDistance": creator_attrs["exportDilationDistance"]  # noqa
+                    }
+                }
+            ]
+        }
+
+        # Create the list of Texture Sets to export.
+        config["exportList"] = []
+        for texture_set in substance_painter.textureset.all_texture_sets():
+            config["exportList"].append({"rootPath": texture_set.name()})
+
+        # Treat None values from the creator attributes as optional
+        # and remove them so the exporter falls back to its defaults.
+        for override in config["exportParameters"]:
+            parameters = override.get("parameters")
+            for key, value in dict(parameters).items():
+                if value is None:
+                    parameters.pop(key)
+
+        return config
diff --git a/openpype/hosts/substancepainter/plugins/publish/collect_workfile_representation.py b/openpype/hosts/substancepainter/plugins/publish/collect_workfile_representation.py
new file mode 100644
index 0000000000..8d98d0b014
--- /dev/null
+++ b/openpype/hosts/substancepainter/plugins/publish/collect_workfile_representation.py
@@ -0,0 +1,26 @@
+import os
+import pyblish.api
+
+
+class CollectWorkfileRepresentation(pyblish.api.InstancePlugin):
+    """Create a publish representation for the current workfile instance."""
+
+    order = pyblish.api.CollectorOrder
+    label = "Workfile representation"
+    hosts = ["substancepainter"]
+    families = ["workfile"]
+
+    def process(self, instance):
+
+        context = instance.context
+        current_file = context.data["currentFile"]
+
+        folder, file = os.path.split(current_file)
+        filename, ext = os.path.splitext(file)
+
+        instance.data["representations"] = [{
+            "name": ext.lstrip("."),
+            "ext": ext.lstrip("."),
+            "files": file,
+            "stagingDir": folder,
+        }]
diff --git a/openpype/hosts/substancepainter/plugins/publish/extract_textures.py b/openpype/hosts/substancepainter/plugins/publish/extract_textures.py
new file mode 100644
index 0000000000..bb6f15ead9
--- /dev/null
+++ b/openpype/hosts/substancepainter/plugins/publish/extract_textures.py
@@ -0,0 +1,62 @@
+import substance_painter.export
+
+from openpype.pipeline import KnownPublishError, publish
+
+
+class ExtractTextures(publish.Extractor,
+                      publish.ColormanagedPyblishPluginMixin):
+    """Extract Textures using an output template config.
+
+    Note:
+        This Extractor assumes that `collect_textureset_images` has prepared
+        the relevant export config and has also collected the individual
+        image instances for publishing, including their representations.
+        That is why this particular Extractor doesn't specify
+        representations to integrate.
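+
+        At the end of extraction the texture set instance itself is
+        marked with instance.data["integrate"] = False, so only the
+        per-image instances get integrated.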
+ + """ + + label = "Extract Texture Set" + hosts = ["substancepainter"] + families = ["textureSet"] + + # Run before thumbnail extractors + order = publish.Extractor.order - 0.1 + + def process(self, instance): + + config = instance.data["exportConfig"] + result = substance_painter.export.export_project_textures(config) + + if result.status != substance_painter.export.ExportStatus.Success: + raise KnownPublishError( + "Failed to export texture set: {}".format(result.message) + ) + + # Log what files we generated + for (texture_set_name, stack_name), maps in result.textures.items(): + # Log our texture outputs + self.log.info(f"Exported stack: {texture_set_name} {stack_name}") + for texture_map in maps: + self.log.info(f"Exported texture: {texture_map}") + + # We'll insert the color space data for each image instance that we + # added into this texture set. The collector couldn't do so because + # some anatomy and other instance data needs to be collected prior + context = instance.context + for image_instance in instance: + representation = next(iter(image_instance.data["representations"])) + + colorspace = image_instance.data.get("colorspace") + if not colorspace: + self.log.debug("No color space data present for instance: " + f"{image_instance}") + continue + + self.set_representation_colorspace(representation, + context=context, + colorspace=colorspace) + + # The TextureSet instance should not be integrated. It generates no + # output data. Instead the separated texture instances are generated + # from it which themselves integrate into the database. + instance.data["integrate"] = False diff --git a/openpype/hosts/substancepainter/plugins/publish/increment_workfile.py b/openpype/hosts/substancepainter/plugins/publish/increment_workfile.py new file mode 100644 index 0000000000..b45d66fbb1 --- /dev/null +++ b/openpype/hosts/substancepainter/plugins/publish/increment_workfile.py @@ -0,0 +1,23 @@ +import pyblish.api + +from openpype.lib import version_up +from openpype.pipeline import registered_host + + +class IncrementWorkfileVersion(pyblish.api.ContextPlugin): + """Increment current workfile version.""" + + order = pyblish.api.IntegratorOrder + 1 + label = "Increment Workfile Version" + optional = True + hosts = ["substancepainter"] + + def process(self, context): + + assert all(result["success"] for result in context.data["results"]), ( + "Publishing not successful so version is not increased.") + + host = registered_host() + path = context.data["currentFile"] + self.log.info(f"Incrementing current workfile to: {path}") + host.save_workfile(version_up(path)) diff --git a/openpype/hosts/substancepainter/plugins/publish/save_workfile.py b/openpype/hosts/substancepainter/plugins/publish/save_workfile.py new file mode 100644 index 0000000000..4874b5e5c7 --- /dev/null +++ b/openpype/hosts/substancepainter/plugins/publish/save_workfile.py @@ -0,0 +1,27 @@ +import pyblish.api + +from openpype.pipeline import ( + registered_host, + KnownPublishError +) + + +class SaveCurrentWorkfile(pyblish.api.ContextPlugin): + """Save current workfile""" + + label = "Save current workfile" + order = pyblish.api.ExtractorOrder - 0.49 + hosts = ["substancepainter"] + + def process(self, context): + + host = registered_host() + if context.data["currentFile"] != host.get_current_workfile(): + raise KnownPublishError("Workfile has changed during publishing!") + + if host.has_unsaved_changes(): + self.log.info("Saving current file..") + host.save_workfile() + else: + self.log.debug("Skipping workfile save because 
there are no "
+                           "unsaved changes.")
diff --git a/openpype/hosts/substancepainter/plugins/publish/validate_ouput_maps.py b/openpype/hosts/substancepainter/plugins/publish/validate_ouput_maps.py
new file mode 100644
index 0000000000..b57cf4c5a2
--- /dev/null
+++ b/openpype/hosts/substancepainter/plugins/publish/validate_ouput_maps.py
@@ -0,0 +1,109 @@
+import copy
+import os
+
+import pyblish.api
+
+import substance_painter.export
+
+from openpype.pipeline import PublishValidationError
+
+
+class ValidateOutputMaps(pyblish.api.InstancePlugin):
+    """Validate that all output maps of the Output Template are generated.
+
+    Substance Painter skips an output map defined in the Substance Output
+    Template when that map uses channels that the current Substance Painter
+    project has not painted or generated.
+
+    """
+
+    order = pyblish.api.ValidatorOrder
+    label = "Validate output maps"
+    hosts = ["substancepainter"]
+    families = ["textureSet"]
+
+    def process(self, instance):
+
+        config = instance.data["exportConfig"]
+
+        # The Substance Painter API does not allow querying the actual
+        # output maps it will generate without actually exporting the files,
+        # so we run the smallest / fastest export possible.
+        config = copy.deepcopy(config)
+        parameters = config["exportParameters"][0]["parameters"]
+        parameters["sizeLog2"] = [1, 1]  # output 2x2 images (smallest)
+        parameters["paddingAlgorithm"] = "passthrough"  # no dilation (faster)
+        parameters["dithering"] = False  # no dithering (faster)
+
+        result = substance_painter.export.export_project_textures(config)
+        if result.status != substance_painter.export.ExportStatus.Success:
+            raise PublishValidationError(
+                "Failed to export texture set: {}".format(result.message)
+            )
+
+        generated_files = set()
+        for texture_maps in result.textures.values():
+            for texture_map in texture_maps:
+                generated_files.add(os.path.normpath(texture_map))
+                # Directly clean up our temporary export
+                os.remove(texture_map)
+
+        creator_attributes = instance.data.get("creator_attributes", {})
+        allow_skipped_maps = creator_attributes.get("allowSkippedMaps", True)
+        error_report_missing = []
+        for image_instance in instance:
+
+            # Confirm whether the instance has its expected files generated.
+            # We assume there's just one representation and that it is
+            # the actual texture representation from the collector.
+            representation = next(iter(image_instance.data["representations"]))
+            staging_dir = representation["stagingDir"]
+            filenames = representation["files"]
+            if not isinstance(filenames, (list, tuple)):
+                # Convert single file to list
+                filenames = [filenames]
+
+            missing = []
+            for filename in filenames:
+                filepath = os.path.join(staging_dir, filename)
+                filepath = os.path.normpath(filepath)
+                if filepath not in generated_files:
+                    self.log.warning(f"Missing texture: {filepath}")
+                    missing.append(filepath)
+
+            if not missing:
+                continue
+
+            if allow_skipped_maps:
+                # TODO: This is changing state on the instance, which
+                # should not be done during validation.
+                self.log.warning(f"Disabling texture instance: "
+                                 f"{image_instance}")
+                image_instance.data["active"] = False
+                image_instance.data["integrate"] = False
+                representation.setdefault("tags", []).append("delete")
+                continue
+            else:
+                error_report_missing.append((image_instance, missing))
+
+        if error_report_missing:
+
+            message = (
+                "The Texture Set skipped exporting some output maps which "
+                "are defined in the Output Template. This happens if the "
+                "Output Template exports maps from channels which you do "
+                "not have in your current Substance Painter project.\n\n"
+                "To allow this, enable the *Allow Skipped Output Maps* "
+                "setting on the instance.\n\n"
+                f"Instance {instance} skipped exporting output maps:\n"
+                ""
+            )
+
+            for image_instance, missing in error_report_missing:
+                missing_str = ", ".join(missing)
+                message += f"- **{image_instance}** skipped: {missing_str}\n"
+
+            raise PublishValidationError(
+                message=message,
+                title="Missing output maps"
+            )
diff --git a/openpype/hosts/traypublisher/plugins/create/create_movie_batch.py b/openpype/hosts/traypublisher/plugins/create/create_movie_batch.py
index d077131e4c..1bed07f785 100644
--- a/openpype/hosts/traypublisher/plugins/create/create_movie_batch.py
+++ b/openpype/hosts/traypublisher/plugins/create/create_movie_batch.py
@@ -36,11 +36,9 @@ class BatchMovieCreator(TrayPublishCreator):
     # Position batch creator after simple creators
     order = 110
 
-    def __init__(self, project_settings, *args, **kwargs):
-        super(BatchMovieCreator, self).__init__(project_settings,
-                                                *args, **kwargs)
+    def apply_settings(self, project_settings, system_settings):
         creator_settings = (
-            project_settings["traypublisher"]["BatchMovieCreator"]
+            project_settings["traypublisher"]["create"]["BatchMovieCreator"]
         )
         self.default_variants = creator_settings["default_variants"]
         self.default_tasks = creator_settings["default_tasks"]
@@ -151,4 +149,3 @@ class BatchMovieCreator(TrayPublishCreator):
     File names must then contain only asset name, or asset name + version.
     (eg. 'chair.mov', 'chair_v001.mov', not really safe `my_chair_v001.mov`
     """
-
diff --git a/openpype/hosts/traypublisher/plugins/publish/collect_review_frames.py b/openpype/hosts/traypublisher/plugins/publish/collect_review_frames.py
new file mode 100644
index 0000000000..6b41c0dd21
--- /dev/null
+++ b/openpype/hosts/traypublisher/plugins/publish/collect_review_frames.py
@@ -0,0 +1,43 @@
+# -*- coding: utf-8 -*-
+import pyblish.api
+
+
+class CollectReviewInfo(pyblish.api.InstancePlugin):
+    """Collect data required for review instances.
+
+    The ExtractReview plugin requires frame start/end and fps on instance
+    data, which are missing on instances from TrayPublisher.
+
+    Warning:
+        This is a temporary solution to "make it work". It contains changes
+        removed from https://github.com/ynput/OpenPype/pull/4383, reduced
+        to review instances only.
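+
+    The keys copied from the asset entity when missing on the instance
+    are: fps, frameStart, frameEnd, handleStart and handleEnd.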
+ """ + + label = "Collect Review Info" + order = pyblish.api.CollectorOrder + 0.491 + families = ["review"] + hosts = ["traypublisher"] + + def process(self, instance): + asset_entity = instance.data.get("assetEntity") + if instance.data.get("frameStart") is not None or not asset_entity: + self.log.debug("Missing required data on instance") + return + + asset_data = asset_entity["data"] + # Store collected data for logging + collected_data = {} + for key in ( + "fps", + "frameStart", + "frameEnd", + "handleStart", + "handleEnd", + ): + if key in instance.data or key not in asset_data: + continue + value = asset_data[key] + collected_data[key] = value + instance.data[key] = value + self.log.debug("Collected data: {}".format(str(collected_data))) diff --git a/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py b/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py index 1a21715aa2..8a610cf388 100644 --- a/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py +++ b/openpype/hosts/tvpaint/plugins/publish/extract_sequence.py @@ -144,7 +144,7 @@ class ExtractSequence(pyblish.api.Extractor): # Fill tags and new families from project settings tags = [] - if family_lowered == "review": + if "review" in instance.data["families"]: tags.append("review") # Sequence of one frame diff --git a/openpype/hosts/unreal/api/rendering.py b/openpype/hosts/unreal/api/rendering.py index 29e4747f6e..3b54ab08a2 100644 --- a/openpype/hosts/unreal/api/rendering.py +++ b/openpype/hosts/unreal/api/rendering.py @@ -2,8 +2,10 @@ import os import unreal +from openpype.settings import get_project_settings from openpype.pipeline import Anatomy from openpype.hosts.unreal.api import pipeline +from openpype.widgets.message_window import Window queue = None @@ -32,11 +34,20 @@ def start_rendering(): """ Start the rendering process. """ - print("Starting rendering...") + unreal.log("Starting rendering...") # Get selected sequences assets = unreal.EditorUtilityLibrary.get_selected_assets() + if not assets: + Window( + parent=None, + title="No assets selected", + message="No assets selected. Select a render instance.", + level="warning") + raise RuntimeError( + "No assets selected. You need to select a render instance.") + # instances = pipeline.ls_inst() instances = [ a for a in assets @@ -66,6 +77,13 @@ def start_rendering(): ar = unreal.AssetRegistryHelpers.get_asset_registry() + data = get_project_settings(project) + config = None + config_path = str(data.get("unreal").get("render_config_path")) + if config_path and unreal.EditorAssetLibrary.does_asset_exist(config_path): + unreal.log("Found saved render configuration") + config = ar.get_asset_by_object_path(config_path).get_asset() + for i in inst_data: sequence = ar.get_asset_by_object_path(i["sequence"]).get_asset() @@ -81,55 +99,80 @@ def start_rendering(): # Get all the sequences to render. If there are subsequences, # add them and their frame ranges to the render list. We also # use the names for the output paths. 
- for s in sequences: - subscenes = pipeline.get_subsequences(s.get('sequence')) + for seq in sequences: + subscenes = pipeline.get_subsequences(seq.get('sequence')) if subscenes: - for ss in subscenes: + for sub_seq in subscenes: sequences.append({ - "sequence": ss.get_sequence(), - "output": (f"{s.get('output')}/" - f"{ss.get_sequence().get_name()}"), + "sequence": sub_seq.get_sequence(), + "output": (f"{seq.get('output')}/" + f"{sub_seq.get_sequence().get_name()}"), "frame_range": ( - ss.get_start_frame(), ss.get_end_frame()) + sub_seq.get_start_frame(), sub_seq.get_end_frame()) }) else: # Avoid rendering camera sequences - if "_camera" not in s.get('sequence').get_name(): - render_list.append(s) + if "_camera" not in seq.get('sequence').get_name(): + render_list.append(seq) # Create the rendering jobs and add them to the queue. - for r in render_list: + for render_setting in render_list: job = queue.allocate_new_job(unreal.MoviePipelineExecutorJob) job.sequence = unreal.SoftObjectPath(i["master_sequence"]) job.map = unreal.SoftObjectPath(i["master_level"]) job.author = "OpenPype" + # If we have a saved configuration, copy it to the job. + if config: + job.get_configuration().copy_from(config) + # User data could be used to pass data to the job, that can be # read in the job's OnJobFinished callback. We could, # for instance, pass the AvalonPublishInstance's path to the job. # job.user_data = "" + output_dir = render_setting.get('output') + shot_name = render_setting.get('sequence').get_name() + settings = job.get_configuration().find_or_add_setting_by_class( unreal.MoviePipelineOutputSetting) settings.output_resolution = unreal.IntPoint(1920, 1080) - settings.custom_start_frame = r.get("frame_range")[0] - settings.custom_end_frame = r.get("frame_range")[1] + settings.custom_start_frame = render_setting.get("frame_range")[0] + settings.custom_end_frame = render_setting.get("frame_range")[1] settings.use_custom_playback_range = True - settings.file_name_format = "{sequence_name}.{frame_number}" - settings.output_directory.path = f"{render_dir}/{r.get('output')}" - - renderPass = job.get_configuration().find_or_add_setting_by_class( - unreal.MoviePipelineDeferredPassBase) - renderPass.disable_multisample_effects = True + settings.file_name_format = f"{shot_name}" + ".{frame_number}" + settings.output_directory.path = f"{render_dir}/{output_dir}" job.get_configuration().find_or_add_setting_by_class( - unreal.MoviePipelineImageSequenceOutput_PNG) + unreal.MoviePipelineDeferredPassBase) + + render_format = data.get("unreal").get("render_format", "png") + + if render_format == "png": + job.get_configuration().find_or_add_setting_by_class( + unreal.MoviePipelineImageSequenceOutput_PNG) + elif render_format == "exr": + job.get_configuration().find_or_add_setting_by_class( + unreal.MoviePipelineImageSequenceOutput_EXR) + elif render_format == "jpg": + job.get_configuration().find_or_add_setting_by_class( + unreal.MoviePipelineImageSequenceOutput_JPG) + elif render_format == "bmp": + job.get_configuration().find_or_add_setting_by_class( + unreal.MoviePipelineImageSequenceOutput_BMP) # If there are jobs in the queue, start the rendering process. 
 if queue.get_jobs():
         global executor
         executor = unreal.MoviePipelinePIEExecutor()
+
+        preroll_frames = data.get("unreal").get("preroll_frames", 0)
+
+        settings = unreal.MoviePipelinePIEExecutorSettings()
+        settings.set_editor_property(
+            "initial_delay_frame_count", preroll_frames)
+
         executor.on_executor_finished_delegate.add_callable_unique(
             _queue_finish_callback)
         executor.on_individual_job_finished_delegate.add_callable_unique(
diff --git a/openpype/hosts/unreal/hooks/pre_workfile_preparation.py b/openpype/hosts/unreal/hooks/pre_workfile_preparation.py
index da12bc75de..efbacc3b16 100644
--- a/openpype/hosts/unreal/hooks/pre_workfile_preparation.py
+++ b/openpype/hosts/unreal/hooks/pre_workfile_preparation.py
@@ -24,7 +24,7 @@ class UnrealPrelaunchHook(PreLaunchHook):
     """Hook to handle launching Unreal.
 
     This hook will check if current workfile path has Unreal
-    project inside. IF not, it initialize it and finally it pass
+    project inside. If not, it initializes it, and finally it passes
     path to the project by environment variable to Unreal launcher
     shell script.
 
@@ -61,10 +61,10 @@ class UnrealPrelaunchHook(PreLaunchHook):
             project_name=project_doc["name"]
         )
         # Fill templates
-        filled_anatomy = anatomy.format(workdir_data)
+        template_obj = anatomy.templates_obj[workfile_template_key]["file"]
 
         # Return filename
-        return filled_anatomy[workfile_template_key]["file"]
+        return template_obj.format_strict(workdir_data)
 
     def exec_plugin_install(self, engine_path: Path, env: dict = None):
         # set up the QThread and worker with necessary signals
@@ -141,6 +141,7 @@ class UnrealPrelaunchHook(PreLaunchHook):
     def execute(self):
         """Hook entry method."""
         workdir = self.launch_context.env["AVALON_WORKDIR"]
+        executable = str(self.launch_context.executable)
         engine_version = self.app_name.split("/")[-1].replace("-", ".")
         try:
             if int(engine_version.split(".")[0]) < 4 and \
@@ -152,7 +153,7 @@ class UnrealPrelaunchHook(PreLaunchHook):
             # there can be string in minor version and in that case
             # int cast is failing. This probably happens only with
             # early access versions and is of no concert for this check
-            # so lets keep it quite.
+            # so let's keep it quiet.
             ...
 
         unreal_project_filename = self._get_work_filename()
@@ -183,26 +184,6 @@ class UnrealPrelaunchHook(PreLaunchHook):
             f"[ {engine_version} ]"
         ))
 
-        detected = unreal_lib.get_engine_versions(self.launch_context.env)
-        detected_str = ', '.join(detected.keys()) or 'none'
-        self.log.info((
-            f"{self.signature} detected UE versions: "
-            f"[ {detected_str} ]"
-        ))
-        if not detected:
-            raise ApplicationNotFound("No Unreal Engines are found.")
-
-        engine_version = ".".join(engine_version.split(".")[:2])
-        if engine_version not in detected.keys():
-            raise ApplicationLaunchFailed((
-                f"{self.signature} requested version not "
-                f"detected [ {engine_version} ]"
-            ))
-
-        ue_path = unreal_lib.get_editor_exe_path(
-            Path(detected[engine_version]), engine_version)
-
-        self.launch_context.launch_args = [ue_path.as_posix()]
         project_path.mkdir(parents=True, exist_ok=True)
 
         # Set "OPENPYPE_UNREAL_PLUGIN" to current process environment for
@@ -217,7 +198,9 @@ class UnrealPrelaunchHook(PreLaunchHook):
         if self.launch_context.env.get(env_key):
             os.environ[env_key] = self.launch_context.env[env_key]
 
-        engine_path: Path = Path(detected[engine_version])
+        # engine_path points to the specific Unreal Engine root,
+        # so we go up 3 levels from the executable itself.
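+        # e.g. (hypothetical install layout):
+        #   .../UE_5.1/Engine/Binaries/Win64/UnrealEditor.exe
+        #   -> Path(executable).parents[3] == .../UE_5.1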
+ engine_path: Path = Path(executable).parents[3] if not unreal_lib.check_plugin_existence(engine_path): self.exec_plugin_install(engine_path) diff --git a/openpype/hosts/unreal/lib.py b/openpype/hosts/unreal/lib.py index 86ce0bb033..05fc87b318 100644 --- a/openpype/hosts/unreal/lib.py +++ b/openpype/hosts/unreal/lib.py @@ -23,6 +23,8 @@ def get_engine_versions(env=None): Location can be overridden by `UNREAL_ENGINE_LOCATION` environment variable. + .. deprecated:: 3.15.4 + Args: env (dict, optional): Environment to use. @@ -103,6 +105,8 @@ def _win_get_engine_versions(): This file is JSON file listing installed stuff, Unreal engines are marked with `"AppName" = "UE_X.XX"`` like `UE_4.24` + .. deprecated:: 3.15.4 + Returns: dict: version as a key and path as a value. @@ -122,6 +126,8 @@ def _darwin_get_engine_version() -> dict: It works the same as on Windows, just JSON file location is different. + .. deprecated:: 3.15.4 + Returns: dict: version as a key and path as a value. @@ -144,6 +150,8 @@ def _darwin_get_engine_version() -> dict: def _parse_launcher_locations(install_json_path: str) -> dict: """This will parse locations from json file. + .. deprecated:: 3.15.4 + Args: install_json_path (str): Path to `LauncherInstalled.dat`. diff --git a/openpype/hosts/unreal/plugins/create/create_render.py b/openpype/hosts/unreal/plugins/create/create_render.py index 5834d2e7a7..b9c443c456 100644 --- a/openpype/hosts/unreal/plugins/create/create_render.py +++ b/openpype/hosts/unreal/plugins/create/create_render.py @@ -1,14 +1,22 @@ # -*- coding: utf-8 -*- +from pathlib import Path + import unreal -from openpype.pipeline import CreatorError from openpype.hosts.unreal.api.pipeline import ( - get_subsequences + UNREAL_VERSION, + create_folder, + get_subsequences, ) from openpype.hosts.unreal.api.plugin import ( UnrealAssetCreator ) -from openpype.lib import UILabelDef +from openpype.lib import ( + UILabelDef, + UISeparatorDef, + BoolDef, + NumberDef +) class CreateRender(UnrealAssetCreator): @@ -19,7 +27,92 @@ class CreateRender(UnrealAssetCreator): family = "render" icon = "eye" - def create(self, subset_name, instance_data, pre_create_data): + def create_instance( + self, instance_data, subset_name, pre_create_data, + selected_asset_path, master_seq, master_lvl, seq_data + ): + instance_data["members"] = [selected_asset_path] + instance_data["sequence"] = selected_asset_path + instance_data["master_sequence"] = master_seq + instance_data["master_level"] = master_lvl + instance_data["output"] = seq_data.get('output') + instance_data["frameStart"] = seq_data.get('frame_range')[0] + instance_data["frameEnd"] = seq_data.get('frame_range')[1] + + super(CreateRender, self).create( + subset_name, + instance_data, + pre_create_data) + + def create_with_new_sequence( + self, subset_name, instance_data, pre_create_data + ): + # If the option to create a new level sequence is selected, + # create a new level sequence and a master level. 
+
+        root = "/Game/OpenPype/Sequences"
+
+        # Create a new folder for the sequence in root
+        sequence_dir_name = create_folder(root, subset_name)
+        sequence_dir = f"{root}/{sequence_dir_name}"
+
+        unreal.log_warning(f"sequence_dir: {sequence_dir}")
+
+        # Create the level sequence
+        asset_tools = unreal.AssetToolsHelpers.get_asset_tools()
+        seq = asset_tools.create_asset(
+            asset_name=subset_name,
+            package_path=sequence_dir,
+            asset_class=unreal.LevelSequence,
+            factory=unreal.LevelSequenceFactoryNew())
+
+        seq.set_playback_start(pre_create_data.get("start_frame"))
+        seq.set_playback_end(pre_create_data.get("end_frame"))
+
+        pre_create_data["members"] = [seq.get_path_name()]
+
+        unreal.EditorAssetLibrary.save_asset(seq.get_path_name())
+
+        # Create the master level
+        if UNREAL_VERSION.major >= 5:
+            curr_level = unreal.LevelEditorSubsystem().get_current_level()
+        else:
+            world = unreal.EditorLevelLibrary.get_editor_world()
+            levels = unreal.EditorLevelUtils.get_levels(world)
+            curr_level = levels[0] if len(levels) else None
+            if not curr_level:
+                raise RuntimeError("No level loaded.")
+        curr_level_path = curr_level.get_outer().get_path_name()
+
+        # If the level path does not start with "/Game/", the current
+        # level is a temporary, unsaved level.
+        if curr_level_path.startswith("/Game/"):
+            if UNREAL_VERSION.major >= 5:
+                unreal.LevelEditorSubsystem().save_current_level()
+            else:
+                unreal.EditorLevelLibrary.save_current_level()
+
+        ml_path = f"{sequence_dir}/{subset_name}_MasterLevel"
+
+        if UNREAL_VERSION.major >= 5:
+            unreal.LevelEditorSubsystem().new_level(ml_path)
+        else:
+            unreal.EditorLevelLibrary.new_level(ml_path)
+
+        seq_data = {
+            "sequence": seq,
+            "output": f"{seq.get_name()}",
+            "frame_range": (
+                seq.get_playback_start(),
+                seq.get_playback_end())}
+
+        self.create_instance(
+            instance_data, subset_name, pre_create_data,
+            seq.get_path_name(), seq.get_path_name(), ml_path, seq_data)
+
+    def create_from_existing_sequence(
+        self, subset_name, instance_data, pre_create_data
+    ):
         ar = unreal.AssetRegistryHelpers.get_asset_registry()
 
         sel_objects = unreal.EditorUtilityLibrary.get_selected_assets()
@@ -27,8 +120,8 @@ class CreateRender(UnrealAssetCreator):
             a.get_path_name() for a in sel_objects
             if a.get_class().get_name() == "LevelSequence"]
 
-        if not selection:
-            raise CreatorError("Please select at least one Level Sequence.")
+        if len(selection) == 0:
+            raise RuntimeError("Please select at least one Level Sequence.")
 
         seq_data = None
 
@@ -42,28 +135,38 @@ class CreateRender(UnrealAssetCreator):
                     f"Skipping {selected_asset.get_name()}. It isn't a Level "
                     "Sequence.")
 
-            # The asset name is the third element of the path which
-            # contains the map.
-            # To take the asset name, we remove from the path the prefix
-            # "/Game/OpenPype/" and then we split the path by "/".
-            sel_path = selected_asset_path
-            asset_name = sel_path.replace("/Game/OpenPype/", "").split("/")[0]
+            if pre_create_data.get("use_hierarchy"):
+                # The asset name is the third element of the path which
+                # contains the map.
+                # To take the asset name, we remove from the path the prefix
+                # "/Game/OpenPype/" and then we split the path by "/".
+                sel_path = selected_asset_path
+                asset_name = sel_path.replace(
+                    "/Game/OpenPype/", "").split("/")[0]
+
+                search_path = f"/Game/OpenPype/{asset_name}"
+            else:
+                search_path = Path(selected_asset_path).parent.as_posix()
 
             # Get the master sequence and the master level.
             # There should be only one sequence and one level in the directory.
-            ar_filter = unreal.ARFilter(
-                class_names=["LevelSequence"],
-                package_paths=[f"/Game/OpenPype/{asset_name}"],
-                recursive_paths=False)
-            sequences = ar.get_assets(ar_filter)
-            master_seq = sequences[0].get_asset().get_path_name()
-            master_seq_obj = sequences[0].get_asset()
-            ar_filter = unreal.ARFilter(
-                class_names=["World"],
-                package_paths=[f"/Game/OpenPype/{asset_name}"],
-                recursive_paths=False)
-            levels = ar.get_assets(ar_filter)
-            master_lvl = levels[0].get_asset().get_path_name()
+            try:
+                ar_filter = unreal.ARFilter(
+                    class_names=["LevelSequence"],
+                    package_paths=[search_path],
+                    recursive_paths=False)
+                sequences = ar.get_assets(ar_filter)
+                master_seq = sequences[0].get_asset().get_path_name()
+                master_seq_obj = sequences[0].get_asset()
+                ar_filter = unreal.ARFilter(
+                    class_names=["World"],
+                    package_paths=[search_path],
+                    recursive_paths=False)
+                levels = ar.get_assets(ar_filter)
+                master_lvl = levels[0].get_asset().get_path_name()
+            except IndexError:
+                raise RuntimeError(
+                    "Could not find the hierarchy for the selected sequence.")
 
         # If the selected asset is the master sequence, we get its data
         # and then we create the instance for the master sequence.
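(Aside: a minimal, self-contained sketch of the two search-path derivations used in the hunk above. The asset paths are hypothetical examples; only standard-library `pathlib` behavior is shown.)

```python
from pathlib import Path

# Hypothetical sub-sequence selected in the Content Browser.
selected_asset_path = "/Game/OpenPype/shot010/cuts/shot010_cut01"

# "use_hierarchy" enabled: search from the OpenPype asset root folder.
asset_name = selected_asset_path.replace("/Game/OpenPype/", "").split("/")[0]
hierarchy_search_path = f"/Game/OpenPype/{asset_name}"

# "use_hierarchy" disabled: search only the selection's own folder.
local_search_path = Path(selected_asset_path).parent.as_posix()

assert hierarchy_search_path == "/Game/OpenPype/shot010"
assert local_search_path == "/Game/OpenPype/shot010/cuts"
```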
+ ), + BoolDef( + "use_hierarchy", + label="Use Hierarchy", + default=False + ), ] diff --git a/openpype/hosts/unreal/plugins/load/load_animation.py b/openpype/hosts/unreal/plugins/load/load_animation.py index 1fe0bef462..f0c08680d3 100644 --- a/openpype/hosts/unreal/plugins/load/load_animation.py +++ b/openpype/hosts/unreal/plugins/load/load_animation.py @@ -156,7 +156,7 @@ class AnimationFBXLoader(plugin.Loader): package_paths=[f"{root}/{hierarchy[0]}"], recursive_paths=False) levels = ar.get_assets(_filter) - master_level = levels[0].get_editor_property('object_path') + master_level = levels[0].get_full_name() hierarchy_dir = root for h in hierarchy: @@ -168,7 +168,7 @@ class AnimationFBXLoader(plugin.Loader): package_paths=[f"{hierarchy_dir}/"], recursive_paths=True) levels = ar.get_assets(_filter) - level = levels[0].get_editor_property('object_path') + level = levels[0].get_full_name() unreal.EditorLevelLibrary.save_all_dirty_levels() unreal.EditorLevelLibrary.load_level(level) diff --git a/openpype/hosts/unreal/plugins/load/load_layout.py b/openpype/hosts/unreal/plugins/load/load_layout.py index 63d415a52b..f0663a8778 100644 --- a/openpype/hosts/unreal/plugins/load/load_layout.py +++ b/openpype/hosts/unreal/plugins/load/load_layout.py @@ -819,7 +819,7 @@ class LayoutLoader(plugin.Loader): recursive_paths=False) levels = ar.get_assets(filter) - layout_level = levels[0].get_editor_property('object_path') + layout_level = levels[0].get_full_name() EditorLevelLibrary.save_all_dirty_levels() EditorLevelLibrary.load_level(layout_level) @@ -919,7 +919,7 @@ class LayoutLoader(plugin.Loader): package_paths=[f"{root}/{ms_asset}"], recursive_paths=False) levels = ar.get_assets(_filter) - master_level = levels[0].get_editor_property('object_path') + master_level = levels[0].get_full_name() sequences = [master_sequence] diff --git a/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py b/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py index 87f1338ee8..e6584e130f 100644 --- a/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py +++ b/openpype/hosts/unreal/plugins/publish/validate_sequence_frames.py @@ -20,6 +20,7 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin): def process(self, instance): representations = instance.data.get("representations") for repr in representations: + data = instance.data.get("assetEntity", {}).get("data", {}) patterns = [clique.PATTERNS["frames"]] collections, remainder = clique.assemble( repr["files"], minimum_items=1, patterns=patterns) @@ -30,8 +31,8 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin): frames = list(collection.indexes) current_range = (frames[0], frames[-1]) - required_range = (instance.data["frameStart"], - instance.data["frameEnd"]) + required_range = (data["frameStart"], + data["frameEnd"]) if current_range != required_range: raise ValueError(f"Invalid frame range: {current_range} - " diff --git a/openpype/lib/path_templates.py b/openpype/lib/path_templates.py index 0f99efb430..9be1736abf 100644 --- a/openpype/lib/path_templates.py +++ b/openpype/lib/path_templates.py @@ -256,17 +256,18 @@ class TemplatesDict(object): elif isinstance(templates, dict): self._raw_templates = copy.deepcopy(templates) self._templates = templates - self._objected_templates = self.create_ojected_templates(templates) + self._objected_templates = self.create_objected_templates( + templates) else: raise TypeError("<{}> argument must be a dict, not {}.".format( self.__class__.__name__, str(type(templates)) )) def 
__getitem__(self, key):
-        return self.templates[key]
+        return self.objected_templates[key]
 
     def get(self, key, *args, **kwargs):
-        return self.templates.get(key, *args, **kwargs)
+        return self.objected_templates.get(key, *args, **kwargs)
 
     @property
     def raw_templates(self):
@@ -280,8 +281,21 @@ class TemplatesDict(object):
     def objected_templates(self):
         return self._objected_templates
 
-    @classmethod
-    def create_ojected_templates(cls, templates):
+    def _create_template_object(self, template):
+        """Create template object from a template string.
+
+        Separated into a method to allow changing the class of templates.
+
+        Args:
+            template (str): Template string.
+
+        Returns:
+            StringTemplate: Object of template.
+        """
+
+        return StringTemplate(template)
+
+    def create_objected_templates(self, templates):
         if not isinstance(templates, dict):
             raise TypeError("Expected dict object, got {}".format(
                 str(type(templates))
@@ -297,7 +311,7 @@ class TemplatesDict(object):
             for key in tuple(item.keys()):
                 value = item[key]
                 if isinstance(value, six.string_types):
-                    item[key] = StringTemplate(value)
+                    item[key] = self._create_template_object(value)
                 elif isinstance(value, dict):
                     inner_queue.append(value)
         return objected_templates
diff --git a/openpype/lib/project_backpack.py b/openpype/lib/project_backpack.py
index ff2f1d4b88..07107ec011 100644
--- a/openpype/lib/project_backpack.py
+++ b/openpype/lib/project_backpack.py
@@ -1,16 +1,19 @@
-"""These lib functions are primarily for development purposes.
+"""These lib functions are for development purposes.
 
-WARNING: This is not meant for production data.
+WARNING:
+    This is not meant for production data. Please don't write code which is
+    dependent on functionality here.
 
-Goal is to be able create package of current state of project with related
-documents from mongo and files from disk to zip file and then be able recreate
-the project based on the zip.
+Goal is to be able to create a package of the current state of a project, with
+related documents from mongo and files from disk, into a zip file, and then be
+able to recreate the project based on the zip.
 
 This gives ability to create project where a changes and tests can be done.
 
-Keep in mind that to be able create a package of project has few requirements.
-Possible requirement should be listed in 'pack_project' function.
+Keep in mind that creating a package of a project has a few requirements.
+Possible requirements should be listed in the 'pack_project' function.
 """
+
 import os
 import json
 import platform
@@ -19,16 +22,12 @@ import shutil
 import datetime
 import zipfile
 
-from bson.json_util import (
-    loads,
-    dumps,
-    CANONICAL_JSON_OPTIONS
+from openpype.client.mongo import (
+    load_json_file,
+    get_project_connection,
+    replace_project_documents,
+    store_project_documents,
 )
-from openpype.client import (
-    get_project,
-    get_whole_project,
-)
-from openpype.pipeline import AvalonMongoDB
 
 DOCUMENTS_FILE_NAME = "database"
 METADATA_FILE_NAME = "metadata"
@@ -43,7 +42,52 @@ def add_timestamp(filepath):
     return new_base + ext
 
 
-def pack_project(project_name, destination_dir=None):
+def get_project_document(project_name, database_name=None):
+    """Query project document.
+
+    Function 'get_project' from client api cannot be used as it does not
+    allow changing which 'database_name' is used.
+
+    Args:
+        project_name (str): Name of project.
+        database_name (Optional[str]): Name of mongo database where to look
+            for the project.
+
+    Returns:
+        Union[dict[str, Any], None]: Project document or None.
+ """ + + col = get_project_connection(project_name, database_name) + return col.find_one({"type": "project"}) + + +def _pack_files_to_zip(zip_stream, source_path, root_path): + """Pack files to a zip stream. + + Args: + zip_stream (zipfile.ZipFile): Stream to a zipfile. + source_path (str): Path to a directory where files are. + root_path (str): Path to a directory which is used for calculation + of relative path. + """ + + for root, _, filenames in os.walk(source_path): + for filename in filenames: + filepath = os.path.join(root, filename) + # TODO add one more folder + archive_name = os.path.join( + PROJECT_FILES_DIR, + os.path.relpath(filepath, root_path) + ) + zip_stream.write(filepath, archive_name) + + +def pack_project( + project_name, + destination_dir=None, + only_documents=False, + database_name=None +): """Make a package of a project with mongo documents and files. This function has few restrictions: @@ -52,13 +96,18 @@ def pack_project(project_name, destination_dir=None): "{root[...]}/{project[name]}" Args: - project_name(str): Project that should be packaged. - destination_dir(str): Optional path where zip will be stored. Project's - root is used if not passed. + project_name (str): Project that should be packaged. + destination_dir (Optional[str]): Optional path where zip will be + stored. Project's root is used if not passed. + only_documents (Optional[bool]): Pack only Mongo documents and skip + files. + database_name (Optional[str]): Custom database name from which is + project queried. """ + print("Creating package of project \"{}\"".format(project_name)) # Validate existence of project - project_doc = get_project(project_name) + project_doc = get_project_document(project_name, database_name) if not project_doc: raise ValueError("Project \"{}\" was not found in database".format( project_name @@ -119,12 +168,7 @@ def pack_project(project_name, destination_dir=None): temp_docs_json = s.name # Query all project documents and store them to temp json - docs = list(get_whole_project(project_name)) - data = dumps( - docs, json_options=CANONICAL_JSON_OPTIONS - ) - with open(temp_docs_json, "w") as stream: - stream.write(data) + store_project_documents(project_name, temp_docs_json, database_name) print("Packing files into zip") # Write all to zip file @@ -133,16 +177,10 @@ def pack_project(project_name, destination_dir=None): zip_stream.write(temp_metadata_json, METADATA_FILE_NAME + ".json") # Add database documents zip_stream.write(temp_docs_json, DOCUMENTS_FILE_NAME + ".json") + # Add project files to zip - for root, _, filenames in os.walk(project_source_path): - for filename in filenames: - filepath = os.path.join(root, filename) - # TODO add one more folder - archive_name = os.path.join( - PROJECT_FILES_DIR, - os.path.relpath(filepath, root_path) - ) - zip_stream.write(filepath, archive_name) + if not only_documents: + _pack_files_to_zip(zip_stream, project_source_path, root_path) print("Cleaning up") # Cleanup @@ -152,80 +190,30 @@ def pack_project(project_name, destination_dir=None): print("*** Packing finished ***") -def unpack_project(path_to_zip, new_root=None): - """Unpack project zip file to recreate project. +def _unpack_project_files(unzip_dir, root_path, project_name): + """Move project files from unarchived temp folder to new root. + + Unpack is skipped if source files are not available in the zip. That can + happen if nothing was published yet or only documents were stored to + package. 
Args: - path_to_zip(str): Path to zip which was created using 'pack_project' - function. - new_root(str): Optional way how to set different root path for unpacked - project. + unzip_dir (str): Location where zip was unzipped. + root_path (str): Path to new root. + project_name (str): Name of project. """ - print("Unpacking project from zip {}".format(path_to_zip)) - if not os.path.exists(path_to_zip): - print("Zip file does not exists: {}".format(path_to_zip)) + + src_project_files_dir = os.path.join( + unzip_dir, PROJECT_FILES_DIR, project_name + ) + # Skip if files are not in the zip + if not os.path.exists(src_project_files_dir): return - tmp_dir = tempfile.mkdtemp(prefix="unpack_") - print("Zip is extracted to temp: {}".format(tmp_dir)) - with zipfile.ZipFile(path_to_zip, "r") as zip_stream: - zip_stream.extractall(tmp_dir) - - metadata_json_path = os.path.join(tmp_dir, METADATA_FILE_NAME + ".json") - with open(metadata_json_path, "r") as stream: - metadata = json.load(stream) - - docs_json_path = os.path.join(tmp_dir, DOCUMENTS_FILE_NAME + ".json") - with open(docs_json_path, "r") as stream: - content = stream.readlines() - docs = loads("".join(content)) - - low_platform = platform.system().lower() - project_name = metadata["project_name"] - source_root = metadata["root"] - root_path = source_root[low_platform] - - # Drop existing collection - dbcon = AvalonMongoDB() - database = dbcon.database - if project_name in database.list_collection_names(): - database.drop_collection(project_name) - print("Removed existing project collection") - - print("Creating project documents ({})".format(len(docs))) - # Create new collection with loaded docs - collection = database[project_name] - collection.insert_many(docs) - - # Skip change of root if is the same as the one stored in metadata - if ( - new_root - and (os.path.normpath(new_root) == os.path.normpath(root_path)) - ): - new_root = None - - if new_root: - print("Using different root path {}".format(new_root)) - root_path = new_root - - project_doc = get_project(project_name) - roots = project_doc["config"]["roots"] - key = tuple(roots.keys())[0] - update_key = "config.roots.{}.{}".format(key, low_platform) - collection.update_one( - {"_id": project_doc["_id"]}, - {"$set": { - update_key: new_root - }} - ) - # Make sure root path exists if not os.path.exists(root_path): os.makedirs(root_path) - src_project_files_dir = os.path.join( - tmp_dir, PROJECT_FILES_DIR, project_name - ) dst_project_files_dir = os.path.normpath( os.path.join(root_path, project_name) ) @@ -241,8 +229,83 @@ def unpack_project(path_to_zip, new_root=None): )) shutil.move(src_project_files_dir, dst_project_files_dir) + +def unpack_project( + path_to_zip, new_root=None, database_only=None, database_name=None +): + """Unpack project zip file to recreate project. + + Args: + path_to_zip (str): Path to zip which was created using 'pack_project' + function. + new_root (str): Optional way how to set different root path for + unpacked project. + database_only (Optional[bool]): Unpack only database from zip. + database_name (str): Name of database where project will be recreated. 
+ """ + + if database_only is None: + database_only = False + + print("Unpacking project from zip {}".format(path_to_zip)) + if not os.path.exists(path_to_zip): + print("Zip file does not exists: {}".format(path_to_zip)) + return + + tmp_dir = tempfile.mkdtemp(prefix="unpack_") + print("Zip is extracted to temp: {}".format(tmp_dir)) + with zipfile.ZipFile(path_to_zip, "r") as zip_stream: + if database_only: + for filename in ( + "{}.json".format(METADATA_FILE_NAME), + "{}.json".format(DOCUMENTS_FILE_NAME), + ): + zip_stream.extract(filename, tmp_dir) + else: + zip_stream.extractall(tmp_dir) + + metadata_json_path = os.path.join(tmp_dir, METADATA_FILE_NAME + ".json") + with open(metadata_json_path, "r") as stream: + metadata = json.load(stream) + + docs_json_path = os.path.join(tmp_dir, DOCUMENTS_FILE_NAME + ".json") + docs = load_json_file(docs_json_path) + + low_platform = platform.system().lower() + project_name = metadata["project_name"] + source_root = metadata["root"] + root_path = source_root[low_platform] + + # Drop existing collection + replace_project_documents(project_name, docs, database_name) + print("Creating project documents ({})".format(len(docs))) + + # Skip change of root if is the same as the one stored in metadata + if ( + new_root + and (os.path.normpath(new_root) == os.path.normpath(root_path)) + ): + new_root = None + + if new_root: + print("Using different root path {}".format(new_root)) + root_path = new_root + + project_doc = get_project_document(project_name) + roots = project_doc["config"]["roots"] + key = tuple(roots.keys())[0] + update_key = "config.roots.{}.{}".format(key, low_platform) + collection = get_project_connection(project_name, database_name) + collection.update_one( + {"_id": project_doc["_id"]}, + {"$set": { + update_key: new_root + }} + ) + + _unpack_project_files(tmp_dir, root_path, project_name) + # CLeanup print("Cleaning up") shutil.rmtree(tmp_dir) - dbcon.uninstall() print("*** Unpack finished ***") diff --git a/openpype/lib/usdlib.py b/openpype/lib/usdlib.py index 20703ee308..5ef1d38f87 100644 --- a/openpype/lib/usdlib.py +++ b/openpype/lib/usdlib.py @@ -327,7 +327,8 @@ def get_usd_master_path(asset, subset, representation): else: asset_doc = get_asset_by_name(project_name, asset, fields=["name"]) - formatted_result = anatomy.format( + template_obj = anatomy.templates_obj["publish"]["path"] + path = template_obj.format_strict( { "project": { "name": project_name, @@ -340,7 +341,6 @@ def get_usd_master_path(asset, subset, representation): } ) - path = formatted_result["publish"]["path"] # Remove the version folder subset_folder = os.path.dirname(os.path.dirname(path)) master_folder = os.path.join(subset_folder, "master") diff --git a/openpype/modules/deadline/abstract_submit_deadline.py b/openpype/modules/deadline/abstract_submit_deadline.py index 648eb77007..558a637e4b 100644 --- a/openpype/modules/deadline/abstract_submit_deadline.py +++ b/openpype/modules/deadline/abstract_submit_deadline.py @@ -534,8 +534,8 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin): template_data["comment"] = None anatomy = instance.context.data['anatomy'] - anatomy_filled = anatomy.format(template_data) - template_filled = anatomy_filled["publish"]["path"] + template_obj = anatomy.templates_obj["publish"]["path"] + template_filled = template_obj.format_strict(template_data) file_path = os.path.normpath(template_filled) self.log.info("Using published scene for render {}".format(file_path)) diff --git 
a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py
index 062732c059..a6cdcb7e71 100644
--- a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py
+++ b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py
@@ -142,10 +142,8 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
         job_info.Pool = instance.data.get("primaryPool")
         job_info.SecondaryPool = instance.data.get("secondaryPool")
-        job_info.ChunkSize = instance.data.get("chunkSize", 10)
         job_info.Comment = context.data.get("comment")
         job_info.Priority = instance.data.get("priority", self.priority)
-        job_info.FramesPerTask = instance.data.get("framesPerTask", 1)
 
         if self.group != "none" and self.group:
             job_info.Group = self.group
@@ -327,6 +325,11 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
         job_info = copy.deepcopy(payload_job_info)
         plugin_info = copy.deepcopy(payload_plugin_info)
 
+        # Force plugin reload for vray, because the region does not get
+        # flushed between tile renders.
+        if plugin_info["Renderer"] == "vray":
+            job_info.ForceReloadPlugin = True
+
         # if we have sequence of files, we need to create tile job for
         # every frame
         job_info.TileJob = True
@@ -436,6 +439,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
         assembly_payloads = []
         output_dir = self.job_info.OutputDirectory[0]
+        config_files = []
         for file in assembly_files:
             frame = re.search(R_FRAME_NUMBER, file).group("frame")
@@ -461,6 +465,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
                     datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
                 )
             )
+            config_files.append(config_file)
             try:
                 if not os.path.isdir(output_dir):
                     os.makedirs(output_dir)
@@ -469,8 +474,6 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
                 self.log.warning("Path is unreachable: "
                                  "`{}`".format(output_dir))
 
-            assembly_plugin_info["ConfigFile"] = config_file
-
             with open(config_file, "w") as cf:
                 print("TileCount={}".format(tiles_count), file=cf)
                 print("ImageFileName={}".format(file), file=cf)
@@ -479,6 +482,10 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
                 print("ImageHeight={}".format(
                     instance.data.get("resolutionHeight")), file=cf)
 
+            reversed_y = False
+            if plugin_info["Renderer"] == "arnold":
+                reversed_y = True
+
             with open(config_file, "a") as cf:
                 # Need to reverse the order of the y tiles, because image
                 # coordinates are calculated from bottom left corner.
@@ -489,7 +496,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
                     instance.data.get("resolutionWidth"),
                     instance.data.get("resolutionHeight"),
                     payload_plugin_info["OutputFilePrefix"],
-                    reversed_y=True
+                    reversed_y=reversed_y
                 )[1]
                 for k, v in sorted(tiles.items()):
                     print("{}={}".format(k, v), file=cf)
@@ -518,6 +525,11 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline):
 
         instance.data["assemblySubmissionJobs"] = assembly_job_ids
 
+        # Remove config files to avoid confusion about where data is coming
+        # from in Deadline.
+ for config_file in config_files: + os.remove(config_file) + def _get_maya_payload(self, data): job_info = copy.deepcopy(self.job_info) @@ -878,8 +890,6 @@ def _format_tiles( out["PluginInfo"]["RegionRight{}".format(tile)] = right # Tile config - cfg["Tile{}".format(tile)] = new_filename - cfg["Tile{}Tile".format(tile)] = new_filename cfg["Tile{}FileName".format(tile)] = new_filename cfg["Tile{}X".format(tile)] = left cfg["Tile{}Y".format(tile)] = top diff --git a/openpype/modules/deadline/plugins/publish/submit_publish_job.py b/openpype/modules/deadline/plugins/publish/submit_publish_job.py index 4765772bcf..eeb813cb62 100644 --- a/openpype/modules/deadline/plugins/publish/submit_publish_job.py +++ b/openpype/modules/deadline/plugins/publish/submit_publish_job.py @@ -438,7 +438,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): "Finished copying %i files" % len(resource_files)) def _create_instances_for_aov( - self, instance_data, exp_files, additional_data + self, instance_data, exp_files, additional_data, do_not_add_review ): """Create instance for each AOV found. @@ -449,6 +449,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): instance_data (pyblish.plugin.Instance): skeleton data for instance (those needed) later by collector exp_files (list): list of expected files divided by aovs + additional_data (dict): + do_not_add_review (bool): explicitly skip review Returns: list of instances @@ -514,8 +516,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): app = os.environ.get("AVALON_APP", "") - preview = False - if isinstance(col, list): render_file_name = os.path.basename(col[0]) else: @@ -532,6 +532,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): new_instance = deepcopy(instance_data) new_instance["subset"] = subset_name new_instance["subsetGroup"] = group_name + + preview = preview and not do_not_add_review if preview: new_instance["review"] = True @@ -591,7 +593,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): self.log.debug("instances:{}".format(instances)) return instances - def _get_representations(self, instance, exp_files): + def _get_representations(self, instance, exp_files, do_not_add_review): """Create representations for file sequences. 
This will return representations of expected files if they are not @@ -602,6 +604,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): instance (dict): instance data for which we are setting representations exp_files (list): list of expected files + do_not_add_review (bool): explicitly skip review Returns: list of representations @@ -651,6 +654,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): if instance.get("slate"): frame_start -= 1 + preview = preview and not do_not_add_review rep = { "name": ext, "ext": ext, @@ -705,6 +709,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): preview = match_aov_pattern( host_name, self.aov_filter, remainder ) + preview = preview and not do_not_add_review if preview: rep.update({ "fps": instance.get("fps"), @@ -820,8 +825,12 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): families = [family] # pass review to families if marked as review + do_not_add_review = False if data.get("review"): families.append("review") + elif data.get("review") == False: + self.log.debug("Instance has review explicitly disabled.") + do_not_add_review = True instance_skeleton_data = { "family": family, @@ -977,7 +986,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): instances = self._create_instances_for_aov( instance_skeleton_data, data.get("expectedFiles"), - additional_data + additional_data, + do_not_add_review ) self.log.info("got {} instance{}".format( len(instances), @@ -986,7 +996,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): else: representations = self._get_representations( instance_skeleton_data, - data.get("expectedFiles") + data.get("expectedFiles"), + do_not_add_review ) if "representations" not in instance_skeleton_data.keys(): @@ -1202,10 +1213,11 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): template_data["family"] = "render" template_data["version"] = version - anatomy_filled = anatomy.format(template_data) - - if "folder" in anatomy.templates["render"]: - publish_folder = anatomy_filled["render"]["folder"] + render_templates = anatomy.templates_obj["render"] + if "folder" in render_templates: + publish_folder = render_templates["folder"].format_strict( + template_data + ) else: # solve deprecated situation when `folder` key is not underneath # `publish` anatomy @@ -1215,8 +1227,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): " key underneath `publish` (in global of for project `{}`)." 
).format(project_name))
 
-        file_path = anatomy_filled["render"]["path"]
-        # Directory
+        file_path = render_templates["path"].format_strict(template_data)
         publish_folder = os.path.dirname(file_path)
 
         return publish_folder
diff --git a/openpype/hooks/pre_copy_last_published_workfile.py b/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py
similarity index 56%
rename from openpype/hooks/pre_copy_last_published_workfile.py
rename to openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py
index 26b43c39cb..bbc220945c 100644
--- a/openpype/hooks/pre_copy_last_published_workfile.py
+++ b/openpype/modules/sync_server/launch_hooks/pre_copy_last_published_workfile.py
@@ -1,15 +1,20 @@
 import os
 import shutil
-from time import sleep
+
 from openpype.client.entities import (
-    get_last_version_by_subset_id,
     get_representations,
-    get_subsets,
+    get_project
 )
+
 from openpype.lib import PreLaunchHook
-from openpype.lib.local_settings import get_local_site_id
 from openpype.lib.profiles_filtering import filter_profiles
-from openpype.pipeline.load.utils import get_representation_path
+from openpype.modules.sync_server.sync_server import (
+    download_last_published_workfile,
+)
+from openpype.pipeline.template_data import get_template_data
+from openpype.pipeline.workfile.path_resolving import (
+    get_workfile_template_key,
+)
 from openpype.settings.lib import get_project_settings
 
 
@@ -22,7 +27,11 @@ class CopyLastPublishedWorkfile(PreLaunchHook):
     # Before `AddLastWorkfileToLaunchArgs`
     order = -1
-    app_groups = ["blender", "photoshop", "tvpaint", "aftereffects"]
+    # Any DCC can be used, except TrayPublisher and other special apps
+    app_groups = ["blender", "photoshop", "tvpaint", "aftereffects",
+                  "nuke", "nukeassist", "nukex", "hiero", "nukestudio",
+                  "maya", "harmony", "celaction", "flame", "fusion",
+                  "houdini"]
 
     def execute(self):
         """Check if local workfile doesn't exist, else copy it.
 
         2- Check if workfile in work area doesn't exist
         3- Check if published workfile exists and is copied locally in publish
         4- Substitute copied published workfile as first workfile
+           with the version incremented by 1
 
         Returns:
             None: This is a void method.
""" - sync_server = self.modules_manager.get("sync_server") if not sync_server or not sync_server.enabled: self.log.debug("Sync server module is not enabled or available") @@ -53,6 +62,7 @@ class CopyLastPublishedWorkfile(PreLaunchHook): # Get data project_name = self.data["project_name"] + asset_name = self.data["asset_name"] task_name = self.data["task_name"] task_type = self.data["task_type"] host_name = self.application.host_name @@ -68,6 +78,8 @@ class CopyLastPublishedWorkfile(PreLaunchHook): "hosts": host_name, } last_workfile_settings = filter_profiles(profiles, filter_data) + if not last_workfile_settings: + return use_last_published_workfile = last_workfile_settings.get( "use_last_published_workfile" ) @@ -92,57 +104,27 @@ class CopyLastPublishedWorkfile(PreLaunchHook): ) return + max_retries = int((sync_server.sync_project_settings[project_name] + ["config"] + ["retry_cnt"])) + self.log.info("Trying to fetch last published workfile...") - project_doc = self.data.get("project_doc") asset_doc = self.data.get("asset_doc") anatomy = self.data.get("anatomy") - # Check it can proceed - if not project_doc and not asset_doc: - return + context_filters = { + "asset": asset_name, + "family": "workfile", + "task": {"name": task_name, "type": task_type} + } - # Get subset id - subset_id = next( - ( - subset["_id"] - for subset in get_subsets( - project_name, - asset_ids=[asset_doc["_id"]], - fields=["_id", "data.family", "data.families"], - ) - if subset["data"].get("family") == "workfile" - # Legacy compatibility - or "workfile" in subset["data"].get("families", {}) - ), - None, - ) - if not subset_id: - self.log.debug( - 'No any workfile for asset "{}".'.format(asset_doc["name"]) - ) - return + workfile_representations = list(get_representations( + project_name, + context_filters=context_filters + )) - # Get workfile representation - last_version_doc = get_last_version_by_subset_id( - project_name, subset_id, fields=["_id"] - ) - if not last_version_doc: - self.log.debug("Subset does not have any versions") - return - - workfile_representation = next( - ( - representation - for representation in get_representations( - project_name, version_ids=[last_version_doc["_id"]] - ) - if representation["context"]["task"]["name"] == task_name - ), - None, - ) - - if not workfile_representation: + if not workfile_representations: self.log.debug( 'No published workfile for task "{}" and host "{}".'.format( task_name, host_name @@ -150,28 +132,55 @@ class CopyLastPublishedWorkfile(PreLaunchHook): ) return - local_site_id = get_local_site_id() - sync_server.add_site( - project_name, - workfile_representation["_id"], - local_site_id, - force=True, - priority=99, - reset_timer=True, + filtered_repres = filter( + lambda r: r["context"].get("version") is not None, + workfile_representations ) - - while not sync_server.is_representation_on_site( - project_name, workfile_representation["_id"], local_site_id - ): - sleep(5) - - # Get paths - published_workfile_path = get_representation_path( - workfile_representation, root=anatomy.roots + workfile_representation = max( + filtered_repres, key=lambda r: r["context"]["version"] ) - local_workfile_dir = os.path.dirname(last_workfile) # Copy file and substitute path - self.data["last_workfile_path"] = shutil.copy( - published_workfile_path, local_workfile_dir + last_published_workfile_path = download_last_published_workfile( + host_name, + project_name, + task_name, + workfile_representation, + max_retries, + anatomy=anatomy ) + if not last_published_workfile_path: 
+            self.log.debug(
+                "Could not download the last published workfile."
+            )
+            return
+
+        project_doc = self.data["project_doc"]
+
+        project_settings = self.data["project_settings"]
+        template_key = get_workfile_template_key(
+            task_name, host_name, project_name, project_settings
+        )
+
+        # Get workfile data
+        workfile_data = get_template_data(
+            project_doc, asset_doc, task_name, host_name
+        )
+
+        extension = last_published_workfile_path.split(".")[-1]
+        workfile_data["version"] = (
+            workfile_representation["context"]["version"] + 1)
+        workfile_data["ext"] = extension
+
+        anatomy_result = anatomy.format(workfile_data)
+        local_workfile_path = anatomy_result[template_key]["path"]
+
+        # Copy last published workfile to local workfile directory
+        shutil.copy(
+            last_published_workfile_path,
+            local_workfile_path,
+        )
+
+        self.data["last_workfile_path"] = local_workfile_path
+        # Keep source filepath for further path conformation
+        self.data["source_filepath"] = last_published_workfile_path
diff --git a/openpype/modules/sync_server/sync_server.py b/openpype/modules/sync_server/sync_server.py
index 5b873a37cf..d1d5c2863d 100644
--- a/openpype/modules/sync_server/sync_server.py
+++ b/openpype/modules/sync_server/sync_server.py
@@ -3,10 +3,15 @@ import os
 import asyncio
 import threading
 import concurrent.futures
-from concurrent.futures._base import CancelledError
+from time import sleep
 
 from .providers import lib
+from openpype.client.entity_links import get_linked_representation_id
 from openpype.lib import Logger
+from openpype.lib.local_settings import get_local_site_id
+from openpype.modules.base import ModulesManager
+from openpype.pipeline import Anatomy
+from openpype.pipeline.load.utils import get_representation_path_with_anatomy
 
 from .utils import SyncStatus, ResumableError
 
@@ -189,6 +194,98 @@ def _site_is_working(module, project_name, site_name, site_config):
     return handler.is_active()
 
 
+def download_last_published_workfile(
+    host_name: str,
+    project_name: str,
+    task_name: str,
+    workfile_representation: dict,
+    max_retries: int,
+    anatomy: Anatomy = None,
+) -> str:
+    """Download the last published workfile.
+
+    Args:
+        host_name (str): Host name.
+        project_name (str): Project name.
+        task_name (str): Task name.
+        workfile_representation (dict): Workfile representation.
+        max_retries (int): Treat the download as failed only after this
+            many attempts.
+        anatomy (Anatomy, optional): Anatomy (used for optimization).
+            Defaults to None.
+
+    Returns:
+        str: Path to the localized last published workfile.
+    """
+
+    if not anatomy:
+        anatomy = Anatomy(project_name)
+
+    # Get sync server module
+    sync_server = ModulesManager().modules_by_name.get("sync_server")
+    if not sync_server or not sync_server.enabled:
+        print("Sync server module is disabled or unavailable.")
+        return
+
+    if not workfile_representation:
+        print(
+            "No published workfile for task '{}' and host '{}'.".format(
+                task_name, host_name
+            )
+        )
+        return
+
+    last_published_workfile_path = get_representation_path_with_anatomy(
+        workfile_representation, anatomy
+    )
+    if (not last_published_workfile_path or
+            not os.path.exists(last_published_workfile_path)):
+        return
+
+    # If representation isn't available on remote site, then return.
+    if not sync_server.is_representation_on_site(
+        project_name,
+        workfile_representation["_id"],
+        sync_server.get_remote_site(project_name),
+    ):
+        print(
+            "Representation for task '{}' and host '{}' is not available"
+            " on the remote site.".format(
+                task_name, host_name
+            )
+        )
+        return
+
+    # Get local site
+    local_site_id = get_local_site_id()
+
+    # Add workfile representation to local site
+    representation_ids = {workfile_representation["_id"]}
+    representation_ids.update(
+        get_linked_representation_id(
+            project_name, repre_id=workfile_representation["_id"]
+        )
+    )
+    for repre_id in representation_ids:
+        if not sync_server.is_representation_on_site(project_name, repre_id,
+                                                     local_site_id):
+            sync_server.add_site(
+                project_name,
+                repre_id,
+                local_site_id,
+                force=True,
+                priority=99
+            )
+    sync_server.reset_timer()
+    print("Starting to download: {}".format(last_published_workfile_path))
+    # While representation unavailable locally, wait.
+    while not sync_server.is_representation_on_site(
+        project_name, workfile_representation["_id"], local_site_id,
+        max_retries=max_retries
+    ):
+        sleep(5)
+
+    return last_published_workfile_path
+
+
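(Aside: a hedged usage sketch for the `download_last_published_workfile` helper added above, as a prelaunch hook would call it. The project, task, and representation values are illustrative placeholders, not real data; only the call signature is taken from the function definition.)

```python
from openpype.modules.sync_server.sync_server import (
    download_last_published_workfile,
)

# 'workfile_representation' would normally come from get_representations();
# this placeholder is for illustration only.
workfile_representation = {"_id": "<representation id>"}

path = download_last_published_workfile(
    host_name="maya",           # hypothetical host
    project_name="my_project",  # hypothetical project
    task_name="modeling",       # hypothetical task
    workfile_representation=workfile_representation,
    max_retries=3,              # e.g. 'retry_cnt' from sync server settings
)
if path:
    print("Localized workfile:", path)
```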
 class SyncServerThread(threading.Thread):
     """
        Separate thread running synchronization server with asyncio loop.
@@ -358,7 +455,6 @@ class SyncServerThread(threading.Thread):
                         duration = time.time() - start_time
                         self.log.debug("One loop took {:.2f}s".format(duration))
-
                         delay = self.module.get_loop_delay(project_name)
                         self.log.debug(
                             "Waiting for {} seconds to new loop".format(delay)
@@ -370,8 +466,8 @@ class SyncServerThread(threading.Thread):
                 self.log.warning(
                     "ConnectionResetError in sync loop, trying next loop",
                     exc_info=True)
-            except CancelledError:
-                # just stopping server
+            except asyncio.exceptions.CancelledError:
+                # cancelling timer
                 pass
             except ResumableError:
                 self.log.warning(
diff --git a/openpype/modules/sync_server/sync_server_module.py b/openpype/modules/sync_server/sync_server_module.py
index 5a4fa07e98..b85b045bd9 100644
--- a/openpype/modules/sync_server/sync_server_module.py
+++ b/openpype/modules/sync_server/sync_server_module.py
@@ -838,6 +838,18 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
 
         return ret_dict
 
+    def get_launch_hook_paths(self):
+        """Implementation for applications launch hooks.
+
+        Returns:
+            (str): Full absolute path to directory with hooks for the module.
+        """
+
+        return os.path.join(
+            os.path.dirname(os.path.abspath(__file__)),
+            "launch_hooks"
+        )
+
     # Needs to be refactored after Settings are updated
     # # Methods for Settings to get appriate values to fill forms
     # def get_configurable_items(self, scope=None):
@@ -1045,9 +1057,23 @@ class SyncServerModule(OpenPypeModule, ITrayModule):
             self.sync_server_thread.reset_timer()
 
     def is_representation_on_site(
-        self, project_name, representation_id, site_name
+        self, project_name, representation_id, site_name, max_retries=None
     ):
-        """Checks if 'representation_id' has all files avail. on 'site_name'"""
+        """Checks if 'representation_id' has all files avail. on 'site_name'
+
+        Args:
+            project_name (str)
+            representation_id (str)
+            site_name (str)
+            max_retries (int) (optional): Provide only if the method is used
+                in a while loop, to bail out.
+        Returns:
+            (bool): True if 'representation_id' has all files correctly on
+                the 'site_name'.
+        Raises:
+            (ValueError): Only if 'max_retries' is provided and the
+                upload/download failed too many times, to break an
+                infinite loop check.
+ """ representation = get_representation_by_id(project_name, representation_id, fields=["_id", "files"]) @@ -1060,6 +1086,11 @@ class SyncServerModule(OpenPypeModule, ITrayModule): if site["name"] != site_name: continue + if max_retries: + tries = self._get_tries_count_from_rec(site) + if tries >= max_retries: + raise ValueError("Failed too many times") + if (site.get("progress") or site.get("error") or not site.get("created_dt")): return False diff --git a/openpype/pipeline/anatomy.py b/openpype/pipeline/anatomy.py index 683960f3d8..30748206a3 100644 --- a/openpype/pipeline/anatomy.py +++ b/openpype/pipeline/anatomy.py @@ -19,6 +19,7 @@ from openpype.client import get_project from openpype.lib.path_templates import ( TemplateUnsolved, TemplateResult, + StringTemplate, TemplatesDict, FormatObject, ) @@ -606,6 +607,32 @@ class AnatomyTemplateResult(TemplateResult): return self.__class__(tmp, self.rootless) +class AnatomyStringTemplate(StringTemplate): + """String template which has access to anatomy.""" + + def __init__(self, anatomy_templates, template): + self.anatomy_templates = anatomy_templates + super(AnatomyStringTemplate, self).__init__(template) + + def format(self, data): + """Format template and add 'root' key to data if not available. + + Args: + data (dict[str, Any]): Formatting data for template. + + Returns: + AnatomyTemplateResult: Formatting result. + """ + + anatomy_templates = self.anatomy_templates + if not data.get("root"): + data = copy.deepcopy(data) + data["root"] = anatomy_templates.anatomy.roots + result = StringTemplate.format(self, data) + rootless_path = anatomy_templates.rootless_path_from_result(result) + return AnatomyTemplateResult(result, rootless_path) + + class AnatomyTemplates(TemplatesDict): inner_key_pattern = re.compile(r"(\{@.*?[^{}0]*\})") inner_key_name_pattern = re.compile(r"\{@(.*?[^{}0]*)\}") @@ -615,12 +642,6 @@ class AnatomyTemplates(TemplatesDict): self.anatomy = anatomy self.loaded_project = None - def __getitem__(self, key): - return self.templates[key] - - def get(self, key, default=None): - return self.templates.get(key, default) - def reset(self): self._raw_templates = None self._templates = None @@ -655,12 +676,7 @@ class AnatomyTemplates(TemplatesDict): def _format_value(self, value, data): if isinstance(value, RootItem): return self._solve_dict(value, data) - - result = super(AnatomyTemplates, self)._format_value(value, data) - if isinstance(result, TemplateResult): - rootless_path = self._rootless_path(result, data) - result = AnatomyTemplateResult(result, rootless_path) - return result + return super(AnatomyTemplates, self)._format_value(value, data) def set_templates(self, templates): if not templates: @@ -689,10 +705,13 @@ class AnatomyTemplates(TemplatesDict): solved_templates = self.solve_template_inner_links(templates) self._templates = solved_templates - self._objected_templates = self.create_ojected_templates( + self._objected_templates = self.create_objected_templates( solved_templates ) + def _create_template_object(self, template): + return AnatomyStringTemplate(self, template) + def default_templates(self): """Return default templates data with solved inner keys.""" return self.solve_template_inner_links( @@ -886,7 +905,8 @@ class AnatomyTemplates(TemplatesDict): return keys_by_subkey - def _dict_to_subkeys_list(self, subdict, pre_keys=None): + @classmethod + def _dict_to_subkeys_list(cls, subdict, pre_keys=None): if pre_keys is None: pre_keys = [] output = [] @@ -895,7 +915,7 @@ class AnatomyTemplates(TemplatesDict): 
result = list(pre_keys) result.append(key) if isinstance(value, dict): - for item in self._dict_to_subkeys_list(value, result): + for item in cls._dict_to_subkeys_list(value, result): output.append(item) else: output.append(result) @@ -908,7 +928,17 @@ class AnatomyTemplates(TemplatesDict): return {key_list[0]: value} return {key_list[0]: self._keys_to_dicts(key_list[1:], value)} - def _rootless_path(self, result, final_data): + @classmethod + def rootless_path_from_result(cls, result): + """Calculate rootless path from formatting result. + + Args: + result (TemplateResult): Result of StringTemplate formatting. + + Returns: + str: Rootless path if result contains one of anatomy roots. + """ + used_values = result.used_values missing_keys = result.missing_keys template = result.template @@ -924,7 +954,7 @@ class AnatomyTemplates(TemplatesDict): if "root" in invalid_type: return - root_keys = self._dict_to_subkeys_list({"root": used_values["root"]}) + root_keys = cls._dict_to_subkeys_list({"root": used_values["root"]}) if not root_keys: return diff --git a/openpype/pipeline/context_tools.py b/openpype/pipeline/context_tools.py index 6610fd7da7..dede2b8fce 100644 --- a/openpype/pipeline/context_tools.py +++ b/openpype/pipeline/context_tools.py @@ -463,9 +463,7 @@ def get_workdir_from_session(session=None, template_key=None): session = legacy_io.Session project_name = session["AVALON_PROJECT"] host_name = session["AVALON_APP"] - anatomy = Anatomy(project_name) template_data = get_template_data_from_session(session) - anatomy_filled = anatomy.format(template_data) if not template_key: task_type = template_data["task"]["type"] @@ -474,7 +472,10 @@ def get_workdir_from_session(session=None, template_key=None): host_name, project_name=project_name ) - path = anatomy_filled[template_key]["folder"] + + anatomy = Anatomy(project_name) + template_obj = anatomy.templates_obj[template_key]["folder"] + path = template_obj.format_strict(template_data) if path: path = os.path.normpath(path) return path diff --git a/openpype/pipeline/delivery.py b/openpype/pipeline/delivery.py index 8cf9a43aac..500f54040a 100644 --- a/openpype/pipeline/delivery.py +++ b/openpype/pipeline/delivery.py @@ -1,5 +1,6 @@ """Functions useful for delivery of published representations.""" import os +import copy import shutil import glob import clique @@ -146,12 +147,11 @@ def deliver_single_file( report_items["Source file was not found"].append(msg) return report_items, 0 - anatomy_filled = anatomy.format(anatomy_data) if format_dict: - template_result = anatomy_filled["delivery"][template_name] - delivery_path = template_result.rootless.format(**format_dict) - else: - delivery_path = anatomy_filled["delivery"][template_name] + anatomy_data = copy.deepcopy(anatomy_data) + anatomy_data["root"] = format_dict["root"] + template_obj = anatomy.templates_obj["delivery"][template_name] + delivery_path = template_obj.format_strict(anatomy_data) # Backwards compatibility when extension contained `.` delivery_path = delivery_path.replace("..", ".") @@ -269,14 +269,12 @@ def deliver_sequence( frame_indicator = "@####@" + anatomy_data = copy.deepcopy(anatomy_data) anatomy_data["frame"] = frame_indicator - anatomy_filled = anatomy.format(anatomy_data) - if format_dict: - template_result = anatomy_filled["delivery"][template_name] - delivery_path = template_result.rootless.format(**format_dict) - else: - delivery_path = anatomy_filled["delivery"][template_name] + anatomy_data["root"] = format_dict["root"] + template_obj = 
anatomy.templates_obj["delivery"][template_name]
+    delivery_path = template_obj.format_strict(anatomy_data)
 
     delivery_path = os.path.normpath(delivery_path.replace("\\", "/"))
     delivery_folder = os.path.dirname(delivery_path)
diff --git a/openpype/pipeline/publish/abstract_collect_render.py b/openpype/pipeline/publish/abstract_collect_render.py
index ccb2415346..fd35ddb719 100644
--- a/openpype/pipeline/publish/abstract_collect_render.py
+++ b/openpype/pipeline/publish/abstract_collect_render.py
@@ -58,7 +58,7 @@ class RenderInstance(object):
     # With default values
     # metadata
     renderer = attr.ib(default="")  # renderer - can be used in Deadline
-    review = attr.ib(default=False)  # generate review from instance (bool)
+    review = attr.ib(default=None)  # False - explicitly skip review
     priority = attr.ib(default=50)  # job priority on farm
 
     family = attr.ib(default="renderlayer")
diff --git a/openpype/pipeline/publish/lib.py b/openpype/pipeline/publish/lib.py
index 81913bcdd5..8b6212b3ef 100644
--- a/openpype/pipeline/publish/lib.py
+++ b/openpype/pipeline/publish/lib.py
@@ -7,6 +7,7 @@ import tempfile
 import xml.etree.ElementTree
 
 import six
+import pyblish.util
 import pyblish.plugin
 import pyblish.api
 
@@ -354,6 +355,61 @@ def publish_plugins_discover(paths=None):
     return result
 
 
+def _get_plugin_settings(host_name, project_settings, plugin, log):
+    """Get plugin settings based on host name and plugin name.
+
+    Args:
+        host_name (str): Name of host.
+        project_settings (dict[str, Any]): Project settings.
+        plugin (pyblish.Plugin): Plugin where settings are applied.
+        log (logging.Logger): Logger to log messages.
+
+    Returns:
+        dict[str, Any]: Plugin settings {'attribute': 'value'}.
+    """
+
+    # Use project settings from host name category when available
+    try:
+        return (
+            project_settings
+            [host_name]
+            ["publish"]
+            [plugin.__name__]
+        )
+    except KeyError:
+        pass
+
+    # Settings category determined from path
+    # - usually the path is '<category>/plugins/publish/<plugin file>'
+    # - category can be host name or addon name ('maya', 'deadline', ...)
+    filepath = os.path.normpath(inspect.getsourcefile(plugin))
+
+    split_path = filepath.rsplit(os.path.sep, 5)
+    if len(split_path) < 4:
+        log.warning(
+            'plugin path too short to extract host {}'.format(filepath)
+        )
+        return {}
+
+    category_from_file = split_path[-4]
+    plugin_kind = split_path[-2]
+
+    # TODO: change after all plugins are moved one level up
+    if category_from_file == "openpype":
+        category_from_file = "global"
+
+    try:
+        return (
+            project_settings
+            [category_from_file]
+            [plugin_kind]
+            [plugin.__name__]
+        )
+    except KeyError:
+        pass
+    return {}
+
+
 def filter_pyblish_plugins(plugins):
     """Pyblish plugin filter which applies OpenPype settings.
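(Aside: a minimal sketch of the two-step lookup implemented by `_get_plugin_settings` above, using a toy settings dictionary. The plugin name and settings values are illustrative, not real project settings.)

```python
# Toy project settings, shaped like project_settings[host]["publish"][name].
project_settings = {
    "maya": {
        "publish": {
            "ValidateMeshNormals": {"enabled": True, "optional": False}
        }
    }
}


class ValidateMeshNormals:  # stand-in for a pyblish plugin class
    pass


host_name = "maya"

# Step 1: try the host-name category directly.
try:
    settings = project_settings[host_name]["publish"][
        ValidateMeshNormals.__name__]
except KeyError:
    # Step 2 (not reached here): derive the category from the plugin's
    # source path, '<category>/plugins/<kind>/<plugin file>'.
    settings = {}

assert settings == {"enabled": True, "optional": False}
```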
@@ -372,21 +428,21 @@
     # TODO: Don't use host from 'pyblish.api' but from defined host by us.
     # - kept becau on farm is probably used host 'shell' which propably
     #   affect how settings are applied there
-    host = pyblish.api.current_host()
+    host_name = pyblish.api.current_host()
     project_name = os.environ.get("AVALON_PROJECT")
 
-    project_setting = get_project_settings(project_name)
+    project_settings = get_project_settings(project_name)
     system_settings = get_system_settings()
 
     # iterate over plugins
     for plugin in plugins[:]:
+        # Apply settings to plugins
         if hasattr(plugin, "apply_settings"):
+            # Use classmethod 'apply_settings'
+            # - can be used to target settings from custom settings place
+            # - skip default behavior when successful
            try:
-                # Use classmethod 'apply_settings'
-                # - can be used to target settings from custom settings place
-                # - skip default behavior when successful
-                plugin.apply_settings(project_setting, system_settings)
-                continue
+                plugin.apply_settings(project_settings, system_settings)
 
             except Exception:
                 log.warning(
                     (
@@ -395,53 +451,20 @@
                     ).format(plugin.__name__),
                     exc_info=True
                 )
-
-            try:
-                config_data = (
-                    project_setting
-                    [host]
-                    ["publish"]
-                    [plugin.__name__]
+        else:
+            # Automated
+            plugin_settings = _get_plugin_settings(
+                host_name, project_settings, plugin, log
             )
-        except KeyError:
-            # host determined from path
-            file = os.path.normpath(inspect.getsourcefile(plugin))
-            file = os.path.normpath(file)
-
-            split_path = file.split(os.path.sep)
-            if len(split_path) < 4:
-                log.warning(
-                    'plugin path too short to extract host {}'.format(file)
-                )
-                continue
-
-            host_from_file = split_path[-4]
-            plugin_kind = split_path[-2]
-
-            # TODO: change after all plugins are moved one level up
-            if host_from_file == "openpype":
-                host_from_file = "global"
-
-            try:
-                config_data = (
-                    project_setting
-                    [host_from_file]
-                    [plugin_kind]
-                    [plugin.__name__]
-                )
-            except KeyError:
-                continue
-
-        for option, value in config_data.items():
-            if option == "enabled" and value is False:
-                log.info('removing plugin {}'.format(plugin.__name__))
-                plugins.remove(plugin)
-            else:
-                log.info('setting {}:{} on plugin {}'.format(
+            for option, value in plugin_settings.items():
+                log.info("setting {}:{} on plugin {}".format(
                     option, value, plugin.__name__))
-                setattr(plugin, option, value)
+                setattr(plugin, option, value)
+
+        # Remove disabled plugins
+        if getattr(plugin, "enabled", True) is False:
+            plugins.remove(plugin)
+
 
 def find_close_plugin(close_plugin_name, log):
     if close_plugin_name:
diff --git a/openpype/pipeline/publish/publish_plugins.py b/openpype/pipeline/publish/publish_plugins.py
index 331235fadc..a38896ec8e 100644
--- a/openpype/pipeline/publish/publish_plugins.py
+++ b/openpype/pipeline/publish/publish_plugins.py
@@ -45,7 +45,7 @@ class PublishValidationError(Exception):
 
     def __init__(self, message, title=None, description=None, detail=None):
         self.message = message
-        self.title = title or "< Missing title >"
+        self.title = title
         self.description = description or message
         self.detail = detail
         super(PublishValidationError, self).__init__(message)
diff --git a/openpype/pipeline/workfile/path_resolving.py b/openpype/pipeline/workfile/path_resolving.py
index 801cb7223c..15689f4d99 100644
--- a/openpype/pipeline/workfile/path_resolving.py
+++ b/openpype/pipeline/workfile/path_resolving.py
@@ -132,9 +132,9 @@ def get_workdir_with_workdir_data(
             project_settings
         )
 
-    anatomy_filled = anatomy.format(workdir_data)
+    template_obj = anatomy.templates_obj[template_key]["folder"]
     # Output is TemplateResult object which contain useful data
-    output = anatomy_filled[template_key]["folder"]
+
diff --git a/openpype/pipeline/workfile/workfile_template_builder.py b/openpype/pipeline/workfile/workfile_template_builder.py
index 0ce59de8ad..a3d7340367 100644
--- a/openpype/pipeline/workfile/workfile_template_builder.py
+++ b/openpype/pipeline/workfile/workfile_template_builder.py
@@ -158,7 +158,7 @@ class AbstractTemplateBuilder(object):
     def linked_asset_docs(self):
         if self._linked_asset_docs is None:
             self._linked_asset_docs = get_linked_assets(
-                self.current_asset_doc
+                self.project_name, self.current_asset_doc
             )
         return self._linked_asset_docs
 
@@ -1151,13 +1151,10 @@ class PlaceholderItem(object):
         return self._log
 
     def __repr__(self):
-        name = None
-        if hasattr("name", self):
-            name = self.name
-        if hasattr("_scene_identifier ", self):
-            name = self._scene_identifier
-
-        return "< {} {} >".format(self.__class__.__name__, name)
+        return "< {} {} >".format(
+            self.__class__.__name__,
+            self._scene_identifier
+        )
 
     @property
     def order(self):
@@ -1419,16 +1416,7 @@ class PlaceholderLoadMixin(object):
                 "family": [placeholder.data["family"]]
             }
 
-        elif builder_type != "linked_asset":
-            context_filters = {
-                "asset": [re.compile(placeholder.data["asset"])],
-                "subset": [re.compile(placeholder.data["subset"])],
-                "hierarchy": [re.compile(placeholder.data["hierarchy"])],
-                "representation": [placeholder.data["representation"]],
-                "family": [placeholder.data["family"]]
-            }
-
-        else:
+        elif builder_type == "linked_asset":
             asset_regex = re.compile(placeholder.data["asset"])
             linked_asset_names = []
             for asset_doc in linked_asset_docs:
@@ -1444,6 +1432,15 @@ class PlaceholderLoadMixin(object):
                 "family": [placeholder.data["family"]],
             }
 
+        else:
+            context_filters = {
+                "asset": [re.compile(placeholder.data["asset"])],
+                "subset": [re.compile(placeholder.data["subset"])],
+                "hierarchy": [re.compile(placeholder.data["hierarchy"])],
+                "representation": [placeholder.data["representation"]],
+                "family": [placeholder.data["family"]]
+            }
+
         return list(get_representations(
             project_name,
             context_filters=context_filters
diff --git a/openpype/plugins/publish/collect_anatomy_instance_data.py b/openpype/plugins/publish/collect_anatomy_instance_data.py
index 48171aa957..4fbb93324b 100644
--- a/openpype/plugins/publish/collect_anatomy_instance_data.py
+++ b/openpype/plugins/publish/collect_anatomy_instance_data.py
@@ -50,7 +50,6 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
         project_name = context.data["projectName"]
 
         self.fill_missing_asset_docs(context, project_name)
-        self.fill_instance_data_from_asset(context)
         self.fill_latest_versions(context, project_name)
         self.fill_anatomy_data(context)
 
@@ -115,23 +114,6 @@ class CollectAnatomyInstanceData(pyblish.api.ContextPlugin):
                 "Not found asset documents with names \"{}\"."
             ).format(joined_asset_names))
 
-    def fill_instance_data_from_asset(self, context):
-        for instance in context:
-            asset_doc = instance.data.get("assetEntity")
-            if not asset_doc:
-                continue
-
-            asset_data = asset_doc["data"]
-            for key in (
-                "fps",
-                "frameStart",
-                "frameEnd",
-                "handleStart",
-                "handleEnd",
-            ):
-                if key not in instance.data and key in asset_data:
-                    instance.data[key] = asset_data[key]
-
     def fill_latest_versions(self, context, project_name):
         """Try to find latest version for each instance's subset.
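For the reordered placeholder branches above, the default (non "linked_asset") branch builds regex filters straight from the placeholder data. A minimal sketch with hypothetical placeholder values and project name:

```python
import re

from openpype.client import get_representations

# Illustrative placeholder data; real values come from the scene.
placeholder_data = {
    "asset": "sh010.*",
    "subset": "model.*",
    "hierarchy": ".*",
    "representation": "abc",
    "family": "model",
}
context_filters = {
    "asset": [re.compile(placeholder_data["asset"])],
    "subset": [re.compile(placeholder_data["subset"])],
    "hierarchy": [re.compile(placeholder_data["hierarchy"])],
    "representation": [placeholder_data["representation"]],
    "family": [placeholder_data["family"]],
}
repre_docs = list(get_representations(
    "my_project",  # hypothetical project name
    context_filters=context_filters
))
```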
diff --git a/openpype/plugins/publish/collect_otio_review.py b/openpype/plugins/publish/collect_otio_review.py index 4d8147e70d..f0157282a1 100644 --- a/openpype/plugins/publish/collect_otio_review.py +++ b/openpype/plugins/publish/collect_otio_review.py @@ -87,7 +87,9 @@ class CollectOtioReview(pyblish.api.InstancePlugin): otio_review_clips.append(otio_gap) if otio_review_clips: - instance.data["label"] += " (review)" + # add review track to instance and change label to reflect it + label = instance.data.get("label", instance.data["subset"]) + instance.data["label"] = label + " (review)" instance.data["families"] += ["review", "ftrack"] instance.data["otioReviewClips"] = otio_review_clips self.log.info( diff --git a/openpype/plugins/publish/collect_resources_path.py b/openpype/plugins/publish/collect_resources_path.py index 4a5f9f1cc2..f96dd0ae18 100644 --- a/openpype/plugins/publish/collect_resources_path.py +++ b/openpype/plugins/publish/collect_resources_path.py @@ -83,10 +83,11 @@ class CollectResourcesPath(pyblish.api.InstancePlugin): "hierarchy": instance.data["hierarchy"] }) - anatomy_filled = anatomy.format(template_data) - - if "folder" in anatomy.templates["publish"]: - publish_folder = anatomy_filled["publish"]["folder"] + publish_templates = anatomy.templates_obj["publish"] + if "folder" in publish_templates: + publish_folder = publish_templates["folder"].format_strict( + template_data + ) else: # solve deprecated situation when `folder` key is not underneath # `publish` anatomy @@ -95,8 +96,7 @@ class CollectResourcesPath(pyblish.api.InstancePlugin): " key underneath `publish` (in global of for project `{}`)." ).format(anatomy.project_name)) - file_path = anatomy_filled["publish"]["path"] - # Directory + file_path = publish_templates["path"].format_strict(template_data) publish_folder = os.path.dirname(file_path) publish_folder = os.path.normpath(publish_folder) diff --git a/openpype/plugins/publish/extract_burnin.py b/openpype/plugins/publish/extract_burnin.py index 88a9ac6f2f..a12e8d18b4 100644 --- a/openpype/plugins/publish/extract_burnin.py +++ b/openpype/plugins/publish/extract_burnin.py @@ -49,7 +49,8 @@ class ExtractBurnin(publish.Extractor): "webpublisher", "aftereffects", "photoshop", - "flame" + "flame", + "houdini" # "resolve" ] @@ -78,9 +79,10 @@ class ExtractBurnin(publish.Extractor): self.log.warning("No profiles present for create burnin") return - # QUESTION what is this for and should we raise an exception? - if "representations" not in instance.data: - raise RuntimeError("Burnin needs already created mov to work on.") + if not instance.data.get("representations"): + self.log.info( + "Instance does not have filled representations. 
Skipping") + return self.main_process(instance) diff --git a/openpype/plugins/publish/extract_review.py b/openpype/plugins/publish/extract_review.py index 6b2fd32a2a..1062683319 100644 --- a/openpype/plugins/publish/extract_review.py +++ b/openpype/plugins/publish/extract_review.py @@ -44,6 +44,7 @@ class ExtractReview(pyblish.api.InstancePlugin): "nuke", "maya", "blender", + "houdini", "shell", "hiero", "premiere", diff --git a/openpype/plugins/publish/extract_thumbnail.py b/openpype/plugins/publish/extract_thumbnail.py index aa5497a99f..54b933a76d 100644 --- a/openpype/plugins/publish/extract_thumbnail.py +++ b/openpype/plugins/publish/extract_thumbnail.py @@ -19,9 +19,9 @@ class ExtractThumbnail(pyblish.api.InstancePlugin): order = pyblish.api.ExtractorOrder families = [ "imagesequence", "render", "render2d", "prerender", - "source", "clip", "take", "online" + "source", "clip", "take", "online", "image" ] - hosts = ["shell", "fusion", "resolve", "traypublisher"] + hosts = ["shell", "fusion", "resolve", "traypublisher", "substancepainter"] enabled = False # presetable attribute diff --git a/openpype/plugins/publish/integrate.py b/openpype/plugins/publish/integrate.py index 07131ec3ae..8e984a9e97 100644 --- a/openpype/plugins/publish/integrate.py +++ b/openpype/plugins/publish/integrate.py @@ -163,6 +163,11 @@ class IntegrateAsset(pyblish.api.InstancePlugin): "Instance is marked to be processed on farm. Skipping") return + # Instance is marked to not get integrated + if not instance.data.get("integrate", True): + self.log.info("Instance is marked to skip integrating. Skipping") + return + filtered_repres = self.filter_representations(instance) # Skip instance if there are not representations to integrate # all representations should not be integrated @@ -665,8 +670,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin): # - template_data (Dict[str, Any]): source data used to fill template # - to add required data to 'repre_context' not used for # formatting - # - anatomy_filled (Dict[str, Any]): filled anatomy of last file - # - to fill 'publishDir' on instance.data -> not ideal + path_template_obj = anatomy.templates_obj[template_name]["path"] # Treat template with 'orignalBasename' in special way if "{originalBasename}" in template: @@ -700,8 +704,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin): template_data["originalBasename"], _ = os.path.splitext( src_file_name) - anatomy_filled = anatomy.format(template_data) - dst = anatomy_filled[template_name]["path"] + dst = path_template_obj.format_strict(template_data) src = os.path.join(stagingdir, src_file_name) transfers.append((src, dst)) if repre_context is None: @@ -761,8 +764,9 @@ class IntegrateAsset(pyblish.api.InstancePlugin): template_data["udim"] = index else: template_data["frame"] = index - anatomy_filled = anatomy.format(template_data) - template_filled = anatomy_filled[template_name]["path"] + template_filled = path_template_obj.format_strict( + template_data + ) dst_filepaths.append(template_filled) if repre_context is None: self.log.debug( @@ -798,8 +802,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin): if is_udim: template_data["udim"] = repre["udim"][0] # Construct destination filepath from template - anatomy_filled = anatomy.format(template_data) - template_filled = anatomy_filled[template_name]["path"] + template_filled = path_template_obj.format_strict(template_data) repre_context = template_filled.used_values dst = os.path.normpath(template_filled) @@ -810,11 +813,9 @@ class 
IntegrateAsset(pyblish.api.InstancePlugin): # todo: Are we sure the assumption each representation # ends up in the same folder is valid? if not instance.data.get("publishDir"): - instance.data["publishDir"] = ( - anatomy_filled - [template_name] - ["folder"] - ) + template_obj = anatomy.templates_obj[template_name]["folder"] + template_filled = template_obj.format_strict(template_data) + instance.data["publishDir"] = template_filled for key in self.db_representation_context_keys: # Also add these values to the context even if not used by the diff --git a/openpype/plugins/publish/integrate_hero_version.py b/openpype/plugins/publish/integrate_hero_version.py index 80141e88fe..b71207c24f 100644 --- a/openpype/plugins/publish/integrate_hero_version.py +++ b/openpype/plugins/publish/integrate_hero_version.py @@ -291,6 +291,7 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin): )) try: src_to_dst_file_paths = [] + path_template_obj = anatomy.templates_obj[template_key]["path"] for repre_info in published_repres.values(): # Skip if new repre does not have published repre files @@ -303,9 +304,7 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin): anatomy_data.pop("version", None) # Get filled path to repre context - anatomy_filled = anatomy.format(anatomy_data) - template_filled = anatomy_filled[template_key]["path"] - + template_filled = path_template_obj.format_strict(anatomy_data) repre_data = { "path": str(template_filled), "template": hero_template @@ -343,8 +342,9 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin): # Get head and tail for collection frame_splitter = "_-_FRAME_SPLIT_-_" anatomy_data["frame"] = frame_splitter - _anatomy_filled = anatomy.format(anatomy_data) - _template_filled = _anatomy_filled[template_key]["path"] + _template_filled = path_template_obj.format_strict( + anatomy_data + ) head, tail = _template_filled.split(frame_splitter) padding = int( anatomy.templates[template_key]["frame_padding"] @@ -520,24 +520,24 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin): }) if "folder" in anatomy.templates[template_key]: - anatomy_filled = anatomy.format(template_data) - publish_folder = anatomy_filled[template_key]["folder"] + template_obj = anatomy.templates_obj[template_key]["folder"] + publish_folder = template_obj.format_strict(template_data) else: # This is for cases of Deprecated anatomy without `folder` # TODO remove when all clients have solved this issue - template_data.update({ - "frame": "FRAME_TEMP", - "representation": "TEMP" - }) - anatomy_filled = anatomy.format(template_data) - # solve deprecated situation when `folder` key is not underneath - # `publish` anatomy self.log.warning(( "Deprecation warning: Anatomy does not have set `folder`" " key underneath `publish` (in global of for project `{}`)." 
).format(anatomy.project_name)) + # solve deprecated situation when `folder` key is not underneath + # `publish` anatomy + template_data.update({ + "frame": "FRAME_TEMP", + "representation": "TEMP" + }) + template_obj = anatomy.templates_obj[template_key]["path"] + file_path = template_obj.format_strict(template_data) - file_path = anatomy_filled[template_key]["path"] # Directory publish_folder = os.path.dirname(file_path) diff --git a/openpype/plugins/publish/integrate_legacy.py b/openpype/plugins/publish/integrate_legacy.py index 3f1f6ad0c9..c67ce62bf6 100644 --- a/openpype/plugins/publish/integrate_legacy.py +++ b/openpype/plugins/publish/integrate_legacy.py @@ -480,8 +480,8 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): else: template_data["udim"] = src_padding_exp % i - anatomy_filled = anatomy.format(template_data) - template_filled = anatomy_filled[template_name]["path"] + template_obj = anatomy.templates_obj[template_name]["path"] + template_filled = template_obj.format_strict(template_data) if repre_context is None: repre_context = template_filled.used_values test_dest_files.append( @@ -587,8 +587,8 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): if repre.get("udim"): template_data["udim"] = repre["udim"][0] src = os.path.join(stagingdir, fname) - anatomy_filled = anatomy.format(template_data) - template_filled = anatomy_filled[template_name]["path"] + template_obj = anatomy.templates_obj[template_name]["path"] + template_filled = template_obj.format_strict(template_data) repre_context = template_filled.used_values dst = os.path.normpath(template_filled) @@ -600,9 +600,8 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin): if not instance.data.get("publishDir"): instance.data["publishDir"] = ( - anatomy_filled - [template_name] - ["folder"] + anatomy.templates_obj[template_name]["folder"] + .format_strict(template_data) ) if repre.get("udim"): repre_context["udim"] = repre.get("udim") # store list diff --git a/openpype/plugins/publish/integrate_thumbnail.py b/openpype/plugins/publish/integrate_thumbnail.py index 809a1782e0..16cc47d432 100644 --- a/openpype/plugins/publish/integrate_thumbnail.py +++ b/openpype/plugins/publish/integrate_thumbnail.py @@ -271,9 +271,9 @@ class IntegrateThumbnails(pyblish.api.ContextPlugin): "thumbnail_type": "thumbnail" }) - anatomy_filled = anatomy.format(template_data) - thumbnail_template = anatomy.templates["publish"]["thumbnail"] - template_filled = anatomy_filled["publish"]["thumbnail"] + template_obj = anatomy.templates_obj["publish"]["thumbnail"] + template_filled = template_obj.format_strict(template_data) + thumbnail_template = template_filled.template dst_full_path = os.path.normpath(str(template_filled)) self.log.debug("Copying file .. {} -> {}".format( diff --git a/openpype/plugins/publish/validate_sequence_frames.py b/openpype/plugins/publish/validate_sequence_frames.py index f03229da22..239008ee21 100644 --- a/openpype/plugins/publish/validate_sequence_frames.py +++ b/openpype/plugins/publish/validate_sequence_frames.py @@ -1,3 +1,7 @@ +import os +import re + +import clique import pyblish.api @@ -7,28 +11,56 @@ class ValidateSequenceFrames(pyblish.api.InstancePlugin): The files found in the folder are checked against the startFrame and endFrame of the instance. If the first or last file is not corresponding with the first or last frame it is flagged as invalid. 
+
+    The regular expression pattern used handles numbers in the file names
+    (e.g. "Main_beauty.v001.1001.exr", "Main_beauty_v001.1001.exr",
+    "Main_beauty.1001.1001.exr") but not numbers after the frame
+    (e.g. "Main_beauty.1001.v001.exr").
     """
 
     order = pyblish.api.ValidatorOrder
     label = "Validate Sequence Frames"
-    families = ["imagesequence"]
-    hosts = ["shell"]
+    families = ["imagesequence", "render"]
+    hosts = ["shell", "unreal"]
 
     def process(self, instance):
+        representations = instance.data.get("representations")
+        if not representations:
+            return
+
+        for repr in representations:
+            repr_files = repr["files"]
+            if isinstance(repr_files, str):
+                continue
-        collection = instance[0]
-        self.log.info(collection)
 
+            ext = repr.get("ext")
+            if not ext:
+                _, ext = os.path.splitext(repr_files[0])
+            elif not ext.startswith("."):
+                ext = ".{}".format(ext)
+            pattern = r"\D?(?P<index>(?P<padding>0*)\d+){}$".format(
+                re.escape(ext))
+            patterns = [pattern]
 
-        frames = list(collection.indexes)
+            collections, remainder = clique.assemble(
+                repr_files, minimum_items=1, patterns=patterns)
 
-        current_range = (frames[0], frames[-1])
-        required_range = (instance.data["frameStart"],
-                          instance.data["frameEnd"])
+            assert not remainder, "Must not have remainder"
+            assert len(collections) == 1, "Must detect single collection"
+            collection = collections[0]
+            frames = list(collection.indexes)
 
-        if current_range != required_range:
-            raise ValueError("Invalid frame range: {0} - "
-                             "expected: {1}".format(current_range,
-                                                    required_range))
+            if instance.data.get("slate"):
+                # Slate is not part of the frame range
+                frames = frames[1:]
 
-        missing = collection.holes().indexes
-        assert not missing, "Missing frames: %s" % (missing,)
+            current_range = (frames[0], frames[-1])
+
+            required_range = (instance.data["frameStart"],
+                              instance.data["frameEnd"])
+
+            if current_range != required_range:
+                raise ValueError(f"Invalid frame range: {current_range} - "
+                                 f"expected: {required_range}")
+
+            missing = collection.holes().indexes
+            assert not missing, "Missing frames: %s" % (missing,)
diff --git a/openpype/pype_commands.py b/openpype/pype_commands.py
index dc5b3d63c3..6a24cb0ebc 100644
--- a/openpype/pype_commands.py
+++ b/openpype/pype_commands.py
@@ -353,12 +353,12 @@ class PypeCommands:
         version_packer = VersionRepacker(directory)
         version_packer.process()
 
-    def pack_project(self, project_name, dirpath):
+    def pack_project(self, project_name, dirpath, database_only):
         from openpype.lib.project_backpack import pack_project
 
-        pack_project(project_name, dirpath)
+        pack_project(project_name, dirpath, database_only)
 
-    def unpack_project(self, zip_filepath, new_root):
+    def unpack_project(self, zip_filepath, new_root, database_only):
         from openpype.lib.project_backpack import unpack_project
 
-        unpack_project(zip_filepath, new_root)
+        unpack_project(zip_filepath, new_root, database_only)
diff --git a/openpype/resources/app_icons/substancepainter.png b/openpype/resources/app_icons/substancepainter.png
new file mode 100644
index 0000000000..dc46f25d74
Binary files /dev/null and b/openpype/resources/app_icons/substancepainter.png differ
diff --git a/openpype/settings/defaults/project_settings/aftereffects.json b/openpype/settings/defaults/project_settings/aftereffects.json
index 669e1db0b8..6128534344 100644
--- a/openpype/settings/defaults/project_settings/aftereffects.json
+++ b/openpype/settings/defaults/project_settings/aftereffects.json
@@ -13,10 +13,14 @@
         "RenderCreator": {
             "defaults": [
                 "Main"
-            ]
+            ],
+            "mark_for_review": true
         }
     },
     "publish": {
+
"CollectReview": { + "enabled": true + }, "ValidateSceneSettings": { "enabled": true, "optional": true, diff --git a/openpype/settings/defaults/project_settings/global.json b/openpype/settings/defaults/project_settings/global.json index 30e56300d1..50b62737d8 100644 --- a/openpype/settings/defaults/project_settings/global.json +++ b/openpype/settings/defaults/project_settings/global.json @@ -251,15 +251,16 @@ } }, { - "families": [], + "families": ["review"], "hosts": [ - "maya" + "maya", + "houdini" ], "task_types": [], "task_names": [], "subsets": [], "burnins": { - "maya_burnin": { + "focal_length_burnin": { "TOP_LEFT": "{yy}-{mm}-{dd}", "TOP_CENTERED": "{focalLength:.2f} mm", "TOP_RIGHT": "{anatomy[version]}", diff --git a/openpype/settings/defaults/project_settings/hiero.json b/openpype/settings/defaults/project_settings/hiero.json index 100c1f5b47..3e613aa1bf 100644 --- a/openpype/settings/defaults/project_settings/hiero.json +++ b/openpype/settings/defaults/project_settings/hiero.json @@ -21,7 +21,8 @@ "floatLut": "linear", "logLut": "Cineon", "viewerLut": "sRGB", - "thumbnailLut": "sRGB" + "thumbnailLut": "sRGB", + "monitorOutLut": "sRGB" }, "regexInputs": { "inputs": [ diff --git a/openpype/settings/defaults/project_settings/maya.json b/openpype/settings/defaults/project_settings/maya.json index 19d8667002..72b330ce7a 100644 --- a/openpype/settings/defaults/project_settings/maya.json +++ b/openpype/settings/defaults/project_settings/maya.json @@ -1,5 +1,414 @@ { "open_workfile_post_initialization": false, + "explicit_plugins_loading": { + "enabled": false, + "plugins_to_load": [ + { + "enabled": false, + "name": "AbcBullet" + }, + { + "enabled": true, + "name": "AbcExport" + }, + { + "enabled": true, + "name": "AbcImport" + }, + { + "enabled": false, + "name": "animImportExport" + }, + { + "enabled": false, + "name": "ArubaTessellator" + }, + { + "enabled": false, + "name": "ATFPlugin" + }, + { + "enabled": false, + "name": "atomImportExport" + }, + { + "enabled": false, + "name": "AutodeskPacketFile" + }, + { + "enabled": false, + "name": "autoLoader" + }, + { + "enabled": false, + "name": "bifmeshio" + }, + { + "enabled": false, + "name": "bifrostGraph" + }, + { + "enabled": false, + "name": "bifrostshellnode" + }, + { + "enabled": false, + "name": "bifrostvisplugin" + }, + { + "enabled": false, + "name": "blast2Cmd" + }, + { + "enabled": false, + "name": "bluePencil" + }, + { + "enabled": false, + "name": "Boss" + }, + { + "enabled": false, + "name": "bullet" + }, + { + "enabled": true, + "name": "cacheEvaluator" + }, + { + "enabled": false, + "name": "cgfxShader" + }, + { + "enabled": false, + "name": "cleanPerFaceAssignment" + }, + { + "enabled": false, + "name": "clearcoat" + }, + { + "enabled": false, + "name": "convertToComponentTags" + }, + { + "enabled": false, + "name": "curveWarp" + }, + { + "enabled": false, + "name": "ddsFloatReader" + }, + { + "enabled": true, + "name": "deformerEvaluator" + }, + { + "enabled": false, + "name": "dgProfiler" + }, + { + "enabled": false, + "name": "drawUfe" + }, + { + "enabled": false, + "name": "dx11Shader" + }, + { + "enabled": false, + "name": "fbxmaya" + }, + { + "enabled": false, + "name": "fltTranslator" + }, + { + "enabled": false, + "name": "freeze" + }, + { + "enabled": false, + "name": "Fur" + }, + { + "enabled": false, + "name": "gameFbxExporter" + }, + { + "enabled": false, + "name": "gameInputDevice" + }, + { + "enabled": false, + "name": "GamePipeline" + }, + { + "enabled": false, + "name": "gameVertexCount" + }, + { + 
"enabled": false, + "name": "geometryReport" + }, + { + "enabled": false, + "name": "geometryTools" + }, + { + "enabled": false, + "name": "glslShader" + }, + { + "enabled": true, + "name": "GPUBuiltInDeformer" + }, + { + "enabled": false, + "name": "gpuCache" + }, + { + "enabled": false, + "name": "hairPhysicalShader" + }, + { + "enabled": false, + "name": "ik2Bsolver" + }, + { + "enabled": false, + "name": "ikSpringSolver" + }, + { + "enabled": false, + "name": "invertShape" + }, + { + "enabled": false, + "name": "lges" + }, + { + "enabled": false, + "name": "lookdevKit" + }, + { + "enabled": false, + "name": "MASH" + }, + { + "enabled": false, + "name": "matrixNodes" + }, + { + "enabled": false, + "name": "mayaCharacterization" + }, + { + "enabled": false, + "name": "mayaHIK" + }, + { + "enabled": false, + "name": "MayaMuscle" + }, + { + "enabled": false, + "name": "mayaUsdPlugin" + }, + { + "enabled": false, + "name": "mayaVnnPlugin" + }, + { + "enabled": false, + "name": "melProfiler" + }, + { + "enabled": false, + "name": "meshReorder" + }, + { + "enabled": true, + "name": "modelingToolkit" + }, + { + "enabled": false, + "name": "mtoa" + }, + { + "enabled": false, + "name": "mtoh" + }, + { + "enabled": false, + "name": "nearestPointOnMesh" + }, + { + "enabled": true, + "name": "objExport" + }, + { + "enabled": false, + "name": "OneClick" + }, + { + "enabled": false, + "name": "OpenEXRLoader" + }, + { + "enabled": false, + "name": "pgYetiMaya" + }, + { + "enabled": false, + "name": "pgyetiVrayMaya" + }, + { + "enabled": false, + "name": "polyBoolean" + }, + { + "enabled": false, + "name": "poseInterpolator" + }, + { + "enabled": false, + "name": "quatNodes" + }, + { + "enabled": false, + "name": "randomizerDevice" + }, + { + "enabled": false, + "name": "redshift4maya" + }, + { + "enabled": true, + "name": "renderSetup" + }, + { + "enabled": false, + "name": "retargeterNodes" + }, + { + "enabled": false, + "name": "RokokoMotionLibrary" + }, + { + "enabled": false, + "name": "rotateHelper" + }, + { + "enabled": false, + "name": "sceneAssembly" + }, + { + "enabled": false, + "name": "shaderFXPlugin" + }, + { + "enabled": false, + "name": "shotCamera" + }, + { + "enabled": false, + "name": "snapTransform" + }, + { + "enabled": false, + "name": "stage" + }, + { + "enabled": true, + "name": "stereoCamera" + }, + { + "enabled": false, + "name": "stlTranslator" + }, + { + "enabled": false, + "name": "studioImport" + }, + { + "enabled": false, + "name": "Substance" + }, + { + "enabled": false, + "name": "substancelink" + }, + { + "enabled": false, + "name": "substancemaya" + }, + { + "enabled": false, + "name": "substanceworkflow" + }, + { + "enabled": false, + "name": "svgFileTranslator" + }, + { + "enabled": false, + "name": "sweep" + }, + { + "enabled": false, + "name": "testify" + }, + { + "enabled": false, + "name": "tiffFloatReader" + }, + { + "enabled": false, + "name": "timeSliderBookmark" + }, + { + "enabled": false, + "name": "Turtle" + }, + { + "enabled": false, + "name": "Type" + }, + { + "enabled": false, + "name": "udpDevice" + }, + { + "enabled": false, + "name": "ufeSupport" + }, + { + "enabled": false, + "name": "Unfold3D" + }, + { + "enabled": false, + "name": "VectorRender" + }, + { + "enabled": false, + "name": "vrayformaya" + }, + { + "enabled": false, + "name": "vrayvolumegrid" + }, + { + "enabled": false, + "name": "xgenToolkit" + }, + { + "enabled": false, + "name": "xgenVray" + } + ] + }, "imageio": { "ocio_config": { "enabled": false, @@ -145,7 +554,7 @@ 
"publish_mip_map": true }, "CreateAnimation": { - "enabled": true, + "enabled": false, "write_color_sets": false, "write_face_sets": false, "include_parent_hierarchy": false, @@ -911,7 +1320,8 @@ "displayFilmOrigin": false, "overscan": 1.0 } - } + }, + "profiles": [] }, "ExtractMayaSceneRaw": { "enabled": true, @@ -1047,6 +1457,11 @@ 125, 255 ] + }, + "reference_loader": { + "namespace": "{asset_name}_{subset}_##_", + "group_name": "_GRP", + "display_handle": true } }, "workfile_build": { @@ -1140,6 +1555,10 @@ } ] }, + "include_handles": { + "include_handles_default": false, + "per_task_type": [] + }, "templated_workfile_build": { "profiles": [] }, diff --git a/openpype/settings/defaults/project_settings/nuke.json b/openpype/settings/defaults/project_settings/nuke.json index c249955dc8..85dee73176 100644 --- a/openpype/settings/defaults/project_settings/nuke.json +++ b/openpype/settings/defaults/project_settings/nuke.json @@ -363,6 +363,11 @@ "optional": true, "active": true }, + "ValidateBackdrop": { + "enabled": true, + "optional": true, + "active": true + }, "ValidateScript": { "enabled": true, "optional": true, @@ -537,45 +542,10 @@ "create_first_version": false, "custom_templates": [], "builder_on_start": false, - "profiles": [ - { - "task_types": [], - "tasks": [], - "current_context": [ - { - "subset_name_filters": [], - "families": [ - "render", - "plate" - ], - "repre_names": [ - "exr", - "dpx", - "mov", - "mp4", - "h264" - ], - "loaders": [ - "LoadClip" - ] - } - ], - "linked_assets": [] - } - ] + "profiles": [] }, "templated_workfile_build": { - "profiles": [ - { - "task_types": [ - "Compositing" - ], - "task_names": [], - "path": "{project[name]}/templates/comp.nk", - "keep_placeholder": true, - "create_first_version": true - } - ] + "profiles": [] }, "filters": {} } diff --git a/openpype/settings/defaults/project_settings/photoshop.json b/openpype/settings/defaults/project_settings/photoshop.json index bcf21f55dd..2454691958 100644 --- a/openpype/settings/defaults/project_settings/photoshop.json +++ b/openpype/settings/defaults/project_settings/photoshop.json @@ -10,23 +10,40 @@ } }, "create": { - "CreateImage": { - "defaults": [ + "ImageCreator": { + "enabled": true, + "active_on_create": true, + "mark_for_review": false, + "default_variants": [ "Main" ] + }, + "AutoImageCreator": { + "enabled": false, + "active_on_create": true, + "mark_for_review": false, + "default_variant": "" + }, + "ReviewCreator": { + "enabled": true, + "active_on_create": true, + "default_variant": "" + }, + "WorkfileCreator": { + "enabled": true, + "active_on_create": true, + "default_variant": "Main" } }, "publish": { "CollectColorCodedInstances": { + "enabled": true, "create_flatten_image": "no", "flatten_subset_template": "", "color_code_mapping": [] }, - "CollectInstances": { - "flatten_subset_template": "" - }, "CollectReview": { - "publish": true + "enabled": true }, "CollectVersion": { "enabled": false diff --git a/openpype/settings/defaults/project_settings/substancepainter.json b/openpype/settings/defaults/project_settings/substancepainter.json new file mode 100644 index 0000000000..60929e85fd --- /dev/null +++ b/openpype/settings/defaults/project_settings/substancepainter.json @@ -0,0 +1,13 @@ +{ + "imageio": { + "ocio_config": { + "enabled": true, + "filepath": [] + }, + "file_rules": { + "enabled": true, + "rules": {} + } + }, + "shelves": {} +} diff --git a/openpype/settings/defaults/project_settings/traypublisher.json b/openpype/settings/defaults/project_settings/traypublisher.json 
index fdea4aeaba..1b4253a1f8 100644 --- a/openpype/settings/defaults/project_settings/traypublisher.json +++ b/openpype/settings/defaults/project_settings/traypublisher.json @@ -303,16 +303,18 @@ ] } }, - "BatchMovieCreator": { - "default_variants": [ - "Main" - ], - "default_tasks": [ - "Compositing" - ], - "extensions": [ - ".mov" - ] + "create": { + "BatchMovieCreator": { + "default_variants": [ + "Main" + ], + "default_tasks": [ + "Compositing" + ], + "extensions": [ + ".mov" + ] + } }, "publish": { "ValidateFrameRange": { diff --git a/openpype/settings/defaults/project_settings/unreal.json b/openpype/settings/defaults/project_settings/unreal.json index 75cee11bd9..737a17d289 100644 --- a/openpype/settings/defaults/project_settings/unreal.json +++ b/openpype/settings/defaults/project_settings/unreal.json @@ -11,6 +11,9 @@ }, "level_sequences_for_layouts": false, "delete_unmatched_assets": false, + "render_config_path": "", + "preroll_frames": 0, + "render_format": "png", "project_setup": { "dev_mode": true } diff --git a/openpype/settings/defaults/system_settings/applications.json b/openpype/settings/defaults/system_settings/applications.json index eb3a88ce66..b492bb9321 100644 --- a/openpype/settings/defaults/system_settings/applications.json +++ b/openpype/settings/defaults/system_settings/applications.json @@ -119,9 +119,7 @@ "label": "3ds max", "icon": "{}/app_icons/3dsmax.png", "host_name": "max", - "environment": { - "ADSK_3DSMAX_STARTUPSCRIPTS_ADDON_DIR": "{OPENPYPE_ROOT}\\openpype\\hosts\\max\\startup" - }, + "environment": {}, "variants": { "2023": { "use_python_2": false, @@ -133,7 +131,7 @@ "linux": [] }, "arguments": { - "windows": ["-U MAXScript {OPENPYPE_ROOT}\\openpype\\hosts\\max\\startup\\startup.ms"], + "windows": [], "darwin": [], "linux": [] }, @@ -361,9 +359,15 @@ ] }, "arguments": { - "windows": ["--nukeassist"], - "darwin": ["--nukeassist"], - "linux": ["--nukeassist"] + "windows": [ + "--nukeassist" + ], + "darwin": [ + "--nukeassist" + ], + "linux": [ + "--nukeassist" + ] }, "environment": {} }, @@ -379,9 +383,15 @@ ] }, "arguments": { - "windows": ["--nukeassist"], - "darwin": ["--nukeassist"], - "linux": ["--nukeassist"] + "windows": [ + "--nukeassist" + ], + "darwin": [ + "--nukeassist" + ], + "linux": [ + "--nukeassist" + ] }, "environment": {} }, @@ -397,9 +407,15 @@ ] }, "arguments": { - "windows": ["--nukeassist"], - "darwin": ["--nukeassist"], - "linux": ["--nukeassist"] + "windows": [ + "--nukeassist" + ], + "darwin": [ + "--nukeassist" + ], + "linux": [ + "--nukeassist" + ] }, "environment": {} }, @@ -415,9 +431,15 @@ ] }, "arguments": { - "windows": ["--nukeassist"], - "darwin": ["--nukeassist"], - "linux": ["--nukeassist"] + "windows": [ + "--nukeassist" + ], + "darwin": [ + "--nukeassist" + ], + "linux": [ + "--nukeassist" + ] }, "environment": {} }, @@ -433,9 +455,15 @@ ] }, "arguments": { - "windows": ["--nukeassist"], - "darwin": ["--nukeassist"], - "linux": ["--nukeassist"] + "windows": [ + "--nukeassist" + ], + "darwin": [ + "--nukeassist" + ], + "linux": [ + "--nukeassist" + ] }, "environment": {} }, @@ -449,9 +477,15 @@ "linux": [] }, "arguments": { - "windows": ["--nukeassist"], - "darwin": ["--nukeassist"], - "linux": ["--nukeassist"] + "windows": [ + "--nukeassist" + ], + "darwin": [ + "--nukeassist" + ], + "linux": [ + "--nukeassist" + ] }, "environment": {} }, @@ -1445,26 +1479,77 @@ } } }, + "substancepainter": { + "enabled": true, + "label": "Substance Painter", + "icon": "app_icons/substancepainter.png", + "host_name": 
"substancepainter", + "environment": {}, + "variants": { + "8-2-0": { + "executables": { + "windows": [ + "C:\\Program Files\\Adobe\\Adobe Substance 3D Painter\\Adobe Substance 3D Painter.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": {} + }, + "__dynamic_keys_labels__": { + "8-2-0": "8.2.0" + } + } + }, "unreal": { "enabled": true, "label": "Unreal Editor", "icon": "{}/app_icons/ue4.png", "host_name": "unreal", - "environment": {}, + "environment": { + "UE_PYTHONPATH": "{PYTHONPATH}" + }, "variants": { - "4-27": { - "use_python_2": false, - "environment": {} - }, "5-0": { "use_python_2": false, - "environment": { - "UE_PYTHONPATH": "{PYTHONPATH}" - } + "executables": { + "windows": [ + "C:\\Program Files\\Epic Games\\UE_5.0\\Engine\\Binaries\\Win64\\UnrealEditor.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": {} + }, + "5-1": { + "use_python_2": false, + "executables": { + "windows": [ + "C:\\Program Files\\Epic Games\\UE_5.1\\Engine\\Binaries\\Win64\\UnrealEditor.exe" + ], + "darwin": [], + "linux": [] + }, + "arguments": { + "windows": [], + "darwin": [], + "linux": [] + }, + "environment": {} }, "__dynamic_keys_labels__": { - "4-27": "4.27", - "5-0": "5.0" + "5-1": "Unreal 5.1", + "5-0": "Unreal 5.0" } } }, diff --git a/openpype/settings/entities/color_entity.py b/openpype/settings/entities/color_entity.py index bdaab6f583..f838a6b0ad 100644 --- a/openpype/settings/entities/color_entity.py +++ b/openpype/settings/entities/color_entity.py @@ -11,8 +11,10 @@ class ColorEntity(InputEntity): def _item_initialization(self): self.valid_value_types = (list, ) - self.value_on_not_set = [0, 0, 0, 255] self.use_alpha = self.schema_data.get("use_alpha", True) + self.value_on_not_set = self.convert_to_valid_type( + self.schema_data.get("default", [0, 0, 0, 255]) + ) def set_override_state(self, *args, **kwargs): super(ColorEntity, self).set_override_state(*args, **kwargs) diff --git a/openpype/settings/entities/enum_entity.py b/openpype/settings/entities/enum_entity.py index c0c103ea10..de3bd353eb 100644 --- a/openpype/settings/entities/enum_entity.py +++ b/openpype/settings/entities/enum_entity.py @@ -168,6 +168,7 @@ class HostsEnumEntity(BaseEnumEntity): "tvpaint", "unreal", "standalonepublisher", + "substancepainter", "traypublisher", "webpublisher" ] diff --git a/openpype/settings/entities/input_entities.py b/openpype/settings/entities/input_entities.py index 89f12afd9b..2fe2033383 100644 --- a/openpype/settings/entities/input_entities.py +++ b/openpype/settings/entities/input_entities.py @@ -442,12 +442,16 @@ class TextEntity(InputEntity): def _item_initialization(self): self.valid_value_types = (STRING_TYPE, ) - self.value_on_not_set = "" + self.value_on_not_set = self.convert_to_valid_type( + self.schema_data.get("default", "") + ) # GUI attributes self.multiline = self.schema_data.get("multiline", False) self.placeholder_text = self.schema_data.get("placeholder") self.value_hints = self.schema_data.get("value_hints") or [] + self.minimum_lines_count = ( + self.schema_data.get("minimum_lines_count") or 0) def schema_validations(self): if self.multiline and self.value_hints: diff --git a/openpype/settings/entities/schemas/README.md b/openpype/settings/entities/schemas/README.md index cff614a4bb..c333628b25 100644 --- a/openpype/settings/entities/schemas/README.md +++ b/openpype/settings/entities/schemas/README.md @@ -380,6 
+380,7 @@ How output of the schema could look like on save: - simple text input - key `"multiline"` allows to enter multiple lines of text (Default: `False`) - key `"placeholder"` allows to show text inside input when is empty (Default: `None`) + - key `"minimum_lines_count"` allows to define minimum size hint for UI. Can be 0-n lines. ``` { diff --git a/openpype/settings/entities/schemas/projects_schema/schema_main.json b/openpype/settings/entities/schemas/projects_schema/schema_main.json index 8c1d8ccbdd..4315987a33 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_main.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_main.json @@ -122,6 +122,10 @@ "type": "schema", "name": "schema_project_photoshop" }, + { + "type": "schema", + "name": "schema_project_substancepainter" + }, { "type": "schema", "name": "schema_project_harmony" diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json b/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json index 8dc83f5506..313e0ce8ea 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_aftereffects.json @@ -40,7 +40,13 @@ "label": "Default Variants", "object_type": "text", "docstring": "Fill default variant(s) (like 'Main' or 'Default') used in subset name creation." - } + }, + { + "type": "boolean", + "key": "mark_for_review", + "label": "Review", + "default": true + } ] } ] @@ -51,6 +57,21 @@ "key": "publish", "label": "Publish plugins", "children": [ + { + "type": "dict", + "collapsible": true, + "key": "CollectReview", + "label": "Collect Review", + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled", + "default": true + } + ] + }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_hiero.json b/openpype/settings/entities/schemas/projects_schema/schema_project_hiero.json index f44f92438c..ea05f4ab9b 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_hiero.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_hiero.json @@ -42,10 +42,19 @@ "nuke-default": "nuke-default" }, { - "aces_1.0.3": "aces_1.0.3" + "aces_1.0.3": "aces_1.0.3 (12)" }, { - "aces_1.1": "aces_1.1" + "aces_1.1": "aces_1.1 (12, 13)" + }, + { + "aces_1.2": "aces_1.2 (13, 14)" + }, + { + "studio-config-v1.0.0_aces-v1.3_ocio-v2.1": "studio-config-v1.0.0_aces-v1.3_ocio-v2.1 (14)" + }, + { + "cg-config-v1.0.0_aces-v1.3_ocio-v2.1": "cg-config-v1.0.0_aces-v1.3_ocio-v2.1 (14)" }, { "custom": "custom" @@ -93,6 +102,11 @@ "type": "text", "key": "thumbnailLut", "label": "Thumbnails" + }, + { + "type": "text", + "key": "monitorOutLut", + "label": "Monitor" } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_maya.json b/openpype/settings/entities/schemas/projects_schema/schema_project_maya.json index 47dfb37024..b27d795806 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_maya.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_maya.json @@ -10,6 +10,41 @@ "key": "open_workfile_post_initialization", "label": "Open Workfile Post Initialization" }, + { + "type": "dict", + "key": "explicit_plugins_loading", + "label": "Explicit Plugins Loading", + "collapsible": true, + "is_group": true, + "checkbox_key": "enabled", + 
"children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "list", + "key": "plugins_to_load", + "label": "Plugins To Load", + "object_type": { + "type": "dict", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "text", + "key": "name", + "label": "Name" + } + ] + } + } + ] + }, { "key": "imageio", "type": "dict", @@ -151,6 +186,40 @@ } ] }, + { + "type": "dict", + "key": "include_handles", + "collapsible": true, + "label": "Include/Exclude Handles in default playback & render range", + "children": [ + { + "key": "include_handles_default", + "label": "Include handles by default", + "type": "boolean" + }, + { + "type": "list", + "key": "per_task_type", + "label": "Include/exclude handles by task type", + "use_label_wrap": true, + "object_type": { + "type": "dict", + "children": [ + { + "type": "task-types-enum", + "key": "task_type", + "label": "Task types" + }, + { + "type": "boolean", + "key": "include_handles", + "label": "Include handles" + } + ] + } + } + ] + }, { "type": "schema", "name": "schema_scriptsmenu" diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json b/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json index 0071e632af..f6c46aba8b 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_photoshop.json @@ -31,16 +31,126 @@ { "type": "dict", "collapsible": true, - "key": "CreateImage", + "key": "ImageCreator", "label": "Create Image", + "checkbox_key": "enabled", "children": [ + { + "type": "label", + "label": "Manually create instance from layer or group of layers. \n Separate review could be created for this image to be sent to Asset Management System." + }, + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "active_on_create", + "label": "Active by default" + }, + { + "type": "boolean", + "key": "mark_for_review", + "label": "Review by default" + }, { "type": "list", - "key": "defaults", - "label": "Default Subsets", + "key": "default_variants", + "label": "Default Variants", "object_type": "text" } ] + }, + { + "type": "dict", + "collapsible": true, + "key": "AutoImageCreator", + "label": "Create Flatten Image", + "checkbox_key": "enabled", + "children": [ + { + "type": "label", + "label": "Auto create image for all visible layers, used for simplified processing. \n Separate review could be created for this image to be sent to Asset Management System." + }, + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "active_on_create", + "label": "Active by default" + }, + { + "type": "boolean", + "key": "mark_for_review", + "label": "Review by default" + }, + { + "type": "text", + "key": "default_variant", + "label": "Default variant" + } + ] + }, + { + "type": "dict", + "collapsible": true, + "key": "ReviewCreator", + "label": "Create Review", + "checkbox_key": "enabled", + "children": [ + { + "type": "label", + "label": "Auto create review instance containing all published image instances or visible layers if no image instance." 
+ }, + { + "type": "boolean", + "key": "enabled", + "label": "Enabled", + "default": true + }, + { + "type": "boolean", + "key": "active_on_create", + "label": "Active by default" + }, + { + "type": "text", + "key": "default_variant", + "label": "Default variant" + } + ] + }, + { + "type": "dict", + "collapsible": true, + "key": "WorkfileCreator", + "label": "Create Workfile", + "checkbox_key": "enabled", + "children": [ + { + "type": "label", + "label": "Auto create workfile instance" + }, + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "active_on_create", + "label": "Active by default" + }, + { + "type": "text", + "key": "default_variant", + "label": "Default variant" + } + ] } ] }, @@ -56,11 +166,18 @@ "is_group": true, "key": "CollectColorCodedInstances", "label": "Collect Color Coded Instances", + "checkbox_key": "enabled", "children": [ { "type": "label", "label": "Set color for publishable layers, set its resulting family and template for subset name. \nCan create flatten image from published instances.(Applicable only for remote publishing!)" }, + { + "type": "boolean", + "key": "enabled", + "label": "Enabled", + "default": true + }, { "key": "create_flatten_image", "label": "Create flatten image", @@ -131,40 +248,26 @@ } ] }, - { - "type": "dict", - "collapsible": true, - "key": "CollectInstances", - "label": "Collect Instances", - "children": [ - { - "type": "label", - "label": "Name for flatten image created if no image instance present" - }, - { - "type": "text", - "key": "flatten_subset_template", - "label": "Subset template for flatten image" - } - ] - }, { "type": "dict", "collapsible": true, "key": "CollectReview", "label": "Collect Review", + "checkbox_key": "enabled", "children": [ { "type": "boolean", - "key": "publish", - "label": "Active" - } - ] + "key": "enabled", + "label": "Enabled", + "default": true + } + ] }, { "type": "dict", "key": "CollectVersion", "label": "Collect Version", + "checkbox_key": "enabled", "children": [ { "type": "label", diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_substancepainter.json b/openpype/settings/entities/schemas/projects_schema/schema_project_substancepainter.json new file mode 100644 index 0000000000..79a39b8e6e --- /dev/null +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_substancepainter.json @@ -0,0 +1,35 @@ +{ + "type": "dict", + "collapsible": true, + "key": "substancepainter", + "label": "Substance Painter", + "is_file": true, + "children": [ + { + "key": "imageio", + "type": "dict", + "label": "Color Management (ImageIO)", + "is_group": true, + "children": [ + { + "type": "schema", + "name": "schema_imageio_config" + }, + { + "type": "schema", + "name": "schema_imageio_file_rules" + } + + ] + }, + { + "type": "dict-modifiable", + "key": "shelves", + "label": "Shelves", + "use_label_wrap": true, + "object_type": { + "type": "text" + } + } + ] +} diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_traypublisher.json b/openpype/settings/entities/schemas/projects_schema/schema_project_traypublisher.json index 2ef1d2a414..f05f3433b0 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_traypublisher.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_traypublisher.json @@ -26,7 +26,7 @@ "type": "list", "collapsible": true, "key": "simple_creators", - "label": "Creator plugins", + "label": "Simple Create Plugins", "use_label_wrap": true, 
"collapsible_key": true, "object_type": { @@ -292,40 +292,48 @@ ] }, { + "key": "create", + "label": "Create plugins", "type": "dict", "collapsible": true, - "key": "BatchMovieCreator", - "label": "Batch Movie Creator", - "collapsible_key": true, "children": [ { - "type": "label", - "label": "Allows to publish multiple video files in one go.
Name of matching asset is parsed from file names ('asset.mov', 'asset_v001.mov', 'my_asset_to_publish.mov')" - }, - { - "type": "list", - "key": "default_variants", - "label": "Default variants", - "object_type": { - "type": "text" - } - }, - { - "type": "list", - "key": "default_tasks", - "label": "Default tasks", - "object_type": { - "type": "text" - } - }, - { - "type": "list", - "key": "extensions", - "label": "Extensions", - "use_label_wrap": true, + "type": "dict", + "collapsible": true, + "key": "BatchMovieCreator", + "label": "Batch Movie Creator", "collapsible_key": true, - "collapsed": false, - "object_type": "text" + "children": [ + { + "type": "label", + "label": "Allows to publish multiple video files in one go.
Name of matching asset is parsed from file names ('asset.mov', 'asset_v001.mov', 'my_asset_to_publish.mov')" + }, + { + "type": "list", + "key": "default_variants", + "label": "Default variants", + "object_type": { + "type": "text" + } + }, + { + "type": "list", + "key": "default_tasks", + "label": "Default tasks", + "object_type": { + "type": "text" + } + }, + { + "type": "list", + "key": "extensions", + "label": "Extensions", + "use_label_wrap": true, + "collapsible_key": true, + "collapsed": false, + "object_type": "text" + } + ] } ] }, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_unreal.json b/openpype/settings/entities/schemas/projects_schema/schema_project_unreal.json index 8988dd2ff0..35eb0b24f1 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_unreal.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_unreal.json @@ -32,6 +32,28 @@ "key": "delete_unmatched_assets", "label": "Delete assets that are not matched" }, + { + "type": "text", + "key": "render_config_path", + "label": "Render Config Path" + }, + { + "type": "number", + "key": "preroll_frames", + "label": "Pre-roll frames" + }, + { + "key": "render_format", + "label": "Render format", + "type": "enum", + "multiselection": false, + "enum_items": [ + {"png": "PNG"}, + {"exr": "EXR"}, + {"jpg": "JPG"}, + {"bmp": "BMP"} + ] + }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_capture.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_capture.json index 1d78f5a03f..d90527ac8c 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_capture.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_capture.json @@ -7,6 +7,8 @@ { "type": "dict", "key": "capture_preset", + "label": "DEPRECATED! 
Please use \"Profiles\" below.", + "collapsed": false, "children": [ { "type": "dict", @@ -176,7 +178,7 @@ { "all": "All Lights"}, { "selected": "Selected Lights"}, { "flat": "Flat Lighting"}, - { "nolights": "No Lights"} + { "none": "No Lights"} ] }, { @@ -626,6 +628,747 @@ ] } ] + }, + { + "type": "list", + "key": "profiles", + "label": "Profiles", + "object_type": { + "type": "dict", + "children": [ + { + "key": "task_types", + "label": "Task types", + "type": "task-types-enum" + }, + { + "key": "task_names", + "label": "Task names", + "type": "list", + "object_type": "text" + }, + { + "key": "subsets", + "label": "Subset names", + "type": "list", + "object_type": "text" + }, + { + "type": "splitter" + }, + { + "type": "dict", + "key": "capture_preset", + "children": [ + { + "type": "dict", + "key": "Codec", + "children": [ + { + "type": "label", + "label": "Codec" + }, + { + "type": "text", + "key": "compression", + "label": "Encoding", + "default": "png" + }, + { + "type": "text", + "key": "format", + "label": "Format", + "default": "image" + }, + { + "type": "number", + "key": "quality", + "label": "Quality", + "decimal": 0, + "minimum": 0, + "maximum": 100, + "default": 95 + }, + { + "type": "splitter" + } + ] + }, + { + "type": "dict", + "key": "Display Options", + "children": [ + { + "type": "label", + "label": "Display Options" + }, + { + "type": "boolean", + "key": "override_display", + "label": "Override display options", + "default": true + }, + { + "type": "color", + "key": "background", + "label": "Background Color: ", + "default": [125, 125, 125, 255] + }, + { + "type": "boolean", + "key": "displayGradient", + "label": "Display background gradient", + "default": true + }, + { + "type": "color", + "key": "backgroundBottom", + "label": "Background Bottom: ", + "default": [125, 125, 125, 255] + }, + { + "type": "color", + "key": "backgroundTop", + "label": "Background Top: ", + "default": [125, 125, 125, 255] + } + ] + }, + { + "type": "splitter" + }, + { + "type": "dict", + "key": "Generic", + "children": [ + { + "type": "label", + "label": "Generic" + }, + { + "type": "boolean", + "key": "isolate_view", + "label": " Isolate view", + "default": true + }, + { + "type": "boolean", + "key": "off_screen", + "label": " Off Screen", + "default": true + }, + { + "type": "boolean", + "key": "pan_zoom", + "label": " 2D Pan/Zoom", + "default": false + } + ] + }, + { + "type": "splitter" + }, + { + "type": "dict", + "key": "Renderer", + "children": [ + { + "type": "label", + "label": "Renderer" + }, + { + "type": "enum", + "key": "rendererName", + "label": "Renderer name", + "enum_items": [ + { "vp2Renderer": "Viewport 2.0" } + ], + "default": "vp2Renderer" + } + ] + }, + { + "type": "dict", + "key": "Resolution", + "children": [ + { + "type": "splitter" + }, + { + "type": "label", + "label": "Resolution" + }, + { + "type": "number", + "key": "width", + "label": " Width", + "decimal": 0, + "minimum": 0, + "maximum": 99999, + "default": 0 + }, + { + "type": "number", + "key": "height", + "label": "Height", + "decimal": 0, + "minimum": 0, + "maximum": 99999, + "default": 0 + } + ] + }, + { + "type": "splitter" + }, + { + "type": "dict", + "collapsible": true, + "key": "Viewport Options", + "label": "Viewport Options", + "children": [ + { + "type": "boolean", + "key": "override_viewport_options", + "label": "Override Viewport Options", + "default": true + }, + { + "type": "enum", + "key": "displayLights", + "label": "Display Lights", + "enum_items": [ + { "default": "Default Lighting"}, + 
{ "all": "All Lights"}, + { "selected": "Selected Lights"}, + { "flat": "Flat Lighting"}, + { "nolights": "No Lights"} + ], + "default": "default" + }, + { + "type": "boolean", + "key": "displayTextures", + "label": "Display Textures", + "default": true + }, + { + "type": "number", + "key": "textureMaxResolution", + "label": "Texture Clamp Resolution", + "decimal": 0, + "default": 1024 + }, + { + "type": "splitter" + }, + { + "type": "label", + "label": "Display" + }, + { + "type":"boolean", + "key": "renderDepthOfField", + "label": "Depth of Field", + "default": true + }, + { + "type": "splitter" + }, + { + "type": "boolean", + "key": "shadows", + "label": "Display Shadows", + "default": true + }, + { + "type": "boolean", + "key": "twoSidedLighting", + "label": "Two Sided Lighting", + "default": true + }, + { + "type": "splitter" + }, + { + "type": "boolean", + "key": "lineAAEnable", + "label": "Enable Anti-Aliasing", + "default": true + }, + { + "type": "number", + "key": "multiSample", + "label": "Anti Aliasing Samples", + "decimal": 0, + "minimum": 0, + "maximum": 32, + "default": 8 + }, + { + "type": "splitter" + }, + { + "type": "boolean", + "key": "useDefaultMaterial", + "label": "Use Default Material", + "default": false + }, + { + "type": "boolean", + "key": "wireframeOnShaded", + "label": "Wireframe On Shaded", + "default": false + }, + { + "type": "boolean", + "key": "xray", + "label": "X-Ray", + "default": false + }, + { + "type": "boolean", + "key": "jointXray", + "label": "X-Ray Joints", + "default": false + }, + { + "type": "boolean", + "key": "backfaceCulling", + "label": "Backface Culling", + "default": false + }, + { + "type": "boolean", + "key": "ssaoEnable", + "label": "Screen Space Ambient Occlusion", + "default": false + }, + { + "type": "number", + "key": "ssaoAmount", + "label": "SSAO Amount", + "default": 1 + }, + { + "type": "number", + "key": "ssaoRadius", + "label": "SSAO Radius", + "default": 16 + }, + { + "type": "number", + "key": "ssaoFilterRadius", + "label": "SSAO Filter Radius", + "decimal": 0, + "minimum": 1, + "maximum": 32, + "default": 16 + }, + { + "type": "number", + "key": "ssaoSamples", + "label": "SSAO Samples", + "decimal": 0, + "minimum": 8, + "maximum": 32, + "default": 16 + }, + { + "type": "splitter" + }, + { + "type": "boolean", + "key": "fogging", + "label": "Enable Hardware Fog", + "default": false + }, + { + "type": "enum", + "key": "hwFogFalloff", + "label": "Hardware Falloff", + "enum_items": [ + { "0": "Linear"}, + { "1": "Exponential"}, + { "2": "Exponential Squared"} + ], + "default": "0" + }, + { + "type": "number", + "key": "hwFogDensity", + "label": "Fog Density", + "decimal": 2, + "minimum": 0, + "maximum": 1, + "default": 0 + }, + { + "type": "number", + "key": "hwFogStart", + "label": "Fog Start", + "default": 0 + }, + { + "type": "number", + "key": "hwFogEnd", + "label": "Fog End", + "default": 100 + }, + { + "type": "number", + "key": "hwFogAlpha", + "label": "Fog Alpha", + "default": 0 + }, + { + "type": "number", + "key": "hwFogColorR", + "label": "Fog Color R", + "decimal": 2, + "minimum": 0, + "maximum": 1, + "default": 1 + }, + { + "type": "number", + "key": "hwFogColorG", + "label": "Fog Color G", + "decimal": 2, + "minimum": 0, + "maximum": 1, + "default": 1 + }, + { + "type": "number", + "key": "hwFogColorB", + "label": "Fog Color B", + "decimal": 2, + "minimum": 0, + "maximum": 1, + "default": 1 + }, + { + "type": "splitter" + }, + { + "type": "boolean", + "key": "motionBlurEnable", + "label": "Enable Motion Blur", 
+ "default": false + }, + { + "type": "number", + "key": "motionBlurSampleCount", + "label": "Motion Blur Sample Count", + "decimal": 0, + "minimum": 8, + "maximum": 32, + "default": 8 + }, + { + "type": "number", + "key": "motionBlurShutterOpenFraction", + "label": "Shutter Open Fraction", + "decimal": 3, + "minimum": 0.01, + "maximum": 32, + "default": 0.2 + }, + { + "type": "splitter" + }, + { + "type": "label", + "label": "Show" + }, + { + "type": "boolean", + "key": "cameras", + "label": "Cameras", + "default": false + }, + { + "type": "boolean", + "key": "clipGhosts", + "label": "Clip Ghosts", + "default": false + }, + { + "type": "boolean", + "key": "deformers", + "label": "Deformers", + "default": false + }, + { + "type": "boolean", + "key": "dimensions", + "label": "Dimensions", + "default": false + }, + { + "type": "boolean", + "key": "dynamicConstraints", + "label": "Dynamic Constraints", + "default": false + }, + { + "type": "boolean", + "key": "dynamics", + "label": "Dynamics", + "default": false + }, + { + "type": "boolean", + "key": "fluids", + "label": "Fluids", + "default": false + }, + { + "type": "boolean", + "key": "follicles", + "label": "Follicles", + "default": false + }, + { + "type": "boolean", + "key": "greasePencils", + "label": "Grease Pencil", + "default": false + }, + { + "type": "boolean", + "key": "grid", + "label": "Grid", + "default": false + }, + { + "type": "boolean", + "key": "hairSystems", + "label": "Hair Systems", + "default": true + }, + { + "type": "boolean", + "key": "handles", + "label": "Handles", + "default": false + }, + { + "type": "boolean", + "key": "headsUpDisplay", + "label": "HUD", + "default": false + }, + { + "type": "boolean", + "key": "ikHandles", + "label": "IK Handles", + "default": false + }, + { + "type": "boolean", + "key": "imagePlane", + "label": "Image Planes", + "default": true + }, + { + "type": "boolean", + "key": "joints", + "label": "Joints", + "default": false + }, + { + "type": "boolean", + "key": "lights", + "label": "Lights", + "default": false + }, + { + "type": "boolean", + "key": "locators", + "label": "Locators", + "default": false + }, + { + "type": "boolean", + "key": "manipulators", + "label": "Manipulators", + "default": false + }, + { + "type": "boolean", + "key": "motionTrails", + "label": "Motion Trails", + "default": false + }, + { + "type": "boolean", + "key": "nCloths", + "label": "nCloths", + "default": false + }, + { + "type": "boolean", + "key": "nParticles", + "label": "nParticles", + "default": false + }, + { + "type": "boolean", + "key": "nRigids", + "label": "nRigids", + "default": false + }, + { + "type": "boolean", + "key": "controlVertices", + "label": "NURBS CVs", + "default": false + }, + { + "type": "boolean", + "key": "nurbsCurves", + "label": "NURBS Curves", + "default": false + }, + { + "type": "boolean", + "key": "hulls", + "label": "NURBS Hulls", + "default": false + }, + { + "type": "boolean", + "key": "nurbsSurfaces", + "label": "NURBS Surfaces", + "default": false + }, + { + "type": "boolean", + "key": "particleInstancers", + "label": "Particle Instancers", + "default": false + }, + { + "type": "boolean", + "key": "pivots", + "label": "Pivots", + "default": false + }, + { + "type": "boolean", + "key": "planes", + "label": "Planes", + "default": false + }, + { + "type": "boolean", + "key": "pluginShapes", + "label": "Plugin Shapes", + "default": false + }, + { + "type": "boolean", + "key": "polymeshes", + "label": "Polygons", + "default": true + }, + { + "type": "boolean", + "key": 
"strokes", + "label": "Strokes", + "default": false + }, + { + "type": "boolean", + "key": "subdivSurfaces", + "label": "Subdiv Surfaces", + "default": false + }, + { + "type": "boolean", + "key": "textures", + "label": "Texture Placements", + "default": false + }, + { + "type": "dict-modifiable", + "key": "pluginObjects", + "label": "Plugin Objects", + "object_type": "boolean" + } + ] + }, + { + "type": "dict", + "collapsible": true, + "key": "Camera Options", + "label": "Camera Options", + "children": [ + { + "type": "boolean", + "key": "displayGateMask", + "label": "Display Gate Mask", + "default": false + }, + { + "type": "boolean", + "key": "displayResolution", + "label": "Display Resolution", + "default": false + }, + { + "type": "boolean", + "key": "displayFilmGate", + "label": "Display Film Gate", + "default": false + }, + { + "type": "boolean", + "key": "displayFieldChart", + "label": "Display Field Chart", + "default": false + }, + { + "type": "boolean", + "key": "displaySafeAction", + "label": "Display Safe Action", + "default": false + }, + { + "type": "boolean", + "key": "displaySafeTitle", + "label": "Display Safe Title", + "default": false + }, + { + "type": "boolean", + "key": "displayFilmPivot", + "label": "Display Film Pivot", + "default": false + }, + { + "type": "boolean", + "key": "displayFilmOrigin", + "label": "Display Film Origin", + "default": false + }, + { + "type": "number", + "key": "overscan", + "label": "Overscan", + "decimal": 1, + "minimum": 0, + "maximum": 10, + "default": 1 + } + ] + } + ] + } + ] + } } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json index 6b2315abc0..4b6b97ab4e 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_load.json @@ -91,6 +91,36 @@ "key": "yetiRig" } ] + }, + { + "type": "dict", + "collapsible": true, + "key": "reference_loader", + "label": "Reference Loader", + "children": [ + { + "type": "text", + "label": "Namespace", + "key": "namespace" + }, + { + "type": "text", + "label": "Group name", + "key": "group_name" + }, + { + "type": "label", + "label": "Here's a link to the doc where you can find explanations about customing the naming of referenced assets: https://openpype.io/docs/admin_hosts_maya#load-plugins" + }, + { + "type": "separator" + }, + { + "type": "boolean", + "key": "display_handle", + "label": "Display Handle On Load References" + } + ] } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_imageio.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_imageio.json index 1cd6f0e67f..21f6baff9e 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_imageio.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_imageio.json @@ -74,28 +74,34 @@ "nuke-default": "nuke-default" }, { - "spi-vfx": "spi-vfx" + "spi-vfx": "spi-vfx (11)" }, { - "spi-anim": "spi-anim" + "spi-anim": "spi-anim (11)" }, { - "aces_0.1.1": "aces_0.1.1" + "aces_0.1.1": "aces_0.1.1 (11)" }, { - "aces_0.7.1": "aces_0.7.1" + "aces_0.7.1": "aces_0.7.1 (11)" }, { - "aces_1.0.1": "aces_1.0.1" + "aces_1.0.1": "aces_1.0.1 (11)" }, { - "aces_1.0.3": "aces_1.0.3" + "aces_1.0.3": "aces_1.0.3 (11, 12)" }, { - "aces_1.1": "aces_1.1" + "aces_1.1": "aces_1.1 (12, 13)" }, { - "aces_1.2": "aces_1.2" + 
"aces_1.2": "aces_1.2 (13, 14)" + }, + { + "studio-config-v1.0.0_aces-v1.3_ocio-v2.1": "studio-config-v1.0.0_aces-v1.3_ocio-v2.1 (14)" + }, + { + "cg-config-v1.0.0_aces-v1.3_ocio-v2.1": "cg-config-v1.0.0_aces-v1.3_ocio-v2.1 (14)" }, { "custom": "custom" @@ -257,4 +263,4 @@ ] } ] -} \ No newline at end of file +} diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json index 1c542279fc..ce9fa04c6a 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_nuke_publish.json @@ -62,7 +62,7 @@ "template_data": [ { "key": "ValidateCorrectAssetName", - "label": "Validate Correct Asset name" + "label": "Validate Correct Asset Name" } ] }, @@ -72,7 +72,7 @@ "template_data": [ { "key": "ValidateContainers", - "label": "ValidateContainers" + "label": "Validate Containers" } ] }, @@ -81,7 +81,7 @@ "collapsible": true, "checkbox_key": "enabled", "key": "ValidateKnobs", - "label": "ValidateKnobs", + "label": "Validate Knobs", "is_group": true, "children": [ { @@ -104,6 +104,10 @@ "key": "ValidateOutputResolution", "label": "Validate Output Resolution" }, + { + "key": "ValidateBackdrop", + "label": "Validate Backdrop" + }, { "key": "ValidateGizmo", "label": "Validate Gizmo (Group)" diff --git a/openpype/settings/entities/schemas/system_schema/host_settings/schema_substancepainter.json b/openpype/settings/entities/schemas/system_schema/host_settings/schema_substancepainter.json new file mode 100644 index 0000000000..fb3b21e63f --- /dev/null +++ b/openpype/settings/entities/schemas/system_schema/host_settings/schema_substancepainter.json @@ -0,0 +1,40 @@ +{ + "type": "dict", + "key": "substancepainter", + "label": "Substance Painter", + "collapsible": true, + "checkbox_key": "enabled", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "schema_template", + "name": "template_host_unchangables" + }, + { + "key": "environment", + "label": "Environment", + "type": "raw-json" + }, + { + "type": "dict-modifiable", + "key": "variants", + "collapsible_key": true, + "use_label_wrap": false, + "object_type": { + "type": "dict", + "collapsible": true, + "children": [ + { + "type": "schema_template", + "name": "template_host_variant_items", + "skip_paths": ["use_python_2"] + } + ] + } + } + ] +} diff --git a/openpype/settings/entities/schemas/system_schema/host_settings/schema_unreal.json b/openpype/settings/entities/schemas/system_schema/host_settings/schema_unreal.json index 133d6c9eaf..df5ec0e6fa 100644 --- a/openpype/settings/entities/schemas/system_schema/host_settings/schema_unreal.json +++ b/openpype/settings/entities/schemas/system_schema/host_settings/schema_unreal.json @@ -30,12 +30,7 @@ "children": [ { "type": "schema_template", - "name": "template_host_variant_items", - "skip_paths": [ - "executables", - "separator", - "arguments" - ] + "name": "template_host_variant_items" } ] } diff --git a/openpype/settings/entities/schemas/system_schema/schema_applications.json b/openpype/settings/entities/schemas/system_schema/schema_applications.json index b17687cf71..abea37a9ab 100644 --- a/openpype/settings/entities/schemas/system_schema/schema_applications.json +++ b/openpype/settings/entities/schemas/system_schema/schema_applications.json @@ -93,6 +93,10 @@ "type": "schema", "name": "schema_celaction" }, + { + "type": "schema", + 
"name": "schema_substancepainter" + }, { "type": "schema", "name": "schema_unreal" diff --git a/openpype/style/data.json b/openpype/style/data.json index 404ca6944c..bea2a3d407 100644 --- a/openpype/style/data.json +++ b/openpype/style/data.json @@ -48,7 +48,7 @@ "bg-view-selection-hover": "rgba(92, 173, 214, .8)", "border": "#373D48", - "border-hover": "rgba(168, 175, 189, .3)", + "border-hover": "rgb(92, 99, 111)", "border-focus": "rgb(92, 173, 214)", "restart-btn-bg": "#458056", diff --git a/openpype/style/style.css b/openpype/style/style.css index da477eeefa..827b103f94 100644 --- a/openpype/style/style.css +++ b/openpype/style/style.css @@ -35,6 +35,11 @@ QWidget:disabled { color: {color:font-disabled}; } +/* Some DCCs have set borders to solid color */ +QScrollArea { + border: none; +} + QLabel { background: transparent; } @@ -42,7 +47,7 @@ QLabel { /* Inputs */ QAbstractSpinBox, QLineEdit, QPlainTextEdit, QTextEdit { border: 1px solid {color:border}; - border-radius: 0.3em; + border-radius: 0.2em; background: {color:bg-inputs}; padding: 0.1em; } @@ -127,6 +132,7 @@ QPushButton { border-radius: 0.2em; padding: 3px 5px 3px 5px; background: {color:bg-buttons}; + min-width: 0px; /* Substance Painter fix */ } QPushButton:hover { @@ -226,7 +232,7 @@ QMenu::separator { /* Combobox */ QComboBox { border: 1px solid {color:border}; - border-radius: 3px; + border-radius: 0.2em; padding: 1px 3px 1px 3px; background: {color:bg-inputs}; } @@ -332,7 +338,15 @@ QTabWidget::tab-bar { alignment: left; } +/* avoid QTabBar overrides in Substance Painter */ +QTabBar { + text-transform: none; + font-weight: normal; +} + QTabBar::tab { + text-transform: none; + font-weight: normal; border-top: 1px solid {color:border}; border-left: 1px solid {color:border}; border-right: 1px solid {color:border}; @@ -372,6 +386,7 @@ QHeaderView { QHeaderView::section { background: {color:bg-view-header}; padding: 4px; + border-top: 0px; /* Substance Painter fix */ border-right: 1px solid {color:bg-view}; border-radius: 0px; text-align: center; @@ -474,7 +489,6 @@ QAbstractItemView:disabled{ } QAbstractItemView::item:hover { - /* color: {color:bg-view-hover}; */ background: {color:bg-view-hover}; } @@ -743,7 +757,7 @@ OverlayMessageWidget QWidget { #TypeEditor, #ToolEditor, #NameEditor, #NumberEditor { background: transparent; - border-radius: 0.3em; + border-radius: 0.2em; } #TypeEditor:focus, #ToolEditor:focus, #NameEditor:focus, #NumberEditor:focus { @@ -860,7 +874,13 @@ OverlayMessageWidget QWidget { background: {color:bg-view-hover}; } -/* New Create/Publish UI */ +/* Publisher UI (Create/Publish) */ +#PublishWindow QAbstractSpinBox, QLineEdit, QPlainTextEdit, QTextEdit { + padding: 1px; +} +#PublishWindow QComboBox { + padding: 1px 1px 1px 0.2em; +} PublisherTabsWidget { background: {color:publisher:tab-bg}; } @@ -944,6 +964,7 @@ PixmapButton:disabled { border-top-left-radius: 0px; padding-top: 0.5em; padding-bottom: 0.5em; + width: 0.5em; } #VariantInput[state="new"], #VariantInput[state="new"]:focus, #VariantInput[state="new"]:hover { border-color: {color:publisher:success}; @@ -1072,7 +1093,7 @@ ValidationArtistMessage QLabel { #AssetNameInputWidget { background: {color:bg-inputs}; border: 1px solid {color:border}; - border-radius: 0.3em; + border-radius: 0.2em; } #AssetNameInputWidget QWidget { @@ -1465,6 +1486,12 @@ CreateNextPageOverlay { } /* Attribute Definition widgets */ +AttributeDefinitionsWidget QAbstractSpinBox, QLineEdit, QPlainTextEdit, QTextEdit { + padding: 1px; +} +AttributeDefinitionsWidget 
QComboBox { + padding: 1px 1px 1px 0.2em; +} InViewButton, InViewButton:disabled { background: transparent; } diff --git a/openpype/tools/attribute_defs/widgets.py b/openpype/tools/attribute_defs/widgets.py index 0d4e1e88a9..d46c238da1 100644 --- a/openpype/tools/attribute_defs/widgets.py +++ b/openpype/tools/attribute_defs/widgets.py @@ -1,4 +1,3 @@ -import uuid import copy from qtpy import QtWidgets, QtCore @@ -126,7 +125,7 @@ class AttributeDefinitionsWidget(QtWidgets.QWidget): row = 0 for attr_def in attr_defs: - if not isinstance(attr_def, UIDef): + if attr_def.is_value_def: if attr_def.key in self._current_keys: raise KeyError( "Duplicated key \"{}\"".format(attr_def.key)) @@ -144,11 +143,16 @@ class AttributeDefinitionsWidget(QtWidgets.QWidget): col_num = 2 - expand_cols - if attr_def.label: + if attr_def.is_value_def and attr_def.label: label_widget = QtWidgets.QLabel(attr_def.label, self) tooltip = attr_def.tooltip if tooltip: label_widget.setToolTip(tooltip) + if attr_def.is_label_horizontal: + label_widget.setAlignment( + QtCore.Qt.AlignRight + | QtCore.Qt.AlignVCenter + ) layout.addWidget( label_widget, row, 0, 1, expand_cols ) diff --git a/openpype/tools/loader/model.py b/openpype/tools/loader/model.py index 39e0bd98c3..e5d8400031 100644 --- a/openpype/tools/loader/model.py +++ b/openpype/tools/loader/model.py @@ -123,7 +123,7 @@ class BaseRepresentationModel(object): self.remote_provider = remote_provider -class SubsetsModel(TreeModel, BaseRepresentationModel): +class SubsetsModel(BaseRepresentationModel, TreeModel): doc_fetched = QtCore.Signal() refreshed = QtCore.Signal(bool) @@ -361,10 +361,11 @@ class SubsetsModel(TreeModel, BaseRepresentationModel): version_data.get("endFrame", None) ) + handles_label = None handle_start = version_data.get("handleStart", None) handle_end = version_data.get("handleEnd", None) if handle_start is not None and handle_end is not None: - handles = "{}-{}".format(str(handle_start), str(handle_end)) + handles_label = "{}-{}".format(str(handle_start), str(handle_end)) if frame_start is not None and frame_end is not None: # Remove superfluous zeros from numbers (3.0 -> 3) to improve @@ -401,7 +402,7 @@ class SubsetsModel(TreeModel, BaseRepresentationModel): "frameStart": frame_start, "frameEnd": frame_end, "duration": duration, - "handles": handles, + "handles": handles_label, "frames": frames, "step": version_data.get("step", None), }) diff --git a/openpype/tools/publisher/constants.py b/openpype/tools/publisher/constants.py index 5d23886aa8..660fccecf1 100644 --- a/openpype/tools/publisher/constants.py +++ b/openpype/tools/publisher/constants.py @@ -2,7 +2,7 @@ from qtpy import QtCore, QtGui # ID of context item in instance view CONTEXT_ID = "context" -CONTEXT_LABEL = "Options" +CONTEXT_LABEL = "Context" # Not showed anywhere - used as identifier CONTEXT_GROUP = "__ContextGroup__" @@ -15,6 +15,9 @@ VARIANT_TOOLTIP = ( "\nnumerical characters (0-9) dot (\".\") or underscore (\"_\")." 
) +INPUTS_LAYOUT_HSPACING = 4 +INPUTS_LAYOUT_VSPACING = 2 + # Roles for instance views INSTANCE_ID_ROLE = QtCore.Qt.UserRole + 1 SORT_VALUE_ROLE = QtCore.Qt.UserRole + 2 diff --git a/openpype/tools/publisher/control.py b/openpype/tools/publisher/control.py index b62ae7ecc1..4b083d4bc8 100644 --- a/openpype/tools/publisher/control.py +++ b/openpype/tools/publisher/control.py @@ -6,6 +6,7 @@ import collections import uuid import tempfile import shutil +import inspect from abc import ABCMeta, abstractmethod import six @@ -26,8 +27,8 @@ from openpype.pipeline import ( PublishValidationError, KnownPublishError, registered_host, - legacy_io, get_process_id, + OptionalPyblishPluginMixin, ) from openpype.pipeline.create import ( CreateContext, @@ -162,7 +163,7 @@ class AssetDocsCache: return copy.deepcopy(self._full_asset_docs_by_name[asset_name]) -class PublishReport: +class PublishReportMaker: """Report for single publishing process. Report keeps current state of publishing and currently processed plugin. @@ -783,6 +784,13 @@ class PublishValidationErrors: # Make sure the cached report is cleared plugin_id = self._plugins_proxy.get_plugin_id(plugin) + if not error.title: + if hasattr(plugin, "label") and plugin.label: + plugin_label = plugin.label + else: + plugin_label = plugin.__name__ + error.title = plugin_label + self._error_items.append( ValidationErrorItem.from_result(plugin_id, error, instance) ) @@ -1673,7 +1681,7 @@ class PublisherController(BasePublisherController): # pyblish.api.Context self._publish_context = None # Pyblish report - self._publish_report = PublishReport(self) + self._publish_report = PublishReportMaker(self) # Store exceptions of validation error self._publish_validation_errors = PublishValidationErrors() @@ -2307,6 +2315,37 @@ class PublisherController(BasePublisherController): def _process_main_thread_item(self, item): item() + def _is_publish_plugin_active(self, plugin): + """Decide if publish plugin is active. + + This is a hack because 'active' is mis-used in the mixin + 'OptionalPyblishPluginMixin' where 'active' is used for the default value + of optional plugins. Because of that, the 'active' state of plugins + which inherit from 'OptionalPyblishPluginMixin' is ignored. That affects + headless publishing inside a host, and potentially remote publishing. + + We have to change that to match pyblish base, but we can do that + only when all hosts use Publisher because the change requires + a change of settings schemas. + + Args: + plugin (pyblish.Plugin): Plugin which should be checked if it is + active. + + Returns: + bool: Is plugin active. + """ + + if plugin.active: + return True + + if not plugin.optional: + return False + + if OptionalPyblishPluginMixin in inspect.getmro(plugin): + return True + return False + def _publish_iterator(self): """Main logic center of publishing. @@ -2315,11 +2354,9 @@ class PublisherController(BasePublisherController): states of currently processed publish plugin and instance. Also change state of processed orders like validation order has passed etc. - Also stops publishing if should stop on validation. - - QUESTION: - Does validate button still make sense? + Also stops publishing if it should stop on validation.
""" + for idx, plugin in enumerate(self._publish_plugins): self._publish_progress = idx @@ -2344,6 +2381,11 @@ class PublisherController(BasePublisherController): # Add plugin to publish report self._publish_report.add_plugin_iter(plugin, self._publish_context) + # WARNING This is hack fix for optional plugins + if not self._is_publish_plugin_active(plugin): + self._publish_report.set_plugin_skipped() + continue + # Trigger callback that new plugin is going to be processed plugin_label = plugin.__name__ if hasattr(plugin, "label") and plugin.label: @@ -2450,7 +2492,11 @@ def collect_families_from_instances(instances, only_active=False): instances(list): List of publish instances from which are families collected. only_active(bool): Return families only for active instances. + + Returns: + list[str]: Families available on instances. """ + all_families = set() for instance in instances: if only_active: diff --git a/openpype/tools/publisher/publish_report_viewer/model.py b/openpype/tools/publisher/publish_report_viewer/model.py index 37da4ab3f2..ff10e091b8 100644 --- a/openpype/tools/publisher/publish_report_viewer/model.py +++ b/openpype/tools/publisher/publish_report_viewer/model.py @@ -162,7 +162,8 @@ class PluginsModel(QtGui.QStandardItemModel): items = [] for plugin_item in plugin_items: - item = QtGui.QStandardItem(plugin_item.label) + label = plugin_item.label or plugin_item.name + item = QtGui.QStandardItem(label) item.setData(False, ITEM_IS_GROUP_ROLE) item.setData(plugin_item.label, ITEM_LABEL_ROLE) item.setData(plugin_item.id, ITEM_ID_ROLE) diff --git a/openpype/tools/publisher/widgets/assets_widget.py b/openpype/tools/publisher/widgets/assets_widget.py index 3c559af259..a750d8d540 100644 --- a/openpype/tools/publisher/widgets/assets_widget.py +++ b/openpype/tools/publisher/widgets/assets_widget.py @@ -211,6 +211,10 @@ class AssetsDialog(QtWidgets.QDialog): layout.addWidget(asset_view, 1) layout.addLayout(btns_layout, 0) + controller.event_system.add_callback( + "controller.reset.finished", self._on_controller_reset + ) + asset_view.double_clicked.connect(self._on_ok_clicked) filter_input.textChanged.connect(self._on_filter_change) ok_btn.clicked.connect(self._on_ok_clicked) @@ -245,6 +249,10 @@ class AssetsDialog(QtWidgets.QDialog): new_pos.setY(new_pos.y() - int(self.height() / 2)) self.move(new_pos) + def _on_controller_reset(self): + # Change reset enabled so model is reset on show event + self._soft_reset_enabled = True + def showEvent(self, event): """Refresh asset model on show.""" super(AssetsDialog, self).showEvent(event) diff --git a/openpype/tools/publisher/widgets/card_view_widgets.py b/openpype/tools/publisher/widgets/card_view_widgets.py index 0734e1bc27..13715bc73c 100644 --- a/openpype/tools/publisher/widgets/card_view_widgets.py +++ b/openpype/tools/publisher/widgets/card_view_widgets.py @@ -9,7 +9,7 @@ Only one item can be selected at a time. ``` : Icon. 
Can have Warning icon when context is not right ┌──────────────────────┐ -│ Options │ +│ Context │ │ ────────── │ │ [x]│ │ [x]│ @@ -202,7 +202,7 @@ class ConvertorItemsGroupWidget(BaseGroupWidget): class InstanceGroupWidget(BaseGroupWidget): """Widget wrapping instances under group.""" - active_changed = QtCore.Signal() + active_changed = QtCore.Signal(str, str, bool) def __init__(self, group_icons, *args, **kwargs): super(InstanceGroupWidget, self).__init__(*args, **kwargs) @@ -253,13 +253,16 @@ class InstanceGroupWidget(BaseGroupWidget): instance, group_icon, self ) widget.selected.connect(self._on_widget_selection) - widget.active_changed.connect(self.active_changed) + widget.active_changed.connect(self._on_active_changed) self._widgets_by_id[instance.id] = widget self._content_layout.insertWidget(widget_idx, widget) widget_idx += 1 self._update_ordered_item_ids() + def _on_active_changed(self, instance_id, value): + self.active_changed.emit(self.group_name, instance_id, value) + class CardWidget(BaseClickableFrame): """Clickable card used as bigger button.""" @@ -332,7 +335,7 @@ class ContextCardWidget(CardWidget): icon_layout.addWidget(icon_widget) layout = QtWidgets.QHBoxLayout(self) - layout.setContentsMargins(0, 5, 10, 5) + layout.setContentsMargins(0, 2, 10, 2) layout.addLayout(icon_layout, 0) layout.addWidget(label_widget, 1) @@ -363,7 +366,7 @@ class ConvertorItemCardWidget(CardWidget): icon_layout.addWidget(icon_widget) layout = QtWidgets.QHBoxLayout(self) - layout.setContentsMargins(0, 5, 10, 5) + layout.setContentsMargins(0, 2, 10, 2) layout.addLayout(icon_layout, 0) layout.addWidget(label_widget, 1) @@ -377,7 +380,7 @@ class ConvertorItemCardWidget(CardWidget): class InstanceCardWidget(CardWidget): """Card widget representing instance.""" - active_changed = QtCore.Signal() + active_changed = QtCore.Signal(str, bool) def __init__(self, instance, group_icon, parent): super(InstanceCardWidget, self).__init__(parent) @@ -424,7 +427,7 @@ class InstanceCardWidget(CardWidget): top_layout.addWidget(expand_btn, 0) layout = QtWidgets.QHBoxLayout(self) - layout.setContentsMargins(0, 5, 10, 5) + layout.setContentsMargins(0, 2, 10, 2) layout.addLayout(top_layout) layout.addWidget(detail_widget) @@ -445,6 +448,10 @@ class InstanceCardWidget(CardWidget): def set_active_toggle_enabled(self, enabled): self._active_checkbox.setEnabled(enabled) + @property + def is_active(self): + return self._active_checkbox.isChecked() + def set_active(self, new_value): """Set instance as active.""" checkbox_value = self._active_checkbox.isChecked() @@ -515,7 +522,7 @@ class InstanceCardWidget(CardWidget): return self.instance["active"] = new_value - self.active_changed.emit() + self.active_changed.emit(self._id, new_value) def _on_expend_clicked(self): self._set_expanded() @@ -584,6 +591,45 @@ class InstanceCardView(AbstractInstanceView): result.setWidth(width) return result + def _toggle_instances(self, value): + if not self._active_toggle_enabled: + return + + widgets = self._get_selected_widgets() + changed = False + for widget in widgets: + if not isinstance(widget, InstanceCardWidget): + continue + + is_active = widget.is_active + if value == -1: + widget.set_active(not is_active) + changed = True + continue + + _value = bool(value) + if is_active is not _value: + widget.set_active(_value) + changed = True + + if changed: + self.active_changed.emit() + + def keyPressEvent(self, event): + if event.key() == QtCore.Qt.Key_Space: + self._toggle_instances(-1) + return True + + elif event.key() == 
QtCore.Qt.Key_Backspace: + self._toggle_instances(0) + return True + + elif event.key() == QtCore.Qt.Key_Return: + self._toggle_instances(1) + return True + + return super(InstanceCardView, self).keyPressEvent(event) + def _get_selected_widgets(self): output = [] if ( @@ -742,7 +788,15 @@ class InstanceCardView(AbstractInstanceView): for widget in self._widgets_by_group.values(): widget.update_instance_values() - def _on_active_changed(self): + def _on_active_changed(self, group_name, instance_id, value): + group_widget = self._widgets_by_group[group_name] + instance_widget = group_widget.get_widget_by_item_id(instance_id) + if instance_widget.is_selected: + for widget in self._get_selected_widgets(): + if isinstance(widget, InstanceCardWidget): + widget.set_active(value) + else: + self._select_item_clear(instance_id, group_name, instance_widget) self.active_changed.emit() def _on_widget_selection(self, instance_id, group_name, selection_type): diff --git a/openpype/tools/publisher/widgets/create_widget.py b/openpype/tools/publisher/widgets/create_widget.py index ef9c5b98fe..30980af03d 100644 --- a/openpype/tools/publisher/widgets/create_widget.py +++ b/openpype/tools/publisher/widgets/create_widget.py @@ -22,6 +22,8 @@ from ..constants import ( CREATOR_IDENTIFIER_ROLE, CREATOR_THUMBNAIL_ENABLED_ROLE, CREATOR_SORT_ROLE, + INPUTS_LAYOUT_HSPACING, + INPUTS_LAYOUT_VSPACING, ) SEPARATORS = ("---separator---", "---") @@ -198,6 +200,8 @@ class CreateWidget(QtWidgets.QWidget): variant_subset_layout = QtWidgets.QFormLayout(variant_subset_widget) variant_subset_layout.setContentsMargins(0, 0, 0, 0) + variant_subset_layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING) + variant_subset_layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING) variant_subset_layout.addRow("Variant", variant_widget) variant_subset_layout.addRow("Subset", subset_name_input) @@ -282,6 +286,9 @@ class CreateWidget(QtWidgets.QWidget): thumbnail_widget.thumbnail_created.connect(self._on_thumbnail_create) thumbnail_widget.thumbnail_cleared.connect(self._on_thumbnail_clear) + controller.event_system.add_callback( + "main.window.closed", self._on_main_window_close + ) controller.event_system.add_callback( "plugins.refresh.finished", self._on_plugins_refresh ) @@ -316,6 +323,10 @@ class CreateWidget(QtWidgets.QWidget): self._first_show = True self._last_thumbnail_path = None + self._last_current_context_asset = None + self._last_current_context_task = None + self._use_current_context = True + @property def current_asset_name(self): return self._controller.current_asset_name @@ -356,12 +367,39 @@ class CreateWidget(QtWidgets.QWidget): if check_prereq: self._invalidate_prereq() + def _on_main_window_close(self): + """Publisher window was closed.""" + + # Use current context on next refresh + self._use_current_context = True + def refresh(self): + current_asset_name = self._controller.current_asset_name + current_task_name = self._controller.current_task_name + # Get context before refresh to keep selection of asset and # task widgets asset_name = self._get_asset_name() task_name = self._get_task_name() + # Replace by current context if last loaded context was + # 'current context' before reset + if ( + self._use_current_context + or ( + self._last_current_context_asset + and asset_name == self._last_current_context_asset + and task_name == self._last_current_context_task + ) + ): + asset_name = current_asset_name + task_name = current_task_name + + # Store values for future refresh + self._last_current_context_asset = current_asset_name + 
self._last_current_context_task = current_task_name + self._use_current_context = False + self._prereq_available = False # Disable context widget so refresh of asset will use context asset @@ -398,7 +436,10 @@ class CreateWidget(QtWidgets.QWidget): prereq_available = False creator_btn_tooltips.append("Creator is not selected") - if self._context_change_is_enabled() and self._asset_name is None: + if ( + self._context_change_is_enabled() + and self._get_asset_name() is None + ): # QUESTION how to handle invalid asset? prereq_available = False creator_btn_tooltips.append("Context is not selected") diff --git a/openpype/tools/publisher/widgets/list_view_widgets.py b/openpype/tools/publisher/widgets/list_view_widgets.py index 227ae7bda9..557e6559c8 100644 --- a/openpype/tools/publisher/widgets/list_view_widgets.py +++ b/openpype/tools/publisher/widgets/list_view_widgets.py @@ -11,7 +11,7 @@ selection can be enabled disabled using checkbox or keyboard key presses: - Backspace - disable selection ``` -|- Options +|- Context |- [x] | |- [x] | |- [x] @@ -486,6 +486,9 @@ class InstanceListView(AbstractInstanceView): group_widget.set_expanded(expanded) def _on_toggle_request(self, toggle): + if not self._active_toggle_enabled: + return + selected_instance_ids = self._instance_view.get_selected_instance_ids() if toggle == -1: active = None @@ -1039,7 +1042,8 @@ class InstanceListView(AbstractInstanceView): proxy_index = proxy_model.mapFromSource(select_indexes[0]) selection_model.setCurrentIndex( proxy_index, - selection_model.ClearAndSelect | selection_model.Rows + QtCore.QItemSelectionModel.ClearAndSelect + | QtCore.QItemSelectionModel.Rows ) return diff --git a/openpype/tools/publisher/widgets/precreate_widget.py b/openpype/tools/publisher/widgets/precreate_widget.py index 3037a0e12d..3bf0bc3657 100644 --- a/openpype/tools/publisher/widgets/precreate_widget.py +++ b/openpype/tools/publisher/widgets/precreate_widget.py @@ -2,6 +2,8 @@ from qtpy import QtWidgets, QtCore from openpype.tools.attribute_defs import create_widget_for_attr_def +from ..constants import INPUTS_LAYOUT_HSPACING, INPUTS_LAYOUT_VSPACING + class PreCreateWidget(QtWidgets.QWidget): def __init__(self, parent): @@ -81,6 +83,8 @@ class AttributesWidget(QtWidgets.QWidget): layout = QtWidgets.QGridLayout(self) layout.setContentsMargins(0, 0, 0, 0) + layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING) + layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING) self._layout = layout @@ -117,8 +121,16 @@ class AttributesWidget(QtWidgets.QWidget): col_num = 2 - expand_cols - if attr_def.label: + if attr_def.is_value_def and attr_def.label: label_widget = QtWidgets.QLabel(attr_def.label, self) + tooltip = attr_def.tooltip + if tooltip: + label_widget.setToolTip(tooltip) + if attr_def.is_label_horizontal: + label_widget.setAlignment( + QtCore.Qt.AlignRight + | QtCore.Qt.AlignVCenter + ) self._layout.addWidget( label_widget, row, 0, 1, expand_cols ) diff --git a/openpype/tools/publisher/widgets/widgets.py b/openpype/tools/publisher/widgets/widgets.py index d2ce1fbcb2..cd1f1f5a96 100644 --- a/openpype/tools/publisher/widgets/widgets.py +++ b/openpype/tools/publisher/widgets/widgets.py @@ -9,7 +9,7 @@ import collections from qtpy import QtWidgets, QtCore, QtGui import qtawesome -from openpype.lib.attribute_definitions import UnknownDef, UIDef +from openpype.lib.attribute_definitions import UnknownDef from openpype.tools.attribute_defs import create_widget_for_attr_def from openpype.tools import resources from openpype.tools.flickcharm import 
FlickCharm @@ -36,6 +36,8 @@ from .icons import ( from ..constants import ( VARIANT_TOOLTIP, ResetKeySequence, + INPUTS_LAYOUT_HSPACING, + INPUTS_LAYOUT_VSPACING, ) @@ -1098,6 +1100,8 @@ class GlobalAttrsWidget(QtWidgets.QWidget): btns_layout.addWidget(cancel_btn) main_layout = QtWidgets.QFormLayout(self) + main_layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING) + main_layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING) main_layout.addRow("Variant", variant_input) main_layout.addRow("Asset", asset_value_widget) main_layout.addRow("Task", task_value_widget) @@ -1346,6 +1350,8 @@ class CreatorAttrsWidget(QtWidgets.QWidget): content_layout.setColumnStretch(0, 0) content_layout.setColumnStretch(1, 1) content_layout.setAlignment(QtCore.Qt.AlignTop) + content_layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING) + content_layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING) row = 0 for attr_def, attr_instances, values in result: @@ -1371,9 +1377,19 @@ class CreatorAttrsWidget(QtWidgets.QWidget): col_num = 2 - expand_cols - label = attr_def.label or attr_def.key + label = None + if attr_def.is_value_def: + label = attr_def.label or attr_def.key if label: label_widget = QtWidgets.QLabel(label, self) + tooltip = attr_def.tooltip + if tooltip: + label_widget.setToolTip(tooltip) + if attr_def.is_label_horizontal: + label_widget.setAlignment( + QtCore.Qt.AlignRight + | QtCore.Qt.AlignVCenter + ) content_layout.addWidget( label_widget, row, 0, 1, expand_cols ) @@ -1474,6 +1490,8 @@ class PublishPluginAttrsWidget(QtWidgets.QWidget): attr_def_layout = QtWidgets.QGridLayout(attr_def_widget) attr_def_layout.setColumnStretch(0, 0) attr_def_layout.setColumnStretch(1, 1) + attr_def_layout.setHorizontalSpacing(INPUTS_LAYOUT_HSPACING) + attr_def_layout.setVerticalSpacing(INPUTS_LAYOUT_VSPACING) content_layout = QtWidgets.QVBoxLayout(content_widget) content_layout.addWidget(attr_def_widget, 0) @@ -1501,12 +1519,19 @@ class PublishPluginAttrsWidget(QtWidgets.QWidget): expand_cols = 1 col_num = 2 - expand_cols - label = attr_def.label or attr_def.key + label = None + if attr_def.is_value_def: + label = attr_def.label or attr_def.key if label: label_widget = QtWidgets.QLabel(label, content_widget) tooltip = attr_def.tooltip if tooltip: label_widget.setToolTip(tooltip) + if attr_def.is_label_horizontal: + label_widget.setAlignment( + QtCore.Qt.AlignRight + | QtCore.Qt.AlignVCenter + ) attr_def_layout.addWidget( label_widget, row, 0, 1, expand_cols ) @@ -1517,7 +1542,7 @@ class PublishPluginAttrsWidget(QtWidgets.QWidget): ) row += 1 - if isinstance(attr_def, UIDef): + if not attr_def.is_value_def: continue widget.value_changed.connect(self._input_value_changed) diff --git a/openpype/tools/publisher/window.py b/openpype/tools/publisher/window.py index 8826e0f849..b3471163ae 100644 --- a/openpype/tools/publisher/window.py +++ b/openpype/tools/publisher/window.py @@ -46,6 +46,8 @@ class PublisherWindow(QtWidgets.QDialog): def __init__(self, parent=None, controller=None, reset_on_show=None): super(PublisherWindow, self).__init__(parent) + self.setObjectName("PublishWindow") + self.setWindowTitle("OpenPype publisher") icon = QtGui.QIcon(resources.get_openpype_icon_filepath()) @@ -284,6 +286,9 @@ class PublisherWindow(QtWidgets.QDialog): controller.event_system.add_callback( "publish.has_validated.changed", self._on_publish_validated_change ) + controller.event_system.add_callback( + "publish.finished.changed", self._on_publish_finished_change + ) controller.event_system.add_callback( "publish.process.stopped", 
self._on_publish_stop ) @@ -400,8 +405,12 @@ class PublisherWindow(QtWidgets.QDialog): # TODO capture changes and ask user if wants to save changes on close if not self._controller.host_context_has_changed: self._save_changes(False) + self._comment_input.setText("") # clear comment self._reset_on_show = True self._controller.clear_thumbnail_temp_dir_path() + # Trigger custom event that should be captured only in UI + # - backend (controller) must not be dependent on this event topic!!! + self._controller.event_system.emit("main.window.closed", {}, "window") super(PublisherWindow, self).closeEvent(event) def leaveEvent(self, event): @@ -433,15 +442,24 @@ class PublisherWindow(QtWidgets.QDialog): event.accept() return - if event.matches(QtGui.QKeySequence.Save): + save_match = event.matches(QtGui.QKeySequence.Save) + if save_match == QtGui.QKeySequence.ExactMatch: if not self._controller.publish_has_started: self._save_changes(True) event.accept() return - if ResetKeySequence.matches( - QtGui.QKeySequence(event.key() | event.modifiers()) - ): + # PySide6 Support + if hasattr(event, "keyCombination"): + reset_match_result = ResetKeySequence.matches( + QtGui.QKeySequence(event.keyCombination()) + ) + else: + reset_match_result = ResetKeySequence.matches( + QtGui.QKeySequence(event.modifiers() | event.key()) + ) + + if reset_match_result == QtGui.QKeySequence.ExactMatch: if not self.controller.publish_is_running: self.reset() event.accept() @@ -777,6 +795,11 @@ class PublisherWindow(QtWidgets.QDialog): if event["value"]: self._validate_btn.setEnabled(False) + def _on_publish_finished_change(self, event): + if event["value"]: + # Successful publish, remove comment from UI + self._comment_input.setText("") + def _on_publish_stop(self): self._set_publish_overlay_visibility(False) self._reset_btn.setEnabled(True) diff --git a/openpype/tools/push_to_project/control_integrate.py b/openpype/tools/push_to_project/control_integrate.py index bb95fdb26f..37a0512d59 100644 --- a/openpype/tools/push_to_project/control_integrate.py +++ b/openpype/tools/push_to_project/control_integrate.py @@ -1050,8 +1050,8 @@ class ProjectPushItemProcess: repre_format_data["ext"] = ext[1:] break - tmp_result = anatomy.format(formatting_data) - folder_path = tmp_result[template_name]["folder"] + template_obj = anatomy.templates_obj[template_name]["folder"] + folder_path = template_obj.format_strict(formatting_data) repre_context = folder_path.used_values folder_path_rootless = folder_path.rootless repre_filepaths = [] diff --git a/openpype/tools/sceneinventory/model.py b/openpype/tools/sceneinventory/model.py index 63d2945145..5cc849bb9e 100644 --- a/openpype/tools/sceneinventory/model.py +++ b/openpype/tools/sceneinventory/model.py @@ -199,90 +199,103 @@ class InventoryModel(TreeModel): """Refresh the model""" host = registered_host() - if not items: # for debugging or testing, injecting items from outside + # for debugging or testing, injecting items from outside + if items is None: if isinstance(host, ILoadHost): items = host.get_containers() - else: + elif hasattr(host, "ls"): items = host.ls() + else: + items = [] self.clear() - - if self._hierarchy_view and selected: - if not hasattr(host.pipeline, "update_hierarchy"): - # If host doesn't support hierarchical containers, then - # cherry-pick only. 
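The keyPressEvent hunk above makes the reset shortcut check explicit: QKeySequence.matches() returns a match type rather than a boolean, and PySide6 replaces the Qt5 modifiers-plus-key construction with event.keyCombination(). A minimal version-agnostic sketch of that check; the "Ctrl+R" shortcut here is a hypothetical stand-in for ResetKeySequence:

    from qtpy import QtGui

    RESET_SEQUENCE = QtGui.QKeySequence("Ctrl+R")  # hypothetical shortcut

    def matches_reset(event):
        # Qt6 (PySide6) exposes the pressed combination directly; Qt5
        # builds a sequence from the modifiers OR-ed with the key.
        if hasattr(event, "keyCombination"):
            pressed = QtGui.QKeySequence(event.keyCombination())
        else:
            pressed = QtGui.QKeySequence(event.modifiers() | event.key())
        # Comparing against ExactMatch avoids treating a partial match
        # (a prefix of a longer sequence) as a hit.
        return RESET_SEQUENCE.matches(pressed) == QtGui.QKeySequence.ExactMatch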
- self.add_items((item for item in items - if item["objectName"] in selected)) - return - - # Update hierarchy info for all containers - items_by_name = {item["objectName"]: item - for item in host.pipeline.update_hierarchy(items)} - - selected_items = set() - - def walk_children(names): - """Select containers and extend to chlid containers""" - for name in [n for n in names if n not in selected_items]: - selected_items.add(name) - item = items_by_name[name] - yield item - - for child in walk_children(item["children"]): - yield child - - items = list(walk_children(selected)) # Cherry-picked and extended - - # Cut unselected upstream containers - for item in items: - if not item.get("parent") in selected_items: - # Parent not in selection, this is root item. - item["parent"] = None - - parents = [self._root_item] - - # The length of `items` array is the maximum depth that a - # hierarchy could be. - # Take this as an easiest way to prevent looping forever. - maximum_loop = len(items) - count = 0 - while items: - if count > maximum_loop: - self.log.warning("Maximum loop count reached, possible " - "missing parent node.") - break - - _parents = list() - for parent in parents: - _unparented = list() - - def _children(): - """Child item provider""" - for item in items: - if item.get("parent") == parent.get("objectName"): - # (NOTE) - # Since `self._root_node` has no "objectName" - # entry, it will be paired with root item if - # the value of key "parent" is None, or not - # having the key. - yield item - else: - # Not current parent's child, try next - _unparented.append(item) - - self.add_items(_children(), parent) - - items[:] = _unparented - - # Parents of next level - for group_node in parent.children(): - _parents += group_node.children() - - parents[:] = _parents - count += 1 - - else: + if not selected or not self._hierarchy_view: self.add_items(items) + return + + if ( + not hasattr(host, "pipeline") + or not hasattr(host.pipeline, "update_hierarchy") + ): + # If host doesn't support hierarchical containers, then + # cherry-pick only. + self.add_items(( + item + for item in items + if item["objectName"] in selected + )) + return + + # TODO find out what this part does. Function 'update_hierarchy' is + # available only in 'blender' at this moment. + + # Update hierarchy info for all containers + items_by_name = { + item["objectName"]: item + for item in host.pipeline.update_hierarchy(items) + } + + selected_items = set() + + def walk_children(names): + """Select containers and extend to child containers""" + for name in [n for n in names if n not in selected_items]: + selected_items.add(name) + item = items_by_name[name] + yield item + + for child in walk_children(item["children"]): + yield child + + items = list(walk_children(selected)) # Cherry-picked and extended + + # Cut unselected upstream containers + for item in items: + if not item.get("parent") in selected_items: + # Parent not in selection, this is root item. + item["parent"] = None + + parents = [self._root_item] + + # The length of `items` array is the maximum depth that a + # hierarchy could be. + # Take this as the easiest way to prevent looping forever.
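The added walk_children generator cherry-picks the selected containers and then pulls in each of their descendants exactly once. A standalone sketch of the same traversal with hypothetical container names; the real method closes over selected_items and items_by_name instead of taking them as parameters:

    def walk_children(names, items_by_name, visited=None):
        # Yield each named container once, then recurse into its children.
        if visited is None:
            visited = set()
        for name in names:
            if name in visited:
                continue
            visited.add(name)
            item = items_by_name[name]
            yield item
            for child in walk_children(item["children"], items_by_name, visited):
                yield child

    items_by_name = {
        "rigMain": {"objectName": "rigMain", "children": ["modelMain"]},
        "modelMain": {"objectName": "modelMain", "children": []},
    }
    print([i["objectName"] for i in walk_children(["rigMain"], items_by_name)])
    # -> ['rigMain', 'modelMain']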
+ maximum_loop = len(items) + count = 0 + while items: + if count > maximum_loop: + self.log.warning("Maximum loop count reached, possible " + "missing parent node.") + break + + _parents = list() + for parent in parents: + _unparented = list() + + def _children(): + """Child item provider""" + for item in items: + if item.get("parent") == parent.get("objectName"): + # (NOTE) + # Since `self._root_node` has no "objectName" + # entry, it will be paired with root item if + # the value of key "parent" is None, or not + # having the key. + yield item + else: + # Not current parent's child, try next + _unparented.append(item) + + self.add_items(_children(), parent) + + items[:] = _unparented + + # Parents of next level + for group_node in parent.children(): + _parents += group_node.children() + + parents[:] = _parents + count += 1 def add_items(self, items, parent=None): """Add the items to the model. diff --git a/openpype/tools/sceneinventory/window.py b/openpype/tools/sceneinventory/window.py index 89424fd746..6ee1c0d38e 100644 --- a/openpype/tools/sceneinventory/window.py +++ b/openpype/tools/sceneinventory/window.py @@ -107,8 +107,8 @@ class SceneInventoryWindow(QtWidgets.QDialog): view.hierarchy_view_changed.connect( self._on_hierarchy_view_change ) - view.data_changed.connect(self.refresh) - refresh_button.clicked.connect(self.refresh) + view.data_changed.connect(self._on_refresh_request) + refresh_button.clicked.connect(self._on_refresh_request) update_all_button.clicked.connect(self._on_update_all) self._update_all_button = update_all_button @@ -139,6 +139,11 @@ class SceneInventoryWindow(QtWidgets.QDialog): """ + def _on_refresh_request(self): + """Signal callback to trigger 'refresh' without any arguments.""" + + self.refresh() + def refresh(self, items=None): with preserve_expanded_rows( tree_view=self._view, diff --git a/openpype/tools/settings/settings/item_widgets.py b/openpype/tools/settings/settings/item_widgets.py index d51f9b9684..117eca7d6b 100644 --- a/openpype/tools/settings/settings/item_widgets.py +++ b/openpype/tools/settings/settings/item_widgets.py @@ -360,14 +360,16 @@ class TextWidget(InputWidget): def _add_inputs_to_layout(self): multiline = self.entity.multiline if multiline: - self.input_field = SettingsPlainTextEdit(self.content_widget) + input_field = SettingsPlainTextEdit(self.content_widget) + if self.entity.minimum_lines_count: + input_field.set_minimum_lines(self.entity.minimum_lines_count) else: - self.input_field = SettingsLineEdit(self.content_widget) - + input_field = SettingsLineEdit(self.content_widget) placeholder_text = self.entity.placeholder_text if placeholder_text: - self.input_field.setPlaceholderText(placeholder_text) + input_field.setPlaceholderText(placeholder_text) + self.input_field = input_field self.setFocusProxy(self.input_field) layout_kwargs = {} diff --git a/openpype/tools/settings/settings/widgets.py b/openpype/tools/settings/settings/widgets.py index fd04cb0a23..092b6f0e51 100644 --- a/openpype/tools/settings/settings/widgets.py +++ b/openpype/tools/settings/settings/widgets.py @@ -4,7 +4,6 @@ from qtpy import QtWidgets, QtCore, QtGui import qtawesome from openpype.client import get_projects -from openpype.pipeline import AvalonMongoDB from openpype.style import get_objected_colors from openpype.tools.utils.widgets import ImageButton from openpype.tools.utils.lib import paint_image_with_color @@ -97,6 +96,7 @@ class CompleterView(QtWidgets.QListView): # Open the widget unactivated 
self.setAttribute(QtCore.Qt.WA_ShowWithoutActivating) + self.setAttribute(QtCore.Qt.WA_NoMouseReplay) delegate = QtWidgets.QStyledItemDelegate() self.setItemDelegate(delegate) @@ -241,6 +241,18 @@ class SettingsLineEdit(PlaceholderLineEdit): if self._completer is not None: self._completer.set_text_filter(text) + def _completer_should_be_visible(self): + return ( + self.isVisible() + and (self.hasFocus() or self._completer.hasFocus()) + ) + + def _show_completer(self): + if self._completer_should_be_visible(): + self._focus_timer.start() + self._completer.show() + self._update_completer() + def _update_completer(self): if self._completer is None or not self._completer.isVisible(): return @@ -249,7 +261,7 @@ class SettingsLineEdit(PlaceholderLineEdit): self._completer.move(new_point) def _on_focus_timer(self): - if not self.hasFocus() and not self._completer.hasFocus(): + if not self._completer_should_be_visible(): self._completer.hide() self._focus_timer.stop() @@ -258,9 +270,7 @@ class SettingsLineEdit(PlaceholderLineEdit): self.focused_in.emit() if self._completer is not None: - self._focus_timer.start() - self._completer.show() - self._update_completer() + self._show_completer() def paintEvent(self, event): super(SettingsLineEdit, self).paintEvent(event) @@ -300,11 +310,32 @@ class SettingsLineEdit(PlaceholderLineEdit): class SettingsPlainTextEdit(QtWidgets.QPlainTextEdit): focused_in = QtCore.Signal() + _min_lines = 0 def focusInEvent(self, event): super(SettingsPlainTextEdit, self).focusInEvent(event) self.focused_in.emit() + def set_minimum_lines(self, lines): + self._min_lines = lines + self.update() + + def minimumSizeHint(self): + result = super(SettingsPlainTextEdit, self).minimumSizeHint() + if self._min_lines < 1: + return result + document = self.document() + margins = self.contentsMargins() + d_margin = ( + ((document.documentMargin() + self.frameWidth()) * 2) + + margins.top() + margins.bottom() + ) + font = document.defaultFont() + font_metrics = QtGui.QFontMetrics(font) + result.setHeight( + d_margin + (font_metrics.lineSpacing() * self._min_lines)) + return result + class SettingsToolBtn(ImageButton): _mask_pixmap = None diff --git a/openpype/tools/texture_copy/app.py b/openpype/tools/texture_copy/app.py index a695bb8c4d..a5a9f7349a 100644 --- a/openpype/tools/texture_copy/app.py +++ b/openpype/tools/texture_copy/app.py @@ -47,8 +47,8 @@ class TextureCopy: "hierarchy": hierarchy } anatomy = Anatomy(project_name) - anatomy_filled = anatomy.format(template_data) - return anatomy_filled['texture']['path'] + template_obj = anatomy.templates_obj["texture"]["path"] + return template_obj.format_strict(template_data) def _get_version(self, path): versions = [0] diff --git a/openpype/tools/utils/__init__.py b/openpype/tools/utils/__init__.py index 4292e2d726..4149763f80 100644 --- a/openpype/tools/utils/__init__.py +++ b/openpype/tools/utils/__init__.py @@ -1,6 +1,7 @@ from .widgets import ( FocusSpinBox, FocusDoubleSpinBox, + ComboBox, CustomTextComboBox, PlaceholderLineEdit, BaseClickableFrame, @@ -38,6 +39,7 @@ from .overlay_messages import ( __all__ = ( "FocusSpinBox", "FocusDoubleSpinBox", + "ComboBox", "CustomTextComboBox", "PlaceholderLineEdit", "BaseClickableFrame", diff --git a/openpype/tools/utils/widgets.py b/openpype/tools/utils/widgets.py index b416c56797..bae89aeb09 100644 --- a/openpype/tools/utils/widgets.py +++ b/openpype/tools/utils/widgets.py @@ -41,7 +41,28 @@ class FocusDoubleSpinBox(QtWidgets.QDoubleSpinBox): super(FocusDoubleSpinBox, self).wheelEvent(event) 
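The minimumSizeHint override added to SettingsPlainTextEdit above derives the widget's minimum height from its font metrics instead of a hard-coded pixel value, so a multiline setting can reserve room for a configured number of lines. A condensed sketch of the same computation, with a hypothetical default line count:

    from qtpy import QtWidgets, QtGui

    class MinLinesPlainTextEdit(QtWidgets.QPlainTextEdit):
        _min_lines = 3  # hypothetical default; the real widget sets this from settings

        def minimumSizeHint(self):
            hint = super(MinLinesPlainTextEdit, self).minimumSizeHint()
            if self._min_lines < 1:
                return hint
            document = self.document()
            margins = self.contentsMargins()
            # Document margin and frame width apply at both top and bottom.
            extra = (
                ((document.documentMargin() + self.frameWidth()) * 2)
                + margins.top() + margins.bottom()
            )
            metrics = QtGui.QFontMetrics(document.defaultFont())
            hint.setHeight(int(extra + metrics.lineSpacing() * self._min_lines))
            return hint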
-class CustomTextComboBox(QtWidgets.QComboBox): +class ComboBox(QtWidgets.QComboBox): + """Base of combobox with pre-implemented changes used in tools. + + Combobox is using a styled delegate by default so stylesheets are propagated. + + Items are not changed on scroll until the combobox is in focus. + """ + + def __init__(self, *args, **kwargs): + super(ComboBox, self).__init__(*args, **kwargs) + delegate = QtWidgets.QStyledItemDelegate() + self.setItemDelegate(delegate) + self.setFocusPolicy(QtCore.Qt.StrongFocus) + + self._delegate = delegate + + def wheelEvent(self, event): + if self.hasFocus(): + return super(ComboBox, self).wheelEvent(event) + + +class CustomTextComboBox(ComboBox): """Combobox which can have different text showed.""" def __init__(self, *args, **kwargs): @@ -253,6 +274,9 @@ class PixmapLabel(QtWidgets.QLabel): self._empty_pixmap = QtGui.QPixmap(0, 0) self._source_pixmap = pixmap + self._last_width = 0 + self._last_height = 0 + def set_source_pixmap(self, pixmap): """Change source image.""" self._source_pixmap = pixmap @@ -263,6 +287,12 @@ class PixmapLabel(QtWidgets.QLabel): size += size % 2 return size, size + def minimumSizeHint(self): + width, height = self._get_pix_size() + if width != self._last_width or height != self._last_height: + self._set_resized_pix() + return QtCore.QSize(width, height) + def _set_resized_pix(self): if self._source_pixmap is None: self.setPixmap(self._empty_pixmap) @@ -276,6 +306,8 @@ class PixmapLabel(QtWidgets.QLabel): QtCore.Qt.SmoothTransformation ) ) + self._last_width = width + self._last_height = height def resizeEvent(self, event): self._set_resized_pix() diff --git a/openpype/tools/workfiles/save_as_dialog.py b/openpype/tools/workfiles/save_as_dialog.py index aa881e7946..9f1d1060da 100644 --- a/openpype/tools/workfiles/save_as_dialog.py +++ b/openpype/tools/workfiles/save_as_dialog.py @@ -60,8 +60,8 @@ class CommentMatcher(object): temp_data["version"] = "<>" temp_data["ext"] = "<>" - formatted = anatomy.format(temp_data) - fname_pattern = formatted[template_key]["file"] + template_obj = anatomy.templates_obj[template_key]["file"] + fname_pattern = template_obj.format_strict(temp_data) fname_pattern = re.escape(fname_pattern) # Replace comment and version with something we can match with regex @@ -375,8 +375,8 @@ class SaveAsDialog(QtWidgets.QDialog): data["ext"] = data["ext"].lstrip(".") - anatomy_filled = self.anatomy.format(data) - return anatomy_filled[self.template_key]["file"] + template_obj = self.anatomy.templates_obj[self.template_key]["file"] + return template_obj.format_strict(data) def refresh(self): extensions = list(self._extensions) diff --git a/openpype/version.py b/openpype/version.py index d9e29d691e..e02053ba76 100644 --- a/openpype/version.py +++ b/openpype/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring Pype version.""" -__version__ = "3.15.4-nightly.3" +__version__ = "3.15.7-nightly.1" diff --git a/pyproject.toml b/pyproject.toml index 42ce5aa32c..003f6cf2d3 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -1,6 +1,6 @@ [tool.poetry] name = "OpenPype" -version = "3.15.3" # OpenPype +version = "3.15.6" # OpenPype description = "Open VFX and Animation pipeline with support."
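Several hunks in this diff (control_integrate.py, texture_copy/app.py, and save_as_dialog.py above) replace anatomy.format(data), which fills every template at once, with resolving a single template object and calling format_strict on it. A minimal sketch of the new call pattern; the project name, template keys, and template data below are illustrative, and the import path is assumed from other OpenPype modules:

    from openpype.pipeline import Anatomy  # assumed import path

    anatomy = Anatomy("my_project")  # hypothetical project name

    # Resolve one template instead of formatting the whole anatomy.
    template_obj = anatomy.templates_obj["work"]["file"]  # illustrative keys

    # format_strict raises on missing keys instead of leaving placeholders
    # behind, which surfaces incomplete template data early.
    filled = template_obj.format_strict({
        "project": {"name": "my_project", "code": "mp"},
        "asset": "sh010",
        "task": {"name": "animation"},
        "version": 1,
        "ext": "ma",
    })
    print(str(filled))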
authors = ["OpenPype Team "] license = "MIT License" diff --git a/tests/integration/hosts/aftereffects/test_deadline_publish_in_aftereffects_multicomposition.py b/tests/integration/hosts/aftereffects/test_deadline_publish_in_aftereffects_multicomposition.py index d372efcb9a..0e9cd3b00d 100644 --- a/tests/integration/hosts/aftereffects/test_deadline_publish_in_aftereffects_multicomposition.py +++ b/tests/integration/hosts/aftereffects/test_deadline_publish_in_aftereffects_multicomposition.py @@ -9,6 +9,9 @@ log = logging.getLogger("test_publish_in_aftereffects") class TestDeadlinePublishInAfterEffectsMultiComposition(AEDeadlinePublishTestClass): # noqa """est case for DL publishing in AfterEffects with multiple compositions. + Workfile contains 2 prepared `render` instances. First has review set, + second doesn't. + Uses generic TestCase to prepare fixtures for test data, testing DBs, env vars. @@ -68,7 +71,7 @@ class TestDeadlinePublishInAfterEffectsMultiComposition(AEDeadlinePublishTestCla name="renderTest_taskMain2")) failures.append( - DBAssert.count_of_types(dbcon, "representation", 7)) + DBAssert.count_of_types(dbcon, "representation", 5)) additional_args = {"context.subset": "workfileTest_task", "context.ext": "aep"} @@ -105,13 +108,13 @@ class TestDeadlinePublishInAfterEffectsMultiComposition(AEDeadlinePublishTestCla additional_args = {"context.subset": "renderTest_taskMain2", "name": "thumbnail"} failures.append( - DBAssert.count_of_types(dbcon, "representation", 1, + DBAssert.count_of_types(dbcon, "representation", 0, additional_args=additional_args)) additional_args = {"context.subset": "renderTest_taskMain2", "name": "png_exr"} failures.append( - DBAssert.count_of_types(dbcon, "representation", 1, + DBAssert.count_of_types(dbcon, "representation", 0, additional_args=additional_args)) assert not any(failures) diff --git a/tests/integration/hosts/photoshop/test_publish_in_photoshop_auto_image.py b/tests/integration/hosts/photoshop/test_publish_in_photoshop_auto_image.py new file mode 100644 index 0000000000..1594b36dec --- /dev/null +++ b/tests/integration/hosts/photoshop/test_publish_in_photoshop_auto_image.py @@ -0,0 +1,93 @@ +import logging + +from tests.lib.assert_classes import DBAssert +from tests.integration.hosts.photoshop.lib import PhotoshopTestClass + +log = logging.getLogger("test_publish_in_photoshop") + + +class TestPublishInPhotoshopAutoImage(PhotoshopTestClass): + """Test for publish in Phohoshop with different review configuration. + + Workfile contains 3 layers, auto image and review instances created. + + Test contains updates to Settings!!! 
+
+    """
+    PERSIST = True
+
+    TEST_FILES = [
+        ("1iLF6aNI31qlUCD1rGg9X9eMieZzxL-rc",
+         "test_photoshop_publish_auto_image.zip", "")
+    ]
+
+    APP_GROUP = "photoshop"
+    # keep empty to locate latest installed variant or explicit
+    APP_VARIANT = ""
+
+    APP_NAME = "{}/{}".format(APP_GROUP, APP_VARIANT)
+
+    TIMEOUT = 120  # publish timeout
+
+    def test_db_asserts(self, dbcon, publish_finished):
+        """Host and input data dependent expected results in DB."""
+        print("test_db_asserts")
+        failures = []
+
+        failures.append(DBAssert.count_of_types(dbcon, "version", 3))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "version", 0, name={"$ne": 1}))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "subset", 0,
+                                    name="imageMainForeground"))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "subset", 0,
+                                    name="imageMainBackground"))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "subset", 1,
+                                    name="workfileTest_task"))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 5))
+
+        additional_args = {"context.subset": "imageMainForeground",
+                           "context.ext": "png"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 0,
+                                    additional_args=additional_args))
+
+        additional_args = {"context.subset": "imageMainBackground",
+                           "context.ext": "png"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 0,
+                                    additional_args=additional_args))
+
+        # review from image
+        additional_args = {"context.subset": "imageBeautyMain",
+                           "context.ext": "jpg",
+                           "name": "jpg_jpg"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 1,
+                                    additional_args=additional_args))
+
+        additional_args = {"context.subset": "imageBeautyMain",
+                           "context.ext": "jpg",
+                           "name": "jpg"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 1,
+                                    additional_args=additional_args))
+
+        additional_args = {"context.subset": "review"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 1,
+                                    additional_args=additional_args))
+
+        assert not any(failures)
+
+
+if __name__ == "__main__":
+    test_case = TestPublishInPhotoshopAutoImage()
diff --git a/tests/integration/hosts/photoshop/test_publish_in_photoshop_review.py b/tests/integration/hosts/photoshop/test_publish_in_photoshop_review.py
new file mode 100644
index 0000000000..64b6868d7c
--- /dev/null
+++ b/tests/integration/hosts/photoshop/test_publish_in_photoshop_review.py
@@ -0,0 +1,111 @@
+import logging
+
+from tests.lib.assert_classes import DBAssert
+from tests.integration.hosts.photoshop.lib import PhotoshopTestClass
+
+log = logging.getLogger("test_publish_in_photoshop")
+
+
+class TestPublishInPhotoshopImageReviews(PhotoshopTestClass):
+    """Test for publish in Photoshop with different review configuration.
+
+    Workfile contains 2 image instances; one has the review flag set, the
+    second doesn't.
+
+    Regular `review` family is disabled.
+
+    Expected result is for `imageMainForeground` to have an additional file
+    with the review, `imageMainBackground` without. No separate `review`
+    family.
+
+    `test_project_test_asset_imageMainForeground_v001_jpg.jpg` is the
+    expected name of the imageMainForeground review; the `_jpg` suffix is
+    needed to differentiate between the image and the review file.
+
+    """
+    PERSIST = True
+
+    TEST_FILES = [
+        ("12WGbNy9RJ3m9jlnk0Ib9-IZmONoxIz_p",
+         "test_photoshop_publish_review.zip", "")
+    ]
+
+    APP_GROUP = "photoshop"
+    # keep empty to locate latest installed variant or explicit
+    APP_VARIANT = ""
+
+    APP_NAME = "{}/{}".format(APP_GROUP, APP_VARIANT)
+
+    TIMEOUT = 120  # publish timeout
+
+    def test_db_asserts(self, dbcon, publish_finished):
+        """Host and input data dependent expected results in DB."""
+        print("test_db_asserts")
+        failures = []
+
+        failures.append(DBAssert.count_of_types(dbcon, "version", 3))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "version", 0, name={"$ne": 1}))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "subset", 1,
+                                    name="imageMainForeground"))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "subset", 1,
+                                    name="imageMainBackground"))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "subset", 1,
+                                    name="workfileTest_task"))
+
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 6))
+
+        additional_args = {"context.subset": "imageMainForeground",
+                           "context.ext": "png"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 1,
+                                    additional_args=additional_args))
+
+        additional_args = {"context.subset": "imageMainForeground",
+                           "context.ext": "jpg"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 2,
+                                    additional_args=additional_args))
+
+        additional_args = {"context.subset": "imageMainForeground",
+                           "context.ext": "jpg",
+                           "context.representation": "jpg_jpg"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 1,
+                                    additional_args=additional_args))
+
+        additional_args = {"context.subset": "imageMainBackground",
+                           "context.ext": "png"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 1,
+                                    additional_args=additional_args))
+
+        additional_args = {"context.subset": "imageMainBackground",
+                           "context.ext": "jpg"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 1,
+                                    additional_args=additional_args))
+
+        additional_args = {"context.subset": "imageMainBackground",
+                           "context.ext": "jpg",
+                           "context.representation": "jpg_jpg"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 0,
+                                    additional_args=additional_args))
+
+        additional_args = {"context.subset": "review"}
+        failures.append(
+            DBAssert.count_of_types(dbcon, "representation", 0,
+                                    additional_args=additional_args))
+
+        assert not any(failures)
+
+
+if __name__ == "__main__":
+    test_case = TestPublishInPhotoshopImageReviews()
diff --git a/tests/unit/openpype/conftest.py b/tests/unit/openpype/conftest.py
new file mode 100644
index 0000000000..0aec25becb
--- /dev/null
+++ b/tests/unit/openpype/conftest.py
@@ -0,0 +1,36 @@
+"""Dummy environment that allows importing OpenPype modules and running
+tests in the parent folder and all subfolders manually from an IDE.
+
+This should not get triggered if the tests are running from `runtests` as it
+is expected there that the environment is handled by OP itself.
+
+This environment should be enough to run simple `BaseTest` cases where no
+external preparation is necessary (eg. no prepared DB, no source files).
+These tests might be enough to import and run simple pyblish plugins to
+validate logic.
+
+Please be aware that these tests might use values in real databases, so use
+`BaseTest` only for logic without side effects or special configuration. For
+those there is `tests.lib.testing_classes.ModuleUnitTest` which would set up
+a proper test DB (but it requires `mongorestore` on the sys.path).
+
+If pyblish plugins require any host dependent communication, it would need
+to be mocked.
+
+This setting of env vars is necessary to run before any imports of OP code!
+(This is why it is in the `conftest.py` file.)
+If your test requires any additional env var, copy this file to the folder of
+your test; it will then only affect that folder.
+"""
+
+import os
+
+
+if not os.environ.get("IS_TEST"):  # running tests from cmd or CI
+    os.environ["OPENPYPE_MONGO"] = "mongodb://localhost:27017"
+    os.environ["AVALON_DB"] = "avalon"
+    os.environ["OPENPYPE_DATABASE_NAME"] = "openpype"
+    os.environ["AVALON_TIMEOUT"] = '3000'
+    os.environ["OPENPYPE_DEBUG"] = "1"
+    os.environ["AVALON_ASSET"] = "test_asset"
+    os.environ["AVALON_PROJECT"] = "test_project"
diff --git a/tests/unit/openpype/plugins/publish/test_validate_sequence_frames.py b/tests/unit/openpype/plugins/publish/test_validate_sequence_frames.py
new file mode 100644
index 0000000000..17e47c9f64
--- /dev/null
+++ b/tests/unit/openpype/plugins/publish/test_validate_sequence_frames.py
@@ -0,0 +1,202 @@
+"""Tests for the ValidateSequenceFrames publish plugin.
+
+These are plain unit tests (`BaseTest`); no prepared DB or source files are
+needed - representations are built in fixtures and passed to the plugin
+directly.
+"""
+import pytest
+import logging
+
+from pyblish.api import Instance as PyblishInstance
+
+from tests.lib.testing_classes import BaseTest
+from openpype.plugins.publish.validate_sequence_frames import (
+    ValidateSequenceFrames
+)
+
+log = logging.getLogger(__name__)
+
+
+class TestValidateSequenceFrames(BaseTest):
+    """Testing ValidateSequenceFrames plugin."""
+
+    @pytest.fixture
+    def instance(self):
+
+        class Instance(PyblishInstance):
+            data = {
+                "frameStart": 1001,
+                "frameEnd": 1002,
+                "representations": []
+            }
+        yield Instance
+
+    @pytest.fixture(scope="module")
+    def plugin(self):
+        plugin = ValidateSequenceFrames()
+        plugin.log = log
+
+        yield plugin
+
+    def test_validate_sequence_frames_single_frame(self, instance, plugin):
+        representations = [
+            {
+                "ext": "exr",
+                "files": "Main_beauty.1001.exr",
+            }
+        ]
+        instance.data["representations"] = representations
+        instance.data["frameEnd"] = 1001
+
+        plugin.process(instance)
+
+    @pytest.mark.parametrize("files",
+                             [["Main_beauty.v001.1001.exr",
+                               "Main_beauty.v001.1002.exr"],
+                              ["Main_beauty_v001.1001.exr",
+                               "Main_beauty_v001.1002.exr"],
+                              ["Main_beauty.1001.1001.exr",
+                               "Main_beauty.1001.1002.exr"],
+                              ["Main_beauty_v001_1001.exr",
+                               "Main_beauty_v001_1002.exr"]])
+    def test_validate_sequence_frames_name(self, instance,
+                                           plugin, files):
+        # tests for names with a number inside; caused clique failure before
+        representations = [
+            {
+                "ext": "exr",
+                "files": files,
+            }
+        ]
+        instance.data["representations"] = representations
+
+        plugin.process(instance)
+
+    @pytest.mark.parametrize("files",
+                             [["Main_beauty.1001.v001.exr",
+                               "Main_beauty.1002.v001.exr"]])
+    def test_validate_sequence_frames_wrong_name(self, instance,
+                                                 plugin, files):
+        # tests for names with a number inside; caused clique failure before
+        representations = [
+            {
+                "ext": "exr",
+                "files": files,
+            }
+        ]
+        instance.data["representations"] = representations
+
+        with pytest.raises(AssertionError) as excinfo:
+            plugin.process(instance)
+        assert ("Must detect single collection" in
+                str(excinfo.value))
+
+    @pytest.mark.parametrize("files",
+                             [["Main_beauty.v001.1001.ass.gz",
+                               "Main_beauty.v001.1002.ass.gz"]])
+    def test_validate_sequence_frames_possible_wrong_name(
+            self, instance, plugin, files):
+        # currently the pattern fails on extensions with dots
+        representations = [
+            {
+                "files": files,
+            }
+        ]
+        instance.data["representations"] = representations
+
+        with pytest.raises(AssertionError) as excinfo:
+            plugin.process(instance)
+        assert ("Must not have remainder" in
+                str(excinfo.value))
+
+    @pytest.mark.parametrize("files",
+                             [["Main_beauty.v001.1001.ass.gz",
+                               "Main_beauty.v001.1002.ass.gz"]])
+    def test_validate_sequence_frames__correct_ext(
+            self, instance, plugin, files):
+        # currently the pattern fails on extensions with dots
+        representations = [
+            {
+                "ext": "ass.gz",
+                "files": files,
+            }
+        ]
+        instance.data["representations"] = representations
+
+        plugin.process(instance)
+
+    def test_validate_sequence_frames_multi_frame(self, instance, plugin):
+        representations = [
+            {
+                "ext": "exr",
+                "files": ["Main_beauty.1001.exr", "Main_beauty.1002.exr",
+                          "Main_beauty.1003.exr"]
+            }
+        ]
+        instance.data["representations"] = representations
+        instance.data["frameEnd"] = 1003
+
+        plugin.process(instance)
+
+    def test_validate_sequence_frames_multi_frame_missing(self, instance,
+                                                          plugin):
+        representations = [
+            {
+                "ext": "exr",
+                "files": ["Main_beauty.1001.exr", "Main_beauty.1002.exr"]
+            }
+        ]
+        instance.data["representations"] = representations
+        instance.data["frameEnd"] = 1003
+
+        with pytest.raises(ValueError) as excinfo:
+            plugin.process(instance)
+        assert ("Invalid frame range: (1001, 1002) - expected: (1001, 1003)"
+                in str(excinfo.value))
+
+    def test_validate_sequence_frames_multi_frame_hole(self, instance,
+                                                       plugin):
+        representations = [
+            {
+                "ext": "exr",
+                "files": ["Main_beauty.1001.exr", "Main_beauty.1003.exr"]
+            }
+        ]
+        instance.data["representations"] = representations
+        instance.data["frameEnd"] = 1003
+
+        with pytest.raises(AssertionError) as excinfo:
+            plugin.process(instance)
+        assert ("Missing frames: [1002]" in str(excinfo.value))
+
+    def test_validate_sequence_frames_slate(self, instance, plugin):
+        representations = [
+            {
+                "ext": "exr",
+                "files": [
+                    "Main_beauty.1000.exr",
+                    "Main_beauty.1001.exr",
+                    "Main_beauty.1002.exr",
+                    "Main_beauty.1003.exr"
+                ]
+            }
+        ]
+        instance.data["slate"] = True
+        instance.data["representations"] = representations
+        instance.data["frameEnd"] = 1003
+
+        plugin.process(instance)
+
+
+test_case = TestValidateSequenceFrames()
diff --git a/website/docs/admin_hosts_maya.md b/website/docs/admin_hosts_maya.md
index 5c8c48b2e3..f0b8710246 100644
--- a/website/docs/admin_hosts_maya.md
+++ b/website/docs/admin_hosts_maya.md
@@ -106,6 +106,37 @@ or Deadlines **Draft Tile Assembler**.
 This is useful to fix some specific renderer glitches and advanced hacking of Maya Scene files. `Patch name` is label for patch for easier orientation. `Patch regex` is regex used to find line in file, after `Patch line` string is inserted. Note that you need to add line ending.

+## Load Plugins
+
+### Reference Loader
+
+#### Namespace and Group Name
+Here you can create your own custom naming for the reference loader.
+
+The custom naming is split into two parts: namespace and group name. If you don't set the namespace or the group name, an error will occur.
+Here are the different variables you can use:
+
+
+
+| Token | Description |
+|---|---|
+|`{asset_name}` | Asset name |
+|`{asset_type}` | Asset type |
+|`{subset}` | Subset name |
+|`{family}` | Subset family |
+
+
+
+The namespace field can contain a single group of '#' number tokens to indicate where the namespace's unique index should go. The amount of tokens defines the zero padding of the number, e.g. `###` turns into `001`.
+
+Warning: Note that a namespace will always be prefixed with a `_` if it starts with a digit.
+
+Example:
+
+![Namespace and Group Name](assets/maya-admin_custom_namespace.png)
+
 ### Extract GPU Cache

 ![Maya GPU Cache](assets/maya-admin_gpu_cache.png)
@@ -169,6 +200,17 @@ Most settings to override in the viewport are self explanatory and can be found
 These options are set on the camera shape when publishing the review. They correspond to attributes on the Maya camera shape node.
 ![Extract Playblast Settings](assets/maya-admin_extract_playblast_settings_camera_options.png)

+## Include/exclude handles by task type
+You can include or exclude handles, globally or by task type.
+
+"Include handles by default" defines whether handles are included by default. Additionally you can add a per task type override to include or exclude handles.
+
+For example, in this image you can see that handles are included by default in all task types, except for the 'Lighting' task, where the toggle is disabled.
+![Include/exclude handles](assets/maya-admin_exclude_handles.png)
+
+And here you can see that the handles are disabled by default, except in the 'Animation' task where they are enabled.
+![Include/exclude handles](assets/maya-admin_include_handles.png)
+
 ## Custom Menu

 You can add your custom tools menu into Maya by extending definitions in **Maya -> Scripts Menu Definition**.
@@ -232,3 +274,14 @@ Fill in the necessary fields (the optional fields are regex filters)
 - Build your workfile

 ![maya build template](assets/maya-build_workfile_from_template.png)
+
+## Explicit Plugins Loading
+You can define which plugins to load on launch of Maya in `project_settings/maya/explicit_plugins_loading`. This can help improve Maya's launch speed if you know which plugins are needed.
+
+By default only the required plugins are enabled. You can also add any plugin to the list to enable it on launch.
+
+:::note technical
+When enabling this feature, the workfile will be opened post-initialization no matter the setting of `project_settings/maya/open_workfile_post_initialization`. This is to avoid any issues with references needing plugins.
+
+Renderfarm integration is not supported for this feature.
+:::
diff --git a/website/docs/admin_hosts_photoshop.md b/website/docs/admin_hosts_photoshop.md
new file mode 100644
index 0000000000..de684f01d2
--- /dev/null
+++ b/website/docs/admin_hosts_photoshop.md
@@ -0,0 +1,127 @@
+---
+id: admin_hosts_photoshop
+title: Photoshop Settings
+sidebar_label: Photoshop
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+## Photoshop settings
+
+There are a couple of settings that configure the publishing process for **Photoshop**.
+All of them are project based, e.g. each project can have a different configuration.
+
+Location: Settings > Project > Photoshop
+
+![Photoshop Project Settings](assets/admin_hosts_photoshop_settings.png)
+
+## Color Management (ImageIO)
+
+Placeholder for Color Management. Currently not implemented yet.
+
+## Creator plugins
+
+Contains configurable items for creators used during publishing from Photoshop.
+
+### Create Image
+
+Provides a list of [variants](artist_concepts.md#variant) that will be shown to an artist in the Publisher. Default value is `Main`.
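+
+As a rough illustration of how the chosen variant typically ends up in the
+subset name (a sketch assuming the common default `{family}{Variant}`
+pattern; the actual subset name template is configurable per project):
+
+```python
+# Hypothetical illustration only, not the integration's actual code.
+family = "image"
+variant = "Main"
+
+# Capitalize the first letter of the variant and append it to the family.
+subset_name = "{}{}{}".format(family, variant[:1].upper(), variant[1:])
+print(subset_name)  # -> "imageMain"
+```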
+
+### Create Flatten Image
+
+Provides a simplified publishing process. It will create a single `image` instance for the artist automatically. This instance will
+produce a flattened image from all visible layers in the workfile.
+
+- Subset template for flatten image - provides the template for the subset name of this instance (example `imageBeauty`)
+- Review - whether a separate review should be created for this instance
+
+### Create Review
+
+Creates a single `review` instance automatically. This allows artists to disable it if needed.
+
+### Create Workfile
+
+Creates a single `workfile` instance automatically. This allows artists to disable it if needed.
+
+## Publish plugins
+
+Contains configurable items for publish plugins used during publishing from Photoshop.
+
+### Collect Color Coded Instances
+
+Used only in remote publishing!
+
+Allows creating `image` instances automatically for a configurable highlight color set on a layer or group in the workfile.
+
+#### Create flatten image
+ - Flatten with images - produce an additional `image` with all published `image` instances merged
+ - Flatten only - produce only the merged `image` instance
+ - No - produce only separate `image` instances
+
+#### Subset template for flatten image
+
+Template used to create the subset name automatically (example `image{layer}Main` - uses the layer name in the subset name).
+
+### Collect Review
+
+Disable if no review should be created.
+
+### Collect Version
+
+If enabled it will push the version from the workfile name to all published items. E.g. if an artist is publishing `test_asset_workfile_v005.psd`,
+the produced `image` and `review` files will contain `v005` (even if some previous version was skipped for a particular family).
+
+### Validate Containers
+
+Checks if all assets imported into the workfile through the `Loader` are at the latest version. This limits cases where an older version of an asset would be used.
+
+If enabled, an artist might still decide to disable the validation for each publish (for special use cases).
+Limit this optionality by toggling `Optional`.
+The `Active` toggle denotes that by default the artist sees that optional validation as enabled.
+
+### Validate naming of subsets and layers
+
+A subset cannot contain invalid characters, otherwise extraction to a file would fail.
+
+#### Regex pattern of invalid characters
+
+Matches problematic characters like `/`; these might cause an issue when a file (which contains the subset name) is created on the OS disk.
+
+#### Replacement character
+
+Replaces all offending characters with this one. `_` is the default.
+
+### Extract Image
+
+Controls the extension formats of published instances of the `image` family. `png` and `jpg` are the defaults.
+
+### Extract Review
+
+Controls output definitions of extracted reviews to upload to the Asset Management (AM) system.
+
+#### Makes an image sequence instead of flatten image
+
+If multiple `image` instances are produced, glues the created images into an image sequence (`mov`) to review all of them separately.
+Without it only a flattened image would be produced.
+
+#### Maximum size of sources for review
+
+Sets a byte limit for the review file. Applicable if gigantic `image` instances are produced and the full image size is unnecessary to upload to AM.
+
+#### Extract jpg Options
+
+Handles tags for the produced `.jpg` representation. `Create review` and `Add review to Ftrack` are the defaults.
+
+#### Extract mov Options
+
+Handles tags for the produced `.mov` representation. `Create review` and `Add review to Ftrack` are the defaults.
+
+
+### Workfile Builder
+
+Allows opening a prepared workfile for an artist when no workfile exists yet.
+Useful to share standards or additional helpful content in the workfile.
+
+Could be configured per `Task type`, e.g. the `composition` task type could use a different `.psd` template file than the `art` task.
+The workfile template must be accessible to all artists.
+(Currently not handled by [SiteSync](module_site_sync.md))
\ No newline at end of file
diff --git a/website/docs/artist_concepts.md b/website/docs/artist_concepts.md
index 7582540811..1e55c8139d 100644
--- a/website/docs/artist_concepts.md
+++ b/website/docs/artist_concepts.md
@@ -14,17 +14,29 @@ OpenPype has a limitation regarding duplicated names. Name of assets must be uni

 ### Subset

-Usually, an asset needs to be created in multiple *'flavours'*. A character might have multiple different looks, model needs to be published in different resolutions, a standard animation rig might not be usable in a crowd system and so on. 'Subsets' are here to accommodate all this variety that might be needed within a single asset. A model might have subset: *'main'*, *'proxy'*, *'sculpt'*, while data of *'look'* family could have subsets *'main'*, *'dirty'*, *'damaged'*. Subsets have some recommendations for their names, but ultimately it's up to the artist to use them for separation of publishes when needed.
+A published output from an asset results in a subset.
+
+The subset type is referred to as [family](#family), for example a rig, pointcache or look.
+A single asset can have many subsets, even of a single family, named [variants](#variant).
+By default a subset is named as a combination of family + variant (for example, family `model` with variant `Main` becomes subset `modelMain`), sometimes prefixed with the task name (like workfile).
+
+### Variant
+
+Usually, an asset needs to be created in multiple *'flavours'*. A character might have multiple different looks, a model needs to be published in different resolutions, a standard animation rig might not be usable in a crowd system and so on. 'Variants' are here to accommodate all this variety that might be needed within a single asset. A model might have variants *'main'*, *'proxy'*, *'sculpt'*, while data of the *'look'* family could have variants *'main'*, *'dirty'*, *'damaged'*. Variants have some recommendations for their names, but ultimately it's up to the artist to use them for separation of publishes when needed.

 ### Version

-A numbered iteration of a given subset. Each version contains at least one [representation][daa74ebf].
+A numbered iteration of a given subset. Each version contains at least one [representation](#representation).

-  [daa74ebf]: #representation "representation"
+#### Hero version
+
+A hero version is a version that always matches the latest published version. When a new publish is generated it is written over the previous hero version, replacing it in place, as opposed to regular versions where each new publish gets a higher version number.
+
+This is an optional feature. The generation of hero versions can be completely disabled in OpenPype by an admin through the Studio Settings.

 ### Representation

-Each published variant can come out of the software in multiple representations. All of them hold exactly the same data, but in different formats. A model, for example, might be saved as `.OBJ`, Alembic, Maya geometry or as all of them, to be ready for pickup in any other applications supporting these formats.
+Each published subset version can come out of the software in multiple representations. All of them hold exactly the same data, but in different formats. A model, for example, might be saved as `.OBJ`, Alembic, Maya geometry or as all of them, to be ready for pickup in any other applications supporting these formats.

 #### Naming convention

@@ -33,18 +45,22 @@ At this moment names of assets, tasks, subsets or representations can contain on

 ### Family

-Each published [subset][3b89d8e0] can have exactly one family assigned to it. Family determines the type of data that the subset holds. Family doesn't dictate the file type, but can enforce certain technical specifications. For example OpenPype default configuration expects `model` family to only contain geometry without any shaders or joints when it is published.
+Each published [subset](#subset) can have exactly one family assigned to it. Family determines the type of data that the subset holds. Family doesn't dictate the file type, but can enforce certain technical specifications. For example OpenPype default configuration expects `model` family to only contain geometry without any shaders or joints when it is published.

+### Task

-  [3b89d8e0]: #subset "subset"
+A task defines a work area for an asset where an artist can work. For example asset *characterA* can have tasks named *modeling* and *rigging*. Tasks also have types. Multiple tasks of the same type may exist on an asset. A task with type `fx` could for example appear twice as *fx_fire* and *fx_cloth*.

+Without a task you cannot launch a host application.

+### Workfile
+
+The source scene file an artist works in within their task. These are versioned scene files that can be loaded and saved (automatically named) through the [workfiles tool](artist_tools_workfiles.md).

 ### Host

 General term for Software or Application supported by OpenPype and Avalon. These are usually DCC applications like Maya, Houdini or Nuke, but can also be a web based service like Ftrack or Clockify.

-
 ### Tool

 Small piece of software usually dedicated to a particular purpose. Most of OpenPype and Avalon tools have GUI, but some are command line only.
@@ -54,6 +70,10 @@ Small piece of software usually dedicated to a particular purpose. Most of OpenP

 Process of exporting data from your work scene to versioned, immutable file that can be used by other artists in the studio.

+#### (Publish) Instance
+
+A publish instance is a single entry which defines a publish output. Publish instances persist within the workfile. This way we can expect that a publish from a newer workfile will produce similar, consistent versioned outputs.
+
 ### Load

 Process of importing previously published subsets into your current scene, using any of the OpenPype tools.
diff --git a/website/docs/artist_hosts_houdini.md b/website/docs/artist_hosts_houdini.md
index f2b128ffc6..8874a0b5cf 100644
--- a/website/docs/artist_hosts_houdini.md
+++ b/website/docs/artist_hosts_houdini.md
@@ -14,20 +14,29 @@ sidebar_label: Houdini
 - [Library Loader](artist_tools_library-loader)

 ## Publishing Alembic Cameras
-You can publish baked camera in Alembic format. Select your camera and go **OpenPype -> Create** and select **Camera (abc)**.
+You can publish a baked camera in Alembic format.
+
+Select your camera and go **OpenPype -> Create** and select **Camera (abc)**.
 This will create Alembic ROP in **out** with path and frame range already set. This node will have a name you've assigned in the **Creator** menu. For example if you name the subset `Default`, output Alembic Driver will be named `cameraDefault`.
 After that, you can **OpenPype -> Publish** and after some validations your camera will be published to `abc` file.

 ## Publishing Composites - Image Sequences
-You can publish image sequence directly from Houdini. You can use any `cop` network you have and publish image
-sequence generated from it. For example I've created simple **cop** graph to generate some noise:
+You can publish image sequences directly from Houdini's image COP networks.
+
+You can use any COP node and publish the image sequence generated from it. For example, this simple graph generates some noise:
+
 ![Noise COP](assets/houdini_imagesequence_cop.png)

-If I want to publish it, I'll select node I like - in this case `radialblur1` and go **OpenPype -> Create** and
-select **Composite (Image Sequence)**. This will create `/out/imagesequenceNoise` Composite ROP (I've named my subset
-*Noise*) with frame range set. When you hit **Publish** it will render image sequence from selected node.
+To publish the output of the `radialblur1` node go to **OpenPype -> Create** and
+select **Composite (Image Sequence)**. If you name the variant *Noise* this will create the `/out/imagesequenceNoise` Composite ROP with the frame range set.
+
+When you hit **Publish** it will render the image sequence from the selected node.
+
+:::info Use selection
+With *Use selection* enabled on create, the node you have selected when creating will be the node used for publishing (it sets the Composite ROP node's COP path to it). If you don't do this you'll have to manually set the path as needed on e.g. `/out/imagesequenceNoise` to ensure it outputs what you want.
+:::

 ## Publishing Point Caches (alembic)
 Publishing point caches in alembic format is pretty straightforward, but it is by default enforcing better compatibility
@@ -46,6 +55,16 @@ you handle `path` attribute is up to you, this is just an example.*
 Now select the `output0` node and go **OpenPype -> Create** and select **Point Cache**.
 It will create Alembic ROP `/out/pointcacheStrange`

+## Publishing Reviews (OpenGL)
+To generate a review output from Houdini you need to create a **review** instance.
+Go to **OpenPype -> Create** and select **Review**.
+
+![Houdini Create Review](assets/houdini_review_create_attrs.png)
+
+On create, with the **Use Selection** checkbox enabled it will set up the first
+camera found in your selection as the camera for the OpenGL ROP node, and other
+non-cameras are set in **Force Objects**. It will then render those even if
+their display flag is disabled in your scene.
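+
+A minimal `hou` sketch of roughly what that setup amounts to (a hypothetical
+illustration, not the integration's actual code; it assumes Houdini's stock
+OpenGL ROP and the example node paths `/obj/cam1` and `/obj/geo1`):
+
+```python
+import hou
+
+# Create an OpenGL ROP and point it at the review camera.
+rop = hou.node("/out").createNode("opengl", node_name="review")
+rop.parm("camera").set("/obj/cam1")
+
+# Force the selected non-camera objects to render even when their
+# display flag is turned off.
+rop.parm("forceobjects").set("/obj/geo1")
+```

 ## Redshift

 :::note Work in progress
diff --git a/website/docs/artist_hosts_maya.md b/website/docs/artist_hosts_maya.md
index 0a551f0213..e36ccb77d2 100644
--- a/website/docs/artist_hosts_maya.md
+++ b/website/docs/artist_hosts_maya.md
@@ -238,12 +238,12 @@ For resolution and frame range, use **OpenPype → Set Frame Range** and

 Creating and publishing rigs with OpenPype follows similar workflow as with
 other data types. Create your rig and mark parts of your hierarchy in sets to
-help OpenPype validators and extractors to check it and publish it.
+help OpenPype validators and extractors to check and publish it.

 ### Preparing rig for publish

 When creating rigs, it is recommended (and it is in fact enforced by validators)
-to separate bones or driving objects, their controllers and geometry so they are
+to separate bones or driven objects, their controllers and geometry so they are
 easily managed.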
 Currently OpenPype doesn't allow to publish model at the same time as its rig so
 for demonstration purposes, I'll first create simple model for robotic
 arm, just made out of simple boxes and I'll publish it.
@@ -252,41 +252,48 @@ arm, just made out of simple boxes and I'll publish it.

 For more information about publishing models, see [Publishing models](artist_hosts_maya.md#publishing-models).

-Now lets start with empty scene. Load your model - **OpenPype → Load...**, right
+Now let's start with an empty scene. Load your model - **OpenPype → Load...**, right
 click on it and select **Reference (abc)**.

-I've created few bones and their controllers in two separate
-groups - `rig_GRP` and `controls_GRP`. Naming is not important - just adhere to
-your naming conventions.
+I've created a few bones in `rig_GRP`, their controllers in `controls_GRP` and
+placed the rig's output geometry in `geometry_GRP`. Naming of the groups is not important - just adhere to
+your naming conventions. Then I parented everything into a single top group named `arm_rig`.

-Then I've put everything into `arm_rig` group.
-
-When you've prepared your hierarchy, it's time to create *Rig instance* in OpenPype.
-Select your whole rig hierarchy and go **OpenPype → Create...**. Select **Rig**.
-Set is created in your scene to mark rig parts for export. Notice that it has
-two subsets - `controls_SET` and `out_SET`. Put your controls into `controls_SET`
+With the prepared hierarchy it is time to create a *Rig instance* in OpenPype.
+Select the top group of your rig and go to **OpenPype → Create...**. Select **Rig**.
+A publish set for your rig is created in your scene to mark rig parts for export.
+Notice that it has two subsets - `controls_SET` and `out_SET`. Put your controls into `controls_SET`
 and geometry to `out_SET`. You should end up with something like this:

 ![Maya - Rig Hierarchy Example](assets/maya-rig_hierarchy_example.jpg)

+:::note controls_SET and out_SET contents
+It is totally allowed to put the `geometry_GRP` in the `out_SET` as opposed to
+the individual meshes - it's even **recommended**. However, the `controls_SET`
+requires the individual controls in it that the artist is supposed to animate
+and manipulate, so the publish validators can accurately check the rig's
+controls.
+:::
+
 ### Publishing rigs

-Publishing rig is done in same way as publishing everything else. Save your scene
-and go **OpenPype → Publish**. When you run validation you'll mostly run at first into
-few issues. Although number of them will seem to be intimidating at first, you'll
-find out they are mostly minor things easily fixed.
+Publishing rigs is done in the same way as publishing everything else. Save your scene
+and go **OpenPype → Publish**. When you run validation you'll most likely run into
+a few issues at first. Although the number of them may seem intimidating, you
+will find out they are mostly minor things, easily fixed, and are there to optimize
+your rig for consistency and safe usage by the artist.

-* **Non Duplicate Instance Members (ID)** - This will most likely fail because when
+- **Non Duplicate Instance Members (ID)** - This will most likely fail because when
 creating rigs, we usually duplicate few parts of it to reuse them. But duplication
 will duplicate also ID of original object and OpenPype needs every object to have
 unique ID. This is easily fixed by **Repair** action next to validator name. Click on
 little up arrow on right side of validator name and select **Repair** from menu.
-* **Joints Hidden** - This is enforcing joints (bones) to be hidden for user as
+- **Joints Hidden** - This is enforcing joints (bones) to be hidden for user as
 animator usually doesn't need to see them and they clutter his viewports. So
 well behaving rig should have them hidden. **Repair** action will help here also.
-* **Rig Controllers** will check if there are no transforms on unlocked attributes
+- **Rig Controllers** will check if there are no transforms on unlocked attributes
 of controllers. This is needed because animator should have easy way to reset rig
 to its default position. It also checks that those attributes don't have any
 incoming connections from other parts of scene to ensure that published rig doesn't
 have any missing dependencies.
@@ -297,6 +304,19 @@ have any missing dependencies.

 You can load rig with [Loader](artist_tools_loader). Go **OpenPype → Load...**,
 select your rig, right click on it and **Reference** it.

+### Animation instances
+
+Whenever you load a rig an animation publish instance is automatically created
+for it. This means that if you load a rig you don't need to create a pointcache
+instance yourself to publish the geometry. This is all cleanly prepared for you
+when loading a published rig.
+
+:::tip Missing animation instance for your loaded rig?
+Did you accidentally delete the animation instance for a loaded rig? You can
+recreate it using the [**Recreate rig animation instance**](artist_hosts_maya.md#recreate-rig-animation-instance)
+inventory action.
+:::
+
 ## Point caches

 OpenPype is using Alembic format for point caches. Workflow is very similar as other data types.
@@ -646,3 +666,15 @@ Select 1 container of type `animation` or `pointcache`, then 1+ container of any

 The action searches the selected containers for 1 animation container of type `animation` or `pointcache`. This animation container will be connected to the rest of the selected containers.
 Matching geometries between containers is done by comparing the attribute `cbId`. The connection between geometries is done with a live blendshape.
+
+### Recreate rig animation instance
+
+This action can regenerate an animation instance for a loaded rig, for example
+when it was accidentally deleted by the user.
+
+![Maya - Inventory Action Recreate Rig Animation Instance](assets/maya-inventory_action_recreate_animation_instance.png)
+
+#### Usage
+
+Select 1 or more containers of type `rig` for which you want to recreate the
+animation instance.
diff --git a/website/docs/artist_hosts_maya_xgen.md b/website/docs/artist_hosts_maya_xgen.md
index ec5f2ed921..db7bbd0557 100644
--- a/website/docs/artist_hosts_maya_xgen.md
+++ b/website/docs/artist_hosts_maya_xgen.md
@@ -43,6 +43,10 @@ Create an Xgen instance to publish. This needs to contain only **one Xgen collec

 You can create multiple Xgen instances if you have multiple collections to publish.

+:::note
+The Xgen publishing requires a namespace on the Xgen collection (palette) and the geometry used.
+:::
+
 ### Publish

 The publishing process will grab geometry used for Xgen along with any external files used in the collection's descriptions. This creates an isolated Maya file with just the Xgen collection's dependencies, so you can use any nested geometry when creating the Xgen description.
 An Xgen version will consist of:
diff --git a/website/docs/artist_hosts_substancepainter.md b/website/docs/artist_hosts_substancepainter.md
new file mode 100644
index 0000000000..86bcbba82e
--- /dev/null
+++ b/website/docs/artist_hosts_substancepainter.md
@@ -0,0 +1,107 @@
+---
+id: artist_hosts_substancepainter
+title: Substance Painter
+sidebar_label: Substance Painter
+---
+
+## OpenPype global tools
+
+- [Work Files](artist_tools.md#workfiles)
+- [Load](artist_tools.md#loader)
+- [Manage (Inventory)](artist_tools.md#inventory)
+- [Publish](artist_tools.md#publisher)
+- [Library Loader](artist_tools.md#library-loader)
+
+## Working with OpenPype in Substance Painter
+
+The Substance Painter OpenPype integration allows you to:
+
+- Set the project mesh and easily keep it in sync with updates of the model
+- Easily export your textures as versioned publishes for others to load and update.
+
+## Setting the project mesh
+
+Substance Painter requires a project file to have a mesh path configured.
+As such, you can't start a workfile without choosing a mesh path.
+
+To start a new project using a published model you can _without an open project_
+use OpenPype > Load.. > Load Mesh on a supported publish. This will prompt you
+with a New Project prompt preset to that particular mesh file.
+
+If you already have a project open, you can also replace (reload) your mesh
+using the same Load Mesh functionality.
+
+After having the project mesh loaded or reloaded through the loader
+tool the mesh will be _managed_ by OpenPype. For example, you'll be notified
+on workfile open whether the mesh in your workfile is outdated. You can also
+set it to a specific version using OpenPype > Manage.. where you can right click
+on the project mesh to perform _Set Version_.
+
+:::info
+A Substance Painter project will always have only one mesh set. Whenever you
+trigger _Load Mesh_ from the loader this will **replace** your currently loaded
+mesh for your open project.
+:::
+
+## Publishing textures
+
+To publish your textures you must first create a `textureSet`
+publish instance.
+
+To create a **TextureSet instance** use OpenPype's publisher tool. Go
+to **OpenPype → Publish... → TextureSet**.
+
+The texture set instance defines which Substance Painter export template (`.spexp`) to
+use and thus which texture maps will be exported from your workfile. This
+can be set with the **Output Template** attribute on the instance.
+
+:::info
+The TextureSet instance gets saved with your Substance Painter project. As such,
+you will only need to configure this once for your workfile. Next time you can
+just click **OpenPype → Publish...** and start publishing directly with the
+same settings.
+:::
+
+#### Publish per output map of the Substance Painter preset
+
+The Texture Set instance generates a publish per output map that is defined in
+the Substance Painter export preset. For example a publish from a default
+PBR Metallic Roughness texture set results in six separate published subsets
+(if all the channels exist in your file).
+
+![Substance Painter PBR Metallic Roughness Export Preset](assets/substancepainter_pbrmetallicroughness_export_preset.png)
+
+When publishing for example a texture set with variant **Main**, six instances will
+be published with the variants:
+- Main.**BaseColor**
+- Main.**Emissive**
+- Main.**Height**
+- Main.**Metallic**
+- Main.**Normal**
+- Main.**Roughness**
+
+The bold output map name for the publish is based on the string that is pulled
+from what is considered to be the static part of the filename templates in
+the export preset. Tokens like `$mesh` and `(_$colorSpace)` are ignored.
+So `$mesh_$textureSet_BaseColor(_$colorSpace)(.$udim)` becomes `BaseColor`.
+
+An example output for PBR Metallic Roughness would be:
+
+![Substance Painter PBR Metallic Roughness Publish Example in Loader](assets/substancepainter_pbrmetallicroughness_published.png)
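+
+A rough sketch of that name derivation (a hypothetical illustration, not the
+integration's exact parsing code):
+
+```python
+import re
+
+# Drop the optional "(...)" groups and the "$" tokens; what remains is the
+# static part that becomes the output map name.
+template = "$mesh_$textureSet_BaseColor(_$colorSpace)(.$udim)"
+static = re.sub(r"\(.*?\)", "", template)    # -> "$mesh_$textureSet_BaseColor"
+static = re.sub(r"\$[A-Za-z]+", "", static)  # -> "__BaseColor"
+print(static.strip("_."))                    # -> "BaseColor"
+```
+
+## Known issues
+
+#### Can't see the OpenPype menu?
+
+If you're unable to see the OpenPype top level menu in Substance Painter make
+sure you have launched Substance Painter through OpenPype and that the OpenPype
+Integration plug-in is loaded inside Substance Painter: **Python > openpype_plugin**
+
+#### Substance Painter + Steam
+
+Running the Steam version of Substance Painter within OpenPype will require you
+to close the Steam executable before launching Substance Painter through OpenPype.
+Otherwise the Substance Painter process is launched using Steam's existing
+environment and thus will not be able to pick up the pipeline integration.
+
+This appears to be a limitation of how Steam works.
\ No newline at end of file
diff --git a/website/docs/assets/admin_hosts_photoshop_settings.png b/website/docs/assets/admin_hosts_photoshop_settings.png
new file mode 100644
index 0000000000..aaa6ecbed7
Binary files /dev/null and b/website/docs/assets/admin_hosts_photoshop_settings.png differ
diff --git a/website/docs/assets/houdini_review_create_attrs.png b/website/docs/assets/houdini_review_create_attrs.png
new file mode 100644
index 0000000000..8735e79914
Binary files /dev/null and b/website/docs/assets/houdini_review_create_attrs.png differ
diff --git a/website/docs/assets/maya-admin_custom_namespace.png b/website/docs/assets/maya-admin_custom_namespace.png
new file mode 100644
index 0000000000..80707ea727
Binary files /dev/null and b/website/docs/assets/maya-admin_custom_namespace.png differ
diff --git a/website/docs/assets/maya-admin_exclude_handles.png b/website/docs/assets/maya-admin_exclude_handles.png
new file mode 100644
index 0000000000..9a50f2c287
Binary files /dev/null and b/website/docs/assets/maya-admin_exclude_handles.png differ
diff --git a/website/docs/assets/maya-admin_include_handles.png b/website/docs/assets/maya-admin_include_handles.png
new file mode 100644
index 0000000000..88d2270ddc
Binary files /dev/null and b/website/docs/assets/maya-admin_include_handles.png differ
diff --git a/website/docs/assets/maya-inventory_action_recreate_animation_instance.png b/website/docs/assets/maya-inventory_action_recreate_animation_instance.png
new file mode 100644
index 0000000000..42a6f26964
Binary files /dev/null and b/website/docs/assets/maya-inventory_action_recreate_animation_instance.png differ
diff --git a/website/docs/assets/substancepainter_pbrmetallicroughness_export_preset.png b/website/docs/assets/substancepainter_pbrmetallicroughness_export_preset.png
new file mode 100644
index 0000000000..35a4545f83
Binary files /dev/null and b/website/docs/assets/substancepainter_pbrmetallicroughness_export_preset.png differ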
diff --git a/website/docs/assets/substancepainter_pbrmetallicroughness_published.png b/website/docs/assets/substancepainter_pbrmetallicroughness_published.png
new file mode 100644
index 0000000000..15b0e5b876
Binary files /dev/null and b/website/docs/assets/substancepainter_pbrmetallicroughness_published.png differ
diff --git a/website/docs/module_site_sync.md b/website/docs/module_site_sync.md
index 3e5794579c..68f56cb548 100644
--- a/website/docs/module_site_sync.md
+++ b/website/docs/module_site_sync.md
@@ -7,80 +7,112 @@ sidebar_label: Site Sync

 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';

+Site Sync allows users and studios to synchronize published assets between
+multiple 'sites'. A site denotes a storage location,
+which could be a physical disk, a server, or cloud storage. To be able to use
+Site Sync, it first needs to be configured.

-:::warning
-**This feature is** currently **in a beta stage** and it is not recommended to rely on it fully for production.
-:::
-
-Site Sync allows users and studios to synchronize published assets between multiple 'sites'. Site denotes a storage location,
-which could be a physical disk, server, cloud storage. To be able to use site sync, it first needs to be configured.
-
-The general idea is that each user acts as an individual site and can download and upload any published project files when they are needed. that way, artist can have access to the whole project, but only every store files that are relevant to them on their home workstation.
+The general idea is that each user acts as an individual site and can download
+and upload any published project files when they are needed. That way, an artist
+can have access to the whole project, but only ever store files that are
+relevant to them on their home workstation.

 :::note
-At the moment site sync is only able to deal with publishes files. No workfiles will be synchronized unless they are published. We are working on making workfile synchronization possible as well.
+At the moment Site Sync is only able to deal with published files. No workfiles
+will be synchronized unless they are published. We are working on making
+workfile synchronization possible as well.
 :::

 ## System Settings

-To use synchronization, *Site Sync* needs to be enabled globally in **OpenPype Settings/System/Modules/Site Sync**.
+To use synchronization, *Site Sync* needs to be enabled globally in **OpenPype
+Settings/System/Modules/Site Sync**.

 ![Configure module](assets/site_sync_system.png)

 ### Sites

 By default there are two sites created for each OpenPype installation:

-- **studio** - default site - usually a centralized mounted disk accessible to all artists. Studio site is used if Site Sync is disabled.
-- **local** - each workstation or server running OpenPype Tray receives its own with unique site name. Workstation refers to itself as "local"however all other sites will see it under it's unique ID.
-Artists can explore their site ID by opening OpenPype Info tool by clicking on a version number in the tray app.
+- **studio** - default site - usually a centralized mounted disk accessible to
+  all artists. The studio site is used if Site Sync is disabled.
+- **local** - each workstation or server running OpenPype Tray receives its own
+  unique site name. A workstation refers to itself as "local", however all
+  other sites will see it under its unique ID.

-Many different sites can be created and configured on the system level, and some or all can be assigned to each project.
+Artists can explore their site ID by opening the OpenPype Info tool by clicking on
+a version number in the tray app.

-Each OpenPype Tray app works with two sites at one time. (Sites can be the same, and no syncing is done in this setup).
+Many different sites can be created and configured on the system level, and
+some or all can be assigned to each project.

-Sites could be configured differently per project basis.
+Each OpenPype Tray app works with two sites at one time. (Sites can be the
+same, and no syncing is done in this setup.)

-Each new site needs to be created first in `System Settings`. Most important feature of site is its Provider, select one from already prepared Providers.
+Sites can be configured differently on a per-project basis.

-#### Alternative sites
+Each new site needs to be created first in `System Settings`. The most important
+feature of a site is its Provider; select one from the already prepared Providers.
+
+#### Alternative sites

 This attribute is meant for special use cases only.

-One of the use cases is sftp site vendoring (exposing) same data as regular site (studio). Each site is accessible for different audience. 'studio' for artists in a studio via shared disk, 'sftp' for externals via sftp server with mounted 'studio' drive.
+One of the use cases is an sftp site vendoring (exposing) the same data as a regular
+site (studio). Each site is accessible to a different audience: 'studio' for
+artists in a studio via a shared disk, 'sftp' for externals via an sftp server with
+a mounted 'studio' drive.

-Change of file status on one site actually means same change on 'alternate' site occurred too. (eg. artists publish to 'studio', 'sftp' is using
-same location >> file is accessible on 'sftp' site right away, no need to sync it anyhow.)
+A change of file status on one site actually means the same change on the 'alternate'
+site occurred too. (E.g. artists publish to 'studio', 'sftp' is using the
+same location >> the file is accessible on the 'sftp' site right away, no need to sync
+it anyhow.)

 ##### Example
+
 ![Configure module](assets/site_sync_system_sites.png)

-Admin created new `sftp` site which is handled by `SFTP` provider. Somewhere in the studio SFTP server is deployed on a machine that has access to `studio` drive.
+An admin created a new `sftp` site which is handled by the `SFTP` provider. Somewhere in
+the studio an SFTP server is deployed on a machine that has access to the `studio`
+drive.

 Alternative sites work both ways:
+
 - everything published to `studio` is accessible on a `sftp` site too
-- everything published to `sftp` (most probably via artist's local disk - artists publishes locally, representation is marked to be synced to `sftp`. Immediately after it is synced, it is marked to be available on `studio` too for artists in the studio to use.)
+- everything published to `sftp` (most probably via an artist's local disk - the
+  artist publishes locally, the representation is marked to be synced to `sftp`.
+  Immediately after it is synced, it is marked to be available on `studio` too
+  for artists in the studio to use.)

 ## Project Settings

-Sites need to be made available for each project. Of course this is possible to do on the default project as well, in which case all other projects will inherit these settings until overridden explicitly.
+Sites need to be made available for each project. Of course this is possible to
+do on the default project as well, in which case all other projects will
+inherit these settings until overridden explicitly.
 You'll find the setting in **Settings/Project/Global/Site Sync**

-The attributes that can be configured will vary between sites and their providers.
+The attributes that can be configured will vary between sites and their
+providers.

 ## Local settings

-Each user should configure root folder for their 'local' site via **Local Settings** in OpenPype Tray. This folder will be used for all files that the user publishes or downloads while working on a project. Artist has the option to set the folder as "default"in which case it is used for all the projects, or it can be set on a project level individually.
+Each user should configure a root folder for their 'local' site via **Local
+Settings** in OpenPype Tray. This folder will be used for all files that the
+user publishes or downloads while working on a project. The artist has the option
+to set the folder as "default", in which case it is used for all the projects, or
+it can be set on a project level individually.

 Artists can also override which site they use as active and remote if need be.

 ![Local overrides](assets/site_sync_local_setting.png)

 ## Providers

-Each site implements a so called `provider` which handles most common operations (list files, copy files etc.) and provides interface with a particular type of storage. (disk, gdrive, aws, etc.)
-Multiple configured sites could share the same provider with different settings (multiple mounted disks - each disk can be a separate site, while
+Each site implements a so-called `provider` which handles the most common
+operations (list files, copy files etc.) and provides an interface to a
+particular type of storage (disk, gdrive, aws, etc.).
+Multiple configured sites could share the same provider with different
+settings (multiple mounted disks - each disk can be a separate site, while
 all share the same provider).

 **Currently implemented providers:**
@@ -89,21 +121,30 @@ all share the same provider).

 Handles files stored on disk storage.

-Local drive provider is the most basic one that is used for accessing all standard hard disk storage scenarios. It will work with any storage that can be mounted on your system in a standard way. This could correspond to a physical external hard drive, network mounted storage, internal drive or even VPN connected network drive. It doesn't care about how the drive is mounted, but you must be able to point to it with a simple directory path.
+The local drive provider is the most basic one and is used for accessing all
+standard hard disk storage scenarios. It will work with any storage that can be
+mounted on your system in a standard way. This could correspond to a physical
+external hard drive, network mounted storage, internal drive or even a VPN
+connected network drive. It doesn't care about how the drive is mounted, but
+you must be able to point to it with a simple directory path.

 Default sites `local` and `studio` both use the local drive provider.

 ### Google Drive

-Handles files on Google Drive (this). GDrive is provided as a production example for implementing other cloud providers
+Handles files on Google Drive. GDrive is provided as a production
+example for implementing other cloud providers.

-Let's imagine a small globally distributed studio which wants all published work for all their freelancers uploaded to Google Drive folder.
+Let's imagine a small globally distributed studio which wants all published
+work for all their freelancers uploaded to a Google Drive folder.
 For this use case admin needs to configure:
+
+- how many times it tries to synchronize a file in case of some issue (network,
+  permissions)
+- how often synchronization should check for new assets
+- sites for synchronization - 'local' and 'gdrive' (this can be overridden in
+  local settings)
+- user credentials
+- root folder location on the Google Drive side

 Configuration would look like this:

 ![Configure project](assets/site_sync_project_settings.png)

-*Site Sync* for Google Drive works using its API: https://developers.google.com/drive/api/v3/about-sdk
+*Site Sync* for Google Drive works using its
+API: https://developers.google.com/drive/api/v3/about-sdk

-To configure Google Drive side you would need to have access to Google Cloud Platform project: https://console.cloud.google.com/
+To configure the Google Drive side you need access to a Google Cloud
+Platform project: https://console.cloud.google.com/

 To get a working connection to Google Drive there are some necessary steps:
+
-- first you need to enable GDrive API: https://developers.google.com/drive/api/v3/enable-drive-api
-- next you need to create user, choose **Service Account** (for basic configuration no roles for account are necessary)
+- first you need to enable the GDrive
+  API: https://developers.google.com/drive/api/v3/enable-drive-api
+- next you need to create a user, choose **Service Account** (for basic
+  configuration no roles for the account are necessary)
 - add new key for created account and download .json file with credentials
-- share destination folder on the Google Drive with created account (directly in GDrive web application)
-- add new site back in OpenPype Settings, name as you want, provider needs to be 'gdrive'
+- share the destination folder on Google Drive with the created account (directly
+  in the GDrive web application)
+- add a new site back in OpenPype Settings, name it as you want, the provider needs
+  to be 'gdrive'
 - distribute credentials file via shared mounted disk location
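+
+As a quick way to verify that the downloaded service-account credentials work
+(a hedged sketch using the official Google API client, not the provider's
+actual code; the credentials path is an example):
+
+```python
+from google.oauth2 import service_account
+from googleapiclient.discovery import build
+
+SCOPES = ["https://www.googleapis.com/auth/drive"]
+
+creds = service_account.Credentials.from_service_account_file(
+    "/shared/credentials/gdrive-service-account.json", scopes=SCOPES)
+service = build("drive", "v3", credentials=creds)
+
+# List a few files visible to the service account to confirm the destination
+# folder was shared with it correctly.
+response = service.files().list(pageSize=5, fields="files(id, name)").execute()
+for item in response.get("files", []):
+    print(item["name"], item["id"])
+```

 :::note
-If you are using regular personal GDrive for testing don't forget adding `/My Drive` as the prefix in root configuration. Business accounts and share drives don't need this.
+If you are using a regular personal GDrive for testing don't forget
+to add `/My Drive` as the prefix in the root configuration. Business accounts and
+share drives don't need this.
 :::

 ### SFTP

-SFTP provider is used to connect to SFTP server. Currently authentication with `user:password` or `user:ssh key` is implemented.
-Please provide only one combination, don't forget to provide password for ssh key if ssh key was created with a passphrase.
+The SFTP provider is used to connect to an SFTP server. Currently authentication
+with `user:password` or `user:ssh key` is implemented.
+Please provide only one combination; don't forget to provide the password for the ssh
+key if the ssh key was created with a passphrase.

-(SFTP connection could be a bit finicky, use FileZilla or WinSCP for testing connection, it will be mush faster.)
+(The SFTP connection can be a bit finicky; use FileZilla or WinSCP for testing the
+connection, it will be much faster.)

-Beware that ssh key expects OpenSSH format (`.pem`) not a Putty format (`.ppk`)!
+Beware that the ssh key is expected in OpenSSH format (`.pem`), not Putty
+format (`.ppk`)!
+
+As a rough illustration of the two authentication combinations (a paramiko
+sketch, not the provider's actual code; host, user and paths are examples):
+
+```python
+import paramiko
+
+client = paramiko.SSHClient()
+client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
+
+# Either user + password ...
+client.connect("sftp.example.com", username="user", password="secret")
+
+# ... or user + ssh key (OpenSSH format); pass the passphrase if the key
+# was created with one:
+# key = paramiko.RSAKey.from_private_key_file(
+#     "/path/to/id_rsa.pem", password="passphrase")
+# client.connect("sftp.example.com", username="user", pkey=key)
+
+sftp = client.open_sftp()
+print(sftp.listdir("/upload"))  # destination folder from the settings
+sftp.close()
+client.close()
+```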
#### How to set SFTP site

@@ -143,60 +197,101 @@ Beware that ssh key expects OpenSSH format (`.pem`) not a Putty format (`.ppk`)!

![Enable syncing and create site](assets/site_sync_sftp_system.png)

-- In Projects setting enable Site Sync (on default project - all project will be synched, or on specific project)
-- Configure SFTP connection and destination folder on a SFTP server (in screenshot `/upload`)
+- In Project settings enable Site Sync (on the default project, so all
+  projects will be synced, or on a specific project)
+- Configure the SFTP connection and a destination folder on the SFTP server
+  (`/upload` in the screenshot)

![SFTP connection](assets/site_sync_project_sftp_settings.png)
-
-- if you want to force syncing between local and sftp site for all users, use combination `active site: local`, `remote site: NAME_OF_SFTP_SITE`
-- if you want to allow only specific users to use SFTP syncing (external users, not located in the office), use `active site: studio`, `remote site: studio`.
+
+- if you want to force syncing between the local and sftp sites for all users,
+  use the combination `active site: local`, `remote site: NAME_OF_SFTP_SITE`
+- if you want to allow only specific users to use SFTP syncing (external users,
+  not located in the office), use `active site: studio`, `remote site: studio`.

![Select active and remote site on a project](assets/site_sync_sftp_project_setting_not_forced.png)

-- Each artist can decide and configure syncing from his/her local to SFTP via `Local Settings`
+- Each artist can decide and configure syncing from their local site to SFTP
+  via `Local Settings`

![Select active and remote site on a project](assets/site_sync_sftp_settings_local.png)
-
+
### Custom providers

-If a studio needs to use other services for cloud storage, or want to implement totally different storage providers, they can do so by writing their own provider plugin. We're working on a developer documentation, however, for now we recommend looking at `abstract_provider.py`and `gdrive.py` inside `openpype/modules/sync_server/providers` and using it as a template.
+If a studio needs to use other services for cloud storage, or wants to
+implement totally different storage providers, it can do so by writing its own
+provider plugin. We're working on developer documentation; for now
+we recommend looking at `abstract_provider.py` and `gdrive.py`
+inside `openpype/modules/sync_server/providers` and using them as a template.

### Running Site Sync in background

-Site Sync server synchronizes new published files from artist machine into configured remote location by default.
+By default, the Site Sync server synchronizes newly published files from the
+artist's machine into the configured remote location.

-There might be a use case where you need to synchronize between "non-artist" sites, for example between studio site and cloud. In this case
-you need to run Site Sync as a background process from a command line (via service etc) 24/7.
+There might be a use case where you need to synchronize between "non-artist"
+sites, for example between the studio site and the cloud. In this case
+you need to run Site Sync as a background process from a command line (via a
+service etc.) 24/7.

-To configure all sites where all published files should be synced eventually you need to configure `project_settings/global/sync_server/config/always_accessible_on` property in Settings (per project) first.
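+
+In practice such a 24/7 process is just the `syncserver` command kept running
+by whatever service manager you use. A minimal sketch of a launcher script
+follows (it assumes `openpype_console` is on PATH and that the active site is
+`studio`; setting `OPENPYPE_LOCAL_ID` here mirrors the Tray configuration
+described in the steps below and is an assumption, not a documented
+requirement):
+
+```python
+import os
+import subprocess
+
+env = dict(os.environ)
+# Name of the active (source) site this machine represents.
+env["OPENPYPE_LOCAL_ID"] = "studio"
+
+# Keep running; restart policy is left to the service manager (systemd etc.).
+subprocess.run(
+    ["openpype_console", "syncserver", "--active_site", "studio"],
+    env=env,
+    check=True,
+)
+```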
+To configure all sites where all published files should eventually be synced,
+you need to
+configure the `project_settings/global/sync_server/config/always_accessible_on`
+property in Settings (per project) first.

![Set another non artist remote site](assets/site_sync_always_on.png)

This is an example of:
+
- Site Sync is enabled for a project
-- default active and remote sites are set to `studio` - eg. standard process: everyone is working in a studio, publishing to shared location etc.
-- (but this also allows any of the artists to work remotely, they would change their active site in their own Local Settings to `local` and configure local root.
- This would result in everything artist publishes is saved first onto his local folder AND synchronized to `studio` site eventually.)
+- default active and remote sites are set to `studio` - e.g. the standard
+  process: everyone is working in the studio, publishing to a shared location,
+  etc.
+- (but this also allows any of the artists to work remotely; they would change
+  their active site in their own Local Settings to `local` and configure a
+  local root.
+  This would result in everything the artist publishes being saved first into
+  their local folder AND synchronized to the `studio` site eventually.)
- everything exported must also be eventually uploaded to `sftp` site

-This eventual synchronization between `studio` and `sftp` sites must be physically handled by background process.
+This eventual synchronization between `studio` and `sftp` sites must be
+physically handled by a background process.

-As current implementation relies heavily on Settings and Local Settings, background process for a specific site ('studio' for example) must be configured via Tray first to `syncserver` command to work.
+As the current implementation relies heavily on Settings and Local Settings,
+the background process for a specific site ('studio' for example) must first be
+configured via Tray for the `syncserver` command to work.

To do this:
+
-- run OP `Tray` with environment variable OPENPYPE_LOCAL_ID set to name of active (source) site. In most use cases it would be studio (for cases of backups of everything published to studio site to different cloud site etc.)
+- run OP `Tray` with the environment variable OPENPYPE_LOCAL_ID set to the name
+  of the active (source) site. In most use cases it would be `studio` (e.g. for
+  backing up everything published to the studio site to a different cloud site)
- start `Tray`
-- check `Local ID` in information dialog after clicking on version number in the Tray
+- check `Local ID` in the information dialog after clicking on the version
+  number in the Tray
- open `Local Settings` in the `Tray`
- configure for each project necessary active site and remote site
- close `Tray`
- run OP from a command line with `syncserver` and `--active_site` arguments
-
-This is an example how to trigger background syncing process where active (source) site is `studio`.
-(It is expected that OP is installed on a machine, `openpype_console` is on PATH. If not, add full path to executable.
)
+This is an example of how to trigger the background syncing process where the
+active (source) site is `studio`.
+(It is expected that OP is installed on the machine and `openpype_console` is
+on PATH. If not, add the full path to the executable.
)
+
```shell
openpype_console syncserver --active_site studio
-```
\ No newline at end of file
+```
+
+### Syncing of last published workfile
+
+Some DCCs might have
+`project_settings/global/tools/Workfiles/last_workfile_on_startup` enabled,
+i.e. the DCC opens with the last opened workfile.
+
+Flag `use_last_published_workfile` tells OpenPype that the last published
+workfile should be used if no workfile is present locally.
+This can happen when an artist starts working on a new task locally and
+doesn't have a workfile present yet. In that case the last published workfile
+is synchronized locally and its version is bumped by 1 (a workfile's version
+is always the published version + 1).
\ No newline at end of file
diff --git a/website/docs/project_settings/settings_project_global.md b/website/docs/project_settings/settings_project_global.md
index 2de9038f3f..c17f707830 100644
--- a/website/docs/project_settings/settings_project_global.md
+++ b/website/docs/project_settings/settings_project_global.md
@@ -255,7 +255,7 @@ suffix is **"client"** then the final suffix is **"h264_client"**.
 | resolution_height | Resolution height. |
 | fps | Fps of an output. |
 | timecode | Timecode by frame start and fps. |
- | focalLength | **Only available in Maya**<br/><br/>Camera focal length per frame. Use syntax `{focalLength:.2f}` for decimal truncating. Eg. `35.234985` with `{focalLength:.2f}` would produce `35.23`, whereas `{focalLength:.0f}` would produce `35`. |
+ | focalLength | **Only available in Maya and Houdini**<br/><br/>Camera focal length per frame. Use syntax `{focalLength:.2f}` for decimal truncating. Eg. `35.234985` with `{focalLength:.2f}` would produce `35.23`, whereas `{focalLength:.0f}` would produce `35`. |

:::warning
`timecode` is a specific key that can be **only at the end of content**. (`"BOTTOM_RIGHT": "TC: {timecode}"`)
diff --git a/website/docs/pype2/admin_presets_plugins.md b/website/docs/pype2/admin_presets_plugins.md
index 44c2a28dec..6a057f4bb4 100644
--- a/website/docs/pype2/admin_presets_plugins.md
+++ b/website/docs/pype2/admin_presets_plugins.md
@@ -304,7 +304,7 @@ If source representation has suffix **"h264"** and burnin suffix is **"client"**
 | resolution_height | Resolution height. |
 | fps | Fps of an output. |
 | timecode | Timecode by frame start and fps. |
- | focalLength | **Only available in Maya**<br/><br/>Camera focal length per frame. Use syntax `{focalLength:.2f}` for decimal truncating. Eg. `35.234985` with `{focalLength:.2f}` would produce `35.23`, whereas `{focalLength:.0f}` would produce `35`. |
+ | focalLength | **Only available in Maya and Houdini**<br/><br/>Camera focal length per frame. Use syntax `{focalLength:.2f}` for decimal truncating. Eg. `35.234985` with `{focalLength:.2f}` would produce `35.23`, whereas `{focalLength:.0f}` would produce `35`. |

:::warning
`timecode` is specific key that can be **only at the end of content**. (`"BOTTOM_RIGHT": "TC: {timecode}"`)
diff --git a/website/sidebars.js b/website/sidebars.js
index 93887e00f6..4874782197 100644
--- a/website/sidebars.js
+++ b/website/sidebars.js
@@ -126,6 +126,7 @@ module.exports = {
        "admin_hosts_nuke",
        "admin_hosts_resolve",
        "admin_hosts_harmony",
+        "admin_hosts_photoshop",
        "admin_hosts_aftereffects",
        "admin_hosts_tvpaint"
      ],