Mirror of https://github.com/ynput/ayon-core.git (synced 2025-12-24 12:54:40 +01:00)
Merge remote-tracking branch 'origin/develop' into 3.0/refactoring
Commit 08442b2623
55 changed files with 2117 additions and 964 deletions
@@ -13,12 +13,8 @@ jobs:
git config --global user.email "mkolar@users.noreply.github.com"
git config --global user.name "Website Deployment Script"
echo "machine github.com login mkolar password $GITHUB_TOKEN" > ~/.netrc
cd website && yarn install && GIT_USER=mkolar yarn run publish-gh-pages
cd website && yarn install && GIT_USER=mkolar yarn run deploy

workflows:
build_and_deploy:
jobs:
- deploy-website:
filters:
branches:
only: feature/move_documentation
7  .github/weekly-digest.yml  (vendored, new file)
@@ -0,0 +1,7 @@
# Configuration for weekly-digest - https://github.com/apps/weekly-digest
publishDay: sun
canPublishIssues: true
canPublishPullRequests: true
canPublishContributors: true
canPublishStargazers: true
canPublishCommits: true
167  CHANGELOG.md
@@ -1,6 +1,169 @@
|
|||
# Changelog
|
||||
|
||||
## [2.12.0](https://github.com/pypeclub/pype/tree/2.12.0) (2020-09-09)
|
||||
## [2.13.1](https://github.com/pypeclub/pype/tree/2.13.1) (2020-10-23)
|
||||
|
||||
[Full Changelog](https://github.com/pypeclub/pype/compare/2.13.0...2.13.1)
|
||||
|
||||
**Enhancements:**
|
||||
|
||||
- move maya look assigner to pype menu [\#292](https://github.com/pypeclub/pype/issues/292)
|
||||
|
||||
**Fixed bugs:**
|
||||
|
||||
- Layer name is not propagating to metadata in Photoshop [\#654](https://github.com/pypeclub/pype/issues/654)
|
||||
- Loader in Photoshop fails with "can't set attribute" [\#650](https://github.com/pypeclub/pype/issues/650)
|
||||
- Hiero: Review video file adding one frame to the end [\#659](https://github.com/pypeclub/pype/issues/659)
|
||||
|
||||
## [2.13.0](https://github.com/pypeclub/pype/tree/2.13.0) (2020-10-18)
|
||||
|
||||
[Full Changelog](https://github.com/pypeclub/pype/compare/2.12.5...2.13.0)
|
||||
|
||||
**Enhancements:**
|
||||
|
||||
- Deadline Output Folder [\#636](https://github.com/pypeclub/pype/issues/636)
|
||||
- Nuke Camera Loader [\#565](https://github.com/pypeclub/pype/issues/565)
|
||||
- Deadline publish job shows publishing output folder [\#649](https://github.com/pypeclub/pype/pull/649)
|
||||
- Get latest version in lib [\#642](https://github.com/pypeclub/pype/pull/642)
|
||||
- Improved publishing of multiple representation from SP [\#638](https://github.com/pypeclub/pype/pull/638)
|
||||
- Launch TvPaint shot work file from within Ftrack [\#631](https://github.com/pypeclub/pype/pull/631)
|
||||
- Add mp4 support for RV action. [\#628](https://github.com/pypeclub/pype/pull/628)
|
||||
- Maya: allow renders to have version synced with workfile [\#618](https://github.com/pypeclub/pype/pull/618)
|
||||
- Renaming nukestudio host folder to hiero [\#617](https://github.com/pypeclub/pype/pull/617)
|
||||
- Harmony: More efficient publishing [\#615](https://github.com/pypeclub/pype/pull/615)
|
||||
- Ftrack server action improvement [\#608](https://github.com/pypeclub/pype/pull/608)
|
||||
- Deadline user defaults to pype username if present [\#607](https://github.com/pypeclub/pype/pull/607)
|
||||
- Standalone publisher now has icon [\#606](https://github.com/pypeclub/pype/pull/606)
|
||||
- Nuke render write targeting knob improvement [\#603](https://github.com/pypeclub/pype/pull/603)
|
||||
- Animated pyblish gui [\#602](https://github.com/pypeclub/pype/pull/602)
|
||||
- Maya: Deadline - make use of asset dependencies optional [\#591](https://github.com/pypeclub/pype/pull/591)
|
||||
- Nuke: Publishing, loading and updating alembic cameras [\#575](https://github.com/pypeclub/pype/pull/575)
|
||||
- Maya: add look assigner to pype menu even if scriptsmenu is not available [\#573](https://github.com/pypeclub/pype/pull/573)
|
||||
- Store task types in the database [\#572](https://github.com/pypeclub/pype/pull/572)
|
||||
- Maya: Tiled EXRs to scanline EXRs render option [\#512](https://github.com/pypeclub/pype/pull/512)
|
||||
- Fusion basic integration [\#452](https://github.com/pypeclub/pype/pull/452)
|
||||
|
||||
**Fixed bugs:**
|
||||
|
||||
- Burnin script did not propagate ffmpeg output [\#640](https://github.com/pypeclub/pype/issues/640)
|
||||
- Pyblish-pype spacer in terminal wasn't transparent [\#646](https://github.com/pypeclub/pype/pull/646)
|
||||
- Lib subprocess without logger [\#645](https://github.com/pypeclub/pype/pull/645)
|
||||
- Nuke: prevent crash if we only have single frame in sequence [\#644](https://github.com/pypeclub/pype/pull/644)
|
||||
- Burnin script logs better output [\#641](https://github.com/pypeclub/pype/pull/641)
|
||||
- Missing audio on farm submission. [\#639](https://github.com/pypeclub/pype/pull/639)
|
||||
- review from imagesequence error [\#633](https://github.com/pypeclub/pype/pull/633)
|
||||
- Hiero: wrong order of fps clip instance data collecting [\#627](https://github.com/pypeclub/pype/pull/627)
|
||||
- Add source for review instances. [\#625](https://github.com/pypeclub/pype/pull/625)
|
||||
- Task processing in event sync [\#623](https://github.com/pypeclub/pype/pull/623)
|
||||
- sync to avalon doesn t remove renamed task [\#619](https://github.com/pypeclub/pype/pull/619)
|
||||
- Intent publish setting wasn't working with default value [\#562](https://github.com/pypeclub/pype/pull/562)
|
||||
- Maya: Updating a look where the shader name changed, leaves the geo without a shader [\#514](https://github.com/pypeclub/pype/pull/514)
|
||||
|
||||
**Merged pull requests:**
|
||||
|
||||
- Audio file existence check [\#614](https://github.com/pypeclub/pype/pull/614)
|
||||
- Avalon module without Qt [\#581](https://github.com/pypeclub/pype/pull/581)
|
||||
- Ftrack module without Qt [\#577](https://github.com/pypeclub/pype/pull/577)
|
||||
|
||||
## [2.12.5](https://github.com/pypeclub/pype/tree/2.12.5) (2020-10-14)
|
||||
|
||||
[Full Changelog](https://github.com/pypeclub/pype/compare/2.12.4...2.12.5)
|
||||
|
||||
**Enhancements:**
|
||||
|
||||
- Launch TvPaint shot work file from within Ftrack [\#629](https://github.com/pypeclub/pype/issues/629)
|
||||
|
||||
**Merged pull requests:**
|
||||
|
||||
- Harmony: Disable application launch logic [\#637](https://github.com/pypeclub/pype/pull/637)
|
||||
|
||||
## [2.12.4](https://github.com/pypeclub/pype/tree/2.12.4) (2020-10-08)
|
||||
|
||||
[Full Changelog](https://github.com/pypeclub/pype/compare/2.12.3...2.12.4)
|
||||
|
||||
**Enhancements:**
|
||||
|
||||
- convert nukestudio to hiero host [\#616](https://github.com/pypeclub/pype/issues/616)
|
||||
- Fusion basic integration [\#451](https://github.com/pypeclub/pype/issues/451)
|
||||
|
||||
**Fixed bugs:**
|
||||
|
||||
- Sync to avalon doesn't remove renamed task [\#605](https://github.com/pypeclub/pype/issues/605)
|
||||
- NukeStudio: FPS collecting into clip instances [\#624](https://github.com/pypeclub/pype/pull/624)
|
||||
|
||||
**Merged pull requests:**
|
||||
|
||||
- NukeStudio: small fixes [\#622](https://github.com/pypeclub/pype/pull/622)
|
||||
- NukeStudio: broken order of plugins [\#620](https://github.com/pypeclub/pype/pull/620)
|
||||
|
||||
## [2.12.3](https://github.com/pypeclub/pype/tree/2.12.3) (2020-10-06)
|
||||
|
||||
[Full Changelog](https://github.com/pypeclub/pype/compare/2.12.2...2.12.3)
|
||||
|
||||
**Enhancements:**
|
||||
|
||||
- Nuke Publish Camera [\#567](https://github.com/pypeclub/pype/issues/567)
|
||||
- Harmony: open xstage file no matter of its name [\#526](https://github.com/pypeclub/pype/issues/526)
|
||||
- Stop integration of unwanted data [\#387](https://github.com/pypeclub/pype/issues/387)
|
||||
- Move avalon-launcher functionality to pype [\#229](https://github.com/pypeclub/pype/issues/229)
|
||||
- avalon workfiles api [\#214](https://github.com/pypeclub/pype/issues/214)
|
||||
- Store task types [\#180](https://github.com/pypeclub/pype/issues/180)
|
||||
- Avalon Mongo Connection split [\#136](https://github.com/pypeclub/pype/issues/136)
|
||||
- nk camera workflow [\#71](https://github.com/pypeclub/pype/issues/71)
|
||||
- Hiero integration added [\#590](https://github.com/pypeclub/pype/pull/590)
|
||||
- Anatomy instance data collection is substantially faster for many instances [\#560](https://github.com/pypeclub/pype/pull/560)
|
||||
|
||||
**Fixed bugs:**
|
||||
|
||||
- test issue [\#596](https://github.com/pypeclub/pype/issues/596)
|
||||
- Harmony: empty scene contamination [\#583](https://github.com/pypeclub/pype/issues/583)
|
||||
- Edit publishing in SP doesn't respect shot selection for publishing [\#542](https://github.com/pypeclub/pype/issues/542)
|
||||
- Pathlib breaks compatibility with python2 hosts [\#281](https://github.com/pypeclub/pype/issues/281)
|
||||
- Updating a look where the shader name changed leaves the geo without a shader [\#237](https://github.com/pypeclub/pype/issues/237)
|
||||
- Better error handling [\#84](https://github.com/pypeclub/pype/issues/84)
|
||||
- Harmony: function signature [\#609](https://github.com/pypeclub/pype/pull/609)
|
||||
- Nuke: gizmo publishing error [\#594](https://github.com/pypeclub/pype/pull/594)
|
||||
- Harmony: fix clashing namespace of called js functions [\#584](https://github.com/pypeclub/pype/pull/584)
|
||||
- Maya: fix maya scene type preset exception [\#569](https://github.com/pypeclub/pype/pull/569)
|
||||
|
||||
**Closed issues:**
|
||||
|
||||
- Nuke Gizmo publishing [\#597](https://github.com/pypeclub/pype/issues/597)
|
||||
- nuke gizmo publishing error [\#592](https://github.com/pypeclub/pype/issues/592)
|
||||
- Publish EDL [\#579](https://github.com/pypeclub/pype/issues/579)
|
||||
- Publish render from SP [\#576](https://github.com/pypeclub/pype/issues/576)
|
||||
- rename ftrack custom attribute group to `pype` [\#184](https://github.com/pypeclub/pype/issues/184)
|
||||
|
||||
**Merged pull requests:**
|
||||
|
||||
- NKS small fixes [\#587](https://github.com/pypeclub/pype/pull/587)
|
||||
- Standalone publisher editorial plugins interfering [\#580](https://github.com/pypeclub/pype/pull/580)
|
||||
|
||||
## [2.12.2](https://github.com/pypeclub/pype/tree/2.12.2) (2020-09-25)
|
||||
|
||||
[Full Changelog](https://github.com/pypeclub/pype/compare/2.12.1...2.12.2)
|
||||
|
||||
**Enhancements:**
|
||||
|
||||
- pype config GUI [\#241](https://github.com/pypeclub/pype/issues/241)
|
||||
|
||||
**Fixed bugs:**
|
||||
|
||||
- Harmony: Saving heavy scenes will crash [\#507](https://github.com/pypeclub/pype/issues/507)
|
||||
- Extract review a representation name with `\*\_burnin` [\#388](https://github.com/pypeclub/pype/issues/388)
|
||||
- Hierarchy data was not considering active instances [\#551](https://github.com/pypeclub/pype/pull/551)
|
||||
|
||||
## [2.12.1](https://github.com/pypeclub/pype/tree/2.12.1) (2020-09-15)
|
||||
|
||||
[Full Changelog](https://github.com/pypeclub/pype/compare/2.12.0...2.12.1)
|
||||
|
||||
**Fixed bugs:**
|
||||
|
||||
- Pype: changelog.md is outdated [\#503](https://github.com/pypeclub/pype/issues/503)
|
||||
- dependency security alert ! [\#484](https://github.com/pypeclub/pype/issues/484)
|
||||
- Maya: RenderSetup is missing update [\#106](https://github.com/pypeclub/pype/issues/106)
|
||||
- \<pyblish plugin\> extract effects creates new instance [\#78](https://github.com/pypeclub/pype/issues/78)
|
||||
|
||||
## [2.12.0](https://github.com/pypeclub/pype/tree/2.12.0) (2020-09-10)
|
||||
|
||||
[Full Changelog](https://github.com/pypeclub/pype/compare/2.11.8...2.12.0)
|
||||
|
||||
|
|
@@ -16,13 +179,13 @@
|
|||
- Properly containerize image plane loads. [\#434](https://github.com/pypeclub/pype/pull/434)
|
||||
- Option to keep the review files. [\#426](https://github.com/pypeclub/pype/pull/426)
|
||||
- Isolate view on instance members. [\#425](https://github.com/pypeclub/pype/pull/425)
|
||||
- ftrack group is bcw compatible [\#418](https://github.com/pypeclub/pype/pull/418)
|
||||
- Maya: Publishing of tile renderings on Deadline [\#398](https://github.com/pypeclub/pype/pull/398)
|
||||
- Feature/little bit better logging gui [\#383](https://github.com/pypeclub/pype/pull/383)
|
||||
|
||||
**Fixed bugs:**
|
||||
|
||||
- Maya: Fix tile order for Draft Tile Assembler [\#511](https://github.com/pypeclub/pype/pull/511)
|
||||
- NukeStudio: Fix comment tag collection and integration. [\#508](https://github.com/pypeclub/pype/pull/508)
|
||||
- Remove extra dash [\#501](https://github.com/pypeclub/pype/pull/501)
|
||||
- Fix: strip dot from repre names in single frame renders [\#498](https://github.com/pypeclub/pype/pull/498)
|
||||
- Better handling of destination during integrating [\#485](https://github.com/pypeclub/pype/pull/485)
|
||||
|
|
|
|||
126  pype/hooks/tvpaint/prelaunch.py  (new file)
@@ -0,0 +1,126 @@
import os
|
||||
import shutil
|
||||
from pype.lib import PypeHook
|
||||
from pype.api import (
|
||||
Anatomy,
|
||||
Logger
|
||||
)
|
||||
import getpass
|
||||
import avalon.api
|
||||
|
||||
|
||||
class TvpaintPrelaunchHook(PypeHook):
|
||||
"""
|
||||
Workfile preparation hook
|
||||
"""
|
||||
host_name = "tvpaint"
|
||||
|
||||
def __init__(self, logger=None):
|
||||
if not logger:
|
||||
self.log = Logger().get_logger(self.__class__.__name__)
|
||||
else:
|
||||
self.log = logger
|
||||
|
||||
self.signature = "( {} )".format(self.__class__.__name__)
|
||||
|
||||
def execute(self, *args, env: dict = None) -> bool:
|
||||
if not env:
|
||||
env = os.environ
|
||||
|
||||
# get context variables
|
||||
project_name = env["AVALON_PROJECT"]
|
||||
asset_name = env["AVALON_ASSET"]
|
||||
task_name = env["AVALON_TASK"]
|
||||
workdir = env["AVALON_WORKDIR"]
|
||||
extension = avalon.api.HOST_WORKFILE_EXTENSIONS[self.host_name][0]
|
||||
|
||||
# get workfile path
|
||||
workfile_path = self.get_anatomy_filled(
|
||||
workdir, project_name, asset_name, task_name)
|
||||
|
||||
# create workdir if it doesn't exist
|
||||
os.makedirs(workdir, exist_ok=True)
|
||||
self.log.info(f"Work dir is: `{workdir}`")
|
||||
|
||||
# get last version of workfile
|
||||
workfile_last = env.get("AVALON_LAST_WORKFILE")
|
||||
self.log.debug(f"_ workfile_last: `{workfile_last}`")
|
||||
|
||||
if workfile_last:
|
||||
workfile = workfile_last
|
||||
workfile_path = os.path.join(workdir, workfile)
|
||||
|
||||
# copy workfile from template if none exists on the path
|
||||
if not os.path.isfile(workfile_path):
|
||||
# try to get path from environment or use default
|
||||
# from `pype.hosts.tvpaint` dir
|
||||
template_path = env.get("TVPAINT_TEMPLATE") or os.path.join(
|
||||
env.get("PYPE_MODULE_ROOT"),
|
||||
"pype/hosts/tvpaint/template.tvpp"
|
||||
)
|
||||
|
||||
# try to get template from project config folder
|
||||
proj_config_path = os.path.join(
|
||||
env["PYPE_PROJECT_CONFIGS"], project_name)
|
||||
if os.path.exists(proj_config_path):
|
||||
|
||||
template_file = None
|
||||
for f in os.listdir(proj_config_path):
|
||||
if extension in os.path.splitext(f):
|
||||
template_file = f
|
||||
|
||||
if template_file:
|
||||
template_path = os.path.join(
|
||||
proj_config_path, template_file)
|
||||
self.log.info(
|
||||
f"Creating workfile from template: `{template_path}`")
|
||||
|
||||
# copy template to the new destination
|
||||
shutil.copy2(
|
||||
os.path.normpath(template_path),
|
||||
os.path.normpath(workfile_path)
|
||||
)
|
||||
|
||||
self.log.info(f"Workfile to open: `{workfile_path}`")
|
||||
|
||||
# add compulsory environment variable for opening the file
|
||||
env["PYPE_TVPAINT_PROJECT_FILE"] = workfile_path
|
||||
|
||||
return True
|
||||
|
||||
def get_anatomy_filled(self, workdir, project_name, asset_name, task_name):
|
||||
dbcon = avalon.api.AvalonMongoDB()
|
||||
dbcon.install()
|
||||
dbcon.Session["AVALON_PROJECT"] = project_name
|
||||
project_document = dbcon.find_one({"type": "project"})
|
||||
asset_document = dbcon.find_one({
|
||||
"type": "asset",
|
||||
"name": asset_name
|
||||
})
|
||||
dbcon.uninstall()
|
||||
|
||||
asset_doc_parents = asset_document["data"].get("parents")
|
||||
hierarchy = "/".join(asset_doc_parents)
|
||||
|
||||
data = {
|
||||
"project": {
|
||||
"name": project_document["name"],
|
||||
"code": project_document["data"].get("code")
|
||||
},
|
||||
"task": task_name,
|
||||
"asset": asset_name,
|
||||
"app": self.host_name,
|
||||
"hierarchy": hierarchy
|
||||
}
|
||||
anatomy = Anatomy(project_name)
|
||||
extensions = avalon.api.HOST_WORKFILE_EXTENSIONS[self.host_name]
|
||||
file_template = anatomy.templates["work"]["file"]
|
||||
data.update({
|
||||
"version": 1,
|
||||
"user": os.environ.get("PYPE_USERNAME") or getpass.getuser(),
|
||||
"ext": extensions[0]
|
||||
})
|
||||
|
||||
return avalon.api.last_workfile(
|
||||
workdir, file_template, data, extensions, True
|
||||
)
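The hook above resolves the TVPaint workfile in a fixed order: reuse the last workfile when Avalon provides one, otherwise build a name from anatomy templates, and copy a template file when nothing exists on disk yet. A minimal sketch of that resolution order (the helper name and fallback file name are illustrative, not part of the hook):

```python
# Simplified sketch of the resolution order implemented by TvpaintPrelaunchHook.
# `resolve_workfile_path` and the fallback file name are hypothetical.
import os
import shutil


def resolve_workfile_path(workdir, last_workfile=None, template_path=None):
    """Return the workfile to open, copying a template when nothing exists."""
    os.makedirs(workdir, exist_ok=True)

    if last_workfile:
        workfile_path = os.path.join(workdir, last_workfile)
    else:
        # In the real hook this name comes from anatomy "work" templates.
        workfile_path = os.path.join(workdir, "scene_v001.tvpp")

    if not os.path.isfile(workfile_path) and template_path:
        shutil.copy2(
            os.path.normpath(template_path),
            os.path.normpath(workfile_path)
        )
    return workfile_path
```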
|
||||
|
|
@@ -159,8 +159,11 @@ def check_inventory():
|
|||
|
||||
|
||||
def application_launch():
|
||||
ensure_scene_settings()
|
||||
check_inventory()
|
||||
# FIXME: This is breaking server <-> client communication.
|
||||
# It has been moved so it is called manually.
|
||||
# ensure_scene_settings()
|
||||
# check_inventory()
|
||||
pass
|
||||
|
||||
|
||||
def export_template(backdrops, nodes, filepath):
|
||||
|
|
|
|||
|
|
@@ -389,24 +389,28 @@ def create_write_node(name, data, input=None, prenodes=None, review=True):
|
|||
# imprinting group node
|
||||
avalon.nuke.imprint(GN, data["avalon"])
|
||||
|
||||
divider = nuke.Text_Knob('')
|
||||
GN.addKnob(divider)
|
||||
# add divider
|
||||
GN.addKnob(nuke.Text_Knob(''))
|
||||
|
||||
add_rendering_knobs(GN)
|
||||
|
||||
if review:
|
||||
add_review_knob(GN)
|
||||
|
||||
# add divider
|
||||
GN.addKnob(nuke.Text_Knob(''))
|
||||
|
||||
# Add linked knobs.
|
||||
linked_knob_names = ["Render", "use_limit", "first", "last"]
|
||||
for name in linked_knob_names:
|
||||
link = nuke.Link_Knob(name)
|
||||
link.makeLink(write_node.name(), name)
|
||||
link.setName(name)
|
||||
link.setFlag(0x1000)
|
||||
GN.addKnob(link)
|
||||
|
||||
divider = nuke.Text_Knob('')
|
||||
GN.addKnob(divider)
|
||||
# add divider
|
||||
GN.addKnob(nuke.Text_Knob(''))
|
||||
|
||||
# adding write to read button
|
||||
add_button_write_to_read(GN)
|
||||
|
|
@@ -431,13 +435,9 @@ def add_rendering_knobs(node):
|
|||
node (obj): with added knobs
|
||||
'''
|
||||
if "render" not in node.knobs():
|
||||
knob = nuke.Boolean_Knob("render", "Render")
|
||||
knob = nuke.Enumeration_Knob("render", "Render", [
|
||||
"Use existing frames", "Local", "On farm"])
|
||||
knob.setFlag(0x1000)
|
||||
knob.setValue(False)
|
||||
node.addKnob(knob)
|
||||
if "render_farm" not in node.knobs():
|
||||
knob = nuke.Boolean_Knob("render_farm", "Render on Farm")
|
||||
knob.setValue(False)
|
||||
node.addKnob(knob)
|
||||
return node
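The boolean "render" knob is replaced by an enumeration with three targets, so downstream publishing code is expected to branch on the selected string rather than a checkbox. The collector is not part of this hunk; the sketch below only illustrates reading the knob (the helper name is hypothetical):

```python
# Illustrative only: reading the new enumeration knob added above.
def get_render_target(group_node):
    # Enumeration_Knob.value() returns the selected label.
    value = group_node["render"].value()
    if value == "Use existing frames":
        return "frames"
    if value == "Local":
        return "local"
    return "farm"  # "On farm"
```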
|
||||
|
||||
|
|
|
|||
1  pype/hosts/tvpaint/__init__.py  (new file)
@@ -0,0 +1 @@
|
|||
kwargs = None
|
||||
BIN  pype/hosts/tvpaint/template.tvpp  (new file)
Binary file not shown.
178  pype/lib.py
|
|
@@ -82,41 +82,67 @@ def get_ffmpeg_tool_path(tool="ffmpeg"):
|
|||
|
||||
# Special naming case for subprocess since it's a built-in method.
|
||||
def _subprocess(*args, **kwargs):
|
||||
"""Convenience method for getting output errors for subprocess."""
|
||||
"""Convenience method for getting output errors for subprocess.
|
||||
|
||||
# make sure environment contains only strings
|
||||
if not kwargs.get("env"):
|
||||
filtered_env = {k: str(v) for k, v in os.environ.items()}
|
||||
else:
|
||||
filtered_env = {k: str(v) for k, v in kwargs.get("env").items()}
|
||||
Entered arguments and keyword arguments are passed to subprocess Popen.
|
||||
|
||||
Args:
|
||||
*args: Variable length argument list passed to Popen.
|
||||
**kwargs: Arbitrary keyword arguments passed to Popen. It is possible to
pass a `logging.Logger` object under "logger" to use a
logger different from lib's.
|
||||
|
||||
Returns:
|
||||
str: Full output of subprocess concatenated stdout and stderr.
|
||||
|
||||
Raises:
|
||||
RuntimeError: Exception is raised if process finished with nonzero
|
||||
return code.
|
||||
"""
|
||||
|
||||
# Get environment from kwargs or use the current process environment
# if none was passed.
|
||||
env = kwargs.get("env") or os.environ
|
||||
# Make sure environment contains only strings
|
||||
filtered_env = {k: str(v) for k, v in env.items()}
|
||||
|
||||
# Use lib's logger if was not passed with kwargs.
|
||||
logger = kwargs.pop("logger", log)
|
||||
|
||||
# set overrides
|
||||
kwargs['stdout'] = kwargs.get('stdout', subprocess.PIPE)
|
||||
kwargs['stderr'] = kwargs.get('stderr', subprocess.STDOUT)
|
||||
kwargs['stderr'] = kwargs.get('stderr', subprocess.PIPE)
|
||||
kwargs['stdin'] = kwargs.get('stdin', subprocess.PIPE)
|
||||
kwargs['env'] = filtered_env
|
||||
|
||||
proc = subprocess.Popen(*args, **kwargs)
|
||||
|
||||
output, error = proc.communicate()
|
||||
full_output = ""
|
||||
_stdout, _stderr = proc.communicate()
|
||||
if _stdout:
|
||||
_stdout = _stdout.decode("utf-8")
|
||||
full_output += _stdout
|
||||
logger.debug(_stdout)
|
||||
|
||||
if output:
|
||||
output = output.decode("utf-8")
|
||||
output += "\n"
|
||||
for line in output.strip().split("\n"):
|
||||
log.info(line)
|
||||
|
||||
if error:
|
||||
error = error.decode("utf-8")
|
||||
error += "\n"
|
||||
for line in error.strip().split("\n"):
|
||||
log.error(line)
|
||||
if _stderr:
|
||||
_stderr = _stderr.decode("utf-8")
|
||||
# Add additional line break if output already contains stdout
|
||||
if full_output:
|
||||
full_output += "\n"
|
||||
full_output += _stderr
|
||||
logger.warning(_stderr)
|
||||
|
||||
if proc.returncode != 0:
|
||||
raise ValueError(
|
||||
"\"{}\" was not successful:\nOutput: {}\nError: {}".format(
|
||||
args, output, error))
|
||||
return output
|
||||
exc_msg = "Executing arguments was not successful: \"{}\"".format(args)
|
||||
if _stdout:
|
||||
exc_msg += "\n\nOutput:\n{}".format(_stdout)
|
||||
|
||||
if _stderr:
|
||||
exc_msg += "Error:\n{}".format(_stderr)
|
||||
|
||||
raise RuntimeError(exc_msg)
|
||||
|
||||
return full_output
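Based on the behaviour shown above, the reworked helper logs stdout and stderr as it collects them, returns the concatenated output, and raises RuntimeError on a nonzero return code. A usage sketch (the ffmpeg call is just an example):

```python
# Example call of the helper defined above; any command list accepted by
# subprocess.Popen works here.
import logging

burnin_log = logging.getLogger("burnin")
try:
    output = _subprocess(
        ["ffmpeg", "-version"],
        logger=burnin_log  # optional, lib's logger is used otherwise
    )
except RuntimeError as exc:
    burnin_log.error("ffmpeg failed: %s", exc)
```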
|
||||
|
||||
|
||||
def get_hierarchy(asset_name=None):
|
||||
|
|
@@ -512,19 +538,6 @@ def get_last_version_from_path(path_dir, filter):
|
|||
return None
|
||||
|
||||
|
||||
def get_avalon_database():
|
||||
if io._database is None:
|
||||
set_io_database()
|
||||
return io._database
|
||||
|
||||
|
||||
def set_io_database():
|
||||
required_keys = ["AVALON_PROJECT", "AVALON_ASSET", "AVALON_SILO"]
|
||||
for key in required_keys:
|
||||
os.environ[key] = os.environ.get(key, "")
|
||||
io.install()
|
||||
|
||||
|
||||
def filter_pyblish_plugins(plugins):
|
||||
"""
|
||||
This serves as a plugin filter / modifier for pyblish. It will load plugin
|
||||
|
|
@@ -1408,41 +1421,76 @@ def source_hash(filepath, *args):
|
|||
return "|".join([file_name, time, size] + list(args)).replace(".", ",")
|
||||
|
||||
|
||||
def get_latest_version(asset_name, subset_name):
|
||||
def get_latest_version(asset_name, subset_name, dbcon=None, project_name=None):
|
||||
"""Retrieve latest version from `asset_name`, and `subset_name`.
|
||||
|
||||
Do not use this if you want to query more than 5 latest versions, as this
method queries MongoDB three times for each call. For those cases it is
better to use a more efficient approach, e.g. with the help of aggregations.
|
||||
|
||||
Args:
|
||||
asset_name (str): Name of asset.
|
||||
subset_name (str): Name of subset.
|
||||
dbcon (avalon.mongodb.AvalonMongoDB, optional): Avalon Mongo connection
|
||||
with Session.
|
||||
project_name (str, optional): Find latest version in specific project.
|
||||
|
||||
Returns:
|
||||
None: If asset, subset or version were not found.
|
||||
dict: Last version document for the entered asset and subset.
|
||||
"""
|
||||
# Get asset
|
||||
asset_name = io.find_one(
|
||||
{"type": "asset", "name": asset_name}, projection={"name": True}
|
||||
|
||||
if not dbcon:
|
||||
log.debug("Using `avalon.io` for query.")
|
||||
dbcon = io
|
||||
# Make sure is installed
|
||||
io.install()
|
||||
|
||||
if project_name and project_name != dbcon.Session.get("AVALON_PROJECT"):
|
||||
# `avalon.io` has only `_database` attribute
|
||||
# but `AvalonMongoDB` has `database`
|
||||
database = getattr(dbcon, "database", dbcon._database)
|
||||
collection = database[project_name]
|
||||
else:
|
||||
project_name = dbcon.Session.get("AVALON_PROJECT")
|
||||
collection = dbcon
|
||||
|
||||
log.debug((
|
||||
"Getting latest version for Project: \"{}\" Asset: \"{}\""
|
||||
" and Subset: \"{}\""
|
||||
).format(project_name, asset_name, subset_name))
|
||||
|
||||
# Query asset document id by asset name
|
||||
asset_doc = collection.find_one(
|
||||
{"type": "asset", "name": asset_name},
|
||||
{"_id": True}
|
||||
)
|
||||
if not asset_doc:
|
||||
log.info(
|
||||
"Asset \"{}\" was not found in Database.".format(asset_name)
|
||||
)
|
||||
return None
|
||||
|
||||
subset = io.find_one(
|
||||
{"type": "subset", "name": subset_name, "parent": asset_name["_id"]},
|
||||
projection={"_id": True, "name": True},
|
||||
subset_doc = collection.find_one(
|
||||
{"type": "subset", "name": subset_name, "parent": asset_doc["_id"]},
|
||||
{"_id": True}
|
||||
)
|
||||
if not subset_doc:
|
||||
log.info(
|
||||
"Subset \"{}\" was not found in Database.".format(subset_name)
|
||||
)
|
||||
return None
|
||||
|
||||
# Check if subsets actually exists.
|
||||
assert subset, "No subsets found."
|
||||
|
||||
# Get version
|
||||
version_projection = {
|
||||
"name": True,
|
||||
"parent": True,
|
||||
}
|
||||
|
||||
version = io.find_one(
|
||||
{"type": "version", "parent": subset["_id"]},
|
||||
projection=version_projection,
|
||||
version_doc = collection.find_one(
|
||||
{"type": "version", "parent": subset_doc["_id"]},
|
||||
sort=[("name", -1)],
|
||||
)
|
||||
|
||||
assert version, "No version found, this is a bug"
|
||||
|
||||
return version
|
||||
if not version_doc:
|
||||
log.info(
|
||||
"Subset \"{}\" does not have any version yet.".format(subset_name)
|
||||
)
|
||||
return None
|
||||
return version_doc
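A usage sketch for the extended signature, using the same AvalonMongoDB connection pattern that appears elsewhere in this diff (project, asset and subset names are illustrative):

```python
from avalon import api

dbcon = api.AvalonMongoDB()
dbcon.install()
dbcon.Session["AVALON_PROJECT"] = "my_project"

# `project_name` may point to a different project's collection.
version_doc = get_latest_version(
    "my_asset", "renderMain", dbcon=dbcon, project_name="other_project"
)
if version_doc:
    print(version_doc["name"])

dbcon.uninstall()
```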
|
||||
|
||||
|
||||
class ApplicationLaunchFailed(Exception):
|
||||
|
|
@@ -1450,12 +1498,18 @@ class ApplicationLaunchFailed(Exception):
|
|||
|
||||
|
||||
def launch_application(project_name, asset_name, task_name, app_name):
|
||||
database = get_avalon_database()
|
||||
project_document = database[project_name].find_one({"type": "project"})
|
||||
asset_document = database[project_name].find_one({
|
||||
# Prepare mongo connection for query of project and asset documents.
|
||||
dbcon = avalon.api.AvalonMongoDB()
|
||||
dbcon.install()
|
||||
dbcon.Session["AVALON_PROJECT"] = project_name
|
||||
|
||||
project_document = dbcon.find_one({"type": "project"})
|
||||
asset_document = dbcon.find_one({
|
||||
"type": "asset",
|
||||
"name": asset_name
|
||||
})
|
||||
# Uninstall Mongo connection as is not needed anymore.
|
||||
dbcon.uninstall()
|
||||
|
||||
asset_doc_parents = asset_document["data"].get("parents")
|
||||
hierarchy = "/".join(asset_doc_parents)
|
||||
|
|
|
|||
|
|
@ -1,6 +1,6 @@
|
|||
from . import ftrack_server
|
||||
from .ftrack_server import FtrackServer, check_ftrack_url
|
||||
from .lib import BaseHandler, BaseEvent, BaseAction
|
||||
from .lib import BaseHandler, BaseEvent, BaseAction, ServerAction
|
||||
|
||||
__all__ = (
|
||||
"ftrack_server",
|
||||
|
|
@ -8,5 +8,6 @@ __all__ = (
|
|||
"check_ftrack_url",
|
||||
"BaseHandler",
|
||||
"BaseEvent",
|
||||
"BaseAction"
|
||||
"BaseAction",
|
||||
"ServerAction"
|
||||
)
|
||||
|
|
|
|||
|
|
@ -2,13 +2,13 @@ import os
|
|||
import toml
|
||||
import time
|
||||
from pype.modules.ftrack.lib import AppAction
|
||||
from avalon import lib
|
||||
from avalon import lib, api
|
||||
from pype.api import Logger, config
|
||||
|
||||
log = Logger().get_logger(__name__)
|
||||
|
||||
|
||||
def registerApp(app, session, plugins_presets):
|
||||
def register_app(app, dbcon, session, plugins_presets):
|
||||
name = app['name']
|
||||
variant = ""
|
||||
try:
|
||||
|
|
@ -39,7 +39,7 @@ def registerApp(app, session, plugins_presets):
|
|||
|
||||
# register action
|
||||
AppAction(
|
||||
session, label, name, executable, variant,
|
||||
session, dbcon, label, name, executable, variant,
|
||||
icon, description, preactions, plugins_presets
|
||||
).register()
|
||||
|
||||
|
|
@ -85,11 +85,12 @@ def register(session, plugins_presets={}):
|
|||
)
|
||||
)
|
||||
|
||||
dbcon = api.AvalonMongoDB()
|
||||
apps = sorted(apps, key=lambda app: app["name"])
|
||||
app_counter = 0
|
||||
for app in apps:
|
||||
try:
|
||||
registerApp(app, session, plugins_presets)
|
||||
register_app(app, dbcon, session, plugins_presets)
|
||||
if app_counter % 5 == 0:
|
||||
time.sleep(0.1)
|
||||
app_counter += 1
|
||||
|
|
|
|||
|
|
@ -46,7 +46,7 @@ class RVAction(BaseAction):
|
|||
return
|
||||
|
||||
self.allowed_types = self.config_data.get(
|
||||
'file_ext', ["img", "mov", "exr"]
|
||||
'file_ext', ["img", "mov", "exr", "mp4"]
|
||||
)
|
||||
|
||||
def discover(self, session, entities, event):
|
||||
|
|
|
|||
|
|
@ -1,10 +1,10 @@
|
|||
import json
|
||||
import collections
|
||||
import ftrack_api
|
||||
from pype.modules.ftrack.lib import BaseAction
|
||||
from pype.modules.ftrack.lib import ServerAction
|
||||
|
||||
|
||||
class PushFrameValuesToTaskAction(BaseAction):
|
||||
class PushFrameValuesToTaskAction(ServerAction):
|
||||
"""Action for testing purpose or as base for new actions."""
|
||||
|
||||
# Ignore event handler by default
|
||||
|
|
@@ -34,50 +34,14 @@ class PushFrameValuesToTaskAction(BaseAction):
|
|||
"frameStart": "fstart",
|
||||
"frameEnd": "fend"
|
||||
}
|
||||
discover_role_list = {"Pypeclub", "Administrator", "Project Manager"}
|
||||
|
||||
def register(self):
|
||||
modified_role_names = set()
|
||||
for role_name in self.discover_role_list:
|
||||
modified_role_names.add(role_name.lower())
|
||||
self.discover_role_list = modified_role_names
|
||||
|
||||
self.session.event_hub.subscribe(
|
||||
"topic=ftrack.action.discover",
|
||||
self._discover,
|
||||
priority=self.priority
|
||||
)
|
||||
|
||||
launch_subscription = (
|
||||
"topic=ftrack.action.launch and data.actionIdentifier={0}"
|
||||
).format(self.identifier)
|
||||
self.session.event_hub.subscribe(launch_subscription, self._launch)
|
||||
role_list = {"Pypeclub", "Administrator", "Project Manager"}
|
||||
|
||||
def discover(self, session, entities, event):
|
||||
""" Validation """
|
||||
# Check if selection is valid
|
||||
valid_selection = False
|
||||
for ent in event["data"]["selection"]:
|
||||
# Ignore entities that are not tasks or projects
|
||||
if ent["entityType"].lower() == "show":
|
||||
valid_selection = True
|
||||
break
|
||||
|
||||
if not valid_selection:
|
||||
return False
|
||||
|
||||
# Get user and check his roles
|
||||
user_id = event.get("source", {}).get("user", {}).get("id")
|
||||
if not user_id:
|
||||
return False
|
||||
|
||||
user = session.query("User where id is \"{}\"".format(user_id)).first()
|
||||
if not user:
|
||||
return False
|
||||
|
||||
for role in user["user_security_roles"]:
|
||||
lowered_role = role["security_role"]["name"].lower()
|
||||
if lowered_role in self.discover_role_list:
|
||||
return True
|
||||
return False
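With the register() boilerplate and explicit role checks removed here, the event subscription and role filtering are presumably handled by the new ServerAction base class (its implementation is not part of this diff). A hypothetical minimal server action after this refactor could look like the following; all attribute handling by the base class is an assumption:

```python
# Hypothetical sketch only; behaviour provided by ServerAction is assumed,
# not shown in this diff.
from pype.modules.ftrack.lib import ServerAction


class MyServerAction(ServerAction):
    identifier = "my.server.action"
    label = "My Server Action"
    role_list = {"Pypeclub", "Administrator", "Project Manager"}

    def discover(self, session, entities, event):
        # Limit the action to project entities, as the action above does.
        return any(
            ent["entityType"].lower() == "show"
            for ent in event["data"]["selection"]
        )

    def launch(self, session, entities, event):
        self.log.info("Running my server action")
        return True
```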
|
||||
|
||||
|
|
|
|||
|
|
@ -1,11 +1,11 @@
|
|||
import time
|
||||
import traceback
|
||||
|
||||
from pype.modules.ftrack import BaseAction
|
||||
from pype.modules.ftrack import ServerAction
|
||||
from pype.modules.ftrack.lib.avalon_sync import SyncEntitiesFactory
|
||||
|
||||
|
||||
class SyncToAvalonServer(BaseAction):
|
||||
class SyncToAvalonServer(ServerAction):
|
||||
"""
|
||||
Synchronizing data action - from Ftrack to Avalon DB
|
||||
|
||||
|
|
@ -15,7 +15,7 @@ class SyncToAvalonServer(BaseAction):
|
|||
- Data(dictionary):
|
||||
- VisualParent(ObjectId) - Avalon Id of parent asset
|
||||
- Parents(array of string) - All parent names except project
|
||||
- Tasks(array of string) - Tasks on asset
|
||||
- Tasks(dictionary of dictionaries) - Tasks on asset
|
||||
- FtrackId(string)
|
||||
- entityType(string) - entity's type on Ftrack
|
||||
* All Custom attributes in group 'Avalon'
|
||||
|
|
@@ -36,48 +36,18 @@ class SyncToAvalonServer(BaseAction):
|
|||
variant = "- Sync To Avalon (Server)"
|
||||
#: Action description.
|
||||
description = "Send data from Ftrack to Avalon"
|
||||
role_list = {"Pypeclub", "Administrator", "Project Manager"}
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
super().__init__(*args, **kwargs)
|
||||
self.entities_factory = SyncEntitiesFactory(self.log, self.session)
|
||||
|
||||
def register(self):
|
||||
self.session.event_hub.subscribe(
|
||||
"topic=ftrack.action.discover",
|
||||
self._discover,
|
||||
priority=self.priority
|
||||
)
|
||||
|
||||
launch_subscription = (
|
||||
"topic=ftrack.action.launch and data.actionIdentifier={0}"
|
||||
).format(self.identifier)
|
||||
self.session.event_hub.subscribe(launch_subscription, self._launch)
|
||||
|
||||
def discover(self, session, entities, event):
|
||||
""" Validation """
|
||||
# Check if selection is valid
|
||||
valid_selection = False
|
||||
for ent in event["data"]["selection"]:
|
||||
# Ignore entities that are not tasks or projects
|
||||
if ent["entityType"].lower() in ["show", "task"]:
|
||||
valid_selection = True
|
||||
break
|
||||
|
||||
if not valid_selection:
|
||||
return False
|
||||
|
||||
# Get user and check his roles
|
||||
user_id = event.get("source", {}).get("user", {}).get("id")
|
||||
if not user_id:
|
||||
return False
|
||||
|
||||
user = session.query("User where id is \"{}\"".format(user_id)).first()
|
||||
if not user:
|
||||
return False
|
||||
|
||||
role_list = ["Pypeclub", "Administrator", "Project Manager"]
|
||||
for role in user["user_security_roles"]:
|
||||
if role["security_role"]["name"] in role_list:
|
||||
return True
|
||||
return False
|
||||
|
||||
|
|
|
|||
|
|
@ -40,6 +40,15 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
"select id, name, parent_id, link, custom_attributes from TypedContext"
|
||||
" where project_id is \"{}\" and id in ({})"
|
||||
)
|
||||
|
||||
# useful for getting all tasks for asset
|
||||
task_entities_query_by_parent_id = (
|
||||
"select id, name, parent_id, type_id from Task"
|
||||
" where project_id is \"{}\" and parent_id in ({})"
|
||||
)
|
||||
task_types_query = (
|
||||
"select id, name from Type"
|
||||
)
|
||||
entities_name_query_by_name = (
|
||||
"select id, name from TypedContext"
|
||||
" where project_id is \"{}\" and name in ({})"
|
||||
|
|
@ -313,9 +322,6 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
if self._avalon_archived_by_id is not None:
|
||||
self._avalon_archived_by_id[mongo_id] = entity
|
||||
|
||||
if mongo_id in self.task_changes_by_avalon_id:
|
||||
self.task_changes_by_avalon_id.pop(mongo_id)
|
||||
|
||||
def _bubble_changeability(self, unchangeable_ids):
|
||||
unchangeable_queue = queue.Queue()
|
||||
for entity_id in unchangeable_ids:
|
||||
|
|
@ -383,8 +389,6 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
self._avalon_archived_by_id = None
|
||||
self._avalon_archived_by_name = None
|
||||
|
||||
self.task_changes_by_avalon_id = {}
|
||||
|
||||
self._avalon_custom_attributes = None
|
||||
self._ent_types_by_name = None
|
||||
|
||||
|
|
@ -398,6 +402,10 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
self.ftrack_updated = {}
|
||||
self.ftrack_removed = {}
|
||||
|
||||
# set of ftrack ids with modified tasks
|
||||
# handled separately by full wipeout and replace from FTrack
|
||||
self.modified_tasks_ftrackids = set()
|
||||
|
||||
self.moved_in_avalon = []
|
||||
self.renamed_in_avalon = []
|
||||
self.hier_cust_attrs_changes = collections.defaultdict(list)
|
||||
|
|
@ -472,6 +480,16 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
return filtered_updates
|
||||
|
||||
def get_ent_path(self, ftrack_id):
|
||||
"""
|
||||
Looks for entity in FTrack with 'ftrack_id'. If found returns
|
||||
concatenated path from its 'link' elements' names. Describes
|
||||
location of entity in tree.
|
||||
Args:
|
||||
ftrack_id (string): entityId of FTrack entity
|
||||
|
||||
Returns:
|
||||
(string) - example : "/test_project/assets/my_asset"
|
||||
"""
|
||||
entity = self.ftrack_ents_by_id.get(ftrack_id)
|
||||
if not entity:
|
||||
entity = self.process_session.query(
|
||||
|
|
@ -486,12 +504,24 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
return "/".join([ent["name"] for ent in entity["link"]])
|
||||
|
||||
def launch(self, session, event):
|
||||
"""
|
||||
Main entry point for synchronization.
|
||||
Goes through event (can contain multiple changes) and decides if
|
||||
the event is interesting for us (interest_entTypes).
|
||||
It separates changes into add|remove|update.
|
||||
All task changes are handled together by refresh from Ftrack.
|
||||
Args:
|
||||
session (object): session to Ftrack
|
||||
event (dictionary): event content
|
||||
|
||||
Returns:
|
||||
(boolean or None)
|
||||
"""
|
||||
# Try to commit and if any error happen then recreate session
|
||||
try:
|
||||
self.process_session.commit()
|
||||
except Exception:
|
||||
self.set_process_session(session)
|
||||
|
||||
# Reset object values for each launch
|
||||
self.reset_variables()
|
||||
self._cur_event = event
|
||||
|
|
@ -527,9 +557,21 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
continue
|
||||
ftrack_id = ftrack_id[0]
|
||||
|
||||
# task modified, collect parent id of task, handle separately
|
||||
if entity_type.lower() == "task":
|
||||
changes = ent_info.get("changes") or {}
|
||||
if action == "move":
|
||||
parent_changes = changes["parent_id"]
|
||||
self.modified_tasks_ftrackids.add(parent_changes["new"])
|
||||
self.modified_tasks_ftrackids.add(parent_changes["old"])
|
||||
|
||||
elif "typeid" in changes or "name" in changes:
|
||||
self.modified_tasks_ftrackids.add(ent_info["parentId"])
|
||||
continue
|
||||
|
||||
if action == "move":
|
||||
ent_keys = ent_info["keys"]
|
||||
# Seprate update info from move action
|
||||
# Separate update info from move action
|
||||
if len(ent_keys) > 1:
|
||||
_ent_info = ent_info.copy()
|
||||
for ent_key in ent_keys:
|
||||
|
|
@ -539,14 +581,13 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
else:
|
||||
ent_info["changes"].pop(ent_key, None)
|
||||
ent_info["keys"].remove(ent_key)
|
||||
|
||||
entities_by_action["update"][ftrack_id] = _ent_info
|
||||
|
||||
# regular change process handles all other than Tasks
|
||||
found_actions.add(action)
|
||||
entities_by_action[action][ftrack_id] = ent_info
|
||||
|
||||
found_actions = list(found_actions)
|
||||
if not found_actions:
|
||||
if not found_actions and not self.modified_tasks_ftrackids:
|
||||
return True
|
||||
|
||||
# Check if auto sync was turned on/off
|
||||
|
|
@ -585,9 +626,10 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
|
||||
# skip most of events where nothing has changed for avalon
|
||||
if (
|
||||
len(found_actions) == 1 and
|
||||
found_actions[0] == "update" and
|
||||
not updated
|
||||
len(found_actions) == 1
|
||||
and found_actions[0] == "update"
|
||||
and not updated
|
||||
and not self.modified_tasks_ftrackids
|
||||
):
|
||||
return True
|
||||
|
||||
|
|
@ -622,19 +664,14 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
ft_project["full_name"], debug_msg
|
||||
))
|
||||
# Get ftrack entities - find all ftrack ids first
|
||||
ftrack_ids = []
|
||||
for ftrack_id in updated:
|
||||
ftrack_ids.append(ftrack_id)
|
||||
ftrack_ids = set(updated.keys())
|
||||
|
||||
for action, ftrack_ids in entities_by_action.items():
|
||||
for action, _ftrack_ids in entities_by_action.items():
|
||||
# skip updated (already prepared) and removed (not exist in ftrack)
|
||||
if action == "remove":
|
||||
continue
|
||||
|
||||
for ftrack_id in ftrack_ids:
|
||||
if ftrack_id not in ftrack_ids:
|
||||
ftrack_ids.append(ftrack_id)
|
||||
if action not in ("remove", "update"):
|
||||
ftrack_ids |= set(_ftrack_ids)
|
||||
|
||||
# collect entity records data which might not be in event
|
||||
if ftrack_ids:
|
||||
joined_ids = ", ".join(["\"{}\"".format(id) for id in ftrack_ids])
|
||||
ftrack_entities = self.process_session.query(
|
||||
|
|
@ -688,9 +725,11 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
time_6 = time.time()
|
||||
# 6.) Process changes in hierarchy or hierarchical custom attributes
|
||||
self.process_hier_cleanup()
|
||||
time_7 = time.time()
|
||||
self.process_task_updates()
|
||||
if self.updates:
|
||||
self.update_entities()
|
||||
time_7 = time.time()
|
||||
time_8 = time.time()
|
||||
|
||||
time_removed = time_2 - time_1
|
||||
time_renamed = time_3 - time_2
|
||||
|
|
@ -698,10 +737,14 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
time_moved = time_5 - time_4
|
||||
time_updated = time_6 - time_5
|
||||
time_cleanup = time_7 - time_6
|
||||
time_total = time_7 - time_1
|
||||
self.log.debug("Process time: {} <{}, {}, {}, {}, {}, {}>".format(
|
||||
time_total, time_removed, time_renamed, time_added, time_moved,
|
||||
time_updated, time_cleanup
|
||||
time_task_updates = time_8 - time_7
|
||||
time_total = time_8 - time_1
|
||||
self.log.debug((
|
||||
"Process time: {:.2f} <{:.2f}, {:.2f}, {:.2f}, "
|
||||
"{:.2f}, {:.2f}, {:.2f}, {:.2f}>"
|
||||
).format(
|
||||
time_total, time_removed, time_renamed, time_added,
|
||||
time_moved, time_updated, time_cleanup, time_task_updates
|
||||
))
|
||||
|
||||
except Exception:
|
||||
|
|
@ -714,6 +757,9 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
return True
|
||||
|
||||
def process_removed(self):
|
||||
"""
|
||||
Handles removed entities (not removed tasks - handle separately).
|
||||
"""
|
||||
if not self.ftrack_removed:
|
||||
return
|
||||
ent_infos = self.ftrack_removed
|
||||
|
|
@ -725,29 +771,11 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
removed_names = []
|
||||
for ftrack_id, removed in ent_infos.items():
|
||||
entity_type = removed["entity_type"]
|
||||
parent_id = removed["parentId"]
|
||||
removed_name = removed["changes"]["name"]["old"]
|
||||
if entity_type == "Task":
|
||||
avalon_ent = self.avalon_ents_by_ftrack_id.get(parent_id)
|
||||
if not avalon_ent:
|
||||
self.log.debug((
|
||||
"Parent entity of task was not found in avalon <{}>"
|
||||
).format(self.get_ent_path(parent_id)))
|
||||
continue
|
||||
|
||||
mongo_id = avalon_ent["_id"]
|
||||
if mongo_id not in self.task_changes_by_avalon_id:
|
||||
self.task_changes_by_avalon_id[mongo_id] = (
|
||||
avalon_ent["data"]["tasks"]
|
||||
)
|
||||
|
||||
if removed_name in self.task_changes_by_avalon_id[mongo_id]:
|
||||
self.task_changes_by_avalon_id[mongo_id].remove(
|
||||
removed_name
|
||||
)
|
||||
|
||||
if entity_type.lower() == "task":
|
||||
continue
|
||||
|
||||
removed_name = removed["changes"]["name"]["old"]
|
||||
|
||||
avalon_ent = self.avalon_ents_by_ftrack_id.get(ftrack_id)
|
||||
if not avalon_ent:
|
||||
continue
|
||||
|
|
@ -1067,12 +1095,8 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
)
|
||||
)
|
||||
|
||||
# Tasks
|
||||
tasks = []
|
||||
for child in ftrack_ent["children"]:
|
||||
if child.entity_type.lower() != "task":
|
||||
continue
|
||||
tasks.append(child["name"])
|
||||
# Add entity to modified so tasks are added at the end
|
||||
self.modified_tasks_ftrackids.add(ftrack_ent["id"])
|
||||
|
||||
# Visual Parent
|
||||
vis_par = None
|
||||
|
|
@ -1092,7 +1116,7 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
"entityType": ftrack_ent.entity_type,
|
||||
"parents": parents,
|
||||
"hierarchy": hierarchy,
|
||||
"tasks": tasks,
|
||||
"tasks": {},
|
||||
"visualParent": vis_par
|
||||
}
|
||||
}
|
||||
|
|
@ -1267,21 +1291,14 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
"Processing renamed entities: {}".format(str(ent_infos))
|
||||
)
|
||||
|
||||
renamed_tasks = {}
|
||||
not_found = {}
|
||||
changeable_queue = queue.Queue()
|
||||
for ftrack_id, ent_info in ent_infos.items():
|
||||
entity_type = ent_info["entity_type"]
|
||||
if entity_type == "Task":
|
||||
continue
|
||||
|
||||
new_name = ent_info["changes"]["name"]["new"]
|
||||
old_name = ent_info["changes"]["name"]["old"]
|
||||
if entity_type == "Task":
|
||||
parent_id = ent_info["parentId"]
|
||||
renamed_tasks[parent_id] = {
|
||||
"new": new_name,
|
||||
"old": old_name,
|
||||
"ent_info": ent_info
|
||||
}
|
||||
continue
|
||||
|
||||
ent_path = self.get_ent_path(ftrack_id)
|
||||
avalon_ent = self.avalon_ents_by_ftrack_id.get(ftrack_id)
|
||||
|
|
@ -1400,60 +1417,6 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
if old_names:
|
||||
self.check_names_synchronizable(old_names)
|
||||
|
||||
for parent_id, task_change in renamed_tasks.items():
|
||||
avalon_ent = self.avalon_ents_by_ftrack_id.get(parent_id)
|
||||
ent_info = task_change["ent_info"]
|
||||
if not avalon_ent:
|
||||
not_found[ent_info["entityId"]] = ent_info
|
||||
continue
|
||||
|
||||
new_name = task_change["new"]
|
||||
old_name = task_change["old"]
|
||||
passed_regex = avalon_sync.check_regex(
|
||||
new_name, "task", schema_patterns=self.regex_schemas
|
||||
)
|
||||
if not passed_regex:
|
||||
ftrack_id = ent_info["enityId"]
|
||||
self.regex_failed.append(ftrack_id)
|
||||
continue
|
||||
|
||||
mongo_id = avalon_ent["_id"]
|
||||
if mongo_id not in self.task_changes_by_avalon_id:
|
||||
self.task_changes_by_avalon_id[mongo_id] = (
|
||||
avalon_ent["data"]["tasks"]
|
||||
)
|
||||
|
||||
if old_name in self.task_changes_by_avalon_id[mongo_id]:
|
||||
self.task_changes_by_avalon_id[mongo_id].remove(old_name)
|
||||
else:
|
||||
parent_ftrack_ent = self.ftrack_ents_by_id.get(parent_id)
|
||||
if not parent_ftrack_ent:
|
||||
parent_ftrack_ent = self.process_session.query(
|
||||
self.entities_query_by_id.format(
|
||||
self.cur_project["id"], parent_id
|
||||
)
|
||||
).first()
|
||||
|
||||
if parent_ftrack_ent:
|
||||
self.ftrack_ents_by_id[parent_id] = parent_ftrack_ent
|
||||
child_names = []
|
||||
for child in parent_ftrack_ent["children"]:
|
||||
if child.entity_type.lower() != "task":
|
||||
continue
|
||||
child_names.append(child["name"])
|
||||
|
||||
tasks = [task for task in (
|
||||
self.task_changes_by_avalon_id[mongo_id]
|
||||
)]
|
||||
for task in tasks:
|
||||
if task not in child_names:
|
||||
self.task_changes_by_avalon_id[mongo_id].remove(
|
||||
task
|
||||
)
|
||||
|
||||
if new_name not in self.task_changes_by_avalon_id[mongo_id]:
|
||||
self.task_changes_by_avalon_id[mongo_id].append(new_name)
|
||||
|
||||
# not_found entries are not processed, because everything that was not
# found ended up there because it is not synchronizable
|
||||
|
||||
|
|
@ -1471,7 +1434,6 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
# Skip entities that already exist in avalon db or are task entities
# - happens when it was created by any sync event/action
|
||||
pop_out_ents = []
|
||||
new_tasks_by_parent = collections.defaultdict(list)
|
||||
for ftrack_id, ent_info in ent_infos.items():
|
||||
if self.avalon_ents_by_ftrack_id.get(ftrack_id):
|
||||
pop_out_ents.append(ftrack_id)
|
||||
|
|
@ -1484,9 +1446,6 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
|
||||
entity_type = ent_info["entity_type"]
|
||||
if entity_type == "Task":
|
||||
parent_id = ent_info["parentId"]
|
||||
new_tasks_by_parent[parent_id].append(ent_info)
|
||||
pop_out_ents.append(ftrack_id)
|
||||
continue
|
||||
|
||||
name = (
|
||||
|
|
@ -1663,82 +1622,11 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
|
||||
self.create_entity_in_avalon(entity, parent_avalon)
|
||||
|
||||
for parent_id, ent_infos in new_tasks_by_parent.items():
|
||||
avalon_ent = self.avalon_ents_by_ftrack_id.get(parent_id)
|
||||
if not avalon_ent:
|
||||
# TODO logging
|
||||
self.log.debug((
|
||||
"Skipping synchronization of task"
|
||||
" because parent was not found in Avalon DB <{}>"
|
||||
).format(self.get_ent_path(parent_id)))
|
||||
continue
|
||||
|
||||
mongo_id = avalon_ent["_id"]
|
||||
if mongo_id not in self.task_changes_by_avalon_id:
|
||||
self.task_changes_by_avalon_id[mongo_id] = (
|
||||
avalon_ent["data"]["tasks"]
|
||||
)
|
||||
|
||||
for ent_info in ent_infos:
|
||||
new_name = ent_info["changes"]["name"]["new"]
|
||||
passed_regex = avalon_sync.check_regex(
|
||||
new_name, "task", schema_patterns=self.regex_schemas
|
||||
)
|
||||
if not passed_regex:
|
||||
self.regex_failed.append(ent_info["entityId"])
|
||||
continue
|
||||
|
||||
if new_name not in self.task_changes_by_avalon_id[mongo_id]:
|
||||
self.task_changes_by_avalon_id[mongo_id].append(new_name)
|
||||
|
||||
def _mongo_id_configuration(
|
||||
self,
|
||||
ent_info,
|
||||
cust_attrs,
|
||||
hier_attrs,
|
||||
temp_dict
|
||||
):
|
||||
# Use hierarchical mongo id attribute if possible.
|
||||
if "_hierarchical" not in temp_dict:
|
||||
hier_mongo_id_configuration_id = None
|
||||
for attr in hier_attrs:
|
||||
if attr["key"] == CUST_ATTR_ID_KEY:
|
||||
hier_mongo_id_configuration_id = attr["id"]
|
||||
break
|
||||
temp_dict["_hierarchical"] = hier_mongo_id_configuration_id
|
||||
|
||||
hier_mongo_id_configuration_id = temp_dict.get("_hierarchical")
|
||||
if hier_mongo_id_configuration_id is not None:
|
||||
return hier_mongo_id_configuration_id
|
||||
|
||||
# Legacy part for cases that MongoID attribute is per entity type.
|
||||
entity_type = ent_info["entity_type"]
|
||||
mongo_id_configuration_id = temp_dict.get(entity_type)
|
||||
if mongo_id_configuration_id is not None:
|
||||
return mongo_id_configuration_id
|
||||
|
||||
for attr in cust_attrs:
|
||||
key = attr["key"]
|
||||
if key != CUST_ATTR_ID_KEY:
|
||||
continue
|
||||
|
||||
if attr["entity_type"] != ent_info["entityType"]:
|
||||
continue
|
||||
|
||||
if (
|
||||
ent_info["entityType"] == "task" and
|
||||
attr["object_type_id"] != ent_info["objectTypeId"]
|
||||
):
|
||||
continue
|
||||
|
||||
mongo_id_configuration_id = attr["id"]
|
||||
break
|
||||
|
||||
temp_dict[entity_type] = mongo_id_configuration_id
|
||||
|
||||
return mongo_id_configuration_id
|
||||
|
||||
def process_moved(self):
|
||||
"""
|
||||
Handles entities moved to a different place in the hierarchy.
|
||||
(Not tasks - handled separately.)
|
||||
"""
|
||||
if not self.ftrack_moved:
|
||||
return
|
||||
|
||||
|
|
@ -1872,7 +1760,9 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
)
|
||||
|
||||
def process_updated(self):
|
||||
# Only custom attributes changes should get here
|
||||
"""
|
||||
Only custom attributes changes should get here
|
||||
"""
|
||||
if not self.ftrack_updated:
|
||||
return
|
||||
|
||||
|
|
@ -1970,8 +1860,7 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
if (
|
||||
not self.moved_in_avalon and
|
||||
not self.renamed_in_avalon and
|
||||
not self.hier_cust_attrs_changes and
|
||||
not self.task_changes_by_avalon_id
|
||||
not self.hier_cust_attrs_changes
|
||||
):
|
||||
return
|
||||
|
||||
|
|
@ -2000,14 +1889,6 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
if not all_keys and key not in hier_cust_attrs_keys:
|
||||
hier_cust_attrs_keys.append(key)
|
||||
|
||||
# Tasks preparation ****
|
||||
for mongo_id, tasks in self.task_changes_by_avalon_id.items():
|
||||
avalon_ent = self.avalon_ents_by_id[mongo_id]
|
||||
if "data" not in self.updates[mongo_id]:
|
||||
self.updates[mongo_id]["data"] = {}
|
||||
|
||||
self.updates[mongo_id]["data"]["tasks"] = tasks
|
||||
|
||||
# Parents preparation ***
|
||||
mongo_to_ftrack_parents = {}
|
||||
missing_ftrack_ents = {}
|
||||
|
|
@@ -2289,11 +2170,96 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
|
||||
self.update_entities()
|
||||
|
||||
def process_task_updates(self):
|
||||
"""
|
||||
Pull task information for selected ftrack ids to replace stored
|
||||
existing in Avalon.
|
||||
Solves the problem of changing the type (and possibly Status in the
future) of a task without storing the task's ftrack id in the DB, which
doesn't bring much advantage currently and could be troublesome for
all hosts or plugins (for example Nuke) to collect and store.
|
||||
Returns:
|
||||
None
|
||||
"""
|
||||
self.log.debug(
|
||||
"Processing task changes for parents: {}".format(
|
||||
self.modified_tasks_ftrackids
|
||||
)
|
||||
)
|
||||
if not self.modified_tasks_ftrackids:
|
||||
return
|
||||
|
||||
joined_ids = ", ".join([
|
||||
"\"{}\"".format(ftrack_id)
|
||||
for ftrack_id in self.modified_tasks_ftrackids
|
||||
])
|
||||
task_entities = self.process_session.query(
|
||||
self.task_entities_query_by_parent_id.format(
|
||||
self.cur_project["id"], joined_ids
|
||||
)
|
||||
).all()
|
||||
|
||||
ftrack_mongo_mapping_found = {}
|
||||
not_found_ids = []
|
||||
# Make sure all parents have updated tasks, as they may not have any
|
||||
tasks_per_ftrack_id = {
|
||||
ftrack_id: {}
|
||||
for ftrack_id in self.modified_tasks_ftrackids
|
||||
}
|
||||
|
||||
# Query all task types at once
|
||||
task_types = self.process_session.query(self.task_types_query).all()
|
||||
task_types_by_id = {
|
||||
task_type["id"]: task_type
|
||||
for task_type in task_types
|
||||
}
|
||||
|
||||
# prepare all tasks per parentId, eg. Avalon asset record
|
||||
for task_entity in task_entities:
|
||||
task_type = task_types_by_id[task_entity["type_id"]]
|
||||
ftrack_id = task_entity["parent_id"]
|
||||
if ftrack_id not in tasks_per_ftrack_id:
|
||||
tasks_per_ftrack_id[ftrack_id] = {}
|
||||
|
||||
passed_regex = avalon_sync.check_regex(
|
||||
task_entity["name"], "task",
|
||||
schema_patterns=self.regex_schemas
|
||||
)
|
||||
if not passed_regex:
|
||||
self.regex_failed.append(task_entity["id"])
|
||||
continue
|
||||
|
||||
tasks_per_ftrack_id[ftrack_id][task_entity["name"]] = {
|
||||
"type": task_type["name"]
|
||||
}
|
||||
|
||||
# find avalon entity by parentId
|
||||
# should be there as create was run first
|
||||
for ftrack_id in tasks_per_ftrack_id.keys():
|
||||
avalon_entity = self.avalon_ents_by_ftrack_id.get(ftrack_id)
|
||||
if not avalon_entity:
|
||||
not_found_ids.append(ftrack_id)
|
||||
continue
|
||||
ftrack_mongo_mapping_found[ftrack_id] = avalon_entity["_id"]
|
||||
|
||||
self._update_avalon_tasks(
|
||||
ftrack_mongo_mapping_found,
|
||||
tasks_per_ftrack_id
|
||||
)
|
||||
|
||||
def update_entities(self):
|
||||
"""
|
||||
Update Avalon entities by mongo bulk changes.
|
||||
Expects self.updates, which are transferred to the $set part of the update
|
||||
command.
|
||||
Resets self.updates afterwards.
|
||||
"""
|
||||
mongo_changes_bulk = []
|
||||
for mongo_id, changes in self.updates.items():
|
||||
filter = {"_id": mongo_id}
|
||||
change_data = avalon_sync.from_dict_to_set(changes)
|
||||
avalon_ent = self.avalon_ents_by_id[mongo_id]
|
||||
is_project = avalon_ent["type"] == "project"
|
||||
change_data = avalon_sync.from_dict_to_set(changes, is_project)
|
||||
mongo_changes_bulk.append(UpdateOne(filter, change_data))
|
||||
|
||||
if not mongo_changes_bulk:
|
||||
|
|
@@ -2477,6 +2443,77 @@ class SyncToAvalonEvent(BaseEvent):
|
|||
)
|
||||
return True
|
||||
|
||||
def _update_avalon_tasks(
|
||||
self, ftrack_mongo_mapping_found, tasks_per_ftrack_id
|
||||
):
|
||||
"""
|
||||
Prepare new "tasks" content for existing records in Avalon.
|
||||
Args:
|
||||
ftrack_mongo_mapping_found (dictionary): ftrack parentId to
|
||||
Avalon _id mapping
|
||||
tasks_per_ftrack_id (dictionary): task dictionaries per ftrack
|
||||
parentId
|
||||
|
||||
Returns:
|
||||
None
|
||||
"""
|
||||
mongo_changes_bulk = []
|
||||
for ftrack_id, mongo_id in ftrack_mongo_mapping_found.items():
|
||||
filter = {"_id": mongo_id}
|
||||
change_data = {"$set": {}}
|
||||
change_data["$set"]["data.tasks"] = tasks_per_ftrack_id[ftrack_id]
|
||||
mongo_changes_bulk.append(UpdateOne(filter, change_data))
|
||||
|
||||
if mongo_changes_bulk:
|
||||
self.dbcon.bulk_write(mongo_changes_bulk)
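To illustrate the document change produced here: "tasks" is now a dictionary of dictionaries keyed by task name instead of a list of names, and each parent asset receives one bulk $set on "data.tasks". The values below are illustrative; UpdateOne is the pymongo operation already used above.

```python
from pymongo import UpdateOne

tasks_per_ftrack_id = {
    "ftrack-id-1": {
        "Animation": {"type": "Animation"},
        "Compositing": {"type": "Compositing"}
    }
}
ftrack_mongo_mapping_found = {"ftrack-id-1": "mongo-object-id-1"}

mongo_changes_bulk = [
    UpdateOne(
        {"_id": mongo_id},
        {"$set": {"data.tasks": tasks_per_ftrack_id[ftrack_id]}}
    )
    for ftrack_id, mongo_id in ftrack_mongo_mapping_found.items()
]
# dbcon.bulk_write(mongo_changes_bulk) is then issued as in the method above.
```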
|
||||
|
||||
def _mongo_id_configuration(
|
||||
self,
|
||||
ent_info,
|
||||
cust_attrs,
|
||||
hier_attrs,
|
||||
temp_dict
|
||||
):
|
||||
# Use hierarchical mongo id attribute if possible.
|
||||
if "_hierarchical" not in temp_dict:
|
||||
hier_mongo_id_configuration_id = None
|
||||
for attr in hier_attrs:
|
||||
if attr["key"] == CUST_ATTR_ID_KEY:
|
||||
hier_mongo_id_configuration_id = attr["id"]
|
||||
break
|
||||
temp_dict["_hierarchical"] = hier_mongo_id_configuration_id
|
||||
|
||||
hier_mongo_id_configuration_id = temp_dict.get("_hierarchical")
|
||||
if hier_mongo_id_configuration_id is not None:
|
||||
return hier_mongo_id_configuration_id
|
||||
|
||||
# Legacy part for cases that MongoID attribute is per entity type.
|
||||
entity_type = ent_info["entity_type"]
|
||||
mongo_id_configuration_id = temp_dict.get(entity_type)
|
||||
if mongo_id_configuration_id is not None:
|
||||
return mongo_id_configuration_id
|
||||
|
||||
for attr in cust_attrs:
|
||||
key = attr["key"]
|
||||
if key != CUST_ATTR_ID_KEY:
|
||||
continue
|
||||
|
||||
if attr["entity_type"] != ent_info["entityType"]:
|
||||
continue
|
||||
|
||||
if (
|
||||
ent_info["entityType"] == "task" and
|
||||
attr["object_type_id"] != ent_info["objectTypeId"]
|
||||
):
|
||||
continue
|
||||
|
||||
mongo_id_configuration_id = attr["id"]
|
||||
break
|
||||
|
||||
temp_dict[entity_type] = mongo_id_configuration_id
|
||||
|
||||
return mongo_id_configuration_id
|
||||
|
||||
|
||||
def register(session, plugins_presets):
|
||||
'''Register plugin. Called when used as an plugin.'''
|
||||
|
|
|
|||
|
|
@ -2,7 +2,7 @@ from . import avalon_sync
|
|||
from . import credentials
|
||||
from .ftrack_base_handler import BaseHandler
|
||||
from .ftrack_event_handler import BaseEvent
|
||||
from .ftrack_action_handler import BaseAction, statics_icon
|
||||
from .ftrack_action_handler import BaseAction, ServerAction, statics_icon
|
||||
from .ftrack_app_handler import AppAction
|
||||
|
||||
__all__ = (
|
||||
|
|
@ -11,6 +11,7 @@ __all__ = (
|
|||
"BaseHandler",
|
||||
"BaseEvent",
|
||||
"BaseAction",
|
||||
"ServerAction",
|
||||
"statics_icon",
|
||||
"AppAction"
|
||||
)
|
||||
|
|
|
|||
|
|
@ -22,9 +22,9 @@ log = Logger().get_logger(__name__)
|
|||
|
||||
# Current schemas for avalon types
|
||||
EntitySchemas = {
|
||||
"project": "avalon-core:project-2.1",
|
||||
"asset": "avalon-core:asset-3.0",
|
||||
"config": "avalon-core:config-1.1"
|
||||
"project": "pype:project-2.1",
|
||||
"asset": "pype:asset-3.0",
|
||||
"config": "pype:config-1.1"
|
||||
}
|
||||
|
||||
# Group name of custom attributes
|
||||
|
|
@ -101,15 +101,40 @@ def get_pype_attr(session, split_hierarchical=True):
|
|||
return custom_attributes
|
||||
|
||||
|
||||
def from_dict_to_set(data):
def from_dict_to_set(data, is_project):
"""
Converts 'data' into $set part of MongoDB update command.
Sets new or modified keys.
Tasks are updated completely, not per task. (E.g. a change in any of the
tasks results in a full update of "tasks" from Ftrack.)
Args:
data: (dictionary) - up-to-date data from Ftrack
data (dictionary): up-to-date data from Ftrack
is_project (boolean): true for project

Returns:
(dictionary) - { "$set" : "{..}"}
"""
|
||||
not_set = object()
|
||||
task_changes = not_set
|
||||
if (
|
||||
is_project
|
||||
and "config" in data
|
||||
and "tasks" in data["config"]
|
||||
):
|
||||
task_changes = data["config"].pop("tasks")
|
||||
task_changes_key = "config.tasks"
|
||||
if not data["config"]:
|
||||
data.pop("config")
|
||||
elif (
|
||||
not is_project
|
||||
and "data" in data
|
||||
and "tasks" in data["data"]
|
||||
):
|
||||
task_changes = data["data"].pop("tasks")
|
||||
task_changes_key = "data.tasks"
|
||||
if not data["data"]:
|
||||
data.pop("data")
|
||||
|
||||
result = {"$set": {}}
|
||||
dict_queue = queue.Queue()
|
||||
dict_queue.put((None, data))
|
||||
|
|
@ -126,6 +151,9 @@ def from_dict_to_set(data):
|
|||
result["$set"][new_key] = value
|
||||
continue
|
||||
dict_queue.put((new_key, value))
|
||||
|
||||
if task_changes is not not_set and task_changes_key:
|
||||
result["$set"][task_changes_key] = task_changes
|
||||
return result
|
||||
|
||||
|
||||
|
|
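# Illustrative sketch (not part of this diff): how from_dict_to_set flattens
# a nested change dict into a single "$set" command for a non-project entity.
changes = {
    "name": "sh010",
    "data": {"frameStart": 1001, "tasks": {"comp": {"type": "Compositing"}}}
}
# from_dict_to_set(changes, is_project=False) would return roughly:
# {"$set": {
#     "name": "sh010",
#     "data.frameStart": 1001,
#     "data.tasks": {"comp": {"type": "Compositing"}}
# }}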
@ -657,7 +685,7 @@ class SyncEntitiesFactory:
|
|||
# Tasks must be checked too
|
||||
for task in entity_dict["tasks"].items():
|
||||
task_name, task = task
|
||||
passed = task_name
|
||||
passed = task_names.get(task_name)
|
||||
if passed is None:
|
||||
passed = check_regex(
|
||||
task_name, "task", schema_patterns=_schema_patterns
|
||||
|
|
@ -729,7 +757,7 @@ class SyncEntitiesFactory:
|
|||
for id in ids:
|
||||
if id not in self.entities_dict:
|
||||
continue
|
||||
self.entities_dict[id]["tasks"].remove(name)
|
||||
self.entities_dict[id]["tasks"].pop(name)
|
||||
ent_path = self.get_ent_path(id)
|
||||
self.log.warning(failed_regex_msg.format(
|
||||
"/".join([ent_path, name])
|
||||
|
|
@ -1678,6 +1706,18 @@ class SyncEntitiesFactory:
|
|||
self.updates[avalon_id]
|
||||
)
|
||||
|
||||
# double check changes in tasks, some task could be renamed or
|
||||
# deleted in Ftrack - not captured otherwise
|
||||
final_entity = self.entities_dict[ftrack_id]["final_entity"]
|
||||
if final_entity["data"].get("tasks", {}) != \
|
||||
avalon_entity["data"].get("tasks", {}):
|
||||
if "data" not in self.updates[avalon_id]:
|
||||
self.updates[avalon_id]["data"] = {}
|
||||
|
||||
self.updates[avalon_id]["data"]["tasks"] = (
|
||||
final_entity["data"]["tasks"]
|
||||
)
|
||||
|
||||
def synchronize(self):
|
||||
self.log.debug("* Synchronization begins")
|
||||
avalon_project_id = self.ftrack_avalon_mapper.get(self.ft_project_id)
|
||||
|
|
@ -2025,15 +2065,20 @@ class SyncEntitiesFactory:
|
|||
self._changeability_by_mongo_id[mongo_id] = is_changeable
|
||||
|
||||
def update_entities(self):
|
||||
"""
|
||||
Runs changes converted to "$set" queries in bulk.
|
||||
"""
|
||||
mongo_changes_bulk = []
|
||||
for mongo_id, changes in self.updates.items():
|
||||
filter = {"_id": ObjectId(mongo_id)}
|
||||
change_data = from_dict_to_set(changes)
|
||||
mongo_id = ObjectId(mongo_id)
|
||||
is_project = mongo_id == self.avalon_project_id
|
||||
change_data = from_dict_to_set(changes, is_project)
|
||||
|
||||
filter = {"_id": mongo_id}
|
||||
mongo_changes_bulk.append(UpdateOne(filter, change_data))
|
||||
if not mongo_changes_bulk:
|
||||
# TODO LOG
|
||||
return
|
||||
log.debug("mongo_changes_bulk:: {}".format(mongo_changes_bulk))
|
||||
self.dbcon.bulk_write(mongo_changes_bulk)
|
||||
|
||||
def reload_parents(self, hierarchy_changing_ids):
|
||||
|
|
@ -2105,6 +2150,18 @@ class SyncEntitiesFactory:
|
|||
)
|
||||
|
||||
def compare_dict(self, dict_new, dict_old, _ignore_keys=[]):
"""
Recursively compares and lists changes between dictionaries
'dict_new' and 'dict_old'.
Keys in '_ignore_keys' are skipped and not compared.
Args:
dict_new (dictionary):
dict_old (dictionary):
_ignore_keys (list):

Returns:
(dictionary) of new or updated keys and their values
"""
# _ignore_keys may be used for nested dict keys like "data.visualParent"
|
||||
changes = {}
|
||||
ignore_keys = []
|
||||
|
|
@ -2146,6 +2203,18 @@ class SyncEntitiesFactory:
|
|||
return changes
|
||||
|
||||
def merge_dicts(self, dict_new, dict_old):
"""
Recursively apply all new or updated keys from 'dict_new' onto 'dict_old'.
Does not detect keys that are no longer present in 'dict_new'.
Args:
dict_new (dictionary): from Ftrack most likely
dict_old (dictionary): current in DB

Returns:
(dictionary) of applied changes to original dictionary
"""
|
||||
for key, value in dict_new.items():
|
||||
if key not in dict_old:
|
||||
dict_old[key] = value
|
||||
|
|
|
|||
|
|
@ -195,3 +195,82 @@ class BaseAction(BaseHandler):
|
|||
).format(str(type(result))))
|
||||
|
||||
return result
|
||||
|
||||
|
||||
class ServerAction(BaseAction):
|
||||
"""Action class meant to be used on event server.
|
||||
|
||||
Unlike `BaseAction`, roles are not checked on register but on discover.
|
||||
For the same reason register is modified to not filter topics by username.
|
||||
"""
|
||||
|
||||
def __init__(self, *args, **kwargs):
|
||||
if not self.role_list:
|
||||
self.role_list = set()
|
||||
else:
|
||||
self.role_list = set(
|
||||
role_name.lower()
|
||||
for role_name in self.role_list
|
||||
)
|
||||
super(ServerAction, self).__init__(*args, **kwargs)
|
||||
|
||||
def _register_role_check(self):
|
||||
# Skip register role check.
|
||||
return
|
||||
|
||||
def _discover(self, event):
|
||||
"""Check user discover availability."""
|
||||
if not self._check_user_discover(event):
|
||||
return
|
||||
return super(ServerAction, self)._discover(event)
|
||||
|
||||
def _check_user_discover(self, event):
|
||||
"""Should be action discovered by user trying to show actions."""
|
||||
if not self.role_list:
|
||||
return True
|
||||
|
||||
user_entity = self._get_user_entity(event)
|
||||
if not user_entity:
|
||||
return False
|
||||
|
||||
for role in user_entity["user_security_roles"]:
|
||||
lowered_role = role["security_role"]["name"].lower()
|
||||
if lowered_role in self.role_list:
|
||||
return True
|
||||
return False
|
||||
|
||||
def _get_user_entity(self, event):
|
||||
"""Query user entity from event."""
|
||||
not_set = object()
|
||||
|
||||
# Check if user is already stored in event data
|
||||
user_entity = event["data"].get("user_entity", not_set)
|
||||
if user_entity is not_set:
|
||||
# Query user entity from event
|
||||
user_info = event.get("source", {}).get("user", {})
|
||||
user_id = user_info.get("id")
|
||||
username = user_info.get("username")
|
||||
if user_id:
|
||||
user_entity = self.session.query(
|
||||
"User where id is {}".format(user_id)
|
||||
).first()
|
||||
if not user_entity and username:
|
||||
user_entity = self.session.query(
|
||||
"User where username is {}".format(username)
|
||||
).first()
|
||||
event["data"]["user_entity"] = user_entity
|
||||
|
||||
return user_entity
|
||||
|
||||
def register(self):
|
||||
"""Register subcription to Ftrack event hub."""
|
||||
self.session.event_hub.subscribe(
|
||||
"topic=ftrack.action.discover",
|
||||
self._discover,
|
||||
priority=self.priority
|
||||
)
|
||||
|
||||
launch_subscription = (
|
||||
"topic=ftrack.action.launch and data.actionIdentifier={0}"
|
||||
).format(self.identifier)
|
||||
self.session.event_hub.subscribe(launch_subscription, self._launch)
|
||||
|
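# Illustrative subclass (an assumption, not part of this diff) showing how the
# new ServerAction is meant to be used, assuming the usual BaseAction
# interface of identifier, label, discover and launch:
class StudioOnlyAction(ServerAction):
    identifier = "studio.only.example"  # hypothetical identifier
    label = "Studio Only Example"
    role_list = ["Administrator", "Project Manager"]

    def discover(self, session, entities, event):
        # Role filtering already happened in ServerAction._discover.
        return True

    def launch(self, session, entities, event):
        return {"success": True, "message": "Ran for an allowed role."}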
|
|
|||
|
|
@ -20,7 +20,7 @@ class AppAction(BaseAction):
|
|||
preactions = ["start.timer"]
|
||||
|
||||
def __init__(
|
||||
self, session, label, name, executable, variant=None,
|
||||
self, session, dbcon, label, name, executable, variant=None,
|
||||
icon=None, description=None, preactions=[], plugins_presets={}
|
||||
):
|
||||
self.label = label
|
||||
|
|
@ -31,6 +31,8 @@ class AppAction(BaseAction):
|
|||
self.description = description
|
||||
self.preactions.extend(preactions)
|
||||
|
||||
self.dbcon = dbcon
|
||||
|
||||
super().__init__(session, plugins_presets)
|
||||
if label is None:
|
||||
raise ValueError("Action missing label.")
|
||||
|
|
@ -89,8 +91,10 @@ class AppAction(BaseAction):
|
|||
if avalon_project_apps is None:
|
||||
if avalon_project_doc is None:
|
||||
ft_project = self.get_project_from_entity(entity)
|
||||
database = pypelib.get_avalon_database()
|
||||
project_name = ft_project["full_name"]
|
||||
|
||||
self.dbcon.install()
|
||||
database = self.dbcon.database
|
||||
avalon_project_doc = database[project_name].find_one({
|
||||
"type": "project"
|
||||
}) or False
|
||||
|
|
|
|||
|
|
@ -35,6 +35,7 @@ class BaseHandler(object):
|
|||
type = 'No-type'
|
||||
ignore_me = False
|
||||
preactions = []
|
||||
role_list = []
|
||||
|
||||
def __init__(self, session, plugins_presets=None):
|
||||
'''Expects a ftrack_api.Session instance'''
|
||||
|
|
@ -148,20 +149,27 @@ class BaseHandler(object):
|
|||
def reset_session(self):
|
||||
self.session.reset()
|
||||
|
||||
def _register_role_check(self):
|
||||
if not self.role_list or not isinstance(self.role_list, (list, tuple)):
|
||||
return
|
||||
|
||||
user_entity = self.session.query(
|
||||
"User where username is \"{}\"".format(self.session.api_user)
|
||||
).one()
|
||||
available = False
|
||||
lowercase_rolelist = [
|
||||
role_name.lower()
|
||||
for role_name in self.role_list
|
||||
]
|
||||
for role in user_entity["user_security_roles"]:
|
||||
if role["security_role"]["name"].lower() in lowercase_rolelist:
|
||||
available = True
|
||||
break
|
||||
if available is False:
|
||||
raise MissingPermision
|
||||
|
||||
def _preregister(self):
|
||||
if hasattr(self, "role_list") and len(self.role_list) > 0:
|
||||
username = self.session.api_user
|
||||
user = self.session.query(
|
||||
'User where username is "{}"'.format(username)
|
||||
).one()
|
||||
available = False
|
||||
lowercase_rolelist = [x.lower() for x in self.role_list]
|
||||
for role in user['user_security_roles']:
|
||||
if role['security_role']['name'].lower() in lowercase_rolelist:
|
||||
available = True
|
||||
break
|
||||
if available is False:
|
||||
raise MissingPermision
|
||||
self._register_role_check()
|
||||
|
||||
# Custom validations
|
||||
result = self.preregister()
|
||||
|
|
@ -172,12 +180,11 @@ class BaseHandler(object):
|
|||
).format(self.__class__.__name__))
|
||||
return
|
||||
|
||||
if result is True:
|
||||
return
|
||||
msg = None
|
||||
if isinstance(result, str):
|
||||
msg = result
|
||||
raise PreregisterException(msg)
|
||||
if result is not True:
|
||||
msg = None
|
||||
if isinstance(result, str):
|
||||
msg = result
|
||||
raise PreregisterException(msg)
|
||||
|
||||
def preregister(self):
|
||||
'''
|
||||
|
|
|
|||
|
|
@ -22,8 +22,9 @@ class PhotoshopServerStub():
|
|||
def open(self, path):
|
||||
"""
|
||||
Open file located at 'path' (local).
|
||||
:param path: <string> file path locally
|
||||
:return: None
|
||||
Args:
|
||||
path(string): file path locally
|
||||
Returns: None
|
||||
"""
|
||||
self.websocketserver.call(self.client.call
|
||||
('Photoshop.open', path=path)
|
||||
|
|
@ -32,9 +33,10 @@ class PhotoshopServerStub():
|
|||
def read(self, layer, layers_meta=None):
|
||||
"""
|
||||
Parses layer metadata from Headline field of active document
|
||||
:param layer: <namedTuple Layer("id":XX, "name":"YYY")
|
||||
:param layers_meta: full list from Headline (for performance in loops)
|
||||
:return:
|
||||
Args:
|
||||
layer: <namedTuple Layer("id":XX, "name":"YYY")
|
||||
layers_meta: full list from Headline (for performance in loops)
|
||||
Returns:
|
||||
"""
|
||||
if layers_meta is None:
|
||||
layers_meta = self.get_layers_metadata()
|
||||
|
|
@ -44,22 +46,26 @@ class PhotoshopServerStub():
|
|||
def imprint(self, layer, data, all_layers=None, layers_meta=None):
|
||||
"""
|
||||
Save layer metadata to Headline field of active document
|
||||
:param layer: <namedTuple> Layer("id": XXX, "name":'YYY')
|
||||
:param data: <string> json representation for single layer
|
||||
:param all_layers: <list of namedTuples> - for performance, could be
|
||||
Args:
|
||||
layer (namedtuple): Layer("id": XXX, "name":'YYY')
|
||||
data(string): json representation for single layer
|
||||
all_layers (list of namedtuples): for performance, could be
|
||||
injected for usage in loop, if not, single call will be
|
||||
triggered
|
||||
:param layers_meta: <string> json representation from Headline
|
||||
layers_meta(string): json representation from Headline
|
||||
(for performance - provide only if imprint is in
|
||||
loop - value should be same)
|
||||
:return: None
|
||||
Returns: None
|
||||
"""
|
||||
if not layers_meta:
|
||||
layers_meta = self.get_layers_metadata()
|
||||
# json.dumps writes integer values in a dictionary to string, so
|
||||
# anticipating it here.
|
||||
if str(layer.id) in layers_meta and layers_meta[str(layer.id)]:
|
||||
layers_meta[str(layer.id)].update(data)
|
||||
if data:
|
||||
layers_meta[str(layer.id)].update(data)
|
||||
else:
|
||||
layers_meta.pop(str(layer.id))
|
||||
else:
|
||||
layers_meta[str(layer.id)] = data
|
||||
|
||||
|
|
@ -83,7 +89,7 @@ class PhotoshopServerStub():
|
|||
"""
|
||||
Returns JSON document with all(?) layers in active document.
|
||||
|
||||
:return: <list of namedtuples>
|
||||
Returns: <list of namedtuples>
|
||||
Format of tuple: { 'id':'123',
|
||||
'name': 'My Layer 1',
|
||||
'type': 'GUIDE'|'FG'|'BG'|'OBJ'
|
||||
|
|
@ -97,8 +103,9 @@ class PhotoshopServerStub():
|
|||
def get_layers_in_layers(self, layers):
|
||||
"""
|
||||
Return all layers that belong to layers (might be groups).
|
||||
:param layers: <list of namedTuples>
|
||||
:return: <list of namedTuples>
|
||||
Args:
|
||||
layers <list of namedTuples>:
|
||||
Returns: <list of namedTuples>
|
||||
"""
|
||||
all_layers = self.get_layers()
|
||||
ret = []
|
||||
|
|
@ -116,7 +123,7 @@ class PhotoshopServerStub():
|
|||
def create_group(self, name):
|
||||
"""
|
||||
Create new group (eg. LayerSet)
|
||||
:return: <namedTuple Layer("id":XX, "name":"YYY")>
|
||||
Returns: <namedTuple Layer("id":XX, "name":"YYY")>
|
||||
"""
|
||||
ret = self.websocketserver.call(self.client.call
|
||||
('Photoshop.create_group',
|
||||
|
|
@ -128,7 +135,7 @@ class PhotoshopServerStub():
|
|||
def group_selected_layers(self, name):
|
||||
"""
|
||||
Group selected layers into new LayerSet (eg. group)
|
||||
:return: <json representation of Layer>
|
||||
Returns: <json representation of Layer>
|
||||
"""
|
||||
res = self.websocketserver.call(self.client.call
|
||||
('Photoshop.group_selected_layers',
|
||||
|
|
@ -139,7 +146,7 @@ class PhotoshopServerStub():
|
|||
def get_selected_layers(self):
|
||||
"""
|
||||
Get a list of actually selected layers
|
||||
:return: <list of Layer('id':XX, 'name':"YYY")>
|
||||
Returns: <list of Layer('id':XX, 'name':"YYY")>
|
||||
"""
|
||||
res = self.websocketserver.call(self.client.call
|
||||
('Photoshop.get_selected_layers'))
|
||||
|
|
@ -147,9 +154,10 @@ class PhotoshopServerStub():
|
|||
|
||||
def select_layers(self, layers):
"""
Select specified layers in Photoshop
:param layers: <list of Layer('id':XX, 'name':"YYY")>
:return: None
Selects specified layers in Photoshop by their ids
Args:
layers: <list of Layer('id':XX, 'name':"YYY")>
Returns: None
"""
|
||||
layer_ids = [layer.id for layer in layers]
|
||||
|
||||
|
|
@ -161,7 +169,7 @@ class PhotoshopServerStub():
|
|||
def get_active_document_full_name(self):
|
||||
"""
|
||||
Returns full name with path of active document via ws call
|
||||
:return: <string> full path with name
|
||||
Returns(string): full path with name
|
||||
"""
|
||||
res = self.websocketserver.call(
|
||||
self.client.call('Photoshop.get_active_document_full_name'))
|
||||
|
|
@ -171,7 +179,7 @@ class PhotoshopServerStub():
|
|||
def get_active_document_name(self):
|
||||
"""
|
||||
Returns just a name of active document via ws call
|
||||
:return: <string> file name
|
||||
Returns(string): file name
|
||||
"""
|
||||
res = self.websocketserver.call(self.client.call
|
||||
('Photoshop.get_active_document_name'))
|
||||
|
|
@ -181,7 +189,7 @@ class PhotoshopServerStub():
|
|||
def is_saved(self):
|
||||
"""
|
||||
Returns true if no changes in active document
|
||||
:return: <boolean>
|
||||
Returns: <boolean>
|
||||
"""
|
||||
return self.websocketserver.call(self.client.call
|
||||
('Photoshop.is_saved'))
|
||||
|
|
@ -189,7 +197,7 @@ class PhotoshopServerStub():
|
|||
def save(self):
|
||||
"""
|
||||
Saves active document
|
||||
:return: None
|
||||
Returns: None
|
||||
"""
|
||||
self.websocketserver.call(self.client.call
|
||||
('Photoshop.save'))
|
||||
|
|
@ -197,10 +205,11 @@ class PhotoshopServerStub():
|
|||
def saveAs(self, image_path, ext, as_copy):
|
||||
"""
|
||||
Saves active document to psd (copy) or png or jpg
|
||||
:param image_path: <string> full local path
|
||||
:param ext: <string psd|jpg|png>
|
||||
:param as_copy: <boolean>
|
||||
:return: None
|
||||
Args:
|
||||
image_path(string): full local path
|
||||
ext: <string psd|jpg|png>
|
||||
as_copy: <boolean>
|
||||
Returns: None
|
||||
"""
|
||||
self.websocketserver.call(self.client.call
|
||||
('Photoshop.saveAs',
|
||||
|
|
@ -211,9 +220,10 @@ class PhotoshopServerStub():
|
|||
def set_visible(self, layer_id, visibility):
|
||||
"""
|
||||
Set layer with 'layer_id' to 'visibility'
|
||||
:param layer_id: <int>
|
||||
:param visibility: <true - set visible, false - hide>
|
||||
:return: None
|
||||
Args:
|
||||
layer_id: <int>
|
||||
visibility: <true - set visible, false - hide>
|
||||
Returns: None
|
||||
"""
|
||||
self.websocketserver.call(self.client.call
|
||||
('Photoshop.set_visible',
|
||||
|
|
@ -224,7 +234,7 @@ class PhotoshopServerStub():
|
|||
"""
|
||||
Reads layers metadata from Headline from active document in PS.
|
||||
(Headline accessible by File > File Info)
|
||||
:return: <string> - json documents
|
||||
Returns(string): - json documents
|
||||
"""
|
||||
layers_data = {}
|
||||
res = self.websocketserver.call(self.client.call('Photoshop.read'))
|
||||
|
|
@ -234,22 +244,26 @@ class PhotoshopServerStub():
|
|||
pass
|
||||
return layers_data
|
||||
|
||||
def import_smart_object(self, path):
|
||||
def import_smart_object(self, path, layer_name):
|
||||
"""
|
||||
Import the file at `path` as a smart object to active document.
|
||||
|
||||
Args:
|
||||
path (str): File path to import.
|
||||
layer_name (str): Unique layer name to differentiate repeated loads
of the same smart object
|
||||
"""
|
||||
res = self.websocketserver.call(self.client.call
|
||||
('Photoshop.import_smart_object',
|
||||
path=path))
|
||||
path=path, name=layer_name))
|
||||
|
||||
return self._to_records(res).pop()
|
||||
|
||||
def replace_smart_object(self, layer, path):
|
||||
def replace_smart_object(self, layer, path, layer_name):
|
||||
"""
|
||||
Replace the smart object `layer` with file at `path`
|
||||
layer_name (str): Unique layer name to differentiate repeated loads
of the same smart object
|
||||
|
||||
Args:
|
||||
layer (namedTuple): Layer("id":XX, "name":"YY"..).
|
||||
|
|
@ -257,8 +271,18 @@ class PhotoshopServerStub():
|
|||
"""
|
||||
self.websocketserver.call(self.client.call
|
||||
('Photoshop.replace_smart_object',
|
||||
layer=layer,
|
||||
path=path))
|
||||
layer_id=layer.id,
|
||||
path=path, name=layer_name))
|
||||
|
||||
def delete_layer(self, layer_id):
|
||||
"""
|
||||
Deletes specific layer by its id.
|
||||
Args:
|
||||
layer_id (int): id of layer to delete
|
||||
"""
|
||||
self.websocketserver.call(self.client.call
|
||||
('Photoshop.delete_layer',
|
||||
layer_id=layer_id))
|
||||
|
||||
def close(self):
|
||||
self.client.close()
|
||||
|
|
@ -267,8 +291,8 @@ class PhotoshopServerStub():
|
|||
"""
|
||||
Converts string json representation into list of named tuples for
|
||||
dot notation access to work.
|
||||
:return: <list of named tuples>
|
||||
:param res: <string> - json representation
|
||||
Returns: <list of named tuples>
|
||||
res(string): - json representation
|
||||
"""
|
||||
try:
|
||||
layers_data = json.loads(res)
|
||||
|
|
|
|||
|
|
@ -1,5 +1,6 @@
|
|||
import pyblish.api
|
||||
import json
|
||||
import os
|
||||
|
||||
|
||||
class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
|
||||
|
|
@ -68,6 +69,16 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
|
|||
"name": "thumbnail" # Default component name is "main".
|
||||
}
|
||||
comp['thumbnail'] = True
|
||||
comp_files = comp["files"]
|
||||
if isinstance(comp_files, (tuple, list, set)):
|
||||
filename = comp_files[0]
|
||||
else:
|
||||
filename = comp_files
|
||||
|
||||
comp['published_path'] = os.path.join(
|
||||
comp['stagingDir'], filename
|
||||
)
|
||||
|
||||
elif comp.get('ftrackreview') or ("ftrackreview" in comp.get('tags', [])):
|
||||
'''
|
||||
Ftrack bug requirement:
|
||||
|
|
@ -154,6 +165,7 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
|
|||
# Create copy with ftrack.unmanaged location if thumb or prev
|
||||
if comp.get('thumbnail') or comp.get('preview') \
|
||||
or ("preview" in comp.get('tags', [])) \
|
||||
or ("review" in comp.get('tags', [])) \
|
||||
or ("thumbnail" in comp.get('tags', [])):
|
||||
unmanaged_loc = self.get_ftrack_location(
|
||||
'ftrack.unmanaged', ft_session
|
||||
|
|
|
|||
|
|
@ -143,11 +143,10 @@ class IntegrateHierarchyToFtrack(pyblish.api.ContextPlugin):
|
|||
existing_tasks.append(child['name'].lower())
|
||||
# existing_tasks.append(child['type']['name'])
|
||||
|
||||
for task in tasks:
|
||||
task_name = next(iter(task))
|
||||
task_type = task[task_name]["type"]
|
||||
for task_name in tasks:
|
||||
task_type = tasks[task_name]["type"]
|
||||
if task_name.lower() in existing_tasks:
|
||||
print("Task {} already exists".format(task))
|
||||
print("Task {} already exists".format(task_name))
|
||||
continue
|
||||
tasks_to_create.append((task_name, task_type))
|
||||
|
||||
|
|
|
|||
|
|
@ -21,6 +21,7 @@ class CleanUp(pyblish.api.InstancePlugin):
|
|||
|
||||
# Presets
|
||||
paterns = None # list of regex patterns
|
||||
remove_temp_renders = True
|
||||
|
||||
def process(self, instance):
|
||||
"""Plugin entry point."""
|
||||
|
|
@ -36,8 +37,9 @@ class CleanUp(pyblish.api.InstancePlugin):
|
|||
)
|
||||
)
|
||||
|
||||
self.log.info("Cleaning renders new...")
|
||||
self.clean_renders(instance)
|
||||
if self.remove_temp_renders:
|
||||
self.log.info("Cleaning renders new...")
|
||||
self.clean_renders(instance)
|
||||
|
||||
if [ef for ef in self.exclude_families
|
||||
if instance.data["family"] in ef]:
|
||||
|
|
@ -85,7 +87,11 @@ class CleanUp(pyblish.api.InstancePlugin):
|
|||
if os.path.normpath(src) != os.path.normpath(dest):
|
||||
if instance_family == 'render' or 'render' in current_families:
|
||||
self.log.info("Removing src: `{}`...".format(src))
|
||||
os.remove(src)
|
||||
try:
|
||||
os.remove(src)
|
||||
except PermissionError:
|
||||
self.log.warning("Insufficient permission to delete {}".format(src))
|
||||
continue
|
||||
|
||||
# add dir for cleanup
|
||||
dirnames.append(os.path.dirname(src))
|
||||
|
|
|
|||
|
|
@ -75,12 +75,10 @@ class ExtractBurnin(pype.api.Extractor):
|
|||
# Remove any representations tagged for deletion.
|
||||
# QUESTION Is possible to have representation with "delete" tag?
|
||||
for repre in tuple(instance.data["representations"]):
|
||||
if "delete" in repre.get("tags", []):
|
||||
if all(x in repre.get("tags", []) for x in ['delete', 'burnin']):
|
||||
self.log.debug("Removing representation: {}".format(repre))
|
||||
instance.data["representations"].remove(repre)
|
||||
|
||||
self.log.debug(instance.data["representations"])
|
||||
|
||||
def use_legacy_code(self, instance):
|
||||
presets = instance.context.data.get("presets")
|
||||
if presets is None and self.profiles is None:
|
||||
|
|
@ -230,8 +228,7 @@ class ExtractBurnin(pype.api.Extractor):
|
|||
self.log.debug("Executing: {}".format(args))
|
||||
|
||||
# Run burnin script
|
||||
output = pype.api.subprocess(args, shell=True)
|
||||
self.log.debug("Output: {}".format(output))
|
||||
pype.api.subprocess(args, shell=True, logger=self.log)
|
||||
|
||||
for filepath in temp_data["full_input_paths"]:
|
||||
filepath = filepath.replace("\\", "/")
|
||||
|
|
|
|||
|
|
@ -104,11 +104,10 @@ class ExtractHierarchyToAvalon(pyblish.api.ContextPlugin):
|
|||
new_tasks = data.pop("tasks", {})
|
||||
if "tasks" not in cur_entity_data and not new_tasks:
|
||||
continue
|
||||
for task in new_tasks:
|
||||
task_name = next(iter(task))
|
||||
for task_name in new_tasks:
|
||||
if task_name in cur_entity_data["tasks"].keys():
|
||||
continue
|
||||
cur_entity_data["tasks"][task_name] = task[task_name]
|
||||
cur_entity_data["tasks"][task_name] = new_tasks[task_name]
|
||||
cur_entity_data.update(data)
|
||||
data = cur_entity_data
|
||||
else:
|
||||
|
|
|
|||
|
|
@ -53,6 +53,7 @@ class ExtractReview(pyblish.api.InstancePlugin):
|
|||
to_height = 1080
|
||||
|
||||
def process(self, instance):
|
||||
self.log.debug(instance.data["representations"])
|
||||
# Skip review when requested.
|
||||
if not instance.data.get("review", True):
|
||||
return
|
||||
|
|
@ -79,7 +80,7 @@ class ExtractReview(pyblish.api.InstancePlugin):
|
|||
# Make sure cleanup happens and pop representations with "delete" tag.
|
||||
for repre in tuple(instance.data["representations"]):
|
||||
tags = repre.get("tags") or []
|
||||
if "delete" in tags:
|
||||
if "delete" in tags and "thumbnail" not in tags:
|
||||
instance.data["representations"].remove(repre)
|
||||
|
||||
def main_process(self, instance):
|
||||
|
|
@ -182,8 +183,10 @@ class ExtractReview(pyblish.api.InstancePlugin):
|
|||
|
||||
# run subprocess
|
||||
self.log.debug("Executing: {}".format(subprcs_cmd))
|
||||
output = pype.api.subprocess(subprcs_cmd, shell=True)
|
||||
self.log.debug("Output: {}".format(output))
|
||||
|
||||
pype.api.subprocess(
|
||||
subprcs_cmd, shell=True, logger=self.log
|
||||
)
|
||||
|
||||
output_name = output_def["filename_suffix"]
|
||||
if temp_data["without_handles"]:
|
||||
|
|
@ -240,15 +243,16 @@ class ExtractReview(pyblish.api.InstancePlugin):
|
|||
"""
|
||||
|
||||
frame_start = instance.data["frameStart"]
|
||||
handle_start = instance.data.get(
|
||||
"handleStart",
|
||||
instance.context.data["handleStart"]
|
||||
)
|
||||
frame_end = instance.data["frameEnd"]
|
||||
handle_end = instance.data.get(
|
||||
"handleEnd",
|
||||
instance.context.data["handleEnd"]
|
||||
)
|
||||
|
||||
# Try to get handles from instance
|
||||
handle_start = instance.data.get("handleStart")
|
||||
handle_end = instance.data.get("handleEnd")
|
||||
# If even one of handle values is not set on instance use
|
||||
# handles from context
|
||||
if handle_start is None or handle_end is None:
|
||||
handle_start = instance.context.data["handleStart"]
|
||||
handle_end = instance.context.data["handleEnd"]
|
||||
|
||||
frame_start_handle = frame_start - handle_start
|
||||
frame_end_handle = frame_end + handle_end
|
||||
|
|
@ -262,6 +266,8 @@ class ExtractReview(pyblish.api.InstancePlugin):
|
|||
output_frame_start = frame_start_handle
|
||||
output_frame_end = frame_end_handle
|
||||
|
||||
handles_are_set = handle_start > 0 or handle_end > 0
|
||||
|
||||
return {
|
||||
"fps": float(instance.data["fps"]),
|
||||
"frame_start": frame_start,
|
||||
|
|
@ -277,7 +283,8 @@ class ExtractReview(pyblish.api.InstancePlugin):
|
|||
"resolution_height": instance.data.get("resolutionHeight"),
|
||||
"origin_repre": repre,
|
||||
"input_is_sequence": self.input_is_sequence(repre),
|
||||
"without_handles": without_handles
|
||||
"without_handles": without_handles,
|
||||
"handles_are_set": handles_are_set
|
||||
}
|
||||
|
||||
def _ffmpeg_arguments(self, output_def, instance, new_repre, temp_data):
|
||||
|
|
@ -320,7 +327,8 @@ class ExtractReview(pyblish.api.InstancePlugin):
|
|||
)
|
||||
|
||||
if temp_data["input_is_sequence"]:
|
||||
# Set start frame
|
||||
# Set start frame of input sequence (just frame in filename)
|
||||
# - definition of input filepath
|
||||
ffmpeg_input_args.append(
|
||||
"-start_number {}".format(temp_data["output_frame_start"])
|
||||
)
|
||||
|
|
@ -336,26 +344,37 @@ class ExtractReview(pyblish.api.InstancePlugin):
|
|||
"-framerate {}".format(temp_data["fps"])
|
||||
)
|
||||
|
||||
elif temp_data["without_handles"]:
|
||||
start_sec = float(temp_data["handle_start"]) / temp_data["fps"]
|
||||
ffmpeg_input_args.append("-ss {:0.2f}".format(start_sec))
|
||||
if temp_data["output_is_sequence"]:
|
||||
# Set start frame of output sequence (just frame in filename)
|
||||
# - this is definition of an output
|
||||
ffmpeg_output_args.append(
|
||||
"-start_number {}".format(temp_data["output_frame_start"])
|
||||
)
|
||||
|
||||
# Change output's duration and start point if should not contain
|
||||
# handles
|
||||
if temp_data["without_handles"] and temp_data["handles_are_set"]:
|
||||
# Set start time without handles
|
||||
# - check if handle_start is bigger than 0 to avoid zero division
|
||||
if temp_data["handle_start"] > 0:
|
||||
start_sec = float(temp_data["handle_start"]) / temp_data["fps"]
|
||||
ffmpeg_input_args.append("-ss {:0.2f}".format(start_sec))
|
||||
|
||||
# Set output duration in seconds
|
||||
duration_sec = float(output_frames_len / temp_data["fps"])
|
||||
ffmpeg_output_args.append("-t {:0.2f}".format(duration_sec))
|
||||
|
||||
# Use shortest input
|
||||
ffmpeg_output_args.append("-shortest")
|
||||
# Set frame range of output when input or output is sequence
|
||||
elif temp_data["input_is_sequence"] or temp_data["output_is_sequence"]:
|
||||
ffmpeg_output_args.append("-frames:v {}".format(output_frames_len))
|
||||
|
||||
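# Worked example (illustrative, not part of this diff): with fps 25,
# handle_start 10 and output_frames_len 100, the added arguments are
# "-ss 0.40" (10 / 25) on the input and "-t 4.00" (100 / 25) on the output.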
# Add video/image input path
|
||||
ffmpeg_input_args.append(
|
||||
"-i \"{}\"".format(temp_data["full_input_path"])
|
||||
)
|
||||
|
||||
if temp_data["output_is_sequence"]:
|
||||
# Set start frame
|
||||
ffmpeg_input_args.append(
|
||||
"-start_number {}".format(temp_data["output_frame_start"])
|
||||
)
|
||||
# Use shortest input
|
||||
ffmpeg_output_args.append("-shortest")
|
||||
|
||||
# Add audio arguments if there are any. Skipped when output are images.
|
||||
if not temp_data["output_ext_is_image"]:
|
||||
|
|
|
|||
|
|
@ -4,7 +4,8 @@
|
|||
import json
|
||||
import os
|
||||
import re
|
||||
from copy import copy
|
||||
from copy import copy, deepcopy
|
||||
import pype.api
|
||||
|
||||
import pyblish.api
|
||||
from avalon import api, io
|
||||
|
|
@ -41,40 +42,6 @@ def _get_script(path):
|
|||
return str(path)
|
||||
|
||||
|
||||
def get_latest_version(asset_name, subset_name, family):
|
||||
"""Retrieve latest files concerning extendFrame feature."""
|
||||
# Get asset
|
||||
asset_name = io.find_one(
|
||||
{"type": "asset", "name": asset_name}, projection={"name": True}
|
||||
)
|
||||
|
||||
subset = io.find_one(
|
||||
{"type": "subset", "name": subset_name, "parent": asset_name["_id"]},
|
||||
projection={"_id": True, "name": True},
|
||||
)
|
||||
|
||||
# Check if subsets actually exists (pre-run check)
|
||||
assert subset, "No subsets found, please publish with `extendFrames` off"
|
||||
|
||||
# Get version
|
||||
version_projection = {
|
||||
"name": True,
|
||||
"data.startFrame": True,
|
||||
"data.endFrame": True,
|
||||
"parent": True,
|
||||
}
|
||||
|
||||
version = io.find_one(
|
||||
{"type": "version", "parent": subset["_id"], "data.families": family},
|
||||
projection=version_projection,
|
||||
sort=[("name", -1)],
|
||||
)
|
||||
|
||||
assert version, "No version found, this is a bug"
|
||||
|
||||
return version
|
||||
|
||||
|
||||
def get_resources(version, extension=None):
|
||||
"""Get the files from the specific version."""
|
||||
query = {"type": "representation", "parent": version["_id"]}
|
||||
|
|
@ -249,7 +216,19 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
subset = data["subset"]
|
||||
job_name = "Publish - {subset}".format(subset=subset)
|
||||
|
||||
output_dir = instance.data["outputDir"]
|
||||
# instance.data.get("subset") != instances[0]["subset"]
|
||||
# 'Main' vs 'renderMain'
|
||||
override_version = None
|
||||
instance_version = instance.data.get("version") # take this if exists
|
||||
if instance_version != 1:
|
||||
override_version = instance_version
|
||||
output_dir = self._get_publish_folder(instance.context.data['anatomy'],
|
||||
deepcopy(
|
||||
instance.data["anatomyData"]),
|
||||
instance.data.get("asset"),
|
||||
instances[0]["subset"],
|
||||
'render',
|
||||
override_version)
|
||||
|
||||
# Generate the payload for Deadline submission
|
||||
payload = {
|
||||
|
|
@ -321,7 +300,6 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
payload["JobInfo"].pop("SecondaryPool", None)
|
||||
|
||||
self.log.info("Submitting Deadline job ...")
|
||||
# self.log.info(json.dumps(payload, indent=4, sort_keys=True))
|
||||
|
||||
url = "{}/api/jobs".format(self.DEADLINE_REST_URL)
|
||||
response = requests.post(url, json=payload, timeout=10)
|
||||
|
|
@ -348,9 +326,8 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
|
||||
# get latest version of subset
|
||||
# this will stop if subset wasn't published yet
|
||||
version = get_latest_version(
|
||||
instance.data.get("asset"),
|
||||
instance.data.get("subset"), "render")
|
||||
version = pype.api.get_latest_version(instance.data.get("asset"),
|
||||
instance.data.get("subset"))
|
||||
# get its files based on extension
|
||||
subset_resources = get_resources(version, representation.get("ext"))
|
||||
r_col, _ = clique.assemble(subset_resources)
|
||||
|
|
@ -740,7 +717,7 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
"families": []})
|
||||
|
||||
# skip locking version if we are creating v01
|
||||
instance_version = instance.data.get("version")
|
||||
instance_version = instance.data.get("version") # take this if exists
|
||||
if instance_version != 1:
|
||||
instance_skeleton_data["version"] = instance_version
|
||||
|
||||
|
|
@ -996,11 +973,9 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
prev_start = None
|
||||
prev_end = None
|
||||
|
||||
version = get_latest_version(
|
||||
asset_name=asset,
|
||||
subset_name=subset,
|
||||
family='render'
|
||||
)
|
||||
version = pype.api.get_latest_version(asset_name=asset,
|
||||
subset_name=subset
|
||||
)
|
||||
|
||||
# Set prev start / end frames for comparison
|
||||
if not prev_start and not prev_end:
|
||||
|
|
@ -1016,3 +991,58 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
|
|||
)
|
||||
|
||||
return updated_start, updated_end
|
||||
|
||||
def _get_publish_folder(self, anatomy, template_data,
|
||||
asset, subset,
|
||||
family='render', version=None):
|
||||
"""
|
||||
Extracted logic to pre-calculate real publish folder, which is
|
||||
calculated in IntegrateNew inside of Deadline process.
|
||||
This should match logic in:
|
||||
'collect_anatomy_instance_data' - to
|
||||
get correct anatomy, family, version for subset and
|
||||
'collect_resources_path'
|
||||
get publish_path
|
||||
|
||||
Args:
|
||||
anatomy (pypeapp.lib.anatomy.Anatomy):
|
||||
template_data (dict): pre-calculated collected data for process
|
||||
asset (string): asset name
|
||||
subset (string): subset name (actually group name of subset)
|
||||
family (string): for current deadline process it's always 'render'
|
||||
TODO - for generic use family needs to be dynamically
|
||||
calculated like IntegrateNew does
|
||||
version (int): override version from instance if exists
|
||||
|
||||
Returns:
|
||||
(string): publish folder where rendered and published files will
|
||||
be stored
|
||||
based on 'publish' template
|
||||
"""
|
||||
if not version:
|
||||
version = pype.api.get_latest_version(asset, subset)
|
||||
if version:
|
||||
version = int(version["name"]) + 1
|
||||
|
||||
template_data["subset"] = subset
|
||||
template_data["family"] = "render"
|
||||
template_data["version"] = version
|
||||
|
||||
anatomy_filled = anatomy.format(template_data)
|
||||
|
||||
if "folder" in anatomy.templates["publish"]:
|
||||
publish_folder = anatomy_filled["publish"]["folder"]
|
||||
else:
|
||||
# solve deprecated situation when `folder` key is not underneath
|
||||
# `publish` anatomy
|
||||
project_name = api.Session["AVALON_PROJECT"]
|
||||
self.log.warning((
|
||||
"Deprecation warning: Anatomy does not have set `folder`"
|
||||
" key underneath `publish` (in global of for project `{}`)."
|
||||
).format(project_name))
|
||||
|
||||
file_path = anatomy_filled["publish"]["path"]
|
||||
# Directory
|
||||
publish_folder = os.path.dirname(file_path)
|
||||
|
||||
return publish_folder
|
||||
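# Illustrative call (assumed values, not part of this diff):
#   publish_folder = self._get_publish_folder(
#       anatomy, template_data, "sh010", "renderMain", "render", None)
# With no version override the next version is looked up via
# pype.api.get_latest_version and the project's "publish" anatomy template is
# formatted, yielding something like ".../publish/render/renderMain/v005"
# depending on that template.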
|
|
|
|||
53
pype/plugins/harmony/publish/collect_scene.py
Normal file
|
|
@ -0,0 +1,53 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
"""Collect scene data."""
|
||||
import os
|
||||
|
||||
import pyblish.api
|
||||
from avalon import harmony
|
||||
|
||||
|
||||
class CollectScene(pyblish.api.ContextPlugin):
|
||||
"""Collect basic scene information."""
|
||||
|
||||
label = "Scene Data"
|
||||
order = pyblish.api.CollectorOrder
|
||||
hosts = ["harmony"]
|
||||
|
||||
def process(self, context):
|
||||
|
||||
sig = harmony.signature()
|
||||
func = """function %s()
|
||||
{
|
||||
return [
|
||||
about.getApplicationPath(),
|
||||
scene.currentProjectPath(),
|
||||
scene.currentScene(),
|
||||
scene.getFrameRate(),
|
||||
scene.getStartFrame(),
|
||||
scene.getStopFrame(),
|
||||
sound.getSoundtrackAll().path(),
|
||||
scene.defaultResolutionX(),
|
||||
scene.defaultResolutionY()
|
||||
]
|
||||
}
|
||||
%s
|
||||
""" % (sig, sig)
|
||||
result = harmony.send(
|
||||
{"function": func, "args": []}
|
||||
)["result"]
|
||||
|
||||
context.data["applicationPath"] = result[0]
|
||||
context.data["scenePath"] = os.path.join(
|
||||
result[1], result[2] + ".xstage")
|
||||
context.data["frameRate"] = result[3]
|
||||
context.data["frameStart"] = result[4]
|
||||
context.data["frameEnd"] = result[5]
|
||||
context.data["audioPath"] = result[6]
|
||||
context.data["resolutionWidth"] = result[7]
|
||||
context.data["resolutionHeight"] = result[8]
|
||||
|
||||
all_nodes = harmony.send(
|
||||
{"function": "node.subNodes", "args": ["Top"]}
|
||||
)["result"]
|
||||
|
||||
context.data["allNodes"] = all_nodes
|
||||
|
|
@ -21,30 +21,17 @@ class ExtractRender(pyblish.api.InstancePlugin):
|
|||
|
||||
def process(self, instance):
|
||||
# Collect scene data.
|
||||
sig = harmony.signature()
|
||||
func = """function %s(write_node)
|
||||
{
|
||||
return [
|
||||
about.getApplicationPath(),
|
||||
scene.currentProjectPath(),
|
||||
scene.currentScene(),
|
||||
scene.getFrameRate(),
|
||||
scene.getStartFrame(),
|
||||
scene.getStopFrame(),
|
||||
sound.getSoundtrackAll().path()
|
||||
]
|
||||
}
|
||||
%s
|
||||
""" % (sig, sig)
|
||||
result = harmony.send(
|
||||
{"function": func, "args": [instance[0]]}
|
||||
)["result"]
|
||||
application_path = result[0]
|
||||
scene_path = os.path.join(result[1], result[2] + ".xstage")
|
||||
frame_rate = result[3]
|
||||
frame_start = result[4]
|
||||
frame_end = result[5]
|
||||
audio_path = result[6]
|
||||
|
||||
application_path = instance.context.data.get("applicationPath")
|
||||
scene_path = instance.context.data.get("scenePath")
|
||||
frame_rate = instance.context.data.get("frameRate")
|
||||
frame_start = instance.context.data.get("frameStart")
|
||||
frame_end = instance.context.data.get("frameEnd")
|
||||
audio_path = instance.context.data.get("audioPath")
|
||||
|
||||
if audio_path and os.path.exists(audio_path):
|
||||
self.log.info(f"Using audio from {audio_path}")
|
||||
instance.data["audio"] = [{"filename": audio_path}]
|
||||
|
||||
instance.data["fps"] = frame_rate
|
||||
|
||||
|
|
@ -57,7 +44,7 @@ class ExtractRender(pyblish.api.InstancePlugin):
|
|||
}
|
||||
%s
|
||||
""" % (sig, sig)
|
||||
result = harmony.send(
|
||||
harmony.send(
|
||||
{
|
||||
"function": func,
|
||||
"args": [instance[0], path + "/" + instance.data["name"]]
|
||||
|
|
@ -67,6 +54,7 @@ class ExtractRender(pyblish.api.InstancePlugin):
|
|||
|
||||
# Execute rendering. Ignoring error cause Harmony returns error code
|
||||
# always.
|
||||
self.log.info(f"running [ {application_path} -batch {scene_path}")
|
||||
proc = subprocess.Popen(
|
||||
[application_path, "-batch", scene_path],
|
||||
stdout=subprocess.PIPE,
|
||||
|
|
@ -74,12 +62,16 @@ class ExtractRender(pyblish.api.InstancePlugin):
|
|||
stdin=subprocess.PIPE
|
||||
)
|
||||
output, error = proc.communicate()
|
||||
self.log.info("Click on the line below to see more details.")
|
||||
self.log.info(output.decode("utf-8"))
|
||||
|
||||
# Collect rendered files.
|
||||
self.log.debug(path)
|
||||
self.log.debug(f"collecting from: {path}")
|
||||
files = os.listdir(path)
|
||||
self.log.debug(files)
|
||||
assert files, (
|
||||
"No rendered files found, render failed."
|
||||
)
|
||||
self.log.debug(f"files there: {files}")
|
||||
collections, remainder = clique.assemble(files, minimum_items=1)
|
||||
assert not remainder, (
|
||||
"There should not be a remainder for {0}: {1}".format(
|
||||
|
|
|
|||
|
|
@ -16,9 +16,9 @@ class ExtractTemplate(pype.api.Extractor):
|
|||
|
||||
def process(self, instance):
|
||||
staging_dir = self.staging_dir(instance)
|
||||
filepath = os.path.join(staging_dir, "{}.tpl".format(instance.name))
|
||||
filepath = os.path.join(staging_dir, f"{instance.name}.tpl")
|
||||
|
||||
self.log.info("Outputting template to {}".format(staging_dir))
|
||||
self.log.info(f"Outputting template to {staging_dir}")
|
||||
|
||||
dependencies = []
|
||||
self.get_dependencies(instance[0], dependencies)
|
||||
|
|
@ -31,9 +31,7 @@ class ExtractTemplate(pype.api.Extractor):
|
|||
unique_backdrops = [backdrops[x] for x in set(backdrops.keys())]
|
||||
|
||||
# Get non-connected nodes within backdrops.
|
||||
all_nodes = harmony.send(
|
||||
{"function": "node.subNodes", "args": ["Top"]}
|
||||
)["result"]
|
||||
all_nodes = instance.context.data.get("allNodes")
|
||||
for node in [x for x in all_nodes if x not in dependencies]:
|
||||
within_unique_backdrops = bool(
|
||||
[x for x in self.get_backdrops(node) if x in unique_backdrops]
|
||||
|
|
@ -53,15 +51,15 @@ class ExtractTemplate(pype.api.Extractor):
|
|||
# Prep representation.
|
||||
os.chdir(staging_dir)
|
||||
shutil.make_archive(
|
||||
"{}".format(instance.name),
|
||||
f"{instance.name}",
|
||||
"zip",
|
||||
os.path.join(staging_dir, "{}.tpl".format(instance.name))
|
||||
os.path.join(staging_dir, f"{instance.name}.tpl")
|
||||
)
|
||||
|
||||
representation = {
|
||||
"name": "tpl",
|
||||
"ext": "zip",
|
||||
"files": "{}.zip".format(instance.name),
|
||||
"files": f"{instance.name}.zip",
|
||||
"stagingDir": staging_dir
|
||||
}
|
||||
instance.data["representations"] = [representation]
|
||||
|
|
|
|||
|
|
@ -1,5 +1,8 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
"""Extract work file."""
|
||||
import os
|
||||
import shutil
|
||||
from zipfile import ZipFile
|
||||
|
||||
from avalon import harmony
|
||||
|
||||
|
|
@ -15,13 +18,12 @@ class ExtractWorkfile(pype.api.Extractor):
|
|||
families = ["workfile"]
|
||||
|
||||
def process(self, instance):
|
||||
"""Plugin entry point."""
|
||||
# Export template.
|
||||
backdrops = harmony.send(
|
||||
{"function": "Backdrop.backdrops", "args": ["Top"]}
|
||||
)["result"]
|
||||
nodes = harmony.send(
|
||||
{"function": "node.subNodes", "args": ["Top"]}
|
||||
)["result"]
|
||||
nodes = instance.context.data.get("allNodes")
|
||||
staging_dir = self.staging_dir(instance)
|
||||
filepath = os.path.join(staging_dir, "{}.tpl".format(instance.name))
|
||||
|
||||
|
|
@ -30,15 +32,19 @@ class ExtractWorkfile(pype.api.Extractor):
|
|||
# Prep representation.
|
||||
os.chdir(staging_dir)
|
||||
shutil.make_archive(
|
||||
"{}".format(instance.name),
|
||||
f"{instance.name}",
|
||||
"zip",
|
||||
os.path.join(staging_dir, "{}.tpl".format(instance.name))
|
||||
os.path.join(staging_dir, f"{instance.name}.tpl")
|
||||
)
|
||||
# Check if archive is ok
|
||||
with ZipFile(os.path.basename(f"{instance.name}.zip")) as zr:
|
||||
if zr.testzip() is not None:
|
||||
raise Exception("File archive is corrupted.")
|
||||
|
||||
representation = {
|
||||
"name": "tpl",
|
||||
"ext": "zip",
|
||||
"files": "{}.zip".format(instance.name),
|
||||
"files": f"{instance.name}.zip",
|
||||
"stagingDir": staging_dir
|
||||
}
|
||||
instance.data["representations"] = [representation]
|
||||
|
|
|
|||
|
|
@ -9,7 +9,6 @@ class ValidateAudio(pyblish.api.InstancePlugin):
|
|||
|
||||
If you are sure that you want to send render without audio, you can
|
||||
disable this validator before clicking on "publish"
|
||||
|
||||
"""
|
||||
|
||||
order = pyblish.api.ValidatorOrder
|
||||
|
|
|
|||
|
|
@ -1,3 +1,6 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
"""Validate scene settings."""
|
||||
import os
|
||||
import json
|
||||
|
||||
import pyblish.api
|
||||
|
|
@ -14,9 +17,17 @@ class ValidateSceneSettingsRepair(pyblish.api.Action):
|
|||
on = "failed"
|
||||
|
||||
def process(self, context, plugin):
|
||||
"""Repair action entry point."""
|
||||
pype.hosts.harmony.set_scene_settings(
|
||||
pype.hosts.harmony.get_asset_settings()
|
||||
)
|
||||
if not os.patch.exists(context.data["scenePath"]):
|
||||
self.log.info("correcting scene name")
|
||||
scene_dir = os.path.dirname(context.data["currentFile"])
|
||||
scene_path = os.path.join(
|
||||
scene_dir, os.path.basename(scene_dir) + ".xstage"
|
||||
)
|
||||
harmony.save_scene_as(scene_path)
|
||||
|
||||
|
||||
class ValidateSceneSettings(pyblish.api.InstancePlugin):
|
||||
|
|
@ -31,6 +42,7 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
|
|||
frame_check_filter = ["_ch_", "_pr_", "_intd_", "_extd_"]
|
||||
|
||||
def process(self, instance):
|
||||
"""Plugin entry point."""
|
||||
expected_settings = pype.hosts.harmony.get_asset_settings()
|
||||
self.log.info(expected_settings)
|
||||
|
||||
|
|
@ -46,20 +58,20 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
|
|||
for string in self.frame_check_filter):
|
||||
expected_settings.pop("frameEnd")
|
||||
|
||||
sig = harmony.signature()
|
||||
func = """function %s()
|
||||
{
|
||||
return {
|
||||
"fps": scene.getFrameRate(),
|
||||
"frameStart": scene.getStartFrame(),
|
||||
"frameEnd": scene.getStopFrame(),
|
||||
"resolutionWidth": scene.defaultResolutionX(),
|
||||
"resolutionHeight": scene.defaultResolutionY()
|
||||
};
|
||||
# handle case where ftrack uses only two decimal places
|
||||
# 23.976023976023978 vs. 23.98
|
||||
fps = instance.context.data.get("frameRate")
|
||||
if isinstance(instance.context.data.get("frameRate"), float):
|
||||
fps = float(
|
||||
"{:.2f}".format(instance.context.data.get("frameRate")))
|
||||
|
||||
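# Example (illustrative): float("{:.2f}".format(23.976023976023978)) == 23.98,
# matching the two-decimal frame rate stored by ftrack.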
current_settings = {
|
||||
"fps": fps,
|
||||
"frameStart": instance.context.data.get("frameStart"),
|
||||
"frameEnd": instance.context.data.get("frameEnd"),
|
||||
"resolutionWidth": instance.context.data.get("resolutionWidth"),
|
||||
"resolutionHeight": instance.context.data.get("resolutionHeight"),
|
||||
}
|
||||
%s
|
||||
""" % (sig, sig)
|
||||
current_settings = harmony.send({"function": func})["result"]
|
||||
|
||||
invalid_settings = []
|
||||
for key, value in expected_settings.items():
|
||||
|
|
@ -74,3 +86,6 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
|
|||
json.dumps(invalid_settings, sort_keys=True, indent=4)
|
||||
)
|
||||
assert not invalid_settings, msg
|
||||
assert os.path.exists(instance.context.data.get("scenePath")), (
|
||||
"Scene file not found (saved under wrong name)"
|
||||
)
|
||||
|
|
|
|||
|
|
@ -142,9 +142,10 @@ class CollectClips(api.ContextPlugin):
|
|||
"asset": asset,
|
||||
"family": "clip",
|
||||
"families": [],
|
||||
"handleStart": projectdata.get("handleStart", 0),
|
||||
"handleEnd": projectdata.get("handleEnd", 0)})
|
||||
|
||||
"handleStart": int(projectdata.get("handleStart", 0)),
|
||||
"handleEnd": int(projectdata.get("handleEnd", 0)),
|
||||
"fps": context.data["fps"]
|
||||
})
|
||||
instance = context.create_instance(**data)
|
||||
|
||||
self.log.info("Created instance: {}".format(instance))
|
||||
|
|
|
|||
|
|
@ -4,13 +4,14 @@ from pyblish import api
|
|||
class CollectFramerate(api.ContextPlugin):
|
||||
"""Collect framerate from selected sequence."""
|
||||
|
||||
order = api.CollectorOrder + 0.01
|
||||
order = api.CollectorOrder + 0.001
|
||||
label = "Collect Framerate"
|
||||
hosts = ["hiero"]
|
||||
|
||||
def process(self, context):
|
||||
sequence = context.data["activeSequence"]
|
||||
context.data["fps"] = self.get_rate(sequence)
|
||||
self.log.info("Framerate is collected: {}".format(context.data["fps"]))
|
||||
|
||||
def get_rate(self, sequence):
|
||||
num, den = sequence.framerate().toRational()
|
||||
|
|
|
|||
|
|
@ -218,7 +218,7 @@ class CollectHierarchyContext(pyblish.api.ContextPlugin):
|
|||
'''
|
||||
|
||||
label = "Collect Hierarchy Context"
|
||||
order = pyblish.api.CollectorOrder + 0.102
|
||||
order = pyblish.api.CollectorOrder + 0.103
|
||||
|
||||
def update_dict(self, ex_dict, new_dict):
|
||||
for key in ex_dict:
|
||||
|
|
|
|||
|
|
@ -183,7 +183,7 @@ class CollectPlatesData(api.InstancePlugin):
|
|||
"frameEnd": instance.data["sourceOut"] - instance.data["sourceIn"] + 1,
|
||||
'step': 1,
|
||||
'fps': instance.context.data["fps"],
|
||||
'tags': ["preview"],
|
||||
'tags': ["review"],
|
||||
'name': "preview",
|
||||
'ext': "mov",
|
||||
}
|
||||
|
|
@ -192,11 +192,12 @@ class CollectPlatesData(api.InstancePlugin):
|
|||
instance.data["representations"].append(
|
||||
plates_mov_representation)
|
||||
|
||||
thumb_frame = instance.data["clipInH"] + (
|
||||
(instance.data["clipOutH"] - instance.data["clipInH"]) / 2)
|
||||
thumb_frame = instance.data["sourceInH"] + (
|
||||
(instance.data["sourceOutH"] - instance.data["sourceInH"]) / 2)
|
||||
thumb_file = "{}_{}{}".format(head, thumb_frame, ".png")
|
||||
thumb_path = os.path.join(staging_dir, thumb_file)
|
||||
|
||||
self.log.debug("__ thumb_path: `{}`, frame: `{}`".format(
|
||||
thumb_path, thumb_frame))
|
||||
thumbnail = item.thumbnail(thumb_frame).save(
|
||||
thumb_path,
|
||||
format='png'
|
||||
|
|
|
|||
|
|
@ -1,9 +1,11 @@
|
|||
import os
|
||||
import re
|
||||
import clique
|
||||
|
||||
from pyblish import api
|
||||
|
||||
|
||||
class CollectReviews(api.InstancePlugin):
|
||||
class CollectReview(api.InstancePlugin):
|
||||
"""Collect review from tags.
|
||||
|
||||
Tag is expected to have metadata:
|
||||
|
|
@ -15,11 +17,13 @@ class CollectReviews(api.InstancePlugin):
|
|||
|
||||
# Run just before CollectSubsets
|
||||
order = api.CollectorOrder + 0.1022
|
||||
label = "Collect Reviews"
|
||||
label = "Collect Review"
|
||||
hosts = ["hiero"]
|
||||
families = ["plate"]
|
||||
|
||||
def process(self, instance):
|
||||
is_sequence = instance.data["isSequence"]
|
||||
|
||||
# Exclude non-tagged instances.
|
||||
tagged = False
|
||||
for tag in instance.data["tags"]:
|
||||
|
|
@ -84,7 +88,29 @@ class CollectReviews(api.InstancePlugin):
|
|||
file_path = rev_inst.data.get("sourcePath")
|
||||
file_dir = os.path.dirname(file_path)
|
||||
file = os.path.basename(file_path)
|
||||
ext = os.path.splitext(file)[-1][1:]
|
||||
ext = os.path.splitext(file)[-1]
|
||||
|
||||
# detect if sequence
|
||||
if not is_sequence:
|
||||
# is video file
|
||||
files = file
|
||||
else:
|
||||
files = list()
|
||||
source_first = instance.data["sourceFirst"]
|
||||
self.log.debug("_ file: {}".format(file))
|
||||
spliter, padding = self.detect_sequence(file)
|
||||
self.log.debug("_ spliter, padding: {}, {}".format(
|
||||
spliter, padding))
|
||||
base_name = file.split(spliter)[0]
|
||||
collection = clique.Collection(base_name, ext, padding, set(range(
|
||||
int(source_first + rev_inst.data.get("sourceInH")),
|
||||
int(source_first + rev_inst.data.get("sourceOutH") + 1))))
|
||||
self.log.debug("_ collection: {}".format(collection))
|
||||
real_files = os.listdir(file_dir)
|
||||
for item in collection:
|
||||
if item not in real_files:
|
||||
continue
|
||||
files.append(item)
|
||||
|
||||
# change label
|
||||
instance.data["label"] = "{0} - {1} - ({2})".format(
|
||||
|
|
@ -95,7 +121,7 @@ class CollectReviews(api.InstancePlugin):
|
|||
|
||||
# adding representation for review mov
|
||||
representation = {
|
||||
"files": file,
|
||||
"files": files,
|
||||
"stagingDir": file_dir,
|
||||
"frameStart": rev_inst.data.get("sourceIn"),
|
||||
"frameEnd": rev_inst.data.get("sourceOut"),
|
||||
|
|
@ -103,9 +129,9 @@ class CollectReviews(api.InstancePlugin):
|
|||
"frameEndFtrack": rev_inst.data.get("sourceOutH"),
|
||||
"step": 1,
|
||||
"fps": rev_inst.data.get("fps"),
|
||||
"name": "preview",
|
||||
"tags": ["preview", "ftrackreview"],
|
||||
"ext": ext
|
||||
"name": "review",
|
||||
"tags": ["review", "ftrackreview"],
|
||||
"ext": ext[1:]
|
||||
}
|
||||
|
||||
media_duration = instance.data.get("mediaDuration")
|
||||
|
|
@ -137,31 +163,41 @@ class CollectReviews(api.InstancePlugin):
|
|||
|
||||
source_path = instance.data["sourcePath"]
|
||||
source_file = os.path.basename(source_path)
|
||||
head, ext = os.path.splitext(source_file)
|
||||
spliter, padding = self.detect_sequence(source_file)
|
||||
|
||||
if spliter:
|
||||
head, ext = source_file.split(spliter)
|
||||
else:
|
||||
head, ext = os.path.splitext(source_file)
|
||||
|
||||
# staging dir creation
|
||||
staging_dir = os.path.dirname(
|
||||
source_path)
|
||||
|
||||
thumb_frame = instance.data["clipInH"] + (
|
||||
(instance.data["clipOutH"] - instance.data["clipInH"]) / 2)
|
||||
thumb_file = "{}_{}{}".format(head, thumb_frame, ".png")
|
||||
media_duration = instance.data.get("mediaDuration")
|
||||
clip_duration_h = instance.data.get("clipDurationH")
|
||||
self.log.debug("__ media_duration: {}".format(media_duration))
|
||||
self.log.debug("__ clip_duration_h: {}".format(clip_duration_h))
|
||||
|
||||
thumb_frame = int(instance.data["sourceIn"] + (
|
||||
(instance.data["sourceOut"] - instance.data["sourceIn"]) / 2))
|
||||
|
||||
thumb_file = "{}thumbnail{}{}".format(head, thumb_frame, ".png")
|
||||
thumb_path = os.path.join(staging_dir, thumb_file)
|
||||
self.log.debug("__ thumb_path: {}".format(thumb_path))
|
||||
|
||||
self.log.debug("__ thumb_frame: {}".format(thumb_frame))
|
||||
self.log.debug(
|
||||
"__ sourceIn: `{}`".format(instance.data["sourceIn"]))
|
||||
|
||||
thumbnail = item.thumbnail(thumb_frame).save(
|
||||
thumb_path,
|
||||
format='png'
|
||||
)
|
||||
|
||||
self.log.debug(
|
||||
"__ sourceIn: `{}`".format(instance.data["sourceIn"]))
|
||||
self.log.debug(
|
||||
"__ thumbnail: `{}`, frame: `{}`".format(thumbnail, thumb_frame))
|
||||
|
||||
self.log.debug("__ thumbnail: {}".format(thumbnail))
|
||||
|
||||
thumb_representation = {
|
||||
'files': thumb_file,
|
||||
'stagingDir': staging_dir,
|
||||
|
|
@ -199,3 +235,26 @@ class CollectReviews(api.InstancePlugin):
|
|||
instance.data["versionData"] = version_data
|
||||
|
||||
instance.data["source"] = instance.data["sourcePath"]
|
||||
|
||||
def detect_sequence(self, file):
|
||||
""" Get identificating pater for image sequence
|
||||
|
||||
Can find file.0001.ext, file.%02d.ext, file.####.ext
|
||||
|
||||
Return:
|
||||
string: any matching sequence pattern
|
||||
int: padding of sequence numbering
|
||||
"""
|
||||
foundall = re.findall(
|
||||
r"(#+)|(%\d+d)|(?<=[^a-zA-Z0-9])(\d+)(?=\.\w+$)", file)
|
||||
if foundall:
|
||||
found = sorted(list(set(foundall[0])))[-1]
|
||||
|
||||
if "%" in found:
|
||||
padding = int(re.findall(r"\d+", found)[-1])
|
||||
else:
|
||||
padding = len(found)
|
||||
|
||||
return found, padding
|
||||
else:
|
||||
return None, None
|
||||
|
|
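To make the sequence-pattern detection above concrete, here is a minimal standalone sketch of the same regex logic (re-implemented outside the plugin class purely for illustration; the file names are invented):

import re

def detect_sequence(file):
    # same expression as the collector: matches "####", "%04d" style
    # placeholders, or a trailing frame number before the extension
    foundall = re.findall(
        r"(#+)|(%\d+d)|(?<=[^a-zA-Z0-9])(\d+)(?=\.\w+$)", file)
    if not foundall:
        return None, None
    found = sorted(list(set(foundall[0])))[-1]
    if "%" in found:
        padding = int(re.findall(r"\d+", found)[-1])
    else:
        padding = len(found)
    return found, padding

print(detect_sequence("plate.%04d.exr"))   # ('%04d', 4)
print(detect_sequence("plate.####.exr"))   # ('####', 4)
print(detect_sequence("plate.1001.exr"))   # ('1001', 4)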
@ -43,7 +43,7 @@ class CollectShots(api.InstancePlugin):
|
|||
"{} - {} - tasks:{} - assetbuilds:{} - comments:{}".format(
|
||||
data["asset"],
|
||||
data["subset"],
|
||||
[task.keys()[0] for task in data["tasks"]],
|
||||
[task for task in data["tasks"]],
|
||||
[x["name"] for x in data.get("assetbuilds", [])],
|
||||
len(data.get("comments", []))
|
||||
)
|
||||
|
|
|
|||
|
|
@ -13,7 +13,7 @@ class CollectClipTagTasks(api.InstancePlugin):
|
|||
# gets tags
|
||||
tags = instance.data["tags"]
|
||||
|
||||
tasks = list()
|
||||
tasks = dict()
|
||||
for t in tags:
|
||||
t_metadata = dict(t["metadata"])
|
||||
t_family = t_metadata.get("tag.family", "")
|
||||
|
|
@ -22,7 +22,7 @@ class CollectClipTagTasks(api.InstancePlugin):
|
|||
if "task" in t_family:
|
||||
t_task_name = t_metadata.get("tag.label", "")
|
||||
t_task_type = t_metadata.get("tag.type", "")
|
||||
tasks.append({t_task_name: {"type": t_task_type}})
|
||||
tasks[t_task_name] = {"type": t_task_type}
|
||||
|
||||
instance.data["tasks"] = tasks
|
||||
|
||||
|
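For clarity, the change above to how instance.data["tasks"] is collected amounts to the following data-shape difference (the task name is only an example):

# before: list of single-key dictionaries
tasks_old = [{"compositing": {"type": "Compositing"}}]

# after: plain dictionary keyed by task name
tasks_new = {"compositing": {"type": "Compositing"}}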
|
|
|||
335
pype/plugins/hiero/publish/extract_review_cutup.py
Normal file
|
|
@ -0,0 +1,335 @@
|
|||
import os
|
||||
import sys
|
||||
import six
|
||||
import errno
|
||||
from pyblish import api
|
||||
import pype
|
||||
import clique
|
||||
from avalon.vendor import filelink
|
||||
|
||||
|
||||
class ExtractReviewCutUp(pype.api.Extractor):
|
||||
"""Cut up clips from long video file"""
|
||||
|
||||
order = api.ExtractorOrder
|
||||
# order = api.CollectorOrder + 0.1023
|
||||
label = "Extract Review CutUp"
|
||||
hosts = ["hiero"]
|
||||
families = ["review"]
|
||||
|
||||
# presets
|
||||
tags_addition = []
|
||||
|
||||
def process(self, instance):
|
||||
inst_data = instance.data
|
||||
asset = inst_data['asset']
|
||||
|
||||
# get representation and loop them
|
||||
representations = inst_data["representations"]
|
||||
|
||||
# check if sequence
|
||||
is_sequence = inst_data["isSequence"]
|
||||
|
||||
# get resolution default
|
||||
resolution_width = inst_data["resolutionWidth"]
|
||||
resolution_height = inst_data["resolutionHeight"]
|
||||
|
||||
# frame range data
|
||||
media_duration = inst_data["mediaDuration"]
|
||||
|
||||
ffmpeg_path = pype.lib.get_ffmpeg_tool_path("ffmpeg")
|
||||
ffprobe_path = pype.lib.get_ffmpeg_tool_path("ffprobe")
|
||||
|
||||
# filter out mov and img sequences
|
||||
representations_new = representations[:]
|
||||
for repre in representations:
|
||||
input_args = list()
|
||||
output_args = list()
|
||||
|
||||
tags = repre.get("tags", [])
|
||||
|
||||
# check if supported tags are in representation for activation
|
||||
filter_tag = False
|
||||
for tag in ["_cut-bigger", "_cut-smaller"]:
|
||||
if tag in tags:
|
||||
filter_tag = True
|
||||
break
|
||||
if not filter_tag:
|
||||
continue
|
||||
|
||||
self.log.debug("__ repre: {}".format(repre))
|
||||
|
||||
files = repre.get("files")
|
||||
staging_dir = repre.get("stagingDir")
|
||||
fps = repre.get("fps")
|
||||
ext = repre.get("ext")
|
||||
|
||||
# make paths
|
||||
full_output_dir = os.path.join(
|
||||
staging_dir, "cuts")
|
||||
|
||||
if is_sequence:
|
||||
new_files = list()
|
||||
|
||||
# frame range for delivery, including handles
|
||||
frame_start = (
|
||||
inst_data["frameStart"] - inst_data["handleStart"])
|
||||
frame_end = (
|
||||
inst_data["frameEnd"] + inst_data["handleEnd"])
|
||||
self.log.debug("_ frame_start: {}".format(frame_start))
|
||||
self.log.debug("_ frame_end: {}".format(frame_end))
|
||||
|
||||
# make collection from input files list
|
||||
collections, remainder = clique.assemble(files)
|
||||
collection = collections.pop()
|
||||
self.log.debug("_ collection: {}".format(collection))
|
||||
|
||||
# name components
|
||||
head = collection.format("{head}")
|
||||
padding = collection.format("{padding}")
|
||||
tail = collection.format("{tail}")
|
||||
self.log.debug("_ head: {}".format(head))
|
||||
self.log.debug("_ padding: {}".format(padding))
|
||||
self.log.debug("_ tail: {}".format(tail))
|
||||
|
||||
# make destination file with instance data
|
||||
# frame start and end range
|
||||
index = 0
|
||||
for image in collection:
|
||||
dst_file_num = frame_start + index
|
||||
dst_file_name = head + str(padding % dst_file_num) + tail
|
||||
src = os.path.join(staging_dir, image)
|
||||
dst = os.path.join(full_output_dir, dst_file_name)
|
||||
self.log.info("Creating temp hardlinks: {}".format(dst))
|
||||
self.hardlink_file(src, dst)
|
||||
new_files.append(dst_file_name)
|
||||
index += 1
|
||||
|
||||
self.log.debug("_ new_files: {}".format(new_files))
|
||||
|
||||
else:
|
||||
# ffmpeg when single file
|
||||
new_files = "{}_{}".format(asset, files)
|
||||
|
||||
# frame range
|
||||
frame_start = repre.get("frameStart")
|
||||
frame_end = repre.get("frameEnd")
|
||||
|
||||
full_input_path = os.path.join(
|
||||
staging_dir, files)
|
||||
|
||||
os.path.isdir(full_output_dir) or os.makedirs(full_output_dir)
|
||||
|
||||
full_output_path = os.path.join(
|
||||
full_output_dir, new_files)
|
||||
|
||||
self.log.debug(
|
||||
"__ full_input_path: {}".format(full_input_path))
|
||||
self.log.debug(
|
||||
"__ full_output_path: {}".format(full_output_path))
|
||||
|
||||
# check if audio stream is in input video file
|
||||
ffprob_cmd = (
|
||||
"{ffprobe_path} -i \"{full_input_path}\" -show_streams "
|
||||
"-select_streams a -loglevel error"
|
||||
).format(**locals())
|
||||
|
||||
self.log.debug("ffprob_cmd: {}".format(ffprob_cmd))
|
||||
audio_check_output = pype.api.subprocess(ffprob_cmd)
|
||||
self.log.debug(
|
||||
"audio_check_output: {}".format(audio_check_output))
|
||||
|
||||
# Fix one frame difference
|
||||
""" TODO: this is just work-around for issue:
|
||||
https://github.com/pypeclub/pype/issues/659
|
||||
"""
|
||||
frame_duration_extend = 1
|
||||
if audio_check_output:
|
||||
frame_duration_extend = 0
|
||||
|
||||
# translate frame to sec
|
||||
start_sec = float(frame_start) / fps
|
||||
duration_sec = float(
|
||||
(frame_end - frame_start) + frame_duration_extend) / fps
|
||||
|
||||
empty_add = None
|
||||
|
||||
# check if not missing frames at start
|
||||
if (start_sec < 0) or (media_duration < frame_end):
|
||||
# for later switching off `-c:v copy` output arg
|
||||
empty_add = True
|
||||
|
||||
# init empty variables
|
||||
video_empty_start = video_layer_start = ""
|
||||
audio_empty_start = audio_layer_start = ""
|
||||
video_empty_end = video_layer_end = ""
|
||||
audio_empty_end = audio_layer_end = ""
|
||||
audio_input = audio_output = ""
|
||||
v_inp_idx = 0
|
||||
concat_n = 1
|
||||
|
||||
# try to get video native resolution data
|
||||
try:
|
||||
resolution_output = pype.api.subprocess((
|
||||
"{ffprobe_path} -i \"{full_input_path}\" -v error "
|
||||
"-select_streams v:0 -show_entries "
|
||||
"stream=width,height -of csv=s=x:p=0"
|
||||
).format(**locals()))
|
||||
|
||||
x, y = resolution_output.split("x")
|
||||
resolution_width = int(x)
|
||||
resolution_height = int(y)
|
||||
except Exception as _ex:
|
||||
self.log.warning(
|
||||
"Video native resolution is untracable: {}".format(
|
||||
_ex))
|
||||
|
||||
if audio_check_output:
|
||||
# adding input for empty audio
|
||||
input_args.append("-f lavfi -i anullsrc")
|
||||
|
||||
# define audio empty concat variables
|
||||
audio_input = "[1:a]"
|
||||
audio_output = ":a=1"
|
||||
v_inp_idx = 1
|
||||
|
||||
# adding input for video black frame
|
||||
input_args.append((
|
||||
"-f lavfi -i \"color=c=black:"
|
||||
"s={resolution_width}x{resolution_height}:r={fps}\""
|
||||
).format(**locals()))
|
||||
|
||||
if (start_sec < 0):
|
||||
# recalculate input video timing
|
||||
empty_start_dur = abs(start_sec)
|
||||
start_sec = 0
|
||||
duration_sec = float(frame_end - (
|
||||
frame_start + (empty_start_dur * fps)) + 1) / fps
|
||||
|
||||
# define starting empty video concat variables
|
||||
video_empty_start = (
|
||||
"[{v_inp_idx}]trim=duration={empty_start_dur}[gv0];" # noqa
|
||||
).format(**locals())
|
||||
video_layer_start = "[gv0]"
|
||||
|
||||
if audio_check_output:
|
||||
# define starting empty audio concat variables
|
||||
audio_empty_start = (
|
||||
"[0]atrim=duration={empty_start_dur}[ga0];"
|
||||
).format(**locals())
|
||||
audio_layer_start = "[ga0]"
|
||||
|
||||
# alter concat number of clips
|
||||
concat_n += 1
|
||||
|
||||
# check if not missing frames at the end
|
||||
if (media_duration < frame_end):
|
||||
# recalculate timing
|
||||
empty_end_dur = float(
|
||||
frame_end - media_duration + 1) / fps
|
||||
duration_sec = float(
|
||||
media_duration - frame_start) / fps
|
||||
|
||||
# define ending empty video concat variables
|
||||
video_empty_end = (
|
||||
"[{v_inp_idx}]trim=duration={empty_end_dur}[gv1];"
|
||||
).format(**locals())
|
||||
video_layer_end = "[gv1]"
|
||||
|
||||
if audio_check_output:
|
||||
# define ending empty audio concat variables
|
||||
audio_empty_end = (
|
||||
"[0]atrim=duration={empty_end_dur}[ga1];"
|
||||
).format(**locals())
|
||||
audio_layer_end = "[ga1]"
|
||||
|
||||
# alter concat number of clips
|
||||
concat_n += 1
|
||||
|
||||
# concatenating black frames together
|
||||
output_args.append((
|
||||
"-filter_complex \""
|
||||
"{audio_empty_start}"
|
||||
"{video_empty_start}"
|
||||
"{audio_empty_end}"
|
||||
"{video_empty_end}"
|
||||
"{video_layer_start}{audio_layer_start}[1:v]{audio_input}" # noqa
|
||||
"{video_layer_end}{audio_layer_end}"
|
||||
"concat=n={concat_n}:v=1{audio_output}\""
|
||||
).format(**locals()))
|
||||
|
||||
# append ffmpeg input video clip
|
||||
input_args.append("-ss {:0.2f}".format(start_sec))
|
||||
input_args.append("-t {:0.2f}".format(duration_sec))
|
||||
input_args.append("-i \"{}\"".format(full_input_path))
|
||||
|
||||
# add copy audio video codec if only shortening clip
|
||||
if ("_cut-bigger" in tags) and (not empty_add):
|
||||
output_args.append("-c:v copy")
|
||||
|
||||
# make sure there is no frame-to-frame compression
|
||||
output_args.append("-intra")
|
||||
|
||||
# output filename
|
||||
output_args.append("-y \"{}\"".format(full_output_path))
|
||||
|
||||
mov_args = [
|
||||
ffmpeg_path,
|
||||
" ".join(input_args),
|
||||
" ".join(output_args)
|
||||
]
|
||||
subprcs_cmd = " ".join(mov_args)
|
||||
|
||||
# run subprocess
|
||||
self.log.debug("Executing: {}".format(subprcs_cmd))
|
||||
output = pype.api.subprocess(subprcs_cmd)
|
||||
self.log.debug("Output: {}".format(output))
|
||||
|
||||
repre_new = {
|
||||
"files": new_files,
|
||||
"stagingDir": full_output_dir,
|
||||
"frameStart": frame_start,
|
||||
"frameEnd": frame_end,
|
||||
"frameStartFtrack": frame_start,
|
||||
"frameEndFtrack": frame_end,
|
||||
"step": 1,
|
||||
"fps": fps,
|
||||
"name": "cut_up_preview",
|
||||
"tags": ["review"] + self.tags_addition,
|
||||
"ext": ext,
|
||||
"anatomy_template": "publish"
|
||||
}
|
||||
|
||||
representations_new.append(repre_new)
|
||||
|
||||
for repre in representations_new:
|
||||
if ("delete" in repre.get("tags", [])) and (
|
||||
"cut_up_preview" not in repre["name"]):
|
||||
representations_new.remove(repre)
|
||||
|
||||
self.log.debug(
|
||||
"Representations: {}".format(representations_new))
|
||||
instance.data["representations"] = representations_new
|
||||
|
||||
def hardlink_file(self, src, dst):
|
||||
dirname = os.path.dirname(dst)
|
||||
|
||||
# make sure the destination folder exist
|
||||
try:
|
||||
os.makedirs(dirname)
|
||||
except OSError as e:
|
||||
if e.errno == errno.EEXIST:
|
||||
pass
|
||||
else:
|
||||
self.log.critical("An unexpected error occurred.")
|
||||
six.reraise(*sys.exc_info())
|
||||
|
||||
# create hardlined file
|
||||
try:
|
||||
filelink.create(src, dst, filelink.HARDLINK)
|
||||
except OSError as e:
|
||||
if e.errno == errno.EEXIST:
|
||||
pass
|
||||
else:
|
||||
self.log.critical("An unexpected error occurred.")
|
||||
six.reraise(*sys.exc_info())
|
||||
|
|
@ -119,13 +119,14 @@ class LoadSequence(api.Loader):
|
|||
repr_cont = context["representation"]["context"]
|
||||
if "#" not in file:
|
||||
frame = repr_cont.get("frame")
|
||||
padding = len(frame)
|
||||
file = file.replace(frame, "#" * padding)
|
||||
if frame:
|
||||
padding = len(frame)
|
||||
file = file.replace(frame, "#" * padding)
|
||||
|
||||
read_name = "Read_{0}_{1}_{2}".format(
|
||||
repr_cont["asset"],
|
||||
repr_cont["subset"],
|
||||
repr_cont["representation"])
|
||||
context["representation"]["name"])
|
||||
|
||||
# Create the Loader with the filename path set
|
||||
with viewer_update_and_undo_stop():
|
||||
|
|
@ -249,8 +250,9 @@ class LoadSequence(api.Loader):
|
|||
|
||||
if "#" not in file:
|
||||
frame = repr_cont.get("frame")
|
||||
padding = len(frame)
|
||||
file = file.replace(frame, "#" * padding)
|
||||
if frame:
|
||||
padding = len(frame)
|
||||
file = file.replace(frame, "#" * padding)
|
||||
|
||||
# Get start frame from version data
|
||||
version = io.find_one({
|
||||
|
|
|
|||
|
|
@ -73,19 +73,24 @@ class CollectNukeInstances(pyblish.api.ContextPlugin):
|
|||
# Add all nodes in group instances.
|
||||
if node.Class() == "Group":
|
||||
# only alter families for render family
|
||||
if "write" == families_ak:
|
||||
if node["render"].value():
|
||||
self.log.info("flagged for render")
|
||||
add_family = "{}.local".format("render")
|
||||
# dealing with local/farm rendering
|
||||
if node["render_farm"].value():
|
||||
self.log.info("adding render farm family")
|
||||
add_family = "{}.farm".format("render")
|
||||
instance.data["transfer"] = False
|
||||
families.append(add_family)
|
||||
if "render" in families:
|
||||
families.remove("render")
|
||||
family = "write"
|
||||
if "write" in families_ak:
|
||||
target = node["render"].value()
|
||||
if target == "Use existing frames":
|
||||
# Local rendering
|
||||
self.log.info("flagged for no render")
|
||||
families.append("render")
|
||||
elif target == "Local":
|
||||
# Local rendering
|
||||
self.log.info("flagged for local render")
|
||||
families.append("{}.local".format("render"))
|
||||
elif target == "On farm":
|
||||
# Farm rendering
|
||||
self.log.info("flagged for farm render")
|
||||
instance.data["transfer"] = False
|
||||
families.append("{}.farm".format("render"))
|
||||
if "render" in families:
|
||||
families.remove("render")
|
||||
family = "write"
|
||||
|
||||
node.begin()
|
||||
for i in nuke.allNodes():
|
||||
|
|
|
|||
|
|
@ -1,31 +0,0 @@
|
|||
import pyblish.api
|
||||
|
||||
|
||||
class CollectWriteLegacy(pyblish.api.InstancePlugin):
|
||||
"""Collect legacy write nodes."""
|
||||
|
||||
order = pyblish.api.CollectorOrder + 0.0101
|
||||
label = "Collect Write node Legacy"
|
||||
hosts = ["nuke", "nukeassist"]
|
||||
|
||||
def process(self, instance):
|
||||
self.log.info(instance[:])
|
||||
node = instance[0]
|
||||
|
||||
if node.Class() not in ["Group", "Write"]:
|
||||
return
|
||||
|
||||
family_knobs = ["ak:family", "avalon:family"]
|
||||
test = [k for k in node.knobs().keys() if k in family_knobs]
|
||||
self.log.info(test)
|
||||
|
||||
if len(test) == 1:
|
||||
if "render" in node[test[0]].value():
|
||||
self.log.info("render")
|
||||
return
|
||||
|
||||
if "render" in node.knobs():
|
||||
instance.data.update(
|
||||
{"family": "write.legacy",
|
||||
"families": []}
|
||||
)
|
||||
|
|
@ -26,20 +26,24 @@ class CollectReview(pyblish.api.InstancePlugin):
|
|||
if not node["review"].value():
|
||||
return
|
||||
|
||||
# Add audio to instance if it exists.
|
||||
try:
|
||||
version = pype.api.get_latest_version(
|
||||
instance.context.data["assetEntity"]["name"], "audioMain"
|
||||
)
|
||||
representation = io.find_one(
|
||||
{"type": "representation", "parent": version["_id"]}
|
||||
# * Add audio to instance if exists.
|
||||
# Find latest versions document
|
||||
version_doc = pype.api.get_latest_version(
|
||||
instance.context.data["assetEntity"]["name"], "audioMain"
|
||||
)
|
||||
repre_doc = None
|
||||
if version_doc:
|
||||
# Try to find it's representation (Expected there is only one)
|
||||
repre_doc = io.find_one(
|
||||
{"type": "representation", "parent": version_doc["_id"]}
|
||||
)
|
||||
|
||||
# Add audio to instance if representation was found
|
||||
if repre_doc:
|
||||
instance.data["audio"] = [{
|
||||
"offset": 0,
|
||||
"filename": api.get_representation_path(representation)
|
||||
"filename": api.get_representation_path(repre_doc)
|
||||
}]
|
||||
except AssertionError:
|
||||
pass
|
||||
|
||||
instance.data["families"].append("review")
|
||||
instance.data['families'].append('ftrack')
|
||||
|
|
|
|||
|
|
@ -6,81 +6,99 @@ import nuke
|
|||
from avalon import api
|
||||
import re
|
||||
import pyblish.api
|
||||
import pype.api
|
||||
from avalon.nuke import get_avalon_knob_data
|
||||
|
||||
class RepairWriteLegacyAction(pyblish.api.Action):
|
||||
|
||||
label = "Repair"
|
||||
icon = "wrench"
|
||||
on = "failed"
|
||||
|
||||
def process(self, context, plugin):
|
||||
|
||||
# Get the errored instances
|
||||
failed = []
|
||||
for result in context.data["results"]:
|
||||
if (result["error"] is not None and result["instance"] is not None
|
||||
and result["instance"] not in failed):
|
||||
failed.append(result["instance"])
|
||||
|
||||
# Apply pyblish.logic to get the instances for the plug-in
|
||||
instances = pyblish.api.instances_by_plugin(failed, plugin)
|
||||
|
||||
for instance in instances:
|
||||
if "Write" in instance[0].Class():
|
||||
data = toml.loads(instance[0]["avalon"].value())
|
||||
else:
|
||||
data = get_avalon_knob_data(instance[0])
|
||||
|
||||
self.log.info(data)
|
||||
|
||||
data["xpos"] = instance[0].xpos()
|
||||
data["ypos"] = instance[0].ypos()
|
||||
data["input"] = instance[0].input(0)
|
||||
data["publish"] = instance[0]["publish"].value()
|
||||
data["render"] = instance[0]["render"].value()
|
||||
data["render_farm"] = instance[0]["render_farm"].value()
|
||||
data["review"] = instance[0]["review"].value()
|
||||
|
||||
# nuke.delete(instance[0])
|
||||
|
||||
task = os.environ["AVALON_TASK"]
|
||||
sanitized_task = re.sub('[^0-9a-zA-Z]+', '', task)
|
||||
subset_name = "render{}Main".format(
|
||||
sanitized_task.capitalize())
|
||||
|
||||
Create_name = "CreateWriteRender"
|
||||
|
||||
creator_plugin = None
|
||||
for Creator in api.discover(api.Creator):
|
||||
if Creator.__name__ != Create_name:
|
||||
continue
|
||||
|
||||
creator_plugin = Creator
|
||||
|
||||
# return api.create()
|
||||
creator_plugin(data["subset"], data["asset"]).process()
|
||||
|
||||
node = nuke.toNode(data["subset"])
|
||||
node.setXYpos(data["xpos"], data["ypos"])
|
||||
node.setInput(0, data["input"])
|
||||
node["publish"].setValue(data["publish"])
|
||||
node["render"].setValue(data["render"])
|
||||
node["render_farm"].setValue(data["render_farm"])
|
||||
node["review"].setValue(data["review"])
|
||||
|
||||
|
||||
class ValidateWriteLegacy(pyblish.api.InstancePlugin):
|
||||
"""Validate legacy write nodes."""
|
||||
|
||||
order = pyblish.api.ValidatorOrder
|
||||
optional = True
|
||||
families = ["write.legacy"]
|
||||
label = "Write Legacy"
|
||||
families = ["write"]
|
||||
label = "Validate Write Legacy"
|
||||
hosts = ["nuke"]
|
||||
actions = [RepairWriteLegacyAction]
|
||||
actions = [pype.api.RepairAction]
|
||||
|
||||
def process(self, instance):
|
||||
|
||||
node = instance[0]
|
||||
msg = "Clean up legacy write node \"{}\"".format(instance)
|
||||
assert False, msg
|
||||
|
||||
if node.Class() not in ["Group", "Write"]:
|
||||
return
|
||||
|
||||
# test avalon knobs
|
||||
family_knobs = ["ak:family", "avalon:family"]
|
||||
family_test = [k for k in node.knobs().keys() if k in family_knobs]
|
||||
self.log.debug("_ family_test: {}".format(family_test))
|
||||
|
||||
# test if render in family test knob
|
||||
# and only one item should be available
|
||||
assert len(family_test) != 1, msg
|
||||
assert "render" in node[family_test[0]].value(), msg
|
||||
|
||||
# test if `file` knob in node, this way old
|
||||
# non-group-node write could be detected
|
||||
assert "file" in node.knobs(), msg
|
||||
|
||||
# check if write node has old render targeting
|
||||
assert "render_farm" in node.knobs(), msg
|
||||
|
||||
@classmethod
|
||||
def repair(cls, instance):
|
||||
node = instance[0]
|
||||
|
||||
if "Write" in node.Class():
|
||||
data = toml.loads(node["avalon"].value())
|
||||
else:
|
||||
data = get_avalon_knob_data(node)
|
||||
|
||||
# collect reusable data
|
||||
data["XYpos"] = (node.xpos(), node.ypos())
|
||||
data["input"] = node.input(0)
|
||||
data["publish"] = node["publish"].value()
|
||||
data["render"] = node["render"].value()
|
||||
data["render_farm"] = node["render_farm"].value()
|
||||
data["review"] = node["review"].value()
|
||||
data["use_limit"] = node["use_limit"].value()
|
||||
data["first"] = node["first"].value()
|
||||
data["last"] = node["last"].value()
|
||||
|
||||
family = data["family"]
|
||||
cls.log.debug("_ orig node family: {}".format(family))
|
||||
|
||||
# define what family of write node should be recreated
|
||||
if family == "render":
|
||||
Create_name = "CreateWriteRender"
|
||||
elif family == "prerender":
|
||||
Create_name = "CreateWritePrerender"
|
||||
|
||||
# get appropriate plugin class
|
||||
creator_plugin = None
|
||||
for Creator in api.discover(api.Creator):
|
||||
if Creator.__name__ != Create_name:
|
||||
continue
|
||||
|
||||
creator_plugin = Creator
|
||||
|
||||
# delete the legacy write node
|
||||
nuke.delete(node)
|
||||
|
||||
# create write node with creator
|
||||
new_node_name = data["subset"]
|
||||
creator_plugin(new_node_name, data["asset"]).process()
|
||||
|
||||
node = nuke.toNode(new_node_name)
|
||||
node.setXYpos(*data["XYpos"])
|
||||
node.setInput(0, data["input"])
|
||||
node["publish"].setValue(data["publish"])
|
||||
node["review"].setValue(data["review"])
|
||||
node["use_limit"].setValue(data["use_limit"])
|
||||
node["first"].setValue(data["first"])
|
||||
node["last"].setValue(data["last"])
|
||||
|
||||
# recreate render targets
|
||||
if data["render"]:
|
||||
node["render"].setValue("Local")
|
||||
if data["render_farm"]:
|
||||
node["render"].setValue("On farm")
|
||||
|
|
|
|||
|
|
@ -1,4 +1,6 @@
|
|||
from avalon import api, photoshop
|
||||
import os
|
||||
import re
|
||||
|
||||
stub = photoshop.stub()
|
||||
|
||||
|
|
@ -13,10 +15,13 @@ class ImageLoader(api.Loader):
|
|||
representations = ["*"]
|
||||
|
||||
def load(self, context, name=None, namespace=None, data=None):
|
||||
layer_name = self._get_unique_layer_name(context["asset"]["name"],
|
||||
name)
|
||||
with photoshop.maintained_selection():
|
||||
layer = stub.import_smart_object(self.fname)
|
||||
layer = stub.import_smart_object(self.fname, layer_name)
|
||||
|
||||
self[:] = [layer]
|
||||
namespace = namespace or layer_name
|
||||
|
||||
return photoshop.containerise(
|
||||
name,
|
||||
|
|
@ -27,11 +32,25 @@ class ImageLoader(api.Loader):
|
|||
)
|
||||
|
||||
def update(self, container, representation):
|
||||
""" Switch asset or change version """
|
||||
layer = container.pop("layer")
|
||||
|
||||
context = representation.get("context", {})
|
||||
|
||||
namespace_from_container = re.sub(r'_\d{3}$', '',
|
||||
container["namespace"])
|
||||
layer_name = "{}_{}".format(context["asset"], context["subset"])
|
||||
# switching assets
|
||||
if namespace_from_container != layer_name:
|
||||
layer_name = self._get_unique_layer_name(context["asset"],
|
||||
context["subset"])
|
||||
else: # switching version - keep same name
|
||||
layer_name = container["namespace"]
|
||||
|
||||
path = api.get_representation_path(representation)
|
||||
with photoshop.maintained_selection():
|
||||
stub.replace_smart_object(
|
||||
layer, api.get_representation_path(representation)
|
||||
layer, path, layer_name
|
||||
)
|
||||
|
||||
stub.imprint(
|
||||
|
|
@ -39,7 +58,36 @@ class ImageLoader(api.Loader):
|
|||
)
|
||||
|
||||
def remove(self, container):
|
||||
container["layer"].Delete()
|
||||
"""
|
||||
Removes element from scene: deletes layer + removes from Headline
|
||||
Args:
|
||||
container (dict): container to be removed - used to get layer_id
|
||||
"""
|
||||
layer = container.pop("layer")
|
||||
stub.imprint(layer, {})
|
||||
stub.delete_layer(layer.id)
|
||||
|
||||
def switch(self, container, representation):
|
||||
self.update(container, representation)
|
||||
|
||||
def _get_unique_layer_name(self, asset_name, subset_name):
|
||||
"""
|
||||
Gets all layer names and if 'name' is present in them, increases
|
||||
suffix by 1 (eg. creates unique layer name - for Loader)
|
||||
Args:
|
||||
asset_name, subset_name (string): combined into "asset_subset" base name
|
||||
|
||||
Returns:
|
||||
(string): name_00X (without version)
|
||||
"""
|
||||
name = "{}_{}".format(asset_name, subset_name)
|
||||
names = {}
|
||||
for layer in stub.get_layers():
|
||||
layer_name = re.sub(r'_\d{3}$', '', layer.name)
|
||||
if layer_name in names.keys():
|
||||
names[layer_name] = names[layer_name] + 1
|
||||
else:
|
||||
names[layer_name] = 1
|
||||
occurrences = names.get(name, 0)
|
||||
|
||||
return "{}_{:0>3d}".format(name, occurrences + 1)
|
||||
|
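A small self-contained sketch of the same counting logic used by _get_unique_layer_name (the helper name and the sample layer names below are made up for illustration):

import re

def next_unique_name(existing_layer_names, asset_name, subset_name):
    # strip the trailing _00X suffix and count how many layers share the base name
    name = "{}_{}".format(asset_name, subset_name)
    counts = {}
    for layer_name in existing_layer_names:
        base = re.sub(r'_\d{3}$', '', layer_name)
        counts[base] = counts.get(base, 0) + 1
    return "{}_{:0>3d}".format(name, counts.get(name, 0) + 1)

print(next_unique_name(["sh010_imageMain_001"], "sh010", "imageMain"))
# -> sh010_imageMain_002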
|
|
|||
|
|
@ -0,0 +1,31 @@
|
|||
import re
|
||||
import os
|
||||
import pyblish.api
|
||||
|
||||
|
||||
class CollectRepresentationNames(pyblish.api.InstancePlugin):
|
||||
"""
|
||||
Sets the representation names for given families based on RegEx filter
|
||||
"""
|
||||
|
||||
label = "Collect Representaion Names"
|
||||
order = pyblish.api.CollectorOrder
|
||||
families = []
|
||||
hosts = ["standalonepublisher"]
|
||||
name_filter = ""
|
||||
|
||||
def process(self, instance):
|
||||
for repre in instance.data['representations']:
|
||||
new_repre_name = None
|
||||
if isinstance(repre['files'], list):
|
||||
shortened_name = os.path.splitext(repre['files'][0])[0]
|
||||
new_repre_name = re.search(self.name_filter,
|
||||
shortened_name).group()
|
||||
else:
|
||||
new_repre_name = re.search(self.name_filter,
|
||||
repre['files']).group()
|
||||
|
||||
if new_repre_name:
|
||||
repre['name'] = new_repre_name
|
||||
|
||||
repre['outputName'] = repre['name']
|
||||
|
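To illustrate what the name_filter preset does here, a tiny sketch with an invented filter and file names (real values come from the plugin presets):

import os
import re

name_filter = r"^[a-zA-Z]+"                               # hypothetical preset value
repre_files = ["turnaround.0001.jpg", "turnaround.0002.jpg"]

shortened_name = os.path.splitext(repre_files[0])[0]      # "turnaround.0001"
print(re.search(name_filter, shortened_name).group())     # "turnaround"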
|
@ -112,12 +112,11 @@ class ExtractThumbnailSP(pyblish.api.InstancePlugin):
|
|||
'ext': 'jpg',
|
||||
'files': filename,
|
||||
"stagingDir": staging_dir,
|
||||
"thumbnail": True,
|
||||
"tags": []
|
||||
"tags": ["thumbnail"],
|
||||
}
|
||||
|
||||
# # add Delete tag when temp file was rendered
|
||||
# if not is_jpeg:
|
||||
# representation["tags"].append("delete")
|
||||
if not is_jpeg:
|
||||
representation["tags"].append("delete")
|
||||
|
||||
instance.data["representations"].append(representation)
|
||||
|
|
|
|||
BIN
pype/resources/app_icons/tvpaint.png
Normal file
Binary file not shown.
After Width: | Height: | Size: 131 KiB
|
|
@ -4,18 +4,16 @@ import re
|
|||
import subprocess
|
||||
import json
|
||||
import opentimelineio_contrib.adapters.ffmpeg_burnins as ffmpeg_burnins
|
||||
from pype.api import Logger, config
|
||||
from pype.api import config
|
||||
import pype.lib
|
||||
|
||||
log = Logger().get_logger("BurninWrapper", "burninwrap")
|
||||
|
||||
|
||||
ffmpeg_path = pype.lib.get_ffmpeg_tool_path("ffmpeg")
|
||||
ffprobe_path = pype.lib.get_ffmpeg_tool_path("ffprobe")
|
||||
|
||||
|
||||
FFMPEG = (
|
||||
'{} -loglevel panic -i "%(input)s" %(filters)s %(args)s%(output)s'
|
||||
'{} -i "%(input)s" %(filters)s %(args)s%(output)s'
|
||||
).format(ffmpeg_path)
|
||||
|
||||
FFPROBE = (
|
||||
|
|
@ -54,7 +52,7 @@ def _streams(source):
|
|||
|
||||
def get_fps(str_value):
|
||||
if str_value == "0/0":
|
||||
log.warning("Source has \"r_frame_rate\" value set to \"0/0\".")
|
||||
print("WARNING: Source has \"r_frame_rate\" value set to \"0/0\".")
|
||||
return "Unknown"
|
||||
|
||||
items = str_value.split("/")
|
||||
|
|
@ -299,17 +297,34 @@ class ModifiedBurnins(ffmpeg_burnins.Burnins):
|
|||
args=args,
|
||||
overwrite=overwrite
|
||||
)
|
||||
log.info("Launching command: {}".format(command))
|
||||
print("Launching command: {}".format(command))
|
||||
|
||||
proc = subprocess.Popen(
|
||||
command,
|
||||
stdout=subprocess.PIPE,
|
||||
stderr=subprocess.PIPE,
|
||||
shell=True
|
||||
)
|
||||
|
||||
_stdout, _stderr = proc.communicate()
|
||||
if _stdout:
|
||||
print(_stdout.decode("utf-8"))
|
||||
|
||||
# This will probably never happen as ffmpeg use stdout
|
||||
if _stderr:
|
||||
print(_stderr.decode("utf-8"))
|
||||
|
||||
proc = subprocess.Popen(command, shell=True)
|
||||
log.info(proc.communicate()[0])
|
||||
if proc.returncode != 0:
|
||||
raise RuntimeError("Failed to render '%s': %s'"
|
||||
% (output, command))
|
||||
raise RuntimeError(
|
||||
"Failed to render '{}': {}'".format(output, command)
|
||||
)
|
||||
if is_sequence:
|
||||
output = output % kwargs.get("duration")
|
||||
|
||||
if not os.path.exists(output):
|
||||
raise RuntimeError("Failed to generate this fucking file '%s'" % output)
|
||||
raise RuntimeError(
|
||||
"Failed to generate this f*cking file '%s'" % output
|
||||
)
|
||||
|
||||
|
||||
def example(input_path, output_path):
|
||||
|
|
@ -542,6 +557,7 @@ def burnins_from_data(
|
|||
|
||||
|
||||
if __name__ == "__main__":
|
||||
print("* Burnin script started")
|
||||
in_data = json.loads(sys.argv[-1])
|
||||
burnins_from_data(
|
||||
in_data["input"],
|
||||
|
|
@ -551,3 +567,4 @@ if __name__ == "__main__":
|
|||
options=in_data.get("options"),
|
||||
burnin_values=in_data.get("values")
|
||||
)
|
||||
print("* Burnin script has finished")
|
||||
|
|
|
|||
|
|
@ -13,7 +13,7 @@ M_ENVIRONMENT_KEY = "__environment_keys__"
|
|||
M_POP_KEY = "__pop_key__"
|
||||
|
||||
# Folder where studio overrides are stored
|
||||
STUDIO_OVERRIDES_PATH = os.environ.get("PYPE_PROJECT_CONFIGS", "")
|
||||
STUDIO_OVERRIDES_PATH = os.getenv("PYPE_PROJECT_CONFIGS") or ""
|
||||
|
||||
# File where studio's system overrides are stored
|
||||
SYSTEM_SETTINGS_KEY = "system_settings"
|
||||
|
|
@ -23,9 +23,6 @@ SYSTEM_SETTINGS_PATH = os.path.join(
|
|||
|
||||
# File where studio's environment overrides are stored
|
||||
ENVIRONMENTS_KEY = "environments"
|
||||
ENVIRONMENTS_PATH = os.path.join(
|
||||
STUDIO_OVERRIDES_PATH, ENVIRONMENTS_KEY + ".json"
|
||||
)
|
||||
|
||||
# File where studio's default project overrides are stored
|
||||
PROJECT_SETTINGS_KEY = "project_settings"
|
||||
|
|
@ -59,61 +56,100 @@ def default_settings():
|
|||
return _DEFAULT_SETTINGS
|
||||
|
||||
|
||||
def load_json(fpath):
|
||||
def load_json_file(fpath):
|
||||
# Load json data
|
||||
with open(fpath, "r") as opened_file:
|
||||
lines = opened_file.read().splitlines()
|
||||
|
||||
# prepare json string
|
||||
standard_json = ""
|
||||
for line in lines:
|
||||
# Remove all whitespace on both sides
|
||||
line = line.strip()
|
||||
|
||||
# Skip blank lines
|
||||
if len(line) == 0:
|
||||
continue
|
||||
|
||||
standard_json += line
|
||||
|
||||
# Check if has extra commas
|
||||
extra_comma = False
|
||||
if ",]" in standard_json or ",}" in standard_json:
|
||||
extra_comma = True
|
||||
standard_json = standard_json.replace(",]", "]")
|
||||
standard_json = standard_json.replace(",}", "}")
|
||||
|
||||
if extra_comma:
|
||||
log.error("Extra comma in json file: \"{}\"".format(fpath))
|
||||
|
||||
# return empty dict if file is empty
|
||||
if standard_json == "":
|
||||
return {}
|
||||
|
||||
# Try to parse string
|
||||
try:
|
||||
return json.loads(standard_json)
|
||||
|
||||
except json.decoder.JSONDecodeError:
|
||||
# Return empty dict if it is first time that decode error happened
|
||||
return {}
|
||||
|
||||
# Reproduce the exact same exception but the traceback contains better
|
||||
# information about position of error in the loaded json
|
||||
try:
|
||||
with open(fpath, "r") as opened_file:
|
||||
json.load(opened_file)
|
||||
return json.load(opened_file)
|
||||
|
||||
except json.decoder.JSONDecodeError:
|
||||
log.warning(
|
||||
"File has invalid json format \"{}\"".format(fpath),
|
||||
exc_info=True
|
||||
)
|
||||
|
||||
return {}
|
||||
|
||||
|
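To make the extra-comma handling above concrete, a tiny self-contained demo of the same normalization (the file name and content are invented for the example):

import json
import os
import tempfile

# hypothetical JSON file with a trailing comma, the case the loader tolerates
fd, path = tempfile.mkstemp(suffix=".json")
with os.fdopen(fd, "w") as opened_file:
    opened_file.write('{\n    "key": "value",\n}\n')

# strip whitespace per line and drop the extra comma, as load_json_file does
with open(path, "r") as opened_file:
    lines = opened_file.read().splitlines()
text = "".join(line.strip() for line in lines)
text = text.replace(",]", "]").replace(",}", "}")
print(json.loads(text))                                    # {'key': 'value'}
os.remove(path)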
||||
def load_jsons_from_dir(path, *args, **kwargs):
|
||||
"""Load all .json files with content from entered folder path.
|
||||
|
||||
Data are loaded recursively from a directory, recreating the folder
|
||||
hierarchy as a dictionary.
|
||||
|
||||
Entered path hierarchy:
|
||||
|_ folder1
|
||||
| |_ data1.json
|
||||
|_ folder2
|
||||
|_ subfolder1
|
||||
|_ data2.json
|
||||
|
||||
Will result in:
|
||||
```javascript
|
||||
{
|
||||
"folder1": {
|
||||
"data1": "CONTENT OF FILE"
|
||||
},
|
||||
"folder2": {
|
||||
"data1": {
|
||||
"subfolder1": "CONTENT OF FILE"
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Args:
|
||||
path (str): Path to the root folder where the json hierarchy starts.
|
||||
|
||||
Returns:
|
||||
dict: Loaded data.
|
||||
"""
|
||||
output = {}
|
||||
|
||||
path = os.path.normpath(path)
|
||||
if not os.path.exists(path):
|
||||
# TODO warning
|
||||
return output
|
||||
|
||||
sub_keys = list(kwargs.pop("subkeys", args))
|
||||
for sub_key in tuple(sub_keys):
|
||||
_path = os.path.join(path, sub_key)
|
||||
if not os.path.exists(_path):
|
||||
break
|
||||
|
||||
path = _path
|
||||
sub_keys.pop(0)
|
||||
|
||||
base_len = len(path) + 1
|
||||
for base, _directories, filenames in os.walk(path):
|
||||
base_items_str = base[base_len:]
|
||||
if not base_items_str:
|
||||
base_items = []
|
||||
else:
|
||||
base_items = base_items_str.split(os.path.sep)
|
||||
|
||||
for filename in filenames:
|
||||
basename, ext = os.path.splitext(filename)
|
||||
if ext == ".json":
|
||||
full_path = os.path.join(base, filename)
|
||||
value = load_json_file(full_path)
|
||||
dict_keys = base_items + [basename]
|
||||
output = subkey_merge(output, value, dict_keys)
|
||||
|
||||
for sub_key in sub_keys:
|
||||
output = output[sub_key]
|
||||
return output
|
||||
|
||||
|
||||
def find_environments(data):
|
||||
""" Find environemnt values from system settings by it's metadata.
|
||||
|
||||
Args:
|
||||
data(dict): System settings data or dictionary which may contain
|
||||
environments metadata.
|
||||
|
||||
Returns:
|
||||
dict: Key as Environment key and value for `acre` module.
|
||||
"""
|
||||
if not data or not isinstance(data, dict):
|
||||
return
|
||||
|
||||
|
|
@ -152,69 +188,30 @@ def subkey_merge(_dict, value, keys):
|
|||
return _dict
|
||||
|
||||
|
||||
def load_jsons_from_dir(path, *args, **kwargs):
|
||||
output = {}
|
||||
|
||||
path = os.path.normpath(path)
|
||||
if not os.path.exists(path):
|
||||
# TODO warning
|
||||
return output
|
||||
|
||||
sub_keys = list(kwargs.pop("subkeys", args))
|
||||
for sub_key in tuple(sub_keys):
|
||||
_path = os.path.join(path, sub_key)
|
||||
if not os.path.exists(_path):
|
||||
break
|
||||
|
||||
path = _path
|
||||
sub_keys.pop(0)
|
||||
|
||||
base_len = len(path) + 1
|
||||
for base, _directories, filenames in os.walk(path):
|
||||
base_items_str = base[base_len:]
|
||||
if not base_items_str:
|
||||
base_items = []
|
||||
else:
|
||||
base_items = base_items_str.split(os.path.sep)
|
||||
|
||||
for filename in filenames:
|
||||
basename, ext = os.path.splitext(filename)
|
||||
if ext == ".json":
|
||||
full_path = os.path.join(base, filename)
|
||||
value = load_json(full_path)
|
||||
dict_keys = base_items + [basename]
|
||||
output = subkey_merge(output, value, dict_keys)
|
||||
|
||||
for sub_key in sub_keys:
|
||||
output = output[sub_key]
|
||||
return output
|
||||
|
||||
|
||||
def studio_system_settings():
|
||||
"""Studio overrides of system settings."""
|
||||
if os.path.exists(SYSTEM_SETTINGS_PATH):
|
||||
return load_json(SYSTEM_SETTINGS_PATH)
|
||||
return {}
|
||||
|
||||
|
||||
def studio_environments():
|
||||
if os.path.exists(ENVIRONMENTS_PATH):
|
||||
return load_json(ENVIRONMENTS_PATH)
|
||||
return load_json_file(SYSTEM_SETTINGS_PATH)
|
||||
return {}
|
||||
|
||||
|
||||
def studio_project_settings():
|
||||
"""Studio overrides of default project settings."""
|
||||
if os.path.exists(PROJECT_SETTINGS_PATH):
|
||||
return load_json(PROJECT_SETTINGS_PATH)
|
||||
return load_json_file(PROJECT_SETTINGS_PATH)
|
||||
return {}
|
||||
|
||||
|
||||
def studio_project_anatomy():
|
||||
"""Studio overrides of default project anatomy data."""
|
||||
if os.path.exists(PROJECT_ANATOMY_PATH):
|
||||
return load_json(PROJECT_ANATOMY_PATH)
|
||||
return load_json_file(PROJECT_ANATOMY_PATH)
|
||||
return {}
|
||||
|
||||
|
||||
def path_to_project_overrides(project_name):
|
||||
def path_to_project_settings(project_name):
|
||||
if not project_name:
|
||||
return PROJECT_SETTINGS_PATH
|
||||
return os.path.join(
|
||||
STUDIO_OVERRIDES_PATH,
|
||||
project_name,
|
||||
|
|
@ -223,6 +220,8 @@ def path_to_project_overrides(project_name):
|
|||
|
||||
|
||||
def path_to_project_anatomy(project_name):
|
||||
if not project_name:
|
||||
return PROJECT_ANATOMY_PATH
|
||||
return os.path.join(
|
||||
STUDIO_OVERRIDES_PATH,
|
||||
project_name,
|
||||
|
|
@ -230,27 +229,114 @@ def path_to_project_anatomy(project_name):
|
|||
)
|
||||
|
||||
|
||||
def save_studio_settings(data):
|
||||
"""Save studio overrides of system settings.
|
||||
|
||||
Do not use to store whole system settings data with defaults, but only its
|
||||
overrides with metadata defining how overrides should be applied in load
|
||||
function. For loading, use `studio_system_settings`.
|
||||
|
||||
Args:
|
||||
data(dict): Data of studio overrides with override metadata.
|
||||
"""
|
||||
dirpath = os.path.dirname(SYSTEM_SETTINGS_PATH)
|
||||
if not os.path.exists(dirpath):
|
||||
os.makedirs(dirpath)
|
||||
|
||||
print("Saving studio overrides. Output path: {}".format(
|
||||
SYSTEM_SETTINGS_PATH
|
||||
))
|
||||
with open(SYSTEM_SETTINGS_PATH, "w") as file_stream:
|
||||
json.dump(data, file_stream, indent=4)
|
||||
|
||||
|
||||
def save_project_settings(project_name, overrides):
|
||||
"""Save studio overrides of project settings.
|
||||
|
||||
Data are saved for specific project or as defaults for all projects.
|
||||
|
||||
Do not use to store whole project settings data with defaults, but only its
|
||||
overrides with metadata defining how overrides should be applied in load
|
||||
function. For loading, use `studio_project_settings`
|
||||
for global project settings and `project_settings_overrides` for
|
||||
project specific settings.
|
||||
|
||||
Args:
|
||||
project_name(str, None): Project name for which overrides are saved,
|
||||
or None for global settings.
|
||||
overrides(dict): Data of project overrides with override metadata.
|
||||
"""
|
||||
project_overrides_json_path = path_to_project_settings(project_name)
|
||||
dirpath = os.path.dirname(project_overrides_json_path)
|
||||
if not os.path.exists(dirpath):
|
||||
os.makedirs(dirpath)
|
||||
|
||||
print("Saving overrides of project \"{}\". Output path: {}".format(
|
||||
project_name, project_overrides_json_path
|
||||
))
|
||||
with open(project_overrides_json_path, "w") as file_stream:
|
||||
json.dump(overrides, file_stream, indent=4)
|
||||
|
||||
|
||||
def save_project_anatomy(project_name, anatomy_data):
|
||||
"""Save studio overrides of project anatomy data.
|
||||
|
||||
Args:
|
||||
project_name(str, None): Project name for which overrides are saved,
|
||||
or None for global settings.
|
||||
anatomy_data(dict): Project anatomy data with override metadata.
|
||||
"""
|
||||
project_anatomy_json_path = path_to_project_anatomy(project_name)
|
||||
dirpath = os.path.dirname(project_anatomy_json_path)
|
||||
if not os.path.exists(dirpath):
|
||||
os.makedirs(dirpath)
|
||||
|
||||
print("Saving anatomy of project \"{}\". Output path: {}".format(
|
||||
project_name, project_anatomy_json_path
|
||||
))
|
||||
with open(project_anatomy_json_path, "w") as file_stream:
|
||||
json.dump(anatomy_data, file_stream, indent=4)
|
||||
|
||||
|
||||
def project_settings_overrides(project_name):
|
||||
"""Studio overrides of project settings for specific project.
|
||||
|
||||
Args:
|
||||
project_name(str): Name of project for which data should be loaded.
|
||||
|
||||
Returns:
|
||||
dict: Only overrides for entered project, may be empty dictionary.
|
||||
"""
|
||||
if not project_name:
|
||||
return {}
|
||||
|
||||
path_to_json = path_to_project_overrides(project_name)
|
||||
path_to_json = path_to_project_settings(project_name)
|
||||
if not os.path.exists(path_to_json):
|
||||
return {}
|
||||
return load_json(path_to_json)
|
||||
return load_json_file(path_to_json)
|
||||
|
||||
|
||||
def project_anatomy_overrides(project_name):
|
||||
"""Studio overrides of project anatomy for specific project.
|
||||
|
||||
Args:
|
||||
project_name(str): Name of project for which data should be loaded.
|
||||
|
||||
Returns:
|
||||
dict: Only overrides for entered project, may be empty dictionary.
|
||||
"""
|
||||
if not project_name:
|
||||
return {}
|
||||
|
||||
path_to_json = path_to_project_anatomy(project_name)
|
||||
if not os.path.exists(path_to_json):
|
||||
return {}
|
||||
return load_json(path_to_json)
|
||||
return load_json_file(path_to_json)
|
||||
|
||||
|
||||
def merge_overrides(global_dict, override_dict):
|
||||
def merge_overrides(source_dict, override_dict):
|
||||
"""Merge data from override_dict to source_dict."""
|
||||
|
||||
if M_OVERRIDEN_KEY in override_dict:
|
||||
overriden_keys = set(override_dict.pop(M_OVERRIDEN_KEY))
|
||||
else:
|
||||
|
|
@ -258,20 +344,17 @@ def merge_overrides(global_dict, override_dict):
|
|||
|
||||
for key, value in override_dict.items():
|
||||
if value == M_POP_KEY:
|
||||
global_dict.pop(key)
|
||||
source_dict.pop(key)
|
||||
|
||||
elif (
|
||||
key in overriden_keys
|
||||
or key not in global_dict
|
||||
):
|
||||
global_dict[key] = value
|
||||
elif (key in overriden_keys or key not in source_dict):
|
||||
source_dict[key] = value
|
||||
|
||||
elif isinstance(value, dict) and isinstance(global_dict[key], dict):
|
||||
global_dict[key] = merge_overrides(global_dict[key], value)
|
||||
elif isinstance(value, dict) and isinstance(source_dict[key], dict):
|
||||
source_dict[key] = merge_overrides(source_dict[key], value)
|
||||
|
||||
else:
|
||||
global_dict[key] = value
|
||||
return global_dict
|
||||
source_dict[key] = value
|
||||
return source_dict
|
||||
|
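A short usage sketch of the merge behaviour above (assuming merge_overrides and M_POP_KEY are importable from pype.settings.lib as this diff suggests; the sample settings are invented):

from copy import deepcopy

from pype.settings.lib import M_POP_KEY, merge_overrides

defaults = {
    "ftrack": {"server": "https://old.example", "enabled": True},
    "deadline": {"pools": ["gpu"]}
}
overrides = {
    "ftrack": {"server": "https://new.example"},   # nested value is overridden
    "deadline": M_POP_KEY                          # whole key is removed
}

print(merge_overrides(deepcopy(defaults), overrides))
# -> {'ftrack': {'server': 'https://new.example', 'enabled': True}}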
||||
|
||||
def apply_overrides(source_data, override_data):
|
||||
|
|
@ -282,13 +365,15 @@ def apply_overrides(source_data, override_data):
|
|||
|
||||
|
||||
def system_settings():
|
||||
default_values = default_settings()[SYSTEM_SETTINGS_KEY]
|
||||
"""System settings with applied studio overrides."""
|
||||
default_values = copy.deepcopy(default_settings()[SYSTEM_SETTINGS_KEY])
|
||||
studio_values = studio_system_settings()
|
||||
return apply_overrides(default_values, studio_values)
|
||||
|
||||
|
||||
def project_settings(project_name):
|
||||
default_values = default_settings()[PROJECT_SETTINGS_KEY]
|
||||
"""Project settings with applied studio and project overrides."""
|
||||
default_values = copy.deepcopy(default_settings()[PROJECT_SETTINGS_KEY])
|
||||
studio_values = studio_project_settings()
|
||||
|
||||
studio_overrides = apply_overrides(default_values, studio_values)
|
||||
|
|
@ -299,6 +384,14 @@ def project_settings(project_name):
|
|||
|
||||
|
||||
def environments():
|
||||
"""Calculated environment based on defaults and system settings.
|
||||
|
||||
Any default environment also found in the system settings will be fully
|
||||
overriden by the one from the system settings.
|
||||
|
||||
Returns:
|
||||
dict: Output should be ready for `acre` module.
|
||||
"""
|
||||
envs = copy.deepcopy(default_settings()[ENVIRONMENTS_KEY])
|
||||
envs_from_system_settings = find_environments(system_settings())
|
||||
for env_group_key, values in envs_from_system_settings.items():
|
||||
|
|
|
|||
|
|
@ -545,7 +545,9 @@ class TerminalFilterWidget(QtWidgets.QWidget):
|
|||
layout = QtWidgets.QHBoxLayout(self)
|
||||
layout.setContentsMargins(0, 0, 0, 0)
|
||||
# Add spacers
|
||||
layout.addWidget(QtWidgets.QWidget(), 1)
|
||||
spacer = QtWidgets.QWidget()
|
||||
spacer.setAttribute(QtCore.Qt.WA_TranslucentBackground)
|
||||
layout.addWidget(spacer, 1)
|
||||
|
||||
for btn in filter_buttons:
|
||||
layout.addWidget(btn)
|
||||
|
|
|
|||
|
|
@ -3,11 +3,8 @@ import json
|
|||
from Qt import QtWidgets, QtCore, QtGui
|
||||
from pype.settings.lib import (
|
||||
SYSTEM_SETTINGS_KEY,
|
||||
SYSTEM_SETTINGS_PATH,
|
||||
PROJECT_SETTINGS_KEY,
|
||||
PROJECT_SETTINGS_PATH,
|
||||
PROJECT_ANATOMY_KEY,
|
||||
PROJECT_ANATOMY_PATH,
|
||||
|
||||
DEFAULTS_DIR,
|
||||
|
||||
|
|
@ -21,8 +18,9 @@ from pype.settings.lib import (
|
|||
project_settings_overrides,
|
||||
project_anatomy_overrides,
|
||||
|
||||
path_to_project_overrides,
|
||||
path_to_project_anatomy
|
||||
save_studio_settings,
|
||||
save_project_settings,
|
||||
save_project_anatomy
|
||||
)
|
||||
from .widgets import UnsavedChangesDialog
|
||||
from . import lib
|
||||
|
|
@ -183,13 +181,7 @@ class SystemWidget(QtWidgets.QWidget):
|
|||
|
||||
values = lib.convert_gui_data_to_overrides(_data.get("system", {}))
|
||||
|
||||
dirpath = os.path.dirname(SYSTEM_SETTINGS_PATH)
|
||||
if not os.path.exists(dirpath):
|
||||
os.makedirs(dirpath)
|
||||
|
||||
print("Saving data to:", SYSTEM_SETTINGS_PATH)
|
||||
with open(SYSTEM_SETTINGS_PATH, "w") as file_stream:
|
||||
json.dump(values, file_stream, indent=4)
|
||||
save_studio_settings(values)
|
||||
|
||||
self._update_values()
|
||||
|
||||
|
|
@ -621,29 +613,25 @@ class ProjectWidget(QtWidgets.QWidget):
|
|||
if item.child_invalid:
|
||||
has_invalid = True
|
||||
|
||||
if has_invalid:
|
||||
invalid_items = []
|
||||
for item in self.input_fields:
|
||||
invalid_items.extend(item.get_invalid())
|
||||
msg_box = QtWidgets.QMessageBox(
|
||||
QtWidgets.QMessageBox.Warning,
|
||||
"Invalid input",
|
||||
"There is invalid value in one of inputs."
|
||||
" Please lead red color and fix them."
|
||||
)
|
||||
msg_box.setStandardButtons(QtWidgets.QMessageBox.Ok)
|
||||
msg_box.exec_()
|
||||
if not has_invalid:
|
||||
return self._save_overrides()
|
||||
|
||||
first_invalid_item = invalid_items[0]
|
||||
self.scroll_widget.ensureWidgetVisible(first_invalid_item)
|
||||
if first_invalid_item.isVisible():
|
||||
first_invalid_item.setFocus(True)
|
||||
return
|
||||
invalid_items = []
|
||||
for item in self.input_fields:
|
||||
invalid_items.extend(item.get_invalid())
|
||||
msg_box = QtWidgets.QMessageBox(
|
||||
QtWidgets.QMessageBox.Warning,
|
||||
"Invalid input",
|
||||
"There is invalid value in one of inputs."
|
||||
" Please lead red color and fix them."
|
||||
)
|
||||
msg_box.setStandardButtons(QtWidgets.QMessageBox.Ok)
|
||||
msg_box.exec_()
|
||||
|
||||
if self.project_name is None:
|
||||
self._save_studio_overrides()
|
||||
else:
|
||||
self._save_overrides()
|
||||
first_invalid_item = invalid_items[0]
|
||||
self.scroll_widget.ensureWidgetVisible(first_invalid_item)
|
||||
if first_invalid_item.isVisible():
|
||||
first_invalid_item.setFocus(True)
|
||||
|
||||
def _on_refresh(self):
|
||||
self.reset()
|
||||
|
|
@ -655,8 +643,12 @@ class ProjectWidget(QtWidgets.QWidget):
|
|||
|
||||
def _save_overrides(self):
|
||||
data = {}
|
||||
studio_overrides = bool(self.project_name is None)
|
||||
for item in self.input_fields:
|
||||
value, is_group = item.overrides()
|
||||
if studio_overrides:
|
||||
value, is_group = item.studio_overrides()
|
||||
else:
|
||||
value, is_group = item.overrides()
|
||||
if value is not lib.NOT_SET:
|
||||
data.update(value)
|
||||
|
||||
|
|
@ -665,80 +657,24 @@ class ProjectWidget(QtWidgets.QWidget):
|
|||
)
|
||||
|
||||
# Saving overrides data
|
||||
project_overrides_data = output_data.get(
|
||||
PROJECT_SETTINGS_KEY, {}
|
||||
)
|
||||
project_overrides_json_path = path_to_project_overrides(
|
||||
self.project_name
|
||||
)
|
||||
dirpath = os.path.dirname(project_overrides_json_path)
|
||||
if not os.path.exists(dirpath):
|
||||
os.makedirs(dirpath)
|
||||
|
||||
print("Saving data to:", project_overrides_json_path)
|
||||
with open(project_overrides_json_path, "w") as file_stream:
|
||||
json.dump(project_overrides_data, file_stream, indent=4)
|
||||
project_overrides_data = output_data.get(PROJECT_SETTINGS_KEY, {})
|
||||
save_project_settings(self.project_name, project_overrides_data)
|
||||
|
||||
# Saving anatomy data
|
||||
project_anatomy_data = output_data.get(
|
||||
PROJECT_ANATOMY_KEY, {}
|
||||
)
|
||||
project_anatomy_json_path = path_to_project_anatomy(
|
||||
self.project_name
|
||||
)
|
||||
dirpath = os.path.dirname(project_anatomy_json_path)
|
||||
if not os.path.exists(dirpath):
|
||||
os.makedirs(dirpath)
|
||||
project_anatomy_data = output_data.get(PROJECT_ANATOMY_KEY, {})
|
||||
save_project_anatomy(self.project_name, project_anatomy_data)
|
||||
|
||||
print("Saving data to:", project_anatomy_json_path)
|
||||
with open(project_anatomy_json_path, "w") as file_stream:
|
||||
json.dump(project_anatomy_data, file_stream, indent=4)
|
||||
|
||||
# Refill values with overrides
|
||||
self._on_project_change()
|
||||
|
||||
def _save_studio_overrides(self):
|
||||
data = {}
|
||||
for input_field in self.input_fields:
|
||||
value, is_group = input_field.studio_overrides()
|
||||
if value is not lib.NOT_SET:
|
||||
data.update(value)
|
||||
|
||||
output_data = lib.convert_gui_data_to_overrides(
|
||||
data.get("project", {})
|
||||
)
|
||||
|
||||
# Project overrides data
|
||||
project_overrides_data = output_data.get(
|
||||
PROJECT_SETTINGS_KEY, {}
|
||||
)
|
||||
dirpath = os.path.dirname(PROJECT_SETTINGS_PATH)
|
||||
if not os.path.exists(dirpath):
|
||||
os.makedirs(dirpath)
|
||||
|
||||
print("Saving data to:", PROJECT_SETTINGS_PATH)
|
||||
with open(PROJECT_SETTINGS_PATH, "w") as file_stream:
|
||||
json.dump(project_overrides_data, file_stream, indent=4)
|
||||
|
||||
# Project Anatomy data
|
||||
project_anatomy_data = output_data.get(
|
||||
PROJECT_ANATOMY_KEY, {}
|
||||
)
|
||||
dirpath = os.path.dirname(PROJECT_ANATOMY_PATH)
|
||||
if not os.path.exists(dirpath):
|
||||
os.makedirs(dirpath)
|
||||
|
||||
print("Saving data to:", PROJECT_ANATOMY_PATH)
|
||||
with open(PROJECT_ANATOMY_PATH, "w") as file_stream:
|
||||
json.dump(project_anatomy_data, file_stream, indent=4)
|
||||
|
||||
# Update saved values
|
||||
self._update_values()
|
||||
if self.project_name:
|
||||
# Refill values with overrides
|
||||
self._on_project_change()
|
||||
else:
|
||||
# Update saved values
|
||||
self._update_values()
|
||||
|
||||
def _update_values(self):
|
||||
self.ignore_value_changes = True
|
||||
|
||||
default_values = default_values = lib.convert_data_to_gui_data(
|
||||
default_values = lib.convert_data_to_gui_data(
|
||||
{"project": default_settings()}
|
||||
)
|
||||
for input_field in self.input_fields:
|
||||
|
|
|
|||