mirror of https://github.com/ynput/ayon-core.git
synced 2025-12-24 21:04:40 +01:00

Merge branch 'develop' into maya_collect_renderlayer_error_fix_4648
commit fa1f70274d
167 changed files with 1239 additions and 786 deletions
.github/workflows/project_actions.yml (vendored): 13 changes
@@ -2,7 +2,7 @@ name: project-actions
 on:
   pull_request_target:
-    types: [opened, synchronize, assigned, review_requested]
+    types: [opened, assigned]
   pull_request_review:
     types: [submitted]
@@ -25,8 +25,8 @@ jobs:
     name: pr_size_label
     runs-on: ubuntu-latest
     if: |
-      ${{(github.event_name == 'pull_request' && github.event.action == 'synchronize')
-      || (github.event_name == 'pull_request' && github.event.action == 'assigned')}}
+      ${{(github.event_name == 'pull_request' && github.event.action == 'assigned')
+      || (github.event_name == 'pull_request' && github.event.action == 'opened')}}
     steps:
       - name: Add size label
@@ -49,7 +49,7 @@ jobs:
     name: pr_branch_label
     runs-on: ubuntu-latest
     if: |
-      ${{(github.event_name == 'pull_request' && github.event.action == 'synchronize')
+      ${{(github.event_name == 'pull_request' && github.event.action == 'assigned')
       || (github.event_name == 'pull_request' && github.event.action == 'opened')}}
     steps:
       - name: Label PRs - Branch name detection
@@ -61,11 +61,12 @@ jobs:
     name: pr_globe_label
     runs-on: ubuntu-latest
     if: |
-      ${{(github.event_name == 'pull_request' && github.event.action == 'synchronize')
+      ${{(github.event_name == 'pull_request' && github.event.action == 'assigned')
       || (github.event_name == 'pull_request' && github.event.action == 'opened')}}
     steps:
       - name: Label PRs - Globe detection
-        uses: actions/labeler@v4
+        uses: actions/labeler@v4.0.3
         with:
           repo-token: ${{ secrets.YNPUT_BOT_TOKEN }}
           configuration-path: ".github/pr-glob-labeler.yml"
+          sync-labels: false

@@ -3,7 +3,7 @@
 Goal is that most of functions here are called on (or with) an object
 that has project name as a context (e.g. on 'ProjectEntity'?).
 
-+ We will need more specific functions doing wery specific queires really fast.
++ We will need more specific functions doing very specific queries really fast.
 """
 
 import re
@@ -193,7 +193,7 @@ def _get_assets(
             be found.
         asset_names (Iterable[str]): Name assets that should be found.
        parent_ids (Iterable[Union[str, ObjectId]]): Parent asset ids.
-        standard (bool): Query standart assets (type 'asset').
+        standard (bool): Query standard assets (type 'asset').
         archived (bool): Query archived assets (type 'archived_asset').
         fields (Iterable[str]): Fields that should be returned. All fields are
             returned if 'None' is passed.
@@ -1185,7 +1185,7 @@ def get_representations(
     standard=True,
     fields=None
 ):
-    """Representaion entities data from one project filtered by filters.
+    """Representation entities data from one project filtered by filters.
 
     Filters are additive (all conditions must pass to return subset).
@@ -1231,7 +1231,7 @@ def get_archived_representations(
     names_by_version_ids=None,
    fields=None
 ):
-    """Archived representaion entities data from project with applied filters.
+    """Archived representation entities data from project with applied filters.
 
     Filters are additive (all conditions must pass to return subset).
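As a rough usage sketch of the query helpers touched above: filters are additive (AND) and `fields` trims the returned keys. The project name, field names, and the `openpype.client` re-export used here are assumptions for illustration, not taken from this diff:

```python
# Hypothetical call; "my_project" and the field list are invented examples.
from openpype.client import get_representations

for repre in get_representations(
    "my_project",
    standard=True,           # only regular (non-archived) representations
    fields=["name", "data"]  # reduce returned keys; None returns everything
):
    print(repre["name"])
```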

@@ -2,7 +2,7 @@
 ## Reason
 Preparation for OpenPype v4 server. Goal is to remove direct mongo calls in code to prepare a little bit for different source of data for code before. To start think about database calls less as mongo calls but more universally. To do so was implemented simple wrapper around database calls to not use pymongo specific code.
 
-Current goal is not to make universal database model which can be easily replaced with any different source of data but to make it close as possible. Current implementation of OpenPype is too tighly connected to pymongo and it's abilities so we're trying to get closer with long term changes that can be used even in current state.
+Current goal is not to make universal database model which can be easily replaced with any different source of data but to make it close as possible. Current implementation of OpenPype is too tightly connected to pymongo and it's abilities so we're trying to get closer with long term changes that can be used even in current state.
 
 ## Queries
 Query functions don't use full potential of mongo queries like very specific queries based on subdictionaries or unknown structures. We try to avoid these calls as much as possible because they'll probably won't be available in future. If it's really necessary a new function can be added but only if it's reasonable for overall logic. All query functions were moved to `~/client/entities.py`. Each function has arguments with available filters and possible reduce of returned keys for each entity.
@@ -14,7 +14,7 @@ Changes are a little bit complicated. Mongo has many options how update can happ
 Create operations expect already prepared document data, for that are prepared functions creating skeletal structures of documents (do not fill all required data), except `_id` all data should be right. Existence of entity is not validated so if the same creation operation is send n times it will create the entity n times which can cause issues.
 
 ### Update
-Update operation require entity id and keys that should be changed, update dictionary must have {"key": value}. If value should be set in nested dictionary the key must have also all subkeys joined with dot `.` (e.g. `{"data": {"fps": 25}}` -> `{"data.fps": 25}`). To simplify update dictionaries were prepared functions which does that for you, their name has template `prepare_<entity type>_update_data` - they work on comparison of previous document and new document. If there is missing function for requested entity type it is because we didn't need it yet and require implementaion.
+Update operation require entity id and keys that should be changed, update dictionary must have {"key": value}. If value should be set in nested dictionary the key must have also all subkeys joined with dot `.` (e.g. `{"data": {"fps": 25}}` -> `{"data.fps": 25}`). To simplify update dictionaries were prepared functions which does that for you, their name has template `prepare_<entity type>_update_data` - they work on comparison of previous document and new document. If there is missing function for requested entity type it is because we didn't need it yet and require implementation.
 
 ### Delete
 Delete operation need entity id. Entity will be deleted from mongo.
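The dot-joined key convention the Update section describes can be shown with a minimal sketch. This is not the actual `prepare_<entity type>_update_data` helper (those compare an old and a new document); it only demonstrates the key flattening:

```python
def flatten_update_data(changes, _prefix=""):
    """Flatten nested dicts into mongo-style dot keys.

    {"data": {"fps": 25}} -> {"data.fps": 25}
    """
    update_data = {}
    for key, value in changes.items():
        full_key = _prefix + key
        if isinstance(value, dict):
            update_data.update(flatten_update_data(value, full_key + "."))
        else:
            update_data[full_key] = value
    return update_data


print(flatten_update_data({"data": {"fps": 25, "pixelAspect": 1.0}}))
# {'data.fps': 25, 'data.pixelAspect': 1.0}
```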

@@ -368,7 +368,7 @@ def prepare_workfile_info_update_data(old_doc, new_doc, replace=True):
 class AbstractOperation(object):
     """Base operation class.
 
-    Opration represent a call into database. The call can create, change or
+    Operation represent a call into database. The call can create, change or
     remove data.
 
     Args:
@@ -409,7 +409,7 @@ class AbstractOperation(object):
         pass
 
     def to_data(self):
-        """Convert opration to data that can be converted to json or others.
+        """Convert operation to data that can be converted to json or others.
 
         Warning:
             Current state returns ObjectId objects which cannot be parsed by
@@ -428,7 +428,7 @@ class AbstractOperation(object):
 
 
 class CreateOperation(AbstractOperation):
-    """Opeartion to create an entity.
+    """Operation to create an entity.
 
     Args:
         project_name (str): On which project operation will happen.
@@ -485,7 +485,7 @@ class CreateOperation(AbstractOperation):
 
 
 class UpdateOperation(AbstractOperation):
-    """Opeartion to update an entity.
+    """Operation to update an entity.
 
     Args:
         project_name (str): On which project operation will happen.
@@ -552,7 +552,7 @@ class UpdateOperation(AbstractOperation):
 
 
 class DeleteOperation(AbstractOperation):
-    """Opeartion to delete an entity.
+    """Operation to delete an entity.
 
     Args:
         project_name (str): On which project operation will happen.

@@ -2,7 +2,7 @@
 
 Idea for current dirmap implementation was used from Maya where is possible to
 enter source and destination roots and maya will try each found source
-in referenced file replace with each destionation paths. First path which
+in referenced file replace with each destination paths. First path which
 exists is used.
 """
@@ -183,7 +183,7 @@ class HostDirmap(object):
             project_name, remote_site
         )
         # dirmap has sense only with regular disk provider, in the workfile
-        # wont be root on cloud or sftp provider
+        # won't be root on cloud or sftp provider
         if remote_provider != "local_drive":
             remote_site = "studio"
         for root_name, active_site_dir in active_overrides.items():
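The mapping rule from the docstring above ("first path which exists is used") boils down to something like this sketch. Function name and paths are invented for illustration; this is not the actual `HostDirmap` code:

```python
import os

def remap_path(path, source_roots, destination_roots):
    # Try each source root; for a matching root, the first destination
    # candidate that exists on disk wins, otherwise keep the path as-is.
    for src_root in source_roots:
        if not path.startswith(src_root):
            continue
        for dst_root in destination_roots:
            candidate = dst_root + path[len(src_root):]
            if os.path.exists(candidate):
                return candidate
    return path

print(remap_path(
    "/mnt/projects/shot_010/main_v001.ma",
    ["/mnt/projects"],
    ["P:/projects", "/Volumes/projects"],
))
```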

@@ -18,7 +18,7 @@ class HostBase(object):
     Compared to 'avalon' concept:
     What was before considered as functions in host implementation folder. The
     host implementation should primarily care about adding ability of creation
-    (mark subsets to be published) and optionaly about referencing published
+    (mark subsets to be published) and optionally about referencing published
     representations as containers.
 
     Host may need extend some functionality like working with workfiles
@@ -129,9 +129,9 @@ class HostBase(object):
         """Get current context information.
 
         This method should be used to get current context of host. Usage of
-        this method can be crutial for host implementations in DCCs where
+        this method can be crucial for host implementations in DCCs where
         can be opened multiple workfiles at one moment and change of context
-        can't be catched properly.
+        can't be caught properly.
 
         Default implementation returns values from 'legacy_io.Session'.

@@ -81,7 +81,7 @@ class ILoadHost:
 
     @abstractmethod
     def get_containers(self):
-        """Retreive referenced containers from scene.
+        """Retrieve referenced containers from scene.
 
         This can be implemented in hosts where referencing can be used.
@@ -191,7 +191,7 @@ class IWorkfileHost:
 
     @abstractmethod
     def get_current_workfile(self):
-        """Retreive path to current opened file.
+        """Retrieve path to current opened file.
 
         Returns:
             str: Path to file which is currently opened.
@@ -220,8 +220,8 @@ class IWorkfileHost:
         Default implementation keeps workdir untouched.
 
         Warnings:
-            We must handle this modification with more sofisticated way because
-            this can't be called out of DCC so opening of last workfile
+            We must handle this modification with more sophisticated way
+            because this can't be called out of DCC so opening of last workfile
             (calculated before DCC is launched) is complicated. Also breaking
             defined work template is not a good idea.
             Only place where it's really used and can make sense is Maya. There
@@ -302,7 +302,7 @@ class IPublishHost:
         required methods.
 
         Returns:
-            list[str]: Missing method implementations for new publsher
+            list[str]: Missing method implementations for new publisher
                 workflow.
         """
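Taken together, `HostBase` and the interfaces above describe what a DCC integration provides. A bare-bones sketch of a host combining them might look like this; the import path and the method set shown are assumptions (the real interfaces define more abstract methods than these two):

```python
from openpype.host import HostBase, ILoadHost, IWorkfileHost

class ExampleHost(HostBase, ILoadHost, IWorkfileHost):
    name = "example"

    def get_containers(self):
        # Return metadata of containers referenced into the open scene.
        return []

    def get_current_workfile(self):
        # Path to the currently opened file, or None when not saved yet.
        return None
```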

@@ -504,7 +504,7 @@ function addItemAsLayerToComp(comp_id, item_id, found_comp){
     * Args:
     * comp_id (int): id of target composition
     * item_id (int): FootageItem.id
-    * found_comp (CompItem, optional): to limit quering if
+    * found_comp (CompItem, optional): to limit querying if
     * comp already found previously
     */
     var comp = found_comp || app.project.itemByID(comp_id);

@@ -80,7 +80,7 @@ class AfterEffectsServerStub():
         Get complete stored JSON with metadata from AE.Metadata.Label
         field.
 
-        It contains containers loaded by any Loader OR instances creted
+        It contains containers loaded by any Loader OR instances created
         by Creator.
 
         Returns:

@@ -24,7 +24,7 @@ from .workio import OpenFileCacher
 PREVIEW_COLLECTIONS: Dict = dict()
 
 # This seems like a good value to keep the Qt app responsive and doesn't slow
-# down Blender. At least on macOS I the interace of Blender gets very laggy if
+# down Blender. At least on macOS I the interface of Blender gets very laggy if
 # you make it smaller.
 TIMER_INTERVAL: float = 0.01 if platform.system() == "Windows" else 0.1

@@ -50,7 +50,7 @@ class ExtractPlayblast(publish.Extractor):
         # get isolate objects list
         isolate = instance.data("isolate", None)
 
-        # get ouput path
+        # get output path
         stagingdir = self.staging_dir(instance)
         filename = instance.name
         path = os.path.join(stagingdir, filename)
@@ -116,7 +116,6 @@ class ExtractPlayblast(publish.Extractor):
             "frameStart": start,
             "frameEnd": end,
             "fps": fps,
-            "preview": True,
             "tags": tags,
             "camera_name": camera
         }

@@ -773,7 +773,7 @@ class MediaInfoFile(object):
         if logger:
             self.log = logger
 
-        # test if `dl_get_media_info` paht exists
+        # test if `dl_get_media_info` path exists
         self._validate_media_script_path()
 
         # derivate other feed variables
@@ -993,7 +993,7 @@ class MediaInfoFile(object):
 
     def _validate_media_script_path(self):
         if not os.path.isfile(self.MEDIA_SCRIPT_PATH):
-            raise IOError("Media Scirpt does not exist: `{}`".format(
+            raise IOError("Media Script does not exist: `{}`".format(
                 self.MEDIA_SCRIPT_PATH))
 
     def _generate_media_info_file(self, fpath, feed_ext, feed_dir):

@@ -38,7 +38,7 @@ def install():
     pyblish.register_plugin_path(PUBLISH_PATH)
     register_loader_plugin_path(LOAD_PATH)
     register_creator_plugin_path(CREATE_PATH)
-    log.info("OpenPype Flame plug-ins registred ...")
+    log.info("OpenPype Flame plug-ins registered ...")
 
     # register callback for switching publishable
     pyblish.register_callback("instanceToggled", on_pyblish_instance_toggled)

@@ -157,7 +157,7 @@ class CreatorWidget(QtWidgets.QDialog):
         # convert label text to normal capitalized text with spaces
         label_text = self.camel_case_split(text)
 
-        # assign the new text to lable widget
+        # assign the new text to label widget
         label = QtWidgets.QLabel(label_text)
         label.setObjectName("LineLabel")
@@ -345,8 +345,8 @@ class PublishableClip:
         "track": "sequence",
     }
 
-    # parents search patern
-    parents_search_patern = r"\{([a-z]*?)\}"
+    # parents search pattern
+    parents_search_pattern = r"\{([a-z]*?)\}"
 
     # default templates for non-ui use
     rename_default = False
@@ -445,7 +445,7 @@ class PublishableClip:
         return self.current_segment
 
     def _populate_segment_default_data(self):
-        """ Populate default formating data from segment. """
+        """ Populate default formatting data from segment. """
 
         self.current_segment_default_data = {
             "_folder_": "shots",
@@ -538,7 +538,7 @@ class PublishableClip:
         if not self.index_from_segment:
             self.count_steps *= self.rename_index
 
-        hierarchy_formating_data = {}
+        hierarchy_formatting_data = {}
         hierarchy_data = deepcopy(self.hierarchy_data)
         _data = self.current_segment_default_data.copy()
         if self.ui_inputs:
@@ -552,7 +552,7 @@ class PublishableClip:
             # mark review layer
             if self.review_track and (
                     self.review_track not in self.review_track_default):
-                # if review layer is defined and not the same as defalut
+                # if review layer is defined and not the same as default
                 self.review_layer = self.review_track
 
             # shot num calculate
@@ -578,13 +578,13 @@ class PublishableClip:
 
             # fill up pythonic expresisons in hierarchy data
             for k, _v in hierarchy_data.items():
-                hierarchy_formating_data[k] = _v["value"].format(**_data)
+                hierarchy_formatting_data[k] = _v["value"].format(**_data)
         else:
             # if no gui mode then just pass default data
-            hierarchy_formating_data = hierarchy_data
+            hierarchy_formatting_data = hierarchy_data
 
         tag_hierarchy_data = self._solve_tag_hierarchy_data(
-            hierarchy_formating_data
+            hierarchy_formatting_data
         )
 
         tag_hierarchy_data.update({"heroTrack": True})
@@ -615,27 +615,27 @@ class PublishableClip:
                 # in case track name and subset name is the same then add
                 if self.subset_name == self.track_name:
                     _hero_data["subset"] = self.subset
-                # assing data to return hierarchy data to tag
+                # assign data to return hierarchy data to tag
                 tag_hierarchy_data = _hero_data
                 break
 
         # add data to return data dict
         self.marker_data.update(tag_hierarchy_data)
 
-    def _solve_tag_hierarchy_data(self, hierarchy_formating_data):
+    def _solve_tag_hierarchy_data(self, hierarchy_formatting_data):
         """ Solve marker data from hierarchy data and templates. """
         # fill up clip name and hierarchy keys
-        hierarchy_filled = self.hierarchy.format(**hierarchy_formating_data)
-        clip_name_filled = self.clip_name.format(**hierarchy_formating_data)
+        hierarchy_filled = self.hierarchy.format(**hierarchy_formatting_data)
+        clip_name_filled = self.clip_name.format(**hierarchy_formatting_data)
 
         # remove shot from hierarchy data: is not needed anymore
-        hierarchy_formating_data.pop("shot")
+        hierarchy_formatting_data.pop("shot")
 
         return {
             "newClipName": clip_name_filled,
             "hierarchy": hierarchy_filled,
             "parents": self.parents,
-            "hierarchyData": hierarchy_formating_data,
+            "hierarchyData": hierarchy_formatting_data,
             "subset": self.subset,
             "family": self.subset_family,
             "families": [self.family]
@@ -650,17 +650,17 @@ class PublishableClip:
             type
         )
 
-        # first collect formating data to use for formating template
-        formating_data = {}
+        # first collect formatting data to use for formatting template
+        formatting_data = {}
         for _k, _v in self.hierarchy_data.items():
             value = _v["value"].format(
                 **self.current_segment_default_data)
-            formating_data[_k] = value
+            formatting_data[_k] = value
 
         return {
             "entity_type": entity_type,
             "entity_name": template.format(
-                **formating_data
+                **formatting_data
             )
         }
@@ -668,9 +668,9 @@ class PublishableClip:
         """ Create parents and return it in list. """
         self.parents = []
 
-        patern = re.compile(self.parents_search_patern)
+        pattern = re.compile(self.parents_search_pattern)
 
-        par_split = [(patern.findall(t).pop(), t)
+        par_split = [(pattern.findall(t).pop(), t)
                      for t in self.hierarchy.split("/")]
 
         for type, template in par_split:
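The renamed `parents_search_pattern` regex drives the parent parsing in the last hunk: each `/`-separated token of the hierarchy template is matched to its entity type. A standalone sketch of that parsing, with an invented example template:

```python
import re

parents_search_pattern = r"\{([a-z]*?)\}"
hierarchy = "{folder}/{sequence}/{shot}"  # example template, not from the diff

pattern = re.compile(parents_search_pattern)
par_split = [(pattern.findall(token).pop(), token)
             for token in hierarchy.split("/")]

print(par_split)
# [('folder', '{folder}'), ('sequence', '{sequence}'), ('shot', '{shot}')]
```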

@@ -902,22 +902,22 @@ class OpenClipSolver(flib.MediaInfoFile):
         ):
             return
 
-        formating_data = self._update_formating_data(
+        formatting_data = self._update_formatting_data(
             layerName=layer_name,
             layerUID=layer_uid
         )
         name_obj.text = StringTemplate(
             self.layer_rename_template
-        ).format(formating_data)
+        ).format(formatting_data)
 
-    def _update_formating_data(self, **kwargs):
-        """ Updating formating data for layer rename
+    def _update_formatting_data(self, **kwargs):
+        """ Updating formatting data for layer rename
 
         Attributes:
-            key=value (optional): will be included to formating data
+            key=value (optional): will be included to formatting data
                 as {key: value}
         Returns:
-            dict: anatomy context data for formating
+            dict: anatomy context data for formatting
         """
         self.log.debug(">> self.clip_data: {}".format(self.clip_data))
         clip_name_obj = self.clip_data.find("name")

@@ -203,7 +203,7 @@ class WireTapCom(object):
             list: all available volumes in server
 
         Rises:
-            AttributeError: unable to get any volumes childs from server
+            AttributeError: unable to get any volumes children from server
         """
         root = WireTapNodeHandle(self._server, "/volumes")
         children_num = WireTapInt(0)

@@ -108,7 +108,7 @@ def _sync_utility_scripts(env=None):
             shutil.copy2(src, dst)
         except (PermissionError, FileExistsError) as msg:
             log.warning(
-                "Not able to coppy to: `{}`, Problem with: `{}`".format(
+                "Not able to copy to: `{}`, Problem with: `{}`".format(
                     dst,
                     msg
                 )

@@ -153,7 +153,7 @@ class FlamePrelaunch(PreLaunchHook):
     def _add_pythonpath(self):
         pythonpath = self.launch_context.env.get("PYTHONPATH")
 
-        # separate it explicity by `;` that is what we use in settings
+        # separate it explicitly by `;` that is what we use in settings
         new_pythonpath = self.flame_pythonpath.split(os.pathsep)
         new_pythonpath += pythonpath.split(os.pathsep)

@@ -209,7 +209,7 @@ class CreateShotClip(opfapi.Creator):
                 "type": "QComboBox",
                 "label": "Subset Name",
                 "target": "ui",
-                "toolTip": "chose subset name patern, if [ track name ] is selected, name of track layer will be used",  # noqa
+                "toolTip": "chose subset name pattern, if [ track name ] is selected, name of track layer will be used",  # noqa
                 "order": 0},
             "subsetFamily": {
                 "value": ["plate", "take"],

@@ -61,9 +61,9 @@ class LoadClip(opfapi.ClipLoader):
         self.layer_rename_template = self.layer_rename_template.replace(
             "output", "representation")
 
-        formating_data = deepcopy(context["representation"]["context"])
+        formatting_data = deepcopy(context["representation"]["context"])
         clip_name = StringTemplate(self.clip_name_template).format(
-            formating_data)
+            formatting_data)
 
         # convert colorspace with ocio to flame mapping
         # in imageio flame section
@@ -88,7 +88,7 @@ class LoadClip(opfapi.ClipLoader):
             "version": "v{:0>3}".format(version_name),
             "layer_rename_template": self.layer_rename_template,
             "layer_rename_patterns": self.layer_rename_patterns,
-            "context_data": formating_data
+            "context_data": formatting_data
         }
         self.log.debug(pformat(
             loading_context
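Both loaders fill `clip_name_template` through `StringTemplate`. A minimal illustration of that template fill, with an invented template and keys (the real data comes from `context["representation"]["context"]`):

```python
from openpype.lib import StringTemplate

formatting_data = {"asset": "sh010", "subset": "plateMain", "version": 3}
clip_name = StringTemplate("{asset}_{subset}_v{version:0>3}").format(
    formatting_data)
print(clip_name)  # sh010_plateMain_v003
```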

@@ -58,11 +58,11 @@ class LoadClipBatch(opfapi.ClipLoader):
         self.layer_rename_template = self.layer_rename_template.replace(
             "output", "representation")
 
-        formating_data = deepcopy(context["representation"]["context"])
-        formating_data["batch"] = self.batch.name.get_value()
+        formatting_data = deepcopy(context["representation"]["context"])
+        formatting_data["batch"] = self.batch.name.get_value()
 
         clip_name = StringTemplate(self.clip_name_template).format(
-            formating_data)
+            formatting_data)
 
         # convert colorspace with ocio to flame mapping
         # in imageio flame section
@@ -88,7 +88,7 @@ class LoadClipBatch(opfapi.ClipLoader):
             "version": "v{:0>3}".format(version_name),
             "layer_rename_template": self.layer_rename_template,
             "layer_rename_patterns": self.layer_rename_patterns,
-            "context_data": formating_data
+            "context_data": formatting_data
         }
         self.log.debug(pformat(
             loading_context

@@ -203,7 +203,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
                 self._get_xml_preset_attrs(
                     attributes, split)
 
-                # add xml overides resolution to instance data
+                # add xml overrides resolution to instance data
                 xml_overrides = attributes["xml_overrides"]
                 if xml_overrides.get("width"):
                     attributes.update({
@@ -284,7 +284,7 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
         self.log.debug("__ head: `{}`".format(head))
         self.log.debug("__ tail: `{}`".format(tail))
 
-        # HACK: it is here to serve for versions bellow 2021.1
+        # HACK: it is here to serve for versions below 2021.1
         if not any([head, tail]):
             retimed_attributes = get_media_range_with_retimes(
                 otio_clip, handle_start, handle_end)

@@ -227,7 +227,7 @@ class ExtractSubsetResources(publish.Extractor):
                 self.hide_others(
                     exporting_clip, segment_name, s_track_name)
 
-                # change name patern
+                # change name pattern
                 name_patern_xml = (
                     "<segment name>_<shot name>_{}.").format(
                         unique_name)
@@ -358,7 +358,7 @@ class ExtractSubsetResources(publish.Extractor):
                 representation_data["stagingDir"] = n_stage_dir
                 files = n_files
 
-            # add files to represetation but add
+            # add files to representation but add
             # imagesequence as list
             if (
                 # first check if path in files is not mov extension

@@ -50,7 +50,7 @@ class IntegrateBatchGroup(pyblish.api.InstancePlugin):
         self._load_clip_to_context(instance, bgroup)
 
     def _add_nodes_to_batch_with_links(self, instance, task_data, batch_group):
-        # get write file node properties > OrederDict because order does mater
+        # get write file node properties > OrederDict because order does matter
         write_pref_data = self._get_write_prefs(instance, task_data)
 
         batch_nodes = [
@ -432,11 +432,11 @@ copy_files = """function copyFile(srcFilename, dstFilename)
|
|||
|
||||
import_files = """function %s_import_files()
|
||||
{
|
||||
var PNGTransparencyMode = 0; // Premultiplied wih Black
|
||||
var TGATransparencyMode = 0; // Premultiplied wih Black
|
||||
var SGITransparencyMode = 0; // Premultiplied wih Black
|
||||
var PNGTransparencyMode = 0; // Premultiplied with Black
|
||||
var TGATransparencyMode = 0; // Premultiplied with Black
|
||||
var SGITransparencyMode = 0; // Premultiplied with Black
|
||||
var LayeredPSDTransparencyMode = 1; // Straight
|
||||
var FlatPSDTransparencyMode = 2; // Premultiplied wih White
|
||||
var FlatPSDTransparencyMode = 2; // Premultiplied with White
|
||||
|
||||
function getUniqueColumnName( column_prefix )
|
||||
{
|
||||
|
|
|
|||
|
|

@@ -142,10 +142,10 @@ function Client() {
     };
 
     /**
-     * Process recieved request. This will eval recieved function and produce
+     * Process received request. This will eval received function and produce
      * results.
      * @function
-     * @param {object} request - recieved request JSON
+     * @param {object} request - received request JSON
      * @return {object} result of evaled function.
      */
     self.processRequest = function(request) {
@@ -245,7 +245,7 @@ function Client() {
         var request = JSON.parse(to_parse);
         var mid = request.message_id;
         // self.logDebug('[' + mid + '] - Request: ' + '\n' + JSON.stringify(request));
-        self.logDebug('[' + mid + '] Recieved.');
+        self.logDebug('[' + mid + '] Received.');
 
         request.result = self.processRequest(request);
         self.logDebug('[' + mid + '] Processing done.');
@@ -286,8 +286,8 @@ function Client() {
         /** Harmony 21.1 doesn't have QDataStream anymore.
 
             This means we aren't able to write bytes into QByteArray so we had
-            modify how content lenght is sent do the server.
-            Content lenght is sent as string of 8 char convertible into integer
+            modify how content length is sent do the server.
+            Content length is sent as string of 8 char convertible into integer
             (instead of 0x00000001[4 bytes] > "000000001"[8 bytes]) */
         var codec_name = new QByteArray().append("UTF-8");
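The workaround in the last hunk replaces the 4-byte binary length prefix with an 8-character decimal string. From the Python side of the connection the framing would look roughly like this sketch (function and socket names assumed):

```python
import socket

def send_request(sock: socket.socket, payload: str) -> None:
    # Harmony >= 21.1: prefix the UTF-8 body with its byte length
    # encoded as an 8-character decimal string, e.g. b"00000042".
    body = payload.encode("utf-8")
    header = "{:0>8d}".format(len(body)).encode("ascii")
    sock.sendall(header + body)
```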

@@ -476,6 +476,25 @@ function start() {
         action.triggered.connect(self.onSubsetManage);
     }
 
+    /**
+     * Set scene settings from DB to the scene
+     */
+    self.onSetSceneSettings = function() {
+        app.avalonClient.send(
+            {
+                "module": "openpype.hosts.harmony.api",
+                "method": "ensure_scene_settings",
+                "args": []
+            },
+            false
+        );
+    };
+    // add Set Scene Settings
+    if (app.avalonMenu == null) {
+        action = menu.addAction('Set Scene Settings...');
+        action.triggered.connect(self.onSetSceneSettings);
+    }
+
     /**
      * Show Experimental dialog
      */

@@ -394,7 +394,7 @@ def get_scene_data():
             "function": "AvalonHarmony.getSceneData"
         })["result"]
     except json.decoder.JSONDecodeError:
-        # Means no sceen metadata has been made before.
+        # Means no scene metadata has been made before.
        return {}
    except KeyError:
        # Means no existing scene metadata has been made.
@@ -465,7 +465,7 @@ def imprint(node_id, data, remove=False):
     Example:
         >>> from openpype.hosts.harmony.api import lib
         >>> node = "Top/Display"
-        >>> data = {"str": "someting", "int": 1, "float": 0.32, "bool": True}
+        >>> data = {"str": "something", "int": 1, "float": 0.32, "bool": True}
         >>> lib.imprint(layer, data)
     """
     scene_data = get_scene_data()
@@ -550,7 +550,7 @@ def save_scene():
     method prevents this double request and safely saves the scene.
 
     """
-    # Need to turn off the backgound watcher else the communication with
+    # Need to turn off the background watcher else the communication with
     # the server gets spammed with two requests at the same time.
     scene_path = send(
         {"function": "AvalonHarmony.saveScene"})["result"]

@@ -142,7 +142,7 @@ def application_launch(event):
     harmony.send({"script": script})
     inject_avalon_js()
 
-    ensure_scene_settings()
+    # ensure_scene_settings()
     check_inventory()

@@ -61,7 +61,7 @@ class Server(threading.Thread):
             "module": (str),  # Module of method.
             "method" (str),  # Name of method in module.
             "args" (list),  # Arguments to pass to method.
-            "kwargs" (dict),  # Keywork arguments to pass to method.
+            "kwargs" (dict),  # Keyword arguments to pass to method.
             "reply" (bool),  # Optional wait for method completion.
         }
     """
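A request matching the docstring schema above, and the `onSetSceneSettings` handler added earlier in this commit, would look like this (the `reply` value is an illustrative choice):

```python
request = {
    "module": "openpype.hosts.harmony.api",  # module of method
    "method": "ensure_scene_settings",       # name of method in module
    "args": [],
    "kwargs": {},
    "reply": True,  # wait for method completion
}
```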

@@ -25,8 +25,9 @@ class ExtractRender(pyblish.api.InstancePlugin):
         application_path = instance.context.data.get("applicationPath")
         scene_path = instance.context.data.get("scenePath")
         frame_rate = instance.context.data.get("frameRate")
-        frame_start = instance.context.data.get("frameStart")
-        frame_end = instance.context.data.get("frameEnd")
+        # real value from timeline
+        frame_start = instance.context.data.get("frameStartHandle")
+        frame_end = instance.context.data.get("frameEndHandle")
         audio_path = instance.context.data.get("audioPath")
 
         if audio_path and os.path.exists(audio_path):
@@ -55,9 +56,13 @@ class ExtractRender(pyblish.api.InstancePlugin):
 
         # Execute rendering. Ignoring error cause Harmony returns error code
         # always.
-        self.log.info(f"running [ {application_path} -batch {scene_path}")
+        args = [application_path, "-batch",
+                "-frames", str(frame_start), str(frame_end),
+                "-scene", scene_path]
+        self.log.info(f"running [ {application_path} {' '.join(args)}")
         proc = subprocess.Popen(
-            [application_path, "-batch", scene_path],
+            args,
             stdout=subprocess.PIPE,
             stderr=subprocess.STDOUT,
             stdin=subprocess.PIPE

@@ -60,7 +60,8 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
         # which is available on 'context.data["assetEntity"]'
         # - the same approach can be used in 'ValidateSceneSettingsRepair'
         expected_settings = harmony.get_asset_settings()
-        self.log.info("scene settings from DB:".format(expected_settings))
+        self.log.info("scene settings from DB:{}".format(expected_settings))
+        expected_settings.pop("entityType")  # not useful for the validation
 
         expected_settings = _update_frames(dict.copy(expected_settings))
         expected_settings["frameEndHandle"] = expected_settings["frameEnd"] +\
@@ -68,21 +69,32 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
 
         if (any(re.search(pattern, os.getenv('AVALON_TASK'))
                 for pattern in self.skip_resolution_check)):
+            self.log.info("Skipping resolution check because of "
+                          "task name and pattern {}".format(
+                              self.skip_resolution_check))
             expected_settings.pop("resolutionWidth")
             expected_settings.pop("resolutionHeight")
 
-        entity_type = expected_settings.get("entityType")
-        if (any(re.search(pattern, entity_type)
+        if (any(re.search(pattern, os.getenv('AVALON_TASK'))
                 for pattern in self.skip_timelines_check)):
+            self.log.info("Skipping frames check because of "
+                          "task name and pattern {}".format(
+                              self.skip_timelines_check))
             expected_settings.pop('frameStart', None)
             expected_settings.pop('frameEnd', None)
 
-        expected_settings.pop("entityType")  # not useful after the check
+            expected_settings.pop('frameStartHandle', None)
+            expected_settings.pop('frameEndHandle', None)
 
         asset_name = instance.context.data['anatomyData']['asset']
         if any(re.search(pattern, asset_name)
               for pattern in self.frame_check_filter):
-            expected_settings.pop("frameEnd")
+            self.log.info("Skipping frames check because of "
+                          "task name and pattern {}".format(
+                              self.frame_check_filter))
+            expected_settings.pop('frameStart', None)
+            expected_settings.pop('frameEnd', None)
+            expected_settings.pop('frameStartHandle', None)
+            expected_settings.pop('frameEndHandle', None)
 
         # handle case where ftrack uses only two decimal places
         # 23.976023976023978 vs. 23.98
@@ -99,6 +111,7 @@ class ValidateSceneSettings(pyblish.api.InstancePlugin):
             "frameEnd": instance.context.data["frameEnd"],
             "handleStart": instance.context.data.get("handleStart"),
             "handleEnd": instance.context.data.get("handleEnd"),
+            "frameStartHandle": instance.context.data.get("frameStartHandle"),
+            "frameEndHandle": instance.context.data.get("frameEndHandle"),
             "resolutionWidth": instance.context.data.get("resolutionWidth"),
             "resolutionHeight": instance.context.data.get("resolutionHeight"),

@@ -6,7 +6,7 @@ Ever tried to make a simple script for toonboom Harmony, then got stumped by the
 
 Toonboom Harmony is a very powerful software, with hundreds of functions and tools, and it unlocks a great amount of possibilities for animation studios around the globe. And... being the produce of the hard work of a small team forced to prioritise, it can also be a bit rustic at times!
 
-We are users at heart, animators and riggers, who just want to interact with the software as simply as possible. Simplicity is at the heart of the design of openHarmony. But we also are developpers, and we made the library for people like us who can't resist tweaking the software and bend it in all possible ways, and are looking for powerful functions to help them do it.
+We are users at heart, animators and riggers, who just want to interact with the software as simply as possible. Simplicity is at the heart of the design of openHarmony. But we also are developers, and we made the library for people like us who can't resist tweaking the software and bend it in all possible ways, and are looking for powerful functions to help them do it.
 
 This library's aim is to create a more direct way to interact with Toonboom through scripts, by providing a more intuitive way to access its elements, and help with the cumbersome and repetitive tasks as well as help unlock untapped potential in its many available systems. So we can go from having to do things like this:
 
@@ -78,7 +78,7 @@ All you have to do is call :
 ```javascript
 include("openHarmony.js");
 ```
-at the beggining of your script.
+at the beginning of your script.
 
 You can ask your users to download their copy of the library and store it alongside, or bundle it as you wish as long as you include the license file provided on this repository.
 
@@ -129,7 +129,7 @@ Check that the environment variable `LIB_OPENHARMONY_PATH` is set correctly to t
 ## How to add openHarmony to vscode intellisense for autocompletion
 
 Although not fully supported, you can get most of the autocompletion features to work by adding the following lines to a `jsconfig.json` file placed at the root of your working folder.
-The paths need to be relative which means the openHarmony source code must be placed directly in your developping environnement.
+The paths need to be relative which means the openHarmony source code must be placed directly in your developping environment.
 
 For example, if your working folder contains the openHarmony source in a folder called `OpenHarmony` and your working scripts in a folder called `myScripts`, place the `jsconfig.json` file at the root of the folder and add these lines to the file:

@@ -4,7 +4,7 @@
 // openHarmony Library
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney
+// Developed by Mathieu Chaptel, Chris Fourney
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //
@@ -78,7 +78,7 @@
 * $.log("hello"); // prints out a message to the MessageLog.
 * var myPoint = new $.oPoint(0,0,0); // create a new class instance from an openHarmony class.
 *
-* // function members of the $ objects get published to the global scope, which means $ can be ommited
+* // function members of the $ objects get published to the global scope, which means $ can be omitted
 *
 * log("hello");
 * var myPoint = new oPoint(0,0,0); // This is all valid
@@ -118,7 +118,7 @@ Object.defineProperty( $, "directory", {
 
 
 /**
- * Wether Harmony is run with the interface or simply from command line
+ * Whether Harmony is run with the interface or simply from command line
 */
 Object.defineProperty( $, "batchMode", {
   get: function(){

@@ -4,7 +4,7 @@
 // openHarmony Library
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney
+// Developed by Mathieu Chaptel, Chris Fourney
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //
@@ -67,7 +67,7 @@
 * @hideconstructor
 * @namespace
 * @example
-* // To check wether an action is available, call the synthax:
+* // To check whether an action is available, call the synthax:
 * Action.validate (<actionName>, <responder>);
 *
 * // To launch an action, call the synthax:

@@ -4,7 +4,7 @@
 // openHarmony Library
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney
+// Developed by Mathieu Chaptel, Chris Fourney
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //
@@ -409,7 +409,7 @@ $.oApp.prototype.getToolByName = function(toolName){
 
 
 /**
- * returns the list of stencils useable by the specified tool
+ * returns the list of stencils usable by the specified tool
 * @param {$.oTool} tool the tool object we want valid stencils for
 * @return {$.oStencil[]} the list of stencils compatible with the specified tool
 */

@@ -4,7 +4,7 @@
 // openHarmony Library v0.01
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney...
+// Developed by Mathieu Chaptel, Chris Fourney...
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //
@@ -338,7 +338,7 @@ Object.defineProperty($.oAttribute.prototype, "useSeparate", {
 * Returns the default value of the attribute for most keywords
 * @name $.oAttribute#defaultValue
 * @type {bool}
- * @todo switch the implentation to types?
+ * @todo switch the implementation to types?
 * @example
 * // to reset an attribute to its default value:
 * // (mostly used for position/angle/skew parameters of pegs and drawing nodes)
@@ -449,7 +449,7 @@ $.oAttribute.prototype.getLinkedColumns = function(){
 
 /**
 * Recursively sets an attribute to the same value as another. Both must have the same keyword.
- * @param {bool} [duplicateColumns=false] In the case that the attribute has a column, wether to duplicate the column before linking
+ * @param {bool} [duplicateColumns=false] In the case that the attribute has a column, whether to duplicate the column before linking
 * @private
 */
 $.oAttribute.prototype.setToAttributeValue = function(attributeToCopy, duplicateColumns){

@@ -4,7 +4,7 @@
 // openHarmony Library
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney
+// Developed by Mathieu Chaptel, Chris Fourney
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //

@@ -4,7 +4,7 @@
 // openHarmony Library
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney
+// Developed by Mathieu Chaptel, Chris Fourney
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //
@@ -158,7 +158,7 @@ $.oColorValue.prototype.fromColorString = function (hexString){
 
 
 /**
- * Uses a color integer (used in backdrops) and parses the INT; applies the RGBA components of the INT to thos oColorValue
+ * Uses a color integer (used in backdrops) and parses the INT; applies the RGBA components of the INT to the oColorValue
 * @param { int } colorInt 24 bit-shifted integer containing RGBA values
 */
 $.oColorValue.prototype.parseColorFromInt = function(colorInt){

@@ -4,7 +4,7 @@
 // openHarmony Library
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney
+// Developed by Mathieu Chaptel, Chris Fourney
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //

@@ -4,7 +4,7 @@
 // openHarmony Library
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney
+// Developed by Mathieu Chaptel, Chris Fourney
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //

@@ -5,7 +5,7 @@
 // openHarmony Library
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney
+// Developed by Mathieu Chaptel, Chris Fourney
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -17,7 +17,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //
@@ -250,7 +250,7 @@ $.oDialog.prototype.prompt = function( labelText, title, prefilledText){
 /**
 * Prompts with a file selector window
 * @param {string} [text="Select a file:"] The title of the confirmation dialog.
- * @param {string} [filter="*"] The filter for the file type and/or file name that can be selected. Accepts wildcard charater "*".
+ * @param {string} [filter="*"] The filter for the file type and/or file name that can be selected. Accepts wildcard character "*".
 * @param {string} [getExisting=true] Whether to select an existing file or a save location
 * @param {string} [acceptMultiple=false] Whether or not selecting more than one file is ok. Is ignored if getExisting is falses.
 * @param {string} [startDirectory] The directory showed at the opening of the dialog.
@@ -327,14 +327,14 @@ $.oDialog.prototype.browseForFolder = function(text, startDirectory){
 * @constructor
 * @classdesc An simple progress dialog to display the progress of a task.
 * To react to the user clicking the cancel button, connect a function to $.oProgressDialog.canceled() signal.
- * When $.batchmode is true, the progress will be outputed as a "Progress : value/range" string to the Harmony stdout.
+ * When $.batchmode is true, the progress will be outputted as a "Progress : value/range" string to the Harmony stdout.
 * @param {string} [labelText] The text displayed above the progress bar.
 * @param {string} [range=100] The maximum value that represents a full progress bar.
 * @param {string} [title] The title of the dialog
 * @param {bool} [show=false] Whether to immediately show the dialog.
 *
 * @property {bool} wasCanceled Whether the progress bar was cancelled.
- * @property {$.oSignal} canceled A Signal emited when the dialog is canceled. Can be connected to a callback.
+ * @property {$.oSignal} canceled A Signal emitted when the dialog is canceled. Can be connected to a callback.
 */
 $.oProgressDialog = function( labelText, range, title, show ){
   if (typeof title === 'undefined') var title = "Progress";
@@ -608,7 +608,7 @@ $.oPieMenu = function( name, widgets, show, minAngle, maxAngle, radius, position
   this.maxAngle = maxAngle;
   this.globalCenter = position;
 
-  // how wide outisde the icons is the slice drawn
+  // how wide outside the icons is the slice drawn
   this._circleMargin = 30;
 
   // set these values before calling show() to customize the menu appearance
@@ -974,7 +974,7 @@ $.oPieMenu.prototype.getMenuRadius = function(){
   var _minRadius = UiLoader.dpiScale(30);
   var _speed = 10; // the higher the value, the slower the progression
 
-  // hyperbolic tangent function to determin the radius
+  // hyperbolic tangent function to determine the radius
   var exp = Math.exp(2*itemsNumber/_speed);
   var _radius = ((exp-1)/(exp+1))*_maxRadius+_minRadius;
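For reference, the `getMenuRadius` expression in the last hunk is the hyperbolic tangent written out through exponentials:

```latex
\[
r(n) = \tanh\!\left(\frac{n}{s}\right) r_{\max} + r_{\min},
\qquad
\tanh(x) = \frac{e^{2x} - 1}{e^{2x} + 1}
\]
```

with n the item count and s = 10 the `_speed` constant, so the menu radius grows with the number of items but saturates near `_maxRadius + _minRadius`.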

@@ -1383,7 +1383,7 @@ $.oActionButton.prototype.activate = function(){
 * This class is a subclass of QPushButton and all the methods from that class are available to modify this button.
 * @param {string} paletteName The name of the palette that contains the color
 * @param {string} colorName The name of the color (if more than one is present, will pick the first match)
- * @param {bool} showName Wether to display the name of the color on the button
+ * @param {bool} showName Whether to display the name of the color on the button
 * @param {QWidget} parent The parent QWidget for the button. Automatically set during initialisation of the menu.
 *
 */
@@ -1437,7 +1437,7 @@ $.oColorButton.prototype.activate = function(){
 * @name $.oScriptButton
 * @constructor
 * @classdescription This subclass of QPushButton provides an easy way to create a button for a widget that will launch a function from another script file.<br>
- * The buttons created this way automatically load the icon named after the script if it finds one named like the funtion in a script-icons folder next to the script file.<br>
+ * The buttons created this way automatically load the icon named after the script if it finds one named like the function in a script-icons folder next to the script file.<br>
 * It will also automatically set the callback to lanch the function from the script.<br>
 * This class is a subclass of QPushButton and all the methods from that class are available to modify this button.
 * @param {string} scriptFile The path to the script file that will be launched

@@ -4,7 +4,7 @@
 // openHarmony Library
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney
+// Developed by Mathieu Chaptel, Chris Fourney
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //
@@ -426,7 +426,7 @@ Object.defineProperty($.oDrawing.prototype, 'drawingData', {
 /**
 * Import a given file into an existing drawing.
 * @param {$.oFile} file The path to the file
- * @param {bool} [convertToTvg=false] Wether to convert the bitmap to the tvg format (this doesn't vectorise the drawing)
+ * @param {bool} [convertToTvg=false] Whether to convert the bitmap to the tvg format (this doesn't vectorise the drawing)
 *
 * @return { $.oFile } the oFile object pointing to the drawing file after being it has been imported into the element folder.
 */
@@ -878,8 +878,8 @@ $.oArtLayer.prototype.drawCircle = function(center, radius, lineStyle, fillStyle
 * @param {$.oVertex[]} path an array of $.oVertex objects that describe a path.
 * @param {$.oLineStyle} [lineStyle] the line style to draw with. (By default, will use the current stencil selection)
 * @param {$.oFillStyle} [fillStyle] the fill information for the path. (By default, will use the current palette selection)
- * @param {bool} [polygon] Wether bezier handles should be created for the points in the path (ignores "onCurve" properties of oVertex from path)
- * @param {bool} [createUnderneath] Wether the new shape will appear on top or underneath the contents of the layer. (not working yet)
+ * @param {bool} [polygon] Whether bezier handles should be created for the points in the path (ignores "onCurve" properties of oVertex from path)
+ * @param {bool} [createUnderneath] Whether the new shape will appear on top or underneath the contents of the layer. (not working yet)
 */
 $.oArtLayer.prototype.drawShape = function(path, lineStyle, fillStyle, polygon, createUnderneath){
   if (typeof fillStyle === 'undefined') var fillStyle = new this.$.oFillStyle();
@@ -959,7 +959,7 @@ $.oArtLayer.prototype.drawContour = function(path, fillStyle){
 * @param {float} width the width of the rectangle.
 * @param {float} height the height of the rectangle.
 * @param {$.oLineStyle} lineStyle a line style to use for the rectangle stroke.
- * @param {$.oFillStyle} fillStyle a fill style to use for the rectange fill.
+ * @param {$.oFillStyle} fillStyle a fill style to use for the rectangle fill.
 * @returns {$.oShape} the shape containing the added stroke.
 */
 $.oArtLayer.prototype.drawRectangle = function(x, y, width, height, lineStyle, fillStyle){
@@ -1514,7 +1514,7 @@ Object.defineProperty($.oStroke.prototype, "path", {
 
 
 /**
- * The oVertex that are on the stroke (Bezier handles exluded.)
+ * The oVertex that are on the stroke (Bezier handles excluded.)
 * The first is repeated at the last position when the stroke is closed.
 * @name $.oStroke#points
 * @type {$.oVertex[]}
@@ -1583,7 +1583,7 @@ Object.defineProperty($.oStroke.prototype, "style", {
 
 
 /**
- * wether the stroke is a closed shape.
+ * whether the stroke is a closed shape.
 * @name $.oStroke#closed
 * @type {bool}
 */
@@ -1919,7 +1919,7 @@ $.oContour.prototype.toString = function(){
 * @constructor
 * @classdesc
 * The $.oVertex class represents a single control point on a stroke. This class is used to get the index of the point in the stroke path sequence, as well as its position as a float along the stroke's length.
- * The onCurve property describes wether this control point is a bezier handle or a point on the curve.
+ * The onCurve property describes whether this control point is a bezier handle or a point on the curve.
 *
 * @param {$.oStroke} stroke the stroke that this vertex belongs to
 * @param {float} x the x coordinate of the vertex, in drawing space
@@ -4,7 +4,7 @@
 // openHarmony Library v0.01
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney...
+// Developed by Mathieu Chaptel, Chris Fourney...
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //

@@ -4,7 +4,7 @@
 // openHarmony Library
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney
+// Developed by Mathieu Chaptel, Chris Fourney
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //
@@ -509,7 +509,7 @@ Object.defineProperty($.oFile.prototype, 'fullName', {


 /**
- * The name of the file without extenstion.
+ * The name of the file without extension.
  * @name $.oFile#name
  * @type {string}
  */

@@ -4,7 +4,7 @@
 // openHarmony Library
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney
+// Developed by Mathieu Chaptel, Chris Fourney
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //
@@ -263,7 +263,7 @@ Object.defineProperty($.oFrame.prototype, 'duration', {
       return _sceneLength;
     }
 
-    // walk up the frames of the scene to the next keyFrame to determin duration
+    // walk up the frames of the scene to the next keyFrame to determine duration
     var _frames = this.column.frames
     for (var i=this.frameNumber+1; i<_sceneLength; i++){
       if (_frames[i].isKeyframe) return _frames[i].frameNumber - _startFrame;

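The duration getter above walks forward frame by frame until it hits the next keyframe, falling back to the scene end when none follows. A minimal Python sketch of that walk, using hypothetical stand-ins for the oFrame attributes (frameNumber, isKeyframe):

    def frame_duration(frames, start_index, scene_length):
        # exposure of the frame at start_index: distance to the next
        # keyframe, or to the end of the scene if none follows
        start_frame = frames[start_index].frameNumber
        for i in range(start_index + 1, scene_length):
            if frames[i].isKeyframe:
                return frames[i].frameNumber - start_frame
        return scene_length - start_frame
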
@@ -426,7 +426,7 @@ Object.defineProperty($.oFrame.prototype, 'velocity', {
  * easeIn : a $.oPoint object representing the left handle for bezier columns, or a {point, ease} object for ease columns.
  * easeOut : a $.oPoint object representing the left handle for bezier columns, or a {point, ease} object for ease columns.
  * continuity : the type of bezier used by the point.
- * constant : wether the frame is interpolated or a held value.
+ * constant : whether the frame is interpolated or a held value.
  * @name $.oFrame#ease
  * @type {oPoint/object}
  */
@@ -520,7 +520,7 @@ Object.defineProperty($.oFrame.prototype, 'easeOut', {


 /**
- * Determines the frame's continuity setting. Can take the values "CORNER", (two independant bezier handles on each side), "SMOOTH"(handles are aligned) or "STRAIGHT" (no handles and in straight lines).
+ * Determines the frame's continuity setting. Can take the values "CORNER", (two independent bezier handles on each side), "SMOOTH"(handles are aligned) or "STRAIGHT" (no handles and in straight lines).
  * @name $.oFrame#continuity
  * @type {string}
  */

@@ -4,7 +4,7 @@
 // openHarmony Library v0.01
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney...
+// Developed by Mathieu Chaptel, Chris Fourney...
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //
@@ -516,5 +516,5 @@ Object.defineProperty($.oList.prototype, 'toString', {



-//Needs all filtering, limiting. mapping, pop, concat, join, ect
+//Needs all filtering, limiting. mapping, pop, concat, join, etc
 //Speed up by finessing the way it extends and tracks the enumerable properties.

@@ -4,7 +4,7 @@
 // openHarmony Library
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney
+// Developed by Mathieu Chaptel, Chris Fourney
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //
@@ -193,7 +193,7 @@ $.oPoint.prototype.pointSubtract = function( sub_pt ){
 /**
  * Subtracts the point to the coordinates of the current oPoint and returns a new oPoint with the result.
  * @param {$.oPoint} point The point to subtract to this point.
- * @returns {$.oPoint} a new independant oPoint.
+ * @returns {$.oPoint} a new independent oPoint.
  */
 $.oPoint.prototype.subtractPoint = function( point ){
   var x = this.x - point.x;
@@ -298,9 +298,9 @@ $.oPoint.prototype.convertToWorldspace = function(){


 /**
- * Linearily Interpolate between this (0.0) and the provided point (1.0)
+ * Linearly Interpolate between this (0.0) and the provided point (1.0)
  * @param {$.oPoint} point The target point at 100%
- * @param {double} perc 0-1.0 value to linearily interp
+ * @param {double} perc 0-1.0 value to linearly interp
  *
  * @return: { $.oPoint } The interpolated value.
  */

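The linear interpolation documented above is a weighted sum per axis; a quick Python sketch with plain tuples standing in for oPoint:

    def lerp(p0, p1, t):
        # t=0.0 returns p0, t=1.0 returns p1
        return tuple(a + (b - a) * t for a, b in zip(p0, p1))

    lerp((0.0, 0.0), (10.0, 4.0), 0.5)  # (5.0, 2.0)
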
@@ -410,9 +410,9 @@ $.oBox.prototype.include = function(box){


 /**
- * Checks wether the box contains another $.oBox.
+ * Checks whether the box contains another $.oBox.
  * @param {$.oBox} box The $.oBox to check for.
- * @param {bool} [partial=false] wether to accept partially contained boxes.
+ * @param {bool} [partial=false] whether to accept partially contained boxes.
  */
 $.oBox.prototype.contains = function(box, partial){
   if (typeof partial === 'undefined') var partial = false;
@@ -537,7 +537,7 @@ $.oMatrix.prototype.toString = function(){
  * @classdesc The $.oVector is a replacement for the Vector3d objects of Harmony.
  * @param {float} x a x coordinate for this vector.
  * @param {float} y a y coordinate for this vector.
- * @param {float} [z=0] a z coordinate for this vector. If ommited, will be set to 0 and vector will be 2D.
+ * @param {float} [z=0] a z coordinate for this vector. If omitted, will be set to 0 and vector will be 2D.
  */
 $.oVector = function(x, y, z){
   if (typeof z === "undefined" || isNaN(z)) var z = 0;

@@ -4,7 +4,7 @@
 // openHarmony Library v0.01
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney...
+// Developed by Mathieu Chaptel, Chris Fourney...
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //

@@ -4,7 +4,7 @@
 // openHarmony Library v0.01
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney...
+// Developed by Mathieu Chaptel, Chris Fourney...
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //
@@ -54,7 +54,7 @@


 /**
- * The $.oUtils helper class -- providing generic utilities. Doesn't need instanciation.
+ * The $.oUtils helper class -- providing generic utilities. Doesn't need instantiation.
  * @classdesc $.oUtils utility Class
  */
 $.oUtils = function(){

@@ -4,7 +4,7 @@
 // openHarmony Library v0.01
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney...
+// Developed by Mathieu Chaptel, Chris Fourney...
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //
@@ -87,7 +87,7 @@ $.oNetwork = function( ){
  * @param {function} callback_func Providing a callback function prevents blocking, and will respond on this function. The callback function is in form func( results ){}
  * @param {bool} use_json In the event of a JSON api, this will return an object converted from the returned JSON.
  *
- * @return: {string/object} The resulting object/string from the query -- otherwise a bool as false when an error occured..
+ * @return: {string/object} The resulting object/string from the query -- otherwise a bool as false when an error occurred..
  */
 $.oNetwork.prototype.webQuery = function ( address, callback_func, use_json ){
   if (typeof callback_func === 'undefined') var callback_func = false;
@@ -272,7 +272,7 @@ $.oNetwork.prototype.webQuery = function ( address, callback_func, use_json ){
  * @param {function} path The local file path to save the download.
  * @param {bool} replace Replace the file if it exists.
  *
- * @return: {string/object} The resulting object/string from the query -- otherwise a bool as false when an error occured..
+ * @return: {string/object} The resulting object/string from the query -- otherwise a bool as false when an error occurred..
  */
 $.oNetwork.prototype.downloadSingle = function ( address, path, replace ){
   if (typeof replace === 'undefined') var replace = false;

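The webQuery contract above (address, optional callback, optional JSON decoding, false on error) maps onto a small standard-library sketch in Python; names here are illustrative, not part of the library:

    import json
    from urllib.request import urlopen

    def web_query(address, callback_func=None, use_json=False):
        try:
            body = urlopen(address).read().decode("utf-8")
        except Exception:
            return False  # mirror the "false when an error occurred" contract
        result = json.loads(body) if use_json else body
        if callback_func:
            callback_func(result)  # a non-blocking variant would thread this
            return True
        return result
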
@@ -4,7 +4,7 @@
 // openHarmony Library
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney
+// Developed by Mathieu Chaptel, Chris Fourney
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //
@@ -562,7 +562,7 @@ Object.defineProperty($.oNode.prototype, 'height', {


 /**
- * The list of oNodeLinks objects descibing the connections to the inport of this node, in order of inport.
+ * The list of oNodeLinks objects describing the connections to the inport of this node, in order of inport.
  * @name $.oNode#inLinks
  * @readonly
  * @deprecated returns $.oNodeLink instances but $.oLink is preferred. Use oNode.getInLinks() instead.
@@ -658,7 +658,7 @@ Object.defineProperty($.oNode.prototype, 'outPorts', {


 /**
- * The list of oNodeLinks objects descibing the connections to the outports of this node, in order of outport.
+ * The list of oNodeLinks objects describing the connections to the outports of this node, in order of outport.
  * @name $.oNode#outLinks
  * @readonly
  * @type {$.oNodeLink[]}
@@ -1666,7 +1666,7 @@ $.oNode.prototype.refreshAttributes = function( ){
  * It represents peg nodes in the scene.
  * @constructor
  * @augments $.oNode
- * @classdesc Peg Moudle Class
+ * @classdesc Peg Module Class
  * @param {string} path Path to the node in the network.
  * @param {oScene} oSceneObject Access to the oScene object of the DOM.
  */
@@ -1886,7 +1886,7 @@ $.oDrawingNode.prototype.getDrawingAtFrame = function(frameNumber){


 /**
- * Gets the list of palettes containing colors used by a drawing node. This only gets palettes with the first occurence of the colors.
+ * Gets the list of palettes containing colors used by a drawing node. This only gets palettes with the first occurrence of the colors.
  * @return {$.oPalette[]} The palettes that contain the color IDs used by the drawings of the node.
  */
 $.oDrawingNode.prototype.getUsedPalettes = function(){
@@ -1968,7 +1968,7 @@ $.oDrawingNode.prototype.unlinkPalette = function(oPaletteObject){
  * Duplicates a node by creating an independent copy.
  * @param {string} [newName] The new name for the duplicated node.
  * @param {oPoint} [newPosition] The new position for the duplicated node.
- * @param {bool} [duplicateElement] Wether to also duplicate the element.
+ * @param {bool} [duplicateElement] Whether to also duplicate the element.
  */
 $.oDrawingNode.prototype.duplicate = function(newName, newPosition, duplicateElement){
   if (typeof newPosition === 'undefined') var newPosition = this.nodePosition;
@@ -2464,7 +2464,7 @@ $.oGroupNode.prototype.getNodeByName = function(name){
  * Returns all the nodes of a certain type in the group.
  * Pass a value to recurse to look into the groups as well.
  * @param {string} typeName The type of the nodes.
- * @param {bool} recurse Wether to look inside the groups.
+ * @param {bool} recurse Whether to look inside the groups.
  *
  * @return {$.oNode[]} The nodes found.
  */
@@ -2626,7 +2626,7 @@ $.oGroupNode.prototype.orderNodeView = function(recurse){
  *
  * peg.linkOutNode(drawingNode);
  *
- * //through all this we didn't specify nodePosition parameters so we'll sort evertything at once
+ * //through all this we didn't specify nodePosition parameters so we'll sort everything at once
  *
  * sceneRoot.orderNodeView();
  *
@@ -3333,7 +3333,7 @@ $.oGroupNode.prototype.importImageAsTVG = function(path, alignment, nodePosition
  * imports an image sequence as a node into the current group.
  * @param {$.oFile[]} imagePaths a list of paths to the images to import (can pass a list of strings or $.oFile)
  * @param {number} [exposureLength=1] the number of frames each drawing should be exposed at. If set to 0/false, each drawing will use the numbering suffix of the file to set its frame.
- * @param {boolean} [convertToTvg=false] wether to convert the files to tvg during import
+ * @param {boolean} [convertToTvg=false] whether to convert the files to tvg during import
  * @param {string} [alignment="ASIS"] the alignment to apply to the node
  * @param {$.oPoint} [nodePosition] the position of the node in the nodeview
  *
@@ -3346,7 +3346,7 @@ $.oGroupNode.prototype.importImageSequence = function(imagePaths, exposureLength
 
   if (typeof extendScene === 'undefined') var extendScene = false;
 
-  // match anything but capture trailing numbers and separates punctuation preceeding it
+  // match anything but capture trailing numbers and separates punctuation preceding it
   var numberingRe = /(.*?)([\W_]+)?(\d*)$/i;
 
   // sanitize imagePaths

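The numbering regex introduced above splits a file name into a base, an optional punctuation separator, and a trailing frame number; the same pattern behaves identically in Python:

    import re

    numbering_re = re.compile(r"(.*?)([\W_]+)?(\d*)$", re.IGNORECASE)

    base, separator, number = numbering_re.match("shot_010.0042").groups()
    # base == "shot_010", separator == ".", number == "0042"
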
@@ -4,7 +4,7 @@
 // openHarmony Library v0.01
 //
 //
-// Developped by Mathieu Chaptel, Chris Fourney...
+// Developed by Mathieu Chaptel, Chris Fourney...
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //
@@ -174,7 +174,7 @@ Object.defineProperty($.oNodeLink.prototype, 'outNode', {
       return;
     }
 
-    this.apply(); // do we really want to apply everytime we set?
+    this.apply(); // do we really want to apply every time we set?
   }
 });
 
@@ -198,7 +198,7 @@ Object.defineProperty($.oNodeLink.prototype, 'inNode', {
       return;
     }
 
-    this.apply(); // do we really want to apply everytime we set?
+    this.apply(); // do we really want to apply every time we set?
   }
 });
 
@@ -222,7 +222,7 @@ Object.defineProperty($.oNodeLink.prototype, 'outPort', {
      return;
    }
 
-    this.apply(); // do we really want to apply everytime we set?
+    this.apply(); // do we really want to apply every time we set?
   }
 });
 
@@ -256,7 +256,7 @@ Object.defineProperty($.oNodeLink.prototype, 'inPort', {
      return;
    }
 
-    this.apply(); // do we really want to apply everytime we set?
+    this.apply(); // do we really want to apply every time we set?
   }
 });
 
@@ -983,7 +983,7 @@ $.oNodeLink.prototype.validate = function ( ) {
  * @return {bool} Whether the connection is a valid connection that exists currently in the node system.
  */
 $.oNodeLink.prototype.validateUpwards = function( inport, outportProvided ) {
-  //IN THE EVENT OUTNODE WASNT PROVIDED.
+  //IN THE EVENT OUTNODE WASN'T PROVIDED.
   this.path = this.findInputPath( this._inNode, inport, [] );
   if( !this.path || this.path.length == 0 ){
     return false;
@@ -1173,7 +1173,7 @@ Object.defineProperty($.oLink.prototype, 'outPort', {


 /**
- * The index of the link comming out of the out-port.
+ * The index of the link coming out of the out-port.
  * <br>In the event this value wasn't known by the link object but the link is actually connected, the correct value will be found.
  * @name $.oLink#outLink
  * @readonly
@@ -1323,7 +1323,7 @@ $.oLink.prototype.getValidLink = function(createOutPorts, createInPorts){


 /**
- * Attemps to connect a link. Will guess the ports if not provided.
+ * Attempts to connect a link. Will guess the ports if not provided.
  * @return {bool}
  */
 $.oLink.prototype.connect = function(){
@@ -1623,11 +1623,11 @@ $.oLinkPath.prototype.findExistingPath = function(){


 /**
- * Gets a link object from two nodes that can be succesfully connected. Provide port numbers if there are specific requirements to match. If a link already exists, it will be returned.
+ * Gets a link object from two nodes that can be successfully connected. Provide port numbers if there are specific requirements to match. If a link already exists, it will be returned.
  * @param {$.oNode} start The node from which the link originates.
  * @param {$.oNode} end The node at which the link ends.
- * @param {int} [outPort] A prefered out-port for the link to use.
- * @param {int} [inPort] A prefered in-port for the link to use.
+ * @param {int} [outPort] A preferred out-port for the link to use.
+ * @param {int} [inPort] A preferred in-port for the link to use.
  *
 * @return {$.oLink} the valid $.oLink object. Returns null if no such link could be created (for example if the node's in-port is already linked)
 */

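Each setter above pushes the change to the node system immediately, which the inline comment questions. The shape of that eager-apply pattern, reduced to a hypothetical Python property:

    class Link(object):
        def __init__(self):
            self._in_port = 0

        @property
        def in_port(self):
            return self._in_port

        @in_port.setter
        def in_port(self, value):
            self._in_port = value
            self.apply()  # eager: every assignment touches the scene

        def apply(self):
            pass  # would reconnect the ports in the node system
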
@@ -4,7 +4,7 @@
 // openHarmony Library v0.01
 //
 //
-// Developped by Mathieu Chaptel, ...
+// Developed by Mathieu Chaptel, ...
 //
 //
 // This library is an open source implementation of a Document Object Model
@@ -16,7 +16,7 @@
 // and by hiding the heavy lifting required by the official API.
 //
 // This library is provided as is and is a work in progress. As such, not every
-// function has been implemented or is garanteed to work. Feel free to contribute
+// function has been implemented or is guaranteed to work. Feel free to contribute
 // improvements to its official github. If you do make sure you follow the provided
 // template and naming conventions and document your new methods properly.
 //
@@ -212,7 +212,7 @@ function openHarmony_toolInstaller(){


   //----------------------------------------------
-  //-- GET THE FILE CONTENTS IN A DIRCTORY ON GIT
+  //-- GET THE FILE CONTENTS IN A DIRECTORY ON GIT
   this.recurse_files = function( contents, arr_files ){
     with( context.$.global ){
       try{
@@ -501,7 +501,7 @@ function openHarmony_toolInstaller(){
       var download_item = item["download_url"];
       var query = $.network.webQuery( download_item, false, false );
       if( query ){
-        //INSTALL TYPES ARE script, package, ect.
+        //INSTALL TYPES ARE script, package, etc.
 
         if( install_types[ m.install_cache[ item["url"] ] ] ){
           m.installLabel.text = install_types[ m.install_cache[ item["url"] ] ];

@@ -1,7 +1,7 @@
 {
   "name": "openharmony",
   "version": "0.0.1",
-  "description": "An Open Source Imlementation of a Document Object Model for the Toonboom Harmony scripting interface",
+  "description": "An Open Source Implementation of a Document Object Model for the Toonboom Harmony scripting interface",
   "main": "openHarmony.js",
   "scripts": {
     "test": "$",

@@ -108,7 +108,7 @@ __all__ = [
     "apply_colorspace_project",
     "apply_colorspace_clips",
     "get_sequence_pattern_and_padding",
-    # depricated
+    # deprecated
     "get_track_item_pype_tag",
     "set_track_item_pype_tag",
     "get_track_item_pype_data",

@@ -193,8 +193,8 @@ def parse_container(item, validate=True):
             return
         # convert the data to list and validate them
         for _, obj_data in _data.items():
-            cotnainer = data_to_container(item, obj_data)
-            return_list.append(cotnainer)
+            container = data_to_container(item, obj_data)
+            return_list.append(container)
         return return_list
     else:
         _data = lib.get_trackitem_openpype_data(item)

@@ -411,7 +411,7 @@ class ClipLoader:
         self.with_handles = options.get("handles") or bool(
             options.get("handles") is True)
         # try to get value from options or evaluate key value for `load_how`
-        self.sequencial_load = options.get("sequencially") or bool(
+        self.sequencial_load = options.get("sequentially") or bool(
             "Sequentially in order" in options.get("load_how", ""))
         # try to get value from options or evaluate key value for `load_to`
         self.new_sequence = options.get("newSequence") or bool(
@@ -836,7 +836,7 @@ class PublishClip:
         # increasing steps by index of rename iteration
         self.count_steps *= self.rename_index
 
-        hierarchy_formating_data = {}
+        hierarchy_formatting_data = {}
         hierarchy_data = deepcopy(self.hierarchy_data)
         _data = self.track_item_default_data.copy()
         if self.ui_inputs:
@@ -871,13 +871,13 @@ class PublishClip:
 
             # fill up pythonic expresisons in hierarchy data
             for k, _v in hierarchy_data.items():
-                hierarchy_formating_data[k] = _v["value"].format(**_data)
+                hierarchy_formatting_data[k] = _v["value"].format(**_data)
         else:
             # if no gui mode then just pass default data
-            hierarchy_formating_data = hierarchy_data
+            hierarchy_formatting_data = hierarchy_data
 
         tag_hierarchy_data = self._solve_tag_hierarchy_data(
-            hierarchy_formating_data
+            hierarchy_formatting_data
         )
 
        tag_hierarchy_data.update({"heroTrack": True})
@@ -905,20 +905,20 @@ class PublishClip:
         # add data to return data dict
         self.tag_data.update(tag_hierarchy_data)
 
-    def _solve_tag_hierarchy_data(self, hierarchy_formating_data):
+    def _solve_tag_hierarchy_data(self, hierarchy_formatting_data):
         """ Solve tag data from hierarchy data and templates. """
         # fill up clip name and hierarchy keys
-        hierarchy_filled = self.hierarchy.format(**hierarchy_formating_data)
-        clip_name_filled = self.clip_name.format(**hierarchy_formating_data)
+        hierarchy_filled = self.hierarchy.format(**hierarchy_formatting_data)
+        clip_name_filled = self.clip_name.format(**hierarchy_formatting_data)
 
         # remove shot from hierarchy data: is not needed anymore
-        hierarchy_formating_data.pop("shot")
+        hierarchy_formatting_data.pop("shot")
 
         return {
             "newClipName": clip_name_filled,
             "hierarchy": hierarchy_filled,
             "parents": self.parents,
-            "hierarchyData": hierarchy_formating_data,
+            "hierarchyData": hierarchy_formatting_data,
             "subset": self.subset,
             "family": self.subset_family,
             "families": [self.data["family"]]
@@ -934,16 +934,16 @@ class PublishClip:
         )
 
         # first collect formatting data to use for formatting template
-        formating_data = {}
+        formatting_data = {}
         for _k, _v in self.hierarchy_data.items():
             value = _v["value"].format(
                 **self.track_item_default_data)
-            formating_data[_k] = value
+            formatting_data[_k] = value
 
         return {
             "entity_type": entity_type,
             "entity_name": template.format(
-                **formating_data
+                **formatting_data
             )
         }
 

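The hierarchy templates being renamed above are plain str.format templates filled per key; a compact illustration with made-up keys and values:

    hierarchy_data = {
        "folder": {"value": "shots"},
        "sequence": {"value": "sq{sequence_index:02d}"},
    }
    defaults = {"sequence_index": 3}

    formatting_data = {
        key: item["value"].format(**defaults)
        for key, item in hierarchy_data.items()
    }
    # {"folder": "shots", "sequence": "sq03"}
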
@@ -60,7 +60,7 @@ class Creator(LegacyCreator):
 
     def process(self):
         instance = super(CreateEpicNode, self, process()
-        # Set paramaters for Alembic node
+        # Set parameters for Alembic node
         instance.setParms(
             {"sop_path": "$HIP/%s.abc" % self.nodes[0]}
         )

@@ -69,7 +69,7 @@ def generate_shelves():
 
     mandatory_attributes = {'label', 'script'}
     for tool_definition in shelf_definition.get('tools_list'):
-        # We verify that the name and script attibutes of the tool
+        # We verify that the name and script attributes of the tool
         # are set
         if not all(
             tool_definition[key] for key in mandatory_attributes

@@ -1,5 +1,5 @@
 # -*- coding: utf-8 -*-
-"""Convertor for legacy Houdini subsets."""
+"""Converter for legacy Houdini subsets."""
 from openpype.pipeline.create.creator_plugins import SubsetConvertorPlugin
 from openpype.hosts.houdini.api.lib import imprint
 
@@ -7,7 +7,7 @@ from openpype.hosts.houdini.api.lib import imprint
 class HoudiniLegacyConvertor(SubsetConvertorPlugin):
     """Find and convert any legacy subsets in the scene.
 
-    This Convertor will find all legacy subsets in the scene and will
+    This Converter will find all legacy subsets in the scene and will
     transform them to the current system. Since the old subsets doesn't
     retain any information about their original creators, the only mapping
     we can do is based on their families.

@@ -1,7 +1,6 @@
 import os
 import hou
 
-from openpype.pipeline import legacy_io
 import pyblish.api
 
 
@@ -11,7 +10,7 @@ class CollectHoudiniCurrentFile(pyblish.api.InstancePlugin):
     order = pyblish.api.CollectorOrder - 0.01
     label = "Houdini Current File"
     hosts = ["houdini"]
-    family = ["workfile"]
+    families = ["workfile"]
 
     def process(self, instance):
         """Inject the current working file"""
@@ -21,7 +20,7 @@ class CollectHoudiniCurrentFile(pyblish.api.InstancePlugin):
             # By default, Houdini will even point a new scene to a path.
             # However if the file is not saved at all and does not exist,
             # we assume the user never set it.
-            filepath = ""
+            current_file = ""
 
         elif os.path.basename(current_file) == "untitled.hip":
             # Due to even a new file being called 'untitled.hip' we are unable

@@ -69,7 +69,7 @@ def _resolution_from_document(doc):
     resolution_width = doc["data"].get("resolution_width")
     resolution_height = doc["data"].get("resolution_height")
 
-    # Make sure both width and heigh are set
+    # Make sure both width and height are set
     if resolution_width is None or resolution_height is None:
         cmds.warning(
             "No resolution information found for \"{}\"".format(doc["name"])

@@ -2478,8 +2478,8 @@ def load_capture_preset(data=None):
                     float(value[2]) / 255
                 ]
                 disp_options[key] = value
-            else:
-                disp_options['displayGradient'] = True
+            elif key == "displayGradient":
+                disp_options[key] = value
 
     options['display_options'] = disp_options
 

@@ -339,7 +339,7 @@ class ARenderProducts:
         aov_tokens = ["<aov>", "<renderpass>"]
 
         def match_last(tokens, text):
-            """regex match the last occurence from a list of tokens"""
+            """regex match the last occurrence from a list of tokens"""
             pattern = "(?:.*)({})".format("|".join(tokens))
             return re.search(pattern, text, re.IGNORECASE)
 

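The greedy (?:.*) prefix in match_last pushes the search past earlier hits so only the final token occurrence is captured, which is easy to confirm:

    import re

    def match_last(tokens, text):
        # greedy prefix consumes everything before the last occurrence
        pattern = "(?:.*)({})".format("|".join(tokens))
        return re.search(pattern, text, re.IGNORECASE)

    m = match_last(["<aov>", "<renderpass>"], "maya/<aov>/file_<aov>")
    m.group(1), m.start(1)  # ('<aov>', 16)
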
@@ -1054,7 +1054,7 @@ class RenderProductsRedshift(ARenderProducts):
     def get_files(self, product):
         # When outputting AOVs we need to replace Redshift specific AOV tokens
         # with Maya render tokens for generating file sequences. We validate to
-        # a specific AOV fileprefix so we only need to accout for one
+        # a specific AOV fileprefix so we only need to account for one
         # replacement.
         if not product.multipart and product.driver:
             file_prefix = self._get_attr(product.driver + ".filePrefix")

@@ -33,7 +33,7 @@ class MayaTemplateBuilder(AbstractTemplateBuilder):
             get_template_preset implementation)
 
         Returns:
-            bool: Wether the template was succesfully imported or not
+            bool: Whether the template was successfully imported or not
         """
 
         if cmds.objExists(PLACEHOLDER_SET):
@@ -116,7 +116,7 @@ class MayaPlaceholderLoadPlugin(PlaceholderPlugin, PlaceholderLoadMixin):
         placeholder_name_parts = placeholder_data["builder_type"].split("_")
 
         pos = 1
-        # add famlily in any
+        # add family in any
         placeholder_family = placeholder_data["family"]
         if placeholder_family:
             placeholder_name_parts.insert(pos, placeholder_family)

@@ -12,6 +12,7 @@ class CreateLook(plugin.Creator):
     family = "look"
     icon = "paint-brush"
     make_tx = True
+    rs_tex = False
 
     def __init__(self, *args, **kwargs):
         super(CreateLook, self).__init__(*args, **kwargs)
@@ -20,7 +21,8 @@ class CreateLook(plugin.Creator):
 
         # Whether to automatically convert the textures to .tx upon publish.
         self.data["maketx"] = self.make_tx
-
+        # Whether to automatically convert the textures to .rstex upon publish.
+        self.data["rstex"] = self.rs_tex
         # Enable users to force a copy.
         # - on Windows is "forceCopy" always changed to `True` because of
         #   windows implementation of hardlinks

@@ -118,7 +118,7 @@ class ImportMayaLoader(load.LoaderPlugin):
             "clean_import",
             label="Clean import",
             default=False,
-            help="Should all occurences of cbId be purged?"
+            help="Should all occurrences of cbId be purged?"
         )
     ]
 

@@ -180,7 +180,7 @@ class ArnoldStandinLoader(load.LoaderPlugin):
         proxy_basename, proxy_path = self._get_proxy_path(path)
 
         # Whether there is proxy or so, we still update the string operator.
-        # If no proxy exists, the string operator wont replace anything.
+        # If no proxy exists, the string operator won't replace anything.
         cmds.setAttr(
             string_replace_operator + ".match",
             "resources/" + proxy_basename,

@@ -1,4 +1,6 @@
 import os
+import difflib
+import contextlib
 from maya import cmds
 
 from openpype.settings import get_project_settings
@@ -8,7 +10,82 @@ from openpype.pipeline.create import (
     get_legacy_creator_by_name,
 )
 import openpype.hosts.maya.api.plugin
-from openpype.hosts.maya.api.lib import maintained_selection
+from openpype.hosts.maya.api.lib import (
+    maintained_selection,
+    get_container_members
+)
+
+
+@contextlib.contextmanager
+def preserve_modelpanel_cameras(container, log=None):
+    """Preserve camera members of container in the modelPanels.
+
+    This is used to ensure a camera remains in the modelPanels after updating
+    to a new version.
+
+    """
+
+    # Get the modelPanels that used the old camera
+    members = get_container_members(container)
+    old_cameras = set(cmds.ls(members, type="camera", long=True))
+    if not old_cameras:
+        # No need to manage anything
+        yield
+        return
+
+    panel_cameras = {}
+    for panel in cmds.getPanel(type="modelPanel"):
+        cam = cmds.ls(cmds.modelPanel(panel, query=True, camera=True),
+                      long=True)
+
+        # Often but not always maya returns the transform from the
+        # modelPanel as opposed to the camera shape, so we convert it
+        # to explicitly be the camera shape
+        if cmds.nodeType(cam) != "camera":
+            cam = cmds.listRelatives(cam,
+                                     children=True,
+                                     fullPath=True,
+                                     type="camera")[0]
+        if cam in old_cameras:
+            panel_cameras[panel] = cam
+
+    if not panel_cameras:
+        # No need to manage anything
+        yield
+        return
+
+    try:
+        yield
+    finally:
+        new_members = get_container_members(container)
+        new_cameras = set(cmds.ls(new_members, type="camera", long=True))
+        if not new_cameras:
+            return
+
+        for panel, cam_name in panel_cameras.items():
+            new_camera = None
+            if cam_name in new_cameras:
+                new_camera = cam_name
+            elif len(new_cameras) == 1:
+                new_camera = next(iter(new_cameras))
+            else:
+                # Multiple cameras in the updated container but not an exact
+                # match detected by name. Find the closest match
+                matches = difflib.get_close_matches(word=cam_name,
+                                                    possibilities=new_cameras,
+                                                    n=1)
+                if matches:
+                    new_camera = matches[0]  # best match
+                    if log:
+                        log.info("Camera in '{}' restored with "
+                                 "closest match camera: {} (before: {})"
+                                 .format(panel, new_camera, cam_name))
+
+            if not new_camera:
+                # Unable to find the camera to re-apply in the modelpanel
+                continue
+
+            cmds.modelPanel(panel, edit=True, camera=new_camera)
+
+
 class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
@@ -68,6 +145,9 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
 
             new_nodes = (list(set(nodes) - set(shapes)))
 
+            # if there are cameras, try to lock their transforms
+            self._lock_camera_transforms(new_nodes)
+
             current_namespace = pm.namespaceInfo(currentNamespace=True)
 
             if current_namespace != ":":
@@ -136,6 +216,15 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
     def switch(self, container, representation):
         self.update(container, representation)
 
+    def update(self, container, representation):
+        with preserve_modelpanel_cameras(container, log=self.log):
+            super(ReferenceLoader, self).update(container, representation)
+
+        # We also want to lock camera transforms on any new cameras in the
+        # reference or for a camera which might have changed names.
+        members = get_container_members(container)
+        self._lock_camera_transforms(members)
+
     def _post_process_rig(self, name, namespace, context, options):
 
         output = next((node for node in self if
@@ -168,3 +257,18 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
             options={"useSelection": True},
             data={"dependencies": dependency}
         )
+
+    def _lock_camera_transforms(self, nodes):
+        cameras = cmds.ls(nodes, type="camera")
+        if not cameras:
+            return
+
+        # Check the Maya version, lockTransform has been introduced since
+        # Maya 2016.5 Ext 2
+        version = int(cmds.about(version=True))
+        if version >= 2016:
+            for camera in cameras:
+                cmds.camera(camera, edit=True, lockTransform=True)
+        else:
+            self.log.warning("This version of Maya does not support locking of"
+                             " transforms of cameras.")

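preserve_modelpanel_cameras follows the capture/yield/restore shape that contextlib.contextmanager enables; the generic pattern, stripped of the Maya specifics:

    import contextlib

    @contextlib.contextmanager
    def preserving(capture, restore):
        state = capture()      # snapshot before the update
        try:
            yield              # caller swaps scene contents here
        finally:
            restore(state)     # re-apply (or re-map) what was captured
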
@@ -255,7 +255,7 @@ class CollectMultiverseLookData(pyblish.api.InstancePlugin):
     Searches through the overrides finding all material overrides. From there
     it extracts the shading group and then finds all texture files in the
     shading group network. It also checks for mipmap versions of texture files
-    and adds them to the resouces to get published.
+    and adds them to the resources to get published.
 
     """
 

@@ -1,33 +1,42 @@
 # -*- coding: utf-8 -*-
 """Maya look extractor."""
-import os
-import json
-import tempfile
-import platform
-import contextlib
+from abc import ABCMeta, abstractmethod
 from collections import OrderedDict
 
-from maya import cmds  # noqa
+import contextlib
+import json
+import logging
+import os
+import platform
+import tempfile
+import six
+import attr
 
 import pyblish.api
 
-from openpype.lib import source_hash, run_subprocess
-from openpype.pipeline import legacy_io, publish
+from maya import cmds  # noqa
+
+from openpype.lib.vendor_bin_utils import find_executable
+from openpype.lib import source_hash, run_subprocess, get_oiio_tools_path
+from openpype.pipeline import legacy_io, publish, KnownPublishError
 from openpype.hosts.maya.api import lib
+from openpype.hosts.maya.api.lib import image_info, guess_colorspace
 
 # Modes for transfer
 COPY = 1
 HARDLINK = 2
 
 
-def _has_arnold():
-    """Return whether the arnold package is available and can be imported."""
-    try:
-        import arnold  # noqa: F401
-        return True
-    except (ImportError, ModuleNotFoundError):
-        return False
+@attr.s
+class TextureResult:
+    """The resulting texture of a processed file for a resource"""
+    # Path to the file
+    path = attr.ib()
+    # Colorspace of the resulting texture. This might not be the input
+    # colorspace of the texture if a TextureProcessor has processed the file.
+    colorspace = attr.ib()
+    # Hash generated for the texture using openpype.lib.source_hash
+    file_hash = attr.ib()
+    # The transfer mode, e.g. COPY or HARDLINK
+    transfer_mode = attr.ib()
 
 
 def find_paths_by_hash(texture_hash):

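attr.s generates __init__ and __repr__ for TextureResult from its attr.ib() fields, so call sites assemble results by keyword; the values below are illustrative only:

    result = TextureResult(
        path="/staging/resources/diffuse.tx",  # hypothetical path
        colorspace="ACEScg",
        file_hash="abcdef123456",
        transfer_mode=COPY,
    )
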
@@ -46,61 +55,6 @@ def find_paths_by_hash(texture_hash):
     return legacy_io.distinct(key, {"type": "version"})
 
 
-def maketx(source, destination, args, logger):
-    """Make `.tx` using `maketx` with some default settings.
-
-    The settings are based on default as used in Arnold's
-    txManager in the scene.
-    This function requires the `maketx` executable to be
-    on the `PATH`.
-
-    Args:
-        source (str): Path to source file.
-        destination (str): Writing destination path.
-        args (list): Additional arguments for `maketx`.
-        logger (logging.Logger): Logger to log messages to.
-
-    Returns:
-        str: Output of `maketx` command.
-
-    """
-    from openpype.lib import get_oiio_tools_path
-
-    maketx_path = get_oiio_tools_path("maketx")
-
-    if not maketx_path:
-        print(
-            "OIIO tool not found in {}".format(maketx_path))
-        raise AssertionError("OIIO tool not found")
-
-    subprocess_args = [
-        maketx_path,
-        "-v",  # verbose
-        "-u",  # update mode
-        # unpremultiply before conversion (recommended when alpha present)
-        "--unpremult",
-        "--checknan",
-        # use oiio-optimized settings for tile-size, planarconfig, metadata
-        "--oiio",
-        "--filter", "lanczos3",
-        source
-    ]
-
-    subprocess_args.extend(args)
-    subprocess_args.extend(["-o", destination])
-
-    cmd = " ".join(subprocess_args)
-    logger.debug(cmd)
-
-    try:
-        out = run_subprocess(subprocess_args)
-    except Exception:
-        logger.error("Maketx converion failed", exc_info=True)
-        raise
-
-    return out
-
-
 @contextlib.contextmanager
 def no_workspace_dir():
     """Force maya to a fake temporary workspace directory.

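The removed helper wrapped what is essentially a single CLI call; for reference, an equivalent direct invocation, assuming maketx is on PATH and with placeholder paths:

    import subprocess

    subprocess.check_call([
        "maketx", "-v", "-u",
        "--unpremult", "--checknan",
        "--oiio", "--filter", "lanczos3",
        "/path/to/source.png",
        "-o", "/path/to/source.tx",
    ])
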
@ -133,6 +87,303 @@ def no_workspace_dir():
|
|||
os.rmdir(fake_workspace_dir)
|
||||
|
||||
|
||||
@six.add_metaclass(ABCMeta)
|
||||
class TextureProcessor:
|
||||
|
||||
extension = None
|
||||
|
||||
def __init__(self, log=None):
|
||||
if log is None:
|
||||
log = logging.getLogger(self.__class__.__name__)
|
||||
self.log = log
|
||||
|
||||
def apply_settings(self, system_settings, project_settings):
|
||||
"""Apply OpenPype system/project settings to the TextureProcessor
|
||||
|
||||
Args:
|
||||
system_settings (dict): OpenPype system settings
|
||||
project_settings (dict): OpenPype project settings
|
||||
|
||||
Returns:
|
||||
None
|
||||
|
||||
"""
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def process(self,
|
||||
source,
|
||||
colorspace,
|
||||
color_management,
|
||||
staging_dir):
|
||||
"""Process the `source` texture.
|
||||
|
||||
Must be implemented on inherited class.
|
||||
|
||||
This must always return a TextureResult even when it does not generate
|
||||
a texture. If it doesn't generate a texture then it should return a
|
||||
TextureResult using the input path and colorspace.
|
||||
|
||||
Args:
|
||||
source (str): Path to source file.
|
||||
colorspace (str): Colorspace of the source file.
|
||||
color_management (dict): Maya Color management data from
|
||||
`lib.get_color_management_preferences`
|
||||
staging_dir (str): Output directory to write to.
|
||||
|
||||
Returns:
|
||||
TextureResult: The resulting texture information.
|
||||
|
||||
"""
|
||||
pass
|
||||
|
||||
def __repr__(self):
|
||||
# Log instance as class name
|
||||
return self.__class__.__name__
|
||||
|
||||
|
||||
class MakeRSTexBin(TextureProcessor):
|
||||
"""Make `.rstexbin` using `redshiftTextureProcessor`"""
|
||||
|
||||
extension = ".rstexbin"
|
||||
|
||||
def process(self,
|
||||
source,
|
||||
colorspace,
|
||||
color_management,
|
||||
staging_dir):
|
||||
|
||||
texture_processor_path = self.get_redshift_tool(
|
||||
"redshiftTextureProcessor"
|
||||
)
|
||||
if not texture_processor_path:
|
||||
raise KnownPublishError("Must have Redshift available.")
|
||||
|
||||
subprocess_args = [
|
||||
texture_processor_path,
|
||||
source
|
||||
]
|
||||
|
||||
hash_args = ["rstex"]
|
||||
texture_hash = source_hash(source, *hash_args)
|
||||
|
||||
# Redshift stores the output texture next to the input but with
|
||||
# the extension replaced to `.rstexbin`
|
||||
basename, ext = os.path.splitext(source)
|
||||
destination = "{}{}".format(basename, self.extension)
|
||||
|
||||
self.log.debug(" ".join(subprocess_args))
|
||||
try:
|
||||
run_subprocess(subprocess_args)
|
||||
except Exception:
|
||||
self.log.error("Texture .rstexbin conversion failed",
|
||||
exc_info=True)
|
||||
raise
|
||||
|
||||
return TextureResult(
|
||||
path=destination,
|
||||
file_hash=texture_hash,
|
||||
colorspace=colorspace,
|
||||
transfer_mode=COPY
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def get_redshift_tool(tool_name):
|
||||
"""Path to redshift texture processor.
|
||||
|
||||
On Windows it adds .exe extension if missing from tool argument.
|
||||
|
||||
Args:
|
||||
tool_name (string): Tool name.
|
||||
|
||||
Returns:
|
||||
str: Full path to redshift texture processor executable.
|
||||
"""
|
||||
if "REDSHIFT_COREDATAPATH" not in os.environ:
|
||||
raise RuntimeError("Must have Redshift available.")
|
||||
|
||||
redshift_tool_path = os.path.join(
|
||||
os.environ["REDSHIFT_COREDATAPATH"],
|
||||
"bin",
|
||||
tool_name
|
||||
)
|
||||
|
||||
return find_executable(redshift_tool_path)
|
||||
|
||||
|
||||
class MakeTX(TextureProcessor):
|
||||
"""Make `.tx` using `maketx` with some default settings.
|
||||
|
||||
Some hardcoded arguments passed to `maketx` are based on the defaults used
|
||||
in Arnold's txManager tool.
|
||||
|
||||
"""
|
||||
|
||||
extension = ".tx"
|
||||
|
||||
def __init__(self, log=None):
|
||||
super(MakeTX, self).__init__(log=log)
|
||||
self.extra_args = []
|
||||
|
||||
def apply_settings(self, system_settings, project_settings):
|
||||
# Allow extra maketx arguments from project settings
|
||||
args_settings = (
|
||||
project_settings["maya"]["publish"]
|
||||
.get("ExtractLook", {}).get("maketx_arguments", [])
|
||||
)
|
||||
extra_args = []
|
||||
for arg_data in args_settings:
|
||||
argument = arg_data["argument"]
|
||||
parameters = arg_data["parameters"]
|
||||
if not argument:
|
||||
self.log.debug("Ignoring empty parameter from "
|
||||
"`maketx_arguments` setting..")
|
||||
continue
|
||||
|
||||
extra_args.append(argument)
|
||||
extra_args.extend(parameters)
|
||||
|
||||
self.extra_args = extra_args
|
||||
|
||||
def process(self,
|
||||
source,
|
||||
colorspace,
|
||||
color_management,
|
||||
staging_dir):
|
||||
"""Process the texture.
|
||||
|
||||
This function requires the `maketx` executable to be available in an
|
||||
OpenImageIO toolset detectable by OpenPype.
|
||||
|
||||
Args:
|
||||
source (str): Path to source file.
|
||||
colorspace (str): Colorspace of the source file.
|
||||
color_management (dict): Maya Color management data from
|
||||
`lib.get_color_management_preferences`
|
||||
staging_dir (str): Output directory to write to.
|
||||
|
||||
Returns:
|
||||
TextureResult: The resulting texture information.
|
||||
|
||||
"""
|
||||
|
||||
maketx_path = get_oiio_tools_path("maketx")
|
||||
|
||||
if not maketx_path:
|
||||
raise AssertionError(
|
||||
"OIIO 'maketx' tool not found. Result: {}".format(maketx_path)
|
||||
)
|
||||
|
||||
# Define .tx filepath in staging if source file is not .tx
|
||||
fname, ext = os.path.splitext(os.path.basename(source))
|
||||
if ext == ".tx":
|
||||
# Do nothing if the source file is already a .tx file.
|
||||
            return TextureResult(
                path=source,
                file_hash=None,  # todo: unknown texture hash?
                colorspace=colorspace,
                transfer_mode=COPY
            )

        # Hardcoded default arguments for maketx conversion based on Arnold's
        # txManager in Maya
        args = [
            # unpremultiply before conversion (recommended when alpha present)
            "--unpremult",
            # use oiio-optimized settings for tile-size, planarconfig, metadata
            "--oiio",
            "--filter", "lanczos3",
        ]
        if color_management["enabled"]:
            config_path = color_management["config"]
            if not os.path.exists(config_path):
                raise RuntimeError("OCIO config not found at: "
                                   "{}".format(config_path))

            render_colorspace = color_management["rendering_space"]

            self.log.info("tx: converting colorspace {0} "
                          "-> {1}".format(colorspace,
                                          render_colorspace))
            args.extend(["--colorconvert", colorspace, render_colorspace])
            args.extend(["--colorconfig", config_path])

        else:
            # Maya Color management is disabled. We cannot rely on an OCIO
            self.log.debug("tx: Maya color management is disabled. No color "
                           "conversion will be applied to .tx conversion for: "
                           "{}".format(source))
            # Assume linear
            render_colorspace = "linear"

        # Note: The texture hash is only reliable if we include any potential
        # conversion arguments provide to e.g. `maketx`
        hash_args = ["maketx"] + args + self.extra_args
        texture_hash = source_hash(source, *hash_args)

        # Ensure folder exists
        resources_dir = os.path.join(staging_dir, "resources")
        if not os.path.exists(resources_dir):
            os.makedirs(resources_dir)

        self.log.info("Generating .tx file for %s .." % source)

        subprocess_args = [
            maketx_path,
            "-v",  # verbose
            "-u",  # update mode
            # --checknan doesn't influence the output file but aborts the
            # conversion if it finds any. So we can avoid it for the file hash
            "--checknan",
            source
        ]

        subprocess_args.extend(args)
        if self.extra_args:
            subprocess_args.extend(self.extra_args)

        # Add source hash attribute after other arguments for log readability
        # Note: argument is excluded from the hash since it is the hash itself
        subprocess_args.extend([
            "--sattrib",
            "sourceHash",
            texture_hash
        ])

        destination = os.path.join(resources_dir, fname + ".tx")
        subprocess_args.extend(["-o", destination])

        # We want to make sure we are explicit about what OCIO config gets
        # used. So when we supply no --colorconfig flag that no fallback to
        # an OCIO env var occurs.
        env = os.environ.copy()
        env.pop("OCIO", None)

        self.log.debug(" ".join(subprocess_args))
        try:
            run_subprocess(subprocess_args, env=env)
        except Exception:
            self.log.error("Texture maketx conversion failed",
                           exc_info=True)
            raise

        return TextureResult(
            path=destination,
            file_hash=texture_hash,
            colorspace=render_colorspace,
            transfer_mode=COPY
        )

    @staticmethod
    def _has_arnold():
        """Return whether the arnold package is available and importable."""
        try:
            import arnold  # noqa: F401
            return True
        except (ImportError, ModuleNotFoundError):
            return False

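# A minimal sketch of the hashing scheme above, assuming `source_hash`
# folds the extra arguments into the file digest (the texture path is
# hypothetical):
#
#     hash_args = ["maketx", "--unpremult", "--oiio", "--filter", "lanczos3"]
#     texture_hash = source_hash("/textures/wood.exr", *hash_args)
#
# Because the conversion flags are part of the hash, a .tx published earlier
# with different maketx settings yields a different hash and is not reused.
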
class ExtractLook(publish.Extractor):
    """Extract Look (Maya Scene + JSON)

@@ -149,22 +400,6 @@ class ExtractLook(publish.Extractor):
    scene_type = "ma"
    look_data_type = "json"

    @staticmethod
    def get_renderer_name():
        """Get renderer name from Maya.

        Returns:
            str: Renderer name.

        """
        renderer = cmds.getAttr(
            "defaultRenderGlobals.currentRenderer"
        ).lower()
        # handle various renderman names
        if renderer.startswith("renderman"):
            renderer = "renderman"
        return renderer

    def get_maya_scene_type(self, instance):
        """Get Maya scene type from settings.

@@ -204,16 +439,12 @@ class ExtractLook(publish.Extractor):
        dir_path = self.staging_dir(instance)
        maya_fname = "{0}.{1}".format(instance.name, self.scene_type)
        json_fname = "{0}.{1}".format(instance.name, self.look_data_type)

        # Make texture dump folder
        maya_path = os.path.join(dir_path, maya_fname)
        json_path = os.path.join(dir_path, json_fname)

        self.log.info("Performing extraction..")

        # Remove all members of the sets so they are not included in the
        # exported file by accident
        self.log.info("Extract sets (%s) ..." % _scene_type)
        self.log.info("Processing sets..")
        lookdata = instance.data["lookData"]
        relationships = lookdata["relationships"]
        sets = list(relationships.keys())

@@ -221,13 +452,36 @@ class ExtractLook(publish.Extractor):
            self.log.info("No sets found")
            return

        results = self.process_resources(instance, staging_dir=dir_path)
        # Specify texture processing executables to activate
        # TODO: Load these more dynamically once we support more processors
        processors = []
        context = instance.context
        for key, Processor in {
            # Instance data key to texture processor mapping
            "maketx": MakeTX,
            "rstex": MakeRSTexBin
        }.items():
            if instance.data.get(key, False):
                processor = Processor()
                processor.apply_settings(context.data["system_settings"],
                                         context.data["project_settings"])
                processors.append(processor)

        if processors:
            self.log.debug("Collected texture processors: "
                           "{}".format(processors))

        self.log.debug("Processing resources..")
        results = self.process_resources(instance,
                                         staging_dir=dir_path,
                                         processors=processors)
        transfers = results["fileTransfers"]
        hardlinks = results["fileHardlinks"]
        hashes = results["fileHashes"]
        remap = results["attrRemap"]

        # Extract in correct render layer
        self.log.info("Extracting look maya scene file: {}".format(maya_path))
        layer = instance.data.get("renderlayer", "defaultRenderLayer")
        with lib.renderlayer(layer):
            # TODO: Ensure membership edits don't become renderlayer overrides
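# A minimal sketch of the flag-to-processor dispatch above; the mapping and
# the `apply_settings` signature are as used in this file, while the bare
# settings variables are assumed to come from `instance.context.data`:
#
#     PROCESSOR_BY_KEY = {"maketx": MakeTX, "rstex": MakeRSTexBin}
#     processors = [cls() for key, cls in PROCESSOR_BY_KEY.items()
#                   if instance.data.get(key, False)]
#     for processor in processors:
#         processor.apply_settings(system_settings, project_settings)
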
@@ -235,7 +489,7 @@ class ExtractLook(publish.Extractor):
            # To avoid Maya trying to automatically remap the file
            # textures relative to the `workspace -directory` we force
            # it to a fake temporary workspace. This fixes textures
            # getting incorrectly remapped. (LKD-17, PLN-101)
            # getting incorrectly remapped.
            with no_workspace_dir():
                with lib.attribute_values(remap):
                    with lib.maintained_selection():

@@ -299,40 +553,38 @@ class ExtractLook(publish.Extractor):
        # Source hash for the textures
        instance.data["sourceHashes"] = hashes

        """
        self.log.info("Returning colorspaces to their original values ...")
        for attr, value in remap.items():
            self.log.info("  - {}: {}".format(attr, value))
            cmds.setAttr(attr, value, type="string")
        """
        self.log.info("Extracted instance '%s' to: %s" % (instance.name,
                                                          maya_path))

    def process_resources(self, instance, staging_dir):
    def _set_resource_result_colorspace(self, resource, colorspace):
        """Update resource resulting colorspace after texture processing"""
        if "result_color_space" in resource:
            if resource["result_color_space"] == colorspace:
                return

            self.log.warning(
                "Resource already has a resulting colorspace but is now "
                "being overridden to a new one: {} -> {}".format(
                    resource["result_color_space"], colorspace
                )
            )
        resource["result_color_space"] = colorspace

    def process_resources(self, instance, staging_dir, processors):
        """Process all resources in the instance.

        It is assumed that all resources are nodes using file textures.

        Extract the textures to transfer, possibly convert with maketx and
        remap the node paths to the destination path. Note that a source
        might be included more than once amongst the resources as they could
        be the input file to multiple nodes.

        """

        # Extract the textures to transfer, possibly convert with maketx and
        # remap the node paths to the destination path. Note that a source
        # might be included more than once amongst the resources as they could
        # be the input file to multiple nodes.
        resources = instance.data["resources"]
        do_maketx = instance.data.get("maketx", False)
        color_management = lib.get_color_management_preferences()

        # Collect all unique files used in the resources
        files_metadata = {}
        for resource in resources:
            # Preserve color space values (force value after filepath change)
            # This will also trigger in the same order at end of context to
            # ensure after context it's still the original value.
            color_space = resource.get("color_space")

            for f in resource["files"]:
                files_metadata[os.path.normpath(f)] = {
                    "color_space": color_space}

        # Process the resource files
        transfers = []
        hardlinks = []
        hashes = {}
        # Temporary fix to NOT create hardlinks on windows machines
        if platform.system().lower() == "windows":
            self.log.info(

@@ -342,95 +594,114 @@ class ExtractLook(publish.Extractor):
        else:
            force_copy = instance.data.get("forceCopy", False)

        for filepath in files_metadata:
        destinations_cache = {}

            linearize = False
            # if OCIO color management enabled
            # it won't take the condition of the files_metadata
        def get_resource_destination_cached(path):
            """Get resource destination with cached result per filepath"""
            if path not in destinations_cache:
                destination = self.get_resource_destination(
                    path, instance.data["resourcesDir"], processors)
                destinations_cache[path] = destination
            return destinations_cache[path]

            ocio_maya = cmds.colorManagementPrefs(q=True,
                                                  cmConfigFileEnabled=True,
                                                  cmEnabled=True)

            if do_maketx and not ocio_maya:
                if files_metadata[filepath]["color_space"].lower() == "srgb":  # noqa: E501
                    linearize = True
                    # set its file node to 'raw' as tx will be linearized
                    files_metadata[filepath]["color_space"] = "Raw"

            # if do_maketx:
            #     color_space = "Raw"

            source, mode, texture_hash = self._process_texture(
                filepath,
                resource,
                do_maketx,
                staging=staging_dir,
                linearize=linearize,
                force=force_copy
            )
            destination = self.resource_destination(instance,
                                                    source,
                                                    do_maketx)

            # Force copy is specified.
            if force_copy:
                mode = COPY

            if mode == COPY:
                transfers.append((source, destination))
                self.log.info('file will be copied {} -> {}'.format(
                    source, destination))
            elif mode == HARDLINK:
                hardlinks.append((source, destination))
                self.log.info('file will be hardlinked {} -> {}'.format(
                    source, destination))

            # Store the hashes from hash to destination to include in the
            # database
            hashes[texture_hash] = destination

        # Remap the resources to the destination path (change node attributes)
        destinations = {}
        remap = OrderedDict()  # needs to be ordered, see color space values
        # Process all resource's individual files
        processed_files = {}
        transfers = []
        hardlinks = []
        hashes = {}
        remap = OrderedDict()
        for resource in resources:
            source = os.path.normpath(resource["source"])
            if source not in destinations:
                # Cache destination as source resource might be included
                # multiple times
                destinations[source] = self.resource_destination(
                    instance, source, do_maketx
            colorspace = resource["color_space"]

            for filepath in resource["files"]:
                filepath = os.path.normpath(filepath)

                if filepath in processed_files:
                    # The file was already processed, likely due to usage by
                    # another resource in the scene. We confirm here it
                    # didn't do color spaces different than the current
                    # resource.
                    processed_file = processed_files[filepath]
                    self.log.debug(
                        "File was already processed. Likely used by another "
                        "resource too: {}".format(filepath)
                    )

                    if colorspace != processed_file["color_space"]:
                        self.log.warning(
                            "File '{}' was already processed using colorspace "
                            "'{}' instead of the current resource's "
                            "colorspace '{}'. The already processed texture "
                            "result's colorspace '{}' will be used."
                            "".format(filepath,
                                      colorspace,
                                      processed_file["color_space"],
                                      processed_file["result_color_space"]))

                    self._set_resource_result_colorspace(
                        resource,
                        colorspace=processed_file["result_color_space"]
                    )
                    continue

                texture_result = self._process_texture(
                    filepath,
                    processors=processors,
                    staging_dir=staging_dir,
                    force_copy=force_copy,
                    color_management=color_management,
                    colorspace=colorspace
                )

                # Set the resulting color space on the resource
                self._set_resource_result_colorspace(
                    resource, colorspace=texture_result.colorspace
                )

                processed_files[filepath] = {
                    "color_space": colorspace,
                    "result_color_space": texture_result.colorspace,
                }

                source = texture_result.path
                destination = get_resource_destination_cached(source)
                if force_copy or texture_result.transfer_mode == COPY:
                    transfers.append((source, destination))
                    self.log.info('file will be copied {} -> {}'.format(
                        source, destination))
                elif texture_result.transfer_mode == HARDLINK:
                    hardlinks.append((source, destination))
                    self.log.info('file will be hardlinked {} -> {}'.format(
                        source, destination))

                # Store the hashes from hash to destination to include in the
                # database
                hashes[texture_result.file_hash] = destination

            # Set up remapping attributes for the node during the publish
            # The order of these can be important if one attribute directly
            # affects another, e.g. we set colorspace after filepath because
            # maya sometimes tries to guess the colorspace when changing
            # filepaths (which is avoidable, but we don't want to have those
            # attributes changed in the resulting publish)
            # Remap filepath to publish destination
            # TODO It would be much better if we could use the destination path
            # from the actual processed texture results, but since the
            # attribute will need to preserve tokens like <f>, <udim> etc for
            # now we will define the output path from the attribute value
            # including the tokens to persist them.
            filepath_attr = resource["attribute"]
            remap[filepath_attr] = get_resource_destination_cached(
                resource["source"]
            )

            # Preserve color space values (force value after filepath change)
            # This will also trigger in the same order at end of context to
            # ensure after context it's still the original value.
            color_space_attr = resource["node"] + ".colorSpace"
            try:
                color_space = cmds.getAttr(color_space_attr)
            except ValueError:
                # node doesn't have color space attribute
                color_space = "Raw"
            else:
                # get the resolved files
                metadata = files_metadata.get(source)
                # if the files are unresolved from `source`
                # assume color space from the first file of
                # the resource
                if not metadata:
                    first_file = next(iter(resource.get(
                        "files", [])), None)
                    if not first_file:
                        continue
                    first_filepath = os.path.normpath(first_file)
                    metadata = files_metadata[first_filepath]
                if metadata["color_space"] == "Raw":
                    # set color space to raw if we linearized it
                    color_space = "Raw"
            # Remap file node filename to destination
            remap[color_space_attr] = color_space
            attr = resource["attribute"]
            remap[attr] = destinations[source]
            node = resource["node"]
            if cmds.attributeQuery("colorSpace", node=node, exists=True):
                color_space_attr = "{}.colorSpace".format(node)
                remap[color_space_attr] = resource["result_color_space"]

        self.log.info("Finished remapping destinations ...")
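# A minimal sketch of the per-file dedup above: results are cached by
# normalized path so a texture shared by several resources is only
# processed once (`process_one` stands in for the `_process_texture`
# call and is hypothetical):
#
#     processed_files = {}
#     for filepath in map(os.path.normpath, resource["files"]):
#         if filepath in processed_files:
#             continue  # reuse first result; warn on colorspace mismatch
#         processed_files[filepath] = process_one(filepath)
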
@@ -441,134 +712,131 @@ class ExtractLook(publish.Extractor):
            "attrRemap": remap,
        }

    def resource_destination(self, instance, filepath, do_maketx):
    def get_resource_destination(self, filepath, resources_dir, processors):
        """Get resource destination path.

        This is utility function to change path if resource file name is
        changed by some external tool like `maketx`.

        Args:
            instance: Current Instance.
            filepath (str): Resource path
            do_maketx (bool): Flag if resource is processed by `maketx`.
            filepath (str): Resource source path
            resources_dir (str): Destination dir for resources in publish.
            processors (list): Texture processors converting resource.

        Returns:
            str: Path to resource file

        """
        resources_dir = instance.data["resourcesDir"]

        # Compute destination location
        basename, ext = os.path.splitext(os.path.basename(filepath))

        # If `maketx` then the texture will always end with .tx
        if do_maketx:
            ext = ".tx"
        # Get extension from the last processor
        for processor in reversed(processors):
            processor_ext = processor.extension
            if processor_ext and ext != processor_ext:
                self.log.debug("Processor {} overrides extension to '{}' "
                               "for path: {}".format(processor,
                                                     processor_ext,
                                                     filepath))
                ext = processor_ext
            break

        return os.path.join(
            resources_dir, basename + ext
        )

    def _process_texture(self, filepath, resource,
                         do_maketx, staging, linearize, force):
        """Process a single texture file on disk for publishing.
        This will:
        1. Check whether it's already published, if so it will do hardlink
        2. If not published and maketx is enabled, generate a new .tx file.
        3. Compute the destination path for the source file.
        Args:
            filepath (str): The source file path to process.
            do_maketx (bool): Whether to produce a .tx file
        Returns:
        """

        fname, ext = os.path.splitext(os.path.basename(filepath))

        args = []
        if do_maketx:
            args.append("maketx")
        texture_hash = source_hash(filepath, *args)
    def _get_existing_hashed_texture(self, texture_hash):
        """Return the first found filepath from a texture hash"""

        # If source has been published before with the same settings,
        # then don't reprocess but hardlink from the original
        existing = find_paths_by_hash(texture_hash)
        if existing and not force:
            self.log.info("Found hash in database, preparing hardlink..")
        if existing:
            source = next((p for p in existing if os.path.exists(p)), None)
            if source:
                return source, HARDLINK, texture_hash
                return source
            else:
                self.log.warning(
                    ("Paths not found on disk, "
                     "skipping hardlink: %s") % (existing,)
                    "Paths not found on disk, "
                    "skipping hardlink: {}".format(existing)
                )

        if do_maketx and ext != ".tx":
            # Produce .tx file in staging if source file is not .tx
            converted = os.path.join(staging, "resources", fname + ".tx")
            additional_args = [
                "--sattrib",
                "sourceHash",
                texture_hash
            ]
            if linearize:
                if cmds.colorManagementPrefs(query=True, cmEnabled=True):
                    render_colorspace = cmds.colorManagementPrefs(query=True,
                                                                  renderingSpaceName=True)  # noqa
                    config_path = cmds.colorManagementPrefs(query=True,
                                                            configFilePath=True)  # noqa
                    if not os.path.exists(config_path):
                        raise RuntimeError("No OCIO config path found!")
    def _process_texture(self,
                         filepath,
                         processors,
                         staging_dir,
                         force_copy,
                         color_management,
                         colorspace):
        """Process a single texture file on disk for publishing.

                    color_space_attr = resource["node"] + ".colorSpace"
                    try:
                        color_space = cmds.getAttr(color_space_attr)
                    except ValueError:
                        # node doesn't have color space attribute
                        if _has_arnold():
                            img_info = image_info(filepath)
                            color_space = guess_colorspace(img_info)
                        else:
                            color_space = "Raw"
                    self.log.info("tx: converting {0} -> {1}".format(color_space, render_colorspace))  # noqa
        This will:
            1. Check whether it's already published, if so it will do hardlink
               (if the texture hash is found and force copy is not enabled)
            2. It will process the texture using the supplied texture
               processors like MakeTX and MakeRSTexBin if enabled.
            3. Compute the destination path for the source file.

                    additional_args.extend(["--colorconvert",
                                            color_space,
                                            render_colorspace])
                else:
        Args:
            filepath (str): The source file path to process.
            processors (list): List of TextureProcessor processing the texture
            staging_dir (str): The staging directory to write to.
            force_copy (bool): Whether to force a copy even if a file hash
                might have existed already in the project, otherwise
                hardlinking the existing file is allowed.
            color_management (dict): Maya's Color Management settings from
                `lib.get_color_management_preferences`
            colorspace (str): The source colorspace of the resources this
                texture belongs to.

                    if _has_arnold():
                        img_info = image_info(filepath)
                        color_space = guess_colorspace(img_info)
                        if color_space == "sRGB":
                            self.log.info("tx: converting sRGB -> linear")
                            additional_args.extend(["--colorconvert",
                                                    "sRGB",
                                                    "Raw"])
                        else:
                            self.log.info("tx: texture's colorspace "
                                          "is already linear")
                    else:
                        self.log.warning("cannot guess the colorspace"
                                         "color conversion won't be available!")  # noqa
        Returns:
            TextureResult: The texture result information.
        """


            additional_args.extend(["--colorconfig", config_path])
            # Ensure folder exists
            if not os.path.exists(os.path.dirname(converted)):
                os.makedirs(os.path.dirname(converted))

            self.log.info("Generating .tx file for %s .." % filepath)
            maketx(
                filepath,
                converted,
                additional_args,
                self.log
        if len(processors) > 1:
            raise KnownPublishError(
                "More than one texture processor not supported. "
                "Current processors enabled: {}".format(processors)
            )

            return converted, COPY, texture_hash
        for processor in processors:
            self.log.debug("Processing texture {} with processor {}".format(
                filepath, processor
            ))

        return filepath, COPY, texture_hash
            processed_result = processor.process(filepath,
                                                 colorspace,
                                                 color_management,
                                                 staging_dir)
            if not processed_result:
                raise RuntimeError("Texture Processor {} returned "
                                   "no result.".format(processor))
            self.log.info("Generated processed "
                          "texture: {}".format(processed_result.path))

            # TODO: Currently all processors force copy instead of allowing
            # hardlinks using source hashes. This should be refactored
            return processed_result

        # No texture processing for this file
        texture_hash = source_hash(filepath)
        if not force_copy:
            existing = self._get_existing_hashed_texture(filepath)
            if existing:
                self.log.info("Found hash in database, preparing hardlink..")
                return TextureResult(
                    path=filepath,
                    file_hash=texture_hash,
                    colorspace=colorspace,
                    transfer_mode=HARDLINK
                )

        return TextureResult(
            path=filepath,
            file_hash=texture_hash,
            colorspace=colorspace,
            transfer_mode=COPY
        )

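# A minimal sketch of the unprocessed-texture path above: the file is
# hashed, and an earlier publish with the same hash that still exists on
# disk is hardlinked rather than copied (names as used in this file; the
# intended lookup key is the hash):
#
#     texture_hash = source_hash(filepath)
#     existing = self._get_existing_hashed_texture(texture_hash)
#     mode = HARDLINK if (existing and not force_copy) else COPY
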
class ExtractModelRenderSets(ExtractLook):

@@ -102,7 +102,7 @@ class ExtractMultiverseUsdOverride(publish.Extractor):
                          long=True)
        self.log.info("Collected object {}".format(members))

        # TODO: Deal with asset, composition, overide with options.
        # TODO: Deal with asset, composition, override with options.
        import multiverse

        time_opts = None

@@ -241,7 +241,6 @@ class ExtractPlayblast(publish.Extractor):
            "frameStart": start,
            "frameEnd": end,
            "fps": fps,
            "preview": True,
            "tags": tags,
            "camera_name": camera_node_name
        }

@@ -30,7 +30,7 @@ class ResetXgenAttributes(pyblish.api.InstancePlugin):
                cmds.setAttr(palette + ".xgExportAsDelta", True)

        # Need to save the scene, cause the attribute changes above does not
        # mark the scene as modified so user can exit without commiting the
        # mark the scene as modified so user can exit without committing the
        # changes.
        self.log.info("Saving changes.")
        cmds.file(save=True)

@@ -8,7 +8,7 @@ from openpype.pipeline.publish import ValidateContentsOrder
class ValidateCameraAttributes(pyblish.api.InstancePlugin):
    """Validates Camera has no invalid attribute keys or values.

    The Alembic file format does not a specifc subset of attributes as such
    The Alembic file format does not a specific subset of attributes as such
    we validate that no values are set there as the output will not match the
    current scene. For example the preScale, film offsets and film roll.

@@ -1,26 +0,0 @@
from maya import cmds

import pyblish.api
from openpype.pipeline.publish import ValidateContentsOrder
from openpype.pipeline import PublishValidationError


class ValidateMayaColorSpace(pyblish.api.InstancePlugin):
    """
    Check if the OCIO Color Management and maketx options
    enabled at the same time
    """

    order = ValidateContentsOrder
    families = ['look']
    hosts = ['maya']
    label = 'Color Management with maketx'

    def process(self, instance):
        ocio_maya = cmds.colorManagementPrefs(q=True,
                                              cmConfigFileEnabled=True,
                                              cmEnabled=True)
        maketx = instance.data["maketx"]

        if ocio_maya and maketx:
            raise PublishValidationError("Maya is color managed and maketx option is on. OpenPype doesn't support this combination yet.")  # noqa

@@ -1,6 +1,7 @@
import pyblish.api
import openpype.hosts.maya.api.action
from openpype.pipeline.publish import ValidateContentsOrder
from maya import cmds  # noqa


class ValidateLookContents(pyblish.api.InstancePlugin):

@@ -85,6 +86,7 @@ class ValidateLookContents(pyblish.api.InstancePlugin):
            invalid.add(instance.name)

        return list(invalid)

    @classmethod
    def validate_looks(cls, instance):

@@ -112,3 +114,23 @@ class ValidateLookContents(pyblish.api.InstancePlugin):
                invalid.append(node)

        return invalid

    @classmethod
    def validate_renderer(cls, instance):
        # TODO: Rewrite this to be more specific and configurable
        renderer = cmds.getAttr(
            'defaultRenderGlobals.currentRenderer').lower()
        do_maketx = instance.data.get("maketx", False)
        do_rstex = instance.data.get("rstex", False)
        processors = []

        if do_maketx:
            processors.append('arnold')
        if do_rstex:
            processors.append('redshift')

        for processor in processors:
            if processor == renderer:
                continue
            else:
                cls.log.error("Converted texture does not match current renderer.")  # noqa

@@ -34,7 +34,7 @@ class ValidateMayaUnits(pyblish.api.ContextPlugin):
        fps = context.data.get('fps')

        # TODO repace query with using 'context.data["assetEntity"]'
        # TODO replace query with using 'context.data["assetEntity"]'
        asset_doc = get_current_project_asset()
        asset_fps = mayalib.convert_to_maya_fps(asset_doc["data"]["fps"])

@@ -86,7 +86,7 @@ class ValidateMayaUnits(pyblish.api.ContextPlugin):
        cls.log.debug(current_linear)

        cls.log.info("Setting time unit to match project")
        # TODO repace query with using 'context.data["assetEntity"]'
        # TODO replace query with using 'context.data["assetEntity"]'
        asset_doc = get_current_project_asset()
        asset_fps = asset_doc["data"]["fps"]
        mayalib.set_scene_fps(asset_fps)

@@ -42,7 +42,8 @@ class ValidateMvLookContents(pyblish.api.InstancePlugin):
        resources = instance.data.get("resources", [])
        for resource in resources:
            files = resource["files"]
            self.log.debug("Resouce '{}', files: [{}]".format(resource, files))
            self.log.debug(
                "Resource '{}', files: [{}]".format(resource, files))
            node = resource["node"]
            if len(files) == 0:
                self.log.error("File node '{}' uses no or non-existing "

@@ -37,8 +37,8 @@ class ValidateRenderLayerAOVs(pyblish.api.InstancePlugin):

        project_name = legacy_io.active_project()
        asset_doc = instance.data["assetEntity"]
        render_passses = instance.data.get("renderPasses", [])
        for render_pass in render_passses:
        render_passes = instance.data.get("renderPasses", [])
        for render_pass in render_passes:
            is_valid = self.validate_subset_registered(
                project_name, asset_doc, render_pass
            )

@@ -21,7 +21,7 @@ class ValidateTransformNamingSuffix(pyblish.api.InstancePlugin):
        - nurbsSurface: _NRB
        - locator: _LOC
        - null/group: _GRP
    Suffices can also be overriden by project settings.
    Suffices can also be overridden by project settings.

    .. warning::
        This grabs the first child shape as a reference and doesn't use the

@@ -148,7 +148,7 @@ def get_main_window():
def set_node_data(node, knobname, data):
    """Write data to node invisible knob

    Will create new in case it doesnt exists
    Will create new in case it doesn't exists
    or update the one already created.

    Args:

@@ -506,7 +506,7 @@ def get_avalon_knob_data(node, prefix="avalon:", create=True):
    try:
        # check if data available on the node
        test = node[AVALON_DATA_GROUP].value()
        log.debug("Only testing if data avalable: `{}`".format(test))
        log.debug("Only testing if data available: `{}`".format(test))
    except NameError as e:
        # if it doesn't then create it
        log.debug("Creating avalon knob: `{}`".format(e))

@@ -908,11 +908,11 @@ def get_view_process_node():
            continue

    if not ipn_node:
        # in case a Viewer node is transfered from
        # in case a Viewer node is transferred from
        # different workfile with old values
        raise NameError((
            "Input process node name '{}' set in "
            "Viewer '{}' is does't exists in nodes"
            "Viewer '{}' is doesn't exists in nodes"
        ).format(ipn, v_.name()))

    ipn_node.setSelected(True)

@@ -1662,7 +1662,7 @@ def create_write_node_legacy(
    tile_color = _data.get("tile_color", "0xff0000ff")
    GN["tile_color"].setValue(tile_color)

    # overrie knob values from settings
    # override knob values from settings
    for knob in knob_overrides:
        knob_type = knob["type"]
        knob_name = knob["name"]

@@ -2117,7 +2117,7 @@ class WorkfileSettings(object):
                write_node[knob["name"]].setValue(value)
            except TypeError:
                log.warning(
                    "Legacy workflow didnt work, switching to current")
                    "Legacy workflow didn't work, switching to current")

                set_node_knobs_from_settings(
                    write_node, nuke_imageio_writes["knobs"])

@@ -2543,7 +2543,7 @@ def reset_selection():


def select_nodes(nodes):
    """Selects all inputed nodes
    """Selects all inputted nodes

    Arguments:
        nodes (list): nuke nodes to be selected

@@ -2560,7 +2560,7 @@ def launch_workfiles_app():
    Trigger to show workfiles tool on application launch. Can be executed only
    once all other calls are ignored.

    Workfiles tool show is deffered after application initialization using
    Workfiles tool show is deferred after application initialization using
    QTimer.
    """

@@ -2581,7 +2581,7 @@ def launch_workfiles_app():
    # Show workfiles tool using timer
    # - this will be probably triggered during initialization in that case
    #   the application is not be able to show uis so it must be
    #   deffered using timer
    #   deferred using timer
    # - timer should be processed when initialization ends
    #   When applications starts to process events.
    timer = QtCore.QTimer()

@@ -594,7 +594,7 @@ class ExporterReview(object):
                Defaults to None.
            range (bool, optional): flag for adding ranges.
                Defaults to False.
            custom_tags (list[str], optional): user inputed custom tags.
            custom_tags (list[str], optional): user inputted custom tags.
                Defaults to None.
        """
        add_tags = tags or []

@@ -1110,7 +1110,7 @@ class AbstractWriteRender(OpenPypeCreator):
    def is_legacy(self):
        """Check if it needs to run legacy code

        In case where `type` key is missing in singe
        In case where `type` key is missing in single
        knob it is legacy project anatomy.

        Returns:

@@ -87,7 +87,7 @@ def bake_gizmos_recursively(in_group=None):
def colorspace_exists_on_node(node, colorspace_name):
    """ Check if colorspace exists on node

    Look through all options in the colorpsace knob, and see if we have an
    Look through all options in the colorspace knob, and see if we have an
    exact match to one of the items.

    Args:

@@ -42,7 +42,7 @@ class NukeTemplateBuilder(AbstractTemplateBuilder):
            get_template_preset implementation)

        Returns:
            bool: Wether the template was successfully imported or not
            bool: Whether the template was successfully imported or not
        """

        # TODO check if the template is already imported

@@ -222,7 +222,7 @@ class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin):
        self._imprint_siblings(placeholder)

        if placeholder.data["nb_children"] == 0:
            # save initial nodes postions and dimensions, update them
            # save initial nodes positions and dimensions, update them
            # and set inputs and outputs of loaded nodes

            self._imprint_inits()

@@ -231,7 +231,7 @@ class NukePlaceholderLoadPlugin(NukePlaceholderPlugin, PlaceholderLoadMixin):

        elif placeholder.data["siblings"]:
            # create copies of placeholder siblings for the new loaded nodes,
            # set their inputs and outpus and update all nodes positions and
            # set their inputs and outputs and update all nodes positions and
            # dimensions and siblings names

            siblings = get_nodes_by_names(placeholder.data["siblings"])

@@ -632,7 +632,7 @@ class NukePlaceholderCreatePlugin(
        self._imprint_siblings(placeholder)

        if placeholder.data["nb_children"] == 0:
            # save initial nodes postions and dimensions, update them
            # save initial nodes positions and dimensions, update them
            # and set inputs and outputs of created nodes

            self._imprint_inits()

@@ -641,7 +641,7 @@ class NukePlaceholderCreatePlugin(

        elif placeholder.data["siblings"]:
            # create copies of placeholder siblings for the new created nodes,
            # set their inputs and outpus and update all nodes positions and
            # set their inputs and outputs and update all nodes positions and
            # dimensions and siblings names

            siblings = get_nodes_by_names(placeholder.data["siblings"])

@@ -39,7 +39,7 @@ class LegacyConverted(SubsetConvertorPlugin):
                break

        if legacy_found:
            # if not item do not add legacy instance convertor
            # if not item do not add legacy instance converter
            self.add_convertor_item("Convert legacy instances")

    def convert(self):

@@ -85,4 +85,4 @@ class CreateSource(NukeCreator):
                raise NukeCreatorError("Creator error: No active selection")
            else:
                NukeCreatorError(
                    "Creator error: only supprted with active selection")
                    "Creator error: only supported with active selection")

@@ -189,7 +189,7 @@ class CollectNukeWrites(pyblish.api.InstancePlugin,
            })

            # make sure rendered sequence on farm will
            # be used for exctract review
            # be used for extract review
            if not instance.data["review"]:
                instance.data["useSequenceForReview"] = False

@@ -48,7 +48,7 @@ class SelectCenterInNodeGraph(pyblish.api.Action):

class ValidateBackdrop(pyblish.api.InstancePlugin):
    """ Validate amount of nodes on backdrop node in case user
    forgoten to add nodes above the publishing backdrop node.
    forgotten to add nodes above the publishing backdrop node.
    """

    order = pyblish.api.ValidatorOrder

@@ -199,7 +199,7 @@ function getActiveDocumentName(){
function getActiveDocumentFullName(){
    /**
     * Returns file name of active document with file path.
     * activeDocument.fullName returns path in URI (eg /c/.. insted of c:/)
     * activeDocument.fullName returns path in URI (eg /c/.. instead of c:/)
     * */
    if (documents.length == 0){
        return null;

@@ -225,7 +225,7 @@ function getSelectedLayers(doc) {
     * Returns json representation of currently selected layers.
     * Works in three steps - 1) creates new group with selected layers
     *                        2) traverses this group
     *                        3) deletes newly created group, not neede
     *                        3) deletes newly created group, not needed
     * Bit weird, but Adobe..
     **/
    if (doc == null){

@@ -284,7 +284,7 @@ function selectLayers(selectedLayers){
        existing_ids.push(existing_layers[y]["id"]);
    }
    for (var i = 0; i < selectedLayers.length; i++) {
        // a check to see if the id stil exists
        // a check to see if the id still exists
        var id = selectedLayers[i];
        if(existing_ids.toString().indexOf(id)>=0){
            layers[i] = charIDToTypeID( "Lyr " );

@@ -129,7 +129,6 @@ class ExtractReview(publish.Extractor):
            "frameStart": 1,
            "frameEnd": no_of_frames,
            "fps": fps,
            "preview": True,
            "tags": self.mov_options['tags']
        })

@@ -250,7 +250,7 @@ def create_timeline_item(media_pool_item: object,
        media_pool_item, timeline)

    assert output_timeline_item, AssertionError(
        "Track Item with name `{}` doesnt exist on the timeline: `{}`".format(
        "Track Item with name `{}` doesn't exist on the timeline: `{}`".format(
            clip_name, timeline.GetName()
        ))
    return output_timeline_item

@@ -571,7 +571,7 @@ def create_compound_clip(clip_data, name, folder):
    # Set current folder to input media_pool_folder:
    mp.SetCurrentFolder(folder)

    # check if clip doesnt exist already:
    # check if clip doesn't exist already:
    clips = folder.GetClipList()
    cct = next((c for c in clips
                if c.GetName() in name), None)

@@ -582,7 +582,7 @@ def create_compound_clip(clip_data, name, folder):
        # Create empty timeline in current folder and give name:
        cct = mp.CreateEmptyTimeline(name)

        # check if clip doesnt exist already:
        # check if clip doesn't exist already:
        clips = folder.GetClipList()
        cct = next((c for c in clips
                    if c.GetName() in name), None)

@@ -61,7 +61,7 @@ QVBoxLayout {
    background-color: #282828;
}

#Devider {
#Divider {
    border: 1px solid #090909;
    background-color: #585858;
}

@@ -715,7 +715,7 @@ class PublishClip:
            # increasing steps by index of rename iteration
            self.count_steps *= self.rename_index

        hierarchy_formating_data = dict()
        hierarchy_formatting_data = dict()
        _data = self.timeline_item_default_data.copy()
        if self.ui_inputs:
            # adding tag metadata from ui

@@ -749,13 +749,13 @@ class PublishClip:

            # fill up pythonic expresisons in hierarchy data
            for k, _v in self.hierarchy_data.items():
                hierarchy_formating_data[k] = _v["value"].format(**_data)
                hierarchy_formatting_data[k] = _v["value"].format(**_data)
        else:
            # if no gui mode then just pass default data
            hierarchy_formating_data = self.hierarchy_data
            hierarchy_formatting_data = self.hierarchy_data

        tag_hierarchy_data = self._solve_tag_hierarchy_data(
            hierarchy_formating_data
            hierarchy_formatting_data
        )

        tag_hierarchy_data.update({"heroTrack": True})

@@ -792,18 +792,17 @@ class PublishClip:
        else:
            self.tag_data.update({"reviewTrack": None})


    def _solve_tag_hierarchy_data(self, hierarchy_formating_data):
    def _solve_tag_hierarchy_data(self, hierarchy_formatting_data):
        """ Solve tag data from hierarchy data and templates. """
        # fill up clip name and hierarchy keys
        hierarchy_filled = self.hierarchy.format(**hierarchy_formating_data)
        clip_name_filled = self.clip_name.format(**hierarchy_formating_data)
        hierarchy_filled = self.hierarchy.format(**hierarchy_formatting_data)
        clip_name_filled = self.clip_name.format(**hierarchy_formatting_data)

        return {
            "newClipName": clip_name_filled,
            "hierarchy": hierarchy_filled,
            "parents": self.parents,
            "hierarchyData": hierarchy_formating_data,
            "hierarchyData": hierarchy_formatting_data,
            "subset": self.subset,
            "family": self.subset_family,
            "families": ["clip"]
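# A minimal sketch of the hierarchy template fill above, with hypothetical
# template and data values:
#
#     hierarchy = "{folder}/{sequence}"
#     data = {"folder": "shots", "sequence": "sq01"}
#     hierarchy_filled = hierarchy.format(**data)  # -> "shots/sq01"
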
@@ -83,9 +83,9 @@ class CollectBulkMovInstances(pyblish.api.InstancePlugin):

        self.log.info(f"Created new instance: {instance_name}")

        def convertor(value):
        def converter(value):
            return str(value)

        self.log.debug("Instance data: {}".format(
            json.dumps(new_instance.data, indent=4, default=convertor)
            json.dumps(new_instance.data, indent=4, default=converter)
        ))

@@ -104,7 +104,7 @@ class CollectContextDataSAPublish(pyblish.api.ContextPlugin):
            if repr.get(k):
                repr.pop(k)

        # convert files to list if it isnt
        # convert files to list if it isn't
        if not isinstance(files, (tuple, list)):
            files = [files]

@@ -174,7 +174,7 @@ class CollectContextDataSAPublish(pyblish.api.ContextPlugin):
                continue

            files = repre["files"]
            # Convert files to list if it isnt
            # Convert files to list if it isn't
            if not isinstance(files, (tuple, list)):
                files = [files]

@@ -255,7 +255,9 @@ class CollectContextDataSAPublish(pyblish.api.ContextPlugin):
            if ext.startswith("."):
                component["ext"] = ext[1:]

            if component["preview"]:
            # Remove 'preview' key from representation data
            preview = component.pop("preview")
            if preview:
                instance.data["families"].append("review")
                component["tags"] = ["review"]
                self.log.debug("Adding review family")

@@ -116,7 +116,7 @@ class CollectEditorial(pyblish.api.InstancePlugin):
        kwargs = {}
        if extension == ".edl":
            # EDL has no frame rate embedded so needs explicit
            # frame rate else 24 is asssumed.
            # frame rate else 24 is assumed.
            kwargs["rate"] = get_current_project_asset()["data"]["fps"]

        instance.data["otio_timeline"] = otio.adapters.read_from_file(
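# A minimal sketch of the EDL read above (the path and rate values are
# hypothetical); extra keyword arguments are forwarded to the EDL adapter:
#
#     import opentimelineio as otio
#     timeline = otio.adapters.read_from_file(
#         "/editorial/cut_v001.edl", rate=25)
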
@@ -29,7 +29,7 @@ class ValidateFrameRange(pyblish.api.InstancePlugin):
                for pattern in self.skip_timelines_check):
            self.log.info("Skipping for {} task".format(instance.data["task"]))

        # TODO repace query with using 'instance.data["assetEntity"]'
        # TODO replace query with using 'instance.data["assetEntity"]'
        asset_data = get_current_project_asset(instance.data["asset"])["data"]
        frame_start = asset_data["frameStart"]
        frame_end = asset_data["frameEnd"]

@@ -8,10 +8,10 @@ from openpype.pipeline.create import CreatorError
class ShotMetadataSolver:
    """ Solving hierarchical metadata

    Used during editorial publishing. Works with imput
    Used during editorial publishing. Works with input
    clip name and settings defining python formatable
    template. Settings also define searching patterns
    and its token keys used for formating in templates.
    and its token keys used for formatting in templates.
    """

    NO_DECOR_PATERN = re.compile(r"\{([a-z]*?)\}")

@@ -40,13 +40,13 @@ class ShotMetadataSolver:
        """Shot renaming function

        Args:
            data (dict): formating data
            data (dict): formatting data

        Raises:
            CreatorError: If missing keys

        Returns:
            str: formated new name
            str: formatted new name
        """
        shot_rename_template = self.shot_rename[
            "shot_rename_template"]

@@ -58,7 +58,7 @@ class ShotMetadataSolver:
                "Make sure all keys in settings are correct:: \n\n"
                f"From template string {shot_rename_template} > "
                f"`{_E}` has no equivalent in \n"
                f"{list(data.keys())} input formating keys!"
                f"{list(data.keys())} input formatting keys!"
            ))

    def _generate_tokens(self, clip_name, source_data):

@@ -68,7 +68,7 @@ class ShotMetadataSolver:

        Args:
            clip_name (str): name of clip in editorial
            source_data (dict): data for formating
            source_data (dict): data for formatting

        Raises:
            CreatorError: if missing key

@@ -106,14 +106,14 @@ class ShotMetadataSolver:
        return output_data

    def _create_parents_from_settings(self, parents, data):
        """Formating parent components.
        """formatting parent components.

        Args:
            parents (list): list of dict parent components
            data (dict): formating data
            data (dict): formatting data

        Raises:
            CreatorError: missing formating key
            CreatorError: missing formatting key
            CreatorError: missing token key
            KeyError: missing parent token

@@ -126,7 +126,7 @@ class ShotMetadataSolver:

        # fill parent keys data template from anatomy data
        try:
            _parent_tokens_formating_data = {
            _parent_tokens_formatting_data = {
                parent_token["name"]: parent_token["value"].format(**data)
                for parent_token in hierarchy_parents
            }

@@ -143,17 +143,17 @@ class ShotMetadataSolver:
        for _index, _parent in enumerate(
            shot_hierarchy["parents_path"].split("/")
        ):
            # format parent token with value which is formated
            # format parent token with value which is formatted
            try:
                parent_name = _parent.format(
                    **_parent_tokens_formating_data)
                    **_parent_tokens_formatting_data)
            except KeyError as _E:
                raise CreatorError((
                    "Make sure all keys in settings are correct : \n\n"
                    f"`{_E}` from template string "
                    f"{shot_hierarchy['parents_path']}, "
                    f" has no equivalent in \n"
                    f"{list(_parent_tokens_formating_data.keys())} parents"
                    f"{list(_parent_tokens_formatting_data.keys())} parents"
                ))

            parent_token_name = (
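# A minimal sketch of the parent-path solving above, with hypothetical
# settings values; each "/" element of parents_path is itself a template
# filled from the already formatted parent tokens, and a missing token
# raises KeyError which is re-raised as CreatorError:
#
#     tokens = {"episode": "ep01", "sequence": "sq01"}
#     parents_path = "{episode}/{sequence}"
#     parent_names = [part.format(**tokens)
#                     for part in parents_path.split("/")]
#     # -> ["ep01", "sq01"]
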
@@ -225,7 +225,7 @@ class ShotMetadataSolver:
        visual_hierarchy = [asset_doc]
        current_doc = asset_doc

        # looping trought all available visual parents
        # looping through all available visual parents
        # if they are not available anymore than it breaks
        while True:
            visual_parent_id = current_doc["data"]["visualParent"]

@@ -288,7 +288,7 @@ class ShotMetadataSolver:

        Args:
            clip_name (str): clip name
            source_data (dict): formating data
            source_data (dict): formatting data

        Returns:
            (str, dict): shot name and hierarchy data

@@ -301,19 +301,19 @@ class ShotMetadataSolver:
        # match clip to shot name at start
        shot_name = clip_name

        # parse all tokens and generate formating data
        formating_data = self._generate_tokens(shot_name, source_data)
        # parse all tokens and generate formatting data
        formatting_data = self._generate_tokens(shot_name, source_data)

        # generate parents from selected asset
        parents = self._get_parents_from_selected_asset(asset_doc, project_doc)

        if self.shot_rename["enabled"]:
            shot_name = self._rename_template(formating_data)
            shot_name = self._rename_template(formatting_data)
            self.log.info(f"Renamed shot name: {shot_name}")

        if self.shot_hierarchy["enabled"]:
            parents = self._create_parents_from_settings(
                parents, formating_data)
                parents, formatting_data)

        if self.shot_add_tasks:
            tasks = self._generate_tasks_from_settings(

@@ -260,7 +260,7 @@ or updating already created. Publishing will create OTIO file.
            )

            if not first_otio_timeline:
                # assing otio timeline for multi file to layer
                # assign otio timeline for multi file to layer
                first_otio_timeline = otio_timeline

        # create otio editorial instance

@@ -283,7 +283,7 @@ or updating already created. Publishing will create OTIO file.

        Args:
            subset_name (str): name of subset
            data (dict): instnance data
            data (dict): instance data
            sequence_path (str): path to sequence file
            media_path (str): path to media file
            otio_timeline (otio.Timeline): otio timeline object

@@ -315,7 +315,7 @@ or updating already created. Publishing will create OTIO file.
        kwargs = {}
        if extension == ".edl":
            # EDL has no frame rate embedded so needs explicit
            # frame rate else 24 is asssumed.
            # frame rate else 24 is assumed.
            kwargs["rate"] = fps
            kwargs["ignore_timecode_mismatch"] = True

@@ -358,7 +358,7 @@ or updating already created. Publishing will create OTIO file.
        sequence_file_name,
        first_otio_timeline=None
    ):
        """Helping function fro creating clip instance
        """Helping function for creating clip instance

        Args:
            otio_timeline (otio.Timeline): otio timeline object

@@ -527,7 +527,7 @@ or updating already created. Publishing will create OTIO file.

        Args:
            otio_clip (otio.Clip): otio clip object
            preset (dict): sigle family preset
            preset (dict): single family preset
            instance_data (dict): instance data
            parenting_data (dict): shot instance parent data

@@ -767,7 +767,7 @@ or updating already created. Publishing will create OTIO file.
        ]

    def _validate_clip_for_processing(self, otio_clip):
        """Validate otio clip attribues
        """Validate otio clip attributes

        Args:
            otio_clip (otio.Clip): otio clip object

@@ -843,7 +843,7 @@ or updating already created. Publishing will create OTIO file.
            single_item=False,
            label="Media files",
        ),
        # TODO: perhpas better would be timecode and fps input
        # TODO: perhaps better would be timecode and fps input
        NumberDef(
            "timeline_offset",
            default=0,

@@ -14,7 +14,7 @@ class CollectSettingsSimpleInstances(pyblish.api.InstancePlugin):

    There is also possibility to have reviewable representation which can be
    stored under 'reviewable' attribute stored on instance data. If there was
    already created representation with the same files as 'revieable' containes
    already created representation with the same files as 'reviewable' contains

    Representations can be marked for review and in that case is also added
    'review' family to instance families. For review can be marked only one
Some files were not shown because too many files have changed in this diff.