Merge pull request #730 from ynput/feature/remove-maya-addon

Chore: Removed Maya addon
Jakub Trllo 2024-07-03 10:28:35 +02:00 committed by GitHub
commit 5640c810ff
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
316 changed files with 0 additions and 45340 deletions

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

@@ -1,4 +0,0 @@
Maya Integration Addon
======================
WIP

@@ -1,13 +0,0 @@
from .version import __version__
from .addon import (
    MayaAddon,
    MAYA_ROOT_DIR,
)

__all__ = (
    "__version__",
    "MayaAddon",
    "MAYA_ROOT_DIR",
)

@@ -1,49 +0,0 @@
import os

from ayon_core.addon import AYONAddon, IHostAddon

from .version import __version__

MAYA_ROOT_DIR = os.path.dirname(os.path.abspath(__file__))


class MayaAddon(AYONAddon, IHostAddon):
    name = "maya"
    version = __version__
    host_name = "maya"

    def add_implementation_envs(self, env, _app):
        # Add requirements to PYTHONPATH
        new_python_paths = [
            os.path.join(MAYA_ROOT_DIR, "startup")
        ]
        old_python_path = env.get("PYTHONPATH") or ""
        for path in old_python_path.split(os.pathsep):
            if not path:
                continue
            norm_path = os.path.normpath(path)
            if norm_path not in new_python_paths:
                new_python_paths.append(norm_path)

        # add vendor path
        new_python_paths.append(
            os.path.join(MAYA_ROOT_DIR, "vendor", "python")
        )
        env["PYTHONPATH"] = os.pathsep.join(new_python_paths)

        # Set default environments
        envs = {
            "AYON_LOG_NO_COLORS": "1",
        }
        for key, value in envs.items():
            env[key] = value

    def get_launch_hook_paths(self, app):
        if app.host_name != self.host_name:
            return []
        return [
            os.path.join(MAYA_ROOT_DIR, "hooks")
        ]

    def get_workfile_extensions(self):
        return [".ma", ".mb"]
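For context on the removed `add_implementation_envs` hunk above: it prepends the addon's `startup` directory, preserves existing `PYTHONPATH` entries without duplicates, and appends the vendored libraries last. A minimal standalone sketch of that merge logic (the `merge_python_paths` helper and paths are hypothetical, not part of the addon) runs without Maya:

```python
import os


def merge_python_paths(env, addon_root):
    """Mimic MayaAddon.add_implementation_envs PYTHONPATH handling."""
    # Addon startup scripts go first so Maya picks them up on launch
    new_python_paths = [os.path.join(addon_root, "startup")]
    old_python_path = env.get("PYTHONPATH") or ""
    for path in old_python_path.split(os.pathsep):
        if not path:
            continue
        norm_path = os.path.normpath(path)
        # Skip entries that are already present (deduplication)
        if norm_path not in new_python_paths:
            new_python_paths.append(norm_path)
    # Vendored libraries are appended last so they do not shadow user paths
    new_python_paths.append(os.path.join(addon_root, "vendor", "python"))
    env["PYTHONPATH"] = os.pathsep.join(new_python_paths)
    return env


env = {"PYTHONPATH": os.pathsep.join(["/tools", "/tools"])}
merge_python_paths(env, "/addons/maya")
```

The duplicated `/tools` entry collapses to one, sandwiched between the startup and vendor paths.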

@@ -1,72 +0,0 @@
"""Public API

Anything that isn't defined here is INTERNAL and unreliable for external use.

"""
from .pipeline import (
    uninstall,
    ls,
    containerise,
    MayaHost,
)
from .plugin import (
    Loader
)
from .workio import (
    open_file,
    save_file,
    current_file,
    has_unsaved_changes,
    file_extensions,
    work_root
)
from .lib import (
    lsattr,
    lsattrs,
    read,
    apply_shaders,
    maintained_selection,
    suspended_refresh,
    unique_namespace,
)

__all__ = [
    "uninstall",
    "ls",
    "containerise",
    "MayaHost",

    "Loader",

    # Workfiles API
    "open_file",
    "save_file",
    "current_file",
    "has_unsaved_changes",
    "file_extensions",
    "work_root",

    # Utility functions
    "lsattr",
    "lsattrs",
    "read",
    "unique_namespace",
    "apply_shaders",
    "maintained_selection",
    "suspended_refresh",
]

# Backwards API compatibility
open = open_file
save = save_file

@@ -1,146 +0,0 @@
# absolute_import is needed to counter the `module has no cmds error` in Maya
from __future__ import absolute_import

import pyblish.api
import ayon_api

from ayon_core.pipeline.publish import (
    get_errored_instances_from_context,
    get_errored_plugins_from_context
)


class GenerateUUIDsOnInvalidAction(pyblish.api.Action):
    """Generate UUIDs on the invalid nodes in the instance.

    Invalid nodes are those returned by the plugin's `get_invalid` method.
    As such it is the plug-in's responsibility to ensure the nodes that
    receive new UUIDs are actually invalid.

    Requires:
        - instance.data["folderPath"]

    """

    label = "Regenerate UUIDs"
    on = "failed"  # This action is only available on a failed plug-in
    icon = "wrench"  # Icon from Awesome Icon

    def process(self, context, plugin):
        from maya import cmds

        self.log.info("Finding bad nodes..")

        errored_instances = get_errored_instances_from_context(context)

        # Apply pyblish logic to get the instances for the plug-in
        instances = pyblish.api.instances_by_plugin(errored_instances, plugin)

        # Get the nodes from all instances that ran through this plug-in
        all_invalid = []
        for instance in instances:
            invalid = plugin.get_invalid(instance)

            # Don't allow referenced nodes to get their ids regenerated to
            # avoid loaded content getting messed up with reference edits
            if invalid:
                referenced = {node for node in invalid if
                              cmds.referenceQuery(node, isNodeReferenced=True)}
                if referenced:
                    self.log.warning("Skipping UUID generation on referenced "
                                     "nodes: {}".format(list(referenced)))
                    invalid = [node for node in invalid
                               if node not in referenced]

            if invalid:
                self.log.info("Fixing instance {}".format(instance.name))
                self._update_id_attribute(instance, invalid)
                all_invalid.extend(invalid)

        if not all_invalid:
            self.log.info("No invalid nodes found.")
            return

        all_invalid = list(set(all_invalid))
        self.log.info("Generated ids on nodes: {0}".format(all_invalid))

    def _update_id_attribute(self, instance, nodes):
        """Regenerate the id attribute on the given nodes.

        Args:
            instance: The instance we're fixing for
            nodes (list): all nodes to regenerate ids on

        """
        from . import lib

        # Expecting this is called on validators in which case 'folderEntity'
        # should be always available, but kept a way to query it by name.
        folder_entity = instance.data.get("folderEntity")
        if not folder_entity:
            folder_path = instance.data["folderPath"]
            project_name = instance.context.data["projectName"]
            self.log.info((
                "Folder is not stored on instance."
                " Querying by path \"{}\" from project \"{}\""
            ).format(folder_path, project_name))
            folder_entity = ayon_api.get_folder_by_path(
                project_name, folder_path, fields={"id"}
            )

        for node, _id in lib.generate_ids(
            nodes, folder_id=folder_entity["id"]
        ):
            lib.set_id(node, _id, overwrite=True)


class SelectInvalidAction(pyblish.api.Action):
    """Select invalid nodes in Maya when plug-in failed.

    To retrieve the invalid nodes this assumes a static `get_invalid()`
    method is available on the plugin.

    """
    label = "Select invalid"
    on = "failed"  # This action is only available on a failed plug-in
    icon = "search"  # Icon from Awesome Icon

    def process(self, context, plugin):
        try:
            from maya import cmds
        except ImportError:
            raise ImportError("Current host is not Maya")

        # Get the invalid nodes for the plug-ins
        self.log.info("Finding invalid nodes..")
        invalid = list()
        if issubclass(plugin, pyblish.api.ContextPlugin):
            errored_plugins = get_errored_plugins_from_context(context)
            if plugin in errored_plugins:
                invalid = plugin.get_invalid(context)
        else:
            errored_instances = get_errored_instances_from_context(
                context, plugin=plugin
            )
            for instance in errored_instances:
                invalid_nodes = plugin.get_invalid(instance)
                if invalid_nodes:
                    if isinstance(invalid_nodes, (list, tuple)):
                        invalid.extend(invalid_nodes)
                    else:
                        self.log.warning("Plug-in returned to be invalid, "
                                         "but has no selectable nodes.")

        # Ensure unique (process each node only once)
        invalid = list(set(invalid))

        if invalid:
            self.log.info("Selecting invalid nodes: %s" % ", ".join(invalid))
            cmds.select(invalid, replace=True, noExpand=True)
        else:
            self.log.info("No invalid nodes found.")
            cmds.select(deselect=True)
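The key safety step in the `GenerateUUIDsOnInvalidAction` hunk above is excluding referenced nodes before regenerating ids. A hypothetical standalone sketch of that filtering (with `cmds.referenceQuery` swapped for a plain predicate so it runs outside Maya; the helper name and sample nodes are invented for illustration):

```python
def filter_unreferenced(invalid, is_referenced):
    """Drop nodes flagged as referenced, mirroring the action's set-based
    exclusion so reference edits never get polluted with new ids."""
    referenced = {node for node in invalid if is_referenced(node)}
    return [node for node in invalid if node not in referenced]


nodes = ["pCube1", "rig:arm_CTL", "pSphere1"]
# Stand-in predicate: treat namespaced nodes as "referenced" for the demo
kept = filter_unreferenced(nodes, lambda n: ":" in n)
```

Only `pCube1` and `pSphere1` survive; the namespaced node is skipped, just as the action logs and skips referenced nodes.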

@@ -1,350 +0,0 @@
import json
import logging
import os

from maya import cmds  # noqa

from ayon_maya.api.lib import evaluation

log = logging.getLogger(__name__)

# The maya alembic export types
ALEMBIC_ARGS = {
    "attr": (list, tuple),
    "attrPrefix": (list, tuple),
    "autoSubd": bool,
    "dataFormat": str,
    "endFrame": float,
    "eulerFilter": bool,
    "frameRange": str,  # "start end"; overrides startFrame & endFrame
    "frameRelativeSample": float,
    "melPerFrameCallback": str,
    "melPostJobCallback": str,
    "noNormals": bool,
    "preRoll": bool,
    "pythonPerFrameCallback": str,
    "pythonPostJobCallback": str,
    "renderableOnly": bool,
    "root": (list, tuple),
    "selection": bool,
    "startFrame": float,
    "step": float,
    "stripNamespaces": bool,
    "userAttr": (list, tuple),
    "userAttrPrefix": (list, tuple),
    "uvWrite": bool,
    "uvsOnly": bool,
    "verbose": bool,
    "wholeFrameGeo": bool,
    "worldSpace": bool,
    "writeColorSets": bool,
    "writeCreases": bool,  # Maya 2015 Ext1+
    "writeFaceSets": bool,
    "writeUVSets": bool,  # Maya 2017+
    "writeVisibility": bool,
}


def extract_alembic(
    file,
    attr=None,
    attrPrefix=None,
    dataFormat="ogawa",
    endFrame=None,
    eulerFilter=True,
    frameRange="",
    melPerFrameCallback=None,
    melPostJobCallback=None,
    noNormals=False,
    preRoll=False,
    preRollStartFrame=0,
    pythonPerFrameCallback=None,
    pythonPostJobCallback=None,
    renderableOnly=False,
    root=None,
    selection=True,
    startFrame=None,
    step=1.0,
    stripNamespaces=True,
    userAttr=None,
    userAttrPrefix=None,
    uvsOnly=False,
    uvWrite=True,
    verbose=False,
    wholeFrameGeo=False,
    worldSpace=False,
    writeColorSets=False,
    writeCreases=False,
    writeFaceSets=False,
    writeUVSets=False,
    writeVisibility=False
):
    """Extract a single Alembic Cache.

    This extracts an Alembic cache using the `-selection` flag to minimize
    the extracted content to solely what was Collected into the instance.

    Arguments:
        file (str): The filepath to write the alembic file to.
        attr (list of str, optional): A specific geometric attribute to write
            out. Defaults to [].
        attrPrefix (list of str, optional): Prefix filter for determining which
            geometric attributes to write out. Defaults to ["ABC_"].
        dataFormat (str): The data format to use for the cache,
            defaults to "ogawa".
        endFrame (float): End frame of output. Ignored if `frameRange`
            provided.
        eulerFilter (bool): When on, X, Y, and Z rotation data is filtered with
            an Euler filter. Euler filtering helps resolve irregularities in
            rotations especially if X, Y, and Z rotations exceed 360 degrees.
            Defaults to True.
        frameRange (tuple or str): Two-tuple with start and end frame or a
            string formatted as: "startFrame endFrame". This argument
            overrides the `startFrame` and `endFrame` arguments.
        melPerFrameCallback (Optional[str]): MEL callback run per frame.
        melPostJobCallback (Optional[str]): MEL callback after last frame is
            written.
        noNormals (bool): When on, normal data from the original polygon
            objects is not included in the exported Alembic cache file.
        preRoll (bool): This frame range will not be sampled.
            Defaults to False.
        preRollStartFrame (float): The frame to start scene
            evaluation at. This is used to set the starting frame for time
            dependent translations and can be used to evaluate run-up that
            isn't actually translated. Defaults to 0.
        pythonPerFrameCallback (Optional[str]): Python callback run per frame.
        pythonPostJobCallback (Optional[str]): Python callback after last frame
            is written.
        renderableOnly (bool): When on, any non-renderable nodes or hierarchy,
            such as hidden objects, are not included in the Alembic file.
            Defaults to False.
        root (list of str): Maya dag path which will be parented to
            the root of the Alembic file. Defaults to [], which means the
            entire scene will be written out.
        selection (bool): Write out all selected nodes from the
            active selection list that are descendants of the roots specified
            with -root. Defaults to True.
        startFrame (float): Start frame of output. Ignored if `frameRange`
            provided.
        step (float): The time interval (expressed in frames) at
            which the frame range is sampled. Additional samples around each
            frame can be specified with -frs. Defaults to 1.0.
        stripNamespaces (bool): When on, any namespaces associated with the
            exported objects are removed from the Alembic file. For example, an
            object with the namespace taco:foo:bar appears as bar in the
            Alembic file.
        userAttr (list of str, optional): A specific user defined attribute to
            write out. Defaults to [].
        userAttrPrefix (list of str, optional): Prefix filter for determining
            which user defined attributes to write out. Defaults to [].
        uvsOnly (bool): When on, only uv data for PolyMesh and SubD shapes
            will be written to the Alembic file.
        uvWrite (bool): When on, UV data from polygon meshes and subdivision
            objects are written to the Alembic file. Only the current UV map is
            included.
        verbose (bool): When on, outputs frame number information to the
            Script Editor or output window during extraction.
        wholeFrameGeo (bool): Data for geometry will only be written
            out on whole frames. Defaults to False.
        worldSpace (bool): When on, the top node in the node hierarchy is
            stored as world space. By default, these nodes are stored as local
            space. Defaults to False.
        writeColorSets (bool): Write all color sets on MFnMeshes as
            color 3 or color 4 indexed geometry parameters with face varying
            scope. Defaults to False.
        writeCreases (bool): If the mesh has crease edges or crease
            vertices, the mesh (OPolyMesh) would now be written out as an OSubD
            and crease info will be stored in the Alembic file. Otherwise,
            crease info won't be preserved in the Alembic file unless a custom
            Boolean attribute SubDivisionMesh has been added to the mesh node
            and its value is true. Defaults to False.
        writeFaceSets (bool): Write all Face sets on MFnMeshes.
            Defaults to False.
        writeUVSets (bool): Write all uv sets on MFnMeshes as vector
            2 indexed geometry parameters with face varying scope. Defaults to
            False.
        writeVisibility (bool): Visibility state will be stored in
            the Alembic file. Otherwise everything written out is treated as
            visible. Defaults to False.

    """
    # Ensure alembic exporter is loaded
    cmds.loadPlugin('AbcExport', quiet=True)

    # Alembic Exporter requires forward slashes
    file = file.replace('\\', '/')

    # Ensure list arguments are valid.
    attr = attr or []
    attrPrefix = attrPrefix or []
    userAttr = userAttr or []
    userAttrPrefix = userAttrPrefix or []
    root = root or []

    # Pass the start and end frame on as `frameRange` so that it
    # never conflicts with that argument
    if not frameRange:
        # Fallback to maya timeline if no start or end frame provided.
        if startFrame is None:
            startFrame = cmds.playbackOptions(query=True,
                                              animationStartTime=True)
        if endFrame is None:
            endFrame = cmds.playbackOptions(query=True,
                                            animationEndTime=True)

        # Ensure valid types are converted to frame range
        assert isinstance(startFrame, ALEMBIC_ARGS["startFrame"])
        assert isinstance(endFrame, ALEMBIC_ARGS["endFrame"])
        frameRange = "{0} {1}".format(startFrame, endFrame)
    else:
        # Allow conversion from tuple for `frameRange`
        if isinstance(frameRange, (list, tuple)):
            assert len(frameRange) == 2
            frameRange = "{0} {1}".format(frameRange[0], frameRange[1])

    # Assemble options
    options = {
        "selection": selection,
        "frameRange": frameRange,
        "eulerFilter": eulerFilter,
        "noNormals": noNormals,
        "preRoll": preRoll,
        "root": root,
        "renderableOnly": renderableOnly,
        "uvWrite": uvWrite,
        "uvsOnly": uvsOnly,
        "writeColorSets": writeColorSets,
        "writeFaceSets": writeFaceSets,
        "wholeFrameGeo": wholeFrameGeo,
        "worldSpace": worldSpace,
        "writeVisibility": writeVisibility,
        "writeUVSets": writeUVSets,
        "writeCreases": writeCreases,
        "dataFormat": dataFormat,
        "step": step,
        "attr": attr,
        "attrPrefix": attrPrefix,
        "userAttr": userAttr,
        "userAttrPrefix": userAttrPrefix,
        "stripNamespaces": stripNamespaces,
        "verbose": verbose
    }

    # Validate options
    for key, value in options.copy().items():

        # Discard unknown options
        if key not in ALEMBIC_ARGS:
            log.warning("extract_alembic() does not support option '%s'. "
                        "Flag will be ignored..", key)
            options.pop(key)
            continue

        # Validate value type
        valid_types = ALEMBIC_ARGS[key]
        if not isinstance(value, valid_types):
            raise TypeError("Alembic option unsupported type: "
                            "{0} (expected {1})".format(value, valid_types))

        # Ignore empty values, like an empty string, since they mess up how
        # job arguments are built
        if isinstance(value, (list, tuple)):
            value = [x for x in value if x.strip()]

            # Ignore option completely if no values remaining
            if not value:
                options.pop(key)
                continue

            options[key] = value

    # The `writeCreases` argument was changed to `autoSubd` in Maya 2018+
    maya_version = int(cmds.about(version=True))
    if maya_version >= 2018:
        options['autoSubd'] = options.pop('writeCreases', False)

    # Only add callbacks if they are set so that we're not passing `None`
    callbacks = {
        "melPerFrameCallback": melPerFrameCallback,
        "melPostJobCallback": melPostJobCallback,
        "pythonPerFrameCallback": pythonPerFrameCallback,
        "pythonPostJobCallback": pythonPostJobCallback,
    }
    for key, callback in callbacks.items():
        if callback:
            options[key] = str(callback)

    # Format the job string from options
    job_args = list()
    for key, value in options.items():
        if isinstance(value, (list, tuple)):
            for entry in value:
                job_args.append("-{} {}".format(key, entry))
        elif isinstance(value, bool):
            # Add only when state is set to True
            if value:
                job_args.append("-{0}".format(key))
        else:
            job_args.append("-{0} {1}".format(key, value))

    job_str = " ".join(job_args)
    job_str += ' -file "%s"' % file

    # Ensure output directory exists
    parent_dir = os.path.dirname(file)
    if not os.path.exists(parent_dir):
        os.makedirs(parent_dir)

    if verbose:
        log.debug("Preparing Alembic export with options: %s",
                  json.dumps(options, indent=4))
        log.debug("Extracting Alembic with job arguments: %s", job_str)

    # Perform extraction
    print("Alembic Job Arguments : {}".format(job_str))

    # Disable the parallel evaluation temporarily to ensure no buggy
    # exports are made. (PLN-31)
    # TODO: Make sure this actually fixes the issues
    with evaluation("off"):
        cmds.AbcExport(
            j=job_str,
            verbose=verbose,
            preRollStartFrame=preRollStartFrame
        )

    if verbose:
        log.debug("Extracted Alembic to: %s", file)

    return file
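The heart of the removed `extract_alembic` is how it turns the validated options dict into an `AbcExport` job string: list values repeat the flag per entry, booleans are emitted only when True, and everything else becomes `-flag value`. A hypothetical standalone sketch of just that assembly step (the `build_job_args` helper and sample options are invented for illustration, and no Maya is required):

```python
def build_job_args(options):
    """Mirror extract_alembic's job-string assembly rules."""
    job_args = []
    for key, value in options.items():
        if isinstance(value, (list, tuple)):
            # Repeat the flag once per entry, e.g. multiple -root paths
            for entry in value:
                job_args.append("-{} {}".format(key, entry))
        elif isinstance(value, bool):
            # Boolean flags are presence-only: emit when True, omit when False
            if value:
                job_args.append("-{0}".format(key))
        else:
            job_args.append("-{0} {1}".format(key, value))
    return " ".join(job_args)


job_str = build_job_args({
    "frameRange": "1001 1050",
    "uvWrite": True,
    "noNormals": False,
    "root": ["|char", "|prop"],
})
```

With these sample options the result is `-frameRange 1001 1050 -uvWrite -root |char -root |prop`; `noNormals=False` is dropped entirely, which is why `extract_alembic` never passes `None` or empty values through to this stage.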

@@ -1,118 +0,0 @@
# -*- coding: utf-8 -*-
"""AYON script commands to be used directly in Maya."""
from maya import cmds

from ayon_api import get_project, get_folder_by_path
from ayon_core.pipeline import get_current_project_name, get_current_folder_path


class ToolWindows:

    _windows = {}

    @classmethod
    def get_window(cls, tool):
        """Get widget for specific tool.

        Args:
            tool (str): Name of the tool.

        Returns:
            Stored widget.

        """
        try:
            return cls._windows[tool]
        except KeyError:
            return None

    @classmethod
    def set_window(cls, tool, window):
        """Set widget for the tool.

        Args:
            tool (str): Name of the tool.
            window (QtWidgets.QWidget): Widget

        """
        cls._windows[tool] = window


def _resolution_from_entity(entity):
    if not entity:
        print("Entered entity is not valid. \"{}\"".format(
            str(entity)
        ))
        return None

    attributes = entity.get("attrib")
    if attributes is None:
        attributes = entity.get("data", {})

    resolution_width = attributes.get("resolutionWidth")
    resolution_height = attributes.get("resolutionHeight")
    # Backwards compatibility
    if resolution_width is None or resolution_height is None:
        resolution_width = attributes.get("resolution_width")
        resolution_height = attributes.get("resolution_height")

    # Make sure both width and height are set
    if resolution_width is None or resolution_height is None:
        cmds.warning(
            "No resolution information found for \"{}\"".format(
                entity["name"]
            )
        )
        return None

    return int(resolution_width), int(resolution_height)


def reset_resolution():
    # Default values
    resolution_width = 1920
    resolution_height = 1080

    # Get resolution from folder
    project_name = get_current_project_name()
    folder_path = get_current_folder_path()
    folder_entity = get_folder_by_path(project_name, folder_path)
    resolution = _resolution_from_entity(folder_entity)
    # Try to get resolution from project
    if resolution is None:
        # TODO go through visualParents
        print((
            "Folder '{}' does not have set resolution."
            " Trying to get resolution from project"
        ).format(folder_path))
        project_entity = get_project(project_name)
        resolution = _resolution_from_entity(project_entity)

    if resolution is None:
        msg = "Using default resolution {}x{}"
    else:
        resolution_width, resolution_height = resolution
        msg = "Setting resolution to {}x{}"
    print(msg.format(resolution_width, resolution_height))

    # set for different renderers
    # arnold, vray, redshift, renderman
    renderer = cmds.getAttr("defaultRenderGlobals.currentRenderer").lower()
    # handle various renderman names
    if renderer.startswith("renderman"):
        renderer = "renderman"

    # default attributes are usable for Arnold, Renderman and Redshift
    width_attr_name = "defaultResolution.width"
    height_attr_name = "defaultResolution.height"
    # Vray has its own way
    if renderer == "vray":
        width_attr_name = "vraySettings.width"
        height_attr_name = "vraySettings.height"

    cmds.setAttr(width_attr_name, resolution_width)
    cmds.setAttr(height_attr_name, resolution_height)
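The `_resolution_from_entity` hunk above layers two fallbacks: prefer the entity's `attrib` dict over the legacy `data` dict, and prefer `resolutionWidth`/`resolutionHeight` over the old `resolution_width`/`resolution_height` keys. A hypothetical standalone sketch of that lookup chain (the helper name and sample entities are invented; the Maya warning/print side effects are omitted so it runs anywhere):

```python
def resolution_from_entity(entity):
    """Mirror _resolution_from_entity's key fallbacks without Maya."""
    if not entity:
        return None
    # New-style entities carry attributes under "attrib";
    # older ones under "data"
    attributes = entity.get("attrib")
    if attributes is None:
        attributes = entity.get("data", {})

    width = attributes.get("resolutionWidth")
    height = attributes.get("resolutionHeight")
    # Backwards compatibility with the legacy snake_case keys
    if width is None or height is None:
        width = attributes.get("resolution_width")
        height = attributes.get("resolution_height")

    if width is None or height is None:
        return None
    return int(width), int(height)


legacy = {"data": {"resolution_width": "1920", "resolution_height": "1080"}}
```

A legacy entity with string values still resolves to `(1920, 1080)`, while a missing or empty entity yields `None`, which is what lets `reset_resolution` fall through from folder to project to its hard-coded 1920x1080 default.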

@@ -1,179 +0,0 @@
"""A set of commands that install overrides to Maya's UI"""
import os
import logging
from functools import partial

import maya.cmds as cmds
import maya.mel as mel

from ayon_core import resources
from ayon_core.tools.utils import host_tools

from .lib import get_main_window
from ..tools import show_look_assigner

log = logging.getLogger(__name__)

COMPONENT_MASK_ORIGINAL = {}


def override_component_mask_commands():
    """Override component mask ctrl+click behavior.

    This implements special behavior for Maya's component
    mask menu items where a ctrl+click will instantly make
    it an isolated behavior disabling all others.

    Tested in Maya 2016 and 2018

    """
    log.info("Installing override_component_mask_commands..")

    # Get all object mask buttons
    buttons = cmds.formLayout("objectMaskIcons",
                              query=True,
                              childArray=True)
    # Skip the triangle list item
    buttons = [btn for btn in buttons if btn != "objPickMenuLayout"]

    def on_changed_callback(raw_command, state):
        """New callback"""

        # If "control" is held force the toggled one to on and
        # toggle the others based on whether any of the buttons
        # was remaining active after the toggle, if not then
        # enable all
        if cmds.getModifiers() == 4:  # = CTRL
            state = True
            active = [cmds.iconTextCheckBox(btn, query=True, value=True)
                      for btn in buttons]
            if any(active):
                cmds.selectType(allObjects=False)
            else:
                cmds.selectType(allObjects=True)

        # Replace #1 with the current button state
        cmd = raw_command.replace(" #1", " {}".format(int(state)))
        mel.eval(cmd)

    for btn in buttons:
        # Store a reference to the original command so that if
        # we rerun this override command it doesn't recursively
        # try to implement the fix. (This also allows us to
        # "uninstall" the behavior later)
        if btn not in COMPONENT_MASK_ORIGINAL:
            original = cmds.iconTextCheckBox(btn, query=True, cc=True)
            COMPONENT_MASK_ORIGINAL[btn] = original

        # Assign the special callback
        original = COMPONENT_MASK_ORIGINAL[btn]
        new_fn = partial(on_changed_callback, original)
        cmds.iconTextCheckBox(btn, edit=True, cc=new_fn)


def override_toolbox_ui():
    """Add custom buttons in Toolbox as replacement for Maya web help icon."""
    icons = resources.get_resource("icons")
    parent_widget = get_main_window()

    # Ensure the maya web icon on toolbox exists
    button_names = [
        # Maya 2022.1+ with maya.cmds.iconTextStaticLabel
        "ToolBox|MainToolboxLayout|mayaHomeToolboxButton",
        # Older with maya.cmds.iconTextButton
        "ToolBox|MainToolboxLayout|mayaWebButton"
    ]
    for name in button_names:
        if cmds.control(name, query=True, exists=True):
            web_button = name
            break
    else:
        # Button does not exist
        log.warning("Can't find Maya Home/Web button to override toolbox ui..")
        return

    cmds.control(web_button, edit=True, visible=False)

    # real = 32, but 36 with padding - according to toolbox mel script
    icon_size = 36
    parent = web_button.rsplit("|", 1)[0]

    # Ensure the parent is a formLayout
    if not cmds.objectTypeUI(parent) == "formLayout":
        return

    # Create our controls
    controls = []

    controls.append(
        cmds.iconTextButton(
            "ayon_toolbox_lookmanager",
            annotation="Look Manager",
            label="Look Manager",
            image=os.path.join(icons, "lookmanager.png"),
            command=lambda: show_look_assigner(
                parent=parent_widget
            ),
            width=icon_size,
            height=icon_size,
            parent=parent
        )
    )
    controls.append(
        cmds.iconTextButton(
            "ayon_toolbox_workfiles",
            annotation="Work Files",
            label="Work Files",
            image=os.path.join(icons, "workfiles.png"),
            command=lambda: host_tools.show_workfiles(
                parent=parent_widget
            ),
            width=icon_size,
            height=icon_size,
            parent=parent
        )
    )
    controls.append(
        cmds.iconTextButton(
            "ayon_toolbox_loader",
            annotation="Loader",
            label="Loader",
            image=os.path.join(icons, "loader.png"),
            command=lambda: host_tools.show_loader(
                parent=parent_widget, use_context=True
            ),
            width=icon_size,
            height=icon_size,
            parent=parent
        )
    )
    controls.append(
        cmds.iconTextButton(
            "ayon_toolbox_manager",
            annotation="Inventory",
            label="Inventory",
            image=os.path.join(icons, "inventory.png"),
            command=lambda: host_tools.show_scene_inventory(
                parent=parent_widget
            ),
            width=icon_size,
            height=icon_size,
            parent=parent
        )
    )

    # Add the buttons on the bottom and stack
    # them above each other with side padding
    controls.reverse()
    for i, control in enumerate(controls):
previous = controls[i - 1] if i > 0 else web_button
cmds.formLayout(parent, edit=True,
attachControl=[control, "bottom", 0, previous],
attachForm=([control, "left", 1],
[control, "right", 1]))
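The bottom-up stacking in the loop above can be modeled without Maya. `stack_order` is an illustrative helper (not part of the original module) that returns which control each button attaches to:

```python
def stack_order(controls, base):
    """Return (control, attach_below_to) pairs in creation order.

    Mirrors the loop above: the control list is reversed, the first
    processed control is attached to the base (the hidden web/home
    button) and every following control is attached above the previous.
    """
    ordered = list(reversed(controls))
    pairs = []
    for i, control in enumerate(ordered):
        previous = ordered[i - 1] if i > 0 else base
        pairs.append((control, previous))
    return pairs
```

This makes the attachment chain explicit: the last-created toolbox button sits directly above the hidden Maya web button, and earlier ones stack on top.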


@ -1,139 +0,0 @@
"""Backwards compatible implementation of ExitStack for Python 2.
ExitStack contextmanager was introduced in Python 3.3.
As long as we support Python 2 hosts we can use this backwards
compatible implementation to support both Python 2 and Python 3.
Instead of using ExitStack from contextlib, use it from this module:
>>> from ayon_maya.api.exitstack import ExitStack
It will provide the appropriate ExitStack implementation for the current
running Python version.
"""
# TODO: Remove the entire script once dropping Python 2 support.
import contextlib
if not getattr(contextlib, "nested", None):  # Python 3: 'nested' was removed
from contextlib import ExitStack # noqa
else:
import sys
from collections import deque
class ExitStack(object):
"""Context manager for dynamic management of a stack of exit callbacks
For example:
with ExitStack() as stack:
files = [stack.enter_context(open(fname))
for fname in filenames]
# All opened files will automatically be closed at the end of
# the with statement, even if attempts to open files later
# in the list raise an exception
"""
def __init__(self):
self._exit_callbacks = deque()
def pop_all(self):
"""Preserve the context stack by transferring
it to a new instance"""
new_stack = type(self)()
new_stack._exit_callbacks = self._exit_callbacks
self._exit_callbacks = deque()
return new_stack
def _push_cm_exit(self, cm, cm_exit):
"""Helper to correctly register callbacks
to __exit__ methods"""
def _exit_wrapper(*exc_details):
return cm_exit(cm, *exc_details)
_exit_wrapper.__self__ = cm
self.push(_exit_wrapper)
def push(self, exit):
"""Registers a callback with the standard __exit__ method signature
Can suppress exceptions the same way __exit__ methods can.
Also accepts any object with an __exit__ method (registering a call
to the method instead of the object itself)
"""
# We use an unbound method rather than a bound method to follow
# the standard lookup behaviour for special methods
_cb_type = type(exit)
try:
exit_method = _cb_type.__exit__
except AttributeError:
# Not a context manager, so assume it's a callable
self._exit_callbacks.append(exit)
else:
self._push_cm_exit(exit, exit_method)
return exit # Allow use as a decorator
def callback(self, callback, *args, **kwds):
"""Registers an arbitrary callback and arguments.
Cannot suppress exceptions.
"""
def _exit_wrapper(exc_type, exc, tb):
callback(*args, **kwds)
# We changed the signature, so using @wraps is not appropriate, but
# setting __wrapped__ may still help with introspection
_exit_wrapper.__wrapped__ = callback
self.push(_exit_wrapper)
return callback # Allow use as a decorator
def enter_context(self, cm):
"""Enters the supplied context manager
If successful, also pushes its __exit__ method as a callback and
returns the result of the __enter__ method.
"""
# We look up the special methods on the type to
# match the with statement
_cm_type = type(cm)
_exit = _cm_type.__exit__
result = _cm_type.__enter__(cm)
self._push_cm_exit(cm, _exit)
return result
def close(self):
"""Immediately unwind the context stack"""
self.__exit__(None, None, None)
def __enter__(self):
return self
def __exit__(self, *exc_details):
# We manipulate the exception state so it behaves as though
# we were actually nesting multiple with statements
frame_exc = sys.exc_info()[1]
def _fix_exception_context(new_exc, old_exc):
while 1:
exc_context = new_exc.__context__
if exc_context in (None, frame_exc):
break
new_exc = exc_context
new_exc.__context__ = old_exc
# Callbacks are invoked in LIFO order to match the behaviour of
# nested context managers
suppressed_exc = False
while self._exit_callbacks:
cb = self._exit_callbacks.pop()
try:
if cb(*exc_details):
suppressed_exc = True
exc_details = (None, None, None)
except Exception:
new_exc_details = sys.exc_info()
# simulate the stack of exceptions by setting the context
_fix_exception_context(new_exc_details[1], exc_details[1])
if not self._exit_callbacks:
raise
exc_details = new_exc_details
return suppressed_exc
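On Python 3 the module simply re-exports `contextlib.ExitStack`; either implementation behaves like nested `with` statements, unwinding registered callbacks in LIFO order. A quick illustration using the stdlib class:

```python
from contextlib import ExitStack


def close_all_lifo():
    """Demonstrate LIFO unwinding of callbacks registered on an ExitStack."""
    order = []
    with ExitStack() as stack:
        stack.callback(order.append, "first-registered")
        stack.callback(order.append, "second-registered")
    # On exit the callbacks ran in reverse registration order
    return order
```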


@ -1,210 +0,0 @@
# -*- coding: utf-8 -*-
"""Tools to work with FBX."""
import logging
from maya import cmds # noqa
import maya.mel as mel # noqa
from ayon_maya.api.lib import maintained_selection
class FBXExtractor:
"""Extract FBX from Maya.
This extracts reproducible FBX exports ignoring any of the settings set
on the local machine in the FBX export options window.
All export settings are applied with the `FBXExport*` commands prior
to the `FBXExport` call itself. The options can be overridden with their
nice names as seen in the "options" property on this class.
For more information on FBX exports see:
- https://knowledge.autodesk.com/support/maya/learn-explore/caas
/CloudHelp/cloudhelp/2016/ENU/Maya/files/GUID-6CCE943A-2ED4-4CEE-96D4
-9CB19C28F4E0-htm.html
- http://forums.cgsociety.org/archive/index.php?t-1032853.html
- https://groups.google.com/forum/#!msg/python_inside_maya/cLkaSo361oE
/LKs9hakE28kJ
"""
@property
def options(self):
"""Overridable options for FBX Export
Given in the following format
- {NAME: EXPECTED TYPE}
If the overridden option's type does not match,
the option is not included and a warning is logged.
"""
return {
"cameras": bool,
"smoothingGroups": bool,
"hardEdges": bool,
"tangents": bool,
"smoothMesh": bool,
"instances": bool,
# "referencedContainersContent": bool, # deprecated in Maya 2016+
"bakeComplexAnimation": bool,
"bakeComplexStart": int,
"bakeComplexEnd": int,
"bakeComplexStep": int,
"bakeResampleAnimation": bool,
"useSceneName": bool,
"quaternion": str, # "euler"
"shapes": bool,
"skins": bool,
"constraints": bool,
"lights": bool,
"embeddedTextures": bool,
"includeChildren": bool,
"inputConnections": bool,
"upAxis": str, # x, y or z,
"triangulate": bool,
"fileVersion": str,
"skeletonDefinitions": bool,
"referencedAssetsContent": bool
}
@property
def default_options(self):
"""The default options for FBX extraction.
This includes shapes, skins, constraints, lights and incoming
connections and exports with the Y-axis as up-axis.
By default this uses the time slider's start and end times.
"""
start_frame = int(cmds.playbackOptions(query=True,
animationStartTime=True))
end_frame = int(cmds.playbackOptions(query=True,
animationEndTime=True))
return {
"cameras": False,
"smoothingGroups": True,
"hardEdges": False,
"tangents": False,
"smoothMesh": True,
"instances": False,
"bakeComplexAnimation": True,
"bakeComplexStart": start_frame,
"bakeComplexEnd": end_frame,
"bakeComplexStep": 1,
"bakeResampleAnimation": True,
"useSceneName": False,
"quaternion": "euler",
"shapes": True,
"skins": True,
"constraints": False,
"lights": True,
"embeddedTextures": False,
"includeChildren": True,
"inputConnections": True,
"upAxis": "y",
"triangulate": False,
"fileVersion": "FBX202000",
"skeletonDefinitions": False,
"referencedAssetsContent": False
}
def __init__(self, log=None):
# Ensure FBX plug-in is loaded
self.log = log or logging.getLogger(self.__class__.__name__)
cmds.loadPlugin("fbxmaya", quiet=True)
def parse_overrides(self, instance, options):
"""Inspect data of instance to determine overridden options
An instance may supply any of the overridable options
as data, the option is then added to the extraction.
"""
for key in instance.data:
if key not in self.options:
continue
# Ensure the data is of correct type
value = instance.data[key]
if not isinstance(value, self.options[key]):
self.log.warning(
"Overridden attribute {key} was of "
"the wrong type: {invalid_type} "
"- should have been {valid_type}".format(
key=key,
invalid_type=type(value).__name__,
valid_type=self.options[key].__name__))
continue
options[key] = value
return options
def set_options_from_instance(self, instance):
"""Sets FBX export options from data in the instance.
Args:
instance (Instance): Instance data.
"""
# Parse export options
options = self.default_options
options = self.parse_overrides(instance, options)
self.log.debug("Export options: {0}".format(options))
# Collect the start and end including handles
start = instance.data.get("frameStartHandle") or \
instance.context.data.get("frameStartHandle")
end = instance.data.get("frameEndHandle") or \
instance.context.data.get("frameEndHandle")
options['bakeComplexStart'] = start
options['bakeComplexEnd'] = end
# First apply the default export settings to be fully consistent
# each time for successive publishes
mel.eval("FBXResetExport")
# Apply the FBX overrides through MEL since the commands
# only work correctly in MEL according to online
# available discussions on the topic
_iteritems = getattr(options, "iteritems", options.items)
for option, value in _iteritems():
key = option[0].upper() + option[1:] # uppercase first letter
# Boolean must be passed as lower-case strings
# as to MEL standards
if isinstance(value, bool):
value = str(value).lower()
template = "FBXExport{0} {1}" if key == "UpAxis" else \
"FBXExport{0} -v {1}" # noqa
cmd = template.format(key, value)
self.log.debug(cmd)
mel.eval(cmd)
# Never show the UI or generate a log
mel.eval("FBXExportShowUI -v false")
mel.eval("FBXExportGenerateLog -v false")
@staticmethod
def export(members, path):
# type: (list, str) -> None
"""Export members as FBX with given path.
Args:
members (list): List of members to export.
path (str): Path to use for export.
"""
# The export requires forward slashes because we need
# to format it into a string in a mel expression
path = path.replace("\\", "/")
with maintained_selection():
cmds.select(members, r=True, noExpand=True)
mel.eval('FBXExport -f "{}" -s'.format(path))
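The MEL command construction in `set_options_from_instance` can be isolated into a pure function for illustration (`build_fbx_mel_commands` is our name, not part of the class): booleans are lowered to MEL-style `true`/`false`, and `UpAxis` is the one option that takes no `-v` flag.

```python
def build_fbx_mel_commands(options):
    """Build the FBXExport* MEL commands applied before the FBXExport call."""
    commands = []
    for option, value in options.items():
        key = option[0].upper() + option[1:]  # uppercase first letter
        # Booleans must be passed as lower-case strings per MEL convention
        if isinstance(value, bool):
            value = str(value).lower()
        template = "FBXExport{0} {1}" if key == "UpAxis" else "FBXExport{0} -v {1}"
        commands.append(template.format(key, value))
    return commands
```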


@ -1,88 +0,0 @@
# -*- coding: utf-8 -*-
"""Tools to work with GLTF."""
import logging
from maya import cmds, mel # noqa
log = logging.getLogger(__name__)
_gltf_options = {
"of": str, # outputFolder
"cpr": str, # copyright
"sno": bool, # selectedNodeOnly
"sn": str, # sceneName
"glb": bool, # binary
"nbu": bool, # niceBufferURIs
"hbu": bool, # hashBufferURI
"ext": bool, # externalTextures
"ivt": int, # initialValuesTime
"acn": str, # animationClipName # codespell:ignore acn
"ast": int, # animationClipStartTime
"aet": int, # animationClipEndTime
"afr": float, # animationClipFrameRate
"dsa": int, # detectStepAnimations
"mpa": str, # meshPrimitiveAttributes
"bpa": str, # blendPrimitiveAttributes
"i32": bool, # force32bitIndices
"ssm": bool, # skipStandardMaterials
"eut": bool, # excludeUnusedTexcoord
"dm": bool, # defaultMaterial
"cm": bool, # colorizeMaterials
"dmy": str, # dumpMaya
"dgl": str, # dumpGLTF
"imd": str, # ignoreMeshDeformers
"ssc": bool, # skipSkinClusters
"sbs": bool, # skipBlendShapes
"rvp": bool, # redrawViewport
"vno": bool # visibleNodesOnly
}
def extract_gltf(parent_dir,
filename,
**kwargs):
"""Run the maya2glTF export with the given export options.
"""
cmds.loadPlugin('maya2glTF', quiet=True)
# load the UI to run mel command
mel.eval("maya2glTF_UI()")
parent_dir = parent_dir.replace('\\', '/')
options = {
"dsa": 1,
"glb": True
}
options.update(kwargs)
for key, value in options.copy().items():
if key not in _gltf_options:
log.warning("extract_gltf() does not support option '%s'. "
"Flag will be ignored..", key)
options.pop(key)
continue
job_args = list()
default_opt = "maya2glTF -of \"{0}\" -sn \"{1}\"".format(parent_dir, filename) # noqa
job_args.append(default_opt)
for key, value in options.items():
if isinstance(value, str):
job_args.append("-{0} \"{1}\"".format(key, value))
elif isinstance(value, bool):
if value:
job_args.append("-{0}".format(key))
else:
job_args.append("-{0} {1}".format(key, value))
job_str = " ".join(job_args)
log.info("{}".format(job_str))
mel.eval(job_str)
# close the gltf export after finish the export
gltf_UI = "maya2glTF_exporter_window"
if cmds.window(gltf_UI, q=True, exists=True):
cmds.deleteUI(gltf_UI)
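The flag assembly in `extract_gltf` amounts to this pure function (the name is illustrative): string options are quoted, true booleans become bare flags, false booleans are dropped, and numbers are passed verbatim.

```python
def build_gltf_job_args(parent_dir, filename, options):
    """Assemble the maya2glTF MEL invocation string from export options."""
    job_args = ['maya2glTF -of "{0}" -sn "{1}"'.format(parent_dir, filename)]
    for key, value in options.items():
        if isinstance(value, str):
            job_args.append('-{0} "{1}"'.format(key, value))
        elif isinstance(value, bool):
            # Boolean flags are either present or omitted entirely
            if value:
                job_args.append("-{0}".format(key))
        else:
            job_args.append("-{0} {1}".format(key, value))
    return " ".join(job_args)
```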

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -1,410 +0,0 @@
# -*- coding: utf-8 -*-
"""Class for handling Render Settings."""
import six
import sys
from ayon_core.lib import Logger
from ayon_core.settings import get_project_settings
from ayon_core.pipeline import CreatorError, get_current_project_name
from ayon_core.pipeline.context_tools import get_current_folder_entity
from ayon_maya.api.lib import reset_frame_range
class RenderSettings(object):
_image_prefix_nodes = {
'vray': 'vraySettings.fileNamePrefix',
'arnold': 'defaultRenderGlobals.imageFilePrefix',
'renderman': 'rmanGlobals.imageFileFormat',
'redshift': 'defaultRenderGlobals.imageFilePrefix',
'mayahardware2': 'defaultRenderGlobals.imageFilePrefix'
}
_aov_chars = {
"dot": ".",
"dash": "-",
"underscore": "_"
}
log = Logger.get_logger("RenderSettings")
@classmethod
def get_image_prefix_attr(cls, renderer):
return cls._image_prefix_nodes[renderer]
@staticmethod
def get_padding_attr(renderer):
"""Return attribute for renderer that defines frame padding amount"""
if renderer == "vray":
return "vraySettings.fileNamePadding"
else:
return "defaultRenderGlobals.extensionPadding"
def __init__(self, project_settings=None):
if not project_settings:
project_settings = get_project_settings(
get_current_project_name()
)
render_settings = project_settings["maya"]["render_settings"]
image_prefixes = {
"vray": render_settings["vray_renderer"]["image_prefix"],
"arnold": render_settings["arnold_renderer"]["image_prefix"],
"renderman": render_settings["renderman_renderer"]["image_prefix"],
"redshift": render_settings["redshift_renderer"]["image_prefix"]
}
# TODO probably should be stored to more explicit attribute
# Renderman only
renderman_settings = render_settings["renderman_renderer"]
_image_dir = {
"renderman": renderman_settings["image_dir"],
"cryptomatte": renderman_settings["cryptomatte_dir"],
"imageDisplay": renderman_settings["imageDisplay_dir"],
"watermark": renderman_settings["watermark_dir"]
}
self._image_prefixes = image_prefixes
self._image_dir = _image_dir
self._project_settings = project_settings
def set_default_renderer_settings(self, renderer=None):
"""Set basic settings based on renderer."""
# Not all hosts can import this module.
from maya import cmds # noqa: F401
import maya.mel as mel # noqa: F401
if not renderer:
renderer = cmds.getAttr(
'defaultRenderGlobals.currentRenderer').lower()
folder_entity = get_current_folder_entity()
folder_attributes = folder_entity["attrib"]
# project_settings/maya/create/CreateRender/aov_separator
try:
aov_separator = self._aov_chars[(
self._project_settings["maya"]
["render_settings"]
["aov_separator"]
)]
except KeyError:
aov_separator = "_"
reset_frame = self._project_settings["maya"]["render_settings"]["reset_current_frame"] # noqa
if reset_frame:
start_frame = cmds.getAttr("defaultRenderGlobals.startFrame")
cmds.currentTime(start_frame, edit=True)
if renderer in self._image_prefix_nodes:
prefix = self._image_prefixes[renderer]
prefix = prefix.replace("{aov_separator}", aov_separator)
cmds.setAttr(self._image_prefix_nodes[renderer],
prefix, type="string") # noqa
else:
print("{0} isn't a supported renderer to autoset settings.".format(renderer)) # noqa
# TODO: handle not having res values in the doc
width = folder_attributes.get("resolutionWidth")
height = folder_attributes.get("resolutionHeight")
if renderer == "arnold":
# set renderer settings for Arnold from project settings
self._set_arnold_settings(width, height)
if renderer == "vray":
self._set_vray_settings(aov_separator, width, height)
if renderer == "redshift":
self._set_redshift_settings(width, height)
mel.eval("redshiftUpdateActiveAovList")
if renderer == "renderman":
image_dir = self._image_dir["renderman"]
cmds.setAttr("rmanGlobals.imageOutputDir",
image_dir, type="string")
self._set_renderman_settings(width, height,
aov_separator)
def _set_arnold_settings(self, width, height):
"""Sets settings for Arnold."""
from mtoa.core import createOptions # noqa
from mtoa.aovs import AOVInterface # noqa
# Not all hosts can import this module.
from maya import cmds # noqa: F401
import maya.mel as mel # noqa: F401
createOptions()
render_settings = self._project_settings["maya"]["render_settings"]
arnold_render_presets = render_settings["arnold_renderer"] # noqa
# Force resetting settings and AOV list to avoid having to deal with
# AOV checking logic, for now.
# This is a work around because the standard
# function to revert render settings does not reset AOVs list in MtoA
# Fetch current aovs in case there's any.
current_aovs = AOVInterface().getAOVs()
remove_aovs = render_settings["remove_aovs"]
if remove_aovs:
# Remove fetched AOVs
AOVInterface().removeAOVs(current_aovs)
mel.eval("unifiedRenderGlobalsRevertToDefault")
img_ext = arnold_render_presets["image_format"]
img_prefix = arnold_render_presets["image_prefix"]
aovs = arnold_render_presets["aov_list"]
img_tiled = arnold_render_presets["tiled"]
multi_exr = arnold_render_presets["multilayer_exr"]
additional_options = arnold_render_presets["additional_options"]
for aov in aovs:
if aov in current_aovs and not remove_aovs:
continue
AOVInterface('defaultArnoldRenderOptions').addAOV(aov)
cmds.setAttr("defaultResolution.width", width)
cmds.setAttr("defaultResolution.height", height)
self._set_global_output_settings()
cmds.setAttr(
"defaultRenderGlobals.imageFilePrefix", img_prefix, type="string")
cmds.setAttr(
"defaultArnoldDriver.ai_translator", img_ext, type="string")
cmds.setAttr(
"defaultArnoldDriver.exrTiled", img_tiled)
cmds.setAttr(
"defaultArnoldDriver.mergeAOVs", multi_exr)
self._additional_attribs_setter(additional_options)
reset_frame_range(playback=False, fps=False, render=True)
def _set_redshift_settings(self, width, height):
"""Sets settings for Redshift."""
# Not all hosts can import this module.
from maya import cmds # noqa: F401
import maya.mel as mel # noqa: F401
render_settings = self._project_settings["maya"]["render_settings"]
redshift_render_presets = render_settings["redshift_renderer"]
remove_aovs = render_settings["remove_aovs"]
all_rs_aovs = cmds.ls(type='RedshiftAOV')
if remove_aovs:
for aov in all_rs_aovs:
enabled = cmds.getAttr("{}.enabled".format(aov))
if enabled:
cmds.delete(aov)
redshift_aovs = redshift_render_presets["aov_list"]
# list all the aovs
all_rs_aovs = cmds.ls(type='RedshiftAOV')
for rs_aov in redshift_aovs:
rs_layername = "rsAov_{}".format(rs_aov.replace(" ", ""))
if rs_layername in all_rs_aovs:
continue
cmds.rsCreateAov(type=rs_aov)
# update the AOV list
mel.eval("redshiftUpdateActiveAovList")
rs_p_engine = redshift_render_presets["primary_gi_engine"]
rs_s_engine = redshift_render_presets["secondary_gi_engine"]
if int(rs_p_engine) != 0 or int(rs_s_engine) != 0:
cmds.setAttr("redshiftOptions.GIEnabled", 1)
if int(rs_p_engine) == 0:
# reset the primary GI Engine as default
cmds.setAttr("redshiftOptions.primaryGIEngine", 4)
if int(rs_s_engine) == 0:
# reset the secondary GI Engine as default
cmds.setAttr("redshiftOptions.secondaryGIEngine", 2)
else:
cmds.setAttr("redshiftOptions.GIEnabled", 0)
cmds.setAttr("redshiftOptions.primaryGIEngine", int(rs_p_engine))
cmds.setAttr("redshiftOptions.secondaryGIEngine", int(rs_s_engine))
additional_options = redshift_render_presets["additional_options"]
ext = redshift_render_presets["image_format"]
img_exts = ["iff", "exr", "tif", "png", "tga", "jpg"]
img_ext = img_exts.index(ext)
self._set_global_output_settings()
cmds.setAttr("redshiftOptions.imageFormat", img_ext)
cmds.setAttr("defaultResolution.width", width)
cmds.setAttr("defaultResolution.height", height)
self._additional_attribs_setter(additional_options)
def _set_renderman_settings(self, width, height, aov_separator):
"""Sets settings for Renderman"""
# Not all hosts can import this module.
from maya import cmds # noqa: F401
import maya.mel as mel # noqa: F401
rman_render_presets = (
self._project_settings
["maya"]
["render_settings"]
["renderman_renderer"]
)
display_filters = rman_render_presets["display_filters"]
d_filters_number = len(display_filters)
for i in range(d_filters_number):
d_node = cmds.ls(typ=display_filters[i])
if len(d_node) > 0:
filter_nodes = d_node[0]
else:
filter_nodes = cmds.createNode(display_filters[i])
cmds.connectAttr(filter_nodes + ".message",
"rmanGlobals.displayFilters[%i]" % i,
force=True)
if filter_nodes.startswith("PxrImageDisplayFilter"):
imageDisplay_dir = self._image_dir["imageDisplay"]
imageDisplay_dir = imageDisplay_dir.replace("{aov_separator}",
aov_separator)
cmds.setAttr(filter_nodes + ".filename",
imageDisplay_dir, type="string")
sample_filters = rman_render_presets["sample_filters"]
s_filters_number = len(sample_filters)
for n in range(s_filters_number):
s_node = cmds.ls(typ=sample_filters[n])
if len(s_node) > 0:
filter_nodes = s_node[0]
else:
filter_nodes = cmds.createNode(sample_filters[n])
cmds.connectAttr(filter_nodes + ".message",
"rmanGlobals.sampleFilters[%i]" % n,
force=True)
if filter_nodes.startswith("PxrCryptomatte"):
matte_dir = self._image_dir["cryptomatte"]
matte_dir = matte_dir.replace("{aov_separator}",
aov_separator)
cmds.setAttr(filter_nodes + ".filename",
matte_dir, type="string")
elif filter_nodes.startswith("PxrWatermarkFilter"):
watermark_dir = self._image_dir["watermark"]
watermark_dir = watermark_dir.replace("{aov_separator}",
aov_separator)
cmds.setAttr(filter_nodes + ".filename",
watermark_dir, type="string")
additional_options = rman_render_presets["additional_options"]
self._set_global_output_settings()
cmds.setAttr("defaultResolution.width", width)
cmds.setAttr("defaultResolution.height", height)
self._additional_attribs_setter(additional_options)
def _set_vray_settings(self, aov_separator, width, height):
# type: (str, int, int) -> None
"""Sets important settings for Vray."""
# Not all hosts can import this module.
from maya import cmds # noqa: F401
import maya.mel as mel # noqa: F401
settings = cmds.ls(type="VRaySettingsNode")
node = settings[0] if settings else cmds.createNode("VRaySettingsNode")
render_settings = self._project_settings["maya"]["render_settings"]
vray_render_presets = render_settings["vray_renderer"]
# vrayRenderElement
remove_aovs = render_settings["remove_aovs"]
all_vray_aovs = cmds.ls(type='VRayRenderElement')
lightSelect_aovs = cmds.ls(type='VRayRenderElementSet')
if remove_aovs:
for aov in all_vray_aovs:
# remove all aovs except LightSelect
enabled = cmds.getAttr("{}.enabled".format(aov))
if enabled:
cmds.delete(aov)
# remove LightSelect
for light_aovs in lightSelect_aovs:
light_enabled = cmds.getAttr("{}.enabled".format(light_aovs))
if light_enabled:
cmds.delete(light_aovs)
vray_aovs = vray_render_presets["aov_list"]
for renderlayer in vray_aovs:
renderElement = "vrayAddRenderElement {}".format(renderlayer)
RE_name = mel.eval(renderElement)
# if there is more than one same render element
if RE_name.endswith("1"):
cmds.delete(RE_name)
# Set aov separator
# First we need to explicitly set the UI items in Render Settings
# because that is also what V-Ray updates to when that Render Settings
# UI did initialize before and refreshes again.
MENU = "vrayRenderElementSeparator"
if cmds.optionMenuGrp(MENU, query=True, exists=True):
items = cmds.optionMenuGrp(MENU, query=True, ill=True)
separators = [cmds.menuItem(i, query=True, label=True) for i in items] # noqa: E501
try:
sep_idx = separators.index(aov_separator)
except ValueError:
six.reraise(
CreatorError,
CreatorError(
"AOV character {} not in {}".format(
aov_separator, separators)),
sys.exc_info()[2])
cmds.optionMenuGrp(MENU, edit=True, select=sep_idx + 1)
# Set the render element attribute as string. This is also what V-Ray
# sets whenever the `vrayRenderElementSeparator` menu items switch
cmds.setAttr(
"{}.fileNameRenderElementSeparator".format(node),
aov_separator,
type="string"
)
# Set render file format to exr
ext = vray_render_presets["image_format"]
cmds.setAttr("{}.imageFormatStr".format(node), ext, type="string")
# animType
cmds.setAttr("{}.animType".format(node), 1)
# resolution
cmds.setAttr("{}.width".format(node), width)
cmds.setAttr("{}.height".format(node), height)
additional_options = vray_render_presets["additional_options"]
self._additional_attribs_setter(additional_options)
@staticmethod
def _set_global_output_settings():
# Not all hosts can import this module.
from maya import cmds # noqa: F401
import maya.mel as mel # noqa: F401
# enable animation
cmds.setAttr("defaultRenderGlobals.outFormatControl", 0)
cmds.setAttr("defaultRenderGlobals.animation", 1)
cmds.setAttr("defaultRenderGlobals.putFrameBeforeExt", 1)
cmds.setAttr("defaultRenderGlobals.extensionPadding", 4)
def _additional_attribs_setter(self, additional_attribs):
# Not all hosts can import this module.
from maya import cmds # noqa: F401
import maya.mel as mel # noqa: F401
for item in additional_attribs:
attribute = item["attribute"]
value = item["value"]
attribute = str(attribute) # ensure str conversion from settings
attribute_type = cmds.getAttr(attribute, type=True)
if attribute_type in {"long", "bool"}:
cmds.setAttr(attribute, int(value))
elif attribute_type == "string":
cmds.setAttr(attribute, str(value), type="string")
elif attribute_type in {"double", "doubleAngle", "doubleLinear"}:
cmds.setAttr(attribute, float(value))
else:
self.log.error(
"Attribute {attribute} can not be set due to unsupported "
"type: {attribute_type}".format(
attribute=attribute,
attribute_type=attribute_type)
)
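The type dispatch in `_additional_attribs_setter` can be factored as a pure coercion helper; `coerce_attr_value` is a sketch with a name of our choosing, mirroring how values from settings are converted before `cmds.setAttr`:

```python
def coerce_attr_value(attribute_type, value):
    """Coerce a settings value to the Python type setAttr expects."""
    if attribute_type in {"long", "bool"}:
        return int(value)
    if attribute_type == "string":
        return str(value)
    if attribute_type in {"double", "doubleAngle", "doubleLinear"}:
        return float(value)
    raise TypeError(
        "Unsupported attribute type: {0}".format(attribute_type))
```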


@ -1,417 +0,0 @@
# -*- coding: utf-8 -*-
"""Code to get attributes from render layer without switching to it.
https://github.com/Colorbleed/colorbleed-config/blob/acre/colorbleed/maya/lib_rendersetup.py
Credits: Roy Nieterau (BigRoy) / Colorbleed
Modified for use in AYON
"""
from maya import cmds
import maya.api.OpenMaya as om
import logging
import maya.app.renderSetup.model.utils as utils
from maya.app.renderSetup.model import renderSetup
from maya.app.renderSetup.model.override import (
AbsOverride,
RelOverride,
UniqueOverride
)
from ayon_maya.api.lib import get_attribute
EXACT_MATCH = 0
PARENT_MATCH = 1
CLIENT_MATCH = 2
DEFAULT_RENDER_LAYER = "defaultRenderLayer"
log = logging.getLogger(__name__)
def get_rendersetup_layer(layer):
"""Return render setup layer name.
This also converts names from legacy renderLayer node name to render setup
name.
Note: `defaultRenderLayer` is not a renderSetupLayer node, but it is
nevertheless the valid layer name for Render Setup - so we return it as is.
Example:
>>> for legacy_layer in cmds.ls(type="renderLayer"):
>>> layer = get_rendersetup_layer(legacy_layer)
Returns:
str or None: Returns renderSetupLayer node name if `layer` is a valid
layer name in legacy renderlayers or render setup layers.
Returns None if the layer can't be found or Render Setup is
currently disabled.
"""
if layer == DEFAULT_RENDER_LAYER:
# defaultRenderLayer doesn't have a `renderSetupLayer`
return layer
if not cmds.mayaHasRenderSetup():
return None
if not cmds.objExists(layer):
return None
if cmds.nodeType(layer) == "renderSetupLayer":
return layer
# By default Render Setup renames the legacy renderlayer
# to `rs_<layername>` but lets not rely on that as the
# layer node can be renamed manually
connections = cmds.listConnections(layer + ".message",
type="renderSetupLayer",
exactType=True,
source=False,
destination=True,
plugs=True) or []
return next((conn.split(".", 1)[0] for conn in connections
if conn.endswith(".legacyRenderLayer")), None)
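The final connection scan picks the first plug ending in `.legacyRenderLayer` and returns its node name. In isolation (helper name is ours):

```python
def find_rendersetup_node(connections):
    """Return the node name of the first ".legacyRenderLayer" plug."""
    return next((conn.split(".", 1)[0] for conn in connections
                 if conn.endswith(".legacyRenderLayer")), None)
```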
def get_attr_in_layer(node_attr, layer, as_string=True):
"""Return attribute value in Render Setup layer.
This will only work for attributes which can be
retrieved with `maya.cmds.getAttr` and for which
Relative and Absolute overrides are applicable.
Examples:
>>> get_attr_in_layer("defaultResolution.width", layer="layer1")
>>> get_attr_in_layer("defaultRenderGlobals.startFrame", layer="layer")
>>> get_attr_in_layer("transform.translate", layer="layer3")
Args:
node_attr (str): attribute name as 'node.attribute'
layer (str): layer name
Returns:
object: attribute value in layer
"""
def _layer_needs_update(layer):
"""Return whether layer needs updating."""
# Use `getattr` as e.g. DEFAULT_RENDER_LAYER does not have
# the attribute
return getattr(layer, "needsMembershipUpdate", False) or \
getattr(layer, "needsApplyUpdate", False)
def get_default_layer_value(node_attr_):
"""Return attribute value in `DEFAULT_RENDER_LAYER`."""
inputs = cmds.listConnections(node_attr_,
source=True,
destination=False,
# We want to skip conversion nodes since
# an override to `endFrame` could have
# a `unitToTimeConversion` node
# in-between
skipConversionNodes=True,
type="applyOverride") or []
if inputs:
override = inputs[0]
history_overrides = cmds.ls(cmds.listHistory(override,
pruneDagObjects=True),
type="applyOverride")
node = history_overrides[-1] if history_overrides else override
node_attr_ = node + ".original"
return get_attribute(node_attr_, asString=as_string)
layer = get_rendersetup_layer(layer)
rs = renderSetup.instance()
current_layer = rs.getVisibleRenderLayer()
if current_layer.name() == layer:
# Ensure layer is up-to-date
if _layer_needs_update(current_layer):
try:
rs.switchToLayer(current_layer)
except RuntimeError:
# Some cases can cause errors on switching
# the first time with Render Setup layers
# e.g. different overrides to compounds
# and its children plugs. So we just force
# it another time. If it then still fails
# we will let it error out.
rs.switchToLayer(current_layer)
return get_attribute(node_attr, asString=as_string)
overrides = get_attr_overrides(node_attr, layer)
default_layer_value = get_default_layer_value(node_attr)
if not overrides:
return default_layer_value
value = default_layer_value
for match, layer_override, index in overrides:
if isinstance(layer_override, AbsOverride):
# Absolute override
override_value = get_attribute(layer_override.name() + ".attrValue")
if match == EXACT_MATCH:
value = override_value
elif match == PARENT_MATCH:
value = override_value[index]
elif match == CLIENT_MATCH:
value[index] = override_value
elif isinstance(layer_override, RelOverride):
# Relative override
# Value = Original * Multiply + Offset
multiply = get_attribute(layer_override.name() + ".multiply")
offset = get_attribute(layer_override.name() + ".offset")
if match == EXACT_MATCH:
value = value * multiply + offset
elif match == PARENT_MATCH:
value = value * multiply[index] + offset[index]
elif match == CLIENT_MATCH:
value[index] = value[index] * multiply + offset
else:
raise TypeError("Unsupported override: %s" % layer_override)
return value
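The value resolution above can be exercised without Maya: an absolute override replaces the value (fully, via the parent compound, or only at one child index), while a relative override computes `original * multiply + offset`. A minimal pure-Python sketch, using hypothetical `(kind, match, payload, index)` tuples in place of real Render Setup override nodes:

```python
# Hypothetical stand-ins for EXACT_MATCH / PARENT_MATCH / CLIENT_MATCH
EXACT, PARENT, CLIENT = "exact", "parent", "client"


def resolve(value, overrides):
    """Apply override tuples in order of strength (weakest first).

    `payload` is the absolute value for "abs" overrides, or a
    (multiply, offset) pair for "rel" overrides.
    """
    for kind, match, payload, index in overrides:
        if kind == "abs":
            if match == EXACT:
                value = payload
            elif match == PARENT:
                # Override sits on the parent compound; take our child
                value = payload[index]
            elif match == CLIENT:
                # Override sits on one child; replace only that entry
                value[index] = payload
        elif kind == "rel":
            multiply, offset = payload
            if match == EXACT:
                value = value * multiply + offset
            elif match == PARENT:
                value = value * multiply[index] + offset[index]
            elif match == CLIENT:
                value[index] = value[index] * multiply + offset
    return value


print(resolve(10, [("rel", EXACT, (2, 5), None)]))  # 10 * 2 + 5 -> 25
```

The real function seeds `value` with the `defaultRenderLayer` value before applying overrides, which is what `get_default_layer_value` above provides.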
def get_attr_overrides(node_attr, layer,
skip_disabled=True,
skip_local_render=True,
stop_at_absolute_override=True):
"""Return all Overrides applicable to the attribute.
Overrides are returned as a 3-tuple:
(Match, Override, Index)
Match:
This is any of EXACT_MATCH, PARENT_MATCH, CLIENT_MATCH
and defines whether the override is exactly on the
plug, on the parent or on a child plug.
Override:
This is the RenderSetup Override instance.
Index:
This is the Plug index under the parent or for
the child that matches. The EXACT_MATCH index will
always be None. For PARENT_MATCH the index is which
index the plug is under the parent plug. For CLIENT_MATCH
the index is which child index matches the plug.
Args:
node_attr (str): attribute name as 'node.attribute'
layer (str): layer name
skip_disabled (bool): exclude disabled overrides
skip_local_render (bool): exclude overrides marked
as local render.
        stop_at_absolute_override (bool): exclude overrides prior
to the last absolute override as they have
no influence on the resulting value.
Returns:
list: Ordered Overrides in order of strength
"""
def get_mplug_children(plug):
"""Return children MPlugs of compound `MPlug`."""
children = []
if plug.isCompound:
for i in range(plug.numChildren()):
children.append(plug.child(i))
return children
def get_mplug_names(mplug):
"""Return long and short name of `MPlug`."""
long_name = mplug.partialName(useLongNames=True)
short_name = mplug.partialName(useLongNames=False)
return {long_name, short_name}
def iter_override_targets(override):
try:
for target in override._targets():
yield target
except AssertionError:
# Workaround: There is a bug where the private `_targets()` method
# fails on some attribute plugs. For example overrides
# to the defaultRenderGlobals.endFrame
# (Tested in Maya 2020.2)
log.debug("Workaround for %s" % override)
from maya.app.renderSetup.common.utils import findPlug
attr = override.attributeName()
if isinstance(override, UniqueOverride):
node = override.targetNodeName()
yield findPlug(node, attr)
else:
nodes = override.parent().selector().nodes()
for node in nodes:
if cmds.attributeQuery(attr, node=node, exists=True):
yield findPlug(node, attr)
# Get the MPlug for the node.attr
sel = om.MSelectionList()
sel.add(node_attr)
plug = sel.getPlug(0)
layer = get_rendersetup_layer(layer)
if layer == DEFAULT_RENDER_LAYER:
# DEFAULT_RENDER_LAYER will never have overrides
# since it's the default layer
return []
rs_layer = renderSetup.instance().getRenderLayer(layer)
    if rs_layer is None:
        # Renderlayer does not exist, so there are no overrides
        return []
# Get any parent or children plugs as we also
# want to include them in the attribute match
# for overrides
parent = plug.parent() if plug.isChild else None
parent_index = None
if parent:
parent_index = get_mplug_children(parent).index(plug)
children = get_mplug_children(plug)
# Create lookup for the attribute by both long
# and short names
attr_names = get_mplug_names(plug)
for child in children:
attr_names.update(get_mplug_names(child))
if parent:
attr_names.update(get_mplug_names(parent))
# Get all overrides of the layer
# And find those that are relevant to the attribute
plug_overrides = []
# Iterate over the overrides in reverse so we get the last
# overrides first and can "break" whenever an absolute
# override is reached
layer_overrides = list(utils.getOverridesRecursive(rs_layer))
for layer_override in reversed(layer_overrides):
if skip_disabled and not layer_override.isEnabled():
# Ignore disabled overrides
continue
if skip_local_render and layer_override.isLocalRender():
continue
# The targets list can be very large so we'll do
# a quick filter by attribute name to detect whether
# it matches the attribute name, or its parent or child
if layer_override.attributeName() not in attr_names:
continue
override_match = None
for override_plug in iter_override_targets(layer_override):
override_match = None
if plug == override_plug:
override_match = (EXACT_MATCH, layer_override, None)
elif parent and override_plug == parent:
override_match = (PARENT_MATCH, layer_override, parent_index)
elif children and override_plug in children:
child_index = children.index(override_plug)
override_match = (CLIENT_MATCH, layer_override, child_index)
if override_match:
plug_overrides.append(override_match)
break
if (
override_match and
stop_at_absolute_override and
isinstance(layer_override, AbsOverride) and
                # When the override is only on a child plug it does not
                # override the entire value, so we do not stop at this
                # override
):
            # This absolute override defines the full value, so earlier
            # (weaker) overrides can't influence the result; stop here
break
    return list(reversed(plug_overrides))
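The reverse scan above relies on a small invariant worth isolating: overrides are visited strongest-first so collection can stop at the first full absolute override, and the kept subset is then returned weakest-first. A standalone sketch with hypothetical dict records standing in for Render Setup overrides:

```python
def relevant_overrides(overrides):
    """Return the applicable overrides, weakest first.

    `overrides` are hypothetical dicts ordered weakest to strongest,
    e.g. {"kind": "abs"|"rel", "match": "exact"|"client", "enabled": bool}.
    Scanning happens strongest-first so we can break at a full absolute
    override, mirroring `stop_at_absolute_override` above.
    """
    collected = []
    for override in reversed(overrides):
        if not override.get("enabled", True):
            continue  # mirrors skip_disabled
        collected.append(override)
        if override["kind"] == "abs" and override["match"] != "client":
            # A full absolute override hides everything before it
            break
    return list(reversed(collected))
```

Anything weaker than the last full absolute override is never even inspected, which keeps the scan cheap on layers with many overrides.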
def get_shader_in_layer(node, layer):
"""Return the assigned shader in a renderlayer without switching layers.
This has been developed and tested for Legacy Renderlayers and *not* for
Render Setup.
Note: This will also return the shader for any face assignments, however
it will *not* return the components they are assigned to. This could
be implemented, but since Maya's renderlayers are famous for breaking
with face assignments there has been no need for this function to
support that.
Returns:
list: The list of assigned shaders in the given layer.
"""
def _get_connected_shader(plug):
"""Return current shader"""
return cmds.listConnections(plug,
source=False,
destination=True,
plugs=False,
connections=False,
type="shadingEngine") or []
# We check the instObjGroups (shader connection) for layer overrides.
plug = node + ".instObjGroups"
# Ignore complex query if we're in the layer anyway (optimization)
current_layer = cmds.editRenderLayerGlobals(query=True,
currentRenderLayer=True)
if layer == current_layer:
return _get_connected_shader(plug)
connections = cmds.listConnections(plug,
plugs=True,
source=False,
destination=True,
type="renderLayer") or []
    # Use a list, not a lazy `filter` object, so the emptiness
    # check below behaves correctly under Python 3
    connections = [conn for conn in connections
                   if conn.endswith(".outPlug")]
if not connections:
# If no overrides anywhere on the shader, just get the current shader
return _get_connected_shader(plug)
def _get_override(connections, layer):
"""Return the overridden connection for that layer in connections"""
# If there's an override on that layer, return that.
for connection in connections:
if (connection.startswith(layer + ".outAdjustments") and
connection.endswith(".outPlug")):
# This is a shader override on that layer so get the shader
# connected to .outValue of the .outAdjustment[i]
out_adjustment = connection.rsplit(".", 1)[0]
connection_attr = out_adjustment + ".outValue"
override = cmds.listConnections(connection_attr) or []
return override
override_shader = _get_override(connections, layer)
if override_shader is not None:
return override_shader
else:
# Get the override for "defaultRenderLayer" (=masterLayer)
return _get_override(connections, layer="defaultRenderLayer")
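`_get_override` reduces to string matching on connection plug names of the form `<layer>.outAdjustments[i].outPlug`; the sibling `.outValue` of the matched adjustment then holds the shader. The name matching alone can be sketched and tested without Maya (the helper name is hypothetical):

```python
def find_layer_adjustment(connections, layer):
    """Return the `.outAdjustments[i]` plug for `layer`, or None.

    `connections` are plug names as returned by listConnections,
    e.g. 'layer1.outAdjustments[0].outPlug'. Mirrors the prefix/suffix
    test in `_get_override` above.
    """
    for connection in connections:
        if (connection.startswith(layer + ".outAdjustments")
                and connection.endswith(".outPlug")):
            # Strip the trailing '.outPlug' to get the adjustment plug
            return connection.rsplit(".", 1)[0]
    return None
```

In the real function the returned adjustment plug is then queried with `listConnections(adjustment + ".outValue")` to get the shading engine.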


@@ -1,299 +0,0 @@
import os
import json
import logging
from functools import partial
from qtpy import QtWidgets, QtGui
import maya.utils
import maya.cmds as cmds
from ayon_core.pipeline import (
get_current_folder_path,
get_current_task_name,
registered_host
)
from ayon_core.pipeline.workfile import BuildWorkfile
from ayon_core.tools.utils import host_tools
from ayon_maya.api import lib, lib_rendersettings
from .lib import get_main_window, IS_HEADLESS
from ..tools import show_look_assigner
from .workfile_template_builder import (
create_placeholder,
update_placeholder,
build_workfile_template,
update_workfile_template
)
from ayon_core.pipeline.context_tools import version_up_current_workfile
from ayon_core.tools.workfile_template_build import open_template_ui
from .workfile_template_builder import MayaTemplateBuilder
log = logging.getLogger(__name__)
MENU_NAME = "op_maya_menu"
def _get_menu(menu_name=None):
"""Return the menu instance if it currently exists in Maya"""
if menu_name is None:
menu_name = MENU_NAME
widgets = {w.objectName(): w for w in QtWidgets.QApplication.allWidgets()}
return widgets.get(menu_name)
def get_context_label():
return "{}, {}".format(
get_current_folder_path(),
get_current_task_name()
)
def install(project_settings):
if cmds.about(batch=True):
log.info("Skipping AYON menu initialization in batch mode..")
return
def add_menu():
pyblish_icon = host_tools.get_pyblish_icon()
parent_widget = get_main_window()
cmds.menu(
MENU_NAME,
label=os.environ.get("AYON_MENU_LABEL") or "AYON",
tearOff=True,
parent="MayaWindow"
)
# Create context menu
cmds.menuItem(
"currentContext",
label=get_context_label(),
parent=MENU_NAME,
enable=False
)
cmds.setParent("..", menu=True)
try:
if project_settings["core"]["tools"]["ayon_menu"].get(
"version_up_current_workfile"):
cmds.menuItem(divider=True)
cmds.menuItem(
"Version Up Workfile",
command=lambda *args: version_up_current_workfile()
)
except KeyError:
print("Version Up Workfile setting not found in "
"Core Settings. Please update Core Addon")
cmds.menuItem(divider=True)
cmds.menuItem(
"Create...",
command=lambda *args: host_tools.show_publisher(
parent=parent_widget,
tab="create"
)
)
cmds.menuItem(
"Load...",
command=lambda *args: host_tools.show_loader(
parent=parent_widget,
use_context=True
)
)
cmds.menuItem(
"Publish...",
command=lambda *args: host_tools.show_publisher(
parent=parent_widget,
tab="publish"
),
image=pyblish_icon
)
cmds.menuItem(
"Manage...",
command=lambda *args: host_tools.show_scene_inventory(
parent=parent_widget
)
)
cmds.menuItem(
"Library...",
command=lambda *args: host_tools.show_library_loader(
parent=parent_widget
)
)
cmds.menuItem(divider=True)
cmds.menuItem(
"Work Files...",
command=lambda *args: host_tools.show_workfiles(
parent=parent_widget
),
)
cmds.menuItem(
"Set Frame Range",
command=lambda *args: lib.reset_frame_range()
)
cmds.menuItem(
"Set Resolution",
command=lambda *args: lib.reset_scene_resolution()
)
cmds.menuItem(
"Set Colorspace",
command=lambda *args: lib.set_colorspace(),
)
cmds.menuItem(
"Set Render Settings",
command=lambda *args: lib_rendersettings.RenderSettings().set_default_renderer_settings() # noqa
)
cmds.menuItem(divider=True, parent=MENU_NAME)
cmds.menuItem(
"Build First Workfile",
parent=MENU_NAME,
command=lambda *args: BuildWorkfile().process()
)
cmds.menuItem(
"Look assigner...",
command=lambda *args: show_look_assigner(
parent_widget
)
)
cmds.menuItem(
"Experimental tools...",
command=lambda *args: host_tools.show_experimental_tools_dialog(
parent_widget
)
)
builder_menu = cmds.menuItem(
"Template Builder",
subMenu=True,
tearOff=True,
parent=MENU_NAME
)
cmds.menuItem(
"Build Workfile from template",
parent=builder_menu,
command=build_workfile_template
)
cmds.menuItem(
"Update Workfile from template",
parent=builder_menu,
command=update_workfile_template
)
cmds.menuItem(
divider=True,
parent=builder_menu
)
cmds.menuItem(
"Open Template",
parent=builder_menu,
command=lambda *args: open_template_ui(
MayaTemplateBuilder(registered_host()), get_main_window()
),
)
cmds.menuItem(
"Create Placeholder",
parent=builder_menu,
command=create_placeholder
)
cmds.menuItem(
"Update Placeholder",
parent=builder_menu,
command=update_placeholder
)
cmds.setParent(MENU_NAME, menu=True)
def add_scripts_menu(project_settings):
try:
import scriptsmenu.launchformaya as launchformaya
except ImportError:
log.warning(
"Skipping studio.menu install, because "
"'scriptsmenu' module seems unavailable."
)
return
menu_settings = project_settings["maya"]["scriptsmenu"]
menu_name = menu_settings["name"]
config = menu_settings["definition"]
if menu_settings.get("definition_type") == "definition_json":
data = menu_settings["definition_json"]
try:
config = json.loads(data)
except json.JSONDecodeError as exc:
print("Skipping studio menu, error decoding JSON definition.")
log.error(exc)
return
if not config:
log.warning("Skipping studio menu, no definition found.")
return
# run the launcher for Maya menu
studio_menu = launchformaya.main(
title=menu_name.title(),
objectName=menu_name.title().lower().replace(" ", "_")
)
# apply configuration
studio_menu.build_from_configuration(studio_menu, config)
# Allow time for uninstallation to finish.
# We use Maya's executeDeferred instead of QTimer.singleShot
# so that it only gets called after Maya UI has initialized too.
# This is crucial with Maya 2020+ which initializes without UI
# first as a QCoreApplication
maya.utils.executeDeferred(add_menu)
cmds.evalDeferred(partial(add_scripts_menu, project_settings),
lowestPriority=True)
def uninstall():
menu = _get_menu()
if menu:
log.info("Attempting to uninstall ...")
try:
menu.deleteLater()
del menu
except Exception as e:
log.error(e)
def popup():
"""Pop-up the existing menu near the mouse cursor."""
menu = _get_menu()
cursor = QtGui.QCursor()
point = cursor.pos()
menu.exec_(point)
def update_menu_task_label():
"""Update the task label in AYON menu to current session"""
if IS_HEADLESS:
return
object_name = "{}|currentContext".format(MENU_NAME)
if not cmds.menuItem(object_name, query=True, exists=True):
log.warning("Can't find menuItem: {}".format(object_name))
return
label = get_context_label()
cmds.menuItem(object_name, edit=True, label=label)
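`add_scripts_menu` above accepts the studio menu definition either as structured settings or as a raw JSON string (`definition_type == "definition_json"`), skipping the menu on decode errors or an empty result. A standalone sketch of just that selection logic (the `load_menu_definition` helper is hypothetical, not part of the addon):

```python
import json


def load_menu_definition(settings):
    """Return the menu definition, decoding the JSON variant if used.

    Mirrors the fallback above: a malformed JSON string or an empty
    definition yields None so the caller can skip building the menu.
    """
    config = settings.get("definition")
    if settings.get("definition_type") == "definition_json":
        try:
            config = json.loads(settings["definition_json"])
        except json.JSONDecodeError:
            # Bad JSON: skip the studio menu rather than raising
            return None
    return config or None
```

This keeps a typo in studio settings from breaking Maya startup: the menu is simply omitted and the error is logged.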


@@ -1,779 +0,0 @@
import json
import base64
import os
import errno
import logging
import contextlib
import shutil
from maya import utils, cmds, OpenMaya
import maya.api.OpenMaya as om
import pyblish.api
from ayon_core.settings import get_project_settings
from ayon_core.host import (
HostBase,
IWorkfileHost,
ILoadHost,
IPublishHost,
HostDirmap,
)
from ayon_core.tools.utils import host_tools
from ayon_core.tools.workfiles.lock_dialog import WorkfileLockDialog
from ayon_core.lib import (
register_event_callback,
emit_event
)
from ayon_core.pipeline import (
get_current_project_name,
register_loader_plugin_path,
register_inventory_action_path,
register_creator_plugin_path,
register_workfile_build_plugin_path,
deregister_loader_plugin_path,
deregister_inventory_action_path,
deregister_creator_plugin_path,
deregister_workfile_build_plugin_path,
AYON_CONTAINER_ID,
AVALON_CONTAINER_ID,
)
from ayon_core.pipeline.load import any_outdated_containers
from ayon_core.pipeline.workfile.lock_workfile import (
create_workfile_lock,
remove_workfile_lock,
is_workfile_locked,
is_workfile_lock_enabled
)
from ayon_maya import MAYA_ROOT_DIR
from ayon_maya.lib import create_workspace_mel
from . import menu, lib
from .workio import (
open_file,
save_file,
file_extensions,
has_unsaved_changes,
work_root,
current_file
)
log = logging.getLogger("ayon_maya")
PLUGINS_DIR = os.path.join(MAYA_ROOT_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
WORKFILE_BUILD_PATH = os.path.join(PLUGINS_DIR, "workfile_build")
AVALON_CONTAINERS = ":AVALON_CONTAINERS"
# Track whether the workfile tool is about to save
_about_to_save = False
class MayaHost(HostBase, IWorkfileHost, ILoadHost, IPublishHost):
name = "maya"
def __init__(self):
super(MayaHost, self).__init__()
self._op_events = {}
def install(self):
project_name = get_current_project_name()
project_settings = get_project_settings(project_name)
# process path mapping
dirmap_processor = MayaDirmap("maya", project_name, project_settings)
dirmap_processor.process_dirmap()
pyblish.api.register_plugin_path(PUBLISH_PATH)
pyblish.api.register_host("mayabatch")
pyblish.api.register_host("mayapy")
pyblish.api.register_host("maya")
register_loader_plugin_path(LOAD_PATH)
register_creator_plugin_path(CREATE_PATH)
register_inventory_action_path(INVENTORY_PATH)
register_workfile_build_plugin_path(WORKFILE_BUILD_PATH)
self.log.info("Installing callbacks ... ")
register_event_callback("init", on_init)
_set_project()
if lib.IS_HEADLESS:
self.log.info((
"Running in headless mode, skipping Maya save/open/new"
" callback installation.."
))
return
self._register_callbacks()
menu.install(project_settings)
register_event_callback("save", on_save)
register_event_callback("open", on_open)
register_event_callback("new", on_new)
register_event_callback("before.save", on_before_save)
register_event_callback("after.save", on_after_save)
register_event_callback("before.close", on_before_close)
register_event_callback("before.file.open", before_file_open)
register_event_callback("taskChanged", on_task_changed)
register_event_callback("workfile.open.before", before_workfile_open)
register_event_callback("workfile.save.before", before_workfile_save)
register_event_callback(
"workfile.save.before", workfile_save_before_xgen
)
register_event_callback("workfile.save.after", after_workfile_save)
def open_workfile(self, filepath):
return open_file(filepath)
def save_workfile(self, filepath=None):
return save_file(filepath)
def work_root(self, session):
return work_root(session)
def get_current_workfile(self):
return current_file()
def workfile_has_unsaved_changes(self):
return has_unsaved_changes()
def get_workfile_extensions(self):
return file_extensions()
def get_containers(self):
return ls()
@contextlib.contextmanager
def maintained_selection(self):
with lib.maintained_selection():
yield
def get_context_data(self):
data = cmds.fileInfo("OpenPypeContext", query=True)
if not data:
return {}
data = data[0] # Maya seems to return a list
decoded = base64.b64decode(data).decode("utf-8")
return json.loads(decoded)
def update_context_data(self, data, changes):
json_str = json.dumps(data)
encoded = base64.b64encode(json_str.encode("utf-8"))
return cmds.fileInfo("OpenPypeContext", encoded)
def _register_callbacks(self):
for handler, event in self._op_events.copy().items():
if event is None:
continue
try:
OpenMaya.MMessage.removeCallback(event)
self._op_events[handler] = None
except RuntimeError as exc:
self.log.info(exc)
self._op_events[_on_scene_save] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kBeforeSave, _on_scene_save
)
self._op_events[_after_scene_save] = (
OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterSave,
_after_scene_save
)
)
self._op_events[_before_scene_save] = (
OpenMaya.MSceneMessage.addCheckCallback(
OpenMaya.MSceneMessage.kBeforeSaveCheck,
_before_scene_save
)
)
self._op_events[_on_scene_new] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterNew, _on_scene_new
)
self._op_events[_on_maya_initialized] = (
OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kMayaInitialized,
_on_maya_initialized
)
)
self._op_events[_on_scene_open] = (
OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterOpen,
_on_scene_open
)
)
self._op_events[_before_scene_open] = (
OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kBeforeOpen,
_before_scene_open
)
)
self._op_events[_before_close_maya] = (
OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kMayaExiting,
_before_close_maya
)
)
self.log.info("Installed event handler _on_scene_save..")
self.log.info("Installed event handler _before_scene_save..")
self.log.info("Installed event handler _on_after_save..")
self.log.info("Installed event handler _on_scene_new..")
self.log.info("Installed event handler _on_maya_initialized..")
self.log.info("Installed event handler _on_scene_open..")
self.log.info("Installed event handler _check_lock_file..")
self.log.info("Installed event handler _before_close_maya..")
def _set_project():
"""Sets the maya project to the current Session's work directory.
Returns:
None
"""
workdir = os.getenv("AYON_WORKDIR")
try:
os.makedirs(workdir)
except OSError as e:
# An already existing working directory is fine.
if e.errno == errno.EEXIST:
pass
else:
raise
cmds.workspace(workdir, openWorkspace=True)
def _on_maya_initialized(*args):
emit_event("init")
if cmds.about(batch=True):
log.warning("Running batch mode ...")
return
# Keep reference to the main Window, once a main window exists.
lib.get_main_window()
def _on_scene_new(*args):
emit_event("new")
def _after_scene_save(*arg):
emit_event("after.save")
def _on_scene_save(*args):
emit_event("save")
def _on_scene_open(*args):
emit_event("open")
def _before_close_maya(*args):
emit_event("before.close")
def _before_scene_open(*args):
emit_event("before.file.open")
def _before_scene_save(return_code, client_data):
# Default to allowing the action. Registered
# callbacks can optionally set this to False
# in order to block the operation.
OpenMaya.MScriptUtil.setBool(return_code, True)
emit_event(
"before.save",
{"return_code": return_code}
)
def _remove_workfile_lock():
"""Remove workfile lock on current file"""
if not handle_workfile_locks():
return
filepath = current_file()
log.info("Removing lock on current file {}...".format(filepath))
if filepath:
remove_workfile_lock(filepath)
def handle_workfile_locks():
if lib.IS_HEADLESS:
return False
project_name = get_current_project_name()
return is_workfile_lock_enabled(MayaHost.name, project_name)
def uninstall():
pyblish.api.deregister_plugin_path(PUBLISH_PATH)
pyblish.api.deregister_host("mayabatch")
pyblish.api.deregister_host("mayapy")
pyblish.api.deregister_host("maya")
deregister_loader_plugin_path(LOAD_PATH)
deregister_creator_plugin_path(CREATE_PATH)
deregister_inventory_action_path(INVENTORY_PATH)
deregister_workfile_build_plugin_path(WORKFILE_BUILD_PATH)
menu.uninstall()
def parse_container(container):
"""Return the container node's full container data.
Args:
container (str): A container node name.
Returns:
dict: The container schema data for this container node.
"""
data = lib.read(container)
# Backwards compatibility pre-schemas for containers
data["schema"] = data.get("schema", "openpype:container-1.0")
# Append transient data
data["objectName"] = container
return data
def _ls():
"""Yields AYON container node names.
Used by `ls()` to retrieve the nodes and then query the full container's
data.
Yields:
str: AYON container node name (objectSet)
"""
def _maya_iterate(iterator):
"""Helper to iterate a maya iterator"""
while not iterator.isDone():
yield iterator.thisNode()
iterator.next()
ids = {
AYON_CONTAINER_ID,
# Backwards compatibility
AVALON_CONTAINER_ID
}
# Iterate over all 'set' nodes in the scene to detect whether
# they have the ayon container ".id" attribute.
fn_dep = om.MFnDependencyNode()
iterator = om.MItDependencyNodes(om.MFn.kSet)
for mobject in _maya_iterate(iterator):
if mobject.apiTypeStr != "kSet":
# Only match by exact type
continue
fn_dep.setObject(mobject)
if not fn_dep.hasAttribute("id"):
continue
plug = fn_dep.findPlug("id", True)
value = plug.asString()
if value in ids:
yield fn_dep.name()
def ls():
"""Yields containers from active Maya scene
    This is the host-equivalent of api.ls(), but instead of listing
    assets on disk, it lists assets already loaded in Maya; once loaded
    they are called 'containers'.
Yields:
dict: container
"""
container_names = _ls()
for container in sorted(container_names):
yield parse_container(container)
def containerise(name,
namespace,
nodes,
context,
loader=None,
suffix="CON"):
"""Bundle `nodes` into an assembly and imprint it with metadata
Containerisation enables a tracking of version, author and origin
for loaded assets.
Arguments:
name (str): Name of resulting assembly
namespace (str): Namespace under which to host container
nodes (list): Long names of nodes to containerise
context (dict): Asset information
loader (str, optional): Name of loader used to produce this container.
        suffix (str, optional): Suffix of container, defaults to `CON`.
Returns:
container (str): Name of container assembly
"""
container = cmds.sets(nodes, name="%s_%s_%s" % (namespace, name, suffix))
data = [
("schema", "openpype:container-2.0"),
("id", AVALON_CONTAINER_ID),
("name", name),
("namespace", namespace),
("loader", loader),
("representation", context["representation"]["id"]),
]
for key, value in data:
cmds.addAttr(container, longName=key, dataType="string")
cmds.setAttr(container + "." + key, str(value), type="string")
main_container = cmds.ls(AVALON_CONTAINERS, type="objectSet")
if not main_container:
main_container = cmds.sets(empty=True, name=AVALON_CONTAINERS)
# Implement #399: Maya 2019+ hide AVALON_CONTAINERS on creation..
if cmds.attributeQuery("hiddenInOutliner",
node=main_container,
exists=True):
cmds.setAttr(main_container + ".hiddenInOutliner", True)
else:
main_container = main_container[0]
cmds.sets(container, addElement=main_container)
# Implement #399: Maya 2019+ hide containers in outliner
if cmds.attributeQuery("hiddenInOutliner",
node=container,
exists=True):
cmds.setAttr(container + ".hiddenInOutliner", True)
return container
def on_init():
log.info("Running callback on init..")
def safe_deferred(fn):
"""Execute deferred the function in a try-except"""
def _fn():
"""safely call in deferred callback"""
try:
fn()
except Exception as exc:
print(exc)
try:
utils.executeDeferred(_fn)
except Exception as exc:
print(exc)
# Force load Alembic so referenced alembics
# work correctly on scene open
cmds.loadPlugin("AbcImport", quiet=True)
cmds.loadPlugin("AbcExport", quiet=True)
# Force load objExport plug-in (requested by artists)
cmds.loadPlugin("objExport", quiet=True)
if not lib.IS_HEADLESS:
launch_workfiles = os.environ.get("WORKFILES_STARTUP")
if launch_workfiles:
safe_deferred(host_tools.show_workfiles)
from .customize import (
override_component_mask_commands,
override_toolbox_ui
)
safe_deferred(override_component_mask_commands)
safe_deferred(override_toolbox_ui)
def on_before_save():
"""Run validation for scene's FPS prior to saving"""
return lib.validate_fps()
def on_after_save():
"""Check if there is a lockfile after save"""
check_lock_on_current_file()
def check_lock_on_current_file():
"""Check if there is a user opening the file"""
if not handle_workfile_locks():
return
log.info("Running callback on checking the lock file...")
# add the lock file when opening the file
filepath = current_file()
# Skip if current file is 'untitled'
if not filepath:
return
if is_workfile_locked(filepath):
# add lockfile dialog
workfile_dialog = WorkfileLockDialog(filepath)
if not workfile_dialog.exec_():
cmds.file(new=True)
return
create_workfile_lock(filepath)
def on_before_close():
"""Delete the lock file after user quitting the Maya Scene"""
log.info("Closing Maya...")
# delete the lock file
filepath = current_file()
if handle_workfile_locks():
remove_workfile_lock(filepath)
def before_file_open():
"""check lock file when the file changed"""
# delete the lock file
_remove_workfile_lock()
def on_save():
"""Automatically add IDs to new nodes
Any transform of a mesh, without an existing ID, is given one
automatically on file save.
"""
log.info("Running callback on save..")
# remove lockfile if users jumps over from one scene to another
_remove_workfile_lock()
# Generate ids of the current context on nodes in the scene
nodes = lib.get_id_required_nodes(referenced_nodes=False,
existing_ids=False)
for node, new_id in lib.generate_ids(nodes):
lib.set_id(node, new_id, overwrite=False)
# We are now starting the actual save directly
global _about_to_save
_about_to_save = False
def on_open():
"""On scene open let's assume the containers have changed."""
from ayon_core.tools.utils import SimplePopup
# Validate FPS after update_task_from_path to
# ensure it is using correct FPS for the folder
lib.validate_fps()
lib.fix_incompatible_containers()
if any_outdated_containers():
log.warning("Scene has outdated content.")
# Find maya main window
parent = lib.get_main_window()
if parent is None:
log.info("Skipping outdated content pop-up "
"because Maya window can't be found.")
else:
# Show outdated pop-up
def _on_show_inventory():
host_tools.show_scene_inventory(parent=parent)
dialog = SimplePopup(parent=parent)
dialog.setWindowTitle("Maya scene has outdated content")
dialog.set_message("There are outdated containers in "
"your Maya scene.")
dialog.on_clicked.connect(_on_show_inventory)
dialog.show()
# create lock file for the maya scene
check_lock_on_current_file()
def on_new():
"""Set project resolution and fps when create a new file"""
log.info("Running callback on new..")
with lib.suspended_refresh():
lib.set_context_settings()
_remove_workfile_lock()
def on_task_changed():
"""Wrapped function of app initialize and maya's on task changed"""
# Run
menu.update_menu_task_label()
workdir = os.getenv("AYON_WORKDIR")
if os.path.exists(workdir):
log.info("Updating Maya workspace for task change to %s", workdir)
_set_project()
# Set Maya fileDialog's start-dir to /scenes
frule_scene = cmds.workspace(fileRuleEntry="scene")
cmds.optionVar(stringValue=("browserLocationmayaBinaryscene",
workdir + "/" + frule_scene))
else:
log.warning((
"Can't set project for new context because path does not exist: {}"
).format(workdir))
global _about_to_save
if not lib.IS_HEADLESS and _about_to_save:
# Let's prompt the user to update the context settings or not
lib.prompt_reset_context()
def before_workfile_open():
if handle_workfile_locks():
_remove_workfile_lock()
def before_workfile_save(event):
project_name = get_current_project_name()
if handle_workfile_locks():
_remove_workfile_lock()
workdir_path = event["workdir_path"]
if workdir_path:
create_workspace_mel(workdir_path, project_name)
global _about_to_save
_about_to_save = True
def workfile_save_before_xgen(event):
"""Manage Xgen external files when switching context.
Xgen has various external files that needs to be unique and relative to the
workfile, so we need to copy and potentially overwrite these files when
switching context.
Args:
        event (Event): Event instance from ayon_core/lib/events.py
"""
if not cmds.pluginInfo("xgenToolkit", query=True, loaded=True):
return
import xgenm
current_work_dir = os.getenv("AYON_WORKDIR").replace("\\", "/")
expected_work_dir = event.data["workdir_path"].replace("\\", "/")
if current_work_dir == expected_work_dir:
return
palettes = cmds.ls(type="xgmPalette", long=True)
if not palettes:
return
transfers = []
overwrites = []
attribute_changes = {}
attrs = ["xgFileName", "xgBaseFile"]
for palette in palettes:
sanitized_palette = palette.replace("|", "")
project_path = xgenm.getAttr("xgProjectPath", sanitized_palette)
_, maya_extension = os.path.splitext(event.data["filename"])
for attr in attrs:
node_attr = "{}.{}".format(palette, attr)
attr_value = cmds.getAttr(node_attr)
if not attr_value:
continue
source = os.path.join(project_path, attr_value)
attr_value = event.data["filename"].replace(
maya_extension,
"__{}{}".format(
sanitized_palette.replace(":", "__"),
os.path.splitext(attr_value)[1]
)
)
target = os.path.join(expected_work_dir, attr_value)
transfers.append((source, target))
attribute_changes[node_attr] = attr_value
relative_path = xgenm.getAttr(
"xgDataPath", sanitized_palette
).split(os.pathsep)[0]
absolute_path = relative_path.replace("${PROJECT}", project_path)
for root, _, files in os.walk(absolute_path):
for f in files:
source = os.path.join(root, f).replace("\\", "/")
target = source.replace(project_path, expected_work_dir + "/")
transfers.append((source, target))
if os.path.exists(target):
overwrites.append(target)
# Ask user about overwriting files.
if overwrites:
log.warning(
"WARNING! Potential loss of data.\n\n"
"Found duplicate Xgen files in new context.\n{}".format(
"\n".join(overwrites)
)
)
return
for source, destination in transfers:
if not os.path.exists(os.path.dirname(destination)):
os.makedirs(os.path.dirname(destination))
shutil.copy(source, destination)
for attribute, value in attribute_changes.items():
cmds.setAttr(attribute, value, type="string")
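The Xgen handler above renames each copied sidecar file so it is unique per palette: the new workfile's base name, plus the sanitized palette name, plus the source file's extension. That naming rule in isolation (the helper name is hypothetical):

```python
import os


def xgen_target_name(workfile, palette, source_file):
    """Build the unique per-palette file name used when copying Xgen
    sidecar files into a new work directory (sketch of the logic above).

    Palette node names may contain '|' and ':' which are not filesystem
    friendly, so they are stripped/replaced like in the handler.
    """
    base, _ = os.path.splitext(workfile)
    sanitized = palette.replace("|", "").replace(":", "__")
    return "{}__{}{}".format(
        base, sanitized, os.path.splitext(source_file)[1]
    )
```

For example a palette `|char:hair` referenced from `shot_v001.ma` would map its `.xgen` file to `shot_v001__char__hair.xgen` in the new work directory.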
def after_workfile_save(event):
workfile_name = event["filename"]
if (
handle_workfile_locks()
and workfile_name
and not is_workfile_locked(workfile_name)
):
create_workfile_lock(workfile_name)
class MayaDirmap(HostDirmap):
def on_enable_dirmap(self):
cmds.dirmap(en=True)
def dirmap_routine(self, source_path, destination_path):
cmds.dirmap(m=(source_path, destination_path))
cmds.dirmap(m=(destination_path, source_path))

File diff suppressed because it is too large


@@ -1,127 +0,0 @@
# -*- coding: utf-8 -*-
"""Export stuff in render setup layer context.
Export Maya nodes from Render Setup layer as if flattened in that layer instead
of exporting the defaultRenderLayer as Maya forces by default
Credits: Roy Nieterau (BigRoy) / Colorbleed
Modified for use in AYON
"""
import os
import contextlib
from maya import cmds
from maya.app.renderSetup.model import renderSetup
from .lib import pairwise
@contextlib.contextmanager
def allow_export_from_render_setup_layer():
"""Context manager to override Maya settings to allow RS layer export"""
try:
rs = renderSetup.instance()
# Exclude Render Setup nodes from the export
rs._setAllRSNodesDoNotWrite(True)
# Disable Render Setup forcing the switch to master layer
os.environ["MAYA_BATCH_RENDER_EXPORT"] = "1"
yield
finally:
# Reset original state
rs._setAllRSNodesDoNotWrite(False)
os.environ.pop("MAYA_BATCH_RENDER_EXPORT", None)
def export_in_rs_layer(path, nodes, export=None):
"""Export nodes from Render Setup layer.
When exporting from a Render Setup layer, Maya by default
forces a switch to the defaultRenderLayer, making it
impossible to export the contents of a Render Setup
layer. Maya presents this warning message:
# Warning: Exporting Render Setup master layer content #
This function however avoids the renderlayer switch and
exports from the Render Setup layer as if the edits were
'flattened' in the master layer.
It does so by:
- Allowing export from Render Setup Layer
- Enforce Render Setup nodes to NOT be written on export
- Disconnect connections from any `applyOverride` nodes
to flatten the values (so they are written correctly)*
*Connection overrides like Shader Override and Material
Overrides export correctly out of the box since they don't
create an intermediate connection to an 'applyOverride' node.
However, any scalar override (absolute or relative override)
will get input connections in the layer so we'll break those
to 'store' the values on the attribute itself and write value
out instead.
Args:
path (str): File path to export to.
nodes (list): Maya nodes to export.
export (callable, optional): Callback to be used for exporting. If
not specified, default export to `.ma` will be called.
Returns:
None
Raises:
AssertionError: When not in a Render Setup layer an
AssertionError is raised. This command assumes
you are currently in a Render Setup layer.
"""
rs = renderSetup.instance()
assert rs.getVisibleRenderLayer().name() != "defaultRenderLayer", \
("Export in Render Setup layer is only supported when in "
"Render Setup layer")
# Break connection to any value overrides
history = cmds.listHistory(nodes) or []
nodes_all = list(
set(cmds.ls(nodes + history, long=True, objectsOnly=True)))
overrides = cmds.listConnections(nodes_all,
source=True,
destination=False,
type="applyOverride",
plugs=True,
connections=True) or []
for dest, src in pairwise(overrides):
# Even after disconnecting the values
# should be preserved as they were
# Note: animated overrides would be lost for export
cmds.disconnectAttr(src, dest)
# Export Selected
with allow_export_from_render_setup_layer():
cmds.select(nodes, noExpand=True)
if export:
export()
else:
cmds.file(path,
force=True,
typ="mayaAscii",
exportSelected=True,
preserveReferences=False,
channels=True,
constraints=True,
expressions=True,
constructionHistory=True)
if overrides:
# If we have broken override connections then Maya
# is unaware that the Render Setup layer is in an
# invalid state. So let's 'hard reset' the state
# by going to default render layer and switching back
layer = rs.getVisibleRenderLayer()
rs.switchToLayer(None)
rs.switchToLayer(layer)
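The `pairwise` helper imported from `.lib` is consumed above to walk the flat `(destination, source)` plug list returned by `listConnections(connections=True)`. A minimal sketch of such a helper, assuming it pairs consecutive items:

```python
def pairwise(iterable):
    # (s0, s1), (s2, s3), ... - consume the iterable two items at a time.
    # zip() pulls from the same iterator twice per output tuple.
    it = iter(iterable)
    return zip(it, it)

print(list(pairwise(["dst1", "src1", "dst2", "src2"])))
# -> [('dst1', 'src1'), ('dst2', 'src2')]
```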


@@ -1,606 +0,0 @@
import logging
import json
import os
import contextlib
import copy
import six
import ayon_api
from maya import cmds
from ayon_core.pipeline import (
schema,
discover_loader_plugins,
loaders_from_representation,
load_container,
update_container,
remove_container,
get_representation_path,
get_current_project_name,
)
from ayon_maya.api.lib import (
matrix_equals,
unique_namespace,
get_container_transforms,
DEFAULT_MATRIX
)
log = logging.getLogger("PackageLoader")
def to_namespace(node, namespace):
"""Return node name as if it's inside the namespace.
Args:
node (str): Node name
namespace (str): Namespace
Returns:
str: The node in the namespace.
"""
namespace_prefix = "|{}:".format(namespace)
node = namespace_prefix.join(node.split("|"))
return node
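Because the function splits on `|` and joins with the prefix, every component of a full DAG path gets the namespace, including the root. A pure-Python check, runnable without Maya:

```python
def to_namespace(node, namespace):
    # Same logic as above: prefix every path component with the namespace.
    namespace_prefix = "|{}:".format(namespace)
    return namespace_prefix.join(node.split("|"))

print(to_namespace("|grp|child", "asset_01"))
# -> |asset_01:grp|asset_01:child
```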
@contextlib.contextmanager
def namespaced(namespace, new=True):
"""Work inside namespace during context
Args:
namespace (str): The namespace to work in during the context.
new (bool): When enabled this will rename the namespace to a unique
namespace if the input namespace already exists.
Yields:
str: The namespace that is used during the context
"""
original = cmds.namespaceInfo(cur=True)
if new:
namespace = unique_namespace(namespace)
cmds.namespace(add=namespace)
try:
cmds.namespace(set=namespace)
yield namespace
finally:
cmds.namespace(set=original)
@contextlib.contextmanager
def unlocked(nodes):
# Get node state by Maya's uuid
nodes = cmds.ls(nodes, long=True)
uuids = cmds.ls(nodes, uuid=True)
states = cmds.lockNode(nodes, query=True, lock=True)
states = {uuid: state for uuid, state in zip(uuids, states)}
originals = {uuid: node for uuid, node in zip(uuids, nodes)}
try:
cmds.lockNode(nodes, lock=False)
yield
finally:
# Reapply original states
for uuid, state in states.items():
nodes_from_id = cmds.ls(uuid, long=True)
if nodes_from_id:
node = nodes_from_id[0]
else:
# The uuid no longer resolves; fall back to the original node name.
node = originals[uuid]
log.debug("Falling back to node name: %s", node)
if not cmds.objExists(node):
log.warning("Unable to find: %s", node)
continue
cmds.lockNode(node, lock=state)
def load_package(filepath, name, namespace=None):
"""Load a package that was gathered elsewhere.
A package is a group of published instances, possibly with additional data
in a hierarchy.
"""
if namespace is None:
# Define a unique namespace for the package
namespace = os.path.basename(filepath).split(".")[0]
namespace = unique_namespace(namespace)
assert isinstance(namespace, six.string_types)
# Load the setdress package data
with open(filepath, "r") as fp:
data = json.load(fp)
# Load the setdress alembic hierarchy
# We import this into the namespace in which we'll load the package's
# instances into afterwards.
alembic = filepath.replace(".json", ".abc")
hierarchy = cmds.file(alembic,
reference=True,
namespace=namespace,
returnNewNodes=True,
groupReference=True,
groupName="{}:{}".format(namespace, name),
typ="Alembic")
# Get the top root node (the reference group)
root = "{}:{}".format(namespace, name)
containers = []
all_loaders = discover_loader_plugins()
for representation_id, instances in data.items():
# Find the compatible loaders
loaders = loaders_from_representation(
all_loaders, representation_id
)
for instance in instances:
container = _add(instance=instance,
representation_id=representation_id,
loaders=loaders,
namespace=namespace,
root=root)
containers.append(container)
# TODO: Do we want to cripple? Or do we want to add a 'parent' parameter?
# Cripple the original AYON containers so they don't show up in the
# manager
# for container in containers:
# cmds.setAttr("%s.id" % container,
# "setdress.container",
# type="string")
# TODO: Lock all loaded nodes
# This is to ensure the hierarchy remains unaltered by the artists
# for node in nodes:
# cmds.lockNode(node, lock=True)
return containers + hierarchy
def _add(instance, representation_id, loaders, namespace, root="|"):
"""Add an item from the package
Args:
instance (dict):
representation_id (str):
loaders (list):
namespace (str):
Returns:
str: The created AYON container.
"""
# Process within the namespace
with namespaced(namespace, new=False) as namespace:
# Get the used loader
Loader = next((x for x in loaders if
x.__name__ == instance['loader']),
None)
if Loader is None:
log.warning("Loader is missing: %s. Skipping %s",
instance['loader'], instance)
raise RuntimeError("Loader is missing.")
container = load_container(
Loader,
representation_id,
namespace=instance['namespace']
)
# Get the root from the loaded container
loaded_root = get_container_transforms({"objectName": container},
root=True)
# Apply matrix to root node (if any matrix edits)
matrix = instance.get("matrix", None)
if matrix:
cmds.xform(loaded_root, objectSpace=True, matrix=matrix)
# Parent into the setdress hierarchy
# Namespace is missing from parent node(s), add namespace
# manually
parent = root + to_namespace(instance["parent"], namespace)
cmds.parent(loaded_root, parent, relative=True)
return container
# Store root nodes based on representation and namespace
def _instances_by_namespace(data):
"""Rebuild instance data so we can look it up by namespace.
Note that the `representation` is added into the instance's
data with a `representation` key.
Args:
data (dict): scene build data
Returns:
dict
"""
result = {}
# Add new assets
for representation_id, instances in data.items():
# Ensure we leave the source data unaltered
instances = copy.deepcopy(instances)
for instance in instances:
instance['representation'] = representation_id
result[instance['namespace']] = instance
return result
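The lookup rebuild is plain dictionary reshaping and can be exercised outside Maya (the sample data below is illustrative, not real build data):

```python
import copy

def instances_by_namespace(data):
    # Flatten {representation_id: [instances]} into {namespace: instance},
    # stamping each instance with its representation id. deepcopy keeps
    # the source build data unaltered.
    result = {}
    for representation_id, instances in data.items():
        for instance in copy.deepcopy(instances):
            instance["representation"] = representation_id
            result[instance["namespace"]] = instance
    return result

data = {"repre-1": [{"namespace": "chair_01", "loader": "ReferenceLoader"}]}
lookup = instances_by_namespace(data)
print(lookup["chair_01"]["representation"])
# -> repre-1
```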
def get_contained_containers(container):
"""Get the AYON containers in this container
Args:
container (dict): The container dict.
Returns:
list: A list of member container dictionaries.
"""
from .pipeline import parse_container
# Get AYON containers in this package setdress container
containers = []
members = cmds.sets(container['objectName'], query=True)
for node in cmds.ls(members, type="objectSet"):
try:
member_container = parse_container(node)
containers.append(member_container)
except schema.ValidationError:
pass
return containers
def update_package_version(container, version):
"""
Update package by version number
Args:
container (dict): container data of the container node
version (int): the new version number of the package
Returns:
None
"""
# Versioning (from `core.maya.pipeline`)
project_name = get_current_project_name()
repre_id = container["representation"]
current_representation = ayon_api.get_representation_by_id(
project_name, repre_id
)
assert current_representation is not None, "This is a bug"
(
version_entity,
product_entity,
folder_entity,
project_entity
) = ayon_api.get_representation_parents(project_name, repre_id)
if version == -1:
new_version = ayon_api.get_last_version_by_product_id(
project_name, product_entity["id"]
)
else:
new_version = ayon_api.get_version_by_name(
project_name, version, product_entity["id"]
)
if new_version is None:
raise ValueError("Version not found: {}".format(version))
# Get the new representation (new file)
new_representation = ayon_api.get_representation_by_name(
project_name, current_representation["name"], new_version["id"]
)
# TODO there is 'get_representation_context' to get the context which
# could be possible to use here
new_context = {
"project": project_entity,
"folder": folder_entity,
"product": product_entity,
"version": version_entity,
"representation": new_representation,
}
update_package(container, new_context)
def update_package(set_container, context):
"""Update any matrix changes in the scene based on the new data
Args:
set_container (dict): container data from `ls()`
context (dict): the representation document from the database
Returns:
None
"""
# Load the original package data
project_name = context["project"]["name"]
repre_entity = context["representation"]
current_representation = ayon_api.get_representation_by_id(
project_name, set_container["representation"]
)
current_file = get_representation_path(current_representation)
assert current_file.endswith(".json")
with open(current_file, "r") as fp:
current_data = json.load(fp)
# Load the new package data
new_file = get_representation_path(repre_entity)
assert new_file.endswith(".json")
with open(new_file, "r") as fp:
new_data = json.load(fp)
# Update scene content
containers = get_contained_containers(set_container)
update_scene(set_container, containers, current_data, new_data, new_file)
# TODO: This should be handled by the pipeline itself
cmds.setAttr(set_container['objectName'] + ".representation",
context["representation"]["id"], type="string")
def update_scene(set_container, containers, current_data, new_data, new_file):
"""Updates the hierarchy, assets and their matrix
Updates the following within the scene:
* Setdress hierarchy alembic
* Matrix
* Parenting
* Representations
It removes any assets which are not present in the new build data
Args:
set_container (dict): the setdress container of the scene
containers (list): the list of containers under the setdress container
current_data (dict): the current build data of the setdress
new_data (dict): the new build data of the setdress
Returns:
processed_containers (list): all new and updated containers
"""
set_namespace = set_container['namespace']
project_name = get_current_project_name()
# Update the setdress hierarchy alembic
set_root = get_container_transforms(set_container, root=True)
set_hierarchy_root = cmds.listRelatives(set_root, fullPath=True)[0]
set_hierarchy_reference = cmds.referenceQuery(set_hierarchy_root,
referenceNode=True)
new_alembic = new_file.replace(".json", ".abc")
assert os.path.exists(new_alembic), "%s does not exist." % new_alembic
with unlocked(cmds.listRelatives(set_root, ad=True, fullPath=True)):
cmds.file(new_alembic,
loadReference=set_hierarchy_reference,
type="Alembic")
identity = DEFAULT_MATRIX[:]
processed_namespaces = set()
processed_containers = list()
new_lookup = _instances_by_namespace(new_data)
old_lookup = _instances_by_namespace(current_data)
repre_ids = set()
containers_for_repre_compare = []
for container in containers:
container_ns = container['namespace']
# Consider it processed here; even if it fails we want to store that
# the namespace was already available.
processed_namespaces.add(container_ns)
processed_containers.append(container['objectName'])
if container_ns not in new_lookup:
# Remove this container because it's not in the new data
log.warning("Removing content: %s", container_ns)
remove_container(container)
continue
root = get_container_transforms(container, root=True)
if not root:
log.error("Can't find root for %s", container['objectName'])
continue
old_instance = old_lookup.get(container_ns, {})
new_instance = new_lookup[container_ns]
# Update the matrix
# check matrix against old_data matrix to find local overrides
current_matrix = cmds.xform(root,
query=True,
matrix=True,
objectSpace=True)
original_matrix = old_instance.get("matrix", identity)
has_matrix_override = not matrix_equals(current_matrix,
original_matrix)
if has_matrix_override:
log.warning("Matrix override preserved on %s", container_ns)
else:
new_matrix = new_instance.get("matrix", identity)
cmds.xform(root, matrix=new_matrix, objectSpace=True)
# Update the parenting
if old_instance.get("parent", None) != new_instance["parent"]:
parent = to_namespace(new_instance['parent'], set_namespace)
if not cmds.objExists(parent):
log.error("Can't find parent %s", parent)
continue
# Set the new parent
cmds.lockNode(root, lock=False)
root = cmds.parent(root, parent, relative=True)
cmds.lockNode(root, lock=True)
# Update the representation
representation_current = container['representation']
representation_old = old_instance['representation']
representation_new = new_instance['representation']
has_representation_override = (representation_current !=
representation_old)
if representation_new == representation_current:
continue
if has_representation_override:
log.warning("Your scene had local representation "
"overrides within the set. New "
"representations not loaded for %s.",
container_ns)
continue
# We check it against the current 'loader' in the scene instead
# of the original data of the package that was loaded because
# an Artist might have made scene local overrides
if new_instance['loader'] != container['loader']:
log.warning("Loader is switched - local edits will be "
"lost. Removing: %s",
container_ns)
# Remove this from the "has been processed" list so it's
# considered as new element and added afterwards.
processed_containers.pop()
processed_namespaces.remove(container_ns)
remove_container(container)
continue
# Check whether the conversion can be done by the Loader.
# They *must* use the same folder, product and Loader for
# `update_container` to make sense.
repre_ids.add(representation_current)
repre_ids.add(representation_new)
containers_for_repre_compare.append(
(container, representation_current, representation_new)
)
repre_entities_by_id = {
repre_entity["id"]: repre_entity
for repre_entity in ayon_api.get_representations(
project_name, representation_ids=repre_ids
)
}
repre_parents_by_id = ayon_api.get_representations_parents(
project_name, repre_ids
)
for (
container,
repre_current_id,
repre_new_id
) in containers_for_repre_compare:
current_repre = repre_entities_by_id[repre_current_id]
current_parents = repre_parents_by_id[repre_current_id]
new_repre = repre_entities_by_id[repre_new_id]
new_parents = repre_parents_by_id[repre_new_id]
is_valid = compare_representations(
current_repre, current_parents, new_repre, new_parents
)
if not is_valid:
log.error("Skipping: %s. See log for details.",
container["namespace"])
continue
new_version = new_parents.version["version"]
update_container(container, version=new_version)
# Add new assets
all_loaders = discover_loader_plugins()
for representation_id, instances in new_data.items():
# Find the compatible loaders
loaders = loaders_from_representation(
all_loaders, representation_id
)
for instance in instances:
# Already processed in update functionality
if instance['namespace'] in processed_namespaces:
continue
container = _add(instance=instance,
representation_id=representation_id,
loaders=loaders,
namespace=set_container['namespace'],
root=set_root)
# Add to the setdress container
cmds.sets(container,
addElement=set_container['objectName'])
processed_containers.append(container)
return processed_containers
def compare_representations(
current_repre, current_parents, new_repre, new_parents
):
"""Check if the old representation given can be updated
Due to limitations of the `update_container` function we cannot allow
differences in the following data:
* Representation name (extension)
* Folder id
* Product id
If any of those data values differ, the function logs an error and
returns False.
Args:
current_repre (dict[str, Any]): Current representation entity.
current_parents (RepresentationParents): Current
representation parents.
new_repre (dict[str, Any]): New representation entity.
new_parents (RepresentationParents): New representation parents.
Returns:
bool: True if the representations are compatible, False otherwise.
"""
if current_repre["name"] != new_repre["name"]:
log.error("Cannot switch extensions")
return False
# TODO add better validation e.g. based on parent ids
if current_parents.folder["id"] != new_parents.folder["id"]:
log.error("Changing folders between updates is not supported.")
return False
if current_parents.product["id"] != new_parents.product["id"]:
log.error("Changing products between updates is not supported.")
return False
return True


@@ -1,290 +0,0 @@
import json
from maya import cmds
from ayon_core.pipeline import (
registered_host,
get_current_folder_path,
AYON_INSTANCE_ID,
AVALON_INSTANCE_ID,
)
from ayon_core.pipeline.workfile.workfile_template_builder import (
TemplateAlreadyImported,
AbstractTemplateBuilder,
PlaceholderPlugin,
PlaceholderItem,
)
from ayon_core.tools.workfile_template_build import (
WorkfileBuildPlaceholderDialog,
)
from .lib import read, imprint, get_main_window
PLACEHOLDER_SET = "PLACEHOLDERS_SET"
class MayaTemplateBuilder(AbstractTemplateBuilder):
"""Concrete implementation of AbstractTemplateBuilder for maya"""
use_legacy_creators = True
def import_template(self, path):
"""Import template into current scene.
Block if a template is already loaded.
Args:
path (str): A path to current template (usually given by
get_template_preset implementation)
Returns:
bool: Whether the template was successfully imported or not
"""
if cmds.objExists(PLACEHOLDER_SET):
raise TemplateAlreadyImported((
"Build template already loaded\n"
"Clean scene if needed (File > New Scene)"
))
cmds.sets(name=PLACEHOLDER_SET, empty=True)
new_nodes = cmds.file(
path,
i=True,
returnNewNodes=True,
preserveReferences=True,
loadReferenceDepth="all",
)
# make default cameras non-renderable
default_cameras = [cam for cam in cmds.ls(cameras=True)
if cmds.camera(cam, query=True, startupCamera=True)]
for cam in default_cameras:
if not cmds.attributeQuery("renderable", node=cam, exists=True):
self.log.debug(
"Camera {} has no attribute 'renderable'".format(cam)
)
continue
cmds.setAttr("{}.renderable".format(cam), 0)
cmds.setAttr(PLACEHOLDER_SET + ".hiddenInOutliner", True)
imported_sets = cmds.ls(new_nodes, set=True)
if not imported_sets:
return True
# update imported sets information
folder_path = get_current_folder_path()
for node in imported_sets:
if not cmds.attributeQuery("id", node=node, exists=True):
continue
if cmds.getAttr("{}.id".format(node)) not in {
AYON_INSTANCE_ID, AVALON_INSTANCE_ID
}:
continue
if not cmds.attributeQuery("folderPath", node=node, exists=True):
continue
cmds.setAttr(
"{}.folderPath".format(node), folder_path, type="string")
return True
class MayaPlaceholderPlugin(PlaceholderPlugin):
"""Base Placeholder Plugin for Maya with one unified cache.
Creates a locator as placeholder node, which during populate provide
all of its attributes defined on the locator's transform in
`placeholder.data` and where `placeholder.scene_identifier` is the
full path to the node.
Inherited classes must still implement `populate_placeholder`
"""
use_selection_as_parent = True
item_class = PlaceholderItem
def _create_placeholder_name(self, placeholder_data):
return self.identifier.replace(".", "_")
def _collect_scene_placeholders(self):
nodes_by_identifier = self.builder.get_shared_populate_data(
"placeholder_nodes"
)
if nodes_by_identifier is None:
# Cache placeholder data to shared data
nodes = cmds.ls("*.plugin_identifier", long=True, objectsOnly=True)
nodes_by_identifier = {}
for node in nodes:
identifier = cmds.getAttr("{}.plugin_identifier".format(node))
nodes_by_identifier.setdefault(identifier, []).append(node)
# Set the cache
self.builder.set_shared_populate_data(
"placeholder_nodes", nodes_by_identifier
)
return nodes_by_identifier
def create_placeholder(self, placeholder_data):
parent = None
if self.use_selection_as_parent:
selection = cmds.ls(selection=True)
if len(selection) > 1:
raise ValueError(
"More than one node is selected. "
"Please select only one to define the parent."
)
parent = selection[0] if selection else None
placeholder_data["plugin_identifier"] = self.identifier
placeholder_name = self._create_placeholder_name(placeholder_data)
placeholder = cmds.spaceLocator(name=placeholder_name)[0]
if parent:
placeholder = cmds.parent(placeholder, parent)[0]
self.imprint(placeholder, placeholder_data)
def update_placeholder(self, placeholder_item, placeholder_data):
node_name = placeholder_item.scene_identifier
changed_values = {}
for key, value in placeholder_data.items():
if value != placeholder_item.data.get(key):
changed_values[key] = value
# Delete attributes to ensure we imprint new data with correct type
for key, value in changed_values.items():
placeholder_item.data[key] = value
if cmds.attributeQuery(key, node=node_name, exists=True):
attribute = "{}.{}".format(node_name, key)
cmds.deleteAttr(attribute)
self.imprint(node_name, changed_values)
def collect_placeholders(self):
placeholders = []
nodes_by_identifier = self._collect_scene_placeholders()
for node in nodes_by_identifier.get(self.identifier, []):
# TODO do data validations and maybe upgrades if they are invalid
placeholder_data = self.read(node)
placeholders.append(
self.item_class(scene_identifier=node,
data=placeholder_data,
plugin=self)
)
return placeholders
def post_placeholder_process(self, placeholder, failed):
"""Cleanup placeholder after load of its corresponding representations.
Hide placeholder, add them to placeholder set.
Used only by PlaceholderCreateMixin and PlaceholderLoadMixin
Args:
placeholder (PlaceholderItem): Item which was just used to load
representation.
failed (bool): Loading of representation failed.
"""
# Hide placeholder and add them to placeholder set
node = placeholder.scene_identifier
# If we just populate the placeholders from current scene, the
# placeholder set will not be created so account for that.
if not cmds.objExists(PLACEHOLDER_SET):
cmds.sets(name=PLACEHOLDER_SET, empty=True)
cmds.sets(node, addElement=PLACEHOLDER_SET)
cmds.hide(node)
cmds.setAttr("{}.hiddenInOutliner".format(node), True)
def delete_placeholder(self, placeholder):
"""Remove placeholder if building was successful
Used only by PlaceholderCreateMixin and PlaceholderLoadMixin.
"""
node = placeholder.scene_identifier
# To avoid that deleting a placeholder node will have Maya delete
# any objectSets the node was a member of we will first remove it
# from any sets it was a member of. This way the `PLACEHOLDERS_SET`
# will survive long enough
sets = cmds.listSets(o=node) or []
for object_set in sets:
cmds.sets(node, remove=object_set)
cmds.delete(node)
def imprint(self, node, data):
"""Imprint call for placeholder node"""
# Complicated data that can't be represented as flat maya attributes
# we write to json strings, e.g. multiselection EnumDef
for key, value in data.items():
if isinstance(value, (list, tuple, dict)):
data[key] = "JSON::{}".format(json.dumps(value))
imprint(node, data)
def read(self, node):
"""Read call for placeholder node"""
data = read(node)
# Complicated data that can't be represented as flat maya attributes
# we read from json strings, e.g. multiselection EnumDef
for key, value in data.items():
if isinstance(value, str) and value.startswith("JSON::"):
value = value[len("JSON::"):]  # strip off the JSON:: prefix
data[key] = json.loads(value)
return data
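The `JSON::` convention used by `imprint` and `read` above round-trips complex values through flat string attributes. A standalone sketch of the encode/decode pair (helper names are illustrative):

```python
import json

def encode_attr(value):
    # Complex values become JSON strings so they fit flat Maya attributes.
    if isinstance(value, (list, tuple, dict)):
        return "JSON::{}".format(json.dumps(value))
    return value

def decode_attr(value):
    # Reverse the encoding: strip the marker prefix and parse the JSON.
    if isinstance(value, str) and value.startswith("JSON::"):
        return json.loads(value[len("JSON::"):])
    return value

print(decode_attr(encode_attr(["a", "b"])))
# -> ['a', 'b']
```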
def build_workfile_template(*args):
builder = MayaTemplateBuilder(registered_host())
builder.build_template()
def update_workfile_template(*args):
builder = MayaTemplateBuilder(registered_host())
builder.rebuild_template()
def create_placeholder(*args):
host = registered_host()
builder = MayaTemplateBuilder(host)
window = WorkfileBuildPlaceholderDialog(host, builder,
parent=get_main_window())
window.show()
def update_placeholder(*args):
host = registered_host()
builder = MayaTemplateBuilder(host)
placeholder_items_by_id = {
placeholder_item.scene_identifier: placeholder_item
for placeholder_item in builder.get_placeholders()
}
placeholder_items = []
for node_name in cmds.ls(selection=True, long=True):
if node_name in placeholder_items_by_id:
placeholder_items.append(placeholder_items_by_id[node_name])
# TODO show UI at least
if len(placeholder_items) == 0:
raise ValueError("No node selected")
if len(placeholder_items) > 1:
raise ValueError("Too many selected nodes")
placeholder_item = placeholder_items[0]
window = WorkfileBuildPlaceholderDialog(host, builder,
parent=get_main_window())
window.set_update_mode(placeholder_item)
window.exec_()


@@ -1,66 +0,0 @@
"""Host API required Work Files tool"""
import os
from maya import cmds
def file_extensions():
return [".ma", ".mb"]
def has_unsaved_changes():
return cmds.file(query=True, modified=True)
def save_file(filepath):
cmds.file(rename=filepath)
ext = os.path.splitext(filepath)[1]
if ext == ".mb":
file_type = "mayaBinary"
else:
file_type = "mayaAscii"
cmds.file(save=True, type=file_type)
def open_file(filepath):
return cmds.file(filepath, open=True, force=True)
def current_file():
current_filepath = cmds.file(query=True, sceneName=True)
if not current_filepath:
return None
return current_filepath
def work_root(session):
work_dir = session["AYON_WORKDIR"]
scene_dir = None
# Query scene file rule from workspace.mel if it exists in WORKDIR
# We are parsing the workspace.mel manually as opposed to temporarily
# setting the Workspace in Maya in a context manager since Maya had a
# tendency to crash on frequently changing the workspace when this
# function was called many times as one scrolled through Work Files assets.
workspace_mel = os.path.join(work_dir, "workspace.mel")
if os.path.exists(workspace_mel):
scene_rule = 'workspace -fr "scene" '
# We need to use builtins as `open` is overridden by the workio API
open_file = __builtins__["open"]
with open_file(workspace_mel, "r") as f:
for line in f:
if line.strip().startswith(scene_rule):
# remainder == "rule";
remainder = line[len(scene_rule):]
# scene_dir == rule
scene_dir = remainder.split('"')[1]
else:
# We can't query a workspace that does not exist
# so we return similar to what we do in other hosts.
scene_dir = session.get("AVALON_SCENEDIR")
if scene_dir:
return os.path.join(work_dir, scene_dir)
else:
return work_dir
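The scene-rule parsing above relies on MEL's quoting: everything after the rule prefix is split on `"` and the first quoted token is the directory. A standalone sketch against a typical `workspace.mel` line:

```python
# A representative workspace.mel scene rule line (illustrative content).
line = 'workspace -fr "scene" "scenes";\n'
scene_rule = 'workspace -fr "scene" '

scene_dir = None
if line.strip().startswith(scene_rule):
    # Everything after the rule prefix, then the first quoted token.
    remainder = line[len(scene_rule):]
    scene_dir = remainder.split('"')[1]

print(scene_dir)
# -> scenes
```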


@@ -1,101 +0,0 @@
from typing import List
from maya import cmds
def get_yeti_user_variables(yeti_shape_node: str) -> List[str]:
"""Get user defined yeti user variables for a `pgYetiMaya` shape node.
Arguments:
yeti_shape_node (str): The `pgYetiMaya` shape node.
Returns:
list: Attribute names (for a vector attribute it only lists the top
parent attribute, not the attribute per axis)
"""
attrs = cmds.listAttr(yeti_shape_node,
userDefined=True,
string=("yetiVariableV_*",
"yetiVariableF_*")) or []
valid_attrs = []
for attr in attrs:
attr_type = cmds.attributeQuery(attr, node=yeti_shape_node,
attributeType=True)
if attr.startswith("yetiVariableV_") and attr_type == "double3":
# vector
valid_attrs.append(attr)
elif attr.startswith("yetiVariableF_") and attr_type == "double":
valid_attrs.append(attr)
return valid_attrs
def create_yeti_variable(yeti_shape_node: str,
attr_name: str,
value=None,
force_value: bool = False) -> bool:
"""Create a yeti user variable on a `pgYetiMaya` shape node.
Arguments:
yeti_shape_node (str): The `pgYetiMaya` shape node.
attr_name (str): The fully qualified yeti variable name, e.g.
"yetiVariableF_myfloat" or "yetiVariableV_myvector"
value (object): The value to set (must match the type of the attribute)
When value is None it will be ignored and not be set.
force_value (bool): Whether to set the value if the attribute already
exists or not.
Returns:
bool: Whether the attribute value was set or not.
"""
exists = cmds.attributeQuery(attr_name, node=yeti_shape_node, exists=True)
if not exists:
if attr_name.startswith("yetiVariableV_"):
_create_vector_yeti_user_variable(yeti_shape_node, attr_name)
if attr_name.startswith("yetiVariableF_"):
_create_float_yeti_user_variable(yeti_shape_node, attr_name)
if value is not None and (not exists or force_value):
plug = "{}.{}".format(yeti_shape_node, attr_name)
if (
isinstance(value, (list, tuple))
and attr_name.startswith("yetiVariableV_")
):
cmds.setAttr(plug, *value, type="double3")
else:
cmds.setAttr(plug, value)
return True
return False
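The naming convention drives the attribute type: a pure-Python sketch of the prefix dispatch used above (`yeti_variable_type` is a hypothetical helper, not part of the addon):

```python
def yeti_variable_type(attr_name):
    # "V_" variables are vectors (double3); "F_" variables floats (double).
    if attr_name.startswith("yetiVariableV_"):
        return "double3"
    if attr_name.startswith("yetiVariableF_"):
        return "double"
    raise ValueError("Not a yeti user variable: {}".format(attr_name))

print(yeti_variable_type("yetiVariableF_myfloat"))
# -> double
```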
def _create_vector_yeti_user_variable(yeti_shape_node: str, attr_name: str):
if not attr_name.startswith("yetiVariableV_"):
raise ValueError("Must start with yetiVariableV_")
cmds.addAttr(yeti_shape_node,
longName=attr_name,
attributeType="double3",
cachedInternally=True,
keyable=True)
for axis in "XYZ":
cmds.addAttr(yeti_shape_node,
longName="{}{}".format(attr_name, axis),
attributeType="double",
parent=attr_name,
cachedInternally=True,
keyable=True)
def _create_float_yeti_user_variable(yeti_node: str, attr_name: str):
if not attr_name.startswith("yetiVariableF_"):
raise ValueError("Must start with yetiVariableF_")
cmds.addAttr(yeti_node,
longName=attr_name,
attributeType="double",
cachedInternally=True,
softMinValue=0,
softMaxValue=100,
keyable=True)


@@ -1,30 +0,0 @@
from ayon_applications import PreLaunchHook, LaunchTypes
class MayaPreAutoLoadPlugins(PreLaunchHook):
"""Define -noAutoloadPlugins command flag."""
# Before AddLastWorkfileToLaunchArgs
order = 9
app_groups = {"maya"}
launch_types = {LaunchTypes.local}
def execute(self):
# Ignore if there's no last workfile to start.
if not self.data.get("start_last_workfile"):
return
maya_settings = self.data["project_settings"]["maya"]
enabled = maya_settings["explicit_plugins_loading"]["enabled"]
if enabled:
# Force disable the `AddLastWorkfileToLaunchArgs`.
self.data.pop("start_last_workfile")
# Force post initialization so our dedicated plug-in load can run
# prior to Maya opening a scene file.
key = "AYON_OPEN_WORKFILE_POST_INITIALIZATION"
self.launch_context.env[key] = "1"
self.log.debug("Explicit plugins loading.")
self.launch_context.launch_args.append("-noAutoloadPlugins")


@ -1,23 +0,0 @@
from ayon_applications import PreLaunchHook, LaunchTypes
from ayon_maya.lib import create_workspace_mel
class PreCopyMel(PreLaunchHook):
"""Copy workspace.mel to workdir.
Hook `GlobalHostDataHook` must be executed before this hook.
"""
app_groups = {"maya", "mayapy"}
launch_types = {LaunchTypes.local}
def execute(self):
project_entity = self.data["project_entity"]
workdir = self.launch_context.env.get("AYON_WORKDIR")
if not workdir:
self.log.warning("BUG: Workdir is not filled.")
return
project_settings = self.data["project_settings"]
create_workspace_mel(
workdir, project_entity["name"], project_settings
)


@ -1,26 +0,0 @@
from ayon_applications import PreLaunchHook, LaunchTypes
class MayaPreOpenWorkfilePostInitialization(PreLaunchHook):
"""Define whether open last workfile should run post initialize."""
# Before AddLastWorkfileToLaunchArgs.
order = 9
app_groups = {"maya"}
launch_types = {LaunchTypes.local}
def execute(self):
# Ignore if there's no last workfile to start.
if not self.data.get("start_last_workfile"):
return
maya_settings = self.data["project_settings"]["maya"]
enabled = maya_settings["open_workfile_post_initialization"]
if enabled:
# Force disable the `AddLastWorkfileToLaunchArgs`.
self.data.pop("start_last_workfile")
self.log.debug("Opening workfile post initialization.")
key = "AYON_OPEN_WORKFILE_POST_INITIALIZATION"
self.launch_context.env[key] = "1"


@ -1,25 +0,0 @@
import os
from ayon_core.settings import get_project_settings
from ayon_core.lib import Logger
def create_workspace_mel(workdir, project_name, project_settings=None):
dst_filepath = os.path.join(workdir, "workspace.mel")
if os.path.exists(dst_filepath):
return
if not os.path.exists(workdir):
os.makedirs(workdir)
if not project_settings:
project_settings = get_project_settings(project_name)
mel_script = project_settings["maya"].get("mel_workspace")
# Skip if mel script in settings is empty
if not mel_script:
log = Logger.get_logger("create_workspace_mel")
log.debug("File 'workspace.mel' not created. Settings value is empty.")
return
with open(dst_filepath, "w") as mel_file:
mel_file.write(mel_script)
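Settings lookup aside, the file-writing behavior above can be sketched standalone (the function name and the sample MEL line are illustrative): existing files are never overwritten and the work directory is created on demand:

```python
import os
import tempfile


def write_workspace_mel(workdir, mel_script):
    """Write ``workspace.mel`` into ``workdir`` unless it already exists.

    Mirrors the helper above minus the AYON settings lookup.
    Returns the written path, or None when writing was skipped.
    """
    if not mel_script:
        return None
    dst_filepath = os.path.join(workdir, "workspace.mel")
    if os.path.exists(dst_filepath):
        return None
    os.makedirs(workdir, exist_ok=True)
    with open(dst_filepath, "w") as mel_file:
        mel_file.write(mel_script)
    return dst_filepath


with tempfile.TemporaryDirectory() as tmp:
    workdir = os.path.join(tmp, "work")
    first = write_workspace_mel(workdir, 'workspace -fr "scene" "scenes";')
    # A second call is a no-op because the file already exists
    second = write_workspace_mel(workdir, "ignored")
```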


@ -1,190 +0,0 @@
import ayon_api
from ayon_core.pipeline.create.creator_plugins import ProductConvertorPlugin
from ayon_maya.api import plugin
from ayon_maya.api.lib import read
from maya import cmds
from maya.app.renderSetup.model import renderSetup
class MayaLegacyConvertor(ProductConvertorPlugin,
plugin.MayaCreatorBase):
"""Find and convert any legacy products in the scene.
This converter finds all legacy products in the scene and
transforms them to the current system. Since the old products don't
retain any information about their original creators, the only mapping
we can do is based on their families.
Its limitation is that multiple creators can create products
of the same type and there is no way to tell which one a legacy
instance belongs to. This code should nevertheless cover all
creators that ship with AYON.
"""
identifier = "io.openpype.creators.maya.legacy"
# Cases where the identifier or new product type doesn't correspond to the
# original family on the legacy instances
product_type_mapping = {
"rendering": "io.openpype.creators.maya.renderlayer",
}
def find_instances(self):
self.cache_instance_data(self.collection_shared_data)
legacy = self.collection_shared_data.get(
"maya_cached_legacy_instances"
)
if not legacy:
return
self.add_convertor_item("Convert legacy instances")
def convert(self):
self.remove_convertor_item()
# We can't use the collected shared data cache here
# we re-query it here directly to convert all found.
cache = {}
self.cache_instance_data(cache)
legacy = cache.get("maya_cached_legacy_instances")
if not legacy:
return
# From all current new style manual creators find the mapping
# from product type to identifier
product_type_to_id = {}
for identifier, creator in self.create_context.creators.items():
product_type = getattr(creator, "product_type", None)
if not product_type:
continue
if product_type in product_type_to_id:
# We have a clash of product type -> identifier. Multiple
# new style creators use the same product type
self.log.warning(
"Clash on product type->identifier: {}".format(identifier)
)
product_type_to_id[product_type] = identifier
product_type_to_id.update(self.product_type_mapping)
# We also embed the current 'task' into the instance since legacy
# instances didn't store that data on the instances. The old-style
# logic was thus implicitly tied to the current task to begin with.
data = dict()
data["task"] = self.create_context.get_current_task_name()
for product_type, instance_nodes in legacy.items():
if product_type not in product_type_to_id:
self.log.warning((
"Unable to convert legacy instance with family '{}'"
" because there is no matching new creator"
).format(product_type))
continue
creator_id = product_type_to_id[product_type]
creator = self.create_context.creators[creator_id]
data["creator_identifier"] = creator_id
if isinstance(creator, plugin.RenderlayerCreator):
self._convert_per_renderlayer(instance_nodes, data, creator)
else:
self._convert_regular(instance_nodes, data)
def _convert_regular(self, instance_nodes, data):
# We only imprint the creator identifier for it to identify
# as the new style creator
for instance_node in instance_nodes:
self.imprint_instance_node(instance_node,
data=data.copy())
def _convert_per_renderlayer(self, instance_nodes, data, creator):
# Split the instance into an instance per layer
rs = renderSetup.instance()
layers = rs.getRenderLayers()
if not layers:
self.log.error(
"Can't convert legacy renderlayer instance because no existing"
" renderSetup layers exist in the scene."
)
return
creator_attribute_names = {
attr_def.key for attr_def in creator.get_instance_attr_defs()
}
for instance_node in instance_nodes:
# Ensure we have the new style singleton node generated
# TODO: Make function public
singleton_node = creator._get_singleton_node()
if singleton_node:
self.log.error(
"Can't convert legacy renderlayer instance '{}' because"
" new style instance '{}' already exists".format(
instance_node,
singleton_node
)
)
continue
creator.create_singleton_node()
# We are creating new nodes to replace the original instance
# Copy the attributes of the original instance to the new node
original_data = read(instance_node)
# The product type gets converted to the new product type (this
# is due to "rendering" being converted to "renderlayer")
original_data["productType"] = creator.product_type
# Recreate the product name, as otherwise it would end up as
# `renderingMain` instead of the correct `renderMain`
project_name = self.create_context.get_current_project_name()
folder_entities = list(ayon_api.get_folders(
project_name, folder_names=[original_data["asset"]]
))
if not folder_entities:
cmds.delete(instance_node)
continue
folder_entity = folder_entities[0]
task_entity = ayon_api.get_task_by_name(
project_name, folder_entity["id"], data["task"]
)
product_name = creator.get_product_name(
project_name,
folder_entity,
task_entity,
original_data["variant"],
)
original_data["productName"] = product_name
# Convert to creator attributes when relevant
creator_attributes = {}
for key in list(original_data.keys()):
# Iterate in order of the original attributes to preserve order
# in the output creator attributes
if key in creator_attribute_names:
creator_attributes[key] = original_data.pop(key)
original_data["creator_attributes"] = creator_attributes
# Create an instance node per renderSetup layer
for layer in layers:
layer_instance_node = creator.find_layer_instance_node(layer)
if not layer_instance_node:
# TODO: Make function public
layer_instance_node = creator._create_layer_instance_node(
layer
)
# Transfer the main attributes of the original instance
layer_data = original_data.copy()
layer_data.update(data)
self.imprint_instance_node(layer_instance_node,
data=layer_data)
# Delete the legacy instance node
cmds.delete(instance_node)
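The product-type-to-identifier mapping built in `convert()` can be isolated into a pure function (name and shape are illustrative); later creators win on a clash, exactly as the loop above behaves:

```python
def map_product_types(product_type_by_identifier):
    """Build a product type -> creator identifier mapping.

    Input maps identifier -> product type (None/"" entries are
    skipped). On a clash the later identifier wins and the clashing
    identifier is recorded so it can be logged, as in the loop above.
    """
    mapping = {}
    clashes = []
    for identifier, product_type in product_type_by_identifier.items():
        if not product_type:
            continue
        if product_type in mapping:
            clashes.append(identifier)
        mapping[product_type] = identifier
    return mapping, clashes
```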


@ -1,134 +0,0 @@
from maya import cmds
from ayon_maya.api import lib, plugin
from ayon_core.lib import (
BoolDef,
NumberDef,
)
def _get_animation_attr_defs():
"""Get Animation generic definitions."""
defs = lib.collect_animation_defs()
defs.extend(
[
BoolDef("farm", label="Submit to Farm"),
NumberDef("priority", label="Farm job Priority", default=50),
BoolDef("refresh", label="Refresh viewport during export"),
BoolDef(
"includeParentHierarchy",
label="Include Parent Hierarchy",
tooltip=(
"Whether to include parent hierarchy of nodes in the "
"publish instance."
)
),
BoolDef(
"includeUserDefinedAttributes",
label="Include User Defined Attributes",
tooltip=(
"Whether to include all custom maya attributes found "
"on nodes as attributes in the Alembic data."
)
),
]
)
return defs
def convert_legacy_alembic_creator_attributes(node_data, class_name):
"""This is a legacy transfer of creator attributes to publish attributes
for ExtractAlembic/ExtractAnimation plugin.
"""
publish_attributes = node_data["publish_attributes"]
if class_name in publish_attributes:
return node_data
attributes = [
"attr",
"attrPrefix",
"visibleOnly",
"writeColorSets",
"writeFaceSets",
"writeNormals",
"renderableOnly",
"worldSpace"
]
plugin_attributes = {}
for attr in attributes:
if attr not in node_data["creator_attributes"]:
continue
value = node_data["creator_attributes"].pop(attr)
plugin_attributes[attr] = value
publish_attributes[class_name] = plugin_attributes
return node_data
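A worked, Maya-free restatement of the transfer above (names are illustrative; the attribute whitelist is passed in rather than hardcoded):

```python
def transfer_creator_attributes(node_data, class_name, keys):
    """Move whitelisted creator attributes into publish attributes.

    Skips the transfer entirely when ``class_name`` already has
    publish attributes, matching the early return above.
    """
    publish_attributes = node_data["publish_attributes"]
    if class_name in publish_attributes:
        return node_data
    plugin_attributes = {}
    for key in keys:
        if key in node_data["creator_attributes"]:
            plugin_attributes[key] = node_data["creator_attributes"].pop(key)
    publish_attributes[class_name] = plugin_attributes
    return node_data


node_data = {
    "creator_attributes": {"worldSpace": True, "step": 1.0},
    "publish_attributes": {},
}
transfer_creator_attributes(node_data, "ExtractAlembic", ["worldSpace"])
# "worldSpace" moved under publish_attributes; "step" stays put
```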
class CreateAnimation(plugin.MayaHiddenCreator):
"""Animation output for character rigs
We hide the animation creator from the UI since the creation of it is
automated upon loading a rig. There's an inventory action to recreate it
for loaded rigs if by chance someone deleted the animation instance.
"""
identifier = "io.openpype.creators.maya.animation"
name = "animationDefault"
label = "Animation"
product_type = "animation"
icon = "male"
write_color_sets = False
write_face_sets = False
include_parent_hierarchy = False
include_user_defined_attributes = False
def read_instance_node(self, node):
node_data = super(CreateAnimation, self).read_instance_node(node)
node_data = convert_legacy_alembic_creator_attributes(
node_data, "ExtractAnimation"
)
return node_data
def get_instance_attr_defs(self):
return _get_animation_attr_defs()
class CreatePointCache(plugin.MayaCreator):
"""Alembic pointcache for animated data"""
identifier = "io.openpype.creators.maya.pointcache"
label = "Pointcache"
product_type = "pointcache"
icon = "gears"
write_color_sets = False
write_face_sets = False
include_user_defined_attributes = False
def read_instance_node(self, node):
node_data = super(CreatePointCache, self).read_instance_node(node)
node_data = convert_legacy_alembic_creator_attributes(
node_data, "ExtractAlembic"
)
return node_data
def get_instance_attr_defs(self):
return _get_animation_attr_defs()
def create(self, product_name, instance_data, pre_create_data):
instance = super(CreatePointCache, self).create(
product_name, instance_data, pre_create_data
)
instance_node = instance.get("instance_node")
# For Arnold standin proxy
proxy_set = cmds.sets(name=instance_node + "_proxy_SET", empty=True)
cmds.sets(proxy_set, forceElement=instance_node)


@ -1,112 +0,0 @@
from maya import cmds
from ayon_maya.api import (
lib,
plugin
)
from ayon_core.lib import (
NumberDef,
BoolDef
)
class CreateArnoldSceneSource(plugin.MayaCreator):
"""Arnold Scene Source"""
identifier = "io.openpype.creators.maya.ass"
label = "Arnold Scene Source"
product_type = "ass"
icon = "cube"
settings_name = "CreateAss"
expandProcedurals = False
motionBlur = True
motionBlurKeys = 2
motionBlurLength = 0.5
maskOptions = False
maskCamera = False
maskLight = False
maskShape = False
maskShader = False
maskOverride = False
maskDriver = False
maskFilter = False
maskColor_manager = False
maskOperator = False
def get_instance_attr_defs(self):
defs = lib.collect_animation_defs()
defs.extend([
BoolDef("expandProcedural",
label="Expand Procedural",
default=self.expandProcedurals),
BoolDef("motionBlur",
label="Motion Blur",
default=self.motionBlur),
NumberDef("motionBlurKeys",
label="Motion Blur Keys",
decimals=0,
default=self.motionBlurKeys),
NumberDef("motionBlurLength",
label="Motion Blur Length",
decimals=3,
default=self.motionBlurLength),
# Masks
BoolDef("maskOptions",
label="Export Options",
default=self.maskOptions),
BoolDef("maskCamera",
label="Export Cameras",
default=self.maskCamera),
BoolDef("maskLight",
label="Export Lights",
default=self.maskLight),
BoolDef("maskShape",
label="Export Shapes",
default=self.maskShape),
BoolDef("maskShader",
label="Export Shaders",
default=self.maskShader),
BoolDef("maskOverride",
label="Export Override Nodes",
default=self.maskOverride),
BoolDef("maskDriver",
label="Export Drivers",
default=self.maskDriver),
BoolDef("maskFilter",
label="Export Filters",
default=self.maskFilter),
BoolDef("maskOperator",
label="Export Operators",
default=self.maskOperator),
BoolDef("maskColor_manager",
label="Export Color Managers",
default=self.maskColor_manager),
])
return defs
class CreateArnoldSceneSourceProxy(CreateArnoldSceneSource):
"""Arnold Scene Source Proxy
This product type facilitates working with proxy geometry in the viewport.
"""
identifier = "io.openpype.creators.maya.assproxy"
label = "Arnold Scene Source Proxy"
product_type = "assProxy"
icon = "cube"
def create(self, product_name, instance_data, pre_create_data):
instance = super(CreateArnoldSceneSourceProxy, self).create(
product_name, instance_data, pre_create_data
)
instance_node = instance.get("instance_node")
proxy = cmds.sets(name=instance_node + "_proxy_SET", empty=True)
cmds.sets([proxy], forceElement=instance_node)


@ -1,10 +0,0 @@
from ayon_maya.api import plugin
class CreateAssembly(plugin.MayaCreator):
"""A grouped package of loaded content"""
identifier = "io.openpype.creators.maya.assembly"
label = "Assembly"
product_type = "assembly"
icon = "cubes"


@ -1,36 +0,0 @@
from ayon_maya.api import (
lib,
plugin
)
from ayon_core.lib import BoolDef
class CreateCamera(plugin.MayaCreator):
"""Single baked camera"""
identifier = "io.openpype.creators.maya.camera"
label = "Camera"
product_type = "camera"
icon = "video-camera"
def get_instance_attr_defs(self):
defs = lib.collect_animation_defs()
defs.extend([
BoolDef("bakeToWorldSpace",
label="Bake to World-Space",
tooltip="Bake to World-Space",
default=True),
])
return defs
class CreateCameraRig(plugin.MayaCreator):
"""Complex hierarchy with camera."""
identifier = "io.openpype.creators.maya.camerarig"
label = "Camera Rig"
product_type = "camerarig"
icon = "video-camera"


@ -1,21 +0,0 @@
from ayon_maya.api import plugin
from ayon_core.lib import BoolDef
class CreateLayout(plugin.MayaCreator):
"""A grouped package of loaded content"""
identifier = "io.openpype.creators.maya.layout"
label = "Layout"
product_type = "layout"
icon = "cubes"
def get_instance_attr_defs(self):
return [
BoolDef("groupLoadedAssets",
label="Group Loaded Assets",
tooltip="Enable this when you want to publish group of "
"loaded asset",
default=False)
]


@ -1,47 +0,0 @@
from ayon_maya.api import (
plugin,
lib
)
from ayon_core.lib import (
BoolDef,
TextDef
)
class CreateLook(plugin.MayaCreator):
"""Shader connections defining shape look"""
identifier = "io.openpype.creators.maya.look"
label = "Look"
product_type = "look"
icon = "paint-brush"
make_tx = True
rs_tex = False
def get_instance_attr_defs(self):
return [
# TODO: This value should actually get set on create!
TextDef("renderLayer",
# TODO: Bug: Hidden attribute's label is still shown in UI?
hidden=True,
default=lib.get_current_renderlayer(),
label="Renderlayer",
tooltip="Renderlayer to extract the look from"),
BoolDef("maketx",
label="MakeTX",
tooltip="Whether to generate .tx files for your textures",
default=self.make_tx),
BoolDef("rstex",
label="Convert textures to .rstex",
tooltip="Whether to generate Redshift .rstex files for "
"your textures",
default=self.rs_tex)
]
def get_pre_create_attr_defs(self):
# Show same attributes on create but include use selection
defs = list(super().get_pre_create_attr_defs())
defs.extend(self.get_instance_attr_defs())
return defs


@ -1,32 +0,0 @@
from ayon_maya.api import (
lib,
plugin
)
from ayon_core.lib import BoolDef
class CreateMatchmove(plugin.MayaCreator):
"""Instance for more complex setup of cameras.
Might contain multiple cameras, geometries etc.
It is expected to be extracted into .abc or .ma
"""
identifier = "io.openpype.creators.maya.matchmove"
label = "Matchmove"
product_type = "matchmove"
icon = "video-camera"
def get_instance_attr_defs(self):
defs = lib.collect_animation_defs()
defs.extend([
BoolDef("bakeToWorldSpace",
label="Bake Cameras to World-Space",
tooltip="Bake Cameras to World-Space",
default=True),
])
return defs


@ -1,102 +0,0 @@
from ayon_maya.api import plugin, lib
from ayon_core.lib import (
BoolDef,
EnumDef,
TextDef
)
from maya import cmds
class CreateMayaUsd(plugin.MayaCreator):
"""Create Maya USD Export"""
identifier = "io.openpype.creators.maya.mayausd"
label = "Maya USD"
product_type = "usd"
icon = "cubes"
description = "Create Maya USD Export"
cache = {}
def get_publish_families(self):
return ["usd", "mayaUsd"]
def get_instance_attr_defs(self):
if "jobContextItems" not in self.cache:
# Query once instead of per instance
job_context_items = {}
try:
cmds.loadPlugin("mayaUsdPlugin", quiet=True)
job_context_items = {
cmds.mayaUSDListJobContexts(jobContext=name): name
for name in cmds.mayaUSDListJobContexts(export=True) or []
}
except RuntimeError:
# Likely `mayaUsdPlugin` plug-in not available
self.log.warning("Unable to retrieve available job "
"contexts for `mayaUsdPlugin` exports")
if not job_context_items:
# EnumDef multiselection items may not be empty
job_context_items = ["<placeholder; do not use>"]
self.cache["jobContextItems"] = job_context_items
defs = lib.collect_animation_defs()
defs.extend([
EnumDef("defaultUSDFormat",
label="File format",
items={
"usdc": "Binary",
"usda": "ASCII"
},
default="usdc"),
BoolDef("stripNamespaces",
label="Strip Namespaces",
tooltip=(
"Remove namespaces during export. By default, "
"namespaces are exported to the USD file in the "
"following format: nameSpaceExample_pPlatonic1"
),
default=True),
BoolDef("mergeTransformAndShape",
label="Merge Transform and Shape",
tooltip=(
"Combine Maya transform and shape into a single USD"
"prim that has transform and geometry, for all"
" \"geometric primitives\" (gprims).\n"
"This results in smaller and faster scenes. Gprims "
"will be \"unpacked\" back into transform and shape "
"nodes when imported into Maya from USD."
),
default=True),
BoolDef("includeUserDefinedAttributes",
label="Include User Defined Attributes",
tooltip=(
"Whether to include all custom maya attributes found "
"on nodes as metadata (userProperties) in USD."
),
default=False),
TextDef("attr",
label="Custom Attributes",
default="",
placeholder="attr1, attr2"),
TextDef("attrPrefix",
label="Custom Attributes Prefix",
default="",
placeholder="prefix1, prefix2"),
EnumDef("jobContext",
label="Job Context",
items=self.cache["jobContextItems"],
tooltip=(
"Specifies an additional export context to handle.\n"
"These usually contain extra schemas, primitives,\n"
"and materials that are to be exported for a "
"specific\ntask, a target renderer for example."
),
multiselection=True),
])
return defs


@ -1,11 +0,0 @@
from ayon_maya.api import plugin
class CreateMayaScene(plugin.MayaCreator):
"""Raw Maya Scene file export"""
identifier = "io.openpype.creators.maya.mayascene"
name = "mayaScene"
label = "Maya Scene"
product_type = "mayaScene"
icon = "file-archive-o"


@ -1,43 +0,0 @@
from ayon_maya.api import plugin
from ayon_core.lib import (
BoolDef,
TextDef
)
class CreateModel(plugin.MayaCreator):
"""Polygonal static geometry"""
identifier = "io.openpype.creators.maya.model"
label = "Model"
product_type = "model"
icon = "cube"
default_variants = ["Main", "Proxy", "_MD", "_HD", "_LD"]
write_color_sets = False
write_face_sets = False
def get_instance_attr_defs(self):
return [
BoolDef("writeColorSets",
label="Write vertex colors",
tooltip="Write vertex colors with the geometry",
default=self.write_color_sets),
BoolDef("writeFaceSets",
label="Write face sets",
tooltip="Write face sets with the geometry",
default=self.write_face_sets),
BoolDef("includeParentHierarchy",
label="Include Parent Hierarchy",
tooltip="Whether to include parent hierarchy of nodes in "
"the publish instance",
default=False),
TextDef("attr",
label="Custom Attributes",
default="",
placeholder="attr1, attr2"),
TextDef("attrPrefix",
label="Custom Attributes Prefix",
placeholder="prefix1, prefix2")
]


@ -1,223 +0,0 @@
import collections
from ayon_api import (
get_folder_by_name,
get_folder_by_path,
get_folders,
get_tasks,
)
from maya import cmds # noqa: F401
from ayon_maya.api import plugin
from ayon_core.lib import BoolDef, EnumDef, TextDef
from ayon_core.pipeline import (
Creator,
get_current_folder_path,
get_current_project_name,
)
from ayon_core.pipeline.create import CreatorError
class CreateMultishotLayout(plugin.MayaCreator):
"""Create a multi-shot layout in the Maya scene.
This creator will create a Camera Sequencer in the Maya scene based on
the shots found under the specified folder. The shots will be added to
the sequencer in the order of their clipIn and clipOut values. For each
shot a Layout will be created.
"""
identifier = "io.openpype.creators.maya.multishotlayout"
label = "Multi-shot Layout"
product_type = "layout"
icon = "project-diagram"
def get_pre_create_attr_defs(self):
# Present artist with a list of parents of the current context
# to choose from. This will be used to get the shots under the
# selected folder to create the Camera Sequencer.
"""
Todo: `get_folder_by_name` should be switched to `get_folder_by_path`
once the fork to pure AYON is done.
Warning: this will not work for projects where the folder name
is not unique across the project until the switch mentioned
above is done.
"""
project_name = get_current_project_name()
folder_path = get_current_folder_path()
if "/" in folder_path:
current_folder = get_folder_by_path(project_name, folder_path)
else:
current_folder = get_folder_by_name(
project_name, folder_name=folder_path
)
current_path_parts = current_folder["path"].split("/")
# populate the list with parents of the current folder
# this will create menu items like:
# [
# {
# "value": "",
# "label": "project (shots directly under the project)"
# }, {
# "value": "shots/shot_01", "label": "shot_01 (current)"
# }, {
# "value": "shots", "label": "shots"
# }
# ]
# add the project as the first item
items_with_label = [
{
"label": f"{self.project_name} "
"(shots directly under the project)",
"value": ""
}
]
# go through the current folder path and add each part to the list,
# but mark the current folder.
for part_idx, part in enumerate(current_path_parts):
label = part
if label == current_folder["name"]:
label = f"{label} (current)"
value = "/".join(current_path_parts[:part_idx + 1])
items_with_label.append({"label": label, "value": value})
return [
EnumDef("shotParent",
default=current_folder["name"],
label="Shot Parent Folder",
items=items_with_label,
),
BoolDef("groupLoadedAssets",
label="Group Loaded Assets",
tooltip="Enable this when you want to publish group of "
"loaded asset",
default=False),
TextDef("taskName",
label="Associated Task Name",
tooltip=("Task name to be associated "
"with the created Layout"),
default="layout"),
]
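The menu built in `get_pre_create_attr_defs` boils down to cumulative parent paths; a standalone sketch of that loop (function name is illustrative):

```python
def parent_menu_items(project_name, folder_path, current_name):
    """Build EnumDef-style items for the project root plus each parent
    of ``folder_path`` as a cumulative path, marking the current folder.
    """
    items = [{
        "label": f"{project_name} (shots directly under the project)",
        "value": "",
    }]
    parts = folder_path.strip("/").split("/")
    for part_idx, part in enumerate(parts):
        label = part
        if part == current_name:
            label = f"{label} (current)"
        items.append({
            "label": label,
            "value": "/".join(parts[:part_idx + 1]),
        })
    return items


items = parent_menu_items("myproject", "shots/sh010", "sh010")
# values: "", "shots", "shots/sh010"; last label marks the current folder
```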
def create(self, product_name, instance_data, pre_create_data):
shots = list(
self.get_related_shots(folder_path=pre_create_data["shotParent"])
)
if not shots:
# There are no shot folders under the specified folder.
# We are raising an error here but in the future we might
# want to create a new shot folders by publishing the layouts
# and shot defined in the sequencer. Sort of editorial publish
# in side of Maya.
raise CreatorError((
"No shots found under the specified "
f"folder: {pre_create_data['shotParent']}."))
# Get layout creator
layout_creator_id = "io.openpype.creators.maya.layout"
layout_creator: Creator = self.create_context.creators.get(
layout_creator_id)
if not layout_creator:
raise CreatorError(
f"Creator {layout_creator_id} not found.")
folder_ids = {s["id"] for s in shots}
folder_entities = get_folders(self.project_name, folder_ids)
task_entities = get_tasks(
self.project_name, folder_ids=folder_ids
)
task_entities_by_folder_id = collections.defaultdict(dict)
for task_entity in task_entities:
folder_id = task_entity["folderId"]
task_name = task_entity["name"]
task_entities_by_folder_id[folder_id][task_name] = task_entity
folder_entities_by_id = {fe["id"]: fe for fe in folder_entities}
for shot in shots:
# we are setting shot name to be displayed in the sequencer to
# `shot name (shot label)` if the label is set, otherwise just
# `shot name`. So far, labels are used only when the name is set
# with characters that are not allowed in the shot name.
if not shot["active"]:
continue
# get task for shot
folder_id = shot["id"]
folder_entity = folder_entities_by_id[folder_id]
task_entities = task_entities_by_folder_id[folder_id]
layout_task_name = None
layout_task_entity = None
if pre_create_data["taskName"] in task_entities:
layout_task_name = pre_create_data["taskName"]
layout_task_entity = task_entities[layout_task_name]
shot_name = f"{shot['name']}%s" % (
f" ({shot['label']})" if shot["label"] else "")
cmds.shot(sequenceStartTime=shot["attrib"]["clipIn"],
sequenceEndTime=shot["attrib"]["clipOut"],
shotName=shot_name)
# Create layout instance by the layout creator
instance_data = {
"folderPath": shot["path"],
"variant": layout_creator.get_default_variant()
}
if layout_task_name:
instance_data["task"] = layout_task_name
layout_creator.create(
product_name=layout_creator.get_product_name(
self.project_name,
folder_entity,
layout_task_entity,
layout_creator.get_default_variant(),
),
instance_data=instance_data,
pre_create_data={
"groupLoadedAssets": pre_create_data["groupLoadedAssets"]
}
)
def get_related_shots(self, folder_path: str):
"""Get all shots related to the current folder.
Get all folders of type Shot under specified folder.
Args:
folder_path (str): Path of the folder.
Returns:
list: List of dicts with folder data.
"""
# if folder_path is None, project is selected as a root
# and its name is used as a parent id
parent_id = self.project_name
if folder_path:
current_folder = get_folder_by_path(
project_name=self.project_name,
folder_path=folder_path,
)
parent_id = current_folder["id"]
# get all child folders of the current one
return get_folders(
project_name=self.project_name,
parent_ids=[parent_id],
fields=[
"attrib.clipIn", "attrib.clipOut",
"attrib.frameStart", "attrib.frameEnd",
"name", "label", "path", "folderType", "id"
]
)
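Two small pieces of the `create()` logic above are easy to verify in isolation: the shot display name (name plus optional parenthesized label) and the grouping of task entities by folder id (function names are illustrative):

```python
import collections


def shot_display_name(name, label):
    """Sequencer display name: ``name (label)`` when a label exists."""
    return f"{name} ({label})" if label else name


def group_tasks_by_folder(task_entities):
    """Index task entities as {folder_id: {task_name: entity}},
    mirroring the defaultdict loop in ``create()`` above."""
    grouped = collections.defaultdict(dict)
    for task_entity in task_entities:
        grouped[task_entity["folderId"]][task_entity["name"]] = task_entity
    return grouped
```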


@ -1,27 +0,0 @@
from ayon_maya.api import plugin
from ayon_core.lib import (
BoolDef,
EnumDef
)
class CreateMultiverseLook(plugin.MayaCreator):
"""Create Multiverse Look"""
identifier = "io.openpype.creators.maya.mvlook"
label = "Multiverse Look"
product_type = "mvLook"
icon = "cubes"
def get_instance_attr_defs(self):
return [
EnumDef("fileFormat",
label="File Format",
tooltip="USD export file format",
items=["usda", "usd"],
default="usda"),
BoolDef("publishMipMap",
label="Publish MipMap",
default=True),
]


@ -1,139 +0,0 @@
from ayon_maya.api import plugin, lib
from ayon_core.lib import (
BoolDef,
NumberDef,
TextDef,
EnumDef
)
class CreateMultiverseUsd(plugin.MayaCreator):
"""Create Multiverse USD Asset"""
identifier = "io.openpype.creators.maya.mvusdasset"
label = "Multiverse USD Asset"
product_type = "usd"
icon = "cubes"
description = "Create Multiverse USD Asset"
def get_publish_families(self):
return ["usd", "mvUsd"]
def get_instance_attr_defs(self):
defs = lib.collect_animation_defs(fps=True)
defs.extend([
EnumDef("fileFormat",
label="File format",
items=["usd", "usda", "usdz"],
default="usd"),
BoolDef("stripNamespaces",
label="Strip Namespaces",
default=True),
BoolDef("mergeTransformAndShape",
label="Merge Transform and Shape",
default=False),
BoolDef("writeAncestors",
label="Write Ancestors",
default=True),
BoolDef("flattenParentXforms",
label="Flatten Parent Xforms",
default=False),
BoolDef("writeSparseOverrides",
label="Write Sparse Overrides",
default=False),
BoolDef("useMetaPrimPath",
label="Use Meta Prim Path",
default=False),
TextDef("customRootPath",
label="Custom Root Path",
default=''),
TextDef("customAttributes",
label="Custom Attributes",
tooltip="Comma-separated list of attribute names",
default=''),
TextDef("nodeTypesToIgnore",
label="Node Types to Ignore",
tooltip="Comma-separated list of node types to be ignored",
default=''),
BoolDef("writeMeshes",
label="Write Meshes",
default=True),
BoolDef("writeCurves",
label="Write Curves",
default=True),
BoolDef("writeParticles",
label="Write Particles",
default=True),
BoolDef("writeCameras",
label="Write Cameras",
default=False),
BoolDef("writeLights",
label="Write Lights",
default=False),
BoolDef("writeJoints",
label="Write Joints",
default=False),
BoolDef("writeCollections",
label="Write Collections",
default=False),
BoolDef("writePositions",
label="Write Positions",
default=True),
BoolDef("writeNormals",
label="Write Normals",
default=True),
BoolDef("writeUVs",
label="Write UVs",
default=True),
BoolDef("writeColorSets",
label="Write Color Sets",
default=False),
BoolDef("writeTangents",
label="Write Tangents",
default=False),
BoolDef("writeRefPositions",
label="Write Ref Positions",
default=True),
BoolDef("writeBlendShapes",
label="Write BlendShapes",
default=False),
BoolDef("writeDisplayColor",
label="Write Display Color",
default=True),
BoolDef("writeSkinWeights",
label="Write Skin Weights",
default=False),
BoolDef("writeMaterialAssignment",
label="Write Material Assignment",
default=False),
BoolDef("writeHardwareShader",
label="Write Hardware Shader",
default=False),
BoolDef("writeShadingNetworks",
label="Write Shading Networks",
default=False),
BoolDef("writeTransformMatrix",
label="Write Transform Matrix",
default=True),
BoolDef("writeUsdAttributes",
label="Write USD Attributes",
default=True),
BoolDef("writeInstancesAsReferences",
label="Write Instances as References",
default=False),
BoolDef("timeVaryingTopology",
label="Time Varying Topology",
default=False),
TextDef("customMaterialNamespace",
label="Custom Material Namespace",
default=''),
NumberDef("numTimeSamples",
label="Num Time Samples",
default=1),
NumberDef("timeSamplesSpan",
label="Time Samples Span",
default=0.0),
])
return defs


@ -1,48 +0,0 @@
from ayon_maya.api import plugin, lib
from ayon_core.lib import (
BoolDef,
NumberDef,
EnumDef
)
class CreateMultiverseUsdComp(plugin.MayaCreator):
"""Create Multiverse USD Composition"""
identifier = "io.openpype.creators.maya.mvusdcomposition"
label = "Multiverse USD Composition"
product_type = "mvUsdComposition"
icon = "cubes"
def get_instance_attr_defs(self):
defs = lib.collect_animation_defs(fps=True)
defs.extend([
EnumDef("fileFormat",
label="File format",
items=["usd", "usda"],
default="usd"),
BoolDef("stripNamespaces",
label="Strip Namespaces",
default=False),
BoolDef("mergeTransformAndShape",
label="Merge Transform and Shape",
default=False),
BoolDef("flattenContent",
label="Flatten Content",
default=False),
BoolDef("writeAsCompoundLayers",
label="Write As Compound Layers",
default=False),
BoolDef("writePendingOverrides",
label="Write Pending Overrides",
default=False),
NumberDef("numTimeSamples",
label="Num Time Samples",
default=1),
NumberDef("timeSamplesSpan",
label="Time Samples Span",
default=0.0),
])
return defs

@ -1,59 +0,0 @@
from ayon_maya.api import plugin, lib
from ayon_core.lib import (
BoolDef,
NumberDef,
EnumDef
)
class CreateMultiverseUsdOver(plugin.MayaCreator):
"""Create Multiverse USD Override"""
identifier = "io.openpype.creators.maya.mvusdoverride"
label = "Multiverse USD Override"
product_type = "mvUsdOverride"
icon = "cubes"
def get_instance_attr_defs(self):
defs = lib.collect_animation_defs(fps=True)
defs.extend([
EnumDef("fileFormat",
label="File format",
items=["usd", "usda"],
default="usd"),
BoolDef("writeAll",
label="Write All",
default=False),
BoolDef("writeTransforms",
label="Write Transforms",
default=True),
BoolDef("writeVisibility",
label="Write Visibility",
default=True),
BoolDef("writeAttributes",
label="Write Attributes",
default=True),
BoolDef("writeMaterials",
label="Write Materials",
default=True),
BoolDef("writeVariants",
label="Write Variants",
default=True),
BoolDef("writeVariantsDefinition",
label="Write Variants Definition",
default=True),
BoolDef("writeActiveState",
label="Write Active State",
default=True),
BoolDef("writeNamespaces",
label="Write Namespaces",
default=False),
NumberDef("numTimeSamples",
label="Num Time Samples",
default=1),
NumberDef("timeSamplesSpan",
label="Time Samples Span",
default=0.0),
])
return defs

@ -1,50 +0,0 @@
from ayon_maya.api import (
lib,
plugin
)
from ayon_core.lib import (
BoolDef,
TextDef
)
class CreateProxyAlembic(plugin.MayaCreator):
"""Proxy Alembic for animated data"""
identifier = "io.openpype.creators.maya.proxyabc"
label = "Proxy Alembic"
product_type = "proxyAbc"
icon = "gears"
write_color_sets = False
write_face_sets = False
def get_instance_attr_defs(self):
defs = lib.collect_animation_defs()
defs.extend([
BoolDef("writeColorSets",
label="Write vertex colors",
tooltip="Write vertex colors with the geometry",
default=self.write_color_sets),
BoolDef("writeFaceSets",
label="Write face sets",
tooltip="Write face sets with the geometry",
default=self.write_face_sets),
BoolDef("worldSpace",
label="World-Space Export",
default=True),
TextDef("nameSuffix",
label="Name Suffix for Bounding Box",
default="_BBox",
placeholder="_BBox"),
TextDef("attr",
label="Custom Attributes",
default="",
placeholder="attr1, attr2"),
TextDef("attrPrefix",
label="Custom Attributes Prefix",
placeholder="prefix1, prefix2")
])
return defs

@ -1,25 +0,0 @@
# -*- coding: utf-8 -*-
"""Creator of Redshift proxy product types."""
from ayon_maya.api import plugin, lib
from ayon_core.lib import BoolDef
class CreateRedshiftProxy(plugin.MayaCreator):
"""Create instance of Redshift Proxy product."""
identifier = "io.openpype.creators.maya.redshiftproxy"
label = "Redshift Proxy"
product_type = "redshiftproxy"
icon = "gears"
def get_instance_attr_defs(self):
defs = [
BoolDef("animation",
label="Export animation",
default=False)
]
defs.extend(lib.collect_animation_defs())
return defs

@ -1,115 +0,0 @@
# -*- coding: utf-8 -*-
"""Create ``Render`` instance in Maya."""
from ayon_maya.api import (
lib_rendersettings,
plugin
)
from ayon_core.pipeline import CreatorError
from ayon_core.lib import (
BoolDef,
NumberDef,
)
class CreateRenderlayer(plugin.RenderlayerCreator):
    """Create and manage renderlayer products per renderLayer in the workfile.
    This generates a single node in the scene which, if it exists, tells the
    Creator to collect Maya Render Setup render layers as individual instances.
    As such, triggering create doesn't actually create an instance node per
    layer but only the node which tells the Creator it may now collect
    the render layers.
    """
identifier = "io.openpype.creators.maya.renderlayer"
product_type = "renderlayer"
label = "Render"
icon = "eye"
layer_instance_prefix = "render"
singleton_node_name = "renderingMain"
render_settings = {}
@classmethod
def apply_settings(cls, project_settings):
cls.render_settings = project_settings["maya"]["render_settings"]
def create(self, product_name, instance_data, pre_create_data):
# Only allow a single render instance to exist
if self._get_singleton_node():
raise CreatorError(
"A Render instance already exists - only one can be "
"configured.\n\n"
"To render multiple render layers, create extra Render Setup "
"Layers via Maya's Render Setup UI.\n"
"Then refresh the publisher to detect the new layers for "
"rendering.\n\n"
"With a render instance present all Render Setup layers in "
"your workfile are renderable instances.")
# Apply default project render settings on create
if self.render_settings.get("apply_render_settings"):
lib_rendersettings.RenderSettings().set_default_renderer_settings()
super(CreateRenderlayer, self).create(product_name,
instance_data,
pre_create_data)
def get_instance_attr_defs(self):
"""Create instance settings."""
return [
BoolDef("review",
label="Review",
tooltip="Mark as reviewable",
default=True),
BoolDef("extendFrames",
label="Extend Frames",
tooltip="Extends the frames on top of the previous "
"publish.\nIf the previous was 1001-1050 and you "
"would now submit 1020-1070 only the new frames "
"1051-1070 would be rendered and published "
"together with the previously rendered frames.\n"
"If 'overrideExistingFrame' is enabled it *will* "
"render any existing frames.",
default=False),
BoolDef("overrideExistingFrame",
label="Override Existing Frame",
tooltip="Override existing rendered frames "
"(if they exist).",
default=True),
# TODO: Should these move to submit_maya_deadline plugin?
# Tile rendering
BoolDef("tileRendering",
label="Enable tiled rendering",
default=False),
NumberDef("tilesX",
label="Tiles X",
default=2,
minimum=1,
decimals=0),
NumberDef("tilesY",
label="Tiles Y",
default=2,
minimum=1,
decimals=0),
# Additional settings
BoolDef("convertToScanline",
label="Convert to Scanline",
tooltip="Convert the output images to scanline images",
default=False),
BoolDef("useReferencedAovs",
label="Use Referenced AOVs",
tooltip="Consider the AOVs from referenced scenes as well",
default=False),
BoolDef("renderSetupIncludeLights",
label="Render Setup Include Lights",
default=self.render_settings.get("enable_all_lights",
False))
]
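The extendFrames / overrideExistingFrame behaviour described in the tooltips above amounts to simple frame-set arithmetic. A hypothetical standalone sketch (function name and signature are illustrative, not part of the removed plugin):

```python
def frames_to_render(previous_range, submit_range, override_existing=False):
    """Return the frames to render when extending a previous publish.

    With override disabled, only frames not covered by the previous
    publish are rendered.
    """
    previous = set(range(previous_range[0], previous_range[1] + 1))
    submitted = set(range(submit_range[0], submit_range[1] + 1))
    if override_existing:
        return sorted(submitted)
    return sorted(submitted - previous)


# Previous publish was 1001-1050, new submission is 1020-1070:
new_frames = frames_to_render((1001, 1050), (1020, 1070))
print(new_frames[0], new_frames[-1])  # 1051 1070
```

This mirrors the tooltip's example: only 1051-1070 would be rendered and published alongside the previously rendered frames.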

@ -1,31 +0,0 @@
from ayon_maya.api import plugin
from ayon_core.pipeline import CreatorError
class CreateRenderSetup(plugin.MayaCreator):
"""Create rendersetup template json data"""
identifier = "io.openpype.creators.maya.rendersetup"
label = "Render Setup Preset"
product_type = "rendersetup"
icon = "tablet"
def get_pre_create_attr_defs(self):
# Do not show the "use_selection" setting from parent class
return []
def create(self, product_name, instance_data, pre_create_data):
existing_instance = None
for instance in self.create_context.instances:
if instance.product_type == self.product_type:
existing_instance = instance
break
if existing_instance:
raise CreatorError("A RenderSetup instance already exists - only "
"one can be configured.")
super(CreateRenderSetup, self).create(product_name,
instance_data,
pre_create_data)
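The singleton guard in `CreateRenderSetup.create` scans existing instances for a matching product type before allowing a new one. As plain Python (dicts stand in for the actual instance objects; names are illustrative):

```python
def find_existing(instances, product_type):
    """Return the first instance of the given product type, or None."""
    return next(
        (inst for inst in instances
         if inst.get("product_type") == product_type),
        None,
    )


instances = [{"product_type": "rendersetup"}]
# An existing instance means creating another would raise CreatorError.
print(find_existing(instances, "rendersetup") is not None)  # True
```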

@ -1,148 +0,0 @@
import json
from maya import cmds
import ayon_api
from ayon_maya.api import (
lib,
plugin
)
from ayon_core.lib import (
BoolDef,
NumberDef,
EnumDef
)
from ayon_core.pipeline import CreatedInstance
TRANSPARENCIES = [
"preset",
"simple",
"object sorting",
"weighted average",
"depth peeling",
"alpha cut"
]
class CreateReview(plugin.MayaCreator):
"""Playblast reviewable"""
identifier = "io.openpype.creators.maya.review"
label = "Review"
product_type = "review"
icon = "video-camera"
useMayaTimeline = True
panZoom = False
# Overriding "create" method to prefill values from settings.
def create(self, product_name, instance_data, pre_create_data):
members = list()
if pre_create_data.get("use_selection"):
members = cmds.ls(selection=True)
project_name = self.project_name
folder_path = instance_data["folderPath"]
task_name = instance_data["task"]
folder_entity = ayon_api.get_folder_by_path(
project_name, folder_path, fields={"id"}
)
task_entity = ayon_api.get_task_by_name(
project_name, folder_entity["id"], task_name, fields={"taskType"}
)
preset = lib.get_capture_preset(
task_name,
task_entity["taskType"],
product_name,
self.project_settings,
self.log
)
self.log.debug(
"Using preset: {}".format(
json.dumps(preset, indent=4, sort_keys=True)
)
)
with lib.undo_chunk():
instance_node = cmds.sets(members, name=product_name)
instance_data["instance_node"] = instance_node
instance = CreatedInstance(
self.product_type,
product_name,
instance_data,
self)
creator_attribute_defs_by_key = {
x.key: x for x in instance.creator_attribute_defs
}
mapping = {
"review_width": preset["Resolution"]["width"],
"review_height": preset["Resolution"]["height"],
"isolate": preset["Generic"]["isolate_view"],
"imagePlane": preset["ViewportOptions"]["imagePlane"],
"panZoom": preset["Generic"]["pan_zoom"]
}
for key, value in mapping.items():
creator_attribute_defs_by_key[key].default = value
self._add_instance_to_context(instance)
self.imprint_instance_node(instance_node,
data=instance.data_to_store())
return instance
def get_instance_attr_defs(self):
defs = lib.collect_animation_defs()
# Option for using Maya or folder frame range in settings.
if not self.useMayaTimeline:
# Update the defaults to be the folder frame range
frame_range = lib.get_frame_range()
defs_by_key = {attr_def.key: attr_def for attr_def in defs}
for key, value in frame_range.items():
if key not in defs_by_key:
raise RuntimeError("Attribute definition not found to be "
"updated for key: {}".format(key))
attr_def = defs_by_key[key]
attr_def.default = value
defs.extend([
NumberDef("review_width",
label="Review width",
tooltip="A value of zero will use the folder resolution.",
decimals=0,
minimum=0,
default=0),
NumberDef("review_height",
label="Review height",
tooltip="A value of zero will use the folder resolution.",
decimals=0,
minimum=0,
default=0),
BoolDef("keepImages",
label="Keep Images",
tooltip="Whether to also publish along the image sequence "
"next to the video reviewable.",
default=False),
BoolDef("isolate",
label="Isolate render members of instance",
tooltip="When enabled only the members of the instance "
"will be included in the playblast review.",
default=False),
BoolDef("imagePlane",
label="Show Image Plane",
default=True),
EnumDef("transparency",
label="Transparency",
items=TRANSPARENCIES),
BoolDef("panZoom",
label="Enable camera pan/zoom",
default=True),
EnumDef("displayLights",
label="Display Lights",
items=lib.DISPLAY_LIGHTS_ENUM),
])
return defs

@ -1,32 +0,0 @@
from maya import cmds
from ayon_maya.api import plugin
class CreateRig(plugin.MayaCreator):
"""Artist-friendly rig with controls to direct motion"""
identifier = "io.openpype.creators.maya.rig"
label = "Rig"
product_type = "rig"
icon = "wheelchair"
def create(self, product_name, instance_data, pre_create_data):
instance = super(CreateRig, self).create(product_name,
instance_data,
pre_create_data)
instance_node = instance.get("instance_node")
        self.log.info("Creating Rig instance setup ...")
        # TODO: change name (_controls_SET -> _rigs_SET)
controls = cmds.sets(name=product_name + "_controls_SET", empty=True)
        # TODO: change name (_out_SET -> _geo_SET)
pointcache = cmds.sets(name=product_name + "_out_SET", empty=True)
skeleton = cmds.sets(
name=product_name + "_skeletonAnim_SET", empty=True)
skeleton_mesh = cmds.sets(
name=product_name + "_skeletonMesh_SET", empty=True)
cmds.sets([controls, pointcache,
skeleton, skeleton_mesh], forceElement=instance_node)

@ -1,24 +0,0 @@
from ayon_maya.api import plugin
from ayon_core.lib import BoolDef
class CreateSetDress(plugin.MayaCreator):
"""A grouped package of loaded content"""
identifier = "io.openpype.creators.maya.setdress"
label = "Set Dress"
product_type = "setdress"
icon = "cubes"
exactSetMembersOnly = True
shader = True
default_variants = ["Main", "Anim"]
def get_instance_attr_defs(self):
return [
BoolDef("exactSetMembersOnly",
label="Exact Set Members Only",
default=self.exactSetMembersOnly),
BoolDef("shader",
label="Include shader",
default=self.shader)
]

@ -1,105 +0,0 @@
# -*- coding: utf-8 -*-
"""Creator for Unreal Skeletal Meshes."""
from ayon_maya.api import plugin, lib
from ayon_core.lib import (
BoolDef,
TextDef
)
from maya import cmds # noqa
class CreateUnrealSkeletalMesh(plugin.MayaCreator):
    """Unreal Skeletal Meshes."""
identifier = "io.openpype.creators.maya.unrealskeletalmesh"
label = "Unreal - Skeletal Mesh"
product_type = "skeletalMesh"
icon = "thumbs-up"
# Defined in settings
joint_hints = set()
def get_dynamic_data(
self,
project_name,
folder_entity,
task_entity,
variant,
host_name,
instance
):
"""
The default product name templates for Unreal include {asset} and thus
we should pass that along as dynamic data.
"""
dynamic_data = super(CreateUnrealSkeletalMesh, self).get_dynamic_data(
project_name,
folder_entity,
task_entity,
variant,
host_name,
instance
)
dynamic_data["asset"] = folder_entity["name"]
return dynamic_data
def create(self, product_name, instance_data, pre_create_data):
with lib.undo_chunk():
instance = super(CreateUnrealSkeletalMesh, self).create(
product_name, instance_data, pre_create_data)
instance_node = instance.get("instance_node")
# We reorganize the geometry that was originally added into the
# set into either 'joints_SET' or 'geometry_SET' based on the
# joint_hints from project settings
members = cmds.sets(instance_node, query=True) or []
cmds.sets(clear=instance_node)
geometry_set = cmds.sets(name="geometry_SET", empty=True)
joints_set = cmds.sets(name="joints_SET", empty=True)
cmds.sets([geometry_set, joints_set], forceElement=instance_node)
for node in members:
if node in self.joint_hints:
cmds.sets(node, forceElement=joints_set)
else:
cmds.sets(node, forceElement=geometry_set)
def get_instance_attr_defs(self):
defs = lib.collect_animation_defs()
defs.extend([
BoolDef("renderableOnly",
label="Renderable Only",
tooltip="Only export renderable visible shapes",
default=False),
BoolDef("visibleOnly",
label="Visible Only",
tooltip="Only export dag objects visible during "
"frame range",
default=False),
BoolDef("includeParentHierarchy",
label="Include Parent Hierarchy",
tooltip="Whether to include parent hierarchy of nodes in "
"the publish instance",
default=False),
BoolDef("worldSpace",
label="World-Space Export",
default=True),
BoolDef("refresh",
label="Refresh viewport during export",
default=False),
TextDef("attr",
label="Custom Attributes",
default="",
placeholder="attr1, attr2"),
TextDef("attrPrefix",
label="Custom Attributes Prefix",
placeholder="prefix1, prefix2")
])
return defs

@ -1,95 +0,0 @@
# -*- coding: utf-8 -*-
"""Creator for Unreal Static Meshes."""
from ayon_maya.api import plugin, lib
from maya import cmds # noqa
class CreateUnrealStaticMesh(plugin.MayaCreator):
"""Unreal Static Meshes with collisions."""
identifier = "io.openpype.creators.maya.unrealstaticmesh"
label = "Unreal - Static Mesh"
product_type = "staticMesh"
icon = "cube"
# Defined in settings
collision_prefixes = []
def get_dynamic_data(
self,
project_name,
folder_entity,
task_entity,
variant,
host_name,
instance
):
"""
The default product name templates for Unreal include {asset} and thus
we should pass that along as dynamic data.
"""
dynamic_data = super(CreateUnrealStaticMesh, self).get_dynamic_data(
project_name,
folder_entity,
task_entity,
variant,
host_name,
instance
)
dynamic_data["asset"] = folder_entity["name"]
return dynamic_data
def create(self, product_name, instance_data, pre_create_data):
with lib.undo_chunk():
instance = super(CreateUnrealStaticMesh, self).create(
product_name, instance_data, pre_create_data)
instance_node = instance.get("instance_node")
# We reorganize the geometry that was originally added into the
# set into either 'collision_SET' or 'geometry_SET' based on the
# collision_prefixes from project settings
members = cmds.sets(instance_node, query=True)
cmds.sets(clear=instance_node)
geometry_set = cmds.sets(name="geometry_SET", empty=True)
collisions_set = cmds.sets(name="collisions_SET", empty=True)
cmds.sets([geometry_set, collisions_set],
forceElement=instance_node)
members = cmds.ls(members, long=True) or []
children = cmds.listRelatives(members, allDescendents=True,
fullPath=True) or []
transforms = cmds.ls(members + children, type="transform")
for transform in transforms:
if not cmds.listRelatives(transform,
type="shape",
noIntermediate=True):
# Exclude all transforms that have no direct shapes
continue
if self.has_collision_prefix(transform):
cmds.sets(transform, forceElement=collisions_set)
else:
cmds.sets(transform, forceElement=geometry_set)
def has_collision_prefix(self, node_path):
        """Return whether the node name of the path matches a collision prefix.
        If the node name matches a collision prefix we add it to the
        `collisions_SET` instead of the `geometry_SET`.
Args:
node_path (str): Maya node path.
Returns:
bool: Whether the node should be considered a collision mesh.
"""
node_name = node_path.rsplit("|", 1)[-1]
for prefix in self.collision_prefixes:
if node_name.startswith(prefix):
return True
return False
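The `has_collision_prefix` logic above is pure string handling and can be exercised outside Maya. A standalone sketch (the `"UCX_"` prefix is an assumed example; the real prefixes come from project settings):

```python
def has_collision_prefix(node_path, collision_prefixes):
    """Return whether the tail name of a Maya DAG path starts with any
    of the configured collision prefixes."""
    node_name = node_path.rsplit("|", 1)[-1]
    return any(node_name.startswith(prefix) for prefix in collision_prefixes)


# "UCX_" is used here only as an example prefix.
print(has_collision_prefix("|root|UCX_wall", ["UCX_"]))  # True
print(has_collision_prefix("|root|wall_GEO", ["UCX_"]))  # False
```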

@ -1,39 +0,0 @@
from ayon_maya.api import (
lib,
plugin
)
from ayon_core.lib import NumberDef
class CreateUnrealYetiCache(plugin.MayaCreator):
    """Output for procedural plugin nodes of Yeti."""
identifier = "io.openpype.creators.maya.unrealyeticache"
label = "Unreal - Yeti Cache"
product_type = "yeticacheUE"
icon = "pagelines"
def get_instance_attr_defs(self):
defs = [
NumberDef("preroll",
label="Preroll",
minimum=0,
default=0,
decimals=0)
]
# Add animation data without step and handles
defs.extend(lib.collect_animation_defs())
remove = {"step", "handleStart", "handleEnd"}
defs = [attr_def for attr_def in defs if attr_def.key not in remove]
# Add samples after frame range
defs.append(
NumberDef("samples",
label="Samples",
default=3,
decimals=0)
)
return defs
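The Yeti cache creators above and below drop the step and handle attributes from the collected animation defs before re-appending their own. The filtering itself is a simple key-based list comprehension; a minimal standalone sketch (`AttrDef` is a stand-in for ayon_core's attribute definition classes):

```python
from collections import namedtuple

# Stand-in for ayon_core's attribute definition classes.
AttrDef = namedtuple("AttrDef", ["key"])


def strip_attr_defs(defs, remove_keys):
    """Drop attribute definitions whose key is in remove_keys, keeping order."""
    remove_keys = set(remove_keys)
    return [attr_def for attr_def in defs if attr_def.key not in remove_keys]


defs = [AttrDef(key) for key in
        ("frameStart", "frameEnd", "step", "handleStart", "handleEnd")]
kept = strip_attr_defs(defs, {"step", "handleStart", "handleEnd"})
print([attr_def.key for attr_def in kept])  # ['frameStart', 'frameEnd']
```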

@ -1,50 +0,0 @@
from ayon_maya.api import (
plugin,
lib
)
from ayon_core.lib import BoolDef
class CreateVrayProxy(plugin.MayaCreator):
"""Alembic pointcache for animated data"""
identifier = "io.openpype.creators.maya.vrayproxy"
label = "VRay Proxy"
product_type = "vrayproxy"
icon = "gears"
vrmesh = True
alembic = True
def get_instance_attr_defs(self):
defs = [
BoolDef("animation",
label="Export Animation",
default=False)
]
# Add time range attributes but remove some attributes
# which this instance actually doesn't use
defs.extend(lib.collect_animation_defs())
remove = {"handleStart", "handleEnd", "step"}
defs = [attr_def for attr_def in defs if attr_def.key not in remove]
defs.extend([
BoolDef("vertexColors",
label="Write vertex colors",
tooltip="Write vertex colors with the geometry",
default=False),
BoolDef("vrmesh",
label="Export VRayMesh",
tooltip="Publish a .vrmesh (VRayMesh) file for "
"this VRayProxy",
default=self.vrmesh),
BoolDef("alembic",
label="Export Alembic",
tooltip="Publish a .abc (Alembic) file for "
"this VRayProxy",
default=self.alembic),
])
return defs

@ -1,52 +0,0 @@
# -*- coding: utf-8 -*-
"""Create instance of vrayscene."""
from ayon_maya.api import (
lib_rendersettings,
plugin
)
from ayon_core.pipeline import CreatorError
from ayon_core.lib import BoolDef
class CreateVRayScene(plugin.RenderlayerCreator):
"""Create Vray Scene."""
identifier = "io.openpype.creators.maya.vrayscene"
product_type = "vrayscene"
label = "VRay Scene"
icon = "cubes"
render_settings = {}
singleton_node_name = "vraysceneMain"
@classmethod
def apply_settings(cls, project_settings):
cls.render_settings = project_settings["maya"]["render_settings"]
def create(self, product_name, instance_data, pre_create_data):
# Only allow a single render instance to exist
if self._get_singleton_node():
raise CreatorError("A Render instance already exists - only "
"one can be configured.")
super(CreateVRayScene, self).create(product_name,
instance_data,
pre_create_data)
# Apply default project render settings on create
if self.render_settings.get("apply_render_settings"):
lib_rendersettings.RenderSettings().set_default_renderer_settings()
def get_instance_attr_defs(self):
"""Create instance settings."""
return [
BoolDef("vraySceneMultipleFiles",
label="V-Ray Scene Multiple Files",
default=False),
BoolDef("exportOnFarm",
label="Export on farm",
default=False)
]

@ -1,118 +0,0 @@
# -*- coding: utf-8 -*-
"""Creator plugin for creating workfiles."""
import ayon_api
from ayon_core.pipeline import CreatedInstance, AutoCreator
from ayon_maya.api import plugin
from maya import cmds
class CreateWorkfile(plugin.MayaCreatorBase, AutoCreator):
"""Workfile auto-creator."""
identifier = "io.openpype.creators.maya.workfile"
label = "Workfile"
product_type = "workfile"
icon = "fa5.file"
default_variant = "Main"
def create(self):
variant = self.default_variant
current_instance = next(
(
instance for instance in self.create_context.instances
if instance.creator_identifier == self.identifier
), None)
project_name = self.project_name
folder_path = self.create_context.get_current_folder_path()
task_name = self.create_context.get_current_task_name()
host_name = self.create_context.host_name
current_folder_path = None
if current_instance is not None:
current_folder_path = current_instance["folderPath"]
if current_instance is None:
folder_entity = ayon_api.get_folder_by_path(
project_name, folder_path
)
task_entity = ayon_api.get_task_by_name(
project_name, folder_entity["id"], task_name
)
product_name = self.get_product_name(
project_name,
folder_entity,
task_entity,
variant,
host_name,
)
data = {
"folderPath": folder_path,
"task": task_name,
"variant": variant
}
data.update(
self.get_dynamic_data(
project_name,
folder_entity,
task_entity,
variant,
host_name,
current_instance)
)
self.log.info("Auto-creating workfile instance...")
current_instance = CreatedInstance(
self.product_type, product_name, data, self
)
self._add_instance_to_context(current_instance)
elif (
current_folder_path != folder_path
or current_instance["task"] != task_name
):
# Update instance context if is not the same
folder_entity = ayon_api.get_folder_by_path(
project_name, folder_path
)
task_entity = ayon_api.get_task_by_name(
project_name, folder_entity["id"], task_name
)
product_name = self.get_product_name(
project_name,
folder_entity,
task_entity,
variant,
host_name,
)
current_instance["folderPath"] = folder_entity["path"]
current_instance["task"] = task_name
current_instance["productName"] = product_name
def collect_instances(self):
self.cache_instance_data(self.collection_shared_data)
cached_instances = (
self.collection_shared_data["maya_cached_instance_data"]
)
for node in cached_instances.get(self.identifier, []):
node_data = self.read_instance_node(node)
created_instance = CreatedInstance.from_existing(node_data, self)
self._add_instance_to_context(created_instance)
def update_instances(self, update_list):
for created_inst, _changes in update_list:
data = created_inst.data_to_store()
node = data.get("instance_node")
if not node:
node = self.create_node()
created_inst["instance_node"] = node
data = created_inst.data_to_store()
self.imprint_instance_node(node, data)
def create_node(self):
node = cmds.sets(empty=True, name="workfileMain")
cmds.setAttr(node + ".hiddenInOutliner", True)
return node
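`CreateWorkfile.create` follows a find-or-create pattern: reuse the existing workfile instance if one exists, otherwise build a new one. A minimal sketch of that pattern (dicts stand in for `CreatedInstance` objects; the helper name is illustrative):

```python
def upsert_workfile_instance(instances, identifier, make_instance):
    """Reuse the existing instance of this creator if present,
    otherwise create and register a new one."""
    for instance in instances:
        if instance.get("creator_identifier") == identifier:
            return instance, False
    instance = make_instance()
    instances.append(instance)
    return instance, True


instances = []
_, created = upsert_workfile_instance(
    instances, "io.openpype.creators.maya.workfile",
    lambda: {"creator_identifier": "io.openpype.creators.maya.workfile"})
print(created)  # True
```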

@ -1,10 +0,0 @@
from ayon_maya.api import plugin
class CreateXgen(plugin.MayaCreator):
"""Xgen"""
identifier = "io.openpype.creators.maya.xgen"
label = "Xgen"
product_type = "xgen"
icon = "pagelines"

@ -1,39 +0,0 @@
from ayon_maya.api import (
lib,
plugin
)
from ayon_core.lib import NumberDef
class CreateYetiCache(plugin.MayaCreator):
    """Output for procedural plugin nodes of Yeti."""
identifier = "io.openpype.creators.maya.yeticache"
label = "Yeti Cache"
product_type = "yeticache"
icon = "pagelines"
def get_instance_attr_defs(self):
defs = [
NumberDef("preroll",
label="Preroll",
minimum=0,
default=0,
decimals=0)
]
# Add animation data without step and handles
defs.extend(lib.collect_animation_defs())
remove = {"step", "handleStart", "handleEnd"}
defs = [attr_def for attr_def in defs if attr_def.key not in remove]
# Add samples after frame range
defs.append(
NumberDef("samples",
label="Samples",
default=3,
decimals=0)
)
return defs

@ -1,27 +0,0 @@
from maya import cmds
from ayon_maya.api import (
lib,
plugin
)
class CreateYetiRig(plugin.MayaCreator):
    """Output for procedural plugin nodes (Yeti / XGen / etc.)"""
identifier = "io.openpype.creators.maya.yetirig"
label = "Yeti Rig"
product_type = "yetiRig"
icon = "usb"
def create(self, product_name, instance_data, pre_create_data):
with lib.undo_chunk():
instance = super(CreateYetiRig, self).create(product_name,
instance_data,
pre_create_data)
instance_node = instance.get("instance_node")
            self.log.info("Creating Rig instance setup ...")
input_meshes = cmds.sets(name="input_SET", empty=True)
cmds.sets(input_meshes, forceElement=instance_node)

@ -1,158 +0,0 @@
from maya import cmds
from ayon_core.pipeline import InventoryAction, get_repres_contexts
from ayon_maya.api.lib import get_id
class ConnectGeometry(InventoryAction):
"""Connect geometries within containers.
    Source container will connect to the target containers by searching for
    matching geometry IDs (cbId).
    Source containers are of product type "animation" or "pointcache".
    The connection will be done with a live world-space blendshape.
"""
label = "Connect Geometry"
icon = "link"
color = "white"
def process(self, containers):
# Validate selection is more than 1.
message = (
"Only 1 container selected. 2+ containers needed for this action."
)
if len(containers) == 1:
self.display_warning(message)
return
# Categorize containers by family.
containers_by_product_type = {}
repre_ids = {
container["representation"]
for container in containers
}
repre_contexts_by_id = get_repres_contexts(repre_ids)
for container in containers:
repre_id = container["representation"]
repre_context = repre_contexts_by_id[repre_id]
product_type = repre_context["product"]["productType"]
containers_by_product_type.setdefault(product_type, [])
containers_by_product_type[product_type].append(container)
# Validate to only 1 source container.
source_containers = containers_by_product_type.get("animation", [])
source_containers += containers_by_product_type.get("pointcache", [])
source_container_namespaces = [
x["namespace"] for x in source_containers
]
message = (
"{} animation containers selected:\n\n{}\n\nOnly select 1 of type "
"\"animation\" or \"pointcache\".".format(
len(source_containers), source_container_namespaces
)
)
if len(source_containers) != 1:
self.display_warning(message)
return
source_object = source_containers[0]["objectName"]
        # Collect matching geometry transforms based on the cbId attribute.
target_containers = []
for product_type, containers in containers_by_product_type.items():
if product_type in ["animation", "pointcache"]:
continue
target_containers.extend(containers)
source_data = self.get_container_data(source_object)
matches = []
node_types = set()
for target_container in target_containers:
target_data = self.get_container_data(
target_container["objectName"]
)
node_types.update(target_data["node_types"])
for id, transform in target_data["ids"].items():
source_match = source_data["ids"].get(id)
if source_match:
matches.append([source_match, transform])
# Message user about what is about to happen.
if not matches:
self.display_warning("No matching geometries found.")
return
message = "Connecting geometries:\n\n"
for match in matches:
message += "{} > {}\n".format(match[0], match[1])
choice = self.display_warning(message, show_cancel=True)
if choice is False:
return
# Setup live worldspace blendshape connection.
for source, target in matches:
blendshape = cmds.blendShape(source, target)[0]
cmds.setAttr(blendshape + ".origin", 0)
cmds.setAttr(blendshape + "." + target.split(":")[-1], 1)
# Update Xgen if in any of the containers.
if "xgmPalette" in node_types:
cmds.xgmPreview()
def get_container_data(self, container):
"""Collects data about the container nodes.
Args:
container (dict): Container instance.
Returns:
data (dict):
"node_types": All node types in container nodes.
"ids": If the node is a mesh, we collect its parent transform
id.
"""
data = {"node_types": set(), "ids": {}}
ref_node = cmds.sets(container, query=True, nodesOnly=True)[0]
for node in cmds.referenceQuery(ref_node, nodes=True):
node_type = cmds.nodeType(node)
data["node_types"].add(node_type)
# Only interested in mesh transforms for connecting geometry with
# blendshape.
if node_type != "mesh":
continue
transform = cmds.listRelatives(node, parent=True)[0]
data["ids"][get_id(transform)] = transform
return data
def display_warning(self, message, show_cancel=False):
"""Show feedback to user.
Returns:
bool
"""
from qtpy import QtWidgets
accept = QtWidgets.QMessageBox.Ok
if show_cancel:
buttons = accept | QtWidgets.QMessageBox.Cancel
else:
buttons = accept
state = QtWidgets.QMessageBox.warning(
None,
"",
message,
buttons=buttons,
defaultButton=accept
)
return state == accept

@ -1,174 +0,0 @@
from maya import cmds
import xgenm
from ayon_core.pipeline import (
InventoryAction,
get_repres_contexts,
get_representation_path,
)
class ConnectXgen(InventoryAction):
"""Connect Xgen with an animation or pointcache.
"""
label = "Connect Xgen"
icon = "link"
color = "white"
def process(self, containers):
# Validate selection is more than 1.
message = (
"Only 1 container selected. 2+ containers needed for this action."
)
if len(containers) == 1:
self.display_warning(message)
return
# Categorize containers by product type.
containers_by_product_type = {}
repre_ids = {
container["representation"]
for container in containers
}
repre_contexts_by_id = get_repres_contexts(repre_ids)
for container in containers:
repre_id = container["representation"]
repre_context = repre_contexts_by_id[repre_id]
product_type = repre_context["product"]["productType"]
containers_by_product_type.setdefault(product_type, [])
containers_by_product_type[product_type].append(container)
# Validate to only 1 source container.
source_containers = containers_by_product_type.get("animation", [])
source_containers += containers_by_product_type.get("pointcache", [])
source_container_namespaces = [
x["namespace"] for x in source_containers
]
message = (
"{} animation containers selected:\n\n{}\n\nOnly select 1 of type "
"\"animation\" or \"pointcache\".".format(
len(source_containers), source_container_namespaces
)
)
if len(source_containers) != 1:
self.display_warning(message)
return
source_container = source_containers[0]
source_repre_id = source_container["representation"]
source_object = source_container["objectName"]
# Validate source representation is an alembic.
source_path = get_representation_path(
repre_contexts_by_id[source_repre_id]["representation"]
).replace("\\", "/")
message = "Animation container \"{}\" is not an alembic:\n{}".format(
source_container["namespace"], source_path
)
if not source_path.endswith(".abc"):
self.display_warning(message)
return
# Target containers.
target_containers = []
for product_type, containers in containers_by_product_type.items():
if product_type in ["animation", "pointcache"]:
continue
target_containers.extend(containers)
# Inform user of connections from source representation to target
# descriptions.
descriptions_data = []
connections_msg = ""
for target_container in target_containers:
reference_node = cmds.sets(
target_container["objectName"], query=True
)[0]
palettes = cmds.ls(
cmds.referenceQuery(reference_node, nodes=True),
type="xgmPalette"
)
for palette in palettes:
for description in xgenm.descriptions(palette):
descriptions_data.append([palette, description])
connections_msg += "\n{}/{}".format(palette, description)
message = "Connecting \"{}\" to:\n".format(
source_container["namespace"]
)
message += connections_msg
choice = self.display_warning(message, show_cancel=True)
if choice is False:
return
# Recreate "xgenContainers" attribute to reset.
compound_name = "xgenContainers"
attr = "{}.{}".format(source_object, compound_name)
if cmds.objExists(attr):
cmds.deleteAttr(attr)
cmds.addAttr(
source_object,
longName=compound_name,
attributeType="compound",
numberOfChildren=1,
multi=True
)
# Connect target containers.
for target_container in target_containers:
cmds.addAttr(
source_object,
longName="container",
attributeType="message",
parent=compound_name
)
index = target_containers.index(target_container)
cmds.connectAttr(
target_container["objectName"] + ".message",
source_object + ".{}[{}].container".format(
compound_name, index
)
)
# Setup cache on Xgen
object = "SplinePrimitive"
for palette, description in descriptions_data:
xgenm.setAttr("useCache", "true", palette, description, object)
xgenm.setAttr("liveMode", "false", palette, description, object)
xgenm.setAttr(
"cacheFileName", source_path, palette, description, object
)
# Refresh UI and viewport.
de = xgenm.xgGlobal.DescriptionEditor
de.refresh("Full")
def display_warning(self, message, show_cancel=False):
"""Show feedback to user.
Returns:
bool
"""
from qtpy import QtWidgets
accept = QtWidgets.QMessageBox.Ok
if show_cancel:
buttons = accept | QtWidgets.QMessageBox.Cancel
else:
buttons = accept
state = QtWidgets.QMessageBox.warning(
None,
"",
message,
buttons=buttons,
defaultButton=accept
)
return state == accept


@@ -1,187 +0,0 @@
import os
import json
from collections import defaultdict
from maya import cmds
from ayon_core.pipeline import (
InventoryAction,
get_repres_contexts,
get_representation_path,
)
from ayon_maya.api.lib import get_container_members, get_id
class ConnectYetiRig(InventoryAction):
"""Connect Yeti Rig with an animation or pointcache."""
label = "Connect Yeti Rig"
icon = "link"
color = "white"
def process(self, containers):
# Validate selection is more than 1.
message = (
"Only 1 container selected. 2+ containers needed for this action."
)
if len(containers) == 1:
self.display_warning(message)
return
# Categorize containers by product type.
containers_by_product_type = defaultdict(list)
repre_ids = {
container["representation"]
for container in containers
}
repre_contexts_by_id = get_repres_contexts(repre_ids)
for container in containers:
repre_id = container["representation"]
repre_context = repre_contexts_by_id[repre_id]
product_type = repre_context["product"]["productType"]
containers_by_product_type.setdefault(product_type, [])
containers_by_product_type[product_type].append(container)
# Validate to only 1 source container.
source_containers = containers_by_product_type.get("animation", [])
source_containers += containers_by_product_type.get("pointcache", [])
source_container_namespaces = [
x["namespace"] for x in source_containers
]
message = (
"{} animation containers selected:\n\n{}\n\nOnly select 1 of type "
"\"animation\" or \"pointcache\".".format(
len(source_containers), source_container_namespaces
)
)
if len(source_containers) != 1:
self.display_warning(message)
return
source_container = source_containers[0]
source_ids = self.nodes_by_id(source_container)
# Target containers.
target_ids = {}
inputs = []
yeti_rig_containers = containers_by_product_type.get("yetiRig")
if not yeti_rig_containers:
self.display_warning(
"Select at least one yetiRig container"
)
return
for container in yeti_rig_containers:
target_ids.update(self.nodes_by_id(container))
repre_id = container["representation"]
maya_file = get_representation_path(
repre_contexts_by_id[repre_id]["representation"]
)
_, ext = os.path.splitext(maya_file)
settings_file = maya_file.replace(ext, ".rigsettings")
if not os.path.exists(settings_file):
continue
with open(settings_file) as f:
inputs.extend(json.load(f)["inputs"])
# Compare loaded connections to scene.
for input in inputs:
source_node = source_ids.get(input["sourceID"])
target_node = target_ids.get(input["destinationID"])
if not source_node or not target_node:
self.log.debug(
"Could not find nodes for input:\n" +
json.dumps(input, indent=4, sort_keys=True)
)
continue
source_attr, target_attr = input["connections"]
if not cmds.attributeQuery(
source_attr, node=source_node, exists=True
):
self.log.debug(
"Could not find attribute {} on node {} for "
"input:\n{}".format(
source_attr,
source_node,
json.dumps(input, indent=4, sort_keys=True)
)
)
continue
if not cmds.attributeQuery(
target_attr, node=target_node, exists=True
):
self.log.debug(
"Could not find attribute {} on node {} for "
"input:\n{}".format(
target_attr,
target_node,
json.dumps(input, indent=4, sort_keys=True)
)
)
continue
source_plug = "{}.{}".format(
source_node, source_attr
)
target_plug = "{}.{}".format(
target_node, target_attr
)
if cmds.isConnected(
source_plug, target_plug, ignoreUnitConversion=True
):
self.log.debug(
"Connection already exists: {} -> {}".format(
source_plug, target_plug
)
)
continue
cmds.connectAttr(source_plug, target_plug, force=True)
self.log.debug(
"Connected attributes: {} -> {}".format(
source_plug, target_plug
)
)
def nodes_by_id(self, container):
ids = {}
for member in get_container_members(container):
id = get_id(member)
if not id:
continue
ids[id] = member
return ids
def display_warning(self, message, show_cancel=False):
"""Show feedback to user.
Returns:
bool
"""
from qtpy import QtWidgets
accept = QtWidgets.QMessageBox.Ok
if show_cancel:
buttons = accept | QtWidgets.QMessageBox.Cancel
else:
buttons = accept
state = QtWidgets.QMessageBox.warning(
None,
"",
message,
buttons=buttons,
defaultButton=accept
)
return state == accept


@@ -1,169 +0,0 @@
import re
import json
import ayon_api
from ayon_core.pipeline.load import get_representation_contexts_by_ids
from ayon_core.pipeline import (
InventoryAction,
get_current_project_name,
)
from ayon_maya.api.lib import (
maintained_selection,
apply_shaders
)
class ImportModelRender(InventoryAction):
label = "Import Model Render Sets"
icon = "industry"
color = "#55DDAA"
scene_type_regex = "meta.render.m[ab]"
look_data_type = "meta.render.json"
@staticmethod
def is_compatible(container):
return (
container.get("loader") == "ReferenceLoader"
and container.get("name", "").startswith("model")
)
def process(self, containers):
from maya import cmds # noqa: F401
# --- Query entities that will be used ---
project_name = get_current_project_name()
# Collect representation ids from all containers
repre_ids = {
container["representation"]
for container in containers
}
# Create mapping of representation id to version id
# - used in containers loop
version_id_by_repre_id = {
repre_entity["id"]: repre_entity["versionId"]
for repre_entity in ayon_api.get_representations(
project_name,
representation_ids=repre_ids,
fields={"id", "versionId"}
)
}
# Find all representations of the versions
version_ids = set(version_id_by_repre_id.values())
repre_entities = ayon_api.get_representations(
project_name,
version_ids=version_ids,
fields={"id", "name", "versionId"}
)
repre_entities_by_version_id = {
version_id: []
for version_id in version_ids
}
for repre_entity in repre_entities:
version_id = repre_entity["versionId"]
repre_entities_by_version_id[version_id].append(repre_entity)
look_repres_by_version_id = {}
look_repre_ids = set()
for version_id, repre_entities in (
repre_entities_by_version_id.items()
):
json_repre = None
look_repres = []
scene_type_regex = re.compile(self.scene_type_regex)
for repre_entity in repre_entities:
repre_name = repre_entity["name"]
if repre_name == self.look_data_type:
json_repre = repre_entity
elif scene_type_regex.fullmatch(repre_name):
look_repres.append(repre_entity)
look_repre = look_repres[0] if look_repres else None
if look_repre:
look_repre_ids.add(look_repre["id"])
if json_repre:
look_repre_ids.add(json_repre["id"])
look_repres_by_version_id[version_id] = (json_repre, look_repre)
contexts_by_repre_id = get_representation_contexts_by_ids(
project_name, look_repre_ids
)
# --- Real process logic ---
# Loop over containers and assign the looks
for container in containers:
con_name = container["objectName"]
nodes = []
for n in cmds.sets(con_name, query=True, nodesOnly=True) or []:
if cmds.nodeType(n) == "reference":
nodes += cmds.referenceQuery(n, nodes=True)
else:
nodes.append(n)
repre_id = container["representation"]
version_id = version_id_by_repre_id.get(repre_id)
if version_id is None:
print("Representation '{}' was not found".format(repre_id))
continue
json_repre, look_repre = look_repres_by_version_id[version_id]
print("Importing render sets for model %r" % con_name)
self._assign_model_render(
nodes, json_repre, look_repre, contexts_by_repre_id
)
def _assign_model_render(
self, nodes, json_repre, look_repre, contexts_by_repre_id
):
"""Assign nodes a specific published model render data version by id.
This assumes the nodes correspond with the asset.
Args:
nodes (list): nodes to assign render data to
json_repre (dict[str, Any]): Representation entity of the json
file.
look_repre (dict[str, Any]): First representation entity of the
look files.
contexts_by_repre_id (dict[str, Any]): Mapping of representation
id to its context.
Returns:
None
"""
from maya import cmds # noqa: F401
# QUESTION shouldn't be json representation validated too?
if not look_repre:
print("No model render sets for this model version..")
return
# TODO use 'get_representation_path_with_anatomy' instead
# of 'filepath_from_context'
context = contexts_by_repre_id.get(look_repre["id"])
maya_file = self.filepath_from_context(context)
context = contexts_by_repre_id.get(json_repre["id"])
json_file = self.filepath_from_context(context)
# Import the look file
with maintained_selection():
shader_nodes = cmds.file(maya_file,
i=True, # import
returnNewNodes=True)
# imprint context data
# Load relationships
shader_relation = json_file
with open(shader_relation, "r") as f:
relationships = json.load(f)
# Assign relationships
apply_shaders(relationships, shader_nodes, nodes)


@@ -1,27 +0,0 @@
from maya import cmds
from ayon_core.pipeline import InventoryAction
from ayon_maya.api.lib import get_reference_node
class ImportReference(InventoryAction):
"""Imports selected reference to inside of the file."""
label = "Import Reference"
icon = "download"
color = "#d8d8d8"
def process(self, containers):
for container in containers:
if container["loader"] != "ReferenceLoader":
print("Not a reference, skipping")
continue
node = container["objectName"]
members = cmds.sets(node, query=True, nodesOnly=True)
ref_node = get_reference_node(members)
ref_file = cmds.referenceQuery(ref_node, f=True)
cmds.file(ref_file, importReference=True)
return True # return anything to trigger model refresh


@@ -1,44 +0,0 @@
from ayon_core.pipeline import (
InventoryAction,
get_current_project_name,
)
from ayon_core.pipeline.load import get_representation_contexts_by_ids
from ayon_maya.api.lib import (
create_rig_animation_instance,
get_container_members,
)
class RecreateRigAnimationInstance(InventoryAction):
"""Recreate animation publish instance for loaded rigs"""
label = "Recreate rig animation instance"
icon = "wrench"
color = "#888888"
@staticmethod
def is_compatible(container):
return (
container.get("loader") == "ReferenceLoader"
and container.get("name", "").startswith("rig")
)
def process(self, containers):
project_name = get_current_project_name()
repre_ids = {
container["representation"]
for container in containers
}
contexts_by_repre_id = get_representation_contexts_by_ids(
project_name, repre_ids
)
for container in containers:
# todo: delete an existing entry if it exist or skip creation
namespace = container["namespace"]
repre_id = container["representation"]
context = contexts_by_repre_id[repre_id]
nodes = get_container_members(container)
create_rig_animation_instance(nodes, context, namespace)


@@ -1,46 +0,0 @@
from maya import cmds
from ayon_core.pipeline import InventoryAction, registered_host
from ayon_maya.api.lib import get_container_members
class SelectInScene(InventoryAction):
"""Select nodes in the scene from selected containers in scene inventory"""
label = "Select in scene"
icon = "search"
color = "#888888"
order = 99
def process(self, containers):
all_members = []
for container in containers:
members = get_container_members(container)
all_members.extend(members)
cmds.select(all_members, replace=True, noExpand=True)
class HighlightBySceneSelection(InventoryAction):
"""Select containers in scene inventory from the current scene selection"""
label = "Highlight by scene selection"
icon = "search"
color = "#888888"
order = 100
def process(self, containers):
selection = set(cmds.ls(selection=True, long=True, objectsOnly=True))
host = registered_host()
to_select = []
for container in host.get_containers():
members = get_container_members(container)
if any(member in selection for member in members):
to_select.append(container["objectName"])
return {
"objectNames": to_select,
"options": {"clear": True}
}


@@ -1,103 +0,0 @@
import ayon_maya.api.plugin
import maya.cmds as cmds
def _process_reference(file_url, name, namespace, options):
"""Load files by referencing scene in Maya.
Args:
        file_url (str): filepath of the objects to be loaded
name (str): product name
namespace (str): namespace
options (dict): dict of storing the param
Returns:
list: list of object nodes
"""
from ayon_maya.api.lib import unique_namespace
# Get name from asset being loaded
# Assuming name is product name from the animation, we split the number
# suffix from the name to ensure the namespace is unique
name = name.split("_")[0]
ext = file_url.split(".")[-1]
namespace = unique_namespace(
"{}_".format(name),
format="%03d",
suffix="_{}".format(ext)
)
attach_to_root = options.get("attach_to_root", True)
group_name = options["group_name"]
# no group shall be created
if not attach_to_root:
group_name = namespace
nodes = cmds.file(file_url,
namespace=namespace,
sharedReferenceFile=False,
groupReference=attach_to_root,
groupName=group_name,
reference=True,
returnNewNodes=True)
return nodes
class AbcLoader(ayon_maya.api.plugin.ReferenceLoader):
"""Loader to reference an Alembic file"""
product_types = {
"animation",
"camera",
"pointcache",
}
representations = {"abc"}
label = "Reference animation"
order = -10
icon = "code-fork"
color = "orange"
def process_reference(self, context, name, namespace, options):
cmds.loadPlugin("AbcImport.mll", quiet=True)
# hero_001 (abc)
# asset_counter{optional}
path = self.filepath_from_context(context)
file_url = self.prepare_root_value(path,
context["project"]["name"])
nodes = _process_reference(file_url, name, namespace, options)
# load colorbleed ID attribute
self[:] = nodes
return nodes
class FbxLoader(ayon_maya.api.plugin.ReferenceLoader):
    """Loader to reference FBX files"""
product_types = {
"animation",
"camera",
}
representations = {"fbx"}
label = "Reference animation"
order = -10
icon = "code-fork"
color = "orange"
def process_reference(self, context, name, namespace, options):
cmds.loadPlugin("fbx4maya.mll", quiet=True)
path = self.filepath_from_context(context)
file_url = self.prepare_root_value(path,
context["project"]["name"])
nodes = _process_reference(file_url, name, namespace, options)
self[:] = nodes
return nodes


@@ -1,192 +0,0 @@
"""A module containing generic loader actions that will display in the Loader.
"""
import qargparse
from ayon_core.pipeline import load
from ayon_maya.api.lib import (
maintained_selection,
get_custom_namespace
)
import ayon_maya.api.plugin
class SetFrameRangeLoader(load.LoaderPlugin):
"""Set frame range excluding pre- and post-handles"""
product_types = {
"animation",
"camera",
"proxyAbc",
"pointcache",
}
representations = {"abc"}
label = "Set frame range"
order = 11
icon = "clock-o"
color = "white"
def load(self, context, name, namespace, data):
import maya.cmds as cmds
version_attributes = context["version"]["attrib"]
start = version_attributes.get("frameStart")
end = version_attributes.get("frameEnd")
if start is None or end is None:
print("Skipping setting frame range because start or "
"end frame data is missing..")
return
cmds.playbackOptions(minTime=start,
maxTime=end,
animationStartTime=start,
animationEndTime=end)
class SetFrameRangeWithHandlesLoader(load.LoaderPlugin):
"""Set frame range including pre- and post-handles"""
product_types = {
"animation",
"camera",
"proxyAbc",
"pointcache",
}
representations = {"abc"}
label = "Set frame range (with handles)"
order = 12
icon = "clock-o"
color = "white"
def load(self, context, name, namespace, data):
import maya.cmds as cmds
version_attributes = context["version"]["attrib"]
start = version_attributes.get("frameStart")
end = version_attributes.get("frameEnd")
if start is None or end is None:
print("Skipping setting frame range because start or "
"end frame data is missing..")
return
# Include handles
start -= version_attributes.get("handleStart", 0)
end += version_attributes.get("handleEnd", 0)
cmds.playbackOptions(minTime=start,
maxTime=end,
animationStartTime=start,
animationEndTime=end)
class ImportMayaLoader(ayon_maya.api.plugin.Loader):
"""Import action for Maya (unmanaged)
Warning:
The loaded content will be unmanaged and is *not* visible in the
scene inventory. It's purely intended to merge content into your scene
so you could also use it as a new base.
"""
representations = {"ma", "mb", "obj"}
product_types = {
"model",
"pointcache",
"proxyAbc",
"animation",
"mayaAscii",
"mayaScene",
"setdress",
"layout",
"camera",
"rig",
"camerarig",
"staticMesh",
"workfile",
}
label = "Import"
order = 10
icon = "arrow-circle-down"
color = "#775555"
options = [
qargparse.Boolean(
"clean_import",
label="Clean import",
default=False,
help="Should all occurrences of cbId be purged?"
)
]
@classmethod
def apply_settings(cls, project_settings):
super(ImportMayaLoader, cls).apply_settings(project_settings)
cls.enabled = cls.load_settings["import_loader"].get("enabled", True)
def load(self, context, name=None, namespace=None, data=None):
import maya.cmds as cmds
choice = self.display_warning()
if choice is False:
return
custom_group_name, custom_namespace, options = \
self.get_custom_namespace_and_group(context, data,
"import_loader")
namespace = get_custom_namespace(custom_namespace)
if not options.get("attach_to_root", True):
custom_group_name = namespace
path = self.filepath_from_context(context)
with maintained_selection():
nodes = cmds.file(path,
i=True,
preserveReferences=True,
namespace=namespace,
returnNewNodes=True,
groupReference=options.get("attach_to_root",
True),
groupName=custom_group_name)
if data.get("clean_import", False):
remove_attributes = ["cbId"]
for node in nodes:
for attr in remove_attributes:
if cmds.attributeQuery(attr, node=node, exists=True):
full_attr = "{}.{}".format(node, attr)
print("Removing {}".format(full_attr))
cmds.deleteAttr(full_attr)
# We do not containerize imported content, it remains unmanaged
return
def display_warning(self):
"""Show warning to ensure the user can't import models by accident
Returns:
bool
"""
from qtpy import QtWidgets
accept = QtWidgets.QMessageBox.Ok
buttons = accept | QtWidgets.QMessageBox.Cancel
        message = "Are you sure you want to import this?"
state = QtWidgets.QMessageBox.warning(None,
"Are you sure?",
message,
buttons=buttons,
defaultButton=accept)
return state == accept


@@ -1,237 +0,0 @@
import os
import clique
import maya.cmds as cmds
from ayon_core.pipeline import get_representation_path
from ayon_core.settings import get_project_settings
from ayon_maya.api.lib import (
get_attribute_input,
get_fps_for_current_context,
maintained_selection,
unique_namespace,
)
from ayon_maya.api.pipeline import containerise
from ayon_maya.api.plugin import get_load_color_for_product_type
from ayon_maya.api import plugin
def is_sequence(files):
sequence = False
collections, remainder = clique.assemble(files, minimum_items=1)
if collections:
sequence = True
return sequence
class ArnoldStandinLoader(plugin.Loader):
"""Load as Arnold standin"""
product_types = {
"ass",
"assProxy",
"animation",
"model",
"proxyAbc",
"pointcache",
"usd"
}
representations = {"ass", "abc", "usda", "usdc", "usd"}
label = "Load as Arnold standin"
order = -5
icon = "code-fork"
color = "orange"
def load(self, context, name, namespace, options):
if not cmds.pluginInfo("mtoa", query=True, loaded=True):
cmds.loadPlugin("mtoa")
# Create defaultArnoldRenderOptions before creating aiStandin
# which tries to connect it. Since we load the plugin and directly
# create aiStandin without the defaultArnoldRenderOptions,
# we need to create the render options for aiStandin creation.
from mtoa.core import createOptions
createOptions()
import mtoa.ui.arnoldmenu
version_attributes = context["version"]["attrib"]
self.log.info("version_attributes: {}\n".format(version_attributes))
folder_name = context["folder"]["name"]
namespace = namespace or unique_namespace(
folder_name + "_",
prefix="_" if folder_name[0].isdigit() else "",
suffix="_",
)
# Root group
label = "{}:{}".format(namespace, name)
root = cmds.group(name=label, empty=True)
# Set color.
settings = get_project_settings(context["project"]["name"])
color = get_load_color_for_product_type("ass", settings)
if color is not None:
red, green, blue = color
cmds.setAttr(root + ".useOutlinerColor", True)
cmds.setAttr(
root + ".outlinerColor", red, green, blue
)
with maintained_selection():
# Create transform with shape
transform_name = label + "_standin"
standin_shape = mtoa.ui.arnoldmenu.createStandIn()
standin = cmds.listRelatives(standin_shape, parent=True)[0]
standin = cmds.rename(standin, transform_name)
standin_shape = cmds.listRelatives(standin, shapes=True)[0]
cmds.parent(standin, root)
# Set the standin filepath
repre_path = self.filepath_from_context(context)
path, operator = self._setup_proxy(
standin_shape, repre_path, namespace
)
cmds.setAttr(standin_shape + ".dso", path, type="string")
sequence = is_sequence(os.listdir(os.path.dirname(repre_path)))
cmds.setAttr(standin_shape + ".useFrameExtension", sequence)
fps = (
version_attributes.get("fps") or get_fps_for_current_context()
)
cmds.setAttr(standin_shape + ".abcFPS", float(fps))
nodes = [root, standin, standin_shape]
if operator is not None:
nodes.append(operator)
self[:] = nodes
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def get_next_free_multi_index(self, attr_name):
"""Find the next unconnected multi index at the input attribute."""
for index in range(10000000):
connection_info = cmds.connectionInfo(
"{}[{}]".format(attr_name, index),
sourceFromDestination=True
)
if len(connection_info or []) == 0:
return index
def _get_proxy_path(self, path):
basename_split = os.path.basename(path).split(".")
proxy_basename = (
basename_split[0] + "_proxy." + ".".join(basename_split[1:])
)
proxy_path = "/".join([os.path.dirname(path), proxy_basename])
return proxy_basename, proxy_path
def _update_operators(self, string_replace_operator, proxy_basename, path):
cmds.setAttr(
string_replace_operator + ".match",
proxy_basename.split(".")[0],
type="string"
)
cmds.setAttr(
string_replace_operator + ".replace",
os.path.basename(path).split(".")[0],
type="string"
)
def _setup_proxy(self, shape, path, namespace):
proxy_basename, proxy_path = self._get_proxy_path(path)
options_node = "defaultArnoldRenderOptions"
merge_operator = get_attribute_input(options_node + ".operator")
if merge_operator is None:
merge_operator = cmds.createNode("aiMerge")
cmds.connectAttr(
merge_operator + ".message", options_node + ".operator"
)
merge_operator = merge_operator.split(".")[0]
string_replace_operator = cmds.createNode(
"aiStringReplace", name=namespace + ":string_replace_operator"
)
node_type = "alembic" if path.endswith(".abc") else "procedural"
cmds.setAttr(
string_replace_operator + ".selection",
"*.(@node=='{}')".format(node_type),
type="string"
)
self._update_operators(string_replace_operator, proxy_basename, path)
cmds.connectAttr(
string_replace_operator + ".out",
"{}.inputs[{}]".format(
merge_operator,
self.get_next_free_multi_index(merge_operator + ".inputs")
)
)
# We setup the string operator no matter whether there is a proxy or
# not. This makes it easier to update since the string operator will
# always be created. Return original path to use for standin.
if not os.path.exists(proxy_path):
return path, string_replace_operator
return proxy_path, string_replace_operator
def update(self, container, context):
# Update the standin
members = cmds.sets(container['objectName'], query=True)
for member in members:
if cmds.nodeType(member) == "aiStringReplace":
string_replace_operator = member
shapes = cmds.listRelatives(member, shapes=True)
if not shapes:
continue
if cmds.nodeType(shapes[0]) == "aiStandIn":
standin = shapes[0]
repre_entity = context["representation"]
path = get_representation_path(repre_entity)
proxy_basename, proxy_path = self._get_proxy_path(path)
# Whether there is proxy or not, we still update the string operator.
# If no proxy exists, the string operator won't replace anything.
self._update_operators(string_replace_operator, proxy_basename, path)
dso_path = path
if os.path.exists(proxy_path):
dso_path = proxy_path
cmds.setAttr(standin + ".dso", dso_path, type="string")
sequence = is_sequence(os.listdir(os.path.dirname(path)))
cmds.setAttr(standin + ".useFrameExtension", sequence)
cmds.setAttr(
container["objectName"] + ".representation",
repre_entity["id"],
type="string"
)
def switch(self, container, context):
self.update(container, context)
def remove(self, container):
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass


@@ -1,33 +0,0 @@
from ayon_core.lib import BoolDef
from ayon_core.pipeline import registered_host
from ayon_maya.api import plugin
from ayon_maya.api.workfile_template_builder import MayaTemplateBuilder
class LoadAsTemplate(plugin.Loader):
"""Load workfile as a template """
product_types = {"workfile", "mayaScene"}
label = "Load as template"
representations = ["ma", "mb"]
icon = "wrench"
color = "#775555"
order = 10
options = [
BoolDef("keep_placeholders",
label="Keep Placeholders",
default=False),
BoolDef("create_first_version",
label="Create First Version",
default=False),
]
def load(self, context, name, namespace, data):
keep_placeholders = data.get("keep_placeholders", False)
create_first_version = data.get("create_first_version", False)
path = self.filepath_from_context(context)
builder = MayaTemplateBuilder(registered_host())
builder.build_template(template_path=path,
keep_placeholders=keep_placeholders,
create_first_version=create_first_version)


@@ -1,75 +0,0 @@
import maya.cmds as cmds
from ayon_core.pipeline import remove_container
from ayon_maya.api import setdress
from ayon_maya.api.lib import unique_namespace
from ayon_maya.api.pipeline import containerise
from ayon_maya.api import plugin
class AssemblyLoader(plugin.Loader):
product_types = {"assembly"}
representations = {"json"}
label = "Load Set Dress"
order = -9
icon = "code-fork"
color = "orange"
def load(self, context, name, namespace, data):
folder_name = context["folder"]["name"]
namespace = namespace or unique_namespace(
folder_name + "_",
prefix="_" if folder_name[0].isdigit() else "",
suffix="_",
)
containers = setdress.load_package(
filepath=self.filepath_from_context(context),
name=name,
namespace=namespace
)
self[:] = containers
# Only containerize if any nodes were loaded by the Loader
nodes = self[:]
if not nodes:
return
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, context):
return setdress.update_package(container, context)
def remove(self, container):
"""Remove all sub containers"""
# Remove all members
member_containers = setdress.get_contained_containers(container)
for member_container in member_containers:
self.log.info("Removing container %s",
member_container['objectName'])
remove_container(member_container)
# Remove alembic hierarchy reference
# TODO: Check whether removing all contained references is safe enough
members = cmds.sets(container['objectName'], query=True) or []
references = cmds.ls(members, type="reference")
for reference in references:
self.log.info("Removing %s", reference)
fname = cmds.referenceQuery(reference, filename=True)
cmds.file(fname, removeReference=True)
# Delete container and its contents
if cmds.objExists(container['objectName']):
members = cmds.sets(container['objectName'], query=True) or []
cmds.delete([container['objectName']] + members)
# TODO: Ensure namespace is gone


@@ -1,111 +0,0 @@
from ayon_core.pipeline import get_representation_path
from ayon_maya.api.lib import get_container_members, unique_namespace
from ayon_maya.api.pipeline import containerise
from ayon_maya.api import plugin
from maya import cmds, mel
class AudioLoader(plugin.Loader):
"""Specific loader of audio."""
product_types = {"audio"}
label = "Load audio"
representations = {"wav"}
icon = "volume-up"
color = "orange"
def load(self, context, name, namespace, data):
start_frame = cmds.playbackOptions(query=True, min=True)
sound_node = cmds.sound(
file=self.filepath_from_context(context), offset=start_frame
)
cmds.timeControl(
mel.eval("$gPlayBackSlider=$gPlayBackSlider"),
edit=True,
sound=sound_node,
displaySound=True
)
folder_name = context["folder"]["name"]
namespace = namespace or unique_namespace(
folder_name + "_",
prefix="_" if folder_name[0].isdigit() else "",
suffix="_",
)
return containerise(
name=name,
namespace=namespace,
nodes=[sound_node],
context=context,
loader=self.__class__.__name__
)
def update(self, container, context):
repre_entity = context["representation"]
members = get_container_members(container)
audio_nodes = cmds.ls(members, type="audio")
        # cmds.ls returns a list, so check for emptiness rather than None
        assert audio_nodes, "Audio node not found."
audio_node = audio_nodes[0]
current_sound = cmds.timeControl(
mel.eval("$gPlayBackSlider=$gPlayBackSlider"),
query=True,
sound=True
)
activate_sound = current_sound == audio_node
path = get_representation_path(repre_entity)
cmds.sound(
audio_node,
edit=True,
file=path
)
# The source start + end does not automatically update itself to the
        # length of the new audio file, even though Maya does do that when
# creating a new audio node. So to update we compute it manually.
# This would however override any source start and source end a user
# might have done on the original audio node after load.
audio_frame_count = cmds.getAttr("{}.frameCount".format(audio_node))
audio_sample_rate = cmds.getAttr("{}.sampleRate".format(audio_node))
duration_in_seconds = audio_frame_count / audio_sample_rate
fps = mel.eval('currentTimeUnitToFPS()') # workfile FPS
source_start = 0
source_end = (duration_in_seconds * fps)
cmds.setAttr("{}.sourceStart".format(audio_node), source_start)
cmds.setAttr("{}.sourceEnd".format(audio_node), source_end)
if activate_sound:
# maya by default deactivates it from timeline on file change
cmds.timeControl(
mel.eval("$gPlayBackSlider=$gPlayBackSlider"),
edit=True,
sound=audio_node,
displaySound=True
)
cmds.setAttr(
container["objectName"] + ".representation",
repre_entity["id"],
type="string"
)
def switch(self, container, context):
self.update(container, context)
def remove(self, container):
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass


@ -1,101 +0,0 @@
import maya.cmds as cmds
from ayon_core.pipeline import get_representation_path
from ayon_core.settings import get_project_settings
from ayon_maya.api.lib import unique_namespace
from ayon_maya.api.pipeline import containerise
from ayon_maya.api import plugin
from ayon_maya.api.plugin import get_load_color_for_product_type
class GpuCacheLoader(plugin.Loader):
"""Load Alembic as gpuCache"""
product_types = {"model", "animation", "proxyAbc", "pointcache"}
representations = {"abc", "gpu_cache"}
label = "Load Gpu Cache"
order = -5
icon = "code-fork"
color = "orange"
def load(self, context, name, namespace, data):
folder_name = context["folder"]["name"]
namespace = namespace or unique_namespace(
folder_name + "_",
prefix="_" if folder_name[0].isdigit() else "",
suffix="_",
)
cmds.loadPlugin("gpuCache", quiet=True)
# Root group
label = "{}:{}".format(namespace, name)
root = cmds.group(name=label, empty=True)
project_name = context["project"]["name"]
settings = get_project_settings(project_name)
color = get_load_color_for_product_type("model", settings)
if color is not None:
red, green, blue = color
cmds.setAttr(root + ".useOutlinerColor", 1)
cmds.setAttr(
root + ".outlinerColor", red, green, blue
)
# Create transform with shape
transform_name = label + "_GPU"
transform = cmds.createNode("transform", name=transform_name,
parent=root)
cache = cmds.createNode("gpuCache",
parent=transform,
name="{0}Shape".format(transform_name))
# Set the cache filepath
path = self.filepath_from_context(context)
cmds.setAttr(cache + '.cacheFileName', path, type="string")
cmds.setAttr(cache + '.cacheGeomPath', "|", type="string") # root
# Lock parenting of the transform and cache
cmds.lockNode([transform, cache], lock=True)
nodes = [root, transform, cache]
self[:] = nodes
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, context):
repre_entity = context["representation"]
path = get_representation_path(repre_entity)
# Update the cache
members = cmds.sets(container['objectName'], query=True)
caches = cmds.ls(members, type="gpuCache", long=True)
        assert len(caches) == 1, "Expected exactly one gpuCache in container"
for cache in caches:
cmds.setAttr(cache + ".cacheFileName", path, type="string")
cmds.setAttr(container["objectName"] + ".representation",
repre_entity["id"],
type="string")
def switch(self, container, context):
self.update(container, context)
def remove(self, container):
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass


@ -1,330 +0,0 @@
import copy
from ayon_core.lib import EnumDef
from ayon_core.pipeline import get_current_host_name
from ayon_core.pipeline.colorspace import (
get_current_context_imageio_config_preset,
get_imageio_file_rules,
get_imageio_file_rules_colorspace_from_filepath,
)
from ayon_core.pipeline.load.utils import get_representation_path_from_context
from ayon_core.settings import get_project_settings
from ayon_maya.api.lib import namespaced, unique_namespace
from ayon_maya.api.pipeline import containerise
from ayon_maya.api import plugin
from maya import cmds
def create_texture():
"""Create place2dTexture with file node with uv connections
Mimics Maya "file [Texture]" creation.
"""
place = cmds.shadingNode("place2dTexture", asUtility=True, name="place2d")
file = cmds.shadingNode("file", asTexture=True, name="file")
connections = ["coverage", "translateFrame", "rotateFrame", "rotateUV",
"mirrorU", "mirrorV", "stagger", "wrapV", "wrapU",
"repeatUV", "offset", "noiseUV", "vertexUvThree",
"vertexUvTwo", "vertexUvOne", "vertexCameraOne"]
for attr in connections:
src = "{}.{}".format(place, attr)
dest = "{}.{}".format(file, attr)
cmds.connectAttr(src, dest)
cmds.connectAttr(place + '.outUV', file + '.uvCoord')
cmds.connectAttr(place + '.outUvFilterSize', file + '.uvFilterSize')
return file, place
def create_projection():
"""Create texture with place3dTexture and projection
Mimics Maya "file [Projection]" creation.
"""
file, place = create_texture()
projection = cmds.shadingNode("projection", asTexture=True,
name="projection")
place3d = cmds.shadingNode("place3dTexture", asUtility=True,
name="place3d")
cmds.connectAttr(place3d + '.worldInverseMatrix[0]',
projection + ".placementMatrix")
cmds.connectAttr(file + '.outColor', projection + ".image")
return file, place, projection, place3d
def create_stencil():
"""Create texture with extra place2dTexture offset and stencil
Mimics Maya "file [Stencil]" creation.
"""
file, place = create_texture()
place_stencil = cmds.shadingNode("place2dTexture", asUtility=True,
name="place2d_stencil")
stencil = cmds.shadingNode("stencil", asTexture=True, name="stencil")
for src_attr, dest_attr in [
("outUV", "uvCoord"),
("outUvFilterSize", "uvFilterSize")
]:
src_plug = "{}.{}".format(place_stencil, src_attr)
cmds.connectAttr(src_plug, "{}.{}".format(place, dest_attr))
cmds.connectAttr(src_plug, "{}.{}".format(stencil, dest_attr))
return file, place, stencil, place_stencil
class FileNodeLoader(plugin.Loader):
"""File node loader."""
product_types = {"image", "plate", "render"}
label = "Load file node"
representations = {"exr", "tif", "png", "jpg"}
icon = "image"
color = "orange"
order = 2
options = [
EnumDef(
"mode",
items={
"texture": "Texture",
"projection": "Projection",
"stencil": "Stencil"
},
default="texture",
label="Texture Mode"
)
]
def load(self, context, name, namespace, data):
folder_name = context["folder"]["name"]
namespace = namespace or unique_namespace(
folder_name + "_",
prefix="_" if folder_name[0].isdigit() else "",
suffix="_",
)
with namespaced(namespace, new=True) as namespace:
# Create the nodes within the namespace
nodes = {
"texture": create_texture,
"projection": create_projection,
"stencil": create_stencil
}[data.get("mode", "texture")]()
file_node = cmds.ls(nodes, type="file")[0]
self._apply_representation_context(context, file_node)
# For ease of access for the user select all the nodes and select
# the file node last so that UI shows its attributes by default
cmds.select(list(nodes) + [file_node], replace=True)
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__
)
def update(self, container, context):
repre_entity = context["representation"]
members = cmds.sets(container['objectName'], query=True)
file_node = cmds.ls(members, type="file")[0]
self._apply_representation_context(context, file_node)
# Update representation
cmds.setAttr(
container["objectName"] + ".representation",
repre_entity["id"],
type="string"
)
def switch(self, container, context):
self.update(container, context)
def remove(self, container):
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass
def _apply_representation_context(self, context, file_node):
"""Update the file node to match the context.
This sets the file node's attributes for:
- file path
- udim tiling mode (if it is an udim tile)
- use frame extension (if it is a sequence)
- colorspace
"""
repre_context = context["representation"]["context"]
has_frames = repre_context.get("frame") is not None
has_udim = repre_context.get("udim") is not None
# Set UV tiling mode if UDIM tiles
if has_udim:
cmds.setAttr(file_node + ".uvTilingMode", 3) # UDIM-tiles
else:
cmds.setAttr(file_node + ".uvTilingMode", 0) # off
# Enable sequence if publish has `startFrame` and `endFrame` and
# `startFrame != endFrame`
if has_frames and self._is_sequence(context):
# When enabling useFrameExtension maya automatically
# connects an expression to <file>.frameExtension to set
# the current frame. However, this expression is generated
# with some delay and thus it'll show a warning if frame 0
# doesn't exist because we're explicitly setting the <f>
# token.
cmds.setAttr(file_node + ".useFrameExtension", True)
else:
cmds.setAttr(file_node + ".useFrameExtension", False)
# Set the file node path attribute
path = self._format_path(context)
cmds.setAttr(file_node + ".fileTextureName", path, type="string")
# Set colorspace
colorspace = self._get_colorspace(context)
if colorspace:
cmds.setAttr(file_node + ".colorSpace", colorspace, type="string")
else:
self.log.debug("Unknown colorspace - setting colorspace skipped.")
def _is_sequence(self, context):
"""Check whether frameStart and frameEnd are not the same."""
version = context["version"]
representation = context["representation"]
# TODO this is invalid logic, it should be based only on
# representation entity
for entity in [representation, version]:
# Frame range can be set on version or representation.
# When set on representation it overrides version data.
attributes = entity["attrib"]
data = entity["data"]
start = data.get("frameStartHandle", attributes.get("frameStart"))
end = data.get("frameEndHandle", attributes.get("frameEnd"))
if start is None or end is None:
continue
if start != end:
return True
else:
return False
return False
def _get_colorspace(self, context):
"""Return colorspace of the file to load.
Retrieves the explicit colorspace from the publish. If no colorspace
data is stored with published content then project imageio settings
are used to make an assumption of the colorspace based on the file
rules. If no file rules match then None is returned.
Returns:
str or None: The colorspace of the file or None if not detected.
"""
# We can't apply color spaces if management is not enabled
if not cmds.colorManagementPrefs(query=True, cmEnabled=True):
return
representation = context["representation"]
colorspace_data = representation.get("data", {}).get("colorspaceData")
if colorspace_data:
return colorspace_data["colorspace"]
# Assume colorspace from filepath based on project settings
project_name = context["project"]["name"]
host_name = get_current_host_name()
project_settings = get_project_settings(project_name)
config_data = get_current_context_imageio_config_preset(
project_settings=project_settings
)
# ignore if host imageio is not enabled
if not config_data:
return
file_rules = get_imageio_file_rules(
project_name, host_name,
project_settings=project_settings
)
path = get_representation_path_from_context(context)
colorspace = get_imageio_file_rules_colorspace_from_filepath(
path,
host_name,
project_name,
config_data=config_data,
file_rules=file_rules,
project_settings=project_settings
)
return colorspace
def _format_path(self, context):
"""Format the path with correct tokens for frames and udim tiles."""
context = copy.deepcopy(context)
representation = context["representation"]
template = representation.get("attrib", {}).get("template")
if not template:
# No template to find token locations for
return get_representation_path_from_context(context)
def _placeholder(key):
# Substitute with a long placeholder value so that potential
# custom formatting with padding doesn't find its way into
# our formatting, so that <f> wouldn't be padded as 0<f>
return "___{}___".format(key)
# We format UDIM and Frame numbers with their specific tokens. To do so
# we in-place change the representation context data to format the path
# with our own data
tokens = {
"frame": "<f>",
"udim": "<UDIM>"
}
has_tokens = False
repre_context = representation["context"]
for key, _token in tokens.items():
if key in repre_context:
repre_context[key] = _placeholder(key)
has_tokens = True
        # Reapply the template; the representation context now holds the
        # placeholder values for any token keys
        representation["attrib"]["template"] = template
path = get_representation_path_from_context(context)
if has_tokens:
for key, token in tokens.items():
if key in repre_context:
path = path.replace(_placeholder(key), token)
return path
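The placeholder round-trip in `_format_path` can be reproduced without the pipeline. This sketch assumes a simplified `str.format`-style template, and `format_with_tokens` is a hypothetical stand-in rather than pipeline API: format with long sentinel values, then swap them for Maya's `<f>`/`<UDIM>` tokens.

```python
TOKENS = {"frame": "<f>", "udim": "<UDIM>"}

def _placeholder(key):
    # Long sentinel value so padded formatting can't mangle the
    # token later (e.g. <f> becoming 0<f>).
    return "___{}___".format(key)

def format_with_tokens(template, context):
    """Format template, substituting frame/udim values with Maya tokens."""
    context = dict(context)
    present = [key for key in TOKENS if key in context]
    for key in present:
        context[key] = _placeholder(key)
    path = template.format(**context)
    for key in present:
        path = path.replace(_placeholder(key), TOKENS[key])
    return path

# Simplified template; the real one comes from the representation entity.
path = format_with_tokens(
    "renders/{name}.{udim}.{frame}.exr",
    {"name": "plate", "udim": 1001, "frame": 1005},
)
```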


@ -1,271 +0,0 @@
from ayon_core.pipeline import get_representation_path
from ayon_maya.api.lib import (
get_container_members,
namespaced,
pairwise,
unique_namespace,
)
from ayon_maya.api.pipeline import containerise
from ayon_maya.api import plugin
from maya import cmds
from qtpy import QtCore, QtWidgets
def disconnect_inputs(plug):
overrides = cmds.listConnections(plug,
source=True,
destination=False,
plugs=True,
connections=True) or []
for dest, src in pairwise(overrides):
cmds.disconnectAttr(src, dest)
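`pairwise` is imported from `ayon_maya.api.lib`; `disconnect_inputs` relies on it to group the flat `[dest, src, dest, src, ...]` list returned by `listConnections(..., connections=True)`. A minimal stand-in, assuming it pairs non-overlapping consecutive items, looks like:

```python
def pairwise(iterable):
    """Yield non-overlapping consecutive pairs: (s0, s1), (s2, s3), ..."""
    it = iter(iterable)
    return zip(it, it)

# Shaped like listConnections(..., connections=True) output:
# each destination plug followed by its source plug, flattened.
flat = ["nodeA.input", "src1.output", "nodeB.input", "src2.output"]
pairs = list(pairwise(flat))
```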
class CameraWindow(QtWidgets.QDialog):
def __init__(self, cameras):
super(CameraWindow, self).__init__()
self.setWindowFlags(self.windowFlags() | QtCore.Qt.FramelessWindowHint)
self.camera = None
self.widgets = {
"label": QtWidgets.QLabel("Select camera for image plane."),
"list": QtWidgets.QListWidget(),
"staticImagePlane": QtWidgets.QCheckBox(),
"showInAllViews": QtWidgets.QCheckBox(),
"warning": QtWidgets.QLabel("No cameras selected!"),
"buttons": QtWidgets.QWidget(),
"okButton": QtWidgets.QPushButton("Ok"),
"cancelButton": QtWidgets.QPushButton("Cancel")
}
# Build warning.
self.widgets["warning"].setVisible(False)
self.widgets["warning"].setStyleSheet("color: red")
# Build list.
for camera in cameras:
self.widgets["list"].addItem(camera)
# Build buttons.
layout = QtWidgets.QHBoxLayout(self.widgets["buttons"])
layout.addWidget(self.widgets["okButton"])
layout.addWidget(self.widgets["cancelButton"])
# Build layout.
layout = QtWidgets.QVBoxLayout(self)
layout.addWidget(self.widgets["label"])
layout.addWidget(self.widgets["list"])
layout.addWidget(self.widgets["buttons"])
layout.addWidget(self.widgets["warning"])
self.widgets["okButton"].pressed.connect(self.on_ok_pressed)
self.widgets["cancelButton"].pressed.connect(self.on_cancel_pressed)
self.widgets["list"].itemPressed.connect(self.on_list_itemPressed)
def on_list_itemPressed(self, item):
self.camera = item.text()
def on_ok_pressed(self):
if self.camera is None:
self.widgets["warning"].setVisible(True)
return
self.close()
def on_cancel_pressed(self):
self.camera = None
self.close()
class ImagePlaneLoader(plugin.Loader):
"""Specific loader of plate for image planes on selected camera."""
product_types = {"image", "plate", "render"}
label = "Load imagePlane"
representations = {"mov", "exr", "preview", "png", "jpg"}
icon = "image"
color = "orange"
def load(self, context, name, namespace, data, options=None):
image_plane_depth = 1000
folder_name = context["folder"]["name"]
namespace = namespace or unique_namespace(
folder_name + "_",
prefix="_" if folder_name[0].isdigit() else "",
suffix="_",
)
# Get camera from user selection.
# is_static_image_plane = None
# is_in_all_views = None
camera = data.get("camera") if data else None
if not camera:
cameras = cmds.ls(type="camera")
# Cameras by names
camera_names = {}
for camera in cameras:
parent = cmds.listRelatives(camera, parent=True, path=True)[0]
camera_names[parent] = camera
camera_names["Create new camera."] = "create-camera"
window = CameraWindow(camera_names.keys())
window.exec_()
# Skip if no camera was selected (Dialog was closed)
if window.camera not in camera_names:
return
camera = camera_names[window.camera]
if camera == "create-camera":
camera = cmds.createNode("camera")
if camera is None:
return
try:
cmds.setAttr("{}.displayResolution".format(camera), True)
cmds.setAttr("{}.farClipPlane".format(camera),
image_plane_depth * 10)
except RuntimeError:
pass
# Create image plane
with namespaced(namespace):
# Create inside the namespace
image_plane_transform, image_plane_shape = cmds.imagePlane(
fileName=self.filepath_from_context(context),
camera=camera
)
# Set colorspace
colorspace = self.get_colorspace(context["representation"])
if colorspace:
cmds.setAttr(
"{}.ignoreColorSpaceFileRules".format(image_plane_shape),
True
)
cmds.setAttr("{}.colorSpace".format(image_plane_shape),
colorspace, type="string")
# Set offset frame range
start_frame = cmds.playbackOptions(query=True, min=True)
end_frame = cmds.playbackOptions(query=True, max=True)
for attr, value in {
"depth": image_plane_depth,
"frameOffset": 0,
"frameIn": start_frame,
"frameOut": end_frame,
"frameCache": end_frame,
"useFrameExtension": True
}.items():
plug = "{}.{}".format(image_plane_shape, attr)
cmds.setAttr(plug, value)
movie_representations = {"mov", "preview"}
if context["representation"]["name"] in movie_representations:
cmds.setAttr(image_plane_shape + ".type", 2)
# Ask user whether to use sequence or still image.
if context["representation"]["name"] == "exr":
# Ensure OpenEXRLoader plugin is loaded.
cmds.loadPlugin("OpenEXRLoader", quiet=True)
message = (
"Hold image sequence on first frame?"
"\n{} files available.".format(
len(context["representation"]["files"])
)
)
reply = QtWidgets.QMessageBox.information(
None,
"Frame Hold.",
message,
QtWidgets.QMessageBox.Yes,
QtWidgets.QMessageBox.No
)
if reply == QtWidgets.QMessageBox.Yes:
frame_extension_plug = "{}.frameExtension".format(image_plane_shape) # noqa
# Remove current frame expression
disconnect_inputs(frame_extension_plug)
cmds.setAttr(frame_extension_plug, start_frame)
new_nodes = [image_plane_transform, image_plane_shape]
return containerise(
name=name,
namespace=namespace,
nodes=new_nodes,
context=context,
loader=self.__class__.__name__
)
def update(self, container, context):
folder_entity = context["folder"]
repre_entity = context["representation"]
members = get_container_members(container)
image_planes = cmds.ls(members, type="imagePlane")
assert image_planes, "Image plane not found."
image_plane_shape = image_planes[0]
path = get_representation_path(repre_entity)
cmds.setAttr("{}.imageName".format(image_plane_shape),
path,
type="string")
cmds.setAttr("{}.representation".format(container["objectName"]),
repre_entity["id"],
type="string")
colorspace = self.get_colorspace(repre_entity)
if colorspace:
cmds.setAttr(
"{}.ignoreColorSpaceFileRules".format(image_plane_shape),
True
)
cmds.setAttr("{}.colorSpace".format(image_plane_shape),
colorspace, type="string")
# Set frame range.
start_frame = folder_entity["attrib"]["frameStart"]
end_frame = folder_entity["attrib"]["frameEnd"]
for attr, value in {
"frameOffset": 0,
"frameIn": start_frame,
"frameOut": end_frame,
"frameCache": end_frame
        }.items():
plug = "{}.{}".format(image_plane_shape, attr)
cmds.setAttr(plug, value)
def switch(self, container, context):
self.update(container, context)
def remove(self, container):
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass
def get_colorspace(self, representation):
data = representation.get("data", {}).get("colorspaceData", {})
if not data:
return
colorspace = data.get("colorspace")
return colorspace


@ -1,138 +0,0 @@
# -*- coding: utf-8 -*-
"""Look loader."""
import json
from collections import defaultdict
import ayon_maya.api.plugin
from ayon_api import get_representation_by_name
from ayon_core.pipeline import get_representation_path
from ayon_core.tools.utils import ScrollMessageBox
from ayon_maya.api import lib
from ayon_maya.api.lib import get_reference_node
from qtpy import QtWidgets
class LookLoader(ayon_maya.api.plugin.ReferenceLoader):
"""Specific loader for lookdev"""
product_types = {"look"}
representations = {"ma"}
label = "Reference look"
order = -10
icon = "code-fork"
color = "orange"
def process_reference(self, context, name, namespace, options):
from maya import cmds
with lib.maintained_selection():
file_url = self.prepare_root_value(
file_url=self.filepath_from_context(context),
project_name=context["project"]["name"]
)
nodes = cmds.file(file_url,
namespace=namespace,
reference=True,
returnNewNodes=True)
self[:] = nodes
def switch(self, container, context):
self.update(container, context)
def update(self, container, context):
"""
Called by Scene Inventory when look should be updated to current
version.
        If any reference edits cannot be applied, e.g. a shader was renamed
        and the material is not present, the reference is unloaded and
        cleaned. All failed edits are highlighted to the user via a message
        box.
Args:
container: object that has look to be updated
context: (dict): relationship data to get proper
representation from DB and persisted
data in .json
Returns:
None
"""
from maya import cmds
# Get reference node from container members
members = lib.get_container_members(container)
reference_node = get_reference_node(members, log=self.log)
shader_nodes = cmds.ls(members, type='shadingEngine')
orig_nodes = set(self._get_nodes_with_shader(shader_nodes))
# Trigger the regular reference update on the ReferenceLoader
super(LookLoader, self).update(container, context)
# get new applied shaders and nodes from new version
shader_nodes = cmds.ls(members, type='shadingEngine')
nodes = set(self._get_nodes_with_shader(shader_nodes))
version_id = context["version"]["id"]
project_name = context["project"]["name"]
json_representation = get_representation_by_name(
project_name, "json", version_id
)
# Load relationships
shader_relation = get_representation_path(json_representation)
with open(shader_relation, "r") as f:
json_data = json.load(f)
# update of reference could result in failed edits - material is not
# present because of renaming etc. If so highlight failed edits to user
failed_edits = cmds.referenceQuery(reference_node,
editStrings=True,
failedEdits=True,
successfulEdits=False)
if failed_edits:
# clean references - removes failed reference edits
cmds.file(cr=reference_node) # cleanReference
# reapply shading groups from json representation on orig nodes
lib.apply_shaders(json_data, shader_nodes, orig_nodes)
msg = ["During reference update some edits failed.",
"All successful edits were kept intact.\n",
"Failed and removed edits:"]
msg.extend(failed_edits)
msg = ScrollMessageBox(QtWidgets.QMessageBox.Warning,
                                   "Some reference edits failed",
msg)
msg.exec_()
attributes = json_data.get("attributes", [])
# region compute lookup
nodes_by_id = defaultdict(list)
for node in nodes:
nodes_by_id[lib.get_id(node)].append(node)
lib.apply_attributes(attributes, nodes_by_id)
def _get_nodes_with_shader(self, shader_nodes):
"""
Returns list of nodes belonging to specific shaders
Args:
shader_nodes: <list> of Shader groups
        Returns:
            list: node names
"""
from maya import cmds
for shader in shader_nodes:
future = cmds.listHistory(shader, future=True)
connections = cmds.listConnections(future,
type='mesh')
if connections:
# Ensure unique entries only to optimize query and results
connections = list(set(connections))
return cmds.listRelatives(connections,
shapes=True,
fullPath=True) or []
return []


@ -1,31 +0,0 @@
from ayon_maya.api import plugin
from maya import mel
class MatchmoveLoader(plugin.Loader):
"""
This will run matchmove script to create track in scene.
Supported script types are .py and .mel
"""
product_types = {"matchmove"}
representations = {"py", "mel"}
defaults = ["Camera", "Object", "Mocap"]
label = "Run matchmove script"
icon = "empire"
color = "orange"
def load(self, context, name, namespace, data):
path = self.filepath_from_context(context)
if path.lower().endswith(".py"):
            with open(path) as f:
                exec(f.read())
elif path.lower().endswith(".mel"):
mel.eval('source "{}"'.format(path))
else:
self.log.error("Unsupported script type")
return True
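The extension dispatch above can be sketched in plain Python; `script_kind` and the file names below are hypothetical, for illustration only:

```python
import os

def script_kind(path):
    """Classify a matchmove script by file extension, as the loader does."""
    ext = os.path.splitext(path)[1].lower()
    if ext == ".py":
        return "python"
    if ext == ".mel":
        return "mel"
    return None

# Hypothetical file name; upper-case extension on purpose to show
# the case-insensitive match.
kind = script_kind("tracks/shot010_matchmove.MEL")
```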


@ -1,103 +0,0 @@
# -*- coding: utf-8 -*-
import maya.cmds as cmds
from ayon_core.pipeline import get_representation_path
from ayon_core.pipeline.load import get_representation_path_from_context
from ayon_maya.api.lib import namespaced, unique_namespace
from ayon_maya.api.pipeline import containerise
from ayon_maya.api import plugin
class MayaUsdLoader(plugin.Loader):
"""Read USD data in a Maya USD Proxy"""
product_types = {"model", "usd", "pointcache", "animation"}
representations = {"usd", "usda", "usdc", "usdz", "abc"}
label = "Load USD to Maya Proxy"
order = -1
icon = "code-fork"
color = "orange"
def load(self, context, name=None, namespace=None, options=None):
folder_name = context["folder"]["name"]
namespace = namespace or unique_namespace(
folder_name + "_",
prefix="_" if folder_name[0].isdigit() else "",
suffix="_",
)
# Make sure we can load the plugin
cmds.loadPlugin("mayaUsdPlugin", quiet=True)
path = get_representation_path_from_context(context)
# Create the shape
cmds.namespace(addNamespace=namespace)
with namespaced(namespace, new=False):
transform = cmds.createNode("transform",
name=name,
skipSelect=True)
proxy = cmds.createNode('mayaUsdProxyShape',
name="{}Shape".format(name),
parent=transform,
skipSelect=True)
cmds.connectAttr("time1.outTime", "{}.time".format(proxy))
cmds.setAttr("{}.filePath".format(proxy), path, type="string")
            # By default, we force the proxy not to use a shared stage,
            # because with a shared stage Maya quite easily allows saving
            # into the loaded USD file. Since we are loading published files
            # we want to avoid altering them. Unshared stages also save their
            # edits into the workfile, as an artist might expect.
cmds.setAttr("{}.shareStage".format(proxy), False)
# cmds.setAttr("{}.shareStage".format(proxy), lock=True)
nodes = [transform, proxy]
self[:] = nodes
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, context):
# type: (dict, dict) -> None
"""Update container with specified representation."""
node = container['objectName']
assert cmds.objExists(node), "Missing container"
members = cmds.sets(node, query=True) or []
shapes = cmds.ls(members, type="mayaUsdProxyShape")
repre_entity = context["representation"]
path = get_representation_path(repre_entity)
for shape in shapes:
cmds.setAttr("{}.filePath".format(shape), path, type="string")
cmds.setAttr("{}.representation".format(node),
repre_entity["id"],
type="string")
def switch(self, container, context):
self.update(container, context)
def remove(self, container):
# type: (dict) -> None
"""Remove loaded container."""
# Delete container and its contents
if cmds.objExists(container['objectName']):
members = cmds.sets(container['objectName'], query=True) or []
cmds.delete([container['objectName']] + members)
# Remove the namespace, if empty
namespace = container['namespace']
if cmds.namespace(exists=namespace):
members = cmds.namespaceInfo(namespace, listNamespace=True)
if not members:
cmds.namespace(removeNamespace=namespace)
else:
self.log.warning("Namespace not deleted because it "
"still has members: %s", namespace)


@ -1,122 +0,0 @@
# -*- coding: utf-8 -*-
import os
import maya.cmds as cmds
from ayon_api import get_representation_by_id
from ayon_core.pipeline import get_representation_path
from ayon_maya.api import plugin
from ayon_maya.api.lib import maintained_selection, namespaced, unique_namespace
from ayon_maya.api.pipeline import containerise
from maya import mel
class MultiverseUsdLoader(plugin.Loader):
"""Read USD data in a Multiverse Compound"""
product_types = {
"model",
"usd",
"mvUsdComposition",
"mvUsdOverride",
"pointcache",
"animation",
}
representations = {"usd", "usda", "usdc", "usdz", "abc"}
label = "Load USD to Multiverse"
order = -10
icon = "code-fork"
color = "orange"
def load(self, context, name=None, namespace=None, options=None):
folder_name = context["folder"]["name"]
namespace = namespace or unique_namespace(
folder_name + "_",
prefix="_" if folder_name[0].isdigit() else "",
suffix="_",
)
path = self.filepath_from_context(context)
# Make sure we can load the plugin
cmds.loadPlugin("MultiverseForMaya", quiet=True)
import multiverse
# Create the shape
with maintained_selection():
cmds.namespace(addNamespace=namespace)
with namespaced(namespace, new=False):
shape = multiverse.CreateUsdCompound(path)
transform = cmds.listRelatives(
shape, parent=True, fullPath=True)[0]
nodes = [transform, shape]
self[:] = nodes
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, context):
# type: (dict, dict) -> None
"""Update container with specified representation."""
node = container['objectName']
assert cmds.objExists(node), "Missing container"
members = cmds.sets(node, query=True) or []
shapes = cmds.ls(members, type="mvUsdCompoundShape")
assert shapes, "Cannot find mvUsdCompoundShape in container"
project_name = context["project"]["name"]
repre_entity = context["representation"]
path = get_representation_path(repre_entity)
prev_representation_id = cmds.getAttr("{}.representation".format(node))
prev_representation = get_representation_by_id(project_name,
prev_representation_id)
prev_path = os.path.normpath(prev_representation["attrib"]["path"])
# Make sure we can load the plugin
cmds.loadPlugin("MultiverseForMaya", quiet=True)
import multiverse
for shape in shapes:
asset_paths = multiverse.GetUsdCompoundAssetPaths(shape)
asset_paths = [os.path.normpath(p) for p in asset_paths]
assert asset_paths.count(prev_path) == 1, \
"Couldn't find matching path (or too many)"
prev_path_idx = asset_paths.index(prev_path)
asset_paths[prev_path_idx] = path
multiverse.SetUsdCompoundAssetPaths(shape, asset_paths)
cmds.setAttr("{}.representation".format(node),
repre_entity["id"],
type="string")
mel.eval('refreshEditorTemplates;')
def switch(self, container, context):
self.update(container, context)
def remove(self, container):
# type: (dict) -> None
"""Remove loaded container."""
# Delete container and its contents
if cmds.objExists(container['objectName']):
members = cmds.sets(container['objectName'], query=True) or []
cmds.delete([container['objectName']] + members)
# Remove the namespace, if empty
namespace = container['namespace']
if cmds.namespace(exists=namespace):
members = cmds.namespaceInfo(namespace, listNamespace=True)
if not members:
cmds.namespace(removeNamespace=namespace)
else:
self.log.warning("Namespace not deleted because it "
"still has members: %s", namespace)


@ -1,129 +0,0 @@
# -*- coding: utf-8 -*-
import os
import maya.cmds as cmds
import qargparse
from ayon_api import get_representation_by_id
from ayon_core.pipeline import get_representation_path
from ayon_maya.api import plugin
from ayon_maya.api.lib import maintained_selection
from ayon_maya.api.pipeline import containerise
from maya import mel
class MultiverseUsdOverLoader(plugin.Loader):
"""Reference file"""
product_types = {"mvUsdOverride"}
representations = {"usda", "usd", "udsz"}
label = "Load Usd Override into Compound"
order = -10
icon = "code-fork"
color = "orange"
options = [
qargparse.String(
"Which Compound",
label="Compound",
help="Select which compound to add this as a layer to."
)
]
def load(self, context, name=None, namespace=None, options=None):
current_usd = cmds.ls(selection=True,
type="mvUsdCompoundShape",
dag=True,
long=True)
if len(current_usd) != 1:
self.log.error("Current selection invalid: '{}', "
"must contain exactly 1 mvUsdCompoundShape."
"".format(current_usd))
return
# Make sure we can load the plugin
cmds.loadPlugin("MultiverseForMaya", quiet=True)
import multiverse
path = self.filepath_from_context(context)
nodes = current_usd
with maintained_selection():
multiverse.AddUsdCompoundAssetPath(current_usd[0], path)
namespace = current_usd[0].split("|")[1].split(":")[0]
container = containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
cmds.addAttr(container, longName="mvUsdCompoundShape",
niceName="mvUsdCompoundShape", dataType="string")
cmds.setAttr(container + ".mvUsdCompoundShape",
current_usd[0], type="string")
return container
def update(self, container, context):
# type: (dict, dict) -> None
"""Update container with specified representation."""
cmds.loadPlugin("MultiverseForMaya", quiet=True)
import multiverse
node = container['objectName']
assert cmds.objExists(node), "Missing container"
members = cmds.sets(node, query=True) or []
shapes = cmds.ls(members, type="mvUsdCompoundShape")
assert shapes, "Cannot find mvUsdCompoundShape in container"
mvShape = container['mvUsdCompoundShape']
assert mvShape, "Missing mv source"
project_name = context["project"]["name"]
repre_entity = context["representation"]
prev_representation_id = cmds.getAttr("{}.representation".format(node))
prev_representation = get_representation_by_id(project_name,
prev_representation_id)
prev_path = os.path.normpath(prev_representation["attrib"]["path"])
path = get_representation_path(repre_entity)
for shape in shapes:
asset_paths = multiverse.GetUsdCompoundAssetPaths(shape)
asset_paths = [os.path.normpath(p) for p in asset_paths]
assert asset_paths.count(prev_path) == 1, \
"Couldn't find matching path (or too many)"
prev_path_idx = asset_paths.index(prev_path)
asset_paths[prev_path_idx] = path
multiverse.SetUsdCompoundAssetPaths(shape, asset_paths)
cmds.setAttr("{}.representation".format(node),
repre_entity["id"],
type="string")
mel.eval('refreshEditorTemplates;')
def switch(self, container, context):
self.update(container, context)
def remove(self, container):
# type: (dict) -> None
"""Remove loaded container."""
# Delete container and its contents
if cmds.objExists(container['objectName']):
members = cmds.sets(container['objectName'], query=True) or []
cmds.delete([container['objectName']] + members)
# Remove the namespace, if empty
namespace = container['namespace']
if cmds.namespace(exists=namespace):
members = cmds.namespaceInfo(namespace, listNamespace=True)
if not members:
cmds.namespace(removeNamespace=namespace)
else:
self.log.warning("Namespace not deleted because it "
"still has members: %s", namespace)


@ -1,150 +0,0 @@
# -*- coding: utf-8 -*-
"""Loader for Redshift proxy."""
import os
import clique
import maya.cmds as cmds
from ayon_core.pipeline import get_representation_path
from ayon_core.settings import get_project_settings
from ayon_maya.api import plugin
from ayon_maya.api.lib import maintained_selection, namespaced, unique_namespace
from ayon_maya.api.pipeline import containerise
from ayon_maya.api.plugin import get_load_color_for_product_type
class RedshiftProxyLoader(plugin.Loader):
"""Load Redshift proxy"""
product_types = {"redshiftproxy"}
representations = {"rs"}
label = "Import Redshift Proxy"
order = -10
icon = "code-fork"
color = "orange"
def load(self, context, name=None, namespace=None, options=None):
"""Plugin entry point."""
product_type = context["product"]["productType"]
folder_name = context["folder"]["name"]
namespace = namespace or unique_namespace(
folder_name + "_",
prefix="_" if folder_name[0].isdigit() else "",
suffix="_",
)
# Ensure Redshift for Maya is loaded.
cmds.loadPlugin("redshift4maya", quiet=True)
path = self.filepath_from_context(context)
with maintained_selection():
cmds.namespace(addNamespace=namespace)
with namespaced(namespace, new=False):
nodes, group_node = self.create_rs_proxy(name, path)
self[:] = nodes
if not nodes:
return
# colour the group node
project_name = context["project"]["name"]
settings = get_project_settings(project_name)
color = get_load_color_for_product_type(product_type, settings)
if color is not None:
red, green, blue = color
cmds.setAttr("{0}.useOutlinerColor".format(group_node), 1)
cmds.setAttr(
"{0}.outlinerColor".format(group_node), red, green, blue
)
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, context):
node = container['objectName']
assert cmds.objExists(node), "Missing container"
members = cmds.sets(node, query=True) or []
rs_meshes = cmds.ls(members, type="RedshiftProxyMesh")
assert rs_meshes, "Cannot find RedshiftProxyMesh in container"
repre_entity = context["representation"]
filename = get_representation_path(repre_entity)
for rs_mesh in rs_meshes:
cmds.setAttr("{}.fileName".format(rs_mesh),
filename,
type="string")
# Update metadata
cmds.setAttr("{}.representation".format(node),
repre_entity["id"],
type="string")
def remove(self, container):
# Delete container and its contents
if cmds.objExists(container['objectName']):
members = cmds.sets(container['objectName'], query=True) or []
cmds.delete([container['objectName']] + members)
# Remove the namespace, if empty
namespace = container['namespace']
if cmds.namespace(exists=namespace):
members = cmds.namespaceInfo(namespace, listNamespace=True)
if not members:
cmds.namespace(removeNamespace=namespace)
else:
self.log.warning("Namespace not deleted because it "
"still has members: %s", namespace)
def switch(self, container, context):
self.update(container, context)
    def create_rs_proxy(self, name, path):
        """Create a Redshift proxy mesh reading the given proxy file.
        Args:
            name (str): Proxy name.
            path (str): Path to proxy file.
        Returns:
            (list, str): Created nodes (proxy node, mesh shape and group)
                and the parent group transform.
        """
rs_mesh = cmds.createNode(
'RedshiftProxyMesh', name="{}_RS".format(name))
mesh_shape = cmds.createNode("mesh", name="{}_GEOShape".format(name))
cmds.setAttr("{}.fileName".format(rs_mesh),
path,
type="string")
cmds.connectAttr("{}.outMesh".format(rs_mesh),
"{}.inMesh".format(mesh_shape))
# TODO: use the assigned shading group as shaders if existed
# assign default shader to redshift proxy
if cmds.ls("initialShadingGroup", type="shadingEngine"):
cmds.sets(mesh_shape, forceElement="initialShadingGroup")
group_node = cmds.group(empty=True, name="{}_GRP".format(name))
mesh_transform = cmds.listRelatives(mesh_shape,
parent=True, fullPath=True)
cmds.parent(mesh_transform, group_node)
nodes = [rs_mesh, mesh_shape, group_node]
# determine if we need to enable animation support
files_in_folder = os.listdir(os.path.dirname(path))
collections, remainder = clique.assemble(files_in_folder)
if collections:
cmds.setAttr("{}.useFrameExtension".format(rs_mesh), 1)
return nodes, group_node
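The animation check at the end of `create_rs_proxy` relies on `clique.assemble` to detect a frame sequence next to the proxy file. A rough stdlib approximation of that detection, assuming a `name.####.ext` naming pattern (hypothetical filenames):

```python
import re
from collections import defaultdict


def has_frame_sequence(filenames):
    """Roughly mimic clique.assemble: detect a numbered file sequence."""
    groups = defaultdict(set)
    for name in filenames:
        # Split into head, trailing frame number, and extension
        match = re.match(r"^(.*?)(\d+)(\.[^.]+)$", name)
        if match:
            head, frame, tail = match.groups()
            groups[(head, tail)].add(int(frame))
    # A sequence needs at least two frames sharing the same head/tail
    return any(len(frames) > 1 for frames in groups.values())


print(has_frame_sequence(["proxy.1001.rs", "proxy.1002.rs"]))  # sequence
print(has_frame_sequence(["proxy.rs"]))                        # single file
```

When a sequence is found, the loader enables `.useFrameExtension` on the `RedshiftProxyMesh` so Redshift substitutes the frame number at render time; `clique` itself handles more cases (padding, holes) than this sketch.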


@ -1,382 +0,0 @@
import contextlib
import difflib
import qargparse
from ayon_core.settings import get_project_settings
from ayon_maya.api import plugin
from ayon_maya.api.lib import (
create_rig_animation_instance,
get_container_members,
maintained_selection,
parent_nodes,
)
from maya import cmds
@contextlib.contextmanager
def preserve_time_units():
"""Preserve current frame, frame range and fps"""
frame = cmds.currentTime(query=True)
fps = cmds.currentUnit(query=True, time=True)
start = cmds.playbackOptions(query=True, minTime=True)
end = cmds.playbackOptions(query=True, maxTime=True)
anim_start = cmds.playbackOptions(query=True, animationStartTime=True)
anim_end = cmds.playbackOptions(query=True, animationEndTime=True)
try:
yield
finally:
cmds.currentUnit(time=fps, updateAnimation=False)
cmds.currentTime(frame)
cmds.playbackOptions(minTime=start,
maxTime=end,
animationStartTime=anim_start,
animationEndTime=anim_end)
@contextlib.contextmanager
def preserve_modelpanel_cameras(container, log=None):
"""Preserve camera members of container in the modelPanels.
This is used to ensure a camera remains in the modelPanels after updating
to a new version.
"""
# Get the modelPanels that used the old camera
members = get_container_members(container)
old_cameras = set(cmds.ls(members, type="camera", long=True))
if not old_cameras:
# No need to manage anything
yield
return
panel_cameras = {}
for panel in cmds.getPanel(type="modelPanel"):
cam = cmds.ls(cmds.modelPanel(panel, query=True, camera=True),
long=True)[0]
# Often but not always maya returns the transform from the
# modelPanel as opposed to the camera shape, so we convert it
# to explicitly be the camera shape
if cmds.nodeType(cam) != "camera":
cam = cmds.listRelatives(cam,
children=True,
fullPath=True,
type="camera")[0]
if cam in old_cameras:
panel_cameras[panel] = cam
if not panel_cameras:
# No need to manage anything
yield
return
try:
yield
finally:
new_members = get_container_members(container)
new_cameras = set(cmds.ls(new_members, type="camera", long=True))
if not new_cameras:
return
for panel, cam_name in panel_cameras.items():
new_camera = None
if cam_name in new_cameras:
new_camera = cam_name
elif len(new_cameras) == 1:
new_camera = next(iter(new_cameras))
else:
# Multiple cameras in the updated container but not an exact
# match detected by name. Find the closest match
matches = difflib.get_close_matches(word=cam_name,
possibilities=new_cameras,
n=1)
if matches:
new_camera = matches[0] # best match
if log:
log.info("Camera in '{}' restored with "
"closest match camera: {} (before: {})"
.format(panel, new_camera, cam_name))
if not new_camera:
# Unable to find the camera to re-apply in the modelpanel
continue
cmds.modelPanel(panel, edit=True, camera=new_camera)
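When the updated container renames its cameras, `preserve_modelpanel_cameras` falls back to `difflib.get_close_matches` to pick the closest new camera name. That fallback in isolation, with hypothetical DAG paths:

```python
import difflib

old_camera = "|rig_01:camera_main|camera_mainShape"
new_cameras = [
    "|rig_02:camera_main|camera_mainShape",
    "|rig_02:camera_wide|camera_wideShape",
]

# n=1 returns only the single best match above the default 0.6 cutoff,
# exactly as the restore logic above requests
matches = difflib.get_close_matches(word=old_camera,
                                    possibilities=new_cameras,
                                    n=1)
print(matches)
```

Here the `camera_main` path wins because it differs from the old name by only the namespace digits, so the panel keeps looking through the "same" camera after a version update.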
class ReferenceLoader(plugin.ReferenceLoader):
"""Reference file"""
product_types = {
"model",
"pointcache",
"proxyAbc",
"animation",
"mayaAscii",
"mayaScene",
"setdress",
"layout",
"camera",
"rig",
"camerarig",
"staticMesh",
"skeletalMesh",
"mvLook",
"matchmove",
}
representations = {"ma", "abc", "fbx", "mb"}
label = "Reference"
order = -10
icon = "code-fork"
color = "orange"
def process_reference(self, context, name, namespace, options):
import maya.cmds as cmds
product_type = context["product"]["productType"]
project_name = context["project"]["name"]
# True by default to keep legacy behaviours
attach_to_root = options.get("attach_to_root", True)
group_name = options["group_name"]
# no group shall be created
if not attach_to_root:
group_name = namespace
kwargs = {}
if "file_options" in options:
kwargs["options"] = options["file_options"]
if "file_type" in options:
kwargs["type"] = options["file_type"]
path = self.filepath_from_context(context)
with maintained_selection():
cmds.loadPlugin("AbcImport.mll", quiet=True)
file_url = self.prepare_root_value(path, project_name)
nodes = cmds.file(file_url,
namespace=namespace,
sharedReferenceFile=False,
reference=True,
returnNewNodes=True,
groupReference=attach_to_root,
groupName=group_name,
**kwargs)
shapes = cmds.ls(nodes, shapes=True, long=True)
new_nodes = (list(set(nodes) - set(shapes)))
# if there are cameras, try to lock their transforms
self._lock_camera_transforms(new_nodes)
current_namespace = cmds.namespaceInfo(currentNamespace=True)
if current_namespace != ":":
group_name = current_namespace + ":" + group_name
self[:] = new_nodes
if attach_to_root:
group_name = "|" + group_name
roots = cmds.listRelatives(group_name,
children=True,
fullPath=True) or []
if product_type not in {
"layout", "setdress", "mayaAscii", "mayaScene"
}:
# QUESTION Why do we need to exclude these families?
with parent_nodes(roots, parent=None):
cmds.xform(group_name, zeroTransformPivots=True)
settings = get_project_settings(project_name)
display_handle = settings['maya']['load'].get(
'reference_loader', {}
).get('display_handle', True)
cmds.setAttr(
"{}.displayHandle".format(group_name), display_handle
)
color = plugin.get_load_color_for_product_type(
product_type, settings
)
if color is not None:
red, green, blue = color
cmds.setAttr("{}.useOutlinerColor".format(group_name), 1)
cmds.setAttr(
"{}.outlinerColor".format(group_name),
red,
green,
blue
)
cmds.setAttr(
"{}.displayHandle".format(group_name), display_handle
)
# get bounding box
bbox = cmds.exactWorldBoundingBox(group_name)
# get pivot position on world space
pivot = cmds.xform(group_name, q=True, sp=True, ws=True)
# center of bounding box
cx = (bbox[0] + bbox[3]) / 2
cy = (bbox[1] + bbox[4]) / 2
cz = (bbox[2] + bbox[5]) / 2
# add pivot position to calculate offset
cx = cx + pivot[0]
cy = cy + pivot[1]
cz = cz + pivot[2]
# set selection handle offset to center of bounding box
cmds.setAttr("{}.selectHandleX".format(group_name), cx)
cmds.setAttr("{}.selectHandleY".format(group_name), cy)
cmds.setAttr("{}.selectHandleZ".format(group_name), cz)
if product_type == "rig":
self._post_process_rig(namespace, context, options)
else:
if "translate" in options:
if not attach_to_root and new_nodes:
root_nodes = cmds.ls(new_nodes, assemblies=True,
long=True)
# we assume only a single root is ever loaded
group_name = root_nodes[0]
cmds.setAttr("{}.translate".format(group_name),
*options["translate"])
return new_nodes
def switch(self, container, context):
self.update(container, context)
def update(self, container, context):
with preserve_modelpanel_cameras(container, log=self.log):
super(ReferenceLoader, self).update(container, context)
# We also want to lock camera transforms on any new cameras in the
# reference or for a camera which might have changed names.
members = get_container_members(container)
self._lock_camera_transforms(members)
def _post_process_rig(self, namespace, context, options):
nodes = self[:]
create_rig_animation_instance(
nodes, context, namespace, options=options, log=self.log
)
def _lock_camera_transforms(self, nodes):
cameras = cmds.ls(nodes, type="camera")
if not cameras:
return
# Check the Maya version, lockTransform has been introduced since
# Maya 2016.5 Ext 2
version = int(cmds.about(version=True))
if version >= 2016:
for camera in cameras:
cmds.camera(camera, edit=True, lockTransform=True)
else:
self.log.warning("This version of Maya does not support locking of"
" transforms of cameras.")
class MayaUSDReferenceLoader(ReferenceLoader):
"""Reference USD file to native Maya nodes using MayaUSDImport reference"""
label = "Reference Maya USD"
product_types = {"usd"}
representations = {"usd"}
extensions = {"usd", "usda", "usdc"}
options = ReferenceLoader.options + [
qargparse.Boolean(
"readAnimData",
label="Load anim data",
default=True,
help="Load animation data from USD file"
),
qargparse.Boolean(
"useAsAnimationCache",
label="Use as animation cache",
default=True,
help=(
"Imports geometry prims with time-sampled point data using a "
"point-based deformer that references the imported "
"USD file.\n"
"This provides better import and playback performance when "
"importing time-sampled geometry from USD, and should "
"reduce the weight of the resulting Maya scene."
)
),
qargparse.Boolean(
"importInstances",
label="Import instances",
default=True,
help=(
"Import USD instanced geometries as Maya instanced shapes. "
"Will flatten the scene otherwise."
)
),
qargparse.String(
"primPath",
label="Prim Path",
default="/",
help=(
"Name of the USD scope where traversing will begin.\n"
"The prim at the specified primPath (including the prim) will "
"be imported.\n"
"Specifying the pseudo-root (/) means you want "
"to import everything in the file.\n"
"If the passed prim path is empty, it will first try to "
"import the defaultPrim for the rootLayer if it exists.\n"
"Otherwise, it will behave as if the pseudo-root was passed "
"in."
)
)
]
file_type = "USD Import"
def process_reference(self, context, name, namespace, options):
cmds.loadPlugin("mayaUsdPlugin", quiet=True)
def bool_option(key, default):
# Shorthand for getting optional boolean file option from options
value = int(bool(options.get(key, default)))
return "{}={}".format(key, value)
def string_option(key, default):
# Shorthand for getting optional string file option from options
value = str(options.get(key, default))
return "{}={}".format(key, value)
options["file_options"] = ";".join([
string_option("primPath", default="/"),
bool_option("importInstances", default=True),
bool_option("useAsAnimationCache", default=True),
bool_option("readAnimData", default=True),
# TODO: Expose more parameters
# "preferredMaterial=none",
# "importRelativeTextures=Automatic",
# "useCustomFrameRange=0",
# "startTime=0",
# "endTime=0",
# "importUSDZTextures=0"
])
options["file_type"] = self.file_type
# Maya USD import reference has the tendency to change the time slider
# range and current frame, so we force revert it after
with preserve_time_units():
return super(MayaUSDReferenceLoader, self).process_reference(
context, name, namespace, options
)
def update(self, container, context):
# Maya USD import reference has the tendency to change the time slider
# range and current frame, so we force revert it after
with preserve_time_units():
super(MayaUSDReferenceLoader, self).update(container, context)
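The `file_options` string assembled in `process_reference` is a semicolon-separated `key=value` list as expected by the `mayaUsdPlugin` file translator. A standalone sketch of the two helper functions and the resulting string (sample `options` dict is hypothetical):

```python
def bool_option(options, key, default):
    # Coerce to 0/1 for the file-options string, like the helper above
    return "{}={}".format(key, int(bool(options.get(key, default))))


def string_option(options, key, default):
    return "{}={}".format(key, str(options.get(key, default)))


# Hypothetical loader options: user disabled animation data
opts = {"readAnimData": False}
file_options = ";".join([
    string_option(opts, "primPath", "/"),
    bool_option(opts, "importInstances", True),
    bool_option(opts, "useAsAnimationCache", True),
    bool_option(opts, "readAnimData", True),
])
print(file_options)
# primPath=/;importInstances=1;useAsAnimationCache=1;readAnimData=0
```

The string is then passed as the `options` flag of `cmds.file` together with `type="USD Import"`, which is how Maya routes reference creation through the USD importer.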


@ -1,168 +0,0 @@
# -*- coding: utf-8 -*-
"""Load and update RenderSetup settings.
Working with RenderSetup settings in Maya is done using JSON files.
When this JSON is loaded, it overwrites all settings on the RenderSetup
instance.
"""
import contextlib
import json
import sys
import maya.app.renderSetup.model.renderSetup as renderSetup
import six
from ayon_core.lib import BoolDef, EnumDef
from ayon_core.pipeline import get_representation_path
from ayon_maya.api import lib
from ayon_maya.api import plugin
from ayon_maya.api.pipeline import containerise
from maya import cmds
@contextlib.contextmanager
def mark_all_imported(enabled):
"""Mark all imported nodes accepted by removing the `imported` attribute"""
if not enabled:
yield
return
node_types = cmds.pluginInfo("renderSetup", query=True, dependNode=True)
# Get node before load, then we can disable `imported`
# attribute on all new render setup layers after import
before = cmds.ls(type=node_types, long=True)
try:
yield
finally:
after = cmds.ls(type=node_types, long=True)
for node in (node for node in after if node not in before):
if cmds.attributeQuery("imported",
node=node,
exists=True):
plug = "{}.imported".format(node)
if cmds.getAttr(plug):
cmds.deleteAttr(plug)
class RenderSetupLoader(plugin.Loader):
"""Load json preset for RenderSetup overwriting current one."""
product_types = {"rendersetup"}
representations = {"json"}
defaults = ['Main']
label = "Load RenderSetup template"
icon = "tablet"
color = "orange"
options = [
BoolDef("accept_import",
label="Accept import on load",
tooltip=(
"By default importing or pasting Render Setup collections "
"will display them italic in the Render Setup list.\nWith "
"this enabled the load will directly mark the import "
"'accepted' and remove the italic view."
),
default=True),
BoolDef("load_managed",
label="Load Managed",
tooltip=(
"Containerize the rendersetup on load so it can be "
"'updated' later."
),
default=True),
EnumDef("import_mode",
label="Import mode",
items={
renderSetup.DECODE_AND_OVERWRITE: (
"Flush existing render setup and "
"add without any namespace"
),
renderSetup.DECODE_AND_MERGE: (
"Merge with the existing render setup objects and "
"rename the unexpected objects"
),
renderSetup.DECODE_AND_RENAME: (
"Renaming all decoded render setup objects to not "
"conflict with the existing render setup"
),
},
default=renderSetup.DECODE_AND_OVERWRITE)
]
def load(self, context, name, namespace, data):
"""Load RenderSetup settings."""
path = self.filepath_from_context(context)
accept_import = data.get("accept_import", True)
import_mode = data.get("import_mode", renderSetup.DECODE_AND_OVERWRITE)
self.log.info(">>> loading json [ {} ]".format(path))
with mark_all_imported(accept_import):
with open(path, "r") as file:
renderSetup.instance().decode(
json.load(file), import_mode, None)
if data.get("load_managed", True):
self.log.info(">>> containerising [ {} ]".format(name))
folder_name = context["folder"]["name"]
namespace = namespace or lib.unique_namespace(
folder_name + "_",
prefix="_" if folder_name[0].isdigit() else "",
suffix="_",
)
return containerise(
name=name,
namespace=namespace,
nodes=[],
context=context,
loader=self.__class__.__name__)
def remove(self, container):
"""Remove RenderSetup settings instance."""
container_name = container["objectName"]
self.log.info("Removing '%s' from Maya.." % container["name"])
container_content = cmds.sets(container_name, query=True) or []
nodes = cmds.ls(container_content, long=True)
nodes.append(container_name)
try:
cmds.delete(nodes)
except ValueError:
# Already implicitly deleted by Maya upon removing reference
pass
def update(self, container, context):
"""Update RenderSetup setting by overwriting existing settings."""
        lib.show_message(
            "Render setup update",
            "Render setup settings will be overwritten by the new version. "
            "Any settings specified by the user that are not included in "
            "the loaded version will be lost.")
repre_entity = context["representation"]
path = get_representation_path(repre_entity)
with open(path, "r") as file:
try:
renderSetup.instance().decode(
json.load(file), renderSetup.DECODE_AND_OVERWRITE, None)
except Exception:
self.log.error("There were errors during loading")
six.reraise(*sys.exc_info())
# Update metadata
node = container["objectName"]
cmds.setAttr("{}.representation".format(node),
repre_entity["id"],
type="string")
self.log.info("... updated")
def switch(self, container, context):
"""Switch representations."""
self.update(container, context)
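The `mark_all_imported` context manager above uses a before/after snapshot of scene nodes to find what the import created. The same diff pattern, stripped of the Maya calls (the registry list stands in for `cmds.ls(type=node_types)`):

```python
import contextlib


@contextlib.contextmanager
def track_new(registry):
    """Yield a list that is filled with names created inside the block."""
    # Snapshot before, yield for the "import", then diff afterwards
    before = set(registry)
    created = []
    try:
        yield created
    finally:
        created.extend(name for name in registry if name not in before)


# Hypothetical node registry standing in for the Maya scene
nodes = ["layer1", "collection1"]
with track_new(nodes) as new_nodes:
    nodes.append("layer2")  # simulated import creates a node
print(new_nodes)
```

In the real context manager the per-node work happens in the `finally` block too, which is why accepted-import cleanup still runs even if the decode raises.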


@ -1,138 +0,0 @@
"""Load OpenVDB for Arnold in aiVolume.
TODO:
`aiVolume` doesn't automatically set velocity fps correctly, set manual?
"""
import os
from ayon_core.pipeline import get_representation_path
from ayon_core.settings import get_project_settings
from ayon_maya.api import plugin
from ayon_maya.api.plugin import get_load_color_for_product_type
class LoadVDBtoArnold(plugin.Loader):
"""Load OpenVDB for Arnold in aiVolume"""
product_types = {"vdbcache"}
representations = {"vdb"}
label = "Load VDB to Arnold"
icon = "cloud"
color = "orange"
def load(self, context, name, namespace, data):
from ayon_maya.api.lib import unique_namespace
from ayon_maya.api.pipeline import containerise
from maya import cmds
product_type = context["product"]["productType"]
# Check if the plugin for arnold is available on the pc
try:
cmds.loadPlugin("mtoa", quiet=True)
except Exception as exc:
self.log.error("Encountered exception:\n%s" % exc)
return
folder_name = context["folder"]["name"]
namespace = namespace or unique_namespace(
folder_name + "_",
prefix="_" if folder_name[0].isdigit() else "",
suffix="_",
)
# Root group
label = "{}:{}".format(namespace, name)
root = cmds.group(name=label, empty=True)
project_name = context["project"]["name"]
settings = get_project_settings(project_name)
color = get_load_color_for_product_type(product_type, settings)
if color is not None:
red, green, blue = color
cmds.setAttr(root + ".useOutlinerColor", 1)
cmds.setAttr(root + ".outlinerColor", red, green, blue)
        # Create aiVolume
grid_node = cmds.createNode("aiVolume",
name="{}Shape".format(root),
parent=root)
path = self.filepath_from_context(context)
self._set_path(grid_node,
path=path,
repre_entity=context["representation"])
# Lock the shape node so the user can't delete the transform/shape
# as if it was referenced
cmds.lockNode(grid_node, lock=True)
nodes = [root, grid_node]
self[:] = nodes
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, context):
from maya import cmds
repre_entity = context["representation"]
path = get_representation_path(repre_entity)
        # Find aiVolume
members = cmds.sets(container['objectName'], query=True)
grid_nodes = cmds.ls(members, type="aiVolume", long=True)
assert len(grid_nodes) == 1, "This is a bug"
        # Update the aiVolume
self._set_path(grid_nodes[0], path=path, repre_entity=repre_entity)
# Update container representation
cmds.setAttr(container["objectName"] + ".representation",
repre_entity["id"],
type="string")
def switch(self, container, context):
self.update(container, context)
def remove(self, container):
from maya import cmds
# Get all members of the AYON container, ensure they are unlocked
# and delete everything
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass
@staticmethod
def _set_path(grid_node,
path,
repre_entity):
"""Apply the settings for the VDB path to the aiVolume node"""
from maya import cmds
if not os.path.exists(path):
raise RuntimeError("Path does not exist: %s" % path)
is_sequence = "frame" in repre_entity["context"]
cmds.setAttr(grid_node + ".useFrameExtension", is_sequence)
# Set file path
cmds.setAttr(grid_node + ".filename", path, type="string")


@ -1,143 +0,0 @@
import os
from ayon_core.pipeline import get_representation_path
from ayon_core.settings import get_project_settings
from ayon_maya.api import plugin
from ayon_maya.api.plugin import get_load_color_for_product_type
class LoadVDBtoRedShift(plugin.Loader):
"""Load OpenVDB in a Redshift Volume Shape
Note that the RedshiftVolumeShape is created without a RedshiftVolume
shader assigned. To get the Redshift volume to render correctly assign
a RedshiftVolume shader (in the Hypershade) and set the density, scatter
and emission channels to the channel names of the volumes in the VDB file.
"""
product_types = {"vdbcache"}
representations = {"vdb"}
label = "Load VDB to RedShift"
icon = "cloud"
color = "orange"
def load(self, context, name=None, namespace=None, data=None):
from ayon_maya.api.lib import unique_namespace
from ayon_maya.api.pipeline import containerise
from maya import cmds
product_type = context["product"]["productType"]
# Check if the plugin for redshift is available on the pc
try:
cmds.loadPlugin("redshift4maya", quiet=True)
except Exception as exc:
self.log.error("Encountered exception:\n%s" % exc)
return
# Check if viewport drawing engine is Open GL Core (compat)
render_engine = None
compatible = "OpenGL"
if cmds.optionVar(exists="vp2RenderingEngine"):
render_engine = cmds.optionVar(query="vp2RenderingEngine")
if not render_engine or not render_engine.startswith(compatible):
raise RuntimeError("Current scene's settings are incompatible."
"See Preferences > Display > Viewport 2.0 to "
"set the render engine to '%s<type>'"
% compatible)
folder_name = context["folder"]["name"]
namespace = namespace or unique_namespace(
folder_name + "_",
prefix="_" if folder_name[0].isdigit() else "",
suffix="_",
)
# Root group
label = "{}:{}".format(namespace, name)
root = cmds.createNode("transform", name=label)
project_name = context["project"]["name"]
settings = get_project_settings(project_name)
color = get_load_color_for_product_type(product_type, settings)
if color is not None:
red, green, blue = color
cmds.setAttr(root + ".useOutlinerColor", 1)
cmds.setAttr(root + ".outlinerColor", red, green, blue)
        # Create RedshiftVolumeShape
volume_node = cmds.createNode("RedshiftVolumeShape",
name="{}RVSShape".format(label),
parent=root)
self._set_path(volume_node,
path=self.filepath_from_context(context),
representation=context["representation"])
nodes = [root, volume_node]
self[:] = nodes
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, context):
from maya import cmds
repre_entity = context["representation"]
path = get_representation_path(repre_entity)
        # Find RedshiftVolumeShape
members = cmds.sets(container['objectName'], query=True)
grid_nodes = cmds.ls(members, type="RedshiftVolumeShape", long=True)
assert len(grid_nodes) == 1, "This is a bug"
        # Update the RedshiftVolumeShape
self._set_path(grid_nodes[0], path=path, representation=repre_entity)
# Update container representation
cmds.setAttr(container["objectName"] + ".representation",
repre_entity["id"],
type="string")
def remove(self, container):
from maya import cmds
# Get all members of the AYON container, ensure they are unlocked
# and delete everything
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass
def switch(self, container, context):
self.update(container, context)
@staticmethod
def _set_path(grid_node,
path,
representation):
"""Apply the settings for the VDB path to the RedshiftVolumeShape"""
from maya import cmds
if not os.path.exists(path):
raise RuntimeError("Path does not exist: %s" % path)
is_sequence = "frame" in representation["context"]
cmds.setAttr(grid_node + ".useFrameExtension", is_sequence)
# Set file path
cmds.setAttr(grid_node + ".fileName", path, type="string")


@ -1,286 +0,0 @@
import os
from ayon_core.pipeline import get_representation_path
from ayon_core.settings import get_project_settings
from ayon_maya.api import plugin
from ayon_maya.api.plugin import get_load_color_for_product_type
from maya import cmds
# List of 3rd Party Channels Mapping names for VRayVolumeGrid
# See: https://docs.chaosgroup.com/display/VRAY4MAYA/Input
# #Input-3rdPartyChannelsMapping
THIRD_PARTY_CHANNELS = {
2: "Smoke",
1: "Temperature",
10: "Fuel",
4: "Velocity.x",
5: "Velocity.y",
6: "Velocity.z",
7: "Red",
8: "Green",
9: "Blue",
14: "Wavelet Energy",
19: "Wavelet.u",
20: "Wavelet.v",
21: "Wavelet.w",
# These are not in UI or documentation but V-Ray does seem to set these.
15: "AdvectionOrigin.x",
16: "AdvectionOrigin.y",
17: "AdvectionOrigin.z",
}
def _fix_duplicate_vvg_callbacks():
"""Workaround to kill duplicate VRayVolumeGrids attribute callbacks.
This fixes a huge lag in Maya on switching 3rd Party Channels Mappings
or to different .vdb file paths because it spams an attribute changed
callback: `vvgUserChannelMappingsUpdateUI`.
ChaosGroup bug ticket: 154-008-9890
Found with:
- Maya 2019.2 on Windows 10
- V-Ray: V-Ray Next for Maya, update 1 version 4.12.01.00001
Bug still present in:
- Maya 2022.1 on Windows 10
- V-Ray 5 for Maya, Update 2.1 (v5.20.01 from Dec 16 2021)
"""
# todo(roy): Remove when new V-Ray release fixes duplicate calls
jobs = cmds.scriptJob(listJobs=True)
matched = set()
for entry in jobs:
# Remove the number
index, callback = entry.split(":", 1)
callback = callback.strip()
# Detect whether it is a `vvgUserChannelMappingsUpdateUI`
# attribute change callback
if callback.startswith('"-runOnce" 1 "-attributeChange" "'):
if '"vvgUserChannelMappingsUpdateUI(' in callback:
if callback in matched:
# If we've seen this callback before then
# delete the duplicate callback
cmds.scriptJob(kill=int(index))
else:
matched.add(callback)
class LoadVDBtoVRay(plugin.Loader):
"""Load OpenVDB in a V-Ray Volume Grid"""
product_types = {"vdbcache"}
representations = {"vdb"}
label = "Load VDB to VRay"
icon = "cloud"
color = "orange"
def load(self, context, name, namespace, data):
from ayon_maya.api.lib import unique_namespace
from ayon_maya.api.pipeline import containerise
path = self.filepath_from_context(context)
assert os.path.exists(path), (
"Path does not exist: %s" % path
)
product_type = context["product"]["productType"]
# Ensure V-ray is loaded with the vrayvolumegrid
if not cmds.pluginInfo("vrayformaya", query=True, loaded=True):
cmds.loadPlugin("vrayformaya")
if not cmds.pluginInfo("vrayvolumegrid", query=True, loaded=True):
cmds.loadPlugin("vrayvolumegrid")
# Check if viewport drawing engine is Open GL Core (compat)
render_engine = None
compatible = "OpenGLCoreProfileCompat"
if cmds.optionVar(exists="vp2RenderingEngine"):
render_engine = cmds.optionVar(query="vp2RenderingEngine")
if not render_engine or render_engine != compatible:
self.log.warning("Current scene's settings are incompatible."
"See Preferences > Display > Viewport 2.0 to "
"set the render engine to '%s'" % compatible)
folder_name = context["folder"]["name"]
namespace = namespace or unique_namespace(
folder_name + "_",
prefix="_" if folder_name[0].isdigit() else "",
suffix="_",
)
# Root group
label = "{}:{}_VDB".format(namespace, name)
root = cmds.group(name=label, empty=True)
project_name = context["project"]["name"]
settings = get_project_settings(project_name)
color = get_load_color_for_product_type(product_type, settings)
if color is not None:
red, green, blue = color
cmds.setAttr(root + ".useOutlinerColor", 1)
cmds.setAttr(root + ".outlinerColor", red, green, blue)
# Create VRayVolumeGrid
grid_node = cmds.createNode("VRayVolumeGrid",
name="{}Shape".format(label),
parent=root)
# Ensure .currentTime is connected to time1.outTime
cmds.connectAttr("time1.outTime", grid_node + ".currentTime")
# Set path
self._set_path(grid_node, path, show_preset_popup=True)
# Lock the shape node so the user can't delete the transform/shape
# as if it was referenced
cmds.lockNode(grid_node, lock=True)
nodes = [root, grid_node]
self[:] = nodes
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def _set_path(self, grid_node, path, show_preset_popup=True):
from ayon_maya.api.lib import attribute_values
from maya import cmds
def _get_filename_from_folder(path):
# Using the sequence of .vdb files we check the frame range, etc.
# to set the filename with #### padding.
files = sorted(x for x in os.listdir(path) if x.endswith(".vdb"))
if not files:
raise RuntimeError("Couldn't find .vdb files in: %s" % path)
if len(files) == 1:
# Ensure check for single file is also done in folder
fname = files[0]
else:
# Sequence
import clique
# todo: check support for negative frames as input
collections, remainder = clique.assemble(files)
assert len(collections) == 1, (
"Must find a single image sequence, "
"found: %s" % (collections,)
)
collection = collections[0]
fname = collection.format('{head}{{padding}}{tail}')
padding = collection.padding
if padding == 0:
# Clique doesn't provide padding if the frame number never
# starts with a zero and thus never has any visual padding.
# So we fall back to the length of the smallest frame number
# as the padding.
# Supply frame/padding with # signs
padding_str = "#" * padding
fname = fname.format(padding=padding_str)
return os.path.join(path, fname)
# The path is either a single file or sequence in a folder so
# we do a quick lookup for our files
if os.path.isfile(path):
path = os.path.dirname(path)
path = _get_filename_from_folder(path)
# Even when not applying a preset V-Ray will reset the 3rd Party
# Channels Mapping of the VRayVolumeGrid when setting the .inPath
# value. As such we try and preserve the values ourselves.
# Reported as ChaosGroup bug ticket: 154-011-2909
# todo(roy): Remove when new V-Ray release preserves values
original_user_mapping = cmds.getAttr(grid_node + ".usrchmap") or ""
# Workaround for V-Ray bug: fix lag on path change, see function
_fix_duplicate_vvg_callbacks()
# Suppress preset pop-up if we want.
popup_attr = "{0}.inDontOfferPresets".format(grid_node)
popup = {popup_attr: not show_preset_popup}
with attribute_values(popup):
cmds.setAttr(grid_node + ".inPath", path, type="string")
# Reapply the 3rd Party channels user mapping when no preset popup
# was shown to the user
if not show_preset_popup:
channels = cmds.getAttr(grid_node + ".usrchmapallch").split(";")
channels = set(channels) # optimize lookup
restored_mapping = ""
for entry in original_user_mapping.split(";"):
if not entry:
# Ignore empty entries
continue
# If 3rd Party Channels selection channel still exists then
# add it again.
index, channel = entry.split(",")
attr = THIRD_PARTY_CHANNELS.get(int(index),
# Fallback for when a mapping
# was set that is not in the
# documentation
"???")
if channel in channels:
restored_mapping += entry + ";"
else:
self.log.warning("Can't preserve '%s' mapping due to "
"missing channel '%s' on node: "
"%s" % (attr, channel, grid_node))
if restored_mapping:
cmds.setAttr(grid_node + ".usrchmap",
restored_mapping,
type="string")
def update(self, container, context):
repre_entity = context["representation"]
path = get_representation_path(repre_entity)
# Find VRayVolumeGrid
members = cmds.sets(container['objectName'], query=True)
grid_nodes = cmds.ls(members, type="VRayVolumeGrid", long=True)
assert len(grid_nodes) > 0, "This is a bug"
# Update the VRayVolumeGrid
for grid_node in grid_nodes:
self._set_path(grid_node, path=path, show_preset_popup=False)
# Update container representation
cmds.setAttr(container["objectName"] + ".representation",
repre_entity["id"],
type="string")
def switch(self, container, context):
self.update(container, context)
def remove(self, container):
# Get all members of the AYON container, ensure they are unlocked
# and delete everything
members = cmds.sets(container['objectName'], query=True)
cmds.lockNode(members, lock=False)
cmds.delete([container['objectName']] + members)
# Clean up the namespace
try:
cmds.namespace(removeNamespace=container['namespace'],
deleteNamespaceContent=True)
except RuntimeError:
pass
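The padding fallback in `_get_filename_from_folder` above can be illustrated without the clique dependency; this is a minimal stand-in sketch (the helper name and the frame-field regex are ours, assuming frame numbers sit in the last dotted field before the extension):

```python
import re


def vdb_sequence_pattern(files):
    """Derive a '####'-padded filename from a .vdb sequence.

    Mirrors the clique-based logic above: if the frame numbers are
    zero-padded the padding is their fixed width; otherwise we fall
    back to the width of the smallest frame number.
    """
    frames = {}
    for fname in files:
        match = re.match(r"^(.*\.)(\d+)(\.vdb)$", fname)
        if not match:
            continue
        head, frame, tail = match.groups()
        frames.setdefault((head, tail), []).append(frame)
    if len(frames) != 1:
        raise RuntimeError("Expected a single sequence, found: %s" % frames)
    (head, tail), numbers = next(iter(frames.items()))
    widths = {len(n) for n in numbers}
    if len(widths) == 1 and any(n.startswith("0") for n in numbers):
        # Visually padded, e.g. 0001 -> keep the fixed width
        padding = widths.pop()
    else:
        # No visual padding -> width of the smallest frame number
        padding = min(len(str(int(n))) for n in numbers)
    return "{}{}{}".format(head, "#" * padding, tail)
```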


@ -1,192 +0,0 @@
# -*- coding: utf-8 -*-
"""Loader for Vray Proxy files.
If there are Alembics published along the vray proxy (in the same version),
the loader will use them instead of the native vray vrmesh format.
"""
import os
import ayon_api
import maya.cmds as cmds
from ayon_core.pipeline import get_representation_path
from ayon_core.settings import get_project_settings
from ayon_maya.api.lib import maintained_selection, namespaced, unique_namespace
from ayon_maya.api.pipeline import containerise
from ayon_maya.api import plugin
from ayon_maya.api.plugin import get_load_color_for_product_type
class VRayProxyLoader(plugin.Loader):
"""Load VRay Proxy with Alembic or VrayMesh."""
product_types = {"vrayproxy", "model", "pointcache", "animation"}
representations = {"vrmesh", "abc"}
label = "Import VRay Proxy"
order = -10
icon = "code-fork"
color = "orange"
def load(self, context, name=None, namespace=None, options=None):
# type: (dict, str, str, dict) -> None
"""Loader entry point.
Args:
context (dict): Loaded representation context.
name (str): Name of container.
namespace (str): Optional namespace name.
options (dict): Optional loader options.
"""
product_type = context["product"]["productType"]
# get all representations for this version
filename = self._get_abc(
context["project"]["name"], context["version"]["id"]
)
if not filename:
filename = self.filepath_from_context(context)
folder_name = context["folder"]["name"]
namespace = namespace or unique_namespace(
folder_name + "_",
prefix="_" if folder_name[0].isdigit() else "",
suffix="_",
)
# Ensure V-Ray for Maya is loaded.
cmds.loadPlugin("vrayformaya", quiet=True)
with maintained_selection():
cmds.namespace(addNamespace=namespace)
with namespaced(namespace, new=False):
nodes, group_node = self.create_vray_proxy(
name, filename=filename)
self[:] = nodes
if not nodes:
return
# colour the group node
project_name = context["project"]["name"]
settings = get_project_settings(project_name)
color = get_load_color_for_product_type(product_type, settings)
if color is not None:
red, green, blue = color
cmds.setAttr("{0}.useOutlinerColor".format(group_node), 1)
cmds.setAttr(
"{0}.outlinerColor".format(group_node), red, green, blue
)
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, context):
# type: (dict, dict) -> None
"""Update container with specified representation."""
node = container['objectName']
assert cmds.objExists(node), "Missing container"
members = cmds.sets(node, query=True) or []
vraymeshes = cmds.ls(members, type="VRayProxy")
assert vraymeshes, "Cannot find VRayMesh in container"
# get all representations for this version
repre_entity = context["representation"]
filename = self._get_abc(
context["project"]["name"], context["version"]["id"]
)
if not filename:
filename = get_representation_path(repre_entity)
for vray_mesh in vraymeshes:
cmds.setAttr("{}.fileName".format(vray_mesh),
filename,
type="string")
# Update metadata
cmds.setAttr("{}.representation".format(node),
repre_entity["id"],
type="string")
def remove(self, container):
# type: (dict) -> None
"""Remove loaded container."""
# Delete container and its contents
if cmds.objExists(container['objectName']):
members = cmds.sets(container['objectName'], query=True) or []
cmds.delete([container['objectName']] + members)
# Remove the namespace, if empty
namespace = container['namespace']
if cmds.namespace(exists=namespace):
members = cmds.namespaceInfo(namespace, listNamespace=True)
if not members:
cmds.namespace(removeNamespace=namespace)
else:
self.log.warning("Namespace not deleted because it "
"still has members: %s", namespace)
def switch(self, container, context):
# type: (dict, dict) -> None
"""Switch loaded representation."""
self.update(container, context)
def create_vray_proxy(self, name, filename):
# type: (str, str) -> (list, str)
"""Re-create the structure created by VRay to support vrmeshes
Args:
name (str): Name of the asset.
filename (str): File name of vrmesh.
Returns:
nodes(list)
"""
if name is None:
name = os.path.splitext(os.path.basename(filename))[0]
parent = cmds.createNode("transform", name=name)
proxy = cmds.createNode(
"VRayProxy", name="{}Shape".format(name), parent=parent)
cmds.setAttr(proxy + ".fileName", filename, type="string")
cmds.connectAttr("time1.outTime", proxy + ".currentFrame")
return [parent, proxy], parent
def _get_abc(self, project_name, version_id):
# type: (str, str) -> str
"""Get abc representation file path if present.
If there is an Alembic (abc) representation published along the
vray proxy, get its file path.
Args:
project_name (str): Project name.
version_id (str): Version hash id.
Returns:
str: Path to file.
None: If abc not found.
"""
self.log.debug(
"Looking for abc in published representations of this version.")
abc_rep = ayon_api.get_representation_by_name(
project_name, "abc", version_id
)
if abc_rep:
self.log.debug("Found, we'll link alembic to vray proxy.")
file_name = get_representation_path(abc_rep)
self.log.debug("File: {}".format(file_name))
return file_name
return ""


@ -1,148 +0,0 @@
# -*- coding: utf-8 -*-
import maya.cmds as cmds # noqa
from ayon_core.pipeline import get_representation_path
from ayon_core.settings import get_project_settings
from ayon_maya.api.lib import maintained_selection, namespaced, unique_namespace
from ayon_maya.api.pipeline import containerise
from ayon_maya.api import plugin
from ayon_maya.api.plugin import get_load_color_for_product_type
class VRaySceneLoader(plugin.Loader):
"""Load Vray scene"""
product_types = {"vrayscene_layer"}
representations = {"vrscene"}
label = "Import VRay Scene"
order = -10
icon = "code-fork"
color = "orange"
def load(self, context, name, namespace, data):
product_type = context["product"]["productType"]
folder_name = context["folder"]["name"]
namespace = namespace or unique_namespace(
folder_name + "_",
prefix="_" if folder_name[0].isdigit() else "",
suffix="_",
)
# Ensure V-Ray for Maya is loaded.
cmds.loadPlugin("vrayformaya", quiet=True)
with maintained_selection():
cmds.namespace(addNamespace=namespace)
with namespaced(namespace, new=False):
nodes, root_node = self.create_vray_scene(
name,
filename=self.filepath_from_context(context)
)
self[:] = nodes
if not nodes:
return
# colour the group node
project_name = context["project"]["name"]
settings = get_project_settings(project_name)
color = get_load_color_for_product_type(product_type, settings)
if color is not None:
red, green, blue = color
cmds.setAttr("{0}.useOutlinerColor".format(root_node), 1)
cmds.setAttr(
"{0}.outlinerColor".format(root_node), red, green, blue
)
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
def update(self, container, context):
node = container['objectName']
assert cmds.objExists(node), "Missing container"
members = cmds.sets(node, query=True) or []
vraymeshes = cmds.ls(members, type="VRayScene")
assert vraymeshes, "Cannot find VRayScene in container"
repre_entity = context["representation"]
filename = get_representation_path(repre_entity)
for vray_mesh in vraymeshes:
cmds.setAttr("{}.FilePath".format(vray_mesh),
filename,
type="string")
# Update metadata
cmds.setAttr("{}.representation".format(node),
repre_entity["id"],
type="string")
def remove(self, container):
# Delete container and its contents
if cmds.objExists(container['objectName']):
members = cmds.sets(container['objectName'], query=True) or []
cmds.delete([container['objectName']] + members)
# Remove the namespace, if empty
namespace = container['namespace']
if cmds.namespace(exists=namespace):
members = cmds.namespaceInfo(namespace, listNamespace=True)
if not members:
cmds.namespace(removeNamespace=namespace)
else:
self.log.warning("Namespace not deleted because it "
"still has members: %s", namespace)
def switch(self, container, context):
self.update(container, context)
def create_vray_scene(self, name, filename):
"""Re-create the structure created by VRay to support vrscenes
Args:
name(str): name of the asset
filename(str): path to the vrscene file
Returns:
nodes(list)
"""
# Create nodes
mesh_node_name = "VRayScene_{}".format(name)
trans = cmds.createNode(
"transform", name=mesh_node_name)
vray_scene = cmds.createNode(
"VRayScene", name="{}_VRSCN".format(mesh_node_name), parent=trans)
mesh = cmds.createNode(
"mesh", name="{}_Shape".format(mesh_node_name), parent=trans)
cmds.connectAttr(
"{}.outMesh".format(vray_scene), "{}.inMesh".format(mesh))
cmds.setAttr("{}.FilePath".format(vray_scene), filename, type="string")
# Lock the shape nodes so the user cannot delete these
cmds.lockNode(mesh, lock=True)
cmds.lockNode(vray_scene, lock=True)
# Create important connections
cmds.connectAttr("time1.outTime",
"{0}.inputTime".format(trans))
# Connect mesh to initialShadingGroup
cmds.sets([mesh], forceElement="initialShadingGroup")
nodes = [trans, vray_scene, mesh]
# Fix: Force refresh so the mesh shows correctly after creation
cmds.refresh()
return nodes, trans


@ -1,185 +0,0 @@
import os
import shutil
from ayon_maya.api import plugin
import maya.cmds as cmds
import xgenm
from ayon_core.pipeline import get_representation_path
from ayon_maya.api import current_file
from ayon_maya.api.lib import (
attribute_values,
get_container_members,
maintained_selection,
write_xgen_file,
)
from qtpy import QtWidgets
class XgenLoader(plugin.ReferenceLoader):
"""Load Xgen as reference"""
product_types = {"xgen"}
representations = {"ma", "mb"}
label = "Reference Xgen"
icon = "code-fork"
color = "orange"
def get_xgen_xgd_paths(self, palette):
_, maya_extension = os.path.splitext(current_file())
xgen_file = current_file().replace(
maya_extension,
"__{}.xgen".format(palette.replace("|", "").replace(":", "__"))
)
xgd_file = xgen_file.replace(".xgen", ".xgd")
return xgen_file, xgd_file
def process_reference(self, context, name, namespace, options):
# Validate workfile has a path.
if current_file() is None:
QtWidgets.QMessageBox.warning(
None,
"",
"Current workfile has not been saved. Please save the workfile"
" before loading an Xgen."
)
return
maya_filepath = self.prepare_root_value(
file_url=self.filepath_from_context(context),
project_name=context["project"]["name"]
)
# Reference xgen. Xgen does not like being referenced in under a group.
with maintained_selection():
nodes = cmds.file(
maya_filepath,
namespace=namespace,
sharedReferenceFile=False,
reference=True,
returnNewNodes=True
)
xgen_palette = cmds.ls(
nodes, type="xgmPalette", long=True
)[0].replace("|", "")
xgen_file, xgd_file = self.get_xgen_xgd_paths(xgen_palette)
self.set_palette_attributes(xgen_palette, xgen_file, xgd_file)
# Change the cache and disk values of xgDataPath and xgProjectPath
# to ensure paths are setup correctly.
project_path = os.path.dirname(current_file()).replace("\\", "/")
xgenm.setAttr("xgProjectPath", project_path, xgen_palette)
data_path = "${{PROJECT}}xgen/collections/{};{}".format(
xgen_palette.replace(":", "__ns__"),
xgenm.getAttr("xgDataPath", xgen_palette)
)
xgenm.setAttr("xgDataPath", data_path, xgen_palette)
data = {"xgProjectPath": project_path, "xgDataPath": data_path}
write_xgen_file(data, xgen_file)
# This creates a custom float attribute. If we did not add any
# changes to the collection, Xgen would not create an xgd file on
# save, which causes errors when launching the workfile again
# because it tries to find the xgd file.
name = "custom_float_ignore"
if name not in xgenm.customAttrs(xgen_palette):
xgenm.addCustomAttr(
"custom_float_ignore", xgen_palette
)
shapes = cmds.ls(nodes, shapes=True, long=True)
new_nodes = (list(set(nodes) - set(shapes)))
self[:] = new_nodes
return new_nodes
def set_palette_attributes(self, xgen_palette, xgen_file, xgd_file):
cmds.setAttr(
"{}.xgBaseFile".format(xgen_palette),
os.path.basename(xgen_file),
type="string"
)
cmds.setAttr(
"{}.xgFileName".format(xgen_palette),
os.path.basename(xgd_file),
type="string"
)
cmds.setAttr("{}.xgExportAsDelta".format(xgen_palette), True)
def update(self, container, context):
"""Workflow for updating Xgen.
- Export changes to delta file.
- Copy and overwrite the workspace .xgen file.
- Set collection attributes to not include delta files.
- Update xgen maya file reference.
- Apply the delta file changes.
- Reset collection attributes to include delta files.
We have to use this workflow because, when referencing the xgen
collection, Maya implicitly imports the Xgen data from the xgen file, so
we don't have any control over when the delta file changes are applied.
There is an implicit increment of the xgen and delta files, due to
using the workfile basename.
"""
# Storing current description to try and maintain later.
current_description = (
xgenm.xgGlobal.DescriptionEditor.currentDescription()
)
container_node = container["objectName"]
members = get_container_members(container_node)
xgen_palette = cmds.ls(
members, type="xgmPalette", long=True
)[0].replace("|", "")
xgen_file, xgd_file = self.get_xgen_xgd_paths(xgen_palette)
# Export current changes to apply later.
xgenm.createDelta(xgen_palette.replace("|", ""), xgd_file)
self.set_palette_attributes(xgen_palette, xgen_file, xgd_file)
repre_entity = context["representation"]
maya_file = get_representation_path(repre_entity)
_, extension = os.path.splitext(maya_file)
new_xgen_file = maya_file.replace(extension, ".xgen")
data_path = ""
with open(new_xgen_file, "r") as f:
for line in f:
if line.startswith("\txgDataPath"):
line = line.rstrip()
data_path = line.split("\t")[-1]
break
project_path = os.path.dirname(current_file()).replace("\\", "/")
data_path = "${{PROJECT}}xgen/collections/{};{}".format(
xgen_palette.replace(":", "__ns__"),
data_path
)
data = {"xgProjectPath": project_path, "xgDataPath": data_path}
shutil.copy(new_xgen_file, xgen_file)
write_xgen_file(data, xgen_file)
attribute_data = {
"{}.xgFileName".format(xgen_palette): os.path.basename(xgen_file),
"{}.xgBaseFile".format(xgen_palette): "",
"{}.xgExportAsDelta".format(xgen_palette): False
}
with attribute_values(attribute_data):
super().update(container, context)
xgenm.applyDelta(xgen_palette.replace("|", ""), xgd_file)
# Restore current selected description if it exists.
if cmds.objExists(current_description):
xgenm.xgGlobal.DescriptionEditor.setCurrentDescription(
current_description
)
# Full UI refresh.
xgenm.xgGlobal.DescriptionEditor.refresh("Full")
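The xgDataPath rewrite used in both `process_reference` and `update` above can be isolated into a small pure function; a sketch (the helper name is ours, while the `__ns__` substitution and the `${PROJECT}` token follow the code above):

```python
def prepend_collection_path(palette, existing_data_path):
    """Prefix the workspace collection folder onto an xgDataPath value.

    Xgen data paths are semicolon-separated search paths; the palette's
    namespace separator ':' is flattened to '__ns__' to stay
    filesystem-safe, matching the loader code above.
    """
    collection = "${{PROJECT}}xgen/collections/{}".format(
        palette.replace(":", "__ns__")
    )
    return "{};{}".format(collection, existing_data_path)
```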


@ -1,397 +0,0 @@
import json
import os
import re
from collections import defaultdict
import clique
from ayon_core.pipeline import get_representation_path
from ayon_core.settings import get_project_settings
from ayon_maya.api import lib
from ayon_maya.api.pipeline import containerise
from ayon_maya.api import plugin
from ayon_maya.api.plugin import get_load_color_for_product_type
from ayon_maya.api.yeti import create_yeti_variable
from maya import cmds
# Do not reset these values on update but only apply on first load
# to preserve any potential local overrides
SKIP_UPDATE_ATTRS = {
"displayOutput",
"viewportDensity",
"viewportWidth",
"viewportLength",
"renderDensity",
"renderWidth",
"renderLength",
"increaseRenderBounds"
}
SKIP_ATTR_MESSAGE = (
"Skipping updating %s.%s to %s because it "
"is considered a local overridable attribute. "
"Either set manually or load the cache "
"anew."
)
def set_attribute(node, attr, value):
"""Wrapper of set attribute which ignores None values"""
if value is None:
return
lib.set_attribute(node, attr, value)
class YetiCacheLoader(plugin.Loader):
"""Load Yeti Cache with one or more Yeti nodes"""
product_types = {"yeticache", "yetiRig"}
representations = {"fur"}
label = "Load Yeti Cache"
order = -9
icon = "code-fork"
color = "orange"
def load(self, context, name=None, namespace=None, data=None):
"""Loads a .fursettings file defining how to load .fur sequences
A single yeticache or yetiRig can contain more than one pgYetiMaya
node and thus load more than a single .fur sequence.
The .fursettings file defines what the node names should be and also
what "cbId" attribute they should receive to match the original source
and allow published looks to also work for Yeti rigs and its caches.
"""
product_type = context["product"]["productType"]
# Build namespace
folder_name = context["folder"]["name"]
if namespace is None:
namespace = self.create_namespace(folder_name)
# Ensure Yeti is loaded
if not cmds.pluginInfo("pgYetiMaya", query=True, loaded=True):
cmds.loadPlugin("pgYetiMaya", quiet=True)
# Create Yeti cache nodes according to settings
path = self.filepath_from_context(context)
settings = self.read_settings(path)
nodes = []
for node in settings["nodes"]:
nodes.extend(self.create_node(namespace, node))
group_name = "{}:{}".format(namespace, name)
group_node = cmds.group(nodes, name=group_name)
project_name = context["project"]["name"]
settings = get_project_settings(project_name)
color = get_load_color_for_product_type(product_type, settings)
if color is not None:
red, green, blue = color
cmds.setAttr(group_node + ".useOutlinerColor", 1)
cmds.setAttr(group_node + ".outlinerColor", red, green, blue)
nodes.append(group_node)
self[:] = nodes
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__
)
def remove(self, container):
namespace = container["namespace"]
container_name = container["objectName"]
self.log.info("Removing '%s' from Maya.." % container["name"])
container_content = cmds.sets(container_name, query=True)
nodes = cmds.ls(container_content, long=True)
nodes.append(container_name)
try:
cmds.delete(nodes)
except ValueError:
# Already implicitly deleted by Maya upon removing reference
pass
cmds.namespace(removeNamespace=namespace, deleteNamespaceContent=True)
def update(self, container, context):
repre_entity = context["representation"]
namespace = container["namespace"]
container_node = container["objectName"]
path = get_representation_path(repre_entity)
settings = self.read_settings(path)
# Collect scene information of asset
set_members = lib.get_container_members(container)
container_root = lib.get_container_transforms(container,
members=set_members,
root=True)
scene_nodes = cmds.ls(set_members, type="pgYetiMaya", long=True)
# Build lookup with cbId as keys
scene_lookup = defaultdict(list)
for node in scene_nodes:
cb_id = lib.get_id(node)
scene_lookup[cb_id].append(node)
# Re-assemble metadata with cbId as keys
meta_data_lookup = {n["cbId"]: n for n in settings["nodes"]}
# Delete nodes by "cbId" that are not in the updated version
to_delete_lookup = {cb_id for cb_id in scene_lookup.keys() if
cb_id not in meta_data_lookup}
if to_delete_lookup:
# Get nodes and remove entry from lookup
to_remove = []
for _id in to_delete_lookup:
# Get all related nodes
shapes = scene_lookup[_id]
# Get the parents of all shapes under the ID
transforms = cmds.listRelatives(shapes,
parent=True,
fullPath=True) or []
to_remove.extend(shapes + transforms)
# Remove id from lookup
scene_lookup.pop(_id, None)
cmds.delete(to_remove)
for cb_id, node_settings in meta_data_lookup.items():
if cb_id not in scene_lookup:
# Create new nodes
self.log.info("Creating new nodes ..")
new_nodes = self.create_node(namespace, node_settings)
cmds.sets(new_nodes, addElement=container_node)
cmds.parent(new_nodes, container_root)
else:
# Update the matching nodes
scene_nodes = scene_lookup[cb_id]
lookup_result = meta_data_lookup[cb_id]["name"]
# Remove namespace if any (e.g.: "character_01_:head_YNShape")
node_name = lookup_result.rsplit(":", 1)[-1]
for scene_node in scene_nodes:
# Get transform node, this makes renaming easier
transforms = cmds.listRelatives(scene_node,
parent=True,
fullPath=True) or []
assert len(transforms) == 1, "This is a bug!"
# Get scene node's namespace and rename the transform node
lead = scene_node.rsplit(":", 1)[0]
namespace = ":{}".format(lead.rsplit("|")[-1])
new_shape_name = "{}:{}".format(namespace, node_name)
new_trans_name = new_shape_name.rsplit("Shape", 1)[0]
transform_node = transforms[0]
cmds.rename(transform_node,
new_trans_name,
ignoreShape=False)
# Get the newly named shape node
yeti_nodes = cmds.listRelatives(new_trans_name,
children=True)
yeti_node = yeti_nodes[0]
for attr, value in node_settings["attrs"].items():
if attr in SKIP_UPDATE_ATTRS:
self.log.info(
SKIP_ATTR_MESSAGE, yeti_node, attr, value
)
continue
set_attribute(yeti_node, attr, value)
# Set up user defined attributes
user_variables = node_settings.get("user_variables", {})
for attr, value in user_variables.items():
was_value_set = create_yeti_variable(
yeti_shape_node=yeti_node,
attr_name=attr,
value=value,
# We do not want to update the
# value if it already exists so
# that any local overrides that
# may have been applied still
# persist
force_value=False
)
if not was_value_set:
self.log.info(
SKIP_ATTR_MESSAGE, yeti_node, attr, value
)
cmds.setAttr("{}.representation".format(container_node),
repre_entity["id"],
typ="string")
def switch(self, container, context):
self.update(container, context)
# helper functions
def create_namespace(self, folder_name):
"""Create a unique namespace
Args:
folder_name (str): Folder name used as the namespace base.
"""
asset_name = "{}_".format(folder_name)
prefix = "_" if asset_name[0].isdigit() else ""
namespace = lib.unique_namespace(
asset_name,
prefix=prefix,
suffix="_"
)
return namespace
def get_cache_node_filepath(self, root, node_name):
"""Get the cache file path for one of the yeti nodes.
All caches with more than 1 frame need the cache file name set with
`%04d` padding. If the cache has only one frame we return the file name
as-is, as we assume it is a snapshot.
This expects the files to be named after the "node name" through
exports with <Name> in Yeti.
Args:
root(str): Folder containing cache files to search in.
node_name(str): Node name to search cache files for
Returns:
str: Cache file path value needed for cacheFileName attribute
"""
name = node_name.replace(":", "_")
pattern = r"^({name})(\.[0-9]+)?(\.fur)$".format(name=re.escape(name))
files = [fname for fname in os.listdir(root) if re.match(pattern,
fname)]
if not files:
self.log.error("Could not find cache files for '{}' "
"with pattern {}".format(node_name, pattern))
return
if len(files) == 1:
# Single file
return os.path.join(root, files[0])
# Get filename for the sequence with padding
collections, remainder = clique.assemble(files)
assert not remainder, "This is a bug"
assert len(collections) == 1, "This is a bug"
collection = collections[0]
# Formats name as {head}%d{tail} like cache.%04d.fur
fname = collection.format("{head}{padding}{tail}")
return os.path.join(root, fname)
def create_node(self, namespace, node_settings):
"""Create nodes with the correct namespace and settings
Args:
namespace(str): namespace
node_settings(dict): Single "nodes" entry from .fursettings file.
Returns:
list: Created nodes
"""
nodes = []
# Get original names and ids
orig_transform_name = node_settings["transform"]["name"]
orig_shape_name = node_settings["name"]
# Add namespace
transform_name = "{}:{}".format(namespace, orig_transform_name)
shape_name = "{}:{}".format(namespace, orig_shape_name)
# Create pgYetiMaya node
transform_node = cmds.createNode("transform",
name=transform_name)
yeti_node = cmds.createNode("pgYetiMaya",
name=shape_name,
parent=transform_node)
lib.set_id(transform_node, node_settings["transform"]["cbId"])
lib.set_id(yeti_node, node_settings["cbId"])
nodes.extend([transform_node, yeti_node])
# Update attributes with defaults
attributes = node_settings["attrs"]
attributes.update({
"verbosity": 2,
"fileMode": 1,
# Fix render stats, like Yeti's own
# ../scripts/pgYetiNode.mel script
"visibleInReflections": True,
"visibleInRefractions": True
})
if "viewportDensity" not in attributes:
attributes["viewportDensity"] = 0.1
# Apply attributes to pgYetiMaya node
for attr, value in attributes.items():
set_attribute(yeti_node, attr, value)
# Set up user defined attributes
user_variables = node_settings.get("user_variables", {})
for attr, value in user_variables.items():
create_yeti_variable(yeti_shape_node=yeti_node,
attr_name=attr,
value=value)
# Connect to the time node
cmds.connectAttr("time1.outTime", "%s.currentTime" % yeti_node)
return nodes
def read_settings(self, path):
"""Read .fursettings file and compute some additional attributes"""
with open(path, "r") as fp:
fur_settings = json.load(fp)
if "nodes" not in fur_settings:
raise RuntimeError("Encountered invalid data, "
"expected 'nodes' in fursettings.")
# Compute the cache file name values we want to set for the nodes
root = os.path.dirname(path)
for node in fur_settings["nodes"]:
cache_filename = self.get_cache_node_filepath(
root=root, node_name=node["name"])
attrs = node.get("attrs", {}) # allow 'attrs' to not exist
attrs["cacheFileName"] = cache_filename
node["attrs"] = attrs
return fur_settings
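The filename matching in `get_cache_node_filepath` above hinges on a single regex; it can be exercised standalone as below (the sample file names in the usage are hypothetical):

```python
import re


def matching_cache_files(node_name, filenames):
    """Return files matching '<name>.<frame>.fur' or '<name>.fur'.

    As in the loader above, namespace ':' separators are flattened to
    '_' to match how Yeti's <Name> export token names the files.
    """
    name = node_name.replace(":", "_")
    pattern = r"^({name})(\.[0-9]+)?(\.fur)$".format(name=re.escape(name))
    return [fname for fname in filenames if re.match(pattern, fname)]
```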


@ -1,94 +0,0 @@
from typing import List
import maya.cmds as cmds
from ayon_core.pipeline import registered_host
from ayon_core.pipeline.create import CreateContext
from ayon_maya.api import lib, plugin
class YetiRigLoader(plugin.ReferenceLoader):
"""This loader will load Yeti rig."""
product_types = {"yetiRig"}
representations = {"ma"}
label = "Load Yeti Rig"
order = -9
icon = "code-fork"
color = "orange"
# From settings
create_cache_instance_on_load = True
def process_reference(
self, context, name=None, namespace=None, options=None
):
path = self.filepath_from_context(context)
attach_to_root = options.get("attach_to_root", True)
group_name = options["group_name"]
# no group shall be created
if not attach_to_root:
group_name = namespace
with lib.maintained_selection():
file_url = self.prepare_root_value(
path, context["project"]["name"]
)
nodes = cmds.file(
file_url,
namespace=namespace,
reference=True,
returnNewNodes=True,
groupReference=attach_to_root,
groupName=group_name
)
color = plugin.get_load_color_for_product_type("yetiRig")
if color is not None:
red, green, blue = color
cmds.setAttr(group_name + ".useOutlinerColor", 1)
cmds.setAttr(
group_name + ".outlinerColor", red, green, blue
)
self[:] = nodes
if self.create_cache_instance_on_load:
# Automatically create an instance to allow publishing the loaded
# yeti rig into a yeti cache
self._create_yeti_cache_instance(nodes, variant=namespace)
return nodes
def _create_yeti_cache_instance(self, nodes: List[str], variant: str):
"""Create a yeticache product type instance to publish the output.
This is similar to how loading animation rig will automatically create
an animation instance for publishing any loaded character rigs, but
then for yeti rigs.
Args:
nodes (List[str]): Nodes generated on load.
variant (str): Variant for the yeti cache instance to create.
"""
# Find the roots amongst the loaded nodes
yeti_nodes = cmds.ls(nodes, type="pgYetiMaya", long=True)
assert yeti_nodes, "No pgYetiMaya nodes in rig, this is a bug."
self.log.info("Creating variant: {}".format(variant))
creator_identifier = "io.openpype.creators.maya.yeticache"
host = registered_host()
create_context = CreateContext(host)
with lib.maintained_selection():
cmds.select(yeti_nodes, noExpand=True)
create_context.create(
creator_identifier=creator_identifier,
variant=variant,
pre_create_data={"use_selection": True}
)


@ -1,59 +0,0 @@
import maya.cmds as cmds

import pyblish.api

from ayon_maya.api import plugin


class CollectAnimationOutputGeometry(plugin.MayaInstancePlugin):
    """Collect out hierarchy data for instance.

    Collect all hierarchy nodes which reside in the out_SET of the animation
    instance or point cache instance. This is to unify the logic of retrieving
    that specific data. This eliminates the need to write two separate pieces
    of logic to fetch all hierarchy nodes.

    Results in a list of nodes from the content of the instance.

    """

    order = pyblish.api.CollectorOrder + 0.4
    families = ["animation"]
    label = "Collect Animation Output Geometry"

    ignore_type = ["constraints"]

    def process(self, instance):
        """Collect the hierarchy nodes"""

        product_type = instance.data["productType"]
        out_set = next((i for i in instance.data["setMembers"] if
                        i.endswith("out_SET")), None)

        if out_set is None:
            self.log.warning((
                "Expecting out_SET for instance of product type '{}'"
            ).format(product_type))
            return

        members = cmds.ls(cmds.sets(out_set, query=True), long=True)

        # Get all the relatives of the members
        descendants = cmds.listRelatives(members,
                                         allDescendents=True,
                                         fullPath=True) or []
        descendants = cmds.ls(descendants, noIntermediate=True, long=True)

        # Add members and descendants together for a complete overview
        hierarchy = members + descendants

        # Ignore certain node types (e.g. constraints)
        ignore = cmds.ls(hierarchy, type=self.ignore_type, long=True)
        if ignore:
            ignore = set(ignore)
            hierarchy = [node for node in hierarchy if node not in ignore]

        # Store data in the instance for the validator
        instance.data["out_hierarchy"] = hierarchy

        if instance.data.get("farm"):
            instance.data["families"].append("publish.farm")
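The member/descendant merge and ignore-type filter above are plain list and set operations once the `cmds` queries are done. A standalone sketch with the Maya scene data replaced by plain lists (no `maya.cmds` required):

```python
# Standalone sketch of the hierarchy-filtering step: combine set members
# with their descendants, then drop nodes that match ignored types.

def collect_out_hierarchy(members, descendants, ignore_nodes):
    """Merge members with descendants, filtering out ignored nodes."""
    hierarchy = members + descendants
    ignore = set(ignore_nodes)  # set lookup keeps the filter O(n)
    return [node for node in hierarchy if node not in ignore]


members = ["|char_GRP"]
descendants = ["|char_GRP|body_GEO", "|char_GRP|body_GEO_parentConstraint1"]
ignore = ["|char_GRP|body_GEO_parentConstraint1"]  # e.g. a constraint node
print(collect_out_hierarchy(members, descendants, ignore))
# → ['|char_GRP', '|char_GRP|body_GEO']
```

In the plugin, `members`, `descendants`, and `ignore` come from `cmds.sets` and `cmds.ls` queries; the node names here are illustrative.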

View file

@ -1,58 +0,0 @@
import pyblish.api

from ayon_maya.api.lib import get_all_children
from ayon_maya.api import plugin
from maya import cmds


class CollectArnoldSceneSource(plugin.MayaInstancePlugin):
    """Collect Arnold Scene Source data."""

    # Offset to be after renderable camera collection.
    order = pyblish.api.CollectorOrder + 0.2
    label = "Collect Arnold Scene Source"
    families = ["ass", "assProxy"]

    def process(self, instance):
        instance.data["members"] = []
        for set_member in instance.data["setMembers"]:
            if cmds.nodeType(set_member) != "objectSet":
                instance.data["members"].extend(self.get_hierarchy(set_member))
                continue

            members = cmds.sets(set_member, query=True)
            members = cmds.ls(members, long=True)
            if members is None:
                self.log.warning(
                    "Skipped empty instance: \"%s\" " % set_member
                )
                continue
            if set_member.endswith("proxy_SET"):
                instance.data["proxy"] = self.get_hierarchy(members)

        # Use camera in object set if present else default to render globals
        # camera.
        cameras = cmds.ls(type="camera", long=True)
        renderable = [c for c in cameras if cmds.getAttr("%s.renderable" % c)]
        if renderable:
            camera = renderable[0]
            for node in instance.data["members"]:
                camera_shapes = cmds.listRelatives(
                    node, shapes=True, type="camera"
                )
                if camera_shapes:
                    camera = node
            instance.data["camera"] = camera
        else:
            self.log.debug("No renderable cameras found.")

        self.log.debug("data: {}".format(instance.data))

    def get_hierarchy(self, nodes):
        """Return nodes with all their children"""
        nodes = cmds.ls(nodes, long=True)
        if not nodes:
            return []
        children = get_all_children(nodes)
        # Make sure nodes merged with children
        # only contains unique entries
        return list(set(nodes + list(children)))
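The final step of `get_hierarchy` is a duplicate-free merge of nodes with their children. In Maya the children come from `get_all_children`; the sketch below uses plain strings, and note that going through `set` discards ordering, just as in the plugin.

```python
# Pure-Python sketch of the merge step in get_hierarchy above.

def merge_unique(nodes, children):
    """Merge two node lists into one list of unique entries (unordered)."""
    return list(set(nodes + list(children)))


# "|root" appears in both inputs but only once in the result.
result = merge_unique(["|root"], ["|root|geo", "|root"])
print(sorted(result))  # → ['|root', '|root|geo']
```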

View file

@ -1,97 +0,0 @@
"""Collect all relevant assembly items.

Todo:
    Publish of assembly needs a unique namespace for all assets; we should
    create a validator for this.

"""
from collections import defaultdict

import pyblish.api
from maya import cmds, mel

from ayon_maya import api
from ayon_maya.api import lib
from ayon_maya.api import plugin


class CollectAssembly(plugin.MayaInstancePlugin):
    """Collect all relevant assembly items

    Collected data:
        * File name
        * Compatible loader
        * Matrix per instance
        * Namespace

    Note: GPU caches are currently not supported in the pipeline. There is no
    logic yet which supports the swapping of GPU cache to renderable objects.

    """

    order = pyblish.api.CollectorOrder + 0.49
    label = "Assembly"
    families = ["assembly"]

    def process(self, instance):

        # Find containers
        containers = api.ls()

        # Get all content from the instance
        instance_lookup = set(cmds.ls(instance, type="transform", long=True))

        data = defaultdict(list)
        hierarchy_nodes = []
        for container in containers:
            root = lib.get_container_transforms(container, root=True)
            if not root or root not in instance_lookup:
                continue

            # Retrieve the hierarchy
            parent = cmds.listRelatives(root, parent=True, fullPath=True)[0]
            hierarchy_nodes.append(parent)

            # Temporary warning for GPU caches which are not supported yet
            loader = container["loader"]
            if loader == "GpuCacheLoader":
                self.log.warning("GPU Cache Loader is currently not supported "
                                 "in the pipeline, but we will export it "
                                 "anyway.")

            # Gather info for new data entry
            representation_id = container["representation"]
            instance_data = {"loader": loader,
                             "parent": parent,
                             "namespace": container["namespace"]}

            # Check if matrix differs from default and store changes
            matrix_data = self.get_matrix_data(root)
            if matrix_data:
                instance_data["matrix"] = matrix_data

            data[representation_id].append(instance_data)

        instance.data["scenedata"] = dict(data)
        instance.data["nodesHierarchy"] = list(set(hierarchy_nodes))

    def get_file_rule(self, rule):
        return mel.eval('workspace -query -fileRuleEntry "{}"'.format(rule))

    def get_matrix_data(self, node):
        """Get the matrix of a node when it differs from the default.

        Each matrix which differs from the default will be stored in a
        dictionary.

        Args:
            node (str): Transform node to query.

        Returns:
            list: The matrix values, or None when the matrix matches
                the default matrix.

        """
        matrix = cmds.xform(node, query=True, matrix=True)
        if matrix == lib.DEFAULT_MATRIX:
            return

        return matrix
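The matrix check and the per-representation grouping above are straightforward to sketch without Maya: `cmds.xform` is replaced by matrices passed in directly, and the identity matrix stands in for `lib.DEFAULT_MATRIX`. The representation ids and namespace are illustrative values.

```python
# Standalone sketch: store a matrix only when it differs from the default,
# grouped per representation id, mirroring the process() loop above.
from collections import defaultdict

# Flat 4x4 identity matrix, as cmds.xform(..., matrix=True) would return it.
DEFAULT_MATRIX = [1.0, 0.0, 0.0, 0.0,
                  0.0, 1.0, 0.0, 0.0,
                  0.0, 0.0, 1.0, 0.0,
                  0.0, 0.0, 0.0, 1.0]


def get_matrix_data(matrix):
    """Return the matrix only when it differs from the default."""
    if matrix == DEFAULT_MATRIX:
        return None
    return matrix


data = defaultdict(list)
moved = DEFAULT_MATRIX[:-4] + [5.0, 0.0, 0.0, 1.0]  # translated 5 units in X
for repre_id, matrix in [("repre-a", DEFAULT_MATRIX), ("repre-a", moved)]:
    instance_data = {"namespace": "asset_01"}
    matrix_data = get_matrix_data(matrix)
    if matrix_data:
        instance_data["matrix"] = matrix_data
    data[repre_id].append(instance_data)

print(len(data["repre-a"]))            # → 2
print("matrix" in data["repre-a"][0])  # → False (identity is not stored)
```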

View file

@ -1,14 +0,0 @@
import pyblish.api
from ayon_maya.api import plugin
from maya import cmds


class CollectCurrentFile(plugin.MayaContextPlugin):
    """Inject the current working file."""

    order = pyblish.api.CollectorOrder - 0.4
    label = "Maya Current File"

    def process(self, context):
        """Inject the current working file"""
        context.data['currentFile'] = cmds.file(query=True, sceneName=True)

View file

@ -1,36 +0,0 @@
# -*- coding: utf-8 -*-
import pyblish.api

from ayon_core.pipeline import OptionalPyblishPluginMixin
from ayon_maya.api import plugin
from maya import cmds  # noqa


class CollectFbxAnimation(plugin.MayaInstancePlugin,
                          OptionalPyblishPluginMixin):
    """Collect Animated Rig Data for FBX Extractor."""

    order = pyblish.api.CollectorOrder + 0.2
    label = "Collect Fbx Animation"
    families = ["animation"]
    optional = True

    def process(self, instance):
        if not self.is_active(instance.data):
            return
        skeleton_sets = [
            i for i in instance
            if i.endswith("skeletonAnim_SET")
        ]
        if not skeleton_sets:
            return

        instance.data["families"].append("animation.fbx")
        instance.data["animated_skeleton"] = []
        for skeleton_set in skeleton_sets:
            skeleton_content = cmds.sets(skeleton_set, query=True)
            self.log.debug(
                "Collected animated skeleton data: {}".format(
                    skeleton_content
                ))
            if skeleton_content:
                instance.data["animated_skeleton"] = skeleton_content
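The skeleton-set lookup above is a suffix filter over the instance's member nodes. A minimal sketch with the pyblish instance stubbed as a plain list of node names (the names are illustrative):

```python
# Sketch of the skeletonAnim_SET lookup: keep only the object sets tagged
# for FBX skeleton animation export by their name suffix.

def find_skeleton_sets(instance_nodes):
    """Return members whose names end with the skeletonAnim_SET suffix."""
    return [node for node in instance_nodes
            if node.endswith("skeletonAnim_SET")]


nodes = ["char_GRP", "out_SET", "rig_skeletonAnim_SET"]
print(find_skeleton_sets(nodes))  # → ['rig_skeletonAnim_SET']
```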

View file

@ -1,21 +0,0 @@
# -*- coding: utf-8 -*-
import pyblish.api

from ayon_maya.api import plugin
from maya import cmds  # noqa


class CollectFbxCamera(plugin.MayaInstancePlugin):
    """Collect Camera for FBX export."""

    order = pyblish.api.CollectorOrder + 0.2
    label = "Collect Camera for FBX export"
    families = ["camera"]

    def process(self, instance):
        if not instance.data.get("families"):
            instance.data["families"] = []

        if "fbx" not in instance.data["families"]:
            instance.data["families"].append("fbx")

        instance.data["cameras"] = True
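The family-tagging pattern above (create the list if missing, then append only when absent) can be sketched on a plain dict. Note `setdefault` here treats a falsy-but-present `families` slightly differently from the plugin's `get` check; this is a simplified sketch, not the plugin itself.

```python
# Sketch of idempotent family tagging on an instance-data dict.

def ensure_family(instance_data, family):
    """Add a family to instance data, creating the list when needed."""
    families = instance_data.setdefault("families", [])
    if family not in families:
        families.append(family)
    return instance_data


data = {}
ensure_family(data, "fbx")
ensure_family(data, "fbx")  # second call is a no-op
print(data["families"])  # → ['fbx']
```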

Some files were not shown because too many files have changed in this diff.