Merge branch 'develop' into documentation/link_fixes

Ondřej Samohel 2022-02-07 16:39:38 +01:00 committed by GitHub
commit 17f166aa0d
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
189 changed files with 8796 additions and 5566 deletions


@@ -1,13 +1,42 @@
# Changelog
## [3.8.2](https://github.com/pypeclub/OpenPype/tree/3.8.2) (2022-02-07)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.8.1...3.8.2)
**🚀 Enhancements**
- TVPaint: Image loaders also work on review family [\#2638](https://github.com/pypeclub/OpenPype/pull/2638)
- General: Project backup tools [\#2629](https://github.com/pypeclub/OpenPype/pull/2629)
- nuke: adding clear button to write nodes [\#2627](https://github.com/pypeclub/OpenPype/pull/2627)
- Ftrack: Family to Asset type mapping is in settings [\#2602](https://github.com/pypeclub/OpenPype/pull/2602)
- Nuke: load color space from representation data [\#2576](https://github.com/pypeclub/OpenPype/pull/2576)
**🐛 Bug fixes**
- Fix pulling of cx\_freeze 6.10 [\#2628](https://github.com/pypeclub/OpenPype/pull/2628)
### 📖 Documentation
- Cosmetics: Fix common typos in openpype/website [\#2617](https://github.com/pypeclub/OpenPype/pull/2617)
**Merged pull requests:**
- Docker: enhance dockerfiles with metadata, fix pyenv initialization [\#2647](https://github.com/pypeclub/OpenPype/pull/2647)
- WebPublisher: fix instance duplicates [\#2641](https://github.com/pypeclub/OpenPype/pull/2641)
- Fix - safer pulling of task name for webpublishing from PS [\#2613](https://github.com/pypeclub/OpenPype/pull/2613)
- Webpublisher: Skip version collect [\#2591](https://github.com/pypeclub/OpenPype/pull/2591)
## [3.8.1](https://github.com/pypeclub/OpenPype/tree/3.8.1) (2022-02-01)
-[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.8.0...3.8.1)
+[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.8.1-nightly.3...3.8.1)
**🚀 Enhancements**
- Webpublisher: Thumbnail extractor [\#2600](https://github.com/pypeclub/OpenPype/pull/2600)
- Webpublisher: Added endpoint to reprocess batch through UI [\#2555](https://github.com/pypeclub/OpenPype/pull/2555)
- Loader: Allow to toggle default family filters between "include" or "exclude" filtering [\#2541](https://github.com/pypeclub/OpenPype/pull/2541)
- Launcher: Added context menu to skip opening last workfile [\#2536](https://github.com/pypeclub/OpenPype/pull/2536)
**🐛 Bug fixes**
@@ -15,18 +44,16 @@
- hotfix: OIIO tool path - add extension on windows [\#2618](https://github.com/pypeclub/OpenPype/pull/2618)
- Settings: Enum does not store empty string if has single item to select [\#2615](https://github.com/pypeclub/OpenPype/pull/2615)
- switch distutils to sysconfig for `get\_platform\(\)` [\#2594](https://github.com/pypeclub/OpenPype/pull/2594)
- Global: fix broken otio review extractor [\#2590](https://github.com/pypeclub/OpenPype/pull/2590)
- Fix poetry index and speedcopy update [\#2589](https://github.com/pypeclub/OpenPype/pull/2589)
- Webpublisher: Fix - subset names from processed .psd used wrong value for task [\#2586](https://github.com/pypeclub/OpenPype/pull/2586)
- `vrscene` creator Deadline webservice URL handling [\#2580](https://github.com/pypeclub/OpenPype/pull/2580)
- global: track name was failing if duplicated root word in name [\#2568](https://github.com/pypeclub/OpenPype/pull/2568)
- General: Do not validate version if build does not support it [\#2557](https://github.com/pypeclub/OpenPype/pull/2557)
- Validate Maya Rig produces no cycle errors [\#2484](https://github.com/pypeclub/OpenPype/pull/2484)
**Merged pull requests:**
- Bump pillow from 8.4.0 to 9.0.0 [\#2595](https://github.com/pypeclub/OpenPype/pull/2595)
- Webpublisher: Skip version collect [\#2591](https://github.com/pypeclub/OpenPype/pull/2591)
- build\(deps\): bump follow-redirects from 1.14.4 to 1.14.7 in /website [\#2534](https://github.com/pypeclub/OpenPype/pull/2534)
- build\(deps\): bump pillow from 8.4.0 to 9.0.0 [\#2523](https://github.com/pypeclub/OpenPype/pull/2523)
## [3.8.0](https://github.com/pypeclub/OpenPype/tree/3.8.0) (2022-01-24)
@@ -45,11 +72,9 @@
- Webpublisher: Moved error at the beginning of the log [\#2559](https://github.com/pypeclub/OpenPype/pull/2559)
- Ftrack: Use ApplicationManager to get DJV path [\#2558](https://github.com/pypeclub/OpenPype/pull/2558)
- Webpublisher: Added endpoint to reprocess batch through UI [\#2555](https://github.com/pypeclub/OpenPype/pull/2555)
- Settings: PathInput strip passed string [\#2550](https://github.com/pypeclub/OpenPype/pull/2550)
- Global: Extract Review anatomy fill data with output name [\#2548](https://github.com/pypeclub/OpenPype/pull/2548)
- Cosmetics: Clean up some cosmetics / typos [\#2542](https://github.com/pypeclub/OpenPype/pull/2542)
- Launcher: Added context menu to skip opening last workfile [\#2536](https://github.com/pypeclub/OpenPype/pull/2536)
- General: Validate if current process OpenPype version is requested version [\#2529](https://github.com/pypeclub/OpenPype/pull/2529)
- General: Be able to use anatomy data in ffmpeg output arguments [\#2525](https://github.com/pypeclub/OpenPype/pull/2525)
- Expose toggle publish plug-in settings for Maya Look Shading Engine Naming [\#2521](https://github.com/pypeclub/OpenPype/pull/2521)
@@ -64,6 +89,7 @@
- General: OpenPype version updates [\#2575](https://github.com/pypeclub/OpenPype/pull/2575)
- Ftrack: Delete action revision [\#2563](https://github.com/pypeclub/OpenPype/pull/2563)
- Webpublisher: ftrack shows incorrect user names [\#2560](https://github.com/pypeclub/OpenPype/pull/2560)
- General: Do not validate version if build does not support it [\#2557](https://github.com/pypeclub/OpenPype/pull/2557)
- Webpublisher: Fixed progress reporting [\#2553](https://github.com/pypeclub/OpenPype/pull/2553)
- Fix Maya AssProxyLoader version switch [\#2551](https://github.com/pypeclub/OpenPype/pull/2551)
- General: Fix install thread in igniter [\#2549](https://github.com/pypeclub/OpenPype/pull/2549)
@@ -76,7 +102,6 @@
- Fix published frame content for sequence starting with 0 [\#2513](https://github.com/pypeclub/OpenPype/pull/2513)
- Maya: reset empty string attributes correctly to "" instead of "None" [\#2506](https://github.com/pypeclub/OpenPype/pull/2506)
- Improve FusionPreLaunch hook errors [\#2505](https://github.com/pypeclub/OpenPype/pull/2505)
- General: Modules import function output fix [\#2492](https://github.com/pypeclub/OpenPype/pull/2492)
### 📖 Documentation
@@ -87,20 +112,13 @@
- AfterEffects: Move implementation to OpenPype [\#2543](https://github.com/pypeclub/OpenPype/pull/2543)
- Maya: Remove Maya Look Assigner check on startup [\#2540](https://github.com/pypeclub/OpenPype/pull/2540)
- build\(deps\): bump shelljs from 0.8.4 to 0.8.5 in /website [\#2538](https://github.com/pypeclub/OpenPype/pull/2538)
- build\(deps\): bump follow-redirects from 1.14.4 to 1.14.7 in /website [\#2534](https://github.com/pypeclub/OpenPype/pull/2534)
- Nuke: Merge avalon's implementation into OpenPype [\#2514](https://github.com/pypeclub/OpenPype/pull/2514)
## [3.7.0](https://github.com/pypeclub/OpenPype/tree/3.7.0) (2022-01-04)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.7.0-nightly.14...3.7.0)
**🚀 Enhancements**
- General: Workdir extra folders [\#2462](https://github.com/pypeclub/OpenPype/pull/2462)
**🐛 Bug fixes**
- TVPaint: Create render layer dialog is in front [\#2471](https://github.com/pypeclub/OpenPype/pull/2471)
## [3.6.4](https://github.com/pypeclub/OpenPype/tree/3.6.4) (2021-11-23)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.7.0-nightly.1...3.6.4)


@@ -1,13 +1,18 @@
# Build Pype docker image
-FROM debian:bookworm-slim AS builder
+FROM ubuntu:focal AS builder
ARG OPENPYPE_PYTHON_VERSION=3.7.12
ARG BUILD_DATE
ARG VERSION
LABEL maintainer="info@openpype.io"
-LABEL description="Docker Image to build and run OpenPype"
+LABEL description="Docker Image to build and run OpenPype under Ubuntu 20.04"
LABEL org.opencontainers.image.name="pypeclub/openpype"
LABEL org.opencontainers.image.title="OpenPype Docker Image"
LABEL org.opencontainers.image.url="https://openpype.io/"
-LABEL org.opencontainers.image.source="https://github.com/pypeclub/pype"
+LABEL org.opencontainers.image.source="https://github.com/pypeclub/OpenPype"
LABEL org.opencontainers.image.documentation="https://openpype.io/docs/system_introduction"
LABEL org.opencontainers.image.created=$BUILD_DATE
LABEL org.opencontainers.image.version=$VERSION
USER root
@@ -42,14 +47,19 @@ RUN apt-get update \
SHELL ["/bin/bash", "-c"]
RUN mkdir /opt/openpype
# download and install pyenv
 RUN curl https://pyenv.run | bash \
-    && echo 'export PATH="$HOME/.pyenv/bin:$PATH"'>> $HOME/.bashrc \
-    && echo 'eval "$(pyenv init -)"' >> $HOME/.bashrc \
-    && echo 'eval "$(pyenv virtualenv-init -)"' >> $HOME/.bashrc \
-    && echo 'eval "$(pyenv init --path)"' >> $HOME/.bashrc \
-    && source $HOME/.bashrc && pyenv install ${OPENPYPE_PYTHON_VERSION}
+    && echo 'export PATH="$HOME/.pyenv/bin:$PATH"'>> $HOME/init_pyenv.sh \
+    && echo 'eval "$(pyenv init -)"' >> $HOME/init_pyenv.sh \
+    && echo 'eval "$(pyenv virtualenv-init -)"' >> $HOME/init_pyenv.sh \
+    && echo 'eval "$(pyenv init --path)"' >> $HOME/init_pyenv.sh
+
+# install python with pyenv
+RUN source $HOME/init_pyenv.sh \
+    && pyenv install ${OPENPYPE_PYTHON_VERSION}
COPY . /opt/openpype/
@@ -57,13 +67,16 @@ RUN chmod +x /opt/openpype/tools/create_env.sh && chmod +x /opt/openpype/tools/b
WORKDIR /opt/openpype
# set local python version
 RUN cd /opt/openpype \
-    && source $HOME/.bashrc \
+    && source $HOME/init_pyenv.sh \
     && pyenv local ${OPENPYPE_PYTHON_VERSION}

-RUN source $HOME/.bashrc \
+# fetch third party tools/libraries
+RUN source $HOME/init_pyenv.sh \
     && ./tools/create_env.sh \
     && ./tools/fetch_thirdparty_libs.sh

-RUN source $HOME/.bashrc \
+# build openpype
+RUN source $HOME/init_pyenv.sh \
     && bash ./tools/build.sh
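The move from `.bashrc` to a dedicated `init_pyenv.sh` works because every Dockerfile `RUN` step starts a fresh, non-interactive shell: environment changes made in one step are gone in the next, and `.bashrc` is not reliably read by non-interactive shells. Persisting the setup in a file and re-sourcing it at the start of each step reproduces the same environment every time. A minimal stand-alone sketch of the pattern (the file path is illustrative, not the one used in the image):

```shell
# Persist the environment setup once, as the Dockerfile does
# with $HOME/init_pyenv.sh.
cat > /tmp/init_pyenv_demo.sh <<'EOF'
export PATH="$HOME/.pyenv/bin:$PATH"
export PYENV_DEMO_READY=1
EOF

# Every later "RUN"-equivalent step re-sources the file before
# using the tools it configures.
source /tmp/init_pyenv_demo.sh
echo "ready=$PYENV_DEMO_READY"
```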


@@ -1,11 +1,15 @@
# Build Pype docker image
FROM centos:7 AS builder
-ARG OPENPYPE_PYTHON_VERSION=3.7.10
+ARG OPENPYPE_PYTHON_VERSION=3.7.12
LABEL org.opencontainers.image.name="pypeclub/openpype"
LABEL org.opencontainers.image.title="OpenPype Docker Image"
LABEL org.opencontainers.image.url="https://openpype.io/"
LABEL org.opencontainers.image.source="https://github.com/pypeclub/pype"
LABEL org.opencontainers.image.documentation="https://openpype.io/docs/system_introduction"
LABEL org.opencontainers.image.created=$BUILD_DATE
LABEL org.opencontainers.image.version=$VERSION
USER root


@@ -412,3 +412,23 @@ def repack_version(directory):
directory name.
"""
PypeCommands().repack_version(directory)
@main.command()
@click.option("--project", help="Project name")
@click.option(
"--dirpath", help="Directory where package is stored", default=None
)
def pack_project(project, dirpath):
"""Create a package of a project with all its files and a database dump."""
PypeCommands().pack_project(project, dirpath)
@main.command()
@click.option("--zipfile", help="Path to zip file")
@click.option(
"--root", help="Replace root which was stored in project", default=None
)
def unpack_project(zipfile, root):
"""Unpack a project package and restore its files and database dump."""
PypeCommands().unpack_project(zipfile, root)


@@ -118,6 +118,7 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
instance.anatomyData = context.data["anatomyData"]
instance.outputDir = self._get_output_dir(instance)
instance.context = context
settings = get_project_settings(os.getenv("AVALON_PROJECT"))
reviewable_subset_filter = \
@@ -142,7 +143,6 @@ class CollectAERender(abstract_collect_render.AbstractCollectRender):
break
self.log.info("New instance:: {}".format(instance))
instances.append(instance)
return instances


@@ -176,6 +176,7 @@ class CollectFarmRender(openpype.lib.abstract_collect_render.
ignoreFrameHandleCheck=True
)
render_instance.context = context
self.log.debug(render_instance)
instances.append(render_instance)


@@ -18,6 +18,7 @@ def add_implementation_envs(env, _app):
new_hiero_paths.append(norm_path)
env["HIERO_PLUGIN_PATH"] = os.pathsep.join(new_hiero_paths)
env.pop("QT_AUTO_SCREEN_SCALE_FACTOR", None)
# Try to add QuickTime to PATH
quick_time_path = "C:/Program Files (x86)/QuickTime/QTSystem"


@@ -1,174 +1,60 @@
-import os
-import sys
-import logging
-import contextlib
-
-import hou
-
-from pyblish import api as pyblish
-from avalon import api as avalon
-
-import openpype.hosts.houdini
-from openpype.hosts.houdini.api import lib
-from openpype.lib import (
-    any_outdated
-)
-from .lib import get_asset_fps
-
-log = logging.getLogger("openpype.hosts.houdini")
-
-HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.houdini.__file__))
-PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
-PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
-LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
-CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
-INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
-
-
-def install():
-    pyblish.register_plugin_path(PUBLISH_PATH)
-    avalon.register_plugin_path(avalon.Loader, LOAD_PATH)
-    avalon.register_plugin_path(avalon.Creator, CREATE_PATH)
-
-    log.info("Installing callbacks ... ")
-    # avalon.on("init", on_init)
-    avalon.before("save", before_save)
-    avalon.on("save", on_save)
-    avalon.on("open", on_open)
-    avalon.on("new", on_new)
-
-    pyblish.register_callback("instanceToggled", on_pyblish_instance_toggled)
-
-    log.info("Setting default family states for loader..")
-    avalon.data["familiesStateToggled"] = [
-        "imagesequence",
-        "review"
-    ]
-
-    # add houdini vendor packages
-    hou_pythonpath = os.path.join(os.path.dirname(HOST_DIR), "vendor")
-    sys.path.append(hou_pythonpath)
-
-    # Set asset FPS for the empty scene directly after launch of Houdini
-    # so it initializes into the correct scene FPS
-    _set_asset_fps()
-
-
-def before_save(*args):
-    return lib.validate_fps()
-
-
-def on_save(*args):
-    avalon.logger.info("Running callback on save..")
-    nodes = lib.get_id_required_nodes()
-    for node, new_id in lib.generate_ids(nodes):
-        lib.set_id(node, new_id, overwrite=False)
-
-
-def on_open(*args):
-    if not hou.isUIAvailable():
-        log.debug("Batch mode detected, ignoring `on_open` callbacks..")
-        return
-
-    avalon.logger.info("Running callback on open..")
-
-    # Validate FPS after update_task_from_path to
-    # ensure it is using correct FPS for the asset
-    lib.validate_fps()
-
-    if any_outdated():
-        from openpype.widgets import popup
-
-        log.warning("Scene has outdated content.")
-
-        # Get main window
-        parent = hou.ui.mainQtWindow()
-        if parent is None:
-            log.info("Skipping outdated content pop-up "
-                     "because Houdini window can't be found.")
-        else:
-            # Show outdated pop-up
-            def _on_show_inventory():
-                import avalon.tools.sceneinventory as tool
-                tool.show(parent=parent)
-
-            dialog = popup.Popup(parent=parent)
-            dialog.setWindowTitle("Houdini scene has outdated content")
-            dialog.setMessage("There are outdated containers in "
-                              "your Houdini scene.")
-            dialog.on_clicked.connect(_on_show_inventory)
-            dialog.show()
-
-
-def on_new(_):
-    """Set project resolution and fps when create a new file"""
-    avalon.logger.info("Running callback on new..")
-    _set_asset_fps()
-
-
-def _set_asset_fps():
-    """Set Houdini scene FPS to the default required for current asset"""
-    # Set new scene fps
-    fps = get_asset_fps()
-    print("Setting scene FPS to %i" % fps)
-    lib.set_scene_fps(fps)
-
-
-def on_pyblish_instance_toggled(instance, new_value, old_value):
-    """Toggle saver tool passthrough states on instance toggles."""
-
-    @contextlib.contextmanager
-    def main_take(no_update=True):
-        """Enter root take during context"""
-        original_take = hou.takes.currentTake()
-        original_update_mode = hou.updateModeSetting()
-        root = hou.takes.rootTake()
-        has_changed = False
-        try:
-            if original_take != root:
-                has_changed = True
-                if no_update:
-                    hou.setUpdateMode(hou.updateMode.Manual)
-                hou.takes.setCurrentTake(root)
-            yield
-        finally:
-            if has_changed:
-                if no_update:
-                    hou.setUpdateMode(original_update_mode)
-                hou.takes.setCurrentTake(original_take)
-
-    if not instance.data.get("_allowToggleBypass", True):
-        return
-
-    nodes = instance[:]
-    if not nodes:
-        return
-
-    # Assume instance node is first node
-    instance_node = nodes[0]
-
-    if not hasattr(instance_node, "isBypassed"):
-        # Likely not a node that can actually be bypassed
-        log.debug("Can't bypass node: %s", instance_node.path())
-        return
-
-    if instance_node.isBypassed() != (not old_value):
-        print("%s old bypass state didn't match old instance state, "
-              "updating anyway.." % instance_node.path())
-
-    try:
-        # Go into the main take, because when in another take changing
-        # the bypass state of a note cannot be done due to it being locked
-        # by default.
-        with main_take(no_update=True):
-            instance_node.bypass(not new_value)
-    except hou.PermissionError as exc:
-        log.warning("%s - %s", instance_node.path(), exc)
+from .pipeline import (
+    install,
+    uninstall,
+    ls,
+    containerise,
+)
+
+from .plugin import (
+    Creator,
+)
+
+from .workio import (
+    open_file,
+    save_file,
+    current_file,
+    has_unsaved_changes,
+    file_extensions,
+    work_root
+)
+
+from .lib import (
+    lsattr,
+    lsattrs,
+    read,
+    maintained_selection,
+    unique_name
+)
+
+
+__all__ = [
+    "install",
+    "uninstall",
+    "ls",
+    "containerise",
+
+    "Creator",
+
+    # Workfiles API
+    "open_file",
+    "save_file",
+    "current_file",
+    "has_unsaved_changes",
+    "file_extensions",
+    "work_root",
+
+    # Utility functions
+    "lsattr",
+    "lsattrs",
+    "read",
+    "maintained_selection",
+    "unique_name"
+]
+
+
+# Backwards API compatibility
+open = open_file
+save = save_file


@@ -2,9 +2,11 @@ import uuid
import logging
from contextlib import contextmanager
-from openpype.api import get_asset
+import six
+
 from avalon import api, io
 from avalon.houdini import lib as houdini
+
+from openpype.api import get_asset
import hou
@@ -15,11 +17,11 @@ def get_asset_fps():
"""Return current asset fps."""
return get_asset()["data"].get("fps")
 def set_id(node, unique_id, overwrite=False):
     exists = node.parm("id")
     if not exists:
-        houdini.imprint(node, {"id": unique_id})
+        imprint(node, {"id": unique_id})
     if not exists and overwrite:
         node.setParm("id", unique_id)
@@ -342,3 +344,183 @@ def render_rop(ropnode):
import traceback
traceback.print_exc()
raise RuntimeError("Render failed: {0}".format(exc))
def children_as_string(node):
return [c.name() for c in node.children()]
def imprint(node, data):
"""Store attributes with value on a node
Depending on the type of attribute it creates the correct parameter
template. Houdini uses a template per type, see the docs for more
information.
http://www.sidefx.com/docs/houdini/hom/hou/ParmTemplate.html
Args:
node(hou.Node): node object from Houdini
data(dict): collection of attributes and their value
Returns:
None
"""
parm_group = node.parmTemplateGroup()
parm_folder = hou.FolderParmTemplate("folder", "Extra")
for key, value in data.items():
if value is None:
continue
if isinstance(value, float):
parm = hou.FloatParmTemplate(name=key,
label=key,
num_components=1,
default_value=(value,))
elif isinstance(value, bool):
parm = hou.ToggleParmTemplate(name=key,
label=key,
default_value=value)
elif isinstance(value, int):
parm = hou.IntParmTemplate(name=key,
label=key,
num_components=1,
default_value=(value,))
elif isinstance(value, six.string_types):
parm = hou.StringParmTemplate(name=key,
label=key,
num_components=1,
default_value=(value,))
else:
raise TypeError("Unsupported type: %r" % type(value))
parm_folder.addParmTemplate(parm)
parm_group.append(parm_folder)
node.setParmTemplateGroup(parm_group)
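The branch order in `imprint` matters: `bool` is a subclass of `int` in Python, so the `bool` check has to run before the `int` check or toggles would be written as integer parms. That ordering can be exercised without Houdini; the following is a hypothetical mirror of the dispatch (plain `str` stands in for `six.string_types`), not OpenPype code:

```python
def template_kind(value):
    # Same branch order as imprint(): float, bool, int, string.
    # bool must be tested before int because isinstance(True, int)
    # is True, so swapping those branches would misclassify toggles.
    if isinstance(value, float):
        return "Float"
    if isinstance(value, bool):
        return "Toggle"
    if isinstance(value, int):
        return "Int"
    if isinstance(value, str):
        return "String"
    raise TypeError("Unsupported type: %r" % type(value))
```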
def lsattr(attr, value=None):
if value is None:
nodes = list(hou.node("/obj").allNodes())
return [n for n in nodes if n.parm(attr)]
return lsattrs({attr: value})
def lsattrs(attrs):
"""Return nodes matching `key` and `value`
Arguments:
attrs (dict): collection of attribute: value
Example:
>>> lsattrs({"id": "myId"})
["myNode"]
>>> lsattr("id")
["myNode", "myOtherNode"]
Returns:
list
"""
matches = set()
nodes = list(hou.node("/obj").allNodes())  # allNodes() returns a generator
for node in nodes:
for attr in attrs:
if not node.parm(attr):
continue
elif node.evalParm(attr) != attrs[attr]:
continue
else:
matches.add(node)
return list(matches)
def read(node):
"""Read the container data in to a dict
Args:
node(hou.Node): Houdini node
Returns:
dict
"""
# `spareParms` returns a tuple of hou.Parm objects
return {parameter.name(): parameter.eval() for
parameter in node.spareParms()}
def unique_name(name, format="%03d", namespace="", prefix="", suffix="",
separator="_"):
"""Return unique `name`
The function takes into consideration an optional `namespace`
and `suffix`. The suffix is included in evaluating whether a
name exists - such as `name` + "_GRP" - but isn't included
in the returned value.
If a namespace is provided, only names within that namespace
are considered when evaluating whether the name is unique.
Arguments:
format (str, optional): The `name` is given a number, this determines
how this number is formatted. Defaults to a padding of 3.
E.g. my_name_001, my_name_002.
namespace (str, optional): Only consider names within this namespace.
suffix (str, optional): Only consider names with this suffix.
Example:
>>> node = hou.node("/obj").createNode("geo", name="MyName")
>>> assert hou.node("/obj/MyName") is not None
>>> unique = unique_name("MyName")
>>> assert hou.node("/obj/{}".format(unique)) is None
"""
iteration = 1
parts = [prefix, name, format % iteration, suffix]
if namespace:
parts.insert(0, namespace)
unique = separator.join(parts)
children = children_as_string(hou.node("/obj"))
while unique in children:
    iteration += 1
    parts = [prefix, name, format % iteration, suffix]
    if namespace:
        parts.insert(0, namespace)
    unique = separator.join(parts)
if suffix:
return unique[:-len(suffix)]
return unique
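The numbering logic of `unique_name` can be checked against any list of sibling names: the candidate string must be rebuilt from the incremented counter on every pass, otherwise the loop compares the same string forever. A hypothetical Houdini-free reduction (prefix, suffix, and namespace omitted for brevity):

```python
def next_free_name(name, existing, fmt="%03d", separator="_"):
    # Increment the counter until the candidate is not taken,
    # rebuilding the candidate each pass: the counter, not the
    # string, is what changes between iterations.
    iteration = 1
    candidate = separator.join([name, fmt % iteration])
    while candidate in existing:
        iteration += 1
        candidate = separator.join([name, fmt % iteration])
    return candidate
```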
@contextmanager
def maintained_selection():
"""Maintain selection during context
Example:
>>> with maintained_selection():
... # Modify selection
... node.setSelected(on=False, clear_all_selected=True)
>>> # Selection restored
"""
previous_selection = hou.selectedNodes()
try:
yield
finally:
# Clear the selection
# todo: does hou.clearAllSelected() do the same?
for node in hou.selectedNodes():
node.setSelected(on=False)
if previous_selection:
for node in previous_selection:
node.setSelected(on=True)


@@ -0,0 +1,349 @@
import os
import sys
import logging
import contextlib
import hou
import pyblish.api
import avalon.api
from avalon.pipeline import AVALON_CONTAINER_ID
from avalon.lib import find_submodule
import openpype.hosts.houdini
from openpype.hosts.houdini.api import lib
from openpype.lib import (
any_outdated
)
from .lib import get_asset_fps
log = logging.getLogger("openpype.hosts.houdini")
AVALON_CONTAINERS = "/obj/AVALON_CONTAINERS"
IS_HEADLESS = not hasattr(hou, "ui")
HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.houdini.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
self = sys.modules[__name__]
self._has_been_setup = False
self._parent = None
self._events = dict()
def install():
_register_callbacks()
pyblish.api.register_host("houdini")
pyblish.api.register_host("hython")
pyblish.api.register_host("hpython")
pyblish.api.register_plugin_path(PUBLISH_PATH)
avalon.api.register_plugin_path(avalon.api.Loader, LOAD_PATH)
avalon.api.register_plugin_path(avalon.api.Creator, CREATE_PATH)
log.info("Installing callbacks ... ")
# avalon.on("init", on_init)
avalon.api.before("save", before_save)
avalon.api.on("save", on_save)
avalon.api.on("open", on_open)
avalon.api.on("new", on_new)
pyblish.api.register_callback(
"instanceToggled", on_pyblish_instance_toggled
)
log.info("Setting default family states for loader..")
avalon.api.data["familiesStateToggled"] = [
"imagesequence",
"review"
]
self._has_been_setup = True
# add houdini vendor packages
hou_pythonpath = os.path.join(os.path.dirname(HOST_DIR), "vendor")
sys.path.append(hou_pythonpath)
# Set asset FPS for the empty scene directly after launch of Houdini
# so it initializes into the correct scene FPS
_set_asset_fps()
def uninstall():
"""Uninstall Houdini-specific functionality of avalon-core.
This function is called automatically on calling `api.uninstall()`.
"""
pyblish.api.deregister_host("hython")
pyblish.api.deregister_host("hpython")
pyblish.api.deregister_host("houdini")
def _register_callbacks():
for event in self._events.copy().values():
if event is None:
continue
try:
hou.hipFile.removeEventCallback(event)
except RuntimeError as e:
log.info(e)
self._events[on_file_event_callback] = hou.hipFile.addEventCallback(
on_file_event_callback
)
def on_file_event_callback(event):
if event == hou.hipFileEventType.AfterLoad:
avalon.api.emit("open", [event])
elif event == hou.hipFileEventType.AfterSave:
avalon.api.emit("save", [event])
elif event == hou.hipFileEventType.BeforeSave:
avalon.api.emit("before_save", [event])
elif event == hou.hipFileEventType.AfterClear:
avalon.api.emit("new", [event])
def get_main_window():
"""Acquire Houdini's main window"""
if self._parent is None:
self._parent = hou.ui.mainQtWindow()
return self._parent
def teardown():
"""Remove integration"""
if not self._has_been_setup:
return
self._has_been_setup = False
print("pyblish: Integration torn down successfully")
def containerise(name,
namespace,
nodes,
context,
loader=None,
suffix=""):
"""Bundle `nodes` into a subnet and imprint it with metadata
Containerisation enables a tracking of version, author and origin
for loaded assets.
Arguments:
name (str): Name of resulting assembly
namespace (str): Namespace under which to host container
nodes (list): Long names of nodes to containerise
context (dict): Asset information
loader (str, optional): Name of loader used to produce this container.
suffix (str, optional): Suffix of container, defaults to `_CON`.
Returns:
container (str): Name of container assembly
"""
# Ensure AVALON_CONTAINERS subnet exists
subnet = hou.node(AVALON_CONTAINERS)
if subnet is None:
obj_network = hou.node("/obj")
subnet = obj_network.createNode("subnet",
node_name="AVALON_CONTAINERS")
# Create proper container name
container_name = "{}_{}".format(name, suffix or "CON")
container = hou.node("/obj/{}".format(name))
container.setName(container_name, unique_name=True)
data = {
"schema": "openpype:container-2.0",
"id": AVALON_CONTAINER_ID,
"name": name,
"namespace": namespace,
"loader": str(loader),
"representation": str(context["representation"]["_id"]),
}
lib.imprint(container, data)
# "Parent" the container under the container network
hou.moveNodesTo([container], subnet)
subnet.node(container_name).moveToGoodPosition()
return container
def parse_container(container):
"""Return the container node's full container data.
Args:
container (hou.Node): A container node name.
Returns:
dict: The container schema data for this container node.
"""
data = lib.read(container)
# Backwards compatibility pre-schemas for containers
data["schema"] = data.get("schema", "openpype:container-1.0")
# Append transient data
data["objectName"] = container.path()
data["node"] = container
return data
def ls():
containers = []
for identifier in (AVALON_CONTAINER_ID,
"pyblish.mindbender.container"):
containers += lib.lsattr("id", identifier)
has_metadata_collector = False
config_host = find_submodule(avalon.api.registered_config(), "houdini")
if hasattr(config_host, "collect_container_metadata"):
has_metadata_collector = True
for container in sorted(containers,
# Hou 19+ Python 3 hou.ObjNode are not
# sortable due to not supporting greater
# than comparisons
key=lambda node: node.path()):
data = parse_container(container)
# Collect custom data if attribute is present
if has_metadata_collector:
metadata = config_host.collect_container_metadata(container)
data.update(metadata)
yield data
def before_save(*args):
return lib.validate_fps()
def on_save(*args):
log.info("Running callback on save..")
nodes = lib.get_id_required_nodes()
for node, new_id in lib.generate_ids(nodes):
lib.set_id(node, new_id, overwrite=False)
def on_open(*args):
if not hou.isUIAvailable():
log.debug("Batch mode detected, ignoring `on_open` callbacks..")
return
log.info("Running callback on open..")
# Validate FPS after update_task_from_path to
# ensure it is using correct FPS for the asset
lib.validate_fps()
if any_outdated():
from openpype.widgets import popup
log.warning("Scene has outdated content.")
# Get main window
parent = get_main_window()
if parent is None:
log.info("Skipping outdated content pop-up "
"because Houdini window can't be found.")
else:
# Show outdated pop-up
def _on_show_inventory():
from openpype.tools.utils import host_tools
host_tools.show_scene_inventory(parent=parent)
dialog = popup.Popup(parent=parent)
dialog.setWindowTitle("Houdini scene has outdated content")
dialog.setMessage("There are outdated containers in "
"your Houdini scene.")
dialog.on_clicked.connect(_on_show_inventory)
dialog.show()
def on_new(_):
"""Set project resolution and fps when creating a new file"""
log.info("Running callback on new..")
_set_asset_fps()
def _set_asset_fps():
"""Set Houdini scene FPS to the default required for current asset"""
# Set new scene fps
fps = get_asset_fps()
print("Setting scene FPS to %i" % fps)
lib.set_scene_fps(fps)
def on_pyblish_instance_toggled(instance, new_value, old_value):
"""Toggle saver tool passthrough states on instance toggles."""
@contextlib.contextmanager
def main_take(no_update=True):
"""Enter root take during context"""
original_take = hou.takes.currentTake()
original_update_mode = hou.updateModeSetting()
root = hou.takes.rootTake()
has_changed = False
try:
if original_take != root:
has_changed = True
if no_update:
hou.setUpdateMode(hou.updateMode.Manual)
hou.takes.setCurrentTake(root)
yield
finally:
if has_changed:
if no_update:
hou.setUpdateMode(original_update_mode)
hou.takes.setCurrentTake(original_take)
if not instance.data.get("_allowToggleBypass", True):
return
nodes = instance[:]
if not nodes:
return
# Assume instance node is first node
instance_node = nodes[0]
if not hasattr(instance_node, "isBypassed"):
# Likely not a node that can actually be bypassed
log.debug("Can't bypass node: %s", instance_node.path())
return
if instance_node.isBypassed() != (not old_value):
print("%s old bypass state didn't match old instance state, "
"updating anyway.." % instance_node.path())
try:
# Go into the main take, because when in another take changing
# the bypass state of a node cannot be done due to it being locked
# by default.
with main_take(no_update=True):
instance_node.bypass(not new_value)
except hou.PermissionError as exc:
log.warning("%s - %s", instance_node.path(), exc)
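`main_take` follows the standard save/restore context-manager shape: capture the current state, switch only if needed, yield, and restore in `finally` so the original state survives even when the body raises. A generic, Houdini-free sketch of the same shape (names are illustrative, not OpenPype API):

```python
from contextlib import contextmanager


@contextmanager
def temporarily_set(mapping, key, value):
    # Capture the original value, switch only when it differs, and
    # always restore in finally -- even if the body raises.
    original = mapping[key]
    changed = original != value
    try:
        if changed:
            mapping[key] = value
        yield
    finally:
        if changed:
            mapping[key] = original
```

As in `main_take`, restoring only when a change was actually made avoids redundant state churn (and, in Houdini's case, needless update-mode flips).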


@@ -1,25 +1,82 @@
# -*- coding: utf-8 -*-
"""Houdini specific Avalon/Pyblish plugin definitions."""
import sys
from avalon.api import CreatorError
from avalon import houdini
import six
import avalon.api
from avalon.api import CreatorError
import hou
from openpype.api import PypeCreatorMixin
from .lib import imprint
class OpenPypeCreatorError(CreatorError):
pass
-class Creator(PypeCreatorMixin, houdini.Creator):
+class Creator(PypeCreatorMixin, avalon.api.Creator):
"""Creator plugin to create instances in Houdini
To support the wide range of node types for render output (Alembic, VDB,
Mantra) the Creator needs a node type to create the correct instance
By default, if none is given, `geometry` is used. Examples of accepted node
types: geometry, alembic, ifd (mantra)
Please check the Houdini documentation for more node types.
Tip: to find the exact node type to create press the `i` left of the node
when hovering over a node. The information is visible under the name of
the node.
"""
def __init__(self, *args, **kwargs):
super(Creator, self).__init__(*args, **kwargs)
self.nodes = list()
def process(self):
"""This is the base functionality to create instances in Houdini
The selected nodes are stored in self to be used in an override method.
This is currently necessary in order to support the multiple output
types in Houdini which can only be rendered through their own node.
Default node type if none is given is `geometry`
It also makes it easier to apply custom settings per instance type
Example of override method for Alembic:
def process(self):
instance = super(CreateEpicNode, self).process()
# Set parameters for Alembic node
instance.setParms(
{"sop_path": "$HIP/%s.abc" % self.nodes[0]}
)
Returns:
hou.Node
"""
try:
# re-raise as standard Python exception so
# Avalon can catch it
instance = super(Creator, self).process()
if (self.options or {}).get("useSelection"):
self.nodes = hou.selectedNodes()
# Get the node type and remove it from the data, not needed
node_type = self.data.pop("node_type", None)
if node_type is None:
node_type = "geometry"
# Get out node
out = hou.node("/out")
instance = out.createNode(node_type, node_name=self.name)
instance.moveToGoodPosition()
imprint(instance, self.data)
self._process(instance)
except hou.Error as er:
six.reraise(
OpenPypeCreatorError,

View file

@@ -0,0 +1,58 @@
"""Host API required Work Files tool"""
import os
import hou
from avalon import api
def file_extensions():
return api.HOST_WORKFILE_EXTENSIONS["houdini"]
def has_unsaved_changes():
return hou.hipFile.hasUnsavedChanges()
def save_file(filepath):
# Force forward slashes to avoid segfault
filepath = filepath.replace("\\", "/")
hou.hipFile.save(file_name=filepath,
save_to_recent_files=True)
return filepath
def open_file(filepath):
# Force forward slashes to avoid segfault
filepath = filepath.replace("\\", "/")
hou.hipFile.load(filepath,
suppress_save_prompt=True,
ignore_load_warnings=False)
return filepath
def current_file():
current_filepath = hou.hipFile.path()
if (os.path.basename(current_filepath) == "untitled.hip" and
not os.path.exists(current_filepath)):
# By default a new scene in Houdini is saved in the current
# working directory as "untitled.hip" so we need to capture
# that and consider it 'not saved' when it's in that state.
return None
return current_filepath
def work_root(session):
work_dir = session["AVALON_WORKDIR"]
scene_dir = session.get("AVALON_SCENEDIR")
if scene_dir:
return os.path.join(work_dir, scene_dir)
else:
return work_dir
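Because `work_root` only joins paths from the session mapping, its branch logic can be exercised outside Houdini. A minimal standalone sketch, with fabricated session values purely for illustration:

```python
import os

def work_root(session):
    """Return the work root, optionally extended by a scene subdirectory."""
    work_dir = session["AVALON_WORKDIR"]
    scene_dir = session.get("AVALON_SCENEDIR")
    if scene_dir:
        return os.path.join(work_dir, scene_dir)
    return work_dir

# Hypothetical session values for illustration only
print(work_root({"AVALON_WORKDIR": "/proj/shot01/work"}))
print(work_root({"AVALON_WORKDIR": "/proj/shot01/work",
                 "AVALON_SCENEDIR": "scenes"}))
```

When `AVALON_SCENEDIR` is unset, the work directory is returned unchanged; otherwise the scene subdirectory is appended with the platform's separator.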

View file

@@ -1,8 +1,8 @@
# -*- coding: utf-8 -*-
from openpype.hosts.houdini.api import plugin
from avalon.houdini import lib
from avalon import io
import hou
from avalon import io
from openpype.hosts.houdini.api import lib
from openpype.hosts.houdini.api import plugin
class CreateHDA(plugin.Creator):

View file

@@ -1,6 +1,7 @@
import os
from avalon import api
from avalon.houdini import pipeline
from openpype.hosts.houdini.api import pipeline
class AbcLoader(api.Loader):
@@ -14,8 +15,6 @@ class AbcLoader(api.Loader):
color = "orange"
def load(self, context, name=None, namespace=None, data=None):
import os
import hou
# Format file name, Houdini only wants forward slashes

View file

@@ -1,5 +1,5 @@
from avalon import api
from avalon.houdini import pipeline
from openpype.hosts.houdini.api import pipeline
ARCHIVE_EXPRESSION = ('__import__("_alembic_hom_extensions")'

View file

@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
from avalon import api
from avalon.houdini import pipeline
from openpype.hosts.houdini.api import pipeline
class HdaLoader(api.Loader):

View file

@@ -1,7 +1,7 @@
import os
from avalon import api
from avalon.houdini import pipeline, lib
from openpype.hosts.houdini.api import lib, pipeline
import hou

View file

@@ -1,5 +1,5 @@
from avalon import api
from avalon.houdini import pipeline, lib
from openpype.hosts.houdini.api import lib, pipeline
class USDSublayerLoader(api.Loader):

View file

@@ -1,5 +1,5 @@
from avalon import api
from avalon.houdini import pipeline, lib
from openpype.hosts.houdini.api import lib, pipeline
class USDReferenceLoader(api.Loader):

View file

@@ -2,7 +2,7 @@ import os
import re
from avalon import api
from avalon.houdini import pipeline
from openpype.hosts.houdini.api import pipeline
class VdbLoader(api.Loader):

View file

@@ -2,7 +2,7 @@ import hou
import pyblish.api
from avalon.houdini import lib
from openpype.hosts.houdini.api import lib
class CollectInstances(pyblish.api.ContextPlugin):

View file

@@ -1,6 +1,6 @@
import hou
import pyblish.api
from avalon.houdini import lib
from openpype.hosts.houdini.api import lib
import openpype.hosts.houdini.api.usd as hou_usdlib
import openpype.lib.usdlib as usdlib

View file

@@ -2,7 +2,7 @@ import pyblish.api
import openpype.api
import hou
from avalon.houdini import lib
from openpype.hosts.houdini.api import lib
class CollectRemotePublishSettings(pyblish.api.ContextPlugin):

View file

@@ -56,18 +56,6 @@ host_tools.show_workfiles(parent)
]]></scriptCode>
</scriptItem>
<separatorItem/>
<subMenu id="avalon_reload_pipeline">
<label>System</label>
<scriptItem>
<label>Reload Pipeline (unstable)</label>
<scriptCode><![CDATA[
from avalon.houdini import pipeline
pipeline.reload_pipeline()]]></scriptCode>
</scriptItem>
</subMenu>
<separatorItem/>
<scriptItem id="experimental_tools">
<label>Experimental tools...</label>

View file

@@ -1,9 +1,10 @@
from avalon import api, houdini
import avalon.api
from openpype.hosts.houdini import api
def main():
print("Installing OpenPype ...")
api.install(houdini)
avalon.api.install(api)
main()

View file

@@ -1,9 +1,10 @@
from avalon import api, houdini
import avalon.api
from openpype.hosts.houdini import api
def main():
print("Installing OpenPype ...")
api.install(houdini)
avalon.api.install(api)
main()

View file

@@ -5,9 +5,7 @@ def add_implementation_envs(env, _app):
# Add requirements to PYTHONPATH
pype_root = os.environ["OPENPYPE_REPOS_ROOT"]
new_python_paths = [
os.path.join(pype_root, "openpype", "hosts", "maya", "startup"),
os.path.join(pype_root, "repos", "avalon-core", "setup", "maya"),
os.path.join(pype_root, "tools", "mayalookassigner")
os.path.join(pype_root, "openpype", "hosts", "maya", "startup")
]
old_python_path = env.get("PYTHONPATH") or ""
for path in old_python_path.split(os.pathsep):

View file

@@ -1,233 +1,91 @@
import os
import logging
import weakref
"""Public API
from maya import utils, cmds
Anything that isn't defined here is INTERNAL and unreliable for external use.
from avalon import api as avalon
from avalon import pipeline
from avalon.maya import suspended_refresh
from avalon.maya.pipeline import IS_HEADLESS
from openpype.tools.utils import host_tools
from pyblish import api as pyblish
from openpype.lib import any_outdated
import openpype.hosts.maya
from openpype.hosts.maya.lib import copy_workspace_mel
from openpype.lib.path_tools import HostDirmap
from . import menu, lib
"""
log = logging.getLogger("openpype.hosts.maya")
from .pipeline import (
install,
uninstall,
HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.maya.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
ls,
containerise,
lock,
unlock,
is_locked,
lock_ignored,
)
from .plugin import (
Creator,
Loader
)
from .workio import (
open_file,
save_file,
current_file,
has_unsaved_changes,
file_extensions,
work_root
)
from .lib import (
export_alembic,
lsattr,
lsattrs,
read,
apply_shaders,
without_extension,
maintained_selection,
suspended_refresh,
unique_name,
unique_namespace,
)
def install():
from openpype.settings import get_project_settings
__all__ = [
"install",
"uninstall",
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
# process path mapping
dirmap_processor = MayaDirmap("maya", project_settings)
dirmap_processor.process_dirmap()
"ls",
"containerise",
pyblish.register_plugin_path(PUBLISH_PATH)
avalon.register_plugin_path(avalon.Loader, LOAD_PATH)
avalon.register_plugin_path(avalon.Creator, CREATE_PATH)
avalon.register_plugin_path(avalon.InventoryAction, INVENTORY_PATH)
log.info(PUBLISH_PATH)
menu.install()
"lock",
"unlock",
"is_locked",
"lock_ignored",
log.info("Installing callbacks ... ")
avalon.on("init", on_init)
"Creator",
"Loader",
# Callbacks below are not required for headless mode, the `init` however
# is important to load referenced Alembics correctly at rendertime.
if IS_HEADLESS:
log.info("Running in headless mode, skipping Maya "
"save/open/new callback installation..")
return
# Workfiles API
"open_file",
"save_file",
"current_file",
"has_unsaved_changes",
"file_extensions",
"work_root",
avalon.on("save", on_save)
avalon.on("open", on_open)
avalon.on("new", on_new)
avalon.before("save", on_before_save)
avalon.on("taskChanged", on_task_changed)
avalon.on("before.workfile.save", before_workfile_save)
# Utility functions
"export_alembic",
"lsattr",
"lsattrs",
"read",
log.info("Setting default family states for loader..")
avalon.data["familiesStateToggled"] = ["imagesequence"]
"unique_name",
"unique_namespace",
"apply_shaders",
"without_extension",
"maintained_selection",
"suspended_refresh",
def uninstall():
pyblish.deregister_plugin_path(PUBLISH_PATH)
avalon.deregister_plugin_path(avalon.Loader, LOAD_PATH)
avalon.deregister_plugin_path(avalon.Creator, CREATE_PATH)
avalon.deregister_plugin_path(avalon.InventoryAction, INVENTORY_PATH)
]
menu.uninstall()
def on_init(_):
avalon.logger.info("Running callback on init..")
def safe_deferred(fn):
"""Execute deferred the function in a try-except"""
def _fn():
"""safely call in deferred callback"""
try:
fn()
except Exception as exc:
print(exc)
try:
utils.executeDeferred(_fn)
except Exception as exc:
print(exc)
# Force load Alembic so referenced alembics
# work correctly on scene open
cmds.loadPlugin("AbcImport", quiet=True)
cmds.loadPlugin("AbcExport", quiet=True)
# Force load objExport plug-in (requested by artists)
cmds.loadPlugin("objExport", quiet=True)
from .customize import (
override_component_mask_commands,
override_toolbox_ui
)
safe_deferred(override_component_mask_commands)
launch_workfiles = os.environ.get("WORKFILES_STARTUP")
if launch_workfiles:
safe_deferred(host_tools.show_workfiles)
if not IS_HEADLESS:
safe_deferred(override_toolbox_ui)
def on_before_save(return_code, _):
"""Run validation for scene's FPS prior to saving"""
return lib.validate_fps()
def on_save(_):
"""Automatically add IDs to new nodes
Any transform of a mesh, without an existing ID, is given one
automatically on file save.
"""
avalon.logger.info("Running callback on save..")
# # Update current task for the current scene
# update_task_from_path(cmds.file(query=True, sceneName=True))
# Generate ids of the current context on nodes in the scene
nodes = lib.get_id_required_nodes(referenced_nodes=False)
for node, new_id in lib.generate_ids(nodes):
lib.set_id(node, new_id, overwrite=False)
def on_open(_):
"""On scene open let's assume the containers have changed."""
from Qt import QtWidgets
from openpype.widgets import popup
cmds.evalDeferred(
"from openpype.hosts.maya.api import lib;"
"lib.remove_render_layer_observer()")
cmds.evalDeferred(
"from openpype.hosts.maya.api import lib;"
"lib.add_render_layer_observer()")
cmds.evalDeferred(
"from openpype.hosts.maya.api import lib;"
"lib.add_render_layer_change_observer()")
# # Update current task for the current scene
# update_task_from_path(cmds.file(query=True, sceneName=True))
# Validate FPS after update_task_from_path to
# ensure it is using correct FPS for the asset
lib.validate_fps()
lib.fix_incompatible_containers()
if any_outdated():
log.warning("Scene has outdated content.")
# Find maya main window
top_level_widgets = {w.objectName(): w for w in
QtWidgets.QApplication.topLevelWidgets()}
parent = top_level_widgets.get("MayaWindow", None)
if parent is None:
log.info("Skipping outdated content pop-up "
"because Maya window can't be found.")
else:
# Show outdated pop-up
def _on_show_inventory():
host_tools.show_scene_inventory(parent=parent)
dialog = popup.Popup(parent=parent)
dialog.setWindowTitle("Maya scene has outdated content")
dialog.setMessage("There are outdated containers in "
"your Maya scene.")
dialog.on_show.connect(_on_show_inventory)
dialog.show()
def on_new(_):
"""Set project resolution and fps when create a new file"""
avalon.logger.info("Running callback on new..")
with suspended_refresh():
cmds.evalDeferred(
"from openpype.hosts.maya.api import lib;"
"lib.remove_render_layer_observer()")
cmds.evalDeferred(
"from openpype.hosts.maya.api import lib;"
"lib.add_render_layer_observer()")
cmds.evalDeferred(
"from openpype.hosts.maya.api import lib;"
"lib.add_render_layer_change_observer()")
lib.set_context_settings()
def on_task_changed(*args):
"""Wrapped function of app initialize and maya's on task changed"""
# Run
with suspended_refresh():
lib.set_context_settings()
lib.update_content_on_context_change()
msg = " project: {}\n asset: {}\n task:{}".format(
avalon.Session["AVALON_PROJECT"],
avalon.Session["AVALON_ASSET"],
avalon.Session["AVALON_TASK"]
)
lib.show_message(
"Context was changed",
("Context was changed to:\n{}".format(msg)),
)
def before_workfile_save(event):
workdir_path = event.workdir_path
if workdir_path:
copy_workspace_mel(workdir_path)
class MayaDirmap(HostDirmap):
def on_enable_dirmap(self):
cmds.dirmap(en=True)
def dirmap_routine(self, source_path, destination_path):
cmds.dirmap(m=(source_path, destination_path))
cmds.dirmap(m=(destination_path, source_path))
# Backwards API compatibility
open = open_file
save = save_file

View file

@@ -2,7 +2,7 @@
from __future__ import absolute_import
import pyblish.api
from avalon import io
from openpype.api import get_errored_instances_from_context
@@ -72,8 +72,7 @@ class GenerateUUIDsOnInvalidAction(pyblish.api.Action):
nodes (list): all nodes to regenerate ids on
"""
from openpype.hosts.maya.api import lib
import avalon.io as io
from . import lib
asset = instance.data['asset']
asset_id = io.find_one({"name": asset, "type": "asset"},

View file

@@ -1,5 +1,7 @@
# -*- coding: utf-8 -*-
"""OpenPype script commands to be used directly in Maya."""
from maya import cmds
from avalon import api, io
class ToolWindows:
@@ -51,3 +53,134 @@ def edit_shader_definitions():
window = ShaderDefinitionsEditor(parent=main_window)
ToolWindows.set_window("shader_definition_editor", window)
window.show()
def reset_frame_range():
"""Set frame range to current asset"""
# Set FPS first
fps = {15: 'game',
24: 'film',
25: 'pal',
30: 'ntsc',
48: 'show',
50: 'palf',
60: 'ntscf',
23.98: '23.976fps',
23.976: '23.976fps',
29.97: '29.97fps',
47.952: '47.952fps',
47.95: '47.952fps',
59.94: '59.94fps',
44100: '44100fps',
48000: '48000fps'
}.get(float(api.Session.get("AVALON_FPS", 25)), "pal")
cmds.currentUnit(time=fps)
# Set frame start/end
asset_name = api.Session["AVALON_ASSET"]
asset = io.find_one({"name": asset_name, "type": "asset"})
frame_start = asset["data"].get("frameStart")
frame_end = asset["data"].get("frameEnd")
# Backwards compatibility
if frame_start is None or frame_end is None:
frame_start = asset["data"].get("edit_in")
frame_end = asset["data"].get("edit_out")
if frame_start is None or frame_end is None:
cmds.warning("No edit information found for %s" % asset_name)
return
handles = asset["data"].get("handles") or 0
handle_start = asset["data"].get("handleStart")
if handle_start is None:
handle_start = handles
handle_end = asset["data"].get("handleEnd")
if handle_end is None:
handle_end = handles
frame_start -= int(handle_start)
frame_end += int(handle_end)
cmds.playbackOptions(minTime=frame_start)
cmds.playbackOptions(maxTime=frame_end)
cmds.playbackOptions(animationStartTime=frame_start)
cmds.playbackOptions(animationEndTime=frame_end)
cmds.playbackOptions(minTime=frame_start)
cmds.playbackOptions(maxTime=frame_end)
cmds.currentTime(frame_start)
cmds.setAttr("defaultRenderGlobals.startFrame", frame_start)
cmds.setAttr("defaultRenderGlobals.endFrame", frame_end)
def _resolution_from_document(doc):
if not doc or "data" not in doc:
print("Entered document is not valid. \"{}\"".format(str(doc)))
return None
resolution_width = doc["data"].get("resolutionWidth")
resolution_height = doc["data"].get("resolutionHeight")
# Backwards compatibility
if resolution_width is None or resolution_height is None:
resolution_width = doc["data"].get("resolution_width")
resolution_height = doc["data"].get("resolution_height")
# Make sure both width and height are set
if resolution_width is None or resolution_height is None:
cmds.warning(
"No resolution information found for \"{}\"".format(doc["name"])
)
return None
return int(resolution_width), int(resolution_height)
def reset_resolution():
# Default values
resolution_width = 1920
resolution_height = 1080
# Get resolution from asset
asset_name = api.Session["AVALON_ASSET"]
asset_doc = io.find_one({"name": asset_name, "type": "asset"})
resolution = _resolution_from_document(asset_doc)
# Try get resolution from project
if resolution is None:
# TODO go through visualParents
print((
"Asset \"{}\" does not have set resolution."
" Trying to get resolution from project"
).format(asset_name))
project_doc = io.find_one({"type": "project"})
resolution = _resolution_from_document(project_doc)
if resolution is None:
msg = "Using default resolution {}x{}"
else:
resolution_width, resolution_height = resolution
msg = "Setting resolution to {}x{}"
print(msg.format(resolution_width, resolution_height))
# set for different renderers
# arnold, vray, redshift, renderman
renderer = cmds.getAttr("defaultRenderGlobals.currentRenderer").lower()
# handle various renderman names
if renderer.startswith("renderman"):
renderer = "renderman"
# default attributes are usable for Arnold, Renderman and Redshift
width_attr_name = "defaultResolution.width"
height_attr_name = "defaultResolution.height"
# Vray has its own way
if renderer == "vray":
width_attr_name = "vraySettings.width"
height_attr_name = "vraySettings.height"
cmds.setAttr(width_attr_name, resolution_width)
cmds.setAttr(height_attr_name, resolution_height)
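The renderer dispatch above reduces to one rule: V-Ray keeps resolution on its own `vraySettings` node, while Arnold, Redshift and RenderMan honour `defaultResolution`. A pure-Python sketch of just that selection (the `renderman_ris` example name is an illustrative assumption, not a confirmed renderer identifier):

```python
def resolution_attrs(renderer):
    """Return (width_attr, height_attr) for a renderer name."""
    renderer = renderer.lower()
    # RenderMan may register under several names, e.g. "renderman_ris"
    if renderer.startswith("renderman"):
        renderer = "renderman"
    # V-Ray stores resolution on its own settings node
    if renderer == "vray":
        return "vraySettings.width", "vraySettings.height"
    # Arnold, RenderMan and Redshift use the default globals
    return "defaultResolution.width", "defaultResolution.height"

print(resolution_attrs("Vray"))
```

Lower-casing first makes the comparison robust to however the renderer name is capitalised in `defaultRenderGlobals.currentRenderer`.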

View file

@@ -8,10 +8,9 @@ from functools import partial
import maya.cmds as mc
import maya.mel as mel
from avalon.maya import pipeline
from openpype.api import resources
from openpype.tools.utils import host_tools
from .lib import get_main_window
log = logging.getLogger(__name__)
@@ -76,6 +75,7 @@ def override_component_mask_commands():
def override_toolbox_ui():
"""Add custom buttons in Toolbox as replacement for Maya web help icon."""
icons = resources.get_resource("icons")
parent_widget = get_main_window()
# Ensure the maya web icon on toolbox exists
web_button = "ToolBox|MainToolboxLayout|mayaWebButton"
@@ -115,7 +115,7 @@ def override_toolbox_ui():
label="Work Files",
image=os.path.join(icons, "workfiles.png"),
command=lambda: host_tools.show_workfiles(
parent=pipeline._parent
parent=parent_widget
),
width=icon_size,
height=icon_size,
@@ -130,7 +130,7 @@ def override_toolbox_ui():
label="Loader",
image=os.path.join(icons, "loader.png"),
command=lambda: host_tools.show_loader(
parent=pipeline._parent, use_context=True
parent=parent_widget, use_context=True
),
width=icon_size,
height=icon_size,
@@ -145,7 +145,7 @@ def override_toolbox_ui():
label="Inventory",
image=os.path.join(icons, "inventory.png"),
command=lambda: host_tools.show_scene_inventory(
parent=pipeline._parent
parent=parent_widget
),
width=icon_size,
height=icon_size,

View file

@@ -1,7 +1,8 @@
"""Standalone helper functions"""
import re
import os
import sys
import re
import platform
import uuid
import math
@@ -18,16 +19,19 @@ import bson
from maya import cmds, mel
import maya.api.OpenMaya as om
from avalon import api, maya, io, pipeline
import avalon.maya.lib
import avalon.maya.interactive
from avalon import api, io, pipeline
from openpype import lib
from openpype.api import get_anatomy_settings
from .commands import reset_frame_range
self = sys.modules[__name__]
self._parent = None
log = logging.getLogger(__name__)
IS_HEADLESS = not hasattr(cmds, "about") or cmds.about(batch=True)
ATTRIBUTE_DICT = {"int": {"attributeType": "long"},
"str": {"dataType": "string"},
"unicode": {"dataType": "string"},
@@ -100,6 +104,155 @@ FLOAT_FPS = {23.98, 23.976, 29.97, 47.952, 59.94}
RENDERLIKE_INSTANCE_FAMILIES = ["rendering", "vrayscene"]
def get_main_window():
"""Acquire Maya's main window"""
from Qt import QtWidgets
if self._parent is None:
self._parent = {
widget.objectName(): widget
for widget in QtWidgets.QApplication.topLevelWidgets()
}["MayaWindow"]
return self._parent
@contextlib.contextmanager
def suspended_refresh():
"""Suspend viewport refreshes"""
try:
cmds.refresh(suspend=True)
yield
finally:
cmds.refresh(suspend=False)
@contextlib.contextmanager
def maintained_selection():
"""Maintain selection during context
Example:
>>> scene = cmds.file(new=True, force=True)
>>> node = cmds.createNode("transform", name="Test")
>>> cmds.select("persp")
>>> with maintained_selection():
... cmds.select("Test", replace=True)
>>> "Test" in cmds.ls(selection=True)
False
"""
previous_selection = cmds.ls(selection=True)
try:
yield
finally:
if previous_selection:
cmds.select(previous_selection,
replace=True,
noExpand=True)
else:
cmds.select(clear=True)
def unique_name(name, format="%02d", namespace="", prefix="", suffix=""):
"""Return unique `name`
The function takes into consideration an optional `namespace`
and `suffix`. The suffix is included in evaluating whether a
name exists - such as `name` + "_GRP" - but isn't included
in the returned value.
If a namespace is provided, only names within that namespace
are considered when evaluating whether the name is unique.
Arguments:
format (str, optional): The `name` is given a number, this determines
how this number is formatted. Defaults to a padding of 2.
E.g. my_name01, my_name02.
namespace (str, optional): Only consider names within this namespace.
suffix (str, optional): Only consider names with this suffix.
Example:
>>> name = cmds.createNode("transform", name="MyName")
>>> cmds.objExists(name)
True
>>> unique = unique_name(name)
>>> cmds.objExists(unique)
False
"""
iteration = 1
unique = prefix + (name + format % iteration) + suffix
while cmds.objExists(namespace + ":" + unique):
iteration += 1
unique = prefix + (name + format % iteration) + suffix
if suffix:
return unique[:-len(suffix)]
return unique
def unique_namespace(namespace, format="%02d", prefix="", suffix=""):
"""Return unique namespace
Similar to :func:`unique_name` but evaluating namespaces
as opposed to object names.
Arguments:
namespace (str): Name of namespace to consider
format (str, optional): Formatting of the given iteration number
suffix (str, optional): Only consider namespaces with this suffix.
"""
iteration = 1
unique = prefix + (namespace + format % iteration) + suffix
# The `existing` set does not just contain the namespaces but *all* nodes
# within "current namespace". We need all because the namespace could
# also clash with a node name. To be truly unique and valid one needs to
# check against all.
existing = set(cmds.namespaceInfo(listNamespace=True))
while unique in existing:
iteration += 1
unique = prefix + (namespace + format % iteration) + suffix
return unique
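The naming scheme shared by `unique_namespace` and `unique_name` simply appends an incrementing, formatted counter until the candidate no longer clashes. The same loop sketched against a plain set instead of `cmds.namespaceInfo`, for illustration:

```python
def unique_in(existing, base, format="%02d", prefix="", suffix=""):
    """Return the first prefix + base + counter + suffix not in `existing`."""
    iteration = 1
    candidate = prefix + (base + format % iteration) + suffix
    while candidate in existing:
        iteration += 1
        candidate = prefix + (base + format % iteration) + suffix
    return candidate

taken = {"char01", "char02"}
print(unique_in(taken, "char"))  # char03
```

The `%02d` format gives the zero-padded two-digit counter (`01`, `02`, ...) seen in the default behaviour of both functions.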
def read(node):
"""Return user-defined attributes from `node`"""
data = dict()
for attr in cmds.listAttr(node, userDefined=True) or list():
try:
value = cmds.getAttr(node + "." + attr, asString=True)
except RuntimeError:
# For Message type attribute or others that have connections,
# take source node name as value.
source = cmds.listConnections(node + "." + attr,
source=True,
destination=False)
source = cmds.ls(source, long=True) or [None]
value = source[0]
except ValueError:
# Some attributes cannot be read directly,
# such as mesh and color attributes. These
# are considered non-essential to this
# particular publishing pipeline.
value = None
data[attr] = value
return data
def _get_mel_global(name):
"""Return the value of a mel global variable"""
return mel.eval("$%s = $%s;" % (name, name))
@@ -280,6 +433,73 @@ def shape_from_element(element):
return node
def export_alembic(nodes,
file,
frame_range=None,
write_uv=True,
write_visibility=True,
attribute_prefix=None):
"""Wrap native MEL command with limited set of arguments
Arguments:
nodes (list): Long names of nodes to cache
file (str): Absolute path to output destination
frame_range (tuple, optional): Start- and end-frame of cache,
default to current animation range.
write_uv (bool, optional): Whether or not to include UVs,
default to True
write_visibility (bool, optional): Turn on to store the visibility
state of objects in the Alembic file. Otherwise, all objects are
considered visible, default to True
attribute_prefix (str, optional): Include all user-defined
attributes with this prefix.
"""
if frame_range is None:
frame_range = (
cmds.playbackOptions(query=True, ast=True),
cmds.playbackOptions(query=True, aet=True)
)
options = [
("file", file),
("frameRange", "%s %s" % frame_range),
] + [("root", mesh) for mesh in nodes]
if isinstance(attribute_prefix, string_types):
# Include all attributes prefixed with "mb"
# TODO(marcus): This would be a good candidate for
# external registration, so that the developer
# doesn't have to edit this function to modify
# the behavior of Alembic export.
options.append(("attrPrefix", str(attribute_prefix)))
if write_uv:
options.append(("uvWrite", ""))
if write_visibility:
options.append(("writeVisibility", ""))
# Generate MEL command
mel_args = list()
for key, value in options:
mel_args.append("-{0} {1}".format(key, value))
mel_args_string = " ".join(mel_args)
mel_cmd = "AbcExport -j \"{0}\"".format(mel_args_string)
# For debuggability, put the string passed to MEL in the Script editor.
print("mel.eval('%s')" % mel_cmd)
return mel.eval(mel_cmd)
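The command assembled above is a plain `-key value` string handed to MEL as the `AbcExport` job argument. The assembly step on its own, with fabricated paths and node names for illustration:

```python
def build_abc_job(options):
    """Build the AbcExport -j command from (key, value) option pairs."""
    mel_args = ["-{0} {1}".format(key, value) for key, value in options]
    return 'AbcExport -j "{0}"'.format(" ".join(mel_args))

# Flag-style options such as uvWrite carry an empty value
options = [
    ("file", "/tmp/out.abc"),
    ("frameRange", "1001 1050"),
    ("root", "|geo_GRP"),
    ("uvWrite", ""),
]
print(build_abc_job(options))
```

Flags with empty values leave a trailing space before the closing quote, which `AbcExport` tolerates; this mirrors the behaviour of the function above.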
def collect_animation_data(fps=False):
"""Get the basic animation data
@@ -305,6 +525,256 @@ def collect_animation_data(fps=False):
return data
def imprint(node, data):
"""Write `data` to `node` as userDefined attributes
Arguments:
node (str): Long name of node
data (dict): Dictionary of key/value pairs
Example:
>>> from maya import cmds
>>> def compute():
... return 6
...
>>> cube, generator = cmds.polyCube()
>>> imprint(cube, {
... "regularString": "myFamily",
... "computedValue": lambda: compute()
... })
...
>>> cmds.getAttr(cube + ".computedValue")
6
"""
for key, value in data.items():
if callable(value):
# Support values evaluated at imprint
value = value()
if isinstance(value, bool):
add_type = {"attributeType": "bool"}
set_type = {"keyable": False, "channelBox": True}
elif isinstance(value, string_types):
add_type = {"dataType": "string"}
set_type = {"type": "string"}
elif isinstance(value, int):
add_type = {"attributeType": "long"}
set_type = {"keyable": False, "channelBox": True}
elif isinstance(value, float):
add_type = {"attributeType": "double"}
set_type = {"keyable": False, "channelBox": True}
elif isinstance(value, (list, tuple)):
add_type = {"attributeType": "enum", "enumName": ":".join(value)}
set_type = {"keyable": False, "channelBox": True}
value = 0 # enum default
else:
raise TypeError("Unsupported type: %r" % type(value))
cmds.addAttr(node, longName=key, **add_type)
cmds.setAttr(node + "." + key, value, **set_type)
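The per-type dispatch in `imprint` picks matching `addAttr`/`setAttr` keyword sets; the bool branch must come before the int branch because `bool` is a subclass of `int` in Python. A standalone sketch of just that mapping (Maya calls omitted; `six.string_types` simplified to `str` for a Python 3-only sketch):

```python
def attr_spec(value):
    """Return (addAttr kwargs, setAttr kwargs, stored value) for `value`."""
    if callable(value):
        # Support values evaluated at imprint time
        value = value()
    channel = {"keyable": False, "channelBox": True}
    if isinstance(value, bool):  # must precede the int check
        return {"attributeType": "bool"}, channel, value
    if isinstance(value, str):
        return {"dataType": "string"}, {"type": "string"}, value
    if isinstance(value, int):
        return {"attributeType": "long"}, channel, value
    if isinstance(value, float):
        return {"attributeType": "double"}, channel, value
    if isinstance(value, (list, tuple)):
        # Sequences become enums; the stored value is the default index
        return ({"attributeType": "enum", "enumName": ":".join(value)},
                channel, 0)
    raise TypeError("Unsupported type: %r" % type(value))

print(attr_spec(True)[0])  # {'attributeType': 'bool'}
```

Reordering the branches (int before bool) would silently store booleans as `long` attributes, which is why the original keeps the same branch order.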
def serialise_shaders(nodes):
"""Generate a shader set dictionary
Arguments:
nodes (list): Absolute paths to nodes
Returns:
dictionary of (shader: id) pairs
Schema:
{
"shader1": ["id1", "id2"],
"shader2": ["id3", "id1"]
}
Example:
{
"Bazooka_Brothers01_:blinn4SG": [
"f9520572-ac1d-11e6-b39e-3085a99791c9.f[4922:5001]",
"f9520572-ac1d-11e6-b39e-3085a99791c9.f[4587:4634]",
"f9520572-ac1d-11e6-b39e-3085a99791c9.f[1120:1567]",
"f9520572-ac1d-11e6-b39e-3085a99791c9.f[4251:4362]"
],
"lambert2SG": [
"f9520571-ac1d-11e6-9dbb-3085a99791c9"
]
}
"""
valid_nodes = cmds.ls(
nodes,
long=True,
recursive=True,
showType=True,
objectsOnly=True,
type="transform"
)
meshes_by_id = {}
for mesh in valid_nodes:
shapes = cmds.listRelatives(mesh,
shapes=True,
fullPath=True) or list()
if shapes:
shape = shapes[0]
if not cmds.nodeType(shape):
continue
try:
id_ = cmds.getAttr(mesh + ".mbID")
if id_ not in meshes_by_id:
meshes_by_id[id_] = list()
meshes_by_id[id_].append(mesh)
except ValueError:
continue
meshes_by_shader = dict()
for mesh in meshes_by_id.values():
shape = cmds.listRelatives(mesh,
shapes=True,
fullPath=True) or list()
for shader in cmds.listConnections(shape,
type="shadingEngine") or list():
# Objects in this group are those that haven't got
# any shaders. These are expected to be managed
# elsewhere, such as by the default model loader.
if shader == "initialShadingGroup":
continue
if shader not in meshes_by_shader:
meshes_by_shader[shader] = list()
shaded = cmds.sets(shader, query=True) or list()
meshes_by_shader[shader].extend(shaded)
shader_by_id = {}
for shader, shaded in meshes_by_shader.items():
if shader not in shader_by_id:
shader_by_id[shader] = list()
for mesh in shaded:
# Enable shader assignment to faces.
name = mesh.split(".f[")[0]
transform = name
if cmds.objectType(transform) == "mesh":
transform = cmds.listRelatives(name, parent=True)[0]
try:
id_ = cmds.getAttr(transform + ".mbID")
shader_by_id[shader].append(mesh.replace(name, id_))
except KeyError:
continue
# Remove duplicates
shader_by_id[shader] = list(set(shader_by_id[shader]))
return shader_by_id
def lsattr(attr, value=None):
"""Return nodes matching `key` and `value`
Arguments:
attr (str): Name of Maya attribute
value (object, optional): Value of attribute. If none
is provided, return all nodes with this attribute.
Example:
>> lsattr("id", "myId")
["myNode"]
>> lsattr("id")
["myNode", "myOtherNode"]
"""
if value is None:
return cmds.ls("*.%s" % attr,
recursive=True,
objectsOnly=True,
long=True)
return lsattrs({attr: value})
def lsattrs(attrs):
"""Return nodes with the given attribute(s).
Arguments:
attrs (dict): Name and value pairs of expected matches
Example:
>> # Return nodes with an `age` of five.
>> lsattrs({"age": "five"})
>> # Return nodes with both `age` and `color` of five and blue.
>> lsattrs({"age": "five", "color": "blue"})
Returns:
list: matching nodes.
"""
dep_fn = om.MFnDependencyNode()
dag_fn = om.MFnDagNode()
selection_list = om.MSelectionList()
first_attr = next(iter(attrs))
try:
selection_list.add("*.{0}".format(first_attr),
searchChildNamespaces=True)
except RuntimeError as exc:
if str(exc).endswith("Object does not exist"):
return []
matches = set()
for i in range(selection_list.length()):
node = selection_list.getDependNode(i)
if node.hasFn(om.MFn.kDagNode):
fn_node = dag_fn.setObject(node)
full_path_names = [path.fullPathName()
for path in fn_node.getAllPaths()]
else:
fn_node = dep_fn.setObject(node)
full_path_names = [fn_node.name()]
for attr in attrs:
try:
plug = fn_node.findPlug(attr, True)
if plug.asString() != attrs[attr]:
break
except RuntimeError:
break
else:
matches.update(full_path_names)
return list(matches)
@contextlib.contextmanager
def without_extension():
"""Use cmds.file with defaultExtensions=False"""
previous_setting = cmds.file(defaultExtensions=True, query=True)
try:
cmds.file(defaultExtensions=False)
yield
finally:
cmds.file(defaultExtensions=previous_setting)
@contextlib.contextmanager
def attribute_values(attr_values):
"""Remaps node attributes to values during context.
@@ -736,7 +1206,7 @@ def namespaced(namespace, new=True):
"""
original = cmds.namespaceInfo(cur=True, absoluteName=True)
if new:
namespace = avalon.maya.lib.unique_namespace(namespace)
namespace = unique_namespace(namespace)
cmds.namespace(add=namespace)
try:
@@ -1408,7 +1878,7 @@ def assign_look_by_version(nodes, version_id):
raise RuntimeError("Could not find LookLoader, this is a bug")
# Reference the look file
with maya.maintained_selection():
with maintained_selection():
container_node = pipeline.load(Loader, look_representation)
# Get container members
@@ -1947,7 +2417,7 @@ def set_context_settings():
reset_scene_resolution()
# Set frame range.
avalon.maya.interactive.reset_frame_range()
reset_frame_range()
# Set colorspace
set_colorspace()
@@ -2386,7 +2856,7 @@ def get_attr_in_layer(attr, layer):
def fix_incompatible_containers():
"""Return whether the current scene has any outdated content"""
host = avalon.api.registered_host()
host = api.registered_host()
for container in host.ls():
loader = container['loader']

View file

@@ -1,58 +1,146 @@
import sys
import os
import logging
from Qt import QtWidgets, QtGui
import maya.utils
import maya.cmds as cmds
from avalon.maya import pipeline
import avalon.api
from openpype.api import BuildWorkfile
from openpype.settings import get_project_settings
from openpype.tools.utils import host_tools
from openpype.hosts.maya.api import lib
from .lib import get_main_window, IS_HEADLESS
from .commands import reset_frame_range
log = logging.getLogger(__name__)
MENU_NAME = "op_maya_menu"
def _get_menu(menu_name=None):
"""Return the menu instance if it currently exists in Maya"""
if menu_name is None:
menu_name = pipeline._menu
menu_name = MENU_NAME
widgets = {w.objectName(): w for w in QtWidgets.QApplication.allWidgets()}
return widgets.get(menu_name)
def deferred():
def add_build_workfiles_item():
# Add build first workfile
cmds.menuItem(divider=True, parent=pipeline._menu)
def install():
if cmds.about(batch=True):
log.info("Skipping openpype.menu initialization in batch mode..")
return
def deferred():
from avalon.tools import publish
parent_widget = get_main_window()
cmds.menu(
MENU_NAME,
label=avalon.api.Session["AVALON_LABEL"],
tearOff=True,
parent="MayaWindow"
)
# Create context menu
context_label = "{}, {}".format(
avalon.api.Session["AVALON_ASSET"],
avalon.api.Session["AVALON_TASK"]
)
cmds.menuItem(
"currentContext",
label=context_label,
parent=MENU_NAME,
enable=False
)
cmds.setParent("..", menu=True)
cmds.menuItem(divider=True)
# Create default items
cmds.menuItem(
"Create...",
command=lambda *args: host_tools.show_creator(parent=parent_widget)
)
cmds.menuItem(
"Load...",
command=lambda *args: host_tools.show_loader(
parent=parent_widget,
use_context=True
)
)
cmds.menuItem(
"Publish...",
command=lambda *args: host_tools.show_publish(
parent=parent_widget
),
image=publish.ICON
)
cmds.menuItem(
"Manage...",
command=lambda *args: host_tools.show_scene_inventory(
parent=parent_widget
)
)
cmds.menuItem(
"Library...",
command=lambda *args: host_tools.show_library_loader(
parent=parent_widget
)
)
cmds.menuItem(divider=True)
cmds.menuItem(
"Work Files...",
command=lambda *args: host_tools.show_workfiles(
parent=parent_widget
),
)
cmds.menuItem(
"Reset Frame Range",
command=reset_frame_range
)
cmds.menuItem(
"Reset Resolution",
command=lib.reset_scene_resolution
)
cmds.menuItem(
"Set Colorspace",
command=lib.set_colorspace,
)
cmds.menuItem(divider=True, parent=MENU_NAME)
cmds.menuItem(
"Build First Workfile",
parent=pipeline._menu,
parent=MENU_NAME,
command=lambda *args: BuildWorkfile().process()
)
def add_look_assigner_item():
cmds.menuItem(
"Look assigner",
parent=pipeline._menu,
"Look assigner...",
command=lambda *args: host_tools.show_look_assigner(
pipeline._parent
parent_widget
)
)
def add_experimental_item():
cmds.menuItem(
"Experimental tools...",
parent=pipeline._menu,
command=lambda *args: host_tools.show_experimental_tools_dialog(
pipeline._parent
parent_widget
)
)
cmds.setParent(MENU_NAME, menu=True)
def add_scripts_menu():
try:
@@ -82,124 +170,13 @@ def deferred():
# apply configuration
studio_menu.build_from_configuration(studio_menu, config)
def modify_workfiles():
# Find the pipeline menu
top_menu = _get_menu()
# Try to find workfile tool action in the menu
workfile_action = None
for action in top_menu.actions():
if action.text() == "Work Files":
workfile_action = action
break
# Add at the top of menu if "Work Files" action was not found
after_action = ""
if workfile_action:
# Use action's object name for `insertAfter` argument
after_action = workfile_action.objectName()
# Insert action to menu
cmds.menuItem(
"Work Files",
parent=pipeline._menu,
command=lambda *args: host_tools.show_workfiles(pipeline._parent),
insertAfter=after_action
)
# Remove replaced action
if workfile_action:
top_menu.removeAction(workfile_action)
def modify_resolution():
# Find the pipeline menu
top_menu = _get_menu()
# Try to find resolution tool action in the menu
resolution_action = None
for action in top_menu.actions():
if action.text() == "Reset Resolution":
resolution_action = action
break
# Add at the top of menu if "Reset Resolution" action was not found
after_action = ""
if resolution_action:
# Use action's object name for `insertAfter` argument
after_action = resolution_action.objectName()
# Insert action to menu
cmds.menuItem(
"Reset Resolution",
parent=pipeline._menu,
command=lambda *args: lib.reset_scene_resolution(),
insertAfter=after_action
)
# Remove replaced action
if resolution_action:
top_menu.removeAction(resolution_action)
def remove_project_manager():
top_menu = _get_menu()
# Try to find "System" menu action in the menu
system_menu = None
for action in top_menu.actions():
if action.text() == "System":
system_menu = action
break
if system_menu is None:
return
# Try to find "Project manager" action in "System" menu
project_manager_action = None
for action in system_menu.menu().children():
if hasattr(action, "text") and action.text() == "Project Manager":
project_manager_action = action
break
# Remove "Project manager" action if was found
if project_manager_action is not None:
system_menu.menu().removeAction(project_manager_action)
def add_colorspace():
# Find the pipeline menu
top_menu = _get_menu()
# Try to find workfile tool action in the menu
workfile_action = None
for action in top_menu.actions():
if action.text() == "Reset Resolution":
workfile_action = action
break
# Add at the top of menu if "Reset Resolution" action was not found
after_action = ""
if workfile_action:
# Use action's object name for `insertAfter` argument
after_action = workfile_action.objectName()
# Insert action to menu
cmds.menuItem(
"Set Colorspace",
parent=pipeline._menu,
command=lambda *args: lib.set_colorspace(),
insertAfter=after_action
)
log.info("Attempting to install scripts menu ...")
# add_scripts_menu()
add_build_workfiles_item()
add_look_assigner_item()
add_experimental_item()
modify_workfiles()
modify_resolution()
remove_project_manager()
add_colorspace()
add_scripts_menu()
# Allow time for uninstallation to finish.
# We use Maya's executeDeferred instead of QTimer.singleShot
# so that it only gets called after Maya UI has initialized too.
# This is crucial with Maya 2020+ which initializes without UI
# first as a QCoreApplication
maya.utils.executeDeferred(deferred)
cmds.evalDeferred(add_scripts_menu, lowestPriority=True)
def uninstall():
@@ -214,18 +191,27 @@ def uninstall():
log.error(e)
def install():
if cmds.about(batch=True):
log.info("Skipping openpype.menu initialization in batch mode..")
return
# Allow time for uninstallation to finish.
cmds.evalDeferred(deferred, lowestPriority=True)
def popup():
"""Pop-up the existing menu near the mouse cursor."""
menu = _get_menu()
cursor = QtGui.QCursor()
point = cursor.pos()
menu.exec_(point)
def update_menu_task_label():
"""Update the task label in Avalon menu to current session"""
if IS_HEADLESS:
return
object_name = "{}|currentContext".format(MENU_NAME)
if not cmds.menuItem(object_name, query=True, exists=True):
log.warning("Can't find menuItem: {}".format(object_name))
return
label = "{}, {}".format(
avalon.api.Session["AVALON_ASSET"],
avalon.api.Session["AVALON_TASK"]
)
cmds.menuItem(object_name, edit=True, label=label)


@@ -0,0 +1,596 @@
import os
import sys
import errno
import logging
import contextlib
from maya import utils, cmds, OpenMaya
import maya.api.OpenMaya as om
import pyblish.api
import avalon.api
from avalon.lib import find_submodule
from avalon.pipeline import AVALON_CONTAINER_ID
import openpype.hosts.maya
from openpype.tools.utils import host_tools
from openpype.lib import any_outdated
from openpype.lib.path_tools import HostDirmap
from openpype.hosts.maya.lib import copy_workspace_mel
from . import menu, lib
log = logging.getLogger("openpype.hosts.maya")
HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.maya.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")
CREATE_PATH = os.path.join(PLUGINS_DIR, "create")
INVENTORY_PATH = os.path.join(PLUGINS_DIR, "inventory")
AVALON_CONTAINERS = ":AVALON_CONTAINERS"
self = sys.modules[__name__]
self._ignore_lock = False
self._events = {}
def install():
from openpype.settings import get_project_settings
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
# process path mapping
dirmap_processor = MayaDirmap("maya", project_settings)
dirmap_processor.process_dirmap()
pyblish.api.register_plugin_path(PUBLISH_PATH)
pyblish.api.register_host("mayabatch")
pyblish.api.register_host("mayapy")
pyblish.api.register_host("maya")
avalon.api.register_plugin_path(avalon.api.Loader, LOAD_PATH)
avalon.api.register_plugin_path(avalon.api.Creator, CREATE_PATH)
avalon.api.register_plugin_path(avalon.api.InventoryAction, INVENTORY_PATH)
log.info(PUBLISH_PATH)
log.info("Installing callbacks ... ")
avalon.api.on("init", on_init)
# Callbacks below are not required for headless mode, the `init` however
# is important to load referenced Alembics correctly at rendertime.
if lib.IS_HEADLESS:
log.info(("Running in headless mode, skipping Maya "
"save/open/new callback installation.."))
return
_set_project()
_register_callbacks()
menu.install()
avalon.api.on("save", on_save)
avalon.api.on("open", on_open)
avalon.api.on("new", on_new)
avalon.api.before("save", on_before_save)
avalon.api.on("taskChanged", on_task_changed)
avalon.api.on("before.workfile.save", before_workfile_save)
log.info("Setting default family states for loader..")
avalon.api.data["familiesStateToggled"] = ["imagesequence"]
def _set_project():
"""Sets the maya project to the current Session's work directory.
Returns:
None
"""
workdir = avalon.api.Session["AVALON_WORKDIR"]
try:
os.makedirs(workdir)
except OSError as e:
# An already existing working directory is fine.
if e.errno == errno.EEXIST:
pass
else:
raise
cmds.workspace(workdir, openWorkspace=True)
def _register_callbacks():
for handler, event in self._events.copy().items():
if event is None:
continue
try:
OpenMaya.MMessage.removeCallback(event)
self._events[handler] = None
except RuntimeError as e:
log.info(e)
self._events[_on_scene_save] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kBeforeSave, _on_scene_save
)
self._events[_before_scene_save] = OpenMaya.MSceneMessage.addCheckCallback(
OpenMaya.MSceneMessage.kBeforeSaveCheck, _before_scene_save
)
self._events[_on_scene_new] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterNew, _on_scene_new
)
self._events[_on_maya_initialized] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kMayaInitialized, _on_maya_initialized
)
self._events[_on_scene_open] = OpenMaya.MSceneMessage.addCallback(
OpenMaya.MSceneMessage.kAfterOpen, _on_scene_open
)
log.info("Installed event handler _on_scene_save..")
log.info("Installed event handler _before_scene_save..")
log.info("Installed event handler _on_scene_new..")
log.info("Installed event handler _on_maya_initialized..")
log.info("Installed event handler _on_scene_open..")
def _on_maya_initialized(*args):
avalon.api.emit("init", args)
if cmds.about(batch=True):
log.warning("Running batch mode ...")
return
# Keep reference to the main Window, once a main window exists.
lib.get_main_window()
def _on_scene_new(*args):
avalon.api.emit("new", args)
def _on_scene_save(*args):
avalon.api.emit("save", args)
def _on_scene_open(*args):
avalon.api.emit("open", args)
def _before_scene_save(return_code, client_data):
# Default to allowing the action. Registered
# callbacks can optionally set this to False
# in order to block the operation.
OpenMaya.MScriptUtil.setBool(return_code, True)
avalon.api.emit("before_save", [return_code, client_data])
def uninstall():
pyblish.api.deregister_plugin_path(PUBLISH_PATH)
pyblish.api.deregister_host("mayabatch")
pyblish.api.deregister_host("mayapy")
pyblish.api.deregister_host("maya")
avalon.api.deregister_plugin_path(avalon.api.Loader, LOAD_PATH)
avalon.api.deregister_plugin_path(avalon.api.Creator, CREATE_PATH)
avalon.api.deregister_plugin_path(
avalon.api.InventoryAction, INVENTORY_PATH
)
menu.uninstall()
def lock():
"""Lock scene
Add an invisible node to your Maya scene with the name of the
current file, indicating that this file is "locked" and cannot
be modified any further.
"""
if not cmds.objExists("lock"):
with lib.maintained_selection():
cmds.createNode("objectSet", name="lock")
cmds.addAttr("lock", ln="basename", dataType="string")
# Permanently hide from outliner
cmds.setAttr("lock.verticesOnlySet", True)
fname = cmds.file(query=True, sceneName=True)
basename = os.path.basename(fname)
cmds.setAttr("lock.basename", basename, type="string")
def unlock():
"""Permanently unlock a locked scene
Doesn't throw an error if scene is already unlocked.
"""
try:
cmds.delete("lock")
except ValueError:
pass
def is_locked():
"""Query whether current scene is locked"""
fname = cmds.file(query=True, sceneName=True)
basename = os.path.basename(fname)
if self._ignore_lock:
return False
try:
return cmds.getAttr("lock.basename") == basename
except ValueError:
return False
@contextlib.contextmanager
def lock_ignored():
"""Context manager for temporarily ignoring the lock of a scene
The purpose of this function is to enable locking a scene and
saving it with the lock still in place.
Example:
>>> with lock_ignored():
... pass # Do things without lock
"""
self._ignore_lock = True
try:
yield
finally:
self._ignore_lock = False
def parse_container(container):
"""Return the container node's full container data.
Args:
container (str): A container node name.
Returns:
dict: The container schema data for this container node.
"""
data = lib.read(container)
# Backwards compatibility pre-schemas for containers
data["schema"] = data.get("schema", "openpype:container-1.0")
# Append transient data
data["objectName"] = container
return data
def _ls():
"""Yields Avalon container node names.
Used by `ls()` to retrieve the nodes and then query the full container's
data.
Yields:
str: Avalon container node name (objectSet)
"""
def _maya_iterate(iterator):
"""Helper to iterate a maya iterator"""
while not iterator.isDone():
yield iterator.thisNode()
iterator.next()
ids = {AVALON_CONTAINER_ID,
# Backwards compatibility
"pyblish.mindbender.container"}
# Iterate over all 'set' nodes in the scene to detect whether
# they have the avalon container ".id" attribute.
fn_dep = om.MFnDependencyNode()
iterator = om.MItDependencyNodes(om.MFn.kSet)
for mobject in _maya_iterate(iterator):
if mobject.apiTypeStr != "kSet":
# Only match by exact type
continue
fn_dep.setObject(mobject)
if not fn_dep.hasAttribute("id"):
continue
plug = fn_dep.findPlug("id", True)
value = plug.asString()
if value in ids:
yield fn_dep.name()
def ls():
"""Yields containers from active Maya scene
This is the host-equivalent of api.ls(), but instead of listing
assets on disk, it lists assets already loaded in Maya; once loaded
they are called 'containers'
Yields:
dict: container
"""
container_names = _ls()
has_metadata_collector = False
config_host = find_submodule(avalon.api.registered_config(), "maya")
if hasattr(config_host, "collect_container_metadata"):
has_metadata_collector = True
for container in sorted(container_names):
data = parse_container(container)
# Collect custom data if attribute is present
if has_metadata_collector:
metadata = config_host.collect_container_metadata(container)
data.update(metadata)
yield data
def containerise(name,
namespace,
nodes,
context,
loader=None,
suffix="CON"):
"""Bundle `nodes` into an assembly and imprint it with metadata
Containerisation enables a tracking of version, author and origin
for loaded assets.
Arguments:
name (str): Name of resulting assembly
namespace (str): Namespace under which to host container
nodes (list): Long names of nodes to containerise
context (dict): Asset information
loader (str, optional): Name of loader used to produce this container.
suffix (str, optional): Suffix of container, defaults to `_CON`.
Returns:
container (str): Name of container assembly
"""
container = cmds.sets(nodes, name="%s_%s_%s" % (namespace, name, suffix))
data = [
("schema", "openpype:container-2.0"),
("id", AVALON_CONTAINER_ID),
("name", name),
("namespace", namespace),
("loader", str(loader)),
("representation", context["representation"]["_id"]),
]
for key, value in data:
if not value:
continue
if isinstance(value, (int, float)):
cmds.addAttr(container, longName=key, attributeType="short")
cmds.setAttr(container + "." + key, value)
else:
cmds.addAttr(container, longName=key, dataType="string")
cmds.setAttr(container + "." + key, value, type="string")
main_container = cmds.ls(AVALON_CONTAINERS, type="objectSet")
if not main_container:
main_container = cmds.sets(empty=True, name=AVALON_CONTAINERS)
# Implement #399: Maya 2019+ hide AVALON_CONTAINERS on creation..
if cmds.attributeQuery("hiddenInOutliner",
node=main_container,
exists=True):
cmds.setAttr(main_container + ".hiddenInOutliner", True)
else:
main_container = main_container[0]
cmds.sets(container, addElement=main_container)
# Implement #399: Maya 2019+ hide containers in outliner
if cmds.attributeQuery("hiddenInOutliner",
node=container,
exists=True):
cmds.setAttr(container + ".hiddenInOutliner", True)
return container
def on_init(_):
log.info("Running callback on init..")
def safe_deferred(fn):
"""Execute deferred the function in a try-except"""
def _fn():
"""safely call in deferred callback"""
try:
fn()
except Exception as exc:
print(exc)
try:
utils.executeDeferred(_fn)
except Exception as exc:
print(exc)
# Force load Alembic so referenced alembics
# work correctly on scene open
cmds.loadPlugin("AbcImport", quiet=True)
cmds.loadPlugin("AbcExport", quiet=True)
# Force load objExport plug-in (requested by artists)
cmds.loadPlugin("objExport", quiet=True)
from .customize import (
override_component_mask_commands,
override_toolbox_ui
)
safe_deferred(override_component_mask_commands)
launch_workfiles = os.environ.get("WORKFILES_STARTUP")
if launch_workfiles:
safe_deferred(host_tools.show_workfiles)
if not lib.IS_HEADLESS:
safe_deferred(override_toolbox_ui)
def on_before_save(return_code, _):
"""Run validation for scene's FPS prior to saving"""
return lib.validate_fps()
def on_save(_):
"""Automatically add IDs to new nodes
Any transform of a mesh, without an existing ID, is given one
automatically on file save.
"""
log.info("Running callback on save..")
# # Update current task for the current scene
# update_task_from_path(cmds.file(query=True, sceneName=True))
# Generate ids of the current context on nodes in the scene
nodes = lib.get_id_required_nodes(referenced_nodes=False)
for node, new_id in lib.generate_ids(nodes):
lib.set_id(node, new_id, overwrite=False)
def on_open(_):
"""On scene open let's assume the containers have changed."""
from Qt import QtWidgets
from openpype.widgets import popup
cmds.evalDeferred(
"from openpype.hosts.maya.api import lib;"
"lib.remove_render_layer_observer()")
cmds.evalDeferred(
"from openpype.hosts.maya.api import lib;"
"lib.add_render_layer_observer()")
cmds.evalDeferred(
"from openpype.hosts.maya.api import lib;"
"lib.add_render_layer_change_observer()")
# # Update current task for the current scene
# update_task_from_path(cmds.file(query=True, sceneName=True))
# Validate FPS after update_task_from_path to
# ensure it is using correct FPS for the asset
lib.validate_fps()
lib.fix_incompatible_containers()
if any_outdated():
log.warning("Scene has outdated content.")
# Find maya main window
top_level_widgets = {w.objectName(): w for w in
QtWidgets.QApplication.topLevelWidgets()}
parent = top_level_widgets.get("MayaWindow", None)
if parent is None:
log.info("Skipping outdated content pop-up "
"because Maya window can't be found.")
else:
# Show outdated pop-up
def _on_show_inventory():
host_tools.show_scene_inventory(parent=parent)
dialog = popup.Popup(parent=parent)
dialog.setWindowTitle("Maya scene has outdated content")
dialog.setMessage("There are outdated containers in "
"your Maya scene.")
dialog.on_show.connect(_on_show_inventory)
dialog.show()
def on_new(_):
"""Set project resolution and fps when create a new file"""
log.info("Running callback on new..")
with lib.suspended_refresh():
cmds.evalDeferred(
"from openpype.hosts.maya.api import lib;"
"lib.remove_render_layer_observer()")
cmds.evalDeferred(
"from openpype.hosts.maya.api import lib;"
"lib.add_render_layer_observer()")
cmds.evalDeferred(
"from openpype.hosts.maya.api import lib;"
"lib.add_render_layer_change_observer()")
lib.set_context_settings()
def on_task_changed(*args):
"""Wrapped function of app initialize and maya's on task changed"""
# Run
menu.update_menu_task_label()
workdir = avalon.api.Session["AVALON_WORKDIR"]
if os.path.exists(workdir):
log.info("Updating Maya workspace for task change to %s", workdir)
_set_project()
# Set Maya fileDialog's start-dir to /scenes
frule_scene = cmds.workspace(fileRuleEntry="scene")
cmds.optionVar(stringValue=("browserLocationmayaBinaryscene",
workdir + "/" + frule_scene))
else:
log.warning((
"Can't set project for new context because path does not exist: {}"
).format(workdir))
with lib.suspended_refresh():
lib.set_context_settings()
lib.update_content_on_context_change()
msg = " project: {}\n asset: {}\n task:{}".format(
avalon.api.Session["AVALON_PROJECT"],
avalon.api.Session["AVALON_ASSET"],
avalon.api.Session["AVALON_TASK"]
)
lib.show_message(
"Context was changed",
("Context was changed to:\n{}".format(msg)),
)
def before_workfile_save(event):
workdir_path = event.workdir_path
if workdir_path:
copy_workspace_mel(workdir_path)
class MayaDirmap(HostDirmap):
def on_enable_dirmap(self):
cmds.dirmap(en=True)
def dirmap_routine(self, source_path, destination_path):
cmds.dirmap(m=(source_path, destination_path))
cmds.dirmap(m=(destination_path, source_path))


@@ -1,8 +1,14 @@
import os
from maya import cmds
from avalon import api
from avalon.vendor import qargparse
import avalon.maya
from openpype.api import PypeCreatorMixin
from .pipeline import containerise
from . import lib
def get_reference_node(members, log=None):
"""Get the reference node from the container members
@@ -14,8 +20,6 @@ def get_reference_node(members, log=None):
"""
from maya import cmds
# Collect the references without .placeHolderList[] attributes as
# unique entries (objects only) and skipping the sharedReferenceNode.
references = set()
@@ -61,8 +65,6 @@ def get_reference_node_parents(ref):
list: The upstream parent reference nodes.
"""
from maya import cmds
parent = cmds.referenceQuery(ref,
referenceNode=True,
parent=True)
@@ -75,11 +77,25 @@ def get_reference_node_parents(ref):
return parents
class Creator(PypeCreatorMixin, avalon.maya.Creator):
pass
class Creator(PypeCreatorMixin, api.Creator):
def process(self):
nodes = list()
with lib.undo_chunk():
if (self.options or {}).get("useSelection"):
nodes = cmds.ls(selection=True)
instance = cmds.sets(nodes, name=self.name)
lib.imprint(instance, self.data)
return instance
class ReferenceLoader(api.Loader):
class Loader(api.Loader):
hosts = ["maya"]
class ReferenceLoader(Loader):
"""A basic ReferenceLoader for Maya
This will implement the basic behavior for a loader to inherit from that
@@ -117,11 +133,6 @@ class ReferenceLoader(api.Loader):
namespace=None,
options=None
):
import os
from avalon.maya import lib
from avalon.maya.pipeline import containerise
assert os.path.exists(self.fname), "%s does not exist." % self.fname
asset = context['asset']
@@ -182,8 +193,6 @@ class ReferenceLoader(api.Loader):
def update(self, container, representation):
import os
from maya import cmds
node = container["objectName"]


@@ -9,8 +9,10 @@ import six
from maya import cmds
from avalon import api, io
from avalon.maya.lib import unique_namespace
from openpype.hosts.maya.api.lib import matrix_equals
from openpype.hosts.maya.api.lib import (
matrix_equals,
unique_namespace
)
log = logging.getLogger("PackageLoader")
@@ -239,7 +241,7 @@ def get_contained_containers(container):
"""
import avalon.schema
from avalon.maya.pipeline import parse_container
from .pipeline import parse_container
# Get avalon containers in this package setdress container
containers = []


@@ -0,0 +1,67 @@
"""Host API required Work Files tool"""
import os
from maya import cmds
from avalon import api
def file_extensions():
return api.HOST_WORKFILE_EXTENSIONS["maya"]
def has_unsaved_changes():
return cmds.file(query=True, modified=True)
def save_file(filepath):
cmds.file(rename=filepath)
ext = os.path.splitext(filepath)[1]
if ext == ".mb":
file_type = "mayaBinary"
else:
file_type = "mayaAscii"
cmds.file(save=True, type=file_type)
def open_file(filepath):
return cmds.file(filepath, open=True, force=True)
def current_file():
current_filepath = cmds.file(query=True, sceneName=True)
if not current_filepath:
return None
return current_filepath
def work_root(session):
work_dir = session["AVALON_WORKDIR"]
scene_dir = None
# Query scene file rule from workspace.mel if it exists in WORKDIR
# We are parsing the workspace.mel manually as opposed to temporarily
# setting the Workspace in Maya in a context manager since Maya had a
# tendency to crash on frequently changing the workspace when this
# function was called many times as one scrolled through Work Files assets.
workspace_mel = os.path.join(work_dir, "workspace.mel")
if os.path.exists(workspace_mel):
scene_rule = 'workspace -fr "scene" '
# We need to use builtins as `open` is overridden by the workio API
open_file = __builtins__["open"]
with open_file(workspace_mel, "r") as f:
for line in f:
if line.strip().startswith(scene_rule):
# remainder == "rule";
remainder = line[len(scene_rule):]
# scene_dir == rule
scene_dir = remainder.split('"')[1]
else:
# We can't query a workspace that does not exist
# so we return similar to what we do in other hosts.
scene_dir = session.get("AVALON_SCENEDIR")
if scene_dir:
return os.path.join(work_dir, scene_dir)
else:
return work_dir


@@ -1,4 +1,9 @@
from avalon import api, io
import json
from avalon import api, io, pipeline
from openpype.hosts.maya.api.lib import (
maintained_selection,
apply_shaders
)
class ImportModelRender(api.InventoryAction):
@@ -49,10 +54,8 @@ class ImportModelRender(api.InventoryAction):
Returns:
None
"""
import json
from maya import cmds
from avalon import maya, io, pipeline
from openpype.hosts.maya.api import lib
# Get representations of shader file and relationships
look_repr = io.find_one({
@@ -77,7 +80,7 @@ class ImportModelRender(api.InventoryAction):
json_file = pipeline.get_representation_path_from_context(context)
# Import the look file
with maya.maintained_selection():
with maintained_selection():
shader_nodes = cmds.file(maya_file,
i=True, # import
returnNewNodes=True)
@@ -89,4 +92,4 @@ class ImportModelRender(api.InventoryAction):
relationships = json.load(f)
# Assign relationships
lib.apply_shaders(relationships, shader_nodes, nodes)
apply_shaders(relationships, shader_nodes, nodes)


@@ -17,7 +17,7 @@ class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
def process_reference(self, context, name, namespace, data):
import maya.cmds as cmds
from avalon import maya
from openpype.hosts.maya.api.lib import unique_namespace
cmds.loadPlugin("AbcImport.mll", quiet=True)
# Prevent identical alembic nodes from being shared
@@ -27,9 +27,11 @@ class AbcLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
# Assuming name is subset name from the animation, we split the number
# suffix from the name to ensure the namespace is unique
name = name.split("_")[0]
namespace = maya.unique_namespace("{}_".format(name),
format="%03d",
suffix="_abc")
namespace = unique_namespace(
"{}_".format(name),
format="%03d",
suffix="_abc"
)
# hero_001 (abc)
# asset_counter{optional}


@@ -3,6 +3,10 @@
"""
from avalon import api
from openpype.hosts.maya.api.lib import (
maintained_selection,
unique_namespace
)
class SetFrameRangeLoader(api.Loader):
@@ -98,22 +102,19 @@ class ImportMayaLoader(api.Loader):
def load(self, context, name=None, namespace=None, data=None):
import maya.cmds as cmds
from avalon import maya
from avalon.maya import lib
choice = self.display_warning()
if choice is False:
return
asset = context['asset']
namespace = namespace or lib.unique_namespace(
namespace = namespace or unique_namespace(
asset["name"] + "_",
prefix="_" if asset["name"][0].isdigit() else "",
suffix="_",
)
with maya.maintained_selection():
with maintained_selection():
cmds.file(self.fname,
i=True,
preserveReferences=True,


@@ -1,9 +1,15 @@
import os
import clique
from avalon import api
from openpype.api import get_project_settings
import openpype.hosts.maya.api.plugin
from openpype.hosts.maya.api.plugin import get_reference_node
import os
from openpype.api import get_project_settings
import clique
from openpype.hosts.maya.api.lib import (
maintained_selection,
unique_namespace
)
from openpype.hosts.maya.api.pipeline import containerise
class AssProxyLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
@@ -20,7 +26,6 @@ class AssProxyLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
def process_reference(self, context, name, namespace, options):
import maya.cmds as cmds
from avalon import maya
import pymel.core as pm
version = context['version']
@@ -35,7 +40,7 @@ class AssProxyLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
except ValueError:
family = "ass"
with maya.maintained_selection():
with maintained_selection():
groupName = "{}:{}".format(namespace, name)
path = self.fname
@@ -95,8 +100,6 @@ class AssProxyLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
self.update(container, representation)
def update(self, container, representation):
import os
from maya import cmds
import pymel.core as pm
@@ -175,8 +178,6 @@ class AssStandinLoader(api.Loader):
def load(self, context, name, namespace, options):
import maya.cmds as cmds
import avalon.maya.lib as lib
from avalon.maya.pipeline import containerise
import mtoa.ui.arnoldmenu
import pymel.core as pm
@@ -188,7 +189,7 @@ class AssStandinLoader(api.Loader):
frameStart = version_data.get("frameStart", None)
asset = context['asset']['name']
namespace = namespace or lib.unique_namespace(
namespace = namespace or unique_namespace(
asset + "_",
prefix="_" if asset[0].isdigit() else "",
suffix="_",


@@ -13,11 +13,11 @@ class AssemblyLoader(api.Loader):
def load(self, context, name, namespace, data):
from avalon.maya.pipeline import containerise
from avalon.maya import lib
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
asset = context['asset']['name']
namespace = namespace or lib.unique_namespace(
namespace = namespace or unique_namespace(
asset + "_",
prefix="_" if asset[0].isdigit() else "",
suffix="_",
@@ -25,9 +25,11 @@ class AssemblyLoader(api.Loader):
from openpype.hosts.maya.api import setdress
containers = setdress.load_package(filepath=self.fname,
name=name,
namespace=namespace)
containers = setdress.load_package(
filepath=self.fname,
name=name,
namespace=namespace
)
self[:] = containers


@@ -1,7 +1,7 @@
from avalon import api, io
from avalon.maya.pipeline import containerise
from avalon.maya import lib
from maya import cmds, mel
from avalon import api, io
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
class AudioLoader(api.Loader):
@@ -27,7 +27,7 @@ class AudioLoader(api.Loader):
)
asset = context["asset"]["name"]
namespace = namespace or lib.unique_namespace(
namespace = namespace or unique_namespace(
asset + "_",
prefix="_" if asset[0].isdigit() else "",
suffix="_",


@@ -17,11 +17,11 @@ class GpuCacheLoader(api.Loader):
def load(self, context, name, namespace, data):
import maya.cmds as cmds
import avalon.maya.lib as lib
from avalon.maya.pipeline import containerise
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
asset = context['asset']['name']
namespace = namespace or lib.unique_namespace(
namespace = namespace or unique_namespace(
asset + "_",
prefix="_" if asset[0].isdigit() else "",
suffix="_",


@ -1,8 +1,9 @@
from avalon import api, io
from avalon.maya.pipeline import containerise
from avalon.maya import lib
from Qt import QtWidgets, QtCore
from avalon import api, io
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
from maya import cmds
@ -88,7 +89,7 @@ class ImagePlaneLoader(api.Loader):
new_nodes = []
image_plane_depth = 1000
asset = context['asset']['name']
namespace = namespace or lib.unique_namespace(
namespace = namespace or unique_namespace(
asset + "_",
prefix="_" if asset[0].isdigit() else "",
suffix="_",


@ -1,13 +1,15 @@
# -*- coding: utf-8 -*-
"""Look loader."""
import openpype.hosts.maya.api.plugin
from avalon import api, io
import json
import openpype.hosts.maya.api.lib
from collections import defaultdict
from openpype.widgets.message_window import ScrollMessageBox
from Qt import QtWidgets
from avalon import api, io
import openpype.hosts.maya.api.plugin
from openpype.hosts.maya.api import lib
from openpype.widgets.message_window import ScrollMessageBox
from openpype.hosts.maya.api.plugin import get_reference_node
@ -36,9 +38,8 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
"""
import maya.cmds as cmds
from avalon import maya
with maya.maintained_selection():
with lib.maintained_selection():
nodes = cmds.file(self.fname,
namespace=namespace,
reference=True,
@ -140,9 +141,7 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
cmds.file(cr=reference_node) # cleanReference
# reapply shading groups from json representation on orig nodes
openpype.hosts.maya.api.lib.apply_shaders(json_data,
shader_nodes,
orig_nodes)
lib.apply_shaders(json_data, shader_nodes, orig_nodes)
msg = ["During reference update some edits failed.",
"All successful edits were kept intact.\n",
@ -159,8 +158,8 @@ class LookLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
# region compute lookup
nodes_by_id = defaultdict(list)
for n in nodes:
nodes_by_id[openpype.hosts.maya.api.lib.get_id(n)].append(n)
openpype.hosts.maya.api.lib.apply_attributes(attributes, nodes_by_id)
nodes_by_id[lib.get_id(n)].append(n)
lib.apply_attributes(attributes, nodes_by_id)
# Update metadata
cmds.setAttr("{}.representation".format(node),

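The id-based lookup built in this hunk is plain `defaultdict` grouping; a minimal standalone illustration, with a toy `get_id` standing in for `lib.get_id`:

```python
from collections import defaultdict

def group_nodes_by_id(nodes, get_id):
    """Group nodes under their asset id, mirroring the nodes_by_id lookup
    built before lib.apply_attributes() (get_id stands in for lib.get_id)."""
    nodes_by_id = defaultdict(list)
    for n in nodes:
        nodes_by_id[get_id(n)].append(n)
    return nodes_by_id

grouped = group_nodes_by_id(
    ["|a|mesh1", "|a|mesh2", "|b|mesh3"],
    get_id=lambda node: node.split("|")[1],  # toy id: first path component
)
# grouped == {"a": ["|a|mesh1", "|a|mesh2"], "b": ["|b|mesh3"]}
```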

@ -1,11 +1,18 @@
# -*- coding: utf-8 -*-
"""Loader for Redshift proxy."""
from avalon.maya import lib
import os
import clique
import maya.cmds as cmds
from avalon import api
from openpype.api import get_project_settings
import os
import maya.cmds as cmds
import clique
from openpype.hosts.maya.api.lib import (
namespaced,
maintained_selection,
unique_namespace
)
from openpype.hosts.maya.api.pipeline import containerise
class RedshiftProxyLoader(api.Loader):
@ -21,17 +28,13 @@ class RedshiftProxyLoader(api.Loader):
def load(self, context, name=None, namespace=None, options=None):
"""Plugin entry point."""
from avalon.maya.pipeline import containerise
from openpype.hosts.maya.api.lib import namespaced
try:
family = context["representation"]["context"]["family"]
except ValueError:
family = "redshiftproxy"
asset_name = context['asset']["name"]
namespace = namespace or lib.unique_namespace(
namespace = namespace or unique_namespace(
asset_name + "_",
prefix="_" if asset_name[0].isdigit() else "",
suffix="_",
@ -40,7 +43,7 @@ class RedshiftProxyLoader(api.Loader):
# Ensure Redshift for Maya is loaded.
cmds.loadPlugin("redshift4maya", quiet=True)
with lib.maintained_selection():
with maintained_selection():
cmds.namespace(addNamespace=namespace)
with namespaced(namespace, new=False):
nodes, group_node = self.create_rs_proxy(


@ -1,9 +1,10 @@
import openpype.hosts.maya.api.plugin
from avalon import api, maya
from maya import cmds
import os
from maya import cmds
from avalon import api
from openpype.api import get_project_settings
from openpype.lib import get_creator_by_name
import openpype.hosts.maya.api.plugin
from openpype.hosts.maya.api.lib import maintained_selection
class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
@ -32,7 +33,6 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
def process_reference(self, context, name, namespace, options):
import maya.cmds as cmds
from avalon import maya
import pymel.core as pm
try:
@ -44,7 +44,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
# True by default to keep legacy behaviours
attach_to_root = options.get("attach_to_root", True)
with maya.maintained_selection():
with maintained_selection():
cmds.loadPlugin("AbcImport.mll", quiet=True)
nodes = cmds.file(self.fname,
namespace=namespace,
@ -149,7 +149,7 @@ class ReferenceLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
# Create the animation instance
creator_plugin = get_creator_by_name(self.animation_creator_name)
with maya.maintained_selection():
with maintained_selection():
cmds.select([output, controls] + roots, noExpand=True)
api.create(
creator_plugin,


@ -11,8 +11,8 @@ import six
import sys
from avalon import api
from avalon.maya import lib
from openpype.hosts.maya.api import lib as pypelib
from openpype.hosts.maya.api import lib
from openpype.hosts.maya.api.pipeline import containerise
from maya import cmds
import maya.app.renderSetup.model.renderSetup as renderSetup
@ -31,7 +31,6 @@ class RenderSetupLoader(api.Loader):
def load(self, context, name, namespace, data):
"""Load RenderSetup settings."""
from avalon.maya.pipeline import containerise
# from openpype.hosts.maya.api.lib import namespaced
@ -83,7 +82,7 @@ class RenderSetupLoader(api.Loader):
def update(self, container, representation):
"""Update RenderSetup setting by overwriting existing settings."""
pypelib.show_message(
lib.show_message(
"Render setup update",
"Render setup setting will be overwritten by new version. All "
"setting specified by user not included in loaded version "


@ -1,7 +1,8 @@
from avalon import api
import os
from avalon import api
from openpype.api import get_project_settings
class LoadVDBtoRedShift(api.Loader):
"""Load OpenVDB in a Redshift Volume Shape"""
@ -15,8 +16,8 @@ class LoadVDBtoRedShift(api.Loader):
def load(self, context, name=None, namespace=None, data=None):
from maya import cmds
import avalon.maya.lib as lib
from avalon.maya.pipeline import containerise
from openpype.hosts.maya.api.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
try:
family = context["representation"]["context"]["family"]
@ -45,7 +46,7 @@ class LoadVDBtoRedShift(api.Loader):
asset = context['asset']
asset_name = asset["name"]
namespace = namespace or lib.unique_namespace(
namespace = namespace or unique_namespace(
asset_name + "_",
prefix="_" if asset_name[0].isdigit() else "",
suffix="_",


@ -1,6 +1,6 @@
import os
from avalon import api
from openpype.api import get_project_settings
import os
from maya import cmds
@ -80,8 +80,8 @@ class LoadVDBtoVRay(api.Loader):
def load(self, context, name, namespace, data):
import avalon.maya.lib as lib
from avalon.maya.pipeline import containerise
from openpype.hosts.maya.api.lib import unique_namespace
from openpype.hosts.maya.api.pipeline import containerise
assert os.path.exists(self.fname), (
"Path does not exist: %s" % self.fname
@ -111,7 +111,7 @@ class LoadVDBtoVRay(api.Loader):
asset = context['asset']
asset_name = asset["name"]
namespace = namespace or lib.unique_namespace(
namespace = namespace or unique_namespace(
asset_name + "_",
prefix="_" if asset_name[0].isdigit() else "",
suffix="_",


@ -9,9 +9,14 @@ import os
import maya.cmds as cmds
from avalon.maya import lib
from avalon import api, io
from openpype.api import get_project_settings
from openpype.hosts.maya.api.lib import (
maintained_selection,
namespaced,
unique_namespace
)
from openpype.hosts.maya.api.pipeline import containerise
class VRayProxyLoader(api.Loader):
@ -36,8 +41,6 @@ class VRayProxyLoader(api.Loader):
options (dict): Optional loader options.
"""
from avalon.maya.pipeline import containerise
from openpype.hosts.maya.api.lib import namespaced
try:
family = context["representation"]["context"]["family"]
@ -48,7 +51,7 @@ class VRayProxyLoader(api.Loader):
self.fname = self._get_abc(context["version"]["_id"]) or self.fname
asset_name = context['asset']["name"]
namespace = namespace or lib.unique_namespace(
namespace = namespace or unique_namespace(
asset_name + "_",
prefix="_" if asset_name[0].isdigit() else "",
suffix="_",
@ -57,7 +60,7 @@ class VRayProxyLoader(api.Loader):
# Ensure V-Ray for Maya is loaded.
cmds.loadPlugin("vrayformaya", quiet=True)
with lib.maintained_selection():
with maintained_selection():
cmds.namespace(addNamespace=namespace)
with namespaced(namespace, new=False):
nodes, group_node = self.create_vray_proxy(


@ -1,8 +1,13 @@
from avalon.maya import lib
from avalon import api
from openpype.api import config
import os
import maya.cmds as cmds
from avalon import api
from openpype.api import get_project_settings
from openpype.hosts.maya.api.lib import (
maintained_selection,
namespaced,
unique_namespace
)
from openpype.hosts.maya.api.pipeline import containerise
class VRaySceneLoader(api.Loader):
@ -18,8 +23,6 @@ class VRaySceneLoader(api.Loader):
def load(self, context, name, namespace, data):
from avalon.maya.pipeline import containerise
from openpype.hosts.maya.lib import namespaced
try:
family = context["representation"]["context"]["family"]
@ -27,7 +30,7 @@ class VRaySceneLoader(api.Loader):
family = "vrayscene_layer"
asset_name = context['asset']["name"]
namespace = namespace or lib.unique_namespace(
namespace = namespace or unique_namespace(
asset_name + "_",
prefix="_" if asset_name[0].isdigit() else "",
suffix="_",
@ -36,7 +39,7 @@ class VRaySceneLoader(api.Loader):
# Ensure V-Ray for Maya is loaded.
cmds.loadPlugin("vrayformaya", quiet=True)
with lib.maintained_selection():
with maintained_selection():
cmds.namespace(addNamespace=namespace)
with namespaced(namespace, new=False):
nodes, group_node = self.create_vray_scene(name,
@ -47,8 +50,8 @@ class VRaySceneLoader(api.Loader):
return
# colour the group node
presets = config.get_presets(project=os.environ['AVALON_PROJECT'])
colors = presets['plugins']['maya']['load']['colors']
presets = get_project_settings(os.environ['AVALON_PROJECT'])
colors = presets['maya']['load']['colors']
c = colors.get(family)
if c is not None:
cmds.setAttr("{0}.useOutlinerColor".format(group_node), 1)


@ -3,14 +3,14 @@ import json
import re
import glob
from collections import defaultdict
from pprint import pprint
from maya import cmds
from avalon import api, io
from avalon.maya import lib as avalon_lib, pipeline
from openpype.hosts.maya.api import lib
from openpype.api import get_project_settings
from pprint import pprint
from openpype.hosts.maya.api import lib
from openpype.hosts.maya.api.pipeline import containerise
class YetiCacheLoader(api.Loader):
@ -75,11 +75,13 @@ class YetiCacheLoader(api.Loader):
self[:] = nodes
return pipeline.containerise(name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__)
return containerise(
name=name,
namespace=namespace,
nodes=nodes,
context=context,
loader=self.__class__.__name__
)
def remove(self, container):
@ -239,9 +241,11 @@ class YetiCacheLoader(api.Loader):
asset_name = "{}_".format(asset)
prefix = "_" if asset_name[0].isdigit() else ""
namespace = avalon_lib.unique_namespace(asset_name,
prefix=prefix,
suffix="_")
namespace = lib.unique_namespace(
asset_name,
prefix=prefix,
suffix="_"
)
return namespace


@ -25,7 +25,6 @@ class YetiRigLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
self, context, name=None, namespace=None, options=None):
import maya.cmds as cmds
from avalon import maya
# get roots of selected hierarchies
selected_roots = []
@ -53,7 +52,7 @@ class YetiRigLoader(openpype.hosts.maya.api.plugin.ReferenceLoader):
scene_lookup[cb_id] = node
# load rig
with maya.maintained_selection():
with lib.maintained_selection():
nodes = cmds.file(self.fname,
namespace=namespace,
reference=True,


@ -2,7 +2,7 @@ from collections import defaultdict
import pyblish.api
from maya import cmds, mel
from avalon import maya as avalon
from openpype.hosts.maya import api
from openpype.hosts.maya.api import lib
# TODO : Publish of assembly: -unique namespace for all assets, VALIDATOR!
@ -30,7 +30,7 @@ class CollectAssembly(pyblish.api.InstancePlugin):
def process(self, instance):
# Find containers
containers = avalon.ls()
containers = api.ls()
# Get all content from the instance
instance_lookup = set(cmds.ls(instance, type="transform", long=True))


@ -49,7 +49,7 @@ import maya.app.renderSetup.model.renderSetup as renderSetup
import pyblish.api
from avalon import maya, api
from avalon import api
from openpype.hosts.maya.api.lib_renderproducts import get as get_layer_render_products # noqa: E501
from openpype.hosts.maya.api import lib
@ -409,7 +409,7 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
dict: only overrides with values
"""
attributes = maya.read(render_globals)
attributes = lib.read(render_globals)
options = {"renderGlobals": {}}
options["renderGlobals"]["Priority"] = attributes["priority"]


@ -2,9 +2,12 @@ import os
from maya import cmds
import avalon.maya
import openpype.api
from openpype.hosts.maya.api.lib import extract_alembic
from openpype.hosts.maya.api.lib import (
extract_alembic,
suspended_refresh,
maintained_selection
)
class ExtractAnimation(openpype.api.Extractor):
@ -71,8 +74,8 @@ class ExtractAnimation(openpype.api.Extractor):
# Since Maya 2017 alembic supports multiple uv sets - write them.
options["writeUVSets"] = True
with avalon.maya.suspended_refresh():
with avalon.maya.maintained_selection():
with suspended_refresh():
with maintained_selection():
cmds.select(nodes, noExpand=True)
extract_alembic(file=path,
startFrame=float(start),


@ -1,9 +1,9 @@
import os
import avalon.maya
import openpype.api
from maya import cmds
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractAssStandin(openpype.api.Extractor):
@ -30,7 +30,7 @@ class ExtractAssStandin(openpype.api.Extractor):
# Write out .ass file
self.log.info("Writing: '%s'" % file_path)
with avalon.maya.maintained_selection():
with maintained_selection():
self.log.info("Writing: {}".format(instance.data["setMembers"]))
cmds.select(instance.data["setMembers"], noExpand=True)


@ -1,10 +1,10 @@
import os
from maya import cmds
import contextlib
import avalon.maya
from maya import cmds
import openpype.api
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractAssProxy(openpype.api.Extractor):
@ -54,7 +54,7 @@ class ExtractAssProxy(openpype.api.Extractor):
noIntermediate=True)
self.log.info(members)
with avalon.maya.maintained_selection():
with maintained_selection():
with unparent(members[0]):
cmds.select(members, noExpand=True)
cmds.file(path,


@ -2,9 +2,7 @@ import os
from maya import cmds
import avalon.maya
import openpype.api
from openpype.hosts.maya.api import lib
@ -54,7 +52,7 @@ class ExtractCameraAlembic(openpype.api.Extractor):
path = os.path.join(dir_path, filename)
# Perform alembic extraction
with avalon.maya.maintained_selection():
with lib.maintained_selection():
cmds.select(camera, replace=True, noExpand=True)
# Enforce forward slashes for AbcExport because we're
@ -86,7 +84,7 @@ class ExtractCameraAlembic(openpype.api.Extractor):
job_str += " -attr {0}".format(attr)
with lib.evaluation("off"):
with avalon.maya.suspended_refresh():
with lib.suspended_refresh():
cmds.AbcExport(j=job_str, verbose=False)
if "representations" not in instance.data:


@ -5,7 +5,6 @@ import itertools
from maya import cmds
import avalon.maya
import openpype.api
from openpype.hosts.maya.api import lib
@ -157,9 +156,9 @@ class ExtractCameraMayaScene(openpype.api.Extractor):
path = os.path.join(dir_path, filename)
# Perform extraction
with avalon.maya.maintained_selection():
with lib.maintained_selection():
with lib.evaluation("off"):
with avalon.maya.suspended_refresh():
with lib.suspended_refresh():
if bake_to_worldspace:
self.log.info(
"Performing camera bakes: {}".format(transform))


@ -3,12 +3,12 @@ import os
from maya import cmds # noqa
import maya.mel as mel # noqa
from openpype.hosts.maya.api.lib import root_parent
import pyblish.api
import avalon.maya
import openpype.api
from openpype.hosts.maya.api.lib import (
root_parent,
maintained_selection
)
class ExtractFBX(openpype.api.Extractor):
@ -205,13 +205,13 @@ class ExtractFBX(openpype.api.Extractor):
# Export
if "unrealStaticMesh" in instance.data["families"]:
with avalon.maya.maintained_selection():
with maintained_selection():
with root_parent(members):
self.log.info("Un-parenting: {}".format(members))
cmds.select(members, r=1, noExpand=True)
mel.eval('FBXExport -f "{}" -s'.format(path))
else:
with avalon.maya.maintained_selection():
with maintained_selection():
cmds.select(members, r=1, noExpand=True)
mel.eval('FBXExport -f "{}" -s'.format(path))


@ -11,8 +11,7 @@ from collections import OrderedDict
from maya import cmds # noqa
import pyblish.api
import avalon.maya
from avalon import io, api
from avalon import io
import openpype.api
from openpype.hosts.maya.api import lib
@ -239,7 +238,7 @@ class ExtractLook(openpype.api.Extractor):
# getting incorrectly remapped. (LKD-17, PLN-101)
with no_workspace_dir():
with lib.attribute_values(remap):
with avalon.maya.maintained_selection():
with lib.maintained_selection():
cmds.select(sets, noExpand=True)
cmds.file(
maya_path,


@ -4,8 +4,8 @@ import os
from maya import cmds
import avalon.maya
import openpype.api
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractMayaSceneRaw(openpype.api.Extractor):
@ -59,7 +59,7 @@ class ExtractMayaSceneRaw(openpype.api.Extractor):
# Perform extraction
self.log.info("Performing extraction ...")
with avalon.maya.maintained_selection():
with maintained_selection():
cmds.select(members, noExpand=True)
cmds.file(path,
force=True,


@ -4,7 +4,6 @@ import os
from maya import cmds
import avalon.maya
import openpype.api
from openpype.hosts.maya.api import lib
@ -74,7 +73,7 @@ class ExtractModel(openpype.api.Extractor):
polygonObject=1):
with lib.shader(members,
shadingEngine="initialShadingGroup"):
with avalon.maya.maintained_selection():
with lib.maintained_selection():
cmds.select(members, noExpand=True)
cmds.file(path,
force=True,


@ -2,9 +2,12 @@ import os
from maya import cmds
import avalon.maya
import openpype.api
from openpype.hosts.maya.api.lib import extract_alembic
from openpype.hosts.maya.api.lib import (
extract_alembic,
suspended_refresh,
maintained_selection
)
class ExtractAlembic(openpype.api.Extractor):
@ -70,8 +73,8 @@ class ExtractAlembic(openpype.api.Extractor):
# Since Maya 2017 alembic supports multiple uv sets - write them.
options["writeUVSets"] = True
with avalon.maya.suspended_refresh():
with avalon.maya.maintained_selection():
with suspended_refresh():
with maintained_selection():
cmds.select(nodes, noExpand=True)
extract_alembic(file=path,
startFrame=start,


@ -2,11 +2,11 @@
"""Redshift Proxy extractor."""
import os
import avalon.maya
import openpype.api
from maya import cmds
import openpype.api
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractRedshiftProxy(openpype.api.Extractor):
"""Extract the content of the instance to a redshift proxy file."""
@ -54,7 +54,7 @@ class ExtractRedshiftProxy(openpype.api.Extractor):
# Write out rs file
self.log.info("Writing: '%s'" % file_path)
with avalon.maya.maintained_selection():
with maintained_selection():
cmds.select(instance.data["setMembers"], noExpand=True)
cmds.file(file_path,
pr=False,


@ -4,8 +4,8 @@ import os
from maya import cmds
import avalon.maya
import openpype.api
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractRig(openpype.api.Extractor):
@ -40,7 +40,7 @@ class ExtractRig(openpype.api.Extractor):
# Perform extraction
self.log.info("Performing extraction ...")
with avalon.maya.maintained_selection():
with maintained_selection():
cmds.select(instance, noExpand=True)
cmds.file(path,
force=True,


@ -1,10 +1,10 @@
import os
import avalon.maya
import openpype.api
from maya import cmds
import openpype.api
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractVRayProxy(openpype.api.Extractor):
"""Extract the content of the instance to a vrmesh file
@ -41,7 +41,7 @@ class ExtractVRayProxy(openpype.api.Extractor):
# Write out vrmesh file
self.log.info("Writing: '%s'" % file_path)
with avalon.maya.maintained_selection():
with maintained_selection():
cmds.select(instance.data["setMembers"], noExpand=True)
cmds.vrayCreateProxy(exportType=1,
dir=staging_dir,


@ -3,9 +3,9 @@
import os
import re
import avalon.maya
import openpype.api
from openpype.hosts.maya.api.render_setup_tools import export_in_rs_layer
from openpype.hosts.maya.api.lib import maintained_selection
from maya import cmds
@ -57,7 +57,7 @@ class ExtractVrayscene(openpype.api.Extractor):
# Write out vrscene file
self.log.info("Writing: '%s'" % file_path)
with avalon.maya.maintained_selection():
with maintained_selection():
if "*" not in instance.data["setMembers"]:
self.log.info(
"Exporting: {}".format(instance.data["setMembers"]))


@ -2,8 +2,11 @@ import os
from maya import cmds
import avalon.maya
import openpype.api
from openpype.hosts.maya.api.lib import (
suspended_refresh,
maintained_selection
)
class ExtractXgenCache(openpype.api.Extractor):
@ -32,8 +35,8 @@ class ExtractXgenCache(openpype.api.Extractor):
filename = "{name}.abc".format(**instance.data)
path = os.path.join(parent_dir, filename)
with avalon.maya.suspended_refresh():
with avalon.maya.maintained_selection():
with suspended_refresh():
with maintained_selection():
command = (
'-file '
+ path


@ -7,9 +7,8 @@ import contextlib
from maya import cmds
import avalon.maya.lib as lib
import openpype.api
import openpype.hosts.maya.api.lib as maya
from openpype.hosts.maya.api import lib
@contextlib.contextmanager


@ -2,10 +2,9 @@ from maya import cmds
import pyblish.api
from avalon import maya
import openpype.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api.lib import maintained_selection
class ValidateCycleError(pyblish.api.InstancePlugin):
@ -26,7 +25,7 @@ class ValidateCycleError(pyblish.api.InstancePlugin):
@classmethod
def get_invalid(cls, instance):
with maya.maintained_selection():
with maintained_selection():
cmds.select(instance[:], noExpand=True)
plugs = cmds.cycleCheck(all=False, # check selection only
list=True)


@ -3,7 +3,7 @@ from maya import cmds
import pyblish.api
import openpype.api
import openpype.hosts.maya.api.action
from avalon import maya
from openpype.hosts.maya.api.lib import maintained_selection
class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin):
@ -67,7 +67,7 @@ class ValidateMeshArnoldAttributes(pyblish.api.InstancePlugin):
@classmethod
def repair(cls, instance):
with maya.maintained_selection():
with maintained_selection():
with pc.UndoChunk():
temp_transform = pc.polyCube()[0]


@ -3,7 +3,6 @@ from maya import cmds
import pyblish.api
import openpype.api
import openpype.hosts.maya.api.action
from avalon import maya
from openpype.hosts.maya.api import lib


@ -5,8 +5,6 @@ import openpype.api
import openpype.hosts.maya.api.action
from openpype.hosts.maya.api import lib
from avalon.maya import maintained_selection
class ValidateShapeZero(pyblish.api.Validator):
"""Shape components may not have any "tweak" values
@ -51,7 +49,7 @@ class ValidateShapeZero(pyblish.api.Validator):
if not invalid_shapes:
return
with maintained_selection():
with lib.maintained_selection():
with lib.tool("selectSuperContext"):
for shape in invalid_shapes:
cmds.polyCollapseTweaks(shape)


@ -1,8 +1,12 @@
import os
import avalon.api
from openpype.api import get_project_settings
from openpype.hosts.maya import api
import openpype.hosts.maya.api.lib as mlib
from maya import cmds
avalon.api.install(api)
print("starting OpenPype usersetup")


@ -29,6 +29,10 @@ from .lib import (
maintained_selection
)
from .utils import (
colorspace_exists_on_node,
get_colorspace_list
)
__all__ = (
"file_extensions",
@ -54,4 +58,7 @@ __all__ = (
"update_container",
"maintained_selection",
"colorspace_exists_on_node",
"get_colorspace_list"
)


@ -753,7 +753,7 @@ def script_name():
def add_button_write_to_read(node):
name = "createReadNode"
label = "Create Read From Rendered"
label = "Read From Rendered"
value = "import write_to_read;\
write_to_read.write_to_read(nuke.thisNode(), allow_relative=False)"
knob = nuke.PyScript_Knob(name, label, value)
@ -761,6 +761,15 @@ def add_button_write_to_read(node):
node.addKnob(knob)
def add_button_clear_rendered(node, path):
name = "clearRendered"
label = "Clear Rendered"
value = "import clear_rendered;\
clear_rendered.clear_rendered(\"{}\")".format(path)
knob = nuke.PyScript_Knob(name, label, value)
node.addKnob(knob)
def create_write_node(name, data, input=None, prenodes=None,
review=True, linked_knobs=None, farm=True):
''' Creating write node which is group node
@ -988,6 +997,9 @@ def create_write_node(name, data, input=None, prenodes=None,
# adding write to read button
add_button_write_to_read(GN)
# adding clear rendered button
add_button_clear_rendered(GN, os.path.dirname(fpath))
# Deadline tab.
add_deadline_tab(GN)


@ -82,3 +82,50 @@ def bake_gizmos_recursively(in_group=None):
if node.Class() == "Group":
bake_gizmos_recursively(node)
def colorspace_exists_on_node(node, colorspace_name):
""" Check if colorspace exists on node
Look through all options in the colorspace knob, and see if we have an
exact match to one of the items.
Args:
node (nuke.Node): nuke node object
colorspace_name (str): color profile name
Returns:
bool: True if exists
"""
try:
colorspace_knob = node['colorspace']
except ValueError:
# knob is not available on input node
return False
all_clrs = get_colorspace_list(colorspace_knob)
return colorspace_name in all_clrs
def get_colorspace_list(colorspace_knob):
"""Get available colorspace profile names
Args:
colorspace_knob (nuke.Knob): nuke knob object
Returns:
list: list of profile names as strings
"""
all_clrs = list(colorspace_knob.values())
reduced_clrs = []
if not colorspace_knob.getFlag(nuke.STRIP_CASCADE_PREFIX):
return all_clrs
# strip colorspace with nested path
for clrs in all_clrs:
clrs = clrs.split('/')[-1]
reduced_clrs.append(clrs)
return reduced_clrs
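The reduction step in `get_colorspace_list` can be exercised without Nuke; a standalone sketch of just the prefix-stripping (the real function first checks the knob's `nuke.STRIP_CASCADE_PREFIX` flag and returns the full list when cascading is off):

```python
def strip_cascade_prefixes(colorspaces):
    """Reduce cascaded 'menu/submenu/name' knob entries to their leaf names,
    as the loop in get_colorspace_list() does."""
    return [name.split('/')[-1] for name in colorspaces]

profiles = ["default/sRGB", "default/linear", "ACES/ACEScg", "raw"]
print(strip_cascade_prefixes(profiles))  # ['sRGB', 'linear', 'ACEScg', 'raw']
```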


@ -9,7 +9,8 @@ from openpype.hosts.nuke.api.lib import (
from openpype.hosts.nuke.api import (
containerise,
update_container,
viewer_update_and_undo_stop
viewer_update_and_undo_stop,
colorspace_exists_on_node
)
from openpype.hosts.nuke.api import plugin
@ -66,11 +67,11 @@ class LoadClip(plugin.NukeLoader):
)
def load(self, context, name, namespace, options):
repre = context["representation"]
# reset container id so it is always unique for each instance
self.reset_container_id()
is_sequence = len(context["representation"]["files"]) > 1
is_sequence = len(repre["files"]) > 1
file = self.fname.replace("\\", "/")
@ -79,14 +80,13 @@ class LoadClip(plugin.NukeLoader):
version = context['version']
version_data = version.get("data", {})
repr_id = context["representation"]["_id"]
colorspace = version_data.get("colorspace")
iio_colorspace = get_imageio_input_colorspace(file)
repr_cont = context["representation"]["context"]
repre_id = repre["_id"]
repre_cont = repre["context"]
self.log.info("version_data: {}\n".format(version_data))
self.log.debug(
"Representation id `{}` ".format(repr_id))
"Representation id `{}` ".format(repre_id))
self.handle_start = version_data.get("handleStart", 0)
self.handle_end = version_data.get("handleEnd", 0)
@ -101,7 +101,7 @@ class LoadClip(plugin.NukeLoader):
first = 1
last = first + duration
elif "#" not in file:
frame = repr_cont.get("frame")
frame = repre_cont.get("frame")
assert frame, "Representation is not sequence"
padding = len(frame)
@ -113,10 +113,10 @@ class LoadClip(plugin.NukeLoader):
if not file:
self.log.warning(
"Representation id `{}` is failing to load".format(repr_id))
"Representation id `{}` is failing to load".format(repre_id))
return
read_name = self._get_node_name(context["representation"])
read_name = self._get_node_name(repre)
# Create the Loader with the filename path set
read_node = nuke.createNode(
@ -128,11 +128,8 @@ class LoadClip(plugin.NukeLoader):
with viewer_update_and_undo_stop():
read_node["file"].setValue(file)
# Set colorspace defined in version data
if colorspace:
read_node["colorspace"].setValue(str(colorspace))
elif iio_colorspace is not None:
read_node["colorspace"].setValue(iio_colorspace)
used_colorspace = self._set_colorspace(
read_node, version_data, repre["data"])
self._set_range_to_node(read_node, first, last, start_at_workfile)
@ -145,6 +142,12 @@ class LoadClip(plugin.NukeLoader):
for k in add_keys:
if k == 'version':
data_imprint.update({k: context["version"]['name']})
elif k == 'colorspace':
colorspace = repre["data"].get(k)
colorspace = colorspace or version_data.get(k)
data_imprint["db_colorspace"] = colorspace
if used_colorspace:
data_imprint["used_colorspace"] = used_colorspace
else:
data_imprint.update(
{k: context["version"]['data'].get(k, str(None))})
@ -192,10 +195,13 @@ class LoadClip(plugin.NukeLoader):
"_id": representation["parent"]
})
version_data = version.get("data", {})
repr_id = representation["_id"]
colorspace = version_data.get("colorspace")
iio_colorspace = get_imageio_input_colorspace(file)
repr_cont = representation["context"]
repre_id = representation["_id"]
repre_cont = representation["context"]
# colorspace profile
colorspace = representation["data"].get("colorspace")
colorspace = colorspace or version_data.get("colorspace")
self.handle_start = version_data.get("handleStart", 0)
self.handle_end = version_data.get("handleEnd", 0)
@ -210,7 +216,7 @@ class LoadClip(plugin.NukeLoader):
first = 1
last = first + duration
elif "#" not in file:
frame = repr_cont.get("frame")
frame = repre_cont.get("frame")
assert frame, "Representation is not sequence"
padding = len(frame)
@ -218,7 +224,7 @@ class LoadClip(plugin.NukeLoader):
if not file:
self.log.warning(
"Representation id `{}` is failing to load".format(repr_id))
"Representation id `{}` is failing to load".format(repre_id))
return
read_name = self._get_node_name(representation)
@ -229,12 +235,9 @@ class LoadClip(plugin.NukeLoader):
# to avoid multiple undo steps for rest of process
# we will switch off undo-ing
with viewer_update_and_undo_stop():
# Set colorspace defined in version data
if colorspace:
read_node["colorspace"].setValue(str(colorspace))
elif iio_colorspace is not None:
read_node["colorspace"].setValue(iio_colorspace)
used_colorspace = self._set_colorspace(
read_node, version_data, representation["data"],
path=file)
self._set_range_to_node(read_node, first, last, start_at_workfile)
@ -243,7 +246,7 @@ class LoadClip(plugin.NukeLoader):
"frameStart": str(first),
"frameEnd": str(last),
"version": str(version.get("name")),
"colorspace": colorspace,
"db_colorspace": colorspace,
"source": version_data.get("source"),
"handleStart": str(self.handle_start),
"handleEnd": str(self.handle_end),
@ -251,6 +254,10 @@ class LoadClip(plugin.NukeLoader):
"author": version_data.get("author")
}
# add used colorspace if any was found
if used_colorspace:
updated_dict["used_colorspace"] = used_colorspace
# change color of read_node
# get all versions in list
versions = io.find({
@ -365,14 +372,37 @@ class LoadClip(plugin.NukeLoader):
def _get_node_name(self, representation):
repr_cont = representation["context"]
repre_cont = representation["context"]
name_data = {
"asset": repr_cont["asset"],
"subset": repr_cont["subset"],
"asset": repre_cont["asset"],
"subset": repre_cont["subset"],
"representation": representation["name"],
"ext": repr_cont["representation"],
"ext": repre_cont["representation"],
"id": representation["_id"],
"class_name": self.__class__.__name__
}
return self.node_name_template.format(**name_data)
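The `_get_node_name` helper above is a plain `str.format` call over the representation context. A minimal sketch of how the fill data resolves, assuming a hypothetical `node_name_template` value (the real template comes from the loader's settings):

```python
# Hypothetical template; the actual value is configured on the loader plugin.
node_name_template = "{class_name}_{asset}_{subset}_{representation}"

# Illustrative values standing in for representation["context"] and friends.
name_data = {
    "asset": "sh010",
    "subset": "plateMain",
    "representation": "exr",
    "ext": "exr",
    "id": "61f0000000000000000000aa",
    "class_name": "LoadClip",
}
print(node_name_template.format(**name_data))  # LoadClip_sh010_plateMain_exr
```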
def _set_colorspace(self, node, version_data, repre_data, path=None):
output_color = None
path = path or self.fname.replace("\\", "/")
# get colorspace
colorspace = repre_data.get("colorspace")
colorspace = colorspace or version_data.get("colorspace")
# colorspace from `project_anatomy/imageio/nuke/regexInputs`
iio_colorspace = get_imageio_input_colorspace(path)
# Set colorspace defined in version data
if (
colorspace is not None
and colorspace_exists_on_node(node, str(colorspace))
):
node["colorspace"].setValue(str(colorspace))
output_color = str(colorspace)
elif iio_colorspace is not None:
node["colorspace"].setValue(iio_colorspace)
output_color = iio_colorspace
return output_color
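The new `_set_colorspace` method centralizes the fallback chain that was previously inlined in `load`. A standalone sketch of just the priority order, omitting the Nuke node calls and the `colorspace_exists_on_node` guard (which need a running Nuke session):

```python
def resolve_colorspace(repre_data, version_data, iio_colorspace):
    """Same fallback order as _set_colorspace: representation data
    first, then version data, then the regex-matched imageio input
    colorspace from project settings."""
    colorspace = repre_data.get("colorspace") or version_data.get("colorspace")
    if colorspace is not None:
        return str(colorspace)
    return iio_colorspace

# The representation value wins over the version and regex fallbacks.
print(resolve_colorspace({"colorspace": "ACEScg"}, {"colorspace": "sRGB"}, "Rec709"))
# ACEScg
```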


@ -0,0 +1,11 @@
import os
from openpype.api import Logger
log = Logger().get_logger(__name__)
def clear_rendered(dir_path):
for _f in os.listdir(dir_path):
_f_path = os.path.join(dir_path, _f)
log.info("Removing: `{}`".format(_f_path))
os.remove(_f_path)
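A usage sketch of the new `clear_rendered` helper; note it assumes a flat directory of files, since `os.remove` raises if it hits a subdirectory:

```python
import os
import tempfile

def clear_rendered(dir_path):
    # Same logic as the helper above (logging omitted); os.remove()
    # raises OSError if an entry is a subdirectory.
    for _f in os.listdir(dir_path):
        _f_path = os.path.join(dir_path, _f)
        os.remove(_f_path)

tmp = tempfile.mkdtemp()
open(os.path.join(tmp, "frame.0001.exr"), "w").close()
clear_rendered(tmp)
print(os.listdir(tmp))  # []
```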


@ -10,6 +10,7 @@ import avalon.api
from openpype.api import Logger
from openpype.tools.utils import host_tools
from openpype.lib.remote_publish import headless_publish
from openpype.lib import env_value_to_bool
from .launch_logic import ProcessLauncher, stub
@ -34,20 +35,19 @@ def main(*subprocess_args):
launcher = ProcessLauncher(subprocess_args)
launcher.start()
if os.environ.get("HEADLESS_PUBLISH"):
if env_value_to_bool("HEADLESS_PUBLISH"):
launcher.execute_in_main_thread(
headless_publish,
log,
"ClosePS",
os.environ.get("IS_TEST")
)
elif os.environ.get("AVALON_PHOTOSHOP_WORKFILES_ON_LAUNCH", True):
save = False
if os.getenv("WORKFILES_SAVE_AS"):
save = True
elif env_value_to_bool("AVALON_PHOTOSHOP_WORKFILES_ON_LAUNCH",
default=True):
launcher.execute_in_main_thread(
host_tools.show_workfiles, save=save
host_tools.show_workfiles,
save=env_value_to_bool("WORKFILES_SAVE_AS")
)
sys.exit(app.exec_())
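The switch from `os.environ.get` to `env_value_to_bool` matters because any non-empty string, including `"0"` or `"false"`, is truthy in Python. A hedged sketch of the helper's assumed semantics (the real implementation lives in `openpype.lib`):

```python
import os

def env_value_to_bool(env_key, default=False):
    """Assumed semantics: only common truthy strings count as True,
    so HEADLESS_PUBLISH="0" no longer enables headless publishing."""
    value = os.environ.get(env_key)
    if value is None:
        return default
    return value.strip().lower() in ("1", "true", "yes", "on")

os.environ["HEADLESS_PUBLISH"] = "0"
os.environ.pop("OP_UNSET_EXAMPLE_KEY", None)
# A bare os.environ.get() check would be truthy here; the helper is not.
print(env_value_to_bool("HEADLESS_PUBLISH"))  # False
print(env_value_to_bool("OP_UNSET_EXAMPLE_KEY", default=True))  # True
```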


@ -17,9 +17,7 @@ class CreateImage(openpype.api.Creator):
create_group = False
stub = photoshop.stub()
useSelection = False
if (self.options or {}).get("useSelection"):
useSelection = True
multiple_instances = False
selection = stub.get_selected_layers()
self.log.info("selection {}".format(selection))
@ -64,8 +62,7 @@ class CreateImage(openpype.api.Creator):
# No selection creates an empty group.
create_group = True
else:
stub.select_layers(stub.get_layers())
group = stub.group_selected_layers(self.name)
group = stub.create_group(self.name)
groups.append(group)
if create_group:
@ -77,16 +74,15 @@ class CreateImage(openpype.api.Creator):
group = stub.group_selected_layers(layer.name)
groups.append(group)
creator_subset_name = self.data["subset"]
for group in groups:
long_names = []
group.name = group.name.replace(stub.PUBLISH_ICON, ''). \
replace(stub.LOADED_ICON, '')
if useSelection:
subset_name = self.data["subset"] + group.name
else:
# use value provided by user from Creator
subset_name = self.data["subset"]
subset_name = creator_subset_name
if len(groups) > 1:
subset_name += group.name.title().replace(" ", "")
if group.long_name:
for directory in group.long_name[::-1]:


@ -9,7 +9,7 @@ from openpype.hosts.photoshop import api as photoshop
class CollectRemoteInstances(pyblish.api.ContextPlugin):
"""Gather instances configured color code of a layer.
"""Creates instances for configured color code of a layer.
Used in remote publishing when an artist marks publishable layers by
color-coding.
@ -46,6 +46,11 @@ class CollectRemoteInstances(pyblish.api.ContextPlugin):
stub = photoshop.stub()
layers = stub.get_layers()
existing_subset_names = []
for instance in context:
if instance.data.get('publish'):
existing_subset_names.append(instance.data.get('subset'))
asset, task_name, task_type = get_batch_asset_task_info(
task_data["context"])
@ -55,6 +60,10 @@ class CollectRemoteInstances(pyblish.api.ContextPlugin):
instance_names = []
for layer in layers:
self.log.debug("Layer:: {}".format(layer))
if layer.parents:
self.log.debug("!!! Not a top layer, skip")
continue
resolved_family, resolved_subset_template = self._resolve_mapping(
layer
)
@ -66,8 +75,19 @@ class CollectRemoteInstances(pyblish.api.ContextPlugin):
self.log.debug("!!! Not found family or template, skip")
continue
if layer.parents:
self.log.debug("!!! Not a top layer, skip")
fill_pairs = {
"variant": variant,
"family": resolved_family,
"task": task_name,
"layer": layer.name
}
subset = resolved_subset_template.format(
**prepare_template_data(fill_pairs))
if subset in existing_subset_names:
self.log.info(
"Subset {} already created, skipping.".format(subset))
continue
instance = context.create_instance(layer.name)
@ -76,15 +96,6 @@ class CollectRemoteInstances(pyblish.api.ContextPlugin):
instance.data["publish"] = layer.visible
instance.data["asset"] = asset
instance.data["task"] = task_name
fill_pairs = {
"variant": variant,
"family": instance.data["family"],
"task": instance.data["task"],
"layer": layer.name
}
subset = resolved_subset_template.format(
**prepare_template_data(fill_pairs))
instance.data["subset"] = subset
instance_names.append(layer.name)
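The subset name is now built before the instance is created, so already-published subsets can be skipped early. A sketch of the formatting step, with an assumed simplified `prepare_template_data` that exposes a capitalized variant of each key (the real helper is richer):

```python
def prepare_template_data(fill_pairs):
    """Assumed behavior: expose each value plus a capitalized variant
    of each key, so templates like "{family}{Variant}" work."""
    output = {}
    for key, value in fill_pairs.items():
        output[key] = value
        output[key.capitalize()] = value.capitalize()
    return output

fill_pairs = {"variant": "main", "family": "image",
              "task": "compositing", "layer": "BG"}
# Hypothetical template; the real one comes from plugin settings.
subset = "{family}{Variant}".format(**prepare_template_data(fill_pairs))
print(subset)  # imageMain
```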


@ -1,3 +1,4 @@
import collections
import pyblish.api
import openpype.api
@ -16,11 +17,14 @@ class ValidateSubsetUniqueness(pyblish.api.ContextPlugin):
subset_names = []
for instance in context:
self.log.info("instance:: {}".format(instance.data))
if instance.data.get('publish'):
subset_names.append(instance.data.get('subset'))
msg = (
"Instance subset names are not unique. " +
"Remove duplicates via SubsetManager."
)
assert len(subset_names) == len(set(subset_names)), msg
non_unique = \
[item
for item, count in collections.Counter(subset_names).items()
if count > 1]
msg = ("Instance subset names {} are not unique. ".format(non_unique) +
"Remove duplicates via SubsetManager.")
assert not non_unique, msg
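The validator now reports which names collide instead of failing with a bare length comparison. The `Counter`-based check in isolation:

```python
import collections

def find_non_unique(subset_names):
    # Same Counter-based check as the validator above.
    return [item for item, count in collections.Counter(subset_names).items()
            if count > 1]

names = ["imageMain", "imageBg", "imageMain"]
print(find_non_unique(names))  # ['imageMain']
```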


@ -11,7 +11,7 @@ class MyAutoCreator(AutoCreator):
identifier = "workfile"
family = "workfile"
def get_attribute_defs(self):
def get_instance_attr_defs(self):
output = [
lib.NumberDef("number_key", label="Number")
]


@ -1,3 +1,4 @@
import json
from openpype import resources
from openpype.hosts.testhost.api import pipeline
from openpype.pipeline import (
@ -13,6 +14,8 @@ class TestCreatorOne(Creator):
family = "test"
description = "Testing creator of testhost"
create_allow_context_change = False
def get_icon(self):
return resources.get_openpype_splash_filepath()
@ -33,7 +36,10 @@ class TestCreatorOne(Creator):
for instance in instances:
self._remove_instance_from_context(instance)
def create(self, subset_name, data, options=None):
def create(self, subset_name, data, pre_create_data):
print("Data that can be used in create:\n{}".format(
json.dumps(pre_create_data, indent=4)
))
new_instance = CreatedInstance(self.family, subset_name, data, self)
pipeline.HostContext.add_instance(new_instance.data_to_store())
self.log.info(new_instance.data)
@ -46,9 +52,21 @@ class TestCreatorOne(Creator):
"different_variant"
]
def get_attribute_defs(self):
def get_instance_attr_defs(self):
output = [
lib.NumberDef("number_key", label="Number")
lib.NumberDef("number_key", label="Number"),
]
return output
def get_pre_create_attr_defs(self):
output = [
lib.BoolDef("use_selection", label="Use selection"),
lib.UISeparatorDef(),
lib.UILabelDef("Testing label"),
lib.FileDef("filepath", folders=True, label="Filepath"),
lib.FileDef(
"filepath_2", multipath=True, folders=True, label="Filepath 2"
)
]
return output


@ -15,7 +15,7 @@ class TestCreatorTwo(Creator):
def get_icon(self):
return "cube"
def create(self, subset_name, data, options=None):
def create(self, subset_name, data, pre_create_data):
new_instance = CreatedInstance(self.family, subset_name, data, self)
pipeline.HostContext.add_instance(new_instance.data_to_store())
self.log.info(new_instance.data)
@ -38,7 +38,7 @@ class TestCreatorTwo(Creator):
for instance in instances:
self._remove_instance_from_context(instance)
def get_attribute_defs(self):
def get_instance_attr_defs(self):
output = [
lib.NumberDef("number_key"),
lib.TextDef("text_key")


@ -19,7 +19,7 @@ class CollectContextDataTestHost(
hosts = ["testhost"]
@classmethod
def get_attribute_defs(cls):
def get_instance_attr_defs(cls):
return [
attribute_definitions.BoolDef(
"test_bool",


@ -20,7 +20,7 @@ class CollectInstanceOneTestHost(
hosts = ["testhost"]
@classmethod
def get_attribute_defs(cls):
def get_instance_attr_defs(cls):
return [
attribute_definitions.NumberDef(
"version",


@ -5,7 +5,7 @@ from openpype.hosts.tvpaint.api import lib, plugin
class ImportImage(plugin.Loader):
"""Load image or image sequence to TVPaint as new layer."""
families = ["render", "image", "background", "plate"]
families = ["render", "image", "background", "plate", "review"]
representations = ["*"]
label = "Import Image"


@ -7,7 +7,7 @@ from openpype.hosts.tvpaint.api import lib, pipeline, plugin
class LoadImage(plugin.Loader):
"""Load image or image sequence to TVPaint as new layer."""
families = ["render", "image", "background", "plate"]
families = ["render", "image", "background", "plate", "review"]
representations = ["*"]
label = "Load Image"


@ -76,6 +76,7 @@ class RenderInstance(object):
deadlineSubmissionJob = attr.ib(default=None)
anatomyData = attr.ib(default=None)
outputDir = attr.ib(default=None)
context = attr.ib(default=None)
@frameStart.validator
def check_frame_start(self, _, value):


@ -1040,10 +1040,19 @@ class ApplicationLaunchContext:
# Prepare data that will be passed to midprocess
# - store arguments to a json and pass path to json as last argument
# - pass environments to set
app_env = self.kwargs.pop("env", {})
json_data = {
"args": self.launch_args,
"env": self.kwargs.pop("env", {})
"env": app_env
}
if app_env:
# Filter environments of subprocess
self.kwargs["env"] = {
key: value
for key, value in os.environ.items()
if key in app_env
}
# Create temp file
json_temp = tempfile.NamedTemporaryFile(
mode="w", prefix="op_app_args", suffix=".json", delete=False
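The new branch filters the subprocess environment down to the keys the prepared application environment defines, while the values still come from the current process. A self-contained sketch (`current_env` stands in for `os.environ`; the values are illustrative):

```python
def filter_subprocess_env(app_env, current_env):
    """Keep only variables that the prepared app environment also
    defines; values still come from the current process environment."""
    return {key: value for key, value in current_env.items()
            if key in app_env}

filtered = filter_subprocess_env(
    app_env={"PATH": "/custom/bin", "AVALON_PROJECT": "demo"},
    current_env={"PATH": "/usr/bin", "SECRET": "x", "AVALON_PROJECT": "demo"},
)
print(sorted(filtered))  # ['AVALON_PROJECT', 'PATH']
print(filtered["PATH"])  # /usr/bin
```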


@ -0,0 +1,247 @@
"""These lib functions are primarily for development purposes.
WARNING: This is not meant for production data.
The goal is to be able to create a package of the current state of a project,
with related documents from mongo and files from disk, as a zip file, and
then be able to recreate the project from that zip.
This gives the ability to create a project where changes and tests can be done.
Keep in mind that creating a package of a project has a few requirements.
Possible requirements are listed in the 'pack_project' function.
"""
import os
import json
import platform
import tempfile
import shutil
import datetime
import zipfile
from bson.json_util import (
loads,
dumps,
CANONICAL_JSON_OPTIONS
)
from avalon.api import AvalonMongoDB
DOCUMENTS_FILE_NAME = "database"
METADATA_FILE_NAME = "metadata"
PROJECT_FILES_DIR = "project_files"
def add_timestamp(filepath):
"""Add timestamp string to a file."""
base, ext = os.path.splitext(filepath)
timestamp = datetime.datetime.now().strftime("%y%m%d_%H%M%S")
new_base = "{}_{}".format(base, timestamp)
return new_base + ext
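A usage sketch of `add_timestamp`, which both the packing and unpacking code use to rename an existing zip or project folder out of the way (the path is illustrative):

```python
import datetime
import os

def add_timestamp(filepath):
    # Same helper as above: insert a timestamp before the extension.
    base, ext = os.path.splitext(filepath)
    timestamp = datetime.datetime.now().strftime("%y%m%d_%H%M%S")
    return "{}_{}{}".format(base, timestamp, ext)

renamed = add_timestamp("/backups/demo.zip")
print(renamed)  # e.g. /backups/demo_220207_163900.zip
```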
def pack_project(project_name, destination_dir=None):
"""Make a package of a project with mongo documents and files.
This function has a few restrictions:
- project must have only one root
- project must have all templates starting with
"{root[...]}/{project[name]}"
Args:
project_name(str): Project that should be packaged.
destination_dir(str): Optional path where zip will be stored. Project's
root is used if not passed.
"""
print("Creating package of project \"{}\"".format(project_name))
# Validate existence of project
dbcon = AvalonMongoDB()
dbcon.Session["AVALON_PROJECT"] = project_name
project_doc = dbcon.find_one({"type": "project"})
if not project_doc:
raise ValueError("Project \"{}\" was not found in database".format(
project_name
))
roots = project_doc["config"]["roots"]
# Determine root directory of project
source_root = None
source_root_name = None
for root_name, root_value in roots.items():
if source_root is not None:
raise ValueError(
"Packaging is supported only for single root projects"
)
source_root = root_value
source_root_name = root_name
root_path = source_root[platform.system().lower()]
print("Using root \"{}\" with path \"{}\"".format(
source_root_name, root_path
))
project_source_path = os.path.join(root_path, project_name)
if not os.path.exists(project_source_path):
raise ValueError("Didn't find source of project files")
# Determine zip filepath where data will be stored
if not destination_dir:
destination_dir = root_path
destination_dir = os.path.normpath(destination_dir)
if not os.path.exists(destination_dir):
os.makedirs(destination_dir)
zip_path = os.path.join(destination_dir, project_name + ".zip")
print("Project will be packaged into \"{}\"".format(zip_path))
# Rename already existing zip
if os.path.exists(zip_path):
dst_filepath = add_timestamp(zip_path)
os.rename(zip_path, dst_filepath)
# We can add more data
metadata = {
"project_name": project_name,
"root": source_root,
"version": 1
}
# Create temp json file where metadata are stored
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as s:
temp_metadata_json = s.name
with open(temp_metadata_json, "w") as stream:
json.dump(metadata, stream)
# Create temp json file where database documents are stored
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as s:
temp_docs_json = s.name
# Query all project documents and store them to temp json
docs = list(dbcon.find({}))
data = dumps(
docs, json_options=CANONICAL_JSON_OPTIONS
)
with open(temp_docs_json, "w") as stream:
stream.write(data)
print("Packing files into zip")
# Write all to zip file
with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zip_stream:
# Add metadata file
zip_stream.write(temp_metadata_json, METADATA_FILE_NAME + ".json")
# Add database documents
zip_stream.write(temp_docs_json, DOCUMENTS_FILE_NAME + ".json")
# Add project files to zip
for root, _, filenames in os.walk(project_source_path):
for filename in filenames:
filepath = os.path.join(root, filename)
# TODO add one more folder
archive_name = os.path.join(
PROJECT_FILES_DIR,
os.path.relpath(filepath, root_path)
)
zip_stream.write(filepath, archive_name)
print("Cleaning up")
# Cleanup
os.remove(temp_docs_json)
os.remove(temp_metadata_json)
dbcon.uninstall()
print("*** Packing finished ***")
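The resulting archive layout can be sketched with an in-memory zip (file names follow the module constants above; the project path inside `project_files` is illustrative):

```python
import io
import json
import zipfile

# Layout written by pack_project:
#   metadata.json                 <- METADATA_FILE_NAME
#   database.json                 <- DOCUMENTS_FILE_NAME
#   project_files/<project>/...   <- PROJECT_FILES_DIR
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("metadata.json",
                json.dumps({"project_name": "demo", "version": 1}))
    zf.writestr("database.json", "[]")
    zf.writestr("project_files/demo/shots/sh010/workfile.ma", "")

with zipfile.ZipFile(buf, "r") as zf:
    print(zf.namelist())
```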
def unpack_project(path_to_zip, new_root=None):
"""Unpack project zip file to recreate project.
Args:
path_to_zip(str): Path to zip which was created using 'pack_project'
function.
new_root(str): Optional different root path for the unpacked
project.
"""
print("Unpacking project from zip {}".format(path_to_zip))
if not os.path.exists(path_to_zip):
print("Zip file does not exist: {}".format(path_to_zip))
return
tmp_dir = tempfile.mkdtemp(prefix="unpack_")
print("Zip is extracted to temp: {}".format(tmp_dir))
with zipfile.ZipFile(path_to_zip, "r") as zip_stream:
zip_stream.extractall(tmp_dir)
metadata_json_path = os.path.join(tmp_dir, METADATA_FILE_NAME + ".json")
with open(metadata_json_path, "r") as stream:
metadata = json.load(stream)
docs_json_path = os.path.join(tmp_dir, DOCUMENTS_FILE_NAME + ".json")
with open(docs_json_path, "r") as stream:
content = stream.readlines()
docs = loads("".join(content))
low_platform = platform.system().lower()
project_name = metadata["project_name"]
source_root = metadata["root"]
root_path = source_root[low_platform]
# Drop existing collection
dbcon = AvalonMongoDB()
database = dbcon.database
if project_name in database.list_collection_names():
database.drop_collection(project_name)
print("Removed existing project collection")
print("Creating project documents ({})".format(len(docs)))
# Create new collection with loaded docs
collection = database[project_name]
collection.insert_many(docs)
# Skip change of root if is the same as the one stored in metadata
if (
new_root
and (os.path.normpath(new_root) == os.path.normpath(root_path))
):
new_root = None
if new_root:
print("Using different root path {}".format(new_root))
root_path = new_root
project_doc = collection.find_one({"type": "project"})
roots = project_doc["config"]["roots"]
key = tuple(roots.keys())[0]
update_key = "config.roots.{}.{}".format(key, low_platform)
collection.update_one(
{"_id": project_doc["_id"]},
{"$set": {
update_key: new_root
}}
)
# Make sure root path exists
if not os.path.exists(root_path):
os.makedirs(root_path)
src_project_files_dir = os.path.join(
tmp_dir, PROJECT_FILES_DIR, project_name
)
dst_project_files_dir = os.path.normpath(
os.path.join(root_path, project_name)
)
if os.path.exists(dst_project_files_dir):
new_path = add_timestamp(dst_project_files_dir)
print("Project folder already exists. Renamed \"{}\" -> \"{}\"".format(
dst_project_files_dir, new_path
))
os.rename(dst_project_files_dir, new_path)
print("Moving project files from temp \"{}\" -> \"{}\"".format(
src_project_files_dir, dst_project_files_dir
))
shutil.move(src_project_files_dir, dst_project_files_dir)
# Cleanup
print("Cleaning up")
shutil.rmtree(tmp_dir)
dbcon.uninstall()
print("*** Unpack finished ***")


@ -88,7 +88,6 @@ def publish(log, close_plugin_name=None):
if close_plugin: # close host app explicitly after error
context = pyblish.api.Context()
close_plugin().process(context)
sys.exit(1)
def publish_and_log(dbcon, _id, log, close_plugin_name=None, batch_id=None):
@ -137,7 +136,7 @@ def publish_and_log(dbcon, _id, log, close_plugin_name=None, batch_id=None):
if close_plugin: # close host app explicitly after error
context = pyblish.api.Context()
close_plugin().process(context)
sys.exit(1)
return
elif processed % log_every == 0:
# pyblish returns progress in 0.0 - 2.0
progress = min(round(result["progress"] / 2 * 100), 99)
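The progress mapping in `publish_and_log` can be isolated: pyblish reports progress on a 0.0 - 2.0 scale, and the value is capped at 99 so 100 % is reserved for a fully finished publish:

```python
def publish_progress(pyblish_progress):
    # pyblish reports 0.0 - 2.0; map it to a 0-99 percentage and cap
    # at 99 so 100 % is only written once publishing fully succeeds.
    return min(round(pyblish_progress / 2 * 100), 99)

print(publish_progress(0.5))  # 25
print(publish_progress(2.0))  # 99
```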


@ -9,6 +9,8 @@ class CollectDefaultDeadlineServer(pyblish.api.ContextPlugin):
order = pyblish.api.CollectorOrder + 0.01
label = "Default Deadline Webservice"
pass_mongo_url = False
def process(self, context):
try:
deadline_module = context.data.get("openPypeModules")["deadline"]
@ -19,3 +21,5 @@ class CollectDefaultDeadlineServer(pyblish.api.ContextPlugin):
# get default deadline webservice url from deadline module
self.log.debug(deadline_module.deadline_urls)
context.data["defaultDeadline"] = deadline_module.deadline_urls["default"] # noqa: E501
context.data["deadlinePassMongoUrl"] = self.pass_mongo_url


@ -67,6 +67,9 @@ class AfterEffectsSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline
"OPENPYPE_DEV",
"OPENPYPE_LOG_NO_COLORS"
]
# Add mongo url if it's enabled
if self._instance.context.data.get("deadlinePassMongoUrl"):
keys.append("OPENPYPE_MONGO")
environment = dict({key: os.environ[key] for key in keys
if key in os.environ}, **api.Session)
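Together, the collector flag and this submitter change let a Deadline job opt into receiving `OPENPYPE_MONGO`. A sketch of the environment build (`session` stands in for `api.Session`; values are illustrative):

```python
import os

def build_job_environment(keys, pass_mongo_url, session):
    """Mirror of the submitter snippet: optionally forward
    OPENPYPE_MONGO, then keep only keys present in os.environ."""
    keys = list(keys)
    if pass_mongo_url:
        keys.append("OPENPYPE_MONGO")
    return dict(
        {key: os.environ[key] for key in keys if key in os.environ},
        **session
    )

os.environ["OPENPYPE_MONGO"] = "mongodb://localhost:27017"
os.environ.pop("OPENPYPE_DEV", None)
env = build_job_environment(["OPENPYPE_DEV"], True, {"AVALON_PROJECT": "demo"})
print(sorted(env))  # ['AVALON_PROJECT', 'OPENPYPE_MONGO']
```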

Some files were not shown because too many files have changed in this diff.