Merge branch 'develop' into enhancement/OP-3075_houdini-new-publisher

Ondrej Samohel 2022-09-15 12:09:57 +02:00
commit 9ed7e51ac7
No known key found for this signature in database
GPG key ID: 02376E18990A97C6
124 changed files with 3172 additions and 2206 deletions

.gitignore (vendored): 3 changed lines
View file

@ -107,3 +107,6 @@ website/.docusaurus
mypy.ini
tools/run_eventserver.*
# Developer tools
tools/dev_*

View file

@ -1,8 +1,22 @@
# Changelog
## [3.14.2-nightly.2](https://github.com/pypeclub/OpenPype/tree/HEAD)
## [3.14.3-nightly.1](https://github.com/pypeclub/OpenPype/tree/HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.1...HEAD)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/3.14.2...HEAD)
**🚀 Enhancements**
- Publisher: Add new publisher to host tools [\#3833](https://github.com/pypeclub/OpenPype/pull/3833)
- Maya: Workspace mel loaded from settings [\#3790](https://github.com/pypeclub/OpenPype/pull/3790)
**🐛 Bug fixes**
- Ftrack: Url validation does not require ftrackapp [\#3834](https://github.com/pypeclub/OpenPype/pull/3834)
- Maya+Ftrack: Change typo in family name `mayaascii` -\> `mayaAscii` [\#3820](https://github.com/pypeclub/OpenPype/pull/3820)
## [3.14.2](https://github.com/pypeclub/OpenPype/tree/3.14.2) (2022-09-12)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.2-nightly.5...3.14.2)
**🆕 New features**
@ -11,32 +25,50 @@
**🚀 Enhancements**
- Flame: Adding Creator's retimed shot and handles switch [\#3826](https://github.com/pypeclub/OpenPype/pull/3826)
- Flame: OpenPype submenu to batch and media manager [\#3825](https://github.com/pypeclub/OpenPype/pull/3825)
- General: Better pixmap scaling [\#3809](https://github.com/pypeclub/OpenPype/pull/3809)
- Photoshop: attempt to speed up ExtractImage [\#3793](https://github.com/pypeclub/OpenPype/pull/3793)
- SyncServer: Added cli commands for sync server [\#3765](https://github.com/pypeclub/OpenPype/pull/3765)
- Maya: move set render settings menu entry [\#3669](https://github.com/pypeclub/OpenPype/pull/3669)
- Scene Inventory: Maya add actions to select from or to scene [\#3659](https://github.com/pypeclub/OpenPype/pull/3659)
- Kitsu: Drop 'entities root' setting. [\#3739](https://github.com/pypeclub/OpenPype/pull/3739)
- git: update gitignore [\#3722](https://github.com/pypeclub/OpenPype/pull/3722)
**🐛 Bug fixes**
- General: Fix Pattern access in client code [\#3828](https://github.com/pypeclub/OpenPype/pull/3828)
- Launcher: Skip opening last work file works for groups [\#3822](https://github.com/pypeclub/OpenPype/pull/3822)
- Maya: Publishing data key change [\#3811](https://github.com/pypeclub/OpenPype/pull/3811)
- Igniter: Fix status handling when version is already installed [\#3804](https://github.com/pypeclub/OpenPype/pull/3804)
- Resolve: Addon import is Python 2 compatible [\#3798](https://github.com/pypeclub/OpenPype/pull/3798)
- Hiero: retimed clip publishing is working [\#3792](https://github.com/pypeclub/OpenPype/pull/3792)
- nuke: validate write node is not failing due wrong type [\#3780](https://github.com/pypeclub/OpenPype/pull/3780)
- Fix - changed format of version string in pyproject.toml [\#3777](https://github.com/pypeclub/OpenPype/pull/3777)
- Ftrack status fix typo prgoress -\> progress [\#3761](https://github.com/pypeclub/OpenPype/pull/3761)
- Fix version resolution [\#3757](https://github.com/pypeclub/OpenPype/pull/3757)
- Maya: `containerise` dont skip empty values [\#3674](https://github.com/pypeclub/OpenPype/pull/3674)
**🔀 Refactored code**
- Photoshop: Use new Extractor location [\#3789](https://github.com/pypeclub/OpenPype/pull/3789)
- Blender: Use new Extractor location [\#3787](https://github.com/pypeclub/OpenPype/pull/3787)
- AfterEffects: Use new Extractor location [\#3784](https://github.com/pypeclub/OpenPype/pull/3784)
- General: Remove unused teshost [\#3773](https://github.com/pypeclub/OpenPype/pull/3773)
- General: Copied 'Extractor' plugin to publish pipeline [\#3771](https://github.com/pypeclub/OpenPype/pull/3771)
- General: Move queries of asset and representation links [\#3770](https://github.com/pypeclub/OpenPype/pull/3770)
- General: Move create project folders to pipeline [\#3768](https://github.com/pypeclub/OpenPype/pull/3768)
- General: Create project function moved to client code [\#3766](https://github.com/pypeclub/OpenPype/pull/3766)
- Maya: Refactor submit deadline to use AbstractSubmitDeadline [\#3759](https://github.com/pypeclub/OpenPype/pull/3759)
- General: Change publish template settings location [\#3755](https://github.com/pypeclub/OpenPype/pull/3755)
- General: Move hostdirname functionality into host [\#3749](https://github.com/pypeclub/OpenPype/pull/3749)
- General: Move publish utils to pipeline [\#3745](https://github.com/pypeclub/OpenPype/pull/3745)
- Houdini: Define houdini as addon [\#3735](https://github.com/pypeclub/OpenPype/pull/3735)
- Fusion: Defined fusion as addon [\#3733](https://github.com/pypeclub/OpenPype/pull/3733)
- Flame: Defined flame as addon [\#3732](https://github.com/pypeclub/OpenPype/pull/3732)
- Resolve: Define resolve as addon [\#3727](https://github.com/pypeclub/OpenPype/pull/3727)
**Merged pull requests:**
- Standalone Publisher: Ignore empty labels, then still use name like other asset models [\#3779](https://github.com/pypeclub/OpenPype/pull/3779)
- Kitsu - sync\_all\_project - add list ignore\_projects [\#3776](https://github.com/pypeclub/OpenPype/pull/3776)
## [3.14.1](https://github.com/pypeclub/OpenPype/tree/3.14.1) (2022-08-30)
@ -45,12 +77,6 @@
### 📖 Documentation
- Documentation: Few updates [\#3698](https://github.com/pypeclub/OpenPype/pull/3698)
- Documentation: Settings development [\#3660](https://github.com/pypeclub/OpenPype/pull/3660)
**🆕 New features**
- Webpublisher: change create flatten image into tri state [\#3678](https://github.com/pypeclub/OpenPype/pull/3678)
- Blender: validators code correction with settings and defaults [\#3662](https://github.com/pypeclub/OpenPype/pull/3662)
**🚀 Enhancements**
@ -59,9 +85,6 @@
- General: Added helper getters to modules manager [\#3712](https://github.com/pypeclub/OpenPype/pull/3712)
- Unreal: Define unreal as module and use host class [\#3701](https://github.com/pypeclub/OpenPype/pull/3701)
- Settings: Lock settings UI session [\#3700](https://github.com/pypeclub/OpenPype/pull/3700)
- General: Benevolent context label collector [\#3686](https://github.com/pypeclub/OpenPype/pull/3686)
- Ftrack: Store ftrack entities on hierarchy integration to instances [\#3677](https://github.com/pypeclub/OpenPype/pull/3677)
- Blender: ops refresh manager after process events [\#3663](https://github.com/pypeclub/OpenPype/pull/3663)
**🐛 Bug fixes**
@ -75,8 +98,6 @@
- Settings: Fix project overrides save [\#3708](https://github.com/pypeclub/OpenPype/pull/3708)
- Workfiles tool: Fix published workfile filtering [\#3704](https://github.com/pypeclub/OpenPype/pull/3704)
- PS, AE: Provide default variant value for workfile subset [\#3703](https://github.com/pypeclub/OpenPype/pull/3703)
- Flame: retime is working on clip publishing [\#3684](https://github.com/pypeclub/OpenPype/pull/3684)
- Webpublisher: added check for empty context [\#3682](https://github.com/pypeclub/OpenPype/pull/3682)
**🔀 Refactored code**
@ -104,38 +125,11 @@
- Hiero: Define hiero as module [\#3717](https://github.com/pypeclub/OpenPype/pull/3717)
- Deadline: better logging for DL webservice failures [\#3694](https://github.com/pypeclub/OpenPype/pull/3694)
- Photoshop: resize saved images in ExtractReview for ffmpeg [\#3676](https://github.com/pypeclub/OpenPype/pull/3676)
## [3.14.0](https://github.com/pypeclub/OpenPype/tree/3.14.0) (2022-08-18)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.14.0-nightly.1...3.14.0)
**🚀 Enhancements**
- Ftrack: Additional component metadata [\#3685](https://github.com/pypeclub/OpenPype/pull/3685)
- Ftrack: Set task status on farm publishing [\#3680](https://github.com/pypeclub/OpenPype/pull/3680)
- Ftrack: Set task status on task creation in integrate hierarchy [\#3675](https://github.com/pypeclub/OpenPype/pull/3675)
- Maya: Disable rendering of all lights for render instances submitted through Deadline. [\#3661](https://github.com/pypeclub/OpenPype/pull/3661)
- General: Optimized OCIO configs [\#3650](https://github.com/pypeclub/OpenPype/pull/3650)
**🐛 Bug fixes**
- General: Switch from hero version to versioned works [\#3691](https://github.com/pypeclub/OpenPype/pull/3691)
- General: Fix finding of last version [\#3656](https://github.com/pypeclub/OpenPype/pull/3656)
- General: Extract Review can scale with pixel aspect ratio [\#3644](https://github.com/pypeclub/OpenPype/pull/3644)
- Maya: Refactor moved usage of CreateRender settings [\#3643](https://github.com/pypeclub/OpenPype/pull/3643)
**🔀 Refactored code**
- General: Use client projects getter [\#3673](https://github.com/pypeclub/OpenPype/pull/3673)
- Resolve: Match folder structure to other hosts [\#3653](https://github.com/pypeclub/OpenPype/pull/3653)
- Maya: Hosts as modules [\#3647](https://github.com/pypeclub/OpenPype/pull/3647)
**Merged pull requests:**
- Deadline: Global job pre load is not Pype 2 compatible [\#3666](https://github.com/pypeclub/OpenPype/pull/3666)
- Maya: Remove unused get current renderer logic [\#3645](https://github.com/pypeclub/OpenPype/pull/3645)
## [3.13.0](https://github.com/pypeclub/OpenPype/tree/3.13.0) (2022-08-09)
[Full Changelog](https://github.com/pypeclub/OpenPype/compare/CI/3.13.0-nightly.1...3.13.0)

View file

@ -41,7 +41,7 @@ It can be built and run on all common platforms. We develop and test on the foll
- **Linux**
- **Ubuntu** 20.04 LTS
- **Centos** 7
- **Mac OSX**
- **10.15** Catalina
- **11.1** Big Sur (using Rosetta2)
@ -287,6 +287,14 @@ To run tests, execute `.\tools\run_tests(.ps1|.sh)`.
**Note that it needs an existing virtual environment.**
Developer tools
-------------
In case you wish to add your own tools to the `.\tools` folder without git tracking them, you can do so by naming them with the `dev_` prefix (example: `dev_clear_pyc(.ps1|.sh)`).
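For illustration, a hypothetical `tools/dev_list_pyc.py` helper would match the new `tools/dev_*` ignore rule:

    # tools/dev_list_pyc.py - hypothetical developer helper, ignored by git
    import pathlib

    for pyc in pathlib.Path(".").rglob("*.pyc"):
        print(pyc)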
## Contributors ✨
Thanks goes to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):

View file

@ -388,8 +388,11 @@ class InstallDialog(QtWidgets.QDialog):
install_thread.start()
def _installation_finished(self):
# TODO we should find out why status can be set to 'None'?
# - 'InstallThread.run' should handle all cases so not sure where
# that comes from
status = self._install_thread.result()
if status >= 0:
if status is not None and status >= 0:
self._update_progress(100)
QtWidgets.QApplication.processEvents()
self.done(3)

View file

@ -14,6 +14,8 @@ from bson.objectid import ObjectId
from .mongo import get_project_database, get_project_connection
PatternType = type(re.compile(""))
def _prepare_fields(fields, required_fields=None):
if not fields:
@ -1054,11 +1056,11 @@ def _regex_filters(filters):
for key, value in filters.items():
regexes = []
a_values = []
if isinstance(value, re.Pattern):
if isinstance(value, PatternType):
regexes.append(value)
elif isinstance(value, (list, tuple, set)):
for item in value:
if isinstance(item, re.Pattern):
if isinstance(item, PatternType):
regexes.append(item)
else:
a_values.append(item)
@ -1194,7 +1196,7 @@ def get_representations(
as filter. Filter ignored if 'None' is passed.
version_ids (Iterable[str]): Version ids used as parent filter. Filter
ignored if 'None' is passed.
context_filters (Dict[str, List[str, re.Pattern]]): Filter by
context_filters (Dict[str, List[str, PatternType]]): Filter by
representation context fields.
names_by_version_ids (dict[ObjectId, list[str]]): Complex filtering
using version ids and list of names under the version.
@ -1240,7 +1242,7 @@ def get_archived_representations(
as filter. Filter ignored if 'None' is passed.
version_ids (Iterable[str]): Version ids used as parent filter. Filter
ignored if 'None' is passed.
context_filters (Dict[str, List[str, re.Pattern]]): Filter by
context_filters (Dict[str, List[str, PatternType]]): Filter by
representation context fields.
names_by_version_ids (dict[ObjectId, List[str]]): Complex filtering
using version ids and list of names under the version.
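For context, `re.Pattern` only became a public attribute of the `re` module in Python 3.8, so deriving the type from a compiled pattern keeps the isinstance checks above working on Python 2 and older Python 3 interpreters alike. A minimal sketch of the idiom:

import re

# Portable alias for the compiled-pattern type; re.Pattern itself is
# unavailable before Python 3.8.
PatternType = type(re.compile(""))

assert isinstance(re.compile(r"v\d+"), PatternType)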

View file

@ -1,8 +1,6 @@
import os
from openpype.lib import (
PreLaunchHook,
create_workdir_extra_folders
)
from openpype.lib import PreLaunchHook
from openpype.pipeline.workfile import create_workdir_extra_folders
class AddLastWorkfileToLaunchArgs(PreLaunchHook):

View file

@ -1,113 +0,0 @@
import os
import collections
from pprint import pformat
import pyblish.api
from openpype.client import (
get_subsets,
get_last_versions,
get_representations
)
from openpype.pipeline import legacy_io
class AppendCelactionAudio(pyblish.api.ContextPlugin):
label = "Colect Audio for publishing"
order = pyblish.api.CollectorOrder + 0.1
def process(self, context):
self.log.info('Collecting Audio Data')
asset_doc = context.data["assetEntity"]
# get all available representations
subsets = self.get_subsets(
asset_doc,
representations=["audio", "wav"]
)
self.log.info(f"subsets is: {pformat(subsets)}")
if not subsets.get("audioMain"):
raise AttributeError("`audioMain` subset does not exist")
reprs = subsets.get("audioMain", {}).get("representations", [])
self.log.info(f"reprs is: {pformat(reprs)}")
repr = next((r for r in reprs), None)
if not repr:
raise "Missing `audioMain` representation"
self.log.info(f"representation is: {repr}")
audio_file = repr.get('data', {}).get('path', "")
if os.path.exists(audio_file):
context.data["audioFile"] = audio_file
self.log.info(
'audio_file: {}, has been added to context'.format(audio_file))
else:
self.log.warning("Couldn't find any audio file on Ftrack.")
def get_subsets(self, asset_doc, representations):
"""
Query subsets with filter on name.
The method will return all found subsets with their last version
and representations. Representations can be filtered by name.
Arguments:
asset_doc (dict): Asset (shot) mongo document
representations (list): Representation names to filter by
Returns:
dict: subsets with version and representations in keys
"""
# Query all subsets for asset
project_name = legacy_io.active_project()
subset_docs = get_subsets(
project_name, asset_ids=[asset_doc["_id"]], fields=["_id"]
)
# Collect all subset ids
subset_ids = [
subset_doc["_id"]
for subset_doc in subset_docs
]
# Check if we found anything
assert subset_ids, (
"No subsets found. Check correct filter. "
"Try this for start `r'.*'`: asset: `{}`"
).format(asset_doc["name"])
last_versions_by_subset_id = get_last_versions(
project_name, subset_ids, fields=["_id", "parent"]
)
version_docs_by_id = {}
for version_doc in last_versions_by_subset_id.values():
version_docs_by_id[version_doc["_id"]] = version_doc
repre_docs = get_representations(
project_name,
version_ids=version_docs_by_id.keys(),
representation_names=representations
)
repre_docs_by_version_id = collections.defaultdict(list)
for repre_doc in repre_docs:
version_id = repre_doc["parent"]
repre_docs_by_version_id[version_id].append(repre_doc)
output_dict = {}
for version_id, repre_docs in repre_docs_by_version_id.items():
version_doc = version_docs_by_id[version_id]
subset_id = version_doc["parent"]
subset_doc = last_versions_by_subset_id[subset_id]
# Store queried docs by subset name
output_dict[subset_doc["name"]] = {
"representations": repre_docs,
"version": version_doc
}
return output_dict

View file

@ -51,7 +51,8 @@ from .pipeline import (
)
from .menu import (
FlameMenuProjectConnect,
FlameMenuTimeline
FlameMenuTimeline,
FlameMenuUniversal
)
from .plugin import (
Creator,
@ -131,6 +132,7 @@ __all__ = [
# menu
"FlameMenuProjectConnect",
"FlameMenuTimeline",
"FlameMenuUniversal",
# plugin
"Creator",

View file

@ -201,3 +201,53 @@ class FlameMenuTimeline(_FlameMenuApp):
if self.flame:
self.flame.execute_shortcut('Rescan Python Hooks')
self.log.info('Rescan Python Hooks')
class FlameMenuUniversal(_FlameMenuApp):
# flameMenuProjectconnect app takes care of the preferences dialog as well
def __init__(self, framework):
_FlameMenuApp.__init__(self, framework)
def __getattr__(self, name):
def method(*args, **kwargs):
project = self.dynamic_menu_data.get(name)
if project:
self.link_project(project)
return method
def build_menu(self):
if not self.flame:
return []
menu = deepcopy(self.menu)
menu['actions'].append({
"name": "Load...",
"execute": lambda x: self.tools_helper.show_loader()
})
menu['actions'].append({
"name": "Manage...",
"execute": lambda x: self.tools_helper.show_scene_inventory()
})
menu['actions'].append({
"name": "Library...",
"execute": lambda x: self.tools_helper.show_library_loader()
})
return menu
def refresh(self, *args, **kwargs):
self.rescan()
def rescan(self, *args, **kwargs):
if not self.flame:
try:
import flame
self.flame = flame
except ImportError:
self.flame = None
if self.flame:
self.flame.execute_shortcut('Rescan Python Hooks')
self.log.info('Rescan Python Hooks')

View file

@ -361,6 +361,8 @@ class PublishableClip:
index_from_segment_default = False
use_shot_name_default = False
include_handles_default = False
retimed_handles_default = True
retimed_framerange_default = True
def __init__(self, segment, **kwargs):
self.rename_index = kwargs["rename_index"]
@ -496,6 +498,14 @@ class PublishableClip:
"audio", {}).get("value") or False
self.include_handles = self.ui_inputs.get(
"includeHandles", {}).get("value") or self.include_handles_default
self.retimed_handles = (
self.ui_inputs.get("retimedHandles", {}).get("value")
or self.retimed_handles_default
)
self.retimed_framerange = (
self.ui_inputs.get("retimedFramerange", {}).get("value")
or self.retimed_framerange_default
)
# build subset name from layer name
if self.subset_name == "[ track name ]":

View file

@ -276,6 +276,22 @@ class CreateShotClip(opfapi.Creator):
"target": "tag",
"toolTip": "By default handles are excluded", # noqa
"order": 3
},
"retimedHandles": {
"value": True,
"type": "QCheckBox",
"label": "Retimed handles",
"target": "tag",
"toolTip": "By default handles are retimed.", # noqa
"order": 4
},
"retimedFramerange": {
"value": True,
"type": "QCheckBox",
"label": "Retimed framerange",
"target": "tag",
"toolTip": "By default framerange is retimed.", # noqa
"order": 5
}
}
}

View file

@ -131,6 +131,10 @@ class CollectTimelineInstances(pyblish.api.ContextPlugin):
"fps": self.fps,
"workfileFrameStart": workfile_start,
"sourceFirstFrame": int(first_frame),
"notRetimedHandles": (
not marker_data.get("retimedHandles")),
"notRetimedFramerange": (
not marker_data.get("retimedFramerange")),
"path": file_path,
"flameAddTasks": self.add_tasks,
"tasks": {

View file

@ -90,26 +90,38 @@ class ExtractSubsetResources(openpype.api.Extractor):
handle_end = instance.data["handleEnd"]
handles = max(handle_start, handle_end)
include_handles = instance.data.get("includeHandles")
retimed_handles = instance.data.get("retimedHandles")
# get media source range with handles
source_start_handles = instance.data["sourceStartH"]
source_end_handles = instance.data["sourceEndH"]
# retime if needed
if r_speed != 1.0:
source_start_handles = (
instance.data["sourceStart"] - r_handle_start)
source_end_handles = (
source_start_handles
+ (r_source_dur - 1)
+ r_handle_start
+ r_handle_end
)
if retimed_handles:
# handles are retimed
source_start_handles = (
instance.data["sourceStart"] - r_handle_start)
source_end_handles = (
source_start_handles
+ (r_source_dur - 1)
+ r_handle_start
+ r_handle_end
)
else:
# handles are not retimed
source_end_handles = (
source_start_handles
+ (r_source_dur - 1)
+ handle_start
+ handle_end
)
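# Worked example (illustrative, not part of the commit): with
# instance.data["sourceStart"] = 1001, r_source_dur = 50 and 10-frame
# retimed handles on both sides, the retimed branch above yields
# source_start_handles = 1001 - 10 = 991 and
# source_end_handles = 991 + (50 - 1) + 10 + 10 = 1060.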
# get frame range with handles for representation range
frame_start_handle = frame_start - handle_start
repre_frame_start = frame_start_handle
if include_handles:
if r_speed == 1.0:
if r_speed == 1.0 or not retimed_handles:
frame_start_handle = frame_start
else:
frame_start_handle = (

View file

@ -73,6 +73,8 @@ def load_apps():
opfapi.FlameMenuProjectConnect(opfapi.CTX.app_framework))
opfapi.CTX.flame_apps.append(
opfapi.FlameMenuTimeline(opfapi.CTX.app_framework))
opfapi.CTX.flame_apps.append(
opfapi.FlameMenuUniversal(opfapi.CTX.app_framework))
opfapi.CTX.app_framework.log.info("Apps are loaded")
@ -191,3 +193,27 @@ def get_timeline_custom_ui_actions():
openpype_install()
return _build_app_menu("FlameMenuTimeline")
def get_batch_custom_ui_actions():
"""Hook to create submenu in batch
Returns:
list: menu object
"""
# install openpype and the host
openpype_install()
return _build_app_menu("FlameMenuUniversal")
def get_media_panel_custom_ui_actions():
"""Hook to create submenu in desktop
Returns:
list: menu object
"""
# install openpype and the host
openpype_install()
return _build_app_menu("FlameMenuUniversal")

View file

@ -0,0 +1,10 @@
from .addon import (
FusionAddon,
FUSION_HOST_DIR,
)
__all__ = (
"FusionAddon",
"FUSION_HOST_DIR",
)

View file

@ -0,0 +1,23 @@
import os
from openpype.modules import OpenPypeModule
from openpype.modules.interfaces import IHostAddon
FUSION_HOST_DIR = os.path.dirname(os.path.abspath(__file__))
class FusionAddon(OpenPypeModule, IHostAddon):
name = "fusion"
host_name = "fusion"
def initialize(self, module_settings):
self.enabled = True
def get_launch_hook_paths(self, app):
if app.host_name != self.host_name:
return []
return [
os.path.join(FUSION_HOST_DIR, "hooks")
]
def get_workfile_extensions(self):
return [".comp"]

View file

@ -18,12 +18,11 @@ from openpype.pipeline import (
deregister_inventory_action_path,
AVALON_CONTAINER_ID,
)
import openpype.hosts.fusion
from openpype.hosts.fusion import FUSION_HOST_DIR
log = Logger.get_logger(__name__)
HOST_DIR = os.path.dirname(os.path.abspath(openpype.hosts.fusion.__file__))
PLUGINS_DIR = os.path.join(HOST_DIR, "plugins")
PLUGINS_DIR = os.path.join(FUSION_HOST_DIR, "plugins")
PUBLISH_PATH = os.path.join(PLUGINS_DIR, "publish")
LOAD_PATH = os.path.join(PLUGINS_DIR, "load")

View file

@ -2,13 +2,11 @@
import sys
import os
from openpype.pipeline import HOST_WORKFILE_EXTENSIONS
from .pipeline import get_current_comp
def file_extensions():
return HOST_WORKFILE_EXTENSIONS["fusion"]
return [".comp"]
def has_unsaved_changes():

View file

@ -0,0 +1,114 @@
from bson.objectid import ObjectId
import pyblish.api
from openpype.pipeline import registered_host
def collect_input_containers(tools):
"""Collect containers that contain any of the node in `nodes`.
This will return any loaded Avalon container that contains at least one of
the nodes. As such, the Avalon container is an input for it; in short,
the nodes are members of that container.
Returns:
list: Input avalon containers
"""
# Lookup by node ids
lookup = frozenset([tool.Name for tool in tools])
containers = []
host = registered_host()
for container in host.ls():
name = container["_tool"].Name
# We currently assume no "groups" as containers but just single tools
# like a single "Loader" operator. As such we just check whether the
# Loader is part of the processing queue.
if name in lookup:
containers.append(container)
return containers
def iter_upstream(tool):
"""Yields all upstream inputs for the current tool.
Yields:
tool: The input tools.
"""
def get_connected_input_tools(tool):
"""Helper function that returns connected input tools for a tool."""
inputs = []
# Filter only to actual types that will have sensible upstream
# connections. So we ignore just "Number" inputs as they can be
# many to iterate, slowing things down quite a bit - and in practice
# they don't have upstream connections.
VALID_INPUT_TYPES = ['Image', 'Particles', 'Mask', 'DataType3D']
for type_ in VALID_INPUT_TYPES:
for input_ in tool.GetInputList(type_).values():
output = input_.GetConnectedOutput()
if output:
input_tool = output.GetTool()
inputs.append(input_tool)
return inputs
# Initialize process queue with the node's inputs itself
queue = get_connected_input_tools(tool)
# We keep track of which node names we have processed so far, to ensure we
# don't process the same hierarchy again. We are not pushing the tool
# itself into the set as that doesn't correctly recognize the same tool.
# Since tool names are unique in a comp in Fusion we rely on that.
collected = set(tool.Name for tool in queue)
# Traverse upstream references for all nodes and yield them as we
# process the queue.
while queue:
upstream_tool = queue.pop()
yield upstream_tool
# Find upstream tools that are not collected yet.
upstream_inputs = get_connected_input_tools(upstream_tool)
upstream_inputs = [t for t in upstream_inputs if
t.Name not in collected]
queue.extend(upstream_inputs)
collected.update(tool.Name for tool in upstream_inputs)
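# Example usage (illustrative, assuming a live Fusion scripting session
# with an active tool selected in the comp):
#   tool = comp.ActiveTool
#   print([t.Name for t in iter_upstream(tool)])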
class CollectUpstreamInputs(pyblish.api.InstancePlugin):
"""Collect source input containers used for this publish.
This will include `inputs` data of which loaded publishes were used in the
generation of this publish. This leaves an upstream trace to what was used
as input.
"""
label = "Collect Inputs"
order = pyblish.api.CollectorOrder + 0.2
hosts = ["fusion"]
def process(self, instance):
# Get all upstream and include itself
tool = instance[0]
nodes = list(iter_upstream(tool))
nodes.append(tool)
# Collect containers for the given set of nodes
containers = collect_input_containers(nodes)
inputs = [ObjectId(c["representation"]) for c in containers]
instance.data["inputRepresentations"] = inputs
self.log.info("Collected inputs: %s" % inputs)

View file

@ -318,10 +318,9 @@ class PrecollectInstances(pyblish.api.ContextPlugin):
@staticmethod
def create_otio_time_range_from_timeline_item_data(track_item):
speed = track_item.playbackSpeed()
timeline = phiero.get_current_sequence()
frame_start = int(track_item.timelineIn())
frame_duration = int((track_item.duration() - 1) / speed)
frame_duration = int(track_item.duration())
fps = timeline.framerate().toFloat()
return hiero_export.create_otio_time_range(

View file

@ -18,7 +18,7 @@ from openpype.pipeline import (
)
from openpype.pipeline.load import any_outdated_containers
from openpype.hosts.houdini import HOUDINI_HOST_DIR
from openpype.hosts.houdini.api import lib
from openpype.hosts.houdini.api import lib, shelves
from openpype.lib import (
register_event_callback,
@ -81,6 +81,7 @@ class HoudiniHost(HostBase, IWorkfileHost, ILoadHost, INewPublisher):
# TODO: make sure this doesn't trigger when
# opening with last workfile.
_set_context_settings()
shelves.generate_shelves()
def has_unsaved_changes(self):
return hou.hipFile.hasUnsavedChanges()

View file

@ -0,0 +1,204 @@
import os
import logging
import platform
import six
from openpype.settings import get_project_settings
import hou
log = logging.getLogger("openpype.hosts.houdini.shelves")
if six.PY2:
FileNotFoundError = IOError
def generate_shelves():
This function generates complete shelves, from shelf set down to tools,
in Houdini from the OpenPype project settings Houdini shelf definition.
Raises:
FileNotFoundError: Raised when the shelf set filepath does not exist
"""
current_os = platform.system().lower()
# load configuration of houdini shelves
project_settings = get_project_settings(os.getenv("AVALON_PROJECT"))
shelves_set_config = project_settings["houdini"]["shelves"]
if not shelves_set_config:
log.debug(
"No custom shelves found in project settings."
)
return
for shelf_set_config in shelves_set_config:
shelf_set_filepath = shelf_set_config.get('shelf_set_source_path')
if shelf_set_filepath[current_os]:
if not os.path.isfile(shelf_set_filepath[current_os]):
raise FileNotFoundError(
"This path doesn't exist - {}".format(
shelf_set_filepath[current_os]
)
)
hou.shelves.newShelfSet(file_path=shelf_set_filepath[current_os])
continue
shelf_set_name = shelf_set_config.get('shelf_set_name')
if not shelf_set_name:
log.warning(
"No name found in shelf set definition."
)
return
shelf_set = get_or_create_shelf_set(shelf_set_name)
shelves_definition = shelf_set_config.get('shelf_definition')
if not shelves_definition:
log.debug(
"No shelf definition found for shelf set named '{}'".format(
shelf_set_name
)
)
return
for shelf_definition in shelves_definition:
shelf_name = shelf_definition.get('shelf_name')
if not shelf_name:
log.warning(
"No name found in shelf definition."
)
return
shelf = get_or_create_shelf(shelf_name)
if not shelf_definition.get('tools_list'):
log.debug(
"No tool definition found for shelf named {}".format(
shelf_name
)
)
return
mandatory_attributes = {'name', 'script'}
for tool_definition in shelf_definition.get('tools_list'):
# We verify that the name and script attributes of the tool
# are set
if not all(
tool_definition[key] for key in mandatory_attributes
):
log.warning(
"You need to specify at least the name and \
the script path of the tool.")
continue
tool = get_or_create_tool(tool_definition, shelf)
if not tool:
return
# Add the tool to the shelf if not already in it
if tool not in shelf.tools():
shelf.setTools(list(shelf.tools()) + [tool])
# Add the shelf in the shelf set if not already in it
if shelf not in shelf_set.shelves():
shelf_set.setShelves(shelf_set.shelves() + (shelf,))
def get_or_create_shelf_set(shelf_set_label):
"""This function verifies if the shelf set label exists. If not,
creates a new shelf set.
Arguments:
shelf_set_label (str): The label of the shelf set
Returns:
hou.ShelfSet: The shelf set existing or the new one
"""
all_shelves_sets = hou.shelves.shelfSets().values()
shelf_sets = [
shelf for shelf in all_shelves_sets if shelf.label() == shelf_set_label
]
if shelf_sets:
return shelf_sets[0]
shelf_set_name = shelf_set_label.replace(' ', '_').lower()
new_shelf_set = hou.shelves.newShelfSet(
name=shelf_set_name,
label=shelf_set_label
)
return new_shelf_set
def get_or_create_shelf(shelf_label):
"""This function verifies if the shelf label exists. If not, creates
a new shelf.
Arguments:
shelf_label (str): The label of the shelf
Returns:
hou.Shelf: The shelf existing or the new one
"""
all_shelves = hou.shelves.shelves().values()
shelf = [s for s in all_shelves if s.label() == shelf_label]
if shelf:
return shelf[0]
shelf_name = shelf_label.replace(' ', '_').lower()
new_shelf = hou.shelves.newShelf(
name=shelf_name,
label=shelf_label
)
return new_shelf
def get_or_create_tool(tool_definition, shelf):
"""This function verifies if the tool exists and updates it. If not, creates
a new one.
Arguments:
tool_definition (dict): Dict with label, script, icon and help
shelf (hou.Shelf): The parent shelf of the tool
Returns:
hou.Tool: The tool updated or the new one
"""
existing_tools = shelf.tools()
tool_label = tool_definition.get('label')
existing_tool = [
tool for tool in existing_tools if tool.label() == tool_label
]
if existing_tool:
tool_definition.pop('name', None)
tool_definition.pop('label', None)
existing_tool[0].setData(**tool_definition)
return existing_tool[0]
tool_name = tool_label.replace(' ', '_').lower()
if not os.path.exists(tool_definition['script']):
log.warning(
"This path doesn't exist - {}".format(
tool_definition['script']
)
)
return
with open(tool_definition['script']) as f:
script = f.read()
tool_definition.update({'script': script})
new_tool = hou.shelves.newTool(name=tool_name, **tool_definition)
return new_tool
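For illustration, the project-settings structure consumed by generate_shelves() could look like the following sketch (hypothetical values; the key names match the lookups above):

shelves_set_config = [
    {
        "shelf_set_source_path": {"windows": "", "darwin": "", "linux": ""},
        "shelf_set_name": "OpenPype Shelves",
        "shelf_definition": [
            {
                "shelf_name": "Publish",
                "tools_list": [
                    {
                        "name": "publish",
                        "label": "Publish...",
                        "script": "/studio/scripts/publish_tool.py",
                        "help": "Open the publisher"
                    }
                ]
            }
        ]
    }
]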

View file

@ -1,3 +1,5 @@
from bson.objectid import ObjectId
import pyblish.api
from openpype.pipeline import registered_host
@ -115,7 +117,7 @@ class CollectUpstreamInputs(pyblish.api.InstancePlugin):
# Collect containers for the given set of nodes
containers = collect_input_containers(nodes)
inputs = [c["representation"] for c in containers]
instance.data["inputs"] = inputs
inputs = [ObjectId(c["representation"]) for c in containers]
instance.data["inputRepresentations"] = inputs
self.log.info("Collected inputs: %s" % inputs)

View file

@ -2,10 +2,9 @@ import pyblish.api
from openpype.lib import version_up
from openpype.pipeline import registered_host
from openpype.pipeline.publish import get_errored_plugins_from_context
class IncrementCurrentFile(pyblish.api.InstancePlugin):
class IncrementCurrentFile(pyblish.api.ContextPlugin):
"""Increment the current file.
Saves the current scene with an increased version number.
@ -15,30 +14,10 @@ class IncrementCurrentFile(pyblish.api.InstancePlugin):
label = "Increment current file"
order = pyblish.api.IntegratorOrder + 9.0
hosts = ["houdini"]
families = ["colorbleed.usdrender", "redshift_rop"]
targets = ["local"]
families = ["workfile"]
optional = True
def process(self, instance):
# This should be a ContextPlugin, but this is a workaround
# for a bug in pyblish to run once for a family: issue #250
context = instance.context
key = "__hasRun{}".format(self.__class__.__name__)
if context.data.get(key, False):
return
else:
context.data[key] = True
context = instance.context
errored_plugins = get_errored_plugins_from_context(context)
if any(
plugin.__name__ == "HoudiniSubmitPublishDeadline"
for plugin in errored_plugins
):
raise RuntimeError(
"Skipping incrementing current file because "
"submission to deadline failed."
)
def process(self, context):
# Filename must not have changed since collecting
host = registered_host()

View file

@ -1,35 +0,0 @@
import pyblish.api
import hou
from openpype.lib import version_up
from openpype.pipeline.publish import get_errored_plugins_from_context
class IncrementCurrentFileDeadline(pyblish.api.ContextPlugin):
"""Increment the current file.
Saves the current scene with an increased version number.
"""
label = "Increment current file"
order = pyblish.api.IntegratorOrder + 9.0
hosts = ["houdini"]
targets = ["deadline"]
def process(self, context):
errored_plugins = get_errored_plugins_from_context(context)
if any(
plugin.__name__ == "HoudiniSubmitPublishDeadline"
for plugin in errored_plugins
):
raise RuntimeError(
"Skipping incrementing current file because "
"submission to deadline failed."
)
current_filepath = context.data["currentFile"]
new_filepath = version_up(current_filepath)
hou.hipFile.save(file_name=new_filepath, save_to_recent_files=True)

View file

@ -348,3 +348,71 @@ def get_attr_overrides(node_attr, layer,
break
return reversed(plug_overrides)
def get_shader_in_layer(node, layer):
"""Return the assigned shader in a renderlayer without switching layers.
This has been developed and tested for Legacy Renderlayers and *not* for
Render Setup.
Note: This will also return the shader for any face assignments, however
it will *not* return the components they are assigned to. This could
be implemented, but since Maya's renderlayers are famous for breaking
with face assignments there has been no need for this function to
support that.
Returns:
list: The list of assigned shaders in the given layer.
"""
def _get_connected_shader(plug):
"""Return current shader"""
return cmds.listConnections(plug,
source=False,
destination=True,
plugs=False,
connections=False,
type="shadingEngine") or []
# We check the instObjGroups (shader connection) for layer overrides.
plug = node + ".instObjGroups"
# Ignore complex query if we're in the layer anyway (optimization)
current_layer = cmds.editRenderLayerGlobals(query=True,
currentRenderLayer=True)
if layer == current_layer:
return _get_connected_shader(plug)
connections = cmds.listConnections(plug,
plugs=True,
source=False,
destination=True,
type="renderLayer") or []
connections = list(filter(lambda x: x.endswith(".outPlug"), connections))
if not connections:
# If no overrides anywhere on the shader, just get the current shader
return _get_connected_shader(plug)
def _get_override(connections, layer):
"""Return the overridden connection for that layer in connections"""
# If there's an override on that layer, return that.
for connection in connections:
if (connection.startswith(layer + ".outAdjustments") and
connection.endswith(".outPlug")):
# This is a shader override on that layer so get the shader
# connected to .outValue of the .outAdjustment[i]
out_adjustment = connection.rsplit(".", 1)[0]
connection_attr = out_adjustment + ".outValue"
override = cmds.listConnections(connection_attr) or []
return override
override_shader = _get_override(connections, layer)
if override_shader is not None:
return override_shader
else:
# Get the override for "defaultRenderLayer" (=masterLayer)
return _get_override(connections, layer="defaultRenderLayer")
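# Example usage (illustrative): query the shader assigned to a shape in a
# legacy renderlayer without switching to it, assuming "pSphereShape1"
# and a renderlayer "rl_beauty" exist in the scene:
#   get_shader_in_layer("pSphereShape1", layer="rl_beauty")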

View file

@ -32,7 +32,7 @@ from openpype.pipeline import (
)
from openpype.pipeline.load import any_outdated_containers
from openpype.hosts.maya import MAYA_ROOT_DIR
from openpype.hosts.maya.lib import copy_workspace_mel
from openpype.hosts.maya.lib import create_workspace_mel
from . import menu, lib
from .workio import (
@ -63,7 +63,7 @@ class MayaHost(HostBase, IWorkfileHost, ILoadHost):
self._op_events = {}
def install(self):
project_name = os.getenv("AVALON_PROJECT")
project_name = legacy_io.active_project()
project_settings = get_project_settings(project_name)
# process path mapping
dirmap_processor = MayaDirmap("maya", project_name, project_settings)
@ -533,7 +533,7 @@ def on_task_changed():
lib.update_content_on_context_change()
msg = " project: {}\n asset: {}\n task:{}".format(
legacy_io.Session["AVALON_PROJECT"],
legacy_io.active_project(),
legacy_io.Session["AVALON_ASSET"],
legacy_io.Session["AVALON_TASK"]
)
@ -545,9 +545,10 @@ def on_task_changed():
def before_workfile_save(event):
project_name = legacy_io.active_project()
workdir_path = event["workdir_path"]
if workdir_path:
copy_workspace_mel(workdir_path)
create_workspace_mel(workdir_path, project_name)
class MayaDirmap(HostDirmap):

View file

@ -1,5 +1,5 @@
from openpype.lib import PreLaunchHook
from openpype.hosts.maya.lib import copy_workspace_mel
from openpype.hosts.maya.lib import create_workspace_mel
class PreCopyMel(PreLaunchHook):
@ -10,9 +10,10 @@ class PreCopyMel(PreLaunchHook):
app_groups = ["maya"]
def execute(self):
project_name = self.launch_context.env.get("AVALON_PROJECT")
workdir = self.launch_context.env.get("AVALON_WORKDIR")
if not workdir:
self.log.warning("BUG: Workdir is not filled.")
return
copy_workspace_mel(workdir)
create_workspace_mel(workdir, project_name)

View file

@ -1,26 +1,24 @@
import os
import shutil
from openpype.settings import get_project_settings
from openpype.lib import Logger
def copy_workspace_mel(workdir):
# Check that source mel exists
current_dir = os.path.dirname(os.path.abspath(__file__))
src_filepath = os.path.join(current_dir, "resources", "workspace.mel")
if not os.path.exists(src_filepath):
print("Source mel file does not exist. {}".format(src_filepath))
return
# Skip if workspace.mel already exists
def create_workspace_mel(workdir, project_name):
dst_filepath = os.path.join(workdir, "workspace.mel")
if os.path.exists(dst_filepath):
return
# Create workdir if it does not exist yet
if not os.path.exists(workdir):
os.makedirs(workdir)
# Copy file
print("Copying workspace mel \"{}\" -> \"{}\"".format(
src_filepath, dst_filepath
))
shutil.copy(src_filepath, dst_filepath)
project_setting = get_project_settings(project_name)
mel_script = project_setting["maya"].get("mel_workspace")
# Skip if mel script in settings is empty
if not mel_script:
log = Logger.get_logger("create_workspace_mel")
log.debug("File 'workspace.mel' not created. Settings value is empty.")
return
with open(dst_filepath, "w") as mel_file:
mel_file.write(mel_script)
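# Example usage (illustrative): write the settings-defined workspace.mel
# into a new work directory, assuming a project "demo" whose
# maya/mel_workspace setting is filled (its default mirrors the removed
# resources/workspace.mel shown near the end of this diff):
#   create_workspace_mel("/work/demo/shots/sh010/maya", "demo")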

View file

@ -70,7 +70,7 @@ class CollectAssembly(pyblish.api.InstancePlugin):
data[representation_id].append(instance_data)
instance.data["scenedata"] = dict(data)
instance.data["hierarchy"] = list(set(hierarchy_nodes))
instance.data["nodesHierarchy"] = list(set(hierarchy_nodes))
def get_file_rule(self, rule):
return mel.eval('workspace -query -fileRuleEntry "{}"'.format(rule))
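# Example (illustrative): self.get_file_rule("scene") evaluates
# workspace -query -fileRuleEntry "scene" and returns that workspace
# file rule, e.g. "scenes" in a default Maya project.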

View file

@ -0,0 +1,215 @@
import copy
from bson.objectid import ObjectId
from maya import cmds
import maya.api.OpenMaya as om
import pyblish.api
from openpype.pipeline import registered_host
from openpype.hosts.maya.api.lib import get_container_members
from openpype.hosts.maya.api.lib_rendersetup import get_shader_in_layer
def iter_history(nodes,
filter=om.MFn.kInvalid,
direction=om.MItDependencyGraph.kUpstream):
"""Iterate unique upstream history for list of nodes.
This acts as a replacement to maya.cmds.listHistory.
It's faster by about 2x-3x. It returns fewer nodes than
maya.cmds.listHistory because it excludes the input nodes
from the output (unless an input node was history
for another input node). It also excludes duplicates.
Args:
nodes (list): Maya node names to start search from.
filter (om.MFn.Type): Filter to only specific types.
e.g. to dag nodes using om.MFn.kDagNode
direction (om.MItDependencyGraph.Direction): Direction to traverse in.
Defaults to upstream.
Yields:
str: Node names in upstream history.
"""
if not nodes:
return
sel = om.MSelectionList()
for node in nodes:
sel.add(node)
it = om.MItDependencyGraph(sel.getDependNode(0)) # init iterator
handle = om.MObjectHandle
traversed = set()
fn_dep = om.MFnDependencyNode()
fn_dag = om.MFnDagNode()
for i in range(sel.length()):
start_node = sel.getDependNode(i)
start_node_hash = handle(start_node).hashCode()
if start_node_hash in traversed:
continue
it.resetTo(start_node,
filter=filter,
direction=direction)
while not it.isDone():
node = it.currentNode()
node_hash = handle(node).hashCode()
if node_hash in traversed:
it.prune()
it.next() # noqa: B305
continue
traversed.add(node_hash)
if node.hasFn(om.MFn.kDagNode):
fn_dag.setObject(node)
yield fn_dag.fullPathName()
else:
fn_dep.setObject(node)
yield fn_dep.name()
it.next() # noqa: B305
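# Example usage (illustrative): walk unique upstream shape history for a
# mesh, assuming a shape node named "pSphereShape1" exists:
#   history = list(iter_history(["pSphereShape1"], filter=om.MFn.kShape))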
def collect_input_containers(containers, nodes):
"""Collect containers that contain any of the node in `nodes`.
This will return any loaded Avalon container that contains at least one of
the nodes. As such, the Avalon container is an input for it; in short,
the nodes are members of that container.
Returns:
list: Input avalon containers
"""
# Assume the containers have collected their cached '_members' data
# in the collector.
return [container for container in containers
if any(node in container["_members"] for node in nodes)]
class CollectUpstreamInputs(pyblish.api.InstancePlugin):
"""Collect input source inputs for this publish.
This will include `inputs` data of which loaded publishes were used in the
generation of this publish. This leaves an upstream trace to what was used
as input.
"""
label = "Collect Inputs"
order = pyblish.api.CollectorOrder + 0.34
hosts = ["maya"]
def process(self, instance):
# For large scenes the querying of "host.ls()" can be relatively slow
# e.g. up to a second. Many instances calling it easily slows this
# down. As such, we cache it so we trigger it only once.
# todo: Instead of hidden cache make "CollectContainers" plug-in
cache_key = "__cache_containers"
scene_containers = instance.context.data.get(cache_key, None)
if scene_containers is None:
# Query the scenes' containers if there's no cache yet
host = registered_host()
scene_containers = list(host.ls())
for container in scene_containers:
# Embed the members into the container dictionary
container_members = set(get_container_members(container))
container["_members"] = container_members
instance.context.data["__cache_containers"] = scene_containers
# Collect the relevant input containers for this instance
if "renderlayer" in set(instance.data.get("families", [])):
# Special behavior for renderlayers
self.log.debug("Collecting renderlayer inputs....")
containers = self._collect_renderlayer_inputs(scene_containers,
instance)
else:
# Basic behavior
nodes = instance[:]
# Include any input connections of history with long names
# For optimization purposes only trace upstream from shape nodes
# looking for used dag nodes. This way having just a constraint
# on a transform is also ignored which tended to give irrelevant
# inputs for the majority of our use cases. We tend to care more
# about geometry inputs.
shapes = cmds.ls(nodes,
type=("mesh", "nurbsSurface", "nurbsCurve"),
noIntermediate=True)
if shapes:
history = list(iter_history(shapes, filter=om.MFn.kShape))
history = cmds.ls(history, long=True)
# Include the transforms in the collected history as shapes
# are excluded from containers
transforms = cmds.listRelatives(cmds.ls(history, shapes=True),
parent=True,
fullPath=True,
type="transform")
if transforms:
history.extend(transforms)
if history:
nodes = list(set(nodes + history))
# Collect containers for the given set of nodes
containers = collect_input_containers(scene_containers,
nodes)
inputs = [ObjectId(c["representation"]) for c in containers]
instance.data["inputRepresentations"] = inputs
self.log.info("Collected inputs: %s" % inputs)
def _collect_renderlayer_inputs(self, scene_containers, instance):
"""Collects inputs from nodes in renderlayer, incl. shaders + camera"""
# Get the renderlayer
renderlayer = instance.data.get("setMembers")
if renderlayer == "defaultRenderLayer":
# Assume all loaded containers in the scene are inputs
# for the masterlayer
return copy.deepcopy(scene_containers)
else:
# Get the members of the layer
members = cmds.editRenderLayerMembers(renderlayer,
query=True,
fullNames=True) or []
# In some cases invalid objects are returned from
# `editRenderLayerMembers` so we filter them out
members = cmds.ls(members, long=True)
# Include all children
children = cmds.listRelatives(members,
allDescendents=True,
fullPath=True) or []
members.extend(children)
# Include assigned shaders in renderlayer
shapes = cmds.ls(members, shapes=True, long=True)
shaders = set()
for shape in shapes:
shape_shaders = get_shader_in_layer(shape, layer=renderlayer)
if not shape_shaders:
continue
shaders.update(shape_shaders)
members.extend(shaders)
# Explicitly include the camera being rendered in renderlayer
cameras = instance.data.get("cameras")
members.extend(cameras)
containers = collect_input_containers(scene_containers, members)
return containers

View file

@ -293,6 +293,7 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
"source": filepath,
"expectedFiles": full_exp_files,
"publishRenderMetadataFolder": common_publish_meta_path,
"renderProducts": layer_render_products,
"resolutionWidth": lib.get_attr_in_layer(
"defaultResolution.width", layer=layer_name
),
@ -359,7 +360,6 @@ class CollectMayaRender(pyblish.api.ContextPlugin):
instance.data["label"] = label
instance.data["farm"] = True
instance.data.update(data)
self.log.debug("data: {}".format(json.dumps(data, indent=4)))
def parse_options(self, render_globals):
"""Get all overrides with a value, skip those without.

View file

@ -1,12 +1,12 @@
import os
import openpype.api
from maya import cmds
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractAssStandin(openpype.api.Extractor):
class ExtractAssStandin(publish.Extractor):
"""Extract the content of the instance to a ass file
Things to pay attention to:

View file

@ -1,14 +1,13 @@
import os
import json
import os
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import extract_alembic
from maya import cmds
class ExtractAssembly(openpype.api.Extractor):
class ExtractAssembly(publish.Extractor):
"""Produce an alembic of just point positions and normals.
Positions and normals are preserved, but nothing more,
@ -33,7 +32,7 @@ class ExtractAssembly(openpype.api.Extractor):
json.dump(instance.data["scenedata"], filepath, ensure_ascii=False)
self.log.info("Extracting point cache ..")
cmds.select(instance.data["hierarchy"])
cmds.select(instance.data["nodesHierarchy"])
# Run basic alembic exporter
extract_alembic(file=hierarchy_path,

View file

@ -3,17 +3,17 @@ import contextlib
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractAssProxy(openpype.api.Extractor):
class ExtractAssProxy(publish.Extractor):
"""Extract proxy model as Maya Ascii to use as arnold standin
"""
order = openpype.api.Extractor.order + 0.2
order = publish.Extractor.order + 0.2
label = "Ass Proxy (Maya ASCII)"
hosts = ["maya"]
families = ["ass"]

View file

@ -2,11 +2,11 @@ import os
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api import lib
class ExtractCameraAlembic(openpype.api.Extractor):
class ExtractCameraAlembic(publish.Extractor):
"""Extract a Camera as Alembic.
The cameras gets baked to world space by default. Only when the instance's

View file

@ -5,7 +5,7 @@ import itertools
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api import lib
@ -78,7 +78,7 @@ def unlock(plug):
cmds.disconnectAttr(source, destination)
class ExtractCameraMayaScene(openpype.api.Extractor):
class ExtractCameraMayaScene(publish.Extractor):
"""Extract a Camera as Maya Scene.
This will create a duplicate of the camera that will be baked *with*

View file

@ -4,13 +4,13 @@ import os
from maya import cmds # noqa
import maya.mel as mel # noqa
import pyblish.api
import openpype.api
from openpype.hosts.maya.api.lib import maintained_selection
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
from openpype.hosts.maya.api import fbx
class ExtractFBX(openpype.api.Extractor):
class ExtractFBX(publish.Extractor):
"""Extract FBX from Maya.
This extracts reproducible FBX exports ignoring any of the

View file

@ -5,13 +5,11 @@ import json
from maya import cmds
from maya.api import OpenMaya as om
from bson.objectid import ObjectId
from openpype.pipeline import legacy_io
import openpype.api
from openpype.client import get_representation_by_id
from openpype.pipeline import legacy_io, publish
class ExtractLayout(openpype.api.Extractor):
class ExtractLayout(publish.Extractor):
"""Extract a layout."""
label = "Extract Layout"
@ -30,6 +28,8 @@ class ExtractLayout(openpype.api.Extractor):
instance.data["representations"] = []
json_data = []
# TODO representation queries can be refactored to be faster
project_name = legacy_io.active_project()
for asset in cmds.sets(str(instance), query=True):
# Find the container
@ -43,11 +43,11 @@ class ExtractLayout(openpype.api.Extractor):
representation_id = cmds.getAttr(f"{container}.representation")
representation = legacy_io.find_one(
{
"type": "representation",
"_id": ObjectId(representation_id)
}, projection={"parent": True, "context.family": True})
representation = get_representation_by_id(
project_name,
representation_id,
fields=["parent", "context.family"]
)
self.log.info(representation)
@ -102,9 +102,10 @@ class ExtractLayout(openpype.api.Extractor):
for i in range(0, len(t_matrix_list), row_length):
t_matrix.append(t_matrix_list[i:i + row_length])
json_element["transform_matrix"] = []
for row in t_matrix:
json_element["transform_matrix"].append(list(row))
json_element["transform_matrix"] = [
list(row)
for row in t_matrix
]
basis_list = [
1, 0, 0, 0,

View file

@ -13,8 +13,8 @@ from maya import cmds # noqa
import pyblish.api
import openpype.api
from openpype.pipeline import legacy_io
from openpype.lib import source_hash
from openpype.pipeline import legacy_io, publish
from openpype.hosts.maya.api import lib
# Modes for transfer
@ -161,7 +161,7 @@ def no_workspace_dir():
os.rmdir(fake_workspace_dir)
class ExtractLook(openpype.api.Extractor):
class ExtractLook(publish.Extractor):
"""Extract Look (Maya Scene + JSON)
Only extracts the sets (shadingEngines and alike) alongside a .json file
@ -505,7 +505,7 @@ class ExtractLook(openpype.api.Extractor):
args = []
if do_maketx:
args.append("maketx")
texture_hash = openpype.api.source_hash(filepath, *args)
texture_hash = source_hash(filepath, *args)
# If source has been published before with the same settings,
# then don't reprocess but hardlink from the original

View file

@ -4,12 +4,11 @@ import os
from maya import cmds
import openpype.api
from openpype.hosts.maya.api.lib import maintained_selection
from openpype.pipeline import AVALON_CONTAINER_ID
from openpype.pipeline import AVALON_CONTAINER_ID, publish
class ExtractMayaSceneRaw(openpype.api.Extractor):
class ExtractMayaSceneRaw(publish.Extractor):
"""Extract as Maya Scene (raw).
This will preserve all references, construction history, etc.

View file

@ -4,11 +4,11 @@ import os
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api import lib
class ExtractModel(openpype.api.Extractor):
class ExtractModel(publish.Extractor):
"""Extract as Model (Maya Scene).
Only extracts contents based on the original "setMembers" data to ensure

View file

@ -2,11 +2,11 @@ import os
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractMultiverseLook(openpype.api.Extractor):
class ExtractMultiverseLook(publish.Extractor):
"""Extractor for Multiverse USD look data.
This will extract:

View file

@ -3,11 +3,11 @@ import six
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractMultiverseUsd(openpype.api.Extractor):
class ExtractMultiverseUsd(publish.Extractor):
"""Extractor for Multiverse USD Asset data.
This will extract settings for a Multiverse Write Asset operation:

View file

@ -2,11 +2,11 @@ import os
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractMultiverseUsdComposition(openpype.api.Extractor):
class ExtractMultiverseUsdComposition(publish.Extractor):
"""Extractor of Multiverse USD Composition data.
This will extract settings for a Multiverse Write Composition operation:

View file

@ -1,12 +1,12 @@
import os
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
from maya import cmds
class ExtractMultiverseUsdOverride(openpype.api.Extractor):
class ExtractMultiverseUsdOverride(publish.Extractor):
"""Extractor for Multiverse USD Override data.
This will extract settings for a Multiverse Write Override operation:

View file

@ -1,18 +1,16 @@
import os
import glob
import contextlib
import clique
import capture
from openpype.pipeline import publish
from openpype.hosts.maya.api import lib
import openpype.api
from maya import cmds
import pymel.core as pm
class ExtractPlayblast(openpype.api.Extractor):
class ExtractPlayblast(publish.Extractor):
"""Extract viewport playblast.
Takes review camera and creates review Quicktime video based on viewport

View file

@ -2,7 +2,7 @@ import os
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import (
extract_alembic,
suspended_refresh,
@ -11,7 +11,7 @@ from openpype.hosts.maya.api.lib import (
)
class ExtractAlembic(openpype.api.Extractor):
class ExtractAlembic(publish.Extractor):
"""Produce an alembic of just point positions and normals.
Positions and normals, uvs, creases are preserved, but nothing more,

View file

@ -4,11 +4,11 @@ import os
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractRedshiftProxy(openpype.api.Extractor):
class ExtractRedshiftProxy(publish.Extractor):
"""Extract the content of the instance to a redshift proxy file."""
label = "Redshift Proxy (.rs)"

View file

@ -1,10 +1,11 @@
import json
import os
import openpype.api
import json
import maya.app.renderSetup.model.renderSetup as renderSetup
from openpype.pipeline import publish
class ExtractRenderSetup(openpype.api.Extractor):
class ExtractRenderSetup(publish.Extractor):
"""
Produce renderSetup template file

View file

@ -4,11 +4,11 @@ import os
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractRig(openpype.api.Extractor):
class ExtractRig(publish.Extractor):
"""Extract rig as Maya Scene."""
label = "Extract Rig (Maya Scene)"

View file

@ -3,14 +3,14 @@ import glob
import capture
from openpype.pipeline import publish
from openpype.hosts.maya.api import lib
import openpype.api
from maya import cmds
import pymel.core as pm
class ExtractThumbnail(openpype.api.Extractor):
class ExtractThumbnail(publish.Extractor):
"""Extract viewport thumbnail.
Takes review camera and creates a thumbnail based on viewport

View file

@ -6,7 +6,8 @@ from contextlib import contextmanager
from maya import cmds # noqa
import pyblish.api
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api import fbx
@ -20,7 +21,7 @@ def renamed(original_name, renamed_name):
cmds.rename(renamed_name, original_name)
class ExtractUnrealSkeletalMesh(openpype.api.Extractor):
class ExtractUnrealSkeletalMesh(publish.Extractor):
"""Extract Unreal Skeletal Mesh as FBX from Maya. """
order = pyblish.api.ExtractorOrder - 0.1

View file

@ -5,7 +5,8 @@ import os
from maya import cmds # noqa
import pyblish.api
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import (
parent_nodes,
maintained_selection
@ -13,7 +14,7 @@ from openpype.hosts.maya.api.lib import (
from openpype.hosts.maya.api import fbx
class ExtractUnrealStaticMesh(openpype.api.Extractor):
class ExtractUnrealStaticMesh(publish.Extractor):
"""Extract Unreal Static Mesh as FBX from Maya. """
order = pyblish.api.ExtractorOrder - 0.1

View file

@ -2,11 +2,11 @@ import os
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import maintained_selection
class ExtractVRayProxy(openpype.api.Extractor):
class ExtractVRayProxy(publish.Extractor):
"""Extract the content of the instance to a vrmesh file
Things to pay attention to:

View file

@ -3,14 +3,14 @@
import os
import re
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.render_setup_tools import export_in_rs_layer
from openpype.hosts.maya.api.lib import maintained_selection
from maya import cmds
class ExtractVrayscene(openpype.api.Extractor):
class ExtractVrayscene(publish.Extractor):
"""Extractor for vrscene."""
label = "VRay Scene (.vrscene)"

View file

@ -2,14 +2,14 @@ import os
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api.lib import (
suspended_refresh,
maintained_selection
)
class ExtractXgenCache(openpype.api.Extractor):
class ExtractXgenCache(publish.Extractor):
"""Produce an alembic of just xgen interactive groom
"""

View file

@ -3,10 +3,10 @@ import json
from maya import cmds
import openpype.api
from openpype.pipeline import publish
class ExtractYetiCache(openpype.api.Extractor):
class ExtractYetiCache(publish.Extractor):
"""Producing Yeti cache files using scene time range.
This will extract Yeti cache file sequence and fur settings.

View file

@ -7,7 +7,7 @@ import contextlib
from maya import cmds
import openpype.api
from openpype.pipeline import publish
from openpype.hosts.maya.api import lib
@ -90,7 +90,7 @@ def yetigraph_attribute_values(assumed_destination, resources):
pass
class ExtractYetiRig(openpype.api.Extractor):
class ExtractYetiRig(publish.Extractor):
"""Extract the Yeti rig to a Maya Scene and write the Yeti rig data."""
label = "Extract Yeti Rig"

View file

@ -48,7 +48,7 @@ class ValidateAssemblyModelTransforms(pyblish.api.InstancePlugin):
from openpype.hosts.maya.api import lib
# Get all transforms in the loaded containers
container_roots = cmds.listRelatives(instance.data["hierarchy"],
container_roots = cmds.listRelatives(instance.data["nodesHierarchy"],
children=True,
type="transform",
fullPath=True)

View file

@ -1,11 +0,0 @@
//Maya 2018 Project Definition
workspace -fr "shaders" "renderData/shaders";
workspace -fr "alembicCache" "cache/alembic";
workspace -fr "mayaAscii" "";
workspace -fr "mayaBinary" "";
workspace -fr "renderData" "renderData";
workspace -fr "fileCache" "cache/nCache";
workspace -fr "scene" "";
workspace -fr "sourceImages" "sourceimages";
workspace -fr "images" "renders";

View file

@ -201,34 +201,6 @@ class CollectNukeWrites(pyblish.api.InstancePlugin):
if not instance.data["review"]:
instance.data["useSequenceForReview"] = False
project_name = legacy_io.active_project()
asset_name = instance.data["asset"]
# * Add audio to instance if exists.
# Find latest versions document
last_version_doc = get_last_version_by_subset_name(
project_name, "audioMain", asset_name=asset_name, fields=["_id"]
)
repre_doc = None
if last_version_doc:
# Try to find its representation (Expected there is only one)
repre_docs = list(get_representations(
project_name, version_ids=[last_version_doc["_id"]]
))
if not repre_docs:
self.log.warning(
"Version document does not contain any representations"
)
else:
repre_doc = repre_docs[0]
# Add audio to instance if representation was found
if repre_doc:
instance.data["audio"] = [{
"offset": 0,
"filename": get_representation_path(repre_doc)
}]
self.log.debug("instance.data: {}".format(pformat(instance.data)))
def is_prerender(self, families):

View file

@ -1,21 +1,60 @@
import os
import re
import abc
import json
import logging
import six
import platform
import functools
import warnings
import clique
from openpype.client import get_project
from openpype.settings import get_project_settings
from .profiles_filtering import filter_profiles
log = logging.getLogger(__name__)
class PathToolsDeprecatedWarning(DeprecationWarning):
pass
def deprecated(new_destination):
"""Mark functions as deprecated.
It will result in a warning being emitted when the function is used.
"""
func = None
if callable(new_destination):
func = new_destination
new_destination = None
def _decorator(decorated_func):
if new_destination is None:
warning_message = (
" Please check content of deprecated function to figure out"
" possible replacement."
)
else:
warning_message = " Please replace your usage with '{}'.".format(
new_destination
)
@functools.wraps(decorated_func)
def wrapper(*args, **kwargs):
warnings.simplefilter("always", PathToolsDeprecatedWarning)
warnings.warn(
(
"Call to deprecated function '{}'"
"\nFunction was moved or removed.{}"
).format(decorated_func.__name__, warning_message),
category=PathToolsDeprecatedWarning,
stacklevel=4
)
return decorated_func(*args, **kwargs)
return wrapper
if func is None:
return _decorator
return _decorator(func)
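For illustration, the decorator above supports both forms that appear later in this changeset: bare, and with an explicit replacement path. A minimal sketch (bodies elided):

# Sketch only; mirrors usages further down in this diff.
@deprecated("openpype.pipeline.project_folders.fill_paths")
def fill_paths(path_list, anatomy):
    ...

@deprecated  # no replacement path: the warning points to the function body
def get_format_data(anatomy):
    ...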
def format_file_size(file_size, suffix=None):
"""Returns formatted string with size in appropriate unit.
@ -232,107 +271,69 @@ def get_last_version_from_path(path_dir, filter):
return None
@deprecated("openpype.pipeline.project_folders.concatenate_splitted_paths")
def concatenate_splitted_paths(split_paths, anatomy):
pattern_array = re.compile(r"\[.*\]")
output = []
for path_items in split_paths:
clean_items = []
if isinstance(path_items, str):
path_items = [path_items]
"""
Deprecated:
Function will be removed after release version 3.16.*
"""
for path_item in path_items:
if not re.match(r"{.+}", path_item):
path_item = re.sub(pattern_array, "", path_item)
clean_items.append(path_item)
from openpype.pipeline.project_folders import concatenate_splitted_paths
# backward compatibility
if "__project_root__" in path_items:
for root, root_path in anatomy.roots.items():
if not os.path.exists(str(root_path)):
log.debug("Root {} path path {} not exist on \
computer!".format(root, root_path))
continue
clean_items = ["{{root[{}]}}".format(root),
r"{project[name]}"] + clean_items[1:]
output.append(os.path.normpath(os.path.sep.join(clean_items)))
continue
output.append(os.path.normpath(os.path.sep.join(clean_items)))
return output
return concatenate_splitted_paths(split_paths, anatomy)
@deprecated
def get_format_data(anatomy):
project_doc = get_project(anatomy.project_name, fields=["data.code"])
project_code = project_doc["data"]["code"]
"""
Deprecated:
Function will be removed after release version 3.16.*
"""
return {
"root": anatomy.roots,
"project": {
"name": anatomy.project_name,
"code": project_code
},
}
from openpype.pipeline.template_data import get_project_template_data
data = get_project_template_data(project_name=anatomy.project_name)
data["root"] = anatomy.roots
return data
@deprecated("openpype.pipeline.project_folders.fill_paths")
def fill_paths(path_list, anatomy):
format_data = get_format_data(anatomy)
filled_paths = []
"""
Deprecated:
Function will be removed after release version 3.16.*
"""
for path in path_list:
new_path = path.format(**format_data)
filled_paths.append(new_path)
from openpype.pipeline.project_folders import fill_paths
return filled_paths
return fill_paths(path_list, anatomy)
@deprecated("openpype.pipeline.project_folders.create_project_folders")
def create_project_folders(basic_paths, project_name):
from openpype.pipeline import Anatomy
anatomy = Anatomy(project_name)
"""
Deprecated:
Function will be removed after release version 3.16.*
"""
concat_paths = concatenate_splitted_paths(basic_paths, anatomy)
filled_paths = fill_paths(concat_paths, anatomy)
from openpype.pipeline.project_folders import create_project_folders
# Create folders
for path in filled_paths:
if os.path.exists(path):
log.debug("Folder already exists: {}".format(path))
else:
log.debug("Creating folder: {}".format(path))
os.makedirs(path)
def _list_path_items(folder_structure):
output = []
for key, value in folder_structure.items():
if not value:
output.append(key)
else:
paths = _list_path_items(value)
for path in paths:
if not isinstance(path, (list, tuple)):
path = [path]
item = [key]
item.extend(path)
output.append(item)
return output
return create_project_folders(project_name, basic_paths)
@deprecated("openpype.pipeline.project_folders.get_project_basic_paths")
def get_project_basic_paths(project_name):
project_settings = get_project_settings(project_name)
folder_structure = (
project_settings["global"]["project_folder_structure"]
)
if not folder_structure:
return []
"""
Deprecated:
Function will be removed after release version 3.16.*
"""
if isinstance(folder_structure, str):
folder_structure = json.loads(folder_structure)
return _list_path_items(folder_structure)
from openpype.pipeline.project_folders import get_project_basic_paths
return get_project_basic_paths(project_name)
@deprecated("openpype.pipeline.workfile.create_workdir_extra_folders")
def create_workdir_extra_folders(
workdir, host_name, task_type, task_name, project_name,
project_settings=None
@ -349,37 +350,18 @@ def create_workdir_extra_folders(
project_name (str): Name of project on which task is.
project_settings (dict): Prepared project settings. Are loaded if not
passed.
Deprecated:
Function will be removed after release version 3.16.*
"""
# Load project settings if not set
if not project_settings:
project_settings = get_project_settings(project_name)
# Load extra folders profiles
extra_folders_profiles = (
project_settings["global"]["tools"]["Workfiles"]["extra_folders"]
from openpype.pipeline.project_folders import create_workdir_extra_folders
return create_workdir_extra_folders(
workdir,
host_name,
task_type,
task_name,
project_name,
project_settings
)
# Skip if are empty
if not extra_folders_profiles:
return
# Prepare profiles filters
filter_data = {
"task_types": task_type,
"task_names": task_name,
"hosts": host_name
}
profile = filter_profiles(extra_folders_profiles, filter_data)
if profile is None:
return
for subfolder in profile["folders"]:
# Make sure backslashes are converted to forwards slashes
# and does not start with slash
subfolder = subfolder.replace("\\", "/").lstrip("/")
# Skip empty strings
if not subfolder:
continue
fullpath = os.path.join(workdir, subfolder)
if not os.path.exists(fullpath):
os.makedirs(fullpath)

View file

@ -3,7 +3,6 @@
import os
import logging
import re
import json
import warnings
import functools

View file

@ -154,7 +154,7 @@ def convert_value_by_type_name(value_type, value, logger=None):
elif parts_len == 4:
divisor = 2
elif parts_len == 9:
divisor == 3
divisor = 3
elif parts_len == 16:
divisor = 4
else:

View file

@ -9,6 +9,7 @@ import os
from abc import abstractmethod
import platform
import getpass
from functools import partial
from collections import OrderedDict
import six
@ -66,6 +67,96 @@ def requests_get(*args, **kwargs):
return requests.get(*args, **kwargs)
class DeadlineKeyValueVar(dict):
"""
Serializes dictionary key values as "{key}={value}" like Deadline uses
for EnvironmentKeyValue.
As an example:
EnvironmentKeyValue0="A_KEY=VALUE_A"
EnvironmentKeyValue1="OTHER_KEY=VALUE_B"
The keys are serialized in alphabetical order (sorted).
Example:
>>> var = DeadlineKeyValueVar("EnvironmentKeyValue")
>>> var["my_var"] = "hello"
>>> var["my_other_var"] = "hello2"
>>> var.serialize()
"""
def __init__(self, key):
super(DeadlineKeyValueVar, self).__init__()
self.__key = key
def serialize(self):
key = self.__key
# Allow custom location for index in serialized string
if "{}" not in key:
key = key + "{}"
return {
key.format(index): "{}={}".format(var_key, var_value)
for index, (var_key, var_value) in enumerate(sorted(self.items()))
}
class DeadlineIndexedVar(dict):
"""
Allows setting and querying values by integer indices:
Query: var[1] or var.get(1)
Set: var[1] = "my_value"
Append: var += "value"
Note: Iterating the instance is not guaranteed to follow the order of the
indices. To iterate in order, use `sorted()`.
"""
def __init__(self, key):
super(DeadlineIndexedVar, self).__init__()
self.__key = key
def serialize(self):
key = self.__key
# Allow custom location for index in serialized string
if "{}" not in key:
key = key + "{}"
return {
key.format(index): value for index, value in sorted(self.items())
}
def next_available_index(self):
# Add as first unused entry
i = 0
while i in self.keys():
i += 1
return i
def update(self, data):
# Force the integer key check
for key, value in data.items():
self.__setitem__(key, value)
def __iadd__(self, other):
index = self.next_available_index()
self[index] = other
return self
def __setitem__(self, key, value):
if not isinstance(key, int):
raise TypeError("Key must be an integer: {}".format(key))
if key < 0:
raise ValueError("Negative index can't be set: {}".format(key))
dict.__setitem__(self, key, value)
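A quick sketch of how the two helpers above serialize; the variable names and values are made up:

env = DeadlineKeyValueVar("EnvironmentKeyValue")
env["A_KEY"] = "VALUE_A"
env["OTHER_KEY"] = "VALUE_B"
print(env.serialize())
# {'EnvironmentKeyValue0': 'A_KEY=VALUE_A', 'EnvironmentKeyValue1': 'OTHER_KEY=VALUE_B'}

out = DeadlineIndexedVar("OutputFilename")
out += "beauty.####.exr"   # appended at the next free index (0)
out[5] = "depth.####.exr"  # explicit indices are kept as-is
print(out.serialize())
# {'OutputFilename0': 'beauty.####.exr', 'OutputFilename5': 'depth.####.exr'}

A key containing "{}" places the index mid-key, which is how "OutputFilename{}Tile" below serializes as OutputFilename0Tile.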
@attr.s
class DeadlineJobInfo(object):
"""Mapping of all Deadline *JobInfo* attributes.
@ -218,24 +309,8 @@ class DeadlineJobInfo(object):
# Environment
# ----------------------------------------------
_environmentKeyValue = attr.ib(factory=list)
@property
def EnvironmentKeyValue(self): # noqa: N802
"""Return all environment key values formatted for Deadline.
Returns:
dict: as `{'EnvironmentKeyValue0': 'key=value'}`
"""
out = {}
for index, v in enumerate(self._environmentKeyValue):
out["EnvironmentKeyValue{}".format(index)] = v
return out
@EnvironmentKeyValue.setter
def EnvironmentKeyValue(self, val): # noqa: N802
self._environmentKeyValue.append(val)
EnvironmentKeyValue = attr.ib(factory=partial(DeadlineKeyValueVar,
"EnvironmentKeyValue"))
IncludeEnvironment = attr.ib(default=None) # Default: false
UseJobEnvironmentOnly = attr.ib(default=None) # Default: false
@ -243,121 +318,29 @@ class DeadlineJobInfo(object):
# Job Extra Info
# ----------------------------------------------
_extraInfos = attr.ib(factory=list)
_extraInfoKeyValues = attr.ib(factory=list)
@property
def ExtraInfo(self): # noqa: N802
"""Return all ExtraInfo values formatted for Deadline.
Returns:
dict: as `{'ExtraInfo0': 'value'}`
"""
out = {}
for index, v in enumerate(self._extraInfos):
out["ExtraInfo{}".format(index)] = v
return out
@ExtraInfo.setter
def ExtraInfo(self, val): # noqa: N802
self._extraInfos.append(val)
@property
def ExtraInfoKeyValue(self): # noqa: N802
"""Return all ExtraInfoKeyValue values formatted for Deadline.
Returns:
dict: as `{'ExtraInfoKeyValue0': 'key=value'}`
"""
out = {}
for index, v in enumerate(self._extraInfoKeyValues):
out["ExtraInfoKeyValue{}".format(index)] = v
return out
@ExtraInfoKeyValue.setter
def ExtraInfoKeyValue(self, val): # noqa: N802
self._extraInfoKeyValues.append(val)
ExtraInfo = attr.ib(factory=partial(DeadlineIndexedVar, "ExtraInfo"))
ExtraInfoKeyValue = attr.ib(factory=partial(DeadlineKeyValueVar,
"ExtraInfoKeyValue"))
# Task Extra Info Names
# ----------------------------------------------
OverrideTaskExtraInfoNames = attr.ib(default=None) # Default: false
_taskExtraInfos = attr.ib(factory=list)
@property
def TaskExtraInfoName(self): # noqa: N802
"""Return all TaskExtraInfoName values formatted for Deadline.
Returns:
dict: as `{'TaskExtraInfoName0': 'value'}`
"""
out = {}
for index, v in enumerate(self._taskExtraInfos):
out["TaskExtraInfoName{}".format(index)] = v
return out
@TaskExtraInfoName.setter
def TaskExtraInfoName(self, val): # noqa: N802
self._taskExtraInfos.append(val)
TaskExtraInfoName = attr.ib(factory=partial(DeadlineIndexedVar,
"TaskExtraInfoName"))
# Output
# ----------------------------------------------
_outputFilename = attr.ib(factory=list)
_outputFilenameTile = attr.ib(factory=list)
_outputDirectory = attr.ib(factory=list)
OutputFilename = attr.ib(factory=partial(DeadlineIndexedVar,
"OutputFilename"))
OutputFilenameTile = attr.ib(factory=partial(DeadlineIndexedVar,
"OutputFilename{}Tile"))
OutputDirectory = attr.ib(factory=partial(DeadlineIndexedVar,
"OutputDirectory"))
@property
def OutputFilename(self): # noqa: N802
"""Return all OutputFilename values formatted for Deadline.
Returns:
dict: as `{'OutputFilename0': 'filename'}`
"""
out = {}
for index, v in enumerate(self._outputFilename):
out["OutputFilename{}".format(index)] = v
return out
@OutputFilename.setter
def OutputFilename(self, val): # noqa: N802
self._outputFilename.append(val)
@property
def OutputFilenameTile(self): # noqa: N802
"""Return all OutputFilename#Tile values formatted for Deadline.
Returns:
dict: as `{'OutputFilename#Tile': 'tile'}`
"""
out = {}
for index, v in enumerate(self._outputFilenameTile):
out["OutputFilename{}Tile".format(index)] = v
return out
@OutputFilenameTile.setter
def OutputFilenameTile(self, val): # noqa: N802
self._outputFilenameTile.append(val)
@property
def OutputDirectory(self): # noqa: N802
"""Return all OutputDirectory values formatted for Deadline.
Returns:
dict: as `{'OutputDirectory0': 'dir'}`
"""
out = {}
for index, v in enumerate(self._outputDirectory):
out["OutputDirectory{}".format(index)] = v
return out
@OutputDirectory.setter
def OutputDirectory(self, val): # noqa: N802
self._outputDirectory.append(val)
# Asset Dependency
# ----------------------------------------------
AssetDependency = attr.ib(factory=partial(DeadlineIndexedVar,
"AssetDependency"))
# Tile Job
# ----------------------------------------------
@ -381,7 +364,7 @@ class DeadlineJobInfo(object):
"""
def filter_data(a, v):
if a.name.startswith("_"):
if isinstance(v, (DeadlineIndexedVar, DeadlineKeyValueVar)):
return False
if v is None:
return False
@ -389,15 +372,27 @@ class DeadlineJobInfo(object):
serialized = attr.asdict(
self, dict_factory=OrderedDict, filter=filter_data)
serialized.update(self.EnvironmentKeyValue)
serialized.update(self.ExtraInfo)
serialized.update(self.ExtraInfoKeyValue)
serialized.update(self.TaskExtraInfoName)
serialized.update(self.OutputFilename)
serialized.update(self.OutputFilenameTile)
serialized.update(self.OutputDirectory)
# Custom serialize these attributes
for attribute in [
self.EnvironmentKeyValue,
self.ExtraInfo,
self.ExtraInfoKeyValue,
self.TaskExtraInfoName,
self.OutputFilename,
self.OutputFilenameTile,
self.OutputDirectory,
self.AssetDependency
]:
serialized.update(attribute.serialize())
return serialized
def update(self, data):
"""Update instance with data dict"""
for key, value in data.items():
setattr(self, key, value)
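With the variables above as plain attributes, submitter plugins write to them directly instead of through the removed property setters, as the After Effects and Harmony hunks further down show. A condensed sketch on an already constructed instance:

# `job_info` is assumed to be a DeadlineJobInfo instance.
job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1"
job_info.ExtraInfo += "some value"            # indexed append at next free slot
job_info.OutputFilename += "beauty.####.exr"  # hypothetical file name
payload = job_info.serialize()                # plain dict ready for submission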
@six.add_metaclass(AbstractMetaInstancePlugin)
class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
@ -521,68 +516,72 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
published.
"""
anatomy = self._instance.context.data['anatomy']
file_path = None
for i in self._instance.context:
if "workfile" in i.data["families"] \
or i.data["family"] == "workfile":
# test if there is instance of workfile waiting
# to be published.
assert i.data["publish"] is True, (
"Workfile (scene) must be published along")
# determine published path from Anatomy.
template_data = i.data.get("anatomyData")
rep = i.data.get("representations")[0].get("ext")
template_data["representation"] = rep
template_data["ext"] = rep
template_data["comment"] = None
anatomy_filled = anatomy.format(template_data)
template_filled = anatomy_filled["publish"]["path"]
file_path = os.path.normpath(template_filled)
self.log.info("Using published scene for render {}".format(
file_path))
instance = self._instance
workfile_instance = self._get_workfile_instance(instance.context)
if workfile_instance is None:
return
if not os.path.exists(file_path):
self.log.error("published scene does not exist!")
raise
# determine published path from Anatomy.
template_data = workfile_instance.data.get("anatomyData")
rep = workfile_instance.data.get("representations")[0]
template_data["representation"] = rep.get("name")
template_data["ext"] = rep.get("ext")
template_data["comment"] = None
if not replace_in_path:
return file_path
anatomy = instance.context.data['anatomy']
anatomy_filled = anatomy.format(template_data)
template_filled = anatomy_filled["publish"]["path"]
file_path = os.path.normpath(template_filled)
# now we need to switch scene in expected files
# because <scene> token will now point to published
# scene file and that might differ from current one
new_scene = os.path.splitext(
os.path.basename(file_path))[0]
orig_scene = os.path.splitext(
os.path.basename(
self._instance.context.data["currentFile"]))[0]
exp = self._instance.data.get("expectedFiles")
self.log.info("Using published scene for render {}".format(file_path))
if isinstance(exp[0], dict):
# we have aovs and we need to iterate over them
new_exp = {}
for aov, files in exp[0].items():
replaced_files = []
for f in files:
replaced_files.append(
str(f).replace(orig_scene, new_scene)
)
new_exp[aov] = replaced_files
# [] might be too much here, TODO
self._instance.data["expectedFiles"] = [new_exp]
else:
new_exp = []
for f in exp:
new_exp.append(
str(f).replace(orig_scene, new_scene)
)
self._instance.data["expectedFiles"] = new_exp
if not os.path.exists(file_path):
self.log.error("published scene does not exist!")
raise
self.log.info("Scene name was switched {} -> {}".format(
orig_scene, new_scene
))
if not replace_in_path:
return file_path
# now we need to switch scene in expected files
# because <scene> token will now point to published
# scene file and that might differ from current one
def _clean_name(path):
return os.path.splitext(os.path.basename(path))[0]
new_scene = _clean_name(file_path)
orig_scene = _clean_name(instance.context.data["currentFile"])
expected_files = instance.data.get("expectedFiles")
if isinstance(expected_files[0], dict):
# we have aovs and we need to iterate over them
new_exp = {}
for aov, files in expected_files[0].items():
replaced_files = []
for f in files:
replaced_files.append(
str(f).replace(orig_scene, new_scene)
)
new_exp[aov] = replaced_files
# [] might be too much here, TODO
instance.data["expectedFiles"] = [new_exp]
else:
new_exp = []
for f in expected_files:
new_exp.append(
str(f).replace(orig_scene, new_scene)
)
instance.data["expectedFiles"] = new_exp
metadata_folder = instance.data.get("publishRenderMetadataFolder")
if metadata_folder:
metadata_folder = metadata_folder.replace(orig_scene,
new_scene)
instance.data["publishRenderMetadataFolder"] = metadata_folder
self.log.info("Scene name was switched {} -> {}".format(
orig_scene, new_scene
))
return file_path
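The swap above amounts to replacing the workfile's base name with the published one inside every expected output path. With made-up paths and the local `_clean_name` helper:

orig_scene = _clean_name("/work/sh010_comp_v001.ma")        # "sh010_comp_v001"
new_scene = _clean_name("/publish/sh010_comp_v001_pub.ma")  # "sh010_comp_v001_pub"
expected = "/renders/sh010_comp_v001/beauty.0001.exr"
print(expected.replace(orig_scene, new_scene))
# /renders/sh010_comp_v001_pub/beauty.0001.exr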
@ -645,3 +644,22 @@ class AbstractSubmitDeadline(pyblish.api.InstancePlugin):
self._instance.data["deadlineSubmissionJob"] = result
return result["_id"]
@staticmethod
def _get_workfile_instance(context):
"""Find workfile instance in context"""
for i in context:
is_workfile = (
"workfile" in i.data.get("families", []) or
i.data["family"] == "workfile"
)
if not is_workfile:
continue
# test if there is instance of workfile waiting
# to be published.
assert i.data["publish"] is True, (
"Workfile (scene) must be published along")
return i

View file

@ -13,7 +13,7 @@ class CollectDeadlineServerFromInstance(pyblish.api.InstancePlugin):
order = pyblish.api.CollectorOrder + 0.415
label = "Deadline Webservice from the Instance"
families = ["rendering"]
families = ["rendering", "renderlayer"]
def process(self, instance):
instance.data["deadlineUrl"] = self._collect_deadline_url(instance)

View file

@ -67,9 +67,9 @@ class AfterEffectsSubmitDeadline(
dln_job_info.Group = self.group
dln_job_info.Department = self.department
dln_job_info.ChunkSize = self.chunk_size
dln_job_info.OutputFilename = \
dln_job_info.OutputFilename += \
os.path.basename(self._instance.data["expectedFiles"][0])
dln_job_info.OutputDirectory = \
dln_job_info.OutputDirectory += \
os.path.dirname(self._instance.data["expectedFiles"][0])
dln_job_info.JobDelay = "00:00:00"
@ -92,13 +92,12 @@ class AfterEffectsSubmitDeadline(
environment = dict({key: os.environ[key] for key in keys
if key in os.environ}, **legacy_io.Session)
for key in keys:
val = environment.get(key)
if val:
dln_job_info.EnvironmentKeyValue = "{key}={value}".format(
key=key,
value=val)
value = environment.get(key)
if value:
dln_job_info.EnvironmentKeyValue[key] = value
# to recognize job from PYPE for turning Event On/Off
dln_job_info.EnvironmentKeyValue = "OPENPYPE_RENDER_JOB=1"
dln_job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1"
return dln_job_info

View file

@ -284,14 +284,12 @@ class HarmonySubmitDeadline(
environment = dict({key: os.environ[key] for key in keys
if key in os.environ}, **legacy_io.Session)
for key in keys:
val = environment.get(key)
if val:
job_info.EnvironmentKeyValue = "{key}={value}".format(
key=key,
value=val)
value = environment.get(key)
if value:
job_info.EnvironmentKeyValue[key] = value
# to recognize job from PYPE for turning Event On/Off
job_info.EnvironmentKeyValue = "OPENPYPE_RENDER_JOB=1"
job_info.EnvironmentKeyValue["OPENPYPE_RENDER_JOB"] = "1"
return job_info

View file

@ -778,7 +778,9 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin):
"resolutionHeight": data.get("resolutionHeight", 1080),
"multipartExr": data.get("multipartExr", False),
"jobBatchName": data.get("jobBatchName", ""),
"useSequenceForReview": data.get("useSequenceForReview", True)
"useSequenceForReview": data.get("useSequenceForReview", True),
# map inputVersions `ObjectId` -> `str` so json supports it
"inputVersions": list(map(str, data.get("inputVersions", [])))
}
# skip locking version if we are creating v01

View file

@ -71,7 +71,7 @@ def convert_value_by_type_name(value_type, value):
elif parts_len == 4:
divisor = 2
elif parts_len == 9:
divisor == 3
divisor = 3
elif parts_len == 16:
divisor = 4
else:
@ -453,7 +453,7 @@ class OpenPypeTileAssembler(DeadlinePlugin):
# Swap to have input as foreground
args.append("--swap")
# Paste foreground to background
args.append("--paste +{}+{}".format(pos_x, pos_y))
args.append("--paste {x:+d}{y:+d}".format(x=pos_x, y=pos_y))
args.append("-o")
args.append(output_path)
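The format-spec change matters for negative offsets: `{:+d}` always emits an explicit sign, while the old hard-coded `+{}+{}` produced malformed arguments such as `+-16`. A quick check with made-up tile offsets:

pos_x, pos_y = 240, -16  # hypothetical tile position
print("--paste {x:+d}{y:+d}".format(x=pos_x, y=pos_y))  # --paste +240-16
print("--paste +{}+{}".format(pos_x, pos_y))            # --paste +240+-16 (broken)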

View file

@ -1,9 +1,13 @@
from .ftrack_module import (
FtrackModule,
FTRACK_MODULE_DIR
FTRACK_MODULE_DIR,
resolve_ftrack_url,
)
__all__ = (
"FtrackModule",
"FTRACK_MODULE_DIR"
"FTRACK_MODULE_DIR",
"resolve_ftrack_url",
)

View file

@ -1,7 +1,10 @@
import re
from openpype.pipeline.project_folders import (
get_project_basic_paths,
create_project_folders,
)
from openpype_modules.ftrack.lib import BaseAction, statics_icon
from openpype.api import get_project_basic_paths, create_project_folders
class CreateProjectFolders(BaseAction):
@ -81,7 +84,7 @@ class CreateProjectFolders(BaseAction):
}
# Invoking OpenPype API to create the project folders
create_project_folders(basic_paths, project_name)
create_project_folders(project_name, basic_paths)
self.create_ftrack_entities(basic_paths, project_entity)
self.trigger_event(

View file

@ -6,14 +6,16 @@ import platform
import click
from openpype.modules import OpenPypeModule
from openpype_interfaces import (
from openpype.modules.interfaces import (
ITrayModule,
IPluginPaths,
ISettingsChangeListener
)
from openpype.settings import SaveWarningExc
from openpype.lib import Logger
FTRACK_MODULE_DIR = os.path.dirname(os.path.abspath(__file__))
_URL_NOT_SET = object()
class FtrackModule(
@ -28,17 +30,8 @@ class FtrackModule(
ftrack_settings = settings[self.name]
self.enabled = ftrack_settings["enabled"]
# Add http schema
ftrack_url = ftrack_settings["ftrack_server"].strip("/ ")
if ftrack_url:
if "http" not in ftrack_url:
ftrack_url = "https://" + ftrack_url
# Check if "ftrack.app" is part os url
if "ftrackapp.com" not in ftrack_url:
ftrack_url = ftrack_url + ".ftrackapp.com"
self.ftrack_url = ftrack_url
self._settings_ftrack_url = ftrack_settings["ftrack_server"]
self._ftrack_url = _URL_NOT_SET
current_dir = os.path.dirname(os.path.abspath(__file__))
low_platform = platform.system().lower()
@ -70,6 +63,16 @@ class FtrackModule(
self.timers_manager_connector = None
self._timers_manager_module = None
def get_ftrack_url(self):
if self._ftrack_url is _URL_NOT_SET:
self._ftrack_url = resolve_ftrack_url(
self._settings_ftrack_url,
logger=self.log
)
return self._ftrack_url
ftrack_url = property(get_ftrack_url)
def get_global_environments(self):
"""Ftrack's global environments."""
return {
@ -479,6 +482,51 @@ class FtrackModule(
click_group.add_command(cli_main)
def _check_ftrack_url(url):
import requests
try:
result = requests.get(url, allow_redirects=False)
except requests.exceptions.RequestException:
return False
if (result.status_code != 200 or "FTRACK_VERSION" not in result.headers):
return False
return True
def resolve_ftrack_url(url, logger=None):
"""Checks if Ftrack server is responding."""
if logger is None:
logger = Logger.get_logger(__name__)
url = url.strip("/ ")
if not url:
logger.error("Ftrack URL is not set!")
return None
if not url.startswith("http"):
url = "https://" + url
ftrack_url = None
if not url.endswith("ftrackapp.com"):
ftrackapp_url = url + ".ftrackapp.com"
if _check_ftrack_url(ftrackapp_url):
ftrack_url = ftrackapp_url
if not ftrack_url and _check_ftrack_url(url):
ftrack_url = url
if ftrack_url:
logger.debug("Ftrack server \"{}\" is accessible.".format(ftrack_url))
else:
logger.error("Ftrack server \"{}\" is not accessible!".format(url))
return ftrack_url
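A usage sketch for the resolver above; the site name is made up:

url = resolve_ftrack_url("mystudio")  # hypothetical site name
# Tries https://mystudio.ftrackapp.com first, then https://mystudio, and
# returns the first variant answering with an FTRACK_VERSION header,
# or None when neither is reachable.
if url:
    print("Using ftrack server: {}".format(url))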
@click.group(FtrackModule.name, help="Ftrack module related commands.")
def cli_main():
pass

View file

@ -1,8 +1,6 @@
from .ftrack_server import FtrackServer
from .lib import check_ftrack_url
__all__ = (
"FtrackServer",
"check_ftrack_url"
)

View file

@ -20,9 +20,11 @@ from openpype.lib import (
get_openpype_version,
get_build_version,
)
from openpype_modules.ftrack import FTRACK_MODULE_DIR
from openpype_modules.ftrack import (
FTRACK_MODULE_DIR,
resolve_ftrack_url,
)
from openpype_modules.ftrack.lib import credentials
from openpype_modules.ftrack.ftrack_server.lib import check_ftrack_url
from openpype_modules.ftrack.ftrack_server import socket_thread
@ -114,7 +116,7 @@ def legacy_server(ftrack_url):
while True:
if not ftrack_accessible:
ftrack_accessible = check_ftrack_url(ftrack_url)
ftrack_accessible = resolve_ftrack_url(ftrack_url)
# Run threads only if Ftrack is accessible
if not ftrack_accessible and not printed_ftrack_error:
@ -257,7 +259,7 @@ def main_loop(ftrack_url):
while True:
# Check if accessible Ftrack and Mongo url
if not ftrack_accessible:
ftrack_accessible = check_ftrack_url(ftrack_url)
ftrack_accessible = resolve_ftrack_url(ftrack_url)
if not mongo_accessible:
mongo_accessible = check_mongo_url(mongo_uri)
@ -441,7 +443,7 @@ def run_event_server(
os.environ["CLOCKIFY_API_KEY"] = clockify_api_key
# Check url regex and accessibility
ftrack_url = check_ftrack_url(ftrack_url)
ftrack_url = resolve_ftrack_url(ftrack_url)
if not ftrack_url:
print('Exiting! < Please enter Ftrack server url >')
return 1

View file

@ -26,45 +26,12 @@ except ImportError:
from openpype_modules.ftrack.lib import get_ftrack_event_mongo_info
from openpype.client import OpenPypeMongoConnection
from openpype.api import Logger
from openpype.lib import Logger
TOPIC_STATUS_SERVER = "openpype.event.server.status"
TOPIC_STATUS_SERVER_RESULT = "openpype.event.server.status.result"
def check_ftrack_url(url, log_errors=True, logger=None):
"""Checks if Ftrack server is responding"""
if logger is None:
logger = Logger.get_logger(__name__)
if not url:
logger.error("Ftrack URL is not set!")
return None
url = url.strip('/ ')
if 'http' not in url:
if url.endswith('ftrackapp.com'):
url = 'https://' + url
else:
url = 'https://{0}.ftrackapp.com'.format(url)
try:
result = requests.get(url, allow_redirects=False)
except requests.exceptions.RequestException:
if log_errors:
logger.error("Entered Ftrack URL is not accesible!")
return False
if (result.status_code != 200 or 'FTRACK_VERSION' not in result.headers):
if log_errors:
logger.error("Entered Ftrack URL is not accesible!")
return False
logger.debug("Ftrack server {} is accessible.".format(url))
return url
class SocketBaseEventHub(ftrack_api.event.hub.EventHub):
hearbeat_msg = b"hearbeat"

View file

@ -19,11 +19,8 @@ from openpype.client.operations import (
CURRENT_PROJECT_SCHEMA,
CURRENT_PROJECT_CONFIG_SCHEMA,
)
from openpype.api import (
Logger,
get_anatomy_settings
)
from openpype.lib import ApplicationManager
from openpype.settings import get_anatomy_settings
from openpype.lib import ApplicationManager, Logger
from openpype.pipeline import AvalonMongoDB, schema
from .constants import CUST_ATTR_ID_KEY, FPS_KEYS

View file

@ -1,5 +1,8 @@
"""Loads publishing context from json and continues in publish process.
Should run before 'CollectAnatomyContextData' so the user on the context
is changed before it is stored to the context or instance anatomy data.
Requires:
anatomy -> context["anatomy"] *(pyblish.api.CollectorOrder - 0.11)
@ -13,7 +16,7 @@ import os
import pyblish.api
class CollectUsername(pyblish.api.ContextPlugin):
class CollectUsernameForWebpublish(pyblish.api.ContextPlugin):
"""
Translates user email to Ftrack username.
@ -32,10 +35,8 @@ class CollectUsername(pyblish.api.ContextPlugin):
hosts = ["webpublisher", "photoshop"]
targets = ["remotepublish", "filespublish", "tvpaint_worker"]
_context = None
def process(self, context):
self.log.info("CollectUsername")
self.log.info("{}".format(self.__class__.__name__))
os.environ["FTRACK_API_USER"] = os.environ["FTRACK_BOT_API_USER"]
os.environ["FTRACK_API_KEY"] = os.environ["FTRACK_BOT_API_KEY"]
@ -54,12 +55,14 @@ class CollectUsername(pyblish.api.ContextPlugin):
return
session = ftrack_api.Session(auto_connect_event_hub=False)
user = session.query("User where email like '{}'".format(user_email))
user = session.query(
"User where email like '{}'".format(user_email)
).first()
if not user:
raise ValueError(
"Couldn't find user with {} email".format(user_email))
user = user[0]
username = user.get("username")
self.log.debug("Resolved ftrack username:: {}".format(username))
os.environ["FTRACK_API_USER"] = username
@ -67,5 +70,4 @@ class CollectUsername(pyblish.api.ContextPlugin):
burnin_name = username
if '@' in burnin_name:
burnin_name = burnin_name[:burnin_name.index('@')]
os.environ["WEBPUBLISH_OPENPYPE_USERNAME"] = burnin_name
context.data["user"] = burnin_name

View file

@ -35,7 +35,7 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
family_mapping = {
"camera": "cam",
"look": "look",
"mayaascii": "scene",
"mayaAscii": "scene",
"model": "geo",
"rig": "rig",
"setdress": "setdress",
@ -74,11 +74,15 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
version_number = int(instance_version)
family = instance.data["family"]
family_low = family.lower()
# Perform case-insensitive family mapping
family_low = family.lower()
asset_type = instance.data.get("ftrackFamily")
if not asset_type and family_low in self.family_mapping:
asset_type = self.family_mapping[family_low]
if not asset_type:
for map_family, map_value in self.family_mapping.items():
if map_family.lower() == family_low:
asset_type = map_value
break
if not asset_type:
asset_type = "upload"
@ -86,15 +90,6 @@ class IntegrateFtrackInstance(pyblish.api.InstancePlugin):
self.log.debug(
"Family: {}\nMapping: {}".format(family_low, self.family_mapping)
)
# Ignore this instance if neither "ftrackFamily" nor a family mapping is
# found.
if not asset_type:
self.log.info((
"Family \"{}\" does not match any asset type mapping"
).format(family))
return
status_name = self._get_asset_version_status_name(instance)
# Base of component item data

View file

@ -6,22 +6,18 @@ import threading
from Qt import QtCore, QtWidgets, QtGui
import ftrack_api
from ..ftrack_server.lib import check_ftrack_url
from ..ftrack_server import socket_thread
from ..lib import credentials
from ..ftrack_module import FTRACK_MODULE_DIR
from . import login_dialog
from openpype import resources
from openpype.lib import Logger
log = Logger.get_logger("FtrackModule")
from openpype_modules.ftrack import resolve_ftrack_url, FTRACK_MODULE_DIR
from openpype_modules.ftrack.ftrack_server import socket_thread
from openpype_modules.ftrack.lib import credentials
from . import login_dialog
class FtrackTrayWrapper:
def __init__(self, module):
self.module = module
self.log = Logger.get_logger(self.__class__.__name__)
self.thread_action_server = None
self.thread_socket_server = None
@ -62,19 +58,19 @@ class FtrackTrayWrapper:
if validation:
self.widget_login.set_credentials(ft_user, ft_api_key)
self.module.set_credentials_to_env(ft_user, ft_api_key)
log.info("Connected to Ftrack successfully")
self.log.info("Connected to Ftrack successfully")
self.on_login_change()
return validation
if not validation and ft_user and ft_api_key:
log.warning(
self.log.warning(
"Current Ftrack credentials are not valid. {}: {} - {}".format(
str(os.environ.get("FTRACK_SERVER")), ft_user, ft_api_key
)
)
log.info("Please sign in to Ftrack")
self.log.info("Please sign in to Ftrack")
self.bool_logged = False
self.show_login_widget()
self.set_menu_visibility()
@ -104,7 +100,7 @@ class FtrackTrayWrapper:
self.action_credentials.setIcon(self.icon_not_logged)
self.action_credentials.setToolTip("Logged out")
log.info("Logged out of Ftrack")
self.log.info("Logged out of Ftrack")
self.bool_logged = False
self.set_menu_visibility()
@ -126,10 +122,6 @@ class FtrackTrayWrapper:
ftrack_url = self.module.ftrack_url
os.environ["FTRACK_SERVER"] = ftrack_url
parent_file_path = os.path.dirname(
os.path.dirname(os.path.realpath(__file__))
)
min_fail_seconds = 5
max_fail_count = 3
wait_time_after_max_fail = 10
@ -154,17 +146,19 @@ class FtrackTrayWrapper:
# Main loop
while True:
if not self.bool_action_server_running:
log.debug("Action server was pushed to stop.")
self.log.debug("Action server was pushed to stop.")
break
# Check if accessible Ftrack and Mongo url
if not ftrack_accessible:
ftrack_accessible = check_ftrack_url(ftrack_url)
ftrack_accessible = resolve_ftrack_url(ftrack_url)
# Run threads only if Ftrack is accessible
if not ftrack_accessible:
if not printed_ftrack_error:
log.warning("Can't access Ftrack {}".format(ftrack_url))
self.log.warning(
"Can't access Ftrack {}".format(ftrack_url)
)
if self.thread_socket_server is not None:
self.thread_socket_server.stop()
@ -191,7 +185,7 @@ class FtrackTrayWrapper:
self.set_menu_visibility()
elif failed_count == max_fail_count:
log.warning((
self.log.warning((
"Action server failed {} times."
" I'll try to run again {}s later"
).format(
@ -243,10 +237,10 @@ class FtrackTrayWrapper:
self.thread_action_server.join()
self.thread_action_server = None
log.info("Ftrack action server was forced to stop")
self.log.info("Ftrack action server was forced to stop")
except Exception:
log.warning(
self.log.warning(
"Error has happened during Killing action server",
exc_info=True
)
@ -343,7 +337,7 @@ class FtrackTrayWrapper:
self.thread_timer = None
except Exception as e:
log.error("During Killing Timer event server: {0}".format(e))
self.log.error("During Killing Timer event server: {0}".format(e))
def changed_user(self):
self.stop_action_server()

View file

@ -166,50 +166,21 @@ def update_op_assets(
# Substitute item type for general classification (assets or shots)
if item_type in ["Asset", "AssetType"]:
substitute_item_type = "assets"
entity_root_asset_name = "Assets"
elif item_type in ["Episode", "Sequence"]:
substitute_item_type = "shots"
else:
substitute_item_type = f"{item_type.lower()}s"
entity_parent_folders = [
f
for f in project_module_settings["entities_root"]
.get(substitute_item_type)
.split("/")
if f
]
entity_root_asset_name = "Shots"
# Root parent folder if exist
visual_parent_doc_id = (
asset_doc_ids[parent_zou_id]["_id"] if parent_zou_id else None
)
if visual_parent_doc_id is None:
# Find root folder docs
root_folder_docs = get_assets(
# Find root folder doc ("Assets" or "Shots")
root_folder_doc = get_asset_by_name(
project_name,
asset_names=[entity_parent_folders[-1]],
asset_name=entity_root_asset_name,
fields=["_id", "data.root_of"],
)
# NOTE: Not sure why it's checking for entity type?
# OP3 does not support multiple assets with same names so type
# filtering is irrelevant.
# This way mimics previous implementation:
# ```
# root_folder_doc = dbcon.find_one(
# {
# "type": "asset",
# "name": entity_parent_folders[-1],
# "data.root_of": substitute_item_type,
# },
# ["_id"],
# )
# ```
root_folder_doc = None
for folder_doc in root_folder_docs:
root_of = folder_doc.get("data", {}).get("root_of")
if root_of == substitute_item_type:
root_folder_doc = folder_doc
break
if root_folder_doc:
visual_parent_doc_id = root_folder_doc["_id"]
@ -240,7 +211,7 @@ def update_op_assets(
item_name = item["name"]
# Set root folders parents
item_data["parents"] = entity_parent_folders + item_data["parents"]
item_data["parents"] = [entity_root_asset_name] + item_data["parents"]
# Update 'data' different in zou DB
updated_data = {
@ -318,13 +289,13 @@ def write_project_to_op(project: dict, dbcon: AvalonMongoDB) -> UpdateOne:
)
def sync_all_projects(login: str, password: str):
def sync_all_projects(login: str, password: str, ignore_projects: list = None):
"""Update all OP projects in DB with Zou data.
Args:
login (str): Kitsu user login
password (str): Kitsu user password
ignore_projects (list): List of unsynced project names
Raises:
gazu.exception.AuthFailedException: Wrong user login and/or password
"""
@ -340,6 +311,8 @@ def sync_all_projects(login: str, password: str):
dbcon.install()
all_projects = gazu.project.all_open_projects()
for project in all_projects:
if ignore_projects and project["name"] in ignore_projects:
continue
sync_project_from_kitsu(dbcon, project)
@ -396,54 +369,30 @@ def sync_project_from_kitsu(dbcon: AvalonMongoDB, project: dict):
zou_ids_and_asset_docs[project["id"]] = project_doc
# Create entities root folders
project_module_settings = get_project_settings(project_name)["kitsu"]
for entity_type, root in project_module_settings["entities_root"].items():
parent_folders = root.split("/")
direct_parent_doc = None
for i, folder in enumerate(parent_folders, 1):
parent_doc = get_asset_by_name(
project_name, folder, fields=["_id", "data.root_of"]
)
# NOTE: Not sure why it's checking for entity type?
# OP3 does not support multiple assets with same names so type
# filtering is irrelevant.
# Also all of the entities could be queried at once using
# 'get_assets'.
# This way mimics previous implementation:
# ```
# parent_doc = dbcon.find_one(
# {"type": "asset", "name": folder, "data.root_of": entity_type}
# )
# ```
if (
parent_doc
and parent_doc.get("data", {}).get("root_of") != entity_type
):
parent_doc = None
if not parent_doc:
direct_parent_doc = dbcon.insert_one(
{
"name": folder,
"type": "asset",
"schema": "openpype:asset-3.0",
"data": {
"root_of": entity_type,
"parents": parent_folders[:i],
"visualParent": direct_parent_doc.inserted_id
if direct_parent_doc
else None,
"tasks": {},
},
}
)
to_insert = [
{
"name": r,
"type": "asset",
"schema": "openpype:asset-3.0",
"data": {
"root_of": r,
"tasks": {},
},
}
for r in ["Assets", "Shots"]
if not get_asset_by_name(
project_name, r, fields=["_id", "data.root_of"]
)
]
# Create
to_insert = [
create_op_asset(item)
for item in all_entities
if item["id"] not in zou_ids_and_asset_docs.keys()
]
to_insert.extend(
[
create_op_asset(item)
for item in all_entities
if item["id"] not in zou_ids_and_asset_docs.keys()
]
)
if to_insert:
# Insert doc in DB
dbcon.insert_many(to_insert)

View file

@ -95,13 +95,15 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
Reviews might be large, so allow only adding link to message instead of
uploading only.
"""
fill_data = copy.deepcopy(instance.context.data["anatomyData"])
username = fill_data.get("user")
fill_pairs = [
("asset", instance.data.get("asset", fill_data.get("asset"))),
("subset", instance.data.get("subset", fill_data.get("subset"))),
("username", instance.data.get("username",
fill_data.get("username"))),
("user", username),
("username", username),
("app", instance.data.get("app", fill_data.get("app"))),
("family", instance.data.get("family", fill_data.get("family"))),
("version", str(instance.data.get("version",
@ -110,13 +112,19 @@ class IntegrateSlackAPI(pyblish.api.InstancePlugin):
if review_path:
fill_pairs.append(("review_filepath", review_path))
task_data = instance.data.get("task")
if not task_data:
task_data = fill_data.get("task")
for key, value in task_data.items():
fill_key = "task[{}]".format(key)
fill_pairs.append((fill_key, value))
fill_pairs.append(("task", task_data["name"]))
task_data = fill_data.get("task")
if task_data:
if (
"{task}" in message_templ
or "{Task}" in message_templ
or "{TASK}" in message_templ
):
fill_pairs.append(("task", task_data["name"]))
else:
for key, value in task_data.items():
fill_key = "task[{}]".format(key)
fill_pairs.append((fill_key, value))
self.log.debug("fill_pairs ::{}".format(fill_pairs))
multiple_case_variants = prepare_template_data(fill_pairs)
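Judging by the `{task}`/`{Task}`/`{TASK}` checks above, `prepare_template_data` expands each pair into lowercase, capitalized and uppercase variants; a presumed sketch, not verified here:

fill_pairs = [("task", "compositing")]
data = prepare_template_data(fill_pairs)
# data would contain roughly:
# {"task": "compositing", "Task": "Compositing", "TASK": "COMPOSITING"}
message = "Published for {Task}".format(**data)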

View file

@ -0,0 +1,107 @@
import os
import re
import json
import six
from openpype.settings import get_project_settings
from openpype.lib import Logger
from .anatomy import Anatomy
from .template_data import get_project_template_data
def concatenate_splitted_paths(split_paths, anatomy):
log = Logger.get_logger("concatenate_splitted_paths")
pattern_array = re.compile(r"\[.*\]")
output = []
for path_items in split_paths:
clean_items = []
if isinstance(path_items, str):
path_items = [path_items]
for path_item in path_items:
if not re.match(r"{.+}", path_item):
path_item = re.sub(pattern_array, "", path_item)
clean_items.append(path_item)
# backward compatibility
if "__project_root__" in path_items:
for root, root_path in anatomy.roots.items():
if not os.path.exists(str(root_path)):
log.debug("Root {} path path {} not exist on \
computer!".format(root, root_path))
continue
clean_items = ["{{root[{}]}}".format(root),
r"{project[name]}"] + clean_items[1:]
output.append(os.path.normpath(os.path.sep.join(clean_items)))
continue
output.append(os.path.normpath(os.path.sep.join(clean_items)))
return output
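A worked example of the function above with made-up folder names. The `anatomy` argument is only consulted by the `__project_root__` backward-compatibility branch, so it can be omitted here:

split_paths = [
    ["assets", "characters[variants]"],  # "[...]" groups are stripped
    "editorial",                         # bare strings are wrapped in a list
]
print(concatenate_splitted_paths(split_paths, anatomy=None))
# ['assets/characters', 'editorial']  (os.sep-joined and normalized, POSIX shown)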
def fill_paths(path_list, anatomy):
format_data = get_project_template_data(project_name=anatomy.project_name)
format_data["root"] = anatomy.roots
filled_paths = []
for path in path_list:
new_path = path.format(**format_data)
filled_paths.append(new_path)
return filled_paths
def create_project_folders(project_name, basic_paths=None):
log = Logger.get_logger("create_project_folders")
anatomy = Anatomy(project_name)
if basic_paths is None:
basic_paths = get_project_basic_paths(project_name)
if not basic_paths:
return
concat_paths = concatenate_splitted_paths(basic_paths, anatomy)
filled_paths = fill_paths(concat_paths, anatomy)
# Create folders
for path in filled_paths:
if os.path.exists(path):
log.debug("Folder already exists: {}".format(path))
else:
log.debug("Creating folder: {}".format(path))
os.makedirs(path)
def _list_path_items(folder_structure):
output = []
for key, value in folder_structure.items():
if not value:
output.append(key)
continue
paths = _list_path_items(value)
for path in paths:
if not isinstance(path, (list, tuple)):
path = [path]
item = [key]
item.extend(path)
output.append(item)
return output
def get_project_basic_paths(project_name):
project_settings = get_project_settings(project_name)
folder_structure = (
project_settings["global"]["project_folder_structure"]
)
if not folder_structure:
return []
if isinstance(folder_structure, six.string_types):
folder_structure = json.loads(folder_structure)
return _list_path_items(folder_structure)
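An illustration of how `_list_path_items` flattens the settings structure; the folder tree is made up:

folder_structure = {
    "assets": {"characters": {}, "props": {}},
    "editorial": {},
}
print(_list_path_items(folder_structure))
# [['assets', 'characters'], ['assets', 'props'], 'editorial']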

View file

@ -22,6 +22,8 @@ from .publish_plugins import (
)
from .lib import (
get_publish_template_name,
DiscoverResult,
publish_plugins_discover,
load_help_content_from_plugin,
@ -62,6 +64,8 @@ __all__ = (
"Extractor",
"get_publish_template_name",
"DiscoverResult",
"publish_plugins_discover",
"load_help_content_from_plugin",

View file

@ -0,0 +1,2 @@
DEFAULT_PUBLISH_TEMPLATE = "publish"
DEFAULT_HERO_PUBLISH_TEMPLATE = "hero"

View file

@ -2,6 +2,7 @@ import os
import sys
import types
import inspect
import copy
import tempfile
import xml.etree.ElementTree
@ -9,8 +10,190 @@ import six
import pyblish.plugin
import pyblish.api
from openpype.lib import Logger
from openpype.settings import get_project_settings, get_system_settings
from openpype.lib import Logger, filter_profiles
from openpype.settings import (
get_project_settings,
get_system_settings,
)
from .contants import (
DEFAULT_PUBLISH_TEMPLATE,
DEFAULT_HERO_PUBLISH_TEMPLATE,
)
def get_template_name_profiles(
project_name, project_settings=None, logger=None
):
"""Receive profiles for publish template keys.
At least one of the arguments must be passed.
Args:
project_name (str): Name of project where to look for templates.
project_settings (Dict[str, Any]): Prepared project settings.
Returns:
List[Dict[str, Any]]: Publish template profiles.
"""
if not project_name and not project_settings:
raise ValueError((
"Both project name and project settings are missing."
" At least one must be entered."
))
if not project_settings:
project_settings = get_project_settings(project_name)
profiles = (
project_settings
["global"]
["tools"]
["publish"]
["template_name_profiles"]
)
if profiles:
return copy.deepcopy(profiles)
# Use legacy approach for cases new settings are not filled yet for the
# project
legacy_profiles = (
project_settings
["global"]
["publish"]
["IntegrateAssetNew"]
["template_name_profiles"]
)
if legacy_profiles:
if not logger:
logger = Logger.get_logger("get_template_name_profiles")
logger.warning((
"Project \"{}\" is using legacy access to publish template."
" It is recommended to move settings to new location"
" 'project_settings/global/tools/publish/template_name_profiles'."
).format(project_name))
# Replace "tasks" key with "task_names"
profiles = []
for profile in copy.deepcopy(legacy_profiles):
profile["task_names"] = profile.pop("tasks", [])
profiles.append(profile)
return profiles
def get_hero_template_name_profiles(
project_name, project_settings=None, logger=None
):
"""Receive profiles for hero publish template keys.
At least one of the arguments must be passed.
Args:
project_name (str): Name of project where to look for templates.
project_settings (Dict[str, Any]): Prepared project settings.
Returns:
List[Dict[str, Any]]: Publish template profiles.
"""
if not project_name and not project_settings:
raise ValueError((
"Both project name and project settings are missing."
" At least one must be entered."
))
if not project_settings:
project_settings = get_project_settings(project_name)
profiles = (
project_settings
["global"]
["tools"]
["publish"]
["hero_template_name_profiles"]
)
if profiles:
return copy.deepcopy(profiles)
# Use legacy approach for cases new settings are not filled yet for the
# project
legacy_profiles = copy.deepcopy(
project_settings
["global"]
["publish"]
["IntegrateHeroVersion"]
["template_name_profiles"]
)
if legacy_profiles:
if not logger:
logger = Logger.get_logger("get_hero_template_name_profiles")
logger.warning((
"Project \"{}\" is using legacy access to hero publish template."
" It is recommended to move settings to new location"
" 'project_settings/global/tools/publish/"
"hero_template_name_profiles'."
).format(project_name))
return legacy_profiles
def get_publish_template_name(
project_name,
host_name,
family,
task_name,
task_type,
project_settings=None,
hero=False,
logger=None
):
"""Get template name which should be used for passed context.
Publish templates are filtered by host name, family, task name and
task type.
Default template, used if profiles are not available or the matching
profile has an empty value, is defined by the 'DEFAULT_PUBLISH_TEMPLATE'
constant.
Args:
project_name (str): Name of project where to look for settings.
host_name (str): Name of host integration.
family (str): Family for which should be found template.
task_name (str): Task name on which the instance is working.
task_type (str): Task type on which the instance is working.
project_settings (Dict[str, Any]): Prepared project settings.
logger (logging.Logger): Custom logger used for 'filter_profiles'
function.
Returns:
str: Template name which should be used for integration.
"""
template = None
filter_criteria = {
"hosts": host_name,
"families": family,
"task_names": task_name,
"task_types": task_type,
}
if hero:
default_template = DEFAULT_HERO_PUBLISH_TEMPLATE
profiles = get_hero_template_name_profiles(
project_name, project_settings, logger
)
else:
profiles = get_template_name_profiles(
project_name, project_settings, logger
)
default_template = DEFAULT_PUBLISH_TEMPLATE
profile = filter_profiles(profiles, filter_criteria, logger=logger)
if profile:
template = profile["template_name"]
return template or default_template
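A hypothetical call to the function above; all values are made up:

template_name = get_publish_template_name(
    project_name="MyProject",
    host_name="maya",
    family="render",
    task_name="lighting",
    task_type="Lighting",
)
# Falls back to "publish" (or "hero" when hero=True) if no profile matches.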
class DiscoverResult:

View file

@ -53,7 +53,7 @@ def get_project_template_data(project_doc=None, project_name=None):
project_name = project_doc["name"]
if not project_doc:
project_code = get_project(project_name, fields=["data.code"])
project_doc = get_project(project_name, fields=["data.code"])
project_code = project_doc.get("data", {}).get("code")
return {

View file

@ -9,6 +9,8 @@ from .path_resolving import (
get_custom_workfile_template,
get_custom_workfile_template_by_string_context,
create_workdir_extra_folders,
)
from .build_workfile import BuildWorkfile
@ -26,5 +28,7 @@ __all__ = (
"get_custom_workfile_template",
"get_custom_workfile_template_by_string_context",
"create_workdir_extra_folders",
"BuildWorkfile",
)

View file

@ -467,3 +467,60 @@ def get_custom_workfile_template_by_string_context(
return get_custom_workfile_template(
project_doc, asset_doc, task_name, host_name, anatomy, project_settings
)
def create_workdir_extra_folders(
workdir,
host_name,
task_type,
task_name,
project_name,
project_settings=None
):
"""Create extra folders in work directory based on context.
Args:
workdir (str): Path to workdir where workfiles is stored.
host_name (str): Name of host implementation.
task_type (str): Type of task for which extra folders should be
created.
task_name (str): Name of task for which extra folders should be
created.
project_name (str): Name of project on which task is.
project_settings (dict): Prepared project settings. Are loaded if not
passed.
"""
# Load project settings if not set
if not project_settings:
project_settings = get_project_settings(project_name)
# Load extra folders profiles
extra_folders_profiles = (
project_settings["global"]["tools"]["Workfiles"]["extra_folders"]
)
# Skip if are empty
if not extra_folders_profiles:
return
# Prepare profiles filters
filter_data = {
"task_types": task_type,
"task_names": task_name,
"hosts": host_name
}
profile = filter_profiles(extra_folders_profiles, filter_data)
if profile is None:
return
for subfolder in profile["folders"]:
# Make sure backslashes are converted to forwards slashes
# and does not start with slash
subfolder = subfolder.replace("\\", "/").lstrip("/")
# Skip empty strings
if not subfolder:
continue
fullpath = os.path.join(workdir, subfolder)
if not os.path.exists(fullpath):
os.makedirs(fullpath)
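A hypothetical call to the relocated helper above; paths and names are made up:

create_workdir_extra_folders(
    workdir="/proj/shots/sh010/work/comp",
    host_name="nuke",
    task_type="Compositing",
    task_name="comp",
    project_name="MyProject",
)
# Creates each configured subfolder under the workdir when a profile matches.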

View file

@ -0,0 +1,105 @@
import pyblish.api
from openpype.client import (
get_last_version_by_subset_name,
get_representations,
)
from openpype.pipeline import (
legacy_io,
get_representation_path,
)
class CollectAudio(pyblish.api.InstancePlugin):
"""Collect asset's last published audio.
The audio subset name searched for is defined in:
project settings > Collect Audio
"""
label = "Collect Asset Audio"
order = pyblish.api.CollectorOrder + 0.1
families = ["review"]
hosts = [
"nuke",
"maya",
"shell",
"hiero",
"premiere",
"harmony",
"traypublisher",
"standalonepublisher",
"fusion",
"tvpaint",
"resolve",
"webpublisher",
"aftereffects",
"flame",
"unreal"
]
audio_subset_name = "audioMain"
def process(self, instance):
if instance.data.get("audio"):
self.log.info(
"Skipping Audio collecion. It is already collected"
)
return
# Add audio to instance if exists.
self.log.info((
"Searching for audio subset '{subset}'"
" in asset '{asset}'"
).format(
subset=self.audio_subset_name,
asset=instance.data["asset"]
))
repre_doc = self._get_repre_doc(instance)
# Add audio to instance if representation was found
if repre_doc:
instance.data["audio"] = [{
"offset": 0,
"filename": get_representation_path(repre_doc)
}]
self.log.info("Audio Data added to instance ...")
def _get_repre_doc(self, instance):
cache = instance.context.data.get("__cache_asset_audio")
if cache is None:
cache = {}
instance.context.data["__cache_asset_audio"] = cache
asset_name = instance.data["asset"]
# first try to get it from cache
if asset_name in cache:
return cache[asset_name]
project_name = legacy_io.active_project()
# Find latest versions document
last_version_doc = get_last_version_by_subset_name(
project_name,
self.audio_subset_name,
asset_name=asset_name,
fields=["_id"]
)
repre_doc = None
if last_version_doc:
# Try to find its representation (Expected there is only one)
repre_docs = list(get_representations(
project_name, version_ids=[last_version_doc["_id"]]
))
if not repre_docs:
self.log.warning(
"Version document does not contain any representations"
)
else:
repre_doc = repre_docs[0]
# update cache
cache[asset_name] = repre_doc
return repre_doc

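The lookup in `_get_repre_doc` boils down to two client queries; a standalone sketch of the same calls, with hypothetical project and asset names:

from openpype.client import (
    get_last_version_by_subset_name,
    get_representations,
)

project_name = "demo"   # in the plugin: legacy_io.active_project()
asset_name = "sh010"    # in the plugin: instance.data["asset"]

repre_doc = None
version_doc = get_last_version_by_subset_name(
    project_name, "audioMain", asset_name=asset_name, fields=["_id"]
)
if version_doc:
    repre_docs = list(get_representations(
        project_name, version_ids=[version_doc["_id"]]
    ))
    if repre_docs:
        repre_doc = repre_docs[0]

The per-asset cache in `instance.context.data` keeps repeated instances on the same asset from re-running these queries.
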
View file

@@ -0,0 +1,47 @@
import pyblish.api
from bson.objectid import ObjectId
from openpype.client import get_representations
class CollectInputRepresentationsToVersions(pyblish.api.ContextPlugin):
"""Converts collected input representations to input versions.
Any data in `instance.data["inputRepresentations"]` gets converted into
`instance.data["inputVersions"]` as supported in OpenPype v3.
"""
    # This is a ContextPlugin so the database is queried only once for the
    # conversion of representation ids to version ids (optimization)
label = "Input Representations to Versions"
order = pyblish.api.CollectorOrder + 0.499
hosts = ["*"]
def process(self, context):
# Query all version ids for representation ids from the database once
representations = set()
for instance in context:
inst_repre = instance.data.get("inputRepresentations", [])
representations.update(inst_repre)
representations_docs = get_representations(
project_name=context.data["projectEntity"]["name"],
representation_ids=representations,
fields=["_id", "parent"])
representation_id_to_version_id = {
repre["_id"]: repre["parent"] for repre in representations_docs
}
for instance in context:
inst_repre = instance.data.get("inputRepresentations", [])
if not inst_repre:
continue
input_versions = instance.data.get("inputVersions", [])
for repre_id in inst_repre:
repre_id = ObjectId(repre_id)
version_id = representation_id_to_version_id[repre_id]
input_versions.append(version_id)
instance.data["inputVersions"] = input_versions

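The conversion itself is a plain id lookup; a self-contained sketch with made-up document ids:

from bson.objectid import ObjectId

# Stand-ins for documents returned by get_representations with
# fields=["_id", "parent"]; the hex ids are hypothetical.
repre_docs = [
    {"_id": ObjectId("633f0c2e9c1d4e3f2a1b0c9d"),
     "parent": ObjectId("633f0c2e9c1d4e3f2a1b0c9e")},
]
repre_id_to_version_id = {
    doc["_id"]: doc["parent"] for doc in repre_docs
}
# A collected string id resolves to its parent version id.
version_id = repre_id_to_version_id[ObjectId("633f0c2e9c1d4e3f2a1b0c9d")]
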
View file

@@ -29,6 +29,7 @@ class CollectOtioFrameRanges(pyblish.api.InstancePlugin):
# get basic variables
otio_clip = instance.data["otioClip"]
workfile_start = instance.data["workfileFrameStart"]
workfile_source_duration = instance.data.get("notRetimedFramerange")
# get ranges
otio_tl_range = otio_clip.range_in_parent()
@@ -54,6 +55,11 @@ class CollectOtioFrameRanges(pyblish.api.InstancePlugin):
frame_end = frame_start + otio.opentime.to_frames(
otio_tl_range.duration, otio_tl_range.duration.rate) - 1
        # In case the clip is retimed but the frame range should not be retimed
if workfile_source_duration:
frame_end = frame_start + otio.opentime.to_frames(
otio_src_range.duration, otio_src_range.duration.rate) - 1
data = {
"frameStart": frame_start,
"frameEnd": frame_end,

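To illustrate the added branch: with `notRetimedFramerange` set, the frame end comes from the source-range duration instead of the (retimed) timeline-range duration. A sketch with made-up values:

import opentimelineio as otio

frame_start = 1001
rate = 24.0
# Hypothetical durations: the clip is retimed to 36 frames on the
# timeline but covers 48 frames of source media.
timeline_duration = otio.opentime.RationalTime(36, rate)
source_duration = otio.opentime.RationalTime(48, rate)

retimed_end = frame_start + otio.opentime.to_frames(
    timeline_duration, rate) - 1    # 1036
not_retimed_end = frame_start + otio.opentime.to_frames(
    source_duration, rate) - 1      # 1048
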
View file

@@ -488,12 +488,6 @@ class ExtractBurnin(publish.Extractor):
"frame_end_handle": frame_end_handle
}
# use explicit username for webpublishes as rewriting
# OPENPYPE_USERNAME might have side effects
webpublish_user_name = os.environ.get("WEBPUBLISH_OPENPYPE_USERNAME")
if webpublish_user_name:
burnin_data["username"] = webpublish_user_name
self.log.debug(
"Basic burnin_data: {}".format(json.dumps(burnin_data, indent=4))
)

View file

@@ -5,6 +5,9 @@ import copy
import clique
import six
from bson.objectid import ObjectId
import pyblish.api
from openpype.client.operations import (
OperationsSession,
new_subset_document,
@@ -14,8 +17,6 @@ from openpype.client.operations import (
prepare_version_update_data,
prepare_representation_update_data,
)
from bson.objectid import ObjectId
import pyblish.api
from openpype.client import (
get_representations,
@@ -23,10 +24,12 @@ from openpype.client import (
get_version_by_name,
)
from openpype.lib import source_hash
from openpype.lib.profiles_filtering import filter_profiles
from openpype.lib.file_transaction import FileTransaction
from openpype.pipeline import legacy_io
from openpype.pipeline.publish import KnownPublishError
from openpype.pipeline.publish import (
KnownPublishError,
get_publish_template_name,
)
log = logging.getLogger(__name__)
@@ -135,7 +138,7 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
# the database even if not used by the destination template
db_representation_context_keys = [
"project", "asset", "task", "subset", "version", "representation",
"family", "hierarchy", "username", "output"
"family", "hierarchy", "username", "user", "output"
]
skip_host_families = []
@@ -792,52 +795,26 @@ class IntegrateAsset(pyblish.api.InstancePlugin):
def get_template_name(self, instance):
"""Return anatomy template name to use for integration"""
# Define publish template name from profiles
filter_criteria = self.get_profile_filter_criteria(instance)
template_name_profiles = self._get_template_name_profiles(instance)
profile = filter_profiles(
template_name_profiles,
filter_criteria,
logger=self.log
)
if profile:
return profile["template_name"]
return self.default_template_name
def _get_template_name_profiles(self, instance):
"""Receive profiles for publish template keys.
Reuse template name profiles from legacy integrator. Goal is to move
the profile settings out of plugin settings but until that happens we
want to be able set it at one place and don't break backwards
compatibility (more then once).
"""
return (
instance.context.data["project_settings"]
["global"]
["publish"]
["IntegrateAssetNew"]
["template_name_profiles"]
)
def get_profile_filter_criteria(self, instance):
"""Return filter criteria for `filter_profiles`"""
# Anatomy data is pre-filled by Collectors
anatomy_data = instance.data["anatomyData"]
project_name = legacy_io.active_project()
# Task can be optional in anatomy data
task = anatomy_data.get("task", {})
host_name = instance.context.data["hostName"]
anatomy_data = instance.data["anatomyData"]
family = anatomy_data["family"]
task_info = anatomy_data.get("task") or {}
# Return filter criteria
return {
"families": anatomy_data["family"],
"tasks": task.get("name"),
"task_types": task.get("type"),
"hosts": instance.context.data["hostName"],
}
return get_publish_template_name(
project_name,
host_name,
family,
task_name=task_info.get("name"),
task_type=task_info.get("type"),
project_settings=instance.context.data["project_settings"],
logger=self.log
)
def get_rootless_path(self, anatomy, path):
"""Returns, if possible, path without absolute portion from root

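Both integrators now defer to `openpype.pipeline.publish.get_publish_template_name`; a hedged sketch of the call, with the argument order inferred from the call sites in this diff and hypothetical context values:

from openpype.settings import get_project_settings
from openpype.pipeline.publish import get_publish_template_name

project_name = "demo"  # hypothetical project
project_settings = get_project_settings(project_name)

template_name = get_publish_template_name(
    project_name,
    "maya",      # host name
    "model",     # family
    task_name="modeling",
    task_type="Modeling",
    project_settings=project_settings,
)

`IntegrateHeroVersion` below passes `hero=True` to resolve the hero template instead.
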
View file

@@ -14,14 +14,12 @@ from openpype.client import (
get_archived_representations,
get_representations,
)
from openpype.lib import (
create_hard_link,
filter_profiles
)
from openpype.lib import create_hard_link
from openpype.pipeline import (
schema,
legacy_io,
)
from openpype.pipeline.publish import get_publish_template_name
class IntegrateHeroVersion(pyblish.api.InstancePlugin):
@@ -46,7 +44,7 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
ignored_representation_names = []
db_representation_context_keys = [
"project", "asset", "task", "subset", "representation",
"family", "hierarchy", "task", "username"
"family", "hierarchy", "task", "username", "user"
]
    # QUESTION/TODO: this process should happen on the server if it crashed
    # due to a permissions error on files (files were in use or the user
    # didn't have perms)
@@ -68,10 +66,11 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
)
return
template_key = self._get_template_key(instance)
anatomy = instance.context.data["anatomy"]
project_name = anatomy.project_name
template_key = self._get_template_key(project_name, instance)
if template_key not in anatomy.templates:
self.log.warning((
"!!! Anatomy of project \"{}\" does not have set"
@@ -527,30 +526,24 @@ class IntegrateHeroVersion(pyblish.api.InstancePlugin):
return publish_folder
def _get_template_key(self, instance):
def _get_template_key(self, project_name, instance):
anatomy_data = instance.data["anatomyData"]
task_data = anatomy_data.get("task") or {}
task_name = task_data.get("name")
task_type = task_data.get("type")
task_info = anatomy_data.get("task") or {}
host_name = instance.context.data["hostName"]
        # TODO: raise an error if the hero template is not set?
family = self.main_family_from_instance(instance)
key_values = {
"families": family,
"task_names": task_name,
"task_types": task_type,
"hosts": host_name
}
profile = filter_profiles(
self.template_name_profiles,
key_values,
return get_publish_template_name(
project_name,
host_name,
family,
task_info.get("name"),
task_info.get("type"),
project_settings=instance.context.data["project_settings"],
hero=True,
logger=self.log
)
if profile:
template_name = profile["template_name"]
else:
template_name = self._default_template_name
return template_name
def main_family_from_instance(self, instance):
"""Returns main family of entered instance."""

View file

@@ -15,7 +15,6 @@ from bson.objectid import ObjectId
from pymongo import DeleteOne, InsertOne
import pyblish.api
import openpype.api
from openpype.client import (
get_asset_by_name,
get_subset_by_id,
@@ -25,14 +24,17 @@ from openpype.client import (
get_representations,
get_archived_representations,
)
from openpype.lib.profiles_filtering import filter_profiles
from openpype.lib import (
prepare_template_data,
create_hard_link,
StringTemplate,
TemplateUnsolved
TemplateUnsolved,
source_hash,
filter_profiles,
get_local_site_id,
)
from openpype.pipeline import legacy_io
from openpype.pipeline.publish import get_publish_template_name
# this is needed until speedcopy for linux is fixed
if sys.platform == "win32":
@@ -127,7 +129,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
exclude_families = ["render.farm"]
db_representation_context_keys = [
"project", "asset", "task", "subset", "version", "representation",
"family", "hierarchy", "task", "username"
"family", "hierarchy", "task", "username", "user"
]
default_template_name = "publish"
@@ -138,7 +140,6 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
integrated_file_sizes = {}
# Attributes set by settings
template_name_profiles = None
subset_grouping_profiles = None
def process(self, instance):
@@ -388,22 +389,16 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
family = self.main_family_from_instance(instance)
key_values = {
"families": family,
"tasks": task_name,
"hosts": instance.context.data["hostName"],
"task_types": task_type
}
profile = filter_profiles(
self.template_name_profiles,
key_values,
template_name = get_publish_template_name(
project_name,
instance.context.data["hostName"],
family,
task_name=task_info.get("name"),
task_type=task_info.get("type"),
project_settings=instance.context.data["project_settings"],
logger=self.log
)
template_name = "publish"
if profile:
template_name = profile["template_name"]
published_representations = {}
for idx, repre in enumerate(repres):
published_files = []
@@ -1058,7 +1053,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
for _src, dest in resources:
path = self.get_rootless_path(anatomy, dest)
dest = self.get_dest_temp_url(dest)
file_hash = openpype.api.source_hash(dest)
file_hash = source_hash(dest)
if self.TMP_FILE_EXT and \
',{}'.format(self.TMP_FILE_EXT) in file_hash:
file_hash = file_hash.replace(',{}'.format(self.TMP_FILE_EXT),
@@ -1168,7 +1163,7 @@ class IntegrateAssetNew(pyblish.api.InstancePlugin):
def _get_sites(self, sync_project_presets):
"""Returns tuple (local_site, remote_site)"""
local_site_id = openpype.api.get_local_site_id()
local_site_id = get_local_site_id()
local_site = sync_project_presets["config"]. \
get("active_site", "studio").strip()

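The deprecated `openpype.api` aliases are swapped for direct `openpype.lib` imports; a sketch of the equivalent calls, with a hypothetical file path:

from openpype.lib import source_hash, get_local_site_id

file_hash = source_hash("/projects/demo/publish/render_v001.exr")
local_site_id = get_local_site_id()
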
View file

@@ -455,7 +455,7 @@
"family_mapping": {
"camera": "cam",
"look": "look",
"mayaascii": "scene",
"mayaAscii": "scene",
"model": "geo",
"rig": "rig",
"setdress": "setdress",

View file

@@ -3,6 +3,10 @@
"CollectAnatomyInstanceData": {
"follow_workfile_version": false
},
"CollectAudio": {
"enabled": false,
"audio_subset_name": "audioMain"
},
"CollectSceneVersion": {
"hosts": [
"aftereffects",
@@ -414,6 +418,10 @@
"filter_families": []
}
]
},
"publish": {
"template_name_profiles": [],
"hero_template_name_profiles": []
}
},
"project_folder_structure": "{\"__project_root__\": {\"prod\": {}, \"resources\": {\"footage\": {\"plates\": {}, \"offline\": {}}, \"audio\": {}, \"art_dept\": {}}, \"editorial\": {}, \"assets\": {\"characters\": {}, \"locations\": {}}, \"shots\": {}}}",

Some files were not shown because too many files have changed in this diff.