diff --git a/.github/workflows/project_actions.yml b/.github/workflows/project_actions.yml new file mode 100644 index 0000000000..9c8c9da7dd --- /dev/null +++ b/.github/workflows/project_actions.yml @@ -0,0 +1,23 @@ +name: project-actions + +on: + pull_request: + types: [review_requested, closed] + pull_request_review: + types: [submitted] + +jobs: + pr_review_requested: + name: pr_review_requested + runs-on: ubuntu-latest + if: github.event_name == 'pull_request' && github.event.action == 'review_requested' + steps: + - name: Move PR to 'Change Requested' + uses: leonsteinhaeuser/project-beta-automations@v2.1.0 + with: + gh_token: ${{ secrets.YNPUT_BOT_TOKEN }} + user: ${{ secrets.CI_USER }} + organization: ynput + project_id: 11 + resource_node_id: ${{ github.event.pull_request.node_id }} + status_value: Change Requested diff --git a/ARCHITECTURE.md b/ARCHITECTURE.md new file mode 100644 index 0000000000..912780d803 --- /dev/null +++ b/ARCHITECTURE.md @@ -0,0 +1,77 @@ +# Architecture + +OpenPype is a monolithic Python project that bundles several parts. This document gives a bird's-eye overview of the project and, to a certain degree, of each of the sub-projects. +The current file structure looks like this: + +``` +. +├── common - Code in this folder is the backend portion of the Addon distribution logic for the v4 server. +├── docs - Documentation of the source code. +├── igniter - The OpenPype bootstrapper; deals with running version resolution and setting up the connection to MongoDB. +├── openpype - The actual OpenPype core package. +├── schema - Collection of JSON files describing object schemas. This follows Avalon's convention. +├── tests - Integration and unit tests. +├── tools - Convenience scripts to perform common actions (in both bash and ps1). +├── vendor - When using the igniter, it deploys third-party tools in here, such as ffmpeg. +└── website - Source files for https://openpype.io/, which is built with Docusaurus (https://docusaurus.io/). +``` + +The core functionality of the pipeline can be found in `igniter` and `openpype`, which in turn rely on the `schema` files. Whenever you build (or download a pre-built) version of OpenPype, these two are bundled in it, and `Igniter` is the entry point. + + +## Igniter + +It's the setup and update tool for OpenPype. Unless you want to package `openpype` separately and deal with all the config manually, this will most likely be your entry point. + +``` +igniter/ +├── bootstrap_repos.py - Module that will find or install OpenPype versions in the system. +├── __init__.py - Igniter entry point. +├── install_dialog.py - Shows the dialog for choosing the central pype repository. +├── install_thread.py - Threading helpers for the install process. +├── __main__.py - Module execution entry point (the `python -m igniter` counterpart of `__init__.py`). +├── message_dialog.py - Qt Dialog with a message and "Ok" button. +├── nice_progress_bar.py - Fancy Qt progress bar. +├── splash.txt - ASCII art for the terminal installer. +├── stylesheet.css - Installer Qt styles. +├── terminal_splash.py - Terminal installer animation, relies on `splash.txt`. +├── tools.py - Collection of methods that don't fit in other modules. +├── update_thread.py - Threading helper to update existing OpenPype installs. +├── update_window.py - Qt UI to update OpenPype installs. +├── user_settings.py - Interface for the OpenPype user settings. +└── version.py - Igniter's version number.
+``` + +## OpenPype + +This is the main package of the OpenPype logic. It could be loosely described as a combination of [Avalon](https://getavalon.github.io), [Pyblish](https://pyblish.com/) and glue around those, with custom OpenPype-only elements. Things are being moved around to better prepare for V4, which will be released under a new name: AYON. + +``` +openpype/ +├── client - Interface for the MongoDB. +├── hooks - Hooks to be executed on certain OpenPype Applications defined in `openpype.lib.applications`. +├── host - Base class for the different hosts. +├── hosts - Integration with the different DCCs (hosts) using the `host` base class. +├── lib - Libraries that stitch together the package; some have been moved into other parts. +├── modules - OpenPype modules contain separated logic for a specific kind of implementation, such as the Ftrack connection and its Python API. +├── pipeline - Core of the OpenPype pipeline, handles creation of data, publishing, etc. +├── plugins - Global/core plugins for the loader and publisher tools. +├── resources - Icons, fonts, etc. +├── scripts - Loose scripts that get run by tools/publishers. +├── settings - OpenPype settings interface. +├── style - Qt styling. +├── tests - Unit tests. +├── tools - Core tools, check out https://openpype.io/docs/artist_tools. +├── vendor - Vendored required Python packages. +├── widgets - Common re-usable Qt Widgets. +├── action.py - LEGACY: Pyblish actions; these now live in `openpype.pipeline.publish.action`. +├── cli.py - Command line interface, leverages `click`. +├── __init__.py - Sets two constants. +├── __main__.py - Entry point, calls `cli.py`. +├── plugin.py - Pyblish plugins. +├── pype_commands.py - Implementation of OpenPype commands. +└── version.py - Current version number. +``` + + + diff --git a/openpype/hooks/pre_add_last_workfile_arg.py b/openpype/hooks/pre_add_last_workfile_arg.py index 1c8746c559..2558daef30 100644 --- a/openpype/hooks/pre_add_last_workfile_arg.py +++ b/openpype/hooks/pre_add_last_workfile_arg.py @@ -14,6 +14,7 @@ class AddLastWorkfileToLaunchArgs(PreLaunchHook): # Execute after workfile template copy order = 10 app_groups = [ + "3dsmax", "maya", "nuke", "nukex", diff --git a/openpype/hooks/pre_create_extra_workdir_folders.py b/openpype/hooks/pre_create_extra_workdir_folders.py index c5af620c87..8856281120 100644 --- a/openpype/hooks/pre_create_extra_workdir_folders.py +++ b/openpype/hooks/pre_create_extra_workdir_folders.py @@ -3,10 +3,13 @@ from openpype.lib import PreLaunchHook from openpype.pipeline.workfile import create_workdir_extra_folders -class AddLastWorkfileToLaunchArgs(PreLaunchHook): - """Add last workfile path to launch arguments. +class CreateWorkdirExtraFolders(PreLaunchHook): + """Create extra folders for the work directory. + + Based on the `project_settings/global/tools/Workfiles/extra_folders` + setting, profile filtering will decide whether extra folders need to be + created in the work directory. - This is not possible to do for all applications the same way. """ # Execute after workfile template copy diff --git a/openpype/hooks/pre_foundry_apps.py b/openpype/hooks/pre_foundry_apps.py index 2092d5025d..21ec8e7881 100644 --- a/openpype/hooks/pre_foundry_apps.py +++ b/openpype/hooks/pre_foundry_apps.py @@ -7,7 +7,7 @@ class LaunchFoundryAppsWindows(PreLaunchHook): Nuke is executed "like" python process so it is required to pass `CREATE_NEW_CONSOLE` flag on windows to trigger creation of new console.
- At the same time the newly created console won't create it's own stdout + At the same time the newly created console won't create its own stdout and stderr handlers so they should not be redirected to DEVNULL. """ @@ -18,7 +18,7 @@ class LaunchFoundryAppsWindows(PreLaunchHook): def execute(self): # Change `creationflags` to CREATE_NEW_CONSOLE - # - on Windows will nuke create new window using it's console + # - on Windows nuke will create new window using its console # Set `stdout` and `stderr` to None so new created console does not # have redirected output to DEVNULL in build self.launch_context.kwargs.update({ diff --git a/openpype/hosts/blender/api/__init__.py b/openpype/hosts/blender/api/__init__.py index e017d74d91..75a11affde 100644 --- a/openpype/hosts/blender/api/__init__.py +++ b/openpype/hosts/blender/api/__init__.py @@ -31,10 +31,13 @@ from .lib import ( lsattrs, read, maintained_selection, + maintained_time, get_selection, # unique_name, ) +from .capture import capture + __all__ = [ "install", @@ -56,9 +59,11 @@ __all__ = [ # Utility functions "maintained_selection", + "maintained_time", "lsattr", "lsattrs", "read", "get_selection", + "capture", # "unique_name", ] diff --git a/openpype/hosts/blender/api/capture.py b/openpype/hosts/blender/api/capture.py new file mode 100644 index 0000000000..849f8ee629 --- /dev/null +++ b/openpype/hosts/blender/api/capture.py @@ -0,0 +1,278 @@ + +"""Blender Capture +Playblasting with independent viewport, camera and display options +""" +import contextlib +import bpy + +from .lib import maintained_time +from .plugin import deselect_all, create_blender_context + + +def capture( + camera=None, + width=None, + height=None, + filename=None, + start_frame=None, + end_frame=None, + step_frame=None, + sound=None, + isolate=None, + maintain_aspect_ratio=True, + overwrite=False, + image_settings=None, + display_options=None +): + """Playblast in an independent window + Arguments: + camera (str, optional): Name of camera, defaults to "Camera" + width (int, optional): Width of output in pixels + height (int, optional): Height of output in pixels + filename (str, optional): Name of output file path. Defaults to current + render output path. + start_frame (int, optional): Defaults to current start frame. + end_frame (int, optional): Defaults to current end frame. + step_frame (int, optional): Defaults to 1. + sound (str, optional): Specify the sound node to be used during + playblast. When None (default) no sound will be used. + isolate (list): List of nodes to isolate upon capturing + maintain_aspect_ratio (bool, optional): Modify height in order to + maintain aspect ratio. + overwrite (bool, optional): Whether or not to overwrite if file + already exists. If disabled and the file exists, an error will be + raised. + image_settings (dict, optional): Supplied image settings for render, + using `ImageSettings` + display_options (dict, optional): Supplied display options for render + """ + + scene = bpy.context.scene + camera = camera or "Camera" + + # Ensure camera exists. + if camera not in scene.objects and camera != "AUTO": + raise RuntimeError("Camera does not exist: {0}".format(camera)) + + # Ensure resolution. + if width and height: + maintain_aspect_ratio = False + width = width or scene.render.resolution_x + height = height or scene.render.resolution_y + if maintain_aspect_ratio: + ratio = scene.render.resolution_x / scene.render.resolution_y + height = round(width / ratio) + + # Get frame range.
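+    # Fall back to the scene's timeline values (and a step of 1) when no explicit range is given.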
+ if start_frame is None: + start_frame = scene.frame_start + if end_frame is None: + end_frame = scene.frame_end + if step_frame is None: + step_frame = 1 + frame_range = (start_frame, end_frame, step_frame) + + if filename is None: + filename = scene.render.filepath + + render_options = { + "filepath": "{}.".format(filename.rstrip(".")), + "resolution_x": width, + "resolution_y": height, + "use_overwrite": overwrite, + } + + with _independent_window() as window: + + applied_view(window, camera, isolate, options=display_options) + + with contextlib.ExitStack() as stack: + stack.enter_context(maintain_camera(window, camera)) + stack.enter_context(applied_frame_range(window, *frame_range)) + stack.enter_context(applied_render_options(window, render_options)) + stack.enter_context(applied_image_settings(window, image_settings)) + stack.enter_context(maintained_time()) + + bpy.ops.render.opengl( + animation=True, + render_keyed_only=False, + sequencer=False, + write_still=False, + view_context=True + ) + + return filename + + +ImageSettings = { + "file_format": "FFMPEG", + "color_mode": "RGB", + "ffmpeg": { + "format": "QUICKTIME", + "use_autosplit": False, + "codec": "H264", + "constant_rate_factor": "MEDIUM", + "gopsize": 18, + "use_max_b_frames": False, + }, +} + + +def isolate_objects(window, objects): + """Isolate selection""" + deselect_all() + + for obj in objects: + obj.select_set(True) + + context = create_blender_context(selected=objects, window=window) + + bpy.ops.view3d.view_axis(context, type="FRONT") + bpy.ops.view3d.localview(context) + + deselect_all() + + +def _apply_options(entity, options): + for option, value in options.items(): + if isinstance(value, dict): + _apply_options(getattr(entity, option), value) + else: + setattr(entity, option, value) + + +def applied_view(window, camera, isolate=None, options=None): + """Apply view options to window.""" + area = window.screen.areas[0] + space = area.spaces[0] + + area.ui_type = "VIEW_3D" + + meshes = [obj for obj in window.scene.objects if obj.type == "MESH"] + + if camera == "AUTO": + space.region_3d.view_perspective = "ORTHO" + isolate_objects(window, isolate or meshes) + else: + isolate_objects(window, isolate or meshes) + space.camera = window.scene.objects.get(camera) + space.region_3d.view_perspective = "CAMERA" + + if isinstance(options, dict): + _apply_options(space, options) + else: + space.shading.type = "SOLID" + space.shading.color_type = "MATERIAL" + space.show_gizmo = False + space.overlay.show_overlays = False + + +@contextlib.contextmanager +def applied_frame_range(window, start, end, step): + """Context manager for setting frame range.""" + # Store current frame range + current_frame_start = window.scene.frame_start + current_frame_end = window.scene.frame_end + current_frame_step = window.scene.frame_step + # Apply frame range + window.scene.frame_start = start + window.scene.frame_end = end + window.scene.frame_step = step + try: + yield + finally: + # Restore frame range + window.scene.frame_start = current_frame_start + window.scene.frame_end = current_frame_end + window.scene.frame_step = current_frame_step + + +@contextlib.contextmanager +def applied_render_options(window, options): + """Context manager for setting render options.""" + render = window.scene.render + + # Store current settings + original = {} + for opt in options.copy(): + try: + original[opt] = getattr(render, opt) + except ValueError: + options.pop(opt) + + # Apply settings + _apply_options(render, options) + + try: + yield + finally: + 
# Restore previous settings + _apply_options(render, original) + + +@contextlib.contextmanager +def applied_image_settings(window, options): + """Context manager to override image settings.""" + + options = options or ImageSettings.copy() + ffmpeg = options.pop("ffmpeg", {}) + render = window.scene.render + + # Store current image settings + original = {} + for opt in options.copy(): + try: + original[opt] = getattr(render.image_settings, opt) + except ValueError: + options.pop(opt) + + # Store current ffmpeg settings + original_ffmpeg = {} + for opt in ffmpeg.copy(): + try: + original_ffmpeg[opt] = getattr(render.ffmpeg, opt) + except ValueError: + ffmpeg.pop(opt) + + # Apply image settings + for opt, value in options.items(): + setattr(render.image_settings, opt, value) + + # Apply ffmpeg settings + for opt, value in ffmpeg.items(): + setattr(render.ffmpeg, opt, value) + + try: + yield + finally: + # Restore previous settings + for opt, value in original.items(): + setattr(render.image_settings, opt, value) + for opt, value in original_ffmpeg.items(): + setattr(render.ffmpeg, opt, value) + + +@contextlib.contextmanager +def maintain_camera(window, camera): + """Context manager to override camera.""" + current_camera = window.scene.camera + if camera in window.scene.objects: + window.scene.camera = window.scene.objects.get(camera) + try: + yield + finally: + window.scene.camera = current_camera + + +@contextlib.contextmanager +def _independent_window(): + """Create capture-window context.""" + context = create_blender_context() + current_windows = set(bpy.context.window_manager.windows) + bpy.ops.wm.window_new(context) + window = list(set(bpy.context.window_manager.windows) - current_windows)[0] + context["window"] = window + try: + yield window + finally: + bpy.ops.wm.window_close(context) diff --git a/openpype/hosts/blender/api/lib.py b/openpype/hosts/blender/api/lib.py index 05912885f7..6526f1fb87 100644 --- a/openpype/hosts/blender/api/lib.py +++ b/openpype/hosts/blender/api/lib.py @@ -284,3 +284,13 @@ def maintained_selection(): # This could happen if the active node was deleted during the # context. log.exception("Failed to set active object.") + + +@contextlib.contextmanager +def maintained_time(): + """Maintain current frame during context.""" + current_time = bpy.context.scene.frame_current + try: + yield + finally: + bpy.context.scene.frame_current = current_time diff --git a/openpype/hosts/blender/api/ops.py b/openpype/hosts/blender/api/ops.py index b1fa13acb9..158c32fb5a 100644 --- a/openpype/hosts/blender/api/ops.py +++ b/openpype/hosts/blender/api/ops.py @@ -84,11 +84,11 @@ class MainThreadItem: self.kwargs = kwargs def execute(self): - """Execute callback and store it's result. + """Execute callback and store its result. Method must be called from main thread. Item is marked as `done` when callback execution finished. Store output of callback of exception - information when callback raise one. + information when callback raises one. 
""" print("Executing process in main thread") if self.done: diff --git a/openpype/hosts/blender/api/plugin.py b/openpype/hosts/blender/api/plugin.py index c59be8d7ff..1274795c6b 100644 --- a/openpype/hosts/blender/api/plugin.py +++ b/openpype/hosts/blender/api/plugin.py @@ -62,7 +62,8 @@ def prepare_data(data, container_name=None): def create_blender_context(active: Optional[bpy.types.Object] = None, - selected: Optional[bpy.types.Object] = None,): + selected: Optional[bpy.types.Object] = None, + window: Optional[bpy.types.Window] = None): """Create a new Blender context. If an object is passed as parameter, it is set as selected and active. """ @@ -72,7 +73,9 @@ def create_blender_context(active: Optional[bpy.types.Object] = None, override_context = bpy.context.copy() - for win in bpy.context.window_manager.windows: + windows = [window] if window else bpy.context.window_manager.windows + + for win in windows: for area in win.screen.areas: if area.type == 'VIEW_3D': for region in area.regions: diff --git a/openpype/hosts/blender/plugins/create/create_review.py b/openpype/hosts/blender/plugins/create/create_review.py new file mode 100644 index 0000000000..bf4ea6a7cd --- /dev/null +++ b/openpype/hosts/blender/plugins/create/create_review.py @@ -0,0 +1,47 @@ +"""Create review.""" + +import bpy + +from openpype.pipeline import legacy_io +from openpype.hosts.blender.api import plugin, lib, ops +from openpype.hosts.blender.api.pipeline import AVALON_INSTANCES + + +class CreateReview(plugin.Creator): + """Single baked camera""" + + name = "reviewDefault" + label = "Review" + family = "review" + icon = "video-camera" + + def process(self): + """ Run the creator on Blender main thread""" + mti = ops.MainThreadItem(self._process) + ops.execute_in_main_thread(mti) + + def _process(self): + # Get Instance Container or create it if it does not exist + instances = bpy.data.collections.get(AVALON_INSTANCES) + if not instances: + instances = bpy.data.collections.new(name=AVALON_INSTANCES) + bpy.context.scene.collection.children.link(instances) + + # Create instance object + asset = self.data["asset"] + subset = self.data["subset"] + name = plugin.asset_name(asset, subset) + asset_group = bpy.data.collections.new(name=name) + instances.children.link(asset_group) + self.data['task'] = legacy_io.Session.get('AVALON_TASK') + lib.imprint(asset_group, self.data) + + if (self.options or {}).get("useSelection"): + selected = lib.get_selection() + for obj in selected: + asset_group.objects.link(obj) + elif (self.options or {}).get("asset_group"): + obj = (self.options or {}).get("asset_group") + asset_group.objects.link(obj) + + return asset_group diff --git a/openpype/hosts/blender/plugins/publish/collect_review.py b/openpype/hosts/blender/plugins/publish/collect_review.py new file mode 100644 index 0000000000..d6abd9d967 --- /dev/null +++ b/openpype/hosts/blender/plugins/publish/collect_review.py @@ -0,0 +1,64 @@ +import bpy + +import pyblish.api +from openpype.pipeline import legacy_io + + +class CollectReview(pyblish.api.InstancePlugin): + """Collect Review data + + """ + + order = pyblish.api.CollectorOrder + 0.3 + label = "Collect Review Data" + families = ["review"] + + def process(self, instance): + + self.log.debug(f"instance: {instance}") + + # get cameras + cameras = [ + obj + for obj in instance + if isinstance(obj, bpy.types.Object) and obj.type == "CAMERA" + ] + + assert len(cameras) == 1, ( + f"Not a single camera found in extraction: {cameras}" + ) + camera = cameras[0].name + 
self.log.debug(f"camera: {camera}") + + # get isolate objects list from the instance's mesh members. + isolate_objects = [ + obj + for obj in instance + if isinstance(obj, bpy.types.Object) and obj.type == "MESH" + ] + + if not instance.data.get("remove"): + + task = legacy_io.Session.get("AVALON_TASK") + + instance.data.update({ + "subset": f"{task}Review", + "review_camera": camera, + "frameStart": instance.context.data["frameStart"], + "frameEnd": instance.context.data["frameEnd"], + "fps": instance.context.data["fps"], + "isolate": isolate_objects, + }) + + self.log.debug(f"instance data: {instance.data}") + + # TODO: Collect audio + audio_tracks = [] + instance.data["audio"] = [] + for track in audio_tracks: + instance.data["audio"].append( + { + "offset": track.offset.get(), + "filename": track.filename.get(), + } + ) diff --git a/openpype/hosts/blender/plugins/publish/extract_playblast.py b/openpype/hosts/blender/plugins/publish/extract_playblast.py new file mode 100644 index 0000000000..8dc2f66c22 --- /dev/null +++ b/openpype/hosts/blender/plugins/publish/extract_playblast.py @@ -0,0 +1,123 @@ +import os +import clique + +import bpy + +import pyblish.api +from openpype.pipeline import publish +from openpype.hosts.blender.api import capture +from openpype.hosts.blender.api.lib import maintained_time + + +class ExtractPlayblast(publish.Extractor): + """ + Extract viewport playblast. + + Takes the review camera and creates a review QuickTime video based on + a viewport capture. + """ + + label = "Extract Playblast" + hosts = ["blender"] + families = ["review"] + optional = True + order = pyblish.api.ExtractorOrder + 0.01 + + def process(self, instance): + self.log.info("Extracting capture..") + + self.log.info(instance.data) + + # get scene fps + fps = instance.data.get("fps") + if fps is None: + fps = bpy.context.scene.render.fps + instance.data["fps"] = fps + + self.log.info(f"fps: {fps}") + + # If start and end frames cannot be determined, + # get them from Blender timeline. + start = instance.data.get("frameStart", bpy.context.scene.frame_start) + end = instance.data.get("frameEnd", bpy.context.scene.frame_end) + + self.log.info(f"start: {start}, end: {end}") + assert end > start, "Invalid time range!"
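+        # NOTE: "review_camera" and "isolate" are filled in by the CollectReview plugin.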
+ + # get camera + camera = instance.data("review_camera", None) + + # get isolate objects list + isolate = instance.data("isolate", None) + + # get output path + stagingdir = self.staging_dir(instance) + filename = instance.name + path = os.path.join(stagingdir, filename) + + self.log.info(f"Outputting images to {path}") + + project_settings = instance.context.data["project_settings"]["blender"] + presets = project_settings["publish"]["ExtractPlayblast"]["presets"] + preset = presets.get("default") + preset.update({ + "camera": camera, + "start_frame": start, + "end_frame": end, + "filename": path, + "overwrite": True, + "isolate": isolate, + }) + preset.setdefault( + "image_settings", + { + "file_format": "PNG", + "color_mode": "RGB", + "color_depth": "8", + "compression": 15, + }, + ) + + with maintained_time(): + path = capture(**preset) + + self.log.debug(f"playblast path {path}") + + collected_files = os.listdir(stagingdir) + collections, remainder = clique.assemble( + collected_files, + patterns=[f"{filename}\\.{clique.DIGITS_PATTERN}\\.png$"], + ) + + if len(collections) > 1: + raise RuntimeError( + f"More than one collection found in stagingdir: {stagingdir}" + ) + elif len(collections) == 0: + raise RuntimeError( + f"No collection found in stagingdir: {stagingdir}" + ) + + frame_collection = collections[0] + + self.log.info(f"We found collection of interest {frame_collection}") + + instance.data.setdefault("representations", []) + + tags = ["review"] + if not instance.data.get("keepImages"): + tags.append("delete") + + representation = { + "name": "png", + "ext": "png", + "files": list(frame_collection), + "stagingDir": stagingdir, + "frameStart": start, + "frameEnd": end, + "fps": fps, + "preview": True, + "tags": tags, + "camera_name": camera + } + instance.data["representations"].append(representation) diff --git a/openpype/hosts/blender/plugins/publish/extract_thumbnail.py b/openpype/hosts/blender/plugins/publish/extract_thumbnail.py new file mode 100644 index 0000000000..65c3627375 --- /dev/null +++ b/openpype/hosts/blender/plugins/publish/extract_thumbnail.py @@ -0,0 +1,99 @@ +import os +import glob + +import pyblish.api +from openpype.pipeline import publish +from openpype.hosts.blender.api import capture +from openpype.hosts.blender.api.lib import maintained_time + +import bpy + + +class ExtractThumbnail(publish.Extractor): + """Extract viewport thumbnail. + + Takes review camera and creates a thumbnail based on viewport + capture.
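+ +    Falls back to an orthographic front view ("AUTO") when no review camera was collected.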
+ + """ + + label = "Extract Thumbnail" + hosts = ["blender"] + families = ["review"] + order = pyblish.api.ExtractorOrder + 0.01 + presets = {} + + def process(self, instance): + self.log.info("Extracting capture..") + + stagingdir = self.staging_dir(instance) + filename = instance.name + path = os.path.join(stagingdir, filename) + + self.log.info(f"Outputting images to {path}") + + camera = instance.data.get("review_camera", "AUTO") + start = instance.data.get("frameStart", bpy.context.scene.frame_start) + family = instance.data.get("family") + isolate = instance.data("isolate", None) + + preset = self.presets.get(family, {}) + + preset.update({ + "camera": camera, + "start_frame": start, + "end_frame": start, + "filename": path, + "overwrite": True, + "isolate": isolate, + }) + preset.setdefault( + "image_settings", + { + "file_format": "JPEG", + "color_mode": "RGB", + "quality": 100, + }, + ) + + with maintained_time(): + path = capture(**preset) + + thumbnail = os.path.basename(self._fix_output_path(path)) + + self.log.info(f"thumbnail: {thumbnail}") + + instance.data.setdefault("representations", []) + + representation = { + "name": "thumbnail", + "ext": "jpg", + "files": thumbnail, + "stagingDir": stagingdir, + "thumbnail": True + } + instance.data["representations"].append(representation) + + def _fix_output_path(self, filepath): + """Workaround to return the correct filepath. + + The capture may not be written to the exact requested filepath. To + work around this we glob.glob() for any file extension and assume + the latest modified file is the correct file, and return it. + + """ + # Catch cancelled playblast + if filepath is None: + self.log.warning( + "Playblast did not result in output path. " + "Playblast is probably interrupted." + ) + return None + + if not os.path.exists(filepath): + files = glob.glob(f"{filepath}.*.jpg") + + if not files: + raise RuntimeError(f"Couldn't find playblast from: {filepath}") + filepath = max(files, key=os.path.getmtime) + + return filepath diff --git a/openpype/hosts/celaction/hooks/pre_celaction_setup.py b/openpype/hosts/celaction/hooks/pre_celaction_setup.py index 62cebf99ed..96e784875c 100644 --- a/openpype/hosts/celaction/hooks/pre_celaction_setup.py +++ b/openpype/hosts/celaction/hooks/pre_celaction_setup.py @@ -38,8 +38,9 @@ class CelactionPrelaunchHook(PreLaunchHook): ) path_to_cli = os.path.join(CELACTION_SCRIPTS_DIR, "publish_cli.py") - subproces_args = get_openpype_execute_args("run", path_to_cli) - openpype_executable = subproces_args.pop(0) + subprocess_args = get_openpype_execute_args("run", path_to_cli) + openpype_executable = subprocess_args.pop(0) + workfile_settings = self.get_workfile_settings() winreg.SetValueEx( hKey, @@ -49,20 +50,34 @@ openpype_executable ) - parameters = subproces_args + [ - "--currentFile", "*SCENE*", - "--chunk", "*CHUNK*", - "--frameStart", "*START*", - "--frameEnd", "*END*", - "--resolutionWidth", "*X*", - "--resolutionHeight", "*Y*" + # add required arguments for workfile path + parameters = subprocess_args + [ + "--currentFile", "*SCENE*" ] + # Add custom parameters from workfile settings + if "render_chunk" in workfile_settings["submission_overrides"]: + parameters += [ + "--chunk", "*CHUNK*" + ] + if "resolution" in workfile_settings["submission_overrides"]: + parameters += [ + "--resolutionWidth", "*X*", + "--resolutionHeight", "*Y*" + ] + if "frame_range" in workfile_settings["submission_overrides"]: + parameters += [ + "--frameStart", "*START*", + "--frameEnd", "*END*" + ] + winreg.SetValueEx( hKey,
"SubmitParametersTitle", 0, winreg.REG_SZ, subprocess.list2cmdline(parameters) ) + self.log.debug(f"__ parameters: \"{parameters}\"") + # setting resolution parameters path_submit = "\\".join([ path_user_settings, "Dialogs", "SubmitOutput" @@ -135,3 +150,6 @@ class CelactionPrelaunchHook(PreLaunchHook): self.log.info(f"Workfile to open: \"{workfile_path}\"") return workfile_path + + def get_workfile_settings(self): + return self.data["project_settings"]["celaction"]["workfile"] diff --git a/openpype/hosts/celaction/plugins/publish/collect_celaction_cli_kwargs.py b/openpype/hosts/celaction/plugins/publish/collect_celaction_cli_kwargs.py index 43b81b83e7..54dea15dff 100644 --- a/openpype/hosts/celaction/plugins/publish/collect_celaction_cli_kwargs.py +++ b/openpype/hosts/celaction/plugins/publish/collect_celaction_cli_kwargs.py @@ -39,7 +39,7 @@ class CollectCelactionCliKwargs(pyblish.api.Collector): passing_kwargs[key] = value if missing_kwargs: - raise RuntimeError("Missing arguments {}".format( + self.log.debug("Missing arguments {}".format( ", ".join( [f'"{key}"' for key in missing_kwargs] ) diff --git a/openpype/hosts/fusion/__init__.py b/openpype/hosts/fusion/__init__.py index ddae01890b..1da11ba9d1 100644 --- a/openpype/hosts/fusion/__init__.py +++ b/openpype/hosts/fusion/__init__.py @@ -1,10 +1,14 @@ from .addon import ( + get_fusion_version, FusionAddon, FUSION_HOST_DIR, + FUSION_VERSIONS_DICT, ) __all__ = ( + "get_fusion_version", "FusionAddon", "FUSION_HOST_DIR", + "FUSION_VERSIONS_DICT", ) diff --git a/openpype/hosts/fusion/addon.py b/openpype/hosts/fusion/addon.py index d1bd1566b7..45683cfbde 100644 --- a/openpype/hosts/fusion/addon.py +++ b/openpype/hosts/fusion/addon.py @@ -1,8 +1,52 @@ import os +import re from openpype.modules import OpenPypeModule, IHostAddon +from openpype.lib import Logger FUSION_HOST_DIR = os.path.dirname(os.path.abspath(__file__)) +# FUSION_VERSIONS_DICT is used by the pre-launch hooks +# The keys correspond to all currently supported Fusion versions +# Each value is a list of corresponding Python home variables and a profile +# number, which is used by the profile hook to set Fusion profile variables. +FUSION_VERSIONS_DICT = { + 9: ("FUSION_PYTHON36_HOME", 9), + 16: ("FUSION16_PYTHON36_HOME", 16), + 17: ("FUSION16_PYTHON36_HOME", 16), + 18: ("FUSION_PYTHON3_HOME", 16), +} + + +def get_fusion_version(app_name): + """ + The function is triggered by the prelaunch hooks to get the fusion version. + + `app_name` is obtained by prelaunch hooks from the + `launch_context.env.get("AVALON_APP_NAME")`. + + To get a correct Fusion version, a version number should be present + in the `applications/fusion/variants` key + of the Blackmagic Fusion Application Settings. 
+ """ + + log = Logger.get_logger(__name__) + + if not app_name: + return + + app_version_candidates = re.findall(r"\d+", app_name) + if not app_version_candidates: + return + for app_version in app_version_candidates: + if int(app_version) in FUSION_VERSIONS_DICT: + return int(app_version) + else: + log.info( + "Unsupported Fusion version: {app_version}".format( + app_version=app_version + ) + ) + class FusionAddon(OpenPypeModule, IHostAddon): name = "fusion" @@ -14,15 +58,11 @@ class FusionAddon(OpenPypeModule, IHostAddon): def get_launch_hook_paths(self, app): if app.host_name != self.host_name: return [] - return [ - os.path.join(FUSION_HOST_DIR, "hooks") - ] + return [os.path.join(FUSION_HOST_DIR, "hooks")] def add_implementation_envs(self, env, _app): # Set default values if are not already set via settings - defaults = { - "OPENPYPE_LOG_NO_COLORS": "Yes" - } + defaults = {"OPENPYPE_LOG_NO_COLORS": "Yes"} for key, value in defaults.items(): if not env.get(key): env[key] = value diff --git a/openpype/hosts/fusion/api/lib.py b/openpype/hosts/fusion/api/lib.py index 88a3f0b49b..40cc4d2963 100644 --- a/openpype/hosts/fusion/api/lib.py +++ b/openpype/hosts/fusion/api/lib.py @@ -303,10 +303,18 @@ def get_frame_path(path): return filename, padding, ext -def get_current_comp(): - """Hack to get current comp in this session""" +def get_fusion_module(): + """Get current Fusion instance""" fusion = getattr(sys.modules["__main__"], "fusion", None) - return fusion.CurrentComp if fusion else None + return fusion + + +def get_current_comp(): + """Get current comp in this session""" + fusion = get_fusion_module() + if fusion is not None: + comp = fusion.CurrentComp + return comp @contextlib.contextmanager diff --git a/openpype/hosts/fusion/deploy/fusion_shared.prefs b/openpype/hosts/fusion/deploy/fusion_shared.prefs index 998c6a6d66..b379ea7c66 100644 --- a/openpype/hosts/fusion/deploy/fusion_shared.prefs +++ b/openpype/hosts/fusion/deploy/fusion_shared.prefs @@ -1,19 +1,19 @@ { Locked = true, Global = { - Paths = { - Map = { - ["OpenPype:"] = "$(OPENPYPE_FUSION)/deploy", - ["Reactor:"] = "$(REACTOR)", - - ["Config:"] = "UserPaths:Config;OpenPype:Config", - ["Scripts:"] = "UserPaths:Scripts;Reactor:System/Scripts;OpenPype:Scripts", - ["UserPaths:"] = "UserData:;AllData:;Fusion:;Reactor:Deploy" - }, - }, - Script = { - PythonVersion = 3, - Python3Forced = true - }, + Paths = { + Map = { + ["OpenPype:"] = "$(OPENPYPE_FUSION)/deploy", + ["Config:"] = "UserPaths:Config;OpenPype:Config", + ["Scripts:"] = "UserPaths:Scripts;Reactor:System/Scripts;OpenPype:Scripts", }, -} \ No newline at end of file + }, + Script = { + PythonVersion = 3, + Python3Forced = true + }, + UserInterface = { + Language = "en_US" + }, + }, +} diff --git a/openpype/hosts/fusion/hooks/pre_fusion_profile_hook.py b/openpype/hosts/fusion/hooks/pre_fusion_profile_hook.py new file mode 100644 index 0000000000..fd726ccda1 --- /dev/null +++ b/openpype/hosts/fusion/hooks/pre_fusion_profile_hook.py @@ -0,0 +1,161 @@ +import os +import shutil +import platform +from pathlib import Path +from openpype.lib import PreLaunchHook, ApplicationLaunchFailed +from openpype.hosts.fusion import ( + FUSION_HOST_DIR, + FUSION_VERSIONS_DICT, + get_fusion_version, +) + + +class FusionCopyPrefsPrelaunch(PreLaunchHook): + """ + Prepares local Fusion profile directory, copies existing Fusion profile. 
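+    Copying is controlled by the `copy_fusion_settings` project settings (an enable toggle, a destination path and a force-sync option).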
+ This also sets FUSION MasterPrefs variable, which is used + to apply Master.prefs file to override some Fusion profile settings to: + - enable the OpenPype menu + - force Python 3 over Python 2 + - force English interface + Master.prefs is defined in openpype/hosts/fusion/deploy/fusion_shared.prefs + """ + + app_groups = ["fusion"] + order = 2 + + def get_fusion_profile_name(self, profile_version) -> str: + # Returns 'Default', unless FUSION16_PROFILE is set + return os.getenv(f"FUSION{profile_version}_PROFILE", "Default") + + def get_fusion_profile_dir(self, profile_version) -> Path: + # Get FUSION_PROFILE_DIR variable + fusion_profile = self.get_fusion_profile_name(profile_version) + fusion_var_prefs_dir = os.getenv( + f"FUSION{profile_version}_PROFILE_DIR" + ) + + # Check if FUSION_PROFILE_DIR exists + if fusion_var_prefs_dir and Path(fusion_var_prefs_dir).is_dir(): + fu_prefs_dir = Path(fusion_var_prefs_dir, fusion_profile) + self.log.info(f"{fusion_var_prefs_dir} is set to {fu_prefs_dir}") + return fu_prefs_dir + + def get_profile_source(self, profile_version) -> Path: + """Get Fusion preferences profile location. + See Per-User_Preferences_and_Paths on VFXpedia for reference. + """ + fusion_profile = self.get_fusion_profile_name(profile_version) + profile_source = self.get_fusion_profile_dir(profile_version) + if profile_source: + return profile_source + # otherwise get default location of the profile folder + fu_prefs_dir = f"Blackmagic Design/Fusion/Profiles/{fusion_profile}" + if platform.system() == "Windows": + profile_source = Path(os.getenv("AppData"), fu_prefs_dir) + elif platform.system() == "Darwin": + profile_source = Path( + "~/Library/Application Support/", fu_prefs_dir + ).expanduser() + elif platform.system() == "Linux": + profile_source = Path("~/.fusion", fu_prefs_dir).expanduser() + self.log.info( + f"Locating source Fusion prefs directory: {profile_source}" + ) + return profile_source + + def get_copy_fusion_prefs_settings(self): + # Get copy preferences options from the global application settings + + copy_fusion_settings = self.data["project_settings"]["fusion"].get( + "copy_fusion_settings", {} + ) + if not copy_fusion_settings: + self.log.error("Copy prefs settings not found") + copy_status = copy_fusion_settings.get("copy_status", False) + force_sync = copy_fusion_settings.get("force_sync", False) + copy_path = copy_fusion_settings.get("copy_path") or None + if copy_path: + copy_path = Path(copy_path).expanduser() + return copy_status, copy_path, force_sync + + def copy_fusion_profile( + self, copy_from: Path, copy_to: Path, force_sync: bool + ) -> None: + """On the first Fusion launch copy the contents of Fusion profile + directory to the working predefined location. If the Openpype profile + folder exists, skip copying, unless re-sync is checked. + If the prefs were not copied on the first launch, + clean Fusion profile will be created in fu_profile_dir. 
+ """ + if copy_to.exists() and not force_sync: + self.log.info( + "Destination Fusion preferences folder already exists: " + f"{copy_to} " + ) + return + self.log.info("Starting copying Fusion preferences") + self.log.debug(f"force_sync option is set to {force_sync}") + try: + copy_to.mkdir(exist_ok=True, parents=True) + except PermissionError: + self.log.warning(f"Creating the folder not permitted at {copy_to}") + return + if not copy_from.exists(): + self.log.warning(f"Fusion preferences not found in {copy_from}") + return + for file in copy_from.iterdir(): + if file.suffix in ( + ".prefs", + ".def", + ".blocklist", + ".fu", + ".toolbars", + ): + # convert Path to str to be compatible with Python 3.6+ + shutil.copy(str(file), str(copy_to)) + self.log.info( + f"Successfully copied preferences: {copy_from} to {copy_to}" + ) + + def execute(self): + ( + copy_status, + fu_profile_dir, + force_sync, + ) = self.get_copy_fusion_prefs_settings() + + # Get launched application context and return correct app version + app_name = self.launch_context.env.get("AVALON_APP_NAME") + app_version = get_fusion_version(app_name) + if app_version is None: + version_names = ", ".join(str(x) for x in FUSION_VERSIONS_DICT) + raise ApplicationLaunchFailed( + "Unable to detect valid Fusion version number from app " + f"name: {app_name}.\nMake sure to include at least a digit " + "to indicate the Fusion version like '18'.\n" + f"Detectable Fusion versions are: {version_names}" + ) + + _, profile_version = FUSION_VERSIONS_DICT[app_version] + fu_profile = self.get_fusion_profile_name(profile_version) + + # do a copy of Fusion profile if copy_status toggle is enabled + if copy_status and fu_profile_dir is not None: + profile_source = self.get_profile_source(profile_version) + dest_folder = Path(fu_profile_dir, fu_profile) + self.copy_fusion_profile(profile_source, dest_folder, force_sync) + + # Add temporary profile directory variables to customize Fusion + # to define where it can read custom scripts and tools from + fu_profile_dir_variable = f"FUSION{profile_version}_PROFILE_DIR" + self.log.info(f"Setting {fu_profile_dir_variable}: {fu_profile_dir}") + self.launch_context.env[fu_profile_dir_variable] = str(fu_profile_dir) + + # Add custom Fusion Master Prefs and the temporary + # profile directory variables to customize Fusion + # to define where it can read custom scripts and tools from + master_prefs_variable = f"FUSION{profile_version}_MasterPrefs" + master_prefs = Path(FUSION_HOST_DIR, "deploy", "fusion_shared.prefs") + self.log.info(f"Setting {master_prefs_variable}: {master_prefs}") + self.launch_context.env[master_prefs_variable] = str(master_prefs) diff --git a/openpype/hosts/fusion/hooks/pre_fusion_setup.py b/openpype/hosts/fusion/hooks/pre_fusion_setup.py index 323b8b0029..f27cd1674b 100644 --- a/openpype/hosts/fusion/hooks/pre_fusion_setup.py +++ b/openpype/hosts/fusion/hooks/pre_fusion_setup.py @@ -1,32 +1,43 @@ import os from openpype.lib import PreLaunchHook, ApplicationLaunchFailed -from openpype.hosts.fusion import FUSION_HOST_DIR +from openpype.hosts.fusion import ( + FUSION_HOST_DIR, + FUSION_VERSIONS_DICT, + get_fusion_version, +) class FusionPrelaunch(PreLaunchHook): - """Prepares OpenPype Fusion environment - - Requires FUSION_PYTHON3_HOME to be defined in the environment for Fusion - to point at a valid Python 3 build for Fusion. That is Python 3.3-3.10 - for Fusion 18 and Fusion 3.6 for Fusion 16 and 17. 
- - This also sets FUSION16_MasterPrefs to apply the fusion master prefs - as set in openpype/hosts/fusion/deploy/fusion_shared.prefs to enable - the OpenPype menu and force Python 3 over Python 2. - """ + """Prepares OpenPype Fusion environment. + Requires the correct Python home variable to be defined in the environment + settings for Fusion to point at a valid Python 3 build for Fusion. + Python 3 versions supported by Fusion: + Fusion 9, 16, 17 : Python 3.6 + Fusion 18 : Python 3.6 - 3.10 + """ + app_groups = ["fusion"] + order = 1 def execute(self): # making sure python 3 is installed at provided path # Py 3.3-3.10 for Fusion 18+ or Py 3.6 for Fu 16-17 - py3_var = "FUSION_PYTHON3_HOME" + app_data = self.launch_context.env.get("AVALON_APP_NAME") + app_version = get_fusion_version(app_data) + if not app_version: + raise ApplicationLaunchFailed( + "Fusion version information not found in System settings.\n" + "The key field in the 'applications/fusion/variants' should " + "consist of a number corresponding to the major Fusion version." + ) + py3_var, _ = FUSION_VERSIONS_DICT[app_version] fusion_python3_home = self.launch_context.env.get(py3_var, "") - self.log.info(f"Looking for Python 3 in: {fusion_python3_home}") for path in fusion_python3_home.split(os.pathsep): - # Allow defining multiple paths to allow "fallback" to other - # path. But make to set only a single path as final variable. + # Allow defining multiple paths, separated by os.pathsep, + # to allow "fallback" to another path. + # But make sure to set only a single path as the final variable. py3_dir = os.path.normpath(path) if os.path.isdir(py3_dir): break @@ -43,19 +54,10 @@ class FusionPrelaunch(PreLaunchHook): self.launch_context.env[py3_var] = py3_dir # Fusion 18+ requires FUSION_PYTHON3_HOME to also be on PATH - self.launch_context.env["PATH"] += ";" + py3_dir + if app_version >= 18: + self.launch_context.env["PATH"] += os.pathsep + py3_dir - # Fusion 16 and 17 use FUSION16_PYTHON36_HOME instead of - # FUSION_PYTHON3_HOME and will only work with a Python 3.6 version - # TODO: Detect Fusion version to only set for specific Fusion build - self.launch_context.env["FUSION16_PYTHON36_HOME"] = py3_dir + self.launch_context.env[py3_var] = py3_dir - # Add our Fusion Master Prefs which is the only way to customize - # Fusion to define where it can read custom scripts and tools from self.log.info(f"Setting OPENPYPE_FUSION: {FUSION_HOST_DIR}") self.launch_context.env["OPENPYPE_FUSION"] = FUSION_HOST_DIR - - pref_var = "FUSION16_MasterPrefs" # used by Fusion 16, 17 and 18 - prefs = os.path.join(FUSION_HOST_DIR, "deploy", "fusion_shared.prefs") - self.log.info(f"Setting {pref_var}: {prefs}") - self.launch_context.env[pref_var] = prefs diff --git a/openpype/hosts/hiero/api/plugin.py b/openpype/hosts/hiero/api/plugin.py index 07457db1a4..5ca901caaa 100644 --- a/openpype/hosts/hiero/api/plugin.py +++ b/openpype/hosts/hiero/api/plugin.py @@ -146,6 +146,8 @@ class CreatorWidget(QtWidgets.QDialog): return " ".join([str(m.group(0)).capitalize() for m in matches]) def create_row(self, layout, type, text, **kwargs): + value_keys = ["setText", "setCheckState", "setValue", "setChecked"] + # get type attribute from qwidgets attr = getattr(QtWidgets, type) @@ -167,14 +169,27 @@ # assign the created attribute to variable item = getattr(self, attr_name) + + # set attributes on the item which are not value setters for func, val in kwargs.items(): + if func in value_keys: + continue + if getattr(item, func): + log.debug("Setting {} to 
{}".format(func, val)) func_attr = getattr(item, func) if isinstance(val, tuple): func_attr(*val) else: func_attr(val) + # set values to item + for value_item in value_keys: + if value_item not in kwargs: + continue + if getattr(item, value_item): + getattr(item, value_item)(kwargs[value_item]) + # add to layout layout.addRow(label, item) @@ -276,8 +291,11 @@ class CreatorWidget(QtWidgets.QDialog): elif v["type"] == "QSpinBox": data[k]["value"] = self.create_row( content_layout, "QSpinBox", v["label"], - setValue=v["value"], setMinimum=0, + setValue=v["value"], + setDisplayIntegerBase=10000, + setRange=(0, 99999), setMinimum=0, setMaximum=100000, setToolTip=tool_tip) + return data diff --git a/openpype/hosts/max/api/lib_renderproducts.py b/openpype/hosts/max/api/lib_renderproducts.py index a74a6a7426..350eb97661 100644 --- a/openpype/hosts/max/api/lib_renderproducts.py +++ b/openpype/hosts/max/api/lib_renderproducts.py @@ -8,6 +8,7 @@ from openpype.hosts.max.api.lib import ( get_current_renderer, get_default_render_folder ) +from openpype.pipeline.context_tools import get_current_project_asset from openpype.settings import get_project_settings from openpype.pipeline import legacy_io @@ -34,14 +35,20 @@ class RenderProducts(object): filename, container) + context = get_current_project_asset() + startFrame = context["data"].get("frameStart") + endFrame = context["data"].get("frameEnd") + 1 + img_fmt = self._project_settings["max"]["RenderSettings"]["image_format"] # noqa - full_render_list = [] - beauty = self.beauty_render_product(output_file, img_fmt) - full_render_list.append(beauty) + full_render_list = self.beauty_render_product(output_file, + startFrame, + endFrame, + img_fmt) renderer_class = get_current_renderer() renderer = str(renderer_class).split(":")[0] + if renderer == "VUE_File_Renderer": return full_render_list @@ -54,6 +61,8 @@ class RenderProducts(object): "Quicksilver_Hardware_Renderer", ]: render_elem_list = self.render_elements_product(output_file, + startFrame, + endFrame, img_fmt) if render_elem_list: full_render_list.extend(iter(render_elem_list)) @@ -61,18 +70,24 @@ class RenderProducts(object): if renderer == "Arnold": aov_list = self.arnold_render_product(output_file, + startFrame, + endFrame, img_fmt) if aov_list: full_render_list.extend(iter(aov_list)) return full_render_list - def beauty_render_product(self, folder, fmt): - beauty_output = f"{folder}.####.{fmt}" - beauty_output = beauty_output.replace("\\", "/") - return beauty_output + def beauty_render_product(self, folder, startFrame, endFrame, fmt): + beauty_frame_range = [] + for f in range(startFrame, endFrame): + beauty_output = f"{folder}.{f}.{fmt}" + beauty_output = beauty_output.replace("\\", "/") + beauty_frame_range.append(beauty_output) + + return beauty_frame_range # TODO: Get the arnold render product - def arnold_render_product(self, folder, fmt): + def arnold_render_product(self, folder, startFrame, endFrame, fmt): """Get all the Arnold AOVs""" aovs = [] @@ -85,15 +100,17 @@ class RenderProducts(object): for i in range(aov_group_num): # get the specific AOV group for aov in aov_mgr.drivers[i].aov_list: - render_element = f"{folder}_{aov.name}.####.{fmt}" - render_element = render_element.replace("\\", "/") - aovs.append(render_element) + for f in range(startFrame, endFrame): + render_element = f"{folder}_{aov.name}.{f}.{fmt}" + render_element = render_element.replace("\\", "/") + aovs.append(render_element) + # close the AOVs manager window amw.close() return aovs - def 
render_elements_product(self, folder, fmt): + def render_elements_product(self, folder, startFrame, endFrame, fmt): """Get all the render element output files. """ render_dirname = [] @@ -104,9 +121,10 @@ class RenderProducts(object): renderlayer_name = render_elem.GetRenderElement(i) target, renderpass = str(renderlayer_name).split(":") if renderlayer_name.enabled: - render_element = f"{folder}_{renderpass}.####.{fmt}" - render_element = render_element.replace("\\", "/") - render_dirname.append(render_element) + for f in range(startFrame, endFrame): + render_element = f"{folder}_{renderpass}.{f}.{fmt}" + render_element = render_element.replace("\\", "/") + render_dirname.append(render_element) return render_dirname diff --git a/openpype/hosts/max/plugins/create/create_maxScene.py b/openpype/hosts/max/plugins/create/create_maxScene.py new file mode 100644 index 0000000000..7900336f32 --- /dev/null +++ b/openpype/hosts/max/plugins/create/create_maxScene.py @@ -0,0 +1,26 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for creating raw max scene.""" +from openpype.hosts.max.api import plugin +from openpype.pipeline import CreatedInstance + + +class CreateMaxScene(plugin.MaxCreator): + identifier = "io.openpype.creators.max.maxScene" + label = "Max Scene" + family = "maxScene" + icon = "gear" + + def create(self, subset_name, instance_data, pre_create_data): + from pymxs import runtime as rt + sel_obj = list(rt.selection) + instance = super(CreateMaxScene, self).create( + subset_name, + instance_data, + pre_create_data) # type: CreatedInstance + container = rt.getNodeByName(instance.data.get("instance_node")) + # TODO: Disable "Add to Containers?" Panel + # parent the selected cameras into the container + for obj in sel_obj: + obj.parent = container + # for additional work on the node: + # instance_node = rt.getNodeByName(instance.get("instance_node")) diff --git a/openpype/hosts/max/plugins/create/create_pointcloud.py b/openpype/hosts/max/plugins/create/create_pointcloud.py new file mode 100644 index 0000000000..c83acac3df --- /dev/null +++ b/openpype/hosts/max/plugins/create/create_pointcloud.py @@ -0,0 +1,26 @@ +# -*- coding: utf-8 -*- +"""Creator plugin for creating point cloud.""" +from openpype.hosts.max.api import plugin +from openpype.pipeline import CreatedInstance + + +class CreatePointCloud(plugin.MaxCreator): + identifier = "io.openpype.creators.max.pointcloud" + label = "Point Cloud" + family = "pointcloud" + icon = "gear" + + def create(self, subset_name, instance_data, pre_create_data): + from pymxs import runtime as rt + sel_obj = list(rt.selection) + instance = super(CreatePointCloud, self).create( + subset_name, + instance_data, + pre_create_data) # type: CreatedInstance + container = rt.getNodeByName(instance.data.get("instance_node")) + # TODO: Disable "Add to Containers?" 
Panel + # parent the selected cameras into the container + for obj in sel_obj: + obj.parent = container + # for additional work on the node: + # instance_node = rt.getNodeByName(instance.get("instance_node")) diff --git a/openpype/hosts/max/plugins/load/load_max_scene.py b/openpype/hosts/max/plugins/load/load_max_scene.py index b863b9363f..460f4822a6 100644 --- a/openpype/hosts/max/plugins/load/load_max_scene.py +++ b/openpype/hosts/max/plugins/load/load_max_scene.py @@ -9,7 +9,8 @@ from openpype.hosts.max.api import lib class MaxSceneLoader(load.LoaderPlugin): """Max Scene Loader""" - families = ["camera"] + families = ["camera", + "maxScene"] representations = ["max"] order = -8 icon = "code-fork" @@ -46,8 +47,7 @@ class MaxSceneLoader(load.LoaderPlugin): path = get_representation_path(representation) node = rt.getNodeByName(container["instance_node"]) - - max_objects = self.get_container_children(node) + max_objects = node.Children for max_object in max_objects: max_object.source = path diff --git a/openpype/hosts/max/plugins/load/load_pointcloud.py b/openpype/hosts/max/plugins/load/load_pointcloud.py new file mode 100644 index 0000000000..27bc88b4f3 --- /dev/null +++ b/openpype/hosts/max/plugins/load/load_pointcloud.py @@ -0,0 +1,51 @@ +import os +from openpype.pipeline import ( + load, get_representation_path +) +from openpype.hosts.max.api.pipeline import containerise +from openpype.hosts.max.api import lib + + +class PointCloudLoader(load.LoaderPlugin): + """Point Cloud Loader""" + + families = ["pointcloud"] + representations = ["prt"] + order = -8 + icon = "code-fork" + color = "green" + + def load(self, context, name=None, namespace=None, data=None): + """load point cloud by tyCache""" + from pymxs import runtime as rt + + filepath = os.path.normpath(self.fname) + obj = rt.tyCache() + obj.filename = filepath + + prt_container = rt.getNodeByName(f"{obj.name}") + + return containerise( + name, [prt_container], context, loader=self.__class__.__name__) + + def update(self, container, representation): + """update the container""" + from pymxs import runtime as rt + + path = get_representation_path(representation) + node = rt.getNodeByName(container["instance_node"]) + + prt_objects = self.get_container_children(node) + for prt_object in prt_objects: + prt_object.source = path + + lib.imprint(container["instance_node"], { + "representation": str(representation["_id"]) + }) + + def remove(self, container): + """remove the container""" + from pymxs import runtime as rt + + node = rt.getNodeByName(container["instance_node"]) + rt.delete(node) diff --git a/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py b/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py index cacc84c591..969f87be48 100644 --- a/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py +++ b/openpype/hosts/max/plugins/publish/extract_max_scene_raw.py @@ -20,7 +20,8 @@ class ExtractMaxSceneRaw(publish.Extractor, order = pyblish.api.ExtractorOrder - 0.2 label = "Extract Max Scene (Raw)" hosts = ["max"] - families = ["camera"] + families = ["camera", + "maxScene"] optional = True def process(self, instance): diff --git a/openpype/hosts/max/plugins/publish/extract_pointcloud.py b/openpype/hosts/max/plugins/publish/extract_pointcloud.py new file mode 100644 index 0000000000..e8d58ab713 --- /dev/null +++ b/openpype/hosts/max/plugins/publish/extract_pointcloud.py @@ -0,0 +1,207 @@ +import os +import pyblish.api +from openpype.pipeline import publish +from pymxs import runtime as rt +from 
openpype.hosts.max.api import ( + maintained_selection +) +from openpype.settings import get_project_settings +from openpype.pipeline import legacy_io + + +def get_setting(project_setting=None): + project_setting = get_project_settings( + legacy_io.Session["AVALON_PROJECT"] + ) + return (project_setting["max"]["PointCloud"]) + + +class ExtractPointCloud(publish.Extractor): + """ + Extract PRT format with tyFlow operators + + Notes: + Currently only works for the default partition setting + + Args: + export_particle(): sets up all job arguments for attributes + to be exported in MAXscript + + get_operators(): get the export_particle operator + + get_custom_attr(): get all custom channel attributes from Openpype + setting and sets it as job arguments before exporting + + get_files(): get the files with tyFlow naming convention + before publishing + + partition_output_name(): get the naming with partition settings. + get_partition(): get partition value + + """ + + order = pyblish.api.ExtractorOrder - 0.2 + label = "Extract Point Cloud" + hosts = ["max"] + families = ["pointcloud"] + + def process(self, instance): + start = int(instance.context.data.get("frameStart")) + end = int(instance.context.data.get("frameEnd")) + container = instance.data["instance_node"] + self.log.info("Extracting PRT...") + + stagingdir = self.staging_dir(instance) + filename = "{name}.prt".format(**instance.data) + path = os.path.join(stagingdir, filename) + + with maintained_selection(): + job_args = self.export_particle(container, + start, + end, + path) + for job in job_args: + rt.execute(job) + + self.log.info("Performing Extraction ...") + if "representations" not in instance.data: + instance.data["representations"] = [] + + self.log.info("Writing PRT with TyFlow Plugin...") + filenames = self.get_files(container, path, start, end) + self.log.debug("filenames: {0}".format(filenames)) + + partition = self.partition_output_name(container) + + representation = { + 'name': 'prt', + 'ext': 'prt', + 'files': filenames if len(filenames) > 1 else filenames[0], + "stagingDir": stagingdir, + "outputName": partition # partition value + } + instance.data["representations"].append(representation) + self.log.info("Extracted instance '%s' to: %s" % (instance.name, + path)) + + def export_particle(self, + container, + start, + end, + filepath): + job_args = [] + opt_list = self.get_operators(container) + for operator in opt_list: + start_frame = "{0}.frameStart={1}".format(operator, + start) + job_args.append(start_frame) + end_frame = "{0}.frameEnd={1}".format(operator, + end) + job_args.append(end_frame) + filepath = filepath.replace("\\", "/") + prt_filename = '{0}.PRTFilename="{1}"'.format(operator, + filepath) + + job_args.append(prt_filename) + # Partition + mode = "{0}.PRTPartitionsMode=2".format(operator) + job_args.append(mode) + + additional_args = self.get_custom_attr(operator) + for args in additional_args: + job_args.append(args) + + prt_export = "{0}.exportPRT()".format(operator) + job_args.append(prt_export) + + return job_args + + def get_operators(self, container): + """Get Export Particles Operator""" + + opt_list = [] + node = rt.getNodebyName(container) + selection_list = list(node.Children) + for sel in selection_list: + obj = sel.baseobject + # TODO: to see if it can be used maxscript instead + anim_names = rt.getsubanimnames(obj) + for anim_name in anim_names: + sub_anim = rt.getsubanim(obj, anim_name) + boolean = rt.isProperty(sub_anim, "Export_Particles") + event_name = sub_anim.name + if boolean: + 
opt = "${0}.{1}.export_particles".format(sel.name, + event_name) + opt_list.append(opt) + + return opt_list + + def get_custom_attr(self, operator): + """Get Custom Attributes""" + + custom_attr_list = [] + attr_settings = get_setting()["attribute"] + for key, value in attr_settings.items(): + custom_attr = "{0}.PRTChannels_{1}=True".format(operator, + value) + self.log.debug( + "{0} will be added as custom attribute".format(key) + ) + custom_attr_list.append(custom_attr) + + return custom_attr_list + + def get_files(self, + container, + path, + start_frame, + end_frame): + """ + Note: + Set the filenames accordingly to the tyFlow file + naming extension for the publishing purpose + + Actual File Output from tyFlow: + __partof..prt + e.g. tyFlow_cloth_CCCS_blobbyFill_001__part1of1_00004.prt + """ + filenames = [] + filename = os.path.basename(path) + orig_name, ext = os.path.splitext(filename) + partition_count, partition_start = self.get_partition(container) + for frame in range(int(start_frame), int(end_frame) + 1): + actual_name = "{}__part{:03}of{}_{:05}".format(orig_name, + partition_start, + partition_count, + frame) + actual_filename = path.replace(orig_name, actual_name) + filenames.append(os.path.basename(actual_filename)) + + return filenames + + def partition_output_name(self, container): + """ + Notes: + Partition output name set for mapping + the published file output + + todo: + Customizes the setting for the output + """ + partition_count, partition_start = self.get_partition(container) + partition = "_part{:03}of{}".format(partition_start, + partition_count) + + return partition + + def get_partition(self, container): + """ + Get Partition Value + """ + opt_list = self.get_operators(container) + for operator in opt_list: + count = rt.execute(f'{operator}.PRTPartitionsCount') + start = rt.execute(f'{operator}.PRTPartitionsFrom') + + return count, start diff --git a/openpype/hosts/max/plugins/publish/increment_workfile_version.py b/openpype/hosts/max/plugins/publish/increment_workfile_version.py new file mode 100644 index 0000000000..3dec214f77 --- /dev/null +++ b/openpype/hosts/max/plugins/publish/increment_workfile_version.py @@ -0,0 +1,19 @@ +import pyblish.api +from openpype.lib import version_up +from pymxs import runtime as rt + + +class IncrementWorkfileVersion(pyblish.api.ContextPlugin): + """Increment current workfile version.""" + + order = pyblish.api.IntegratorOrder + 0.9 + label = "Increment Workfile Version" + hosts = ["max"] + families = ["workfile"] + + def process(self, context): + path = context.data["currentFile"] + filepath = version_up(path) + + rt.saveMaxFile(filepath) + self.log.info("Incrementing file version") diff --git a/openpype/hosts/max/plugins/publish/validate_no_max_content.py b/openpype/hosts/max/plugins/publish/validate_no_max_content.py new file mode 100644 index 0000000000..c20a1968ed --- /dev/null +++ b/openpype/hosts/max/plugins/publish/validate_no_max_content.py @@ -0,0 +1,23 @@ +# -*- coding: utf-8 -*- +import pyblish.api +from openpype.pipeline import PublishValidationError +from pymxs import runtime as rt + + +class ValidateMaxContents(pyblish.api.InstancePlugin): + """Validates Max contents. + + Check if MaxScene container includes any contents underneath. 
+ """ + + order = pyblish.api.ValidatorOrder + families = ["camera", + "maxScene", + "maxrender"] + hosts = ["max"] + label = "Max Scene Contents" + + def process(self, instance): + container = rt.getNodeByName(instance.data["instance_node"]) + if not list(container.Children): + raise PublishValidationError("No content found in the container") diff --git a/openpype/hosts/max/plugins/publish/validate_pointcloud.py b/openpype/hosts/max/plugins/publish/validate_pointcloud.py new file mode 100644 index 0000000000..f654058648 --- /dev/null +++ b/openpype/hosts/max/plugins/publish/validate_pointcloud.py @@ -0,0 +1,191 @@ +import pyblish.api +from openpype.pipeline import PublishValidationError +from pymxs import runtime as rt +from openpype.settings import get_project_settings +from openpype.pipeline import legacy_io + + +def get_setting(project_setting=None): + project_setting = get_project_settings( + legacy_io.Session["AVALON_PROJECT"] + ) + return (project_setting["max"]["PointCloud"]) + + +class ValidatePointCloud(pyblish.api.InstancePlugin): + """Validate that workfile was saved.""" + + order = pyblish.api.ValidatorOrder + families = ["pointcloud"] + hosts = ["max"] + label = "Validate Point Cloud" + + def process(self, instance): + """ + Notes: + + 1. Validate the container only include tyFlow objects + 2. Validate if tyFlow operator Export Particle exists + 3. Validate if the export mode of Export Particle is at PRT format + 4. Validate the partition count and range set as default value + Partition Count : 100 + Partition Range : 1 to 1 + 5. Validate if the custom attribute(s) exist as parameter(s) + of export_particle operator + + """ + invalid = self.get_tyFlow_object(instance) + if invalid: + raise PublishValidationError("Non tyFlow object " + "found: {}".format(invalid)) + invalid = self.get_tyFlow_operator(instance) + if invalid: + raise PublishValidationError("tyFlow ExportParticle operator " + "not found: {}".format(invalid)) + + invalid = self.validate_export_mode(instance) + if invalid: + raise PublishValidationError("The export mode is not at PRT") + + invalid = self.validate_partition_value(instance) + if invalid: + raise PublishValidationError("tyFlow Partition setting is " + "not at the default value") + invalid = self.validate_custom_attribute(instance) + if invalid: + raise PublishValidationError("Custom Attribute not found " + ":{}".format(invalid)) + + def get_tyFlow_object(self, instance): + invalid = [] + container = instance.data["instance_node"] + self.log.info("Validating tyFlow container " + "for {}".format(container)) + + con = rt.getNodeByName(container) + selection_list = list(con.Children) + for sel in selection_list: + sel_tmp = str(sel) + if rt.classOf(sel) in [rt.tyFlow, + rt.Editable_Mesh]: + if "tyFlow" not in sel_tmp: + invalid.append(sel) + else: + invalid.append(sel) + + return invalid + + def get_tyFlow_operator(self, instance): + invalid = [] + container = instance.data["instance_node"] + self.log.info("Validating tyFlow object " + "for {}".format(container)) + + con = rt.getNodeByName(container) + selection_list = list(con.Children) + bool_list = [] + for sel in selection_list: + obj = sel.baseobject + anim_names = rt.getsubanimnames(obj) + for anim_name in anim_names: + # get all the names of the related tyFlow nodes + sub_anim = rt.getsubanim(obj, anim_name) + # check if there is export particle operator + boolean = rt.isProperty(sub_anim, "Export_Particles") + bool_list.append(str(boolean)) + # if the export_particles property is not there + # it 
diff --git a/openpype/hosts/max/plugins/publish/validate_pointcloud.py b/openpype/hosts/max/plugins/publish/validate_pointcloud.py
new file mode 100644
index 0000000000..f654058648
--- /dev/null
+++ b/openpype/hosts/max/plugins/publish/validate_pointcloud.py
@@ -0,0 +1,191 @@
+import pyblish.api
+from openpype.pipeline import PublishValidationError
+from pymxs import runtime as rt
+from openpype.settings import get_project_settings
+from openpype.pipeline import legacy_io
+
+
+def get_setting(project_setting=None):
+    project_setting = get_project_settings(
+        legacy_io.Session["AVALON_PROJECT"]
+    )
+    return (project_setting["max"]["PointCloud"])
+
+
+class ValidatePointCloud(pyblish.api.InstancePlugin):
+    """Validate the point cloud instance and its tyFlow export setup."""
+
+    order = pyblish.api.ValidatorOrder
+    families = ["pointcloud"]
+    hosts = ["max"]
+    label = "Validate Point Cloud"
+
+    def process(self, instance):
+        """
+        Notes:
+
+            1. Validate that the container only includes tyFlow objects
+            2. Validate that the tyFlow operator Export Particles exists
+            3. Validate that the export mode of Export Particles is set
+               to the PRT format
+            4. Validate that the partition count and range are set to
+               the default values
+                   Partition Count : 100
+                   Partition Range : 1 to 1
+            5. Validate that the custom attribute(s) exist as parameter(s)
+               of the export_particles operator
+
+        """
+        invalid = self.get_tyFlow_object(instance)
+        if invalid:
+            raise PublishValidationError("Non tyFlow object "
+                                         "found: {}".format(invalid))
+        invalid = self.get_tyFlow_operator(instance)
+        if invalid:
+            raise PublishValidationError("tyFlow ExportParticle operator "
+                                         "not found: {}".format(invalid))
+
+        invalid = self.validate_export_mode(instance)
+        if invalid:
+            raise PublishValidationError("The export mode is not set to PRT")
+
+        invalid = self.validate_partition_value(instance)
+        if invalid:
+            raise PublishValidationError("tyFlow Partition settings are "
+                                         "not at the default values")
+        invalid = self.validate_custom_attribute(instance)
+        if invalid:
+            raise PublishValidationError("Custom Attribute not "
+                                         "found: {}".format(invalid))
+
+    def get_tyFlow_object(self, instance):
+        invalid = []
+        container = instance.data["instance_node"]
+        self.log.info("Validating tyFlow container "
+                      "for {}".format(container))
+
+        con = rt.getNodeByName(container)
+        selection_list = list(con.Children)
+        for sel in selection_list:
+            sel_tmp = str(sel)
+            if rt.classOf(sel) in [rt.tyFlow,
+                                   rt.Editable_Mesh]:
+                if "tyFlow" not in sel_tmp:
+                    invalid.append(sel)
+            else:
+                invalid.append(sel)
+
+        return invalid
+
+    def get_tyFlow_operator(self, instance):
+        invalid = []
+        container = instance.data["instance_node"]
+        self.log.info("Validating tyFlow object "
+                      "for {}".format(container))
+
+        con = rt.getNodeByName(container)
+        selection_list = list(con.Children)
+        bool_list = []
+        for sel in selection_list:
+            obj = sel.baseobject
+            anim_names = rt.getsubanimnames(obj)
+            for anim_name in anim_names:
+                # get all the names of the related tyFlow nodes
+                sub_anim = rt.getsubanim(obj, anim_name)
+                # check if there is an Export Particles operator
+                boolean = rt.isProperty(sub_anim, "Export_Particles")
+                bool_list.append(str(boolean))
+            # if the export_particles property is not there
+            # it means there is no "Export Particles" operator
+            if "True" not in bool_list:
+                self.log.error("Operator 'Export Particles' not found!")
+                invalid.append(sel)
+
+        return invalid
+
+    def validate_custom_attribute(self, instance):
+        invalid = []
+        container = instance.data["instance_node"]
+        self.log.info("Validating tyFlow custom "
+                      "attributes for {}".format(container))
+
+        con = rt.getNodeByName(container)
+        selection_list = list(con.Children)
+        for sel in selection_list:
+            obj = sel.baseobject
+            anim_names = rt.getsubanimnames(obj)
+            for anim_name in anim_names:
+                # get all the names of the related tyFlow nodes
+                sub_anim = rt.getsubanim(obj, anim_name)
+                # check if there is an Export Particles operator
+                boolean = rt.isProperty(sub_anim, "Export_Particles")
+                event_name = sub_anim.name
+                if boolean:
+                    opt = "${0}.{1}.export_particles".format(sel.name,
+                                                             event_name)
+                    attributes = get_setting()["attribute"]
+                    for key, value in attributes.items():
+                        custom_attr = "{0}.PRTChannels_{1}".format(opt,
+                                                                   value)
+                        try:
+                            rt.execute(custom_attr)
+                        except RuntimeError:
+                            invalid.append(key)
+
+        return invalid
+
+    def validate_partition_value(self, instance):
+        invalid = []
+        container = instance.data["instance_node"]
+        self.log.info("Validating tyFlow partition "
+                      "value for {}".format(container))
+
+        con = rt.getNodeByName(container)
+        selection_list = list(con.Children)
+        for sel in selection_list:
+            obj = sel.baseobject
+            anim_names = rt.getsubanimnames(obj)
+            for anim_name in anim_names:
+                # get all the names of the related tyFlow nodes
+                sub_anim = rt.getsubanim(obj, anim_name)
+                # check if there is an Export Particles operator
+                boolean = rt.isProperty(sub_anim, "Export_Particles")
+                event_name = sub_anim.name
+                if boolean:
+                    opt = "${0}.{1}.export_particles".format(sel.name,
+                                                             event_name)
+                    count = rt.execute(f'{opt}.PRTPartitionsCount')
+                    if count != 100:
+                        invalid.append(count)
+                    start = rt.execute(f'{opt}.PRTPartitionsFrom')
+                    if start != 1:
+                        invalid.append(start)
+                    end = rt.execute(f'{opt}.PRTPartitionsTo')
+                    if end != 1:
+                        invalid.append(end)
+
+        return invalid
+
+    def validate_export_mode(self, instance):
+        invalid = []
+        container = instance.data["instance_node"]
+        self.log.info("Validating tyFlow export "
+                      "mode for {}".format(container))
+
+        con = rt.getNodeByName(container)
+        selection_list = list(con.Children)
+        for sel in selection_list:
+            obj = sel.baseobject
+            anim_names = rt.getsubanimnames(obj)
+            for anim_name in anim_names:
+                # get all the names of the related tyFlow nodes
+                sub_anim = rt.getsubanim(obj, anim_name)
+                # check if there is an Export Particles operator
+                boolean = rt.isProperty(sub_anim, "Export_Particles")
+                event_name = sub_anim.name
+                if boolean:
+                    opt = "${0}.{1}.export_particles".format(sel.name,
+                                                             event_name)
+                    export_mode = rt.execute(f'{opt}.exportMode')
+                    if export_mode != 1:
+                        invalid.append(export_mode)
+
+        return invalid
diff --git a/openpype/hosts/maya/api/lib.py b/openpype/hosts/maya/api/lib.py
index a670737b66..aa1e501578 100644
--- a/openpype/hosts/maya/api/lib.py
+++ b/openpype/hosts/maya/api/lib.py
@@ -2099,29 +2099,40 @@ def get_frame_range():
     }
 
 
-def reset_frame_range():
-    """Set frame range to current asset"""
+def reset_frame_range(playback=True, render=True, fps=True):
+    """Set frame range to current asset
 
-    fps = convert_to_maya_fps(
-        float(legacy_io.Session.get("AVALON_FPS", 25))
-    )
-    set_scene_fps(fps)
+    Args:
+        playback (bool, Optional): Whether to set the maya timeline playback
+            frame range. Defaults to True.
+ render (bool, Optional): Whether to set the maya render frame range. + Defaults to True. + fps (bool, Optional): Whether to set scene FPS. Defaults to True. + """ + + if fps: + fps = convert_to_maya_fps( + float(legacy_io.Session.get("AVALON_FPS", 25)) + ) + set_scene_fps(fps) frame_range = get_frame_range() frame_start = frame_range["frameStart"] - int(frame_range["handleStart"]) frame_end = frame_range["frameEnd"] + int(frame_range["handleEnd"]) - cmds.playbackOptions(minTime=frame_start) - cmds.playbackOptions(maxTime=frame_end) - cmds.playbackOptions(animationStartTime=frame_start) - cmds.playbackOptions(animationEndTime=frame_end) - cmds.playbackOptions(minTime=frame_start) - cmds.playbackOptions(maxTime=frame_end) - cmds.currentTime(frame_start) + if playback: + cmds.playbackOptions(minTime=frame_start) + cmds.playbackOptions(maxTime=frame_end) + cmds.playbackOptions(animationStartTime=frame_start) + cmds.playbackOptions(animationEndTime=frame_end) + cmds.playbackOptions(minTime=frame_start) + cmds.playbackOptions(maxTime=frame_end) + cmds.currentTime(frame_start) - cmds.setAttr("defaultRenderGlobals.startFrame", frame_start) - cmds.setAttr("defaultRenderGlobals.endFrame", frame_end) + if render: + cmds.setAttr("defaultRenderGlobals.startFrame", frame_start) + cmds.setAttr("defaultRenderGlobals.endFrame", frame_end) def reset_scene_resolution(): diff --git a/openpype/hosts/maya/api/lib_rendersettings.py b/openpype/hosts/maya/api/lib_rendersettings.py index 7d44d4eaa6..eaa728a2f6 100644 --- a/openpype/hosts/maya/api/lib_rendersettings.py +++ b/openpype/hosts/maya/api/lib_rendersettings.py @@ -158,7 +158,7 @@ class RenderSettings(object): cmds.setAttr( "defaultArnoldDriver.mergeAOVs", multi_exr) self._additional_attribs_setter(additional_options) - reset_frame_range() + reset_frame_range(playback=False, fps=False, render=True) def _set_redshift_settings(self, width, height): """Sets settings for Redshift.""" diff --git a/openpype/hosts/maya/plugins/create/create_animation.py b/openpype/hosts/maya/plugins/create/create_animation.py index a4b6e86598..f992ff2c1a 100644 --- a/openpype/hosts/maya/plugins/create/create_animation.py +++ b/openpype/hosts/maya/plugins/create/create_animation.py @@ -13,6 +13,7 @@ class CreateAnimation(plugin.Creator): icon = "male" write_color_sets = False write_face_sets = False + include_parent_hierarchy = False include_user_defined_attributes = False def __init__(self, *args, **kwargs): @@ -37,7 +38,7 @@ class CreateAnimation(plugin.Creator): self.data["visibleOnly"] = False # Include the groups above the out_SET content - self.data["includeParentHierarchy"] = False # Include parent groups + self.data["includeParentHierarchy"] = self.include_parent_hierarchy # Default to exporting world-space self.data["worldSpace"] = True diff --git a/openpype/hosts/maya/plugins/inventory/connect_yeti_rig.py b/openpype/hosts/maya/plugins/inventory/connect_yeti_rig.py new file mode 100644 index 0000000000..924a1a4627 --- /dev/null +++ b/openpype/hosts/maya/plugins/inventory/connect_yeti_rig.py @@ -0,0 +1,178 @@ +import os +import json +from collections import defaultdict + +from maya import cmds + +from openpype.pipeline import ( + InventoryAction, get_representation_context, get_representation_path +) +from openpype.hosts.maya.api.lib import get_container_members, get_id + + +class ConnectYetiRig(InventoryAction): + """Connect Yeti Rig with an animation or pointcache.""" + + label = "Connect Yeti Rig" + icon = "link" + color = "white" + + def process(self, containers): + # 
Validate selection is more than 1. + message = ( + "Only 1 container selected. 2+ containers needed for this action." + ) + if len(containers) == 1: + self.display_warning(message) + return + + # Categorize containers by family. + containers_by_family = defaultdict(list) + for container in containers: + family = get_representation_context( + container["representation"] + )["subset"]["data"]["family"] + containers_by_family[family].append(container) + + # Validate to only 1 source container. + source_containers = containers_by_family.get("animation", []) + source_containers += containers_by_family.get("pointcache", []) + source_container_namespaces = [ + x["namespace"] for x in source_containers + ] + message = ( + "{} animation containers selected:\n\n{}\n\nOnly select 1 of type " + "\"animation\" or \"pointcache\".".format( + len(source_containers), source_container_namespaces + ) + ) + if len(source_containers) != 1: + self.display_warning(message) + return + + source_container = source_containers[0] + source_ids = self.nodes_by_id(source_container) + + # Target containers. + target_ids = {} + inputs = [] + + yeti_rig_containers = containers_by_family.get("yetiRig") + if not yeti_rig_containers: + self.display_warning( + "Select at least one yetiRig container" + ) + return + + for container in yeti_rig_containers: + target_ids.update(self.nodes_by_id(container)) + + maya_file = get_representation_path( + get_representation_context( + container["representation"] + )["representation"] + ) + _, ext = os.path.splitext(maya_file) + settings_file = maya_file.replace(ext, ".rigsettings") + if not os.path.exists(settings_file): + continue + + with open(settings_file) as f: + inputs.extend(json.load(f)["inputs"]) + + # Compare loaded connections to scene. + for input in inputs: + source_node = source_ids.get(input["sourceID"]) + target_node = target_ids.get(input["destinationID"]) + + if not source_node or not target_node: + self.log.debug( + "Could not find nodes for input:\n" + + json.dumps(input, indent=4, sort_keys=True) + ) + continue + source_attr, target_attr = input["connections"] + + if not cmds.attributeQuery( + source_attr, node=source_node, exists=True + ): + self.log.debug( + "Could not find attribute {} on node {} for " + "input:\n{}".format( + source_attr, + source_node, + json.dumps(input, indent=4, sort_keys=True) + ) + ) + continue + + if not cmds.attributeQuery( + target_attr, node=target_node, exists=True + ): + self.log.debug( + "Could not find attribute {} on node {} for " + "input:\n{}".format( + target_attr, + target_node, + json.dumps(input, indent=4, sort_keys=True) + ) + ) + continue + + source_plug = "{}.{}".format( + source_node, source_attr + ) + target_plug = "{}.{}".format( + target_node, target_attr + ) + if cmds.isConnected( + source_plug, target_plug, ignoreUnitConversion=True + ): + self.log.debug( + "Connection already exists: {} -> {}".format( + source_plug, target_plug + ) + ) + continue + + cmds.connectAttr(source_plug, target_plug, force=True) + self.log.debug( + "Connected attributes: {} -> {}".format( + source_plug, target_plug + ) + ) + + def nodes_by_id(self, container): + ids = {} + for member in get_container_members(container): + id = get_id(member) + if not id: + continue + ids[id] = member + + return ids + + def display_warning(self, message, show_cancel=False): + """Show feedback to user. 
+ + Returns: + bool + """ + + from qtpy import QtWidgets + + accept = QtWidgets.QMessageBox.Ok + if show_cancel: + buttons = accept | QtWidgets.QMessageBox.Cancel + else: + buttons = accept + + state = QtWidgets.QMessageBox.warning( + None, + "", + message, + buttons=buttons, + defaultButton=accept + ) + + return state == accept diff --git a/openpype/hosts/maya/plugins/load/load_image.py b/openpype/hosts/maya/plugins/load/load_image.py new file mode 100644 index 0000000000..b464c268fc --- /dev/null +++ b/openpype/hosts/maya/plugins/load/load_image.py @@ -0,0 +1,332 @@ +import os +import copy + +from openpype.lib import EnumDef +from openpype.pipeline import ( + load, + get_representation_context +) +from openpype.pipeline.load.utils import get_representation_path_from_context +from openpype.pipeline.colorspace import ( + get_imageio_colorspace_from_filepath, + get_imageio_config, + get_imageio_file_rules +) +from openpype.settings import get_project_settings + +from openpype.hosts.maya.api.pipeline import containerise +from openpype.hosts.maya.api.lib import ( + unique_namespace, + namespaced +) + +from maya import cmds + + +def create_texture(): + """Create place2dTexture with file node with uv connections + + Mimics Maya "file [Texture]" creation. + """ + + place = cmds.shadingNode("place2dTexture", asUtility=True, name="place2d") + file = cmds.shadingNode("file", asTexture=True, name="file") + + connections = ["coverage", "translateFrame", "rotateFrame", "rotateUV", + "mirrorU", "mirrorV", "stagger", "wrapV", "wrapU", + "repeatUV", "offset", "noiseUV", "vertexUvThree", + "vertexUvTwo", "vertexUvOne", "vertexCameraOne"] + for attr in connections: + src = "{}.{}".format(place, attr) + dest = "{}.{}".format(file, attr) + cmds.connectAttr(src, dest) + + cmds.connectAttr(place + '.outUV', file + '.uvCoord') + cmds.connectAttr(place + '.outUvFilterSize', file + '.uvFilterSize') + + return file, place + + +def create_projection(): + """Create texture with place3dTexture and projection + + Mimics Maya "file [Projection]" creation. + """ + + file, place = create_texture() + projection = cmds.shadingNode("projection", asTexture=True, + name="projection") + place3d = cmds.shadingNode("place3dTexture", asUtility=True, + name="place3d") + + cmds.connectAttr(place3d + '.worldInverseMatrix[0]', + projection + ".placementMatrix") + cmds.connectAttr(file + '.outColor', projection + ".image") + + return file, place, projection, place3d + + +def create_stencil(): + """Create texture with extra place2dTexture offset and stencil + + Mimics Maya "file [Stencil]" creation. 
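These helpers mirror Maya's "file [Texture] / [Projection] / [Stencil]" creation presets; the `FileNodeLoader` below selects one through its `mode` option. A small sketch of that dispatch, assuming the helpers defined in this file (the mode value is an arbitrary example):

```python
from openpype.hosts.maya.plugins.load.load_image import (
    create_projection,
    create_stencil,
    create_texture,
)

mode = "projection"  # arbitrary example; one of texture/projection/stencil
nodes = {
    "texture": create_texture,
    "projection": create_projection,
    "stencil": create_stencil,
}[mode]()  # each helper returns the created node names as a tuple
```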
+ """ + + file, place = create_texture() + + place_stencil = cmds.shadingNode("place2dTexture", asUtility=True, + name="place2d_stencil") + stencil = cmds.shadingNode("stencil", asTexture=True, name="stencil") + + for src_attr, dest_attr in [ + ("outUV", "uvCoord"), + ("outUvFilterSize", "uvFilterSize") + ]: + src_plug = "{}.{}".format(place_stencil, src_attr) + cmds.connectAttr(src_plug, "{}.{}".format(place, dest_attr)) + cmds.connectAttr(src_plug, "{}.{}".format(stencil, dest_attr)) + + return file, place, stencil, place_stencil + + +class FileNodeLoader(load.LoaderPlugin): + """File node loader.""" + + families = ["image", "plate", "render"] + label = "Load file node" + representations = ["exr", "tif", "png", "jpg"] + icon = "image" + color = "orange" + order = 2 + + options = [ + EnumDef( + "mode", + items={ + "texture": "Texture", + "projection": "Projection", + "stencil": "Stencil" + }, + default="texture", + label="Texture Mode" + ) + ] + + def load(self, context, name, namespace, data): + + asset = context['asset']['name'] + namespace = namespace or unique_namespace( + asset + "_", + prefix="_" if asset[0].isdigit() else "", + suffix="_", + ) + + with namespaced(namespace, new=True) as namespace: + # Create the nodes within the namespace + nodes = { + "texture": create_texture, + "projection": create_projection, + "stencil": create_stencil + }[data.get("mode", "texture")]() + + file_node = cmds.ls(nodes, type="file")[0] + + self._apply_representation_context(context, file_node) + + # For ease of access for the user select all the nodes and select + # the file node last so that UI shows its attributes by default + cmds.select(list(nodes) + [file_node], replace=True) + + return containerise( + name=name, + namespace=namespace, + nodes=nodes, + context=context, + loader=self.__class__.__name__ + ) + + def update(self, container, representation): + + members = cmds.sets(container['objectName'], query=True) + file_node = cmds.ls(members, type="file")[0] + + context = get_representation_context(representation) + self._apply_representation_context(context, file_node) + + # Update representation + cmds.setAttr( + container["objectName"] + ".representation", + str(representation["_id"]), + type="string" + ) + + def switch(self, container, representation): + self.update(container, representation) + + def remove(self, container): + members = cmds.sets(container['objectName'], query=True) + cmds.lockNode(members, lock=False) + cmds.delete([container['objectName']] + members) + + # Clean up the namespace + try: + cmds.namespace(removeNamespace=container['namespace'], + deleteNamespaceContent=True) + except RuntimeError: + pass + + def _apply_representation_context(self, context, file_node): + """Update the file node to match the context. 
+ + This sets the file node's attributes for: + - file path + - udim tiling mode (if it is an udim tile) + - use frame extension (if it is a sequence) + - colorspace + + """ + + repre_context = context["representation"]["context"] + has_frames = repre_context.get("frame") is not None + has_udim = repre_context.get("udim") is not None + + # Set UV tiling mode if UDIM tiles + if has_udim: + cmds.setAttr(file_node + ".uvTilingMode", 3) # UDIM-tiles + else: + cmds.setAttr(file_node + ".uvTilingMode", 0) # off + + # Enable sequence if publish has `startFrame` and `endFrame` and + # `startFrame != endFrame` + if has_frames and self._is_sequence(context): + # When enabling useFrameExtension maya automatically + # connects an expression to .frameExtension to set + # the current frame. However, this expression is generated + # with some delay and thus it'll show a warning if frame 0 + # doesn't exist because we're explicitly setting the + # token. + cmds.setAttr(file_node + ".useFrameExtension", True) + else: + cmds.setAttr(file_node + ".useFrameExtension", False) + + # Set the file node path attribute + path = self._format_path(context) + cmds.setAttr(file_node + ".fileTextureName", path, type="string") + + # Set colorspace + colorspace = self._get_colorspace(context) + if colorspace: + cmds.setAttr(file_node + ".colorSpace", colorspace, type="string") + else: + self.log.debug("Unknown colorspace - setting colorspace skipped.") + + def _is_sequence(self, context): + """Check whether frameStart and frameEnd are not the same.""" + version = context.get("version", {}) + representation = context.get("representation", {}) + + for doc in [representation, version]: + # Frame range can be set on version or representation. + # When set on representation it overrides version data. + data = doc.get("data", {}) + start = data.get("frameStartHandle", data.get("frameStart", None)) + end = data.get("frameEndHandle", data.get("frameEnd", None)) + + if start is None or end is None: + continue + + if start != end: + return True + else: + return False + + return False + + def _get_colorspace(self, context): + """Return colorspace of the file to load. + + Retrieves the explicit colorspace from the publish. If no colorspace + data is stored with published content then project imageio settings + are used to make an assumption of the colorspace based on the file + rules. If no file rules match then None is returned. + + Returns: + str or None: The colorspace of the file or None if not detected. 
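In other words, an explicit colorspace stored with the publish always wins; the imageio file rules are only a fallback guess. A simplified, self-contained sketch of that precedence (the function and data below are illustrative, not the plugin's actual API):

```python
def resolve_colorspace(representation, guess_from_rules):
    """Prefer explicit colorspace data; fall back to file-rule matching."""
    colorspace_data = representation.get("data", {}).get("colorspaceData")
    if colorspace_data:
        return colorspace_data["colorspace"]
    # May return None when no file rule matches.
    return guess_from_rules(representation)

# Explicit data wins over any rule-based guess:
repre = {"data": {"colorspaceData": {"colorspace": "ACEScg"}}}
assert resolve_colorspace(repre, lambda _: "sRGB") == "ACEScg"
```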
+ + """ + + # We can't apply color spaces if management is not enabled + if not cmds.colorManagementPrefs(query=True, cmEnabled=True): + return + + representation = context["representation"] + colorspace_data = representation.get("data", {}).get("colorspaceData") + if colorspace_data: + return colorspace_data["colorspace"] + + # Assume colorspace from filepath based on project settings + project_name = context["project"]["name"] + host_name = os.environ.get("AVALON_APP") + project_settings = get_project_settings(project_name) + + config_data = get_imageio_config( + project_name, host_name, + project_settings=project_settings + ) + file_rules = get_imageio_file_rules( + project_name, host_name, + project_settings=project_settings + ) + + path = get_representation_path_from_context(context) + colorspace = get_imageio_colorspace_from_filepath( + path=path, + host_name=host_name, + project_name=project_name, + config_data=config_data, + file_rules=file_rules, + project_settings=project_settings + ) + + return colorspace + + def _format_path(self, context): + """Format the path with correct tokens for frames and udim tiles.""" + + context = copy.deepcopy(context) + representation = context["representation"] + template = representation.get("data", {}).get("template") + if not template: + # No template to find token locations for + return get_representation_path_from_context(context) + + def _placeholder(key): + # Substitute with a long placeholder value so that potential + # custom formatting with padding doesn't find its way into + # our formatting, so that wouldn't be padded as 0 + return "___{}___".format(key) + + # We format UDIM and Frame numbers with their specific tokens. To do so + # we in-place change the representation context data to format the path + # with our own data + tokens = { + "frame": "", + "udim": "" + } + has_tokens = False + repre_context = representation["context"] + for key, _token in tokens.items(): + if key in repre_context: + repre_context[key] = _placeholder(key) + has_tokens = True + + # Replace with our custom template that has the tokens set + representation["data"]["template"] = template + path = get_representation_path_from_context(context) + + if has_tokens: + for key, token in tokens.items(): + if key in repre_context: + path = path.replace(_placeholder(key), token) + + return path diff --git a/openpype/hosts/maya/plugins/load/load_yeti_rig.py b/openpype/hosts/maya/plugins/load/load_yeti_rig.py index 651607de8a..6a13d2e145 100644 --- a/openpype/hosts/maya/plugins/load/load_yeti_rig.py +++ b/openpype/hosts/maya/plugins/load/load_yeti_rig.py @@ -1,17 +1,12 @@ -import os -from collections import defaultdict +import maya.cmds as cmds -from openpype.settings import get_project_settings +from openpype.settings import get_current_project_settings import openpype.hosts.maya.api.plugin from openpype.hosts.maya.api import lib class YetiRigLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): - """ - This loader will load Yeti rig. You can select something in scene and if it - has same ID as mesh published with rig, their shapes will be linked - together. 
- """ + """This loader will load Yeti rig.""" families = ["yetiRig"] representations = ["ma"] @@ -22,72 +17,31 @@ class YetiRigLoader(openpype.hosts.maya.api.plugin.ReferenceLoader): color = "orange" def process_reference( - self, context, name=None, namespace=None, options=None): + self, context, name=None, namespace=None, options=None + ): - import maya.cmds as cmds - - # get roots of selected hierarchies - selected_roots = [] - for sel in cmds.ls(sl=True, long=True): - selected_roots.append(sel.split("|")[1]) - - # get all objects under those roots - selected_hierarchy = [] - for root in selected_roots: - selected_hierarchy.append(cmds.listRelatives( - root, - allDescendents=True) or []) - - # flatten the list and filter only shapes - shapes_flat = [] - for root in selected_hierarchy: - shapes = cmds.ls(root, long=True, type="mesh") or [] - for shape in shapes: - shapes_flat.append(shape) - - # create dictionary of cbId and shape nodes - scene_lookup = defaultdict(list) - for node in shapes_flat: - cb_id = lib.get_id(node) - scene_lookup[cb_id] = node - - # load rig + group_name = "{}:{}".format(namespace, name) with lib.maintained_selection(): - file_url = self.prepare_root_value(self.fname, - context["project"]["name"]) - nodes = cmds.file(file_url, - namespace=namespace, - reference=True, - returnNewNodes=True, - groupReference=True, - groupName="{}:{}".format(namespace, name)) + file_url = self.prepare_root_value( + self.fname, context["project"]["name"] + ) + nodes = cmds.file( + file_url, + namespace=namespace, + reference=True, + returnNewNodes=True, + groupReference=True, + groupName=group_name + ) - # for every shape node we've just loaded find matching shape by its - # cbId in selection. If found outMesh of scene shape will connect to - # inMesh of loaded shape. - for destination_node in nodes: - source_node = scene_lookup[lib.get_id(destination_node)] - if source_node: - self.log.info("found: {}".format(source_node)) - self.log.info( - "creating connection to {}".format(destination_node)) - - cmds.connectAttr("{}.outMesh".format(source_node), - "{}.inMesh".format(destination_node), - force=True) - - groupName = "{}:{}".format(namespace, name) - - settings = get_project_settings(os.environ['AVALON_PROJECT']) - colors = settings['maya']['load']['colors'] - - c = colors.get('yetiRig') + settings = get_current_project_settings() + colors = settings["maya"]["load"]["colors"] + c = colors.get("yetiRig") if c is not None: - cmds.setAttr(groupName + ".useOutlinerColor", 1) - cmds.setAttr(groupName + ".outlinerColor", - (float(c[0])/255), - (float(c[1])/255), - (float(c[2])/255) + cmds.setAttr(group_name + ".useOutlinerColor", 1) + cmds.setAttr( + group_name + ".outlinerColor", + (float(c[0]) / 255), (float(c[1]) / 255), (float(c[2]) / 255) ) self[:] = nodes diff --git a/openpype/hosts/maya/plugins/publish/collect_review.py b/openpype/hosts/maya/plugins/publish/collect_review.py index 65ff7cf0fe..548b1c996a 100644 --- a/openpype/hosts/maya/plugins/publish/collect_review.py +++ b/openpype/hosts/maya/plugins/publish/collect_review.py @@ -24,7 +24,9 @@ class CollectReview(pyblish.api.InstancePlugin): task = legacy_io.Session["AVALON_TASK"] # Get panel. 
- instance.data["panel"] = cmds.playblast(activeEditor=True) + instance.data["panel"] = cmds.playblast( + activeEditor=True + ).split("|")[-1] # get cameras members = instance.data['setMembers'] diff --git a/openpype/hosts/maya/plugins/publish/extract_look.py b/openpype/hosts/maya/plugins/publish/extract_look.py index efeddcfbe4..447c9a615c 100644 --- a/openpype/hosts/maya/plugins/publish/extract_look.py +++ b/openpype/hosts/maya/plugins/publish/extract_look.py @@ -1,12 +1,10 @@ # -*- coding: utf-8 -*- """Maya look extractor.""" import os -import sys import json import tempfile import platform import contextlib -import subprocess from collections import OrderedDict from maya import cmds # noqa @@ -23,34 +21,13 @@ COPY = 1 HARDLINK = 2 -def escape_space(path): - """Ensure path is enclosed by quotes to allow paths with spaces""" - return '"{}"'.format(path) if " " in path else path - - -def get_ocio_config_path(profile_folder): - """Path to OpenPype vendorized OCIO. - - Vendorized OCIO config file path is grabbed from the specific path - hierarchy specified below. - - "{OPENPYPE_ROOT}/vendor/OpenColorIO-Configs/{profile_folder}/config.ocio" - Args: - profile_folder (str): Name of folder to grab config file from. - - Returns: - str: Path to vendorized config file. - """ - - return os.path.join( - os.environ["OPENPYPE_ROOT"], - "vendor", - "bin", - "ocioconfig", - "OpenColorIOConfigs", - profile_folder, - "config.ocio" - ) +def _has_arnold(): + """Return whether the arnold package is available and can be imported.""" + try: + import arnold # noqa: F401 + return True + except (ImportError, ModuleNotFoundError): + return False def find_paths_by_hash(texture_hash): @@ -548,7 +525,7 @@ class ExtractLook(publish.Extractor): color_space = cmds.getAttr(color_space_attr) except ValueError: # node doesn't have color space attribute - if cmds.loadPlugin("mtoa", quiet=True): + if _has_arnold(): img_info = image_info(filepath) color_space = guess_colorspace(img_info) else: @@ -560,7 +537,7 @@ class ExtractLook(publish.Extractor): render_colorspace]) else: - if cmds.loadPlugin("mtoa", quiet=True): + if _has_arnold(): img_info = image_info(filepath) color_space = guess_colorspace(img_info) if color_space == "sRGB": diff --git a/openpype/hosts/maya/plugins/publish/validate_rendersettings.py b/openpype/hosts/maya/plugins/publish/validate_rendersettings.py index 94e2633593..53f340cd2c 100644 --- a/openpype/hosts/maya/plugins/publish/validate_rendersettings.py +++ b/openpype/hosts/maya/plugins/publish/validate_rendersettings.py @@ -13,6 +13,22 @@ from openpype.pipeline.publish import ( from openpype.hosts.maya.api import lib +def convert_to_int_or_float(string_value): + # Order of types are important here since float can convert string + # representation of integer. + types = [int, float] + for t in types: + try: + result = t(string_value) + except ValueError: + continue + else: + return result + + # Neither integer or float. 
+    return string_value
+
+
 def get_redshift_image_format_labels():
     """Return nice labels for Redshift image formats."""
     var = "$g_redshiftImageFormatLabels"
@@ -242,10 +258,6 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin):
                              cls.DEFAULT_PADDING, "0" * cls.DEFAULT_PADDING))
 
         # load validation definitions from settings
-        validation_settings = (
-            instance.context.data["project_settings"]["maya"]["publish"]["ValidateRenderSettings"].get(  # noqa: E501
-                "{}_render_attributes".format(renderer)) or []
-        )
         settings_lights_flag = instance.context.data["project_settings"].get(
             "maya", {}).get(
                 "RenderSettings", {}).get(
@@ -253,17 +265,67 @@
         instance_lights_flag = instance.data.get("renderSetupIncludeLights")
         if settings_lights_flag != instance_lights_flag:
-            cls.log.warning('Instance flag for "Render Setup Include Lights" is set to {0} and Settings flag is set to {1}'.format(instance_lights_flag, settings_lights_flag))  # noqa
+            cls.log.warning(
+                "Instance flag for \"Render Setup Include Lights\" is set to "
+                "{} and Settings flag is set to {}".format(
+                    instance_lights_flag, settings_lights_flag
+                )
+            )
 
         # go through definitions and test if such node.attribute exists.
         # if so, compare its value from the one required.
-        for attr, value in OrderedDict(validation_settings).items():
-            cls.log.debug("{}: {}".format(attr, value))
-            if "." not in attr:
-                cls.log.warning("Skipping invalid attribute defined in "
-                                "validation settings: '{}'".format(attr))
+        for attribute, data in cls.get_nodes(instance, renderer).items():
+            # Validate the settings have values.
+            if not data["values"]:
+                cls.log.error(
+                    "Settings for {} are missing values.".format(attribute)
+                )
                 continue
 
+            for node in data["nodes"]:
+                try:
+                    render_value = cmds.getAttr(
+                        "{}.{}".format(node, attribute)
+                    )
+                except RuntimeError:
+                    invalid = True
+                    cls.log.error(
+                        "Cannot get value of {}.{}".format(node, attribute)
+                    )
+                else:
+                    if render_value not in data["values"]:
+                        invalid = True
+                        cls.log.error(
+                            "Invalid value {} set on {}.{}. Expecting "
+                            "{}".format(
+                                render_value, node, attribute, data["values"]
+                            )
+                        )
+
+        return invalid
+
+    @classmethod
+    def get_nodes(cls, instance, renderer):
+        maya_settings = instance.context.data["project_settings"]["maya"]
+        validation_settings = (
+            maya_settings["publish"]["ValidateRenderSettings"].get(
+                "{}_render_attributes".format(renderer)
+            ) or []
+        )
+        result = {}
+        for attr, values in OrderedDict(validation_settings).items():
+            cls.log.debug("{}: {}".format(attr, values))
+            if "." not in attr:
+                cls.log.warning(
+                    "Skipping invalid attribute defined in validation "
+                    "settings: \"{}\"".format(attr)
+                )
+                continue
+
+            values = [convert_to_int_or_float(v) for v in values]
+
             node_type, attribute_name = attr.split(".", 1)
 
             # first get node of that type
@@ -271,28 +333,13 @@
 
             if not nodes:
                 cls.log.warning(
-                    "No nodes of type '{}' found.".format(node_type))
+                    "No nodes of type \"{}\" found.".format(node_type)
+                )
                 continue
 
-            for node in nodes:
-                try:
-                    render_value = cmds.getAttr(
-                        "{}.{}".format(node, attribute_name))
-                except RuntimeError:
-                    invalid = True
-                    cls.log.error(
-                        "Cannot get value of {}.{}".format(
-                            node, attribute_name))
-                else:
-                    if str(value) != str(render_value):
-                        invalid = True
-                        cls.log.error(
-                            ("Invalid value {} set on {}.{}. 
" - "Expecting {}").format( - render_value, node, attribute_name, value) - ) + result[attribute_name] = {"nodes": nodes, "values": values} - return invalid + return result @classmethod def repair(cls, instance): @@ -305,6 +352,12 @@ class ValidateRenderSettings(pyblish.api.InstancePlugin): "{aov_separator}", instance.data.get("aovSeparator", "_") ) + for attribute, data in cls.get_nodes(instance, renderer).items(): + if not data["values"]: + continue + for node in data["nodes"]: + lib.set_attribute(attribute, data["values"][0], node) + with lib.renderlayer(layer_node): default = lib.RENDER_ATTRS['default'] render_attrs = lib.RENDER_ATTRS.get(renderer, default) diff --git a/openpype/hosts/maya/plugins/publish/validate_yeti_renderscript_callbacks.py b/openpype/hosts/maya/plugins/publish/validate_yeti_renderscript_callbacks.py index a864a18cee..06250f5779 100644 --- a/openpype/hosts/maya/plugins/publish/validate_yeti_renderscript_callbacks.py +++ b/openpype/hosts/maya/plugins/publish/validate_yeti_renderscript_callbacks.py @@ -48,6 +48,18 @@ class ValidateYetiRenderScriptCallbacks(pyblish.api.InstancePlugin): yeti_loaded = cmds.pluginInfo("pgYetiMaya", query=True, loaded=True) + if not yeti_loaded and not cmds.ls(type="pgYetiMaya"): + # The yeti plug-in is available and loaded so at + # this point we don't really care whether the scene + # has any yeti callback set or not since if the callback + # is there it wouldn't error and if it weren't then + # nothing happens because there are no yeti nodes. + cls.log.info( + "Yeti is loaded but no yeti nodes were found. " + "Callback validation skipped.." + ) + return False + renderer = instance.data["renderer"] if renderer == "redshift": cls.log.info("Redshift ignores any pre and post render callbacks") diff --git a/openpype/hosts/maya/tools/mayalookassigner/commands.py b/openpype/hosts/maya/tools/mayalookassigner/commands.py index 2e7a51efde..3d9746511d 100644 --- a/openpype/hosts/maya/tools/mayalookassigner/commands.py +++ b/openpype/hosts/maya/tools/mayalookassigner/commands.py @@ -80,21 +80,7 @@ def get_all_asset_nodes(): Returns: list: list of dictionaries """ - - host = registered_host() - - nodes = [] - for container in host.ls(): - # We are not interested in looks but assets! 
- if container["loader"] == "LookLoader": - continue - - # Gather all information - container_name = container["objectName"] - nodes += lib.get_container_members(container_name) - - nodes = list(set(nodes)) - return nodes + return cmds.ls(dag=True, noIntermediate=True, long=True) def create_asset_id_hash(nodes): diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_data.py b/openpype/hosts/nuke/plugins/publish/extract_review_data.py index dee8248295..c221af40fb 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_review_data.py +++ b/openpype/hosts/nuke/plugins/publish/extract_review_data.py @@ -25,7 +25,7 @@ class ExtractReviewData(publish.Extractor): # review can be removed since `ProcessSubmittedJobOnFarm` will create # reviewable representation if needed if ( - "render.farm" in instance.data["families"] + instance.data.get("farm") and "review" in instance.data["families"] ): instance.data["families"].remove("review") diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py b/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py index 67779e9599..e4b7b155cd 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py +++ b/openpype/hosts/nuke/plugins/publish/extract_review_data_lut.py @@ -49,7 +49,12 @@ class ExtractReviewDataLut(publish.Extractor): exporter.stagingDir, exporter.file).replace("\\", "/") instance.data["representations"] += data["representations"] - if "render.farm" in families: + # review can be removed since `ProcessSubmittedJobOnFarm` will create + # reviewable representation if needed + if ( + instance.data.get("farm") + and "review" in instance.data["families"] + ): instance.data["families"].remove("review") self.log.debug( diff --git a/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py b/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py index 3fcfc2a4b5..956d1a54a3 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py +++ b/openpype/hosts/nuke/plugins/publish/extract_review_data_mov.py @@ -105,10 +105,7 @@ class ExtractReviewDataMov(publish.Extractor): self, instance, o_name, o_data["extension"], multiple_presets) - if ( - "render.farm" in families or - "prerender.farm" in families - ): + if instance.data.get("farm"): if "review" in instance.data["families"]: instance.data["families"].remove("review") diff --git a/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py b/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py index a1a0e241c0..f391ca1e7c 100644 --- a/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py +++ b/openpype/hosts/nuke/plugins/publish/extract_thumbnail.py @@ -31,7 +31,7 @@ class ExtractThumbnail(publish.Extractor): def process(self, instance): - if "render.farm" in instance.data["families"]: + if instance.data.get("farm"): return with napi.maintained_selection(): diff --git a/openpype/hosts/photoshop/api/launch_logic.py b/openpype/hosts/photoshop/api/launch_logic.py index 89ba6ad4e6..25732446b5 100644 --- a/openpype/hosts/photoshop/api/launch_logic.py +++ b/openpype/hosts/photoshop/api/launch_logic.py @@ -66,11 +66,11 @@ class MainThreadItem: return self._result def execute(self): - """Execute callback and store it's result. + """Execute callback and store its result. Method must be called from main thread. Item is marked as `done` when callback execution finished. Store output of callback of exception - information when callback raise one. + information when callback raises one. 
""" log.debug("Executing process in main thread") if self.done: diff --git a/openpype/hosts/tvpaint/api/communication_server.py b/openpype/hosts/tvpaint/api/communication_server.py index e94e64e04a..6f76c25e0c 100644 --- a/openpype/hosts/tvpaint/api/communication_server.py +++ b/openpype/hosts/tvpaint/api/communication_server.py @@ -389,11 +389,11 @@ class MainThreadItem: self.kwargs = kwargs def execute(self): - """Execute callback and store it's result. + """Execute callback and store its result. Method must be called from main thread. Item is marked as `done` when callback execution finished. Store output of callback of exception - information when callback raise one. + information when callback raises one. """ log.debug("Executing process in main thread") if self.done: diff --git a/openpype/hosts/unreal/ue_workers.py b/openpype/hosts/unreal/ue_workers.py index 00f83a7d7a..d1740124a8 100644 --- a/openpype/hosts/unreal/ue_workers.py +++ b/openpype/hosts/unreal/ue_workers.py @@ -5,29 +5,38 @@ import re import subprocess from distutils import dir_util from pathlib import Path -from typing import List +from typing import List, Union import openpype.hosts.unreal.lib as ue_lib from qtpy import QtCore -def parse_comp_progress(line: str, progress_signal: QtCore.Signal(int)) -> int: - match = re.search('\[[1-9]+/[0-9]+\]', line) +def parse_comp_progress(line: str, progress_signal: QtCore.Signal(int)): + match = re.search(r"\[[1-9]+/[0-9]+]", line) if match is not None: - split: list[str] = match.group().split('/') + split: list[str] = match.group().split("/") curr: float = float(split[0][1:]) total: float = float(split[1][:-1]) progress_signal.emit(int((curr / total) * 100.0)) -def parse_prj_progress(line: str, progress_signal: QtCore.Signal(int)) -> int: - match = re.search('@progress', line) +def parse_prj_progress(line: str, progress_signal: QtCore.Signal(int)): + match = re.search("@progress", line) if match is not None: - percent_match = re.search('\d{1,3}', line) + percent_match = re.search(r"\d{1,3}", line) progress_signal.emit(int(percent_match.group())) +def retrieve_exit_code(line: str): + match = re.search(r"ExitCode=\d+", line) + if match is not None: + split: list[str] = match.group().split("=") + return int(split[1]) + + return None + + class UEProjectGenerationWorker(QtCore.QObject): finished = QtCore.Signal(str) failed = QtCore.Signal(str) @@ -77,16 +86,19 @@ class UEProjectGenerationWorker(QtCore.QObject): if self.dev_mode: stage_count = 4 - self.stage_begin.emit(f'Generating a new UE project ... 1 out of ' - f'{stage_count}') + self.stage_begin.emit( + ("Generating a new UE project ... 
1 out of " + f"{stage_count}")) - commandlet_cmd = [f'{ue_editor_exe.as_posix()}', - f'{cmdlet_project.as_posix()}', - f'-run=OPGenerateProject', - f'{project_file.resolve().as_posix()}'] + commandlet_cmd = [ + f"{ue_editor_exe.as_posix()}", + f"{cmdlet_project.as_posix()}", + "-run=OPGenerateProject", + f"{project_file.resolve().as_posix()}", + ] if self.dev_mode: - commandlet_cmd.append('-GenerateCode') + commandlet_cmd.append("-GenerateCode") gen_process = subprocess.Popen(commandlet_cmd, stdout=subprocess.PIPE, @@ -94,24 +106,27 @@ class UEProjectGenerationWorker(QtCore.QObject): for line in gen_process.stdout: decoded_line = line.decode(errors="replace") - print(decoded_line, end='') + print(decoded_line, end="") self.log.emit(decoded_line) gen_process.stdout.close() return_code = gen_process.wait() if return_code and return_code != 0: - msg = 'Failed to generate ' + self.project_name \ - + f' project! Exited with return code {return_code}' + msg = ( + f"Failed to generate {self.project_name} " + f"project! Exited with return code {return_code}" + ) self.failed.emit(msg, return_code) raise RuntimeError(msg) print("--- Project has been generated successfully.") - self.stage_begin.emit(f'Writing the Engine ID of the build UE ... 1' - f' out of {stage_count}') + self.stage_begin.emit( + (f"Writing the Engine ID of the build UE ... 1" + f" out of {stage_count}")) if not project_file.is_file(): - msg = "Failed to write the Engine ID into .uproject file! Can " \ - "not read!" + msg = ("Failed to write the Engine ID into .uproject file! Can " + "not read!") self.failed.emit(msg) raise RuntimeError(msg) @@ -125,13 +140,14 @@ class UEProjectGenerationWorker(QtCore.QObject): pf.seek(0) json.dump(pf_json, pf, indent=4) pf.truncate() - print(f'--- Engine ID has been written into the project file') + print("--- Engine ID has been written into the project file") self.progress.emit(90) if self.dev_mode: # 2nd stage - self.stage_begin.emit(f'Generating project files ... 2 out of ' - f'{stage_count}') + self.stage_begin.emit( + (f"Generating project files ... 2 out of " + f"{stage_count}")) self.progress.emit(0) ubt_path = ue_lib.get_path_to_ubt(self.engine_path, @@ -154,8 +170,8 @@ class UEProjectGenerationWorker(QtCore.QObject): stdout=subprocess.PIPE, stderr=subprocess.PIPE) for line in gen_proc.stdout: - decoded_line: str = line.decode(errors='replace') - print(decoded_line, end='') + decoded_line: str = line.decode(errors="replace") + print(decoded_line, end="") self.log.emit(decoded_line) parse_prj_progress(decoded_line, self.progress) @@ -163,13 +179,13 @@ class UEProjectGenerationWorker(QtCore.QObject): return_code = gen_proc.wait() if return_code and return_code != 0: - msg = 'Failed to generate project files! ' \ - f'Exited with return code {return_code}' + msg = ("Failed to generate project files! " + f"Exited with return code {return_code}") self.failed.emit(msg, return_code) raise RuntimeError(msg) - self.stage_begin.emit(f'Building the project ... 3 out of ' - f'{stage_count}') + self.stage_begin.emit( + f"Building the project ... 
3 out of {stage_count}") self.progress.emit(0) # 3rd stage build_prj_cmd = [ubt_path.as_posix(), @@ -177,16 +193,16 @@ class UEProjectGenerationWorker(QtCore.QObject): arch, "Development", "-TargetType=Editor", - f'-Project={project_file}', - f'{project_file}', + f"-Project={project_file}", + f"{project_file}", "-IgnoreJunk"] build_prj_proc = subprocess.Popen(build_prj_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) for line in build_prj_proc.stdout: - decoded_line: str = line.decode(errors='replace') - print(decoded_line, end='') + decoded_line: str = line.decode(errors="replace") + print(decoded_line, end="") self.log.emit(decoded_line) parse_comp_progress(decoded_line, self.progress) @@ -194,16 +210,17 @@ class UEProjectGenerationWorker(QtCore.QObject): return_code = build_prj_proc.wait() if return_code and return_code != 0: - msg = 'Failed to build project! ' \ - f'Exited with return code {return_code}' + msg = ("Failed to build project! " + f"Exited with return code {return_code}") self.failed.emit(msg, return_code) raise RuntimeError(msg) # ensure we have PySide2 installed in engine self.progress.emit(0) - self.stage_begin.emit(f'Checking PySide2 installation... {stage_count}' - f' out of {stage_count}') + self.stage_begin.emit( + (f"Checking PySide2 installation... {stage_count} " + f" out of {stage_count}")) python_path = None if platform.system().lower() == "windows": python_path = self.engine_path / ("Engine/Binaries/ThirdParty/" @@ -225,9 +242,30 @@ class UEProjectGenerationWorker(QtCore.QObject): msg = f"Unreal Python not found at {python_path}" self.failed.emit(msg, 1) raise RuntimeError(msg) - subprocess.check_call( - [python_path.as_posix(), "-m", "pip", "install", "pyside2"] - ) + pyside_cmd = [python_path.as_posix(), + "-m", + "pip", + "install", + "pyside2"] + + pyside_install = subprocess.Popen(pyside_cmd, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE) + + for line in pyside_install.stdout: + decoded_line: str = line.decode(errors="replace") + print(decoded_line, end="") + self.log.emit(decoded_line) + + pyside_install.stdout.close() + return_code = pyside_install.wait() + + if return_code and return_code != 0: + msg = ("Failed to create the project! " + "The installation of PySide2 has failed!") + self.failed.emit(msg, return_code) + raise RuntimeError(msg) + self.progress.emit(100) self.finished.emit("Project successfully built!") @@ -266,26 +304,30 @@ class UEPluginInstallWorker(QtCore.QObject): # in order to successfully build the plugin, # It must be built outside the Engine directory and then moved - build_plugin_cmd: List[str] = [f'{uat_path.as_posix()}', - 'BuildPlugin', - f'-Plugin={uplugin_path.as_posix()}', - f'-Package={temp_dir.as_posix()}'] + build_plugin_cmd: List[str] = [f"{uat_path.as_posix()}", + "BuildPlugin", + f"-Plugin={uplugin_path.as_posix()}", + f"-Package={temp_dir.as_posix()}"] build_proc = subprocess.Popen(build_plugin_cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) + return_code: Union[None, int] = None for line in build_proc.stdout: - decoded_line: str = line.decode(errors='replace') - print(decoded_line, end='') + decoded_line: str = line.decode(errors="replace") + print(decoded_line, end="") self.log.emit(decoded_line) + if return_code is None: + return_code = retrieve_exit_code(decoded_line) parse_comp_progress(decoded_line, self.progress) build_proc.stdout.close() - return_code = build_proc.wait() + build_proc.wait() if return_code and return_code != 0: - msg = 'Failed to build plugin' \ - f' project! 
Exited with return code {return_code}' + msg = ("Failed to build plugin" + f" project! Exited with return code {return_code}") + dir_util.remove_tree(temp_dir.as_posix()) self.failed.emit(msg, return_code) raise RuntimeError(msg) diff --git a/openpype/lib/applications.py b/openpype/lib/applications.py index 7cc296f47b..127d31d042 100644 --- a/openpype/lib/applications.py +++ b/openpype/lib/applications.py @@ -889,7 +889,8 @@ class ApplicationLaunchContext: self.modules_manager = ModulesManager() # Logger - logger_name = "{}-{}".format(self.__class__.__name__, self.app_name) + logger_name = "{}-{}".format(self.__class__.__name__, + self.application.full_name) self.log = Logger.get_logger(logger_name) self.executable = executable @@ -1246,7 +1247,7 @@ class ApplicationLaunchContext: args_len_str = " ({})".format(len(args)) self.log.info( "Launching \"{}\" with args{}: {}".format( - self.app_name, args_len_str, args + self.application.full_name, args_len_str, args ) ) self.launch_args = args @@ -1271,7 +1272,9 @@ class ApplicationLaunchContext: exc_info=True ) - self.log.debug("Launch of {} finished.".format(self.app_name)) + self.log.debug("Launch of {} finished.".format( + self.application.full_name + )) return self.process @@ -1508,8 +1511,8 @@ def prepare_app_environments( if key in source_env: source_env[key] = value - # `added_env_keys` has debug purpose - added_env_keys = {app.group.name, app.name} + # `app_and_tool_labels` has debug purpose + app_and_tool_labels = [app.full_name] # Environments for application environments = [ app.group.environment, @@ -1532,15 +1535,14 @@ def prepare_app_environments( for group_name in sorted(groups_by_name.keys()): group = groups_by_name[group_name] environments.append(group.environment) - added_env_keys.add(group_name) for tool_name in sorted(tool_by_group_name[group_name].keys()): tool = tool_by_group_name[group_name][tool_name] environments.append(tool.environment) - added_env_keys.add(tool.name) + app_and_tool_labels.append(tool.full_name) log.debug( "Will add environments for apps and tools: {}".format( - ", ".join(added_env_keys) + ", ".join(app_and_tool_labels) ) ) diff --git a/openpype/lib/execute.py b/openpype/lib/execute.py index 759a4db0cb..7a929a0ade 100644 --- a/openpype/lib/execute.py +++ b/openpype/lib/execute.py @@ -102,6 +102,10 @@ def run_subprocess(*args, **kwargs): if ( platform.system().lower() == "windows" and "creationflags" not in kwargs + # shell=True already tries to hide the console window + # and passing these creationflags then shows the window again + # so we avoid it for shell=True cases + and kwargs.get("shell") is not True ): kwargs["creationflags"] = ( subprocess.CREATE_NEW_PROCESS_GROUP diff --git a/openpype/lib/file_transaction.py b/openpype/lib/file_transaction.py index fe70b37cb1..81332a8891 100644 --- a/openpype/lib/file_transaction.py +++ b/openpype/lib/file_transaction.py @@ -13,6 +13,16 @@ else: from shutil import copyfile +class DuplicateDestinationError(ValueError): + """Error raised when transfer destination already exists in queue. + + The error is only raised if `allow_queue_replacements` is False on the + FileTransaction instance and the added file to transfer is of a different + src file than the one already detected in the queue. + + """ + + class FileTransaction(object): """File transaction with rollback options. 
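For context, the new guard turns silently conflicting transfers into a hard error. A minimal usage sketch, assuming the `FileTransaction` API added in this diff (the file paths are hypothetical):

```python
from openpype.lib.file_transaction import (
    DuplicateDestinationError,
    FileTransaction,
)

transaction = FileTransaction(allow_queue_replacements=False)
transaction.add("/tmp/render_v001.exr", "/publish/render.exr")

try:
    # Different source, same destination: raises unless the transaction
    # was created with allow_queue_replacements=True.
    transaction.add("/tmp/render_v002.exr", "/publish/render.exr")
except DuplicateDestinationError as exc:
    print("Conflicting transfer:", exc)
```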
@@ -44,7 +54,7 @@ class FileTransaction(object): MODE_COPY = 0 MODE_HARDLINK = 1 - def __init__(self, log=None): + def __init__(self, log=None, allow_queue_replacements=False): if log is None: log = logging.getLogger("FileTransaction") @@ -60,6 +70,8 @@ class FileTransaction(object): # Backup file location mapping to original locations self._backup_to_original = {} + self._allow_queue_replacements = allow_queue_replacements + def add(self, src, dst, mode=MODE_COPY): """Add a new file to transfer queue. @@ -82,6 +94,14 @@ class FileTransaction(object): src, dst)) return else: + if not self._allow_queue_replacements: + raise DuplicateDestinationError( + "Transfer to destination is already in queue: " + "{} -> {}. It's not allowed to be replaced by " + "a new transfer from {}".format( + queued_src, dst, src + )) + self.log.warning("File transfer in queue replaced..") self.log.debug( "Removed from queue: {} -> {} replaced by {} -> {}".format( diff --git a/openpype/lib/vendor_bin_utils.py b/openpype/lib/vendor_bin_utils.py index b6797dbba0..e5deb7a6b2 100644 --- a/openpype/lib/vendor_bin_utils.py +++ b/openpype/lib/vendor_bin_utils.py @@ -224,18 +224,26 @@ def find_tool_in_custom_paths(paths, tool, validation_func=None): def _check_args_returncode(args): try: - # Python 2 compatibility where DEVNULL is not available + kwargs = {} + if platform.system().lower() == "windows": + kwargs["creationflags"] = ( + subprocess.CREATE_NEW_PROCESS_GROUP + | getattr(subprocess, "DETACHED_PROCESS", 0) + | getattr(subprocess, "CREATE_NO_WINDOW", 0) + ) + if hasattr(subprocess, "DEVNULL"): proc = subprocess.Popen( args, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, + **kwargs ) proc.wait() else: with open(os.devnull, "w") as devnull: proc = subprocess.Popen( - args, stdout=devnull, stderr=devnull, + args, stdout=devnull, stderr=devnull, **kwargs ) proc.wait() @@ -375,7 +383,7 @@ def get_ffmpeg_tool_path(tool="ffmpeg"): # Look to PATH for the tool if not tool_executable_path: from_path = find_executable(tool) - if from_path and _oiio_executable_validation(from_path): + if from_path and _ffmpeg_executable_validation(from_path): tool_executable_path = from_path CachedToolPaths.cache_executable_path(tool, tool_executable_path) diff --git a/openpype/modules/clockify/clockify_api.py b/openpype/modules/clockify/clockify_api.py index 6af911fffc..80979c83ab 100644 --- a/openpype/modules/clockify/clockify_api.py +++ b/openpype/modules/clockify/clockify_api.py @@ -6,34 +6,22 @@ import datetime import requests from .constants import ( CLOCKIFY_ENDPOINT, - ADMIN_PERMISSION_NAMES + ADMIN_PERMISSION_NAMES, ) from openpype.lib.local_settings import OpenPypeSecureRegistry - - -def time_check(obj): - if obj.request_counter < 10: - obj.request_counter += 1 - return - - wait_time = 1 - (time.time() - obj.request_time) - if wait_time > 0: - time.sleep(wait_time) - - obj.request_time = time.time() - obj.request_counter = 0 +from openpype.lib import Logger class ClockifyAPI: + log = Logger.get_logger(__name__) + def __init__(self, api_key=None, master_parent=None): self.workspace_name = None - self.workspace_id = None self.master_parent = master_parent self.api_key = api_key - self.request_counter = 0 - self.request_time = time.time() - + self._workspace_id = None + self._user_id = None self._secure_registry = None @property @@ -44,11 +32,19 @@ class ClockifyAPI: @property def headers(self): - return {"X-Api-Key": self.api_key} + return {"x-api-key": self.api_key} + + @property + def workspace_id(self): + return 
self._workspace_id + + @property + def user_id(self): + return self._user_id def verify_api(self): for key, value in self.headers.items(): - if value is None or value.strip() == '': + if value is None or value.strip() == "": return False return True @@ -59,65 +55,55 @@ class ClockifyAPI: if api_key is not None and self.validate_api_key(api_key) is True: self.api_key = api_key self.set_workspace() + self.set_user_id() if self.master_parent: self.master_parent.signed_in() return True return False def validate_api_key(self, api_key): - test_headers = {'X-Api-Key': api_key} - action_url = 'workspaces/' - time_check(self) + test_headers = {"x-api-key": api_key} + action_url = "user" response = requests.get( - CLOCKIFY_ENDPOINT + action_url, - headers=test_headers + CLOCKIFY_ENDPOINT + action_url, headers=test_headers ) if response.status_code != 200: return False return True - def validate_workspace_perm(self, workspace_id=None): - user_id = self.get_user_id() + def validate_workspace_permissions(self, workspace_id=None, user_id=None): if user_id is None: + self.log.info("No user_id found during validation") return False if workspace_id is None: workspace_id = self.workspace_id - action_url = "/workspaces/{}/users/{}/permissions".format( - workspace_id, user_id - ) - time_check(self) + action_url = f"workspaces/{workspace_id}/users?includeRoles=1" response = requests.get( - CLOCKIFY_ENDPOINT + action_url, - headers=self.headers + CLOCKIFY_ENDPOINT + action_url, headers=self.headers ) - user_permissions = response.json() - for perm in user_permissions: - if perm['name'] in ADMIN_PERMISSION_NAMES: + data = response.json() + for user in data: + if user.get("id") == user_id: + roles_data = user.get("roles") + for entities in roles_data: + if entities.get("role") in ADMIN_PERMISSION_NAMES: return True return False def get_user_id(self): - action_url = 'v1/user/' - time_check(self) + action_url = "user" response = requests.get( - CLOCKIFY_ENDPOINT + action_url, - headers=self.headers + CLOCKIFY_ENDPOINT + action_url, headers=self.headers ) - # this regex is neccessary: UNICODE strings are crashing - # during json serialization - id_regex = '\"{1}id\"{1}\:{1}\"{1}\w+\"{1}' - result = re.findall(id_regex, str(response.content)) - if len(result) != 1: - # replace with log and better message? 
- print('User ID was not found (this is a BUG!!!)') - return None - return json.loads('{'+result[0]+'}')['id'] + result = response.json() + user_id = result.get("id", None) + + return user_id def set_workspace(self, name=None): if name is None: - name = os.environ.get('CLOCKIFY_WORKSPACE', None) + name = os.environ.get("CLOCKIFY_WORKSPACE", None) self.workspace_name = name - self.workspace_id = None if self.workspace_name is None: return try: @@ -125,7 +111,7 @@ class ClockifyAPI: except Exception: result = False if result is not False: - self.workspace_id = result + self._workspace_id = result if self.master_parent is not None: self.master_parent.start_timer_check() return True @@ -139,6 +125,14 @@ class ClockifyAPI: return all_workspaces[name] return False + def set_user_id(self): + try: + user_id = self.get_user_id() + except Exception: + user_id = None + if user_id is not None: + self._user_id = user_id + def get_api_key(self): return self.secure_registry.get_item("api_key", None) @@ -146,11 +140,9 @@ class ClockifyAPI: self.secure_registry.set_item("api_key", api_key) def get_workspaces(self): - action_url = 'workspaces/' - time_check(self) + action_url = "workspaces/" response = requests.get( - CLOCKIFY_ENDPOINT + action_url, - headers=self.headers + CLOCKIFY_ENDPOINT + action_url, headers=self.headers ) return { workspace["name"]: workspace["id"] for workspace in response.json() @@ -159,27 +151,22 @@ class ClockifyAPI: def get_projects(self, workspace_id=None): if workspace_id is None: workspace_id = self.workspace_id - action_url = 'workspaces/{}/projects/'.format(workspace_id) - time_check(self) + action_url = f"workspaces/{workspace_id}/projects" response = requests.get( - CLOCKIFY_ENDPOINT + action_url, - headers=self.headers + CLOCKIFY_ENDPOINT + action_url, headers=self.headers ) - - return { - project["name"]: project["id"] for project in response.json() - } + if response.status_code != 403: + result = response.json() + return {project["name"]: project["id"] for project in result} def get_project_by_id(self, project_id, workspace_id=None): if workspace_id is None: workspace_id = self.workspace_id - action_url = 'workspaces/{}/projects/{}/'.format( + action_url = "workspaces/{}/projects/{}".format( workspace_id, project_id ) - time_check(self) response = requests.get( - CLOCKIFY_ENDPOINT + action_url, - headers=self.headers + CLOCKIFY_ENDPOINT + action_url, headers=self.headers ) return response.json() @@ -187,32 +174,24 @@ class ClockifyAPI: def get_tags(self, workspace_id=None): if workspace_id is None: workspace_id = self.workspace_id - action_url = 'workspaces/{}/tags/'.format(workspace_id) - time_check(self) + action_url = "workspaces/{}/tags".format(workspace_id) response = requests.get( - CLOCKIFY_ENDPOINT + action_url, - headers=self.headers + CLOCKIFY_ENDPOINT + action_url, headers=self.headers ) - return { - tag["name"]: tag["id"] for tag in response.json() - } + return {tag["name"]: tag["id"] for tag in response.json()} def get_tasks(self, project_id, workspace_id=None): if workspace_id is None: workspace_id = self.workspace_id - action_url = 'workspaces/{}/projects/{}/tasks/'.format( + action_url = "workspaces/{}/projects/{}/tasks".format( workspace_id, project_id ) - time_check(self) response = requests.get( - CLOCKIFY_ENDPOINT + action_url, - headers=self.headers + CLOCKIFY_ENDPOINT + action_url, headers=self.headers ) - return { - task["name"]: task["id"] for task in response.json() - } + return {task["name"]: task["id"] for task in response.json()} def 
get_workspace_id(self, workspace_name): all_workspaces = self.get_workspaces() @@ -236,48 +215,64 @@ class ClockifyAPI: return None return all_tasks[tag_name] - def get_task_id( - self, task_name, project_id, workspace_id=None - ): + def get_task_id(self, task_name, project_id, workspace_id=None): if workspace_id is None: workspace_id = self.workspace_id - all_tasks = self.get_tasks( - project_id, workspace_id - ) + all_tasks = self.get_tasks(project_id, workspace_id) if task_name not in all_tasks: return None return all_tasks[task_name] def get_current_time(self): - return str(datetime.datetime.utcnow().isoformat())+'Z' + return str(datetime.datetime.utcnow().isoformat()) + "Z" def start_time_entry( - self, description, project_id, task_id=None, tag_ids=[], - workspace_id=None, billable=True + self, + description, + project_id, + task_id=None, + tag_ids=None, + workspace_id=None, + user_id=None, + billable=True, ): # Workspace if workspace_id is None: workspace_id = self.workspace_id + # User ID + if user_id is None: + user_id = self._user_id + + # get running timer to check if we need to start it + current_timer = self.get_in_progress() # Check if is currently run another times and has same values - current = self.get_in_progress(workspace_id) - if current is not None: + # Do not restart the timer if it is already running for the current task + if current_timer: + current_timer_hierarchy = current_timer.get("description") + current_project_id = current_timer.get("projectId") + current_task_id = current_timer.get("taskId") if ( - current.get("description", None) == description and - current.get("projectId", None) == project_id and - current.get("taskId", None) == task_id + description == current_timer_hierarchy + and project_id == current_project_id + and task_id == current_task_id ): + self.log.info( + "Timer for the current project is already running" + ) self.bool_timer_run = True return self.bool_timer_run - self.finish_time_entry(workspace_id) + self.finish_time_entry() # Convert billable to strings if billable: - billable = 'true' + billable = "true" else: - billable = 'false' + billable = "false" # Rest API Action - action_url = 'workspaces/{}/timeEntries/'.format(workspace_id) + action_url = "workspaces/{}/user/{}/time-entries".format( + workspace_id, user_id + ) start = self.get_current_time() body = { "start": start, @@ -285,169 +280,135 @@ class ClockifyAPI: "description": description, "projectId": project_id, "taskId": task_id, - "tagIds": tag_ids + "tagIds": tag_ids, } - time_check(self) response = requests.post( - CLOCKIFY_ENDPOINT + action_url, - headers=self.headers, - json=body + CLOCKIFY_ENDPOINT + action_url, headers=self.headers, json=body ) - - success = False if response.status_code < 300: - success = True - return success + return True + return False - def get_in_progress(self, workspace_id=None): - if workspace_id is None: - workspace_id = self.workspace_id - action_url = 'workspaces/{}/timeEntries/inProgress'.format( - workspace_id - ) - time_check(self) - response = requests.get( - CLOCKIFY_ENDPOINT + action_url, - headers=self.headers - ) + def _get_current_timer_values(self, response): + if response is None: + return try: output = response.json() except json.decoder.JSONDecodeError: - output = None - return output + return None + if output and isinstance(output, list): + return output[0] + return None - def finish_time_entry(self, workspace_id=None): + def get_in_progress(self, user_id=None, workspace_id=None): if workspace_id is None: workspace_id =
self.workspace_id - current = self.get_in_progress(workspace_id) - if current is None: - return + if user_id is None: + user_id = self.user_id - current_id = current["id"] - action_url = 'workspaces/{}/timeEntries/{}'.format( - workspace_id, current_id + action_url = ( + f"workspaces/{workspace_id}/user/" + f"{user_id}/time-entries?in-progress=1" ) - body = { - "start": current["timeInterval"]["start"], - "billable": current["billable"], - "description": current["description"], - "projectId": current["projectId"], - "taskId": current["taskId"], - "tagIds": current["tagIds"], - "end": self.get_current_time() - } - time_check(self) - response = requests.put( - CLOCKIFY_ENDPOINT + action_url, - headers=self.headers, - json=body + response = requests.get( + CLOCKIFY_ENDPOINT + action_url, headers=self.headers + ) + return self._get_current_timer_values(response) + + def finish_time_entry(self, workspace_id=None, user_id=None): + if workspace_id is None: + workspace_id = self.workspace_id + if user_id is None: + user_id = self.user_id + current_timer = self.get_in_progress() + if not current_timer: + return + action_url = "workspaces/{}/user/{}/time-entries".format( + workspace_id, user_id + ) + body = {"end": self.get_current_time()} + response = requests.patch( + CLOCKIFY_ENDPOINT + action_url, headers=self.headers, json=body ) return response.json() - def get_time_entries( - self, workspace_id=None, quantity=10 - ): + def get_time_entries(self, workspace_id=None, user_id=None, quantity=10): if workspace_id is None: workspace_id = self.workspace_id - action_url = 'workspaces/{}/timeEntries/'.format(workspace_id) - time_check(self) + if user_id is None: + user_id = self.user_id + action_url = "workspaces/{}/user/{}/time-entries".format( + workspace_id, user_id ) response = requests.get( - CLOCKIFY_ENDPOINT + action_url, - headers=self.headers ) return response.json()[:quantity] - def remove_time_entry(self, tid, workspace_id=None): + def remove_time_entry(self, tid, workspace_id=None, user_id=None): if workspace_id is None: workspace_id = self.workspace_id + if user_id is None: + user_id = self.user_id - action_url = 'workspaces/{}/timeEntries/{}'.format( - workspace_id, tid + action_url = "workspaces/{}/user/{}/time-entries/{}".format( + workspace_id, user_id, tid ) - time_check(self) response = requests.delete( - CLOCKIFY_ENDPOINT + action_url, - headers=self.headers + CLOCKIFY_ENDPOINT + action_url, headers=self.headers ) return response.json() def add_project(self, name, workspace_id=None): if workspace_id is None: workspace_id = self.workspace_id - action_url = 'workspaces/{}/projects/'.format(workspace_id) + action_url = "workspaces/{}/projects".format(workspace_id) body = { "name": name, "clientId": "", "isPublic": "false", - "estimate": { - "estimate": 0, - "type": "AUTO" - }, + "estimate": {"estimate": 0, "type": "AUTO"}, "color": "#f44336", - "billable": "true" + "billable": "true", } - time_check(self) response = requests.post( - CLOCKIFY_ENDPOINT + action_url, - headers=self.headers, - json=body + CLOCKIFY_ENDPOINT + action_url, headers=self.headers, json=body ) return response.json() def add_workspace(self, name): action_url = "workspaces/" body = {"name": name} - time_check(self) response = requests.post( - CLOCKIFY_ENDPOINT + action_url, - headers=self.headers, - json=body + CLOCKIFY_ENDPOINT + action_url, headers=self.headers, json=body ) return response.json() - def add_task( - self, name, project_id, workspace_id=None - ): + def
add_task(self, name, project_id, workspace_id=None): if workspace_id is None: workspace_id = self.workspace_id - action_url = 'workspaces/{}/projects/{}/tasks/'.format( + action_url = "workspaces/{}/projects/{}/tasks".format( workspace_id, project_id ) - body = { - "name": name, - "projectId": project_id - } - time_check(self) + body = {"name": name, "projectId": project_id} response = requests.post( - CLOCKIFY_ENDPOINT + action_url, - headers=self.headers, - json=body + CLOCKIFY_ENDPOINT + action_url, headers=self.headers, json=body ) return response.json() def add_tag(self, name, workspace_id=None): if workspace_id is None: workspace_id = self.workspace_id - action_url = 'workspaces/{}/tags'.format(workspace_id) - body = { - "name": name - } - time_check(self) + action_url = "workspaces/{}/tags".format(workspace_id) + body = {"name": name} response = requests.post( - CLOCKIFY_ENDPOINT + action_url, - headers=self.headers, - json=body + CLOCKIFY_ENDPOINT + action_url, headers=self.headers, json=body ) return response.json() - def delete_project( - self, project_id, workspace_id=None - ): + def delete_project(self, project_id, workspace_id=None): if workspace_id is None: workspace_id = self.workspace_id - action_url = '/workspaces/{}/projects/{}'.format( + action_url = "/workspaces/{}/projects/{}".format( workspace_id, project_id ) - time_check(self) response = requests.delete( CLOCKIFY_ENDPOINT + action_url, headers=self.headers, @@ -455,12 +416,12 @@ class ClockifyAPI: return response.json() def convert_input( - self, entity_id, entity_name, mode='Workspace', project_id=None + self, entity_id, entity_name, mode="Workspace", project_id=None ): if entity_id is None: error = False error_msg = 'Missing information "{}"' - if mode.lower() == 'workspace': + if mode.lower() == "workspace": if entity_id is None and entity_name is None: if self.workspace_id is not None: entity_id = self.workspace_id @@ -471,14 +432,14 @@ class ClockifyAPI: else: if entity_id is None and entity_name is None: error = True - elif mode.lower() == 'project': + elif mode.lower() == "project": entity_id = self.get_project_id(entity_name) - elif mode.lower() == 'task': + elif mode.lower() == "task": entity_id = self.get_task_id( task_name=entity_name, project_id=project_id ) else: - raise TypeError('Unknown type') + raise TypeError("Unknown type") # Raise error if error: raise ValueError(error_msg.format(mode)) diff --git a/openpype/modules/clockify/clockify_module.py b/openpype/modules/clockify/clockify_module.py index 300d5576e2..200a268ad7 100644 --- a/openpype/modules/clockify/clockify_module.py +++ b/openpype/modules/clockify/clockify_module.py @@ -2,24 +2,13 @@ import os import threading import time -from openpype.modules import ( - OpenPypeModule, - ITrayModule, - IPluginPaths -) +from openpype.modules import OpenPypeModule, ITrayModule, IPluginPaths +from openpype.client import get_asset_by_name -from .clockify_api import ClockifyAPI -from .constants import ( - CLOCKIFY_FTRACK_USER_PATH, - CLOCKIFY_FTRACK_SERVER_PATH -) +from .constants import CLOCKIFY_FTRACK_USER_PATH, CLOCKIFY_FTRACK_SERVER_PATH -class ClockifyModule( - OpenPypeModule, - ITrayModule, - IPluginPaths -): +class ClockifyModule(OpenPypeModule, ITrayModule, IPluginPaths): name = "clockify" def initialize(self, modules_settings): @@ -33,18 +22,23 @@ class ClockifyModule( self.timer_manager = None self.MessageWidgetClass = None self.message_widget = None - - self.clockapi = ClockifyAPI(master_parent=self) + self._clockify_api = None # TimersManager 
attributes # - set `timers_manager_connector` only in `tray_init` self.timers_manager_connector = None self._timers_manager_module = None + @property + def clockify_api(self): + if self._clockify_api is None: + from .clockify_api import ClockifyAPI + + self._clockify_api = ClockifyAPI(master_parent=self) + return self._clockify_api + def get_global_environments(self): - return { - "CLOCKIFY_WORKSPACE": self.workspace_name - } + return {"CLOCKIFY_WORKSPACE": self.workspace_name} def tray_init(self): from .widgets import ClockifySettings, MessageWidget @@ -52,7 +46,7 @@ class ClockifyModule( self.MessageWidgetClass = MessageWidget self.message_widget = None - self.widget_settings = ClockifySettings(self.clockapi) + self.widget_settings = ClockifySettings(self.clockify_api) self.widget_settings_required = None self.thread_timer_check = None @@ -61,7 +55,7 @@ class ClockifyModule( self.bool_api_key_set = False self.bool_workspace_set = False self.bool_timer_run = False - self.bool_api_key_set = self.clockapi.set_api() + self.bool_api_key_set = self.clockify_api.set_api() # Define itself as TimersManager connector self.timers_manager_connector = self @@ -71,12 +65,11 @@ class ClockifyModule( self.show_settings() return - self.bool_workspace_set = self.clockapi.workspace_id is not None + self.bool_workspace_set = self.clockify_api.workspace_id is not None if self.bool_workspace_set is False: return self.start_timer_check() - self.set_menu_visibility() def tray_exit(self, *_a, **_kw): @@ -85,23 +78,19 @@ class ClockifyModule( def get_plugin_paths(self): """Implementaton of IPluginPaths to get plugin paths.""" actions_path = os.path.join( - os.path.dirname(os.path.abspath(__file__)), - "launcher_actions" + os.path.dirname(os.path.abspath(__file__)), "launcher_actions" ) - return { - "actions": [actions_path] - } + return {"actions": [actions_path]} def get_ftrack_event_handler_paths(self): """Function for Ftrack module to add ftrack event handler paths.""" return { "user": [CLOCKIFY_FTRACK_USER_PATH], - "server": [CLOCKIFY_FTRACK_SERVER_PATH] + "server": [CLOCKIFY_FTRACK_SERVER_PATH], } def clockify_timer_stopped(self): self.bool_timer_run = False - # Call `ITimersManager` method self.timer_stopped() def start_timer_check(self): @@ -122,45 +111,44 @@ class ClockifyModule( def check_running(self): while self.bool_thread_check_running is True: bool_timer_run = False - if self.clockapi.get_in_progress() is not None: + if self.clockify_api.get_in_progress() is not None: bool_timer_run = True if self.bool_timer_run != bool_timer_run: if self.bool_timer_run is True: self.clockify_timer_stopped() elif self.bool_timer_run is False: - actual_timer = self.clockapi.get_in_progress() - if not actual_timer: + current_timer = self.clockify_api.get_in_progress() + if current_timer is None: + continue + current_proj_id = current_timer.get("projectId") + if not current_proj_id: continue - actual_proj_id = actual_timer["projectId"] - if not actual_proj_id: - continue - - project = self.clockapi.get_project_by_id(actual_proj_id) + project = self.clockify_api.get_project_by_id( + current_proj_id + ) if project and project.get("code") == 501: continue - project_name = project["name"] + project_name = project.get("name") - actual_timer_hierarchy = actual_timer["description"] - hierarchy_items = actual_timer_hierarchy.split("/") + current_timer_hierarchy = current_timer.get("description") + if not current_timer_hierarchy: + continue + hierarchy_items = current_timer_hierarchy.split("/") # Each pype timer must have at 
least 2 items! if len(hierarchy_items) < 2: continue + task_name = hierarchy_items[-1] hierarchy = hierarchy_items[:-1] - task_type = None - if len(actual_timer.get("tags", [])) > 0: - task_type = actual_timer["tags"][0].get("name") data = { "task_name": task_name, "hierarchy": hierarchy, "project_name": project_name, - "task_type": task_type } - # Call `ITimersManager` method self.timer_started(data) self.bool_timer_run = bool_timer_run @@ -184,6 +172,7 @@ class ClockifyModule( def tray_menu(self, parent_menu): # Menu for Tray App from qtpy import QtWidgets + menu = QtWidgets.QMenu("Clockify", parent_menu) menu.setProperty("submenu", "on") @@ -204,7 +193,9 @@ class ClockifyModule( parent_menu.addMenu(menu) def show_settings(self): - self.widget_settings.input_api_key.setText(self.clockapi.get_api_key()) + self.widget_settings.input_api_key.setText( + self.clockify_api.get_api_key() + ) self.widget_settings.show() def set_menu_visibility(self): @@ -218,72 +209,82 @@ class ClockifyModule( def timer_started(self, data): """Tell TimersManager that timer started.""" if self._timers_manager_module is not None: - self._timers_manager_module.timer_started(self._module.id, data) + self._timers_manager_module.timer_started(self.id, data) def timer_stopped(self): """Tell TimersManager that timer stopped.""" if self._timers_manager_module is not None: - self._timers_manager_module.timer_stopped(self._module.id) + self._timers_manager_module.timer_stopped(self.id) def stop_timer(self): """Called from TimersManager to stop timer.""" - self.clockapi.finish_time_entry() + self.clockify_api.finish_time_entry() - def start_timer(self, input_data): - """Called from TimersManager to start timer.""" - # If not api key is not entered then skip - if not self.clockapi.get_api_key(): - return - - actual_timer = self.clockapi.get_in_progress() - actual_timer_hierarchy = None - actual_project_id = None - if actual_timer is not None: - actual_timer_hierarchy = actual_timer.get("description") - actual_project_id = actual_timer.get("projectId") - - # Concatenate hierarchy and task to get description - desc_items = [val for val in input_data.get("hierarchy", [])] - desc_items.append(input_data["task_name"]) - description = "/".join(desc_items) - - # Check project existence - project_name = input_data["project_name"] - project_id = self.clockapi.get_project_id(project_name) + def _verify_project_exists(self, project_name): + project_id = self.clockify_api.get_project_id(project_name) if not project_id: - self.log.warning(( - "Project \"{}\" was not found in Clockify. Timer won't start." - ).format(project_name)) + self.log.warning( + 'Project "{}" was not found in Clockify. Timer won\'t start.' + .format(project_name) + ) if not self.MessageWidgetClass: return msg = ( - "Project \"{}\" is not" - " in Clockify Workspace \"{}\"." + 'Project "{}" is not' + ' in Clockify Workspace "{}".' "<br><br>Please inform your Project Manager." - ).format(project_name, str(self.clockapi.workspace_name)) + ).format(project_name, str(self.clockify_api.workspace_name)) self.message_widget = self.MessageWidgetClass( msg, "Clockify - Info Message" ) self.message_widget.closed.connect(self.on_message_widget_close) self.message_widget.show() + return False + return project_id + def start_timer(self, input_data): + """Called from TimersManager to start timer.""" + # If the API key is not entered then skip + if not self.clockify_api.get_api_key(): return - if ( - actual_timer is not None and - description == actual_timer_hierarchy and - project_id == actual_project_id - ): + task_name = input_data.get("task_name") + + # Concatenate hierarchy and task to get description + description_items = list(input_data.get("hierarchy", [])) + description_items.append(task_name) + description = "/".join(description_items) + + # Check project existence + project_name = input_data.get("project_name") + project_id = self._verify_project_exists(project_name) + if not project_id: return + # Setup timer tags tag_ids = [] - task_tag_id = self.clockapi.get_tag_id(input_data["task_type"]) + tag_name = input_data.get("task_type") + if not tag_name: + # no task_type found in the input data + # if the timer is restarted by idle time (bug?) + asset_name = input_data["hierarchy"][-1] + asset_doc = get_asset_by_name(project_name, asset_name) + task_info = asset_doc["data"]["tasks"][task_name] + tag_name = task_info.get("type", "") + if not tag_name: + self.log.info("No tag information found for the timer") + + task_tag_id = self.clockify_api.get_tag_id(tag_name) if task_tag_id is not None: tag_ids.append(task_tag_id) - self.clockapi.start_time_entry( - description, project_id, tag_ids=tag_ids + # Start timer + self.clockify_api.start_time_entry( + description, + project_id, + tag_ids=tag_ids, + workspace_id=self.clockify_api.workspace_id, + user_id=self.clockify_api.user_id, ) diff --git a/openpype/modules/clockify/constants.py b/openpype/modules/clockify/constants.py index 66f6cb899a..4574f91be1 100644 --- a/openpype/modules/clockify/constants.py +++ b/openpype/modules/clockify/constants.py @@ -9,4 +9,4 @@ CLOCKIFY_FTRACK_USER_PATH = os.path.join( ) ADMIN_PERMISSION_NAMES = ["WORKSPACE_OWN", "WORKSPACE_ADMIN"] -CLOCKIFY_ENDPOINT = "https://api.clockify.me/api/" +CLOCKIFY_ENDPOINT = "https://api.clockify.me/api/v1/" diff --git a/openpype/modules/clockify/ftrack/server/action_clockify_sync_server.py b/openpype/modules/clockify/ftrack/server/action_clockify_sync_server.py index c6b55947da..985cf49b97 100644 --- a/openpype/modules/clockify/ftrack/server/action_clockify_sync_server.py +++ b/openpype/modules/clockify/ftrack/server/action_clockify_sync_server.py @@ -4,7 +4,7 @@ from openpype_modules.ftrack.lib import ServerAction from openpype_modules.clockify.clockify_api import ClockifyAPI -class SyncClocifyServer(ServerAction): +class SyncClockifyServer(ServerAction): '''Synchronise project names and task types.''' identifier = "clockify.sync.server" @@ -14,12 +14,12 @@ class SyncClocifyServer(ServerAction): role_list = ["Pypeclub", "Administrator", "project Manager"] def __init__(self, *args, **kwargs): - super(SyncClocifyServer, self).__init__(*args, **kwargs) + super(SyncClockifyServer, self).__init__(*args, **kwargs) workspace_name = os.environ.get("CLOCKIFY_WORKSPACE") api_key = os.environ.get("CLOCKIFY_API_KEY") - self.clockapi = ClockifyAPI(api_key) - self.clockapi.set_workspace(workspace_name) + self.clockify_api =
ClockifyAPI(api_key) + self.clockify_api.set_workspace(workspace_name) if api_key is None: modified_key = "None" else: @@ -48,13 +48,16 @@ class SyncClocifyServer(ServerAction): return True def launch(self, session, entities, event): - if self.clockapi.workspace_id is None: + self.clockify_api.set_api() + if self.clockify_api.workspace_id is None: return { "success": False, "message": "Clockify Workspace or API key are not set!" } - if self.clockapi.validate_workspace_perm() is False: + if not self.clockify_api.validate_workspace_permissions( + self.clockify_api.workspace_id, self.clockify_api.user_id + ): return { "success": False, "message": "Missing permissions for this action!" @@ -88,9 +91,9 @@ class SyncClocifyServer(ServerAction): task_type["name"] for task_type in task_types ] try: - clockify_projects = self.clockapi.get_projects() + clockify_projects = self.clockify_api.get_projects() if project_name not in clockify_projects: - response = self.clockapi.add_project(project_name) + response = self.clockify_api.add_project(project_name) if "id" not in response: self.log.warning( "Project \"{}\" can't be created. Response: {}".format( @@ -105,7 +108,7 @@ class SyncClocifyServer(ServerAction): ).format(project_name) } - clockify_workspace_tags = self.clockapi.get_tags() + clockify_workspace_tags = self.clockify_api.get_tags() for task_type_name in task_type_names: if task_type_name in clockify_workspace_tags: self.log.debug( @@ -113,7 +116,7 @@ class SyncClocifyServer(ServerAction): ) continue - response = self.clockapi.add_tag(task_type_name) + response = self.clockify_api.add_tag(task_type_name) if "id" not in response: self.log.warning( "Task \"{}\" can't be created. Response: {}".format( @@ -138,4 +141,4 @@ class SyncClocifyServer(ServerAction): def register(session, **kw): - SyncClocifyServer(session).register() + SyncClockifyServer(session).register() diff --git a/openpype/modules/clockify/ftrack/user/action_clockify_sync_local.py b/openpype/modules/clockify/ftrack/user/action_clockify_sync_local.py index a430791906..0e8cf6bd37 100644 --- a/openpype/modules/clockify/ftrack/user/action_clockify_sync_local.py +++ b/openpype/modules/clockify/ftrack/user/action_clockify_sync_local.py @@ -3,7 +3,7 @@ from openpype_modules.ftrack.lib import BaseAction, statics_icon from openpype_modules.clockify.clockify_api import ClockifyAPI -class SyncClocifyLocal(BaseAction): +class SyncClockifyLocal(BaseAction): '''Synchronise project names and task types.''' #: Action identifier. @@ -18,9 +18,9 @@ class SyncClocifyLocal(BaseAction): icon = statics_icon("app_icons", "clockify-white.png") def __init__(self, *args, **kwargs): - super(SyncClocifyLocal, self).__init__(*args, **kwargs) + super(SyncClockifyLocal, self).__init__(*args, **kwargs) #: CLockifyApi - self.clockapi = ClockifyAPI() + self.clockify_api = ClockifyAPI() def discover(self, session, entities, event): if ( @@ -31,14 +31,18 @@ class SyncClocifyLocal(BaseAction): return False def launch(self, session, entities, event): - self.clockapi.set_api() - if self.clockapi.workspace_id is None: + self.clockify_api.set_api() + if self.clockify_api.workspace_id is None: return { "success": False, "message": "Clockify Workspace or API key are not set!" } - if self.clockapi.validate_workspace_perm() is False: + if ( + self.clockify_api.validate_workspace_permissions( + self.clockify_api.workspace_id, self.clockify_api.user_id) + is False + ): return { "success": False, "message": "Missing permissions for this action!" 
@@ -74,9 +78,9 @@ class SyncClocifyLocal(BaseAction): task_type["name"] for task_type in task_types ] try: - clockify_projects = self.clockapi.get_projects() + clockify_projects = self.clockify_api.get_projects() if project_name not in clockify_projects: - response = self.clockapi.add_project(project_name) + response = self.clockify_api.add_project(project_name) if "id" not in response: self.log.warning( "Project \"{}\" can't be created. Response: {}".format( @@ -91,7 +95,7 @@ ).format(project_name) } - clockify_workspace_tags = self.clockapi.get_tags() + clockify_workspace_tags = self.clockify_api.get_tags() for task_type_name in task_type_names: if task_type_name in clockify_workspace_tags: self.log.debug( @@ -99,7 +103,7 @@ ) continue - response = self.clockapi.add_tag(task_type_name) + response = self.clockify_api.add_tag(task_type_name) if "id" not in response: self.log.warning( "Task \"{}\" can't be created. Response: {}".format( @@ -121,4 +125,4 @@ def register(session, **kw): - SyncClocifyLocal(session).register() + SyncClockifyLocal(session).register() diff --git a/openpype/modules/clockify/launcher_actions/ClockifyStart.py b/openpype/modules/clockify/launcher_actions/ClockifyStart.py index 7663aecc31..4a653c1b8d 100644 --- a/openpype/modules/clockify/launcher_actions/ClockifyStart.py +++ b/openpype/modules/clockify/launcher_actions/ClockifyStart.py @@ -6,9 +6,9 @@ from openpype_modules.clockify.clockify_api import ClockifyAPI class ClockifyStart(LauncherAction): name = "clockify_start_timer" label = "Clockify - Start Timer" - icon = "clockify_icon" + icon = "app_icons/clockify.png" order = 500 - clockapi = ClockifyAPI() + clockify_api = ClockifyAPI() def is_compatible(self, session): """Return whether the action is compatible with the session""" @@ -17,23 +17,39 @@ class ClockifyStart(LauncherAction): return False def process(self, session, **kwargs): + self.clockify_api.set_api() + user_id = self.clockify_api.user_id + workspace_id = self.clockify_api.workspace_id project_name = session["AVALON_PROJECT"] asset_name = session["AVALON_ASSET"] task_name = session["AVALON_TASK"] - description = asset_name - asset_doc = get_asset_by_name( - project_name, asset_name, fields=["data.parents"] - ) - if asset_doc is not None: - desc_items = asset_doc.get("data", {}).get("parents", []) - desc_items.append(asset_name) - desc_items.append(task_name) - description = "/".join(desc_items) - project_id = self.clockapi.get_project_id(project_name) - tag_ids = [] - tag_ids.append(self.clockapi.get_tag_id(task_name)) - self.clockapi.start_time_entry( - description, project_id, tag_ids=tag_ids + # fetch asset doc + asset_doc = get_asset_by_name(project_name, asset_name) + + # get task type to fill the timer tag + task_info = asset_doc["data"]["tasks"][task_name] + task_type = task_info["type"] + + # default description, extended below when the asset has a hierarchy + description = "/".join([asset_name, task_name]) + # check if the task has hierarchy and fill the description + parents_data = asset_doc["data"] + if parents_data is not None: + description_items = parents_data.get("parents", []) + description_items.append(asset_name) + description_items.append(task_name) + description = "/".join(description_items) + + project_id = self.clockify_api.get_project_id( + project_name, workspace_id + ) + tag_ids = [] + tag_name = task_type + tag_ids.append(self.clockify_api.get_tag_id(tag_name, workspace_id)) + self.clockify_api.start_time_entry( + description, + project_id, + tag_ids=tag_ids, + workspace_id=workspace_id,
user_id=user_id, ) diff --git a/openpype/modules/clockify/launcher_actions/ClockifySync.py b/openpype/modules/clockify/launcher_actions/ClockifySync.py index c346a1b4f6..cbd2519a04 100644 --- a/openpype/modules/clockify/launcher_actions/ClockifySync.py +++ b/openpype/modules/clockify/launcher_actions/ClockifySync.py @@ -3,20 +3,39 @@ from openpype_modules.clockify.clockify_api import ClockifyAPI from openpype.pipeline import LauncherAction -class ClockifySync(LauncherAction): +class ClockifyPermissionsCheckFailed(Exception): + """Timer start failed due to user permissions check. + Message should be self-explanatory as traceback won't be shown. + """ + pass + + +class ClockifySync(LauncherAction): name = "sync_to_clockify" label = "Sync to Clockify" - icon = "clockify_white_icon" + icon = "app_icons/clockify-white.png" order = 500 - clockapi = ClockifyAPI() - have_permissions = clockapi.validate_workspace_perm() + clockify_api = ClockifyAPI() def is_compatible(self, session): - """Return whether the action is compatible with the session""" - return self.have_permissions + """Check if there are any projects to sync.""" + try: + next(get_projects()) + return True + except StopIteration: + return False def process(self, session, **kwargs): + self.clockify_api.set_api() + workspace_id = self.clockify_api.workspace_id + user_id = self.clockify_api.user_id + if not self.clockify_api.validate_workspace_permissions( + workspace_id, user_id + ): + raise ClockifyPermissionsCheckFailed( + "Current Clockify user is missing permissions for this action!" + ) project_name = session.get("AVALON_PROJECT") or "" projects_to_sync = [] @@ -30,24 +49,28 @@ task_types = project["config"]["tasks"].keys() projects_info[project["name"]] = task_types - clockify_projects = self.clockapi.get_projects() + clockify_projects = self.clockify_api.get_projects(workspace_id) for project_name, task_types in projects_info.items(): if project_name in clockify_projects: continue - response = self.clockapi.add_project(project_name) + response = self.clockify_api.add_project( + project_name, workspace_id + ) if "id" not in response: - self.log.error("Project {} can't be created".format( - project_name - )) + self.log.error( + "Project {} can't be created".format(project_name) + ) continue - clockify_workspace_tags = self.clockapi.get_tags() + clockify_workspace_tags = self.clockify_api.get_tags(workspace_id) for task_type in task_types: if task_type not in clockify_workspace_tags: - response = self.clockapi.add_tag(task_type) + response = self.clockify_api.add_tag( + task_type, workspace_id + ) if "id" not in response: - self.log.error('Task {} can\'t be created'.format( - task_type - )) + self.log.error( + "Task {} can't be created".format(task_type) + ) continue diff --git a/openpype/modules/clockify/widgets.py b/openpype/modules/clockify/widgets.py index 122b6212c0..8c28f38b6e 100644 --- a/openpype/modules/clockify/widgets.py +++ b/openpype/modules/clockify/widgets.py @@ -77,15 +77,15 @@ class MessageWidget(QtWidgets.QWidget): class ClockifySettings(QtWidgets.QWidget): - SIZE_W = 300 + SIZE_W = 500 SIZE_H = 130 loginSignal = QtCore.Signal(object, object, object) - def __init__(self, clockapi, optional=True): + def __init__(self, clockify_api, optional=True): super(ClockifySettings, self).__init__() - self.clockapi = clockapi + self.clockify_api = clockify_api self.optional = optional self.validated = False @@ -162,17 +162,17 @@ class ClockifySettings(QtWidgets.QWidget): def click_ok(self):
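"""Validate the entered API key; store and apply it when it is valid."""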
api_key = self.input_api_key.text().strip() if self.optional is True and api_key == '': - self.clockapi.save_api_key(None) - self.clockapi.set_api(api_key) + self.clockify_api.save_api_key(None) + self.clockify_api.set_api(api_key) self.validated = False self._close_widget() return - validation = self.clockapi.validate_api_key(api_key) + validation = self.clockify_api.validate_api_key(api_key) if validation: - self.clockapi.save_api_key(api_key) - self.clockapi.set_api(api_key) + self.clockify_api.save_api_key(api_key) + self.clockify_api.set_api(api_key) self.validated = True self._close_widget() else: diff --git a/openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py b/openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py index 038ee4fc03..bcf0850768 100644 --- a/openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_celaction_deadline.py @@ -106,7 +106,7 @@ class CelactionSubmitDeadline(pyblish.api.InstancePlugin): # define chunk and priority chunk_size = instance.context.data.get("chunk") - if chunk_size == 0: + if not chunk_size: chunk_size = self.deadline_chunk_size # search for %02d pattern in name, and padding number diff --git a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py index 15025e47f2..062732c059 100644 --- a/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_maya_deadline.py @@ -422,6 +422,7 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): assembly_job_info.Priority = instance.data.get( "tile_priority", self.tile_priority ) + assembly_job_info.TileJob = False pool = instance.context.data["project_settings"]["deadline"] pool = pool["publish"]["ProcessSubmittedJobOnFarm"]["deadline_pool"] @@ -450,15 +451,14 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): frame_assembly_job_info.ExtraInfo[0] = file_hash frame_assembly_job_info.ExtraInfo[1] = file frame_assembly_job_info.JobDependencies = tile_job_id + frame_assembly_job_info.Frames = frame # write assembly job config files - now = datetime.now() - config_file = os.path.join( output_dir, "{}_config_{}.txt".format( os.path.splitext(file)[0], - now.strftime("%Y_%m_%d_%H_%M_%S") + datetime.now().strftime("%Y_%m_%d_%H_%M_%S") ) ) try: @@ -469,6 +469,8 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): self.log.warning("Path is unreachable: " "`{}`".format(output_dir)) + assembly_plugin_info["ConfigFile"] = config_file + with open(config_file, "w") as cf: print("TileCount={}".format(tiles_count), file=cf) print("ImageFileName={}".format(file), file=cf) @@ -477,25 +479,30 @@ class MayaSubmitDeadline(abstract_submit_deadline.AbstractSubmitDeadline): print("ImageHeight={}".format( instance.data.get("resolutionHeight")), file=cf) + with open(config_file, "a") as cf: + # Need to reverse the order of the y tiles, because image + # coordinates are calculated from bottom left corner. 
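# A small worked example of that ordering (illustrative, assuming
# tilesY=3): _format_tiles() iterates rows as [1, 2, 3] and, with
# reversed_y=True, labels them [3, 2, 1], so the first row written
# into the config corresponds to the top row of the image.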
tiles = _format_tiles( file, 0, instance.data.get("tilesX"), instance.data.get("tilesY"), instance.data.get("resolutionWidth"), instance.data.get("resolutionHeight"), - payload_plugin_info["OutputFilePrefix"] + payload_plugin_info["OutputFilePrefix"], + reversed_y=True )[1] for k, v in sorted(tiles.items()): print("{}={}".format(k, v), file=cf) - payload = self.assemble_payload( - job_info=frame_assembly_job_info, - plugin_info=assembly_plugin_info.copy(), - # todo: aux file transfers don't work with deadline webservice - # add config file as job auxFile - # aux_files=[config_file] + assembly_payloads.append( + self.assemble_payload( + job_info=frame_assembly_job_info, + plugin_info=assembly_plugin_info.copy(), + # This would fail if the client machine and webservice are + # using different storage paths. + aux_files=[config_file] + ) ) - assembly_payloads.append(payload) # Submit assembly jobs assembly_job_ids = [] @@ -505,6 +512,7 @@ "submitting assembly job {} of {}".format(i + 1, num_assemblies) ) + self.log.info(payload) assembly_job_id = self.submit(payload) assembly_job_ids.append(assembly_job_id) @@ -764,8 +772,15 @@ def _format_tiles( - filename, index, tiles_x, tiles_y, - width, height, prefix): + filename, + index, + tiles_x, + tiles_y, + width, + height, + prefix, + reversed_y=False +): """Generate tile entries for Deadline tile job. Returns two dictionaries - one that can be directly used in Deadline @@ -802,6 +817,7 @@ def _format_tiles( width (int): Width resolution of final image. height (int): Height resolution of final image. prefix (str): Image prefix. + reversed_y (bool): Reverses the order of the y tiles.
Returns: (dict, dict): Tuple of two dictionaries - first can be used to @@ -824,12 +840,16 @@ def _format_tiles( cfg["TilesCropped"] = "False" tile = 0 + range_y = range(1, tiles_y + 1) + reversed_y_range = list(reversed(range_y)) for tile_x in range(1, tiles_x + 1): - for tile_y in reversed(range(1, tiles_y + 1)): + for i, tile_y in enumerate(range_y): + tile_y_index = tile_y + if reversed_y: + tile_y_index = reversed_y_range[i] + tile_prefix = "_tile_{}x{}_{}x{}_".format( - tile_x, tile_y, - tiles_x, - tiles_y + tile_x, tile_y_index, tiles_x, tiles_y ) new_filename = "{}/{}{}".format( @@ -844,11 +864,14 @@ def _format_tiles( right = (tile_x * w_space) - 1 # Job info - out["JobInfo"]["OutputFilename{}Tile{}".format(index, tile)] = new_filename # noqa: E501 + key = "OutputFilename{}".format(index) + out["JobInfo"][key] = new_filename # Plugin Info - out["PluginInfo"]["RegionPrefix{}".format(str(tile))] = \ - "/{}".format(tile_prefix).join(prefix.rsplit("/", 1)) + key = "RegionPrefix{}".format(str(tile)) + out["PluginInfo"][key] = "/{}".format( + tile_prefix + ).join(prefix.rsplit("/", 1)) out["PluginInfo"]["RegionTop{}".format(tile)] = top out["PluginInfo"]["RegionBottom{}".format(tile)] = bottom out["PluginInfo"]["RegionLeft{}".format(tile)] = left diff --git a/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py b/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py index aff34c7e4a..cc069cf51a 100644 --- a/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py +++ b/openpype/modules/deadline/plugins/publish/submit_nuke_deadline.py @@ -32,7 +32,7 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin, label = "Submit Nuke to Deadline" order = pyblish.api.IntegratorOrder + 0.1 hosts = ["nuke"] - families = ["render", "prerender.farm"] + families = ["render", "prerender"] optional = True targets = ["local"] @@ -80,6 +80,10 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin, ] def process(self, instance): + if not instance.data.get("farm"): + self.log.info("Skipping local instance.") + return + instance.data["attributeValues"] = self.get_attr_values_from_data( instance.data) @@ -168,10 +172,10 @@ class NukeSubmitDeadline(pyblish.api.InstancePlugin, resp.json()["_id"]) # redefinition of families - if "render.farm" in families: + if "render" in instance.data["family"]: instance.data['family'] = 'write' families.insert(0, "render2d") - elif "prerender.farm" in families: + elif "prerender" in instance.data["family"]: instance.data['family'] = 'write' families.insert(0, "prerender") instance.data["families"] = families diff --git a/openpype/modules/deadline/plugins/publish/submit_publish_job.py b/openpype/modules/deadline/plugins/publish/submit_publish_job.py index 53c09ad22f..0d0698c21f 100644 --- a/openpype/modules/deadline/plugins/publish/submit_publish_job.py +++ b/openpype/modules/deadline/plugins/publish/submit_publish_job.py @@ -756,6 +756,10 @@ class ProcessSubmittedJobOnFarm(pyblish.api.InstancePlugin): instance (pyblish.api.Instance): Instance data. 
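        Note:
            Instances without "farm" in their data are skipped, such
            instances were published locally and have no farm job to process.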
""" + if not instance.data.get("farm"): + self.log.info("Skipping local instance.") + return + data = instance.data.copy() context = instance.context self.context = context diff --git a/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py b/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py index 78eed17c98..05afa5080d 100644 --- a/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py +++ b/openpype/modules/deadline/plugins/publish/validate_deadline_pools.py @@ -21,6 +21,10 @@ class ValidateDeadlinePools(OptionalPyblishPluginMixin, optional = True def process(self, instance): + if not instance.data.get("farm"): + self.log.info("Skipping local instance.") + return + # get default deadline webservice url from deadline module deadline_url = instance.context.data["defaultDeadline"] self.log.info("deadline_url::{}".format(deadline_url)) diff --git a/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py b/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py index f34f71d213..ff4be677e7 100644 --- a/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py +++ b/openpype/modules/deadline/plugins/publish/validate_expected_and_rendered_files.py @@ -68,8 +68,15 @@ class ValidateExpectedFiles(pyblish.api.InstancePlugin): # files to be in the folder that we might not want to use. missing = expected_files - existing_files if missing: - raise RuntimeError("Missing expected files: {}".format( - sorted(missing))) + raise RuntimeError( + "Missing expected files: {}\n" + "Expected files: {}\n" + "Existing files: {}".format( + sorted(missing), + sorted(expected_files), + sorted(existing_files) + ) + ) def _get_frame_list(self, original_job_id): """Returns list of frame ranges from all render job. diff --git a/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py b/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py index 861f16518c..b51daffbc8 100644 --- a/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py +++ b/openpype/modules/deadline/repository/custom/plugins/OpenPypeTileAssembler/OpenPypeTileAssembler.py @@ -16,6 +16,10 @@ from Deadline.Scripting import ( FileUtils, RepositoryUtils, SystemUtils) +version_major = 1 +version_minor = 0 +version_patch = 0 +version_string = "{}.{}.{}".format(version_major, version_minor, version_patch) STRING_TAGS = { "format" } @@ -264,6 +268,7 @@ class OpenPypeTileAssembler(DeadlinePlugin): def initialize_process(self): """Initialization.""" + self.LogInfo("Plugin version: {}".format(version_string)) self.SingleFramesOnly = True self.StdoutHandling = True self.renderer = self.GetPluginInfoEntryWithDefault( @@ -320,12 +325,7 @@ class OpenPypeTileAssembler(DeadlinePlugin): output_file = data["ImageFileName"] output_file = RepositoryUtils.CheckPathMapping(output_file) output_file = self.process_path(output_file) - """ - _, ext = os.path.splitext(output_file) - if "exr" not in ext: - self.FailRender( - "[{}] Only EXR format is supported for now.".format(ext)) - """ + tile_info = [] for tile in range(int(data["TileCount"])): tile_info.append({ @@ -336,11 +336,6 @@ class OpenPypeTileAssembler(DeadlinePlugin): "width": int(data["Tile{}Width".format(tile)]) }) - # FFMpeg doesn't support tile coordinates at the moment. 
- # arguments = self.tile_completer_ffmpeg_args( - # int(data["ImageWidth"]), int(data["ImageHeight"]), - # tile_info, output_file) - arguments = self.tile_oiio_args( int(data["ImageWidth"]), int(data["ImageHeight"]), tile_info, output_file) @@ -362,20 +357,20 @@ class OpenPypeTileAssembler(DeadlinePlugin): def pre_render_tasks(self): """Load config file and do remapping.""" self.LogInfo("OpenPype Tile Assembler starting...") - scene_filename = self.GetDataFilename() + config_file = self.GetPluginInfoEntry("ConfigFile") temp_scene_directory = self.CreateTempDirectory( "thread" + str(self.GetThreadNumber())) - temp_scene_filename = Path.GetFileName(scene_filename) + temp_scene_filename = Path.GetFileName(config_file) self.config_file = Path.Combine( temp_scene_directory, temp_scene_filename) if SystemUtils.IsRunningOnWindows(): RepositoryUtils.CheckPathMappingInFileAndReplaceSeparator( - scene_filename, self.config_file, "/", "\\") + config_file, self.config_file, "/", "\\") else: RepositoryUtils.CheckPathMappingInFileAndReplaceSeparator( - scene_filename, self.config_file, "\\", "/") + config_file, self.config_file, "\\", "/") os.chmod(self.config_file, os.stat(self.config_file).st_mode) def post_render_tasks(self): @@ -459,75 +454,3 @@ class OpenPypeTileAssembler(DeadlinePlugin): args.append(output_path) return args - - def tile_completer_ffmpeg_args( - self, output_width, output_height, tiles_info, output_path): - """Generate ffmpeg arguments for tile assembly. - - Expected inputs are tiled images. - - Args: - output_width (int): Width of output image. - output_height (int): Height of output image. - tiles_info (list): List of tile items, each item must be - dictionary with `filepath`, `pos_x` and `pos_y` keys - representing path to file and x, y coordinates on output - image where top-left point of tile item should start. - output_path (str): Path to file where should be output stored. - - Returns: - (list): ffmpeg arguments. 
- - """ - previous_name = "base" - ffmpeg_args = [] - filter_complex_strs = [] - - filter_complex_strs.append("nullsrc=size={}x{}[{}]".format( - output_width, output_height, previous_name - )) - - new_tiles_info = {} - for idx, tile_info in enumerate(tiles_info): - # Add input and store input index - filepath = tile_info["filepath"] - ffmpeg_args.append("-i \"{}\"".format(filepath.replace("\\", "/"))) - - # Prepare initial filter complex arguments - index_name = "input{}".format(idx) - filter_complex_strs.append( - "[{}]setpts=PTS-STARTPTS[{}]".format(idx, index_name) - ) - tile_info["index"] = idx - new_tiles_info[index_name] = tile_info - - # Set frames to 1 - ffmpeg_args.append("-frames 1") - - # Concatenation filter complex arguments - global_index = 1 - total_index = len(new_tiles_info) - for index_name, tile_info in new_tiles_info.items(): - item_str = ( - "[{previous_name}][{index_name}]overlay={pos_x}:{pos_y}" - ).format( - previous_name=previous_name, - index_name=index_name, - pos_x=tile_info["pos_x"], - pos_y=tile_info["pos_y"] - ) - new_previous = "tmp{}".format(global_index) - if global_index != total_index: - item_str += "[{}]".format(new_previous) - filter_complex_strs.append(item_str) - previous_name = new_previous - global_index += 1 - - joined_parts = ";".join(filter_complex_strs) - filter_complex_str = "-filter_complex \"{}\"".format(joined_parts) - - ffmpeg_args.append(filter_complex_str) - ffmpeg_args.append("-y") - ffmpeg_args.append("\"{}\"".format(output_path)) - - return ffmpeg_args diff --git a/openpype/modules/example_addons/example_addon/addon.py b/openpype/modules/example_addons/example_addon/addon.py index ead647b41d..be1d3ff920 100644 --- a/openpype/modules/example_addons/example_addon/addon.py +++ b/openpype/modules/example_addons/example_addon/addon.py @@ -44,7 +44,7 @@ class AddonSettingsDef(JsonFilesSettingsDef): class ExampleAddon(OpenPypeAddOn, IPluginPaths, ITrayAction): - """This Addon has defined it's settings and interface. + """This Addon has defined its settings and interface. This example has system settings with an enabled option. 
And use few other interfaces: diff --git a/openpype/modules/ftrack/event_handlers_user/action_applications.py b/openpype/modules/ftrack/event_handlers_user/action_applications.py index 102f04c956..30399b463d 100644 --- a/openpype/modules/ftrack/event_handlers_user/action_applications.py +++ b/openpype/modules/ftrack/event_handlers_user/action_applications.py @@ -124,6 +124,11 @@ class AppplicationsAction(BaseAction): if not avalon_project_apps: return False + settings = self.get_project_settings_from_event( + event, avalon_project_doc["name"]) + + only_available = settings["applications"]["only_available"] + items = [] for app_name in avalon_project_apps: app = self.application_manager.applications.get(app_name) @@ -133,6 +138,10 @@ if app.group.name in CUSTOM_LAUNCH_APP_GROUPS: continue + # Skip applications without valid executables + if only_available and not app.find_executable(): + continue + app_icon = app.icon if app_icon and self.icon_url: try: diff --git a/openpype/modules/ftrack/plugins/publish/collect_ftrack_family.py b/openpype/modules/ftrack/plugins/publish/collect_ftrack_family.py index 576a7d36c4..97815f490f 100644 --- a/openpype/modules/ftrack/plugins/publish/collect_ftrack_family.py +++ b/openpype/modules/ftrack/plugins/publish/collect_ftrack_family.py @@ -7,23 +7,22 @@ Provides: """ import pyblish.api -from openpype.pipeline import legacy_io from openpype.lib import filter_profiles class CollectFtrackFamily(pyblish.api.InstancePlugin): + """Adds explicitly 'ftrack' to families to upload instance to FTrack. + + Uses selection by combination of hosts/families/tasks names via + profiles resolution. + + Triggered everywhere, checks instance against configured profiles. + + Checks advanced filtering which works on 'families' not on main + 'family', as some variants dynamically resolve addition of ftrack + based on 'families' (editorial drives it by presence of 'review') """ - Adds explicitly 'ftrack' to families to upload instance to FTrack. - Uses selection by combination of hosts/families/tasks names via - profiles resolution. - - Triggered everywhere, checks instance against configured. - - Checks advanced filtering which works on 'families' not on main - 'family', as some variants dynamically resolves addition of ftrack - based on 'families' (editorial drives it by presence of 'review') - """ label = "Collect Ftrack Family" order = pyblish.api.CollectorOrder + 0.4990 @@ -34,68 +33,64 @@ self.log.warning("No profiles present for adding Ftrack family") return - add_ftrack_family = False - task_name = instance.data.get("task", - legacy_io.Session["AVALON_TASK"]) - host_name = legacy_io.Session["AVALON_APP"] + host_name = instance.context.data["hostName"] family = instance.data["family"] + task_name = instance.data.get("task") filtering_criteria = { "hosts": host_name, "families": family, "tasks": task_name } - profile = filter_profiles(self.profiles, filtering_criteria, - logger=self.log) + profile = filter_profiles( + self.profiles, + filtering_criteria, + logger=self.log + ) + + add_ftrack_family = False + families = instance.data.setdefault("families", []) if profile: - families = instance.data.get("families") add_ftrack_family = profile["add_ftrack_family"] - additional_filters = profile.get("advanced_filtering") if additional_filters: - self.log.info("'{}' families used for additional filtering".
- format(families)) + families_set = set(families) | {family} + self.log.info( + "'{}' families used for additional filtering".format( + families_set)) add_ftrack_family = self._get_add_ftrack_f_from_addit_filters( additional_filters, - families, + families_set, add_ftrack_family ) - if add_ftrack_family: - self.log.debug("Adding ftrack family for '{}'". - format(instance.data.get("family"))) + result_str = "Not adding" + if add_ftrack_family: + result_str = "Adding" + if "ftrack" not in families: + families.append("ftrack") - if families: - if "ftrack" not in families: - instance.data["families"].append("ftrack") - else: - instance.data["families"] = ["ftrack"] - - result_str = "Adding" - if not add_ftrack_family: - result_str = "Not adding" self.log.info("{} 'ftrack' family for instance with '{}'".format( result_str, family )) - def _get_add_ftrack_f_from_addit_filters(self, - additional_filters, - families, - add_ftrack_family): - """ - Compares additional filters - working on instance's families. + def _get_add_ftrack_f_from_addit_filters( + self, additional_filters, families, add_ftrack_family + ): + """Compares additional filters - working on instance's families. - Triggered for more detailed filtering when main family matches, - but content of 'families' actually matter. - (For example 'review' in 'families' should result in adding to - Ftrack) + Triggered for more detailed filtering when main family matches, + but content of 'families' actually matter. + (For example 'review' in 'families' should result in adding to + Ftrack) - Args: - additional_filters (dict) - from Setting - families (list) - subfamilies - add_ftrack_family (bool) - add ftrack to families if True + Args: + additional_filters (dict) - from Setting + families (set[str]) - subfamilies + add_ftrack_family (bool) - add ftrack to families if True """ + override_filter = None override_filter_value = -1 for additional_filter in additional_filters: diff --git a/openpype/modules/kitsu/plugins/publish/collect_kitsu_entities.py b/openpype/modules/kitsu/plugins/publish/collect_kitsu_entities.py index a0bd2b305b..4ea8135620 100644 --- a/openpype/modules/kitsu/plugins/publish/collect_kitsu_entities.py +++ b/openpype/modules/kitsu/plugins/publish/collect_kitsu_entities.py @@ -29,7 +29,7 @@ class CollectKitsuEntities(pyblish.api.ContextPlugin): if not zou_asset_data: raise ValueError("Zou asset data not found in OpenPype!") - task_name = instance.data.get("task") + task_name = instance.data.get("task", context.data.get("task")) if not task_name: continue diff --git a/openpype/modules/kitsu/plugins/publish/integrate_kitsu_note.py b/openpype/modules/kitsu/plugins/publish/integrate_kitsu_note.py index 6702cbe7aa..6be14b3bdf 100644 --- a/openpype/modules/kitsu/plugins/publish/integrate_kitsu_note.py +++ b/openpype/modules/kitsu/plugins/publish/integrate_kitsu_note.py @@ -1,6 +1,7 @@ # -*- coding: utf-8 -*- import gazu import pyblish.api +import re class IntegrateKitsuNote(pyblish.api.ContextPlugin): @@ -9,27 +10,69 @@ class IntegrateKitsuNote(pyblish.api.ContextPlugin): order = pyblish.api.IntegratorOrder label = "Kitsu Note and Status" families = ["render", "kitsu"] + + # status settings set_status_note = False note_status_shortname = "wfa" + status_conditions = list() + + # comment settings + custom_comment_template = { + "enabled": False, + "comment_template": "{comment}", + } + + def format_publish_comment(self, instance): + """Format the instance's publish comment + + Formats `instance.data` against the custom template. 
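        For example (illustrative values only): a template of
        "{family} - {comment}" with instance.data containing
        {"family": "render", "comment": "fixed lighting"} renders as
        "render - fixed lighting"; keys missing from instance.data are
        replaced with an empty string.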
+ """ + + def replace_missing_key(match): + """If key is not found in kwargs, set None instead""" + key = match.group(1) + if key not in instance.data: + self.log.warning( + "Key '{}' was not found in instance.data " + "and will be rendered as an empty string " + "in the comment".format(key) + ) + return "" + else: + return str(instance.data[key]) + + template = self.custom_comment_template["comment_template"] + pattern = r"\{([^}]*)\}" + return re.sub(pattern, replace_missing_key, template) def process(self, context): - # Get comment text body - publish_comment = context.data.get("comment") - if not publish_comment: - self.log.info("Comment is not set.") - - self.log.debug("Comment is `{}`".format(publish_comment)) - for instance in context: + # Check if instance is a review by checking its family + # Allow a match to primary family or any of families + families = set([instance.data["family"]] + + instance.data.get("families", [])) + if "review" not in families: + continue + kitsu_task = instance.data.get("kitsu_task") if kitsu_task is None: continue # Get note status, by default uses the task status for the note # if it is not specified in the configuration - note_status = kitsu_task["task_status"]["id"] + shortname = kitsu_task["task_status"]["short_name"].upper() + note_status = kitsu_task["task_status_id"] - if self.set_status_note: + # Check if any status condition is not met + allow_status_change = True + for status_cond in self.status_conditions: + condition = status_cond["condition"] == "equal" + match = status_cond["short_name"].upper() == shortname + if match and not condition or condition and not match: + allow_status_change = False + break + + if self.set_status_note and allow_status_change: kitsu_status = gazu.task.get_task_status_by_short_name( self.note_status_shortname ) @@ -42,11 +85,22 @@ class IntegrateKitsuNote(pyblish.api.ContextPlugin): "changed!".format(self.note_status_shortname) ) + # Get comment text body + publish_comment = instance.data.get("comment") + if self.custom_comment_template["enabled"]: + publish_comment = self.format_publish_comment(instance) + + if not publish_comment: + self.log.info("Comment is not set.") + else: + self.log.debug("Comment is `{}`".format(publish_comment)) + # Add comment to kitsu task - task_id = kitsu_task["id"] - self.log.debug("Add new note in taks id {}".format(task_id)) + self.log.debug( + "Add new note in tasks id {}".format(kitsu_task["id"]) + ) kitsu_comment = gazu.task.add_comment( - task_id, note_status, comment=publish_comment + kitsu_task, note_status, comment=publish_comment ) instance.data["kitsu_comment"] = kitsu_comment diff --git a/openpype/modules/kitsu/plugins/publish/integrate_kitsu_review.py b/openpype/modules/kitsu/plugins/publish/integrate_kitsu_review.py index 12482b5657..e05ff05f50 100644 --- a/openpype/modules/kitsu/plugins/publish/integrate_kitsu_review.py +++ b/openpype/modules/kitsu/plugins/publish/integrate_kitsu_review.py @@ -12,17 +12,17 @@ class IntegrateKitsuReview(pyblish.api.InstancePlugin): optional = True def process(self, instance): - task = instance.data["kitsu_task"]["id"] - comment = instance.data["kitsu_comment"]["id"] # Check comment has been created - if not comment: + comment_id = instance.data.get("kitsu_comment", {}).get("id") + if not comment_id: self.log.debug( "Comment not created, review not pushed to preview." 
) return # Add review representations as preview of comment + task_id = instance.data["kitsu_task"]["id"] for representation in instance.data.get("representations", []): # Skip if not tagged as review if "kitsureview" not in representation.get("tags", []): @@ -31,6 +31,6 @@ class IntegrateKitsuReview(pyblish.api.InstancePlugin): self.log.debug("Found review at: {}".format(review_path)) gazu.task.add_preview( - task, comment, review_path, normalize_movie=True + task_id, comment_id, review_path, normalize_movie=True ) self.log.info("Review upload on comment") diff --git a/openpype/modules/kitsu/utils/update_op_with_zou.py b/openpype/modules/kitsu/utils/update_op_with_zou.py index 4fa8cf9fdd..1f38648dfa 100644 --- a/openpype/modules/kitsu/utils/update_op_with_zou.py +++ b/openpype/modules/kitsu/utils/update_op_with_zou.py @@ -129,7 +129,7 @@ def update_op_assets( frame_out = frame_in + frames_duration - 1 else: frame_out = project_doc["data"].get("frameEnd", frame_in) - item_data["frameEnd"] = frame_out + item_data["frameEnd"] = int(frame_out) # Fps, fallback to project's value or default value (25.0) try: fps = float(item_data.get("fps")) @@ -147,33 +147,37 @@ def update_op_assets( item_data["resolutionWidth"] = int(match_res.group(1)) item_data["resolutionHeight"] = int(match_res.group(2)) else: - item_data["resolutionWidth"] = project_doc["data"].get( - "resolutionWidth" + item_data["resolutionWidth"] = int( + project_doc["data"].get("resolutionWidth") ) - item_data["resolutionHeight"] = project_doc["data"].get( - "resolutionHeight" + item_data["resolutionHeight"] = int( + project_doc["data"].get("resolutionHeight") ) # Properties that doesn't fully exist in Kitsu. # Guessing those property names below: # Pixel Aspect Ratio - item_data["pixelAspect"] = item_data.get( - "pixel_aspect", project_doc["data"].get("pixelAspect") + item_data["pixelAspect"] = float( + item_data.get( + "pixel_aspect", project_doc["data"].get("pixelAspect") + ) ) # Handle Start - item_data["handleStart"] = item_data.get( - "handle_start", project_doc["data"].get("handleStart") + item_data["handleStart"] = int( + item_data.get( + "handle_start", project_doc["data"].get("handleStart") + ) ) # Handle End - item_data["handleEnd"] = item_data.get( - "handle_end", project_doc["data"].get("handleEnd") + item_data["handleEnd"] = int( + item_data.get("handle_end", project_doc["data"].get("handleEnd")) ) # Clip In - item_data["clipIn"] = item_data.get( - "clip_in", project_doc["data"].get("clipIn") + item_data["clipIn"] = int( + item_data.get("clip_in", project_doc["data"].get("clipIn")) ) # Clip Out - item_data["clipOut"] = item_data.get( - "clip_out", project_doc["data"].get("clipOut") + item_data["clipOut"] = int( + item_data.get("clip_out", project_doc["data"].get("clipOut")) ) # Tasks diff --git a/openpype/modules/timers_manager/timers_manager.py b/openpype/modules/timers_manager/timers_manager.py index 0ba68285a4..43286f7da4 100644 --- a/openpype/modules/timers_manager/timers_manager.py +++ b/openpype/modules/timers_manager/timers_manager.py @@ -141,7 +141,9 @@ class TimersManager( signal_handler = SignalHandler(self) idle_manager = IdleManager() widget_user_idle = WidgetUserIdle(self) - widget_user_idle.set_countdown_start(self.time_show_message) + widget_user_idle.set_countdown_start( + self.time_stop_timer - self.time_show_message + ) idle_manager.signal_reset_timer.connect( widget_user_idle.reset_countdown diff --git a/openpype/pipeline/create/context.py b/openpype/pipeline/create/context.py index 
acc2bb054f..22cab28e4b 100644 --- a/openpype/pipeline/create/context.py +++ b/openpype/pipeline/create/context.py @@ -22,7 +22,7 @@ from openpype.lib.attribute_definitions import ( deserialize_attr_defs, get_default_values, ) -from openpype.host import IPublishHost +from openpype.host import IPublishHost, IWorkfileHost from openpype.pipeline import legacy_io from openpype.pipeline.plugin_discover import DiscoverResult @@ -1374,6 +1374,7 @@ class CreateContext: self._current_project_name = None self._current_asset_name = None self._current_task_name = None + self._current_workfile_path = None self._host_is_valid = host_is_valid # Currently unused variable @@ -1503,14 +1504,62 @@ class CreateContext: return os.environ["AVALON_APP"] def get_current_project_name(self): + """Project name which was used as current context on context reset. + + Returns: + Union[str, None]: Project name. + """ + return self._current_project_name def get_current_asset_name(self): + """Asset name which was used as current context on context reset. + + Returns: + Union[str, None]: Asset name. + """ + return self._current_asset_name def get_current_task_name(self): + """Task name which was used as current context on context reset. + + Returns: + Union[str, None]: Task name. + """ + return self._current_task_name + def get_current_workfile_path(self): + """Workfile path which was opened on context reset. + + Returns: + Union[str, None]: Workfile path. + """ + + return self._current_workfile_path + + @property + def context_has_changed(self): + """Host context has changed. + + As context is used project, asset, task name and workfile path if + host does support workfiles. + + Returns: + bool: Context changed. + """ + + project_name, asset_name, task_name, workfile_path = ( + self._get_current_host_context() + ) + return ( + self._current_project_name != project_name + or self._current_asset_name != asset_name + or self._current_task_name != task_name + or self._current_workfile_path != workfile_path + ) + project_name = property(get_current_project_name) @property @@ -1575,6 +1624,28 @@ class CreateContext: self._collection_shared_data = None self.refresh_thumbnails() + def _get_current_host_context(self): + project_name = asset_name = task_name = workfile_path = None + if hasattr(self.host, "get_current_context"): + host_context = self.host.get_current_context() + if host_context: + project_name = host_context.get("project_name") + asset_name = host_context.get("asset_name") + task_name = host_context.get("task_name") + + if isinstance(self.host, IWorkfileHost): + workfile_path = self.host.get_current_workfile() + + # --- TODO remove these conditions --- + if not project_name: + project_name = legacy_io.Session.get("AVALON_PROJECT") + if not asset_name: + asset_name = legacy_io.Session.get("AVALON_ASSET") + if not task_name: + task_name = legacy_io.Session.get("AVALON_TASK") + # --- + return project_name, asset_name, task_name, workfile_path + def reset_current_context(self): """Refresh current context. @@ -1593,24 +1664,14 @@ class CreateContext: are stored. We should store the workfile (if is available) too. 
""" - project_name = asset_name = task_name = None - if hasattr(self.host, "get_current_context"): - host_context = self.host.get_current_context() - if host_context: - project_name = host_context.get("project_name") - asset_name = host_context.get("asset_name") - task_name = host_context.get("task_name") - - if not project_name: - project_name = legacy_io.Session.get("AVALON_PROJECT") - if not asset_name: - asset_name = legacy_io.Session.get("AVALON_ASSET") - if not task_name: - task_name = legacy_io.Session.get("AVALON_TASK") + project_name, asset_name, task_name, workfile_path = ( + self._get_current_host_context() + ) self._current_project_name = project_name self._current_asset_name = asset_name self._current_task_name = task_name + self._current_workfile_path = workfile_path def reset_plugins(self, discover_publish_plugins=True): """Reload plugins. diff --git a/openpype/plugins/publish/collect_comment.py b/openpype/plugins/publish/collect_comment.py index 5be04731ac..9f41e37f22 100644 --- a/openpype/plugins/publish/collect_comment.py +++ b/openpype/plugins/publish/collect_comment.py @@ -29,7 +29,7 @@ from openpype.pipeline.publish import OpenPypePyblishPluginMixin class CollectInstanceCommentDef( - pyblish.api.ContextPlugin, + pyblish.api.InstancePlugin, OpenPypePyblishPluginMixin ): label = "Comment per instance" diff --git a/openpype/plugins/publish/extract_burnin.py b/openpype/plugins/publish/extract_burnin.py index f113e61bb0..44ba4a5025 100644 --- a/openpype/plugins/publish/extract_burnin.py +++ b/openpype/plugins/publish/extract_burnin.py @@ -16,9 +16,7 @@ from openpype.lib import ( get_transcode_temp_directory, convert_input_paths_for_ffmpeg, - should_convert_for_ffmpeg, - - CREATE_NO_WINDOW + should_convert_for_ffmpeg ) from openpype.lib.profiles_filtering import filter_profiles @@ -338,8 +336,6 @@ class ExtractBurnin(publish.Extractor): "logger": self.log, "env": {} } - if platform.system().lower() == "windows": - process_kwargs["creationflags"] = CREATE_NO_WINDOW run_openpype_process(*args, **process_kwargs) # Remove the temporary json @@ -729,7 +725,6 @@ class ExtractBurnin(publish.Extractor): return filtered_burnin_defs families = self.families_from_instance(instance) - low_families = [family.lower() for family in families] for filename_suffix, orig_burnin_def in burnin_defs.items(): burnin_def = copy.deepcopy(orig_burnin_def) @@ -740,7 +735,7 @@ class ExtractBurnin(publish.Extractor): families_filters = def_filter["families"] if not self.families_filter_validation( - low_families, families_filters + families, families_filters ): self.log.debug(( "Skipped burnin definition \"{}\". Family" @@ -777,31 +772,19 @@ class ExtractBurnin(publish.Extractor): return filtered_burnin_defs def families_filter_validation(self, families, output_families_filter): - """Determine if entered families intersect with families filters. + """Determines if entered families intersect with families filters. All family values are lowered to avoid unexpected results. 
""" - if not output_families_filter: + + families_filter_lower = set(family.lower() for family in + output_families_filter + # Exclude empty filter values + if family) + if not families_filter_lower: return True - - for family_filter in output_families_filter: - if not family_filter: - continue - - if not isinstance(family_filter, (list, tuple)): - if family_filter.lower() not in families: - continue - return True - - valid = True - for family in family_filter: - if family.lower() not in families: - valid = False - break - - if valid: - return True - return False + return any(family.lower() in families_filter_lower + for family in families) def families_from_instance(self, instance): """Return all families of entered instance.""" diff --git a/openpype/plugins/publish/extract_review.py b/openpype/plugins/publish/extract_review.py index 0f6dacba18..7de92a8bbd 100644 --- a/openpype/plugins/publish/extract_review.py +++ b/openpype/plugins/publish/extract_review.py @@ -12,7 +12,7 @@ import pyblish.api from openpype.lib import ( get_ffmpeg_tool_path, - + filter_profiles, path_to_subprocess_arg, run_subprocess, ) @@ -23,6 +23,7 @@ from openpype.lib.transcoding import ( convert_input_paths_for_ffmpeg, get_transcode_temp_directory, ) +from openpype.pipeline.publish import KnownPublishError class ExtractReview(pyblish.api.InstancePlugin): @@ -42,6 +43,7 @@ class ExtractReview(pyblish.api.InstancePlugin): hosts = [ "nuke", "maya", + "blender", "shell", "hiero", "premiere", @@ -87,21 +89,23 @@ class ExtractReview(pyblish.api.InstancePlugin): def _get_outputs_for_instance(self, instance): host_name = instance.context.data["hostName"] - task_name = os.environ["AVALON_TASK"] family = self.main_family_from_instance(instance) self.log.info("Host: \"{}\"".format(host_name)) - self.log.info("Task: \"{}\"".format(task_name)) self.log.info("Family: \"{}\"".format(family)) - profile = self.find_matching_profile( - host_name, task_name, family - ) + profile = filter_profiles( + self.profiles, + { + "hosts": host_name, + "families": family, + }, + logger=self.log) if not profile: self.log.info(( "Skipped instance. None of profiles in presets are for" - " Host: \"{}\" | Family: \"{}\" | Task \"{}\"" - ).format(host_name, family, task_name)) + " Host: \"{}\" | Family: \"{}\"" + ).format(host_name, family)) return self.log.debug("Matching profile: \"{}\"".format(json.dumps(profile))) @@ -111,17 +115,19 @@ class ExtractReview(pyblish.api.InstancePlugin): filtered_outputs = self.filter_output_defs( profile, subset_name, instance_families ) + if not filtered_outputs: + self.log.info(( + "Skipped instance. All output definitions from selected" + " profile do not match instance families \"{}\" or" + " subset name \"{}\"." + ).format(str(instance_families), subset_name)) + # Store `filename_suffix` to save arguments profile_outputs = [] for filename_suffix, definition in filtered_outputs.items(): definition["filename_suffix"] = filename_suffix profile_outputs.append(definition) - if not filtered_outputs: - self.log.info(( - "Skipped instance. All output definitions from selected" - " profile does not match to instance families. 
\"{}\"" - ).format(str(instance_families))) return profile_outputs def _get_outputs_per_representations(self, instance, profile_outputs): @@ -215,6 +221,7 @@ class ExtractReview(pyblish.api.InstancePlugin): outputs_per_repres = self._get_outputs_per_representations( instance, profile_outputs ) + for repre, output_defs in outputs_per_repres: # Check if input should be preconverted before processing # Store original staging dir (it's value may change) @@ -296,10 +303,10 @@ class ExtractReview(pyblish.api.InstancePlugin): shutil.rmtree(new_staging_dir) def _render_output_definitions( - self, instance, repre, src_repre_staging_dir, output_defs + self, instance, repre, src_repre_staging_dir, output_definitions ): fill_data = copy.deepcopy(instance.data["anatomyData"]) - for _output_def in output_defs: + for _output_def in output_definitions: output_def = copy.deepcopy(_output_def) # Make sure output definition has "tags" key if "tags" not in output_def: @@ -345,10 +352,11 @@ class ExtractReview(pyblish.api.InstancePlugin): if temp_data["input_is_sequence"]: self.log.info("Filling gaps in sequence.") files_to_clean = self.fill_sequence_gaps( - temp_data["origin_repre"]["files"], - new_repre["stagingDir"], - temp_data["frame_start"], - temp_data["frame_end"]) + files=temp_data["origin_repre"]["files"], + staging_dir=new_repre["stagingDir"], + start_frame=temp_data["frame_start"], + end_frame=temp_data["frame_end"] + ) # create or update outputName output_name = new_repre.get("outputName", "") @@ -420,10 +428,10 @@ class ExtractReview(pyblish.api.InstancePlugin): def input_is_sequence(self, repre): """Deduce from representation data if input is sequence.""" # TODO GLOBAL ISSUE - Find better way how to find out if input - # is sequence. Issues( in theory): - # - there may be multiple files ant not be sequence - # - remainders are not checked at all - # - there can be more than one collection + # is sequence. Issues (in theory): + # - there may be multiple files ant not be sequence + # - remainders are not checked at all + # - there can be more than one collection return isinstance(repre["files"], (list, tuple)) def prepare_temp_data(self, instance, repre, output_def): @@ -815,76 +823,41 @@ class ExtractReview(pyblish.api.InstancePlugin): is done. Raises: - AssertionError: if more then one collection is obtained. - + KnownPublishError: if more than one collection is obtained. 
""" - start_frame = int(start_frame) - end_frame = int(end_frame) + collections = clique.assemble(files)[0] - msg = "Multiple collections {} found.".format(collections) - assert len(collections) == 1, msg + if len(collections) != 1: + raise KnownPublishError( + "Multiple collections {} found.".format(collections)) + col = collections[0] - # do nothing if no gap is found in input range - not_gap = True - for fr in range(start_frame, end_frame + 1): - if fr not in col.indexes: - not_gap = False - - if not_gap: - return [] - - holes = col.holes() - - # generate ideal sequence - complete_col = clique.assemble( - [("{}{:0" + str(col.padding) + "d}{}").format( - col.head, f, col.tail - ) for f in range(start_frame, end_frame)] - )[0][0] # type: clique.Collection - - new_files = {} - last_existing_file = None - - for idx in holes.indexes: - # get previous existing file - test_file = os.path.normpath(os.path.join( - staging_dir, - ("{}{:0" + str(complete_col.padding) + "d}{}").format( - complete_col.head, idx - 1, complete_col.tail))) - if os.path.isfile(test_file): - new_files[idx] = test_file - last_existing_file = test_file + # Prepare which hole is filled with what frame + # - the frame is filled only with already existing frames + prev_frame = next(iter(col.indexes)) + hole_frame_to_nearest = {} + for frame in range(int(start_frame), int(end_frame) + 1): + if frame in col.indexes: + prev_frame = frame else: - if not last_existing_file: - # previous file is not found (sequence has a hole - # at the beginning. Use first available frame - # there is. - try: - last_existing_file = list(col)[0] - except IndexError: - # empty collection? - raise AssertionError( - "Invalid sequence collected") - new_files[idx] = os.path.normpath( - os.path.join(staging_dir, last_existing_file)) + # Use previous frame as source for hole + hole_frame_to_nearest[frame] = prev_frame - files_to_clean = [] - if new_files: - # so now new files are dict with missing frame as a key and - # existing file as a value. - for frame, file in new_files.items(): - self.log.info( - "Filling gap {} with {}".format(frame, file)) + # Calculate paths + added_files = [] + col_format = col.format("{head}{padding}{tail}") + for hole_frame, src_frame in hole_frame_to_nearest.items(): + hole_fpath = os.path.join(staging_dir, col_format % hole_frame) + src_fpath = os.path.join(staging_dir, col_format % src_frame) + if not os.path.isfile(src_fpath): + raise KnownPublishError( + "Missing previously detected file: {}".format(src_fpath)) - hole = os.path.join( - staging_dir, - ("{}{:0" + str(col.padding) + "d}{}").format( - col.head, frame, col.tail)) - speedcopy.copyfile(file, hole) - files_to_clean.append(hole) + speedcopy.copyfile(src_fpath, hole_fpath) + added_files.append(hole_fpath) - return files_to_clean + return added_files def input_output_paths(self, new_repre, output_def, temp_data): """Deduce input nad output file paths based on entered data. @@ -1280,7 +1253,7 @@ class ExtractReview(pyblish.api.InstancePlugin): # 'use_input_res' is set to 'True'. 
use_input_res = False - # Overscal color + # Overscan color overscan_color_value = "black" overscan_color = output_def.get("overscan_color") if overscan_color: @@ -1467,240 +1440,20 @@ class ExtractReview(pyblish.api.InstancePlugin): families.append(family) return families - def compile_list_of_regexes(self, in_list): - """Convert strings in entered list to compiled regex objects.""" - regexes = [] - if not in_list: - return regexes - - for item in in_list: - if not item: - continue - - try: - regexes.append(re.compile(item)) - except TypeError: - self.log.warning(( - "Invalid type \"{}\" value \"{}\"." - " Expected string based object. Skipping." - ).format(str(type(item)), str(item))) - - return regexes - - def validate_value_by_regexes(self, value, in_list): - """Validates in any regex from list match entered value. - - Args: - in_list (list): List with regexes. - value (str): String where regexes is checked. - - Returns: - int: Returns `0` when list is not set or is empty. Returns `1` when - any regex match value and returns `-1` when none of regexes - match value entered. - """ - if not in_list: - return 0 - - output = -1 - regexes = self.compile_list_of_regexes(in_list) - for regex in regexes: - if not value: - continue - if re.match(regex, value): - output = 1 - break - return output - - def profile_exclusion(self, matching_profiles): - """Find out most matching profile byt host, task and family match. - - Profiles are selectively filtered. Each profile should have - "__value__" key with list of booleans. Each boolean represents - existence of filter for specific key (host, tasks, family). - Profiles are looped in sequence. In each sequence are split into - true_list and false_list. For next sequence loop are used profiles in - true_list if there are any profiles else false_list is used. - - Filtering ends when only one profile left in true_list. Or when all - existence booleans loops passed, in that case first profile from left - profiles is returned. - - Args: - matching_profiles (list): Profiles with same values. - - Returns: - dict: Most matching profile. - """ - self.log.info( - "Search for first most matching profile in match order:" - " Host name -> Task name -> Family." - ) - # Filter all profiles with highest points value. First filter profiles - # with matching host if there are any then filter profiles by task - # name if there are any and lastly filter by family. Else use first in - # list. - idx = 0 - final_profile = None - while True: - profiles_true = [] - profiles_false = [] - for profile in matching_profiles: - value = profile["__value__"] - # Just use first profile when idx is greater than values. - if not idx < len(value): - final_profile = profile - break - - if value[idx]: - profiles_true.append(profile) - else: - profiles_false.append(profile) - - if final_profile is not None: - break - - if profiles_true: - matching_profiles = profiles_true - else: - matching_profiles = profiles_false - - if len(matching_profiles) == 1: - final_profile = matching_profiles[0] - break - idx += 1 - - final_profile.pop("__value__") - return final_profile - - def find_matching_profile(self, host_name, task_name, family): - """ Filter profiles by Host name, Task name and main Family. - - Filtering keys are "hosts" (list), "tasks" (list), "families" (list). - If key is not find or is empty than it's expected to match. - - Args: - profiles (list): Profiles definition from presets. - host_name (str): Current running host name. - task_name (str): Current context task name. 
- family (str): Main family of current Instance. - - Returns: - dict/None: Return most matching profile or None if none of profiles - match at least one criteria. - """ - - matching_profiles = None - if not self.profiles: - return matching_profiles - - highest_profile_points = -1 - # Each profile get 1 point for each matching filter. Profile with most - # points is returned. For cases when more than one profile will match - # are also stored ordered lists of matching values. - for profile in self.profiles: - profile_points = 0 - profile_value = [] - - # Host filtering - host_names = profile.get("hosts") - match = self.validate_value_by_regexes(host_name, host_names) - if match == -1: - self.log.debug( - "\"{}\" not found in {}".format(host_name, host_names) - ) - continue - profile_points += match - profile_value.append(bool(match)) - - # Task filtering - task_names = profile.get("tasks") - match = self.validate_value_by_regexes(task_name, task_names) - if match == -1: - self.log.debug( - "\"{}\" not found in {}".format(task_name, task_names) - ) - continue - profile_points += match - profile_value.append(bool(match)) - - # Family filtering - families = profile.get("families") - match = self.validate_value_by_regexes(family, families) - if match == -1: - self.log.debug( - "\"{}\" not found in {}".format(family, families) - ) - continue - profile_points += match - profile_value.append(bool(match)) - - if profile_points < highest_profile_points: - continue - - if profile_points > highest_profile_points: - matching_profiles = [] - highest_profile_points = profile_points - - if profile_points == highest_profile_points: - profile["__value__"] = profile_value - matching_profiles.append(profile) - - if not matching_profiles: - self.log.warning(( - "None of profiles match your setup." - " Host \"{}\" | Task: \"{}\" | Family: \"{}\"" - ).format(host_name, task_name, family)) - return - - if len(matching_profiles) == 1: - # Pop temporary key `__value__` - matching_profiles[0].pop("__value__") - return matching_profiles[0] - - self.log.warning(( - "More than one profile match your setup." - " Host \"{}\" | Task: \"{}\" | Family: \"{}\"" - ).format(host_name, task_name, family)) - - return self.profile_exclusion(matching_profiles) - def families_filter_validation(self, families, output_families_filter): """Determines if entered families intersect with families filters. All family values are lowered to avoid unexpected results. """ - if not output_families_filter: + + families_filter_lower = set(family.lower() for family in + output_families_filter + # Exclude empty filter values + if family) + if not families_filter_lower: return True - - single_families = [] - combination_families = [] - for family_filter in output_families_filter: - if not family_filter: - continue - if isinstance(family_filter, (list, tuple)): - _family_filter = [] - for family in family_filter: - if family: - _family_filter.append(family.lower()) - combination_families.append(_family_filter) - else: - single_families.append(family_filter.lower()) - - for family in single_families: - if family in families: - return True - - for family_combination in combination_families: - valid = True - for family in family_combination: - if family not in families: - valid = False - break - - if valid: - return True - return False + return any(family.lower() in families_filter_lower + for family in families) def filter_output_defs(self, profile, subset_name, families): """Return outputs matching input instance families. 
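Both `extract_burnin` and `extract_review` now share the same set-based family matching, replacing the nested single/combination filter walk. A self-contained sketch of the logic, with the `families_match` name chosen for illustration:

```python
def families_match(families, families_filter):
    """Return True if any instance family appears in the filter.

    An empty filter (or one containing only empty values) matches
    everything; comparison is case-insensitive on both sides.
    """
    filter_lower = {family.lower() for family in families_filter if family}
    if not filter_lower:
        return True
    return any(family.lower() in filter_lower for family in families)


# families_match(["Render", "review"], ["REVIEW"])  -> True
# families_match(["model"], [])                     -> True (no filter set)
# families_match(["model"], ["review"])             -> False
```

Note the rewrite treats every filter entry as a flat string; the old branch that accepted list-type "combination" entries (all members required to match) is gone.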
@@ -1715,14 +1468,10 @@ class ExtractReview(pyblish.api.InstancePlugin): Returns: list: Containg all output definitions matching entered families. """ - outputs = profile.get("outputs") or [] + outputs = profile.get("outputs") or {} if not outputs: return outputs - # lower values - # QUESTION is this valid operation? - families = [family.lower() for family in families] - filtered_outputs = {} for filename_suffix, output_def in outputs.items(): output_filters = output_def.get("filter") @@ -1994,14 +1743,14 @@ class OverscanCrop: relative_source_regex = re.compile(r"%([\+\-])") def __init__( - self, input_width, input_height, string_value, overscal_color=None + self, input_width, input_height, string_value, overscan_color=None ): # Make sure that is not None string_value = string_value or "" self.input_width = input_width self.input_height = input_height - self.overscal_color = overscal_color + self.overscan_color = overscan_color width, height = self._convert_string_to_values(string_value) self._width_value = width @@ -2057,20 +1806,20 @@ class OverscanCrop: elif width >= self.input_width and height >= self.input_height: output.append( "pad={}:{}:(iw-ow)/2:(ih-oh)/2:{}".format( - width, height, self.overscal_color + width, height, self.overscan_color ) ) elif width > self.input_width and height < self.input_height: output.append("crop=iw:{}".format(height)) output.append("pad={}:ih:(iw-ow)/2:(ih-oh)/2:{}".format( - width, self.overscal_color + width, self.overscan_color )) elif width < self.input_width and height > self.input_height: output.append("crop={}:ih".format(width)) output.append("pad=iw:{}:(iw-ow)/2:(ih-oh)/2:{}".format( - height, self.overscal_color + height, self.overscan_color )) return output diff --git a/openpype/plugins/publish/integrate.py b/openpype/plugins/publish/integrate.py index b117006871..760b1a6b37 100644 --- a/openpype/plugins/publish/integrate.py +++ b/openpype/plugins/publish/integrate.py @@ -24,7 +24,10 @@ from openpype.client import ( get_version_by_name, ) from openpype.lib import source_hash -from openpype.lib.file_transaction import FileTransaction +from openpype.lib.file_transaction import ( + FileTransaction, + DuplicateDestinationError +) from openpype.pipeline.publish import ( KnownPublishError, get_publish_template_name, @@ -80,10 +83,12 @@ class IntegrateAsset(pyblish.api.InstancePlugin): order = pyblish.api.IntegratorOrder families = ["workfile", "pointcache", + "pointcloud", "proxyAbc", "camera", "animation", "model", + "maxScene", "mayaAscii", "mayaScene", "setdress", @@ -168,9 +173,18 @@ class IntegrateAsset(pyblish.api.InstancePlugin): ).format(instance.data["family"])) return - file_transactions = FileTransaction(log=self.log) + file_transactions = FileTransaction(log=self.log, + # Enforce unique transfers + allow_queue_replacements=False) try: self.register(instance, file_transactions, filtered_repres) + except DuplicateDestinationError as exc: + # Raise DuplicateDestinationError as KnownPublishError + # and rollback the transactions + file_transactions.rollback() + six.reraise(KnownPublishError, + KnownPublishError(exc), + sys.exc_info()[2]) except Exception: # clean destination # todo: preferably we'd also rollback *any* changes to the database diff --git a/openpype/plugins/publish/integrate_legacy.py b/openpype/plugins/publish/integrate_legacy.py index b93abab1d8..1d0177f151 100644 --- a/openpype/plugins/publish/integrate_legacy.py +++ b/openpype/plugins/publish/integrate_legacy.py @@ -76,10 +76,12 @@ class 
IntegrateAssetNew(pyblish.api.InstancePlugin): order = pyblish.api.IntegratorOrder + 0.00001 families = ["workfile", "pointcache", + "pointcloud", "proxyAbc", "camera", "animation", "model", + "maxScene", "mayaAscii", "mayaScene", "setdress", diff --git a/openpype/plugins/publish/preintegrate_thumbnail_representation.py b/openpype/plugins/publish/preintegrate_thumbnail_representation.py index b88ccee9dc..1c95b82c97 100644 --- a/openpype/plugins/publish/preintegrate_thumbnail_representation.py +++ b/openpype/plugins/publish/preintegrate_thumbnail_representation.py @@ -60,6 +60,8 @@ class PreIntegrateThumbnails(pyblish.api.InstancePlugin): if not found_profile: return + thumbnail_repre.setdefault("tags", []) + if not found_profile["integrate_thumbnail"]: if "delete" not in thumbnail_repre["tags"]: thumbnail_repre["tags"].append("delete") diff --git a/openpype/scripts/otio_burnin.py b/openpype/scripts/otio_burnin.py index cb4646c099..ef449f4f74 100644 --- a/openpype/scripts/otio_burnin.py +++ b/openpype/scripts/otio_burnin.py @@ -345,12 +345,6 @@ class ModifiedBurnins(ffmpeg_burnins.Burnins): "stderr": subprocess.PIPE, "shell": True, } - if platform.system().lower() == "windows": - kwargs["creationflags"] = ( - subprocess.CREATE_NEW_PROCESS_GROUP - | getattr(subprocess, "DETACHED_PROCESS", 0) - | getattr(subprocess, "CREATE_NO_WINDOW", 0) - ) proc = subprocess.Popen(command, **kwargs) _stdout, _stderr = proc.communicate() diff --git a/openpype/settings/defaults/project_settings/applications.json b/openpype/settings/defaults/project_settings/applications.json new file mode 100644 index 0000000000..62f3cdfe1b --- /dev/null +++ b/openpype/settings/defaults/project_settings/applications.json @@ -0,0 +1,3 @@ +{ + "only_available": false +} diff --git a/openpype/settings/defaults/project_settings/blender.json b/openpype/settings/defaults/project_settings/blender.json index fe05f94590..20eec0c09d 100644 --- a/openpype/settings/defaults/project_settings/blender.json +++ b/openpype/settings/defaults/project_settings/blender.json @@ -80,6 +80,94 @@ "enabled": true, "optional": true, "active": false + }, + "ExtractThumbnail": { + "enabled": true, + "optional": true, + "active": true, + "presets": { + "model": { + "image_settings": { + "file_format": "JPEG", + "color_mode": "RGB", + "quality": 100 + }, + "display_options": { + "shading": { + "light": "STUDIO", + "studio_light": "Default", + "type": "SOLID", + "color_type": "OBJECT", + "show_xray": false, + "show_shadows": false, + "show_cavity": true + }, + "overlay": { + "show_overlays": false + } + } + }, + "rig": { + "image_settings": { + "file_format": "JPEG", + "color_mode": "RGB", + "quality": 100 + }, + "display_options": { + "shading": { + "light": "STUDIO", + "studio_light": "Default", + "type": "SOLID", + "color_type": "OBJECT", + "show_xray": true, + "show_shadows": false, + "show_cavity": false + }, + "overlay": { + "show_overlays": true, + "show_ortho_grid": false, + "show_floor": false, + "show_axis_x": false, + "show_axis_y": false, + "show_axis_z": false, + "show_text": false, + "show_stats": false, + "show_cursor": false, + "show_annotation": false, + "show_extras": false, + "show_relationship_lines": false, + "show_outline_selected": false, + "show_motion_paths": false, + "show_object_origins": false, + "show_bones": true + } + } + } + } + }, + "ExtractPlayblast": { + "enabled": true, + "optional": true, + "active": true, + "presets": { + "default": { + "image_settings": { + "file_format": "PNG", + "color_mode": "RGB", + "color_depth": 
"8", + "compression": 15 + }, + "display_options": { + "shading": { + "type": "MATERIAL", + "render_pass": "COMBINED" + }, + "overlay": { + "show_overlays": false + } + } + } + } } } } diff --git a/openpype/settings/defaults/project_settings/celaction.json b/openpype/settings/defaults/project_settings/celaction.json index bdba6d7322..822604fd2f 100644 --- a/openpype/settings/defaults/project_settings/celaction.json +++ b/openpype/settings/defaults/project_settings/celaction.json @@ -9,6 +9,13 @@ "rules": {} } }, + "workfile": { + "submission_overrides": [ + "render_chunk", + "frame_range", + "resolution" + ] + }, "publish": { "CollectRenderPath": { "output_extension": "png", diff --git a/openpype/settings/defaults/project_settings/deadline.json b/openpype/settings/defaults/project_settings/deadline.json index 6b6f2d465b..0cbd323299 100644 --- a/openpype/settings/defaults/project_settings/deadline.json +++ b/openpype/settings/defaults/project_settings/deadline.json @@ -23,7 +23,7 @@ "enabled": true, "optional": false, "active": true, - "tile_assembler_plugin": "OpenPypeTileAssembler", + "tile_assembler_plugin": "DraftTileAssembler", "use_published": true, "import_reference": false, "asset_dependencies": true, diff --git a/openpype/settings/defaults/project_settings/fusion.json b/openpype/settings/defaults/project_settings/fusion.json index 954606820a..f974eebaca 100644 --- a/openpype/settings/defaults/project_settings/fusion.json +++ b/openpype/settings/defaults/project_settings/fusion.json @@ -16,5 +16,10 @@ "linux": [] } } + }, + "copy_fusion_settings": { + "copy_path": "~/.openpype/hosts/fusion/profiles", + "copy_status": false, + "force_sync": false } } diff --git a/openpype/settings/defaults/project_settings/kitsu.json b/openpype/settings/defaults/project_settings/kitsu.json index 95b3da04ae..0638450595 100644 --- a/openpype/settings/defaults/project_settings/kitsu.json +++ b/openpype/settings/defaults/project_settings/kitsu.json @@ -7,7 +7,12 @@ "publish": { "IntegrateKitsuNote": { "set_status_note": false, - "note_status_shortname": "wfa" + "note_status_shortname": "wfa", + "status_conditions": [], + "custom_comment_template": { + "enabled": false, + "comment_template": "{comment}\n\n| | |\n|--|--|\n| version| `{version}` |\n| family | `{family}` |\n| name | `{name}` |" + } } } } diff --git a/openpype/settings/defaults/project_settings/max.json b/openpype/settings/defaults/project_settings/max.json index 667b42411d..d59cdf8c4a 100644 --- a/openpype/settings/defaults/project_settings/max.json +++ b/openpype/settings/defaults/project_settings/max.json @@ -4,5 +4,20 @@ "aov_separator": "underscore", "image_format": "exr", "multipass": true + }, + "PointCloud":{ + "attribute":{ + "Age": "age", + "Radius": "radius", + "Position": "position", + "Rotation": "rotation", + "Scale": "scale", + "Velocity": "velocity", + "Color": "color", + "TextureCoordinate": "texcoord", + "MaterialID": "matid", + "custFloats": "custFloats", + "custVecs": "custVecs" + } } } diff --git a/openpype/settings/defaults/project_settings/maya.json b/openpype/settings/defaults/project_settings/maya.json index 63ba4542f3..2aa95fd1be 100644 --- a/openpype/settings/defaults/project_settings/maya.json +++ b/openpype/settings/defaults/project_settings/maya.json @@ -147,6 +147,7 @@ "enabled": true, "write_color_sets": false, "write_face_sets": false, + "include_parent_hierarchy": false, "include_user_defined_attributes": false, "defaults": [ "Main" diff --git 
a/openpype/settings/entities/schemas/projects_schema/schema_main.json b/openpype/settings/entities/schemas/projects_schema/schema_main.json index ebe59c7942..8c1d8ccbdd 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_main.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_main.json @@ -82,6 +82,10 @@ "type": "schema", "name": "schema_project_slack" }, + { + "type": "schema", + "name": "schema_project_applications" + }, { "type": "schema", "name": "schema_project_max" diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_applications.json b/openpype/settings/entities/schemas/projects_schema/schema_project_applications.json new file mode 100644 index 0000000000..030ed3ee8a --- /dev/null +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_applications.json @@ -0,0 +1,14 @@ +{ + "type": "dict", + "key": "applications", + "label": "Applications", + "collapsible": true, + "is_file": true, + "children": [ + { + "type": "boolean", + "key": "only_available", + "label": "Show only available applications" + } + ] +} diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_celaction.json b/openpype/settings/entities/schemas/projects_schema/schema_project_celaction.json index 2320d9ae26..c5ca3eb9f5 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_celaction.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_celaction.json @@ -22,6 +22,31 @@ ] }, + { + "type": "dict", + "collapsible": true, + "key": "workfile", + "label": "Workfile", + "children": [ + { + "key": "submission_overrides", + "label": "Submission workfile overrides", + "type": "enum", + "multiselection": true, + "enum_items": [ + { + "render_chunk": "Pass chunk size" + }, + { + "frame_range": "Pass frame range" + }, + { + "resolution": "Pass resolution" + } + ] + } + ] + }, { "type": "dict", "collapsible": true, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json b/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json index bb5a65e1b7..9906939cc7 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_deadline.json @@ -121,7 +121,7 @@ "DraftTileAssembler": "Draft Tile Assembler" }, { - "OpenPypeTileAssembler": "Open Image IO" + "OpenPypeTileAssembler": "OpenPype Tile Assembler" } ] }, diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_fusion.json b/openpype/settings/entities/schemas/projects_schema/schema_project_fusion.json index 8c62d75815..464cf2c06d 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_fusion.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_fusion.json @@ -45,6 +45,29 @@ ] } ] + }, + { + "type": "dict", + "key": "copy_fusion_settings", + "collapsible": true, + "label": "Local Fusion profile settings", + "children": [ + { + "key": "copy_path", + "type": "path", + "label": "Local Fusion profile directory" + }, + { + "type": "boolean", + "key": "copy_status", + "label": "Copy profile on first launch" + }, + { + "key":"force_sync", + "type": "boolean", + "label": "Resync profile on each launch" + } + ] } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_kitsu.json b/openpype/settings/entities/schemas/projects_schema/schema_project_kitsu.json index fb47670e74..ee309f63a7 100644 --- 
a/openpype/settings/entities/schemas/projects_schema/schema_project_kitsu.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_kitsu.json @@ -46,12 +46,62 @@ { "type": "boolean", "key": "set_status_note", - "label": "Set status on note" + "label": "Set status with note" }, { "type": "text", "key": "note_status_shortname", "label": "Note shortname" + }, + { + "type": "list", + "key": "status_conditions", + "label": "Status conditions", + "object_type": { + "type": "dict", + "key": "conditions_dict", + "children": [ + { + "type": "enum", + "key": "condition", + "label": "Condition", + "enum_items": [ + {"equal": "Equal"}, + {"not_equal": "Not equal"} + ] + }, + { + "type": "text", + "key": "short_name", + "label": "Short name" + } + ] + }, + "label": "Status shortname" + }, + { + "type": "dict", + "collapsible": true, + "checkbox_key": "enabled", + "key": "custom_comment_template", + "label": "Custom Comment Template", + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "label", + "label": "Kitsu supports markdown and here you can create a custom comment template.
You can use data from your publishing instance's data." + }, + { + "key": "comment_template", + "type": "text", + "multiline": true, + "label": "Custom comment" + } + ] } ] } diff --git a/openpype/settings/entities/schemas/projects_schema/schema_project_max.json b/openpype/settings/entities/schemas/projects_schema/schema_project_max.json index 8a283c1acc..4fba9aff0a 100644 --- a/openpype/settings/entities/schemas/projects_schema/schema_project_max.json +++ b/openpype/settings/entities/schemas/projects_schema/schema_project_max.json @@ -51,6 +51,28 @@ "label": "multipass" } ] + }, + { + "type": "dict", + "collapsible": true, + "key": "PointCloud", + "label": "Point Cloud", + "children": [ + { + "type": "label", + "label": "Define the channel attribute names before exporting as PRT" + }, + { + "type": "dict-modifiable", + "collapsible": true, + "key": "attribute", + "label": "Channel Attribute", + "use_label_wrap": true, + "object_type": { + "type": "text" + } + } + ] } ] -} \ No newline at end of file +} diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json index 53949f65cb..1037519f57 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_blender_publish.json @@ -112,6 +112,66 @@ "label": "Extract Layout as JSON" } ] + }, + { + "type": "dict", + "collapsible": true, + "key": "ExtractThumbnail", + "label": "ExtractThumbnail", + "checkbox_key": "enabled", + "is_group": true, + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" + }, + { + "type": "raw-json", + "key": "presets", + "label": "Presets" + } + ] + }, + { + "type": "dict", + "collapsible": true, + "key": "ExtractPlayblast", + "label": "ExtractPlayblast", + "checkbox_key": "enabled", + "is_group": true, + "children": [ + { + "type": "boolean", + "key": "enabled", + "label": "Enabled" + }, + { + "type": "boolean", + "key": "optional", + "label": "Optional" + }, + { + "type": "boolean", + "key": "active", + "label": "Active" + }, + { + "type": "raw-json", + "key": "presets", + "label": "Presets" + } + ] } ] -} +} \ No newline at end of file diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json index 1598f90643..d6e6c97b8c 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_create.json @@ -132,6 +132,11 @@ "key": "write_face_sets", "label": "Write Face Sets" }, + { + "type": "boolean", + "key": "include_parent_hierarchy", + "label": "Include Parent Hierarchy" + }, { "type": "boolean", "key": "include_user_defined_attributes", diff --git a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json index 3484f42f6b..5a66f8a513 100644 --- a/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json +++ b/openpype/settings/entities/schemas/projects_schema/schemas/schema_maya_publish.json @@ -369,7 +369,8 @@ "label": "Arnold Render Attributes", 
"use_label_wrap": true, "object_type": { - "type": "text" + "type": "list", + "object_type": "text" } }, { @@ -379,7 +380,8 @@ "label": "Vray Render Attributes", "use_label_wrap": true, "object_type": { - "type": "text" + "type": "list", + "object_type": "text" } }, { @@ -389,7 +391,8 @@ "label": "Redshift Render Attributes", "use_label_wrap": true, "object_type": { - "type": "text" + "type": "list", + "object_type": "text" } }, { @@ -399,7 +402,8 @@ "label": "Renderman Render Attributes", "use_label_wrap": true, "object_type": { - "type": "text" + "type": "list", + "object_type": "text" } } ] diff --git a/openpype/tools/launcher/models.py b/openpype/tools/launcher/models.py index 6c763544a9..3aa6c5d8cb 100644 --- a/openpype/tools/launcher/models.py +++ b/openpype/tools/launcher/models.py @@ -19,6 +19,7 @@ from openpype.lib.applications import ( CUSTOM_LAUNCH_APP_GROUPS, ApplicationManager ) +from openpype.settings import get_project_settings from openpype.pipeline import discover_launcher_actions from openpype.tools.utils.lib import ( DynamicQThread, @@ -94,6 +95,8 @@ class ActionModel(QtGui.QStandardItemModel): if not project_doc: return actions + project_settings = get_project_settings(project_name) + only_available = project_settings["applications"]["only_available"] self.application_manager.refresh() for app_def in project_doc["config"]["apps"]: app_name = app_def["name"] @@ -104,6 +107,9 @@ class ActionModel(QtGui.QStandardItemModel): if app.group.name in CUSTOM_LAUNCH_APP_GROUPS: continue + if only_available and not app.find_executable(): + continue + # Get from app definition, if not there from app in project action = type( "app_{}".format(app_name), diff --git a/openpype/tools/publisher/constants.py b/openpype/tools/publisher/constants.py index b2bfd7dd5c..5d23886aa8 100644 --- a/openpype/tools/publisher/constants.py +++ b/openpype/tools/publisher/constants.py @@ -1,4 +1,4 @@ -from qtpy import QtCore +from qtpy import QtCore, QtGui # ID of context item in instance view CONTEXT_ID = "context" @@ -26,6 +26,9 @@ GROUP_ROLE = QtCore.Qt.UserRole + 7 CONVERTER_IDENTIFIER_ROLE = QtCore.Qt.UserRole + 8 CREATOR_SORT_ROLE = QtCore.Qt.UserRole + 9 +ResetKeySequence = QtGui.QKeySequence( + QtCore.Qt.ControlModifier | QtCore.Qt.Key_R +) __all__ = ( "CONTEXT_ID", diff --git a/openpype/tools/publisher/control.py b/openpype/tools/publisher/control.py index 49e7eeb4f7..b62ae7ecc1 100644 --- a/openpype/tools/publisher/control.py +++ b/openpype/tools/publisher/control.py @@ -6,7 +6,7 @@ import collections import uuid import tempfile import shutil -from abc import ABCMeta, abstractmethod, abstractproperty +from abc import ABCMeta, abstractmethod import six import pyblish.api @@ -964,7 +964,8 @@ class AbstractPublisherController(object): access objects directly but by using wrappers that can be serialized. """ - @abstractproperty + @property + @abstractmethod def log(self): """Controller's logger object. @@ -974,13 +975,15 @@ class AbstractPublisherController(object): pass - @abstractproperty + @property + @abstractmethod def event_system(self): """Inner event system for publisher controller.""" pass - @abstractproperty + @property + @abstractmethod def project_name(self): """Current context project name. @@ -990,7 +993,8 @@ class AbstractPublisherController(object): pass - @abstractproperty + @property + @abstractmethod def current_asset_name(self): """Current context asset name. 
@@ -1000,7 +1004,8 @@ class AbstractPublisherController(object): pass - @abstractproperty + @property + @abstractmethod def current_task_name(self): """Current context task name. @@ -1010,7 +1015,21 @@ class AbstractPublisherController(object): pass - @abstractproperty + @property + @abstractmethod + def host_context_has_changed(self): + """Host context changed after last reset. + + 'CreateContext' has this option available using 'context_has_changed'. + + Returns: + bool: Context has changed. + """ + + pass + + @property + @abstractmethod def host_is_valid(self): """Host is valid for creation part. @@ -1023,7 +1042,8 @@ class AbstractPublisherController(object): pass - @abstractproperty + @property + @abstractmethod def instances(self): """Collected/created instances. @@ -1134,7 +1154,13 @@ class AbstractPublisherController(object): @abstractmethod def save_changes(self): - """Save changes in create context.""" + """Save changes in create context. + + Save can crash because of unexpected errors. + + Returns: + bool: Save was successful. + """ pass @@ -1145,7 +1171,19 @@ class AbstractPublisherController(object): pass - @abstractproperty + @property + @abstractmethod + def publish_has_started(self): + """Has publishing finished. + + Returns: + bool: If publishing finished and all plugins were iterated. + """ + + pass + + @property + @abstractmethod def publish_has_finished(self): """Has publishing finished. @@ -1155,7 +1193,8 @@ class AbstractPublisherController(object): pass - @abstractproperty + @property + @abstractmethod def publish_is_running(self): """Publishing is running right now. @@ -1165,7 +1204,8 @@ class AbstractPublisherController(object): pass - @abstractproperty + @property + @abstractmethod def publish_has_validated(self): """Publish validation passed. @@ -1175,7 +1215,8 @@ class AbstractPublisherController(object): pass - @abstractproperty + @property + @abstractmethod def publish_has_crashed(self): """Publishing crashed for any reason. @@ -1185,7 +1226,8 @@ class AbstractPublisherController(object): pass - @abstractproperty + @property + @abstractmethod def publish_has_validation_errors(self): """During validation happened at least one validation error. @@ -1195,7 +1237,8 @@ class AbstractPublisherController(object): pass - @abstractproperty + @property + @abstractmethod def publish_max_progress(self): """Get maximum possible progress number. @@ -1205,7 +1248,8 @@ class AbstractPublisherController(object): pass - @abstractproperty + @property + @abstractmethod def publish_progress(self): """Current progress number. @@ -1215,7 +1259,8 @@ class AbstractPublisherController(object): pass - @abstractproperty + @property + @abstractmethod def publish_error_msg(self): """Current error message which cause fail of publishing. @@ -1267,7 +1312,8 @@ class AbstractPublisherController(object): pass - @abstractproperty + @property + @abstractmethod def convertor_items(self): pass @@ -1356,6 +1402,7 @@ class BasePublisherController(AbstractPublisherController): self._publish_has_validation_errors = False self._publish_has_crashed = False # All publish plugins are processed + self._publish_has_started = False self._publish_has_finished = False self._publish_max_progress = 0 self._publish_progress = 0 @@ -1386,7 +1433,8 @@ class BasePublisherController(AbstractPublisherController): "show.card.message" - Show card message request (UI related). "instances.refresh.finished" - Instances are refreshed. "plugins.refresh.finished" - Plugins refreshed. 
- "publish.reset.finished" - Publish context reset finished. + "publish.reset.finished" - Reset finished. + "controller.reset.started" - Controller reset started. "controller.reset.finished" - Controller reset finished. "publish.process.started" - Publishing started. Can be started from paused state. @@ -1425,7 +1473,16 @@ class BasePublisherController(AbstractPublisherController): def _set_host_is_valid(self, value): if self._host_is_valid != value: self._host_is_valid = value - self._emit_event("publish.host_is_valid.changed", {"value": value}) + self._emit_event( + "publish.host_is_valid.changed", {"value": value} + ) + + def _get_publish_has_started(self): + return self._publish_has_started + + def _set_publish_has_started(self, value): + if value != self._publish_has_started: + self._publish_has_started = value def _get_publish_has_finished(self): return self._publish_has_finished @@ -1449,7 +1506,9 @@ class BasePublisherController(AbstractPublisherController): def _set_publish_has_validated(self, value): if self._publish_has_validated != value: self._publish_has_validated = value - self._emit_event("publish.has_validated.changed", {"value": value}) + self._emit_event( + "publish.has_validated.changed", {"value": value} + ) def _get_publish_has_crashed(self): return self._publish_has_crashed @@ -1497,6 +1556,9 @@ class BasePublisherController(AbstractPublisherController): host_is_valid = property( _get_host_is_valid, _set_host_is_valid ) + publish_has_started = property( + _get_publish_has_started, _set_publish_has_started + ) publish_has_finished = property( _get_publish_has_finished, _set_publish_has_finished ) @@ -1526,6 +1588,7 @@ class BasePublisherController(AbstractPublisherController): """Reset most of attributes that can be reset.""" self.publish_is_running = False + self.publish_has_started = False self.publish_has_validated = False self.publish_has_crashed = False self.publish_has_validation_errors = False @@ -1645,10 +1708,7 @@ class PublisherController(BasePublisherController): str: Project name. """ - if not hasattr(self._host, "get_current_context"): - return legacy_io.active_project() - - return self._host.get_current_context()["project_name"] + return self._create_context.get_current_project_name() @property def current_asset_name(self): @@ -1658,10 +1718,7 @@ class PublisherController(BasePublisherController): Union[str, None]: Asset name or None if asset is not set. """ - if not hasattr(self._host, "get_current_context"): - return legacy_io.Session["AVALON_ASSET"] - - return self._host.get_current_context()["asset_name"] + return self._create_context.get_current_asset_name() @property def current_task_name(self): @@ -1671,10 +1728,11 @@ class PublisherController(BasePublisherController): Union[str, None]: Task name or None if task is not set. 
""" - if not hasattr(self._host, "get_current_context"): - return legacy_io.Session["AVALON_TASK"] + return self._create_context.get_current_task_name() - return self._host.get_current_context()["task_name"] + @property + def host_context_has_changed(self): + return self._create_context.context_has_changed @property def instances(self): @@ -1751,6 +1809,8 @@ class PublisherController(BasePublisherController): """Reset everything related to creation and publishing.""" self.stop_publish() + self._emit_event("controller.reset.started") + self.host_is_valid = self._create_context.host_is_valid self._create_context.reset_preparation() @@ -1992,7 +2052,15 @@ class PublisherController(BasePublisherController): ) def trigger_convertor_items(self, convertor_identifiers): - self.save_changes() + """Trigger legacy item convertors. + + This functionality requires to save and reset CreateContext. The reset + is needed so Creators can collect converted items. + + Args: + convertor_identifiers (list[str]): Identifiers of convertor + plugins. + """ success = True try: @@ -2039,13 +2107,33 @@ class PublisherController(BasePublisherController): self._on_create_instance_change() return success - def save_changes(self): - """Save changes happened during creation.""" + def save_changes(self, show_message=True): + """Save changes happened during creation. + + Trigger save of changes using host api. This functionality does not + validate anything. It is required to do checks before this method is + called to be able to give user actionable response e.g. check of + context using 'host_context_has_changed'. + + Args: + show_message (bool): Show message that changes were + saved successfully. + + Returns: + bool: Save of changes was successful. + """ + if not self._create_context.host_is_valid: - return + # TODO remove + # Fake success save when host is not valid for CreateContext + # this is for testing as experimental feature + return True try: self._create_context.save_changes() + if show_message: + self.emit_card_message("Saved changes..") + return True except CreatorsOperationFailed as exc: self._emit_event( @@ -2056,16 +2144,17 @@ class PublisherController(BasePublisherController): } ) + return False + def remove_instances(self, instance_ids): """Remove instances based on instance ids. Args: instance_ids (List[str]): List of instance ids to remove. """ - # QUESTION Expect that instances are really removed? In that case save - # reset is not required and save changes too. - self.save_changes() + # QUESTION Expect that instances are really removed? In that case reset + # is not required. self._remove_instances_from_context(instance_ids) self._on_create_instance_change() @@ -2136,12 +2225,22 @@ class PublisherController(BasePublisherController): self._publish_comment_is_set = True def publish(self): - """Run publishing.""" + """Run publishing. + + Make sure all changes are saved before method is called (Call + 'save_changes' and check output). + """ + self._publish_up_validation = False self._start_publish() def validate(self): - """Run publishing and stop after Validation.""" + """Run publishing and stop after Validation. + + Make sure all changes are saved before method is called (Call + 'save_changes' and check output). 
+ """ + if self.publish_has_validated: return self._publish_up_validation = True @@ -2152,10 +2251,8 @@ class PublisherController(BasePublisherController): if self.publish_is_running: return - # Make sure changes are saved - self.save_changes() - self.publish_is_running = True + self.publish_has_started = True self._emit_event("publish.process.started") diff --git a/openpype/tools/publisher/widgets/__init__.py b/openpype/tools/publisher/widgets/__init__.py index 042985b007..f18e6cc61e 100644 --- a/openpype/tools/publisher/widgets/__init__.py +++ b/openpype/tools/publisher/widgets/__init__.py @@ -4,8 +4,9 @@ from .icons import ( get_icon ) from .widgets import ( - StopBtn, + SaveBtn, ResetBtn, + StopBtn, ValidateBtn, PublishBtn, CreateNextPageOverlay, @@ -25,8 +26,9 @@ __all__ = ( "get_pixmap", "get_icon", - "StopBtn", + "SaveBtn", "ResetBtn", + "StopBtn", "ValidateBtn", "PublishBtn", "CreateNextPageOverlay", diff --git a/openpype/tools/publisher/widgets/card_view_widgets.py b/openpype/tools/publisher/widgets/card_view_widgets.py index 3fd5243ce9..0734e1bc27 100644 --- a/openpype/tools/publisher/widgets/card_view_widgets.py +++ b/openpype/tools/publisher/widgets/card_view_widgets.py @@ -164,6 +164,11 @@ class BaseGroupWidget(QtWidgets.QWidget): def _on_widget_selection(self, instance_id, group_id, selection_type): self.selected.emit(instance_id, group_id, selection_type) + def set_active_toggle_enabled(self, enabled): + for widget in self._widgets_by_id.values(): + if isinstance(widget, InstanceCardWidget): + widget.set_active_toggle_enabled(enabled) + class ConvertorItemsGroupWidget(BaseGroupWidget): def update_items(self, items_by_id): @@ -437,6 +442,9 @@ class InstanceCardWidget(CardWidget): self.update_instance_values() + def set_active_toggle_enabled(self, enabled): + self._active_checkbox.setEnabled(enabled) + def set_active(self, new_value): """Set instance as active.""" checkbox_value = self._active_checkbox.isChecked() @@ -551,6 +559,7 @@ class InstanceCardView(AbstractInstanceView): self._context_widget = None self._convertor_items_group = None + self._active_toggle_enabled = True self._widgets_by_group = {} self._ordered_groups = [] @@ -667,6 +676,9 @@ class InstanceCardView(AbstractInstanceView): group_widget.update_instances( instances_by_group[group_name] ) + group_widget.set_active_toggle_enabled( + self._active_toggle_enabled + ) self._update_ordered_group_names() @@ -1091,3 +1103,10 @@ class InstanceCardView(AbstractInstanceView): self._explicitly_selected_groups = selected_groups self._explicitly_selected_instance_ids = selected_instances + + def set_active_toggle_enabled(self, enabled): + if self._active_toggle_enabled is enabled: + return + self._active_toggle_enabled = enabled + for group_widget in self._widgets_by_group.values(): + group_widget.set_active_toggle_enabled(enabled) diff --git a/openpype/tools/publisher/widgets/images/save.png b/openpype/tools/publisher/widgets/images/save.png new file mode 100644 index 0000000000..0db48d74ae Binary files /dev/null and b/openpype/tools/publisher/widgets/images/save.png differ diff --git a/openpype/tools/publisher/widgets/list_view_widgets.py b/openpype/tools/publisher/widgets/list_view_widgets.py index 172563d15c..227ae7bda9 100644 --- a/openpype/tools/publisher/widgets/list_view_widgets.py +++ b/openpype/tools/publisher/widgets/list_view_widgets.py @@ -198,6 +198,9 @@ class InstanceListItemWidget(QtWidgets.QWidget): self.instance["active"] = new_value self.active_changed.emit(self.instance.id, new_value) + def 
set_active_toggle_enabled(self, enabled): + self._active_checkbox.setEnabled(enabled) + class ListContextWidget(QtWidgets.QFrame): """Context (or global attributes) widget.""" @@ -302,6 +305,9 @@ class InstanceListGroupWidget(QtWidgets.QFrame): else: self.expand_btn.setArrowType(QtCore.Qt.RightArrow) + def set_active_toggle_enabled(self, enabled): + self.toggle_checkbox.setEnabled(enabled) + class InstanceTreeView(QtWidgets.QTreeView): """View showing instances and their groups.""" @@ -461,6 +467,8 @@ class InstanceListView(AbstractInstanceView): self._instance_model = instance_model self._proxy_model = proxy_model + self._active_toggle_enabled = True + def _on_expand(self, index): self._update_widget_expand_state(index, True) @@ -667,6 +675,9 @@ class InstanceListView(AbstractInstanceView): widget = InstanceListItemWidget( instance, self._instance_view ) + widget.set_active_toggle_enabled( + self._active_toggle_enabled + ) widget.active_changed.connect(self._on_active_changed) self._instance_view.setIndexWidget(proxy_index, widget) self._widgets_by_id[instance.id] = widget @@ -802,6 +813,9 @@ class InstanceListView(AbstractInstanceView): proxy_index = self._proxy_model.mapFromSource(index) group_name = group_item.data(GROUP_ROLE) widget = InstanceListGroupWidget(group_name, self._instance_view) + widget.set_active_toggle_enabled( + self._active_toggle_enabled + ) widget.expand_changed.connect(self._on_group_expand_request) widget.toggle_requested.connect(self._on_group_toggle_request) self._group_widgets[group_name] = widget @@ -1051,3 +1065,16 @@ class InstanceListView(AbstractInstanceView): QtCore.QItemSelectionModel.Select | QtCore.QItemSelectionModel.Rows ) + + def set_active_toggle_enabled(self, enabled): + if self._active_toggle_enabled is enabled: + return + + self._active_toggle_enabled = enabled + for widget in self._widgets_by_id.values(): + if isinstance(widget, InstanceListItemWidget): + widget.set_active_toggle_enabled(enabled) + + for widget in self._group_widgets.values(): + if isinstance(widget, InstanceListGroupWidget): + widget.set_active_toggle_enabled(enabled) diff --git a/openpype/tools/publisher/widgets/overview_widget.py b/openpype/tools/publisher/widgets/overview_widget.py index 8706daeda6..25fff73134 100644 --- a/openpype/tools/publisher/widgets/overview_widget.py +++ b/openpype/tools/publisher/widgets/overview_widget.py @@ -17,6 +17,7 @@ class OverviewWidget(QtWidgets.QFrame): active_changed = QtCore.Signal() instance_context_changed = QtCore.Signal() create_requested = QtCore.Signal() + convert_requested = QtCore.Signal() anim_end_value = 200 anim_duration = 200 @@ -132,6 +133,9 @@ class OverviewWidget(QtWidgets.QFrame): controller.event_system.add_callback( "publish.process.started", self._on_publish_start ) + controller.event_system.add_callback( + "controller.reset.started", self._on_controller_reset_start + ) controller.event_system.add_callback( "publish.reset.finished", self._on_publish_reset ) @@ -336,13 +340,31 @@ class OverviewWidget(QtWidgets.QFrame): self.instance_context_changed.emit() def _on_convert_requested(self): - _, _, convertor_identifiers = self.get_selected_items() - self._controller.trigger_convertor_items(convertor_identifiers) + self.convert_requested.emit() def get_selected_items(self): + """Selected items in current view widget. + + Returns: + tuple[list[str], bool, list[str]]: Selected items. List of + instance ids, context is selected, list of selected legacy + convertor plugins. 
+ """ + view = self._subset_views_layout.currentWidget() return view.get_selected_items() + def get_selected_legacy_convertors(self): + """Selected legacy convertor identifiers. + + Returns: + list[str]: Selected legacy convertor identifiers. + Example: ['io.openpype.creators.houdini.legacy'] + """ + + _, _, convertor_identifiers = self.get_selected_items() + return convertor_identifiers + def _change_view_type(self): idx = self._subset_views_layout.currentIndex() new_idx = (idx + 1) % self._subset_views_layout.count() @@ -391,9 +413,19 @@ class OverviewWidget(QtWidgets.QFrame): self._create_btn.setEnabled(False) self._subset_attributes_wrap.setEnabled(False) + for idx in range(self._subset_views_layout.count()): + widget = self._subset_views_layout.widget(idx) + widget.set_active_toggle_enabled(False) + + def _on_controller_reset_start(self): + """Controller reset started.""" + + for idx in range(self._subset_views_layout.count()): + widget = self._subset_views_layout.widget(idx) + widget.set_active_toggle_enabled(True) def _on_publish_reset(self): - """Context in controller has been refreshed.""" + """Context in controller has been reseted.""" self._create_btn.setEnabled(True) self._subset_attributes_wrap.setEnabled(True) diff --git a/openpype/tools/publisher/widgets/widgets.py b/openpype/tools/publisher/widgets/widgets.py index 86475460aa..d2ce1fbcb2 100644 --- a/openpype/tools/publisher/widgets/widgets.py +++ b/openpype/tools/publisher/widgets/widgets.py @@ -34,7 +34,8 @@ from .icons import ( ) from ..constants import ( - VARIANT_TOOLTIP + VARIANT_TOOLTIP, + ResetKeySequence, ) @@ -198,12 +199,26 @@ class CreateBtn(PublishIconBtn): self.setLayoutDirection(QtCore.Qt.RightToLeft) +class SaveBtn(PublishIconBtn): + """Save context and instances information.""" + def __init__(self, parent=None): + icon_path = get_icon_path("save") + super(SaveBtn, self).__init__(icon_path, parent) + self.setToolTip( + "Save changes ({})".format( + QtGui.QKeySequence(QtGui.QKeySequence.Save).toString() + ) + ) + + class ResetBtn(PublishIconBtn): """Publish reset button.""" def __init__(self, parent=None): icon_path = get_icon_path("refresh") super(ResetBtn, self).__init__(icon_path, parent) - self.setToolTip("Refresh publishing") + self.setToolTip( + "Reset & discard changes ({})".format(ResetKeySequence.toString()) + ) class StopBtn(PublishIconBtn): @@ -348,6 +363,19 @@ class AbstractInstanceView(QtWidgets.QWidget): "{} Method 'set_selected_items' is not implemented." ).format(self.__class__.__name__)) + def set_active_toggle_enabled(self, enabled): + """Instances are disabled for changing enabled state. + + Active state should stay the same until is "unset". + + Args: + enabled (bool): Instance state can be changed. + """ + + raise NotImplementedError(( + "{} Method 'set_active_toggle_enabled' is not implemented." + ).format(self.__class__.__name__)) + class ClickableLineEdit(QtWidgets.QLineEdit): """QLineEdit capturing left mouse click. 
@@ -1533,7 +1561,7 @@ class SubsetAttributesWidget(QtWidgets.QWidget): │ attributes │ Thumbnail │ TOP │ │ │ ├─────────────┬───┴─────────────┤ - │ Family │ Publish │ + │ Creator │ Publish │ │ attributes │ plugin │ BOTTOM │ │ attributes │ └───────────────────────────────┘ diff --git a/openpype/tools/publisher/window.py b/openpype/tools/publisher/window.py index 74977d65d8..8826e0f849 100644 --- a/openpype/tools/publisher/window.py +++ b/openpype/tools/publisher/window.py @@ -13,6 +13,7 @@ from openpype.tools.utils import ( PixmapLabel, ) +from .constants import ResetKeySequence from .publish_report_viewer import PublishReportViewerWidget from .control_qt import QtPublisherController from .widgets import ( @@ -22,8 +23,9 @@ from .widgets import ( PublisherTabsWidget, - StopBtn, + SaveBtn, ResetBtn, + StopBtn, ValidateBtn, PublishBtn, @@ -121,6 +123,7 @@ class PublisherWindow(QtWidgets.QDialog): "Attach a comment to your publish" ) + save_btn = SaveBtn(footer_widget) reset_btn = ResetBtn(footer_widget) stop_btn = StopBtn(footer_widget) validate_btn = ValidateBtn(footer_widget) @@ -129,6 +132,7 @@ class PublisherWindow(QtWidgets.QDialog): footer_bottom_layout = QtWidgets.QHBoxLayout(footer_bottom_widget) footer_bottom_layout.setContentsMargins(0, 0, 0, 0) footer_bottom_layout.addStretch(1) + footer_bottom_layout.addWidget(save_btn, 0) footer_bottom_layout.addWidget(reset_btn, 0) footer_bottom_layout.addWidget(stop_btn, 0) footer_bottom_layout.addWidget(validate_btn, 0) @@ -250,7 +254,11 @@ class PublisherWindow(QtWidgets.QDialog): overview_widget.create_requested.connect( self._on_create_request ) + overview_widget.convert_requested.connect( + self._on_convert_requested + ) + save_btn.clicked.connect(self._on_save_clicked) reset_btn.clicked.connect(self._on_reset_clicked) stop_btn.clicked.connect(self._on_stop_clicked) validate_btn.clicked.connect(self._on_validate_clicked) @@ -330,8 +338,9 @@ class PublisherWindow(QtWidgets.QDialog): self._comment_input = comment_input self._footer_spacer = footer_spacer - self._stop_btn = stop_btn + self._save_btn = save_btn self._reset_btn = reset_btn + self._stop_btn = stop_btn self._validate_btn = validate_btn self._publish_btn = publish_btn @@ -388,7 +397,9 @@ class PublisherWindow(QtWidgets.QDialog): def closeEvent(self, event): self._window_is_visible = False self._uninstall_app_event_listener() - self.save_changes() + # TODO capture changes and ask user if they want to save changes on close + if not self._controller.host_context_has_changed: + self._save_changes(False) self._reset_on_show = True self._controller.clear_thumbnail_temp_dir_path() super(PublisherWindow, self).closeEvent(event) @@ -421,6 +432,21 @@ class PublisherWindow(QtWidgets.QDialog): if event.key() == QtCore.Qt.Key_Escape: event.accept() return + + if event.matches(QtGui.QKeySequence.Save): + if not self._controller.publish_has_started: + self._save_changes(True) + event.accept() + return + + if ResetKeySequence.matches( + QtGui.QKeySequence(event.key() | event.modifiers()) + ): + if not self._controller.publish_is_running: + self.reset() + event.accept() + return + super(PublisherWindow, self).keyPressEvent(event) def _on_overlay_message(self, event): @@ -455,8 +481,65 @@ class PublisherWindow(QtWidgets.QDialog): self._reset_on_show = False self.reset() - def save_changes(self): - self._controller.save_changes() + def _checks_before_save(self, explicit_save): + """Saving changes may run into issues.
+ + Check if the context has changed and ask the user to confirm that the + save should happen. A dialog can be shown during this method. + + Args: + explicit_save (bool): Method was called because the user explicitly + asked to save. Value affects the shown message. + + Returns: + bool: Save can happen. + """ + + if not self._controller.host_context_has_changed: + return True + + title = "Host context changed" + if explicit_save: + message = ( + "Context has changed since Publisher window was refreshed last" + " time.\n\nAre you sure you want to save changes?" + ) + else: + message = ( + "Your action requires save of changes but context has changed" + " since Publisher window was refreshed last time.\n\nAre you" + " sure you want to continue and save changes?" + ) + + result = QtWidgets.QMessageBox.question( + self, + title, + message, + QtWidgets.QMessageBox.Save | QtWidgets.QMessageBox.Cancel + ) + return result == QtWidgets.QMessageBox.Save + + def _save_changes(self, explicit_save): + """Save changes of the Creation part. + + All possible triggers of saving changes were moved to the main window + (here), so it can handle possible issues with saving in one place. + Checks are done so the user doesn't accidentally save changes to a + different file or using a different context. + Moving the responsibility here gives the option to show a dialog and + wait for the user's response without breaking the action they wanted + to do. + + Args: + explicit_save (bool): Method was called because the user explicitly + asked to save. Value affects the shown message. + + Returns: + bool: Save happened successfully. + """ + + if not self._checks_before_save(explicit_save): + return False + return self._controller.save_changes() def reset(self): self._controller.reset() @@ -491,15 +574,18 @@ class PublisherWindow(QtWidgets.QDialog): self._help_dialog.show() window = self.window() - desktop = QtWidgets.QApplication.desktop() - screen_idx = desktop.screenNumber(window) - screen = desktop.screen(screen_idx) - screen_rect = screen.geometry() + if hasattr(QtWidgets.QApplication, "desktop"): + desktop = QtWidgets.QApplication.desktop() + screen_idx = desktop.screenNumber(window) + screen_geo = desktop.screenGeometry(screen_idx) + else: + screen = window.screen() + screen_geo = screen.geometry() window_geo = window.geometry() dialog_x = window_geo.x() + window_geo.width() dialog_right = (dialog_x + self._help_dialog.width()) - 1 - diff = dialog_right - screen_rect.right() + diff = dialog_right - screen_geo.right() if diff > 0: dialog_x -= diff @@ -549,6 +635,14 @@ class PublisherWindow(QtWidgets.QDialog): def _on_create_request(self): self._go_to_create_tab() + def _on_convert_requested(self): + if not self._save_changes(False): + return + convertor_identifiers = ( + self._overview_widget.get_selected_legacy_convertors() + ) + self._controller.trigger_convertor_items(convertor_identifiers) + def _set_current_tab(self, identifier): self._tabs_widget.set_current_tab(identifier) @@ -599,8 +693,10 @@ class PublisherWindow(QtWidgets.QDialog): self._publish_frame.setVisible(visible) self._update_publish_frame_rect() + def _on_save_clicked(self): + self._save_changes(True) + def _on_reset_clicked(self): - self.save_changes() self.reset() def _on_stop_clicked(self): @@ -610,14 +706,17 @@ class PublisherWindow(QtWidgets.QDialog): self._controller.set_comment(self._comment_input.text()) def _on_validate_clicked(self): - self._set_publish_comment() - self._controller.validate() + if self._save_changes(False): + self._set_publish_comment() + self._controller.validate() def 
_on_publish_clicked(self): - self._set_publish_comment() - self._controller.publish() + if self._save_changes(False): + self._set_publish_comment() + self._controller.publish() def _set_footer_enabled(self, enabled): + self._save_btn.setEnabled(True) self._reset_btn.setEnabled(True) if enabled: self._stop_btn.setEnabled(False) diff --git a/openpype/tools/sceneinventory/model.py b/openpype/tools/sceneinventory/model.py index 680dfd5a51..63d2945145 100644 --- a/openpype/tools/sceneinventory/model.py +++ b/openpype/tools/sceneinventory/model.py @@ -327,7 +327,7 @@ class InventoryModel(TreeModel): project_name, repre_id ) if not representation: - not_found["representation"].append(group_items) + not_found["representation"].extend(group_items) not_found_ids.append(repre_id) continue @@ -335,7 +335,7 @@ class InventoryModel(TreeModel): project_name, representation["parent"] ) if not version: - not_found["version"].append(group_items) + not_found["version"].extend(group_items) not_found_ids.append(repre_id) continue @@ -348,13 +348,13 @@ class InventoryModel(TreeModel): subset = get_subset_by_id(project_name, version["parent"]) if not subset: - not_found["subset"].append(group_items) + not_found["subset"].extend(group_items) not_found_ids.append(repre_id) continue asset = get_asset_by_id(project_name, subset["parent"]) if not asset: - not_found["asset"].append(group_items) + not_found["asset"].extend(group_items) not_found_ids.append(repre_id) continue @@ -380,11 +380,11 @@ class InventoryModel(TreeModel): self.add_child(group_node, parent=parent) - for _group_items in group_items: + for item in group_items: item_node = Item() - item_node["Name"] = ", ".join( - [item["objectName"] for item in _group_items] - ) + item_node.update(item) + item_node["Name"] = item.get("objectName", "NO NAME") + item_node["isNotFound"] = True self.add_child(item_node, parent=group_node) for repre_id, group_dict in sorted(grouped.items()): diff --git a/openpype/tools/sceneinventory/view.py b/openpype/tools/sceneinventory/view.py index a04171e429..3279be6094 100644 --- a/openpype/tools/sceneinventory/view.py +++ b/openpype/tools/sceneinventory/view.py @@ -80,9 +80,16 @@ class SceneInventoryView(QtWidgets.QTreeView): self.setStyleSheet("QTreeView {}") def _build_item_menu_for_selection(self, items, menu): + + # Exclude items that are "NOT FOUND" since setting versions, updating + # and removal won't work for those items. 
+ items = [item for item in items if not item.get("isNotFound")] + if not items: return + # An item might not have a representation, for example when an item + # is listed as "NOT FOUND" repre_ids = { item["representation"] for item in items diff --git a/openpype/tools/traypublisher/window.py b/openpype/tools/traypublisher/window.py index 3007fa66a5..3ac1b4c4ad 100644 --- a/openpype/tools/traypublisher/window.py +++ b/openpype/tools/traypublisher/window.py @@ -247,7 +247,7 @@ class TrayPublishWindow(PublisherWindow): def _on_project_select(self, project_name): # TODO register project specific plugin paths - self._controller.save_changes() + self._controller.save_changes(False) self._controller.reset_project_data_cache() self.reset() diff --git a/openpype/tools/utils/delegates.py b/openpype/tools/utils/delegates.py index d76284afb1..fa69113ef1 100644 --- a/openpype/tools/utils/delegates.py +++ b/openpype/tools/utils/delegates.py @@ -30,9 +30,11 @@ class VersionDelegate(QtWidgets.QStyledItemDelegate): def displayText(self, value, locale): if isinstance(value, HeroVersionType): return lib.format_version(value, True) - assert isinstance(value, numbers.Integral), ( - "Version is not integer. \"{}\" {}".format(value, str(type(value))) - ) + if not isinstance(value, numbers.Integral): + # For cases where no version is resolved like NOT FOUND cases + # where a representation might not exist in current database + return + return lib.format_version(value) def paint(self, painter, option, index): diff --git a/openpype/tools/utils/lib.py b/openpype/tools/utils/lib.py index 8d38f03b8d..950c782727 100644 --- a/openpype/tools/utils/lib.py +++ b/openpype/tools/utils/lib.py @@ -862,11 +862,11 @@ class WrappedCallbackItem: return self._result def execute(self): - """Execute callback and store it's result. + """Execute callback and store its result. Method must be called from main thread. Item is marked as `done` when callback execution finished. Store output of callback of exception - information when callback raise one. + information when callback raises one. """ if self.done: self.log.warning("- item is already processed") diff --git a/openpype/version.py b/openpype/version.py index 39a7dc9344..5b6db12b5e 100644 --- a/openpype/version.py +++ b/openpype/version.py @@ -1,3 +1,3 @@ # -*- coding: utf-8 -*- """Package declaring Pype version.""" -__version__ = "3.15.3-nightly.1" +__version__ = "3.15.3-nightly.3" diff --git a/openpype/widgets/splash_screen.py b/openpype/widgets/splash_screen.py index fffe143ea5..7c1ff72ecd 100644 --- a/openpype/widgets/splash_screen.py +++ b/openpype/widgets/splash_screen.py @@ -49,9 +49,7 @@ class SplashScreen(QtWidgets.QDialog): self.init_ui() def was_proc_successful(self) -> bool: - if self.thread_return_code == 0: - return True - return False + return self.thread_return_code == 0 def start_thread(self, q_thread: QtCore.QThread): """Saves the reference to this thread and starts it. @@ -80,8 +78,14 @@ class SplashScreen(QtWidgets.QDialog): """ self.thread_return_code = 0 self.q_thread.quit() + + if not self.q_thread.wait(5000): + raise RuntimeError("Failed to quit the QThread! " + "The deadline has been reached! 
The thread " + "has not finished it's execution!.") self.close() + @QtCore.Slot() def toggle_log(self): if self.is_log_visible: @@ -256,3 +260,4 @@ class SplashScreen(QtWidgets.QDialog): self.close_btn.show() self.thread_return_code = return_code self.q_thread.exit(return_code) + self.q_thread.wait() diff --git a/tests/integration/hosts/nuke/test_deadline_publish_in_nuke.py b/tests/integration/hosts/nuke/test_deadline_publish_in_nuke.py index cd9cbb94f8..a4026f195b 100644 --- a/tests/integration/hosts/nuke/test_deadline_publish_in_nuke.py +++ b/tests/integration/hosts/nuke/test_deadline_publish_in_nuke.py @@ -71,12 +71,30 @@ class TestDeadlinePublishInNuke(NukeDeadlinePublishTestClass): failures.append( DBAssert.count_of_types(dbcon, "representation", 4)) + additional_args = {"context.subset": "workfileTest_task", + "context.ext": "nk"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 1, + additional_args=additional_args)) + additional_args = {"context.subset": "renderTest_taskMain", "context.ext": "exr"} failures.append( DBAssert.count_of_types(dbcon, "representation", 1, additional_args=additional_args)) + additional_args = {"context.subset": "renderTest_taskMain", + "name": "thumbnail"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 1, + additional_args=additional_args)) + + additional_args = {"context.subset": "renderTest_taskMain", + "name": "h264_mov"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 1, + additional_args=additional_args)) + assert not any(failures) diff --git a/tests/integration/hosts/nuke/test_publish_in_nuke.py b/tests/integration/hosts/nuke/test_publish_in_nuke.py index f84f13fa20..bfd84e4fd5 100644 --- a/tests/integration/hosts/nuke/test_publish_in_nuke.py +++ b/tests/integration/hosts/nuke/test_publish_in_nuke.py @@ -15,7 +15,7 @@ class TestPublishInNuke(NukeLocalPublishTestClass): !!! It expects modified path in WriteNode, use '[python {nuke.script_directory()}]' instead of regular root - dir (eg. instead of `c:/projects/test_project/test_asset/test_task`). + dir (eg. instead of `c:/projects`). Access file path by selecting WriteNode group, CTRL+Enter, update file input !!! 
@@ -70,12 +70,30 @@ class TestPublishInNuke(NukeLocalPublishTestClass): failures.append( DBAssert.count_of_types(dbcon, "representation", 4)) + additional_args = {"context.subset": "workfileTest_task", + "context.ext": "nk"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 1, + additional_args=additional_args)) + additional_args = {"context.subset": "renderTest_taskMain", "context.ext": "exr"} failures.append( DBAssert.count_of_types(dbcon, "representation", 1, additional_args=additional_args)) + additional_args = {"context.subset": "renderTest_taskMain", + "name": "thumbnail"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 1, + additional_args=additional_args)) + + additional_args = {"context.subset": "renderTest_taskMain", + "name": "h264_mov"} + failures.append( + DBAssert.count_of_types(dbcon, "representation", 1, + additional_args=additional_args)) + assert not any(failures) diff --git a/tests/unit/openpype/pipeline/publish/test_publish_plugins.py b/tests/unit/openpype/pipeline/publish/test_publish_plugins.py index 5c2d7b367f..88e0095e34 100644 --- a/tests/unit/openpype/pipeline/publish/test_publish_plugins.py +++ b/tests/unit/openpype/pipeline/publish/test_publish_plugins.py @@ -135,7 +135,7 @@ class TestPipelinePublishPlugins(TestPipeline): } # load plugin function for testing - plugin = publish_plugins.ExtractorColormanaged() + plugin = publish_plugins.ColormanagedPyblishPluginMixin() plugin.log = log config_data, file_rules = plugin.get_colorspace_settings(context) @@ -175,14 +175,14 @@ class TestPipelinePublishPlugins(TestPipeline): } # load plugin function for testing - plugin = publish_plugins.ExtractorColormanaged() + plugin = publish_plugins.ColormanagedPyblishPluginMixin() plugin.log = log plugin.set_representation_colorspace( representation_nuke, context, colorspace_settings=(config_data_nuke, file_rules_nuke) ) # load plugin function for testing - plugin = publish_plugins.ExtractorColormanaged() + plugin = publish_plugins.ColormanagedPyblishPluginMixin() plugin.log = log plugin.set_representation_colorspace( representation_hiero, context, diff --git a/website/docs/admin_hosts_maya.md b/website/docs/admin_hosts_maya.md index ae0cf76f53..23cacb4193 100644 --- a/website/docs/admin_hosts_maya.md +++ b/website/docs/admin_hosts_maya.md @@ -6,13 +6,13 @@ sidebar_label: Maya ## Publish Plugins -### Render Settings Validator +### Render Settings Validator `ValidateRenderSettings` Render Settings Validator is here to make sure artists will submit renders -we correct settings. Some of these settings are needed by OpenPype but some -can be defined by TD using [OpenPype Settings UI](admin_settings.md). +with the correct settings. Some of these settings are needed by OpenPype but some +can be defined by the admin using [OpenPype Settings UI](admin_settings.md). OpenPype enforced settings include: @@ -36,10 +36,9 @@ For **Renderman**: For **Arnold**: - there shouldn't be `` token when merge AOVs option is turned on - Additional check can be added via Settings - **Project Settings > Maya > Publish plugin > ValidateRenderSettings**. You can add as many options as you want for every supported renderer. In first field put node type and attribute -and in the second required value. +and in the second required value. You can create multiple values for an attribute, but when repairing it'll be the first value in the list that gets selected.
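When filling in these fields it helps to query the attribute directly in Maya to see the exact value the validator should expect. A hypothetical helper (the node type and attribute in the example call are illustrative):

```python
# Hypothetical helper: print the current value of a renderer attribute so
# you know what to enter as the required value in ValidateRenderSettings.
from maya import cmds


def print_render_attr(node_type, attribute):
    for node in cmds.ls(type=node_type):
        value = cmds.getAttr("{}.{}".format(node, attribute))
        print(node, attribute, value)


# e.g. for Redshift: print_render_attr("RedshiftOptions", "primaryGIEngine")
```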
![Settings example](assets/maya-admin_render_settings_validator.png) @@ -51,7 +50,11 @@ just one instance of this node type but if that is not so, validator will go thr instances and check the value there. Node type for **VRay** settings is `VRaySettingsNode`, for **Renderman** it is `rmanGlobals`, for **Redshift** it is `RedshiftOptions`. -### Model Name Validator +:::info getting attribute values +If you do not know what an attribute's value is supposed to be, for example for a dropdown menu (enum), try changing the attribute and look in the script editor, where it should log what the attribute was set to. +::: + +### Model Name Validator `ValidateModelName` @@ -95,7 +98,7 @@ You can set various aspects of scene submission to farm with per-project setting - **Optional** will mark sumission plugin optional - **Active** will enable/disable plugin - - **Tile Assembler Plugin** will set what should be used to assemble tiles on Deadline. Either **Open Image IO** will be used + - **Tile Assembler Plugin** will set what should be used to assemble tiles on Deadline. Either **Open Image IO** will be used or Deadlines **Draft Tile Assembler**. - **Use Published scene** enable to render from published scene instead of scene in work area. Rendering from published files is much safer. - **Use Asset dependencies** will mark job pending on farm until asset dependencies are fulfilled - for example Deadline will wait for scene file to be synced to cloud, etc. @@ -169,5 +172,3 @@ Fill in the necessary fields (the optional fields are regex filters) - Build your workfile ![maya build template](assets/maya-build_workfile_from_template.png) - - diff --git a/website/docs/artist_hosts_maya.md b/website/docs/artist_hosts_maya.md index 41045935d5..0a551f0213 100644 --- a/website/docs/artist_hosts_maya.md +++ b/website/docs/artist_hosts_maya.md @@ -516,6 +516,22 @@ In the scene from where you want to publish your model create *Render subset*. P model subset (Maya set node) under corresponding `LAYER_` set under *Render instance*. During publish, it will submit this render to farm and after it is rendered, it will be attached to your model subset. +### Tile Rendering +:::note Deadline +This feature is only supported when using Deadline. See [here](module_deadline#openpypetileassembler-plugin) for setup. +::: +On the render instance objectset you'll find: + +* `Tile Rendering` - for enabling tile rendering. +* `Tile X` - number of tiles in the X axis. +* `Tile Y` - number of tiles in the Y axis. + +When submitting to Deadline, you'll get: + +- for each frame a tile rendering job, to render each tile from Maya. +- for each frame a tile assembly job, to assemble the rendered tiles. +- a job to publish the assembled frames. + ## Render Setups ### Publishing Render Setups diff --git a/website/docs/artist_hosts_maya_yeti.md b/website/docs/artist_hosts_maya_yeti.md index f5a6a4d2c9..aa783cc8b6 100644 --- a/website/docs/artist_hosts_maya_yeti.md +++ b/website/docs/artist_hosts_maya_yeti.md @@ -9,7 +9,9 @@ sidebar_label: Yeti OpenPype can work with [Yeti](https://peregrinelabs.com/yeti/) in two data modes. It can handle Yeti caches and Yeti rigs. -### Creating and publishing Yeti caches +## Yeti Caches + +### Creating and publishing Let start by creating simple Yeti setup, just one object and Yeti node. Open new empty scene in Maya and create sphere. Then select sphere and go **Yeti → Create Yeti Node on Mesh** @@ -44,7 +46,15 @@ You can now publish Yeti cache as any other types. **OpenPype → Publish**.
It create sequence of `.fur` files and `.fursettings` metadata file with Yeti node setting. -### Loading Yeti caches +:::note Collect Yeti Cache failure +If you encounter **Collect Yeti Cache** failure during collecting phase, and the error is like +```fix +No object matches name: pgYetiMaya1Shape.cbId +``` +then it is probably caused by the scene not being saved before publishing. +::: + +### Loading You can load Yeti cache by **OpenPype → Load ...**. Select your cache, right+click on it and select **Load Yeti cache**. This will create Yeti node in scene and set its @@ -52,26 +62,39 @@ cache path to point to your published cache files. Note that this Yeti node will be named with same name as the one you've used to publish cache. Also notice that when you open graph on this Yeti node, all nodes are as they were in publishing node. -### Creating and publishing Yeti Rig +## Yeti Rigs -Yeti Rigs are working in similar way as caches, but are more complex and they deal with -other data used by Yeti, like geometry and textures. +### Creating and publishing -Let's start by [loading](artist_hosts_maya.md#loading-model) into new scene some model. -I've loaded my Buddha model. +Yeti Rigs are designed to connect to published models or animation rigs. The workflow gives the Yeti Rig full control over that geometry to do additional things on top of whatever input comes in, e.g. deleting faces, pushing faces in/out, subdividing, etc. -Create select model mesh, create Yeti node - **Yeti → Create Yeti Node on Mesh** and -setup similar Yeti graph as in cache example above. +Let's start with a [model](artist_hosts_maya.md#loading-model) or [rig](artist_hosts_maya.md#loading-rigs) loaded into the scene. Here we are using a simple rig. -Then select this Yeti node (mine is called with default name `pgYetiMaya1`) and -create *Yeti Rig instance* - **OpenPype → Create...** and select **Yeti Cache**. +![Maya - Yeti Simple Rig](assets/maya-yeti_simple_rig.png) + +We'll need to prepare the scene a bit. We want some Yeti hair on the ball geometry, so we duplicate the geometry, add the Yeti hair and group everything together. + +![Maya - Yeti Hair Setup](assets/maya-yeti_hair_setup.png) + +:::note yeti nodes and types +You can use any number of Yeti nodes and types, but they have to have unique names. +::: + +Now we need to connect the Yeti Rig with the animation rig. Yeti Rigs work by publishing the attribute connections from their input nodes and reconnecting them later in the pipeline. This means we can only use attribute connections from outside of the Yeti Rig hierarchy. Internal to the Yeti Rig hierarchy, we can use any complexity of node connections. We'll connect the Yeti Rig geometry to the animation rig, with the transform and mesh attributes. + +![Maya - Yeti Rig Setup](assets/maya-yeti_rig_setup.png) + +Now we are ready for publishing. Select the Yeti Rig group (`rig_GRP`) and +create *Yeti Rig instance* - **OpenPype → Create...** and select **Yeti Rig**. Leave `Use selection` checked. -Last step is to add our model geometry to rig instance, so middle+drag its -geometry to `input_SET` under `yetiRigDefault` set representing rig instance. +Last step is to add our geometry to the rig instance, so middle+drag its +geometry to `input_SET` under the `yetiRigMain` set representing rig instance. Note that its name can differ and is based on your subset name.
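The attribute connections described above can be made in the Node Editor or scripted. A hypothetical example of wiring the animation rig mesh into the Yeti Rig geometry (node names are illustrative, and which attributes you wire depends on your rig):

```python
# Hypothetical sketch of the attribute connections described above;
# node names are illustrative.
from maya import cmds

# Drive the Yeti Rig mesh with the animated mesh.
cmds.connectAttr("anim_ball_GEO.outMesh", "yeti_ball_GEO.inMesh", force=True)

# Transform attributes can be wired the same way, attribute by attribute.
cmds.connectAttr("anim_ball_GEO.translate", "yeti_ball_GEO.translate", force=True)
```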
-![Maya - Yeti Rig Setup](assets/maya-yeti_rig.jpg) +![Maya - Yeti Publish Setup](assets/maya-yeti_publish_setup.png) + +You can have any number of nodes in the Yeti Rig, but only nodes with incoming attribute connections from outside of the Yeti Rig hierarchy are needed in the `input_SET`. Save your scene and ready for publishing our new simple Yeti Rig! @@ -81,28 +104,14 @@ the beginning of your timeline. It will also collect all textures used in Yeti node, copy them to publish folder `resource` directory and set *Image search path* of published node to this location. -:::note Collect Yeti Cache failure -If you encounter **Collect Yeti Cache** failure during collecting phase, and the error is like -```fix -No object matches name: pgYetiMaya1Shape.cbId -``` -then it is probably caused by scene not being saved before publishing. -::: +### Loading -### Loading Yeti Rig - -You can load published Yeti Rigs as any other thing in OpenPype - **OpenPype → Load ...**, +You can load published Yeti Rigs in OpenPype with **OpenPype → Load ...**, select you Yeti rig and right+click on it. In context menu you should see -**Load Yeti Cache** and **Load Yeti Rig** items (among others). First one will -load that one frame cache. The other one will load whole rig. +**Load Yeti Rig** item (among others). -Notice that although we put only geometry into `input_SET`, whole hierarchy was -pulled inside also. This allows you to store complex scene element along Yeti -node. +To connect the Yeti Rig with published animation, we'll load in the animation and use the Inventory to establish the connections. -:::tip auto-connecting rig mesh to existing one -If you select some objects before loading rig it will try to find shapes -under selected hierarchies and match them with shapes loaded with rig (published -under `input_SET`). This mechanism uses *cbId* attribute on those shapes. -If match is found shapes are connected using their `outMesh` and `outMesh`. Thus you can easily connect existing animation to loaded rig. -::: +![Maya - Yeti Load Connections](assets/maya-yeti_load_connections.png) + +The Yeti Rig should now be following the animation.
:tada: diff --git a/website/docs/assets/integrate_kitsu_note_settings.png b/website/docs/assets/integrate_kitsu_note_settings.png new file mode 100644 index 0000000000..127e79ab80 Binary files /dev/null and b/website/docs/assets/integrate_kitsu_note_settings.png differ diff --git a/website/docs/assets/maya-admin_render_settings_validator.png b/website/docs/assets/maya-admin_render_settings_validator.png index 8687b538b1..8ee11f16f6 100644 Binary files a/website/docs/assets/maya-admin_render_settings_validator.png and b/website/docs/assets/maya-admin_render_settings_validator.png differ diff --git a/website/docs/assets/maya-yeti_hair_setup.png b/website/docs/assets/maya-yeti_hair_setup.png new file mode 100644 index 0000000000..8cd7f5f97a Binary files /dev/null and b/website/docs/assets/maya-yeti_hair_setup.png differ diff --git a/website/docs/assets/maya-yeti_load_connections.png b/website/docs/assets/maya-yeti_load_connections.png new file mode 100644 index 0000000000..4e96fdc0b3 Binary files /dev/null and b/website/docs/assets/maya-yeti_load_connections.png differ diff --git a/website/docs/assets/maya-yeti_publish_setup.png b/website/docs/assets/maya-yeti_publish_setup.png new file mode 100644 index 0000000000..f376f47cc1 Binary files /dev/null and b/website/docs/assets/maya-yeti_publish_setup.png differ diff --git a/website/docs/assets/maya-yeti_rig.jpg b/website/docs/assets/maya-yeti_rig.jpg deleted file mode 100644 index 07b13db409..0000000000 Binary files a/website/docs/assets/maya-yeti_rig.jpg and /dev/null differ diff --git a/website/docs/assets/maya-yeti_rig_setup.png b/website/docs/assets/maya-yeti_rig_setup.png new file mode 100644 index 0000000000..876ecba0a2 Binary files /dev/null and b/website/docs/assets/maya-yeti_rig_setup.png differ diff --git a/website/docs/assets/maya-yeti_simple_rig.png b/website/docs/assets/maya-yeti_simple_rig.png new file mode 100644 index 0000000000..31a60ac40b Binary files /dev/null and b/website/docs/assets/maya-yeti_simple_rig.png differ diff --git a/website/docs/dev_colorspace.md b/website/docs/dev_colorspace.md index b46d80e1a2..c4b8e74d73 100644 --- a/website/docs/dev_colorspace.md +++ b/website/docs/dev_colorspace.md @@ -63,7 +63,7 @@ It's up to the Loaders to read these values and apply the correct expected color - set the `OCIO` environment variable before launching the host via a prelaunch hook - or (if the host allows) to set the workfile OCIO config path using the host's API -3. Each Extractor exporting pixel data (e.g. image or video) has to use parent class `openpype.pipeline.publish.publish_plugins.ExtractorColormanaged` and use `self.set_representation_colorspace` on the representations to be integrated. +3. Each Extractor exporting pixel data (e.g. image or video) has to inherit from the mixin class `openpype.pipeline.publish.publish_plugins.ColormanagedPyblishPluginMixin` and use `self.set_representation_colorspace` on the representations to be integrated. The **set_representation_colorspace** method adds `colorspaceData` to the representation. If the `colorspace` passed is not `None` then it is added directly to the representation with resolved config path otherwise a color space is assumed using the configured file rules. If no file rule matches the `colorspaceData` is **not** added to the representation. diff --git a/website/docs/module_deadline.md b/website/docs/module_deadline.md index ab1016788d..94b6a381c2 100644 --- a/website/docs/module_deadline.md +++ b/website/docs/module_deadline.md @@ -45,6 +45,10 @@ executable. 
It is recommended to use the `openpype_console` executable as it pro ![Configure plugin](assets/deadline_configure_plugin.png) +### OpenPypeTileAssembler Plugin +To set up tile rendering, copy the `OpenPypeTileAssembler` plugin to the repository: +`[OpenPype]\openpype\modules\deadline\repository\custom\plugins\OpenPypeTileAssembler` > `[DeadlineRepository]\custom\plugins\OpenPypeTileAssembler` + ### Pools The main pools can be configured at `project_settings/deadline/publish/CollectDeadlinePools/primary_pool`, which is applied to the rendering jobs. diff --git a/website/docs/module_kitsu.md index 7be2a42c45..23898fba2e 100644 --- a/website/docs/module_kitsu.md +++ b/website/docs/module_kitsu.md @@ -38,6 +38,18 @@ This functionality cannot deal with all cases and is not error proof, some inter openpype_console module kitsu push-to-zou -l me@domain.ext -p my_password ``` +## Integrate Kitsu Note +Task status can be automatically set during publish thanks to `Integrate Kitsu Note`. This feature can be configured in: + +`Admin -> Studio Settings -> Project Settings -> Kitsu -> Integrate Kitsu Note`. + +There are three settings available: +- `Set status on note` -> turns this integrator on and off. +- `Note shortname` -> Which status shortname should be set automatically (case sensitive). +- `Status conditions` -> Conditions that need to be met for the Kitsu status to be changed. You can add as many conditions as you like. There are two fields for each condition: `Condition` (whether the current status should be equal or not equal to the condition status) and `Short name` (Kitsu shortname of the condition status). + +![Integrate Kitsu Note project settings](assets/integrate_kitsu_note_settings.png) + ## Q&A ### Is it safe to rename an entity from Kitsu? Absolutely! Entities are linked by their unique IDs between the two databases. diff --git a/website/docs/project_settings/settings_project_global.md index 6c1a269b1f..b6d624caa8 100644 --- a/website/docs/project_settings/settings_project_global.md +++ b/website/docs/project_settings/settings_project_global.md @@ -82,8 +82,8 @@ All context filters are lists which may contain strings or Regular expressions ( - **`tasks`** - Currently processed task. `["modeling", "animation"]` :::important Filtering -Filters are optional. In case when multiple profiles match current context, profile with higher number of matched filters has higher priority that profile without filters. -(Eg. order of when filter is added doesn't matter, only the precision of matching does.) +Filters are optional. In case when multiple profiles match current context, profile with higher number of matched filters has higher priority than profile without filters. +(The order of the profiles in settings doesn't matter, only the precision of matching does.) ::: ## Publish plugins @@ -94,7 +94,7 @@ Publish plugins used across all integrations. ### Extract Review Plugin responsible for automatic FFmpeg conversion to variety of formats. -Extract review is using [profile filtering](#profile-filters) to be able render different outputs for different situations. +Extract review uses [profile filtering](#profile-filters) to render different outputs for different situations. Applicable context filters: **`hosts`** - Host from which publishing was triggered. `["maya", "nuke"]` @@ -104,7 +104,7 @@ Applicable context filters: **Output Definitions** -Profile may generate multiple outputs from a single input. 
Each output must define unique name and output extension (use the extension without a dot e.g. **mp4**). All other settings of output definition are optional. +A profile may generate multiple outputs from a single input. Each output must define a unique name and output extension (use the extension without a dot, e.g. **mp4**). All other settings of the output definition are optional. ![global_extract_review_output_defs](assets/global_extract_review_output_defs.png) - **`Tags`** @@ -118,7 +118,7 @@ Profile may generate multiple outputs from a single input. Each output must defi - **Output arguments** other FFmpeg output arguments like codec definition. - **`Output width`** and **`Output height`** - - it is possible to rescale output to specified resolution and keep aspect ratio. + - It is possible to rescale output to specified resolution and keep aspect ratio. - If value is set to 0, source resolution will be used. - **`Overscan crop`** diff --git a/website/docs/pype2/admin_presets_plugins.md index fcdc09439b..e589f7d14b 100644 --- a/website/docs/pype2/admin_presets_plugins.md +++ b/website/docs/pype2/admin_presets_plugins.md @@ -36,7 +36,7 @@ All context filters are lists which may contain strings or Regular expressions ( - **families** - Main family of processed instance. `["plate", "model"]` :::important Filtering -Filters are optional and may not be set. In case when multiple profiles match current context, profile with filters has higher priority that profile without filters. +Filters are optional and may not be set. In cases where multiple profiles match the current context, a profile with filters has higher priority than a profile without filters. ::: #### Profile outputs
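Both filtering notes above describe the same rule: a profile must match every filter it defines, and among matching profiles the one with the most matched filters wins, regardless of order. An illustrative sketch of that rule (not OpenPype's actual implementation, and ignoring the regular-expression variant for brevity):

```python
# Illustrative sketch of the profile filtering rule described above.
def profile_score(profile, context):
    score = 0
    for key, allowed_values in profile.get("filters", {}).items():
        if not allowed_values:
            continue  # an empty filter matches anything and adds no priority
        if context.get(key) not in allowed_values:
            return -1  # one failed filter disqualifies the whole profile
        score += 1
    return score


def select_profile(profiles, context):
    best, best_score = None, -1
    for profile in profiles:
        score = profile_score(profile, context)
        if score > best_score:
            best, best_score = profile, score
    return best  # None when no profile matches


# Example with keys mirroring the docs ("hosts", "task_types", "tasks"):
profiles = [
    {"filters": {}},                           # generic fallback profile
    {"filters": {"hosts": ["maya", "nuke"]}},  # more specific, wins in maya
]
print(select_profile(profiles, {"hosts": "maya"}))
```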